Can AI make doctors perform better?

One of the most touted promises of medical artificial intelligence (AI) is that it can help human clinicians interpret images such as X-rays and CT scans more accurately, producing better diagnostic reports and enhancing radiologists' performance.

But is this really the case?

A collaborative study by Harvard Medical School, MIT, and Stanford University in the United States shows that the effectiveness of using AI tools for image interpretation appears to vary among clinicians.

In other words, at this stage humans still have the final say on whether AI proves useful or useless. The findings suggest that individual differences among clinicians affect human-machine interaction in key ways that AI experts do not yet fully understand. The analysis was recently published in the journal Nature Medicine.

Considering doctors’ personal factors

Research shows that in some cases, the use of AI may interfere with radiologists’ performance and affect the accuracy of their interpretations.

While previous studies have shown that AI assistants can indeed improve doctors’ diagnostic performance, those studies treated doctors as a whole and did not account for differences between individual doctors. Clinically, however, each patient depends entirely on the judgment of the individual doctor treating them.

In contrast, the new study looked at clinicians’ personal factors (area of expertise, years in practice, and prior experience with AI tools) and analyzed how these factors play into human-machine collaboration.

The researchers analyzed how AI affected the performance of 140 radiologists across 15 X-ray diagnostic tasks that required doctors to reliably spot distinct features on images and make accurate diagnoses. The analysis covered 324 patient cases spanning 15 conditions.

To determine how AI affects doctors’ ability to detect and correctly identify problems, the researchers used advanced computational methods to capture changes in performance with and without AI.
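The spirit of this per-clinician comparison can be sketched as follows. The data, IDs, and the simple accuracy-delta measure below are invented for illustration only; the actual study used more sophisticated statistical methods:

```python
# Hypothetical sketch: measuring each radiologist's performance change
# with vs. without AI assistance. All numbers and IDs are made up.

from statistics import mean

# Diagnostic accuracy per anonymized radiologist:
# (accuracy without AI, accuracy with AI)
readings = {
    "R001": (0.78, 0.85),  # improves with AI
    "R002": (0.82, 0.76),  # worsens with AI
    "R003": (0.90, 0.91),  # roughly unchanged
}

# Per-radiologist change: positive means AI helped, negative means AI hurt.
deltas = {rid: round(with_ai - without_ai, 2)
          for rid, (without_ai, with_ai) in readings.items()}

# Looking only at the average would hide that R002 got worse.
avg_delta = mean(deltas.values())
```

The point of the sketch is that an average effect (`avg_delta`) can look positive even when individual clinicians are harmed, which is why the study examined radiologist-level variation rather than group averages.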

The results showed that the effects of AI assistance were inconsistent, varying among radiologists: some radiologists’ performance improved with AI, while others’ worsened.

Pranav Rajpurkar, assistant professor of biomedical informatics at the Blavatnik Institute at Harvard Medical School, summed up the team’s findings: “We should not think of doctors as a uniform group and consider only AI’s ‘average’ impact on their performance.”

Still, this finding doesn’t mean doctors and clinics should be discouraged from adopting AI. Instead, the results point to the need to better understand how humans and AI interact, and to design carefully calibrated methods that improve, rather than hurt, human performance.

AI “assistant” is still difficult to predict

Given that radiology is considered the clinical field that stands to benefit most from AI, the results of this study carry particular weight.

What is noteworthy about this finding is that, in radiology, AI is affecting human doctors’ performance in surprising ways.

For example, contrary to the researchers’ expectations, factors such as how many years of experience a radiologist had, whether they specialized in thoracic radiology, and whether they had previously used AI tools did not reliably predict how AI assistance would affect their performance.

Another finding challenges conventional wisdom: clinicians who perform poorly at baseline do not consistently benefit from AI. Overall, radiologists with lower baseline performance remained lower performers with or without AI. The same was true for radiologists who performed well at baseline: their performance stayed consistently good with or without AI.

But what is certain is that more accurate AI improves radiologists’ performance, while mediocre AI reduces the diagnostic accuracy of human clinicians.

The implication is that before clinical deployment, the performance of AI tools must be tested and validated, so that inferior AI does not interfere with human clinicians’ judgment and thereby delay patients’ diagnoses.

Impact on the future of clinical medicine

Clinicians vary in expertise, experience, and decision-making style, so ensuring that AI accounts for this diversity is critical to delivering targeted care. Individual differences should be treated as key to advancing AI, not as interference that ultimately compromises diagnosis.

Notably, the study does not explain why AI affects different clinicians’ performance differently. But as AI’s influence on clinical medicine deepens, understanding why becomes critical, and AI experts are still working on the question.

The research team added that, as a next step, the interaction between radiologists and AI should be tested in experimental settings that simulate real-world scenarios, with results that reflect actual patient populations. Beyond improving the accuracy of AI tools, it is also important to train radiologists to promptly detect inaccurate AI output and to review and question the diagnoses AI tools produce.

In other words, before AI can help you, you need to improve yourself first.
