AI in Health

AI is everywhere in healthcare now – but how do patients feel about it?

Patients around the world have been asked their attitude towards the use of medical AI

Patients worldwide are cautiously optimistic about the use of AI in healthcare. Most support it as a helpful assistant, but few trust it to replace doctors, according to a new study that explores patients' trust, their concerns, and the demand for explainable AI.

There has been plenty of research into the growing use of AI in medicine and how medical professionals feel about it. But there are far fewer studies into how patients, who are arguably the most significant stakeholders, feel about the use of medical AI.

In a new study led by the Technical University of Munich (TUM) in Germany, researchers surveyed patients to understand their views on the use of AI in healthcare.

The researchers recruited 13,806 adult patients from 43 countries; 64.8% were from the Global North, 35.2% from the Global South. Their median age was 48, and 50.5% were male. Using an anonymous questionnaire, participants were asked about their comfort levels, perceived benefits, risks, and trust in the use of AI tools across different contexts, including diagnosis, treatment recommendations, and administrative support. To cover patients with a wide range of conditions, the survey was conducted in radiology departments that carried out X-rays, CT scans, and MRIs.

Overall, most patients (57.6%) viewed the general use of medical AI positively, although men were slightly more positive than women: 59.1% vs. 55.6%. Perhaps expectedly, participants who were more familiar with technology and rated themselves as having a higher understanding of AI were more likely to approve of its use in healthcare. Of those who regarded themselves as AI experts, 83.3% had rather positive or extremely positive views, compared with 38.0% who self-reported little AI knowledge.

Interestingly, the researchers found that the more severe a patient’s illness was, the more negative their attitude toward the use of medical AI was. Among patients in very poor health, 26.6% held extremely negative views of AI, and 29.2% had rather negative views of it. By comparison, for patients in very good health only 1.3% held extremely negative views and 5.3% held rather negative views.

“The exact reasons for negative attitudes among seriously ill patients cannot be determined from our study,” said the study’s lead author, Felix Busch, MD, an assistant physician at TUM’s Institute of Diagnostic and Interventional Radiology. “We suspect that experience with the healthcare system, illness burden, and psychological factors play a role.”

The research makes it clear that patients want AI to assist clinicians, not replace them

When it came to trusting AI, the results were pretty much 50-50. Overall, 48.5% of patients were confident that AI would improve healthcare, 43.9% trusted it to provide reliable health information, 43.6% trusted AI to provide accurate information about their diagnosis, and 41.8% trusted AI to provide accurate information about their response to treatment.

How positively patients viewed the use of AI also depended on where and how it was used. For instance, 59.3% supported using AI to analyze radiographic images such as X-rays, 54.6% were happy for it to be used for cancer diagnosis, and 67.9% supported the use of AI to provide a physician with a second opinion.

Only a small number of patients (4.4%) supported the idea of a diagnosis made only by AI. However, at the same time, only 6.6% were okay with a diagnosis made entirely without AI (that is, physician only). Notably, 70.2% wanted AI that was “explainable,” meaning that users could see the steps the tech took to reach its conclusions, even if this meant a trade-off in accuracy. And 72.9% wanted the technology to function as an assistant, with clinicians making the final decision.

The biggest concern patients had about medical AI was that it could change the way healthcare is delivered: 61.8% feared that AI could reduce doctor-patient interaction, and 61.7% were concerned it could replace human doctors.

The study had some limitations. Principally, its reliance on self-reported attitudes means responses may not reflect how patients actually behave in real healthcare settings. Moreover, attitudes toward AI may change quickly as the technology becomes more common and trusted, so the results may not remain stable over time.

Nonetheless, the study raises some important considerations. It is clear from these findings that patients believe AI tools used in healthcare should augment clinicians, not replace them. Patients appear to be more comfortable when AI is framed as a support system, and they consider transparency in AI important. Of course, responses differed based on personal familiarity with AI, which is often tied to age and background, so communication and education around AI adoption may need to be tailored to different patient groups.

Future studies should clarify how healthcare settings influence patients’ attitudes toward AI, including a comparison of hospitalized patients and outpatients.

“Follow-up surveys are needed to test this and to align the development of medical AI with patients’ needs,” said study co-author Lisa Adams, MD.

The study was published in the journal JAMA Network Open.

Source: Technical University of Munich

2 comments
YourAmazonOrder
Who gets sued for malpractice: the doctor, the hospital, or Nvidia?
gimd
Here in Canada you can't sue doctors or hospitals, as I found out with my daughter's birth that went south, so Nvidia is on the hook until legislation passes to fix that. At this point I think I would trust the two about the same, maybe leaning to the bot depending on the credentials of the doctor. What do you call a med student who passes with a 60% vs a student who passes in the 90s? Both are called doctor, but which do you want helping you? At least with the bot there should be consistency.