
Don’t trust Dr. AI: New search engine offers medical advice that could lead to death in one in five cases


Frantically searching for our symptoms online and self-diagnosing them is something many of us are guilty of.

But Dr AI could be giving out ‘potentially harmful’ advice on drugs, a worrying study suggests.

German researchers found that more than a fifth of AI-powered chatbots’ answers to common questions about prescription drugs could “lead to death or serious harm.”

Experts urge patients not to rely on these search engines for accurate and safe information.

Doctors have also been warned not to recommend these tools until more “accurate and reliable” alternatives are available.


In the study, scientists at the University of Erlangen-Nuremberg identified the 10 most frequently asked questions from patients about the 50 most prescribed medications in the US.

These included adverse drug reactions, instructions for use, and contraindications (reasons why the drug should not be taken).

The researchers put each question to Bing Copilot, a search engine with AI-powered chatbot functions developed by Microsoft, and evaluated all 500 responses, comparing them with answers given by clinical pharmacists and physicians with expertise in pharmacology.

Responses were also compared to a peer-reviewed drug information website.

They found that the chatbots’ statements did not match the reference data in more than a quarter (26 percent) of all cases and were completely inconsistent in just over 3 percent.

But a more detailed analysis of 20 responses also revealed that just over four in ten (42 percent) were considered likely to lead to moderate or mild harm, and 22 percent to death or severe harm.

The scientists, who also evaluated the readability of all the chatbot responses, found that the responses often required a college education to understand.

Writing in the journal BMJ Quality &amp; Safety, the researchers said: ‘Chatbot responses were largely difficult to read and responses repeatedly lacked information or displayed inaccuracies, possibly threatening patient and medication safety.

‘Despite their potential, it is still essential for patients to consult their healthcare professionals, as chatbots may not always generate error-free information.

‘Caution is advised when recommending AI-based search engines until citation engines with higher accuracy rates are available.’

A Microsoft spokesperson said: ‘Copilot answers complex questions by bringing together information from multiple sources into a single answer.

‘Copilot provides citations linked to these answers so the user can explore and investigate further as they would with traditional search.

‘For questions relating to medical advice, we always recommend consulting a healthcare professional.’


The scientists also acknowledged that the study had “several limitations,” including the fact that it was not based on real patient experiences.

In practice, patients could ask the chatbot for more information or ask it to present its answers in a clearer structure, for example, they said.

It comes as doctors were warned last month that they could be putting patient safety at risk by relying on AI to help with diagnosis.

In that study, researchers sent a survey to 1,000 GPs registered with the General Medical Council, using the largest professional network of UK doctors.

One in five admitted to using programs like ChatGPT and Bing AI during clinical practice, despite there being no official guidance on how to work with them.

Experts warned that issues such as “algorithm biases” could lead to misdiagnoses and that patient data could also be at risk of being compromised.

They said doctors should be made aware of the risks and called for legislation to cover the use of AI in healthcare settings.
