The computer will see you now! One in five GPs use AI to make diagnoses and take notes despite the risk of errors

Family doctors are putting patient safety at risk by relying on AI to help with diagnoses, a study warns.

One in five GPs admitted to using tools such as ChatGPT and Bing AI in clinical practice, despite there being no official guidance on how to work with them.

Experts warned that problems such as “algorithm bias” could lead to misdiagnoses and that patient data could also be put at risk. They said doctors should be aware of these dangers and called for legislation to regulate the use of such tools in healthcare settings.

The researchers sent the survey to 1,000 GPs registered with the General Medical Council, using the largest professional network of doctors in the UK.

Physicians were asked if they had ever used any of the following in any aspect of their clinical practice: ChatGPT; Bing AI; Google’s Bard; or “Other.” More than half of respondents (54 percent) were 46 years or older.

Study reveals one in five GPs use artificial intelligence to help with diagnoses (file photo)

One in five (20 percent) reported using generative AI tools in their clinical practice.

Of these, nearly one in three (29 percent) reported using these tools to generate documentation after patient appointments.

A similar number (28 percent) said they used them to suggest a differential diagnosis, according to the findings published in the BMJ.

One in four (25 percent) said they used the tools to suggest treatment options, such as possible medications or referrals.

The researchers, who included scientists from Uppsala University in Sweden and the University of Zurich, said that while AI can be useful in helping with documentation, it is “prone to creating misinformation”.

They write: ‘We caution that these tools have limitations as they may contain subtle errors and biases.

‘They can also cause harm and undermine patient privacy, as it is unclear how the internet companies behind generative AI use the information they collect.’

While chatbots are increasingly the target of regulatory efforts, it “remains unclear” how legislation will apply in practice to the use of these tools in clinical settings, they added.

Researchers say that while useful for helping with documentation, AI is “prone to creating misinformation” (file photo)

Doctors and medical trainees need to be fully informed about the pros and cons of AI, especially given the “inherent risks” it poses, they conclude.

Professor Kamila Hawthorne, chair of the Royal College of GPs, agreed that the use of AI was “not without potential risks” and called for its implementation in general practice to be closely regulated to guarantee the safety of patients and the security of their data.

She said: ‘Technology will always need to work alongside and complement the work of doctors and other healthcare professionals, and can never be seen as a replacement for the expertise of a qualified medical professional.

‘There is clearly potential for the use of generative AI in general practice, but it is vital that it is implemented carefully and closely regulated in the interests of patient safety.’
