
Artificial intelligence was supposed to transform health care. It hasn’t.

“Companies promise the world and often deliver nothing,” said Bob Wachter, chief of medicine at the University of California, San Francisco. “When I look for examples of… real AI and machine learning that really make a difference, there are very few. It’s quite disappointing.”

Administrators say that algorithms — the software that processes data — from third-party companies don’t always work as advertised, because each health system has its own technological framework. So hospitals are building engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But progress has been slow. Research based on job postings shows healthcare trailing every industry except construction in adopting AI.

The Food and Drug Administration has taken steps toward developing a framework for evaluating AI, but the effort is still in its infancy. Questions remain about how regulators can track algorithms as they evolve and curb the technology’s harms, such as bias that threatens to exacerbate health inequities.

“Sometimes AI is assumed to work, and it’s just a matter of adoption, which isn’t necessarily true,” said Florenta Teodoridis, a professor at the University of Southern California business school whose research focuses on AI. She added that not being able to understand why an algorithm arrived at a particular result is fine for things like forecasting the weather. But in healthcare, its impact is potentially life-changing.

The bullish case for AI

Despite the hurdles, the tech industry is still excited about AI’s potential to transform healthcare.

“The transition is a little slower than I’d hoped, but well on track for AI to be better than most radiologists at interpreting many different types of medical images by 2026,” Geoffrey Hinton, the computer scientist whose work helped lay the foundations of modern machine learning, told POLITICO via email. He said he never suggested doing away with radiologists, only letting AI read scans for them.

If he’s right, artificial intelligence will start taking over more of the ordinary tasks in medicine, giving doctors more time to spend with patients to make the right diagnosis or develop a comprehensive treatment plan.

“I see us as a medical community moving toward a better understanding of what AI can and cannot do,” said Lara Jehi, chief research information officer at the Cleveland Clinic. “It is not going to replace radiologists, and it shouldn’t.”

Radiology is one of the most promising use cases for AI. The Mayo Clinic is running a clinical trial of an algorithm that aims to shorten the hours-long process oncologists and physicists undertake to devise a surgical plan for removing complicated head and neck tumors.

An algorithm can get the job done in an hour, said John D. Halamka, president of Mayo Clinic Platform: “We took 80 percent of the human effort out of it.” The technology gives doctors a blueprint they can revise and adapt without having to do the basic physics themselves, he said.

NYU Langone Health has also experimented with using AI in radiology. The health system has collaborated with Facebook’s Artificial Intelligence Research group to reduce the time it takes to get an MRI from an hour to 15 minutes. Daniel Sodickson, a radiological imaging expert at NYU Langone who collaborated on the study, sees opportunities in AI’s ability to reduce the amount of data doctors must review.

Covid accelerated AI development. During the pandemic, health care providers and researchers shared data about the disease, including anonymized patient data, to crowdsource treatments.

Microsoft and Adaptive Biotechnologies, which are collaborating on machine learning to better understand the immune system, put their technology to work on patient data to see how the virus affected immune responses.

“The amount of knowledge that has been gained and the level of progress is just really exciting,” said Peter Lee, corporate vice president of research and incubations at Microsoft.

There are other success stories. For example, Ochsner Health in Louisiana built an AI model to detect early signs of sepsis, a life-threatening response to infection. To encourage nurses to use it, the health system set up a response team that monitors the technology for alerts and takes action when needed.

“I call it our healthcare traffic control,” said Denise Basow, chief digital officer at Ochsner Health. Since implementation, she said, deaths from sepsis have declined.

Hurdles for AI

The biggest barrier to the use of artificial intelligence in healthcare has to do with infrastructure.

Healthcare systems must enable algorithms to access patient data. In recent years, large, well-funded systems have invested in moving their data to the cloud, creating huge lakes of data ready to be consumed by artificial intelligence. But that’s not so easy for smaller players.

Another problem is that each health system is unique in its technology and the way it treats patients. That means an algorithm may not work equally well everywhere.

In the past year, an independent investigation of a widely used sepsis detection algorithm from EHR giant Epic found that it performed poorly in real-world settings, suggesting that results depend on where and how hospitals deploy the AI.

This dilemma has led top health systems to build their own tech teams and develop AI in-house.

That could lead to complications in the long run. Unless health systems sell their technology, it is unlikely to face the scrutiny commercial software would undergo, which could leave defects unresolved longer than would otherwise be the case. It’s not just that health systems are implementing AI while no one is watching; it’s also that the stakeholders in artificial intelligence, across healthcare, technology, and government, have not agreed on standards.

A lack of quality data, the raw material algorithms need to work, is another major barrier to deploying the technology in healthcare facilities.

Much of the data comes from electronic health records, but it is often siloed within individual health systems, making it harder to assemble large datasets. For example, a hospital may have complete records for one visit while the rest of a patient’s medical history is held elsewhere, making it harder to draw conclusions about how to continue the patient’s care.

“We have bits and pieces, but not the whole,” said Aneesh Chopra, who served as the federal government’s chief technology officer under President Barack Obama and is now president of the data company CareJourney.

While some healthcare systems have invested in collecting data from multiple sources into a single repository, not all hospitals have the resources to do so.

Healthcare also has strong privacy protections that limit the amount and type of data tech companies can collect, leaving the industry behind others in terms of algorithmic horsepower.

Crucially, there is not enough robust data on health outcomes, making it harder for healthcare providers to use AI to improve the way they treat patients.

That may be changing. A recent series of studies on a sepsis algorithm included extensive detail about its use in practice and documented physicians’ adoption rates. Experts have hailed the studies as a template for how future AI research should be conducted.

But working with healthcare data is also more difficult than in other sectors because it is highly individualized.

“We found that even internally, across our different sites and locations, these models don’t deliver uniform performance,” said Jehi of the Cleveland Clinic.

And the stakes are high if something goes wrong. “The number of paths patients can take is very different from the number of paths I can take when I am on Amazon and try to order a product,” Wachter said.

Health experts also worry that algorithms could amplify bias and inequities in health care.

For example, a 2019 study found that a hospital algorithm was more likely to steer white patients than Black patients into programs focused on providing better care, even after accounting for how sick the patients were. The algorithm used past health costs as a proxy for medical need, and because less is typically spent on Black patients than on equally sick white patients, it underestimated Black patients’ needs.

The role of government

Last year, the FDA published a series of guidelines for AI used as a medical device, advocating the establishment of “good machine learning practices,” monitoring of how algorithms behave in real-world scenarios, and development of research methods to root out bias.

The agency then published more specific guidance on machine learning in radiological devices, requiring companies to outline how the technology is supposed to perform and to provide evidence that it works as intended. The FDA has approved more than 300 AI-enabled devices since 1997, primarily in radiology.

Regulating algorithms is a challenge, especially given the speed at which the technology evolves. The FDA is trying to get ahead of the problem by requiring companies to set up real-time monitoring and to submit plans for future changes.

But AI that health systems build for internal use is not overseen by the FDA. Bakul Patel, former head of the FDA’s Center for Devices and Radiological Health and now Google’s senior director for global digital health strategy and regulatory affairs, said the agency is thinking about how it could regulate noncommercial artificial intelligence in health systems, but added that there is no “easy answer.”

The FDA needs to thread the needle, taking enough action to reduce errors in algorithms without stifling AI’s potential, he said.

Some argue that public-private standards for AI would help advance the technology. Groups including the Coalition for Health AI, whose members include major health systems and universities, as well as Google and Microsoft, are working on this approach.

But the standards they envision would be voluntary, which could blunt their impact if they are not widely adopted.
