
US healthcare algorithm used to make decisions for 200 million patients a year is accused of bias against black people

  • Algorithm disproportionately advised physicians to pay more attention to white patients
  • Fewer than half of the black patients who needed extra care were put forward, research showed
  • The software helps doctors make decisions for about 200 million patients in the US every year

An algorithm used by US hospitals to identify patients with chronic diseases is significantly biased against black people, researchers have claimed.

The artificial intelligence, sold by healthcare company Optum, disproportionately advised doctors to give more care to white patients, even when black patients were sicker.

It meant that black patients made up just 18 percent of those put forward for an ongoing care program, when an unbiased system would have put forward enough to make up 47 percent.

The software helps doctors make decisions affecting about 200 million patients in the US every year.


An algorithm used to help physicians make decisions for about 200 million patients in the US every year has been found to be biased against black people (file image)

Scientists from universities in Chicago, Boston and Berkeley flagged the flaw in their study, published in the journal Science, and are working with Optum on a solution.

They said the algorithm, which is designed to help patients take their medication or stay out of hospital, was not intentionally racist, because it specifically excluded ethnicity from its decision-making.

Instead of using illness or biological data, the technology uses cost and insurance information to predict how healthy a person is.

The computer system is programmed to assume that the more money is spent on a patient, the sicker they are.

But the data it worked with showed that less was spent on black patients because they received less care.


Black patients incurred about $1,800 (£1,400) less in medical costs per year than white patients with the same conditions.
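The mechanism is easy to reproduce in a toy simulation. The Python sketch below uses invented numbers (it is not Optum's model or the study's data) to show how a model trained to predict cost under-selects a sicker group that generates lower bills, compared with ranking patients on illness directly:

```python
# Toy illustration of the label-choice bias described above.
# All numbers are invented; this is not Optum's software.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)       # 1 = the group that receives less care
illness = rng.poisson(3 + group)    # that group is, on average, sicker

def annual_cost(illness, group):
    # Spending rises with illness, but the under-served group generates
    # roughly $1,800 less per year at the same level of illness.
    return 2_000 * illness - 1_800 * group + rng.normal(0, 500, n)

past_cost = annual_cost(illness, group)     # model input (claims history)
future_cost = annual_cost(illness, group)   # model label (what it predicts)

model = LinearRegression().fit(past_cost.reshape(-1, 1), future_cost)
risk_score = model.predict(past_cost.reshape(-1, 1))

def share_of_flagged(score):
    # Share of the under-served group among the top 10% 'riskiest' patients.
    flagged = score >= np.quantile(score, 0.9)
    return group[flagged].mean()

print(f"cost-based score:    {share_of_flagged(risk_score):.0%}")
print(f"illness-based score: {share_of_flagged(illness):.0%}")
```

Even though group membership never enters the model, the cost-based score flags a smaller share of the sicker group, because lower spending is read as lower risk; scoring on illness itself removes the gap.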

HOW DOES ARTIFICIAL INTELLIGENCE LEARN?

AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works to learn.

ANNs can be trained to recognize patterns in information – including speech, text data or visual images – and form the basis for a large number of developments in AI in recent years.

Conventional AI uses input to 'teach' an algorithm about a specific topic by giving it huge amounts of information.

Practical applications include Google's language translation services, Facebook's face recognition software, and Snapchat's live image-changing filters.


Entering this data can be extremely time-consuming and is limited to one type of knowledge.

A new breed of ANNs called Adversarial Neural Networks pits two AI bots against each other, allowing them to learn from each other.

This approach is designed to speed up the learning process and refine the output of AI systems.
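For illustration, the adversarial setup the box describes can be written down in a few lines. The sketch below (Python with PyTorch, toy one-dimensional data; an illustration of the general idea, not anything from the study) trains a generator and a discriminator against each other until the generator's output resembles the real data:

```python
# Minimal adversarial training loop: two networks learning from each other.
# Toy example only; requires PyTorch.
import torch
import torch.nn as nn

real = lambda n: torch.randn(n, 1) * 1.5 + 4.0   # 'real' data: N(4, 1.5)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    fake = G(torch.randn(64, 1))

    # Discriminator learns to label real samples 1 and generated samples 0.
    d_loss = bce(D(real(64)), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator learns to fool the discriminator into outputting 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean:", G(torch.randn(1000, 1)).mean().item())
```

After a couple of thousand steps the generated mean should drift towards the real data's mean of 4; the two-player game is what lets the bots learn from each other.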

Researchers say the spending gap is down to numerous factors, including a lack of insurance, poorer access to care and even the unconscious prejudices of doctors.

As a result, the machine ranked white patients as being at the same risk of future health problems as black patients who were actually far sicker.


Correcting the bias in the algorithm would more than double the number of black patients flagged for extra care, the study found.

Based on the findings, the study was replicated on a different dataset of 3.7 million patients.

It found that black patients collectively suffered an additional 48,772 chronic diseases compared with white patients.

Although the study looked at just one healthcare algorithm, the researchers say similar biases are likely to exist across a number of industries.

Principal investigator Sendhil Mullainathan, a professor of computation and behavioral science at the University of Chicago, said: 'I think it's really unthinkable that anyone else's algorithm won't suffer from this.


'I hope this causes the entire industry to say, "Oh, we gotta fix this."'

Optum said it welcomed the research and claimed it would be useful for makers of other healthcare algorithms, many of which use similar systems.

Spokesman Tyler Mason said: 'Predictive algorithms that support these tools must be constantly reviewed and refined, and supplemented with information such as socio-economic data, to help clinicians make the best-informed healthcare decisions for each patient.

'As we advise our clients, these tools should never be seen as a substitute for a physician's expertise and knowledge of their patients' individual needs.'
