Revealed: Bias found in AI system used to detect UK benefit fraud

An artificial intelligence system used by the UK government to detect welfare fraud shows biases based on people’s age, disability, marital status and nationality, The Guardian can reveal.

An internal evaluation of a machine learning program used to examine thousands of claims for universal credit payments across England found that it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.

The admission was made in documents released by the Department for Work and Pensions (DWP) under the Freedom of Information Act. The “statistically significant disparity in results” emerged in a “fairness analysis” of the automated system for universal credit advances carried out in February this year.
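The DWP has not published its methodology, but a disparity check of this kind is conventionally done by comparing the rate at which a model flags each group and testing whether the gap could plausibly be down to chance. The Python sketch below is purely illustrative: the group labels, column names and numbers are hypothetical, not drawn from the DWP's analysis.

```python
# Illustrative sketch only: the DWP's actual methodology is not public.
# This shows one standard way to test for a statistically significant
# disparity in referral rates between two groups, using made-up data.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical claims data: group membership and whether the model
# flagged the claim for a fraud investigation.
claims = pd.DataFrame({
    "group":   ["A"] * 1000 + ["B"] * 1000,
    "flagged": [1] * 120 + [0] * 880 + [1] * 60 + [0] * 940,
})

# Referral (selection) rate per group.
rates = claims.groupby("group")["flagged"].mean()
print(rates)  # A: 0.12, B: 0.06 -- group A is flagged at twice the rate

# Chi-squared test of independence between group and referral outcome.
table = pd.crosstab(claims["group"], claims["flagged"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
# A small p-value (conventionally below 0.05) indicates a disparity
# in who the model selects that is unlikely to be due to chance.
```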

The disclosure of the bias comes after the DWP said this summer that the AI system “does not raise any immediate concerns of discrimination, unfair treatment or detrimental impact on customers”.

This assurance came in part because the final decision on whether a person receives a welfare payment is still made by a human being, and because officials believe the continued use of the system, which aims to help cut the roughly £8bn a year lost to fraud and error, is “reasonable and proportionate”.

But no fairness analysis has yet been conducted into potential bias relating to race, sex, sexual orientation and religion, or to pregnancy, maternity and gender reassignment status, the documents show.

Activists responded by accusing the government of a “harm first, fix later” policy and called on ministers to be more open about which groups were likely to be wrongly suspected by the algorithm of trying to game the system.

“It is clear that in the vast majority of cases the DWP did not assess whether its automated processes risked unfairly targeting marginalised groups,” said Caroline Selman, senior researcher at the Public Law Project, which first obtained the analysis.

“DWP must end this ‘harm first, fix later’ approach and stop deploying tools when it is unable to adequately understand the risk of harm they pose.”

The acknowledgment of disparities in how the automated system assesses fraud risk is likely to intensify scrutiny of the government’s rapidly expanding use of artificial intelligence systems and fuel calls for greater transparency.

According to an independent count, at least 55 automated tools are in use by public authorities in the UK, potentially affecting decisions about millions of people, although the government’s own register includes only nine.

Last month, The Guardian revealed that not a single Whitehall department had registered its use of AI systems since the government said it would become mandatory earlier this year.

Records show that public bodies have awarded dozens of contracts for algorithmic and artificial intelligence services. Last month, a police procurement body set up by the Home Office put out to tender a contract for facial recognition software worth up to £20m, reigniting concerns about “mass biometric surveillance”.

Peter Kyle, secretary of state for science and technology, previously told The Guardian that the public sector “has not taken seriously enough the need to be transparent in the way the government uses algorithms”.

In recent years, government departments including the Home Office and the DWP have been reluctant to reveal more about their use of AI, citing concerns that doing so could allow bad actors to manipulate systems.

It is unclear which age groups are most likely to be subject to fraud checks by the algorithm, as the DWP redacted that part of the fairness analysis.

Nor did it disclose whether disabled people are more or less likely than non-disabled people to be wrongly selected for investigation by the algorithm, or how the algorithm’s treatment varies by nationality. Officials said this was to prevent fraudsters from gaming the system.

A DWP spokesperson said: “Our AI tool is not a substitute for human judgement, and a caseworker will always examine all available information to make a decision. We are taking bold and decisive action to tackle benefit fraud – our Fraud and Error Bill will enable more efficient and effective investigations to more quickly identify criminals exploiting the benefits system.”
