Algorithms monitored social welfare systems for years. Now they are under fire for bias

“People who receive a social benefit reserved for people with disabilities (the Allocation Adulte Handicapé, or AAH) are directly targeted by a variable in the algorithm,” says Bastien Le Querrec, legal expert at La Quadrature du Net. “The risk score for people who receive AAH and who are working increases.”

Because the algorithm also scores single-parent families higher than two-parent families, the groups argue that it indirectly discriminates against single mothers, who are statistically more likely to be sole caregivers. “In the criteria of the 2014 version of the algorithm, the score for beneficiaries who have been divorced for less than 18 months is higher,” says Le Querrec.

Changer de Cap says it has been contacted by both single mothers and disabled people seeking help after being the subject of an investigation.

The CNAF, the agency tasked with distributing financial aid including housing, disability, and child benefits, did not immediately respond to a request for comment or to WIRED’s question about whether the algorithm currently in use had changed significantly since the 2014 version.

As in France, human rights groups in other European countries argue that welfare fraud-detection algorithms subject the lowest-income members of society to intense surveillance, often with profound consequences.

When tens of thousands of people in the Netherlands, many of them from the country’s Ghanaian community, were falsely accused of defrauding the child benefit system, they were not only ordered to pay back the money the algorithm said they had stolen. Many of them say they were also left with spiraling debt and destroyed credit ratings.

The problem is not the way the algorithm was designed, but its use in the welfare system, says Soizic Pénicaud, a lecturer in AI policy at Sciences Po Paris, who previously worked for the French government on the transparency of public-sector algorithms. “The use of algorithms in the context of social policy carries many more risks than benefits,” she says. “I have not seen any examples in Europe or the world where these systems have been used with positive results.”

The case has ramifications beyond France. Welfare algorithms are expected to be an early test of how the new EU AI rules will be applied once they come into force in February 2025. From then on, “social scoring,” the use of AI to evaluate people’s behavior and then subject some of them to harmful treatment, will be banned throughout the bloc.

“Many of these welfare systems that detect fraud can, in my opinion, be social scoring in practice,” says Matthias Spielkamp, co-founder of the nonprofit AlgorithmWatch. However, public sector representatives are likely to disagree with that definition, and arguments over how to define these systems will likely end up in court. “I think it’s a very difficult question,” says Spielkamp.
