UK government does not include AI use in mandatory registration
No Whitehall department has recorded its use of AI systems since the government said registration would become mandatory, prompting warnings that the public sector is “going blind” on the deployment of algorithmic technology affecting millions of lives.

The government is already using AI to inform decisions on everything from benefit payments to immigration enforcement, and records show that public bodies have awarded dozens of contracts for AI and algorithmic services. Last week, a police procurement body set up by the Home Office put a contract for facial recognition software, valued at up to £20m, out to tender, reigniting concerns about “mass biometric surveillance”.

But so far the details of only nine algorithmic systems have been published on the public register, not including any of a growing number of artificial intelligence programs used in the welfare system, by the Home Office or by the police. The shortage of information comes despite the government announcing in February this year that AI registration would now be “a requirement for all government departments”.

Experts have warned that if adopted uncritically, AI carries the potential to cause harm, with recent high-profile examples of IT systems not working as intended, including the Post Office’s Horizon software. The AI used in Whitehall ranges from Microsoft’s Copilot system, which is being widely tested, to automated checks for fraud and errors in the benefits system. A recent AI contract notice issued by the Department for Work and Pensions (DWP) described “growing interest within the DWP, reflecting that of the government and wider society”.

Peter Kyle, secretary of state for science and technology, admitted that the public sector “has not taken seriously enough the need to be transparent in the way the government uses algorithms”.

When asked about the lack of transparency, Kyle told The Guardian: “I accept that if the government uses algorithms on behalf of the public, the public has a right to know. The public needs to feel that the algorithms are there to serve them and not the other way around. The only way to do that is to be transparent about their use.”

Big Brother Watch, a privacy rights campaign group, said the emergence of the police facial recognition contract, despite Parliamentarians warning of the lack of legislation to regulate its use, was “yet another example of the government’s lack of transparency about the use of AI technology.”

“The secret use of AI and algorithms to impact people’s lives puts everyone’s data rights at risk. Government departments need to be open and honest about how they use this technology,” said Madeleine Stone, advocacy director.

The Home Office declined to comment.

The Ada Lovelace Institute recently warned that AI systems may appear to reduce administrative burdens, “but can seriously damage public trust and reduce public benefit if the predictions or results they produce are discriminatory, harmful or simply ineffective.”

Imogen Parker, associate director at the data and AI research body, said: “The lack of transparency is not only keeping the public in the dark, it also means the public sector is flying blind in adopting AI. Not publishing records of algorithmic transparency is limiting the public sector’s ability to determine whether these tools work, learn from what doesn’t work, and monitor the different social impacts of these tools.”

As of the end of 2022, only three algorithms had been recorded on the national register: a system used by the Cabinet Office to identify digital records of long-term historical value; an AI-powered camera used to analyse pedestrian crossings in Cambridge; and a system to analyse patients’ views on NHS services.

But since February there have been 164 contracts with public bodies that mention AI, according to Tussell, a firm that monitors public contracts. Technology companies, including Microsoft and Meta, are vigorously promoting their artificial intelligence systems throughout the government. Google Cloud funded a recent report that claimed greater deployment of generative AI could unlock up to £38 billion across the public sector by 2030. Kyle called it “a powerful reminder of how generative AI can be revolutionary for government services”.

Not all of the latest public sector AI involves data on members of the public. A £7m contract with Derby City Council is described as “transforming City Hall using AI technology”, and a £4.5m contract with the education department aims to “improve AI performance in education”.

A spokesperson for the science and technology department confirmed that the transparency rule “is now mandatory for all departments” and said that “several records will be published soon.”

Where is the government already using AI?

  • The Department for Work and Pensions has been using generative AI to read more than 20,000 documents a day to “understand and summarise correspondence”, after which the full information is shared with officials for decision-making. It has automated systems to detect fraud and errors in universal credit applications, and AI helps agents working on personal independence payment applications by summarising the evidence. This autumn, the DWP began rolling out basic AI tools to job centres, allowing careers advisers to ask questions about universal credit guidance in a bid to improve the effectiveness of conversations with jobseekers.

  • The Home Office deploys an immigration enforcement system powered by artificial intelligence, which critics call a “robotic social worker”. An algorithm intervenes in decision-making, including the return of people to their countries of origin. The government describes it as a “rules-based” system rather than artificial intelligence, as it does not involve machine learning from data. It says the system brings efficiency by prioritising work, but that a human being is still responsible for each decision. The system is being used amid a growing number of cases of asylum seekers subject to removal measures, which now number around 41,000 people.

  • Several police forces use facial recognition software to locate suspected criminals with the help of artificial intelligence. These include the Metropolitan Police, South Wales Police and Essex Police. Critics have warned that such software will “transform Britain’s streets into high-tech police line-ups”, but supporters say it catches criminal suspects and the data of innocent bystanders is not stored.

  • NHS England has a £330m contract with Palantir to create a huge new data platform. The agreement with the US company, which builds AI-enabled digital infrastructure and was co-founded by the Donald Trump supporter Peter Thiel, has raised concerns about patient privacy, although Palantir says its customers retain full control of the data.

  • An AI chatbot is being trialled to help people navigate the government’s extensive gov.uk website. It has been built by the government digital service using OpenAI’s ChatGPT technology. Redbox, another AI chatbot for use by officials in Downing Street and other government departments, has also been deployed to allow officials to quickly drill down into secure government documents and get quick summaries and personalised reports.
