An Amazon-hosted AI tool designed to boost UK MoD recruitment puts defense personnel at risk of being publicly identified, according to a government evaluation.
The data used in the automated system to improve the writing of defense job advertisements and attract more diverse candidates by improving inclusive language, includes names, roles and emails of military personnel and is stored by Amazon in the US. This means that “a data breach can have worrying consequences, namely the identification of defense personnel,” according to documents detailing the government’s artificial intelligence systems published for the first time today.
The risk has been deemed “low” and the Ministry of Defense said the providers, Textio, Amazon Web Services and Amazon GuardDuty, a threat detection service, have put “robust protections” in place.
But it is one of several risks recognized by the government about its use of artificial intelligence tools in the public sector in a tranche of documents published to improve transparency on the use of algorithms by the central government.
Official statements on how the algorithms work emphasize that there are mitigations and safeguards to address the risks, as ministers push to use AI to boost the UK’s economic productivity and, in the words of technology secretary Peter Kyle on Tuesday, “bring public services.” come back from the brink of the abyss.”
This week it was reported that Chris Wormald, the new cabinet secretary, told civil servants that the prime minister wants to “restructure the way government works”, demanding that civil servants take advantage of the major opportunities provided by technology.
Google and Meta have been working directly with the UK government on pilot projects to use AI in public services. Microsoft is providing its AI-powered Copilot system to civil servants and earlier this month, Cabinet Office minister Pat McFadden said he wanted the government to “think more like a startup.”
Other risks and benefits identified in current central government AIs include:
-
The possibility that inappropriate teaching material is generated by a AI-powered lesson planning tool used by teachers based on Open AI’s powerful large language model, GPT-4o. AI saves teachers time and can quickly personalize lesson plans in ways that would not otherwise be possible.
-
“Hallucinations” of a Chatbot implemented to answer questions about children’s well-being. in family courts. However, it also offers 24-hour information and reduces queue times for people who need to speak to a human agent.
-
“Error code operation” and “incorrect input data” in new HM Treasury Policy Engine which uses machine learning to model tax and benefit changes “more accurately than existing approaches.”
-
“A degradation of human reasoning” if users of an AI to prioritize food hygiene inspection risks become too reliant on the system. It may also result in “a consistent score for establishments of a certain type”, but it should also mean quicker inspections of places that are most likely to breach hygiene rules.
The disclosures come in a newly expanded algorithmic transparency registry that records detailed information about 23 central government algorithms. Some algorithms, such as those used in the social security system by the Department for Work and Pensions, which have shown signs of bias, are still not registered.
“Technology has enormous potential to improve public services,” Kyle said. “We will use it to reduce delays, save money and improve outcomes for citizens across the country. “Transparency about how and why the public sector uses algorithmic tools is crucial to ensuring they are reliable and effective.”
Central government organizations will be required to publish a record of any algorithmic tools that directly interact with citizens or significantly influence decisions made about individuals, unless a limited set of exemptions apply, such as national security. Logs for the tools will be published once they are publicly tested or live and operational.
Other AIs included in the expanded register include an AI chatbot that handles customer queries to Network Rail and is trained on historical cases from the rail entity’s customer relations system.
The Department of Education is operating an AI lesson assistant for teachers, Aila, using Open AI’s GPT-4o model. Created within Whitehall, rather than using a contractor, it allows teachers to generate lesson plans. The tool is intentionally designed not to generate lessons at the touch of a button. But risks identified and being mitigated include harmful or inappropriate teaching materials produced, bias or misinformation, and “immediate injection” – a way for malicious actors to trick AI into carrying out their intentions.
The Children and Family Court Advice and Support Service, which advises family courts on the welfare of children, uses a natural language processing robot to power a chat service on a website that runs around 2,500 consultations per month. One of the recognized risks is that you may be handling reports of concerns about children, while others are “hallucinations” and “inaccurate results.” It has a two-thirds success rate. It is supported by companies such as Genesys and Kerv, which again use Amazon Web Services.