UK needs system to record AI misuse and malfunctions, think tank says

The UK needs a system to record the misuse and malfunction of artificial intelligence, or ministers risk being unaware of alarming incidents involving the technology, according to a report.

The next government should create a system to record incidents involving AI in public services and should consider building a central hub to collect AI-related episodes across the UK, said the Centre for Long-Term Resilience (CLTR), a group of experts.

CLTR, which focuses on government responses to unforeseen crises and extreme risks, said an incident reporting regime such as the system operated by the Air Accidents Investigation Branch (AAIB) was vital to using the technology successfully.

The report cites 10,000 AI “safety incidents” recorded by media outlets since 2014, listed in a database compiled by the Organization for Economic Co-operation and Development (OECD), an international research body. The OECD’s definition of a harmful AI incident covers physical, economic, reputational and psychological harm.

Examples registered in the OECD’s AI incidents monitor include a deepfake of Labour leader Keir Starmer in which he appeared to be abusive towards party staff; Google’s Gemini model portraying German second world war soldiers as people of color; incidents involving driverless cars; and a man who planned to assassinate the late queen and was encouraged by a chatbot.

“Incident reporting has played a transformative role in risk mitigation and management in safety-critical industries such as aviation and medicine. But it is largely missing from the regulatory landscape being developed for AI. This leaves the UK government blind to incidents arising from the use of AI, inhibiting its ability to respond,” said Tommy Shaffer Shane, policy director at CLTR and author of the report.

CLTR said the UK government should follow the lead of industries where safety is critical, such as aviation and medicine, and introduce a “well-functioning incident reporting regime”. It said many AI incidents would probably not be covered by UK watchdogs because there is no regulator focused on cutting-edge AI systems such as chatbots and image generators. The Labour party has committed to introducing binding regulation for the most advanced AI companies.

Such a setup would provide quick insights into how AI is going wrong, the think tank said, and help the government anticipate similar incidents in the future. It added that incident reporting would help coordinate responses to serious incidents where speed of response is crucial, and identify early signs of large-scale harms that could emerge in the future.

Some models may only reveal harms once they have been fully released, despite being tested by the UK’s AI Safety Institute, and incident reports would at least allow the government to see how well the country’s regulatory framework is addressing those risks.

CLTR said the Department for Science, Innovation and Technology (DSIT) risked lacking an up-to-date picture of the misuse of AI systems, such as disinformation campaigns, attempts to develop biological weapons, bias in AI systems, or misuse of AI in public services, as happened in the Netherlands, where tax authorities plunged thousands of families into financial difficulty after deploying an AI programme in a misguided attempt to tackle benefit fraud.

“DSIT should prioritize ensuring the UK Government learns of these new harms not through the news, but through proven incident reporting processes,” the report says.

CLTR, which is largely funded by the wealthy Estonian computer programmer Jaan Tallinn, recommended three immediate steps: create a government system for reporting AI incidents in public services; ask UK regulators to identify gaps in AI incident reporting; and consider building a pilot AI incident database, which could collect AI-related episodes from existing bodies such as the AAIB, the Information Commissioner’s Office and the medicines regulator, the MHRA.

CLTR said the reporting system for AI use in public services could build on the existing algorithmic transparency reporting standard, which encourages government departments and police forces to disclose their use of AI.

In May, 10 countries, including the UK, plus the EU, signed a declaration on AI safety cooperation that included tracking “AI harms and safety incidents”.

The report added that an incident reporting system would also support DSIT’s Central AI Risk Function (CAIRF), which assesses and reports on risks associated with AI.
