
Seoul summit shows UK progress in making advanced AI safe

The UK is leading an international effort to test the most advanced AI models for safety risks before they reach the public, as regulators race to create a workable safety regime ahead of the Paris summit in six months.

Britain’s AI Safety Institute, the first of its kind, now has counterparts from around the world, including South Korea, the United States, Singapore, Japan and France.

Regulators at the Seoul AI Summit hope the agencies can collaborate to create a 21st-century version of the Montreal Protocol, the groundbreaking agreement to control CFCs and close the hole in the ozone layer.

But before doing so, the institutes must agree on how they can work together to turn an international patchwork of approaches and regulations into a unified effort to corral AI research.

“At Bletchley, we announced the UK AI Safety Institute, the world’s first government-backed organization dedicated to advanced AI safety for the public good,” said Michelle Donelan, the UK technology secretary, in Seoul on Wednesday. She credited the “Bletchley effect” with spurring the creation of a global network of peers doing the same thing.

These institutes will begin to share information about models, their limitations, capabilities and risks, as well as monitor specific “AI safety incidents and harms” where they occur and share resources to advance the global understanding of the science of AI safety.

At the first “full” meeting of those countries on Wednesday, Donelan warned that the creation of the network was only a first step. “We must not rest on our laurels. As the pace of AI development accelerates, we must match that speed with our own efforts if we are to address the risks and seize the limitless opportunities for the public.”

The network of safety institutes has a strict deadline. This autumn, leaders will meet again, this time in Paris, for the first full AI summit since Bletchley. There, for the conversation to move from how to test AI models to how to regulate them, the safety institutes will have to demonstrate that they have mastered what Donelan called “the nascent science of cutting-edge AI testing and evaluation”.

Jack Clark, co-founder and head of policy at the AI lab Anthropic, said simply establishing a functional safety institute puts the UK “a hundred miles” further along the path to safe AI than the world was two years ago.

“I think what we need to do now is encourage governments, as I have been doing here, to continue to invest the money necessary to create safety institutes and fill them with enough technical personnel that they can actually produce their own information and testing,” he said.

As part of the investment in that science, Donelan announced £8.5m funding to “break new ground” in AI safety testing.

Francine Bennett, acting director of the Ada Lovelace Institute, called that funding a good start and said it would need to “pave the way for a much more substantial program of understanding and protecting against social and systemic risk.”

“It is fantastic to see the safety institute and the government taking steps towards a broader vision of what safety means, both in the state of the science report and with this funding; we are recognizing that safety is not something that can be sufficiently tested in a laboratory,” Bennett added.

The summit was criticized for leaving key voices out of the conversation. No Korean civil society groups were present, the host country was represented only through academia, government and industry, and only the largest AI companies were invited to participate. Roeland Decorte, president of the AI Founders Association, warned that the discussions risked “focusing only on flashy, large-scale models, of which only a handful will come to dominate and which can currently only be created by the most powerful players, at a significant financial loss as a result”.

“The question is, in the end, do we want to regulate for and build a future mature AI economy that creates a sustainable framework for the majority of companies operating in the space?” he added.