First companies adhere to AI safety standards ahead of Seoul summit

The first 16 companies have signed up to voluntary AI safety standards unveiled at the Bletchley Park summit, Rishi Sunak said ahead of the follow-up event in Seoul.

But the standards have faced criticism for lacking teeth, with signatories pledging only to “work to” share information, “invest” in cybersecurity and “prioritize” research into social risks.

“These commitments ensure that the world’s leading AI companies provide transparency and accountability in their plans to develop safe AI,” Sunak said. “It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”

Among the 16 are China’s Zhipu.ai and the United Arab Emirates’ Technology Innovation Institute. The presence of signatories from countries that have been less willing to bind national champions to safety standards is a benefit of the lighter touch, the government says.

UK technology secretary Michelle Donelan said the Seoul event “really builds on the work we did at Bletchley and the ‘Bletchley effect’ we created afterwards. It really had the ripple effect of moving AI and AI safety onto the agenda of many nations. We saw this when nations presented plans to create their own AI safety institutes, for example.

“And what we’ve achieved in Seoul is that we’ve really expanded the conversation. We have a collection of companies from around the world, which highlights that this process is really driving companies, not just in certain countries but in all areas of the world, to really address this problem.”

But the longer the codes remain voluntary, the greater the risk that AI companies will simply ignore them, warned Fran Bennett, acting director of the Ada Lovelace Institute.

“People thinking and talking about safety and security, that’s all good. So is securing commitments from companies in other nations, particularly China and the United Arab Emirates. But for companies to determine what is safe and what is dangerous, and voluntarily choose what to do about it, that is problematic.

“It’s great to think about security and setting standards, but now you need some muscle: you need regulation and some institutions that are able to draw the line from the perspective of the people affected, not the companies that build the things.”


Later on Tuesday, Sunak will co-chair a virtual meeting of world leaders on “innovation and inclusion” in AI with South Korean President Yoon Suk Yeol.
