
Trying to tame AI: Seoul summit points out obstacles to regulation


The 2023 AI summit at Bletchley Park was a landmark in AI regulation simply by virtue of taking place at all.

Between the event’s announcement and its opening day, the conversation shifted from a tone of slight bewilderment to broad agreement that AI regulation is worth discussing.

But the follow-up summit, held this week at a research park on the outskirts of Seoul, faces a harder task: can the United Kingdom and South Korea demonstrate that governments are moving from talking about regulating AI to actually doing it?

At the end of the Seoul summit, the big achievement the UK was touting was the creation of a global network of AI safety institutes, building on the pioneering British institute founded after the last meeting.

Technology Secretary Michelle Donelan attributed the new institutes to the “Bletchley effect” at work and announced plans to lead a system by which regulators in the US, Canada, Britain, France, Japan, South Korea, Australia, Singapore and the EU share information on AI models, harms and safety incidents.

Michelle Donelan, the UK’s technology secretary, said the emerging global network of safety institutes built on the progress made at the Bletchley Park summit last year. Photograph: Lee Jin-man/AP

“Two years ago, governments were receiving information about AI almost exclusively from the private sector and academics, but they themselves didn’t have the capacity to really develop their own evidence base,” says Jack Clark, co-founder and chief policy officer at the AI lab Anthropic. In Seoul, “we heard from the UK safety institute: they have run tests on a variety of models, including Anthropic’s, and got anonymized results for a variety of misuses. They also discussed how they built their own jailbreak attacks to break the safety systems on all of these models.”

That success, Clark says, has left him “slightly more optimistic” than he was the year before Bletchley. But the power of the new safety institutes is limited to observation and reporting, running the risk that they are forced to simply sit back and watch AI harms increase unchecked. Still, Clark maintains, “there is tremendous power in shaming people and companies.”

“You can be a safety institute and just test publicly available models. And if you find really inconvenient things about them, you can publish them, just like in academia today. What we see is that companies take very significant measures in response to that. Nobody likes to be last in the rankings.”

Jack Clark, Anthropic’s co-founder and chief policy officer, said even toothless safety institutes have “tremendous power” to embarrass companies. Photograph: Anthony Wallace/AFP/Getty Images

Even the act of observation can change things. Safety institutes in the EU and the United States, for example, have established compute thresholds, seeking to define who falls under their gaze based on the amount of computing power companies muster to build their “frontier” models. In turn, those thresholds have begun to act as a clear dividing line: it is better to be slightly below the threshold and avoid the hassle of working with a regulator than to be slightly above it and create a lot of extra work, one founder said. In the United States, that limit is high enough that only the wealthiest companies can afford to exceed it, but the EU’s lower limit has brought hundreds of companies under the auspices of its institute.

However, IBM’s chief privacy and trust officer, Christina Montgomery, says that “the compute thresholds still exist because it is a very clear line. It is very difficult to determine what the other capabilities are. But that’s going to change and evolve quickly, and it should, because given all the new techniques that are coming out for how to tune and train models, it doesn’t matter how big the model is.” Instead, she suggests, governments will begin to focus on other aspects of AI systems, such as the number of users who are exposed to the model.

Andrew Ng, former head of Google Brain, advocated for regulation to target AI applications, rather than AI systems themselves. Photograph: Anthony Wallace/AFP/Getty Images

The Seoul summit also exposed a more fundamental divide: should regulation focus on AI systems themselves, or solely on the uses of AI technologies? Former Google Brain head Andrew Ng defended the latter, arguing that regulating AI makes as much sense as regulating “electric motors”: “it’s very difficult to say ‘how can we make an electric motor safe’ without simply building very small electric motors.”

Janil Puthucheary, Singapore’s senior minister of state for communications, information and health, echoed Ng’s point. “To a large extent, the use of AI today is not unregulated. And the public is not unprotected,” he stated. “If AI is applied in the healthcare sector, all healthcare regulatory tools must address the risks. If it is instead applied in the aviation industry, we already have a mechanism and a platform to regulate that risk.”

But focusing on applications rather than underlying AI systems risks overlooking what some consider AI’s biggest safety problem: the possibility that a “superintelligent” AI system could lead to the end of civilization. Massachusetts Institute of Technology professor Max Tegmark compared the release of GPT-4 to the “Fermi moment,” the creation of the first nuclear reactor, which virtually guaranteed that an atomic bomb was not far away, and said the similar risk posed by powerful AI systems needed to stay top of mind.

Donelan defended the change in approach. “One of the key pillars today is inclusion, which can mean many things, but it should also mean the inclusion of all potential risks. That’s something we constantly try to do.”

For Clark, that was little consolation. “I would just say that the more things you try to do, the less likely you are to be successful at them,” he said. “If you end up with a kitchen sink approach, then you really dilute the ability to do anything.”
