
Big Tech Has Distracted World From Existential Risk of AI, Says Top Scientist


Big tech has managed to distract the world from the existential risk to humanity that artificial intelligence still poses, a prominent AI scientist and activist has warned.

Speaking to The Guardian at the AI Summit in Seoul, South Korea, Max Tegmark said the shift in focus from the extinction of life to a broader conception of artificial intelligence safety risked an unacceptable delay in imposing strict regulation on the creators of the most powerful programs.

“In 1942, Enrico Fermi built the first reactor with a self-sustaining nuclear chain reaction under a Chicago football field,” said Tegmark, who trained as a physicist. “When the leading physicists of the time heard about this, they were very scared, because they realized that the biggest remaining obstacle to building a nuclear bomb had just been overcome. They realized they were only a few years away from a bomb, and indeed it was three years, with the Trinity test in 1945.

“AI models that can pass the Turing test (where someone cannot tell in a conversation that they are not talking to another human being) are the same warning for the kind of AI over which you can lose control. That’s why people like Geoffrey Hinton and Yoshua Bengio (and even many tech CEOs, at least privately) are freaking out now.”

Tegmark’s nonprofit, the Future of Life Institute, led the call last year for a six-month “pause” on advanced AI research on the back of those fears. The release of OpenAI’s GPT-4 model in March of that year was the canary in the coal mine, he said, showing that the risk was unacceptably close.

Despite thousands of signatures from experts including Hinton and Bengio, two of the three “godfathers” of AI who pioneered the machine learning approach that underpins the field today, no pause was agreed.

Instead, the AI summits, of which Seoul is the second after Bletchley Park in the United Kingdom last November, have led the nascent field of AI regulation. “We wanted that letter to legitimize the conversation, and we’re very happy with how it worked out. Once people saw that people like Bengio were worried, they thought, ‘It’s okay for me to worry about it.’ Even the guy at my gas station told me, after that, that he’s worried about AI replacing us.

“But now we need to move from just talking the talk to walking the walk.”

However, since the initial announcement of what became the Bletchley Park summit, the focus of international AI regulation has shifted away from existential risk.

In Seoul, only one of the three “high-level” groups directly addressed safety, and it looked at the “full spectrum” of risks, “from privacy breaches to labor market disruptions and potential catastrophic outcomes.” Tegmark argues that the playing down of the most severe risks is not healthy, and is not accidental.

“That’s exactly what I predicted would happen thanks to industry lobbying,” he said. “In 1955, the first journal articles appeared saying that smoking causes lung cancer, and you would think that some regulation would follow quite quickly. But no, it took until 1980, because there was a huge push by the industry to distract. I feel like that’s what’s happening now.


“Of course, AI also causes current harms: there is bias, it harms marginalized groups… But as Michelle Donelan (the UK science and technology secretary) herself said, it’s not that we can’t deal with both. It’s a bit like saying, ‘Let’s not pay attention to climate change because there will be a hurricane this year, so we should just focus on the hurricane.’”

Tegmark’s critics have made the same argument about his own position: that the industry wants everyone to talk about hypothetical future risks to distract from concrete present harms, a charge he rejects. “Even if you consider it on its own merits, it’s pretty galaxy-brained: it would be quite the 4D chess move for someone like (OpenAI boss) Sam Altman, in order to avoid regulation, to tell everybody that it could be lights out for everyone and then try to persuade people like us to sound the alarm.”

Instead, he maintains, the muted support from some technology leaders is because “I think they all feel they are trapped in an impossible situation where, even if they wanted to stop, they can’t. If the CEO of a tobacco company wakes up one morning and feels that what they are doing is not right, what will happen? They are going to replace the CEO. So the only way to get safety first is for the government to establish safety standards for everyone.”
