The ‘godfather of AI’ shortens the odds that the technology will wipe out humanity in the next 30 years

The British-Canadian computer scientist often touted as the “godfather” of artificial intelligence has shortened the odds that AI will wipe out humanity in the next three decades, warning that the pace of change in the technology is “much faster” than expected.

Professor Geoffrey Hinton, who this year received the Nobel Prize in Physics for his work in AI, said there was a “10% to 20%” chance that AI would lead to human extinction in the next three decades.

Hinton had previously said there was a 10% chance of the technology triggering a catastrophic outcome for humanity.

Asked on BBC Radio 4’s Today programme whether he had changed his analysis of a potential AI apocalypse and the one-in-10 chance of it happening, he said: “Not really, 10% to 20%.”

Hinton’s estimate prompted Today’s guest editor, the former chancellor Sajid Javid, to say “you’re going up”, to which Hinton replied: “If anything. You see, we’ve never had to deal with things smarter than ourselves before.”

And he added: “And how many examples do you know of something more intelligent being controlled by something less intelligent? There are very few examples. There is a mother and a baby. Evolution worked hard to allow the baby to control the mother, but that’s the only example I know of.”

Hinton, a London-born professor emeritus at the University of Toronto, said humans would be like little children compared to the intelligence of highly powerful artificial intelligence systems.

“I like to think of it as: imagine yourself and a three-year-old. We’ll be the three-year-olds,” he said.

AI can be loosely defined as computer systems that perform tasks that typically require human intelligence.

Last year, Hinton made headlines after quitting his job at Google to speak more openly about the risks posed by unrestricted AI development, citing concerns that “bad actors” would use the technology to harm others. A key concern of AI safety advocates is that the creation of artificial general intelligence, or systems that are smarter than humans, could lead to the technology posing an existential threat by evading human control.

Reflecting on where he thought AI development would have reached when he began his work in the technology, Hinton said: “I didn’t think it would be where we are now. I thought at some point in the future we would get here.”


He added: “Because the situation we are in now is that most experts in the field think that at some point, probably within the next 20 years, we are going to develop AIs that are smarter than people. And that is a very scary thought.”

Hinton said the pace of development was “very, very fast, much faster than I expected” and called for the government to regulate the technology.

“My concern is that the invisible hand will not keep us safe. So leaving it in the hands of large for-profit companies will not be enough to ensure that they develop it safely,” he said. “The only thing that can force these large companies to do more research on security is government regulation.”

Hinton is one of three “godfathers of AI” who have won the ACM A.M. Turing Award, the computer science equivalent of the Nobel Prize, for their work. However, one of the trio, Yann LeCun, the chief AI scientist at Mark Zuckerberg’s Meta, has played down the existential threat and has said that AI “could actually save humanity from extinction”.
