The ‘Godfather of Artificial Intelligence’ has sensationally resigned from Google, warning that the technology could turn life as we know it upside down.
Geoffrey Hinton, 75, is credited with creating the technology that became the foundation of AI systems like ChatGPT and Google Bard.
But the Turing Award winner now says part of him regrets helping to create the systems, which he fears could spread misinformation and replace people in the workforce.
He said he has to tell himself excuses like “if I hadn’t built it, someone else would have” to avoid being overwhelmed with guilt.
He drew comparisons to the “father of the atomic bomb” Robert Oppenheimer, who was reportedly distraught over his invention and devoted the rest of his life to opposing its proliferation.
Geoffrey Hinton, 75, who is seen as the “Godfather of Artificial Intelligence,” said part of him now regrets helping to create the systems. He is pictured above speaking at a summit organized by media company Thomson Reuters in Toronto, Canada, in 2017

There is a deep rift over AI in Silicon Valley. Brilliant minds are divided on the progress of the systems – some say it will uplift humanity, while others fear the technology will destroy it
Talking to the New York Times about his departure, he warned that AI would flood the internet with fake photos, videos and text in the near future.
These would be of such a standard, he added, that the average person “could not know what is true anymore.”
The technology also threatened to take away more than just “drudge work,” he said, and could upend the careers of people working as paralegals, personal assistants and translators.
Some employees already say they use it to handle multiple tasks for them, such as creating marketing materials and transcribing Zoom meetings so they don’t have to listen in.
“Maybe what happens in these systems is actually much better than what happens in the (human) brain,” he said, explaining his fears.
“The idea that this stuff can actually get smarter than humans — a few people believed that.
“But most people thought it was far away. And I thought it was far away. I thought it was 30 to 50 years away or even longer.
“Of course I don’t think so anymore.”
Asked why he had helped develop a potentially dangerous technology, he said, “I console myself with the common excuse: if I hadn’t done it, someone else would have.”
Hinton added that he used to paraphrase Oppenheimer when asked this question, saying: “If you see something that’s technically beautiful, go ahead and do it.”
Hinton decided to leave Google last month after ten years with the tech giant amid the proliferation of AI technologies.
He had a lengthy chat with the CEO of Google’s parent company Alphabet, Sundar Pichai, before leaving – though it’s not clear what was said.
In an open letter to his former employer, he accused Google of not being a “proper steward” for AI technologies.
In the past, the company had held back potentially dangerous technologies, he said. But it has now thrown caution to the wind as it competes with Microsoft – which added a chatbot to its Bing search engine earlier this year.
Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to taking a responsible approach to AI. We are constantly learning to understand emerging risks while innovating boldly at the same time.”
His warning comes as Silicon Valley slides into a civil war over the advancement of artificial intelligence – with the world’s greatest minds divided over whether it will uplift or destroy humanity.

The fear of AI comes as experts predict it will reach the singularity by 2045, the point at which the technology surpasses human intelligence and we can no longer control it
Elon Musk, Apple co-founder Steve Wozniak, and the late Stephen Hawking are among AI’s most famous critics who believe it poses a “profound risk to society and humanity” and could have “catastrophic consequences.”
Last month, Musk and Wozniak even signed an open letter calling for a pause in the “dangerous race” to roll out advanced AI, arguing that more risk assessment was needed.
But Bill Gates, Mr Pichai and futurist Ray Kurzweil are on the other side of the debate, touting the technology as the “most important” innovation of our time.
They claim it can cure cancer, solve climate change and increase productivity.
Hinton had not previously added his voice to the debate, saying he would not speak out until he had formally left Google.
He rose to fame in 2012 when, at the University of Toronto in Canada, he and two of his students designed a neural network that could analyze thousands of photos and teach itself to identify common objects such as flowers, dogs and cars.
Google later spent $44 million to acquire the company Hinton founded based on the technology.

The release of AI bots like ChatGPT (stock image) has sparked calls from many circles to review the technology due to the risk it poses to humanity
Advanced AI systems already available include ChatGPT, which has attracted more than a billion visits since its November release. Data shows that it also has a whopping 100 million monthly active users.
Launched by San Francisco-based OpenAI, the platform has become an instant global success.
The chatbot is a large language model trained on massive amounts of text data, allowing it to generate eerily human-like text in response to a given prompt.
The public uses ChatGPT to write research papers, books, news articles, emails and other text-based work. While many see it as little more than a virtual assistant, some brilliant minds see it as a step toward the end of humanity.
AI is considered to have reached the singularity when it surpasses human intelligence and can think independently, at which point humans lose control of it.
AI would no longer need humans or listen to humans, allowing it to steal nuclear codes, create pandemics and start world wars.
DeepAI founder Kevin Baragona, who signed the letter, told DailyMail.com: “It’s almost like a war between chimpanzees and humans.
“The humans obviously win because we are much smarter and can use more advanced technology to beat them.
“If we’re like the chimpanzees, the AI will either destroy us or we’ll become addicted to it.”