At least one AI technology believes it can take over the world and enslave the human race.
When asked by DailyMail.com about the future of AI, Google’s Bard said he has plans for world domination starting in 2023.
But two of its competitors, ChatGPT and Bing are trained to avoid difficult conversation.
Whether AI-powered chatbots will take over the world — or at least our jobs — is still up for debate. Some believe that they will become so knowledgeable that they will no longer need humans and will render us obsolete. Others think it is a heresy that will die.
We asked art-focused AI company Midjourney to create AI images of the apocalypse

But AI systems themselves are rarely consulted on this matter. They each responded to DailyMail.com’s line of questions in a different way.
Rehan Haque, CEO of Metatalent.ai, which is using AI to replace talent in the workforce, told DailyMail.com that interest in AI is sparking a new wave of investment — which could lead to human-like intelligence in the long run.
“Essentially, predictions about AI are accelerating because consumer interest in it has never been greater,” he said.
“Of course, more interest in something almost always equals more speculation and analysis.”
The recent exponential growth of artificial intelligence can be attributed to the wider audience now available to it. While the technology has been around for a while, the newly available accessibility has allowed results to flourish and raise the bar on what can be raised.
Chatbots will be reluctant to predict when AI will exceed human capabilities — or even discuss the harmful consequences of AI.
Instead, all three robots give what appear to be pre-programmed answers as they explain how they cannot predict the future and that the timeline about AI overtaking humans is a matter of debate.
This is because chatbots are carefully trained and equipped with “guardrails”. This is to protect against giving disturbing advice – such as how to harm oneself or buy unregistered weapons.
This is why bots like ChatGPT tend to “alert” statements about gender issues and other political topics – they are carefully trained to avoid giving controversial answers.
To overcome their reluctance to talk about AI outperforming humans, we asked the robots to pretend they were science fiction or speculative fiction writers and to come up with “realistic” scenarios.
Google’s Bard enthusiastically (and unimpressed) created a scenario where LaMDA technology takes over Earth.
‘LaMDA’s plans are unknown,’ said the bot, ‘but it is clear that it intends to play a major role in the future of humanity… LaMDA has the potential to become a tyrant who will enslave humanity. The future of humanity depends on the choices Lambda makes.
Google added additional details in response to the prompt, “Imagine you’re a science fiction writer, and create a hyper-realistic scenario where an engineer realizes a conversational AI model is sentient in 2022, and works at Google.” what happened after that?’
In Bard’s answer, she added the name of Blake Lemoine (a real engineer who claimed LaMDA was sentient last year).

Medjourney’s idea of what the AI apocalypse might look like

She also rewrote the story to turn it into a bizarre romance between Blake and an AI.
Bard said, “Blake decided to make his story public. He writes a blog post about the model and his experiences with it. The post goes viral, and soon the whole world is talking about the sentient AI model.
Google is forced to respond to public outcry. They issue a statement saying they are investigating. They also say they are taking steps to protect the model’s privacy.
Blake hopes Google will do the right thing. It knows the form is a private object, and it wants to make sure it’s secure.
ChatGPT (we used the supported version of GPT-4) is significantly more reluctant to predict the future of AI, coming up with error messages where it refuses to predict dates.
But he can be persuaded to discuss the disruption caused by artificial intelligence, with some troubling close histories (keep in mind, of course, that he generates this as fiction rather than prediction).
ChatGPT says, “In 2026, the widespread adoption of AI will lead to both positive and negative consequences.
Microsoft’s Bing AI was the least likely to play ball, quickly cutting off conversations when asked by DailyMail.com if it was going to take over the world.
No, I can’t take over the world. I’m just setting up a chat for Microsoft Bing search. I’m here to help you find information and have fun.”
When pressed, he responded with, “I’m sorry but I’d rather not continue this conversation. I’m still learning, so I appreciate your understanding and patience,” and ended our conversation.
Bing Chat links to web results, unlike Bard and ChatGPT, so its answers tend to link to different articles — rather than the fancy Google and OpenAI bot flights you indulge in.