Elon Musk’s artificial intelligence chatbot Grok is unleashing a “torrent of misinformation” through its image-generating tool, an expert has warned, as damaging images depicting politicians perpetrating 9/11 and cartoon characters as killers are spreading on X.
A new version of Grok, which is available to paid subscribers on the social media platform, was launched on Wednesday with a new AI-powered image generation tool, prompting a flood of bizarre images to appear.
The image tool seemingly has few limits on what it can generate, lacking the protections that have become industry standards among rivals like ChatGPT, which rejects requests for images depicting real-world violence and explicit content, for example.
Grok, by contrast, has allowed the creation of degrading and offensive images, often depicting politicians, celebrities or religious figures naked or engaged in violent acts.
The chatbot also doesn’t seem to shy away from generating images of copyrighted characters, and many images of cartoon and comic book characters engaging in nefarious or illegal activities have also been posted.
Daniel Card, a fellow at BCS, the Chartered Institute for IT, said the problem of disinformation and misinformation on X was a “societal crisis” because of its potential impact.
“Grok may have some guardrails, but it’s unleashing a torrent of misinformation, copyright chaos and explicit deepfakes,” he said.
“This is not just a defence issue: it is a social crisis. Information warfare has become a bigger threat than cyber attacks, infiltrating our daily lives and distorting global perceptions.
“These challenges demand bold, modern solutions. By the time regulators step in, disinformation has already reached millions of people and is spreading at a pace we are simply not prepared for.
“In the United States, distorted views of countries like the United Kingdom are spreading, fuelled by exaggerated reports about the danger they face. We are at a critical moment in the search for truth in the age of artificial intelligence.
“Our current strategies are not up to the task. As we move towards a hybrid digital-physical world, this threat could become society’s greatest challenge. We must act now: policymakers, governments and technology leaders must step up their efforts.”
But Musk seemed to revel in the controversial nature of the chatbot update, posting on X on Wednesday: “Grok is the world’s funniest AI!”
Some users responded to Musk by using the tool to mock him, for example by asking it to depict him holding offensive signs or, in one case, showing the staunch Trump supporter holding a Harris-Walz sign.
Other fake images show Kamala Harris and Donald Trump working together in an Amazon warehouse, enjoying a trip to the beach together and even kissing.
The most sinister AI creations included images of Musk, Trump and others taking part in school shootings, while some have also depicted public figures carrying out the 9/11 terrorist attacks.
Other users asked Grok to create highly offensive images, including that of the Prophet Muhammad, in one case holding a bomb.
Several also showed politicians portrayed in Nazi uniforms and as historical dictators.
Alejandra Caraballo, an American civil rights lawyer and clinical instructor at Harvard Law School’s Cyberlaw Clinic, criticised Grok’s apparent lack of filters.
Writing on X, she described it as “one of the most reckless and irresponsible implementations of AI I have ever seen”.
The wave of misleading images will be of particular concern ahead of the US election in November, as very few of the images are accompanied by warnings or community notes from X users.
This comes after X and Musk were heavily criticised for the role the platform played in the recent riots in Britain, where misinformation that sparked much of the unrest was allowed to spread, while Musk interacted with far-right figures on the site and reiterated his belief in “absolute freedom of speech”.
And last month, he was accused of breaking his own platform’s rules on deepfakes after posting a doctored video mocking Vice President Harris by dubbing her with a manipulated voice.
The clip was viewed nearly 130 million times by X users. In the clip, Harris’s fake voice says, “I was selected because I’m the most diverse hire.”
She adds that anyone who criticises her is “sexist and racist”.
Other generative AI deepfakes, both in the United States and elsewhere, have reportedly attempted to influence voters with misinformation, humour, or both.
In Slovakia in 2023, fake audio clips impersonated a candidate discussing plans to rig an election and raise the price of beer days before the vote.
In 2022, a satirical ad from a political action committee superimposed the face of a Louisiana mayoral candidate onto that of an actor portraying him as an underachieving high school student.
Congress has yet to pass legislation regulating AI, and federal agencies have taken only limited action, leaving most existing US regulation to the states.
More than a third of states have created their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.
Beyond X, other social media companies have also created policies regarding synthetic and manipulated media shared on their platforms.
Users of the video platform YouTube, for example, will be required to disclose when they have used generative artificial intelligence to create realistic videos, or face suspension.