Extremists across the United States have weaponized AI tools to help them spread hate speech more efficiently, recruit new members and radicalize their supporters online at unprecedented speed and scale, according to a new report from the Middle East Media Research Institute (MEMRI), an American non-profit press monitoring organization.
The report found that AI-generated content is now a mainstay of extremist production: extremists are developing their own extremist-infused AI models and are already experimenting with novel ways to harness the technology, including producing blueprints for 3D-printed weapons and instructions for making bombs.
Researchers at the Domestic Terrorism Threat Monitor, a group within the institute that specifically tracks extremists based in the United States, lay out in detail the scale and scope of AI use among domestic actors, including neo-Nazis, white supremacists and anti-government extremists.
“Initially there was a little bit of hesitation around this technology and we saw a lot of debate and discussion among [extremists] online about whether this technology could be used for their purposes,” Simon Purdue, director of the Domestic Terrorism Threat Monitor at MEMRI, told reporters at a briefing earlier this week. “In recent years we have gone from seeing occasional AI content to AI being a major part of online hate propaganda content, particularly when it comes to video and visual propaganda. So as this technology develops, we will see extremists using it more.”
As the US election approaches, Purdue’s team is tracking a series of worrying developments in extremists’ use of AI technology, including the widespread adoption of AI video tools.
“The biggest trend we’ve noticed [in 2024] is the rise of video,” Purdue says. “Last year, AI-generated video content was very basic. This year, with the launch of OpenAI’s Sora and other video manipulation or generation platforms, we have seen extremists use them as a means to produce video content. We’ve also seen a lot of excitement about this; a lot of people are talking about how this could allow them to produce feature films.”
Extremists have already used this technology to create videos showing President Joe Biden using racial slurs during a speech and actress Emma Watson reading Mein Kampf aloud while wearing a Nazi uniform.
Last year, WIRED reported on how extremists linked to Hamas and Hezbollah were leveraging generative AI tools to undermine the hash-sharing database that allows Big Tech platforms to rapidly remove terrorist content in a coordinated manner, a problem for which there is currently no solution.
Adam Hadley, executive director of Tech Against Terrorism, says he and his colleagues have already archived tens of thousands of AI-generated images created by far-right extremists.
“This technology is used in two main ways,” Hadley tells WIRED. “Firstly, generative AI is used to create and manage bots that operate fake accounts, and secondly, just as generative AI is revolutionizing productivity, it is also used to generate text, images and videos through open-source tools. Both uses illustrate the significant risk that terrorist and violent content could be produced and disseminated at scale.”