OpenAI says Russian and Israeli groups used its tools to spread disinformation

OpenAI on Thursday published its first report on how its artificial intelligence tools are used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating in Russia, China, Israel and Iran.

Malicious actors used the company’s generative AI models to create and publish propaganda on social media platforms and to translate it into multiple languages. None of the campaigns gained traction or reached large audiences, according to the report.

As generative AI has become a booming industry, researchers and policymakers have grown widely concerned about its potential to increase both the quantity and the quality of misinformation online. AI companies like OpenAI, which makes ChatGPT, have tried with mixed results to allay these concerns and build guardrails into their technology.

OpenAI’s 39-page report is one of the most detailed accounts yet by an artificial intelligence company of the use of its software for propaganda purposes. OpenAI said its researchers found and banned accounts associated with five covert influence operations over the past three months, run by a mix of state and private actors.

In Russia, two operations created and disseminated content critical of the United States, Ukraine, and several Baltic nations. One of the operations used an OpenAI model to debug code and to create a bot that posted on Telegram. A Chinese influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated entire articles attacking the United States and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts that produced a range of content, including posts characterizing American student protests against Israel’s war in Gaza as anti-Semitic.

Several of the disinformation spreaders that OpenAI banned from its platform were already known to researchers and authorities. In March, the US Treasury sanctioned two Russian men allegedly behind one of the campaigns OpenAI detected, and Meta also banned Stoic from its platforms this year for violating its policies.

The report also highlights how generative AI is being incorporated into disinformation campaigns to improve certain aspects of content production, such as making posts more convincing in foreign languages, even though it is far from the only propaganda tool.

“All of these operations used AI to some extent, but none used it exclusively,” the report states. “Instead, the AI-generated material was just one of many types of content they published, alongside more traditional formats such as manually typed texts or memes copied from the Internet.”

While none of the campaigns had a notable impact, their use of the technology shows how malicious actors are finding that generative AI lets them scale up propaganda production. Writing, translating and publishing content can now be done more efficiently with artificial intelligence tools, lowering the barrier to entry for disinformation campaigns.

Over the past year, malicious actors have used generative AI in countries around the world to attempt to influence politics and public opinion. Deepfake audio, AI-generated images, and text-based campaigns have been used to disrupt election campaigns, leading to increased pressure on companies like OpenAI to restrict the use of their tools.

OpenAI stated that it plans to periodically publish similar reports on covert influence operations, as well as remove accounts that violate its policies.
