
Foreign influence campaigns also don’t know how to use AI yet

Today, OpenAI released its first threat report, which details how actors from Russia, Iran, China, and Israel have attempted to use its technology for foreign influence operations around the world. The report names five networks that OpenAI identified and shut down between 2023 and 2024. In it, OpenAI reveals that established networks such as Russia's Doppelganger and China's Spamouflage are experimenting with ways to use generative AI to automate their operations. They're not very good at it, either.

And while it’s a modest relief that these actors haven’t mastered generative AI to become unstoppable forces of disinformation, it’s clear that they are experimenting, and that alone should be concerning.

OpenAI's report reveals that influence campaigns are running into the limits of generative AI, which does not reliably produce good copy or code. It struggles with idioms (which make language sound reliably human and personal) and sometimes with basic grammar too, so much so that OpenAI named one network "Bad Grammar." The Bad Grammar network was so sloppy that it once revealed its true identity: "As an AI Language Model, I am here to help and provide the desired feedback," it posted.

One network used ChatGPT to debug code that would allow it to automate posts on Telegram, a chat app that has long been a favorite of extremists and influence networks. This sometimes worked well, but at other times it led to the same account posting as two separate characters, giving away the game.

In other cases, ChatGPT was used to create code and content for websites and social networks. Spamouflage, for example, used ChatGPT to debug code and create a WordPress website that published stories attacking members of the Chinese diaspora who were critical of the country's government.

According to the report, AI-generated content failed to break out of the influence networks themselves and reach the general public, even when shared on widely used platforms such as X, Facebook, or Instagram. That was the case for campaigns run by an Israeli company apparently working for hire, which posted content ranging from anti-Qatar messaging to attacks on the BJP, the Hindu nationalist party that currently controls the Indian government.

Taken together, the report paints a picture of several relatively ineffective campaigns pushing crude propaganda, which would seem to allay fears that many experts have raised about this new technology's potential to spread misinformation and disinformation, particularly during a crucial election year.

But influence campaigns on social media often innovate over time to avoid detection, learning the platforms and their tools, sometimes better than the platforms' own employees. While these initial campaigns may be small or ineffective, they appear to still be in the experimental stage, says Jessica Walton, a researcher at the CyberPeace Institute who has studied Doppelganger's use of generative AI.

In her investigation, she found the network used seemingly real Facebook profiles to post articles, often on divisive political topics. "The real articles are written using generative AI," she says. "And mainly what they're trying to do is see what will fly, what Meta's algorithms will and won't be able to catch."

In other words, expect them to only get better from here.
