Russia is using generative artificial intelligence in online deception campaigns, but its efforts have been unsuccessful, according to a Meta security report published Thursday.
Facebook and Instagram’s parent company found that AI-powered tactics so far “only provide incremental productivity and content generation gains” for malicious actors, and that Meta has been able to disrupt the deceptive influence operations it has uncovered.
Meta’s efforts to combat “coordinated inauthentic behavior” on its platforms come as fears grow that generative AI is being used to deceive or mislead people in elections in the United States and other countries.
Russia remains the main source of “coordinated inauthentic behavior” via fake Facebook and Instagram accounts, David Agranovich, Meta’s director of security policy, told reporters.
Since Russia’s invasion of Ukraine in 2022, those efforts have focused on undermining Ukraine and its allies, the report said.
As the US election approaches, Meta expects Russian-backed online deception campaigns to target political candidates who support Ukraine.
Facebook has been accused for years of being used as a powerful platform for election disinformation. Russian agents used Facebook and other US-based social media to stoke political tensions in multiple US elections, including the 2016 election won by Donald Trump.
Experts fear an unprecedented deluge of misinformation from bad actors on social media due to the ease of using generative AI tools like ChatGPT or the Dall-E image generator to create on-demand content in seconds.
According to the report, AI has been used to create images and videos, to translate or generate text, and to produce fake news stories or summaries.
When Meta hunts for hoaxes, it focuses on how accounts behave rather than on the content they post.
Influence campaigns tend to span a variety of online platforms, and Meta has noticed posts on X, formerly Twitter, being used to make fabricated content appear more credible. Meta shares its findings with X and other internet companies and says coordinated action is needed to thwart disinformation.
“As far as Twitter (X) is concerned, they are still going through a transition,” Agranovich said when asked if Meta believes X will act on deception tips. “A lot of the people we have dealt with in the past have already left.”
X has gutted trust and safety teams and scaled back content moderation efforts once used to control misinformation, turning it into what researchers call a breeding ground for disinformation.