
Meta says it has dismantled about 20 covert influence operations in 2024


Meta has intervened to shut down about 20 covert influence operations around the world this year, it has emerged, although the technology company said fears of AI-powered election fakery had not materialized in 2024.

Nick Clegg, president of global affairs at the company behind Facebook, Instagram and WhatsApp, said Russia remained the number one source of adverse online activity, but told a briefing it was “surprising” how little AI was used to try to deceive voters in what was the busiest election year in history.

The former British deputy prime minister revealed that Meta, which has more than 3 billion users, had rejected just over 500,000 requests to generate images of Donald Trump, Kamala Harris, JD Vance and Joe Biden with its own artificial intelligence tools in the month leading up to US election day.

But the company’s security experts had to counter new operations using fake accounts to manipulate public debate for a strategic objective at a rate of more than one every three weeks. These incidents of “coordinated inauthentic behavior” included a Russian network that used dozens of fictitious Facebook accounts and news websites to target people in Georgia, Armenia and Azerbaijan.

Another was a Russia-based operation that used artificial intelligence to create fake news websites spoofing brands such as Fox News and the Telegraph in an attempt to weaken Western support for Ukraine, and used French-language fake news sites to promote Russia’s role in Africa and criticize that of France.

“Russia remains the number one source of covert influence operations we have disrupted to date, with 39 networks disrupted in total since 2017,” Clegg said. The next most frequent sources of foreign interference detected by Meta were Iran and China.

Assessing the effect of AI fakery after a wave of elections in 50 countries, including the United States, India, Taiwan, France, Germany and the United Kingdom, he said: “There were all kinds of warnings about the potential risks of things such as widespread deepfakes and AI-enabled disinformation campaigns. That’s not what we’ve seen in what we’ve monitored across all of our services. It appears that these risks did not materialize to a significant extent and that any such impact was modest and limited in scope.”

But Clegg warned against complacency, saying the relatively low impact of deepfakes, which use generative AI to manipulate video, voices and photographs, was “very, very likely” to change.

“It is clear that these tools will become more prevalent and we will see more and more synthetic and hybrid content online,” he said.

Meta’s assessment follows conclusions published last month by the Centre for Emerging Technology and Security, which stated that “misleading AI-generated content has shaped American election discourse by amplifying other forms of misinformation and inflaming political debates.” It said evidence was lacking about its impact on Donald Trump’s election victory.

It concluded that AI-driven threats began to damage the health of democratic systems in 2024 and warned that “complacency must not creep in” ahead of the 2025 elections in Australia and Canada.

Sam Stockwell, a research associate at the Alan Turing Institute, said artificial intelligence tools may have shaped election discourse and amplified harmful narratives in subtle ways, particularly in the recent US election.

“This included misleading claims that Kamala Harris’s rally was AI-generated, and unfounded rumors that Haitian immigrants were eating pets going viral with the help of AI-generated xenophobic memes,” he said.
