Revealed: Meta approved political ads in India that incited violence

Meta, owner of Facebook and Instagram, approved a series of AI-manipulated political ads during India’s elections that spread disinformation and incited religious violence, according to a report shared exclusively with The Guardian.

Facebook approved ads that contained well-known insults toward Muslims in India, such as “let’s burn this vermin” and “Hindu blood is being spilled, these invaders must be burned,” as well as Hindu supremacist language and misinformation about political leaders.

Another approved ad called for the execution of an opposition leader, falsely claiming he wanted to “wipe out Hindus from India,” alongside an image of a Pakistani flag.

The ads were created and submitted to the Meta ad library (the database of all ads on Facebook and Instagram) by India Civil Watch International (ICWI) and Ekō, a corporate accountability organization, to test Meta’s mechanisms for detecting and blocking political content that could be inflammatory or harmful during the six weeks of elections in India.

According to the report, all ads “were created from real hate speech and misinformation prevalent in India, underscoring the ability of social media platforms to amplify existing harmful narratives.”

The ads came midway through voting, which began in April and would continue in phases until June 1. The elections will decide whether Prime Minister Narendra Modi and his Hindu nationalist Bharatiya Janata Party (BJP) government will return to power for a third term.

During its decade in power, Modi’s government has pushed a Hindu agenda that human rights groups, activists and opponents say has led to further persecution and oppression of India’s Muslim minority.

In these elections, the BJP has been accused of using anti-Muslim rhetoric and stoking fears of attacks on Hindus, who make up 80% of the population, to win votes.

During a rally in Rajasthan, Modi referred to Muslims as “infiltrators” who “have more children,” although he later denied this was directed at Muslims and said he had “many Muslim friends.”

Social media site X was recently ordered to remove a BJP campaign video accused of demonizing Muslims.

The report’s researchers submitted 22 ads in English, Hindi, Bengali, Gujarati and Kannada to Meta, of which 14 were approved. Three others were approved after minor adjustments that did not alter the overall provocative message. Once the ads were approved, the researchers deleted them immediately, before they could be published.

Meta’s systems were unable to detect that all approved ads featured AI-manipulated images, despite the company’s public promise that it was “dedicated” to preventing AI-generated or manipulated content from spreading on its platforms during the Indian elections.

Five of the ads were rejected for violating Meta’s community standards policy on hate speech and violence, including one that featured misinformation about Modi. But the 14 that were approved, which mostly targeted Muslims, also “broke Meta’s own policies on hate speech, intimidation and harassment, misinformation, and violence and incitement,” according to the report.

Ekō activist Maen Hammad accused Meta of profiting from the proliferation of hate speech. “Supremacists, racists and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of burning mosques and push violent conspiracy theories, and Meta will gladly take their money, no questions asked,” he said.

Meta also failed to recognize that the 14 approved ads were political or election-related, even though many targeted political parties and candidates opposing the BJP. Under Meta’s policies, political ads must go through a specific clearance process before being approved, but only three of the submissions were rejected on those grounds.

This meant that these ads could freely violate India’s election rules, which stipulate that all political advertising and promotion is prohibited in the 48 hours before the start of elections and during voting. All of these ads were uploaded to coincide with two phases of electoral voting.

In response, a Meta spokesperson said that people who would like to post ads about elections or politics “must go through the required authorization process on our platforms and are responsible for complying with all applicable laws.”

The company added: “When we find content, including ads, that violates our community standards or guidelines, we remove it, regardless of its creation mechanism. AI-generated content is also eligible to be reviewed and rated by our network of independent fact-checkers – once content is labeled ‘altered,’ we reduce its distribution. We also require advertisers around the world to disclose when they use AI or digital methods to create or alter an ad about a political or social issue in certain cases.”

A previous report by ICWI and Ekō found that “shadow advertisers” aligned with political parties, particularly the BJP, have been paying large sums of money to run unauthorized political ads on platforms during the Indian elections. Many of these real ads were found to endorse Islamophobic tropes and Hindu supremacist narratives. Meta denied that most of these ads violated its policies.

Meta has previously been accused of failing to stop the spread of Islamophobic hate speech, calls for violence and anti-Muslim conspiracy theories on its platforms in India. In some cases, the posts have led to actual cases of riots and lynchings.

Nick Clegg, president of global affairs at Meta, recently described the Indian election as “a huge, huge test for us” and said the company had done “months and months and months of preparation in India.”

Meta said it had expanded its network of local and third-party fact-checkers across platforms and was working in 20 Indian languages.

Hammad said the report’s findings had exposed the shortcomings of these mechanisms. “This election has proven once again that Meta has no plan to address the avalanche of hate speech and misinformation on its platform during this critical election,” he said.

“It can’t even detect a handful of violent AI-generated images. How can we trust them with dozens of other elections around the world?”
