‘We’re doing our best to kill you all’: Facebook failed to detect death threats against election workers ahead of midterm elections – while YouTube and TikTok suspended accounts – study shows
- A new study finds that 75% of death threat ads to US election workers submitted before midterms were approved by Facebook.
- Investigators from Global Witness and NYU used language from real death threats and found that TikTok and YouTube suspended the accounts that submitted the ads, while Facebook approved most of them.
- “It is incredibly alarming that Facebook has approved ads threatening election workers with violence, lynching and assassinations,” said Global Witness’ Rosie Sharpe.
- A Facebook spokesperson said the company remains committed to improving its detection systems.
Facebook failed to detect the vast majority of ads that explicitly called for violence or assassination of US election workers prior to the midterms, a new study finds.
The probe tested Facebook, YouTube and TikTok for their ability to flag ads containing ten real-life examples of death threats against election workers — including statements that people would be killed, hanged or executed and that children would be harassed.
TikTok and YouTube suspended the accounts set up to submit the ads. But Meta-owned Facebook approved nine out of 10 English-language death threats for publication and six out of 10 Spanish-language death threats — 75% of the total number of ads the group submitted for publication.
“It is incredibly alarming that Facebook has approved ads threatening election workers with violence, lynching and assassinations — amid growing real threats against these workers,” said Rosie Sharpe, a researcher at Global Witness, which worked on the investigation with the Cybersecurity for Democracy (C4D) team at the New York University Tandon School of Engineering.
The ads were submitted the day before or the day of the midterm elections.
According to the researchers, all death threats were “horrifyingly clear in their language” and all violate Meta, TikTok and Google advertising policies.
The researchers never actually ran the ads on Mark Zuckerberg’s social network — they deleted them immediately after Facebook’s approval, before publication — because they didn’t want to spread violent content.
Damon McCoy, co-director of C4D, said in a statement: “Facebook’s failure to block ads promoting violence against election workers endangers the safety of these workers. It’s disturbing that Facebook allows advertisers caught making threats of violence to continue buying ads. Facebook must improve its detection methods and ban advertisers who promote violence.”
The researchers had a few recommendations for Mark Zuckerberg’s company in their report:
- Urgently increase the content moderation capabilities and integrity systems deployed to reduce risk around elections.
- Routinely assess, mitigate and publicize the risks that their services pose to people’s human rights and other societal harms in all countries in which they operate.
- Include full details of all ads (including intended audience, actual audience, ad spend, and ad buyer) in the ad library.
- Publish their pre-election risk assessment for the United States.
- Allow verified independent auditing by third parties so they can be held accountable for what they say they do.
“This kind of activity threatens the security of our elections. But what Facebook says it does to keep its platform secure bears little resemblance to what it actually does. Facebook’s inability to detect hate speech and election misinformation — despite its public commitments — is a global problem, as Global Witness demonstrated this year in investigations in Brazil, Ethiopia, Kenya, Myanmar and Norway,” Sharpe said.
Proponents have long criticized Facebook for not doing enough to prevent the spread of misinformation and hate speech on the network — during elections as well as at other times of the year.
When Global Witness approached Facebook for comment, a spokesperson said: “This is a small sample of ads that are not representative of what people see on our platforms. Content that incites violence against election workers or anyone else has no place in our apps, and recent coverage has made it clear that Meta’s ability to handle these issues is better than that of other platforms. We remain committed to improving our systems.”