Bots were the main driver behind the spread of disinformation about COVID-19 online, a new study suggests.
Researchers looked at the sharing of several links related to the pandemic in more than 300,000 posts in Facebook groups.
They measured bot activity by examining the timing of when links to misinformation were shared in these groups: the same link posted to multiple groups just seconds apart is a telltale sign of bot activity.
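The timing heuristic described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual method: the 10-second window and the data layout are assumptions made here for clarity.

```python
from datetime import datetime, timedelta

# Hypothetical cutoff; the study's actual threshold is not stated here.
FAST_SHARE_WINDOW = timedelta(seconds=10)

def flag_coordinated_shares(events, window=FAST_SHARE_WINDOW):
    """Flag a link as likely bot-shared if it appears in two or more
    distinct groups within `window` of each other.

    `events` is a list of (group_id, timestamp) pairs for one link.
    """
    events = sorted(events, key=lambda e: e[1])
    for (g1, t1), (g2, t2) in zip(events, events[1:]):
        # Two different groups hit within seconds of each other is the
        # coordination signal the researchers describe.
        if g1 != g2 and (t2 - t1) <= window:
            return True
    return False

shares = [
    ("group_a", datetime(2021, 6, 1, 12, 0, 0)),
    ("group_b", datetime(2021, 6, 1, 12, 0, 4)),   # 4 seconds later, different group
    ("group_c", datetime(2021, 6, 1, 15, 30, 0)),  # hours later, not suspicious
]
print(flag_coordinated_shares(shares))  # True: groups a and b within 10 seconds
```

Sorting by timestamp first means only consecutive shares need comparing, so a burst across many groups is caught by any one adjacent pair.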
The team — led by the University of California San Diego in collaboration with researchers from George Washington University and Johns Hopkins University — is asking Facebook and other social media giants to tighten restrictions and limit the spread of misinformation.
Researchers found that much of the misinformation about Covid-19, masks and the vaccines was being spread through social media bot accounts.
“The coronavirus pandemic has led to what the World Health Organization has called an ‘infodemic’ of misinformation,” lead author Dr John Ayers, a scientist specializing in public health surveillance at the University of California, San Diego, said in a statement.
But bots — such as those used by Russian agents during the 2016 US presidential election — have been overlooked as a source of disinformation about COVID-19.
One of the links the researchers tracked led to a Danish study that was inconclusive about whether wearing a mask reduces the transmission of COVID-19.
The study was misinterpreted and has been used by many on social media, especially Facebook, as a source of misinformation.
Researchers found that the post was often shared by multiple accounts to multiple groups within seconds of one another, a sign that the accounts sharing it were bots operating in the same network.
Nearly 40 percent of the times the post was shared on Facebook, it happened in groups the researchers flagged as having high bot activity.
One in five of those shares misrepresented the study's results, claiming the researchers had determined that masks were harmful to their wearers, a conclusion the study never drew.
The study found that Facebook groups with detected bot activity were 2.3 times more likely to share the false claim that masks harm their wearers.
“Bots also appear to be undermining critical public health communication,” said Brian Chu, study co-author and medical student.
“In our case study, bots mischaracterized a prominent publication from a prestigious medical journal to spread misinformation.”
“This suggests that no content is safe from the dangers of weaponized disinformation.”
The researchers are asking Facebook and other social media giants to tighten restrictions on the spread of disinformation.
They believe companies like Facebook can readily detect and remove false information spread by bots, since the researchers were able to identify much of the misinformation and many of the bot-active groups themselves.
Researchers also fear that bots could manipulate the algorithms used by these companies, as the massive sharing of these stories by bots could lead the algorithm to think they are more popular than they are, and boost them on user feeds.
“Our work shows that social media platforms can detect and therefore remove these coordinated bot campaigns,” said Dr. David Broniatowski, associate director of the GW Institute for Data, Democracy, and Politics, and co-author of the study.
“Efforts to remove rogue bots from social media platforms should become a priority for lawmakers, regulators and social media companies who have instead focused on individual bits of misinformation from ordinary users.”
However, not all researchers agree.
Kamran Abbasi, editor-in-chief of The BMJ, one of the oldest medical journals in England, wrote in an op-ed that social media platforms censoring these stories can be dangerous.
“It looks like 2020 is Orwell’s 1984, where the limits of public discourse are set by billion-dollar corporations (rather than a totalitarian regime) and secret algorithms coded by unidentified employees,” Abbasi wrote, referring to the possibility of Facebook censoring or labeling stories about the Danish study as ‘misinformation’.
“Where is Facebook’s responsibility for the lies and harmful misinformation it has spread on controversial topics such as mental health and suicides, minorities and vaccines?
“Facebook, in particular, claims to allow free speech on its platform, but acts selectively, seemingly without logic, consistency or transparency.
“Such selective treatment of facts and opinions promotes hidden agendas and manipulates the public.”
Misinformation about Covid-19 has spread around the world as fast as the virus.
Last week, prominent feminist writer Naomi Wolf was suspended from Twitter after a series of posts spread misinformation about the Covid-19 vaccines.
Recent claims she made on Twitter included that vaccines are software platforms that can receive uploads, and that wastewater from vaccinated individuals could be hazardous to drinking water supplies; neither claim has any scientific backing.
Facebook, Instagram and other platforms have also added features to combat vaccine misinformation by automatically linking information about the vaccines to every post made about the injections.
Facebook has even said it will outright remove certain posts that make unfounded claims about the vaccines.
The study will be published Monday in JAMA Internal Medicine.