In Facebook’s long-standing effort to be remembered as anything but the biggest misinformation megaphone in history, it has used a number of strategies, from spinning its own misleading PR stories to actual UI changes. Today it announced a new tactic: Not only will messages containing misinformation be made less visible, but so will the individual users who share them.
For years, the social giant has engaged in fact-checking partnerships designed to discourage the spread of viral disinformation, using the results of those checks to label offending posts rather than remove them. In some cases, it has taken small steps to hide content that turns out to be false or polarizing, such as ending recommendations for political groups during the 2020 elections. Until now, though, users could post whatever they wanted without significant consequences. No longer!
“Starting today, we will reduce the distribution of all newsfeed posts from someone’s Facebook account if they repeatedly share content rated by one of our fact-checking partners,” the company wrote in a press release. While individual posts flagged as bogus were already downgraded in News Feed rankings, users who regularly share misinformation will now see all of their content demoted in the feed’s endless scroll.
It remains to be seen exactly what the tangible impact of this expanded enforcement will be. While individual Facebook users were previously exempt from this kind of penalty, Instagram users were not; nevertheless, vaccine misinformation has continued to spread on the photo-sharing app. As advanced as its systems are, and as I have argued before, Facebook is simply too big to police itself.