
Why the Fake News Trust Trap Could Be Your Downfall


It’s a wild world online, with misinformation and disinformation flying at warp speed. I’m halfway through writing a book on the history of fake news, so I’m well aware that people making things up isn’t new. What is new is the reach of those spreading it, whether their actions are deliberate or accidental.

Social media and the web in general changed the game for bad actors and made it easier for the rest of us to be scammed online without realizing it (see: the strange “Goodbye Meta AI” trend I wrote about this week for the Guardian). The rise of generative AI since the launch of ChatGPT in 2022 has also increased the risks. While early research suggests that our biggest fears about the impact of AI-generated deepfakes on elections are unfounded, the overall information environment is disconcerting.

Seeing is believing?

This is evident in data collected by the Behavioural Insights Team (BIT), a social purpose organization spun out of the UK government, and shared exclusively with me for TechScape. The survey of 2,000 UK adults highlights just how confusing today’s online Wild West is.

While 59% of those surveyed by BIT believe they can detect false information online, only 36% trusted that others could detect fake news.

That’s a problem for two reasons. One is our lack of confidence in other people’s ability to identify false stories. The other is the perception gap between our own capabilities and those of the general public. I suspect that if we actually measured how well people distinguish misinformation from truth, it would be closer to the lower number than the higher one. In short, we tend to think we are smarter than we are.

Don’t believe me? For my first book, YouTubers, I commissioned a survey from YouGov to see how well the public recognized the platform’s main figures. The YouGov team recommended that, among the real names, I add someone who did not exist as a kind of sense check to identify the proportion of people who were lying. A worrying number of respondents confidently said that they knew the person the pollsters had invented, and that they knew him well.

A swamp of misinformation

All of this matters because of the sheer scale of the false-information problem.

Three-quarters of BIT respondents said they had seen fake news in the last week, with some platforms seen as worse offenders than others. LinkedIn was rated the least bad (though it’s not entirely clear whether that’s because many avoid the platform owing to its reputation for being boring).

Either way, the findings make for uncomfortable reading. “Our latest research has added to growing evidence that social media users are overconfident in their ability to detect false information,” says Eva Kolker, head of consumer and business markets at BIT. “Paradoxically, this could be making people more susceptible to it.”

Bottom line: If you think you’re better than others at spotting fake news, you’re actually more likely to have lower defenses and run into trouble when you (inevitably) encounter it online.

What should be done?


Well, a start would be to train users to be more aware of the risks of fake news and the impact it can have when shared with their social circles. Things snowball quickly thanks to the mob mentality mediated by social media algorithms, as the Goodbye Meta AI post showed. That’s why it’s important to think twice and click once. (In this article I described better ways to combat threats to our data.)

But Kolker isn’t convinced that’s enough. “Many of our attempts to improve online safety have focused on improving the knowledge and capabilities of individual users,” she says. “While important, our research shows that there are inherent limits to the effectiveness of this approach.”

“We cannot simply rely on individuals to change their behavior. To truly combat misinformation, we also need social media platforms to take action and regulators and the government to step in to level the playing field.”


Is it time for an intervention?

The BIT has put together a series of recommendations for governments and social media platforms to try to combat misinformation and disinformation. The first is to flag posts containing false information as soon as they are detected, to raise public awareness before people share them. To Meta’s credit, that’s something it did with the Goodbye Meta AI trend, adding labels to posts pointing out that the information was incorrect.

The BIT also recommends that platforms be stricter about how much legal but harmful content they surface. Conspiracy theories fester in a polluted information environment, and the BIT seems to suggest that the standard Silicon Valley approach – that sunlight is the best disinfectant – is not enough.

Except in one case: its third recommendation is periodic public rankings of how much false or harmful content appears on each platform.

It’s hard to say whether any of this will work. I’ve been reviewing the research lately, including studies and surveys like BIT’s, and each promising intervention seems to come with drawbacks of its own. But if the viral Goodbye Meta AI trend shows us anything, it’s that we can’t simply assume people are able to distinguish what’s real from what isn’t.

Chris Stokel-Walker’s most recent book is How AI Ate the World. His next book, about the history of fake news, will be out in spring 2025.

The wider TechScape

Musau Mutisya uses the PlantVillage app to diagnose a corn plant on his farm in Kenya. Photograph: Stephen Mukhongi/The Guardian
