
TikTok’s algorithm is very sensitive and could lead you into a hole of hate before you know it

TikTok’s algorithm works in mysterious ways, but a Guardian Australia experiment on a blank account shows how quickly a breaking news story can lead users down a conservative, anti-immigration Christian rabbit hole.

Last week we reported on how Facebook and Instagram algorithms are drawing young people into the manosphere. This week, we explore what happens when TikTok’s algorithm kicks in on a blank account with no interactions like liking or commenting.

In April, Guardian Australia created a new TikTok account on a completely blank smartphone linked to a new, unused email address. The profile was a generic John Doe, set up as a 24-year-old man. We checked the feed every two weeks.

At first, it was difficult to identify a clear theme in the videos the app was serving. Then, on April 15, came the stabbing at the church in Wakeley.

For the first two days of the experiment, TikTok offered generic content about Melbourne, where the phone was located, along with videos about iPhone hacks — typical content one might expect on TikTok as an iPhone owner.

Following the April attack on him, videos of Mar Mari Emmanuel’s conservative Christian sermons began appearing on the For You page of the blank account created by Guardian Australia. Photo: supplied

On the third day, news content began appearing on TikTok, coinciding with the stabbing of Bishop Mar Mari Emmanuel at Christ the Good Shepherd Assyrian Church in the Sydney suburb of Wakeley.

It wasn’t the stabbing video itself, but rather videos of evocative, conservative Christian sermons by Emmanuel. Watching them appeared to trigger TikTok’s algorithm: more and more of his sermons were shown, and conservative Christian videos began appearing one after another.

Three months later, the algorithm is still showing conservative Christian content, along with pro-Pauline Hanson, pro-Donald Trump, anti-immigrant and anti-LGBTQ videos, including a video suggesting drag queens be put in a wood chipper.

As in the experiment that ran in parallel on Instagram and Facebook accounts, no posts were liked or commented on. But unlike that experiment, TikTok’s algorithm appears to be far more sensitive to even the slightest interaction, including time spent watching a video, and will keep serving similar content unless the user indicates they are not interested.

“The more someone searches for or engages with any type of content on TikTok, the more they’ll see,” a TikTok spokesperson said. “But at any time, you can completely refresh your feed or let us know you’re not interested in a particular video by long-pressing the screen and selecting ‘not interested.’”

Jing Zeng, an adjunct professor of computational communication sciences at the University of Zurich, says there’s a lot of randomness in TikTok’s “for you” algorithm, and early interactions can have strong implications for what you see.

“If your first pro-Trump video ‘made you watch,’ then the ‘for you’ algorithm may try more of that type of content.”
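TikTok does not publish its ranking system, but the dynamic Zeng describes can be illustrated with a toy sketch: if the fraction of a video actually watched is treated as an implicit signal of interest, even a blank account’s first few completed views can quickly dominate what gets recommended next. The class, weights and topic labels below are purely illustrative assumptions, not TikTok’s actual code.

```python
# Illustrative sketch only -- not TikTok's real ranking system.
# Assumption: watch-time ratio acts as implicit feedback, and candidate
# videos are scored by the viewer's accumulated interest per topic.
from collections import defaultdict
import random

class ToyFeedRanker:
    def __init__(self, decay=0.95, explore_rate=0.2):
        self.interest = defaultdict(float)  # topic -> accumulated interest
        self.decay = decay                  # older signals fade slowly
        self.explore_rate = explore_rate    # chance of a random "exploration" pick

    def record_view(self, topic, watch_seconds, video_length):
        """Treat the fraction of a video actually watched as implicit feedback."""
        completion = min(watch_seconds / video_length, 1.0)
        for t in self.interest:
            self.interest[t] *= self.decay
        self.interest[topic] += completion

    def mark_not_interested(self, topic):
        """Explicit negative feedback outweighs passive watch-time signals."""
        self.interest[topic] = -1.0

    def next_video(self, candidates):
        """Pick mostly by interest score, with some random exploration."""
        if random.random() < self.explore_rate:
            return random.choice(candidates)
        return max(candidates, key=lambda c: self.interest[c["topic"]])

# A blank account that merely finishes two sermon videos starts getting more of them.
ranker = ToyFeedRanker()
ranker.record_view("christian_sermon", watch_seconds=60, video_length=60)
ranker.record_view("christian_sermon", watch_seconds=55, video_length=60)
ranker.record_view("iphone_hacks", watch_seconds=5, video_length=60)
candidates = [{"topic": "christian_sermon"}, {"topic": "iphone_hacks"}, {"topic": "news"}]
print(ranker.next_video(candidates))  # usually the sermon content
```

Under these assumed weightings, two fully watched sermons outweigh a five-second glance at anything else, which is one simple way early, passive interactions could steer an otherwise blank feed.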

Jordan McSwiney, a senior researcher at the University of Canberra’s Centre for Deliberative Democracy and Global Governance, says TikTok’s approach differs from Facebook and Instagram because it has a more active recommendation system, designed to keep users engaging with videos one after another. He says Meta is introducing this into its Reels product, which has many of the same features as TikTok.

An example of the content displayed on TikTok’s For You page on the blank account set up by Guardian Australia. Photo: supplied

“We know that these platforms don’t operate with any kind of social license. They’re not like a public broadcaster or anything like that. They’re beholden to one thing, and that’s their bottom line,” he says.

“Their modus operandi is not to facilitate nuanced debate or promote a healthy democratic public sphere, but to serve up content that people click on again and again, to keep people engaged on the app, to keep people scrolling, because that’s advertising revenue.”

McSwiney says governments have a role to play in forcing tech platforms to be more transparent about how algorithms work, as they currently exist in a “black box,” with limited ability for researchers to see how they work.

He says the platforms cannot dismiss concerns about what they serve up as merely a reflection of the society in which they operate.

“I just don’t think we should let multi-billion dollar companies get away with it like that. They have a social responsibility to ensure that their platforms don’t cause harm; their platforms shouldn’t promote sexist content or racist content.”
