
“It’s fine, everyone else is doing it”: How do we address the role social media violence played in the UK riots?


Among those quickly convicted and sentenced last week for their part in the racist riots was Bobby Shirbon, who had left his 18th birthday party at a bingo hall in Hartlepool to join a mob that roamed the streets of the town, attacking houses believed to be occupied by asylum seekers. Shirbon was arrested for smashing windows and throwing bottles at police. He was sentenced to 20 months in prison.

During his arrest, Shirbon claimed that his actions were justified by their ubiquity: “It’s okay,” he told officers, “everyone’s doing it.” Of course, that has been a constant claim by those caught up in acts of mass violence over the years, but for many of the hundreds of people now facing significant prison sentences, the “defence” has a sharper resonance.

Shirbon was distracted from his birthday celebration by alerts on his social media. Some of this was perhaps misinformation about the tragic events in Southport; but attached and embedded in that would have been the video clips and snippets that quickly became the context-free catalyst for the spreading violence.

Bobby Shirbon left his birthday party in Hartlepool to go to the scene of a riot after receiving alerts on social media.
Photograph: Cleveland Police/PA

Anyone with a phone will probably have watched these videos with mounting horror over the past week: the video of racists stopping cars at makeshift checkpoints in Middlesbrough; the one of the lone black man being attacked in a park in Manchester; the one of the drinker outside a pub in Birmingham being set upon by a revenge-minded gang. The visceral evidence of violence – a real-time sense of barbarity suddenly normalised – is, for some, the essential spark to take to the streets: “everyone else is doing it”. In that sense, most of us now carry the triggers of Kristallnacht in our pockets.

Over the past week, I have been reading that quaint document from another era, the many pages of the BBC’s rigorous editorial guidelines. As for the depiction of violence, it is worth recalling what is permitted for our national broadcaster: “When showing real-life violence,” the guidelines state, “we must strike a balance between the demands of accuracy and the dangers of causing undue distress.” Particular editorial care must be taken with “violence that may reflect personal experience – for example, domestic violence, pub fights, football hooliganism,” and “we must ensure that verbal or physical violence that children can easily imitate… is not broadcast in programmes shown before the watershed.”

Of course, there is no watershed on social media. Nor any effort, in the pursuit of anonymous clicks, to strike a balance between accuracy and distress. Quite the opposite. Entire YouTube channels and X accounts with hundreds of thousands of followers are dedicated to providing a constant, daily stream of the most graphic gang fights, school brawls and road rage from around the world. One of the first things Elon Musk pushed when he bought Twitter, after firing most of its moderators, was a swipe-up feature that automatically served up video content. The platform was inundated with complaints from people who were inadvertently confronted with scenes of beatings and murders.

Fast forward a couple of years: if you showed any interest in the events of the past week, you would probably have found your timeline immediately filled with the most disturbing bits of violence, including an unrelated machete fight in Southend, framed in the most incendiary terms by political agitators (among them Musk himself, who seemed intent on promoting the idea of a British “civil war” to his 193 million followers).

Elon Musk seems determined to promote the idea of “civil war” on his own social media platform, X. Photograph: Julia Nikhinson/AP

There is a reason that, in independently regulated media, images and films of such events are required to be contextualised and pixelated, and drip-fed into news reports piecemeal. More than thousands of words of reporting, these images saturate our imaginations. The unregulated flow of them, chosen for their graphic nature, shared for outrage or for laughs, has consequences that come as no surprise to those who have studied the issue closely.

Dr Kaitlyn Regehr is co-author of Safer Scrolling, a large-scale study published this year on how social media “gamifies” hate and misogyny among young people. She suggests: “The fact is that social media companies are in the business of selling attention. There have been numerous whistleblowers who have come out of these companies, and also research, including my own, that points to the fact that algorithms prioritise harm and misinformation because it is far more exciting and attention-grabbing than the truth.”

Keir Starmer has spoken in recent days about how the forthcoming Online Safety Act, due to come into force next year, may need to be strengthened in light of last week’s events. Regehr, who advised on the legislation, is in no doubt: “This is not a discussion about freedom of speech. We are talking about the way content is distributed, fed and prioritised by algorithms. There are millions and millions of posts, and the algorithm decides the 100 we see.” Regulators, she suggests, at the very least need to understand how those algorithms work.

Regehr agrees that, in this context, it would be valuable to examine the recent social media feeds of those convicted of racist violence last week, to see the patterns in what they were watching. “We need to make that link clearer to policymakers and the general public,” she says, so that “it can be understood as a much more widespread and systemic problem, which I think is reaching an existential crisis.”

The focus of this crisis is widely assumed to be deliberate misinformation; research suggests this neglects a critical component: the way that misinformation is routinely attached to the most graphic video content.


For the past seven years, Shakuntala Banaji, a professor of media culture and social change at the London School of Economics, has worked with researchers studying the ways in which sharing short-form video clips has been a contributing factor to racial violence, lynchings and pogroms around the world. “We watch a lot of TikToks,” Banaji says. “We’ve watched a lot of Instagram reels. And we’ve all had to go to therapy afterwards… It’s absolutely degrading and repulsive.”

The group is collecting and studying the impact of thousands of videos like those that circulated last week: brutal street attacks with very little, or deliberately false, contextualisation. The work has yielded some surprising findings. One is that the audience most susceptible to this type of content is not teenagers and young adults, but middle-aged, middle-class viewers.

The wider political context is key. “What we found very interesting was that in some countries the same kind of graphic content was circulating, but it didn’t lead to street violence,” Banaji says. The key component in places where racist violence did occur, she suggests, was the political framing of the material. “In India, in Myanmar, in Bolsonaro’s Brazil, and in the UK after Brexit – where we saw a massive rise in Islamophobic attacks – the crucial difference was not the stance the government took in trying to regulate the internet, but the tone it took towards the groups that were being targeted.”

Banaji’s research concludes that there is a “kind of triangle… that makes this so dangerous. Only one part of it is the content of the media. Equally important is, first, how the violence is subtitled and edited, and second, what the mainstream media and politicians say about that content, tacitly or explicitly.” In these terms, she believes that attempts to police these platforms, particularly by political figures who also seek to use them to stoke division, can only backfire. Fully independent regulation, she argues, allied with political rhetoric that rejects racism and inflammatory comment, would slowly take power away from the algorithms.

Regehr agrees that these changes cannot come soon enough. “Almost everything we consume, including terrestrial television, legitimate journalism, food, drugs and medicines, is regulated,” she says. “Yet social media remains an unregulated space. I think we hide behind the idea that the technology is still new, that we are still working it out. But the world wide web was launched 30 years ago. Almost half the population has never lived without it.”

The consequences, last week, were all around us.
