
Why Musk’s turmoil shows the limits of social media laws

A tweet from Elon Musk that reads: “Only people on our CSE team have seen those images. For now, we will remove those posts and reinstate the account.”

What can the UK government do about Twitter? What ought it to do? And does Elon Musk care either way?

The billionaire owner of the social network, now officially known as X, has had a fun week stirring up unrest on his platform. Aside from his own posts – a mix of low-effort memes that look like they’re straight out of 8chan and faux-concerned reposts from far-right personalities – the platform at large briefly became a crucial part of organising the mess, alongside the other two of the three Ts: TikTok and Telegram.

Everyone agrees that something needs to be done. Bruce Daisley, former Twitter VP for EMEA, suggests personal liability:

In the short term, Musk and his fellow executives must be reminded of the criminal liability they bear for their actions under existing laws. Britain’s Online Safety Act 2023 must be strengthened with immediate effect. Prime Minister Keir Starmer and his team should reflect on whether Ofcom – the media regulator that seems to be continually challenged by the output and behaviour of outlets such as GB News – is in a position to deal with the whirlwind actions of people like Musk. In my experience, the threat of personal sanction is far more effective on executives than the risk of corporate fines. Were Musk to continue to stir up unrest, a warrant for his arrest might initially look performative, but as an international jet-setter he would find it had the effect of focusing his mind.

Last week, London Mayor Sadiq Khan put forward his own proposal:

“I think the government has quickly realised that the Online Safety Act needs to be amended,” Khan said in an interview with the Guardian. “I think what the government should quickly do is check whether it is fit for purpose. I think it is not.”

Khan said there were “things that responsible social media platforms could do” but added: “If they don’t clean up their own act, regulation will come.”

Ewan McGaughey, a law professor at King’s College London, gave me a more specific suggestion about what the government could do when I spoke to him on Monday. The Communications Act 2003 underpins much of Ofcom’s powers, he says, and is used to regulate live radio and television. But the text of the act does not limit it to just those media:

If we just look at the existing law, Ofcom could act alone, because it already has the power to regulate online media content: section 232 says that a “television licensable content service” includes distribution “by any means involving the use of an electronic communications network”. Ofcom could choose to assert its powers, but this is highly unlikely, because it knows it would face challenges from tech companies, including those fuelling riots and conspiracy theories.

Even if Ofcom or the government were unwilling to reinterpret the old law, he added, it would take only one simple change to bring Twitter under the much stricter regime of broadcasting regulation:

There is no difference, for example, between Elon Musk posting videos on X about so-called two-tier policing, or posts about “detention camps”, or that “civil war is inevitable”, and ITV or Sky or the BBC broadcasting news… The Online Safety Act is completely inadequate, as it is written only to stop “illegal” content, which on its own does not cover statements that are false or even dangerous.

The law of keeping your promises

Police respond to rioters, who had been encouraged by social media posts, in Middlesbrough this month. Photograph: Gary Calton/The Observer

It’s strange to feel sorry for an inanimate object, but I wonder if the Online Safety Act is getting a bit of a rough deal, having only just come into force. The act, a mammoth piece of legislation with more than 200 separate clauses, was passed in 2023, but most of its changes will only take effect once Ofcom completes a laborious consultation process and draws up codes of practice.

Meanwhile, the law provides only a handful of new criminal offences, including bans on cyberflashing and on taking “upskirt” photos. Two of the new offences have been put to the test this week, after parts of the old offence of malicious communications were replaced by the more specific offences of threatening and false communications.

But what if things had gone more quickly and Ofcom had been up and running? Would anything have changed?

The Online Safety Act is a curious piece of legislation: an attempt to rein in the internet’s worst impulses, written by a government that was simultaneously trying to position itself as the pro-free speech side in an escalating culture war, and enforced by a regulator that emphatically did not want to end up ruling on individual social media posts.

The result could be described as an elegant attempt to thread a needle or an ungainly botch job, depending on who you ask. The Online Safety Act does not make anything new illegal to post online, but it does oblige social media companies to have specific codes of conduct and to enforce them consistently. For some types of harm – such as content promoting self-harm, racial abuse or incitement to racial hatred – major services have a duty at least to offer adults the option not to view such content, and to prevent children from viewing it. For illegal material, from child abuse images to threatening or false communications, new risk assessments are required to ensure that companies are actively working to combat it.

It’s easy to understand why the law provoked such a huge shrug when it was passed. Its main result is a new mountain of paperwork, requiring social networks to prove they are doing what they already do: trying to moderate racist abuse, trying to tackle child abuse images, and trying to enforce their terms of service.

The defence of the law is that it works less as legislation to force companies to behave differently, and more as something that allows Ofcom to hit them over the head with their own promises. The easiest way to get fined under the Online Safety Act (and, taking a cue from the GDPR, those fines can amount to a hefty 10% of global turnover) is to loudly insist to your users that you are doing something to address a problem on your platform, and then do nothing.


Think of it this way: most of the law is designed to address a hypothetical adversary, the tech CEO who stands up at an inquiry and solemnly intones that the horrible behaviour they’re seeing is against their terms of service, before returning to the office and doing nothing at all about the problem.

The problem for Ofcom is that, well, multinational social networks aren’t actually run by cartoon villains who ignore their legal departments, override their moderators, and blithely impose one set of terms of service for their friends and another for their enemies.

Except one.

Do as I say, not as I do

Twitter, under Elon Musk, has become the perfect test case for the Online Safety Act. On paper, the social network is a relatively normal one. It has terms of service that prohibit much the same range of content as other major networks, though they are slightly more permissive when it comes to pornography. It has a moderation team that uses a combination of automated and human review to remove objectionable content, offers an appeals process for those who think they have been treated unfairly, and doles out escalating penalties for violations that ultimately lead to account bans.

But there is another level to how Twitter works: what Elon Musk says, goes. To give just one example: last summer, a popular rightwing influencer reposted child sexual abuse images whose creator had been sentenced to 129 years in prison. It was not clear what his motivation for doing so was, but the account was instantly suspended. Then Musk stepped in:

Photograph: X.com

In theory, Twitter’s terms of service probably ban many of the worst posts related to the unrest in Britain. “Hateful conduct” is prohibited, as is “inciting, glorifying or expressing a desire for violence.” In practice, the rules appear to be inconsistently enforced. And that’s the point at which Ofcom could start getting very pushy with Musk and the company he owns.

The wider TechScape

Is love for AI healthy? Illustration: Thomas Burden/The Observer
