Elon Musk will be called by MPs to testify about X’s role in spreading disinformation, in a parliamentary inquiry into the UK riots and the rise of false and harmful AI content, The Guardian has learned.
Senior executives at Meta, which runs Facebook and Instagram, and at TikTok are also expected to be called in for questioning as part of a social media inquiry by the House of Commons science and technology select committee.
The first hearings will take place in the new year, amid growing concern that the UK’s online safety laws risk being overtaken by the rapid advancement of technology and the politicization of platforms like X.
MPs will investigate the consequences of generative AI, which was used to create widely shared images, posted on Facebook and X, inciting people to join Islamophobic protests following the murder of three schoolgirls in Southport in July. They will also investigate Silicon Valley business models that “encourage the dissemination of content that can mislead and harm.”
“[Musk] has very strong views on multiple aspects of this,” said Chi Onwurah, Labour chair of the select committee. “I would certainly like the opportunity to question him to see… how he reconciles his promotion of free speech with his promotion of outright misinformation.”
Musk, the owner of X, was furious when he was not invited to a UK government international investment summit in October. Onwurah told The Guardian: “I would like to make up for that by inviting him to attend.”
Former Labour minister Peter Mandelson, a candidate to become the next UK ambassador to Washington, this week called for an end to the “feud” between Musk and the UK government.
“He’s kind of a technological, industrial and commercial phenomenon,” Mandelson said on the How to Win an Election podcast. “And, in my opinion, it would be unwise for Britain to ignore him. These disputes cannot continue.”
X did not respond when asked whether Musk would testify in the UK, although his appearance seems unlikely. The world’s richest man is preparing to take a senior role in the Trump White House and has been highly critical of the Labour government, even weighing in on changes to inheritance tax on farms, saying on Monday that “Britain is going full Stalin.” During the riots that followed the Southport murders, he said: “Civil war is inevitable.”
The House of Commons inquiry comes amid fresh turbulence across the social media landscape as millions of X users move to Bluesky, a rival platform, with many migrating in protest over misinformation, the presence of once-banned users such as Tommy Robinson and Andrew Tate, and updated terms of service that allow the platform to train its AI models on user data.
Keir Starmer said on Tuesday he had “no plans” to personally join Bluesky or for government departments to open official accounts. The prime minister told reporters at the G20 summit in Brazil: “The important thing for a government is that we are able to reach as many people and communicate with as many people as possible, and that is the only test for all of this as far as I’m concerned.”
After Musk was not invited to the UK government’s investment summit, he said: “I don’t think anyone should go to the UK when they are releasing convicted pedophiles in order to jail people for social media posts.”
One person jailed after the riots was Lucy Connolly, who posted on X: “Mass deportation now, set fire to all the fucking hotels full of bastards for all I care.” She was convicted under the Public Order Act for publishing material intended to incite racial hatred. X found that the post did not violate its rules against violent threats.
Onwurah said the investigation would attempt to “get to the bottom of the links between social media algorithms, generative AI and the spread of harmful or false content.”
It will also examine the use of AI to complement search engines such as Google, which was recently found regurgitating false and racist claims that people in African countries have low average IQs. Google said the AI summaries containing the claims violated its policies and had been removed.
After the Southport murders on July 29, misinformation spread across social media, with accounts with more than 100,000 followers falsely naming the alleged attacker as a Muslim asylum seeker.
Ofcom, the UK communications regulator, has already concluded that some platforms “were used to spread hate, provoke violence against racial and religious groups and encourage others to attack and burn down mosques and asylum accommodation”.
Next month, Ofcom will publish rules on illegal harms under the Online Safety Act, which are expected to require social media companies to prevent the spread of illegal material and mitigate safety risks, including offences such as stirring up violence or inciting hatred, and false communications intended to cause harm.
Companies will need to remove illegal material once they become aware of it and address safety risks in the design of their products.