The politics of AI regulation became a little clearer this weekend, after an influential Labour think tank set out its framework for how the party should approach the subject.
From our story:
The policy document, produced by the centre-left think tank Labour Together, proposes a legal ban on dedicated nudification tools that allow users to generate explicit content by uploading images of real people.
It would also require developers of general-purpose AI tools and web hosting companies to take reasonable steps to ensure that they are not involved in producing such images or other harmful deepfakes.
Labour Together’s suggestions do not yet constitute party policy, but they highlight the sort of issues that Westminster pundits think a campaign can be built around. (If you want to read the tea leaves, Peter Kyle, the shadow technology minister, said he was “studying the proposals carefully.”)
Over the past few decades, technology has been a curiously apolitical area in the UK, with all parties agreeing on the vague idea that it is important to support British technology as an engine of growth and soft power, and little active campaigning beyond that.
Even as technology regulation has become a high-profile political concern, beginning with Theresa May’s government and the introduction of the Online Safety Act, debates about it have tended to be technocratic rather than partisan. Labour forced some votes on specific amendments to the bill, but when push came to shove it passed without opposition.
In hindsight, the most important battle over this bill took place within the Conservative Party itself, when one wing decided to attack the entire process as an attempt to ban “hurt feelings”, in part because of clauses intended to replace the old offence of “malicious communications” with more specific offences.
But this time things could be different. If Labour proposes a ban on nudification tools, it seems unlikely that the proposal will simply be co-opted by the Conservative Party. Instead, it could highlight a divide between the two parties’ concerns about AI, with Rishi Sunak leading the Conservatives to focus on Silicon Valley-related concerns about existential risk, and Labour focusing on misuse in the here and now.
“MrDeepFakes doesn’t represent tech”
I spoke to the paper’s authors, Kirsty Innes and Laurel Boxall, for the story, and was struck by how much they expected such a divide. “This kind of rapid response is lacking from the analogue Conservatives, who believe that AI is either a ‘mutant algorithm’ or a Silicon Valley toy that can be scaled up without concern for the impact on workers,” Innes said. “It took them seven years to get the Online Safety Act through parliament, and in the meantime the world has moved on.”
“We need to move beyond the idea that you’re either pro-innovation or pro-protection of the public interest – that it’s government versus business,” Innes added. “The vast majority of technology companies want to see their tools used wisely. The tech industry knows this is a problem – MrDeepFakes doesn’t represent it. So I think they’ll want to help us with that.”
The policy document also proposes a looser set of regulations for the broader technology sector that underpins AI. Web hosts, search engines and payment platforms would be required to ensure their customers do not facilitate the creation of “harmful deepfakes”, backed by fines from Ofcom. Critics, in turn, might object that such a policy could have a chilling effect: if “harmful” is in the eye of the beholder, it may be easier for a platform simply to ban all deepfake tools outright.
According to a survey by Control AI, a nonprofit that focuses on AI regulation, the British public is more supportive of banning deepfakes than almost anywhere else: 86% of people support such a ban. But even in Italy, where support was lowest, a comfortable majority was in favour, at 74%.
Deepfakes, cheapfakes and AI elections – join us live
Another proposal in the document that seems less likely to come to fruition is that all major parties commit not to use AI to create misleading content for their campaigns over the next nine months. Call me pessimistic, but I don’t think such a commitment would survive the acrimony we are about to see intensify across Britain – nor the value of a plausibly deniable social media campaign defaming your rivals.
Coincidentally, I will be hosting a Guardian Live event on this very topic next month. A panel of experts, including Katie Harbath of tech policy firm Anchor Change and Imran Ahmed of the Center for Countering Digital Hate, will join me to talk about what the next year might look like as two billion people vote in the first wave of elections that could plausibly be affected by generative AI.
It seems a given that we will see deepfakes and other AI-generated misinformation used as campaign tools, but it is less clear whether they will work. Are fake images and videos a sea change in the disinformation game, or simply an evolution of text-based lies and “cheapfakes” – a real image with a false or misleading caption?
I am more concerned about the effect of new technology on an already weakened public sphere. Twitter is a shell of its former self, Reddit is about to go public on the back of AI deals, Threads explicitly removes political conversations, and Google Search is full of AI-generated SEO spam. Where is the conversation actually going to happen? And how does campaigning work in this brave new world?
Robots
I don’t normally embed YouTube videos here, but Figure’s latest demo is so extraordinary that it’s worth sharing.
We’re definitely out of prediction season, but if I had to make one for the next 12 months, it would be this: what 2022 was for chatbots, 2024 will be for robots.
Robotics has traditionally been a difficult, slow and expensive field. But lessons learned from AI advances in recent years are starting to change that. If you can train systems in simulated worlds, command them with natural language, and then give them control of physical bodies, you can start to see the same rate of improvement that we’ve seen with large language models over the past five years.
And as I understand it, that’s exactly what’s happening.
If you would like to read the full version of the newsletter, subscribe to receive TechScape in your inbox every Tuesday.