So here’s a thought. Instead of continuing with a technology that its key inventors say could soon have the power to kill people, how about not continuing with it?
This radical idea was prompted by a warning from the man who set up the Prime Minister’s AI task force. Matt Clifford commented, “You can have really very dangerous threats to people that can kill a lot of people, not all people, simply from where we expect models to be two years from now.” On reflection, maybe I’m overreacting. His full remarks were more nuanced, and it’s not all people, by the way. Only a lot of them.
But similar apocalyptic warnings come from leading figures in AI’s development, writing under the auspices of the Center for AI Safety. In an admirably succinct statement, a who’s who of the AI industry stressed: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The heads of Google DeepMind, OpenAI and countless others have taken time off from inventing the technology that could wipe out all human life to warn the rest of us that something really must be done to prevent it.
And these guys are supposed to be the geniuses? In potting sheds across England there are slightly wacky men who have invented a new machine that may be brilliant but could also burn their house down, and most of them have worked out for themselves that the machine might not, on reflection, be a good idea.
This is where the small-fry inventors went wrong. Instead of figuring out the risks for themselves, perhaps they should have raised several billion pounds of venture capital funding and then written a letter to the council warning that they really need to be regulated.
I recognize, to be serious for a moment, that great things are expected from artificial intelligence, many of which do not involve the extermination of the human race. Many argue that AI could play a central role in achieving a carbon-free future, although a cynic might note that wiping out humanity would also get us there.
Equally important, the advances already made cannot be uninvented. But AI chatbots are already making up information – or “hallucinating,” as the developers prefer to call it – and the inventors aren’t entirely sure why. So there seems to be a case for slowing down and ironing out that little wrinkle before we move on to, you know, extinction-level technology.
A generous view of the technology leaders calling for a leash is that they themselves are responsible, and that it is the other, irresponsible actors they are worried about. They’d like to do more but, you see, the guys at Google can’t let themselves be beaten by the guys at Microsoft.
So these warnings are an attempt to push politicians and regulators into action, which is damn sporting of them, given that world leaders have such an excellent track record of responding cooperatively and intelligently to extinction-level threats. I mean, come on. They testified before the US Congress. I don’t think we can ask for much more. And the UK government is now on the case, which would be more reassuring if it weren’t still struggling to process asylum claims in under 18 months.
With any luck, the warnings will indeed prompt governments to take meaningful action. Perhaps this will lead to global standards, international agreements and a moratorium on the most dangerous lines of development.
Anyway, the conscience of the AI gurus is now clear. They did everything they could. And if one day, around 2025, the machines do indeed gain the power to wipe us out – sorry, many of us – I like to think that in the final seconds the AI will put one last question to the brilliant minds who blundered ahead willy-nilly with a technology that could destroy us, without having figured out how to stop it.
“Why did you continue, knowing the risks?” asks Skynet. And in their last seconds, the geniuses answer: “What do you mean? We signed a statement.”
Follow @FTMag on Twitter to be the first to know about our latest stories