Philosopher Nick Bostrom is surprisingly cheerful for someone who has spent so much time worrying about the ways humanity could destroy itself. In photographs he often looks deadly serious, perhaps appropriately tormented by the existential dangers lurking in his brain. But when we talk over Zoom, he is relaxed and smiling.
Bostrom has dedicated his career to reflecting on distant technological advances and existential risks to humanity. With the publication of his 2014 book, Superintelligence: Paths, Dangers, Strategies, Bostrom brought to public attention what was then a fringe idea: that AI could advance to a point where it might turn against humanity and wipe it out.
To many inside and outside AI research, the idea seemed fanciful, but influential figures including Elon Musk cited Bostrom's writing. The book seeded a wave of apocalyptic concern about AI that erupted recently with the arrival of ChatGPT. Concern about AI risk is now not only mainstream but also a theme within government AI policy circles.
Bostrom’s new book takes a very different tack. Rather than replaying the doomy hits, Deep Utopia: Life and Meaning in a Solved World considers a future in which humanity has successfully developed superintelligent machines but averted disaster. All diseases have been eliminated and humans can live indefinitely in conditions of abundance. The book examines what life would mean inside such a techno-utopia, and asks whether it might be rather empty. He spoke with WIRED over Zoom, in a conversation that has been lightly edited for length and clarity.
Will Knight: Why go from writing about superintelligent AI threatening humanity to considering a future where it will be used for good?
Nick Bostrom: The various things that could go wrong in AI development are now receiving much more attention. It’s a big change in the last 10 years. Now all the major cutting-edge AI labs have research groups trying to develop scalable alignment methods. And also in recent years we have seen political leaders starting to pay attention to AI.
There hasn’t yet been a commensurate increase in the depth and sophistication of thinking about where things go if we don’t fall into one of these pits. Thinking on that question has been quite superficial.
When you wrote Superintelligence, few would have expected the existential risks of AI to become a mainstream debate so quickly. Will we have to worry about the problems in your new book sooner than people think?
As we start to see automation roll out, assuming progress continues, I think these conversations will start to happen and eventually deepen.
Companion social apps will become increasingly prominent. People will have all kinds of different views about them, and it’s a space where a little culture war might play out. They might be great for people who can’t find satisfaction in ordinary life, but what if a segment of the population takes pleasure in abusing them?
In the political and information sphere we could see AI used in political campaigns, marketing, and automated propaganda systems. But with a sufficient level of wisdom, these things could actually amplify our ability to be constructive democratic citizens, with individualized advice explaining what different policy proposals mean for you. There will be a whole lot of new dynamics for society.
Would a future in which AI has solved many problems, such as climate change, disease and the need for work, really be so bad?