In 2020, the Oxford-based philosopher Toby Ord published The Precipice, a book about the risk of human extinction. He put the chances of an “existential catastrophe” for our species during the next century at one in six.
This is a strikingly precise and alarming figure. The claim attracted headlines at the time, and has been influential ever since – most recently discussed by Australian politician Andrew Leigh in a speech in Melbourne.
It’s hard to disagree with the idea that we face worrying prospects over the coming decades, from climate change, nuclear weapons and bioengineered pathogens (all major problems in my opinion), to malicious AI and large asteroids (which I would consider less of a concern).
But what of the number itself? Where does it come from, and what does it actually mean?
Coin tosses and weather forecasts
To answer these questions, we must first answer another: what is probability?
The most traditional view of probability is called frequentism, and it gets its name from its heritage in games of dice and cards. From this perspective, we know there is a one in six chance a fair die will roll a three (for example) by observing the frequency of threes across a large number of rolls.
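To make the frequentist idea concrete, here is a minimal sketch in Python (purely illustrative, not from Ord’s book) that estimates the chance of rolling a three by brute repetition:

```python
import random

# Simulate many rolls of a fair six-sided die and count the threes.
rolls = 100_000
threes = sum(1 for _ in range(rolls) if random.randint(1, 6) == 3)

# The observed frequency should approach the theoretical 1/6 (about 0.167).
print(f"Observed frequency of a three: {threes / rolls:.3f}")
```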
Or consider the more complicated case of weather forecasting. What does it mean when a meteorologist tells us there is a one in six (or 17%) chance of rain tomorrow?
It is hard to believe the meteorologist means for us to imagine a vast collection of “tomorrows”, some fraction of which will experience rain. Instead, we should look at a large number of such forecasts and see what happened after each of them.
If the forecaster is doing their job well, we should find that when they said “one in six chance of rain tomorrow”, it did in fact rain the next day about one time in six.
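In code, checking this kind of calibration is just a counting exercise. Here is a minimal sketch, with an entirely hypothetical verification record, of how the check would work:

```python
# Hypothetical record: each entry is a day on which a "one in six" forecast
# was issued; True means it actually rained the following day.
outcomes = [False, False, True, False, False, False,
            False, True, False, False, False, True]

observed_rate = sum(outcomes) / len(outcomes)
print(f"Forecast said 1/6 = {1/6:.2f}; rain actually followed "
      f"{observed_rate:.2f} of the time")
# For a well-calibrated forecaster, the observed rate converges on the
# stated 1/6 as the record grows.
```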
Traditional probability, then, depends on observation and repetition. To calculate it, we need a set of repeated events on which to base our estimate.
Can we learn from the Moon?
So what does this mean for the probability of human extinction? Well, such an event would be a one-off: once it happened, there would be no possibility of a repeat.
Instead, we might find parallel events we can learn from. Indeed, Ord’s book discusses a number of potential extinction events, some of which can potentially be examined in light of history.
For example, we can estimate the chances of an extinction-sized asteroid hitting Earth by examining how many such space rocks have hit the Moon over its history. A French scientist named Jean-Marc Salotti did just this in 2022, calculating the probability of an extinction-level impact in the next century at around one in 300 million.
Of course, such an estimate is fraught with uncertainty, but it relies on something approximating a proper frequency calculation. Ord, by contrast, estimates the risk of extinction from an asteroid at one in a million, while noting a considerable degree of uncertainty.
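The shape of such a frequency calculation is easy to sketch. The numbers below are illustrative placeholders only – not Salotti’s actual inputs – but they show how a count of past impacts over a long time span converts into a per-century probability:

```python
# Illustrative only: assume K extinction-scale impacts over a span of T years.
impacts = 2                 # hypothetical count inferred from lunar craters
timespan_years = 4.0e9      # hypothetical observation window

rate_per_year = impacts / timespan_years
prob_per_century = rate_per_year * 100   # good approximation for rare events

print(f"Roughly one in {1 / prob_per_century:,.0f} per century")
```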
A ranking system for outcomes
There is another way of thinking about probability called Bayesianism, named after the English statistician Thomas Bayes. It focuses less on the events themselves and more on what we know, expect, and believe about them.
In very simple terms, we can say Bayesians view probabilities as a kind of ranking system. From this perspective, the specific number attached to a probability shouldn’t be taken at face value, but rather compared with other probabilities to understand which outcomes are more and less likely.
Ord’s book, for example, contains a table of potential extinction events and his personal estimates of their likelihood. From a Bayesian perspective, we can treat these values as relative rankings. Ord thinks extinction by asteroid impact (one in a million) is much less likely than extinction by climate change (one in a thousand), and both are far less likely than extinction by what he calls “unaligned artificial intelligence” (one in ten).
The difficulty here is that initial estimates of Bayesian probabilities (often called “priors”) are rather subjective (for example, I would rank the chances of AI-driven extinction much lower). Traditional Bayesian reasoning moves from “priors” to “posteriors” by incorporating observational evidence about relevant outcomes to “update” the probability values.
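To illustrate the prior-to-posterior move, here is the textbook Beta-Binomial update – a generic sketch with made-up numbers, not anything drawn from Ord’s table:

```python
# Beta-Binomial updating: a standard conjugate-prior example.
# A subjective prior belief about an event's probability, encoded as Beta(a, b).
a, b = 1.0, 5.0          # prior with mean 1/6

# Hypothetical new evidence: 10 relevant trials, 4 of them "successes".
successes, trials = 4, 10

# The posterior is Beta(a + successes, b + failures).
a_post = a + successes
b_post = b + (trials - successes)
print(f"Prior mean: {a / (a + b):.2f}, "
      f"posterior mean: {a_post / (a_post + b_post):.2f}")
```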
And once again, observed outcomes bearing on the probability of human extinction are scarce.
Subjective estimates
There are two ways to think about the accuracy and usefulness of probability calculations: calibration and discrimination.
Calibration refers to the accuracy of the actual numerical values of the probabilities; we cannot determine it without suitable observational data. Discrimination, on the other hand, refers only to the relative rankings of outcomes.
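The distinction is easy to show in code. The sketch below uses three of Ord’s published figures purely as inputs: discrimination only needs their ordering to be right, while calibration would need the numbers themselves to match observed frequencies – which, for extinction, we cannot check:

```python
# Ord's published estimates (from the book's table).
estimates = {"asteroid": 1e-6, "climate change": 1e-3, "unaligned AI": 0.1}

# Discrimination: only the relative ranking matters.
ranking = sorted(estimates, key=estimates.get)
print("Least to most likely:", ranking)

# Calibration would require comparing each number with an observed
# frequency of outcomes -- data we do not have for extinction events.
```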
We have no basis for believing Ord’s values are well calibrated. To be fair, that is probably not his intention: he himself indicates they are mostly designed to give “order of magnitude” indications.
Even so, without any related observational confirmation, most of these estimates simply remain in the subjective realm of a priori probabilities.
Not well calibrated – but perhaps still useful
So what should we make of “one in six”? Experience suggests that most people have a less than perfect understanding of probability (as evidenced by, among other things, the continued volume of lottery ticket sales). In this environment, if you’re making an argument in public, an estimate of “probability” doesn’t necessarily need to be well-calibrated – it just needs to have the right kind of psychological impact.
From that perspective, I’d say “one in six” does the trick. “One in 100” might seem small enough to ignore, while “one in three” might cause panic or be considered apocalyptic delusion.
As someone concerned about the future, I hope that risks such as climate change and nuclear proliferation receive the attention they deserve. But as a data scientist, I hope that the careless use of probability will be left behind and replaced with widespread education about its true meaning and proper use.
Steven Stern is a professor of data science at Bond University. This piece first appeared on The Conversation.