Two weeks ago it was quietly announced that the Future of Humanity Institute, Oxford’s renowned multidisciplinary research centre, no longer had a future. It closed without notice on April 16. Initially there was only a brief statement on its website indicating that it had closed and that its research would continue elsewhere within and outside the university.
The institute, which was dedicated to studying existential risks to humanity, was founded in 2005 by the Swedish philosopher Nick Bostrom and quickly made a name for itself beyond academic circles, particularly in Silicon Valley, where several tech billionaires sang its praises and provided financial support.
Bostrom is perhaps best known for his bestselling 2014 book, Superintelligence, which warned of the existential dangers of artificial intelligence, but he also gained widespread recognition for his 2003 academic paper “Are You Living in a Computer Simulation?” The paper argued that over time humans would probably develop the ability to create simulations indistinguishable from reality, and that if this were the case, it was possible that it had already happened and that the simulations were us.
I interviewed Bostrom more than a decade ago, and he possessed one of those elusive, rather abstract personalities that perhaps lend credence to simulation theory. With his pale complexion and a reputation for working through the night, he seemed like the type of person who didn’t get out much. The institute appears to have recognized this social shortcoming in its final report, a long epitaph written by FHI researcher Anders Sandberg, which read:
“We did not invest enough in university politics and sociality to form a stable, long-term relationship with our faculty… When epistemic and communicative practices diverge too much, misunderstandings proliferate.”
Like Sandberg, Bostrom has advocated transhumanism, the belief in using advanced technologies to enhance longevity and cognition, and is said to have signed up for cryonic preservation. Although proudly provocative on the page, he appeared cautious and defensive in person, as if he were the keeper of a momentous truth that required vigilant protection.
His office, situated up a medieval alley, was a typically cramped Oxford affair, and it would have been easy to dismiss the institute as a whimsical enterprise, an eccentric if laudable field of study for those, like Bostrom, with a penchant for science fiction. But even a decade ago, when I visited, the FHI was already on its way to becoming the favorite research group of tech billionaires.
In 2018 it received £13.3m from the Open Philanthropy Project, a charity backed by Facebook co-founder Dustin Moskovitz. Elon Musk has also been a benefactor. Big Tech took Bostrom’s warnings about AI seriously, but as competition has intensified in the race to create artificial general intelligence, ethics have tended to take a back seat.
Among the other ideas and movements that emerged from the FHI are longtermism – the notion that humanity should prioritize the needs of the distant future because it will theoretically contain many more lives than the present – and effective altruism (EA), a utilitarian approach to maximizing global good.
These philosophies, which have become intertwined, inspired something of a cult following, which may have alienated many in Oxford’s philosophical community and, indeed, among the university’s administrators.
According to the FHI itself, its closure was the result of growing administrative tensions with Oxford’s philosophy faculty. “Starting in 2020, the faculty imposed a freeze on fundraising and hiring. At the end of 2023, the Faculty of Philosophy decided that the contracts of the remaining FHI staff would not be renewed,” the final report states.
But both Bostrom and the institute, which brought together philosophers, computer scientists, mathematicians and economists, have been the subject of numerous controversies in recent years. Fifteen months ago Bostrom was forced to issue an apology for comments he had made in a group email back in 1996, when he was 23 and a graduate student at the London School of Economics. In the recovered message, Bostrom used the N-word and argued that white people were smarter than black people.
The apology did little to appease Bostrom’s critics, especially since he conspicuously failed to withdraw his central claim about race and intelligence and appeared to offer a partial defense of eugenics. Although, after an investigation, Oxford University accepted that Bostrom was not a racist, the entire episode left a stain on the institute’s reputation at a time when issues of anti-racism and decolonization have become vitally important to many university departments.
It was Émile Torres, a former supporter of longtermism who has become its most outspoken critic, who unearthed the 1996 email. Torres says it is their understanding that “it was the last straw for Oxford’s philosophy department.”
Torres has come to believe that the work of the FHI and its affiliates amounts to what they call a “harmful ideology” and “eugenics on steroids”. They refuse to see Bostrom’s 1996 comments as mere ill-phrased juvenilia, viewing them instead as indicative of a brutally utilitarian view of humanity. Torres notes that six years after the email thread, Bostrom wrote a paper on existential risk that helped launch the longtermist movement, in which he discusses “dysgenic pressures” (dysgenic being the opposite of eugenic). Bostrom wrote:
“Currently it seems that there is a negative correlation in some places between intellectual achievement and fertility. If such selection were to operate over a long period of time, we might evolve into a less brainy but more fertile species, homo philoprogenitus (‘lover of many offspring’).”
Bostrom now says he has no particular interest in the race question and is happy to leave it to others with “more relevant knowledge.” But the 28-year-old email is not the only issue Oxford has had to weigh. As Torres puts it, the effective altruism/longtermist movement has “suffered a series of scandals since the end of 2022.”
Just a month before Bostrom’s incendiary comments came to light, the cryptocurrency entrepreneur Sam Bankman-Fried was extradited from the Bahamas to face charges in the US relating to a multibillion-dollar fraud. Bankman-Fried was a vocal and financial supporter of effective altruism and a close friend of William MacAskill, an academic with strong ties to the FHI who co-founded the Centre for Effective Altruism, where Bankman-Fried briefly worked.
It is said that it was MacAskill who, a decade ago, persuaded Bankman-Fried to try to make as much money as possible so that he could give it away. The entrepreneur appeared to follow the first part of that advice, but then spent $300m of fraudulently obtained money on real estate in the Bahamas. His downfall and subsequent 25-year prison sentence did little to bolster the moral arguments advanced by the FHI and its associated groups.
If that weren’t enough, last November’s boardroom coup that briefly unseated Sam Altman as CEO of OpenAI, the company behind ChatGPT, was attributed to members of the company’s board who were sympathetic to EA. Altman’s swift return was seen as a defeat for the EA community and, Torres says, “has seriously undermined the influence of EA longtermism within Silicon Valley.”
All this, of course, seems far removed from the not inconsiderable matter of safeguarding humanity, the cause for which the FHI was ostensibly created. There is no doubt that that noble effort will find other academic avenues to explore, but perhaps without the cultish ideological framework that left the institute with a bright future behind it.