“Would it be too disruptive if protesters organized sit-ins or chained themselves to the doors of AI developers?” a Discord member asked. “Probably not. In the end, we do what we have to do for a future with humanity, while we still can.”
Meindertsma was worried about the consequences of AI after reading Superintelligence, a 2014 book by philosopher Nick Bostrom that popularized the idea that highly advanced artificial intelligence systems could pose a risk to human existence. Joseph Miller, the organizer of the PauseAI protest in London, was similarly inspired.
It was the release of OpenAI’s GPT-3 large language model in 2020 that really worried Miller about the trajectory AI was taking. “I suddenly realized that this is not a problem for the distant future, but something AI is already getting good at,” he says. Miller joined a nonprofit that researches AI safety and later got involved with PauseAI.
Bostrom’s ideas have influenced the “effective altruism” community, a broad social movement that includes supporters of long-termism: the idea that influencing the long-term future should be a moral priority for humans today. Although many of PauseAI’s organizers have roots in the effective altruism movement, they want to go beyond the philosophy and garner more support for their cause.
PauseAI US director Holly Elmore wants the movement to be a “broad church” that includes artists, writers, and copyright holders whose livelihoods are threatened by artificial intelligence systems that can imitate creative works. “I am a utilitarian. I’m thinking about the consequences ultimately, but the injustice that really drives me to do this kind of activism is the lack of consent” obtained by the companies that produce AI models, she says.
“We don’t have to choose which harm from AI is the most important when we talk about pausing as a solution. Pause is the only solution that addresses them all.”
Miller echoed this point. He says he has spoken to artists whose livelihoods have been affected by the growth of AI art generators. “These are problems that are real today and are signs that much more dangerous things are to come.”
One of the London protesters, Gideon Futerman, has a stack of leaflets that he is trying to hand out to officials leaving the building across the street. He has been protesting with the group since last year. “The idea that a pause was possible has really taken hold since then,” he says.
Futerman is optimistic that protest movements can influence the trajectory of new technologies. He points out that the backlash against genetically modified organisms was instrumental in Europe’s abandonment of the technology in the 1990s, and that the same goes for nuclear power. Not that these movements necessarily had the right ideas, he says, but they demonstrate that popular protest can hinder the advance of even technologies that promise low-carbon energy or more abundant crops.
In London, the group of protesters crosses the street to offer leaflets to a group of officials leaving government offices. Most seem completely uninterested, but some accept a piece of paper. That same day, Rishi Sunak, the British prime minister who six months earlier had hosted the first AI Safety Summit, gave a speech in which he nodded to fears about AI. But after that brief reference, he focused firmly on the potential benefits.
PauseAI leaders WIRED spoke to said they were not considering more disruptive direct actions for now, such as sit-ins or encampments near AI offices. “Our tactics and our methods are actually very moderate,” Elmore says. “I want to be the moderate base for many organizations in this space. We would certainly never tolerate violence. I also want PauseAI to go above and beyond and be very trustworthy.”
Meindertsma agrees, saying that more disruptive actions are not justified at this time. “I really hope we don’t need to take other measures. I don’t think it’s necessary. I don’t feel like I’m the type of person who leads a movement that isn’t completely legal.”
PauseAI’s founder is also hopeful that his movement can shed the “AI doomer” label. “A doomer is someone who has given up on humanity,” he says. “I am an optimistic person; I think we can do something about it.”