
AI could cause “social ruptures” between people who disagree over its sentience


Major “social rifts” are coming between people who think AI systems are conscious and those who insist the technology feels nothing, a leading philosopher has said.

The comments, by Jonathan Birch, a philosophy professor at the London School of Economics, come as governments prepare to meet this week in San Francisco to accelerate the creation of guardrails to tackle the most serious risks of AI.

Last week, a transatlantic group of academics predicted that the dawn of consciousness in AI systems is likely by 2035, and Birch has now said that this could lead to “subcultures that see others as making huge mistakes” about whether computer programs are owed welfare rights similar to those of humans or animals.

Birch said he was “concerned about major social divides” as people differ over whether AI systems are actually capable of feelings such as pain and joy.

The debate over the consequences of sentience in AI has echoes of science fiction films, such as Steven Spielberg’s AI (2001) and Spike Jonze’s Her (2013), in which humans wrestle with the sentience of AIs. AI safety bodies from the US, UK and other countries will meet with technology companies this week to develop stronger safety frameworks as the technology advances rapidly.

There are already significant differences in how countries and religions view animal sentience, such as between India, where hundreds of millions of people are vegetarian, and the United States, one of the world’s largest consumers of meat. Views on the sentience of AI could divide along similar lines, while the view of theocracies, such as Saudi Arabia, which is positioning itself as an AI hub, could also differ from that of secular states. The issue could also cause tensions within families, with people who develop close relationships with chatbots, or even AI avatars of deceased loved ones, clashing with relatives who believe only flesh-and-blood creatures have consciousness.

Birch, an expert in animal sentience whose pioneering work has led to a growing number of bans on octopus farming, co-authored a study involving academics and AI experts at New York University, the University of Oxford, Stanford University and the AI companies Eleos and Anthropic, which says the prospect of AI systems with their own interests and moral significance “is no longer a problem just for science fiction or the distant future.”

They want the big tech companies developing AI to start taking the question seriously by determining the sentience of their systems: assessing whether their models are capable of happiness and suffering, and whether they can be benefited or harmed.

“I’m quite concerned about major social divisions over this,” Birch said. “We’re going to have subcultures that see each other as making big mistakes… (there could be) huge social rifts where one side sees the other as very cruelly exploiting AI, while the other side sees the first as deluding itself into thinking there is sentience there.”

But he said AI companies “want to really focus on reliability and profitability… and they don’t want to get sidetracked by this debate about whether they could be creating more than a product and are actually creating a new form of conscious being. That question, of supreme interest to philosophers, they have commercial reasons to downplay.”

One method of assessing how conscious an AI is could follow the system of markers used to guide policy on animals. For example, an octopus is regarded as having greater sentience than a snail or an oyster.

Any evaluation would effectively ask whether a chatbot on your phone could really be happy or sad or whether robots programmed to do your housework suffer if you don’t treat them right. One would even have to consider whether an automated warehouse system has the capacity to become frustrated.

Another author, Patrick Butlin, a researcher at the University of Oxford’s Global Priorities Institute, said: “We could identify a risk that an AI system might try to resist us in a way that would be dangerous to humans,” and there could be an argument to “slow down AI development” until more work is done on consciousness.


“These kinds of assessments of potential consciousness are not being carried out at the moment,” he said.

Microsoft and Perplexity, two leading American companies involved in building artificial intelligence systems, declined to comment on the academics’ call to evaluate their models for sentience. Meta, OpenAI and Google also did not respond.

Not all experts agree that consciousness is looming in AI systems. Anil Seth, a leading neuroscientist and consciousness researcher, has said it “is very far away and may not be possible at all. But even though it is unlikely, it is not wise to rule out the possibility completely.”

He distinguishes between intelligence and consciousness. The former is the ability to do the right thing at the right time; the latter is a state in which we are not just processing information but in which “our minds are full of light, color, shadow and shapes. Emotions, thoughts, beliefs, intentions, we feel all of them in a particular way.”

But large language AI models, trained on billions of words of human writing, have already begun to show that they can be motivated at least by concepts of pleasure and pain. When AIs, including ChatGPT-4o, were tasked with maximizing points in a game, researchers found that if a trade-off was included between scoring more points and “feeling” more pain, the AIs would make it, another study published last week showed.
