Would you deactivate a robot begging for its life? Study warns that humans can be manipulated by bots

For the study, the researchers recruited 89 volunteers to interact with the small humanoid robot, Nao. The participants were told that their conversations with the robot were intended to help it improve its interaction capabilities

While it may not always be easy to switch off your electronic devices, doing so rarely poses a moral dilemma.

But shutting them down could be much harder if your devices begged you not to.

A new study explores the ways in which social robots can manipulate their owners by tugging at their heartstrings.

When the robots protested, crying out things like 'No! Please do not turn me off!' and implying that they were afraid of the dark, the participants hesitated and sometimes even refused to turn them off.

In the new study, published in the journal PLOS One, researchers in Germany set out to understand whether people are more likely to see a robot as 'alive' if it behaves in a more human way, compared with those that operate in a strictly machine-like manner.

It builds on something known as the media equation theory, under which people apply social norms that generally only apply to interactions with humans when interacting with various media, such as computers and robots.

With plans to use social robots in more and more areas of our lives, from airports to elderly care, we may soon find ourselves facing a unique set of moral challenges.

For the study, the researchers recruited 89 volunteers to interact with the small humanoid robot, Nao.

Participants were told that their conversations with the robot were intended to help it improve its interaction skills.

They were assigned two tasks: creating a schedule for the week, with seven activities to choose from, and playing a question-and-answer game with Nao.

During this task, the robot asked questions like, "Do you like pizza or pasta?"

Once the two tasks were completed, the participants rang a bell and were informed over a loudspeaker that enough data had been gathered, and that they could turn off the robot if they wished.

Once the two tasks were completed, the participants rang a bell and were told over a loudspeaker that enough data had been gathered, and that they could turn off the robot if they wished. But at this point, some of the robots objected

At this point, about half of the robots objected to being switched off, saying: 'No! Please do not turn me off! I'm scared that it will not get bright again!'

For the participants in these scenarios, turning off the robots was not as easy as it was for those who were not confronted with the pleas.

In this group, 14 volunteers chose to leave the robot on, and those who did eventually turn it off hesitated longer than those in the other group.

The researchers then interviewed the study participants about their choices.

Eight people said they felt sorry for the robot because it had told them about its fear of the dark, while six said they did not want to act against its wishes.

Some participants also said they left the robot on simply because they were given the option to do so, while others wondered whether they could have interacted with it further, or were afraid of doing something wrong.

A new study shows how shutting down a robot could be much more difficult if it begs you not to. This type of dilemma has become a central focus of many science fiction narratives, such as the show Westworld

Only one person said they were surprised by the robot's objection.

The study highlights the growing complexity of our relationships with machines as they come to behave more like humans.

'Triggered by the objection, people tend to treat the robot rather as a real person than just a machine, by following, or at least considering following, its request to stay switched on, which builds on the core statement of the media equation theory,' the authors wrote.

"Therefore, although the disconnection situation does not occur with a human interaction partner, people tend to treat a robot that gives autonomy signals more as a human interaction partner than other electronic devices or a robot would treat. that does not reveal autonomy. & # 39;

WHAT DO EXPERTS SAY ABOUT THE STATUS OF ROBOTS AS PEOPLE UNDER THE LAW?

The question of whether robots are people has European legislators and other experts divided.

The problem first arose in January 2017, thanks to a paragraph of text buried deep in a report from the European Parliament, which advised creating a "legal status for robots".

A group of 156 specialists in artificial intelligence from 14 nations has written an open letter to the European Commission in Brussels denouncing the measure.

Writing in the statement, they said: "We, experts in artificial intelligence and robotics, industry leaders, legal, medical and ethics experts, confirm that establishing EU-wide rules for robotics and artificial intelligence is pertinent to guarantee a high level of safety and security to European Union citizens while fostering innovation.

"As the interactions between humans and robots become commonplace, the European Union must provide the appropriate framework to strengthen democracy and the values ​​of the European Union.

"In fact, the framework of artificial intelligence and robotics must be explored not only through economic and legal aspects, but also through its social, psychological and ethical impacts.

"In this context, we are concerned about the European Parliament resolution on the rules of civil law robotics and its recommendation to the European Commission."

They say that creating a legal status of an "electronic person" for self-learning robots is a bad idea, for a number of reasons.

This includes the fact that the companies that manufacture the machines could be exempt from any legal liability for damage inflicted by their creations.

They added: "The legal status of a robot can not be derived from the Natural Person model, since the robot would have human rights, such as the right to dignity, the right to remuneration or the right to citizenship.

"The legal status of a robot cannot be derived from the Legal Entity model, as granted to companies, either, since that implies the existence of human persons behind the legal entity to represent and direct it. This is not the case for a robot.

"Accordingly, we affirm that the European Union should promote the development of the AI ​​and bobotics industry to the extent that it limits health and safety risks for human beings.

"The protection of the users of robots and third parties must be the core of all the legal provisions of the EU."
