Artificial intelligence can develop racism on its own

Robots could learn to treat other life forms, including humans, as less valuable than themselves, new research suggests. Experts say that prejudice towards others does not require a high level of cognitive ability and could easily be displayed by AI (stock image)

Robots could learn to treat other life forms, including humans, as less valuable than themselves, new research says.

Experts say that prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by artificially intelligent machines.

These machines could teach each other the value of excluding those outside their immediate group.

The latest findings are based on computer simulations of how AI, or virtual agents, form a group and interact with each other.

Experts in computer science and psychology at Cardiff University and MIT have shown that groups of autonomous machines can demonstrate prejudice simply by identifying, copying and learning the behavior from one another.

It may seem that prejudice is a uniquely human phenomenon, requiring human cognition to form an opinion of, or stereotype about, a certain person or group.

Some types of computer algorithms have already exhibited prejudices, such as racism and sexism, after learning from public records and other data generated by humans.

However, the latest study demonstrates the possibility that AI could develop prejudicial groups on its own.

To determine whether artificial intelligence could acquire prejudices by itself, the scientists ran a simulation in which virtual agents took part in a game of give and take.

In the game, each individual decides whether to donate money to someone within their own group or to someone in a different group.

The game tests an individual's donation strategy, revealing their level of prejudice towards outsiders.

As the game unfolds and a supercomputer accumulates thousands of simulations, each individual begins to learn new strategies by copying others, either within their own group or across the entire population.

These machines could teach one another the value of excluding others outside their group. The new findings are based on computer simulations of how AI, or virtual agents, form groups and interact with each other (stock image)

"By running these simulations thousands and thousands of times, we begin to understand how prejudices evolve and the conditions that promote or hinder it," said study co-author Roger Whitaker, of the University of Cardiff's computer school. . and computing.

"Our simulations show that prejudice is a powerful force of nature and, through evolution, can be easily encouraged in virtual populations, to the detriment of a wider connectivity with others.

"Protection against harmful groups can inadvertently lead individuals to form additional detrimental groups, resulting in a fractured population, such a generalized bias is difficult to reverse."

The findings involve individuals updating their levels of prejudice by preferentially copying those who earn a higher payoff in the short term, meaning that these decisions do not necessarily require advanced cognitive abilities.
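To give a sense of how such a simulation can work in practice, the sketch below is a heavily simplified, illustrative re-creation in Python, not the authors' actual code. It assumes a fixed population split into groups, a single "prejudice" value per agent (the probability of refusing to donate to outsiders), and a learning phase in which agents copy the prejudice level of a random higher-earning agent, echoing the short-term payoff comparison described above. All parameter values are arbitrary.

```python
# Illustrative donation-game simulation (simplified; not the study's code).
import random

random.seed(1)

N_AGENTS, N_GROUPS, ROUNDS = 100, 4, 500
BENEFIT, COST = 3.0, 1.0   # a donation is worth more to the receiver than it costs the donor

group = [random.randrange(N_GROUPS) for _ in range(N_AGENTS)]
prejudice = [random.random() for _ in range(N_AGENTS)]   # 0 = donate to anyone, 1 = in-group only

for _ in range(ROUNDS):
    payoff = [0.0] * N_AGENTS

    # Donation phase: every agent meets one random partner.
    for donor in range(N_AGENTS):
        partner = random.choice([a for a in range(N_AGENTS) if a != donor])
        same_group = group[donor] == group[partner]
        # Donate unless the partner is an outsider and prejudice blocks it.
        if same_group or random.random() > prejudice[donor]:
            payoff[donor] -= COST
            payoff[partner] += BENEFIT

    # Learning phase: each agent copies the prejudice level of a random
    # agent who did better this round.
    new_prejudice = prejudice[:]
    for agent in range(N_AGENTS):
        model = random.randrange(N_AGENTS)
        if payoff[model] > payoff[agent]:
            new_prejudice[agent] = prejudice[model]
    prejudice = new_prejudice

print("average prejudice after simulation:", sum(prejudice) / N_AGENTS)
```

Running the script repeatedly with different group counts is a rough way to see the effect mentioned later in this article: with more subpopulations, open strategies find more partners to cooperate with, and high prejudice spreads less easily.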

HOW DOES ARTIFICIAL INTELLIGENCE LEARN?

Artificial intelligence systems are based on artificial neural networks (ANN), which try to simulate the way the brain works to learn.

ANNs can be trained to recognize patterns in information, including speech, text data or visual images, and are the basis of a large number of AI developments in recent years.

Conventional AI uses input to "teach" an algorithm about a particular subject by feeding it massive amounts of information.

Artificial intelligence systems are based on artificial neural networks (ANNs), which try to simulate the way the brain works in order to learn. ANNs can be trained to recognize patterns in information, including speech, text data or visual images

Practical applications include Google's language translation services, Facebook's facial recognition software and Snapchat's live filters that alter images.

The process of inputting this data can be time-consuming and is limited to one type of knowledge.

A new generation of ANNs, called adversarial neural networks, pits two AI bots against each other, allowing them to learn from one another.

This approach is designed to speed up the learning process, as well as to refine the output created by AI systems.
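As a concrete illustration of the adversarial idea, here is a minimal, hypothetical sketch in Python using the PyTorch library; it is not tied to any system mentioned in this article. A tiny generator learns to imitate samples from a simple one-dimensional bell curve while a discriminator tries to tell its output apart from real samples, so the two networks improve by competing. Network sizes and training settings are arbitrary.

```python
# Toy adversarial training loop (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a candidate "real-looking" sample.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how likely a sample is to come from the real data.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0       # "real" data: samples centred on 4.0
    fake = G(torch.randn(64, 8))          # generator's attempt to imitate it

    # Train the discriminator to tell real from fake.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The generator's average output should drift towards 4.0 as training succeeds.
print(G(torch.randn(1000, 8)).mean().item())
```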

"It is feasible that autonomous machines with the ability to identify with discrimination and copy others may in the future be susceptible to the harmful phenomena that we see in the human population," added Professor Whitaker.

"Many of the artificial intelligence developments that we are seeing imply autonomy and self-control, which means that the behavior of the devices is also influenced by others around them.

"Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource."

Another interesting finding of the study was that under particular conditions, it was more difficult for prejudice to take hold.

"With a greater number of subpopulations, alliances of non-detrimental groups can cooperate without being exploited," said Professor Whitaker.

"This also diminishes their minority status, which reduces the susceptibility to prejudice.

"However, this also requires circumstances in which agents are more willing to interact outside of their group."

The full findings of the study were published in the journal Scientific Reports.

HOW DO RESEARCHERS DETERMINE IF AN AI IS "RACIST"?

In a study entitled Gender Shades, a team of researchers discovered that popular facial recognition services from Microsoft, IBM and Face++ can discriminate by gender and race.

The data set consists of 1,270 photos of parliamentarians from three African nations and three Nordic countries where women hold a large share of parliamentary positions.

The faces were selected to represent a wide range of human skin tones, using a labeling system developed by dermatologists, called the Fitzpatrick scale.

All three services performed better on white, male faces and had the highest error rates for dark-skinned men and women.

Microsoft failed to correctly identify darker-skinned women 21% of the time, while IBM and Face++ failed on darker-skinned women in about 35% of cases.

The study set out to determine whether the facial recognition systems of Microsoft, IBM and Face++ discriminated by gender and race. The researchers found that Microsoft's system could not correctly identify dark-skinned women 21% of the time, while IBM and Face++ had an error rate of approximately 35%.
