By teaching AI how to feel FEAR, autonomous cars can become better drivers, research suggests

  • Microsoft researchers used wrist sensors to track people's fear responses
  • Signals were then used to train and guide the algorithm through a driving simulation
  • They found the fear-guided algorithm experienced fewer crashes during tests

Artificial intelligence has become extremely advanced in recent years, so much so that the prospect of self-driving cars on city roads is no longer a distant concept.

But despite their current capabilities, there is one thing that people have that AI does not inherently have – fear.

Physiological responses, driven by anxiety, help people make crucial decisions and stay alert, especially when it comes to situations such as driving.

In a new study, Microsoft researchers are building on this idea to improve the decision-making of self-driving cars, in an attempt to develop 'visceral machines' that learn faster and make fewer mistakes.

AI has become extremely advanced in recent years, so much so that the prospect of self-driving cars on city roads is no longer a distant concept. Despite their current capabilities, there is one thing that people have that AI does not inherently have – fear. File photo

The team detailed their findings in a paper presented during the 2019 International Conference on Learning Representations (ICLR).

To teach the AI to 'feel', the researchers used wrist sensors to track people's arousal while they used a driving simulator.

These signals were then fed to the algorithm to identify the situations in which a person's pulse spiked.
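
In rough terms, that step amounts to pairing what the simulated car sees at each moment with the wrist-sensor reading recorded at the same time, and fitting a model to predict one from the other. The sketch below is a minimal illustration of that idea only; the file names, model choice and settings are assumptions, not details from the paper.

```python
# Minimal sketch (assumed details, not the paper's pipeline): pair each
# recorded driving scene with the wrist-sensor arousal score captured
# at the same moment, then fit a simple model that predicts arousal
# from the scene.
import numpy as np
from sklearn.neural_network import MLPRegressor

frames = np.load("driving_frames.npy")    # hypothetical recorded scenes, one per time step
arousal = np.load("wrist_arousal.npy")    # hypothetical pulse-derived scores in [0, 1], one per frame

arousal_model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500)
arousal_model.fit(frames.reshape(len(frames), -1), arousal)
```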

'As people learn to navigate the world, autonomic nervous system responses (such as "fight or flight") provide intrinsic feedback about the potential consequences of action choices (for example, becoming nervous when close to a cliff edge or driving fast around a bend),' authors Daniel McDuff and Ashish Kapoor explain in the paper's abstract.

'Physiological responses are correlated with these biological preparations to protect oneself from danger.'

According to the researchers, knowing when someone feels more anxious in a particular situation could serve as a guide for the algorithm, helping machines avoid risks.

'Our hypothesis is that such reward functions can circumvent the challenges associated with sparse and skewed rewards in reinforcement learning settings and can help improve sample efficiency,' the team explains.
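
In code, the idea behind that quote can be sketched as a reward that blends the sparse penalty for an actual crash with a dense penalty based on how anxious a person would be in the same situation. The weighting and shaping below are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch only: blend a sparse extrinsic crash penalty with
# a dense intrinsic "fear" penalty predicted from the current scene.
def combined_reward(observation, crashed, arousal_model, weight=0.5):
    # Sparse extrinsic reward: only an actual crash is penalised.
    extrinsic = -1.0 if crashed else 0.0

    # Dense intrinsic reward: arousal_model is any function that maps
    # an observation to a predicted anxiety score in [0, 1], such as a
    # model trained on the wrist-sensor recordings. The penalty grows
    # as the car nears danger, before any crash happens.
    intrinsic = -arousal_model(observation)

    # The blending weight is a design choice, not a value from the paper.
    return (1 - weight) * extrinsic + weight * intrinsic
```

In reinforcement-learning terms this is a form of reward shaping: the intrinsic term gives the learner feedback on every step, rather than only at the moment of a crash.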

The researchers put the autonomous software through a simulated maze full of walls and slopes to see how it performed with the fear signal built in.

And compared with an AI trained only on proximity to the walls, the fear-trained system crashed far less often.

'A major benefit of training a reward on a signal correlated with sympathetic nervous system responses is that the rewards are not sparse – the negative reward begins to appear before the car crashes,' the researchers wrote.

'This leads to efficiency in training and, with good design, this can lead to policies that also match the desired mission.'

But there are caveats.

'Although emotions are important for decision-making, they can also negatively influence decisions in certain contexts,' the researchers note.

'Future work will consider balancing intrinsic and extrinsic rewards and include extensions to representations with multiple intrinsic drives (such as hunger, anxiety and pain).'

HOW CAN ARTIFICIAL INTELLIGENCE LEARN?

AI systems are based on artificial neural networks (ANNs), which try to simulate how the brain works to learn.

ANNs can be trained to recognize patterns in information – including speech, text data or visual images – and form the basis for a large number of developments in AI in recent years.

Conventional AI uses input to 'teach' an algorithm about a particular subject by feeding it huge amounts of information.

AI systems are based on artificial neural networks (ANNs), which try to simulate how the brain works to learn. ANNs can be trained to recognize patterns in information – including speech, text data or visual images

Practical applications include Google's translation services, Facebook's face recognition software and Snapchat's image-changing live filters.

Entering this data can be very time-consuming and is limited to one type of knowledge.

A new breed of ANNs called Adversarial Neural Networks pits the wits of two AI bots against each other, allowing them to learn from each other.

This approach is designed to speed up the learning process and refine the output of AI systems.
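
As a very rough illustration of that set-up (separate from the fear study above), the sketch below trains two small networks against each other: one produces candidate outputs, the other learns to tell them apart from real examples, and each improves by trying to beat the other. PyTorch, the layer sizes and the random stand-in 'real' data are choices made for the example, not details from the article.

```python
# Toy adversarial training loop (illustrative stand-in data and sizes).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
discriminator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_data = torch.randn(64, 8)  # stand-in for real training examples

for step in range(100):
    # Discriminator: learn to label real examples 1 and generated ones 0.
    fake = generator(torch.randn(64, 16)).detach()
    d_loss = (loss_fn(discriminator(real_data), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to produce outputs the discriminator labels as real.
    fake = generator(torch.randn(64, 16))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```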
