The first AI that sees like a person could lead to automated search and rescue robots, scientists say

  • Computer scientists have taught an AI agent to take snapshots of its environment
  • Most artificial intelligence systems are only trained for very specific tasks
  • It takes a few glimpses of a room it has never seen before and creates a 'full scene' from them
  • The team of computer scientists says the skill could be used for search and rescue missions, and that such agents could be equipped for new observation tasks as they arise

Computer scientists have taught an artificial intelligence agent to take in its entire environment from just a few snapshots.

The new technology collects visual information that can be used for a wide range of tasks, including search and rescue.

Researchers taught the computer system to take a few quick glimpses of a room it had never seen before and create a 'full scene' from them.

The scientists used deep learning, a kind of machine learning inspired by the neural networks of the brain, to train their agent on thousands of 360-degree images of different environments.
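The study is described here only in prose; purely as an illustration of the general idea, the sketch below (in PyTorch, with made-up sizes and random stand-in data) trains a network to reconstruct a full panorama from a handful of observed views. The `CompletionNet` name and architecture are hypothetical and are not the researchers' model.

```python
import torch
import torch.nn as nn

# Hypothetical setup: each 360-degree panorama is split into N_VIEWS small
# views, each flattened to a feature vector (all sizes are made up).
N_VIEWS, VIEW_DIM, N_GLIMPSES = 32, 256, 4

class CompletionNet(nn.Module):
    """Maps a partially observed panorama (unseen views zeroed out, plus a
    per-view observed/unobserved flag) to a prediction of every view."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_VIEWS * (VIEW_DIM + 1), 1024),
            nn.ReLU(),
            nn.Linear(1024, N_VIEWS * VIEW_DIM),
        )

    def forward(self, views, mask):
        # views: (batch, N_VIEWS, VIEW_DIM); mask: (batch, N_VIEWS), 1 = observed
        observed = views * mask.unsqueeze(-1)               # hide unseen views
        x = torch.cat([observed, mask.unsqueeze(-1)], dim=-1).flatten(1)
        return self.net(x).view(-1, N_VIEWS, VIEW_DIM)

model = CompletionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
panoramas = torch.randn(64, N_VIEWS, VIEW_DIM)   # stand-in for real 360° images

for epoch in range(5):
    # Reveal only a few randomly chosen glimpse directions per panorama.
    order = torch.rand(panoramas.size(0), N_VIEWS).argsort(dim=1)
    mask = torch.zeros(panoramas.size(0), N_VIEWS)
    mask.scatter_(1, order[:, :N_GLIMPSES], 1.0)

    pred = model(panoramas, mask)
    loss = ((pred - panoramas) ** 2).mean()      # reconstruct the whole scene
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```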


They say their research could make search and rescue missions more effective by creating robots that can pass information on to the authorities.

Most computer systems are trained for very specific tasks – such as recognizing an object or estimating volume – in an environment that they have previously experienced.


Computer scientists have taught an artificial intelligence agent to do something that normally only people can do: take a few quick glimpses and infer its entire environment from them

The technology, developed by a team of computer scientists from the University of Texas, collects visual information that can then be used for a wide range of tasks.

The main goal is for it to quickly locate people, flames and hazardous materials, and pass that information on to firefighters, the researchers said.


After each glimpse, the agent chooses the next shot that it predicts will add the most new information about the entire scene.

They give the example of a person in a shopping mall they had never visited before: seeing apples, you would expect to find oranges nearby, but to locate the milk you might look the other way.

Based on these glimpses, the agent infers what it would have seen had it looked in all the other directions, reconstructing a full 360-degree view of its surroundings.

When the agent is presented with a scene it has never seen before, it uses its experience to choose a few glimpses.
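The article does not say how 'most new information' is measured. One common heuristic in active perception – shown below as a hypothetical sketch, not the study's method – is to keep a small ensemble of completion models and glimpse next in the unobserved direction where their predictions disagree most, i.e. where the reconstruction is most uncertain.

```python
import torch

def choose_next_glimpse(ensemble_preds, mask):
    """Greedy next-view selection (hypothetical heuristic, not from the study).

    ensemble_preds: (n_models, n_views, view_dim) full-panorama predictions
        from several independently trained completion models.
    mask: (n_views,) with 1 where the agent has already looked.

    Directions where the models disagree most are treated as the most
    uncertain, and therefore expected to add the most new information.
    """
    disagreement = ensemble_preds.var(dim=0).mean(dim=-1)            # (n_views,)
    disagreement = disagreement.masked_fill(mask.bool(), float("-inf"))
    return int(disagreement.argmax())

# Toy demo: random "predictions" from a 3-model ensemble over 32 view directions.
preds = torch.randn(3, 32, 256)
mask = torch.zeros(32)
mask[torch.tensor([0, 5, 17])] = 1.0     # directions already glimpsed
print(choose_next_glimpse(preds, mask))
```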

Professor Kristen Grauman, who led the study, said: 'Just as you bring in prior information about the regularities that exist in previously experienced environments – like all the supermarkets you have ever been to – this agent searches in a non-exhaustive way.'


'We want an agent that is generally equipped to enter environments and be ready for new observation tasks as they arise.

'It behaves in a way that is versatile and able to succeed in various tasks because it has learned useful patterns about the visual world.'

'What makes this system so effective is that it does not just take pictures in random directions but, after each glimpse, chooses the next shot that it predicts will add the most new information about the whole scene,' Professor Grauman said.

The study was partially supported by the U.S. Defense Advanced Research Projects Agency and the U.S. Air Force Office of Scientific Research.

HOW CAN ARTIFICIAL INTELLIGENCE LEARN?

AI systems are based on artificial neural networks (ANNs), which try to simulate how the brain works to learn.


ANNs can be trained to recognize patterns in information – including speech, text data or visual images – and form the basis for a large number of developments in AI in recent years.

Conventional AI uses input to 'teach' an algorithm about a particular subject by feeding it huge amounts of information.

AI systems are based on artificial neural networks (ANNs), which try to simulate how the brain works to learn. ANNs can be trained to recognize patterns in information – including speech, text data or visual images

Practical applications include Google's translation services, Facebook's face recognition software and Snapchat's image-changing live filters.


Feeding in this data can be hugely time-consuming, and is limited to one type of knowledge.
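As a generic illustration of that conventional, data-hungry approach (a minimal sketch with random stand-in data, not tied to any particular system), a supervised classifier is simply shown labelled examples over and over and nudged until its predictions match the labels:

```python
import torch
import torch.nn as nn

# Toy supervised setup: 28x28 grayscale images, 10 classes, random
# stand-in data in place of a real labelled dataset.
images = torch.randn(512, 1, 28, 28)
labels = torch.randint(0, 10, (512,))

classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                           nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# "Teaching" the network is repeated exposure to labelled examples:
# predict, measure the error against the label, nudge the weights.
for epoch in range(10):
    logits = classifier(images)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```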

A new breed of ANNs called Adversarial Neural Networks pits the wits of two AI bots against each other, allowing them to learn from each other.

This approach is designed to speed up the learning process and refine the output of AI systems.
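'Adversarial neural networks' here corresponds to what researchers usually call generative adversarial networks (GANs). A minimal sketch of one training loop, with made-up sizes and random stand-in data, shows the two networks learning from each other:

```python
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 16, 64

# Generator tries to turn random noise into samples that look real;
# discriminator tries to tell generated samples apart from real ones.
generator = nn.Sequential(nn.Linear(NOISE_DIM, 64), nn.ReLU(), nn.Linear(64, DATA_DIM))
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(32, DATA_DIM)          # stand-in for real training data

for step in range(100):
    # 1) Train the discriminator to label real data 1 and generated data 0.
    fake = generator(torch.randn(32, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator into outputting 1.
    fake = generator(torch.randn(32, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```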
