The crème de la crème of AI: British scientists make a robot that can put things out of the way to get the milk out of the fridge (but it can't make a decent cuppa yet)
- Finding objects in a messy space is a challenging task for robots to solve
- To teach AI this skill, experts combined planning with trial-and-error learning
- By trying thousands of approaches, the robot can determine which one will succeed
- Similar approaches can be used to enable robots to find things in warehouses
A robot that can quickly dig a bottle of milk out from the depths of a messy refrigerator has been created by a team of British scientists.
The artificial intelligence does this by combining automated planning – based on images of the objects – with trial-and-error learning.
In addition to rearranging refrigerator contents, the same setup could be used to allow robots to perform various similar tasks – such as locating items in a warehouse.
A robot that can quickly dig a bottle of milk out from the depths of a messy fridge, pictured here shifting products on a table, was made by a team of British scientists
When you reach into an overflowing fridge, it is sometimes not possible to remove a desired pint of milk, for example, without first shifting other items out of the way.
Although this kind of manoeuvre comes easily to us, it is a complex task for a robot – it has to think through a series of separate movements leading to the final goal, which can be time-consuming to calculate.
To help artificial intelligence learn this, roboticist Wissam Bejjani and colleagues from the University of Leeds combined two different approaches.
The first – so-called automated planning – enables the robot's vision system to see the problem being presented to it and, from there, simulate the various movements needed to reach the target object.
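The planning idea can be illustrated with a toy sketch: searching through simulated moves until a sequence is found that frees the target item. The one-dimensional 'shelf', the item names, and the swap-based move rule below are all invented for illustration – the real system plans over camera images of cluttered shelves.

```python
from collections import deque

# Toy sketch of automated planning: breadth-first search over simulated
# moves on a tiny one-dimensional "shelf" until a sequence is found that
# brings the target item to the front. Purely illustrative.

def plan_moves(shelf, target):
    """Return the shortest sequence of swaps that brings `target`
    to the front (index 0) of the shelf."""
    start = tuple(shelf)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, moves = queue.popleft()
        if state[0] == target:
            return moves                      # plan found
        for i in range(1, len(state)):
            # simulate pulling item i one slot forward (swap with neighbour)
            nxt = list(state)
            nxt[i - 1], nxt[i] = nxt[i], nxt[i - 1]
            nxt = tuple(nxt)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + [f"swap {state[i]} forward"]))
    return None

print(plan_moves(["jam", "butter", "milk"], "milk"))
# → ['swap milk forward', 'swap milk forward']
```

Because the search explores states level by level, the first plan it finds is also the shortest – but, as the article notes next, a plan found in simulation may still fail on a real, physical shelf.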
However, as the old saying goes, no plan survives contact with the enemy – and such simulations cannot capture the complexity of the real world.
The execution of its plans can therefore lead to the robot not performing its task as intended – for example, by accidentally knocking items onto the floor.
Here the second approach comes in – so-called 'reinforcement learning' – in which the robot performs thousands of trial-and-error attempts to find out which of its plans is most likely to succeed.
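The trial-and-error idea can be sketched as a toy simulation: try candidate plans many times, record how often each succeeds, and pick the most reliable one. The success probabilities and trial counts below are made up for illustration and have nothing to do with the actual Leeds system.

```python
import random

# Toy sketch of trial-and-error learning: the robot has several candidate
# plans and estimates, from thousands of simulated attempts, which one
# succeeds most often. (Hypothetical success probabilities.)

def estimate_plan_success(success_probs, trials=5000, seed=0):
    """Estimate each plan's success rate from repeated random attempts."""
    rng = random.Random(seed)
    counts = [0] * len(success_probs)
    wins = [0] * len(success_probs)
    for _ in range(trials):
        plan = rng.randrange(len(success_probs))   # try a plan at random
        counts[plan] += 1
        if rng.random() < success_probs[plan]:     # did the attempt succeed?
            wins[plan] += 1
    return [w / c if c else 0.0 for w, c in zip(wins, counts)]

rates = estimate_plan_success([0.2, 0.8, 0.5])     # plan 1 is actually best
best = max(range(len(rates)), key=lambda i: rates[i])
print("robot picks plan", best)
```

After enough attempts the estimated rates settle near the true ones, so the robot reliably picks the plan most likely to succeed – the same principle, at toy scale, as the thousands of attempts described above.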
The robot can generalize – apply what it has learned to each unique set of circumstances, Mr. Bejjani said.
The robot's twin strategy, he adds, has been successful in laboratory testing.
'With one problem, in which the robot had to move a large apple, it first moved to the left side of the apple to remove the clutter before manipulating the apple.'
'It did this without the clutter falling off the shelf.'
By using this approach, the thinking speed of the robot was increased by a factor of ten – a decision that would previously have taken the robot 50 seconds can now be reached in five, paper author and computer scientist Mehmet Dogar added.
'Artificial intelligence is good at enabling robots to reason – for example, we've seen robots involved in chess games with grandmasters,' paper author and computer scientist Matteo Leonetti added.
'But robots are not very good at what people do very well: being very mobile and agile.'
'Those physical skills are embedded in the human brain, the result of evolution, and the way we practice and practice and practice.'
'And that is an idea that we apply to the next generation of robots.'
The robot can generalize – apply what it has learned to each unique set of circumstances, Mr. Bejjani said. This could, for example, allow it to efficiently move fruit and vegetables aside to retrieve milk from the depths of an overfilled refrigerator
The full findings of the study were presented at the 2019 International Conference on Intelligent Robots and Systems, held from November 4 to 8 in Macau, China.
A pre-print of the researchers' article, which has not yet been peer-reviewed, can be read on the arXiv repository.
HOW DOES ARTIFICIAL INTELLIGENCE LEARN?
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works to learn.
ANNs can be trained to recognize patterns in information – including speech, text data or visual images – and form the basis for a large number of developments in AI in recent years.
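The pattern-recognition idea can be shown at its smallest scale: a single artificial neuron that learns a simple rule by nudging its weights whenever it gets an example wrong. The AND pattern and the learning-rate value below are textbook illustrations, not anything from the study.

```python
# Toy sketch: a single artificial neuron (a perceptron) learning the
# logical AND pattern from examples. Real ANNs stack many such units.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights whenever the neuron's output is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                # 0 if correct, ±1 if wrong
            w[0] += lr * err * x1             # nudge weights toward the
            w[1] += lr * err * x2             # correct answer
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in AND])
# → [0, 0, 0, 1]
```

After a handful of passes over the examples the neuron classifies every input correctly – the same learn-from-mistakes loop that, scaled up to millions of units, underpins the pattern recognition described above.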
Conventional AI uses input to 'teach' an algorithm about a specific topic by feeding it massive amounts of information.
AI systems rely on artificial neural networks (ANNs), which try to simulate the way the brain works to learn. ANNs can be trained to recognize patterns in information – including speech, text data, or visual images
Practical applications include Google's language translation services, Facebook's facial recognition software, and Snapchat's live image-changing filters.
Entering this data can be extremely time-consuming and is limited to one type of knowledge.
A new breed of ANNs, called adversarial neural networks, pits two AI bots against each other, allowing them to learn from one another.
This approach is designed to speed up the learning process and refine the output of AI systems.