22 Oct 2018
World innovation news
Optimizing Human-Robot Communication with Meta-Learning
The evolution of robotics goes hand in hand with the optimization of communication between humans and robots. A team from the University of California, Berkeley has created a combined method that optimizes human-machine interaction in robot learning through imitation.
Researchers at the Berkeley Artificial Intelligence Research (BAIR) Lab introduced the work in a study presented at the December 2017 NIPS Deep Reinforcement Learning Symposium. Their goal was to create a system that lets robots learn through face-to-face interaction with people.
Learning through Imitation
Robots have learned to beat us at chess and to help us with a range of tasks thanks to two types of technology. They can be programmed to perform specific actions, or they can learn to perform tasks by watching us. The second method, inspired by mimicry, the learning mode of living beings, requires considerable work behind the scenes. It involves giving robots the capacity to learn by imitation in order to reproduce human actions, which they must grasp through visual perception. Learning can take place in a physical environment or in virtual reality.
Whatever the context, this method requires many demonstrations before a deep learning approach can be deployed, because learning from raw visual inputs demands a large amount of data. Current deep learning systems for visual recognition use hundreds of thousands of images to teach a robot to perform a single task.
Other restrictions complicate the visual data analysis process. According to the researchers, a robot faces two main challenges. It must:
- detect and interpret the instructor’s movements and interactions with objects and the environment;
- perform tasks with limbs whose structure, movement and speed differ from those of the human body.
A Technology Combining Two Deep Learning Methods
The technology created by the team combines two methods: learning through imitation and meta-learning. In the meta-learning phase, the robot is first trained on data supplied upstream: demonstrations in which a similar machine has already succeeded in imitating a human being performing a task. From these demonstrations, the robot learns to relate human gestures to robot gestures. This learning phase enables it to identify a task, analyze the human skills involved, and work out how to replicate them. The process builds on an algorithm known as Model-Agnostic Meta-Learning (MAML), which the researchers made domain-adaptive so that it can bridge human and robot demonstrations.
Then, when the robot is shown a new video of a person performing another task with different objects, it infers that it must reproduce the same gestures to achieve the same result. It executes the task by translating the human’s movements and interactions with the environment into its own motor abilities and the actual context.
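The core idea of MAML, learning an initialization that can be adapted to a new task from very little data, can be illustrated on a toy problem. The sketch below is not the authors' system: it uses a one-parameter regression model in place of a vision-based policy, and a first-order approximation of the MAML outer update. The task family (lines y = a·x), the learning rates, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(w, x, y):
    # mean squared error for the toy model y_hat = w * x
    return np.mean((w * x - y) ** 2)

def loss_grad(w, x, y):
    # gradient of the mean squared error with respect to w
    return 2.0 * np.mean((w * x - y) * x)

# Meta-training: each "task" is a line y = a * x with its own slope a.
w = 0.0                  # meta-learned initialization
alpha, beta = 0.1, 0.05  # inner (adaptation) and outer (meta) learning rates

for step in range(2000):
    a = rng.uniform(1.0, 3.0)            # sample a task
    x = rng.uniform(-1, 1, size=20)      # "demonstration" data for adaptation
    y = a * x
    x2 = rng.uniform(-1, 1, size=20)     # held-out data for the outer loss
    y2 = a * x2
    # Inner loop: one gradient step of task-specific adaptation.
    w_adapted = w - alpha * loss_grad(w, x, y)
    # Outer loop (first-order approximation): update the initialization
    # so that the *adapted* parameters perform well on held-out data.
    w = w - beta * loss_grad(w_adapted, x2, y2)

# One-shot adaptation to an unseen task from a tiny batch, analogous to
# adapting to a single demonstration video at test time.
a_new = 2.5
x = rng.uniform(-1, 1, size=5)
y = a_new * x
w_new = w - alpha * loss_grad(w, x, y)
print(mse(w_new, x, y) < mse(w, x, y))  # adaptation improves the fit
```

The point of the sketch is the two nested loops: the inner step adapts to one task, and the outer step moves the shared initialization so that a single inner step suffices, which is what allows the robot in the study to learn from one video.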
To evaluate this system, the team first equipped a robot with training data from a robot applying wall paint. The robot then watched a single video of a person moving an object to perform the same task.
The team now wants to extend the robot’s learning capabilities by allowing it to learn several different tasks, each from a single visual demonstration.
This study, entitled “One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning” and co-authored by Tianhe Yu, Chelsea Finn, Annie Xie, Sudeep Dasari, Tianhao Zhang, Pieter Abbeel and Sergey Levine, was published online at the Cornell University Library.