Researchers at Google Research and the University of California, Berkeley, have recently developed an imitation learning system that could enable a variety of agile locomotion behaviors in robots. Their technique, presented in a paper pre-published on arXiv, allows robots to acquire new skills by imitating animals.
"This project builds on some previous works from computer graphics, which trained simulated characters to move by imitating human motion capture data," Jason Peng, one of the researchers who carried out the study, told TechXplore. "Most of these techniques were primarily applied in simulation, but in our recent project we took a first step towards applying them to real robots."
Peng and his colleagues initially trained a four-legged robot to imitate the movements and walking style of a dog within a simulated environment. Their system was trained on motion data recorded from a real dog, using an approach known as reinforcement learning.
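At its core, this style of motion imitation defines a reinforcement-learning reward that measures how closely the simulated robot tracks a reference motion-capture frame. The sketch below is a minimal, simplified illustration of that idea, not the authors' actual reward function; the pose representation, error weighting, and `scale` constant are all assumptions for the example.

```python
import numpy as np

def imitation_reward(robot_pose, ref_pose, scale=5.0):
    """Toy imitation reward: how closely does the robot's pose match
    the reference mocap frame?

    Both poses are joint-angle vectors (radians). The exponentiated
    negative squared error yields a reward in (0, 1]: exactly 1.0 when
    the robot matches the reference, decaying smoothly as it deviates.
    """
    err = np.sum((np.asarray(robot_pose) - np.asarray(ref_pose)) ** 2)
    return float(np.exp(-scale * err))

# Example: a perfect match earns the maximum reward,
# while a mismatched pose earns strictly less.
r_match = imitation_reward([0.1, -0.2, 0.3], [0.1, -0.2, 0.3])
r_off = imitation_reward([0.5, -0.2, 0.3], [0.1, -0.2, 0.3])
```

An RL algorithm then trains the policy to maximize this reward summed over the reference clip, which is what drives the simulated robot toward the dog's gait.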
"One of the advantages of training in simulation is that it is very fast, so we can simulate months of training in a matter of days," Peng explained. "Once the robot has been trained in simulation, we can adapt what it has learned to a real robot, using only a few minutes of data collected in the real world."
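One way such a sim-to-real step can work with only minutes of real-world data is to leave the trained policy fixed and tune only a handful of conditioning parameters on the physical robot. The following is a hedged sketch of that general idea, assuming a hypothetical black-box `evaluate_return(z)` that runs the sim-trained policy conditioned on a low-dimensional vector `z` on hardware and reports the episode return; it is not the authors' exact adaptation procedure.

```python
import numpy as np

def adapt_latent(evaluate_return, dim=4, iters=50, sigma=0.3, seed=0):
    """Hill-climb a low-dimensional latent vector using real rollouts.

    Because only `dim` numbers are tuned (not the policy's weights),
    a few dozen short rollouts can suffice. Each iteration perturbs
    the current best latent and keeps the perturbation only if the
    measured return improves, so the result never gets worse.
    """
    rng = np.random.default_rng(seed)
    z_best = np.zeros(dim)
    r_best = evaluate_return(z_best)
    for _ in range(iters):
        z = z_best + sigma * rng.standard_normal(dim)
        r = evaluate_return(z)
        if r > r_best:
            z_best, r_best = z, r
    return z_best, r_best

# Toy stand-in for real rollouts: return is highest at a hidden target.
target = np.array([0.5, -0.5, 0.5, -0.5])
toy_return = lambda z: -float(np.sum((z - target) ** 2))
z_found, r_found = adapt_latent(toy_return)
```

The greedy accept-if-better loop guarantees the adapted return is at least as good as the unadapted starting point, which is the behavior one wants when every evaluation costs real robot time.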
The imitation learning method employed by Peng and his colleagues is far more scalable than traditional techniques for designing robotic controllers. Instead of hand-designing a new controller for every skill one wants to reproduce, their approach trains robots to achieve specific locomotion styles simply by showing them a few examples of animals performing the desired movements.
Peng and his colleagues evaluated their approach in a series of experiments, training Laikago, an 18-DoF quadruped robot, to reproduce different animal locomotion behaviors, including different ways of running, hopping and turning. Remarkably, their technique allowed the robot to automatically synthesize controllers for a variety of animal locomotion styles, effectively transferring the skills it learned in simulated environments to the real world.
"The most exciting result for us was that the same underlying method can learn a pretty large variety of skills, ranging from walking to dynamic hopping and turning, and all of the skills learned in simulation can also be transferred to a real robot," Peng said. "These imitation learning techniques could make it much easier to build large repertoires of skills for robots that can enable them to move and interact more agilely with the real world."
In the future, the imitation learning system developed by Peng and his colleagues could enable a broader variety of agile movements in animal-inspired robots. Currently, their technique can only be trained on motion capture data, but the researchers are working to extend it so that it can also learn from videos of animals.
"We are now interested in trying to get robots to imitate different kinds of motion data, such as video clips," Peng said. "Motion capture data can sometimes be fairly difficult to record, especially from animals, as getting a dog into a mocap studio can be tricky. It would be great if we can just use our phones to record some video clips of what we want the robot to do and then have the robot learn how to reproduce those skills automatically."
More information: Learning agile robotic locomotion skills by imitating animals. arXiv:2004.00784 [cs.RO]. arxiv.org/abs/2004.00784
© 2020 Science X Network