New algorithm allows human being to communicate task to robot by performing it first in virtual reality

(Tech Xplore)—A new algorithm developed by a team at OpenAI (backed by Elon Musk) allows a robot to be taught a task by having a human being first demonstrate it in a virtual reality setting. As the researchers note, the scheme is based on what the company calls one-shot imitation learning.

As computer hardware has improved, it has been continually applied to robotics, creating ever more useful machines. The software behind such robots has been evolving, too—from simple command-driven systems to complex schemes that combine an assortment of hardware and learning mechanisms. In this new effort, the team at OpenAI has added a new twist, allowing a robot to learn how to do something by watching it being done in a virtual world.

Prior robot learning systems have relied on having a robot watch something being done in the real world, or on physically moving its parts and having it remember what occurred. Both methods have limitations and drawbacks because they take place in the real world. Creating a virtual world allows for adding elements that support the learning process. In this new work, for example, the researchers first taught the robot about blocks (their colors, their locations, and what they look like when stacked on a table) by showing it many examples of such elements in a virtual world over a very short amount of time. Doing the same thing in the real world would have taken hours, weeks or even months.
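To see why simulation speeds this up, consider generating training scenes programmatically. The sketch below is purely illustrative: the scene representation, color list, and coordinate ranges are assumptions, not OpenAI's actual setup. It produces thousands of labeled block configurations in a fraction of a second, something that would require hours of physical setup with a real robot and real blocks.

```python
import random

# Illustrative only: randomized virtual "block world" training scenes.
# Every field name and range here is an assumption for the sketch.
COLORS = ["red", "green", "blue", "yellow"]

def random_scene(n_blocks, seed=None):
    """Generate one labeled scene: each block has a color, a table
    position, and a stacking level (0 = resting on the table)."""
    rng = random.Random(seed)
    return [
        {
            "color": rng.choice(COLORS),
            "x": round(rng.uniform(0.0, 1.0), 3),
            "y": round(rng.uniform(0.0, 1.0), 3),
            "stack_level": rng.randint(0, 2),
        }
        for _ in range(n_blocks)
    ]

# Thousands of labeled examples, generated in well under a second:
dataset = [random_scene(4, seed=i) for i in range(10_000)]
print(len(dataset))  # 10000
```

In a real pipeline, each generated scene would be rendered by the simulator into the images the robot's vision system trains on.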

Once the robot has learned about possibilities, that information feeds the first of two neural networks at its disposal: the Vision Network, which essentially learns what is possible. The robot then accesses a second neural network, the Imitation Network, which uses the first network's output along with what it has learned to devise a strategy for mimicking the actions in a scene it views of a robot picking up and stacking blocks, a scene created by a human being manually controlling a virtual robot. The result was a robot able to learn a task demonstrated by a human in a virtual world after viewing it just one time.
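The two-network flow described above can be sketched in a few lines. This is a toy stand-in, not OpenAI's actual architecture: the layer shapes, the single-matrix "networks," and all the names are assumptions chosen only to show how the pieces connect, with the key one-shot idea being that the Imitation Network conditions on features of the single demonstration as well as the current scene.

```python
import numpy as np

rng = np.random.default_rng(0)

def vision_network(image, W):
    """Stand-in for the Vision Network: map a raw observation
    (a flattened image) to a feature vector describing the scene."""
    return np.tanh(image @ W)

def imitation_network(current_features, demo_features, V):
    """Stand-in for the Imitation Network: condition on features of
    the CURRENT scene and of the single DEMONSTRATION to produce an
    action (e.g. a gripper motion command)."""
    combined = np.concatenate([current_features, demo_features])
    return np.tanh(combined @ V)

# Toy dimensions, all assumed: 64-value observation, 16-d features, 4-d action.
W = rng.standard_normal((64, 16)) * 0.1
V = rng.standard_normal((32, 4)) * 0.1

demo_obs = rng.standard_normal(64)     # one frame from the human's VR demo
current_obs = rng.standard_normal(64)  # what the robot sees right now

# The same vision network encodes both the demo and the live scene.
action = imitation_network(
    vision_network(current_obs, W),
    vision_network(demo_obs, W),
    V,
)
print(action.shape)  # (4,)
```

Because the demonstration enters as an input rather than being baked into the weights, a single new demo is enough to change what the policy imitates, which is the point of one-shot imitation learning.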

The team at OpenAI notes that the results may seem rather simple, but they point out that the algorithm itself can be programmed to teach a wide variety of tasks faster and more efficiently than other systems.


© 2017 Tech Xplore

Citation: New algorithm allows human being to communicate task to robot by performing it first in virtual reality (2017, May 18) retrieved 16 September 2019
