All finger robots want for Christmas is a hand like Dactyl

Credit: OpenAI

A lettered, multi-colored block: for humans, picking it up, turning it over and tossing it around in the palm of the hand is a trivial task. For a robotics expert, though, it is an uphill climb. Hand manipulation has always been a challenge for robots.

Enter Dactyl. An OpenAI video posted Monday, titled Learning Dexterity, proudly showed off the system, which was built to manipulate objects with first-rate skill.

The accent is on the word dexterity. Its fingers handle the block in a way that is quite remarkable, deftly turning it onto its different sides. The system learned how to rotate the block into any orientation requested.

They trained a neural network, said IEEE Spectrum's Evan Ackerman, to control a Shadow hand to manipulate objects, in just 50 hours.

To be sure, the other reason the hand drew interest was that it was trained in so short a time. Ackerman underscored the significance of such time savings for robot teams. (The numbers are humbling. IEEE Spectrum mentioned 50 successful cube manipulations as the result of 6,144 CPU cores and 8 GPUs collecting 100 years of simulated robot experience in 50 hours.)

It takes humans years to achieve "robust" levels of hand manipulation. Well, robots, said Ackerman, "don't have that kind of time. Learning through practice and experience is still the way to go for complex tasks like this, and the challenge is finding a way to learn faster and more efficiently than just giving a robot something to manipulate over and over until it learns what works and what doesn't, which would probably take about a hundred years."

Reuters similarly described why their work matters: "Physical training takes months or years and has problems of its own - for example, if a robot hand drops a workpiece, a human needs to pick it up and put it back. That is expensive as well. Researchers have sought to chop up those years of physical training and distribute them to multiple computers for a software simulation that can do the training in hours or days, without human help."
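OpenAI's actual distributed setup isn't described here beyond the core counts, but the pattern the Reuters passage points at can be sketched in a few lines of Python. Everything below, from the worker count to the simulate_episode stand-in, is a hypothetical illustration, not their infrastructure:

    from multiprocessing import Pool
    import numpy as np

    # Toy sketch of spreading simulated practice across worker processes so
    # large amounts of experience can be collected in little wall-clock time.

    def simulate_episode(seed: int) -> float:
        """Stand-in for one simulated manipulation episode; returns a fake score."""
        rng = np.random.default_rng(seed)
        return float(rng.normal())   # a real system would return a full trajectory

    if __name__ == "__main__":
        with Pool(processes=8) as pool:           # 8 workers stand in for thousands of cores
            scores = pool.map(simulate_episode, range(1000))
        print("episodes collected:", len(scores))

Scaling the same pattern up to thousands of cores is what lets 100 years of simulated experience fit into roughly 50 hours of wall-clock time.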

Another exciting aspect was flagged by Stephen Nellis in the Reuters article: "Researchers injected random noise into the software simulation, making the robot hand's virtual world messy enough that it was not befuddled by the unexpected in the real world."

In raising the bar on hand manipulation, the team managed to cover sources of variability that cannot be modeled well. Ackerman wrote, "This includes the mass and dimensions of the object, friction of both the object's surface and the robot's fingertips, how well the robot's joints are damped, actuator forces, joint limits, motor backlash and noise, and more."
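Neither article shows what that randomization looks like in code, but the pattern is simple: re-sample the hard-to-model parameters before every simulated episode, so the policy never banks on any single value being the true one. The Python sketch below is a minimal illustration; the parameter names, ranges and the idea of passing them to a simulator reset are assumptions made for this example, not OpenAI's actual interface:

    import numpy as np

    # Illustrative domain-randomization sketch (not OpenAI's code): before each
    # simulated episode, the hard-to-model physics parameters are re-sampled.
    # All names and ranges below are assumptions made for this example.

    def sample_randomized_physics(rng: np.random.Generator) -> dict:
        """Draw one randomized physics configuration for a training episode."""
        return {
            "cube_mass_kg": rng.uniform(0.05, 0.15),         # object mass
            "cube_size_m": rng.uniform(0.045, 0.060),        # object dimensions
            "surface_friction": rng.uniform(0.5, 1.5),       # object/fingertip friction
            "joint_damping_scale": rng.uniform(0.7, 1.3),    # how well joints are damped
            "actuator_force_scale": rng.uniform(0.8, 1.2),   # actuator strength
            "motor_backlash_rad": rng.uniform(0.0, 0.02),    # slack in the motors
            "sensor_noise_std": rng.uniform(0.0, 0.01),      # noise added to observations
        }

    rng = np.random.default_rng(42)
    for episode in range(3):
        physics = sample_randomized_physics(rng)
        # A simulator wrapper would be reset with `physics` here before the
        # reinforcement-learning rollout begins; the wrapper itself is omitted.
        print(episode, physics["surface_friction"])

Because every episode looks slightly different, a policy that succeeds across all of them has to rely on strategies that also survive the mismatch between simulation and the physical hand.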

In their OpenAI blog posting, the team said that they trained a human-like robot hand to manipulate physical objects "with unprecedented dexterity." They noted how Dactyl was trained entirely in simulation, "adapting to real-world physics using techniques we've been working on for the past year. Dactyl learns from scratch using the same general-purpose reinforcement learning algorithm and code as OpenAI Five."

It is possible to train agents in simulation and have them solve real-world tasks, they said, without physically accurate modeling of the world.
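As a concrete, self-contained toy of that claim (deliberately far simpler than Dactyl's large-scale reinforcement learning), the script below picks a one-parameter policy purely from simulations whose physics are randomized, then evaluates it on a "real" system whose parameter was never seen during training:

    import numpy as np

    # Toy illustration of sim-to-real with randomized physics; not OpenAI's code.
    rng = np.random.default_rng(0)

    def rollout(gain, k, steps=20):
        """Push an object at position x toward 0 with feedback gain k.
        'gain' stands in for an unknown physical property of the real system."""
        x, cost = 1.0, 0.0
        for _ in range(steps):
            action = -k * x          # simple linear feedback policy
            x = x + gain * action    # dynamics with the unknown gain
            cost += x * x            # penalize distance from the target
        return -cost                 # higher return = better tracking

    def train_with_randomization(n_candidates=200, n_sims=50):
        """Choose the policy parameter k that works best across randomized sims."""
        best_k, best_return = None, -np.inf
        for k in np.linspace(0.1, 2.0, n_candidates):
            gains = rng.uniform(0.3, 1.5, size=n_sims)   # randomized physics
            avg = np.mean([rollout(g, k) for g in gains])
            if avg > best_return:
                best_k, best_return = k, avg
        return best_k

    k = train_with_randomization()
    real_gain = 0.9                  # the "real world" value, never seen in training
    print(f"chosen k={k:.2f}, return on real system={rollout(real_gain, k):.3f}")

The policy is never tuned to the real parameter; it simply has to work well enough across the whole randomized range, which is the essence of training in simulation without a physically accurate model.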

More information: OpenAI blog: blog.openai.com/learning-dexterity/

Paper: d4mucfpksywv.cloudfront.net/re … -dexterity-paper.pdf

© 2018 Tech Xplore

Citation: All finger robots want for Christmas is a hand like Dactyl (2018, July 31) retrieved 29 March 2024 from https://techxplore.com/news/2018-07-finger-robots-christmas-dactyl.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
