Robotic hand uses AI, 100 simulated years to achieve dexterity

AI researchers from non-profit organisation OpenAI have created a system that allows a robotic hand to learn to manipulate physical objects with unprecedented dexterity, with no human input.

OpenAI’s research has made significant advances in the field of training robots in simulated environments in order to solve real-world problems more swiftly and efficiently than was possible before.

Using 6,144 CPU cores and eight GPUs to train the robot hand, OpenAI was able to amass the equivalent of one hundred years of real-world experience in just 50 hours of simulation.
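For a sense of scale, the acceleration over real time follows directly from those figures. A quick illustrative calculation (the numbers come from the article; the script itself is just a sketch):

```python
# Rough arithmetic implied by the figures above:
# ~100 years of experience gathered in ~50 hours of wall-clock training.
HOURS_PER_YEAR = 365 * 24            # 8,760 hours per year
experience_hours = 100 * HOURS_PER_YEAR
wall_clock_hours = 50

speedup = experience_hours / wall_clock_hours
print(f"Effective speed-up over real time: ~{speedup:,.0f}x")
# -> Effective speed-up over real time: ~17,520x
```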

Their robotic hand system, known as Dactyl (from the Greek daktylos, meaning finger), uses the humanoid Shadow Dexterous Hand from the Shadow Robot Company. Thanks to a reinforcement learning algorithm, the hand successfully taught itself to rotate a cube 50 times in succession.

This required Dactyl to learn various manipulation behaviours for itself, including finger pivoting, sliding, and finger gaiting.
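The article does not name the specific algorithm, but the shape of any reinforcement learning setup is the same: act, observe a reward, and reinforce what worked. Below is a minimal, hypothetical training loop in Python; `env`, `policy`, and the reward signal are illustrative placeholders, not Dactyl's actual code:

```python
def train(env, policy, episodes=1_000):
    """Generic reinforcement-learning loop (illustrative, not Dactyl's code).

    The environment would reward progress toward a target cube orientation;
    behaviours such as finger gaiting emerge because they earn more reward.
    """
    for _ in range(episodes):
        observation = env.reset()          # new episode: cube placed in the palm
        done = False
        while not done:
            action = policy.act(observation)              # choose joint movements
            observation, reward, done = env.step(action)  # advance the simulation
            policy.update(observation, action, reward)    # reinforce what worked
```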

Using simulation to train a robotic hand

An OpenAI blog post explains how the research team placed a cube in the palm of the robot hand and asked it to reorient the object. Dactyl did so using just the inputs from three RGB cameras and the coordinates of its fingertips, testing its findings at high speed in a virtual environment before carrying them out in the real world.
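Put another way, the policy's inputs are compact: fingertip coordinates from the hand, plus a cube pose estimated from the three camera feeds. A hypothetical sketch of that observation structure (field names and shapes are assumptions, not OpenAI's code):

```python
from typing import NamedTuple
import numpy as np

class DactylObservation(NamedTuple):
    """Hypothetical sketch of the policy's inputs, as described in the article."""
    fingertip_positions: np.ndarray  # shape (5, 3): xyz for each fingertip
    cube_position: np.ndarray        # shape (3,): estimated from three RGB cameras
    cube_orientation: np.ndarray     # shape (4,): quaternion, also vision-estimated
```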

Once trained in simulation without human input, Dactyl was able to perform the assigned task without any fine-tuning from OpenAI’s human researchers.

The team used an approach known as domain randomisation, allowing the system to gain experience quickly from experimentation and testing in the virtual world, before applying its findings in the real one.

The MuJoCo physics engine used to simulate the robot can only approximate physical attributes such as friction, damping, and rolling resistance, which are difficult to measure accurately in the real world. It also struggles to reproduce the contact forces that occur when manipulating an object.

The research team overcame these hurdles using domain randomisation, in which aspects of the simulated environment, such as the cube's mass, surface friction, and the hand's own dynamics, were varied at random between training runs. Trained across many randomised worlds via a neural network, the Dactyl system learned behaviours robust enough to survive the gap between simulation and reality.
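As a concrete sketch of what that randomisation might look like, the snippet below resamples the hard-to-measure parameters mentioned above before each training episode. The parameter names and ranges are illustrative assumptions, not OpenAI's actual values:

```python
import random

def randomise_physics():
    """Resample hard-to-measure physical parameters for one training episode.

    Ranges are illustrative; the idea is that a policy trained across many
    random worlds must become robust enough to work in the one real world.
    """
    return {
        "cube_mass_kg":      random.uniform(0.05, 0.2),
        "surface_friction":  random.uniform(0.5, 1.5),
        "joint_damping":     random.uniform(0.8, 1.2),   # scale factor
        "actuator_delay_ms": random.uniform(0.0, 40.0),
    }

for episode in range(3):
    params = randomise_physics()
    # env.configure(params)  # apply to the simulator before the episode (hypothetical API)
    print(params)
```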

Dactyl lab setup (credit: OpenAI)

How reinforcement learning could advance robotics

The robot adopted many of the hand movements used by humans, as the research paper explains:

Our method does not rely on any human demonstrations, but many behaviours found in human manipulation emerge naturally, including finger gaiting, multi-finger coordination, and the controlled use of gravity.

While the robot hand’s abilities still fall short of human dexterity and practical usefulness, the results are impressive. They demonstrate that deep reinforcement learning algorithms can be applied to real-world robotics, helping machines learn more quickly than humans are able to.

Internet of Business says

Solving the inherent clumsiness of humanoid robots has been a longstanding problem for researchers.

Robots lack an ability that humans acquire as children: perceiving the properties of an object before touching it. We can guess how heavy an object is, what it’s made of, and what it will feel like, and with that information we intuit how best to pick it up and manipulate it.

Most robots are also unable to detect the shear forces and vibrations that humans can sense through their skin and adjust their grip as required.

Several research projects have explored flexible sensor ‘skins’ that aim to tackle this problem, taking what’s known as a ‘multimodal’ approach.

OpenAI’s machine learning system reveals the potential of simulated environments in which robots teach themselves. This bypasses the need for trainers to spend hours inputting instructions, and enables robots to complete tasks that were previously impossible for machines by letting them work out the best approach on their own.

While the physics engine used is a rigid-body simulator, it would be fascinating to see what a reinforcement learning approach could do with a simulator that models the deformable silicone of a soft robot. This would combine the advantages of the flexible sensor skin approach with those of machine learning.