Robotics

Robotic hand identifies what it's grasping by sensing its shape

The GelSight EndoFlex hand grasps a Rubik's Cube, with insets showing how four of its six sensors see the contours of the object
MIT

If a robot is going to be grasping delicate objects, then that bot had better know what those objects are, so it can treat them accordingly. A new robotic hand makes that possible by sensing the shape of an object along the full length of its three digits.

Developed by a team of scientists at MIT, the experimental device is known as the GelSight EndoFlex. And true to its name, it incorporates the university's GelSight technology, which had previously been used only in the fingertip pads of robotic hands.

The EndoFlex's three mechanical digits are arranged in a Y shape – there are two "fingers" at the top, with an opposable "thumb" at the bottom. Each one consists of an articulated hard polymer skeleton, encased within a soft and flexible outer layer. The GelSight sensors themselves – two per digit – are located on the underside of the top and middle sections of those digits.

Each sensor incorporates a slab of clear, synthetic rubber that is coated on one side with a layer of metallic paint – that paint serves as the finger's skin. When the paint is pressed against a surface, it deforms to the shape of that surface. Looking through the opposite, unpainted side of the rubber, a tiny integrated camera (with help from three colored LEDs) images the minute contours of the surface as they press up into the paint.
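
GelSight sensing is known to be based on photometric stereo: each colored LED lights the painted membrane from a different direction, so the red, green and blue intensities at any pixel together encode the local slope of the surface. Below is a minimal Python sketch of that idea, assuming calibrated LED directions and a simple Lambertian reflectance model; the actual sensor calibration isn't described in the article.

```python
import numpy as np

# Hypothetical calibrated directions of the three colored LEDs (unit vectors);
# a real sensor would obtain these from a calibration procedure.
LIGHT_DIRS = np.array([
    [0.50, 0.00, 0.87],    # red LED
    [-0.25, 0.43, 0.87],   # green LED
    [-0.25, -0.43, 0.87],  # blue LED
])

def normals_from_rgb(image: np.ndarray) -> np.ndarray:
    """Estimate per-pixel surface normals from an (H, W, 3) RGB image,
    assuming Lambertian shading: intensity_k = light_k . normal."""
    h, w, _ = image.shape
    intensities = image.reshape(-1, 3).astype(np.float64)
    # Solve the 3x3 linear system L @ n = I for every pixel at once.
    normals = intensities @ np.linalg.inv(LIGHT_DIRS).T
    # Normalize to unit length, guarding against dark pixels.
    norms = np.linalg.norm(normals, axis=1, keepdims=True)
    normals /= np.clip(norms, 1e-8, None)
    return normals.reshape(h, w, 3)
```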

Special algorithms on a linked computer turn those contours into 3D images that capture details less than one micrometer in depth and approximately two micrometers in width. The paint is needed to standardize the optical properties of the surface, so that the system isn't confused by multiple colors or materials.
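
Turning per-pixel slopes into a height map is a standard surface-integration step. The article doesn't say which method MIT uses; one common choice is Frankot-Chellappa least-squares integration in the Fourier domain, sketched here:

```python
import numpy as np

def depth_from_normals(normals: np.ndarray) -> np.ndarray:
    """Integrate an (H, W, 3) unit-normal map into a relative height map
    (Frankot-Chellappa least-squares integration in the Fourier domain)."""
    nz = np.clip(normals[..., 2], 1e-6, None)
    p = -normals[..., 0] / nz  # surface gradient dz/dx
    q = -normals[..., 1] / nz  # surface gradient dz/dy
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0  # avoid dividing by zero at the DC term
    z_hat = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    z_hat[0, 0] = 0.0  # height is recovered only up to a constant offset
    return np.real(np.fft.ifft2(z_hat))
```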

In the case of the EndoFlex, combining images from all six sensors at once (two on each of the three digits) makes it possible to create a three-dimensional model of the item being grasped. Machine-learning-based software can then identify what object that model represents, after just a single grasp. The system is about 85% accurate in its present form, although that figure should improve as the technology is developed further.
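
The article doesn't describe the recognition model itself, so the following is purely an illustrative sketch: six per-digit depth maps stacked as input channels to a small convolutional classifier. The architecture, input resolution, and class count are all assumptions.

```python
import torch
import torch.nn as nn

class GraspClassifier(nn.Module):
    """Toy model: six sensor depth maps stacked as input channels."""
    def __init__(self, num_classes: int = 10):  # class count is illustrative
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 6, H, W), one depth map per GelSight sensor
        return self.head(self.features(x).flatten(1))

# One grasp: six 64x64 depth maps in, a predicted object class out.
model = GraspClassifier()
depth_maps = torch.randn(1, 6, 64, 64)  # placeholder sensor data
predicted_class = model(depth_maps).argmax(dim=1)
```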

"Having both soft and rigid elements is very important in any hand, but so is being able to perform great sensing over a really large area, especially if we want to consider doing very complicated manipulation tasks like what our own hands can do," said mechanical engineering graduate student Sandra Liu, who co-led the research along with undergraduate student Leonardo Zamora Yañez and Prof. Edward Adelson.

"Our goal with this work was to combine all the things that make our human hands so good into a robotic finger that can do tasks other robotic fingers can’t currently do."

Source: MIT
