Anyone who has played a game of Jenga will know the delicate touch required to keep the tower of wooden blocks from crashing down, and it's not the kind of finesse you'd associate with a typical robot. Engineers at MIT have now developed a manipulator arm that can push and prod with the best of them, relying on visual data, machine learning algorithms and a fast-moving branch of robotics built around tactile feedback.
Robots that possess a sense of "touch" in addition to more familiar abilities like seeing and grasping objects are becoming increasingly common, and they are bringing all kinds of new possibilities with them.
Last month, for example, the British Army took delivery of bomb disposal robots fitted with a manipulator arm that relays physical feedback to a remote operator, apparently offering human-like dexterity when defusing explosives from a distance. We are also seeing these kinds of machines put to use exploring the ocean depths for treasure, enabling telerobotic handshakes between Earth and space, and giving prosthesis users a sense of touch.
While machines are very much capable of playing games that involve human-level thinking, such as Go or chess, taking on a game like Jenga, with its need for a delicate touch, is another story entirely.
"Playing the game of Jenga also requires mastery of physical skills such as probing, pushing, pulling, placing, and aligning pieces," says Alberto Rodriguez, Assistant Professor in the Department of Mechanical Engineering at MIT. "It requires interactive perception and manipulation, where you have to go and touch the tower to learn how and when to move blocks. This is very difficult to simulate, so the robot has to learn in the real world, by interacting with the real Jenga tower. The key challenge is to learn from a relatively small number of experiments by exploiting common sense about objects and physics."
Rodriguez and his team fitted out an industrial ABB IRB 120 robotic arm with an external camera, a force-sensing wrist cuff and a soft-pronged gripper. They then began training the robot by having it choose a block in the Jenga tower at random, then select a specific spot on that block to push ever so gently. Each time it did so, a connected computer recorded the visual and force measurements and labeled the attempt as successful or unsuccessful.
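To make that training loop concrete, here is a minimal sketch in Python of how such a push-and-record procedure could be structured. Everything in it is a hypothetical stand-in, since the team hasn't released code: the `push_block` function, the simulated sensor readings and the success threshold are all invented for illustration.

```python
# A minimal sketch of the data-collection loop described above.
# push_block, the sensor readings and the thresholds are hypothetical
# stand-ins for the real robot/camera interfaces.
import random

NUM_BLOCKS = 54  # a standard Jenga tower: 18 layers of three blocks

def push_block(block_id, offset_mm):
    """Stand-in for a real push: returns simulated (force_N, displacement_mm)."""
    force = random.uniform(0.1, 5.0)         # reading from the wrist cuff
    displacement = random.uniform(0.0, 4.0)  # block motion seen by the camera
    return force, displacement

dataset = []
for attempt in range(300):                   # roughly 300 pushes, as reported
    block = random.randrange(NUM_BLOCKS)     # pick a block at random
    offset = random.uniform(-10.0, 10.0)     # pick a spot on that block (mm)
    force, displacement = push_block(block, offset)
    # Label the attempt: the block moved freely without heavy resistance
    success = displacement > 1.0 and force < 2.5
    dataset.append((block, offset, force, displacement, success))
```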
The team says that within around 300 attempts using this mix of tactile and visual feedback, the robot had learned models that could predict which blocks would be harder to move than others, and which might cause the tower to fall. It did this by sorting the outcomes of its pushes into clusters.
"The robot builds clusters and then learns models for each of these clusters, instead of learning a model that captures absolutely everything that could happen," says Rodriguez.
The team then compared the trained robot's performance to that of human players and found little difference in their success at removing blocks while keeping the tower upright. They also tested it alongside machine learning algorithms playing computer simulations of the game, and found their own system learned far more efficiently.
"We provide to these algorithms the same information our system gets, to see how they learn to play Jenga at a similar level," says study co-author Miquel Oller. "Compared with our approach, these algorithms need to explore orders of magnitude more towers to learn the game."
While a Jenga-playing robot is impressive, it's not the endgame here. The researchers hope the technology can be put to use in environments where a careful eye and a delicate touch are needed, such as sorting recyclables from trash and assembling consumer products.
"In a cellphone assembly line, in almost every single step, the feeling of a snap-fit, or a threaded screw, is coming from force and touch rather than vision," Rodriguez says. "Learning models for those actions is prime real-estate for this kind of technology."
The research was published in the journal Science Robotics, and you can see the robot do its thing in the video below.
Source: MIT