Robots can perform an ever-increasing number of human-like actions, but until recently, lying wasn’t one of them. Now, thanks to researchers at the Georgia Institute of Technology, they can. More accurately, the Deep South robots have been taught “deceptive behavior.” This might sound like the recipe for a Philip K. Dick-esque disaster, but it could have practical uses. Robots on the battlefield, for instance, could use deception to elude captors. In a search and rescue scenario, a robot might have to be deceptive to handle a panicking human. For now, however, the robots are using their new skill to play a mean game of hide-and-seek.
Regents professor Ronald Arkin and research engineer Alan Wagner utilized interdependence theory and game theory to create algorithms that tested the value of deception in a given situation. In order for deception to be deemed appropriate, the situation had to involve a conflict between the deceiving robot and another robot, and the deceiving robot had to benefit from the deception. It carried out its dastardly deeds by providing false communications regarding its actions, based on what it knew about the other robot.
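The two-part test the article describes — deceive only when the situation is a conflict and the deceiver stands to gain — can be sketched in a few lines. This is a hypothetical reconstruction for illustration, not the researchers' actual algorithm; the class and function names are invented.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    in_conflict: bool            # the two robots have opposing goals
    payoff_if_honest: float      # deceiver's expected benefit without deceiving
    payoff_if_deceptive: float   # deceiver's expected benefit from deceiving

def deception_warranted(s: Situation) -> bool:
    """Apply the two conditions the article attributes to Arkin and
    Wagner: the situation must involve a conflict, and the deceiving
    robot must benefit from the deception."""
    return s.in_conflict and s.payoff_if_deceptive > s.payoff_if_honest

# Hide-and-seek is a conflict, and a false trail raises the hider's
# odds of escaping, so both tests pass.
print(deception_warranted(Situation(True, 0.3, 0.75)))  # True
```

Note that either condition failing rules deception out: cooperation (no conflict) or no expected benefit both yield honest behavior.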
What it all boiled down to was a series of 20 hide-and-seek experiments. The autonomous hiding/deceiving robot could randomly choose one of three hiding spots, and would have no choice but to knock over one of three paths of colored markers to get there. The seeking robot could then, presumably, find the hiding robot by identifying which path of markers was knocked down. Sounds easy, except that sneaky, conniving hiding robot would turn around after knocking down one path of markers, and go hide in one of the other spots.
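The hiding robot's trick — knock down the markers leading to one spot, then hide in another — amounts to laying a false trail. A minimal simulation, assuming (hypothetically) three named hiding spots and a seeker that simply follows the disturbed trail:

```python
import random

SPOTS = ["left", "center", "right"]

def hide_with_false_trail(rng: random.Random) -> tuple[str, str]:
    """Pick a spot whose markers get knocked down (the false trail),
    then actually hide in one of the other two spots."""
    fake = rng.choice(SPOTS)
    real = rng.choice([s for s in SPOTS if s != fake])
    return fake, real

def seeker_guess(knocked_down: str) -> str:
    # The seeker naively assumes the knocked-down trail marks the hideout.
    return knocked_down

rng = random.Random(0)
fake, real = hide_with_false_trail(rng)
print(seeker_guess(fake) == real)  # False: the seeker is fooled
```

Because the hider never hides where it knocked down markers, a trail-following seeker is wrong every time in this idealized version; the robots' 75 percent success rate reflects the real-world noise of actually executing the maneuver.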
In 75 percent of the trials, the hiding robot succeeded in evading the seeking robot. In the other 25 percent, it failed to knock down the markers needed to produce the desired deception. The full results of the Georgia Tech experiment were recently published in the International Journal of Social Robotics.
“The experimental results weren’t perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment,” said Wagner. “The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot.”
The project was funded by the Office of Naval Research.