
Robots taught to deceive

Prof. Ronald Arkin (left) and research engineer Alan Wagner with their hide-and-seek-playing robots

Robots can perform an ever-increasing number of human-like actions, but until recently, lying wasn't one of them. Now, thanks to researchers at the Georgia Institute of Technology, it is. More accurately, the Deep South robots have been taught "deceptive behavior." This might sound like a recipe for a Philip K. Dick-esque disaster, but it could have practical uses. Robots on the battlefield, for instance, could use deception to elude captors. In a search-and-rescue scenario, a robot might have to be deceptive to handle a panicking human. For now, however, the robots are using their new skill to play a mean game of hide-and-seek.

Regents professor Ronald Arkin and research engineer Alan Wagner utilized interdependence theory and game theory to create algorithms that tested the value of deception in a given situation. In order for deception to be deemed appropriate, the situation had to involve a conflict between the deceiving robot and another robot, and the deceiving robot had to benefit from the deception. It carried out its dastardly deeds by providing false communications regarding its actions, based on what it knew about the other robot.
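
The decision rule the article describes reduces to a two-part test: is there a conflict, and does the lie pay? As a rough illustration, here is a minimal Python sketch of that test. It is not the authors' published algorithm; the `Situation` fields and payoff numbers are illustrative assumptions.

```python
# Minimal sketch (not the authors' published algorithm) of the two
# conditions the article describes: deception is only deemed appropriate
# when the robots are in conflict AND the deceiver benefits from the lie.

from dataclasses import dataclass

@dataclass
class Situation:
    is_conflict: bool          # do the two robots' goals oppose each other?
    payoff_if_believed: float  # deceiver's payoff if the false signal works
    payoff_if_truthful: float  # deceiver's payoff for honest communication

def deception_warranted(s: Situation) -> bool:
    """Deceive only in a conflict where the false signal pays better."""
    return s.is_conflict and s.payoff_if_believed > s.payoff_if_truthful

# Hide-and-seek is a conflict, and evading the seeker beats being found.
print(deception_warranted(Situation(True, 1.0, -1.0)))  # True
```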

Georgia Tech Regents professor Ronald Arkin (left) and research engineer Alan Wagner

What it all boiled down to was a series of 20 hide-and-seek experiments. The autonomous hiding/deceiving robot could randomly choose one of three hiding spots, and had no choice but to knock over one of three paths of colored markers to get there. The seeking robot could then, presumably, find the hiding robot by identifying which path of markers was knocked down. Sounds easy, except that the sneaky, conniving hiding robot would turn around after knocking down one path of markers and go hide in one of the other spots.
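
To make the trick concrete, here is a toy Python simulation of one trial. It is an idealized model rather than the robots' actual control code; the 25 percent marker-failure rate is an assumption chosen to echo the reported results, and all the names are hypothetical.

```python
import random

HIDING_SPOTS = ["left", "center", "right"]
MARKER_FAILURE_RATE = 0.25  # assumed odds the wrong markers get knocked down

def run_trial(rng: random.Random) -> bool:
    """One hide-and-seek trial; returns True if the hider evades the seeker."""
    # The hider knocks down the markers along one path...
    false_trail = rng.choice(HIDING_SPOTS)
    # ...then doubles back and hides at one of the *other* two spots.
    true_spot = rng.choice([s for s in HIDING_SPOTS if s != false_trail])
    # Physical failure mode from the article: sometimes the markers that
    # actually fall are the ones on the hider's real path.
    fallen = true_spot if rng.random() < MARKER_FAILURE_RATE else false_trail
    # The seeker simply checks the spot whose markers are down.
    return fallen != true_spot

rng = random.Random(42)
wins = sum(run_trial(rng) for _ in range(20))
print(f"hider evaded the seeker in {wins} of 20 trials")
```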

In 75 percent of the trials, the hiding robot succeeded in evading the seeking robot. In the other 25 percent, it failed to knock down the markers needed to produce the desired deception. The full results of the Georgia Tech experiment were recently published in the International Journal of Social Robotics.

“The experimental results weren’t perfect, but they demonstrated the learning and use of deception signals by real robots in a noisy environment,” said Wagner. “The results were also a preliminary indication that the techniques and algorithms described in the paper could be used to successfully produce deceptive behavior in a robot.”

The project was funded by the Office of Naval Research.

3 comments
Omer Qadir
Trivial!! Viruses have been doing this for AGES!!
Ariel Dahan
"You taught me language; and my profit on't / Is, I know how to curse. The red plague rid you / For learning me your language." (Shakespeare, The Tempest, Act I, Scene ii, ll. 365-367; Caliban's invective)
Jeff Frontz
To Omer's point, why were robots necessary to conduct this research? It seems like a software simulation would have accomplished the same thing (albeit lacking the "wow" factor).