Teaching a robot how to deal with real-world problems is a challenging task. There has been much progress in building robots that can precisely repeat individual tasks with a level of speed and accuracy impossible for human craftspeople. But there are many more tasks that could be done if robots could be supplied with even a limited amount of judgement. A robotics group led by Professor Sylvain Calinon at the Italian Institute of Technology (IIT) is making progress in solving this problem.
What is judgement? Imagine the task of replacing a light bulb. A human unscrews the light bulb, finds a replacement, then screws the replacement into the light socket. If sockets are in different locations than expected, or are tilted, or if the replacement light bulb has different dimensions than the person has previously encountered, that person will usually be able to tell that they are confronted with "the same sort" of task, and will adapt their knowledge of previous tasks to changing out the bulb. The twin steps of recognizing the task and adapting past knowledge are roughly what we call judgement or decision-making.
In contrast, an accurate but unthinking robot would only be able to change the light bulb if the socket and bulb were in pre-programmed positions and orientations, and of the same type for which the robot's program was written.
The IIT robotic system emulates judgement through three factors – constraints, recognition, and adaptation:
- Constraints – "Don't hit the glass of the light bulb" and "Don't squeeze or torque the light bulb too strongly" are examples of simple constraints that provide some appearance of judgement;
- Recognition – Using vision or tactile sensing, the robot recognizes the real task in front of it by comparing its geometry with rotated and translated (6D) versions of the model task descriptions and solutions stored in the robot's memory;
- Adaptation – Converting the robot's ideal motions into the frame of reference in which the real task looks like the model task.
The level of sophistication with which the three factors of judgement can be implemented varies widely. Constraints are relatively easy to build into software. The problem is that constraints are very specific, and a large number must be input to avoid pitfalls in a given task.
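Constraints of this kind can be sketched as explicit checks in code. The function and thresholds below are hypothetical stand-ins for a light-bulb task (force in newtons, torque in newton-metres), not from any real robot API:

```python
# Hypothetical constraint limits for a light-bulb-changing task.
MAX_GRIP_FORCE_N = 8.0         # don't squeeze the glass too hard
MAX_TWIST_TORQUE_NM = 0.5      # don't torque the bulb too strongly
MIN_GLASS_CLEARANCE_M = 0.002  # keep the gripper off the glass envelope

def violated_constraints(grip_force, twist_torque, glass_clearance):
    """Return a list of the names of violated constraints (empty if all pass)."""
    violations = []
    if grip_force > MAX_GRIP_FORCE_N:
        violations.append("grip force too high")
    if twist_torque > MAX_TWIST_TORQUE_NM:
        violations.append("twist torque too high")
    if glass_clearance < MIN_GLASS_CLEARANCE_M:
        violations.append("too close to the glass")
    return violations

print(violated_constraints(5.0, 0.3, 0.010))   # []
print(violated_constraints(9.0, 0.3, 0.001))   # two violations
```

Even this toy version shows the scaling problem described above: each pitfall needs its own hand-written check, so the list of constraints grows with every new task variant.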
Visual recognition software has come a long way from its beginnings. However, it is still a computation-intensive activity. Imagine a recognition system that constructs three-dimensional data about a real task. To recognize the real task as a version of some model task from memory, the system might take each model in turn and attempt to fit it onto the exterior shape of the real task.
Where this becomes difficult is when the shape of the real task doesn't exactly match the corresponding model task. The recognition software must have some notion of how close the real task geometry is to some variation of the model task geometry. Providing this ability becomes more difficult as the acceptable variations of the model task geometry increase. Simple task recognition is currently not very robust, although it is improving.
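The fitting step can be illustrated in two dimensions. This toy sketch, assuming point correspondences are already known, estimates the rotation and translation that best map a stored model shape onto observed points (the 2-D form of the Kabsch alignment method) and scores the fit by its residual error; real systems work with full 6-D poses and must also solve the much harder correspondence problem:

```python
import math

def fit_model_to_scene(model, scene):
    """Align 2-D model points to scene points; return (angle, tx, ty, rms)."""
    n = len(model)
    mcx = sum(x for x, _ in model) / n; mcy = sum(y for _, y in model) / n
    scx = sum(x for x, _ in scene) / n; scy = sum(y for _, y in scene) / n
    # Optimal 2-D rotation: accumulate cross and dot terms of centered points
    s_cross = s_dot = 0.0
    for (mx, my), (sx, sy) in zip(model, scene):
        px, py = mx - mcx, my - mcy
        qx, qy = sx - scx, sy - scy
        s_cross += px * qy - py * qx
        s_dot += px * qx + py * qy
    angle = math.atan2(s_cross, s_dot)
    c, s = math.cos(angle), math.sin(angle)
    tx = scx - (c * mcx - s * mcy)
    ty = scy - (s * mcx + c * mcy)
    # Residual RMS error: how closely the scene matches this model
    sq = 0.0
    for (mx, my), (sx, sy) in zip(model, scene):
        rx, ry = c * mx - s * my + tx, s * mx + c * my + ty
        sq += (rx - sx) ** 2 + (ry - sy) ** 2
    return angle, tx, ty, math.sqrt(sq / n)

# A unit square, observed rotated 90 degrees and shifted right by 2
model = [(0, 0), (1, 0), (1, 1), (0, 1)]
scene = [(2, 0), (2, 1), (1, 1), (1, 0)]
angle, tx, ty, rms = fit_model_to_scene(model, scene)
```

The residual (`rms`) is the "notion of how close" mentioned above: a low residual for some model suggests the real task is a rotated and translated version of that model.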
A solution to the recognition problem also provides a way to map the real task onto the model task. Using that mapping, the model task instructions can be readily mapped to the real task through the inverse of the recognition mapping.
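The adaptation step can be sketched the same way, again as a 2-D toy with illustrative numbers. Depending on which direction the recognition transform was estimated, either it or its inverse carries the model-frame waypoints (the "ideal motions") into the real frame:

```python
import math

def make_transform(angle, tx, ty):
    """Rigid 2-D transform as a function: rotate by angle, then translate."""
    c, s = math.cos(angle), math.sin(angle)
    return lambda x, y: (c * x - s * y + tx, s * x + c * y + ty)

def invert_transform(angle, tx, ty):
    """Inverse rigid transform: undo the translation, then the rotation."""
    c, s = math.cos(-angle), math.sin(-angle)
    return lambda x, y: (c * (x - tx) - s * (y - ty),
                         s * (x - tx) + c * (y - ty))

# Suppose recognition found the real scene rotated 90 degrees and
# shifted by (2, 0) relative to the model.
to_real = make_transform(math.pi / 2, 2.0, 0.0)
to_model = invert_transform(math.pi / 2, 2.0, 0.0)

rx, ry = to_real(1.0, 0.0)     # model waypoint carried into the real frame
mx, my = to_model(rx, ry)      # and mapped back: recovers (1, 0)
```

Applying the same transform to every waypoint of the model solution is all the "adaptation" a rigid pose change requires; more general variations (different bulb sizes, for instance) need richer mappings.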
Learning through demonstration

The IIT group has developed a robot whose purpose in life is to help a person build an IKEA table. Rather than having the table top attached to a bench in a precise position and orientation, the robot holds the table in midair while the person screws in one of the table legs.
Initially, the robot is in a compliant mode, so that the table can be placed in various positions freely by the human partner. In this mode, the robot is learning through demonstration. What it learns is that when its partner is adjusting the position of the table top, it should freely follow the partner's movements. Next, when the robot sees its partner begin to screw in a table leg, it becomes stiff to make the process of fitting the leg easier. The change from compliant to stiff, as well as the reverse change, is not the result of a conventional command, but occurs in response to real-time visual recognition of a change in the scene – a leg is now being rotated, rather than held still or being adjusted. The behavior was not itself pre-programmed. Rather, the robot was taught that behavior by the user.
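The learned switching behavior can be pictured as a small state machine driven by what the robot perceives. The scene labels and class below are hypothetical stand-ins for the real-time visual recognition described above, not the actual IIT software:

```python
class AssemblyAssistant:
    """Toy two-state impedance controller driven by recognized scene labels."""

    def __init__(self):
        self.mode = "compliant"  # start by following the partner freely

    def on_scene(self, label):
        """Update the mode from a recognized scene label; return the new mode."""
        if label == "leg_rotating":        # partner is screwing in a leg
            self.mode = "stiff"            # hold the table top rigidly
        elif label in ("adjusting", "person_in_danger"):
            self.mode = "compliant"        # follow movements / yield safely
        return self.mode

robot = AssemblyAssistant()
robot.on_scene("adjusting")      # -> "compliant"
robot.on_scene("leg_rotating")   # -> "stiff"
```

The point of the demonstration-learning approach is that the transitions in this table are taught by the user rather than hand-coded; the sketch only shows the structure of the resulting behavior.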
Another important aspect of this approach is that the robot can be trained to go into compliant mode whenever it detects a person endangered by its operations, making it safer for humans and robots to work together.
In order to truly introduce robotic assistants into modern society, it will be necessary to develop robots that are intrinsically safe, flexible in learning new tasks, and simple enough for ordinary citizens to teach. The IIT work demonstrated in the video below is a significant step in that direction.