Researchers at Carnegie Mellon University have come up with a novel way of improving robots' inspection and manipulation skills. The team fitted a small camera to the machine's hand, allowing it to map its surroundings and track the hand's position in real time.

Remotely operated or autonomous robots often feature cameras and other sensors on the head section of the machine, but that arrangement is often far from ideal, largely because robots usually lack the flexibility to lean over and get a good look at the space they're working in.

Researchers believe that placing cameras and sensors on the hands of robots could be much more effective, and thanks to sensors becoming smaller and more power-efficient, it's now something that can be easily investigated.

However, it's not quite as simple as integrating a sensor into a robotic hand and getting to work. If the machine can't actually see the hand and place it within its immediate environment, it won't be able to complete tasks such as object manipulation and inspection all that effectively.

One well-known solution that can help tackle the problem is simultaneous localization and mapping (SLAM). It involves the robot fusing data from various sensors, such as radars and cameras, to build a 3D map of the space around it in real time while also estimating its own position within that map. There are existing algorithms that can perform the task, but they're extremely computationally intensive, making them impractical for widespread use.
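To give a feel for why this gets expensive, even the localization half of the problem alone involves searching over many candidate poses. The sketch below is purely illustrative and not from the CMU work: it assumes a known 2D landmark map and noise-free range measurements, and recovers the robot's position by brute-force least-squares search over a grid of candidate poses.

```python
import math

# Hypothetical 2D landmark map (x, y) -- illustrative only.
landmarks = [(2.0, 1.0), (-1.0, 3.0), (4.0, -2.0)]

def ranges_from(pose):
    """Distances from a candidate (x, y) position to every landmark."""
    x, y = pose
    return [math.hypot(lx - x, ly - y) for lx, ly in landmarks]

def localize(measured, step=0.05, extent=5.0):
    """Brute-force grid search for the position whose predicted ranges
    best match the measured ones (sum of squared errors).
    Exhaustive searches like this are what make naive approaches costly."""
    best, best_err = None, float("inf")
    steps = int(2 * extent / step)
    for i in range(steps + 1):
        for j in range(steps + 1):
            pose = (-extent + i * step, -extent + j * step)
            err = sum((p - m) ** 2 for p, m in zip(ranges_from(pose), measured))
            if err < best_err:
                best, best_err = pose, err
    return best

true_pose = (0.5, -1.25)
measured = ranges_from(true_pose)   # noise-free for clarity
print(localize(measured))           # close to (0.5, -1.25), up to grid resolution
```

Full SLAM is much harder still, since the map itself is unknown and must be estimated jointly with the pose at every step.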

However, the Carnegie Mellon researchers found that mounting the camera on a robotic arm with the hand in sight can make things much simpler. The geometry of the arm constrains how the camera is able to move, and by automatically tracking the joint angles, it's possible to very quickly produce high-quality maps of the environment, even during rapid movement.
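The core idea can be sketched with a planar two-link arm: forward kinematics over the known joint angles gives the camera's pose directly, so a point seen by the camera can be placed in the world frame without having to infer the camera's motion from images. The link lengths, angles, and function names below are illustrative assumptions, not details of the CMU system.

```python
import math

def camera_pose(theta1, theta2, l1=0.4, l2=0.3):
    """Forward kinematics of a planar two-link arm.
    Returns the wrist-mounted camera's position (x, y) and heading
    angle in the world frame, computed from joint angles alone."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    heading = theta1 + theta2
    return x, y, heading

def to_world(point_cam, pose):
    """Transform a point measured in the camera's frame into the
    world frame, using the pose known from the joint angles."""
    px, py = point_cam
    cx, cy, h = pose
    return (cx + px * math.cos(h) - py * math.sin(h),
            cy + px * math.sin(h) + py * math.cos(h))

# A depth point 0.2 m straight ahead of the camera, with the
# shoulder at 0 rad and the elbow at 90 degrees:
pose = camera_pose(0.0, math.pi / 2)
print(to_world((0.2, 0.0), pose))   # roughly (0.4, 0.5)
```

Because the pose comes from the joint encoders rather than from image matching, each depth reading can be dropped into the shared map immediately, which is what allows mapping to keep up even during fast arm motion.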

The researchers tested out the technology on a system with a small depth camera attached to a lightweight manipulator arm. Using the depth data to construct a virtual 3D model of a bookshelf, the team found the results to be equivalent to, or better than, alternative mapping techniques.

"We still have much to do to improve this approach, but we believe it has huge potential for robot manipulation," said associate professor of robotics Siddhartha Srinivasa.

The findings of the work, which was jointly funded by Toyota, the Office of Naval Research, and the National Science Foundation, were presented today at the IEEE International Conference on Robotics and Automation in Stockholm.