March 25, 2008 The nuances and imprecision of human language pose big challenges for developers looking to advance voice-control technology for helper robots, but researchers at the Georgia Institute of Technology have found an effective way to sidestep verbal communication altogether: directing a robot to fetch items with a laser pointer.
While scientists make incremental progress toward robots that respond to speech, gestures and body language, the El-E (pronounced like the name Ellie) robot could help users with mobility impairments by retrieving items selected with a laser pointer. “We humans naturally point at things but we aren’t very accurate, so we use the context of the situation or verbal cues to clarify which object is important,” said Charlie Kemp, who led the research team from the Center for Healthcare Robotics in the Health Systems Institute at Georgia Tech. “Robots have some ability to retrieve specific, predefined objects, such as a soda can, but retrieving generic everyday objects has been a challenge for robots.” The laser pointer interface and methods developed by Kemp’s team overcome this challenge by providing a direct way for people to communicate a location of interest to El-E, along with complementary methods that enable El-E to pick up an object found at that location. Through these innovations, El-E can retrieve objects without understanding what the object is or what it’s called.
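The article doesn’t spell out how El-E actually finds the laser dot in a camera image, but the basic idea can be sketched with standard computer-vision tools. The snippet below (Python with OpenCV 4; every name here is illustrative rather than El-E’s actual code) thresholds a frame for bright, red-dominant pixels and returns the centroid of the largest blob as the candidate laser spot.

```python
import cv2
import numpy as np

def find_laser_spot(frame_bgr):
    """Return the (x, y) pixel of the likeliest laser dot, or None.

    A naive stand-in for a real detector: a red laser dot tends to
    saturate the red channel far above green and blue, so threshold
    on brightness and red dominance, clean up speckle noise, and take
    the centroid of the largest remaining blob.
    """
    img = frame_bgr.astype(np.int16)
    b, g, r = img[:, :, 0], img[:, :, 1], img[:, :, 2]
    mask = ((r > 200) & (r - g > 60) & (r - b > 60)).astype(np.uint8) * 255
    # Morphological opening removes single-pixel speckle before blob search.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```

A real detector would also have to cope with distance, ambient lighting and the distortion of an omni-directional lens, but the red-dominance threshold is the core idea.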
A custom-built, omni-directional camera lets El-E see most of the room, which helps it locate selected items. The robot estimates where the item is and travels to that location. If the location is above the floor, the robot finds the edge of the surface on which the object is sitting, such as the edge of a table, and uses its sensors to elevate its arm to the height of the object. El-E then uses a laser range finder to scan across the surface and, after moving its hand above the object, uses a camera in its hand to visually distinguish the object from the texture of the floor or table. It refines the hand’s position and orientation and descends on the object, using sensors in its hand to decide when to stop moving down and begin closing its gripper, which it closes on the object until it has a secure grip.

Once the robot has picked up the item, the laser pointer can be used to guide it to another location to deposit the item, or to direct it to take the item to a person. El-E distinguishes between these two situations by looking for a face near the selected location. It uses the location of the face and legs to determine where to present the object, then returns to the user’s side once the item has been delivered.
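For clarity, the retrieve-and-deliver sequence described above can be paraphrased as a simple control-flow sketch. This is a hypothetical rendering, not El-E’s actual software; every method name is invented for illustration and assumes some underlying hardware interface.

```python
class Fetcher:
    """Hypothetical paraphrase of El-E's fetch-and-deliver sequence."""

    def __init__(self, robot):
        self.robot = robot  # assumed hardware/driver interface

    def fetch(self):
        spot = self.robot.locate_laser_spot()    # omni-directional camera
        self.robot.drive_to(spot)
        if spot.height_above_floor > 0:
            # Object sits on a raised surface: find its edge, match height.
            edge = self.robot.find_surface_edge()
            self.robot.raise_arm_to(edge.height)
        self.robot.scan_surface_with_rangefinder()
        self.robot.center_hand_over_object()     # hand camera vs. surface texture
        self.robot.descend_until_contact()       # hand sensors signal when to stop
        self.robot.close_gripper_until_secure()

    def deliver(self):
        target = self.robot.locate_laser_spot()  # user points again
        # A face near the selected spot means "hand it to this person";
        # otherwise the spot is treated as a drop-off location.
        face = self.robot.find_face_near(target)
        if face is not None:
            legs = self.robot.find_legs_below(face)
            self.robot.present_object_at(face, legs)
        else:
            self.robot.place_object_at(target)
        self.robot.return_to_user()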
El-E’s power and computation are all on board (no tethers or hidden computers in the next room), and it runs Ubuntu Linux on a Mac mini. The Georgia Tech team is now working to expand El-E’s capabilities to include switching lights on and off when the user selects a light switch, and opening and closing doors when the user selects a door knob. The team is also gathering input from doctors and patients with ALS (amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease) to prepare the robot to assist people with the severe mobility challenges associated with the disease.
Home helper robots are becoming big business. Last year WowWee unveiled Rovio, a three-wheeled, Wi-Fi-enabled robot that can carry out home surveillance. Additionally, iRobot has released a number of handy home assistants, including the award-winning, gutter-cleaning Looj. More recently, the Readybot Robot Challenge announced details of its first prototype kitchen-cleaning robot.
Via Georgia Tech.