
Robo Brain uses the web to teach robots human knowledge

The team behind the Robo Brain project (Aditya Jami, Kevin Lee, Prof. Ashutosh Saxena, Ashesh Jain, Ozan Sener and Chenxia Wu)
Robo Brain is a computational system from which computers can learn information that helps them to understand the world

One of the steps toward making robots into the all-powerful overlords envisioned in books and movies is to teach them all human knowledge. A project named Robo Brain can do this without any help from humans, trawling the web in search of information and then sharing it with robots.

The project is being carried out by academics at Cornell University's Department of Computer Science. The team comprises many of the same researchers who produced the Tell Me Dave robot, which is able to understand and follow natural language instructions.

Robo Brain is also able to understand natural language, and uses this capability to make sense of the information it finds on the internet. It allows robots to understand how the world works using the data it finds, as opposed to simply storing the data without having any insight into it. The system can teach robots things like how to find your keys, pour a drink, put away dishes and when not to interrupt two people having a conversation.

"If a robot encounters a situation it hasn't seen before, it can query Robo Brain in the cloud," explains Ashutosh Saxena, assistant professor of computer science at Cornell University. "Robo Brain will learn to recognize objects by comparing them with images online. From this, it will be able to learn what they are called and how they are used."

For example, a robot might see a coffee mug and learn from Robo Brain that not only is it a coffee mug, but also that liquids can be poured into or out of it, that it can be held by the handle, and that it must be carried upright when it is full, but need not necessarily be upright when it is in a dishwasher or cupboard.
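A robot's view of that kind of object knowledge can be pictured as a small affordance record. The sketch below is purely illustrative: the `ObjectKnowledge` class, its fields, and the lookup function are invented for this article, not Robo Brain's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ObjectKnowledge:
    """Hypothetical affordance record a robot might retrieve for an object."""
    name: str
    affordances: list   # actions the object supports
    constraints: dict   # context-dependent handling rules

# The coffee mug example from the article, encoded as one record.
mug = ObjectKnowledge(
    name="coffee mug",
    affordances=["pour_into", "pour_out_of", "grasp_by_handle"],
    constraints={
        "carry": "keep upright when full",
        "dishwasher": "upright not required",
        "cupboard": "upright not required",
    },
)

def carry_rule(obj: ObjectKnowledge, context: str) -> str:
    """Look up how to handle the object in a given context."""
    return obj.constraints.get(context, "no special rule")

print(carry_rule(mug, "carry"))       # keep upright when full
print(carry_rule(mug, "dishwasher"))  # upright not required
```

The point of such a record is that the robot stores not just a label ("coffee mug") but rules about how the object behaves in different contexts, which is the insight the article contrasts with simply storing raw data.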

"We have first built the infrastructure layer for the brain, and are currently feeding it with data and learned pieces of information," Saxena tells Gizmag. "Currently, we are applying learning algorithms such as 'structured deep learning' and 'co-active learning' to help the brain learn better."

Approximately 1 billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals are being downloaded and analyzed by Robo Brain. The information is translated and stored in an open-source format that can be understood by robots.
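Translating a scraped how-to document into a machine-readable record might look like the toy sketch below. The JSON schema here is an assumption made for illustration; the article does not describe Robo Brain's actual storage format, only that it is open source and robot-readable.

```python
import json

def to_record(source_url: str, task: str, steps: list) -> str:
    """Package a scraped how-to into a simple JSON record a robot could consume.

    The schema (source/task/steps with ordered actions) is invented
    for illustration, not Robo Brain's real format.
    """
    return json.dumps({
        "source": source_url,
        "task": task,
        "steps": [{"order": i + 1, "action": s} for i, s in enumerate(steps)],
    })

record = to_record(
    "https://example.com/how-to-pour-coffee",
    "pour a drink",
    ["grasp mug by handle", "tilt carafe over mug", "stop before mug overflows"],
)
print(record)
```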

The conceptual stage of the Robo Brain project began a few months ago and the system went live in July. Four university partners are currently working on the project, and it will be opened up to others as progress is made. It's expected to be two to three years before Robo Brain can be used on a larger scale.

The project is supported by the National Science Foundation, The Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative.

A talk by Saxena at the 2014 Robotics: Science and Systems Conference last month provided an introduction to Robo Brain.

Source: Robo Brain

1 comment
The Skud
Personally, I would rather see the opposite of this project - evolve an easily learnt (by humans) language to control a robot that would not pick up too many cues from a conversation near the robot. It would need to hear and recognise its name, then listen for clear instructions, not just walk past - as in the example given - see a coffee cup and place it in the dishwasher. There might have been a special reason for that cup being there, on display, to 'hold a place' while a person visits the buffet, or whatever.