Robotics

MIT is teaching robots to better move among us

MIT's crowd-navigating pedestrian robot in action
MIT

A team of scientists at MIT has developed an autonomous robot that uses a suite of sensors and an advanced machine learning technique to navigate crowded areas while adhering to (human) social norms. The wheeled automaton could be another step towards fully automated delivery bots, or even smart personal mobility devices capable of navigating a busy street.

If robots are ever going to move freely among us, going about their (hopefully) innocent day-to-day activities, it's important that they're not only able to understand and navigate their environment, but also to predict and navigate around us. A robot must know where it is, know where we are, and be able to plan a route and execute its chosen path.

Previous attempts at getting robots to navigate a human-packed locale have met with varying degrees of frustration. A trajectory-based approach, for example – whereby a robot predicts where a person is going to walk based on sensor data – is problematic because the environment is constantly changing: by the time the robot has gathered its data and worked out its next move, the prediction may already be stale. This often results in jerky, stop-start movement.
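The trajectory-based approach amounts to dead-reckoning each pedestrian forward from recent sensor readings. A minimal sketch in Python, assuming constant velocity (the function name and sampling numbers here are illustrative, not taken from MIT's system):

```python
import numpy as np

def predict_path(track, horizon_s, dt=0.1):
    """Extrapolate a pedestrian's future positions assuming constant velocity.

    track: recent (x, y) observations sampled every dt seconds, oldest first.
    Returns predicted (x, y) points every dt seconds out to horizon_s.
    """
    track = np.asarray(track, dtype=float)
    # Estimate velocity from the last two observations.
    velocity = (track[-1] - track[-2]) / dt
    steps = int(horizon_s / dt)
    return [track[-1] + velocity * dt * (k + 1) for k in range(steps)]

# A person walking along +x at 1 m/s, sampled every 0.1 s:
walker = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
future = predict_path(walker, horizon_s=1.0)
# If the person suddenly turns, every one of these points becomes wrong,
# and the robot must stop and re-plan -- the stop-start behavior above.
```

The weakness is visible in the code: the prediction is only as good as the constant-velocity assumption, and it must be recomputed from scratch every time the scene changes.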

Another method involves programming a robot with a simple reactive approach to crowd management, where it uses geometry and physics to plan a route and avoid collisions. This works great when someone is walking in a straight line, but human beings are unpredictable beasts, prone to sudden changes in direction, which could result in person and bot attempting to occupy the same space at the same time.
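The reactive, geometry-only method can be sketched as a closest-point-of-approach test that assumes everyone keeps moving in a straight line. The names and clearance threshold below are illustrative, not MIT's actual code:

```python
import numpy as np

def time_to_closest_approach(p_rel, v_rel):
    """Time at which two constant-velocity agents are closest (never negative)."""
    speed_sq = float(np.dot(v_rel, v_rel))
    if speed_sq < 1e-12:          # no relative motion
        return 0.0
    return max(0.0, -float(np.dot(p_rel, v_rel)) / speed_sq)

def collision_predicted(p_robot, v_robot, p_person, v_person, clearance=0.5):
    """Pure geometry: flag a collision if straight-line motion would bring
    robot and person closer than `clearance` meters."""
    p_rel = np.asarray(p_person, dtype=float) - np.asarray(p_robot, dtype=float)
    v_rel = np.asarray(v_person, dtype=float) - np.asarray(v_robot, dtype=float)
    t = time_to_closest_approach(p_rel, v_rel)
    miss_distance = float(np.linalg.norm(p_rel + v_rel * t))
    return miss_distance < clearance

# Head-on: the collision is correctly predicted.
print(collision_predicted((0, 0), (1, 0), (4, 0), (-1, 0)))   # True
# But if the person veers mid-stride, the straight-line check clears a
# trajectory that a real, unpredictable human might not follow.
print(collision_predicted((0, 0), (1, 0), (4, 0), (-1, 1)))   # False
```

The failure mode the article describes falls out directly: the check is only valid for as long as the person actually walks in a straight line.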

The MIT team set out to teach its robotic buddy to navigate crowds using a technique called reinforcement learning. On a basic level, the method involves putting the robot through a series of computer-simulated training scenarios designed to teach it how to deal with objects traveling at various speeds and trajectories, while taking note of simulated people in the environment.

Simulation was also used to teach the robot to navigate while observing social norms, such as walking on the right-hand side and keeping to a pedestrian pace of 1.2 meters per second. When the robot is then faced with a room of people in the real world, it recognizes certain situations encountered during the training, and deals with them accordingly while observing pedestrian rules.
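The kind of reward shaping described here – punishing collisions, rewarding right-hand passing, and keeping speed near a pedestrian pace – might look something like the sketch below. The function, weights, and thresholds are hypothetical; the actual reward used in MIT's work differs:

```python
import math

PREFERRED_SPEED = 1.2   # m/s, the pedestrian pace quoted in the article

def social_reward(robot_vel, person_bearing, min_separation):
    """Hypothetical reward shaping for norm-following navigation.

    robot_vel:       (vx, vy) of the robot
    person_bearing:  angle (rad) to the nearest oncoming person, measured
                     from the robot's heading (positive = on our left)
    min_separation:  closest distance to any person this step (m)
    """
    if min_separation < 0.3:               # collision: dominates everything
        return -10.0
    reward = 0.0
    speed = math.hypot(*robot_vel)
    reward -= abs(speed - PREFERRED_SPEED) # keep to a pedestrian pace
    if person_bearing > 0:                 # oncoming person passes on our left,
        reward += 0.2                      # so we are correctly keeping right
    else:
        reward -= 0.2                      # passing on the left: penalize
    return reward

# Walking at the preferred pace while keeping right earns a small bonus:
social_reward((1.2, 0.0), person_bearing=0.5, min_separation=1.0)   # 0.2
```

During simulated training, the learning algorithm would adjust the robot's policy to maximize this kind of signal, so norm-following behavior emerges without being hand-coded as hard rules.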

Outside the simulation, MIT describes its robot as a "knee-high kiosk on wheels." It's fitted with a variety of sensors – including a webcam, a depth sensor, and a high-resolution lidar unit – that allow the robot to perceive its environment, and it employs open-source algorithms to help determine its position.

The sensors assess the environment around the robot every tenth of a second, allowing it to fluidly adjust its path on the go without the need to stop and calculate its best option.

"We're not planning an entire path to the goal — it doesn't make sense to do that anymore, especially if you're assuming the world is changing," comments graduate student Michael Everett, one of the co-authors of a paper on the research. "We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural, and is anticipating what people are doing."
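The loop Everett describes is a receding-horizon controller running at 10 Hz. The toy version below simply steers straight for the goal each step, whereas the real robot also folds in its learned crowd model; the names and structure are illustrative:

```python
import numpy as np

DT = 0.1  # seconds between decisions, matching the ten-per-second sensor rate

def choose_velocity(position, goal, max_speed=1.2):
    """Pick one short-horizon velocity toward the goal; no full path is planned."""
    to_goal = np.asarray(goal, dtype=float) - np.asarray(position, dtype=float)
    distance = float(np.linalg.norm(to_goal))
    if distance < 1e-9:
        return np.zeros(2)
    # Cap the speed, and slow down so a single DT step never overshoots.
    return to_goal / distance * min(max_speed, distance / DT)

def run(start, goal, n_steps=100):
    """Look at the world, choose a velocity, move for DT, repeat."""
    position = np.asarray(start, dtype=float)
    for _ in range(n_steps):
        velocity = choose_velocity(position, goal)  # the real robot would also
        position = position + velocity * DT         # consult its crowd model here
    return position

final = run((0.0, 0.0), (5.0, 0.0))  # 100 decisions at 10 Hz reach the goal
```

Because each decision is cheap and covers only the next tenth of a second, the robot can absorb a changing world without ever stopping to recompute a full path.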

The scientists combined their unusual-looking robot with the reinforcement learning technique, and headed down to MIT's Stata Center for a series of physical tests. The robot successfully navigated the winding, pedestrian-clogged hallways of the building for 20 minutes at a time without bumping into a single person.

"We wanted to bring it somewhere where people were doing their everyday things, going to class, getting food, and we showed we were pretty robust to all that," Everett says. "One time there was even a tour group, and it perfectly avoided them."

The team plans to continue and expand its research, examining how the robot fares when pedestrians move in groups rather than as individuals. This could require an updated set of behavioral rules.

Some people may be somewhat concerned by the idea of teaching robots to move among us, but most would likely agree that MIT's latest robot poses little threat to humanity. If this technology is ever combined with MIT's Cheetah design, and powered by the IBM Watson supercomputer to create some kind of robotic centaur, maybe then it's time to worry.

A paper detailing the research will be presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) next month.

The video below shows MIT's robot at work.

Source: MIT

2 comments
Daishi
This is much more useful research than putting legs on robots. We have a lot of basic navigation obstacles (like indoor mapping and positioning) that still have to be solved before we are ready to put $200k legged mobility systems on robots. Basic robotic navigation is the fundamental building block that most actual applications must be built on, so it's essential work. There are a bunch of other use cases, but personally I want to see ubiquitous availability of telepresence robots at things like conferences and museums so people can attend "in person" from anywhere in the world. Outdoor spaces can be navigated/seen remotely by geofenced drones. cape.com actually has a working version of this technology using DJI's Inspire. My view is anyone who wants to see places like the Grand Canyon should be able to reserve a drone to fly around and have it autopilot back to base when they are done or the time is up. I think you guys should cover the cape.com platform because it's interesting. Someone on YouTube did a demo of it here: https://www.youtube.com/watch?v=4oONCoH6afA
Bob Flint
The study of human behavior is at the center of autonomous mobility, be it a wheeled robot or a larger vehicle on the roads. Humans are sometimes predictable, but can also be illogical, impaired, or just dumb...
Most humans would have walked past the woman waiting for her coffee, since there was plenty of space, but the posture, presence, and physical body language humans use are difficult to impart to a robot in doses that read as authoritative rather than aggressive.