Robotics

MIT's visual navigation tech quickly shows delivery robots the door

New navigation software developed at MIT enables delivery robots to find the front door using visual cues
MIT News

Aerial drones steal most of the limelight, but robots built to roam cities at ground level may also play a big role in the automated delivery services of the future. An engineering team at MIT has been working on new navigation software that enables these machines to recognize the typical features of a front yard and, in doing so, find their way to the front door far more efficiently.

Big names like Amazon, FedEx and Domino’s are at least exploring the possibility of autonomous delivery robots that roll through the streets and drop customers’ goods off at their doors. These types of machines might be equipped with maps of a neighborhood, or even a whole city, and then use GPS or a set of pre-loaded coordinates to reach their destination.

The MIT team believes this might be overkill for robots that only need to carry items over short distances, so it has been working on an alternative approach. Instead of pre-loading the robot with a map of its environment, the researchers have developed a navigation technology that enables robots to rely on visual cues when planning their next move.

“We wouldn’t want to have to make a map of every building that we’d need to visit,” says Michael Everett, a graduate student in MIT’s Department of Mechanical Engineering. “With this technique, we hope to drop a robot at the end of any driveway and have it find a door.”

This area of research revolves around robots grasping their environment in much the same way humans do. Once a robot is trained to recognize visual objects and label them, "driveway" or "garage," for example, it can make an educated guess about which way to turn next.
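As a rough illustration of that idea, and not MIT's actual pipeline, the Python sketch below assigns each semantic label a hypothetical prior for how likely it is to lead toward a front door, then picks the direction whose visible label scores highest. The label names and prior values are invented for the example.

```python
# Minimal sketch (not the MIT system): given the semantic label seen in each
# candidate direction, pick the direction whose label has the highest
# (hypothetical) prior of leading toward a front door.

DOOR_PRIOR = {          # illustrative values, higher = more promising
    "door": 0.95,
    "driveway": 0.70,
    "garage": 0.55,
    "footpath": 0.50,
    "lawn": 0.30,
    "street": 0.10,
}

def pick_direction(observations):
    """observations maps a direction (e.g. 'left') to the label seen there."""
    return max(observations, key=lambda d: DOOR_PRIOR.get(observations[d], 0.0))

print(pick_direction({"left": "street", "ahead": "driveway", "right": "lawn"}))
# -> "ahead"
```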

The MIT team’s new technology makes use of existing navigation algorithms that enable robots to identify visual objects in their environment, like a “door” or “footpath,” but with the added ability to make a decision based on what it sees. Part of this is what the team calls its “cost-to-go estimator,” a feature of the algorithm that provides the robot with a probability that moving toward an identified feature will bring it closer to its goal (the front door).
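The researchers' actual estimator is learned from data; the sketch below is only a minimal stand-in that assigns each semantic class a hand-picked "estimated steps to the front door" value in place of a trained model's prediction, then takes greedy steps toward the neighboring grid cell with the lowest estimated cost-to-go. The class names, cost values, and grid layout are all assumptions made for the example.

```python
import numpy as np

# Minimal sketch of the cost-to-go idea, not MIT's trained estimator: each cell
# of a small semantic grid holds a class ID, and a per-class cost value stands
# in for the network's prediction of distance-to-door.

CLASSES = {0: "street", 1: "sidewalk", 2: "driveway", 3: "front_path", 4: "door"}
EST_COST = {0: 40.0, 1: 30.0, 2: 15.0, 3: 5.0, 4: 0.0}   # hypothetical values

semantic_grid = np.array([
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 2, 2, 1],
    [1, 2, 3, 4],
])

# Per-cell cost-to-go estimate derived from the semantic class of each cell.
cost_to_go = np.vectorize(EST_COST.get)(semantic_grid)

def next_step(pos, cost):
    """Greedy step: move to the 4-connected neighbor with lowest estimated cost."""
    r, c = pos
    neighbors = [(r + dr, c + dc) for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                 if 0 <= r + dr < cost.shape[0] and 0 <= c + dc < cost.shape[1]]
    return min(neighbors, key=lambda p: cost[p])

pos = (0, 0)                      # robot dropped off at the street corner
while cost_to_go[pos] > 0:        # a cost of 0 means the door has been reached
    pos = next_step(pos, cost_to_go)
print("reached door at", pos)
```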

This takes the form of a map, with lighter regions indicating areas closer to the goal and darker regions those farther away. A driveway might be lighter than the sidewalk beyond the front fence, for example, and will become progressively lighter as it leads to the front door.
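Purely to illustrate that grayscale encoding, the snippet below normalizes a set of cost-to-go estimates (such as those from the sketch above) and inverts them so that lower cost, meaning closer to the door, maps to a lighter pixel value.

```python
import numpy as np

def to_grayscale(cost_to_go):
    """Map cost-to-go values to 0-255 intensities, inverted so that
    low cost (near the goal) comes out light and high cost comes out dark."""
    span = cost_to_go.max() - cost_to_go.min()
    normalized = (cost_to_go - cost_to_go.min()) / span
    return np.round((1.0 - normalized) * 255).astype(np.uint8)

print(to_grayscale(np.array([[40.0, 30.0], [15.0, 0.0]])))
# [[  0  64]
#  [159 255]]
```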

The technology was tested in simulations using satellite imagery of neighborhoods, with the system generating semantic labels for the different features as well as the cost-to-go estimation map with its lighter and darker regions. The team then applied the algorithm to a house outside this training set, and the navigation algorithm found the front door 189 percent faster than current navigation solutions.

“Even if a robot is delivering a package to an environment it’s never been to, there might be clues that will be the same as other places it’s seen,” Everett says. “So the world may be laid out a little differently, but there’s probably some things in common.”

The team is presenting its research at the International Conference on Intelligent Robots and Systems this week, while the video below offers a look at the navigation technology in action.

Source: MIT
