New algorithm allows autonomous drone to zip through trees at 30 mph
A commonly held reservation when it comes to drones is their propensity to smash into things. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are not the only ones working on this problem, but they have made one of the more promising advances in the area so far. The team has found a way to streamline the computational algorithms needed for a drone to map its surroundings, giving its autonomous aircraft a major turbo boost when avoiding obstacles.
Current approaches to obstacle avoidance systems for drones involve using onboard cameras and processors to snap images and analyze the surroundings at regular intervals, say every one or two meters (3.3 or 6.6 ft). This requires a lot of processing power and means that the drones struggle to move faster than 8 or 10 km/h (5 or 6 mph) without specialized processing hardware. CSAIL PhD student Andrew Barry took a seemingly counterintuitive approach to speed things up.
His thinking is that when a drone is moving at faster speeds, the environment doesn't appear to change all that much between frames. So instead of computing a full depth map, he designed his algorithms to extract depth information at only a single fixed distance: 10 meters (33 ft) ahead of the aircraft.
"You don’t have to know about anything that’s closer or further than that," Barry says. "As you fly, you push that 10-meter horizon forward, and, as long as your first 10 meters are clear, you can build a full map of the world around you."
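The core trick can be sketched in code. In rectified stereo, a fixed depth implies one fixed pixel disparity (d = f·B/Z), so rather than searching every disparity, the system only has to check whether the two camera views agree at that single shift. The sketch below illustrates the idea with a simple block-matching check; the function, parameters, and thresholds are illustrative assumptions, not the CSAIL team's actual implementation.

```python
import numpy as np

def single_depth_obstacle_mask(left, right, focal_px, baseline_m,
                               depth_m=10.0, block=5, sad_thresh=10.0):
    """Flag pixels whose stereo disparity matches one fixed depth.

    Instead of searching all disparities (a full depth map), test only
    the single disparity implied by depth_m: d = f * B / Z. Pixels whose
    left/right neighborhoods agree at that shift are assumed to lie at
    roughly depth_m. Illustrative sketch, not the CSAIL code.
    """
    disparity = int(round(focal_px * baseline_m / depth_m))
    h, w = left.shape
    # Shift the right image rightward by `disparity` so that features
    # lying at depth_m line up with their position in the left image.
    shifted = np.full(right.shape, 255.0)
    shifted[:, disparity:] = right[:, :w - disparity]
    # Per-pixel absolute difference between the aligned views.
    diff = np.abs(left.astype(float) - shifted)
    # Aggregate over block x block neighborhoods (sum of absolute
    # differences), which makes the match test robust to pixel noise.
    k = block // 2
    sad = np.zeros_like(diff)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            sad += np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
    # True where the views agree at this one disparity, i.e. where an
    # object sits near the fixed 10-meter horizon.
    return sad / block**2 < sad_thresh
```

Because only one disparity is ever tested per frame, the per-frame cost shrinks dramatically compared with a full disparity search, which is consistent with the speedups the team reports. (One known caveat of any such single-shift check is that textureless regions match trivially; real systems filter those out.)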
Working with a US$1,700 drone built with off-the-shelf components and featuring a camera on each wing and a pair of processors you might find in a cell phone, Barry and his team put this technique to the test. The aircraft was launched into the countryside, where it made its way autonomously through a set of trees while traveling at 48 km/h (30 mph).
CSAIL says the system runs 20 times faster than existing software, extracting depth information at a rate of 120 frames per second. The team is now looking to develop the software further so it can work at more than one depth and in denser surroundings, such as thick forest.
"As hardware advances allow for more complex computation, we will be able to search at multiple depths and therefore check and correct our estimates," says Barry. "This lets us make our algorithms more aggressive, even in environments with larger numbers of obstacles."
You can see the algorithm in action in the video below, and the team has made the software open-source and available online.