MIT's mini cheetah sets new speed PB by learning from experience

The mini cheetah can adapt to unexpected terrain

MIT's mini cheetah robot has broken its own personal best (PB) speed, hitting 8.72 mph (14.04 km/h) thanks to a new model-free reinforcement learning system that lets the robot figure out the best way to run on its own and adapt to different terrain, without relying on human analysis.

The mini cheetah isn't the fastest quadruped robot going around. In 2012, its larger Cheetah sibling reached a top speed of 28.3 mph (45.5 km/h), but the mini cheetah being developed by MIT’s Improbable AI Lab and the National Science Foundation's Institute of AI and Fundamental Interactions (IAIFI) is much more agile and is able to learn without even taking a step.

In a new video, the quadruped robot can be seen crashing into barriers and recovering, racing through obstacles, running with one leg out of action, and adapting to slippery, icy terrain as well as hills of loose gravel. This adaptability is thanks to a simple neural network that can make assessments of new situations that may put its hardware under high stress.
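For readers curious what "a simple neural network" controller looks like in practice, here is a minimal sketch in Python, assuming a small feed-forward network that maps recent sensor readings to joint targets; the layer sizes and input/output dimensions are illustrative guesses, not the mini cheetah's actual interface.

```python
import torch
import torch.nn as nn

# Illustrative only: a compact feed-forward locomotion policy of the general
# kind used for learned legged control. Sizes are assumptions, not the
# mini cheetah's real sensor or actuator layout.
class LocomotionPolicy(nn.Module):
    def __init__(self, obs_dim=48, act_dim=12, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # normalized joint targets in [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

policy = LocomotionPolicy()
obs = torch.randn(1, 48)   # stand-in for joint angles, velocities and IMU readings
targets = policy(obs)      # 12 outputs, e.g. one target per actuated joint
print(targets.shape)       # torch.Size([1, 12])
```

Because the whole controller is just a few matrix multiplications, it can react to new sensor readings in real time rather than waiting on a hand-built model of the terrain.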

The mini cheetah running at speed

Normally, a robot's movement is controlled by a system built from human analysis of how its mechanical limbs move, producing models that serve as guides. However, these models are often inefficient and inadequate because it isn't possible to anticipate every contingency.

When a robot is running at top speed, it's operating at the limits of its hardware, which makes it very hard to model, so the robot has trouble adapting quickly to sudden changes in its environment. To overcome this, rather than the analytical approach used for robots such as Boston Dynamics' Spot, which relies on humans studying the physics of movement and manually configuring the robot's hardware and software, the MIT team has opted for a controller that learns by experience.

With this approach, the robot learns by trial and error without a human in the loop. Given enough experience of different terrains, it can automatically improve its behavior. And this experience doesn't even need to come from the real world: according to the team, using simulations, the mini cheetah can accumulate 100 days of experience in just three hours while standing still.
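To make learning by trial and error concrete, here is a deliberately tiny Python sketch. A toy velocity-tracking problem and simple random-search updates stand in for the team's actual reinforcement learning algorithm and physics simulator, so every function, number and parameter below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(params, steps=200):
    """Toy stand-in for a physics simulator: reward tracking a commanded speed."""
    total_reward = 0.0
    for _ in range(steps):
        command = rng.uniform(0.0, 1.0)         # desired forward speed
        obs = np.array([command, 1.0])          # fake sensor reading: command + bias
        action = np.tanh(params @ obs)          # motor effort, squashed to [-1, 1]
        speed = 1.2 * action                    # crude "dynamics"
        total_reward -= (speed - command) ** 2  # penalty for missing the command
    return total_reward

# Model-free improvement by trial and error: perturb the policy and keep
# whatever change makes the simulated runs score better. No human in the loop,
# and no analytical model of the "robot" is ever written down.
params = np.zeros(2)
best = rollout(params)
for episode in range(500):
    candidate = params + 0.1 * rng.normal(size=2)
    score = rollout(candidate)
    if score > best:
        params, best = candidate, score

print(f"toy tracking error after training: {-best / 200:.4f}")
```

The real system trains a neural network on a full physics simulation, but the principle is the same: run many simulated attempts, score them, and keep the behaviors that score well, which is how months of "experience" can be compressed into hours of computation.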

Robotic mini cheetah (left) and a real dog (right)

"We developed an approach by which the robot’s behavior improves from simulated experience, and our approach critically also enables successful deployment of those learned behaviors in the real world," said MIT PhD student Gabriel Margolis and IAIFI postdoc Ge Yang. "The intuition behind why the robot’s running skills work well in the real world is: Of all the environments it sees in this simulator, some will teach the robot skills that are useful in the real world. When operating in the real world, our controller identifies and executes the relevant skills in real-time."

The researchers say this kind of system makes it possible to scale the technology up in a way the traditional paradigm can't readily match.

"A more practical way to build a robot with many diverse skills is to tell the robot what to do and let it figure out the how," added Margolis and Yang. "Our system is an example of this. In our lab, we’ve begun to apply this paradigm to other robotic systems, including hands that can pick up and manipulate many different objects."

The video below shows the mini cheetah demonstrating what it has learned.

Mini-Cheetah

Source: MIT

3 comments
Username
It only makes a difference for the first robot. Once a robot knows (whether it learned itself or a human taught it), the knowledge will be downloaded into the robots that follow.
PeachBlues2020
Hope it does not bite! 9 mph will catch you eventually...
This place sucks ass
It looks like when a blind dog chases a ball, how it throws its legs wide to keep a low/wide stance while shuffling forward until it is balanced again. That is pretty cool. Does it have lidar sensors for its environment, or is it running off touch feedback? Guess it's all fun until one is chasing you!