AI & Humanoids

Video: Google robot paddles human table tennis competitors

Google DeepMind's table tennis robot takes on a human competitor in a trial of the system

Using heaps of data, Google trained a table-tennis-playing robot to take on human competitors and get better as it did so. The results were impressive and represent a leap forward in robotic speed and dexterity. It also looks really fun.

"Achieving human-level speed and performance on real world tasks is a north star for the robotics research community." Thus begins a paper written by a team of Google scientists who helped create, train, and test the table-tennis bot.

We've certainly seen quite a bit of advancement in robotics that gives humanoid machines the performance chops to handle real-world tasks, everything from chopping ingredients for dinner to working in a BMW factory. But as the Google team's quote suggests, the ability to add speed to that precision is developing a bit more, well, slowly.

That's why the new table-tennis-playing robot is so impressive. As you can see in the following video, the bot was able to hold its own in games with human competitors, although it's not quite Olympic-level yet. Across 29 matches, the bot had a 45% success rate, defeating 13 players. While that's certainly better than a lot of New Atlas writers would do against any competitor, the bot was only able to excel against beginner-to-intermediate players; it lost all of the matches it played against advanced players. It also didn't have the ability to serve the ball.

Some highlights - Achieving human level competitive robot table tennis

“Even a few months back, we projected that realistically the robot may not be able to win against people it had not played before,” Pannag Sanketi told MIT Technology Review. “The system certainly exceeded our expectations. The way the robot outmaneuvered even strong opponents was mind blowing.” Sanketi, a senior staff software engineer at Google DeepMind, led the project. DeepMind is Google's AI research branch, so this research was ultimately as much about data sets and decision making as it was about the actual performance of the paddle-wielding robot.

To train the system, the researchers amassed a large amount of data about ball states in table tennis, including spin, speed, and position. Next, during simulated matches, the bot's "brain" was trained in the basics of the game. That was enough to get it playing human competitors. Then, during the matches, the system used a set of cameras to respond to human challengers using what it knew. It was also able to continue learning and trying out new tactics to beat challengers, which meant it was able to improve on the fly.
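To make the perceive-predict-act loop concrete, here is a minimal sketch of one piece of it: using the camera-tracked ball state (position and velocity) to predict where the ball will cross the robot's paddle plane. This is an illustrative assumption, not DeepMind's actual pipeline; the function name, the simple ballistic model (no drag, no spin), and all the numbers are hypothetical.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, acting on the z axis

def predict_interception(pos, vel, paddle_x, dt=0.002, max_t=2.0):
    """Integrate a simple ballistic model forward in time until the
    ball crosses the paddle plane at x = paddle_x.
    Returns (interception_point, time_to_impact), or (None, None)
    if the ball never reaches the plane within max_t seconds."""
    p, v, t = pos.astype(float), vel.astype(float), 0.0
    while t < max_t:
        p = p + v * dt          # advance position
        v = v + GRAVITY * dt    # gravity only affects vertical speed
        t += dt
        if p[0] >= paddle_x:
            return p, t
    return None, None

# Hypothetical ball state from the vision system (units: m and m/s)
point, t_hit = predict_interception(
    pos=np.array([0.0, 0.2, 0.3]),
    vel=np.array([4.0, 0.0, 1.0]),  # moving toward the robot at 4 m/s
    paddle_x=1.4,                    # robot's reach plane
)
```

A real controller would then have roughly `t_hit` seconds to move the paddle to `point`, which hints at why millisecond-scale perception and motion planning matter here. Notably, this toy model ignores spin entirely, the same limitation the real system had.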

“I'm a big fan of seeing robot systems actually working with and around real humans, and this is a fantastic example of this,” Sanketi told MIT. “It may not be a strong player, but the raw ingredients are there to keep improving and eventually get there.”

The following video shows even more details of the bot in training and the various skills it was able to employ.

Demonstrations - Achieving human level competitive robot table tennis

The research has been published in an arXiv paper.

Sources: MIT Technology Review, Google

5 comments
Sciencie
I WANT one!
Alan
Amazing! Let's revisit this in say, one year. Should be able to beat the pros then.

P.S. can it hit with spin?
White Rabbit
An impressive advance in robotics! However, it's worthwhile to view it in a broader perspective. It took "heaps of data" and years of training to produce a single-purpose machine that performs rather poorly. Learning systems may help it improve, and it may not even take the year Alan suggests, but there are serious questions about the claimed rate of success. From Google DeepMind: "The humans played 3 games against the robot following standard table tennis rules with some modifications because the robot is physically unable to serve the ball." So it's not really playing table tennis, and the only way it could win was to change the rules!
The following paragraph from the MIT Technology Review is revealing:
"The system struggled to hit the ball when it was hit either very fast, beyond its field of vision (more than six feet above the table), or very low, because of a protocol that instructs it to avoid collisions that could damage its paddle. Spinning balls proved a challenge because it lacked the capacity to directly measure spin—a limitation that advanced players were quick to take advantage of."
Captain Danger
@White Rabbit
You have absolutely no idea how complex this is.
Just measuring the speed and trajectory of the ball in real time is incredible.
Next you have 8 axes of motion to control (6 for the robot and 2 for the XY gantry), and I would bet that they are not just sending joint moves to the controller but have an external controller working on all 8 axes.
Then you need to determine the target, where to move to before hitting the ball, how to contact the ball, and what speed to move at.
And you feel superior because they "had to change the rules".
If I were you I would be very afraid, the robot overlords will be coming for you first because you do not have enough respect.

Palmerfralick
To me it looked like the humans were hitting the ball directly to the bot, not trying at all to hit a shot to make the bot miss. But I get it, still learning. Carry on.