2017: The year AI beat us at all our own games

AI beat us at a bunch of games in 2017 and it looks like it's something we'll have to get used to

For much of the 20th century, the game of chess served as a benchmark for artificial intelligence researchers. John McCarthy, who coined the term "artificial intelligence" back in the early 1950s, once referred to chess as the "Drosophila of AI", in a reference to how significant early research on the fruit fly was to the field of genetics.

In the late 1990s, IBM's Deep Blue played a series of chess matches against world champion Garry Kasparov. In 1997, Deep Blue finally beat Kasparov, marking the first time a machine had defeated a reigning world champion in match play. By the early-to-mid 2000s, the technology had improved to the point where machines were consistently beating chess grandmasters.

Naturally, AI developers moved on to other, more complex games to test their increasingly sophisticated algorithms. Over the past 12 months, AI crossed a series of new thresholds, finally beating top human players in a variety of games, from the ancient strategy game Go to the bluff-laden card game Texas Hold'em poker.

Chess was only the beginning in the realm of computers beating humans at games

Going, going, gone

In the late 1990s, after a machine finally and definitively beat a chess grandmaster, an astrophysicist from Princeton remarked, "It may be a hundred years before a computer beats humans at Go – maybe even longer."

Taking up the challenge, computer scientists turned their attention to this ancient Chinese strategy game, which is deceptively simple to play yet extraordinarily complex to master.

It is only in the last decade that machine learning advances have produced truly competitive AI Go players. In 2014, Google's DeepMind began work on a deep learning neural network called AlphaGo. After a couple of years of semi-successful Go challenges, the development team tried something different.

At the very end of 2016, a mysterious online Go player named "Master" appeared on the popular Asian game server Tygem. Over the next few days this player dominated games against many of the world's best. By January 4 the jig was up, and it was officially confirmed that "Master" was in fact the latest iteration of DeepMind's AlphaGo.

In May 2017, AlphaGo "Master" took on Ke Jie, the world's top-ranked Go player. Over three games the machine comprehensively dominated the world champion, but perhaps most startling was the revelation in October that DeepMind had already produced an even more sophisticated iteration of AlphaGo, one better than "Master".

DeepMind's AlphaGo Master AI comprehensively beat the world Go champion across three matches in 2017

AlphaGo Zero, revealed in a paper in the journal Nature, was a revolutionary algorithm designed to learn entirely from self-play. The system simply plays against itself, over and over, and learns to master whatever game it has been set to work on. After 21 days of learning, AlphaGo Zero had reached the level of "Master", and by day 40 it had exceeded the skill level of every prior version.
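AlphaGo Zero's full system pairs a deep neural network with Monte Carlo tree search, but the self-play idea at its heart can be illustrated with a toy. The sketch below has a tabular learner improve at tic-tac-toe purely by playing itself; the game, update rule, and parameters are invented for illustration and are not DeepMind's algorithm.

```python
import random

# Toy self-play learner: tabular value estimates for tic-tac-toe positions.
# Both "players" share one value table and improve by playing themselves --
# the same basic loop AlphaGo Zero scales up with a neural network and MCTS.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = {}             # state -> estimated win probability for its mover
ALPHA, EPS = 0.2, 0.1   # learning rate and exploration rate (illustrative)

def play_one_game():
    board, player = ["."] * 9, "X"
    history = []
    while True:
        moves = [i for i, cell in enumerate(board) if cell == "."]
        if not moves:
            result = 0.5                      # board full: draw
            break
        if random.random() < EPS:             # explore occasionally
            move = random.choice(moves)
        else:                                 # otherwise pick the best-valued move
            def score(m):
                nxt = board[:]
                nxt[m] = player
                return values.get("".join(nxt), 0.5)
            move = max(moves, key=score)
        board[move] = player
        history.append(("".join(board), player))
        if winner(board):
            result = 1.0 if player == "X" else 0.0
            break
        player = "O" if player == "X" else "X"
    # temporal-difference style update: nudge each visited state's value
    # toward the final game outcome from that mover's perspective
    for state, mover in history:
        target = result if mover == "X" else 1.0 - result
        old = values.get(state, 0.5)
        values[state] = old + ALPHA * (target - old)

random.seed(0)
for _ in range(20000):
    play_one_game()
print(f"learned value estimates for {len(values)} positions")
```

Each iteration of self-play both generates training data and sharpens the policy that generates the next batch, which is why no human games are needed at all.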

In December 2017, DeepMind revealed an even newer version of the system. Called AlphaZero, this AI could master a variety of games in just hours. After merely eight hours of self-training it could not only beat prior versions of AlphaGo Zero, but also play chess and shogi at superhuman level.

Mastering the bluff

While Go offers a game rich in complexity, mastering poker has proved an entirely different proposition for AI. To win big at poker, one needs to master the art of deception: bluffing, and recognizing when you are being bluffed, are key skills in this infamous card game.

After more than a decade of attempts, 2017 saw two separate studies reveal AIs finally beating big-time poker professionals. Researchers at the University of Alberta unveiled DeepStack, a system that could comprehensively dominate human poker players using an artificially intelligent form of "intuition".

After a grueling 20-day marathon, the machine comprehensively thrashed all four professional poker players

A team from Carnegie Mellon University put on a more public spectacle in January 2017, when its Libratus AI spent 20 days playing 120,000 hands of no-limit Texas Hold'em against four poker professionals. While the pros spent every evening discussing amongst themselves what weaknesses they could exploit in the AI, the machine was also improving every day, patching holes in its gameplay and refining its strategy.

The human brains were no match for the machine: after nearly a month of full-time gameplay, Libratus was up by US$1.7 million in chips, with every one of the four professionals having lost thousands of (fortunately fictional) dollars. Halfway through the bruising competition, one of the losing players told Wired, "I felt like I was playing against someone who was cheating, like it could see my cards. I'm not accusing it of cheating. It was just that good."

Elon Musk's AI experiment

In 2015, Elon Musk and a small group of investors founded OpenAI, a venture designed to explore the development of artificial intelligence systems, with a particular interest in reinforcement learning: an approach in which a machine teaches itself how to improve at a particular task.
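In its simplest tabular form, reinforcement learning can be sketched in a few lines. The following toy, a hypothetical corridor world with made-up rewards and hyperparameters (nothing like OpenAI's actual Dota 2 system), shows an agent teaching itself a task purely by trial and error:

```python
import random

# Minimal tabular Q-learning: the agent starts at cell 0 of a corridor and
# is rewarded only for reaching cell 5. The environment and all numbers
# here are illustrative toys.

N, GOAL = 6, 5
ACTIONS = [-1, +1]                       # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1        # learning rate, discount, exploration

def greedy(s):
    # best-valued action, breaking ties at random so early play explores
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

random.seed(0)
for _ in range(500):                     # episodes of self-directed practice
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2 = min(max(s + a, 0), N - 1)   # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move toward reward plus discounted future value
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

policy = [greedy(s) for s in range(GOAL)]
print(policy)   # the learned policy: step right from every cell
```

No one tells the agent what a good move looks like; the reward signal alone shapes the value table, and the greedy policy read off that table ends up heading straight for the goal.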

In August 2017, the OpenAI team set its sights on conquering Dota 2, the central game in a giant eSports tournament called The International. Dota 2 is an extremely popular, and complicated, multiplayer online battle arena game and is serious business in the world of competitive gaming.

The International is a giant eSports competition with millions of dollars in prize money (Credit: zilsonzxc CC BY SA 4.0)

After just two weeks of learning, the OpenAI bot was unleashed at the tournament and went on to beat several of the world's top players. The system was trained only on a simpler one-on-one version of the game, but the OpenAI team is now working on teaching it to play full five-on-five team games.

Divide and conquer - The Pac-Man challenge

A couple of years ago, Google DeepMind set its AI loose on 49 Atari 2600 games. Provided with the same inputs as any human player, the AI figured out how to play, and win, many of them. Some games proved harder than others to master, though, and the classic but notoriously difficult 1980s video game Ms Pac-Man was especially challenging.

In 2017, a deep learning startup called Maluuba was acquired by Microsoft. Maluuba's novel machine learning method, called "Hybrid Reward Architecture" (HRA), was applied to Ms Pac-Man: the system created more than 150 individual agents, each tasked with a specific goal, such as finding a particular pellet or avoiding ghosts.

The "Hybrid Reward Architecture" (HRA) system learned to master the notoriously difficult video game Ms Pac-Man

The HRA method also generates a top agent, something akin to a senior manager. This top agent evaluates the suggestions from all the lower agents before making the final decision on each individual move. The approach has been dubbed "divide and conquer": a complex task is broken up into smaller parts.
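In spirit, that aggregation step looks something like the sketch below. The agents, weights, and scores are all made up for illustration; Maluuba's real system learned its agents' values rather than hard-coding them.

```python
# Toy sketch of Hybrid-Reward-style aggregation: several sub-agents each
# score every candidate move against their own narrow objective, and a
# top agent combines the scores to pick one move.

ACTIONS = ["up", "down", "left", "right"]

def pellet_agent(action):
    # prefers moves toward the nearest pellet (hard-coded toy scores)
    return {"up": 0.2, "down": 0.1, "left": 0.9, "right": 0.3}[action]

def ghost_agent(action):
    # penalizes moves toward a ghost
    return {"up": -0.1, "down": -0.8, "left": -0.7, "right": 0.0}[action]

def fruit_agent(action):
    # prefers moves toward the bonus fruit
    return {"up": 0.4, "down": 0.0, "left": 0.1, "right": 0.2}[action]

# ghost avoidance is weighted most heavily: dying ends the game
AGENTS = [(pellet_agent, 1.0), (ghost_agent, 2.0), (fruit_agent, 0.5)]

def top_agent(actions):
    """Aggregate every sub-agent's preference and pick the best move."""
    def total(a):
        return sum(weight * agent(a) for agent, weight in AGENTS)
    return max(actions, key=total)

print(top_agent(ACTIONS))   # prints "right": safe and reasonably rewarding
```

The appeal of the design is that each sub-agent's objective stays simple enough to learn reliably, while the weighted sum lets the top agent trade those objectives off against one another.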

After applying the method to Ms Pac-Man, the AI quickly figured out how to achieve the top score of 999,990, which no human or AI had managed to achieve before.

AI will soon make the games

What is the next logical step if AI can beat us in almost every game out there?

A researcher from Falmouth University recently revealed a machine learning algorithm that he claims can dream up its own games from scratch for us to play. Called Angelina, the system is improving from day to day, and can currently make games using data it scrapes from sources as varied as Wikimedia Commons, online newspapers, and social media.

So what does all this mean?

Perhaps the most significant, and potentially frightening, development of 2017 has been the dramatic progress of reinforcement learning systems. These programs can efficiently teach themselves how to master new skills. The most recent AlphaZero iteration, for example, can achieve superhuman skills at some games after just a few days of self-directed learning.

A large survey of more than 350 AI researchers suggests it won't be too long before AI can beat us at pretty much everything. The survey predicted that within 10 years AI will drive better than us, that by 2049 it will be able to write a best-selling novel, and that by 2053 it will perform surgery better than humans. In fact, the survey concluded that there is a 50 percent chance that by 2060 AI will essentially be able to do everything we can do, but better.

2017 has undoubtedly been a milestone year for AI beating humans at increasingly complex games, and while this may seem like a trivial achievement, the implications are huge. Many of these AI-development companies are quickly turning their sights to real-world challenges. Google DeepMind has already moved the AlphaGo Zero system on from the game to a comprehensive study of protein folding, in the hope of uncovering treatments for diseases such as Alzheimer's and Parkinson's.

"Ultimately we want to harness algorithmic breakthroughs like this to help solve all sorts of pressing real world problems," says Demis Hassabis, co-founder and CEO of DeepMind. "If similar techniques can be applied to other structured problems, such as protein folding, reducing energy consumption or searching for revolutionary new materials, the resulting breakthroughs have the potential to drive forward human understanding and positively impact all of our lives."

5 comments
Daishi
Something else interesting I found was halite.io. It's essentially a game, but it's not intended to be played by human players. The player writes an AI (in any language) for the game and you play your AI against other programmers' AIs. I think the concept of AI vs AI programming challenges is really interesting and has some potential. There are starter bots for halite in every language for anyone wanting to dip their toes in the water.
Lennerton
Perhaps soon, if not already, governmental grants will fund AI projects for governing and the betterment of the people. Hopefully, party politics will be left out of it and a utilitarian approach (the best for the most) will reign supreme. And then will come other hard questions regarding world sustainment/co-existing/decimation, followed by group thought and absence of original thinking. God help us all.
Jack Decker
No, in NO way can anyone say that AI has beaten humans at poker. What did happen? It wore down four poker players after marathon sessions with JUST them playing one-on-one with it non-stop for days. Sorry, but that isn't poker at all. Also, the four poker "pros" were no-name poker "professionals". No famous poker players. I've yet to hear of any of those four winning any significant poker tournament. And you know what makes a poker professional a poker professional? Table selection. That's it. They hunt for a table full of fish (poor poker players) and then become the shark at that table. Sharks don't sit down with sharks. Now if you want to play sharks against sharks, you enter a poker tournament.
For any AI fanboy to say that AI has beaten humans at poker, it has to actually play poker as it is actually played. Have it sit at a table of ten poker players and then see how it does. Have the poker players come and go at random times and get replaced by other poker players, if you're saying it can beat humans at a grind game. Have it participate in an actual poker tournament if you want to say it can win against humans at tournament poker. No human poker player has EVER played the same SINGLE poker player for days on end. And in poker tournaments, the only time a poker player plays a single poker player is at the very end of the tournament when there is only two players left. At that stage, it is pretty much a coin-flip as to who will win and that last one-on-one might last for at most a half-hour with ever escalating blinds. GETTING to that point is where the skill comes in.
CharlieSeattle
The Duel: Timo Boll vs. KUKA Robot
https://www.youtube.com/watch?v=tIIJME8-au8
Published on Mar 10, 2014
Man against machine.
The unbelievably fast KUKA robot faces off against one of the best table tennis players of all time. Who has the best technique? Who will win the first ever table tennis duel of human versus robot?
Watch this thrilling commercial of table tennis and robotics performed at the highest level. The KUKA KR AGILUS demonstrates its skills with the table tennis racket - a realistic vision of what robots can be capable of in the future.
christopher
This is not "Artificial Intelligence" - it is deep-learning. Humans need to stop using the wrong words, because people who don't know otherwise assume the word is accurate!
If you changed just 1 rule in every game that AI beats us all at, then even a child will beat AI at all of them.
This is why it is not "Intelligence". It's simply "Artificial Trial-and-error".
This is also why AI should not be allowed to drive cars or be given control over life-threatening actions - it looks and feels "intelligent" to non-experts, but the day it encounters anything new that it was not "deep learned" on, you die. Like the AI car that got confused at traffic lights, so stopped (which caused collisions), or the dead man whose AI didn't expect a white truck to turn in front, etc.