Due to its staggering complexity, the ancient Chinese game of Go has long been viewed as the Mt Everest of artificial intelligence. But just as chess, checkers and Jeopardy fell to the machines, a Google-made computer program has finally bettered a top-flight human opponent, one regarded as the best Go player of the past decade, no less. In what marks a significant advance for the field of AI, AlphaGo has today claimed victory in a five-game series, though not before South Korea's Lee Sedol could land a few shots of his own.
The Google DeepMind Challenge kicked off last week in Seoul, pitting Sedol against the purpose-built AlphaGo computer program in a best-of-five series. Google-acquired artificial intelligence firm DeepMind had set out to build the best Go player in the world, but doing so required some novel approaches to machine learning. Indeed, in 2014 experts estimated that it could be another decade before AI advanced far enough for a computer to win at Go without a handicap.
The team combined an advanced search tree, used to sift through the possible positions on the Go board (of which there are more than atoms in the universe), with deep neural networks. These networks process a description of the board through millions of neuron-like connections, with a so-called "policy network" picking the next move to play and a "value network" predicting who will go on to win the game.
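To make that division of labor concrete, here is a rough sketch in Python, and very much not DeepMind's actual code, of how a policy network and a value network can slot into a simple tree search: the policy network narrows the candidate moves, and the value network scores the positions they lead to. The toy Board class, the random network stand-ins and the search depth are all assumptions made purely for illustration.

```python
# A rough sketch, not DeepMind's code: how a "policy network" and a "value
# network" might plug into a simple tree search. The toy Board class, the
# random network stubs, and the search depth/branching are assumptions.
import math
import random

BOARD_SIZE = 9  # small board to keep the toy example fast


class Board:
    """Toy board: tracks occupied points only; no captures or ko, purely illustrative."""

    def __init__(self, stones=None, to_play=1):
        self.stones = dict(stones or {})  # (row, col) -> +1 (black) or -1 (white)
        self.to_play = to_play

    def legal_moves(self):
        return [(r, c) for r in range(BOARD_SIZE) for c in range(BOARD_SIZE)
                if (r, c) not in self.stones]

    def play(self, move):
        nxt = dict(self.stones)
        nxt[move] = self.to_play
        return Board(nxt, -self.to_play)


def policy_network(board):
    """Stand-in for the policy network: a probability for each legal move.
    Here it is just normalized noise; the real thing is a deep neural net."""
    moves = board.legal_moves()
    scores = [random.random() for _ in moves]
    total = sum(scores)
    return {m: s / total for m, s in zip(moves, scores)}


def value_network(board):
    """Stand-in for the value network: estimated chance the side to move wins."""
    return random.random()


def search(board, depth=2, branching=5):
    """Depth-limited search: the policy net prunes to a few candidate moves,
    the value net scores leaf positions, and values are flipped between sides."""
    if depth == 0 or not board.legal_moves():
        return value_network(board), None
    priors = policy_network(board)
    candidates = sorted(priors, key=priors.get, reverse=True)[:branching]
    best_score, best_move = -math.inf, None
    for move in candidates:
        child_value, _ = search(board.play(move), depth - 1, branching)
        score = 1.0 - child_value  # opponent's winning chance, seen from our side
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move


if __name__ == "__main__":
    value, move = search(Board())
    print(f"Suggested move: {move}, estimated win chance: {value:.2f}")
```

In AlphaGo itself the two networks are deep convolutional nets and the search is a far more sophisticated Monte Carlo tree search, but the division of labor is the same: the policy network keeps the tree narrow, the value network keeps it shallow.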
AlphaGo taught itself to play by using these neural networks to process 30 million moves played by human experts, which enabled it to predict the human's move correctly 57 percent of the time. Then, by playing thousands of games between its own neural networks, the program learned through much trial and error to come up with strategies of its own, hinting at a potential superiority over us mere mortals.
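As a rough illustration of that two-stage recipe, the sketch below first nudges a toy policy toward a synthetic "expert" move, then plays a game against itself and reinforces whichever side's moves led to a win. It is a hedged sketch under stated assumptions rather than DeepMind's method: the single linear layer standing in for the network, the random scoring of the finished game and the learning rates are all invented for the example.

```python
# A hedged sketch of the two training stages described above: supervised
# learning on expert moves, then reinforcement learning from self-play.
# Not DeepMind's code: the tiny linear softmax "network", the synthetic data,
# the random game scoring and the learning rates are all assumptions.
import numpy as np

BOARD_POINTS = 9 * 9  # toy 9x9 board, flattened to a vector
rng = np.random.default_rng(0)

# "Policy network": a single linear layer mapping board features to move logits.
weights = np.zeros((BOARD_POINTS, BOARD_POINTS))


def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()


def policy(features):
    """Move probabilities for a flattened board (+1 own stone, -1 opponent, 0 empty)."""
    return softmax(features @ weights)


def supervised_step(features, expert_move, lr=0.1):
    """Stage 1: nudge the policy toward the move a human expert actually played
    (the gradient of softmax cross-entropy with respect to the logits)."""
    global weights
    grad = policy(features)
    grad[expert_move] -= 1.0
    weights -= lr * np.outer(features, grad)


def selfplay_step(lr=0.01, game_length=20):
    """Stage 2: play a game against itself, then reinforce the winner's moves
    (a bare-bones REINFORCE update; the legality mask is ignored in the gradient)."""
    global weights
    board = np.zeros(BOARD_POINTS)
    history = []  # (features, move, mover) for every half-move
    player = 1
    for _ in range(game_length):
        features = board * player            # view the position from the mover's side
        mask = (board == 0).astype(float)    # only empty points are legal
        probs = policy(features) * mask
        probs /= probs.sum()
        move = int(rng.choice(BOARD_POINTS, p=probs))
        history.append((features, move, player))
        board[move] = player
        player = -player
    winner = int(rng.choice([1, -1]))        # stand-in for actually scoring the game
    for features, move, mover in history:
        reward = 1.0 if mover == winner else -1.0
        grad = policy(features)
        grad[move] -= 1.0
        weights -= lr * reward * np.outer(features, grad)


if __name__ == "__main__":
    # One synthetic "expert" example, then one self-play game.
    example_board = np.zeros(BOARD_POINTS)
    example_board[0] = 1.0   # pretend black has a stone in the corner
    example_board[1] = -1.0  # and white has replied next to it
    supervised_step(example_board, expert_move=40)  # pretend the expert played point 40
    selfplay_step()
    print("Policy on the example board (first 5 points):", policy(example_board)[:5])
```

The self-play update follows the classic REINFORCE rule, pushing the policy toward the winner's moves and away from the loser's, which captures the trial-and-error flavor of the second stage even though AlphaGo's actual training pipeline is far more elaborate.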
AlphaGo was first put through its paces in a tournament against other advanced Go-playing computer programs. Victorious in 499 out of 500 games, it soon set its sights on a human opponent, going on to win five games to zero against three-time European Go champion and professional player Fan Hui last October.
But to be the best you have to beat the best, and that's exactly what AlphaGo has now done. It was fast out of the blocks in its battle with Sedol, taking the first three games to wrap up the series without dropping a single one.
Sedol struck back in game four, however. According to the commentators, AlphaGo was in a commanding position until around the midway point, but at move 78 Sedol played a "really brilliant" maneuver; AlphaGo followed with a mistake of its own that ultimately led to its demise.
"It seems Lee Sedol can now read AlphaGo better and has a better understanding of how AlphaGo moves," South Korean commentator, Song Taegon, said after the game. "For the 5th match, it will be a far closer battle than before since we know each better."
So, despite AlphaGo having already won the series, could Sedol further exploit this apparent weakness in game five? It took 280 moves (the previous four games averaged 188), but Google's program ultimately prevailed, taking the series 4-1. Early in the game, AlphaGo played a move that onlookers took for an error, but which may in fact have been a calculated and effective new strategy.
"It was difficult to say at what point AlphaGo was ahead or behind, a close game throughout," said commentator Michael Redmond. "AlphaGo made what looked like a mistake with move 48, similar to the mistake in Game Four in the middle of the board. After that AlphaGo played very well in the middle of the board, and the game developed into a long, very difficult end game ... AlphaGo has the potential to be a huge study tool for us professionals, when it's available for us to play at home."
Google DeepMind takes home US$1 million for winning the series, a prize it will donate to UNICEF, along with science, tech, engineering and math charities, and Go organizations.
Source: Google