The past year has proven to be a landmark one for artificial intelligence research. We have seen several big breakthroughs in AI that were years in the making, from finally defeating top human players at the incredibly difficult game of Go, to cracking the complexity of poker and snatching a heap of money from the professionals.
Games have turned out to be an important training ground for artificial intelligence. The complex and dynamic problems that surface while playing a game often require solutions that can't easily be "programmed".
OpenAI, the research company co-founded by Elon Musk, has made reinforcement learning a central focus: a type of machine learning in which a system improves the quality of its actions on its own, through trial and error. The company recently unleashed its latest bot at The International, a giant eSports tournament dedicated to the game Dota 2.
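For readers unfamiliar with the term, the sketch below is a minimal, generic illustration of that trial-and-error idea: a tabular Q-learning agent teaching itself to walk down a toy corridor. The corridor environment, names and parameters are all invented for illustration, and this is in no way a description of OpenAI's far more sophisticated Dota 2 system.

```python
# Toy illustration of reinforcement learning (tabular Q-learning).
# This is NOT OpenAI's Dota 2 system; the corridor environment and
# all parameters are made up purely to show the trial-and-error loop.
import random

N_STATES = 6          # positions in a tiny corridor; the goal is the last one
ACTIONS = [-1, +1]    # step left or step right
EPISODES = 500
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action] estimates how good each action is in each state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(EPISODES):
    state = 0
    while state != N_STATES - 1:
        # Trial and error: mostly exploit the best-known action,
        # but sometimes explore a random one (and break ties randomly).
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the value estimate toward the observed outcome.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned preference for moving right:",
      [round(q[1] - q[0], 2) for q in Q])
```

Run it and the agent gradually learns, from rewards alone, that stepping toward the goal is the better action in every state; nobody ever writes down a rule that says "go right".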
OpenAI chose Dota 2 as a testbed for its machine learning systems because of the game's complexity and interactivity. Dota 2 requires players to plan ahead, bluff, and deceive their opponents with a great deal of sophistication.
"The rules of Dota are so complicated that if you just think really hard about how the game works and try to write those rules down, you're not even gonna be able to reach the performance of a reasonable player," says Greg Brockman from OpenAI.
So the team set the bot to teach itself the game through self-play: starting from scratch, the system learned by playing against a copy of itself. After just two weeks of training, the bot beat several of the world's top Dota 2 players, including "Dendi", a professional regarded as one of the most creative and unorthodox players on the scene.
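To make the self-play idea concrete, the hypothetical sketch below shows the general shape of such a loop, using rock-paper-scissors instead of Dota 2: a policy repeatedly plays a periodically refreshed copy of itself and reinforces whatever moves happened to work. It illustrates only the structure of self-play training, not how OpenAI's bot is actually implemented.

```python
# Toy sketch of a self-play training loop: one policy plays
# rock-paper-scissors against a frozen copy of itself and updates
# from the outcomes. This only illustrates the structure of self-play;
# it says nothing about OpenAI's actual Dota 2 bot.
import math
import random

MOVES = ["R", "P", "S"]
PAYOFF = {  # reward for the learner: win +1, loss -1, draw 0
    ("R", "S"): 1, ("S", "P"): 1, ("P", "R"): 1,
    ("S", "R"): -1, ("P", "S"): -1, ("R", "P"): -1,
    ("R", "R"): 0, ("P", "P"): 0, ("S", "S"): 0,
}

def sample(weights):
    """Pick a move with probability proportional to its weight."""
    r = random.random() * sum(weights.values())
    for move, w in weights.items():
        r -= w
        if r <= 0:
            return move
    return MOVES[-1]

learner = {m: 1.0 for m in MOVES}   # current policy, as unnormalised weights
opponent = dict(learner)            # the "mirror": a frozen copy of the learner
ETA = 0.05                          # learning rate

for step in range(1, 20001):
    my_move, their_move = sample(learner), sample(opponent)
    reward = PAYOFF[(my_move, their_move)]
    # Multiplicative-weights update: reinforce moves that did well.
    learner[my_move] *= math.exp(ETA * reward)
    if step % 500 == 0:
        # Renormalise and refresh the mirror with the latest policy.
        total = sum(learner.values())
        learner = {m: w / total for m, w in learner.items()}
        opponent = dict(learner)

total = sum(learner.values())
print({m: round(w / total, 3) for m, w in learner.items()})
```

The appeal of the approach is that the opponent improves exactly as fast as the learner does, so the system always has a challenge pitched at its own level and never needs human games to copy.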
At this stage the bot only plays the simpler one-on-one version of Dota 2. The full version, played by two teams of five, is exponentially more complex. The OpenAI team is now working on teaching teams of bots to play this complete version, aiming to unleash the AI on The International again next year.
"Beyond that we want to start mixing together AIs and human players on a single team and try to reach a level of performance that neither of them could reach on their own," says Brockman.
These kinds of AI experiments are more than a simple novelty. They give researchers the chance to further refine the machine learning algorithms that will be key to functional AI in the future. The better machines become at learning on the fly, the better they will function in the real world when confronted with unique or anomalous circumstances.
Take a look at the OpenAI Dota 2 experiment in the video below.
Source: OpenAI