AI recreates video game engine after watching two minutes of gameplay
Advances in artificial intelligence have seen computers get better at interpreting (and winning) video games, but a new research project suggests they mightn't be too bad at building them either. Scientists have developed an AI system that can recreate a video game engine after watching just two minutes of gameplay, which could help lighten the load for developers and allow them to experiment with different styles of play.
Researchers at the Georgia Institute of Technology set out to build an AI system that could study frames of a 2D video game and then go about building a replica of the complete game engine. That is the software that dictates the gameplay, including everything from the graphics to the physics and player movement.
The team did this by training the AI on footage of two distinct players making their way through Level 1 of Super Mario Brothers: one adopted an "explorer" style of play, while the other used a "speedrunner" style, heading straight for the goal. After watching less than two minutes of gameplay, the system was able to build its own model by observing the frames and predicting future events, such as the path the player would take and their interactions with enemies.
"Our AI creates the predictive model without ever accessing the game's code, and makes significantly more accurate future event predictions than those of convolutional neural networks," says lead researcher Matthew Guzdial. "A single video won't produce a perfect clone of the game engine, but by training the AI on just a few additional videos you get something that's pretty close."
With the engine built, the team then used a second AI agent to test the actual gameplay, playing through unique game levels generated by the system itself. The researchers report that the cloned engine was largely indistinguishable from the real one, though there were minor blemishes, such as missing frames and protagonists that disappeared momentarily.
"The technique relies on a relatively simple search algorithm that searches through possible sets of rules that can best predict a set of frame transitions," says Mark Riedl, associate professor of Interactive Computing at the Georgia Institute of Technology and co-investigator on the project. "To our knowledge this represents the first AI technique to learn a game engine and simulate a game world with gameplay footage."
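The paper's own algorithm isn't reproduced here, but the idea Riedl describes, searching over candidate rule sets to find the one that best predicts observed frame-to-frame transitions, can be sketched in miniature. Everything below (the toy frame representation, the candidate rules, the scoring function) is a hypothetical illustration, not the researchers' actual code:

```python
from itertools import combinations

# Toy frames: each is a (player_x, on_ground) state. The "gameplay footage"
# is a list of observed (frame, next_frame) transitions (hypothetical data).
transitions = [
    ((0, True), (1, True)),
    ((1, True), (2, True)),
    ((2, True), (3, True)),
]

# Candidate rules: each maps a frame to a predicted next frame.
def move_right(frame):
    x, ground = frame
    return (x + 1, ground)

def stand_still(frame):
    return frame

def jump(frame):
    x, ground = frame
    return (x, not ground)

candidate_rules = [move_right, stand_still, jump]

def score(ruleset, transitions):
    """Count transitions that at least one rule in the set predicts exactly."""
    return sum(
        any(rule(frame) == nxt for rule in ruleset)
        for frame, nxt in transitions
    )

# Exhaustive search over all non-empty rule sets, preferring the set that
# explains the most transitions with the fewest rules.
best = max(
    (set(c)
     for r in range(1, len(candidate_rules) + 1)
     for c in combinations(candidate_rules, r)),
    key=lambda rs: (score(rs, transitions), -len(rs)),
)
print(sorted(f.__name__ for f in best))  # → ['move_right']
```

In this toy run only `move_right` is needed to explain every observed transition, so the search settles on the smallest rule set that fits the footage; the real system works over far richer frame representations, but the explain-the-transitions objective is the same.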
For now, the team's work uses Super Mario Brothers, although it has started to train the system on other 2D titles, Mega Man and Sonic the Hedgehog. They say that more complicated games where the action takes place off screen, such as Clash of Clans, might be beyond its reach, but the technology could make development of certain games faster and allow more room for experimentation.
"Intelligent agents need to be able to make predictions about their environment if they are to deliver on the promise of advancing different technology applications," says Guzdial. "Our model can be used for a variety of tasks in training or education scenarios, and we think it will scale to many types of games as we move forward."
The team has published a paper describing the research, which can be accessed online.
Source: Georgia Institute of Technology