Old-school gamers will fondly remember the effort it took to master a new Super Mario level, but thanks to a new development in artificial intelligence the pixelated Italian plumber and his friends are now teaming up to do the job themselves. Researchers from the University of Tübingen in Germany have developed an algorithm that allows videogame characters to learn from each other in human-like ways through observation and imitation, letting agents collaborate to reach a common goal. Future applications could include intelligent social support systems and swarms of modular robots that learn to perform complex actions with little human instruction.
One area where AI still lags behind humans, however, is in harnessing the power of social interaction to learn about the world. Observation, communication, imitation and collaboration are essential human learning tools, so translating them to the realm of artificial intelligence is well worth researching if we are ever to pack our brains' abilities inside a silicon chip.
Professor Martin Butz and his team are now demonstrating this all-important social intelligence inside a Super Mario videogame clone, allowing the characters Mario, Luigi, Yoshi and Toad to talk to each other in plain English, learn by observing each other's actions, and collaborate to achieve a common goal.
At the start of the game, the four characters are equipped with different abilities and their knowledge is limited to the controls for moving around the world and abstract concepts like enemies, coins, and iron blocks. (Like a human player first starting the game, they have no prior knowledge of how these entities behave or interact with the characters.)
The four agents in the game can't be controlled directly by the player through keystrokes; instead, they are driven by four different objectives (utility functions, in technical speak): food, health, completing the level, and learning more about the world.
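As a rough illustration of how multiple objectives can drive behavior, an agent like this can be sketched as picking whichever action scores highest across a weighted set of utility functions. The action names, utilities and weights below are invented for illustration; the paper's actual architecture is more elaborate.

```python
# Sketch of utility-driven action selection: each drive scores candidate
# actions, and the agent picks the action with the best weighted total.

def choose_action(agent_state, candidate_actions, utilities, weights):
    """Pick the action whose outcome scores highest across the
    agent's weighted utility functions."""
    def score(action):
        return sum(weights[name] * u(agent_state, action)
                   for name, u in utilities.items())
    return max(candidate_actions, key=score)

# Four hypothetical drives, each a function mapping (state, action) -> value.
utilities = {
    "food":      lambda s, a: 1.0 if a == "eat_mushroom" else 0.0,
    "health":    lambda s, a: -1.0 if a == "approach_enemy" else 0.0,
    "progress":  lambda s, a: 1.0 if a == "move_right" else 0.0,
    "curiosity": lambda s, a: 0.5 if a == "inspect_block" else 0.0,
}
weights = {"food": 0.2, "health": 0.4, "progress": 0.3, "curiosity": 0.1}

best = choose_action({}, ["eat_mushroom", "approach_enemy",
                          "move_right", "inspect_block"],
                     utilities, weights)
print(best)  # -> "move_right" (scores 0.3, beating 0.2, -0.4 and 0.05)
```

Changing the weights shifts the agent's "personality": raising `curiosity`, for instance, would make it favor poking at unknown blocks over making progress.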
Just like a child learning by observation and imitation, the four agents can watch one another in action and communicate in plain English with other characters in order to learn new rules and concepts. A probabilistic algorithm then lets them quickly draw conclusions about the rules that govern the world, so they can use that knowledge to progress through the level.
"Characters learn from object interactions in a probabilistic but very fast manner," Butz told Gizmag. "Currently, one object interaction (such as destroying a box) yields the knowledge immediately (one try). Thus, rule learning is very fast – but can adapt when things change in the world, because each rule encodes a probabilistic belief about the encoded interaction consequences."
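Butz's description of one-shot but adaptable rule learning can be approximated with a simple count-based belief estimate. This is an assumed mechanism for illustration, not the paper's implementation: one observation is enough to establish a rule, yet further evidence can revise it.

```python
# Sketch of one-shot-but-adaptive rule learning: each rule stores a
# probabilistic belief that an (action, object) pair yields a given effect.

from collections import defaultdict

class RuleLearner:
    def __init__(self):
        # (action, object) -> {effect: observation count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, action, obj, effect):
        """A single observed interaction immediately creates or updates a rule."""
        self.counts[(action, obj)][effect] += 1

    def belief(self, action, obj, effect):
        """Probability estimate that (action, obj) produces this effect."""
        outcomes = self.counts[(action, obj)]
        total = sum(outcomes.values())
        if total == 0:
            return 0.0  # interaction never observed
        return outcomes[effect] / total

learner = RuleLearner()
learner.observe("jump_under", "box", "box_destroyed")
print(learner.belief("jump_under", "box", "box_destroyed"))  # -> 1.0 after one try

# If the world changes (say boxes become indestructible), new evidence
# shifts the belief instead of breaking the rule outright.
learner.observe("jump_under", "box", "nothing_happens")
print(learner.belief("jump_under", "box", "box_destroyed"))  # -> 0.5
```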
As an example, if Toad has figured out how to collect coins, Mario can ask him how to do so and then try it for himself. Or, to get past a tricky section of the level, the two could work out that one can stand on the other's head to reach coins or blocks neither could have reached alone.
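The asking behavior amounts to transferring a learned rule from one agent's knowledge base to another's. A minimal sketch, with hypothetical rule keys (the article does not specify how rules are represented internally):

```python
# Sketch of rule sharing between agents: an agent that lacks a rule can
# ask another agent and copy over whatever that agent has learned.

def ask(asker_rules, expert_rules, query):
    """Copy the expert's rule for `query` into the asker's rule set,
    if the asker doesn't already know it; return the (possibly new) rule."""
    if query not in asker_rules and query in expert_rules:
        asker_rules[query] = expert_rules[query]
    return asker_rules.get(query)

# Toad has already learned what touching a coin does; Mario has not.
toad_rules = {("touch", "coin"): "coin_collected"}
mario_rules = {}

effect = ask(mario_rules, toad_rules, ("touch", "coin"))
print(effect)  # -> "coin_collected", now also in mario_rules
```

In the real system the exchange happens through plain-English questions and answers, but the net result is the same: knowledge acquired by one character becomes available to the others without each having to rediscover it.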
The task of completing the level may not sound overly complex in itself, because other software could easily be implemented to brute-force the solution. The impressive feat here, however, is that the characters are able to achieve this on their own, through their own curiosity and social intelligence, on very little (if any) explicit human instruction.
According to Fabian Schrodt, one of the main developers on the team, the researchers aim both to make artificial social intelligence easier to teach and to advance the field of human-machine interaction, including driving assistance.
"Applications could focus on social support systems which can communicate about certain aspects of the world and particularly certain sets of interactions – enabling basic reasoning (seeing that you have X, maybe you want to do Y now)," Butz tells us. "Thus, any type of intelligent support system would benefit from such social capabilities."
The video below further illustrates what the characters are capable of.
Source: University of Tübingen