Personal robot harnesses inner speech to talk itself through problems

Pepper the humanoid robot was launched in 2015 by Japanese firm Softbank
Softbank

Robots made to serve as personal assistants are improving all the time, but the reasoning behind their decision-making is still something of a mystery to the everyday user. In an effort to improve communication between humans and their robotic counterparts, scientists in Italy have demonstrated the benefits of a form of robotic inner speech, getting a humanoid robot called Pepper to talk itself through problems aloud. The self-dialogue not only made the robot's reasoning easier to follow, it also improved its ability to complete tasks.

Pepper the robot was launched by Japanese firm Softbank back in 2015 as the world's first personal robot capable of reading people's emotions, as well as displaying emotions of its own via a built-in display. The robot was conceived as a personal assistant people could grow with and have fun with, responding to users' facial expressions and the things they say.

Scientists at the University of Palermo saw the commercially available Pepper robot as an ideal vehicle to test out what they call an inner speech cognitive architecture. The software is modeled on human self-dialogue, the psychological tool by which we talk to ourselves as a way of planning, focusing, reasoning and ultimately making better decisions.

This was put to the test through a series of experiments, in which Pepper completed tasks at a higher rate when armed with the inner speech capability. One problem involved asking the robot to place a napkin on a table in a spot that contradicted what it had been trained to do. The contradiction prompted a confused Pepper to ask itself a series of questions, confirm the user's request, and then talk itself through the conflict aloud.

Pepper the robot talks itself through a problem involving napkin placement on a table
Antonio Chella, Arianna Pipitone

"Ehm, this situation upsets me," Pepper said. "I would never break the rules, but I can't upset him, so I'm doing what he wants."

Pepper ended up placing the napkin in the spot the user requested, and by talking itself through the dilemma it made its decision-making visible: the inner speech cognitive architecture led not only to better problem-solving, but also to greater transparency. The research team's goal is to build trust between humans and robots, in addition to improving the performance of these machines as collaborators and personal assistants.
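
To give a rough sense of how such a self-questioning loop might be structured in software, here is a minimal, purely illustrative Python sketch. The class names, the etiquette rule and the phrasing below are assumptions made for demonstration; they are not the Palermo team's actual architecture or code.

# Illustrative sketch only: a toy "inner speech" loop in which a robot
# voices its reasoning when a user request conflicts with a learned rule.
# Names, rules and dialogue are hypothetical, not the researchers' code.

from dataclasses import dataclass

@dataclass
class Request:
    item: str       # e.g. "napkin"
    position: str   # where the user wants it placed

class InnerSpeechAgent:
    def __init__(self):
        # A learned etiquette rule the robot treats as its default behavior
        self.rules = {"napkin": "left of the fork"}

    def think_aloud(self, thought: str) -> None:
        # Inner speech is simply verbalized: print (or speak) each reasoning step
        print(f"[inner speech] {thought}")

    def handle(self, request: Request, confirm) -> str:
        expected = self.rules.get(request.item)
        if expected is None or expected == request.position:
            self.think_aloud(f"Placing the {request.item} {request.position}, as usual.")
            return request.position

        # The request contradicts what the robot was trained to do,
        # so it talks itself through the conflict before acting.
        self.think_aloud(f"The {request.item} should go {expected}, "
                         f"but I was asked to put it {request.position}.")
        self.think_aloud("This situation upsets me. Should I break the rule?")

        if confirm(f"Do you really want the {request.item} {request.position}?"):
            self.think_aloud("I would never break the rules, but I can't upset the user, "
                             "so I'm doing what they want.")
            return request.position

        self.think_aloud("The user changed their mind; I'll follow the rule instead.")
        return expected

# Example run: the user asks for a placement that violates the etiquette rule
agent = InnerSpeechAgent()
spot = agent.handle(Request("napkin", "on the plate"), confirm=lambda q: True)
print(f"Napkin placed: {spot}")

In this toy version the "inner speech" is just printed reasoning steps surrounding a confirmation prompt, but it captures the basic idea: when a request clashes with a stored rule, the agent surfaces the conflict in words before deciding how to act.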

"People were very surprised by the robot's ability," says first author Arianna Pipitone. "The approach makes the robot different from typical machines because it has the ability to reason, to think. Inner speech enables alternative solutions for the robots and humans to collaborate and get out of stalemate situations."

One drawback of the technology is that the robot can take more time to complete tasks when it has to stop and talk to itself, which could be too inefficient in some circumstances. Nonetheless, the authors believe the study provides a solid foundation to explore how inner speech could improve robotic performance across a range of applications, such as navigation apps and even medical robots in operating theaters.

"Inner speech could be useful in all the cases where we trust the computer or a robot for the evaluation of a situation," says co-author Antonio Chella.

The research was published in the journal iScience.

Source: Cell Press via EurekAlert
