With AI already a big part of everyday life, and its role only set to grow, researchers have turned to four- to 11-year-olds to ask how they think we should treat intelligent technology.
Duke University researchers asked children whether they think Alexa and Roomba devices have the capacity to think and feel, and whether the devices should be treated differently because they’re not human.
Alexa is Amazon’s virtual assistant capable of voice interaction, music playback, streaming podcasts and providing real-time information, amongst other things. Roomba is an autonomous robotic vacuum cleaner made by iRobot that can navigate the floor of a home, sensing obstacles.
The researchers were inspired in part by Hollywood’s portrayal of human-robot interactions in shows like HBO’s Westworld.
“In Westworld and the movie Ex Machina, we see how adults might interact with robots in these very cruel and horrible ways,” said Teresa Flanagan, lead author of the study. “But how would kids interact with them?”
The researchers recruited 127 four- to 11-year-olds to watch a 20-minute video of each technology before asking them questions about the devices. The questions included whether each technology knows the difference between good and bad, whether it has feelings, and whether it’s okay to yell at or hit it when it doesn’t perform.
The study’s results showed that, overall, kids decided that both Alexa and Roomba probably aren’t ticklish and wouldn’t feel pain if they were pinched, suggesting an awareness that the devices lack physical sensation.
However, they gave Alexa – but not Roomba – credit for having mental and emotional capabilities, something the researchers attribute to Alexa’s speaking ability.
“Even without a body, young children think the Alexa has emotions and a mind,” Flanagan said. “And it’s not that they think every technology has emotions and minds – they don’t think the Roomba does – so it’s something special about the Alexa’s ability to communicate verbally.”
Regardless of the devices' perceived capabilities, children of all ages agreed it was wrong to hit or yell at them.
“Kids don’t seem to think a Roomba has much mental abilities like thinking or feeling,” said Flanagan. “But kids still think we should treat it well. We shouldn’t hit or yell at it even if it can’t hear us yelling.”
However, this kindness diminished with age: the older the children were, the more acceptable they thought it was to assault the technology.
“Four- and five-year-olds seem to think you don’t have the freedom to make a moral violation, like attacking someone,” Flanagan said. “But as they get older, they seem to think it’s not great, but you do have the freedom to do it.”
At a time when AI like ChatGPT is making headlines, the study’s findings offer insight into children’s relationships with technology and raise the question of whether ethics should guide how kids treat intelligent devices. The researchers say the findings also pose questions about the role parents might play in modeling good behavior for their children.
In the meantime, the researchers plan a follow-up study investigating why children think it’s wrong – or right – to assault technology.
The study was published in the journal Developmental Psychology.
Source: Duke University