Isaac Asimov's Three Laws of Robotics are versatile and simple enough that they still persist 75 years after he first formulated them. But our current world, where robots and AI agents are cleaning our houses, driving our cars and working alongside us, is vastly different from anything even the most forward-thinking sci-fi writers could imagine. To make sure the guidelines for programming artificial intelligence cast as wide a net as possible, experts from the University of Hertfordshire have detailed a new system they call "Empowerment."
Originally created as a safety feature for the robots in Asimov's speculative stories, the Three Laws are elegant in their simplicity:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
But the Hertfordshire researchers believe that these laws don't quite cover all the nuances that could arise in a robot's day-to-day life. Any guidelines for robot behavior need to be simultaneously generic enough to apply to any situation, yet well-defined enough to ensure the robot always acts in the best interests of itself and the humans around it.
As an AI concept, Empowerment has been around since 2005, and the team has developed and refined it over the past 12 years. Put simply, a robot's primary motivation should be to keep its options open: it should favor actions that leave it with as many possible next moves as it can. Formally, empowerment measures how much influence an agent has over the world it can perceive – the more distinct outcomes the agent can reliably bring about, the more empowered it is. By extension, the robot should also act to maximize the empowerment of the humans around it.
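To get a concrete feel for the idea, here is a minimal sketch of n-step empowerment in a deterministic gridworld, where empowerment reduces to the logarithm of the number of distinct states the agent can reach within n steps. The grid layout, action set and helper functions are illustrative assumptions, not code from the paper:

```python
# A minimal sketch of n-step empowerment in a deterministic gridworld.
# In a deterministic world, empowerment reduces to log2 of the number of
# distinct states reachable within n steps.

from itertools import product
from math import log2

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # up, down, right, left, stay

def step(state, action, walls, width, height):
    """Apply one action; blocked moves leave the agent where it is."""
    x, y = state
    dx, dy = action
    nx, ny = x + dx, y + dy
    if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
        return (nx, ny)
    return state

def empowerment(state, n, walls, width, height):
    """n-step empowerment: log2 of the number of distinct reachable states."""
    outcomes = set()
    for seq in product(ACTIONS, repeat=n):
        s = state
        for a in seq:
            s = step(s, a, walls, width, height)
        outcomes.add(s)
    return log2(len(outcomes))

walls = {(1, 1)}
print(empowerment((2, 2), 2, walls, 5, 5))  # open center: ~3.58 bits
print(empowerment((0, 0), 2, walls, 5, 5))  # boxed-in corner: ~2.32 bits
```

An agent in the open scores higher than one hemmed into a corner, which captures the intuition directly: restricting an agent's movement literally removes its options.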
It sounds fairly basic, but the researchers say that agents acting under this principle have exhibited surprisingly natural behavior. Even better, they need only an understanding of the overall dynamics of the world, rather than being programmed for every specific scenario that might arise.
The Three Laws were designed to make sure robots are productive without harming themselves or humans, and Empowerment covers these same basic points. For example, injuring or killing a human would obviously decrease that person's empowerment – after all, they would have no options left. The same goes for the third law, where a robot's own empowerment and wellbeing are at stake.
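Continuing the gridworld sketch above (and reusing its step and empowerment helpers), one way to picture such a trade-off in code is a robot that ranks its moves by a weighted sum of the human's empowerment and its own. The two-agent setup and the weights are illustrative assumptions, not the paper's model:

```python
def choose_action(robot, human, walls, width, height, n=2,
                  w_human=1.0, w_robot=0.5):
    """Pick the robot move that maximizes a weighted sum of empowerments."""
    def combined(a):
        r = step(robot, a, walls, width, height)
        # The robot's new position counts as an obstacle for the human,
        # so blocking the human's path visibly costs human empowerment.
        e_human = empowerment(human, n, walls | {r}, width, height)
        e_robot = empowerment(r, n, walls, width, height)
        return w_human * e_human + w_robot * e_robot
    return max(ACTIONS, key=combined)

# With the robot standing right next to the human, the best-scoring move
# is to step away rather than hem the human in:
print(choose_action(robot=(2, 1), human=(2, 0), walls=set(), width=5, height=5))
# -> (0, 1), i.e. the robot moves away from the human
```

Nothing in this objective says "do not block humans" explicitly; keeping the human's options open simply scores better than crowding them.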
"There is currently a lot of debate on ethics and safety in robotics, including a recent a call for ethical standards or guidelines for robots," says Christoph Salge, co-author of the paper. "In particular there is a need for robots to be guided by some form of generic, higher instruction level if they are expected to deal with increasingly novel and complex situations in the future – acting as servants, companions and co-workers.
"Imbuing a robot with these kinds of motivation is difficult, because robots have problems understanding human language and specific behavior rules can fail when applied to differing contexts. From the outset, formalizing this kind of behavior in a generic and proactive way poses a difficult challenge. We believe that our approach can offer a solution."
The paper was published in the journal Frontiers in Robotics and AI.
Source: University of Hertfordshire