Robotics

"Empowering" robots could replace the Three Laws of Robotics

AI experts have detailed a new set of guidelines for ethical robot behavior, built around a principle called Empowerment

Isaac Asimov's Three Laws of Robotics are versatile and simple enough that they still persist 75 years after he first coined them. But our current world, where robots and AI agents are cleaning our houses, driving our cars and working alongside us, is vastly different from anything even the most forward-thinking sci-fi writers could have imagined. To make sure the guidelines for programming artificial intelligence cast as wide a net as possible, experts from the University of Hertfordshire have detailed a new system they call "Empowerment."

Originally created as a safety feature of the robots in Asimov's speculative stories, the Three Laws are elegant in their simplicity:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence so long as such protection does not conflict with the First or Second Laws.

But the Hertfordshire researchers believe that these laws don't quite cover all the nuances that could arise in a robot's day-to-day life. Any guidelines for robot behavior need to be simultaneously generic enough to apply to any situation, yet well defined enough to ensure the robot always acts in its own best interests and those of the humans around it.

As an AI concept, Empowerment has been around since 2005, and the team has developed and refined it over the last 12 years. Put simply, a robot's primary motivation should be to keep its options open: it should take actions that leave it with as many options as possible for its next move. By extension, the robot should also act to maximize the empowerment of the humans around it.
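
To make the "keep your options open" idea concrete, here is a minimal sketch of a greedy option-preserving agent in a toy deterministic gridworld. The grid, the helper names (`step`, `reachable_states`, `choose_action`) and the obstacle layout are illustrative assumptions, not the researchers' code; the published work measures empowerment information-theoretically (roughly, the channel capacity between an agent's actions and its future sensor states), while the sketch simply counts reachable states as a stand-in.

```python
# Minimal sketch, assuming a toy deterministic gridworld; counting reachable
# states stands in for the information-theoretic empowerment measure.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # right, left, down, up, stay


def step(state, action, walls, size):
    """Apply an action; moves into a wall or off the grid leave the state unchanged."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if nxt in walls or not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
        return state
    return nxt


def reachable_states(state, walls, size, horizon):
    """Distinct states reachable from `state` within `horizon` steps."""
    frontier = {state}
    for _ in range(horizon):
        frontier = {step(s, a, walls, size) for s in frontier for a in ACTIONS}
    return frontier


def choose_action(state, walls, size, horizon=3):
    """Pick the action whose successor state keeps the most future options open."""
    return max(
        ACTIONS,
        key=lambda a: len(reachable_states(step(state, a, walls, size), walls, size, horizon)),
    )


if __name__ == "__main__":
    walls = {(2, 2), (2, 3), (3, 2)}  # hypothetical obstacle layout
    # Prints whichever move leaves the agent with the largest set of future states.
    print(choose_action((2, 1), walls, size=6))
```

Note that nothing in the sketch encodes what "good" behavior looks like in any particular scenario; the agent only needs a model of the world's dynamics, which is the property the researchers highlight.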

It sounds fairly basic, but the researchers say that agents acting under this principle have exhibited surprisingly natural behavior. Even better, they only need to be given an understanding of the overall dynamics of the world, without having to be programmed for every specific scenario that might happen.

The Three Laws were designed to make sure robots are productive without harming themselves or humans, and Empowerment covers these same basic points. For example, injuring or killing a human would obviously decrease that person's empowerment – after all, they won't have any options left. The same goes for the third law, where a robot's own empowerment and wellbeing are at stake.
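
Building on the gridworld helpers in the sketch above, that mapping can be illustrated by scoring each robot action on a weighted combination of its own options and a nearby human's options. The weighting, the helper name and the idea of treating the robot's body as an extra obstacle for the human are illustrative assumptions, not the paper's exact formulation.

```python
# Illustrative extension of the sketch above (not the paper's formulation):
# score each robot action by its own options plus a weighted count of the
# human's options, so moves that box the human in score poorly.


def choose_action_with_human(robot_state, human_state, walls, size,
                             horizon=3, human_weight=2.0):
    """Pick the robot action that best preserves both agents' future options."""
    def score(action):
        nxt = step(robot_state, action, walls, size)
        robot_options = len(reachable_states(nxt, walls, size, horizon))
        # From the human's point of view, the robot's body is one more obstacle,
        # so parking itself where the human would want to move costs options.
        human_options = len(reachable_states(human_state, walls | {nxt}, size, horizon))
        return robot_options + human_weight * human_options
    return max(ACTIONS, key=score)
```

Weighting the human's options more heavily than the robot's mirrors the ordering of the original laws: actions that would strip a person of their options (the extreme case being injury) are penalized before the robot worries about preserving its own.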

"There is currently a lot of debate on ethics and safety in robotics, including a recent a call for ethical standards or guidelines for robots," says Christoph Salge, co-author of the paper. "In particular there is a need for robots to be guided by some form of generic, higher instruction level if they are expected to deal with increasingly novel and complex situations in the future – acting as servants, companions and co-workers.

"Imbuing a robot with these kinds of motivation is difficult, because robots have problems understanding human language and specific behavior rules can fail when applied to differing contexts. From the outset, formalizing this kind of behavior in a generic and proactive way poses a difficult challenge. We believe that our approach can offer a solution."

The paper was published in the journal Frontiers in Robotics and AI.

Source: University of Hertfordshire

9 comments
ei3io
Kudos remain for Isaac's 3 principles. He was big at the First International Robotics Conference @ LIU in Brooklyn NY which was an early benchmark in this history.
Username
"But our current world, where robots and AI agents are cleaning our houses, driving our cars and working alongside us, is vastly different than even the most forward-thinking sci-fi writers could imagine"
This writer should read more.
"Empowerment" seems to be a catchy re-branding with the exact desired outcome.
CarlUsick
The first rule should be that all robots have an easily accessible and hardwired off switch. Plus maybe a large unprotected area that will totally disable it with one small caliber bullet. Basically weak, vulnerable and dependent on human care. That should buy us a few weeks anyway.
Gregg Eshelman
Just so long as they don't conceive of a "Zeroth Law". In Asimov's continuation of his Foundation series, that led to the robots working to convert humanity into a hive mind. Giskard and Daneel R. Olivaw convinced themselves that putting the "good" of Humanity above the individual was the Best Thing Ever, but still had to trick a human into "ordering" the go-ahead for the project.
Spod
What's missing from the 3 laws of robotics is any mention of how a robot should behave around animals. Given they are totally ignored in the 3 laws, a robot would treat any animal as just a thing with no value, obviously an unacceptable situation to an ethical person. I always wondered why Asimov left this out, maybe just an indictment of the poor attitudes towards animals in general at the time he wrote them.
Graeme S
Given what man has done in the past and is capable of doing, one must conclude that giving mankind the ability to choose between right and wrong hasn't worked out well, and the complexity of both the social and legal worlds has not come close to creating the world that we now think we can orchestrate by our own human intelligence. Without an ultimate truth guiding us we will fail. If mankind cannot work it out amongst itself and refuses to accept an ultimate truth, what hope can we expect from that which we create? Laws alone will not give us the security we are trying to create; if we, knowing what is right and wrong, still choose wrong and what suits us, our progeny will be like us but worse.
Starper
Being interested in languages for many years, I've collected over 130 dictionaries. What is interesting is the number of words found in many languages that I recognize. Whether Romanian or Yiddish, Swahili or Hindi. And I have found that English is pretty much a universal language, and have met or heard people in foreign countries who either are fluent in English or at least know some. However I don't think verbal language is a solution, but a universal sign language might be. Robots can process more and faster visually, while having problems understanding speech, because no two people would pronounce a word exactly the same. Taking into account regional variations, such as English spoken in the US, England, Canada, New Zealand and Australia, no wonder a robot would have problems. Sign languages are found in almost every country.
bwana4swahili
Empowerment! Sounds like the Russians have a robot empowered to autonomously wage war. Yup, I'd rather have Asimov's approach!!
Lamb
Too late. Trouble is the wrong guys are making the robots. Already giving robots guns. They will come for our guns.