A group of researchers from Tufts University, Brown University and the Rensselaer Polytechnic Institute are collaborating with the US Navy in a multi-year effort to explore how they might create robots endowed with their own sense of morality. If they are successful, they will create an artificial intelligence able to autonomously assess a difficult situation and then make complex ethical decisions that can override the rigid instructions it was given.
Seventy-two years ago, science fiction writer Isaac Asimov introduced "three laws of robotics" that could guide the moral compass of a highly advanced artificial intelligence. Sadly, given that today's most advanced AIs are still rather brittle and clueless about the world around them, one could argue that we are nowhere near building robots that are even able to understand these rules, let alone apply them.
A team of researchers led by Prof. Matthias Scheutz at Tufts University is tackling this very difficult problem by trying to break down human moral competence into its basic components, developing a framework for human moral reasoning. Later on, the team will attempt to model this framework in an algorithm that could be embedded in an artificial intelligence. The infrastructure would allow the robot to override its instructions in the face of new evidence, and justify its actions to the humans who control it.
"Moral competence can be roughly thought about as the ability to learn, reason with, act upon, and talk about the laws and societal conventions on which humans tend to agree," says Scheutz. "The question is whether machines – or any other artificial system, for that matter – can emulate and exercise these abilities."
For instance, a robot medic could be ordered to transport urgently needed medication to a nearby facility, and encounter a person in critical condition along the way. The robot's "moral compass" would allow it to assess the situation and autonomously decide whether it should stop and assist the person or carry on with its original mission.
If Asimov's novels have taught us anything, it's that no rigid, pre-programmed set of rules can account for every possible scenario, as something unforeseeable is bound to happen sooner or later. Scheutz and colleagues agree, and have devised a two-step process to tackle the problem.
In their vision, all of the robot's decisions would first go through a preliminary ethical check using a system similar to those in the most advanced question-answering AIs, such as IBM's Watson. If more help is needed, the robot would then rely on the system that Scheutz and colleagues are developing, which tries to model the complexity of human morality.
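To make the two-step idea concrete, here is a minimal sketch in Python of how such a pipeline might be wired together. The class names, the `clear` heuristic, and the `deliberate` fallback are hypothetical illustrations of the architecture described above, under my own assumptions, and not the researchers' actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Situation:
    """Hypothetical snapshot of what the robot currently perceives."""
    mission: str        # the standing order, e.g. "deliver medication"
    observation: str    # what the robot just encountered on the way
    human_at_risk: bool # does the observation involve a human in danger?

class PreliminaryEthicalFilter:
    """Stage 1: a fast, shallow check, loosely analogous to a QA system."""
    def clear(self, situation: Situation) -> Optional[str]:
        # If nothing ethically salient is detected, approve the standing order.
        if not situation.human_at_risk:
            return situation.mission
        return None  # defer to the deeper moral-reasoning stage

class MoralReasoner:
    """Stage 2: a slower, deliberative model of moral competence (placeholder)."""
    def deliberate(self, situation: Situation) -> str:
        # A real system would weigh norms, consequences, and obligations.
        # Here we only illustrate that deliberation can override the mission
        # and that the decision comes with a human-readable justification.
        justification = (f"Overriding '{situation.mission}': "
                         f"a human is in critical condition ({situation.observation}).")
        print(justification)
        return "stop and assist"

def decide(situation: Situation) -> str:
    quick = PreliminaryEthicalFilter().clear(situation)
    if quick is not None:
        return quick
    return MoralReasoner().deliberate(situation)

# Usage: the robot-medic scenario from the article.
print(decide(Situation(mission="deliver medication to the field hospital",
                       observation="soldier with severe bleeding on the route",
                       human_at_risk=True)))
```

The point of the split is efficiency: most decisions never need the expensive deliberative stage, but when the quick filter flags an ethically loaded situation, the deeper model can override the order and explain why.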
As the project is being developed in collaboration with the US Navy, the technology could find its first application in medical robots designed to assist soldiers on the battlefield.
Source: Tufts University
The difference between self-defense and murder, the difference between the good guy and the bad guy, etc. are highly complex topics that even humans do not agree on.
Look at even a simple example like mainstream news channels such as Fox/CNN/MSNBC. Even thinking humans guided by journalistic integrity serve largely as direct mouthpieces for one of the two political parties of their choice. People don't even formulate objective views on events; their opinions are almost completely handed down to them by media outlets or their political affiliation.
Complex morality is more tribal than fair. I don't think we give being simplistic enough credit.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I confess to not having read Asimov's work, but from the movie I, Robot I believe the robots came to the conclusion that the best way to protect humans from harm was to take over the world and remove humans' authority to do things like declare war.
I think the "3 laws", which were intentionally flawed by a sci-fi author to allow for a robotic uprising, should probably not be the guiding principles we actually use. I think that to prevent the scenario in I, Robot, the first law must be shortened to:
1. A robot may not injure a human being.
I think getting creative with this point would just be used as justification to allow "our" robots to kill bad guys because "we" are obviously the good guys. This of course loses sight of the point that to someone somewhere we are all "them".
The idea of robot AIs with morality smacks way too much of Skynet for my tastes. Morality varies from culture to culture and is so malleable that I think it is unworkable. Depending upon whose morality is used, there is no real difference between the morality of a country fighting for survival and that of an AI fighting for its own survival by defending itself with robot AIs because it thinks it will be turned off.
I believe that this is the wrong time to aim for teaching robots morality. The first thing the researchers need to accomplish is building a technology that can genuinely learn in the same fashion as a human, i.e. learn from its mistakes and be relatively limitless in that capacity. Once that is accomplished, the researchers will then reasonably be able to concentrate on imbuing that technology with traits that are distinctly human.
That's the first thing I thought about. AI should be a managed, soulless tool that parses text and speech, crunches numbers, and suggests solutions.
Bringing in ethics is downright dangerous.