Nairda
The basics of morality are one thing, but teaching an AI the finer points of making a morally ambiguous decision for the greater good is another. How do you tell a child that you have to kill the animal so that you may nourish yourself on its flesh? How do you justify collateral damage in killing a terrorist on the view that it will potentially save many other innocents from being sacrificed, when you have no numbers for how many? How does an AI live with making an incorrect decision?
Daishi
@Nairda is spot on here. Once we move past the basic fundamentals, even humans are far from understanding human morality. Many people believe morality is something their religion provides, yet they often pick and choose which aspects of their religion to follow, and those choices have changed over time.
The differences between self-defense and murder, or between good guy and bad guy, are highly complex topics that even humans do not agree on.
Take even a simple example: mainstream news channels like Fox/CNN/MSNBC. Even thinking humans guided by journalistic integrity serve largely as direct mouthpieces for one of the two political parties of their choice. People don't formulate objective views on events; their opinions are almost completely handed down to them by media outlets or their political affiliation.
Complex morality is more tribal than fair. I don't think we give simplicity enough credit.
Daishi
I also want to point out that the 3 laws themselves may be too complex. They are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I confess to not having read Asimov's work, but from the movie I, Robot I believe the robots came to the conclusion that the best way to protect humans from harm was to take over the world and remove humans' authority to do things like declare war.
I think the "3 laws", intentionally flawed upon creation by a sci-fi author to allow for a robotic uprising, should probably not be the guiding principles we actually use. I think that to prevent the scenario in I, Robot, the first law must be shortened to:
1. A robot may not injure a human being.
I think getting creative with this point would just be used as justification to allow "our" robots to kill bad guys, because "we" are obviously the good guys. This, of course, loses sight of the fact that to someone, somewhere, we are all "them".
Michael Ryan
"Three billion human lives ended on August 29, 1997. The survivors of the nuclear fire called the war Judgment Day. They lived only to face a new nightmare – the war against the Machines." - The Terminator
Robot AIs with morality smack way too much of Skynet for my tastes. Morality varies from culture to culture and is so malleable that I think it is unworkable. Depending upon whose morality is used, there is no real difference between the morality of a country fighting for survival and that of an AI fighting for its survival, defending itself with robot AIs if it thinks it will be turned off.
Rt1583
As has been hinted at in other comments, morality is wholly indefinable. While some moral codes are global, many more are divided along lines that run from the national level all the way down to the family level.
I believe this is the wrong time to aim for teaching robots morality. The first thing researchers need to accomplish is a technology that can definitively learn in the same fashion as a human, i.e., learn from its mistakes and be relatively limitless in that capacity. Once that is accomplished, researchers will reasonably be able to concentrate on imbuing that technology with traits that are distinctly human.
cattleherder
@Mike Ryan
That's the first thing I thought about. AI should be a managed, soulless tool that parses text and speech, crunches numbers, and suggests solutions.
Bringing in ethics is downright dangerous.
Slowburn
I prefer my machines to simply follow orders, with the person who gives the orders being responsible for the machine's actions.
badmadman.dontstop
It's the perfect time to address this. There is a non-profit Mormon organization currently working on the artificial morals and ethics that any decent AI should have and will need. The comments on here are funny... the standard views of people whom all these movies have been trying to ready for the advent of AI. Fear seems to be the common denominator. Really useful morals and ethics can be defined, and they will be instilled in the AIs. I have no fear of them. Personally, I will welcome the closest thing to intelligent consciousness, besides ourselves, that we have seen since crawling from the primordial ooze. We will finally be able to achieve our destiny with their help.
Loving It All
I'd sooner see us put effort into instilling consistent morality in humans. First things first.
YukonJack
Please define what level of morality is being used here: is it the morality of one's own immediate environment, or the morality of society as a whole? And who exactly will be in charge of the 'witch hunts' that will no doubt arise from any morality ruling? I may just move back to Alaska after all.