Self-driving cars could soon be making decisions based on morality

Researchers say it should be possible to get self-driving cars to use morality and ethics to help make decisions

The development of autonomous cars has raised plenty of questions, including the tricky problem of autonomous systems making potentially life-or-death decisions. Should self-driving vehicles protect their owners at all costs, or should they sacrifice them to save a bigger group of people? Although there's no concrete answer, a research team in Germany says morality could soon be playing a role in how self-driving cars make decisions.

Conventional wisdom suggests human morality is too dependent on context to be modeled accurately, meaning it can't be effectively integrated into a self-driving algorithm. Researchers from the University of Osnabrück in Germany suggest this isn't actually the case.

In virtual reality, study participants were asked to drive a car through suburban streets on a foggy night. On their virtual journeys they were presented with the choice of slamming into inanimate objects, animals or humans in an inevitable accident. The subsequent decisions were modeled and turned into a set of rules, creating a "value-of-life" model for every human, animal and inanimate object likely to be involved in an accident.
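To make the idea concrete, here is a minimal sketch (in Python) of how a "value-of-life" model could be used to pick the least harmful option in an unavoidable collision. The category scores and function name below are hypothetical and invented purely for illustration; the study derives its actual values from participants' choices in the VR experiment.

```python
# Hypothetical sketch of a "value-of-life" model. The scores below are
# invented for illustration only; the paper's real values come from
# modeling participants' decisions in the VR driving experiment.

VALUE_OF_LIFE = {        # higher = more worth protecting (assumed scale)
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.4,
    "deer": 0.3,
    "traffic_cone": 0.01,
}

def least_harmful_option(options):
    """Given the categories of unavoidable collision targets,
    return the one with the lowest assumed value-of-life score."""
    return min(options, key=lambda category: VALUE_OF_LIFE.get(category, 0.0))

# Example: if the only choices are hitting a deer or an adult pedestrian,
# the model picks the deer.
print(least_harmful_option(["deer", "adult"]))  # -> "deer"
```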

"Now that we know how to implement human ethical decisions into machines we, as a society, are still left with a double dilemma," says Professor Peter König, an author on the paper. "Firstly, we have to decide whether moral values should be included in guidelines for machine behaviour and secondly, if they are, should machines act just like humans?"

The German Federal Ministry of Transport and Digital Infrastructure recently defined 20 ethical principles for self-driving cars, but they're based on the assumption that human morality can't be modeled. They also make some bold assertions about how cars should act, arguing a child running onto the road would be less "qualified" to be saved than an adult standing on the footpath watching, because the child created the risk. Although logical, that isn't necessarily how a human would respond to the same situation.

So, what's the right approach? The University of Osnabrück study doesn't offer a definitive answer, but the researchers point out that the "sheer expected number of incidents where moral judgment comes into play creates a necessity for ethical decision-making systems in self-driving cars." And it's not just cars we need to think about. AI systems and robots will likely be given more and more responsibilities in other potential life-and-death environments, such as hospitals, so it seems like a good idea to give them a moral and ethical framework to work with.

The team's study was published in Frontiers in Behavioral Neuroscience.

Source: ScienceDaily

7 comments
Deres
This is more complex than just morality. Take the example of a moving stroller on the road. Would the machine kill its driver to save a hypothetical baby that it does not see? Moreover, it could be a homeless person's stroller... And the exact contents of the car should be taken into account...
piperTom
"Should self-driving vehicles protect " owners or others?? I don't know what Ethics will say, but I know which answer will sell better.
vqsteve
Morality is often individual in nature. What I would do is different from what someone else would do in a given circumstance.
Prospective self-driving car owners should take a virtual reality driving test that presents various moral dilemma situations. Their biases, preferences and ethical driving choices would then be incorporated into the car's software and algorithms - the car would respond as they would have - along with the corresponding liability or justification.
Manufacturers and insurers should jump on board with this model as it would shift responsibility to the consumer. "It wasn't the car's fault. It did just what the owner chose for it to do."
Anne Ominous
The whole "self-driving car" thing is going to be a complete mess as soon as one of them hits "the Trolley Problem" and kills somebody, and the family takes people to court.
https://en.wikipedia.org/wiki/Trolley_problem
Even humans cannot "solve" the trolley problem. But with self-driving cars, there is yet another concern:
When a human is driving the car, that person is responsible for his/her decisions.
When the car is operating under the direction of a program, who is responsible for "making the decision"?
The manufacturer? The programmer? The programmers' boss, who told them what to program?
Only one thing is certain: it will not be the human who was behind the wheel.
CharlieSeattle
Please run these "Self-Driving Car" tests to simulate real world conditions FIRST!!
Test sensors in a dark tunnel without light after a storm. Test sensors with 50% of them disabled to simulate a defect or failure. Test sensors covered with ice that hasn't been chipped off. Test sensors covered with bird doo. Test sensors covered with engine oil spray. Test sensors covered with Arizona road dust. Test sensors covered with mud. Test sensors coated by a sleet storm. Test sensors covered with sticky leaves. Test sensors covered with dead bugs. Test sensors covered with clear wax applied by a vandal. Test sensors disabled by an EMP strike. Test sensors against popular car-top carriers, canoes, mattresses, etc.
CRITICAL TEST: Test the "Self-Driving Car" software's ability to block an NSA/CIA/FBI cyber hack used to stage an undetectable assassination! They can and will spoof their real location from the USA to Russia, Nigeria or your mommy's basement!
Douglas Bennett Rogers
Maybe you will buy a priority level, with the lowest one stopping for pigeons.
Nairda
The trolley problem is not a problem. As long as the police investigators see long black skid marks leading up to the point of impact (or near-impact), it shows the intent of the driver or AI to avoid the collision, and that can be argued in court. Swerving at speed often results in multiple impacts and loss of vehicle control. A good AI would see this and choose appropriately.