As autonomous vehicles pull out onto the roads in greater numbers, issues of safety and morality will inevitably be raised. Extreme ethical scenarios are being considered, like deciding whether a car will swerve to avoid hitting a pedestrian if it means killing its own occupants, but not all of these questions are a matter of life and death. Stanford researchers are looking into how to ethically program cars to break minor laws if required.
A survey on driverless car safety conducted at MIT paints a grim picture: people generally agree that autonomous cars should be designed to minimize casualties, but most wouldn't buy a car that might sacrifice them for the greater good. Thankfully, these scenarios, while important to plan for, should be rare occurrences.
The project at Stanford is considering more minor ethical issues, which may have less severe consequences but will crop up more often. For example, if an autonomous car approaches an obstacle that takes up half a lane, and there's a double line in the middle of the road, what should the vehicle do? A human driver might not think twice about momentarily breaking the law and passing over the lines to get past – assuming there's no oncoming traffic, obviously – but is it right for autonomous cars to be programmed to plan ahead of time to break the law? And if so, under what circumstances, and to what extent?
"We can treat that as a very harsh, strict constraint, and the vehicle will have to come to a complete stop in order to not hit the obstacle," explains Sarah Thornton, Stanford PhD candidate.
Programming cars to follow the law to the letter might feel like the sensible option at first glance, points out Stanford PhD candidate, Sarah Thornton, but human drivers will often make these minor infractions to maintain vehicle safety and passenger comfort, and stopping in the middle of the road for a wayward cardboard box might not be so safe an option anyway.
Instead, autonomous vehicles may be better off balancing practicality with the need to stay on the right side of the law. The car could choose to cross the lines but pass as close as possible to the obstacle, minimizing its law-breaking, though that may be, as Thornton says, "very uncomfortable for the occupant in the passenger seat." Or, if it's safe to do so, the car could veer out into the opposite lane to give the obstacle a wide berth. It's these kinds of considerations that will need to be examined further.
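To make the tradeoff concrete, here is a minimal, hypothetical sketch in Python (not the Stanford team's actual code): it scores three candidate maneuvers around a half-lane obstacle, first treating the double line as a hard constraint and then as a weighted penalty. Every weight, cost, and maneuver below is an invented assumption for illustration.

```python
# Hypothetical sketch: hard constraint vs. soft penalty for a line crossing.
# A strict reading makes any crossing infeasible, forcing the car to stop;
# a soft penalty trades legality against occupant comfort. All numbers here
# are made up for illustration.

INF = float("inf")

CROSSING_WEIGHT = 4.0  # assumed cost per meter of crossing over the line
COMFORT_WEIGHT = 2.0   # assumed cost that grows as obstacle clearance shrinks
STOP_COST = 10.0       # assumed cost of halting in the travel lane

def maneuver_cost(crossing_m, clearance_m, strict):
    """Score one candidate maneuver around a half-lane obstacle."""
    if strict and crossing_m > 0:
        return INF  # strict reading of the law: crossing is never allowed
    legality = CROSSING_WEIGHT * crossing_m
    discomfort = COMFORT_WEIGHT / max(clearance_m, 0.1)
    return legality + discomfort

for strict in (True, False):
    candidates = {
        "stop before the obstacle": STOP_COST,  # legal, but blocks the road
        "squeeze past, minimal crossing": maneuver_cost(0.2, 0.3, strict),
        "wide berth into opposite lane": maneuver_cost(1.5, 1.5, strict),
    }
    best = min(candidates, key=candidates.get)
    mode = "hard constraint" if strict else "soft penalty"
    print(f"{mode}: {best}")
```

Under the strict reading the planner can only stop; with the soft penalty it prefers the wide berth over the uncomfortable squeeze, mirroring the options Thornton describes.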
"The vehicles are going to be what's making the decisions now," says Selina Pan, postdoctoral scholar at Stanford. "And so we need to somehow translate social behavior, ethical behavior, into what happens once the vehicle finally takes full control."
The team demonstrates the different scenarios in the video below.
Source: Stanford University
Since there are so many conflicting laws on the books, many will need to be modified, and new ones created.
It's unlikely that we will reach a consensus by the time the technology becomes affordable for those who are actually interested.
Case in point: the recent Tesla crash. Even in Autopilot mode, had the speed limit been observed, the outcome would have been different.
139 Exceptions for avoiding obstructions on a road
...
(4) A driver may drive on a dividing strip, or on or over a single continuous line, or 2 parallel continuous lines, along a side of or surrounding a painted island, to avoid an obstruction if:
(a) the driver has a clear view of any approaching traffic, and
(b) it is necessary and reasonable to drive on the dividing strip or painted island to avoid the obstruction, and
(c) the driver can do so safely.
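The rule's three conditions map naturally onto a simple gating check, as in this hypothetical sketch. The function and its boolean inputs are assumptions; reliably establishing each condition from sensor data is precisely the hard part the article describes.

```python
# Hypothetical sketch: rule 139(4)'s conditions (a)-(c) as a gating check.
# How a vehicle would reliably establish each boolean from perception and
# prediction is the open problem, not the check itself.

def may_cross_continuous_line(clear_view_of_approaching_traffic: bool,
                              necessary_and_reasonable: bool,
                              can_do_so_safely: bool) -> bool:
    """All three conditions of rule 139(4) must hold."""
    return (clear_view_of_approaching_traffic
            and necessary_and_reasonable
            and can_do_so_safely)
```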