Waymo (and Google before it) has now clocked up over 10 million autonomous miles on public roads in 25 cities. Last year, Phoenix, Arizona, became the first city to set the company's self-driving cars free of human backup, hosting a public trial of fully driverless vehicles. Now California has joined the party.
California's Department of Motor Vehicles has granted Waymo the first permit in the state for driverless testing on public roads, the result of new rules that allow companies to apply to test fully driverless vehicles within "carefully defined limits." In practice, that means the fully driverless vehicles will be restricted to parts of Mountain View, Sunnyvale, Los Altos, Los Altos Hills and Palo Alto.
The approved zone should feel like home for the company, as the area is where both Waymo and its parent, Alphabet, are headquartered. The tests will begin in a limited area and expand as confidence and experience build. Before moving into new territory, Waymo intends to notify residents that fully driverless vehicles are operating in the area.
The permit covers day and night driving on city streets, rural roads and highways with speed limits up to 65 mph (105 km/h). Since there will be no human backup, Waymo says that if a problem occurs its test vehicles will come to a safe stop and try to work out how best to proceed, calling fleet support for help if the issue can't be resolved locally.
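In outline, the fallback Waymo describes is a simple escalation path: stop safely first, attempt onboard recovery, then hand off to a remote operator. The sketch below is a hypothetical illustration of that flow only; every class and method name here is invented, not Waymo's actual software.

```python
# Hypothetical sketch of the fallback flow described above.
# The vehicle interface (pull_over_and_stop, try_local_recovery,
# call_fleet_support) is invented for illustration.
from enum import Enum, auto


class Resolution(Enum):
    RESOLVED_ONBOARD = auto()
    NEEDS_FLEET_SUPPORT = auto()


def handle_fault(vehicle, fault):
    """Bring the car to a safe stop, then try to resolve the fault locally."""
    vehicle.pull_over_and_stop()           # first priority: a safe stop
    if vehicle.try_local_recovery(fault):  # e.g. re-plan around a blockage
        return Resolution.RESOLVED_ONBOARD
    # No human backup on board, so escalate to remote fleet support.
    vehicle.call_fleet_support(fault)
    return Resolution.NEEDS_FLEET_SUPPORT
```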
The first passengers in the fully driverless vehicles will be members of the Waymo team, but the company plans to open the program up to members of the public in the future.
Source: Waymo
An autonomous vehicle would know that the road was icy way, way sooner - much sooner than even a very experienced 'ordinary driver' would - because it would be able to measure the outside and road surface temperature dynamically (it would also have constant access to weather forecasts, for that matter). So the example accident scenarios you cite simply would not - in the vast majority of cases - ever become an issue.
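To make that concrete, here's a rough sketch of the kind of check a car could run continuously; the inputs and thresholds are invented for the example, not taken from any real vehicle.

```python
# Toy illustration of the commenter's point: fuse a live road-surface
# temperature reading with a forecast to flag ice risk early.
# Thresholds are made up for illustration.
def ice_risk(road_surface_temp_c: float, precip_probability: float) -> bool:
    """Flag likely ice: surface near freezing plus moisture in the forecast."""
    near_freezing = road_surface_temp_c <= 0.5     # continuous sensor reading
    moisture_expected = precip_probability >= 0.3  # constantly refreshed forecast
    return near_freezing and moisture_expected


# e.g. ice_risk(-1.2, 0.6) -> True: slow down long before a human would notice
```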
Everyone seems to want to apply potential *human* failings to autonomous vehicle driving when, of course, the human element (the one responsible for 90% of 'accidents') will be completely removed in autonomous vehicles.
An example: "Oh, well, what'll happen when an autonomous car gets to a junction controlled by traffic lights and the lights are out?" Well, given that autonomous vehicles will constantly be talking to each other, knowing where they all are, what speed they are doing and where they want to go next, traffic lights - and all the delays and cost that their current necessity implies - *will simply no longer be needed*.
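As a toy illustration of how a lightless junction could work, imagine an intersection manager that grants vehicles non-overlapping time windows. Everything below is an invented sketch under that assumption - real research prototypes (reservation-based intersection management) reserve space-time tiles per path rather than treating the whole junction as one resource.

```python
# Toy first-come, first-served junction scheduler: the whole
# intersection is treated as a single shared resource.
from dataclasses import dataclass


@dataclass
class Slot:
    vehicle_id: str
    start_s: float  # when the vehicle enters the junction (seconds)
    end_s: float    # when it will have cleared the junction


class IntersectionManager:
    def __init__(self) -> None:
        self.granted: list[Slot] = []

    def request(self, vehicle_id: str, start_s: float, end_s: float) -> bool:
        # Grant the window only if it overlaps no already-granted one;
        # a refused vehicle slows down and re-requests a later window.
        for slot in self.granted:
            if start_s < slot.end_s and slot.start_s < end_s:
                return False
        self.granted.append(Slot(vehicle_id, start_s, end_s))
        return True


manager = IntersectionManager()
assert manager.request("car_a", 10.0, 12.0) is True
assert manager.request("car_b", 11.0, 13.0) is False  # conflicts with car_a
assert manager.request("car_b", 12.0, 14.0) is True   # later window granted
```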
Like I say, just imagine....
My first time out in the snow, I benefited from someone like Firefly who knew to watch out for me as I was approaching an intersection. I had not allowed quite enough time to stop, and partially entered the intersection before I came to a stop. The driver in the crossing lane had stopped for me, even though they had the right of way.
I have also run into situations with a horse and buggy, where you have to take the feelings of the horse into consideration. If it looks jittery, don't enter the intersection, even if you have the right of way. Does a robotic car know when a horse looks nervous?
I also ran into a situation where some very foolish kids were pulling a prank, throwing a dummy into the road in the hope of inducing a panic stop by a passing driver. In my case, I understood the real situation from a distance. But would a robotic car perceive that? Would it panic stop, even on a wet road, to avoid running over the dummy, but risk sliding and hitting the real kids, who were not visible at that moment because they were crouching down to stay hidden?