A startup founded by two university students has announced that it's received a cash injection that will allow it to launch a pilot fleet of self-driving vehicles in central London.
Founded in 2017 by Dr. Alex Kendall and Dr. Amar Shah, then Ph.D. students at the University of Cambridge, Wayve made its name when its modified self-driving Renault Twizy electric quad taught itself to drive from scratch in 20 minutes; the company then added a camera sensor and GPS into the mix to navigate never-before-encountered urban environments.
Rather than relying on hand-coded rules, the Wayve system uses machine learning. By imitating the driving behavior of expert human drivers, while also noting when a safety driver needs to intervene, the system gains driving experience in much the same way a human would when taking lessons.
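To make that learning loop concrete, here is a rough sketch of imitation learning with safety-driver interventions. This is not Wayve's actual code: the network, data shapes and the loss weighting are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: a small policy network maps a camera frame to a
# steering/throttle command and is trained to copy the expert driver,
# with frames around safety-driver interventions weighted more heavily.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # [steering, throttle]

    def forward(self, frame):
        return self.head(self.encoder(frame))

def training_step(policy, optimizer, frames, expert_actions, intervened):
    """One imitation-learning update.

    frames:         (B, 3, H, W) camera images
    expert_actions: (B, 2) steering/throttle recorded from the human driver
    intervened:     (B,) bool, True where the safety driver took over
    """
    predicted = policy(frames)
    per_sample = ((predicted - expert_actions) ** 2).mean(dim=1)
    # Up-weight intervention frames so the corrected behavior dominates
    # the gradient (the 5x factor is an assumption, not a Wayve detail).
    weights = 1.0 + 4.0 * intervened.float()
    loss = (weights * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The appeal of this style of approach, as the article notes, is that nothing in the loop depends on hand-coded rules for any particular city or traffic law.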
Now the tech has been moved into a Jaguar I-Pace EV, and a fleet is about to be rolled out onto the streets of London as part of a self-driving trial, thanks to a successful Series A funding round. Investment of US$20 million has been secured from Eclipse Ventures and Balderton Capital, along with existing investors Compound, Fly Ventures, Firstminute Capital and others.
"The average human learns to drive in just 50 hours with visual input primarily," said Suranga Chandratillake from Balderton Capital. "Once we have learned, we are capable at driving on roads around the world despite vastly differing traffic laws and cultural context. Wayve's self-driving technology is the closest to this human approach to learning. The great advantage of solving the problem this way is that it is robust in the face of a global opportunity."
Sources: Wayve, University of Cambridge
Disclaimer first: In my opinion, AI doesn't need to be 100% safe to be considered safe enough. I think it's sufficient when the AI drives several times safer than the average human driver would in similar traffic conditions.
That said, it may take years or even decades until AIs can drive safely enough in London or Paris.
What they are already good enough for is long-haul driving on highways: long hours of mind-numbingly easy driving conditions, where the main danger is that the human driver falls asleep or gets distracted out of boredom. AIs don't sleep, nor do they get bored. A "Highway Auto-Pilot" could be a real thing right now, with the technology we already have. You just need an emergency driver on board (a single emergency driver for a convoy of half a dozen AI trucks should be enough): if the AI detects a situation it doesn't understand, it just stops the vehicle(s) and wakes the emergency driver to judge and resolve the situation.
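To illustrate, a toy sketch of that fallback protocol might look like the following; every name, threshold and data structure here is made up purely to show the idea of "stop and wake the human when unsure", not any real product's logic.

```python
# Hypothetical "Highway Auto-Pilot" fallback loop for a supervised convoy:
# keep driving while the AI is confident, otherwise halt every vehicle and
# hand judgment over to the single on-board emergency driver.
from dataclasses import dataclass

@dataclass
class Truck:
    id: int
    stopped: bool = False

def autopilot_step(convoy, situation_confidence, alert_driver, threshold=0.9):
    """Run one decision step for a convoy supervised by one human.

    situation_confidence: 0..1 score of how well the AI understands the scene
    alert_driver:         callback that wakes the on-board emergency driver
    """
    if situation_confidence >= threshold:
        return "driving"
    # Situation not understood: bring every vehicle to a controlled stop.
    for truck in convoy:
        truck.stopped = True
    alert_driver(convoy)
    return "waiting_for_human"

# Example usage
convoy = [Truck(id=i) for i in range(6)]
state = autopilot_step(convoy, situation_confidence=0.4,
                       alert_driver=lambda c: print(f"Wake up! {len(c)} trucks halted."))
print(state)  # -> waiting_for_human
```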
I'd be perfectly happy to have such an auto-pilot in my car, and to use it in any situation where it drives statistically several times safer than an average human driver, especially long-haul highway driving, when I could read something or take a nap instead of having to stare at the road ahead for hours.
I don't see the big problem with "who gets sued if something happens". An AI auto-pilot would be a technical part of the vehicle, so as I understand it, in the case of an accident caused by AI failure the legal situation would be no different from one caused by a technical issue with the brakes, the steering, or any other part of the car. A technical problem with a component of the car is a technical problem, no matter whether it's mechanical, electronic or AI.