Earlier this month, Singularity University (SU) held a summit in Christchurch, New Zealand, to discuss exponential technologies and their potential impacts. This was the first time one of the Silicon Valley think tank's summits was held in the Southern Hemisphere. Despite a 7.8 magnitude quake striking 95 km from Christchurch the night before the opening day, the three-day event kicked off as planned before a sellout – albeit slightly sleep-deprived – crowd. New Atlas managed to corral a few of the speakers for one-on-one interviews and the first (autonomous) cab off the rank was Brad Templeton.
Templeton is, among other things, a software architect, developer, internet entrepreneur, artist, and serves on the boards of both the Foresight Institute and the Electronic Frontier Foundation. Since 1979 he has been an active member of the computer network community and was instrumental in the building and growth of USENET in its earliest days. More recently he was among the founding faculty of SU, where he chairs the program on computing and networking. He also spent two years on the Google team developing self-driving cars, and it was on autonomous vehicles – or "robocars" as he calls them – that he was speaking at the SUNZ Summit.
New Atlas: The trolley problem, which essentially asks whether a car should favor harming one person over another if an accident were unavoidable, has received a lot of press recently. Do you think this distracts from the overwhelming upside of autonomous cars?
Brad Templeton: Absolutely. It's the number one question people ask these days, to the extent that President Obama, in an interview on technology, brought it up first. It's really much more interesting than important. We're absolutely fascinated by the idea of machines maybe deciding who lives and who dies. You know, the idea of a computer deliberately choosing to run over one person rather than another makes us think about Arnold Schwarzenegger or some movie idea of robots. But the reality is it's not a very common situation – I hope it's never happened to you or anyone you know, and that you've never even read in the paper of someone deliberately running over one person rather than another – and because of that I would say it's a good thing to work on for version two.
If we spend a lot of time fussing over making cars that could figure out these complex problems – do I run over Hitler and Goebbels in lane one or Gandhi in lane two, and does that mean I have to understand the difference between Gandhi and Hitler? A human understands that, but a machine doesn't – we'll delay getting the cars out and saving real lives in order to worry about killing the right person hypothetically in a classroom exercise. So I really discourage people from focusing on that problem, but I can't stop them. For now, I say let's put that on the list for version two and let's get the vehicles out there first.
New Atlas: You proposed a panel system for dealing with those kinds of issues ...
Brad Templeton: Yeah, mainly I've been trying to get people to say this is not something we demand of the programmers – the programmers don't want to program a car and decide who to kill; what a dreadful thing to ask someone to do – but there are people running around, and now the government's running around, saying we have to resolve this question because the public's so fascinated by it. So yeah, I said we have to come to some sort of resolution about it. Unfortunately it's not good enough to remind people that it doesn't happen very often. So it might make sense to let the government, the policy makers, solve the problem, and then the programmers and the companies making the vehicles have someone they can ask; they follow the ruling and they'd be in the right under the law. That's really what the vehicle code is, except this would be much faster.
Right now we have this big, complex vehicle code that says who can turn left, who can turn right, and tells you who's in the right and who's in the wrong when there's some sort of accident. That took a century to build, piece by piece. Here we're moving much faster, so it might make more sense to have a fast way to make rules. One thing that's very interesting when thinking about rule making is that there are two things very different about a robot versus people. One is, if a robot ever makes a mistake – say there's an accident or something that warrants a ticket (that's another favorite question – how do police write a ticket?) – they'll actually fix it immediately, and within a few weeks, as soon as they can test it, the fix will be out and the robots will never make that mistake again.
So it's not like people: if someone makes a bad lane change and gets a ticket, it doesn't mean no one ever makes bad lane changes again, so you wouldn't regulate robots the same way you regulate people. For example, there are a lot of places where it says no left turn, where you actually could make a left turn 99 percent of the time, but 1 percent of the time people would make a mistake and, bang. So we put up a sign: no left turn. But the robot would be very reliable and know when it can go and when it can't, and so you would say, ok, it would be all right for you to do that.
The other thing is, even if there are a lot of competing companies – every car company makes one, every search engine company makes one, and so on – you could still get all the people making them in a room and say, listen, here's the situation, what should we do about this? Everybody would talk and they'd say, ok, we'll do this, and suddenly everybody would be doing the right thing. So it's very different from the way we think about writing rules for people.
New Atlas: Do you worry that implementing regulations before the technology has been fully developed risks stifling innovation, particularly in relation to using neural networks and deep learning in self-driving vehicles?
Brad Templeton: What's happening in neural networks right now is really amazing, and barriers are falling down that really nobody was ready for, so amazing stuff. And many people feel with more and more confidence that we will be able to use those kinds of technologies, these learning technologies, to make something drive much sooner than we thought we would. But we can't do it yet, so it's a bet about when we do it.
But there are many parts to the problem, and the first part is what we call the perception problem. The perception problem is, you're looking out and that's a dog and that's a car and that's going too fast – basically, understanding the world – and these neural networks are really helping with that problem, so it's a no-brainer. Everybody is using these technologies to try and improve the understanding of what's going on in the world.
The second level's a little more interesting, and it's the neural network actually understanding if it's time for it to change lanes or swerve around something, etc. Neural networks will also be able to do that and there are people who have built vehicles that do that, but that's a little more difficult to understand. So the problem with a neural network, or any kind of machine learning system, is it sort of makes a bit of a black box – stuff comes in, stuff comes out, and no human being can actually explain to you why it did what it did.
I mean, you can look at any one particular decision and, with a great deal of work, go in and examine every little neuron firing and say, ok, it weighted this stuff here and that stuff there. But the reality is – although it's not nearly as evolved as the human brain – it's a bit like me trying to explain to you why the neurons firing in my head made me say this sentence to you. I have no idea how that happened, but it doesn't mean I'm not able to talk.
So people are uncomfortable with that because we don't like the idea of not knowing how it did something and, more to the point, if it does something wrong, you can train it again and it won't do that wrong thing again, but you won't know how you fixed it and you won't know that you've fixed another thing like it. So with traditional computer programming, what you've got is something that's not nearly as powerful.
It's a question of what business people call QA – quality assurance. How do you know? Making these cars work and be safe is of course a big challenge, but one of the biggest parts of that is how do you know you've done it? Once you've proven it to yourself, how do you prove it to your lawyers, your board of directors, the public, the government, whoever you have to prove it to? With neural networks and machine learning it's harder to prove you've done it.
Now, we might actually find ourselves in a situation where we've got two different systems: one we understand how it works, and maybe it has an accident every 150,000 km, and we have another one we have no idea how it works, but it has an accident every 200,000 km. So which one do you want to ride in?
The latest regulations published by the US government – and as I've said, I think these regulations are wrong, because I think it's far too soon to actually make decisions about this, but it conveys the fear people have – say they want every manufacturer to be able to explain how their car works and its rules, and to lay out why it's safe and how they know it's safe. But what if the answer is, we have no idea, we just trained this network by watching a bunch of good drivers drive and now it seems to drive well – which is how you get a driver's license, I suppose.
The bigger issue behind this is that the thing people want to do with neural networks is drive with just a camera, just like human beings drive with just two eyes – well, we can get by on one. So people have this image of, let's make our AI use the same tools a human has. Other people have said, we've got these laser scanners, we've got radars, etc., that can give you superhuman sensing – why would you not want that? Lidar is that way – it knows the distance to things. There's never any question about whether something is close to you or far away, whereas with an image there are optical illusions that can fool you about how far away something is, and computers have an even harder time figuring out where things are.
One of the reasons people want to do it with cameras is just because it's cheaper. Laser scanners are expensive, but that's changing very quickly – I'm involved with a company working on making that happen, actually – and I have every confidence that they will be available cheaply in time. Now Tesla is trying to do things with the production car that they make today, so they can't put a laser scanner in because there's no laser scanner available for a production car today – there will be in a couple of years, but there isn't today – so they've made a strong, and I think incorrect, bet that they should try and do [full autonomy] with what they can have today, and I think that's very different from the Moore's Law approach where you bet that even better technology is coming.
New Atlas: What about the dangers of hacking?
Brad Templeton: The truth is current cars were designed with no security in them at all and everybody in the car community and the security community has known this for years. (The 60 Minutes report) was the day the press found out, which caused a little bit of a stir. But the people designing the vehicles are much more aware of this kind of issue and are working to make them as secure as they can, but perfection is a very difficult goal for computer security.
One thing it does mean, I believe – I'm not alone, but it's a minority view – is that these cars will not talk to other cars or the city. A lot of people imagine this world of cars all chatting with other cars, but that is exactly how that sort of thing [hacking] happens: cars running around talking to everything. I think that in the interests of keeping security at the highest level you can, cars are only going to talk to headquarters, and each car is going to be a little bit afraid of even doing that, worried that its own HQ might be compromised and try to attack it.
But there are major efforts in most governments of the world to promote [C2C and C2I communication] because there's a bunch of people who bet their careers on the idea of cars talking to each other and being connected as the solution to a lot of problems. The truth is it's a solution to some problems, but it's not actually the solution to a lot of really important problems and they never really thought it out to realize that the problems it would solve would be minor and the risks it would create would be major.
If you talk to anybody actually working on building a self-driving car that's going to go door to door, they are not paying attention to C2C communication at all. And there's one very obvious reason why – the first car you put on the road has nobody else to talk to, so you can't do that first. Ask anyone who's actually working on it and who hasn't staked their career on it in the past, and they'll say, no plans to use that today; if it does show up, maybe we'll use it. But then if you ask them about the security issues they'll think for a second and say, maybe we won't use it.
Governments are actually trying to push companies to come together, so they have come up with standards, but the problem is those are mostly solutions in search of a problem.
New Atlas: Once the car companies start churning out fully autonomous cars, will they need middlemen like Uber and Lyft, if we're getting rid of our own cars, for example?
Brad Templeton: No, it's the other way – why will Lyft need GM or why will Uber need GM is the bigger question. GM put $500 million into Lyft, so they actually don't think of them as a middleman, but as their partner. I believe that we're going to see a move – not a universal move, but in the cities, rather than the country – towards people buying rides instead of cars.
If you summon an Uber today, you can summon Uber Select, which is a luxury car, but when you order it, you don't care whether they send you a Lexus or a Mercedes or a BMW; you just care that you're getting a luxury car. Uber's the brand you care about, not the brand on the front of the car – it's just something that says, ok, they've given me a luxury car. That's very frightening to the car companies, because then they're superfluous – it doesn't really matter what brand it is as long as the car meets the standards that Uber promised the customer. So, that's scary news for car companies. It pretty much gets rid of the value of the brand, because if it's a luxury car you don't even care if it's some Chinese brand you've never heard of.
New Atlas: Because it's not sitting in your driveway displaying your wealth?
Brad Templeton: Yeah. Today we buy a car for at least five years. When you're buying a car for 15 minutes you have a different view about it. People will still like to show off their wealth – if I'm taking a ride to the club or the opening of the opera or I'm taking some friends out, I'm going to pay for something that says, "look at me, I'm so fancy," but for my commute I'll probably be more interested in saving money. Some people don't want to save money, but most people do.
New Atlas: Can you estimate any kind of timeframe when personal car ownership becomes the exception rather than the rule?
Brad Templeton: That really depends on where you live. I think if you live in the country there's not going to be on-demand taxi service in 30 seconds, not now, not for a long, long time, so those rural people – that's still 40 percent of the population or more – they're going to still want a vehicle. The vehicle will probably drive itself, but they might switch from having two cars to one, because if you've got a car and that car goes out then you just send a signal saying, ok we'd like to briefly hire another car to be ready for dad after mum took the car out. So, it could be like you had two cars but you didn't actually have to buy two.
That'll happen a lot with people in the city, too. A lot of people are still very keen on car ownership, but they might stop having three cars and have two, or have one. You definitely might see parents saying, instead of buying our 16-year-old a car in which they might kill themselves, maybe we'll give them a subscription service, and it seems the 16-year-olds might be quite happy with that. I have to admit, for my generation getting your first car was this big rite of passage, but the evidence seems to show that's not the same now.
I predict that when the technology is a little more mature, it's going to cost in the neighborhood of US$0.22 a km to ride in a car – because you're not going to be powered on petrol, you're going to be powered on electricity – so something in that range. Owning your car today is probably more like 50, 60, 70 cents a km when you've priced it all out. So we're talking about very economical transport that lots more people will be able to afford than used to, and that's also going to make people say, why do I want to go to the bother of owning a car? By the way, 22 cents a km is not for a fancy car, it's for a one-person car – because you just want a ride across town and it doesn't need to be fancy. So, that's cheaper than a ride on the tram, which is actually very, very interesting.
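As a rough illustration of the arithmetic Templeton sketches here, the back-of-the-envelope comparison below uses the per-km figures he quotes; the 15,000 km per year is an assumed example distance, not a figure from the interview.

```python
# Rough comparison of the per-km figures quoted in the interview.
# The annual distance is an assumed example, not from the interview.

ROBOTAXI_PER_KM = 0.22   # USD, Templeton's estimate for a one-person robotaxi
OWNED_CAR_PER_KM = 0.60  # USD, mid-point of his 50-70 cents/km ownership estimate
ANNUAL_KM = 15_000       # assumed yearly distance for a typical commuter

robotaxi_cost = ROBOTAXI_PER_KM * ANNUAL_KM
owned_cost = OWNED_CAR_PER_KM * ANNUAL_KM

print(f"Robotaxi:  ${robotaxi_cost:,.0f}/year")
print(f"Owned car: ${owned_cost:,.0f}/year")
print(f"Saving:    ${owned_cost - robotaxi_cost:,.0f}/year "
      f"({1 - robotaxi_cost / owned_cost:.0%} less)")
```

At those rates the ride comes out at roughly a third of the per-km cost of ownership, which is the gap Templeton is pointing at.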
New Atlas: Talk of autonomous vehicles largely focuses on the advantages for the developed world. What do you think the potential advantages or benefits of the technology are for the developing world?
Brad Templeton: Obviously, the people developing it are rational and will go after the rich people first, which leaves open some opportunities. In India, for example, Tata and Mahindra, which are the two biggest Indian automotive conglomerates, are researching their own projects and they're going to have to deal with the challenges of driving in India, which is much more difficult than driving in countries like Australia or the US.
In the developing world we have a couple of different issues. One interesting issue is that, while in the rich world the people designing the cars are designing them to drive on existing roads, if you don't have any roads at all then you suddenly have some interesting opportunities for maybe building the roads a bit differently and at lower cost – so that's an interesting option for the developing world.
I'm actually not a fan yet of the flying car idea – by which I mean the quadcopter for people, or even for cargo – in the city, because I'm not sure we're ready for the skies in the city to just be 1,000 buzzing helicopters filled with things and people, but maybe we can solve that. You've seen the Martin Jetpack, but you'd never let your neighbor take off at 90 dB from their backyard – you wouldn't even let someone take off at 60 dB.
So maybe in other parts of the world the sky might actually have some merit, or you could build a road differently – here in New Zealand I've seen this a lot driving around – with one-way bridges over rivers. This is actually one of the ways cars might coordinate with each other, but they wouldn't communicate directly; they'd once again communicate with some server, so you could time yourself along the road and never try to go across the bridge at the same time as someone else.
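To make the one-way-bridge idea concrete, here is a minimal sketch of how a central server might hand out crossing slots so two cars never meet on the bridge. The class, names, and timings are hypothetical illustrations of the idea Templeton describes, not anything he or any vendor has specified.

```python
# Minimal sketch: cars talk only to a central server, never to each other.
# The server hands out non-overlapping crossing slots for a one-way bridge.

class BridgeScheduler:
    def __init__(self, crossing_seconds: float):
        self.crossing_seconds = crossing_seconds
        self.next_free_time = 0.0  # earliest time the bridge is clear again

    def request_slot(self, car_id: str, arrival_time: float) -> float:
        """Return the time at which car_id may start crossing."""
        start = max(arrival_time, self.next_free_time)
        self.next_free_time = start + self.crossing_seconds
        print(f"{car_id}: cross at t={start:.0f}s")
        return start

scheduler = BridgeScheduler(crossing_seconds=20)
scheduler.request_slot("car_A", arrival_time=100)  # crosses at t=100
scheduler.request_slot("car_B", arrival_time=105)  # slotted at t=120
```

In practice a car would adjust its speed en route so it arrives exactly at its assigned slot rather than waiting at the bridge, which is the "time yourself along the road" behavior Templeton mentions.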
So, you could get by with cheaper infrastructure. You could get by with one-lane roads, you could get by with kind of like rails – we actually had this phrase at Google to describe how the car would drive, we'd say it's like driving on rails, because if we wanted it to it would drive exactly the same spot every time.
In fact, there was one team that was building cars like this and they noticed they would drive around these paved surfaces and leave black trails because the cars drove exactly the same spot every time, so a little bit of tire rubber eventually formed black lines on the road where the cars were driving. So, you could possibly build roads that were really just two strips of concrete. You would have to standardize the stance to do that. Or it could be thicker in some parts or thinner in other parts. But nobody's really trying to do that yet so I can't promise you that works – it's just an interesting idea.
New Atlas: Do you think we'll ever get 100 percent levels of autonomous cars on the road?
Brad Templeton: Ever is a long way – and we still have horses. It depends where you are. The first hints of that would be a town saying, look, we're going to ban the steering wheel from the CBD, or we're going to ban it from this lane on the highway. But I don't think we could take away the keys – not for a few decades. It's very stupid to make predictions about 2060. Put it this way: there are a number of people who predict that by 2050 there'll be robots just as smart as human beings or smarter, and then all bets are off – anything from, they're our wonderful benefactors, to, they kill us all – so I'm hoping more for the first one.
More of Templeton's views on robocars can be found here.
Disclaimer: New Atlas attended the SUNZ Summit courtesy of Singularity University.