If Hollywood is to be believed, the development of super-intelligent AI will spell the end of civilization as we know it and spark an unwinnable war between man and machine. It doesn't make for nearly as exciting entertainment, but artificial intelligence also offers tremendous upside, from the potential to deliver customized education to everyone, to improving disease diagnosis and treatment and eradicating poverty. Although AI researchers are focused on these beneficial outcomes, the dystopian vision portrayed in so much science fiction is also a real possibility. At the recent Singularity University (SU) New Zealand Summit we talked with Neil Jacobstein, the former president and current chair of the Artificial Intelligence and Robotics Track at SU, about how the outcomes feared by so many can be avoided.
Humans dominate the planet, not because we're the biggest and the strongest – we're obviously not – but because we're the smartest. So, it's understandable that the prospect of an intelligence greater than our own is more than a little unnerving. Sure, such an intelligence could help deal with many of the world's problems that we've been unable to solve ourselves, but isn't it conceivable it might also begin to see us not just as irrelevant, but possibly even as a detriment to the planet?
As Stephen Hawking recently said, the creation of powerful AI will be "either the best, or the worst thing, ever to happen to humanity." The fact is, AI offers such tremendous upside that there appears to be no stopping the AI-development train, despite the massive potential downside risk. So, what's the best way to manage this risk? We put this question to Neil Jacobstein as part of our regular "One big question" series. Here's what he said:
I don't think that there's one best way to manage the risk. I think that we're constantly researching it. I don't think that there's any possibility now of just saying, oh well, something might go wrong, let's not do it. So, we're going to move forward, and that puts the emphasis on doing it responsibly and doing it proactively. Do I think that we've got a reasonable shot at managing the downside risk? I do.
We need to continually do research and focus on verification, validity, security and control. Just to flesh that out a little: verification is about verifying that a piece of software meets a formal spec. But even if it meets the formal spec it may not be valid - maybe it would permit malicious behavior, so you need to test it for validity. And then after you've tested it for validity you need to make sure it's been properly secured so that it can't be hacked from within or without, even with sophisticated hacking techniques. And then you need to be able to assume that, with the best of intentions, something could still go wrong, so you need multiple, perhaps a dozen, layered, redundant control systems, so if the system goes off the rails you can reestablish control.
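The four checks Jacobstein names - verification, validation, security and control - can be sketched in code. The toy sentiment "model" and all function names below are purely illustrative assumptions, not from any real system he describes:

```python
def score(text: str) -> float:
    """Toy model: fraction of positive words. Stands in for a real AI system."""
    positive = {"good", "great", "safe"}
    words = text.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

# 1. Verification: check the software against its formal spec.
#    Spec here: the score always lies in [0.0, 1.0].
assert all(0.0 <= score(t) <= 1.0 for t in ["good", "bad bad", ""])

# 2. Validation: meeting the spec isn't enough - also test inputs
#    crafted to provoke bad behavior, not just typical ones.
assert 0.0 <= score("good " * 10_000) <= 1.0   # adversarial/stress input

# 3. Security: refuse inputs arriving over an untrusted channel
#    (placeholder for real hardening against hacking from without).
def score_secure(text: str, trusted: bool) -> float:
    if not trusted:
        raise PermissionError("untrusted input channel")
    return score(text)

# 4. Control: a redundant outer layer that can re-establish control
#    if the system's output ever drifts outside expected bounds.
def score_controlled(text: str) -> float:
    result = score(text)
    if not 0.0 <= result <= 1.0:
        raise SystemExit("control layer triggered: output out of bounds")
    return result
```

In a real system each layer would be far more elaborate - formal verification tools, adversarial test suites, hardened infrastructure, and the multiple redundant shutdown mechanisms Jacobstein describes - but the division of labor between the four checks is the same.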
And the good news is we've been running this experiment for a very long time. If you look at anti-virus software, the things that we call viruses and worms are, in fact, snippets of AI, and we use AI to detect them, to detect malware and shut it down before it does bad things. It's kind of a predator/prey cycle.
You have to be proactive and smart about controlling the downside risks and malware. But you could think of this as a subset of cybersecurity, which is a predator/prey system or an arms race, and there isn't any getting around it. I mean the malware is constantly ratcheting up in sophistication and the defenses are ratcheting up in sophistication, and there really isn't any alternative to that.
Also, I think that before we jump to putting regulations and laws in place, written by people who are not fully conversant with the technology, we should ask what range of outcomes we can foresee and how we can increase the probability of beneficial outcomes - that should be the focus. On preventing downside risks, I don't think that reflexive regulation is the way to go. Thoughtful, considered regulation has a part to play, but it wouldn't be my first or second choice for controlling downside risk, because it tends to move slowly and tends to be written by lawyers who are not technologists. So it's of value, but limited value, and it carries the risk of uninformed regulatory moves.
I think [the OpenAI initiative by Elon Musk and others] is positive, but it also comes with some risk, because it distributes powerful AIs all over the world. But I think that net, it'll be a positive thing.
However, I don't think that self-regulation is going to be enough. I think that you need accountability and transparency, and you need to hold people, and corporations, accountable for outcomes. If a corporation's employee does bad things on its behalf, the corporation is ultimately accountable. In a similar way, AIs are going to have to be accountable - they don't have legal standing, so the people who own or deploy them are going to have to take responsibility for the consequences of how they get deployed. There are already systems in place to keep corporations and governments accountable, and I think that we should enforce those rules with regard to the assistant systems that people use.
I don't think that there are any easy answers here, but I do think that people underestimate how much AIs could improve the quality of our lives. They tend to see a lot of movies about how they could become the agents of dystopia.
With regard to TV and Hollywood portrayals of AI, they tend to run toward malicious AI and people's worst fears of what an AI could be, and they almost always cast humans as the good protagonists. I think that's simplistic and does not serve our purposes - if the first association that people have with AI and robotics is "Terminator," that's not a great way to build the future. Yes, we want to acknowledge what some of the downside risks could be, and in that sense having some movies portray dystopian scenarios is a service. But once we've seen those movies, it's important to understand that most real-world applications of AI are very beneficial, and rather than negative associations, it's important to have positive associations with all the applications that are doable now and will be doable in the future.
I think that eventually AIs are going to observe what we do - just like children observe what we do and pick up habits and attitudes and morality from us. It's not an accident that the children of people with high integrity and high moral standing tend to - not always, but tend to - be like their parents in that regard, and I think it would greatly help if we all set a good example for the systems around us. Do I think that there's some simplistic set of moral rules that we could inject into AIs so that forevermore they would be moral? No, I don't. There are all kinds of problems with that formula. But I do think that we're not going to be able to expect them to be highly ethical in their behavior if we're not.
We need to come together like adults to manage the downside risks and activate the potential. We have become habituated to our baseline risks - species extinction, climate change, the omnipresent potential for thermonuclear war - and many of these baseline risks could be addressed. Poverty, hunger, illiteracy, pandemic disease - all of these we could do a much better job of managing with technology, with exponential technology, and I think we ought to focus on doing that. If we do that well, it'll greatly increase the likelihood that events in the 2035 arena, the 2045 arena, will be positive instead of dystopian.
The good news is there are far more people on the side of building up civilization than there are on the side of tearing it down. That means more energy, more intellect, more computing power, more algorithms, so I think that we have a very good shot of being able to manage that.
Disclaimer: Darren Quick attended SUNZ courtesy of SU.