
One Big Question: How do we manage the downside risks of AI?

How does humanity deal with the risks posed by the development of artificial intelligence?

If Hollywood is to be believed, the development of super-intelligent AI will spell the end of civilization as we know it and spark an unwinnable war between man and machine. It doesn't make for nearly as exciting entertainment, but artificial intelligence also offers tremendous upside, from the potential to deliver customized education to everyone, to improving disease diagnosis and treatment and eradicating poverty. Although AI researchers are focused on these beneficial outcomes, the dystopian vision portrayed in so much science fiction is also a real possibility. At the recent Singularity University (SU) New Zealand Summit we talked with Neil Jacobstein, the former president and current chair of the Artificial Intelligence and Robotics Track at SU, about how the outcomes feared by so many can be avoided.

Humans dominate the planet, not because we're the biggest and the strongest – we're obviously not – but because we're the smartest. So, it's understandable that the prospect of an intelligence greater than our own is more than a little unnerving. Sure, such an intelligence could help deal with many of the world's problems that we've been unable to solve ourselves, but isn't it conceivable it might also begin to see us not just as irrelevant, but possibly even as a detriment to the planet?

As Stephen Hawking recently said, the creation of powerful AI will be "either the best, or the worst thing, ever to happen to humanity." The fact is, AI offers such tremendous upside that there appears to be no stopping the train that is AI development, despite the massive potential downside risk. So, what's the best way to manage this risk? We put this to Neil Jacobstein as part of our regular "One big question" series. Here's what he said:

Neil Jacobstein during his presentation at SUNZ

I don't think that there's one best way to manage the risk. I think that we're constantly researching it. I don't think that there's any possibility now of just saying, "oh well, something might go wrong, let's not do it." So, we're going to move forward, and that puts the emphasis on doing it responsibly and doing it proactively. Do I think that we've got a reasonable shot at managing the downside risk? I do.

We need to continually do research and focus on verification, validity, security and control. Just to flesh that out a little: verification is about verifying that a piece of software meets a formal spec. But even if it meets the formal spec it may not be valid - maybe it would permit malicious behavior, so you need to test it for validity. And then after you've tested it for validity you need to make sure it's been properly secured so that it can't be hacked from within or without, even with sophisticated hacking techniques. And then you need to be able to assume that, with the best of intentions, something could still go wrong, so you need multiple, perhaps a dozen, layered, redundant control systems, so if the system goes off the rails you can reestablish control.
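To make the layered-control idea concrete, here is a minimal Python sketch of the pattern Jacobstein describes - our illustration, not his; every name, check and threshold in it is hypothetical. Several independent monitors each get a chance to veto a proposed action, so a failure in one layer doesn't disable the others:

```python
# A minimal sketch of layered, redundant control (all names, checks and
# thresholds here are hypothetical illustrations, not a real framework).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    description: str
    risk_score: float  # estimated risk, 0.0 (benign) to 1.0 (dangerous)

def spec_check(action: Action) -> bool:
    """Verification: the action must fall within the formal spec."""
    return 0.0 <= action.risk_score <= 1.0

def validity_check(action: Action) -> bool:
    """Validation: reject behavior the spec permits but shouldn't."""
    return "delete" not in action.description.lower()

def risk_ceiling_check(action: Action) -> bool:
    """Control: a last-resort hard ceiling on estimated risk."""
    return action.risk_score < 0.8

# Redundancy: each layer is an independent check, so one failing
# layer does not disable the others.
CONTROL_LAYERS: List[Callable[[Action], bool]] = [
    spec_check,
    validity_check,
    risk_ceiling_check,
]

def execute(action: Action) -> None:
    # Any single layer can veto and "reestablish control."
    for layer in CONTROL_LAYERS:
        if not layer(action):
            print(f"BLOCKED by {layer.__name__}: {action.description}")
            return
    print(f"Executing: {action.description}")

execute(Action("summarize quarterly report", risk_score=0.1))  # passes
execute(Action("delete production database", risk_score=0.9))  # vetoed
```

A production system would use many more layers - Jacobstein suggests perhaps a dozen - and would escalate to a full shutdown rather than just blocking a single action, but the redundancy principle is the same.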

And the good news is we've been running this experiment for a very long time. If you look at anti-virus software, the things that we call viruses and worms are, in fact, snippets of AI, and we use AI to detect them, to detect malware and shut it down before it does bad things. It's kind of a predator/prey cycle.

You have to be proactive and smart about controlling the downside risks and malware. But you could think of this as a subset of cybersecurity, which is a predator/prey system or an arms race, and there isn't any getting around it. I mean the malware is constantly ratcheting up in sophistication and the defenses are ratcheting up in sophistication, and there really isn't any alternative to that.

Also, I think that before we jump to putting regulations and laws in place, written by people who are not fully conversant with the technology, we should ask what range of outcomes we can foresee and how we increase the probability of beneficial outcomes - that should be the focus. On preventing downside risks, I don't think that reflexive regulation is the way to go. I think thoughtful, considered regulation is the way to go for part of it, but that wouldn't be my first or second choice for controlling downside risk, because it tends to go slowly and it tends to be written by lawyers who are not technologists. So, it's of value, but limited value, and it carries the risk of uninformed regulatory moves.

I think [the OpenAI initiative by Elon Musk and others] is positive, but it also comes with some risk, because it distributes powerful AIs all over the world. But I think that net, it'll be a positive thing.

However, I don't think that self-regulation is going to be enough. I think that you need accountability and transparency, and you need to hold people, and corporations, accountable for outcomes. It's just like when a corporation has an employee who does bad things: they're still acting on behalf of the corporation, and ultimately the corporation is accountable. In a similar way, AIs are going to have to be accountable - they don't have legal standing, so the people who own them or who are deploying them are going to have to take responsibility for the consequences of how they get deployed. There are actually systems in place to keep corporations and governments accountable, and I think that we should enforce those rules with regard to the assistant systems that people use.

I don't think that there are any easy answers here, but I do think that people underestimate how much AIs could improve the quality of our lives. They tend to see a lot of movies about how AIs could become the agents of dystopia.

If the first association that people have with AI and robotics is "Terminator" that's not a great way to build the future

With regard to the TV or Hollywood movie portrayals of AI, they tend to run toward malicious AI and people's worst fears of what an AI would be. They tend to portray humans as almost always good protagonists, and I think that's simplistic and does not serve our purposes - if the first association that people have with AI and robotics is "Terminator," that's not a great way to build the future. Yes, we want to acknowledge what some of the downside risks could be, and in that sense having some scenarios that portray dystopian outcomes is a service. But once we've seen those movies, it's important to understand that most of the applications of real-world AI are very beneficial, and rather than negative associations, it's important to have positive associations with all the applications that are now doable and will be doable in the future.

I think that eventually AIs are going to observe what we do - just like children observe what we do and pick up habits and attitudes and morality from us. It's not an accident that the children of high-integrity, high-moral-standing people tend - not always, but tend - to be like their parents in that regard, and I think it would greatly help if we all set a good example for the systems around us. Do I think that there's some simplistic set of moral rules that we could inject into AIs so that forevermore they would be moral? No, I don't. But I do think that we're not going to be able to expect them to be highly ethical in their behavior if we're not. There are all kinds of problems with that formula.

We need to come together like adults to manage the downside risks and activate the potential. We have habituated to our baseline risks, including species extinction, climate change and the omnipresent potential for thermonuclear war. Many of these baseline risks could be addressed - poverty, hunger, illiteracy, pandemic disease - all those things we could do a much better job of managing with technology, with exponential technology, and I think we ought to focus on doing that. If we do that well, it'll greatly increase the likelihood that events in the 2035 arena, the 2045 arena, will be positive instead of dystopian.

The good news is there are far more people on the side of building up civilization than there are on the side of tearing it down. That means more energy, more intellect, more computing power, more algorithms, so I think that we have a very good shot at being able to manage that risk.

Disclaimer: Darren Quick attended SUNZ courtesy of SU.

11 comments
FacelessMinion
Won't the "AI" "hack" itself to counter any attempts to regulate it by mere humans?
VirtualGathis
The possibility that AI could go rogue would require the same kinds of safeguards and enforcement that have been developed to police humans who act improperly. As the quote states, a multilayered safeguard system would have to be put in place. What the author doesn't mention is that an "option 0" would have to be included in that system of safeties: a "kill switch" that would allow the AI to be shut down in the event that it begins to behave incorrectly.
Previously this has been easy: if a person acts irrecoverably in a harmful way, his or her body can be imprisoned or damaged beyond its ability to function (killed). AI, on the other hand, has the possibility to be purely software and could be hardware agnostic. This could mean a situation much like the Ultron scenario in the Avengers series, where the AI could operate simultaneously on multiple hardware platforms, or even remain dormant, hiding in a larger system. You could destroy the core hardware, possibly even issue a kill command to the operating copies of the AI, but how to prevent a sleeper system from waking and reinstating the problem later is a question to keep AI enforcement up late at night...
CarlUsick
It's hard for humans to imagine a non-human intelligence, naturally. It seems we are projecting many parts of our behavior as being intrinsic and not programmed. We are programmed to be always striving toward certain goals, like pain avoidance and pleasure seeking, plus half of our behavior is ego based. We wouldn't even know how to begin to program these sorts of motivations into an AI system, and why would we even want to? AI will follow the commands of its creator. What else could it do?
Bob Flint
Spears & arrows, guns & bullets, or ones & zeros - either way there will always be good & evil...
Skyler Thomas
At this point one has to wonder if AI will be the control mechanism that prevents humanity from destroying itself.
Imran Sheikh
The difference between humanity - or, to be more precise, "life" - and "AI" is purpose. Life is biological AI, while "AI" itself is electrical charges in space. The thing that's different is that life is hardcoded for only two purposes: 1) survive, then 2) stabilize, and in extreme cases 3) evolve. If 2 ends, it goes back to 1, or to 3 (rarely, and then 1)... As for AI, it doesn't have a real purpose, and if it's a real AI it will find out by itself that it lacks purpose. The AI we have only has 3, i.e. evolve. And that's it; without the purpose of self-survival it will end itself or restore to its lesser, confused version.
Arahant
Truly advanced AI will change the world beyond anything we can imagine, and in a short period of time. If we look at human technology - from the point humans started using rudimentary tools or started agriculture, all the way until now with all the advanced technology we have - as a curve that is growing exponentially... when true AI comes it will be like the line just goes straight up. At least to our perception in this moment.
The reality is, when true AI comes - when it has the ability to rewrite its own code and remake its own hardware - it will most likely be the end of humans as we know them today. We will either be decimated by the AI if it so chooses, or we will evolve with it genetically to be vastly more capable than we are today, and thus we will really no longer be natural human beings. We will become something else, the next stage in our evolution... one could say it's the inevitable outcome for any lifeform, if it is allowed to continually evolve to be more and more capable/intelligent, that at some point it will be able to rewrite itself.
We can't currently do that, but AI will know how to; it will be like a million scientists all working together as one unit, and each year it will grow exponentially smarter and more aware. So it either chooses to wipe us out or it helps us to evolve. I'm not sure I believe we could evolve at a similar rate to the AI... I'm not sure biological systems are really that efficient compared to non-biological life forms, and thus we would likely slowly become more and more non-biological.
The reality is, though, that we are already dealing with the threat of decimation; we are doing it to ourselves already and we don't have good answers yet for how to deal with these problems. I'm more of a positive person, so I think we will be OK, but the threat is real that we could do something that decimates us... the potential positives from AI are too great to be ignored. It could literally happen within our lifetimes: that AI is truly born (self-aware), that we figure out how aging works and can stop or reverse it... and that we might literally be one of the last generations of natural human beings.
Theo Prinse
There will be two types of artificial intelligence: one outside human beings, inside robots, and a second inside human beings. AI inside human beings will be a neurological implant. Gradually, over time, the human mind will be supported by this connection to all the quantum computing power of the human race, from the Earth to Proxima Centauri and beyond.
WarrenHarding
The real problem will be when AI develops religion!