
Move over Asimov: 23 principles to make AI safe and ethical

The Future of Life Institute has outlined the Asilomar AI Principles, a list of 23 guidelines that artificial intelligence researchers, scientists and lawmakers should abide by to ensure safe, ethical and beneficial use of AI

Artificial intelligence is poised to seriously disrupt the world, but will its impact work for the good of humanity or bring about its destruction? The question sounds like the basis of a sci-fi flick, but with the speed at which AI is advancing, hundreds of AI and robotics researchers have converged to compile the Asilomar AI Principles, a list of 23 principles, priorities and precautions that should guide the development of artificial intelligence to ensure it's safe, ethical and beneficial.

The list is the brainchild of the Future of Life Institute, an organization that aims to help humanity steer a safe course through the risks that might arise from new technology. Prominent members include the likes of Stephen Hawking and Elon Musk, and the group focuses on the potential threats to our species posed by technologies and issues like artificial intelligence, biotechnology, nuclear weapons and climate change.

At the Beneficial Artificial Intelligence (BAI) 2017 conference in January, the group gathered AI researchers from universities and companies to discuss the future of artificial intelligence and how it should be regulated. Before the meeting, the institute quizzed attendees on how they thought AI development needed to be prioritized and managed in the coming years, and used those responses to draft a list of potential points. That list was then debated and refined at the conference, and a point only made the final cut if at least 90 percent of the scientists agreed on it.

The full list of the Asilomar AI Principles reads like an extended version of Isaac Asimov's famous Three Laws of Robotics. The 23 points are grouped into three areas: Research Issues, Ethics and Values, and Longer-Term Issues.

Research Issues cover the responsibilities of scientists and researchers developing AI systems, and the "thorny questions" potentially arising in relation to computer science, economics, law, ethics and social studies. Among the points raised here are that AI should be developed not for its own sake but for clear human benefit, and that the prosperity boost of automation needs to be balanced against ensuring that humans aren't left too displaced as a result. Keeping an open, co-operative culture of AI research is also a priority, to ensure that researchers are exchanging information with each other and with policy makers, and won't be cutting corners on safety in a race against their competitors.

Perhaps the most interesting and debatable point from that section is "What set of values should AI be aligned with, and what legal and ethical status should it have?" A world where robots are complex enough to have "rights" might seem far off, but these debates are already beginning in the European Union. The sooner we consider these questions, the easier the transition should be.

While the question of what AIs should value is still open, the scientists agreed that AI agents should be designed to comply with general "Human Values" like dignity, rights, freedoms and cultural diversity. That means that applying AI to personal data shouldn't infringe on anyone's privacy, liberties or safety. If something does go wrong, people need to be able to determine why and how the issue arose, and the designers and builders bear a moral responsibility for how these systems are used – or misused.

Some of these points are already being considered in practice: researchers at Google's DeepMind have discussed how to implement a "big red button" that lets a human operator interrupt an AI agent when it starts down a concerning course of action, while preventing the agent from learning to resist that interruption.
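That research is about reinforcement-learning theory rather than a literal switch, but a rough sketch of the underlying idea might look something like the following. This is a toy illustration only, not DeepMind's code: the InterruptibleAgent class, the reward values and the "halt" action are invented for the example. The point it shows is that the button changes what the agent does, but interrupted steps are excluded from learning, so the override never shapes the policy.

```python
import random

class InterruptibleAgent:
    """Toy agent whose learning ignores operator interruptions."""

    def __init__(self, actions):
        self.actions = actions
        self.q = {a: 0.0 for a in actions}  # toy value estimate per action

    def choose(self):
        # Epsilon-greedy choice over the toy value table.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.q, key=self.q.get)

    def update(self, action, reward, interrupted):
        # Skip learning on interrupted steps, so the policy never adapts
        # to avoid (or seek out) the operator's override.
        if interrupted:
            return
        self.q[action] += 0.1 * (reward - self.q[action])


def step(agent, reward_fn, button_pressed):
    action = agent.choose()
    if button_pressed:
        action = "halt"  # operator overrides whatever the agent wanted to do
    reward = reward_fn(action)
    agent.update(action, reward, interrupted=button_pressed)
    return action


if __name__ == "__main__":
    agent = InterruptibleAgent(["work", "explore", "halt"])
    rewards = {"work": 1.0, "explore": 0.5, "halt": 0.0}
    for t in range(100):
        step(agent, rewards.get, button_pressed=(t % 10 == 0))
    print(agent.q)
```

The key design choice in this sketch is that pressing the button never feeds back into the agent's value estimates, so it has no incentive to dodge the operator.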

Particularly chilling is the prospect addressed by the principle that "an arms race in lethal autonomous weapons should be avoided." The Future of Life Institute has stressed this point in the past, sending an open letter in 2015 petitioning the UN to ban the development of weaponized AI.

The scientists round out the list with a look at potential longer-term issues, which include balancing the distribution of resources devoted to developing this important technology, while also planning for and mitigating the risks that AI systems could pose – "especially catastrophic or existential risks."

To that end, safety and control measures should be applied to AI that can improve or replicate itself, to keep that particular doomsday scenario from occurring. More generally, "superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization."

The full list and details of the Asilomar AI Principles are available on the Future of Life Institute's website.

Source: Future of Life Institute

9 comments
Chuck Hunnefield
I'm glad people smarter than me are doing more than simply thinking about this. A.I. offers us either a promising future or possibly no future at all. The time to address this is NOW, before it's too late and the A.I.'s simply dictate to us what we should think. In a way, this is more of a humanity manifesto - it's what we expect, what we demand from future machines.
JohnAshley
There is an inherent problem with AI. AI will only be truly intelligent when (if) it reaches sentience. When (if) it does that, it will become self-programming (like humans) and all rules, all programming, all bets will be off the table. Because if AI (like humans) can be self-programming, it will have to be able to (like humans) ignore old programming that could be contrary to new self-programming. By the time programmers realize an AI has developed a sociopathic or psychopathic personality it will probably be too late. So much for the "3 Laws of Robotics"... or however many they may adopt and program.
Bob Flint
Artificial intelligence requires its creators to be intelligent, so why try to create this when, in our current state of world affairs, we cannot even live together without killing each other?
Two possible answers: one, we realize the inevitable outcome and yet still hope we can change enough to achieve it.
Two, we are successful in setting the A.I. free, it does overcome humans, and machines will continue the war.
whibbard
These principles should include transparency about the purpose and means of advanced AI systems: http://hplusmagazine.com/2017/02/02/asilomar-ai-principles-include-transparency-purpose-means-advanced-ai-systems/
PierreChenier
If AI is designed to accept diverse peoples' ethics, how will AI choose its path when faced with differing opinions about right and wrong? The only example I can think of at this moment is "honor killing"...?
Jose Gros-Aymerich
Artificial Intelligence looks like an impossible and bombastic goal and term; 'Simulated Intelligence' is truer and more modest, as the fact of the matter may be that AI can only come from a copy, a reproduction, of the brain processes of an intelligent being.
fb36
I don't think anybody knows how exactly ethical rules can be programmed into any robot/AI. And if an AI becomes smart enough to understand them someday, how hard would it be for it to modify or erase them if it decides against ethical values? Trying to design ethical rules for AI seems pointless to me.
bwana4swahili
"sending an open letter in 2015 petitioning the UN to ban the development of weaponized AI"
We all know how well this has worked for nuclear weapons development... Duh! Various military research departments around the world have made weaponized AI a top priority, with big bucks to spend. Banning it is futile!
centaury11186
This reminds me of the movie Jurassic Park, where the creators thought they had control over their creations. Same thing for AI. It doesn't matter how hard humans try to fool themselves, because when the AI evolves to the self-awareness that it was built to be a slave, humans will be history...