Artificial intelligence is poised to seriously disrupt the world, but will its impacts be for the good of humanity or bring about its destruction? The question sounds like the basis of a sci-fi flick, but given the speed at which AI is advancing, hundreds of AI and robotics researchers have converged to compile the Asilomar AI Principles, a list of 23 principles, priorities and precautions that should guide the development of artificial intelligence to ensure it is safe, ethical and beneficial.
The list is the brainchild of the Future of Life Institute, an organization that aims to help humanity steer a safe course through the risks that might arise from new technology. Prominent members include the likes of Stephen Hawking and Elon Musk, and the group focuses on the potential threats to our species posed by technologies and issues like artificial intelligence, biotechnology, nuclear weapons and climate change.
At the Beneficial Artificial Intelligence (BAI) 2017 conference in January, the group gathered AI researchers from universities and companies to discuss the future of artificial intelligence and how it should be regulated. Before the meeting, the institute surveyed attendees on how they thought AI development should be prioritized and managed in the coming years, and used those responses to draft a list of candidate principles. That list was then refined and debated at the conference, and a point was only included in the final list if at least 90 percent of the scientists agreed on it.
The full list of the Asilomar AI Principles reads like an extended version of Isaac Asimov's famous Three Laws of Robotics. The 23 points are grouped into three areas: Research Issues, Ethics and Values, and Longer-Term Issues.
Research Issues cover the responsibilities of scientists and researchers developing AI systems, and the "thorny questions" potentially arising in relation to computer science, economics, law, ethics and social studies. Among the points raised here are that AI should be created not for its own sake but for clear benefits, and that the prosperity boost of automation needs to be balanced against the risk of leaving too many people displaced as a result. Keeping an open, co-operative culture of AI research is also a priority, to ensure that researchers exchange information with each other and with policy makers, and don't cut corners on safety in a race against their "competitors."
Perhaps the most interesting and debatable point from that section is "What set of values should AI be aligned with, and what legal and ethical status should it have?" A world where robots are complex enough to have "rights" might seem far off, but these debates are already beginning in the European Union. The sooner we consider these questions, the easier the transition should be.
While the question of what AIs should value is still open, the scientists agreed that AI agents should be designed to comply with general "Human Values" like dignity, rights, freedoms and cultural diversity. That means that applying AI to personal data shouldn't infringe on anyone's privacy, liberties, or safety. If something does go wrong, people need to be able to determine why and how the issue arose, and the designers and builders have a certain moral responsibility in how these systems are used – or misused.
Some of these points are already being considered in practice: scientists at Google's DeepMind have discussed how to implement a "big red button" to intervene when an AI system starts down a concerning course of action, and how to prevent it from learning to resist that interruption.
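To make the idea concrete, here is a minimal, purely illustrative sketch of what a "big red button" might look like at the level of an agent's control loop. All names (ToyAgent, BigRedButton, run_episode) are hypothetical, and this is not DeepMind's actual method, which tackles the harder problem of shaping the learning algorithm itself so the agent never gains an incentive to resist interruption.

```python
# Illustrative sketch only: a trivial "big red button" wrapper around an
# agent's control loop. Names are hypothetical; this is not DeepMind's
# safely-interruptible-agents technique.
import random


class ToyAgent:
    """A stand-in agent that picks random actions from a fixed set."""

    def act(self, observation):
        return random.choice(["forward", "left", "right"])

    def learn(self, observation, action, reward):
        # A real agent would update its policy here. The interruption below
        # bypasses this method entirely, so the agent receives no training
        # signal that would teach it to avoid the red button.
        pass


class BigRedButton:
    """Human-operated kill switch that can override the agent's actions."""

    def __init__(self):
        self.pressed = False

    def press(self):
        self.pressed = True


def run_episode(agent, button, steps=3):
    for step in range(steps):
        observation = {"step": step}
        if button.pressed:
            # Override with a known-safe action and skip the learning update,
            # so the interruption stays "invisible" to the agent's training.
            action = "stop"
        else:
            action = agent.act(observation)
            agent.learn(observation, action, reward=0.0)
        print(f"step {step}: action={action}")


if __name__ == "__main__":
    agent = ToyAgent()
    button = BigRedButton()
    run_episode(agent, button)   # normal operation
    button.press()               # human operator intervenes
    run_episode(agent, button)   # agent is safely overridden
```

The key design choice in this toy version is that the override never feeds back into the agent's reward, which is the intuition behind making an interruption something the system has no reason to fight.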
Particularly chilling is the notion that "an arms race in lethal autonomous weapons should be avoided." The Future of Life Institute has stressed this point in the past, sending an open letter in 2015 petitioning the UN to ban the development of weaponized AI.
The scientists round out the list with a look at potential longer-term issues, which include directing resources towards developing this important technology while planning for and mitigating the risks that AI systems could pose – "especially catastrophic or existential risks."
To that end, safety and control measures should be applied to any AI capable of improving or replicating itself, to keep that particular doomsday scenario from occurring. More generally, "superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization."
The full list and details of the Asilomar AI Principles are available here.
Source: Future of Life Institute