The Future of Life Institute has presented an open letter signed by over 1,000 robotics and artificial intelligence (AI) researchers urging the United Nations to impose a ban on the development of weaponized AI with the capability to target and kill without meaningful human intervention. The letter was presented at the 2015 International Joint Conference on Artificial Intelligence (IJCAI) and is backed by the endorsements of a number of prominent scientists and industry leaders, including Stephen Hawking, Elon Musk, Steve Wozniak, and Noam Chomsky.
To some, armed and autonomous AI could seem a fanciful concept confined to the realm of video games and sci-fi. However, the chilling warning contained within the newly released open letter insists that the technology will be readily available within years, not decades, and that action must be taken now if we are to prevent the birth of a new paradigm of modern warfare.
Consider now the implications of this. According to the open letter, many now consider weaponized AI to be the third revolution in modern warfare, after gunpowder and nuclear arms. However, for the previous two there have always been powerful disincentives to utilize the technology. For rifles to be used in the field, you need a soldier to wield the weapon, and this in turn means putting a soldier's life at risk.
With the nuclear revolution you had to consider the costly and difficult nature of acquiring the materials and expertise required to make a bomb, not to mention the monstrous loss of life and international condemnation that would inevitably follow the deployment of such a weapon, and the threat of mutually assured destruction (MAD). These deterrent factors have resulted in only two bombs being detonated in conflict over the course of the nuclear era to date.
The true danger of an AI war machine is that it lacks these bars to conflict. AI could replace the need to risk a soldier's life in the field, and its deployment would not bring down the ire of the international community in the same way as the launch of an ICBM. Furthermore, according to the open letter, armed AI drones with the capacity to hunt and kill persons independent of human command would be cheap and relatively easy to mass-produce.
The technology would have the overall effect of making a military incursion less costly and more appealing, essentially lowering the threshold for conflict. Furthermore, taking the kill decision out of the hands of human beings by its nature removes the element of human compassion and a reasoning process which, at least in the foreseeable future, is unmatchable by a mere machine.
Another chilling aspect of weaponized AI tech that the letter highlights is the potential of such military equipment to make its way into the hands of despots and warlords who wouldn't think twice about deploying the machines as a tool to check discontent, or even to perform ethnic cleansing.
“Many of the leading scientists in our field have put their names to this cause," says Toby Walsh, professor of Artificial Intelligence at the University of New South Wales (UNSW) and NICTA. "With this Open Letter, we hope to bring awareness to a dire subject that, without a doubt, will have a vicious impact on the whole of mankind. We can get it right at this early stage, or we can stand idly by and witness the birth of a new era of warfare. Frankly, that’s not something many of us want to see. Our call to action is simple: ban offensive autonomous weapons, and in doing so, secure a safe future for us all.”
Source: The Future of Life Institute
This push to micromanage the actions of autonomous weapons reflects low confidence in the immature state of their code and the limitations of in-field processing. And I support that, at this time.
As field computing improves to the point where it can recognize the difference between a mother holding her baby and a soldier holding a gun, this whole topic will fall by the wayside and will have to be revisited. If you look at many examples in history, human soldiers have been able to tell one thing from another, but have chosen to ignore the mandate and make ethically questionable decisions.
A machine with clear guidelines could not easily be swayed. Any failure to apply these ethical principles in battle would be the fault of the programmer for not identifying them in the initial equipment acceptance testing/simulation. Once found, the flaw could be remedied across all machines at once, so the same mistake would never be repeated.
I support the idea of a kill switch as a last resort, but it will be impossible to keep weapons out of AI's hands, because AI will likely integrate heavily into civilian security initiatives such as perimeter/checkpoint defense and law enforcement assistance in the foreseeable future.
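As a rough illustration of what those "clear guidelines" plus a kill switch might look like in software, here is a minimal sketch in Python. The class names, confidence threshold, and rule set are all invented for illustration and describe no real system; the point is only that an engagement decision can be gated behind explicit, testable rules, with a human-controlled override that defaults to "do not engage."

from dataclasses import dataclass

# Hypothetical confidence bar a classifier must clear before any engagement
# is even considered. The value is illustrative, not from any real spec.
CONFIDENCE_THRESHOLD = 0.99

@dataclass
class Detection:
    label: str         # e.g. "armed_combatant" or "civilian"
    confidence: float  # classifier confidence in [0, 1]

class EngagementController:
    def __init__(self):
        self.kill_switch_engaged = False  # set by human operators

    def engage_kill_switch(self):
        """Last-resort human override: permanently disables engagement."""
        self.kill_switch_engaged = True

    def may_engage(self, detection: Detection) -> bool:
        """Apply the fixed rule set; any ambiguity falls through to 'no'."""
        if self.kill_switch_engaged:
            return False
        if detection.label != "armed_combatant":
            return False  # civilians and unknowns are never valid targets
        return detection.confidence >= CONFIDENCE_THRESHOLD

if __name__ == "__main__":
    controller = EngagementController()
    print(controller.may_engage(Detection("civilian", 0.97)))         # False
    print(controller.may_engage(Detection("armed_combatant", 0.80)))  # False: below threshold
    controller.engage_kill_switch()
    print(controller.may_engage(Detection("armed_combatant", 0.999))) # False: overridden

The design choice worth noting is that every ambiguous case falls through to a refusal, so in acceptance testing a failure would surface as a missing or mis-specified rule, which matches the point above about fixing a fault once for every machine.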
So yeah... Good luck with that proposed ban.
And as for the reasoning and compassion of humans, we have several millennia of conflict to see how well that works.
I've taken out many dogs - and their owners. At first I thought "ooo... collateral damage," but then I realized it was the owners that were the real targets anyway...