
As the UN delays talks, more industry leaders back ban on weaponized AI

A second open letter, this time from 116 founders of AI and robotics companies, is urging the UN to act on banning weaponized AI

Two years ago, the Future of Life Institute presented an open letter at the 2015 International Joint Conference on Artificial Intelligence (IJCAI) urging the United Nations to ban the development of weaponized artificial intelligence. Now a second open letter has been released, again coinciding with the start of IJCAI, this time its 2017 edition. This new letter is co-signed by more than 100 founders of robotics and AI companies from around the world, and demands the UN stop delaying its talks and take action.

Just a few years ago, the idea of autonomous weaponry resided solely within the realm of science fiction, but the rapidly advancing fields of AI and robotics have turned a frightening fiction into a dawning reality. With global arms manufacturer Kalashnikov recently launching a range of fully automated combat modules and startup Duke Robotics attaching machine guns to drones, the future of robotic and autonomous warfare seems incredibly close.

The original 2015 letter, directed at the UN, was co-signed by over 1,000 scientists and researchers from around the world, including Stephen Hawking, Noam Chomsky and Steve Wozniak. The UN slowly but surely responded, formally convening a group of experts in late 2016 under the banner of the Convention on Certain Conventional Weapons (CCW), with a view to discussing and implementing a global ban.

The first discussions of this newly formed UN group were set to take place this month, but they were canceled back in May due to "insufficient funding". This bureaucratic bungle, stemming from several nations apparently falling into arrears with promised contributions, also threatens to cancel the second scheduled meeting on lethal autonomous weapons set for November this year.

These delays inspired this second open letter, which concentrates on recruiting support from the business and industry side of robotics and AI. One hundred and sixteen founders of major companies from around the world have already co-signed the new letter, including Elon Musk, Mustafa Suleyman (co-founder of Google's DeepMind) and Esben Østergaard (founder of Denmark's Universal Robots).

"Lethal autonomous weapons threaten to become the third revolution in warfare," the letter states. "Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora's box is opened, it will be hard to close."

This unmanned ground vehicle is produced by Milrem, and while currently remotely operated, it could easily be adapted to autonomous operation

Despite getting a notable collection of industry luminaries on board, this appeal looks set to face an uphill battle over the coming months and years. Advocates of a ban on lethal autonomous weapons want all development in the field to be considered for prohibition, just as is done with biological and chemical weapons, but not all countries are in agreement.

While most UN member countries, including the US and UK, have agreed to the formation of this panel of experts, any actual proposal for a ban will likely face strong opposition. In 2015, the UK Foreign Office told The Guardian that the government did not see a need for such new laws, and Russia, of course, has not expressed support for the process either.

The United States has not communicated a firm position on the matter, and while it supported the convening of this UN group, it is hard to imagine the world's biggest military power willingly backing a proposal that would stifle its ability to develop complex new weapons systems – especially when Russia has already indicated support for Kalashnikov's AI-driven systems.

Whether such broad collective support across the academic, research and industry fields actually amounts to anything remains to be seen, but this second open letter will hopefully prompt a conversation on AI weapons development that the world desperately needs to have.

Source: University of New South Wales

6 comments
Daishi
Rodney Brooks (iRobot founder) responded to Elon's alarmism around AI by saying "If it doesn't apply to anything, what the hell do you have the regulation for? Tell me, what behavior do you want to change, Elon?" but I think "don't pair AI with lethal force" is a fairly clear line in the sand worth drawing. Look, even humans are sometimes unclear on rules of engagement and good guy vs. bad guy in conflicts, even after much reflection. It is very possible to build robots that, if given full autonomy, would be horribly efficient killing machines inflicting mass casualties on the other side, and doing this should be viewed similarly to the use of nuclear or chemical weapons. I think there is a line in the sand that can be drawn between humans piloting sophisticated weaponry and letting AI "take the wheel" in taking human lives, and it shouldn't be crossed. If we want to use AI for recon and information gathering, fine, so long as those platforms are not weaponized. This seems objectively fair enough to both sides of the debate that we should be able to find some common ground.
Brian M
It's a pretty pointless argument: AI weapons will be created and used irrespective of the UN, simply because not all regimes, North Korea for example, will obey UN rules, and because AI is a lot easier to research and develop than nuclear weapons.
So it's absolutely essential that the 'good' guys have access to them as well, otherwise the bad guys will win. It's as simple as that: you can't uninvent something. Nuclear weapons and ballistic missiles should tell us that!
Not a nice thought, but it's the real world out there.

chase
You don't need to ban weaponizing AI.
All that needs to be done is to add in... A) a life-detection system with a failsafe, and B) a protocol banning weapon use against life forms, one that can't be overridden without disabling the AI device.
Then you're left with all sides just hashing out disputes via robot wars, which they already have on a much smaller scale. Scientists aren't complaining about those, which do nothing more than promote the technologies mentioned in the article.
With units on the magnitude of the proposed weaponized AI, you could televise it. Start building major arenas where all wars are fought and sell tickets to the show.
Get your popcorn here! Hot dogs and fresh beer!
Heck, the UN could oversee all robot war events... even be the referees.
Ralf Biernacki
@chase: "Then you're left with all sides..." The fundamental problem with your solution is the "all" in that sentence. Brian M hits it right on the spot: secret research and development in that area cannot be prevented. In fact, the more effective weaponized AI is, the more imperative it is for the good guys (those who would consider obeying the ban) not to fall behind. The ban is self-defeating, and I am surprised Hawking et al. haven't realized that.
SteveO
The only way to truly ban weaponized robots is to ban the development of AI. There is so much dual-use technology that it would be extremely easy for anyone to build such a weapon in their garage. One could use a basic facial recognition algorithm to identify a specific person or people with certain characteristics and rather than send a signal to the authorities, send the signal to a servo attached to a gun. Additionally, while I think Elon Musk is doing a lot of interesting work, he is being a bit ironic with all of this. Just a week ago he was bragging about how his AI company developed an algorithm that beat the best human players in a combat simulation game. With a bit (maybe a little more than a bit...) more development, you could apply that to an unmanned weapon platform and have the thing he appears to be afraid of this week. Goes to show you that when you are super rich, you really can have it both ways.
Daishi
@SteveO I agree with the previous comment that dual-use technology is impossible to control, but I think there is a big difference between building AIs to play chess and Dota and giving robots actual guns with autonomous use in mind. One is video games and the other is real life, and the difference between them isn't all that ambiguous. Your question is almost like asking "What's the difference between shooting people in video games and shooting actual people with real guns?" The actual code for autonomous robot and vehicle navigation and object tracking is absolutely going to be written no matter what; that's the harder part to solve. Equipping those platforms with guns isn't technically that much more complicated, and it's already been done for years with autonomous paintball guns by hobbyists. We will never prevent the technology from existing (it already does), but we can ask that it not be paired with lethal force.