Robotics

AI experts call for boycott of South Korean university over autonomous weapons research

Allegedly autonomous-capable weaponry revealed by Russia and the Kalashnikov Group in 2017

Ahead of a major United Nations meeting to address the growing issues surrounding lethal autonomous weapons, a team of leading AI and robotics researchers has called for a boycott of a South Korean university that recently announced the opening of an AI weapons lab.

Back in February, South Korea's leading government-run university, KAIST (Korea Advanced Institute of Science and Technology), announced the opening of a new facility called the Research Center for the Convergence of National Defense and Artificial Intelligence. The announcement revealed the new research facility would investigate a variety of AI-based military systems, including autonomous undersea vehicles and "AI-based command and decision systems."

The announcement was received with great concern by many in the artificial intelligence research community, and a recently revealed open letter, signed by more than 50 researchers from around the globe, is calling for a boycott of all academic collaborations with KAIST over the matter.

"At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons," the letter states.

"We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control. We will, for example, not visit KAIST, host visitors from KAIST, or contribute to any research project involving KAIST."

The boycott has been organized by Toby Walsh, an artificial intelligence researcher at the University of New South Wales in Sydney, who has been prominent in calling for regulation of the development of autonomous weapons. Walsh previously masterminded major open letters in 2015 and 2017 calling for a ban on weaponized artificial intelligence.

"Back in 2015, we warned of an arms race in autonomous weapons," says Walsh. "That arms race has begun. We can see prototypes of autonomous weapons under development today by many nations including the US, China, Russia and the UK."

KAIST president Sung-Chul Shin has responded to the threatened academic boycott by issuing a statement denying the institution is working on lethal autonomous weapons systems or "killer robots."

"The centre aims to develop algorithms on efficient logistical systems, unmanned navigation [and an] aviation training system," Shin says in the statement. "KAIST will be responsible for educating the researchers and providing consultation. As an academic institution, we value human rights and ethical standards to a very high degree. KAIST will not conduct any research activities counter to human dignity, including autonomous weapons lacking meaningful human control."

Curiously, the original announcement describing the opening of the new research center, published in late February, has since been removed from the KAIST website.

The phrase "meaningful human control" appears to be the crux of these ongoing regulatory discussions. On April 9, the United Nations Group of Governmental Experts on Lethal Autonomous Weapons Systems will reconvene for the first of two meetings scheduled this year to investigate policy outcomes related to AI weapons. The provisional agenda of these meetings suggests that determining what "meaningful human control" actually means will be fundamental to any future legal provisions.

While 22 nations have already called for an outright, pre-emptive ban on the development of autonomous weapons, several larger military states seem to be delaying any action. Russia, having already revealed its progress toward developing autonomous weaponry, has flatly refused to support any "preventative ban" on the issue, and the United States is developing its own autonomous weapons programs. Without the support of these major military powers, it is hard to see any future United Nations regulation being anything but symbolic.

Update April 9, 2018: The boycott of KAIST by more than 50 AI and robotics researchers has ended.
"I was very pleased that the president of KAIST has agreed not to develop lethal autonomous weapons, and to follow international norms by ensuring meaningful human control of any AI-based weapon that will be developed," said Toby Walsh, who initiated the action. "I applaud KAIST for doing the right thing, and I'll be happy to work with KAIST in the future.
"It goes to show the power of the scientific community when we choose to speak out – our action was an overnight success. We initially sought assurances in private from the university more than month ago about the goals of their new lab. But the day after we announced the boycott, KAIST gave assurances very publicly and very clearly.
"There are plenty of good applications for AI, even in a military setting. No one, for instance, should risk a life or limb clearing a minefield – this is a perfect job for a robot. But we should not, however, hand over the decision of who lives or who dies to a machine – this crosses an ethical red-line and will result in new weapons of mass destruction."

9 comments
notarichman
Yep, let's send the robots instead of humans. Both make mistakes, but who's to blame when a robot makes one? See auto-driven car crashes.
Bob
I get very disturbed by people who threaten boycotts on ethics when they don't seem to have any. Would you rather have humans killing humans than AI killing humans? Haven't most of the atrocities and genocides in war been under "meaningful human control"? Wouldn't you agree that any war is under "meaningful human control"? When others are developing these weapons can you ethically try to hinder your own side? Yes, there needs to be discussion but Pandora is already out of the box.
aki009
Good luck with that boycott. Everybody is researching AI weapons.
BrianK56
How is it that the rest of the world can work on AI but SK should not? I could understand if it were NK.
Lardo
But then, if it were NK, no one (at least at the UN) would utter a peep.
SteveO
Too many don't seem to understand that if you want to ban lethal autonomous systems, you actually have to ban AI development. Most AI algorithms that are currently being used could easily be used for lethal purposes with very minor changes, including small code changes or the addition of a simple subsystem. Facial recognition software that sends a signal when a person of interest is spotted could just as easily send the exact same signal to a gun mount to kill that person. Self driving cars could have a hacker change "avoid pedestrians" to "drive over pedestrians", as another example.
Robert Walther
...and of course with humane systems like Russia and Chinese CCP leading the way...
bwana4swahili
I'm sure all the countries developing AI weaponry would love to see western friendly countries not proceed down this path. Russia, China and the USA would be first in line to protest this development UNLESS they could steal or buy it!
Daishi
@SteveO You aren't wrong. Self-driving car work really began with the 2004 DARPA Grand Challenge and progressed from there. The team from Stanford that won the 2005 challenge essentially went on to found Google's self-driving car division. So the US government's Defense Advanced Research Projects Agency is effectively responsible for the self-driving cars (or the attempts at them anyway) we have today. The teams started out having to navigate through open desert and later expanded to navigating roads by 2007.