AI big guns pledge not to develop autonomous "killer robots"

Hundreds of companies and thousands of people from the AI industry have pledged not to develop lethal autonomous weapons

The idea of killer robots currently remains in the realm of science fiction, but it's alarming to realize that artificial intelligence experts are treating it as a genuine possibility in the near future. After years of petitioning the United Nations to take action against weaponized AI, the Future of Life Institute (FLI) has now taken matters into its own hands, with thousands of researchers, engineers and companies in the AI industry pledging not to develop, or support the development of, autonomous killing machines in any way.

In 2015, the FLI presented an open letter to the UN, urging the organization to impose a ban on the development of lethal autonomous weapons systems (LAWS). The letter was signed by over 1,000 robotics researchers and prominent figures, such as Elon Musk and Stephen Hawking. Two years later, with talks repeatedly stalling, the FLI and many of the same signatories sent a follow-up.

After another year of inaction, these industry leaders have now taken a more direct approach that doesn't require the UN's input. Thousands of people have now signed a pledge declaring that "we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons."

The signatories this time around include 160 AI-related companies and organizations, and over 2,400 individuals. Among the ranks are Google DeepMind, ClearPath Robotics, the European Association for AI, the XPRIZE Foundation, Silicon Valley Robotics, University College London, and people like Elon Musk, Google Research's Jeffrey Dean and Member of UK Parliament Alex Sobel.

The specific technology the group opposes is weaponized AI that can identify, target and kill people entirely autonomously. That wouldn't include things like military drones, which human pilots use to identify and kill targets remotely. While that might sound like a fine distinction, the group argues that the latter case still keeps a human "in the loop" as a moral and ethical filter for the act.

"We the undersigned agree that the decision to take a human life should never be delegated to a machine," reads the official statement. "There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable."

When these killing machines are linked to data and surveillance platforms, the statement continues, LAWS could become powerful instruments of violence and oppression, essentially making the act of taking human lives too easy, risk-free and unaccountable. Especially problematic is the potential for those devices to fall into the wrong hands through the black market.

The aim of the pledge, it seems, is to "shame" companies and people into signing. As more of the big players jump on board, those that don't will likely come under scrutiny from their peers and customers until they also sign up.

Whether or not it plays out that way, it's at least encouraging to see baby steps being made towards the goal of a killer robot-free future. The UN is set to hold the next meeting on LAWS in August.

Source: Future of Life Institute

Isn't it a bit late, after the horse has bolted, as there are already lethal autonomous weapon systems (LAWS) out there? And by signing this you are placing yourself behind a very large curve, as an autonomous system makes a decision in the blink of an eye, while waiting for a human operator takes much longer.
It's a noble effort, but doomed. If you work on autonomous vehicles or auto-focus cameras or facial recognition, your work is just a mechanic's bridge from an autonomous weapon. Technology cannot be bottled up. And don't forget land mines; we've had LAW for hundreds of years.
We shouldn't stop development, since other countries are pursuing it. We might fall behind, and by then it will be too late. Yes, we might feel good for a while, but then suddenly realize that the enemy is attacking, and we cannot do anything about it.
Expanded Viewpoint
Yeahhhh, riiiight, suuurre, of course that will work to stop psychopaths from doing what psychopaths do!! Don't get me wrong here, the intentions are good, but we sure do know what the road to Hell is paved with!! Greed for money and lust for power over others will ALWAYS win out, until the day when enough people wake up and realize that these mechanical versions of the Frankenstein monster are not such a good idea and that they too will be victims of their own success, even if it's indirectly so. When they get the news of a loved one being killed by one of these machines, in shock and disbelief they will say to themselves, "My God, what have I done?!?!"
Libs are so... Refusing to enter a race does not stop the other guy from winning. Russia, China, North Korea, Iran and others, even ISIS, all have the capability to run this race. As AI grows it will be diverted. The only question is Who?
Robert Walther
Maybe we can resurrect Neville Chamberlain to get this in writing...
If we the people actually controlled what our military does, such a gesture might have some teeth. Unfortunately, we don't. Those with all the money control things. If they can't recruit enough humans to build human-killing machines, they will just use robots to build them. It wouldn't surprise me at all if a deep secret project like that is going on right now. Only the people with a "need to know" will be aware of its existence.
For defense, that kind of work is best done by national labs like Sandia, Oak Ridge, Lawrence Berkeley, etc. So this declaration by private enterprises is moot as far as future deployment is concerned.
As much as I'd like to believe this will change anything, it WON'T! Someone, some group or some country will continue development to the point of no return... The ones abiding by these rules will simply fall by the wayside.
People pledge, with the might of the United Nations to ensure enforcement? Hmm, sounds pretty weak to me, especially since I've already seen semi-autonomous flying AK-47s in videos.
I'm sure the global weapons manufacturers, military, mercenary groups, and Neville's ghost are winking at this with smiles on their faces.