
Kalashnikov’s new autonomous weapons and the “Terminator conundrum”

The Kalashnikov Group recently announced the development of a fully automated combat module with the capacity to make decisions and identify targets
[Image gallery: 15 images from the launch, including the weaponry, vehicles and drones revealed by the Kalashnikov Group; the automated combat modules, which can be fitted to conventional military vehicles and seemingly still retain human control capacities; Vladimir Putin's visit to the company days before the announcement; and a handgun-firing robot revealed by a Russian politician on Twitter several months earlier]

Earlier this month, the Russian weapons manufacturer Kalashnikov Group made a low-key announcement with frightening implications. The company revealed it had developed a range of combat robots that are fully automated, using artificial intelligence to identify targets and make independent decisions. The revelation rekindled the simmering, controversial debate over autonomous weaponry and raised the question: at what point do we hand control of lethal weapons over to artificial intelligence?

In 2015, over one thousand robotics and artificial intelligence researchers, including Elon Musk and Stephen Hawking, signed an open letter urging the United Nations to impose a ban on the development and deployment of weaponized AI. The wheels of bureaucracy move slowly, though, and the UN didn't respond until December 2016. The UN has now formally convened a group of government experts as a step towards implementing a formal global ban, but realistically speaking, any such ban could still be several years away.

Some of the weaponry recently revealed by the Kalashnikov Group

The fully automated Kalashnikov

While the United Nations is still forming a group to discuss the possibility of a ban on AI-controlled weaponry, Russia is already about to demonstrate actual autonomous combat robots. A few days after Russian President Vladimir Putin visited the Kalashnikov Group, the weapons manufacturer infamous for inventing the AK-47, often described as the most effective killing machine in human history, came the following announcement:

"In the imminent future, the Group will unveil a range of products based on neural networks," said Sofiya Ivanova, the Group's Director for Communications. "A fully automated combat module featuring this technology is planned to be demonstrated at the Army-2017 forum," she added, in a short statement to the state-run news agency TASS.

The brevity of the comments makes it unclear exactly what has been produced or how it would be deployed, but the language is clear. The company has developed a "fully automated" system based on "neural networks." This weaponized "combat module" can apparently identify targets and make decisions on its own. And we'll be seeing it soon.

Vladimir Putin visited the Kalashnikov Group just days before the company's announcement

The "Terminator conundrum"

The question of whether we should remove human oversight from any automated military operation has been hotly debated for some time, and in the US there is no official consensus on the dilemma. Known informally inside the corridors of the Pentagon as "the Terminator conundrum," the question is whether stifling the development of these types of weapons would simply allow other, less ethically minded countries to leap ahead, or whether the greater danger lies in ultimately allowing machines the ability to make life-or-death decisions.

Currently, the United States' official stance on autonomous weapons is that a human must be in the loop to approve any engagement that involves lethal force. Autonomous systems can only be deployed for "non-lethal, non-kinetic force, such as some forms of electronic attack."
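
To make that policy concrete in software terms, here is a minimal, purely illustrative sketch of what such an approval gate might look like. Every name in it is hypothetical; it stands in for no real system, and simply shows lethal (kinetic) actions being gated behind explicit human approval while non-kinetic ones proceed autonomously:

```python
# Illustrative "human in the loop" policy gate (hypothetical names throughout).
from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str
    kinetic: bool  # True = lethal force; False = e.g. electronic attack

def human_approves(engagement: Engagement) -> bool:
    """Stand-in for a human operator reviewing the proposed engagement."""
    reply = input(f"Approve kinetic engagement of {engagement.target_id}? [y/N] ")
    return reply.strip().lower() == "y"

def engage(engagement: Engagement) -> None:
    # Non-lethal, non-kinetic actions may proceed autonomously under the
    # stated policy; anything lethal requires explicit human approval.
    if engagement.kinetic and not human_approves(engagement):
        print(f"Engagement of {engagement.target_id} denied by operator")
        return
    print(f"Executing engagement of {engagement.target_id}")

engage(Engagement(target_id="radar-site-7", kinetic=True))
```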

In a compelling essay co-authored by retired US Army Colonel Joseph Brecher, the argument against the banning of autonomous weaponry is starkly presented. A scenario is described whereby two combatants are facing off. One holds an arsenal of fully autonomous combat robots, while the other has similar weaponry with only semi-autonomous capabilities that keep a human in the loop.

In this scenario, the combatant with only semi-autonomous capability is at two significant disadvantages. The first is speed: an autonomous system will inherently be able to act faster and defeat a system that needs to pause for a human to approve its lethal actions.

The second disadvantage of a human-led system is its vulnerability to hacking. A semi-autonomous system, be it on the ground or in the air, requires a communications link that could ultimately be compromised. Turning a nation's combat robots on itself would be the ultimate act of future cyberwarfare, and the more independent a system is, the more closed off and secure it can be to these kinds of outside compromises.

The confronting conclusion to this line of thinking is that restraining the development of lethal, autonomous weapon systems would actually strengthen the military force of those less-scrupulous countries that pursue those technologies.

A Russian politician revealed this handgun-firing robot on Twitter several months ago

Could AI remove human error?

Putting aside the frightening mental image of autonomous robot soldiers for a moment, some researchers are arguing that a more thorough implementation of artificial intelligence into military processes could actually improve accuracy and reduce accidental civilian fatalities.

Human error or indiscriminate targeting often results in those awful news stories showing civilians bloodied by bombs that hit urban centers by mistake. What if artificially intelligent weapons systems could not only find their own way to a specific target, but also accurately identify that target, holding off on weapons deployment until a moment they deem appropriate and safer for non-combatants?

Some of the weaponry recently revealed by the Kalashnikov Group

In a report supported by the Future of Life Institute, the organization bankrolled in part by Elon Musk, research scientist Heather Roff examines the current state of autonomous weapon systems and considers where future developments could be headed. Roff writes that two emerging technologies in particular are sweeping through new weapons development.

"The two most recent emerging technologies are Target Image Discrimination and Loitering (i.e. self-engagement)," writes Roff. "The former has been aided by improvements in computer vision and image processing and is being incorporated on most new missile technologies. The latter is emerging in certain standoff platforms as well as some small UAVs. They represent a new frontier of autonomy, where the weapon does not have a specific target but a set of potential targets, and it waits in the engagement zone until an appropriate target is detected. This technology is on a low number of deployed systems, but is a heavy component of systems in development."

The autonomous capacity of the weaponry is yet to be demonstrated

Of course, these systems would still currently require a "human in the loop" to trigger any lethal action, but at what point is the human actually holding back the efficiency of the system?

These are questions that no one currently has good answers for.

With the Russian-backed Kalashnikov Group announcing the development of a fully automated combat system, and the United Nations skirting around the issue of a global ban on autonomous weaponry, we are quickly going to need to figure those answers out.

The "Terminator conundrum" may have been an amusing thought experiment for the last few years, but the science fiction is quickly becoming science fact. Are we ready to give machines the authority to make life or death decisions?

17 comments
WilliamSager
I recall not that long ago Russia talking about a supersonic, intercontinental-range, self-homing torpedo. Forgive me for not worrying too much. I also recall back in the 1950s the USSR tricking us into thinking they had a nuclear-powered bomber with unlimited range. Before long our designers were talking about building thin-walled nuclear reactors to power our aircraft. The aircraft would be based on isolated islands and manned by pilots who had already had children. President Eisenhower put an end to this talk in his famous "Military Industrial Complex" speech.
Ralf Biernacki
As far as Col. Brecher's assertion that human-supervised systems would be more vulnerable to hacking, I take the opposite view. A fully autonomous system could be compromised without anyone noticing---the first warning would be a major battle disastrously lost. On the other hand, a human operator would have a fair chance of noticing anything odd. That is why all industrial plants, even fully automated production lines, always have a human operator supervising things. The assertion that the presence of the operator makes the plant more hackable is preposterous. And "vulnerable" communication links would not be eliminated anyway by making the robots autonomous---human soldiers are more autonomous, and have more sophisticated neural nets, than any current or prospective robot, yet still need to communicate on the battlefield. You cannot have an army without C&C, and no future development can ever change that.

I also mostly agree with William---Russia regularly makes announcements like that for propaganda leverage. Still, it would be prudent to keep an eye on the advancements in the field, to be able to judge when this hype starts to turn realistic.
Gizmowiz
Why not make a nuclear plane manned only by robots? No shielding would be necessary then. It could then fly for months without landing to refuel.
Bob
I have wondered for years why we did not have a weapons system that could automatically fire back if fired upon. A system that could immediately identify the source of the hostile fire and return fire within milliseconds. Such a system would not have to be very large and would be an efficient defense against snipers by locating their position and returning fire quickly. It could also be used to defend positions that were vulnerable to attack. As far as AI goes for fully autonomous robots, I doubt they will reduce civilian casualties. When unethical enemies use civilians for cover and children to carry suicide bombs or hide in schools and hospitals, there will be no choice but to shoot. It won't matter if AI is making the decision or a human.
Catweazle
The scope for unintended consequences would appear limitless...
Infidel762
A UN Ban, that will work wonders. Gort?
Kpar
The article says, "The UN has now formally convened a group of government experts..."
OK, that's it! Nothing to see here, move along.
This will be resolved in NO TIME!
Helios
Why not imagine a world without war? To those who say that will never happen: of course you are correct, if only because of the unwillingness to try to accomplish it. War is big business, and it is perpetuated not by man's nature but by marketing, advertising and propaganda.
@Vincent, you are a bit behind the state of the art. UAVs are remotely piloted; no need for a "futuristic robot" to manage the controls. Very shortly there will be fully autonomous UAVs, no need for a shift change.
Wolf0579
Anyone who thinks Russia will be deterred by the UN, or "political correctness" had better think again. I know that about half the country has been converted to the idea that Russia is "OK". Let me tell those people that we have NEVER caught the Russians doing ANYTHING beneficial for the Human Race. EVER.
FabianLamaestra
Cue up the Skynet history lesson....
https://www.youtube.com/watch?v=4DQsG3TKQ0I