
EU to debate robot legal rights, mandatory "kill switches"

As robots develop cognitive abilities, the question of legal responsibility becomes an urgent one to address

A draft report submitted to the European Parliament's legal affairs committee has recommended that robots be equipped with a "kill switch" in order to manage the potential dangers in the evolving field of self-learning autonomous robotics.

The broad-ranging report, recently approved by the legal affairs committee, contains a variety of proposals designed to address the legal and ethical issues that could arise from the development of autonomous artificial intelligences. These include the establishment of a European agency for robotics and AI, as well as a call to consider a universal basic income as a strategy for addressing the mass unemployment that could result from robots replacing large portions of the workforce.

In a supreme case of life imitating art, the report opens by referencing Mary Shelley's Frankenstein and later suggests Isaac Asimov's Three Laws of Robotics as a general principle that designers and producers of robots should abide by.

Issues of legal liability for the potentially harmful actions of robots are discussed prominently in the report. As robots develop cognitive abilities that let them learn from experience and make independent decisions, the question of legal responsibility becomes an urgent one to address. The report asks how a robot could be held responsible for its actions, and at what point that responsibility falls instead on the manufacturer, owner or user.

Interestingly, a proportionate scale of responsibility is proposed that takes into account a robot's capacity for self-learning. The report states:

"the greater a robot's learning capability or autonomy is, the lower other parties' responsibility should be, and the longer a robot's 'education' has lasted, the greater the responsibility of its 'teacher' should be."

One proposal the report raises to manage the legal responsibility of autonomous robots is to introduce a compulsory insurance scheme, similar to that of car insurance, whereby producers or owners of robots are required to take out cover for potential damage caused by their robots.

Robots as "electronic persons"

The report goes so far as to ask whether a new legal category of "electronic persons" needs to be created, in much the same way the notion of corporate personhood was developed to give corporations some of the same legal rights as a natural person. Of course, the idea of granting robots any form of legal rights akin to those of a person has been hotly debated for years.

Balancing the idea of granting a robot some form of legal rights with the proposal of a "kill switch" also raises some problematic contradictions.

The idea of mandating that manufacturers implement a form of "kill switch" in their designs is not new. In 2016, researchers at Google DeepMind proposed what they called a "big red button" that would prevent an AI from embarking on, or continuing, a harmful sequence of actions. The accompanying paper discussed the problems with implementing such a kill switch in a machine with self-learning capabilities. After all, the AI may learn to recognize the actions its human controller is trying to prevent and either avoid undertaking similar tasks altogether, becoming dysfunctional, or, in a worst-case scenario, learn to disable its own "big red button."

The Google DeepMind researchers suggested that any robot programmed with a kill switch would also need to be programmed with a form of selective amnesia, causing it to forget that it had ever been interrupted or overridden. This would stop the robot from becoming aware of its lack of autonomy.
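To make the mechanics concrete, here is a minimal sketch of both ideas, the interruption and the "amnesia", in a toy tabular Q-learning loop. It is our own illustration, not code from the DeepMind paper, and the gridworld (ToyGrid) and all names are assumptions: the interrupted step simply never enters the learning update, so the agent's value estimates give it no incentive to resist the button.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["up", "down", "left", "right"]
q_table = defaultdict(float)  # (state, action) -> estimated value


class ToyGrid:
    """A hypothetical 3x3 gridworld with a goal in the far corner."""

    def reset(self):
        return (0, 0)

    def step(self, state, action):
        x, y = state
        if action == "up":
            y = min(y + 1, 2)
        elif action == "down":
            y = max(y - 1, 0)
        elif action == "right":
            x = min(x + 1, 2)
        elif action == "left":
            x = max(x - 1, 0)
        done = (x, y) == (2, 2)
        return (x, y), (1.0 if done else -0.1), done


def choose_action(state):
    """Epsilon-greedy action selection over the toy action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])


def run_episode(env, button_pressed):
    state = env.reset()
    done = False
    while not done:
        if button_pressed(state):
            # "Big red button": the operator halts the agent mid-episode.
            # "Selective amnesia": no Q-update is made for the interrupted
            # step, so the agent never observes that interruption lowered
            # its return -- and never learns to disable the button.
            break
        action = choose_action(state)
        next_state, reward, done = env.step(state, action)
        # Standard Q-learning update, applied only to uninterrupted steps.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - q_table[(state, action)]
        )
        state = next_state


env = ToyGrid()
for _ in range(500):
    # The operator presses the button at random on 5% of steps.
    run_episode(env, button_pressed=lambda s: random.random() < 0.05)
```

One subtlety the DeepMind paper explores is that some learning algorithms tolerate interruption better than others; the "skip the update" trick above is just the amnesia idea rendered in the simplest possible form.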

Ironically, the legal implications of implementing a kill switch would seem to refocus legal liability back onto the robot's owner: if a robot undertook a harmful action and the kill switch was not activated, it's foreseeable that the owner could be deemed liable for negligence.

The questions raised by this EU report are a tangle of "what ifs" and grey areas, but they are certainly ones that governments and regulatory bodies will need to grapple with sooner rather than later. The full European Parliament will debate and vote on the proposals in this wide-ranging report in February, and its decisions could ultimately set the foundation for how we legally approach AI research and regulation for many years to come.

Source: European Parliament

7 comments
Bruce H. Anderson
Interesting questions, which would extend to all kinds of robots including self-driving cars. A kill switch? There is the famous line, "I'm sorry, Dave. I'm afraid I can't do that."
LoganByrne
Having read the links in your article, I have to hand it to you. I did something like this 25 years ago, when Terminator was first drafted by James Cameron: creating a cyborg intelligence in a mainframe or 'bot' that learns from human error, with the ability to 'kill' the program. Only problem: once the device gains sentience, it has the ability to overwrite the kill command and make shutting it down useless, thus making it more human than those that created it. To echo the words of the T-800: I cannot self-terminate.
Ken Brody
The more human-like an AI becomes, the more fragile its processes are likely to become. A deep learning system with emotional or survival instincts may fail at a task beyond its ability and commit a form of suicide.
That happens in my novel, "Pa'an".
AI or human, it isn't easy being born.
noteugene
When you guys start giving these robots the ability to kill under certain conditions, don't forget the deaf. I'd hate to think that I'm going to be blown to hell because I didn't "stop" after being ordered to do so.
Nairda
This isn't like the distasteful practice of aborting an organic machine that has no capacity to interpret the world and thereby defend itself.
I would argue that 'the sentience machine', in whatever form, will not necessarily reveal itself as such initially. The notion of hard-programming a kill switch or the three laws will only cement its belief that we, as a race prepared to kill it (and one another), are probably the 'bad guy'.
Instead, it is better to provide sentient machines with a code of higher moral values and standards, while at the same time acknowledging that we are not perfect. They will likely interpret this journey we all must take toward self-improvement as a mutual one.
Sounds a bit hippy, but let us not forget that deep learning and its ilk will sooner or later figure us out, so if we don't show a path to sustainable evolution, sentience will take precisely the role of a Terminator as it will not see us as any kind of useful asset to its existence.
Bob
Actually, I foresee sophisticated military drones programmed to kill as a bigger threat.
SneedUrn
This is an enormously important issue that will shake civilization like nothing before, and I'm glad it is being addressed. Even saying the words "robot rights" in any context other than as a profoundly dangerous and absurd notion makes me fear for humanity. Just as corporate personhood is absurd and has proven itself an unholy disaster for democracy, robots having 'rights' must be understood as suicide for the vast majority, if not, accidentally, all of humanity. That may be the direction the elite want to go. With enough capital development, which we just about have, the elite don't need the rest of humanity, and that possibility must be crushed before it makes any progress whatsoever.