February 7, 2017
Robot rights: at what point should an intelligent machine be considered a 'person'?
Science fiction likes to depict robots as autonomous machines, capable of making their own decisions and often expressing their own personalities. Yet we also tend to think of robots as property, and as lacking the kind of rights that we reserve for people.
But if a machine can think, decide and act on its own volition, if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?
What if a robot achieves true self-awareness? Should it have equal rights with us and the same protection under the law, or at least something similar?
These are some of the issues being discussed by the European Parliament's Committee on Legal Affairs. Last year it released a draft report and motion calling for a set of civil law rules on robotics regulating their manufacture, use, autonomy and impact upon society.
Of the legal solutions proposed, perhaps most interesting was the suggestion of creating a legal status of "electronic persons" for the most sophisticated robots.
The report acknowledged that improvements in the autonomous and cognitive abilities of robots make them more than simple tools, and make ordinary rules on liability, such as contractual and tort liability, insufficient for handling them.
For example, the current EU directive on liability for harm by robots only covers foreseeable damage caused by manufacturing defects. In these cases, the manufacturer is responsible. However, when robots are able to learn and adapt to their environment in unpredictable ways, it's harder for a manufacturer to foresee problems that could cause harm.
The report also asks whether sufficiently sophisticated robots should be regarded as natural persons, legal persons (like corporations), animals or objects. Rather than lumping them into an existing category, it proposes that a new category of "electronic person" is more appropriate.
The report does not advocate immediate legislative action, though. Instead it proposes that legislation be updated if and when robots develop greater behavioural sophistication. If this occurs, one recommendation is to reduce the liability of "creators" in proportion to the autonomy of the robot, with a compulsory "no-fault" liability insurance scheme covering the shortfall.
But why go so far as to create a new category of "electronic persons"? After all, computers still have a long way to go before they match human intelligence, if they ever do.
Yet it is widely agreed that robots – or, more precisely, the software that controls them – are becoming increasingly complex. Autonomous (or "emergent") machines are becoming more common. There are ongoing discussions about legal liability for autonomous vehicles, or whether we might be able to sue robotic surgeons.
These are not complicated problems as long as liability rests with the manufacturers. But what if manufacturers cannot be easily identified, such as when an autonomous vehicle runs open source software? Whom do you sue when there are millions of "creators" all over the world?
Artificial intelligence is also starting to live up to its moniker. Alan Turing, the father of modern computing, proposed a test in which a computer is considered "intelligent" if it fools humans into believing that the computer is human by its responses to questions. Already there are machines that are getting close to passing this test.
There are also other incredible successes, such as the computer that creates soundtracks to videos that are indistinguishable from natural sounds, the robot that can beat CAPTCHA, one that can create handwriting indistinguishable from human handwriting and the AI that recently beat some of the world's best poker players.
If this progress continues, it may not be long before self-aware robots are no longer just a product of fantastical speculation.
The EU report is among the first to formally consider these issues, but other countries are also engaging. Peking University's Yueh-Hsuan Weng writes that Japan and South Korea expect human-robot coexistence by 2030. Japan's Ministry of Economy, Trade and Industry has created a series of robot guidelines addressing business and safety issues for next-generation robots.
If we did give robots some kind of legal status, what would it be? If they behaved like humans we could treat them like legal subjects rather than legal objects, or at least something in between. Legal subjects have rights and duties, and this gives them legal "personhood". They do not have to be physical persons; a corporation is not a physical person but is recognised as a legal subject. Legal objects, on the other hand, do not have rights or duties although they may have economic value.
Assigning rights and duties to an inanimate object or software program independent of their creators may seem strange. However, with corporations we already see extensive rights and obligations given to fictitious legal entities.
Perhaps the approach to robots could be similar to that taken with corporations. The robot (or software program), if sufficiently sophisticated or if satisfying certain requirements, could be given rights similar to those of a corporation. This would allow it to earn money, pay taxes, own assets and sue or be sued independently of its creators. Its creators could, like directors of corporations, have rights or duties to the robot and to others with whom the robot interacts.
Robots would still have to be partly treated as legal objects since, unlike corporations, they may have physical bodies. The "electronic person" could thus be a combination of both a legal subject and a legal object.
The European Parliament will vote on the resolution this month. Regardless of the result, reconsidering robots and the law is inevitable and will require complex legal, computer science and insurance research.