Shall we send robots to jail?

 

A month ago, AlphaGo, a program developed by Google DeepMind, defeated the champion Lee Sedol in a five-game match of Go.

AlphaGo is a machine learning system that has been ‘trained’ on a database of 600,000 games and then by playing against itself. Its victory made history, and not only because it shows how advanced and intelligent machine learning systems have become.

Apparently AlphaGo was capable of a winning move that many have described as beautiful, unpredictable, human-like. Actually, more than human-like. As one commenter put it: “It’s not a human move. I’ve never seen a human play this move.” The move was totally unexpected.

 

Rewind. It’s 1997 and Garry Kasparov has just been defeated by IBM’s Deep Blue in a seminal chess match. The machine wins after an “incredibly refined move” that Kasparov describes as practically counterintuitive, it was so unexpected. Funnily enough, years later, one of Deep Blue’s designers reveals that the counterintuitive move was not programmed at all, but the mere result of a bug. A mistake, in other words.

Robots are capable not only of clever behaviour, but of actions they were neither programmed nor expected to perform. But as Stanford Law fellow Eran Kahana has noted, “Within the context of a game, AI unpredictability is harmless. But once we exit that safe haven, (…) the consequences of a “shocking” action can be dangerous and expensive”.

So, whether the unexpected behaviour is the result of some autonomous process or of an error: who is responsible for the machine’s actions?

Legal scholars and practitioners have been asking this question since at least 1983.

And it seems to me, after all, that the scene is divided between two kinds of approaches.

The Blade Runner approach

The supporters of what I’ll call the Blade Runner approach basically ask whether robots can, or will ever be able to, feel and think like a human, and whether we have any way to find it out.

Now, asking whether a robot really understands seems a bit of an extreme question… But look at this.

[GIF: a man kicking a robot dog]

Just a guy kicking a robot dog… but something doesn’t feel quite right. Many people started to complain that the dog was being mistreated, that the kicking was cruel, that “Kicking a dog, even a robot dog, seemed wrong”. Basically, we tend to attribute human features to machines. So no wonder we try to do it in the world of law.

Criminal law often requires culpability in the offender: we call it the “guilty mind” or mens rea. That’s why, for this group of folks, it’s important to find out whether the robot has committed a crime with a guilty mind, that is, whether it has intentions.

And even if some conclude that unpredictable actions like AlphaGo’s moves “may not be enough evidence for sentience”, others plead for more research into the robot’s mind. “If we continue to develop sophisticated forms of artificial intelligence, we have a moral obligation to improve our understanding of the conditions under which artificial consciousness might genuinely emerge,” says Eric Schwitzgebel, professor of philosophy at the University of California.

All this seems arduous, doesn’t it? We are not even fully aware of how the human brain and human consciousness work – how can we establish that for computers?

The practical approach

There are others who approach the matter in a different, more practical way. They don’t ask and don’t care whether the robot is sentient. They move into the realm of what we lawyers call “tort law”. All they ask is whether and when the designers of the machine should be held accountable for its unpredictable behaviour.

According to Kahana, they shouldn’t: but not because the robot should be held accountable instead. Designers shouldn’t, because that wouldn’t be “reasonable, fair and economically efficient”. “If AI designers are by default (unlimitedly) liable because they build AI applications that can behave unpredictably that can have an undesired effect of hobbling a nascent industry”.

For Kahana, we need a balanced, down-to-earth approach, mixing legal and industry standards and evaluating case by case. “If, for example, an AI designer was legally required to bake in a security mechanism (such as a back-door) that disables the AI when a harmful activity occurs, the failure to implement it becomes the violation, not the design of the AI itself.”

Similar premises for IP attorney Nathan Greenblatt, who focuses on the issues raised by Google’s self-driving cars. We don’t need to look for a computer’s will or mens rea, says Greenblatt: “The robo-driver’s private “thoughts” (in the form of computer code) need not be parsed. Only its conduct need be considered.”

If there is negligent conduct and damage, it is the carmaker that will be held accountable. Which, for Greenblatt, is both fair and economically efficient: manufacturers would just need to pay insurance for each vehicle (which could possibly be lower than for human-driven cars).

So, should we investigate what’s going on in a computer’s mind, or just work out how to attribute liability to the designer or manufacturer? Whichever route we decide to pursue, the field is open.

Robots can challenge the way we apply the law; and they could even teach us something new. Fan Hui, another Go champion, saw his own play improve dramatically after being defeated by AlphaGo and studying the machine’s moves: “the experience has, quite literally, changed the way he views the game”. Could that happen in law too?

 

Further readings:

Eran Kahana, Abstract Conceptualization and IP Infringement by AI: Part II

Nathan A. Greenblatt, Self-Driving Cars Will Be Ready Before Our Laws Are

Jeffrey Wale, David Yuratich, Robot law: what happens if intelligent machines commit crimes?

Ryan Calo, When a Robot Kills, Is It Murder or Product Liability?

Carissa Véliz, The Challenge of Determining Whether an A.I. Is Sentient

Eric Schwitzgebel, We have greater moral obligations to robots than to humans

 
