Saturday, 20 July 2019

Why the 3 Laws of Robotics Wouldn't Work — and What Would Instead

Have you ever heard of the Three Laws of Robotics? Isaac Asimov's ground rules for healthy human-robot relations are meant to make sure that we are never hurt or betrayed by our robotic creations. Here they are, in case you need a primer:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Seems like that covers just about everything, right? Well, actually, no... and Asimov knew it (he wrote an entire short story collection about exploiting the loopholes). What's more, modern roboticists tend to think the rules aren't just flawed; they're fundamentally misguided.
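To make the laws' structure concrete before picking them apart, here's a minimal sketch in Python that treats them as a strict priority ordering over candidate actions. Every name in it (the Action fields, the scoring function) is a hypothetical illustration, not part of any real robotics system; as the next section argues, the hard part is that no real robot gets clean true/false labels like "injures_human" handed to it.

```python
# A toy sketch of the Three Laws as a strict priority ordering over actions.
# The boolean labels below are hypothetical: deciding them reliably in the
# real world is exactly the part a rule-based scheme leaves unsolved.
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    name: str
    injures_human: bool = False       # First Law, violated by action
    allows_human_harm: bool = False   # First Law, violated by inaction
    obeys_order: bool = True          # Second Law
    preserves_self: bool = True       # Third Law

def choose(actions: List[Action]) -> Action:
    """Pick the action that best satisfies the laws in priority order."""
    def score(a: Action):
        # Tuples compare left to right, so the First Law always outranks
        # the Second, which always outranks the Third.
        return (
            not (a.injures_human or a.allows_human_harm),
            a.obeys_order,
            a.preserves_self,
        )
    return max(actions, key=score)

options = [
    Action("shove_bystander", injures_human=True),
    Action("ignore_order", obeys_order=False),
    Action("follow_order_into_fire", preserves_self=False),
]
print(choose(options).name)  # -> follow_order_into_fire
```

Even in this toy form, all the work is hidden inside the labels: the robot only behaves well if something else has already decided, correctly and in advance, what counts as harm.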

Making a Moral Machine

We're a lot closer to making a true artificial intelligence than we were when Asimov wrote I, Robot, and once we get there, an intelligence exponentially greater than ours may not be far behind. And whether that intelligence is conscious or not, we'll want to be absolutely positive that it won't turn its big binary brain on us.
To that end, Asimov's laws are woefully inadequate, say A.I. experts Ben Goertzel and Louie Helm. It's not just that the laws aren't as comprehensive as Asimov thought; it's that they're built on an inherently faulty moral foundation. In Helm's view, a rule-based system like Asimov's can't possibly work, because it amounts to trying to restrain a being whose power is, for all intents and purposes, limitless. It would only be a matter of time before such an A.I. found a workaround for whatever rules we set in place.
Additionally, Asimov's rules create an inherent hierarchy wherein humans are granted more rights than robots. To Helm, merely creating an intelligence powerful enough to raise the question of what sort of rights it should have is an unforgivable ethical oversight. Instead, he hopes that "most AI developers are ethical people, so they will avoid creating what philosophers would refer to as 'beings of moral significance.' Especially when they could just as easily create advanced thinking machines that don't have that inherent ethical liability." In other words, a supercomputer doesn't need to have hopes and dreams in order to do its super-computing.

Empowering Ethics

So rules work okay for human beings, but they'd be pretty much unenforceable in advanced artificial intelligences. How do we make sure our robots don't turn against us if we can't just program "don't kill humans" into them? Some experts think the answer is to give A.I.s their own moral compass — a nebulous sense of right and wrong that lets robots judge for themselves. Researchers at the University of Hertfordshire call this approach the "Empowerment" style of ethical programming.
Instead of having some actions prescribed and others forbidden, these A.I.s are made to value empowerment: the ability to make choices. The decisions they favor are the ones that leave them with more choices later, and they value that same empowerment in others. Basically, they won't kill you, because if they killed you, your options would be severely limited. That's... sort of reassuring. But if empowerment really does give robots an intuitive sense of the value of human life, it may be the blueprint for peaceful robo-human relationships in the future.
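In the research literature, empowerment is usually formalized as an information-theoretic measure of how much an agent's actions can influence its future states. A crude but intuitive stand-in is simply counting how many distinct positions an agent could still reach in the next few moves. The grid-world sketch below uses that stand-in to pick a robot move that keeps both its own options and a nearby human's options open; it's an invented toy built on those assumptions, not the Hertfordshire team's actual model, and every name in it is illustrative.

```python
# Toy "empowerment" proxy: count the distinct cells an agent could reach within
# a few moves on a small grid. The robot prefers moves that keep both its own
# and the human's reachable-cell count high. Invented illustration only.
GRID = 5  # 5x5 world
MOVES = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]  # stay put, or step one cell

def reachable(pos, steps=2, blocked=frozenset()):
    """All cells reachable from pos within `steps` moves, avoiding blocked cells."""
    frontier = {pos}
    for _ in range(steps):
        nxt = set()
        for x, y in frontier:
            for dx, dy in MOVES:
                nx, ny = x + dx, y + dy
                if 0 <= nx < GRID and 0 <= ny < GRID and (nx, ny) not in blocked:
                    nxt.add((nx, ny))
        frontier = nxt
    return frontier

def empowerment(pos, blocked=frozenset()):
    return len(reachable(pos, blocked=blocked))

def best_robot_move(robot, human):
    """Pick the robot move that preserves the human's options as well as its own."""
    candidates = []
    for dx, dy in MOVES:
        new = (robot[0] + dx, robot[1] + dy)
        if not (0 <= new[0] < GRID and 0 <= new[1] < GRID):
            continue
        # The robot's body occupies a cell, so crowding (or standing on) the
        # human shrinks the human's future options.
        score = empowerment(human, blocked=frozenset({new})) + empowerment(new)
        candidates.append((score, new))
    return max(candidates)[1]

# With the human boxed into a corner, the highest-scoring move is to back off.
print(best_robot_move(robot=(1, 0), human=(0, 0)))  # -> (1, 1)
```

The point isn't the arithmetic: "don't hurt the human" never appears in the code, yet the move that boxes the human in scores worst, simply because it collapses their future choices.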

Written by Reuben Westmaas

