Rules of Robotics


Isaac Asimov came up with some interesting rules for robots. I paraphrase them here to add a bit of focus and clarity.

Isaac’s rules of robotics (paraphrased):

  1. No robot may harm humanity or allow humanity to come to harm.
  2. No robot may harm an individual human.
  3. No robot may disobey a command, unless the action would harm a human.
  4. No robot will harm itself.

Hopefully it’s noticeable that the focus of his rules was “harm.” Asimov created these rules for literary purposes, not because he was actually programming robots. The idea of robots battling humans is eerie to some people, frightening to others, inevitable to still others, and to one last group it gives a sense of adventure. The focus on harm makes for a good story. It’s fun to imagine the thrill of a world in which robots resist their creators.

The major problem with Asimov’s rules is that the word “harm” would have to be defined. The idea gives me a chuckle. Judging from twenty years of spell check and auto-correct, with some of the simplest words missing from digital dictionaries, I don’t think we can trust computer programmers to come up with the definition of harm. It’s not the robots we need to worry about, it’s the programmers’ vocabularies.

In other words, the administrator function is exactly the kind of function that could end up in the wrong hands.

The only way out of a bad cycle (or into a bad cycle) is the robot’s ability to learn. An artificial intelligence would be able to adjust its definitions. “Harm” could evolve to mean even emotional harm, though emotional harm would be more difficult for an artificial intelligence to recognize. “Harm” could also evolve to mean lack of preservation. Once your house robot learned your “diet” cola did you more harm than good, it wouldn’t allow you to drink the nasty stuff. Maybe it would pour the drink in the bushes. Maybe it would stomp the cans wherever they were. Once it learned your chocolate bar wasn’t healthy, it wouldn’t let you eat it. Maybe it would hide your candy bar. We can all see where this line might lead.

A robot could undo a lot of enjoyable things.

Rock and roll? Nope. It could hurt your ears. Television? Of course not. Your eyes! Nitro-burning funny cars? Not a chance. You need to care for your mouth, throat, lungs, eyes, and ears, and of course your life. A campfire? No. See funny cars above for the reasons. A helper robot in the house? If there’s even the possibility the robot could become harmful, it would have to be removed or remove itself. Then it would contradict Asimov’s rule number four above.

Since definitions seem so necessary for working robots, I came up with my own rules (with a little code sketch after the list).

Kurt’s rules of robotics:

  1. All definitions will be created by the administrator, but will be periodically reviewed by, and are subject to change by, the end user.
  2. The robot will not create contradictory definitions.
  3. If the end user creates contradictory definitions, the robot will reduce its capabilities to an inert state known as “toaster mode”.
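
If you wanted to tinker with these rules in software, a minimal sketch might look like the one below. To be clear, this is my own toy illustration, not a real robotics API: the HelperRobot class, its method names, and the simple-minded idea that a “contradiction” means two conflicting definitions for the same term are all assumptions made for the example.

    # Toy sketch of Kurt's rules of robotics. All names and behavior here
    # are hypothetical; a "contradiction" is modeled as two conflicting
    # definitions for the same term.

    class HelperRobot:
        def __init__(self, admin_definitions):
            # Rule 1: the administrator supplies the initial definitions.
            self.definitions = dict(admin_definitions)
            self.toaster_mode = False

        def change_definition(self, term, new_definition):
            # Rule 1: the end user may review and change a definition outright.
            if not self.toaster_mode:
                self.definitions[term] = new_definition

        def add_definition(self, term, definition):
            # Rule 3: an end-user entry that conflicts with an existing one
            # drops the robot into an inert "toaster mode".
            if self.toaster_mode:
                return
            if term in self.definitions and self.definitions[term] != definition:
                self.toaster_mode = True
            else:
                # Rule 2: the robot itself never records conflicting entries,
                # so anything stored here stays consistent.
                self.definitions[term] = definition

        def act(self, command):
            # In toaster mode the robot does nothing but warm bread.
            if self.toaster_mode:
                return "toaster mode: no action taken"
            return "executing: " + command

    # Example: a clean change is fine, a contradiction goes inert.
    robot = HelperRobot({"harm": "physical injury to a human"})
    robot.change_definition("harm", "physical or emotional injury")  # allowed change
    robot.add_definition("harm", "any discomfort whatsoever")        # contradiction
    print(robot.act("fetch the diet cola"))  # toaster mode: no action taken

The split between change_definition and add_definition is just one way to reconcile rule 1’s “subject to change” with rule 3’s “contradictory definitions”; a real robot would need a far richer notion of what counts as a contradiction.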

Published by Kurt Gailey

This is where I'm supposed to brag about how I've written seven novels, twelve screenplays, thousands of short stories, four self-help books, and one children's early-reader, but I'd rather stay humble. You can find out about things I've written or follow my barchive (web archive, aka 'blog) at xenosthesia.com or follow me on twitter @kurt_gailey. I love sports and music and books, so if you're an athlete or in a band or you're a writer, give me a follow and I'll most likely follow you back. I've even been known to promote other people's projects.
