Isaac Asimov came up with some interesting rules for robots. I paraphrase them here to add a bit of focus and clarity.
Isaac’s rules of robotics (paraphrased):
- No robot may harm humanity or allow humanity to come to harm.
- No robot may harm an individual human.
- No robot may disobey a command, unless obeying it would harm a human.
- No robot may harm itself.
Notice that the focus of his rules is "harm." Asimov created these rules for literary purposes, not because he was actually programming robots. The idea of robots battling humans is eerie to some people, frightening to others, inevitable to still others, and, to one last group, a source of adventure. The focus on harm makes for a good story. It's fun to imagine the thrill of a world in which robots resist their creators.
The major problem with Asimov's rules is that the word "harm" would have to be defined. The idea gives me a chuckle. Judging from twenty years of spell check and auto-correct, with some of the simplest words missing from digital dictionaries, I don't think we can trust computer programmers to come up with the definition of harm. It's not the robots we need to worry about; it's the programmers' vocabularies.
In other words, the administrator function (the power to write the definitions) is one function that could end up in the wrong hands.
The only way out of a bad cycle (or into one) is the robot's ability to learn. An artificial intelligence would be able to adjust its definitions. "Harm" could evolve to include emotional harm, though emotional harm would be harder for an artificial intelligence to recognize. "Harm" could also evolve to mean a lack of preservation. Once your house robot learned your "diet" cola did you more harm than good, it wouldn't allow you to drink the nasty stuff. Maybe it would pour the drink in the bushes. Maybe it would stomp the cans wherever they were. Once it learned your chocolate bar wasn't healthy, it wouldn't let you eat it. Maybe it would hide your candy bar. We can all see where this line of reasoning might lead.
A robot could take away a lot of enjoyable things.
Rock and roll? Nope. It could hurt your ears. Television? Of course not. Your eyes! Nitro-burning funny cars? Not a chance. You need to care for your mouth, throat, lungs, eyes, and ears, and of course your life. A campfire? No. See funny cars above for the reasons. A helper robot in the house? If there's even a possibility the robot could become harmful, it would have to be removed or remove itself, which would contradict Asimov's rule number four above.
Since definitions seem so necessary for working robots, I came up with my own rules; a rough sketch of how they might fit together in code follows the list.
Kurt’s rules of robotics:
- All definitions will be created by the administrator, but will be periodically reviewed, and may be changed, by the end user.
- The robot will not create contradictory definitions.
- If the end user creates contradictory definitions, the robot will reduce its capabilities to an inert state known as “toaster mode”.
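To make the rules a little more concrete, here is a minimal sketch in Python of one way they could interact. Everything in it is invented for illustration: the Robot class, the "harmful"/"harmless" verdicts, and the review method are assumptions of this sketch, not any real robotics API.

```python
# A toy model where each "definition" maps a term (e.g. "drink diet cola")
# to a verdict such as "harmful" or "harmless". All names are hypothetical.

class Robot:
    def __init__(self, admin_definitions):
        # Rule 1: definitions are created by the administrator.
        self._definitions = dict(admin_definitions)
        self.toaster_mode = False

    def review(self, end_user_definitions):
        """Rule 1, second half: the end user periodically reviews and may
        change definitions. Rule 3: a self-contradictory batch goes inert."""
        seen = {}
        for term, verdict in end_user_definitions:
            if term in seen and seen[term] != verdict:
                # Rule 3: contradictory end-user definitions -> "toaster mode".
                self.toaster_mode = True
                return
            seen[term] = verdict
        # Rule 2: only the consistent, de-duplicated result is stored,
        # so the robot itself never holds contradictory definitions.
        self._definitions.update(seen)

    def judge(self, term):
        if self.toaster_mode:
            return "inert (toaster mode): no judgments, only toast"
        return self._definitions.get(term, "undefined")


# The administrator calls diet cola harmful; the end user overrules that,
# then later slips in a self-contradictory pair about campfires.
robot = Robot({"drink diet cola": "harmful"})
robot.review([("drink diet cola", "harmless")])  # a legitimate change
print(robot.judge("drink diet cola"))            # harmless
robot.review([("campfire", "harmful"), ("campfire", "harmless")])
print(robot.judge("campfire"))                   # inert (toaster mode): ...
```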