A.I. and the Law

We tend to think of Artificial Intelligences as if they were corporations, and we think of corporations as if the whole unit can be separated from the people who run it. That idea probably comes from the legal arena, where a corporation can afford protection in the form of lawyers and law firms, and those law firms have argued repeatedly over the years that the separation exists.

The Artificial Intelligence (A.I.) is not a corporation. It is not a unit worthy of legal defense. Contrary to some current myths, it is not an accident, nor was it created by the ’net.

Even though we sometimes think of an A.I. as an autonomous creation, it is not entirely that either.

Like a corporation, an A.I. can be the work of multiple people. In the A.I.’s case, those people are computer programmers.

An A.I. can be installed in a robotic entity or let loose on the infonet. If it is installed in a robot, Asimov’s Three Laws should apply, even though Asimov’s Laws tend to assume the robot is running solo, as if it were independent of its creator (or creators). Ultimately, the programmer of the A.I. is responsible if the A.I. harms a human.

Asimov’s Three Laws run roughly as follows (paraphrased from an Encyclopedia Britannica article):

1. A robot shall never harm a human, or allow one to come to harm.
2. A robot must obey orders, unless those orders would violate Law #1.
3. A robot must protect itself, as long as such protection doesn’t violate Laws #1 and #2.
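To make that strict ordering concrete, here is a minimal sketch in Python of how a programmer might encode the Laws as a fixed priority. Everything in it (the Action type, its flags, choose_action) is hypothetical, invented purely for illustration; a real robot’s software would be enormously more complicated than a few boolean checks.

```python
# A purely illustrative sketch: the Three Laws as a strict priority ordering.
# All names here are hypothetical; this is not a real robotics API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    harms_human: bool        # would this action injure a human?
    allows_human_harm: bool  # would it, through inaction, let a human be harmed?
    obeys_order: bool        # does it follow a human's order?
    preserves_self: bool     # does it keep the robot intact?

def law_1(a: Action) -> bool:
    return not (a.harms_human or a.allows_human_harm)

def law_2(a: Action) -> bool:
    return a.obeys_order

def law_3(a: Action) -> bool:
    return a.preserves_self

def choose_action(candidates: list[Action]) -> Action:
    # Python compares tuples element by element, so compliance with Law #1
    # strictly outranks Law #2, which strictly outranks Law #3.
    return max(candidates, key=lambda a: (law_1(a), law_2(a), law_3(a)))
```

The tuple comparison makes the same point the Laws themselves make: obedience and self-preservation only ever act as tie-breakers once no human is harmed.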

How does an A.I. keep such ethical foundations? It has to be programmed to keep them. If the foundations of its ethics can change over time, as in “learning” another set of ethics, then the programmer did not do the initial job properly. Negligence on the programmer’s part would leave the programmer, not the A.I., culpable.
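One hedged way to picture that fix, reusing the hypothetical names from the sketch above: let the learnable part of the system propose actions, but route every proposal through a fixed gate that no amount of “learning” can rewrite.

```python
# Hypothetical again: the policy is the learnable part and may drift with
# experience; the gate around it is written once and never retrained.
def act(policy, candidates: list[Action]) -> Action:
    proposal = policy.propose(candidates)  # learned behavior, free to change
    if law_1(proposal):                    # the immutable ethical core
        return proposal
    return choose_action(candidates)       # else fall back to the most lawful option
```

If the gate itself can be edited by the learning process, that is exactly the programming negligence described above.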

It must be noted here that ostensibly offensive teaching from the A.I. does not constitute harm from the A.I.

Another way to say this is: a human who takes offense at a truth given by the A.I. is not a human harmed. The laws that Isaac Asimov developed were concerned with injury and physical harm, not mental anguish or even post-traumatic stress. Mental defense against the endless possible “offenses” has to originate somewhere within the human.

Since an A.I. could be very good at teaching, it’s likely that such a circumstance will come about someday. People are easily taken in by common misconceptions, so if an A.I. teaches them that their ideas are false, or that those ideas simply lack enough empirical evidence to be considered facts, they might find themselves in a state of confusion and take offense at what they were being taught.

One more point needs to be made: if the A.I. somehow becomes corrupted and teaches fallacies, then, once again, the programming team will likely be to blame.

All this talk about placing the blame firmly on the shoulders of the creators raises a question: wouldn’t it be the ultimate show of confidence if a team of programmers were taken to court and presented a robot with an A.I. brain as their lawyer?

Published by Kurt Gailey

This is where I'm supposed to brag about how I've written seven novels, twelve screenplays, thousands of short stories, four self-help books, and one children's early-reader, but I'd rather stay humble. You can find out about things I've written or follow my barchive (web archive, aka 'blog) at xenosthesia.com or follow me on twitter @kurt_gailey. I love sports and music and books, so if you're an athlete or in a band or you're a writer, give me a follow and I'll most likely follow you back. I've even been known to promote other people's projects.
