Making Moral Robots May Be Just as Hard as Making Moral Humans

I found the recent tome “Moral Machines” by Wallach and Allen a spanking good read, but I fear they miss a vital point.
These guys are ethicists (which strikes me as not an honest way to earn a wage), and I applaud them for broaching the subject, as I am certain it will be a field very hotly debated in the near future. It is probably much more salient to our daily lives than the status of human clones (where we will ask ourselves: Is it moral to throw a naked clone off of a roof – i.e. make an obscene clone fall?), as thinking machines are probably a fair bit closer to our capabilities at this time than are twins separated by time.
The book explores six different methods for ensuring that intelligent machines behave far better than the average four-year-old:

  1. Keep them stupid. This will keep them out of trouble, as they will not likely encounter situations that involve behavioral dilemmas. This one is probably already out of Pandora’s box, and as soon as machines get smart enough to build other machines, you can forget about it.
  2. Do not arm machines. Well, too bloody late. We’ve had drones with missiles for way too long, and half of our armaments are software based.
  3. Build them to follow Asimov’s Three Laws of Robotics, which are, in order of importance: don’t harm humans (or allow them to come to harm), obey humans, and preserve yourself. The problem with this is that existence is full of less-than-clear-cut situations for simple rules like these. For example, a robot walks into a kindergarten class to find a killer with an assault weapon executing children one by one. If the robot kills the madman, it violates the first law (presuming that is the only way to stop him quickly enough). Robocop would theoretically figure that out, but it shows (in fairly black-and-white terms) why simple rules-based systems would be easily challenged. Remember: for every monkey-proof rule that can be designed, reality goes out and builds a better monkey.
  4. Program them with principles. Hard-wire the Golden Rule: do unto humans as you would have them do unto you. The challenge with that one is that we get into the complete ethical debate here, and in the entire course of human history we have not been able to come up with a set way to function that everyone finds both adaptable and useful in every situation. Of course, this one could mean that you get robots of various belief systems (hmmm, reincarnated Buddhist robots from the scrap heap who deny reality and assume this is all a simulation).
  5. Teach them the way we teach children. Two problems with that one: first, it may take far too long for them to absorb that data (and we are pretty slow teachers); second, Hitler, Jack the Ripper, and Mother Teresa were all children once. The process has mixed results.
  6. Give them emotions and make them master them. Snakes alive! We haven’t done that with humans yet, and the odds of getting it right in machines while racing to build useful thinking machines are pretty slim.
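The dilemma in option 3 can be made concrete with a toy sketch. This is my own hypothetical illustration, not anything from the book: the laws are modeled as a strict priority filter, and in the kindergarten scenario every available action violates the top-priority law, so the rule system simply permits nothing.

```python
# Hypothetical sketch of Asimov's First Law as a hard filter.
# In the kindergarten scenario, every option either injures a human
# directly or, through inaction, allows humans to come to harm.

def violates_first_law(action):
    """Rule 1: do not injure a human, nor through inaction
    allow a human to come to harm."""
    harms = {
        "shoot_attacker": True,  # directly injures a human (the attacker)
        "do_nothing": True,      # through inaction, lets the children be harmed
    }
    return harms[action]

def permitted_actions(actions):
    # Keep only actions that satisfy the highest-priority law;
    # the lower laws (obedience, self-preservation) never even apply here.
    return [a for a in actions if not violates_first_law(a)]

print(permitted_actions(["shoot_attacker", "do_nothing"]))  # -> []
```

The empty result is the point: a strict rule hierarchy doesn’t resolve the dilemma, it deadlocks on it, and any tie-breaking logic you bolt on is exactly the messy ethical judgment the rules were supposed to replace.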

Finally, I think we need to recognize that we will have machines that do not always behave the way we want them to. This opens up a whole new can of worms as to how we deal with a thinking machine that has gone awry. After all, we already have huge problems with humans who do as we do rather than doing as we say.

