Posts Tagged ‘artificial intelligence’

Human rights for intelligent machines?

January 19, 2009

If, as so many awaiting the coming singularity with byted breath expect, it comes to pass that machines become intelligent enough that we recognize them as self-aware (this may already be the case – see my missive on “if intelligent machines were here, how would we know?”), will there not come a time when we begin to debate whether they are to have individual “human rights,” or at least some equivalent form?

Human history is filled with humans not recognizing other humans as equal, both under the law and under the accepted mores of the time. We have a habit of subjugating other intelligences, even of our own kind, simply by labeling them as different. Who knows if wiping out the Neanderthals wasn’t our first exercise in genocide?

In any case, it seems likely that one day there will be a discussion that may even turn to whether machines should have a right to vote and, by extension, a right to run for and hold political office.

If you Google “machines and voting”, you will get close to 7 million pages, a majority of which seem to be concerned with how votes are miscounted. This leads me to wonder whether the machines themselves might not be in the perfect position to elect one of their own.

Tom Stoppard once said, “In Democracy, it’s not the voting, it’s the counting.”

Of course, the upside might be rational, logical politicians devoid of all forms of human frailty when it comes to corruption. But oops! I forgot. It isn’t so much that power corrupts as that it attracts those who are susceptible to corruption. It’s probably much more likely that if machines are so inclined as to seek power, they will simply take it rather than seek an equal level of citizenship.

Making Moral Robots May Be Just as Hard as Making Moral Humans

November 19, 2008

I found the recent tome “Moral Machines” from authors Wallach and Allen a spanking good read, but I fear they miss a vital point.
These guys are ethicists (which strikes me as not an honest way to earn a wage), and I applaud them for broaching the subject, as I am certain it will be a field very hotly debated in the near future – probably much more salient to our daily lives than the status of human clones (where we will ask ourselves: is it moral to throw a naked clone off of a roof – i.e., make an obscene clone fall?), as thinking machines are probably a fair bit closer to our capabilities at this time than are twins separated by time.
The book explores six different methods for ensuring that intelligent machines behave far better than the average four-year-old:

  1. Keep them stupid. This will keep them out of trouble, as they will not likely encounter situations that involve behavioral dilemmas. This one is probably already out of Pandora’s box, and as soon as machines get smart enough to build other machines you can forget about it.
  2. Do not arm machines. Well, too bloody late. We’ve had drones with missiles for way too long, and half of our armaments are software based.
  3. Build them to follow Asimov’s Three Laws of Robotics, which are, in order of importance: don’t harm humans (or allow them to come to harm), obey humans, and preserve yourself. The problem with this is that existence is full of less-than-clear-cut situations for simple rules like these. For example, a robot walks into a kindergarten class to find a killer with an assault weapon executing children one by one. If the robot kills the madman, it violates the first law (presuming that is the only way to stop him quickly enough); if it stands by, it allows humans to come to harm and violates the first law anyway. Robocop would theoretically figure that out, but it shows (in fairly black and white terms) why simple rules-based systems would be easily challenged – a minimal sketch after this list makes the deadlock concrete. Remember: for every monkey-proof rule that can be designed, reality goes out and builds a better monkey.
  4. Program them with principles. Hard-wire the Golden Rule: do unto humans as you would have them do unto you. The challenge with that one is that we get into the complete ethical debate here, and in the entire course of human history we have not been able to come up with a set way to function that everyone finds both adaptable and useful in every situation. Of course, this one could mean that you get robots of various belief systems (hmmm, reincarnated Buddhist robots from the scrap heap who deny reality and assume this is all a simulation).
  5. Teach them the way we teach children. Two problems with that one. First, it may take way too long for them to absorb the data (and we are pretty slow teachers). Second, Hitler, Jack the Ripper and Mother Teresa were all children. The process has mixed results.
  6. Give them emotions and make them master them. Snakes alive! We haven’t done that with humans yet, and the odds of getting it right in machines while racing to build useful thinking machines make that pretty unlikely.
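
To make the trouble with option 3 concrete, here is a minimal, purely hypothetical Python sketch of the Three Laws as a strict priority filter. The Action fields, the scenario, and the two candidate actions are all invented for illustration; the point is only that when every available action violates the first law, a strict rule hierarchy deadlocks rather than decides.

    # Hypothetical sketch only: Asimov's Three Laws as a strict priority filter.
    # The Action fields and the scenario below are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_a_human: bool      # First Law: may not injure a human being...
        allows_human_harm: bool  # ...or, through inaction, allow one to come to harm
        disobeys_order: bool     # Second Law
        endangers_self: bool     # Third Law

    def permitted(action: Action) -> bool:
        """Pass only actions that violate none of the laws, checked in priority order."""
        if action.harms_a_human or action.allows_human_harm:
            return False  # the First Law dominates everything else
        if action.disobeys_order:
            return False
        if action.endangers_self:
            return False
        return True

    # The kindergarten dilemma: every option breaks the First Law somehow.
    options = [
        Action("stop the killer by force", harms_a_human=True,
               allows_human_harm=False, disobeys_order=False, endangers_self=True),
        Action("stand by", harms_a_human=False,
               allows_human_harm=True, disobeys_order=False, endangers_self=False),
    ]
    for act in options:
        print(act.name, "->", "permitted" if permitted(act) else "forbidden")
    # Both come out "forbidden": the rule set deadlocks, which is the point above.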

Finally, I think we need to recognize that we will have machines that do not always behave the way we want them to. This opens up a whole new can of worms as to how we deal with a thinking machine that has gone awry. After all, we already have huge problems with humans who do as we do rather than doing as we say.

When the Teaching Machines Learn to Think

March 31, 2008

A recent article in Science Daily announces what may be an AI breakthrough. The article explains that AI researchers in Sweden have combined two tried and true methods of artificial intelligence in order to create a robot that “learns like a child or a puppy”.

Essentially, they have put together the classic rules-based system (which tends to mimic human rationality) with an artificial neural network (our attempt at mimicking the physical structure of the brain). The result is a system that reacts to new and different stimuli by learning about them.

They boot up the system, and the only human intervention after that is telling the system whether it has succeeded or not (which brings to mind how we might be able to train for that). The system was used to try to fit pegs into holes. It seems that on each subsequent attempt, the system did better, having “learned” the best ways to go about it.
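
The article gives no implementation details, so what follows is only a toy Python sketch, under my own assumptions, of the general shape of such a system: a fixed rule layer constraining what the machine may attempt, wrapped around a trainable component that improves from nothing but a yes/no signal. The peg-and-hole numbers and the update rule are invented for illustration.

    # Toy sketch of a rules-plus-learning loop trained only on success/failure.
    # The task, numbers, and update rule are invented; the article describes
    # the idea, not this implementation.
    import random

    HOLE_POSITION = 7.3  # where the peg fits; unknown to the learner
    TOLERANCE = 0.5      # how close counts as a "success"

    def rule_layer(proposed: float) -> float:
        """Fixed, hand-written constraint: never move the arm outside the workspace."""
        return max(0.0, min(10.0, proposed))

    class TrialAndErrorLearner:
        """Stands in for the trainable half: improves from binary feedback alone."""
        def __init__(self) -> None:
            self.best = None  # best known peg position so far
            self.step = 1.0   # how widely to search around it

        def propose(self) -> float:
            if self.best is None:
                return random.uniform(0.0, 10.0)  # explore blindly at first
            return self.best + random.uniform(-self.step, self.step)  # then refine

        def feedback(self, attempt: float, succeeded: bool) -> None:
            if succeeded:
                self.best = attempt  # remember what worked
                self.step *= 0.7     # and search more finely around it

    learner = TrialAndErrorLearner()
    for trial in range(1, 101):
        attempt = rule_layer(learner.propose())               # rules constrain the action
        succeeded = abs(attempt - HOLE_POSITION) < TOLERANCE  # the human's yes/no signal
        learner.feedback(attempt, succeeded)
        if succeeded:
            print(f"trial {trial}: peg placed at {attempt:.2f} – it fits")

Each success tightens the search, so later attempts cluster ever closer to the hole – the doing-better-on-each-attempt behavior the article describes.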

The real breakthrough here is that they have a system that seems to be able to adapt to its situation. This could be a really big step toward some form of useful artificial intelligence. We are still within the 2005–2030 range which Vernor Vinge predicted would be the time span in which an artificial superhuman intelligence would appear and what he called the end of the human era would begin. This may be a significant step in that direction.

Such a superhuman intelligence would indeed require both the neural network and the rules-based intelligence. While AI researchers spent most of the early years working on the rules-based approach, it may be that the neural network half will emerge through the general expansion of the internet itself.

This brings us to the question of whether a set of artificial neural networks may be able to evolve the same way biological ones have – and, even more poignantly, whether these artificial networks may simply be extensions of existing biological ones.

The interesting part happens when artificial networks begin to interface and grow enough that the biological originators become the extensions rather than vice versa. And if there is indeed a need for a rules-based system to manage it all, who makes the rules (and what are they)?

When the servers become large, complex, integrated, interconnected enough that the services become the “servees”, does the sleeper awaken?