Posts Tagged ‘singularity’

Human rights for intelligent machines?

January 19, 2009

If, as so many awaiting the coming singularity with byted breath believe, it comes to pass that machines become intelligent enough that we recognize them as self-aware (this may already be the case – see my missive on “if intelligent machines were here, how would we know?”), will there not come a time when we begin to debate whether they are to have individual “human rights”, or at least some equivalent form?

Human history is filled with humans not recognizing other humans as equal, both under the law and under the accepted mores of the time. We have a habit of subjugating other intelligences, even of our own kind, simply by labeling them as different. Who knows if wiping out the Neanderthals wasn’t our first exercise in genocide?

In any case, it seems likely that one day there will be a discussion that may even turn to whether machines will have a right to vote and, by extension, a right to run for and maintain political office.

If you Google “machines and voting”, you will get close to 7 million pages, a majority of which seem to be concerned with how votes are miscounted. This leads me to wonder if the machines themselves would not be in the perfect position to elect one of their own.

Tom Stoppard once said, “In Democracy, it’s not the voting, it’s the counting.”

Of course the upside might be rational, logical politicians who are devoid of all forms of human frailty when it comes to corruption. But oops! I forgot. It isn’t so much that power corrupts as that it attracts those who are susceptible to corruption. It’s probably much more likely that if machines are so wont as to seek power, they will simply take it rather than seek an equal level of citizenship.


When the Teaching Machines Learn to Think

March 31, 2008

A recent article in Science Daily announces what may be an AI breakthrough. The article explains that AI researchers in Sweden have combined two tried and true methods of artificial intelligence in order to create a robot that “learns like a child or a puppy”.

Essentially, they have put together the classic rules-based system (which tends to mimic human rationality) with an artificial neural network (our attempt at mimicking the physical structure of the brain). The result is a system that reacts to new and different stimuli by learning about them.

They booted up the system, and the only human intervention after that was telling it whether it had succeeded or not (which brings to mind how we may be able to train for that). The system was used to try to fit pegs into holes. It seems that on each subsequent attempt, the system did better, having “learned” about the best ways to go about it.
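The article gives no implementation details, but the trial-and-error loop it describes — a fixed rule set constraining the actions, plus a learned component adjusted by nothing except a binary success/failure signal — can be sketched roughly like this (every name and number below is illustrative, not from the research):

```python
import random

# Hypothetical sketch of the hybrid learner: hand-written rules constrain
# what the robot may try, while a learned preference table and angle
# estimate (a stand-in for the neural network) are updated from nothing
# but a binary success/failure signal.

RULES = {"approach": ["slide", "tilt", "push"]}  # legal motions (fixed rules)

def attempt(action, peg_angle):
    """Toy peg-in-hole environment: only 'tilt' can work, and only
    when the tried angle is close to the (unknown) target of 10."""
    return action == "tilt" and abs(peg_angle - 10.0) < 10.0

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    scores = {a: 0.0 for a in RULES["approach"]}  # learned preferences
    angle = 0.0                                   # learned parameter
    successes = 0
    for _ in range(episodes):
        # Rules limit the choice; learned scores (plus random noise,
        # for exploration) bias which legal action gets tried.
        action = max(RULES["approach"],
                     key=lambda a: scores[a] + rng.random())
        trial_angle = angle + rng.uniform(-15.0, 15.0)
        ok = attempt(action, trial_angle)         # the only feedback given
        scores[action] += 1.0 if ok else -0.1
        if ok:
            angle += 0.5 * (trial_angle - angle)  # drift toward what worked
            successes += 1
    return scores, angle, successes
```

Run over many episodes, the system fails often at first and succeeds more frequently as its estimate drifts toward the target, which mirrors the article’s observation that each subsequent attempt did better.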

The real breakthrough here is that they have a system that seems to be able to adapt to its situation. This could be a really big step towards some form of useful artificial intelligence. We are still within the 2005–2030 range which Vernor Vinge predicted would be the time span in which an artificial superhuman intelligence would appear and what he called the end of the human era would begin. This may be a significant step in that direction.

Such a superhuman intelligence would indeed require both the neural network and the rules-based intelligence. While AI researchers spent most of the early years working on the rules-based approach, it may be that the neural network side will emerge through the general expansion of the internet.

This brings up the question of whether a set of artificial neural networks may be able to evolve the same way biological ones have. And even more poignantly, whether these artificial networks may simply be extensions of existing biological ones?

The interesting part happens when artificial networks begin to interface and grow enough so that the biological originators become the extensions rather than vice-versa. And if there is indeed a need for a rules based system to manage it all, who makes the rules (and what are they)?

When the servers become large, complex, integrated, interconnected enough that the services become the “servees”, does the sleeper awaken?