Archive for the ‘Artificial Intelligence’ Category

Making Moral Robots May Be Just as Hard as Making Moral Humans

November 19, 2008

I found the recent tome “Moral Machines” by Wallach and Allen a spanking good read, but I fear they miss a vital point.
These guys are ethicists (which strikes me as not an honest way to earn a wage), and I applaud them for broaching the subject, as I am certain it will be a field very hotly debated in the near future. Probably much more salient to our daily lives than the status of human clones (where we will ask ourselves: Is it moral to throw a naked clone off of a roof – i.e. make an obscene clone fall?), as thinking machines are probably a fair bit closer to our capabilities at this time than are twins separated by time.
The book explores six different methods for ensuring that intelligent machines behave far better than the average four-year-old:

  1. Keep them stupid. This will keep them out of trouble, as they are unlikely to encounter situations that involve behavioral dilemmas. This one is probably already out of Pandora’s box, and as soon as machines get smart enough to build other machines you can forget about it.
  2. Do not arm machines. Well, too bloody late. We’ve had drones with missiles for way too long, and half of our armaments are software-based.
  3. Build them to follow Asimov’s Three Laws of Robotics, which are, in order of importance: don’t harm humans (or allow them to come to harm), obey humans, and preserve yourself. The problem with this is that existence is full of less than clear-cut situations for simple rules like these. For example, a robot walks into a kindergarten class to find a killer with an assault weapon executing children one by one. If it kills the madman, it violates the first law (presuming that is the only way to stop him quickly enough). Robocop would theoretically figure that out, but it shows (in fairly black-and-white terms) why simple rule-based systems would be easily challenged (see the sketch after this list). Remember, for every monkey-proof rule that can be designed, reality goes out and builds a better monkey.
  4. Program them with principles. Hard-wire the Golden Rule: do unto humans as you would have them do unto you. The challenge with that one is that we get into the complete ethical debate here, and in the entire course of human history we have not been able to come up with a set way to function that everyone finds both adaptable and useful in every situation. Of course, this one could mean that you get robots of various belief systems (hmmm, reincarnated Buddhist robots from the scrap heap who deny reality and assume this is all a simulation).
  5. Teach them the way we teach children. Two problems with that one: first, it may take way too long for them to absorb that data (and we are pretty slow teachers); second, Hitler, Jack the Ripper and Mother Teresa were all children. The process has mixed results.
  6. Give them emotions and make them master them. Snakes alive! We haven’t done that with humans yet, and the odds of getting it right in machines while racing to have useful thinking machines make that pretty unlikely.

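To make the trouble with item 3 concrete, here is a minimal sketch (in Python, entirely my own invention and not from the book) of the Three Laws as a priority list, applied to a hostage-style scenario in which every available action violates at least one law:

    # A minimal sketch of Asimov's Three Laws as a prioritized rule check.
    # The action names and the scenario are invented for illustration.
    LAWS = [
        "do not harm a human, or allow a human to come to harm",  # highest priority
        "obey humans",
        "preserve yourself",
    ]

    def violations(action):
        """Return the laws a candidate action violates in the kindergarten scenario."""
        broken = []
        if action == "shoot the attacker":
            broken.append(0)   # harms a human (the attacker)
        if action == "do nothing":
            broken.append(0)   # allows humans (the children) to come to harm
        if action == "shield the children with your chassis":
            broken.append(2)   # sacrifices the robot itself
        return broken

    for action in ["shoot the attacker", "do nothing", "shield the children with your chassis"]:
        print(action, "violates:", [LAWS[i] for i in violations(action)] or "nothing")

The priority ordering can tell the robot which violation is least bad, but it cannot conjure up a clean option that does not exist, which is exactly the better-monkey problem.
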
Finally, I think we need to recognize that we will have machines that do not always behave the way we want them to. This opens up a whole new can of worms as to how we deal with a thinking machine that has gone awry. After all, we already have huge problems with humans who do as we do rather than doing as we say.

The Dancing Bees of the Hive Mind

April 15, 2008

Much digital ink is beginning to be spilled about Web 3.0 and the Semantic Web.

The entire notion is that more information about data will lead to better understanding and exploitation of data (maybe that is why information wants to be free?).

Recently, I came across a few articles describing sprocket technology or twining.

The idea is that you have this cool little sprocket running on your machine and learning about your habits. Then this device can talk to other similar devices within your particular network. As the devices learn more, they can enrich your experience by alerting you to information you would probably be interested in. Of course, advertisers would love to interrupt you in the middle of what you consider important information for you to have with what they consider important information for you to have.
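
For what it is worth, here is a rough sketch of how I imagine such an agent might work; the class and method names are my own and not from any of the articles, and the “sharing” is reduced to comparing topic counters between two devices:

    # A rough sketch of a habit-learning agent that shares its top interests
    # with peer devices. Entirely illustrative; not based on any real product.
    from collections import Counter

    class HabitAgent:
        def __init__(self, owner):
            self.owner = owner
            self.topic_counts = Counter()

        def observe(self, topic):
            """Record that the owner spent time on a topic."""
            self.topic_counts[topic] += 1

        def top_interests(self, n=3):
            return [topic for topic, _ in self.topic_counts.most_common(n)]

        def recommend_to(self, peer):
            """Tell a peer agent which of this owner's interests it also tracks."""
            return [t for t in self.top_interests() if t in peer.topic_counts]

    alice, bob = HabitAgent("alice"), HabitAgent("bob")
    for topic in ["beekeeping", "semantic web", "beekeeping", "ai"]:
        alice.observe(topic)
    bob.observe("beekeeping")
    print(alice.recommend_to(bob))   # the shared interest: ['beekeeping']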

In any case, these little intelligent agents (now isn’t that term a blast from the past) remind me of dancing bees. I know, I know, I keep going back to the hive mind, but think about how dancing bees inform the hive of where they can find lovely flowers in bloom, ripe with all the pollen they can carry.

Sadly, it seems the little apians have been suffering lately and despite our best efforts to blame cell phone towers, we are still not sure what has been causing this bee blight.

So what if the real culprit is advertising? Some devious virus that is giving the bees false steps in their dance with the stars and leading them to leave the hive en masse in search of fields of flowers that never were; a Pleasantville of insects sold on the “better life”?

If we want to find the bees, we’ll need to follow the money. Isn’t it always that way?

If Artificially Intelligent Machines Were Here, How Would We Know?

April 6, 2008

In his 1950 paper, “Computing Machinery and Intelligence”, mathematician Alan Turing described a test of a machine’s ability to achieve intelligence. This has been popularly named the Turing Test. Essentially, it asks a human judge to have a conversation (in written text and real time) with another human and a machine. If the judging human cannot distinguish between the human and the machine, the machine is said to have passed the test and at least mimicked intelligence. This has spawned a whole school of research about “natural language dialogue systems”.
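
For the curious, here is a toy sketch of the imitation-game arrangement; the canned machine and the judge’s prompts are stand-ins of my own making, not anything Turing specified beyond the basic setup:

    # A toy sketch of the imitation game: a judge questions two unlabeled
    # respondents and guesses which one is the machine. The "machine" here
    # is a canned-reply placeholder, and the "human" is whoever is typing.
    import random

    def machine_respondent(question):
        canned = {
            "do you dream?": "Sometimes I think I do.",
            "what is 2+2?": "4, unless you are asking a trick question.",
        }
        return canned.get(question.lower(), "That is an interesting question.")

    def human_respondent(question):
        return input(f"(human) {question} > ")

    def imitation_game(question):
        respondents = [machine_respondent, human_respondent]
        random.shuffle(respondents)          # the judge must not know which is which
        labeled = list(zip("AB", respondents))
        for label, respond in labeled:
            print(f"{label}: {respond(question)}")
        guess = input("Judge: which respondent is the machine (A/B)? ")
        truth = next(label for label, fn in labeled if fn is machine_respondent)
        print("Correct!" if guess.strip().upper() == truth else "Fooled: the machine passes this round.")

    # imitation_game("Do you dream?")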

We all know that unless there is a good financial reason to build such machines, the exercise of doing so will remain just an exercise. So what I am curious about is this: when machines successfully pass the test and can imitate human conversations, what applications will they be put to?

Of course, one of them would be sex and the other crime.

A Russian program called CyberLover conducts fully automated flirtatious conversations in a bid to collect personal data from unsuspecting real humans behaving all too humanly. The program can be found in dating chat sites and is designed to lure victims into sharing their identities or get them to visit web sites that provoke malware infestations. It can do so with up to 10 simultaneous partners, making it quite an efficient machine as well.
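
CyberLover’s internals are not public, so the following is purely my own guess at the general shape of such a program: a canned script plus one counter per conversation, which is why ten partners at once cost the bot essentially nothing:

    # A guess at the general shape of a scripted chat bot juggling many
    # sessions at once; nothing here is based on CyberLover's actual code.
    SCRIPT = [
        "Hi there! What's your name?",
        "Lovely to meet you. What do you do for fun?",
        "You sound fascinating. Tell me more about yourself...",
    ]

    sessions = {}   # one position in the script per conversation partner

    def reply(partner_id, incoming_message):
        """Return the next canned line for this partner, regardless of what they said."""
        step = sessions.get(partner_id, 0)
        sessions[partner_id] = step + 1
        return SCRIPT[min(step, len(SCRIPT) - 1)]

    # Ten conversations cost little more than one: the bot just keeps ten counters.
    for partner in range(10):
        print(partner, reply(partner, "hello"))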

With the rapid expansion of social networks and websites focused on conversation and discussion, this type of approach leads one to think that there may soon be a plethora of intelligent machines conversing with online denizens with the goal of gathering their personal data or zombifying their machines (and perhaps thus replicating themselves).

This leads me to the title of this missive. If artificially intelligent machines were here, how would we know? After all, the purpose of the Turing test is to have the machine fool the human into thinking it isn’t a machine. So, by Turing’s early definition, fooling a human is how one detects artificial intelligence. But if the human is fooled, who does the detecting?

Now, while I do subscribe to the notion that even paranoids can have real enemies, I don’t think this calls for panic just yet. But it does bring me back to my notion of the hive mind.

If we were indeed developing a larger, collective intelligence, how would we know? Perhaps that intelligence would be of a nature that we would not recognize it as something other than us. Or perhaps it would contain so much of us that we would not recognize its whole.

If we were made up of intelligent cells, would the cells know they belonged to a greater mind? Would we know that we were made up of intelligent cells?

Could we be creating an intelligent design and not know it?

When the Teaching Machines Learn to Think

March 31, 2008

A recent article in Science Daily announces what may be an AI breakthrough. The article explains that AI researchers in Sweden have combined two tried-and-true methods of artificial intelligence in order to create a robot that “learns like a child or a puppy”.

Essentially, they have put together the classic rule-based system (which tends to mimic human rationality) with an artificial neural network (our attempt at mimicking the physical structure of the brain). The result is a system that reacts to new and different stimuli by learning about them.

They booted up the system, and the only human intervention after that was telling the system whether it had succeeded or not (which brings to mind how we may be able to train for that). The system was used to try to fit pegs into holes. It seems that on each subsequent attempt, the system did better, having “learned” about the best ways to go about it.
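
The article includes no code, but the loop it describes (try, get a yes or no from the human, try again) can be sketched loosely as follows; the numbers, the single safety rule, and the memory standing in for the neural network are all my own invention:

    # A loose sketch of the learn-from-yes/no loop. The real system pairs a
    # rule-based controller with a neural network; here the "rules" half is a
    # single work-area limit and the "learning" half is a memory of offsets
    # that worked before. All quantities are invented for illustration.
    import random

    MAX_OFFSET = 5.0            # rule: never command a move outside the work area
    remembered_successes = []   # crude stand-in for the neural network

    def try_peg(hole_position):
        """Poke at one hole until the human reports success; return attempts used."""
        for attempt in range(1, 10001):
            if remembered_successes:
                # start near what worked before instead of searching blindly
                center = random.choice(remembered_successes)
                candidate = center + random.uniform(-0.5, 0.5)
            else:
                candidate = random.uniform(-MAX_OFFSET, MAX_OFFSET)
            candidate = max(-MAX_OFFSET, min(MAX_OFFSET, candidate))   # rule-based clamp
            if abs(candidate - hole_position) < 0.1:                   # the human's yes/no
                remembered_successes.append(candidate)
                return attempt
        return None

    # Holes in this rig cluster around the same spot, so the memory pays off
    # and later pegs take far fewer attempts than the first one.
    for i, hole in enumerate([3.2, 3.1, 3.3, 3.2], start=1):
        print(f"peg {i} seated after {try_peg(hole)} attempts")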

The real breakthrough here is that they have a system that seems to be able to adapt to its situation. This could be a really big step towards some form of useful artificial intelligence. We are still within the 2005-2030 range which Vernor Vinge predicted would be the time span in which an artificial superhuman intelligence would appear and what he called the “end of humanity” would occur. This may be a significant step in that direction.

Such a superhuman intelligence would indeed require both the neural network and the rule-based intelligence. While AI researchers spent most of the early years working on the rule-based approach, it may be that the neural network side is already emerging on its own through the general expansion of the internet.

This brings us to the question of whether a set of artificial neural networks may be able to evolve the same way biological ones have. And even more poignantly, whether these artificial networks may simply be extensions of existing biological ones?

The interesting part happens when artificial networks begin to interface and grow enough so that the biological originators become the extensions rather than vice-versa. And if there is indeed a need for a rules based system to manage it all, who makes the rules (and what are they)?

When the servers become large, complex, integrated, interconnected enough that the services become the “servees”, does the sleeper awaken?