Posts Tagged ‘Hive mind’

The Dancing Bees of the Hive Mind

April 15, 2008

Much digital ink is being spilled about Web 3.0 and the Semantic Web.

The entire notion is that more information about data will lead to better understanding and exploitation of data (maybe that is why information wants to be free?).

Recently, I came across a few articles describing sprocket technology or twining.

The idea is that you have this cool little sprocket running on your machine and learning about your habits. Then this device can talk to other similar devices within your particular network. As the devices learn more, they can enrich your experience by alerting you to information you would probably be interested in. Of course, advertisers would love to interrupt you in the middle of what you consider important information for you to have with what they consider important information for you to have.

In any case, these little intelligent agents (now isn’t that term a blast from the past) remind me of dancing bees. I know, I know, I keep going back to the hive mind, but think about how dancing bees inform the hive of where they can find lovely flowers in bloom, ripe with all the pollen they can carry.

Sadly, it seems the little apians have been suffering lately and despite our best efforts to blame cell phone towers, we are still not sure what has been causing this bee blight.

So what if the real culprit is advertising? Some devious virus that gives the bees false steps in their dance with the stars, leading them to leave the hive en masse in search of fields of flowers that never were; a Pleasantville of insects sold on the “better life”?

If we want to find the bees, we’ll need to follow the money. Isn’t it always that way?


If Artificially Intelligent Machines Were Here, How Would We Know?

April 6, 2008

In his 1950 paper, “Computing Machinery and Intelligence”, mathematician Alan Turing described a test of a machine’s ability to achieve intelligence. This has been popularly named the Turing Test. Essentially, it asks a human judge to have a conversation (in written text and in real time) with another human and a machine. If the judging human cannot distinguish between the human and the machine, the machine is said to have passed the test and at least mimicked intelligence. This has spawned a whole school of research about “natural language dialogue systems”.
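For readers who like to see the shape of the thing, here is a minimal sketch, in Python, of how the imitation game could be wired up. The judge, human and machine objects and their ask/reply/name_the_machine methods are purely illustrative assumptions on my part; Turing’s paper describes a protocol, not an API.

    import random

    def imitation_game(judge, human, machine, rounds=5):
        """Toy version of Turing's imitation game: the judge exchanges
        written questions and answers with two hidden parties, "A" and "B",
        then must name the machine. Returns True if the guess is correct."""
        # Randomly assign the anonymous labels so order gives nothing away.
        assignment = {"A": human, "B": machine}
        if random.random() < 0.5:
            assignment = {"A": machine, "B": human}

        transcript = []
        for _ in range(rounds):
            for label, party in assignment.items():
                question = judge.ask(label, transcript)   # judge questions "A" or "B"
                answer = party.reply(question)            # hidden party answers in text
                transcript.append((label, question, answer))

        guess = judge.name_the_machine(transcript)        # judge returns "A" or "B"
        machine_label = next(l for l, p in assignment.items() if p is machine)
        return guess == machine_label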

We all know that unless there is a good financial reason to build such machines, the exercise of doing so will remain just an exercise. So what I am curious about is this: when machines successfully pass the test and can imitate human conversation, what applications will follow?

Of course, one of them would be sex and the other crime.

A Russian program called CyberLover conducts fully automated flirtatious conversations in a bid to collect personal data from unsuspecting real humans behaving all too humanly. The program can be found on dating chat sites and is designed to lure victims into sharing their identities or into visiting websites that provoke malware infestations. It can do so with up to ten simultaneous partners, making it quite an efficient machine as well.

With the rapid expansion of social networks and websites focused on conversation and discussion, this type of approach leads one to think that there may soon be a plethora of intelligent machines conversing with online denizens with the goal of gathering their personal data or zombifying their machines (and perhaps thus replicating themselves).

This leads me to the title of this missive. If artificially intelligent machines were here, how would we know? After all, the purpose of the Turing test is to have the machine fool the human into thinking it isn’t a machine. So, by Turing’s early definition, fooling a human is how one detects artificial intelligence. But if the human is fooled, who does the detecting?

Now, while I do subscribe to the notion that even paranoids can have real enemies, I don’t think this calls for panic just yet. But it does bring me back to my notion of the hive mind.

If we were indeed developing a larger, collective intelligence, how would we know? Perhaps that intelligence would be of such a nature that we would not recognize it as something other than us. Or perhaps it would contain so much of us that we would not recognize its whole.

If we were made up of intelligent cells, would the cells know they belonged to a greater mind? Would we know that we were made up of intelligent cells?

Could we be creating an intelligent design and not know it?

When the Teaching Machines Learn to Think

March 31, 2008

A recent article in Science Daily announces what may be an AI breakthrough. The article explains that AI researchers in Sweden have combined two tried-and-true methods of artificial intelligence in order to create a robot that “learns like a child or a puppy”.

Essentially, they have put together the classic rules-based system (which tends to mimic human rationality) with an artificial neural network (our attempt at mimicking the physical structure of the brain). The result is a system that reacts to new and different stimuli by learning about them.

They booted up the system, and the only human intervention after that was telling the system whether it had succeeded or not (which brings to mind how we may be able to train for that). The system was used to try to fit pegs into holes. It seems that on each subsequent attempt, the system did better, having “learned” about the best ways to go about it.
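To make the shape of that training loop concrete, here is a deliberately crude sketch, in Python, of learning from nothing but a success/failure signal. The try_insertion callback, the parameter names and the simple hill-climbing strategy are my own illustrative assumptions; the Swedish team’s actual system combines a rules-based layer with a neural network, which this sketch does not attempt to reproduce.

    import random

    def learn_peg_in_hole(try_insertion, trials=100, step=0.1):
        """Trial-and-error learning where the only feedback is a yes/no
        success signal after each attempt (e.g. a human saying "that worked")."""
        best_params = [0.0, 0.0, 0.0]     # illustrative: approach angle, x offset, y offset
        successes = 0

        for _ in range(trials):
            # Perturb the best guess so far -- the "exploration" step.
            candidate = [p + random.uniform(-step, step) for p in best_params]
            if try_insertion(candidate):  # True means the peg went in
                best_params = candidate   # keep what worked
                successes += 1
                step *= 0.9               # refine more finely once something works
        return best_params, successes

Each successful attempt nudges the parameters toward configurations that have already worked, which is the flavour of improvement-on-each-attempt the article describes, even if the real system’s internals are surely more sophisticated.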

The real breakthrough here is that they have a system that seems to be able to adapt to its situation. This could be a really big step towards some form of useful artificial intelligence. We are still within the 2005–2030 window in which Vernor Vinge predicted an artificial superhuman intelligence would appear and what he called the “end of humanity” would occur. This may be a significant step in that direction.

Such a superhuman intelligence would indeed require both the neural network and the rules-based intelligence. While AI researchers spent most of the early years working on the rules-based approach, it may be that the neural network is emerging on its own through the general expansion of the internet.

This brings us to the question of whether a set of artificial neural networks may be able to evolve the same way biological ones have. And even more poignantly, whether these artificial networks may simply be extensions of existing biological ones?

The interesting part happens when artificial networks begin to interface and grow enough so that the biological originators become the extensions rather than vice versa. And if there is indeed a need for a rules-based system to manage it all, who makes the rules (and what are they)?

When the servers become large, complex, integrated, interconnected enough that the services become the “servees”, does the sleeper awaken?

Thank you for joining the Hive Mind. You are not a number, you are a free man.

March 7, 2008

Yes, Virginia, you are an individual, just like everybody else.

As our ever more connected community of human thought continues to evolve and change, I am wondering whether we are finally moving towards a true hive mind.

Late-night bad science fiction television and heart-pounding video games teach us that the hive mind engulfs individuals and makes them subject to the “über-needs” of our glorious collective culture. Rarely does it really explore the true experience of the drone.

Does the drone actually see itself as such, or does it see itself as a somewhat empowered individual deftly navigating the slings and arrows of outrageous marketing attempts at influencing its desires, thoughts, needs and snacking peccadilloes?

Do you, dear reader, feel that the hive mind has begun to take over yet?

Have you been taken in by the conscious collective? Are you a happy-go-lucky member of the Culture?

Have you become a drone despite yourself?

I can almost see you recoil from the screen in horror. “What, me? Heaven forfend!”

But not so fast. Not so fast.

Perhaps it is human destiny to build this wonderful cyber brain? Perhaps we should be embracing it for the “common good”? After all, life seems to make successful organisms from a sum of many parts.

Maybe this is the path that will let us dominate the universe (if not us, then who?), or at least the next step in our “explorevolution” of who we are, why we are, and when the pizza will get here.

Drones may be a lot happier than rogue elements or exploring individuals. Sheep are happier in a herd, since the herd makes it easier for them to survive and find a breeding mate, and it reduces stress. You just have to watch out for that Good Shepherd.

You know, the one who eats mutton and wears lambskin.