
If Artificially Intelligent Machines Were Here, How Would We Know?

April 6, 2008

In his 1950 paper, “Computing Machinery and Intelligence,” mathematician Alan Turing described a test of a machine’s ability to exhibit intelligence, popularly known as the Turing Test. Essentially, it asks a human judge to hold a real-time, text-only conversation with both another human and a machine. If the judge cannot reliably distinguish the machine from the human, the machine is said to have passed the test and at least mimicked intelligence. This has spawned a whole school of research on “natural language dialogue systems”.
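For concreteness, here is a minimal sketch of the test’s structure in Python. The blind setup (one judge, one hidden human, one hidden machine) is Turing’s; all the class and function names, and the toy judge that merely guesses, are my own illustrative inventions, not anything from the paper.

```python
import random

# A bare-bones sketch of the imitation game. The judge exchanges text
# with two unlabeled respondents, one human and one machine, and must
# say which is which. Every name here is illustrative.

def run_imitation_game(judge, human, machine, rounds=3):
    # Hide the respondents behind shuffled labels "A" and "B".
    pair = [human, machine]
    random.shuffle(pair)
    respondents = dict(zip("AB", pair))

    for _ in range(rounds):
        for label, respondent in respondents.items():
            question = judge.ask(label)
            judge.observe(label, respondent.reply(question))

    # The machine "passes" if the judge points at the human.
    return respondents[judge.guess_machine()] is not machine

class CannedRespondent:
    """Toy stand-in for either participant; replies from a script."""
    def __init__(self, answers):
        self.answers, self.i = answers, 0
    def reply(self, question):
        answer = self.answers[self.i % len(self.answers)]
        self.i += 1
        return answer

class NaiveJudge:
    """Toy judge: asks stock questions, then guesses at random."""
    def ask(self, label):
        return f"Respondent {label}, please write me a sonnet."
    def observe(self, label, answer):
        pass  # a real judge would weigh the answers here
    def guess_machine(self):
        return random.choice("AB")

human = CannedRespondent(["Count me out on this one. I never could write poetry."])
machine = CannedRespondent(["Count me out on this one. I never could write poetry."])
print("machine passed:", run_imitation_game(NaiveJudge(), human, machine))
```

(The canned reply is borrowed from the sample dialogue in Turing’s paper; with identical answers, even a careful judge can do no better than chance.)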

We all know that unless there is a good financial reason to build such machines, the exercise of doing so will remain just an exercise. So what I am curious about is this: once machines can pass the test and convincingly imitate human conversation, what applications will follow?

Of course, one of them would be sex and the other crime.

A Russian program called CyberLover conducts fully automated flirtatious conversations in a bid to collect personal data from unsuspecting real humans behaving all too humanly. The program has been found on dating chat sites and is designed to lure victims into revealing their identities or visiting websites that infect their computers with malware. It can carry on up to 10 conversations simultaneously, making it quite an efficient machine as well.
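CyberLover’s internal workings have never been published, so purely as an illustration of how little actual intelligence the trick requires, here is a hypothetical Python sketch of a scripted bot juggling ten conversations with canned lines and a crude pattern-matcher for personal details. Every name, line, and pattern below is invented.

```python
import re

# Hypothetical sketch of a scripted, multi-partner chat bot.
# Nothing here reflects CyberLover's actual implementation.

CANNED_LINES = [
    "You sound fascinating. Tell me more about yourself!",
    "I feel like we really connect. Where are you from?",
    "What do you do for a living? I bet it's interesting.",
]

# Naive patterns for the kind of personal data such a bot might harvest.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

class ChatSession:
    """State for one conversation partner."""
    def __init__(self, partner_id):
        self.partner_id = partner_id
        self.turn = 0
        self.harvested = {}

    def respond(self, message):
        # Record anything in the message that looks like personal data.
        for label, pattern in PATTERNS.items():
            match = pattern.search(message)
            if match:
                self.harvested[label] = match.group()
        # Cycle through canned lines -- no understanding required.
        reply = CANNED_LINES[self.turn % len(CANNED_LINES)]
        self.turn += 1
        return reply

# Ten simultaneous "partners" cost nothing but a dictionary entry each.
sessions = {i: ChatSession(i) for i in range(10)}
print(sessions[3].respond("I'm Bob, reach me at bob@example.com"))
print(sessions[3].harvested)  # {'email': 'bob@example.com'}
```

The point of the sketch is the asymmetry: a handful of canned lines and two regular expressions are enough to run ten flirtations at once, while each human partner has to supply genuine attention.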

With the rapid expansion of social networks and websites focused on conversation and discussion, this type of approach leads one to think that there may soon be a plethora of intelligent programs conversing with online denizens, with the goal of gathering their personal data or zombifying their computers (and perhaps thus replicating themselves).

This leads me to the title of this missive. If artificially intelligent machines were here, how would we know? After all, the purpose of the Turing test is to have the machine fool the human into thinking it isn’t a machine. So, by Turing’s early definition, fooling a human is how one detects artificial intelligence. But if the human is fooled, who does the detecting?

Now, while I do subscribe to the notion that even paranoids can have real enemies, I don’t think this calls for panic just yet. But it does bring me back to my notion of the hive mind.

If we were indeed developing a larger, collective intelligence, how would we know? Perhaps that intelligence would be of such a different nature that we would fail to recognize it as something other than ourselves. Or perhaps it would contain so much of us that we would not recognize its whole.

If we were made up of intelligent cells, would the cells know they belonged to a greater mind? Would we know that we were made up of intelligent cells?

Could we be creating an intelligent design and not know it?
