What’s Wrong With the Turing Test?
Not much, as long as you remember what it really means…
It wasn’t long ago that the world was captivated by a robot woman who appeared to be able to hold a conversation. Actually, that’s a little unfair — she could hold a conversation. It got people to consider whether the robot could pass the Turing Test, or the Imitation Game, as Alan Turing called it.
Basically, the Turing Test asks whether a machine can fool us into thinking it’s really a person. It’s a test of whether machines can think. The initial setup was: a human and a machine have a conversation via text, and an independent observer tries to figure out which is the real person and which is the machine. If the machine stumps the observer, it has ‘passed the Turing Test.’
Turing wasn’t exactly designing a realistic experiment; he was engaging in a thought exercise. If there were to be such an experiment, social scientists could spend years researching all the possible variations: observer qualifications, length of conversation, and so on. But we’re not concerned with that.
The problem is that many people have come to confuse thinking, intelligence, and awareness. That’s not surprising, considering that each of those terms has various denotative and connotative meanings. Turing was concerned only with thinking, by which he meant computation: could computers figure things out in a way similar to humans? He explicitly stated that he wasn’t talking about awareness, and left the question of intelligence more or less open.
If you’re a materialist, things are simple. Computation is thinking and thinking is intelligence, and awareness doesn’t count (it’s either unimportant or not truly existent at all, a mere ‘epiphenomenon’ of matter). For materialists, a good thinking machine is equivalent to, or better than, a human mind, because the mind is just the product of the brain, and the brain is just a ‘wet-ware’ machine. But if you’re a materialist, odds are strong that you’re not reading my Medium articles.
Because of the materialist bias in our society, which can be as subtle as it is pervasive, many folks have come to wonder if artificial intelligence equates to being aware. That is, would a really sophisticated machine develop consciousness as a result of its computational abilities?
This isn’t necessarily an issue for hardcore materialists themselves, who as I said tend to discount consciousness entirely. Rather, it’s the kind of question that fans of science fiction and spirituality might consider. Mr. Data, an android, pondered the question for the seven television seasons and four feature films of Star Trek: The Next Generation. At a lecture I attended in the late 1990s, Terence McKenna — who tended towards materialism more than you might think — speculated that the internet might be conscious.
As you may already be thinking, computational ability, however quantitatively sophisticated, is qualitatively different from actual consciousness and all its sequelae in the realms of emotion and feeling: love, compassion, empathy, and, while we’re at it, fear, anger, and doubt.
I have great doubts that machines will ever have these qualities, although they may be programmed to act as though they do. And I don’t think machines, no matter how sophisticated, will attain consciousness or self-awareness (on the plus side for them, they won’t wake up at four in the morning full of anxiety and doubt).
We can flip the Turing Test around a little, and recognize that our own thoughts and language aren’t always that creative and spontaneous. In other words, we often sound suspiciously like machines. My iPhone isn’t great at choosing the next word I want to type, but it isn’t awful either. And Google Docs annoyingly anticipates the conclusion of phrases I’ve begun to write (I often change them, just out of spite). Listen to conversations people are having about inflation, politics, movies, sports — almost anything — and note how frequently they use stock phrases to expound on stock ideas. Maybe the Turing Test is too easy.