Strong AI. The Turing Test. The Chinese room. As I’m sure you’ll agree, not nearly enough has been written about these topics. So when an anonymous commenter told me there’s a new polemic arguing that computers will never think — and that this polemic, by one Mark Halpern, is “being blogged about in a positive way (getting reviews like ‘thoughtful’ and ‘fascinating’)” — of course I had to read it immediately.
Halpern’s thesis, to oversimplify a bit, is that artificial intelligence research is a pile of shit. Like the fabled restaurant patron who complains that the food is terrible and the portions are too small, Halpern both denigrates a half-century of academic computer science for not producing a machine that can pass the Turing Test, and argues that, even if a machine did pass the Test, it wouldn’t really be “thinking.” After all, it’s just a machine!
(For readers with social lives: the Turing Test, introduced by Alan Turing in one of the most famous philosophy papers ever written, is a game where you type back and forth with an unknown entity in another room, and then have to decide whether you’re talking to a human or a machine. The details are less important than most people make them out to be. Turing says that the question “Can machines think?” is too meaningless to deserve discussion, and proposes that we instead ask whether a machine can be built that can’t be distinguished from human via a test such as his.)
If you haven’t read Halpern’s essay, the following excerpts should help you simulate a person who has.
Turing does not argue for the premise that the ability to convince an unspecified number of observers, of unspecified qualifications, for some unspecified length of time, and on an unspecified number of occasions, would justify the conclusion that the computer was thinking — he simply asserts it.
A conversation may allow us to judge the quality or depth of another’s thought, but not whether he is a thinking being at all; his membership in the species Homo sapiens settles that question — or rather, prevents it from even arising.
…the relationship of the AI community to Turing is much like that of adolescents to their parents: abject dependence alternating with embarrassed repudiation. For AI workers, to be able to present themselves as “Turing’s Men” is invaluable; his status is that of a von Neumann, Fermi, or Gell-Mann, just one step below that of immortals like Newton and Einstein. He is the one undoubted genius whose name is associated with the AI project … When members of the AI community need some illustrious forebear to lend dignity to their position, Turing’s name is regularly invoked, and his paper referred to as if holy writ. But when the specifics of that paper are brought up, and when critics ask why the Test has not yet been successfully performed, he is brushed aside as an early and rather unsophisticated enthusiast.
Apart from [the Turing test], no one has proposed any compelling alternative for judging the success or failure of AI, leaving the field in a state of utter confusion.
[W]hen a machine does something “intelligent,” it is because some extraordinarily brilliant person or persons, sometime in the past, found a way to preserve some fragment of intelligent action in the form of an artifact. Computers are general-purpose algorithm executors, and their apparent intelligent activity is simply an illusion suffered by those who do not fully appreciate the way in which algorithms capture and preserve not intelligence itself but the fruits of intelligence.
Of course, Halpern never asks whether the brain’s apparent intelligence is merely a preserved fragment of its billion-year evolutionary past. That would be ridiculous! Indeed, Halpern seems to think that if human intelligence is open to question, then the Turing Test is meaningless:
One AI champion, Yorick Wilks … has questioned how we can even be sure that other humans think, and suggests that something like the Test is what we actually, if unconsciously, employ to reassure ourselves that they do. Wilks … offers us here a reductio ad absurdum: the Turing Test asks us to evaluate an unknown entity by comparing its performance, at least implicitly, with that of a known quantity, a human being. But if Wilks is to be believed, we have unknowns on both sides of the comparison; with what do we compare a human being to learn if he thinks?
I think Halpern is simply mistaken here. The correct analogy is not between computers and humans; it’s between computers and humans other than oneself. For example, I have no direct evidence that the commenters on this blog think. I assume they think, since they’re so darned witty and insightful, and my own experience leads me to believe that that requires thinking. So why should this conclusion change if it turns out that, say, Greg Kuperberg is a robot (the KuperBlogPoster3000)?
Turing himself put the point as well as anyone:
According to the most extreme form of [the argument from consciousness] the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe ‘A thinks but B does not’ whilst B believes ‘B thinks but A does not’. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.
There’s a story that A. Lawrence Lowell, the president of Harvard in the 1920’s, wanted to impose a Jew quota because “Jews cheat.” When someone pointed out that non-Jews also cheat, Lowell replied: “You’re changing the subject. We’re talking about Jews.” Likewise, when one asks the strong-AI skeptic how a grayish-white clump of meat can think, the response often boils down to: “You’re changing the subject. We’re talking about computers.”
And this leads to my central thesis: that the Turing Test isn’t “really” about computers or consciousness or AI. Take away the futuristic trappings, and what you’re left with is a moral exhortation — a plea to judge others, not by their “inner essences” (which we can never presume to know), but by their relevant observed behavior.
It doesn’t take a hermeneutic acrobat to tease this out of Turing’s text. Consider the following passages:
The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities, e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man.
It will not be possible to apply exactly the same teaching process to the machine as to a normal child. It will not, for instance, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes. But however well these deficiencies might be overcome by clever engineering, one could not send the creature to school without the other children making excessive fun of it.
If you want to know why Turing is such a hero of mine (besides his invention of the Turing machine, his role in winning World War II, and so on), the second passage above contains the answer. Let others debate whether a robotic child would have “qualia” or “aboutness” — Turing is worried that the other kids would make fun of it at school.
Look, once you adopt the “moral” stance, this whole could-a-computer-think business is really not complicated. Let me lay it out for you, in convenient question-and-answer format.
Q. If a computer passed the Turing Test, would we be obligated to regard it as conscious?
A. Yes.
Q. But how would we know it was conscious?
A. How do I know you’re conscious?
Q. But how could a bunch of transistors be conscious?
A. How could a bunch of neurons be conscious?
Q. Why do you always answer a question with a question?
A. Why shouldn’t I?
Q. So you’re saying there’s no mystery about consciousness?
A. No, just that the mystery seems no different in the one case than the other.
Q. But you can’t just evade a mystery by pointing to something else that’s equally mysterious!
A. Clearly you’re not a theoretical computer scientist.
As most of you know, in 1952 — a decade after his contributions to breaking the U-boat Enigma helped win the Battle of the Atlantic — Turing was convicted of “gross homosexual indecency,” stripped of his security clearance, and forced to take estrogen treatments that caused him to grow breasts (it was thought, paradoxically, that this would “cure” him of homosexuality). Two years later, at age 41, the founder of computer science killed himself by biting the infamous cyanide-laced apple.
I agree with what I take to be Turing’s basic moral principle: that we should judge others by their relevant words and actions, not by what they “really are” (as if the latter were knowable to us). But I fear that, like Turing, I don’t have any argument for this principle that isn’t ultimately circular. All I can do is assert it, and assert it, and assert it.