I sometimes reference a computational linguistics factoid that now appears to be lost in the mists of early DoD Tipster program research: Chinese linguists agree on the segmentation of texts into words only about 80% of the time. I can find qualitative agreement on the problematic nature of the task, but the 80% figure is smeared widely across the references I can still locate. It should be no real surprise, though, because even English with white-space tokenization resists easy characterization of words versus phrases: “New York” and “New York City” are almost words in themselves, though under white-space tokenization they are also phrases. Phrases lift out with common and distinct usage, however, and become more than the sum of their parts; it would be ridiculously noisy to match a search for “York” against “New York” because no one in the modern world attaches semantic significance to the “York” part of the phrase. It exists as a whole, and the nature of the parts has dissolved into this holism.
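To make the tokenization point concrete, here is a minimal Python sketch (my own toy example; the two-entry phrase lexicon is hypothetical) contrasting bare white-space tokenization with a greedy longest-match segmentation over known phrases:

```python
# White-space tokenization treats "New" and "York" as separate words,
# so a naive search for "York" would match inside "New York City".
text = "I flew to New York City from York"
print(text.split())
# ['I', 'flew', 'to', 'New', 'York', 'City', 'from', 'York']

# A phrase lexicon, longest entries first, lets whole phrases lift out.
MWES = ["New York City", "New York"]

def segment(text, mwes):
    """Greedy longest-match segmentation over a multi-word expression lexicon."""
    raw, words, i = text.split(), [], 0
    while i < len(raw):
        for mwe in mwes:
            parts = mwe.split()
            if raw[i:i + len(parts)] == parts:
                words.append(mwe)   # the phrase survives as a single unit
                i += len(parts)
                break
        else:
            words.append(raw[i])    # ordinary single word
            i += 1
    return words

print(segment(text, MWES))
# ['I', 'flew', 'to', 'New York City', 'from', 'York']
```

Now the final “York” matches as a word in its own right, while the “York” inside “New York City” has dissolved into the phrase.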
John Searle’s Chinese Room argument came up again today. My son was waxing philosophical, as he does, in a discussion about mathematics and order, and suggested a poverty in our consideration of the world as purely and completely natural. He meant “natural” in the sense of “materialism” and “naturalism”: that there are no mystical or magical elements to the world in a metaphysical sense. I argued that there may nonetheless be something different and indescribable by simple naturalistic calculi: there may be qualia. That led, in turn, to the question of what is unique about the human experience, and hence on to Searle’s Chinese Room.
And what happens in the Chinese Room? Well, without knowledge of Chinese, you are trapped in a room with a large collection of rules for converting Chinese questions into Chinese answers. As slips of paper bearing Chinese questions arrive, you consult the rule book and spit out responses. Searle’s point was that it is silly to argue that the algorithm embodied by the room really understands Chinese, and that the notion of “Strong AI” (that artificial intelligence is equivalent to human intelligence insofar as there is behavioral equivalence between the two) falls short of the meaning of “strong.” It is a correlate, in a way, of the Turing Test, which also posits a thought experiment with remotely located computer and human interlocutors.
The arguments against the Chinese Room range from the complaint that there is no other way to establish intelligence to the claim that, given sensory-motor relationships with the objects the symbols represent, the room could be considered intentional. I don’t dispute any of these arguments, however. Instead, I would point out that the initial specification of the gedankenexperiment fails in its assumption that the Chinese Room can actually produce adequate outputs for the full range of possible inputs. In fact, while even the linguists disagree about the nature of Chinese words, every language can be used to produce utterances that have never been uttered before. Chomsky’s famous “colorless green ideas sleep furiously” shows the problem with clarity. It is the infinitude of language and its inherent ambiguity that makes the Chinese Room an inexact metaphor. A Chinese questioner could ask how the “soul-eyes of polar bears beam into the hearts of coal miners,” and the system would fail like enjambing precision German machinery fed tainted oil. Yeah, German machinery enjambs just like polar bears beam.
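To see why, here is how I would caricature the room in a few lines of Python (entirely my own sketch, not Searle’s formulation, with English stand-ins where the room would see Chinese characters): a finite rule book answers rehearsed questions convincingly and falls through on any novel utterance.

```python
# The room reduced to a finite lookup table of question -> answer rules.
RULES = {
    "How are you?": "I am fine, thank you.",
    "What is the weather like?": "It is pleasant today.",
}

def chinese_room(question: str) -> str:
    # The operator has no understanding; they only match symbols to rules.
    # Anything the rule book's authors did not anticipate falls through.
    return RULES.get(question, "<no rule matches this input>")

print(chinese_room("How are you?"))
# I am fine, thank you.
print(chinese_room(
    "How do the soul-eyes of polar bears beam into the hearts of coal miners?"))
# <no rule matches this input>
```

No finite rule book survives contact with a language whose speakers keep producing sentences no one has produced before.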
So the argument stands in its opposition to Strong AI given its initial assumptions, but fails given realistic qualifications of those assumptions.
NOTE: There is possibly a formal argument embedded here: a Chomsky grammar that is recursively enumerable has infinitely many possible productions, but an algorithm can be devised to accommodate those productions given Turing completeness. Such an algorithm exists in principle only, however, and requires a finite symbol alphabet. While the Chinese characters may be finite, the semantic and pragmatic metadata are not clearly so.
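A toy demonstration of that infinitude (my own example, riffing on Chomsky’s sentence): even a finite recursive rule set licenses an unbounded set of distinct sentences, so no finite enumeration of question-answer pairs can cover them.

```python
# A finite "grammar": Sentence -> Adjective* "ideas sleep furiously".
# Two rules and a handful of words generate infinitely many distinct sentences.
import itertools

def sentences():
    adjectives = ["colorless", "green"]
    for n in itertools.count(0):
        mods = [adjectives[i % 2] for i in range(n)]  # Adj^n, one more each pass
        yield " ".join(mods + ["ideas", "sleep", "furiously"])

for s in itertools.islice(sentences(), 3):
    print(s)
# ideas sleep furiously
# colorless ideas sleep furiously
# colorless green ideas sleep furiously
```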