2010, Minds and Machines
Ford's "Helen Keller Was Never in a Chinese Room" claims that my argument in "How Helen Keller Used Syntactic Semantics to Escape from a Chinese Room" fails because Searle and I use the terms 'syntax' and 'semantics' differently, hence are at cross purposes. Ford has misunderstood me; this reply clarifies my theory. Jason Michael Ford's "Helen Keller Was Never in a Chinese Room" (2010) claims that my argument in "How Helen Keller Used Syntactic Semantics to Escape from a Chinese Room" (Rapaport 2006) fails because Searle and I use the terms 'syntax' and 'semantics' differently, hence are at cross purposes. I think Ford has misunderstood me, so I am grateful for this opportunity to clarify my theory. The theory of syntactic semantics (Rapaport 1988) underlies computationalism: the claim that cognition is computable, i.e., that there is an algorithm (or a family of algorithms) that compute cognitive functions (Rapaport 1998). The theory has three parts: First, cognitive agents have direct access only to internal representatives of external objects. As Ray Jackendoff (2002, §10.4) says, a cognitive agent understands the world by "pushing the world into the mind". Therefore, both words and their meanings (including external objects serving as their referents) are represented internally in a single language of thought (LOT). For humans, this LOT is a biological neural network; for computers, it might be some kind of knowledge-representation and reasoning system (such as SNePS; see Shapiro & Rapaport 1987). 1 Second, it follows that words, their meanings, and semantic relations between them are all syntactic, where syntax is the study of relations among members of a single set (of signs, or marks, or neurons, etc.), and semantics is the study of relations between two sets (of signs, marks, neurons, etc., on the one hand, and their meanings, on the other) (cf. Morris 1938). "Pushing" meanings into the same set as symbols for them allows semantics to be done syntactically: It turns semantic relations between two sets (a set of internal marks and a set of (external) meanings) into syntactic relations among the marks of a single (internal) LOT. For example, truth tables and formal semantics are both syntactic enterprises, as are the relations between neuron firings representing signs and neuron firings representing external meanings. Consequently, symbol-manipulating computers can do semantics by doing syntax. Finally, understanding is recursive: We understand a syntactic domain (call it 'SYN 1 ') indirectly by interpreting it in terms of a semantic domain (call it 'SEM 1 '). But SEM 1 must be antecedently understood by considering it as a syntactic domain (rename it 'SYN 2 ') interpreted in terms of yet another semantic domain, which also must be antecedently understood. And so on. But, in order not to make it go on ad infinitum, there must be a base case: a domain that is understood directly, i.e., in terms of itself (i.e., not "antecedently"). Such direct understanding is syntactic understanding (Rapaport 1986b). (And perhaps it is holistic understanding; cf. Rapaport 2002.) Thus, the theory of syntactic semantics asserts that syntax suffices for semantic cognition, that cognition is therefore computable, and that computers are hence capable of thinking.