2016
Reviewed by Francesco-Alessio Ursini, Stockholms Universitet

The Modular Architecture of Grammar presents a state-of-the-art introduction to automodular grammar, a theory based on Fodor's (1983) modularity of mind hypothesis. According to the Modularity of Grammar Hypothesis, autonomous modules generate linguistic representations (e.g. sentence structures, propositions) but do not interact (p. 7). The representations that these systems generate are, however, connected via mapping principles governed by the interface (meta-)module. The theoretical consequences of this assumption are far-reaching. For instance, the theory lacks movement operations and hierarchical levels of representation, and syntax does not have a central function in the architecture (cf. GB and Distributed Morphology: Chomsky 1981, Halle & Marantz 1993). The theory is tested against a wide range of data, including some well-known but still controversial problems. It presents an interesting representational alternative to derivational theories, and can provide several stimulating points of reflection for theoretically inclined linguists. Below, I summarize the contents of the book.

Chapter 1 introduces the two central modules of this architecture: semantics and syntax. The semantic module generates Function/Argument (F/A) structures, which determine how the meanings of lexical items, phrases, and sentences are composed. The syntax module generates phrase and sentence structure, as standardly assumed in generative frameworks. The syntactic rules of representation come in a standard, if conservative, generative format (e.g. S → NP VP). The semantic rules also come in a conservative, categorial format. For instance, an object of type Fap is a function that takes an argument object of type a as input and returns a proposition of type p as a result (cf. Cresswell 1973). Lexical items are initially defined as pairings of F/A and syntactic representations, the latter including information about category and distribution. For instance, the intransitive verb sneeze has F/A type Fa and syntactic category "V in [VP ___]" (i.e. it is a verb in a VP).

Chapter 2 presents the interface module and its three core principles. The first is lexical correspondence: each lexical item must have a representation in each module/dimension. The second is categorial correspondence: categories from different modules are mapped onto one another in a homogeneous way (e.g. NPs to arguments, propositions to sentences). The third is geometric correspondence: relations in one dimension (e.g. c-command in syntax) must correspond to relations in another dimension (e.g. scope in semantics). Since the theory assumes that different rules generate syntactic and semantic representations, which are nevertheless connected via precise mappings, it predicts that discrepancies and asymmetries among representations can arise. For instance, copular sentences such as Sally is a carpenter are analysed as involving lexical correspondence discrepancies. The copula and the indefinite article are treated as having null semantic representations, while the NPs Sally and carpenter have semantic representations that combine to form a proposition (an argument for Sally, a predicate for carpenter). The interface module maps these NPs to argument- and predicate-type representations, respectively, and the copula and indefinite article to null representations. Hence, lexical and categorial correspondence are maintained even if not all syntactic representations correspond to non-null semantic representations.
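To make the lexical correspondence principle concrete, the fragment below is a minimal sketch of how lexical entries might pair a syntactic category with an F/A type for Sally is a carpenter; the encoding and all names in it are my own illustration, not the book's notation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LexicalEntry:
    """A lexical item paired with a representation in each module (dimension)."""
    form: str
    syn_category: str        # syntactic dimension, e.g. 'V in [VP ___]'
    fa_type: Optional[str]   # semantic (F/A) dimension; None marks a null representation

# Toy lexicon for the copular sentence 'Sally is a carpenter'
lexicon = [
    LexicalEntry("Sally",     "NP",               "a"),    # argument
    LexicalEntry("is",        "V in [VP ___ NP]", None),   # copula: null semantic representation
    LexicalEntry("a",         "Det",              None),   # indefinite article: null semantic representation
    LexicalEntry("carpenter", "N",                "Fap"),  # predicate: takes an argument (a), yields a proposition (p)
]

def satisfies_lexical_correspondence(entries):
    # Every item must have some representation in every dimension;
    # a null semantic representation still counts as a representation.
    return all(e.form and e.syn_category for e in entries)

print(satisfies_lexical_correspondence(lexicon))  # True
```

On this encoding, only the non-null F/A values enter semantic composition (Fap applied to a yields a proposition p), while the copula and article contribute nothing to the F/A structure, mirroring the discrepancy described above.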
Chapter 3 adds the role (also called event or cognitive) structure module, which determines the event structure and thematic roles associated with lexical items and sentences. Only three roles are postulated: proto-agent, proto-patient, and ancillary participant (cf. Dowty 1991). Thus, the role structure of a verb such as put can be represented as "RS: put (type), AGT, PAT, ANC". Notably, role structures are assumed to be "flat" sequences consisting of an event type and its roles. The assumption of a distinct role structure module is motivated via the analysis of voice