2004
AI
Cognitive Modeling and Verbal Semantics explores the relationship between linguistic structures and cognitive processes, focusing on the representation of verbs within a semantic framework. Through tree-structured representations, the paper analyzes dynamic aspects of event entities defined by verbs like 'setzen' and 'put.' It highlights the nuanced meanings derived from different contexts and provides illustrative examples from literature to clarify verbal semantics.
Ray Jackendoff has developed a semantic theory which claims to be the semantic format corresponding to Transformational Grammar. Like the rules of syntax, the rules of semantics are implicitly known, and the basic structures are innate. This innateness of basic structures guarantees intersubjective reference. Each speaker refers only to his or her projected world, not to an objective world. He or she does so by using conceptual structures, which may be projected as entities of various kinds. The same conceptual structure underlies cognitive capacities such as vision as well as the use of language. Concerning human categorization, Jackendoff argues against other theories of meaning, especially truth-conditional theories (§1). Furthermore, semantic structures should obey a grammaticality constraint which is violated by the predicate calculus (§2). The thematic relations hypothesis (§3) concerns the underlying structure of all semantic fields. §1 Human categorisation, features, and word meaning
Semantics - Lexical Structures and Adjectives, 2019
Journal of Pragmatics, 2006
Leonard Talmy is a leading light of cognitive linguistics, known especially for his work in cognitive semantics, an approach to linguistics that aims to describe the linguistic representation of conceptual structure. The two-volume set ''Toward a Cognitive Semantics'' is a collection of 16 of Talmy's papers spanning roughly 30 years of his thinking and writing. The papers have been updated, expanded, revised, and arranged by concept into chapters. This review of the volumes is tailored to a non-specialist linguist or cognitive scientist interested in a general orientation to the contents and presentation. In the introduction common to the two books, Talmy situates cognitive linguistics within the discipline of linguistics and identifies his primary methodology as introspection. The ''overlapping systems model'' of cognitive organization is outlined, in which cognitive systems, such as language, vision, kinesthetics, and reasoning can and do interact. Talmy proposes ''the general finding that each system has certain structural properties that are uniquely its own, certain structural properties that it shares with only one or a few other systems, and certain structural properties that it shares with most or all the other systems. These last properties would constitute the most fundamental properties of conceptual structuring in human cognition.'' The reader is guided to specific chapters in which the linguistic system is compared to other cognitive systems of visual perception, kinesthetic perception, attention, understanding/reasoning, pattern integration, cognitive culture, and affect. Each volume of the set is about 500 pages long, with eight chapters organized into three or four major sections. The first volume, ''Concept structuring systems'' expounds Talmy's vision of the fundamental systems of conceptual structuring in language. 
Part 1 presents a theoretical orientation, Part 2 addresses configurational structure, Part 3 discusses the distribution of attention, and Part 4 describes force dynamics. The second volume, ''Typology and process in concept structuring,'' turns from conceptual systems themselves to the processes that structure concepts and the typologies that emerge from these. Part 1 looks at processes on a long-term scale, longer than an individual's lifetime, that deal with the representation of event structure. Part 2 considers the short-term scale of cognitive processing with a look at online processing, and Part 3 addresses medium-term processes in the acquisition of culture and the processing of narrative. In volume 1, Chapter 1, ''The relation of grammar to cognition,'' is a greatly revised and expanded version of a 1988 paper, itself an expansion of papers from 1977 and 1978. This paper details the ''semantics of grammar'' in language, toward the larger goal of determining the character of conceptual structure in general. Talmy proposes that the fundamental design feature…
The Philosophical Forum, 2003
Semantics and contextual expression, 1989
Theoretical Linguistics, 2012
Proceedings of the 1975 workshop on Theoretical issues in natural language processing - TINLAP '75, 1975
Prepublication version of a paper that appeared in Hypothesis A / Hypothesis B: Linguistic Explorations in Honor of David M. Perlmutter, edited by Donna B. Gerdts, John C. Moore, and Maria Polinsky. Many linguistic theories have a component that maps from a representation that I will call 'proto-linguistic' to whatever syntactic representation the theory uses. The proto-linguistic representation embodies assumptions about equivalence classes in the conceptualization of roles associated with various types of activities as they are expressed in verbs and other linguistic predicates. A question that has not been asked explicitly about this mapping component is what its formal power should be. In this paper I argue that finite-state power suffices, and I show how one of these mapping systems, Lexical Mapping Theory (LMT) as used in LFG, can be straightforwardly modeled with finite-state tools. The modeling will show how the system can be improved in one important respect by using lenient composition as defined in Karttunen (1998, 2005).
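The lenient composition the abstract mentions can be sketched over finite relations. The toy below encodes a relation as a set of (input, output) pairs and defines lenient composition as ordinary composition with priority union as a fallback; the function names, the set-based encoding, and the sample mapping are illustrative assumptions, not Karttunen's actual finite-state calculus.

```python
# Toy illustration of lenient composition over finite string relations.
# A relation is a finite set of (input, output) pairs; this encoding is
# a sketch, not a real finite-state transducer implementation.

def compose(r, s):
    """Ordinary relation composition: x -> z iff x -> y in r and y -> z in s."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def priority_union(r, s):
    """Keep all pairs of r; fall back to s only for inputs r does not map."""
    mapped = {x for (x, _) in r}
    return r | {(x, y) for (x, y) in s if x not in mapped}

def lenient_compose(r, constraint):
    """Apply the constraint where it licenses an output, but let mappings
    of r that the constraint would wipe out survive unchanged."""
    return priority_union(compose(r, constraint), r)

# A mapping that over-generates two past-tense candidates for 'sing'
r = {("sing", "sang"), ("sing", "singed")}
# A constraint that only licenses 'sang'
c = {("sang", "sang")}

print(lenient_compose(r, c))      # only the licensed pair survives
print(lenient_compose(r, set()))  # empty constraint: r survives intact
```

The point of the priority union is exactly the "improvement" claimed in the abstract: an over-restrictive constraint cannot silently eliminate every output for an input.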
Cognitive Linguistics, 1996
Conceptual semantics and cognitive linguistics, by Ray Jackendoff. Published in Cognitive Linguistics 7-1 (1996), 93-129, © Walter de Gruyter. Overview: 1) Ideology, 2) Syntax, 3) Conceptual Structure, 4) Notation, 5) Polysemy ...
Annual Review of Cognitive Linguistics, 2007
Although neither theoretical nor computational linguists have provided sufficiently careful insight into the problem of semantic roles, some progress has recently been achieved in robotics (the study of simulated human interaction) and, above all, in multi-agent systems. Taking advantage of this motivation and applying it to the study of languages, I distinguish between various abstract ontological levels. Instead of using concepts such as agentive, objective, experiencer, etc. on the highest (generic) ontological level, I postulate generalised agents which are defined by the following ontological features, among others: (1) features of control (autonomy): goal and feedback; (2) features of emotion (character): desire and intention; (3) epistemic features (reason): belief and cognition; (4) communication features (language faculty): verbal and visual. In accordance with such ontological concepts, natural and artificial entities are suited to fulfil the semantic roles of agents and figures respectively, in the widest sense of these terms. I further propose to distinguish between three classes of generic ontological roles, namely ACTIVE, MEDIAN and PASSIVE. Here are examples of generic roles: (1) active roles (Initiator, Causer, Enabler, Benefactor, Executor, Stimulant, Source, Instigator, etc.), (2) passive roles (Terminator, Affect, Enabled, Beneficient, Executed, Experiencer, Goal, etc.) and (3) median roles (Mediator, Instrument, Benefit, Motor, Means, etc.). Figures can play quasi-active (Q-active) roles.
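The two-level role inventory proposed above is, structurally, a small taxonomy: three generic classes, each grouping the specific roles the abstract lists. A minimal sketch of that data structure, assuming nothing beyond the abstract's own role names (the enum and lookup function are illustrative, not the author's formalism):

```python
# Illustrative encoding of the proposed two-level role inventory:
# three generic classes, each grouping the specific roles listed above.
from enum import Enum

class GenericRole(Enum):
    ACTIVE = "active"
    MEDIAN = "median"
    PASSIVE = "passive"

SPECIFIC_ROLES = {
    GenericRole.ACTIVE:  ["Initiator", "Causer", "Enabler", "Benefactor",
                          "Executor", "Stimulant", "Source", "Instigator"],
    GenericRole.MEDIAN:  ["Mediator", "Instrument", "Benefit", "Motor", "Means"],
    GenericRole.PASSIVE: ["Terminator", "Affect", "Enabled", "Beneficient",
                          "Executed", "Experiencer", "Goal"],
}

def generic_class(role):
    """Look up the generic class of a specific role name."""
    for cls, roles in SPECIFIC_ROLES.items():
        if role in roles:
            return cls
    raise KeyError(role)

print(generic_class("Instrument"))  # GenericRole.MEDIAN
```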
Language, 1999
1.2 Frege on compositionality
1.3 Tutorial on sets and functions
  1.3.1 Sets
  1.3.2 Questions and answers about the abstraction notation for sets
  1.3.3 Functions
2 Executing the Fregean Program
  2.1 First example of a Fregean interpretation
    2.1.1 Applying the semantics to an example
    2.1.2 Deriving truth-conditions in an extensional semantics
    2.1.3 Object language and metalanguage
  2.2 Sets and their characteristic functions
  2.3 Adding transitive verbs: semantic types and denotation domains
  2.4 Schönfinkelization
  2.5 Defining functions in the λ-notation
3 Semantics and Syntax
  3.1 Type-driven interpretation
  3.2 The structure of the input to semantic interpretation
  3.3 Well-formedness and interpretability
  3.4 The θ-Criterion
  3.5 Argument structure and linking
4 More of English: Nonverbal Predicates, Modifiers, Definite Descriptions
  4.1 Semantically vacuous words
  4.2 Nonverbal predicates
  4.3 Predicates as restrictive modifiers
    4.3.1 A new composition rule
    4.3.2 Modification as functional application
    4.3.3 Evidence from nonintersective adjectives?
  4.4 The definite article
    4.4.1 A lexical entry inspired by Frege
    4.4.2 Partial denotations and the distinction between presupposition and assertion
    4.4.3 Uniqueness and utterance context
    4.4.4 Presupposition failure versus uninterpretability
  4.5 Modifiers in definite descriptions
  6.5.1 A little history
  6.5.2 Relational and Schönfinkeled denotations for determiners
  6.6 Formal properties of relational determiner meanings
  6.7 Presuppositional quantifier phrases
    6.7.1 "Both" and "neither"
    6.7.2 Presuppositionality and the relational theory
    6.7.3 Other examples of presupposing DPs
  6.8 Presuppositional quantifier phrases: controversial cases
    6.8.1 Strawson's reconstruction of Aristotelian logic
    6.8.2 Are all determiners presuppositional?
    6.8.3 Nonextensional interpretation
    6.8.4 Nonpresuppositional behavior in weak determiners
7 Quantification and Grammar
  7.1 The problem of quantifiers in object position
  7.2 Repairing the type mismatch in situ
    7.2.1 An example of a "flexible types" approach
    7.2.2 Excursion: flexible types for connectives
  7.3 Repairing the type mismatch by movement
  7.4 Excursion: quantifiers in natural language and predicate logic
    7.4.1 Separating quantifiers from variable binding
    7.4.2 1-place and 2-place quantifiers
  7.5 Choosing between quantifier movement and in situ interpretation: three standard arguments
    7.5.1 Scope ambiguity and "inverse" scope
    7.5.2 Antecedent-contained deletion
    7.5.3 Quantifiers that bind pronouns
8 Syntactic and Semantic Constraints on Quantifier Movement
  8.1 Which DPs may move, and which ones must?
  8.2 How much moves along? And how far can you move?
  8.3 What are potential landing sites for moving quantifiers?
  8.4 Quantifying into VP
    8.4.1 Quantifiers taking narrow scope with respect to auxiliary negation
    8.4.2 Quantifying into V′, VP-internal subjects, and flexible types
  8.5 Quantifying into PP, AP, and NP
    8.5.1 A problem of undergeneration
    8.5.2 PP-internal subjects
    8.5.3 Subjects in all lexically headed XPs?
  8.6 Quantifying into DP
    8.6.1 Readings that can only be represented by DP adjunction?
    8.6.2 Indirect evidence for DP adjunction: a problem with free IP adjunction?
    8.6.3 Summary
9 Bound and Referential Pronouns and Ellipsis
  9.1 Referential pronouns as free variables
    9.1.1 Deictic versus anaphoric, referential versus bound-variable pronouns
    9.1.2 Utterance contexts and variable assignments
  9.2 Co-reference or binding?
  9.3 Pronouns in the theory of ellipsis
    9.3.1 Background: the LF Identity Condition on ellipsis
    9.3.2 Referential pronouns and ellipsis
    9.3.3 The "sloppy identity" puzzle and its solution
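Two of the core devices in this table of contents, characteristic functions (2.2) and Schönfinkelization (2.4), are easy to sketch directly. The toy lexicon and function names below are illustrative assumptions; only the two techniques themselves come from the outline.

```python
# A set and its characteristic function (cf. 2.2), plus Schönfinkelization,
# i.e. currying, of a two-place relation (cf. 2.4). Lexicon is made up.

smokers = {"ann", "bob"}

def char_func(s):
    """Return the characteristic function of set s: an <e,t>-style predicate."""
    return lambda x: x in s

smokes = char_func(smokers)
print(smokes("ann"))    # True
print(smokes("carol"))  # False

# A two-place relation as a set of pairs (subject, object) ...
likes_pairs = {("ann", "bob"), ("bob", "bob")}

# ... Schönfinkeled into a function taking the internal argument first and
# returning a one-place predicate over subjects: type <e,<e,t>>.
def schonfinkel(pairs):
    return lambda obj: lambda subj: (subj, obj) in pairs

likes = schonfinkel(likes_pairs)
print(likes("bob")("ann"))  # True: Ann likes Bob
print(likes("ann")("bob"))  # False
```

Taking the internal argument first mirrors the standard convention that a transitive verb combines with its object before its subject.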
This chapter addresses the encoding of spatial semantics at Conceptual Structure (CS) in the framework proposed by Jackendoff (1983, 1987, 1996, 2002). The central question concerns the aspects of the representation of space at CS that are universal and therefore presumably innate.
2016
Reviewed by Francesco-Alessio Ursini, Stockholms Universitet The Modular Architecture of Grammar presents a state-of-the-art introduction to automodular grammar, a theory based on Fodor's (1983) modularity of mind hypothesis. According to the Modularity of Grammar Hypothesis, autonomous modules generate linguistic representations (e.g. sentence structures, propositions), but do not interact (p.7). The representations that these systems generate, however, are connected via mapping principles governed by the interface (meta-)module. The theoretical consequences of this assumption are far-reaching. For instance, the theory lacks movement operations or hierarchical levels of representation, and syntax does not have a central function in the architecture (cf. GB and Distributed Morphology: Chomsky 1981, Halle & Marantz 1993). The theory is tested against a wide set of data, including some well-known but still controversial problems. It presents an interesting representational alternative to derivational theories, and can provide several stimulating points of reflection for theoretically-inclined linguists. Below, I summarize the contents of the book. Chapter 1 introduces the two central modules of this architecture: semantics and syntax. The semantic module generates Function/Argument (FA) structures, which determine how the meanings of lexical items, phrases and sentences are composed. The syntax module generates phrase/sentence structure, as standardly assumed in generative frameworks. The syntactic rules of representation come in a standard, if conservative generative format (e.g. S→NP, VP). The semantic rules also come in a conservative, categorial format. For instance, an object of type Fap is a function that takes an argument object of type a as an input, and returns a type p proposition as a result (cf. Cresswell 1973). Lexical items are initially defined as pairings of F/A and syntactic representations, which include information about category and distribution. 
For instance, the intransitive verb sneeze has F/A type Fa and syntactic category "V in [VP ___]" (i.e. it is a verb in a VP). Chapter 2 presents the interface module and its three core principles. The first is lexical correspondence: each lexical item must have a representation in each module/dimension. The second is categorial correspondence: categories from different modules are mapped in a homogeneous way (e.g. NPs to arguments, propositions to sentences). The third is geometric correspondence: relations in one dimension (e.g. c-command in syntax) must correspond to relations in another dimension (e.g. scope in semantics). Since the theory assumes that different rules generate syntactic and semantic representations, which are nevertheless connected via precise mappings, it predicts that discrepancies and asymmetries among representations can arise. For instance, copular sentences such as Sally is a carpenter are analysed as involving lexical correspondence discrepancies. The copula and indefinite article are treated as having null semantic representations, while the NPs Sally and carpenter have semantic representations that combine to form a proposition (i.e. an argument for Sally, a predicate for carpenter). The interface module maps these NPs to argument and predicate representations, respectively, and maps the copula and indefinite article to null representations. Hence, lexical and categorial correspondence are maintained even if not all syntactic representations correspond to non-null semantic representations. Chapter 3 adds the role (also event, cognitive) structure module, which determines the event structure and thematic roles associated with lexical items and sentences. Only three roles are postulated: proto-agent, proto-patient, and ancillary participant (cf. Dowty 1991). Thus, the role structure of a verb such as put can be represented as "RS: "put" (type), AGT, PAT, ANC". Notably, role structures are assumed to be "flat" sequences comprising an event type and roles.
The assumption of a distinct role structure module is motivated via the analysis of voice
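The Function/Argument typing described in the review (an object of type Fap takes a type-a argument and returns a type-p proposition; sneeze is of type Fa) can be sketched as a small typed application rule. The class names and the two-item lexicon below are illustrative assumptions, not the book's own notation.

```python
# Minimal sketch of F/A-structure composition: a functional type Fap takes
# an argument of type a and yields a proposition of type p. Illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Type:
    name: str                       # "a" (argument), "p" (proposition), "F" (function)
    arg: Optional["Type"] = None    # input type, for functional types
    result: Optional["Type"] = None # output type, for functional types

A = Type("a")
P = Type("p")
Fa = Type("F", arg=A, result=P)     # intransitive verb: argument -> proposition

@dataclass(frozen=True)
class Expr:
    form: str
    type: Type

def apply(func: Expr, arg: Expr) -> Expr:
    """Combine a function-type expression with a type-matching argument."""
    if func.type.arg != arg.type:
        raise TypeError(f"{func.form} cannot take {arg.form}")
    return Expr(f"{func.form}({arg.form})", func.type.result)

sneeze = Expr("sneeze", Fa)  # paired with "V in [VP ___]" on the syntactic side
sally = Expr("Sally", A)

prop = apply(sneeze, sally)
print(prop.form, prop.type.name)  # sneeze(Sally) p
```

Because the semantic rule checks only F/A types, the copular-sentence analysis above fits naturally: items with null semantic representations simply contribute no Expr to the composition.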
2004
The hypothesis of the autonomy of syntax makes special demands on one of the central issues in linguistic theory: the specification of correspondences between lexical conceptual structure and syntactic structure. One strategy is to distinguish several layers of lexical representation and allow only one of them to be "visible" to syntactic and morphological processes (cf. Pinker 1989, Grimshaw 1990). A recent implementation of this strategy is the Aspectual Interface Hypothesis (AIH), advocated by Tenny since 1987. The AIH is driven by the assumption that there is a direct and uniform association between telicity, or what Tenny calls "aspectual measuring-out" of events, and the internal direct object argument at d-structure.
Pragmatics & Beyond New Series, 1999