1996, Proceedings of the 16th Conference on Computational Linguistics
Lexical rules are used in constraint-based grammar formalisms such as Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag 1994) to express generalizations among lexical entries. This paper discusses a number of lexical rules from recent HPSG analyses of German (Hinrichs and Nakazawa 1994) and shows that the grammar in some cases vastly overgenerates and in other cases introduces massive spurious structural ambiguity, if lexical rules apply under unification. Such problems of overgeneration or spurious ambiguity do not arise if a lexical rule applies to a given lexical entry iff the lexical entry is subsumed by the left-hand side of the lexical rule. Finally, the paper discusses computational consequences of applying lexical rules under subsumption.
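The unification-versus-subsumption contrast in the abstract above can be made concrete with a minimal sketch, assuming feature structures represented as nested Python dicts (the feature names and values below are invented for illustration, not taken from the paper): a rule whose left-hand side merely unifies with an underspecified entry would fire under the unification regime, but not under the subsumption regime.

```python
# Toy feature structures as nested dicts. A rule's LHS subsumes an entry
# iff every feature path in the LHS occurs in the entry with an equal value;
# two structures are unifiable iff they have no conflicting values.

def subsumes(lhs, fs):
    """True iff `lhs` is at least as general as `fs` (every LHS feature
    is present in `fs` with a compatible value)."""
    for feat, val in lhs.items():
        if feat not in fs:
            return False
        if isinstance(val, dict):
            if not isinstance(fs[feat], dict) or not subsumes(val, fs[feat]):
                return False
        elif val != fs[feat]:
            return False
    return True

def unifiable(a, b):
    """True iff `a` and `b` assign no conflicting values to any shared path."""
    for feat, val in a.items():
        if feat in b:
            if isinstance(val, dict) and isinstance(b[feat], dict):
                if not unifiable(val, b[feat]):
                    return False
            elif val != b[feat]:
                return False
    return True

lhs   = {"HEAD": {"VFORM": "bse"}, "AUX": False}  # hypothetical rule LHS
entry = {"HEAD": {"VFORM": "bse"}}                # underspecified entry: no AUX

print(unifiable(lhs, entry))  # True  -> rule applies under unification
print(subsumes(lhs, entry))   # False -> rule does not apply under subsumption
```

The underspecified entry is exactly the kind of case where application under unification can overgenerate: the rule fires even though the entry never committed to the AUX value the rule presupposes.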
1999
Lexical rules have been used to cover a very diverse range of phenomena in constraint-based grammars. Examination of the full range of rules proposed shows that Carpenter's (1991) postulated upper bound on the length of list-valued attributes such as SUBCAT in the lexicon cannot be maintained, leading to unrestricted generative capacity in constraint-based formalisms utilizing HPSG-style lexical rules. We argue that it is preferable to subdivide such rules into a class of semiproductive, lexically governed, genuinely lexical rules, and a class of fully productive unary syntactic rules.
Journal of Logic Programming, 1996
2001
The question we address in this paper is whether 'Lexical Rules' deserve their grand status, a status that is often conveyed by a special purpose formalism and/or a separate component, one that may even be external to the lexicon proper. We will argue that they do not, and that a lexical knowledge representation language that is as expressive as it needs to be for other lexical purposes will, ipso facto, be expressive enough to encode 'Lexical Rules' internally as lexical rules. Such internal encoding is not only possible but also desirable, since 'Lexical Rules' will then automatically acquire other characteristics which are now standardly associated with common or garden lexical rules, including inheritance, generalization by default, and the ability to relate lexical information from different levels of linguistic description. We give examples of what we take to be instances of common or garden lexical rules and then show how the same formal machinery provides for the statement of a PATR-like version of passive and the description of unbounded dependencies, inter alia, in Lexicalized Tree Adjoining Grammar (LTAG). We show how to define an LTAG lexicon as an inheritance hierarchy with internal lexical rules. A bottom-up featural encoding is used for LTAG trees, and this allows lexical rules to be implemented as covariation constraints within feature structures. Such an approach eliminates the considerable redundancy otherwise associated with an LTAG lexicon.
Linguistik Aktuell/Linguistics Today, 2014
The present paper contributes to the long-term linguistic discussion on the boundaries between grammar and lexicon by analyzing four related issues from Czech. The analysis is based on the theoretical framework of Functional Generative Description (FGD), which has been elaborated in Prague since the 1960s. First, the approach of FGD to the valency of verbs is summarized. The second topic, concerning dependent content clauses, is closely related to the valency issue. We propose to encode the information on the conjunction of the dependent content clause as a grammatical feature of the verb governing the respective clause. Thirdly, passive, resultative and some other constructions are suggested to be understood as grammatical diatheses of Czech verbs and thus to be a part of the grammatical module of FGD. The fourth topic concerns the study of Czech nouns denoting paired body parts, clothes and accessories related to these body parts, and similar nouns. Plural forms of these nouns prototypically refer to a pair or typical group of entities, not just to many of them. Since under specific contextual conditions the pair/group meaning can be expressed by most Czech concrete nouns, it is to be described as a grammaticalized feature.
Linguistic Inquiry, 1975
1990
In a proposal, Vijay-Shanker and Joshi presented a definition for combining the two formalisms Tree Adjoining Grammars and PATR unification. The essential idea of that combination is the separation of the two recursion operations - adjoining and unification - to preserve all properties of both formalisms, which is not desirable for natural language applications. In this paper, a definition for the integrated use of both processes is given, and the remaining properties of the resulting formalism are discussed, especially weighing the appropriateness of this definition for natural language processing.
Bibliographic information published by the Deutsche Nationalbibliothek: the Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.
Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 1983
A central goal of linguistic theory is to explain why natural languages are the way they are. It has often been supposed that computational considerations ought to play a role in this characterization, but rigorous arguments along these lines have been difficult to come by. In this paper we show how a key "axiom" of certain theories of grammar, Subjacency, can be explained by appealing to general restrictions on on-line parsing plus natural constraints on the rule-writing vocabulary of grammars. The explanation avoids the problems with Marcus' [1980] attempt to account for the same constraint. The argument is robust with respect to machine implementation, and thus avoids the problems that often arise when making detailed claims about parsing efficiency. It has the added virtue of unifying in the functional domain of parsing certain grammatically disparate phenomena, as well as making a strong claim about the way in which the grammar is actually embedded into an on-line sentence processor.
Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, 1991
A formalism is presented for lexical specification in unification-based grammars which exploits defeasible multiple inheritance to express regularity, subregularity, and exceptions in classifying the properties of words. Such systems are in the general case intractable; the present proposal represents an attempt to reduce complexity while retaining sufficient expressive power for the task at hand. Illustrative examples are given of morphological analyses from English and German.
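The defeasible-multiple-inheritance idea described above can be illustrated with a small sketch (the hierarchy, class names, and morphological features below are invented for illustration, not drawn from the paper): entry-level exceptions override class defaults, and more specific classes override more general ones.

```python
# Toy default-inheritance lexicon: each word belongs to one or more classes,
# each class may have a parent and a set of default feature values, and a
# word may carry exceptions that defeat any inherited default.

def lookup(word, feature, hierarchy, lexicon):
    """Return the most specific value of `feature` for `word`:
    lexical exceptions beat class defaults; nearer classes beat
    ancestors further up the hierarchy."""
    entry = lexicon[word]
    if feature in entry.get("exceptions", {}):
        return entry["exceptions"][feature]          # exception wins outright
    for cls in entry["classes"]:                     # classes in priority order
        node = cls
        while node is not None:                      # walk up to the root
            defaults = hierarchy[node]["defaults"]
            if feature in defaults:
                return defaults[feature]
            node = hierarchy[node]["parent"]
    return None

hierarchy = {
    "verb":        {"parent": None,   "defaults": {"past": "+ed"}},
    "strong-verb": {"parent": "verb", "defaults": {"past": "ablaut"}},
}
lexicon = {
    "walk": {"classes": ["verb"],        "exceptions": {}},        # regular
    "sing": {"classes": ["strong-verb"], "exceptions": {}},        # subregular
    "go":   {"classes": ["verb"],        "exceptions": {"past": "went"}},  # exception
}

print(lookup("walk", "past", hierarchy, lexicon))  # +ed
print(lookup("sing", "past", hierarchy, lexicon))  # ablaut
print(lookup("go",   "past", hierarchy, lexicon))  # went
```

The three entries mirror the abstract's three-way split of regularity ("walk"), subregularity ("sing"), and exception ("go"); the tractability concern the abstract raises arises once multiple inheritance paths can conflict, which this linear-priority toy deliberately sidesteps.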
Information Processing Letters, 1990
There are at least two ways of handling lexical ambiguity in a tree adjoining grammar. One of them seems to be computationally intractable. The other is computationally efficient. This paper describes these two methods with algorithms and their analyses.
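The abstract does not spell the two methods out, but the combinatorial intuition behind an intractable versus an efficient treatment of lexical ambiguity can be sketched as follows (toy data, not the paper's algorithms): expanding every combination of word readings up front grows exponentially in sentence length, while keeping the readings factored per word stays linear.

```python
from itertools import product

# Invented toy lexicon: each word has two possible categories.
readings = {"her": ["Det", "Pron"], "duck": ["V", "N"], "saw": ["V", "N"]}
sentence = ["her", "duck", "saw"]

# Method 1: enumerate every combination of readings before parsing.
# With k readings per word and n words this is k**n combinations.
combos = list(product(*(readings[w] for w in sentence)))
print(len(combos))  # 2 * 2 * 2 = 8

# Method 2: keep the readings factored, one ambiguity set per position,
# and let the parser resolve them locally. Total size is just k * n.
factored = [readings[w] for w in sentence]
print(sum(len(r) for r in factored))  # 2 + 2 + 2 = 6
```

The gap widens quickly: at ten words with two readings each, method 1 handles 1024 combinations where method 2 handles 20 entries.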
1989
Investigations of classes of grammars that are nontransformational and at the same time highly constrained are of interest both linguistically and mathematically. Context-free grammars (CFG) obviously form such a class. CFGs are not adequate (both weakly and strongly) to characterize some aspects of language structure. Thus how much more power beyond CFG is necessary to describe these phenomena is an important question. Based on certain properties of tree adjoining grammars (TAG), an approximate characterization of a class of grammars, mildly context-sensitive grammars (MCSG), has been proposed earlier. In this paper, we have described the relationship between several different grammar formalisms, all of which belong to MCSG. In particular, we have shown that head grammars (HG), combinatory categorial grammars (CCG), linear indexed grammars (LIG), and TAG are all weakly equivalent. These formalisms are all distinct from each other at least in the following aspects: (a) the formal objects and operations in each formalism, (b) the domain of locality over which dependencies are specified, (c) the degree to which recursion and the domain of dependencies are factored, and (d) the linguistic insights that are captured in the formal objects and operations in each formalism. A deeper understanding of this convergence is obtained by comparing these formalisms at the level of the derivation structures in each formalism. We have described a formalism, the linear context-free rewriting system (LCFR), as a first attempt to capture the closeness of the derivation structures of these formalisms. LCFRs thus make the notion of MCSGs more precise. We have shown that LCFRs are equivalent to multicomponent tree adjoining grammars (MCTAGs), and also briefly discussed some variants of TAGs: lexicalized TAGs, feature structure based TAGs, and TAGs in which local domination and linear precedence are factored, TAG(LD/LP).
Proceedings of the International Conference on Head-Driven Phrase Structure Grammar, 2003
Wasow (1977) argues that linguistic theory should recognize two qualitatively distinct types of rules: syntactic rules, which can affect more "superficial" grammatical function properties; and lexical rules, which affect deeper lexical semantic properties of lexical items. However, lexicalist theories of grammar have replaced syntactic rules with lexical rules leaving Wasow's dichotomy potentially unexplained. Our goal in this paper is to recapture Wasow's insight within a lexicalist framework such as HPSG. Building on Sag & Wasow's (1999) distinction between lexeme and word, we claim that there is a contrast between lexical rules that relate lexemes to lexemes (L-to-L rules) and lexical rules that relate words to words (W-to-W rules) and that these differences follow from the architecture of the grammar. In particular, we argue that syntactic function features (ARGST, VALENCE, etc.) are not defined for lexemes, while lexical semantic features (CONTENT) are. Fr...
Lacl, 1991
From a logical perspective, categorial type systems can be situated within a landscape of substructural logics: logics with a structure-sensitive consequence relation. Research on these logics has shown that the inhabitants of the substructural hierarchy can be systematically related by embedding translations on the basis of structural modalities. The modal operators offer controlled access to stronger logics from within weaker ones by licensing of structural operations. Linguistic material exhibits structure in dimensions not covered by the standard structural rules. The purpose of this paper is to generalize the modalisation and licensing strategy to two such dimensions: phrasal structure and headedness. Phrasal domain-sensitive type systems capture the notion of constituent structure; constituency relaxation can be licensed via an associativity modality. The opposition between heads and non-heads introduces dependency structure, an autonomous dimension of linguistic structure which may cross-cut semantic function-argument asymmetry; flexibility with respect to dependency structure can be encoded by a licensing balance modality. The interplay between constituent and dependency sensitivity leads to characterizations of well-formedness for linguistic objects structured as trees or strings, and headed trees or headed strings.