2005, Proceedings of the Pattern Analysis, Statistical Modelling, and Computational Learning (PASCAL) Challenges Workshop on Recognising Textual Entailment
Abstract: The system for semantic evaluation VENSES (Venice Semantic Evaluation System) is organized as a pipeline of two subsystems: the first is a reduced version of GETARUN, our system for Text Understanding. The output of the system is a flat list of head-dependent structures (HDS) with Grammatical Relation (GR) and Semantic Role (SR) labels. The evaluation system is made up of two main modules: the first is a sequence of linguistic rule-based subcalls; the second is a quantitatively based measurement of input ...
Proceedings of the …, 2006
As in the previous RTE Challenge, we present a linguistically-based approach for semantic inference which is built around a neat division of labour between two main components: a grammatically-driven subsystem, which is responsible for the level of predicate-argument well-formedness and works on the output of a deep parser that produces augmented head-dependency structures; and a second subsystem that attempts the allowed logical and lexical inferences on the basis of different types of structural transformation intended to produce a semantically valid meaning correspondence. Grammatical relations and semantic roles are used to generate a weighted score. In the current challenge, a number of additional modules have been added to cope with fine-grained inferential triggers which were not present in the previous dataset. Different levels of argumenthood have been devised in order to cope with the semantic uncertainty generated by nearly inferable Text/Hypothesis pairs where the interpretation needs reasoning.
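To make the weighted-score idea in the abstract above concrete, here is a minimal Python sketch, not the authors' implementation, of how matches on heads, grammatical relations and semantic roles between Text and Hypothesis head-dependency structures could be folded into a normalised score. The HDS tuple format and the weight values are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class HDS:
    head: str  # lemma of the governing predicate
    dep: str   # lemma of the dependent
    gr: str    # grammatical relation, e.g. "subj", "obj"
    sr: str    # semantic role, e.g. "agent", "theme"

# Assumed weights: a semantic-role match counts more than the GR alone.
W_HEAD, W_GR, W_SR = 1.0, 0.5, 1.5

def score(text: list[HDS], hyp: list[HDS]) -> float:
    """Score how well each Hypothesis HDS is covered by some Text HDS."""
    if not hyp:
        return 0.0
    total = 0.0
    for h in hyp:
        best = 0.0
        for t in text:
            s = 0.0
            if h.head == t.head and h.dep == t.dep:
                s += W_HEAD
                if h.gr == t.gr:
                    s += W_GR
                if h.sr == t.sr:
                    s += W_SR
            best = max(best, s)
        total += best
    return total / (len(hyp) * (W_HEAD + W_GR + W_SR))  # normalised to [0, 1]

text = [HDS("buy", "company", "subj", "agent"), HDS("buy", "firm", "obj", "theme")]
hyp  = [HDS("buy", "company", "subj", "agent")]
print(score(text, hyp))  # 1.0 -> the Hypothesis is fully covered by the Text

Weighting SR matches above GR matches reflects the abstract's emphasis on semantically valid correspondence; in the real system, rule-based subcalls run before any such quantitative measure.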
2009
The main purpose of the workshop is to review, analyze and discuss the latest developments in semantic analysis of text. The fact that the workshop occurs between the last Semantic Evaluation exercise and the preparation for the next SemEval in 2010 presents an exciting opportunity to discuss practical and foundational aspects of semantic processing of text. The workshop targets papers describing both semantic processing systems and evaluation exercises, with special attention to foundational issues in both lexical and propositional semantics, including semantic representation and semantic corpus construction problems.
2009
In this paper we present two new mechanisms we created in VENSES, the system for semantic evaluation of the University of Venice. The first mechanism is used to match predicate-argument structures with different governors, a verb and a noun, respectively in the Hypothesis and the Text. It can be described as a set of Augmented Finite State Automata (FSA): matching procedures based on tagged words in one case and on dependency relations in the other. In both cases, a number of inferences - the augmentation - are fired to match different words. The second mechanism is based on the output of our module for anaphora resolution. Our system produces antecedents for pronominal expressions and identical nominal expressions; no decision is taken, however, for "bridging" expressions. The "bridging" mechanism is therefore activated by the Semantic Evaluator and has access to the History List and to the semantic features associated with each referring expression. If the constraint conditions are met, the system looks for a similar association of property/entity in web ontologies such as Umbel, YAGO and DBpedia. The two mechanisms have been shown to contribute 5% and 3% to accuracy, respectively.
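The bridging step described above lends itself to a small sketch. The following Python fragment is hypothetical throughout: the History List entries, the feature inventory and the stubbed ontology lookup merely stand in for the real system's access to Umbel, YAGO and DBpedia.

HISTORY_LIST = [
    {"expr": "Fiat", "features": {"company", "organisation"}},
    {"expr": "Turin", "features": {"city", "location"}},
]

def ontology_confirms(property_: str, entity: str) -> bool:
    # Stub: a real implementation would query Umbel/YAGO/DBpedia here.
    return (property_, entity) in {("carmaker", "Fiat")}

def resolve_bridging(expr: str, required_features: set[str]) -> str | None:
    """Return an antecedent from the History List if the constraints are met."""
    for entry in HISTORY_LIST:
        if required_features & entry["features"]:       # features compatible
            if ontology_confirms(expr, entry["expr"]):  # ontology agrees
                return entry["expr"]
    return None

# "the carmaker" has no explicit antecedent, but bridges to "Fiat".
print(resolve_bridging("carmaker", {"company"}))  # -> "Fiat"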
@Book{SemEval:2010,
  editor    = {Katrin Erk and Carlo Strapparava},
  title     = {Proceedings of the 5th International Workshop on Semantic Evaluation},
  month     = {July},
  year      = {2010},
  address   = {Uppsala, Sweden},
  publisher = {Association for Computational Linguistics},
  url       = {http://www.aclweb.org/anthology/S10-1}
}
@InProceedings{recasens-EtAl:2010:SemEval,
  author = {Recasens, Marta and M\`{a}rquez, Llu\'{i}s and Sapena, Emili and Mart\'{i}, M.
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016
In our paper we present our rule-based system for semantic processing. In particular, we show examples and solutions that may challenge our approach. We then discuss problems and shortcomings of Task 2-iSTS. We comment on a tension inherent in the task: on the one hand, the need to make the task as "semantically feasible" as possible; on the other, the fact that the detailed presentation and some notes in the guidelines refer to inferential processes, paraphrases and the use of commonsense knowledge of the world for the interpretation to work. We then present results and some conclusions.
2000
This paper presents a semantic parsing approach for unrestricted texts. Semantic parsing is one of the major bottlenecks of Natural Language Understanding (NLU) systems and usually requires building expensive resources not easily portable to other domains. Our approach obtains a case-role analysis, in which the semantic roles of the verb are identified. In order to cover all the possible syntactic realisations of a verb, our system combines the verb's argument structure with a set of general semantically labelled diathesis models. Combining them, the system builds a set of syntactic-semantic patterns with their own role-case representation. Once the patterns are built, we use an approximate tree pattern-matching algorithm to identify the most reliable pattern for a sentence. The pattern matching is performed between the syntactic-semantic patterns and the feature-structure tree representing the morphological, syntactic and semantic information of the analysed sentence. For sentences assigned to the correct model, the semantic parsing system presented here correctly identifies more than 73% of the possible semantic case-roles.
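A rough picture of the approximate tree pattern matching mentioned above, with trees as nested (label, children) tuples. The greedy child alignment and the normalisation are illustrative assumptions, not the paper's actual algorithm.

def similarity(pattern, tree) -> float:
    """Approximate match between a syntactic-semantic pattern and a sentence tree."""
    p_label, p_kids = pattern
    t_label, t_kids = tree
    s = 1.0 if p_label == t_label else 0.0
    # Greedily align each pattern child with its best-matching tree child.
    used = set()
    for pk in p_kids:
        best, best_j = 0.0, None
        for j, tk in enumerate(t_kids):
            if j in used:
                continue
            cand = similarity(pk, tk)
            if cand > best:
                best, best_j = cand, j
        if best_j is not None:
            used.add(best_j)
        s += best
    return s / (1 + len(p_kids))  # normalise by pattern size

pattern = ("S", [("agent", []), ("V:give", []), ("theme", [])])
tree    = ("S", [("agent", []), ("V:give", []), ("beneficiary", [])])
print(round(similarity(pattern, tree), 2))  # 0.75: two of three roles match

Choosing the most reliable pattern for a sentence then amounts to running this similarity over all candidate patterns and keeping the argmax.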
Proceedings of the 1st North American Chapter of the Association For Computational Linguistics Conference, 2000
We introduce a framework for semantic interpretation in which dependency structures are mapped to conceptual representations based on a parsimonious set of interpretation schemata. Our focus is on the empirical evaluation of this approach to semantic interpretation, i.e., its quality in terms of recall and precision. Measurements are taken with respect to two real-world domains, viz. information technology test reports and medical finding reports.
Proceedings of the Workshop on Beyond Named Entity Recognition: Semantic Labelling for NLP Tasks, held in association with the 4th International Conference on Language Resources and Evaluation (LREC 2004), 2004
The UCREL semantic analysis system (USAS) is a software tool for undertaking the automatic semantic analysis of English spoken and written data. This paper describes the software system, and the hierarchical semantic tag set containing 21 major discourse fields and 232 fine-grained semantic field tags. We discuss the manually constructed lexical resources on which the system relies, and the seven disambiguation methods including part-of-speech tagging, general likelihood ranking, multi-word-expression extraction, domain of discourse identification, and contextual rules. We report an evaluation of the accuracy of the system compared to a manually tagged test corpus on which the USAS software obtained a precision value of 91%. Finally, we make reference to the applications of the system in corpus linguistics, content analysis, software engineering, and electronic dictionaries.
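Of the seven disambiguation methods listed above, general likelihood ranking constrained by part-of-speech is the easiest to sketch. In this hypothetical Python fragment the lexicon entries and tag labels are only loosely modelled on USAS-style tags, and domain-of-discourse identification is reduced to an optional re-ranking hook.

# Toy lexicon: (lemma, POS) -> candidate semantic tags, most likely first.
LEXICON = {
    ("bank", "NN"): ["I1 (money)", "W3 (geography)"],
    ("bank", "VB"): ["A9 (getting/possession)"],
}

def tag(lemma: str, pos: str, context_domain: str | None = None) -> str:
    """Pick a semantic tag: POS filters candidates, likelihood ranks them."""
    candidates = LEXICON.get((lemma, pos), ["Z99 (unmatched)"])
    if context_domain:  # domain-of-discourse identification can re-rank
        for c in candidates:
            if context_domain in c:
                return c
    return candidates[0]  # fall back to the general likelihood ranking

print(tag("bank", "NN"))                              # I1 (money)
print(tag("bank", "NN", context_domain="geography"))  # W3 (geography)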
2000
LXGram is a hand-built Portuguese computational grammar based on HPSG (syntax) and MRS (semantics). LXGram participated in the STEP 2008 shared task, which aims at comparing semantic representations produced by NLP systems such as LXGram. Every participating team had to contribute a small text. The text we submitted for the shared task was originally in Portuguese (an excerpt from a newspaper) and was translated into English to make a meaningful comparison at the shared task possible. Likewise, the English texts contributed by the other participating teams were translated into Portuguese. Because LXGram generates many different analyses (mainly due to PP attachment ambiguities), the preferred analysis was selected manually. We had to extend LXGram's lexicon and inventory of syntax rules to obtain reasonable performance on the shared task data. Eventually, our system was able to produce an analysis for 20 out of the 30 sentences of the shared task data.
Proceedings of the 21st annual meeting on Association …, 1983
Traditionally, translation from the parse tree representing a sentence to a semantic representation (such as frames or procedural semantics) has always been the most ad hoc part of natural language understanding (NLU) systems. However, recent advances in linguistics, most notably the system of formal semantics known as Montague semantics, suggest ways of putting NLU semantics onto a cleaner and firmer foundation. We are using a Montague-inspired approach to semantics in an integrated NLU and problem-solving system that we are building. Like Montague's, our semantics are compositional by design and strongly typed, with semantic rules in one-to-one correspondence with the meaning-affecting rules of a Marcus-style parser. We have replaced Montague's semantic objects, functors and truth conditions, with the elements of the frame language Frail, and added a word sense and case slot disambiguation system. The result is a foundation for semantic interpretation that we believe to be superior to previous approaches.
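The core idea above, one semantic rule per syntax rule, with meanings composed by function application and frame-like objects in place of truth conditions, can be suggested in a few lines of Python. Everything here (the toy lexicon, the single S -> NP VP rule, the dictionary output standing in for Frail assertions) is an illustrative assumption, not the paper's system.

# Lexical semantics: words denote constants or functions over meanings.
LEXICON = {
    "John":   "john",
    "sleeps": lambda subj: {"frame": "sleep", "agent": subj},
}

# One semantic rule per syntax rule: S -> NP VP pairs with "apply VP to NP".
def rule_S(np_sem, vp_sem):
    return vp_sem(np_sem)

def interpret(sentence: str):
    """Toy parser for two-word 'NP VP' sentences, composed semantically."""
    np_word, vp_word = sentence.split()
    return rule_S(LEXICON[np_word], LEXICON[vp_word])

print(interpret("John sleeps"))  # {'frame': 'sleep', 'agent': 'john'}

Because each syntax rule carries exactly one semantic rule, the interpretation is compositional by construction: the meaning of the whole is determined by the meanings of the parts and the rule that combines them.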
2009
Right from Senseval's inception there have been questions over the choice of sense inventory for word sense disambiguation (Kilgarriff, 1998). While researchers usually acknowledge the issues with predefined listings produced by lexicographers, such lexical resources have been a major catalyst to work on annotating words with meaning. As well as the heavy reliance on manually produced sense inventories, the work on word sense disambiguation has focused on the task of selecting the single best sense from the predefined inventory for each given token instance. There is little evidence that the state-of-the-art level of success is sufficient to benefit applications. We also have no evidence that the systems we build are interpreting words in context in the way that humans do. One direction that has been explored for practical reasons is that of finding a level of granularity where annotators and systems can do the task with a high level of agreement (Navigli et al., 2007; Hovy et al., 2006). In this talk I will discuss some alternative annotations using synonyms (McCarthy and Navigli, 2007), translations (Sinha et al., 2009) and WordNet senses with graded judgments (Erk et al., to appear) which are not proposed as a panacea to the issue of semantic representation but will allow us to look at word usages in a more graded fashion and which are arguably better placed to reflect the phenomena we wish to capture than the 'winner takes all' strategy. References: Katrin Erk, Diana McCarthy, and Nick Gaylord. Investigations on word senses and word usages. In Proceedings of ACL-IJCNLP 2009, to appear.
2009
SemEval-2007, the Fourth International Workshop on Semantic Evaluations (Agirre et al. 2007) took place on June 23–24, 2007, as a co-located event with the 45th Annual Meeting of the ACL. It was the fourth semantic evaluation exercise, continuing on from the series of successful Senseval workshops. SemEval-2007 took place over a period of about six months, including the evaluation exercise itself and the summary workshop.
http://purl.org/dm/papers/hahn-meurers-…, 2011
One of the reasons for the popularity of dependency approaches in recent computational linguistics is their ability to efficiently derive the core functor-argument structure of a sentence as an interface to semantic interpretation. Exploring this feature of dependency structures further, in this paper we show how basic dependency representations can be mapped to semantic representations as used in Lexical Resource Semantics (Richter and Sailer 2003), an underspecified semantic formalism originally developed for the HPSG framework (Pollard and Sag 1994) and its elaborate syntactic representations.
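A minimal sketch of such a mapping: dependency triples are rewritten, one interpretation schema per dependency label, into elementary predications of the kind underspecified formalisms like LRS build on. The schema table and the predication format are illustrative assumptions, not the paper's actual rules.

# One interpretation schema per dependency label.
SCHEMATA = {
    "nsubj": lambda head, dep: (head, "ARG1", dep),
    "dobj":  lambda head, dep: (head, "ARG2", dep),
    "amod":  lambda head, dep: (dep, "ARG1", head),  # adjective predicates of its noun
}

def to_predications(triples):
    """Map (head, label, dependent) triples to elementary predications."""
    return [SCHEMATA[label](head, dep) for head, label, dep in triples
            if label in SCHEMATA]

deps = [("chase", "nsubj", "dog"), ("chase", "dobj", "cat"), ("cat", "amod", "black")]
print(to_predications(deps))
# [('chase', 'ARG1', 'dog'), ('chase', 'ARG2', 'cat'), ('black', 'ARG1', 'cat')]

Scope relations among the predications would then be left underspecified, which is exactly the division of labour that formalisms such as LRS are designed for.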
1975
The course in parsing English is essentially a survey and comparison of several of the principal systems used for understanding natural language. The basic procedure of parsing is described. The discussion of the principal systems is based on the idea that "meaning is procedures," that is, that the procedures of application give a parsed structure its significance. Natural language systems should be content- rather than structure-motivated, i.e. they should be concerned with linguistic problems revealed by parsing rather than with the relation of the proposed structure of the system to the structures of other systems. Within this framework, Winograd's understanding system, SHRDLU, is described and discussed, as are the second-generation systems of Simmons, Schank, Colby and Wilks. A subsequent discussion compares all these systems. Concluding remarks outline immediate problems, including the need for a good memory model and the use of texts, rather than individual example sentences, for investigation.
Proceedings of the Workshop on Deep Linguistic Processing - DeepLP '07, 2007
This workshop was conceived with the aim of bringing together the different computational linguistic subcommunities which model language predominantly by way of theoretical syntax, either in the form of a particular theory (e.g. CCG, HPSG, LFG, TAG or the Prague School) or a more general framework which draws on theoretical and descriptive linguistics. We characterise this style of computational linguistic research as deep linguistic processing, because it aspires to model the complexity of natural language in rich linguistic representations. Aspects of this research have in the past had their own separate fora, such as the ACL 2005 workshop on deep lexical acquisition, as well as TAG+, Alpino, ParGram and DELPH-IN meetings. However, since the fundamental approach of building a linguistically founded system, as well as many of the techniques used to engineer efficient systems, are common across these projects and independent of the specific grammar formalism chosen, we felt the need for a common meeting in which experiences could be shared among a wider community.
Proceedings of the 10th …, 1984
This paper presents extensions to the work of Bobrow and Webber [Bobrow&Webber 80a, Bobrow&Webber 80b] on semantic interpretation using KL-ONE to represent knowledge. The approach is based on an extended case frame formalism applicable to all types of phrases, not just clauses. The frames are used to recognize semantically acceptable phrases, identify their structure, and relate them to their meaning representation through translation rules.
Latent Semantic Analysis (LSA) has been shown to perform many linguistic tasks as well as humans do, and has been put forward as a model of human linguistic competence. But LSA pays no attention to word order, much less sentence structure. Researchers in Natural Language Processing have made significant progress in quickly and accurately deriving the syntactic structure of texts. But there is little agreement on how best to represent meaning, and the representations are brittle and difficult to build. This paper evaluates a model of language understanding that combines information from rule-based syntactic processing with a vector-based semantic representation which is learned from a corpus. The model is evaluated as a cognitive model, and as a potential technique for natural language understanding.
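One way to see why adding structure matters: in the Python sketch below (an assumption-laden illustration, not the model evaluated in the paper), random stand-ins for corpus-learned vectors are composed along subject/object roles via circular convolution, a published binding operation from holographic representations, so "dog bites man" and "man bites dog" receive distinct yet related representations, which an order-blind bag-of-words average cannot provide.

import numpy as np

rng = np.random.default_rng(0)
DIM = 64
# Stand-ins for corpus-learned (e.g. LSA) word vectors and role vectors.
vec = {w: rng.standard_normal(DIM) for w in ["dog", "bites", "man"]}
role = {r: rng.standard_normal(DIM) for r in ["subj", "obj"]}

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Circular convolution: binds a role vector to a filler vector."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def sentence_vector(verb: str, subj: str, obj: str) -> np.ndarray:
    # Syntax (who is subject, who is object) decides how vectors combine.
    return vec[verb] + bind(role["subj"], vec[subj]) + bind(role["obj"], vec[obj])

s1 = sentence_vector("bites", "dog", "man")
s2 = sentence_vector("bites", "man", "dog")
cos = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
print(f"related but not identical: cosine = {cos:.2f}")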
2008
ABSTRACT: In this article we discuss what constitutes a good choice of semantic representation, compare different approaches of constructing semantic representations for fragments of natural language, and give an overview of recent methods for employing inference engines for natural language understanding tasks. Keywords: Formal semantics, computational linguistics, automated reasoning, first-order representations, models.