Since 2005, researchers have worked on a broad task called Recognizing Textual Entailment (RTE), which is designed to focus efforts on general textual inference capabilities, but without constraining participants to use a specific representation or reasoning approach. There have been promising developments in this sub-field of Natural Language Processing (NLP), with systems showing steady improvement, and investigations of a range of approaches to the problem.
2009
Abstract The goal of identifying textual entailment (whether one piece of text can be plausibly inferred from another) has emerged in recent years as a generic core problem in natural language understanding. Work in this area has been largely driven by the PASCAL Recognizing Textual Entailment (RTE) challenges, which are a series of annual competitive meetings.
2008
This paper describes our experiments on Textual Entailment in the context of the Fourth Recognising Textual Entailment (RTE-4) Evaluation Challenge at TAC 2008. Our system uses a Machine Learning approach with AdaBoost to deal with the RTE challenge. We perform a lexical, syntactic, and semantic analysis of the entailment pairs, and from this information we compute a set of semantic-based distances between sentences. We improved our RTE-3 baseline system with additional language processing techniques, a hypothesis classifier, and new semantic features. The results show no general improvement with respect to the baseline.
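A minimal sketch of the kind of pipeline this abstract describes, assuming scikit-learn's AdaBoostClassifier and two toy, hand-rolled distance features; the paper's actual features and distances are richer, so treat this as an illustration rather than the authors' system.

# Hypothetical sketch, not the authors' system: AdaBoost over simple
# sentence-distance features for two-way entailment classification.
from sklearn.ensemble import AdaBoostClassifier

def overlap_distance(text: str, hypothesis: str) -> float:
    # 1 minus the fraction of hypothesis tokens that also appear in the text
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return 1.0 - (len(t & h) / len(h) if h else 0.0)

def length_ratio(text: str, hypothesis: str) -> float:
    # cheap auxiliary feature: relative length of hypothesis and text
    return len(hypothesis.split()) / max(len(text.split()), 1)

pairs = [("a man is playing a guitar on stage", "a man is playing a guitar"),
         ("a man is playing a guitar on stage", "a woman is swimming in a pool")]
labels = [1, 0]  # 1 = entailment, 0 = no entailment

X = [[overlap_distance(t, h), length_ratio(t, h)] for t, h in pairs]
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict(X))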
Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, 2020
Recognizing Textual Entailment (RTE) was proposed as a unified evaluation framework to compare semantic understanding of different NLP systems. In this survey paper, we provide an overview of different approaches for evaluating and understanding the reasoning capabilities of NLP systems. We then focus our discussion on RTE by highlighting prominent RTE datasets as well as advances in RTE datasets that target specific linguistic phenomena and can be used to evaluate NLP systems at a fine-grained level. We conclude by arguing that when evaluating NLP systems, the community should utilize newly introduced RTE datasets that focus on specific linguistic phenomena.
journal" Research in Computing Science, 2008
Abstract. Textual Entailment Recognition (RTE) was proposed as a generic task, aimed at building modules capable of capturing the semantic variability of texts and performing natural language inferences. These modules can then be included in any NLP system, improving its performance in fine-grained semantic differentiation. The first part of the article describes our approach aimed at building a generic, language-independent TE system that would eventually be used as a module within a QA system. We evaluated the accuracy of ...
2000
With the goal of producing explainable entailment decisions, and ultimately having the computer "understand" the sentences it is processing, we have been pursuing a (somewhat) "logical" approach to recognizing entailment. First our system performs semantic interpretation of the sentence pairs. Then, it tries to determine if the (logic for the) H sentence subsumes (i.e., is implied by) some
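A toy illustration of the subsumption test sketched above, under the simplifying assumption (mine, not the paper's) that each sentence's logical form is just a set of ground predicates:

def subsumes(text_predicates: set, hyp_predicates: set) -> bool:
    # H is implied by T (in this toy encoding) when every H predicate also holds in T
    return hyp_predicates <= text_predicates

text_logic = {"play(man, guitar)", "on(man, stage)"}
hyp_logic = {"play(man, guitar)"}
print(subsumes(text_logic, hyp_logic))  # True: the hypothesis follows from the text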
Natural Language Engineering, 2010
The goal of identifying textual entailment (whether one piece of text can be plausibly inferred from another) has emerged in recent years as a generic core problem in natural language understanding. Work in this area has been largely driven by the PASCAL Recognizing Textual Entailment (RTE) challenges, which are a series of annual competitive meetings. The current work exhibits strong ties to some earlier lines of research, particularly automatic acquisition of paraphrases and lexical semantic relationships and unsupervised inference in applications such as question answering, information extraction and summarization. It has also opened the way to newer lines of research on more involved inference methods, on knowledge representations needed to support this natural language understanding challenge and on the use of learning methods in this context. RTE has fostered an active and growing community of researchers focused on the problem of applied entailment. This special issue of the JNLE provides an opportunity to showcase some of the most important work in this emerging area.
2010
We present our experiments on Recognizing Textual Entailment based on modeling the entailment relation as a classification problem. As features for classifying the entailment pairs we use a symmetric similarity measure and a non-symmetric similarity measure. Our system achieved an accuracy of 66% on the RTE-3 development dataset (with 10-fold cross-validation) and an accuracy of 63% on the RTE-3 test dataset.
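One plausible reading of the two feature types, sketched here with token sets (the paper's actual measures may differ): a symmetric Jaccard similarity, which is identical in both directions, and a non-symmetric coverage score that asks how much of the hypothesis is contained in the text.

def jaccard(text: str, hypothesis: str) -> float:
    # symmetric: the same value whichever fragment comes first
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(t & h) / len(t | h) if (t | h) else 0.0

def coverage(text: str, hypothesis: str) -> float:
    # non-symmetric: fraction of hypothesis tokens found in the text
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    return len(t & h) / len(h) if h else 0.0

t = "the cat sat on the mat in the kitchen"
h = "the cat sat on the mat"
print(jaccard(t, h), coverage(t, h), coverage(h, t))  # coverage is direction-sensitive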
2009
In this paper, we introduce our Recognizing Textual Entailment (RTE) system developed on the basis of Lexical Entailment between two text excerpts, namely the hypothesis and the text. To extract atomic parts of hypotheses and texts, we carry out syntactic parsing on the sentences. We then utilize WordNet and FrameNet lexical resources for estimating lexical coverage of the text on the hypothesis. We report the results of our RTE runs on the Text Analysis Conference RTE datasets. Using a failure analysis process, we also show that the main difficulty of our RTE system relates to the underlying difficulty of syntactic analysis of sentences.
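A rough sketch of the lexical-coverage idea, using NLTK's WordNet interface as a stand-in for the resources mentioned above; the paper also uses FrameNet and syntactic parsing, which are omitted here.

# Requires: pip install nltk, then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def synonyms(word: str) -> set:
    # all WordNet lemma names for the word, plus the word itself
    names = {lemma.name().lower() for syn in wn.synsets(word) for lemma in syn.lemmas()}
    return names | {word.lower()}

def lexical_coverage(text: str, hypothesis: str) -> float:
    # fraction of hypothesis tokens matched in the text, directly or via a synonym
    text_tokens = set(text.lower().split())
    hyp_tokens = hypothesis.lower().split()
    covered = sum(1 for w in hyp_tokens if synonyms(w) & text_tokens)
    return covered / len(hyp_tokens) if hyp_tokens else 0.0

print(lexical_coverage("a physician examined the patient", "a doctor examined the patient"))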
Proceedings of the COLING/ACL on Main conference poster sessions -, 2006
This paper proposes a knowledge representation model and a logic proving setting with axioms on demand, successfully used for recognizing textual entailment. It also details a lexical inference system which boosts the performance of the deep semantically oriented approach on the RTE data. The linear combination of two slightly different logical systems with the third lexical inference system achieves 73.75% accuracy on the RTE 2006 data.
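A toy sketch of the kind of score combination the abstract mentions; the weights and threshold below are hypothetical placeholders, not values from the paper.

def combine(logic1: float, logic2: float, lexical: float,
            weights=(0.4, 0.4, 0.2), threshold=0.5) -> bool:
    # linear combination of the three systems' confidence scores, then a cut-off
    score = weights[0] * logic1 + weights[1] * logic2 + weights[2] * lexical
    return score >= threshold

print(combine(0.9, 0.7, 0.4))  # True: the combined evidence clears the threshold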
Advances in Computational Intelligence and Robotics, 2020
Given two textual fragments, called a text and a hypothesis, respectively, recognizing textual entailment (RTE) is the task of automatically deciding whether the meaning of the second fragment (hypothesis) logically follows from the meaning of the first fragment (text). The chapter presents a method for RTE based on lexical similarity, dependency relations, and semantic similarity. In this method, called LSS-RTE, each of the two fragments is converted to a dependency graph, and the two obtained graph structures are compared using dependency triple matching rules, which have been compiled after a thorough and detailed analysis of various RTE development datasets. Experimental results show 60.5%, 64.4%, 62.8%, and 61.5% accuracy on the well-known RTE1, RTE2, RTE3, and RTE4 datasets, respectively, for the two-way classification task, and 54.3% accuracy for the three-way classification task on the RTE4 dataset.
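A rough sketch of the dependency-triple comparison at the heart of this method (not LSS-RTE itself, and without its matching rules or semantic similarity), assuming spaCy with the small English model installed:

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model has been downloaded

def triples(sentence: str) -> set:
    # (head lemma, dependency relation, dependent lemma) triples for the sentence
    doc = nlp(sentence)
    return {(tok.head.lemma_, tok.dep_, tok.lemma_) for tok in doc if tok.dep_ != "ROOT"}

def triple_overlap(text: str, hypothesis: str) -> float:
    # fraction of the hypothesis triples that also occur in the text
    t, h = triples(text), triples(hypothesis)
    return len(t & h) / len(h) if h else 0.0

print(triple_overlap("The company acquired the startup in 2019.",
                     "The company acquired the startup."))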
Proceedings of the ACL- …, 2007
Proceedings of the …, 2006
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, 2015
Second Pascal RTE …, 2006
International Journal of Computer Applications, 2016
Lecture Notes in Computer Science, 2014
Lecture Notes in Computer Science, 2010