Explanatory analogies make learning complex concepts easier by elaborately mapping a target concept onto a more familiar source concept. Solutions exist for automatically retrieving shorter metaphors from natural language text, but not for explanatory analogies. In this paper, we propose an approach to find webpages containing explanatory analogies for a given target concept. For this, we propose the use of a 'region of interest' (ROI) based on the observation that linguistic markers and source concepts often co-occur with various forms of the word 'analogy'. We also suggest an approach to identify the source concept(s) contained in a retrieved analogy webpage. We demonstrate these approaches on a dataset created using Google Custom Search to find candidate webpages that may contain analogies.
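A minimal sketch of the region-of-interest idea described above, assuming plain page text as input; the marker list, window size, and function names are illustrative choices rather than the paper's.

```python
import re

# Illustrative linguistic markers that often introduce a source concept.
MARKERS = ["like", "just as", "similar to", "think of", "imagine", "as if"]

def analogy_regions(text, window=300):
    """Text windows (regions of interest) around forms of the word 'analogy'."""
    regions = []
    for m in re.finditer(r"\banalog(?:y|ies|ous|ical)\b", text, re.IGNORECASE):
        start, end = max(0, m.start() - window), min(len(text), m.end() + window)
        regions.append(text[start:end])
    return regions

def looks_like_explanatory_analogy(text):
    """Heuristic: a page qualifies if a marker co-occurs with 'analogy' in some ROI."""
    return any(any(marker in roi.lower() for marker in MARKERS)
               for roi in analogy_regions(text))

page = ("The heart works by analogy with a pump: just as a pump pushes water "
        "through pipes, the heart pushes blood through vessels.")
print(looks_like_explanatory_analogy(page))  # True
```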
Humans use analogies to communicate, reason, and learn. But while the human brain excels at creating and understanding analogies, it does not easily recall useful analogies created or learned over time. General purpose tools and methods are needed that assist humans in representing, storing, and recalling useful analogies. Additionally, such tools must take advantage of the World Wide Web's ubiquity, global reach, and universal standards. We first identify commonly occurring patterns of analogy structure. Because understanding of instructional analogies is significantly improved when their structure is visualized, we develop a compact and general representation for analogies using XML, and demonstrate general methods for visualizing the structure of analogy expressions in Web-based environments.
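To make the representation idea concrete, here is a hedged sketch of how an analogy with explicit source/target mappings might be serialized as XML from Python; the element names are invented for illustration and are not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Illustrative schema only; the element names are invented, not the paper's vocabulary.
analogy = ET.Element("analogy")
ET.SubElement(analogy, "source", name="water flow")
ET.SubElement(analogy, "target", name="electric current")
mappings = ET.SubElement(analogy, "mappings")
for src, tgt in [("pipe", "wire"), ("pressure", "voltage"), ("flow rate", "current")]:
    ET.SubElement(mappings, "map", source=src, target=tgt)

print(ET.tostring(analogy, encoding="unicode"))
# <analogy><source name="water flow" /><target name="electric current" />...</analogy>
```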
Cognitive Science, 2017
Making analogies is an important way for people to explain and understand new concepts. Though making analogies is natural for human beings, it is not a trivial task for a dialogue agent. Making analogies requires the agent to establish a correspondence between concepts in two different domains. In this work, we explore a data-driven approach for making analogies automatically. Our proposed approach works with data represented as a flat graphical structure, which can either be designed manually or extracted from Internet data. For a given concept from the base domain, our analogy agent can automatically suggest a corresponding concept from the target domain, and a set of mappings between the relationships each concept has as supporting evidence. We demonstrate the working of this algorithm by both reproducing a classical example of analogy inference and making analogies in new domains generated from DBPedia data.
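A toy rendering of the kind of correspondence search the abstract describes, with hand-written triples standing in for graphs extracted from DBPedia; scoring by overlap of relation signatures is a simplification, not the paper's algorithm.

```python
# Toy knowledge: (subject, relation, object) triples for a base and a target domain.
SOLAR_SYSTEM = [("sun", "attracts", "planet"), ("planet", "orbits", "sun"),
                ("sun", "more_massive_than", "planet")]
ATOM = [("nucleus", "attracts", "electron"), ("electron", "orbits", "nucleus"),
        ("nucleus", "more_massive_than", "electron")]

def relation_signature(entity, triples):
    """Relations an entity participates in, tagged by argument position."""
    return ({(rel, "subj") for s, rel, o in triples if s == entity} |
            {(rel, "obj") for s, rel, o in triples if o == entity})

def suggest_correspondence(base_entity, base_triples, target_triples):
    """Target entity whose relation signature overlaps most with the base entity's."""
    base_sig = relation_signature(base_entity, base_triples)
    candidates = {s for s, _, _ in target_triples} | {o for _, _, o in target_triples}
    return max(candidates,
               key=lambda c: len(base_sig & relation_signature(c, target_triples)))

print(suggest_correspondence("sun", SOLAR_SYSTEM, ATOM))     # nucleus
print(suggest_correspondence("planet", SOLAR_SYSTEM, ATOM))  # electron
```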
World Wide Web, 2000
Artificial Intelligence and Cognitive Science, 2002
RADAR is a model of analogy retrieval that employs the principle of systematicity as its primary retrieval cue. RADAR was created to address the current bias toward semantics in analogical retrieval models, to the detriment of structural factors. RADAR recalls 100% of structurally identical domains. We describe a technique based on "derived attributes" that captures structural descriptions of the domain's representation rather than its contents. We detail their use, recall, and performance within RADAR through empirical evidence, and contrast RADAR with existing models of analogy retrieval. We also demonstrate that RADAR can retrieve both semantically related and semantically unrelated domains, even without a complete target description, a limitation that plagues current models.
Cornell University - arXiv, 2022
Analogy-making gives rise to reasoning, abstraction, flexible categorization and counterfactual inference, abilities lacking in even the best AI systems today. Much research has suggested that analogies are key to non-brittle systems that can adapt to new domains. Despite their importance, analogies have received little attention in the NLP community, with most research focusing on simple word analogies. Work that tackled more complex analogies relied heavily on manually constructed, hard-to-scale input representations such as CAUSE(PULL(piston), CAUSE(GREATER(PRESSURE(water), PRESSURE(pipe)), FLOW(water, pipe))). In this work, we explore a more realistic, challenging setup: our input is a pair of natural language procedural texts describing a situation or a process (e.g., how the heart works / how a pump works). Our goal is to automatically extract entities and their relations from the text and find a mapping between the different domains based on relational similarity (e.g., blood is mapped to water). We develop an interpretable, scalable algorithm and demonstrate that it identifies the correct mappings 87% of the time for procedural texts and 94% for stories from the cognitive-psychology literature. We show it can extract analogies from a large dataset of procedural texts, achieving 79% precision (analogy prevalence in the data: 3%). Lastly, we demonstrate that our algorithm is robust to paraphrasing of the input texts.
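A hedged sketch of mapping by relational similarity over natural language relation phrases; the hand-made phrases and the bag-of-words cosine below are stand-ins for the paper's extraction and similarity machinery.

```python
from collections import Counter
from math import sqrt

def cosine(tokens_a, tokens_b):
    """Bag-of-words cosine similarity between two token lists."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Relation phrases each entity participates in (hand-made; the paper extracts these).
heart_relations = {"blood":   ["the heart pumps blood through vessels"],
                   "vessels": ["blood flows through vessels"]}
pump_relations = {"water": ["the pump pushes water through pipes"],
                  "pipes": ["water flows through pipes"]}

def best_mapping(base, target):
    """Map each base entity to the target entity with the most similar relation phrases."""
    mapping = {}
    for b, b_phrases in base.items():
        score = lambda t: max(cosine(p.split(), q.split())
                              for p in b_phrases for q in target[t])
        mapping[b] = max(target, key=score)
    return mapping

print(best_mapping(heart_relations, pump_relations))  # {'blood': 'water', 'vessels': 'pipes'}
```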
2000
and expensive. A central problem that demands an automated solution is the discovery and incorporation of lexical semantic relations, or semantic relations between concepts. Lexical semantic relations are the fundamental building blocks that allow words to be associated with each other and linked together to form cohesive text. Despite their importance, lexical semantic relations are severely underrepresented
European Conference on Artificial Intelligence ECAI' …
Analogical reasoning is an acknowledged process behind many episodes of creativity. Typically, the creator chances upon information unrelated to the given problem, and solves the problem by analogy with this accidental source of inspiration. Current models of analogical retrieval do not explain how semantically unrelated source domains are retrieved. We present the RADAR algorithm that maps domains into a separate structure space, where domains with similar topological attributes are co-located. Each axis in structure space records the occurrence frequency of that feature in each domain. Nearest neighbour retrieval in structure space identifies structurally similar domains from a diversity of semantic backgrounds. Structure-based retrieval opens the possibility for creating an analogy model with far greater creativity potential than human reasoning.
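The sketch below illustrates the structure-space idea under simple assumptions: each domain is a list of nested predicate tuples, the three structural features are invented stand-ins for RADAR's derived attributes, and retrieval is plain nearest-neighbour search.

```python
from math import sqrt

def depth(expr):
    """Nesting depth of a predicate expression given as nested tuples."""
    if not isinstance(expr, tuple):
        return 0
    return 1 + max((depth(arg) for arg in expr[1:]), default=0)

def structure_vector(domain):
    """Map a domain (a list of nested predicate tuples) into 'structure space'."""
    n_predicates = len(domain)
    max_depth = max(depth(p) for p in domain)
    n_arguments = sum(len(p) - 1 for p in domain)
    return (n_predicates, max_depth, n_arguments)

def retrieve(target, memory):
    """Nearest neighbour in structure space; semantic content plays no role."""
    tv = structure_vector(target)
    def dist(item):
        return sqrt(sum((a - b) ** 2 for a, b in zip(tv, structure_vector(item[1]))))
    return min(memory.items(), key=dist)[0]

solar = [("attracts", "sun", "planet"), ("orbits", "planet", "sun")]
atom = [("attracts", "nucleus", "electron"), ("orbits", "electron", "nucleus")]
story = [("loves", ("brother", "of", "anna"), "anna")]
print(retrieve(solar, {"atom": atom, "story": story}))  # atom
```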
2016
Humans regularly exploit analogical reasoning to generate potentially novel and useful inferences. We outline the Dr Inventor model that identifies analogies between research publications, describing recent work to evaluate the inferences that are generated by the system. Its inferences, in the form of subject-verb-object triples, can involve arbitrary combinations of source and target information. We evaluate three approaches to assess the quality of inferences. Firstly, we explore an n-gram based approach (derived from the Dr Inventor corpus). Secondly, we use ConceptNet as a basis for evaluating inferences. Finally, we explore the use of Watson Concept Insights (WCI) to support our inference evaluation process. Dealing with novel inferences arising from an ever-growing corpus is a central concern throughout.
Interpreting metaphor is a hard but important problem in natural language processing that has numerous applications. One way to address this task is by finding a paraphrase that can replace the metaphorically used word in a given context. This approach has been previously implemented only within supervised frameworks, relying on manually constructed lexical resources, such as WordNet. In contrast, we present a fully unsupervised metaphor interpretation method that extracts literal paraphrases for metaphorical expressions from the Web. It achieves a precision of 0.42, which is high for an unsupervised paraphrasing approach. Moreover, the method significantly outperforms both the baseline and the selectional preference-based method of Shutova employed in an unsupervised setting.
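A minimal sketch of context-based paraphrase retrieval: candidate literal verbs are those attested with the same direct object in a corpus, ranked by frequency. The toy pairs and the ranking are illustrative only, not the paper's web-based pipeline.

```python
from collections import Counter

# (verb, direct-object) pairs as they might be harvested from web text (hand-made here).
corpus_pairs = [("suppress", "anger"), ("control", "anger"), ("suppress", "anger"),
                ("express", "anger"), ("stifle", "smile"), ("control", "temper")]

def paraphrase_candidates(metaphorical_verb, obj, pairs):
    """Rank substitute verbs attested with the same object, excluding the original verb."""
    counts = Counter(v for v, o in pairs if o == obj and v != metaphorical_verb)
    return counts.most_common()

# Literal readings for the metaphorical "stifle anger":
print(paraphrase_candidates("stifle", "anger", corpus_pairs))
# [('suppress', 2), ('control', 1), ('express', 1)]
```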
2007
Examples of figurative language can range from the explicit and the obvious to the implicit and downright enigmatic. Some simpler forms, like simile, often wear their meanings on their sleeve, while more challenging forms, like metaphor, can make cryptic allusions more akin to those of riddles or crossword puzzles. In this paper we argue that because the same concepts and properties are described in either case, a computational agent can learn from the easy cases (explicit similes) how to comprehend and generate the hard cases (non-explicit metaphors). We demonstrate that the markedness of similes allows for a large case-base of illustrative examples to be easily acquired from the web, and present a system, called Sardonicus, that uses this case-base both to understand property-attribution metaphors and to generate apt metaphors for a given target on demand. In each case, we show how the text of the web is used as a source of tacit knowledge about what categorizations are allowable and what properties are most contextually appropriate. Overall, we demonstrate that by using the web as a primary knowledge source, a system can achieve a robust and scalable competence with metaphor while minimizing the need for hand-crafted resources like WordNet.
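A small sketch of how explicit similes can be harvested with a surface pattern and then reused for property attribution; the regex and the in-memory text stand in for the web acquisition described above.

```python
import re

# Marked simile pattern "as ADJ as a NOUN"; the paper harvests such cases from
# web search results, here we scan a small in-memory text.
SIMILE = re.compile(r"\bas\s+(\w+)\s+as\s+an?\s+(\w+)", re.IGNORECASE)

def harvest_similes(text):
    """Collect (property, source-concept) pairs from explicit similes."""
    return [(adj.lower(), noun.lower()) for adj, noun in SIMILE.findall(text)]

snippets = ("He was as cunning as a fox and as stubborn as a mule, "
            "while the report stayed as dry as a bone.")
case_base = harvest_similes(snippets)
print(case_base)  # [('cunning', 'fox'), ('stubborn', 'mule'), ('dry', 'bone')]

# A property-attribution metaphor such as "that lawyer is a fox" can then be read
# against the case base: which properties are conventionally ascribed to foxes?
print([prop for prop, src in case_base if src == "fox"])  # ['cunning']
```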
arXiv (Cornell University), 2023
Analogy is one of the core capacities of human cognition; when faced with new situations, we often transfer prior experience from other domains. Most work on computational analogy relies heavily on complex, manually crafted input. In this work, we relax the input requirements, requiring only names of entities to be mapped. We automatically extract commonsense representations and use them to identify a mapping between the entities. Unlike previous works, our framework can handle partial analogies and suggest new entities to be added. Moreover, our method's output is easily interpretable, allowing for users to understand why a specific mapping was chosen. Experiments show that our model correctly maps 81.2% of classical 2x2 analogy problems (guess level=50%). On larger problems, it achieves 77.8% accuracy (mean guess level=13.1%). In another experiment, we show our algorithm outperforms human performance, and the automatic suggestions of new entities resemble those suggested by humans. We hope this work will advance computational analogy by paving the way to more flexible, realistic input requirements, with broader applicability.
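A rough sketch, under stated assumptions, of extracting commonsense profiles from ConceptNet's public API and greedily aligning entities whose profiles overlap most; the overlap scoring is a crude stand-in for the paper's mapping algorithm, and real profiles are sparse, so ties are common.

```python
import requests

def conceptnet_profile(term, limit=50):
    """Commonsense profile: set of (relation, neighbour) pairs from ConceptNet."""
    url = f"http://api.conceptnet.io/c/en/{term}"
    edges = requests.get(url, params={"limit": limit}, timeout=10).json()["edges"]
    profile = set()
    for e in edges:
        if e["start"]["@id"].startswith(f"/c/en/{term}"):
            other = e["end"]["label"]
        else:
            other = e["start"]["label"]
        profile.add((e["rel"]["label"], other.lower()))
    return profile

def map_entities(base_entities, target_entities):
    """Greedily map each base entity to the target entity whose profile overlaps most."""
    targets = {t: conceptnet_profile(t) for t in target_entities}
    mapping = {}
    for b in base_entities:
        b_profile = conceptnet_profile(b)
        mapping[b] = max(targets, key=lambda t: len(b_profile & targets[t]))
    return mapping

# Example call (requires network access):
# map_entities(["sun", "planet"], ["nucleus", "electron"])
```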
APPLIED INFORMATICS-PROCEEDINGS-, 2001
Retrieving analogies from presented problem data is an important phase of analogical reasoning, influencing many related cognitive processes. Existing models have focused on semantic similarity, but structural similarity is also a necessary requirement of any analogical comparison. We present a new technique for performing structure-based analogy retrieval. It is founded upon derived attributes that explicitly encode elementary structural qualities of a domain's representation. Crucially, these attributes are unrelated to the semantic content of the domain information and encode only its structural qualities. We describe a number of derived attributes and detail the computation of the corresponding attribute values. We examine our model's operation, detailing how it retrieves both semantically related and unrelated domains. We also present a comparison of our algorithm's performance with existing models, using a structure-rich but semantically impoverished domain.
Empirical Methods in Natural Language Processing, 2007
A lexical analogy is a pair of word-pairs that share a similar semantic relation. Lexical analogies occur frequently in text and are useful in various natural language processing tasks. In this study, we present a system that generates lexical analogies automatically from text data. Our system discovers semantically related pairs of words by using dependency relations, and applies novel machine learning algorithms to match these word-pairs to form lexical analogies. Empirical evaluation shows that our system generates valid lexical analogies with a precision of 70%, and produces quality output, although not at the level of the best human-generated lexical analogies.
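A toy sketch of the matching step: each word pair is described by the contexts observed between its members, and two pairs form a lexical analogy candidate when their context sets are sufficiently similar. The hand-made contexts and the Jaccard threshold are illustrative, not the paper's learned matcher.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Each word pair is described by the dependency-style contexts observed between
# its members (hand-made here; the paper derives these from parsed text).
pair_contexts = {
    ("carpenter", "wood"):  {"works_with", "cuts", "shapes"},
    ("mason", "stone"):     {"works_with", "cuts", "shapes"},
    ("teacher", "student"): {"instructs", "grades"},
    ("doctor", "patient"):  {"treats", "examines"},
}

def lexical_analogies(contexts, threshold=0.5):
    """Return pairs of word-pairs whose shared contexts suggest the same relation."""
    pairs = list(contexts)
    return [(p, q) for i, p in enumerate(pairs) for q in pairs[i + 1:]
            if jaccard(contexts[p], contexts[q]) >= threshold]

print(lexical_analogies(pair_contexts))
# [(('carpenter', 'wood'), ('mason', 'stone'))]
```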
Journal of Experimental and Theoretical Artificial Intelligence , 2022
In this paper, we outline a comprehensive approach to composed analogies based on the theory of conceptual spaces. Our algorithmic model understands analogy as a search procedure and builds upon the idea that analogical similarity depends on a conceptual phenomenon called 'dimensional salience.' We distinguish between category-based, property-based, event-based, and part-whole analogies, and propose computationally-oriented methods for explicating them in terms of conceptual spaces.
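A minimal sketch of dimensional salience in a conceptual space: distance is a weighted sum over quality dimensions, and changing which dimension is salient changes which analogue is retrieved. The dimensions, values, and weights are invented for illustration.

```python
from math import sqrt

# Toy conceptual space: each concept is a point over named quality dimensions.
concepts = {
    "lemon": {"hue": 0.15, "sweetness": 0.1, "size": 0.2},
    "sun":   {"hue": 0.14, "sweetness": 0.0, "size": 1.0},
    "grape": {"hue": 0.80, "sweetness": 0.8, "size": 0.1},
}

def distance(a, b, salience):
    """Weighted Euclidean distance; salience weights say which dimensions matter."""
    return sqrt(sum(salience.get(d, 0.0) * (a[d] - b[d]) ** 2 for d in a))

def closest_analogue(target, candidates, salience):
    return min(candidates, key=lambda c: distance(concepts[target], concepts[c], salience))

# With hue (colour) salient, 'sun' is the closer analogue for 'lemon';
# with size salient instead, 'grape' wins.
print(closest_analogue("lemon", ["sun", "grape"], {"hue": 1.0}))   # sun
print(closest_analogue("lemon", ["sun", "grape"], {"size": 1.0}))  # grape
```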
Strategies for the Semi-Automatic Retrieval of Metaphorical Terms, 2011
This article proposes a method for the semi-automatic extraction of resemblance metaphor terms from a manually annotated corpus of marine biology texts in English and Spanish. The corpus was first searched for target domain terms as well as for lexical markers indicative of metaphors. The combination of these search strategies for metaphor extraction resulted in a set of English-Spanish term pairs. After analysing and comparing these metaphor candidates, a quantitative analysis provided comparative statistical data regarding marine biology metaphor. Finally, the metaphorical nature of marine biology terms was verified in three ways. The first verification strategy entailed an adapted version of the Metaphor Identification Procedure (Pragglejaz Group, 2007). The second involved the analysis of contextual data extracted from the corpus, and the third involved the analysis of visual images from an online marine biology database and from the Google search engine.
Full natural language understanding requires identifying and analyzing the meanings of metaphors, which are ubiquitous in both text and speech. Over the last thirty years, linguistic metaphors have been shown to be based on more general conceptual metaphors, partial semantic mappings between disparate conceptual domains. Though some achievements have been made in identifying linguistic metaphors over the last decade or so, little work has been done to date on automatically identifying conceptual metaphors. This paper describes research on identifying conceptual metaphors based on corpus data. Our method uses as little background knowledge as possible, to ease transfer to new languages and to minimize any bias introduced by the knowledge base construction process. The method relies on general heuristics for identifying linguistic metaphors and statistical clustering (guided by WordNet) to form conceptual metaphor candidates. Human experiments show the system effectively finds meaningful conceptual metaphors.
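A hedged sketch of WordNet-guided grouping of metaphor source terms: source verbs that share a hypernym are clustered into a candidate source domain. This is a much cruder stand-in for the paper's statistical clustering; it assumes NLTK and its WordNet data are installed.

```python
# Requires: pip install nltk; python -m nltk.downloader wordnet
from collections import defaultdict
from nltk.corpus import wordnet as wn

def hypernym_labels(word, pos):
    """Hypernym synset names reachable from the word's first few senses."""
    labels = set()
    for syn in wn.synsets(word, pos=pos)[:3]:
        for path in syn.hypernym_paths():
            labels.update(s.name() for s in path)
    return labels

def cluster_source_terms(source_terms, pos=wn.VERB):
    """Group metaphor source terms by a shared hypernym; each group is a
    candidate source domain for a conceptual metaphor."""
    clusters = defaultdict(set)
    for term in source_terms:
        for label in hypernym_labels(term, pos):
            clusters[label].add(term)
    return {h: ts for h, ts in clusters.items() if len(ts) > 1}

# Source verbs observed with a target noun such as "argument":
print(cluster_source_terms(["attack", "defend", "demolish"]))
```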
2003
Innovative applications often occur at the juncture of radically different domains of research. This paper describes an emerging application, called the analogical thesaurus, that arises at the boundary of two very different domains, the highly applied domain of information retrieval and the esoteric domain of lexical metaphor interpretation. This application has the potential not just to improve the utility of conventional electronic thesauri, but to serve as an intelligent mapping component in any system that uses analogical reasoning or case-based reasoning.
2006
Analogy and metaphor are extremely knowledge-hungry processes, so one should question whether lightweight lexical ontologies like WordNet are sufficiently rich to support them. In this paper we argue that resources like WordNet are suited to the processing of certain kinds of lexical analogies and metaphors, for which we propose a spatially-motivated typology and a corresponding computational model. We identify two kinds of dimension that are important in lexical analogy - lexicalized (taxonomic) dimensions and ad-hoc (goal-specific) dimensions - and describe how these can be automatically identified, extracted and exploited in WordNet.
1996
This paper defines and analyses a computational model of similarity which detects analogies between objects based on conceptual descriptions of them, constructed from classification and generalization relations and attributes. Analogies are detected (elaborated) by functions which measure conceptual distances between objects with respect to these semantic modelling abstractions. The model is domain independent and operational upon objects described in non-uniform ways.
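A toy rendering of such a conceptual-distance function: taxonomic distance through shared generalizations plus a penalty for every attribute the two object descriptions do not share. The class hierarchy, attributes, and weighting are illustrative assumptions, not the paper's exact measures.

```python
def generalizations(cls, parents):
    """Chain of classes from cls up to the root of the generalization hierarchy."""
    chain = [cls]
    while cls in parents:
        cls = parents[cls]
        chain.append(cls)
    return chain

def conceptual_distance(a, b, parents):
    """Taxonomic distance to the nearest shared generalization, plus a penalty
    for every attribute the two object descriptions do not share."""
    ga, gb = generalizations(a["class"], parents), generalizations(b["class"], parents)
    common = next((c for c in ga if c in gb), None)
    taxonomic = ga.index(common) + gb.index(common) if common else len(ga) + len(gb)
    attr_mismatch = len(set(a["attrs"]) ^ set(b["attrs"]))
    return taxonomic + attr_mismatch

parents = {"sedan": "car", "car": "vehicle", "truck": "vehicle"}
sedan = {"class": "sedan", "attrs": {"wheels", "engine", "doors"}}
truck = {"class": "truck", "attrs": {"wheels", "engine", "cargo_bed"}}
print(conceptual_distance(sedan, truck, parents))  # 3 + 2 = 5
print(conceptual_distance(sedan, sedan, parents))  # 0 (identical descriptions)
```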