2017, Cognitive Science
Making analogies is an important way for people to explain and understand new concepts. Though making analogies is natural for human beings, it is not a trivial task for a dialogue agent. Making analogies requires the agent to establish a correspondence between concepts in two different domains. In this work, we explore a data-driven approach for making analogies automatically. Our proposed approach works with data represented as a flat graphical structure, which can either be designed manually or extracted from Internet data. For a given concept from the base domain, our analogy agent can automatically suggest a corresponding concept from the target domain, together with a set of mappings between the relationships each concept has as supporting evidence. We demonstrate this algorithm both by reproducing a classical example of analogy inference and by making analogies in new domains generated from DBpedia data.
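The abstract above does not give the algorithm's details; the following is a minimal sketch of the general idea it describes, under the assumption that each domain is a flat graph of (subject, relation, object) triples and that candidate correspondences are ranked by how well the relations around two concepts overlap. All data and function names here are illustrative, not from the paper.

```python
# Hypothetical sketch: suggest a target-domain counterpart for a base-domain
# concept by comparing the multiset of relation labels each concept touches.

from collections import Counter

def relation_signature(graph, concept):
    """Multiset of (relation, direction) labels incident to `concept`."""
    sig = Counter()
    for subj, rel, obj in graph:
        if subj == concept:
            sig[(rel, "out")] += 1
        if obj == concept:
            sig[(rel, "in")] += 1
    return sig

def suggest_correspondence(base_graph, base_concept, target_graph):
    """Rank target concepts by relation-signature overlap with `base_concept`."""
    base_sig = relation_signature(base_graph, base_concept)
    candidates = {s for s, _, _ in target_graph} | {o for _, _, o in target_graph}
    def overlap(c):
        sig = relation_signature(target_graph, c)
        return sum(min(base_sig[k], sig[k]) for k in base_sig)
    return sorted(candidates, key=overlap, reverse=True)

# Toy solar-system / atom analogy:
solar = [("sun", "attracts", "planet"), ("planet", "orbits", "sun")]
atom  = [("nucleus", "attracts", "electron"), ("electron", "orbits", "nucleus")]
print(suggest_correspondence(solar, "sun", atom)[0])  # nucleus
```

The shared relation labels across domains make this toy version work; the paper's point is that such structures can also be extracted from web data rather than hand-built.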
arXiv (Cornell University), 2023
Analogy is one of the core capacities of human cognition; when faced with new situations, we often transfer prior experience from other domains. Most work on computational analogy relies heavily on complex, manually crafted input. In this work, we relax the input requirements, requiring only the names of the entities to be mapped. We automatically extract commonsense representations and use them to identify a mapping between the entities. Unlike previous works, our framework can handle partial analogies and suggest new entities to be added. Moreover, our method's output is easily interpretable, allowing users to understand why a specific mapping was chosen. Experiments show that our model correctly maps 81.2% of classical 2x2 analogy problems (guess level=50%). On larger problems, it achieves 77.8% accuracy (mean guess level=13.1%). In another experiment, we show our algorithm outperforms human performance, and the automatic suggestions of new entities resemble those suggested by humans. We hope this work will advance computational analogy by paving the way to more flexible, realistic input requirements, with broader applicability.
Explanatory analogies make learning complex concepts easier by elaborately mapping a target concept onto a more familiar source concept. Solutions exist for automatically retrieving shorter metaphors from natural language text, but not for explanatory analogies. In this paper, we propose an approach to find webpages containing explanatory analogies for a given target concept. For this, we propose the use of a 'region of interest' (ROI) based on the observation that linguistic markers and source concept often co-occur with various forms of the word 'analogy'. We also suggest an approach to identify the source concept(s) contained in a retrieved analogy webpage. We demonstrate these approaches on a dataset created using Google custom search to find candidate web pages that may contain analogies.
Cornell University - arXiv, 2022
Analogy-making gives rise to reasoning, abstraction, flexible categorization and counterfactual inference, abilities lacking in even the best AI systems today. Much research has suggested that analogies are key to non-brittle systems that can adapt to new domains. Despite their importance, analogies have received little attention in the NLP community, with most research focusing on simple word analogies. Work that tackled more complex analogies relied heavily on manually constructed, hard-to-scale input representations. In this work, we explore a more realistic, challenging setup: our input is a pair of natural language procedural texts, describing a situation or a process (e.g., how the heart works/how a pump works). Our goal is to automatically extract entities and their relations from the text and find a mapping between the different domains based on relational similarity (e.g., blood is mapped to water). We develop an interpretable, scalable algorithm and demonstrate that it identifies the correct mappings 87% of the time for procedural texts and 94% for stories from the cognitive-psychology literature. We show it can extract analogies from a large dataset of procedural texts, achieving 79% precision (analogy prevalence in data: 3%). Lastly, we demonstrate that our algorithm is robust to paraphrasing of the input texts.
Proceedings of the third international conference on Industrial and engineering applications of artificial intelligence and expert systems - IEA/AIE '90, 1990
The research described in this paper addresses the problem of integrating analogical reasoning and argumentation into a natural language understanding system. We present an approach to completing an implicit argument-by-analogy as found in a natural language editorial text. The transformation of concepts from one domain to another, which is inherent in this task, is a complex process requiring basic reasoning skills and domain knowledge, as well as an understanding of the structure and use of both analogies and arguments. The integration of knowledge about natural language understanding, argumentation, and analogical reasoning is demonstrated in a proof of concept system called ARIEL. ARIEL is able to detect the presence of an analogy in an editorial text, identify the source and target components, and develop a conceptual representation of the completed analogy in memory. The design of our system is modular in nature, permitting extensions to the existing knowledge base and making the argumentation and analogical reasoning components portable to other understanding systems.
Journal of Intelligent Information Systems, 2017
Analogy is the cognitive process of matching the characterizing features of two different items. This may enable reuse of knowledge across domains, which can help to solve problems. Indeed, abstracting the 'role' of the features away from their specific embodiment in the single items is fundamental to recognize the possibility of an analogical mapping between them. The analogical reasoning process consists of five steps: retrieval, mapping, evaluation, abstraction and re-representation. This paper proposes two forms of an operator that includes all these elements, providing more power and flexibility than existing systems. In particular, the Roles Mapper leverages the presence of identical descriptors in the two domains, while the Roles Argumentation-based Mapper removes also this limitation. For generality and compliance with other reasoning operators in a multi-strategy inference setting, they exploit a simple formalism based on First-Order Logic and do not require any background knowledge or meta-knowledge. Applied to the most critical classical examples in the literature, they proved to be able to find insightful analogies.
2021
General purpose knowledge bases such as DBpedia and Wikidata are valuable resources for various AI tasks. They describe real-world facts as entities and relations between them, and they are typically incomplete. Knowledge base completion refers to the task of adding new missing links between entities to build new triples. In this work, we propose an approach for discovering implicit triples using observed ones in the incomplete graph, leveraging analogy structures deduced from a knowledge graph embedding model. We use a neural language modelling approach where semantic regularities between words are preserved, which we adapt to entities and relations. We consider domain-specific views from large input graphs as the basis for the training, which we call context graphs, as a reduced and meaningful context for a set of entities from a given domain. Results show that analogical inference in the projected vector space is relevant to a link prediction task in domain knowledge bases.
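The abstract does not specify the embedding model; one common family it alludes to scores a triple (h, r, t) as plausible when embedding(h) + embedding(r) is close to embedding(t), so a missing tail can be predicted by nearest-neighbour search. The sketch below uses tiny hand-made 2-d vectors chosen so the offsets line up; it is an illustration of that translation idea, not the paper's trained model.

```python
# Hypothetical sketch of translation-style link prediction: a triple
# (h, r, t) is plausible when vec(h) + vec(r) is near vec(t).

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Toy embeddings, hand-made so that capital_of is a consistent offset.
entity = {"Paris": [1.0, 2.0], "France": [2.0, 0.0],
          "Rome": [1.1, 2.1], "Italy": [2.1, 0.1]}
relation = {"capital_of": [1.0, -2.0]}

def predict_tail(head, rel):
    """Return the entity closest to vec(head) + vec(rel)."""
    query = add(entity[head], relation[rel])
    return min((e for e in entity if e != head),
               key=lambda e: dist(entity[e], query))

print(predict_tail("Rome", "capital_of"))  # Italy
```

Restricting training to a domain-specific "context graph", as the abstract proposes, would simply shrink the `entity` table to the entities of one domain before this kind of search.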
2017
For robots to interact with natural language and handle real-world situations, some ability to perform analogical and associational reasoning is desirable. Consider commands like "Fetch the ball" vs. "Fetch the wagon": the robot needs to know that carrying a ball is (in the appropriate sense) analogous to dragging a wagon. Without the ability to perform analogical reasoning, robots are incapable of generalizing in the ways that true natural language understanding requires. Inspired by implicit Verlet integration methods for mass-spring systems in physics simulations, we present a novel knowledge-based embedding method in this paper, in which distributional word representations and semantic relations derived from knowledge bases are incorporated. We use SAT-style analogy questions to demonstrate the potential feasibility of our approach on the analogical reasoning framework.
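SAT-style analogy questions (a : b :: c : ?) are conventionally answered over word embeddings with the vector-offset method, choosing the candidate d that maximizes cos(b - a + c, d). The sketch below shows that baseline on hand-made toy vectors keyed to the ball/wagon example; it is not the paper's knowledge-based embedding, just the standard method such work builds on.

```python
# Standard vector-offset analogy solver on toy, hand-made vectors.

import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

vec = {"carry": [1.0, 0.0], "ball": [1.0, 1.0],
       "drag":  [0.0, 1.0], "wagon": [0.1, 1.9],
       "fetch": [1.0, 0.2]}

def solve(a, b, c, vocab):
    """Answer a : b :: c : ? by maximizing cos(vec(b) - vec(a) + vec(c), d)."""
    query = [vb - va + vc for va, vb, vc in zip(vec[a], vec[b], vec[c])]
    return max((w for w in vocab if w not in {a, b, c}),
               key=lambda w: cos(vec[w], query))

print(solve("ball", "carry", "wagon", vec))  # drag
```

With learned embeddings the vocabulary would be large and the vectors high-dimensional, but the selection rule is the same.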
Journal of the Experimental Analysis of Behavior, 2009
Analogical reasoning is an important component of intelligent behavior, and a key test of any approach to human language and cognition. Only a limited amount of empirical work has been conducted from a behavior analytic point of view, most of that within Relational Frame Theory (RFT), which views analogy as a matter of deriving relations among relations. The present series of four studies expands previous work by exploring the applicability of this model of analogy to topography-based rather than merely selection-based responses and by extending the work into additional relations, including nonsymmetrical ones. In each of the four studies participants pretrained in contextual control over nonarbitrary stimulus relations of sameness and opposition, or of sameness, smaller than, and larger than, learned arbitrary stimulus relations in the presence of these relational cues and derived analogies involving directly trained relations and derived relations of mutual and combinatorial entailment, measured using a variety of productive and selection-based measures. In Experiment 1 participants successfully recognized analogies among stimulus networks containing same and opposite relations; in Experiment 2 analogy was successfully used to extend derived relations to pairs of novel stimuli; in Experiment 3 the procedure used in Experiment 1 was extended to nonsymmetrical comparative relations; in Experiment 4 the procedure used in Experiment 2 was extended to nonsymmetrical comparative relations. Although not every participant showed the effects predicted, overall the procedures occasioned relational responses consistent with an RFT account that have not yet been demonstrated in a behavior-analytic laboratory setting, including productive responding on the basis of analogies.
Empirical Methods in Natural Language Processing, 2007
A lexical analogy is a pair of word-pairs that share a similar semantic relation. Lexical analogies occur frequently in text and are useful in various natural language processing tasks. In this study, we present a system that generates lexical analogies automatically from text data. Our system discovers semantically related pairs of words by using dependency relations, and applies novel machine learning algorithms to match these word-pairs to form lexical analogies. Empirical evaluation shows that our system generates valid lexical analogies with a precision of 70%, and produces quality output, although not at the level of the best human-generated lexical analogies.
APPLIED INFORMATICS-PROCEEDINGS-, 2001
Retrieving analogies from presented problem data is an important phase of analogical reasoning, influencing many related cognitive processes. Existing models have focused on semantic similarity, but structural similarity is also a necessary requirement of any analogical comparison. We present a new technique for performing structure-based analogy retrieval. This is founded upon derived attributes that explicitly encode elementary structural qualities of a domain's representation. Crucially, these attributes are unrelated to the semantic content of the domain information, and encode only its structural qualities. We describe a number of derived attributes and detail the computation of the corresponding attribute values. We examine our model's operation, detailing how it retrieves both semantically related and unrelated domains. We also present a comparison of our algorithm's performance with existing models, using a structure-rich but semantically impoverished domain.
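The abstract names the idea, derived attributes that capture only structural qualities of a representation, without listing the attributes themselves. The sketch below invents two plausible ones (predicate arity counts and argument fan-out counts) purely for illustration: note that the scoring ignores every predicate and argument name, so retrieval is driven by structure alone.

```python
# Hypothetical sketch of structure-based retrieval via derived attributes.

from collections import Counter

def structural_profile(facts):
    """Count structural qualities of a fact list: arity of each predicate
    and fan-out of each argument. Predicate/argument names are discarded."""
    prof = Counter()
    fanout = Counter()
    for pred, *args in facts:
        prof[("arity", len(args))] += 1
        for a in args:
            fanout[a] += 1
    prof.update(("fanout", n) for n in fanout.values())
    return prof

def retrieve(probe, memory):
    """Return the stored domain whose profile best overlaps the probe's."""
    p = structural_profile(probe)
    def score(name):
        q = structural_profile(memory[name])
        return sum(min(p[k], q[k]) for k in p)
    return max(memory, key=score)

solar = [("attracts", "sun", "planet"), ("orbits", "planet", "sun")]
memory = {
    "atom":   [("attracts", "nucleus", "electron"),
               ("orbits", "electron", "nucleus")],
    "recipe": [("contains", "cake", "flour")],
}
print(retrieve(solar, memory))  # atom
```

The atom domain is retrieved even though none of its symbols match the probe's, which is exactly the semantically unrelated retrieval the abstract claims.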
2016
Humans regularly exploit analogical reasoning to generate potentially novel and useful inferences. We outline the Dr Inventor model that identifies analogies between research publications, describing recent work to evaluate the inferences that are generated by the system. Its inferences, in the form of subject-verb-object triples, can involve arbitrary combinations of source and target information. We evaluate three approaches to assess the quality of inferences. Firstly, we explore an n-gram based approach (derived from the Dr Inventor corpus). Secondly, we use ConceptNet as a basis for evaluating inferences. Finally, we explore the use of Watson Concept Insights (WCI) to support our inference evaluation process. Dealing with novel inferences arising from an ever-growing corpus is a central concern throughout.
Humans use analogies to communicate, reason, and learn. But while the human brain excels at creating and understanding analogies, it does not easily recall useful analogies created or learned over time. General purpose tools and methods are needed that assist humans in representing, storing, and recalling useful analogies. Additionally, such tools must take advantage of the World Wide Web's ubiquity, global reach, and universal standards. We first identify commonly occurring patterns of analogy structure. Because understanding of instructional analogies is significantly improved when their structure is visualized, we develop a compact and general representation for analogies using XML, and demonstrate general methods for visualizing the structure of analogy expressions in Web-based environments.
Cornell University - arXiv, 2022
Analogical reasoning is the process of discovering and mapping correspondences from a target subject to a base subject. As the most well-known computational method of analogical reasoning, Structure-Mapping Theory (SMT) abstracts both target and base subjects into relational graphs and frames the cognitive process of analogical reasoning as finding a corresponding subgraph (i.e., correspondence) in the target graph that is aligned with the base graph. However, incorporating deep learning into SMT is still under-explored due to several obstacles: 1) the combinatorial complexity of searching for the correspondence in the target graph; 2) the correspondence mining is restricted by various cognitive theory-driven constraints. To address both challenges, we propose a novel framework for Analogical Reasoning (DeepGAR) that identifies the correspondence between source and target domains while satisfying cognitive theory-driven constraints. Specifically, we design a geometric constraint embedding space to induce subgraph relations from node embeddings for efficient subgraph search. Furthermore, we develop novel learning and optimization strategies that can end-to-end identify correspondences that are strictly consistent with constraints driven by the cognitive theory. Extensive experiments are conducted on synthetic and real-world datasets to demonstrate the effectiveness of the proposed DeepGAR over existing methods. The code and data are available at: https://github.com/triplej0079/DeepGAR.
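To make the combinatorial obstacle the abstract mentions concrete, here is the naive SMT-style baseline DeepGAR is designed to avoid: exhaustively try every one-to-one entity correspondence and keep the one that aligns the most identically labelled relations. This brute force is exponential in the number of entities, which is precisely why learned subgraph search is attractive. The example data are illustrative, not from the paper.

```python
# Naive exhaustive structure-mapping baseline (exponential; for illustration).

from itertools import permutations

def best_mapping(base, target):
    """base/target: lists of (relation, arg1, arg2) triples.
    Returns the one-to-one entity map maximizing aligned relations."""
    base_ents = sorted({e for _, a, b in base for e in (a, b)})
    targ_ents = sorted({e for _, a, b in target for e in (a, b)})
    target_set = set(target)
    best, best_score = {}, -1
    for perm in permutations(targ_ents, len(base_ents)):
        m = dict(zip(base_ents, perm))
        score = sum((r, m[a], m[b]) in target_set for r, a, b in base)
        if score > best_score:
            best, best_score = m, score
    return best

base   = [("attracts", "sun", "planet"), ("orbits", "planet", "sun")]
target = [("attracts", "nucleus", "electron"), ("orbits", "electron", "nucleus")]
print(best_mapping(base, target))
```

The one-to-one constraint enforced by `permutations` is one of the cognitive theory-driven constraints (parallel connectivity and one-to-one mapping) that DeepGAR encodes in its embedding space instead of searching explicitly.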
Psychological Review
The human ability to flexibly reason using analogies with domain-general content depends on mechanisms for identifying relations between concepts, and for mapping concepts and their relations across analogs. Building on a recent model of how semantic relations can be learned from non-relational word embeddings, we present a new computational model of mapping between two analogs. The model adopts a Bayesian framework for probabilistic graph matching, operating on semantic relation networks constructed from distributed representations of individual concepts and of relations between concepts. Through comparisons of model predictions with human performance in a novel mapping task requiring integration of multiple relations, as well as in several classic studies, we demonstrate that the model accounts for a broad range of phenomena involving analogical mapping by both adults and children. We also show the potential for extending the model to deal with analog retrieval. Our approach demonstrates that human-like analogical mapping can emerge from comparison mechanisms applied to rich semantic representations of individual concepts and relations.
2003
The goal of the work reported here is to capture the commonsense knowledge of non-expert human contributors. Achieving this goal will enable more intelligent human-computer interfaces and pave the way for computers to reason about our world. In the domain of natural language processing, it will provide the world knowledge much needed for semantic processing of natural language. To acquire knowledge from contributors not trained in knowledge engineering, I take the following four steps: (i) develop a knowledge representation (KR) model for simple assertions in natural language, (ii) introduce cumulative analogy, a class of nearest-neighbor based analogical reasoning algorithms over this representation, (iii) argue that cumulative analogy is well suited for knowledge acquisition (KA) based on a theoretical analysis of effectiveness of KA with this approach, and (iv) test the KR model and the effectiveness of the cumulative analogy algorithms empirically. To investigate effectiveness of cumulative analogy for KA empirically, Learner, an open source system for KA by cumulative analogy, has been implemented, deployed, and evaluated. Learner acquires assertion-level knowledge by constructing shallow semantic analogies between a KA topic and its nearest neighbors and posing these analogies as natural language questions to human contributors. Suppose, for example, that based on the knowledge about "newspapers" already present in the knowledge base, Learner judges "newspaper" to be similar to "book" and "magazine." Further suppose that assertions "books contain information" and "magazines contain information" are also already in the knowledge base. Then Learner will use cumulative analogy from the similar topics to ask humans whether "newspapers contain information."
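The newspaper/book/magazine example above can be sketched in a few lines. This is a simplified illustration of cumulative analogy, not Learner's actual implementation: topics are compared by shared (relation, object) assertions, and assertions held by all nearest neighbours but not yet known for the topic become candidate questions.

```python
# Illustrative sketch of cumulative analogy for knowledge acquisition.

def neighbours(topic, kb, k=2):
    """Rank other topics by number of shared (relation, object) assertions."""
    props = kb.get(topic, set())
    others = [t for t in kb if t != topic]
    return sorted(others, key=lambda t: len(props & kb[t]), reverse=True)[:k]

def propose_questions(topic, kb):
    """Assertions held by all neighbours but not yet known for `topic`."""
    ns = neighbours(topic, kb)
    shared = set.intersection(*(kb[n] for n in ns)) - kb.get(topic, set())
    return [f"Do {topic}s {rel} {obj}?" for rel, obj in sorted(shared)]

kb = {
    "book":      {("contain", "information"), ("have", "pages")},
    "magazine":  {("contain", "information"), ("have", "pages")},
    "hammer":    {("drive", "nails")},
    "newspaper": {("have", "pages")},
}
print(propose_questions("newspaper", kb))  # ['Do newspapers contain information?']
```

A contributor's yes/no answer would then be written back into `kb`, so each answered question sharpens the neighbourhood structure for the next round of questions.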
2012
Are we any closer to creating an autonomous model of analogical reasoning that can generate new and creative analogical comparisons? A three-phase model of analogical reasoning is presented that encompasses the phases of retrieval, mapping and inference validation. The model of the retrieval phase maximizes its creativity by focusing on domain topology, combating the semantic locality suffered by other models. The mapping model builds on a standard model of the mapping phase, again making use of domain topology. A novel validation model helps ensure the quality of the inferences that are accepted by the model. We evaluated the ability of our tri-phase model to re-discover several h-creative analogies (Boden, 1992) from a background memory containing many potential source domains. The model successfully re-discovered all creative comparisons, even when given problem descriptions that more accurately reflect the original problem – rather than the standard (post hoc) representation of the analogy. Finally, some remaining challenges for a truly autonomous creative analogy machine are assessed.
Lecture Notes in Computer Science, 2017
Representing knowledge as high-dimensional vectors in a continuous semantic vector space can help overcome the brittleness and incompleteness of traditional knowledge bases. We present a method for performing deductive reasoning directly in such a vector space, combining analogy, association, and deduction in a straightforward way at each step in a chain of reasoning, drawing on knowledge from diverse sources and ontologies.
arXiv (Cornell University), 2024
Analogy-making is central to human cognition, allowing us to adapt to novel situations – an ability that current AI systems still lack. Most analogy datasets today focus on simple analogies (e.g., word analogies); datasets including complex types of analogies are typically manually curated and very small. We believe that this holds back progress in computational analogy. In this work, we design a data generation pipeline, ParallelPARC (Parallel Paragraph Creator), leveraging state-of-the-art Large Language Models (LLMs) to create complex, paragraph-based analogies, as well as distractors, both simple and challenging. We demonstrate our pipeline and create ProPara-Logy, a dataset of analogies between scientific processes. We publish a gold-set, validated by humans, and a silver-set, generated automatically. We test LLMs' and humans' analogy recognition in binary and multiple-choice settings, and find that humans outperform the best models (∼13% gap) after light supervision. We demonstrate that our silver-set is useful for training models. Lastly, we show that challenging distractors confuse LLMs, but not humans. We hope our pipeline will encourage research in this emerging field.
2000
We review the work of Evans on graphical proportional analogies, identifying the object mappings that underlie many such comparisons. The limitations of Evans' ANALOGY model are investigated.