Theoretical Computer Science, 2006
…
Connectionist computations of intuitionistic reasoning aim to bridge the gap between machine learning and reasoning within intelligent systems. The study proposes a framework to integrate quantitative and connectionist approaches for reasoning, particularly through the lens of intuitionistic logic. It discusses the representation of intuitionistic logic in neural networks and outlines the algorithms that govern this integration, emphasizing the importance of simultaneous learning and reasoning in advancing cognitive and neural computation.
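The abstract above does not spell out how a logical rule is realized inside a network. As a loose illustration of the general idea (encoding a propositional rule as a single threshold unit, a standard device in connectionist translations of symbolic knowledge), here is a minimal Python sketch; the rule, weights, and threshold are illustrative assumptions, not the paper's algorithm.

def rule_neuron(antecedents, negated=()):
    """Return a function computing the head of a rule as a threshold gate.

    The head fires (returns 1) exactly when every positive antecedent is 1
    and every negated antecedent is 0.
    """
    def head(assignment):
        # +1 weight for each positive literal, -1 for each negated one.
        s = sum(assignment[a] for a in antecedents)
        s -= sum(assignment[n] for n in negated)
        threshold = len(antecedents) - 0.5   # all positives on, all negated off
        return 1 if s > threshold else 0
    return head

# Example: the rule  a AND b AND NOT c  ->  d
d = rule_neuron(antecedents=("a", "b"), negated=("c",))
print(d({"a": 1, "b": 1, "c": 0}))   # 1: both positive antecedents hold, c is false
print(d({"a": 1, "b": 1, "c": 1}))   # 0: the negated literal blocks the rule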
1998
Hybrid connectionist-symbolic systems have been the subject of much recent research in AI. By focusing on the implementation of high-level human cognitive processes (e.g., rule-based inference) on low-level, brain-like structures (e.g., neural networks), hybrid systems inherit both the efficiency of connectionism and the comprehensibility of symbolism. This paper presents the Basic Reasoning Applicator Implemented as a Neural Network (BRAINN).
Artificial Intelligence, 1995
The paper presents a connectionist framework that is capable of representing and learning propositional knowledge. An extended version of propositional calculus is developed and is demonstrated to be useful for nonmonotonic reasoning, for dealing with conflicting beliefs, and for coping with inconsistency generated by unreliable knowledge sources. Formulas of the extended calculus are proved to be equivalent in a very strong sense to symmetric networks (like Hopfield networks and Boltzmann machines), and efficient algorithms are given for translating back and forth between the two forms of knowledge representation. A fast learning procedure is presented that allows symmetric networks to learn representations of unknown logic formulas by looking at examples. A connectionist inference engine is then sketched whose knowledge is either compiled from a symbolic representation or learned inductively from training examples. Experiments with large-scale, randomly generated formulas suggest that the parallel local search that is executed by the networks is extremely fast on average. Finally, it is shown that the extended logic can be used as a high-level specification language for connectionist networks, into which several recent symbolic systems may be mapped. The paper demonstrates how a rigorous bridge can be constructed that ties together the (sometimes opposing) connectionist and symbolic approaches.
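To make the formula-to-network correspondence concrete, here is a minimal Python sketch, assuming each clause contributes a penalty that is zero when the clause is satisfied: the sum of penalties plays the role of the symmetric network's energy, and parallel local search becomes greedy bit-flipping toward lower energy. The clause set and search schedule are illustrative assumptions, not taken from the paper.

import random

clauses = [(("a", True), ("b", False)),    # a OR NOT b
           (("b", True), ("c", True)),     # b OR c
           (("a", False), ("c", False))]   # NOT a OR NOT c

def energy(state):
    # Number of violated clauses: the Hopfield-style energy to minimise.
    return sum(1 for clause in clauses
               if not any(state[v] == sign for v, sign in clause))

def local_search(steps=2000):
    variables = sorted({v for clause in clauses for v, _ in clause})
    state = {v: random.choice([True, False]) for v in variables}
    for _ in range(steps):
        if energy(state) == 0:
            break                           # all clauses satisfied
        v = random.choice(variables)
        before = energy(state)
        state[v] = not state[v]             # tentative unit flip
        if energy(state) > before:
            state[v] = not state[v]         # reject uphill moves (greedy descent)
    return state, energy(state)

print(local_search())   # a satisfying assignment has energy 0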
Theoretical Computer Science, 2007
1996
We present a connectionist architecture that supports almost instantaneous deductive and abductive reasoning. The deduction algorithm responds in a few steps for single-rule queries and, in general, takes time linear in the number of rules in the query. The abduction algorithm produces an explanation in a few steps and the best explanation in time linear in the size of the assumption set.
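As a rough illustration of deduction whose running time is linear in the number of rules, the following Python sketch fires every applicable rule in parallel on each pass, so a query chains through the rule base in at most one pass per rule. The rule base and query are invented for the example; the paper's connectionist architecture is not reproduced here.

rules = [({"bird"}, "has_wings"),
         ({"has_wings"}, "can_fly"),
         ({"bird", "can_fly"}, "migrates")]

def deduce(facts, goal):
    derived = set(facts)
    for _ in range(len(rules)):             # at most one pass per rule
        newly = {head for body, head in rules
                 if body <= derived and head not in derived}
        if not newly:
            break
        derived |= newly                     # all applicable rules fire in parallel
    return goal in derived, derived

print(deduce({"bird"}, "migrates"))          # (True, {'bird', 'has_wings', 'can_fly', 'migrates'})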
Doctor of Philosophy, Department of Computer Science, Major Professor Not Listed. Symbolic knowledge representation and reasoning and deep learning are fundamentally different approaches to artificial intelligence with complementary capabilities. The former are transparent and data-efficient, but they are sensitive to noise and cannot be applied to non-symbolic domains where the data is ambiguous. The latter can learn complex tasks from examples and are robust to noise, but they are black boxes, require large amounts of data that are not necessarily easy to obtain, are slow to learn, and are prone to adversarial examples. Each paradigm excels at certain types of problems where the other performs poorly. In order to develop stronger AI systems, integrated neuro-symbolic systems that combine artificial neural networks and symbolic reasoning are being sought. In this context, one of the fundamental open problems is how to perform logic-based deductive reasoning over knowledge bases by means of...
Neural-symbolic computation aims at integrating robust connectionist learning algorithms with sound symbolic reasoning. The recent impact of neural learning, in particular of deep networks, has led to the creation of new representations that have, so far, not really been used for reasoning. Results on neural-symbolic computation have shown that it offers powerful alternatives for knowledge representation, learning, and inference in neural computation. This paper presents key challenges and contributions of neural-symbolic computation to this area.
Recurrent Neural …
English version of my French "Reseaux de neurones capables de raisonner", Dossier Pour la Science (special issue of the French edition of Scientific American), October/December 2005, 97-101, 2005
This paper gives a very short (and accessible) survey of how classical and nonmonotonic logic relate to neural networks in general, and of how neural networks might be able to carry out logical reasoning in particular.
Neural Networks, 2007. …, 2007
Knowledge-Based Systems, 1997
Lecture Notes in Computer Science, 2012
Proc. 3rd Intl. Workshop on Neural- …, 2007
… Processing, 2002
Cornell University - arXiv, 2019
Proceedings of NIPS, 2003