First-order predicate logic appears frequently in Artificial Intelligence. In learning programs, it is often the language used to describe concepts, rules, examples, events, and so on. This paper presents an overview of research in logic-related learning systems and describes those features of first-order logic that have made it such a useful tool. Two developments are of particular interest to us: the use of logic in what is now called "constructive induction", and the benefits contributed to machine learning by logic programming.
Communications of the ACM, 1995
Techniques of machine learning have been successfully applied to various problems. Most of these applications rely on attribute-based learning, exemplified by the induction of decision trees as in the program C4.5. Broadly speaking, attribute-based learning also includes such approaches as neural networks and nearest-neighbor techniques. The advantages of attribute-based learning are relative simplicity, efficiency, and the existence of effective techniques for handling noisy data. However, attribute-based learning is limited to non-relational descriptions of objects, in the sense that the learned descriptions do not specify relations among the objects' parts. Attribute-based learning thus has two strong limitations.
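To make the representational contrast concrete, here is a minimal Python sketch (the facts and predicate names are invented for illustration): an attribute-value learner sees each example as one fixed-length row, while a relational description can quantify over an object's parts.

```python
# Illustrative only: attribute-value rows vs. relational facts.

# Attribute-value: one fixed-length row per example.
examples_av = [
    {"size": "small", "color": "red", "label": "+"},
    {"size": "large", "color": "blue", "label": "-"},
]

# Relational: facts over an object's parts. The test below ("has two parts,
# one left of the other") has no fixed-length attribute-vector equivalent.
facts = {("part_of", "wheel1", "car1"),
         ("part_of", "wheel2", "car1"),
         ("left_of", "wheel1", "wheel2")}

def has_ordered_parts(obj, facts):
    """True if obj has two parts standing in the left_of relation."""
    parts = [p for (rel, p, o) in facts if rel == "part_of" and o == obj]
    return any(("left_of", a, b) in facts for a in parts for b in parts)

print(has_ordered_parts("car1", facts))  # True
```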
2013
Machine Learning is necessary for the development of Artificial Intelligence, as pointed out by Turing in his 1950 article "Computing Machinery and Intelligence". It is in the same article that Turing suggested the use of computational logic and background knowledge for learning. This thesis follows a logic-based machine learning approach called Inductive Logic Programming (ILP), which is advantageous over other machine learning approaches in terms of relational learning and the use of background knowledge. ILP uses logic programs as a uniform representation for hypotheses, background knowledge and examples, but its declarative bias is usually encoded using metalogical statements. This thesis advocates the use of logic programs to represent declarative and procedural bias, which results in a framework of single-language representation. We show in this thesis that using a logic program called the top theory as declarative bias leads to a sound and complete multi-clause learning system...
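The uniform-representation setting described above can be sketched in a few lines of Python (a deliberately naive forward chainer for function-free clauses; the family predicates are invented, and a real ILP system would use a Prolog engine): background knowledge B, hypothesis H and an example are all clauses, and the example is covered when B together with H derives it.

```python
# A minimal sketch of ILP's uniform representation: clauses are (head, body)
# pairs; facts have an empty body; capitalized arguments are variables.

B = [(("parent", "ann", "bob"), []),
     (("parent", "bob", "carl"), [])]

H = [(("anc", "X", "Y"), [("parent", "X", "Y")]),
     (("anc", "X", "Z"), [("parent", "X", "Y"), ("anc", "Y", "Z")])]

def covers(program, example):
    """Naive forward chaining to a fixpoint, then a membership test."""
    facts = {head for head, body in program if not body}
    rules = [(h, b) for h, b in program if b]
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            derived = {substitute(head, th) for th in matches(body, facts, {})}
            if not derived <= facts:
                facts |= derived
                changed = True
    return example in facts

def matches(body, facts, theta):
    """Yield substitutions that make every body atom a known fact."""
    if not body:
        yield theta
        return
    for fact in facts:
        t = unify(body[0], fact, dict(theta))
        if t is not None:
            yield from matches(body[1:], facts, t)

def unify(atom, fact, theta):
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    for a, f in zip(atom[1:], fact[1:]):
        if a[0].isupper():                 # variable: bind or check binding
            if theta.setdefault(a, f) != f:
                return None
        elif a != f:                       # constant mismatch
            return None
    return theta

def substitute(atom, theta):
    return (atom[0],) + tuple(theta.get(t, t) for t in atom[1:])

print(covers(B + H, ("anc", "ann", "carl")))  # True: the example is covered
```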
1998
Abstract. We discuss the adoption of a three-valued setting for inductive concept learning. Distinguishing between what is true, what is false and what is unknown can be useful in situations where decisions have to be taken on the basis of scarce information. In a three-valued setting, we want to learn a definition for both the target concept and its opposite, considering positive and negative examples as instances of two disjoint classes.
Machine Learning
Inductive Logic Programming (ILP) is a field at the intersection of Machine Learning and Logic Programming, based on logic as a uniform representation language for expressing examples, background knowledge and hypotheses. Thanks to the expressiveness of first-order logic, ILP has provided an excellent means for knowledge representation and learning in relevant fields such as graph mining, multirelational data mining and statistical relational learning, not to mention other logic-based non-propositional knowledge representation frameworks.
Lecture Notes in Computer Science, 1994
In this paper we investigate the possibility of learning logic programs by using an intensional evaluation of clauses. Unlike learning methods based on extensionality, the learning algorithm presented in this paper adopts an intensional evaluation of clauses, which makes it correct and sufficient and independent of the kind of examples provided. Since searching a space of possible programs (instead of a space of independent clauses) is unfeasible, only partial programs containing clauses successfully used to derive at least one positive example are taken into consideration. Since clauses are not learned independently of each other, backtracking may be required.
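The extensional/intensional distinction can be shown with a toy Python sketch (my own simplification, with even numbers standing in for a logic program: the integer n encodes the term s^n(0)). Extensionally, a clause's body atoms must appear among the given examples; intensionally, they may be derived by the partial program itself.

```python
# Partial program:  even(0).   even(s(s(X))) :- even(X).
pos = {0}              # only even(0) is given; even(2) is deliberately missing

def derive(n):
    """Intensional: the recursive clause calls the partial program."""
    return n == 0 or (n >= 2 and derive(n - 2))

def body_holds_extensionally(n):
    """Extensional: the body atom even(n-2) must itself be a given example."""
    return (n - 2) in pos

print(body_holds_extensionally(4))  # False: even(2) is not among the examples
print(derive(4))                    # True: even(2) is derived, not looked up
```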
Artificial Intelligence, 2001
The efficient learnability of restricted classes of logic programs is studied in the PAC framework of computational learning theory. We develop the product homomorphism method, which gives polynomial PAC learning algorithms for a nonrecursive Horn clause with function-free ground background knowledge, if the background knowledge satisfies some structural properties. The method is based on a characterization of the concept that corresponds to the relative least general generalization of a set of positive examples with respect to the background knowledge. The characterization is formulated in terms of products and homomorphisms. In the applications this characterization is turned into an explicit combinatorial description, which is then translated into the language of nonrecursive Horn clauses. We show that a nonrecursive Horn clause is polynomially PAC-learnable if there is a single binary background predicate and the ground atoms in the background knowledge form a forest. If the ground atoms in the background knowledge form a disjoint union of cycles then the situation is different, as the shortest consistent hypothesis may have exponential size. In this case polynomial PAC-learnability holds if a different representation language is used. We also consider the complexity of hypothesis finding for multiple clauses in some restricted cases.
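The relative lgg at the heart of the method builds on Plotkin's least general generalization; a compact Python sketch of plain lgg on terms (an illustration only, not the product homomorphism construction itself) is:

```python
import itertools

_vars = {}                    # pair of differing subterms -> shared variable
_counter = itertools.count()  # global table kept for brevity of the sketch

def lgg(s, t):
    """Plotkin's lgg: terms are strings (constants) or (functor, *args) tuples."""
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg(a, b) for a, b in zip(s[1:], t[1:]))
    if (s, t) not in _vars:   # the same pair always gets the same variable
        _vars[(s, t)] = f"X{next(_counter)}"
    return _vars[(s, t)]

# lgg of p(a, f(a)) and p(b, f(b)) keeps the a/b co-occurrence as one variable:
print(lgg(("p", "a", ("f", "a")), ("p", "b", ("f", "b"))))
# ('p', 'X0', ('f', 'X0'))
```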
SEMANTIC WEB APPLICATIONS …, 2008
Abstract. Acquiring and maintaining Semantic Web rules is very demanding and can be automated, though only partially, by applying Machine Learning algorithms. In this paper we show that the form of Machine Learning known under the name of Inductive Logic Programming ...
Proceedings of the 3rd International Workshop on …, 1993
This paper traces the development of the main ideas that have led to the present state of knowledge in Inductive Logic Programming. The story begins with research in psychology on the subject of human concept learning. Results from this research influenced early efforts in Artificial Intelligence which combined with the formal methods of inductive inference to evolve into the present discipline of Inductive Logic Programming.
The 2010 International Joint Conference on Neural Networks (IJCNN), 2010
Artificial Neural Networks have previously been applied in neuro-symbolic learning to learn ground logic program rules. However, there are few results of learning relations using neuro-symbolic learning. This paper presents the system PAN, which can learn relations defined by a logic program clause. The inputs to PAN are one or more atoms, representing the conditions of a logic rule, and the output is the conclusion of the rule. The symbolic inputs may include functional terms of arbitrary depth and arity, and the output may include terms constructed from the input functors. Symbolic inputs are encoded as an integer using an invertible encoding function, which is used in reverse to extract the output terms. The main advance of this system is a convention to allow construction of Artificial Neural Networks able to learn rules with the same power of expression as first order definite clauses. The learning process is insensitive to noisy data thanks to the use of Artificial Neural Networks. The system is tested on two domains.
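As a rough illustration of an invertible term-to-integer encoding in the spirit of PAN (the concrete scheme below, Cantor pairing over a toy symbol table with unary functors, is my assumption; the paper's actual function may differ):

```python
def pair(x, y):                  # Cantor pairing: a bijection N x N -> N
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):                   # its inverse
    w = int(((8 * z + 1) ** 0.5 - 1) // 2)
    y = z - w * (w + 1) // 2
    return w - y, y

SYMS = ["a", "b", "f", "g"]      # toy symbol table

def encode(term):                # term: constant string or (functor, argument)
    if isinstance(term, str):
        return pair(SYMS.index(term), 0) * 2               # even code: constant
    functor, arg = term
    return pair(SYMS.index(functor), encode(arg)) * 2 + 1  # odd code: compound

def decode(n):
    q, tag = divmod(n, 2)
    x, y = unpair(q)
    return SYMS[x] if tag == 0 else (SYMS[x], decode(y))

t = ("f", ("g", "a"))
print(encode(t), decode(encode(t)) == t)   # the encoding round-trips
```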
Lecture Notes in Computer Science, 1998
We address a learning problem with the following peculiarity: we search for characteristic features common to a learning set of objects related to a target concept. In particular we approach the cases where descriptions of objects are ambiguous: they represent several incompatible realities. Ambiguity arises because each description only contains indirect information from which assumptions can be derived about the object. We suppose here that a set of constraints allows the identification of "coherent" sub-descriptions inside each object. We formally study this problem, using an Inductive Logic Programming framework close to characteristic induction from interpretations. In particular, we exhibit conditions which allow a pruned search of the space of concepts. Additionally we propose a method in which a set of hypothetical examples is explicitly calculated for each object prior to learning. The method is used with promising results to search for secondary substructures common to a set of RNA sequences.
1983
In Data Analysis, a learning problem is generally stated as either a discrimination problem or a regression one. Some inductive methods [4] use metric concepts. But most of the Artificial Intelligence methods [2,3] are purely syntactic, i.e., they use concepts of Formal Logic. Then ...
2000
We discuss the adoption of a three-valued setting for inductive concept learning. Distinguishing between what is true, what is false and what is unknown can be useful in situations where decisions have to be taken on the basis of scarce, ambiguous, or downright contradictory information. In a three-valued setting, we learn a definition for both the target concept and its opposite, considering positive and negative examples as instances of two disjoint classes.
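A minimal Python sketch of the resulting three-valued classifier (the bird examples are invented): an instance is true if only the concept's definition covers it, false if only the opposite's does, and unknown otherwise.

```python
def classify(x, covers_pos, covers_neg):
    p, n = covers_pos(x), covers_neg(x)
    if p and not n:
        return "true"
    if n and not p:
        return "false"
    return "unknown"   # covered by neither definition, or (contradiction) by both

# toy learned definitions for "flies" and its opposite
covers_pos = lambda a: a in {"sparrow", "eagle"}
covers_neg = lambda a: a in {"penguin", "ostrich"}

print(classify("sparrow", covers_pos, covers_neg))  # true
print(classify("bat", covers_pos, covers_neg))      # unknown
```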
Artificial Intelligence in Engineering, 1993
Among several forms of learning, learning concepts from examples is the most common and best understood. In this paper some approaches to learning concepts from examples are reviewed. In particular those approaches that are currently most important with respect to practical applications (learning decision trees and if-then rules), or likely to become very important in the near future (Inductive Logic Programming as a form of relational learning) are discussed.
Knowledge Discovery and Data Mining, 1995
This paper introduces a new algorithm called SIAO1 for learning first order logic rules with genetic algorithms. SIAO1 uses the covering principle developed in AQ, where seed examples are generalized into rules using, however, a genetic search, as initially introduced in the SIA algorithm for attribute-based representation. The genetic algorithm uses a high-level representation for learning.
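The covering loop can be sketched as follows (a strong simplification: rules here are attribute-value constraint sets and the "genetic" step is a single drop-condition mutation, whereas SIAO1 evolves first order rules with richer operators):

```python
import random

def matches(rule, ex):
    return all(ex.get(a) == v for a, v in rule.items())

def learn(positives, negatives, attrs, generations=50):
    rules, uncovered = [], list(positives)
    while uncovered:
        seed = uncovered[0]                       # AQ-style seed example
        rule = {a: seed[a] for a in attrs}        # most specific rule for the seed
        for _ in range(generations):              # stand-in for the genetic search
            cand = dict(rule)
            cand.pop(random.choice(list(cand)))   # mutate: drop one condition
            if cand and not any(matches(cand, n) for n in negatives):
                rule = cand                       # keep consistent generalizations
        rules.append(rule)
        uncovered = [p for p in uncovered if not matches(rule, p)]
    return rules

pos = [{"size": "small", "color": "red"}, {"size": "small", "color": "blue"}]
neg = [{"size": "large", "color": "red"}]
print(learn(pos, neg, attrs=["size", "color"]))   # e.g. [{'size': 'small'}]
```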
ACM Transactions on Computational Logic, 2001
Inductive logic programming (ILP) is concerned with learning relational descriptions that typically have the form of logic programs. In a transformation approach, an ILP task is transformed into an equivalent learning task in a different representation formalism. Propositionalization is a particular transformation method, in which the ILP task is compiled to an attribute-value learning task. The main restriction of propositionalization methods such as LINUS is that they are unable to deal with nondeterminate local variables in the body of hypothesis clauses. In this paper we show how this limitation can be overcome, by systematic first-order feature construction using a particular individual-centered feature bias. The approach can be applied in any domain where there is a clear notion of individual. We also show how to improve upon exhaustive first-order feature construction by using a relevancy filter. The proposed approach is illustrated on the "trains" and "mutagenesis" ILP domains.
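A small sketch of individual-centered propositionalization on the "trains" domain (the features and data below are toy choices of mine, not LINUS's actual constructs): each individual becomes one attribute-value row whose boolean features are first-order queries about its parts.

```python
cars = {  # train -> list of (car shape, number of wheels)
    "east1": [("rectangle", 2), ("bucket", 3)],
    "west1": [("hexagon", 2)],
}

features = {  # each feature is a first-order query over a train's cars
    "has_bucket_car": lambda cs: any(shape == "bucket" for shape, _ in cs),
    "has_3wheel_car": lambda cs: any(wheels == 3 for _, wheels in cs),
    "num_cars_ge_2":  lambda cs: len(cs) >= 2,
}

# the flattened table can now be handed to any attribute-value learner
table = {train: {name: f(cs) for name, f in features.items()}
         for train, cs in cars.items()}
print(table["east1"])
# {'has_bucket_car': True, 'has_3wheel_car': True, 'num_cars_ge_2': True}
```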
Springer eBooks, 2011
Knowing the biological processes in which each gene and protein participates is essential for designing disease treatments. Nowadays, these annotations are still unknown for many genes and proteins. Since making annotations from in-vivo experiments is costly, computational predictors are needed for different kinds of annotation, such as metabolic pathway, interaction network, protein family, tissue and disease. Biological data has an intrinsic relational structure, including genes and proteins, which can be grouped by many criteria. This hinders the possibility of finding good hypotheses when an attribute-value representation is used. Hence, we propose the generic Modular Multi-Relational Framework (MMRF) to predict different kinds of gene and protein annotation using Relational Data Mining (RDM). The specific MMRF application to annotating human proteins with diseases verifies that group knowledge (mainly protein-protein interaction pairs) improves prediction, in particular doubling the area under the precision-recall curve.
Logics for Emerging Applications of Databases, 2004
Data mining focuses on the development of methods and algorithms for such tasks as classification, clustering, rule induction, and discovery of associations. In the database field, the view of data mining as advanced querying has recently stimulated much research into the development of data mining query languages. In the field of machine learning, inductive logic programming has broadened its scope toward extending standard data mining tasks from the usual attribute-value setting to a multirelational setting. After a concise description of data mining, the contribution of logic to both fields is discussed. At the end, we indicate the potential use of logic for unifying different existing data mining formalisms.
2006
Learning is a crucial ability of intelligent agents. Rather than presenting a complete literature review, we focus in this paper on important issues surrounding the application of machine learning (ML) techniques to agents and multi-agent systems (MAS). In this discussion we move from disembodied ML, through single-agent learning, to full multi-agent learning.