1993, Artificial Intelligence in Engineering
Among several forms of learning, learning concepts from examples is the most common and best understood. In this paper some approaches to learning concepts from examples are reviewed. In particular those approaches that are currently most important with respect to practical applications (learning decision trees and if-then rules), or likely to become very important in the near future (Inductive Logic Programming as a form of relational learning) are discussed.
First order predicate logic appears frequently in Artificial Intelligence. In learning programs, it is often the language used to describe concepts, rules, examples, events, etc. This paper presents an overview of research in logic-related learning systems and describes those features of first order logic which have made it such a useful tool. Two developments are of particular interest to us: the use of logic in what is now called "constructive induction", and the benefits to machine learning contributed by logic programming.
Communications of the ACM, 1995
Techniques of machine learning have been successfully applied to various problems. Most of these applications rely on attribute-based learning, exemplified by the induction of decision trees as in the program C4.5. Broadly speaking, attribute-based learning also includes such approaches to learning as neural networks and nearest neighbor techniques. The advantages of attribute-based learning are: relative simplicity, efficiency, and existence of effective techniques for handling noisy data. However, attribute-based learning is limited to non-relational descriptions of objects in the sense that the learned descriptions do not specify relations among the objects' parts. Attribute-based learning thus has two strong limitations:
… of the European symposium on intelligent …, 2000
The data set used can influence the way in which inductive learning algorithms generate rules. A good set of training examples is a very important factor in obtaining high training and test accuracies but algorithm execution time can be greatly affected by the input data. The use of specific examples decreases the time incurred when updating concept descriptions and allows for the representation of probabilistic descriptions. This paper describes how the performance of the RULES-4 algorithm can be improved and how the running time decreases by having a good set of training examples. To achieve this, a method was required to select typical examples which are representative of the overall data set. A clustering technique was used and methods of selecting examples from clusters were tested. The results were validated with several data sets.
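The example-selection step described above (cluster the data, then pick a typical example from each cluster) can be sketched as follows. This is an illustrative reconstruction, not the RULES-4 procedure itself: it uses a minimal k-means and takes the point nearest each centroid as the cluster's representative.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Componentwise mean of a non-empty list of points."""
    n = len(pts)
    return tuple(sum(xs) / n for xs in zip(*pts))

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: returns final centroids and the clusters they induce."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        # Keep the old centroid if a cluster emptied out.
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

def typical_examples(points, k):
    """One representative per cluster: the point closest to its centroid."""
    centroids, clusters = kmeans(points, k)
    return [min(c, key=lambda p: dist2(p, centroids[i]))
            for i, c in enumerate(clusters) if c]

# Two well-separated groups; the selection should return one point from each.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
        (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
print(typical_examples(data, 2))
```

Training a rule inducer on the representatives instead of the full set is what reduces the update time the abstract refers to; any clustering method with a notion of a central element would serve the same role.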
International Journal of Man-Machine Studies, 1987
This paper presents a formal, foundational approach to learning from examples in machine learning. It is assumed that a learning system is presented with a stream of facts describing a domain of application. The task of the system is to form and modify hypotheses characterising the relations in the domain, based on this information. Presumably the set of hypotheses that may be so formed will require continual revision as further information is received.
We present an algorithm for inductive learning from examples that outputs an ordered list of if-then rules as its hypothesis. The algorithm uses a combination of greedy and branch-and-bound techniques, and naturally handles noisy or stochastic learning situations. We also present the results of an empirical study comparing our algorithm with Quinlan's C4.5 on 1050 synthetic data sets. We find that BBG greatly outperforms C4.5 on rule-oriented problems, and equals or exceeds the performance of C4.5 on tree-oriented problems.
Inductive learning is one of the most effective approaches used to automate the knowledge acquisition of an expert system. In this paper we present an analysis of three inductive learning algorithms, ID3, C4.5 and ILA, applied to rule extraction.
1983
Research in the area of learning structural descriptions from examples is reviewed, giving primary attention to methods of learning characteristic descriptions of single concepts. In particular, we examine methods for finding the maximally-specific conjunctive generalizations (MSC-generalizations) that cover all of the training examples of a given concept. Various important aspects of structural learning in general are examined, and several criteria for evaluating structural learning methods are presented. Briefly, these criteria include (i) adequacy of the representation language, (ii) generalization rules employed, (iii) computational efficiency, and (iv) flexibility and extensibility. Selected learning methods developed by Buchanan, et al., Hayes-Roth, Vere, Winston, and the authors are analyzed according to these criteria. Finally, some goals are suggested for future research.
Lecture Notes in Computer Science, 1998
We address a learning problem with the following peculiarity: we search for characteristic features common to a learning set of objects related to a target concept. In particular we approach the cases where descriptions of objects are ambiguous: they represent several incompatible realities. Ambiguity arises because each description only contains indirect information from which assumptions can be derived about the object. We suppose here that a set of constraints allows the identification of "coherent" sub-descriptions inside each object. We formally study this problem, using an Inductive Logic Programming framework close to characteristic induction from interpretations. In particular, we exhibit conditions which allow a pruned search of the space of concepts. Additionally we propose a method in which a set of hypothetical examples is explicitly calculated for each object prior to learning. The method is used with promising results to search for secondary substructures common to a set of RNA sequences.
Machine Learning, 1986
The technology for building knowledge-based systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such system, ID3, in detail. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and/or incomplete. A reported shortcoming of the basic algorithm is discussed and two means of overcoming it are compared. The paper concludes with illustrations of current research directions.
2010
Machine learning attempts to build computer programs that improve their performance by automating the acquisition of knowledge from experience. Inductive learning, one of the machine learning paradigms, draws inductive inferences from facts provided by a teacher or the environment. Inductive learning enables the program to identify regularities and patterns in the prior knowledge or training data, and then to extract them as generalized rules. In the literature, two ways of using machine learning in information systems have been proposed: (1) for building tools for software development and maintenance tasks, and (2) for incorporation into software products to make them adaptive and self-configuring. However, considering information systems in more detail, a division into three situations of inductive learning use in the context of information systems can be proposed, namely, first, in the information system development project management, second, to collect the information that is to be built in informat...
Lecture Notes in Computer Science, 2001
Proceedings of the 3rd International Workshop on …, 1993
This paper traces the development of the main ideas that have led to the present state of knowledge in Inductive Logic Programming. The story begins with research in psychology on the subject of human concept learning. Results from this research influenced early efforts in Artificial Intelligence which combined with the formal methods of inductive inference to evolve into the present discipline of Inductive Logic Programming.
Machine learning, a branch of artificial intelligence, is a scientific discipline that is concerned with the design and development of algorithms that allow computers to evolve behaviors based on empirical data, such as from sensor data or databases. Artificial neural networks are composed of interconnecting artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. In this paper, we analysed instances in a data set with machine learning algorithms and artificial neural network classification. Furthermore, rules were constituted with machine learning algorithms and artificial neural networks.
1993
We present an algorithm (BBG) for inductive learning from examples that outputs a rule list. BBG uses a combination of greedy and branch-and-bound techniques, and naturally handles noisy or stochastic learning situations. We also present the results of an empirical study comparing BBG with Quinlan's C4.5 on 1050 synthetic data sets. We find that BBG greatly outperforms C4.5 on rule-oriented problems, and equals or exceeds C4.5's performance on tree-oriented problems.
1996
1st Class and RuleMaster. Consider, for example, the following rule set: Rule 1: a₁ ∧ b₁ → δ₁; Rule 2: c₁ ∧ d₁ → δ₁. Suppose that Rules 1 and 2 cover all instances of class δ₁ and all other instances are of class
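A rule set like the one above is typically applied as an ordered rule list with a default class: rules are tried in order and the first one whose conditions all hold fires. The sketch below encodes the two rules; the attribute names (a, b, c, d) mirror the example, and the class names delta_1/delta_2 are illustrative stand-ins for δ₁ and the remaining class.

```python
# Each rule is (conditions, predicted class); conditions map attribute -> value.
RULES = [
    ({"a": 1, "b": 1}, "delta_1"),   # Rule 1: a1 AND b1 -> delta_1
    ({"c": 1, "d": 1}, "delta_1"),   # Rule 2: c1 AND d1 -> delta_1
]

def classify(instance, rules, default="delta_2"):
    """Return the class of the first rule whose conditions all hold,
    falling back to the default class when no rule fires."""
    for conditions, cls in rules:
        if all(instance.get(attr) == val for attr, val in conditions.items()):
            return cls
    return default

print(classify({"a": 1, "b": 1, "c": 0, "d": 0}, RULES))  # delta_1 via Rule 1
print(classify({"a": 0, "b": 1, "c": 1, "d": 1}, RULES))  # delta_1 via Rule 2
print(classify({"a": 0, "b": 0, "c": 0, "d": 0}, RULES))  # default: delta_2
```

Because the rules are disjunctive (either one suffices for δ₁), their order only matters when different rules predict different classes.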
Machine Learning, 2014
2014 IEEE 26th International Conference on Tools with Artificial Intelligence, 2014
We investigate here relational concept learning from examples when we only have partial information regarding examples: each such example is qualified as ambiguous, as we only know a set of its possible complete descriptions. A typical such situation arises in rule learning when truth values of some atoms are missing in the example description while we benefit from background knowledge. We first give a sample complexity result for learning from ambiguous examples, then we propose a framework for relational rule learning from ambiguous examples and describe the learning system LEAR. Finally we discuss various experiments in which we observe how LEAR copes with increasing degrees of incompleteness.
Machine Learning, 1991
This paper is aimed at showing the benefits obtained by explicitly introducing a priori control knowledge into the inductive process. The starting point is Michalski's Induce system, which has been modified and augmented. Although the basic philosophy has been changed as little as possible, Induce has been radically modified from the algorithmic point of view, resulting in the new learning system Rigel. The main ideas taken from Induce are the sequential learning of descriptions of each concept against all the others, the Covering algorithm, the Star definition, and the VL₂ representation language. The modifications consist of a new way of computing the Star, the use of a separate body of heuristic knowledge to strongly direct the search, the implementation of a larger subset of the VL₂ language, a reasoned way of selecting the seed, and the use of rules to evaluate the worthiness of the inductive assertions. The effectiveness of Rigel has been tested both on artificial and on real-world case studies.
Much effort has been devoted to understanding learning and reasoning in artificial intelligence. However, very few models attempt to integrate these two complementary processes. Rather, there is a vast body of research in machine learning, often focusing on inductive learning from examples, quite isolated from the work on reasoning in artificial intelligence. Though these two processes may be different, they are very much interrelated. The ability to reason about a domain of knowledge is often based on rules about that domain, that must be learned somehow. And the ability to reason can often be used to acquire new knowledge, or learn. This paper introduces an Incremental Learning Algorithm (ILA) that attempts to combine inductive learning with prior knowledge and reasoning. ILA has many important characteristics useful for such a combination, including: 1) incremental, self-organizing learning, 2) non-uniform learning, 3) inherent non-monotonicity, 4) extensional and intensional capabilities, and 5) low order polynomial complexity. The paper describes ILA, gives simulation results for several applications, and discusses each of the above characteristics in detail.