Neurules are a kind of hybrid rules integrating neurocomputing and production rules. Each neurule is represented as an adaline unit. Thus, the corresponding rule base consists of a number of autonomous adaline units (neurules). As a result, a modular and natural rule base is constructed, in contrast to existing connectionist rule bases. In this paper, we present a method for generating neurules from empirical data. We overcome the inability of the adaline unit to classify non-separable training examples by introducing the notion of 'closeness' between training examples and splitting each training set into subsets of 'close' examples.
International Journal on Artificial Intelligence Tools, 2001
Neurules are a kind of hybrid rules integrating neurocomputing and production rules. Each neurule is represented as an adaline unit. Thus, the corresponding neurule base consists of a number of autonomous adaline units (neurules). As a result, a modular and natural knowledge base is constructed, in contrast to existing connectionist knowledge bases. In this paper, we present a method for generating neurules from empirical data. To overcome the inability of the adaline unit to classify non-separable training examples, the notion of 'closeness' between training examples is introduced. In the case of a training failure, two subsets of 'close' examples are produced from the initial training set and a copy of the neurule is trained on each subset. If training of any copy fails, further subsets are produced until training succeeds.
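The split-and-retrain loop described in this abstract can be sketched roughly as follows. The closeness measure used here (Hamming distance to a seed example) and the half-split are illustrative stand-ins for the paper's actual splitting criteria, and the LMS training parameters are arbitrary choices:

```python
import numpy as np

def train_adaline(X, y, lr=0.01, epochs=1000):
    """Train an adaline (LMS) unit on bipolar targets; return (weights, success).
    Success means every example ends up on the correct side of the boundary."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, ti in zip(Xb, y):
            w += lr * (ti - xi @ w) * xi       # LMS (delta rule) update
        if np.all(np.sign(Xb @ w) == y):       # all examples classified?
            return w, True
    return w, False

def split_by_closeness(X, y):
    """Split a training set into two subsets of 'close' examples, here
    approximated by Hamming distance to the first example (a stand-in
    for the paper's closeness notion)."""
    dist = np.sum(X != X[0], axis=1)
    order = np.argsort(dist, kind="stable")
    half = len(X) // 2
    return (X[order[:half]], y[order[:half]]), (X[order[half:]], y[order[half:]])

def build_neurules(X, y):
    """Train a neurule; on failure, split the set and train a copy per subset."""
    w, ok = train_adaline(X, y)
    if ok or len(X) <= 1:
        return [w]
    (Xa, ya), (Xc, yc) = split_by_closeness(X, y)
    return build_neurules(Xa, ya) + build_neurules(Xc, yc)

# XOR (non-separable) forces a split; AND (separable) yields a single neurule.
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
rules_xor = build_neurules(X, np.array([-1., 1., 1., -1.]))
rules_and = build_neurules(X, np.array([-1., -1., -1., 1.]))
```

On the XOR targets the adaline cannot converge, so the set is split into two separable halves and one neurule is produced per half, mirroring the recursive behaviour the abstract describes.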
International Journal on Artificial Intelligence Tools
Neurules are a type of hybrid rules integrating neurocomputing (via the adaline unit) and production rules. A neurule base is modular and natural, in contrast to existing connectionist knowledge bases, a comparable type of integrated knowledge base. In producing neurules from an empirical training set, the inability of the adaline unit to classify non-separable training data must be addressed. The general approach is to consecutively split the training set into two subsets, according to a splitting strategy, until (sub)sets of separable data are produced; one neurule is then produced for each resulting subset. In this paper, we present and experimentally evaluate six splitting strategies applied to the production process of a neurule base, three of which are based on clustering algorithms suitable for categorical data (i.e., 2-medoids, 2-modes and COOLCAT). Experiments were performed using 18 different distance or similarity metrics suitable for categorical data...
1995
In this paper we introduce a new formalism for rule specification that extends the behaviour of a traditional rule-based system and allows the natural development of hybrid trainable systems.
Neurules are a type of hybrid rules combining a symbolic and a connectionist representation. A neurule base consists of a number of autonomous adaline units (neurules), in contrast to existing neuro-symbolic knowledge bases. A neurule base is constructed from training examples. To overcome the inability of the adaline unit to classify non-separable training examples, the notion of 'closeness' between training examples has been used to split the initial training set into subsets that can be successfully trained. In this paper, we investigate previously unexplored aspects regarding the construction of neurules from training examples. First, we compare different splitting policies, i.e. policies using different criteria for splitting the training set. We also introduce two alternative approaches to splitting not solely relying on closeness and compare them with our initial approach, which is solely based on closeness. The comparison demonstrates the effectiveness of the notion of 'closeness' in splitting the initial non-separable training set. Finally, we evaluate the generalization capability of neurules.
IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
Local basis function networks are a useful category of classifiers, with known variations developed in the neural networks, machine learning and statistics communities. The localized range of activation of the hidden units has many similarities with rule-based representations. Neurofuzzy systems are a common example of a framework that explicitly integrates these approaches. Following this concept, we study alternatives for the development of rule-neural hybrid systems with the purpose of inducing robust and interpretable classifiers. Local fitting of parameters is done by a gradient descent optimization that modifies the covering produced by a rule induction algorithm. Two tasks are addressed: selecting a small number of rules and improving precision. Accuracy is not the only target: indeed, this architecture is best suited when one wants to achieve a good compromise between classification performance and simplicity.
International journal of neural systems, 2001
The problem of rule extraction from neural networks is NP-hard. This work presents a new technique to extract "if-then-else" rules from ensembles of DIMLP neural networks. Rules are extracted in polynomial time with respect to the dimensionality of the problem, the number of examples, and the size of the resulting network. Further, the degree of matching between extracted rules and neural network responses is 100%. Ensembles of DIMLP networks were trained on four data sets in the public domain. Extracted rules were on average significantly more accurate than those extracted from C4.5 decision trees.
… of the 8th Australian and New …, 2003
Rule extraction from neural networks often focuses on exact equivalence and is often tested on relatively small canonical examples. We apply genetic algorithms to extract approximate rules from neural networks. The method is robust and works with large networks. We compare the results with rules obtained using state-of-the-art decision tree methods and achieve superior performance to a straightforward application of the WEKA implementation of the C5 algorithm, J48.PART.
Neural Information Processing Systems, 1992
We demonstrate in this paper how certain forms of rule-based knowledge can be used to prestructure a neural network of normalized basis functions and give a probabilistic interpretation of the network architecture. We describe several ways to ensure that rule-based knowledge is preserved during training and present a method for complexity reduction that tries to minimize the number
1998
Over the last decade, multi-layer perceptrons (MLPs) have been widely used in classification tasks. Nevertheless, the difficulty of explaining their results represents a main drawback for their acceptance in critical domain applications such as medical diagnosis. In this context, how can we trust a black box without any form of explanation capability? To redress this situation, the internal representation of a multi-layer perceptron should be transformed into symbolic rules. Such a network is a neural expert system. In the field of symbolic rule extraction from neural networks, Andrews et al. proposed a taxonomy to explain and compare the characteristics of the existing techniques. After studying what we consider the main contributions of the domain, we propose a new approach to extracting symbolic rules by precisely locating the discriminant frontiers between two classes. Basically, in our mathematical analysis we point out that a frontier is built according to an equation with one...
The 1st Online Workshop on Soft …, 1996
A simple method for extracting logical rules from neural networks trained with the backpropagation algorithm is presented. Logical interpretation is assured by adding an additional term to the cost function, forcing the weight values to be ±1 or zero. An auxiliary constraint ensures that the training process strives toward a network with the maximal number of zero weights, which, augmented by weight pruning, yields a minimal number of logical rules extracted by means of weight analysis. Rules are generated consecutively, from the most general, covering many training examples, to the most specific, covering a few or even single cases. If there are any exceptions to these rules, they are detected by additional neurons. The algorithm applied to the Iris classification problem generates 3 rules which give 98.7% accuracy. The rules found for the three monks and mushroom problems classify all the examples correctly.
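A cost-function term of the kind this abstract describes can be sketched as a penalty with minima at w = -1, 0 and +1; the exact polynomial form, the strength `lam` and the step size below are illustrative assumptions, not necessarily those used in the paper:

```python
import numpy as np

def rule_penalty(w, lam=0.5):
    """Penalty term with minima at w = -1, 0, +1, pushing each weight
    toward a value that has a logical (rule-like) interpretation."""
    return lam * np.sum(w**2 * (w - 1)**2 * (w + 1)**2)

def rule_penalty_grad(w, lam=0.5):
    # d/dw of w^2 (w^2 - 1)^2 = 2w(w^2-1)^2 + 4w^3(w^2-1)
    return lam * (2 * w * (w**2 - 1)**2 + 4 * w**3 * (w**2 - 1))

# Gradient descent on the penalty alone: each weight drifts to the
# nearest attractor in {-1, 0, +1}.
w = np.array([0.9, -0.2, 0.6, -1.1])
for _ in range(2000):
    w -= 0.05 * rule_penalty_grad(w)
# w is now very close to [1, 0, 1, -1]
```

In a real training run this gradient would simply be added to the error gradient of the network, so the weights are pulled toward ±1 or zero while still fitting the data.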
Expert Systems with Applications, 1991
Rule-based expert systems either develop out of the direct involvement of a concerned expert or through the enormous efforts of intermediaries called knowledge engineers. In either case, knowledge engineering tools are inadequate in many ways to support the complex problem of expert system building. This article describes a set of experiments with adaptive neural networks which explore two types of learning, deductive and inductive, in the context of a rule-based, deterministic parser of natural language. Rule-based processing of language is an important and complex domain. Experiences gained in this domain generalize to other rule-based domains. We report on those experiences and draw some general conclusions that are relevant to knowledge engineering activities and the maintenance of rule-based systems.
Rule extraction from trained neural networks has previously been used to generate propositional rule sets. The extraction of "generic" rules or objects from trained feedforward networks is clearly desirable and sufficient for many applications. We present several approaches to generating a knowledge base that includes rules, facts and an is-a hierarchy, enabling greater explanatory capability by allowing user interaction. The approaches are: (1) construct two feedforward neural networks using the cascade-correlation algorithm [Fahlman & Lebiere, 1991] and the tower algorithm, and extract rules at the level of individual hidden and output units of both networks using the decompositional rule-extraction method "LAP"; (2) use cascade correlation and the tower algorithm to train two different feedforward neural networks, and extract rules that map inputs directly into outputs, generating the examples for each learning algorithm, using the pedagogical rule-extraction method "RuleVI"; (3) use constrained error backpropagation to train a feedforward neural network, extract rules at the level of individual hidden and output units using the decompositional rule-extraction method "RULEX", and use the extracted symbolic rules to generate a connectionist knowledge base. The performance is then demonstrated on a number of real-world applications.
14th IEEE International Conference on Tools with Artificial Intelligence, 2002. (ICTAI 2002). Proceedings., 2002
Neurules are a kind of hybrid rules that combine a symbolic (production rules) and a connectionist (adaline unit) representation. Each neurule is represented as an adaline unit. One way that the neurules can be produced is from training examples (empirical source knowledge). However, in certain application fields not all of the training examples are available a priori. A number of them become available over time. In these cases, updating the corresponding neurules is necessary. In this paper, methods for updating a hybrid rule base, consisting of neurules, to reflect the availability of new training examples are presented. The methods are efficient, since they require the least possible retraining effort and the number of the produced neurules is kept as small as possible.
Smart Innovation, Systems and Technologies, 2011
Neurules are a kind of hybrid rules integrating neurocomputing and production rules. Each neurule is represented as an adaline unit. Thus, the corresponding neurule base consists of a number of autonomous adaline units (neurules). As a result, a modular and natural knowledge base is constructed, in contrast to existing connectionist knowledge bases. In this paper, we present an overview of our main work involving neurules. We focus on aspects concerning construction of neurules, efficient updates of neurule bases, neurule-based inference and combination of neurules with case-based reasoning. Neurules may be constructed from either symbolic rule bases or empirical data in the form of training examples. Because the source knowledge of neurules may change with time, efficient updates of the corresponding neurule bases are performed to reflect such changes. Furthermore, the neurule-based inference mechanism is interactive and more efficient than the inference mechanism used in connectionist expert systems. Finally, neurules can be naturally combined with case-based reasoning to provide a more effective representation scheme that exploits multiple knowledge sources and provides enhanced reasoning capabilities.
Expert networks are networks of neural objects derived from expert systems. The hybrid nature of such networks allows the expert knowledge to be refined and augmented using sample data. The benefit of combining expert systems with neural network-like learning from data has been illustrated in a number of diagnosis and decision-making problem domains. The ability of these systems to learn illuminates knowledge previously unknown to human experts. This paper contains a synopsis of work in expert networks over the last several years, some of the findings we have found useful and interesting, and an indication of current directions for this research.
Expert Systems, 2007
Neurules are a kind of hybrid rules that combine a symbolic (production rules) and a connectionist (adaline unit) representation. One way that neurules can be produced is from training examples/patterns extracted from empirical data. However, in certain application fields not all of the training examples are available a priori; a number of them become available over time. In those cases, updating a neurule base is necessary. In this paper, methods for updating a hybrid rule base, consisting of neurules, to reflect the availability of new training examples are presented. They can be considered a type of incremental learning method that retains the entire induced hypothesis and all past training examples. The methods are efficient, since they require the least possible retraining effort and keep the number of produced neurules as small as possible. Experimental results supporting this claim are presented.
Nonlinear Analysis: Theory, Methods & …, 1997
Decision Support Systems, 1996
This research explores a new approach to integrating neural networks and expert systems. The integrated system combines the strength of a rule-based semantic structure with the learning capability of a connectionist architecture. In addition, the approach allows users to define logical operators that behave much like those in a human expert's decision-making process. The Neural Logic Network (NEULONET) is used as the underlying building unit. A rule-based shell-like environment is developed. The shell is used to build a prototype expert decision support system for future bonds trading. The system also provides a way to behave like different experts, responding to different users and giving advice according to different environmental situations.
Advances in Informatics, 2000
In this paper, a hybrid knowledge representation formalism that integrates neurocomputing into the symbolic framework of production rules is presented. This is achieved by introducing neurules, a type of integrated rules. Each neurule is considered an adaline unit, where the weights are treated as significance factors. Each significance factor represents the significance of the associated condition in drawing the conclusion. A rule is fired when the corresponding adaline output becomes active. In this way, the naturalness and modularity of production rules are retained, and imprecise relations between the conditions and the conclusion of a rule can be represented. Additionally, a number of heuristics used in the inference procedure increase efficiency.
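The firing condition described above can be sketched as follows; the condition names, significance factors and bias below are invented for illustration and are not taken from any published neurule base:

```python
from dataclasses import dataclass

@dataclass
class Neurule:
    """A neurule: the conclusion fires when the weighted sum of condition
    truth values plus the bias exceeds zero (adaline-style activation)."""
    conclusion: str
    bias: float
    conditions: dict  # condition name -> significance factor

def fires(rule: Neurule, facts: dict) -> bool:
    """facts maps each condition name to 1 (true) or -1 (false)."""
    s = rule.bias + sum(sf * facts[cond] for cond, sf in rule.conditions.items())
    return s > 0

# Hypothetical example rule with made-up significance factors.
rule = Neurule(
    conclusion="disease-A",
    bias=-2.0,
    conditions={"fever": 3.5, "cough": 1.5, "rash": -1.0},
)
# fever true, cough true, rash false: 3.5 + 1.5 + 1.0 - 2.0 = 4.0 > 0, so it fires.
fires(rule, {"fever": 1, "cough": 1, "rash": -1})
```

The significance factors make the imprecision the abstract mentions explicit: no single condition is strictly necessary, but strongly weighted ones (here "fever") dominate the decision.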
2000
In this paper, two methods for the extraction of knowledge rules through artificial neural networks with continuous activation functions are presented. The rules are extracted from previously trained neural networks and from the sensitivity factors obtained by differentiating the network. The rules can be used when analytic models of the physical processes lead to equations of difficult...