2008, 10th Brazilian Symposium …
Hybrid intelligent systems combine various components necessary for addressing complex application domains in Artificial Intelligence (AI). Weightless Neural Networks (WNNs) are proposed as a model for these systems, allowing for the integration of symbolic rules to enhance user understanding and acceptance. The Knowledge-based Inference System introduced in this paper enables refined classification through WNNs, while simultaneously facilitating the extraction of comprehensible rules from the trained networks.
International Journal of Bio-Inspired Computation, 2009
A hybrid system using weightless neural networks (WNNs) and finite state automata is described in this paper. With such a system, rules can be inserted into and extracted from WNNs. Both problems are described, with a detailed discussion of the advantages and disadvantages of the proposed insertion and extraction algorithms. Rule insertion and extraction are often more natural in WNNs than in other neural network models.
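The reason insertion and extraction are natural in WNNs is that a RAM node's memory contents directly encode a Boolean function. The following is a minimal, hypothetical sketch (not the algorithms from the paper): a symbolic rule is inserted by writing 1s at every matching address of a RAM node, and read back as minterms.

```python
# Hedged sketch: inserting a symbolic rule into a single RAM node and
# reading it back as Boolean expressions. All names are illustrative.
from itertools import product

def insert_rule(ram, rule, n_bits):
    """Set the RAM contents to 1 at every address matching the rule.
    `rule` maps bit index -> required value; unconstrained bits are free."""
    for bits in product([0, 1], repeat=n_bits):
        if all(bits[i] == v for i, v in rule.items()):
            ram[int("".join(map(str, bits)), 2)] = 1

def extract_rules(ram, n_bits):
    """Read the stored addresses back as minterms (conjunctions of literals)."""
    terms = []
    for addr, stored in enumerate(ram):
        if stored:
            pattern = format(addr, f"0{n_bits}b")
            terms.append(" AND ".join(
                f"x{i}" if b == "1" else f"NOT x{i}"
                for i, b in enumerate(pattern)))
    return terms

ram = [0] * 8                      # a 3-input RAM node (2^3 addresses)
insert_rule(ram, {0: 1, 1: 0}, 3)  # rule: x0 AND NOT x1 -> output 1
```

After the insertion above, `extract_rules(ram, 3)` recovers the two minterms covered by the rule (with `x2` free), illustrating the round trip between symbolic rules and RAM contents.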
Computer standards & interfaces, 1994
Lecture Notes in Computer Science, 2008
A novel weightless neural network model is presented, based on the known Alpha and Beta operations together with three original operations proposed here. The new model has been called CAINN (Computing Artificial Intelligent Neural Network). Experimental results are presented by applying the CAINN model to several well-known databases, and comparative studies of its performance against the ADAM weightless neural network and other models are reported. The results show the superiority of the CAINN model over ADAM, its weightless counterpart, and over other state-of-the-art neural network models, taking the No Free Lunch theorem into account.
2009
Mimicking biological neurons by focusing on the excitatory/inhibitory decoding performed by the dendritic trees is a different and attractive alternative to the integrate-and-fire McCulloch-Pitts neuron stylisation. In this alternative analogy, neurons can be seen as a set of RAM nodes addressed by Boolean inputs and producing Boolean outputs. Shortening the semantic gap between the synaptic-centric model introduced by the McCulloch-Pitts neuron and the dominating binary digital computational environment is among the interesting benefits of the weightless neural approach. This paper presents an overview of the most representative paradigms of weightless neural systems and corresponding applications, at abstraction levels ranging from pattern recognition to artificial consciousness.
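The RAM-node view described above can be sketched very compactly. The following is an illustrative WiSARD-style discriminator, not taken from any specific paper: the Boolean input is split into n-tuples, each tuple addresses one RAM node, and the response is the number of nodes that recognise their address.

```python
# Minimal sketch of a RAM-node weightless discriminator (WiSARD-style).
# Class and method names are illustrative assumptions.

class RAMDiscriminator:
    def __init__(self, input_bits, tuple_size):
        assert input_bits % tuple_size == 0
        self.tuple_size = tuple_size
        self.n_rams = input_bits // tuple_size
        # Each RAM node stores the set of addresses seen during training.
        self.rams = [set() for _ in range(self.n_rams)]

    def _addresses(self, bits):
        # Split the Boolean input into n-tuples, read each tuple as an address.
        for i, start in enumerate(range(0, len(bits), self.tuple_size)):
            chunk = bits[start:start + self.tuple_size]
            yield i, int("".join(map(str, chunk)), 2)

    def train(self, bits):
        for i, addr in self._addresses(bits):
            self.rams[i].add(addr)   # write a 1 at this address

    def score(self, bits):
        # Response = number of RAM nodes that recognise their address.
        return sum(addr in self.rams[i] for i, addr in self._addresses(bits))
```

Training is a one-shot memory write and recall needs no arithmetic on weights, which is what makes the model a good fit for binary digital hardware.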
Journal of the Society of Dyers and Colourists, 1998
1995
Although backpropagation neural networks generally predict better than decision trees for pattern classification problems, they are often regarded as black boxes, i.e., their predictions are not as interpretable as those of decision trees. This paper argues that this is because there has been no proper technique to make them so. Using an algorithm that can extract rules, and drawing parallels with decision trees, we show that the predictions of a network can be explained via rules extracted from it, and thereby the network can be understood. Experiments demonstrate that rules extracted from neural networks are comparable with those of decision trees in terms of predictive accuracy, number of rules, and average number of conditions per rule, and that they preserve the high predictive accuracy of the original networks.
Fuzzy Sets and Systems, 1995
In many cases the identification of systems by means of fuzzy rules is carried out by taking the rules from a predetermined set of possible ones. In this case, the correct description of the system is given by a finite set of rules, each with an associated weight that assesses its correctness or accuracy. Here we present a method to learn this consistency level, or weight, with a neural network. The design of this neural network, as well as the features of the training models, are discussed. The paper concludes with an example.
Neural Processing Letters, 1998
Three neural-based methods for extraction of logical rules from data are presented. These methods facilitate conversion of graded response neural networks into networks performing logical functions. MLP2LN method tries to convert a standard MLP into a network performing logical operations (LN). C-MLP2LN is a constructive algorithm creating such MLP networks. Logical interpretation is assured by adding constraints to the cost function, forcing the weights to ±1 or 0. Skeletal networks emerge ensuring that a minimal number of logical rules are found. In both methods rules covering many training examples are generated before more specific rules covering exceptions. The third method, FSM2LN, is based on the probability density estimation. Several examples of performance of these methods are presented.
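The constraint described for MLP2LN can be illustrated with a regulariser of the kind commonly used for this purpose: one term prunes weights toward 0 (yielding the skeletal network) and another pulls the surviving weights to ±1. The exact cost function and hyperparameters below are assumptions for illustration, not the paper's formulation.

```python
# Sketch of an MLP2LN-style regulariser: an extra cost term pushing each
# weight toward 0 or +/-1 so the trained MLP reads as a logical network.
# lam1 and lam2 are illustrative hyperparameters.
import numpy as np

def logical_penalty(w, lam1=1e-3, lam2=1e-2):
    # lam1 * w^2 prunes weights toward 0 (skeletonisation);
    # lam2 * w^2 (w-1)^2 (w+1)^2 pulls surviving weights to +/-1.
    return lam1 * np.sum(w**2) + lam2 * np.sum(w**2 * (w - 1)**2 * (w + 1)**2)

def logical_penalty_grad(w, lam1=1e-3, lam2=1e-2):
    # Gradient of the penalty, added to the backprop gradient of the data loss.
    # d/dw [w^2 (w^2-1)^2] = 2w (w^2-1)^2 + 4 w^3 (w^2-1)
    return 2 * lam1 * w + lam2 * (2 * w * (w**2 - 1)**2 + 4 * w**3 * (w**2 - 1))
```

Both the penalty and its gradient vanish at w in {-1, 0, +1} (for lam1 = 0), so once a weight reaches one of those values the constraint no longer moves it, and thresholded units then compute Boolean functions that can be read off as rules.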
2009
Because of the great complexity of the world, the ability to deal with uncertainty and to infer "almost" true rules is a necessity for intelligent systems. The search for ways to emulate inductive reasoning is therefore one of the fundamental problems of Artificial Intelligence. Several approaches have been studied: techniques inherited from statistics on one side, and techniques based on logic on the other. These two families show complementary advantages and weaknesses. Statistical techniques, such as decision trees or artificial neural networks, are robust against noisy data and able to deal with large quantities of information, but they are generally unable to generate complex rules. Logic-based techniques, such as ILP, can express very complex rules but cannot deal with large amounts of information. This report presents the study and development of a hybrid induction technique mixing the essence of statistical and logical learning, i.e. an induction technique based on first-order logic semantics that generates hypotheses using artificial neural network learning techniques. The expressive power of the hypotheses is that of predicate logic, and the learning process is insensitive to noisy data thanks to the neural-network-based learning. During the project presented in this report, four new techniques were studied and implemented: the first learns propositional relationships with an artificial neural network, i.e. induction on propositional logic programs; the other three learn first-order predicate relationships with artificial neural networks, i.e. induction on predicate logic programs. The last of these techniques is the most complete, building on the knowledge acquired during the development of the others.
The main advance of this technique is the definition of a convention allowing predicate logic programs and artificial neural networks to interact, and the construction of artificial neural networks able to learn rules with the expressive power of predicate logic.
arXiv (Cornell University), 2021
A fuzzy multipreference semantics has recently been proposed for weighted conditional knowledge bases and used to develop a logical semantics for Multilayer Perceptrons (MLPs), by regarding a deep neural network (after training) as a weighted conditional knowledge base. This semantics, in its different variants, suggests some gradual argumentation semantics related to the family of gradual semantics studied by Amgoud and Doder. The relationships between weighted conditional knowledge bases and MLPs extend to the proposed gradual semantics to capture the stationary states of MLPs, in agreement with previous results on the relationship between argumentation frameworks and neural networks. The paper also suggests a simple way to extend the proposed semantics to deal with attacks/supports by a Boolean combination of arguments, based on the fuzzy semantics of weighted conditionals, as well as an approach for defeasible reasoning over a weighted argumentation graph, building on the proposed gradual semantics.