2012 International Conference on Frontiers in Handwriting Recognition, 2012
Classifier combination methods have proved to be an effective tool for increasing performance in pattern recognition applications. The rationale of this approach follows from the observation that appropriately diverse classifiers make uncorrelated errors. Unfortunately, this theoretical assumption is not easy to satisfy in practical cases, reducing the performance obtainable with any combination strategy. In this paper we propose a new weighted majority vote rule which tries to solve this problem by jointly analyzing the responses provided by all the experts, in order to capture their collective behavior when classifying a sample. Our rule associates a weight with each class rather than with each expert, and computes these weights by estimating the joint probability distribution of each class with the set of responses provided by all the experts in the combining pool. The probability distribution is computed using the naive Bayes probabilistic model. Despite its simplicity, this model has been successfully used in many practical applications, often competing with much more sophisticated techniques. Experimental results on three standard databases of handwritten digits confirm the effectiveness of the proposed method.
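A minimal sketch of such a naive Bayes combiner, assuming hard label outputs from each expert and a labelled validation set for estimating the distributions (an illustration, not the authors' implementation):

```python
import numpy as np

def train_nb_combiner(votes, labels, n_classes, alpha=1.0):
    """Estimate P(class) and P(expert_j says v | class) from a validation set.

    votes:  (n_samples, n_experts) array of labels predicted by each expert
    labels: (n_samples,) array of true class labels
    alpha:  Laplace smoothing constant
    """
    n_samples, n_experts = votes.shape
    log_prior = np.log(np.bincount(labels, minlength=n_classes) + alpha)
    # cond[j, c, v] = P(expert j outputs v | true class is c)
    cond = np.full((n_experts, n_classes, n_classes), alpha)
    for x, y in zip(votes, labels):
        for j, v in enumerate(x):
            cond[j, y, v] += 1
    cond /= cond.sum(axis=2, keepdims=True)
    return log_prior, np.log(cond)

def nb_combine(vote, log_prior, log_cond):
    """Weight each class by its joint probability with the experts' responses."""
    scores = log_prior.copy()
    for j, v in enumerate(vote):
        scores += log_cond[j, :, v]
    return int(np.argmax(scores))
```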
Object recognition supported by user interaction for service robots, 2002
In this paper we introduce a novel multiple classifier system that incorporates a global optimization technique based on a genetic algorithm for configuring the system. The system adopts the weighted majority vote approach to combine the decisions of the experts, and obtains the weights by maximizing the performance of the whole set of experts rather than that of each of them separately. The system has been tested on a handwritten digit recognition problem, and its performance compared with that of a system using the weights obtained during the separate training of each expert. The results of a set of experiments conducted on 30,000 digits extracted from the NIST database show that the proposed system outperforms the alternative one, and that the improvement is due to a better estimate of the reliability of the participating classifiers.
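A toy sketch of the idea, assuming the experts' votes on a validation set are available as an integer matrix; the GA operators shown (truncation selection, Gaussian mutation) are illustrative choices, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_accuracy(weights, votes, labels, n_classes):
    """Accuracy of a weighted majority vote with the given expert weights."""
    n = len(labels)
    scores = np.zeros((n, n_classes))
    for j, w in enumerate(weights):
        scores[np.arange(n), votes[:, j]] += w
    return np.mean(scores.argmax(axis=1) == labels)

def evolve_weights(votes, labels, n_classes, pop=30, gens=50, sigma=0.1):
    """Tiny GA: fitness = accuracy of the whole combined pool, not of single experts."""
    n_experts = votes.shape[1]
    population = rng.random((pop, n_experts))
    for _ in range(gens):
        fitness = np.array([ensemble_accuracy(w, votes, labels, n_classes)
                            for w in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]        # selection
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = children + rng.normal(0, sigma, children.shape)   # mutation
        population = np.vstack([parents, np.clip(children, 0, None)])
    fitness = np.array([ensemble_accuracy(w, votes, labels, n_classes)
                        for w in population])
    return population[fitness.argmax()]
```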
2000
In many applications of computer vision, the combination of decisions from multiple sources is a very important way of achieving more accurate and robust classification. Many such techniques can be used, two of which are the Majority Voting and the Divide and Conquer techniques. The former achieves decision combination by measuring consensus among the participating classifiers, while the latter divides the problem into smaller sub-problems and solves each of them more efficiently. Both approaches have their advantages and disadvantages. In this paper, a novel approach to combining these two techniques is presented. Although the success of the approach has been demonstrated in a typical application area of computer vision (recognition of complex and highly variable image data), the approach is completely generalised and is applicable to other task domains.
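For reference, the Majority (plurality) Voting half of the combination can be as simple as the following sketch:

```python
from collections import Counter

def plurality_vote(votes):
    """Return the label with the most votes; ties broken by first occurrence."""
    return Counter(votes).most_common(1)[0][0]

# e.g. plurality_vote(['cat', 'dog', 'cat']) -> 'cat'
```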
Lecture Notes in Computer Science, 2003
This study covers weighted combination methodologies for multiple classifiers to improve classification accuracy. The classifiers are extended to produce class probability estimates besides their class label assignments, so that they can be combined more efficiently. The leave-one-out training method is used and the results are combined using the proposed weighted combination algorithms. The weights of the classifiers are determined from their performance in the training phase. The classifiers and combination algorithms are evaluated using classical and proposed performance measures. It is found that integrating the proposed reliability measure improves classification performance. A sensitivity analysis shows that the proposed polynomial weight assignment, applied with probability-based combination, is robust to the choice of classifiers in the set and yields a consistent improvement of one to three percent over the single best classifier of the same set.
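A hedged sketch of performance-weighted probability combination; the exact polynomial weight assignment is not given in the abstract, so the power-of-accuracy weighting mentioned in the docstring is only an assumed example:

```python
import numpy as np

def weighted_prob_combine(probas, weights):
    """Combine per-classifier class-probability estimates with performance weights.

    probas:  list of (n_samples, n_classes) probability arrays, one per classifier
    weights: per-classifier weights derived from training-phase performance,
             e.g. weights = val_accuracies ** d for some tunable degree d
             (an assumed polynomial weighting, for illustration only)
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    combined = sum(w * p for w, p in zip(weights, probas))
    return combined.argmax(axis=1)
```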
IEEE Access
In critical applications, such as medical diagnosis and security-related systems, the cost or risk of acting on an incorrect classification can be very high. Hence, combining expert opinions before making a decision can substantially increase the reliability of such systems. Such pattern recognition systems base their final decision on evidence collected from different classifiers; this evidence can be of data type, feature type, or classifier type. Common problems in pattern recognition, such as the curse of dimensionality and small sample size, have also prompted researchers to seek new approaches for combining evidence. This paper presents a criteria-based framework for multiclassifier combination techniques and their areas of application. The criteria discussed here include levels of combination, types of thresholding, adaptiveness of the combination, and ensemble-based approaches. The strengths and weaknesses of each of these categories are discussed in detail. Following this analysis, we provide our perspective on the outlook of this area of research and its open problems. The lack of a well-formulated theoretical framework for analyzing the performance of combination techniques is shown to provide fertile ground for further research. In addition to summarizing the existing work, this paper also updates and complements the latest developments in this area of research.
Information fusion, 2005
Individual classification models have recently been challenged by combined pattern recognition systems, which often show better performance. In such systems the optimal set of classifiers is first selected and then combined by a specific fusion method. For a small number of classifiers optimal ensembles can be found exhaustively, but the exponential complexity of such a search limits its practical applicability for larger systems. As a result, simpler search algorithms and/or selection criteria are needed to reduce the complexity. This work reviews the classifier selection methodology and evaluates the practical applicability of diversity measures in the context of combining classifiers by majority voting. A number of search algorithms are proposed and adjusted to work properly with a number of selection criteria, including majority voting error and various diversity measures. Extensive experiments carried out with 15 classifiers on 27 datasets indicate that diversity measures are inappropriate as selection criteria, favouring instead search based directly on the combiner error. Furthermore, the results prompted a novel design of multiple classifier systems in which selection and fusion are recurrently applied to a population of best combinations of classifiers rather than the individual best. The improvement in the generalisation performance of such a system is demonstrated experimentally.
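As an example of the direct combiner-error-based search that the experiments favour, here is a sketch of greedy forward selection using majority voting error as the criterion (an illustrative algorithm, not necessarily one of the paper's):

```python
import numpy as np
from collections import Counter

def majority_error(subset, votes, labels):
    """Majority-vote error of the classifiers in `subset` (columns of `votes`)."""
    preds = [Counter(row).most_common(1)[0][0] for row in votes[:, subset]]
    return np.mean(np.asarray(preds) != labels)

def greedy_forward_selection(votes, labels):
    """Add, at each step, the classifier that most reduces the combiner error."""
    remaining = list(range(votes.shape[1]))
    selected, best_err = [], 1.0
    while remaining:
        errs = [(majority_error(selected + [j], votes, labels), j) for j in remaining]
        err, j = min(errs)
        if err >= best_err and selected:
            break                       # no further improvement
        selected.append(j)
        remaining.remove(j)
        best_err = err
    return selected, best_err
```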
Computational Intelligence, 2017
Classifier combination methods have proved to be an effective tool for increasing the performance of classification techniques in pattern recognition applications. Despite a significant number of publications describing successful classifier combination implementations, the theoretical basis is still not mature enough and the achieved improvements are inconsistent. In this paper, we propose a novel statistical validation technique, a correlation-based classifier combination technique, for combining classifiers in any pattern recognition problem. This validation has a significant influence on the performance of combinations, and its utilization is necessary for a complete theoretical understanding of combination algorithms. The analysis presented is statistical in nature but promises to lead to a class of algorithms for rank-based decision combination. The theoretical and practical issues in implementation are illustrated on two standard pattern recognition datasets, the handwritten digit recognition and letter image recognition datasets from the UCI Machine Learning Repository (http://www.ics.uci.edu/~mlearn). An empirical evaluation using 8 well-known distinct classifiers confirms the validity of our approach compared to other multiple classifier combination algorithms. Finally, we also suggest a methodology for determining the best mix of individual classifiers.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
Amidst the conflicting experimental evidence of the superiority of one over the other, we investigate the Sum and majority Vote combining rules in a two-class case, under the assumption that the experts are of equal strength and that the estimation errors are conditionally independent and identically distributed. We show analytically that, for Gaussian estimation error distributions, Sum always outperforms Vote. For heavy-tailed distributions, we demonstrate by simulation that Vote may outperform Sum. Results on synthetic data confirm the theoretical predictions. Experiments on real data support the general findings, but also show the effect of the usual assumptions of conditional independence, identical error distributions, and common target outputs of the experts not being fully satisfied.
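A small simulation sketch of the two rules under the paper's setting (equal-strength experts, i.i.d. estimation errors); the noise scales and the Student-t choice for heavy tails are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_experts=5, n_trials=100_000, margin=0.5, heavy_tails=False):
    """Two-class problem: each expert outputs a noisy estimate of P(class 1).

    The true posterior of class 1 is 0.5 + margin/2; estimation errors are
    i.i.d. Gaussian, or Student-t with 2 dof to model heavy tails.
    An odd number of experts avoids ties under the Vote rule.
    """
    true_p = 0.5 + margin / 2
    noise = (rng.standard_t(2, (n_trials, n_experts)) * 0.1 if heavy_tails
             else rng.normal(0, 0.3, (n_trials, n_experts)))
    estimates = np.clip(true_p + noise, 0, 1)
    sum_correct = (estimates.mean(axis=1) > 0.5).mean()            # Sum rule
    vote_correct = ((estimates > 0.5).mean(axis=1) > 0.5).mean()   # Vote rule
    return sum_correct, vote_correct
```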
2004
The aim of this paper is to investigate the role of a-priori knowledge in the process of classifier combination. For this purpose three combination methods using different levels of a-priori knowledge are compared. The performance of the methods is measured under different working conditions by simulating sets of classifiers with different characteristics: a random variable is used to simulate each classifier, and an estimator of stochastic correlation is used to measure the agreement among classifiers. The experimental results, which clarify the conditions under which each combination method provides better performance, show to what extent a-priori knowledge about the characteristics of the set of classifiers can improve the effectiveness of the classifier combination process.
2009
Recognition systems have found applications in almost all fields. However, most classification algorithms achieve good performance on specific problems while lacking robustness on others. The combination of multiple classifiers can be considered a general solution method for pattern recognition problems: it has been shown that a combination of classifiers usually performs better than a single classifier, provided its components are independent or produce diverse outputs. It has also been shown that the necessary diversity of an ensemble can be achieved by manipulating the features of the data set, and we propose a new method of creating this diversity. Although the ensemble created by the proposed method may not always outperform every classifier within it, it always possesses the diversity needed for ensemble creation, and consequently it always outperforms the simple classifier.
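The paper's specific diversity-creation method is not detailed in the abstract; as a standard illustration of achieving diversity by manipulating data set features, here is a random subspace ensemble sketch (this is the classical technique, not the authors' new method):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

def random_subspace_ensemble(X, y, n_members=11, frac=0.5):
    """Train each member on a random subset of features to induce diversity."""
    n_features = X.shape[1]
    k = max(1, int(frac * n_features))
    members = []
    for _ in range(n_members):
        feats = rng.choice(n_features, size=k, replace=False)
        clf = DecisionTreeClassifier().fit(X[:, feats], y)
        members.append((feats, clf))
    return members

def subspace_predict(members, X):
    """Majority vote over the members' predictions (integer labels >= 0)."""
    votes = np.stack([clf.predict(X[:, feats]) for feats, clf in members])
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```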
JSCI: International Journal of Systemics, Cybernetics, and Informatics
We investigate a number of parameters commonly affecting the design of a multiple classifier system in order to find when fusion is most beneficial. We extend our previous investigation to the case where unequal classifiers are combined. Results indicate that Sum is not affected by this parameter; Vote, however, degrades when a weaker classifier is introduced into the combining system. This is more evident when estimation error with a uniform distribution is present.
In recent years, the combination of classifiers has been proposed as a method to improve the accuracy achieved in isolation by a single classifier. We are interested in ensemble methods that allow the combination of heterogeneous sets of classifiers, which are classifiers built using differing learning paradigms. We focus on theoretical and experimental comparison of five such combination methods: majority vote, a method based on Bayes' rule, a method based on Dempster-Shafer evidence combination, behavior-knowledge space, and logistic regression. We develop an upper bound on the accuracy that can be obtained by any of the five methods of combination, and show that this estimate can be used to determine whether an ensemble may improve the performance of its members. We then report a series of experiments using standard data sets and learning methods, and compare experimental results to theoretical expectations.
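Of the five methods, behavior-knowledge space (BKS) is perhaps the most direct to sketch: it is a lookup table from the tuple of classifier outputs to the most frequent true class observed with that tuple (a minimal illustration; unseen tuples need a fallback rule):

```python
from collections import Counter, defaultdict

def train_bks(votes, labels):
    """Behavior-Knowledge Space: map each tuple of classifier outputs to the
    most frequent true class seen with that tuple on a validation set."""
    table = defaultdict(Counter)
    for row, y in zip(votes, labels):
        table[tuple(row)][y] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in table.items()}

def bks_predict(vote, table, fallback):
    """Look up the output tuple; fall back (e.g. to plurality vote) if unseen."""
    return table.get(tuple(vote), fallback)
```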
Frontiers in Handwriting …, 2002
In pattern recognition, there is a growing use of multiple classifier combinations with the goal of increasing recognition performance. In many cases, plurality voting is part of the combination process. In this article, we discuss and test several well-known voting methods from politics and economics on classifier combination, in order to see whether an alternative to the simple plurality vote exists. We found that, given a number of prerequisites, better methods are available that are comparatively simple and fast.
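One such alternative from the voting literature is the Borda count, sketched below under the assumption that each classifier outputs a full ranking of the classes:

```python
from collections import defaultdict

def borda_count(rankings):
    """Combine per-classifier class rankings (best first) by Borda count.

    rankings: list of lists, each a full ranking of the candidate classes.
    """
    scores = defaultdict(int)
    n = len(rankings[0])
    for ranking in rankings:
        for position, cls in enumerate(ranking):
            scores[cls] += n - 1 - position   # top rank earns n-1 points
    return max(scores, key=scores.get)

# e.g. borda_count([['a', 'b', 'c'], ['b', 'a', 'c'], ['a', 'c', 'b']]) -> 'a'
```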
14th International Conference on Image Analysis and Processing (ICIAP 2007), 2007
In the framework of multiple classifier systems, we suggest reformulating the classifier combination problem as a pattern recognition one. Following this approach, each input pattern is associated with a feature vector composed of the outputs of the classifiers to be combined. A Bayesian Network is used to automatically infer the probability distribution for each class and eventually to perform the final classification. We propose to use Bayesian Networks because they not only provide a basis for efficient probabilistic inference, but also a natural and compact way to encode exponentially sized joint probability distributions. Two systems, adopting an ensemble of Back-Propagation neural networks and an ensemble of Learning Vector Quantization neural networks, respectively, have been tested on the Image database from the UCI repository. The performance of the proposed systems has been compared with that of multi-expert systems adopting the same ensembles but using Majority Vote, Weighted Majority Vote and Borda Count to combine them.
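The reformulation itself is easy to sketch: stack the base classifiers' outputs into a meta-level feature vector and fit a probabilistic model on held-out data. Here a categorical naive Bayes model stands in for the paper's Bayesian Network, purely for illustration:

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

def build_meta_features(classifiers, X):
    """Each pattern becomes a feature vector of the base classifiers' labels."""
    return np.column_stack([clf.predict(X) for clf in classifiers])

# Fit the meta-level model on held-out data to avoid optimistic bias:
#   meta = CategoricalNB().fit(build_meta_features(classifiers, X_val), y_val)
#   y_pred = meta.predict(build_meta_features(classifiers, X_test))
```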
Artificial Intelligence in Engineering, 1998
The objective of this paper is to show that the combination of votes from various pattern classifiers is better than a single vote from any individual classifier. A proposed support function is used in the combination of votes. The combination of outputs is motivated by the fact that decisions made by teams are generally better than those made by individuals. The decision-making element at the outputs of the front-end classifiers is called the Combined Classifier (CC). The proof of the theory behind the combining method is obtained using the principle of mathematical induction, and an experimental investigation has been conducted to verify it. The first experiment was conducted using 5000 training digits and the second using 10000 training digits. In the second experiment, CC achieved a recognition accuracy of 86.67%, compared with 70% for the best individual classifier. The results show that the theoretical and experimental values are in good agreement.
International Journal on Document Analysis and Recognition, 2003
This paper presents a framework for the analysis of similarity among abstract-level classifiers and proposes a methodology for the evaluation of combination methods. In this paper, each abstract-level classifier is considered as a random variable, and sets of classifiers with different degrees of similarity are systematically simulated, combined, and studied. It is shown to what extent the performance of each combination method depends on the degree of similarity among classifiers and the conditions under which each combination method outperforms the others. Experimental tests have been carried out on simulated and real data sets. The results confirm the validity of the proposed methodology for the analysis of combination methods and its usefulness for multiclassifier system design.
Lecture Notes in Computer Science, 2011
Many methods have been proposed for combining multiple classifiers in pattern recognition, such as Random Forest, which uses decision trees as its base classifiers. In this paper, we propose a weighted vote-based classifier ensemble method. The proposed method is similar to Random Forest in that it employs many decision trees, and neural networks, as classifiers; both decision tree and neural network classifiers are used to evaluate the proposed weighting method in the experiments. The main presumption of this method is that the reliability of each classifier's predictions differs among classes. The proposed ensemble method is tested on a large Persian handwritten digit data set and shows improvement over its competitors.
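A plausible sketch of class-dependent reliability weighting, assuming the weights are estimated as smoothed per-class precision on a validation set (the paper's exact weighting is not given in the abstract):

```python
import numpy as np

def per_class_weights(votes, labels, n_classes, alpha=1.0):
    """w[j, c] = smoothed precision of classifier j when it predicts class c."""
    n_experts = votes.shape[1]
    w = np.zeros((n_experts, n_classes))
    for j in range(n_experts):
        for c in range(n_classes):
            mask = votes[:, j] == c
            w[j, c] = (np.sum(labels[mask] == c) + alpha) / (mask.sum() + 2 * alpha)
    return w

def classwise_weighted_vote(vote, w):
    """Each classifier's vote for class c counts with its class-specific weight."""
    scores = np.zeros(w.shape[1])
    for j, c in enumerate(vote):
        scores[c] += w[j, c]
    return int(np.argmax(scores))
```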
Knowledge and Information Systems, 2014
We propose a probabilistic framework for classifier combination, which gives rigorous optimality conditions (minimum classification error) for four combination methods: majority vote, weighted majority vote, recall combiner and the naive Bayes combiner. The framework is based on two assumptions: class-conditional independence of the classifier outputs and an assumption about the individual accuracies. The four combiners are derived subsequently from one another, by progressively relaxing and then eliminating the second assumption. In parallel, the number of the trainable parameters increases from one combiner to the next. Simulation studies reveal that if the parameter estimates are accurate and the first assumption is satisfied, the order of preference of the combiners is: naive Bayes, recall, weighted majority and majority. By inducing label noise, we expose a caveat coming from the stability-plasticity dilemma. Experimental results with 73 benchmark data sets reveal that there is no definitive best combiner among the four candidates, giving a slight preference to naive Bayes. This combiner was better for problems with a large number of fairly balanced classes while weighted majority vote was better for problems with a small number of unbalanced classes.
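Under the framework's assumptions, the weighted majority vote weights take the familiar log-odds form w_i = log(p_i / (1 - p_i)), where p_i is classifier i's individual accuracy; a sketch:

```python
import numpy as np

def wmv_weights(accuracies, eps=1e-6):
    """Weighted majority vote weights w_i = log(p_i / (1 - p_i)).

    Under class-conditional independence of the classifier outputs, these
    log-odds weights minimise the error of the weighted vote.
    """
    p = np.clip(np.asarray(accuracies, dtype=float), eps, 1 - eps)
    return np.log(p / (1 - p))

def weighted_vote(vote, weights, n_classes):
    scores = np.zeros(n_classes)
    for w, c in zip(weights, vote):
        scores[c] += w
    return int(np.argmax(scores))
```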
International Journal of Pattern Recognition and Artificial Intelligence, 2003
When several classifiers are brought to contribute to the same recognition task, various decision strategies, involving these classifiers in different ways, are possible. A first strategy consists in deciding using different opinions: this corresponds to the combination of classifiers. A second strategy consists in using one or more opinions to better guide other classifiers in their training stages, and/or to improve the decision-making of other classifiers in the classification stage: this corresponds to the cooperation of classifiers. The third and last strategy consists in giving more importance to one or more classifiers according to various criteria or situations: this corresponds to the selection of classifiers. The temporal aspect of Pattern Recognition (PR), i.e. the possible evolution of the classes to be recognized, can be treated by the selection strategy.
Pattern Analysis & Applications, 2002
The robustness of combining diverse classifiers using majority voting has recently been illustrated in the pattern recognition literature. Furthermore, negatively correlated classifiers have turned out to offer further improvement of majority voting performance, even compared to the idealised model with independent classifiers. However, negatively correlated classifiers represent a very unlikely situation in real-world classification problems, and their benefits usually remain out of reach. Nevertheless, it is theoretically possible to obtain 0% majority voting error using a finite number of classifiers with error rates below 50%. We attempt to show that structuring classifiers into relevant multistage organisations can widen this boundary, as well as the limits of majority voting error, even further. Introducing discrete error distributions for the analysis, we show how majority voting errors and their limits depend on the parameters of a multiple classifier system with hardened binary outputs (correct/incorrect). Moreover, we investigate the sensitivity of boundary distributions of classifier outputs to small discrepancies, modelled by random changes of votes, and propose new, more stable patterns of boundary distributions. Finally, we show how organising classifiers into different structures can be used to widen the limits of majority voting errors and how this phenomenon can be effectively exploited.
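The baseline for such limits is the binomial behaviour of independent majority voting, which is easy to compute (an odd number of classifiers avoids ties):

```python
from math import comb

def majority_vote_accuracy(n, p):
    """P(majority of n independent classifiers is correct), each correct w.p. p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# e.g. 9 independent classifiers, each 60% accurate:
# majority_vote_accuracy(9, 0.6) -> ~0.733
```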
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2009
When a multiple classifier system is employed, one of the most popular methods of accomplishing classifier fusion is simple majority voting. However, when the performance of the ensemble members is not uniform, the effectiveness of this type of voting is generally affected negatively. In the present paper, new functions for dynamic weighting in classifier fusion are introduced. Experimental results with several real-problem data sets from the UCI Machine Learning Database Repository demonstrate the advantages of these novel weighting strategies over the simple voting scheme.
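The paper's weighting functions are not spelled out in the abstract; as one common dynamic-weighting heuristic, the sketch below weights each classifier by its accuracy in the neighbourhood of the test sample (an assumed illustration, not the proposed functions):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dynamic_weights(x, X_val, val_correct, k=10):
    """Weight each classifier by its accuracy in the neighbourhood of x.

    val_correct: (n_val, n_classifiers) boolean matrix, True where the
    classifier labelled the corresponding validation sample correctly.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(X_val)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    return val_correct[idx[0]].mean(axis=0)   # one weight per classifier
```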