This paper presents a method for combining classifiers that uses estimates of each individual classifier's local accuracy in small regions of feature space surrounding an unknown test sample. An empirical evaluation using five real data sets confirms the validity of our approach compared to some other Combination of Multiple Classifiers algorithms. We also suggest a methodology for determining the best mix of individual classifiers.
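As an illustration only — the data and helper names below are invented, and this is a generic reading of the local-accuracy idea rather than the paper's exact algorithm — the selection rule can be sketched as: estimate each classifier's accuracy on the k training samples nearest the test point, then let the locally most accurate classifier decide.

```python
# Sketch of dynamic classifier selection by local accuracy.
# All names and data here are hypothetical.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_by_local_accuracy(x, train_X, train_y, clf_preds, k=3):
    """Pick the classifier most accurate on the k training samples
    nearest to test point x; return its index."""
    # indices of the k nearest training samples
    nearest = sorted(range(len(train_X)),
                     key=lambda i: euclidean(x, train_X[i]))[:k]
    best_clf, best_acc = 0, -1.0
    for c, preds in enumerate(clf_preds):
        acc = sum(preds[i] == train_y[i] for i in nearest) / k
        if acc > best_acc:
            best_clf, best_acc = c, acc
    return best_clf

# toy 1-D data: classifier 0 is right on the left half of feature
# space, classifier 1 on the right half
train_X = [(0.0,), (0.2,), (0.4,), (0.6,), (0.8,), (1.0,)]
train_y = [0, 0, 0, 1, 1, 1]
clf_preds = [
    [0, 0, 0, 0, 0, 0],  # classifier 0: always predicts 0
    [1, 1, 1, 1, 1, 1],  # classifier 1: always predicts 1
]
chosen_left = select_by_local_accuracy((0.1,), train_X, train_y, clf_preds)
chosen_right = select_by_local_accuracy((0.9,), train_X, train_y, clf_preds)
```

Each degenerate classifier is chosen exactly in the region where it happens to be locally accurate, which is the effect the method exploits.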
International Journal on Document Analysis and Recognition, 2003
This paper presents a framework for the analysis of similarity among abstract-level classifiers and proposes a methodology for the evaluation of combination methods. In this paper, each abstract-level classifier is considered as a random variable, and sets of classifiers with different degrees of similarity are systematically simulated, combined, and studied. It is shown to what extent the performance of each combination method depends on the degree of similarity among classifiers and the conditions under which each combination method outperforms the others. Experimental tests have been carried out on simulated and real data sets. The results confirm the validity of the proposed methodology for the analysis of combination methods and its usefulness for multiclassifier system design.
17th National Conference on Artificial Intelligence (…), 2000
In recent years, the combination of classifiers has been proposed as a method to improve the accuracy achieved in isolation by a single classifier. We are interested in ensemble methods that allow the combination of heterogeneous sets of classifiers, which ...
2014
In this work we investigate the predictive accuracy of one of the most popular schemes for combining supervised classification methods: the stacking technique proposed by Wolpert (1992) and consolidated by Ting and Witten (1999) and Seewald (2002). In particular, we take StackingC (Seewald 2002) as the starting point for our analysis, to which some modifications and extensions are made. Since most research on ensembles of classifiers suggests that this scheme can perform comparably to, if not better than, the best of the base classifiers as selected by cross-validation, we were motivated to investigate the performance of stacking empirically. An analysis of the results obtained by applying our stacking scheme, which differs in several characteristic implementation choices from what is proposed in the literature, to the set of datasets generated by means of an experimental design does not lead us to...
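The core data flow of stacking — level-0 predictions, produced out-of-sample, become the features of a level-1 learner — can be sketched with toy learners (the base learner, data, and leave-one-out scheme below are invented for illustration and are not the StackingC variant discussed above):

```python
# Minimal sketch of the stacking data flow with hypothetical learners.

def fit_threshold(xs, ys):
    """Toy level-0 learner: threshold on a single feature chosen to
    maximise training accuracy."""
    best_t, best_acc = xs[0], -1.0
    for t in xs:
        acc = sum((x >= t) == (y == 1) for x, y in zip(xs, ys)) / len(ys)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x: 1 if x >= best_t else 0

# two features per example; each level-0 learner sees one feature
X = [(0.1, 0.2), (0.2, 0.9), (0.3, 0.6), (0.8, 0.7), (0.9, 0.8), (0.7, 0.1)]
y = [0, 0, 1, 1, 1, 1]

# level-0 predictions are produced out-of-sample (leave-one-out here,
# k-fold cross-validation in practice) and become level-1 features
meta_X = []
for i in range(len(X)):
    tr = [j for j in range(len(X)) if j != i]
    row = []
    for f in range(2):
        clf = fit_threshold([X[j][f] for j in tr], [y[j] for j in tr])
        row.append(clf(X[i][f]))
    meta_X.append(tuple(row))

# a level-1 (meta) learner would now be trained on (meta_X, y)
```

The out-of-sample step is the essential point: training the meta-learner on in-sample level-0 predictions would let it learn the base models' overfitting rather than their genuine behaviour.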
Applied Soft Computing, 2013
2001
Recently, multiple classifier systems, which base their decision on the outputs of more than one classifier, have become popular. In this paper, the use of multiple classifiers in data fusion of multisource remote sensing and geographic data is studied. In particular, the paper focuses on the recently proposed methodologies of bagging and boosting. Bagging, boosting, and several versions of optimized statistical consensus theory are compared in classification of a multisource remote sensing and geographic data set. The results show boosting to outperform all the other methods in terms of test accuracies.
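Of the two methodologies the abstract compares, bagging is the simpler; a minimal sketch follows, with an invented one-dimensional dataset and a hypothetical stump learner (boosting, which reweights training samples rather than resampling them uniformly, is omitted):

```python
import random

# Sketch of bagging (bootstrap aggregating); the remote-sensing
# experiment above is not reproduced -- data and learner are invented.

def fit_stump(data):
    """Weak learner: threshold midway between the two class means on a
    single feature; constant if a bootstrap sample misses a class."""
    m0 = [x for x, y in data if y == 0]
    m1 = [x for x, y in data if y == 1]
    if not m0:
        return lambda x: 1
    if not m1:
        return lambda x: 0
    t = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
    return lambda x: 1 if x >= t else 0

random.seed(0)
data = [(x / 10, 0) for x in range(5)] + [(x / 10, 1) for x in range(5, 10)]

# bagging: each weak learner is fit on a bootstrap resample of the
# training set, and the ensemble predicts by unweighted majority vote
ensemble = [fit_stump([random.choice(data) for _ in data]) for _ in range(11)]

def bagged_predict(x):
    votes = sum(clf(x) for clf in ensemble)
    return 1 if votes > len(ensemble) / 2 else 0
```

Resampling perturbs each stump's threshold slightly; averaging the perturbed stumps reduces the variance of the combined decision, which is the mechanism bagging relies on.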
IEEE Access
In critical applications, such as medical diagnosis and security-related systems, the cost or risk of taking action based on an incorrect classification can be very high. Hence, combining expert opinions before making a decision can substantially increase the reliability of such systems. Such pattern recognition systems base their final decision on evidence collected from different classifiers; the evidence can be of data type, feature type, or classifier type. Common problems in pattern recognition, such as the curse of dimensionality and small sample size, have also prompted researchers to seek new approaches for combining evidence. This paper presents a criteria-based framework for multiclassifier combination techniques and their areas of application. The criteria discussed here include levels of combination, types of thresholding, adaptiveness of the combination, and ensemble-based approaches. The strengths and weaknesses of each of these categories are discussed in detail. Following this analysis, we provide our perspective on the outlook of this area of research and its open problems. The lack of a well-formulated theoretical framework for analyzing the performance of combination techniques is shown to provide fertile ground for further research. In addition to summarizing existing work, this paper also updates and complements the latest developments in this area of research.
2003
It has been accepted that multiple classifier systems provide a platform for not only performance improvement but also more efficient and robust pattern classification systems. A variety of combining methods have been proposed in the literature, and some work has focused on comparing and categorizing these approaches. In this paper we present a new categorization of these combining schemes based on their dependence on the data patterns being classified. Combining methods can be totally independent from the data, or they can be implicitly or explicitly dependent on the data. It is argued that data-dependent, and especially explicitly data-dependent, approaches represent the highest potential for improved performance. On the basis of this categorization, an architecture for explicitly data-dependent combining methods is discussed. Experimental results illustrating the comparative performance of some combining methods according to this categorization are included.
Object recognition supported by user interaction for service robots, 2002
In this paper we introduce a novel multiple classifier system that incorporates a global optimization technique based on a genetic algorithm for configuring the system. The system adopts the weighted majority vote approach to combine the decisions of the experts, and obtains the weights by maximizing the performance of the whole set of experts rather than that of each of them separately. The system has been tested on a handwritten digit recognition problem, and its performance compared with that of a system using the weights obtained during the training of each expert separately. The results of a set of experiments conducted on 30,000 digits extracted from the NIST database show that the proposed system exhibits better performance than the alternative one, and that this improvement is due to a better estimate of the reliability of the participating classifiers.
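The combiner being optimized here is a weighted majority vote; the genetic search over the weight vector is not reproduced below, so this sketch only shows how a candidate weight vector would be scored as a fitness value (experts, labels, and weights are invented):

```python
# Weighted majority vote and the ensemble-level fitness a global
# optimizer (e.g. a genetic algorithm) would maximize.  Data are toy.

def weighted_vote(preds, weights):
    """preds: one label per expert; weights: one weight per expert.
    Returns the label with the largest total weight."""
    tally = {}
    for p, w in zip(preds, weights):
        tally[p] = tally.get(p, 0.0) + w
    return max(tally, key=tally.get)

def ensemble_accuracy(all_preds, labels, weights):
    """Fitness of a candidate weight vector: accuracy of the whole set
    of experts combined under those weights."""
    hits = sum(weighted_vote(row, weights) == y
               for row, y in zip(all_preds, labels))
    return hits / len(labels)

# three experts, four samples (hypothetical digit labels)
all_preds = [(3, 3, 8), (5, 6, 5), (1, 7, 7), (2, 2, 2)]
labels = [3, 5, 7, 2]
acc = ensemble_accuracy(all_preds, labels, (0.5, 0.3, 0.4))
```

Optimizing `ensemble_accuracy` directly, rather than each expert's individual accuracy, is exactly the distinction the abstract draws between the proposed system and the per-expert alternative.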
Several effective methods for improving the performance of a single learning algorithm have been developed recently. The general approach is to create a set of learned models by repeatedly applying the algorithm to different versions of the training data, and then combine the learned models' predictions according to a prescribed voting scheme. Little work has been done in combining the predictions of a collection of models generated by many learning algorithms having different representation and/or search strategies. This paper describes a method which uses the strategies of stacking and correspondence analysis to model the relationship between the learning examples and the way in which they are classified by a collection of learned models. A nearest neighbor method is then applied within the resulting representation to classify previously unseen examples. The new algorithm consistently performs as well or better than other combining techniques on a suite of data sets.
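A stripped-down reading of this idea: represent each training example by the vector of labels the learned models assign to it, then classify a new example by nearest neighbour in that prediction space. The correspondence-analysis step of the paper is omitted in this sketch, and the models and labels are invented:

```python
# Nearest-neighbour classification in the space of model predictions.
# The correspondence-analysis transform is deliberately skipped.

def hamming(a, b):
    """Disagreement count between two prediction vectors."""
    return sum(x != y for x, y in zip(a, b))

def predict_1nn(pred_vec, train_pred_vecs, train_labels):
    """Label of the training example whose prediction vector is
    closest to pred_vec."""
    i = min(range(len(train_pred_vecs)),
            key=lambda j: hamming(pred_vec, train_pred_vecs[j]))
    return train_labels[i]

# rows: how three hypothetical models label each training example
train_pred_vecs = [("a", "a", "b"), ("b", "b", "b"), ("a", "b", "a")]
train_labels = ["a", "b", "a"]
new_vec = ("a", "a", "a")   # the models' outputs for an unseen example
label = predict_1nn(new_vec, train_pred_vecs, train_labels)
```

The interesting property, which the paper develops much further, is that examples end up near each other not because their features are similar but because the collection of models treats them similarly.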
2011
Classifier combination methods need to make best use of the outputs of multiple, imperfect classifiers to enable higher accuracy classifications. In many situations, such as when human decisions need to be combined, the base decisions can vary enormously in reliability. A Bayesian approach to such uncertain combination allows us to infer the differences in performance between individuals and to incorporate any available prior knowledge about their abilities when training data is sparse.
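One simple way to realise the two properties the abstract names — inferring per-individual reliability and incorporating prior knowledge when data are sparse — is a Beta posterior over each voter's accuracy. The sketch below is a deliberately crude stand-in for the full Bayesian model, with invented track records:

```python
# Reliability-weighted voting with Beta-posterior accuracy estimates.
# A toy illustration, not the model from the abstract.

def posterior_mean_accuracy(correct, total, a=1, b=1):
    """Posterior mean of a Beta(a, b) prior updated with a voter's
    track record; the prior dominates when data are sparse."""
    return (a + correct) / (a + b + total)

def combine(votes, records):
    """Binary decision: each vote for class 1 adds its voter's
    estimated accuracy to the score, each vote for class 0 subtracts
    it; the sign of the score decides."""
    score = 0.0
    for v, (correct, total) in zip(votes, records):
        w = posterior_mean_accuracy(correct, total)
        score += w if v == 1 else -w
    return 1 if score > 0 else 0

# three voters: one reliable, one usually wrong, one at chance level
records = [(9, 10), (1, 10), (5, 10)]
decision = combine((1, 0, 0), records)
```

Here the single reliable voter outweighs two unreliable ones, and a voter with no track record would default to the prior mean of 0.5; prior knowledge about an individual's ability could be encoded by choosing `a` and `b` per voter.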