1998, Proceedings. Fourteenth International Conference on Pattern Recognition (Cat. No.98EX170)
A classifier based on a mixture model is proposed. The EM algorithm for constructing a mixture density is sensitive to the initial densities, and it is also difficult to determine the optimal number of component densities. In this study, we construct a mixture density on the basis of the hyperrectangles found by the subclass method, whereby the number of components is determined automatically. Experimental results show the effectiveness of this approach.
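The pipeline the abstract describes, fitting one class-conditional mixture per class and classifying by the Bayes rule, can be sketched as follows. This is a minimal illustration on synthetic data; scikit-learn is an assumption, and its default k-means initialization stands in for the paper's hyperrectangle-based initialization from the subclass method, which is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic data (assumed): class 0 is bimodal, class 1 unimodal.
X0 = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
X1 = rng.normal(-4, 1, (200, 2))

# Fit one mixture per class: class-conditional densities p(x | c).
gmm0 = GaussianMixture(n_components=2, random_state=0).fit(X0)
gmm1 = GaussianMixture(n_components=1, random_state=0).fit(X1)

def classify(x, log_prior0=np.log(0.5), log_prior1=np.log(0.5)):
    # Bayes decision rule: argmax over c of log p(x | c) + log P(c).
    s0 = gmm0.score_samples(x) + log_prior0
    s1 = gmm1.score_samples(x) + log_prior1
    return (s1 > s0).astype(int)

print(classify(np.array([[0.0, 0.0], [-4.0, -4.0]])))  # expected: [0 1]
```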
2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), 2008
Density estimation is a fundamental problem in pattern recognition and machine learning. It is particularly important for classification using the Bayes decision rule [1]. Methods for density estimation can be grouped into two categories: parametric and nonparametric. Parametric methods rely on assumed functional forms for the density but are compact in storage and computation. Nonparametric methods, such as Parzen windows and nearest neighbor methods, can adapt to arbitrary distributions, but the model needs to store all the training points. The Gaussian mixture model (GMM) can be viewed as a semi-parametric approach: it is flexible enough to model irregular distributions and has moderate complexity. The Expectation-Maximization (EM) algorithm [2], [3], based on maximum likelihood, is the basic approach for mixture density estimation, and there have been many improved methods along this direction [1].
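The storage trade-off mentioned above can be made concrete with a small sketch: a Parzen-window (kernel) density estimate must retain every training point, while a GMM compresses the same data into a handful of parameters. scikit-learn and the synthetic bimodal sample are assumptions for brevity.

```python
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (300, 1)), rng.normal(2, 1.0, (300, 1))])

# Nonparametric: stores all 600 training points.
parzen = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(X)
# Semi-parametric: two components, roughly eight free parameters.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

grid = np.array([[-2.0], [0.0], [2.0]])
print(parzen.score_samples(grid))  # log-densities from the Parzen window
print(gmm.score_samples(grid))     # log-densities from the mixture
```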
1998
A new method is proposed for selecting the optimal number of components of a mixture model for pattern classification. We approximate a class-conditional density by a mixture of Gaussian components. We estimate the parameters of the mixture components by the EM (Expectation Maximization) algorithm and select the optimal number of components on the basis of the MDL (Minimum Description Length) principle. We evaluate the goodness of an estimated model as a trade-off between the number of misclassified training samples and the complexity of the model.
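A minimal sketch of this kind of model-order selection: fit mixtures of increasing size and keep the one minimizing a description-length-style criterion. BIC, as exposed by scikit-learn, is used here as a common stand-in; the paper's own criterion, which trades misclassified training samples against model complexity, is not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic two-cluster data (assumed), so the criterion should pick K = 2.
X = np.vstack([rng.normal(-3, 1, (150, 2)), rng.normal(3, 1, (150, 2))])

# Fit candidate models K = 1..6 and score each by BIC (lower is better).
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 7)}
best_k = min(bic, key=bic.get)
print(best_k)  # expected: 2
```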
Pattern Recognition, 1991
We develop a method of performing pattern recognition (discrimination and classification) using a recursive technique derived from mixture models, kernel estimation and stochastic approximation.
Pattern Recognition Letters, 2005
Mixture modeling is the problem of identifying and modeling the components in a given set of data. Gaussians are widely used in mixture modeling, while other models, such as Dirichlet distributions, have received far less attention. In this paper, we present an unsupervised algorithm for learning a finite Dirichlet mixture model. The proposed approach estimates the parameters of a Dirichlet mixture by maximum likelihood (ML) expressed in a Riemannian space. Experimental results are presented for two applications: summarization of texture image databases for efficient retrieval, and human skin color modeling with application to skin detection in multimedia databases.
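To make the Dirichlet-mixture idea concrete, here is an EM-style sketch for two components on simplex data. The paper's Riemannian-space ML estimation is not reproduced; a simple weighted method-of-moments update in the M-step is an assumption made to keep the sketch short, as are numpy/scipy and the synthetic data.

```python
import numpy as np
from scipy.special import gammaln

def dir_logpdf(X, alpha):
    # Log Dirichlet density at each row of X (points on the simplex).
    return (gammaln(alpha.sum()) - gammaln(alpha).sum()
            + ((alpha - 1) * np.log(X)).sum(axis=1))

def fit_dirichlet_moments(X, w):
    # Weighted method-of-moments estimate: mean gives alpha / alpha_0,
    # the variance of the first coordinate gives the precision alpha_0.
    m = np.average(X, axis=0, weights=w)
    v = np.average((X[:, 0] - m[0]) ** 2, weights=w)
    alpha0 = m[0] * (1 - m[0]) / v - 1
    return m * alpha0

rng = np.random.default_rng(3)
X = np.vstack([rng.dirichlet([10, 2, 2], 300), rng.dirichlet([2, 2, 10], 300)])
alphas = [np.array([3.0, 1.0, 1.0]), np.array([1.0, 1.0, 3.0])]
pi = np.array([0.5, 0.5])

for _ in range(30):
    # E step: responsibilities of each component for each point.
    logp = np.stack([np.log(pi[k]) + dir_logpdf(X, alphas[k]) for k in (0, 1)])
    r = np.exp(logp - logp.max(axis=0))
    r /= r.sum(axis=0)
    # M step: update mixing weights and per-component Dirichlet parameters.
    pi = r.mean(axis=1)
    alphas = [fit_dirichlet_moments(X, r[k]) for k in (0, 1)]

print(pi, alphas[0].round(1), alphas[1].round(1))  # near [10,2,2] / [2,2,10]
```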
Lecture Notes in Computer Science, 2012
The non-linear decision boundary between object and background classes, caused by large intra-class variations, needs to be modelled by any classifier wishing to achieve good results. While a mixture of linear classifiers is capable of modelling this non-linearity, learning such a mixture from weakly annotated data is non-trivial and is the paper's focus. Our approach is to identify the modes in the distribution of the positive examples by clustering, and to exploit this clustering in a latent SVM formulation to learn the mixture model. The clustering relies on a robust measure of visual similarity which suppresses uninformative clutter by using a novel representation based on the exemplar SVM. This subtle clustering of the data leads to better mixture models, as is demonstrated via extensive evaluations on Pascal VOC 2007. The final classifier, using a HOG representation of the global image patch, achieves performance comparable to the state of the art while being more efficient at detection time.
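The cluster-then-train structure of such a mixture of linear classifiers can be sketched as below. This is a simplified stand-in: plain k-means replaces the paper's exemplar-SVM-based similarity, independently trained linear SVMs replace the latent SVM formulation, and the 2-D synthetic features replace HOG; all of these are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
# Positives with two "modes" of appearance; diffuse background negatives.
pos = np.vstack([rng.normal(-2, 0.5, (80, 2)), rng.normal(2, 0.5, (80, 2))])
neg = rng.normal(0, 3, (200, 2))

# Identify modes among the positives, then train one linear classifier
# per mode against all negatives.
modes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pos)
components = []
for k in range(2):
    Xk = np.vstack([pos[modes == k], neg])
    yk = np.r_[np.ones((modes == k).sum()), np.zeros(len(neg))]
    components.append(LinearSVC(C=1.0).fit(Xk, yk))

def score(x):
    # Detection score: maximum response over the mixture components.
    return np.max([c.decision_function(x) for c in components], axis=0)

# Both mode centers should score high (more object-like).
print(score(np.array([[-2.0, -2.0], [2.0, 2.0]])))
```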
EAS Publications Series, 2016
This chapter is dedicated to model-based supervised and unsupervised classification. Probability distributions are defined over the possible labels as well as over the observations given the labels; the basic tools for this are mixture models. The methodology yields a posterior distribution over the labels given the observations, which makes it possible to quantify the uncertainty of the classification. The role of Gaussian mixture models is emphasized, leading to the Linear Discriminant Analysis and Quadratic Discriminant Analysis methods. Some links with Fisher Discriminant Analysis and logistic regression are also established. The Expectation-Maximization algorithm is introduced and compared to the K-means clustering method. The methods are illustrated on both simulated and real datasets using the R software.
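The two Gaussian-model-based classifiers the chapter derives differ only in their covariance assumption: LDA shares one covariance across classes (linear boundaries), while QDA fits one per class (quadratic boundaries). The chapter illustrates this in R; the sketch below uses Python with scikit-learn and synthetic data, both assumptions made for consistency with the other sketches on this page.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(5)
# Two classes with different covariances, so QDA should fit better.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 2, (100, 2))])
y = np.r_[np.zeros(100), np.ones(100)]

for model in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    print(type(model).__name__, model.fit(X, y).score(X, y))
```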
AIP Conference Proceedings, 2015
We propose a method based on finite mixture models for classifying a set of observations into a number of different categories. To demonstrate the method, we show how the component densities of the mixture model can be derived using the maximum entropy method in conjunction with conservation of Pythagorean means. Several examples of distributions belonging to the Pythagorean family are derived. A discussion on the estimation of the model parameters and the number of categories is also given.
IEEE Transactions on Image Processing, 1997
We introduce the notion of a generalized mixture and propose some methods for estimating it, along with applications to unsupervised statistical image segmentation. A distribution mixture is said to be "generalized" when the exact nature of the components is not known, but each belongs to a finite known set of families of distributions. For instance, we can consider a mixture of three distributions, each being exponential or Gaussian. Estimating such a mixture thus presents a new difficulty: we have to label each of the three components (there are eight possibilities). We show that the classical mixture estimation algorithms, expectation-maximization (EM), stochastic EM (SEM), and iterative conditional estimation (ICE), can be adapted to such situations once we have a method for recognizing each component separately; that is, knowing that a sample comes from one of the families in the set considered, we have a decision rule for which family it belongs to. For the Pearson system, which is a set of eight families, this decision rule is defined using skewness and kurtosis. The resulting algorithms are then applied to the problem of unsupervised Bayesian image segmentation. We propose adaptive versions of SEM, EM, and ICE for "blind", i.e., pixel-by-pixel, segmentation. "Global" segmentation methods require modeling by hidden Markov random fields, and we propose adaptations of two traditional parameter estimation algorithms, Gibbsian EM (GEM) and ICE, allowing the estimation of generalized mixtures corresponding to Pearson's system. The efficiency of the different methods is compared via numerical studies, and the results of unsupervised segmentation of three real radar images by the different methods are presented.
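The per-component "recognition" step can be illustrated for the simplest family set, {Gaussian, exponential}, using sample skewness as the decision statistic (theoretical skewness 0 for a Gaussian, 2 for an exponential). This is a hedged sketch: the midpoint threshold and the two-family set are assumptions, and the paper's full rule over the eight Pearson families, which also uses kurtosis, is not reproduced.

```python
import numpy as np
from scipy.stats import skew

def recognize_family(sample):
    # Threshold halfway between the theoretical skewness values 0 and 2.
    return "exponential" if skew(sample) > 1.0 else "gaussian"

rng = np.random.default_rng(6)
print(recognize_family(rng.exponential(1.0, 5000)))  # expected: exponential
print(recognize_family(rng.normal(0.0, 1.0, 5000)))  # expected: gaussian
```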
2005
Mixture-model-based clustering is widely used in many applications. In real-time applications, data are received sequentially and the classification parameters have to be updated quickly. An on-line clustering algorithm based on mixture models is presented in the context of a real-time flaw-diagnosis application for pressurized containers, where the available data are acoustic emission signals. The proposed algorithm is a stochastic gradient algorithm derived from the Classification EM (CEM) algorithm. It provides a model-based generalization of the well-known on-line k-means algorithm, able to handle non-spherical clusters when specific Gaussian mixture models are used. Using synthetic and real data sets, the proposed algorithm is compared to the batch CEM algorithm and the on-line EM algorithm. The three approaches generate comparable solutions in terms of the resulting partition when the clusters are relatively well separated, but the on-line algorithms become faster when the ...
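The on-line k-means special case that this algorithm generalizes is short enough to sketch: each arriving point is assigned to its nearest center (the classification step), and that center takes a stochastic step toward the point. The Gaussian-mixture-specific updates (covariances, mixing weights) of the paper's on-line CEM are omitted; the step size 1/count, giving a running mean, is an assumption.

```python
import numpy as np

def online_kmeans(stream, centers):
    counts = np.zeros(len(centers))
    for x in stream:
        k = np.argmin(np.linalg.norm(centers - x, axis=1))  # classify x
        counts[k] += 1
        centers[k] += (x - centers[k]) / counts[k]          # running mean
    return centers

rng = np.random.default_rng(7)
stream = np.vstack([rng.normal(-3, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
rng.shuffle(stream)  # simulate an arbitrary arrival order
print(online_kmeans(stream, centers=np.array([[-1.0, 0.0], [1.0, 0.0]])))
```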