2004
ABSTRACT One of the many successful applications of Gaussian Mixture Models (GMMs) is in image segmentation, where spatially constrained mixture models have been used in conjunction with the Expectation-Maximization (EM) framework. In this paper, we propose a new methodology for the M-step of the EM algorithm that is based on a novel constrained optimization formulation.
IEEE transactions on neural networks / a publication of the IEEE Neural Networks Council, 2005
Gaussian mixture models (GMMs) constitute a well-known type of probabilistic neural networks. One of their many successful applications is in image segmentation, where spatially constrained mixture models have been trained using the expectation-maximization (EM) framework. In this letter, we elaborate on this method and propose a new methodology for the M-step of the EM algorithm that is based on a novel constrained optimization formulation. Numerical experiments using simulated images illustrate the superior performance of our method in terms of the attained maximum value of the objective function and segmentation accuracy compared to previous implementations of this approach.
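The EM training that these abstracts build on can be sketched in a few lines. The following is a minimal, generic EM fit of a 1-D Gaussian mixture to pixel intensities with hard labeling at the end; it implements plain EM only, not the spatially constrained M-step the letter proposes, and the quantile-based initialization is an assumption for illustration.

```python
import numpy as np

def em_gmm_segment(pixels, k, n_iter=50):
    """Fit a 1-D Gaussian mixture to pixel intensities with plain EM
    and return a hard label per pixel. Generic GMM sketch only; it
    omits the spatial constraints discussed in the abstracts above."""
    x = np.asarray(pixels, dtype=float).ravel()
    # Spread initial means over the intensity range (illustrative choice).
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, j] proportional to pi_j * N(x_n | mu_j, var_j),
        # computed in log space for numerical stability.
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(pi)
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means, and variances.
        nk = r.sum(axis=0)
        pi = nk / nk.sum()
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return np.argmax(r, axis=1).reshape(np.shape(pixels))
```

The spatially constrained variants replace the simple weight update `pi = nk / nk.sum()` with per-pixel mixing proportions subject to a smoothness penalty, which is where the constrained M-step formulations differ.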
2012
The Expectation-Maximization algorithm has classically been used to find maximum likelihood estimates of parameters in probabilistic models with unobserved data, for instance mixture models. A key issue in such problems is the choice of model complexity. The higher the number of components in the mixture, the higher the data likelihood, but also the higher the computational burden and the risk of overfitting. In this work we propose a clustering method based on the expectation-maximization algorithm that adapts the number of components of a finite Gaussian mixture model online from multivariate data. Our method estimates the number of components and their means and covariances sequentially, without requiring any careful initialization. Our methodology starts from a single mixture component covering the whole data set and splits it incrementally during expectation-maximization steps. The coarse-to-fine nature of the algorithm reduces the overall number of computations needed to reach a solution, which makes the method particularly suited to image segmentation applications whenever computational time is an issue. We show the effectiveness of the method in a series of experiments and compare it with a state-of-the-art alternative technique on both synthetic data and real images, including experiments with images acquired from the iCub humanoid robot.
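The incremental growth described above hinges on a split operation that turns one mixture component into two. The sketch below shows one plausible 1-D split rule (children placed one standard deviation either side of the parent mean, weight halved); the paper's actual criterion for deciding which component to split is not reproduced here, so this is an illustrative assumption.

```python
import numpy as np

def split_component(pi, mu, var, j):
    """Split 1-D mixture component j into two children placed one
    standard deviation either side of the parent mean, halving its
    weight. Hypothetical split rule for illustration only."""
    s = np.sqrt(var[j])
    pi_new = np.concatenate([np.delete(pi, j), [pi[j] / 2, pi[j] / 2]])
    mu_new = np.concatenate([np.delete(mu, j), [mu[j] - s, mu[j] + s]])
    var_new = np.concatenate([np.delete(var, j), [var[j], var[j]]])
    return pi_new, mu_new, var_new
```

After a split, EM iterations continue with the enlarged parameter set, so the children drift apart only if the data actually support two modes.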
… Conference on Pattern …, 2010
In this paper, a parametric and unsupervised histogram-based image segmentation method is presented. The histogram is assumed to be a mixture of asymmetric generalized Gaussian distributions. The mixture parameters are estimated by using the Expectation Maximization algorithm. Histogram fitting and region uniformity measures on synthetic and real images reveal the effectiveness of the proposed model compared to the generalized Gaussian mixture model.
IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, 2010
A new Bayesian model is proposed for image segmentation based upon Gaussian mixture models (GMM) with spatial smoothness constraints. This model exploits the Dirichlet compound multinomial (DCM) probability density to model the mixing proportions (i.e., the probabilities of class labels) and a Gauss-Markov random field (MRF) on the Dirichlet parameters to impose smoothness. This model has two main advantages. First, it explicitly models the mixing proportions as probability vectors and simultaneously imposes spatial smoothness. Second, it results in closed-form parameter updates using a maximum a posteriori (MAP) expectation-maximization (EM) algorithm. Previous efforts on this problem used models that either did not model the mixing proportions explicitly as probability vectors or could not be solved exactly, requiring time-consuming Markov chain Monte Carlo (MCMC) or inexact variational approximation methods. Numerical experiments are presented that demonstrate the superiority of the proposed model for image segmentation compared to other GMM-based approaches. The model is also successfully compared to state-of-the-art image segmentation methods in clustering both natural images and images degraded by noise.
Neurocomputing, 2018
In this paper, a novel Bayesian statistical approach is proposed to tackle the problem of natural image segmentation. The proposed approach is based on finite Dirichlet mixture models in which contextual proportions (i.e., the probabilities of class labels) are modeled with spatial smoothness constraints. The major merits of our approach are summarized as follows. Firstly, it exploits the Dirichlet mixture model, which can obtain better statistical performance than commonly used mixture models (such as the Gaussian mixture model), especially for proportional data (i.e., normalized histograms). Secondly, it explicitly models the mixing contextual proportions as probability vectors and simultaneously integrates the spatial relationship between pixels into the Dirichlet mixture model, which results in a more robust framework for image segmentation. Finally, we develop a variational Bayes learning method to update the parameters in closed form. The effectiveness of the proposed approach is compared with other mixture-modeling-based image segmentation approaches through extensive experiments that involve both simulated and natural color images.
2008
A new hierarchical Bayesian model is proposed for image segmentation based on Gaussian mixture models (GMM) with a prior enforcing spatial smoothness. According to this prior, the local differences of the contextual mixing proportions (i.e. the probabilities of class labels) are Student's t-distributed. The generative properties of the Student's t-pdf allow this prior to impose smoothness and simultaneously model the edges between the segments of the image. A maximum a posteriori (MAP) expectation-maximization (EM) based algorithm is used for Bayesian inference. An important feature of this algorithm is that all the parameters are automatically estimated from the data in closed form. Numerical experiments are presented that demonstrate the superiority of the proposed model for image segmentation as compared to standard GMM-based approaches and to GMM segmentation techniques with "standard" spatial smoothness constraints.
INTERNATIONAL JOURNAL OF ENGINEERING …, 2008
Recently, stochastic models such as mixture models, graphical models, Markov random fields, and hidden Markov models have played a key role in probabilistic data analysis. Image segmentation means dividing a picture into different classes or regions; for example, a picture of geometric shapes has classes with different colors, such as 'circle', 'rectangle', 'triangle', and so on. We can therefore suppose that each class follows a normal distribution with a specific mean and variance, so that, in general, a picture can be modeled as a Gaussian mixture. In this paper, we fit a Gaussian mixture model to the pixels of an image as training data, with the model parameters learned by the EM algorithm. Meanwhile, the label corresponding to each pixel of the true image is assigned by Bayes rule, and this hidden, or labeled, image is constructed while the EM algorithm runs. In effect, we introduce a new numerical method for finding the maximum a posteriori estimate using the EM algorithm and a Gaussian mixture model, which we call the EM-MAP algorithm. In this algorithm, we construct a sequence of priors and posteriors that converges to a posterior probability called the reference posterior probability. The maximum a posteriori estimate can then be determined from this reference posterior probability, which yields the labeled image. This labeled image is the segmented image with reduced noise. The method is demonstrated in several experiments.
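The Bayes-rule labeling step that this abstract describes is straightforward once a mixture has been fitted: each pixel is assigned the class with the largest posterior probability. A minimal 1-D sketch, assuming a fitted Gaussian mixture with weights `pi`, means `mu`, and variances `var` (this is the generic MAP-labeling rule, not the paper's full EM-MAP iteration):

```python
import numpy as np

def bayes_label(pixels, pi, mu, var):
    """Assign each pixel the class j maximizing the posterior
    p(j | x) proportional to pi_j * N(x | mu_j, var_j).
    Generic Bayes-rule labeling given a fitted 1-D Gaussian mixture."""
    pi = np.asarray(pi, dtype=float)
    mu = np.asarray(mu, dtype=float)
    var = np.asarray(var, dtype=float)
    x = np.asarray(pixels, dtype=float).ravel()
    d = x[:, None] - mu[None, :]
    # Unnormalized log-posterior per class; the shared normalizer
    # over classes does not affect the argmax.
    log_post = -0.5 * (d**2 / var + np.log(2 * np.pi * var)) + np.log(pi)
    return np.argmax(log_post, axis=1).reshape(np.shape(pixels))
```

In the EM-MAP scheme this labeling is recomputed as the posteriors are refined, so the labeled image sharpens over the iterations.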
Computers, Materials & Continua
The Spatially Constrained Mixture Model (SCMM) is an image segmentation model that works within the maximum a posteriori and Markov random field (MAP-MRF) framework, using its own maximization step. This research proposes an improvement to the SCMM's maximization step for segmenting simulated brain Magnetic Resonance Images (MRIs). The improved model is named the Weighted Spatially Constrained Finite Mixture Model (WSCFMM). To compare the performance of SCMM and WSCFMM, simulated T1-weighted normal MRIs were segmented, a region of interest (ROI) was extracted from the segmented images, and the similarity between the extracted ROI and the ground truth (GT) was measured using the Jaccard and Dice similarity coefficients. By the Jaccard measure, WSCFMM showed an overall improvement of 4.72%, while the Dice measure showed an overall improvement of 2.65% over the SCMM. In addition, WSCFMM significantly stabilized and reduced the execution time, showing an improvement of 83.71%. The study concludes that WSCFMM is a stable model and performs better than the SCMM in both noisy and noise-free environments.
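The Jaccard and Dice coefficients used for the ROI comparison above have standard definitions on binary masks. A short sketch of those generic definitions (not the paper's exact evaluation pipeline):

```python
import numpy as np

def jaccard_dice(seg, gt):
    """Jaccard and Dice overlap between a binary segmentation mask
    and a ground-truth mask. Jaccard = |A ∩ B| / |A ∪ B|,
    Dice = 2|A ∩ B| / (|A| + |B|); both equal 1 for identical masks."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    jaccard = inter / union if union else 1.0
    total = seg.sum() + gt.sum()
    dice = 2 * inter / total if total else 1.0
    return jaccard, dice
```

The two measures are monotonically related (Dice = 2J / (1 + J)), which is why papers often report both for the same segmentations.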
ABSTRACT We propose a hierarchical and spatially variant mixture model for image segmentation where the pixel labels are random variables. Distinct smoothness priors are imposed on the label probabilities and the model parameters are computed in closed form through maximum a posteriori (MAP) estimation. More specifically, we propose a new prior for the label probabilities that enforces spatial smoothness of different degree for each cluster.
IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, 2007
2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007
IEEE Transactions on Medical Imaging, 2000
IEEE Transactions on Image Processing, 1997
Statistics and Computing, 2008
Expert Systems with Applications, 2012
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010