1999, Neural, Parallel and Scientific Computations
A general model for estimating the pdf of a gray-level image histogram is reported. The histogram's pdf is approximated by a mixture of Gaussian distributions. The originality of this work lies in the determination of the number of components in the mixture, which is considered as a parameter of the model and is determined using a novel algorithm. For this purpose, the model is divided into three parts. First, we use the k-means algorithm to set the initial values for the parameters of each component in the mixture. Our contributions are the determination of an appropriate number of clusters in the k-means algorithm and a novel algorithm for eliminating false clusters. Finally, the values of the parameters are refined using the EM algorithm. The model has been validated on both artificial and real image histograms. Neural, Parallel and Scientific Computations, no. 7, p. 103-118, July 1999
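The first and last stages of the pipeline this abstract describes (k-means initialization of the component parameters, then EM refinement) can be sketched as below. The function names, the 1-D setting, and the quantile-based seeding are illustrative assumptions, not the authors' code; the false-cluster elimination stage is their novel contribution and is not reproduced here.

```python
import numpy as np

def kmeans_1d(x, k, iters=20):
    # Simple 1-D k-means; centers seeded from data quantiles (an assumption).
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

def em_gmm_1d(x, k, iters=50):
    # Stage 1: k-means supplies initial means; weights and variances
    # come from the resulting clusters.
    centers, labels = kmeans_1d(x, k)
    mu = centers.copy()
    var = np.array([x[labels == j].var() if np.any(labels == j) else x.var()
                    for j in range(k)]) + 1e-6
    w = np.array([(labels == j).mean() for j in range(k)])
    for _ in range(iters):
        # E-step: responsibility of each Gaussian component for each sample.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```

On a clearly bimodal histogram the recovered means land near the two modes, which is what makes the k-means seeding a useful starting point for EM.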
1999
An algorithm for determining the number of modes in a gray-level image histogram is presented in this paper. The hypothesis is that the image histogram's pdf is approximated by a mixture of Gaussians. The algorithm then tries to estimate the number of components in the mixture, which is an important parameter when using the maximum likelihood technique to estimate the remaining parameters of the mixture. The algorithm is divided into two parts. First, initial clustering using the k-means algorithm is performed, which allows the center of each cluster to be estimated. Second, a novel algorithm, denoted "Elimination of False Clusters" (EFC) and based on Gaussian characteristics, tries to suppress clusters that have no corresponding modes in the histogram. The algorithm has been validated on both artificial and real histograms.
2000
This paper discusses the problem of finding the number of component clusters in gray-level image histograms. These histograms are often modeled using a standard mixture of univariate normal densities. The problem, however, is that the number of components in the mixture is an unknown variable that must be estimated, together with the means and the variances. Computing the number of components in a mixture usually requires "unsupervised learning". This problem is denoted "cluster validation" in the cluster analysis literature. The aim is to identify sub-populations believed to be present in a population. A wide variety of methods have been proposed for this purpose. In this paper, we intend to compare two methods, each belonging to a typical approach. The first, somewhat classical, method is based on criterion optimization; we are particularly interested in the Akaike information criterion. The second method is based on a direct approach that makes use of a cluster's geometric properties. In this paper, we develop an algorithm to generate non-overlapping test vectors, allowing the generation of a large set of verified vectors that can be used to perform objective evaluation and comparison.
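The criterion-optimization approach this abstract mentions can be made concrete: fit mixtures of increasing order and keep the one minimizing the Akaike information criterion, AIC = 2p - 2 log L. The minimal 1-D EM routine below and its deterministic quantile-based initialization are simplifying assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def gmm_loglik(x, k, iters=80):
    # Minimal 1-D EM fit returning the maximized log-likelihood.
    # Quantile-based initialization keeps the sketch deterministic.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    tot = np.ones(len(x))
    for _ in range(iters):
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        tot = dens.sum(axis=1)
        resp = dens / tot[:, None]
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        # Variance floor guards against components collapsing onto single points.
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 1e-3)
    return np.log(tot).sum()

def aic_select(x, kmax=4):
    # A 1-D k-component mixture has p = 3k - 1 free parameters
    # (k means, k variances, k - 1 independent weights); lowest AIC wins.
    aic = {k: 2 * (3 * k - 1) - 2 * gmm_loglik(x, k) for k in range(1, kmax + 1)}
    return min(aic, key=aic.get)
```

The penalty term is what stops the likelihood, which always improves with more components, from inflating the model order indefinitely.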
Image and Vision Computing, 2002
In this paper, we present a novel multi-modal histogram thresholding method in which no a priori knowledge about the number of clusters to be extracted is needed. The proposed method combines regularization and statistical approaches. By converting the histogram thresholding problem to the mixture Gaussian density modeling problem, threshold values can be estimated precisely according to the parameters belonging to each contiguous cluster. Computational complexity has been greatly reduced since our method does not employ conventional iterative parameter refinement. Instead, an optimal parameter estimation interval was defined before the estimation procedure. This predefined optimal estimation interval reduces time consumption, while other histogram decomposition based methods search all feature space to locate an estimation interval for each candidate cluster. Experimental results with both simulated data and real images demonstrate the robustness of our method.
Pattern Recognition, 1993
Abstract--A recursive, nonparametric method is developed for performing density estimation derived from mixture models, kernel estimation and stochastic approximation. The asymptotic performance of the method, dubbed "adaptive mixtures" (Priebe and Marchette, Pattern Recognition 24, 1197-1209 (1991)) for its data-driven development of a mixture model approximation to the true density, is investigated using the method of sieves. Simulations are included indicating convergence properties for some simple examples.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2002
In this paper we develop a dynamic continuous solution to the clustering problem of data characterized by a mixture of K distributions, where K is given a priori. The proposed solution resorts to game-theoretic tools, in particular mean field games, and can be interpreted as the continuous version of a generalized Expectation-Maximization (GEM) algorithm. The main contributions of this paper are twofold: first, we prove that the proposed solution is a GEM algorithm; second, we derive a closed-form solution for a Gaussian mixture model and show that the proposed algorithm converges exponentially fast to a maximum of the log-likelihood function, improving significantly over the state of the art. We conclude the paper by presenting simulation results for the Gaussian case that indicate better performance of the proposed algorithm in terms of speed of convergence and with respect to the overlap problem.
IEEE Transactions on Image Processing, 2004
This paper presents an unsupervised algorithm for learning a finite mixture model from multivariate data. This mixture model is based on the Dirichlet distribution, which offers high flexibility for modeling data. The proposed approach for estimating the parameters of a Dirichlet mixture is based on the maximum likelihood (ML) and Fisher scoring methods. Experimental results are presented for the following applications: estimation of artificial histograms, summarization of image databases for efficient retrieval, and human skin color modeling and its application to skin detection in multimedia databases.
1997
We consider a model-based approach to clustering, whereby each observation is assumed to have arisen from an underlying mixture of a finite number of distributions. The number of components in this mixture model corresponds to the number of clusters to be imposed on the data. A common assumption is to take the component distributions to be multivariate normal, with perhaps some restrictions on the component covariance matrices. The model can be fitted to the data using maximum likelihood implemented via the EM algorithm. There are a number of computational issues associated with the fitting, including the specification of initial starting points for the EM algorithm and the carrying out of tests for the number of components in the final version of the model. We shall discuss some of these problems and describe an algorithm that attempts to handle them automatically.
… Symposium on Advanced …, 2007
In this paper, we propose a method for image clustering using multinomial mixture models. The mixture of multinomial distributions, often called multinomial mixture, is a probabilistic model mainly used for text mining. The effectiveness of multinomial distribution for text mining originates from the fact that words can be regarded as independently generated in the first approximation. In this paper, we apply multinomial distribution to image clustering. We regard each color as a "word" and color histograms as "term frequency" distributions.
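The "color = word, histogram = term frequency" analogy above can be sketched directly: each image becomes a count histogram over a color palette, scored under each cluster's multinomial color distribution. The helper names below are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def multinomial_loglik(hist, p):
    # Log-likelihood of a color-count histogram under multinomial parameters p;
    # the constant multinomial coefficient is omitted since it cancels in comparisons.
    return float((hist * np.log(p)).sum())

def assign_to_clusters(hists, cluster_probs):
    # Score every image histogram against every cluster's color distribution
    # and pick the best, treating colors exactly like words in text clustering.
    scores = np.array([[multinomial_loglik(h, p) for p in cluster_probs]
                       for h in hists])
    return scores.argmax(axis=1)
```

In a full multinomial-mixture EM, this assignment step would be soft (responsibilities rather than argmax) and the cluster color distributions would be re-estimated from the weighted histograms.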
2009
The estimation of mixture distribution parameters is an important task in statistical signal processing; pattern recognition, blind equalization, and other modern statistical tasks often call for mixture estimation. This paper aims to provide a realistic distribution based on the mixture of generalized Gaussian distributions (MGG), which has the advantage of characterizing the variability of the shape parameter in each component of the mixture. We propose a formulation of the Expectation Maximization (EM) algorithm under the generalized Gaussian distribution. For this, two different methods are proposed to include the shape parameter estimation. In the first method, a derivation of the likelihood function is used to update the mixture parameters. In the second approach, we propose an extension of the "classical" EM algorithm and estimate the shape parameter in terms of kurtosis. The Kullback-Leibler divergence (KLD) is used to compare and evaluate these algorithms for MGG parameter estimation. An...
Neural Computing & Applications, 1996
A version of Travén's [1] Gaussian clustering algorithm for normal mixture densities is studied. Unlike in the case of Travén's algorithm, no constraints on the covariance structure of the mixture components are imposed. Simulations suggest that the modified algorithm is a very promising method of estimating arbitrary continuous d-dimensional densities. In particular, the simulations have shown that the algorithm is robust against assuming the initial number of mixture components to be too large.
1996
We consider the estimation of local grey-level image structure in terms of a layered representation. This type of representation has recently been used successfully to segment various objects from clutter using either optical flow or stereo disparity information. We argue that the same type of representation is useful for grey-level data in that it allows for the estimation of properties for each of several different components without prior segmentation. Our emphasis in this paper is on the process used to extract such a layered representation from a given image. In particular, we consider a variant of the EM algorithm for the estimation of the layered model, and consider a novel technique for choosing the number of layers to use. We briefly consider the use of a simple version of this approach for image segmentation, and suggest two potential applications to the ARK project. Category: Image representation.
… Conference on Pattern …, 2010
In this paper, a parametric and unsupervised histogram-based image segmentation method is presented. The histogram is assumed to be a mixture of asymmetric generalized Gaussian distributions. The mixture parameters are estimated by using the Expectation Maximization algorithm. Histogram fitting and region uniformity measures on synthetic and real images reveal the effectiveness of the proposed model compared to the generalized Gaussian mixture model.
IEEE Transactions on Image Processing, 1996
Gaussian mixture density modeling and decomposition is a classic yet challenging research topic. We present a new approach to the modeling and decomposition of Gaussian mixtures by using robust statistical methods. The mixture distribution is viewed as a (severely) contaminated Gaussian density. Using this model and the model-fitting (MF) estimator, we propose a recursive algorithm called the Gaussian mixture density decomposition (GMDD) algorithm for successively identifying each Gaussian component in the mixture. The proposed decomposition scheme has several distinct advantages that are desirable but lacking in most existing techniques. In the GMDD algorithm the number of components does not need to be specified a priori, the proportion of noisy data in the mixture can be large, the parameter estimation of each component is virtually independent of initialization, and the variability in the shape and size of the component densities in the mixture is taken into account. Gaussian mixture density modeling and decomposition has been widely applied in a variety of disciplines that require signal or waveform characterization for classification and recognition, including remote sensing, target identification, spectroscopy, electrocardiography, speech recognition, or scene segmentation. We apply the proposed GMDD algorithm to the identification and extraction of clusters, and the estimation of unknown probability densities. Probability density estimation by identifying a decomposition using the GMDD algorithm, that is, a superposition of normal distributions, is successfully applied to the difficult biomedical problem of automated cell classification. Computer experiments using both real data and simulated data demonstrate the validity and power of the GMDD algorithm for various models and different noise assumptions.
1999
This paper introduces a novel statistical mixture model for probabilistic grouping of distributional (histogram) data. Adopting the Bayesian framework, we propose to perform annealed maximum a posteriori estimation to compute optimal clustering solutions. In order to accelerate the optimization process, an efficient multiscale formulation is developed. We present a prototypical application of this method for the unsupervised segmentation of textured images based on local distributions of Gabor coefficients. Benchmark results indicate superior performance compared to K-means clustering and proximity-based algorithms.
International Journal of Scientific and Engineering Research
In any man-machine interaction system, fast and accurate skin color classification is a challenging step, and skin color detection is considered a preliminary step in various recent applications such as face detection and gesture recognition. A segmentation-based skin color process should be robust, accurate, and feasible for its specific application field. Color distribution modeling techniques vary in simplicity and complexity according to the particular color space and the modeling technique's parameters. The histogram technique has proven its outstanding capability for skin color classification. In this work we adopt a novel method for hand gesture classification based on a mixture of histograms with a look-up table (LUT), in which several three-channel color models are combined into a single model by assigning a weight to each histogram-based color model separately. The suggested system is inspired by Hasan and Mishra [25], who utilized a Gaussian mixture model rather than the histogram technique. The performance of the proposed system is evaluated using two measures, classification rate (CR) and accuracy, in order to examine system efficiency against other skin color classification systems, including the system of [25]. The results reveal that our system outperforms the other systems in terms of accuracy and classification rate, achieving 99.48% and 98.9349, respectively.
INTERNATIONAL JOURNAL OF ENGINEERING …, 2008
Recently, stochastic models such as mixture models, graphical models, Markov random fields, and hidden Markov models have played a key role in probabilistic data analysis. Image segmentation means dividing a picture into different classes or regions; for example, a picture of geometric shapes has classes with different colors such as 'circle', 'rectangle', 'triangle', and so on. We can therefore suppose that each class has a normal distribution with a specific mean and variance, so that in general a picture can be modeled as a Gaussian mixture. In this paper, we fit a Gaussian mixture model to the pixels of an image as training data, and the parameters of the model are learned by the EM algorithm. Meanwhile, the pixel labeling corresponding to each pixel of the true image is done by Bayes' rule. This hidden, or labeled, image is constructed during the running of the EM algorithm. In fact, we introduce a new numerical method of finding the maximum a posteriori estimate using the EM algorithm and the Gaussian mixture model, which we call the EM-MAP algorithm. In this algorithm, we construct a sequence of priors and posteriors that converge to a posterior probability called the reference posterior probability. The maximum a posteriori estimate can then be determined from this reference posterior probability, which yields the labeled image. This labeled image is the segmented image with reduced noise. The method is demonstrated in several experiments.
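The Bayes-rule labeling step described above can be sketched as follows, assuming the mixture weights, means, and variances have already been learned by EM. The function name and the grayscale 1-D intensity model are illustrative assumptions, not the paper's code.

```python
import numpy as np

def label_pixels(image, w, mu, var):
    # Posterior probability of each class for each pixel via Bayes' rule:
    # p(class j | x) is proportional to w_j * N(x; mu_j, var_j); the label is the argmax.
    x = image.reshape(-1, 1).astype(float)
    dens = w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    post = dens / dens.sum(axis=1, keepdims=True)
    return post.argmax(axis=1).reshape(image.shape)
```

Because the argmax is taken over posteriors rather than raw densities, classes with larger mixture weights claim ambiguous intensities, which is the behavior the MAP formulation calls for.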
IEEE Transactions on Neural Networks, 2001
A self-organizing mixture network (SOMN) is derived for learning arbitrary density functions. The network minimizes the Kullback-Leibler information metric by means of stochastic approximation methods. The density functions are modeled as mixtures of parametric distributions. A mixture need not be homogeneous, i.e., it can have different density profiles. The first layer of the network is similar to Kohonen's self-organizing map (SOM), but with the parameters of the component densities as the learning weights. The winning mechanism is based on maximum posterior probability, and updating of the weights is limited to a small neighborhood around the winner. The second layer accumulates the responses of these local nodes, weighted by the learned mixing parameters. The network possesses a simple structure and computational form, yet yields fast and robust convergence. The network has a generalization ability due to the relative entropy criterion used. Applications to density profile estimation and pattern classification are presented. The SOMN can also provide insight into the role of the neighborhood function used in the SOM.