1993, Pattern Recognition
Abstract--A recursive, nonparametric method is developed for performing density estimation, derived from mixture models, kernel estimation and stochastic approximation. The asymptotic performance of the method, dubbed "adaptive mixtures" (Priebe and Marchette, Pattern Recognition 24, 1197-1209 (1991)) for its data-driven development of a mixture model approximation to the true density, is investigated using the method of sieves. Simulations are included indicating convergence properties for some simple examples.
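A minimal sketch of the recursive create-or-update rule in the spirit of adaptive mixtures, for 1-D data. The creation threshold, gain sequence and initial variance are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def adaptive_mixture_1d(stream, create_thresh=3.0, init_var=1.0):
    """Recursive density estimation in the spirit of adaptive mixtures.

    For each incoming sample, either update the existing Gaussian
    components by stochastic approximation, or create a new component
    when the sample is far (in standardized distance) from all of them.
    The threshold and the 1/n gain sequence are illustrative choices.
    """
    w, mu, var = [], [], []          # mixture weights, means, variances
    for n, x in enumerate(stream, start=1):
        gain = 1.0 / n               # decaying stochastic-approximation gain
        if not w:
            w, mu, var = [1.0], [x], [init_var]
            continue
        z = [abs(x - m) / np.sqrt(v) for m, v in zip(mu, var)]
        if min(z) > create_thresh:   # sample unexplained: add a component
            w = [wi * (1 - gain) for wi in w] + [gain]
            mu.append(x)
            var.append(init_var)
        else:                        # update step, responsibility-weighted
            dens = [wi * np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
                    for wi, m, v in zip(w, mu, var)]
            total = sum(dens)
            for i in range(len(w)):
                r = dens[i] / total  # posterior responsibility of component i
                w[i] += gain * (r - w[i])
                mu[i] += gain * r * (x - mu[i])
                # variance update uses the freshly updated mean
                var[i] += gain * r * ((x - mu[i]) ** 2 - var[i])
    return np.array(w), np.array(mu), np.array(var)
```

Both branches preserve the normalization of the weights, so the returned triple always defines a proper mixture density.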
IEEE Transactions on Information Theory, 2013
Recursive algorithms for the estimation of mixtures of densities have attracted a lot of attention in the last ten years. Here an algorithm for recursive estimation is studied. It complements existing approaches in the literature, as it is based on conditions that are usually very weak. For example, the parameter space over which the mixture is taken does not need to be necessarily bounded. The essence of the procedure is to combine density estimation via the empirical characteristic function together with an iterative Hilbert space approximation algorithm. The conditions for consistency of the estimator are verified for three important statistical problems. A simulation study is also included.
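A hedged sketch of the first ingredient: the empirical characteristic function, matched against a fixed dictionary of Gaussian component characteristic functions. The dictionary, the grid of frequencies, and the plain non-negative least-squares solve are illustrative stand-ins for the paper's iterative Hilbert-space approximation.

```python
import numpy as np
from scipy.optimize import nnls

def ecf(t, x):
    """Empirical characteristic function phi_n(t) = mean(exp(i t X))."""
    return np.exp(1j * np.outer(t, x)).mean(axis=1)

def fit_weights_ecf(x, means, sigmas, t=np.linspace(-5, 5, 201)):
    """Fit mixture weights by matching the ECF over a grid of frequencies.

    means/sigmas define a fixed dictionary of Gaussian components; the
    direct NNLS solve replaces the iterative approximation of the paper.
    """
    means = np.asarray(means, float)
    sigmas = np.asarray(sigmas, float)
    phi_n = ecf(t, x)
    # Gaussian CF: exp(i t mu - t^2 sigma^2 / 2), one column per component
    A = np.exp(1j * np.outer(t, means) - 0.5 * np.outer(t ** 2, sigmas ** 2))
    # Stack real and imaginary parts so the solve is over the reals
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([phi_n.real, phi_n.imag])
    w, _ = nnls(A_ri, b_ri)
    return w / w.sum()               # renormalize to a probability vector
```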
Computational Statistics & Data Analysis, 1997
A multivariate extension of the plug-in kernel (and filtered kernel) estimator is proposed which uses asymptotically optimal bandwidth matrix (matrices) for a normal mixture approximation of a density to be estimated (the filtered kernel estimator uses different matrices for different clusters of data). The normal mixture approximation is provided by a recursive version of the EM algorithm whose initial conditions are in turn obtained via an application of the ideas of adaptive mixtures density estimation and AIC-based pruning. Simulations show that the estimator proposed, while in fact a rather complex multistage estimation process, provides a very reliable way of estimating arbitrary and highly structured continuous densities on R^2 and, hopefully, R^3.
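A simplified sketch of the pipeline's shape: a normal-mixture pilot fit supplies the covariance at which a bandwidth matrix is evaluated. Here scikit-learn's batch EM stands in for the recursive EM with adaptive-mixtures/AIC initialization, and the normal-reference rule stands in for the mixture-based asymptotically optimal bandwidth.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def plugin_kde(X, n_components=3):
    """Kernel estimator with a bandwidth matrix from a normal-mixture pilot.

    The pilot fit and the normal-reference bandwidth are illustrative
    simplifications of the multistage plug-in procedure in the abstract.
    """
    n, d = X.shape
    pilot = GaussianMixture(n_components=n_components).fit(X)
    # Overall covariance of the pilot mixture: E[Cov_k] + Cov[mean_k]
    mu_bar = pilot.weights_ @ pilot.means_
    diff = pilot.means_ - mu_bar
    sigma = (np.einsum('k,kij->ij', pilot.weights_, pilot.covariances_)
             + diff.T @ (pilot.weights_[:, None] * diff))
    # Normal-reference bandwidth matrix for a Gaussian kernel
    H = (4.0 / (d + 2)) ** (2.0 / (d + 4)) * n ** (-2.0 / (d + 4)) * sigma

    def density(x):
        # Average of Gaussian kernels centered at the data points
        return np.mean(multivariate_normal.pdf(X, mean=np.asarray(x), cov=H))

    return density
```

The filtered kernel variant described in the abstract would instead assign a different matrix to each cluster of the pilot mixture rather than one global H.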
Neural Processing Letters, 1999
We address the problem of estimating an unknown probability density function from a sequence of input samples. We approximate the input density with a weighted mixture of a finite number of Gaussian kernels whose parameters and weights we estimate iteratively from the input samples using the Maximum Likelihood (ML) procedure. In order to decide on the correct total number of kernels […]
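The abstract is truncated before its model-selection rule, but the iterative ML estimation it builds on is the standard EM step, sketched here for 1-D data; the array shapes and 1-D restriction are assumptions for brevity.

```python
import numpy as np
from scipy.stats import norm

def em_step(x, w, mu, sigma):
    """One EM iteration for a 1-D Gaussian mixture (batch ML update).

    E-step: posterior responsibilities r[t, i]; M-step: closed-form
    weighted updates of the weights, means and standard deviations.
    """
    dens = w * norm.pdf(x[:, None], mu, sigma)    # (n, k) component densities
    r = dens / dens.sum(axis=1, keepdims=True)    # responsibilities
    nk = r.sum(axis=0)                            # effective sample per kernel
    w_new = nk / len(x)
    mu_new = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu_new) ** 2).sum(axis=0) / nk
    return w_new, mu_new, np.sqrt(var)
```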
Proceedings of the Eighth Annual Conference on Computational Learning Theory, 1995
We investigate the problem of estimating the proportion vector which maximizes the likelihood of a given sample for a mixture of given densities. We adapt a framework developed for supervised learning and give simple derivations for many of the standard iterative algorithms like gradient projection and EM. In this framework, the distance between the new and old proportion vectors is used as a penalty term. The squared distance leads to the gradient projection update, and the relative entropy to a new update which we call the exponentiated gradient update (EGη). Curiously, when a second order Taylor expansion of the relative entropy is used, we arrive at an update EMη which, for η = 1, gives the usual EM update. Experimentally, both the EMη-update and the EGη-update for η > 1 outperform the EM algorithm and its variants. We also prove a polynomial bound on the rate of convergence of the EGη algorithm.
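The two updates can be written down concretely. With component densities fixed, the gradient of the average log-likelihood with respect to the proportion w_i is g_i = (1/n) Σ_t p_i(x_t)/p_w(x_t), the EM update is w_i ← w_i g_i, and the EGη update is w_i ← w_i exp(η g_i) renormalized. A small sketch under those standard formulas:

```python
import numpy as np

def mixture_updates(P, w, eta=1.5):
    """One EM and one exponentiated-gradient (EG_eta) update of the
    mixture proportions, for fixed given component densities.

    P is an (n, k) matrix with P[t, i] = p_i(x_t), the i-th density
    evaluated at sample t; w is the current proportion vector.
    """
    grad = (P / (P @ w)[:, None]).mean(axis=0)   # d/dw_i of avg log-likelihood
    w_em = w * grad                               # EM update (the eta = 1 case)
    u = w * np.exp(eta * grad)
    w_eg = u / u.sum()                            # EG_eta update, renormalized
    return w_em, w_eg
```

Since Σ_i w_i g_i = 1 by construction, the EM update needs no renormalization, while EGη does.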
IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 1999
Abstract--We address the problem of probability density function estimation using a Gaussian mixture model updated with the expectation-maximization (EM) algorithm. To deal with the case of an unknown number of mixing kernels, we define a new measure for Gaussian mixtures, called total kurtosis, which is based on the weighted sample kurtoses of the kernels. This measure provides an indication of how well the Gaussian mixture fits the data. Then we propose a new dynamic algorithm for Gaussian mixture density estimation which monitors the total kurtosis at each step of the EM algorithm in order to decide dynamically on the correct number of kernels and possibly escape from local maxima. We show the potential of our technique in approximating unknown densities through a series of examples with several density estimation problems.
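An illustrative version of such a criterion; the paper's exact definition of total kurtosis may differ. This computes a responsibility-weighted sample kurtosis per kernel and averages by the mixing weights, so that deviations from the Gaussian value 3 signal a poor fit:

```python
import numpy as np
from scipy.stats import norm

def total_kurtosis(x, w, mu, sigma):
    """Weight-averaged sample kurtosis of the kernels of a 1-D Gaussian
    mixture -- an illustrative reading of the 'total kurtosis' criterion.

    For a well-fitting Gaussian kernel the weighted kurtosis should be
    near 3; a total far from 3 suggests a wrong number of kernels.
    """
    dens = w * norm.pdf(x[:, None], mu, sigma)
    r = dens / dens.sum(axis=1, keepdims=True)    # responsibilities (n, k)
    nk = r.sum(axis=0)
    m = (r * x[:, None]).sum(axis=0) / nk         # weighted kernel means
    m2 = (r * (x[:, None] - m) ** 2).sum(axis=0) / nk
    m4 = (r * (x[:, None] - m) ** 4).sum(axis=0) / nk
    k = m4 / m2 ** 2                              # weighted kurtosis per kernel
    return (nk / nk.sum()) @ k                    # mixing-weighted total
```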
1996
This paper introduces a new adaptive-mixtures-type estimator, providing details on the new estimator along with examples of its performance on several different types of data sets. In addition, we provide a comparison between the performance of this estimator and the standard recursive adaptive mixtures estimator for one of the data sets.
Pattern Recognition, 1991
We develop a method of performing pattern recognition (discrimination and classification) using a recursive technique derived from mixture models, kernel estimation and stochastic approximation.
Lecture Notes in Computer Science, 1998
We propose a new optimisation method for estimating both the parameters and the structure, i.e. the number of components, of a finite mixture model for density estimation. We employ a hybrid method consisting of an evolutionary algorithm for structure optimisation in conjunction with a gradient-based method for evaluating each candidate model architecture. For structure modification we propose specific, problem dependent evolutionary operators. The introduction of a regularisation term prevents the models from overfitting the data. Experiments show good generalisation abilities of the optimised structures.
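A hedged sketch of the hybrid loop's shape. The mutation operator, a BIC-like penalty standing in for the regularisation term, and scikit-learn's EM standing in for the gradient-based fit are all illustrative assumptions, not the paper's operators.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def evolve_mixture(X, pop_size=6, generations=20,
                   rng=np.random.default_rng(0)):
    """Hybrid structure search: an evolutionary loop over the number of
    components, each candidate fitted by EM and scored with a penalized
    likelihood (BIC) so that larger structures are discouraged.
    """
    pop = list(rng.integers(1, 6, size=pop_size))          # component counts
    best = None
    for _ in range(generations):
        scored = []
        for k in pop:
            gm = GaussianMixture(n_components=int(k)).fit(X)
            scored.append((gm.bic(X), int(k), gm))          # lower BIC is better
        scored.sort(key=lambda s: s[0])
        if best is None or scored[0][0] < best[0]:
            best = scored[0]
        parents = [k for _, k, _ in scored[: pop_size // 2]]
        # mutation: add or remove one component from a surviving parent
        pop = parents + [max(1, p + rng.choice([-1, 1])) for p in parents]
    return best[2]                                          # fitted best model
```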