2003, The Journal of Machine …
This paper introduces an energy-based approach to Independent Component Analysis (ICA), which merges a bottom-up filtering perspective with the goal of fitting a probability density to observations. It highlights the advantages of overcomplete representations, where the number of sources exceeds the number of observations, emphasizing enhanced model flexibility and robustness to noise. The work demonstrates the potential of energy-based models to extend ICA concepts into multi-layer configurations, showcasing novel insights for improving interpretability and computational tractability.
2004 IEEE International Conference on Acoustics, Speech, and Signal Processing
We propose an incremental algorithm for independent component analysis (ICA) that is guided by statistical efficiency. Starting from a norm-based sparseness contrast function, we derive a learning algorithm built on a winner-take-all mechanism. It avoids the optimization of high-order nonlinear functions and the density estimation used by other ICA methods, such as those based on negentropy approximation, infomax, and maximum-likelihood estimation. We show that when the latent independent random variables have super-Gaussian distributions, the network efficiently extracts the independent components, and we observed much faster convergence than with other ICA methods.
IEEE Signal …, 1999
Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.
Learning using Independent Component Analysis (ICA) has found a wide range of applications in computer vision and pattern analysis, ranging from face recognition to speech separation. This paper presents a non-parametric approach to the ICA problem that is robust to outliers. For the first time in the field of ICA, the algorithm adopts an intuitive and direct approach, focusing on the very definition of independence itself: the joint probability density function (pdf) of independent sources is the product of their marginal distributions. The proposed algorithm employs kernel density estimation to approximate the underlying distributions. Our algorithm has two major advantages. First, existing algorithms learn the independent components by attempting to fulfill necessary (but not sufficient) conditions for independence; for example, the Jade algorithm approximates independence by minimizing higher-order statistics, which are not robust to outliers, whereas our technique is inherently robust to outlier effects. Second, since learning relies on kernel density estimation, it is naturally free from assumptions about the source distributions (unlike the Infomax algorithm). Experimental results show that the algorithm separates sources in the presence of outliers, whereas existing algorithms such as Jade and Infomax break down under these conditions. The results also show that the proposed non-parametric approach is largely independent of the source distributions and, unlike traditional ICA algorithms such as Jade and Infomax, can separate non-Gaussian zero-kurtosis signals.
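The core idea above, that independence means the joint pdf factorizes into the product of the marginals, can be sketched numerically. The following is a minimal illustration (not the paper's algorithm): Parzen-window (kernel density) estimates of the joint and of the marginals are compared at the sample points, giving a dependence score that is near zero for independent data and large for coupled data. The bandwidth `h`, sample counts, and test signals are all illustrative choices.

```python
import numpy as np

def gaussian_kde_1d(samples, points, h):
    """Parzen-window estimate of a 1-D density at `points`."""
    diffs = (points[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * diffs**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

def dependence_score(x, y, h=0.3, m=200):
    """Mean absolute gap between the joint KDE and the product of the
    marginal KDEs, evaluated at the first m sample points."""
    x, y = x[:m], y[:m]
    px = gaussian_kde_1d(x, x, h)
    py = gaussian_kde_1d(y, y, h)
    dx = (x[:, None] - x[None, :]) / h
    dy = (y[:, None] - y[None, :]) / h
    joint = (np.exp(-0.5 * (dx**2 + dy**2)).sum(axis=1)
             / (len(x) * h * h * 2 * np.pi))
    return np.mean(np.abs(joint - px * py))

rng = np.random.default_rng(1)
a = rng.laplace(size=1000)
b = rng.laplace(size=1000)
independent = dependence_score(a, b)           # p(x,y) ~ p(x)p(y): small gap
dependent = dependence_score(a, a + 0.1 * b)   # strongly coupled: large gap
```

The score is zero exactly when the factorization holds, which is the sufficient condition for independence the abstract contrasts with moment-based criteria.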
We propose a mixture model for blind source separation and deconvolution with adaptive source densities. Data is modelled as a multivariate locally linear random process. We derive an expression for the asymptotic likelihood of a linear process segment, which allows us to formulate and optimize a mixture model via the EM algorithm. The mixture model is able to represent nonstationary (locally, or piecewise stationary) signals. We exploit a convexity-based inequality to ensure monotonic increase of the likelihood with respect to the source density parameters. The model is applied to analysis of EEG signals.
Unsupervised feature learning algorithms based on convolutional formulations of independent component analysis (ICA) have been demonstrated to yield state-of-the-art results on several action recognition benchmarks. However, existing approaches do not allow the number of latent components (features) to be inferred automatically from the data in an unsupervised manner. This is a significant disadvantage of the state-of-the-art, as it imposes a considerable burden on researchers and practitioners, who must resort to tedious cross-validation procedures to obtain the optimal number of latent features. To resolve these issues, in this paper we introduce a convolutional nonparametric Bayesian sparse ICA architecture for overcomplete feature learning from high-dimensional data. Our method utilizes an Indian buffet process prior to facilitate inference of the appropriate number of latent features under a hybrid variational inference algorithm scalable to massive datasets. As we show, our model can naturally be used to obtain deep unsupervised hierarchical feature extractors by greedily stacking successive model layers, similar to existing approaches. In addition, inference for this model is completely heuristics-free and thus obviates the tedious parameter tuning that is a major challenge for most deep learning approaches. We evaluate our method on several action recognition benchmarks and exhibit its advantages over the state-of-the-art.
Neural Networks, 2000
A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject.
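As a concrete illustration of the linear, non-Gaussianity-seeking representation described above, the sketch below whitens a two-channel mixture and runs a kurtosis-based fixed-point iteration (in the style of FastICA) to recover one independent component. The mixing matrix, source model, and iteration counts are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-Gaussian (Laplacian) sources, linearly mixed
# by a hypothetical mixing matrix A.
n = 5000
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whiten: center, then rotate/scale so that cov(Z) = I.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Fixed-point iteration maximizing non-Gaussianity (kurtosis contrast)
# for a single unmixing vector w: w <- E[z (w.z)^3] - 3 w, renormalized.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(100):
    w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1) < 1e-9
    w = w_new
    if converged:
        break

y = w @ Z  # recovered source, up to sign and scale
```

The recovered component `y` should correlate strongly with one of the original sources, reflecting the usual ICA ambiguities of permutation, sign, and scale.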
2008
We address the problem of Blind Source Separation (BSS) of superimposed signals in situations where one signal has constant or slowly varying intensities at some consecutive locations, while at the corresponding locations the other signal has highly varying intensities. Independent Component Analysis (ICA) is a major technique for blind source separation, and existing ICA algorithms fail to estimate the original intensities in this situation. We combine the advantages of existing sparse methods and Kernel ICA by proposing a wavelet packet based sparse decomposition of the signals prior to the application of Kernel ICA. Simulations and experimental results illustrate the effectiveness and accuracy of the proposed approach. The approach is general in that it can be tailored and applied to a wide range of BSS problems concerning one-dimensional signals and images (two-dimensional signals).
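A minimal sketch of the sparsification step (not the paper's wavelet packet decomposition or Kernel ICA): one level of the Haar transform turns a piecewise-constant signal, whose raw samples are not sparse, into coefficients that are almost entirely zero. Because both the mixing and the transform are linear, X = AS implies haar(X) = A·haar(S) channel-wise, so separation can be performed on the sparser coefficients and the estimated unmixing carried back to the original domain. The signal construction here is an illustrative assumption.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar transform: averages, then details."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

rng = np.random.default_rng(4)

# A piecewise-constant signal: its raw samples are not sparse, but its
# Haar detail coefficients are.
steps = rng.normal(size=16)
s = np.repeat(steps, 64)            # length-1024 signal, 16 flat segments

coeffs = haar_step(s)
details = coeffs[len(coeffs) // 2:]

# Fraction of (near-)zero entries: 0 for the raw signal, 1 for the
# details, since every Haar pair here lies inside one flat segment.
sparsity = lambda v: np.mean(np.abs(v) < 1e-8)
```

With the sources sparse in the transform domain, density-based contrasts such as Kernel ICA have far better-behaved statistics to work with, which is the motivation the abstract gives for the decomposition.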
Neurocomputing, 2004
Expectation-Maximization (EM) algorithms for independent component analysis are presented in this paper. For super-Gaussian sources, a variational method is employed to develop a closed-form EM algorithm for learning the mixing matrix and inferring the independent components. For sub-Gaussian sources, a symmetric form of the Pearson mixture model (Neural Comput. 11 (2) (1999) 417-441) is used as the prior, which likewise enables a closed-form EM algorithm for parameter estimation.
2000
We present a flexible independent component analysis (ICA) algorithm which can separate mixtures of sub- and super-Gaussian source signals with self-adaptive nonlinearities. The flexible ICA algorithm, in the framework of the natural Riemannian gradient, is derived using the parameterized generalized Gaussian density model. The nonlinear function in the flexible ICA algorithm is self-adaptive and is controlled by the Gaussian exponent. Computer simulation results confirm the validity and high performance of the proposed algorithm.
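The update at the heart of such algorithms can be sketched as the natural-gradient rule W ← W + η (I − E[φ(y)yᵀ]) W. In the flexible ICA of the abstract, φ is derived from a generalized Gaussian density whose exponent adapts to the data; the sketch below instead fixes φ(y) = tanh(y), the usual super-Gaussian stand-in, and uses a hypothetical mixing matrix and learning rate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Mix two super-Gaussian (Laplacian) sources with a hypothetical A.
n = 4000
S = rng.laplace(size=(2, n))
A = np.array([[1.0, 0.5], [0.3, 1.0]])
X = A @ S

# Batch natural-gradient ICA: W <- W + eta * (I - phi(Y) Y^T / n) W.
# phi(y) = tanh(y) stands in for the self-adaptive nonlinearity of the
# flexible algorithm, which would tune a generalized-Gaussian exponent.
W = np.eye(2)
eta = 0.1
for _ in range(300):
    Y = W @ X
    phi = np.tanh(Y)
    W = W + eta * (np.eye(2) - phi @ Y.T / n) @ W

# Global transform: at a separating solution this approaches a scaled
# permutation matrix (one dominant entry per row).
P = W @ A
```

The natural (Riemannian) gradient multiplies the ordinary gradient by WᵀW, which removes the matrix inversion from the update and makes convergence independent of the conditioning of the mixing.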
Proceedings of the International Joint Conference on Neural Networks, 2003., 2003
We propose a novel algorithm for blind separation of sparse overcomplete sources called Algebraic Independent Component Analysis (AICA). The proposed AICA algorithm is computationally more efficient at blindly estimating the mixing matrix than earlier geometric ICA (geo-ICA) algorithms. AICA is based entirely on algebraic operations and vector-distance measures. Firstly, these choices lead to a considerable reduction in the computational cost of the AICA algorithm. Secondly, the robustness of the algebraic operations against the inherent permutation and scaling ambiguities of ICA simplifies performance evaluation of ICA algorithms using the proposed algebraic measure. Thirdly, the algebraic framework extends directly to ICA problems of any dimension, exhibiting only a linear increase in complexity. The stability of similar algorithms has been comprehensively studied in the realm of geo-ICA. The algorithm has been extensively tested for overcomplete, undercomplete and "quadratic" ICA using unimodal sparse distributions such as the Laplacian and Gamma distributions for speech. Illustrative blind source separation simulation examples for overcomplete speech mixtures are also presented.
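The geometric intuition behind such algebraic and geometric methods can be sketched as follows: when the sources are sparse, observations concentrate along the columns of the mixing matrix, so clustering sign-normalized data directions estimates those columns. This is an illustrative two-dimensional sketch, not the AICA algorithm itself; the mixing matrix, sparsity level, and the simple k-means-style loop are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse sources: at any instant usually at most one source is active,
# so observations cluster along the mixing-matrix columns.
n = 3000
S = rng.laplace(size=(2, n)) * (rng.random((2, n)) < 0.15)
A = np.array([[1.0, 0.2], [0.2, 1.0]])
A /= np.linalg.norm(A, axis=0)                 # unit-norm mixing columns
X = A @ S
X = X[:, np.linalg.norm(X, axis=0) > 1e-6]     # drop all-silent samples

# Sign-invariant directions on the unit half-circle.
D = X / np.linalg.norm(X, axis=0)
D[:, D[0] < 0] *= -1

# Cluster directions by angle; the centroids estimate the mixing
# columns up to the permutation and sign ambiguities inherent to ICA.
centroids = np.eye(2)
for _ in range(50):
    labels = np.argmax(centroids.T @ D, axis=0)   # nearest by inner product
    for k in range(2):
        m = D[:, labels == k].mean(axis=1)
        centroids[:, k] = m / np.linalg.norm(m)
```

Because the method never inverts the mixing, the same direction-clustering picture extends to the overcomplete case, where there are more cluster directions than observation dimensions.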
2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014
IEEE Transactions on Audio, Speech and Language Processing, 2000
The Annals of Statistics, 2006
IEEE Transactions on Neural Networks, 2004
Trends in Cognitive Sciences, 2002
1997 IEEE International Conference on Acoustics, Speech, and Signal Processing
Neural Computation, 1999
2009 2nd International Conference on Computer, Control and Communication, 2009
IEEE Transactions on Neural Networks, 2005
Neural computation, 2001