1994, IEEE Transactions on Signal Processing
In this paper we describe a neural network model (APEX) for multiple principal component extraction. All the synaptic weights of the model are trained with the normalized Hebbian learning rule. The network structure features a hierarchical set of lateral connections among the output units which serve the purpose of weight orthogonalization. This structure also allows the size of the model to grow or shrink without the need to retrain the old units. The exponential convergence of the network is formally proved, and it provides a significant performance improvement over previous methods. By establishing an important connection with the recursive least squares algorithm, we are able to provide the optimal value of the learning step-size parameter, which leads to a significant improvement in convergence speed. This is in contrast with previous neural PCA models, which lack such numerical advantages. The APEX algorithm is also parallelizable, allowing the concurrent extraction of multiple principal components. Furthermore, APEX is shown to be applicable to the constrained PCA problem, where the signal variance is maximized under external orthogonality constraints. We then study various principal component analysis (PCA) applications that might benefit from the adaptive solution offered by APEX. In particular, we discuss applications in spectral estimation, signal detection, and image compression and filtering, while other application domains are also briefly outlined.
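As an illustration of the structure described above, the sketch below implements an APEX-style update in NumPy: an Oja-type normalized Hebbian rule for the feedforward weights and an anti-Hebbian rule for the hierarchical lateral weights. The fixed step size `beta` and all variable names are our own illustrative choices; the RLS-derived optimal step size discussed in the abstract is not reproduced here.

```python
import numpy as np

def apex_step(x, W, C, beta=0.01):
    """One stochastic update of an APEX-style network (illustrative sketch).

    x : (n,)   input sample
    W : (m, n) feedforward weights, row j drives output unit j
    C : (m, m) strictly lower-triangular lateral (inhibitory) weights
    """
    m = W.shape[0]
    y = np.zeros(m)
    for j in range(m):                       # hierarchical outputs: unit j sees units 0..j-1
        y[j] = W[j] @ x - C[j, :j] @ y[:j]   # feedforward drive minus lateral inhibition
    for j in range(m):
        W[j] += beta * (y[j] * x - y[j] ** 2 * W[j])              # Oja-type Hebbian rule
        C[j, :j] += beta * (y[j] * y[:j] - y[j] ** 2 * C[j, :j])  # anti-Hebbian lateral rule
    return y, W, C
```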
1996
In this paper a fast and efficient adaptive learning algorithm for estimation of the principal components is developed. It seems to be especially useful in applications with a changing environment, where the learning process has to be repeated in an on-line manner. The approach can be called the cascade recursive least square (CRLS) method, as it combines a cascade (hierarchical) neural network scheme for input signal reduction with the RLS (recursive least square) filter for adaptation of learning rates. Successful application of the CRLS method for 2-D image compression and reconstruction and its performance in comparison to other known PCA adaptive algorithms are also documented.
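The cascade/deflation structure with an RLS-style normalized gain can be sketched as follows. This is our own simplified reading of a cascade recursive least squares scheme, not the authors' exact CRLS equations; the single-pass loop, random initialization, and `eps` constant are illustrative assumptions.

```python
import numpy as np

def cascade_pca(X, n_components, eps=1e-8):
    """Sketch of cascade (deflation) PCA with RLS-style normalized gains.

    X : (T, n) data matrix, rows are (roughly zero-mean) samples.
    Returns an (n_components, n) array of estimated principal directions.
    """
    T, n = X.shape
    W = np.zeros((n_components, n))
    residual = X.copy()
    for j in range(n_components):
        w = np.random.randn(n)
        w /= np.linalg.norm(w)
        d = eps                            # accumulated output power: RLS-style gain denominator
        for x in residual:                 # single pass; repeat passes for higher accuracy
            y = w @ x
            d += y * y
            w += (y / d) * (x - y * w)     # normalized (1/d) step, Oja-like correction
        w /= np.linalg.norm(w)
        W[j] = w
        residual = residual - np.outer(residual @ w, w)   # deflate: remove extracted component
    return W
```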
2007
Classical feature extraction and data projection methods have been extensively investigated in the pattern recognition and exploratory data analysis literature. Feature extraction and multivariate data projection help avoid the "curse of dimensionality", improve the generalization ability of classifiers, and significantly reduce the computational requirements of pattern classifiers. During the past decade a large number of artificial neural networks and learning algorithms have been proposed for solving feature extraction problems, most of them adaptive in nature and well-suited to the many real environments where an adaptive approach is required. Principal Component Analysis, also called the Karhunen-Loeve transform, is a well-known statistical method for feature extraction, data compression, and multivariate data projection, and so far it has been broadly used in a large range of signal and image processing, pattern recognition, and data analysis applications.
Principal Component Analysis Networks and Algorithms, 2017
PCA is a statistical method that is directly related to EVD and SVD. Neural-network-based PCA methods estimate the principal components online from the input data sequence, which makes them especially suitable for high-dimensional data, since the computation of a large covariance matrix is avoided, and for the tracking of nonstationary data, where the covariance matrix changes slowly over time. Neural networks and algorithms for PCA are described in this chapter; the algorithms given here are typically unsupervised learning methods. PCA has been widely used in engineering and scientific disciplines such as pattern recognition, data compression and coding, image processing, high-resolution spectrum analysis, and adaptive beamforming. PCA is based on the spectral analysis of the second-moment matrix that statistically characterizes a random vector. PCA is directly related to SVD, and the most common way to perform PCA is via the SVD of a data matrix; however, the capability of SVD is limited for very large data sets. Preprocessing usually maps a high-dimensional space to a low-dimensional space with the least information loss, a step known as feature extraction. PCA is a well-known feature extraction method, and it allows the removal of the second-order correlation among given random processes. By calculating the eigenvectors of the covariance matrix of the input vector, PCA linearly transforms a high-dimensional input vector into a low-dimensional one whose components are uncorrelated.
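For contrast with the online neural algorithms, the standard batch route from a data matrix to its principal components via SVD takes only a few lines. The function below is an illustrative sketch with names of our own choosing, not code from the chapter.

```python
import numpy as np

def pca_svd(X, n_components):
    """Batch PCA of a (T, n) data matrix via SVD of the centered data."""
    Xc = X - X.mean(axis=0)                # remove the mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]         # eigenvectors of the covariance matrix
    scores = Xc @ components.T             # uncorrelated low-dimensional representation
    explained_var = (s[:n_components] ** 2) / (X.shape[0] - 1)
    return components, scores, explained_var
```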
Economic computation and economic cybernetics studies and research / Academy of Economic Studies
Principal component analysis identifies a linear transformation such that the axes of the resulting coordinate system correspond to the directions of largest variability of the investigated signal. The advantage of using principal components stems from the fact that the bands are uncorrelated and no information contained in one band can be predicted from knowledge of the other bands; the information carried by each band is therefore maximal for the whole set of bits. The paper reports a series of conclusions concerning the performance and efficiency of some of the most frequently used PCA algorithms implemented on neural architectures.
Journal of Computational Physics, 2014
A descent procedure is proposed for the search of low-dimensional subspaces of a high-dimensional space that satisfy an optimality criterion. Specifically, the procedure is applied to finding the subspace spanned by the first m singular components of an n-dimensional dataset. The procedure minimizes the associated cost function through a series of orthogonal transformations, each represented economically as the exponential of a skew-symmetric matrix drawn from a low-dimensional space.
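The central device, an orthogonal update written as the exponential of a skew-symmetric matrix, can be sketched as follows. This is a generic illustration of the idea (maximizing captured variance with a full-size matrix exponential), not the paper's economical low-dimensional parameterization; the cost function and `step` value are our own assumptions.

```python
import numpy as np
from scipy.linalg import expm

def subspace_cost(Q, X):
    """Negative variance captured by the subspace spanned by the columns of Q."""
    return -np.sum((X @ Q) ** 2)

def descent_step(Q, X, step=1e-3):
    """One illustrative descent step over orthogonal transformations.

    Q : (n, m) orthonormal basis, X : (T, n) centered data.
    The update Q <- expm(-step * A) @ Q with A skew-symmetric keeps Q orthonormal,
    because the exponential of a skew-symmetric matrix is an orthogonal matrix.
    """
    G = -2 * X.T @ (X @ Q)        # Euclidean gradient of the cost w.r.t. Q
    A = G @ Q.T - Q @ G.T         # skew-symmetric descent direction
    return expm(-step * A) @ Q    # rotate the basis slightly downhill
```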
IEEE International Conference on Acoustics Speech and Signal Processing, 2002
Principal Components Analysis (PCA) is an invaluable statistical tool in signal processing. In many cases, an on-line algorithm to adapt the PCA network to determine the principal projections in the input space is desired. Algorithms proposed until now use the traditional deflation or inflation procedure to determine the intermediate components sequentially, after the convergence of the principal or minor component is achieved. In this paper, we propose a constrained linear network and a robust cost function to determine any number of principal components simultaneously. The topology exploits the fact that the eigenvector matrix sought is orthonormal. A gradient-based algorithm named SIPEX-G is also presented.
IEEE Transactions on Neural Networks, vol. 11, no. 2, 2000
We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences. Index Terms: adaptive principal component analysis, eigendecomposition, principal subspace analysis.
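To make this concrete, here is a minimal sketch of stochastic gradient descent on the unconstrained reconstruction-error objective E||x − W Wᵀx||², which yields an LMSER-like subspace rule. The constant step size is an assumption, and none of the faster steepest-descent, conjugate-direction, or Newton-Raphson variants derived in the paper are shown.

```python
import numpy as np

def lmser_like_step(x, W, eta=0.01):
    """One stochastic gradient step on J(W) = ||x - W W^T x||^2 (sketch).

    x : (n,) sample, W : (n, m) weights whose columns span the estimated
    principal subspace after convergence.
    """
    y = W.T @ x                  # project onto the current subspace
    e = x - W @ y                # reconstruction error
    # gradient of J w.r.t. W is -2 (e y^T + x (e^T W)), so step downhill:
    W += eta * (np.outer(e, y) + np.outer(x, e @ W))
    return W
```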
2007
In this paper, a new constructive auto-associative neural network performing nonlinear principal component analysis is presented. The developed constructive neural network maps the data nonlinearly into its principal components while preserving the order of the principal components. The weights of the neural network are trained by a combination of Back Propagation (BP) and a Genetic Algorithm (GA), which accelerates the training process by avoiding local minima. The performance of the proposed method is evaluated by means of two different experiments that illustrate its efficiency.
Principal manifolds for data …, 2008
Nonlinear principal component analysis (NLPCA), as a nonlinear generalisation of standard principal component analysis (PCA), amounts to generalising the principal components from straight lines to curves. This chapter aims to provide an extensive description of the autoassociative neural network approach for NLPCA. Several network architectures are discussed, including the hierarchical, the circular, and the inverse model, with special emphasis on missing data. Results are shown from applications in the field of molecular biology. This includes metabolite data analysis of a cold stress experiment in the model plant Arabidopsis thaliana and gene expression analysis of the reproductive cycle of the malaria parasite Plasmodium falciparum within infected red blood cells.
Anais do 10. Congresso Brasileiro de Inteligência Computacional, 2016
Principal Component Analysis (PCA) is a well-known statistical method that has successfully been applied for reducing data dimensionality. Focusing on a neural network which approximates the results obtained by classical PCA, the main contribution of this work consists in introducing a parallel model of such a network. A comparative study shows that the proposal presents promising results when a multi-core computer is available.
Proceedings of 1995 American Control Conference - ACC'95, 1995
Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205), 2001
In this paper, we propose a neural network called Time Adaptive Principal Components Analysis (TAPCA), which is composed of a number of Time Adaptive Self-Organizing Map (TASOM) networks. Each TASOM in the TAPCA network estimates one eigenvector of the correlation matrix of the input vectors entered so far, without having to calculate the correlation matrix. This estimation is done in an online fashion. The input distribution can be nonstationary, too. The eigenvectors appear in order of importance: the first TASOM calculates the eigenvector corresponding to the largest eigenvalue of the correlation matrix, and so on. The TAPCA network is tested in stationary environments, and is compared with the eigendecomposition (ED) method and the Generalized Hebbian Algorithm (GHA) network. It performs better than both methods and needs fewer samples to converge.
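For reference, the GHA baseline mentioned above can be written compactly. The sketch below is the standard Sanger rule in our own notation, not the TAPCA algorithm itself; the fixed learning rate is an illustrative assumption.

```python
import numpy as np

def gha_step(x, W, eta=0.01):
    """One update of the Generalized Hebbian Algorithm (Sanger's rule).

    x : (n,) input sample, W : (m, n) weights; row j converges to the
    eigenvector with the j-th largest eigenvalue of the input correlation matrix.
    """
    y = W @ x                                                  # (m,) outputs
    # Hebbian term minus a lower-triangular deflation term:
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```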
International Journal of Computing, 2021
An artificial neural system for data compression that sequentially processes linearly nonseparable classes is proposed. The main elements of this system are adjustable radial-basis functions (Epanechnikov kernels), an adaptive linear associator learned by a multistep optimal algorithm, and a Hebb-Sanger neural network whose nodes are formed by Oja's neurons. The modified Oja's algorithm is given additional filtering (for noisy data) and tracking (for nonstationary data) properties. The main feature of the proposed system is its ability to work under significant nonlinearity of the initial data, which are fed to the system sequentially and have a nonstationary nature. The effectiveness of the developed approach is confirmed by experimental results. The proposed kernel online neural system is designed to solve compression and visualization tasks when the initial data form linearly nonseparable classes in the general problem of Data Stream Mi...
1998
One of the most widely known algorithms for performing neural Principal Component Analysis of real-valued random signals is Kung and Diamantaras' Adaptive Principal component EXtractor (APEX) for a laterally connected neural architecture.
Pattern Recognition Letters, 2003
In sequential principal component (PC) extraction, as increasing numbers of PCs are extracted, the accumulated extraction error becomes dominant and makes a reliable extraction of the remaining PCs difficult. This paper presents an improved cascade recursive least squares method for PC extraction. The good features of the proposed approach, which include improved convergence speed and higher extraction accuracy, are illustrated through simulation results.
2002 11th European Signal Processing Conference, 2002
Principal Components Analysis (PCA) is a very important statistical tool in signal processing, which has found successful applications in numerous engineering problems as well as other fields. In general, an on-line algorithm to adapt the PCA network to determine the principal projections of the input data is desired. The authors have recently introduced a fast, robust, and efficient PCA algorithm called SIPEX-G without detailed comparisons and analysis of performance. In this paper, we investigate the performance of SIPEX-G through Monte Carlo runs on synthetic data and on realistic problems where PCA is applied. These problems include direction of arrival estimation and subspace Wiener filtering.
Neural networks, 1989
Proceedings ESANN, 2002
Traditionally, nonlinear principal component analysis (NLPCA) is seen as a nonlinear generalization of standard (linear) principal component analysis (PCA). So far, most of these generalizations rely on a symmetric type of learning. Here we propose an algorithm that extends PCA into NLPCA through a hierarchical type of learning. The hierarchical algorithm (h-NLPCA), like many versions of the symmetric one (s-NLPCA), is based on a multi-layer perceptron with an auto-associative topology, whose learning rule has been upgraded to accommodate the desired discrimination between components. With h-NLPCA we seek not only the nonlinear subspace spanned by the optimal set of components, which is ideal for data compression, but we pay particular attention to the order in which these components appear. Due to its hierarchical nature, our algorithm is shown to be very efficient in detecting meaningful nonlinear features from real-world data, as well as in providing a nonlinear whitening. Furthermore, in a quantitative analysis, the h-NLPCA achieves better classification accuracies, with a smaller number of components, than most traditional approaches.
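The hierarchical error idea can be sketched with a small autoassociative network: reconstruction errors are summed over nested models that use only the first k bottleneck units, so earlier components are pushed to explain the most variance. The PyTorch code below is our own minimal illustration of that hierarchy, not the authors' implementation; layer sizes and the masking mechanism are placeholder assumptions.

```python
import torch
import torch.nn as nn

class Autoassociative(nn.Module):
    """Bottleneck autoencoder for nonlinear PCA (illustrative sketch)."""
    def __init__(self, n_in, n_hidden, n_comp):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Tanh(),
                                    nn.Linear(n_hidden, n_comp))
        self.decode = nn.Sequential(nn.Linear(n_comp, n_hidden), nn.Tanh(),
                                    nn.Linear(n_hidden, n_in))

    def forward(self, x, k=None):
        z = self.encode(x)
        if k is not None:                  # keep only the first k nonlinear components
            mask = torch.zeros_like(z)
            mask[:, :k] = 1.0
            z = z * mask
        return self.decode(z)

def hierarchical_error(model, x, n_comp):
    """Sum of reconstruction errors of the nested 1..n_comp component models."""
    return sum(((model(x, k) - x) ** 2).mean() for k in range(1, n_comp + 1))
```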
IEEE Signal Processing Letters, 2000
This letter presents a Hebb-type learning algorithm for on-line linear calculation of principal components. The proposed method is based on a recently proposed cooperative-competitive concept named the time-oriented hierarchical method. The algorithm performs deflation on the signal power rather than on the signal itself. It is also shown when, and how, this algorithm can be used as a blind signal separation algorithm. The proposed synaptic efficacy learning rule does not need explicit information about the values of the other efficacies in order to modify an individual efficacy. The number of necessary global calculation circuits is one. Index terms: adaptive algorithm, principal/independent component analysis.