1994, IEEE Transactions on Signal Processing
In this paper we provide theoretical foundations for a new neural model for singular value decomposition (SVD) based on an extension of the Hebbian learning rule called the cross-coupled Hebbian rule. The model extracts the SVD of the cross-correlation matrix of two stochastic signals and extends previous work on neural-network-based principal component analysis (PCA). We prove the asymptotic convergence of the network to the principal (normalized) singular vectors of the cross-correlation matrix, and we provide simulation results suggesting that the convergence is exponential. The new model may have useful applications in filtering for signal processing and in signal detection.
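A minimal sketch of a cross-coupled Hebbian update of the kind described above, assuming the Oja-style normalized form with outputs a = w·x and b = v·y; the toy data, dimensions, and learning rate are illustrative and not the paper's experiments:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy coupled signals: x is m-dimensional, y is n-dimensional,
    # and the two streams share one latent component.
    m, n, T = 5, 4, 20000
    shared = rng.standard_normal(T)
    X = rng.standard_normal((T, m)) + np.outer(shared, rng.standard_normal(m))
    Y = rng.standard_normal((T, n)) + np.outer(shared, rng.standard_normal(n))

    w = rng.standard_normal(m); w /= np.linalg.norm(w)   # left singular vector estimate
    v = rng.standard_normal(n); v /= np.linalg.norm(n * [1.0]) if False else np.linalg.norm(v)
    eta = 1e-3                                           # learning rate

    for x, y in zip(X, Y):
        a = w @ x                         # output of the unit driven by x
        b = v @ y                         # output of the unit driven by y
        w += eta * (b * x - a * b * w)    # cross-coupled Hebbian step with Oja-style decay
        v += eta * (a * y - a * b * v)

    # Compare with the principal singular vectors of the sample cross-correlation matrix.
    C = X.T @ Y / T
    U, s, Vt = np.linalg.svd(C)
    print(abs(w @ U[:, 0]), abs(v @ Vt[0]))   # both should be close to 1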
The Open Cybernetics & Systemics Journal, 2007
A theoretical proof shows that a time-delayed neural network implementing a Hebbian associative learning rule computes the equivalent of the cross-correlation of time-series functions, establishing the relationship between correlation coefficients and connection weights. The values of the computed correlation coefficients can be retrieved from the connection weights.
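As an illustration of this claim, a short sketch (with made-up signals and a hypothetical tapped-delay-line layout, not the paper's network) showing that a plain Hebbian update on delayed inputs accumulates the cross-correlation at each lag in the corresponding connection weight:

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative time series: y is x delayed by 3 steps, plus a little noise.
    T, max_lag = 5000, 6
    x = rng.standard_normal(T)
    y = np.roll(x, 3) + 0.1 * rng.standard_normal(T)

    # One connection weight per tapped delay; pure Hebbian accumulation
    # (pre-synaptic delayed x times post-synaptic y).
    eta = 1.0 / T
    w = np.zeros(max_lag + 1)
    for t in range(max_lag, T):
        for d in range(max_lag + 1):
            w[d] += eta * x[t - d] * y[t]

    # The weights approximate the cross-correlation coefficients at each lag,
    # so the largest weight sits at the true delay.
    print(np.argmax(np.abs(w)))   # expected: 3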
This paper proposes a novel coupled neural network learning algorithm to extract the principal singular triplet (PST) of the cross-correlation matrix between two high-dimensional data streams. We first introduce a novel information criterion (NIC) whose stationary points are the singular triplets of the cross-correlation matrix. Then, based on Newton's method, we obtain a coupled system of ordinary differential equations (ODEs) from the NIC. The ODEs have the same equilibria as the gradient of the NIC; however, only the first PST of the system is stable (which is also the desired solution), and all other equilibria are unstable saddle points. Based on this system, we finally obtain a fast and stable algorithm for PST extraction. The proposed algorithm solves the speed-stability problem that plagues most noncoupled learning rules. Moreover, it can also extract multiple PSTs effectively by using a sequential method. Citation: Xiaowei Feng, Xiangyu Kong, Hongguang Ma. Coupled cross-correlation neural network algorithm for principal singular triplet extraction of a cross-covariance matrix. IEEE/CAA Journal of Automatica Sinica, 2016, 3(2): 149-156.
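For reference, the quantity being extracted can also be obtained by ordinary alternating power iteration combined with deflation for the sequential case; the sketch below is such a baseline on assumed toy data, not the paper's Newton-based coupled ODE algorithm:

    import numpy as np

    def principal_singular_triplet(A, iters=200, seed=0):
        # Alternating power iteration for the principal singular triplet of A.
        # This is only a simple reference method targeting the same (sigma, u, v)
        # as the coupled algorithm in the paper.
        v = np.random.default_rng(seed).standard_normal(A.shape[1])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            u = A @ v
            u /= np.linalg.norm(u)
            v = A.T @ u
            v /= np.linalg.norm(v)
        return u @ A @ v, u, v

    # Sequential extraction of several triplets by deflating the cross-covariance estimate.
    A = np.random.default_rng(2).standard_normal((6, 4))
    for k in range(3):
        sigma, u, v = principal_singular_triplet(A)
        print(k, round(sigma, 4))
        A = A - sigma * np.outer(u, v)   # deflation removes the extracted triplet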
Systems and Computers in Japan, 1996
The Hebbian rule is perhaps the most popular unsupervised learning model for neural networks. Recently, the opposite of the Hebbian rule, the so-called anti-Hebbian rule, has drawn attention as a new learning paradigm. This paper first clarifies some fundamental properties of the anti-Hebbian rule, and then shows that a variety of networks can be obtained from various anti-Hebbian rules.
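A small sketch of an anti-Hebbian lateral connection decorrelating two outputs, one of the simplest settings in which the rule's behaviour can be seen; the data, mixing matrix, and learning rate are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative correlated 2-D input (the mixing matrix is arbitrary).
    T = 20000
    X = rng.standard_normal((T, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])

    w_lat, eta = 0.0, 1e-3
    for x in X:
        y1 = x[0]
        y2 = x[1] + w_lat * y1     # lateral connection from unit 1 to unit 2
        w_lat -= eta * y1 * y2     # anti-Hebbian update: note the minus sign

    # After learning, the lateral weight cancels the shared component,
    # so the two outputs are approximately decorrelated.
    Y2 = X[:, 1] + w_lat * X[:, 0]
    print(round(float(np.corrcoef(X[:, 0], Y2)[0, 1]), 3))   # close to 0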
intechopen.com
This paper is developed in two parts. First, we formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption on the input autocorrelation matrix used by previous authors. Our treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD, and principal component analysis (PCA) as special cases. Our analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, we investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), we find that a sequential version of linear backpropagation can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit, or by pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained so as to enforce orthogonality among the upper- and lower-layer weights. We call this the Lateral Orthogonalization Network (LON), and we show via theoretical analysis, verified by simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently. Finally, we show the application of our results to the identification problem for systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption on the input autocorrelation, and therefore cannot be applied to this case.
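As a rough illustration of the first part, the sketch below computes a reduced-rank linear map from data whose input autocorrelation matrix is singular, using a pseudo-inverse square root for whitening; it is a closed-form stand-in for the network solutions discussed above, on assumed toy data rather than the paper's formulation:

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy data whose input autocorrelation is singular (the last input copies the first),
    # so the usual inverse-based formulas do not apply.
    T, n_in, n_out, r = 2000, 5, 4, 2
    X = rng.standard_normal((T, n_in))
    X[:, -1] = X[:, 0]
    Y = X @ rng.standard_normal((n_in, n_out)) + 0.05 * rng.standard_normal((T, n_out))

    Rxx = X.T @ X / T                 # input autocorrelation (rank deficient)
    Ryx = Y.T @ X / T                 # teacher-input cross-correlation

    # Whiten the inputs with a pseudo-inverse square root, truncate the SVD to rank r
    # in the whitened coordinates, then map back.
    evals, evecs = np.linalg.eigh(Rxx)
    keep = evals > 1e-10
    S = evecs[:, keep] @ np.diag(evals[keep] ** -0.5) @ evecs[:, keep].T
    U, s, Vt = np.linalg.svd(Ryx @ S)
    W_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r] @ S     # rank-r linear map: y ~ W_r x

    print(np.linalg.matrix_rank(W_r), round(float(np.mean((Y - X @ W_r.T) ** 2)), 4))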
ArXiv, 2020
Recent work has shown that biologically plausible Hebbian learning can be integrated with backpropagation learning (backprop) when training deep convolutional neural networks. In particular, it has been shown that Hebbian learning can be used for training the lower or the higher layers of a neural network. For instance, Hebbian learning is effective for re-training the higher layers of a pre-trained deep neural network, achieving accuracy comparable to SGD while requiring fewer training epochs, which suggests potential applications in transfer learning. In this paper we build on these results and further improve Hebbian learning in these settings by using a nonlinear Hebbian Principal Component Analysis (HPCA) learning rule in place of the Hebbian Winner-Takes-All (HWTA) strategy used in previous work. We test this approach in the context of computer vision. In particular, the HPCA rule is used to train convolutional neural networks in order to extract relevant features from...
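The sketch below gives one plausible reading of an HPCA-style update (a nonlinear variant of Sanger's rule) applied to generic feature vectors; the shapes, nonlinearity, and training loop are assumptions for illustration, not the authors' code:

    import numpy as np

    def hpca_step(W, x, eta=1e-3, f=np.tanh):
        # One HPCA-style update: a nonlinear variant of Sanger's rule in which each
        # unit learns the part of the input not already reconstructed by the units
        # above it. W has shape (n_components, n_features); x is one feature vector.
        y = f(W @ x)
        for i in range(W.shape[0]):
            recon = W[: i + 1].T @ y[: i + 1]     # reconstruction from units 0..i
            W[i] += eta * y[i] * (x - recon)      # Hebbian term with built-in deflation
        return W

    # Usage on random vectors standing in for flattened convolutional-layer inputs.
    rng = np.random.default_rng(5)
    W = 0.1 * rng.standard_normal((8, 64))
    for _ in range(1000):
        W = hpca_step(W, rng.standard_normal(64))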
Principal Component Analysis Networks and Algorithms, 2017
PCA is a statistical method directly related to EVD and SVD. Neural-network-based PCA methods estimate the principal components online from the input data sequences; this is especially suitable for high-dimensional data, because it avoids computing the large covariance matrix, and for tracking nonstationary data, where the covariance matrix changes slowly over time. Neural networks and algorithms for PCA are described in this chapter, and the algorithms given here are typically unsupervised learning methods. PCA has been widely used in engineering and scientific disciplines, such as pattern recognition, data compression and coding, image processing, high-resolution spectrum analysis, and adaptive beamforming. PCA is based on the spectral analysis of the second-moment matrix that statistically characterizes a random vector. PCA is directly related to SVD, and the most common way to perform PCA is via the SVD of a data matrix; however, the capability of SVD is limited for very large data sets. Preprocessing usually maps a high-dimensional space to a low-dimensional space with the least information loss, which is known as feature extraction. PCA is a well-known feature extraction method that removes the second-order correlation among given random processes. By calculating the eigenvectors of the covariance matrix of the input vector, PCA linearly transforms a high-dimensional input vector into a low-dimensional one whose components are uncorrelated.
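As a concrete instance of online, covariance-free PCA of the kind described above, here is Oja's single-component rule on a simulated high-dimensional stream; the data and step size are illustrative:

    import numpy as np

    rng = np.random.default_rng(6)

    # A stream of high-dimensional samples; the covariance matrix is never formed.
    d, T, eta = 100, 20000, 1e-3
    pc = rng.standard_normal(d)
    pc /= np.linalg.norm(pc)                 # the true first principal direction
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)

    for _ in range(T):
        x = rng.standard_normal(d) + 3.0 * rng.standard_normal() * pc
        y = w @ x
        w += eta * y * (x - y * w)           # Oja's rule: Hebbian term plus normalization

    print(round(float(abs(w @ pc)), 3))      # close to 1: w tracks the first PC online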