This paper is developed in two parts. First, we formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption on the input autocorrelation matrix used by previous authors. Our treatment unifies linear regression, Wiener filtering, full-rank approximation, auto-association networks, SVD, and Principal Component Analysis (PCA) as special cases. Our analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the Generalized Singular Value Decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer back-propagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, we investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), we find that a sequential version of linear back propagation can extract the exact generalized eigenvector components. The advantage of this approach is that it is easy to update the model structure by adding one more unit, or pruning one or more units, when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained so as to enforce orthogonality among the upper- and lower-layer weights. We call this the Lateral Orthogonalization Network (LON), and we show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion, so that the components are extracted concurrently.
Finally, we show the application of our results to the identification of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility of the input autocorrelation and therefore cannot be applied to this case.
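The reduced-rank solution discussed above can be sketched numerically. The following is a minimal numpy illustration, not the paper's GSVD derivation: it computes the full least-squares map with a pseudoinverse (so the input autocorrelation need not be invertible, matching the relaxed assumption), then truncates it to rank r by projecting the fitted outputs onto their top-r left singular subspace, the standard reduced-rank regression recipe. All matrix names and sizes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T, r = 6, 5, 200, 2          # outputs, inputs, samples, target rank

X = rng.standard_normal((m, T))    # input data matrix
Y = rng.standard_normal((n, T))    # teacher (desired output) matrix

# Full-rank least-squares map; pinv handles a non-invertible
# input autocorrelation X @ X.T (the relaxed assumption above).
W_full = Y @ X.T @ np.linalg.pinv(X @ X.T)

# Rank-r solution: project the fitted outputs onto their top-r
# left singular subspace (standard reduced-rank regression).
U, _, _ = np.linalg.svd(W_full @ X)
P = U[:, :r] @ U[:, :r].T          # rank-r orthogonal projector
W_r = P @ W_full

err_full = np.linalg.norm(Y - W_full @ X)
err_r = np.linalg.norm(Y - W_r @ X)
```

Since W_full minimizes the unconstrained least-squares error, the rank-constrained error can only be larger; the gap closes as r approaches full rank.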
Electronics
System identification problems are always challenging to address in applications that involve long impulse responses, especially in the framework of multichannel systems. In this context, the main goal of this review paper is to promote some recent developments that exploit decomposition-based approaches to multiple-input/single-output (MISO) system identification problems, which can be efficiently solved as combinations of low-dimension solutions. The basic idea is to reformulate such a high-dimension problem in the framework of bilinear forms, and to then take advantage of the Kronecker product decomposition and low-rank approximation of the spatiotemporal impulse response of the system. The validity of this approach is addressed in terms of the celebrated Wiener filter, by developing an iterative version with improved performance features (related to the accuracy and robustness of the solution). Simulation results support the main theoretical findings and indicate the appealing p...
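The Kronecker-decomposition idea described above can be illustrated in a few lines. This is a hedged numpy sketch of the decomposition step only, not the paper's iterative Wiener filter, and the dimensions and rank are made-up: reshaping a length-M*L impulse response into an M x L matrix turns its rank-P truncated SVD into a sum of P Kronecker products of a short and a long factor.

```python
import numpy as np

rng = np.random.default_rng(1)
M, L, P = 8, 16, 2                 # hypothetical factor lengths and rank

h = rng.standard_normal(M * L)     # long impulse response of length M*L

# Reshape the long response into an M x L matrix; its truncated SVD
# gives the low-rank Kronecker-product approximation of h.
H = h.reshape(M, L)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
h_approx = sum(s[p] * np.kron(U[:, p], Vt[p]) for p in range(P))

rel_err = np.linalg.norm(h - h_approx) / np.linalg.norm(h)
```

With P = min(M, L) the reconstruction is exact; the point of the approach is that real-world spatiotemporal responses are often well approximated with small P, so two short filters replace one long one.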
Computers & Electrical Engineering, 1975
Fast, simple, and parallel identification algorithms are highly desirable in many applications. However, the assumptions that make an algorithm fast and simple also make it sensitive to deviations from those assumptions. In view of these considerations, this paper develops a sequential algorithm for the identification of discrete-time linear systems. The identification algorithm is based on decomposition of the autoregressive model: it identifies the autoregressive model coefficients by minimizing a squared-error performance index, using a multilevel hierarchical decomposition procedure together with a stochastic approximation algorithm. Computer simulations illustrate the performance of the identification algorithm and compare the results with those obtained from the undecomposed stochastic approximation algorithm.
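For context, plain (undecomposed) least-squares estimation of AR coefficients, the baseline such decomposition methods are compared against, takes only a few lines in numpy. The order, coefficients, and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true = np.array([0.6, -0.3])     # hypothetical AR(2) coefficients
N, T = len(a_true), 5000

# Simulate y[k] = a1*y[k-1] + a2*y[k-2] + e[k]
y = np.zeros(T)
e = 0.1 * rng.standard_normal(T)
for k in range(N, T):
    y[k] = a_true @ y[k - N:k][::-1] + e[k]

# Least squares: stack lagged regressors [y[k-1], ..., y[k-N]]
# and solve for the coefficient vector in one shot.
Phi = np.column_stack([y[N - i - 1:T - i - 1] for i in range(N)])
a_hat, *_ = np.linalg.lstsq(Phi, y[N:], rcond=None)
```

With enough samples the estimate converges to the true coefficients; the decomposed stochastic-approximation scheme trades this batch solve for cheaper sequential updates.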
In this paper, the performance of integrated linear-NN models is investigated for nonlinear system identification using two different structures: series versus parallel. In particular, Laguerre filters are selected as the linear models, and multi-layer perceptron (MLP), or feed-forward, neural networks (NN) are selected as the nonlinear models. Results show the promising capability of the novel parallel Laguerre-NN structure, especially its generalization to data different from those used during the identification stage, in comparison with the series Laguerre-NN.
Although the finite element method is often applied to analyze the dynamics of structures, its application to large, complex structures can be time-consuming, and errors in the modeling process may negatively affect the accuracy of analyses based on the model. System identification techniques attempt to circumvent these problems by using experimental response data to characterize or identify a system. However, identification of structures that are time-varying or nonlinear is problematic because the available methods generally require prior knowledge of the equations of motion for the system. Nonlinear system identification techniques are generally applicable only to nonlinearities whose functional form is known, and no general nonlinear system identification theory is available, as there is for linear systems. Linear time-varying identification methods have been proposed for application to nonlinear systems, but methods for general time-varying systems, where the form of the time variance is unknown, have only been available for single-input single-output models. This dissertation presents several general linear time-varying methods for multiple-input multiple-output systems where the form of the time variance is entirely unknown. The methods use the proper orthogonal decomposition of measured response data, combined with linear system theory, to construct a model for predicting the response of an arbitrary linear or nonlinear system without any knowledge of the equations of motion. Separate methods are derived for predicting responses to initial displacements, initial velocities, and forcing functions. Some methods require only one data set but promise accurate solutions only for linear, time-invariant systems that are lightly damped and have a mass matrix proportional to the identity matrix. Other methods use multiple data sets and are valid for general time-varying systems.
The proposed methods are applied to linear time-invariant, time-varying, and nonlinear systems via numerical examples and experiments, and the factors affecting the accuracy of the methods are discussed.
IEEE Control Systems Magazine, 1990
This paper presents two approaches for utilization of neural networks in identification of dynamical systems. In the first approach, a Hopfield network is used to implement a least-squares estimation for time-varying and time-invariant systems. The second approach, which is in the frequency domain, utilizes a set of orthogonal basis functions and Fourier analysis to construct a dynamic system in terms of its Fourier coefficients. Mathematical formulations are presented along with simulation results.
2001
In this paper, non-iterative algorithms are presented for the identification of (multivariable) nonlinear systems consisting of the interconnection of LTI systems and static nonlinearities. The proposed algorithms are numerically robust, since they are based only on least-squares estimation and singular value decomposition. Three different block-oriented nonlinear models are considered, viz., the Hammerstein model, the Wiener model, and the Feedback Block-Oriented model. For the Hammerstein model, the algorithm provides consistent estimates even in the presence of coloured output noise, under weak assumptions on the persistency of excitation of the inputs. For the Wiener model and the Feedback Block-Oriented model, consistency of the estimates can only be guaranteed in the noise-free case. Key in the derivation of the results is the use of rational orthonormal bases for the representation of the linear part of the systems. An additional advantage of this is the possibility of incorporating prior information about the system into a typically black-box identification scheme.
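The least-squares-plus-SVD idea can be sketched for the Hammerstein case. This is a noise-free numpy toy, not the paper's algorithm with rational orthonormal bases; the FIR linear block, the polynomial basis, and all sizes are assumed for illustration. The overparameterized model is linear in the matrix Theta = b c^T, so Theta is found by least squares and then split into the linear and nonlinear parts by a rank-1 SVD.

```python
import numpy as np

rng = np.random.default_rng(4)
b = np.array([1.0, 0.5, -0.25])          # hypothetical FIR linear block
c = np.array([1.0, 0.3])                 # nonlinearity f(u) = c0*u + c1*u**2
nb, nc, T = len(b), len(c), 2000

u = rng.uniform(-1, 1, T)
basis = np.stack([u, u**2], axis=1)      # known basis functions g_j(u)
y = np.convolve(basis @ c, b)[:T]        # noise-free Hammerstein output

# Step 1: least squares for the (nb x nc) parameter matrix Theta = b c^T,
# regressing y[k] on the products g_j(u[k-i]) for all lags i and bases j.
rows = [basis[k - nb + 1:k + 1][::-1].ravel() for k in range(nb - 1, T)]
theta, *_ = np.linalg.lstsq(np.array(rows), y[nb - 1:], rcond=None)
Theta = theta.reshape(nb, nc)

# Step 2: a rank-1 SVD splits Theta back into the two blocks
# (recovered only up to an exchanged scalar factor, as usual).
U, s, Vt = np.linalg.svd(Theta)
b_hat, c_hat = s[0] * U[:, 0], Vt[0]
```

The product b_hat c_hat^T equals b c^T, even though the individual factors carry the unavoidable scaling ambiguity between the two blocks.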
International Statistical Review, 2017
PCA is a statistical method that is directly related to EVD and SVD. Neural-network-based PCA methods estimate the principal components online from the input data sequence; this makes them especially suitable for high-dimensional data, since the computation of a large covariance matrix is avoided, and for tracking nonstationary data, where the covariance matrix changes slowly over time. Neural networks and algorithms for PCA are described in this chapter; the algorithms given here are typically unsupervised learning methods. PCA has been widely used in engineering and scientific disciplines such as pattern recognition, data compression and coding, image processing, high-resolution spectrum analysis, and adaptive beamforming. PCA is based on the spectral analysis of the second-moment matrix that statistically characterizes a random vector. PCA is directly related to the SVD, and the most common way to perform PCA is via the SVD of a data matrix; however, the capability of the SVD is limited for very large data sets. Preprocessing usually maps a high-dimensional space to a low-dimensional space with the least information loss, which is known as feature extraction. PCA is a well-known feature extraction method that removes the second-order correlation among given random processes. By calculating the eigenvectors of the covariance matrix of the input vector, PCA linearly transforms a high-dimensional input vector into a low-dimensional one whose components are uncorrelated.
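As a concrete baseline for the online neural algorithms, the batch SVD route to PCA described above takes only a few lines of numpy; the data here are synthetic, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic correlated data: 500 samples of a 5-dimensional random vector
X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 5))

Xc = X - X.mean(axis=0)                  # center the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                    # reduced dimension
Z = Xc @ Vt[:k].T                        # principal-component scores

# The rows of Vt are the eigenvectors of the sample covariance matrix,
# and the projected components are mutually uncorrelated.
C = np.cov(Z, rowvar=False)
```

The squared singular values divided by (n - 1) equal the covariance eigenvalues, which is why the SVD of the centered data matrix and the EVD of the covariance matrix yield the same principal components.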
IFAC Proceedings Volumes, 1985
A method for the parameter identification of linear, time-invariant systems under stationary, Gaussian, coloured excitation is presented. The input and output signals of the considered system are processed in known linear filters. Stationary covariance relations between these signals allow identification of unknown system parameters. The method is developed in detail for mechanical multibody systems and single-input single-output systems. Keywords: identification; covariance methods; continuous-time systems; measurement noise; instrumental variable methods; mechanical multibody systems; single-input single-output systems.
International Journal of Systems Science, 2000
In this paper an approach to a certain class of nonlinear parameter estimation problems is proposed, which is, in particular, applicable to distributed-parameter systems described by elliptic partial differential equations. The approach exploits a special structure of the nonlinear dependence, which allows the least-squares algorithm to be applied twice, together with the inversion of a nonlinear characteristic. Roughly speaking, the class of considered systems can be described by a feedforward neural net with two hidden layers and monotone activation functions. In the language of neural nets, the estimation problem can be interpreted as a partial inversion of the net, i.e., finding part of its inputs from a learning sequence. Simulations confirm that the approach is useful and much simpler than direct iterative minimization of the sum of squares.