1987
A new algorithm is presented for computing a column reduced form of a given full column rank polynomial matrix. The method is based on reformulating the problem as one of constructing a minimal basis for the right nullspace of a polynomial matrix closely related to the original one. The latter problem can easily be solved in a numerically reliable way. Two examples illustrating the method are included.
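One way to picture the reformulation (a sketch only; the augmented matrix actually used in the paper may differ in detail): any factorization P(s)U(s) = R(s) can be read as a right-nullspace condition,

$$\begin{bmatrix} P(s) & -I_m \end{bmatrix}\begin{bmatrix} U(s) \\ R(s) \end{bmatrix} = 0,$$

so the stacked columns of U and R are polynomial vectors in the right nullspace of [P(s)  -I_m], and a minimal basis of that nullspace keeps the column degrees as small as possible.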
2010
In this paper we propose a numerical algorithm to compute the minimal nullspace basis of a univariate polynomial matrix of arbitrary size. To do so, a sequence of structured matrices is obtained from the given polynomial matrix. The nullspace of the polynomial matrix can then be computed from the nullspaces of these structured matrices.
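A minimal numpy sketch of the structured-matrix idea, under the assumption that the structure is of block Toeplitz (convolution) type; this is not the authors' algorithm, and conv_matrix and the example matrix are hypothetical. The coefficient vectors of polynomial nullspace vectors of degree at most d form the constant nullspace of the convolution matrix built from the coefficient matrices of P(s).

```python
import numpy as np

def conv_matrix(P, d):
    """Block Toeplitz matrix T_d with T_d @ [x_0; ...; x_d] equal to the stacked
    coefficients of P(s) x(s), where P = [P_0, ..., P_k] are the coefficient
    matrices of P(s) and x(s) = x_0 + x_1 s + ... + x_d s^d."""
    k, (m, n) = len(P) - 1, P[0].shape
    T = np.zeros(((k + d + 1) * m, (d + 1) * n))
    for j in range(d + 1):                 # column block j: shift of x by s^j
        for i, Pi in enumerate(P):         # contributes to the coefficient of s^(i+j)
            T[(i + j) * m:(i + j + 1) * m, j * n:(j + 1) * n] = Pi
    return T

# Hypothetical example: P(s) = [[1, s], [s, s^2]] has the nullspace vector (s, -1)^T.
P = [np.array([[1., 0.], [0., 0.]]),    # P_0
     np.array([[0., 1.], [1., 0.]]),    # P_1
     np.array([[0., 0.], [0., 1.]])]    # P_2

d = 1
T = conv_matrix(P, d)
U, sv, Vt = np.linalg.svd(T)
rank = int(np.sum(sv > 1e-10 * sv[0]))
for v in Vt[rank:]:                     # each null vector of T_d encodes a polynomial
    print(v.reshape(d + 1, 2))          # nullspace vector of P(s); rows = powers of s
```

Increasing d until the nullspace dimension stops growing yields a polynomial basis of the nullspace; obtaining a minimal basis additionally requires the degree bookkeeping that the sequence of structured matrices in the paper is designed to provide.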
ACM Transactions on Mathematical Software, 1997
A polynomial matrix is called column reduced if its column degrees are as low as possible in some sense. Two polynomial matrices P and R are called unimodularly equivalent if there exists a unimodular polynomial matrix U such that P U = R. Every polynomial matrix is unimodularly equivalent to a column reduced polynomial matrix. In this paper a subroutine is described that takes a polynomial matrix P as input and yields on output a unimodular matrix U and a column reduced matrix R such that P U = R; in practice P U − R is nearly zero. The subroutine is based on an algorithm described in a paper by Neven and Praagman. The subroutine has been tested with a number of examples on different computers, with comparable results. The performance of the subroutine on every example tried is satisfactory in the sense that the magnitude of the elements of the residual matrix P U − R is about ‖P‖ ‖U‖ EPS, where EPS is the machine precision. To obtain these results a tolerance, used to determine the rank of some (sub)matrices, has to be set properly. The influence of this tolerance on the performance of the algorithm is discussed, and a guideline for the usage of the subroutine is derived from it.
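For context, a small numpy sketch of the column-reducedness test that underlies such computations (hypothetical helper names, not the subroutine's interface): a polynomial matrix is column reduced exactly when its leading column coefficient matrix has full column rank.

```python
import numpy as np

def leading_column_matrix(R):
    """Leading column coefficient matrix of R(s), given as a 3-D array with
    R[i] the coefficient matrix of s^i."""
    R = np.asarray(R, dtype=float)
    deg, (m, n) = len(R) - 1, R[0].shape
    L = np.zeros((m, n))
    for j in range(n):
        # highest power of s appearing in column j
        col_deg = max((i for i in range(deg + 1) if np.any(R[i][:, j])), default=0)
        L[:, j] = R[col_deg][:, j]
    return L

def is_column_reduced(R, tol=1e-10):
    L = leading_column_matrix(R)
    return np.linalg.matrix_rank(L, tol) == L.shape[1]

# Hypothetical example: R(s) = [[1, s], [s, s^2 + 1]] has leading column
# matrix [[0, 0], [1, 1]], which is rank deficient, so R is not column reduced
# (although det R(s) = 1, so R is unimodularly equivalent to the identity).
R = np.array([[[1., 0.], [0., 1.]],     # s^0 coefficient
              [[0., 1.], [1., 0.]],     # s^1 coefficient
              [[0., 0.], [0., 1.]]])    # s^2 coefficient
print(is_column_reduced(R))             # False
```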
We propose a new algorithm for the computation of a minimal polynomial basis of the left kernel of a given polynomial matrix F(s). The proposed method exploits the structure of the left null space of generalized Wolovich or Sylvester resultants to compute row polynomial vectors that form a minimal polynomial basis of the left kernel of the given polynomial matrix. The entire procedure can be implemented using only orthogonal transformations of constant matrices and results in a minimal basis with orthonormal coefficients.
European Journal of Control, 1996
Recently an algorithm has been developed for column reduction of polynomial matrices. In a previous report the authors described a Fortran implementation of this algorithm. In this paper we compare the results of that implementation with an implementation of the algorithm originally developed by Wolovich. We conclude that the complexity of the Wolovich algorithm is lower, but in complicated cases
2004
The problem of determining a minimal polynomial basis of a rational vector space is the starting point of many control analysis, synthesis and design techniques. In this paper, we propose a new algorithm for the computation of a minimal polynomial basis of the left kernel of a given polynomial matrix F(s). The proposed method exploits the structure of the left null space of generalized Wolovich or Sylvester resultants in order to efficiently compute row polynomial vectors that form a minimal polynomial basis of the left kernel of the given polynomial matrix. One of the advantages of the algorithm is that it can be implemented using only orthogonal transformations of constant matrices, and the result is a minimal basis with orthonormal coefficients.
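Since the left kernel of F(s) is the right kernel of F(s)^T, a rough illustration can reuse a block Toeplitz resultant built from transposed coefficients; the SVD rows spanning its nullspace have unit norm, loosely mirroring the orthonormal-coefficients property mentioned in the abstract. This is a sketch under those assumptions, not the authors' resultant-based procedure, and left_kernel_rows is a hypothetical name.

```python
import numpy as np

def left_kernel_rows(F, d, tol=1e-10):
    """Polynomial rows y(s) of degree <= d with y(s) F(s) = 0, obtained from the
    SVD of a Sylvester-type block Toeplitz matrix built from the coefficient
    matrices of F(s)^T. F is a list [F_0, ..., F_k]; sketch only."""
    Ft = [Fi.T for Fi in F]
    k, (m, n) = len(Ft) - 1, Ft[0].shape
    T = np.zeros(((k + d + 1) * m, (d + 1) * n))
    for j in range(d + 1):
        for i, Fi in enumerate(Ft):
            T[(i + j) * m:(i + j + 1) * m, j * n:(j + 1) * n] = Fi
    U, sv, Vt = np.linalg.svd(T)
    rank = int(np.sum(sv > tol * sv[0]))
    # each remaining row of Vt has unit Euclidean norm (orthonormal coefficients)
    # and encodes y(s) = y_0 + y_1 s + ... + y_d s^d through its (d + 1) blocks
    return [v.reshape(d + 1, n) for v in Vt[rank:]]

# Hypothetical example: F(s) = [[1], [s]] has the left kernel row (-s, 1), up to scaling.
F = [np.array([[1.], [0.]]), np.array([[0.], [1.]])]
for y in left_kernel_rows(F, d=1):
    print(y)    # rows are the coefficient vectors y_0 and y_1
```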
Linear Algebra and its Applications, 1993
A few years ago Beelen developed an algorithm to determine a minimal basis for the kernel of a polynomial matrix (see ). In this paper we use a modified version of this algorithm to find a column reduced polynomial matrix, unimodularly equivalent to a given polynomial matrix.
International Journal of Applied Mathematics and Computer Science, 2005
The main contribution of this work is to provide two algorithms for the computation of the minimal polynomial of univariate polynomial matrices. The first algorithm is based on the solution of linear matrix equations while the second one employs DFT techniques. The whole theory is illustrated with examples.
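As a rough illustration of the first, linear-equations flavour (not the paper's algorithm; the 2 x 2 matrix is hypothetical): look for the first linear dependence among vec(A^0), vec(A^1), ... over the rational functions in s; the dependence coefficients are the coefficients of the minimal polynomial m(lam, s).

```python
import sympy as sp

s, lam = sp.symbols('s lam')
A = sp.Matrix([[s, 1], [0, s]])    # hypothetical example

# stack vec(A^0), vec(A^1), ... until they become linearly dependent over the
# rational functions in s; the dependence gives the minimal polynomial in lam
powers = [sp.eye(2)]
while True:
    M = sp.Matrix.hstack(*[P.reshape(4, 1) for P in powers])
    null = M.nullspace()
    if null:
        c = null[0]
        m = sum(c[i] * lam**i for i in range(len(powers)))
        break
    powers.append(powers[-1] * A)

m = sp.expand(sp.cancel(m / sp.Poly(m, lam).LC()))    # make m monic in lam
print(m)    # lam**2 - 2*lam*s + s**2, i.e. (lam - s)**2
```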
SIAM Journal on Matrix Analysis and Applications, 2015
The generalized null space decomposition (GNSD) is a unitary reduction of a general matrix A of order n to a block upper triangular form that reveals the structure of the Jordan blocks of A corresponding to a zero eigenvalue. The reduction was introduced by Kublanovskaya. It was extended first by Ruhe and then by Golub and Wilkinson, who based the reduction on the singular value decomposition. Unfortunately, if A has large Jordan blocks, the complexity of these algorithms can approach the order of n^4. This paper presents an alternative algorithm, based on repeated updates of a QR decomposition of A, that is guaranteed to be of order n^3. Numerical experiments confirm the stability of this algorithm, which turns out to produce essentially the same form as that of Golub and Wilkinson. The effect of errors in A on the ability to recover the original structure is investigated empirically. Several applications are discussed, including the computation of the Drazin inverse.
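A compact numpy sketch of the structure the decomposition reveals, using SVD-based nullspace deflation in the Golub-Wilkinson spirit (so the O(n^4)-worst-case variant discussed above, not the paper's QR-updating algorithm; zero_weyr_structure is a hypothetical name): the sizes of the zero diagonal blocks form the Weyr characteristic of the eigenvalue 0, whose conjugate partition gives the sizes of the Jordan blocks at 0.

```python
import numpy as np

def zero_weyr_structure(A, tol=1e-10):
    """Sizes of the zero diagonal blocks in a GNSD-like staircase form of A,
    computed by repeatedly rotating the nullspace to the leading coordinates
    and deflating to the trailing block. Sketch only."""
    A = np.array(A, dtype=float)
    blocks = []
    while A.size:
        U, sv, Vt = np.linalg.svd(A)
        k = int(np.sum(sv < tol * max(1.0, sv[0])))
        if k == 0:
            break
        Q = Vt.T[:, ::-1]     # put the (numerical) nullspace vectors first
        A = Q.T @ A @ Q       # first k columns of the rotated matrix are ~0
        blocks.append(k)
        A = A[k:, k:]         # deflate and continue with the trailing block
    return blocks

# Hypothetical example: the direct sum of J_2(0), J_1(0) and the scalar 1 has
# Weyr characteristic (2, 1) at 0, i.e. Jordan blocks of sizes 2 and 1.
A = np.diag([0., 0., 0., 1.])
A[0, 1] = 1.0
print(zero_weyr_structure(A))    # [2, 1]
```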
The Fourth International Workshop on Multidimensional Systems, 2005. NDS 2005., 2005
The main contribution of this work is to provide an algorithm for the computation of the minimal polynomial of a two-variable polynomial matrix, based on the solution of linear matrix equations. The whole theory is illustrated via an example.
Asian Journal of Control, 2008
Japan Journal of Industrial and Applied Mathematics, 2003
International Journal of Control, 2000
Journal of Computational and Applied Mathematics, 1991
SIAM Journal on Matrix Analysis and Applications, 2007
Journal of Symbolic Computation, 2012
Proceedings of the 1995 international symposium on Symbolic and algebraic computation - ISSAC '95, 1995
Conference Record of The Thirtieth Asilomar Conference on Signals, Systems and Computers
Numerical Algorithms, 2006
Applied Mathematics and Computation, 2006
WIT Transactions on Information and Communication Technologies, 1970
Journal of Symbolic Computation, 1997
Lecture Notes in Computer Science, 2012
29th IEEE Conference on Decision and Control, 1990