We introduce meta-factorization, a theory that describes matrix decompositions as solutions of linear matrix equations: the projector and the reconstruction equation. Meta-factorization reconstructs known factorizations, reveals their internal structures, and allows for introducing modifications, as illustrated with SVD, QR, and UTV factorizations. The prospect of meta-factorization also provides insights into computational aspects of generalized matrix inverses and randomized linear algebra algorithms. The relations between the Moore-Penrose pseudoinverse, generalized Nyström method, and the CUR decomposition are revealed here as an illustration. Finally, meta-factorization offers hints on the structure of new factorizations and provides the potential of creating them.
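As a concrete point of contact with the abstract above, the generalized Nyström approximation it mentions can be written as a reconstruction formula A ≈ AΩ (Ψ^T A Ω)^+ Ψ^T A built from two random sketches. The NumPy sketch below is our own illustration: the sketch sizes k and p and the Gaussian test matrices Omega and Psi are assumptions, not the paper's notation.

```python
import numpy as np

def generalized_nystrom(A, k, p=10, rng=None):
    """Rank-k approximation A ~ (A @ Omega) @ pinv(Psi.T @ A @ Omega) @ (Psi.T @ A).

    A sketch of the generalized Nystrom method referenced in the abstract;
    the oversampling parameter p and Gaussian sketches are illustrative choices.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k))       # right sketch
    Psi = rng.standard_normal((m, k + p))     # left sketch (oversampled)
    AX = A @ Omega                            # m x k
    YA = Psi.T @ A                            # (k+p) x n
    core = np.linalg.pinv(Psi.T @ AX)         # k x (k+p)
    return AX @ (core @ YA)

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 100))  # exactly rank 50
Z = generalized_nystrom(A, k=50, rng=1)
assert np.linalg.norm(A - Z) / np.linalg.norm(A) < 1e-8  # exact on rank-k input
```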
Applied and Computational Harmonic Analysis, 2011
Given an m × n matrix A and a positive integer k, we describe a randomized procedure for the approximation of A with a matrix Z of rank k. The procedure relies on applying A^T to a collection of l random vectors, where l is an integer equal to or slightly greater than k; the scheme is efficient whenever A and A^T can be applied rapidly to arbitrary vectors. The discrepancy between A and Z is of the same order as √(lm) times the (k+1)st greatest singular value σ_{k+1} of A, with negligible probability of even moderately large deviations.
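A minimal NumPy sketch of a scheme of this kind (our reconstruction from the abstract; the Gaussian test matrix and the final SVD truncation are standard choices, not necessarily the authors' exact procedure):

```python
import numpy as np

def randomized_rank_k(A, k, l=None, rng=None):
    """Rank-k approximation built from l applications of A.T to random vectors.

    Hedged sketch: orthonormalize A.T @ Omega to get a basis Q of a subspace
    of R(A^T), project A onto it, and truncate the small factor's SVD.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    l = l if l is not None else k + 2       # "equal to or slightly greater than k"
    Omega = rng.standard_normal((m, l))
    Q, _ = np.linalg.qr(A.T @ Omega)        # n x l, orthonormal columns
    B = A @ Q                               # m x l, so A Q Q^T = B Q^T
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    # keep the top k singular triplets: Z = U_k diag(s_k) (Q V_k)^T, rank k
    return (U[:, :k] * s[:k]) @ (Q @ Vt[:k].T).T
```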
Mathematics of Computation, 1974
In recent years, several algorithms have appeared for modifying the factors of a matrix following a rank-one change. These methods have always been given in the context of specific applications and this has probably inhibited their use over a wider field. In this report, several methods are described for modifying Cholesky factors. Some of these have been published previously while others appear for the first time. In addition, a new algorithm is presented for modifying the complete orthogonal factorization of a general matrix, from which the conventional QR factors are obtained as a special case. A uniform notation has been used and emphasis has been placed on illustrating the similarity between different methods.
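For concreteness, the following is one textbook way to update a Cholesky factor after the rank-one change A + x x^T, using a sequence of scalar plane rotations; it is a standard variant shown for illustration, not necessarily any specific method from the report.

```python
import numpy as np

def cholupdate(L, x):
    """Given lower-triangular L with A = L @ L.T, return the Cholesky
    factor of A + outer(x, x). Classic O(n^2) rank-one update; a textbook
    sketch, not a particular algorithm from the report."""
    L, x = L.copy(), x.astype(float)
    n = x.size
    for k in range(n):
        r = np.hypot(L[k, k], x[k])          # rotated diagonal entry
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k+1:, k] = (L[k+1:, k] + s * x[k+1:]) / c
            x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5)); A = M @ M.T + 5 * np.eye(5)
x = rng.standard_normal(5)
L1 = cholupdate(np.linalg.cholesky(A), x)
assert np.allclose(L1 @ L1.T, A + np.outer(x, x))
```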
Every teacher of linear algebra should be familiar with the matrix singular value decomposition (or SVD). It has interesting and attractive algebraic properties, and conveys important geometrical and theoretical insights about linear transformations. The close connection between the SVD and the well-known theory of diagonalization for symmetric matrices makes the topic immediately accessible to linear algebra teachers and, indeed, a natural extension of what these teachers already know. At the same time, the SVD has fundamental importance in several different applications of linear algebra.
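The connection invoked here is that if A = U Σ V^T, then A^T A = V Σ² V^T: the right singular vectors diagonalize the symmetric matrix A^T A, and its eigenvalues are the squared singular values. A quick NumPy check (our illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
s = np.linalg.svd(A, compute_uv=False)
w = np.linalg.eigvalsh(A.T @ A)                 # eigenvalues of symmetric A^T A
assert np.allclose(np.sort(s**2), np.sort(w))   # squared singular values = eigenvalues
```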
2011
Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition (Statistics for Social and Behavioral Sciences) Haruo Yanai, Kei Takeuchi, Yoshio Takane. Aside from distribution theory, projections and the singular value decomposition (SVD) are the two most important concepts for understanding the basic mechanism of multivariate analysis. The former underlies least squares estimation in regression analysis, which is essentially a projection of one subspace onto another, and the latter underlies principal component analysis, which seeks to find a subspace that captures the largest variability in the original space. This book is about projections and the SVD. A thorough discussion of generalized inverse (g-inverse) matrices is also given because they are closely related to the former. The book provides systematic and in-depth accounts of these concepts from a unified viewpoint of linear transformations of finite-dimensional vector spaces. More specifically, it shows that projection matrices (projectors) and g-inverse matrices can be defined in various ways so that a vector space is decomposed into a direct sum of (disjoint) subspaces. Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition will be useful for researchers, practitioners, and students in applied mathematics, statistics, engineering, behaviormetrics, and other fields.
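As a small illustration of the interplay described here (our example, not the book's): A A^+ is the orthogonal projector onto the column space of A, and least squares fitting is exactly projection of the data vector onto that subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
b = rng.standard_normal(8)
P = A @ np.linalg.pinv(A)            # orthogonal projector onto R(A)
assert np.allclose(P @ P, P)         # idempotent
assert np.allclose(P, P.T)           # symmetric, hence an orthogonal projector
x, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x, P @ b)     # least squares fit = projection of b
```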
SIAM Journal on Applied Mathematics, 1976
The singular value decomposition of a matrix is used to derive systematically the Moore-Penrose inverse for a matrix bordered by a row and a column, in addition to the Moore-Penrose inverse for the associated principal Schur complements.
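The starting point of such derivations is the SVD representation of the Moore-Penrose inverse, A^+ = V Σ^+ U^T, where Σ^+ inverts the nonzero singular values. In NumPy terms (our illustration of the basic construction, not the paper's bordered-matrix derivation):

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Moore-Penrose inverse from the SVD: A^+ = V diag(1/sigma_i) U^T,
    inverting only singular values above a relative tolerance."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return (Vt.T * s_inv) @ U.T

A = np.random.default_rng(2).standard_normal((5, 3))
assert np.allclose(pinv_via_svd(A), np.linalg.pinv(A))
```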
Applied Mathematics and Computation, 2012
An efficient algorithm for computing A^{(2)}_{T,S} inverses of a given constant matrix A, based on the QR decomposition of an appropriate matrix W, is presented. Correlations between the derived representation of outer inverses and the corresponding general representation based on an arbitrary full-rank factorization are derived. In particular cases we derive representations of {2,4}- and {2,3}-inverses. Numerical examples on different test matrices (dense or sparse) are presented, as well as a comparison with several well-known methods for computing the Moore-Penrose inverse and the Drazin inverse.
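The general representation alluded to above: if W = FG is a full-rank factorization and GAF is invertible, then X = F (GAF)^{-1} G is an outer inverse of A (XAX = X) with range R(W) and null space N(W). A sketch using SciPy's column-pivoted QR to produce the factorization (variable names and tolerances are our own; the paper's W-specific algorithm differs in its details):

```python
import numpy as np
from scipy.linalg import qr

def outer_inverse(A, W, tol=1e-12):
    """Outer inverse X = F (G A F)^{-1} G from a full-rank factorization
    W = F G obtained via column-pivoted QR. A sketch of the general
    representation, not the paper's specific QR-based algorithm."""
    Q, R, piv = qr(W, pivoting=True)
    r = np.sum(np.abs(np.diag(R)) > tol * np.abs(R[0, 0]))  # numerical rank
    F = Q[:, :r]                            # full column rank
    G = R[:r][:, np.argsort(piv)]           # full row rank, so W = F @ G
    return F @ np.linalg.solve(G @ A @ F, G)

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
W = rng.standard_normal((4, 6))
X = outer_inverse(A, W)
assert np.allclose(X @ A @ X, X)            # X is a {2}-inverse of A
```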
SIAM Review, 2000
Matrix factorization in numerical linear algebra (NLA) typically serves the purpose of restating some given problem in such a way that it can be solved more readily; for example, one major application is in the solution of a linear system of equations. In contrast, within applied statistics/psychometrics (AS/P), a much more common use for matrix factorization is in presenting, possibly spatially, the structure that may be inherent in a given data matrix obtained on a collection of objects observed over a set of variables. The actual components of a factorization are now of prime importance and not just as a mechanism for solving another problem. We review some connections between NLA and AS/P and their respective concerns with matrix factorization and the subsequent rank reduction of a matrix. We note in particular that several results available for many decades in AS/P were more recently (re)discovered in the NLA literature. Two other distinctions between NLA and AS/P are also discussed briefly: how a generalized singular value decomposition might be defined, and the differing uses for the (newer) methods of optimization based on cyclic or iterative projections.
Journal of Computational and Applied Mathematics, 2009
The Moore-Penrose inverse of an arbitrary matrix (including singular and rectangular) has many applications in statistics, prediction theory, control system analysis, curve fitting and numerical analysis. In this paper, an algorithm based on the conjugate Gram-Schmidt process and the Moore-Penrose inverse of partitioned matrices is proposed for computing the pseudoinverse of an m × n real matrix A with m ≥ n and rank r ≤ n. Numerical experiments show that the resulting pseudoinverse matrix is reasonably accurate and its computation time is significantly less than that of pseudoinverses obtained by the other methods for large sparse matrices.
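The algorithm above builds on the Moore-Penrose inverse of partitioned matrices; the classical reference point for that idea is Greville's column-by-column recursion, sketched below for comparison (this is the classical scheme, not the paper's conjugate Gram-Schmidt algorithm):

```python
import numpy as np

def greville_pinv(A):
    """Moore-Penrose inverse by Greville's column recursion: process A one
    column at a time, updating the pseudoinverse of the growing block
    [A_{k-1}, a_k]. Classical scheme, shown for comparison only."""
    m, n = A.shape
    a = A[:, :1]
    aa = (a.T @ a).item()
    Ap = a.T / aa if aa > 0 else np.zeros((1, m))   # pinv of the first column
    for k in range(1, n):
        a = A[:, k:k+1]
        d = Ap @ a                      # coefficients of a_k in current columns
        c = a - A[:, :k] @ d            # component of a_k outside current range
        if np.linalg.norm(c) > 1e-12:
            b = c.T / (c.T @ c).item()
        else:
            b = (d.T @ Ap) / (1.0 + (d.T @ d).item())
        Ap = np.vstack([Ap - d @ b, b])  # pseudoinverse of [A_{k-1}, a_k]
    return Ap

A = np.random.default_rng(4).standard_normal((7, 4))
assert np.allclose(greville_pinv(A), np.linalg.pinv(A))
```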
In this paper we present a recursive algorithm for the factorization and inversion of matrices generated by adding dyads (elementary, rank-one matrices), as happens in recursive array signal processing. The algorithm is valid for any recursively generated rectangular matrix and has two parts: one for rank-deficient matrices and the other for full-rank matrices. The second part is used to obtain a generalization of the Sherman-Morrison algorithm for the recursive inversion of the covariance matrix. From the proposed algorithm we derive two others: one to compute the inverse (pseudoinverse) of any matrix and the other to invert two matrices simultaneously.
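The identity being generalized is the classical Sherman-Morrison formula, (A + uv^T)^{-1} = A^{-1} − A^{-1}u v^T A^{-1} / (1 + v^T A^{-1} u), which updates an inverse after a single dyad is added. A minimal sketch of the full-rank branch (our illustration; the paper's algorithm also handles the rank-deficient case):

```python
import numpy as np

def sherman_morrison_update(Ainv, u, v):
    """Update A^{-1} after the rank-one (dyadic) change A + outer(u, v).
    Valid only when 1 + v^T A^{-1} u != 0 (the full-rank branch)."""
    Au = Ainv @ u
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)
u, v = rng.standard_normal(4), rng.standard_normal(4)
assert np.allclose(sherman_morrison_update(np.linalg.inv(A), u, v),
                   np.linalg.inv(A + np.outer(u, v)))
```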
BIT, 1979
Iterative methods are developed for computing the Moore-Penrose pseudoinverse solution of a linear system Ax = b, where A is an m × n sparse matrix. The methods do not require the explicit formation of A^T A or AA^T and are therefore advantageous when these matrices are much less sparse than A itself. The methods are based on solving the two related systems (i) x = A^T y, AA^T y = b, and (ii) A^T A x = A^T b. First it is shown how the SOR- and SSOR-methods for these two systems can be implemented efficiently. Further, the acceleration of the SSOR-method by Chebyshev semi-iteration and the conjugate gradient method is discussed. In particular it is shown that the SSOR-CG method for (i) and (ii) can be implemented in such a way that each step requires only two sweeps through successive rows and columns of A respectively. In the general rank-deficient and inconsistent case it is shown how the pseudoinverse solution can be computed by a two-step procedure. Some possible applications are mentioned and numerical results are given for some problems from picture reconstruction.

1. Introduction. Let A be a given m × n sparse matrix, b a given m-vector, and x = A^+ b the Moore-Penrose pseudoinverse solution of the linear system of equations (1.1) Ax = b. We denote the range and nullspace of a matrix A by R(A) and N(A) respectively. Convenient characterizations of the pseudoinverse solution are given in the following two lemmas.

LEMMA 1.1. x = A^+ b is the unique solution of the problem: minimize ‖x‖_2 when x ∈ {x : ‖b − Ax‖_2 = minimum}.

LEMMA 1.2. x = A^+ b is the unique vector which satisfies x ∈ R(A^T) and (b − Ax) ⊥ R(A), or equivalently x ⊥ N(A) and (b − Ax) ∈ N(A^T).
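The paper's methods are SSOR-based with Chebyshev and conjugate gradient acceleration; as a minimal illustration of the central constraint (accessing A only through products with A and A^T, never forming A^T A explicitly), here is plain CGLS, i.e., CG applied to system (ii). Started from x = 0 the iterates stay in R(A^T), so in exact arithmetic the limit is the pseudoinverse solution x = A^+ b (cf. Lemma 1.2). This substitute scheme is our illustration, not the paper's algorithm.

```python
import numpy as np

def cgls(A, b, iters=100):
    """CG on the normal equations A^T A x = A^T b, using only products
    with A and A.T. From x0 = 0 the iterates remain in R(A^T), so the
    limit is A^+ b. Illustrative sketch only."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()        # residual b - A x
    s = A.T @ r                       # gradient direction
    p = s.copy()
    gamma = s @ s
    tol = 1e-12 * np.sqrt(gamma)      # relative tolerance on the gradient norm
    for _ in range(iters):
        if np.sqrt(gamma) <= tol:
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

A = np.random.default_rng(6).standard_normal((30, 10))
b = np.random.default_rng(7).standard_normal(30)
assert np.allclose(cgls(A, b), np.linalg.pinv(A) @ b)
```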