Theoretical Computer Science, 1997
The complexity of performing matrix computations, such as solving a linear system, inverting a nonsingular matrix or computing its rank, has received a lot of attention from both the theory and the scientific computing communities. In this paper we address some "nonclassical" matrix problems that find extensive applications, notably in control theory. More precisely, we study the matrix equations AX + XA^T = C and AX - XB = C, the "inverse" of the eigenvalue problem (called pole assignment), and the problem of testing whether the matrix [B AB ... A^{n-1}B] has full rank (the controllability problem).
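For concreteness, here is a minimal sequential sketch of the three problems this abstract names, using stock SciPy/NumPy routines on invented random matrices; it is a toy illustration, not the paper's parallel algorithms.

```python
# Hedged sketch: dense (sequential) versions of the three problems named in
# the abstract, using standard SciPy/NumPy routines on small made-up matrices.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((n, n))

# Lyapunov equation A X + X A^T = C.
X_lyap = solve_continuous_lyapunov(A, C)

# Sylvester equation A X - X B2 = C, written as A X + X (-B2) = C.
B2 = rng.standard_normal((n, n))
X_sylv = solve_sylvester(A, -B2, C)

# Controllability test: [B, AB, ..., A^{n-1} B] must have full row rank n.
blocks = [B]
for _ in range(n - 1):
    blocks.append(A @ blocks[-1])
ctrb = np.hstack(blocks)
print("controllable:", np.linalg.matrix_rank(ctrb) == n)
```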
Remarkable progress has been made in both the theory and the applications of all important areas of control. On the other hand, progress in the computational aspects of control theory, especially in the area of large-scale and parallel computations, has been painfully slow. In this paper we address some central problems arising in control theory, namely the controllability and eigenvalue assignment problems, and the solution of the Lyapunov and Sylvester observer matrix equations. For all these problems we give parallel algorithms that run in almost linear time on a Parallel Random Access Machine model. The algorithms make efficient use of the processors and are scalable, which makes them of practical worth also in the case of limited parallelism. This paper is in part based on an invited talk delivered at the 1992 American Control Conference in Chicago, IL.
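A minimal sketch of the eigenvalue assignment problem mentioned above, using SciPy's standard pole-placement routine on a made-up double-integrator system; the paper's parallel algorithm is not reproduced here.

```python
# Hedged sketch: eigenvalue assignment for x' = Ax + Bu via state feedback
# u = -Kx, using SciPy's stock pole-placement routine on toy data.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # double integrator (made-up example)
B = np.array([[0.0],
              [1.0]])
target = [-2.0, -3.0]               # desired closed-loop eigenvalues

K = place_poles(A, B, target).gain_matrix
closed_loop = np.linalg.eigvals(A - B @ K)
print(np.sort(closed_loop.real))    # approximately [-3, -2]
```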
Information Processing Letters, 1980
Parallel Computing, 2003
This issue of the journal contains seven articles selected from invited and contributed presentations made at the Workshop on Parallel Matrix Algorithms and Applications (PMAA'02), which took place in Neuchâtel, Switzerland, on 9-10 November 2002. The workshop was organized by Erricos J. Kontoghiorghes and his group at the University of Neuchâtel and was attended by more than 100 participants from all over Europe, Israel, Korea, Japan, and the United States. PMAA'02 is the second PMAA workshop; the previous one was held in August 2000.
Journal of Computer and System Sciences, 2001
This paper gives output-sensitive parallel algorithms whose performance depends on the output size and that are significantly more efficient than previous algorithms for problems with sufficiently small output size. Inputs are n × n matrices over a fixed ground field. Let P(n) and M(n) be the PRAM processor bounds for O(log n)-time multiplication of two degree-n polynomials and of two n × n matrices, respectively, and let T(n) be the time bound, using M(n) processors, for testing whether an n × n matrix is nonsingular and, if so, computing its inverse. We compute the rank R of a matrix in randomized parallel time O(log n + T(R) log R) using nP(n) + M(R) processors (P(n) + RP(R) processors for constant displacement rank matrices, e.g., Toeplitz matrices). We find a maximum linearly independent subset (MLIS) of an n-set of n-dimensional vectors in time O(T(n) log n) using M(n) randomized processors, and we also give output-sensitive algorithms for this problem. Applications include output-sensitive algorithms for finding: (i) a size-R maximum matching in an n-vertex graph using time O(T(R) log n) and nP(n)/T(R) + RM(R) processors, and (ii) a maximum matching in an n-vertex bipartite graph with vertex subsets of sizes n₁ ≤ n₂, using time O(T(n₁) log n) and nP(n)/T(n₁) + n₁M(n₁) processors.
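As a sequential stand-in for the MLIS primitive above, the sketch below uses column-pivoted QR from SciPy; the tolerance and test matrix are invented, and this is not the paper's randomized PRAM algorithm.

```python
# Hedged sketch: find a maximum linearly independent subset of column
# vectors with column-pivoted QR, a serial stand-in for the paper's
# randomized parallel MLIS algorithm.
import numpy as np
from scipy.linalg import qr

def mlis(vectors, tol=1e-10):
    """Return indices of a maximal linearly independent subset of columns."""
    _, R, piv = qr(vectors, pivoting=True)
    diag = np.abs(np.diag(R))
    rank = int(np.sum(diag > tol * diag[0])) if diag.size else 0
    return sorted(piv[:rank])

V = np.array([[1.0, 2.0, 3.0, 1.0],
              [0.0, 0.0, 0.0, 1.0],
              [2.0, 4.0, 6.0, 0.0]])   # columns 0, 1, 2 are all parallel
print(mlis(V))                          # a maximal independent pair: [2, 3]
```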
Acta Numerica, 1993
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, the singular value decomposition, and generalizations of these to two matrices. We consider dense, band and sparse matrices.
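The survey's running example, matrix multiplication, is illustrated below in its blocked form, the serial skeleton behind most parallel dense implementations; the block size and matrices are arbitrary, and a real parallel code would distribute the (i, j) tiles across processors instead of looping.

```python
# Hedged sketch: blocked matrix multiplication, the serial skeleton of the
# data layouts used by parallel dense linear algebra libraries.
import numpy as np

def blocked_matmul(A, B, bs=2):
    n = A.shape[0]                      # assume square, n divisible by bs
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for j in range(0, n, bs):
            for k in range(0, n, bs):
                # each block update is an independent dense kernel; a
                # parallel code assigns these (i, j) tiles to processors
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4)
assert np.allclose(blocked_matmul(A, B), A @ B)
```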
Parallel Computing, 2006
Scientific computation, 2016
29th IEEE Conference on Decision and Control, 1990
Matrices of polynomials over rings and fields provide a unifying framework for many control system design problems. These include dynamic compensator design, infinite-dimensional systems, controllers for nonlinear systems, and even controllers for discrete event systems. An important obstacle to utilizing these powerful mathematical tools in practical applications has been the non-availability of accurate and efficient algorithms to carry through the precise, error-free computations required by these algebraic methods. In this paper we develop highly efficient, error-free algorithms for most of the important computations needed in linear systems over fields or rings. We show that the structure of the underlying rings and modules is critical in designing such algorithms.
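To illustrate error-free computation over a polynomial ring, the sketch below performs exact determinant, rank, and inverse computations on a small matrix over Q[s] with SymPy's symbolic linear algebra, a generic stand-in for the specialized algorithms the paper develops.

```python
# Hedged sketch: exact (error-free) computations on a matrix over the
# polynomial ring Q[s], using SymPy as a generic stand-in for the paper's
# specialized algorithms.
import sympy as sp

s = sp.symbols('s')
M = sp.Matrix([[s + 1, s**2],
               [1,     s - 1]])

print(M.det().expand())   # exact determinant: -1, so M is unimodular
print(M.rank())           # exact rank over the fraction field Q(s)
print(M.inv())            # exact inverse, here with polynomial entries
```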
1995
A parallel algorithm for the eigenvalue assignment problem (EAP) in single-input linear systems is presented. The algorithm is based on the solution of the observer matrix equation, and it has a time complexity of O(n³/p), where n is the order of the system and p is the number of processors. The algorithm has been implemented on the Intel iPSC/860 hypercube parallel computer with 8 processors, and the expected speedup has been obtained.
Differential-Algebraic Equations: A Projector Based Analysis, 2013
Computers & Mathematics with Applications, 1989
Computers & Mathematics with Applications, 1990
We review some of the most important results in the area of fast parallel algorithms for the solution of linear systems and related problems, such as matrix inversion, computation of the determinant and of the adjoint matrix. We analyze both direct and iterative methods implemented in various models of parallel computation.
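Among the iterative methods such surveys cover, Jacobi iteration is the classic parallel example, since all component updates in a sweep are independent; the sketch below is a serial toy on an invented diagonally dominant system.

```python
# Hedged sketch: Jacobi iteration, the textbook "embarrassingly parallel"
# iterative linear solver: all n component updates are independent.
import numpy as np

def jacobi(A, b, iters=200):
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D     # one fully data-parallel sweep
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])  # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0, 3.0])
assert np.allclose(jacobi(A, b), np.linalg.solve(A, b))
```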
Journal of Computer Science, 2007
PRAM algorithms for symmetric Gaussian elimination are presented. We show the actual testing operations that are performed during symmetric Gaussian elimination and that cause symbolic factorization to occur for sparse linear systems. The array pattern of processing elements (PEs) in row-major order for the specialized sparse matrix is formulated. We show that the access function in² + jn + k has topological properties, and we prove that the cost of storage and the cost of retrieval of a matrix are proportional to each other in polylogarithmic parallel time on a PRAM with a polynomial number of processors. We use symbolic factorization to produce a data structure that is used to exploit the sparsity of the triangular factors. In these parallel algorithms, O(log³ n) multiplications/divisions, O(log³ n) additions/subtractions, and O(log² n) storage may be achieved.
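Assuming the access function above denotes the standard row-major addressing of an n × n × n array, i·n² + j·n + k, a minimal sketch of that mapping is:

```python
# Hedged sketch: row-major access function for an n x n x n array,
# addr(i, j, k) = i*n**2 + j*n + k, assumed to be the mapping the
# abstract denotes by "in^2 + jn + k".
import numpy as np

n = 4
flat = np.arange(n**3)          # linear storage
cube = flat.reshape(n, n, n)    # NumPy's default (C) order is row-major

def addr(i, j, k):
    return i * n**2 + j * n + k

i, j, k = 2, 1, 3
assert cube[i, j, k] == flat[addr(i, j, k)]
print(addr(i, j, k))            # 2*16 + 1*4 + 3 = 39
```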
Parallel Computing, 2018
Parallel Computing, 2001
This paper studies the parallelization of a recursive algorithm for triangular matrix inversion (TMI), using the "divide and conquer" paradigm. For a (large-scale) matrix of size n = m·2^k (m, k ≥ 1) and p = 2^q ≤ n/2 available processors, we first construct an adequate two-phase task segmentation inducing a balanced layered task graph. Then, we design a greedy scheduling leading to a cost-optimal parallel algorithm, i.e., one whose efficiency is equal to 1 for large n. The practical interest of the contribution is demonstrated through an experimental study of two versions of the original algorithm on an IBM SP1 distributed-memory multiprocessor. For triangular systems of size n, a recursive algorithm based on successive partitionings of the original matrix was first proposed by Heller in 1973. Although this algorithm (in its original form) globally reduces to a series of matrix multiplications of increasing size, its complexity is the same as that of the standard one, i.e., n³/3 + O(n²).
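The recursion behind TMI is the 2 × 2 block inverse of a triangular matrix; the NumPy sketch below shows only this serial recursion (the paper's contribution is the task segmentation and scheduling, which are not reproduced), assuming n is a power of two.

```python
# Hedged sketch: recursive triangular matrix inversion via the block
# identity  inv([[T11, 0], [T21, T22]]) = [[inv(T11), 0],
#                                          [-inv(T22) T21 inv(T11), inv(T22)]].
# Serial recursion only; n is assumed to be a power of two.
import numpy as np

def tmi(T):
    n = T.shape[0]
    if n == 1:
        return np.array([[1.0 / T[0, 0]]])
    h = n // 2
    inv11 = tmi(T[:h, :h])
    inv22 = tmi(T[h:, h:])
    out = np.zeros_like(T)
    out[:h, :h] = inv11
    out[h:, h:] = inv22
    out[h:, :h] = -inv22 @ T[h:, :h] @ inv11   # the two matrix products
    return out                                  # dominate the cost

L = np.tril(np.random.default_rng(1).standard_normal((8, 8))) + 4 * np.eye(8)
assert np.allclose(tmi(L), np.linalg.inv(L))
```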
Linear Algebra and its Applications, 1990
Here we offer a new randomized parallel algorithm that determines the Smith normal form of a matrix with entries being univariate polynomials with coefficients in an arbitrary field. The algorithm has two important advantages over our previous one: the multipliers relating the Smith form to the input matrix are computed, and the algorithm is probabilistic of Las Vegas type, i.e., always finds the correct answer. The Smith form algorithm is also a good sequential algorithm. Our algorithm reduces the problem of Smith form computation to two Hermite form computations. Thus the Smith form problem has complexity asymptotically that of the Hermite form problem. We also construct fast parallel algorithms for Jordan normal form and testing similarity of matrices. Both the similarity and non-similarity problems are in the complexity class RNC for the usual coefficient fields, i.e., they can be probabilistically decided in polylogarithmic time using polynomially many processors.
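For a flavor of the Smith normal form, here is a sketch using SymPy's generic sequential routine; the paper works over univariate polynomial rings, but for simplicity this toy uses the integers, another principal ideal domain with the same notion of Smith form.

```python
# Hedged sketch: Smith normal form over a principal ideal domain. The
# paper works over k[x] (univariate polynomials); this toy uses the
# integers instead, a PID with the same notion of Smith form.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

m = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, 4, 16]])
print(smith_normal_form(m, domain=ZZ))
# diagonal of invariant factors d1 | d2 | d3, each dividing the next
```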
2005
In this chapter we discuss numerical software for linear algebra problems on parallel computers. We focus on some of the most common numerical operations: linear system solving and eigenvalue computations. Numerical operations such as linear system solving and eigenvalue calculations can be applied to two different kinds of matrix: dense and sparse. In dense systems, essentially every matrix element is nonzero. In sparse systems, a sufficiently large number of matrix elements is zero that a specialized storage scheme is warranted; for an introduction to sparse storage, see [3]. Because the two classes are so different, different numerical software usually applies to each. We discuss ScaLAPACK and PLAPACK as the choices for dense linear system solving (see Section 13.1). For solving sparse linear systems, there exist two classes of algorithms: direct methods and iterative methods. We discuss SuperLU as an example of a direct solver (see Section 13.2.1) and PETSc as an example of an iterative solver.
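The dense/sparse split can be illustrated with serial stand-ins: NumPy's dense LAPACK solve versus SciPy's sparse direct solver, which wraps the SuperLU library the chapter discusses; the tridiagonal system is invented, and the parallel packages themselves are not exercised.

```python
# Hedged sketch: the dense/sparse distinction on a toy tridiagonal system.
# scipy.sparse.linalg.spsolve is a serial wrapper around SuperLU, the
# direct sparse solver the chapter discusses; the parallel packages
# (ScaLAPACK, PLAPACK, PETSc) are not used here.
import numpy as np
import scipy.sparse as sps
from scipy.sparse.linalg import spsolve

n = 1000
main, off = 2.0 * np.ones(n), -1.0 * np.ones(n - 1)
A_sparse = sps.diags([off, main, off], [-1, 0, 1], format="csr")
b = np.ones(n)

x_sparse = spsolve(A_sparse, b)                    # sparse direct (SuperLU)
x_dense = np.linalg.solve(A_sparse.toarray(), b)   # dense LAPACK solve
assert np.allclose(x_sparse, x_dense)
```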