2005
The rigidity function of a matrix is defined as the minimum number of its entries that need to be changed in order to reduce the rank of the matrix to below a given parameter. Proving a strong enough lower bound on the rigidity of a matrix implies a nontrivial lower bound on the complexity of any linear circuit computing the set of linear forms associated with it. However, although most matrices can be shown to be sufficiently rigid, no explicit construction of a rigid family of matrices is known. In this survey report we review the concept of rigidity and some of its interesting variations, as well as several notable results related to it. We also show the existence of highly rigid matrices constructed by evaluating bivariate polynomials over finite fields.
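The definition above can be made concrete by brute force for tiny matrices. A minimal sketch (not from the survey, and only practical for very small instances): over GF(2) each entry has a single alternative value, so we can enumerate all subsets of entries to flip and check whether the rank drops to the target.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2) of a matrix given as a list of row bitmasks."""
    rank = 0
    rows = list(rows)
    while rows:
        r = rows.pop()
        if r == 0:
            continue
        rank += 1
        pivot = r & -r  # lowest set bit of the pivot row
        # eliminate that bit from every remaining row
        rows = [x ^ r if x & pivot else x for x in rows]
    return rank

def rigidity_gf2(matrix, target_rank):
    """Minimum number of entries of a 0/1 matrix that must be flipped
    so that its GF(2) rank drops to at most target_rank (brute force)."""
    n, m = len(matrix), len(matrix[0])
    cells = [(i, j) for i in range(n) for j in range(m)]
    for k in range(n * m + 1):
        for subset in combinations(cells, k):
            rows = [sum(matrix[i][j] << j for j in range(m)) for i in range(n)]
            for i, j in subset:
                rows[i] ^= 1 << j  # flip the chosen entry
            if gf2_rank(rows) <= target_rank:
                return k
    return n * m
```

For example, the 3×3 identity has rigidity 2 for target rank 1: flipping any single entry leaves the rank at least 2, while zeroing two diagonal entries suffices.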
Computational Complexity, 2013
The rigidity of a matrix A for target rank r is the minimum number of entries of A that must be changed to ensure that the rank of the altered matrix is at most r. Since its introduction by Valiant [Val77], rigidity and similar rank-robustness functions of matrices have found numerous applications in circuit complexity, communication complexity, and learning complexity. Almost all n × n matrices over an infinite field have a rigidity of (n − r)^2. It is a long-standing open question to construct infinite families of explicit matrices even with superlinear rigidity when r = Ω(n).
Linear Algebra and its Applications, 2000
The rigidity of a matrix M is the function R_M(r), which, for a given r, gives the minimum number of entries of M which one has to change in order to reduce its rank to at most r. This notion was introduced by Valiant in 1977 in connection with the complexity of computing linear forms. Despite more than 20 years of research, very little is known about the rigidity of matrices. Nonlinear lower bounds on matrix rigidity would lead to new lower bound techniques for the computation of linear forms, e.g., for the computation of the DFT, as well as to more general advances in complexity theory. We put forward a number of linear algebra research issues arising in the above outlined context.
Theoretical Computer Science, 2000
We consider the problem of the presence of short cycles in the graphs of nonzero elements of matrices which have sublinear rank and nonzero entries on the main diagonal, and analyze the connection between these properties and the rigidity of matrices. In particular, we exhibit a family of matrices which shows that sublinear rank does not imply the existence of triangles. This family can also be used to give a constructive bound of the order of k^{3/2} on the Ramsey number R(3, k), which matches the best-known bound. On the other hand, we show that sublinear rank implies the existence of 4-cycles. Finally, we prove some partial results towards establishing lower bounds on matrix rigidity and consequently on the size of logarithmic depth arithmetic circuits for computing certain explicit linear transformations.
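The graph-theoretic objects in this abstract are easy to test directly. A small illustrative sketch (the helper names are ours, not the paper's): build the graph of nonzero off-diagonal elements of a matrix and check it for triangles and 4-cycles.

```python
from itertools import combinations

def pattern_graph(M):
    """Edges between distinct indices i < j whose off-diagonal entries
    are not both zero (the graph of nonzero elements of M)."""
    n = len(M)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if M[i][j] != 0 or M[j][i] != 0]

def _adjacency(edges, n):
    adj = [set() for _ in range(n)]
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    return adj

def has_triangle(M):
    n = len(M)
    edges = pattern_graph(M)
    adj = _adjacency(edges, n)
    # an edge whose endpoints share a neighbour closes a triangle
    return any(adj[i] & adj[j] for i, j in edges)

def has_4cycle(M):
    n = len(M)
    adj = _adjacency(pattern_graph(M), n)
    # two vertices with two common neighbours close a 4-cycle
    return any(len(adj[u] & adj[v]) >= 2 for u, v in combinations(range(n), 2))
```

For instance, the adjacency matrix of a 4-cycle with ones added on the diagonal is triangle-free but contains a 4-cycle, matching the dichotomy the abstract describes.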
2021
Motivated by a rigidity-theoretic perspective on the Localization Problem in 2D, we develop an algorithm for computing circuit polynomials in the algebraic rigidity matroid CM_n associated to the Cayley-Menger ideal for n points in 2D. We introduce combinatorial resultants, a new operation on graphs that captures properties of the Sylvester resultant of two polynomials in the algebraic rigidity matroid. We show that every rigidity circuit has a construction tree from K4 graphs based on this operation. Our algorithm performs an algebraic elimination guided by the construction tree, and uses classical resultants, factorization and ideal membership. To demonstrate its effectiveness, we implemented our algorithm in Mathematica: it took less than 15 seconds on an example where a Gröbner Basis calculation took 5 days and 6 hours.
2001
We characterize the complexity of some natural and important problems in linear algebra. In particular, we identify natural complexity classes for which the problems of (a) determining if a system of linear equations is feasible and (b) computing the rank of an integer matrix (as well as other problems) are complete under logspace reductions. As an important part of presenting this classification, we show that the "exact counting logspace hierarchy" collapses to near the bottom level. (We review the definition of this hierarchy below.) We further show that this class is closed under NC^1-reducibility, and that it consists of exactly those languages that have logspace-uniform span programs (introduced by Karchmer and Wigderson) over the rationals. In addition, we contrast the complexity of these problems with the complexity of determining if a system of linear equations has an integer solution. More precisely, there is a language in uniform NC^1 that requires uniform TC^0 circuits of size greater than 2^{log^{O(1)} n} if and only if there is an oracle separating PSPACE from CH.
Chic. J. Theor. Comput. Sci., 2019
We study the arithmetic circuit complexity of some well-known families of polynomials through the lens of parameterized complexity. Our main focus is on the construction of explicit algebraic branching programs (ABP) for determinant and permanent polynomials of the rectangular symbolic matrix in both commutative and noncommutative settings. The main results are: 1. We show an explicit $O^{*}({n\choose {\downarrow k/2}})$-size ABP construction for the noncommutative permanent polynomial of the $k\times n$ symbolic matrix. We obtain this via an explicit ABP construction of size $O^{*}({n\choose {\downarrow k/2}})$ for $S_{n,k}^*$, the noncommutative symmetrized version of the elementary symmetric polynomial $S_{n,k}$. 2. We obtain an explicit $O^{*}(2^k)$-size ABP construction for the commutative rectangular determinant polynomial of the $k\times n$ symbolic matrix. 3. In contrast, we show that evaluating the rectangular noncommutative determinant over rational matrices is $W[1]$-hard.
Advances in Applied Mathematics, 2007
In this paper we study the complexity of matrix elimination over finite fields in terms of row operations, or equivalently in terms of the distance in the Cayley graph of GL_n(F_q) generated by the elementary matrices. We present an algorithm called striped matrix elimination which is asymptotically faster than traditional Gauss-Jordan elimination. The new algorithm achieves a complexity of O(n^2 / log_q n) row operations, and O(n^3 / log_q n) operations in total, thanks to being able to eliminate many matrix positions with a single row operation. We also bound the average and worst-case complexity for the problem, proving that our algorithm is close to being optimal, and show related concentration results for random matrices. Next we present the results of a large computational study of the complexities for small matrices and fields. Here we determine the exact distribution of the complexity for matrices from GL_n(F_q), with n and q small. Finally we consider an extension from finite fields to finite semifields of the matrix reduction problem. We give a conjecture on the behaviour of a natural analogue of GL_n for semifields and prove this for a certain class of semifields.
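For concreteness, here is a sketch of the classical baseline that striped elimination improves on (not the paper's algorithm): Gauss-Jordan over a prime field F_p, counting the elementary row operations used, which upper-bounds the Cayley-graph distance the abstract refers to.

```python
def gauss_jordan_ops(M, p):
    """Gauss-Jordan reduction over F_p (p prime), counting elementary row
    operations: swaps, row scalings, and adding a multiple of one row to
    another. Returns (op_count, reduced_matrix)."""
    M = [[x % p for x in row] for row in M]
    n, m = len(M), len(M[0])
    ops, r = 0, 0
    for c in range(m):
        piv = next((i for i in range(r, n) if M[i][c]), None)
        if piv is None:
            continue
        if piv != r:                     # swap pivot row into place
            M[r], M[piv] = M[piv], M[r]
            ops += 1
        if M[r][c] != 1:                 # scale pivot row to make entry 1
            inv = pow(M[r][c], p - 2, p)  # Fermat inverse, p prime
            M[r] = [(x * inv) % p for x in M[r]]
            ops += 1
        for i in range(n):               # eliminate the column elsewhere
            if i != r and M[i][c]:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
                ops += 1
        r += 1
    return ops, M
```

This uses one row operation per nonzero position below or above a pivot, i.e., Θ(n^2) operations in the worst case; the paper's point is that over small fields many positions can share a single operation, saving a log_q n factor.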
2021
Abstract. We view the determinant and permanent as functions on directed weighted graphs and introduce their analogues for the undirected graphs. We prove that the task of computing the undirected determinants as well as permanents for planar graphs, whose vertices have degree at most 4, is #P-complete. In the case of planar graphs whose vertices have degree at most 3, the computation of the undirected determinant remains #P-complete while the permanent can be reduced to the FKT algorithm, and therefore is polynomial. The undirected permanent is a Holant problem and its complexity can be deduced from the existing literature. The concept of the undirected determinant is new. Its introduction is motivated by the formal resemblance to the directed determinant, a property that may inspire generalizations of some of the many algorithms which compute the latter. For a sizable class of planar 3-regular graphs, we are able to compute the undirected determinant in polynomial time.
Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2019
We present a deterministic polynomial time approximation scheme (PTAS) for computing the algebraic rank of a set of bounded degree polynomials. The notion of algebraic rank naturally generalizes the notion of rank in linear algebra, i.e., instead of considering only the linear dependencies, we also consider higher degree algebraic dependencies among the input polynomials. More specifically, we give an algorithm that takes as input a set f := {f_1, ..., f_n} ⊂ F[x_1, ..., x_m] of polynomials with degrees bounded by d, and a rational number ε > 0, and runs in time O((nmd)^{O(d^2)} · M(n)), where M(n) is the time required to compute the rank of an n × n matrix (with field entries), and finally outputs a number r such that r is at least (1 − ε) times the algebraic rank of f. Our key contribution is a new technique which allows us to achieve the higher degree generalization of the results by Bläser, Jindal, Pandey (CCC'17), who gave a deterministic PTAS for computing the rank of a matrix with homogeneous linear entries. It is known that a deterministic algorithm for exactly computing the rank in the linear case is already equivalent to the celebrated Polynomial Identity Testing (PIT) problem, which itself would imply circuit complexity lower bounds (Kabanets, Impagliazzo, STOC'03). Such a higher degree generalization is already known to a much stronger extent in the noncommutative world, where the more general case in which the entries of the matrix are given by poly
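For small symbolic instances, algebraic rank can be computed exactly via the Jacobian criterion, which holds in characteristic 0. A minimal sketch using the third-party SymPy library (this is an illustration of the notion, not the PTAS from the paper):

```python
import sympy as sp

def algebraic_rank(polys, variables):
    """Jacobian criterion (characteristic 0): the algebraic rank of a set
    of polynomials equals the rank of their Jacobian matrix, taken over
    the field of rational functions in the variables."""
    J = sp.Matrix([[sp.diff(f, v) for v in variables] for f in polys])
    return J.rank()
```

For example, {x, y, x·y} has algebraic rank 2, since x·y is algebraically dependent on x and y; the Jacobian rows (1, 0), (0, 1), (y, x) indeed span a rank-2 space.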