2005, Electronic Colloquium on Computational Complexity (ECCC)
The rigidity function of a matrix is defined as the minimum number of its entries that need to be changed in order to reduce the rank of the matrix below a given parameter. Proving a strong enough lower bound on the rigidity of a matrix implies a nontrivial lower bound on the complexity of any linear circuit computing the set of linear forms associated with it. However, although most matrices can be shown to be sufficiently rigid, no explicit construction of a rigid family of matrices is known.
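Rigidity can be computed exactly for tiny matrices by brute force, which makes the definition above concrete. The sketch below is a toy illustration rather than anything from the cited work: it specializes to GF(2), where changing an entry simply means flipping a bit, and it searches supports of increasing size.

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank of a 0/1 matrix over GF(2); rows are lists of bits."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def rigidity_gf2(M, r):
    """Minimum number of entry flips needed to bring rank(M) over GF(2) down to <= r."""
    n, m = len(M), len(M[0])
    if gf2_rank(M) <= r:
        return 0
    cells = [(i, j) for i in range(n) for j in range(m)]
    for k in range(1, n * m + 1):
        for support in combinations(cells, k):
            A = [row[:] for row in M]
            for i, j in support:
                A[i][j] ^= 1          # "changing" an entry over GF(2) = flipping it
            if gf2_rank(A) <= r:
                return k
    return n * m  # unreachable: flipping all 1s always yields the zero matrix

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(rigidity_gf2(I3, 1))  # -> 2: zeroing two diagonal 1s leaves a rank-1 matrix
```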
computational complexity, 2013
The rigidity of a matrix A for target rank r is the minimum number of entries of A that must be changed to ensure that the rank of the altered matrix is at most r. Since its introduction by Valiant [Val77], rigidity and similar rank-robustness functions of matrices have found numerous applications in circuit complexity, communication complexity, and learning complexity. Almost all n × n matrices over an infinite field have a rigidity of (n − r)^2. It is a long-standing open question to construct infinite families of explicit matrices even with superlinear rigidity when r = Ω(n).
Linear Algebra and its Applications, 2000
The rigidity of a matrix M is the function R_M(r), which, for a given r, gives the minimum number of entries of M which one has to change in order to reduce its rank to at most r. This notion was introduced by Valiant in 1977 in connection with the complexity of computing linear forms. Despite more than 20 years of research, very little is known about the rigidity of matrices. Nonlinear lower bounds on matrix rigidity would lead to new lower bound techniques for the computation of linear forms, e.g., for the computation of the DFT, as well as to more general advances in complexity theory. We put forward a number of linear algebra research issues arising in the above outlined context.
Theoretical Computer Science, 2000
We consider the problem of the presence of short cycles in the graphs of nonzero elements of matrices which have sublinear rank and nonzero entries on the main diagonal, and analyze the connection between these properties and the rigidity of matrices. In particular, we exhibit a family of matrices which shows that sublinear rank does not imply the existence of triangles. This family can also be used to give a constructive bound of the order of k^{3/2} on the Ramsey number R(3, k), which matches the best-known bound. On the other hand, we show that sublinear rank implies the existence of 4-cycles. Finally, we prove some partial results towards establishing lower bounds on matrix rigidity and consequently on the size of logarithmic depth arithmetic circuits for computing certain explicit linear transformations.
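The graph referred to above has one vertex per row/column index and an edge for each off-diagonal nonzero entry of the (symmetric) matrix. A small pure-Python sketch of building this pattern graph and testing for triangles and 4-cycles; the function names are mine and the example matrix is arbitrary.

```python
from itertools import combinations

def pattern_graph(A):
    """Graph of the off-diagonal nonzero entries of a symmetric matrix A,
    returned as a dict of adjacency sets (i ~ j iff i != j and A[i][j] != 0)."""
    n = len(A)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(n):
            if i != j and A[i][j] != 0:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def has_triangle(adj):
    # An edge (u, v) plus a common neighbour gives a triangle.
    return any(adj[u] & adj[v] for u in adj for v in adj[u] if u < v)

def has_4cycle(adj):
    # A 4-cycle exists iff some pair of vertices has at least two common neighbours.
    return any(len(adj[u] & adj[v]) >= 2 for u, v in combinations(adj, 2))

A = [[1, 1, 0, 1],
     [1, 1, 1, 0],
     [0, 1, 1, 1],
     [1, 0, 1, 1]]          # nonzero diagonal; the pattern graph is the 4-cycle C4
G = pattern_graph(A)
print(has_triangle(G), has_4cycle(G))  # False True
```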
2021
Motivated by a rigidity-theoretic perspective on the Localization Problem in 2D, we develop an algorithm for computing circuit polynomials in the algebraic rigidity matroid CM_n associated to the Cayley-Menger ideal for n points in 2D. We introduce combinatorial resultants, a new operation on graphs that captures properties of the Sylvester resultant of two polynomials in the algebraic rigidity matroid. We show that every rigidity circuit has a construction tree from K_4 graphs based on this operation. Our algorithm performs an algebraic elimination guided by the construction tree, and uses classical resultants, factorization and ideal membership. To demonstrate its effectiveness, we implemented our algorithm in Mathematica: it took less than 15 seconds on an example where a Gröbner basis calculation took 5 days and 6 hrs. 2012 ACM Subject Classification General and reference → Performance; General and reference → Experimentation; Theory of computation → Computational geometry; Mathem...
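The elimination step guided by the construction tree uses the classical Sylvester resultant of two polynomials with respect to a shared variable. A minimal SymPy illustration of that single step, using toy polynomials rather than Cayley-Menger generators, and not the paper's combinatorial-resultant algorithm itself:

```python
from sympy import symbols, resultant, factor

x, y, z = symbols('x y z')

# Two polynomials sharing the variable z; the resultant eliminates z and
# produces a polynomial in the remaining variables that vanishes whenever
# the two inputs have a common root in z.
f = x * z**2 + y * z + 1
g = z**2 + x + y

r = resultant(f, g, z)
print(factor(r))
```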
2001
We characterize the complexity of some natural and important problems in linear algebra. In particular, we identify natural complexity classes for which the problems of (a) determining if a system of linear equations is feasible and (b) computing the rank of an integer matrix (as well as other problems) are complete under logspace reductions. As an important part of presenting this classification, we show that the "exact counting logspace hierarchy" collapses to near the bottom level. (We review the definition of this hierarchy below.) We further show that this class is closed under NC^1-reducibility, and that it consists of exactly those languages that have logspace-uniform span programs (introduced by Karchmer and Wigderson) over the rationals. In addition, we contrast the complexity of these problems with the complexity of determining if a system of linear equations has an integer solution. More precisely, there is a language in uniform NC^1 that requires uniform TC^0 circuits of size greater than 2^{log^{O(1)} n} if and only if there is an oracle separating PSPACE from CH.
We deploy algebraic complexity theoretic techniques for constructing symmetric determinantal representations of formulas and weakly skew circuits. Our representations produce matrices of much smaller dimensions than those given in the convex geometry literature when applied to polynomials having a concise representation (as a sum of monomials, or more generally as an arithmetic formula or a weakly skew circuit). These representations are valid in any field of characteristic different from 2. In characteristic 2 we are led to an almost complete solution to a question of Bürgisser on the VNP-completeness of the partial permanent. In particular, we show that the partial permanent cannot be VNP-complete in a finite field of characteristic 2 unless the polynomial hierarchy collapses.
2013
Let d, m, and q be positive integers and A(q) = {0, . . . , q − 1} be an alphabet. We investigate a generalization of the well-known subword complexity of d-dimensional matrices containing the elements of A(q). Let L = (L_1, . . . , L_m) be a list of distinct d-dimensional vectors, where L_i = (a_{i1}, . . . , a_{id}). The prism complexity of a d-dimensional q-ary matrix M is denoted by C(d, q, L, M) and is defined as the number of distinct d-dimensional q-ary submatrices whose permitted sizes are listed in L. We review and extend earlier results, above all those concerning the maximum complexity of matrices and the performance parameters of the construction algorithms.
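As a toy instance of the kind of quantity being counted, the sketch below restricts to d = 2 and to contiguous submatrices (an illustrative simplification of my own, not necessarily the paper's exact definition of prism complexity) and counts the distinct submatrices of the sizes listed in L:

```python
def submatrix_complexity(M, sizes):
    """Count distinct contiguous submatrices of a 2D q-ary matrix M,
    taken over every (height, width) pair listed in `sizes`."""
    n, m = len(M), len(M[0])
    seen = set()
    for h, w in sizes:
        for i in range(n - h + 1):
            for j in range(m - w + 1):
                block = tuple(tuple(M[i + di][j:j + w]) for di in range(h))
                seen.add(block)
    return len(seen)

M = [[0, 1, 1, 0],
     [1, 0, 1, 1],
     [0, 0, 1, 0]]
print(submatrix_complexity(M, [(2, 2)]))  # number of distinct 2x2 windows of M
```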
#P-hardness of computing matrix immanants is proved for each member of a broad class of shapes and restricted sets of matrices. The class is characterized in the following way: if a shape of size n in it is of the form (w, 1 + λ), or its conjugate is of that form, where 1 is the all-1s vector, then |λ| = n^ε for some ε > 0, λ can be tiled with 1 × 2 dominoes, and (3w + 3h(λ) + 1)|λ| ≤ n, where h(λ) is the height of λ. The problem remains #P-hard if the immanants are evaluated on 0-1 matrices. We also give hardness proofs for some immanants whose shape λ = (1 + λ_d) has size n such that |λ_d| = n^ε for some 0 < ε < 1/2 and, for some w, the shape λ_d/(w) is tileable with 1 × 2 dominoes. The #P-hardness result holds when these immanants are evaluated on adjacency matrices of planar directed graphs; in these cases, however, the edges have small positive integer weights.
Journal of Applied and Industrial Mathematics, 2011
Under consideration is the problem of constructing a square Boolean matrix A of order n without "rectangles" (i.e., no 2 × 2 submatrix formed by the intersection of any two rows and any two columns consists entirely of 1s) such that the linear transformation modulo two defined by A has complexity o(ν(A) − n) in the basis {⊕}, where ν(A) is the weight of A, i.e., the number of 1s (matrices without rectangles are called thin). Two constructions for solving this problem are given. In the first construction, n = p^2, where p is an odd prime. The corresponding matrix H_p has weight p^3 and generates a linear transformation of complexity O(p^2 log p log log p). In the second construction, the matrix has weight nk, where k is the cardinality of a Sidon set in Z_n. We may assume that k = Θ(√n); there are examples of such sets of cardinality k ∼ √n for some n. The corresponding linear transformation has complexity O(n log n log log n). Some generalizations of this problem are considered.
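The abstract does not spell out how the Sidon set yields the matrix; one natural reading, stated here purely as an assumption, is the circulant pattern A[i][j] = 1 iff (i + j) mod n lies in a Sidon set S. Such a matrix has weight n|S| and is rectangle-free, because a 2 × 2 all-ones submatrix would force a repeated pairwise sum inside S. A small sketch with a greedily built (not maximal) Sidon set:

```python
from itertools import combinations

def greedy_sidon(n):
    """A (not necessarily maximal) Sidon set in Z_n: all pairwise sums,
    doubles included, are distinct modulo n. Built greedily."""
    S, sums = [], set()
    for x in range(n):
        new = {(s + x) % n for s in S} | {(2 * x) % n}
        if not (new & sums):
            sums |= new
            S.append(x)
    return S

def sidon_matrix(n, S):
    # Assumed circulant construction: A[i][j] = 1 iff (i + j) mod n is in S.
    return [[1 if (i + j) % n in S else 0 for j in range(n)] for i in range(n)]

def has_rectangle(A):
    n = len(A)
    return any(
        sum(A[i1][j] and A[i2][j] for j in range(n)) >= 2
        for i1, i2 in combinations(range(n), 2)
    )

n = 31
S = greedy_sidon(n)
A = sidon_matrix(n, S)
print(len(S), sum(map(sum, A)), has_rectangle(A))  # weight is n*len(S); no rectangles -> False
```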
Linear Algebra and Its Applications, 2003
In representation theory, the problem of classifying pairs of matrices up to simultaneous similarity is used as a measure of complexity; classification problems containing it are called wild problems. We show in an explicit form that this problem contains all classification matrix problems given by quivers or posets. Then we prove that it does not contain (but is contained in) the problem of classifying three-valent tensors. Hence, all wild classification problems given by quivers or posets have the same complexity; moreover, a solution of any one of these problems implies a solution of each of the others. The problem of classifying three-valent tensors is more complicated.
Foundations and Trends® in Theoretical Computer Science, 2007
Chic. J. Theor. Comput. Sci., 2019
We study the arithmetic circuit complexity of some well-known families of polynomials through the lens of parameterized complexity. Our main focus is on the construction of explicit algebraic branching programs (ABPs) for determinant and permanent polynomials of the \emph{rectangular} symbolic matrix in both commutative and noncommutative settings. The main results are: 1. We show an explicit $O^{*}({n\choose {\downarrow k/2}})$-size ABP construction for the noncommutative permanent polynomial of a $k\times n$ symbolic matrix. We obtain this via an explicit ABP construction of size $O^{*}({n\choose {\downarrow k/2}})$ for $S_{n,k}^*$, the noncommutative symmetrized version of the elementary symmetric polynomial $S_{n,k}$. 2. We obtain an explicit $O^{*}(2^k)$-size ABP construction for the commutative rectangular determinant polynomial of the $k\times n$ symbolic matrix. 3. In contrast, we show that evaluating the rectangular noncommutative determinant over rational matrices is $W[1]$-hard.
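For orientation, the commutative elementary symmetric polynomial $S_{n,k}$ that underlies these constructions satisfies the recurrence $e_k(x_1,\dots,x_j) = e_k(x_1,\dots,x_{j-1}) + x_j\, e_{k-1}(x_1,\dots,x_{j-1})$, which is also the standard way to lay it out as a small ABP. The sketch below only evaluates the polynomial at a point via this recurrence; it is not one of the paper's noncommutative constructions.

```python
def elementary_symmetric(xs, k):
    """Evaluate e_k(x_1, ..., x_n) with the DP recurrence
    e_k(x_1..x_j) = e_k(x_1..x_{j-1}) + x_j * e_{k-1}(x_1..x_{j-1})."""
    e = [1] + [0] * k              # e[i] holds e_i of the prefix processed so far
    for x in xs:
        for i in range(k, 0, -1):  # descend so each x contributes at most once
            e[i] += x * e[i - 1]
    return e[k]

print(elementary_symmetric([1, 2, 3, 4], 2))  # 1*2+1*3+1*4+2*3+2*4+3*4 = 35
```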
Advances in Applied Mathematics, 2007
In this paper we study the complexity of matrix elimination over finite fields in terms of row operations, or equivalently in terms of the distance in the Cayley graph of GL_n(F_q) generated by the elementary matrices. We present an algorithm called striped matrix elimination which is asymptotically faster than traditional Gauss-Jordan elimination. The new algorithm achieves a complexity of O(n^2 / log_q n) row operations, and O(n^3 / log_q n) operations in total, thanks to being able to eliminate many matrix positions with a single row operation. We also bound the average and worst-case complexity for the problem, proving that our algorithm is close to being optimal, and show related concentration results for random matrices. Next we present the results of a large computational study of the complexities for small matrices and fields. Here we determine the exact distribution of the complexity for matrices from GL_n(F_q), with n and q small. Finally we consider an extension of the matrix reduction problem from finite fields to finite semifields. We give a conjecture on the behaviour of a natural analogue of GL_n for semifields and prove it for a certain class of semifields.
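As a point of comparison for the operation counts above, here is plain Gauss-Jordan elimination over GF(2) that counts row additions. This is the traditional baseline the paper improves upon, not the striped algorithm; row swaps are ignored, and q = 2 keeps scalar multiples trivial.

```python
def gauss_jordan_rowops_gf2(M):
    """Reduce an invertible 0/1 matrix to the identity over GF(2) and return the
    number of row additions used (swaps not counted; no nontrivial scalings over GF(2))."""
    A = [row[:] for row in M]
    n = len(A)
    ops = 0
    for col in range(n):
        pivot = next(i for i in range(col, n) if A[i][col])  # assumes A is invertible
        A[col], A[pivot] = A[pivot], A[col]
        for i in range(n):
            if i != col and A[i][col]:
                A[i] = [a ^ b for a, b in zip(A[i], A[col])]
                ops += 1
    return ops

M = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
print(gauss_jordan_rowops_gf2(M))  # 3 row additions for this matrix
```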
2021
Abstract. We view the determinant and permanent as functions on directed weighted graphs and introduce their analogues for undirected graphs. We prove that the task of computing the undirected determinant as well as the permanent for planar graphs whose vertices have degree at most 4 is #P-complete. In the case of planar graphs whose vertices have degree at most 3, the computation of the undirected determinant remains #P-complete, while the permanent can be reduced to the FKT algorithm and is therefore polynomial. The undirected permanent is a Holant problem and its complexity can be deduced from the existing literature. The concept of the undirected determinant is new. Its introduction is motivated by the formal resemblance to the directed determinant, a property that may inspire generalizations of some of the many algorithms that compute the latter. For a sizable class of planar 3-regular graphs, we are able to compute the undirected determinant in polynomial time.
Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, 2019
We present a deterministic polynomial time approximation scheme (PTAS) for computing the algebraic rank of a set of bounded degree polynomials. The notion of algebraic rank naturally generalizes the notion of rank in linear algebra, i.e., instead of considering only the linear dependencies, we also consider higher degree algebraic dependencies among the input polynomials. More specifically, we give an algorithm that takes as input a set f := {f_1, . . . , f_n} ⊂ F[x_1, . . . , x_m] of polynomials with degrees bounded by d, and a rational number ε > 0, runs in time O((nmd)^{O(d^2)} · M(n)), where M(n) is the time required to compute the rank of an n × n matrix (with field entries), and finally outputs a number r such that r is at least (1 − ε) times the algebraic rank of f. Our key contribution is a new technique which allows us to achieve the higher degree generalization of the results by Bläser, Jindal, Pandey (CCC'17), who gave a deterministic PTAS for computing the rank of a matrix with homogeneous linear entries. It is known that a deterministic algorithm for exactly computing the rank in the linear case is already equivalent to the celebrated Polynomial Identity Testing (PIT) problem, which itself would imply circuit complexity lower bounds (Kabanets, Impagliazzo, STOC'03). Such a higher degree generalization is already known to a much stronger extent in the noncommutative world, where the more general case in which the entries of the matrix are given by poly...
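In characteristic zero the algebraic rank coincides with the rank of the Jacobian matrix of the polynomials (the Jacobian criterion), which gives a simple exact symbolic baseline. The SymPy sketch below is that baseline only, not the approximation scheme of the paper:

```python
from sympy import symbols, Matrix

x1, x2 = symbols('x1 x2')
# The third polynomial equals f1**2 - 2*f2, so the three are algebraically dependent.
polys = Matrix([x1 + x2, x1 * x2, x1**2 + x2**2])

# Jacobian criterion (characteristic zero): algebraic rank = rank of the Jacobian matrix.
print(polys.jacobian(Matrix([x1, x2])).rank())  # -> 2
```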
IFIP TCS, 2002
We investigate the complexity of enumerative approximation of two elementary problems in linear algebra: computing the rank and the determinant of a matrix. In particular, we show that if there exists an enumerator that, given a matrix, outputs a list of constantly many numbers, one of which is guaranteed to be the rank of the matrix, then it can be determined in AC^0 (with oracle access to the enumerator) which of these numbers is the rank. Thus, for example, if the enumerator is an FL function, then the problem of computing ...
Computational Complexity, 1996
Extending a line of research initiated by Lipton, we study the complexity of computing the permanent of random n by n matrices with integer values between 0 and p − 1, for any suitably large prime p. Prior to our work, it was shown hard to compute the permanent of half of these matrices (by Gemmell and Sudan), and to enumerate for any matrix a polynomial number of options for its permanent (by Cai and Hemachandra, and by Toda). We show that unless the polynomial-time hierarchy collapses to its second level, no polynomial-time algorithm can compute the permanent of every matrix with probability at least 13n^3/p, nor can it compute the permanent of at least a (49n^3/√p)-fraction of the matrices. As p may be exponential in n, these represent very low success probabilities for any efficient algorithm that attempts to compute the permanent. For 0/1 matrices, our results show that their permanents cannot be guessed with probability greater than 1/2^{n^{1−ε}}. We also show that it is hard to get even partial information about the value of the permanent modulo p. For random matrices we show that any balanced polynomial-time 0/1 predicate (e.g., the least significant bit, the parity of all the bits, the quadratic residuosity character) cannot be guessed with probability significantly greater than 1/2 (unless the polynomial-time hierarchy collapses). This result extends to showing simultaneous hardness for linear-size groups of bits.
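For concreteness, the permanent modulo p that these hardness results concern can be computed exactly (in exponential time) by Ryser's inclusion-exclusion formula; the following is a small reference implementation, not something taken from the paper:

```python
from itertools import combinations

def permanent_mod(A, p):
    """Permanent of a square matrix modulo p via Ryser's formula:
    per(A) = sum over nonempty column sets S of (-1)^(n-|S|) * prod_i sum_{j in S} a_ij."""
    n = len(A)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in A:
                prod = prod * (sum(row[j] for j in cols) % p) % p
            total += (-1) ** (n - k) * prod
    return total % p

A = [[1, 2],
     [3, 4]]
print(permanent_mod(A, 7))  # per(A) = 1*4 + 2*3 = 10, and 10 mod 7 = 3
```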
Computational Complexity, 2002
We show that for several natural classes of "structured" matrices, including symmetric, circulant, Hankel and Toeplitz matrices, approximating the permanent modulo a prime p is as hard as computing its exact value. Results of this kind are well known for arbitrary matrices. However the techniques used do not seem to apply to "structured" matrices. Our approach is based on recent advances in the hidden number problem introduced by Boneh and Venkatesan in 1996 combined with some bounds of exponential sums motivated by the Waring problem in finite fields.
Electronic Notes in Discrete Mathematics, 2010
It is known that the problem of computing the permanent of a given matrix is #P-hard. However, Alexander I. Barvinok has proven that if we fix the rank of the matrix, then its permanent can be computed in strongly polynomial time. Barvinok's algorithm [1] computes the permanent of square matrices of fixed rank by constructing polynomials. We study the problem of expressing polynomials as permanents of low-rank square matrices and vice versa. We prove that the permanent of a square matrix of rank 1 is a monomial, and that the permanent of a square matrix (with integer entries) that does not have full rank is a polynomial with even coefficients. We also prove that, for a polynomial f ∈ k[x], there exists a square matrix of rank 2 whose permanent is the polynomial f. Our results contribute to computing the permanent of a square matrix efficiently.
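The rank-1 statement has a one-line verification: over a field, a rank-1 matrix can be written A = uv^T, so that a_{ij} = u_i v_j, and then

$$\operatorname{per}(A) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} u_i\, v_{\sigma(i)} \;=\; \Big(\prod_{i=1}^{n} u_i\Big) \sum_{\sigma \in S_n} \prod_{i=1}^{n} v_{\sigma(i)} \;=\; n!\, \prod_{i=1}^{n} u_i \prod_{j=1}^{n} v_j,$$

a single monomial (up to the scalar n!) in the entries of u and v.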
1999
Abstract. We characterize the complexity of some natural and important problems in linear algebra. In particular, we identify natural complexity classes for which the problems of (a) determining if a system of linear equations is feasible and (b) computing the rank of an integer matrix (as well as other problems) are complete under logspace reductions. As an important part of presenting this classification, we show that the "exact counting logspace hierarchy" collapses to near the bottom level.