Papers by Daniel Spielman
We use support theory, in particular the fretsaw extensions of Shklarski and Toledo [ST06a], to design preconditioners for the stiffness matrices of 2-dimensional truss structures that are stiffly connected. Provided that all the lengths of the trusses are within constant factors of each other, that the angles at the corners of the triangles are bounded away from 0 and π, and that the elastic moduli and cross-sectional areas of all the truss elements are within constant factors of each other, our preconditioners allow us to solve linear equations in the stiffness matrices to accuracy ε in time O(n^{5/4} (log² n · log log n)^{3/4} log(1/ε)).
Computing Research Repository, 2003
This paper has been divided into three papers. arXiv:0809.3232, arXiv:0808.4134, arXiv:cs/0607105
Proceedings of the thirty-eighth annual ACM symposium on Theory of computing - STOC '06, 2006
We present the first randomized polynomial-time simplex algorithm for linear programming. Like the other known polynomial-time algorithms for linear programming, its running time depends polynomially on the number of bits used to represent its input.
Proceedings of the thirty-sixth annual ACM symposium on Theory of computing - STOC '04, 2004
This paper has been divided into three papers. arXiv:0809.3232, arXiv:0808.4134, arXiv:cs/0607105
Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures - SPAA '04, 2004
Proceedings of the thirty-third annual ACM symposium on Theory of computing - STOC '01, 2001
We introduce the smoothed analysis of algorithms, which continuously interpolates between the worst-case and average-case analyses of algorithms. In smoothed analysis, we measure the maximum over inputs of the expected performance of an algorithm under small random perturbations of that input. We measure this performance in terms of both the input size and the magnitude of the perturbations. We show that the simplex algorithm has smoothed complexity polynomial in the input size and the standard deviation of Gaussian perturbations.
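The definition above — the maximum over inputs of the expected cost under small Gaussian perturbations — can be estimated empirically. Below is a minimal sketch; the cost function (comparisons made by insertion sort), the input set, and the parameter names are illustrative choices, not anything from the paper.

```python
import random
import statistics

def smoothed_measure(algorithm_cost, inputs, sigma, trials=200):
    """Estimate the smoothed measure of a cost function: the maximum
    over inputs of the expected cost under Gaussian perturbations of
    standard deviation sigma (a toy empirical estimate, not a bound)."""
    worst = 0.0
    for x in inputs:
        costs = []
        for _ in range(trials):
            perturbed = [xi + random.gauss(0.0, sigma) for xi in x]
            costs.append(algorithm_cost(perturbed))
        worst = max(worst, statistics.mean(costs))
    return worst

def insertion_sort_cost(a):
    """Toy cost: number of comparisons insertion sort makes."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return comparisons

# A worst-case input (reversed) and a best-case input (sorted).
inputs = [list(range(8, 0, -1)), list(range(8))]
print(smoothed_measure(insertion_sort_cost, inputs, sigma=0.5))
```

Note how the measure interpolates: as sigma grows, the perturbed reversed list looks more like a random list and the expected cost drops toward the average case.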
Annals of Mathematics, 2015
We prove that there exist infinite families of regular bipartite Ramanujan graphs of every degree bigger than 2. We do this by proving a variant of a conjecture of Bilu and Linial about the existence of good 2-lifts of every graph.
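For context, a d-regular bipartite graph is Ramanujan when every adjacency eigenvalue other than ±d has absolute value at most 2√(d−1). The check below is a small illustrative sketch (the function name and the hypercube example are my choices, not from the paper):

```python
import numpy as np
from itertools import product

def is_bipartite_ramanujan(A, d):
    """A d-regular bipartite graph is Ramanujan when every adjacency
    eigenvalue other than +/-d has absolute value at most 2*sqrt(d-1)."""
    eigs = np.linalg.eigvalsh(A)
    nontrivial = [l for l in eigs if abs(abs(l) - d) > 1e-9]
    return all(abs(l) <= 2 * np.sqrt(d - 1) + 1e-9 for l in nontrivial)

# The 3-dimensional hypercube: 3-regular and bipartite, with adjacency
# eigenvalues {3, 1, 1, 1, -1, -1, -1, -3}; since |±1| <= 2*sqrt(2),
# it is Ramanujan.
verts = list(product([0, 1], repeat=3))
n = len(verts)
A = np.zeros((n, n))
for i, u in enumerate(verts):
    for j, v in enumerate(verts):
        if sum(a != b for a, b in zip(u, v)) == 1:
            A[i, j] = 1
print(is_bipartite_ramanujan(A, 3))  # True
```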
Proceedings of the thirty-third annual ACM symposium on Theory of computing - STOC '01, 2001
Randomness Efficient Identity Testing of Multivariate Polynomials. Adam R. Klivans (Laboratory for Computer Science, MIT, Cambridge, MA 02139) and Daniel A. Spielman (Department of Mathematics, MIT, Cambridge, MA 02139).
Proceedings of the twelfth annual symposium on Computational geometry - SCG '96, 1996
Lipton and Tarjan [18] showed that every n-node planar graph has a set of at most √(8n) vertices whose removal divides the rest of the graph into two disconnected pieces of size no more than (2/3)n. We call such a set a 2/3-separator of size √(8n). Their bound on the size of a 2/3-separator…
Proceedings of the thirtieth annual ACM symposium on Theory of computing - STOC '98, 1998
Gallager introduced a family of codes based on sparse bipartite graphs, which he calls low-density parity-check codes. He suggests a natural decoding algorithm for these codes, and proves a good bound on the fraction of errors that can be corrected. As the codes that Gallager builds are derived from regular graphs, we refer to them as regular codes.
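The decoding idea can be illustrated with a toy bit-flipping decoder in the spirit of Gallager's algorithm (not his exact scheme), run here on a tiny Hamming code standing in for a genuinely sparse parity-check matrix — all names and the matrix are illustrative choices:

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=20):
    """Toy bit-flipping decoder: repeatedly flip the bit involved in
    the most unsatisfied parity checks until every check passes.
    H is a binary parity-check matrix, y the received word."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2             # which parity checks fail
        if not syndrome.any():
            return y                     # all checks satisfied
        votes = H.T @ syndrome           # failing checks touching each bit
        y[np.argmax(votes)] ^= 1         # flip the most-suspect bit
    return y

# Parity-check matrix of the (7,4) Hamming code (an illustrative
# stand-in for a sparse LDPC matrix).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)        # the all-zero word is a codeword
received = codeword.copy()
received[2] ^= 1                         # introduce a single bit error
print(bit_flip_decode(H, received))      # recovers the all-zero codeword
```

For a true LDPC code the same loop scales well because each bit participates in only a constant number of checks.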
Annals of Mathematics, 2015
Lecture Notes in Computer Science, 2003
In smoothed analysis, one measures the complexity of algorithms assuming that their inputs are subject to small amounts of random noise. In an earlier work (Spielman and Teng, 2001), we introduced this analysis to explain the good practical behavior of the simplex algorithm. In this paper, we provide further motivation for the smoothed analysis of algorithms, and develop models of…
Lecture Notes in Computer Science, 2005
…its competitive ratio (Sleator & Tarjan (1985) and Borodin & El-Yaniv (1998)). … For these continuous inputs, for example, the family of Gaussian distributions (cf. Feller (1968, 1970)) provides a perturbation model for noise. …
Lecture Notes in Computer Science, 2004
Off-centers were recently introduced as an alternative type of Steiner points to circumcenters for computing size-optimal, quality-guaranteed Delaunay triangulations. In this paper, we study the depth of the off-center insertion hierarchy. We prove that Delaunay refinement with off-centers takes only O(log(L/h)) parallel iterations, where L is the diameter of the domain, and h is the smallest edge in…
We perform a smoothed analysis of Renegar's condition number for linear programming by analyzing the distribution of the distance to ill-posedness of a linear program subject to a slight Gaussian perturbation. In particular, we show that for every n-by-d matrix Ā, n-vector b̄, and d-vector c̄ satisfying ‖(Ā, b̄, c̄)‖_F ≤ 1 and every σ ≤ 1, …
We introduce a new notion of graph sparsification based on spectral similarity of graph Laplacians: spectral sparsification requires that the Laplacian quadratic form of the sparsifier approximate that of the original. This is equivalent to saying that the Laplacian of the sparsifier is a good preconditioner for the Laplacian of the original.
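The quadratic-form condition can be checked numerically on a toy pair of graphs. The sketch below compares a 4-cycle against a hand-weighted path on the same vertices — an illustrative example of spectral similarity, not the sampling construction from the paper:

```python
import numpy as np

def laplacian(n, edges, weights=None):
    """Weighted graph Laplacian L = D - A."""
    L = np.zeros((n, n))
    for k, (u, v) in enumerate(edges):
        w = 1.0 if weights is None else weights[k]
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

# Original: a 4-cycle. "Sparsifier": a path on the same vertices with
# each edge weight doubled (a hand-picked toy, not the paper's scheme).
n = 4
L = laplacian(n, [(0, 1), (1, 2), (2, 3), (3, 0)])
Ls = laplacian(n, [(0, 1), (1, 2), (2, 3)], weights=[2.0, 2.0, 2.0])

# Spectral similarity: the ratio x^T Ls x / x^T L x stays bounded over
# test vectors orthogonal to the all-ones nullspace. For this pair a
# short calculation shows every ratio lies in [1/2, 2].
rng = np.random.default_rng(0)
ratios = []
for _ in range(1000):
    x = rng.standard_normal(n)
    x -= x.mean()                       # project out the nullspace
    ratios.append((x @ Ls @ x) / (x @ L @ x))
print(min(ratios), max(ratios))
```

The bounded ratio is exactly the preconditioner statement: iterative solvers for Lx = b converge quickly when Ls is used as the preconditioner.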