2016, arXiv (Cornell University)
…
This work is a comprehensive extension of Abu-Salem et al. (2015) that investigates the prowess of the Funnel Heap for implementing sums of products in the polytope method for factoring polynomials, when the polynomials are in sparse distributed representation. We exploit that the work and cache
2009
We present a high performance algorithm for multiplying sparse distributed polynomials using a multicore processor. Each core uses a heap of pointers to multiply parts of the polynomials using its local cache. Intermediate results are written to buffers in shared cache and the cores take turns combining them to form the result. A cooperative approach is used to balance the load and improve scalability, and the extra cache from each core produces a superlinear speedup in practice. We present benchmarks comparing our parallel routine to a sequential version and to the routines of other computer algebra systems.
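The heap-of-pointers technique at the core of this multiplication routine can be sketched in a few lines. Below is a minimal sequential, univariate illustration (the actual routines are multivariate, written in C, and parallelized across cores; the function name and representation here are ours, not the paper's): each term of f gets one heap entry holding a pointer into g, so the product terms are produced in sorted order with only O(#f) heap entries live at once.

```python
import heapq

def heap_mul(f, g):
    """Multiply sparse polynomials given as lists of (exponent, coefficient)
    pairs sorted by decreasing exponent, using a heap of pointers:
    one heap entry per term of f, each pointing at its current term of g."""
    if not f or not g:
        return []
    # Entries are (-(ef + eg), i, j); negation makes the min-heap pop
    # the largest product exponent first.
    heap = [(-(f[i][0] + g[0][0]), i, 0) for i in range(len(f))]
    heapq.heapify(heap)
    result = []
    while heap:
        negexp, i, j = heapq.heappop(heap)
        exp = -negexp
        coeff = f[i][1] * g[j][1]
        if result and result[-1][0] == exp:
            result[-1] = (exp, result[-1][1] + coeff)   # merge like terms
            if result[-1][1] == 0:
                result.pop()                            # drop cancellations
        else:
            result.append((exp, coeff))
        if j + 1 < len(g):                              # advance pointer into g
            heapq.heappush(heap, (-(f[i][0] + g[j + 1][0]), i, j + 1))
    return result

# (x^3 + 2x + 1) * (x^2 - 1)
f = [(3, 1), (1, 2), (0, 1)]
g = [(2, 1), (0, -1)]
print(heap_mul(f, g))   # x^5 + x^3 + x^2 - 2x - 1
```

Because only the heap and the current output term are hot, the working set is tiny, which is what lets each core multiply its part of the polynomials inside its local cache.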
2018
Our goal is to develop a high-performance code for factoring a multivariate polynomial in n variables with integer coefficients which is polynomial time in the sparse case and efficient in the dense case. Maple, Magma, Macsyma, Singular and Mathematica all implement Wang's multivariate Hensel lifting, which, for sparse polynomials, can be exponential in n. Wang's algorithm is also highly sequential. In this work we reorganize multivariate Hensel lifting to facilitate a high-performance parallel implementation. We identify multivariate polynomial evaluation and bivariate Hensel lifting as two core components. We have also developed a library of algorithms for polynomial arithmetic which allows us to assign each core an independent task with all the memory it needs in advance, so that memory management is eliminated and all important operations operate on dense arrays of 64-bit integers. We have implemented our algorithm and library using Cilk C for the case of two monic factors. We disc...
Journal of Symbolic Computation, 2008
A recent bivariate factorisation algorithm appeared in Abu-Salem et al. [Abu-Salem, F., Gao, S., Lauder, A., 2004. Factoring polynomials via polytopes. In: Proc. ISSAC'04. pp. 4-11] based on the use of Newton polytopes and a generalisation of Hensel lifting. Although possessing a worst-case exponential running time like the Hensel lifting algorithm, the polytope method should perform well for sparse polynomials whose Newton polytopes have very few Minkowski decompositions. A preliminary implementation in Abu-Salem et al. (2004) indeed reflects this property, but does not exploit the fact that the algorithm preserves the sparsity of the input polynomial, so that the total amount of work and space required are O(d^4) and O(d^2) respectively, for an input bivariate polynomial of total degree d. In this paper, we show that the polytope method can be made sensitive to the number of non-zero terms of the input polynomial, so that the input size becomes dependent on both the degree and the number of terms of the input bivariate polynomial. We describe a sparse adaptation of the polytope method over finite fields with prime order, which requires fewer bit operations and memory references given a degree d sparse polynomial whose number of terms t satisfies t < d^{3/4}, and which is known to be the product of two sparse factors. For t < d, and using fast polynomial arithmetic over finite fields, our refinement reduces the amount of work per extension of a coprime dominating edge factorisation and the total spatial cost to O(t^λ d^2 + t^{2λ} d L(d) + t^{4λ} d) bit operations and O(t^λ d) bits of memory respectively, for some 1/2 ≤ λ < 1, and L(d) = log d log log d. To the best of our knowledge, the sparse binary factorisations achieved using this adaptation are of the highest degree so far, reaching a world record degree of 20 000 for a very sparse bivariate polynomial over F_2.
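For readers unfamiliar with the central object here: the Newton polytope of a bivariate polynomial is the convex hull of its exponent vectors, and its Minkowski decompositions constrain the possible factorisations. A minimal sketch of computing it (our own illustration via Andrew's monotone chain, not the paper's code):

```python
def newton_polytope(support):
    """Vertices of the Newton polytope (convex hull) of a bivariate
    polynomial's support, in counter-clockwise order, using
    Andrew's monotone chain algorithm."""
    pts = sorted(set(support))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means no left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates

# support of f = 1 + x*y + x^2 + y^2 + x*y^3, as exponent vectors (i, j);
# the interior point (1, 1) is discarded
print(newton_polytope([(0, 0), (1, 1), (2, 0), (0, 2), (1, 3)]))
```

A polynomial with t terms has at most t hull vertices, which is one reason the polytope method can be made sensitive to sparsity rather than to the degree alone.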
Journal of Symbolic Computation, 2020
The standard approach to factor a multivariate polynomial in Z[x1, x2, ..., xn] is to factor a univariate image in Z[x1] then recover the multivariate factors from their univariate images using a process known as multivariate Hensel lifting. Wang's multivariate Hensel lifting recovers the variables one at a time. It is currently implemented in many computer algebra systems, including Maple, Magma and Singular. When the factors are sparse, Wang's approach can be exponential in the number of variables n. To address this, sparse Hensel lifting was introduced by Zippel and then improved by Kaltofen. Recently, Monagan & Tuncer introduced a new approach which uses sparse polynomial interpolation to solve the multivariate polynomial diophantine equations that arise inside Hensel lifting in random polynomial time. This approach is shown to be practical and faster than Zippel's and Kaltofen's algorithms and faster than Wang's algorithm for non-zero evaluation points. In this work we first present a complete description of the sparse interpolation used by Monagan & Tuncer and show that it runs in random polynomial time. Next we study what happens to the sparsity of multivariate polynomials when the variables are successively evaluated at numbers. We determine the expected number of remaining terms. We use this result to revisit and correct the complexity analysis of Zippel's original sparse interpolation. Next we present an average case complexity analysis of our approach. We have implemented our algorithm in Maple with some sub-algorithms implemented in C. We present some experimental data comparing our approach with Wang's method for both sparse and dense factors. The data shows that our method is always competitive with Wang's method and faster when Wang's method is exponential in n.
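The sparsity question studied here — how terms collide and cancel when a variable is evaluated at a number — is easy to see on a dictionary representation of a sparse polynomial. The sketch below is our own illustration (not the authors' Maple/C code): terms whose remaining exponent vectors coincide after evaluation are merged, so the image can have fewer terms than the original.

```python
def eval_var(poly, value):
    """Evaluate the last variable of a sparse multivariate polynomial
    (dict mapping exponent tuples to coefficients) at `value`.
    Terms whose remaining exponents collide are merged, which is how
    sparsity can shrink under successive evaluation."""
    out = {}
    for exps, c in poly.items():
        key = exps[:-1]                         # drop the evaluated variable
        out[key] = out.get(key, 0) + c * value ** exps[-1]
        if out[key] == 0:
            del out[key]                        # exact cancellation
    return out

# f = x^2*y + 3*x^2 - x*y^2 + 5  in Z[x, y]
f = {(2, 1): 1, (2, 0): 3, (1, 2): -1, (0, 0): 5}
g = eval_var(f, 2)   # f(x, 2): the two x^2 terms merge into 5*x^2
print(g)
```

Counting the expected number of keys surviving this merge, over random evaluation points, is exactly the quantity the paper's average-case analysis needs.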
2019
Maple 2019 has a new multivariate polynomial factorization algorithm for factoring polynomials in Z[x1, x2, ..., xn], that is, polynomials in n variables with integer coefficients. The new algorithm, which we call MTSHL, was developed by the authors at Simon Fraser University. The algorithm and its sub-algorithms have been published in a sequence of papers [3, 4, 5]. It was integrated into the Maple library in early 2018 by Baris Tuncer under a MITACS internship with Maplesoft. MTSHL is now the default factoring algorithm in Maple 2019.
State-of-the-art factoring in Q[x] is dominated in theory by a combinatorial reconstruction problem while, excluding some rare polynomials, performance tends to be dominated by Hensel lifting. We present an algorithm which gives a practical improvement (less Hensel lifting) for these more common polynomials. In addition, factoring has suffered from a 25-year complexity gap because the best implementations are much faster in practice than their complexity bounds. We illustrate that this complexity gap can be closed by providing an implementation which is comparable to the best current implementations and for which competitive complexity results can be proved.
We report on new code for sparse multivariate polynomial multiplication and division that we have recently integrated into Maple as part of our MITACS project at Simon Fraser University. Our goal was to try to beat Magma which is widely viewed in the computer algebra community as having state-of-the-art polynomial algebra. Here we give benchmarks comparing our implementation for multiplication and division with the Magma, Maple, Singular, Trip and Pari computer algebra systems. Our algorithms use a binary heap to multiply and divide using very little working memory. Details of our work may be found in [7] and [8].
We present a new algorithm for pseudo-division of sparse multivariate polynomials with integer coefficients. It uses a heap of pointers to simultaneously merge the dividend and partial products, sorting the terms efficiently and delaying all coefficient arithmetic to produce good complexity. The algorithm uses very little memory and we expect it to run in the processor cache. We give benchmarks comparing our implementation to existing computer algebra systems.
2010
We present a Las Vegas algorithm for interpolating a sparse multivariate polynomial over a finite field, represented by a black box. Our algorithm modifies the 1988 algorithm of Ben-Or and Tiwari, which interpolates polynomials over rings of characteristic zero, to work in characteristic p by performing additional probes.
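The characteristic-zero Ben-Or/Tiwari scheme being adapted here can be sketched for a univariate integer polynomial with a known number of terms t: probe the black box at powers of 2, find the linear recurrence the probes satisfy, read the exponents off the recurrence's roots (which are powers of 2), then solve a transposed Vandermonde system for the coefficients. This is a simplified rational-arithmetic sketch with names of our own choosing; the paper works over F_p, where extra probes are needed precisely because the monomial values are no longer distinguishable powers of 2.

```python
from fractions import Fraction

def solve(A, b):
    """Gauss-Jordan elimination over the rationals (A assumed nonsingular)."""
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def interpolate(black_box, t, max_deg):
    """Ben-Or/Tiwari-style interpolation of a univariate integer polynomial
    with exactly t terms and degree <= max_deg, probing at powers of 2."""
    v = [black_box(2 ** i) for i in range(2 * t)]
    # Minimal recurrence: lam[0]*v[i] + ... + lam[t-1]*v[i+t-1] = -v[i+t]
    lam = solve([[v[i + j] for j in range(t)] for i in range(t)],
                [-v[i + t] for i in range(t)])
    # Exponents: roots of z^t + lam[t-1] z^(t-1) + ... + lam[0] among 2^e
    exps = [e for e in range(max_deg + 1)
            if (2 ** e) ** t + sum(l * (2 ** e) ** j
                                   for j, l in enumerate(lam)) == 0]
    # Coefficients from a transposed Vandermonde system in the monomials 2^e
    coeffs = solve([[(2 ** e) ** i for e in exps] for i in range(len(exps))],
                   v[:len(exps)])
    return sorted(zip(exps, coeffs))

# black box for f = 3*x^5 - 2*x^2 + 7, which has t = 3 terms
print(interpolate(lambda x: 3 * x**5 - 2 * x**2 + 7, 3, 10))
```

Note the probe count is 2t, independent of the degree: that degree-insensitivity is what makes the black-box model attractive for sparse polynomials.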
ACM Communications in Computer Algebra, 2011
We demonstrate new routines for sparse multivariate polynomial multiplication and division over the integers that we have integrated into Maple 14 through the expand and divide commands. These routines are currently the fastest available, and the multiplication routine is parallelized with superlinear speedup. The performance of Maple is significantly improved. We describe our polynomial data structure and compare it with Maple's. Then we present benchmarks comparing Maple 14 with Maple 13, Magma, Mathematica, Singular, Pari, and Trip.