2014, Computer Mathematics
We demonstrate how a new data structure for sparse distributed polynomials in the Maple kernel significantly accelerates several key Maple library routines. The POLY data structure and its associated kernel operations (degree, coeff, subs, has, diff, eval, ...) are programmed for high scalability with very low overhead. This enables polynomials to have tens of millions of terms, increases parallel speedup in existing routines, and dramatically improves the performance of high-level Maple library routines.
ACM Communications in Computer Algebra, 2011
We demonstrate new routines for sparse multivariate polynomial multiplication and division over the integers that we have integrated into Maple 14 through the expand and divide commands. These routines are currently the fastest available, and the multiplication routine is parallelized with superlinear speedup. The performance of Maple is significantly improved. We describe our polynomial data structure and compare it with Maple's. Then we present benchmarks comparing Maple 14 with Maple 13, Magma, Mathematica, Singular, Pari, and Trip.
ACM Communications in Computer Algebra, 2013
We demonstrate how a new data structure for sparse distributed polynomials in the Maple kernel significantly accelerates a large subset of Maple library routines. The POLY data structure and its associated kernel operations (degree, coeff, subs, has, diff, eval, ...) are programmed for high scalability, allowing polynomials to have hundreds of millions of terms, and very low overhead, increasing parallel speedup in existing routines and improving the performance of high level Maple library routines.
ACM SIGSAM Bulletin, 2009
One of the main successes of the computer algebra community in the last 30 years has been the discovery of algorithms, called modular methods, that keep the swell of intermediate expressions under control. Without these methods, many applications of computer algebra would not be possible and the impact of computer algebra in scientific computing would be severely limited.
Proceedings of the second international symposium on Parallel symbolic computation - PASCO '97, 1997
We ported the computer algebra system Maple V to the Intel Paragon, a massively parallel, distributed memory machine. In order to take advantage of the parallel architecture, we extended the Maple kernel with a set of message passing primitives based on the Paragon's native message passing library. Using these primitives, we implemented a parallel version of Karatsuba multiplication for univariate polynomials over Z_p. Our speedup timings illustrate the practicability of our approach. On top of the message passing primitives we have implemented a higher level model of parallel processing based on the manager-worker scheme; a Maple application on one node of the parallel machine submits jobs to Maple processes residing on different nodes, then asynchronously collects the results. This model proves to be convenient for interactive usage of a distributed memory machine. Apart from the message passing parallelism we also use localized multi-threading to achieve symmetric multiprocessing within each node of the Paragon. We combine both approaches and apply them to the multiplication of large bivariate polynomials over small prime fields.
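For readers unfamiliar with the algorithm the abstract parallelizes, here is a minimal sequential sketch of Karatsuba multiplication for dense univariate polynomials over Z_p. This is illustrative Python, not the Paragon implementation; the modulus P and the base-case cutoff are arbitrary choices for the example. Polynomials are coefficient lists, with index i holding the coefficient of x^i.

```python
P = 101  # a small prime modulus, chosen only for illustration

def add(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [(a + b) % P for a, b in zip(f, g)]

def sub(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [(a - b) % P for a, b in zip(f, g)]

def shift(f, k):  # multiply by x^k
    return [0] * k + f

def karatsuba(f, g):
    if len(f) <= 4 or len(g) <= 4:  # classical multiplication base case
        r = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                r[i + j] = (r[i + j] + a * b) % P
        return r
    m = max(len(f), len(g)) // 2
    f0, f1 = f[:m], f[m:]  # split f = f0 + x^m * f1
    g0, g1 = g[:m], g[m:]
    low = karatsuba(f0, g0)
    high = karatsuba(f1, g1)
    # (f0+f1)(g0+g1) - low - high = f0*g1 + f1*g0: three recursive
    # products instead of four, the source of the speedup
    mid = sub(sub(karatsuba(add(f0, f1), add(g0, g1)), low), high)
    return add(add(low, shift(mid, m)), shift(high, 2 * m))

print(karatsuba([1, 1], [1, 1]))  # (1+x)^2 -> [1, 2, 1]
```

In the paper's setting, the recursive products are what get farmed out as jobs to Maple processes on different Paragon nodes.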
ACM Communications in Computer Algebra, 2015
The principal data structure in Maple used to represent polynomials and general mathematical expressions involving functions like √x, sin(x), e^(2x), y(x), etc., is known to the Maple developers as the sum-of-products data structure. Gaston Gonnet, as the primary author of the Maple kernel, designed and implemented this data structure in the early 80s. As part of the process of simplifying a mathematical formula, he represented every Maple object and every sub-object uniquely in memory. This makes testing for equality, which is used in many operations, very fast. In this article, on the occasion of Gaston's retirement, we present details of his design, its pros and cons, and changes we have made to it over the years. One of the cons is that the sum-of-products data structure is not nearly as efficient for multiplying multivariate polynomials as the representations used by special-purpose computer algebra systems. We describe the new data structure called POLY which we added to Maple 17 (released 2013) to improve performance for polynomials in Maple, and recent work done for Maple 18 (released 2014).
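The key idea behind POLY is packing each monomial into a single machine word. A minimal sketch of that idea, in illustrative Python rather than Maple kernel code: the total degree goes in the top bit field and one field per exponent sits below it, so comparing packed words as plain integers gives the graded lexicographic term order, and multiplying two monomials is a single integer addition. The field width BITS is an assumption here; Maple sizes the fields per polynomial.

```python
BITS = 16  # bits per exponent field; an assumption for this sketch

def pack(exps):
    """Pack an exponent vector, with the total degree in the top field."""
    word = sum(exps)                 # total degree field
    for e in exps:
        word = (word << BITS) | e    # one field per exponent
    return word

def unpack(word, n):
    """Recover the exponent vector from a packed word."""
    exps = []
    for _ in range(n):
        exps.append(word & ((1 << BITS) - 1))
        word >>= BITS
    return list(reversed(exps))

# x^2*y*z^3 times x*y^4: one integer addition multiplies the monomials
a = pack([2, 1, 3])
b = pack([1, 4, 0])
print(unpack(a + b, 3))                    # -> [3, 5, 3]
print(pack([2, 1, 3]) > pack([1, 4, 0]))   # graded-lex comparison -> True
```

The addition trick works because the degree fields and exponent fields add independently as long as no field overflows, which is why the kernel must choose the field width from the degrees actually occurring in the polynomial.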
ACM Communications in Computer Algebra, 2017
We employ two techniques to dramatically improve Maple's performance on the Fermat benchmarks for simplifying rational expressions. First, we factor expanded polynomials to ensure that gcds are identified and cancelled automatically. Second, we replace all expanded polynomials by new variables and normalize the result. To undo the substitutions, we use a C routine for sparse multivariate division by a set of polynomials. On the first Fermat benchmark, the resulting times are 17x faster than Fermat and 39x faster than Magma.
2009
We present a high performance algorithm for multiplying sparse distributed polynomials using a multicore processor. Each core uses a heap of pointers to multiply parts of the polynomials using its local cache. Intermediate results are written to buffers in shared cache and the cores take turns combining them to form the result. A cooperative approach is used to balance the load and improve scalability, and the extra cache from each core produces a superlinear speedup in practice. We present benchmarks comparing our parallel routine to a sequential version and to the routines of other computer algebra systems.
2010
We present a Las Vegas algorithm for interpolating a sparse multivariate polynomial over a finite field, represented with a black box. Our algorithm modifies the 1988 algorithm of Ben-Or and Tiwari for interpolating polynomials over rings of characteristic zero so that it works in characteristic p, at the cost of additional probes.
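To see why characteristic p forces extra probes, it helps to look at the step of the characteristic-zero Ben-Or/Tiwari algorithm that breaks. Over Z, the black box is probed at powers of the point (2, 3, 5, ...), and each term's recovered "monomial value" is a product of distinct primes, so the exponent vector can be read off by trial division. The following is an illustrative Python sketch of just that exponent-recovery step (the function name is ours, not from the paper); modulo a small prime this unique factorization is lost, which is what the modified algorithm works around.

```python
PRIMES = [2, 3, 5, 7, 11]  # one prime per variable

def exponents_from_monomial_value(m, nvars):
    """Recover (e1, ..., en) from m = PRIMES[0]**e1 * ... over Z."""
    exps = []
    for p in PRIMES[:nvars]:
        e = 0
        while m % p == 0:  # trial division by this variable's prime
            m //= p
            e += 1
        exps.append(e)
    assert m == 1, "value was not a monomial evaluated at (2, 3, 5, ...)"
    return exps

# the monomial x^2 * y * z^4 evaluated at (2, 3, 5):
value = 2**2 * 3**1 * 5**4
print(exponents_from_monomial_value(value, 3))  # -> [2, 1, 4]
```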
[Now twelve years old, but still worth a look.] Exact symbolic computation with polynomials and matrices over polynomial rings has wide applicability to many fields. By "exact symbolic" we mean computation with polynomials whose coefficients are integers (of any size), rational numbers, or elements of finite fields, as opposed to coefficients that are "floats" of a certain precision. Such computation is part of most computer algebra systems ("CA systems"). Over the last dozen years several large CA systems have become widely available, such as Axiom, Derive, Macsyma, Magma, Maple, Mathematica, and Reduce. They tend to have great breadth, be produced by profit-making companies, and be relatively expensive. However, most if not all of these systems have difficulty computing with the polynomials and matrices that arise in actual research. Real problems tend to produce large polynomials and large matrices that the general CA systems cannot handle. In the last few years several smaller CA systems focused on polynomials have been produced at universities by individual researchers or small teams. They run on Macs, PCs, and workstations. They are freeware or shareware. Several claim to be much more efficient than the large systems at exact polynomial computations. The list of these systems includes CoCoA, Fermat, MuPAD, Pari-GP, and Singular.
2004
One of the important issues facing the development of the grid as the computational framework of the future is availability of grid-enabled software. In this context, we discuss possible approaches to constructing a grid-enabled version of a computer algebra system. Our case study involves Maple: the proposed Maple2g package allows the connection between Maple and the computational grids based on the Globus Toolkit. We present the design of the Maple2g package and follow with a thorough discussion of its implementation.
Journal of Symbolic Computation, 1986
The Maple computer algebra system is described. Brief sample sessions show the user syntax and the mathematical power of the system for performing arithmetic, factoring, simplification, differentiation, integration, summation, solving algebraic equations, solving differential equations, series expansions, and matrix manipulations. Time and space statistics for each sample session show that the Maple system is very efficient in memory space utilisation, as well as in time. The Maple programming language is presented by describing the most commonly used features, using some non-trivial computations to illustrate the language features.
1. Overview
Maple is an interactive system for algebraic computation. It is designed for the calculations one learns in high school algebra and university mathematics: integers, rational numbers, polynomials, equations, sets, derivatives, indefinite integrals, etc. This article explains how to use Maple interactively as an algebraic calculator, for casual users. It also explains the basic elements of Maple's programming language, giving examples of how to use it for programming extended algebraic calculations. The Maple project was started at Waterloo in 1980. While the facilities provided by Maple are, at a superficial level, similar to the facilities of other computer algebra systems such as MACSYMA (Pavelle & Wang, 1985) and REDUCE (Fitch, 1985), several design features make Maple unique. The most distinctive feature is Maple's compactness, which reduces the basic computer memory requirements per user to a few hundred kilobytes rather than the few thousand kilobytes typically required by MACSYMA or REDUCE. (Of course, Maple's data space may grow to a few thousand kilobytes when performing difficult mathematical computations, if necessary, but this is not generally the case for the calculations required by undergraduate student users nor for many research calculations.)
Maple was also designed to allow portability to a variety of different operating systems. Concurrent with the above goals, Maple incorporates an extensive set of mathematical knowledge via a library of functions. The library functions are coded in the user-level Maple programming language, which was designed to facilitate the expression of, and the efficient execution of, mathematical operations. A consequence of Maple's design is user extensibility, since user-defined functions are equal in status to the system's library functions. These design goals led to several novel design features. Maple's fundamental data structures are tagged objects represented internally as dynamic vectors (variable-length arrays). Specifically, each instance of a data structure is a vector in which the first component encodes the following information: the length of the structure and the type of data object (such as sum, product, set, rational number, etc.).
2019
Maple 2019 has a new multivariate polynomial factorization algorithm for factoring polynomials in \(\mathbb {Z}[x_1,x_2,...,x_n]\), that is, polynomials in n variables with integer coefficients. The new algorithm, which we call MTSHL, was developed by the authors at Simon Fraser University. The algorithm and its sub-algorithms have been published in a sequence of papers [3, 4, 5]. It was integrated into the Maple library in early 2018 by Baris Tuncer under a MITACS internship with Maplesoft. MTSHL is now the default factoring algorithm in Maple 2019.
Lecture Notes in Computer Science, 2004
Eden is a parallel functional language extending Haskell with processes. This paper describes the implementation of an interface between the Eden language and the Maple system. The aim of this effort is to parallelize Maple programs by using Eden as a coordination language. The idea is to leave the computationally intensive functions of the (sequential) algorithm in Maple and to use Eden skeletons to set up the parallel process topology on the available parallel machine. A Maple system is instantiated on each processor. Eden processes are responsible for invoking Maple functions with appropriate parameters and for getting back the results, as well as for performing all the data communication between processes. The interface provides the following services: instantiating and terminating a Maple system on each processor, performing data conversion between Maple and Haskell objects, invoking Maple functions from Eden, and ensuring mutual exclusion in the access to Maple from different concurrent threads on the local processor. A parallel version of Buchberger's algorithm to compute Gröbner bases is presented to illustrate the use of the interface.
1996
We ported the computer algebra system Maple V to the Intel Paragon, a massively parallel distributed memory machine. In order to take advantage of the parallel architecture, we extended the Maple kernel with a set of message passing primitives based on the Paragon's native message passing library. Using these primitives we implemented a parallel version of Karatsuba multiplication for univariate polynomials over Z_p. Our speedup timings illustrate the practicability of our approach. On top of the message passing primitives we have implemented a higher level model of parallel processing based on the manager-worker scheme: a managing Maple process on one node of the parallel machine submits processing requests to Maple processes residing on different nodes, then asynchronously collects the results. This model proves to be convenient for interactive usage of a distributed memory machine.
We comment on the implementation of various algorithms in multivariate polynomial theory. Specifically, we describe a modular computation of triangular sets and possible applications. Next we discuss an implementation of the F4 algorithm for computing Gröbner bases. We also give examples of how to use Gröbner bases for vanishing ideals in polynomial and rational function interpolation.
1984
Maple is a symbolic computation system under development at the University of Waterloo. A primary goal of the system is to be compact without sacrificing the functionality required for serious symbolic computation. The system has a modular design such that most of the mathematical functions exist as external library functions to be loaded only when they are invoked. The compiled kernel of the system is about 100K bytes in size. The library functions are interpreted. Efficiency is achieved through techniques including the identification of critical functions that are put into the compiled kernel, extensive use of hashing techniques, and careful design of the mathematical algorithms. Timing comparisons with other symbolic computation systems show that time efficiency is achieved as well as space efficiency.
2000
Maple is a comprehensive general purpose computer algebra system. It is used primarily in education and scientific research in the sciences, in mathematics, and in engineering. Maple can do both symbolic and numerical calculations and has facilities for 2 and 3-dimensional graphical output. The newest version of Maple, Maple V Release 2, sports a new user interface that integrates
2018
Our goal is to develop a high-performance code for factoring a multivariate polynomial in n variables with integer coefficients which is polynomial time in the sparse case and efficient in the dense case. Maple, Magma, Macsyma, Singular and Mathematica all implement Wang's multivariate Hensel lifting, which, for sparse polynomials, can be exponential in n. Wang's algorithm is also highly sequential. In this work we reorganize multivariate Hensel lifting to facilitate a high-performance parallel implementation. We identify multivariate polynomial evaluation and bivariate Hensel lifting as two core components. We have also developed a library of algorithms for polynomial arithmetic which allow us to assign each core an independent task with all the memory it needs in advance so that memory management is eliminated and all important operations operate on dense arrays of 64 bit integers. We have implemented our algorithm and library using Cilk C for the case of two monic factors. We disc...
SIAM Journal on Computing, 1983
It is shown that any multivariate polynomial of degree d that can be computed sequentially in C steps can be computed in parallel in O((log d)(log C + log d)) steps using only (Cd)^O(1) processors.
Lecture Notes in Computer Science, 2014
The Basic Polynomial Algebra Subprograms (BPAS) library provides arithmetic operations (multiplication, division, root isolation, etc.) for univariate and multivariate polynomials over common types of coefficients (prime fields, complex rational numbers, rational functions, etc.). The code is mainly written in CilkPlus [10] targeting multicore processors. The current distribution focuses on dense polynomials; the sparse case is work in progress. A strong emphasis is put on adaptive algorithms, as the library aims at supporting a wide variety of situations in terms of problem sizes and available computing resources.