2014, Springer eBooks
We demonstrate how a new data structure for sparse distributed polynomials in the Maple kernel significantly accelerates several key Maple library routines. The POLY data structure and its associated kernel operations (degree, coeff, subs, has, diff, eval, ...) are programmed for high scalability with very low overhead. This enables polynomials to have tens of millions of terms, increases parallel speedup in existing routines, and dramatically improves the performance of high-level Maple library routines.
ACM Communications in Computer Algebra, 2011
We demonstrate new routines for sparse multivariate polynomial multiplication and division over the integers that we have integrated into Maple 14 through the expand and divide commands. These routines are currently the fastest available, and the multiplication routine is parallelized with superlinear speedup. The performance of Maple is significantly improved. We describe our polynomial data structure and compare it with Maple's. Then we present benchmarks comparing Maple 14 with Maple 13, Magma, Mathematica, Singular, Pari, and Trip.
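The abstract does not spell out the algorithm, but heap-based merging of partial products is the standard technique for fast sparse multiplication and is the approach Monagan and Pearce describe in their papers. A minimal univariate Python sketch of the idea, with polynomials as (exponent, coefficient) lists sorted by decreasing exponent, might look like this (the function name and representation are illustrative, not Maple's internals):

```python
import heapq

def mul_sparse(f, g):
    """Multiply sparse polynomials f and g, each a list of
    (exponent, coeff) pairs sorted by decreasing exponent, by merging
    the partial products f[i]*g[j] with a binary heap so the result
    comes out already sorted and like terms are combined on the fly."""
    if not f or not g:
        return []
    # Heap entries are (-(e_i + e_j), i, j); exponents are negated so
    # Python's min-heap yields the largest exponent first.
    heap = [(-(f[i][0] + g[0][0]), i, 0) for i in range(len(f))]
    heapq.heapify(heap)
    result = []
    while heap:
        e, _, _ = heap[0]
        coeff = 0
        # Combine every product that shares the current exponent.
        while heap and heap[0][0] == e:
            _, i, j = heapq.heappop(heap)
            coeff += f[i][1] * g[j][1]
            if j + 1 < len(g):
                heapq.heappush(heap, (-(f[i][0] + g[j + 1][0]), i, j + 1))
        if coeff != 0:
            result.append((-e, coeff))
    return result
```

Only one heap entry per term of f is live at a time, so the working memory stays proportional to #f rather than #f·#g; the multivariate case works the same way once exponent vectors are packed into single words.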
ACM Communications in Computer Algebra, 2013
We demonstrate how a new data structure for sparse distributed polynomials in the Maple kernel significantly accelerates a large subset of Maple library routines. The POLY data structure and its associated kernel operations (degree, coeff, subs, has, diff, eval, ...) are programmed for high scalability, allowing polynomials to have hundreds of millions of terms, and very low overhead, increasing parallel speedup in existing routines and improving the performance of high level Maple library routines.
ACM Sigsam Bulletin, 2009
One of the main successes of the computer algebra community in the last 30 years has been the discovery of algorithms, called modular methods, that keep the swell of intermediate expressions under control. Without these methods, many applications of computer algebra would not be possible and the impact of computer algebra in scientific computing would be severely limited.
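To illustrate the modular idea on the smallest possible example: one computes with images of a result modulo several word-sized primes, so no intermediate value exceeds machine size, and recovers the true answer at the end with the Chinese remainder theorem. A hedged Python sketch (the names are illustrative):

```python
def crt_pair(r1, m1, r2, m2):
    """Combine r mod m1 and r mod m2 (m1, m2 coprime) into r mod m1*m2."""
    # Solve r1 + m1*t == r2 (mod m2) for t using the modular inverse of m1.
    t = ((r2 - r1) * pow(m1, -1, m2)) % m2
    return (r1 + m1 * t) % (m1 * m2), m1 * m2

def symmetric(r, m):
    """Map a residue into the symmetric range (-m/2, m/2] so that
    negative results are recovered correctly."""
    return r - m if r > m // 2 else r

# Recover the integer -1234567 from its images modulo three small primes;
# the product of the primes must exceed twice the answer's magnitude.
target = -1234567
primes = [10007, 10009, 10037]
r, m = target % primes[0], primes[0]
for p in primes[1:]:
    r, m = crt_pair(r, m, target % p, p)
print(symmetric(r, m))
```

Real modular algorithms (GCDs, resultants, Gröbner bases) do the interesting work modulo each prime independently and reconstruct only the final coefficients, which is exactly what keeps intermediate expressions from swelling.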
Proceedings of the second international symposium on Parallel symbolic computation - PASCO '97, 1997
We ported the computer algebra system Maple V to the Intel Paragon, a massively parallel, distributed memory machine. In order to take advantage of the parallel architecture, we extended the Maple kernel with a set of message passing primitives based on the Paragon's native message passing library. Using these primitives, we implemented a parallel version of Karatsuba multiplication for univariate polynomials over Z_p. Our speedup timings illustrate the practicability of our approach. On top of the message passing primitives we have implemented a higher level model of parallel processing based on the manager-worker scheme; a Maple application on one node of the parallel machine submits jobs to Maple processes residing on different nodes, then asynchronously collects the results. This model proves to be convenient for interactive usage of a distributed memory machine. Apart from the message passing parallelism we also use localized multi-threading to achieve symmetric multiprocessing within each node of the Paragon. We combine both approaches and apply them to the multiplication of large bivariate polynomials over small prime fields.
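The Karatsuba scheme mentioned above splits each polynomial in half and trades four half-size multiplications for three, which is also what makes it easy to parallelize: the three sub-products are independent. A small sequential Python sketch over Z_p with dense coefficient lists, lowest degree first (this is the textbook algorithm, not the Paragon code):

```python
from itertools import zip_longest

def poly_add(a, b, p):
    return [(x + y) % p for x, y in zip_longest(a, b, fillvalue=0)]

def poly_sub(a, b, p):
    return [(x - y) % p for x, y in zip_longest(a, b, fillvalue=0)]

def karatsuba_zp(a, b, p):
    """Multiply dense univariate polynomials a, b over Z_p.
    Writing a = a0 + x^k*a1 and b = b0 + x^k*b1, the product needs only
    three recursive multiplications: a0*b0, a1*b1, and (a0+a1)*(b0+b1)."""
    if not a or not b:
        return []
    n = max(len(a), len(b))
    if n <= 4:  # schoolbook base case for short operands
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] = (c[i + j] + ai * bj) % p
        return c
    k = n // 2
    a0, a1 = a[:k], a[k:]
    b0, b1 = b[:k], b[k:]
    z0 = karatsuba_zp(a0, b0, p)                       # low halves
    z2 = karatsuba_zp(a1, b1, p)                       # high halves
    mid = karatsuba_zp(poly_add(a0, a1, p), poly_add(b0, b1, p), p)
    z1 = poly_sub(poly_sub(mid, z0, p), z2, p)         # cross terms
    c = [0] * (len(a) + len(b) - 1)
    for i, v in enumerate(z0):
        c[i] = (c[i] + v) % p
    for i, v in enumerate(z1):
        c[i + k] = (c[i + k] + v) % p
    for i, v in enumerate(z2):
        c[i + 2 * k] = (c[i + 2 * k] + v) % p
    return c
```

In a manager-worker setting of the kind the paper describes, the manager would hand z0, z1, and z2 to different worker nodes and combine the results on return.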
ACM Communications in Computer Algebra, 2015
The principal data structure in Maple used to represent polynomials and general mathematical expressions involving functions like √x, sin(x), e^(2x), y(x), etc., is known to the Maple developers as the sum-of-products data structure. Gaston Gonnet, as the primary author of the Maple kernel, designed and implemented this data structure in the early 80s. As part of the process of simplifying a mathematical formula, he represented every Maple object and every sub-object uniquely in memory. This makes testing for equality, which is used in many operations, very fast. In this article, on occasion of Gaston's retirement, we present details of his design, its pros and cons, and changes we have made to it over the years. One of the cons is that the sum-of-products data structure is not nearly as efficient for multiplying multivariate polynomials as other special purpose computer algebra systems. We describe the new data structure called POLY which we added to Maple 17 (released 2013) to improve performance for polynomials in Maple, and recent work done for Maple 18 (released 2014).
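The key idea behind POLY, as described in the published accounts, is to pack each monomial's total degree and exponents into a single machine word, so that comparing two terms in graded lexicographic order becomes one integer comparison and degree queries read one bit field. A toy Python sketch of the packing (the 16-bit field width and layout are illustrative choices, not Maple's exact encoding):

```python
BITS = 16  # bits per exponent field in this toy layout

def pack(exps):
    """Pack a monomial's exponent vector into one integer laid out as
    [total degree | e1 | e2 | ... | en], highest field first, so that
    ordinary integer comparison orders monomials in graded lex order."""
    word = sum(exps)          # total degree goes in the top field
    for e in exps:
        word = (word << BITS) | e
    return word

def unpack(word, nvars):
    """Recover the exponent vector from a packed monomial word."""
    exps = []
    for _ in range(nvars):
        exps.append(word & ((1 << BITS) - 1))
        word >>= BITS
    return list(reversed(exps))
```

Because the total degree sits in the most significant field, x*y*z (degree 3) packs smaller than x^4 (degree 4), and ties in degree fall back to lexicographic comparison of the exponent fields; storing a polynomial as a flat array of such words is what makes the kernel operations in the abstracts above so cheap.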
Journal of Symbolic Computation, 1986
The Maple computer algebra system is described. Brief sample sessions show the user syntax and the mathematical power of the system for performing arithmetic, factoring, simplification, differentiation, integration, summation, solving algebraic equations, solving differential equations, series expansions, and matrix manipulations. Time and space statistics for each sample session show that the Maple system is very efficient in memory space utilisation, as well as in time. The Maple programming language is presented by describing the most commonly used features, using some non-trivial computations to illustrate the language features.

1. Overview

Maple is an interactive system for algebraic computation. It is designed for the calculations one learns in high school algebra and university mathematics: integers, rational numbers, polynomials, equations, sets, derivatives, indefinite integrals, etc. This article explains how to use Maple interactively as an algebraic calculator, for casual users. It also explains the basic elements of Maple's programming language, giving examples of how to use it for programming extended algebraic calculations. The Maple project was started at Waterloo in 1980. While the facilities provided by Maple are, at a superficial level, similar to the facilities of other computer algebra systems such as MACSYMA (Pavelle & Wang, 1985) and REDUCE (Fitch, 1985), several design features make Maple unique. The most distinctive feature is Maple's compactness, which reduces the basic computer memory requirements per user to a few hundred kilobytes rather than the few thousand kilobytes typically required by MACSYMA or REDUCE. (Of course, Maple's data space may grow to a few thousand kilobytes when performing difficult mathematical computations, if necessary, but this is not generally the case for the calculations required by undergraduate student users nor for many research calculations.)
Maple was also designed to allow portability to a variety of different operating systems. Concurrent with the above goals, Maple incorporates an extensive set of mathematical knowledge via a library of functions. The library functions are coded in the user-level Maple programming language, which was designed to facilitate the expression of, and the efficient execution of, mathematical operations. A consequence of Maple's design is user extensibility, since user-defined functions are equal in status to the system's library functions. These design goals led to several novel design features. Maple's fundamental data structures are tagged objects represented internally as dynamic vectors (variable-length arrays). Specifically, each instance of a data structure is a vector in which the first component encodes the following information: the length of the structure and the type of data object (such as sum, product, set, rational number, etc.).
2019
Maple 2019 has a new multivariate polynomial factorization algorithm for factoring polynomials in \(\mathbb {Z}[x_1,x_2,...,x_n]\), that is, polynomials in n variables with integer coefficients. The new algorithm, which we call MTSHL, was developed by the authors at Simon Fraser University. The algorithm and its sub-algorithms have been published in a sequence of papers [3, 4, 5]. It was integrated into the Maple library in early 2018 by Baris Tuncer under a MITACS internship with Maplesoft. MTSHL is now the default factoring algorithm in Maple 2019.
Lecture Notes in Computer Science, 2004
Eden is a parallel functional language extending Haskell with processes. This paper describes the implementation of an interface between the Eden language and the Maple system. The aim of this effort is to parallelize Maple programs by using Eden as the coordination language. The idea is to leave the computationally intensive functions of the (sequential) algorithm in Maple and to use Eden skeletons to set up the parallel process topology on the available parallel machine. A Maple system is instantiated in each processor. Eden processes are responsible for invoking Maple functions with appropriate parameters and for getting back the results, as well as for performing all the data communication between processes. The interface provides the following services: instantiating and terminating a Maple system in each processor, performing data conversion between Maple and Haskell objects, invoking Maple functions from Eden, and ensuring mutual exclusion in the access to Maple from different concurrent threads in the local processor. A parallel version of Buchberger's algorithm to compute Gröbner bases is presented to illustrate the use of the interface.
1996
We ported the computer algebra system Maple V to the Intel Paragon, a massively parallel distributed memory machine. In order to take advantage of the parallel architecture, we extended the Maple kernel with a set of message passing primitives based on the Paragon's native message passing library. Using these primitives, we implemented a parallel version of Karatsuba multiplication for univariate polynomials over Z_p. Our speedup timings illustrate the practicability of our approach. On top of the message passing primitives we have implemented a higher level model of parallel processing based on the manager-worker scheme: a managing Maple process on one node of the parallel machine submits processing requests to Maple processes residing on different nodes, then asynchronously collects the results. This model proves to be convenient for interactive usage of a distributed memory machine.
We comment on the implementation of various algorithms in multivariate polynomial theory. Specifically, we describe a modular computation of triangular sets and possible applications. Next we discuss an implementation of the F4 algorithm for computing Gröbner bases. We also give examples of how to use Gröbner bases for vanishing ideals in polynomial and rational function interpolation.
SIAM Journal on Computing, 1983
Lecture Notes in Computer Science, 2014
ACM Communications in Computer Algebra, 2017
Special Talk, 2019
Scientific Programming, 2005
Computers & Mathematics with Applications, 1997
Lecture Notes in Computer Science, 2003
arXiv (Cornell University), 2016