2002, Mathematical Software - Proceedings of the First International Congress of Mathematical Software
We describe SYNAPS (Symbolic and Numeric APplicationS), an environment for symbolic and numeric computation developed in C++. Its aim is to provide a coherent platform integrating many of the software tools now freely available for scientific computing. The approach taken here is inspired by the recent software-development paradigm of active libraries. In this paper, we explain the design choices of the kernel and their impact on the development of generic and efficient code for the treatment of algebraic objects such as vectors, matrices, and univariate and multivariate polynomials. Implementation details illustrate the performance of the approach.
The IMA Volumes in Mathematics and its Applications, 2008
We present an overview of the open source library synaps. We describe some of the representative algorithms of the library and illustrate them on some explicit computations, such as solving polynomials and computing geometric information on implicit curves and surfaces. Moreover, we describe the design and the techniques we have developed in order to handle a hierarchy of generic and specialized data-structures and routines, based on a view mechanism. This allows us to construct dedicated plugins, which can be loaded easily in an external tool. Finally, we show how this design allows us to embed the algebraic operations, as a dedicated plugin, into the external geometric modeler axel.
Preface. This book is intended to be an easy, concise, but rather complete, introduction to the ISO/ANSI C++ programming language with special emphasis on object-oriented numeric computation for students and professionals in science and engineering. The description of the language is platform-independent; thus it applies to different operating systems such as UNIX, Linux, MacOS, Windows, and DOS, provided a standard C++ compiler is available. The prerequisite of this book is elementary knowledge of calculus and linear algebra. However, this prerequisite is hardly necessary if the book is used as a textbook for teaching C++ and all the sections on numeric methods are skipped. Programming experience in another language such as FORTRAN, C, Ada, Pascal, Maple, or Matlab will certainly help, but is not presumed.
Lecture Notes in Computer Science, 2003
The aim of this communication is to present a Symbolic Linear Algebra package, written in MAPLE V.5 and also working in release 7, with a general but especially educational purpose. Its goal is to run within MAPLE the different Linear Algebra algorithms developed in [OJ]. The implemented algorithms are also valid over finite characteristic and, for educational purposes, may be run interactively. Three topics are covered by the package: equivalent matrices (echelon matrices, rank, LU decomposition, linear systems, ...), similar matrices (eigenvalues and eigenvectors, rational or Frobenius form, irreducible form, Jordan form, ...), and congruent matrices (symmetric and Hermitian matrices, Cholesky and QR decomposition, SVD, Gram-Schmidt over arbitrary Euclidean metrics, orthogonal matrices, quadric surfaces, ...).
Future Generation Computer Systems, 2003
During the first years of the computer era, no tools were available to aid scientists in the development of their computer codes: code production was largely a handcrafted job. Recent years have seen the emergence of many concepts whose aim is to reduce the human effort in building scientific systems. These ideas range from the reusable-component concept to the multi-purpose problem-solving environment (PSE). This paper presents an approach that uses the full power of computer algebra systems (CASs) to easily specify a scientific problem solver and to automatically generate a computer program in a standard language (such as Fortran or C) that implements it. An application using a well-known CAS, Mathematica, is presented to illustrate the methodology.
Computer Aided Design in Control and Engineering Systems, 1986
MAX is an interpreted programming language for polynomial matrix manipulations. It contains especially those operations that are frequently needed in linear control. MAX is written in the C programming language under the VAX/VMS operating system. In this paper the design, implementation, and available operations of MAX are described.
ISO JTC1 SC22 WG21, 2021
Change dot, dotc, vector_norm2, and vector_abs_sum to imitate reduce, so that they return their result instead of taking an output parameter. Users may set the result type via an optional init parameter. Minor changes to "expression template" classes, based on implementation experience. Briefly address the LEWGI request of exploring concepts for input arguments; a lazy ranges-style API was NOT explored. (blas_interface.md, 4/14/2021)

Revision 2 (pre-Cologne), to be submitted 2020-01-13:
- Add "Future work" section.
- Remove "Options and votes" section (which was addressed in SG6, SG14, and LEWGI).
- Remove basic_mdarray overloads.
- Remove batched linear algebra operations.
- Remove over- and underflow requirement for vector_norm2.
- Mandate any extent compatibility checks that can be done at compile time.
- Add missing functions {symmetric,hermitian}_matrix_rank_k_update and triangular_matrix_{left,right}_product.
- Remove packed_view function.
- Fix wording for {conjugate,transpose,conjugate_transpose}_view, so that implementations may optimize the return type. Make sure that transpose_view of a layout_blas_packed matrix returns a layout_blas_packed matrix with opposite Triangle and StorageOrder.
- Remove second template parameter T from accessor_conjugate.
- Make scaled_scalar and conjugated_scalar exposition-only.
- Add in-place overloads of triangular_matrix_matrix_{left,right}_solve, triangular_matrix_{left,right}_product, and triangular_matrix_vector_solve.
- Add alpha overloads to {symmetric,hermitian}_matrix_rank_{1,k}_update.
- Add Cholesky factorization and solve examples.

Revision 3 (electronic), to be submitted 2021-04-15:
- Per LEWG request, add a section on our investigation of constraining template parameters with concepts, in the manner of P1813R0 with the numeric algorithms. We concluded that we disagree with the approach of P1813R0, and that the Standard's current GENERALIZED_SUM approach better expresses numeric algorithms' behavior.
- Update references to the current revision of P0009 (mdspan).
- Per LEWG request, introduce the std::linalg namespace and put everything in there.
- Per LEWG request, replace the linalg_ prefix with the aforementioned namespace. We renamed linalg_add to add, linalg_copy to copy, and linalg_swap to swap_elements.
- Per LEWG request, do not use _view as a suffix, to avoid confusion with "views" in the sense of Ranges. We renamed conjugate_view to conjugated, conjugate_transpose_view to conjugate_transposed, scaled_view to scaled, and transpose_view to transposed.
- Change wording from "then implementations will use T's precision or greater for intermediate terms in the sum" to "then intermediate terms in the sum use T's precision or greater." Thanks to Jens Maurer for this suggestion (and many others!).
- Before, a Note on vector_norm2 said, "We recommend that implementers document their guarantees regarding overflow and underflow of vector_norm2 for floating-point return types." Implementations always document "implementation-defined behavior" per [defs.impl.defined]. (Thanks to Jens Maurer for pointing out that "We recommend..." does not belong in the Standard.) Thus, we changed this from a Note to normative wording in Remarks: "If either in_vector_t::element_type or T are floating-point types or complex versions thereof, then any guarantees regarding overflow and underflow of vector_norm2 are implementation-defined."
- Define the return types of the dot, dotc, vector_norm2, and vector_abs_sum overloads with auto return type.
- Remove the explicitly stated constraint on add and copy that the rank of the array arguments be no more than 2. This is redundant, because we already impose this via the existing constraints on template parameters named in_object*_t, inout_object*_t, or out_object*_t. If we later wish to relax this restriction, then we only have to do so in one place.
- Add vector_sum_of_squares. First, this gives implementers a path to implementing vector_norm2 in a way that achieves the over/underflow guarantees intended by the BLAS Standard. Second, this is a useful algorithm in itself for parallelizing vector 2-norm computation.
- Add matrix_frob_norm, matrix_one_norm, and matrix_inf_norm (thanks to coauthor Piotr Luszczek).
- Address the LEWG request for us to investigate support for GPU memory. See section "Explicit support for asynchronous return of scalar values."
Journal of Symbolic Computation, 2013
We present the results of the first four years of the European research project SCIEnce - Symbolic Computation Infrastructure in Europe (http://www.symbolic-computation.org), which aims to provide key infrastructure for symbolic computation research. A primary outcome of the project is that we have developed a new way of combining computer algebra systems using the Symbolic Computation Software Composability Protocol (SCSCP), in which both protocol messages and data are encoded in the OpenMath format. We describe the SCSCP middleware and APIs, outline implementations for various Computer Algebra Systems (CAS), and show how SCSCP-compliant components may be combined to solve scientific problems that cannot be solved within a single CAS, or may be organised into a system for distributed parallel computations. Additionally, we present several domain-specific parallel skeletons that capture commonly used symbolic computations. To ease use and to maximise interoperability, these skeletons themselves are provided as SCSCP services and take SCSCP services as arguments.
NASA Formal Methods, 2011
We present the growing C++ library GiNaCRA, which provides efficient and easy-to-integrate data structures and methods for real algebra. It is based on the C++ library GiNaC, supporting the symbolic representation and manipulation of polynomials. In contrast to other similar tools, our open source library aids exact, real algebraic computations based on an appropriate data type representing real zeros of polynomials. The only non-standard library GiNaCRA depends on is GiNaC, which makes the installation and usage of our library ...
Solving polynomial systems with CoCoALib (a C++ library from algebra to applications)
=============================================================
We present algebraic, exact methods for solving polynomial systems and analyzing their structure, as well as the inverse problem, i.e. finding polynomials vanishing on a given set of points; we then discuss recent results on the interaction between these algebraic techniques and approximation issues. We show how to perform these computations using the Computer Algebra System CoCoA, and also with its core C++ library, CoCoALib.
2003
Ev3 is a callable C++ library for performing symbolic computation (calculation of symbolic derivatives and various expression simplification). The purpose of this library is to furnish a fast means to use symbolic derivatives to third-party scientific software (e.g. nonlinear optimization, solution of nonlinear equations). It is small, easy to interface, even reasonably easy to change; it is written in C++ and the source code is available. One feature that makes Ev3 very efficient in algebraic manipulation is that the data structures are based on n-ary trees.
After a pause in 2008 (owing to the political situation in the Caucasus region, where CASC 2008 was supposed to take place), CASC 2009 continued the series of international workshops on the latest advances and trends both in the methods of computer algebra and in applications of computer algebra systems (CASs) to the solution of various problems in scientific computing. Science and Technology Agency, and several other institutions of Japan. In this connection, it was decided to hold the CASC 2009 Workshop in Japan, in the hope that it would help bring together non-Japanese and Japanese researchers working both in the areas of computer algebra (CA) methods and of various CA applications in the natural sciences and engineering.
Journal of Symbolic Computation, 1998
Some large-scale physical computations require algorithms performing symbolic computations with a particular class of algebraic formulas in a numerical code. Developing and implementing such algorithms in a numerical programming language is a tedious and error-prone task. The algorithms can instead be developed in a computer algebra system, and their correctness can be checked by comparison with the built-in facilities of the system, so that the system serves as an advanced debugging tool. After that, numerical code for the algorithms is automatically generated from the same source code. The proposed methodology is explained in detail on a simple example. Real applications to the calculation of matrix elements of the Coulomb interaction and of two-centre exchange integrals needed in atomic collision codes are described. The method makes the development and debugging of such algorithms easier and faster.
Springer eBooks, 2022
Science of Computer Programming, 2006
The role of computer algebra systems (CASs) is not limited to analyzing and solving mathematical and physical problems. They have also been used as tools in the development process of computer programs, from the specification phase through coding and testing. In this way one can exploit their powerful mathematical capabilities during the development phases and, on the other hand, take advantage of the speed of languages such as FORTRAN or C in the implementation. Among the mathematical features of CASs are transformations that allow one to optimize the final code instructions. In this paper we show some kinds of optimization that can be performed on new or existing algorithms, by extending techniques that compilers currently apply to optimize machine code. The results show that the CPU time taken by the optimized code is reduced by a factor that can reach 5. The optimizations are performed with a package built on a well-known CAS, Mathematica.
Proceedings of Scalable Parallel Libraries Conference, 2000
Designing a scalable and portable numerical library requires consideration of many factors, including the choice of parallel communication technology, data structures, and user interfaces. The PETSc library (Portable, Extensible Toolkit for Scientific Computation) makes use of modern software technology to provide a flexible and portable implementation. This talk will discuss the use of a meta-communication layer (allowing the user to choose different transport layers such as MPI, p4, PVM, or vendor-specific libraries) for portability; an aggressive data-structure-neutral implementation that minimizes dependence on particular data structures (even vectors), permitting the library to adapt to the user rather than the other way around; and the separation of implementation language from user-interface language. Examples are presented.
2008
We propose two high-level application programming interfaces (APIs) to use a graphics processing unit (GPU) as a coprocessor for dense linear algebra operations. Combined with an extension of the FLAME API and an implementation on top of NVIDIA CUBLAS, the result is an efficient and user-friendly tool to design, implement, and execute dense linear algebra operations on the current generation of NVIDIA graphics processors, of wide appeal to scientists and engineers. As an application of the developed APIs, we implement and evaluate the performance of three different variants of the Cholesky factorization.
ACM SIGSAM Bulletin, 2003
We present a new method for constructing a low-degree C^1 implicit spline representation of a given parametric planar curve. To ensure the low-degree condition, quadratic B-splines are used to approximate the given curve via orthogonal projection in Sobolev spaces. Adaptive knot removal, based on spline wavelets, is used to reduce the number of segments. The B-spline segments are implicitized. After multiplying the implicit B-spline segments by suitable polynomial factors, the resulting bivariate functions are joined along suitable transversal lines. This yields a globally C^1 bivariate function. References: B. Jüttler, J. Schicho and M. Shalaby, Spline Implicitization of Planar Curves, Curves and Surfaces 2002, St.
This report describes our work on the implementation of effective numerical routines for polynomials and polynomial matrices in the MATHEMATICA software. Such operations are required during the controller design process when the so-called polynomial or algebraic design methods are employed. This research is also motivated by the fact that MATHEMATICA's developers pay attention to control engineers' needs and produce the Control Systems Professional package for use with MATHEMATICA; a set of routines for the algebraic approach could, we believe, conveniently complement the existing set of programs primarily intended for state-space representations.
ACM Transactions on Mathematical Software, 2002