Genetic Programming and Evolvable Machines
We introduce GPLS (Genetic Programming for Linear Systems) as a GP system that finds mathematical expressions defining an iteration matrix. Stationary iterative methods use this iteration matrix to solve a system of linear equations numerically. GPLS aims at finding iteration matrices with a low spectral radius and a high sparsity, since these properties ensure a fast error reduction of the numerical solution method and enable the efficient implementation of the methods on parallel computer architectures. We study GPLS for various types of system matrices and find that it easily outperforms classical approaches like the Gauss–Seidel and Jacobi methods. GPLS not only finds iteration matrices for linear systems with a much lower spectral radius, but also iteration matrices for problems where classical approaches fail. Additionally, solutions found by GPLS for small problem instances also show good performance for larger instances of the same problem.
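The convergence criterion behind this abstract can be made concrete: a stationary method x_{k+1} = M x_k + c converges if and only if the spectral radius of the iteration matrix M is below 1. A minimal sketch (the 3x3 matrix is invented for illustration, not from the paper) computing the spectral radius of the classical Jacobi iteration matrix:

```python
import numpy as np

def jacobi_iteration_matrix(A):
    """Jacobi iteration matrix M = I - D^{-1} A, where D = diag(A)."""
    D_inv = np.diag(1.0 / np.diag(A))
    return np.eye(A.shape[0]) - D_inv @ A

def spectral_radius(M):
    """Largest eigenvalue magnitude; the stationary iteration
    x_{k+1} = M x_k + c converges iff this is < 1."""
    return max(abs(np.linalg.eigvals(M)))

# A strictly diagonally dominant test matrix: Jacobi is guaranteed to converge.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
rho = spectral_radius(jacobi_iteration_matrix(A))
```

For this matrix the spectral radius works out to 0.25 * sqrt(2), well below 1, so Jacobi converges quickly.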
International Journal of Information Technology, Modeling and Computing, 2016
Various algorithms are known for solving linear systems of equations. Iterative methods are recommended for large sparse linear systems, but for general n × m matrices the classical iterative algorithms are not applicable except in a few cases. The algorithm presented here is based on minimizing the residual of the solution and has characteristics that lend themselves to Genetic Algorithms; it is therefore well suited to the construction of parallel algorithms. In this paper, we describe a sequential version of the proposed algorithm and present its theoretical analysis. Moreover, we show some numerical results for the sequential algorithm, supply an improved algorithm, and compare the two algorithms.
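As a toy illustration of the residual-minimization idea (a generic evolutionary sketch, not the authors' algorithm; the matrix, population size, and mutation schedule are all invented), one can minimize ||Ax - b|| with a simple mutation-plus-selection loop:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
b = np.array([1.0, 2.0])

def residual(x):
    """Fitness: Euclidean norm of the residual Ax - b (lower is better)."""
    return np.linalg.norm(A @ x - b)

pop = rng.standard_normal((40, 2))                # random initial guesses
for gen in range(300):
    pop = pop[np.argsort([residual(x) for x in pop])]
    parents = pop[:10]                            # elitist selection
    sigma = 0.5 * 0.98 ** gen                     # shrinking mutation step
    children = (parents[rng.integers(0, 10, size=30)]
                + sigma * rng.standard_normal((30, 2)))
    pop = np.vstack([parents, children])

best = min(pop, key=residual)
```

Because selection and fitness evaluation act on the whole population independently, this kind of loop parallelizes naturally, which matches the abstract's motivation.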
Linear systems arise from the modeling of various problems in mathematics, engineering, and computer science. The Bi-Conjugate Gradient Stabilized (BiCGStab) method is an iterative method for solving linear systems, mainly large sparse ones. In this context, this paper proposes a parallel implementation of the BiCGStab method for solving large linear systems. The proposed implementation uses a Graphics Processing Unit (GPU) through CUDA-Matlab integration, where the method's operations are performed on the GPU cores by Matlab built-in functions. This parallel implementation aims to provide higher computational performance than the sequential implementation. Additionally, we compared the computational performance of BiCGStab with an implementation of the Hybrid Bi-Conjugate Gradient Stabilized (BiCGStab(2)) iterative method, recently proposed by the author, in the solution of random linear systems of varying sizes. The results showed that the parallel BiCGStab is more efficient in solving the treated systems: gains in computational efficiency of approximately 5x were obtained compared with the sequential BiCGStab implementation, and the parallel BiCGStab was approximately 2x faster than BiCGStab(2).
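The method itself is available off the shelf: SciPy exposes it as scipy.sparse.linalg.bicgstab. A CPU-only sketch (not the paper's CUDA-Matlab implementation; the tridiagonal test system is invented):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Invented test system: a large sparse tridiagonal matrix.
n = 1000
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1],
          shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b)   # info == 0 signals convergence
```

BiCGStab needs only matrix-vector products, which is exactly the operation that maps well onto GPU cores in the paper's setting.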
IEEE Transactions on Computers, 1977
A parallel processor system and its mode of operation are described. A notation for writing programs on it is introduced. Methods for iterative solution of a set of linear equations are then discussed. The well-known algorithms of Jacobi and Gauss-Seidel are parallelized despite the apparent inherent sequentiality of the latter. New, parallel methods for the iterative solution of linear equations are introduced and their convergence is discussed. A measure of speedup is computed for all methods. It shows that in most cases the algorithms developed in the paper may be efficiently executed on a parallel processor system.
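The Jacobi method referred to above is naturally parallel because every component of the new iterate depends only on the previous iterate, so all components can be updated simultaneously. A small illustrative sketch (toy system, not from the paper):

```python
import numpy as np

def jacobi(A, b, x0=None, iters=100):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - R x_k), where A = D + R.
    Every component of the update is independent, hence trivially parallel."""
    d = np.diag(A)
    R = A - np.diag(d)              # off-diagonal part of A
    x = np.zeros_like(b) if x0 is None else x0
    for _ in range(iters):
        x = (b - R @ x) / d         # all components updated at once
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

Gauss-Seidel, by contrast, uses already-updated components within a sweep, which is the "apparent inherent sequentiality" the paper works around.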
2000
Iterative methods are a popular way of solving linear systems of equations.
1994
Abstract. We are interested in iterative algorithms that lend themselves to high-level parallel computation. One example is the solution of very large and sparse linear systems with multiple right-hand sides. We report on some recent work on this topic and present new results on the parallel solution of this problem. We show that algorithms that perform some amount of information exchange while the systems are being solved can be very competitive compared to algorithms that proceed independently.
2002
Genetic programming is a technique that produces as output the source code of another computer program. The program is evolved with rules of natural selection that seek to find the best solution to a particular problem. This report presents the results of tests run on eight different GP method parameters and shows how variations in the values of these parameters affect the time taken to converge on the correct solution. The problem used to test the method is the MAX problem, which proved to be a simple, straightforward, and easy approach for evaluating GP efficiency. Testing the method on the MAX problem can help to develop an optimum search for problems with unknown solutions.
Linear Algebra and its Applications, 1985
We study the use of iterative methods for numerically solving linear systems of the form Au = b on parallel machines. A new class of first-order iterative schemes possessing a high level of parallelism is derived by approximating A^{-1} with its Neumann series. A preliminary study of the case where the sequence of parameters involved is constant and equal to unity reveals that the series is best approximated by its first two terms. This results in the derivation of a new iterative method which, under certain conditions, possesses an exceptionally high rate of convergence.
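One plausible reading of the two-term truncation: if rho(I - A) < 1, then A^{-1} equals the Neumann series sum over k of (I - A)^k, and keeping the first two terms gives the approximate inverse B = I + (I - A) = 2I - A, usable in the iteration x_{k+1} = x_k + B(b - A x_k), whose error contracts like (I - A)^2. A hypothetical 2x2 sketch (matrix invented so that rho(I - A) < 1):

```python
import numpy as np

# Invented example with rho(I - A) < 1, so the Neumann series for
# A^{-1} = sum_k (I - A)^k converges.
A = np.array([[1.0, 0.2],
              [0.1, 0.9]])
b = np.array([1.0, 2.0])

B = 2.0 * np.eye(2) - A        # first two Neumann terms: I + (I - A)
x = np.zeros(2)
for _ in range(50):
    x = x + B @ (b - A @ x)    # error propagates via (I - A)^2
```

Since B is built from A by a single subtraction and the iteration needs only matrix-vector products, the scheme has the high level of parallelism the abstract claims.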
2012
Solving large linear systems of equations is a common problem in science and engineering. Direct methods for computing the solution of such systems can be very expensive due to high memory requirements and computational cost, which is a very good reason to use iterative methods, which compute only an approximation of the solution. In this paper we present an implementation of several iterative linear-system solvers that use the CUDA programming model. CUDA is now a popular programming model for general-purpose computations on GPUs, and a great number of applications have been ported to CUDA, obtaining speedups of orders of magnitude compared to optimized CPU implementations. Our library implements Jacobi, Gauss-Seidel, and non-stationary iterative methods (GMRES, BiCG, BiCGSTAB) using the C-CUDA extension. We compare the performance of our CUDA implementation with classic programs written to run on the CPU. Our performance tests show speedups of approximately 80 times for single-precision floating point and 40 times for double precision.
Journal of Computational and Applied Mathematics, 1996
Many algorithms employing short recurrences have been developed for iteratively solving linear systems. Yet when the matrix is nonsymmetric or indefinite, or both, it is difficult to predict which method will perform best, or indeed, converge at all. Attempts have been made to classify the matrix properties for which a particular method will yield a satisfactory solution, but "luck" still plays a large role. This report describes the implementation of a poly-iterative solver.
Applied Numerical Mathematics, 1995
We present a collection of public-domain Fortran 77 routines for the solution of systems of linear equations using a variety of iterative methods. The routines implement methods which have been modified for their efficient use on parallel architectures with either shared or distributed memory. PIM was designed to be portable across different machines. Results are presented for a variety of parallel computers.
P_SPARSLIB is a library of portable FORTRAN routines for sparse matrix computations. The current thrust of the library is in iterative solution techniques. In this note we present the 'accelerators' part of the library, which consists of the best-known Krylov subspace techniques. This iterative solution module is implemented in reverse-communication mode so as to allow any preconditioner to be combined with the package. In addition, this mechanism allows us to ensure portability, since the communication calls required in the iterative solution process are hidden in the dot product, the matrix-vector product, and the preconditioning operations. CGNR is intended for solving linear systems as well as least-squares problems. It consists of solving the linear system A^T A x = A^T b by a CG method. Since A^T A is always positive semi-definite, it is guaranteed, in theory, to always converge to a solution. CGNR may be a good approach for highly indefinite matrices. For example, if the matrix is unitary, it can solve the linear system in just one step, whereas most of the other Krylov subspace projection methods will typically converge slowly. For typical problems arising from the discretization of partial differential equations, CGNR converges more slowly than CG or BCG, so this approach is not as popular in this particular context.
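The CGNR idea is easy to reproduce with standard tools: apply CG to the SPD normal equations A^T A x = A^T b. A sketch using SciPy (the rectangular test matrix is invented; this is not the P_SPARSLIB code):

```python
import numpy as np
from scipy.sparse.linalg import cg

# Invented rectangular least-squares problem: A is 50 x 30, so
# A^T A is 30 x 30 and symmetric positive (semi-)definite.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
b = rng.standard_normal(50)

# CGNR: CG applied to the normal equations A^T A x = A^T b.
x, info = cg(A.T @ A, A.T @ b)
```

Note the trade-off the abstract mentions: forming (or applying) A^T A squares the condition number of A, which is why CGNR tends to converge more slowly than CG or BCG on PDE-type problems.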
BIT, 1988
A method is presented to solve Ax = b by computing optimum iteration parameters for Richardson's method. It requires some information on the location of the eigenvalues of A. The algorithm yields parameters well-suited for matrices for which Chebyshev parameters are not appropriate. It therefore supplements the Manteuffel algorithm, developed for the Chebyshev case. Numerical examples are described.
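For context, Richardson's method is the stationary iteration x_{k+1} = x_k + omega (b - A x_k); for an SPD matrix with extreme eigenvalues lambda_min and lambda_max, the classical optimal fixed parameter is omega = 2 / (lambda_min + lambda_max). A toy sketch of that baseline (not the paper's algorithm for computing optimal parameter sequences):

```python
import numpy as np

def richardson(A, b, omega, iters=200):
    """Stationary Richardson iteration x_{k+1} = x_k + omega (b - A x_k)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + omega * (b - A @ x)
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                 # invented SPD example
b = np.array([1.0, 1.0])
lams = np.linalg.eigvalsh(A)               # ascending eigenvalues
omega = 2.0 / (lams[0] + lams[-1])         # classical optimal fixed parameter
x = richardson(A, b, omega)
```

The paper's contribution is choosing good parameter sequences from partial eigenvalue-location information, for spectra where Chebyshev parameters do not apply.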
Lecture Notes in Computer Science, 1994
This paper proposes a general execution scheme for parallelizing a class of iterative algorithms characterized by strong data dependencies between iterations. This class includes non-simultaneous iterative methods for solving systems of linear equations, such as Gauss-Seidel and SOR, and long-range methods. The paper presents a set of code transformations that make it possible to derive the parallel form of the algorithm starting from sequential code. The performance of the proposed execution scheme is then analyzed with respect to an abstract model of the underlying parallel machine. We wish to thank P. Sguazzero for his helpful hints and suggestions, and IBM ECSEC for making available the SP1 machine on which the experimental measurements were performed. This work has been supported by Consiglio Nazionale delle Ricerche under funds of the "Progetto Finalizzato Sistemi Informatici e Calcolo Parallelo" and by MURST under 40% funds.
2009
We describe PIM (Parallel Iterative Methods), a collection of Fortran routines to solve systems of linear equations on parallel computers using iterative methods. A number of iterative methods for symmetric and nonsymmetric systems are available, including Conjugate Gradients (CG), Bi-Conjugate Gradients (Bi-CG), Conjugate Gradients Squared (CGS), the stabilised version of Bi-Conjugate Gradients (Bi-CGSTAB), the restarted stabilised version of Bi-Conjugate Gradients (RBi-CGSTAB), generalised minimal residual (GMRES), generalised conjugate residual (GCR), normal-equation solvers (CGNR and CGNE), quasi-minimal residual (QMR) with coupled two-term recurrences, transpose-free quasi-minimal residual (TFQMR), and Chebyshev acceleration. The PIM routines can be used with user-supplied preconditioners, and left, right, or symmetric preconditioning is supported. Several stopping criteria can be chosen by the user. In this user's guide we present a brief overview of the iterative methods and algorithms available. The use of PIM is...
Proceedings of the Pakistan Academy of Sciences, 2022
The fundamental problem of linear algebra is to solve systems of linear equations (SOLEs). Solving SOLEs is one of the most crucial topics in iterative methods. SOLEs occur throughout the natural sciences, social sciences, engineering, medicine, and business. For the most part, iterative methods are used for solving sparse SOLEs. In this research, an improved iterative scheme, namely "a new improved classical iterative algorithm (NICA)", has been developed. The proposed iterative method is valid when the coefficient matrix of the SOLE is strictly diagonally dominant (SDD), irreducibly diagonally dominant (IDD), an M-matrix, symmetric positive definite (with some conditions), or an H-matrix. Such SOLEs usually arise from ordinary differential equations (ODEs) and partial differential equations (PDEs). The proposed method reduces the number of iterations, decreases the spectral radius, and increases the rate of convergence. Some numerical examples are utilized to demonstrate the effectiveness of NICA over the Jacobi (J), Gauss-Seidel (GS), Successive Over-Relaxation (SOR), Refinement of Jacobi (RJ), Second Refinement of Jacobi (SRJ), Generalized Jacobi (GJ), and Refinement of Generalized Jacobi (RGJ) methods.
1997
Description and comparison of several packages for the iterative solution of linear systems of equations. 1 Introduction. There are several freely available packages for the iterative solution of linear systems of equations, typically derived from partial differential equation problems. In this report I give a brief description of a number of packages and an inventory of their features and defining characteristics. The most important features of the packages are which iterative methods and preconditioners they supply; the most relevant defining characteristics are the interface they present to the user's data structures, and their implementation language. 2 Discussion. Iterative methods are subject to several design decisions that affect the ease of use of the software and the resulting performance. In this section I give a global discussion of the issues involved, and how certain points are addressed in the packages under review. 2.1 Preconditioners. A good precondit...
Computational Mathematics and Mathematical Physics, 2019
The paper presents the results on the use of gradient descent algorithms for constructing iterative methods for solving linear equations. A mathematically rigorous substantiation of the convergence of iterations to the solution of the equations is given. Numerical results demonstrating the efficiency of the modified iterative gradient descent method are presented.
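For an SPD system, the classical gradient descent construction alluded to above is steepest descent on f(x) = (1/2) x^T A x - b^T x, whose gradient is Ax - b; the exact line search gives the step alpha = r^T r / r^T A r. A minimal sketch (toy matrix, not the paper's modified method):

```python
import numpy as np

def steepest_descent(A, b, iters=500):
    """Gradient descent on f(x) = 0.5 x^T A x - b^T x for SPD A.
    The residual r = b - Ax is the negative gradient; the exact
    line search gives alpha = (r^T r) / (r^T A r)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x
        Ar = A @ r
        denom = r @ Ar
        if denom == 0.0:
            break                          # r == 0: already at the solution
        x = x + (r @ r) / denom * r
    return x

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])                 # invented SPD example
b = np.array([1.0, 0.0])
x = steepest_descent(A, b)
```

The convergence rate degrades with the condition number of A, which is the kind of behaviour a modified gradient descent method would aim to improve.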
Journal of Natural Sciences Engineering and Technology, 2017
Genetic Algorithms have been successfully applied to solving systems of linear equations; however, the effects of varying the Genetic Algorithm parameters on the GA linear-equation solver have not been investigated. Varying the GA parameters produces new and interesting information on the behaviour of the GA linear-equation solver. In this paper, a general introduction to the Genetic Algorithm, its application to finding solutions to systems of linear equations, and the effects of varying the population size and number of generations are presented. The genetic algorithm simultaneous linear equation solver program was run several times using different sets of simultaneous linear equations while varying the population sizes as well as the number of generations, in order to observe their effects on the solution generation. It was observed that a small population size does not produce perfect solutions as fast as when a large population size is used, and small or large ...