NUMERICAL METHODS
GERMUND DAHLQUIST
Department of Computer Sciences
Royal Institute of Technology, Stockholm
ÅKE BJÖRCK
Department of Mathematics
Linköping University, Sweden
Translated from
the Swedish by
NED ANDERSON
Department of Computer Sciences
Royal Institute of Technology, Stockholm
DOVER PUBLICATIONS, INC.
Mineola, New York
Copyright © 1974 by Prentice-Hall, Inc.
All rights reserved.
Bibliographical Note
This Dover edition, first published in 2003, is an unabridged republication of the
work published by Prentice-Hall, Inc., Englewood Cliffs, New Jersey, in 1974, which
was an English translation of the work published as Numeriska Metoder by CWK
Gleerup, Lund, Sweden, in 1969.
Library of Congress Cataloging-in-Publication Data
Dahlquist, Germund.
   Numerical methods / Germund Dahlquist, Åke Björck; translated from the
Swedish by Ned Anderson.
      p. cm.
   An extended and updated translation of Numeriska metoder, by Å. Björck and
G. Dahlquist.
   Originally published: Englewood Cliffs, N.J.: Prentice-Hall, 1974.
   Includes bibliographical references and index.
   ISBN 0-486-42807-9 (pbk.)
   1. Numerical analysis–Data processing. I. Björck, Åke, 1934–  II. Björck, Åke,
1934–  Numeriska metoder. III. Title.
QA297.D33 2003
519.4–dc21
                                                  2002072867
Manufactured in the United States of America
Dover Publications, Inc., 31 East 2nd Street, Mineola, N.Y. 11501

To the memory of
George E. Forsythe

CONTENTS
Section numbers are given for the topics below. The reader is recommended to use
section numbers (at the top of the pages of the text proper) when looking up a given
topic, since this gives a better idea of the structure of the book.
In general, review questions and/or problems follow each one-decimal sub-section.
PREFACE

CONVENTIONS

1  SOME GENERAL PRINCIPLES OF NUMERICAL CALCULATION
1.1. Introduction
1.2. Some Common Ideas and Concepts in Numerical Methods
1.3. Numerical Problems and Algorithms
     1.3.1. Definitions
     1.3.2. Recursive Formulas; Horner's Rule
     1.3.3. An Example of Numerical Instability

2  HOW TO OBTAIN AND ESTIMATE ACCURACY IN NUMERICAL CALCULATIONS
2.1. Basic Concepts in Error Estimation
     2.1.1. Introduction
     2.1.2. Sources of Error
     2.1.3. Absolute and Relative Errors
     2.1.4. Rounding and Chopping
2.2. Propagation of Errors
     2.2.1. Simple Examples of Error Analysis
     2.2.2. The General Formula for Error Propagation; Maximum Error and Standard Error
     2.2.3. On the Practical Application of Error Estimation
     2.2.4. The Use of Experimental Perturbations
     2.2.5. Automatic Control of Accuracy
2.3. Number Systems; Floating and Fixed Representation
     2.3.1. The Position System
     2.3.2. Floating and Fixed Representation
     2.3.3. Floating Decimal Point
     2.3.4. Fixed Decimal Point
     2.3.5. Round-off Errors in Computation with Floating Arithmetic Operations
2.4. Backward Error Analysis; Condition Numbers
     2.4.1. Backward Error Analysis
     2.4.2. Condition Numbers for Problems and Algorithms
     2.4.3. Geometrical Illustration of Error Analysis

3  NUMERICAL USES OF SERIES
3.1. Elementary Uses of Series
     3.1.1. Simple Examples
     3.1.2. Estimating the Remainder
     3.1.3. Power Series
3.2. Acceleration of Convergence
     3.2.1. Slowly Converging Alternating Series
     3.2.2. Slowly Converging Series with Positive Terms
     3.2.3. Other Simple Ways to Accelerate Convergence
     3.2.4. Ill-Conditioned Series
     3.2.5. Numerical Use of Divergent Series

4  APPROXIMATION OF FUNCTIONS
4.1. Basic Concepts in Approximation
     4.1.1. Introduction
     4.1.2. The Idea of a Function Space
     4.1.3. Norms and Seminorms
     4.1.4. Approximation of Functions as a Geometric Problem in Function Space
4.2. The Approximation of Functions by the Method of Least Squares
     4.2.1. Statement of the Problems
     4.2.2. Orthogonal Systems
     4.2.3. Solution of the Approximation Problem
4.3. Polynomials
     4.3.1. Basic Terminology; the Weierstrass Approximation Theorem
     4.3.2. Triangle Families of Polynomials
     4.3.3. A Triangle Family and Its Application to Interpolation
     4.3.4. Equidistant Interpolation and the Runge Phenomenon
4.4. Orthogonal Polynomials and Applications
     4.4.1. Tchebycheff Polynomials
     4.4.2. Tchebycheff Interpolation and Smoothing
     4.4.3. General Theory of Orthogonal Polynomials
     4.4.4. Legendre Polynomials and Gram Polynomials
4.5. Complementary Observations on Polynomial Approximation
     4.5.1. Summary of the Use of Polynomials
     4.5.2. Some Inequalities for E_n(f) with Applications to the Computation of Linear Functionals
     4.5.3. Approximation in the Maximum Norm
     4.5.4. Economization of Power Series; Standard Functions
     4.5.5. Some Statistical Aspects of the Method of Least Squares
4.6. Spline Functions
5  NUMERICAL LINEAR ALGEBRA
5.1. Introduction
5.2. Basic Concepts of Linear Algebra
     5.2.1. Fundamental Definitions
     5.2.2. Partitioned Matrices
     5.2.3. Linear Vector Spaces
     5.2.4. Eigenvalues and Similarity Transformations
     5.2.5. Singular-Value Decomposition and Pseudo-Inverse
5.3. Direct Methods for Solving Systems of Linear Equations
     5.3.1. Triangular Systems
     5.3.2. Gaussian Elimination
     5.3.3. Pivoting Strategies
     5.3.4. LU-Decomposition
     5.3.5. Compact Schemes for Gaussian Elimination
     5.3.6. Inverse Matrices
5.4. Special Matrices
     5.4.1. Symmetric Positive-Definite Matrices
     5.4.2. Band Matrices
     5.4.3. Large-Scale Linear Systems
     5.4.4. Other Sparse Matrices
5.5. Error Analysis for Linear Systems
     5.5.1. An Ill-Conditioned Example
     5.5.2. Vector and Matrix Norms
     5.5.3. Perturbation Analysis
     5.5.4. Rounding Errors in Gaussian Elimination
     5.5.5. Scaling of Linear Systems
     5.5.6. Iterative Improvement of a Solution
5.6. Iterative Methods
5.7. Overdetermined Linear Systems
     5.7.1. The Normal Equations
     5.7.2. Orthogonalization Methods
     5.7.3. Improvement of Least-Squares Solutions
     5.7.4. Least-Squares Problems with Linear Constraints
5.8. Computation of Eigenvalues and Eigenvectors
     5.8.1. The Power Method
     5.8.2. Methods Based on Similarity Transformations
     5.8.3. Eigenvalues by Equation Solving
     5.8.4. The QR-Algorithm
6  NONLINEAR EQUATIONS
6.1. Introduction
6.2. Initial Approximations; Starting Methods
     6.2.1. Introduction
     6.2.2. The Bisection Method
6.3. Newton-Raphson's Method
6.4. The Secant Method
     6.4.1. Description of the Method
     6.4.2. Error Analysis for the Secant Method
     6.4.3. Regula Falsi
     6.4.4. Other Related Methods
6.5. General Theory of Iteration Methods
6.6. Error Estimation and Attainable Accuracy in Iteration Methods
     6.6.1. Error Estimation
     6.6.2. Attainable Accuracy; Termination Criteria
6.7. Multiple Roots
6.8. Algebraic Equations
     6.8.1. Introduction
     6.8.2. Deflation
     6.8.3. Ill-Conditioned Algebraic Equations
6.9. Systems of Nonlinear Equations
     6.9.1. Iteration
     6.9.2. Newton-Raphson's Method and Some Modifications
     6.9.3. Other Methods
7  FINITE DIFFERENCES WITH APPLICATIONS TO NUMERICAL INTEGRATION, DIFFERENTIATION, AND INTERPOLATION
7.1. Difference Operators and Their Simplest Properties
7.2. Simple Methods for Deriving Approximation Formulas and Error Estimates
     7.2.1. Statement of the Problems and Some Typical Examples
     7.2.2. Repeated Richardson Extrapolation
7.3. Interpolation
     7.3.1. Introduction
     7.3.2. When is Linear Interpolation Sufficient?
     7.3.3. Newton's General Interpolation Formula
     7.3.4. Formulas for Equidistant Interpolation
     7.3.5. Complementary Remarks on Interpolation
     7.3.6. Lagrange's Interpolation Formula
     7.3.7. Hermite Interpolation
     7.3.8. Inverse Interpolation
7.4. Numerical Integration
     7.4.1. The Rectangle Rule, Trapezoidal Rule, and Romberg's Method
     7.4.2. The Truncation Error of the Trapezoidal Rule
     7.4.3. Some Difficulties and Possibilities in Numerical Integration
     7.4.4. The Euler-Maclaurin Summation Formula
     7.4.5. Uses of the Euler-Maclaurin Formula
     7.4.6. Other Methods for Numerical Integration
7.5. Numerical Differentiation
7.6. The Calculus of Operators
     7.6.1. Operator Algebra
     7.6.2. Operator Series with Applications
7.7. Functions of Several Variables
     7.7.1. Working with One Variable at a Time
     7.7.2. Rectangular Grids
     7.7.3. Irregular Triangular Grids
8  DIFFERENTIAL EQUATIONS
8.1. Theoretical Background
     8.1.1. Initial-Value Problems for Ordinary Differential Equations
     8.1.2. Error Propagation
     8.1.3. Other Differential Equation Problems
8.2. Euler's Method, with Repeated Richardson Extrapolation
8.3. Other Methods for Initial-Value Problems in Ordinary Differential Equations
     8.3.1. The Modified Midpoint Method
     8.3.2. The Power-Series Method
     8.3.3. Runge-Kutta Methods
     8.3.4. Implicit Methods
     8.3.5. Stiff Problems
     8.3.6. Control of Step Size
     8.3.7. A Finite-Difference Method for a Second-Order Equation
8.4. Orientation on Boundary and Eigenvalue Problems for Ordinary Differential Equations
     8.4.1. Introduction
     8.4.2. The Shooting Method
     8.4.3. The Band Matrix Method
     8.4.4. Numerical Example of an Eigenvalue Problem
8.5. Difference Equations
     8.5.1. Homogeneous Linear Difference Equations with Constant Coefficients
     8.5.2. General Linear Difference Equations
     8.5.3. Analysis of a Numerical Method with the Help of a Test Problem
     8.5.4. Linear Multistep Methods
8.6. Partial Differential Equations
     8.6.1. Introduction
     8.6.2. An Example of an Initial-Value Problem
     8.6.3. An Example of a Boundary-Value Problem
     8.6.4. Methods of Undetermined Coefficients and Variational Methods
     8.6.5. Finite-Element Methods
     8.6.6. Integral Equations
9  FOURIER METHODS
9.1. Introduction
9.2. Basic Formulas and Theorems in Fourier Analysis
     9.2.1. Functions of One Variable
     9.2.2. Functions of Several Variables
9.3. Fast Fourier Analysis
     9.3.1. An Important Special Case
     9.3.2. Fast Fourier Analysis, General Case
9.4. Periodic Continuation of a Nonperiodic Function
9.5. The Fourier Integral Theorem

10  OPTIMIZATION
10.1. Statement of the Problem, Definitions, and Normal Form
10.2. The Simplex Method
10.3. Duality
10.4. The Transportation Problem and Some Other Optimization Problems
10.5. Nonlinear Optimization Problems
     10.5.1. Basic Concepts and Introductory Examples
     10.5.2. Line Search
     10.5.3. Algorithms for Unconstrained Optimization
     10.5.4. Overdetermined Nonlinear Systems
     10.5.5. Constrained Optimization

11  THE MONTE CARLO METHOD AND SIMULATION
11.1. Introduction
11.2. Random Digits and Random Numbers
11.3. Applications; Reduction of Variance
11.4. Pseudorandom Numbers
12  SOLUTIONS TO PROBLEMS

13  BIBLIOGRAPHY AND PUBLISHED ALGORITHMS
13.1. Introduction
13.2. General Literature in Numerical Analysis
13.3. Tables, Collections of Formulas, and Problems
13.4. Error Analysis and Approximation of Functions
13.5. Linear Algebra and Nonlinear Systems of Equations
13.6. Interpolation, Numerical Integration, and Numerical Treatment of Differential Equations
13.7. Optimization; Simulation
13.8. Reviews, Abstracts, and Other Periodicals
13.9. Survey of Published Algorithms
      Index by Subject to Algorithms, 1960-1970

APPENDIX TABLES

INDEX

PREFACE
This book is an extended and updated translation of a textbook published in
Swedish by the CWK Gleerup Co. in 1969. Prerequisites for most of the book
are sophomore courses in mathematics (in particular calculus and linear
algebra) as well as some knowledge of a problem-oriented programming lan-
guage. The latter can be studied in parallel with the first three chapters of the
book. Parts of the book are more difficult. We hope that it will be easy for
teachers who will use the book as a text to give instructions as to what depth
the various parts are to be studied in a particular course.
We have tried to select methods which are important to large-scale com-
puting as well as techniques for small-scale computing with simple tools. It
is helpful if the reader has access to a time-sharing system for getting ac-
quainted with some of the algorithms, though in most cases a desk calculator
or even a slide rule and mathematical tables can be useful enough.
We hope that the book will be useful as a handbook for computation in
science and technology. The last chapter contains an extensive bibliography
and a list of published algorithms.
The general ideas and concepts of scientific computation are introduced
in the first chapter, while the second chapter is devoted to error analysis.
There and everywhere else we try to stress those aspects which are of impor-
tance for the design of algorithms. In contrast to the general survey style of
the first two chapters, the rest of the book is mostly concerned with the treat-
ment of various classes of problems.
The most important difference between this book and the Swedish edition
is the increased number of exercises. We have had the use of material from a
Swedish collection of problems by our colleagues Nils Hagander and Yngve
Sundblad. Their generosity is gratefully acknowledged. Section 7.7 on func-
tions of several variables is new, and so is Section 10.5 on non-linear opti-
mization. The related sections on non-linear equations are thoroughly revised.
We also have a new treatment of methods for partial differential equations,
other than difference methods. In addition to this there has been an overall
modernization, particularly in the chapter on linear algebra. A chapter on
nomography and some peripheral details have been excluded.
We have benefited greatly from stimulating discussions with colleagues.
In particular we mention Peter Pohl, Håkan Ramsin, and the group at TRU
(Swedish board for television and radio in education).
We are especially grateful to our colleague Ned Anderson, who has not
only translated the Swedish text but improved the presentation in many
places, and to Maja Anderson for typing most of the manuscript. Christina
Landing also typed a large part of the manuscript. It is a pleasure to express
our gratitude for excellent cooperation and invaluable help.
We also thank the American and Swedish publishers for excellent coop-
eration. It was the encouragement of the late Professor George E. Forsythe
that stimulated us to translate the book.
Last but not least we would like to thank our families for help, encourage-
ment, and patience when the work intruded greatly on their leisure time.
GERMUND DAHLQUIST
ÅKE BJÖRCK

CONVENTIONS
The book consists of thirteen chapters numbered in a way that is clear from
the Contents. The reference “see 4.3.3” indicates Section 4.3.3 in Chapter 4.
The reference (4.3.5) indicates a formula which can be found in one of the
sections whose first decimal is 3. [The formula (4.3.5) is in Section 4.3.3.] A
name followed by digits enclosed in brackets, e.g., Fröberg [7], refers to a
book in the Bibliography, Chapter 13.
Besides the generally accepted mathematical abbreviations and notations
(see e.g., James and James [38], pp. 467-471), the following notations are
used in the book:
log x               logarithm to base 10
ln x                natural logarithm
e^x and exp (x)     both denote the exponential function
sinh x, cosh x      hyperbolic sine, hyperbolic cosine
sgn x               sign of x, see 3.1.2
fl (x + y)          floating-point operations, see 2.3.5
tab (f)             tabulation operator, see 4.1.1
E_n(f)              see 4.3.1
‖·‖, ‖·‖_p          norms and seminorms, see 4.1.3 and 5.5.2
{x_i}_0^n           denotes the set {x_0, x_1, ..., x_n}
[a, b]              closed interval (a ≤ x ≤ b)
(a, b)              open interval (a < x < b), or scalar product of the
                    functions a and b
int (a, b, ..., w)  the least interval which contains a, b, ..., w
k