Numerical Methods
(Class Notes)
August, 2024
Contents

1 Preliminaries
  1.1 Significant Digits
    1.1.1 Significant Digits in Rounding Off
  1.2 Errors in Numerical Methods
    1.2.1 Types of Errors
  1.3 Exercise

4 Interpolation
  4.1 Lagrange Interpolation
    4.1.1 Lagrange Interpolating Polynomial
  4.2 Newton's Form of Interpolation
    4.2.1 Newton's Divided Difference Formula
    4.2.2 Divided Differences
    4.2.3 Newton's Interpolating Polynomial
  4.3 Finite Difference Operators
    4.3.1 Shift Operator E
    4.3.2 Forward Difference Operator (∆)
    4.3.3 Backward Difference Operator (∇)
    4.3.4 Central Difference Operator (δ)
    4.3.5 Average Difference Operator (µ)
Chapter 1

Preliminaries
2. Zeros between non-zero digits are significant. Any zero that lies
between two non-zero digits adds to the count of significant digits.
Example: 2501005 (7 significant digits).
3. Leading zeros are not significant. Zeros placed to the left of the
first non-zero digit merely act as placeholders, even when a decimal
point is present. Example: 0.0025 (2 significant digits).
Understanding these rules helps ensure that numerical results are commu-
nicated with the appropriate degree of accuracy, avoiding both overstatement
and understatement of precision.
1. If the (n + 1)-th digit is less than 5: Keep the n-th digit unchanged
and simply discard all digits to the right.
Example: Consider 3.1432 rounded to 3 decimal places. The digit in the
4th decimal place is 2 (less than 5). Hence, the rounded value is 3.143.
Absolute Error

The absolute error measures the magnitude of the difference between the exact value $x$ and its approximation $x^*$. It is defined as $E_A = |x - x^*|$.

Example: The exact solution of $x^2 - 3 = 0$ is $\sqrt{3}$. Suppose an approximate solution is given as 1.731. Then, the absolute error (to three decimal places) is $E_A = |\sqrt{3} - 1.731| = 0.001$.
Relative Error

The relative error provides a scale-free measure of accuracy by comparing the error to the true value:
$$E_R = \frac{|x - x^*|}{|x|}.$$

Example: Using the previous case, the relative error is $E_R = \frac{0.001}{\sqrt{3}}$. A smaller value of $E_R$ indicates a better approximation.
Percentage Error
The percentage error is simply the relative error expressed as a percentage: $E_P = E_R \times 100$. This measure is often used in practical applications to express error in an intuitive way.
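These three measures are straightforward to compute directly. Below is a minimal Python sketch (the function names are illustrative choices, not from the notes), applied to the $\sqrt{3}$ example above:

    import math

    def absolute_error(exact, approx):
        # E_A = |x - x*|
        return abs(exact - approx)

    def relative_error(exact, approx):
        # E_R = |x - x*| / |x|, assuming the exact value is nonzero
        return abs(exact - approx) / abs(exact)

    def percentage_error(exact, approx):
        # E_P = E_R * 100
        return 100 * relative_error(exact, approx)

    exact, approx = math.sqrt(3), 1.731
    print(absolute_error(exact, approx))    # ~0.00105, i.e. 0.001 to three decimals
    print(relative_error(exact, approx))    # ~0.000607
    print(percentage_error(exact, approx))  # ~0.0607 (percent)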
Round-off Error
A round-off error arises because computers cannot represent most real
numbers exactly. Instead, they store numbers using floating-point represen-
tation with finite precision.
Example: The mathematical constant π has the value 3.14159265358979 . . . .
In a computer system, it may be stored as 3.1416. This introduces a round-off
error due to the truncation of digits beyond machine precision.
Truncation Error
A truncation error occurs when an infinite process, such as a series expan-
sion or iterative algorithm, is terminated after a finite number of steps.
Example: The exponential function $e^x$ can be expanded as a Taylor series:
$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$
Truncating the series after finitely many terms introduces a truncation error equal to the omitted tail of the series.
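Computing the partial sums makes the truncation error visible. A minimal Python sketch (names illustrative) comparing truncated series against math.exp:

    import math

    def exp_taylor(x, n_terms):
        # Partial sum 1 + x + x^2/2! + ... (n_terms terms of the Taylor series)
        total, term = 0.0, 1.0
        for k in range(n_terms):
            total += term
            term *= x / (k + 1)  # x^k/k!  ->  x^(k+1)/(k+1)!
        return total

    for n in (2, 4, 8):
        approx = exp_taylor(1.0, n)
        # The truncation error |e - approx| shrinks as more terms are kept.
        print(n, approx, abs(math.exp(1.0) - approx))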
1.3 Exercise
1. How many significant digits are there in the number 0.007230?
8. How many significant digits are in the number 700000? What changes
if it is written as 7.0 × 105 ?
9. Round 145.25 to 2 significant digits and explain the rule applied for
rounding off.
10. The length of an object is measured as 8.3500 cm. How many significant
digits does this measurement have, and what would be the result if it
were rounded to 3 significant digits?
11. If the true value of e is 2.7182818 and its approximate value is given
by 2.7200000, find the absolute and relative errors.
12. If the true value of $\sqrt{2}$ is 1.4142136 and its approximate value is given by 1.4140000, find the absolute and relative errors.
13. If the true value of the gravitational constant G is 6.67430 × 10−11 and
its approximate value is given by 6.67000 × 10−11 , find the absolute and
relative errors.
14. If the true value of the speed of light c is 299,792,458 meters per second and its approximate value is given by 299,800,000 meters per second, find the absolute and relative errors.
15. If the true value of the Avogadro constant NA is 6.02214076 × 1023 and
its approximate value is given by 6.02000000 × 1023 , find the absolute
and relative errors.
Chapter 2

Roots of Transcendental and Polynomial Equations
2.1 Fixed-Point Iteration Method

Rewriting $f(x) = 0$ in the form $x = g(x)$, the fixed-point iteration generates successive approximations by
$$x_{n+1} = g(x_n), \qquad n \geq 0.$$
To see this, suppose $|g'(x)| \leq k < 1$ and let $p$ be the exact fixed point of $g$. Then, using the definition of the derivative of $g$ at the fixed point $p$, we have
$$|g'(p)| = \lim_{n \to \infty} \left| \frac{g(x_n) - g(p)}{x_n - p} \right|.$$
Hence, for large $n$, $|x_{n+1} - p| = |g(x_n) - g(p)| \leq k\,|x_n - p|$, and applying this bound repeatedly,
$$|x_{n+1} - p| \leq k\,|x_n - p| \leq k^2\,|x_{n-1} - p| \leq \cdots \leq k^{n+1}\,|x_0 - p|.$$
Here, $|x_0 - p|$ is some fixed positive number, and $k$ is a real constant strictly less than 1. Therefore, as $n \to \infty$, we obtain
$$|x_{n+1} - p| \to 0,$$
which shows that the sequence $\{x_n\}$ converges to the fixed point $p$. Thus, if the derivative of $g$ is bounded in absolute value by a constant $k < 1$ on the interval of interest, the sequence of approximations converges to the exact solution.
Example. Use the fixed-point iteration method to find a root of $\cos(x) - x = 0$ in $(0, 1)$.

Solution. We first rewrite the given equation in the form $g(x) = x$, with $g(x) = \cos(x)$. Since the derivative of $\cos(x)$ has absolute value less than 1 on the interval $(0, 1)$, the fixed-point iteration is applicable. Let the initial guess be $x_0 = 0.5$. Using the iteration formula $x_{n+1} = g(x_n)$, we obtain $x_1 = g(x_0) = \cos(0.5) = 0.877583$, $x_2 = g(x_1) = \cos(0.877583) = 0.639012$, and so on. The first ten iterations are summarized in Table 2.1.
Iteration n Approximation xn
0 0.500000
1 0.877583
2 0.639012
3 0.802685
4 0.694778
5 0.768196
6 0.719165
7 0.752356
8 0.730081
9 0.745120
10 0.735006
Table 2.1: Iterations for Fixed-Point Iteration Method for f (x) = cos(x) − x
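The iterations in Table 2.1 can be reproduced in a few lines. The following is a minimal Python sketch, assuming $g(x) = \cos(x)$ and $x_0 = 0.5$ as in the example; the stopping tolerance is an illustrative choice:

    import math

    def fixed_point(g, x0, tol=1e-6, max_iter=100):
        # Iterate x_{n+1} = g(x_n) until successive values agree within tol.
        x = x0
        for n in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new, n + 1
            x = x_new
        return x, max_iter

    root, iters = fixed_point(math.cos, 0.5)
    print(root, iters)  # converges toward 0.739085...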
2.2 Bisection Method

Convergence criteria

At each iteration, the interval is halved. Let the length of the interval at the $n$th iteration be $l_n$. Then,
$$l_n = |b_n - a_n| \leq \frac{1}{2}\,|b_{n-1} - a_{n-1}| \leq \cdots \leq \frac{1}{2^n}\,|b_0 - a_0| = \frac{l_0}{2^n}.$$
This shows that $l_n \to 0$ as $n$ increases. Since the IVT guarantees that a root of $f$ lies within every bracketing interval, and these intervals shrink to a single point, the sequence of midpoints $\{m_n\}$ converges to a solution of $f(x) = 0$.
Example. Use the Bisection Method to find a root of $\cos(x) - x = 0$ in $[0, 1]$.

Solution. Consider $f(x) = \cos(x) - x$ with the initial interval $[a_0, b_0] = [0, 1]$. Since $f(0) \cdot f(1) < 0$, the IVT ensures that a root of $f$ lies within $[0, 1]$. The first midpoint is
$$m_1 = \frac{a_0 + b_0}{2} = 0.5.$$
Here, $f(0) \cdot f(0.5) > 0$ and $f(0.5) \cdot f(1) < 0$, so the next interval becomes $[0.5, 1]$. Proceeding in this manner, the first 5 approximations are shown in Table 2.2.
Iteration n   m_n = (a_{n-1} + b_{n-1})/2   Sign of f(m_n)   a_n       b_n
0             —                             —                0         1
1             0.5                           +                0.5       1
2             0.75                          −                0.5       0.75
3             0.625                         +                0.625     0.75
4             0.6875                        +                0.6875    0.75
5             0.71875                       +                0.71875   0.75

Table 2.2: Iterations for the Bisection Method for f(x) = cos(x) − x
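A minimal Python sketch of the bisection loop, assuming the same $f(x) = \cos(x) - x$ and bracket $[0, 1]$; the iteration count is an illustrative choice:

    import math

    def bisection(f, a, b, n_iter=5):
        # Halve [a, b] n_iter times, keeping the subinterval where f changes sign.
        if f(a) * f(b) >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(n_iter):
            m = (a + b) / 2
            if f(a) * f(m) < 0:
                b = m   # root lies in [a, m]
            else:
                a = m   # root lies in [m, b]
        return (a + b) / 2

    print(bisection(lambda x: math.cos(x) - x, 0.0, 1.0))  # 0.734375 after 5 halvings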
2.3 Newton-Raphson Method

Starting from an initial guess $x_0$, each Newton-Raphson step is
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
For $f(x) = \cos(x) - x$, the derivative is
$$f'(x) = -\sin(x) - 1,$$
and the iterations with $x_0 = 0.5$ are shown below.
Iteration n xn
0 0.500000
1 0.725536
2 0.738369
3 0.739085
4 0.739085
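The table can be reproduced with a short Python sketch of the Newton-Raphson update (names and tolerance are illustrative):

    import math

    def newton(f, fprime, x0, tol=1e-6, max_iter=50):
        # Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n).
        x = x0
        for _ in range(max_iter):
            x_new = x - f(x) / fprime(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    f = lambda x: math.cos(x) - x
    fprime = lambda x: -math.sin(x) - 1
    print(newton(f, fprime, 0.5))  # ~0.739085, matching the table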
2.4 Regula-Falsi Method

Given two approximations $x_0$ and $x_1$ with $f(x_0) \cdot f(x_1) < 0$, the next approximation is
$$x_2 = x_1 - \frac{f(x_1)(x_1 - x_0)}{f(x_1) - f(x_0)}.$$
Writing $[a_0, b_0] = [x_0, x_1]$, this is
$$x_2 = b_0 - \frac{f(b_0)(b_0 - a_0)}{f(b_0) - f(a_0)},$$
and the next interval is chosen as either $[a_1, b_1] = [x_0, x_2]$ or $[a_1, b_1] = [x_2, x_1]$, depending on the sign condition.

In general, the iterative step of the method is expressed as
$$x_{n+1} = b_n - \frac{f(b_n)(b_n - a_n)}{f(b_n) - f(a_n)}.$$
The following Table 2.4 presents a few iterations for the function f (x) =
cos(x) − x with initial values x0 = 0 and x1 = 1, computed using the Regula-
Falsi Method.
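A minimal Python sketch of the Regula-Falsi update, assuming the same test function; the iteration count is illustrative:

    import math

    def regula_falsi(f, a, b, n_iter=5):
        # False position: replace an endpoint with the secant intercept,
        # always keeping a sign change across [a, b].
        for _ in range(n_iter):
            x = b - f(b) * (b - a) / (f(b) - f(a))
            if f(a) * f(x) < 0:
                b = x   # root lies in [a, x]
            else:
                a = x   # root lies in [x, b]
        return x

    print(regula_falsi(lambda x: math.cos(x) - x, 0.0, 1.0))  # approaches 0.739085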
2.5 Secant Method

The Secant Method uses the same update formula:
$$x_{n+2} = x_{n+1} - \frac{f(x_{n+1})(x_{n+1} - x_n)}{f(x_{n+1}) - f(x_n)}.$$
Unlike Regula-Falsi, however, the Secant Method does not preserve a bracket-
ing interval. Instead, it uses the two most recent approximations to generate
the next one. This often yields faster convergence, though at the cost of
losing the guaranteed bracketing of the root, which may sometimes result in
divergence.
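A minimal Python sketch of the secant update (names and tolerance illustrative); note that, unlike the bracketing sketches above, it keeps only the two most recent iterates:

    import math

    def secant(f, x0, x1, tol=1e-6, max_iter=50):
        # Secant method: no bracketing interval is maintained.
        for _ in range(max_iter):
            denom = f(x1) - f(x0)
            if denom == 0:
                break   # avoid division by zero
            x2 = x1 - f(x1) * (x1 - x0) / denom
            if abs(x2 - x1) < tol:
                return x2
            x0, x1 = x1, x2
        return x1

    print(secant(lambda x: math.cos(x) - x, 0.0, 1.0))  # ~0.739085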
2.6 The Order and the Rate of Convergence

A sequence $\{x_n\}$ is said to converge to a number $L$ if its terms eventually become arbitrarily close to $L$, or equivalently,
$$\lim_{n \to \infty} |x_n - L| = 0.$$
The number L is called the limit of the sequence {xn }. In simple words, as
n becomes larger, the values of xn get closer and closer to L.
Now, let x∗ denote the exact solution to the equation f (x) = 0, and let
{xn } be the sequence of approximations. The error at the nth iteration is
defined as
en := |xn − x∗ |.
With this notation, we are now ready to define the rate and the order of
convergence.
If the sequence of errors satisfies
$$\lim_{n \to \infty} \frac{e_{n+1}}{e_n^p} = C$$
for some finite nonzero constant $C$, then the number $p$ is called the order of convergence, and the constant $C$ is called the asymptotic error constant. The parameter $p$ characterizes the efficiency of the iterative method, while $C$ provides a measure of the convergence rate. Larger values of $p$ (and smaller values of $C$) indicate faster convergence.
Example 2.4 Let $x_n = \frac{n+2}{n+5}$. Then $\lim_{n \to \infty} x_n = \lim_{n \to \infty} \frac{n+2}{n+5} = 1$, so $x^* = 1$. Therefore,
$$e_n = |x_n - x^*| = \left| \frac{n+2}{n+5} - 1 \right| = \frac{3}{n+5}$$
and
$$e_{n+1} = \frac{3}{n+6}.$$
We compute
$$\frac{e_{n+1}}{e_n^p} = \frac{3}{n+6} \cdot \left( \frac{n+5}{3} \right)^p = 3^{1-p} \, \frac{(n+5)^p}{n+6}.$$
If $p > 1$, the limit diverges since $(n+5)^p/(n+6)$ grows like $n^{p-1} \to \infty$. Thus the largest admissible $p$ is 1, and
$$\lim_{n \to \infty} \frac{e_{n+1}}{e_n} = \lim_{n \to \infty} \frac{n+5}{n+6} = 1.$$
Therefore, the order of convergence is 1, with rate of convergence 1.
Example 2.5 Consider the iteration
$$x_{n+1} = \frac{x_n}{2} + \frac{1}{x_n}.$$
Here
$$|x_{n+1} - \sqrt{2}| = \left| \frac{x_n}{2} + \frac{1}{x_n} - \sqrt{2} \right| = \frac{(x_n - \sqrt{2})^2}{2x_n},$$
which gives
$$\frac{e_{n+1}}{e_n^2} = \frac{1}{2x_n}.$$
So the limit
$$\lim_{n \to \infty} \frac{e_{n+1}}{e_n^2} = \frac{1}{2\sqrt{2}}$$
is finite. Thus, the order of convergence is 2, with asymptotic error constant (rate of convergence) $\frac{1}{2\sqrt{2}}$.
Remark 2.1 The rate of convergence describes the speed at which the se-
quence of errors en approaches zero, whereas the order of convergence reflects
the efficiency of an iterative method. Put differently, the rate of convergence
is a quantitative measure, while the order of convergence is a qualitative mea-
sure.
It is worth noting that not all texts distinguish clearly between these two
notions—some treat them as the same, while others emphasize the difference.
Formally, the rate of convergence is often understood as the limit of the
ratio of the error at the next iteration to that at the current iteration. For a
linearly convergent method, this limit exists as a nonzero constant. However,
for methods of higher order (p > 1), this limit is zero, and hence the definition
loses its meaning in that form. To address this, many books instead define
the asymptotic error constant as the rate of convergence.
Chapter 3

Simultaneous Linear Equations
The Gauss-Jacobi iteration updates each variable using only values from the previous iteration:
$$x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{\substack{j=1 \\ j \neq i}}^{n} a_{ij}\, x_j^{(k)} \right), \qquad i = 1, 2, \ldots, n,$$
where:

• $x_i^{(k+1)}$ represents the new (updated) value of the $i$-th variable at the $(k+1)$-th iteration.

• $x_j^{(k)}$ is the value of the $j$-th variable obtained in the $k$-th iteration.
Convergence Criteria

For the Gauss-Jacobi method to be effective, the coefficient matrix $A$ should preferably be diagonally dominant. A matrix is diagonally dominant if
$$|a_{ii}| > \sum_{\substack{j=1 \\ j \neq i}}^{n} |a_{ij}| \quad \text{for all } i.$$
Iteration x y z
0 0.00000 0.00000 0.00000
1 0.70000 −0.50000 1.3333
2 0.46667 −0.07917 1.1222
3 0.57194 −0.16111 1.2208
4 0.54569 −0.12330 1.1883
5 0.55651 −0.13470 1.1984
6 0.55322 −0.13084 1.1947
7 0.55436 −0.13217 1.1959
8 0.55398 −0.13174 1.1955
9 0.55411 −0.13189 1.1956
10 0.55406 −0.13184 1.1955
After about ten iterations, the solution stabilizes at:
x ≈ 0.55406, y ≈ −0.13184, z ≈ 1.1955.
This demonstrates how the Gauss-Jacobi method gradually refines the values
of the unknowns until convergence is achieved.
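A minimal Python sketch of the Jacobi sweep. The 3×3 system used below is an illustrative diagonally dominant example, not the system behind the table above (whose coefficients are not shown here):

    def jacobi(A, b, x0, n_iter=10):
        # One sweep: x_i^(k+1) = (b_i - sum_{j != i} a_ij * x_j^(k)) / a_ii.
        n = len(b)
        x = list(x0)
        for _ in range(n_iter):
            x = [
                (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                for i in range(n)
            ]
        return x

    # Illustrative diagonally dominant system with exact solution (1, 1, 1).
    A = [[4.0, 1.0, 1.0],
         [1.0, 5.0, 2.0],
         [1.0, 2.0, 6.0]]
    b = [6.0, 8.0, 9.0]
    print(jacobi(A, b, [0.0, 0.0, 0.0]))  # converges toward [1, 1, 1]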
Iteration x y z
0 0.000000 0.000000 0.000000
1 0.700000 −0.412500 1.425278
2 0.505972 −0.135366 1.189860
3 0.551184 −0.131721 1.195349
4 0.554043 −0.131849 1.195593
5 0.554065 −0.131850 1.195598
6 0.554067 −0.131850 1.195598
7 0.554067 −0.131850 1.195598
After a few iterations, the solution stabilizes at:
x ≈ 0.554067, y ≈ −0.131850, z ≈ 1.195598.
• Start with the first column and choose the pivot element. If needed,
interchange rows so that the largest element (in magnitude) is placed
in the pivot position.
• Eliminate all entries below the pivot using row operations.
• Repeat the process column by column until the coefficient matrix is in
upper triangular form.
• Solve the resulting triangular system by back substitution.
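The steps above translate directly into code. A minimal Python sketch of elimination with partial pivoting followed by back substitution (names are illustrative):

    def gauss_solve(A, b):
        # Gaussian elimination with partial pivoting, then back substitution.
        n = len(b)
        M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
        for col in range(n):
            # Pivot: bring the largest-magnitude entry in this column up.
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            # Eliminate all entries below the pivot.
            for r in range(col + 1, n):
                factor = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= factor * M[col][c]
        # Back substitution on the upper triangular system.
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
        return x

    print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # [0.8, 1.4]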
Chapter 4

Interpolation
4.1 Lagrange Interpolation

Given data points $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$, the Lagrange interpolating polynomial is
$$P(x) = \sum_{i=0}^{n} y_i \, L_i(x),$$
where each Lagrange basis polynomial is
$$L_i(x) = \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}.$$
Explanation
The Lagrange basis polynomial Li (x) has the property that it equals 1 at
x = xi and 0 at all other xj (where j ̸= i). This property ensures that the
interpolating polynomial P (x) passes through each of the given data points.
Example 1

Suppose you have three data points: $(1, 2)$, $(2, 3)$, and $(4, 5)$. The Lagrange basis polynomials are:
$$L_0(x) = \frac{(x-2)(x-4)}{(1-2)(1-4)} = \frac{(x-2)(x-4)}{3},$$
$$L_1(x) = \frac{(x-1)(x-4)}{(2-1)(2-4)} = -\frac{(x-1)(x-4)}{2},$$
$$L_2(x) = \frac{(x-1)(x-2)}{(4-1)(4-2)} = \frac{(x-1)(x-2)}{6},$$
so the interpolating polynomial is
$$P(x) = 2L_0(x) + 3L_1(x) + 5L_2(x).$$
Example 2

Given the data points $(0, 1)$, $(1, 2)$, and $(2, 0)$, the Lagrange interpolating polynomial $P(x)$ is:
$$P(x) = \sum_{i=0}^{2} y_i \, L_i(x),$$
with
$$L_0(x) = \frac{(x-1)(x-2)}{(0-1)(0-2)} = \frac{(x-1)(x-2)}{2},$$
$$L_1(x) = \frac{(x-0)(x-2)}{(1-0)(1-2)} = -x(x-2),$$
$$L_2(x) = \frac{(x-0)(x-1)}{(2-0)(2-1)} = \frac{x(x-1)}{2}.$$
Substituting the values into $P(x)$:
$$P(x) = \frac{(x-1)(x-2)}{2} - 2x(x-2).$$
Simplifying this expression gives the quadratic polynomial:
$$P(x) = -\frac{3}{2}x^2 + \frac{5}{2}x + 1.$$
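Both examples can be checked numerically. A minimal Python sketch that evaluates $P(x) = \sum y_i L_i(x)$ at a point (names illustrative):

    def lagrange(xs, ys, x):
        # Evaluate the Lagrange interpolating polynomial at x.
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            Li = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    Li *= (x - xj) / (xi - xj)   # basis polynomial L_i(x)
            total += yi * Li
        return total

    # Data from Example 2: P(x) = -(3/2)x^2 + (5/2)x + 1
    xs, ys = [0, 1, 2], [1, 2, 0]
    print(lagrange(xs, ys, 0.5))   # 1.875 = -(3/2)(0.25) + (5/2)(0.5) + 1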
4.2 Newton's Form of Interpolation

Newton's form of the interpolating polynomial is
$$P(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \cdots + a_n(x - x_0)(x - x_1)\cdots(x - x_{n-1}),$$
where the coefficients $a_i$ are the divided differences $f[x_0, x_1, \ldots, x_i]$.

The zeroth-order divided difference is simply the function value:
$$f[x_i] = y_i.$$
The first-order divided difference for two points $x_i$ and $x_{i+1}$ is:
$$f[x_i, x_{i+1}] = \frac{f[x_{i+1}] - f[x_i]}{x_{i+1} - x_i}.$$
The second-order divided difference for three points $x_i, x_{i+1}, x_{i+2}$ is:
$$f[x_i, x_{i+1}, x_{i+2}] = \frac{f[x_{i+1}, x_{i+2}] - f[x_i, x_{i+1}]}{x_{i+2} - x_i}.$$
Example

Consider the following data points:
$$(x_0, y_0) = (1, 1), \quad (x_1, y_1) = (2, 4), \quad (x_2, y_2) = (3, 9), \quad (x_3, y_3) = (4, 16).$$

x_i   y_i = f[x_i]   f[x_i, x_{i+1}]     f[x_i, x_{i+1}, x_{i+2}]   f[x_i, ..., x_{i+3}]
1     1              (4-1)/(2-1) = 3     (5-3)/(3-1) = 1            (1-1)/(4-1) = 0
2     4              (9-4)/(3-2) = 5     (7-5)/(4-2) = 1
3     9              (16-9)/(4-3) = 7
4     16
Explanation

• The first column contains the values of x_i.

• The second column contains the values of y_i = f(x_i), which are the given data points.

• The third column contains the first-order divided differences:
$$f[x_0, x_1] = \frac{4-1}{2-1} = 3, \quad f[x_1, x_2] = \frac{9-4}{3-2} = 5, \quad f[x_2, x_3] = \frac{16-9}{4-3} = 7.$$
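The divided-difference table can be built programmatically. A minimal Python sketch that computes the Newton coefficients $f[x_0], f[x_0, x_1], \ldots$ for the data above:

    def divided_differences(xs, ys):
        # Return the coefficients f[x0], f[x0,x1], ..., f[x0..xn] of Newton's form.
        n = len(xs)
        table = list(ys)
        coeffs = [table[0]]
        for order in range(1, n):
            # Overwrite in place: table[i] becomes f[x_i, ..., x_{i+order}].
            for i in range(n - order):
                table[i] = (table[i + 1] - table[i]) / (xs[i + order] - xs[i])
            coeffs.append(table[0])
        return coeffs

    # Data from the example above: f(x) = x^2 on x = 1, 2, 3, 4.
    print(divided_differences([1, 2, 3, 4], [1, 4, 9, 16]))  # [1, 3.0, 1.0, 0.0]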
4.3.1 Shift Operator E

The shift operator advances the argument by one step $h$:
$$E f(x_i) = f(x_i + h).$$
For any integer $n$, the shift operator can shift by $nh$ units:
$$E^n f(x_i) = f(x_i + nh).$$
4.3.2 Forward Difference Operator (∆)

The forward difference operator is defined by $\Delta f(x_i) = f(x_i + h) - f(x_i)$, so that for small $h$,
$$\Delta f(x_i) \approx h f'(x_i).$$
The $n$-th order forward difference can be given as:
$$\Delta^n f(x_i) = \sum_{k=0}^{n} (-1)^k \binom{n}{k} f(x_i + (n-k)h).$$
4.3.3 Backward Difference Operator (∇)

The backward difference operator is defined by $\nabla f(x_i) = f(x_i) - f(x_i - h)$, so that for small $h$,
$$\nabla f(x_i) \approx h f'(x_i).$$
The $n$-th order backward difference can be given as:
$$\nabla^n f(x_i) = \sum_{k=0}^{n} (-1)^k \binom{n}{k} f(x_i - kh).$$
The operators are related by:

(i) $\Delta = E - 1$ and $\nabla = 1 - E^{-1}$;

(ii) $\Delta + 1 = (1 - \nabla)^{-1}$.
Chapter 5

Numerical Differentiation, Integration, and ODE
5.1 Differentiation
Numerical differentiation is a technique used to approximate the derivative
of a function when only discrete data points are available. Three common
methods for first-order numerical differentiation are the forward difference,
backward difference, and central difference methods.
Forward Difference Method: the derivative is approximated by
$$f'(x_i) \approx \frac{f(x_{i+1}) - f(x_i)}{h},$$
where $h$ is the step size between $x_i$ and $x_{i+1}$.
The error in the forward difference method is of order O(h), meaning the
error decreases linearly with decreasing step size. Thus, it is a first-order
accurate approximation.
Backward Difference Method: the derivative is approximated by
$$f'(x_i) \approx \frac{f(x_i) - f(x_{i-1})}{h}.$$
The backward difference method is likewise first-order accurate, with error of order $O(h)$.
Central Difference Method: the derivative is approximated by
$$f'(x_i) \approx \frac{f(x_{i+1}) - f(x_{i-1})}{2h}.$$
This method computes the slope of the secant line between the points (xi−1 , f (xi−1 ))
and (xi+1 , f (xi+1 )), centered at xi .
The central difference method has an error of order O(h2 ), meaning it
is a second-order accurate approximation. The error decreases quadratically
as the step size decreases, making it more accurate than the forward and
backward difference methods.
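The accuracy claims above are easy to verify numerically. A minimal Python sketch comparing the three two-point rules on $f(x) = \sin(x)$, whose exact derivative is $\cos(x)$ (the test point and step size are illustrative):

    import math

    def forward_diff(f, x, h):
        return (f(x + h) - f(x)) / h            # O(h) accurate

    def backward_diff(f, x, h):
        return (f(x) - f(x - h)) / h            # O(h) accurate

    def central_diff(f, x, h):
        return (f(x + h) - f(x - h)) / (2 * h)  # O(h^2) accurate

    x, h = 1.0, 0.1
    exact = math.cos(x)   # d/dx sin(x) = cos(x)
    for name, rule in [("forward", forward_diff),
                       ("backward", backward_diff),
                       ("central", central_diff)]:
        approx = rule(math.sin, x, h)
        print(name, approx, abs(exact - approx))  # central error is much smaller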
For higher accuracy in numerical differentiation, we can use three points
instead of just two. This results in higher-order approximations for both
the forward and backward difference methods. The three-point forward and
backward difference formulas give better estimates of the derivative at a point
xi .
Here, h is the step size between consecutive points. This method also provides
a higher-order approximation with an order O(h2 ) error, making it more
accurate than the two-point backward difference method.
Derivative at $x_1$:
$$P'(x_1) \approx \frac{(2x_1 - x_2 - x_3)\,y_1}{(x_1 - x_2)(x_1 - x_3)} + \frac{(x_1 - x_3)\,y_2}{(x_2 - x_1)(x_2 - x_3)} + \frac{(x_1 - x_2)\,y_3}{(x_3 - x_1)(x_3 - x_2)}$$

Derivative at $x_2$:
$$P'(x_2) \approx \frac{(x_2 - x_3)\,y_1}{(x_1 - x_2)(x_1 - x_3)} + \frac{(2x_2 - x_1 - x_3)\,y_2}{(x_2 - x_1)(x_2 - x_3)} + \frac{(x_2 - x_1)\,y_3}{(x_3 - x_1)(x_3 - x_2)}$$

Derivative at $x_3$:
$$P'(x_3) \approx \frac{(x_3 - x_2)\,y_1}{(x_1 - x_2)(x_1 - x_3)} + \frac{(x_3 - x_1)\,y_2}{(x_2 - x_1)(x_2 - x_3)} + \frac{(2x_3 - x_1 - x_2)\,y_3}{(x_3 - x_1)(x_3 - x_2)}$$
5.2 Integration
In this section, we will discuss the Trapezoid Rule and Simpson’s Rule for
numerical integration.
The Composite Trapezoid Rule approximates the integral as
$$\int_a^b f(x)\,dx \approx \frac{h}{2}\Big[f(x_0) + 2\big(f(x_1) + f(x_2) + \cdots + f(x_{n-1})\big) + f(x_n)\Big],$$
where $h = \frac{b-a}{n}$ is the width of each sub-interval, and $n$ is the number of sub-intervals. The points that divide the interval $[a, b]$ are:
$$x_0 = a, \quad x_1 = a + h, \quad x_2 = a + 2h, \quad \ldots, \quad x_n = b.$$
In particular, if we take $n = 1$, then $x_0 = a$, $x_n = x_1 = b$, and $h = b - a$, and hence the Basic Trapezoid Rule will be
$$\int_a^b f(x)\,dx \approx \frac{h}{2}\big[f(a) + f(b)\big].$$
Note: If $n$ is not given in the question, then one should apply the Basic Trapezoid Rule.
Geometrical Interpretation
The idea behind the Trapezoid Rule is to approximate the region under the
curve as a series of trapezoids. Each trapezoid’s area is determined by the
function values at the endpoints of the sub-interval. Summing the areas of
these trapezoids gives an approximation of the integral.
The error of the Composite Trapezoid Rule is
$$E_T = -\frac{(b-a)^3}{12n^2} f''(\xi),$$
where ξ is some point in the interval [a, b]. This shows that the error decreases
as n increases.
Example 1: f(x) = x²

Let's apply the Trapezoid Rule to approximate the integral of $f(x) = x^2$ over the interval $[0, 2]$. We will divide the interval into $n = 4$ sub-intervals.
$$a = 0, \quad b = 2, \quad h = \frac{b-a}{4} = 0.5$$
The points dividing the interval are:
$$x_0 = 0, \quad x_1 = 0.5, \quad x_2 = 1, \quad x_3 = 1.5, \quad x_4 = 2.$$
Evaluating the function at these points:
$$f(x_0) = 0, \quad f(x_1) = 0.25, \quad f(x_2) = 1, \quad f(x_3) = 2.25, \quad f(x_4) = 4.$$
Applying the Trapezoid Rule formula:
$$\int_0^2 x^2\,dx \approx \frac{0.5}{2}\big[0 + 2(0.25 + 1 + 2.25) + 4\big] = 0.25 \times [0 + 2(3.5) + 4] = 0.25 \times 11 = 2.75.$$
Example 2: f(x) = e^{−x²}

Let's apply the Trapezoid Rule to $f(x) = e^{-x^2}$ over $[0, 1]$ with $n = 4$ sub-intervals, so $h = 0.25$. Evaluating the function at the points $x_0 = 0, \ldots, x_4 = 1$:
$$f(0) = e^{-(0)^2} = 1, \quad f(0.25) = e^{-(0.25)^2} \approx 0.9394, \quad f(0.5) = e^{-(0.5)^2} \approx 0.7788,$$
$$f(0.75) = e^{-(0.75)^2} \approx 0.5698, \quad f(1) = e^{-(1)^2} \approx 0.3679.$$
Now applying the Trapezoid Rule formula:
$$\int_0^1 e^{-x^2}\,dx \approx \frac{0.25}{2}\big[1 + 2(0.9394 + 0.7788 + 0.5698) + 0.3679\big]$$
$$= 0.125 \times [1 + 2(2.2880) + 0.3679] = 0.125 \times 5.9439 \approx 0.7430.$$
The exact value of the integral is approximately 0.7468, so the Trapezoid Rule gives a good approximation of 0.7430.
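Both examples can be reproduced with a short Python sketch of the Composite Trapezoid Rule (names illustrative):

    import math

    def trapezoid(f, a, b, n):
        # Composite rule: h/2 [f(x0) + 2(f(x1)+...+f(x_{n-1})) + f(xn)].
        h = (b - a) / n
        total = f(a) + f(b)
        for i in range(1, n):
            total += 2 * f(a + i * h)   # interior points carry weight 2
        return h / 2 * total

    print(trapezoid(lambda x: x**2, 0, 2, 4))             # 2.75, as in Example 1
    print(trapezoid(lambda x: math.exp(-x**2), 0, 1, 4))  # ~0.7430, as in Example 2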
The Composite Simpson's Rule approximates the integral as
$$\int_a^b f(x)\,dx \approx \frac{h}{3}\Big[f(x_0) + 4\big(f(x_1) + f(x_3) + \cdots + f(x_{n-1})\big) + 2\big(f(x_2) + f(x_4) + \cdots + f(x_{n-2})\big) + f(x_n)\Big],$$
where $h = \frac{b-a}{n}$ is the width of each sub-interval, and $n$ is the number of sub-intervals. It is important to note that for Simpson's Rule $n$ must be even. The points that divide the interval $[a, b]$ are:
$$x_0 = a, \quad x_1 = a + h, \quad x_2 = a + 2h, \quad \ldots, \quad x_n = b.$$
The Basic Simpson's Rule can be obtained by taking $n = 2$ in the above formula. For $n = 2$, $x_0 = a$, $x_1 = a + h$, and $x_n = x_2 = b$, and hence the formula will be
$$\int_a^b f(x)\,dx \approx \frac{h}{3}\big[f(x_0) + 4f(x_1) + f(x_2)\big].$$
Note: If n is not given in the question, then one should apply the Basic
Simpson’s Rule.
Geometrical Interpretation
Simpson’s Rule fits a quadratic polynomial (parabola) to every pair of adja-
cent intervals and approximates the area under the curve using this parabola.
This makes Simpson’s Rule more accurate for smooth and continuous func-
tions.
The error of the Composite Simpson's Rule is
$$E_S = -\frac{(b-a)^5}{180n^4} f^{(4)}(\xi),$$
where $\xi$ is some point in the interval $[a, b]$. This shows that Simpson's Rule is generally more accurate than the Trapezoid Rule.
Example 1: f(x) = x²

Let's apply Simpson's Rule to approximate the integral of $f(x) = x^2$ over the interval $[0, 2]$, using $n = 4$ sub-intervals.
$$a = 0, \quad b = 2, \quad h = \frac{b-a}{4} = 0.5$$
The points dividing the interval are:
$$x_0 = 0, \quad x_1 = 0.5, \quad x_2 = 1, \quad x_3 = 1.5, \quad x_4 = 2.$$
Evaluating the function at these points:
$$f(x_0) = 0, \quad f(x_1) = 0.25, \quad f(x_2) = 1, \quad f(x_3) = 2.25, \quad f(x_4) = 4.$$
Applying Simpson's Rule:
$$\int_0^2 x^2\,dx \approx \frac{0.5}{3}\big[0 + 4(0.25 + 2.25) + 2(1) + 4\big] = \frac{0.5}{3} \times [10 + 2 + 4] = \frac{0.5}{3} \times 16 = \frac{8}{3} \approx 2.6667.$$
Example 2: f(x) = e^{−x²}

Using the same function values as before,
$$f(0) = 1, \quad f(0.25) \approx 0.9394, \quad f(0.5) \approx 0.7788, \quad f(0.75) \approx 0.5698, \quad f(1) \approx 0.3679.$$
Now applying Simpson's Rule formula:
$$\int_0^1 e^{-x^2}\,dx \approx \frac{0.25}{3}\big[1 + 4(0.9394 + 0.5698) + 2(0.7788) + 0.3679\big]$$
$$= \frac{0.25}{3} \times [1 + 4(1.5092) + 2(0.7788) + 0.3679] = \frac{0.25}{3} \times [1 + 6.0368 + 1.5576 + 0.3679] = \frac{0.25}{3} \times 8.9623 \approx 0.7469.$$
The exact value of the integral is approximately 0.7468, so Simpson's Rule gives an excellent approximation of 0.7469.
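A minimal Python sketch of the Composite Simpson's Rule, reproducing both examples (names illustrative):

    import math

    def simpson(f, a, b, n):
        # Composite Simpson's Rule; n must be even.
        if n % 2 != 0:
            raise ValueError("n must be even for Simpson's Rule")
        h = (b - a) / n
        total = f(a) + f(b)
        for i in range(1, n):
            weight = 4 if i % 2 == 1 else 2   # odd-indexed points get 4, even get 2
            total += weight * f(a + i * h)
        return h / 3 * total

    print(simpson(lambda x: x**2, 0, 2, 4))             # 2.666..., exact for x^2
    print(simpson(lambda x: math.exp(-x**2), 0, 1, 4))  # ~0.74686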
Euler's Method: given $y' = f(t, y)$ with an initial value, each step advances the solution by
$$y_{n+1} = y_n + h \, f(t_n, y_n).$$

Second-Order Runge-Kutta (RK2) Methods

The general second-order Runge-Kutta step takes a weighted average of two slope evaluations:
$$y_{n+1} = y_n + h\,[a_1 k_1 + a_2 k_2],$$
where
$$k_1 = f(t_n, y_n), \qquad k_2 = f(t_n + p_1 h, \; y_n + u_{11} h k_1),$$
for suitable constants $a_1, a_2, p_1, u_{11}$.
1. Modified Euler's Method (Mid-Point Method)

The Modified Euler's Method (Mid-Point Method), a second-order Runge-Kutta variant, is defined by:
$$k_1 = f(t_n, y_n),$$
$$k_2 = f\!\left(t_n + \frac{h}{2},\; y_n + \frac{h}{2}\,k_1\right),$$
$$y_{n+1} = y_n + h\,k_2.$$
2. Heun’s Method
and hence
p1 = 1, u11 = 1.
k1 = f (tn , yn ),
k2 = f (tn + h, yn + hk1 ),
h
yn+1 = yn + (k1 + k2 ).
2
3. Ralston’s Method
k1 = f (tn , yn ),
2h 2h
k2 = f tn + , yn + k1 ,
3 3
h
yn+1 = yn + (k1 + 3k2 ).
4
Example

For the same ODE, $\frac{dy}{dx} = x + y$, $y(0) = 1$, and step size $h = 0.1$:
$$k_1 = f(0, 1) = 0 + 1 = 1.$$
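The computation can be carried further in code. A minimal Python sketch comparing Euler's method with Heun's RK2 on this initial value problem, $y' = x + y$, $y(0) = 1$, $h = 0.1$, over five steps (names illustrative); the exact solution $y = 2e^x - x - 1$ can be used to check the errors:

    def euler_step(f, x, y, h):
        # One step of Euler's method: y_{n+1} = y_n + h f(x_n, y_n).
        return y + h * f(x, y)

    def heun_step(f, x, y, h):
        # One step of Heun's RK2: average the slopes at both ends.
        k1 = f(x, y)
        k2 = f(x + h, y + h * k1)
        return y + h / 2 * (k1 + k2)

    f = lambda x, y: x + y            # the ODE from the example, y(0) = 1
    x, y_euler, y_heun, h = 0.0, 1.0, 1.0, 0.1
    for _ in range(5):
        y_euler = euler_step(f, x, y_euler, h)
        y_heun = heun_step(f, x, y_heun, h)
        x += h
    print(y_euler, y_heun)            # Heun tracks 2e^x - x - 1 more closely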
Miscellaneous Exercise
1. A real root of the equation x3 − 5x + 1 = 0 lies in the interval (0, 1).
Perform three iterations of the Regula Falsi Method to obtain the root.
2. Perform three iterations of the Bisection Method to find the root of the equation $\cos(x) - x\,e^x = 0$ in $(0, 1)$.
3. Discuss the order of convergence of the Secant method and give the
geometrical interpretation of the method.
4. (a) Verify that $x = \sqrt{a}$ is a fixed point of the function $h(x) = \frac{1}{2}\left(x + \frac{a}{x}\right)$.
9. (a) Define the backward difference operator ∇ and the Newton divided difference. Prove that:
$$f[x_0, x_1, \ldots, x_n] = \frac{\nabla^n f}{n!\,h^n},$$
where $h = x_{i+1} - x_i$.
(b) Construct the divided difference table for the following data set
and then write out the Newton form of the interpolating polyno-
mial:
x y
−7 10
−5 5
−4 2
−1 10
Find the approximation of y for x = −3.
(c) Use the formula
$$f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h}$$
11. Define the rate of convergence of an iterative scheme {xn }. Use the
bisection method to determine the root of the equation:
$$x^5 + 2x - 1 = 0 \quad \text{on } (0, 1).$$
Further, compute the theoretical error bound at the end of the fifth
iteration and the next enclosing (bracketing) interval.
12. Differentiate between the method of false position and the secant method.
Apply the method of false position to solve:
cos(x) − x = 0
13. (a) Using scaled partial pivoting during the factoring step, find matrices $L$, $U$, and $P$ such that $LU = PA$, where:
$$A = \begin{pmatrix} 4 & 2 & -1 \\ 1 & 2 & 3 \\ -1 & 1 & 4 \end{pmatrix}$$
(b) Let g be a continuous function on the closed interval [a, b]. Sup-
pose g is differentiable on the open interval (a, b) and there exists a
positive constant k < 1 such that |g ′ (x)| ≤ k < 1 for all x ∈ (a, b).
Then:
i. The sequence {pn } generated by pn = g(pn−1 ) converges to
the fixed point p for any p0 ∈ [a, b].
ii. Find the approximated root of f (x) = x3 + 2x2 − 3x − 1 using
the secant method, taking p0 = 2 and p1 = 1, until |pn − pn−1 |
is sufficiently small.
(c) Determine the spectral radius ρ of the matrix:
$$A = \begin{pmatrix} -1 & 0 & -2 \\ 3 & 2 & -1 \\ 1 & 3 & 0 \end{pmatrix}$$
Hence, solve the system $Ax = b$, given $b = \begin{pmatrix} 14 \\ -3 \\ 7 \end{pmatrix}$.
14. Use the Jacobi method to solve the following system of linear equations:
2x1 − x2 = 3
−x1 + 4x2 + x3 = 7
x2 + 5x3 = 1
Use the initial approximation $x^{(0)} = (0, 0, 0)^T$ and perform three iterations.
17. Apply Euler’s method to approximate the solution of the initial value
problem:
$$x' = 1 + x^2, \quad x(1) = 0, \quad 1 \leq t \leq 4,$$
taking 5 steps.
18. For the function f (x) = ex , construct the Lagrange form of the inter-
polating polynomial passing through the points (−1, e−1 ), (0, 1), and
(1, e). Estimate f (x) using the polynomial and determine the error in
the approximation. Verify that the theoretical error bound is satisfied.