Numerical methods are techniques by which mathematical problems are formulated so that they can be solved with arithmetic and logical operations. Because digital computers excel at performing such operations, numerical methods are sometimes referred to as computer mathematics.
1. Numerical methods greatly expand the types of problems you
can address. They are capable of handling large systems of
equations, nonlinearities, and complicated geometries that
are not uncommon in engineering and science and that are
often impossible to solve analytically with standard calculus.
As such, they greatly enhance your problem-solving skills.
2. Numerical methods allow you to use canned software with insight. Understanding the basic theory underlying these methods lets you use prepackaged programs intelligently.
3. Many problems cannot be approached using
canned programs. If you are conversant with
numerical methods, and are adept at computer
programming, you can design your own programs to
solve problems without having to buy or commission
expensive software.
4. Numerical methods are an efficient vehicle for
learning to use computers. Because numerical
methods are expressly designed for computer
implementation, they are ideal for illustrating the
computer’s powers and limitations.
5. Numerical methods provide a vehicle for you to reinforce your understanding of mathematics, because one function of numerical methods is to reduce higher mathematics to basic arithmetic operations.
Bisection Method
Derivation of Bisection Method
Theorem: An equation f(x) = 0, where f(x) is a real continuous function, has at least one root between x_l and x_u if f(x_l) f(x_u) < 0.
Figure 1 At least one root exists between the two points if the
function is real, continuous, and changes sign.
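As a quick illustration of the theorem (this sketch is not part of the original slides), the Python snippet below checks the sign-change condition f(x_l) f(x_u) < 0; the function g and the interval are arbitrary choices for demonstration.

# Sketch: verify the bracketing condition of the theorem.
# The function g and the interval [x_l, x_u] are illustrative choices.
def g(x):
    return x**3 - x - 2   # g(1) = -2 and g(2) = 4, so a root lies in [1, 2]

x_l, x_u = 1.0, 2.0
if g(x_l) * g(x_u) < 0:
    print("Sign change: at least one root lies between x_l and x_u")
else:
    print("No sign change: the theorem gives no guarantee on this interval")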
7
Derivation of
Bisection Method
f(x)
x x
xu
Figure 2 If functionf x does not change sign between two
points, roots of the equationf x 0 may still exist between
the two points. 8
Basis of Bisection Method
Figure 3 If the function f(x) does not change sign between two points, there may not be any roots for the equation f(x) = 0 between the two points.
Derivation of Bisection Method
Figure 4 If the function changes sign between two points, more than one root for the equation f(x) = 0 may exist between the two points.
Bisection Method Algorithm
Step 1
Choose x_l and x_u as two guesses for the root such that f(x_l) f(x_u) < 0, or in other words, f(x) changes sign between x_l and x_u. This was demonstrated in Figure 1.
Step 2
Estimate the root, x_m, of the equation f(x) = 0 as the midpoint between x_l and x_u:
x_m = (x_l + x_u) / 2
Figure 5 Estimate of x_m.
Step 3
Now check the following:
a) If f(x_l) f(x_m) < 0, then the root lies between x_l and x_m; set x_l = x_l and x_u = x_m.
b) If f(x_l) f(x_m) > 0, then the root lies between x_m and x_u; set x_l = x_m and x_u = x_u.
c) If f(x_l) f(x_m) = 0, then the root is x_m. Stop the algorithm if this is true.
Step 4
Find the new estimate of the root:
x_m = (x_l + x_u) / 2
Find the absolute relative approximate error:
|ε_a| = |(x_m^new - x_m^old) / x_m^new| × 100
where
x_m^old = previous estimate of the root
x_m^new = current estimate of the root
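The error formula of Step 4 translates directly into code; a minimal sketch (not taken from the slides), with illustrative variable names:

# Sketch: absolute relative approximate error between successive root estimates.
def abs_rel_approx_error(x_m_new, x_m_old):
    # |epsilon_a| = |(x_m_new - x_m_old) / x_m_new| * 100, in percent
    return abs((x_m_new - x_m_old) / x_m_new) * 100

print(abs_rel_approx_error(0.25, 0.5))   # 100.0, matching Iteration 2 of Example 1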
Step 5
Compare the absolute relative approximate error |ε_a| with the pre-specified error tolerance ε_s:
Is |ε_a| > ε_s?
Yes: go to Step 2 using the new upper and lower guesses.
No: stop the algorithm.
Note: one should also check whether the number of iterations is more than the maximum number of iterations allowed. If so, one needs to terminate the algorithm and notify the user about it.
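The five steps can be collected into a short program. The sketch below is one possible Python implementation of the algorithm as described above; it is not taken from the slides, and the stopping tolerance, iteration cap, and test function are illustrative choices.

def bisection(f, x_l, x_u, eps_s=0.5, max_iter=50):
    # Bisection method following Steps 1-5; returns (root estimate, iterations used).
    if f(x_l) * f(x_u) > 0:
        raise ValueError("f(x_l) and f(x_u) must have opposite signs (Step 1)")
    x_m_old = None
    for i in range(1, max_iter + 1):
        x_m = (x_l + x_u) / 2                          # Steps 2 and 4: midpoint estimate
        if x_m_old is not None:
            eps_a = abs((x_m - x_m_old) / x_m) * 100   # Step 4: |epsilon_a| in percent
            if eps_a <= eps_s:                         # Step 5: within tolerance, stop
                return x_m, i
        test = f(x_l) * f(x_m)                         # Step 3: which half holds the root?
        if test < 0:
            x_u = x_m
        elif test > 0:
            x_l = x_m
        else:
            return x_m, i                              # f(x_m) == 0: x_m is the root
        x_m_old = x_m
    return x_m, max_iter                               # iteration cap reached (Step 5 note)

# Illustrative use with f(x) = 2.5x - 1 on [0, 1], as in Example 1 below:
root, iters = bisection(lambda x: 2.5 * x - 1, 0.0, 1.0, eps_s=0.5)
print(root, iters)   # approaches the exact root 0.4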
Example 1
To find the inverse of a value a, one can solve the equation
f(x) = 1/x - a = 0,
where x is the inverse of a. Multiplying through by -x gives the equivalent form f(x) = ax - 1 = 0, which is the form used in the calculations below.
Use the bisection method of finding roots of equations to find the inverse of a = 2.5. Conduct three iterations to estimate the root of the above equation. Find the absolute relative approximate error at the end of each iteration and the number of significant digits at least correct at the end of each iteration.
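For reference (not stated on the slides), the exact inverse of 2.5 is 1/2.5 = 0.4, which is the value the iterations below should approach; a minimal Python setup, assuming the rescaled form of the equation:

a = 2.5
f = lambda x: a * x - 1   # rescaled form of 1/x - a = 0; the root is x = 1/a
print(1 / a)              # exact inverse: 0.4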
Example 1 Cont.
Figure 8 Graph of the function f(x).
Example 1 Cont.
Solution
Entered function on the given interval with the initial upper and lower guesses:
f(x) = ax - 1 = 0
f(x) = 2.5x - 1 = 0
Let us assume x_l = 0, x_u = 1.
Check if the function changes sign between x_l and x_u:
f(x_l) = f(0) = 2.5(0) - 1 = -1
f(x_u) = f(1) = 2.5(1) - 1 = 1.5
f(x_l) f(x_u) = f(0) f(1) = (-1)(1.5) = -1.5 < 0
So there is at least one root between the brackets x_l and x_u, that is, between 0 and 1.
Figure 9 Checking that the bracket is valid.
Example 1 Cont.
Iteration 1
The estimate of the root is
x_m = (x_l + x_u) / 2 = (0 + 1) / 2 = 0.5
f(x_m) = f(0.5) = 2.5(0.5) - 1 = 0.25
f(x_l) f(x_m) = f(0) f(0.5) = (-1)(0.25) = -0.25 < 0
Hence the root is bracketed between x_l and x_m. The lower and upper limits of the new bracket are x_l = 0, x_u = 0.5.
The absolute relative approximate error |ε_a| cannot be calculated, as we do not have a previous approximation.
Figure 10 Graph of the estimated root after Iteration 1.
Example 1 Cont.
Iteration 2
The estimate of the root is
x_m = (x_l + x_u) / 2 = (0 + 0.5) / 2 = 0.25
f(x_m) = f(0.25) = 2.5(0.25) - 1 = -0.375
f(x_m) f(x_u) = f(0.25) f(0.5) = (-0.375)(0.25) = -0.09375 < 0
Hence the root is bracketed between x_m and x_u. The lower and upper limits of the new bracket are x_l = 0.25, x_u = 0.5.
Figure 11 Graph of the estimated root after Iteration 2.
The absolute relative approximate error |ε_a| at the end of Iteration 2 is
|ε_a| = |(x_m^new - x_m^old) / x_m^new| × 100 = |(0.25 - 0.5) / 0.25| × 100 = 100%
None of the significant digits are at least correct in the estimated root x_m = 0.25, as the absolute relative approximate error is greater than 5%.
Example 1 Cont.
Iteration 3
The estimate of the root is
x_m = (x_l + x_u) / 2 = (0.25 + 0.5) / 2 = 0.375
f(x_m) = f(0.375) = 2.5(0.375) - 1 = -0.0625
f(x_m) f(x_u) = f(0.375) f(0.5) = (-0.0625)(0.25) = -0.015625 < 0
Hence the root is bracketed between x_m and x_u. The lower and upper limits of the new bracket are x_l = 0.375, x_u = 0.5.
Figure 12 Graph of the estimated root after Iteration 3.
The absolute relative approximate error |ε_a| at the end of Iteration 3 is
|ε_a| = |(x_m^new - x_m^old) / x_m^new| × 100 = |(0.375 - 0.25) / 0.375| × 100 = 33.333%
Still none of the significant digits are at least correct in the estimated root of the equation, as the absolute relative approximate error is greater than 5%. Seven more iterations were conducted; these iterations are shown in the table below.
Table 1 Root of f(x) = 0 as a function of the number of iterations for the bisection method.

Iteration   x_l       x_u       x_m       |ε_a| %     f(x_m)
1           0         1         0.5       ----------  0.25
2           0         0.5       0.25      100         -0.375
3           0.25      0.5       0.375     33.33       -0.0625
4           0.375     0.5       0.4375    14.2857     0.09375
5           0.375     0.4375    0.40625   7.6923      0.01563
6           0.375     0.40625   0.39063   4.00        -0.02344
7           0.39063   0.40625   0.39844   1.9608      -3.90625 × 10^-3
8           0.39844   0.40625   0.40234   0.97087     5.8594 × 10^-3
9           0.39844   0.40234   0.40039   0.48780     9.7656 × 10^-4
10          0.39844   0.40039   0.39941   0.24450     -1.4648 × 10^-3
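As a cross-check (not part of the slides), the table can be reproduced by iterating the algorithm in a short loop; the sketch assumes the rescaled function f(x) = 2.5x - 1 and the starting bracket [0, 1]:

f = lambda x: 2.5 * x - 1                    # rescaled form; the exact root is 0.4
x_l, x_u, x_m_old = 0.0, 1.0, None

for i in range(1, 11):
    x_m = (x_l + x_u) / 2
    eps_a = None if x_m_old is None else abs((x_m - x_m_old) / x_m) * 100
    print(i, x_l, x_u, x_m, eps_a, f(x_m))   # one row of Table 1 per iteration
    if f(x_l) * f(x_m) < 0:                  # root lies in the lower half
        x_u = x_m
    else:                                    # root lies in the upper half
        x_l = x_m
    x_m_old = x_m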
Advantages
Always convergent
The root bracket gets halved with each
iteration - guaranteed.
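Because the bracket is halved every iteration, the number of iterations needed to reach a given bracket width can be predicted in advance. This is implied, though not spelled out, by the slides; a small sketch, assuming an initial bracket [x_l, x_u] and a desired absolute bracket width tol:

import math

def iterations_needed(x_l, x_u, tol):
    # After n iterations the bracket width is (x_u - x_l) / 2**n,
    # so we need the smallest n with (x_u - x_l) / 2**n <= tol.
    return math.ceil(math.log2((x_u - x_l) / tol))

print(iterations_needed(0.0, 1.0, 1e-3))   # 10 halvings shrink [0, 1] below 0.001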
Drawbacks
Slow convergence.
If one of the initial guesses is close to the root, the convergence is slower.
Drawbacks
If a function f(x) is such that it just touches the x-axis, the method will be unable to find valid lower and upper guesses, for example
f(x) = x^2
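A quick illustration of this drawback (not from the slides): for f(x) = x^2 the product f(x_l) f(x_u) is never negative, so no valid starting bracket exists even though x = 0 is a root.

f = lambda x: x**2                          # double root at x = 0; the curve only touches the x-axis
for x_l, x_u in [(-1.0, 1.0), (-2.0, 0.5), (-0.1, 3.0)]:
    print(x_l, x_u, f(x_l) * f(x_u) < 0)    # always False: no sign change to bracket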
Drawbacks
The function changes sign but the root does not exist, for example
f(x) = 1/x
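A sketch of this failure mode (not from the slides): f(x) = 1/x changes sign across x = 0, so the bracket test is satisfied, yet repeated bisection only closes in on the singularity rather than on a root.

f = lambda x: 1.0 / x                # changes sign across x = 0 but has no root there
x_l, x_u = -1.0, 2.0
print(f(x_l) * f(x_u) < 0)           # True: the bracket test is (misleadingly) satisfied

for _ in range(20):                  # the bracket shrinks toward the pole at x = 0
    x_m = (x_l + x_u) / 2
    if f(x_l) * f(x_m) < 0:
        x_u = x_m
    else:
        x_l = x_m
print(x_l, x_u, f((x_l + x_u) / 2))  # tiny bracket around 0, and |f| blows up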
THE END