Solution of Non Linear Equations

The document discusses methods for finding the roots or zeros of equations. It explains that the roots of polynomial equations can be found using direct methods, while iterative methods are generally needed for transcendental equations. Two categories of iterative methods are described: bracketing methods which bracket the root and reduce the width of the bracket, and open-end methods which use a single starting value. Specific iterative methods mentioned include bisection, false position, Newton-Raphson, and secant.

Uploaded by Mysha Anowar

Man cannot discover new oceans unless he has the courage to lose sight of the shore.

Roots of Non Linear Equation

A problem of great importance in science and engineering is that of determining the roots/
zeros of an equation of the form
f(x) = 0.

A polynomial equation of the form

f(x) = a0 + a1x + a2x² + … + an−1xⁿ⁻¹ + anxⁿ = 0, where the coefficients ai are real numbers,


is called an algebraic equation. An equation which contains polynomials, exponential functions,
logarithmic functions, trigonometric functions etc. is called a transcendental equation.
For example,
5x⁵ – x³ + 3x² = 0,    x³ – 4x² + x + 6 = 0
are algebraic (polynomial) equations, and
2 sin x – x = 0,    eˣ sin x – log x = 0
are transcendental equations.

Root:
We assume that the function f(x) is continuous in the required interval. We define that a number
α, for which f(α) ≡ 0 is called a root of the equation f(x) = 0, or a zero of f(x). Geometrically, a
root of an equation f(x) = 0 is the value of x at which the graph of the equation y = f(x) intersects
the x-axis. (see Fig. 1.1).

Remark 1
A polynomial equation of degree n has exactly n roots, real or complex, simple or multiple, whereas
a transcendental equation may have one root, an infinite number of roots, or no root.
We shall derive methods for finding only the real roots.
The methods for finding the roots are classified as (i) direct methods, and (ii) iterative methods.

Direct analytical methods


In certain cases, roots can be found by using direct analytical methods. For example, we can easily
calculate the roots of a quadratic equation: ax2 + bx + c = 0 from the following equation:
Page 1 of 22
x = (– b ± √(b² – 4ac)) / (2a)
However, there are equations that cannot be solved by analytical methods. For example, the simple
transcendental equation 2 sin x – x = 0 cannot be solved analytically. Direct methods for solving
non-linear equations do not exist except for certain simple cases.
Iterative methods:
An iterative method is based on the idea of successive approximations. We start with one or two
initial approximations to the root and obtain a sequence of approximations x0, x1, ..., xk, ..., which in
the limit as k → ∞, converge to the exact root α. An iterative method for finding a root of the
equation f(x) = 0 can be obtained as xk + 1 = φ(xk), k = 0, 1, 2, ......
This method uses one initial approximation to the root x0. The sequence of approximations is given
by x1 = φ(x0),
x2 = φ(x1),
x3 = φ(x2), .....
The function φ is called an iteration function and x0 is called an initial approximation. If a method
uses two initial approximations x0, x1, to the root, then we can write the method as
xk + 1 = φ(xk – 1, xk), k = 1, 2, .....

Convergence of iterative methods


The sequence of iterates, {xk}, is said to converge to the exact root α if lim k → ∞ xk = α, or
lim k → ∞ | xk – α | = 0.
The error of approximation at the kth iterate is defined as εk = xk – α. Then, we can write
lim k → ∞ | error of approximation | = lim k → ∞ | xk – α | = lim k → ∞ | εk | = 0.

Remark 2 Given one or two initial approximations to the root, we require a suitable iteration
function φ for a given function f(x), such that the sequence of iterates, {xk}, converges to the exact
root α. Further, we also require a suitable criterion to terminate the iteration.

Stopping/Termination Criterion of Iterative Process


An iterative process must be terminated at some stage. We may use one (or combination) of the
following tests, depending on the behavior of the function, to terminate the process:
1. | xi+1 − xi | ≤ Ea  (absolute error in x)
2. | (xi+1 − xi) / xi+1 | ≤ Er  (relative error in x), xi+1 ≠ 0
3. | f(xi+1) | ≤ E  (value of function at root)
4. | f(xi+1) − f(xi) | ≤ E  (difference in function values)
5. | f(xi+1) | ≥ Fmax  (large function value, indicating divergence)
6. | xi+1 | ≥ XL  (large value of x, indicating divergence)
Here xi represents the estimate of the root at the ith iteration and f(xi) is the value of the function at xi.
 In cases where we do not know whether the process converges or not, we must have a limit on
the number of iterations, like Iterations ≥ N (limit on iterations).
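The first three tests above can be combined into a short helper, sketched here in Python (an illustration, not part of the original notes; the tolerance names Ea, Er and E follow the list above):

```python
# A sketch of termination tests 1-3 from the list above.
def should_stop(x_new, x_old, f_new, Ea=1e-6, Er=1e-6, E=1e-9):
    """Return True when any of tests 1-3 is satisfied."""
    if abs(x_new - x_old) <= Ea:                           # test 1: absolute error in x
        return True
    if x_new != 0 and abs((x_new - x_old) / x_new) <= Er:  # test 2: relative error in x
        return True
    if abs(f_new) <= E:                                    # test 3: function value at root
        return True
    return False
```

In a real solver this check would sit inside the iteration loop, together with the iteration-count limit mentioned above.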

Initial Approximation for an Iterative Procedure

Descartes’ rule of signs
For polynomial equations, Descartes’ rule of signs gives the bound for the number of positive and
negative real roots.
a. Count of positive roots: We count the number of changes of sign in the coefficients of
Pn(x) for the equation f(x) = Pn(x) = 0. The number of positive roots cannot exceed the
number of changes of sign, and can differ from it only by an even number. For example, if
there are four changes of sign, then the equation may have four positive roots, two positive
roots, or no positive root. If there are three changes of sign, then the equation may have
three positive roots or exactly one positive root. (For polynomial equations with real
coefficients, complex roots occur in conjugate pairs.)
b. Count of negative roots: We write the equation f(– x) = Pn(– x) = 0, and count the number
of changes of sign in the coefficients of Pn(– x). The number of negative roots cannot
exceed the number of changes of sign. Again, if there are four changes of sign, then the
equation may have four negative roots, two negative roots, or no negative root. If there
are three changes of sign, then the equation may have three negative roots or exactly
one negative root.
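The sign-change counting can be automated in a few lines of Python (a sketch, not from the notes; coefficients are listed from a0 up to an):

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def descartes_bounds(coeffs):
    """coeffs = [a0, a1, ..., an]. Return (max positive roots, max negative roots)."""
    pos = sign_changes(coeffs)
    # f(-x): negate the coefficients of the odd powers
    neg = sign_changes([c if i % 2 == 0 else -c for i, c in enumerate(coeffs)])
    return pos, neg

# 8x^3 - 12x^2 - 2x + 3 = 0, as in Example 1.1 below
print(descartes_bounds([3, -2, -12, 8]))   # → (2, 1)
```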

Example 1.1 Determine the maximum number of positive and negative roots for the following
equation.
8x³ – 12x² – 2x + 3 = 0
Solution
Let f(x) = 8x³ – 12x² – 2x + 3 = 0. The number of changes of sign in the coefficients
(8, – 12, – 2, 3) is 2. Therefore, the equation has 2 or no positive roots.
Now, f(– x) = – 8x³ – 12x² + 2x + 3. The number of changes of sign in the coefficients (–
8, – 12, 2, 3) is 1. Therefore, the equation has exactly one negative root.

Intermediate Value Theorem


We use the following theorem of calculus to determine an initial approximation. It is also called
the intermediate value theorem.

Theorem 1.1 If f(x) is continuous on some interval [a, b] and f(a)f(b) < 0, then the equation f(x) =
0 has at least one real root or an odd number of real roots in the interval (a, b).

Fig: 1.2

This theorem is evident from Figure 1.2: if f(a) and f(b) have opposite signs, the graph
must cross the x-axis at least once between x = a and x = b.

We set up a table of values of f (x) for various values of x. Studying the changes in signs in the
values of f (x), we determine the intervals in which the roots lie. For example, if f (1) and f (2) are
of opposite signs, then there is a root in the interval (1, 2).
Let us illustrate through the following examples.

Example 1.2 Determine the intervals of length one unit in which the real roots lie for the following
equation: 8x³ – 12x² – 2x + 3 = 0
Solution
Let f(x) = 8x³ – 12x² – 2x + 3. Tabulating f(x) at integer values of x:

x       – 1    0    1    2
f(x)   – 15    3  – 3   15

The sign of f(x) changes in each of the intervals (– 1, 0), (0, 1), (1, 2).
Therefore, there are three real roots and the roots lie in the intervals (– 1, 0), (0, 1), (1, 2).
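The tabulation step of Example 1.2 can be sketched in Python (an illustration; the helper name bracket_search is ours):

```python
def bracket_search(f, x_lo, x_hi, step=1):
    """Tabulate f at equally spaced points and return the intervals
    over which the sign of f changes (each contains at least one root)."""
    intervals = []
    x = x_lo
    while x < x_hi:
        if f(x) * f(x + step) < 0:
            intervals.append((x, x + step))
        x += step
    return intervals

f = lambda x: 8*x**3 - 12*x**2 - 2*x + 3
print(bracket_search(f, -2, 3))   # → [(-1, 0), (0, 1), (1, 2)]
```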
Largest Possible Root
For a polynomial represented by f(x) = anxⁿ + an−1xⁿ⁻¹ + … + a1x + a0, the largest possible root
is estimated by:

x̄ = – an−1 / an
 This value is taken as the initial approximation when no other value is suggested by our
knowledge of the problem at hand.
Search Bracket
The maximum absolute value of the roots is

| x_max | = √( (an−1 / an)² – 2(an−2 / an) )

So, all real roots lie within the interval (– | x_max |, | x_max |).
Interval of the roots
All real roots x satisfy the inequality

| x | < 1 + (1 / | an |) · max{ | a0 |, | a1 |, … , | an−1 | }

where 'max' denotes the maximum of the absolute values | a0 |, | a1 |, … , | an−1 |.

Example 1.3: Consider the polynomial equation 2x³ – 8x² + 2x + 12 = 0. Estimate the possible initial
guess values.
Solution: The largest possible root is x̄ = – (– 8/2) = 4. No root can be larger than the value 4.

All roots must satisfy the relation

| x_max | = √( (– 8/2)² – 2(2/2) ) = √14 ≈ 3.742

Therefore, all real roots lie in the interval (– √14, √14). We can use these two end points as initial
guesses for the bracketing methods and one of them for the open-end methods.
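The three bounds above can be combined into one helper, sketched here in Python (an illustration; it is applied to f(x) = x² − 4x − 10 from Example 1.5 further below, where the bracket bound gives 6, the tabulation range used there):

```python
import math

def root_bounds(coeffs):
    """coeffs = [a0, a1, ..., an]. Return (largest possible root,
    search-bracket bound |x_max|, and the 1 + max|ai|/|an| bound)."""
    a = coeffs
    n = len(a) - 1
    largest = -a[n-1] / a[n]                                    # largest possible root
    # search bracket (valid only when the radicand is nonnegative)
    x_max = math.sqrt((a[n-1] / a[n])**2 - 2 * (a[n-2] / a[n]))
    bound = 1 + max(abs(c) for c in a[:-1]) / abs(a[n])         # interval of the roots
    return largest, x_max, bound

# f(x) = x^2 - 4x - 10  →  coeffs [-10, -4, 1]
print(root_bounds([-10, -4, 1]))   # → (4.0, 6.0, 11.0)
```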

 Iterative methods can be divided into two categories:


1. Bracketing methods: Bracketing methods (also known as interpolation methods) start with
initial guesses that 'bracket' the root and then systematically reduce the width of the
bracket until the solution is reached. Two popular methods under this category are:
 Bisection method
 False position method
2. Open end methods: Open end methods (also known as extrapolation methods) use a single
starting value or two values that do not necessarily bracket the root. The following iterative
methods fall under this category:
 Newton-Raphson method
 Secant method.
 Muller’s method.
 Methods of successive approximation (Fixed-point method)

Evaluation of Polynomials
The polynomial is a sum of n + 1 terms and can be expressed as

f(x) = Σ (i = 0 to n) ai xⁱ = a0 + a1x + a2x² + … + anxⁿ

where the ai are real numbers representing the polynomial coefficients and x is the polynomial
variable. The above polynomial is said to be of the nth degree,

i.e. deg(f(x)) = n, where n represents the highest exponent of the variable.

This can be easily evaluated using a loop, but evaluating each power term independently would
require n(n + 1)/2 multiplications and n additions.
Evaluation of Polynomial Using Horner’s Rule


We can write the polynomial f (x) using Horner’s rule (also known as nested multiplication) as
follows:
f(x) = ( … ( (an x + an−1)x + an−2 )x + … + a1 )x + a0
Here the innermost expression an x + an−1 is evaluated first. The resulting value becomes a
multiplicand for the expression at the next level. The number of levels equals n, the degree of the
polynomial. This approach needs a total of n additions and n multiplications.

Algorithm: Horner’s Rule


The rule generates the sequence
pn = an
pn−1 = pn x + an−1
...
pj = pj+1 x + aj
...
p1 = p2 x + a1
f(x) = p0 = p1 x + a0

which evaluates P(x) = a0 + a1x + a2x² + … + an−1xⁿ⁻¹ + anxⁿ. As a step-by-step procedure:

1. Set j = n (where n is the degree of the polynomial).
2. bj = aj
3. bj−1 = aj−1 + bj x
4. j = j − 1
5. If j > 0 then go to step 3
   Else End

Example 1.4: Evaluate the polynomial f (x) = x3 – 4x2 + x + 6 using Horner’s rule at x = 2.
Solution: Here n = 3, a3 = 1, a2 = - 4, a1 = 1, a0 = 6
p3 = a3 = 1
p2 = 1 * 2 + (-4) = -2
p1 = (-2) * 2 + 1 = -3
p0 = (-3) * 2 + 6 = 0
f(2) = 0
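The rule translates directly into a short Python sketch (an illustration; the function name horner is ours):

```python
def horner(coeffs, x):
    """Evaluate a0 + a1*x + ... + an*x**n with n multiplications
    and n additions. coeffs = [a0, a1, ..., an]."""
    p = 0
    for a in reversed(coeffs):   # start from an and work inward
        p = p * x + a
    return p

# Example 1.4: f(x) = x^3 - 4x^2 + x + 6 at x = 2
print(horner([6, 1, -4, 1], 2))   # → 0
```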

Advantages of Horner’s Rule


 Reduces the number of multiplication operations.
 Computer processing time for a single multiplication is 5 to 20 times the processing time of
an addition, so evaluating the polynomial using Horner's rule significantly improves
the execution time taken by the computer.
 The time complexity of Horner's rule is O(n).

Bisection method
The bisection method (also known as binary chopping or the half-interval method) is one of the
simplest and most reliable iterative methods for the solution of nonlinear equations. This method
is based on the repeated application of the intermediate value theorem.
 Suppose f(x) has a real root in the interval [a, b]. The midpoint of a and b is
x0 = (a + b)/2.
Now, one of the following three conditions holds:
1. If f(x0) = 0, then x0 is the root of the equation f(x) = 0.
2. If f(x0) · f(a) < 0, then the root lies between x0 and a.
3. If f(x0) · f(b) < 0, then the root lies between x0 and b.

Thus, by testing the sign of the function at the midpoint, we can determine which part of the
interval contains the root. We can further divide this subinterval into two equal parts to find
another subinterval containing the root. This procedure can be repeated until the interval
containing the root is as small as we want.

Fig 1.3
Algorithm: Bisection method
1. Decide initial values for x1 and x2 and stopping criterion E.
2. Compute f1 = f (x1) and f2 = f (x2).
3. If f1 * f2 > 0, x1 and x2 do not bracket any root; go to step 1.
4. Compute x0 = (x1 + x2) / 2 and compute f0 = f (x0).
5. If f1 * f0 < 0 then set x2 = x0 else set x1 = x0 and f1 = f0.
6. If the absolute value of (x2 − x1) is less than E, then root = (x1 + x2) / 2 and go to step 7;
   else go to step 4.
7. Stop.
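The algorithm above can be sketched in Python (an illustration; the max_iter guard is our addition, following the iteration-limit remark in the termination-criteria section):

```python
def bisection(f, x1, x2, E=1e-6, max_iter=100):
    """Halve the bracket [x1, x2] until it is narrower than E."""
    f1, f2 = f(x1), f(x2)
    if f1 * f2 > 0:
        raise ValueError("x1 and x2 do not bracket a root")
    for _ in range(max_iter):          # guard against non-termination
        x0 = (x1 + x2) / 2
        f0 = f(x0)
        if f1 * f0 < 0:                # root lies in (x1, x0)
            x2 = x0
        else:                          # root lies in (x0, x2)
            x1, f1 = x0, f0
        if abs(x2 - x1) < E:
            return (x1 + x2) / 2
    return (x1 + x2) / 2

# Example 1.5's equation: root in (-2, -1) is 2 - sqrt(14) ≈ -1.7417
f = lambda x: x*x - 4*x - 10
print(bisection(f, -2, -1))
```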
Example 1.5: Find a real root of the equation f(x) = x² – 4x – 10 = 0 using the bisection method.
Solution: Using the search-bracket bound,
| x_max | = √( (– 4/1)² – 2(– 10/1) ) = √36 = 6
so all real roots lie in (– 6, 6). We tabulate f(x) over this interval:

Table 1:
x      – 6  – 5  – 4  – 3  – 2  – 1     0     1     2     3     4    5   6
f(x)    50   35   22   11    2  – 5  – 10  – 13  – 14  – 13  – 10  – 5   2

From Table 1, the sign of f(x) changes in (– 2, – 1) and in (5, 6). We pursue the first root:
let lower limit a = – 2 and upper limit b = – 1.

Iteration 1:
f(– 2) = 2 and f(– 1) = – 5
so, f(a) f(b) < 0;
the root lies between – 2 and – 1.
x0 = (a + b)/2 = (– 2 + (– 1))/2 = – 1.5
f(x0) = f(– 1.5) = – 1.75

Iteration 2:
f(– 2) = 2 and f(– 1.5) = – 1.75
so, f(a) f(b) < 0;
the root lies between – 2 and – 1.5.
x0 = (a + b)/2 = (– 2 + (– 1.5))/2 = – 1.75
f(x0) = f(– 1.75) = 0.0625

Iteration 3:
f(– 1.75) = 0.0625 and f(– 1.5) = – 1.75
so, f(a) f(b) < 0;
the root lies between – 1.75 and – 1.5.
x0 = (a + b)/2 = (– 1.75 + (– 1.5))/2 = – 1.625
f(x0) = f(– 1.625) = – 0.859

Iteration 4:
f(– 1.75) = 0.0625 and f(– 1.625) = – 0.859
so, f(a) f(b) < 0;
the root lies between – 1.75 and – 1.625.
x0 = (a + b)/2 = (– 1.75 + (– 1.625))/2 = – 1.6875
f(x0) = f(– 1.6875) = – 0.402

Iteration    x1        x2        x0         f(x1)     f(x2)     f(x0)
1            – 2       – 1       – 1.5      2         – 5       – 1.75
2            – 2       – 1.5     – 1.75     2         – 1.75    0.0625
3            – 1.75    – 1.5     – 1.625    0.0625    – 1.75    – 0.859
4            – 1.75    – 1.625   – 1.6875   0.0625    – 0.859   – 0.402

 The bisection method is linearly convergent. Since the convergence is slow to achieve a high
degree of accuracy, a large number of iterations may be needed. However, the bisection method
is guaranteed to converge.
Convergence of Bisection Method:
In the bisection method, we choose a midpoint x0 in the interval between xp and xn.
Depending upon the sign of f(x0), xp or xn is set equal to x0 such that the root still lies in
the new interval. In either case, the interval containing the root is reduced by a factor of 2.
The same process is repeated for the new interval. If the procedure is repeated n
times, then the interval containing the root is reduced to the size

(b – a) / 2ⁿ

After n iterations the root must lie within (b – a)/2ⁿ of our estimate; this means the error bound
at the nth iteration is

En = (b – a) / 2ⁿ

Similarly,

En+1 = (b – a) / 2ⁿ⁺¹ = En / 2

Thus, the error decreases with each step by a factor of 0.5. This method is therefore
linearly convergent.
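From the bound En = (b − a)/2ⁿ, the number of bisections needed to reach a tolerance E is n ≥ log₂((b − a)/E). A quick Python sketch (an illustration; the helper name is ours):

```python
import math

def iterations_needed(a, b, E):
    """Smallest n with (b - a)/2**n <= E, i.e. n >= log2((b - a)/E)."""
    return math.ceil(math.log2((b - a) / E))

# bracket (-2, -1) from Example 1.5, tolerance 1e-6
print(iterations_needed(-2, -1, 1e-6))   # → 20
```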

False Position Method


The method is also called linear interpolation method or chord method or regula-falsi method.
At the start of each iteration of the method, we require the interval in which the root lies. Let the
root of the equation f(x) = 0 lie in the interval (xk−1, xk), that is, fk−1 fk < 0, where
fk−1 = f(xk−1) and fk = f(xk). Then P(xk−1, fk−1) and Q(xk, fk) are points on the curve
y = f(x). Draw a straight line joining the points P and Q (Figs. 1.3a, b). The line PQ is taken as
an approximation of the curve in the interval [xk−1, xk]. The equation of the line PQ is given by

(y – fk) / (fk−1 – fk) = (x – xk) / (xk−1 – xk)

The point of intersection of this line PQ with the x-axis is taken as the next approximation to the
root. Setting y = 0 and solving for x, we get

xk+1 = xk – fk (xk – xk−1) / (fk – fk−1)   ……..(1)

Equation (1) is known as the false position formula.

The first approximation to the root is

x2 = x1 – f1 (x1 – x0) / (f1 – f0)

Simplifying, we can also write the approximation as

xk+1 = (xk−1 fk – xk fk−1) / (fk – fk−1)

Therefore, starting with the initial interval (x0, x1) in which the root lies, we compute

x2 = (x0 f1 – x1 f0) / (f1 – f0)

Now, if f(x0) f(x2) < 0, then the root lies in the interval (x0, x2). Otherwise, the root lies in the
interval (x2, x1). The iteration is continued using the interval in which the root lies, until the
required accuracy criterion is satisfied.

False Position Formula

Fig 1.3

From Figure 1.3


The equation of the straight line through the points (x1, f (x1)) and (x2, f (x2)) is given by
(f (x2) – f (x1)) / (x2 – x1) = (y – f (x1)) / (x – x1)
Since the line intersects the x-axis at x0, we have y = 0 when x = x0, so
(f (x2) – f (x1)) / (x2 – x1) = (– f (x1)) / (x0 – x1)
or x0 – x1 = – f (x1) (x2 – x1) / (f (x2) – f (x1))
Then we have
x0 = x1 – (f (x1) (x2 – x1)) / (f (x2) – f (x1))

This equation is known as the false position formula.


Remark 3: At the start of each iteration, the required root lies in an interval, whose length is
decreasing. Hence, the method always converges.

Remark 4: The method of false position has a disadvantage. If the root lies initially in the interval
(x0, x1), then one of the end points is fixed for all iterations. For example, in Fig. 1.3a, the left end
point x0 is fixed and the right end point moves towards the required root. Therefore, in actual
computations, the method behaves like

xk+1 = (x0 fk – xk f0) / (fk – f0)

In Fig. 1.3b, the right end point x1 is fixed and the left end point moves towards the required root.
Therefore, in this case, in actual computations, the method behaves like

xk+1 = (x1 fk – xk f1) / (fk – f1)

Remark 5: The computational cost of the method is one evaluation of the function f(x) per
iteration.

False Position Algorithm


1. Decide initial values for x1 and x2 and stopping criterion E.
2. Compute x0 = x1- (f(x1) (x2-x1) ) / (f (x2) – f (x1))
3. If f (x0) * f (x1) < 0 set x2 = x0 otherwise set x1 = x0
4. If the absolute difference of two successive values of x0 is less than E, then root = x0 and stop;
   else go to step 2.
 False position method converges linearly.
Example: Use the false position method to find a root of the function f (x) = x2 – x – 2 = 0 in the
range 1 < x < 3.
Solution: Given x1= 1 and x2 = 3
f (x1) = f (1) = -2 f (x2) = f (3) = 4
x0 = x1- (f(x1) (x2-x1) ) / (f (x2) – f (x1))
= 1 + 2 * (3 - 1) / (4 + 2)
= 1.6667
Iteration 2: f (x0) * f (x1) = f (1.6667) * f (1) = 1.7778 > 0.
Therefore, the root lies in the interval between x0 and x2. Then,
x1 = x0 = 1.6667
f (x1) = f (1.6667) = -0.8889
f (x2) = f (3) = 4
x0 = 1.6667 + 0.8889 * (3 - 1.6667) / (4 + 0.8889) = 1.909
Iteration 3: f (x0) * f (x1) = f (1.909) * f (1.6667) = (-0.2647) * (-0.8889) = 0.2353 > 0.
Therefore, the root lies in the interval between x0 = 1.909 and x2 = 3. Then,
x1 = x0 = 1.909
x0 = 1.909 + 0.2647 * (3 - 1.909) / (4 + 0.2647) = 1.977
The estimated root after the third iteration is 1.977. The interval contains the root x = 2. We can
perform additional iterations to refine this estimate further.
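The algorithm above can be sketched in Python and checked against this example (an illustration; the stopping test compares successive values of x0, as in step 4):

```python
def false_position(f, x1, x2, E=1e-6, max_iter=100):
    """Regula-falsi iteration: keep the bracket, replace one end point."""
    x_prev = None
    for _ in range(max_iter):
        x0 = x1 - f(x1) * (x2 - x1) / (f(x2) - f(x1))
        if f(x0) * f(x1) < 0:      # root lies in (x1, x0)
            x2 = x0
        else:                      # root lies in (x0, x2)
            x1 = x0
        if x_prev is not None and abs(x0 - x_prev) < E:
            return x0
        x_prev = x0
    return x0

f = lambda x: x*x - x - 2
print(false_position(f, 1, 3))   # → approximately 2.0
```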

Newton-Raphson Method

Perhaps the most widely used of all root-locating formulas is the Newton-Raphson equation (Fig.
below). If the initial guess at the root is xi, a tangent can be extended from the point [xi, f(xi)]. The
point where this tangent crosses the x axis usually represents an improved estimate of the root.

Let x0 be an initial approximation to the root of ( ) = 0. Then, P(x0, f0), where f0 = f(x0), is a
point on the curve. Draw the tangent to the curve at P, (Fig. 1.3). We approximate the curve in the
neighborhood of the root by the tangent to the curve at the point P. The point of intersection of the
tangent with the x-axis is taken as the next approximation to the root. The process is repeated until
the required accuracy is obtained. The equation of the tangent to the curve y = f(x) at the point
P(x0, f0) is given by,
y – f(x0) = (x – x0) f′(x0)

where f′(x0) is the slope of the tangent to the curve at P. Setting y = 0 and solving for x, we get

x = x0 – f(x0) / f′(x0),   f′(x0) ≠ 0

The next approximation to the root is given by

x1 = x0 – f(x0) / f′(x0),   f′(x0) ≠ 0

We repeat the procedure. The iteration method is defined as

xk+1 = xk – f(xk) / f′(xk),   f′(xk) ≠ 0

Derivation of Newton Raphson Formula by Taylor’s Series Expansion:


Assume that xn is an estimate of the root of the function f(x) = 0, and consider a small interval h
such that

h = xn+1 – xn   ……..(i)

Using the Taylor series expansion for f(xn+1), we have:

f(xn+1) = f(xn) + h f′(xn) + (h²/2!) f″(xn) + …

Neglecting the terms containing higher derivatives, we get:

f(xn+1) ≈ f(xn) + h f′(xn)   ……..(ii)

If xn+1 is the root of f(x), then f(xn+1) = 0. Putting this value in equation (ii) gives:

0 = f(xn) + h f′(xn),  so  h = – f(xn) / f′(xn),  and hence  xn+1 = xn – f(xn) / f′(xn)

The process is continued till the absolute difference between two successive approximations
is less than the specified error E, i.e., | xn+1 – xn | < E.

Algorithm: Newton-Raphson Method


1. Assign an initial value for x, say x0 and stopping criterion E.
2. Compute f (x0) and f ‘(x0).
3. Find the improved estimate of x0
x1 = x0 – f (x0) / f ’(x0)
4. Check for accuracy of the latest estimate.
If | (x1-x0)/x1 | < E then stop; otherwise continue.
5. Replace x0 by x1 and repeat steps 3 and 4.

Newton-Raphson method is said to have quadratic convergence.

Example: Find a root of the equation f(x) = x² – 3x + 2 = 0 using the Newton-Raphson method.
Solution: Here, f ’(x) = 2x – 3

Let x1 = 0 (first approximation)
x2 = x1 – f (x1) / f ′(x1) = 0 – 2 / (-3) = 2/3 = 0.6667
Similarly,
x3 = 0.6667 – 0.4444 / (-1.6667) = 0.9333
x4 = 0.9333 – 0.0711 / (-1.1334) = 0.9959
x5 = 0.9959 – 0.0041 / (-1.0082) = 0.9999
x6 = 0.9999 – 0.0001 / (-1.0002) = 1.0000
Since f (1.0) = 0, the root closest to the starting point x = 0 is 1.0000.
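The algorithm above can be sketched in Python and checked on this example (an illustration; function and helper names are ours):

```python
def newton_raphson(f, df, x0, E=1e-6, max_iter=50):
    """Newton-Raphson iteration with the relative-error test from step 4."""
    for _ in range(max_iter):
        d = df(x0)
        if d == 0:
            raise ZeroDivisionError("f'(x) is zero; the method fails")
        x1 = x0 - f(x0) / d
        if abs((x1 - x0) / x1) < E:
            return x1
        x0 = x1
    return x0

f  = lambda x: x*x - 3*x + 2
df = lambda x: 2*x - 3
print(newton_raphson(f, df, 0))   # → approximately 1.0
```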

Convergence of Newton Raphson Method:


Let xn be an estimate of the root of the function f(x) = 0, and let xr be the exact root, so that
f(xr) = 0. If xn and xr are close to each other, then using the Taylor series expansion:

f(xr) = f(xn) + (xr – xn) f′(xn) + ((xr – xn)²/2) f″(xn) + … = 0   ……..(i)

From the Newton-Raphson formula we have:

f(xn) = (xn – xn+1) f′(xn)   ……..(ii)

Putting the value of f(xn) from equation (ii) into equation (i) and neglecting the higher-order
terms, we get:

(xn – xn+1) f′(xn) + (xr – xn) f′(xn) + ((xr – xn)²/2) f″(xn) = 0
or (xr – xn+1) f′(xn) + ((xr – xn)²/2) f″(xn) = 0

The error in the estimate xn+1 is εn+1 = xn+1 – xr, and similarly εn = xn – xr. Expressing the
above equation in terms of the errors:

– εn+1 f′(xn) + (εn²/2) f″(xn) = 0

Rearranging the terms, we get:

εn+1 = ( f″(xn) / (2 f′(xn)) ) εn²

This shows that the error is roughly proportional to the square of the error at the previous
iteration. Therefore, the Newton-Raphson method has quadratic convergence.

Limitations of Newton-Raphson method


The Newton-Raphson method has certain limitations and pitfalls. The method will fail in the
following situations.
1. Division by zero may occur if f ’(xi) is zero or very close to zero.
2. If the initial guess is too far away from the required root, the process may converge to some
other root.
3. A particular value in the iteration sequence may repeat, resulting in an infinite loop. This
occurs when the tangent to the curve f (x) at x = xi+1 cuts the x-axis again at x = xi.

Secant method
A potential problem in implementing the Newton-Raphson method is the evaluation of the
derivative. Although this is not inconvenient for polynomials and many other functions, there are
certain functions whose derivatives may be extremely difficult or inconvenient to evaluate. For
these cases, the derivative can be approximated by a backward finite divided difference.

The secant method uses two initial estimates but does not require that they bracket the root. It
can use two points x1 and x2, as shown in the figure below, as starting values even though they
do not bracket the root.

Let us consider two points x1 and x2 in the figure above as starting values. Since the secant line
through (x1, f (x1)) and (x2, f (x2)) crosses the x-axis at x3, similar triangles give

f (x1) / (x1 – x3) = f (x2) / (x2 – x3)

Cross-multiplying,
f (x1)(x2 – x3) = f (x2)(x1 – x3)
x3 [f (x2) – f (x1)] = f (x2)·x1 – f (x1)·x2

∴ x3 = ( f (x2)·x1 – f (x1)·x2 ) / ( f (x2) – f (x1) )

By adding and subtracting f (x2)·x2 in the numerator and rearranging the terms, we have

x3 = x2 – f (x2)(x2 – x1) / ( f (x2) – f (x1) )

which is called the secant formula.
The approximate value of the root can be refined by repeating this procedure by replacing x1and
x2 by x2 and x3, respectively. That is, the next approximate value is given by,
x4 = x3 – f (x3)(x3 – x2) / ( f (x3) – f (x2) )

This procedure is continued till the desired level of accuracy is obtained.
We can express the secant formula in general as:

xi+1 = xi – f (xi)(xi – xi−1) / ( f (xi) – f (xi−1) )

 The formulas for the false position method and the secant method are similar, and both of them
use two initial estimates. However, there is a major difference in their algorithms of
implementation. In the false position method, the last estimate replaces one of the end points of
the interval such that the new interval brackets the root. But in the secant method, the values are
replaced in strict sequence, i.e. xi−1 is replaced by xi and xi by xi+1. The points may not bracket
the root.

Algorithm: Secant Method


1. Decide two initial points x1 and x2 and required accuracy level E.
2. Compute f1 = f (x1) and f2 = f (x2)
3. Compute x3 = (f2 x1 – f1 x2) / (f2 – f1)
4. If |(x3- x2)/ x3| > E, then
set x1 = x2 and f1 = f2
set x2 = x3 and f2 = f(x3)
go to step 3
Else
set root = x3
print results
5. Stop.

 Comparison between the secant formula and the Newton-Raphson formula for estimating a
root:

Newton-Raphson formula: xn+1 = xn – f (xn) / f ′(xn)

Secant formula: xn+1 = xn – f (xn)(xn – xn−1) / ( f (xn) – f (xn−1) )

This shows that the derivative f ′(xn) in the Newton formula has been replaced by the term
( f (xn) – f (xn−1) ) / (xn – xn−1) in the secant formula.
This is a major advantage because there is no need for the evaluation of derivatives. There are
many functions whose derivatives may be extremely difficult to evaluate.
However, one drawback of the secant formula is that the previous two iterates are required for
estimating the new one. Another drawback of the secant method is its slower rate of convergence.
It is proved that the rate of convergence of secant method is 1.618 while that of the Newton method
is 2.
 The convergence of the secant method is referred to as superlinear convergence.

Example: Use the secant method to estimate the root of the equation f (x) = x² – 4x – 10 = 0 with
the initial estimates x1 = 4 and x2 = 2.
Solution: Given x1 = 4 and x2 = 2
f(x1) = f(4) = -10 f(x2) = f(2) = -14
x3 = x2 - ((f(x2) * (x2 - x1)) / (f(x2) – f(x1)))
= 2 – (-14) * (2 – 4)/((-14) – (-10))
=9
For the second iteration,
x1 = x2 = 2    x2 = x3 = 9
f(x1) = f(2) = -14    f(x2) = f(9) = 35
x3 = 9 – 35 * (9 – 2) / (35 + 14) = 4
For the third iteration,
x1 = x2 = 9    x2 = x3 = 4
f(x1) = f(9) = 35    f(x2) = f(4) = -10
x3 = 4 – (-10) * (4 – 9) / ((-10) – 35) = 5.1111
For the fourth iteration,
x1 = x2 = 4    x2 = x3 = 5.1111
f(x1) = f(4) = -10    f(x2) = f(5.1111) = -4.3207
x3 = 5.1111 – (-4.3207) * (5.1111 – 4) / ((-4.3207) – (-10)) = 5.9563
For the fifth iteration,
x1 = x2 = 5.1111    x2 = x3 = 5.9563
f(x1) = f(5.1111) = -4.3207    f(x2) = f(5.9563) = 1.6523
x3 = 5.9563 – 1.6523 * (5.9563 – 5.1111) / (1.6523 + 4.3207) = 5.7225
For the sixth iteration,
x1 = x2 = 5.9563    x2 = x3 = 5.7225
f(x1) = f(5.9563) = 1.6523    f(x2) = f(5.7225) = -0.1430
x3 = 5.7225 – (-0.1430) * (5.7225 – 5.9563) / ((-0.1430) – 1.6523) = 5.7411
The estimate can be further refined by continuing the process; the iterates are converging to the
root 2 + √14 ≈ 5.7417.
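A minimal Python sketch of the secant algorithm, applied to this example's equation and starting values (names are ours):

```python
def secant(f, x1, x2, E=1e-6, max_iter=50):
    """Secant iteration: values are replaced in strict sequence,
    no bracketing of the root is required."""
    f1, f2 = f(x1), f(x2)
    for _ in range(max_iter):
        x3 = x2 - f2 * (x2 - x1) / (f2 - f1)
        if abs((x3 - x2) / x3) < E:
            return x3
        x1, f1 = x2, f2
        x2, f2 = x3, f(x3)
    return x2

f = lambda x: x*x - 4*x - 10
print(secant(f, 4, 2))   # → approximately 5.7417, i.e. 2 + sqrt(14)
```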

Fixed Point method


It is also known as the direct substitution method, the method of fixed point iteration, or the
method of successive approximation. It is applicable if the equation f(x) = 0 can be
expressed in the form x = g(x). If x0 is the initial approximation to the root, then the
next approximation will be x1 = g(x0), the next x2 = g(x1), and so on. In general:
xn+1 = g(xn)

where x = g(x) is known as the fixed point equation.


To find a root of the equation f (x) = 0, we rewrite this equation in the form x = g(x).

Let x0 be an approximate value of the desired root. Substituting it for x on the right side of the
equation, we obtain the first approximation x1 = g (x0). A further approximation is given by x2
= g (x1). This iteration process can be expressed in the general form
xi+1 = g (xi),  i = 0, 1, 2, ….,
which is called the fixed point iteration formula. The iteration process is terminated when
two successive approximations agree within some specified error.
 This method of solution is also known as the method of successive approximations or the
method of direct substitution.
Algorithm:
1. Read an initial guess: x0 and E
2. Evaluate: x1 = g(x0)
3. Check for error: if |(x1 - x0)/x1| < E, x1 is the root; else continue.
4. Assign the value of x1 to x0, i.e. x0 = x1 and repeat steps 2 & 3.
5. Print x1.
6. Stop

Example: Locate a root of the equation f(x) = x² + x – 2 = 0.

Solution: The given equation can be expressed as x = 2 – x².
Let us start with an initial value of x0 = 0.
x1 = 2 – 0 = 2
x2 = 2 – 4 = –2
x3 = 2 – 4 = –2
Since x3 – x2 = 0, –2 is one of the roots of the equation.

Example: Evaluate the square root of 5 using the equation x² – 5 = 0 by applying the fixed-point
method.
Solution: Let us organize the function as follows: x = 5/x, and assume x0 = 1. Then
x1 = 5
x2 = 1
x3 = 5
x4 = 1

The process does not converge to the solution. This type of divergence is called oscillatory
divergence.
Again, let us organize the function as follows: x = x² + x – 5, and assume x0 = 0. Then

x1 = –5
x2 = 15
x3 = 235
x4 = 55455

The process does not converge to the solution. This type of divergence is called monotone
divergence.
Again, let us organize the function as follows: 2x = 5/x + x, or x = (x + 5/x)/2, and assume
x0 = 1. Then
x1 = 3
x2 = 2.3333
x3 = 2.2381
x4 = 2.2361
x5 = 2.2361
The process converges rapidly to the solution. Thus, the square root of 5 is 2.2361.
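The convergent form above can be sketched as a small Python routine (an illustration; fixed_point is our name for it):

```python
def fixed_point(g, x0, E=1e-6, max_iter=100):
    """Iterate x = g(x) until two successive approximations agree within E."""
    for _ in range(max_iter):
        x1 = g(x0)
        if abs(x1 - x0) < E:
            return x1
        x0 = x1
    raise RuntimeError("did not converge; try another form of g(x)")

g = lambda x: (x + 5 / x) / 2    # the convergent rearrangement for sqrt(5)
print(fixed_point(g, 1))         # → approximately 2.2360680
```

With the divergent rearrangements shown earlier (x = 5/x or x = x² + x − 5) the same routine would exhaust max_iter instead of converging.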

 The iteration function g(x) can be formulated in different forms, but not all forms result in
convergence of the solution. Convergence of the iteration process depends on the nature of g(x):
the process converges when | g′(x) | < 1 in an interval around the root.

Muller Method

Muller's method is a generalization of the secant method. Instead of starting with two initial
values and joining them with a straight line as in the secant method, Muller's method starts with
three initial approximations to the root and joins them with a second-degree polynomial (a
parabola); the quadratic formula is then used to find a root of this quadratic, which serves as the
next approximation. That is, if x0, x1 and x2 are the initial approximations, then x3 is obtained by
solving the quadratic that passes through the points (x0, f(x0)), (x1, f(x1)) and (x2, f(x2)). Then
the two values among x0, x1 and x2 which are closest to x3 are chosen for the next iteration.

With h0 = x0 – x2 and h1 = x1 – x2, fit the quadratic at² + bt + c in t = x – x2 through the three
points. Its coefficients are

c = f(x2),

a = [ (f(x0) – c) h1 – (f(x1) – c) h0 ] / [ h0 h1 (h0 – h1) ],

b = [ (f(x1) – c) h0² – (f(x0) – c) h1² ] / [ h0 h1 (h0 – h1) ],

and the next approximation is

x3 = x2 + ( –2c ) / ( b ± √(b² – 4ac) )

where the sign in the denominator is chosen so that its absolute value is as large as possible.
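A Python sketch of one possible implementation (an illustration: the coefficient expressions follow the standard textbook form of Muller's method, and the test function x³ − x − 2 is ours; cmath is used because the intermediate square root may be complex):

```python
import cmath

def muller(f, x0, x1, x2, E=1e-8, max_iter=50):
    """Muller's method: fit a parabola through three points,
    take its root nearest x2 as the next approximation."""
    for _ in range(max_iter):
        h0, h1 = x0 - x2, x1 - x2
        c = f(x2)
        den = h0 * h1 * (h0 - h1)
        a = ((f(x0) - c) * h1 - (f(x1) - c) * h0) / den
        b = ((f(x1) - c) * h0**2 - (f(x0) - c) * h1**2) / den
        rad = cmath.sqrt(b * b - 4 * a * c)
        # choose the sign that makes the denominator largest in magnitude
        d = b + rad if abs(b + rad) > abs(b - rad) else b - rad
        x3 = x2 - 2 * c / d
        if abs(x3 - x2) < E:
            return x3
        x0, x1, x2 = x1, x2, x3   # keep the three most recent points
    return x3

f = lambda x: x**3 - x - 2
print(muller(f, 0.0, 1.0, 2.0))   # → approximately 1.5214 (real root)
```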
