Numerical Methods
Vijay Kumar
Sept. 2018
Outline
Numerical Integration
1 Trapezoidal rule.
2 Simpson’s 1/3 rule.
3 Simpson’s 3/8 rule.
...
Computational problems:
Numerical Analysis:
The statement

    y = f(x),   x0 ≤ x ≤ xn,

means that corresponding to every value of x in the range x0 ≤ x ≤ xn there exists one or more values of y. Assuming that f(x) is single-valued and continuous and that it is known explicitly, the values of f(x) corresponding to certain given values of x, say x0, x1, …, xn, can easily be computed and tabulated.
Numerical Analysis:
Let the set of data points be (xi , yi ) ; i = 1, 2, . . . , n and let the curve given
by y = f (x) be fitted to this data.
At x = xi, the observed value of the ordinate is yi and the corresponding value on the fitted curve is f(xi).
If ei is the residual of approximation at x = xi , then we have
ei = yi − f ( xi )
If we write

    S = Σ ei² = Σ (yi − f(xi))²    (the sums running over i = 1, …, n),
then the method of least squares consists in minimizing S, i.e. the sum of
squares of residuals.
We shall study the least square fitting to the given data (xi , yi ) ; i = 1, 2, . . . , n
for different curves (linear and nonlinear etc.).
Scatter diagram:
[Figure: scatter diagram showing a data point P(xi, yi), the fitted line y = a + b x with the point M(xi, a + b xi) on it, and the vertical residual PM between them.]
The principle of least squares is that the values of a and b should be chosen so as to make

    E = Σ ei²    (sum over i = 1, …, n)

as small as possible. A necessary condition is that the partial derivatives of this sum with respect to a and b should both be zero. We thus have

    ∂E/∂a = −2 Σ (yi − a − b xi) = 0
    ∂E/∂b = −2 Σ (yi − a − b xi) xi = 0
Simplifying these two equations gives the normal equations for the straight line:

    Σ yi = n a + b Σ xi
    Σ xi yi = a Σ xi + b Σ xi²

Since the xi and yi are known quantities, the normal equations can be solved. When the values of Σ yi, Σ xi, Σ xi yi and Σ xi², computed from the data, are inserted, they give two simultaneous equations, which may be solved for a and b.
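As an illustration, here is a minimal Python sketch of this procedure: the four sums are formed and the two normal equations are solved directly. The function name and the data in xs, ys are hypothetical, not from the notes.

    def fit_line(xs, ys):
        # Normal equations for y = a + b x:
        #   sum(y)   = n*a + b*sum(x)
        #   sum(x*y) = a*sum(x) + b*sum(x^2)
        n = len(xs)
        sx = sum(xs)
        sy = sum(ys)
        sxy = sum(x * y for x, y in zip(xs, ys))
        sxx = sum(x * x for x in xs)
        # Solve the 2x2 system by Cramer's rule.
        det = n * sxx - sx * sx
        a = (sy * sxx - sx * sxy) / det
        b = (n * sxy - sx * sy) / det
        return a, b

    # Hypothetical data:
    xs = [1, 2, 3, 4, 5]
    ys = [2.1, 4.0, 6.2, 7.9, 10.1]
    a, b = fit_line(xs, ys)
    print(a, b)   # intercept and slope of the fitted line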
Semi-log transformation y = a e^(bx)
Double-log transformation y = a x^b
Let the set of data points be (xi, yi); i = 1, 2, …, n. Suppose we wish to fit the curve y = a x^b. Taking logarithms of both sides, we have

    u = A + b v

where u = log y, A = log a and v = log x. Therefore, the normal equations are given by

    Σ ui = n A + b Σ vi
    Σ vi ui = A Σ vi + b Σ vi²

Compute Σ ui, Σ vi, Σ vi ui and Σ vi², substitute these values in the normal equations and solve for A and b. Hence

    a = antilog (A)
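A short Python sketch of this double-log fit, assuming all data values are positive so the logarithms exist; it reuses the hypothetical fit_line routine from the straight-line section, and the data are again illustrative.

    import math

    def fit_power_law(xs, ys):
        # Fit y = a * x**b by fitting the straight line u = A + b*v,
        # where u = log y, v = log x and A = log a.
        vs = [math.log(x) for x in xs]
        us = [math.log(y) for y in ys]
        A, b = fit_line(vs, us)      # normal equations for the transformed data
        a = math.exp(A)              # antilog of A (natural logarithms used here)
        return a, b

    a, b = fit_power_law([1, 2, 3, 4], [2.0, 7.8, 17.5, 31.2])
    print(a, b)   # roughly a = 2, b = 2 for this hypothetical data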
Semi-log transformation y = a b^x
Let the set of data points be (xi, yi); i = 1, 2, …, n. Suppose we wish to fit the curve

    y = a b^x

Taking logarithms of both sides, we have

    u = A + B x

where u = log y, A = log a and B = log b. The normal equations are

    Σ ui = n A + B Σ xi
    Σ xi ui = A Σ xi + B Σ xi²

Compute Σ ui, Σ xi, Σ xi ui and Σ xi², substitute these values in the normal equations and solve for A and B. Hence

    a = antilog (A)   and   b = antilog (B)
Reciprocal transformation y = a + b/x
Fitting of the curve y = a + b/x²
Interpolation : Introduction
The process of computing the value of a function for a value of the independent variable lying within the given range is called interpolation, while computing it for a value outside the given range is called extrapolation.
Introduction contd...:
Given the set of tabular values (x0, y0), (x1, y1), (x2, y2), …, (xn, yn) satisfying the relation y = f(x), where the nature of f(x) is not known, it is required to find a simpler function, say φ(x), such that f(x) and φ(x) agree at the tabulated points.
Weierstrass’s theorem
Theorem
If f(x) is continuous in x ∈ [a, b], then for a given ε > 0 there exists a polynomial φn(x) of degree n (depending on ε) such that

    |f(x) − φn(x)| < ε   for all x in [a, b].

This means that it is possible to find a polynomial φn(x) whose graph remains within the region bounded by the curves y = f(x) − ε and y = f(x) + ε for all x in [a, b].
Introduction contd...
On the other hand, if the function f(x) is not known, then it is very hard to find the exact form of f(x) from the tabulated values (xi, yi). In such cases, the function f(x) can be replaced by a simpler function, say φ(x), which takes the same values as f(x) at x0, …, xn.
The function φ(x) is called the interpolating or smoothing function, and any other value can be computed from φ(x).
Interpolation Formulae:
Equal intervals
1 Newton’s binomial expansion formula
2 Newton’s forward interpolation formula
3 Newton’s backward interpolation formula
4 Gauss’s forward interpolation formula
5 Gauss’s backward interpolation formula
6 Bessel’s formula
7 Stirling’s formula
8 Laplace-Everett’s formula
Unequal intervals
1 Lagrange’s formula
2 Newton’s divided difference formula
Consider a function y = f(x) defined on (a, b), where x and y are the independent and dependent variables respectively. Suppose the values of x are equally spaced, i.e. xi = x0 + i h for i = 0, 1, …, n. The values of x are called arguments and the values of y are known as entries; the spacing h is called the difference interval. The differences y1 − y0, y2 − y1, …, yn − yn−1 are called the first forward differences and are denoted by ∆y0, ∆y1, …, ∆yn−1:
∆ y0 = y1 − y0
∆ y1 = y2 − y1
..
.
∆ yi = yi+1 − yi
..
.
∆ yn−1 = yn − yn−1
The differences of the first differences are called the second differences and they are denoted by ∆²y0, ∆²y1, …, ∆²yn−2. Thus

    ∆²y0 = ∆(∆y0) = ∆(y1 − y0) = ∆y1 − ∆y0 = y2 − 2y1 + y0.

Similarly, for the third differences,

    ∆³y0 = ∆²(∆y0) = ∆²(y1 − y0)
         = ∆²y1 − ∆²y0
         = (y3 − 2y2 + y1) − (y2 − 2y1 + y0)
         = y3 − 3y2 + 3y1 − y0,
    ∆³y1 = y4 − 3y3 + 3y2 − y1, etc.

Generalising, we have

    ∆ⁿyi = ∆ⁿ⁻¹yi+1 − ∆ⁿ⁻¹yi.
    x     y      ∆y      ∆²y      ∆³y      ∆⁴y      ∆⁵y      ∆⁶y
    x0    y0
                 ∆y0
    x1    y1             ∆²y0
                 ∆y1              ∆³y0
    x2    y2             ∆²y1              ∆⁴y0
                 ∆y2              ∆³y1              ∆⁵y0
    x3    y3             ∆²y2              ∆⁴y1              ∆⁶y0
                 ∆y3              ∆³y2              ∆⁵y1
    x4    y4             ∆²y3              ∆⁴y2
                 ∆y4              ∆³y3
    x5    y5             ∆²y4
                 ∆y5
    x6    y6
The forward differences for the arguments x0, x1, …, xn are shown in Table 1, which is called a diagonal difference table or forward difference table. The first term in Table 1 is y0 and is called the leading term. The differences ∆y0, ∆²y0, ∆³y0, …, ∆ⁿy0, which lie along the downward-sloping diagonal through y0, are called the leading differences.
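A small Python sketch that builds such a forward difference table column by column; the helper name is mine, and the data are those of the worked example later in these notes.

    def forward_differences(ys):
        # Returns [y, ∆y, ∆²y, ...]: each entry is one column of differences.
        table = [list(ys)]
        while len(table[-1]) > 1:
            prev = table[-1]
            table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
        return table

    table = forward_differences([1, 7, 23, 55, 109])
    for order, col in enumerate(table):
        print(order, col)
    # The leading differences are the first entry of each column: 1, 6, 10, 6, 0.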
∇f (x) = f (x) − f (x − h)
∇yi = yi − yi−1 ; i = n, n − 1, . . . ., 1.
Similarly,

    ∇²y3 = y3 − 2y2 + y1,
    ∇²y4 = y4 − 2y3 + y2, etc.

Generalising, we have

    ∇ⁿyi = ∇ⁿ⁻¹yi − ∇ⁿ⁻¹yi−1,

where

    ∇⁰yi = yi,   ∇¹yi = ∇yi.

The backward differences are shown in tabular form in Table 2. In Table 2, the differences ∇ⁿy with a fixed subscript i lie along a diagonal sloping upward. Here h is the difference interval.
    x     y      ∇y      ∇²y      ∇³y      ∇⁴y      ∇⁵y      ∇⁶y
    x0    y0
                 ∇y1
    x1    y1             ∇²y2
                 ∇y2              ∇³y3
    x2    y2             ∇²y3              ∇⁴y4
                 ∇y3              ∇³y4              ∇⁵y5
    x3    y3             ∇²y4              ∇⁴y5              ∇⁶y6
                 ∇y4              ∇³y5              ∇⁵y6
    x4    y4             ∇²y5              ∇⁴y6
                 ∇y5              ∇³y6
    x5    y5             ∇²y6
                 ∇y6
    x6    y6
Shift Operator E:
E f (x) = f (x + h)
or E yi = yi+1
Hence, the shift operator shifts the function value yi to the next higher value yi+1.
The inverse shift operator E⁻¹ gives

    E⁻¹ f(x) = f(x − h),
    E⁻² f(x) = f(x − 2h),

and, in general,

    E⁻ⁿ f(x) = f(x − nh)
The more general form of E operator is given by
E r f (x) = f (x + rh)
Differential Operator D:

    D f(x) = (d/dx) f(x) = f′(x)   and   D² f(x) = (d²/dx²) f(x) = f″(x)

For linking the different operators with the differential operator D we consider Taylor’s formula:

    f(x + h) = f(x) + h f′(x) + (h²/2!) f″(x) + …

In operator notation, we can write it as:

    E f(x) = [1 + h D + (h²/2!) D² + …] f(x) = e^(hD) f(x)

The series in brackets is the expansion of the exponential, and hence we can write

    E = e^(hD)
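This operator identity can be illustrated numerically; the following sketch is mine, not from the notes. Applying the truncated series 1 + hD + (hD)²/2! + … to f, using its exact derivatives, should reproduce the shifted value E f(x) = f(x + h). Here f(x) = sin x is used because its k-th derivative is sin(x + kπ/2).

    import math

    def shift_via_series(f_derivs, x, h, terms=10):
        # Apply the truncated operator series (1 + hD + (hD)^2/2! + ...) to f at x,
        # using a list of derivative functions [f, f', f'', ...].
        return sum((h ** k / math.factorial(k)) * f_derivs[k](x) for k in range(terms))

    # k-th derivative of sin x is sin(x + k*pi/2).
    f_derivs = [lambda x, k=k: math.sin(x + k * math.pi / 2) for k in range(10)]
    x, h = 0.3, 0.5
    print(shift_via_series(f_derivs, x, h))   # ≈ sin(0.8), i.e. E f(x) = f(x + h)
    print(math.sin(x + h))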
1 If c is a constant then ∆ c = 0.
Let f (x) = c, then f (x + h) = c, where h is the interval of differencing.
Hence
∆f (x) = f (x + h) − f (x) = c − c = 0
or ∆c = 0
2 ∆ is distributive, i.e., ∆ [f (x) ± g (x)] = ∆f (x) ± ∆g (x) .
7 ∆ [f(x)/g(x)] = [g(x) ∆f(x) − f(x) ∆g(x)] / [g(x) g(x + h)].

    ∆ [f(x)/g(x)]
      = f(x + h)/g(x + h) − f(x)/g(x)
      = [g(x) f(x + h) − f(x) g(x + h)] / [g(x) g(x + h)]
      = [g(x) f(x + h) − g(x) f(x) − f(x) g(x + h) + f(x) g(x)] / [g(x) g(x + h)]
      = {g(x) [f(x + h) − f(x)] − f(x) [g(x + h) − g(x)]} / [g(x) g(x + h)]
      = [g(x) ∆f(x) − f(x) ∆g(x)] / [g(x) g(x + h)].
∆f (x) = f (x + h) − f (x)
∆ f (x) = (E − 1) f (x)
The above relation can be expressed as an identity
∆ =E−1 or E =1+∆
∇f (x) = f (x) − f (x − h) = f (x) − E −1 f (x)
∇ = 1 − E −1
1 (1 + ∆)(1 − ∇) = 1

    (1 + ∆)(1 − ∇) f(x) = E E⁻¹ f(x) = E⁰ f(x) = f(x), hence (1 + ∆)(1 − ∇) = 1.
2 E ∆ = ∆E.
E ∆ f (x) = E (f (x + h) − f (x))
= E f (x + h) − E f (x)
= f (x + 2h) − f (x + h)
= ∆ f (x + h)
= ∆E f (x)
Hence, E ∆ = ∆E.
3 ∆ = E∇ = ∇E
E ∇ [f (x)] = E [f (x) − f (x − h)]
= E [f (x)] − E [f (x − h)]
= f (x + h) − f (x)
= ∆f (x)
Hence E∇=∆
4 ∆∇ = ∇∆

    ∇∆ f(x) = ∇ [f(x + h) − f(x)]
            = ∇f(x + h) − ∇f(x)
            = [f(x + h) − f(x)] − [f(x) − f(x − h)]
            = ∆ [f(x) − f(x − h)]
            = ∆∇ f(x)

6 ∇ = E⁻¹ ∆

    E⁻¹ ∆ f(x) = E⁻¹ [f(x + h) − f(x)]
              = E⁻¹ f(x + h) − E⁻¹ f(x)
              = f(x) − f(x − h)
              = ∇ f(x)
5 ∆∇ = ∆ − ∇
∆∇f (x)
= ∆ [f (x) − f (x − h)]
= ∆f (x) − ∆f (x − h)
= ∆f (x) − ∇f (x)
= (∆ − ∇) f (x)
Example:
Prove that e^x = (∆²/E) e^x · (E e^x)/(∆² e^x), the interval of differencing being h.

We have E e^x = e^(x+h), and

    ∆² e^x = (E − 1)² e^x = (E² − 2E + 1) e^x = e^(x+2h) − 2 e^(x+h) + e^x = e^x (e^h − 1)².

Again,

    (∆²/E) e^x = ∆² E⁻¹ e^x = ∆² e^(x−h) = e^(−h) (∆² e^x) = e^(−h) e^x (e^h − 1)².

Hence

    (∆²/E) e^x · (E e^x)/(∆² e^x) = e^(−h) e^x (e^h − 1)² · e^(x+h) / [e^x (e^h − 1)²] = e^x.
Example:
Evaluate (∆²/E) x³.

    (∆²/E) x³ = (E − 1)² E⁻¹ x³ = (E − 2 + E⁻¹) x³
              = E x³ − 2x³ + E⁻¹ x³
              = (x + h)³ − 2x³ + (x − h)³
              = 6xh²

Note: If h = 1, then (∆²/E) x³ = 6x.
Example:
Show that ∆ log f(x) = log [1 + ∆f(x)/f(x)].

Since f(x + h) = f(x) + ∆f(x), we have

    f(x + h)/f(x) = 1 + ∆f(x)/f(x)

Taking logarithms on both sides we get

    log f(x + h) − log f(x) = log [1 + ∆f(x)/f(x)]

i.e.

    ∆ log f(x) = log [1 + ∆f(x)/f(x)].
Missing Values
Let a function y = f(x) be given for equally spaced values x0, x1, …, xn of the argument, and let y0, y1, y2, …, yn denote the corresponding values of the function. If one or more values of y = f(x) are missing, we can determine the missing values by employing the relationship between the operators E and ∆.
Suppose the values of the argument are

    x0, x0 + h, x0 + 2h, …, x0 + nh

and that one of the values of y is missing, so that only n of the n + 1 function values are known. These n values can be represented exactly by a polynomial of degree n − 1, for which the nth differences vanish. Therefore, we have

    ∆ⁿ y0 = 0,   i.e.   (E − 1)ⁿ y0 = 0
Expanding, we have

    [Eⁿ − ⁿC1 Eⁿ⁻¹ + ⁿC2 Eⁿ⁻² − ⋯ + (−1)ⁿ] y0 = 0

or

    Eⁿ y0 − n Eⁿ⁻¹ y0 + [n(n − 1)/2] Eⁿ⁻² y0 − ⋯ + (−1)ⁿ y0 = 0

or

    yn − n yn−1 + [n(n − 1)/2] yn−2 − ⋯ + (−1)ⁿ y0 = 0
It is quite useful in determining the missing values without actually
constructing the difference table.
Example: Suppose that in a table of six equally spaced entries the values y0 = 1, y2 = 11, y3 = 28 and y5 = 116 are given, while y1 and y4 are missing. Since only four values are known, the data can be represented by a polynomial of degree three, so the fourth and fifth differences vanish:

    ∆⁴ y0 = 0
    (E − 1)⁴ y0 = 0
    (E⁴ − 4E³ + 6E² − 4E + 1) y0 = 0
    y4 − 4y3 + 6y2 − 4y1 + y0 = 0
    y4 − 4(28) + 6(11) − 4y1 + 1 = 0
    y4 − 4y1 = 45                                                       (1)

and

    ∆⁵ y0 = 0
    (E − 1)⁵ y0 = 0
    (E⁵ − 5E⁴ + 10E³ − 10E² + 5E − 1) y0 = 0
    y5 − 5y4 + 10y3 − 10y2 + 5y1 − y0 = 0
    116 − 5y4 + 10(28) − 10(11) + 5y1 − 1 = 0
    −5y4 + 5y1 = −285                                                   (2)

Solving Eqs.(1) and (2), we obtain

    y1 = 4 and y4 = 61.
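A quick Python check of this example, using the same conditions ∆⁴y0 = 0 and ∆⁵y0 = 0 written as a 2×2 linear system; the helper name and the use of NumPy are mine.

    from math import comb
    import numpy as np

    def missing_two(known):
        # known: dict index -> y value for a table of six entries y0..y5 with two gaps.
        # Conditions: sum over k of (-1)^k * C(n, k) * y_{n-k} = 0 for n = 4 and n = 5.
        unknowns = sorted(set(range(6)) - set(known))
        A, b = [], []
        for n in (4, 5):
            row, rhs = [0.0, 0.0], 0.0
            for k in range(n + 1):
                coeff = (-1) ** k * comb(n, k)
                idx = n - k
                if idx in known:
                    rhs -= coeff * known[idx]
                else:
                    row[unknowns.index(idx)] += coeff
            A.append(row)
            b.append(rhs)
        return dict(zip(unknowns, np.linalg.solve(A, b)))

    print(missing_two({0: 1, 2: 11, 3: 28, 5: 116}))   # {1: 4.0, 4: 61.0}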
Let φ(x) be a polynomial of the nth degree in x taking the same values as y corresponding to x = x0, x1, …, xn. Then φ(x) represents the continuous function y = f(x) in the sense that

    f(x) = φ(x) + R(x),

where R(x) is called the error term (remainder term) of the interpolation formula.
Interpolation function: a function that passes exactly through a set
of data points.
Let
    φ(x) = a0 + a1 (x − x0) + a2 (x − x0)(x − x1) + a3 (x − x0)(x − x1)(x − x2)
           + … + an (x − x0)(x − x1) ⋯ (x − xn−1)                                       (1)

and φ(xi) = yi ;  i = 0, 1, 2, …, n                                                     (2)
The constants a0 , a1 , a2 , . . . , an can be determined as follows:
Substituting x = x0 , x1 , . . . , xn successively in Eq.(1) and using Eq.(2), we
get
    a0 = y0
    y1 = a0 + a1 (x1 − x0)
or, y1 = y0 + a1 (x1 − x0)
or, a1 h = y1 − y0 = ∆y0, so that

    a1 = ∆y0 / h
Similarly, a2 = ∆²y0 / (2! h²), a3 = ∆³y0 / (3! h³), …, an = ∆ⁿy0 / (n! hⁿ). Substituting these in Eq.(1), we get

    φ(x) = y0 + (∆y0/h)(x − x0) + (∆²y0/(2! h²))(x − x0)(x − x1)
           + (∆³y0/(3! h³))(x − x0)(x − x1)(x − x2) + …
           + (∆ⁿy0/(n! hⁿ))(x − x0)(x − x1) ⋯ (x − xn−1)                                (3)
Let x = x0 + u h. Then

    x − x0 = u h
    x − x1 = (x − x0) − (x1 − x0) = u h − h = (u − 1) h                                 (4)
    x − x2 = (x − x0) − (x2 − x0) = u h − 2h = (u − 2) h, etc.
Substituting these in Eq.(3), we obtain

    φ(x) = y0 + u ∆y0 + [u(u − 1)/2!] ∆²y0 + [u(u − 1)(u − 2)/3!] ∆³y0
           + … + [u(u − 1) ⋯ (u − n + 1)/n!] ∆ⁿy0                                       (5)

i.e.

    φ(x) = y0 + uC1 ∆y0 + uC2 ∆²y0 + … + uCn ∆ⁿy0,

where u = (x − x0)/h. Eq.(5) is Newton’s forward interpolation formula.
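A compact Python sketch of Eq.(5), assuming equally spaced x-values; the helper name is mine. It uses all the differences available in the table (higher differences that vanish simply contribute nothing).

    def newton_forward(x, x0, h, ys):
        # ys: tabulated values y0, y1, ... at x0, x0+h, ...
        # Evaluates y0 + u*∆y0 + u(u-1)/2! * ∆²y0 + ...
        diffs = list(ys)
        u = (x - x0) / h
        result, coeff = 0.0, 1.0
        for k in range(len(ys)):
            result += coeff * diffs[0]          # coeff = u(u-1)...(u-k+1)/k!
            diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
            coeff *= (u - k) / (k + 1)
        return result

    print(newton_forward(0.5, 0, 1, [1, 7, 23, 55, 109]))   # 3.125, matching the worked example below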
Let φ (x) be a polynomial of the nth degree in x taking the same values of y
corresponding to x = x0 , x1 , . . . , xn . That is, φ (x) represents y = f (x) such
that f (xi ) = φ (xi ), i = 0, 1, 2, . . . , n.
Let
    φ(x) = a0 + a1 (x − xn) + a2 (x − xn)(x − xn−1) + a3 (x − xn)(x − xn−1)(x − xn−2)
           + … + an (x − xn)(x − xn−1) ⋯ (x − x1)                                       (1)
The constants a0 , a1 , a2 , . . . , an can be determined as follows:
Substituting x = xn, xn−1, …, x1 successively in Eq.(1), we get

    a0 = yn

Similarly, we obtain

    a1 = ∆yn−1 / h,   a2 = ∆²yn−2 / (2! h²),   …,   an = ∆ⁿy0 / (n! hⁿ)

Substituting the values of a0, a1, a2, …, an in Eq.(1), we get
    φ(x) = yn + (∆yn−1/h)(x − xn) + (∆²yn−2/(2! h²))(x − xn)(x − xn−1)
           + (∆³yn−3/(3! h³))(x − xn)(x − xn−1)(x − xn−2) + …
           + (∆ⁿy0/(n! hⁿ))(x − xn)(x − xn−1) ⋯ (x − x1)                                (2)
Let x = xn + u h. Then

    x − xn = u h
    x − xn−1 = (x − xn) + (xn − xn−1) = u h + h = (u + 1) h                             (3)
    x − xn−2 = (x − xn) + (xn − xn−2) = u h + 2h = (u + 2) h, etc.
Substituting these in Eq.(2), we obtain

    φ(x) = yn + u ∆yn−1 + [u(u + 1)/2!] ∆²yn−2 + [u(u + 1)(u + 2)/3!] ∆³yn−3
           + … + [u(u + 1) ⋯ (u + n − 1)/n!] ∆ⁿy0                                       (4)

where u = (x − xn)/h.
The formula given in Eq.(4) is called the Newton’s backward
interpolation formula. This formula is used for interpolating values of y near
the end of the tabulated values and also used for extrapolating values of y a
little backward of yn .
Example:
If y = f(x) is known at the following data points:

    x : 0    1    2    3    4
    y : 1    7    23   55   109

find y(0.5) using Newton’s forward difference formula and y(3.5) using Newton’s backward difference formula.

Difference table:

    x     y      ∆y     ∆²y     ∆³y     ∆⁴y
    0     1
                 6
    1     7             10
                 16              6
    2     23            16              0
                 32              6
    3     55            22
                 54
    4     109

Newton’s forward formula, with u = (x − x0)/h, is

    y(x) = y0 + u ∆y0 + [u(u − 1)/2!] ∆²y0 + [u(u − 1)(u − 2)/3!] ∆³y0

Here x = 0.5, x0 = 0, h = 1, so u = 0.5. Thus

    y(0.5) = 1 + 0.5 × 6 + [0.5(0.5 − 1)/2!] × 10 + [0.5(0.5 − 1)(0.5 − 2)/3!] × 6
           = 1 + 3 + 2.5 × (−0.5) + (−0.25) × (−1.5)
           = 3.125

Newton’s backward formula, with u = (x − xn)/h, is

    y(x) = yn + u ∆yn−1 + [u(u + 1)/2!] ∆²yn−2 + [u(u + 1)(u + 2)/3!] ∆³yn−3

Here x = 3.5, xn = 4.0, h = 1, u = −0.5, yn = 109, ∆yn−1 = 54, ∆²yn−2 = 22, ∆³yn−3 = 6. Thus

    y(3.5) = 109 + (−0.5) × 54 + [(−0.5)(0.5)/2!] × 22 + [(−0.5)(0.5)(1.5)/3!] × 6
           = 109 − 27 − 2.75 − 0.375
           = 78.875
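The backward half of this example can be checked numerically; the sketch below mirrors the newton_forward helper given earlier (the function name is mine).

    def newton_backward(x, xn, h, ys):
        # ys: tabulated values y0, ..., yn at equal spacing h, ending at xn.
        # Evaluates yn + u*∆yn−1 + u(u+1)/2! * ∆²yn−2 + ...
        diffs = list(ys)
        u = (x - xn) / h
        result, coeff = 0.0, 1.0
        for k in range(len(ys)):
            result += coeff * diffs[-1]         # last entry of each difference column
            diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
            coeff *= (u + k) / (k + 1)
        return result

    print(newton_backward(3.5, 4, 1, [1, 7, 23, 55, 109]))   # 78.875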
Let y = f(x) be a real-valued continuous function defined in an interval [a, b], taking the values f(x0), f(x1), …, f(xn) at the points x0, x1, …, xn. We seek a polynomial of degree n of the form

    f(x) = a0 (x − x1)(x − x2) ⋯ (x − xn)
         + a1 (x − x0)(x − x2) ⋯ (x − xn) + …
         + ai (x − x0) ⋯ (x − xi−1)(x − xi+1) ⋯ (x − xn) + …
         + an (x − x0)(x − x1) ⋯ (x − xn−1)                                             (1)
Lagrange’s contd...
Putting x = x0 in Eq.(1), all terms except the first vanish, and we get

    f(x0) = a0 (x0 − x1)(x0 − x2) ⋯ (x0 − xn)

or, a0 = f(x0) / [(x0 − x1)(x0 − x2) ⋯ (x0 − xn)]

Putting x = x1 in Eq.(1) we obtain

    a1 = f(x1) / [(x1 − x0)(x1 − x2) ⋯ (x1 − xn)]
Lagrange’s contd...
Similarly,

    ai = f(xi) / [(xi − x0) ⋯ (xi − xi−1)(xi − xi+1) ⋯ (xi − xn)]

and

    an = f(xn) / [(xn − x0)(xn − x1) ⋯ (xn − xn−1)]
Lagrange contd...
Substituting these values in Eq.(1), we get

    f(x) = [(x − x1)(x − x2) ⋯ (x − xn)] / [(x0 − x1)(x0 − x2) ⋯ (x0 − xn)] · f(x0)
         + [(x − x0)(x − x2) ⋯ (x − xn)] / [(x1 − x0)(x1 − x2) ⋯ (x1 − xn)] · f(x1) + …
         + [(x − x0)(x − x1) ⋯ (x − xn−1)] / [(xn − x0)(xn − x1) ⋯ (xn − xn−1)] · f(xn)     (2)

The formula given by Eq.(2) is known as Lagrange’s interpolation formula.
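A direct Python transcription of Eq.(2); a sketch with illustrative data, and it works for unequally spaced points as well.

    def lagrange(x, xs, ys):
        # Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x.
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total

    # Same table as the Newton example above.
    print(lagrange(0.5, [0, 1, 2, 3, 4], [1, 7, 23, 55, 109]))   # 3.125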
Divided Differences
The first divided difference of f(x) for the arguments x0, x1 is defined as

    f(x0, x1) = [f(x1) − f(x0)] / (x1 − x0)

It is denoted by f(x0, x1) or by [x0, x1]. Similarly, the second divided difference for the arguments x0, x1, x2 is given by

    f(x0, x1, x2) = [f(x1, x2) − f(x0, x1)] / (x2 − x0)
Also, by definition,

    f(x, x0) = [f(x0) − f(x)] / (x0 − x)
    f(x, x0, x1) = [f(x0, x1) − f(x, x0)] / (x1 − x)

From the second relation,

    f(x, x0) = f(x0, x1) + (x − x1) f(x, x0, x1)

i.e.

    [f(x) − f(x0)] / (x − x0) = f(x0, x1) + (x − x1) f(x, x0, x1)

Hence

    f(x) = f(x0) + (x − x0) f(x0, x1) + (x − x0)(x − x1) f(x, x0, x1)
Continuing this process gives Newton’s divided difference formula; its remainder term involves f(x, x0, x1, …, xn), which satisfies

    f(x, x0, x1, …, xn) = 0

when f(x) is a polynomial of degree n.
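A short Python sketch of Newton’s divided difference interpolation built on this definition; the helper names and the data are mine.

    def divided_differences(xs, ys):
        # Returns the coefficients f(x0), f(x0,x1), f(x0,x1,x2), ... of Newton's form.
        coeffs = list(ys)
        n = len(xs)
        for order in range(1, n):
            for i in range(n - 1, order - 1, -1):
                coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - order])
        return coeffs

    def newton_divided(x, xs, ys):
        # Evaluate f(x0) + (x-x0) f(x0,x1) + (x-x0)(x-x1) f(x0,x1,x2) + ...
        coeffs = divided_differences(xs, ys)
        result, product = 0.0, 1.0
        for c, xi in zip(coeffs, xs):
            result += c * product
            product *= (x - xi)
        return result

    # Unequally spaced hypothetical data sampled from x^2 + x + 1:
    print(newton_divided(2.5, [0, 1, 4, 5], [1, 3, 21, 31]))   # 9.75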
Numerical Integration:

    I = ∫_a^b f(x) dx = F(b) − F(a),   where F′(x) = f(x)                               (1)

Geometrically,

    I = ∫_a^b f(x) dx = area under the curve y = f(x) between x = a and x = b.
Quadrature contd...
[Figure: the area under the curve y = f(x) between the ordinates f(a) at x = a and f(b) at x = b.]
Newton-Cotes contd...

    I = ∫_{x0}^{xn} y dx = ∫_a^b f(x) dx ≈ ∫_a^b φ(x) dx,

where x = x0 + u h, so dx = h du.
Newton-Cotes contd...
Replacing y by Newton’s forward formula and integrating term by term, we get

    I = h ∫_0^n [ y0 + u ∆y0 + ((u² − u)/2!) ∆²y0 + ((u³ − 3u² + 2u)/3!) ∆³y0 + … ] du

      = h [ y0 u + (u²/2) ∆y0 + (1/2)(u³/3 − u²/2) ∆²y0 + (1/6)(u⁴/4 − u³ + u²) ∆³y0 + … ]_0^n

Hence, after simplification, we get

    I = ∫_{x0}^{xn} y dx = n h [ y0 + (n/2) ∆y0 + (n(2n − 3)/12) ∆²y0 + (n(n − 2)²/24) ∆³y0 + … ]        (3)
Trapezoidal Rule:
Putting n = 1 in the general formula (3) and neglecting differences of order higher than the first, we get

    I1 = ∫_{x0}^{x1} y dx = (h/2)(y0 + y1)

Similarly, we have

    I2 = ∫_{x1}^{x2} y dx = (h/2)(y1 + y2)
    I3 = ∫_{x2}^{x3} y dx = (h/2)(y2 + y3)

and so on.
Trapezoidal contd...
In this method, the known function values are joined by straight lines. The area enclosed by these lines between the given end points is computed to approximate the integral, as shown in the figure.
[Figure: the ordinates y0, y1 = f(x1), …, yn under the curve y = f(x), joined by straight-line segments.]
Trapezoidal contd...
Adding all the integrals Ii, i = 1, …, n, and using the interval-additive property of definite integrals, we obtain

    I = Σ Ii = ∫_{x0}^{xn} y dx = (h/2) [ y0 + 2 (y1 + y2 + ⋯ + yn−1) + yn ]            (4)
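A minimal Python sketch of the composite trapezoidal rule in Eq.(4), assuming equally spaced ordinates; the test integrand and limits are illustrative.

    import math

    def trapezoidal(f, a, b, n):
        # Composite trapezoidal rule with n equal subintervals of width h.
        h = (b - a) / n
        ys = [f(a + i * h) for i in range(n + 1)]
        return (h / 2) * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

    print(trapezoidal(math.sin, 0, math.pi, 100))   # ≈ 2, the exact value of ∫ sin x dx on [0, π]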
Simpson’s 1/3 Rule:
Similarly,

    ∫_{x2}^{x4} y dx = (h/3)(y2 + 4y3 + y4),
    ⋯,
    ∫_{xn−4}^{xn−2} y dx = (h/3)(yn−4 + 4yn−3 + yn−2)
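A Python sketch of the composite Simpson's 1/3 rule obtained by adding such pieces over consecutive pairs of subintervals; it assumes the number of subintervals n is even, and the test integrand is illustrative.

    import math

    def simpson_13(f, a, b, n):
        # Composite Simpson's 1/3 rule; n must be even.
        if n % 2 != 0:
            raise ValueError("n must be even for Simpson's 1/3 rule")
        h = (b - a) / n
        ys = [f(a + i * h) for i in range(n + 1)]
        return (h / 3) * (ys[0] + ys[-1]
                          + 4 * sum(ys[1:-1:2])     # odd-indexed ordinates
                          + 2 * sum(ys[2:-1:2]))    # interior even-indexed ordinates

    print(simpson_13(math.sin, 0, math.pi, 100))    # ≈ 2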
Simpson’s 3/8 Rule:
Putting n = 3 into the general quadrature formula (3) and taking the curve through the points (x0, y0), (x1, y1), (x2, y2) and (x3, y3) as a polynomial of degree three, so that the differences of order greater than three become zero, we get

    ∫_{x0}^{x3} y dx = 3h [ y0 + (3/2) ∆y0 + (3·3/12) ∆²y0 + (3/24) ∆³y0 ]
                     = 3h [ y0 + (3/2)(y1 − y0) + (3/4)(y2 − 2y1 + y0) + (1/8)(y3 − 3y2 + 3y1 − y0) ]
                     = (3h/8)(y0 + 3y1 + 3y2 + y3)
Similarly,

    ∫_{x3}^{x6} y dx = (3h/8)(y3 + 3y4 + 3y5 + y6)
    ∫_{x6}^{x9} y dx = (3h/8)(y6 + 3y7 + 3y8 + y9)
    ⋯
    ∫_{xn−3}^{xn} y dx = (3h/8)(yn−3 + 3yn−2 + 3yn−1 + yn)

Adding all these integrals (n being a multiple of 3), we obtain

    ∫_{x0}^{xn} y dx = (3h/8) [ y0 + 3(y1 + y2 + y4 + y5 + ⋯ + yn−2 + yn−1) + 2(y3 + y6 + ⋯ + yn−3) + yn ]        (6)
Equation (6) is known as Simpson’s 3/8 rule. It requires that the whole range (the given interval) be divided into equal subintervals, and the number of subintervals must be a multiple of 3. In this rule, the interpolating polynomial is of degree 3.
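A Python sketch of the composite Simpson's 3/8 rule in Eq.(6); it assumes the number of subintervals n is a multiple of 3, and the test integrand is illustrative.

    import math

    def simpson_38(f, a, b, n):
        # Composite Simpson's 3/8 rule; n must be a multiple of 3.
        if n % 3 != 0:
            raise ValueError("n must be a multiple of 3 for Simpson's 3/8 rule")
        h = (b - a) / n
        ys = [f(a + i * h) for i in range(n + 1)]
        total = ys[0] + ys[-1]
        for i in range(1, n):
            # Interior ordinates whose index is a multiple of 3 get weight 2, the rest weight 3.
            total += (2 if i % 3 == 0 else 3) * ys[i]
        return 3 * h / 8 * total

    print(simpson_38(math.sin, 0, math.pi, 99))     # ≈ 2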