Computational Programmes
Computational Techniques
(PH-5172)
Why Computational Physics?
Computational physics uses computer simulation to study physical problems.
Before discussing the individual methods, we should first understand the errors that arise in computation.
Sources of Errors
Errors in computation
GIGO: garbage in, garbage out
Error Propagation
Addition/Subtraction
Let $x_t = x_a + e_x$ and $y_t = y_a + e_y$, where $x_t, y_t$ are the true values, $x_a, y_a$ the approximate values, and $e_x, e_y$ the errors.
Then $x_t \pm y_t = (x_a \pm y_a) + (e_x \pm e_y)$
$\Rightarrow e_{x \pm y} = e_x \pm e_y$
Absolute error: $|e_{x \pm y}| \le |e_x| + |e_y|$
Error Propagation
Multiplication
$x_t \times y_t = (x_a + e_x) \times (y_a + e_y)$
$= x_a y_a + x_a e_y + e_x y_a + e_x e_y$
Assuming the errors are very small compared to the true values, the term $e_x e_y$ can be dropped:
$\Rightarrow x_t \times y_t \approx x_a y_a + x_a e_y + e_x y_a$
So the error in multiplication is $e_{xy} = x_a e_y + e_x y_a$.
Error Propagation
Division
$\dfrac{x_t}{y_t} = \dfrac{x_a + e_x}{y_a + e_y} = \dfrac{x_a y_a + e_x y_a - x_a e_y - e_x e_y}{y_a^2 - e_y^2}$
Dropping the second-order terms $e_x e_y$ and $e_y^2$:
$\approx \dfrac{x_a y_a + e_x y_a - x_a e_y}{y_a^2} = \dfrac{x_a}{y_a} + \dfrac{e_x y_a - x_a e_y}{y_a^2}$
So the error in division is $e_{x/y} = \dfrac{e_x y_a - x_a e_y}{y_a^2}$.
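As a quick numerical check of the three rules above, a minimal Python sketch (the function names and sample values are illustrative, not from the slides):

# Worst-case error bounds from the propagation rules derived above.
def err_add_sub(ex, ey):
    return abs(ex) + abs(ey)                      # |e_{x+-y}| <= |e_x| + |e_y|

def err_mul(xa, ya, ex, ey):
    return abs(xa * ey) + abs(ex * ya)            # |e_xy| <= |x_a e_y| + |e_x y_a|

def err_div(xa, ya, ex, ey):
    return (abs(ex * ya) + abs(xa * ey)) / ya**2  # worst case of (e_x y_a - x_a e_y)/y_a^2

# Example: x = 2.00 +/- 0.01, y = 4.00 +/- 0.02
print(err_add_sub(0.01, 0.02))         # 0.03
print(err_mul(2.0, 4.0, 0.01, 0.02))   # 0.08
print(err_div(2.0, 4.0, 0.01, 0.02))   # 0.005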
Finite Differences
Newton’s Finite Differences
Newton’s Finite Differences
Given the values of a function f(x) at equidistant discrete values of x with spacing h:
$x_0 \quad x_1 \quad x_2 \quad x_3 \quad x_4 \quad \cdots \quad x_n$
$f_0 \quad f_1 \quad f_2 \quad f_3 \quad f_4 \quad \cdots \quad f_n$
Newton’s Finite Differences
Forward difference:
1st order: $\Delta f(x) = f(x+h) - f(x)$
2nd order: $\Delta^2 f(x) = \Delta f(x+h) - \Delta f(x)$
3rd order: $\Delta^3 f(x) = \Delta^2 f(x+h) - \Delta^2 f(x)$
$\vdots$
rth order: $\Delta^r f(x) = \Delta^{r-1} f(x+h) - \Delta^{r-1} f(x)$
Forward Difference Table
$\Delta f(x) = f(x+h) - f(x)$
$\Delta^2 f(x) = f(x+2h) - 2f(x+h) + f(x)$
$\vdots$
$\Delta^r f(x) = \sum_{i=0}^{r} (-1)^i \binom{r}{i} f(x + (r-i)h)$
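The difference table is easy to build column by column. A minimal sketch (the function name is mine; assumes equally spaced data):

def forward_differences(f):
    # Columns of the table: [f, df, d2f, ...] for equally spaced samples.
    table = [list(f)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# f(x) = x^2 sampled at x = 0, 1, 2, 3 (h = 1)
for col in forward_differences([0, 1, 4, 9]):
    print(col)   # [0, 1, 4, 9], [1, 3, 5], [2, 2], [0]: the 2nd difference is constant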
Factorial Notation
The nth factorial of x, denoted by $x^{(n)}$:
$x^{(n)} = x(x-h)(x-2h)\cdots(x-(n-1)h)$
$\vdots$
$x^{(2)} = x(x-h)$
$x^{(1)} = x$
$x^{(0)} = 1$
Lemma: Any polynomial f(x) in x of degree n can be expressed in factorial notation with the same degree.
We have $\Delta x^{(n)} = nh\,x^{(n-1)}$
Writing $f(x) = a_0 + a_1 x^{(1)} + a_2 x^{(2)} + \cdots + a_n x^{(n)}$ and applying $\Delta$ repeatedly:
$\Rightarrow \Delta f(x) = h a_1 + 2h a_2 x^{(1)} + \cdots + nh\,a_n x^{(n-1)}$
$\Delta^2 f(x) = 2h^2 a_2 + 6h^2 a_3 x^{(1)} + \cdots + n(n-1)h^2 a_n x^{(n-2)}$
$\vdots$
$\Delta^n f(x) = n!\,h^n a_n$
Now take the starting point of the difference table at $x = 0$:
$a_0 = f(0)$
$a_1 = \dfrac{\Delta f(0)}{h}$
$a_2 = \dfrac{\Delta^2 f(0)}{2!\,h^2}$
$a_3 = \dfrac{\Delta^3 f(0)}{3!\,h^3}$
$\vdots$
$a_n = \dfrac{\Delta^n f(0)}{n!\,h^n}$
Polynomial f(x) in x of degree n
Another notation:
$\Rightarrow f(x) = \sum_{i=0}^{n} a_i \prod_{j=0}^{i-1} (x - x_j)$
where $a_i = \dfrac{\Delta^i f_0}{i!\,h^i}$
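A sketch of evaluating this forward-difference form, reusing the forward_differences helper from the earlier sketch (assumes equal spacing h):

def newton_forward_eval(x0, h, f, x):
    # Sum a_i * prod_{j<i}(x - x_j); each term equals d^i f_0 * u(u-1)...(u-i+1)/i!
    table = forward_differences(f)
    u = (x - x0) / h
    result, term = 0.0, 1.0
    for i in range(len(f)):
        result += table[i][0] * term
        term *= (u - i) / (i + 1)
    return result

print(newton_forward_eval(0, 1, [0, 1, 4, 9], 2.5))   # 6.25, exact for f(x) = x^2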
Newton’s divided difference table:
(Use: if the points are not equally spaced)
1st order: $f[x_0, x_1] = \dfrac{f_1 - f_0}{x_1 - x_0}$
$f[x_1, x_2] = \dfrac{f_2 - f_1}{x_2 - x_1}$
$\vdots$
2nd order: $f[x_0, x_1, x_2] = \dfrac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0}$
3rd order: $f[x_0, x_1, x_2, x_3] = \dfrac{f[x_1, x_2, x_3] - f[x_0, x_1, x_2]}{x_3 - x_0}$
and similarly for the higher-order differences.
Newton’s divided difference table:
x      f      1st             2nd
x_0    f_0
              f[x_0, x_1]
x_1    f_1                    f[x_0, x_1, x_2]
              f[x_1, x_2]
x_2    f_2
Polynomial f(x) in x of degree n:
With Newton’s divided differences
$f(x) = \sum_{i=0}^{n} a_i \prod_{j=0}^{i-1} (x - x_j)$
where $a_i = f[x_0, x_1, \cdots, x_i]$ (H.W.)
$\Rightarrow f(x) = \sum_{i=0}^{n} f[x_0, x_1, \cdots, x_i] \prod_{j=0}^{i-1} (x - x_j)$
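A minimal sketch of the divided-difference coefficients and the nested (Horner-like) evaluation of the Newton form; the names are mine:

def divided_differences(xs, fs):
    # a[i] ends up holding f[x_0, ..., x_i] (computed in place, column by column).
    a, n = list(fs), len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            a[i] = (a[i] - a[i - 1]) / (xs[i] - xs[i - j])
    return a

def newton_eval(xs, a, x):
    # Evaluate sum_i a_i * prod_{j<i} (x - x_j) by nesting from the highest term.
    result = a[-1]
    for i in range(len(a) - 2, -1, -1):
        result = result * (x - xs[i]) + a[i]
    return result

xs, fs = [0.0, 1.0, 3.0, 4.0], [1.0, 2.0, 10.0, 17.0]   # f(x) = x^2 + 1, unequal spacing
print(newton_eval(xs, divided_differences(xs, fs), 2.0))  # 5.0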
Interpolation & Approximation
Construct a function f(x) when it is not known explicitly and only its values at a set of points (tabular points) are given:
$x_0 \quad x_1 \quad x_2 \quad x_3 \quad x_4 \quad \cdots \quad x_n$
$f_0 \quad f_1 \quad f_2 \quad f_3 \quad f_4 \quad \cdots \quad f_n$
In approximation we replace f(x) by an approximating polynomial P(x), called the interpolating polynomial, for all values of x over a given interval [a, b].
Interpolating polynomial
A polynomial P(x) is called an interpolating polynomial if the values of P(x) (and possibly certain of its derivatives) coincide with those of f(x) (and its same-order derivatives) at the given tabular points:
$x_0: f_0 \Rightarrow P(x_0) = f_0$
$x_1: f_1 \Rightarrow P(x_1) = f_1$
$x_2: f_2 \Rightarrow P(x_2) = f_2$
$x_3: f_3 \Rightarrow P(x_3) = f_3$
$\vdots$
$x_n: f_n \Rightarrow P(x_n) = f_n$
With n+1 data points we can construct a polynomial of maximum degree n.
Interpolating polynomial
Let the polynomial be
$P(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$
Conditions of interpolation:
$a_0 + a_1 x_0 + a_2 x_0^2 + \cdots + a_n x_0^n = f_0$
$a_0 + a_1 x_1 + a_2 x_1^2 + \cdots + a_n x_1^n = f_1$
$a_0 + a_1 x_2 + a_2 x_2^2 + \cdots + a_n x_2^n = f_2$
$\vdots$
$a_0 + a_1 x_n + a_2 x_n^2 + \cdots + a_n x_n^n = f_n$
Interpolating polynomial
In matrix form:
$\begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_n \end{pmatrix}$
Condition for the existence of the interpolating polynomial:
$\begin{vmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{vmatrix} \neq 0$
Interpolating polynomial (Uniqueness)
Suppose there is another interpolating polynomial
$P'(x) = a'_0 + a'_1 x + a'_2 x^2 + \cdots + a'_n x^n$
Subtracting its interpolation conditions from those of P(x):
$\Rightarrow \begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^n \\ 1 & x_1 & x_1^2 & \cdots & x_1^n \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^n \end{pmatrix} \begin{pmatrix} a_0 - a'_0 \\ a_1 - a'_1 \\ \vdots \\ a_n - a'_n \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}$
Since the determinant is nonzero, $a_i = a'_i$ for all i, so
$Q(x) = P(x) - P'(x) = 0$
$\Rightarrow P(x) = P'(x)$
Different kinds of Interpolation
Lagrange interpolation
Newton’s interpolation
Spline interpolation
Lagrange interpolation
Given the data $(x_0, f_0), (x_1, f_1), \ldots, (x_n, f_n)$:
$P_n(x) = \sum_{i=0}^{n} f_i\, l_i(x)$
where $l_i(x) = \prod_{j \neq i} \dfrac{x - x_j}{x_i - x_j}$
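A direct transcription into Python (a minimal sketch; the function name is mine):

def lagrange_interpolate(xs, fs, x):
    # P_n(x) = sum_i f_i * l_i(x) with l_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += fi * li
    return total

print(lagrange_interpolate([0, 1, 2], [1, 3, 7], 1.5))   # 4.75, fits x^2 + x + 1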
Newton-Gregory forward interpolation
For equally spaced data $(x_0, f_0), \ldots, (x_n, f_n)$:
$f(x) = \sum_{i=0}^{n} a_i \prod_{j=0}^{i-1} (x - x_j)$
where $a_i = \dfrac{\Delta^i f_0}{i!\,h^i}$
Newton’s interpolation
For general (possibly unequally spaced) data $(x_0, f_0), \ldots, (x_n, f_n)$:
$f(x) = \sum_{i=0}^{n} a_i \prod_{j=0}^{i-1} (x - x_j)$
where $a_i = f[x_0, x_1, \cdots, x_i]$
Curve Fitting: Regression
Linear fitting
Given data $(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)$, fitting function $y = ax + b$:
$a = \dfrac{n \sum x_i y_i - \sum x_i \sum y_i}{n \sum x_i^2 - \left(\sum x_i\right)^2}$
$b = \dfrac{\sum y_i}{n} - a \dfrac{\sum x_i}{n}$
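These closed-form sums translate directly (a minimal sketch; the names are mine):

def linear_fit(xs, ys):
    # Least-squares y = a*x + b from the normal-equation formulas above.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = sy / n - a * sx / n
    return a, b

print(linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]))   # (1.94, 0.15)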
Power law fitting
Fitting function $y = b x^a$
$\ln y = \ln b + a \ln x$
$a = \dfrac{n \sum \ln x_i \ln y_i - \sum \ln x_i \sum \ln y_i}{n \sum (\ln x_i)^2 - \left(\sum \ln x_i\right)^2}$
$\ln b = \dfrac{\sum \ln y_i}{n} - a \dfrac{\sum \ln x_i}{n}$
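Since the problem is linear in (ln x, ln y), the linear_fit sketch above can be reused directly (assumes all data are positive):

import math

def power_fit(xs, ys):
    # Fit y = b * x^a by a straight-line fit in log-log space.
    a, lnb = linear_fit([math.log(x) for x in xs], [math.log(y) for y in ys])
    return a, math.exp(lnb)

print(power_fit([1, 2, 3, 4], [2.0, 8.0, 18.0, 32.0]))   # (2.0, 2.0) for y = 2x^2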
Polynomial fitting
Fitting function $y(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{m-1} x^{m-1}$
Normal equations:
$\begin{pmatrix} n & \sum x_i & \sum x_i^2 & \cdots & \sum x_i^{m-1} \\ \sum x_i & \sum x_i^2 & \sum x_i^3 & \cdots & \sum x_i^m \\ \vdots & \vdots & \vdots & & \vdots \\ \sum x_i^{m-1} & \sum x_i^m & \sum x_i^{m+1} & \cdots & \sum x_i^{2m-2} \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_{m-1} \end{pmatrix} = \begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_{m-1} \end{pmatrix}$
where $b_k = \sum x_i^k y_i$
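A sketch that assembles exactly these normal equations and solves them (numpy is assumed available; the names are mine):

import numpy as np

def poly_fit(xs, ys, m):
    # Least-squares polynomial a_0 + a_1 x + ... + a_{m-1} x^{m-1}.
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.array([[np.sum(xs ** (j + k)) for j in range(m)] for k in range(m)])
    b = np.array([np.sum(xs ** k * ys) for k in range(m)])
    return np.linalg.solve(A, b)

print(poly_fit([0, 1, 2, 3], [1.0, 2.0, 5.0, 10.0], 3))   # ~ [1, 0, 1] for y = x^2 + 1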
Multiple Linear Regression:
Function of two variables: $y = a_0 + a_1 x + a_2 z$
Data: $(x_0, z_0, y_0), (x_1, z_1, y_1), \ldots, (x_n, z_n, y_n)$
Normal equations:
$\begin{pmatrix} n & \sum x_i & \sum z_i \\ \sum x_i & \sum x_i^2 & \sum x_i z_i \\ \sum z_i & \sum x_i z_i & \sum z_i^2 \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} \sum y_i \\ \sum y_i x_i \\ \sum y_i z_i \end{pmatrix}$
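The same normal equations in matrix form, using numpy (a sketch; X is the design matrix with columns 1, x, z):

import numpy as np

def multilinear_fit(xs, zs, ys):
    # Solve (X^T X) a = X^T y, which reproduces the 3x3 system above.
    X = np.column_stack([np.ones(len(xs)), xs, zs])
    return np.linalg.solve(X.T @ X, X.T @ np.asarray(ys, float))

print(multilinear_fit([0, 1, 2, 3], [1, 0, 2, 1], [3.0, 3.0, 9.0, 9.0]))
# ~ [1, 2, 2] for y = 1 + 2x + 2z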
Numerical differentiation:
Forward difference: $f'(x) \approx \dfrac{f(x+h) - f(x)}{h}$
Central difference: $f'(x) \approx \dfrac{f(x+h) - f(x-h)}{2h}$
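A minimal sketch comparing the two formulas (the function names are mine):

import math

def d_forward(f, x, h):
    return (f(x + h) - f(x)) / h              # error O(h)

def d_central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)    # error O(h^2)

print(d_forward(math.sin, 1.0, 0.1))   # ~ 0.4974
print(d_central(math.sin, 1.0, 0.1))   # ~ 0.5394; exact cos(1) ~ 0.5403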
Integration
$I = \int_a^b f(x)\,dx$
• Integration is the total value, or summation, of f(x) dx over the range from a to b.
Integration
We consider n+1 equally spaced points in [a, b]:
$x_0 = a,\ x_1 = x_0 + h,\ x_2 = x_1 + h,\ \cdots,\ x_n = x_{n-1} + h$
where $h = \dfrac{b-a}{n}$
Let $u = \dfrac{x - x_0}{h}$, i.e. $x = x_0 + hu$ and $dx = h\,du$
$I = \int_a^b f(x)\,dx = \int_{x_0}^{x_0+nh} f(x)\,dx = h \int_0^n f(x_0 + uh)\,du$
We have the interpolation relation
$f(x) = P(x) = \sum_{i=0}^{n} \dfrac{\Delta^i f_0}{i!\,h^i} \prod_{j=0}^{i-1} (x - x_j) = \sum_{i=0}^{n} \dfrac{\Delta^i f_0}{i!\,h^i} \prod_{j=0}^{i-1} (u - j)h$
$= f_0 + u\,\Delta f_0 + u(u-1)\dfrac{\Delta^2 f_0}{2!} + u(u-1)(u-2)\dfrac{\Delta^3 f_0}{3!} + \cdots + u(u-1)\cdots(u-n+1)\dfrac{\Delta^n f_0}{n!}$
Integration
$I = h \int_0^n f(x_0 + uh)\,du$
$= h \int_0^n \left[ f_0 + u\,\Delta f_0 + u(u-1)\dfrac{\Delta^2 f_0}{2!} + \cdots + u(u-1)\cdots(u-n+1)\dfrac{\Delta^n f_0}{n!} \right] du$
$= h \left[ n f_0 + \dfrac{n^2}{2}\Delta f_0 + \dfrac{2n^3 - 3n^2}{12}\Delta^2 f_0 + \dfrac{n^4 - 4n^3 + 4n^2}{24}\Delta^3 f_0 + \cdots \right]$
Trapezoidal Rule: Formula
Taking n = 1:
$I = h\left[ f_0 + \dfrac{1}{2}\Delta f_0 \right] = h\left[ f_0 + \dfrac{1}{2}(f_1 - f_0) \right] = \dfrac{h}{2}\left[ f_0 + f_1 \right]$ where $f_i = f(x_i)$
Trapezoidal Rule: Graphical
[Figure: f(x) replaced by the straight line P(x) between x_0 and x_1; the area under the chord approximates the integral.]
Composite Trapezoidal Rule
$I = \int_a^b f(x)\,dx$
• Assuming m+1 data points are evenly spaced, there will be m intervals over which to integrate.
• The total integral can be calculated by integrating over each subinterval and then adding the results:
$\Rightarrow I = \dfrac{h}{2}\left[ f(x_0) + 2\sum_{i=1}^{m-1} f(x_i) + f(x_m) \right]$
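A minimal sketch of the composite formula (the names are mine):

def trapezoid(f, a, b, m):
    # Composite trapezoidal rule with m equal subintervals.
    h = (b - a) / m
    s = 0.5 * (f(a) + f(b))
    for i in range(1, m):
        s += f(a + i * h)
    return h * s

print(trapezoid(lambda x: x * x, 0.0, 1.0, 100))   # ~ 0.333350 (exact 1/3)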
Trapezoidal Rule: Error
For the composite rule, with $x_i \le y_i \le x_{i+1}$ in each subinterval:
$E_T^n = -\dfrac{h^3}{12} \sum_{i=1}^{n} f''(y_i) = -\dfrac{h^3}{12}\, n\,\Phi$
where $\Phi = \frac{1}{n}\sum_i f''(y_i)$ is the mean value of $f''$.
Simpson’s 1/3 Rule: Formula (n = 2)
$I = \int_{x_0}^{x_0+nh} f(x)\,dx = h \int_0^n f(x_0 + uh)\,du$
$= h \int_0^n \left[ f_0 + u\,\Delta f_0 + u(u-1)\dfrac{\Delta^2 f_0}{2!} + \cdots + u(u-1)\cdots(u-n+1)\dfrac{\Delta^n f_0}{n!} \right] du$
Taking n = 2:
$I = \int_{x_0}^{x_0+2h} f(x)\,dx = h\left[ n f_0 + \dfrac{n^2}{2}\Delta f_0 + \dfrac{2n^3 - 3n^2}{12}\Delta^2 f_0 \right]_{n=2}$
$= h\left[ 2 f_0 + \dfrac{4}{2}\Delta f_0 + \dfrac{16 - 12}{12}\Delta^2 f_0 \right]$
$= h\left[ 2 f_0 + 2(f_1 - f_0) + \dfrac{1}{3}(f_2 - 2f_1 + f_0) \right]$
$= \dfrac{h}{3}\left[ f_0 + 4 f_1 + f_2 \right]$
1/3 Rule: Graphical
[Figure: f(x) replaced by a parabola through the three points (x_0, f_0), (x_1, f_1), (x_2, f_2).]
Simpson’s 1/3 Rule: Composite formula
• Simpson’s rule can be improved by dividing the integration interval into a number of segments of equal width.
• Note: the number of segments has to be even.
• h = step size = (b − a)/n, where n = the number of segments = 2m.
Simpson’s 1/3 Rule: Composite formula
$I = \int_a^b f(x)\,dx = \int_{x_0}^{x_2} f(x)\,dx + \int_{x_2}^{x_4} f(x)\,dx + \cdots + \int_{x_{2m-2}}^{x_{2m}} f(x)\,dx$
$= \dfrac{h}{3}\left[ f_0 + 4f_1 + f_2 \right] + \dfrac{h}{3}\left[ f_2 + 4f_3 + f_4 \right] + \cdots + \dfrac{h}{3}\left[ f_{2m-2} + 4f_{2m-1} + f_{2m} \right]$
$= \dfrac{h}{3}\left[ (f_0 + f_{2m}) + 2(f_2 + f_4 + f_6 + \cdots + f_{2m-2}) + 4(f_1 + f_3 + f_5 + \cdots + f_{2m-1}) \right]$
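A minimal sketch of this composite formula (the names are mine):

def simpson(f, a, b, m):
    # Composite Simpson 1/3 rule over n = 2m equal segments.
    n = 2 * m
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return h * s / 3.0

print(simpson(lambda x: x ** 3, 0.0, 1.0, 2))   # 0.25 exactly: the rule integrates cubics exactly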
1/3 Rule: Error
Simpson’s 3/8 Rule
Taking n = 3:
$I = \int_{x_0}^{x_3} f(x)\,dx = \dfrac{3h}{8}\left[ f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3) \right]$
Differential Equation
1st order Differential Equation: Initial value problem
Solve the differential equation $\dfrac{dy}{dx} = f(x, y)$
with initial condition $y(x_0) = y_0$ in the given interval [a, b].
Taylor expansion:
$y(x_0 + h) = y(x_0) + h\,y'(x_0) + \dfrac{h^2}{2}\,y''(x_0) + \cdots$
Using the first-order approximation:
$y(x_0 + h) \approx y(x_0) + h\,y'(x_0) = y_0 + h\,f(x_0, y_0)$
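Applying this first-order update repeatedly is the Euler method; a minimal sketch (the names are mine):

def euler(f, x0, y0, h, n):
    # March dy/dx = f(x, y) forward from (x0, y0) in n steps of size h.
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return x, y

# dy/dx = y with y(0) = 1: exact y(1) = e ~ 2.71828
print(euler(lambda x, y: y, 0.0, 1.0, 0.01, 100))   # (1.0, ~ 2.7048)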
Differential Equation: Euler Method
[Figure: starting from the given (x_0, y_0), successive Euler steps produce y_1, y_2, ..., y_n at x_1, x_2, ..., x_n.]
$y_{i+1} = y_i + h\,f(x_i, y_i)$
Runge-Kutta Method…
Taylor expansion:
$y_{i+1} = y_i + h\,f(x_i, y_i) + \dfrac{h^2}{2}\left[ f_x + f_y f \right]_{(x_i, y_i)}$
Trial form to be matched against it:
$y_{i+1} = y_i + h \left\{ \alpha f + \beta \left[ f + f_x m h + n h f f_y \right] \right\}_{(x_i, y_i)}$
Runge-Kutta Method, 4th order:
$y_{i+1} = y_i + k h$
$k = \dfrac{k_1 + 2k_2 + 2k_3 + k_4}{6}$
$k_1 = f(x_i, y_i)$
$k_2 = f\!\left(x_i + \dfrac{h}{2},\ y_i + \dfrac{k_1 h}{2}\right)$
$k_3 = f\!\left(x_i + \dfrac{h}{2},\ y_i + \dfrac{k_2 h}{2}\right)$
$k_4 = f(x_i + h,\ y_i + k_3 h)$
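A minimal sketch of one RK4 step and a short integration loop (the names are mine):

def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + k1 * h / 2)
    k3 = f(x + h / 2, y + k2 * h / 2)
    k4 = f(x + h, y + k3 * h)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

x, y = 0.0, 1.0
for _ in range(10):          # dy/dx = y, y(0) = 1, h = 0.1
    y = rk4_step(lambda t, u: u, x, y, 0.1)
    x += 0.1
print(y)                     # ~ 2.718280; exact e ~ 2.718282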
System of Differential Equations
System of Differential Equations (Coupled)
$\dfrac{dx}{dt} = f(x, y, t)$, initial condition $x(t_0) = x_0$
$\dfrac{dy}{dt} = g(x, y, t)$, initial condition $y(t_0) = y_0$
We consider n+1 equally spaced points in [a, b]:
$t_0 = a,\ t_1 = t_0 + h,\ t_2 = t_1 + h,\ \cdots,\ t_n = t_{n-1} + h$, where $h = \dfrac{b-a}{n}$
Euler update:
$t_{i+1} = t_i + h$
$x_{i+1} = x_i + h\,f(x_i, y_i, t_i)$
$y_{i+1} = y_i + h\,g(x_i, y_i, t_i)$
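A minimal sketch of these coupled Euler updates (the names are mine; note that both variables are advanced from the old values simultaneously):

def euler_coupled(f, g, x0, y0, t0, h, n):
    x, y, t = x0, y0, t0
    for _ in range(n):
        x, y = x + h * f(x, y, t), y + h * g(x, y, t)   # simultaneous update
        t += h
    return x, y

# Harmonic oscillator: dx/dt = y, dy/dt = -x, starting at (1, 0)
print(euler_coupled(lambda x, y, t: y, lambda x, y, t: -x,
                    1.0, 0.0, 0.0, 0.001, 6283))
# ~ (1.003, 0.000): roughly back to the start after one period 2*pi; Euler slowly gains amplitude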
Higher order Differential Equations
2nd order Differential Equation: Initial value (IV) problem
$\dfrac{d^2 y}{dx^2} = f(x, y, y')$
with initial conditions:
$y(x_0) = y_0$ and $y'(x_0) = u_0$
2nd order Differential Equation
Decompose the 2nd order equation into two 1st order equations:
1. $\dfrac{dy}{dx} = u(x)$; IV: $y(x_0) = y_0$
2. $\dfrac{du}{dx} = f(x, y, u)$; IV: $u(x_0) = u_0$
Euler update:
$y_{i+1} = y_i + h\,u_i$
$u_{i+1} = u_i + h\,f(x_i, y_i, u_i)$
2nd order Differential Equation: Boundary Value Problem
Shooting Method
Finite difference method
2nd ODE: Shooting Method
Guess two trial slopes $u_1$ and $u_2$ at $x = a$, integrate the IVP for each, and let $t_1$ and $t_2$ be the resulting values of y at the far boundary. If the required boundary value there is B, linear interpolation gives an improved initial slope:
$u(a) = u_1 + \dfrac{u_1 - u_2}{t_1 - t_2}(B - t_1)$
Finite Difference Method
Partial differential equations (PDEs):
Non-linear equation: $f(x) = 0$
Root of Non-linear equation $f(x) = 0$
Analytical Method
Graphical Method
Iterative Method
Root of Non-linear equation
Analytical Method
$ax^2 + bx + c = 0$
Solution: $x = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$
We don’t have such a simple closed-form formula once the equation gets even a little more complicated.
Root of Non-linear equation
Graphical Method
Root of Non-linear equation
Trial & Error Method
Guess a value of x, say $x_0$, and find $f(x_0)$.
If $|f(x_0)| <$ tolerance value, then the solution is $x_0$;
else choose another value and test it.
Root of Non-linear equation
Iterative methods
a) Bracketing Methods
This group of methods takes initial guesses that bracket at least one actual root of the equation.
i) Bisection
ii) Regula Falsi
b) Open Methods
i) Secant
ii) Newton-Raphson
…
Bisection Method
Bisection Method: Algorithm
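The algorithm fits in a few lines; a minimal sketch (the names are mine):

def bisection(f, x1, x2, tol=1e-10):
    # Assumes f(x1) and f(x2) bracket a root, i.e. have opposite signs.
    if f(x1) * f(x2) > 0:
        raise ValueError("root is not bracketed")
    while (x2 - x1) / 2 > tol:
        xm = (x1 + x2) / 2
        if f(x1) * f(xm) <= 0:
            x2 = xm          # root is in the left half
        else:
            x1 = xm          # root is in the right half
    return (x1 + x2) / 2

print(bisection(lambda x: x * x - 2, 0.0, 2.0))   # ~ 1.4142135624 (sqrt 2)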
Bisection Method: Convergence
After the nth bisection, the interval containing the root is reduced to the size
$\dfrac{b-a}{2^n} = \dfrac{\Delta x}{2^n}$
The root must lie within $\pm \Delta x / 2^n$.
Error in the nth step: $E_n = \left| \dfrac{\Delta x}{2^n} \right|$
Error in the (n+1)th step: $E_{n+1} = \left| \dfrac{\Delta x}{2^{n+1}} \right| = \dfrac{E_n}{2}$
The error is reduced by a constant factor at each step, so the method is said to converge linearly.
False Position Method
This technique is similar to the bisection method, except that the next iterate is taken as the point where the straight line through the two bracketing points crosses the x-axis, rather than the midpoint:
$x_3 = x_1 - f(x_1)\,\dfrac{x_2 - x_1}{f(x_2) - f(x_1)}$
False Position Method
The chord through $(x_1, f(x_1))$ and $(x_2, f(x_2))$ satisfies
$\dfrac{f(x_1) - f(x_2)}{x_1 - x_2} = \dfrac{y - f(x_1)}{x - x_1}$
Setting y = 0 and solving for x gives
$x_3 = x_1 - f(x_1)\,\dfrac{x_2 - x_1}{f(x_2) - f(x_1)}$
False Position: Algorithm
• Calculate $x_3 = x_1 - f(x_1)\,\dfrac{x_2 - x_1}{f(x_2) - f(x_1)}$
• If $f(x_3) f(x_1) < 0$, set $x_2 = x_3$
• Else set $x_1 = x_3$
• Repeat until $|f(x_3)| <$ tolerance value.
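The same bracketing loop as bisection, with the chord intercept in place of the midpoint (a minimal sketch; the names are mine):

def false_position(f, x1, x2, tol=1e-10, max_iter=200):
    for _ in range(max_iter):
        x3 = x1 - f(x1) * (x2 - x1) / (f(x2) - f(x1))
        if abs(f(x3)) < tol:
            break
        if f(x3) * f(x1) < 0:
            x2 = x3
        else:
            x1 = x3
    return x3

print(false_position(lambda x: x * x - 2, 0.0, 2.0))   # ~ 1.4142135624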
False Position Method
Convergence
Converges linearly
Newton-Raphson Method
Choose a point $x_1$. If $|f(x_1)| < \varepsilon$, then the solution is $x_1$;
otherwise choose another point $x_2 = x_1 + h$, very near to $x_1$ and towards the actual root, and demand that $f(x_1 + h) = 0$:
$f(x_1 + h) = f(x_1) + f'(x_1)\,h + \cdots = 0$
Keeping only the first-order term:
$h = -\dfrac{f(x_1)}{f'(x_1)}$
If $|f(x_2)| < \varepsilon$, then the solution is $x_2$;
else go back to the first line and take $x_2$ as the initial guess.
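A minimal sketch of the iteration (the names are mine; the derivative is supplied analytically):

def newton_raphson(f, fprime, x, eps=1e-12, max_iter=50):
    # Repeat x -> x - f(x)/f'(x) until |f(x)| < eps.
    for _ in range(max_iter):
        if abs(f(x)) < eps:
            return x
        x -= f(x) / fprime(x)
    return x

print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0))   # ~ 1.41421356 (sqrt 2)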
Newton-Raphson Method:
Graphical representation
Convergence of Newton-Raphson Method
$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)}$
$\Rightarrow f(x_n) = f'(x_n)(x_n - x_{n+1})$
Convergence of Newton-Raphson Method
Expanding about the root R, with error $e_n = x_n - R$, one finds
$\Rightarrow e_{n+1} = -\dfrac{f''(R)}{2 f'(x_n)}\, e_n^2$
Quadratic convergence, so it converges faster than the bisection method.
Limitations of Newton-Raphson Method:
1. If $f'(x_n) \approx 0$ anywhere along the iteration, the correction $h = -f(x_n)/f'(x_n)$ blows up and the method fails.
Linear Algebra
Linear Equations
Linear Equations (in matrix form)
$A\,x = b$, with $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & & & & \vdots \end{pmatrix}$
Homogeneous Linear Equations
$A\,x = 0$
Forward Elimination
Step 1
Divide Equation 1 by $a_{11}$, multiply it by $a_{21}$, and subtract the result from Equation 2:
$\left( a_{22} - \dfrac{a_{21}}{a_{11}} a_{12} \right) x_2 + \cdots + \left( a_{2n} - \dfrac{a_{21}}{a_{11}} a_{1n} \right) x_n = b_2 - \dfrac{a_{21}}{a_{11}} b_1$
Repeating with the factors $a_{31}/a_{11}, \ldots, a_{n1}/a_{11}$ eliminates $x_1$ from Equations 3 to n. End of Step 1.
Forward Elimination
Step 2
Repeat the same procedure to eliminate $x_2$ from Equation 3 onwards, and continue step by step until the system is triangular; the last equation contains only $x_n$:
$a_{nn}^{(n-1)} x_n = b_n^{(n-1)}$
Forward Elimination: Solution
$x_n = \dfrac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$
This can be substituted back into the (n−1)th equation to obtain the solution for $x_{n-1}$. This back substitution can be continued until we get the solution for $x_1$:
$x_k = \dfrac{1}{a_{kk}^{(k-1)}} \left[ b_k^{(k-1)} - \sum_{j=k+1}^{n} a_{kj}^{(k-1)} x_j \right]$
Forward Elimination: Algorithm
Construction of the triangular matrix:
Do i = 1, n
  Do j = 1, n
    $a_{ij}^{(0)} = a_{ij}$
Do k = 1, n−1
  Do i = k+1, n
    Do j = k+1, n
      $a_{ij}^{(k)} = a_{ij}^{(k-1)} - \dfrac{a_{ik}^{(k-1)}}{a_{kk}^{(k-1)}}\, a_{kj}^{(k-1)}$
Forward Elimination: Algorithm (back substitution)
$x_n = \dfrac{b_n^{(n-1)}}{a_{nn}^{(n-1)}}$
Do k = n−1, 1, −1
  $x_k = \dfrac{1}{a_{kk}^{(k-1)}} \left[ b_k^{(k-1)} - \sum_{j=k+1}^{n} a_{kj}^{(k-1)} x_j \right]$
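Putting elimination and back substitution together, a minimal sketch without pivoting (the names are mine; note that the right-hand side b must be updated along with the rows of A, which the slides leave implicit):

def gauss_solve(A, b):
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):                    # forward elimination
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]             # update the RHS too
    x = [0.0] * n
    for k in range(n - 1, -1, -1):            # back substitution
        s = sum(A[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - s) / A[k][k]
    return x

print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))   # [1.0, 3.0]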
Inverse of a matrix:
Definition: A′ is the inverse of A if
$A A' = A' A = I$
Inverse of a matrix:
The inverse of a matrix can be calculated using the Gauss elimination technique: solve $A x_k = e_k$ for each column $e_k$ of the identity matrix; the solutions $x_k$ are the columns of $A^{-1}$.
LU Decomposition
$A = LU$
where
L = lower triangular matrix (we prefer a unit lower triangle)
U = upper triangular matrix
$A = LU = \begin{bmatrix} 1 & 0 & 0 \\ \ell_{21} & 1 & 0 \\ \ell_{31} & \ell_{32} & 1 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix}$
LU Decomposition
(1) $Ax = b$
(2) Construct L & U: $A = LU$, so $L(Ux) = b$
(3) Solve $Ly = b$ for y
(4) Solve $Ux = y$ for x!!
Construction of the L & U matrices
Solution of the Ly = b equation (forward substitution)
Solution of the Ux = y equation
$\Rightarrow \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots & u_{1n} \\ 0 & u_{22} & u_{23} & \cdots & u_{2n} \\ 0 & 0 & u_{33} & \cdots & u_{3n} \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & u_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_n \end{bmatrix}$
Solution:
$x_n = \dfrac{y_n}{u_{nn}}$
Do k = n−1, 1, −1
  $x_k = \dfrac{1}{u_{kk}} \left[ y_k - \sum_{j=k+1}^{n} u_{kj} x_j \right]$
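A minimal sketch of the Doolittle construction (unit lower triangle, no pivoting) and the two substitution sweeps; the names are mine:

def lu_decompose(A):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):               # forward substitution: L y = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):   # back substitution: U x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

L, U = lu_decompose([[2.0, 1.0], [1.0, 3.0]])
print(lu_solve(L, U, [5.0, 10.0]))   # [1.0, 3.0]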
Eigenvalues and Eigenvectors:
$A u_i = \lambda_i u_i$
A: square matrix
$u_i$: column matrix (eigenvector)
$\lambda_i$: number (eigenvalue)
Eigenvalues and Eigenvectors:
Define a set of basis vectors $\{v_i\}$ and a diagonal matrix D:
$D = \begin{bmatrix} \lambda_1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda_2 & 0 & \cdots & 0 \\ 0 & 0 & \lambda_3 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & \lambda_n \end{bmatrix}$
$v_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix};\ v_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix};\ \cdots;\ v_n = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$
Eigenvalues and Eigenvectors:
Similarity transformation: $S^{-1} A S = D$, with $S^{-1} S = I$
and corresponding $v_i = S^{-1} u_i$.
Now $D v_i = S^{-1} A S\, S^{-1} u_i = S^{-1} A u_i = \lambda_i S^{-1} u_i = \lambda_i v_i$
So we have the eigenvalues!!!
Stochastics
Stochastic approach
A class of computational techniques that relies on repeated random sampling to compute the solution of a physical problem.
Hit or Miss method for $I = \int_a^b f(x)\,dx$:
Draw a box extending from a to b and from 0 to $y_0$, where $y_0 > f(x)$ throughout this interval. Drop N points randomly into the box and count the number $N_0$ that fall below f(x). Then
$I = \dfrac{N_0}{N} A$ where $A = y_0 (b - a)$
The random points $(x_i, y_i)$ must lie in the range $a \le x_i \le b$ and $0 \le y_i \le y_0$.
Evaluation of a definite integral:
$I = \int_a^b f(x)\,dx$
Usually we generate random numbers in the range 0 to 1. How do we then generate random numbers in the required range? Scale and shift them:
$x_i = a + (b - a)\,r_i$ and $y_i = y_0\, r'_i$
where $r_i$ and $r'_i$ are uniform random numbers in [0, 1].
Value of π:
$\pi \approx 4 \dfrac{N_0}{N}$
where N points are dropped uniformly in the unit square and $N_0$ of them fall inside the quarter circle of unit radius.
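A minimal sketch of hit-or-miss integration, applied to the quarter circle so that the same run also estimates π (the names are mine):

import random

def hit_or_miss(f, a, b, y0, N):
    # Estimate the integral of f over [a, b]; requires y0 > f(x) on the interval.
    hits = 0
    for _ in range(N):
        x = a + (b - a) * random.random()   # map [0,1) into [a, b)
        y = y0 * random.random()            # map [0,1) into [0, y0)
        if y < f(x):
            hits += 1
    return hits / N * y0 * (b - a)          # (N0 / N) * A

# Integral of sqrt(1 - x^2) on [0, 1] is pi/4
est = hit_or_miss(lambda x: (1 - x * x) ** 0.5, 0.0, 1.0, 1.0, 100_000)
print(4 * est)   # ~ 3.14; the statistical error shrinks like 1/sqrt(N)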
Random Number Generation:
Pseudorandom number generators (PRNGs) are algorithms that can automatically create long runs of numbers with good random properties, but eventually the sequence repeats (or the memory usage grows without bound). The series of values generated by such an algorithm is completely determined by a fixed number called the seed. One of the most common PRNGs is the Linear Congruential Generator (LCG), which uses the recurrence
$x_{n+1} = (a\,x_n + c) \bmod m$
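A minimal sketch of an LCG; the constants a, c, m below are one common published choice (the Numerical Recipes values), used here only for illustration:

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # x_{n+1} = (a * x_n + c) mod m, normalized into [0, 1)
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=42)
print([round(next(gen), 4) for _ in range(5)])   # the same seed always reproduces this sequence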