Numerical Computing Project

Uploaded by juttjunaid532
Group # 6

IDE: Jupyter Notebook & MS Word
Language: Python

Group members:
ASHRAF ALI KHAN (22765)
MUHAMMAD LUQMAN ALVI
1. Numerical Computing
Numerical computing is a branch of computer science and mathematics that focuses on solving
mathematical problems using numerical methods, often with the aid of computers. It is particularly useful
for problems that cannot be solved analytically or require approximation due to their complexity. The
essence of numerical computing lies in developing algorithms that approximate solutions to mathematical
equations, often producing results to a specified degree of accuracy.

1.1. Key Features of Numerical Computing

Approximations: Unlike symbolic computation, numerical computing approximates solutions with


numerical values rather than solving equations analytically.

Error Analysis: Numerical methods often involve errors due to approximations, round-off, or truncation.
Understanding and minimizing these errors is a critical aspect of numerical computing.

Computational Efficiency: The methods are designed to perform calculations efficiently, even for large-
scale problems that involve thousands or millions of variables.

Iteration: Many numerical methods rely on iterative processes, gradually refining the solution until a
desired accuracy is achieved.

1.2. Applications of Numerical Computing

Numerical computing has a wide range of applications across disciplines, including:

Engineering: Solving partial differential equations, structural analysis, and signal processing.

Physics: Simulations for fluid dynamics, quantum mechanics, and astrophysics.

Finance: Portfolio optimization, risk analysis, and derivative pricing.

Biology: Modeling population dynamics, genetic algorithms, and drug diffusion.

Statistics and Machine Learning: Data fitting, optimization algorithms, and predictive modeling.

1.3. Significance of Numerical Computing


Numerical computing is pivotal in modern science and engineering due to its ability to handle real-world
problems involving complex systems. Without numerical methods, advancements in fields like artificial
intelligence, weather forecasting, and advanced physics would be nearly impossible. It is also an
essential tool for researchers and professionals who need to model, analyze, and interpret phenomena
in a computationally efficient manner. Numerical computing is not just about writing code but
understanding the mathematical principles behind the methods to ensure solutions are accurate and
reliable. Its importance continues to grow as we encounter problems of greater complexity in an
increasingly data-driven world.

1.4. Why Numerical Computing?


If no analytical (exact) solution exists, we need to find a numerical, approximate solution, and that is exactly what numerical computing provides. Many of today's phenomena are nonlinear and too complex for closed-form solutions, and we need accuracy, time efficiency, and reusability — all of which numerical computing delivers.

1.5. Pros of Numerical Computing

Solves Complex Problems: Handles equations and systems that are too complex for analytical
solutions.

Flexibility: Adapts to various fields like engineering, physics, finance, and biology.

Accuracy Control: Allows users to set desired tolerances for precision.

Scalable: Handles large datasets and high-dimensional problems efficiently.

Real-World Applications: Models and predicts real-world phenomena with practical solutions.

1.6. Cons of Numerical Computing

Error Accumulation: Susceptible to rounding, truncation, or approximation errors.

High Computational Cost: May require significant processing power for complex problems.

Slower Convergence: Certain methods (e.g., bisection) can be slow compared to analytical approaches.

Dependence on Algorithms: Requires well-designed algorithms for accuracy and efficiency.

Approximation: Never guarantees an exact solution, only approximations close to the real answer.

2.Bisection Method
The bisection method, while reliable and straightforward, has specific conditions and restrictions that
must be satisfied for it to work effectively.

2.1. Conditions for Bisection Method

Continuity of the Function: The function f(x) must be continuous on the interval [a, b]. If the function
has discontinuities, the method cannot guarantee the existence of a root.

Sign Change in the Interval: The values of the function at the endpoints of the interval must have
opposite signs, i.e., f(a) * f(b) < 0. This ensures that there is at least one root within the interval [a, b], as
guaranteed by the Intermediate Value Theorem.

Initial Interval Specification: The user must provide a valid interval [a, b] that satisfies the above
condition. Incorrect or arbitrary interval selection can lead to failure.

Well-Defined Function: The function f(x) must be well-defined in the interval [a, b]. Undefined behavior,
such as division by zero, must not occur within this range.
2.2. Algorithm of Bisection Method

Step 1: Set a1 = a and b1 = b. Let p1 be the midpoint of [a, b], that is, p1 = (a + b) / 2.

Step 2: If f(p1) = 0, then p = p1, and we are done.

Step 3: If f(p1) ≠ 0, then f(p1) has the same sign as either f(a1) or f(b1).

Step 4: If f(p1) and f(a1) have the same signs, p belongs to (p1, b1). Set a2 = p1 and b2 = b1.

Step 5: If f(p1) and f(a1) have opposite signs, p belongs to (a1, p1). Set a2 = a1 and b2 = p1.

Then reapply the process to the interval [a2, b2] and so on until we get the required root.

2.3. Example of Bisection Method

Enter the function: x^3 - 3x - 5
Enter the lower bound of the interval [a]: 2
Enter the upper bound of the interval [b]: 3
Enter the tolerance value: 0.001

Iteration      a         b         p         f(a)       f(b)      f(p)
    1          2         3         2.5       -3         13         3.125
    2          2         2.5       2.25      -3          3.125    -0.3594
    3          2.25      2.5       2.375     -0.3594     3.125     1.2715
    4          2.25      2.375     2.3125    -0.3594     1.2715    0.4290


2.4. Code of Bisection Method
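The report's own listing appears only as an image in the source, so here is a minimal Python sketch of the algorithm above (function and variable names are illustrative, not the report's original code):

```python
def bisection(f, a, b, tol=1e-3, max_iter=100):
    """Approximate a root of f on [a, b] by repeated interval halving."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = (a + b) / 2
    for _ in range(max_iter):
        p = (a + b) / 2
        if f(p) == 0 or (b - a) / 2 < tol:
            break
        if f(a) * f(p) < 0:   # root lies in [a, p]
            b = p
        else:                 # root lies in [p, b]
            a = p
    return p

# Example from the text: x^3 - 3x - 5 = 0 on [2, 3] with tolerance 0.001
root = bisection(lambda x: x**3 - 3*x - 5, 2, 3, tol=0.001)
print(root)  # approximately 2.279
```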

2.5. Output of Bisection Method


2.6. Graph of Bisection Method

3.Regula Falsi Method OR False Position Method OR


Linear Interpolation Method
The Regula Falsi Method is a bracketing method for finding a zero of an equation. This method is based on solving for the line through the points (a, f(a)) and (b, f(b)) and taking its x-intercept (p, 0) as the next approximation.

3.1. Conditions of Regula Falsi Method

Continuity of the Function: The function f(x) must be continuous on the interval [a, b].

Sign Change in the Interval: The values of the function at the endpoints, f(a) and f(b), must have
opposite signs: f(a) * f(b) < 0 This ensures that a root exists within the interval according to the
Intermediate Value Theorem.

Well-Defined Linear Interpolation: The method computes the root as the point where the straight line
connecting (a, f(a)) and (b, f(b)) crosses the x-axis. For this to work, the function should not be flat or
undefined within the interval.

Initial Bracketing: A valid initial interval [a, b] must be chosen where the root lies, and the sign change
condition is satisfied.

3.2. Restrictions of Regula Falsi Method

If f(a) * f(b) >= 0, the method cannot be applied, as there is no guarantee of a root in the interval.

While Regula Falsi often converges faster than the bisection method, it can sometimes suffer from slow
convergence if one endpoint remains fixed for many iterations.
3.3. Algorithm of Regula Falsi Method
Step 1: Find points a and b such that a < b and f(a) * f(b) < 0.

Step 2: Take the interval [a, b] and find the next value using the formula of p.

p has three equivalent formulas:

p = b - f(b)(b - a) / (f(b) - f(a))

p = a - f(a)(b - a) / (f(b) - f(a))

p = (a·f(b) - b·f(a)) / (f(b) - f(a))

Step 3: If f(p1) = 0, then p = p1, and we are done.

If f(p1) ≠ 0, then f(p1) has the same sign as either f(a) or f(b).

If f(p1) and f(a) have the same sign, p belongs to (p1, b): set a = p1 and keep b.

If f(p1) and f(a) have opposite signs, p belongs to (a, p1): keep a and set b = p1.

Step 4: Repeat Steps 2 and 3 until f(pi) = 0 or |f(pi)| < tolerance.

3.4. Example of Regula Falsi Method


Enter the function: x^3 - 3x - 5
Enter the lower bound of the interval [a]: 2
Enter the upper bound of the interval [b]: 3
Enter the tolerance value: 0.001

Iteration      a         b         p         f(a)       f(b)      f(p)
    1          2         3         2.1875    -3         13        -1.0950
    2          2.1875    3         2.2506    -1.0950    13        -0.3518
    3          2.2506    3         2.2704    -0.3518    13        -0.1084
    4          2.2704    3         2.2764    -0.1084    13        -0.0330


3.5. Code of Regula Falsi Method
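Since the source shows the listing only as an image, the algorithm above can be sketched in Python as follows (a minimal illustration, using the third formula for p; names are my own):

```python
def regula_falsi(f, a, b, tol=1e-3, max_iter=100):
    """Approximate a root of f on [a, b] via the false-position formula."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    p = a
    for _ in range(max_iter):
        # x-intercept of the chord through (a, f(a)) and (b, f(b))
        p = (a * f(b) - b * f(a)) / (f(b) - f(a))
        fp = f(p)
        if abs(fp) < tol:
            break
        if f(a) * fp < 0:   # root lies in [a, p]
            b = p
        else:               # root lies in [p, b]
            a = p
    return p

# Example from the text: x^3 - 3x - 5 = 0 on [2, 3]
root = regula_falsi(lambda x: x**3 - 3*x - 5, 2, 3, tol=0.001)
```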
3.6. Output of Regula Falsi Method

3.7. Graph of Regula Falsi Method


4. Newton Raphson’s Method OR Newton’s Method
This method is an algorithm for approximating the roots (zeros) of a real-valued function, starting from an initial guess close to a root and iterating with the following formula:

x1 = x0 - (f(x0) / f'(x0))

where x0 is the initial guess,

f(x0) is the value of the function at the initial guess, and

f'(x0) is the value of the first-order derivative of the function at the initial guess.

Note: f'(x0) must not be zero, otherwise the fraction in the formula is undefined (division by zero); in particular, f(x) must not be a constant function.
In general, the formula of this method is:

xn = xn-1 - (f(xn-1) / f'(xn-1)) where n belongs to {1, 2, 3, ...} OR xn+1 = xn - (f(xn) / f'(xn)) where n belongs to {0, 1, 2, 3, ...}

4.1. Conditions of Newton’s Method


Newton's method, also known as the Newton-Raphson method, relies on certain conditions to ensure it
works correctly and converges to a root. Here are the main conditions:
Initial Guess Close to the Root: The initial guess xo should be sufficiently close to the actual root for
faster and more accurate convergence. A poor choice of the initial guess may lead to divergence or
convergence to a different root.

Function Continuity and Differentiability: The function f(x) must be continuous and differentiable in
the interval of interest. This is because Newton's method uses the derivative f'(x) to approximate the root.

Non-Zero Derivative: The derivative f'(x) at each iteration must not be zero. If f'(x) = 0, the method will
fail because it involves dividing by f'(x).

Monotonicity Around the Root: f(x) should ideally be monotonic (steadily increasing or decreasing)
near the root to avoid oscillation or divergence.

Lipschitz Continuity of the Derivative (Optional): For better convergence, the derivative f'(x) should
satisfy the Lipschitz condition (i.e., its rate of change should not vary too rapidly).

Tolerance for Convergence: A small tolerance tol > 0 must be set to determine when the successive
approximations xn+1 and xn are close enough to stop the iterations.

4.2. Algorithm of Newton’s Method


Step 1: Find points a and b such that f(a) * f(b) < 0.
Step 2: Take the interval [a, b] and find the initial approximation x0 = (a + b) / 2.
Step 3: Compute f(x0) and f'(x0).
Step 4: Put the values from Steps 2 and 3 into the formula:
x1 = x0 - f(x0) / f'(x0)
Step 5: If f(x1) = 0, then x1 is the root; otherwise set x0 = x1.
Step 6: Repeat Steps 3 to 5 until f(xi) = 0 or |f(xi)| < tolerance.
4.3. Example of Newton’s Method
Enter the function: x^3 - 3x - 5
Enter the lower bound of the interval [a]: 2
Enter the upper bound of the interval [b]: 3
Enter the tolerance value: 0.001

Iteration      xn         f(xn)      f'(xn)
    1          2.5        3.125      15.75
    2          2.3016     0.2874     12.8920
    3          2.2793     0.0034     12.5855
4.4. Code of Newton’s Method
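The original listing is an image in the source; a minimal Python sketch of the iteration above might look like this (the derivative is supplied by hand, and the names are illustrative):

```python
def newton(f, f_prime, x0, tol=1e-3, max_iter=100):
    """Newton-Raphson iteration starting from initial guess x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        d = f_prime(x)
        if d == 0:
            raise ZeroDivisionError("f'(x) is zero; the method fails")
        x = x - fx / d
    return x

# Example from the text: x^3 - 3x - 5 = 0, x0 = (2 + 3) / 2 = 2.5
f = lambda x: x**3 - 3*x - 5
f_prime = lambda x: 3*x**2 - 3
root = newton(f, f_prime, x0=2.5, tol=0.001)
```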

4.5. Output of Newton’s Method


4.6. Graph of Newton’s Method

5.Secant Method OR Chord Method


The Secant method is an iterative method for finding a root of a function by successive approximation. It is similar to the Regula Falsi method, except that we do not need to check f(a) * f(b) < 0 again after every approximation.
General formula:

xn+1 = xn - f(xn)(xn - xn-1) / (f(xn) - f(xn-1))

5.1. Conditions of Secant Method


Initial Points: The method requires two initial guesses, x0 and x1, close to the actual root. These points should not give f(x0) = f(x1), as division by zero would occur.

Continuity: The function f(x) must be continuous in the interval of interest.

No Multiple Roots: Convergence is guaranteed only for simple roots (i.e., roots where f'(x) ≠ 0).

Avoidance of Horizontal Tangents: If the slope of the approximated tangent becomes very small, the
method might not converge effectively.

Tolerance: A small tolerance value tol > 0 should be set to determine convergence, ensuring the
algorithm stops when successive approximations are close enough.

5.2. Algorithm of Secant Method

The Secant Method works as follows:


Inputs: Two initial guesses, x0 and x1; the function f(x); tolerance tol; and a maximum number of iterations to avoid infinite loops.

Iterative Formula: Compute the next approximation using the secant formula:
xn+1 = xn - f(xn)(xn - xn-1) / (f(xn) - f(xn-1))
Here, xn is the current point, and xn-1 is the previous point.

Convergence Check: Stop the iteration if |xn+1 - xn| < tol or |f(xn+1)| < tol.

Iteration: Update xn-1 to xn and xn to xn+1, then repeat.

Output: Return xn+1 as the estimated root.

5.3. Example of Secant Method


Enter the function: x^3 - 3x - 5
Enter the first initial guess [x0]: 2
Enter the second initial guess [x1]: 3
Enter the tolerance value: 0.001

Iteration      x0         x1         f(x0)      f(x1)      x2
    1          2          3          -3         13         2.1875
    2          3          2.1875     13         -1.0950    2.2506
    3          2.1875     2.2506     -1.0950    -0.3518    2.2805
    4          2.2506     2.2805     -0.3518     0.0187    2.2790
5.4. Code of Secant Method
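As with the other methods, the listing survives only as an image; the algorithm above can be sketched in Python as follows (names are illustrative):

```python
def secant(f, x0, x1, tol=1e-3, max_iter=100):
    """Secant iteration from two initial guesses x0 and x1."""
    x2 = x1
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:
            raise ZeroDivisionError("f(x0) == f(x1); secant step undefined")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol or abs(f(x2)) < tol:
            break
        x0, x1 = x1, x2   # slide the pair of points forward
    return x2

# Example from the text: x^3 - 3x - 5 = 0 with guesses 2 and 3
root = secant(lambda x: x**3 - 3*x - 5, 2, 3, tol=0.001)
```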

5.5. Output of Secant Method


5.6. Graph of Secant Method

6.Fixed-Point Method
The Fixed-Point Method is an iterative numerical technique used to find a root of the equation f(x) = 0 by
rewriting it in the form x = g(x). The method repeatedly evaluates g(x) to converge to a solution.

6.1. Conditions of Fixed-Point Method


Rewriting the Equation: The function f(x) must be transformed into x = g(x), where g(x) represents a
single iteration step. The choice of g(x) is crucial. Improper forms may lead to divergence rather than
convergence.

Initial Guess: A good initial guess xo is needed for the method to converge efficiently.

Convergence Criteria: The method converges if |g'(x)| < 1 in the interval containing the root. Here, g'(x)
is the derivative of g(x). If |g'(x)| > 1, the method diverges.

Tolerance: A tolerance tol > 0 is set to determine when successive approximations are close enough to
stop the iterations.

6.2. Algorithm of Fixed-Point Method

Inputs: Function g(x), initial guess x0, and tolerance tol.

Iterative Formula: xn+1 = g(xn). This formula computes the next approximation from the current one.

Stopping Criteria: Stop the iteration when |xn+1 - xn| < tol or the maximum number of iterations is reached.

Iteration: Update xn to xn+1, then repeat the process.

Output: Return xn+1 as the approximate solution.


6.3. Example of Fixed-Point Method
Enter the function g(x): (x^2 + 3) / 5
Enter the initial guess [x0]: 1.5
Enter the tolerance value: 0.0001

Iteration      x0 (current guess)      g(x0)       x1 (next guess)
    1          1.5                     1.05        1.05
    2          1.05                    0.8205      0.8205
    3          0.8205                  0.7346      0.7346
    4          0.7346                  0.7079      0.7079

6.4. Code of Fixed-Point Method
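The source shows the listing only as an image; here is a minimal Python sketch of the iteration above (names are my own). The example g(x) = (x^2 + 3) / 5 corresponds to the equation x^2 - 5x + 3 = 0 rearranged as x = g(x):

```python
def fixed_point(g, x0, tol=1e-4, max_iter=100):
    """Iterate x = g(x) until successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Example from the text: g(x) = (x^2 + 3) / 5, starting guess 1.5
root = fixed_point(lambda x: (x**2 + 3) / 5, 1.5, tol=0.0001)
```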


6.5. Output of Fixed-Point Method

6.6. Graph of Fixed-Point Method

7. Cramer’s Rule
Cramer's Rule is a mathematical theorem used to solve systems of linear equations using determinants. It
provides an explicit formula for the solution of a system of 𝑛 equations with 𝑛 unknowns. This rule is
particularly useful when working with small systems.
7.1. Conditions for Cramer’s Rule

Square Matrix Requirement: The coefficient matrix of the system must be square(n*n)

Non-Zero Determinant: The determinant of the coefficient matrix (det(A)) must be non-zero (!=0). If the
determinant is zero, the system either has no solution or infinitely many solutions, and Cramer's Rule cannot be
applied.

Linear Independence: The equations must be linearly independent, meaning that no equation can be written as
a linear combination of the others.

7.2. Algorithm
Input the system of equations:
Read the coefficient matrix 𝐴 and the constant matrix 𝐵.
Ensure that 𝐴 is a square matrix (n*n).

Compute the determinant of 𝐴:


Find det(A).
If det(A) = 0, the system has either no unique solution or infinite solutions. Terminate the process.

Compute determinants for each variable:


For each unknown xi, create matrix Ai by replacing the i-th column of A with the column B.
Calculate det(Ai).

Find the values of the unknowns:


Compute each xi using the formula:
Xi = det(Ai) / det(A)
Store the results.

Output the solutions:


Display the values of x1, x2, x3, ..., xn.

7.3. Example

Consider the system of equations:

2x+3y=7
−3x+4y=15

Find the determinant of the coefficient matrix:

D  = |  2   3 |
     | -3   4 |  = (2×4) − (3×(−3)) = 8 + 9 = 17

Find the determinant for x (replace the first column with the constants):

Dx = |  7   3 |
     | 15   4 |  = (7×4) − (3×15) = 28 − 45 = −17

Find the determinant for y (replace the second column with the constants):

Dy = |  2   7 |
     | -3  15 |  = (2×15) − (7×(−3)) = 30 + 21 = 51
Solve for x and y:

x = Dx / D= −17 / 17 = −1
y = Dy / D = 51 / 17 = 3

So, the solution to the system is x=−1 , y=3.

7.4. Code

7.5. Output
7.6. Graph

8.LU Decomposition using Crout’s method


LU decomposition using Crout’s method is a numerical technique used to factorize a given matrix into a lower triangular matrix (L) and an upper triangular matrix (U). Unlike the standard (Doolittle) LU decomposition, which places the unit diagonal in L, Crout’s method places the unit diagonal in U and allows non-unit diagonal elements in L.

8.1. Conditions for applying Crout's method

Square Matrix: The matrix must be square (n*n).

Non-Singular Matrix: The determinant of the matrix should be non-zero det(A) != 0.

Pivoting (if needed): If the matrix has zero leading coefficients during the decomposition process,
partial pivoting may be required to avoid division by zero.

Applicability: Crout’s method works well for large matrices and is particularly useful in solving linear
systems efficiently.

8.2. Algorithm

Given a system A = LU.

Where,

L is a lower triangular matrix with non-unit diagonal elements.

U is an upper triangular matrix with unit diagonal elements.

Step-by-Step Process

1. Initialize matrices L and U. Set U with unit diagonal elements. Set all values of L and U to zero
initially.

2. Compute the elements of L and U using:

   L[i][j] = A[i][j] − Σ (k = 1 .. j−1) L[i][k]·U[k][j],   for i ≥ j
   U[j][i] = (A[j][i] − Σ (k = 1 .. j−1) L[j][k]·U[k][i]) / L[j][j],   for i > j
3. Continue iterating until all elements of L and U are computed.

4. Use forward substitution for solving Lz = b and backward substitution for solving Ux = z.

8.3. Example
Solve the following system using LU decomposition:

    [ 2  -1   1 ]
A = [ 1   3   2 ]
    [ 1   2   3 ]

    [ l11   0     0   ]          [ 1   u12   u13 ]
L = [ l21   l22   0   ]      U = [ 0   1     u23 ]
    [ l31   l32   l33 ]          [ 0   0     1   ]

Compute L and U (in Crout’s method the first column of L is copied directly from A):

l11 = a11 = 2
l21 = a21 = 1
l31 = a31 = 1
u12 = a12 / l11 = -1 / 2 = -0.5
u13 = a13 / l11 = 1 / 2 = 0.5
l22 = a22 - l21 · u12 = 3 - (1 × -0.5) = 3.5
l32 = a32 - l31 · u12 = 2 - (1 × -0.5) = 2.5
u23 = (a23 - l21 · u13) / l22 = (2 - (1 × 0.5)) / 3.5 = 0.4286
l33 = a33 - l31 · u13 - l32 · u23 = 3 - (1 × 0.5) - (2.5 × 0.4286) = 1.4286

Final L and U matrices:

    [ 2    0     0      ]          [ 1   -0.5   0.5    ]
L = [ 1    3.5   0      ]      U = [ 0    1     0.4286 ]
    [ 1    2.5   1.4286 ]          [ 0    0     1      ]
8.4. Code
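The listing in the source is an image; Crout's factorization as described above can be sketched in Python like this (no pivoting, so a zero pivot raises an error; names are illustrative):

```python
def crout(A):
    """Crout factorization A = L U, with the unit diagonal in U."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for j in range(n):
        U[j][j] = 1.0
        for i in range(j, n):      # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        if L[j][j] == 0:
            raise ZeroDivisionError("zero pivot: pivoting would be required")
        for i in range(j + 1, n):  # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

# Example matrix from the text
A = [[2, -1, 1], [1, 3, 2], [1, 2, 3]]
L, U = crout(A)
```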

8.5. Output
8.6. Graph

9.Cholesky method
The Cholesky decomposition is a numerical method used for solving linear systems by decomposing a
symmetric, positive definite matrix into the product of a lower triangular matrix and its transpose. It is
particularly useful in numerical analysis and optimization problems.

9.1. Conditions

Symmetric Matrix: The given matrix A must be symmetric, meaning A = AT.

Positive Definite: The matrix must be positive definite, meaning all its leading principal minors have
positive determinants.

Square Matrix: The matrix must be square (n*n).

Non-Singular Matrix: The determinant of the matrix should be non-zero det(A) != 0.

9.2. Algorithm

Input: A symmetric, positive definite matrix A of size (n*n).


Initialize a lower triangular matrix L with zeros.
Compute the elements of L.
Store the computed values in L, setting all upper triangular elements to zero.

Output: Matrix L and its transpose LT.


Solve the system Ax = b using: forward substitution for Lz = b,
then backward substitution for LTx = z.
9.3. Example

Solve the system of equations using Cholesky decomposition:

4x + 12y − 16z = 12
12x + 37y − 43z = 34
−16x − 43y + 98z = 50

    [  4    12   -16 ]        [ x ]        [ 12 ]
A = [  12   37   -43 ]    x = [ y ]    b = [ 34 ]
    [ -16  -43    98 ]        [ z ]        [ 50 ]

A = LLT

    [  2   0   0 ]         [ 2   6  -8 ]
L = [  6   1   0 ]    LT = [ 0   1   5 ]
    [ -8   5   3 ]         [ 0   0   3 ]

Forward substitution, Lz = b:

[  2   0   0 ] [ z1 ]   [ 12 ]
[  6   1   0 ] [ z2 ] = [ 34 ]
[ -8   5   3 ] [ z3 ]   [ 50 ]

z1 = 6, z2 = -2, z3 = 36

Backward substitution, LTx = z:

[ 2   6  -8 ] [ x ]   [  6 ]
[ 0   1   5 ] [ y ] = [ -2 ]
[ 0   0   3 ] [ z ]   [ 36 ]

x = 237, y = -62, z = 12
9.4. Code
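With the original listing present only as an image, the decomposition and the two substitution passes described above can be sketched in Python as follows (names are illustrative):

```python
import math

def cholesky(A):
    """Decompose a symmetric positive definite A so that A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

def solve_cholesky(A, b):
    """Solve A x = b via L z = b (forward), then L^T x = z (backward)."""
    L = cholesky(A)
    n = len(A)
    z = [0.0] * n
    for i in range(n):
        z[i] = (b[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (z[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

# Example from the text
A = [[4, 12, -16], [12, 37, -43], [-16, -43, 98]]
x = solve_cholesky(A, [12, 34, 50])
```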
9.5. Output

9.6. Graph
10. Gauss-Seidel method
The Gauss-Seidel method is an iterative technique used to solve systems of linear equations. It is an
improvement over the Jacobi method, as it uses updated values in calculations, making it converge faster
for certain systems.

10.1. Conditions

Square System: The coefficient matrix A must be square (n*n).

Diagonal Dominance or Positive Definiteness: A sufficient condition for convergence is diagonal


dominance. Alternatively, if A is symmetric and positive definite, Gauss-Seidel is guaranteed to converge.

Initial Guess: The method requires an initial guess for the variables.

Tolerance: The process continues until the approximate error is within a specified tolerance.

10.2. Algorithm

Input: Read the coefficient matrix A and the constant matrix B.


Define an initial guess for x (usually zeros).
Specify the tolerance and maximum iterations.

Initialize: Set iteration counter k = 0.


Set error to a large value.

Iteration Process: Use updated values immediately for calculations.

Check convergence: If error < tolerance, stop iteration.

Output: Return final values of x.

10.3. Example

Solve the following system using the Gauss-Seidel method with an initial guess of x0=y0=z0= 0 and iterate
until values converge within a tolerance of 0.001.

10x+y+2z=27
x+5y+3z=−15
2x+3y+10z=35

x = (27−y−2z) / 10
y = (−15−x−3z) / 5
z = (35−2x−3y) / 10

Iteration 1:

x1 = (27 − 0 − 2(0)) / 10 = 2.7
y1 = (−15 − 2.7 − 3(0)) / 5 = −3.54
z1 = (35 − 2(2.7) − 3(−3.54)) / 10 = 4.022

Iteration 2:

x2 = (27 − (−3.54) − 2(4.022)) / 10 = 2.2496
y2 = (−15 − 2.2496 − 3(4.022)) / 5 = −5.8631
z2 = (35 − 2(2.2496) − 3(−5.8631)) / 10 = 4.8090

Continuing the iterations until successive values change by less than the tolerance gives:

x ≈ 2.3520, y ≈ −6.4490, z ≈ 4.9643


10.4. Code
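The listing in the source is an image; the iteration process above, which overwrites each component with its freshest value as soon as it is computed, can be sketched in Python as follows (names are illustrative):

```python
def gauss_seidel(A, b, x0=None, tol=1e-3, max_iter=100):
    """Solve A x = b iteratively, using updated values immediately."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_old = x[:]
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # uses the freshest values of x
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x

# Example from the text (diagonally dominant, so convergence is guaranteed)
A = [[10, 1, 2], [1, 5, 3], [2, 3, 10]]
b = [27, -15, 35]
x = gauss_seidel(A, b, tol=0.001)
```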
10.5. Output

10.6. Graph

11. Jacobi method


The Jacobi method is an iterative numerical technique used to solve systems of linear equations. Unlike
the Gauss-Seidel method, Jacobi uses only values from the previous iteration, making it suitable for
parallel computing.
11.1. Conditions

Square System: The coefficient matrix A must be square(n*n).

Diagonal Dominance (Preferred for Convergence): The system should satisfy |aii| > Σ (j ≠ i) |aij| for every row i. If the matrix is not
diagonally dominant, the method may not converge.

Initial Guess: A starting vector x(0) is needed.

Tolerance for Error: The process continues until error is below a specified threshold.

11.2. Algorithm

Input: Read the coefficient matrix A and constant matrix B.


Define an initial guess for x (usually zeros).
Specify tolerance and maximum iterations.

Initialize: Set iteration counter k = 0


Set error to a large value.

Iteration Process: Use only values from the previous iteration.

Check convergence: Compute error.


If error < tolerance, stop iteration.

Output: Return final values of x.


11.3. Code
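The original listing is an image in the source; the distinguishing feature described above — every component of the new iterate is computed from the previous iterate only — can be sketched in Python like this (tested on the same system as the Gauss-Seidel section; names are illustrative):

```python
def jacobi(A, b, x0=None, tol=1e-3, max_iter=200):
    """Solve A x = b iteratively, using only the previous iteration's values."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        # every component is built from the old vector x, never from x_new
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant example system
A = [[10, 1, 2], [1, 5, 3], [2, 3, 10]]
b = [27, -15, 35]
x = jacobi(A, b, tol=0.001)
```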
11.4. Output

11.5. Graph
