This document provides an overview of various numerical methods that are important for graduate-level studies, including solving systems of linear and nonlinear equations, numerical differentiation and integration, curve fitting, and solving ordinary differential equations. Specific techniques covered include Gaussian elimination, LU decomposition, Newton's method, Simpson's rule, Euler's method, and more. MATLAB is also discussed as a tool for implementing these numerical methods.


NUMERICAL METHODS FOR GRADUATE

SCHOOL

JP Bersamina

October 11,2018

jbersamina
Contents

1 Introduction
  1.1 Vectors and Matrices
    1.1.1 Cramer's rule
    1.1.2 Solution using Excel
  1.2 Ordinary Differential Equation
  1.3 Functions of two or more independent variables
  1.4 Taylor series

2 Roots of Equations
  2.1 Bisection Method
  2.2 Regula Falsi Method
  2.3 Newton's Method
  2.4 Secant Method
  2.5 MOSS (Method of Successive Substitution)
  2.6 Use of MATLAB
  2.7 Equations with Multiple Roots
  2.8 System of Nonlinear Equations
    2.8.1 Algorithm for Newton's method for solving a system of nonlinear equations

3 Solving Systems of Linear Equations
  3.1 Gauss Elimination Method
  3.2 Gauss Elimination with Pivoting
  3.3 Gauss-Jordan Elimination Method
  3.4 LU Decomposition Method
  3.5 Inverse of a Matrix
    3.5.1 Using Excel MINVERSE Function
    3.5.2 Gauss-Jordan Inverse
  3.6 Jacobi and Gauss-Seidel Methods
  3.7 Tridiagonal Systems
  3.8 Use of MATLAB

4 Curve Fitting
  4.1 Curve Fitting with linear equation
  4.2 Curve Fitting with Nonlinear equation
  4.3 Curve Fitting with Quadratic and Higher Order Polynomial
  4.4 Interpolation using Single Polynomial
  4.5 Lagrange Polynomials
  4.6 Newton's Polynomial
  4.7 Spline Interpolation

5 Numerical Differentiation
  5.1 Finite Difference approximation of the derivative
  5.2 Finite difference formulas using Taylor series expansion
  5.3 Summary of finite difference formulas for numerical differentiation
  5.4 Differentiation formulas using Lagrange polynomials
  5.5 Differentiation using curve fitting
  5.6 Use of MATLAB
  5.7 Richardson's extrapolation
  5.8 Numerical partial differentiation

6 Numerical Integration
  6.1 Rectangle and midpoint methods
  6.2 Trapezoidal method
  6.3 Simpson's methods
  6.4 Gauss quadrature
  6.5 Evaluation of multiple integrals
  6.6 Use of MATLAB
  6.7 Richardson's extrapolation
  6.8 Romberg integration

7 Initial Value Problem
  7.1 Euler's methods
  7.2 Modified Euler method
  7.3 Midpoint method
  7.4 Runge-Kutta methods (second, third, fourth order)
    7.4.1 second order
    7.4.2 third order
    7.4.3 fourth order
  7.5 Systems of first-order ODEs
  7.6 Solving a higher order initial value ODE
  7.7 Use of MATLAB

8 Boundary Value Problem
  8.1 Shooting method
  8.2 The finite difference method
  8.3 Use of MATLAB

Chapter 1

Introduction

1.1 Vectors and Matrices

1.1.1 Cramer's rule

Figure 1.1:

1.1.2 Solution using Excel

Figure 1.2:

jbersamina

1.2 Ordinary Differential Equation

Figure 1.3:

Figure 1.4:

1.3 Functions of two or more independent variables

1.4 Taylor series

Figure 1.5:

Figure 1.6:


Figure 1.7:

Chapter 2

Roots of Equations

Figure 2.1:

Figure 2.2:

2.1 Bisection Method

x_NS = (a + b) / 2        (2.1)

1. Choose the first interval by finding points a and b such that a solution exists between them. This means that f(a) and f(b) have different signs, so that f(a)f(b) < 0. The points can be determined by examining a plot of f(x) versus x.
2. Calculate the first estimate of the numerical solution, x_NS1, with Eq. (2.1).
3. Determine whether the true solution is between a and x_NS1 or between x_NS1 and b by checking the sign of the product f(a)f(x_NS1): if f(a)f(x_NS1) < 0, the true solution is between a and x_NS1; if f(a)f(x_NS1) > 0, the true solution is between x_NS1 and b.
4. Select the subinterval that contains the true solution (a to x_NS1, or x_NS1 to b) as the new interval [a, b], and go back to step 2.

Steps 2 through 4 are repeated until a specified tolerance or error bound is attained.
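The four steps above can be sketched in Python (the document's own worked examples use MATLAB and Excel; the function names and the test equation below are illustrative):

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        x_ns = (a + b) / 2            # step 2: midpoint estimate, Eq. (2.1)
        if f(a) * f(x_ns) < 0:        # step 3: true solution lies in [a, x_ns]
            b = x_ns
        else:                         # otherwise it lies in [x_ns, b]
            a = x_ns
        if (b - a) / 2 < tol:         # stop once the interval is small enough
            break
    return (a + b) / 2

# Illustrative use: root of x^2 - 2 on [1, 2], i.e. sqrt(2)
root = bisection(lambda x: x**2 - 2, 1.0, 2.0)
```

Each pass halves the bracketing interval, so the error bound shrinks by a factor of 2 per iteration regardless of the shape of f.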

Figure 2.3:


Figure 2.4:

Figure 2.5:

Figure 2.6:

2.2 Regula Falsi Method


x_NS = (a f(b) - b f(a)) / (f(b) - f(a))        (2.2)
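Eq. (2.2) replaces the midpoint of bisection with the x-intercept of the chord through (a, f(a)) and (b, f(b)). A minimal Python sketch, with an illustrative test equation:

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    """False position: keep a bracketing interval, but cut it at the
    chord's x-intercept, Eq. (2.2), instead of the midpoint."""
    fa, fb = f(a), f(b)
    x_ns = a
    for _ in range(max_iter):
        x_ns = (a * fb - b * fa) / (fb - fa)   # Eq. (2.2)
        fx = f(x_ns)
        if abs(fx) < tol:
            break
        if fa * fx < 0:          # true solution between a and x_ns
            b, fb = x_ns, fx
        else:                    # true solution between x_ns and b
            a, fa = x_ns, fx
    return x_ns

# Illustrative use: root of x^2 - 2 on [1, 2]
root_rf = regula_falsi(lambda x: x**2 - 2, 1.0, 2.0)
```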

Figure 2.7:

Figure 2.8:

Figure 2.9:


Figure 2.10:

2.3 Newton's Method

x_{i+1} = x_i - f(x_i) / f'(x_i)        (2.3)
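Eq. (2.3) can be sketched in Python as follows (the derivative is supplied explicitly; the test equation is illustrative):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton's method: follow the tangent line at x_i to its x-intercept."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)   # Eq. (2.3)
        if abs(x_new - x) < tol:       # successive estimates have converged
            return x_new
        x = x_new
    return x

# Illustrative use: root of x^2 - 2 starting from x0 = 1
root_newton = newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0)
```

Convergence is quadratic near a simple root, but the method needs f'(x) and a starting point close enough to the solution.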

Figure 2.11:

Figure 2.12:

Figure 2.13:

Figure 2.14:

2.4 Secant Method

x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))        (2.4)
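The secant method, Eq. (2.4), replaces the derivative in Newton's method with a finite difference through the two most recent points. A Python sketch (the test equation is illustrative):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant method; x0 and x1 are the starting points x_{i-1} and x_i."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)   # Eq. (2.4)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2                        # slide the two-point window
    return x1

# Illustrative use: root of x^2 - 2 starting from x0 = 1, x1 = 2
root_secant = secant(lambda x: x**2 - 2, 1.0, 2.0)
```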


Figure 2.15:

Figure 2.16:

Figure 2.17:

2.5 MOSS (Method of Successive Substitution)

Figure 2.18:

2.6 Use of MATLAB

MATLAB has two built-in functions for solving equations with one variable. The fzero command can be used to find a root of any equation, and the roots command can be used to find the roots of a polynomial.


2.7 Equations with Multiple Roots


Many nonlinear equations of the form f(x) = 0 have multiple solutions, or roots. As an example, consider the following equation: f(x) = cos(x)cosh(x) + 1.

2.8 System of Nonlinear Equations


A system of nonlinear equations consists of two or more nonlinear equations that have to be solved simultaneously. For example, Fig. 3-22 shows a catenary (hanging cable) curve given by its equation. The point of intersection between the two curves is given by the solution of the following system of nonlinear equations:

2.8.1 Algorithm for Newton's method for solving a system of nonlinear equations

1. Estimate (guess) an initial solution x_{1,i}, x_{2,i}, ..., x_{n,i}.
2. Calculate the Jacobian and the values of the f's on the right-hand side of Eq. (3.55).
3. Solve Eq. (3.55) for Δx_1, Δx_2, ..., Δx_n.
4. Calculate a new estimate of the solution, x_{1,i+1}, x_{2,i+1}, ..., x_{n,i+1}, using Eq. (3.56).
5. Calculate the error. If the new solution is not accurate enough, assign the values of x_{1,i+1}, x_{2,i+1}, ..., x_{n,i+1} to x_{1,i}, x_{2,i}, ..., x_{n,i}, and start a new iteration beginning with Step 2.
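For two equations in two unknowns, the five steps above can be sketched in Python, solving the linear correction system with Cramer's rule (Section 1.1.1). The example system and starting point below are illustrative, not taken from the text:

```python
def newton_system_2x2(f1, f2, jac, x, y, tol=1e-10, max_iter=50):
    """Newton's method for two nonlinear equations f1 = 0, f2 = 0.
    jac(x, y) returns the 2x2 Jacobian [[df1/dx, df1/dy], [df2/dx, df2/dy]]."""
    for _ in range(max_iter):
        (a, b), (c, d) = jac(x, y)            # step 2: Jacobian ...
        r1, r2 = -f1(x, y), -f2(x, y)         # ... and right-hand-side values
        det = a * d - b * c
        dx = (r1 * d - b * r2) / det          # step 3: solve J [dx, dy] = -f
        dy = (a * r2 - r1 * c) / det          # (Cramer's rule)
        x, y = x + dx, y + dy                 # step 4: new estimate
        if abs(dx) < tol and abs(dy) < tol:   # step 5: error check
            break
    return x, y

# Illustrative system: x^2 + y^2 = 5 and x*y = 2 (one root is x = 2, y = 1)
xr, yr = newton_system_2x2(
    lambda x, y: x**2 + y**2 - 5,
    lambda x, y: x * y - 2,
    lambda x, y: [[2 * x, 2 * y], [y, x]],
    1.5, 0.5,
)
```

As in the scalar case, convergence depends on the initial guess; a different starting point can land on a different root of the system.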

Figure 2.19:

Figure 2.22:

Figure 2.20:

Figure 2.21:


Figure 2.23:


Figure 2.25:

Figure 2.24:


Figure 2.26:
Chapter 3

Solving Systems of Linear Equations

Systems of linear equations that have to be solved simultaneously arise in problems that include several
(possibly many) variables that are dependent on each other. Such problems occur not only in engineering
and science, which are the focus of this book, but in virtually any discipline (business, statistics, eco-
nomics, etc.). A system of two (or three) equations with two (or three) unknowns can be solved manually
by substitution or other mathematical methods (e.g., Cramer’s rule, Section 2.4.6). Solving a system in
this way is practically impossible as the number of equations (and unknowns) increases beyond three.

3.1 Gauss Elimination Method

The Gauss elimination method is a procedure for solving a system of linear equations. In this procedure,
a system of equations that is given in a general form is manipulated to be in upper triangular form,
which is then solved by using back substitution (see Section 4.1.1). For a set of four equations with four
unknowns the general form is given by

Figure 3.1:
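The two parts of the procedure, forward elimination followed by back substitution, can be sketched in Python (no pivoting here, which the next section addresses; the 2x2 example system is illustrative):

```python
def gauss_solve(A, b):
    """Solve [A][x] = [b] by Gauss elimination.
    A is a list of row lists; A and b are modified in place."""
    n = len(b)
    for k in range(n - 1):                 # eliminate below pivot row k
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for row i
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back substitution, last row first
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Illustrative system: 4x + 3y = 10, 2x - y = 0  (solution x = 1, y = 2)
sol = gauss_solve([[4.0, 3.0], [2.0, -1.0]], [10.0, 0.0])
```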

Figure 3.2:


Figure 3.3:

Figure 3.4:

3.2 Gauss Elimination with Pivoting


In the Gauss elimination procedure, the pivot equation is divided by the pivot coefficient. This, however,
cannot be done if the pivot coefficient is zero. For example, for the following system of three equations:

3.3 Gauss-Jordan Elimination Method


The Gauss-Jordan elimination method is a procedure for solving a system of linear equations, [a][x] = [b].
In this procedure, a system of equations that is given in a general form is manipulated into an equivalent
system of equations in diagonal form (see Section 4.1.1) with normalized elements along the diagonal.
This means that when the diagonal form of the matrix of the coefficients, [a] , is reduced to the identity
matrix, the new vector [b’] is the solution. The starting point of the procedure is a system of equations
given in a general form (the illustration that follows is for a system of four equations):

Figure 3.5:


Figure 3.6:

Figure 3.7:

Figure 3.8:

Figure 3.9:

3.4 LU Decomposition Method

The Gauss elimination method consists of two parts. The first part is the elimination procedure in which
a system of linear equations that is given in a general form, [a][x] = [b], is transformed into an equivalent
system of equations [a’][x] = [b’] in which the matrix of coefficients [a’] is upper triangular. In the second
part, the equivalent system is solved by using back substitution. The elimination procedure requires many
mathematical operations and significantly more computing time than the back substitution calculations.
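Because elimination is the expensive part, it pays to factor [a] once as [L][U] and reuse the factors for every new right-hand side. A Python sketch of the Doolittle factorization (1s on the diagonal of L; the 2x2 matrix is illustrative, and pivoting is omitted):

```python
def lu_decompose(A):
    """Doolittle LU factorization without pivoting: A = L U."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

# Illustrative matrix: [[4, 3], [6, 3]] factors as L = [[1, 0], [1.5, 1]],
# U = [[4, 3], [0, -1.5]]
L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
```

Once L and U are known, [a][x] = [b] is solved by forward substitution with L and then back substitution with U, both of which are cheap.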


Figure 3.10:

Figure 3.11:

Figure 3.12:

Figure 3.13:


3.5 Inverse of a Matrix

The inverse of a square matrix [a] is the matrix [a]^(-1) such that the product of the two matrices gives the identity matrix [I].

3.5.1 Using Excel MINVERSE Function

Figure 3.14:

3.5.2 Gauss-Jordan Inverse

Using the Gauss-Jordan elimination procedure discussed in Section 3.3.
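The idea is to augment [a] with the identity matrix and reduce [a | I] to [I | a^(-1)]. A Python sketch (no pivoting, for clarity; the 2x2 example matrix is illustrative):

```python
def gauss_jordan_inverse(A):
    """Invert A by reducing the augmented matrix [A | I] to [I | A^-1]."""
    n = len(A)
    # augment each row of A with the corresponding row of the identity matrix
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for i in range(n):
        p = M[i][i]
        M[i] = [v / p for v in M[i]]              # normalize the pivot row
        for r in range(n):                        # clear column i elsewhere
            if r != i:
                factor = M[r][i]
                M[r] = [v - factor * w for v, w in zip(M[r], M[i])]
    return [row[n:] for row in M]                 # right half is the inverse

# Illustrative matrix: the inverse of [[4, 7], [2, 6]] is
# [[0.6, -0.7], [-0.2, 0.4]]
inv = gauss_jordan_inverse([[4.0, 7.0], [2.0, 6.0]])
```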

Figure 3.15:

Figure 3.16:

Figure 3.17:


Figure 3.18:

Figure 3.19:

3.6 Jacobi and Gauss-Seidel Methods

A system of linear equations can also be solved by using an iterative approach. The process, in principle,
is the same as in the fixed-point iteration method used for solving a single nonlinear equation (see Section
3.7). In an iterative process for solving a system of equations, the equations are written in an explicit
form in which each unknown is written in terms of the other unknowns. The explicit form for a system of four equations is illustrated in Fig. 4-21.

Figure 3.20:
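The Gauss-Seidel variant of this iteration uses each updated unknown as soon as it is available. A Python sketch (the diagonally dominant 2x2 test system is illustrative):

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Iterate the explicit form x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii,
    substituting new values immediately (Gauss-Seidel)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(new - x[i]))
            x[i] = new                      # use the update right away
        if max_change < tol:
            break
    return x

# Illustrative diagonally dominant system:
# 4x + y = 6, x + 3y = 7  (solution x = 1, y = 2)
x_gs = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0])
```

Convergence is guaranteed when the coefficient matrix is diagonally dominant; otherwise the iteration may diverge.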


Figure 3.21:

3.7 Tridiagonal Systems

Implementation of the Thomas algorithm for tridiagonal systems.
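The Thomas algorithm is Gauss elimination specialized to a tridiagonal matrix: one forward sweep eliminates the sub-diagonal, then a back substitution recovers the unknowns, all in O(n) operations. A Python sketch (the 3x3 test system is illustrative):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system. a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), d = right-hand side."""
    n = len(b)
    bp, cp, dp = b[:], c[:], d[:]
    for i in range(1, n):                  # forward sweep: eliminate a[i]
        m = a[i] / bp[i - 1]
        bp[i] -= m * cp[i - 1]
        dp[i] -= m * dp[i - 1]
    x = [0.0] * n
    x[-1] = dp[-1] / bp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = (dp[i] - cp[i] * x[i + 1]) / bp[i]
    return x

# Illustrative system: 2x1 - x2 = 1, -x1 + 2x2 - x3 = 0, -x2 + 2x3 = 1
# (solution x1 = x2 = x3 = 1)
xs = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0],
            [1.0, 0.0, 1.0])
```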

Figure 3.22:

Figure 3.23:

Figure 3.24:


Figure 3.25:

Figure 3.26:

3.8 Use of MATLAB


MATLAB has mathematical operations and built-in functions that can be used for solving a system of linear equations and for carrying out the other matrix operations described in this chapter. A system [a][x] = [b] is solved with left division, x = a\b.

Chapter 4

Curve Fitting

This is the second part of the Hibbeler book.

4.1 Curve Fitting with linear equation


4.2 Curve Fitting with Nonlinear equation
4.3 Curve Fitting with Quadratic and Higher Order Polynomial
4.4 Interpolation using Single Polynomial
4.5 Lagrange Polynomials
4.6 Newton’s Polynomial
4.7 Spline Interpolation

Chapter 5

Numerical Differentiation

It is used later in boundary value problems, where the differential equation contains a second-order derivative.

5.1 Finite Difference approximation of the derivative
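As a minimal illustration of the idea (in Python; the sample function, point, and step size are chosen for illustration, not taken from the figures):

```python
def forward_diff(f, x, h):
    """Two-point forward difference, error O(h)."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    """Two-point central difference, error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Illustrative check against f(x) = x^3, for which f'(2) = 12
fwd = forward_diff(lambda x: x**3, 2.0, 1e-5)
cen = central_diff(lambda x: x**3, 2.0, 1e-5)
```

With the same step size, the central difference is markedly more accurate, which is the point developed with Taylor series in the next section.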

Figure 5.1:

Figure 5.2:


Figure 5.3:

5.2 Finite difference formulas using Taylor series expansion

Figure 5.4:

Figure 5.5:


Figure 5.6:

5.3 Summary of finite difference formulas for numerical differentiation

Figure 5.7:

5.4 Differentiation formulas using Lagrange polynomials

Figure 5.8:

Figure 5.9:


5.5 Differentiation using curve fitting

5.6 Use of MATLAB

5.7 Richardson’s extrapolation

5.8 Numerical partial differentiation

Figure 5.10:

Figure 5.11:

Figure 5.12:

Figure 5.13:


Figure 5.14:

Figure 5.15:

Chapter 6

Numerical Integration

It is a subject for senior high school students in La Salle.

6.1 Rectangle and midpoint methods

Figure 6.1:

Figure 6.2:

6.2 Trapezoidal method

Figure 6.3:

Figure 6.4:
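The composite trapezoidal rule sums the areas of trapezoids over n equal subintervals of width h = (b - a)/n. A Python sketch (the integrand and limits are illustrative):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))        # end points get half weight
    for i in range(1, n):          # interior points get full weight
        s += f(a + i * h)
    return h * s

# Illustrative check: integral of x^2 from 0 to 3 is exactly 9
approx_trap = trapezoid(lambda x: x**2, 0.0, 3.0, 1000)
```

The error of the composite rule decreases as O(h^2), so doubling n roughly quarters the error.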

6.3 Simpson’s methods

Figure 6.5:


Figure 6.6:

Figure 6.7:

Figure 6.8:
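Simpson's 1/3 rule fits a parabola through each pair of subintervals, which requires an even number of subintervals and weights the points 1, 4, 2, 4, ..., 2, 4, 1. A Python sketch (the integrand is illustrative):

```python
def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)   # alternate weights 4, 2
    return s * h / 3

# Simpson's 1/3 rule is exact for cubics: integral of x^3 from 0 to 2 is 4
approx_simp = simpson(lambda x: x**3, 0.0, 2.0, 4)
```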

6.4 Gauss quadrature

Figure 6.9:

Figure 6.10:

Figure 6.11:


Figure 6.12:

Figure 6.13:

Figure 6.14:

Figure 6.15:

6.5 Evaluation of multiple integrals


6.6 Use of MATLAB
6.7 Richardson’s extrapolation
6.8 Romberg integration

Chapter 7

Initial Value Problem

Figure 7.1:

7.1 Euler’s methods

Figure 7.2:

Figure 7.3:

Figure 7.4:
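Explicit Euler advances the solution with the slope at the start of each step, y_{i+1} = y_i + h f(x_i, y_i). A Python sketch (the test problem dy/dx = y, y(0) = 1, whose exact solution at x = 1 is e, is illustrative):

```python
def euler(f, x0, y0, h, n):
    """Explicit Euler method: n steps of size h from (x0, y0)."""
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)   # slope evaluated at the start of the step
        x = x + h
    return y

# Illustrative test problem: dy/dx = y, y(0) = 1; y(1) should approach e
approx_e = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
```

The global error is only O(h), which motivates the modified Euler, midpoint, and Runge-Kutta methods in the sections that follow.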

7.2 Modified Euler method

7.3 Midpoint method

7.4 Runge-Kutta methods (second, third, fourth order)

7.4.1 second order

Figure 7.5:


Figure 7.6:

Figure 7.7:

Figure 7.8:

Figure 7.9:

7.4.2 third order

Figure 7.10:

Figure 7.11:

Figure 7.12:


7.4.3 fourth order

Figure 7.13:

Figure 7.14:

Figure 7.15:

Figure 7.16:
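The classical fourth-order Runge-Kutta method combines four slope evaluations per step. A Python sketch (the test problem is the same illustrative dy/dx = y, y(0) = 1 as before):

```python
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x, y)                         # slope at the start
    k2 = f(x + h / 2, y + h * k1 / 2)    # slope at the midpoint, using k1
    k3 = f(x + h / 2, y + h * k2 / 2)    # slope at the midpoint, using k2
    k4 = f(x + h, y + h * k3)            # slope at the end
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y = rk4_step(f, x, y, h)
        x += h
    return y

# Illustrative test problem: dy/dx = y, y(0) = 1, so y(1) = e
approx_e_rk4 = rk4(lambda x, y: y, 0.0, 1.0, 0.1, 10)
```

With only 10 steps, RK4 is already far more accurate than Euler with 1000 steps, reflecting its O(h^4) global error.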

7.5 Systems of first-order ODEs


7.6 Solving a higher order initial value ODE
7.7 Use of MATLAB

Chapter 8

Boundary Value Problem

Figure 8.1:

8.1 Shooting method

Figure 8.2:

Figure 8.3:


8.2 The finite difference method

Figure 8.4:

Figure 8.5:

Figure 8.6:

Figure 8.7:

Figure 8.8:

Figure 8.9:

Figure 8.10:


Figure 8.11:

Figure 8.12:

8.3 Use of MATLAB
