Numerical Methods Cheat Sheet
1. Bisection Method
Used to find a root of a function f(x), i.e., a value of x where f(x) = 0.
Steps:
1. Choose an interval [a, b] where f(a) ⋅ f(b) < 0 (i.e., the function changes sign).
2. Compute the midpoint: x_n = (a + b) / 2.
3. Evaluate f(x_n):
- If f(x_n) = 0, stop; x_n is the root.
- If f(a) ⋅ f(x_n) < 0, set b = x_n; otherwise, set a = x_n.
4. Repeat until |b - a| is sufficiently small.
Example:
Solve cos(x) - x = 0 on the interval [0, 1] using the bisection method.
1. f(0) = cos(0) - 0 = 1, f(1) = cos(1) - 1 ≈ -0.4597.
2. Midpoint: x_1 = (0 + 1) / 2 = 0.5.
3. f(0.5) = cos(0.5) - 0.5 ≈ 0.8776 - 0.5 = 0.3776.
4. Since f(0) and f(0.5) are both positive, the root lies in [0.5, 1]; set a = 0.5 and continue bisecting until the interval is sufficiently small. The iterates converge to the root x ≈ 0.7391.
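A minimal Python sketch of this procedure, applied to cos(x) - x on [0, 1] (the helper name bisect, the tolerance, and the iteration cap are illustrative choices, not part of the original notes):

    import math

    def bisect(f, a, b, tol=1e-6, max_iter=100):
        # The method requires a sign change on [a, b].
        if f(a) * f(b) >= 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            m = (a + b) / 2
            if f(m) == 0 or (b - a) / 2 < tol:
                return m
            if f(a) * f(m) < 0:
                b = m  # root lies in [a, m]
            else:
                a = m  # root lies in [m, b]
        return (a + b) / 2

    root = bisect(lambda x: math.cos(x) - x, 0.0, 1.0)
    print(root)  # about 0.739085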
2. Jacobi Method
Used to iteratively solve systems of linear equations AX = B.
Steps:
1. Rewrite each equation in the form x_i = (b_i - Σ(j ≠ i) a_ij x_j) / a_ii.
2. Start with an initial guess X(0).
3. Update each variable using the previous iteration’s values.
4. Repeat until the solution converges.
Example:
Solve AX = B, where:
A = [[24, 2.4], [-1.5, -15]], B = [2, 5], X(0) = [5, 2].
After three iterations, the approximate value of x_1 is 0.1155 (with x_2 ≈ -0.35).
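A short Python sketch of the Jacobi update for this 2×2 system, run for the same three iterations (a sketch tied to this example, not a general solver):

    # System: 24*x1 + 2.4*x2 = 2 and -1.5*x1 - 15*x2 = 5
    A = [[24.0, 2.4], [-1.5, -15.0]]
    B = [2.0, 5.0]
    x = [5.0, 2.0]  # initial guess X(0)

    for _ in range(3):  # three Jacobi iterations, as in the example
        x = [
            (B[0] - A[0][1] * x[1]) / A[0][0],  # new x1 from the old x2
            (B[1] - A[1][0] * x[0]) / A[1][1],  # new x2 from the old x1
        ]

    print(x)  # approximately [0.1155, -0.35]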
3. Divided Differences (Newton’s Divided Difference)
Used to find interpolating polynomials for given data points.
Formula:
For a set of points (x_0, y_0), (x_1, y_1), …, (x_n, y_n), the divided differences are computed
as:
- First-order: f[x_i, x_j] = (f(x_j) - f(x_i)) / (x_j - x_i).
- Higher-order: f[x_i, x_j, x_k] = (f[x_j, x_k] - f[x_i, x_j]) / (x_k - x_i).
Example:
Given data:
x = [0.6, 1.1, 2.5, 3.1], y = [2.1, 5.3, 7.2, 10.0].
Evaluate f[0.6, 1.1, 2.5, 3.1]:
1. First-order differences: f[0.6, 1.1] = 6.4, f[1.1, 2.5] = 1.3571, f[2.5, 3.1] = 4.6667.
2. Second-order differences: f[0.6, 1.1, 2.5] = -2.6541, f[1.1, 2.5, 3.1] = 1.6548.
3. Third-order difference: f[0.6, 1.1, 2.5, 3.1] = 1.7236.
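A small Python sketch that builds the full divided-difference table for these points (the helper name divided_differences is an illustrative choice):

    def divided_differences(xs, ys):
        # table[k] holds all k-th order differences f[x_i, ..., x_{i+k}].
        n = len(xs)
        table = [list(ys)]
        for order in range(1, n):
            prev = table[-1]
            table.append([(prev[i + 1] - prev[i]) / (xs[i + order] - xs[i])
                          for i in range(n - order)])
        return table

    table = divided_differences([0.6, 1.1, 2.5, 3.1], [2.1, 5.3, 7.2, 10.0])
    print(table[1])  # first order:  [6.4, 1.3571..., 4.6666...]
    print(table[2])  # second order: [-2.6541..., 1.6548...]
    print(table[3])  # third order:  [1.7236...]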
4. Backward Newton Polynomial
Used to construct an interpolating polynomial through a given set of points, building terms from the last data point backward using divided differences.
Formula:
The backward Newton polynomial is:
P_n(x) = f[x_n] + f[x_n, x_{n-1}](x - x_n) + f[x_n, x_{n-1}, x_{n-2}](x - x_n)(x - x_{n-1}) + …
Example:
Given points x = [0.6, 1.1, 2.5, 3.1] and y = [2.1, 5.3, 7.2, 10.0], the backward Newton
polynomial is:
P(x) = 10 + 4.6667(x - 3.1) + 1.6548(x - 3.1)(x - 2.5) + 1.7236(x - 3.1)(x - 2.5)(x - 1.1).
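As a quick check, a Python sketch that evaluates this backward Newton polynomial at the original nodes (coefficients taken from the divided-difference example; tiny deviations come from rounding):

    def backward_newton(x):
        return (10.0
                + 4.6667 * (x - 3.1)
                + 1.6548 * (x - 3.1) * (x - 2.5)
                + 1.7236 * (x - 3.1) * (x - 2.5) * (x - 1.1))

    for xi in [0.6, 1.1, 2.5, 3.1]:
        print(xi, round(backward_newton(xi), 3))  # reproduces 2.1, 5.3, 7.2, 10.0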
5. Lagrange Polynomial
Used to find a polynomial that passes through a given set of points using Lagrange basis
polynomials.
Formula:
For points (x_0, y_0), (x_1, y_1), …, (x_n, y_n), the Lagrange polynomial is:
P(x) = Σ(y_i * L_i(x)), where L_i(x) is the Lagrange basis polynomial:
L_i(x) = Π[(x - x_j) / (x_i - x_j)] for j ≠ i.
Example:
Given points x = [1, 2, 3] and y = [2, 3, 6], the Lagrange polynomial is:
P(x) = 2(x - 2)(x - 3) / ((1 - 2)(1 - 3)) + 3(x - 1)(x - 3) / ((2 - 1)(2 - 3)) + 6(x - 1)(x - 2) / ((3 - 1)(3 - 2)),
which simplifies to P(x) = x² - 2x + 3.
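A compact Python sketch of Lagrange evaluation for these three points (lagrange_eval is an illustrative helper name):

    def lagrange_eval(xs, ys, x):
        # Sum of y_i * L_i(x) over all points.
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            basis = 1.0  # L_i(x)
            for j, xj in enumerate(xs):
                if j != i:
                    basis *= (x - xj) / (xi - xj)
            total += yi * basis
        return total

    xs, ys = [1, 2, 3], [2, 3, 6]
    print([lagrange_eval(xs, ys, x) for x in xs])  # [2.0, 3.0, 6.0]
    print(lagrange_eval(xs, ys, 1.5))              # 2.25, matching x² - 2x + 3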
6. Polynomial Interpolation
Polynomial interpolation is used to find a polynomial that passes through a given set of data
points.
Formula:
The interpolating polynomial of degree n for points (x_0, y_0), (x_1, y_1), …, (x_n, y_n) is
given by:
P(x) = Σ(y_i * Π[(x - x_j) / (x_i - x_j)] for j ≠ i).
This can be computed using either the Lagrange or Newton method.
Example:
Given data points x = [0.6, 1.1, 2.5, 3.1] and y = [2.1, 5.3, 7.2, 10.0], polynomial interpolation
(written here in backward Newton form) gives:
P(x) = 10 + 4.6667(x - 3.1) + 1.6548(x - 3.1)(x - 2.5) + 1.7236(x - 3.1)(x - 2.5)(x - 1.1).
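A Python sketch that ties the pieces together: it computes the Newton coefficients in place and evaluates the polynomial by nested multiplication (this uses the forward Newton form, which is the same polynomial as the backward form above, up to rounding; newton_interpolate is an illustrative name):

    def newton_interpolate(xs, ys):
        # Coefficients of the forward Newton form: f[x_0], f[x_0, x_1], ...
        n = len(xs)
        coef = list(ys)
        for k in range(1, n):
            for i in range(n - 1, k - 1, -1):
                coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])

        def p(x):
            # Nested (Horner-like) evaluation of the Newton form.
            result = coef[-1]
            for i in range(n - 2, -1, -1):
                result = result * (x - xs[i]) + coef[i]
            return result

        return p

    p = newton_interpolate([0.6, 1.1, 2.5, 3.1], [2.1, 5.3, 7.2, 10.0])
    print([round(p(x), 3) for x in [0.6, 1.1, 2.5, 3.1]])  # [2.1, 5.3, 7.2, 10.0]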