James DeLaura    LaTeX Numerical Methods Formula Sheet

Inverse power method: row reduce (A − αI)u_k = v_{k−1}; v_k = u_k/||u_k||; λ = v_k^T A v_k
Singular values: find the roots λ of det(A^T A − λI) = 0; return σ = √λ
1 Algorithms
1.7 Nonlinear Systems & Optimization
Taylor series for systems: f(x + p) = f(x) + J(x)p + O(||p||^2)
Jacobian matrix: J(x) = [∂f_1/∂x_1 ... ∂f_1/∂x_n; ... ; ∂f_n/∂x_1 ... ∂f_n/∂x_n], i.e. row i holds the partials of f_i.
Newton's method for systems: solve J(x_k)p_k = −f(x_k), then x_{k+1} = x_k + p_k.

1.1 Nonlinear Equations in 1 Variable
Bisection method: given [a, b] s.t. f(a)·f(b) < 0, compute f((a+b)/2) and replace the endpoint x for which sgn(f(x)) == sgn(f((a+b)/2)) with (a+b)/2. Error: η ≤ (b − a)/2^k, where k is the number of iterations run. To guarantee |x* − x_n| ≤ atol, take n = ceil(log2((b − a)/(2·atol))).
FPI (fixed-point iteration): x_{k+1} = g(x_k) until a stopping criterion is met. Converges if |g'(x)| ≤ ρ < 1 ∀x ∈ (a, b). Rate = −log10(ρ) =⇒ about k = 1/rate iterations per correct decimal digit.
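The bisection recipe above can be sketched as follows (a minimal illustration; the function name, the tolerance handling, and the example f are my own, not from the sheet):

```python
import math

def bisect(f, a, b, atol=1e-10):
    """Bisection: halve [a, b], keeping the half where f changes sign."""
    assert f(a) * f(b) < 0, "need a sign change on [a, b]"
    # Iteration count from the error bound (b - a)/2^k <= atol:
    n = math.ceil(math.log2((b - a) / (2 * atol)))
    for _ in range(n):
        mid = (a + b) / 2
        # Replace the endpoint whose sign of f matches sgn(f(mid))
        if (f(mid) > 0) == (f(a) > 0):
            a = mid
        else:
            b = mid
    return (a + b) / 2

root = bisect(lambda x: x * x - 2, 1.0, 2.0)  # approximates sqrt(2)
```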
Newton's method: x_{k+1} = x_k − f(x_k)/f'(x_k)
Secant method: x_{k+1} = x_k − f(x_k)(x_k − x_{k−1})/(f(x_k) − f(x_{k−1}))
The above methods converge if f(x*) = 0 and f'(x*) ≠ 0.

1.8 Approximating Polynomials
Monomial basis: p_n(x) = c_0 + c_1 x + ... + c_n x^n; solve Ac = y for c, where A is the Vandermonde matrix of the x_i and y holds the data values.
Lagrange interpolation: p_n(x) = Σ y_i φ_i(x), where φ_i(x_j) = 1 if i = j, 0 otherwise.
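Newton's and the secant method can be sketched together (illustrative names; the stopping guard in the secant loop is my own addition to avoid dividing by a zero slope after convergence):

```python
def newton(f, df, x, iters=20):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    for _ in range(iters):
        x = x - f(x) / df(x)
    return x

def secant(f, x0, x1, iters=30):
    """Secant method: replace f' with the finite-difference slope."""
    for _ in range(iters):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:          # slope unavailable (or fully converged): stop
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

f = lambda x: x * x - 2
r1 = newton(f, lambda x: 2 * x, 1.5)  # approximates sqrt(2)
r2 = secant(f, 1.0, 2.0)
```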
Divided differences: p_n(x) = c_0 + c_1(x − x_0) + ... + c_n(x − x_0)(x − x_1)...(x − x_{n−1}); solve as in the monomial basis (here the system is lower triangular).
Interpolation error: η = f^{(n+1)}(ξ)/(n+1)! · Π_{i=0}^{n}(x − x_i); maximize over ξ for an upper bound.

1.2 Linear Algebra
proj_b a = projection of a onto b = (a·b)/(b·b) b
||x − y|| = ||z||, where z_i = x_i − y_i
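The divided-difference form of §1.8 can be sketched as below; the in-place table update and Horner-style evaluation are one standard way to get and use the c_i (names are illustrative):

```python
def newton_coeffs(xs, ys):
    """Divided-difference table: c_i = f[x_0, ..., x_i]."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Evaluate p_n(x) = c_0 + c_1(x - x_0) + ... by nested multiplication."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (x - xs[i]) + c[i]
    return p

xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]   # samples of f(x) = x^2 + 1
c = newton_coeffs(xs, ys)
val = newton_eval(xs, c, 1.5)               # p_2(1.5) = 1.5^2 + 1 = 3.25
```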
1.9 Numerical Differentiation
Use a Taylor expansion to build an approximation to the desired derivative using only the values given as computable. η is the sum of the terms you cannot cancel; the order is the lowest power of h that cannot be canceled.

1.3 Linear Systems: Direct Methods
O(backsub) = n^2
O(Gaussian Elim) = n^3
Elimination multiplier: L_{ij} = A_{ij}/A_{jj}, i.e. the x solving A_{ij} − x·A_{jj} = 0
If A symmetric pos def: A = GG^T with G = LD^{1/2}, D^{1/2} = diag{√d_ii} (from A = LDL^T)
GG^T x = b =⇒ solve Gc = b (forward substitution), then G^T x = c (back substitution)
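The direct solve above can be sketched end to end: factor A = GG^T, forward-substitute Gc = b, then back-substitute G^T x = c (the O(n^2) steps). All names and the example system are my own illustration:

```python
import math

def cholesky(A):
    """Factor a symmetric positive definite A as G G^T, G lower triangular."""
    n = len(A)
    G = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(G[i][k] * G[j][k] for k in range(j))
            if i == j:
                G[i][j] = math.sqrt(A[i][i] - s)
            else:
                G[i][j] = (A[i][j] - s) / G[j][j]
    return G

def solve_spd(A, b):
    """Solve A x = b via G c = b (forward sub), then G^T x = c (back sub)."""
    G = cholesky(A)
    n = len(b)
    c = [0.0] * n
    for i in range(n):                      # forward substitution, O(n^2)
        c[i] = (b[i] - sum(G[i][k] * c[k] for k in range(i))) / G[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution, O(n^2)
        x[i] = (c[i] - sum(G[k][i] * x[k] for k in range(i + 1, n))) / G[i][i]
    return x

x = solve_spd([[4.0, 2.0], [2.0, 3.0]], [2.0, 3.0])  # exact solution (0, 1)
```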
1.10 Numerical Integration
Quadrature weights: n = 0: (b − a) =⇒ midpoint rule; n = 1: (b − a)/2 =⇒ trapezoid rule; n = 2: (b − a)/6 =⇒ Simpson's rule.

Quadrature rules and errors:
  Midpoint:     (b − a) f((a + b)/2)                         error:  f''(ξ)/24 · (b − a)^3
  Trapezoidal:  (b − a)/2 · [f(a) + f(b)]                    error: −f''(ξ)/12 · (b − a)^3
  Simpson's:    (b − a)/6 · [f(a) + 4f((a + b)/2) + f(b)]    error: −f''''(ξ)/90 · ((b − a)/2)^5

1.4 Least Squares
r̂ = b − Ax̂
Normal equations: B = A^T A, y = A^T b; solve Bx̂ = y.
Error bound: ||x − x̂||/||x|| ≤ K(A) · ||r̂||/||b||
QR, Gram-Schmidt: u_1 = A_1; u_i = A_i − Σ_{j=1}^{i−1} proj_{u_j} A_i; v_i = u_i/||u_i||; R = Q^T A, with Q = [v_1 ... v_n]
QR, Householder: u_i = A_i + e_i ||A_i||; v_i = u_i/||u_i||; P_i = I − 2 v_i v_i^T; A' = P_i A; then Q^T b = P_n ··· P_1 b and solve Rx = Q^T b by back substitution.
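The three quadrature rules tabulated in §1.10 can be sketched directly (the example integrand is my own; it illustrates that Simpson's rule, whose error involves f'''', is exact for cubics):

```python
def midpoint(f, a, b):
    """Midpoint rule: (b - a) * f((a + b)/2)."""
    return (b - a) * f((a + b) / 2)

def trapezoid(f, a, b):
    """Trapezoid rule: (b - a)/2 * [f(a) + f(b)]."""
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f, a, b):
    """Simpson's rule: (b - a)/6 * [f(a) + 4 f((a + b)/2) + f(b)]."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

f = lambda x: x ** 3          # exact integral over [0, 1] is 1/4
m = midpoint(f, 0.0, 1.0)     # 0.125
t = trapezoid(f, 0.0, 1.0)    # 0.5
s = simpson(f, 0.0, 1.0)      # 0.25, exact for a cubic
```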
1.5 Linear Systems: Iterative Methods
Jacobi: M = diag(A); x_i^{k+1} = (1/a_ii) [b_i − Σ_{j≠i} a_ij x_j^k]
Gauss-Seidel: same update as Jacobi, but each x_i is used as soon as it is computed within the iteration.
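Both iterations can be sketched side by side; note the only difference is whether the update reads the old vector or the partially updated one (example system and iteration counts are my own illustration; it is diagonally dominant, so both converge):

```python
def jacobi(A, b, iters=50):
    """Jacobi: x_i^{k+1} = (b_i - sum_{j != i} a_ij x_j^k) / a_ii."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Every component reads the previous iterate x
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, iters=50):
    """Gauss-Seidel: same formula, but updated x_j values are used immediately."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j]
                               for j in range(n) if j != i)) / A[i][i]
    return x

# Diagonally dominant example; exact solution is (1/11, 7/11)
A, b = [[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]
xj, xg = jacobi(A, b), gauss_seidel(A, b)
```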
1.6 Eigenvalues & Singular Values
Computing λ for simple matrices: find the roots of det(A − λI) = 0.
Power method: u_k = A v_{k−1}; v_k = u_k/||u_k||; λ_1^{(k)} = v_k^T A v_k
Dominant eigenvalue (Rayleigh quotient): μ(v) = v^T A v / (v^T v)
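The power method above can be sketched as follows (names, the starting vector, and the example matrix are my own illustration; since v_k is normalized, the Rayleigh quotient reduces to v^T A v):

```python
import math

def power_method(A, iters=100):
    """Power method: u_k = A v_{k-1}; v_k = u_k/||u_k||; lambda = v^T A v."""
    n = len(A)
    v = [1.0 / math.sqrt(n)] * n          # normalized starting guess
    for _ in range(iters):
        u = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(ui * ui for ui in u))
        v = [ui / norm for ui in u]
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * Av[i] for i in range(n))  # Rayleigh quotient v^T A v

lam = power_method([[2.0, 1.0], [1.0, 2.0]])    # eigenvalues are 3 and 1
```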