Final Cheat Sheet

1) The document defines key concepts in matrix algebra including systems of equations, spanning sets, linear independence, bases, orthogonal vectors, and the Gram-Schmidt process. 2) It provides examples of how to determine if a set of vectors spans a vector space, forms a basis, or is linearly independent. 3) The document also summarizes properties and operations related to matrices including transpose, dot products, cross products, and the normal equation.

Uploaded by

Jorge Berumen
Copyright
© Attribution Non-Commercial (BY-NC)

☺ Matrix Algebra – By Jorge ☺ Page 1

Consider the system of equations:

    x1 − 2x2 + x3 = 2
    2x1 + x2 − x3 = 1
    −3x1 + x2 − 2x3 = −5

    Coefficient Matrix        Augmented Matrix
    [  1  -2   1 ]            [  1  -2   1 |  2 ]
    [  2   1  -1 ]            [  2   1  -1 |  1 ]
    [ -3   1  -2 ]            [ -3   1  -2 | -5 ]

Matrix Multiplication
    (m×n)·(n×z) = m×z
Compatible? The inner dimensions must match. The outer dimensions give the new dimensions of the product.

Spanning Set
Definition: the set of vectors that represents all linear combinations:
    Span{A1, A2, ⋯, An} = {c1·A1 + c2·A2 + ⋯ + cn·An}
A span is all the linear combinations of the vectors A1 to An.
Tip: A span can equal ℝⁿ as long as it can represent every single member of ℝⁿ; otherwise it is a proper subspace.
Note: A span is also a subspace.
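As a quick numeric check of the system above, here is a minimal Gaussian-elimination sketch in plain Python (no libraries); the helper name `solve` is ours, not from the cheat sheet.

```python
def solve(aug):
    """Solve a square system given as an augmented matrix [A | b]."""
    n = len(aug)
    m = [list(map(float, row)) for row in aug]   # work on a copy
    for col in range(n):
        # partial pivoting: pick the row with the largest entry in this column
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    # back substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (m[r][n] - s) / m[r][r]
    return x

# augmented matrix of the example system
augmented = [[ 1, -2,  1,  2],
             [ 2,  1, -1,  1],
             [-3,  1, -2, -5]]
solution = solve(augmented)   # x1 = 1, x2 = 0, x3 = 1
```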
Subspace
Definition: a subspace is a subset of ℝⁿ. It must meet 3 criteria, while maintaining its defining condition, in order to be a subspace:
    (1) It must contain the 0 vector.
    (2) It must be closed under addition: is x + y in the subspace?
    (3) It must be closed under scalar multiplication: is c·x in the subspace (c is a constant)?
Note: A subspace is usually written in the notation below, but can have other forms, like a spanning set:
    W = { x : x = [x1, x2, ⋯, xn]^T ; <condition> }

Non-Singular Matrix
A is non-singular if the only solution to Ax = 0 is x = 0:
    [ A1 A2 A3 ⋯ An ]·[x1, x2, x3, ⋯, xn]^T = 0
where the Ai are the columns of A. Augmented Matrix: [ A1 A2 A3 ⋯ An | 0 ]
Key Points
    (1) Singular means that Ax = 0 has a solution other than the 0 vector.
    (2) Non-singular matrices are square, meaning they are n×n.
    (3) When row reduced, a non-singular matrix simplifies to the identity matrix.

Proving Linear Independence
If the vectors {x1, x2, ⋯, xn} are linearly independent, then in the equation c1·x1 + c2·x2 + ⋯ + cn·xn = 0 the constants have to be equal to 0: c1 = c2 = c3 = ⋯ = cn = 0.
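The independence test above can be automated by row reducing and counting pivots; a minimal plain-Python sketch (the helper names `rank` and `independent` are ours):

```python
def rank(rows):
    """Number of pivot rows after elimination (plain Python, no libraries)."""
    m = [list(map(float, r)) for r in rows]
    nrows, ncols = len(m), len(m[0])
    r = 0
    for c in range(ncols):
        # find a row at or below r with a non-zero entry in column c
        pivot = next((i for i in range(r, nrows) if abs(m[i][c]) > 1e-12), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(nrows):
            if i != r and abs(m[i][c]) > 1e-12:
                f = m[i][c] / m[r][c]
                for j in range(c, ncols):
                    m[i][j] -= f * m[r][j]
        r += 1
    return r

def independent(vectors):
    # vectors are linearly independent iff the rank equals the vector count
    return rank(vectors) == len(vectors)

indep = independent([[1, 0, 0], [0, 1, 0], [1, 1, 1]])   # True
dep   = independent([[1, 2, 3], [2, 4, 6]])              # False: row2 = 2*row1
```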
Dimension
    dim(W) = number of linearly independent vectors
    nullity = dim(Nullspace(W))
    Rank(W) = number of pivot columns (equivalently, pivot rows)
    # of columns of W = Rank(W) + nullity(W)
    The magnitude (length) of a vector q is ∥q∥ = √(q^T·q)
Note: All definitions above are numbers only, not vectors!

Transpose of a Matrix
Tip: Flip the columns and rows.
    [ A1 A2 A3 ⋯ An ]^T = the matrix whose rows are A1^T, A2^T, ⋯, An^T
Properties
    1) (A + B)^T = A^T + B^T
    2) (AC)^T = C^T·A^T
    3) (A^T)^T = A
    4) (A^T·A)^T = A^T·A
Note: A matrix is symmetric if it is equal to its transpose. In other words, it's symmetric when A^T = A.
Hint – symmetry only applies to n×n matrices.

Products
Dot product of 2 vectors: u⋅v = u^T·v
Cross product of 2 vectors u = [u1, u2, u3]^T and v = [v1, v2, v3]^T:
    u×v = [ u2·v3 − u3·v2,  u3·v1 − u1·v3,  u1·v2 − u2·v1 ]^T
Note: The cross product gives a vector orthogonal to both u and v.
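The dot- and cross-product formulas above are easy to check in plain Python (helper names are ours); the cross product of two sample vectors comes out orthogonal to both:

```python
def dot(u, v):
    """Dot product: sum of componentwise products."""
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    """Cross product of two 3-vectors, per the componentwise formula."""
    u1, u2, u3 = u
    v1, v2, v3 = v
    return [u2 * v3 - u3 * v2,
            u3 * v1 - u1 * v3,
            u1 * v2 - u2 * v1]

u = [1, 2, 3]
v = [4, 5, 6]
w = cross(u, v)          # [-3, 6, -3], orthogonal to both u and v
```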
☺ Matrix Algebra – By Jorge ☺ Page 2

Orthogonality
Definition: Orthogonal vectors are vectors whose dot product is equal to 0. In other words, u⋅v = 0.
Note: The dot product of a non-zero vector with itself is never 0: u⋅u = ∥u∥² = u^T·u ≠ 0.

Basis
Definition: A non-zero spanning set that is linearly independent.
Note: In order to be a basis of ℝⁿ, the set must contain exactly n vectors.
Note: To prove that n vectors form a basis of ℝⁿ, it is enough to prove that they are linearly independent.

Homogeneous System
Definition: An m×n system whose right-hand side is 0, i.e. [A | b] where b = 0; thus it always has x = 0 as one solution.
Possible Scenarios:
    (1) If m < n – infinitely many solutions
    (2) If m = n – a unique solution or infinitely many solutions

Gram-Schmidt Process
Allows us to calculate an orthogonal basis from vectors w1, ⋯, wp:
    u1 = w1
    u2 = w2 − (u1^T·w2 / u1^T·u1)·u1
    u3 = w3 − (u1^T·w3 / u1^T·u1)·u1 − (u2^T·w3 / u2^T·u2)·u2
In general, ui = wi − Σ_{k=1}^{i−1} (uk^T·wi / uk^T·uk)·uk, where 2 ≤ i ≤ p.
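The Gram-Schmidt formulas above translate directly into plain Python; a minimal sketch (the helper names `dot` and `gram_schmidt` are ours), checked by confirming the outputs are orthogonal:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(ws):
    """Return an orthogonal basis spanning the same space as ws."""
    us = []
    for w in ws:
        u = list(map(float, w))
        # subtract the projection of w onto each earlier u_k
        for uk in us:
            coeff = dot(uk, w) / dot(uk, uk)
            u = [ui - coeff * uki for ui, uki in zip(u, uk)]
        us.append(u)
    return us

basis = gram_schmidt([[1, 1, 0], [1, 0, 1]])
# basis[0] = [1, 1, 0]; basis[1] = [0.5, -0.5, 1]; their dot product is 0
```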
Vector Form of a Solution
Let A be the (row-reduced) augmented matrix of a homogeneous system:
    [ 1  0  -1  0  -1  -2 | 0 ]
    [ 0  1   2  0   1   2 | 0 ]
    [ 0  0   0  1   1   1 | 0 ]
The corresponding solutions are:
    x1 = x3 + x5 + 2·x6
    x2 = −2·x3 − x5 − 2·x6
    x4 = −x5 − x6
Thus the corresponding vector form is:
    x = x3·[1, −2, 1, 0, 0, 0]^T + x5·[1, −1, 0, −1, 1, 0]^T + x6·[2, −2, 0, −1, 0, 1]^T

Nullspace
Definition: the solution set (a subspace) of Ax = 0. In other words:
    N(A) = { x : Ax = 0, x ∈ ℝⁿ }
Hint: The vectors appearing in the vector form of the solution of Ax = 0 form a basis for the nullspace.

Normalization
Definition: A normal (unit) vector has a magnitude of just 1 unit. To calculate, divide the vector by its magnitude: q / ∥q∥.

Normal Equation
Definition: the equation that allows you to find the best approximate solution w* to an inconsistent system Ax = b:
    A^T·A·w* = A^T·b
Another process finds A·w*, the projection of b onto the column space of A: given an orthogonal basis u1, ⋯, up for the column space (p = number of columns of A),
    A·w* = Σ_{i=1}^{p} (b^T·ui / ui^T·ui)·ui
Note: w* is known as the least-squares solution.
Note: Because A·w* is the closest approximation to b, ∥A·w* − b∥ ≤ ∥A·x − b∥ where x is any other vector.
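The normal equation above can be checked on a small overdetermined example (the numbers are ours, chosen so the 2×2 system A^T·A·w* = A^T·b can be solved by Cramer's rule); the least-squares residual comes out orthogonal to the columns of A:

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

A = [[1, 0],
     [0, 1],
     [1, 1]]
b = [[1], [1], [1]]

At  = transpose(A)
AtA = matmul(At, A)          # [[2, 1], [1, 2]]
Atb = matmul(At, b)          # [[2], [2]]

# solve the 2x2 normal equation by Cramer's rule
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
w1 = (Atb[0][0] * AtA[1][1] - AtA[0][1] * Atb[1][0]) / det
w2 = (AtA[0][0] * Atb[1][0] - Atb[0][0] * AtA[1][0]) / det
w = [w1, w2]                 # least-squares solution w* = [2/3, 2/3]

# residual A*w - b is orthogonal to every column of A
residual = [A[i][0] * w[0] + A[i][1] * w[1] - b[i][0] for i in range(3)]
```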
Linear Transformations
Definition: A linear transformation is a function of the form T(x) = A·x. In other words, the matrix A, once multiplied with x, represents the linear transformation T.
In order to be a linear transformation, the following 2 criteria must be met:
    (1) T(x) + T(y) = T(x + y)   (addition)
    (2) c·T(x) = T(c·x)          (scalar multiplication)

Inverse of a Matrix
Definition: A matrix A is invertible if it is n×n and there is a matrix A⁻¹ such that A⁻¹·A = A·A⁻¹ = I.
To calculate, augment the matrix with the identity and row reduce to the identity:
    [ 1  2   3 | 1 0 0 ]        [ 1 0 0 |  54  -23  -7 ]
    [ 2  5   4 | 0 1 0 ]   →    [ 0 1 0 | -16    7   2 ]
    [ 1 -1  10 | 0 0 1 ]        [ 0 0 1 |  -7    3   1 ]
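The row-reduction recipe above can be sketched in plain Python as a Gauss-Jordan inversion (the helper name `inverse` is ours); it reproduces the 3×3 example:

```python
def inverse(M):
    """Invert a square matrix by augmenting with I and row reducing (Gauss-Jordan)."""
    n = len(M)
    aug = [list(map(float, M[i])) + [1.0 if j == i else 0.0 for j in range(n)]
           for i in range(n)]
    for col in range(n):
        # partial pivoting, then scale the pivot row to make the pivot 1
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        # clear the rest of the column
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]   # right half is the inverse

A    = [[1, 2, 3], [2, 5, 4], [1, -1, 10]]
Ainv = inverse(A)    # [[54, -23, -7], [-16, 7, 2], [-7, 3, 1]]
```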
