Matrices and Linear Equations Notes
Lilongwe University of Agriculture and Natural Resources (LUANAR)
Bunda Campus
Lecture Notes
Francisco Chamera
Contents
1 Matrices
1.2 Determinants
1.2.1 2 × 2 Determinants
1.3 Inverses
1 Matrices
For example, a matrix with 3 rows and 5 columns has order 3 × 5.
Definition 1.2. Two matrices A and B are equal if and only if they have the same
size (that is, the same number of rows and the same number of columns) and their
corresponding entries are equal. That is, A = B if and only if a_{ij} = b_{ij} for all 1 ≤ i ≤ m and 1 ≤ j ≤ n.
Definition 1.3. 1. A zero matrix is a matrix whose entries are all zeros.
2. A square matrix is a matrix with the same number of rows and columns.
3. In a square matrix, A = (aij ), of order n, the entries a11 , a22 , ..., ann are called
the diagonal entries and form the main diagonal (also called principal
diagonal) of A.
4. A square matrix A = (a_{ij}) with a_{ij} = 1 if i = j and a_{ij} = 0 if i ≠ j is called the identity matrix, denoted by I_n.
5. A square matrix is upper triangular if the entries below the main diagonal are all zeros, that is a_{ij} = 0 if i > j.
6. A square matrix is lower triangular if the entries above the main diagonal are all zeros, that is a_{ij} = 0 if i < j.
Example 1.4.
I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, and the matrix A = \begin{pmatrix} 2 & 1 & 4 \\ 0 & 3 & -1 \\ 0 & 0 & -2 \end{pmatrix} is an upper triangular matrix.
By Definition 1.5, the ij-th entry of (A + B) is the sum of the ij-th entry of A with
the ij-th entry of B.
Example 1.6.
If A = \begin{pmatrix} 1 & 2 & 4 \\ 2 & 3 & 1 \\ 5 & 0 & 3 \end{pmatrix} and B = \begin{pmatrix} 2 & -1 & 3 \\ 2 & 4 & 2 \\ 3 & 6 & 1 \end{pmatrix} then A + B = \begin{pmatrix} 3 & 1 & 7 \\ 4 & 7 & 3 \\ 8 & 6 & 4 \end{pmatrix}.
Definition 1.7. Let A be an m × n matrix and t ∈ R, a scalar. We define the
scalar multiplication of matrices by
(tA)_{ij} = t(A)_{ij}.
Example 1.8.
3 \begin{pmatrix} 1 & -1 \\ 2 & 8 \\ -10 & -5 \end{pmatrix} = \begin{pmatrix} 3(1) & 3(-1) \\ 3(2) & 3(8) \\ 3(-10) & 3(-5) \end{pmatrix} = \begin{pmatrix} 3 & -3 \\ 6 & 24 \\ -30 & -15 \end{pmatrix}.
We write −A for the scalar product (−1)A and define the matrix subtraction A − B as A + (−1)B.
Example 1.9.
If A = \begin{pmatrix} 1 & 2 & 4 \\ 2 & 3 & 1 \\ 5 & 0 & 3 \end{pmatrix} and B = \begin{pmatrix} 2 & -1 & 3 \\ 2 & 4 & 2 \\ 3 & 6 & 1 \end{pmatrix} then A - B = \begin{pmatrix} -1 & 3 & 1 \\ 0 & -1 & -1 \\ 2 & -6 & 2 \end{pmatrix}.
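The entry-by-entry rules above are easy to check numerically. The following sketch uses Python with the numpy library (an assumption of this illustration, not part of the notes) to reproduce Examples 1.6, 1.8 and 1.9.

import numpy as np

A = np.array([[1, 2, 4], [2, 3, 1], [5, 0, 3]])
B = np.array([[2, -1, 3], [2, 4, 2], [3, 6, 1]])

print(A + B)                                        # entrywise sum, as in Example 1.6
print(3 * np.array([[1, -1], [2, 8], [-10, -5]]))   # scalar multiple, as in Example 1.8
print(A - B)                                        # A + (-1)B, as in Example 1.9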
Theorem 1.10. Let A, B and C be matrices of order m × n and let k, l ∈ R.
Suppose further that O represents the m × n zero matrix. Then:
1. A + B = B + A (commutativity)
2. (A + B) + C = A + (B + C) (associativity)
3. A + O = A
4. there is an m × n matrix A0 such that A + A0 = O
5. k(lA) = (kl)A
6. (k + l)A = kA + lA
7. k(A + B) = kA + kB
8. 0A = O
Note that in Example 1.12, while AB is defined, the product BA is not defined.
However, for square matrices A and B of the same order, both the product AB and
BA are defined, though the products are generally not equal. If AB = BA we say
that A and B are commutative.
Definition 1.13. Two square matrices A and B are said to commute if AB = BA.
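A quick numerical illustration of this point, as a sketch in Python/numpy with two arbitrarily chosen 2 × 2 matrices (they are not taken from the notes):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 1]])

print(A @ B)                          # [[2 3], [4 7]]
print(B @ A)                          # [[3 4], [4 6]]
print(np.array_equal(A @ B, B @ A))   # False: these A and B do not commute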
Example 1.17.
\begin{pmatrix} 1 & 2 & 4 \\ -5 & 2 & 1 \end{pmatrix}^T = \begin{pmatrix} 1 & -5 \\ 2 & 2 \\ 4 & 1 \end{pmatrix}, \begin{pmatrix} 4 & 7 \\ 7 & 0 \end{pmatrix}^T = \begin{pmatrix} 4 & 7 \\ 7 & 0 \end{pmatrix} and \begin{pmatrix} 7 \\ 8 \\ 9 \end{pmatrix}^T = \begin{pmatrix} 7 & 8 & 9 \end{pmatrix}.
1. (A^T)^T = A
2. (AB)^T = B^T A^T and, more generally, (A_1 A_2 \cdots A_k)^T = A_k^T \cdots A_2^T A_1^T
3. (A + B)^T = A^T + B^T
4. (rA)^T = rA^T
Proof. We prove the first three and leave the last one as an exercise.
1. Let A = (aij ), AT = (bij ) and (AT )T = (cij ). Then the definition of transpose
gives
2. It is enough to show the first part, as the general claim follows by repeatedly
applying the first claim with k = 2. This is just an explicit calculation:
((AB)^T)_{ij} = (AB)_{ji} = \sum_{k=1}^{n} A_{jk} B_{ki} = \sum_{k=1}^{n} B_{ki} A_{jk} = \sum_{k=1}^{n} (B^T)_{ik} (A^T)_{kj} = (B^T A^T)_{ij}.
3. The (i, j)-th entry of A^T + B^T is the sum of the (i, j)-th entries of A^T and B^T, which are the (j, i)-th entries of A and B, respectively. Thus the (i, j)-th entry of A^T + B^T is the (j, i)-th entry of the sum A + B, which is equal to the (i, j)-th entry of the transpose (A + B)^T.
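These identities can also be spot-checked numerically. The sketch below (Python/numpy, with matrices chosen only for illustration) verifies (AB)^T = B^T A^T and (A + B)^T = A^T + B^T.

import numpy as np

A = np.array([[1, 2, 4], [-5, 2, 1]])      # 2 x 3
B = np.array([[2, 0], [1, 3], [-1, 4]])    # 3 x 2
print(np.array_equal((A @ B).T, B.T @ A.T))   # True

C = np.array([[4, 7], [7, 0]])
D = np.array([[1, -1], [2, 5]])
print(np.array_equal((C + D).T, C.T + D.T))   # True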
Example 1.20.
Let A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & -2 \\ 3 & -2 & 4 \end{pmatrix} and B = \begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & -3 \\ -2 & 3 & 0 \end{pmatrix}. Then A is a symmetric matrix and B is a skew-symmetric matrix.
Proposition 1.21. For any square matrix A the matrices B = AA^T and C = A + A^T are symmetric.
Proof.
B^T = (AA^T)^T = (A^T)^T A^T = AA^T = B
and
C^T = (A + A^T)^T = A^T + (A^T)^T = A^T + A = C.
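A numerical check of Proposition 1.21, as a numpy sketch with an arbitrary square matrix:

import numpy as np

A = np.array([[1, 4, 2], [0, 3, -1], [5, 2, 7]])
B = A @ A.T
C = A + A.T
print(np.array_equal(B, B.T), np.array_equal(C, C.T))   # True True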
Exercise 1.22.
1. Show that the product of two lower triangular matrices is a lower triangular
matrix. Show that the product of two upper triangular matrices is an upper
triangular matrix.
1.2 Determinants
1.2.1 2 × 2 Determinants
Definition 1.23. Let A be a 2 × 2 matrix, i.e., A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}. Then the determinant of A is given by |A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21}.
Example 1.24.
If A = \begin{pmatrix} 3 & 1 \\ -2 & 3 \end{pmatrix} then |A| = \begin{vmatrix} 3 & 1 \\ -2 & 3 \end{vmatrix} = 3(3) - 1(-2) = 9 + 2 = 11.
Definition 1.25. Let A be an n × n matrix. The (i, j)-th minor, denoted Aij , is
the determinant of the (n − 1) × (n − 1) matrix obtained from A by deleting the i-th
row and the j-th column.
C_{ij} = (-1)^{i+j} A_{ij}.
Example 1.27.
Let A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}. Then A_{23} = \begin{vmatrix} 1 & 2 \\ 7 & 8 \end{vmatrix} = 8 - 14 = -6 and C_{23} = (-1)^{2+3}(-6) = 6.
The following are steps for finding the determinant of a matrix using cofactors.
Example 1.28.
Compute the determinant of B = \begin{pmatrix} 5 & -2 & 2 \\ 0 & 3 & -3 \\ 2 & -4 & 7 \end{pmatrix}.
Solution
Example 1.29.
Find |D| given that D = \begin{pmatrix} 2 & 4 & 2 & 1 \\ 0 & 3 & 5 & 3 \\ 0 & 5 & 7 & -4 \\ 0 & -7 & -7 & 0 \end{pmatrix}.
Solution
|D| = 2C_{11} = 2 \begin{vmatrix} 3 & 5 & 3 \\ 5 & 7 & -4 \\ -7 & -7 & 0 \end{vmatrix} = 2 × 98 = 196.
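The cofactor procedure can be written as a short recursive function. The sketch below is plain Python (the function name det is just a label chosen here); it expands along the first row and reproduces the determinants of the matrices in Examples 1.28 and 1.29.

def det(M):
    # Determinant by cofactor expansion along the first row.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]   # delete the first row and column j+1
        total += (-1) ** j * M[0][j] * det(minor)
    return total

B = [[5, -2, 2], [0, 3, -3], [2, -4, 7]]
D = [[2, 4, 2, 1], [0, 3, 5, 3], [0, 5, 7, -4], [0, -7, -7, 0]]
print(det(B), det(D))   # 45 and 196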
Proof. If we use cofactor expansion along the first row or first column, the only term in the expansion that is not zero is the first, and that term is the product of the first diagonal entry and its cofactor. By induction, that cofactor is the determinant of a smaller triangular matrix, which is the product of its diagonal entries.
Example 1.31.
The determinant of an upper triangular matrix with diagonal entries 2, −3, 7 and −1 (its last row being 0 0 0 −1) is
2 × (−3) × 7 × (−1) = 42.
Proof. The result follows from Theorem 1.30 and the definition of the identity matrix.
To calculate
|A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix},
write the first two columns on the right side of the determinant forming a 3 × 5
matrix
\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{11} & a_{12} \\ a_{21} & a_{22} & a_{23} & a_{21} & a_{22} \\ a_{31} & a_{32} & a_{33} & a_{31} & a_{32} \end{pmatrix}.
Then sum the products on all lines parallel to the main diagonal and subtract the
products on the lines parallel to the second diagonal (see Figure 1).
|A| = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}.
Example 1.33.
Find \begin{vmatrix} 2 & 0 & 2 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{vmatrix}.
Solution
The corresponding 3 × 5 matrix is \begin{pmatrix} 2 & 0 & 2 & 2 & 0 \\ 0 & 1 & 0 & 0 & 1 \\ -1 & 0 & 1 & -1 & 0 \end{pmatrix}, so
|A| = 2(1)(1) + 0(0)(-1) + 2(0)(0) - 2(1)(-1) - 2(0)(0) - 0(0)(1) = 2 + 2 = 4.
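The rule of Sarrus translates directly into code. This is a sketch (plain Python) of the six-term formula above, applied to the matrix of Example 1.33; it is valid only for 3 × 3 matrices.

def sarrus(a):
    # a is a 3 x 3 matrix given as a list of rows.
    return (a[0][0]*a[1][1]*a[2][2] + a[0][1]*a[1][2]*a[2][0] + a[0][2]*a[1][0]*a[2][1]
            - a[0][2]*a[1][1]*a[2][0] - a[0][0]*a[1][2]*a[2][1] - a[0][1]*a[1][0]*a[2][2])

A = [[2, 0, 2], [0, 1, 0], [-1, 0, 1]]
print(sarrus(A))   # 4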
Example 1.35.
The matrix A = \begin{pmatrix} 1 & 3 & 4 \\ 4 & 3 & -2 \\ 2 & 6 & 8 \end{pmatrix} is singular since |A| = 0.
1. det(AT ) = det(A)
2. det(AB) = det(A)det(B)
Proof. We prove the first one and leave the last one as an exercise.
1.3 Inverses
A square matrix A of order n is invertible if there exists an n × n matrix B such that
AB = BA = I_n.
In that case B is called the inverse of A and we write
B = A^{-1}.
Definition 1.39. The inverse of a 2 × 2 matrix A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} is given by
A^{-1} = \frac{1}{|A|} \begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix} = \frac{1}{a_{11}a_{22} - a_{12}a_{21}} \begin{pmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{pmatrix}.
For A = \begin{pmatrix} 3 & 1 \\ 4 & 2 \end{pmatrix},
A^{-1} = \frac{1}{3(2) - 1(4)} \begin{pmatrix} 2 & -1 \\ -4 & 3 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 2 & -1 \\ -4 & 3 \end{pmatrix} = \begin{pmatrix} 1 & -\frac{1}{2} \\ -2 & \frac{3}{2} \end{pmatrix}.
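A minimal sketch of the 2 × 2 inverse formula in Python (the function name inverse2 is only illustrative):

def inverse2(a11, a12, a21, a22):
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is not invertible")
    return [[a22 / det, -a12 / det], [-a21 / det, a11 / det]]

print(inverse2(3, 1, 4, 2))   # [[1.0, -0.5], [-2.0, 1.5]]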
To find the inverse of a 3 × 3 marix A using the cofactor method we follow these
steps:
1. Find |A|, the determinant of A. If |A| = 0, then the inverse of A does not exist, so stop. Otherwise (if |A| ≠ 0) proceed to find the inverse using the steps below.
2. Find the matrix of minors of A.
3. Find the matrix of cofactors by multiplying each minor by (−1)^{i+j}, where i and j are its row and column.
4. Find the adjoint (also called adjugate) of A by taking the transpose of the
matrix of cofactors.
5. Find A^{-1} = \frac{1}{|A|} Adj(A), where Adj(A) is the adjoint of A.
Example 1.41.
Find the inverse of A = \begin{pmatrix} 7 & 2 & 1 \\ 0 & 3 & -1 \\ -3 & 4 & -2 \end{pmatrix}.
Solution
Please confirm that |A| = 1 and that Adj(A) = \begin{pmatrix} -2 & 8 & -5 \\ 3 & -11 & 7 \\ 9 & -34 & 21 \end{pmatrix}.
Hence
A^{-1} = \frac{1}{|A|} Adj(A) = \frac{1}{1} \begin{pmatrix} -2 & 8 & -5 \\ 3 & -11 & 7 \\ 9 & -34 & 21 \end{pmatrix} = \begin{pmatrix} -2 & 8 & -5 \\ 3 & -11 & 7 \\ 9 & -34 & 21 \end{pmatrix}.
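The cofactor method can be checked step by step with numpy. The sketch below builds the matrix of cofactors of the matrix A of Example 1.41, transposes it to get the adjoint, and compares the result with numpy's own inverse routine.

import numpy as np

A = np.array([[7, 2, 1], [0, 3, -1], [-3, 4, -2]], dtype=float)

C = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)   # delete row i and column j
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)         # cofactor C_ij

adj = C.T                        # adjoint = transpose of the cofactor matrix
A_inv = adj / np.linalg.det(A)   # here |A| = 1, so the inverse equals the adjoint
print(np.round(A_inv))
print(np.allclose(A_inv, np.linalg.inv(A)))   # True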
Theorem 1.42. Suppose that A and B are two invertible n × n matrices. Then
1. (A−1 )−1 = A
2. |A^{-1}| = \frac{1}{|A|}
3. (AT )−1 = (A−1 )T
4. (AB)−1 = B −1 A−1 .
5. If AB = AC then B = C, since AB = AC ⇒ A^{-1}AB = A^{-1}AC ⇒ B = C.
The elementary row operations on a matrix are:
1. Interchange two rows.
2. Multiply a row by a nonzero scalar.
3. Add a multiple of one row to another row.
Definition 1.44. Two matrices A and B are said to be row equivalent if B can
be obtained by applying a sequence of elementary row operations to A.
Definition 1.46. A leading entry of a matrix A is the first nonzero entry in a row.
2. If row k has a leading entry, then either the leading entry in row k + 1 is to
the right of the one for row k or row k + 1 is a zero row.
3. All rows with only zeros for entries, if available, are at the bottom.
Example 1.48.
The matrix \begin{pmatrix} 1 & 1 & 2 & -1 & 3 \\ 0 & 0 & 1 & 0 & -4 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} is in row echelon form.
2. The leading entry in a row is the only nonzero entry in its column. That is,
any column containing a leading entry has zeros in all other positions.
Example 1.50.
The matrix \begin{pmatrix} 1 & 0 & 0 & 7 \\ 0 & 1 & 0 & 4 \\ 0 & 0 & 1 & 3 \end{pmatrix} is in reduced row echelon form.
Example 1.51.
Put the matrix \begin{pmatrix} 3 & -2 & 4 & 7 \\ 2 & 1 & 0 & -3 \\ 2 & 8 & -8 & 2 \end{pmatrix} in row echelon form and then in reduced row echelon form.
Solution
R_1 → R_1 - R_2, R_3 → R_3 - R_2: \begin{pmatrix} 1 & -3 & 4 & 10 \\ 2 & 1 & 0 & -3 \\ 0 & 7 & -8 & 5 \end{pmatrix}. R_2 → R_2 - 2R_1: \begin{pmatrix} 1 & -3 & 4 & 10 \\ 0 & 7 & -8 & -23 \\ 0 & 7 & -8 & 5 \end{pmatrix}.
R_3 → R_3 - R_2, R_2 → \frac{1}{7}R_2: \begin{pmatrix} 1 & -3 & 4 & 10 \\ 0 & 1 & -\frac{8}{7} & -\frac{23}{7} \\ 0 & 0 & 0 & 28 \end{pmatrix}. R_3 → \frac{1}{28}R_3: \begin{pmatrix} 1 & -3 & 4 & 10 \\ 0 & 1 & -\frac{8}{7} & -\frac{23}{7} \\ 0 & 0 & 0 & 1 \end{pmatrix}.
The last matrix is in row echelon form.
R_1 → R_1 + 3R_2: \begin{pmatrix} 1 & 0 & \frac{4}{7} & \frac{1}{7} \\ 0 & 1 & -\frac{8}{7} & -\frac{23}{7} \\ 0 & 0 & 0 & 1 \end{pmatrix}.
R_2 → R_2 + \frac{23}{7}R_3, R_1 → R_1 - \frac{1}{7}R_3: \begin{pmatrix} 1 & 0 & \frac{4}{7} & 0 \\ 0 & 1 & -\frac{8}{7} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
Remark 1.52. A matrix may have different row echelon forms but its reduced row
echelon form is unique. This means every matrix is row equivalent to one and only
one matrix in reduced row echelon form.
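Hand row reduction is error-prone, so it is worth checking against a computer algebra system. The sketch below uses the sympy library (an assumption of this illustration, not part of the notes) to compute the unique reduced row echelon form of the matrix in Example 1.51.

from sympy import Matrix

M = Matrix([[3, -2, 4, 7], [2, 1, 0, -3], [2, 8, -8, 2]])
R, pivots = M.rref()      # rref() returns the RREF together with the pivot column indices
print(R)                  # rows (1, 0, 4/7, 0), (0, 1, -8/7, 0), (0, 0, 0, 1)
print(pivots)             # (0, 1, 3)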
Proof. 1. First suppose that the two interchanged rows are consecutive:
A = \begin{pmatrix} * & * & \cdots & * \\ \vdots & \vdots & & \vdots \\ v_1 & v_2 & \cdots & v_n \\ w_1 & w_2 & \cdots & w_n \\ \vdots & \vdots & & \vdots \\ * & * & \cdots & * \end{pmatrix} and B = \begin{pmatrix} * & * & \cdots & * \\ \vdots & \vdots & & \vdots \\ w_1 & w_2 & \cdots & w_n \\ v_1 & v_2 & \cdots & v_n \\ \vdots & \vdots & & \vdots \\ * & * & \cdots & * \end{pmatrix}.
If both |A| and |B| are computed by expanding along the row [w1 w2 ....wn ],
then all the cofactors are the same except that the signs are flipped. Therefore
|B| = −|A|.
Now suppose that the two interchanged rows i < i′ are not consecutive:
A = \begin{pmatrix} * & * & \cdots & * \\ \vdots & \vdots & & \vdots \\ v_1 & v_2 & \cdots & v_n \\ \vdots & \vdots & & \vdots \\ w_1 & w_2 & \cdots & w_n \\ \vdots & \vdots & & \vdots \\ * & * & \cdots & * \end{pmatrix} and B = \begin{pmatrix} * & * & \cdots & * \\ \vdots & \vdots & & \vdots \\ w_1 & w_2 & \cdots & w_n \\ \vdots & \vdots & & \vdots \\ v_1 & v_2 & \cdots & v_n \\ \vdots & \vdots & & \vdots \\ * & * & \cdots & * \end{pmatrix}.
Then B can be obtained from A by an odd number, 2(i′ − i) − 1, of interchanges of consecutive rows, so by the consecutive case we again get |B| = −|A|.
Proof. 1. By part 2 of Theorem 1.53, if A has a whole row of 0’s we can multiply
this row by 0 to obtain B which is in fact equal to A. So we have |A| = 0|A|
giving |A| = 0.
2. By part 1 of Theorem 1.53, interchanging the two identical rows gives the
original matrix, but its determinant is negated. The only scalar which is its
own negation is 0. Therefore, the determinant of the matrix is 0.
3. This follows from part 2 of Theorem 1.53 and part 2 of this Lemma.
Proof. Let A and B be identical except in row i: row i of A is (w_1, w_2, ..., w_n) and row i of B is (w_1 + k v_1, w_2 + k v_2, ..., w_n + k v_n), where (v_1, v_2, ..., v_n) is another row of both matrices:
A = \begin{pmatrix} * & * & \cdots & * \\ \vdots & \vdots & & \vdots \\ v_1 & v_2 & \cdots & v_n \\ \vdots & \vdots & & \vdots \\ w_1 & w_2 & \cdots & w_n \\ \vdots & \vdots & & \vdots \\ * & * & \cdots & * \end{pmatrix} and B = \begin{pmatrix} * & * & \cdots & * \\ \vdots & \vdots & & \vdots \\ v_1 & v_2 & \cdots & v_n \\ \vdots & \vdots & & \vdots \\ w_1 + k v_1 & w_2 + k v_2 & \cdots & w_n + k v_n \\ \vdots & \vdots & & \vdots \\ * & * & \cdots & * \end{pmatrix}.
Suppose that (w_1, w_2, ..., w_n) (in A) and (w_1 + k v_1, ..., w_n + k v_n) (in B) are in row i. Then
|A| = w_1 C_{i1} + w_2 C_{i2} + ... + w_n C_{in}
and
|B| = (w_1 + k v_1) C_{i1} + (w_2 + k v_2) C_{i2} + ... + (w_n + k v_n) C_{in} = |A| + k (v_1 C_{i1} + v_2 C_{i2} + ... + v_n C_{in}).
The last bracketed sum is the cofactor expansion, along row i, of the matrix obtained from A by replacing row i with (v_1, v_2, ..., v_n); that matrix has two identical rows, so the sum is 0 and |B| = |A|.
Example 1.56.
Solution
R_2 → R_2 + R_1, R_3 → R_3 + R_1: \begin{pmatrix} 5 & -1 & -3 \\ 3 & 1 & 0 \\ 9 & 7 & 0 \end{pmatrix} = B.
The determinant of B can easily be found by expanding along the third column:
|B| = (-1)^{1+3}(-3) \begin{vmatrix} 3 & 1 \\ 9 & 7 \end{vmatrix} = -3(21 - 9) = -36.
Since the row operations used do not change determinant,
|A| = |B| = −36.
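The invariance used here can be tested numerically; the sketch below (numpy) adds a multiple of one row of B to another and confirms that the determinant does not change.

import numpy as np

B = np.array([[5, -1, -3], [3, 1, 0], [9, 7, 0]], dtype=float)
print(np.linalg.det(B))        # approximately -36

B2 = B.copy()
B2[2] = B2[2] + 4 * B2[0]      # add 4 times row 1 to row 3
print(np.linalg.det(B2))       # still approximately -36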
Example 1.57.
Solution
R_1 ↔ R_2, then R_1 → \frac{1}{2}R_1: \begin{pmatrix} 0 & 5 & -2 & -4 \\ 2 & 4 & -2 & 8 \\ -3 & 4 & -1 & 1 \\ 5 & 5 & -8 & 9 \end{pmatrix} → \begin{pmatrix} 1 & 2 & -1 & 4 \\ 0 & 5 & -2 & -4 \\ -3 & 4 & -1 & 1 \\ 5 & 5 & -8 & 9 \end{pmatrix}.
R_3 → R_3 + 3R_1, R_4 → R_4 - 5R_1: \begin{pmatrix} 1 & 2 & -1 & 4 \\ 0 & 5 & -2 & -4 \\ 0 & 10 & -4 & 13 \\ 0 & -5 & -3 & -11 \end{pmatrix}.
R_3 → R_3 - 2R_2, R_4 → R_4 + R_2: \begin{pmatrix} 1 & 2 & -1 & 4 \\ 0 & 5 & -2 & -4 \\ 0 & 0 & 0 & 21 \\ 0 & 0 & -5 & -15 \end{pmatrix}. R_3 ↔ R_4: \begin{pmatrix} 1 & 2 & -1 & 4 \\ 0 & 5 & -2 & -4 \\ 0 & 0 & -5 & -15 \\ 0 & 0 & 0 & 21 \end{pmatrix} = B.
Since B is triangular, |B| = 1 · 5 · (−5) · 21 = −525. The reduction used two row interchanges (each changing the sign of the determinant) and one scaling of a row by \frac{1}{2} (which multiplies the determinant by \frac{1}{2}), so |B| = \frac{1}{2}|D| and hence |D| = 2|B| = −1050.
Suppose A is an n × n invertible matrix. Then its reduced row echelon form is the identity matrix. In other words, by a series of successive row operations, the matrix A can be transformed into the identity matrix I_n; applying the same row operations to I_n produces A^{-1}.
Solution
We augment B with the 3 × 3 identity matrix: \left(\begin{array}{ccc|ccc} 3 & 4 & -1 & 1 & 0 & 0 \\ 1 & -1 & 1 & 0 & 1 & 0 \\ -1 & 2 & 3 & 0 & 0 & 1 \end{array}\right).
R_1 ↔ R_2: \left(\begin{array}{ccc|ccc} 1 & -1 & 1 & 0 & 1 & 0 \\ 3 & 4 & -1 & 1 & 0 & 0 \\ -1 & 2 & 3 & 0 & 0 & 1 \end{array}\right).
R_2 → R_2 - 3R_1, R_3 → R_3 + R_1: \left(\begin{array}{ccc|ccc} 1 & -1 & 1 & 0 & 1 & 0 \\ 0 & 7 & -4 & 1 & -3 & 0 \\ 0 & 1 & 4 & 0 & 1 & 1 \end{array}\right).
R_3 ↔ R_2: \left(\begin{array}{ccc|ccc} 1 & -1 & 1 & 0 & 1 & 0 \\ 0 & 1 & 4 & 0 & 1 & 1 \\ 0 & 7 & -4 & 1 & -3 & 0 \end{array}\right).
R_1 → R_1 + R_2, R_3 → R_3 - 7R_2: \left(\begin{array}{ccc|ccc} 1 & 0 & 5 & 0 & 2 & 1 \\ 0 & 1 & 4 & 0 & 1 & 1 \\ 0 & 0 & -32 & 1 & -10 & -7 \end{array}\right).
R_3 → -\frac{1}{32}R_3: \left(\begin{array}{ccc|ccc} 1 & 0 & 5 & 0 & 2 & 1 \\ 0 & 1 & 4 & 0 & 1 & 1 \\ 0 & 0 & 1 & -\frac{1}{32} & \frac{10}{32} & \frac{7}{32} \end{array}\right).
R_1 → R_1 - 5R_3, R_2 → R_2 - 4R_3: \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac{5}{32} & \frac{14}{32} & -\frac{3}{32} \\ 0 & 1 & 0 & \frac{4}{32} & -\frac{8}{32} & \frac{4}{32} \\ 0 & 0 & 1 & -\frac{1}{32} & \frac{10}{32} & \frac{7}{32} \end{array}\right).
Hence B^{-1} = \frac{1}{32} \begin{pmatrix} 5 & 14 & -3 \\ 4 & -8 & 4 \\ -1 & 10 & 7 \end{pmatrix}.
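The hand computation can be checked in one line with numpy (a sketch; np.linalg.inv uses a different algorithm internally but must return the same matrix):

import numpy as np

B = np.array([[3, 4, -1], [1, -1, 1], [-1, 2, 3]], dtype=float)
B_inv = np.linalg.inv(B)
print(np.round(32 * B_inv))                 # [[5 14 -3], [4 -8 4], [-1 10 7]]
print(np.allclose(B @ B_inv, np.eye(3)))    # True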
To solve the system of equations in (1) means finding all the n-tuples of scalars (\bar{x}_1, \bar{x}_2, ..., \bar{x}_n) that satisfy the system when the constants \bar{x}_j are substituted for the unknowns x_j, 1 ≤ j ≤ n. The solutions must satisfy each equation in the system.
In case of more than one solution, we use upper indices to indicate constant values. For example, x_1^{(1)} and x_1^{(2)} denote two different values for the unknown x_1.
Definition 2.2. If all the right hand constants bi , 1 ≤ i ≤ m, are equal to 0, then
the system is homogeneous. Otherwise it is inhomogeneous. If you set all the
constants bj in an inhomogeneous system 1 to zero, you get the homogeneous system
associated with the inhomogeneous one.
Theorem 2.3. If (x_1^{(1)}, x_2^{(1)}, ..., x_n^{(1)}) and (x_1^{(2)}, x_2^{(2)}, ..., x_n^{(2)}) are solutions of the inhomogeneous system (1), then their difference
(x_1^{(1)} - x_1^{(2)}, x_2^{(1)} - x_2^{(2)}, ..., x_n^{(1)} - x_n^{(2)})
is a solution of the associated homogeneous system.
Proof. Just subtract the corresponding equations. The details are left as an exercise.
Recall that a system of linear equations may or may not have a solution. If the
solution exists it may or may not be unique.
Definition 2.4. A system of linear equations that does not have any solution is inconsistent. A system with at least one solution is consistent.
Corollary 2.5. A consistent inhomogeneous system has exactly one solution if and
only if the corresponding homogeneous system has only one solution, which must be
the trivial solution (trivial solution means all the variables are zero).
Proof. This is a direct consequence of Theorem 2.3 and the existence of the trivial
solution for homogeneous systems of equations.
Example 2.7.
Solve the following system of linear equations:
5x_1 + 15x_2 + 56x_3 = 35
-4x_1 - 11x_2 - 41x_3 = -26
-x_1 - 3x_2 - 11x_3 = -7.
Solution
\left(\begin{array}{ccc|ccc} 1 & 0 & -\frac{1}{5} & -\frac{11}{5} & -3 & 0 \\ 0 & 1 & \frac{19}{5} & \frac{4}{5} & 1 & 0 \\ 0 & 0 & 1 & 1 & 0 & 5 \end{array}\right) \xrightarrow{R_2 = R_2 - \frac{19}{5}R_3,\; R_1 = R_1 + \frac{1}{5}R_3} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -2 & -3 & 1 \\ 0 & 1 & 0 & -3 & 1 & -19 \\ 0 & 0 & 1 & 1 & 0 & 5 \end{array}\right).
This gives
A^{-1} = \begin{pmatrix} -2 & -3 & 1 \\ -3 & 1 & -19 \\ 1 & 0 & 5 \end{pmatrix}.
So
x = A^{-1}b = \begin{pmatrix} -2 & -3 & 1 \\ -3 & 1 & -19 \\ 1 & 0 & 5 \end{pmatrix} \begin{pmatrix} 35 \\ -26 \\ -7 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}.
Hence x_1 = 1, x_2 = 2 and x_3 = 0.
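The same solution can be obtained directly in numpy; the sketch below solves the system both with the explicit inverse and with np.linalg.solve, which avoids forming the inverse at all.

import numpy as np

A = np.array([[5, 15, 56], [-4, -11, -41], [-1, -3, -11]], dtype=float)
b = np.array([35, -26, -7], dtype=float)

print(np.linalg.inv(A) @ b)    # approximately [1, 2, 0]
print(np.linalg.solve(A, b))   # same result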
where ci denotes the i-th column of A. The second equality follows from the cofactor
expansion of the matrix
[c1 ... ck−1 b ck+1 ... cn ]
along the k-th column.
Example 2.9.
Solve the system
2x_1 + 3x_2 = -5
2x_1 - 3x_2 = 13.
Solution
D = \begin{vmatrix} 2 & 3 \\ 2 & -3 \end{vmatrix} = 2(-3) - 2(3) = -6 - 6 = -12.
D_{x_1} = \begin{vmatrix} -5 & 3 \\ 13 & -3 \end{vmatrix} = -24 and D_{x_2} = \begin{vmatrix} 2 & -5 \\ 2 & 13 \end{vmatrix} = 36.
Hence x_1 = \frac{D_{x_1}}{D} = \frac{-24}{-12} = 2 and x_2 = \frac{D_{x_2}}{D} = \frac{36}{-12} = -3.
Example 2.10.
Solve the system
x_1 + x_2 - x_3 = 6
3x_1 - 2x_2 + x_3 = -5
x_1 + 3x_2 - 2x_3 = 14.
Solution
D = \begin{vmatrix} 1 & 1 & -1 \\ 3 & -2 & 1 \\ 1 & 3 & -2 \end{vmatrix} = -3, D_{x_1} = \begin{vmatrix} 6 & 1 & -1 \\ -5 & -2 & 1 \\ 14 & 3 & -2 \end{vmatrix} = -3, D_{x_2} = \begin{vmatrix} 1 & 6 & -1 \\ 3 & -5 & 1 \\ 1 & 14 & -2 \end{vmatrix} = -9,
D_{x_3} = \begin{vmatrix} 1 & 1 & 6 \\ 3 & -2 & -5 \\ 1 & 3 & 14 \end{vmatrix} = 6.
Hence x_1 = \frac{D_{x_1}}{D} = 1, x_2 = \frac{D_{x_2}}{D} = 3 and x_3 = \frac{D_{x_3}}{D} = -2.
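Cramer's rule is mechanical enough to code directly. The sketch below (numpy) replaces one column of A at a time by the right-hand side b and reproduces Example 2.10.

import numpy as np

A = np.array([[1, 1, -1], [3, -2, 1], [1, 3, -2]], dtype=float)
b = np.array([6, -5, 14], dtype=float)

D = np.linalg.det(A)
x = []
for k in range(3):
    Ak = A.copy()
    Ak[:, k] = b                       # replace column k by the right-hand side
    x.append(np.linalg.det(Ak) / D)    # x_k = D_{x_k} / D
print(np.round(x))                     # [1, 3, -2]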
Recall steps for the Gaussian elimination method for solving a system of linear
equations:
2. Use elementary row operations to transform the augmented matrix and obtain
the row echelon form (REF).
3. Stop process in step 2 if you obtain a row whose elements are all zeros except
the last one on the right. In that case, the system is inconsistent (has no
solution). Otherwise, finish step 2 and read the solutions of the system from
the final matrix.
then the system has a unique solution and this can be found by back-substitution.
Solution
The augmented matrix is \left(\begin{array}{ccc|c} 1 & 4 & 1 & 3 \\ 2 & -3 & -2 & 5 \\ 2 & 4 & 2 & 6 \end{array}\right). Now we use elementary row operations to put this matrix into RREF as follows.
R_3 = R_3 - 2R_1, R_2 = R_2 - 2R_1: \left(\begin{array}{ccc|c} 1 & 4 & 1 & 3 \\ 0 & -11 & -4 & -1 \\ 0 & -4 & 0 & 0 \end{array}\right). R_2 ↔ R_3: \left(\begin{array}{ccc|c} 1 & 4 & 1 & 3 \\ 0 & -4 & 0 & 0 \\ 0 & -11 & -4 & -1 \end{array}\right).
R_2 = -\frac{1}{4}R_2: \left(\begin{array}{ccc|c} 1 & 4 & 1 & 3 \\ 0 & 1 & 0 & 0 \\ 0 & -11 & -4 & -1 \end{array}\right). R_3 = R_3 + 11R_2: \left(\begin{array}{ccc|c} 1 & 4 & 1 & 3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -4 & -1 \end{array}\right).
R_1 = R_1 - 4R_2: \left(\begin{array}{ccc|c} 1 & 0 & 1 & 3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -4 & -1 \end{array}\right). R_3 = -\frac{1}{4}R_3: \left(\begin{array}{ccc|c} 1 & 0 & 1 & 3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \frac{1}{4} \end{array}\right). R_1 = R_1 - R_3: \left(\begin{array}{ccc|c} 1 & 0 & 0 & \frac{11}{4} \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & \frac{1}{4} \end{array}\right).
The last matrix is in reduced row echelon form.
Hence x_1 = \frac{11}{4}, x_2 = 0 and x_3 = \frac{1}{4}.
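As a check on the arithmetic, the sketch below (numpy) solves the same system read off from the augmented matrix above.

import numpy as np

A = np.array([[1, 4, 1], [2, -3, -2], [2, 4, 2]], dtype=float)
b = np.array([3, 5, 6], dtype=float)
print(np.linalg.solve(A, b))   # approximately [2.75, 0, 0.25], i.e. x1 = 11/4, x2 = 0, x3 = 1/4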
Example 2.13.
Solve the system
x_1 + 2x_2 + 3x_3 = 12
4x_1 + 5x_2 + 6x_3 = 11
7x_1 + 8x_2 + 9x_3 = -10.
Solution
Example 2.14.
Solve the system
x_1 - x_2 + x_3 = 3
2x_1 - x_2 + 4x_3 = 7
3x_1 - 5x_2 - x_3 = 7.
Solution
and solutions can be found by giving values to the free variable x3 then evaluating
the basic variables x1 and x2 .
where t ∈ R can have any value, and the set of all solutions is {(−3t + 4, −2t + 1, t) : t ∈ R}.
In this example the arbitrary value t ∈ R is called a parameter, and the general solution (−3t + 4, −2t + 1, t) is called a parametric solution.
Two cars, one traveling 10 km/h faster than the other car, start at the same time
from the same point and travel in opposite directions. In 3 hours, they are 300 km
apart. Find the rate of each car.
Solution
Let x_1 and x_2 be the speeds of the first car and the second car respectively. The table below summarises the information given.
Example 2.16.
C3 H8 + x1 O2 → x2 CO2 + x3 H2 O.
Solution
Carbon: 3 = x_2 ⇒ x_2 = 3
Hydrogen: 8 = 2x_3 ⇒ x_3 = 4
Oxygen: 2x_1 = 2x_2 + x_3 ⇒ x_1 = \frac{2(3) + 4}{2} = 5.
Hence the balanced equation is C_3H_8 + 5O_2 → 3CO_2 + 4H_2O.
Example 2.17.
In the past three men's soccer games, a Lilongwe based team called Town Rangers averaged \frac{5}{3} goals per game. They scored the same number of goals in the most recent two games, but three games ago they scored two additional goals. How many goals did they score in each game?
Solution
Let the number of goals in the past three games be x1 , x2 and x3 with x1 being the
most recent.
That they averaged \frac{5}{3} goals means
\frac{x_1 + x_2 + x_3}{3} = \frac{5}{3} ⇒ x_1 + x_2 + x_3 = 5.
That the past two games had the same number of goals means
x1 = x2 ⇒ x1 − x2 = 0.
That three games ago there were two more goals can be represented with the equa-
tion
x3 = x2 + 2 ⇒ x3 − x2 = 2.
Thus, we have the system of linear equations
x_1 + x_2 + x_3 = 5
x_1 - x_2 = 0
x_3 - x_2 = 2.
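This small system can be solved, or a proposed answer checked, numerically; a numpy sketch:

import numpy as np

A = np.array([[1, 1, 1], [1, -1, 0], [0, -1, 1]], dtype=float)
b = np.array([5, 0, 2], dtype=float)
print(np.linalg.solve(A, b))   # [1, 1, 3]: one goal in each of the last two games, three goals three games ago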
Definition 3.1. The linear combination of the vectors {v1 , v2 , ..., vk } with scalars
r1 , r2 , ..., rk is the vector r1 v1 + r2 v2 + ... + rk vk .
Definition 3.2. A set of vectors {v_1, v_2, ..., v_k} is linearly independent if the only linear combination r_1 v_1 + r_2 v_2 + ... + r_k v_k equal to the zero vector is the one with r_1 = r_2 = ... = r_k = 0.
1. 2\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + \begin{pmatrix} 3 \\ 5 \\ 7 \end{pmatrix} - \begin{pmatrix} 5 \\ 9 \\ 13 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} and 3\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} - \begin{pmatrix} 3 \\ 5 \\ 7 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}, so the two sets of vectors \left\{ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, \begin{pmatrix} 3 \\ 5 \\ 7 \end{pmatrix}, \begin{pmatrix} 5 \\ 9 \\ 13 \end{pmatrix} \right\} and \left\{ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}, \begin{pmatrix} 3 \\ 5 \\ 7 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 2 \end{pmatrix} \right\} are linearly dependent.
2. \left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \right\} is a linearly independent subset of R^4.
1. Equate the linear combination of these vectors to the zero vector, that is
r1 v1 + r2 v2 + ... + rk vk = 0 where r’s are scalars that we have to find.
2. Solve for the scalars r_1, r_2, ..., r_k. If they must all equal zero then S is a linearly independent set; otherwise (if at least one of the r's can be non-zero) S is linearly dependent.
Example 3.5.
Show whether the set S = {(2, 0, 6), (1, 2, −4), (3, 2, 2)} is linearly dependent or
independent.
Solution
Then we have
Clearly Equation (iv) is equivalent to Equation (ii). This implies that the above
system has infinitely many solutions.
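The conclusion can be confirmed numerically. The sketch below (numpy) puts the vectors of S as the columns of a matrix and computes its rank, anticipating the rank criterion of Corollary 3.21 below: a rank smaller than the number of vectors means the set is linearly dependent.

import numpy as np

S = np.array([[2, 1, 3], [0, 2, 2], [6, -4, 2]], dtype=float)   # columns are the vectors of S
print(np.linalg.matrix_rank(S))   # 2 < 3, so S is linearly dependent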
Example 3.6.
Show whether the set V = {(1, 2, 3), (1, 0, 2), (2, 1, 5)} is linearly dependent or inde-
pendent.
Solution
Then we have
Solution
Suppose that r_1 \sin x + r_2 \cos x = 0. Notice that this equation holds for all x ∈ R. If we put x = 0 we get r_1 · 0 + r_2 · 1 = 0, and putting x = \frac{\pi}{2} we get r_1 · 1 + r_2 · 0 = 0, so we must have r_1 = r_2 = 0.
Proof. Since v1 , v2 , ..., vk , vk+1 are linearly dependent, there exist scalars r1 , r2 , ..., rk , rk+1
not all zeros such that
Proof. Exercise.
Exercise 3.11.
2. Determine the value of k such that the set {(1, 2, 1), (k, 3, 1), (2, k, 0)} is lin-
early dependent in R3 .
3. Find the values of h for which the set of vectors \left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} h \\ 1 \\ -h \end{pmatrix}, \begin{pmatrix} 1 \\ 2h \\ 3h + 1 \end{pmatrix} \right\} is linearly independent.
5. Show that any set of vectors with the zero vector is linearly dependent.
Definition 3.12. The rank of a matrix A denoted rank(A) is the maximum number
of rows in A which are linearly independent.
Recall that two matrices A and B are row equivalent if we can convert A to B by
applying a sequence of elementary row operations.
Lemma 3.13. If A and B are row equivalent, then they have the same rank
Proof. Exercise.
Lemma 3.14. If a matrix A is in row echelon form, then rank(A) is the number
of non-zero rows of A
Proof. Exercise.
Lemma 3.13 and Lemma 3.14 suggest that to find rank of a matrix A, convert A to
a matrix A0 of row echelon form, then, count the number of non-zero rows of A0 .
Example 3.15.
Compute the rank of the matrix A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \\ 3 & 4 \end{pmatrix}.
Solution
\begin{pmatrix} 1 & 2 \\ 0 & 1 \\ 3 & 4 \end{pmatrix} \xrightarrow{R_3 = R_3 - 3R_1} \begin{pmatrix} 1 & 2 \\ 0 & 1 \\ 0 & -2 \end{pmatrix} \xrightarrow{R_3 = R_3 + 2R_2} \begin{pmatrix} 1 & 2 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}.
Hence the rank of A is 2.
Example 3.16.
Compute the rank of the matrix B = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 3 & 2 & 1 & 0 \end{pmatrix}.
Solution
\begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 3 & 2 & 1 & 0 \end{pmatrix} \xrightarrow{R_2 = R_2 - 5R_1,\; R_3 = R_3 - 3R_1} \begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & -4 & -8 & -12 \\ 0 & -4 & -8 & -12 \end{pmatrix} \xrightarrow{R_3 = R_3 - R_2,\; R_2 = -\frac{1}{4}R_2} \begin{pmatrix} 1 & 2 & 3 & 4 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
Hence, B has rank 2.
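numpy provides a direct rank computation, which can be used to check the hand reductions in Examples 3.15 and 3.16 (a sketch):

import numpy as np

A = np.array([[1, 2], [0, 1], [3, 4]])
B = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [3, 2, 1, 0]])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))   # 2 2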
Lemma 3.17. The rank of a matrix A is the same as the rank of its transpose, AT .
Proof. Exercise.
2. If the linear system is consistent, then it has a unique solution if and only if
rank(A) = rank(A|b) = n.
Proof. Exercise.
Example 3.19.
2x1 + 2x2 − x3 = 1
4x1 + 2x3 = 2
6x_2 - x_3 = 4
Solution
We have
A = \begin{pmatrix} 2 & 2 & -1 \\ 4 & 0 & 2 \\ 0 & 6 & -1 \end{pmatrix} and A_b = \left(\begin{array}{ccc|c} 2 & 2 & -1 & 1 \\ 4 & 0 & 2 & 2 \\ 0 & 6 & -1 & 4 \end{array}\right).
\begin{pmatrix} 2 & 2 & -1 \\ 4 & 0 & 2 \\ 0 & 6 & -1 \end{pmatrix} \xrightarrow{R_2 = R_2 - 2R_1} \begin{pmatrix} 2 & 2 & -1 \\ 0 & -4 & 4 \\ 0 & 6 & -1 \end{pmatrix} \xrightarrow{R_3 = R_3 + \frac{6}{4}R_2,\; R_2 = -\frac{1}{4}R_2} \begin{pmatrix} 2 & 2 & -1 \\ 0 & 1 & -1 \\ 0 & 0 & 5 \end{pmatrix} \xrightarrow{R_1 = \frac{1}{2}R_1,\; R_3 = \frac{1}{5}R_3} \begin{pmatrix} 1 & 1 & -\frac{1}{2} \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}.
The last matrix is in row echelon form and all the three row vectors are non-zero.
This gives rank(A) = 3.
For A_b,
\left(\begin{array}{ccc|c} 2 & 2 & -1 & 1 \\ 4 & 0 & 2 & 2 \\ 0 & 6 & -1 & 4 \end{array}\right) \xrightarrow{R_2 = R_2 - 2R_1} \left(\begin{array}{ccc|c} 2 & 2 & -1 & 1 \\ 0 & -4 & 4 & 0 \\ 0 & 6 & -1 & 4 \end{array}\right) \xrightarrow{R_3 = R_3 + \frac{6}{4}R_2,\; R_2 = -\frac{1}{4}R_2} \left(\begin{array}{ccc|c} 2 & 2 & -1 & 1 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 5 & 4 \end{array}\right)
\xrightarrow{R_1 = \frac{1}{2}R_1,\; R_3 = \frac{1}{5}R_3} \left(\begin{array}{ccc|c} 1 & 1 & -\frac{1}{2} & \frac{1}{2} \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & \frac{4}{5} \end{array}\right).
This gives rank(A_b) = 3. Since rank(A) = rank(A_b) = 3 = n, the system is consistent and has a unique solution.
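The rank comparison can be automated; the sketch below (numpy) computes rank(A) and rank(A_b) for Example 3.19.

import numpy as np

A = np.array([[2, 2, -1], [4, 0, 2], [0, 6, -1]], dtype=float)
b = np.array([[1], [2], [4]], dtype=float)
Ab = np.hstack([A, b])                      # the augmented matrix (A|b)

print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(Ab))   # 3 3: equal to n, so a unique solution exists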
Proof. Either a system of linear equations has no solution, a unique solution or infinitely many solutions. Since x = 0 is always a solution, the first case is eliminated as a possibility.
Recall that a set of k vectors {v_1, v_2, ..., v_k} is linearly independent if the equation
\sum_{i=1}^{k} r_i v_i = 0
has only the trivial solution r_1 = r_2 = ... = r_k = 0.
Determining whether the vectors are linearly independent or dependent is the same
as determining whether the homogeneous system Ar = 0 has a nontrivial solution.
Combining this observation with Corollary 3.20 gives another way to check for linear independence or dependence by finding the rank of a matrix.
Corollary 3.21. A set of k column vectors {v1 , v2 , ..., vk } is linearly independent
if and only if the associated matrix A = [v1 , v2 , ..., vk ] has rank(A) = k.
Example 3.22.
Determine whether the vectors below are linearly independent or linearly dependent.
v_1 = \begin{pmatrix} 1 \\ -1 \\ 2 \\ 1 \end{pmatrix}, v_2 = \begin{pmatrix} 2 \\ 1 \\ 1 \\ -1 \end{pmatrix}, v_3 = \begin{pmatrix} 0 \\ 1 \\ -1 \\ 11 \end{pmatrix}.
Solution
1. T (0) = 0.
2. T (−v) = −T (v) for all v ∈ Rn .
3. T (u − v) = T (u) − T (v) for all u, v ∈ Rn .
4. If
v = r_1 v_1 + r_2 v_2 + ... + r_n v_n
then
T(v) = r_1 T(v_1) + r_2 T(v_2) + ... + r_n T(v_n).
2. Similarly
T (−v) = T ((−1)v) = (−1)T (v) = −T (v).
3. By the first property of Definition 4.1, we have
T (u − v) = T (u + (−1)v) = T (u) + T ((−1)v) = T (u) − T (v).
For an m × n matrix A and v = (v_1, v_2, ..., v_n)^T ∈ R^n, define
T(v) = Av = A \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}.
Then T is a linear transformation from R^n to R^m.
Example 4.5.
1. T (−4, 5, 1).
Solution
\left(\begin{array}{ccc|c} 1 & 0 & 0 & \frac{4}{7} \\ 0 & 1 & 0 & \frac{20}{7} \\ 0 & 0 & 1 & \frac{11}{7} \end{array}\right).
Hence v_1 = \frac{4}{7}, v_2 = \frac{20}{7} and v_3 = \frac{11}{7}, and the preimage of (4, 1, −1) is \left(\frac{4}{7}, \frac{20}{7}, \frac{11}{7}\right).
Example 4.6.
Determine whether the transformation T : R^2 → R^2 given by
T(x, y) = (x^2, y)
is linear.
Solution
We have T(1, 0) + T(1, 0) = (1, 0) + (1, 0) = (2, 0), but T((1, 0) + (1, 0)) = T(2, 0) = (4, 0) ≠ (2, 0), so T is not linear.
Example 4.7.
T (1, 0, 0) = (2, 4, −1), T (0, 1, 0) = (1, 3, −2) and T (0, 0, 1) = (0, −2, 2).
Solution
We have
(−2, 4, −1) = −2(1, 0, 0) + 4(0, 1, 0) − (0, 0, 1).
So T(−2, 4, −1) = −2T(1, 0, 0) + 4T(0, 1, 0) − T(0, 0, 1) = −2(2, 4, −1) + 4(1, 3, −2) − (0, −2, 2) = (0, 6, −8).
Example 4.8.
Find
1. T (1, 4).
2. T (−2, 1)
Solution
1. First we write
(1, 4) = a(1, 1) + b(1, −1).
Solving we get a = 2.5 and b = −1.5, so
T(1, 4) = 2.5T(1, 1) − 1.5T(1, −1) = 2.5(0, 2) − 1.5(2, 0) = (−3, 5).
2. Writing
(−2, 1) = a(1, 1) + b(1, −1)
and solving for a and b gives a = −0.5 and b = −1.5.
Hence
T (−2, 1) = −0.5T (1, 1) − 1.5T (1, −1) = −0.5(0, 2) − 1.5(2, 0) = (−3, −1).
T : R^n → R^m, T(v) = Av.
Let e_j denote the column vector in R^n with 1 in the j-th position and 0 in every other position (so e_n = (0, 0, ..., 0, 1)^T). The set B = {e_1, e_2, ..., e_n} is called the standard basis of the vector space R^n (but this is a discussion for another day). Write
T(e_1) = \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix}, T(e_2) = \begin{pmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{pmatrix}, ..., T(e_n) = \begin{pmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{pmatrix}.
Let A be the m × n matrix whose columns are T(e_1), T(e_2), ..., T(e_n), and let v = (v_1, v_2, ..., v_n)^T ∈ R^n. Now
Av = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix} = v_1 \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix} + v_2 \begin{pmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{pmatrix} + ... + v_n \begin{pmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{pmatrix}
= v_1 T(e_1) + v_2 T(e_2) + ... + v_n T(e_n)
= T(v_1 e_1 + v_2 e_2 + ... + v_n e_n)
= T(v).
Example 4.10.
Solution
We have T : R^3 → R^3. We write vectors v ∈ R^3 as columns v = \begin{pmatrix} x \\ y \\ z \end{pmatrix} instead of (x, y, z). Let
e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} and e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} in R^3.
We have
T(e_1) = \begin{pmatrix} 5 \\ 0 \\ 5 \end{pmatrix}, T(e_2) = \begin{pmatrix} -3 \\ 4 \\ 3 \end{pmatrix} and T(e_3) = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix},
so the standard matrix of T is A = \begin{pmatrix} 5 & -3 & 1 \\ 0 & 4 & 2 \\ 5 & 3 & 0 \end{pmatrix}.
Let T (x, y, z) = (2x+y, 3y −z) be a linear transformation. Write down the standard
matrix of T and use it to find T (0, 1, −1).
Solution
Here T : R^3 → R^2. With
e_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, e_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} and e_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
we have
T(e_1) = \begin{pmatrix} 2 \\ 0 \end{pmatrix}, T(e_2) = \begin{pmatrix} 1 \\ 3 \end{pmatrix} and T(e_3) = \begin{pmatrix} 0 \\ -1 \end{pmatrix},
so the standard matrix is A = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 3 & -1 \end{pmatrix} and T(0, 1, -1) = A \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}.
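The recipe "the columns of A are the images of the standard basis vectors" is easy to code. The sketch below (numpy) builds the standard matrix of T(x, y, z) = (2x + y, 3y − z) and applies it to (0, 1, −1).

import numpy as np

def T(v):
    x, y, z = v
    return np.array([2 * x + y, 3 * y - z])

E = np.eye(3)                                          # columns of E are e1, e2, e3
A = np.column_stack([T(E[:, j]) for j in range(3)])    # standard matrix of T
print(A)                                               # [[2 1 0], [0 3 -1]]
print(A @ np.array([0, 1, -1]))                        # [1 4], i.e. T(0, 1, -1) = (1, 4)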
Example 4.12.
Solution
1. In this case T : R^2 → R^2. With e_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix} and e_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix} we have
T(e_1) = \begin{pmatrix} 0 \\ 1 \end{pmatrix} and T(e_2) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, so the standard matrix is A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
2. We know that T (3, 4) = (4, 3) but we want to find the same answer using the
standard matrix.
A \begin{pmatrix} 3 \\ 4 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 3 \\ 4 \end{pmatrix} = \begin{pmatrix} 4 \\ 3 \end{pmatrix}.
Hence T (3, 4) = (4, 3).
and
r sin(α + θ) = r sin α cos θ + r cos α sin θ = y cos θ + x sin θ.
Hence
T(x, y) = \begin{pmatrix} \cos θ & -\sin θ \\ \sin θ & \cos θ \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}.
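The rotation matrix can be used directly in code; this sketch (numpy) rotates the point (1, 0) counterclockwise by 90°, where it should land on (0, 1). Exercise 4.14 can be checked the same way with θ = 120°.

import numpy as np

theta = np.radians(90)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.round(R @ np.array([1, 0])))   # [0, 1]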
Exercise 4.14.
Let T be the counterclockwise rotation in R2 by the angle 120◦ . Write down the
standard matrix of T and find T (2, 2).
Example 4.15.