W. Liu - Training to all new Teaching Assistants
Linear Algebra
Wanmin Liu
Department of Mathematics
Hong Kong University of Science and Technology
August 2012
This is a tutorial training demo for all new Teaching Assistants. It contains ten
typical problems in Linear Algebra. Each new TA is randomly assigned a problem
and presents it on the blackboard to the other new TAs. The pros and cons of each
presentation are highlighted and discussed after the presentation. Typical teaching
techniques, tricks, and mistakes are also emphasised.
Problem 1. Solve the following system of linear equations:
\[
\begin{cases}
2x_1 + 7x_2 + 3x_3 + x_4 = 6\\
3x_1 + 5x_2 + 2x_3 + 2x_4 = 4\\
9x_1 + 4x_2 + x_3 + 7x_4 = 2
\end{cases}
\]
Keywords. Elementary row operations, rank, solution structure of linear systems of equations.
Suggested Solution. Let $A$ be the coefficient matrix. We do elementary row operations on the augmented matrix $\bar A$:
\[
\bar A =
\begin{pmatrix}
2 & 7 & 3 & 1 & 6\\
3 & 5 & 2 & 2 & 4\\
9 & 4 & 1 & 7 & 2
\end{pmatrix}
\to
\begin{pmatrix}
1 & -2 & -1 & 1 & -2\\
0 & 11 & 5 & -1 & 10\\
0 & 11 & 5 & -1 & 10
\end{pmatrix}
\to
\begin{pmatrix}
1 & -2 & -1 & 1 & -2\\
0 & 1 & \frac{5}{11} & -\frac{1}{11} & \frac{10}{11}\\
0 & 0 & 0 & 0 & 0
\end{pmatrix}
\to
\begin{pmatrix}
1 & 0 & -\frac{1}{11} & \frac{9}{11} & -\frac{2}{11}\\
0 & 1 & \frac{5}{11} & -\frac{1}{11} & \frac{10}{11}\\
0 & 0 & 0 & 0 & 0
\end{pmatrix}.
\]
Since $\operatorname{rank}(A) = \operatorname{rank}(\bar A) = 2 < n = 4$, the system has infinitely many solutions. Setting $x_3 = x_4 = 0$, we find $x_1 = -\frac{2}{11}$, $x_2 = \frac{10}{11}$, and a particular solution
\[
\eta = \left(-\tfrac{2}{11},\ \tfrac{10}{11},\ 0,\ 0\right)^T.
\]
Setting $x_3 = 1$, $x_4 = 0$ in the associated homogeneous system, we obtain $x_1 = \frac{1}{11}$, $x_2 = -\frac{5}{11}$. Setting $x_3 = 0$, $x_4 = 1$, we obtain $x_1 = -\frac{9}{11}$, $x_2 = \frac{1}{11}$. Let
\[
\xi_1 = \left(\tfrac{1}{11},\ -\tfrac{5}{11},\ 1,\ 0\right)^T, \qquad
\xi_2 = \left(-\tfrac{9}{11},\ \tfrac{1}{11},\ 0,\ 1\right)^T.
\]
The general solution of the system is $x = \eta + k_1\xi_1 + k_2\xi_2$ for arbitrary numbers $k_1$ and $k_2$.
Problem 2. For the various cases of the two numbers $a$ and $b$, find the solution(s) of the
following system of linear equations:
\[
\begin{cases}
x_1 + x_2 - x_3 = 1\\
2x_1 + (a+2)x_2 - (b+2)x_3 = 3\\
-3a x_2 + (a+2b)x_3 = -3
\end{cases}
\]
Keywords. Determinant, Cramer's rule, elementary row operations.
Suggested Solution. The number of equations and the number of unknowns are the same.
Let $A$ be the coefficient matrix of the system. Then
\[
|A| =
\begin{vmatrix}
1 & 1 & -1\\
2 & a+2 & -(b+2)\\
0 & -3a & a+2b
\end{vmatrix}
= a(a-b).
\]
We discuss the following three cases.

1). $|A| \neq 0$, i.e., $a \neq 0$ and $a \neq b$. In this case, the system has a unique solution. We can use Cramer's rule to find it:
\[
x_2 = \frac{1}{a(a-b)}
\begin{vmatrix}
1 & 1 & -1\\
2 & 3 & -(b+2)\\
0 & -3 & a+2b
\end{vmatrix}
= \frac{1}{a}, \qquad
x_3 = \frac{1}{a(a-b)}
\begin{vmatrix}
1 & 1 & 1\\
2 & a+2 & 3\\
0 & -3a & -3
\end{vmatrix}
= 0, \qquad
x_1 = 1 - x_2 + x_3 = 1 - \frac{1}{a}.
\]
2). $a = 0$. In this case we do elementary row operations on the augmented matrix:
\[
\begin{pmatrix}
1 & 1 & -1 & 1\\
2 & 2 & -(b+2) & 3\\
0 & 0 & 2b & -3
\end{pmatrix}
\to
\begin{pmatrix}
1 & 1 & -1 & 1\\
0 & 0 & -b & 1\\
0 & 0 & 2b & -3
\end{pmatrix}
\to
\begin{pmatrix}
1 & 1 & -1 & 1\\
0 & 0 & -b & 1\\
0 & 0 & 0 & -1
\end{pmatrix}.
\]
So the system has no solution.
3). $a = b \neq 0$. We do elementary row operations on the augmented matrix:
\[
\begin{pmatrix}
1 & 1 & -1 & 1\\
2 & a+2 & -(a+2) & 3\\
0 & -3a & 3a & -3
\end{pmatrix}
\to
\begin{pmatrix}
1 & 1 & -1 & 1\\
0 & a & -a & 1\\
0 & -3a & 3a & -3
\end{pmatrix}
\to
\begin{pmatrix}
1 & 1 & -1 & 1\\
0 & 1 & -1 & \frac{1}{a}\\
0 & 0 & 0 & 0
\end{pmatrix}.
\]
So
\[
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
=
\begin{pmatrix} 1 - \frac{1}{a}\\ \frac{1}{a} + k\\ k \end{pmatrix}
=
\begin{pmatrix} 1 - \frac{1}{a}\\ \frac{1}{a}\\ 0 \end{pmatrix}
+ k \begin{pmatrix} 0\\ 1\\ 1 \end{pmatrix},
\]
where $k$ is an arbitrary number.
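The case analysis can be spot-checked numerically. The sketch below (not part of the original notes; the signs of the system are the reconstruction used above, consistent with $|A| = a(a-b)$) verifies case 1) for one sample choice of $a$ and $b$:

```python
import numpy as np

def system(a, b):
    # Coefficient matrix and right-hand side of Problem 2,
    # with the sign reconstruction used in the solution above.
    A = np.array([[1, 1, -1],
                  [2, a + 2, -(b + 2)],
                  [0, -3 * a, a + 2 * b]], dtype=float)
    rhs = np.array([1, 3, -3], dtype=float)
    return A, rhs

# Case 1: a != 0 and a != b gives the unique solution (1 - 1/a, 1/a, 0).
a, b = 2.0, 5.0
A, rhs = system(a, b)
x = np.linalg.solve(A, rhs)
assert np.allclose(x, [1 - 1/a, 1/a, 0])

# The determinant agrees with the formula |A| = a(a - b).
assert np.isclose(np.linalg.det(A), a * (a - b))
```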
Problem 3. Let
\[
\alpha_1 = \begin{pmatrix}1\\0\\2\\3\end{pmatrix},\quad
\alpha_2 = \begin{pmatrix}1\\1\\3\\5\end{pmatrix},\quad
\alpha_3 = \begin{pmatrix}1\\-1\\a+2\\1\end{pmatrix},\quad
\alpha_4 = \begin{pmatrix}1\\2\\4\\a+8\end{pmatrix},\quad
\beta = \begin{pmatrix}1\\1\\b+3\\5\end{pmatrix}.
\]
1). For which values of $a$ and $b$ can $\beta$ not be represented as a linear combination of $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$?

2). For which values of $a$ and $b$ can $\beta$ be uniquely represented as a linear combination of $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$?
Keywords. Linear combination of vectors, solution structure of linear system of equations,
augmented matrix, elementary row operations, rank.
Suggested Solution. By definition, $\beta$ can be represented as a linear combination of $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$ if and only if the system
\[
x_1\alpha_1 + x_2\alpha_2 + x_3\alpha_3 + x_4\alpha_4 = \beta
\]
has a solution. Let us solve the system:
\[
\begin{cases}
x_1 + x_2 + x_3 + x_4 = 1\\
x_2 - x_3 + 2x_4 = 1\\
2x_1 + 3x_2 + (a+2)x_3 + 4x_4 = b+3\\
3x_1 + 5x_2 + x_3 + (a+8)x_4 = 5
\end{cases}
\]
Let the coefficient matrix be $A$. We do elementary row operations on the augmented matrix $\bar A$:
\[
\bar A =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1\\
0 & 1 & -1 & 2 & 1\\
2 & 3 & a+2 & 4 & b+3\\
3 & 5 & 1 & a+8 & 5
\end{pmatrix}
\to
\begin{pmatrix}
1 & 1 & 1 & 1 & 1\\
0 & 1 & -1 & 2 & 1\\
0 & 1 & a & 2 & b+1\\
0 & 2 & -2 & a+5 & 2
\end{pmatrix}
\to
\begin{pmatrix}
1 & 1 & 1 & 1 & 1\\
0 & 1 & -1 & 2 & 1\\
0 & 0 & a+1 & 0 & b\\
0 & 0 & 0 & a+1 & 0
\end{pmatrix}.
\]
1). When $a = -1$ and $b \neq 0$, we have $\operatorname{rank}(A) = 2 < 3 = \operatorname{rank}(\bar A)$, so the system has no solution. In this case, $\beta$ cannot be represented as a linear combination of $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$.
2). When $a \neq -1$, for arbitrary $b$ we have $\operatorname{rank}(A) = \operatorname{rank}(\bar A) = 4$, so the system has a unique solution:
\[
x_1 = -\frac{2b}{a+1}, \qquad
x_2 = 1 + \frac{b}{a+1}, \qquad
x_3 = \frac{b}{a+1}, \qquad
x_4 = 0.
\]
In this case,
\[
\beta = -\frac{2b}{a+1}\,\alpha_1 + \Bigl(1 + \frac{b}{a+1}\Bigr)\alpha_2 + \frac{b}{a+1}\,\alpha_3.
\]
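Both cases can be confirmed numerically; the NumPy sketch below (not in the original notes; vectors as reconstructed above) checks the unique representation for a sample $a \neq -1$ and the rank jump when $a = -1$, $b \neq 0$:

```python
import numpy as np

def vectors(a, b):
    # Columns alpha_1, ..., alpha_4 of Problem 3, and beta.
    alphas = np.array([[1, 1, 1, 1],
                       [0, 1, -1, 2],
                       [2, 3, a + 2, 4],
                       [3, 5, 1, a + 8]], dtype=float)
    beta = np.array([1, 1, b + 3, 5], dtype=float)
    return alphas, beta

a, b = 3.0, 2.0          # any a != -1 gives a unique representation
A, beta = vectors(a, b)
coeffs = np.linalg.solve(A, beta)
assert np.allclose(coeffs, [-2 * b / (a + 1), 1 + b / (a + 1), b / (a + 1), 0.0])

# a = -1, b != 0: beta is not in the span (rank jumps when beta is adjoined).
A1, beta1 = vectors(-1.0, 2.0)
assert np.linalg.matrix_rank(A1) < np.linalg.matrix_rank(np.column_stack([A1, beta1]))
```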
Problem 4. Let $A = \begin{pmatrix} 1 - \sin\theta\cos\theta & \cos^2\theta\\ -\sin^2\theta & 1 + \sin\theta\cos\theta \end{pmatrix}$.

1). Find the eigenvalues of $A$.

2). Find the corresponding eigenvectors.

3). Find an invertible $2 \times 2$ matrix $P$ such that $\tilde A = P^{-1}AP$ is in Jordan normal form.
Keywords. Eigenvalue, eigenvector, Jordan normal form.
Suggested Solution.

1).
\[
|A - \lambda I| =
\begin{vmatrix}
1 - \sin\theta\cos\theta - \lambda & \cos^2\theta\\
-\sin^2\theta & 1 + \sin\theta\cos\theta - \lambda
\end{vmatrix}
= (\lambda - 1)^2.
\]
The eigenvalue of $A$ is $1$, with multiplicity $2$.

2). Let us solve the equation $(A - \lambda I)x = 0$ for $\lambda = 1$:
\[
\begin{pmatrix}
-\sin\theta\cos\theta & \cos^2\theta\\
-\sin^2\theta & \sin\theta\cos\theta
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2 \end{pmatrix}
=
\begin{pmatrix} 0\\ 0 \end{pmatrix}
\;\Longrightarrow\;
p_1 = \begin{pmatrix} x_1\\ x_2 \end{pmatrix} = \begin{pmatrix} \cos\theta\\ \sin\theta \end{pmatrix}.
\]
So the eigenvector corresponding to the eigenvalue $1$ is $p_1$ (or any nonzero scalar multiple of $p_1$).
3). We would like to find a vector $p_2$ satisfying $Ap_2 = p_1 + \lambda p_2$, i.e., $(A - \lambda I)p_2 = p_1$, where $\lambda = 1$. Solving this equation, we may take $p_2 = \begin{pmatrix} -\sin\theta\\ \cos\theta \end{pmatrix}$.

Let $P = (p_1, p_2) = \begin{pmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{pmatrix}$. Then
\[
AP = (Ap_1, Ap_2) = (\lambda p_1, p_1 + \lambda p_2)
= (p_1, p_2)\begin{pmatrix} \lambda & 1\\ 0 & \lambda \end{pmatrix}
= P \begin{pmatrix} \lambda & 1\\ 0 & \lambda \end{pmatrix}.
\]
So
\[
\tilde A = P^{-1}AP = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}.
\]
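The Jordan form can be verified numerically for any sample angle; the sketch below (not part of the original notes; the value of `theta` is arbitrary) checks the conclusion with NumPy:

```python
import numpy as np

theta = 0.7  # an arbitrary sample angle
s, c = np.sin(theta), np.cos(theta)

A = np.array([[1 - s * c, c ** 2],
              [-s ** 2, 1 + s * c]])
P = np.array([[c, -s],
              [s, c]])  # columns p1, p2; P is a rotation, hence invertible

# P^{-1} A P is the Jordan block with eigenvalue 1.
J = np.linalg.inv(P) @ A @ P
assert np.allclose(J, [[1, 1], [0, 1]])

# p1 is an eigenvector for the eigenvalue 1.
p1 = np.array([c, s])
assert np.allclose(A @ p1, p1)
```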
Problem 5. Let $A = \begin{pmatrix} 3 & 4\\ 5 & 2 \end{pmatrix}$.

1). Diagonalize $A$.

2). Let $t$ be a formal variable; compute $(tA)^2$ and $(tA)^3$.

3). Define $\exp(tA) = \sum_{n=0}^{\infty} \frac{1}{n!}(tA)^n$ as a formal sum (regardless of convergence). Compute $\exp(tA)$.
Keywords. Matrix diagonalization, eigenvalue, eigenvector, exponential of a matrix, application of linear algebra.
Suggested Solution.

1). Let us compute the eigenvalues and corresponding eigenvectors.
\[
\begin{vmatrix} 3-\lambda & 4\\ 5 & 2-\lambda \end{vmatrix}
= \lambda^2 - 5\lambda - 14 = (\lambda+2)(\lambda-7)
\;\Longrightarrow\;
\lambda_1 = -2,\ \lambda_2 = 7.
\]
For $\lambda_1 = -2$, we have
\[
\begin{pmatrix} 5 & 4\\ 5 & 4 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2 \end{pmatrix}
= \begin{pmatrix} 0\\ 0 \end{pmatrix}
\;\Longrightarrow\;
\begin{pmatrix} x_1\\ x_2 \end{pmatrix} = \begin{pmatrix} 4\\ -5 \end{pmatrix}.
\]
For $\lambda_2 = 7$, we have
\[
\begin{pmatrix} -4 & 4\\ 5 & -5 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2 \end{pmatrix}
= \begin{pmatrix} 0\\ 0 \end{pmatrix}
\;\Longrightarrow\;
\begin{pmatrix} x_1\\ x_2 \end{pmatrix} = \begin{pmatrix} 1\\ 1 \end{pmatrix}.
\]
Let $P = \begin{pmatrix} 4 & 1\\ -5 & 1 \end{pmatrix}$; then
\[
P^{-1}AP = \begin{pmatrix} -2 & 0\\ 0 & 7 \end{pmatrix},
\qquad
P^{-1} = \begin{pmatrix} \frac{1}{9} & -\frac{1}{9}\\[2pt] \frac{5}{9} & \frac{4}{9} \end{pmatrix}.
\]
2).
\[
tA = P\begin{pmatrix} -2t & 0\\ 0 & 7t \end{pmatrix}P^{-1}
\;\Longrightarrow\;
(tA)^2 = P\begin{pmatrix} -2t & 0\\ 0 & 7t \end{pmatrix}P^{-1}
P\begin{pmatrix} -2t & 0\\ 0 & 7t \end{pmatrix}P^{-1}
= P\begin{pmatrix} (-2t)^2 & 0\\ 0 & (7t)^2 \end{pmatrix}P^{-1}
= \begin{pmatrix} 29t^2 & 20t^2\\ 25t^2 & 24t^2 \end{pmatrix}.
\]
Similarly,
\[
(tA)^3 = P\begin{pmatrix} (-2t)^3 & 0\\ 0 & (7t)^3 \end{pmatrix}P^{-1}
= \begin{pmatrix} 187t^3 & 156t^3\\ 195t^3 & 148t^3 \end{pmatrix}.
\]
3). We find that
\[
(tA)^n = P\begin{pmatrix} (-2t)^n & 0\\ 0 & (7t)^n \end{pmatrix}P^{-1}.
\]
So we obtain
\[
\exp(tA) = \sum_{n=0}^{\infty}\frac{1}{n!}\,
P\begin{pmatrix} (-2t)^n & 0\\ 0 & (7t)^n \end{pmatrix}P^{-1}
= P\begin{pmatrix} \sum_{n=0}^{\infty}\frac{(-2t)^n}{n!} & 0\\[2pt]
0 & \sum_{n=0}^{\infty}\frac{(7t)^n}{n!} \end{pmatrix}P^{-1}
= P\begin{pmatrix} \exp(-2t) & 0\\ 0 & \exp(7t) \end{pmatrix}P^{-1}
\]
\[
= \frac{1}{9}\begin{pmatrix}
4\exp(-2t) + 5\exp(7t) & -4\exp(-2t) + 4\exp(7t)\\
-5\exp(-2t) + 5\exp(7t) & 5\exp(-2t) + 4\exp(7t)
\end{pmatrix}.
\]
Remark. When you learn some theory of linear differential equations, you will see that the above is the typical process for solving the equation $x'(t) = Ax(t)$, where $x(t)$ is an unknown vector function of $t$. The general solution is $x(t) = \exp(tA)x_0$, where $x_0$ is a given initial vector. This is an application of linear algebra to solving ODEs, and you can already see the power of matrix diagonalization.
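The closed form for $\exp(tA)$ can be compared against a truncated power series. The sketch below (not in the original notes; the sample value of `t` and the truncation length are arbitrary choices) does exactly that:

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [5.0, 2.0]])
t = 0.3  # sample value of the formal variable

# Closed form derived above via diagonalization.
e2, e7 = np.exp(-2 * t), np.exp(7 * t)
closed = (1 / 9) * np.array([[4 * e2 + 5 * e7, -4 * e2 + 4 * e7],
                             [-5 * e2 + 5 * e7, 5 * e2 + 4 * e7]])

# Truncated power series sum_{n=0}^{N} (tA)^n / n!.
series = np.zeros((2, 2))
term = np.eye(2)
for n in range(1, 40):
    series += term
    term = term @ (t * A) / n

assert np.allclose(series, closed)
```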
Problem 6. Let $u = (6,\ a+1,\ 3)^T$, $v = (a,\ 2,\ -2)^T$, $w = (a,\ 1,\ 0)^T$.

1). For which values of $a$ are $u$ and $v$ linearly dependent or linearly independent? When they are linearly dependent, write down a linear relation between them.

2). For which values of $a$ are $u$, $v$ and $w$ linearly dependent or linearly independent? When they are linearly dependent, write down a linear relation between them.
Keywords. Linear dependence/independence, elementary row operations, determinant.
Suggested Solution.

1). Setting $x_1 u + x_2 v = 0$, we obtain a system of linear equations:
\[
\begin{cases}
6x_1 + a x_2 = 0\\
(a+1)x_1 + 2x_2 = 0\\
3x_1 - 2x_2 = 0
\end{cases}
\]
We do elementary row operations on the augmented matrix:
\[
\begin{pmatrix}
6 & a & 0\\
a+1 & 2 & 0\\
3 & -2 & 0
\end{pmatrix}
\to
\begin{pmatrix}
6 & a & 0\\[2pt]
0 & -\frac{a^2+a-12}{6} & 0\\[2pt]
0 & -\frac{a+4}{2} & 0
\end{pmatrix}.
\]
When
\[
\begin{cases}
a^2 + a - 12 = 0\\
a + 4 = 0
\end{cases}
\]
i.e., $a = -4$, the system has nonzero solutions, hence $u$ and $v$ are linearly dependent. We can take $x_1 = 2$, $x_2 = 3$, so $2u + 3v = 0$. When $a \neq -4$, the system has only the zero solution, hence $u$ and $v$ are linearly independent.
2). Let us compute the determinant of the matrix formed by $u$, $v$, $w$:
\[
\begin{vmatrix}
6 & a & a\\
a+1 & 2 & 1\\
3 & -2 & 0
\end{vmatrix}
= -2a^2 - 5a + 12 = -(a+4)(2a-3).
\]
When $a = -4$ or $a = \frac{3}{2}$, the vectors $u$, $v$, $w$ are linearly dependent; otherwise they are linearly independent.
i). The case $a = \frac{3}{2}$. Write $x_1 u + x_2 v + x_3 w = 0$. We do elementary row operations on the augmented matrix:
\[
\begin{pmatrix}
6 & \frac{3}{2} & \frac{3}{2} & 0\\[2pt]
\frac{5}{2} & 2 & 1 & 0\\[2pt]
3 & -2 & 0 & 0
\end{pmatrix}
\to
\begin{pmatrix}
1 & -\frac{2}{3} & 0 & 0\\[2pt]
0 & 1 & \frac{3}{11} & 0\\[2pt]
0 & 0 & 0 & 0
\end{pmatrix}.
\]
We may take $x_3 = 11$, $x_2 = -3$, $x_1 = -2$, i.e., $-2u - 3v + 11w = 0$.
ii). The case $a = -4$. We can still use the relation from part 1): $2u + 3v + 0\cdot w = 0$.
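Both dependence relations can be checked directly; the sketch below (not in the original notes; vectors as given in the problem) verifies them with NumPy:

```python
import numpy as np

def uvw(a):
    # The three vectors of Problem 6 as functions of a.
    u = np.array([6.0, a + 1, 3.0])
    v = np.array([a, 2.0, -2.0])
    w = np.array([a, 1.0, 0.0])
    return u, v, w

# a = -4: u and v are dependent, with 2u + 3v = 0.
u, v, w = uvw(-4.0)
assert np.allclose(2 * u + 3 * v, 0)

# a = 3/2: u, v, w are dependent, with -2u - 3v + 11w = 0.
u, v, w = uvw(1.5)
assert np.allclose(-2 * u - 3 * v + 11 * w, 0)
assert np.isclose(np.linalg.det(np.column_stack([u, v, w])), 0)

# A generic a gives three independent vectors (nonzero determinant).
u, v, w = uvw(1.0)
assert abs(np.linalg.det(np.column_stack([u, v, w]))) > 1e-9
```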
Problem 7. Suppose that $A$ is an $n \times n$ matrix with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and corresponding eigenvectors $\xi_1, \xi_2, \ldots, \xi_n$.

1). What are the eigenvalues and corresponding eigenvectors of $kA$ ($k$ is a nonzero constant)?

2). What are the eigenvalues and corresponding eigenvectors of $A^m$ ($m$ is a positive integer)?

3). Assume that $A$ is invertible. What are the eigenvalues and corresponding eigenvectors of $A^{-1}$?

4). Assume that $A$ is invertible. Let $\operatorname{adj}(A)$ be the adjugate matrix of $A$, so we have Laplace's formula $A\operatorname{adj}(A) = \operatorname{adj}(A)A = |A| I_n$. What are the eigenvalues and corresponding eigenvectors of $\operatorname{adj}(A)$?
5). Let $P$ be an $n \times n$ invertible matrix. What are the eigenvalues and corresponding eigenvectors of $P^{-1}AP$?

6). Let $f(x) = x^m + c_1 x^{m-1} + \cdots + c_{m-1}x + c_m$ ($m$ is a positive integer). What are the eigenvalues and corresponding eigenvectors of $f(A)$?

7). What are the eigenvalues of $A^T$?
Keywords. Eigenvalue, eigenvector, adjugate matrix, matrix polynomial, determinant.
Suggested Solution. By definition we have $A\xi_i = \lambda_i \xi_i$.

1). Since $kA\xi_i = k\lambda_i \xi_i$, $kA$ has eigenvalues $k\lambda_1, k\lambda_2, \ldots, k\lambda_n$, with corresponding eigenvectors $\xi_1, \xi_2, \ldots, \xi_n$.

2). Since $A^m \xi_i = A^{m-1}A\xi_i = \lambda_i A^{m-1}\xi_i = \cdots = \lambda_i^m \xi_i$, $A^m$ has eigenvalues $\lambda_1^m, \lambda_2^m, \ldots, \lambda_n^m$, with corresponding eigenvectors $\xi_1, \xi_2, \ldots, \xi_n$.
3). Since $A$ is invertible, $|A| = \lambda_1\lambda_2\cdots\lambda_n \neq 0$, so each $\lambda_i$ is nonzero. Left-multiplying the equation $A\xi_i = \lambda_i\xi_i$ by $\lambda_i^{-1}A^{-1}$, we obtain $A^{-1}\xi_i = \lambda_i^{-1}\xi_i$. So $A^{-1}$ has eigenvalues $\lambda_1^{-1}, \lambda_2^{-1}, \ldots, \lambda_n^{-1}$, with corresponding eigenvectors $\xi_1, \xi_2, \ldots, \xi_n$. Therefore, when $A$ is invertible, we may allow $m$ to be any nonzero integer in part 2) and still obtain the same results.

4). By Laplace's formula, $\operatorname{adj}(A) = |A|A^{-1}$. By parts 1) and 3), $\operatorname{adj}(A)$ has eigenvalues $\frac{|A|}{\lambda_1}, \frac{|A|}{\lambda_2}, \ldots, \frac{|A|}{\lambda_n}$, with corresponding eigenvectors $\xi_1, \xi_2, \ldots, \xi_n$.
5). We have $APP^{-1}\xi_i = \lambda_i\xi_i$. Left-multiplying by $P^{-1}$, we get $(P^{-1}AP)(P^{-1}\xi_i) = \lambda_i(P^{-1}\xi_i)$. So $P^{-1}AP$ has eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, with corresponding eigenvectors $P^{-1}\xi_1, P^{-1}\xi_2, \ldots, P^{-1}\xi_n$.

6). By parts 1) and 2), we obtain that $f(A)$ has eigenvalues $f(\lambda_1), f(\lambda_2), \ldots, f(\lambda_n)$, with corresponding eigenvectors $\xi_1, \xi_2, \ldots, \xi_n$.

7). Since $|A^T - \lambda I| = |(A - \lambda I)^T| = |A - \lambda I|$, the matrices $A^T$ and $A$ have the same eigenvalues.
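These eigenvalue facts are easy to confirm on a concrete matrix. The sketch below (not in the original notes; the sample matrix and the values of `k`, `m` are arbitrary) checks parts 1), 2), 3), and 7) with NumPy:

```python
import numpy as np

# A sample diagonalizable matrix with distinct real eigenvalues (-2 and 7).
A = np.array([[3.0, 4.0],
              [5.0, 2.0]])
evals, evecs = np.linalg.eig(A)   # columns of evecs are eigenvectors

k, m = 2.5, 3
for lam, xi in zip(evals, evecs.T):
    assert np.allclose((k * A) @ xi, (k * lam) * xi)                    # part 1
    assert np.allclose(np.linalg.matrix_power(A, m) @ xi, lam**m * xi)  # part 2
    assert np.allclose(np.linalg.inv(A) @ xi, xi / lam)                 # part 3

# Part 7: A^T has the same eigenvalues (possibly in another order).
assert np.allclose(sorted(np.linalg.eigvals(A.T)), sorted(evals))
```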
Problem 8. Let $A$ be the real symmetric matrix $\begin{pmatrix} 4 & 2 & 2\\ 2 & 4 & 2\\ 2 & 2 & 4 \end{pmatrix}$. Find an orthogonal matrix $T$ which diagonalizes $A$.
Keywords. Real symmetric matrix, orthogonal matrix, inner product, Gram-Schmidt process.
Suggested Solution. We first find the eigenvalues and corresponding eigenvectors of $A$.
\[
|A - \lambda I| =
\begin{vmatrix}
4-\lambda & 2 & 2\\
2 & 4-\lambda & 2\\
2 & 2 & 4-\lambda
\end{vmatrix}
=
\begin{vmatrix}
8-\lambda & 2 & 2\\
8-\lambda & 4-\lambda & 2\\
8-\lambda & 2 & 4-\lambda
\end{vmatrix}
= (8-\lambda)
\begin{vmatrix}
1 & 2 & 2\\
1 & 4-\lambda & 2\\
1 & 2 & 4-\lambda
\end{vmatrix}
\]
\[
= (8-\lambda)
\begin{vmatrix}
1 & 0 & 0\\
1 & 2-\lambda & 0\\
1 & 0 & 2-\lambda
\end{vmatrix}
= (8-\lambda)(2-\lambda)^2.
\]
For $\lambda_1 = 8$, we solve
\[
\begin{pmatrix}
-4 & 2 & 2\\
2 & -4 & 2\\
2 & 2 & -4
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
= \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}
\]
and obtain an eigenvector $\xi_1 = (1, 1, 1)^T$.

For $\lambda_2 = \lambda_3 = 2$ (multiplicity 2), we solve
\[
\begin{pmatrix}
2 & 2 & 2\\
2 & 2 & 2\\
2 & 2 & 2
\end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
= \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}
\]
and obtain two linearly independent eigenvectors $\xi_2 = (1, -1, 0)^T$, $\xi_3 = (1, 0, -1)^T$.

$\xi_1$ is already orthogonal to $\xi_2$ and $\xi_3$ (this is not a coincidence; why?). Next we apply the Gram-Schmidt process to $\xi_2$ and $\xi_3$.
Take
\[
\eta_2 = \xi_2, \qquad
\eta_3 = \xi_3 - \frac{(\xi_3, \eta_2)}{(\eta_2, \eta_2)}\,\eta_2
= \begin{pmatrix} 1\\ 0\\ -1 \end{pmatrix}
- \frac{1}{2}\begin{pmatrix} 1\\ -1\\ 0 \end{pmatrix}
= \begin{pmatrix} \frac{1}{2}\\[2pt] \frac{1}{2}\\[2pt] -1 \end{pmatrix}.
\]
We normalize $\xi_1$, $\eta_2$, $\eta_3$ and obtain
\[
\alpha_1 = \begin{pmatrix} \frac{1}{\sqrt 3}\\[2pt] \frac{1}{\sqrt 3}\\[2pt] \frac{1}{\sqrt 3} \end{pmatrix}, \qquad
\alpha_2 = \begin{pmatrix} \frac{1}{\sqrt 2}\\[2pt] -\frac{1}{\sqrt 2}\\[2pt] 0 \end{pmatrix}, \qquad
\alpha_3 = \begin{pmatrix} \frac{1}{\sqrt 6}\\[2pt] \frac{1}{\sqrt 6}\\[2pt] -\frac{2}{\sqrt 6} \end{pmatrix}.
\]
Let
\[
T = (\alpha_1, \alpha_2, \alpha_3) =
\begin{pmatrix}
\frac{1}{\sqrt 3} & \frac{1}{\sqrt 2} & \frac{1}{\sqrt 6}\\[2pt]
\frac{1}{\sqrt 3} & -\frac{1}{\sqrt 2} & \frac{1}{\sqrt 6}\\[2pt]
\frac{1}{\sqrt 3} & 0 & -\frac{2}{\sqrt 6}
\end{pmatrix}.
\]
Then
\[
T^{-1}AT = \begin{pmatrix} 8 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end{pmatrix}.
\]
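As a numerical check (not part of the original notes; entries of $T$ taken from the solution above), the sketch below verifies that $T$ is orthogonal, so $T^{-1} = T^T$, and that it diagonalizes $A$:

```python
import numpy as np

A = np.array([[4.0, 2, 2],
              [2, 4, 2],
              [2, 2, 4]])

s3, s2, s6 = np.sqrt(3), np.sqrt(2), np.sqrt(6)
T = np.array([[1/s3,  1/s2,  1/s6],
              [1/s3, -1/s2,  1/s6],
              [1/s3,  0,    -2/s6]])

# T is orthogonal: T^T T = I, hence T^{-1} = T^T.
assert np.allclose(T.T @ T, np.eye(3))

# T^T A T is the diagonal matrix of eigenvalues.
assert np.allclose(T.T @ A @ T, np.diag([8.0, 2.0, 2.0]))
```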
Remark.

1). There are many ways to choose $\xi_2$ and $\xi_3$, so the matrix $T$ is not unique.

2). For a real symmetric matrix $A$, if $A\xi = \lambda_1\xi$, $A\eta = \lambda_2\eta$, and $\lambda_1 \neq \lambda_2$, then $\xi$ and $\eta$ are orthogonal. The proof is as follows: $\lambda_1(\xi, \eta) = (A\xi, \eta) = (\xi, A\eta) = \lambda_2(\xi, \eta)$, and since $\lambda_1 \neq \lambda_2$, we conclude $(\xi, \eta) = 0$.
Problem 9. Let $C[0,1]$ be the set of all real-valued continuous functions on the interval $[0,1]$.

1). For any $f, g \in C[0,1]$ and a scalar $a \in \mathbb{R}$, define
\[
(f+g)(t) = f(t) + g(t), \qquad (af)(t) = a f(t).
\]
Check that $C[0,1]$ with the above addition and scalar multiplication is a vector space over $\mathbb{R}$. This is an example of an infinite-dimensional vector space.

2). For any $f, g \in C[0,1]$, define $\langle f, g\rangle = \int_0^1 f(t)g(t)\,dt$. Check that $\langle\cdot,\cdot\rangle$ is an inner product on the vector space $C[0,1]$.

3). Let $V$ be the subspace of functions generated by the two functions $f(t) = t$, $g(t) = t^2$. Find an orthonormal basis for $V$.
Keywords. Example of vector space, inner product, orthonormal basis, Gram-Schmidt process, application of linear algebra.
Suggested Solution.

1). For any $f, g, h \in C[0,1]$ and $a, b \in \mathbb{R}$, we check the following axioms.

- Associativity of addition: $f + (g+h) = (f+g) + h$.
- Commutativity of addition: $f + g = g + f$.
- Identity element of addition: there exists an element $0 \in C[0,1]$, called the zero vector, such that $f + 0 = f$. Here $0$ is the function which maps every element of $[0,1]$ to the value $0$.
- Inverse elements of addition: for every $f \in C[0,1]$, there exists an element $-f \in C[0,1]$, called the additive inverse of $f$, such that $f + (-f) = 0$.
- Distributivity of scalar multiplication with respect to vector addition: $a(f+g) = af + ag$.
- Distributivity of scalar multiplication with respect to field addition: $(a+b)f = af + bf$.
- Compatibility of scalar multiplication with field multiplication: $a(bf) = (ab)f$.
- Identity element of scalar multiplication: $1f = f$, where $1$ denotes the multiplicative identity in $\mathbb{R}$.
2). We can check easily that:

- $\langle f, g\rangle = \int_0^1 f(t)g(t)\,dt = \langle g, f\rangle$.
- $\langle af, g\rangle = \int_0^1 a f(t)g(t)\,dt = a\langle f, g\rangle$.
- $\langle f+g, h\rangle = \int_0^1 (f(t)+g(t))h(t)\,dt = \langle f, h\rangle + \langle g, h\rangle$.
- $\langle f, f\rangle = \int_0^1 f(t)^2\,dt \ge 0$, with equality if and only if $f = 0$.
3). We apply the Gram-Schmidt process to $f(t) = t$ and $g(t) = t^2$. Let $h = g - \frac{\langle g, f\rangle}{\langle f, f\rangle} f$. Then
\[
\|f\|^2 = \langle f, f\rangle = \int_0^1 t^2\,dt = \frac{1}{3}, \qquad
h(t) = t^2 - 3\Bigl(\int_0^1 t^3\,dt\Bigr)t = t^2 - \frac{3}{4}t.
\]
Let us normalize $f$ and $h$: $\|h\|^2 = \int_0^1 \bigl(t^2 - \frac{3}{4}t\bigr)^2 dt = \frac{1}{80}$. So
\[
\frac{f}{\|f\|} = \sqrt{3}\,t \qquad\text{and}\qquad
\frac{h}{\|h\|} = \sqrt{80}\,\Bigl(t^2 - \frac{3}{4}t\Bigr)
\]
will be an orthonormal basis for $V$.
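Since all the functions involved are polynomials, the inner products can be computed exactly. The sketch below (not in the original notes; `poly_inner` is a hypothetical helper name) verifies the Gram-Schmidt computation with exact rational arithmetic:

```python
from fractions import Fraction

def poly_inner(p, q):
    """<p, q> = integral over [0, 1] of p(t) q(t) dt, computed exactly.
    Polynomials are coefficient lists [c0, c1, ...] meaning c0 + c1*t + ..."""
    total = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            # integral of t^(i+j) over [0, 1] is 1 / (i + j + 1)
            total += Fraction(a) * Fraction(b) / (i + j + 1)
    return total

f = [0, 1]                          # f(t) = t
h = [0, Fraction(-3, 4), 1]         # h(t) = t^2 - (3/4) t

assert poly_inner(f, f) == Fraction(1, 3)    # ||f||^2 = 1/3
assert poly_inner(h, h) == Fraction(1, 80)   # ||h||^2 = 1/80
assert poly_inner(f, h) == 0                 # f and h are orthogonal
```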
Remark. We can play exactly the same game for the space $C[-\pi,\pi]$ with inner product $\langle f, g\rangle = \frac{1}{\pi}\int_{-\pi}^{\pi} f(t)g(t)\,dt$. Let $n$ be an integer. Then the functions $g_n = \sin(nt)$, $n > 0$, and $h_n = \cos(nt)$, $n > 0$, together with the constant function $\frac{1}{\sqrt 2}$, form an orthonormal system. For example, we can express $t$ in terms of the above basis. This is the linear-algebra part of Fourier analysis, and it is the easy part. The difficult part is how to handle the infinite sums, i.e., the convergence problems of analysis.
Problem 10.

1). Let $V = \mathbb{R}^3$ with the standard basis $e_1, e_2, e_3$. Let $P: V \to V$ be such that $P(e_1) = e_1$, $P(e_2) = e_2$, $P(e_3) = 0$. Write down the matrix $A$ which represents the operator $P$ in the standard basis. Check that $P^2 = P$ (here $P^2$ means $P \circ P: V \to V$) and $A^2 = A$. What are the eigenvalues of $P$?

2). Let $V$ be a vector space and $P: V \to V$ a linear transformation such that $P^2 = P$.

i). What are the possible eigenvalues of $P$?

ii). Let $U = \ker(P)$, $W = \operatorname{im}(P)$, and $Q = I - P: V \to V$.

- Check that $P$ is the zero operator on $U$ and the identity operator on $W$.
- Check that $Q^2 = Q$.
- Check that $Q$ is the zero operator on $W$ and the identity operator on $U$.
- Check that for any $x \in V$, we can write $x = u + w$ for some $u \in U$ and $w \in W$.
- Check that the above decomposition is unique. Therefore $V = U \oplus W$.

Keywords. Linear transformation, projection operator, kernel, image, decomposition of space, invariant subspace.
Suggested Solution.

1). We would like to find a matrix $A$ such that
\[
P(x) = Ax, \qquad x = \begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}.
\]
Since
\[
x = \begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
= x_1\begin{pmatrix}1\\0\\0\end{pmatrix}
+ x_2\begin{pmatrix}0\\1\\0\end{pmatrix}
+ x_3\begin{pmatrix}0\\0\\1\end{pmatrix}
= (e_1, e_2, e_3)\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix},
\]
we have
\[
P(x) = P(e_1, e_2, e_3)\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
= (Pe_1, Pe_2, Pe_3)\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}.
\]
So we take
\[
A = (Pe_1, Pe_2, Pe_3) = (e_1, e_2, 0) =
\begin{pmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 0
\end{pmatrix}.
\]
It is easy to see that $P^2 = P$ and $A^2 = A$. It is obvious that $P$ has eigenvalue $1$ with corresponding eigenvectors $e_1, e_2$, and eigenvalue $0$ with corresponding eigenvector $e_3$.
2). i). Suppose that $P(x) = \lambda x$ for some nonzero vector $x$. Then $\lambda x = P(x) = P^2(x) = P(\lambda x) = \lambda^2 x$, so $(\lambda^2 - \lambda)x = 0$. Since $x$ is nonzero, the possible eigenvalues are $\lambda = 1$ or $\lambda = 0$.
ii). We already know that both the kernel $U$ and the image $W$ are invariant subspaces of $V$.

- By the definition of the kernel, $P(u) = 0$ for any $u \in U$, hence the restriction of $P$ to the subspace $U$ is the zero operator. By the definition of the image, any $w \in W$ can be written as $w = P(v)$ for some vector $v \in V$. So
\[
P(w) = P(P(v)) = P^2(v) = P(v) = w,
\]
and the restriction of $P$ to the subspace $W$ is the identity operator.
- $Q^2 = (I-P)^2 = I - 2P + P^2 = I - P = Q$.
- For any $w \in W$ we have $Q(w) = w - P(w) = w - w = 0$, so the restriction of $Q$ to $W$ is the zero operator. For any $u \in U$ we have $Q(u) = (I-P)(u) = u$, so the restriction of $Q$ to $U$ is the identity operator.
- For any $x \in V$, we can write $x = (x - P(x)) + P(x)$, where $u = x - P(x) = Q(x) \in U$ and $w = P(x) \in W$.
- If $x = \tilde u + \tilde w$ for some $\tilde u \in U$ and $\tilde w \in W$, we have $u = Q(x) = Q(\tilde u + \tilde w) = \tilde u$ and $w = P(x) = P(\tilde u + \tilde w) = \tilde w$. So the decomposition is unique.
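The projection identities and the decomposition $x = u + w$ can be demonstrated concretely with the matrix $A$ from part 1). The sketch below (not in the original notes; the sample vector is arbitrary) checks them with NumPy:

```python
import numpy as np

# The projection matrix A from part 1) and its complement Q = I - A.
A = np.diag([1.0, 1.0, 0.0])
Q = np.eye(3) - A

assert np.allclose(A @ A, A)          # A^2 = A
assert np.allclose(Q @ Q, Q)          # Q^2 = Q

# Decompose an arbitrary vector as x = u + w with u in ker(P), w in im(P).
x = np.array([2.0, -1.0, 5.0])
w = A @ x          # component in im(P)
u = Q @ x          # component in ker(P)
assert np.allclose(u + w, x)
assert np.allclose(A @ u, 0)          # P kills u
assert np.allclose(A @ w, w)          # P acts as the identity on im(P)
```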
Remark. Geometrically, $P$ is the projection operator, which is a linear operator. You can think of the above $P$ as the projection of $V$ onto $W$ along the "direction" $U$, and $Q$ as the projection of $V$ onto $U$ along the "direction" $W$. Projecting twice is the same as projecting once, so we have $P^2 = P$ and $Q^2 = Q$. If we have a nontrivial projection, meaning that neither $P$ nor $Q$ is the identity operator on $V$, then we can decompose $V$ into a direct sum of smaller spaces as above. Algebraically, if a linear operator (or a matrix) $P: V \to V$ satisfies $P^2 = P$, we simply call $P$ a projection. Such a matrix is called an idempotent matrix.