Linear Algebra (MA - 102)
Lecture - 11
Inner product spaces
Gram-Schmidt orthogonalization process
G. Arunkumar
May 5, 2022
Chapter 5: Inner Product Spaces
Inner product spaces: basic properties
Gram-Schmidt orthogonalization process
Least Square Approximation Method (Reading Material)
Notation: $e_i$ denotes the vector in $\mathbb{R}^n$ with 1 in the $i$-th position and 0 elsewhere ($n$ is clear from the context).
Motivation
In Euclidean geometry, we have notions of length of a vector, angle between
vectors, projection of a point on a plane.
Let $v \in \mathbb{R}^2$ with coordinate vector $(a, b)$ w.r.t. the ordered basis $\{e_1, e_2\}$.
Using the Pythagoras theorem, the length of $v$, denoted $\|v\|$, can be written as
$$\|v\| = \sqrt{a^2 + b^2}.$$
[Figure: the vector $v = (a, b)$ in the plane]
Motivation
Let $u \in \mathbb{R}^2$ be another vector with coordinate vector $(c, d)$ w.r.t. the ordered basis $\{e_1, e_2\}$.
Let $\theta$ be the angle between the vectors $u$ and $v$. Using the cosine law,
$$\|u\|\,\|v\|\cos\theta = ac + bd.$$
The Inner Product or The Dot Product
The inner product or the dot product of $u = (a_1, \ldots, a_n)$ and $v = (b_1, \ldots, b_n)$, denoted $\langle u, v \rangle$, is
$$\langle u, v \rangle := a_1 b_1 + \cdots + a_n b_n.$$
A vector space $V$ with an inner product $\langle \cdot, \cdot \rangle$ is called an inner product space. We write this as: $(V, \langle \cdot, \cdot \rangle)$ is an inner product space.
The dot product is a function $V \times V \to \mathbb{R}$.
The dot product has the following properties for every $u, v, w \in \mathbb{R}^n$ and $c \in \mathbb{R}$:
1. $\langle u, v \rangle = \langle v, u \rangle$ (symmetry)
2. $\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle$ (additivity)
3. $\langle cu, v \rangle = c\,\langle u, v \rangle$ (homogeneity)
4. $\langle v, v \rangle \geq 0$ with $\langle v, v \rangle = 0 \iff v = 0$ (positive definiteness)
5. If $\theta$ is the angle between $u$ and $v$, then $\cos\theta = \dfrac{\langle u, v \rangle}{\langle u, u \rangle^{1/2}\,\langle v, v \rangle^{1/2}}$
6. Thus $\theta = \pi/2$ iff $\langle u, v \rangle = 0$
The notion of dot product helps us to analyse many geometric concepts.
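As an illustration (not in the original slides), here is a minimal NumPy sketch of the dot product and the angle formula from property 5; the helper names `inner` and `angle` are my own.

```python
import numpy as np

def inner(u, v):
    """Standard inner product <u, v> = a1*b1 + ... + an*bn."""
    return float(np.dot(u, v))

def angle(u, v):
    """Angle via cos(theta) = <u, v> / (<u, u>^(1/2) <v, v>^(1/2))."""
    c = inner(u, v) / (inner(u, u) ** 0.5 * inner(v, v) ** 0.5)
    return float(np.arccos(np.clip(c, -1.0, 1.0)))  # clip guards rounding error

u, v = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(angle(u, v))                      # pi/4, about 0.785
print(inner(u, np.array([0.0, 1.0])))   # 0.0, so theta = pi/2 (property 6)
```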
The Pythagoras Theorem
Definition
Let $V$ be an inner product space. (1) Let $v \in V$. The length or norm of $v$ is $\|v\| = \sqrt{\langle v, v \rangle}$, and $v$ is a unit vector if $\|v\| = 1$.
(2) Elements $v, w$ of $V$ are said to be orthogonal or perpendicular if $\langle v, w \rangle = 0$. We write this as $v \perp w$.

Remark: If $c \in \mathbb{R}$ and $v \in V$, then $\|cv\| = \sqrt{\langle cv, cv \rangle} = \sqrt{c^2 \langle v, v \rangle} = |c|\,\|v\|$.
Theorem
(Pythagoras Theorem) If $v \perp w$, then $\|v + w\|^2 = \|v\|^2 + \|w\|^2$.
Proof: We have
$$\|v + w\|^2 = \langle v + w, v + w \rangle = \langle v, v \rangle + 2\,\langle v, w \rangle + \langle w, w \rangle = \|v\|^2 + \|w\|^2,$$
since the cross term $\langle v, w \rangle$ vanishes when $v \perp w$.
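A one-off numeric check of the theorem (a sketch, assuming NumPy):

```python
import numpy as np

v, w = np.array([3.0, 0.0]), np.array([0.0, 4.0])   # <v, w> = 0, so v ⊥ w
lhs = np.linalg.norm(v + w) ** 2                    # ||v + w||^2 = 25
rhs = np.linalg.norm(v) ** 2 + np.linalg.norm(w) ** 2
assert np.isclose(lhs, rhs)
```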
Projection of a vector onto another vector
Definition
Let $v, w \in V$ with $w \neq 0$. We define
$$p_w(v) = \frac{\langle w, v \rangle}{\langle w, w \rangle}\, w$$
to be the projection of $v$ along $w$.
[Figure: $p_w(v)$ is the point on $\mathbb{R}w$ closest to $v$]
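A small NumPy sketch of the projection formula (the helper name `proj` is my own); it also illustrates the orthogonality property proved on the next slide:

```python
import numpy as np

def proj(w, v):
    """p_w(v) = (<w, v> / <w, w>) w, the projection of v along w (w != 0)."""
    return (np.dot(w, v) / np.dot(w, w)) * w

v, w = np.array([2.0, 1.0]), np.array([1.0, 0.0])
print(proj(w, v))                            # [2. 0.]
print(np.dot(proj(w, v), v - proj(w, v)))    # 0.0: p_w(v) ⊥ (v - p_w(v))
```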
Projection of a vector onto another vector
Note that $p_w(v)$ is the point on $\mathbb{R}w$ closest to $v$.
Proposition
Let $V$ be an inner product space and $v, w \in V$ with $w \neq 0$. Then
1. $p_w(v) = p_{w/\|w\|}(v)$, i.e., the projection of $v$ along $w$ is the same as the projection of $v$ along the unit vector in the direction of $w$.
2. $p_w(v)$ and $v - p_w(v)$ are orthogonal.
3. $\|p_w(v)\| \leq \|v\|$, with equality if and only if $\{v, w\}$ are linearly dependent.
Proof: (1) We have
$$p_w(v) = \frac{\langle w, v \rangle}{\langle w, w \rangle}\, w = \frac{\langle w, v \rangle}{\|w\|^2}\, w = \left\langle \frac{w}{\|w\|}, v \right\rangle \frac{w}{\|w\|} = p_{w/\|w\|}(v).$$
Projection of a vector onto another vector
(2) In view of part (1) we may assume that $w$ is a unit vector. So
$$\begin{aligned}
\langle p_w(v), v - p_w(v) \rangle &= \langle p_w(v), v \rangle - \langle p_w(v), p_w(v) \rangle \\
&= \langle \langle w, v \rangle w, v \rangle - \langle \langle w, v \rangle w, \langle w, v \rangle w \rangle \\
&= \langle w, v \rangle \langle w, v \rangle - \langle w, v \rangle \langle w, v \rangle \langle w, w \rangle = 0.
\end{aligned}$$
(3) We have
$$\|v\|^2 = \langle v, v \rangle = \langle p_w(v) + v - p_w(v),\; p_w(v) + v - p_w(v) \rangle = \|p_w(v)\|^2 + \|v - p_w(v)\|^2 \geq \|p_w(v)\|^2,$$
where the cross terms vanish by part (2).
Clearly, there is equality in the last step $\iff v = p_w(v) \iff \{v, w\}$ are linearly dependent (Exercise).
The Cauchy-Schwarz inequality
Theorem
(Cauchy-Schwarz (C-S) inequality) For $v, w \in V$,
$$|\langle w, v \rangle| \leq \|v\|\,\|w\|,$$
with equality $\iff \{v, w\}$ are linearly dependent.
Proof:
The result is clear if $w = 0$. So we may assume that $w \neq 0$.
Case (i): $w$ is a unit vector. In this case the LHS of the C-S inequality is $\|p_w(v)\|$ and the result follows from part (3) of the previous proposition.
Case (ii): $w$ is not a unit vector. Set $u = w/\|w\|$.
We have $|\langle w, v \rangle| = \|w\|\,|\langle u, v \rangle|$ and $\|v\|\,\|w\| = \|w\|\,(\|v\|\,\|u\|)$.
The result follows from Case (i).
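A randomized sanity check of the C-S inequality and its equality case (a sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
v, w = rng.normal(size=5), rng.normal(size=5)
assert abs(np.dot(w, v)) <= np.linalg.norm(v) * np.linalg.norm(w)

# Equality exactly when {v, w} is linearly dependent, e.g. w = 2v:
assert np.isclose(abs(np.dot(2 * v, v)), np.linalg.norm(v) * np.linalg.norm(2 * v))
```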
Angle and distance between vectors
Definition
(1) Given $v, w \in V$ with $v, w \neq 0$, by the C-S inequality
$$-1 \leq \frac{\langle v, w \rangle}{\|v\|\,\|w\|} \leq 1.$$
So there is a unique $0 \leq \theta \leq \pi$ satisfying $\cos(\theta) = \dfrac{\langle v, w \rangle}{\|v\|\,\|w\|}$. This $\theta$ is the angle between $v$ and $w$.

(2) The distance between $u$ and $v$ in $V$ is defined as $d(u, v) = \|u - v\|$.
Lemma
Let $u, v, w \in V$. Then
1. $d(u, v) \geq 0$ with equality iff $u = v$.
2. $d(u, v) = d(v, u)$.
3. $d(u, v) \leq d(u, w) + d(w, v)$.
Proof. Exercise.
Triangle inequality
Theorem
(Triangle Inequality) For $v, w \in V$,
$$\|v + w\| \leq \|v\| + \|w\|.$$
Proof: We have, using symmetry and the C-S inequality,
$$\begin{aligned}
\|v + w\|^2 &= \langle v + w, v + w \rangle \\
&= \langle v, v \rangle + \langle v, w \rangle + \langle w, v \rangle + \langle w, w \rangle \\
&= \langle v, v \rangle + 2\,\langle v, w \rangle + \langle w, w \rangle \\
&\leq \|v\|^2 + \|w\|^2 + 2\,\|v\|\,\|w\| \\
&= (\|v\| + \|w\|)^2.
\end{aligned}$$
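The same kind of numeric check works for the triangle inequality (a sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
v, w = rng.normal(size=4), rng.normal(size=4)
assert np.linalg.norm(v + w) <= np.linalg.norm(v) + np.linalg.norm(w)
```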
Orthogonal and Orthonormal Sets
Definition
Let $V$ be an inner product space and $S \subseteq V$ a nonempty subset.
(1) We say that $S$ is an orthogonal set if $\langle x, y \rangle = 0$ for all $x, y \in S$ with $x \neq y$.
(2) An orthogonal set is said to be an orthonormal set if $\|v\| = 1$ for all $v \in S$.
(3) An orthogonal (resp. orthonormal) subset $S$ of $V$ is said to be an orthogonal (resp. orthonormal) basis of $V$ if $L(S) = V$.
1. Example (1). The set $\{e_1, \ldots, e_n\}$ is an orthonormal basis of $\mathbb{R}^n$ with the standard inner product.
2. Example (2). Let $v = (\cos\theta, \sin\theta)^t$ and $w = (-\sin\theta, \cos\theta)^t$, $\theta \in [0, \pi]$. Then $\{v, w\}$ is an orthonormal basis of $\mathbb{R}^2$.
3. Example (3). Consider $\mathbb{R}^2$ with the standard inner product. Then $v_1 = (1, 1)^t$ and $v_2 = (1, -1)^t$ are orthogonal.
4. Dividing $v_1$ and $v_2$ by their lengths we get an orthonormal basis.
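Checking Example (2) numerically (a sketch; the value of theta is arbitrary):

```python
import numpy as np

theta = 0.7
v = np.array([np.cos(theta), np.sin(theta)])
w = np.array([-np.sin(theta), np.cos(theta)])
assert np.isclose(np.dot(v, w), 0.0)         # orthogonal
assert np.isclose(np.linalg.norm(v), 1.0)    # unit vector
assert np.isclose(np.linalg.norm(w), 1.0)    # unit vector
```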
Orthonormal bases
Proposition
Let $S = \{u_1, \ldots, u_n\}$ be an orthogonal set of nonzero vectors in an inner product space $V$. Then $S$ is linearly independent.
Proof.
1. Suppose $c_1, c_2, \ldots, c_n$ are scalars with
$$c_1 u_1 + c_2 u_2 + \cdots + c_n u_n = 0.$$
2. Take the inner product with $u_i$ on both sides to get $c_i \langle u_i, u_i \rangle = 0$.
3. Since $u_i \neq 0$, we get $c_i = 0$. Thus $S$ is linearly independent.
Gram-Schmidt Process
Theorem
Let $S := \{v_1, \ldots, v_n\}$ be a linearly independent subset of an inner product space $V$. Then there exists an orthogonal basis $\{w_1, \ldots, w_n\}$ of $L(S)$, and hence $\{w_1/\|w_1\|, \ldots, w_n/\|w_n\|\}$ is an orthonormal basis of $L(S)$. In particular, every finite-dimensional inner product space has an orthonormal basis.
Proof:
We define the $w_i$ inductively.
Set $w_1 := v_1$.
Suppose $w_1, \ldots, w_m$ are defined. To define $w_{m+1}$, take $v_{m+1}$ and subtract from it its projections along $w_1, \ldots, w_m$.
Recall that $p_w(v) = \dfrac{\langle w, v \rangle}{\langle w, w \rangle}\, w$.
Gram-Schmidt Orthogonalization Process
1. More precisely, define
$$w_{m+1} = v_{m+1} - p_{w_1}(v_{m+1}) - p_{w_2}(v_{m+1}) - \cdots - p_{w_m}(v_{m+1}).$$
2. Clearly, $w_{m+1} \neq 0$, as otherwise $\{w_1, \ldots, w_m, v_{m+1}\}$ would be linearly dependent.
3. We now check that $\{w_1, \ldots, w_{m+1}\}$ is orthogonal.
4. For this, we show that $w_{m+1} \perp w_i$ for $i = 1, 2, \ldots, m$.
5. For $i = 1, 2, \ldots, m$ we have
$$\begin{aligned}
\langle w_i, w_{m+1} \rangle &= \Big\langle w_i,\; v_{m+1} - \sum_{j=1}^{m} p_{w_j}(v_{m+1}) \Big\rangle \\
&= \langle w_i, v_{m+1} \rangle - \sum_{j=1}^{m} \langle w_i, p_{w_j}(v_{m+1}) \rangle \\
&= \langle w_i, v_{m+1} \rangle - \langle w_i, p_{w_i}(v_{m+1}) \rangle \qquad (\langle w_i, w_j \rangle = 0 \text{ for } i \neq j) \\
&= \langle w_i, v_{m+1} - p_{w_i}(v_{m+1}) \rangle \\
&= 0.
\end{aligned}$$
Gram-Schmidt Orthogonalization Process
1. This gives an orthogonal basis $\{w_1, \ldots, w_n\}$ of $L(S)$. (Why a basis? An orthogonal set of nonzero vectors is linearly independent, and by construction $L(\{w_1, \ldots, w_n\}) = L(S)$.)
2. Therefore $\{w_1/\|w_1\|, \ldots, w_n/\|w_n\|\}$ is an orthonormal basis of $L(S)$.

Conclusion: Suppose a linearly independent set of vectors $S := \{v_1, \ldots, v_n\}$ is given (which need not be orthogonal). The Gram-Schmidt orthogonalization process gives an algorithm to find an orthonormal basis $S_1$ of $L(S)$, as follows:

Step 1: Set $w_1 = v_1$.
Step 2: For $k = 1, \ldots, n - 1$ do the following: suppose the vectors $w_1, \ldots, w_k$ have been determined. Then inductively define $w_{k+1}$ using $v_{k+1}$ and $w_1, \ldots, w_k$ as
$$w_{k+1} = v_{k+1} - p_{w_1}(v_{k+1}) - p_{w_2}(v_{k+1}) - \cdots - p_{w_k}(v_{k+1}).$$
Then $\{w_1, \ldots, w_n\}$ is an orthogonal basis of $L(S)$ and $S_1 = \{w_1/\|w_1\|, \ldots, w_n/\|w_n\|\}$ is an orthonormal basis of $L(S)$.
A different ordering of $v_1, \ldots, v_n$ may give a different orthonormal basis.
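A direct NumPy sketch of the two steps above (the function name `gram_schmidt` is my own). Subtracting the projections one at a time agrees with the displayed formula because the $w_j$ are pairwise orthogonal:

```python
import numpy as np

def gram_schmidt(vectors):
    """Return (orthogonal, orthonormal) bases of span(vectors).

    Implements w_{k+1} = v_{k+1} - p_{w_1}(v_{k+1}) - ... - p_{w_k}(v_{k+1})
    for a linearly independent input list.
    """
    ws = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in ws:
            w = w - (np.dot(u, w) / np.dot(u, u)) * u   # subtract p_u(.)
        ws.append(w)
    return ws, [w / np.linalg.norm(w) for w in ws]
```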
Examples of Gram-Schmidt Orthogonalization Process
1. Example. Find an orthonormal basis for the subspace of $\mathbb{R}^4$, under the standard inner product, spanned by
$$v_1 = \begin{pmatrix} 1 \\ 1 \\ 0 \\ 1 \end{pmatrix}, \quad v_2 = \begin{pmatrix} 1 \\ -2 \\ 0 \\ 0 \end{pmatrix}, \quad v_3 = \begin{pmatrix} 1 \\ 0 \\ -1 \\ 2 \end{pmatrix}.$$
2. Then $w_1 = v_1$ and
$$w_2 = v_2 - \frac{\langle v_2, v_1 \rangle}{\langle v_1, v_1 \rangle}\, v_1 = \frac{1}{3} \begin{pmatrix} 4 \\ -5 \\ 0 \\ 1 \end{pmatrix}.$$
Examples of Gram-Schmidt Orthogonalization Process
1. Now subtract from $v_3$ its projections along $w_1$ and $w_2$:
$$w_3 = v_3 - \frac{\langle v_3, w_1 \rangle}{\langle w_1, w_1 \rangle}\, w_1 - \frac{\langle v_3, w_2 \rangle}{\langle w_2, w_2 \rangle}\, w_2 = \frac{1}{7} \begin{pmatrix} -4 \\ -2 \\ -7 \\ 6 \end{pmatrix}.$$
2. Now $w_1, w_2, w_3$ are orthogonal and generate the same subspace as $v_1, v_2, v_3$.
3. Dividing by the lengths we get the orthonormal basis $\{w_1/\|w_1\|,\, w_2/\|w_2\|,\, w_3/\|w_3\|\}$.
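Reproducing this example with the `gram_schmidt` sketch from the previous slide:

```python
import numpy as np

v1 = np.array([1.0, 1.0, 0.0, 1.0])
v2 = np.array([1.0, -2.0, 0.0, 0.0])
v3 = np.array([1.0, 0.0, -1.0, 2.0])
ws, onb = gram_schmidt([v1, v2, v3])
print(ws[1] * 3)   # [ 4. -5.  0.  1.], i.e. w2 = (1/3)(4, -5, 0, 1)^t
print(ws[2] * 7)   # [-4. -2. -7.  6.], i.e. w3 = (1/7)(-4, -2, -7, 6)^t
```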
Orthogonal Complement of a Subspace
1. Let $V$ be an inner product space. We have seen how to project a vector onto a nonzero vector.
2. We now discuss the orthogonal projection of a vector onto a subspace.
Definition
Let $W$ be a subspace of $V$. Define
$$W^{\perp} = \{u \in V \mid u \perp w \text{ for all } w \in W\}.$$
Check that $W^{\perp}$ is a subspace of $V$. The subspace $W^{\perp}$ is called the orthogonal complement of $W$.
Theorem
Let $V$ be finite-dimensional. Every $v \in V$ can be written uniquely as $v = x + y$, where $x \in W$ and $y \in W^{\perp}$. Moreover, $\dim V = \dim W + \dim W^{\perp}$.
Subspace and its orthogonal subspace
Proof: Let $\{v_1, v_2, \ldots, v_k\}$ be an orthonormal basis of $W$. Set
$$x = \langle v_1, v \rangle v_1 + \langle v_2, v \rangle v_2 + \cdots + \langle v_k, v \rangle v_k$$
and put $y = v - x$.
1. Clearly $v = x + y$ and $x \in W$. We now check that $y \in W^{\perp}$.
2. For $i = 1, 2, \ldots, k$ we have
$$\begin{aligned}
\langle y, v_i \rangle &= \langle v - x, v_i \rangle \\
&= \langle v, v_i \rangle - \langle x, v_i \rangle \\
&= \langle v, v_i \rangle - \Big\langle \sum_{j=1}^{k} \langle v_j, v \rangle v_j,\; v_i \Big\rangle \\
&= \langle v, v_i \rangle - \sum_{j=1}^{k} \langle v_j, v \rangle \langle v_j, v_i \rangle \\
&= \langle v, v_i \rangle - \langle v, v_i \rangle = 0.
\end{aligned}$$
1. It follows that $y \in W^{\perp}$. For uniqueness, let $v = x + y = x' + y'$,
2. where $x, x' \in W$ and $y, y' \in W^{\perp}$. Then $x - x' = y' - y \in W \cap W^{\perp}$.
3. But $W \cap W^{\perp} = \{0\}$. Hence $x = x'$ and $y = y'$.
The orthogonal projection of $v$ onto $W$ is defined to be $p_W(v) = x$.
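A sketch of the decomposition $v = x + y$ following the formula in the proof (the helper name `proj_subspace` is my own):

```python
import numpy as np

def proj_subspace(onb, v):
    """p_W(v) = sum_i <v_i, v> v_i for an orthonormal basis onb of W."""
    return sum(np.dot(vi, v) * vi for vi in onb)

# W = xy-plane in R^3 with orthonormal basis {e1, e2}
onb = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
v = np.array([2.0, 3.0, 5.0])
x = proj_subspace(onb, v)     # x = (2, 3, 0) in W
y = v - x                     # y = (0, 0, 5) in W-perp
assert all(np.isclose(np.dot(y, w), 0.0) for w in onb)
```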
More exercises will be uploaded to the Google Classroom.