Advanced Matrix Algebra Concepts

This document summarizes key concepts in matrix algebra, including:

1. Eigenvalues and eigenvectors of matrices and how they relate to the characteristic polynomial. Eigenvalues may be complex.
2. Special properties of eigenvalues and eigenvectors for identity, diagonal, and orthogonal matrices.
3. How changing the basis of a vector space changes the perspective on matrix problems.
4. The definition and process for determining if a matrix is diagonalizable.
5. Properties of symmetric matrices, including that their eigenvalues are real and eigenvectors for distinct eigenvalues are orthogonal.
6. A theorem stating any real symmetric matrix can be diagonalized using an orthogonal matrix of eigenvectors.


Chapter 8 - Further Matrix Algebra

8.1 - Eigenvalues and Eigenvectors


Let A be a square n × n matrix. Then a non-zero vector v satisfying the equation

Av = λv

for some scalar λ is called an eigenvector of A. The scalar λ is the corresponding eigenvalue.

To find the eigenvalues of a square matrix A it suffices to consider the characteristic polynomial,

det(A − λI ).

The roots of this polynomial are the eigenvalues of A. Note that even if the matrix consists of real numbers, its
eigenvalues may be complex numbers.

To find the eigenvector corresponding to each eigenvalue one fixes a choice of the eigenvalue λ , then solves the
equations

(A − λI )v = 0.

There may be more than one linearly independent eigenvector for a given eigenvalue; moreover, if the eigenvalue is
a complex number then the corresponding eigenvector will be complex as well.
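
To make this concrete, here is a minimal sketch (not part of the original notes) using Python's numpy library. The matrix A below is an arbitrary example, chosen so that a real matrix has complex eigenvalues.

    import numpy as np

    # A real matrix whose eigenvalues are complex (a 90-degree rotation).
    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    # numpy.linalg.eig returns the roots of det(A - lambda*I) together with
    # one eigenvector (a column of `vecs`) for each eigenvalue.
    vals, vecs = np.linalg.eig(A)
    print(vals)                          # the eigenvalues are +i and -i

    # Check the defining equation A v = lambda v for the first pair.
    v, lam = vecs[:, 0], vals[0]
    print(np.allclose(A @ v, lam * v))   # True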

8.2 - Eigenvalues/vectors of special matrices


For the identity matrix, every vector v satisfies the equation

I v = 1 ⋅ v,

hence every non-zero vector is an eigenvector with corresponding eigenvalue 1. If D is a diagonal matrix with
diagonal entries d1 ,  d2 , … ,  dn , then each vector ei is an eigenvector with eigenvalue di .

The eigenvalues of an orthogonal matrix are in general complex numbers. For any eigenvalue λ of an orthogonal
matrix P, the magnitude |λ| is always 1. For example, if

P = R(θ) = [  cos(θ)   sin(θ) ]
           [ −sin(θ)   cos(θ) ]

then the eigenvalues are e^{iθ} and e^{−iθ}, with corresponding eigenvectors (−i, 1)^t and (i, 1)^t.
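
As an illustrative check (not part of the original notes), the following numpy sketch verifies, for an arbitrary angle θ = 0.7, that R(θ) is orthogonal and that its eigenvalues lie on the unit circle.

    import numpy as np

    theta = 0.7  # an arbitrary angle
    P = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])

    vals, vecs = np.linalg.eig(P)
    print(vals)                              # e^{i*theta} and e^{-i*theta}, in some order
    print(np.abs(vals))                      # both magnitudes equal 1
    print(np.allclose(P.T @ P, np.eye(2)))   # True: P is orthogonal
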
8.3 - Change of basis
Suppose that {v1 , … , vn } are column vectors of dimension n. Then we call these a basis if the matrix with
columns vi is invertible. In particular the vectors e1 , … , en form the canonical basis of R^n. Any vector w
can be rewritten in a new basis,

w = λ1 v1 + ⋯ + λn vn .

We won't study bases any further in this course, but the point of them is that changing the basis changes the
perspective on a problem. By choosing a suitable basis we can make a problem easier.
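
As a hedged illustration (not from the notes), the coordinates λ1 , … , λn of w in a basis {v1 , … , vn } can be found by solving the linear system V λ = w, where V is the invertible matrix with columns vi. The basis and vector below are arbitrary examples.

    import numpy as np

    # Columns of V form a basis of R^2 because V is invertible.
    V = np.array([[1.0,  1.0],
                  [1.0, -1.0]])
    w = np.array([3.0, 1.0])

    # The coordinates satisfy V @ lam = w, i.e. w = lam_1 v_1 + lam_2 v_2.
    lam = np.linalg.solve(V, w)
    print(lam)                       # [2. 1.]
    print(np.allclose(V @ lam, w))   # True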

8.4 - Diagonalisation of matrices


Definition
A matrix M can be diagonalised if it can be written in the form
M = QDQ^{-1}

for an invertible matrix Q and a diagonal matrix D .

In this diagonal form, the columns of the matrix Q are all eigenvectors of M. In fact the columns of Q form a
basis. The eigenvalue corresponding to the i-th column of Q is the i-th entry on the diagonal of D.

So if we want to show that a matrix is diagonalisable, then we

1. compute its eigenvalues d1 , d2 , … , dn , then
2. compute an eigenvector for each eigenvalue,
3. check that the resulting eigenvectors form a basis.

If they do form a basis, then we write the eigenvectors as the columns of the matrix Q and the corresponding eigenvalues along the diagonal of D, as in the sketch below.
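
The following numpy sketch (an illustration, not the notes' own procedure) runs these three steps numerically for an arbitrary example matrix M.

    import numpy as np

    M = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Steps 1 and 2: eigenvalues and one eigenvector per eigenvalue.
    vals, Q = np.linalg.eig(M)

    # Step 3: the eigenvectors form a basis exactly when Q is invertible.
    assert np.linalg.matrix_rank(Q) == Q.shape[0]

    D = np.diag(vals)
    print(np.allclose(M, Q @ D @ np.linalg.inv(Q)))   # True: M = Q D Q^{-1}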

8.5 - Eigenvalues/vectors of symmetric matrices


Symmetric matrices are special enough to warrant their own section. Some special properties are:

1. The eigenvalues of real symmetric matrices are always real numbers.


2. If v and w are eigenvectors for a real symmetric matrix with different eigenvalues, then they are
orthogonal, i.e. v ⋅ w = 0 .

If all the eigenvalues of a symmetric matrix A are strictly positive then we say that A is positive definite.
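
A small numpy check of both properties (an illustration only, using an arbitrary 2 × 2 real symmetric matrix):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])   # real and symmetric

    # numpy.linalg.eigh is designed for symmetric matrices and returns
    # real eigenvalues in ascending order.
    vals, vecs = np.linalg.eigh(A)
    print(vals)                                       # [1. 3.], both real

    # Eigenvectors for the distinct eigenvalues are orthogonal.
    print(np.isclose(vecs[:, 0] @ vecs[:, 1], 0.0))   # True

    # All eigenvalues are strictly positive, so A is positive definite.
    print(np.all(vals > 0))                           # True
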
8.6 - Diagonalisation of symmetric matrices
Theorem
If A is an n × n real symmetric matrix then there exists a choice of eigenvectors

v1 ,  v2 ,   …  ,  vn

for A such that

vi ⋅ vj = 1 if i = j, and vi ⋅ vj = 0 otherwise.

Then A = P D P^t, where

1. D is the diagonal matrix with diagonal entries di , where di is the eigenvalue corresponding to vi , and
2. P is the orthogonal matrix with columns vi .
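
A numerical illustration of the theorem (not part of the notes), again with an arbitrary symmetric matrix; numpy's eigh returns orthonormal eigenvectors for a symmetric input, which play the role of v1 , … , vn here.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    vals, P = np.linalg.eigh(A)   # columns of P are orthonormal eigenvectors
    D = np.diag(vals)

    print(np.allclose(P.T @ P, np.eye(2)))   # True: P^t P = I, so P is orthogonal
    print(np.allclose(A, P @ D @ P.T))       # True: A = P D P^t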

8.7 - Application to quadratic forms


Recall that a quadratic form can be written in terms of a symmetric matrix A ,
Q(x1 , … , xn ) = x^t A x.

Applying the previous theorem this is equal to


x^t P D P^t x,

for some diagonal matrix D and some orthogonal matrix P , whose columns are eigenvectors of A . Multiplying
this out we find,
Q(x1 , … , xn ) = d1 (p11 x1 + ⋯ + pn1 xn )^2
                + d2 (p12 x1 + ⋯ + pn2 xn )^2
                + ⋯
                + dn (p1n x1 + ⋯ + pnn xn )^2 .
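
As a sanity check (an illustration with an arbitrary symmetric matrix and vector), the sketch below evaluates a quadratic form both directly and through its diagonalised form; the two values agree.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    x = np.array([1.0, 2.0])

    # Direct evaluation of Q(x) = x^t A x.
    direct = x @ A @ x

    # Evaluation via the diagonalised form: sum_i d_i * ((P^t x)_i)^2.
    vals, P = np.linalg.eigh(A)
    y = P.T @ x
    via_eigen = np.sum(vals * y**2)

    print(direct, via_eigen)                 # both equal 14.0
    print(np.isclose(direct, via_eigen))     # True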

8.8 - Implicit curves defined by quadratic forms


By the previous section a quadratic form in two variables Q(x, y) = x^t A x can be written in the form

d1 (p11 x + p21 y)^2 + d2 (p12 x + p22 y)^2 .

This allows us to solve the equation Q(x, y) = 1 as follows:

if both d1 , d2 > 0 then the solution set is an ellipse with axes in the directions (p11 , p21 ) and (p12 , p22 )
and respective semi-axis lengths d1^{-1/2} and d2^{-1/2} ,
if d1 < 0 < d2 then the solution set is a hyperbola, with axes (p11 , p21 ) and (p12 , p22 ) ,
if d1 = 0 < d2 then the solution set is a pair of parallel lines pointing in the direction (p11 , p21 ) ,
if both d1 , d2 ≤ 0 then the solution set is empty.
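
The case analysis above can be packaged as a short helper. This is a sketch only: classify_conic is a hypothetical function name and the test matrices are arbitrary choices.

    import numpy as np

    def classify_conic(A):
        """Classify the solution set of x^t A x = 1 for a 2x2 real symmetric A."""
        d, P = np.linalg.eigh(A)   # eigenvalues in ascending order: d[0] <= d[1]
        d1, d2 = d
        if d1 > 0 and d2 > 0:
            return "ellipse"
        if d1 < 0 < d2:
            return "hyperbola"
        if np.isclose(d1, 0.0) and d2 > 0:
            return "pair of parallel lines"
        return "empty"

    print(classify_conic(np.array([[2.0, 0.0], [0.0, 3.0]])))    # ellipse
    print(classify_conic(np.array([[1.0, 2.0], [2.0, 1.0]])))    # hyperbola
    print(classify_conic(np.array([[-1.0, 0.0], [0.0, -2.0]])))  # empty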
