Diagonalizable operators
Math 130 Linear Algebra
D Joyce, Fall 2012

Some linear operators $T \colon V \to V$ have the nice property that there is some basis for $V$ so that the matrix representing $T$ is a diagonal matrix. We'll call those operators diagonalizable operators. We'll call a square matrix $A$ a diagonalizable matrix if it is conjugate to a diagonal matrix, that is, there exists an invertible matrix $P$ so that $P^{-1}AP$ is a diagonal matrix. That's the same as saying that under a change of basis, $A$ becomes a diagonal matrix.

Reflections are examples of diagonalizable operators, as are rotations if $\mathbf{C}$ is your field of scalars. Not all linear operators are diagonalizable. The simplest one is $\mathbf{R}^2 \to \mathbf{R}^2$, $(x, y) \mapsto (y, 0)$, whose matrix is
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$
No conjugate of it is diagonal. It's an example of a nilpotent matrix, since some power of it, namely $A^2$, is the $0$-matrix. In general, nilpotent matrices aren't diagonalizable. There are many other matrices that aren't diagonalizable as well.
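For a concrete numerical check of this example, here is a minimal sketch, assuming NumPy is available (the rank computation is just one way to verify that the eigenspace is too small):

```python
import numpy as np

# The nilpotent matrix from the example above.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# Some power of A is the 0-matrix; here already A^2 = 0.
print(A @ A)  # [[0. 0.], [0. 0.]]

# The only eigenvalue of A is 0, and the eigenspace for 0 is the
# null space of A.  Since rank(A) = 1, that null space is only
# 1-dimensional, so R^2 has no basis consisting of eigenvectors
# of A, and A is not diagonalizable.
print(2 - np.linalg.matrix_rank(A))  # 1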
Theorem 1. A linear operator on an $n$-dimensional vector space is diagonalizable if and only if it has a basis of $n$ eigenvectors, in which case the diagonal entries are the eigenvalues for those eigenvectors.

Proof. If it's diagonalizable, then there's a basis for which the matrix representing it is diagonal. The transformation therefore acts on the $i$th basis vector by multiplying it by the $i$th diagonal entry, so it's an eigenvector. Thus, all the vectors in that basis are eigenvectors for their associated diagonal entries. Conversely, if you have a basis of $n$ eigenvectors, then the matrix representing the transformation is diagonal, since each eigenvector is multiplied by its associated eigenvalue. q.e.d.

We'll see soon that if a linear operator on an $n$-dimensional space has $n$ distinct eigenvalues, then it's diagonalizable. But first, a preliminary theorem.

Theorem 2. Eigenvectors that are associated to distinct eigenvalues are independent. That is, if $\lambda_1, \lambda_2, \ldots, \lambda_k$ are different eigenvalues of an operator $T$, and an eigenvector $v_i$ is associated to each eigenvalue $\lambda_i$, then the set of vectors $v_1, v_2, \ldots, v_k$ is linearly independent.

Proof. We induct on $k$; the base case is clear, since a single eigenvector is nonzero and hence linearly independent by itself. So assume by induction that the first $k - 1$ of the eigenvectors are independent. We'll show all $k$ of them are. Suppose some linear combination of all $k$ of them equals $0$:
$$c_1v_1 + c_2v_2 + \cdots + c_kv_k = 0.$$
Apply $T - \lambda_k I$ to both sides of that equation. The left side simplifies:
$$\begin{aligned}
(T - \lambda_k I)(c_1v_1 + \cdots + c_kv_k)
&= c_1T(v_1) - \lambda_k c_1v_1 + \cdots + c_kT(v_k) - \lambda_k c_kv_k \\
&= c_1(\lambda_1 - \lambda_k)v_1 + \cdots + c_k(\lambda_k - \lambda_k)v_k \\
&= c_1(\lambda_1 - \lambda_k)v_1 + \cdots + c_{k-1}(\lambda_{k-1} - \lambda_k)v_{k-1},
\end{aligned}$$
and, of course, the right side is $0$. That gives us a linear combination of the first $k - 1$ vectors which equals $0$, so all their coefficients are $0$:
$$c_1(\lambda_1 - \lambda_k) = \cdots = c_{k-1}(\lambda_{k-1} - \lambda_k) = 0.$$
Since $\lambda_k$ does not equal any of the other $\lambda_i$'s, therefore all those $c_i$'s are $0$:
$$c_1 = \cdots = c_{k-1} = 0.$$
The original equation now says $c_kv_k = 0$, and since the eigenvector $v_k$ is not $0$, therefore $c_k = 0$. Thus all $k$ eigenvectors are linearly independent. q.e.d.
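Theorem 2 is easy to check numerically on a concrete matrix. Here is a small NumPy sketch; the particular matrix is just an arbitrary example with distinct eigenvalues, not one from the notes:

```python
import numpy as np

# An upper triangular matrix, so its eigenvalues are its diagonal
# entries: 1, 2, 3 -- three distinct eigenvalues.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

# Columns of V are eigenvectors, one for each eigenvalue in w.
w, V = np.linalg.eig(A)
print(np.round(np.sort(w), 10))  # [1. 2. 3.]

# Theorem 2 says these eigenvectors are linearly independent,
# i.e. the matrix V has full rank.
print(np.linalg.matrix_rank(V))  # 3
```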
Corollary 3. If a linear operator on an $n$-dimensional vector space has $n$ distinct eigenvalues, then it's diagonalizable.
Proof. Take an eigenvector for each eigenvalue. By the preceding theorem, they're independent, and since there are $n$ of them, they form a basis of the $n$-dimensional vector space. The matrix representing the transformation with respect to this basis is diagonal and has the eigenvalues displayed down the diagonal. q.e.d.
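As a closing illustration of Theorem 1 and Corollary 3 together, here is a minimal NumPy sketch (again, the specific matrix is an arbitrary example): since the eigenvalues are distinct, the matrix $P$ of eigenvectors is invertible, and $P^{-1}AP$ comes out diagonal.

```python
import numpy as np

# A 2x2 matrix with two distinct eigenvalues, 5 and 2
# (trace 7, determinant 10).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are eigenvectors; by Corollary 3 they form a basis,
# so P is invertible.
w, P = np.linalg.eig(A)

# Change of basis: P^{-1} A P is diagonal, with the eigenvalues
# displayed down the diagonal, as Theorem 1 predicts.
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(w)))  # True (up to floating-point roundoff)
```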