
18.755 (Lie Groups and Lie algebras II) Notes


Niven Achenjang

Spring 2021

These are my course notes for “Lie Groups and Lie algebras II” at MIT. Each lecture will get its own
“chapter.” These notes are live-TeXed, so there will likely be some (but hopefully not too
much) content missing from me typing more slowly than the lecturer speaks. They also, of course, reflect my
understanding (or lack thereof) of the material, so they are far from perfect.1 Finally, they contain many
typos, but ideally not enough to distract from the mathematics. With all that taken care of, enjoy and
happy mathing.
The instructor for this class is Pavel Etingof. This class overlaps once a week with a seminar that I
am attending, so that might cause issues in these notes.

Contents
1 Lecture 1 (2/16) 1
1.1 Class stuff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Review of material from last term . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.1 Lie groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.2 Lie subgroups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2.3 Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.4 Lie algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.5 Exponential Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.6 Fundamental Theorems of Lie Theory . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.7 Representations of sl2 (C), SL2 (C) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.8 Universal enveloping algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.9 Solvable and nilpotent Lie algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.10 Semisimple and reductive Lie algebras . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.11 Killing form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

2 Lecture 2 (2/18) 6
2.1 More general forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Semisimple Lie algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Jordan decomposition and Cartan subalgebras . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Root decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.5 Abstract Root Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1 In particular, if things seem confused/false at any point, this is me being confused, not the speaker

2.5.1 Positive and Simple Roots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.5.2 Dual root system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.5.3 Cartan matrix and Dynkin Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.6 Serre presentations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.7 Representation Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.8 Weyl Character Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

3 Lecture 3 (2/23) 13
3.1 Weyl dimension formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.2 Tensor products of fundamental representations . . . . . . . . . . . . . . . . . . . . . . . . 14
3.3 Representations of SLn (C) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.4 Representations of GLn (C) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

4 Lecture 4 (2/25) 18
4.1 Schur-Weyl duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
4.2 Schur functors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.3 Characters of symmetric group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

5 Lecture 5 (3/2) 24
5.1 Invariant Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.2 Howe Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5.3 Minuscule weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

6 Lecture 6 (3/4) 29
6.1 Last Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
6.2 This Time: minuscule weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

7 Lecture 7 (3/11) 33
7.1 Fundamental weights/representations for classical Lie algebras . . . . . . . . . . . . . . . 33
7.1.1 Type Cn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
7.1.2 Type Bn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
7.1.3 Type Dn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

8 Lecture 8 (3/16) 38
8.1 Last time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
8.2 Clifford algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8.3 Duals of irreps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

9 Lecture 9 (3/18) 44
9.1 Principal sl2 -subalgebra, exponents of g . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
9.2 Back to Real, Complex, Quaternionic Type . . . . . . . . . . . . . . . . . . . . . . . . . . 46
9.3 Review of differential forms and integration on manifolds . . . . . . . . . . . . . . . . . . 48
9.3.1 Top degree forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

10 Lecture 10 (3/25) 49
10.1 Last time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
10.2 Volume Forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
10.3 Stokes’ Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
10.4 Integration on (Real) Lie groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
10.5 Representations of compact Lie groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
10.6 Matrix coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

11 Lecture 11 (3/30) 54
11.1 Matrix coefficients + Peter-Weyl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
11.2 Proving Peter-Weyl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
11.2.1 Analytic Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

12 Lecture 12 (4/1) 59
12.1 Peter-Weyl, Proved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
12.2 Compact (2nd countable) topological groups . . . . . . . . . . . . . . . . . . . . . . . . . . 61
12.3 Integration theory on compact top. groups . . . . . . . . . . . . . . . . . . . . . . . . . . 62

13 Lecture 13 (4/6) 63
13.1 Hydrogen Atom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
13.1.1 Quantum version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

14 Lecture 14 (4/8): Quantum stuff continued 68


14.1 Explanation for degeneracy of energy levels . . . . . . . . . . . . . . . . . . . . . . . . . . 70
14.2 Back to math: automorphisms of semisimple Lie algebras . . . . . . . . . . . . . . . . . . 70
14.2.1 Summary of last semester . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
14.2.2 Maximal Tori . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
14.2.3 Forms of semisimple Lie algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

15 Lecture 15 (4/13) 73
15.1 Forms of a semisimple Lie algebra, continued . . . . . . . . . . . . . . . . . . . . . . . . . 73

16 Lecture 16 (4/15) 78
16.1 Twists of the compact form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
16.2 Real forms of classical groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
16.2.1 Type An−1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
16.3 Type B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

17 Lecture 17 (4/22) 82
17.1 Last time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
17.2 Classification of real forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
17.2.1 Type E . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

18 Lecture 18 (4/27) 86
18.1 E type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
18.2 Classification of connected compact Lie groups . . . . . . . . . . . . . . . . . . . . . . . . 91
18.2.1 Classification of semisimple compact Lie groups . . . . . . . . . . . . . . . . . . . . 91

19 Lecture 19 (4/29) 93
19.1 Filling in a gap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
19.2 Polar decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
19.3 Linear groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
19.4 Connected complex reductive groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
19.5 Maximal tori . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

20 Lecture 20 (5/4) 97
20.1 Semisimple and unipotent elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
20.2 Cartan Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
20.3 Integral form of character orthogonality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
20.4 Topology of Lie Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

21 Lecture 21 (5/6): Cohomology of Lie Groups 102


21.1 Künneth formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
21.2 Main Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

22 Lecture 22 (5/11) 108


22.1 Cohomology of homogeneous spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
22.1.1 Flag manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

23 Lecture 23 (5/13): Lie algebra cohomology 113


23.1 Interpretations of Hi (g, V ) for small i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
23.1.1 i = 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
23.1.2 i = 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
23.1.3 i = 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
23.2 Levi decomposition theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

24 Lecture 24 (5/18) 118


24.1 The nilradical . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
24.2 Exponentiating nilpotent and solvable Lie algebras, and 3rd Lie theorem . . . . . . . . . . 119
24.3 Algebraic Lie algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
24.4 Faithful representations of nilpotent Lie algebras . . . . . . . . . . . . . . . . . . . . . . . 123

25 Lecture 25 (5/20): Last Class 123


25.1 Ado’s Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
25.2 Last topic: Borel subalgebras and flag manifold . . . . . . . . . . . . . . . . . . . . . . . . 125

26 List of Marginal Comments 128

Index 130

List of Figures
1 An example graph giving an invariant function . . . . . . . . . . . . . . . . . . . . . . . . 25
2 A graph corresponding to the invariant function Tr(T 2 )2 . . . . . . . . . . . . . . . . . . 25
3 The Dynkin Diagram Cn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4 The Dynkin Diagram Bn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5 The Dynkin Diagram Dn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6 The Dynkin Diagram An . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
7 The Dynkin Diagram E6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
8 The Dynkin Diagrams D3 (left) and D2 (right) . . . . . . . . . . . . . . . . . . . . . . . . 43
9 An example Vogan diagram. White vertices have sign + and black vertices have sign −. . 80
10 The Dynkin Diagram G2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
11 A Dynkin diagram of type F4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

List of Tables
1 Real forms of simple complex Lie algebras (except E6 , E7 , E8 ) . . . . . . . . . . . . . . . . 84
2 Real forms of all simple complex Lie algebras . . . . . . . . . . . . . . . . . . . . . . . . . 86

1 Lecture 1 (2/16)
1.1 Class stuff
Homeworks assigned/due on Thursdays. Lecture notes are on Canvas, if you can access the Canvas.

1.2 Review of material from last term


1.2.1 Lie groups

Definition 1.1. A real (complex) Lie group is a real (complex) manifold G which is also a group
such that multiplication G × G → G is regular (analytic). A homomorphism of Lie groups G → H is a group
homomorphism given by a regular map.

Example. Real: Rn , U (n), SU(n), GLn (R), O(p, q), Sp2n (R).
Complex: Cn , GLn (C), On (C), Sp2n (C).

Every Lie group G has a connected component of 1 denoted G◦ . This is a normal subgroup, and
G/G◦ is discrete and countable.
Say G is connected. Then its universal cover G̃ is a simply connected Lie group, and comes with a
map π : G̃ → G with ker π = Z, some central discrete subgroup, such that G̃/Z ≅ G.

Example. G = S 1 : then G̃ = R and Z = Z. Hence, in this case π1 (S 1 ) = Z.

1.2.2 Lie subgroups

Definition 1.2. A Lie subgroup H ⊂ G is a subgroup which is also an immersed submanifold (i.e. H
is a Lie group and H ↪ G is a regular map with injective differential at every point). A closed Lie
subgroup H ⊂ G is a subgroup which is an embedded submanifold (i.e. locally closed).

Remark 1.3. A closed Lie subgroup is equivalently a Lie subgroup which is closed in G.

Example. Q ⊂ R is a Lie subgroup, but not an embedded submanifold. However, Z ⊂ R is a closed Lie
subgroup.

Example. On (R) ⊂ GLn (R) is a closed Lie subgroup.

Example. An irrational torus winding R ⊂ S 1 × S 1 is a Lie subgroup which is not closed.

We will almost always work with closed Lie subgroups.

Fact (Did not prove last semester). Any closed subgroup of G is a closed Lie subgroup.

Fact (Did prove last semester). Any connected Lie group is generated by any neighborhood of 1.

Definition 1.4. Let H ⊂ G be a closed Lie subgroup. Then, the quotient G/H is a manifold with a
transitive G-action, i.e. a homogeneous G-space. If H is a normal subgroup, then G/H is a Lie group.

If G acts transitively on a manifold X (i.e. G × X → X is regular), then for any x ∈ X, we get a
stabilizer Gx ⊂ G (a closed Lie subgroup), and G/Gx ≅ X. Hence, every homogeneous space is given by
a quotient of G.
More generally, say G acts on X not necessarily transitively. Then, there are orbits. For any x ∈ X,
the orbit G · x ⊂ X is an immersed submanifold, and is isomorphic to G/Gx .

1.2.3 Representations

Reps are actions of G on a vector space by linear transformations. We usually consider complex
representations, i.e. maps G → GL(V ) ≅ GLn (C).
We get the usual notions from representation theory: homomorphisms of reps (intertwining operators
A : V → W ), subreps, direct sums, duals, tensor products, irreps, indecomposable reps, etc.

Lemma 1.5 (Schur’s lemma). Let V, W be irreps. If they are not isomorphic, then any A : V → W is
trivial (A = 0). If they are isomorphic, then any A : V → V is scalar multiplication (A = λ Id).

Example. G acts on itself by conjugation: g · x = gxg −1 . This induces g∗ : T1 G → T1 G, and the map
Ad : g ↦ g∗ gives the adjoint representation Ad : G → GL(g).

Definition 1.6. A unitary representation is one with an invariant positive definite Hermitian form (v, w),
i.e. (gv, gw) = (v, w), i.e. G → U (n) ⊂ GLn (C).

Recall that in general, indecomposable (not a direct sum) is weaker than irreducible (no nontrivial
proper subreps). However, any unitary representation is a direct sum of irreducible representations (so
unitary indecomposable = unitary irreducible).
If G is finite (or, more generally, compact), then any representation is unitary: take a random positive
Hermitian form, and then average it over the group to get an invariant one. Thus, any finite dimensional
representation of G (finite or compact) is a direct sum of irreps (i.e. completely reducible).
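The averaging trick is easy to test numerically. Below is a minimal sketch of my own (not from the lecture), using numpy: a cyclic group of order 3 acts on C2 by a deliberately non-unitary matrix g with g3 = 1, and averaging the standard Hermitian form over {1, g, g2} produces an invariant positive definite form.

```python
import numpy as np

# An order-3 cyclic group acting on C^2 by a NON-unitary matrix g with g^3 = 1:
# conjugate diag(w, conj(w)) by an arbitrary invertible A (a hypothetical choice).
w = np.exp(2j * np.pi / 3)
A = np.array([[1.0, 2.0], [0.0, 1.0]])
g = A @ np.diag([w, w.conjugate()]) @ np.linalg.inv(A)
group = [np.linalg.matrix_power(g, k) for k in range(3)]  # {1, g, g^2}

# Start from a random-enough positive Hermitian form H0 (here the identity) and
# average: H = (1/|G|) sum_x x^dagger H0 x, so (v, w) := v^dagger H w is G-invariant.
H0 = np.eye(2)
H = sum(x.conj().T @ H0 @ x for x in group) / len(group)

# Invariance (gv, gw) = (v, w) amounts to g^dagger H g = H; positivity survives averaging.
assert np.allclose(g.conj().T @ H @ g, H)
assert min(np.linalg.eigvalsh(H)) > 0
```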

1.2.4 Lie algebras

Note that G acts on itself by right translations, i.e. g ◦ x = xg. This is a right action. Fix a ∈ T1 G =: g.
Right translation gives rise to a tangent vector ag ∈ Tg G at g. Doing this at every point gives rise
to a left invariant vector field (since left multiplication commutes with right translation) Ra on G (i.e.
Ra|g = ag).
We know vector fields correspond to derivations of functions. We can consider the commutator

[Ra , Rb ] = Ra Rb − Rb Ra ,

another left-invariant derivation (vector field), so [Ra , Rb ] = R[a,b] for some [a, b] ∈ g. (Any left-invariant
vector field is determined by its value at the identity.) Hence, for any a, b ∈ g, we get in this way a
commutator [a, b] ∈ g. This is a bilinear map g × g → g satisfying

• (skew-symmetry) [x, x] = 0 =⇒ [x, y] = −[y, x]

• (Jacobi identity) [[x, y], z] + [[y, z], x] + [[z, x], y] = 0.

Definition 1.7. A Lie algebra over any field k is a k-vector space g with a bilinear operation [−, −] :
g × g → g satisfying skew-symmetry + the Jacobi identity.

Example. If G is a Lie group, then g = T1 G is a Lie algebra. We also denote it by Lie(G).

Example. If G = GLn (C), then g = gln (C) = Matn (C) with Lie bracket [A, B] = AB − BA.

Example. If G = On (C), then g = skew-symmetric n × n matrices with same Lie bracket.
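As a quick sanity check of my own (numpy, not from the lecture), one can verify skew-symmetry and the Jacobi identity for [A, B] = AB − BA on gln , and that the skew-symmetric matrices (the On example above) are closed under this bracket:

```python
import numpy as np

rng = np.random.default_rng(0)

def br(a, b):
    # the Lie bracket on gl_n: [A, B] = AB - BA
    return a @ b - b @ a

x, y, z = (rng.standard_normal((4, 4)) for _ in range(3))

# skew-symmetry and the Jacobi identity hold identically for the matrix commutator
assert np.allclose(br(x, y), -br(y, x))
assert np.allclose(br(br(x, y), z) + br(br(y, z), x) + br(br(z, x), y), 0)

# so_n inside gl_n: the bracket of two skew-symmetric matrices is skew-symmetric again
a, b = x - x.T, y - y.T
assert np.allclose(br(a, b), -br(a, b).T)
```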

Definition 1.8. A Lie subalgebra h ⊂ g is a subspace invariant under [−, −]. A Lie ideal is a Lie
subalgebra h ⊂ g such that [g, h] ⊂ h.

Example. The center z ⊂ g given by {z ∈ g | ∀x ∈ g : [x, z] = 0} is a Lie ideal.

If H ⊂ G is a Lie subgroup, then Lie H ⊂ Lie G is a Lie subalgebra. If H is normal, then Lie H is a Lie
ideal.
The same representation theory notions apply to Lie algebras as well, e.g. an n-dimensional rep-
resentation of g/k is a homomorphism ϕ : g → gln (k) of Lie algebras, i.e. ϕ([a, b]) = [ϕ(a), ϕ(b)].

1.2.5 Exponential Map

Say G is a Lie group and g = Lie G. Given a ∈ g, consider the differential equation

g ′ (t) = ag(t) and g(0) = 1.

This has a unique solution which we denote by g(t) = exp(ta). This defines a 1-parameter subgroup
ϕ : R → G, ϕ(t) = exp(ta). This satisfies

exp(ta) exp(sa) = exp((t + s)a).

Example. When G = GLn (K) (and K = R, C), this is the usual matrix exponential

exp(ta) = ∑n≥0 tn an /n! .

Setting t = 1 gives the exponential map exp : g → G. The differential of this map at the identity is
a map exp∗ : g → g which is actually the identity map exp∗ = id. Hence, exp is invertible near the identity,
and the inverse map is called log : U ⊂ G → g (only defined on some open neighborhood U ⊂ G of the
identity).
This allows another definition of the commutator. One has

1
log(exp(a) exp(b)) = a + b + [a, b] + · · · .
2

Similarly (note exp(a)⁻¹ = exp(−a)),

log(exp(a) exp(b) exp(−a) exp(−b)) = [a, b] + · · · .

In either case, the · · · refers to higher order terms. The commutator measures the extent to which G◦ is
non-commutative; e.g. G◦ is commutative ⇐⇒ [−, −] = 0 on g.
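The second-order BCH formula above can be checked numerically (again a sketch of my own): for small a, b, the difference log(exp(a) exp(b)) − (a + b + ½[a, b]) should be third order in the size of a and b.

```python
import numpy as np
from math import factorial

def mexp(x, terms=30):
    # matrix exponential via the truncated power series
    return sum(np.linalg.matrix_power(x, n) / factorial(n) for n in range(terms))

def mlog(g, terms=30):
    # matrix log via log(1 + x) = sum (-1)^{n+1} x^n / n, valid near the identity
    x = g - np.eye(len(g))
    return sum((-1) ** (n + 1) * np.linalg.matrix_power(x, n) / n
               for n in range(1, terms))

eps = 1e-2
a = eps * np.array([[0.0, 1.0], [0.0, 0.0]])   # scaled e, f in sl_2
b = eps * np.array([[0.0, 0.0], [1.0, 0.0]])
br = a @ b - b @ a

bch2 = a + b + br / 2                          # BCH through second order
err = np.abs(mlog(mexp(a) @ mexp(b)) - bch2).max()
assert err < 10 * eps ** 3                     # discrepancy is third order in eps
```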

1.2.6 Fundamental Theorems of Lie Theory

There are 3, and we proved 2 of them last semester?

Theorem 1.9. For any Lie group G, there is a bijection between connected Lie subgroups H ⊂ G and
Lie subalgebras h ⊂ g = Lie G, given (in one direction) by H ↦ Lie(H).

Theorem 1.10. Let G, K be Lie groups with G simply connected. Then,

Hom(G, K) ≅ Hom(g, k),

where g = Lie G and k = Lie K. This iso is given by taking the derivative at the identity.

Theorem 1.11 (Did not prove). For any finite dimensional real or complex Lie algebra g, there exists a
Lie group G such that g = Lie G.

We will prove this one this term.

Corollary 1.12. Say K = R, C. Then there is an equivalence of categories between simply connected
K-Lie groups and finite dimensional K-Lie algebras, given by G ↦ Lie G.

Any connected Lie group is of the form G̃/Γ where G̃ is a simply connected Lie group and Γ ⊂ G̃ is a
central, discrete subgroup.
This gives a good classification of Lie groups in terms of Lie algebras.

1.2.7 Representations of sl2 (C), SL2 (C)

Recall 1.13. Writing ( a b ; c d ) for the 2 × 2 matrix with rows (a, b) and (c, d),

SL2 (C) = { ( a b ; c d ) : ad − bc = 1 } and sl2 (C) = { ( a b ; c d ) : a + d = 0 } .

Recall 1.14. sl2 (C) has basis given by

e = ( 0 1 ; 0 0 ), h = ( 1 0 ; 0 −1 ), and f = ( 0 0 ; 1 0 )

satisfying
[e, f ] = h, [h, e] = 2e, and [h, f ] = −2f.

Let Vn be the (n + 1)-dimensional representation on homogeneous polynomials in x, y of degree n:


a0 xn + a1 xn−1 y + · · · + an y n . sl2 acts on Vn by acting on x, y in the natural way.
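One can check the sl2 relations directly, both for the 2 × 2 matrices and for the induced action on Vn (a numpy sketch of my own; the matrices E, F, H below are the operators x ∂/∂y, y ∂/∂x, x ∂/∂x − y ∂/∂y in the monomial basis vk = x^{n−k} y^k):

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
br = lambda a, b: a @ b - b @ a

# the defining relations of sl_2
assert np.allclose(br(e, f), h)
assert np.allclose(br(h, e), 2 * e) and np.allclose(br(h, f), -2 * f)

# The (n+1)-dimensional rep V_n on span{v_k = x^{n-k} y^k}:
n = 4
E = np.diag([k for k in range(1, n + 1)], k=1)    # e . v_k = k v_{k-1}
F = np.diag([n - k for k in range(n)], k=-1)      # f . v_k = (n-k) v_{k+1}
H = np.diag([n - 2 * k for k in range(n + 1)])    # h . v_k = (n-2k) v_k

# the same relations hold in the representation
assert np.allclose(br(E, F), H)
assert np.allclose(br(H, E), 2 * E) and np.allclose(br(H, F), -2 * F)
```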

Theorem 1.15.

(1) These representations (for n ≥ 0) are all the irreps of sl2 .

(2) Every representation is a direct sum of irreps (i.e. completely reducible)

(3)

Vn ⊗ Vm = V|m−n| ⊕ V|m−n|+2 ⊕ · · · ⊕ Vm+n

(Clebsch–Gordan, up to spelling)
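A dimension count is a quick consistency check on Clebsch–Gordan (my own check, not from the lecture): since dim Vk = k + 1, the decomposition Vn ⊗ Vm = V|m−n| ⊕ V|m−n|+2 ⊕ · · · ⊕ Vm+n forces (n + 1)(m + 1) to equal the sum of the summand dimensions.

```python
# dim V_k = k + 1, so the decomposition of V_n (x) V_m into the summands
# V_{|m-n| + 2i}, i = 0, ..., min(m, n), must account for all (n+1)(m+1) dimensions
for n in range(6):
    for m in range(6):
        rhs = sum(abs(m - n) + 2 * i + 1 for i in range(min(m, n) + 1))
        assert rhs == (n + 1) * (m + 1)
```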

1.2.8 Universal enveloping algebra

Let g be a Lie algebra. Form the tensor algebra T g = ⊕n≥0 g⊗n . The universal enveloping algebra is

U (g) = T g / (x ⊗ y − y ⊗ x − [x, y]).

It is easy to see that representations of g are the same as reps of U (g).
Let {xi } be a totally ordered basis of g. Then we can form ordered monomials ∏i xi^ni with ni ≥ 0 (and
only finitely many nonzero). These span U (g).

Theorem 1.16 (PBW). Such monomials form a basis of U (g) (so they are linearly independent).

1.2.9 Solvable and nilpotent Lie algebras

Given a Lie algebra g, consider D(g) = [g, g].

Definition 1.17. g is solvable if Dn (g) = 0 for some n.

Define L1 (g) = g, L2 (g) = [g, L1 (g)], L3 (g) = [g, L2 (g)], . . . . We say g is nilpotent if Ln (g) = 0 for
some n.
Remark 1.18. nilpotent =⇒ solvable, but the reverse does not always hold.
Example. Upper triangular matrices ( ∗ ∗ ; 0 ∗ ) form a Lie algebra which is solvable, but not nilpotent.
Strictly upper triangular matrices ( 0 ∗ ; 0 0 ) form a nilpotent Lie algebra.

Theorem 1.19 (Lie). Work over C. If g is a f.d. solvable Lie algebra, then every irrep of g is
1-dimensional (false in positive characteristic). Hence, any f.d. rep has a basis in which all elements of
g act by upper triangular matrices.

Theorem 1.20 (Engel’s Theorem). A f.d. Lie algebra g is nilpotent ⇐⇒ all x ∈ g are ad-nilpotent
(i.e. ad x : g → g given by (ad x)(y) = [x, y] is nilpotent).

1.2.10 Semisimple and reductive Lie algebras

Everything from here on out is over C (and finite dimensional).

Definition 1.21. Let the radical rad(g) of g be the sum of all its solvable ideals (equivalently, the
largest solvable ideal).

Definition 1.22. We say g is semisimple if rad(g) = 0. The semisimplification is gss := g/ rad(g).

Proposition 1.23. gss is semisimple, and g = gss ⋉ rad(g) (Levi decomposition)

Definition 1.24. g is simple if its only ideals are 0 and g.

Proposition 1.25. A semisimple Lie algebra is a direct sum of simple Lie algebras.

Really hard to classify general Lie algebras, but classifying (semi)simple Lie algebras is doable in
terms of Dynkin diagrams.

Definition 1.26. We say g is reductive if rad(g) = Z(g) (radical = center). Any reductive Lie algebra
is of the form g = h ⊕ gss with h abelian.

Example. sln (C) is simple for n ≥ 2.


gln (C) = C ⊕ sln (C) is reductive.
son (C) is simple for n ≥ 3 except n = 4, where so4 (C) = sl2 ⊕ sl2 is semisimple.

1.2.11 Killing form

Definition 1.27. The Killing form is

K(x, y) = trg (ad x · ad y).

Above, (ad x)(z) = [x, z]. This is a symmetric bilinear form on g which is ad-invariant:

K([x, z], y) = K(x, [z, y]).

Theorem 1.28 (Cartan’s Criteria). g is solvable iff [g, g] ⊂ ker K.


g is semisimple ⇐⇒ K is nondegenerate.
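For sl2 the Killing form can be computed by hand or by machine (a numpy sketch of my own): in the basis (e, h, f) one finds K(h, h) = 8 and K(e, f) = 4, and K is nondegenerate, consistent with Cartan's criterion since sl2 is semisimple.

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]
br = lambda a, b: a @ b - b @ a

def coords(m):
    # coordinates of a traceless 2x2 matrix [[a, b], [c, -a]] in the basis (e, h, f)
    return np.array([m[0, 1], m[0, 0], m[1, 0]])

def ad(x):
    # matrix of ad x : sl_2 -> sl_2 in the basis (e, h, f)
    return np.column_stack([coords(br(x, b)) for b in basis])

# Killing form K(x, y) = tr(ad x . ad y), as a 3x3 Gram matrix on the basis
K = np.array([[np.trace(ad(x) @ ad(y)) for y in basis] for x in basis])

assert np.isclose(K[1, 1], 8)            # K(h, h) = 8
assert np.isclose(K[0, 2], 4)            # K(e, f) = 4
assert np.isclose(K[0, 0], 0)            # K(e, e) = 0
assert abs(np.linalg.det(K)) > 1         # nondegenerate: sl_2 is semisimple
```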

2 Lecture 2 (2/18)
Note 1. A few minutes late.
Continuing where we left off I think.

2.1 More general forms


Let V be a f.d. rep of g, so ρ : g → End V is a Lie algebra homomorphism. We consider

BV (x, y) = TrV (ρ(x)ρ(y)),

e.g. Bg = K is the Killing form.

Proposition 2.1. If BV is nondegenerate for some V , then g is reductive, g = gss ⊕ h with h abelian.

2.2 Semisimple Lie algebras


Consider g semisimple over C.

Theorem 2.2. Every f.d. representation of g is completely reducible:

V = ⊕i Vi .

If g is semisimple, we can construct a G with Lie G = g. Take Aut(g) ⊂ GL(g), and let G = Aut(g)◦ .
This may not be simply connected, but Lie G ≅ g.

2.3 Jordan decomposition and Cartan subalgebras


Recall that any matrix can be written as the sum of a semisimple (diagonalizable) matrix and a nilpotent
matrix, with the two commuting (e.g. put it in Jordan normal form).
Let g be a semisimple Lie algebra. We say x ∈ g is semisimple if ad x : g → g is semisimple
(diagonalizable). (Remember: always consider semisimple Lie algebras in characteristic 0.)

Theorem 2.3. Any x ∈ g can be uniquely written as a sum x = xs + xn with xs semisimple, xn nilpotent,
and [xs , xn ] = 0.

Definition 2.4. A Cartan subalgebra h ⊂ g is a maximal commutative subalgebra consisting of
semisimple elements.

Cartan subalgebras are also maximal among all commutative subalgebras.

Theorem 2.5. All Cartan subalgebras are conjugate under the action of the group G.

Hence, all Cartan subalgebras are of the same dimension r, called the rank of G.

Example. g = sln . Then the subalgebra h ⊂ sln of diagonal matrices

diag(x1 , . . . , xn ) with x1 + · · · + xn = 0

is a Cartan subalgebra. Here h ≅ Cn−1 , so rank sln = n − 1.

2.4 Root decomposition


Fix a Cartan subalgebra h ⊂ g. Then h acts on g via the adjoint action, and we can decompose g into
eigenspaces:

g = h ⊕ (⊕α∈h∗ \0 gα ),

where
gα = {x ∈ g : [h, x] = α(h)x for all h ∈ h} .

We call α ∈ h∗ \ 0 a root if gα ≠ 0. There are only finitely many roots since dim g < ∞. The set R ⊂ h∗
of roots is called the root system of g. Note that

[gα , gβ ] ⊂ gα+β

by Jacobi (maybe want α + β 6= 0).


Let B be a non-degenerate invariant bilinear form (e.g. the Killing form). Then, gα ⊥ gβ unless α + β = 0.
On the other hand,
B : gα × g−α → C

is a nondegenerate pairing. In fact, dim gα = 1 for all α ∈ R, so

|R| = dim g − rank g.

By above nondegeneracy, dim gα = dim g−α , so α ∈ R =⇒ −α ∈ R.


The real span h∗R = spanR (R) of the roots is a real subspace of h∗ such that h∗R ⊗R C = h∗ . Write E for this
span (this is a Euclidean space, since K|h∗R gives a positive definite inner product).

Example. A2 is the root system of sl3 . Here, dim sl3 = 8 and rank sl3 = 2, so |R| = 6. One can check
that these roots form the vertices of a regular hexagon.
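The root decomposition of sln is concrete enough to verify numerically (my own sketch): for h = diag(x1 , . . . , xn ) one has [h, Eij ] = (xi − xj )Eij , so each Eij with i ≠ j is a root vector for ei − ej , and for sl3 this gives |R| = 8 − 2 = 6 roots.

```python
import numpy as np

n = 3
x = np.array([1.0, 2.0, -3.0])     # a generic traceless diagonal: h = diag(x)
h = np.diag(x)

roots = []
for i in range(n):
    for j in range(n):
        if i != j:
            Eij = np.zeros((n, n))
            Eij[i, j] = 1.0
            # [h, E_ij] = (x_i - x_j) E_ij: E_ij is a root vector for e_i - e_j
            assert np.allclose(h @ Eij - Eij @ h, (x[i] - x[j]) * Eij)
            roots.append((i, j))

assert len(roots) == n * n - n     # |R| = dim sl_3 - rank sl_3 = 8 - 2 = 6
```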

2.5 Abstract Root Systems


Let E ≅ Rn be a Euclidean space. Let R ⊂ E \ 0 be finite. If it satisfies the axioms

(1) R spans E

(2) For all α, β ∈ R,

nαβ := 2(α, β)/(α, α) ∈ Z

(3) For all α, β ∈ R,

sα (β) = β − nαβ α ∈ R.

then we call R ⊂ E an abstract root system. We set rank(R) := dim E. We call it reduced if
α ∈ R =⇒ 2α ∉ R. We call R irreducible if we cannot write R = R1 ⊔ R2 (with E = E1 × E2 and
Ri ⊂ Ei root systems).

Fact. The set of roots of a semisimple Lie algebra form a reduced root system, which is irreducible iff
the Lie algebra is simple.

The reflections sα give the root system lots of symmetries.

Definition 2.6. The Weyl group is the subgroup W ⊂ O(E) generated by sα for α ∈ R.

Note that the Weyl group is finite since it acts faithfully on the roots R (so subgroup of permutation
group of roots).

Example. g = sln ; then E = {(x1 , . . . , xn ) ∈ Rn : ∑i xi = 0} (coordinates on the diagonal). The roots are
αij = ei − ej where i, j ∈ [1, n] and i ≠ j, so there are n² − n roots. The reflections sαij = (ij) act by
transpositions. Hence, W = Sn is the symmetric group. This gives the root system An−1 (n ≥ 2).
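The claim that sαij acts as the transposition (ij) is a one-line computation with the reflection formula sα (x) = x − 2(x, α)/(α, α) α (a numpy check of my own):

```python
import numpy as np

def s(alpha, x):
    # the reflection s_alpha(x) = x - 2 (x, alpha)/(alpha, alpha) * alpha
    return x - 2 * np.dot(x, alpha) / np.dot(alpha, alpha) * alpha

n = 4
x = np.array([5.0, -1.0, 2.0, -6.0])       # a point of E (coordinates sum to 0)
alpha = np.zeros(n)
alpha[0], alpha[2] = 1.0, -1.0             # alpha_13 = e_1 - e_3

y = s(alpha, x)
assert np.allclose(y, [2.0, -1.0, 5.0, -6.0])   # s_alpha swaps coordinates 1 and 3
assert np.allclose(s(alpha, y), x)              # reflections are involutions
```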

Example. The root system Bn (n ≥ 2) comes from o(2n + 1).


The root system Cn (n ≥ 3) comes from sp(2n).
The root system Dn (n ≥ 4) comes from o(2n).

Example. There are 5 exceptional root systems G2 , F4 , E6 , E7 , E8 .

2.5.1 Positive and Simple Roots

Pick some t ∈ E such that (t, α) ≠ 0 for all α ∈ R.

Definition 2.7. We say α is positive w.r.t. t if (t, α) > 0 and negative if (t, α) < 0. We call α a
simple root if it is positive, but not the sum of two other positive roots.

Pavel drew a picture of A2 -root system with a choice of polarization. If you want to see a picture,
track down and look at my notes from last semester...

Notation 2.8. Let R+ be the set of positive roots, and R− be the set of negative roots. Let Π be the
set of simple roots. These all depend on the polarization (choice of t).

Fact.

(1) Every positive root is a sum of simple roots

(2) If α, β are simple roots, and α ≠ β, then (α, β) ≤ 0.

(3) Π ⊂ R+ is a basis of E.

Any root can be written uniquely as

α = n1 α1 + · · · + nr αr

where ni ∈ Z and Π = {α1 , . . . , αr }. Furthermore, ni ≥ 0 for all i if α ∈ R+ , and ni ≤ 0 for all i if
α ∈ R− .

2.5.2 Dual root system

For R ⊂ E, we can attach to each root α ∈ R a coroot α∨ ∈ E ∗ defined by sα (α∨ ) = −α∨ and (α, α∨ ) = 2.
Thus,

sα (x) = x − (x, α∨ )α

for any x. Write R∨ = {α∨ : α ∈ R} ⊂ E ∗ .

Example. Bn∨ = Cn . Other irreducible root systems are self-dual.

Definition 2.9. The root lattice is the Z-span of the roots (equivalently, the Z-span of the simple roots), i.e.
it is
Q = ⟨R⟩ = ⟨Π⟩ = {n1 α1 + · · · + nr αr : ni ∈ Z} .

The coroot lattice is Q∨ = hR∨ i. The weight lattice is the dual lattice to Q∨

P = (Q∨ )∗ = {λ ∈ E : (λ, α∨ ) ∈ Z for all α ∈ R} .

The coweight lattice is P ∨ = Q∗ .

Example. #(P/Q) = n for sln .

Inside the weight lattice are the fundamental weights ωi ∈ P satisfying

(ωi , αj∨ ) = δij ,

i.e. they are the dual basis to the simple coroots. A weight λ = ∑i xi ωi is called dominant if xi ≥ 0 for all
i. It is called integral if xi ∈ Z for all i (i.e. if λ ∈ P belongs to the weight lattice).

2.5.3 Cartan matrix and Dynkin Diagrams

We have simple roots α1 , . . . , αr . Recall

Z ∋ nαi αj = (αi∨ , αj ) = 2(αi , αj )/(αi , αi ) =: aij .

The Cartan matrix is A = (aij ). These satisfy

• aii = 2 always.

• aij ≤ 0 if i 6= j.

• aij = 0 ⇐⇒ aji = 0.

• aij aji ∈ {0, 1, 2, 3}.

One can reduce classifying irreducible root systems to classifying indecomposable Cartan matrices.

Example. For sl4 , the Cartan matrix is

( 2 −1 0 )
( −1 2 −1 ) .
( 0 −1 2 )
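This matrix can be recomputed from the simple roots αi = ei − ei+1 of sl4 using aij = 2(αi , αj )/(αi , αi ) (a numpy sketch of my own):

```python
import numpy as np

n = 4                                   # sl_4, type A_3
# simple roots alpha_i = e_i - e_{i+1} inside R^n
simple = [np.eye(n)[i] - np.eye(n)[i + 1] for i in range(n - 1)]

# Cartan matrix entries a_ij = 2 (alpha_i, alpha_j) / (alpha_i, alpha_i)
A = np.array([[2 * np.dot(ai, aj) / np.dot(ai, ai) for aj in simple]
              for ai in simple])

expected = np.array([[ 2, -1,  0],
                     [-1,  2, -1],
                     [ 0, -1,  2]])
assert np.array_equal(A, expected)
```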

We visualize these using Dynkin diagrams. There are r = rank(R ⊂ E) = dim E = |Π| vertices,
corresponding to the simple roots. Vertex i is connected to vertex j by a(n) (undirected) single edge if
aij = aji = −1. There is a (directed) double edge i ⇒ j if aij = −2 and aji = −1. There is a (directed)
triple edge from i to j if aij = −3 and aji = −1. (Directed edges in a Dynkin diagram point to the longer
root.) Set

mij = 2, 3, 4, 6 according as aij aji = 0, 1, 2, 3, respectively.

Let si = sαi be the simple reflections. These already generate the Weyl group W = ⟨si ⟩, and satisfy
si2 = 1, (si sj )mij = 1. These are the defining relations (no other ones needed).

2.6 Serre presentations

This, among other things, lets you construct Lie algebras for the exceptional root systems.

Let g be a simple Lie algebra. Let α1 , . . . , αr be the simple roots (choose a Cartan subalgebra and a
polarization of the root system). Then we get 1-dimensional spaces gαi = ⟨ei ⟩ and g−αi = ⟨fi ⟩. We can normalize
our generators so that

[ei , fi ] =: hi = αi∨ .

For fixed i, the elements ei , fi , hi form an sl2 -triple with the usual relations.

Theorem 2.10.

(1) As i varies, these ei , fi , hi generate all of g.

(2) They satisfy

[hi , hj ] = 0, [hi , ej ] = aij ej , [hi , fj ] = −aij fj , and [ei , fj ] = δij hi .

In addition, they satisfy the Serre relations (for all i ≠ j)

(ad ei )^{1−aij} ej = 0 and (ad fi )^{1−aij} fj = 0.

(3) g is defined by these generators and relations.

(4) For any reduced, irreducible root system, this defines a simple f.d. Lie algebra.

Corollary 2.11. Simple f.d. Lie algebras correspond bijectively to the Dynkin diagrams An , Bn , Cn , Dn , E6 , E7 , E8 , F4 , G2 .
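To make Theorem 2.10 concrete, one can verify the relations numerically for sl3 with the standard Chevalley generators ei = Ei,i+1 , fi = Ei+1,i , hi = [ei , fi ] (here the Cartan matrix has a12 = a21 = −1, so the Serre relations read (ad ei )² ej = 0). This is a plain-Python sketch, not part of the lecture.

```python
# Check the Serre presentation relations for sl_3 using 3x3 matrix units.

def E(i, j, n=3):  # matrix unit E_ij (0-indexed)
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(len(B))) for c in range(len(B[0]))] for r in range(len(A))]

def bracket(A, B):  # Lie bracket [A, B] = AB - BA
    P, Q = mul(A, B), mul(B, A)
    return [[p - q for p, q in zip(rp, rq)] for rp, rq in zip(P, Q)]

ZERO = [[0] * 3 for _ in range(3)]
e = [E(0, 1), E(1, 2)]
f = [E(1, 0), E(2, 1)]
h = [bracket(e[0], f[0]), bracket(e[1], f[1])]

assert bracket(h[0], h[1]) == ZERO                            # [h_i, h_j] = 0
assert bracket(e[0], f[1]) == ZERO                            # [e_i, f_j] = 0 for i != j
assert bracket(h[0], e[1]) == [[-x for x in r] for r in e[1]]  # [h_1, e_2] = a_{12} e_2 = -e_2
assert bracket(e[0], bracket(e[0], e[1])) == ZERO             # Serre: (ad e_1)^2 e_2 = 0
assert bracket(f[1], bracket(f[1], f[0])) == ZERO             # Serre: (ad f_2)^2 f_1 = 0
print("all sl_3 Serre relations hold")
```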

2.7 Representation Theory


We can write g = h ⊕ n+ ⊕ n− where n+ = ⊕_{α∈R+} gα and similarly for n− .

Let λ ∈ h∗ (we call such things weights). We can consider

Mλ = ⟨vλ | h vλ = λ(h)vλ for all h ∈ h, ei vλ = 0⟩.

By the PBW theorem, we can write U (g) = U (n− ) ⊗ U (h) ⊗ U (n+ ). Then,

Mλ = U (g) ⊗_{U (h⊕n+ )} Cλ = U (n− )vλ ,

where Cλ = Cvλ is the rep where hvλ = λ(h)vλ and ei vλ = 0. In particular, Mλ is a free module of rank 1
over U (n− ).

Definition 2.12. A highest weight module for g with highest weight λ is a quotient of Mλ .

Proposition 2.13. Any f.d. irreducible representation of g is a highest weight module.

Any rep V of g (= semisimple) splits as

V = ⊕_{µ∈P} V [µ].

If V is f.d., there exists a “highest” weight λ s.t. λ + αi is not a weight for any i. For any λ ∈ h∗ , Mλ
has a unique irreducible quotient Lλ = Mλ /Jλ , where Jλ is the maximal proper submodule (in general Lλ is ∞-dimensional).
Theorem 2.14. Lλ is finite dimensional ⇐⇒ λ is a dominant, integral weight (i.e. λ ∈ P+ ).

Hence, f.d. irreps of g correspond bijectively to λ ∈ P+ via λ 7! Lλ .


It is hard to understand Lλ in general, but it is manageable when λ ∈ P+ .

Theorem 2.15. For λ ∈ P+ , λ = Σi ni ωi , we have

Lλ = Mλ / Σ_{i=1}^r U (n− ) fi^{ni +1} vλ ,

i.e. Lλ is the quotient of Mλ by the submodule generated by the vectors fi^{ni +1} vλ .

2.8 Weyl Character Formula


Write

V = ⊕_µ V [µ]

(I missed the hypotheses on V needed to have this decomposition) with each V [µ] finite dimensional. Let

χV = Σ_µ dim V [µ] · e^µ ∈ C[P]^ (the completed group algebra).

If V is finite dimensional, then this is in the usual (non-completed) group algebra C[P ]. Note that, for
h ∈ h,

TrV (e^h ) = Σ_µ dim V [µ] e^{µ(h)} .

This is why χV above is called a ‘character’.


Recall W is the Weyl group and W ⊂ O(E), so det : W → {±1} makes sense. We can define this
combinatorially: if w = si1 . . . sim , then det(w) = (−1)^m (i.e. it gives the parity of the length of w).
Define

h∗ ∋ ρ = Σ_{i=1}^r ωi = (1/2) Σ_{α∈R+} α.

Theorem 2.16 (Weyl Character Formula).

χLλ = ( Σ_{w∈W} det(w) e^{w(λ+ρ)−ρ} ) / ( Π_{α∈R+} (1 − e^{−α}) ).

Example. If λ = 0, then Lλ = C with trivial action, and χL0 = 1. Therefore, we get the Weyl
denominator formula
Σ_{w∈W} det(w) e^{wρ−ρ} = Π_{α∈R+} (1 − e^{−α}).

For sln , the above becomes the Vandermonde determinant:

det ( xj^{i−1} )_{i,j=1}^n = Π_{1≤j<i≤n} (xi − xj ),

the matrix having rows (1, . . . , 1), (x1 , . . . , xn ), . . . , (x1^{n−1} , . . . , xn^{n−1} ).

Next time we start discussing new material. Homework out tonight; due in a week.

3 Lecture 3 (2/23)
Today we start new material. We talked last time about the Weyl character formula, so a good place to
go next is the...

3.1 Weyl dimension formula


Let g be a semisimple (complex) Lie algebra. Recall that P+ denotes the set of dominant, integral weights.
For every λ ∈ P+ , we get a f.d. irreducible highest weight representation Lλ with highest weight λ. For
h ⊂ g Cartan and h ∈ h, we had a formula for the character

χLλ (e^h ) = TrLλ (e^h ) = Σ_{β∈P (Lλ )} dim Lλ [β] e^{β(h)} = ( Σ_{w∈W} det(w) e^{(w(λ+ρ)−ρ, h)} ) / ( Π_{α∈R+} (1 − e^{−α(h)}) ).

Note that dim Lλ = χLλ (eh )|h=0 , but this is not so easy to compute directly. In fact, both the numerator
and the denominator vanish at h = 0.

Question 3.1. How do we compute the limit as h → 0?

(We know this is possible since the character is secretly a polynomial.)


Key idea: specialize to h = thρ (t ∈ R) where hρ ↔ ρ under identification h∗ ↔ h. Then,

χLλ (thρ ) = ( Σ_{w∈W} det(w) e^{(w(λ+ρ)−ρ, tρ)} ) / ( Π_{α∈R+} (1 − e^{−t(α,ρ)}) )

= e^{−t(ρ,ρ)} ( Σ_{w∈W} det(w) e^{(w(λ+ρ), tρ)} ) / ( Π_{α∈R+} (1 − e^{−t(α,ρ)}) )

= e^{−t(ρ,ρ)} ( Σ_{w∈W} det(w) e^{(λ+ρ, t w^{−1} ρ)} ) / ( Π_{α∈R+} (1 − e^{−t(α,ρ)}) )

= e^{−t(ρ,ρ)} ( Σ_{w∈W} det(w) e^{t(λ+ρ, wρ)} ) / ( Π_{α∈R+} (1 − e^{−t(α,ρ)}) ).

(Above, we’ve used that (·, ·) is W -invariant, and we’ve replaced w by w^{−1} at one point, noting det w =
det w^{−1} .) At this point, we recall the Weyl denominator formula:

Σ_{w∈W} det(w) e^{wρ} = e^ρ Π_{α∈R+} (1 − e^{−α}).

Applying this to the numerator (evaluated at t(λ + ρ)) in the previous displayed equation shows that

χLλ (e^{thρ} ) = e^{−t(ρ,ρ)} · ( e^{t(ρ,λ+ρ)} Π_{α∈R+} (1 − e^{−t(α,λ+ρ)}) ) / ( Π_{α∈R+} (1 − e^{−t(α,ρ)}) ) = e^{t(λ,ρ)} Π_{α∈R+} (1 − e^{−t(α,λ+ρ)}) / (1 − e^{−t(α,ρ)}).

Recall 3.2. L’Hôpital lets us see that

lim_{t→0} (1 − e^{ta}) / (1 − e^{tb}) = a/b.

Thus, we now see that

χLλ (1) = dim Lλ = Π_{α∈R+} (α, λ + ρ) / (α, ρ).

This is called the Weyl dimension formula.
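As a sanity check, the sketch below (plain Python with exact fractions, not from the lecture) evaluates the Weyl dimension formula for sl3 , where R+ = {α1 , α2 , α1 + α2 } and ρ = α1 + α2 in the realization αi = ei − ei+1 ⊂ R³; λ = ω1 + ω2 should give the 8-dimensional adjoint representation.

```python
# Weyl dimension formula dim L_lambda = prod_{alpha>0} (alpha, lambda+rho)/(alpha, rho),
# for sl_3 realized inside R^3.
from fractions import Fraction as F

a1 = [F(1), F(-1), F(0)]
a2 = [F(0), F(1), F(-1)]
pos_roots = [a1, a2, [x + y for x, y in zip(a1, a2)]]
rho = [sum(col) / 2 for col in zip(*pos_roots)]  # half-sum of positive roots = (1, 0, -1)
dot = lambda u, v: sum(x * y for x, y in zip(u, v))

def weyl_dim(lam):
    d = F(1)
    for a in pos_roots:
        d *= dot(a, [l + r for l, r in zip(lam, rho)]) / dot(a, rho)
    return d

w1 = [F(2, 3), F(-1, 3), F(-1, 3)]  # fundamental weight omega_1
w2 = [F(1, 3), F(1, 3), F(-2, 3)]   # fundamental weight omega_2
print(weyl_dim(w1))                                # 3: the standard representation
print(weyl_dim([x + y for x, y in zip(w1, w2)]))   # 8: the adjoint representation
```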

3.2 Tensor products of fundamental representations


Fix some λ ∈ P+ , so λ = Σi mi ωi , with (ωi , αj∨ ) = δij and mi ∈ Z+ .

Definition 3.3. The representations Lωi are called fundamental representations.

Consider

Tλ := ⊗_{i=1}^r Lωi^{⊗mi} .

Note that this contains vλ = ⊗_{i=1}^r vωi^{⊗mi} , where vωi is a highest weight vector of Lωi .

Proposition 3.4. Let V be the subrepresentation of ⊗_{i=1}^r Lωi^{⊗mi} generated by vλ . Then, V ≅ Lλ .

Proof. Recall that P (V ) ⊂ λ − Q+ . Since vλ ∈ V is a vector of weight λ, we can write

V = Lλ ⊕ ⊕_{µ≺λ} mλµ Lµ ,

where µ ≺ λ means µ ∈ (λ − Q+ ) ∩ P+ . Recall the Casimir element C ∈ U (g) (even in its center): for
xi any basis of g with dual basis x∗i ∈ g under the Killing form (or any symmetric, ad-invariant, nondegenerate bilinear form), we have

C = Σi xi x∗i = Σ_{j=1}^r yj² + 2 Σ_{α∈R+} fα eα

(above, yj is an orthonormal basis of h and eα , fα are chosen so that (eα , fα ) = 1). Then, C|Lµ acts via
multiplication by (µ, µ + 2ρ). We have shown previously that if µ ≺ λ, then

(µ, µ + 2ρ) < (λ, λ + 2ρ).

However, since V is generated by vλ , we know that C|V = (λ, λ + 2ρ) IdV . Therefore, we must have
mλµ = 0 since C has no other eigenvalues. 

3.3 Representations of SLn (C)


Recall 3.5. Lie SLn (C) = sln (C) = {A ∈ gln (C) : Tr A = 0} has a natural Cartan subalgebra h ⊂ sln (C)
consisting of diagonal matrices diag(x1 , . . . , xn ) (with x1 + · · · + xn = 0). Hence, we may identify

h ≅ C^n_0 = { (x1 , . . . , xn ) ∈ C^n : Σi xi = 0 }.

Note that we then have

h∗ = C^n /C · diag = {(y1 , . . . , yn ) ∈ C^n } modulo the shift (y1 , . . . , yn ) ∼ (y1 + c, . . . , yn + c).

Here, the simple coroots are

αi∨ = (0, . . . , 0, 1, −1, 0, . . . , 0) = ei − ei+1 (the 1 in the ith slot), for i = 1, . . . , n − 1.

The fundamental weights ωi satisfy (ωi , ej − ej+1 ) = δij , and it is easy to see that these are

ωi = (1, 1, . . . , 1, 0, 0, . . . , 0) (i ones), for i = 1, . . . , n − 1.

We would like to construct representations corresponding to the fundamental weights. This turns out
to be easy. Let V = Cn be the standard/tautological representation. Let v1 , . . . , vn ∈ V be the standard
basis. It is not hard to see that V is irreducible. What is the highest weight? Recall gαi is generated
by ei = Ei,i+1 . From this, it is not too hard to see that the highest weight vector (killed by all ei ) is v1 .
Note that h = (x1 , . . . , xn ) ∈ C^n_0 satisfies hv1 = x1 v1 , so the highest weight is ω1 = (1, 0, . . . , 0). Hence,
V = Lω1 .
Note 2. Pavel occasionally draws pictures to illustrate points, but I’m currently too lazy to draw these
and add them to the notes...
Now consider the exterior powers Λ^m V for 1 ≤ m < n. This has basis vi1 ∧ vi2 ∧ · · · ∧ vim for i1 < i2 <
· · · < im . Say v is a highest weight vector. Then, E12 v = 0, E23 v = 0, . . . , En−1,n v = 0. Note that

Eij (vi1 ∧ · · · ∧ vim ) = 0 if j ∉ {i1 , . . . , im }, and = ±vi1 ∧ · · · ∧ vi ∧ · · · ∧ vim (vi replacing vj in the kth place) if ik = j.

Thus, the highest weight vector is v1 ∧ v2 ∧ · · · ∧ vm . Note that h = (x1 , . . . , xn ) satisfies h · (v1 ∧ · · · ∧ vm ) =
(x1 + x2 + · · · + xm ) v1 ∧ · · · ∧ vm , so the highest weight is

ωm = (1, 1, . . . , 1, 0, 0, . . . , 0) (m ones).

Further, Λ^m V is irreducible (easy exercise), so Lωm ≅ Λ^m V .

Example. Λ^m V = 0 for m > n. Λ^n V = C is the trivial representation (the Lie algebra acts by the
trace and the group by the determinant, both of which are trivial here). There is an invariant, nondegenerate pairing

Λ^{n−1} V ⊗ V −→ Λ^n V = C (given by ∧),

so Λ^{n−1} V ≅ V ∗ . More generally,

(Λ^k V )∗ ≅ Λ^{n−k} V .
Say λ = Σi mi ωi . Then, Lλ is the subrep in

⊗_{i=1}^{n−1} (Λ^i V )^{⊗mi}

generated by the tensor product of the highest weight vectors. This is a fairly concrete construction of Lλ .

Example. Take λ = mω1 . V ⊗m ∋ v1 ⊗ v1 ⊗ · · · ⊗ v1 . Say m = 2. To get L2ω1 , we start applying Lie
algebra elements to this, e.g.

E21 (v1 ⊗ v1 ) = v1 ⊗ v2 + v2 ⊗ v1 .

One gets in the end that Lmω1 = Sym^m V (exercise).
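One can watch this happen in coordinates: repeatedly applying E21 (acting as E21 ⊗ 1 + 1 ⊗ E21 on V ⊗ V for V = C²) to v1 ⊗ v1 produces only symmetric tensors, and the span stabilizes at dimension 3 = dim S²C². A plain-Python sketch (not from the lecture):

```python
# For n = 2, generate the subrepresentation of V ⊗ V (V = C^2) from v1 ⊗ v1
# under E21, and check it is the 3-dimensional symmetric square.
from fractions import Fraction as F

# basis of V⊗V: index (i, j) -> 2*i + j, standing for v_{i+1} ⊗ v_{j+1}
def e21_action(t):
    # E21 v1 = v2, E21 v2 = 0; act as E21⊗1 + 1⊗E21
    out = [F(0)] * 4
    for i in range(2):
        for j in range(2):
            c = t[2 * i + j]
            if c:
                if i == 0:
                    out[2 * 1 + j] += c   # lower the first tensor factor
                if j == 0:
                    out[2 * i + 1] += c   # lower the second tensor factor
    return out

v = [F(1), F(0), F(0), F(0)]  # v1 ⊗ v1
vectors = [v]
while True:
    v = e21_action(v)
    if all(c == 0 for c in v):
        break
    vectors.append(v)

# every vector produced is symmetric: entry (1,2) equals entry (2,1)
assert all(t[1] == t[2] for t in vectors)
print(len(vectors))  # 3, matching dim S^2(C^2)
```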

Say λ = m1 ω1 + · · · + mn−1 ωn−1 . We can write this as a vector

m1 (1, 0, . . . , 0) + m2 (1, 1, 0, . . . , 0) + · · · = (m1 + m2 + · · · + mn−1 , m2 + · · · + mn−1 , . . . , mn−1 , 0),

so dominant weights (of sln (C)) correspond exactly to nonincreasing sequences

p1 ≥ p2 ≥ · · · ≥ pn−1 ≥ 0

of nonnegative integers.

Example. When n = 2, we get one number p1 ≥ 0, which is exactly our old friend the sl2 -representation Vp1 .

Exercise. SLn (C) is simply connected, so it’s fine to not distinguish between representations of sln (C) and of
SLn (C).

3.4 Representations of GLn (C)


Recall 3.6. Lie GLn (C) = gln (C).

Warning 3.7. gln (C) is not semisimple, so our general theory does not directly apply. However, gln (C) =
sln (C) ⊕ C, so it’s close enough for us to be able to understand things.

On the Lie group side, GLn (C) is not quite a product of SLn (C) and C× , but instead one has

GLn (C) = (SLn (C) × C× )/µn ,

where µn is the group of nth roots of unity (in C). If ζ^n = 1, then diag(ζ, . . . , ζ) ∈ SLn , and we embed
µn ↪ SLn (C) × C× via ζ ↦ (diag(ζ, . . . , ζ), ζ^{−1} ). The identification is given by

(SLn (C) × C× )/µn ∋ (A, z) ↦ zA ∈ GLn (C).
Proposition 3.8. Rep GLn = Rep(SLn × C× ) on which µn (embedded diagonally, as above) acts trivially.³

Example. When n = 1, we just have C× . The Lie algebra is Lie C× = Ch, so a rep of C× corresponds to a
choice of operator H : V → V (the image of h) such that e^{2πiH} = 1 (since e^{2πih} = 1). Hence, H is diagonalizable with
integer eigenvalues. Thus, every representation of C× is completely reducible (since H is diagonalizable),
and its irreps are 1-dimensional, corresponding to n ∈ Z: χn (z) = z^n . (Note that Lie C× is not semisimple,
and its representations are not completely reducible — think Jordan blocks; moreover, not every rep of
Lie C× lifts to one of C× , since C× is not simply connected.)

For SLn × C× , all representations will be completely reducible. The irreducible representations are
Lλ,N = Lλ ⊗ χN . What about for GLn , i.e. when does the center act trivially? For GLn , you get the
Lλ,N for which |λ| + N is divisible by n (here |λ| = Σi pi , with λ = (p1 , . . . , pn ) as below).

We can look at this from another perspective. Cn ≅ h ⊂ gln consisting of diagonal matrices gives a
Cartan subalgebra (reductive Lie algebras have these as well). The dominant weights will correspond to
tuples (p1 , p2 , . . . , pn ) with p1 ≥ p2 ≥ · · · ≥ pn , pi ∈ Z. The fundamental weights will be ω1 , ω2 , . . . , ωn with

ωi = (1, 1, . . . , 1, 0, 0, . . . , 0) (i ones)

as before. Note that ωn ≠ 0 now (it gives the determinant character). Given λ = m1 ω1 + · · · + mn ωn ,
one has

Lλ ⊂ ⊗_k (Λ^k V )^{⊗mk} with V = Cn ,

and m1 , . . . , mn−1 ≥ 0 while mn ∈ Z (possibly negative).

Remark 3.9. If χ is a 1-dim representation and k < 0, we can and do set

χ^{⊗k} := (χ∗ )^{⊗(−k)} .
Say λ = (λ1 , . . . , λn ) with λ1 ≥ λ2 ≥ · · · ≥ λn ∈ Z.

Definition 3.10. Lλ is polynomial if λn ≥ 0.

Exercise. Lλ is polynomial ⇐⇒ it is a direct summand in a tensor power of V .


Why are these called polynomial? Given x = (xij ) ∈ GLn , a representation Y , v ∈ Y , and f ∈ Y ∗ , we can form the
matrix element (f, xv). This gives a function on G = GLn which is a polynomial for polynomial
representations. Note that GLn ⊂ Matn is an open subset. Matrix elements will extend to functions on
Matn if they are polynomials. Note that any irreducible representation Lλ will be of the form

Lλ = (polynomial rep) ⊗ (Λ^n V )^{⊗(−k)} for some k ≥ 0,

so understanding polynomial representations will let us understand everything. We also see that general
matrix elements are of the form P (X)/ det(X)^k , so they only extend to all matrices if k = 0 (otherwise
one needs an invertible determinant).
Note that λ1 ≥ · · · ≥ λn (with λn ≥ 0) is a partition, with at most n parts, of

N = |λ| := λ1 + · · · + λn .

Partitions are usually depicted using Young diagrams.


³ Pavel uses the notation µn^{diag} to emphasize the diagonal embedding

Example. The partition (5, 3, 2) corresponds to a Young diagram with rows of lengths 5, 3, 2 (picture omitted).

Note that Lλ occurs in V ^{⊗|λ|} , e.g. L(5,3,2) occurs in V ^{⊗10} . Also, |λ| is the eigenvalue of id ∈ gln (acting
on V ^{⊗N} , N = |λ|).

Let’s look more closely at the structure of V ^{⊗N} . We have

V ^{⊗N} = ⊕_{λ:|λ|=N} πλ ⊗ Lλ , where πλ = HomGLn (Lλ , V ^{⊗N} )

(a consequence of complete reducibility). Note that πλ = 0 if λ has more than n parts.


How should we understand πλ ? The key note: always in rep theory, if you decompose a representation
into a direct sum, then in order to understand the multiplicity spaces showing up, you need to understand what acts
on those spaces; what acts on them is typically something that commutes with your group. Here, the
symmetric group SN acts on V ^{⊗N} by permuting components. Therefore, SN acts on each πλ .
There’s a tight connection between rep theory of symmetric groups and rep theory of general linear
groups. Inside EndC (V ⊗N ) there is an algebra A generated by U (gln ) and another algebra B generated
by SN . These two subalgebras commute: [A, B] = 0. They also satisfy a double centralizer property (one
is the centralizer of the other).

Theorem 3.11 (Schur-Weyl duality).

(1) The centralizer of A is B, and vice versa.

(2) If λ has at most n parts, then πλ is an irreducible representation of B (hence of SN ).

(3) If n = dim V ≥ N , then the πλ exhaust all irreducible representations of SN (each occurring exactly
once).

The πλ correspond to partitions λ of N with ≤ n parts (a condition which is automatic if n ≥ N ), and
this correspondence is independent of n. More on this next time.

4 Lecture 4 (2/25)
4.1 Schur-Weyl duality
We started talking about this last time. Recall we have V = Cn and GL(V ) = GLn (C) naturally acts
on this space. We formed V ^{⊗N} , so GL(V ) and gl(V ) = gln (C) have an induced action on V ^{⊗N} . We
decompose

V ^{⊗N} = ⊕_{λ:|λ|=N} Lλ ⊗ πλ

into a direct sum of irreps Lλ , each with ‘multiplicity’ πλ (note λ ranges over partitions of N with ≤ n
parts). Recall πλ = HomGLn (Lλ , V ^{⊗N} ).

At the same time, SN acts on V ^{⊗N} by permuting the factors, and this action commutes with the one
of GLn (C). We write SN ↷ V ^{⊗N} ↶ GLn (C) to emphasize that the actions commute. As a consequence,
SN acts on each πλ .
Let A be the image of U (gln ) in EndC (V ⊗N ), and let B ⊂ EndC (V ⊗N ) be the image of the group
algebra CSN . Since the gln and SN actions on V ⊗N commute, these two subalgebras commute with each
other. Beyond this...

Theorem 4.1 (Schur-Weyl duality).

(1) The centralizer of B is A, and vice versa.

(2) πλ is an irreducible representation of SN , and the various πλ ’s are pairwise non-isomorphic.

(3) If n ≥ N (so all partitions of N have ≤ n parts), then the collection {πλ } gives the full set of
irreducible representations of SN .

Slogan. Symmetric groups and general linear groups have equivalent representation theories.

For the proof, we will need several lemmas.

Lemma 4.2. Let U be a complex vector space. Then, S^N U is spanned by vectors of the form x ⊗ · · · ⊗ x,
x ∈ U.

Proof. It is enough to consider finite-dimensional U , since any vector in S^N U lies in the symmetric power
of some finite-dimensional subspace of U . Then, S^N U is an irreducible representation of GL(U ) (or of
gl(U )), and span {x ⊗ · · · ⊗ x : x ∈ U } is a nonzero subrepresentation, so it must be everything. (Secretly,
S^N U is an irreducible representation even if U is infinite dimensional, so there is no need to reduce to the
finite-dimensional case.)

Lemma 4.3. If R is an associative C-algebra, then the algebra S^N R is generated by the elements

∆N (x) := (x ⊗ 1 ⊗ · · · ⊗ 1) + (1 ⊗ x ⊗ 1 ⊗ · · · ⊗ 1) + · · · + (1 ⊗ · · · ⊗ 1 ⊗ x)

(N summands). Can think of this as x1 + x2 + · · · + xN with xi = 1 ⊗ · · · ⊗ 1 ⊗ x ⊗ 1 ⊗ · · · ⊗ 1 (x in the
ith slot).

Proof. Consider z1 , . . . , zN ∈ C[z1 , . . . , zN ]. By the fundamental theorem on symmetric functions, there exists
a polynomial PN such that PN ( Σ zi , Σ zi² , . . . , Σ zi^N ) = z1 . . . zN (Newton). We apply

PN (∆N (x), ∆N (x² ), . . . , ∆N (x^N )) = x ⊗ · · · ⊗ x.

By the previous lemma, these span S^N R, so the ∆N (y), y ∈ R, generate S^N R.

Lemma 4.4 (Double Centralizer Lemma). Suppose B ≅ ⊕_{i=1}^r Matki (F ) is a direct sum of matrix
algebras (apparently true for semisimple algebras over an algebraically closed field) over the field F . Suppose also we embed B ↪ EndF V for some f.d. F -vector space V . Let
A ⊂ EndF V be the centralizer of B. Then, A is also a direct sum of r matrix algebras, and

V = ⊕_{i=1}^r Wi ⊗ Ui ,

where Ui ranges over the full set of irreducible representations of B, and Wi ranges over the full set of
irreducible representations of A, and this decomposition is as a module over A ⊗ B. In particular, there
is a bijection between irreps of B and of A, and also B is the centralizer of A.
Lr
Proof. We can write V = ⊕_{i=1}^r Wi ⊗ Ui with the Ui irreps of B, and Wi = HomB (Ui , V ). By definition,
the centralizer of B is A = EndB V = ⊕_{i=1}^r EndF (Wi ).

Question 4.5 (Audience). Why is the number of summands in the decomposition of V equal to the
number of summands in the decomposition of B?

Answer. All Ui must occur, as B ↪ End(V ) (so End(V ) contains the regular rep), and so Wi ≠ 0 for all
i = 1, . . . , r. Similarly, A ↪ End(V ), so all of its irreps must occur, so the Wi must be all of them. [TODO: Review this answer]


“A good mathematical theorem is one that takes one minute to state and one hour to prove, and a
bad one is one that takes one hour to state but one minute to prove.” – Kirillov, paraphrased.
Now we return to Schur-Weyl duality.

Proof of Theorem 4.1. B is a direct sum of matrix algebras since representations of SN are completely
reducible. We need to show that A is the centralizer of B. We know A ⊂ Z(B), i.e. A is contained in
the centralizer. Note that

Z(B) = S^N (End V )

is the algebra of endomorphisms of V ^{⊗N} which commute with the permutation action of SN . The second lemma now
implies that Z(B) is generated by elements of the form ∆N (x) for x ∈ End V = gln . This is exactly the
action of x ∈ gln on V ^{⊗N} , so ∆N (x) ∈ A, the image of the enveloping algebra. Hence, Z(B) ⊂ A. At
this point, the third lemma applies, and we obtain everything else:

V ^{⊗N} = ⊕_i Wi ⊗ Ui

with Wi = Lλ representations of A, and Ui = πλ representations of B. This establishes (1), (2) of
Theorem 4.1.

Recall that (3) said: if n ≥ N , then the πλ give the full set of irreps of SN . If dim V ≥ N , then we can
pick N linearly independent vectors v1 , . . . , vN ∈ V (and complete to a basis v1 , . . . , vn of V ). Then, the
σ(v1 ⊗ · · · ⊗ vN ) = vσ(1) ⊗ · · · ⊗ vσ(N ) are all linearly independent for different σ. Hence, CSN · (v1 ⊗ · · · ⊗
vN ) ≅ CSN , so B = CSN ↪ EndC (V ^{⊗N} ). Hence the double centralizer story tells us that the {πλ } do give
all representations of SN .

Corollary 4.6. Irreps of SN correspond bijectively to partitions λ of N .

Remark 4.7 (Sanity check). Number of irreps of finite group G = number of conj. classes of G. Conjugacy
classes of SN are determined by cycle types, but these exactly correspond to partitions of N .
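The sanity check can itself be spot-checked: the sketch below (plain Python, not from the lecture) counts partitions of N and compares with the number of cycle types of SN obtained from actual permutations.

```python
# Compare: number of partitions of N vs. number of conjugacy classes of S_N
# (conjugacy classes = cycle types).
from itertools import permutations

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        return [[]]
    out = []
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            out.append([k] + rest)
    return out

def cycle_type(perm):
    n, seen, lengths = len(perm), set(), []
    for s in range(n):
        if s not in seen:
            length, cur = 0, s
            while cur not in seen:
                seen.add(cur)
                cur = perm[cur]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

N = 5
types = {cycle_type(p) for p in permutations(range(N))}
print(len(partitions(N)), len(types))  # 7 7
```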
Remark 4.8. Schur-Weyl duality gives a new proof that reps of gln are completely reducible.

Remark 4.9. The algebra A appearing above is called the Schur algebra. It is always a proper quotient of
U (gln ), since U (gln ) is infinite-dimensional while A is finite-dimensional.
We’ve given an assignment

partitions λ ↦ representations πλ of SN

making use of some GLn for n ≥ N .


Claim 4.10. This assignment is independent of the choice of n.
Proof. Say λ has ≤ n parts. Let V = Cn with basis e1 , . . . , en . Consider V ⊕ Cen+1 . Then,

(V ⊕ C)^{⊗N} = ⊕_µ Lµ^{(n+1)} ⊗ πµ^{(n+1)} ,

with superscripts signifying which GL these come from. Pick the summand Lλ^{(n+1)} ⊗ πλ^{(n+1)} . What happens
when we restrict it to GL(V ) ⊂ GL(V ⊕ C), i.e. to matrices of the form

( ∗ 0 ; 0 1 ) with ∗ ∈ GLn (C)?

Let vλ^{(n+1)} ∈ Lλ^{(n+1)} be a highest weight vector. Note that vλ^{(n+1)} ∈ V ^{⊗N} ⊂ (V ⊕ C)^{⊗N} since λ =
(λ1 , . . . , λn , 0). Then, vλ^{(n+1)} ⊗ x (x ∈ πλ^{(n+1)} ) generates Lλ^{(n)} as a rep of gl(V ). This implies that it
generates a copy of Lλ^{(n)} ⊗ πλ^{(n)} as a GLn × SN -module, so πλ^{(n)} ≅ πλ^{(n+1)} . [TODO: Make sense of this argument]

4.2 Schur functors


Definition 4.11. Let λ be a partition (say |λ| = N ). Then, the Schur functor S^λ on the category of
vector spaces (or of representations of a Lie group) is

S^λ V := HomSN (πλ , V ^{⊗N} ).

We can restate Schur-Weyl duality in terms of these functors:

V ^{⊗N} = ⊕_{λ a partition of N} S^λ V ⊗ πλ .

If V is the standard representation of GL(V ), then S λ V = Lλ .


Example. S^{(N )} V = LN ω1 = S^N V , and S^{(1^N )} V = Λ^N V .

Example.

V ⊗ V = S^{(2)} V ⊗ C+ ⊕ S^{(1,1)} V ⊗ C− = S² V ⊕ Λ² V,

where C+ is the trivial rep of SN , and C− is the alternating/sign rep.

V ⊗ V ⊗ V = S^{(3)} V ⊗ C+ ⊕ S^{(1,1,1)} V ⊗ C− ⊕ S^{(2,1)} V ⊗ C² = S³ V ⊕ Λ³ V ⊕ S^{(2,1)} V ⊗ C² .

Note that

V ⊗ V ⊗ V = (V ⊗ V ) ⊗ V = (S² V ⊗ V ) ⊕ (Λ² V ⊗ V )

(one can check the first factor contains S³ V and S^{(2,1)} V , while the second contains Λ³ V and S^{(2,1)} V ), so

S² V ⊗ V = S³ V ⊕ S^{(2,1)} V and Λ² V ⊗ V = Λ³ V ⊕ S^{(2,1)} V.

This gives two descriptions of S^{(2,1)} V :

• tensors in V ^{⊗3} symmetric in the first two components whose full symmetrization is zero;

• tensors in V ^{⊗3} antisymmetric in the first two components whose full antisymmetrization is zero.


What is dim S^λ V when dim V = N ? We have the Weyl dimension formula. One can check that we may take
ρ = (N − 1, N − 2, . . . , 1, 0) (working with glN ; this differs from the usual ρ for slN by a multiple of
(1, . . . , 1), which does not change the answer). Then,

dim S^λ V = Π_{α>0} (λ + ρ, α) / (ρ, α),

where as usual αij = ei − ej (for i < j). Note that (ρ, αij ) = j − i and (λ, αij ) = λi − λj . Hence,

dim S^λ V = Π_{1≤i<j≤N} (λi − λj + j − i) / (j − i).
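The product formula is immediate to implement; the sketch below (plain Python with exact fractions, not from the lecture) checks it against the known dimensions dim S^n(C^N) = C(N + n − 1, n) and dim Λ^n(C^N) = C(N, n).

```python
# dim S^lambda(C^N) = prod_{1<=i<j<=N} (lambda_i - lambda_j + j - i)/(j - i),
# with lambda padded by zeros to length N.
from fractions import Fraction
from math import comb

def schur_dim(lam, N):
    lam = list(lam) + [0] * (N - len(lam))
    d = Fraction(1)
    for i in range(N):
        for j in range(i + 1, N):
            d *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return int(d)

N = 5
assert schur_dim([3], N) == comb(N + 3 - 1, 3)   # S^3 V
assert schur_dim([1, 1, 1], N) == comb(N, 3)     # Lambda^3 V
print(schur_dim([2, 1], N))  # 40: dim of S^{(2,1)} C^5
```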

Say λ has k parts. Then the above becomes

dim S^λ V = Π_{1≤i<j≤k} (λi − λj + j − i)/(j − i) · Π_{1≤i≤k<j≤N} (λi + j − i)/(j − i)

= Π_{1≤i<j≤k} (λi − λj + j − i)/(j − i) · Π_{1≤i≤k} (N + 1 − i) · · · (N + λi − i) / ( (k + 1 − i) · · · (k + λi − i) ).

Proposition 4.12. dim S^λ V = Pλ (N ) is a polynomial of degree |λ| with Q-coefficients, and it has integer
roots, all lying in [1 − λ1 , k − 1] (and the endpoints are always roots). Further, Pλ (N ) takes integer values
at integers, which means it is a Z-linear combination of binomial coefficients (N choose m). (See e.g. the
part of chapter 1 of Hartshorne where he talks about Hilbert polynomials and numerical polynomials.)

Example. dim S^n V = (N + n − 1 choose n) = P(n) (N ), and dim Λ^n V = (N choose n) = P(1,...,1) (N ).

Example. Say a ≥ b. One can work out that

P(a,b) (N ) = (a − b + 1)/(a + 1) · (N + a − 1 choose a) · (N + b − 2 choose b),

which when a = b becomes (1/(a + 1)) · (N + a − 1 choose N − 1) · (N + a − 2 choose N − 2).

The a = b case gives Narayana numbers, which combinatorialists apparently care about.
4.3 Characters of symmetric group
Recall

ch Lλ = Tr|_{Lλ = S^λ V} ( diag(x1 , . . . , xn ) ).
By the Weyl character formula, this is

ch Lλ = ( Σ_{σ∈Sn} det(σ) xσ(1)^{λ1 +n−1} xσ(2)^{λ2 +n−2} · · · xσ(n)^{λn} ) / Π_{i<j} (xi − xj )

= det( xi^{λj +n−j} )_{i,j} / det( xi^{n−j} )_{i,j} = det( xi^{λj +n−j} )_{i,j} / Π_{i<j} (xi − xj ) =: Sλ (x),

which is called the Schur polynomial in x = (x1 , . . . , xn ).
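The bialternant formula can be evaluated numerically: the sketch below (plain Python, not from the lecture) computes Sλ (x) = det(xi^{λj +n−j} )/det(xi^{n−j} ) at integer points and checks S(1,1) (x) = x1 x2 and S(2) (x) = x1² + x1 x2 + x2² for n = 2.

```python
# Schur polynomial via the bialternant formula, evaluated at integer points.
from itertools import permutations

def det(M):
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign, prod = 1, 1
        for a in range(n):                 # parity via inversion count
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        for r in range(n):
            prod *= M[r][perm[r]]
        total += sign * prod
    return total

def schur(lam, x):
    n = len(x)
    lam = list(lam) + [0] * (n - len(lam))
    num = [[xi ** (lam[j] + n - 1 - j) for j in range(n)] for xi in x]
    den = [[xi ** (n - 1 - j) for j in range(n)] for xi in x]
    return det(num) // det(den)            # Vandermonde divides exactly

x = (2, 3)
assert schur([1, 1], x) == 2 * 3               # e_2(x) = x1 x2
assert schur([2], x) == 2**2 + 2 * 3 + 3**2    # h_2(x) = 19
print(schur([2, 1], (2, 3, 5)))  # 280
```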


x1
Let V = C as usual, and consider V
n ⊗N
. Act on it by the pair (x, σ) = 
 ..  , σ  (with
 
 .  
xn
σ ∈ SN ). Let’s compute Tr |V ⊗N (x, σ) in two ways.

• I’m not sure how to type notes on what he’s saying right now... The upshot is that if σ has mi
cycles of length i,5 then
Y mi Y m i
Tr(x, σ) = Tr |V (xi ) = xi1 + · · · + xin .
i i

• Recall Schur-Weyl: V ^{⊗N} = ⊕_λ Lλ ⊗ πλ with x acting on the first factor and σ acting on the second.
Therefore,

Tr(x, σ) = Σ_λ Sλ (x) χλ (σ)

with χλ an SN -character and Sλ the Schur polynomial.

Thus,

Σ_λ Sλ (x) χλ (σ) = Π_i ( x1^i + · · · + xn^i )^{mi} .

Multiplying by Π_{i<j} (xi − xj ), we get

Σ_λ Σ_{s∈Sn} det(s) xs(1)^{λ1 +n−1} · · · xs(n)^{λn} χλ (σ) = Π_{i<j} (xi − xj ) Π_i ( x1^i + · · · + xn^i )^{mi} .

Thus,

Theorem 4.13 (Frobenius Character Formula). χλ (σ) is the coefficient of

x1^{λ1 +n−1} x2^{λ2 +n−2} · · · xn^{λn}

in the product

Π_{1≤i<j≤n} (xi − xj ) · Π_i ( x1^i + · · · + xn^i )^{mi} .
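One can extract these coefficients by brute-force polynomial expansion; the sketch below (plain Python, not from the lecture) computes the character values of S3 at the identity (cycle type 1³, i.e. m1 = 3), recovering the dimensions 1, 2, 1 of the irreps π(3) , π(2,1) , π(1,1,1) .

```python
# Frobenius character formula: chi_lambda(sigma) = coefficient of
# x1^(l1+n-1) x2^(l2+n-2) ... xn^(ln) in prod_{i<j}(xi - xj) * prod_i p_i^{m_i},
# where p_i = x1^i + ... + xn^i and sigma has m_i cycles of length i.
from collections import defaultdict
from itertools import combinations

n = 3

def pmul(P, Q):  # multiply sparse polynomials {exponent tuple: coeff}
    R = defaultdict(int)
    for e1, c1 in P.items():
        for e2, c2 in Q.items():
            R[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(R)

def power_sum(i):  # p_i = x1^i + ... + xn^i
    P = {}
    for k in range(n):
        e = [0] * n
        e[k] = i
        P[tuple(e)] = 1
    return P

def frobenius_char(lam, cycle_lengths):
    poly = {tuple([0] * n): 1}
    for i, j in combinations(range(n), 2):   # Vandermonde factor (xi - xj)
        ei, ej = [0] * n, [0] * n
        ei[i], ej[j] = 1, 1
        poly = pmul(poly, {tuple(ei): 1, tuple(ej): -1})
    for length in cycle_lengths:             # one power sum per cycle of sigma
        poly = pmul(poly, power_sum(length))
    target = tuple(lam[k] + n - 1 - k for k in range(n))
    return poly.get(target, 0)

dims = [frobenius_char(lam, [1, 1, 1]) for lam in [(3, 0, 0), (2, 1, 0), (1, 1, 1)]]
print(dims)  # [1, 2, 1]
```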

5 Lecture 5 (3/2)
Last time we finished with the character formula for representations of the symmetric group using Schur-
Weyl duality. We will next take a quick look at invariant theory; in particular, we want to prove the
‘fundamental theorem of invariant theory’ (due to Weyl).

5.1 Invariant Theory


Suppose we have a collection of tensors Ti ∈ V ^{⊗mi} ⊗ (V ∗ )^{⊗ni} (‘mi times contravariant and ni times
covariant’), i = 1, . . . , k, where V is a f.dim vector space. We want to classify invariant functions of
T1 , . . . , Tk , i.e. functions of the form F (T1 , . . . , Tk ) (we’ll restrict to polynomial functions).

In one perspective, we are looking for functions which we can write in a coordinate-free way. Physicists/engineers think about tensors not as elements of tensor products, but as collections of numbers which
change in a specific way when you go from one basis to another. Then one can write various invariant
expressions, usually obtained using ‘Einstein summation’. If you have T ∈ V ^{⊗m} ⊗ (V ∗ )^{⊗n} and eℓ a basis
of V , then you can write this as

T = Σ T^{j1 ,...,jm}_{i1 ,...,in} ej1 ⊗ · · · ⊗ ejm ⊗ e∗i1 ⊗ · · · ⊗ e∗in .

Note 3. Pavel said more things about how physicists think about tensors, but I didn’t care enough to
write it down.
We look for polynomial functions invariant under the GL(V )-action.

Example. If T is a linear operator, then det and Tr are both invariant functions.

It is enough to classify invariant functions of degree di with respect to each Ti . This is equivalent to
looking for invariants in

⊗_i S^{di} ( V ^{⊗mi} ⊗ (V ∗ )^{⊗ni} )∗

(above, S is the symmetric power). Finding invariant functions in this space looks formidable, but in fact it
isn’t.
To describe such invariant functions, attach to each ‘variable’ Ti a vertex with ni outgoing edges and
mi incoming edges (picture omitted). Put on the plane di such vertices of each type i. Invariant functions
can be built by contractions of tensors: draw a graph by connecting vertices in a way which respects
directions and which makes use of each edge/stub attached to a vertex.

Example. Say T ∈ V ⊗2 ⊗ (V ∗ )⊗2 and we want a degree 3 invariant. Then we could form a graph as in
Figure 1.

Figure 1: An example graph giving an invariant function

Apparently these graphs are related to Feynman diagrams. To every such graph Γ, one can attach an
invariant FΓ .

Theorem 5.1 (Fundamental Theorem of Invariant Theory). Such functions FΓ , as Γ varies, span
the space of invariant functions.

Note no linear independence claim above.

Example. Say you have a linear operator T : V ! V , so T ∈ V ⊗ V ∗ . Say we want degree d invariant
polynomials. Then we need to start with d copies of a vertex with one outgoing edge and one incoming
edge. Then we need to connect them in some way. The graph in Figure 2, for example, corresponds to
the function FΓ = Tr(T 2 ) · Tr(T 2 ). Each cycle corresponds to the trace of T to the length of that cycle.

Figure 2: A graph corresponding the the invariant function Tr(T 2 )2

Hence, in this case, the theorem says that degree 4 invariant functions are spanned by

Tr(T )^4 , Tr(T ² ) Tr(T )² , Tr(T ² )² , Tr(T ³ ) Tr(T ), Tr(T ^4 ).

Hence, the algebra of invariant polynomials of T is generated by traces of powers of T , i.e. Tr(T ), Tr(T ² ), Tr(T ³ ), . . . .
Observe that these are not linearly independent (e.g. the characteristic polynomial can be used to get
linear dependences among such products; if T is n × n, then one should only need Tr(T ^i ) for i ≤ n).
Proof of Theorem 5.1. An invariant function can be viewed as an element of the tensor product

⊗_{i=1}^k ( (V ^{⊗mi} ⊗ (V ∗ )^{⊗ni} )∗ )^{⊗di} = HomC ( V ^{⊗ Σi mi di} , V ^{⊗ Σi ni di} ).

We want GL(V )-invariants in this space. Nonzero invariants only exist if Σi ni di = Σi mi di (i.e. the
same number of incoming and outgoing arrows). If so, then Schur-Weyl duality tells us that the invariants
are spanned by permutations, i.e. matchings of incoming arrows to outgoing arrows (indeed, writing M for the common sum,
HomC (V ^{⊗M} , V ^{⊗M} )^{GL(V )} = EndGL(V ) (V ^{⊗M} ), which by Schur-Weyl duality is the image of CSM ).
This means that these invariants are spanned by the FΓ ’s for various Γ.

In order to pass from tensor products to symmetric powers, simply project using symmetrization. This
will cause some a priori different graphs to correspond to the same invariant, but this does not affect
spanning.

Remark 5.2. SW duality tells us that if dim V ≫ 0 (dim V ≥ Σi ni di , I think), then invariants
corresponding to different permutations are linearly independent (in the tensor product). Symmetrization
identifies some of these, but if you remove the redundant ones, what are left are still linearly independent.
From this one can deduce that for large dim V and fixed di , mi , ni , the invariants FΓ corresponding to
non-isomorphic graphs Γ are linearly independent. In this way, one gets a basis of the space of invariant
functions.

Example. Suppose T1 , . . . , Tk are operators V → V , so all vertices have 2 arrows, one incoming and
one outgoing. Hence, all graphs Γ are unions of cycles. Each cycle gives the trace of the product of the
operators appearing in the cycle (remember: the trace of a product is invariant under cyclic permutation).

The upshot is that the algebra of polynomial invariants of k linear maps T1 , . . . , Tk : V → V is
generated by traces Tr(Ti1 . . . Tim ) of cyclic words (i.e. words defined only up to cyclic permutation) in
T1 , . . . , Tk . These generators are “asymptotically algebraically independent” in the sense that, for a fixed
degree d and dim V ≫ 0, these generators do not satisfy any nontrivial relations in degree d.
Corollary 5.3. There are no universal polynomial identities for (square) matrices of all sizes.

Proof. Suppose P (X1 , . . . , Xn ) = 0 for all X1 , . . . , Xn . Introduce another variable Xn+1 , and consider
F = Tr(P (X1 , . . . , Xn )Xn+1 ) = 0. Traces of words are asymptotically independent, so if it vanishes for
all sizes of matrices, then P = 0.

If you fix a size, then such identities do exist, e.g. for size 1 you have XY − Y X = 0 (i.e. multiplication
of scalars is commutative). For size 2, you have [(XY − Y X)² , Z] = 0 (why? XY − Y X has trace 0 and
is generically diagonalizable, so looks like diag(λ, −λ); hence its square looks like λ² I, which is central).
This fails for size 3.
In general, for size n, there is the Amitsur-Levitzki identity: for X1 , . . . , X2n of size n,

Σ_{σ∈S2n} sign(σ) Xσ(1) . . . Xσ(2n) = 0

(homework).
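The n = 2 case of the Amitsur-Levitzki identity can be checked by brute force: sum the 24 signed products of four 2×2 matrices over S4 . A plain-Python sketch (not from the lecture; it checks random integer matrices rather than proving the identity):

```python
# Brute-force check of the Amitsur-Levitzki identity for n = 2:
# sum over S_4 of sign(sigma) X_{sigma(1)} ... X_{sigma(4)} = 0 for 2x2 matrices.
from itertools import permutations
from random import randint, seed

def mul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(2)) for c in range(2)] for r in range(2)]

def sign(perm):  # parity via inversion count
    s = 1
    for a in range(len(perm)):
        for b in range(a + 1, len(perm)):
            if perm[a] > perm[b]:
                s = -s
    return s

seed(0)
X = [[[randint(-9, 9) for _ in range(2)] for _ in range(2)] for _ in range(4)]
total = [[0, 0], [0, 0]]
for perm in permutations(range(4)):
    P = X[perm[0]]
    for idx in perm[1:]:
        P = mul(P, X[idx])
    s = sign(perm)
    total = [[total[r][c] + s * P[r][c] for c in range(2)] for r in range(2)]
print(total)  # [[0, 0], [0, 0]]
```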

5.2 Howe Duality
Let V, W be two f.dim complex vector spaces. Then consider S n (V ⊗ W ) as a representation of GL(V ) ×
GL(W ).

Theorem 5.4 (Howe duality).

S^n (V ⊗ W ) = ⊕_{λ:|λ|=n} S^λ V ⊗ S^λ W

(if λ has more than dim V or more than dim W parts, then the corresponding summand is 0).

Proof. Note that

S^n (V ⊗ W ) = ( (V ⊗ W )^{⊗n} )^{Sn} = ( V ^{⊗n} ⊗ W ^{⊗n} )^{Sn}

= ( ( ⊕_{λ:|λ|=n} S^λ V ⊗ πλ ) ⊗ ( ⊕_{µ:|µ|=n} S^µ W ⊗ πµ ) )^{Sn}

= ⊕_{λ,µ:|λ|=|µ|=n} S^λ V ⊗ S^µ W ⊗ (πλ ⊗ πµ )^{Sn} ,

where Sn acts diagonally on V ^{⊗n} ⊗ W ^{⊗n} . Now we know that the character of πλ is integer-valued (e.g. by the Frobenius formula), so πλ ≅ πλ∗ (the character
is fixed by complex conjugation), so

(πλ ⊗ πµ )^{Sn} = HomC (πλ , πµ )^{Sn} = HomSn (πλ , πµ ).

Schur-Weyl duality tells us that the πλ are irreducible and pairwise non-isomorphic, so we conclude that

S^n (V ⊗ W ) = ⊕_{λ:|λ|=n} S^λ V ⊗ S^λ W

as desired.

Not important above that V, W are finite dimensional.

Corollary 5.5 (Cauchy identity). If x = (x1 , . . . , xr ) and y = (y1 , . . . , ys ), then

    Σλ sλ (x) sλ (y) z^{|λ|} = Πi=1..r Πj=1..s 1/(1 − z xi yj ).

Proof. We use the Molien formula: let A : V → V be a linear operator on a f.d. vector space V (over
any field), and let S^n A : S^n V → S^n V be the induced action of A; then

    Σn=0..∞ Tr(S^n A) z^n = 1/det(1 − zA).

This is easy to prove. Let x1 , . . . , xr be the eigenvalues of A (so r = dim V ); then the eigenvalues of S^n A
are x1^{m1} · · · xr^{mr} with m1 + · · · + mr = n. Hence,

    Tr(S^n A) = Σm1+···+mr=n x1^{m1} · · · xr^{mr} = hn (x1 , . . . , xr ),

the complete symmetric function. The generating function of these is

    Σn=0..∞ hn (x1 , . . . , xr ) z^n = Σ (x1 z)^{m1} · · · (xr z)^{mr} = Πi=1..r 1/(1 − xi z) = 1/det(1 − zA).
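The generating-function identity is easy to check symbolically (a sketch with sympy for a diagonal 2 × 2 operator; `x1, x2` play the role of the eigenvalues of A):

```python
from sympy import expand, symbols

x1, x2, z = symbols('x1 x2 z')

# Molien for A = diag(x1, x2): the coefficients of 1/det(1 - zA) are h_n(x1, x2)
rhs = 1 / ((1 - z * x1) * (1 - z * x2))
ser = rhs.series(z, 0, 5).removeO()
for n in range(5):
    # complete symmetric function h_n = sum of all degree-n monomials
    h_n = sum(x1 ** a * x2 ** (n - a) for a in range(n + 1))
    assert expand(ser.coeff(z, n) - h_n) == 0
```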

Now, to prove Cauchy, consider

    g = diag(x1 , . . . , xr ) and h = diag(y1 , . . . , ys ).

So g acts on V = C^r and h acts on W = C^s . Then,

    Tr S^n (g ⊗ h) = Tr_{S^n (V ⊗W )} (g ⊗ h) = Σλ:|λ|=n Tr_{S^λ V} (g) Tr_{S^λ W} (h) = Σλ:|λ|=n sλ (x) sλ (y).

Hence,

    Σλ sλ (x) sλ (y) z^{|λ|} = Σn Tr S^n (g ⊗ h) z^n = 1/det(1 − z(g ⊗ h)) = Πi,j 1/(1 − z xi yj ).
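One can also verify the Cauchy identity order by order in z (a sketch with sympy for r = s = 2; the bialternant formula used below to compute sλ is standard but not part of the lecture, and `schur` is a made-up helper name):

```python
from sympy import Matrix, cancel, expand, prod, symbols

x1, x2, y1, y2, z = symbols('x1 x2 y1 y2 z')

def schur(lam, xs):
    """Schur polynomial s_lambda(xs) via the bialternant (ratio of alternants) formula."""
    r = len(xs)
    lam = list(lam) + [0] * (r - len(lam))
    num = Matrix(r, r, lambda i, j: xs[i] ** (lam[j] + r - 1 - j)).det()
    den = Matrix(r, r, lambda i, j: xs[i] ** (r - 1 - j)).det()
    return cancel(num / den)

xs, ys, N = [x1, x2], [y1, y2], 3
# left side: partitions with at most 2 parts and |lambda| <= N
lhs = sum(schur((a, b), xs) * schur((a, b), ys) * z ** (a + b)
          for a in range(N + 1) for b in range(min(a, N - a) + 1))
# right side: the product, expanded as a power series in z and truncated at z^N
rhs = prod(1 / (1 - z * xi * yj) for xi in xs for yj in ys)
assert expand(lhs - rhs.series(z, 0, N + 1).removeO()) == 0
```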

5.3 Minuscule weights


Let g be a simple Lie algebra over C.

Definition 5.6. A dominant, integral weight ω ∈ P+ is minuscule if for all positive coroots β, we have
(ω, β) ≤ 1 (i.e. (ω, β) ∈ {0, 1}). This is equivalent to requiring that for all coroots β, |(ω, β)| ≤ 1.

Example. ω = 0 is always a (trivial) minuscule weight.

Example. For sln , all fundamental weights are minuscule. Recall the fundamental weights are ωi =
(1, 1, . . . , 1, 0, . . . , 0), so we see (ωi , ej − ek ) = 0, 1 (with j < k).

Proposition 5.7. Every nonzero minuscule weight ω ≠ 0 is fundamental.

Proof. The inner products (ω, αi∨ ) ∈ {0, 1} for minuscule weights. However, (ω, αi∨ ) can be 1 for only
one i, since otherwise (ω, θ∨ ) ≥ 2, where the maximal coroot is θ∨ = Σk mk αk∨ with mk > 0 for all k. □
Warning 5.8. Not all fundamental weights are minuscule.

Proposition 5.9. A fundamental weight ωi is minuscule ⇐⇒ mi = 1, where θ∨ = Σi mi αi∨ .

Proof. mi = (ωi , θ∨ ), so for minuscule ωi we have mi = 1. Conversely, if mi = 1, then for all positive
coroots β, (ωi , β) ≤ 1 (e.g. since β ∈ θ∨ − Q+ ), so ωi is minuscule. □

Exercise. G2 has no minuscule weights except 0.

Here’s a theorem we will discuss next time.

Theorem 5.10. ω ∈ P+ is minuscule ⇐⇒ all weights of Lω are in the Weyl group orbit of the highest
weight.

Corollary 5.11. When ω is minuscule, ch Lω = Σγ∈W ω e^γ . In particular, all weight multiplicities are 1.

6 Lecture 6 (3/4)
Note 4. Video for last class not up yet, so we’ll see how much things make sense today...

6.1 Last Time


Recall 6.1. Let g be a simple Lie algebra. A dominant, integral weight ω ∈ P+ is called minuscule if
for all positive coroots β, we have (ω, β) ≤ 1 (∈ {0, 1}). Equivalently, for all coroots β, |(ω, β)| ≤ 1.

Remark 6.2. Any integral weight can be conjugated to a dominant one via an element of the Weyl group.

Example. ω = 0.

Example. Say g = sln . All fundamental weights are minuscule ωi = (1, 1, 1, . . . , 1, 0, 0, . . . , 0) (with i
ones).

Recall 6.3. Every nonzero minuscule weight is fundamental.

Recall 6.4. Let θ∨ be a maximal coroot. Then, ωi is minuscule ⇐⇒ mi = (ωi , θ∨ ) = 1.

Remark 6.5. θ∨ = Σi mi αi∨ .
6.2 This Time: minuscule weights

Lemma 6.6. If ω ∈ Q, and |(ω, β)| ≤ 1 for all coroots β, then ω = 0. Hence, there are no nonzero
minuscule weights in the root lattice. (Remember: Q is the root lattice.)
Proof. Suppose ω = Σi mi αi (with mi ∈ Z) is a counterexample minimizing Σi |mi |. Then,

    0 < (ω, ω) = Σi mi (ω, αi ),

so there’s some index j s.t. mj and (ω, αj ) are nonzero and of the same sign. Replacing ω by −ω if
needed, we may assume mj , (ω, αj ) > 0. Since αj∨ is a positive multiple of αj , we also have (ω, αj∨ ) > 0.
By hypothesis, we know (ω, αj∨ ) ≤ 1, so (ω, αj∨ ) = 1. Consider

    sj ω = ω − (ω, αj∨ )αj = ω − αj = Σi m′i αi ,

where m′i = mi if i ≠ j and m′j = mj − 1. Note that Σi |m′i | = Σi |mi | − 1, but sj ω is also a counterexample
(modifying by the Weyl group does not affect the property). [Margin note: ω ≠ αj since (αj , αj ) = 2 > 1,
I think.] □

Example. For G2 , P = Q (weight lattice = root lattice), so there are no nonzero minuscule weights.
Proposition 6.7. A weight ω ∈ P+ is minuscule iff for all α ∈ Q+ (α ≠ 0), ω − α is not dominant.

Proof. (⇒) Suppose ω = ωk is minuscule and α ∈ Q+ is nonzero. Suppose also that ωk − α is dominant.
We can write α = Σi mi αi with mi ∈ Z+ . If mj = 0 for some j ≠ k, then everything reduces to smaller rank
(can delete vertex j from the Dynkin diagram8 ), so we may assume mj > 0 for j ≠ k. Then, for all positive
coroots β,

    (α, β) = (ωk , β) − (ωk − α, β) ≤ (ωk , β) ≤ 1

(ωk − α dominant =⇒ (ωk − α, β) ≥ 0), with equality only possible if β involves αk∨ . If β does not involve
αk∨ , then (α, β) ≤ 0. In particular, (α, αi∨ ) ≤ 0 if i ≠ k and (α, αk∨ ) ≤ 1.
Now, if (α, αk∨ ) ≤ 0, we’d get (α, α) ≤ 0, a contradiction (α is a positive linear combination of positive
roots). Thus, (α, αk∨ ) = 1. As a consequence, mk > 0 (mk = 0 =⇒ (α, αk∨ ) ≤ 0, since it then only involves
simple roots/coroots with different indices, and those entries in the Cartan matrix are always ≤ 0). Thus,
(α, θ∨ ) ≥ 1, so (ωk − α, θ∨ ) = 1 − (α, θ∨ ) ≤ 0, which forces ωk − α = 0. Hence, ωk ∈ Q, a contradiction to
the previous lemma.
(⇐) Suppose ω is not minuscule. We’ll produce an α ∈ Q+ s.t. ω − α is dominant. Since ω is not
minuscule, there exists a positive root γ s.t. (ω, γ ∨ ) ≥ 2. Consider9 ω − γ. We first claim this is not
conjugate to ω. Observe

    (ω − γ, ω − γ) = (ω, ω) − 2(γ, ω) + (γ, γ).

Since 2(γ, ω)/(γ, γ) = (γ ∨ , ω) ≥ 2 > 1, we see that 2(γ, ω) > (γ, γ), so (ω − γ, ω − γ) < (ω, ω), which
means ω − γ ∉ W ω. Now, pick w ∈ W such that λ := w(ω − γ) ∈ P+ . Then, λ ≠ ω, but ω − λ ∈ Q+
because ω − γ is a weight of Lω (for the vector10 fγ vω ). □

Remark 6.8. We have a classification of root systems/semisimple Lie algebras from last time, so in prin-
ciple, we could just go through the list and check which weights are minuscule. This would be unsatisfying,
so we don’t do that.

Question 6.9. Why are minuscule weights interesting?

Proposition 6.10. ω is minuscule ⇐⇒ the Weyl group W acts transitively on the weights of the irrep
Lω .

Proof. (⇒) Let µ be a weight of Lω . Pick w ∈ W such that wµ is dominant. Then, wµ = ω − α for some
α ∈ Q+ . This implies that wµ = ω, so µ = w−1 ω.
(⇐) If ω is not minuscule, take γ as in the previous proof, and consider ω − γ, the weight of fγ vω ∈ Lω .
This vector is nonzero, so ω − γ is a weight not in the orbit of ω. □

Corollary 6.11. All weight spaces of Lω are 1-dimensional when ω is minuscule.

Remark 6.12. The converse of this is false. Think about reps of sl2 , for example.
8 Pass to the root subsystem generated by αi for i ≠ j
9 “Just for fun, let us use representation theory.”
10 This is nonzero since hγ vω = (γ ∨ , ω)vω ≠ 0
Corollary 6.13. The character of Lω is

    χω = Σλ∈W ω e^λ .

You could also compute this using Weyl’s character formula. Comparing the two would lead to some
nontrivial identity.

Corollary 6.14. If α is a root of g, then Lω |(sl2 )α is a direct sum of 1-dimensional and 2-dimensional
representations of (sl2 )α .

Proof. Let v ∈ Lω be a highest weight vector for (sl2 )α , of some weight λ. Then,

    hα v = (λ, α∨ )v = (wω, α∨ )v ∈ {v, 0, −v}.

The scalar can’t be −1 (since v is a highest weight vector), so it’s 0 or 1. Hence, v generates a 1-d or 2-d
rep of (sl2 )α . □

Note 5. Pavel worked out an example looking at B2 , but I did not pay attention. I should go back and
watch the video and add it in later...

Corollary 6.15. If ω is minuscule and λ ∈ P+ , then

    Lω ⊗ Lλ = ⊕µ∈W ω Lλ+µ

(if λ + µ ∉ P+ , the corresponding term drops out, i.e. we really sum over µ ∈ W ω s.t. λ + µ ∈ P+ ).

Proof. We use the Weyl character formula (I possibly made some typos below):

    χLω ⊗Lλ = (Σµ∈W ω e^µ ) · (Σv∈W det(v) e^{v(λ+ρ)}) / Πα>0 (e^{α/2} − e^{−α/2})
            = Σµ∈W ω, v∈W det(v) e^{v(λ+ρ)+µ} / Πα>0 (e^{α/2} − e^{−α/2})
            = Σµ∈W ω, v∈W det(v) e^{v(λ+v^{−1}µ+ρ)} / Πα>0 (e^{α/2} − e^{−α/2})
            = Σν∈W ω, w∈W det(w) e^{w(λ+ν+ρ)} / Πα>0 (e^{α/2} − e^{−α/2})
            = Σν∈W ω χLλ+ν .

If λ + ν ∉ P+ , then there exists i s.t. (λ + ν, αi∨ ) < 0. But (λ, αi∨ ) ≥ 0 and |(ν, αi∨ )| ≤ 1, so
(λ + ν, αi∨ ) = (ν, αi∨ ) = −1, which means (λ, αi∨ ) = 0. We know (ρ, αi∨ ) = 1, so

    (λ + ν + ρ, αi∨ ) = 0.

This means si (λ + ν + ρ) = λ + ν + ρ, so the terms det(w)e^{w(λ+ν+ρ)} and det(wsi )e^{wsi (λ+ν+ρ)} will
cancel. This justifies ignoring the terms not in P+ . □
Recall all fundamental representations of sln are minuscule.

Corollary 6.16. Let V = C^n be the vector representation of GLn . Then, for any partition λ,

    V ⊗ Lλ = ⊕i=1..n Lλ+ei

(if λ + ei is not nonincreasing, drop its term).

Example. Take L(1,0,...,0,−1) = sl(V ), the adjoint representation. Then,

V ⊗ sl(V ) = L(2,0,...,0,−1) + L(1,1,...,0,−1) + L(1,0,1,...,−1) + · · · + L(1,0,...,0) ,

but only the first two and last terms survive. Hence, V ⊗ sl(V ) = V ⊕(two other irreps).
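A quick sanity check on the dimensions in this example (a sketch; the Weyl dimension formula for gl_n used below is standard but not stated in these notes, and `dim_gl` is a made-up helper name):

```python
from math import prod

def dim_gl(lam):
    """Weyl dimension formula for the irrep of gl_n with highest weight lam:
    dim = prod_{i<j} (lam_i - lam_j + j - i) / (j - i)."""
    n = len(lam)
    num = prod(lam[i] - lam[j] + j - i for i in range(n) for j in range(i + 1, n))
    den = prod(j - i for i in range(n) for j in range(i + 1, n))
    return num // den

# n = 3; shift sl3 weights by (1,1,1) to make all parts nonnegative:
# V = (1,0,0) and sl(V) = (1,0,-1) -> (2,1,0)
assert dim_gl((1, 0, 0)) == 3 and dim_gl((2, 1, 0)) == 8
# the surviving summands L(2,0,-1), L(1,1,-1), L(1,0,0) become (3,1,0), (2,2,0), (2,1,1)
assert dim_gl((3, 1, 0)) + dim_gl((2, 2, 0)) + dim_gl((2, 1, 1)) == 3 * 8
```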

We can give a combinatorial interpretation of the previous corollary. Recall that partitions correspond
to Young diagrams, e.g. λ = (6, 3, 1, 1, 0) (say n = 5). [Diagram omitted.]
What are the diagrams corresponding to V ⊗ Lλ ? The λ + ei ’s are

    (7, 3, 1, 1, 0), (6, 4, 1, 1, 0), (6, 3, 2, 1, 0), (6, 3, 1, 2, 0), (6, 3, 1, 1, 1).

Note that these each correspond to adding a square to one row of λ. If adding the square produces
another Young diagram (preserves monotonicity), we call it an addable box. Thus, we see that

    V ⊗ Lλ = Σλ′=λ+□ Lλ′

(sum over addable boxes). We can do the same thing for exterior powers.
Recall Lωi = Λ^i V , with weights given by (a1 , . . . , an ) s.t. aj ∈ {0, 1} and there are exactly i copies
of 1. Adding λ + (a1 , . . . , an ) corresponds to adding i boxes to different rows of λ. Since ωi is minuscule,
we have

    Λ^i V ⊗ Lλ = ⊕I⊂{1,...,n}, |I|=i Lλ+eI .

Graphically, Λ^i V ⊗ Lλ is a sum over Young diagrams obtained from λ by adding i boxes in different
rows.

Example. Say n = 3. Let’s compute Λ² V ⊗ L(3,2,1) . Note that λ = (3, 2, 1) is the staircase diagram.
[Diagram omitted.]
The diagrams in the sum are those obtained by adding two boxes in distinct rows [diagrams omitted], i.e.

    Λ² V ⊗ L(3,2,1) = L(4,3,1) + L(4,2,2) + L(3,3,2) .

Similarly, one gets

    Λ² V ⊗ L(3,1,1) = L(4,2,1) + L(3,2,2) .

If we were over GL4 , there would be extra summands, e.g. an L(4,1,1,1) and an L(3,2,1,1) .
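Since characters multiply under tensor product, this decomposition can be checked on Schur polynomials: the character of Λ²V for GL3 is e2 = s(1,1). A sketch with sympy (the bialternant formula used for sλ is standard but not part of the lecture, and `schur` is a made-up helper name):

```python
from sympy import Matrix, cancel, expand, symbols

x = symbols('x1:4')  # x1, x2, x3

def schur(lam, xs):
    """Schur polynomial via the bialternant (ratio of alternants) formula."""
    r = len(xs)
    lam = list(lam) + [0] * (r - len(lam))
    num = Matrix(r, r, lambda i, j: xs[i] ** (lam[j] + r - 1 - j)).det()
    den = Matrix(r, r, lambda i, j: xs[i] ** (r - 1 - j)).det()
    return cancel(num / den)

e2 = schur((1, 1), x)  # character of wedge^2 of the vector rep of GL3
lhs = expand(e2 * schur((3, 2, 1), x))
rhs = expand(schur((4, 3, 1), x) + schur((4, 2, 2), x) + schur((3, 3, 2), x))
assert lhs == rhs
```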

Proposition 6.17. Every coset of P/Q contains a unique minuscule weight, so there’s a bijection between
P/Q and minuscule weights.

Proof. Consider Ca = a + Q ⊂ P , the coset of a. Look at the intersection Ca ∩ P+ . Take ωa ∈ Ca ∩ P+
an element with smallest 2(ωa , ρ∨ ) ∈ Z. The weights of Lωa are all in Ca . If ωa is not minuscule, there
exists λ ∈ P+ s.t. 0 ≠ ωa − λ ∈ Q+ , which implies 2(λ, ρ∨ ) < 2(ωa , ρ∨ ). This contradicts minimality, so
ωa is minuscule.
If ω1 , ω2 ∈ Ca are both minuscule, consider their difference ω1 − ω2 ∈ Q. If it is nonzero, by a previous
lemma, there exists a coroot β s.t. (ω1 − ω2 , β) ≥ 2. This forces (ω1 , β) = 1 and (ω2 , β) = −1. But the
first forces β to be positive while the second forces β to be negative, and this is a contradiction. □

Corollary 6.18. The number of minuscule weights is #P/Q = det A, where A is the Cartan matrix.

Example. Bn (o(2n + 1)) has det = 2, so there’s only one (nonzero) minuscule weight. The corresponding
representation here is called the ‘spinor representation’.
Cn (sp(2n)) has det = 2, so there’s again one minuscule weight. One can check that it corresponds to
the vector representation.
Dn (o(2n)) has det = 4, so 3 minuscule weights. These are the vector representation V and two spinor
representations S± .
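These determinants are easy to verify numerically (a sketch; the Cartan matrices are assembled from standard Dynkin diagram data, `cartan_det` is a made-up helper name, and the B/C conventions below differ only by transpose, which does not change the determinant):

```python
import numpy as np

def cartan_det(typ, n):
    """Determinant of the Cartan matrix of type typ_n (types A, B, C, D only)."""
    A = 2 * np.eye(n)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -1
    if typ == 'B':
        A[n - 2, n - 1] = -2   # double bond at the last node
    elif typ == 'C':
        A[n - 1, n - 2] = -2   # transpose of type B
    elif typ == 'D':
        A[n - 1, n - 2] = A[n - 2, n - 1] = 0
        A[n - 1, n - 3] = A[n - 3, n - 1] = -1   # fork at alpha_{n-2}
    return round(np.linalg.det(A))

assert cartan_det('A', 5) == 6   # det = n + 1 = #(P/Q) for sl_{n+1}
assert cartan_det('B', 4) == 2 and cartan_det('C', 4) == 2
assert cartan_det('D', 5) == 4
```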

More on rep theory of orthogonal and symplectic groups next time. After that, we will start looking
at the theory of compact Lie groups.

7 Lecture 7 (3/11)
7.1 Fundamental weights/representations for classical Lie algebras
7.1.1 Type Cn
We begin with the symplectic Lie algebra g = sp2n . Let B = Σi=1..n xi ∧ xi+n be our symplectic form. A
natural choice of Cartan subalgebra is

    h = {diag(a1 , . . . , an , −a1 , . . . , −an )} ≅ C^n .

In R^n , the roots are

    αi = ei − ei+1 = αi∨ for i = 1, . . . , n − 1 and αn = 2en with αn∨ = en = (1/2)αn .

The fundamental weights are ωi = (1, . . . , 1, 0, . . . , 0) with i copies of 1.

Recall 7.1. The Dynkin diagram for Cn is

• • ··· • •

Figure 3: The Dynkin Diagram Cn

Recall 7.2. The number of minuscule weights equals the determinant of the Cartan matrix.

Here, the determinant is 2, so there is 1 nonzero minuscule weight. It is the weight ω1 corresponding to
the vertex on the left end of the Dynkin diagram.

Example. Consider the representation V = C^{2n} . Its weights are e1 , . . . , en and −e1 , . . . , −en . The Weyl
group is W = Sn ⋉ (Z/2Z)^n (Sn permutes while (Z/2Z)^n changes signs). Note V ≅ V ∗ . The highest
weight here is e1 = ω1 , which is the minuscule weight.

What about other fundamental representations? Well, g has the same fundamental weights as GLn ,
so maybe we should expect Lωi = Λ^i V ? This is true for i = 1.
But it is not true for i = 2! Λ² V is not irreducible (but it has the correct highest weight). The symplectic
form B is non-degenerate, so we can invert it:

    B⁻¹ = Σi x∗i+n ∧ x∗i ∈ Λ² V,

and B⁻¹ ∈ Λ² V generates a copy of the trivial representation C. We can write

    Λ² V = C B⁻¹ ⊕ Λ²₀ V, where Λ²₀ V = { y ∈ Λ² V : (y, B) = 0 }.

Exercise. Λ²₀ V is irreducible.11

Hence, Lω2 = Λ²₀ V , so our intuition was not bad.

Example. For sp(4), there are only two fundamental weights: V = C⁴ , and Λ² V = C ⊕ Λ²₀ V is
6-dimensional. Recall sp(4) ≅ o(5), which has a 5-dimensional vector representation (namely Λ²₀ V ).

What happens for other i?

They will be contained in exterior powers, but some pieces will fall off. Consider the exterior algebra

    ΛV := ⊕i=0..2n Λ^i V.
11 Look at weights, or write character formula, or show directly

Recall we have B ∈ Λ² V ∗ . Hence, given T ∈ Λ^{i+1} V , we can form ιB T ∈ Λ^{i−1} V . In the other
direction, we may wedge with B⁻¹ to get mB : Λ^{i−1} V → Λ^{i+1} V .

Proposition 7.3.
(1) The operators mB , ιB , and h (where hT = (i − n)T for T ∈ Λ^i V ) form an sl2 -triple.

(2) The operator ιB : Λ^{i+1} V → Λ^{i−1} V is surjective for i ≤ n and injective for i ≥ n (so an iso for
i = n). Let Λ^i₀ V := ker ιB . This is irreducible if 1 ≤ i ≤ n, and Λ^i₀ V ≅ Lωi .12

(3) ΛV = ⊕i=0..n Lωi ⊗ Ln−i , where ω0 = 0 and Ln−i is the sl(2)-rep with highest weight n − i and
dimension n − i + 1. (This is another instance of the double centralizer property.)

(4) Every irreducible representation of sp2n occurs in V ⊗N for some N (since all fundamental reps
do).13

Proof. Homework.14 □

Remark 7.4. Note dim Λ^i V = (2n choose i), so these dimensions form a bell curve shape.

7.1.2 Type Bn

We now have the Lie algebra g = o2n+1 . The roots here are αi = ei − ei+1 for i = 1, . . . , n − 1 and
αn = en . The dual roots are αi∨ = αi for i = 1, . . . , n − 1 and αn∨ = 2en = 2αn . The fundamental
weights are

    ωi = (1, . . . , 1, 0, . . . , 0) (i ones) for i = 1, . . . , n − 1

and

    ωn = (1/2, . . . , 1/2).

The Weyl group is W = Sn ⋉ (Z/2Z)^n (permute coordinates and change signs). The Cartan matrix

• • ··· • •

Figure 4: The Dynkin Diagram Bn

again has determinant 2 (transpose of the previous one?), so there is only one nontrivial minuscule weight.
This time it is ωn .
12 This should follow from rep theory of sl2
13 This will not be true for the orthogonal groups and is “the reason our world exists” (something about spin in physics)
14 Should be “easy” after establishing you have an sl2 -rep. Irreducibility should come from looking at characters (correct
highest weight and correct dimension)

Warning 7.5. Lω1 = V = C^{2n+1} is the vector representation, but it is not minuscule.

Example. The weights of C⁵ (for so(5)) include 0, but 0 is not a weight of a minuscule representation
(the Weyl group acts transitively on the weights).

Exercise. Λ^i V is irreducible for 1 ≤ i ≤ n, so

    Λ^i V = Lωi for i ≤ n − 1 and Λ^n V = L2ωn .

Remark 7.6. For Cn , we have an invariant skew-symmetric form. For Bn , we now have an invariant
symmetric form. Hence something falls off for symmetric powers instead of for exterior powers.

Definition 7.7. Lωn is called the spinor representation and denoted S.

What are its weights? It is minuscule, so its weights should be an orbit under the Weyl group. Hence,
they’ll be (±1/2, ±1/2, . . . , ±1/2) for any choice of signs. Hence, dim S = 2^n . What is the character of S?


We are using the quadratic form Q = x1 xn+1 + · · · + xn x2n + x²2n+1 , so o(2n + 1) fixes this form. The
natural Cartan subalgebra is

    h = {diag(a1 , . . . , an , −a1 , . . . , −an , 0)} ≅ C^n .

Note that for h ∈ h, its exponential is

    e^h = diag(x1 , . . . , xn , x1⁻¹ , . . . , xn⁻¹ , 1).

We want to compute its trace on S. This gives the character

    χS = Σ x1^{±1/2} · · · xn^{±1/2} = (x1^{1/2} + x1^{−1/2}) · · · (xn^{1/2} + xn^{−1/2}).

What’s up with these 1/2 powers? The square root of a complex number is only defined up to sign, so
does this make sense? What does this mean? It means this representation does not lift to the orthogonal
group SO(2n + 1) (if it did, the exponents would all be integers). The point is that the orthogonal group
is not simply connected, so not all Lie algebra representations lift to it.15 For the same reason, S does not
occur in V ⊗N . Elements of S are called spinors.
integers
Definition 7.8. The universal cover of SO2n+1 (C) is called the Spin group, denoted Spin2n+1 (C).

Now, S gives a representation of Spin2n+1 (C).

Theorem 7.9. π1 (SOn (C)) = Z/2Z for any n ≥ 3.16

Example. When n = 3, SL2 (C) = Spin3 is a double cover of SO(3). What is the map SL2 (C) ! SO(3)?
Take the 3-dimensional representation of SL2 ; this is the adjoint representation which has an invariant
form, the Killing form. The kernel of this map is Z/2Z ≅ {±I}, the center of SL2 , so we see the exact
sequence
1 −! Z/2Z −! SL2 −! SO3 −! 1.
15 It will, however, lift to its universal cover.
16 π1 (SO2 (C)) = Z, apparently.


Lemma 7.10. Let Xn = {(z1 , . . . , zn ) ∈ C^n : z1² + · · · + zn² = 1}. Then, Xn is simply connected for
n ≥ 3, and π2 (Xn ) = 1 for n ≥ 4.

Proof. Consider XnR = Xn ∩ R^n = S^{n−1} ⊂ R^n . This is simply connected for n ≥ 3, so it suffices to
show that Xn deformation retracts onto XnR . Consider some z ∈ Xn and write z = x + iy with x, y ∈ R^n .
Then,

    1 = z² = x² − y² + 2ix · y =⇒ x² − y² = 1 and x · y = 0.

Now, consider the homotopy ft : Xn → Xn given by

    ft (x + iy) = (x + ity)/√(x² − t²y²)

(note (x + ity)² = x² − t²y² + 2itx · y = x² − t²y² ≥ x² − y² = 1). Note that f1 = Id while f0 (x + iy) =
x/|x| ∈ XnR . Furthermore, ft |XnR = IdXnR for all t, so XnR is a deformation retract of Xn , finishing the
proof. □

Note that when n = 4, we have X4 ≅ {ad − bc = 1 : a, b, c, d ∈ C} = SL2 (C), so this lemma recovers
the fact that SL2 (C) is simply connected.

Proof of Theorem 7.9. We will induct on n. Note that we already know the theorem when n = 3. Note
that SOn acts transitively on Xn ,17 so Xn is a homogeneous space. What is the stabilizer?

StabSOn (e1 ) = SOn−1

since if it preserves e1 it’ll also preserve the orthocomplement of e1 . Hence, Xn = SOn / SOn−1 , so we
have a fiber sequence SOn−1 ↪ SOn ↠ Xn . Thus, we get an exact sequence

π2 (Xn ) ! π1 (SOn−1 ) ! π1 (SOn ) ! π1 (Xn ).

The previous lemma shows π1 (Xn ) = 1 for n ≥ 3 and π2 (Xn ) = 1 for n ≥ 4. Hence, we win. 

Corollary 7.11. Spinn (C) is a double cover of SOn (C) for n ≥ 3.

7.1.3 Type Dn

Finally, we consider the Lie algebra g = o2n . As usual, V = C2n is the vector representation. The
simple roots are α1 = e1 − e2 , . . . , αn−1 = en−1 − en and αn = en−1 + en . The fundamental weights are
ω1 = (1, 0, . . . , 0), ω2 = (1, 1, 0, . . . , 0), up to ωn−2 = (1, . . . , 1, 0, 0), and then

    ωn−1 = (1/2, . . . , 1/2, −1/2) and ωn = (1/2, . . . , 1/2, 1/2).

We now have two spinor representations, Lωn−1 = S− and Lωn = S+ . In this case, the Cartan
matrix has determinant det A = 4, so there are 3 minuscule fundamental weights. These are ωn−1 , ωn , ω1 .

Example. The weights of Lω1 = V are e1 , . . . , en and −e1 , . . . , −en .


17 Can always move any vector to e1

ωn−1

ω1 ω2 ··· ωn−2

ωn

Figure 5: The Dynkin Diagram Dn

Our quadratic form is Q = x1 xn+1 + · · · + xn x2n , so our Cartan subalgebra is

h = {diag(a1 , . . . , an , −a1 , . . . , −an )} .

The Weyl group here is Sn ⋉ (Z/2Z)^n₀ , where the 0 subscript means elements whose coordinates sum to 0.

Remark 7.12. Exterior powers are irreducible for i ≤ n − 1 still. Hence, for i ≤ n − 2,

    Λ^i V = Lωi .

Some aspects of orthogonal groups are uniform and some depend on even or odd. Some even depend
on residue mod 4, and some even depend on residue mod 8. This is related to Bott periodicity. More on
this on a homework.

Example. S+∗ = S+ or S+∗ = S− depending on n mod 4. When S+∗ = S+ it has an invariant inner
product. Is it symmetric or skew-symmetric? This depends on n mod 8.

What do the spinor representations S± look like? The Weyl group allows us to permute coordinates and
change an even number of signs. Thus, the weights of S+ are the vectors

    (±1/2, . . . , ±1/2)

with an even number of −’s, while the weights of S− are those with an odd number of −’s. Thus, we get
the characters

    χS± = [ Πi=1..n (xi^{1/2} + xi^{−1/2}) ]± ,

where [·]± denotes the part with an even (resp. odd) number of minus signs in the exponents. So S+ , S−
don’t occur in V ⊗N and they don’t lift to SO2n . We again define Spin2n to be the universal cover of
SO2n (again a double cover by the previous theorem).
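These weight sets are easy to enumerate symbolically (a sketch with sympy; t_i stands for x_i^{1/2}, so integer exponents of the t_i encode the half-integer weights, and the even/odd split below is the S+/S− decomposition just described):

```python
from sympy import expand, prod, symbols

n = 3
t = symbols(f't1:{n + 1}')  # t_i stands for x_i^(1/2)
full = expand(prod(ti + 1 / ti for ti in t))  # chi_{S+} + chi_{S-}
terms = full.as_ordered_terms()
assert len(terms) == 2 ** n  # 2^n weights, each of multiplicity 1

def minus_count(m):
    """Number of -1/2 coordinates in the weight encoded by monomial m."""
    return sum(1 for _, e in m.as_powers_dict().items() if e < 0)

s_plus = [m for m in terms if minus_count(m) % 2 == 0]   # weights of S+
s_minus = [m for m in terms if minus_count(m) % 2 == 1]  # weights of S-
assert len(s_plus) == len(s_minus) == 2 ** (n - 1)
```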

8 Lecture 8 (3/16)
8.1 Last time
We talked about representations of o(V ). When V = C^{2n} this is type Dn . When V = C^{2n+1} , this is
type Bn . We also talked about spinor representations.

For o2n+1 , the spinor representation is associated to ωn = (1/2, 1/2, . . . , 1/2) and has dimension
dim S = 2^n .
For o2n , there are two spinor representations S+ = Lωn and S− = Lωn−1 , where ωn−1 = (1/2, . . . , 1/2, −1/2)
and ωn = (1/2, . . . , 1/2, 1/2).

Question 8.1. How can we construct these explicitly?

We know they don’t occur in the tensor products of vector representations, so we have to do something
new.

8.2 Clifford algebra


We know S does not occur in V ⊗n (its weights have half-integer coordinates), but S ⊗ S ∗ does occur (its
weights have integer coordinates). One can show that

    S ⊗ S ∗ = Λ^even V = C ⊕ Λ² V ⊕ Λ⁴ V ⊕ · · · .

We need to “extract a square root,” roughly in the sense that the space of vectors is a “square root” of the
space of matrices. This is the idea behind Clifford algebras.

Definition 8.2. Let V be a f.d. k-vector space (k = k̄ and char k ≠ 2) with a symmetric (non-
degenerate) inner product (−, −). The Clifford algebra Cl(V ) of V is generated by V with defining
relations v² = (1/2)(v, v) for v ∈ V .

Remark 8.3. Given a, b ∈ V , one has

    ab + ba = (a + b)² − a² − b² = (1/2)[(a + b, a + b) − (a, a) − (b, b)] = (a, b).

Thus, an equivalent set of defining relations is

    ab + ba = (a, b) · 1.

Can we describe this in terms of a basis?

Example. When dim V = 2n, we can find a basis a1 , . . . , an , b1 , . . . , bn of V so that

(ai , aj ) = (bi , bj ) = 0 and (ai , bj ) = δij .

Then the relations are

ai aj + aj ai = 0, bi bj + bj bi = 0, and ai bj + bj ai = δij .

This is a deformation of ΛV ; we’ll make this more precise later.

Example. When dim V = 2n + 1, we can find a basis a1 , . . . , an , b1 , . . . , bn , z with ai , bj as above and


(z, ai ) = 0 = (z, bi ) while (z, z) = 2. The relations here are all the ones from before along with

zai + ai z = 0 = zbi + bi z and z 2 = 1.

This is again a deformation of ΛV .

What is this deformation business we’re claiming/alluding to?

There is a filtration on Cl(V ) obtained by setting deg v = 1 for v ∈ V . In general, for x ∈ Cl(V ),
deg(x) is the smallest d ∈ Z+ such that x is a sum of monomials of degrees ≤ d. We filter by degree:
F0 Cl(V ) ⊂ F1 Cl(V ) ⊂ · · · . The associated graded

    gr Cl(V ) = ⊕i gri Cl(V ), where gri Cl(V ) = Fi+1 /Fi ,

fits into a natural surjective homomorphism

    ϕ : ΛV → gr Cl(V ).

This is because the RHS’s of the defining relations for Cl(V ) all have degree strictly smaller than the
LHS’s, so all vanish in the associated graded.

Theorem 8.4. ϕ is an isomorphism.

Equivalently, ϕ is injective ( ⇐⇒ dim Cl(V ) = 2^{dim V} ).


Remark 8.5. This is similar to the PBW theorem for Lie algebras:

    U (g) = ⟨a ∈ g : ab − ba = [a, b]⟩ .

The natural surjection

    ϕ : Sg ↠ gr U (g)

is an isomorphism.
In fact, PBW generalizes to “Lie superalgebras,” and this theorem about Cl(V ) is a special case of this
generalization.

Theorem 8.6. Cl(V ) ≅ Mat2^n (k) if dim V = 2n, and Cl(V ) ≅ Mat2^n (k) ⊕ Mat2^n (k) if dim V = 2n + 1.

Note this is even stronger than Theorem 8.4.

Proof. (Even case) Let us start with the even case. Pick a basis a1 , . . . , an , b1 , . . . , bn of V as before. Let
M = Λ(a1 , . . . , an ). Define a representation

    ρ : Cl(V ) → End M, ρ(ai )w = ai ∧ w and ρ(bi )w = ∂w/∂ai .

Above,

    ∂/∂ai (ak1 · · · akr ) = 0 if i ≠ kj for all j, and (−1)^{j−1} ak1 · · · âkj · · · akr if i = kj

(this is a (graded) derivation: ∂/∂ai (f · g) = (∂f /∂ai )g + (−1)^{deg f} f (∂g/∂ai )). In addition to making
this a derivation, having the sign term above makes ρ a representation, e.g.

    ρ(ai )ρ(bi ) + ρ(bi )ρ(ai ) = 1

(exercise). Note that there is a natural spanning set for Cl(V ): given I = (i1 < · · · < ik ) and J = (j1 <
· · · < jm ), set

    cIJ := ai1 . . . aik bj1 . . . bjm ∈ Cl(V )

(like in the proof of PBW, the defining relations allow us to order the monomials, at worst at the cost of
some δij ’s which will introduce lower degree terms). It is not immediately clear that these form a basis,
but note that there are 2^{2n} of them, so the theorem is equivalent to showing they are linearly independent.
For this, consider ρ(cIJ ) = ai1 . . . aik ∂/∂aj1 · · · ∂/∂ajm : M → M .

Exercise. Show that these operators are linearly independent.

Hint: Take any relation Σ αIJ cIJ = 0. Pick cI0 J0 with αI0 J0 ≠ 0 and |J0 | largest. Then show

    (Σ αIJ ρ(cIJ )) · Πj∈J0 aj = αI0 ,J0 Πi∈I0 ai

(the products in decreasing order of the j’s and increasing order of the i’s). This forces αI0 J0 = 0.
This completes the proof in the even case.
(Odd case) In the odd case, we also have an element z ∈ Cl(V ) with z² = 1. We still have an action
Cl(V ) ↷ M = Λ(a1 , . . . , an ). In addition to ρ(ai )w = ai ∧ w and ρ(bi )w = ∂w/∂ai , we need to say how z
acts. There are two options:

    ρ(z)w = ±(−1)^{deg w} w;

these two representations are called M+ and M− (they are not isomorphic18 ). We can consider the direct
sum

    ρ = ρ+ ⊕ ρ− : Cl(V ) → End M+ ⊕ End M− .

Exercise. This is an isomorphism. □
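The even-case representation ρ can be realized concretely with matrices (a sketch using numpy; the sign strings below are one standard way — often called a Jordan–Wigner construction, a name not used in the lecture — to make operators in different indices anticommute, and `mode_op` is a made-up helper name):

```python
import itertools

import numpy as np

n = 2
a = np.array([[0., 0.], [1., 0.]])  # one-mode "multiplication by a_i"
b = np.array([[0., 1.], [0., 0.]])  # one-mode "d/da_i"
Z = np.diag([1., -1.])              # sign operator (-1)^deg on one mode
I2 = np.eye(2)

def mode_op(op, i):
    """op acting in mode i, with Z's before it so that different modes anticommute."""
    factors = [Z] * i + [op] + [I2] * (n - 1 - i)
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

A = [mode_op(a, i) for i in range(n)]
B = [mode_op(b, i) for i in range(n)]
anti = lambda x, y: x @ y + y @ x

# defining relations: a_i a_j + a_j a_i = 0, b_i b_j + b_j b_i = 0, a_i b_j + b_j a_i = delta_ij
for i in range(n):
    for j in range(n):
        assert np.allclose(anti(A[i], A[j]), 0)
        assert np.allclose(anti(B[i], B[j]), 0)
        assert np.allclose(anti(A[i], B[j]), (i == j) * np.eye(2 ** n))

# the 2^{2n} monomials c_IJ = a_I b_J are linearly independent, so they span Mat_{2^n}
subsets = [s for k in range(n + 1) for s in itertools.combinations(range(n), k)]
mono = []
for I_ in subsets:
    for J_ in subsets:
        m = np.eye(2 ** n)
        for i in I_:
            m = m @ A[i]
        for j in J_:
            m = m @ B[j]
        mono.append(m.flatten())
assert np.linalg.matrix_rank(np.array(mono)) == 4 ** n
```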




We want to use this to construct the spinor representations.

Proposition 8.7. Define a linear map

    ξ : o(V ) = Λ² V → Cl(V )

via

    ξ(a ∧ b) = (1/2)(ab − ba) = ab − (1/2)(a, b).

Then, ξ is a Lie algebra homomorphism.

Proof. For skew-symmetric matrices in this form, one can work out that the commutator is

    [a ∧ b, c ∧ d] = (b, c)a ∧ d − (b, d)a ∧ c + (a, c)d ∧ b − (a, d)c ∧ b

(exercise, using a ∧ b = (1/2)(a ⊗ b − b ⊗ a)). Now compute

    [ξ(a ∧ b), ξ(c ∧ d)] = [ab − (1/2)(a, b), cd − (1/2)(c, d)]
                         = [ab, cd]
                         = abcd − cdab
                         = (b, c)ad − acbd − cdab
                         = (b, c)ad − (b, d)ac + acdb − cdab
                         = (b, c)ad − (b, d)ac + (a, c)db − cadb − cdab
                         = (b, c)ad − (b, d)ac + (a, c)db − (a, d)cb
                         = ξ ([a ∧ b, c ∧ d]) .

You might worry about error terms in the last equality, but they are

    (b, c)(a, d) − (b, d)(a, c) + (a, c)(d, b) − (a, d)(c, b) = 0,

so we win. □
18 The eigenvalue of z on the space of v ∈ M± s.t. bi v = 0 is ±1

We can now use this map ξ : o(V ) → Cl(V ) to pull back the representations M (in the even case) and
M± (in the odd case) from before.

Exercise. When dim V = 2n,

    ξ ∗ M = S+ ⊕ S− .

More precisely, S+ = Λ^even (a1 , . . . , an ) and S− = Λ^odd (a1 , . . . , an ).

Exercise. If dim V = 2n + 1,

    ξ ∗ M+ ≅ S ≅ ξ ∗ M− .

For these, you’ll want to find highest weight vectors, compute their weights, and then compare dimensions.
So we realize the spinor representations as exterior algebras where o(V ) acts by some (0th, 1st, or
2nd order) differential operators.

8.3 Duals of irreps


Let g be a simple Lie algebra over C. Consider some f.d. irrep Lλ with λ ∈ P+ . How do we determine
L∗λ ?
Let µ be the lowest weight of Lλ . Then the highest weight of L∗λ is −µ, so L∗λ = L−µ . Hence
we only need compute µ. For this, recall the Weyl group. We know (from last semester) that the
Weyl group contains a unique element w0 such that w0 (dominant weights) = (antidominant weights)
(antidominant means negative of dominant weight). Hence w0 (R+ ) = R− . This is called the maximal
element (element of maximal length) w0 . Note that if −1 ∈ W (thought of as linear transformations
of Cartan subalgebra), then w0 = −1.
Note that −w0 always maps positive roots/weights to positive roots/weights, so it permutes the ωi , αi .
Hence, it gives a graph automorphism of the Dynkin diagram. Thus, if the Dynkin diagram has no
(nontrivial) symmetries, then −w0 = 1. This happens for A1 , Bn , Cn , G2 , F4 , E7 , and E8 .

Proposition 8.8. The lowest weight of Lλ is µ = w0 λ, so the highest weight of the dual representation is
−µ = −w0 λ. Thus,

    L∗λ = L−w0 λ .

Corollary 8.9. For A1 , Bn , Cn , G2 , F4 , E7 , E8 , all representations are self-dual.


Example. For type An , n ≥ 2, there are nontrivial symmetries. In particular, there’s the flip symmetry

• • ··· • •

Figure 6: The Dynkin Diagram An

and in fact, −w0 (αi ) = αn+1−i in this case. How do you see this? Consider the vector representation
V = Lω1 = C^{n+1} of sln+1 . Its dual is

    V ∗ = Λ^n V = Lωn ,

so ωn gets exchanged with ω1 . Thus, −w0 must be the flip since it’s the only nontrivial automorphism.
Example. Let’s look at type E6 now. Again, −w0 is the flip. We won’t show this rigorously right now,

• • • • •

Figure 7: The Dynkin Diagram E6

but maybe will later.


For type Dn , the action of −w0 depends on the parity of n.

Proposition 8.10. For D2n , S+∗ = S+ and S−∗ = S− . For D2n+1 , S+∗ = S− .
How do you remember this? Note that D3 = A3 = sl(4) so in this case it is the flip. D2 = o(4) =
sl(2) ⊕ sl(2), so duality is trivial. How do you prove the proposition in general? For type Dn , the Weyl

• •

• •

Figure 8: The Dynkin Diagrams D3 (left) and D2 (right)

group is W = Sn ⋉ (Z/2Z)^n₀ (vectors with sum 0). For n even, −1 ∈ W so −w0 = 1 (no flip). For n odd,
−1 ∉ W and w0 = (−1, −1, . . . , −1, 1) (exercise) with trivial permutation σ = id. Then,

    w0 ωn = w0 (1/2, 1/2, . . . , 1/2) = (−1/2, . . . , −1/2, 1/2),

so −w0 ωn = ωn−1 . Hence S+∗ = S− and S−∗ = S+ . Alternatively, you can see that the lowest weight of
S+ is −ωn−1 .
Remark 8.11. This gives some mod 4 periodic phenomena. To observe mod 8 periodicity, ask yourself,
“When do S, S+ , S− have symmetric invariant forms, and when do they have skew symmetric invariant
forms?”

Definition 8.12. A f.d. representation V of a group G or Lie algebra g is said to be of complex type
if V ≇ V ∗ . It is real type if V ≅ V ∗ and there exists a symmetric isomorphism ϕ : V → V ∗ , i.e.
ϕ∗ = ϕ ( ⇐⇒ V has a symmetric, invariant bilinear form). It is quaternionic type if V ≅ V ∗ and there
exists a skew-symmetric isomorphism ϕ : V → V ∗ , i.e. ϕ∗ = −ϕ.

Theorem 8.13. For D2n , S+∗ = S+ . For D4n , S+ has a symmetric form (real type), while for D4n+2 it
has a skew-symmetric form (quaternionic type).

No lecture on Tuesday. Next week Thursday lecture at MIT.

9 Lecture 9 (3/18)
Last time we ended while discussing reps of real, complex, and quaternionic type.

Recall 9.1. A f.d. irrep V of a group G or Lie algebra g is said to be complex type if V ≇ V ∗ . It is real
type if V ≅ V ∗ and there exists a symmetric isom ϕ : V → V ∗ (ϕ∗ = ϕ), and it is quaternionic type if
V ≅ V ∗ and there exists a skew-symmetric isom ϕ : V → V ∗ (ϕ∗ = −ϕ).

Remark 9.2. Schur says all isos V → V ∗ are proportional to each other, so ϕ∗ = cϕ for some c ∈ C.
Taking the double dual shows ϕ = c²ϕ, so c = ±1 (i.e. real and quaternionic type are the only possibilities).

Recall 9.3. For D2n , S+∗ = S+ (and same for S− ). For D4n , S+ has a symmetric form (real type). For
D4n+2 , it has a skew-symmetric form (quaternionic type).

There will be a similar statement for odd orthogonal groups. In order to prove these, we need to
understand when self-dual reps are real or quaternionic type.

9.1 Principal sl2 -subalgebra, exponents of g


We have seen root sl2 -subalgebras before, but there are actually more copies of sl2 inside other Lie
algebras.
Definition 9.4. Let g be a (semisimple?) Lie algebra. Let e = Σ_{i=1}^r ei , and choose h ∈ h s.t.

[h, e] = 2e ⇐⇒ [h, ei ] = 2ei for all i ⇐⇒ αi (h) = 2 for all i ⇐⇒ h = 2ρ∨

(any of the above equivalent conditions holding). In any case, h = Σ_{i=1}^r (2ρ∨ , ωi ) hi where hi = αi∨ . We now take

f := Σ_{i=1}^r (2ρ∨ , ωi ) fi .

Then, [h, f ] = −2f and

[e, f ] = [ Σ_i ei , Σ_j (2ρ∨ , ωj ) fj ] = Σ_i (2ρ∨ , ωi ) hi = h.

Then, (e, h, f ) defined as above generate an sl2 -subalgebra inside g, called the principal sl2 -subalgebra.

Example. If g = sln and V = Cn , then V |sl2 principal = Ln−1 (the irred sl2 rep with highest weight n − 1). One can check that e = Σ_i E_{i,i+1} is the matrix with 1's on the superdiagonal, f is a matrix with nonzero entries ∗ on the subdiagonal, and h = diag(n − 1, n − 3, . . . , 1 − n).
On the other hand, if you restrict V to a root subalgebra, then you can see that V |sl2 root = C2 ⊕ (n − 2)C. Hence, the principal sl2 -subalgebra is essentially different from (not conjugate to) root subalgebras (at least for n ≥ 3).
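The bracket relations for the principal triple in sln can be checked by hand or by machine. Here is a small Python sketch (my addition, not from the lecture); the subdiagonal coefficients i(n − i) for f are a standard choice and an assumption of this sketch — only e (sum of simple root elements) and h = 2ρ∨ are forced.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    # commutator [A, B] = AB - BA
    n = len(A)
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

def principal_sl2(n):
    # e: ones on the superdiagonal (sum of simple root elements)
    e = [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]
    # h = 2 rho^vee = diag(n-1, n-3, ..., 1-n)
    h = [[(n - 1 - 2 * i) if i == j else 0 for j in range(n)] for i in range(n)]
    # f: entries i*(n-i) on the subdiagonal (assumed standard choice)
    f = [[i * (n - i) if i == j + 1 else 0 for j in range(n)] for i in range(n)]
    return e, h, f
```

Running `bracket` on the output confirms [h, e] = 2e, [h, f] = −2f, and [e, f] = h exactly (all entries are integers).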

A natural thing to look at is the restriction of the adjoint representation to the principal sl2 -subalgebra. Write g|sl2 -principal = ⊕_i L_{Ni} . What are these Ni ? Recall we can recover the decomposition of an sl2 -rep if we know the dimensions of its weight spaces, so what are the eigenvalues of h = 2ρ∨ acting adjointly on g? Write g = n− ⊕ h ⊕ n+ . Given x ∈ gα , we have

[h, x] = α(h)x = (α, 2ρ∨ )x.


Writing α = Σ_{i=1}^r ki αi with ki ∈ Z, we see

(α, 2ρ∨ ) = Σ_{i=1}^r ki (αi , 2ρ∨ ) = 2 Σ_{i=1}^r ki .

Recall that when α ∈ R+ , its height is defined to be

ht(α) = Σ_{i=1}^r ki =: |α| ,

so the weights are (twice) the heights of the roots. Thus,

g|sl2 = ⊕_{n∈Z} g[n] with g[0] = h.

Thus, dim g[0] = r and, for n > 0, dim g[2n] = #{roots of height n}.
How many roots are there of each height?

• There are exactly r roots of height 1. These are the simple roots.

• What about height 2? Picture a Dynkin diagram. We need to add two distinct simple roots (twice a root is not a root), and they need to be connected (the sum of orthogonal roots is not a root19 ). Thus, the number of height 2 roots equals the number of edges in the Dynkin diagram. Since the diagram is a tree, this is r − 1.

• There are zero roots of height N for N ≫ 0.

Notation 9.5. Let rm := #roots of height m.

Definition 9.6. An exponent of g is a positive integer m such that rm+1 < rm . The multiplicity of m is rm − rm+1 .

We have r exponents

1 ≤ m1 ≤ m2 ≤ · · · ≤ mr

with multiplicities. (This is because dim g[0] = r and each exponent corresponds to an irrep in g|sl2 .) We know that m1 = 1, m2 > 1, and mr = (θ, ρ∨ ) = h − 1 where h is called the Coxeter number of g.

Warning 9.7. We earlier encountered the dual Coxeter number h∨ = (θ, ρ) + 1. Do not confuse this with h.

Proposition 9.8.

g|sl2 principal = ⊕_{i=1}^r L_{2mi}

where the mi are the exponents.

Proof. This follows from rep theory of sl2 (exercise). 

Example. g = sln . The positive roots are αij = αi + · · · + αj where i ≤ j. Such a root has height j − i + 1. If you look at the Dynkin diagram, roots correspond to connected pieces and the height is the number of vertices in that piece. Hence, the # of roots of height k is rk = n − k. Thus the exponents are {mi } = {1, 2, . . . , n − 1}, so

sln |sl2 principal = L2 ⊕ L4 ⊕ · · · ⊕ L2n−2 .
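The height count for sln can be checked directly. A small Python sketch (mine, not from the lecture) enumerates the positive roots α_{ij}, tallies r_k, and reads off the exponents from Definition 9.6:

```python
from collections import Counter

def height_counts(n):
    # positive roots of sl_n: alpha_{ij} = alpha_i + ... + alpha_j,
    # for 1 <= i <= j <= n-1, of height j - i + 1
    return Counter(j - i + 1 for i in range(1, n) for j in range(i, n))

def exponents(n):
    r = height_counts(n)
    exps = []
    for m in sorted(r):
        # an exponent m appears with multiplicity r_m - r_{m+1}
        exps.extend([m] * (r[m] - r.get(m + 1, 0)))
    return exps
```

For n = 6 this gives r_k = 6 − k and exponents {1, 2, 3, 4, 5}, matching the example.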

Sanity Check 9.9. Consider gln = V ⊗ V ∗ = Ln−1 ⊗ Ln−1 . Clebsch-Gordan tells us that this is

gln = Ln−1 ⊗ Ln−1 = L0 ⊕ L2 ⊕ L4 ⊕ · · · ⊕ L2n−2 .

Since gln = sln ⊕ C, things are agreeing, which is a good sign.

What does all of this have to do with Spinor representations being real and quaternionic type?

9.2 Back to Real, Complex, Quaternionic Type


Let's first discuss what happens for sl2 . Recall Ln has dimension n + 1, and L1 = C2 . SL2 (C) = Sp2 (C) preserves the skew-symmetric form (0 1; −1 0) on C2 , so L1 is of quaternionic type. More generally, Ln = Symn L1 and Ln ≅ L∗n with the form on Ln being the symmetric power of the form on L1 (or tensor power if you want to view Ln ⊂ L1⊗n ). Now, ω : L1 × L1 → C is skew-symmetric, so ω⊗n is skew-symmetric for odd n, and symmetric for even n. Thus,
19 The Serre relations say (ad ei )^{1−aij} ej = 0. If aij = 0, then this says (αi , αj ) = 0 =⇒ [ei , ej ] = 0, so αi + αj is not a root.
Proposition 9.10. Ln is real for even n, and quaternionic for odd n.

Now let g be any simple Lie algebra. Choose λ ∈ P+ , so L∗λ = L−w0 λ .

Assumption. Let’s assume λ = −w0 λ

(this is e.g. always the case if Dynkin diagram has no nontrivial automorphisms so −w0 = 1).

Question 9.11. Is Lλ real or quaternionic?

Restrict it to the principal sl2 -subalgebra. The weights will be (µ, 2ρ∨ ) where µ is a weight of Lλ . This will be largest when µ = λ, and this eigenvalue (λ, 2ρ∨ ) occurs just once. This is because any other weight of Lλ is of the form λ − β with β = Σ ki αi , ki ≥ 0 and β ≠ 0. Hence,

(µ, 2ρ∨ ) = (λ, 2ρ∨ ) − 2 Σ ki < (λ, 2ρ∨ ).

Thus,

Lλ |sl2 = Lm ⊕ ⊕_{n<m} cn Ln with m = (λ, 2ρ∨ ).

If we have an invariant nondegenerate form B on Lλ , then Lλ = Lm ⊕ L⊥m , so B|Lm is a nondegenerate invariant form. Clearly, B is symmetric ⇐⇒ B|Lm is symmetric. Thus, we obtain:

Proposition 9.12. Assume Lλ is not of complex type. Then, Lλ is of real type if (λ, 2ρ∨ ) is even, and of quaternionic type if it is odd.

Application to Spinor representations. Let g = o(2n). Recall the fundamental weights are ω1 = (1, 0, . . . , 0), . . . , ωn−2 = (1, 1, . . . , 1, 0, 0), ωn−1 = (1/2, . . . , 1/2, 1/2), ωn = (1/2, . . . , 1/2, −1/2). Hence,

ρ = ρ∨ = Σ ωi = (n − 1, n − 2, . . . , 1, 0) and (2ρ∨ , ωn ) = n(n − 1)/2 = (2ρ∨ , ωn−1 ).

Fact. n(n − 1)/2 is odd if n ≡ 2, 3 (mod 4) and is even if n ≡ 0, 1 (mod 4).

Corollary 9.13. If n ≡ 0 (mod 4) then S± have symmetric forms. If n ≡ 2 (mod 4), then S± have skew forms.

(They are not self-dual when n is odd.)


We can do the same analysis for g = o(2n + 1). Here, ω1 = (1, 0, . . . , 0), . . . , ωn−1 = (1, . . . , 1, 0), ωn = (1/2, . . . , 1/2). And ω1∨ = ω1 , . . . , ω∨n−1 = ωn−1 , ωn∨ = (1, 1, . . . , 1). Then ρ∨ = ω1∨ + · · · + ωn∨ = (n, n − 1, . . . , 1), so (2ρ∨ , ωn ) = n(n + 1)/2. Hence we obtain

Proposition 9.14. The Spinor rep S is real ⇐⇒ n ≡ 0, 3 (mod 4), and is quaternionic ⇐⇒ n ≡ 1, 2
(mod 4).

Theorem 9.15 (“Bott Periodicity”). The behavior of the spinor representations of o(m) depends on the remainder r of m mod 8.

• r = 1, 7: S is of real type

• r = 3, 5: S is of quat type

• r = 0: S+ , S− are of real type

• r = 2, 6: (S+)∗ = S− , so S± are of complex type

• r = 4: S+ , S− are of quat type
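The mod 8 table can be recovered from the parity criterion of Proposition 9.12 applied to the numbers n(n − 1)/2 and n(n + 1)/2 computed above. Here is a Python sketch of that bookkeeping (mine, not from the lecture); the branch structure encodes the facts quoted in the notes.

```python
def spinor_type(m):
    """Type of the spinor representation(s) of o(m), m >= 3."""
    if m % 2 == 1:                       # o(2n+1): one spinor rep S
        n = m // 2
        return "real" if (n * (n + 1) // 2) % 2 == 0 else "quaternionic"
    n = m // 2                           # o(2n): half-spinor reps S+, S-
    if n % 2 == 1:                       # (S+)* = S-, not self-dual
        return "complex"
    return "real" if (n * (n - 1) // 2) % 2 == 0 else "quaternionic"
```

Tabulating `spinor_type(m)` for m = 7, 8, . . . reproduces the mod 8 pattern of Theorem 9.15, and in particular the type is unchanged under m ↦ m + 8.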

“Now, let’s move on. We’ve done enough representation theory, so let’s switch to another subject:
integration of Lie groups. This will help us do representation even better.” (paraphrase)

9.3 Review of differential forms and integration on manifolds


Let M be a smooth, real n-dimensional manifold. Recall

• T M is the tangent bundle (vectors)

• T ∗ M is the cotangent bundle (covectors)


• A differential k-form on M is a smooth section of Λk T∗M (a skew-symmetric k-covariant and 0-contravariant tensor field on M ). (“Covariant” here counts duals while “contravariant” counts non-duals.)

In local coords x1 , . . . , xn a k-form ω looks like

ω = Σ_{1≤i1 <i2 <···<ik ≤n} f_{i1 ,...,ik} (x1 , . . . , xn ) dxi1 ∧ · · · ∧ dxik .
If you change coordinates xi = xi (y1 , . . . , yn ), then

ω = Σ_{j1 <···<jk} Σ_{i1 <···<ik} f_{i1 ...ik} (x1 , . . . , xn ) det( ∂xir /∂yjs )_{r,s} dyj1 ∧ · · · ∧ dyjk .
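For top-degree forms the determinant factor is the familiar Jacobian. A tiny numeric illustration (my addition, not from the notes): under polar coordinates x = r cos t, y = r sin t, the determinant of the Jacobian is r, i.e. dx ∧ dy = r dr ∧ dt.

```python
import math

def jacobian_det(r, t):
    # partials of (x, y) = (r cos t, r sin t) with respect to (r, t)
    dx_dr, dx_dt = math.cos(t), -r * math.sin(t)
    dy_dr, dy_dt = math.sin(t),  r * math.cos(t)
    return dx_dr * dy_dt - dx_dt * dy_dr   # equals r for every (r, t)
```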

Example. If f ∈ C∞(M ), we get the differential df ∈ Ω1 (M ) (a section of T∗M ) so that, for v ∈ Tp M (a derivation), df (v) = ∂v f . In coordinates,

df = Σ_{i=1}^n (∂f /∂xi ) dxi .

Note Ω0 (M ) = C∞(M ), and Ωk (M ) = 0 for k > n. We have a graded-commutative algebra

Ω• (M ) = ⊕_{k=0}^n Ωk (M )

of differential forms with multiplication given by wedge product ∧. The operation d in the previous
example extends to a degree 1 derivation of Ω∗ (M ), i.e. we have d : Ωk (M ) ! Ωk+1 (M ). It is defined by

d (f (x1 , . . . , xn )dxi1 ∧ · · · ∧ dxik ) = df ∧ dxi1 ∧ · · · ∧ dxik .

It is a graded derivation in the sense that given homogeneous a ∈ Ωk (M ) and b ∈ Ω` (M ), one has

d(a ∧ b) = da ∧ b + (−1)k a ∧ db.

A closed form ω is one for which dω = 0. It is an exact form if ω = dη. Also, d2 = 0 (so exact implies closed), but the converse is not always true.

Example. Consider S1 = R/Z with coordinate x (mod 1). Then, dx ∈ Ω1 (S1 ) is a closed form, but is not exact: there is no f ∈ C∞(S1 ) s.t. df = dx (we would need f = x + c, but x is not well-defined on the circle, only up to adding integers).

Definition 9.16. The De Rham cohomology of M is

Hk (M ) = Ωk_closed (M ) / Ωk_exact (M ).

If f : M ! N is a C ∞ -map, and ω ∈ Ωk (N ) is a k-form, then you get a pullback f ∗ ω ∈ Ωk (M ).


Given v1 , . . . , vk ∈ Tp M , one has

f ∗ ω(v1 , . . . , vk ) = ω(f∗ v1 , . . . , f∗ vk )

where f∗ : Tp M ! Tf (p) N . Note that pullback commutes with ∧ and d. Also, (f ◦ g)∗ = g ∗ ◦ f ∗ .

9.3.1 Top degree forms

Every element of Ωn (M ) looks like ω = f (x1 , . . . , xn )dx1 ∧ · · · ∧ dxn (in local coordinates). Say M = Rn
and ω has compact support (i.e. f has compact support). Then we define
∫M ω := ∫Rn f (x1 , . . . , xn ) dx1 . . . dxn .

We want this to be independent of coordinates. If we change xi = xi (y1 , . . . , yn ), then

ω = f (x1 , . . . , xn ) det( ∂xi /∂yj ) dy1 ∧ · · · ∧ dyn ,

but

∫Rn f (x1 , . . . , xn ) dx1 . . . dxn = ∫Rn f (x1 , . . . , xn ) |det( ∂xi /∂yj )| dy1 . . . dyn ,

so there is a slight discrepancy (in one case we have absolute values; in the other case we don't). Hence, integration of top forms is only invariant under changes of variable that preserve orientation (det(Jacobian) > 0). As a result, we will only be able to integrate differential forms on oriented manifolds.
On a general manifold, you cannot integrate differential forms. You can, however, integrate densities. These multiply by the absolute value of the determinant instead of by the determinant itself.

10 Lecture 10 (3/25)
Note 6. Haven’t seen last Thursday’s lecture yet...

10.1 Last time
Last time we talked about integration of (top degree) differential forms on manifolds. Say we have a
(real/smooth) manifold M of dimension dim M = n and we have ω ∈ Ωn (M ). To define the integral
Z
ω,
M

we need an orientation on M , i.e. a consistent way to say which bases of Tx M are ‘right-handed’ in the
tangent space. Say we have some charts U, V of the manifold with coordinates xi , yi . Then the manifold is orientable if we can choose charts so that always det( ∂xi /∂yj ) > 0.
Say M is an oriented n-dim manifold, and suppose ω ∈ Ωn (M ) is a top degree form with compact support. We won't actually need the compact support condition, but it's good to know the integral will converge. How do we define ∫M ω?

Let K = supp ω. Cover K by finitely many balls Bi , and choose a ball Bi′ ⊂ Bi for each i, so that these Bi′ 's already cover K. For a containment of balls B′ ⊊ B, we can define a hat function f ∈ C∞(B) satisfying f > 0 on B′ , f ≥ 0 on B, and f has compact support in B.

Example. In the one-dimensional case, we just want some bump function. Can start with

g(x) = 0 if x ≤ 0, and g(x) = e^{−1/x} if x > 0.

Can then do something like multiply this by a parabola (?) to get a hat function in the 1-d case. Then use this to get hat functions in any dimension.

Let fi be a hat function on Bi , and consider

gi := fi / Σj fj

which is well-defined in a neighborhood of K, and has support inside Bi . Note that Σi gi = 1, so these give a partition of unity. We can now use these to define

∫M ω := Σi ∫Bi gi ω

with the RHS a sum of integrals in Rn .

Claim 10.1. This is well-defined and independent of choices.

For independence, given two partitions of unity, consider their common refinement obtained by taking pairwise products (i.e. consisting of the functions gi hj ).
Remark 10.2. Integration like this also makes sense for manifolds with boundary. The only difference is
that at boundary points, the local model is Rn−1 × R≥0 instead of Rn . Integration also makes sense for
non-compactly supported forms; the integral just might diverge in these cases.
Remark 10.3. If you have an oriented manifold with boundary, then it induces a canonical orientation
on the boundary (a basis of the tangent space at the boundary is right-handed iff the basis of the whole thing obtained by extending the given basis by a single vector pointing inwards is right-handed, or something like this).

10.2 Volume Forms


Definition 10.4. A differential form ω ∈ Ωn (M ) is called nonvanishing if ωx ∈ Λn Tx∗M is not zero for all x ∈ M .

A nonvanishing top form gives rise to an orientation on M : say v1 , . . . , vn ∈ Tx M is a right-handed


basis if ω(v1 , . . . , vn ) > 0. Note that ω also defines a (Borel) measure, given on open sets U via
µω (U ) = ∫U ω

(or +∞ if integral diverges). Now say f is any measurable function on M . Then,


∫M |f | ω = ∫M |f | dµω .

We call f integrable, denoted f ∈ L1 (M, µω ), if this integral is < ∞. In such cases, we can define ∫M f dµω
just as in measure theory.
Remark 10.5. Above discussion shows that there are no non-vanishing forms on non-orientable manifolds.
Example. ω = dx1 ∧ · · · ∧ dxn on an open M ⊂ Rn is nonvanishing. The corresponding measure is the usual Lebesgue measure µ, so µ(U ) = vol(U ) is the usual volume of an open U ⊂ M .

Inspired by the above example, nonvanishing forms are often called volume forms. Given a volume form ω, vol(M ) = ∫M ω ∈ R+ ∪ {∞}.

Proposition 10.6. If M is compact, then it has finite volume, and any continuous function on M belongs
to L1 (M, µ), i.e. is integrable.

Proof. Cover M = ∪_{x∈M} Ux with Ux a neighborhood of x so small that µ(Ux ) < ∞. Since M is compact, this has a finite subcover U1 , . . . , UN . Thus, µ(M ) ≤ Σi µ(Ui ) < ∞, so M has finite measure. If f is continuous, then max |f | < ∞, so

∫M |f | dµ ≤ max |f | · µ(M ) < ∞.

10.3 Stokes' Theorem


Theorem 10.7 (Stokes' Theorem). Let M be a compact orientable manifold with boundary, and let ω ∈ Ωn−1 (M ) (so ω restricts to a top form on ∂M ). Then,

∫M dω = ∫∂M ω.

In particular, if M is closed (no boundary), then ∫M dω = 0. Also, if dω = 0, then ∫∂M ω = 0.

Notation 10.8. We let M̄ denote M with the opposite orientation. Note that ∫M̄ ω = − ∫M ω.
Remark 10.9. When n = 1, we can consider an interval M = [a, b] with boundary consisting of two points. Then Stokes' theorem says

∫_a^b df = f (b) − f (a),

which is exactly the fundamental theorem of calculus.
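The n = 1 case is easy to test numerically. A quick Python sketch (mine, not from the notes): a midpoint Riemann sum for the 1-form df = f′(x) dx over [a, b] should approach f(b) − f(a).

```python
import math

def integral_of_derivative(df, a, b, steps=100000):
    # midpoint Riemann sum for the integral of f'(x) dx over [a, b]
    h = (b - a) / steps
    return sum(df(a + (k + 0.5) * h) for k in range(steps)) * h
```

For instance, integrating cos over [0, 1] returns sin(1) up to discretization error.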


Remark 10.10. Applying this inside R2 should recover Green's formula. Applying it to a surface in R3 recovers Stokes' classical formula. Applying it to a region in R3 gives Gauss's theorem.

10.4 Integration on (Real) Lie groups


For complex Lie groups the story is the same. To integrate on them, just forget the complex structure.
Let G be a real Lie group of dimension n with Lie algebra g = Lie G = T1 G. Note that Λn g∗ is one-dimensional. Fix some nonzero ξ ∈ Λn g∗ . We can use left-translations to spread ξ over G in order to get a left-invariant (nonvanishing) top differential form ωξ .
Remark 10.11. Translating should show that T G = g × G is a trivial bundle.
This ω = ωξ gives us an orientation and a measure µω . Note that µω so defined is left-invariant, so
gives a (left-invariant) Haar measure.
Remark 10.12. ξ is well-defined up to scaling. Changing ξ ↦ λξ (λ ∈ R×) changes the top form ω ↦ λω and so changes the measure µω ↦ |λ| µω . Hence, this Haar measure is well-defined up to positive scalar.

Notation 10.13. We use µL to denote a choice of left-invariant Haar measure. We similarly define µR
as a choice of right-invariant Haar measure.

A natural question is: does µL = µR , up to scaling at least? They will agree for abelian groups since left/right translations are the same. What about for non-abelian groups?
Suppose V is a 1-dimensional real representation of a group G, so we have ρV : G → Aut(V ) = R×. We can then define a rep |V | on the same underlying space with map ρ|V | = |ρV |. This is still a representation since | · | is a character (i.e. homomorphism) on R×.
Proposition 10.14. µL = µR ⇐⇒ |Λn g∗ | is a trivial representation of G.

Proof. µL = µR ⇐⇒ µL is also right invariant ⇐⇒ µL is invariant under conjugation ⇐⇒ ω is invariant under conjugation, up to sign ⇐⇒ ω1 ∈ Λn g∗ is invariant under G, up to sign ⇐⇒ |Λn g∗ | is a trivial representation. 

Above, keep in mind that conjugation is what induces the adjoint action on g.

Definition 10.15. G is unimodular if µL = µR .

Example. When G is discrete and countable ( ⇐⇒ g = 0), G is unimodular. Up to scaling, µL = µR is simply the counting measure µL (U ) = #U .

Definition 10.16. Say g is a f.d. real Lie algebra. We say g is unimodular if Λn g∗ is a trivial g-module.

Proposition 10.17 (Homework).

(1) A connected Lie group G is unimodular ⇐⇒ g is unimodular.

(2) If g is perfect, i.e. g = [g, g], then g is unimodular.

Corollary 10.18. Semisimple Lie algebras are unimodular.

(3) A nilpotent (e.g. abelian) Lie algebra is unimodular.

(4) If g1 , g2 are unimodular, then so is g1 ⊕ g2 .

Corollary 10.19. A reductive Lie algebra (which is a direct sum of an abelian and a semisimple
Lie algebra) is unimodular.

(5) The Lie algebra tn of upper triangular matrices is not unimodular when n ≥ 2.

Corollary 10.20. Being unimodular is not closed under extensions. (t2 sits in an exact sequence with the strictly upper triangular matrices and the diagonal matrices.)

Remark 10.21. If G has no nontrivial 1-dim reps, then G is unimodular.

If G is unimodular, then it has a bi-invariant Haar measure µ = µL = µR . Integration with respect to this measure is denoted

∫G f dµ =: ∫G f (g) dg.

Proposition 10.22. Compact Lie groups are always unimodular.

Proof. Consider the representation |Λn g∗ |, which is given by a continuous map ρ : G → R>0 . The image ρ(G) ⊂ R>0 must be a compact subgroup, but there's only one of these. 

Note that if G is compact, it has finite volume ∫G dg < ∞, so we may normalize dg so that this integral is 1, i.e. require our Haar measure to be a probability measure. This gives us an actually unique choice of measure for compact G.

Example. When G is finite, the unique Haar probability measure is the averaging measure µ(U ) = #U/#G.

10.5 Representations of compact Lie groups


Proposition 10.23. Every f.d. representation V of a compact Lie group G is unitary.

Proof. Pick a positive Hermitian form B on V . We would like an invariant form, so consider the average

Bav (v, w) = ∫G B(gv, gw) dg,

which is invariant by construction (using right-invariance of dg) and well-defined since ∫G dg = 1 is finite! Note that Bav (v, v) > 0 (for v ≠ 0) since B(w, w) > 0 for w ≠ 0. This gives a unitary structure on our representation, completing the proof. 
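The averaging trick is easy to see in the finite-group analogue, where the integral becomes a finite sum. A Python toy version (mine, not from the lecture): G = Z/3 acts on C3 by cyclic shift; the weights in B below are an arbitrary assumed choice of non-invariant positive form.

```python
def shift(v, k):
    # action of k in Z/3 on C^3 by cyclic shift of coordinates
    return [v[(i - k) % 3] for i in range(3)]

def B(v, w):
    # an arbitrary (non-invariant) positive Hermitian form
    weights = [1.0, 2.0, 5.0]
    return sum(c * x * y.conjugate() for c, x, y in zip(weights, v, w))

def B_av(v, w):
    # average B(gv, gw) over the group, as in the proof
    return sum(B(shift(v, k), shift(w, k)) for k in range(3)) / 3
```

By construction B_av(gv, gw) = B_av(v, w) for every group element g, and B_av(v, v) > 0 for v ≠ 0.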

Corollary 10.24. Any finite dimensional representation of a compact Lie group is completely reducible.

Proof. Unitary reps are always completely reducible: if W ⊂ V is a subrep, then so is W⊥ ⊂ V and W ⊕ W⊥ = V (then induct). 

Example. G = SU(n) is a simply-connected, compact Lie group (U(n) is compact since its rows are vectors of unit length). It is simply-connected (and even 2-connected) since SU(n)/ SU(n − 1) = S^{2n−1} . Thus, Rep SU(n) = Rep su(n) = Rep sl(n) (where sl(n) is the complexification of su(n)). Thus, we relearn that f.d. reps of sl(n) are completely reducible. This proof strategy is called the Weyl unitary trick.

In fact we will show that every semisimple Lie algebra has a Lie group whose real form is a compact
Lie group.

10.6 Matrix coefficients


Let G be a compact Lie group, and let V be a f.d. irreducible representation of G. Let (−, −) be a
unitary form on V , which is unique up to scaling by a positive number (ultimately a consequence of
Schur). Choose an orthonormal basis v1 , . . . , vn of V . We can consider the expression

ψV,ij (g) := ρV (g)ij = (ρV (g)vi , vj ),

computing the ij entry of the matrix for g in the given basis. This is a smooth function ψV,ij : G → C called a matrix coefficient. Note that this is independent of the normalization of the form (since scaling the form by λ divides the orthonormal basis vectors by √λ), so it only depends on a choice of orthonormal basis.
Let W be another irrep of G. Say {wk } form an orthonormal basis for W .

Theorem 10.25 (Orthogonality of matrix coefficients).

∫G ψV,ij (g) ψ̄W,kℓ (g) dg = δV W δik δjℓ / dim V.

Remark 10.26. δV W = 0 if V ≇ W . If δV W = 1, take V = W and require vi = wi (i.e. use the same basis).

Proof. We're after the expression

∫G ( (ρV (g) ⊗ ρ̄W (g))(vi ⊗ wk ), vj ⊗ wℓ ) dg.

Note that

∫G ρV (g) ⊗ ρ̄W (g) dg = ∫G ρ_{V ⊗W∗} (g) dg =: P ∈ End(V ⊗ W̄ ) = End(V ⊗ W∗ ).

We want to compute this operator. For x ∈ V ⊗ W∗ , we claim P x ∈ (V ⊗ W∗ )^G . This is because

ρ_{V ⊗W∗} (h)P x = ∫G ρ_{V ⊗W∗} (h)ρ_{V ⊗W∗} (g)x dg = ∫G ρ_{V ⊗W∗} (hg)x dg = P x.

Thus, im P ⊂ (V ⊗ W∗ )^G , but this whole space is 0 if V ≇ W .
We will handle the case V = W next time... 

11 Lecture 11 (3/30)
Last time we talked about matrix coefficients.

11.1 Matrix coefficients + Peter-Weyl
Recall 11.1. Let V be an irrep of a compact Lie group G. Let (−, −) be an invariant, positive Hermitian
form on V . Let v1 , . . . , vn be an orthonormal basis w.r.t this form. The matrix coefficients are the smooth
functions ψV,ij : G ! C given by
ψV,ij (g) = (ρV (g)vi , vj ).

Hence, ψV,ij (g) is the ijth coefficient of the matrix of ρV (g) written in the basis v1 , . . . , vn .

Recall 11.2.

∫G ψV,ij (g) ψ̄W,kℓ (g) dg = δV W δik δjℓ / dim V.

We were in the middle of proving this last time. We showed this integral is 0 when V ≇ W by making use of the operator

P := ∫G ρV (g) ⊗ ρ̄W (g) dg

on V ⊗ W̄ = V ⊗ W∗ . We showed that P : V ⊗ W∗ → V ⊗ W∗ maps into the space (V ⊗ W∗ )^G = HomG (W, V ), which is 0 if V ≇ W . The integral we are interested in is simply (P (vi ⊗ wk ), vj ⊗ wℓ ), so it must vanish when V ≇ W . Let's now wrap up the rest of the proof.

Proof of Theorem 10.25 when V = W . In this case,

P := ∫G ρ_{V ⊗V∗} (g) dg.

Note that V ⊗ V∗ = C ⊕ U with U^G = 0, so

P = ∫G ρC (g) dg ⊕ ∫G ρU (g) dg.

The right summand takes values in U^G , so must be 0. At the same time, ρC (g) = 1, so the left factor is 1. Hence, P = 1C ⊕ 0U is the projection to the trivial representation (the span of the identity operator idV ∈ V ⊗ V∗ ). From this we see that

P (x ⊗ y) = ( (x ⊗ y, Σi vi ⊗ vi ) / (Σi vi ⊗ vi , Σi vi ⊗ vi ) ) Σi vi ⊗ vi = ( (x, y) / dim V ) Σi vi ⊗ vi .

In particular, P (vi ⊗ vk ) = (δik / dim V ) Σj vj ⊗ vj , so

(P (vi ⊗ vk ), vj ⊗ vℓ ) = δik δjℓ / dim V,

which completes the proof. 
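For a finite group the Haar integral is just averaging, so Theorem 10.25 can be tested directly. A Python check (mine, not from the lecture) for the 2-dimensional irrep of S3 realized as the symmetry group of a triangle (three rotations and three reflections, all real orthogonal matrices in the standard orthonormal basis); the prediction is (1/6) Σg g_ij g_kl = δik δjl / 2.

```python
import math

def s3_matrices():
    mats = []
    for k in range(3):
        t = 2 * math.pi * k / 3
        c, s = math.cos(t), math.sin(t)
        mats.append([[c, -s], [s, c]])   # rotation by t
        mats.append([[c, s], [s, -c]])   # reflection
    return mats

def pairing(i, j, k, l):
    # (1/|G|) * sum_g psi_ij(g) * conj(psi_kl(g)); the matrices are real
    return sum(g[i][j] * g[k][l] for g in s3_matrices()) / 6
```

All sixteen pairings come out to 0.5 exactly when (i, j) = (k, l) and 0 otherwise.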

Corollary 11.3. {ψV,ij : V ∈ Irrep(G) and i, j = 1, . . . , dim V } give an orthogonal set in L2 (G).

We can actually say something stronger.

Theorem 11.4 (Peter-Weyl). This system is complete, i.e. the ψV,ij ’s form an orthogonal basis of
L2 (G).

Notation 11.5. Let L2alg (G) := the linear span of the ψV,ij 's.

Peter-Weyl says that L2alg (G) is dense in L2 (G).


Recall 11.6. L2 (G) = { f : G → C measurable | ∫G |f |2 dg < ∞ } is the Hilbert space of square-integrable measurable functions with inner product

(f1 , f2 ) = ∫G f1 (g) f̄2 (g) dg.

Example. Say G = S1 = R/2πZ is the circle. Then the irreps of G are the usual characters ψn (θ) = e^{inθ} for n ∈ Z. PW says that these give an orthonormal basis for L2 (S1 ) with inner product (f1 , f2 ) = (1/2π) ∫0^{2π} f1 (θ) f̄2 (θ) dθ. This recovers the main theorem of Fourier analysis.
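Orthonormality of the circle characters is easy to check numerically (my sketch, not from the lecture): a uniform Riemann sum with M points computes the inner product exactly whenever |n − m| < M, since the sum is a finite geometric series.

```python
import cmath, math

def inner(n, m, M=64):
    # Riemann sum for (1/2pi) * integral of e^{i n t} * conj(e^{i m t}) dt
    total = 0j
    for k in range(M):
        t = 2 * math.pi * k / M
        total += cmath.exp(1j * (n - m) * t)
    return total / M
```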

Peter-Weyl is the first step of non-abelian harmonic analysis.

Corollary 11.7 (of Theorem 10.25, character orthogonality). For the characters

χV (g) = Tr ρV (g) = Σi ψV,ii (g),

one has

∫G χV (g) χ̄W (g) dg = δV W .

Proof.

∫G χV (g) χ̄W (g) dg = Σi Σk δV W δik δik / dim V = (δV W / dim V ) Σi δii^2 = δV W .

Corollary 11.8 (of Peter-Weyl). The characters χV (g) (V ∈ Irrep(G)) give an orthonormal basis of
L2 (G)G , the conjugation-invariant L2 -functions.

Before proving this, we reformulate the Peter-Weyl theorem.

Theorem 11.9 (Peter-Weyl, reformulated). The G × G-equivariant map

ξ : ⊕_{V ∈Irrep(G)} V ⊗ V∗ ↪ L2 (G), where ξ(vi ⊗ vj ) = ψV,ij ,

has dense image in L2 (G).

Proof of Corollary 11.8. Note that ξ^G : ⊕_{V ∈Irrep(G)} (V ⊗ V∗ )^G → L2 (G)^G satisfies (and is determined by) Σi vi ⊗ vi ↦ χV (g). Thus, its image is the linear span of the χV (g)'s. Hence, it suffices to show that L2alg (G)^G is dense in L2 (G)^G . For this, take some ψ ∈ L2 (G)^G , so there's a sequence ψn ∈ L2alg (G) with ψn → ψ as n → ∞ (by Peter-Weyl). Let ψn′ (x) := ∫G ψn (gxg−1 ) dg ∈ L2alg (G)^G . Furthermore,

‖ψn′ − ψ‖ = ‖ ∫G (gψn − ψ) dg ‖ = ‖ ∫G g(ψn − ψ) dg ‖ ≤ ∫G ‖g(ψn − ψ)‖ dg = ∫G ‖ψn − ψ‖ dg = ‖ψn − ψ‖ → 0 as n → ∞,

so ψn′ → ψ, which completes the proof. 

Let’s now prove Peter-Weyl.

11.2 Proving Peter-Weyl
11.2.1 Analytic Background

Before we can prove PW, we need some more background in analysis. In particular, we need to know about compact operators on Hilbert spaces.
Definition 11.10. Let H be a Hilbert space. A bounded operator A : H ! H is a linear map s.t.
there exists some C ≥ 0 s.t. for all v ∈ H, kAvk ≤ Ckvk. The set of such C is closed, so the minimal
such C is called the norm kAk of A. The space of bounded operators is denoted B(H) and is a Banach
space (Banach algebra even) with this norm.
Remark 11.11. kA + Bk ≤ kAk + kBk and kABk ≤ kAkkBk.
Definition 11.12. A bounded operator A on a Hilbert space H is called self-adjoint if (Av, w) = (v, Aw)
for all v, w ∈ H. We say A is compact if it is the limit of a sequence of finite rank operators (i.e.
n!∞
dim im(An ) < ∞) An : H ! H, i.e. kAn − Ak −−−−! 0. We let K(H) denote the space of compact
operators, the closure of the space Kf (H) of finite rank operators.
Remark 11.13. Kf (H) ⊂ B(H) is a 2-sided ideal, so K(H) is also a 2-sided ideal in B(H).
Lemma 11.14. If A is compact, then it maps bounded sets to pre-compact sets, i.e. sets with compact
closure.
Remark 11.15. A bounded operator will map bounded sets to bounded sets. A compact operator will map bounded sets into precompact sets.
If {vn } is a bounded sequence in H and A is a compact operator, then Avn will have a convergent subsequence (with limit possibly outside im A).
Not every bounded sequence in a Hilbert space has a convergent subsequence.
Example. Let e1 , e2 , . . . be orthonormal vectors in H. Then this is a bounded sequence with no convergent subsequence (the distance between any two of the vectors is √2).
As a consequence, we see that id : H ! H is compact ⇐⇒ dim H < ∞. Let’s prove the lemma now.

Proof of Lemma 11.14. Let vn ∈ H with ‖vn ‖ ≤ 1, and say A : H → H is compact. Choose An of finite rank with ‖An − A‖ ≤ 1/n for all n. We do a usual diagonal trick. Note that, since An has finite rank, {An vk }k≥1 lies in a compact set (a ball in a finite dim space).
Let vn^1 be a subsequence of vn s.t. A1 vn^1 converges. Let vn^2 be a subseq of vn^1 s.t. A2 vn^2 converges, and so on and so forth. Define wn := vn^n , which (away from the first k elements) is a subseq of vn^k . Note that

‖Avi^k − Avj^k ‖ ≤ ‖Ak vi^k − Ak vj^k ‖ + ‖(A − Ak )(vi^k − vj^k )‖
≤ ‖Ak vi^k − Ak vj^k ‖ + ‖A − Ak ‖ ‖vi^k − vj^k ‖
≤ ‖Ak vi^k − Ak vj^k ‖ + 2 ‖A − Ak ‖
≤ ‖Ak vi^k − Ak vj^k ‖ + 2/k.

Hence, for i, j ≫ 0, we have ‖Avi^k − Avj^k ‖ < 3/k since the first summand above vanishes in the limit. Since (a tail of) wn is a subseq of vn^k , we see that ‖Awi − Awj ‖ ≤ 3/k when i, j ≫k 0. Hence, Awi is Cauchy, so it converges. 

Proposition 11.16. Let K be a continuous function on [0, 1]2n . Define the operator BK on L2 ([0, 1]n )
by

(BK ψ)(y) := ∫_{[0,1]^n} K(x, y)ψ(x) dx.

This operator is compact.

Proof. Cover [0, 1]^{2n} by pixels of size 1/m. Approximate K on every pixel by its maximal value on that pixel, and call the resulting function Km (x, y). Then, the corresponding B_{Km} is a finite rank operator of rank ≤ m^n (its image consists of functions constant on each pixel). Finally, ‖BK − B_{Km} ‖ ≤ max |K − Km | → 0 as m → ∞ by uniform continuity of K, i.e. ∀ε > 0 ∃δ > 0 s.t. if |(x, y) − (x′ , y′ )| < δ, then |K(x, y) − K(x′ , y′ )| < ε. Hence, BK is compact. 

Corollary 11.17. If M is a compact manifold with positive smooth measure dx, then for any continuous
K on M × M , the operator

(BK ψ)(y) := ∫M K(x, y)ψ(x) dx

is compact.

Proof. If f1 , . . . , fm is a partition of unity on M , then K(x, y) = Σ_{i,j} fi (x)fj (y)K(x, y). Defining Kij (x, y) := fi (x)fj (y)K(x, y), we have BK = Σ_{i,j} B_{Kij} , so it suffices to show B_{Kij} is compact, but Kij has support in (a space homeomorphic to) [0, 1]^{2n} , so we win. 

Fact. A bounded operator B : H → H is compact ⇐⇒ it maps bounded sets to precompact sets.

We won't actually need this fact (the direction we haven't proved). On the other hand, we will need the below theorem.

Theorem 11.18 (Hilbert-Schmidt Theorem). Let A : H → H be a self-adjoint compact operator. Then, there exists an orthogonal decomposition

H = ker A ⊕ (⊕_λ Hλ ) (a completed orthogonal direct sum)

where λ runs over nonzero eigenvalues of A, and

• A|Hλ = λ · Id

• Hλ are finite dimensional

• λ are real and either form a finite set or a sequence converging to 0.

(This generalizes the spectral theorem for Hermitian operators in f.dim linear algebra.)

Example. When A is finite rank, this is just the spectral theorem for Hermitian operators in a f.d.
space. It says there exists an orthonormal basis in which A is diagonal with real eigenvalues.

Remark 11.19. Bounded operators in a Hilbert space do not have to have eigenvalues at all. For example,
consider multiplication by x on L2 ([0, 1]) (recall objects here are functions up to equality away from null
sets).

Proof of Hilbert-Schmidt. We first prove the theorem for the positive operator A^2 . The idea is to find the largest eigenvalue, take its orthocomplement, and then keep going.
Let β = ‖A‖^2 = sup_{‖v‖=1} (A^2 v, v) = sup_{‖v‖=1} (Av, Av). WLOG we may assume β ≠ 0 (otherwise A = 0). Fix a sequence An of self-adjoint finite rank operators converging to A.20 Let βn = ‖An ‖^2 , which is in fact the maximal eigenvalue of An^2 .21 Choose vn s.t. An^2 vn = βn vn and ‖vn ‖ = 1. Note that A^2 vn has a convergent subsequence, so we may assume wlog A^2 vn → w ∈ H. At the same time, An^2 vn → w since ‖A^2 vn − An^2 vn ‖ ≤ ‖A^2 − An^2 ‖ → 0 as n → ∞. Since An^2 vn = βn vn and βn → β, we conclude that vn → β^{−1} w, so A^2 w = βw. Also, we know ‖w‖ = 1. Now replace H with ⟨w⟩⊥ and continue.
In this way, we get a sequence of numbers β1 ≥ β2 ≥ β3 ≥ · · · ≥ 0 which either terminates (βn = 0 for n ≫ 0) or is infinite but tends to 0 (using compactness of A^2 ): we have eigenvectors wj of norm 1 with A^2 wj = βj wj , and the sequence {A^2 wj } = {βj wj } has a convergent subsequence, which forces βj → 0 since ‖βj wj − βk wk ‖ = (βj^2 + βk^2 )^{1/2} . Take a vector v orthogonal to all the wk . Then ‖A^2 v‖ ≤ βk ‖v‖ for every k, so A^2 v = 0, i.e. v ∈ ker A^2 . This implies

H = (⊕_{k≥1} Cwk ) ⊕ ker A^2 (completed direct sum).

This completes the proof for A^2 .
Finally, A acts on ker A^2 = ker A by 0 and on H_{βk} with eigenvalues ±√βk . 
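The "find the top eigenvalue, deflate, repeat" strategy of the proof has a finite-dimensional cartoon that is easy to run. A Python sketch (mine, not from the lecture); the power-iteration start vector and the example matrix are assumptions of the sketch.

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def top_eigen(A, steps=500):
    # power iteration on a symmetric matrix; generic start vector assumed
    v = [1.0 / (2 ** i) for i in range(len(A))]
    for _ in range(steps):
        w = mat_vec(A, v)
        norm = dot(w, w) ** 0.5
        v = [x / norm for x in w]
    return dot(v, mat_vec(A, v)), v          # Rayleigh quotient, eigenvector

def deflate(A, lam, v):
    # pass to the orthocomplement of v by subtracting lam * v v^T
    n = len(A)
    return [[A[i][j] - lam * v[i] * v[j] for j in range(n)] for i in range(n)]
```

Applying `top_eigen`, then `deflate`, then `top_eigen` again walks down the spectrum, mirroring the orthocomplement step in the proof.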

We’ll deduce Peter-Weyl next time.

12 Lecture 12 (4/1)
12.1 Peter-Weyl, Proved
Let G be a compact Lie group. Recall we want to show that

Theorem 12.1 (Peter-Weyl).

L2 (G) = ⊕_{V ∈Irrep(G)} V ⊗ V∗ (completed direct sum).

Proof. We want to make use of the Hilbert-Schmidt theorem from last time. We start by constructing a 'δ-like sequence' of continuous functions hN (x) on G, supported on small neighborhoods of 1 which shrink to 1 as N → ∞. We require hN ≥ 0, hN conjugation invariant, and ∫G hN (x) dx = 1. Note that, if ϕ is a continuous function on G, then

∫G hN (x)ϕ(x) dx → ϕ(1) as N → ∞.

How do we actually construct such a sequence?
Note that g = Lie G has a positive, invariant inner product. Start with a function h(x) ≥ 0 supported on [−ε, ε]. Then define hg (x⃗) = h(|x⃗|^2 ) where x⃗ ∈ g. Then define h̃N (g) = hg (N log g) (supported in a neighborhood of the identity on which log is defined). Then let cN = ∫G h̃N (g) dg, and set hN = (1/cN ) h̃N . Note this is invariant under conjugation since it only depends on |log g|, so we have our sequence.
20 Replace An with (1/2)(An + An∗ ).
21 This is a statement about matrices. Diagonalize to see this.

Next, we define the convolution operator

    (B_N ψ)(g) = ∫_G h_N(x)ψ(x^{−1}g)dx = ∫_G h_N(gy^{−1})ψ(y)dy.

Note that this is compact by Corollary 11.17 (applied to the kernel K(g, y) = h_N(gy^{−1})). Furthermore, B_N is
self-adjoint since K(g, y) = K(y, g) (since h_N is invariant under inversion). Further, it commutes with both
left and right multiplication by G, so

    L^2(G) = ker B_N ⊕ ⊕̂_{λ≠0} H_λ

with H_λ the (f.dim) λ-eigenspace of B_N. Each H_λ is G-invariant (say under left action), so H_λ ⊂
L^2_alg(G) = ⊕_V V ⊗ V^*. [Question: Why is H_λ in this space? Answer: Every element of H_λ generates a
f.dim representation.] Hence, for all N and any b ∈ Im B_N and ε > 0, there exists f ∈ L^2_alg(G) s.t.
‖b − f‖ < ε. Note that for continuous φ ∈ C(G), B_N φ → φ as N → ∞ (i.e. ‖B_N φ − φ‖ → 0). We can
pick f_N ∈ L^2_alg(G) so that ‖B_N φ − f_N‖ < 1/N, and so ‖φ − f_N‖ → 0 as N → ∞. Hence, L^2_alg
is dense in L^2, so we win. □

Lemma 12.2. Let G be a compact Lie group, and let G = G_0 ⊃ G_1 ⊃ G_2 ⊃ . . . be a descending sequence
of closed subgroups. Then it must stabilize, i.e. G_n = G_{n+1} for n ≫ 0.

Proof. We may assume the sequence has no repetitions, and then show it is finite. Assume not. The
dimensions have to stabilize, so we may assume dim G_i is the same for all i. Then, K = G_n^◦ (the identity
component) is the same for all n (since the Lie algebras must be the same), and is normal in each of them.
Then, G_1/K ⊃ G_2/K ⊃ . . . is a sequence of finite groups, so it must stabilize. □

Non-example. Z ⊃ 2Z ⊃ 4Z ⊃ 8Z ⊃ 16Z ⊃ . . .

Corollary 12.3. Any compact Lie group has a faithful, finite dimensional representation.

Proof. Pick a f.d. rep V_1 of G, and let G_1 = ker ρ_{V_1}. Then pick a rep V_2 of G s.t. V_2|_{G_1} is nontrivial,
and take G_2 = ker(ρ_{V_1} ⊕ ρ_{V_2}) = ker(ρ_{V_2}|_{G_1}). Continue in this way... By the lemma, this process can only
produce a finite sequence of non-isomorphic groups, so there’s a k s.t. every f.dim rep of G is trivial on
Gk . By Peter-Weyl, Gk acts trivially on L2 (G) which forces Gk = 1. Hence, V1 ⊕ · · · ⊕ Vk is a faithful
(unitary) representation of G, so G ,! U (V1 ⊕ · · · ⊕ Vk ). 

Conversely, if a compact topological group has a faithful f.dim rep, then it’s a closed subgroup of U (n)
which implies that it is itself a (compact) Lie group.

Notation 12.4. We let C(G, C) denote the Banach space of continuous C-valued functions on G. This
is complete w.r.t the norm kf k = max |f |.

Theorem 12.5 (Stone-Weierstrass Theorem). Let X be a compact metric space. Let A ⊂ C(X, C)
be a unital subalgebra s.t.

(1) A is closed.

(2) Ā = A (A is invariant under complex conjugation).

(3) A separates points, i.e. for distinct x, y ∈ X, ∃f ∈ A s.t. f(x) ≠ f(y).

Then A = C(X, C).

Theorem 12.6. L2alg (G) is dense in C(G, C) with this norm, so every continuous function on G can be
uniformly approximated by matrix coefficients of f.dim reps.

Proof. Let A be the closure of L^2_alg(G) in C(G, C). It is obviously unital, closed, and closed under complex
conjugation. Hence, it suffices to check that it separates points. Fix any x, y ∈ G s.t. f(x) = f(y) for all
f ∈ L^2_alg(G). Applying this to the matrix coefficient h ↦ f(xh), one sees f(1) = f(x^{−1}y) for every
f ∈ L^2_alg(G), so g := x^{−1}y acts trivially on L^2(G), which forces g = 1, i.e. x = y. □
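For instance (an illustration of the statement, not its proof): on G = U(1) the matrix coefficients are the characters e^{inθ}, and Fejér (Cesàro) means of the Fourier series uniformly approximate a continuous function by trigonometric polynomials, i.e. by finite combinations of matrix coefficients.

```python
import cmath, math

M = 1024                                   # grid on [0, 2π)
grid = [2 * math.pi * k / M for k in range(M)]
f = [abs(math.sin(t)) for t in grid]       # continuous (not smooth) test function

def coeff(n):                              # Fourier coefficient c_n, numerically
    return sum(f[k] * cmath.exp(-1j * n * grid[k]) for k in range(M)) / M

def fejer_sup_error(N):                    # sup |σ_N f - f| over the grid
    c = {n: coeff(n) for n in range(-N, N + 1)}
    err = 0.0
    for k in range(M):
        s = sum((1 - abs(n) / (N + 1)) * c[n] * cmath.exp(1j * n * grid[k])
                for n in range(-N, N + 1))
        err = max(err, abs(s.real - f[k]))
    return err

err8, err64 = fejer_sup_error(8), fejer_sup_error(64)   # error shrinks with N
```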

12.2 Compact (2nd countable) topological groups


Recall 12.7. A topological space is called 2nd countable iff it has a countable base. A compact space
is 2nd countable ⇐⇒ it is separable ⇐⇒ it is metrizable.

Lots of what we said for Lie groups didn't really need the smooth structure; it mainly just needed
integration. So we'll make sense of integration on compact, 2nd countable topological groups, and then
reprove things in this more general setting. (Implicitly, we're assuming all our spaces are Hausdorff.)

Example. Let

    · · · ↠ G_3 —φ_2→ G_2 —φ_1→ G_1

be a chain of surjective homomorphisms of finite groups. Then, the inverse limit

    G := lim_{←n} G_n = { (g_i)_{i≥1} ∈ ∏_{i≥1} G_i : φ_i(g_{i+1}) = g_i for all i }

is a profinite group. It is visibly an abstract group. To topologize it, we give it the weakest topology
in which all the projections p_n : G → G_n are continuous (with G_n discrete). Hence, a base of nbhds of 1
is given by the subgroups ker p_n.
    Here, a sequence a⃗^(n) = (a^n_1, a^n_2, . . . ) converges to a⃗ ⇐⇒ for all k, a^n_k eventually stabilizes to a_k. Further,
this topology is metrizable with metric

    d(a⃗, b⃗) = C^{min{k : a_k ≠ b_k}}

for some fixed 0 < C < 1. Note that the natural map G ↪ ∏_{k∈Z_+} G_k is a closed embedding (using the
product topology on the target), so we see that G is compact.
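A quick sanity check (mine, not from the notes) that this d really is a metric, and in fact an ultrametric:

```python
import random

C = 0.5
def d(a, b):                      # C^{min{k : a_k ≠ b_k}}, with d(a, a) = 0
    diffs = [k for k in range(len(a)) if a[k] != b[k]]
    return 0.0 if not diffs else C ** (diffs[0] + 1)

random.seed(0)
triples = [tuple(tuple(random.randrange(3) for _ in range(8)) for _ in range(3))
           for _ in range(300)]
# the ultrametric inequality d(a, c) <= max(d(a, b), d(b, c)), plus symmetry
ultrametric = all(d(a, c) <= max(d(a, b), d(b, c)) for a, b, c in triples)
symmetric = all(d(a, b) == d(b, a) for a, b, _ in triples)
```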

Example. The p-adic integers

    Z_p := lim_{←n} Z/p^n Z

form a profinite group. In fact, Z_p is a profinite ring. Its unit group Z_p^× = lim_{←n} (Z/p^n Z)^× is also
profinite, as are GL_n(Z_p), O_n(Z_p), Sp_{2n}(Z_p), etc.
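For example, the element −1 ∈ Z_3 is the compatible sequence g_n = 3^n − 1; here is a small check of the inverse-limit compatibility condition:

```python
# entries g[n] live in Z/3^{n+1}; compatibility means g[n+1] ≡ g[n] (mod 3^{n+1})
g = [3 ** n - 1 for n in range(1, 10)]
compatible = all(g[n + 1] % 3 ** (n + 1) == g[n] for n in range(len(g) - 1))
# and adding 1 gives 0 at every level, so the sequence represents -1
is_minus_one = all((1 + g[n]) % 3 ** (n + 1) == 0 for n in range(len(g)))
```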

Example. Absolute Galois groups (e.g. Gal(Q̄/Q)) are also profinite.

Note that one can also take inverse limits of compact Lie groups.

12.3 Integration theory on compact top. groups
Let C(X, R) denote the space of R-valued continuous functions on X (X some compact 2nd countable
topological group). Note that this is a Banach space, complete w.r.t kf k = max |f |.

Fact (Riesz representation theorem). Finite volume Borel measures on X are the same thing as non-
negative²², continuous linear functionals C(X, R) → R. Given a measure µ, the corresponding functional
is I(f) = I_µ(f) := ∫_X f dµ.
    (A measure is just a thing that lets you integrate functions.) In the above correspondence, µ is a
probability measure iff I(1) = 1. Any nonzero µ has positive, finite total mass and can be normalized to be a
probability measure.

Theorem 12.8 (Haar, von Neumann). Let G be a second countable compact group. Then G admits a
unique left-invariant probability measure which is also right-invariant.

One doesn't need the second countable assumption above. In fact, for any locally compact topological group,
there’s some Haar measure (unique up to scaling) which is left-invariant or right-invariant, but usually
not both. We won’t prove that, but will prove the weaker version stated above.

Proof. Let {g_i}_{i≥1} ⊂ G be a dense sequence in G (exists since G is 2nd countable). Fix c_i > 0 s.t.
∑_{i=1}^∞ c_i = 1 (e.g. c_i = 2^{−i}). We use these to build an averaging operator

    A : C(G, R) → C(G, R),   (Af)(x) = ∑_{i=1}^∞ c_i f(xg_i)

(absolutely convergent since f is bounded on compact G). Note that ‖A‖ = 1 and that A is left-invariant.
Let L ≅ R ⊂ C(G, R) be the constant functions, so A|_L = Id_L. The distance from f ∈ C(G, R) to L (the
"spread of f") is ν(f) = ½(max f − min f).
    We claim that ν(Af) ≤ ν(f), with equality iff f ∈ L. Indeed, choose some f ∉ L. For any x ∈ G, we
can pick j s.t. f(xg_j) < max f. Then, (Af)(x) = ∑ c_i f(xg_i) ≤ (1 − c_j) max f + c_j f(xg_j) < max f. Thus,
max(Af) < max f (since G is compact). One similarly checks that min f < min(Af), so ν(Af) < ν(f).
    We now iterate. For f ∈ C(G, R), let f_n = A^n f. This sequence is uniformly bounded by max |f| and
is equicontinuous, i.e. for all ε > 0 there is a neighborhood U = U_ε ⊂ G of 1 s.t. for all x ∈ G and u ∈ U,

    |f_n(x) − f_n(ux)| < ε.

To show this, it suffices to show that f is uniformly continuous, i.e. to find U s.t. for all x ∈ G and
u ∈ U, |f(x) − f(ux)| < ε. This would then imply

    |∑ c_i f(xg_i) − ∑ c_i f(uxg_i)| ≤ ∑ c_i |f(xg_i) − f(uxg_i)| < ε.

Hence, assume to the contrary that there exist u_i → 1 and x_i ∈ G s.t. |f(x_i) − f(u_i x_i)| ≥ ε. Since G is
compact, the x_i have a convergent subsequence, so we may assume x_i → x. Taking limits then shows that
0 = |f(x) − f(1 · x)| ≥ ε, a contradiction.
22 i.e. I(f ) ≥ 0 ⇐= f ≥ 0

    Now we appeal to Arzelà-Ascoli: a sequence f_n in C(X) (X compact) which is uniformly bounded
and equicontinuous has a convergent subsequence.²³
    Hence we get f_{n(m)} = A^{n(m)} f converging to some h ∈ C(G, R). Consider the spread:

    ν(f_{n(m)}) ≥ ν(f_{n(m)+1}) = ν(Af_{n(m)}) ≥ ν(f_{n(m+1)}).

Taking the limit as m → ∞, we have ν(h) ≥ ν(Ah) ≥ ν(h), so ν(Ah) = ν(h). Hence h is a constant, so
the assignment f ↦ h ∈ L ≅ R is a continuous linear functional. It is clearly left-invariant, nonnegative,
and satisfies 1 ↦ 1. Thus, it gives our desired Haar probability measure/integral I : C(G, R) → R.
    This just leaves uniqueness. We can similarly construct a right-invariant integral I_* : C(G, R) → R.
For any left-invariant integral J, we have J(f) = J(I_*(f)). [Question: Why?] If J(1) = 1, then this says
J(f) = I_*(f), so we get uniqueness. We also see that I(f) = I_*(f), so I is bi-invariant. □
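The contraction mechanism in the proof can be seen concretely on G = R/Z (a sketch under my own choices: g_i = iα mod 1 for irrational α, weights c_i = 2^{−i} truncated and renormalized). On f(x) = 2 Re(a e^{2πix}), the operator A multiplies the amplitude a by m = Σ c_i e^{2πi g_i} with |m| < 1, so the spread ν(A^n f) shrinks to 0, the Haar integral of f.

```python
import cmath, math

alpha = (math.sqrt(5) - 1) / 2               # irrational, so {i*alpha} is dense
K = 30
c = [2.0 ** -i for i in range(1, K + 1)]
c = [ci / sum(c) for ci in c]                # renormalize so the weights sum to 1
m = sum(ci * cmath.exp(2j * math.pi * ((i + 1) * alpha % 1.0))
        for i, ci in enumerate(c))           # amplitude multiplier of A
spreads = [2.0 * abs(m) ** n for n in range(6)]   # ν(A^n f) when |a| = 1
contracting = 0 < abs(m) < 1 and all(s > t for s, t in zip(spreads, spreads[1:]))
```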

Next time we’ll generalize facts about compact Lie groups to these more general compact 2nd countable
groups, and then we’ll talk about hydrogen atoms I guess. Tuesday lecture at MIT.

13 Lecture 13 (4/6)
Today we learn some physics.

13.1 Hydrogen Atom


This is really a quantization of Kepler's work on planetary motion.
    Let's start with the classical take on things. Imagine planetary motion: there's a sun with planets
orbiting it.

Notation 13.1. The configuration space is R³ (with the sun at the origin); let's call the coordinates
x, y, z ∈ R. We put these together to form r⃗ = (x, y, z), whose length is r = |r⃗| = √(x² + y² + z²). There's
also momentum p⃗ = (p_x, p_y, p_z) and kinetic energy ½p⃗² as well as potential energy U(r) = −1/r. The
total energy is given by the Hamiltonian

    H = ½p⃗² − 1/r.

Remark 13.2. We normalize all units so that all constants (e.g. mass) are 1.


    Define the Poisson bracket

    {f, g} := (∂f/∂p⃗)·(∂g/∂r⃗) − (∂f/∂r⃗)·(∂g/∂p⃗).

Motion is described by Hamilton's equations ḟ = {H, f}. One (e.g. Kepler) can plug this in and solve
this differential equation; we're lucky that this potential energy function U(r) is simple; this cannot be
solved by hand in general.
23 Find nested subsequences converging at each point of a countable dense sequence (using uniform boundedness), and then
take the diagonal.

13.1.1 Quantum version

In quantum theory, classical observables become operators on some Hilbert space. In the present case,
this space is L2 (R3 ). We view x, y, z as operators given by multiplication by x, y, z.

Warning 13.3. These aren’t literally operators on L2 (R3 ), e.g. multiplication by x can move a function
outside L2 . In reality, these are only operators on some dense subspace of L2 (R3 ). We won’t worry about
this too much.

    What about momentum? p_x ↦ −i∂_x. The minus is a convention, but the i is important; smth
smth real classical observables should give rise to self-adjoint operators (i.e. ∫ Af · g = ∫ f · Ag, which
we sometimes write by saying A† = A). Also, the classical Hamiltonian gets replaced by the quantum
Hamiltonian

    H = −½(∂_x² + ∂_y² + ∂_z²) − 1/r = −½Δ − 1/r.

Hamilton's equation now becomes ḟ = [H, f] (usual commutator) and is called Schrödinger's equation.
Classical states were pairs (r⃗, p⃗) (6 coordinates), but quantum states are elements of a Hilbert space,
ψ ∈ L²(R³) (∞ coordinates), normalized so ‖ψ‖ = 1. We consider this ψ modulo ‘phase factors’²⁴ (so
we're looking at lines in L²(R³)). Classical states transform non-linearly, but these quantum states will
transform linearly. Then we have Schrödinger's equation (for states)

    i∂_t ψ = Hψ.

More explicitly,

    i∂_t ψ = −½(∂_x² + ∂_y² + ∂_z²)ψ − (1/r)ψ.
How do you solve this? If H were just a matrix, the solution would be ψ(t) = e^{−itH}ψ(0), with the exponential
given by the usual power series. If H is some infinite-dimensional operator, we can still take inspiration
from this. If we have an eigenvector, Hψ(0) = Λψ(0), then ψ(t) = e^{−itΛ}ψ(0) is a solution; more generally,
we can take superpositions of these. Hence, we'd like an eigenbasis for H (note H is symmetric and even
self-adjoint²⁵).
    We want an orthonormal basis ψ_N of L²(R³) so that Hψ_N = E_N ψ_N. We call ψ_N the state of
energy E_N (note E_N ∈ R since H is self-adjoint). Consider ψ(x, y, z, 0) = ∑ c_N ψ_N(x, y, z). Here one has
c_N = (ψ(0), ψ_N). Given this initial condition, we get the solution

    ψ(x, y, z, t) = ∑ c_N e^{−itE_N} ψ_N(x, y, z).

Thus, we only need to find the eigenvectors ψ_N satisfying the stationary Schrödinger equation
Hψ_N = E_N ψ_N.
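A minimal finite-dimensional sketch (mine, not from the lecture): for H = diag(E_1, E_2), the superposition ψ(t) = Σ c_N e^{−itE_N} ψ_N solves i∂_t ψ = Hψ, and its norm is conserved.

```python
import cmath

E = [1.0, -0.5]                    # "energies" E_1, E_2
c = [0.6, 0.8]                     # |c_1|^2 + |c_2|^2 = 1

def psi(t):                        # components of Σ c_N e^{-i t E_N} ψ_N
    return [cN * cmath.exp(-1j * t * EN) for cN, EN in zip(c, E)]

t, h = 0.7, 1e-5
lhs = [1j * (a - b) / (2 * h) for a, b in zip(psi(t + h), psi(t - h))]  # i ∂_t ψ
rhs = [EN * z for EN, z in zip(E, psi(t))]                              # H ψ
max_residual = max(abs(x - y) for x, y in zip(lhs, rhs))
norm = sum(abs(z) ** 2 for z in psi(t))
```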
    This is similar to the story of compact operators, but more complicated. H is not compact, and also
not bounded. Its spectrum won't be discrete. It'll have a discrete part (called ‘bound states’ if I heard
correctly) as well as a continuous part (giving integrals instead of sums). At least, we can try to find the
discrete spectrum.
24 i.e. scalar factors of absolute value 1
25 Pavel is distinguishing these two and seemingly claiming self-adjoint is something complicated

Goal. Solve this equation (Hψ = Eψ where E an eigenvalue), and figure out why we’re talking about
this in a Lie groups class.
Note everything is rotationally invariant, so we should utilize this symmetry. This amounts to passing
to spherical coordinates. 1/r is already in spherical coordinates. The Laplacian splits into two pieces,
the radial part and the spherical part:

    Δ = Δ_r + (1/r²)Δ_sph,  where Δ_r = ∂_r² + (2/r)∂_r and Δ_sph = (1/sin²φ)∂_θ² + (1/sin φ)∂_φ · sin φ ∂_φ.

Above, our spherical coordinates are (r, θ, φ) where r is the radius, θ the angle in the horizontal plane,
and φ the angle in the vertical plane. Write r⃗ = ru⃗ where |u⃗| = 1. We look for solutions of the form
ψ(r, u⃗) = f(r)ξ(u⃗) (‘separation of variables’²⁶).
    First note that if Δ_sph ξ + λξ = 0, then f satisfies an ODE depending on λ. Second, we claim that λ
is nonnegative. This is because

    ∫ (Δ_sph ξ) · ξ̄ = − ∫ |∇ξ|² ≤ 0  ⟹  λ ≥ 0.

What will be the equation for f? It's a "calculus exercise" to compute that f satisfies the ODE

    f'' + (2/r)f' + (2/r − λ/r² + 2E)f = 0.

    Here is where Lie groups start to come in. Δ_sph acts on L²(S²) (really on some dense subspace) and
is rotationally invariant (since Δ, Δ_r, and 1/r² are; this is not obvious from its formula). Now, as
SO(3)-reps, we have

    L²(S²) = L_0 ⊕ L_2 ⊕ L_4 ⊕ . . .  with dim L_k = k + 1

(apparently this was on some homework). Now, Δ_sph preserves each L_{2ℓ} and acts on it by a scalar. Once
we compute these scalars, we'll know all the eigenvectors and eigenvalues of this operator. What are
these scalars? There are a few ways to compute them. Here's one...
    Let w_ℓ be the 0-weight vector in L_{2ℓ} (recall it has weights 2ℓ, 2ℓ − 2, . . . , 0, . . . , 2 − 2ℓ, −2ℓ). It
turns out that h ∈ sl_2 acts by −2i∂_θ. Since w_ℓ has weight 0, ∂_θ w_ℓ = 0, so it depends only on φ. In fact, it
is a degree ℓ polynomial in cos φ, so write w_ℓ = P_ℓ(cos φ). Recall that the Jacobian in passing between
spherical and Euclidean coordinates is J = r² sin φ. Hence (matrix coefficients?),

    ∫_{−1}^{1} P_m(z)P_n(z) dz = ∫_0^π sin φ · P_m(cos φ) · P_n(cos φ) dφ = 0  if m ≠ n.

So P_n is a degree n polynomial, and they are orthogonal under the uniform measure; this makes them
Legendre polynomials.
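This orthogonality can be verified exactly with rational arithmetic; the sketch below generates P_n from Bonnet's recursion (n+1)P_{n+1} = (2n+1)zP_n − nP_{n−1}, a standard fact not derived in the lecture.

```python
from fractions import Fraction

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else Fraction(0)) +
            (q[k] if k < len(q) else Fraction(0)) for k in range(n)]

def integrate(p):                     # ∫_{-1}^{1} p(z) dz, exactly
    return sum(2 * a / (k + 1) for k, a in enumerate(p) if k % 2 == 0)

P = [[Fraction(1)], [Fraction(0), Fraction(1)]]          # P_0 = 1, P_1 = z
for n in range(1, 5):                                     # Bonnet's recursion
    zPn = [Fraction(0)] + P[n]
    nxt = poly_add([(2 * n + 1) * a for a in zPn], [-n * b for b in P[n - 1]])
    P.append([a / (n + 1) for a in nxt])

orthogonal = all(integrate(poly_mul(P[m], P[n])) == 0
                 for m in range(6) for n in range(6) if m != n)
```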
    We can also calculate the action of Δ_sph on P_ℓ. Recall that Δ_sph = (1/sin²φ)∂_θ² + (1/sin φ)∂_φ · sin φ ∂_φ,
and note that, on functions of z = cos φ, (sin φ)^{−1}∂_φ = −∂_z. Using this (and independence from θ), one can
show that

    Δ_sph P_ℓ = ∂_z((1 − z²)∂_z P_ℓ) = −λP_ℓ.

26 Apparently we earlier separated time from space. Now we separate radius from angle.

    We want to compute λ. Write P_ℓ = Cz^ℓ + . . . . We compute the leading term of the LHS:

    −∂_z(z² ∂_z(Cz^ℓ)) = −ℓ(ℓ + 1)Cz^ℓ + . . . .

Thus, λ = ℓ(ℓ + 1).

Proposition 13.4. ∆sph acts on each L2` by the scalar −`(` + 1).
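One can confirm the proposition exactly on the zero-weight vectors: applying D = ∂_z(1 − z²)∂_z to P_ℓ returns −ℓ(ℓ+1)P_ℓ (a sketch in rational arithmetic, with the Legendre polynomials again generated by Bonnet's recursion, a standard fact).

```python
from fractions import Fraction

def deriv(p):
    return [k * a for k, a in enumerate(p)][1:] or [Fraction(0)]

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else Fraction(0)) +
            (q[k] if k < len(q) else Fraction(0)) for k in range(n)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def D(p):                                 # D p = ∂_z ((1 - z^2) ∂_z p)
    dp = deriv(p)
    inner = poly_add(dp, [Fraction(0), Fraction(0)] + [-a for a in dp])
    return trim(deriv(inner))

P = [[Fraction(1)], [Fraction(0), Fraction(1)]]          # P_0 = 1, P_1 = z
for n in range(1, 5):                                     # Bonnet's recursion
    zPn = [Fraction(0)] + P[n]
    nxt = poly_add([(2 * n + 1) * a for a in zPn], [-n * b for b in P[n - 1]])
    P.append([a / (n + 1) for a in nxt])

eigen_ok = all(trim(D(P[l])) == trim([-l * (l + 1) * a for a in P[l]])
               for l in range(6))
```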

    Note that here we do get a discrete spectrum, even though the operator is unbounded.

Notation 13.5. Let y_{ℓm} denote the vector in L_{2ℓ} of weight 2m (e.g. y_{ℓ0} = w_ℓ). This will be of the form

    y_{ℓm} = e^{imθ} P_ℓ^m(cos φ).

These functions are called spherical harmonics. These were known before quantum mechanics: Laplace
studied the Laplace operator on the sphere.

    Note these spherical harmonics actually have some dependency on θ now. We ignored that (sin φ)^{−2}∂_θ²
term before, but now it acts on y_{ℓm} and generates a −m²/(1 − z²) term. We get

    ∂_z((1 − z²)∂_z P_ℓ^m(z)) − (m²/(1 − z²)) P_ℓ^m(z) + ℓ(ℓ + 1)P_ℓ^m(z) = 0.

This is called the Legendre differential equation. Note that −ℓ ≤ m ≤ ℓ (in fact, it turns out these
are the only values of the parameters for which this equation has a solution which is smooth near z = ±1).
This solution will be (almost?) a polynomial, unique up to scaling. One ends up with

    P_ℓ^m = (1 − z²)^{m/2} ∂_z^{ℓ+m} (1 − z²)^ℓ,

which is a polynomial when m is even. These are called associated Legendre polynomials.
Remark 13.6. This P_ℓ^m is a matrix coefficient, so it's a trigonometric polynomial. You can write it as a
polynomial in cos with some sin factor when the degree is odd (or something? I didn't quite catch what
he was saying).
    Let's go back to the radial equation. Recall it is

    f'' + (2/r)f' + (2/r − ℓ(ℓ + 1)/r² + 2E)f = 0.
How do we deal with this? We start with the magic change of variables: write f(r) = r^ℓ e^{−r/n} h(2r/n).
Letting ρ = 2r/n, h must satisfy

    ρh'' + (2ℓ + 2 − ρ)h' + (n − ℓ − 1 + ¼(1 + 2En²)ρ)h = 0.

We should choose n so that the last term goes away, i.e. we take n = 1/√(−2E), i.e. E = −1/(2n²).²⁷ Thus, we
have

    ρh'' + (2ℓ + 2 − ρ)h' + (n − ℓ − 1)h = 0,     (13.1)
27 Since our potential is negative, one can show that E < 0 if you want a solution lying in L2

called the Laguerre equation. Look at solutions near ρ = 0. They will have the form h = ρ^s(1 + o(1))
(for two values of s). The characteristic equation for s is (only the first two terms are relevant for this)

    s(s − 1) + s(2ℓ + 2) = 0  ⟺  s(s + 2ℓ + 1) = 0,

with two solutions, s = 0 and s = −2ℓ − 1. We have a basis of two solutions, the first smooth and the
second having a singularity. We claim the solution corresponding to s = −2ℓ − 1 is not possible. [Question:
Why?] Observe

    ∫ |ψ|² dx dy dz = ∫ |f|² |ξ|² r² sin φ dr dθ dφ = ∫ |f|² r² dr · ∫_{S²} |ξ|²

(the last factor is finite), so our f should have the property that ∫ |f|² r² dr < ∞ (since we want a solution
in L²). This is the case iff

    ∫ ρ^{2ℓ+2} e^{−ρ} |h(ρ)|² dρ < ∞.

If h ∼ ρ^{−2ℓ−1} as ρ → 0, then ρ^{2ℓ+2}|h(ρ)|² ∼ ρ^{−2ℓ} as ρ → 0, so if ℓ > 0, this is not integrable. Thus,
s = −2ℓ − 1 is not possible when ℓ > 0. Even when ℓ = 0, this is not possible: h ∼ ρ^{−1} implies f ∼ r^{−1}
near r = 0, so ψ ∼ r^{−1} near r = 0. Then, Hψ = −½Δψ − (1/r)ψ = Eψ + cδ_0 since Δ(1/r) ∼ δ_0(x, y, z).
Thus, ψ won't satisfy the Schrödinger equation at the origin (as a distribution), so s = −2ℓ − 1
is impossible even when ℓ = 0. Disallowing this behavior singles out a one-dimensional space of solutions,
the span of the solution corresponding to s = 0.
    We see that h must be regular at ρ = 0. Use the power series method: h = ∑_{k≥0} a_k ρ^k. We then must
have

    ∑_k [ k(k − 1)a_k ρ^{k−1} + (2ℓ + 2 − ρ)k a_k ρ^{k−1} + (n − ℓ − 1)a_k ρ^k ] = 0.

We can shift indices,

    ∑_k [ (k + 1)k a_{k+1} ρ^k + (2ℓ + 2)(k + 1)a_{k+1} ρ^k − k a_k ρ^k + (n − ℓ − 1)a_k ρ^k ] = 0,

to get a recursion

    (k + 1)(k + 2ℓ + 2)a_{k+1} + (n − ℓ − 1 − k)a_k = 0.

Starting with a_0 = 1, one can calculate

    a_k = (1 + ℓ − n) · · · (k + ℓ − n) / [(2ℓ + 2) · · · (2ℓ + 1 + k) · k!].

Thus,

    h(ρ) = ∑_{k≥0} (1 + ℓ − n) · · · (k + ℓ − n) / [(2ℓ + 2) · · · (2ℓ + 1 + k)] · ρ^k/k!.

We see that this converges for all ρ (the k-th coefficient behaves like a power of k divided by a factorial).

Exercise. log h(ρ)/ρ → 1 as ρ → +∞, except when the series terminates.

    When does this series terminate? Well, when one of the factors in the numerator becomes 0, i.e. if
n − ℓ − 1 ∈ Z_{≥0}. In that case you get a polynomial of degree n − ℓ − 1; it is denoted L^{2ℓ+1}_{n−ℓ−1}(ρ)
and called the generalized Laguerre polynomial. Recall, we need

    ∫ ρ^{2ℓ+2} e^{−ρ} |h(ρ)|² dρ < ∞.

We looked at convergence near 0 before, but there's also convergence near ∞. This will fail unless h(ρ)
behaves like a polynomial (the alternative is that it looks like e^ρ at infinity, so we'd get something like
e^{−ρ}e^{2ρ} above).
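All of this bookkeeping can be checked exactly (a sketch of mine in rational arithmetic): the recursion terminates precisely for n − ℓ − 1 ∈ Z_{≥0}, and the resulting polynomial solves equation (13.1).

```python
from fractions import Fraction

def coeffs(n, l, kmax=25):          # a_0 = 1 and the recursion above
    a = [Fraction(1)]
    for k in range(kmax):
        a.append(a[k] * (k + l + 1 - n) / ((k + 1) * (k + 2 * l + 2)))
    return a

def solves_laguerre(n, l):          # does h solve ρh'' + (2ℓ+2-ρ)h' + (n-ℓ-1)h = 0?
    h = coeffs(n, l, kmax=n)        # degree n-ℓ-1 < n; later coefficients vanish
    res = {}
    for k, ak in enumerate(h):
        # ρh'' and (2ℓ+2)h' contribute at ρ^{k-1}; -ρh' and (n-ℓ-1)h at ρ^k
        res[k - 1] = res.get(k - 1, 0) + (k * (k - 1) + (2 * l + 2) * k) * ak
        res[k] = res.get(k, 0) + (n - l - 1 - k) * ak
    return all(v == 0 for v in res.values())

terminates = all(coeffs(n, l)[n - l] == 0
                 for n in range(1, 5) for l in range(n))
laguerre_ok = all(solves_laguerre(n, l) for n in range(1, 5) for l in range(n))
```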

Recall 13.7. The states with E < 0 are called bound states.

Theorem 13.8. The bound states of the hydrogen atom are, up to normalization,

    ψ_{n,ℓ,m}(r, φ, θ) = r^ℓ e^{−r/n} L^{2ℓ+1}_{n−ℓ−1}(2r/n) y_{ℓm}(φ, θ)

where n = 1, 2, 3, . . . , ℓ = 0, 1, . . . , n − 1, and −ℓ ≤ m ≤ ℓ.
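As a final consistency check (my own, exact in rational arithmetic): writing f = e^{−r/n} q(r) with q(r) = r^ℓ h(2r/n) and 2E = −1/n², the radial equation is equivalent to the polynomial identity r²(q'' − (2/n)q' + q/n²) + 2r(q' − q/n) + (2r − ℓ(ℓ+1) − r²/n²)q = 0, which we verify for small n, ℓ.

```python
from fractions import Fraction

def add(p, q):
    n = max(len(p), len(q))
    return [(p[k] if k < len(p) else 0) + (q[k] if k < len(q) else 0)
            for k in range(n)]

def scale(p, s):
    return [a * s for a in p]

def shift(p, d):                            # multiply by r^d
    return [Fraction(0)] * d + p

def deriv(p):
    return [k * a for k, a in enumerate(p)][1:] or [Fraction(0)]

def radial_residual_is_zero(n, l):
    a, h = Fraction(1), []                  # Laguerre coefficients, as before
    for k in range(n + 1):
        h.append(a)
        a = a * (k + l + 1 - n) / ((k + 1) * (k + 2 * l + 2))
    q = shift([hk * Fraction(2, n) ** k for k, hk in enumerate(h)], l)
    q1, q2 = deriv(q), deriv(deriv(q))
    u = Fraction(1, n)
    part1 = shift(add(add(q2, scale(q1, -2 * u)), scale(q, u * u)), 2)
    part2 = shift(scale(add(q1, scale(q, -u)), 2), 1)
    part3 = add(add(shift(scale(q, 2), 1), scale(q, -l * (l + 1))),
                shift(scale(q, -u * u), 2))
    return all(c == 0 for c in add(add(part1, part2), part3))

radial_ok = all(radial_residual_is_zero(n, l)
                for n in range(1, 5) for l in range(n))
```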

Definition 13.9. We call n above the principal quantum number, ` the azimuthal quantum
number, and m the magnetic quantum number.

Remark 13.10. The energy can only take the values E = −1/(2n²). When an electron jumps between energy
levels, it emits a photon with energy proportional to the difference 1/(2n²) − 1/(2n'²).

    We still have not achieved what we wanted yet. These eigenfunctions do not form a basis of the Hilbert
space: the functions ψ_{n,ℓ,m} span a space L²_0(R³) ⊊ L²(R³). For example, note that (Hψ, ψ) ≤ 0 for
ψ ∈ L²_0(R³). This is not the case for all ψ ∈ L²(R³). Recall H = −½Δ − 1/r, so in general

    (Hψ, ψ) = ½ ∫ ‖∇ψ‖² − ∫ (1/r)|ψ|².

One can cook up a ψ so this is positive. In addition to the discrete spectrum/bound states we found, there's
also a continuous spectrum consisting of the whole half-line {E ≥ 0}, but we will not discuss this. Pavel
said more about this, but I didn't follow.
Remark 13.11. For each n, there are n choices of ℓ values, and each (n, ℓ) has 2ℓ + 1 choices of m values.
Hence dim W_n = n² is the dimension of the n-th energy eigenspace. In chemistry though, one observes
2n², so we're missing something. That something is spin. The real Hilbert space is L²(R³) ⊗ C².
There's more to the story that we will talk about next time.

14 Lecture 14 (4/8): Quantum stuff continued


Last time we studied the equation Hψ = Eψ where H = −½Δ − 1/r. We saw that we had ‘bound states,’
L²-eigenfunctions with corresponding energy E = −1/(2n²) for n = 1, 2, 3, . . . . These eigenfunctions looked
like

    ψ_{n,ℓ,m}(r, φ, θ) = r^ℓ e^{−r/n} L^{2ℓ+1}_{n−ℓ−1}(2r/n) Y_{ℓm}(φ, θ)

for n ≥ 1, 0 ≤ ℓ ≤ n − 1, and −ℓ ≤ m ≤ ℓ.

Recall 14.1. ` above is the azimuthal quantum number and m is the magnetic quantum number.

    There's a geometric SO(3)-symmetry, so so(3) = Lie SO(3) acts by vector fields L_x, L_y, L_z. Set L⃗ =
(L_x, L_y, L_z) = r⃗ × p⃗ where r⃗ = (x, y, z) and p⃗ = (p_x, p_y, p_z) with p_x = −i∂_x, etc. This L⃗ = r⃗ × p⃗ is called
the angular momentum operator. Note that

    L_x = −i(y∂_z − z∂_y).

These act on H-eigenspaces. Let W_n = {ψ : Hψ = −(1/(2n²))ψ} = span{ψ_{n,ℓ,m} : any ℓ, m}. From our earlier
restrictions on ℓ, m, we see that

    dim W_n = ∑_{ℓ=0}^{n−1} (2ℓ + 1) = n².

We know

    W_n = L_0 ⊕ L_2 ⊕ · · · ⊕ L_{2n−2}

as so(3)-reps.
    Apparently, we studied the case where there's one electron ‘orbiting’ a nucleus of charge +1, but this
also applies when there's a larger nucleus. If the nucleus is too big, things aren't too precise since there are
many electrons interacting with each other (and that's not taken into account here), but early in the
periodic table this is good enough. [Isn't this supposed to be a math class?]

Note 7. I'm finding it pretty hard to pay attention.

    Because of chemistry stuff, our n² seems like it should really be a 2n². We lost a factor of 2 in the
physics. There's a thing called spin (‘internal angular momentum’) that we did not take into account in
our model. This spin can be ±½.
    On the side of mathematics, this means that the Hilbert space for the theory should not be L²(R³),
but should be H = L²(R³) ⊗ C² where this C² is the 2-dimensional rep of so(3). On C², we have the
operator

    S = ½h = diag(½, −½)

whose eigenvalues are ±½. The total spin is m + s ∈ {m + 1/2, m − 1/2}. So the action of so(3) is
diagonal; the eigenvalues of h are 2m + 1 (or 2m − 1), odd numbers (‘odd highest weight’ or ‘half-integer
spin’); we get a direct sum of representations L_{2k+1}. But the Hamiltonian is the same, so instead of
ψ_{n,ℓ,m}, we have

    ψ_{n,ℓ,m,+} = ψ_{n,ℓ,m} ⊗ v_+  and  ψ_{n,ℓ,m,−} = ψ_{n,ℓ,m} ⊗ v_−

where v_+ = (1, 0)ᵀ and v_− = (0, 1)ᵀ.

    Now, V_n := {ψ : Hψ = Eψ with E = −1/(2n²)} = W_n ⊗ C², so (by Clebsch-Gordan, with C² = L_1)

    V_n = (L_0 ⊕ · · · ⊕ L_{2n−2}) ⊗ L_1 = L_1 + (L_1 + L_3) + · · · + (L_{2n−3} + L_{2n−1}) = 2L_1 ⊕ 2L_3 ⊕ · · · ⊕ 2L_{2n−3} ⊕ L_{2n−1}

and dim V_n = 2n².
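The Clebsch-Gordan bookkeeping can be verified mechanically, using L_a ⊗ L_1 = L_{a+1} ⊕ L_{a−1} (and L_0 ⊗ L_1 = L_1):

```python
def decompose(n):
    mult = {}
    for a in range(0, 2 * n - 1, 2):          # summands L_a of W_n
        for b in ([a + 1, a - 1] if a > 0 else [1]):
            mult[b] = mult.get(b, 0) + 1      # L_a ⊗ L_1 = L_{a+1} ⊕ L_{a-1}
    return mult

checks = []
for n in range(1, 8):
    mult = decompose(n)
    expected = {b: 2 for b in range(1, 2 * n - 2, 2)}   # 2L_1, ..., 2L_{2n-3}
    expected[2 * n - 1] = 1                              # one copy of L_{2n-1}
    dim = sum(m * (b + 1) for b, m in mult.items())      # dim L_b = b + 1
    checks.append(mult == expected and dim == 2 * n * n)
cg_ok = all(checks)
```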

    This does not lift to a representation of SO(3), only to one of SU(2). The matrix diag(−1, −1) ∈ SU(2)
acts by −1. This is called an ‘anomaly’. The point is that quantum states are elements up to phase
factors, and this −1 is a phase factor, so we do have an SO(3)-action on the states; we just don't have
one on vectors.
    Say we have k electrons of the same energy E = −1/(2n²). In quantum mechanics, if you have a particle
with state space V and another one with state space W, then the two together have state space V ⊗ W. If
particles are indistinguishable from each other, then you should mod out by the permutation action. If
electrons were labelled, we'd have state space V_n^{⊗k}. Since they are in fact indistinguishable, we need to
mod out by permutations. Hence, we would expect the state space to be V = S^k V_n; however, this is wrong.
The correct answer is V = V(k) = Λ^k V_n since electrons are fermions, not bosons (for bosons, you do get
the symmetric power).²⁸
Remark 14.2 (Pauli exclusion principle). When k > 2n2 , we see V (k) = 0.
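Dimension count behind the remark (a small check of mine): dim Λ^k V_n = C(2n², k), which vanishes once k > 2n².

```python
import math

def dim_wedge(n, k):                 # dim Λ^k V_n, with dim V_n = 2n^2
    return math.comb(2 * n * n, k)

pauli_ok = (dim_wedge(1, 1) == 2 and dim_wedge(1, 2) == 1 and
            dim_wedge(1, 3) == 0 and                  # 3 electrons at n = 1: none
            all(dim_wedge(2, k) == 0 for k in range(9, 12)))
```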
Remark 14.3. A generic operator will have eigenspaces of dimension ≤ 1, but here we have large
dimensions, dim V_n = 2n². This comes from symmetries grouping these eigenvalues into representations
(apparently, we've seen two so(3)-symmetries, and there's a third hidden one we'll see now).

14.1 Explanation for degeneracy of energy levels


There is another so(3) related to the Laplace-Runge-Lenz vector (homework).
Note 8. Got distracted and missed some of what we said.
    The total symmetry is so(3) ⊕ so(3) ⊕ so(3), and V_n = L_{n−1} ⊠ L_{n−1} ⊠ L_1. [Question: Why did he use ⊠
instead of ⊗? Answer: It's an external tensor product, not an internal one.] The first so(3) comes from the
Laplace-Runge-Lenz stuff, the second is the ‘geometric’ so(3) (SO(3) acts on R³, so on L²(R³)), and the
third is the ‘spin’ so(3). Forgetting about spin, we have W_n = L_{n−1} ⊠ L_{n−1}. Restricting to the diagonal
gives

    W_n|_diag = L_0 ⊕ L_2 ⊕ · · · ⊕ L_{2n−2}.

Remark 14.4. so(3) ⊕ so(3) ⊕ so(3) = so(4) ⊕ so(3) = so(4) ⊕ su(2).

    There's apparently also another symmetry which doesn't commute with the Hamiltonian, but which is
sometimes useful to consider.
14.2 Back to math: automorphisms of semisimple Lie algebras
14.2.1 Summary of last semester

Let g be a semisimple complex Lie algebra. We saw that Aut(g) is a complex Lie group with Lie Aut(g) = g
(I think in general Lie Aut(g) = Der(g)). In particular, this means there is a connected Lie group Aut(g)°
with Lie algebra g. Furthermore, we showed last semester that Aut(g)° acts transitively on the Cartan
subalgebras of g.

Definition 14.5. The adjoint group of g is G_ad := Aut(g)°.


28 Particles with half-integer spin are fermions, while those with integer spin are bosons.

14.2.2 Maximal Tori

Let h ⊂ g be a Cartan subalgebra. Let H ⊂ G_ad be the corresponding connected Lie subgroup. Elements
h ∈ H act on g = h ⊕ ⊕_{α∈R} g_α as follows: h|_h = 1 and h|_{g_{α_j}} = λ_j · id = e^{b_j} · id. Note that
h|_{g_{−α_j}} = λ_j^{−1} = e^{−b_j}. Furthermore, if α = ∑ m_i α_i, then (by compatibility with the bracket)

    h|_{g_α} = ∏_i λ_i^{m_i}.

So if x ∈ h is s.t. α_i(x) = b_i (so λ_j = e^{α_j(x)}), then h|_{g_α} = e^{α(x)}, so we see we have

    H ≅ h / 2πiP^∨,

i.e. x ↦ e^{2πix} defines an isomorphism h/P^∨ → H (recall: P^∨ is the coweight lattice). Note that
H ≅ (C^×)^{rank(g)} is a complex torus; we call it the maximal torus corresponding to h ⊂ g.
    We want to study its normalizer

    N(H) = {g ∈ G_ad : gHg^{−1} = H}.
Proposition 14.6. N(H) is the stabilizer of h ⊂ g, and it contains H as a normal subgroup with quotient
N(H)/H ≅ W, the Weyl group.

Proof. Recall the copies (sl_2)_i ⊂ g attached to the simple roots. These give maps η_i : SL_2(C) → G_ad by
the fundamental theorems of Lie theory. Set

    S_i = η_i( [ 0 1 ; −1 0 ] ) ∈ G_ad

(the matrix here is e − f). This has the property that Ad(S_i)|_h = s_i (with s_i the simple reflection). [This
was a homework problem once upon a time.] Note that S_i² = η_i(−1) ≠ 1 in general, so we do not have a
homomorphism W → G_ad, just some set-theoretic lift of W. For w ∈ W, write w = s_{i_1} · · · s_{i_m} and define
w̃ = S_{i_1} · · · S_{i_m} ∈ G_ad, so w̃ acts on h by w. Furthermore, if w = w_1 w_2, then w̃ = w̃_1 w̃_2 t for some
t ∈ H (any such t acts trivially on h). This implies that H together with the w̃ (w ∈ W) generates a
subgroup N of G_ad such that N ⊃ H (with H normal) and N/H = W.
    By definition, N ⊂ N(H), so we only need to show equality. Consider some x ∈ N(H). Write
x(α_i) = α_i'. Note that these α_i' give another system of simple roots. Since the Weyl group acts
transitively on systems of simple roots, there must be some w ∈ W such that w(α_i') = α_{p(i)}, where p is
some permutation of the simple roots. Now consider w̃x ∈ G_ad. By construction, we have (w̃x)(α_i) = α_{p(i)}.
Note that G_ad preserves all irreducible representations of g (since it acts by inner automorphisms), so
p = id. Hence, (w̃x)|_h = 1, so w̃x ∈ H, so x ∈ w̃^{−1}H ⊂ N, and we win. □

Warning 14.7. In general, the exact sequence

    1 → H → N → W → 1

is not split, i.e. N is not a semi-direct product.

    We've seen Aut(g) ⊃ G_ad. Another obvious subgroup is Aut(D) ⊂ Aut(g), where D is the Dynkin
diagram. Moreover, Aut(D) acts on G_ad, so we get a homomorphism

    ξ : Aut(D) ⋉ G_ad → Aut(g).

This is in fact injective: ξ|_{G_ad} = id, and a nontrivial Dynkin diagram automorphism can't act trivially
on g (something like this).

Theorem 14.8. ξ is an isomorphism.

Proof. We need to show that ξ is surjective. Fix some a ∈ Aut(g). There exists a g ∈ G_ad such that
ga(h) = h. We may replace a by ga, so assume WLOG that a(h) = h. By modifying a by an element of
N(H) · Aut(D),²⁹ we can assume a = 1 (i.e. that a acts trivially on h and each g_{α_i}), so we win. □

14.2.3 Forms of semisimple Lie algebras

We have classified semisimple Lie algebras over C. What about their classification over other fields, in
particular over R?

Recall 14.9. A presentation of g by generators and relations ei , fi , hi contains only integers, so makes
sense over any ring.

    For any field K (say, char K = 0), we have a Lie algebra g_K defined by the same generators and
relations; we call this the split semisimple Lie algebra. Over an algebraically closed field, every semisimple
Lie algebra is split, but this is not the case in general.
    Let g be a s.s. Lie algebra over K which splits over some finite Galois extension L/K (e.g. K = R and
L = C), i.e. g ⊗_K L = g_L is a split s.s. Lie algebra. Can we classify such g? Let Γ = Gal(L/K), so g = g_L^Γ.
Therefore, g is determined by the action of Γ on g_L. This action is twisted-linear:

    γ(λx) = γ(λ)γ(x) for λ ∈ L and x ∈ g_L.

Example. Let ρ_0(g), for g ∈ Γ, preserve all generators and act as Γ on scalars. This action gives rise to
the split s.s. Lie algebra over K: g_L^Γ = g_K.

    Other actions will be of the form ρ(g) = η(g)ρ_0(g) with η : Γ → Aut(g_L) not a homomorphism.
Instead,

    η(gh)ρ_0(g)ρ_0(h) = ρ(gh) = ρ(g)ρ(h) = η(g)ρ_0(g)η(h)ρ_0(h),

from which we see

    η(gh) = η(g)ρ_0(g)η(h)ρ_0(g)^{−1} = η(g) · g(η(h)),

so it's almost a homomorphism, but twisted by the Γ-action on Aut(g_L). This is what's called a 1-cocycle
(or twisted homomorphism). Thus, any form of g_K split over L is given by a 1-cocycle η; we call the
corresponding form g_η.

Question 14.10. When is g_{η_1} ≅ g_{η_2}?
29 a sends simple roots to a different system of simple roots. Can use a (lift of an) element of the Weyl group to make

it preserve the system of simple roots. Then use an automorphism of D to ensure a(αi ) = αi . Then, a|gαi acts by some
scalar. Can use an element of H to make all these scalars 1.

We need some a ∈ Aut(g_L) such that ρ_1(g)a = aρ_2(g), which translates to

    η_1(g) = a η_2(g) g(a)^{−1}

(twisted conjugation).

Definition 14.11. Equivalence classes of 1-cocycles, up to twisted conjugation, form the (pointed) set
H1 (Γ, Aut(gL )) called the 1st Galois cohomology.

Proposition 14.12. Forms of gL over K are labelled by elements of H1 (Γ, Aut(gL )).

15 Lecture 15 (4/13)
15.1 Forms of a semisimple Lie algebra, continued
Let g be a s.s. Lie algebra over a field K of characteristic 0. Say there is a finite Galois extension
L ⊃ K such that g ⊗K L splits, i.e. is isomorphic to the standard semisimple Lie algebra gL given by the
Serre relations. We showed last time that such forms of gL over K are classified by the cohomology set
H1 (Γ, Aut(gL )).
Today we specialize to the case of main interest to us, i.e. K = R and L = C. That is, we wish to
classify real forms of complex semisimple Lie algebras.
Remark 15.1. There’s a parallel theory of forms for reductive Lie algebras.
In this case Γ = Gal(C/R) = Z/2Z. We computed last time that

Aut(gL ) = Aut(D) n Gad where Gad = Aut(gL )◦ .

Consider a 1-cocycle η : Z/2Z ! Aut(gL ). This must satisfy

η(xy) = η(x) · x(η(y)).

Hence, η(1) = η(1)η(1) =⇒ η(1) = 1, so η is determined by the element

s := η(−1) ∈ Aut(gL ) = Aut(D) n Gad .

Not just any s will work. We require

1 = η(1) = η((−1)(−1)) = η(−1) · ((−1) ◦ η(−1)) = ss̄.

Above, ¯· denotes complex conjugation, where g is viewed as the complexification of its split real form. The s defined earlier is well-defined up to twisted conjugation: s 7! asā−1 (for a ∈ Aut(gL )). Putting this all together, we
have...

Theorem 15.2. Real semisimple Lie algebras with complexification isomorphic to g (i.e. real forms
of g) are classified by s ∈ Aut(D) n Gad s.t. ss̄ = 1, modulo the equivalence relation s ∼ asā−1 (for
a ∈ Aut(D) n Gad ; note ¯· acts trivially on Aut(D)).

The bijection in the theorem is given by

s 7! gs := {x ∈ g : x = s(x̄)} .

Example. g1 = gR = {x ∈ g : x = x̄}.

Note that we can compose s and ¯· to get the antilinear involution σs (x) = s(x̄) (note σs2 (x) = s(s̄(x)) = (ss̄)(x) = x). Hence, we can encode the real form gs using σs instead of s. In particular, note
that s gives rise to an element s0 ∈ Aut(D) = Out(g) = Aut(g)/ Inn(g) (Inn(g) = Gad ). Note that this
satisfies s0^2 = 1, and that its conjugacy class is invariant under equivalences. This s0 permutes connected
components of the Dynkin diagram D (preserves some and matches others in pairs30 ). Hence, it’s enough
to consider two kinds of pictures.

(1) D connected with s0 : D −! D.

(2) D0 = D t D and s0 exchanges them.

Proposition 15.3. If gR is semisimple, then it is a direct sum of simple Lie algebras, and the simple
Lie algebras are classified by such pictures.

(1) In the first case, gR is simple, and gR ⊗R C is also simple.


To be analyzed later...

(2) In the second, gR is simple, but g = gR ⊗R C has two summands (so is only semisimple).
Say g = a ⊕ a with a a simple complex Lie algebra. Write s = (g, h)s0 with g, h ∈ Aut(a). We know
s switches the summands and that ss̄ = 1. This gives

(gh̄, hḡ) = (g, h)s0 (ḡ, h̄)s0 = 1,

so h = ḡ−1 . Thus,

s = (g, ḡ−1 )s0 = (g, 1)s0 (ḡ, 1)−1 ∼ s0 ,

so there is only one real form with such s0 . It is


gs0 = {(x, y) ∈ a ⊕ a : (x, y) = (ȳ, x̄)} = {(x, x̄) : x ∈ a} ∼= a

with the last iso an iso of real Lie algebras.

This just leaves the case when D is connected. We start with some new definitions.

Definition 15.4. We say gs is inner to gs′ if s′ = g ◦ s for some g ∈ Gad (i.e. for some inner
automorphism), equivalently s′0 ∼ s0 . The inner class of s is the set of s′ which are inner to s. An inner real
form is a member of the inner class of the split form (i.e. s ∈ Gad ).

Definition 15.5. We say gs is quasi-split if s = s0 ∈ Aut(D).


30 since s0^2 = 1

Note that any form is inner to a unique quasi-split form. The only quasi-split inner form is the split
form.
There is one (non-split) distinguished form.

Definition 15.6. The compact real form is the one corresponding to the automorphism determined
by
s(hi ) = −hi , s(ei ) = −fi , and s(fi ) = −ei .

The corresponding real form is denoted by gc .

Proposition 15.7. The Killing form of gc is negative definite.

Proof. (g = sl2 ) In this case, we have s(h) = −h, s(e) = −f , and s(f ) = −e. We have

gc = {x ∈ g : x = s(x̄)} = ⟨ih, e − f, i(e + f )⟩ .

We see the basis is given by the Pauli matrices

X = ( i 0 ; 0 −i ),  Y = ( 0 1 ; −1 0 ),  and Z = ( 0 i ; i 0 )

(rows separated by semicolons). These span su(2) = gc (recall su(2) = {x ∈ gl2 (C) : x̄t = −x and tr x = 0}).


The Killing form on su(2) is (a scalar multiple of?) the trace form. We compute

tr X 2 = −2 and tr Y 2 = −2 and tr Z 2 = −2,

so the Killing form on su(2) = (sl2 )c is negative definite.


(The same is true for any finite-dimensional irrep: the trace form associated to any f.d. irrep will be negative definite.)
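The computation above is easy to check numerically; a minimal sketch (the matrix names X, Y, Z follow the proof):

```python
import numpy as np

# The three matrices from the proof above.
X = np.array([[1j, 0], [0, -1j]])
Y = np.array([[0, 1], [-1, 0]], dtype=complex)
Z = np.array([[0, 1j], [1j, 0]])
basis = [X, Y, Z]

# Each is traceless and skew-Hermitian, so it lies in su(2).
for A in basis:
    assert abs(np.trace(A)) < 1e-12
    assert np.allclose(A.conj().T, -A)

# Gram matrix of the trace form B(A, B) = tr(AB): it comes out as -2 * I,
# so the form is negative definite on the real span of X, Y, Z.
gram = np.array([[np.trace(A @ B).real for B in basis] for A in basis])
assert np.allclose(gram, -2 * np.eye(3))
```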
(g general) Consider the matrix S = ( 0 1 ; −1 0 ) ∈ SU(2). This preserves su(2) ⊂ sl(2, C), so Si = ( 0 1 ; −1 0 ) ∈ SU(2)i (for the ith simple root) preserves gc ⊃ su(2)i .
Hence, for every w ∈ W (i.e. w = si1 . . . sir ), the corresponding we (= Si1 . . . Sir ) also preserves gc . For
any root α, pick w ∈ W s.t. w(α) = αi is a simple root. Then, we((sl2 )α ) = (sl2 )i . Thus, (sl2 )α ∩ gc = su(2)α ,
so the Killing form is negative definite on gα ⊕ g−α . It is also negative definite on hc = h ∩ gc , since this is
spanned by {ihj : j = 1, . . . , r} and the form is positive on the R-span of the hj (multiplying by i makes it negative). We
have

gc = hc ⊕ ⊕α∈R+ (gα ⊕ g−α )c .

Remark 15.8. Above we used that the compact real form restricted to any simple root (sl2 )i is the
corresponding compact real form.

Consider Aut(gc ). The Killing form is negative definite, so Aut(gc ) ⊂ O(gc ) is a closed subgroup of an orthogonal
group, and hence compact.31 Furthermore, Lie Aut(gc ) = gc (not hard to show). Thus,

Corollary 15.9. Let Gcad = Aut(gc )0 . Then, Gcad is a connected, compact Lie group with Lie algebra gc .

Remark 15.10. This gives a new proof that reps of complex semisimple Lie algebras are completely
reducible.
Exercise (Homework). For g = sln , show Gcad = PSU(n) = SU(n)/µn (where µn the nth roots of unity).
For g = son , show Gcad = SO(n) if n is odd and Gcad = SO(n)/{±1} if n is even.

For g = sp2n , show

Gcad = U (n, H)/{±1} where U (n, H) = Sp2n (C) ∩ U (2n).

Exercise. s0 for the compact form is the involution corresponding to −w0 (dualization of representations).
Exercise. The compact form is never quasisplit.

Question 15.11. What kind of real forms do we know about?

(1) An−1 so g = sln (C).

• split: sln (R)


• compact: su(n)
• When n > 2, the diagram An−1 has the flip automorphism exchanging ei ↔ en−i (ei =
Ei,i+1 ). This corresponds to the involution s(A) = −JAt J −1 where
J is the antidiagonal matrix with alternating entries −1, 1, −1, . . . .

Thus, gs is the Lie algebra of traceless matrices A s.t. A = −J Āt J −1 (i.e. Āt J + JA = 0).
Thus, A preserves the (skew-)Hermitian form defined by J.32 What is the signature of J? For
even n, we have

J = ± Σi (zi z̄n+1−i ± zn+1−i z̄i ),

while for odd n we have

J = ± Σi (zi z̄n+1−i ± zn+1−i z̄i ) ± z(n+1)/2 z̄(n+1)/2 .

31 Why is O(n) compact? At A = 1 means Σj a2ij = 1 for each i, so O(n) ⊂ (S n−1 )n is closed.
32 If you take a Hermitian form and multiply by i, you get a skew-Hermitian form (and vice versa), so the two types are
not so different.

When n = 2p, J has signature (p, p). When n = 2p + 1, it has signature (p, p + 1) or (p + 1, p).
The upshot of all of this is that the quasi-split form is su(p, p) if n = 2p and su(p + 1, p) if
n = 2p + 1.
• There are other forms: su(p, n − p). These are neither compact nor quasi-split.

(2) Type Bn (g = so2n+1 (C))

• split: so(n + 1, n)
• compact: so(2n + 1)
• no non-split quasi-split forms, since the corresponding Dynkin diagram has no symmetries

(3) Type Cn (g = sp2n (C)):

• split: sp2n (R)


• compact: u(n, H)

(4) Dn (g = so2n (C))

• split: so(n, n)
• compact: so(2n)
• When n > 4, Aut(Dn ) = Z/2Z. When n = 4, Aut(D4 ) = S3 (claw graph). However, we only
care about conjugacy classes of involutions, and in either case, there’s only one nontrivial such
class: the one exchanging αn = en−1 + en and αn−1 = en−1 − en .
Note that the split form consists of matrices A satisfying A = −JAt J −1 where J is the antidiagonal matrix with all entries 1.
To get the quasi-split form, we should use a matrix J̃ of the same structure, except with the
2 × 2 identity I2 in the center block (diagonal instead of antidiagonal at that point). Then the
quasi-split form consists of matrices satisfying A = −J̃At J̃ −1 , i.e. it is the Lie algebra of
matrices skew-symmetric with respect to J̃. This form has signature (n + 1, n − 1), so the
quasi-split form is so(n + 1, n − 1). (n ≥ 2)
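The signature claim for the two symmetric forms can be checked numerically. A minimal sketch, assuming nothing beyond the matrix shapes described above (the size n = 5 and the name Jt for the modified matrix are arbitrary choices):

```python
import numpy as np

def signature(M):
    """Signature (p, q) of a real symmetric matrix: counts of +/- eigenvalues."""
    ev = np.linalg.eigvalsh(M)
    return (int((ev > 1e-9).sum()), int((ev < -1e-9).sum()))

n = 5          # arbitrary example size for D_n = so(2n)
N = 2 * n

# J: antidiagonal ones, the split symmetric form of signature (n, n).
J = np.fliplr(np.eye(N))
assert signature(J) == (n, n)

# Jt: same, but with the 2x2 identity in the central block.
Jt = np.fliplr(np.eye(N))
Jt[n - 1 : n + 1, n - 1 : n + 1] = np.eye(2)
assert signature(Jt) == (n + 1, n - 1)
```

Swapping the central antidiagonal 2×2 block (signature (1, 1)) for the identity (signature (2, 0)) moves the signature from (n, n) to (n + 1, n − 1), matching the claim.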

(5) G2 has a split form Gs2 and compact form Gc2 . No Dynkin diagram automorphisms.

(6) F4 has a split form F4s and compact form F4c . No Dynkin diagram automorphisms.

(7) E6 has a split form E6s and compact form E6c . There is a Dynkin diagram automorphism, so there is also
a quasi-split form E6qs .

(8) E7 has a split form E7s and compact form E7c . No Dynkin diagram automorphisms.

(9) E8 has a split form E8s and compact form E8c . No Dynkin diagram automorphisms.

Remark 15.12. There are some coincidences. For example,

• B1 = A1 gives so(2, 1) = su(1, 1) and so(3) = su(2)

• C2 = B2 gives so(5) = u(2, H) and so(3, 2) = sp4 (R)

• D2 = A1 × A1 is two unconnected points. Comparing points of view shows

su(1, 1) × su(1, 1) = sl2 (R) × sl2 (R) = so(2, 2) and su(2) × su(2) = so(4) and sl2 (C)R = so(3, 1).

Above, so(3, 1) is called the Lorentz Lie algebra.

• D3 = A3 gives sl4 (R) = so(3, 3), su(4) = so(6), and su(2, 2) = so(4, 2).

Note we still have not classified all real forms. We’ve just looked at the split, compact, and quasi-split forms.
There are still more.

16 Lecture 16 (4/15)
Last time we considered real forms of semisimple Lie algebras, and singled out a few particular forms of
note.
In particular, we defined the compact form of a semisimple Lie algebra. This had corresponding
involution ω : g ! g determined by ω(hi ) = −hi , ω(ei ) = −fi , and ω(fi ) = −ei . The corresponding
(real) Lie algebra was gc = {x ∈ g : ω(x) = x}.

16.1 Twists of the compact form


In trying to classify forms of g (a complex (semi)simple Lie algebra), we started with the split form, and
then looked at the other versions of it. It turns out that it is actually more convenient to start with the
compact form instead.
Write g = gc ⊗R C = gc + igc . Hence we can write z = x + iy and this has the natural involution
ω(z) = z̄ = x − iy. What are the other real structures on g?
Consider σ : g ! g another antilinear involution. Then, σ = ω ◦ g for some C-linear g ∈ Aut(g). We
need
1 = σ 2 = ω ◦ g ◦ ω ◦ g = ωgω −1 ω 2 g = ωgω −1 · g,

i.e. ω(g)g = 1 where ω(g) := ωgω −1 . This is our old friend the cocycle condition. What’s different? gc has
a negative definite Killing form, so g = gc ⊗R C naturally has a positive Hermitian form (complexification
of −Killing).33 Fix some x ∈ g. Then,
adω(x) = −(adx)†

is the Hermitian adjoint (negated). Hence, gc acts by skew-Hermitian operators, i.e. if x ∈ gc , then
adx = −(adx)† . Therefore, when acting on group elements, we still have ω(g) = (g † )−1 .
Now we see that the cocycle condition ω(g)g = 1 is equivalent to saying that g † = g, so the condition
on g is that it is a Hermitian operator on g.
33 Any orthonormal basis for gc is also an orthogonal basis for g

Fact. Any Hermitian operator on a space with positive Hermitian form is diagonalizable with real
eigenvalues.

For g = g † , we can write g = ⊕γ∈R g(γ) as a sum of eigenspaces; moreover, this is a grading, i.e.
[g(γ), g(β)] ⊂ g(γβ). Since g is Hermitian, we can take its absolute value |g| : g ! g. This acts on g(γ)
by |γ|. Define θ := g|g|−1 : g ! g, so θ|g(γ) = sign(γ). This is an automorphism satisfying

θ2 = 1 and θω = ωθ

(the second since ω(θ) = (θ† )−1 = θ).

Claim 16.1. θ and g define the same real structure.

Proof. Note that θ and g define the same real structure ⇐⇒ θ = agω(a)−1 for some a ∈ Gad . We
take a = |g|−1/2 (which acts by |γ|−1/2 on g(γ)). Then,

|g|−1/2 g ω(|g|−1/2 )−1 = |g|−1/2 g |g|−1/2 = g|g|−1 = θ,

so we win. □

Corollary 16.2. WLOG we may assume g = θ, i.e. σ = ω ◦ θ where ωθ = θω and θ2 = 1.

This replaces the mysterious equation ω(g)g = 1 with the simpler equation θ2 = 1.
Any real form is determined by a conjugacy class (under Aut(gc )) of such θ. Conversely, if two such
θ’s define the same real structure, then they will be conjugate under Aut(gc ).

Claim 16.3. θ, ξ as above define the same real form ⇐⇒ they are conjugate by Aut(gc ).
Proof. (!) We have ξ = xθω(x)−1 for some x ∈ Aut(g). Since ω(ξ) = ξ, we see that xθω(x)−1 =
ω(x)θx−1 . Set z := ω(x)−1 x, so we have θz = z −1 θ and ω(z) = z −1 . Note that z = x† x is a positive
operator, so we can extract a square root and set y := xz −1/2 . Then, ω(y) = ω(x)z 1/2 = xz −1/2 = y (since
ω(x) = xz −1 ), so y ∈ Aut(gc ). At the same time (use θz = z −1 θ),

ξ = xθω(x)−1 = xθzx−1 = xz −1/2 θz 1/2 x−1 = yθy −1 ,

At this point, we have obtained the following theorem.

Theorem 16.4. Real forms of g are in bijection with conjugacy classes of involutions θ ∈ Aut(gc ) (a
compact Lie group), via θ 7! σθ := ω ◦ θ.

Corollary 16.5. Have a canonical (up to Aut of gc ) decomposition g = k ⊕ p where k is the 1-eigenspace
of θ, and p is the −1-eigenspace. In particular, k is a Lie subalgebra, and [k, p] ⊂ p (so p is a k-module).
Furthermore, [p, p] ⊂ k and gc = kc ⊕ pc (kc = k ∩ gc and pc = gc ∩ p). Finally,

gσ = kc + ipc .

Example. Say g = sl2 (C), and let gσ = sl2 (R) be the split form. In this case, k = C(e − f ). Compute p
as an exercise. Then, gc = kc ⊕ pc and gσ = kc + ipc .

Figure 9: An example Vogan diagram. White vertices have sign + and black vertices have sign −.

Exercise. Show that k is a reductive Lie algebra.


We would like to simplify our task even further. Classifying involutions in a Lie group is not so easy.

Proposition 16.6. There exists a Cartan subalgebra h of g invariant under θ.

Proof. Consider a generic x ∈ kc . Note that all elements of gc are semisimple (they act as skew-Hermitian
operators, so are diagonalizable). Hence, x is regular semisimple.34 Let hc+ ⊂ kc be its centralizer. Its
complexification h+ = hc+ ⊗R C ⊂ k is a Cartan subalgebra of k (and still the centralizer of x). Let hc− ⊂ pc
be the maximal subspace s.t. hc = hc+ ⊕ hc− is a commutative subalgebra of gc .
We claim that h := hc ⊗R C ⊂ g is a Cartan subalgebra. It consists of semisimple elements by
construction (they act by normal operators on g). Suppose z ∈ g with [z, h] = 0. Write z = z+ + z− where z+ ∈ k
and z− ∈ p. Note that

[z, h] = 0 ⇐⇒ [z, h+ ] = 0 and [z, h− ] = 0 ⇐⇒ [z+ , h] = 0 and [z− , h] = 0.

Since h+ is Cartan in k, we conclude that z+ ∈ h+ . Write z− = x− + iy− with x− , y− ∈ pc . Then,
[z− , h] = 0 =⇒ [x− , hc ] = 0, so x− , y− ∈ hc− (by maximality). Thus, z ∈ h, so h ⊂ g is Cartan. It is
θ-stable since θ|h± = ±1. □

Lemma 16.7. h− does not contain any coroots of g.

Proof. Suppose otherwise, so α∨ ∈ h− . Then, θ(α∨ ) = −α∨ , so θ(gα ) = g−α . Then, σ(gα ) = ω ◦ θ(gα )...
(do this next time) 

Corollary 16.8. For generic t ∈ h+ (regular semisimple) s.t. Re(t, α∨ ) 6= 0 for any coroot α∨ (possible
since there are no coroots in h− ), consider the polarization

R+ = {α ∈ R : Re(t, α∨ ) > 0} .

Then, θ(R+ ) = R+ (since θ(t) = t).

With a polarization as above, the simple roots get permuted, so θ(αi ) = αθ(i) where θ(i) gives the
action of θ on the Dynkin diagram of g. If θ(i) = i, then θ(ei ) = ±ei , θ(hi ) = hi , and θ(fi ) = ±fi . If
θ(i) 6= i, we can normalize the generators so that θ(ei ) = eθ(i) , θ(fi ) = fθ(i) , and θ(hi ) = hθ(i) .
We can encode this info in the Dynkin diagram, to produce a Vogan diagram. Any Vogan diagram
gives rise to a real form, and any real form comes from some Vogan diagram. However, different diagrams
can give rise to the same form (diagram depends on the choice of R+ with θ(R+ ) = R+ ).
Exercise (Homework). Compute the signature of the Killing form for gσ . It should be (dim p, dim k).
Deduce that for split form, dim k = |R+ |.
If gσ is in compact inner class, then rank(k) = rank(g), so they will share a Cartan subalgebra.
34 its centralizer is a Cartan algebra

16.2 Real forms of classical groups
16.2.1 Type An−1

The Dynkin diagram has two automorphisms (identity and flip), so there are two inner classes.

• We start with the compact inner class (i.e. θ an inner automorphism, conjugation by some element
of order ≤ 2 in PSU(n)).
Such an element can be lifted to g ∈ U (n) s.t. g 2 = 1. Then, θ(x) = gxg −1 . We know what g can
look like:  

g = diag 1, . . . , 1, −1, . . . , −1 where p + q = n,


| {z } | {z }
p q

and we may assume p ≥ q. The corresponding real form will be su(p, q).
The compact form was su(n): Āt = −A (and Tr A = 0). For the form attached to g, we need
Āt = −gAg −1 (and Tr A = 0). This is just the requirement that A be skew-Hermitian for the form
defined by g.
When n = 2, A1 has no automorphisms, so all forms are inner to the compact form. In this case,
there are only two forms: su(2) and su(1, 1) = sl(2, R).

• There’s also the split inner class (assume n > 2)


The Vogan diagram has at most one fixed vertex (which exists only when n is even, so that there is an odd number of
vertices). Hence, there’s no choice when n is odd, so we only get the split form sln (R). Here, kc = son (R)
is skew-symmetric matrices while pc consists of symmetric matrices.
When n = 2k is even, there are two options. The single fixed vertex can be colored black or white.
Exercise: if the vertex is white (positive sign), then k = sp2k while if it is black, then k = so2k (this
is the split form sl2k (R)). What is the Lie algebra corresponding to the white case? It is sl(k, H),
the Lie algebra of traceless quaternionic k × k matrices. There’s a trace map

Tr : gl(k, H) −! R, A 7−! Re Σi aii .

This is the same as (1/2) Tr A, viewing A as an operator on C2k . And

sl(k, H) = ker Tr = {A ∈ gl(k, H) : Tr A = 0}

which has dimension 4k 2 − 1.
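The identity Tr A = (1/2) TrC A can be checked via the standard embedding of H into M2 (C), q = a + bj ↦ ( a b ; −b̄ ā ); a small sketch with hypothetical variable names:

```python
import numpy as np

def quat_to_c2(a, b):
    """Standard embedding of the quaternion q = a + b*j (a, b complex) into M_2(C)."""
    return np.array([[a, b], [-np.conj(b), np.conj(a)]])

rng = np.random.default_rng(1)
k = 3
# A random quaternionic k x k matrix, stored as a pair (A, B) meaning A + B*j.
A = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
B = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))

trH = np.trace(A).real  # the quaternionic trace: Re of the sum of diagonal entries

# Complex realization as a 2k x 2k matrix, built block by block.
M = np.block([[quat_to_c2(A[i, j], B[i, j]) for j in range(k)] for i in range(k)])
assert np.isclose(trH, 0.5 * np.trace(M).real)
```

Each diagonal quaternion entry contributes a + ā = 2 Re(a) to the complex trace, which is where the factor 1/2 comes from.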

16.3 Type B
There are no Dynkin diagram automorphisms, so all forms are inner. Furthermore, SO(2n + 1) has trivial
center, so θ ∈ SO(2n + 1) is of order 2. We know what all these elements look like (up to conjugation); we’ll
have θ = (− Id2p ) ⊕ Id2q+1 with p + q = n. The corresponding real form is so(2p, 2q + 1), p = 0, . . . , n
(all distinct).
Holiday on Tuesday. Lecture on Thursday at MIT.

17 Lecture 17 (4/22)
Today we finish the classification of real forms.

17.1 Last time


Let g be a simple Lie algebra over C, and let σ : g ! g be an anti-involution giving rise to the real form
gσ . We single out the compact form gc obtained when σ = ω is the Cartan involution

ω(ei ) = −fi and ω(hi ) = −hi and ω(fi ) = −ei .

Characterizing σ in terms of how much it differs from the compact form led us to characterizing real
forms in terms of an involution θ : g ! g. Given θ, we write g = k ⊕ p with k the (+1)-eigenspace and p
the (−1)-eigenspace of θ. We intersect with gc to write gc = kc ⊕ pc , and then gσ = kc + ipc . Elements
of kc are skew-Hermitian, so they exponentiate to unitary operators; hence we call kc the compact directions.
Meanwhile, ipc consists of Hermitian elements, which exponentiate to positive Hermitian operators, so we call these the noncompact
directions.
We also found a θ-stable Cartan subalgebra. While doing this, we had a lemma which we did not
prove.

Recall 17.1. We chose hc+ ⊂ kc and hc− ⊂ pc . Then formed hc = hc+ ⊕ hc− and extended C-linearly to get
h = h+ ⊕ h− .

Lemma 17.2. h− does not contain any coroots of g (w.r.t. h)

Proof. If α∨ is a coroot in h− , then θ(α∨ ) = −α∨ , so θ(eα ) = e−α and vice versa.35 Therefore, eα +e−α ∈
k. Furthermore, [h+ , eα + e−α ] = 0 since α|h+ = 0 (since α ∈ h∗− ). Since h+ is a maximal commutative
subalgebra of k, this forces eα + e−α ∈ h+ . But we also know (eα , h+ ) = 0 = (e−α , h+ ), so eα + e−α is
orthogonal to h+ , a contradiction. 

17.2 Classification of real forms


• An−1

– Compact inner class


These are all su(p, q) with p + q = n and p ≥ q.
– Split inner class
These are sln (R) and sl(n/2, H) (when n is even).

• Bn

– Compact inner class (only one since Dynkin diagram has no nontrivial auto)
so(2p + 1, 2q) where p + q = n.

This is where we stopped last time, so let’s continue.


35 normalize things so the coefficient is 1

• Cn
Dynkin diagram has no nontrivial auto, so only one inner class. θ will be inner, so θ ∈ Sp2n (C)/ ± 1
and θ2 = 1. Thus, θ(x) = gxg −1 where g ∈ Sp2n and g 2 = ±1.

– g2 = 1
We may write V = C2n = V (1) ⊕ V (−1). These eigenspaces each carry a symplectic form, so
they are even dimensional. Hence dim V (1) = 2p and dim V (−1) = 2q with p + q = n. May
assume p ≥ q (changing g to −g if necessary). In this case, one finds

g σ = u(p, q, H),

the quaternionic unitary Lie algebra, the Lie algebra of symmetries of a quaternionic
Hermitian form of signature (p, q). Can calculate that in this case, k = sp2p ⊕ sp2q .
– g 2 = −1
In this case, we write V = C2n = V (i) ⊕ V (−i) with each eigenspace isotropic. This forces
V (±i) to be Lagrangian, both of dimension n. In this case, k = gln , and one obtains the split
form gσ = sp2n (R).

• Dn (Note: the split form is so(n, n), so it could be in either class depending on the parity of n.)

– Compact inner class
We have θ ∈ Gad = SO(2n)/ ± 1 with θ2 = 1. Thus, θ(x) = gxg −1 where g ∈ SO(2n) and
g 2 = ±1.

∗ g2 = 1
Again write V = C2n = V (1) ⊕ V (−1). We need det g = 1, so dim V (−1) is even. Hence,
dim V (1) = 2p and dim V (−1) = 2q with p + q = n. Again, may assume p ≥ q. In this
case, k = so2p ⊕ so2q and gσ = so(2p, 2q).
∗ g 2 = −1
Again C2n = V (i)⊕V (−i). These are Lagrangian as before, so dim V (i) = n = dim V (−i).
Thus, in this case k = gln and one gets gσ = so∗ (2n), the Lie algebra of symmetries of a
skew-Hermitian quaternionic form.
– Other inner class
Same story except θ ∈ O(2n)/ ± 1, so θ(x) = gxg −1 with det(g) = −1. Note that we cannot
have g 2 = −1 since that would imply V = V (i) ⊕ V (−i) both Lagrangian so det g = 1. Thus,
we have g 2 = 1 so V = V (1) ⊕ V (−1) with dim V (1) = 2p + 1 and dim V (−1) = 2q − 1 (and
q ≤ p + 1). One gets k = so(2p + 1) ⊕ so(2q − 1) and gσ = so(2p + 1, 2q − 1).

Remark 17.3. Over the real numbers, there are symmetric and skew-symmetric forms. Symmetric forms have a signature,
but all skew-symmetric forms are equivalent.
Over the complex numbers, there are Hermitian and skew-Hermitian forms. These are essentially the same (multiply by i), and
they have a signature.
Over the quaternions, Hermitian and skew-Hermitian forms are again genuinely different.

Class Real forms
An−1 compact inner class su(p, q) with p ≥ q and p + q = n
An−1 split inner class sln (R), sl(n/2, H) if n even
Bn so(2p + 1, 2q) with p + q = n
Cn u(p, q, H) with p + q = n and p ≥ q, sp2n (R)
Dn compact inner class so(2p, 2q) with p + q = n and p ≥ q, so∗ (2n)
Dn other inner class so(2p + 1, 2q − 1) with p + q = n and q ≤ p + 1
G2 compact and split
F4 compact, split (k = sp(6) ⊕ sl(2)), and the other (k = so(9))
E6 split inner class split (k = sp(8)) and other (k = F4 )

Table 1: Real forms of simple complex Lie algebras (except E6 , E7 , E8 )

Example (D3 = A3 ). Here, we can match up the real forms

so(6) = su(4)
so(4, 2) = su(2, 2) quasi-split
so∗ (6) = sl(2, H)
so(3, 3) = sl4 (R) split
so(5, 1) = su(3, 1)

We should also talk about exceptional Lie algebras. When dealing with these, one should consider
Vogan diagrams. Recall that these are formed by pairing up vertices transposed by the involution, and
coloring the fixed vertices black or white.

• For black vertices, we set θ(ei ) = −ei

• For white vertices, we set θ(ei ) = +ei

Every real form gives rise to such a diagram, but there are some redundancies/equivalence relations.
Note that, of the exceptional diagrams, only E6 has automorphisms, so for the rest of them, we are just
coloring each vertex black or white.
Let’s consider the case when the automorphism of the Dynkin diagram is trivial (i.e. the compact
inner class), so the Vogan diagram is simply the Dynkin diagram + coloring.
What are the equivalences? First note that the compact form = all white vertices (θ = id) and so no
other diagram gives the compact form. Hence we consider only diagrams having ≥ 1 black vertex.
Say we have θ ∈ Gad giving our real form. This fixes all gα ’s (trivial aut on Dynkin), so θ ∈ H :=
exp(h). We color vertex i white if αi (θ) = 1 and black if αi (θ) = −1. Recall the Weyl group sits in an
exact sequence
1 −! H −! N (H) −! W −! 1.

Hence, we may modify θ by action of W . What do simple reflections do? Note that36 (since αk (θ) = ±1)

αj (si (θ)) = si (αj )(θ) = (αj − aij αi )(θ) = αj (θ) · αi (θ)−aij ,

which equals αj (θ) if aij is even and αj (θ)αi (θ) if aij is odd.

36 si (θ) := s̃i θ s̃i−1 for some s̃i ∈ N (H) lifting si ∈ W

Thus we get the following equivalence relation: given a black vertex i, we may flip the colors of
all its neighbors j with aij odd (the color of vertex i itself doesn’t change).37

Example (G2 ). The Dynkin diagram G2 has four configurations

(•, •), (•, ◦), (◦, •), and (◦, ◦).

The last one is the compact form. The other three are all equivalent so must correspond to the split
form. One can show that k is the span of the long roots and then that k = sl2 ⊕ sl2 (for the split form).
It must have rank 2 (same as G2 ) and dimension 6 = |R+ |.

• •

Figure 10: The Dynkin Diagram G2
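The sign-change rule αj (si (θ)) = αj (θ) · αi (θ)−aij can be implemented directly, and a brute-force orbit search recovers the two G2 classes found above; a small sketch (the Cartan-matrix convention is an assumption, though only the parity of the entries matters here):

```python
from itertools import product

# Cartan matrix of G2 (one convention); only the parity of a_ij matters below.
G2 = [[2, -1], [-3, 2]]

# Applying s_i multiplies the sign at vertex j by sign_i^{-a_ij}; for signs
# in {+1, -1} this flips sign_j exactly when a_ij is odd, and a_ii = 2 is
# even, so the sign at i itself never changes.
def reflect(signs, i, cartan):
    return tuple(s * (signs[i] if cartan[i][j] % 2 else 1)
                 for j, s in enumerate(signs))

def orbit(start, cartan):
    seen, todo = {start}, [start]
    while todo:
        cur = todo.pop()
        for i in range(len(cartan)):
            nxt = reflect(cur, i, cartan)
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return frozenset(seen)

orbits = {orbit(c, G2) for c in product((1, -1), repeat=2)}
assert len(orbits) == 2  # {(+,+)} alone, plus the three noncompact colorings
```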

Example (F4 ). Now let’s consider the F4 case. Here we have the configurations (up to equivalence)

Figure 11: A Dynkin diagram of type F4

(◦, ◦, ◦, ◦), (•, ◦, ◦, ◦), (◦, ◦, ◦, •), and (◦, •, •, •).

In fact, even two of these are equivalent. The last two are the same (since the right half can’t be affected
by the left half?). The first one is the compact form. What do the other two look like?
The roots of F4 are (±1, 0, 0, 0) and all its permutations, (±1, ±1, 0, 0) and all its permutations, and
(±1/2, ±1/2, ±1/2, ±1/2) (for a total of 8 + 24 + 16 = 48 roots). The (±1, 0, 0, 0) and (±1, ±1, 0, 0) roots generate
an so9 , while the (±1/2, . . . , ±1/2) roots give the spinor representation S. Thus, F4 = so9 ⊕ S.
In the second case (•, ◦, ◦, ◦), one can check that θ acts by 1 on so(9) and by −1 on S, so k = so(9).
Hence, this will not give the split form, since |R+ | = 24 6= 36 = dim so(9).
Thus, (◦, ◦, ◦, •) gives the split form. Note that here you can observe an sp(6) as the subdiagram
using vertices 1, 2, 3 (this is a copy of C3 ).
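The root count 8 + 24 + 16 = 48 is easy to verify by brute force; a short sketch:

```python
from itertools import combinations, product

roots = []
for i in range(4):                                  # (+-1, 0, 0, 0): 8 roots
    for s in (1, -1):
        v = [0.0] * 4
        v[i] = float(s)
        roots.append(tuple(v))
for i, j in combinations(range(4), 2):              # (+-1, +-1, 0, 0): 24 roots
    for s, t in product((1, -1), repeat=2):
        v = [0.0] * 4
        v[i], v[j] = float(s), float(t)
        roots.append(tuple(v))
roots += [tuple(s / 2 for s in signs)               # (+-1/2, ..., +-1/2): 16 roots
          for signs in product((1, -1), repeat=4)]

assert len(roots) == 48
# dim F4 = #roots + rank = 48 + 4 = 52; the so9 inside has dim (8 + 24) + 4 = 36.
assert len(roots) + 4 == 52 and (8 + 24) + 4 == 36
```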

This just leaves diagrams E6 , E7 , E8 .

17.2.1 Type E

We start with the E6 split inner class. This corresponds to the nontrivial automorphism, so only two
37 So change the colors of all neighbors of the black vertex except the neighbors with a double arrow coming into the black
one.

vertices get colored. They can be colored

(◦, ◦), (◦, •), (•, ◦), or (•, •).

The last three colorings are equivalent, so there are only 2 real forms in this class (neither compact). In
the first case (◦, ◦), you can check that k gives a copy of F4 . This is not the split form (we call it E61
instead) since dim F4 = 52 6= 36 = #R+ . Simple root generators of k are e1 + e5 , e2 + e4 , e3 , e6 (these all
obviously satisfy θ(e) = e and one can check that they in fact generate k as a Lie algebra); the Cartan
algebra will be spanned by h1 + h5 , h2 + h4 , h3 , h6 .

18 Lecture 18 (4/27)
We were working on classifying real forms of Lie algebras last time. We filled out much of Table 1, but we
still need to handle the exceptional Lie algebras of E type. We will complete the table this time, forming
Table 2.

18.1 E type
We looked at the split inner class of E6 last time. This is corresponding to the nontrivial automorphism,
so there are only two colored vertices. In fact, there are only two equiv classes of colorings: (+, +)
and {(+, −), (−, +), (−, −)} (+ is colored white). These correspond to E61 with k = F4 and E6spl with
k = sp(8) = C4 .
This brings us to the compact inner class. Hence, the Vogan diagram is the Dynkin diagram with
white and black vertices. We will be able to treat E6 , E7 , E8 more-or-less simultaneously. If all vertices
are white, we get the compact forms E6c , E7c , E8c . Hence, we may restrict ourselves to the case when we
have at least 1 black vertex.
By applying equivalence transformations (i.e. change colors of neighbors of a black/- vertex), we can

Class Real forms


An−1 compact inner class su(p, q) with p ≥ q and p + q = n
An−1 split inner class sln (R), sl(n/2, H) if n even
Bn so(2p + 1, 2q) with p + q = n
Cn u(p, q, H) with p + q = n and p ≥ q, sp2n (R)
Dn compact inner class so(2p, 2q) with p + q = n and p ≥ q, so∗ (2n)
Dn other inner class so(2p + 1, 2q − 1) with p + q = n and q ≤ p + 1
G2 compact and split (k = sl(2) ⊕ sl(2))
F4 compact, split (k = sp(6) ⊕ sl(2)), and the other (k = so(9))
E6 split inner class split (k = sp(8)) and E61 (k = F4 )
E6 compact inner class compact, E62 (k = so(10) × so(2)), and E63 (k = sl(6) × sl(2))

E7 compact, split (k = sl(8)), E71 (k = E6 ⊕ so(2)), E72 (k = so(12) ⊕ sl(2))


E8 compact, split (k = so(16)), E81 (k = E7 × sl(2))

Table 2: Real forms of all simple complex Lie algebras

achieve
(?) (?) (−) (?) ··· (?)

(?)

i.e. force the vertex above the branch to be a minus. Similarly, we can then achieve

(?) (?) (−) (?) ··· (?)

(−)

(now can change color of nodal vertex whenever we want).


Now, focus on the ‘right leg’ (strictly to the right of the (−) above).

• For E6 , this is (?) − (?) so has colorings ++ or {+−, −+, −−}. This means we get 2 classes on the
right. Now, the nodal transformation actually turns ++ into −+, so the two classes on the right
are one and the same (?). (The ‘node’ is the valence-3 vertex.)
One ends up with the real forms su(3) and su(2, 1) for this right leg.

• For E7 , the right leg is (?) − (?) − (?). You end up with the forms su(4), su(3, 1), and su(2, 2). To
get su(4) you use + + +. For su(3, 1), you use {− + +, − − +, + − −, + + −}, and for su(2, 2) you
can use {− + −, − − −, + − +}.

Example. Consider + + −. These are the values of an involution on simple roots. Something like:
these signs correspond to ratios of adjacent diagonal entries, so + + − gives

diag(1, 1, 1, −1),

which gives su(3, 1), while e.g. + − + gives

diag(1, 1, −1, −1),

which gives su(2, 2).

• For E8 , end up with su(5), su(4, 1), su(3, 2). One has

(su(5)) + + ++
(su(4, 1)) − + ++, − − ++, etc.
(su(3, 2)) + − ++, etc.

Note that the nodal transformation takes us between these classes, so they are again all equivalent (?).

Once the above is understood, the upshot is that we can arrange

(?) (?) (−) (−) ··· (−)

(?)

i.e. the right leg (+ the node) are all −’s (though the bottom vertex may become a + when applying the
nodal transformation).
If we really want the bottom vertex to be minus, we may arrange

(?) (?) (−) (−) (−) ··· (−)

(−)

or
(?) (?) (−) (+) (−) ··· (−)

(−)

(i.e. a single +).


One knows that +− ∼ −− and + − −− ∼ − − −− so these two cases are actually the same for E6
and E8 ! However, they are inequivalent for E7 . This simplifies things a lot from when we began.

(E6 ) We may arrange


(?) (?) (−) (−) (−)

(−)

so the only two classes are ++ on the left leg and +− on the left leg. Hence, there are at most 2
non-compact real forms in the compact inner class. To finish the classification, we just produce
these two forms.

– Could consider
(+) (+) (+) (+) (−)

(+)

Looking at all the +’s, we see that so(10) ⊂ k. Note that the root α1 (the sole −) is minuscule.
Hence, any positive root either does not contain α1 or contains it with coefficient 1. If it does
not, we are in the D5 (the so(10)). We see that k = so(10) × gl(1) = so(10) × so(2) (question:
why?). We will call this form E62 .

– Another possibility is

(+) (+) (+) (+) (+)

(−)

We see that k contains A5 , i.e. k ⊃ sl(6). The remaining weight is not minuscule, so there will
be other roots in the Lie algebra. One can show that k = sl(6) × sl(2) (homework). We will
call this form E63 .

(E8 ) We may arrange

(?) (?) (−) (−) (−) (−) (−)

(−)

so only two classes for the left leg, ++ and +−. Hence, again at most 2 remaining real forms, so it is enough
to produce them.

– Consider

(+) (+) (+) (+) (+) (+) (−)

(+)

We have E7 ⊂ k, so this is not the split form (dim E7 = 133, dim E8 = 248, and dim kspl = 120).
Note that E8 has no minuscule weights. One can show (homework) that in this case k = E7 × sl2 .
We call this form E81 .
– Second option is

(−) (+) (+) (+) (+) (+) (+)

(+)

We see D7 ⊂ k. Can show k = D8 = so(16), and that this is the split form. Sanity check:
dim k = 16 · 15/2 = 120.
Remark 18.1. E8 = so(16) ⊕ R where R = p is a 128-dimensional representation of k, the
spinor representation S+ (or S− ).
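A quick arithmetic check of the dimension bookkeeping in this remark and the sanity check above:

```python
# Plain arithmetic: E8 = so(16) + spinor, and so(16) is the split k.
dim_E8, rank_E8 = 248, 8
dim_so16 = 16 * 15 // 2           # = 120
dim_spinor = 2 ** (16 // 2 - 1)   # half-spin rep of so(16): 2^7 = 128
assert dim_so16 + dim_spinor == dim_E8
# For the split form, dim k = #R+ = (dim g - rank)/2.
assert (dim_E8 - rank_E8) // 2 == dim_so16
```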

Thus, there are 3 real forms of E8 .

This just leaves E7 . There are a priori 4 variants, but two of them will be equivalent. Specifically,

(+) (−) (−) (+) (−) (−)

(−)

is equivalent to (apply transformation to first − from the left and then to leftmost vertex)

(−) (+) (+) (+) (−) (−)

(−)

which is equiv to (apply transformation to bottom vertex)

(−) (+) (−) (+) (−) (−)

(−)

which is equiv to (apply to nodal vertex)

(−) (−) (−) (−) (−) (−)

(+)

which is equiv to (second from left)

(+) (−) (+) (−) (−) (−)

(+)

which is equiv to (right of node)

(+) (−) (−) (−) (+) (−)

(+)

which is equiv to (node)

(+) (+) (−) (+) (+) (−)

(−)

which is equiv to (rightmost vertex)

(+) (+) (−) (+) (−) (−)

(−)

The upshot is that when we have a + on the right leg, all configurations of the left leg are equivalent. Thus, there are at most 3 possible non-compact real forms of E7. These will all be different:

• First consider
(+) (+) (+) (+) (+) (−)

(+)

We have E6 ⊂ k. This is not split, since dim E6 = 78 but dim kspl = #R+ = (dim E7 − 7)/2 = 63. The − root above is minuscule. One gets that k = E6 ⊕ so(2). We call this E7^1. It is the "most compact" of the non-compact real forms (dim k maximal).

• Now consider
(−) (+) (+) (+) (+) (+)

(+)

Here we have D6 = so(12) ⊂ k, so dim k ≥ dim D6 = 12·11/2 = 66 > 63, so this is still not split. The − root is not minuscule. One can show that k = so(12) ⊕ sl(2). We denote this by E7^2.

• Finally, there is the split form

(+) (+) (+) (+) (+) (+)

(−)

In this case, we have sl(7) ⊂ k, but in fact k = sl(8), of dimension 8² − 1 = 63 as it should be.

Thus, there are 4 real forms of E7 .
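These dimension counts are easy to sanity-check by machine. A quick script (my own bookkeeping, not from the lecture; dim E7 = 133 and dim E8 = 248 are the standard values):

```python
# Check the dimension bookkeeping used above for the real forms of E7 and E8.
def dim_so(n):
    return n * (n - 1) // 2   # dim so(n)

def dim_sl(n):
    return n * n - 1          # dim sl(n)

dim_E7, dim_E8 = 133, 248

# Split E7: dim k_spl = #R+ = (dim E7 - rank)/2 = 63, realized by k = sl(8).
assert (dim_E7 - 7) // 2 == 63 == dim_sl(8)
# E7^2: k = so(12) + sl(2) has dim 66 + 3 = 69 > 63, so it is not the split form.
assert dim_so(12) == 66 and dim_so(12) + 3 > 63
# Split E8: k = so(16) has dim 16*15/2 = 120 = (dim E8 - 8)/2,
# and E8 = so(16) + (128-dimensional spinor rep), as in Remark 18.1.
assert dim_so(16) == 120 == (dim_E8 - 8) // 2
assert dim_so(16) + 128 == dim_E8
```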


We now know all real semisimple Lie algebras. They’re listed in Table 2.

18.2 Classification of connected compact Lie groups


Proposition 18.2 (Homework). If K is a compact Lie group, then k := Lie K is reductive, i.e. k =
kss ⊕ kab is a sum of a semisimple Lie algebra plus an abelian Lie algebra.

18.2.1 Classification of semisimple compact Lie groups

Definition 18.3. We say G is semisimple if Lie G is semisimple.

Lemma 18.4. Let X be a compact manifold. Then, π1 (X) is finitely generated.

Proof idea. Cover X by (finitely many) small balls. Connect the centers of all these balls (use straight lines in local coordinates, or choose a Riemannian metric and then use geodesics); this gives a finite graph Γ. Then, π1(Γ) is finitely generated (with number of generators at most the number of loops in Γ), and π1(Γ) ↠ π1(X). This is because any closed path from a vertex x0 to itself can be deformed to a graph walk.

Theorem 18.5. Let g be a semisimple complex Lie algebra, and let G^c_ad be the compact adjoint group. Then, π1(G^c_ad) = P^∨/Q^∨ is a finite group, of order det(Cartan matrix). In particular, the universal cover of G^c_ad is also a compact Lie group.

Proof. Let K ↠ G^c_ad be a finite cover, so K is a compact connected Lie group. Let Z = ker(K → G^c_ad). K is compact, and its f.d. irreps are a subset of the f.d. irreps of Lie K_C = g (since K is connected, by the fundamental theorems), i.e. they are L_λ for λ ∈ S, where P+ ∩ Q ⊂ S ⊂ P+. Note that the representations of G^c_ad are in bijection with P+ ∩ Q. (Question: why? Answer: see the beginning of the next lecture.)
Z acts by a scalar χ_λ on each L_λ. Since L_{λ+µ} ⊂ L_λ ⊗ L_µ, we see that χ_{λ+µ} = χ_λ χ_µ. Also χ_λ = 1 for λ ∈ Q (reps of G^c_ad). This implies that χ_λ depends only on λ mod Q, so we get χ : P/Q → Z^∨ (with Z^∨ the character group). Now, Peter–Weyl says χ is surjective (all characters of Z must occur in L²(K) = ⊕_{λ∈S} L_λ ⊗ L_λ^∗). The dual map gives an embedding Z ↪ (P/Q)^∨ = P^∨/Q^∨. Thus, you cannot have covers of big degree.
We next show π1(G^c_ad) is finite. We know it is finitely generated and abelian, so it is of the form Z^r ⊕ F for F some finite group. Let Γ ≤ π1(G^c_ad) be a subgroup of index N. This gives an N-sheeted covering K ↠ G^c_ad with kernel Z = π1/Γ, so |Z| = N and Z ↪ P^∨/Q^∨, so N ≤ |P^∨/Q^∨|. Thus, we must have r = 0, so π1(G^c_ad) = F.
Hence, the universal cover K = G̃^c_ad is compact and Rep G̃^c_ad = Rep g = ⟨L_λ : λ ∈ P+⟩, so we must have Z = P^∨/Q^∨ = (P/Q)^∨.
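For a concrete instance of the theorem (a standard example, not worked in lecture): take g = sl2(C), of type A1, whose Cartan matrix is (2), of determinant 2.

```latex
% Type A_1: here P^\vee/Q^\vee \cong \mathbb{Z}/2, of order \det(2) = 2.
\[
  G^c_{\mathrm{ad}} = \mathrm{SO}(3), \qquad
  \pi_1(\mathrm{SO}(3)) \cong \mathbb{Z}/2 \cong P^\vee/Q^\vee,
\]
\[
  \widetilde{G^c_{\mathrm{ad}}} = \mathrm{SU}(2) \ \text{(compact, as predicted)}, \qquad
  Z(\mathrm{SU}(2)) = \{\pm I\} \cong (P/Q)^\vee.
\]
```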

Corollary 18.6.

(1) If g is a simple complex Lie algebra, then the simply connected group Gc with Lie Gc = gc is compact
with center P ∨ /Q∨ .
(2) Let g = g_1 ⊕ · · · ⊕ g_n be a semisimple complex Lie algebra. Let G^c_i be the corresponding simply connected compact Lie groups, and let Z_i = P_i^∨/Q_i^∨. Then, any connected Lie group with Lie algebra g^c is compact, and of the form
(∏_{i=1}^n G^c_i)/Z with Z ⊂ Z_1 × · · · × Z_n.
Hence, any semisimple connected compact Lie group is of this form.

Definition 18.7. A Lie group G is simple if Lie G is simple.

Example. SU(2) is a simple Lie group even though it has the nontrivial normal subgroup Z/2Z.

Remark 18.8. Abelian connected compact Lie groups are simply tori (S 1 )n . Their universal cover is Rn
so G = Rn /L with L discrete, so G = (S 1 )m × Rn−m and compactness forces n = m.

Corollary 18.9. Any connected compact Lie group is the quotient of T × K by a finite central subgroup,
where T is a torus and K is semisimple and simply connected.

Proof. Let L be a connected, compact Lie group, and set l = Lie L. We write l = t ⊕ k with t abelian and k semisimple. Let T = exp(t) ⊂ L, a Lie subgroup. Note that Lie T̄ ⊂ z(l) = t, so T̄ = T is closed, hence compact, hence a torus. Similarly define K = exp(k), which is also closed (K is compact by the previous theorem). The natural map T × K → L is a surjective submersion, so a finite covering, so Z = ker(T × K → L) is finite central and L = (T × K)/Z.

19 Lecture 19 (4/29)
19.1 Filling in a gap
We start by filling in a gap in the proof at the end of last time. We need to explain why representations
of Gcad are related to dominant weights in the root lattice.
Let g be a semisimple complex Lie algebra, and let G be a connected, simply connected Lie group with Lie algebra g. Let π : G → G_ad be the natural covering map, and let Z = ker π. Hence, Z = Z(G) ≅ π1(G_ad) is the center of G (and the fundamental group of G_ad).

Recall 19.1. f.dim reps of G are in bijection with f.d. representations of g (since G simply connected).
In particular, irreducible ones are the Lλ with λ ∈ P+ .

The center Z will act on L_λ by scalars, i.e. via a character χ_λ : Z → C^×. Since L_{λ+µ} ⊂ L_λ ⊗ L_µ, we see that χ_{λ+µ} = χ_λ χ_µ. Thus, more generally,
χ_{Σ_i k_i ω_i} = ∏_i χ_{ω_i}^{k_i}.
Thus, χ extends to a group homomorphism χ : P → Hom(Z, C^×), λ ↦ χ_λ.


Let θ be a maximal root, so Lθ = g is the adjoint rep (by definition of θ). Here, Z acts trivially, so
χθ = 1.

Recall 19.2 (Exercise 31.10 in the notes). If the λ(h_i) are large enough, then for all roots α ∈ R, L_{λ+α} ⊂ L_λ ⊗ g.
(More specifically, this follows from Hom(L_µ, L_λ ⊗ V) = { v ∈ V[µ − λ] : e_i^{λ(h_i)+1} v = 0 }.)

In our case, V = g and µ − λ = α, so V [µ − λ] = gα . Thus, χλ χα = χλ+α = χλ , so χα = 1 for all


roots α ∈ R. Thus, χ|Q = 1, so χ really defines a map

χ : P/Q → Hom(Z, C^×),
i.e. it gives a pairing P/Q × Z → C^×. This is what we used in the proof (it gives Z → Hom(P/Q, C^×) = P^∨/Q^∨).
Remark 19.3. In particular, χ|Q = 1 tells us that Lλ lifts to a rep of Gad when λ ∈ Q.

19.2 Polar decomposition


Recall 19.4 (Linear algebra). Let A be a complex invertible matrix. Then, it can be uniquely written
in the form
A = U R where U unitary and R > 0,

i.e. R positive Hermitian.

Example. For a ∈ GL1(C) = C^×, this is a = re^{iθ}.

Remark 19.5. Also, every matrix is sum of a Hermitian matrix with a skew-Hermitian one. Take real
part + i(imaginary part). Real part Hermitian and i(imaginary part) skew-Hermitian.

Proof of Recall. Take R = (A†A)^{1/2} (note A†A is positive Hermitian, so we can take its square root). Then U = A(A†A)^{−1/2}. This gives existence. For uniqueness, say U_1 R_1 = U_2 R_2 = A. Then R_1² = A†A = R_2², and since a positive operator has a unique positive square root, R_1 = R_2, whence U_1 = U_2.
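The construction in the proof is easy to check numerically. A minimal sketch with numpy (a random 4×4 complex matrix, invertible with probability 1), computing R = (A†A)^{1/2} via the spectral theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

H = A.conj().T @ A                        # A†A is positive Hermitian
w, V = np.linalg.eigh(H)                  # spectral decomposition, w > 0
R = V @ np.diag(np.sqrt(w)) @ V.conj().T  # R = (A†A)^{1/2}
U = A @ np.linalg.inv(R)                  # U = A(A†A)^{-1/2}

assert np.allclose(U.conj().T @ U, np.eye(4))  # U is unitary
assert np.allclose(R, R.conj().T)              # R is Hermitian
assert np.all(np.linalg.eigvalsh(R) > 0)       # R > 0
assert np.allclose(U @ R, A)                   # A = UR
```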

We want to generalize this to any real semisimple group. Let gσ ⊂ g be a real form of g with
corresponding Lie group Gσ ⊂ Gad . Note this is a closed subgroup (if not, closure has a larger Lie
algebra, but every element of it still fixed by σ). Recall the decompositions

g = k ⊕ p, gc = kc ⊕ pc , and gσ = kc ⊕ ipc .

Let K c ⊂ Gcad be the (closed) subgroup with Lie K c = kc . Define

P σ = exp(ipc ) ⊂ Gσ .

Warning 19.6. This is not a group in general, e.g. since pc is not a Lie algebra but a module over kc .
Alternatively, pc acts by Hermitian matrices, so P σ does as well, but products of Hermitian matrices
need not be Hermitian.

Proposition 19.7. The exponential map exp : ipc ! P σ is a diffeomorphism.

Proof. We know that
exp : iu(n) → Herm_{>0}(n),
onto the positive Hermitian matrices, is a diffeomorphism. Why? Take the log of the eigenvalues to get the inverse log : Herm_{>0}(n) → iu(n). The map in the statement is a restriction of this one.

Corollary 19.8. P^σ ≅ R^N, where N = dim p.

Note that K σ acts on P σ by conjugation. Let

µ : K^σ × P^σ → G^σ

be the multiplication map.

Theorem 19.9. µ is a diffeomorphism.

Proof. Consider some g ∈ G^σ ⊂ Aut(g) ⊂ GL(g). We also have g† ∈ G^σ, so g†g is a positive definite automorphism. Hence, we can form R_g := (g†g)^{1/2} ∈ Aut(g).38 Set U_g := g(g†g)^{−1/2}, so g = U_g R_g. We see that R_g ∈ P^σ and U_g ∈ K^σ. This gives the inverse (bijective, since the polar decomposition is unique) to the multiplication map: g ↦ (U_g, R_g). This is smooth, so we win.
38: g†g is diagonalizable, so it splits the Lie algebra into eigenspaces: g = ⊕_{Λ>0} g_Λ, with [g_{Λ1}, g_{Λ2}] ⊂ g_{Λ1·Λ2}. Simply take square roots of the eigenvalues: set R_g|_{g_Λ} = √Λ, which makes sense since √Λ1 · √Λ2 = √(Λ1Λ2).

Corollary 19.10. G^σ ≅ K^σ × R^{dim p} ∼ K^σ, with ≅ denoting diffeomorphism and ∼ denoting homotopy equivalence here.
Hence the topology of semisimple Lie groups largely reduces to the topology of compact Lie groups (K^σ is a closed subgroup of G^c_ad).

Corollary 19.11. G_ad = G^c_ad × P, with P ⊂ G_ad acting on g by positive Hermitian operators. Hence,
π1(G_ad) = π1(G^c_ad) = P^∨/Q^∨
(P here denoting the weight lattice). (Question: do we get this corollary by regarding G_ad as a real Lie group, with P ≅ R^{dim p}?)

Corollary 19.12. Say G is a semisimple complex Lie group with center Z = Z(G). Then, Z ⊂ G^c, so it coincides with the center of G^c.
In particular, the restriction of f.dim reps from G to G^c is an equivalence.
This generalizes straightforwardly to any complex semisimple Lie group G instead of G_ad, i.e. G = G^c × P and Rep G = Rep G^c.

Warning 19.13. G and G^c have the same homotopy type, but G and G^σ in general do not. G^σ's topology is related to that of K^σ. In particular, it can happen that G is simply connected but G^σ is not.

Example. Say G = SL2(C) and G^σ = SL2(R) is its split form. Note that SL2(R) ⊃ SO(2) ≅ S¹, and in fact we have a polar decomposition
SL2(R) = SO(2) × P
with P ≅ R² consisting of positive symmetric matrices of determinant 1. Thus, SL2(R) ≅ S¹ × R² (the interior of a bagel). Its universal cover is diffeomorphic to R³.

Example. Take Gσ = SLn (C) (regarded as a real Lie group). Then, K σ = SU(n) and P σ = positive
Hermitian matrices of determinant 1. Then, we recover the usual polar decomposition.

Example. If Gσ = SLn (R), then K σ = SOn and P σ is positive symmetric matrices of det 1. This gives
the usual real polar decomposition.

19.3 Linear groups


Let G be a connected real or complex Lie group.

Definition 19.14. We say G is linear if it admits a faithful f.dim representation, i.e. it can be realized
as a subgroup of GLn .

Example. Every semisimple complex Lie group is linear. Let P_G ⊂ P be the weight lattice of G (so λ ∈ P_G ⟺ L_λ|_{π1(G)} = 1, i.e. π1(G) acts trivially on L_λ). If P_G/Q is cyclic, we can take λ a generator, and then L_λ will be faithful. P/Q is cyclic for all reduced irreducible root systems except D_{2n}, where it is Z/2Z × Z/2Z. For so(4n), take λ_1, λ_2 to generate P_G/Q, and then L = L_{λ1} ⊕ L_{λ2} is faithful.

We can characterize real linear semisimple Lie groups as well. Say gσ ⊂ g is a real form with
corresponding Lie group Gσ ⊂ G. Then, Gσ is linear since G is, and all semisimple linear real groups are
of this form.

Example. Let G^σ = Sp_{2n}(R), so K^σ = U(n). Note G^σ ⊂ Sp_{2n}(C), which is simply connected, and π1(U(n)) = Z. For every integer m ≥ 2, Sp_{2n}(R) has an m-sheeted cover Sp_{2n}(R)^{(m)} with no faithful f.dim representations (in fact, all its f.dim reps will factor through Sp_{2n}(R)).

Exercise (Homework). Classify simply connected real semisimple linear Lie groups.39

19.4 Connected complex reductive groups


Definition 19.15. A connected complex Lie group G is reductive if it is of the form ((C^×)^r × G_0)/Z, where G_0 is semisimple and Z ⊂ (C^×)^r × G_0 is a finite central subgroup. More generally, a complex Lie group G is reductive if its identity component G° is reductive and G/G° is finite.

Fact. Connected G is reductive ⟺ all f.dim reps of G are completely reducible.

Example. GLn(C) is reductive, e.g. because
GLn(C) = (C^× × SLn(C))/µ_n.

Remark 19.16. Let Z be the center of connected, reductive G. Then,

Z ⊂ (S 1 )r × Gc0 ⊂ (C× )r × G0 .

Hence, we get a compact subgroup
K := ((S¹)^r × G^c_0)/Z ⊂ G.
Restriction of f.dim reps gives an equivalence
Rep G ≅ Rep K,
so Rep G is semisimple (i.e. reps of G are completely reducible).


How do we parametrize irreps of G as above? Looking at the construction of K, they are parametrized by tuples (n_1, . . . , n_r, λ) with n_i ∈ Z, λ ∈ P+, subject to the global condition that they give a trivial character of Z.

19.5 Maximal tori


We talked about Cartan subalgebras last semester.

Recall 19.17. Cartan subalgebras of g are conjugate, even when equipped with system of simple roots
(use Weyl group acts (simply) transitively on systems of simple roots).

Definition 19.18. A Cartan subalgebra of g^c is a maximal commutative subalgebra h^c ⊂ g^c (note this automatically consists of semisimple elements). Equivalently, h^c ⊗_R C is a Cartan subalgebra of g.

Lemma 19.19. All Cartan subalgebras (with systems of simple roots) of gc are conjugate.
39 Something something find those where k is semisimple (not just reductive)

Proof. Let (h^c_1, Π_1) and (h^c_2, Π_2) be two such pairs. Then, there exists some g ∈ G so that g(h^c_1, Π_1)g^{−1} = (h^c_2, Π_2). Also, ḡ(h^c_1, Π_1)ḡ^{−1} = (h^c_2, Π_2), where ḡ = ω(g). Thus,
(ḡ^{−1}g)(h^c_1, Π_1)(ḡ^{−1}g)^{−1} = (h^c_1, Π_1),
so ḡ^{−1}g =: h ∈ H := exp(h), where h := h^c_1 ⊗_R C. Now we write the polar decomposition g = kp, so ḡ = kp^{−1}. Hence, g = ḡh gives kp = kp^{−1}h, so p = p^{−1}h and p² = h. Since p ∈ P is positive, we see p = h^{1/2} as an operator on g. In particular, p ∈ H, so conjugation by p fixes (h^c_1, Π_1). Thus, conjugation by g = kp has the same effect on (h^c_1, Π_1) as conjugation by k, so k(h^c_1, Π_1)k^{−1} = g(h^c_1, Π_1)g^{−1} = (h^c_2, Π_2), and we win (since k ∈ K = G^c).

Given a Cartan subalgebra h^c ⊂ g^c, its exponential H^c = exp(h^c) ⊂ G^c is a torus (connected, compact40, abelian), i.e. ≅ (S¹)^r. In fact, H^c is a maximal torus (any larger torus would have a larger Lie algebra, but h^c is maximal).
Conversely, given a maximal torus H^c, Lie H^c is a commutative subalgebra, and maximality of H^c forces it to be a maximal commutative subalgebra. Thus, we have a bijection
{Cartan subalgebras in g^c} ↔ {maximal tori in G^c}.
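For a concrete instance of the bijection (a standard example, not from the lecture): for g^c = su(n), the diagonal Cartan subalgebra matches the diagonal maximal torus:

```latex
\[
  \mathfrak h^c = \{\operatorname{diag}(i\theta_1,\dots,i\theta_n) : \theta_j \in \mathbb{R},\ \textstyle\sum_j \theta_j = 0\}
  \subset \mathfrak{su}(n),
\]
\[
  H^c = \exp(\mathfrak h^c)
      = \{\operatorname{diag}(z_1,\dots,z_n) : |z_j| = 1,\ \textstyle\prod_j z_j = 1\}
      \cong (S^1)^{n-1} \subset \mathrm{SU}(n).
\]
```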

Remark 19.20. Also Cartan subalgebras in g are in bijection with maximal tori in G.

Corollary 19.21. Any two maximal tori in Gc or G are conjugate.

Theorem 19.22 (to be proved next time). Every element of Gc is contained in a maximal torus.

Warning 19.23. This is false for complex groups (e.g. there exist non-semisimple elements, like a matrix with a nontrivial Jordan block).

Lecture at MIT on Tuesday.

20 Lecture 20 (5/4)
Let K be a compact connected Lie group. We proved last time that all maximal tori in K are conjugate
(even with a choice of positive root system). The point was that maximal tori T ⊂ K are in bijection
with Cartan subalgebras t ⊂ k = Lie K.
Today, we would like to prove the following theorem.

Theorem 20.1. Every element of a connected compact Lie group K is contained in a maximal torus.

(A generic element will be contained in a unique maximal torus, namely the identity component of its centralizer; but a special element may be contained in many, e.g. a central element is contained in all of them.)

40: Since h^c is maximal, H^c is closed.
Proof. The complexification K_C =: G will be a reductive connected group with k = g^c (where g = Lie G). We may assume WLOG that K is semisimple (reductive groups are products of semisimple groups with tori, up to finite quotient). Let K′ ⊂ K be the subset of elements contained in a maximal torus. Also, fix
some maximal torus T ⊂ K. Consider the map

f : K × T → K, (k, t) ↦ ktk^{−1}.

Note that K′ = im(f), so K′ is compact (hence closed in K), and so K \ K′ is open. Now, say x ∈ K is regular if the centralizer z_x ⊂ k of x in the Lie algebra has dimension ≤ r := rank K. The set of such elements K_reg ⊂ K is open (rank is lower semicontinuous) and nonempty (there are many regular elements in g^c, and exponentials of small regular elements will also be regular). On the other hand, any regular element x is contained in exp(z_x), which is a maximal torus. Therefore, K_reg ⊂ K′, so K \ K′ ⊂ K \ K_reg. The set of non-regular elements is defined by polynomial equations.41 (Question: why?) Polynomials cannot vanish on an open set unless they vanish identically; these polynomials don't vanish identically (regular elements exist), so K \ K′ is empty.

Corollary 20.2. The exponential map exp : Lie K → K is surjective.

Proof. If T ⊂ K is a maximal torus, then exp : Lie T → T is surjective (since T is commutative, so exp is a homomorphism whose image contains an open neighborhood of the identity). Applying this for all maximal tori gives the result.
Non-example. In G = SL2(C) or SL2(R), the matrix [ −1 1 ; 0 −1 ] is not in the image of the exponential map. It is the exponential of a matrix,
[ −1 1 ; 0 −1 ] = exp [ πi ? ; 0 πi ],
but it's not the exponential of a traceless matrix.
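This is quick to verify with a computation (a sketch with numpy; the entry "?" above is not pinned down in the lecture, and the value −1 used below is one choice that works):

```python
import numpy as np

# exp([[a, b], [0, a]]) = e^a * [[1, b], [0, 1]], since [[a,b],[0,a]] = a*I + b*N with N^2 = 0.
def exp_jordan(a, b):
    return np.exp(a) * np.array([[1, b], [0, 1]])

# X = [[i*pi, -1], [0, i*pi]] has trace 2*pi*i != 0, so X is not traceless, yet exp(X) works:
g = exp_jordan(1j * np.pi, -1)
assert np.allclose(g, [[-1, 1], [0, -1]])
```

For the non-existence half: a traceless Y with exp(Y) equal to this matrix would have eigenvalues ±iπ(2k + 1) ≠ 0, hence be diagonalizable, making exp(Y) diagonalizable as well; but the matrix has a nontrivial Jordan block.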

20.1 Semisimple and unipotent elements


We talked about semisimple and nilpotent elements in the Lie algebra last term. Now let’s see the
analogous notions for groups.
Let G be a connected complex reductive group.

Definition 20.3. We say that g ∈ G is semisimple (resp. unipotent) if for every f.dim rep ρ : G → GL(V), the operator ρ(g) is semisimple42 (resp. unipotent43).

Remark 20.4. For Lie algebras, we defined an element to be semisimple iff adx was a semisimple operator,
but this is the same as ρ(x) being semisimple for any rep ρ : g ! End(V ) since x ∈ g semisimple iff it’s
contained in a Cartan subalgebra.
Similarly, g ∈ G will be semisimple iff it’s contained in a maximal torus.
We won’t delve into this theory here, but developing it is done in a series of homework exercises.
41: rank smaller than expected, so certain minors have to vanish
42: diagonalizable, since V is a C-rep. In general, 'semisimple' means diagonalizable over the algebraic closure
43: the only eigenvalue is 1

98
Exercise. Let Y be a faithful f.dim rep of G. Then, g ∈ G is semisimple (resp. unipotent) iff ρY (g) is
semisimple (resp. unipotent).

Exercise. The exponential map exp : N(g) → U(G) gives a homeomorphism from the nilpotent elements of g to the unipotent elements of G.
Exercise. Let Z = Z(G) ⊂ G be the center of G, and let π : G → G/Z =: G_ad be the natural projection. (Note dim G_ad may be less than dim G, e.g. if Z contains a torus.)
(1) U(G) → U(G/Z) is a homeomorphism.
Example. If G is a torus, then G/Z = 1, so tori have no nontrivial unipotent elements.
(2) SS(G) = π^{−1}(SS(G/Z)), where SS(·) denotes semisimple elements.

Exercise (Jordan Decomposition). Any g ∈ G can be uniquely written as a product g = gs gu where


gs semisimple, gu unipotent, and gs gu = gu gs .
Remark 20.5 (to be proved later). g ∈ G is semisimple ⇐⇒ g is contained in some (complex) maximal
torus (i.e. copy of (C× )r )
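A minimal numerical illustration of the exercise (a hand-built 2×2 example; computing g_s for a general matrix needs the additive Jordan decomposition, which is not shown here):

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [0.0, 2.0]])      # single eigenvalue 2, with a nontrivial Jordan block

gs = np.diag(np.diag(g))        # semisimple part: diagonalizable, same eigenvalues as g
gu = np.linalg.inv(gs) @ g      # unipotent part: gu = gs^{-1} g = [[1, 1/2], [0, 1]]

I = np.eye(2)
assert np.allclose(gs @ gu, g)                # g = gs * gu
assert np.allclose(gs @ gu, gu @ gs)          # the two factors commute
assert np.allclose((gu - I) @ (gu - I), 0)    # (gu - 1)^2 = 0, so gu is unipotent
```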

20.2 Cartan Decomposition


Let G be a complex connected semisimple group (actually, what we'll say extends to reductive groups) with Lie algebra g := Lie G. Let g^c ⊂ g be the compact form. Pick some Cartan involution θ : g → g defining a real form; let σ = θ ∘ ω, so θ preserves g^σ (ω is the antilinear involution defining the compact form). Recall
g^c = k^c ⊕ p^c, g^σ = k^c ⊕ ip^c, and G^σ = K_σ · exp(ip^c), where P_σ := exp(ip^c)
and θ|_{k^c} = 1, θ|_{p^c} = −1.
As a manifold, we have G^σ = K_σ × P_σ, and P_σ is some Euclidean space. This P_σ is in general not a group. Recall we have a Cartan subalgebra compatible with θ, h^c = h^c_+ ⊕ h^c_− ⊂ g^c (with h^c_+ ⊂ k^c and h^c_− ⊂ p^c). Define A := exp(ih^c_−) ⊂ P_σ. This is an abelian group (since h^c_− is abelian).

Theorem 20.6 (Cartan Decomposition).

Gσ = Kσ AKσ ,

i.e. any g ∈ Gσ can be written as g = k1 ak2 with k1 , k2 ∈ Kσ and a ∈ A.

Warning 20.7. This decomposition is not unique. In particular, Kσ × A × Kσ ! Gσ is not injective.

Remark 20.8. This extends to reductive groups e.g. by forming the decomposition separately for the
torus and semisimple factors.

Example. Let Gσ = GLn (C). Then, Kσ = U (n) and A = {positive diagonal matrices}. This says any
invertible matrix g over C is of the form u1 au2 where u1 , u2 are unitary and a is diagonal matrix with
positive entries.

Example. Let G^σ = GL+_n(R), the invertible real matrices with positive determinant. Any g ∈ G^σ can be written as g = O_1 a O_2 with O_i orthogonal with determinant 1 and a positive diagonal.
It is easy to go from this to GLn(R) = O(n) A SO(n), with A consisting of diagonal matrices with positive entries.

These decompositions are well-known classically. For example:

Proof of first example. Write g = UR, the polar decomposition, so U is unitary and R positive Hermitian. We may diagonalize R = U′ a (U′)^{−1} with U′ unitary and a positive diagonal. Then, g = (U U′) a (U′)^{−1}.
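For GLn(C), this is precisely the singular value decomposition, so it can be checked numerically (a sketch with numpy):

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # invertible a.s.

u1, s, u2 = np.linalg.svd(g)    # g = u1 @ diag(s) @ u2: the K A K decomposition
a = np.diag(s)

assert np.allclose(u1 @ u1.conj().T, np.eye(3))   # u1 unitary
assert np.allclose(u2 @ u2.conj().T, np.eye(3))   # u2 unitary
assert np.all(s > 0)                              # a in A: positive diagonal
assert np.allclose(u1 @ a @ u2, g)                # g = u1 a u2
```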

How does one prove Theorem 20.6?

Lemma 20.9 (Homework). hc− is a maximal abelian subalgebra of pc , and all such subalgebras are
conjugate under Kσ .

Proof of Theorem 20.6. We know G^σ = K_σ P_σ. Hence, it is enough to show that every element p ∈ P_σ is conjugate to an element of A by the action of K_σ. This follows from the lemma: consider h^c_{p,−}, a maximal abelian subalgebra of p^c containing i log p. Then, by the lemma, there exists g ∈ K_σ such that Ad(g)(h^c_{p,−}) = h^c_−. Thus, Ad(g)(i log p) ∈ h^c_−, so Ad(g)(log p) ∈ ih^c_−, so gpg^{−1} ∈ exp(ih^c_−) = A.

20.3 Integral form of character orthogonality


Let K be a connected compact Lie group with maximal torus T ⊂ K. We know that characters of irreducible representations of K are orthonormal under the inner product
(f, g) := ∫_K f(k) \overline{g(k)} dk
on C(K)^{K_ad}, the continuous functions invariant under the adjoint action.44 But every f ∈ C(K)^{K_ad} is determined by its values on T (since every element is conjugate to an element of T), so we should be able to write this inner product just in terms of T. That is, we should have
(f, g) = ∫_T f(t) \overline{g(t)} w(t) dt
for some weight function w(t). Since all our functions are Weyl group invariant, this weight should be W-invariant as well.
What is w(t)? You can compute it directly by doing a computation in differential geometry. However,
we will not have to do this, because we secretly know what it is from the Weyl character formula.

Theorem 20.10. For any f ∈ C(K)^{K_ad},
∫_K f(k) dk = ∫_T f(t) w(t) dt,
where w(t) = |∆(t)|²/#W and ∆(t) is the Weyl denominator
∆(t) = ∏_{α∈R+} (1 − α(t)).
44 If you wanted, could have taken L2 (K)Kad instead; it doesn’t matter

Example. Take K = U(n) with
T = { diag(z_1, . . . , z_n) : z_j ∈ C with |z_j| = 1 } ≅ (S¹)^n.
The (positive) roots are α = α_{jm} = e_j − e_m (j < m), i.e. α(t) = z_j/z_m. We see that
∫_{U(n)} f(k) dk = (1/n!) ∫_T f(z_1, . . . , z_n) ∏_{j<m} |1 − z_j/z_m|² · (dθ_1 · · · dθ_n)/(2π)^n, where z_j = e^{iθ_j}.
This can be simplified. Write
|1 − z_j/z_m| = |1 − e^{i(θ_j−θ_m)}| = |e^{i(θ_j−θ_m)/2} − e^{−i(θ_j−θ_m)/2}| = 2 |sin((θ_j − θ_m)/2)|,
and so see that
∫_{U(n)} f(k) dk = (1/((2π)^n n!)) ∫_0^{2π} · · · ∫_0^{2π} ∏_{j<m} 4 sin²((θ_j − θ_m)/2) · f(diag(e^{iθ_1}, . . . , e^{iθ_n})) dθ_1 · · · dθ_n.
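For n = 2, this formula can be tested numerically against the orthonormality of characters (a sketch with numpy; on the torus, the irreducible characters of U(2) restrict to Schur polynomials in z1, z2 — a standard fact, not proved in the lecture):

```python
import numpy as np

# Midpoint grid on the 2-torus; low-degree trig polynomials are integrated exactly.
N = 200
th = 2 * np.pi * (np.arange(N) + 0.5) / N
t1, t2 = np.meshgrid(th, th)
z1, z2 = np.exp(1j * t1), np.exp(1j * t2)
weight = np.abs(1 - z1 / z2) ** 2 / 2        # |Delta(t)|^2 / #W for U(2)

chars = {                                    # Schur polynomials s_lambda(z1, z2)
    (0, 0): np.ones_like(z1),                # trivial rep
    (1, 0): z1 + z2,                         # standard rep C^2
    (1, 1): z1 * z2,                         # det
    (2, 0): z1**2 + z1 * z2 + z2**2,         # Sym^2 C^2
}

def inner(f, g):                             # (f, g) on T with the Weyl weight
    return np.mean(f * np.conj(g) * weight)

for lam, f in chars.items():
    for mu, g in chars.items():
        assert abs(inner(f, g) - (1.0 if lam == mu else 0.0)) < 1e-10
```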

Proof of theorem 20.10. We know the characters χ_λ(k) are dense in C(K)^{K_ad}, so it's enough to check this equality for f = χ_λ, the character of L_λ. Characters are orthonormal, so
∫_K χ_λ(k) dk = (χ_λ, 1) = (χ_λ, χ_0) = δ_{0,λ}.
Compare this with (using the Weyl character formula for the first equality45 and the Weyl denominator formula for the second):
(1/#W) ∫_T χ_λ(t) ∏_{α∈R+}(1 − α(t)) ∏_{α∈R+}(1 − α(t)^{−1}) dt
  = (1/#W) ∫_T [Σ_{w∈W} sign(w)(w(λ+ρ))(t)] / [ρ(t) ∏_{α∈R+}(1 − α(t)^{−1})] · ∏_{α∈R+}(1 − α(t)) ∏_{α∈R+}(1 − α(t)^{−1}) dt
  = (1/#W) ∫_T (Σ_{w∈W} sign(w) w(λ+ρ)(t)) (Σ_{w∈W} sign(w) w(ρ)^{−1}(t)) dt.
Now, if λ ≠ 0, then this is 0.46 If λ = 0, then the above becomes (1/#W) Σ_{w∈W} 1 = 1. This completes the proof.

You can reverse this. If you do the differential geometry calculation giving the integral formula, then you can use it to obtain the Weyl character formula instead. This is what Weyl did.
45: Also use α(t) ∈ S¹, so \overline{α(t)} = α(t)^{−1}.
46: Think: ∫_{S¹} e^{imθ} · e^{−inθ} dθ = 0 when n ≠ m.

20.4 Topology of Lie Groups


This will be the subject of the next few lectures.
We want to understand the (co)homology/homotopy groups of Lie groups. There are many cohomol-
ogy theories computing the same thing; for Lie groups, it will be convenient to use de Rham cohomology.
Let M be a manifold. Recall the space Ω^i(M) of complex differential i-forms, as well as the exterior derivative d : Ω^i(M) → Ω^{i+1}(M), which satisfies d² = 0.

Definition 20.11. The ith de Rham cohomology group of M is
H^i(M) = H^i(M, C) = ker(d|_{Ω^i}) / im(d|_{Ω^{i−1}}).
Forms in ker d are called closed forms, while those in im d are called exact forms.

What input from differential geometry will we need to use?

Let X be a vector field on M. Then one can form L_X : Ω^r(M) → Ω^r(M). First note that for X = Σ_i a_i ∂/∂x_i, we have X(x_i) = a_i. There is also the contraction map
ι_X : Ω^j → Ω^{j−1};
in particular, (ι_X ω)(X_1, . . . , X_{j−1}) = ω(X, X_1, . . . , X_{j−1}). The map L_X is locally given by
L_X(f dx_{i_1} ∧ · · · ∧ dx_{i_r}) = (L_X f) dx_{i_1} ∧ · · · ∧ dx_{i_r} + Σ_{j=1}^r f · dx_{i_1} ∧ · · · ∧ da_{i_j} ∧ · · · ∧ dx_{i_r}, where X = Σ_i a_i ∂/∂x_i
(with da_{i_j} replacing dx_{i_j} in the jth slot).

Theorem 20.12 (Cartan’s magic formula). On differential forms, LX = ιX d + dιX .

Friday is a holiday, so homework due date moved to Monday. There will be one more homework after
the current one, due on Monday of the last week.

21 Lecture 21 (5/6): Cohomology of Lie Groups


At the end of last time we switched topics to ‘cohomology of Lie groups.’ Let’s pick up where we left off.

Recall 21.1. Let M be a manifold. Its cohomology H^i(M, C) can be computed using the de Rham complex
0 → Ω^0(M) → Ω^1(M) → · · · → Ω^n(M) → 0,
where n = dim M. Here, Ω^i(M) is the space of (smooth, C-valued) differential i-forms, and d is the de Rham differential determined by
d(f dx_1 ∧ · · · ∧ dx_m) = df ∧ dx_1 ∧ · · · ∧ dx_m.

This satisfies d² = 0, and the cohomology of this complex,
H^i(M, C) = ker d / im d,
is the cohomology of M.

Fact. If M is compact, then dim Hi (M ) < ∞.

Definition 21.2. The Betti numbers of M are bi (M ) = dim Hi (M ).

Example. b0 (M ) = #connected components, so M connected ⇐⇒ b0 = 1.

We would like to compute these bi for compact Lie groups.

Recall 21.3. There is a product structure on cohomology. If ω ∈ Ω^i and ξ ∈ Ω^j, we can form an (i + j)-form ω ∧ ξ ∈ Ω^{i+j}. Moreover,
d(ω ∧ ξ) = dω ∧ ξ + (−1)^{deg ω} ω ∧ dξ
(above, you can think of the sign as coming from commuting d past ω; side note: I don't know if this reasoning always gives the right sign in graded situations). The Leibniz rule above tells us that ∧ descends to
H^∗(M) = ⊕_{i=0}^n H^i(M),
giving it the structure of an associative graded commutative algebra. Graded commutative means
ab = (−1)^{deg(a) deg(b)} ba.
Remark 21.4. Let f : M → N be a differentiable (i.e. smooth) map. Then, we get a pullback f^∗ : Ω^i(N) → Ω^i(M) which commutes with d and preserves ∧. Hence, it induces a graded algebra homomorphism
f^∗ : H^∗(N) → H^∗(M).

Exercise. Say f_t : M → N is a smooth family of maps for t ∈ (0, 1) (i.e. f : (0, 1) × M → N is smooth). Then, f_t^∗ : H^∗(N) → H^∗(M) is independent of t. Hint: show that if dω = 0, then (∂/∂t) f_t^∗ ω is exact.
(So f^∗ does not change under deformations of f.)
Before turning to Lie groups, we recall Cartan's magic formula. Let v be a vector field on M. Then we get a Lie derivative L_v : Ω^i → Ω^i as well as a contraction operator ι_v : Ω^i → Ω^{i−1}. The latter is determined by
ι_v(g df_1 ∧ · · · ∧ df_r) = Σ_{j=1}^r (−1)^{j−1} g · df_j(v) · df_1 ∧ · · · ∧ d̂f_j ∧ · · · ∧ df_r
(an alternating sum over slots; in lecture this was written loosely as Alt(g · ι_v f_1 df_2 ∧ · · · ∧ df_r), "or something like this"). One can check
ι_v(ω ∧ ξ) = ι_v ω ∧ ξ + (−1)^{deg ω} ω ∧ ι_v ξ
and L_v(ω ∧ ξ) = L_v ω ∧ ξ + ω ∧ L_v ξ (L_v is a degree-0 operator, so there is no sign).

Lemma 21.5 (Cartan's magic formula). L_v = ι_v d + dι_v.

Note that ι_v ∘ d + d ∘ ι_v = [ι_v, d] is a (graded) commutator; in other words, ι_v is a chain homotopy from L_v to the zero map.
Proof. We showed last semester that the commutator of two derivations is a derivation. The same holds
true for graded commutators, so [ιv , d] is a derivation of degree 0 (exercise). Hence, we can check this
equality on generators in a local chart.
That is, we may assume ω = f or ω = df (everything else is a wedge/product of these). We see

Lv f = df (v) = ιv df = (ιv d + dιv )f

since ιv f = 0. Similarly,
Lv df = dLv f = dιv df = (ιv d + dιv )df

since d2 f = 0. 

Corollary 21.6. Lv maps closed forms to exact forms.

Proof. If dω = 0, then Lv ω = dιv ω. 

Corollary 21.7. Lv defines the zero map on H∗ (M ).

Corollary 21.8. If a connected Lie group G acts on M , then it acts trivially on H∗ (M )

(A path in G gives a homotopy of actions of its elements, so anything in the path component of 1
acts via the identity).

Theorem 21.9. Suppose a compact connected Lie group G acts on M. Then H^∗(M) is computed by the subcomplex Ω^∗(M)^G ⊂ Ω^∗(M) of G-invariant forms.

Proof. Let P : Ω^∗(M) → Ω^∗(M) be averaging over G, i.e.
Pω = ∫_G g^∗ω dg.
Then P² = P, and we have
Ω^∗(M) = Ω^∗(M)_1 ⊕ Ω^∗(M)_0 = Ω^∗(M)^G ⊕ ker P.

This decomposition is respected by d, so the cohomology of M is the direct sum of the cohomology of these two subcomplexes. Suppose ω ∈ Ω^∗(M)_0 is a closed form, dω = 0. Then, [ω] = [g^∗ω] for any g ∈ G (G acts trivially on cohomology). Thus we can take the average:
[ω] = ∫_G [g^∗ω] dg = [∫_G g^∗ω dg] = [Pω] = 0.
Thus, ω = dη for some η = η_1 + η_0 ∈ Ω^{i−1}(M). Then ω = dη_1 + dη_0 with dη_1 ∈ Ω^∗(M)_1 and dη_0 ∈ Ω^∗(M)_0, so dη_1 = 0 and ω = dη_0, which means that Ω^∗(M)_0 is exact (it has zero cohomology).

Corollary 21.10. Let G be a compact Lie group. Then H∗ (G) is computed by Ω∗ (G)G , the complex of
left-invariant differential forms.

Recall that the space of left-invariant vector fields is isomorphic to the Lie algebra Lie G. By the same reasoning, one shows that
Ω^i(G)^G ≅ Λ^i g^∗, where g = (Lie G)_C.

That is, the cohomology of a compact Lie group is computed using a complex of the form
0 → C → g^∗ → Λ²g^∗ → · · · → Λⁿg^∗ → 0
(this gives a way to see that the cohomology of a compact Lie group is finite dimensional).

Question 21.11. What is the differential d?

Before proving a description entirely in terms of the Lie algebra, we need another lemma from differ-
ential geometry.

Lemma 21.12 (Cartan's differentiation formula). Let ω ∈ Ω^m(M), and let v_0, . . . , v_m be vector fields on M. Then,
dω(v_0, . . . , v_m) = Σ_{i=0}^m (−1)^i L_{v_i}(ω(v_0, . . . , v̂_i, . . . , v_m)) + Σ_{0≤i<j≤m} (−1)^{i+j} ω([v_i, v_j], v_0, . . . , v̂_i, . . . , v̂_j, . . . , v_m).

Proof Sketch. (1) RHS(f v_0, v_1, . . . , v_m) = f · RHS(v_0, . . . , v_m), so the RHS is linear over functions (in each variable v_0, v_1, . . . , v_m).
(2) Now it's enough to check this when v_i = ∂/∂x_{k_i}. Say ω = f dx_{j_1} ∧ · · · ∧ dx_{j_m}. Then it's a "straightforward" calculation to verify the equality.

Corollary 21.13. If ω ∈ Ω^∗(G)^G is left-invariant and v_0, v_1, . . . , v_m are left-invariant vector fields, then
dω(v_0, . . . , v_m) = Σ_{0≤i<j≤m} (−1)^{i+j} ω([v_i, v_j], v_0, . . . , v̂_i, . . . , v̂_j, . . . , v_m). (21.1)

Proof. Each ω(v_0, . . . , v̂_i, . . . , v_m) is locally constant, so the L_{v_i} terms vanish.

Corollary 21.14. (21.1) defines the differential in the complex Ω^∗(G)^G computing the cohomology of a compact, connected Lie group.

Note that this complex,
0 → C → g^∗ → Λ²g^∗ → · · · → Λⁿg^∗ → 0,
makes sense for any Lie algebra g (now that we've defined the differential just in terms of the Lie bracket).

Definition 21.15. This complex is called the standard complex (or Chevalley–Eilenberg complex) of g, denoted CE^∗(g). Its cohomology is called the Lie algebra cohomology of g, and is denoted by H^∗(g).

This makes sense for any Lie algebra over any field. One has d2 = 0 because of the Jacobi identity.
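The standard complex is small enough to compute by machine. A sketch in numpy (the helper below is my own implementation of the differential (21.1), not code from the course); it recovers H^∗(SU(2)) = H^∗(S³) from g = sl2, and the torus case from an abelian g:

```python
import itertools
import numpy as np

def betti_numbers(n, bracket):
    """Betti numbers of CE*(g) for an n-dim Lie algebra g.
    bracket[(i, j)] (for i < j) is a dict {m: c} meaning [e_i, e_j] = sum_m c e_m."""
    bases = [list(itertools.combinations(range(n), k)) for k in range(n + 1)]

    def sort_sign(seq):   # sign of the permutation sorting seq (no repeated entries)
        return (-1) ** sum(x > y for x, y in itertools.combinations(seq, 2))

    def d(k):             # matrix of d : Λ^k g* → Λ^{k+1} g*, via formula (21.1)
        D = np.zeros((len(bases[k + 1]), len(bases[k])))
        for r, T in enumerate(bases[k + 1]):
            for a, b in itertools.combinations(range(k + 1), 2):
                rest = [T[x] for x in range(k + 1) if x != a and x != b]
                for m, c in bracket.get((T[a], T[b]), {}).items():
                    if m in rest:
                        continue              # repeated slot: the form vanishes
                    S = tuple(sorted([m] + rest))
                    D[r, bases[k].index(S)] += (-1) ** (a + b) * c * sort_sign([m] + rest)
        return D

    ranks = [0] + [int(np.linalg.matrix_rank(d(k))) for k in range(n)] + [0]
    return [len(bases[k]) - ranks[k + 1] - ranks[k] for k in range(n + 1)]

# sl2 with basis (e, h, f): [e, h] = -2e, [e, f] = h, [h, f] = -2f.
sl2 = {(0, 1): {0: -2}, (0, 2): {1: 1}, (1, 2): {2: -2}}
assert betti_numbers(3, sl2) == [1, 0, 0, 1]   # H*(S^3): classes in degrees 0 and 3
assert betti_numbers(2, {}) == [1, 2, 1]       # abelian: exterior algebra, H*((S^1)^2)
```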

Proposition 21.16. H^∗(G) ≅ H^∗(g) when G is compact connected.

Remark 21.17. There is an algebra structure on CE^∗(g) induced by ∧, which descends to H^∗(g), making it an associative graded commutative algebra. The isomorphism of the previous proposition is one of graded algebras.
Note we need G compact to compute its cohomology using its Lie algebra.
Example. Say g is abelian. Then d = 0 since all Lie brackets vanish. Thus H∗(g) = Λ∗g∗ is the exterior algebra of the dual of g.

Example. If G = (S¹)ⁿ, then g = Cⁿ and one has

H∗(G) = H∗(g) = Λ∗(ξ1, . . . , ξn) where deg ξi = 1.

In particular, H∗(S¹) = Λ(ξ) = ⟨1, ξ⟩ with 1 generating H⁰ and ξ generating H¹.


Non-example. If you replace the circle by its universal cover, you get R, and H∗(R) ≠ H∗(S¹) = H∗((Lie R)_C).

Corollary 21.18. Finite covers of compact Lie groups induce an isomorphism in H∗ (−; C).

This is not true with Z-coefficients.

Non-example. S¹ → S¹, z ↦ z², induces multiplication by 2 on H¹. This is an iso on H∗(−; C), but not on H∗(−; Z), and certainly not on H∗(−; F₂).

Non-example. SU(2) ≅ S³ is a double cover of SO(3) ≅ RP³. They have different integral cohomology.

21.1 Künneth formula


Say M, N are manifolds. Then (α, β) ↦ π_M^∗α ∧ π_N^∗β gives maps Ωⁱ(M) ⊗ Ωʲ(N) → Ω^{i+j}(M × N), and hence maps Hⁱ(M) ⊗ Hʲ(N) → H^{i+j}(M × N).

Theorem 21.19. These maps assemble into an isomorphism

H∗(M) ⊗ H∗(N) → H∗(M × N)

of graded algebras.

Remark 21.20. The tensor product above is a graded tensor product, so e.g. we have

(a ⊗ b)(a′ ⊗ b′) = (−1)^{deg(b) deg(a′)} (aa′ ⊗ bb′).

Remark 21.21. The map

⊕_{i+j=k} Ωⁱ(M) ⊗ Ωʲ(N) ↪ Ω^k(M × N)

is an injection, but is not an isomorphism in general. What is true is that the image is dense with respect to an appropriate topology. This makes proving Künneth a bit subtle.
However, for Lie groups, the Künneth formula comes for free:

Ω∗(G × K)^{G×K} = Ω∗(G)^G ⊗ Ω∗(K)^K.

21.2 Main Theorem


Theorem 21.22. If G is a connected compact Lie group, then the cohomology of G is

H∗(G) ≅ (Λ∗g∗)^g

as a graded algebra, where we're taking g-invariants under the adjoint action.

Proof. G has an action of G × G, so the cohomology of G is computed by

Ω∗(G)^{G×G} = (Λ∗g∗)^g.

Hence, we only need to show that d = 0 on this space. This is easy to see from invariance, e.g.

ω ([v0 , v1 ], v2 , . . . , vm ) + ω(v1 , [v0 , v2 ], . . . , vm ) + · · · + ω(v1 , v2 , . . . , [v0 , vm ]) = 0.

Similarly with vi replacing v0 above. Equation (21.1) tells us that the alternating sum of these (which
are all 0) is 2dω(v0 , v1 , . . . ), so d = 0. 
Example. Say ω ∈ Λ²g∗. Then,

dω(x, y, z) = ω([x, y], z) + ω([y, z], x) + ω([z, x], y).

If ω is ad-invariant, then

ω([x, y], z) + ω(y, [x, z]) = 0,   ω([y, x], z) + ω(x, [y, z]) = 0,   ω([z, x], y) + ω(x, [z, y]) = 0.

Adding the first and third and subtracting the second shows that

2ω([x, y], z) + 2ω([z, x], y) + 2ω([y, z], x) = 0.

This says 2dω(x, y, z) = 0.

To understand this answer a bit better, first note

dim H∗(G) = Σᵢ dim Hⁱ(G) = dim (Λ∗g∗)^g.

Use the Weyl character formula. We have g = h ⊕ ⊕_{α∈R} gα, so Λ∗g∗ = Λ∗h∗ ⊗ ⊗_{α∈R} Λ∗g∗α. Hence, letting r = rank(g),

ch(Λ∗g∗)(t) = 2^r ∏_{α∈R} (1 + α(t))

(each Λ∗g∗_{−α} = C ⊕ g∗_{−α} contributes a factor 1 + e^α), where t ∈ T ⊂ G. Thus,

dim (Λ∗g∗)^g = (ch Λ∗g∗, ch C)
             = (1/#W) ∫_T 2^r ∏_{α∈R}(1 + α(t)) ∏_{α∈R}(1 − α(t)) dt
             = (2^r/#W) ∫_T ∏_{α∈R}(1 − α(t²)) dt
             = (2^r/#W) ∫_T w(t²) dt,

where w(t) := ∏_{α∈R}(1 − α(t)) is the Weyl integration density.
Since the squaring map t ↦ t² on T preserves Haar measure, we can change variables to see that this is equal to

(2^r/#W) ∫_T w(t) dt = 2^r, since (1/#W) ∫_T w(t) dt = (ch C, ch C) = 1.

Why did we get a power of 2? This is related to the fact that the cohomology of a Lie group is a graded Hopf algebra. Let m : G × G → G be the multiplication map. This induces a coproduct

∆ : H∗(G) → H∗(G × G) ≅ H∗(G) ⊗ H∗(G).

This is coassociative in the sense that (∆ ⊗ id) ◦ ∆ = (id ⊗ ∆) ◦ ∆ and is an algebra homomorphism. This makes H∗(G) a graded bialgebra.
Exercise. Deduce from this that H∗(G) is a free (graded-commutative) algebra. Hence, all generators are odd.47

Corollary 21.23.

H∗(G) = (Λ∗g∗)^g = Λ(ξ1, . . . , ξk) where deg ξi = 2mi + 1.

Thus, dim H∗(G) = 2^k.

Comparing with dim H∗(G) = 2^r computed above, we get k = r:

Corollary 21.24.

H∗(G) = Λ(ξ1, . . . , ξr) and deg ξi = 2mi + 1,

where m1 ≤ m2 ≤ · · · ≤ mr are integers.

We will discuss what these numbers are next time. They turn out to be the exponents of G (See
section 9.1).

22 Lecture 22 (5/11)
Last time we discussed the (complex) cohomology of Lie groups. In the end, we saw that the cohomology
of a compact Lie group is a free graded algebra with generators in odd degrees, computed as the invariants
of the exterior algebra on the dual of the Lie algebra.

Recall 22.1. For G a compact Lie group of rank r,


H∗(G) = Λ(ξ1, . . . , ξr) and deg ξi = 2mi + 1,

where m1 ≤ m2 ≤ · · · ≤ mr are integers.

What do we know about these numbers m1 , . . . , mr ?


• We know r + 2 Σᵢ mᵢ = Σ_{i=1}^r (2mᵢ + 1) = dim g, so Σᵢ mᵢ = (dim g − r)/2 = #R⁺. (Margin question: why do these degrees add to dim g?)

Exercise. (Λ³g∗)^g = C, spanned by the triple product (x, y, z) ↦ ([x, y], z) (a linear functional on g⊗³).

From this it follows that m1 = 1.

47 An even generator would give nontrivial cohomology in arbitrarily high degree.

Example (g simple of rank 2). We get m2 = 2 for A2 , m2 = 3 for B2 = C2 , m2 = 5 for G2 , etc. This is
because m2 = #R+ − m1 = #R+ − 1 for these cases.

In fact, we have the following general theorem, not to be proven here

Theorem 22.2. The numbers mi are the exponents of g defined in Section 9.1. In other words, the
degrees 2mi + 1 of generators of the cohomology ring are the dimensions of simple modules occurring in
the decomposition of g over its principal sl2 -subalgebra.

Definition 22.3. For a space X, its Poincaré series (sometimes polynomial) is

P_X(z) = Σ_{n≥0} (dim_C Hⁿ(X; C)) zⁿ.

Remark 22.4. The Poincaré polynomial P(z) of (Λ∗g∗)^g is given by the formula

P(z) = ((1 + z)^r/#W) ∫_T ∏_{α∈R} (1 + zα(t))(1 − α(t)) dt.

Hence, the above theorem is equivalent to the statement that this integral equals ∏ᵢ (1 + z^{2mᵢ+1}).
We will prove this for the case of type A.
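For G = SU(2) (r = 1, #W = 2, roots ±α with α(t) = e^{2iθ} on the maximal torus) the integral in Remark 22.4 can be checked numerically against the predicted answer 1 + z³. The sketch below is my own check, approximating the torus integral by an average over an equispaced grid.

```python
import numpy as np

# P(z) = ((1+z)^r / #W) ∫_T ∏_{α∈R} (1 + zα(t))(1 − α(t)) dt for SU(2):
# r = 1, #W = 2, R = {α, −α}, α(t) = e^{2iθ}.
theta = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)
u = np.exp(2j * theta)                 # values of α(t) on the grid

def P(z):
    integrand = (1 + z*u) * (1 + z/u) * (1 - u) * (1 - 1/u)
    return ((1 + z) / 2) * integrand.mean().real

for z in [0.0, 0.3, 1.0, 2.5]:
    assert abs(P(z) - (1 + z**3)) < 1e-8   # exponent m_1 = 1, generator degree 2m_1 + 1 = 3
```

The grid average is exact here since the integrand is a trigonometric polynomial of low degree.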

Corollary 22.5. For g = sln , we have mi = i. Equivalently, the same is true for g = gln if we add
m0 = 0.

How do we prove this (w/o using the theorem)?

Proof. Let g = gl_n, V = Cⁿ. We need to compute the Poincaré polynomial of (Λ∗(V ⊗ V∗))^g. To this end, we employ skew Howe duality.

Exercise (skew Howe duality). Show that Λ∗(V ⊗ V∗) = ⊕_λ S^λV ⊗ S^{λᵗ}V∗, where λᵗ is the conjugate partition to λ (i.e. transpose the Young diagram).


Remark 22.6. Taking the exterior power is like taking the tensor power and then taking anti-invariants of the symmetric group (homomorphisms from the sign representation, I think).
t
We need to take ad-invariants of S V ⊗ S λ V ∗ . These invariants will only exist if λ = λt (need
L λ

the irreps to be the same). Thus,


X
P (z) = z |λ|
λ=λt

with sum taken over λ with ≤ n parts. There are exactly 2n such symmetric partitions λ; they consist
of a sequence of hooks (k, 1k−1 ) with decreasing values of k. The degree of such a hook is 2k − 1, and so
we see that
P (z) = (1 + z)(1 + z 3 ) . . . (1 + z 2n−1 ).
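The hook decomposition can be checked by brute force. In this sketch (my own code, enumerating partitions in the n × n box directly and testing self-conjugacy), the generating polynomial of self-conjugate partitions matches ∏_{k=1}^n (1 + z^{2k−1}).

```python
def self_conjugate_poly(n):
    """Coefficients of sum over self-conjugate λ with ≤ n parts of z^{|λ|}.
    (λ = λ^t forces λ to fit in the n × n box.)"""
    coeffs = [0] * (n*n + 1)
    def rec(prev, row, lam):
        if row == n:
            parts = [l for l in lam if l > 0]
            width = parts[0] if parts else 0
            conj = [sum(1 for l in parts if l > i) for i in range(width)]
            if parts == conj:
                coeffs[sum(parts)] += 1
            return
        for part in range(prev, -1, -1):
            rec(part, row + 1, lam + [part])
    rec(n, 0, [])
    return coeffs

def hook_poly(n):
    """Coefficients of (1 + z)(1 + z^3) ... (1 + z^{2n-1})."""
    coeffs = [1]
    for k in range(1, n + 1):
        new = coeffs + [0] * (2*k - 1)
        for i, c in enumerate(coeffs):
            new[i + 2*k - 1] += c
        coeffs = new
    return coeffs

for n in range(1, 5):
    assert self_conjugate_poly(n) == hook_poly(n)
```

E.g. for n = 2 both sides are 1 + z + z³ + z⁴, coming from ∅, (1), (2,1), (2,2).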

Corollary 22.7.

H∗(U(n)) = Λ(ξ1, ξ3, . . . , ξ_{2n−1})

with subscripts denoting the degrees, and

H∗(SU(n)) = Λ(ξ3, . . . , ξ_{2n−1}).

Remark 22.8. For U(n), one gets the same cohomology even integrally. This is not true for other Lie groups.

22.1 Cohomology of homogeneous spaces


Let G be a compact connected Lie group with complex Lie algebra g = Lie(G)C , and let K ⊂ G be a
closed subgroup with k = Lie(K)C . Consider the homogeneous space G/K.

Question 22.9. How can we compute the cohomology H∗ (G/K)?

(recall we implicitly use C-coefficients)


Since the group G acts on G/K, this cohomology is computed by the complex Ω∗(G/K)^G = (Λ∗(g/k)∗)^K (for the equality, use translation by G to see that an invariant differential form is determined by its value at the base point). We denote this complex by CE∗(g, K) and call it the relative Chevalley-Eilenberg complex.
For example, if K = Γ is finite, this is just the Γ-invariant part of the Chevalley-Eilenberg complex. Now Γ acts trivially on the cohomology (since G is connected), so we get H∗(G/Γ) = H∗(G) (as noted before).
What happens if dim K > 0? Can we reduce to a purely algebraic problem as we did for K = 1?

Notation 22.10. For k ⊂ g a pair of Lie algebras (over any field, of any dimension), let
CEⁱ(g, k) := (Λⁱ(g/k)∗)^k.

Exercise. CE• (g, k) is a subcomplex of CE• (g).

Definition 22.11. The complex CE• (g, k) is called the relative Chevalley-Eilenberg complex, and
its cohomology is called the relative Lie algebra cohomology, denoted H• (g, k).

Going back to compact Lie groups, we have CE•(g, K) = CE•(g, k)^{K/K°} (invariants under the component group of K), so

Corollary 22.12.

H∗(G/K) ≅ H∗(g, k)^{K/K°}

as algebras.

Thus computation of the cohomology of G/K reduces to the computation of relative Lie algebra cohomology, which is again purely algebraic.

Warning 22.13. The differentials won’t always be trivial in this case.

Corollary 22.14. Suppose z ∈ K is an element acting by −1 on g/k. Then,

(Λⁱ(g/k)∗)^K = 0 for odd i.

Hence, the differential on CE•(g, K) is 0 and thus

H∗(G/K) ≅ (Λ•(g/k)∗)^K,

with cohomology present only in even degrees.

Example (Grassmannians). Let G = U(n + m) and K = U(n) × U(m), so that G/K is the Grassmannian G_{n+m,n}(C) ≅ G_{n+m,m}(C) (the manifold of m- (or n-)dimensional subspaces of C^{m+n}). The element z = I_n ⊕ (−I_m) acts by −1 on g/k = V ⊗ W∗ ⊕ W ⊗ V∗, where V, W are the tautological representations of U(n) and U(m). So we get that the Grassmannian has cohomology only in even degrees, and that

H^{2i}(G_{m+n,m}(C)) = (Λ^{2i}(V ⊗ W∗ ⊕ W ⊗ V∗))^{U(n)×U(m)}.

We can therefore use skew Howe duality (we have an exterior power of a tensor product of dual spaces) to see that dim H^{2i}(G_{m+n,m}(C)) = N_i(n, m), where N_i(n, m) is the number of partitions λ whose Young diagram has i boxes and fits into the m × n rectangle (at most n parts, with transpose having at most m parts).
To compute N_i(n, m), consider the generating function f_{n,m}(q) = Σᵢ N_i(n, m)qⁱ. Denote by p_i the jumps of our partition, so

Σ_{n≥0} f_{n,m}(q) zⁿ = Σ_{p_0,p_1,...,p_m ≥ 0} z^{p_0+···+p_m} q^{p_1+2p_2+···+mp_m} = ∏_{j=0}^m 1/(1 − qʲz).

Hence the Betti numbers of Grassmannians are the coefficients of this series, e.g. if m = 1, we see that

Σ_{n≥0} f_{n,1}(q) zⁿ = 1/((1 − z)(1 − qz)) = Σₙ (1 + q + · · · + qⁿ) zⁿ,

which recovers the Poincaré polynomial 1 + q + · · · + qⁿ of the complex projective space CPⁿ = G_{n+1,1}.
The polynomials f_{n,m}(q) are called the Gaussian binomial coefficients, and they can be computed explicitly:

f_{n,m}(q) = (n+m over n)_q = [m + n]_q!/([m]_q![n]_q!), where [m]_q := (q^m − 1)/(q − 1) and [m]_q! := [1]_q[2]_q · · · [m]_q.

In other words, we have the q-binomial theorem

Σ_{n≥0} (n+m over n)_q zⁿ = ∏_{j=0}^m 1/(1 − qʲz).

Note that setting q = 1 recovers the familiar identity

Σ_{n≥0} (n+m over m) zⁿ = 1/(1 − z)^{m+1}.

Exercise. Compute the Betti numbers of G_{N,2}(C).


Exercise. Prove the q-binomial theorem.
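For small cases the Gaussian-binomial formula can be verified directly: count partitions in the m × n box by size and compare with [m+n]_q!/([m]_q![n]_q!). The helper functions below are my own, using naive integer-polynomial arithmetic (coefficient lists).

```python
def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def q_factorial(k):                     # [k]_q! = [1]_q [2]_q ... [k]_q
    p = [1]
    for i in range(1, k + 1):
        p = polymul(p, [1] * i)         # [i]_q = 1 + q + ... + q^{i-1}
    return p

def polydiv_exact(a, b):                # exact division (b is monic here)
    a, q = a[:], [0] * (len(a) - len(b) + 1)
    for i in range(len(q) - 1, -1, -1):
        c = q[i] = a[i + len(b) - 1]
        for j, y in enumerate(b):
            a[i + j] -= c * y
    assert all(x == 0 for x in a)
    return q

def gauss_binom(n, m):                  # [n+m]_q! / ([n]_q! [m]_q!)
    return polydiv_exact(q_factorial(n + m),
                         polymul(q_factorial(n), q_factorial(m)))

def box_count(n, m):                    # partitions with ≤ n parts, parts ≤ m, by size
    coeffs = [0] * (n * m + 1)
    def rec(prev, row, tot):
        if row == n:
            coeffs[tot] += 1
            return
        for part in range(prev, -1, -1):
            rec(part, row + 1, tot + part)
    rec(m, 0, 0)
    return coeffs

for n in range(1, 5):
    for m in range(1, 5):
        assert box_count(n, m) == gauss_binom(n, m)
```

E.g. gauss_binom(2, 2) gives the coefficients of 1 + q + 2q² + q³ + q⁴, the Poincaré polynomial (in q = z²) of G_{4,2}(C).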
There is a more geometric way to compute the Betti numbers of Grassmannians by working with
Schubert cells. Let Fi ⊂ Cn+m be spanned by the last i basis vectors em+n−i+1 , . . . , em+n . Thus, we

have a complete flag
0 = F0 ⊂ F1 ⊂ · · · ⊂ Fn+m = Cm+n .

Given an m-dimensional subspace V ⊂ Cm+n , let `j be the smallest integer for which dim(F`j ∩ V ) = j.
Then,
1 ≤ `1 < `2 < · · · < `m ≤ m + n,

which defines a partition with parts λ1 = `m − m, λ2 = `m−1 − (m − 1), . . . , λm = `1 − 1 fitting in the


m × n box. We let Sλ ⊂ Gn+m,m (C) be the set of V giving such numbers λi .
Exercise. Show that S_λ is an embedded (non-closed) complex submanifold of the Grassmannian isomorphic to the affine space C^r of dimension r := |λ| = Σᵢ λᵢ.

Definition 22.15. The subset Sλ of the Grassmannian is called the Schubert cell corresponding to λ.

We see that Gm+n,m (C) has a cell decomposition into a disjoint union of Schubert cells. This allows
one to rederive the same formula for the Poincaré polynomial of the Grassmannian from the following
fact from algebraic topology:

Proposition 22.16. If X is a connected cell complex which only has even-dimensional cells, then the cohomology of X vanishes in odd degrees, and the groups H^{2i}(X; Z) are free abelian groups of ranks b_{2i}(X), where the Betti number b_{2i}(X) is just the number of cells in X of real dimension 2i. Moreover, X is simply connected.

(use cellular chain complex)


Corollary 22.17. H^{2i}(G_{n+m,n}(C); Z) is a free abelian group of rank equal to the coefficient of qⁱ in (n+m over m)_q, and the odd cohomology groups are zero. Moreover, Grassmannians are simply connected.

In particular, this gives the Betti numbers over any field.

22.1.1 Flag manifolds

Definition 22.18. The flag manifold Fn (C) is the space of all flags

0 = V0 ⊂ V1 ⊂ · · · ⊂ Vn = Cn with dim Vi = i.

It is a homogeneous space since Fn = G/T , where G = U (n) and T = U (1)n is a maximal torus in G.

We have fibrations π : F_n(C) → CP^{n−1} sending (V_1, . . . , V_{n−1}) to V_{n−1}, whose fiber is the space of flags in V_{n−1}, i.e. is F_{n−1}(C). By induction48, one argues that the flag manifolds can be decomposed into even-dimensional cells isomorphic to C^r (also called Schubert cells). Thus, the Betti numbers of F_n vanish in odd degrees, and in even degrees they are given by the generating function

Σᵢ b_{2i}(F_n) qⁱ = [n]_q! = (1 + q)(1 + q + q²) · · · (1 + q + · · · + q^{n−1}).

(As a vector space, the cohomology of F_n(C) will be the tensor product of the cohomology of CP^k for k = 1, . . . , n − 1.)

Remark 22.19. There is also a map π_m : F_{m+n}(C) → G_{m+n,m}(C) sending (V_1, . . . , V_{n+m−1}) ↦ V_m. This is a fibration with fiber F_m(C) × F_n(C). From this one gets another proof of the formula for the Betti numbers of the Grassmannian.

48 The fiber bundle will become trivial over the cells?
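A standard refinement behind this (used implicitly, though not proven in the notes): the Schubert cells of F_n(C) are indexed by permutations w ∈ S_n, the cell for w having complex dimension the number of inversions of w, so [n]_q! is the inversion generating function of S_n. A brute-force check of that combinatorial identity:

```python
from itertools import permutations

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def q_factorial(n):                      # [n]_q! = (1)(1+q)...(1+q+...+q^{n-1})
    p = [1]
    for k in range(1, n + 1):
        p = polymul(p, [1] * k)
    return p

def inversion_poly(n):                   # sum over w in S_n of q^{inv(w)}
    out = [0] * (n * (n - 1) // 2 + 1)
    for w in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])
        out[inv] += 1
    return out

for n in range(1, 6):
    assert inversion_poly(n) == q_factorial(n)
```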
We can also define the partial flag manifold FS (C) for S ⊂ [1, n − 1], i.e. it is the space of partial
flags (Vs : s ∈ S) with Vs ⊂ Cn , dim Vs = s, and Vs ⊂ Vt if s < t. These include both (complete) flag
manifolds and Grassmannians.
Exercise. Let S = {n_1, n_1 + n_2, . . . , n_1 + · · · + n_{k−1}} and let n_k = n − n_1 − · · · − n_{k−1}. Show that the even Betti numbers of the partial flag manifold are the coefficients of the polynomial

P_S(q) = [n]_q!/([n_1]_q! · · · [n_k]_q!),

while the odd Betti numbers vanish. Also, show the partial flag manifold is simply connected.

23 Lecture 23 (5/13): Lie algebra cohomology


We’ve talked recently about cohomology of Lie algebras. We can generalize our definitions to talk about
cohomology with non-trivial coefficients, i.e. with coefficients in a representation of the Lie algebra.
Let g be a Lie algebra (over any field, of any dimension), and let V be a g-module. We can define the Chevalley-Eilenberg complex

CE•(g, V) := Hom_k(Λ•g, V),

so it looks like

0 → V → Hom(g, V) → Hom(Λ²g, V) → · · · .

The differential is given by the Cartan differentiation formula

dω(a_0, . . . , a_m) = Σ_{i=0}^m (−1)ⁱ a_i ω(a_0, . . . , â_i, . . . , a_m) + Σ_{0≤i<j≤m} (−1)^{i+j} ω([a_i, a_j], a_0, . . . , â_i, . . . , â_j, . . . , a_m)

for ω ∈ CE^m, where a_i ω(· · ·) denotes the action of a_i ∈ g on V.

Example. If G is a Lie group, g = Lie G, and V is f.dim, then

CE∗(g, V) = (Ω∗(G) ⊗ V)^g

with g acting diagonally, and the differential is the de Rham differential.

The cohomology of this complex CE• (g, V ) is called the cohomology of g with coefficients in V
and is denoted H∗ (g, V ). The cohomology we studied before is simply Hi (g) = Hi (g, C).

Proposition 23.1. If G is compact and V is a f.dim nontrivial irrep, then Hi (g, V ) = 0 for all i > 0.

(I missed the explanation, but this follows from what we did before. Something about cohomology
being computed using invariant forms so all nontrivial irreps drop out or something, who knows).
Remark 23.2. In general, H0 (g, V ) = V g is g-invariants.

Proposition 23.3 (Whitehead’s lemma). If g is semisimple, then H1 (g, V ) = H2 (g, V ) = 0 for any
f.dim V .

Proof. We can assume V is irreducible. If V ≠ C, this follows from the previous proposition, so say V = C. The standard complex starts

0 → C → g∗ →d Λ²g∗ → · · · ,

where the first map is 0. Above, (df)(x, y) = −f([x, y]) by (21.1). Hence, H¹(g, C) = {f ∈ g∗ : f|_{[g,g]} = 0} = Hom_Lie(g, C) = 0, since g = [g, g] for g semisimple. Similarly, H²(g, C) = (Λ²g∗)^g. Why is this 0? We can assume g simple. This is the space of g-invariant skew-symmetric homomorphisms A : g → g∗ (A∗ = −A). Note that Hom_g(g, g∗) = CK is 1-dimensional, spanned by the Killing form. The Killing form is symmetric, not skew-symmetric, so there are no skew-symmetric invariant forms. 
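These vanishing statements are easy to confirm by linear algebra for g = sl₂. The sketch below (my own setup; the sign convention matches (21.1)) builds the matrices of the first two differentials of the standard complex and computes their ranks.

```python
import numpy as np
from itertools import combinations

# sl2, basis (e, h, f): structure constants [x_i, x_j] = sum_k C[i,j,k] x_k
C = np.zeros((3, 3, 3))
E, H, F = 0, 1, 2
C[E, F, H], C[F, E, H] = 1, -1      # [e, f] = h
C[H, E, E], C[E, H, E] = 2, -2      # [h, e] = 2e
C[H, F, F], C[F, H, F] = -2, 2      # [h, f] = -2f

pairs = list(combinations(range(3), 2))          # basis of Λ^2 g*

# d1 : g* -> Λ^2 g*, (df)(x, y) = -f([x, y])
d1 = np.array([[-C[i, j, k] for k in range(3)] for (i, j) in pairs])

# d2 : Λ^2 g* -> Λ^3 g*, (dω)(x,y,z) = -ω([x,y],z) + ω([x,z],y) - ω([y,z],x)
def w(p, i, j):                                  # p-th basis 2-form on (x_i, x_j)
    a, b = pairs[p]
    return float((i, j) == (a, b)) - float((j, i) == (a, b))

d2 = np.array([[sum(-C[0, 1, k]*w(p, k, 2) + C[0, 2, k]*w(p, k, 1)
                    - C[1, 2, k]*w(p, k, 0) for k in range(3))
                for p in range(3)]])

# With trivial coefficients d0 = 0, so H^1 = ker d1 and H^2 = ker d2 / im d1.
assert np.linalg.matrix_rank(d1) == 3            # H^1(sl2, C) = 0
dim_H2 = (3 - np.linalg.matrix_rank(d2)) - np.linalg.matrix_rank(d1)
assert dim_H2 == 0                               # H^2(sl2, C) = 0
```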

Remark 23.4. If you look at cohomology of non-semisimple Lie algebras or with coefficients in an infinite-
dimensional rep, then things are more complicated.

23.1 Interpretations of Hi (g, V ) for small i


23.1.1 i=0

We start with H0 (g, V ) = V g .

23.1.2 i=1

Onto H¹(g, V) = Z¹(g, V)/B¹(g, V). Here, we have 1-cocycles

Z¹(g, V) = {ω : g → V | ω([x, y]) = xω(y) − yω(x)}

and 1-coboundaries B¹(g, V) = {ω = dv : v ∈ V} (i.e. ω(x) = xv, which is a cocycle since [x, y]v = x(yv) − y(xv)).

Proposition 23.5. Say V, W are representations of g. Then,

H¹(g, Hom_k(V, W)) = Ext¹(V, W)

classifies extensions 0 → W → Y → V → 0 of V by W.

Proof. Given an extension 0 → W → Y → V → 0, we choose a complement of W in Y to write Y = W ⊕ V as vector spaces. Under this decomposition, g acts by block upper-triangular matrices

ρ_Y(x) = ( ρ_W(x)  ω(x) ; 0  ρ_V(x) ).

For this to be a representation, we need ρ_Y([x, y]) = [ρ_Y(x), ρ_Y(y)]. Note that

ρ_Y(x)ρ_Y(y) = ( ρ_W(x)ρ_W(y)  ρ_W(x)ω(y) + ω(x)ρ_V(y) ; 0  ρ_V(x)ρ_V(y) ).

The condition we get is that (ρ’s omitted for brevity)

ω([x, y]) = xω(y) − yω(x).

Hence, ρY is a representation ⇐⇒ ω ∈ Z 1 (g, Homk (V, W )).
Exercise. If Y1, Y2 are two such representations, then Y1 ≅ Y2 as extensions iff ω1 − ω2 ∈ B¹(g, Hom_k(V, W)).


Note 9. I have been pretty distracted most of this lecture, so I keep missing small things.
We’re talking about semidirect products now.

Definition 23.6. g ⋉ V is V ⊕ g with Lie bracket

[(v1, x1), (v2, x2)] := (x1v2 − x2v1, [x1, x2]).

This comes with a natural surjection g ⋉ V ↠ g. What are the splittings x ↦ (ω(x), x) of this map? The condition on ω is precisely the 1-cocycle condition: ω([x, y]) = xω(y) − yω(x), so we need ω ∈ Z¹(g, V). Note that V acts on g ⋉ V by automorphisms: w · (v, x) = (v + xw, x). We call this 'conjugation by w.'
Exercise. Sections s1, s2 are conjugate ⟺ ω1 − ω2 ∈ B¹(g, V), i.e. they differ by a coboundary.

Corollary 23.7. Splittings of g ⋉ V ↠ g, up to conjugation, are classified by H¹(g, V).

Remark 23.8. By previous interpretation, we also know

H1 (g, V ) = Ext1 (C, V ).

Let's see yet another interpretation. Consider V = g, the adjoint representation, and ω : g → g with ω ∈ Z¹(g, g). Then,

ω([x, y]) = [x, ω(y)] − [y, ω(x)] = [x, ω(y)] + [ω(x), y],

so ω ∈ Der(g), i.e. ω is a cocycle iff it is a derivation. The coboundaries ω ∈ B¹(g, g) are the inner derivations, ω(x) = [d, x] for some d ∈ g. Thus,

H¹(g, g) = Der(g)/Inn(g) = Out(g)

is the space of outer derivations.
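For a semisimple example this can be verified directly: every derivation of sl₂ is inner, so Out(sl₂) = H¹(sl₂, sl₂) = 0. The sketch below (my own encoding of the derivation condition as a linear system in the 9 entries of D) checks that the solution space is 3-dimensional, matching Inn(sl₂) ≅ sl₂.

```python
import numpy as np

# structure constants of sl2 in basis (e, h, f): [x_i, x_j] = sum_k C[i,j,k] x_k
C = np.zeros((3, 3, 3))
C[0, 2, 1], C[2, 0, 1] = 1, -1      # [e, f] = h
C[1, 0, 0], C[0, 1, 0] = 2, -2      # [h, e] = 2e
C[1, 2, 2], C[2, 1, 2] = -2, 2      # [h, f] = -2f

# A derivation D must satisfy D[x_i, x_j] = [D x_i, x_j] + [x_i, D x_j];
# writing D x_i = sum_m D[m,i] x_m, this is linear in the entries of D.
rows = []
for i in range(3):
    for j in range(3):
        for k in range(3):          # coefficient of x_k in the constraint for (i, j)
            row = np.zeros((3, 3))
            for m in range(3):
                row[k, m] += C[i, j, m]     # from D[x_i, x_j]
                row[m, i] -= C[m, j, k]     # from [D x_i, x_j]
                row[m, j] -= C[i, m, k]     # from [x_i, D x_j]
            rows.append(row.ravel())
A = np.array(rows)

dim_der = 9 - np.linalg.matrix_rank(A)
assert dim_der == 3   # Der(sl2) = Inn(sl2), so Out(sl2) = H^1(sl2, sl2) = 0
```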

23.1.3 i=2

For H2 , we’ll need to talk about abelian extensions.

Definition 23.9. An abelian extension is a Lie algebra g̃ sitting in a short exact sequence

0 → V → g̃ → g → 0

with V an abelian ideal.

Example. g̃ = g ⋉ V is a split abelian extension (of g by V).

Example. The Heisenberg algebra H has basis x, y, c with c central ([x, c] = [y, c] = 0) and [x, y] = c. Let V = ⟨c⟩, a 1-dimensional abelian ideal. We have H/V = k² = ⟨x, y⟩ (an abelian quotient). This gives an exact sequence

0 → k → H → k² → 0

which is not split.

How can we classify abelian extensions of g by V?
Note that exact sequences of Lie algebras always split as vector spaces, so we can write g̃ = V ⊕ g as vector spaces. We then get a commutator

[(v, x), (w, y)] = (xw − yv + ω(x, y), [x, y]) with ω : Λ²g → V.

What is the condition on ω for this to be a Lie algebra structure? The condition is given by the Jacobi identity (this bracket is already skew-symmetric by definition). One checks that it satisfies Jacobi ⟺ ω ∈ Z²(g, V) (exercise). Furthermore, g̃1 ≅ g̃2 (as extensions) iff ω1 − ω2 = dη ∈ B²(g, V).

Proposition 23.10. Up to equivalence, abelian extensions of g by V are classified by H2 (g, V ).

Example. Say g = k² = ⟨x, y⟩ and let V = k be the trivial rep. Then,

H∗(g, k) = H∗(g) = H∗(S¹ × S¹) = k if ∗ = 0, 2;  k² if ∗ = 1;  0 if ∗ ≥ 3.

In particular, H²(g) = k, with a nonzero element corresponding to the Heisenberg algebra (up to scaling).

Corollary 23.11. If g is semisimple and g̃ is an abelian extension of g by a f.dim rep V, then g̃ = g ⋉ V.

Proof. Whitehead says H²(g, V) = 0. 


There's another interpretation in terms of deformations of the Lie algebra. Say g is a Lie algebra over k with Lie bracket [ , ] : Λ²g → g. Can we deform it with parameter t? Something like

[x, y]_t = [x, y] + tc1(x, y) + t²c2(x, y) + . . . ,

a formal power series. These coefficients will be maps c_i : Λ²g → g. We want the above to be a Lie bracket (i.e. satisfy Jacobi) for all t; that is, it should give a Lie algebra structure on g⟦t⟧ so that, mod t, you recover the original one.
We'd like to understand/analyze things term-by-term. We start with first-order analysis, working mod t². That is, we work with the ring g[t]/t²g[t] = g ⊕ tg. Note we have an exact sequence

0 → tg → g ⊕ tg → g → 0

with tg abelian, so we have an abelian extension of g by itself with zero commutator. Hence, the condition
on c1 is that it should be a 2-cocycle: c1 ∈ Z²(g, g). Isomorphisms are given by a = 1 + ta1 + t²a2 + . . . with ai ∈ End_k g. Possible first-order deformations c1 are classified up to isomorphism by H²(g, g). In particular, if H²(g, g) = 0, then all deformations of g are in fact trivial (isomorphic to c1 = c2 = c3 = · · · = 0). (Margin question: what is a?)

(1) Kill c1 by applying a⁽¹⁾ = 1 + ta1 + . . . . This gives

[x, y]_t = [x, y] + t²c2(x, y) + . . . .

(2) Now one discovers that c2 ∈ Z²(g, g) = B²(g, g), so we can kill it as well by a⁽²⁾ = 1 + t²a2 + . . . . This gives [x, y]_t = [x, y] + t³c3(x, y) + . . . .

(3) Now one continues. Use the composition · · · ◦ a⁽³⁾ ◦ a⁽²⁾ ◦ a⁽¹⁾ =: a (this makes sense since only finitely many steps affect each degree). This transforms the original deformation to the trivial one with [x, y]_t = [x, y].

Corollary 23.12. If g is a semisimple Lie algebra over R or C, then it is deformationally rigid in


the sense that all its deformations are trivial.

Example. Say g = k² = ⟨x, y⟩ ([x, y] = 0). Then H²(g, g) = k², so we have a 2-parameter deformation. We can take [x, y] = tx + sy with t, s ∈ C. If (t, s) ≠ 0, all are isomorphic as Lie algebras (though not as deformations) by the action of GL₂(C). We can always bring it to the form [x, y] = y, i.e. to the Lie algebra

aff₁ := { ( a b ; 0 0 ) : a, b ∈ C } ⊂ gl₂ = Lie Aff(1).

Suppose g is a Lie algebra with c1 ∈ Z²(g, g) and [c1] ≠ 0 ∈ H²(g, g). Can we lift our deformation mod t³? Can we find c2 so that

[x, y]_t = [x, y] + tc1(x, y) + t²c2(x, y)

satisfies Jacobi mod t³? If c1 = 0, the condition would be dc2 = 0 (that it is a cocycle). In general, the condition is

dc2 = ½[c1, c1] ∈ Hom_k(Λ³g, g),

where [−, −] is the Schouten bracket (some explicit quadratic expression we won't write down).
Exercise. [c1, c1] is a cocycle.
Hence we get a cohomology class [[c1, c1]] ∈ H³(g, g). To get a lifting (i.e. to solve dc2 = ½[c1, c1]), this needs to be a coboundary, i.e. the obstruction class [[c1, c1]] needs to vanish.
Remark 23.13. Solving these extension problems depends on the choices you make along the way (i.e.
whether or not you can find c2 depends on what you choose for c1 ), so things can get hairy fast.
One can also consider deformations of modules. So you have g and a module V , and you want to
deform to a module V JtK. Say ρ = ρV : g ! End V . We now want

ρt = ρ + tρ1 + t2 ρ2 + . . . with ρi : g ! End V.

We start again with first-order analysis (i.e. working mod t²): ρ_t = ρ + tρ1 + O(t²). Note V[t]/t²V[t] = V ⊕ tV, so we get an extension

0 → tV → V ⊕ tV → V → 0

of modules. We see that first-order deformations of V, up to isomorphism, are classified by H¹(g, End_k V) = Ext¹(V, V). Deformations are a non-linear problem, so we are not done yet.
Can we lift ρ_t = ρ + tρ1 + O(t²) modulo t³? Again, one gets the condition dρ2 = [ρ1, ρ1]. Thus, you have an obstruction class [[ρ1, ρ1]] ∈ H²(g, End_k V), and you can lift iff it vanishes.

23.2 Levi decomposition theorem


Recall 23.14. The radical of g is its maximal solvable ideal, denoted rad(g). The quotient gss :=
g/ rad(g) is the largest semisimple quotient of g.

Theorem 23.15 (Levi decomposition). Let g be a f.dim Lie algebra over R or C. The exact sequence

0 ! rad(g) ! g ! gss ! 0

splits. In particular, gss acts on the radical rad(g).

Warning 23.16. rad(g) is not abelian in general.

Once we establish this, we’ll use it to prove the 3rd fundamental theorem (that every f.dim Lie algebra
is the Lie algebra of some simply connected Lie group).
Tuesday’s lecture will be prerecorded and posted online at the usual time. No zoom meeting/real-time
class meeting on Tuesday.

24 Lecture 24 (5/18)
Last time we introduced the Levi decomposition theorem.

Recall 24.1 (Levi decomposition). Let g be a f.dim Lie algebra over R or C. The exact sequence

0 ! rad(g) ! g ! gss ! 0

splits. In particular, g ≅ gss ⋉ rad(g), and gss acts on the radical rad(g).

Above, recall rad(g) is the sum of all solvable ideals in g.


We stated this last semester and said the proof will be given later. It’s later.

Proof of Levi decomposition. Choose a splitting of vector spaces g ≅ rad(g) ⊕ gss. Its commutator will be of the form

[(a, x), (b, y)] = ([x, b] − [y, a] + [a, b] + ω(x, y), [x, y]) for some ω : Λ²gss → rad(g),

where a, b ∈ rad(g) and x, y ∈ gss. Since rad(g) is solvable, it has the filtration

rad(g) = D0 ⊃ D1 ⊃ · · · ⊃ Dn ⊃ Dn+1 = 0 with Di+1 = [Di, Di]

(we suppose Dn ≠ 0). We can replace g by g/Dn and then use induction on dim g to assume that ω = 0 mod Dn, i.e. ω : Λ²gss → Dn. Now, Dn is an abelian ideal in g. Hence, since Dn is abelian, our

commutator satisfies Jacobi iff ω is a 2-cocycle. Now, Whitehead’s lemma says that H2 (gss , Dn ) = 0, so
ω = dη is a coboundary. Now, we can use η to modify the splitting, so that ω becomes 0. 

We would like to prove the 3rd Lie theorem (any f.dim Lie algebra is the Lie algebra of a Lie group)
and also Ado’s theorem (any f.dim Lie algebra has a faithful rep). Doint so will require some more
technology, which brings us to....

24.1 The nilradical


We will consider nilradicals of solvable Lie algebras.
Say a is a solvable Lie algebra. By Lie's theorem, in some basis of the adjoint representation (or any rep), the matrices of ad(x) (x ∈ a) are upper triangular:

ad(x) = ( λ1(x)  ∗ ; ⋱ ; λn(x) ),

where the λi are characters of a.

Definition 24.2. The nilradical of a is the subset n of nilpotent elements (i.e. a ∈ a s.t. ad(a) is
nilpotent).

Using this upper triangular basis, one can write this as

n = {a ∈ a : ad(a) is strictly upper triangular} .

This is an ideal containing [a, a] (commutator of two triangular matrices is strictly upper triangular), so
a/n is abelian.

The characters λ1, . . . , λn ∈ (a/n)∗ are a spanning set of (a/n)∗: if not, there would be an element of a, not in n, whose adjoint matrix is nilpotent. Note that some λi may be zero (e.g. if a is nilpotent, they are all 0).

Lemma 24.3. If d : a ! a is a derivation, then d(a) ⊂ n.



Proof. Look at the 1-parameter group of automorphisms e^{td} : a → a. If b ∈ a with [a, b] = λ(a)b (λ ∈ (a/n)∗), then

[e^{td}(a), e^{td}(b)] = λ(a)e^{td}(b),

i.e. [ã, e^{td}(b)] = λ(e^{−td}(ã))e^{td}(b) for ã := e^{td}(a). Hence, if λ occurs in a, then so does λ_t := λ ◦ e^{−td}. (By 'occurs' we mean shows up as a Jordan-Hölder factor.) Only finitely many characters can occur, so this 1-parameter family must be constant. Thus, e^{td}λi = λi for all i. Therefore, e^{td} acts trivially on (a/n)∗, so it acts trivially on a/n, i.e. d|_{a/n} = 0, which exactly says d(a) ⊂ n. 
trivially on a/n, i.e. d|a/n = 0 which exactly says d(a) ⊂ n. 

Corollary 24.4. If a = rad(g), then g acts trivially on a/n.

24.2 Exponentiating nilpotent and solvable Lie algebras, and 3rd Lie theorem
Say g is a f.dim solvable Lie algebra over K = R or C.

Theorem 24.5. There exists a simply connected Lie group G with Lie algebra g such that the exponential map exp : g → G is a diffeomorphism. Moreover, if g is nilpotent and we identify g with G via exp, then multiplication is given by a polynomial map

p : g × g → g.

Example. Say g is the Heisenberg Lie algebra H = ⟨x, y, c⟩ with [x, y] = c and [x, c] = 0 = [y, c]. Equivalently,

H = ( 0 ∗ ∗ ; 0 0 ∗ ; 0 0 0 )

is the Lie algebra of strictly upper triangular 3 × 3 matrices. One can check that

exp ( 0 α γ ; 0 0 β ; 0 0 0 ) = ( 1 α γ + αβ/2 ; 0 1 β ; 0 0 1 ).

In these coordinates, multiplication looks like

(α, β, γ) ∗ (α′, β′, γ′) = ( α + α′, β + β′, γ + γ′ + ½(αβ′ − βα′) ).
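Since H consists of nilpotent matrices (A³ = 0), the exponential is a polynomial and both formulas can be checked exactly. A small numerical sketch (my own code; it uses exp(X)exp(Y) = exp(X + Y + ½[X, Y]), which holds here because all higher brackets vanish):

```python
import numpy as np

def M(a, b, g):          # element of the Heisenberg Lie algebra
    return np.array([[0.0, a, g], [0.0, 0.0, b], [0.0, 0.0, 0.0]])

def expm_nilp(A):        # A^3 = 0, so the exponential series terminates
    return np.eye(3) + A + A @ A / 2

def star(p, q):          # claimed group law in exponential coordinates
    a, b, g = p; a2, b2, g2 = q
    return (a + a2, b + b2, g + g2 + (a*b2 - b*a2)/2)

rng = np.random.default_rng(1)
p, q = rng.standard_normal(3), rng.standard_normal(3)

# the multiplication law matches matrix multiplication of exponentials
assert np.allclose(expm_nilp(M(*p)) @ expm_nilp(M(*q)), expm_nilp(M(*star(p, q))))

# closed form for exp itself
a, b, g = p
assert np.allclose(expm_nilp(M(a, b, g)),
                   [[1, a, g + a*b/2], [0, 1, b], [0, 0, 1]])
```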

Proof of Theorem 24.5. Induct on dim g; suppose the result is known for all Lie algebras of dimension < dim g. Fix χ : g → K a nontrivial character (exists since g is solvable). Let g0 := ker χ, an ideal of codimension 1 in g. Hence, g = Kd ⊕ g0 is a semidirect product (d acts as a derivation of g0). We know by the inductive assumption that g0 = Lie(G0) for some G0 with

exp : g0 → G0

a diffeomorphism, and P : g0 × g0 → g0 a regular multiplication map, polynomial if g0 is nilpotent.


Consider the 1-parameter group of automorphisms e^{td} : g0 → g0. We can now introduce a group law on g = Kd ⊕ g0:49

(x, t) ∗ (y, s) = ( P(x, e^{td}(y)), t + s ),

where x, y ∈ g0 and t, s ∈ K. One can check (exercise)

(1) this is a group law, defining a Lie group G with Lie(G) = g and exp : g → G a diffeomorphism. More precisely,

exp(td + x) = (x_t, t) where x_t = Σ_{n≥1} t^{n−1}d^{n−1}(x)/n! = ((e^u − 1)/u)|_{u=td} (x).

Example.

exp ( α β ; 0 0 ) = ( e^α  ((e^α − 1)/α)·β ; 0 1 ).

(2) If g is nilpotent, the multiplication law is polynomial. 
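The 2 × 2 example from step (1) can likewise be checked against a truncated exponential series. This is my own sketch of a check (α, β chosen arbitrarily):

```python
import numpy as np

def expm_series(A, terms=60):       # matrix exponential via truncated power series
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

alpha, beta = 0.7, -1.3
A = np.array([[alpha, beta], [0.0, 0.0]])
expected = np.array([[np.exp(alpha), (np.exp(alpha) - 1) / alpha * beta],
                     [0.0, 1.0]])
assert np.allclose(expm_series(A), expected)
```

This works because Aⁿ = (αⁿ, α^{n−1}β ; 0, 0) for n ≥ 1, so the (1, 2) entry of the series sums to β(e^α − 1)/α.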


49 We want a semidirect product



Definition 24.6. If g is nilpotent, the corresponding simply connected group G is called unipotent (it
acts by unipotent operators in the adjoint representation).

Theorem 24.7 (Third Lie Theorem). Every f.dim Lie algebra over R or C is the Lie algebra of some
Lie group.

Proof. By Levi decomposition, we have g = gss n a where a = rad(g) is solvable. By previous theorem,
a = Lie A with A simply connected. Furthermore, since gss is semisimple, we can write gss = Lie Gss with
Gss simply connected. Furthermore, Gss acts on a by automorphisms, so it acts on A by automorphisms.
We can now form G = Gss n A and by construction Lie G = g. 

Corollary 24.8. A simply connected complex Lie group G has the homotopy type of its semisimple part Gss, and hence of the compact form G^c_ss. Specifically,

G ≅ G^c_ss × R^m

as a manifold.

Remark 24.9. Almost the same thing holds for real groups. If G is a simply connected real Lie group,
we also have G ∼ Gss (homotopy equivalent) and Gss ∼ Kss , the simply connected compact group
corresponding to kss ⊂ k, the semisimple part of k = gσss .
The upshot is that any Lie group has the homotopy type of some compact Lie group (its maximal
compact subgroup).

24.3 Algebraic Lie algebras


We want to show every Lie algebra has a faithful representation. We will show this over R and C, but it is in fact true in any characteristic.

Definition 24.10. A f.dim Lie algebra g over C is algebraic if it is the Lie algebra of an algebraic
group, i.e. g = Lie(G) where G = K n N with K reductive and Lie N nilpotent (i.e. N unipotent).

Non-example. Consider g1 = hd, x, yi with [x, y] = 0, [d, x] = x and [d, y] = 2y.

Exercise. This is not an algebraic Lie algebra, ultimately because 2 is irrational.

Non-example. Consider g2 = hd, x, yi with [x, y] = 0, [d, x] = x, and [d, y] = y + x.


Exercise. This is not algebraic, ultimately because d is a nontrivial Jordan block.

On the other hand

Proposition 24.11. Every f.dim Lie algebra over C is a Lie subalgebra of an algebraic Lie algebra.

Proof. We first make a definition. Say g is n-algebraic if g = Lie G and G = K ⋉ A, where K is reductive and a = Lie A is solvable with dim(a/n) ≤ n (here n ⊂ a is the nilradical). Note 0-algebraic = algebraic.

Example. g1 , g2 are 1-algebraic.

Now on to the proof. Any g is of the form g = gss n a with a = rad(g) solvable. Thus, any f.dim
Lie algebra is n-algebraic for some n. Hence, it suffices to show that an n-algebraic Lie algebra can be
embedded into an (n − 1)-algebraic Lie algebra (when n ≥ 1).
Suppose g is n-algebraic, so g = Lie G and G = K n A with K reductive and A simply connected with a = Lie(A) solvable satisfying dim(a/n) = n. Pick d ∈ a but not in n s.t. d is K-invariant (this exists since K acts trivially on a/n and reps of K are completely reducible). Thus, ad(d) is a derivation of a, so we can write

a = a[β1 ] ⊕ · · · ⊕ a[βr ]

as a sum of generalized eigenspaces for ad(d). This decomposition is K-stable (K commutes with d).
Pick a character χ : a → C so that χ(d) = 1.
Consider the subgroup Γ ⊂ C generated by the βi ; Γ ≅ Zm . Let α1 , . . . , αm be a basis and write

βi = Σ_{j=1}^m bij αj ,  bij ∈ Z.

Let T = (C× )m , an m-dimensional torus. We make T act on a by letting z = (z1 , . . . , zm ) ∈ T act via

z|a[βi ] = Π_{j=1}^m zj^{bij} .

This T commutes with K, so T acts on G = K n A. Form

G̃ = T n G = (K × T ) n A.

Define a′ ⊂ Lie(T ) n a ⊂ Lie(G̃) by

a′ = ⟨ker χ, d − α⟩ where α = (α1 , . . . , αm ) ∈ Lie T = Cm .

Note that α|a[βi ] = Σ_j bij αj = βi . Thus, d and α have the same eigenvalues (α is semisimple, d possibly not), so d − α is nilpotent. Thus, the nilradical n′ ⊂ a′ is bigger than n: n′ = ⟨n, d − α⟩, and dim(a′ /n′ ) = n − 1. We now let A′ be the simply connected Lie group corresponding to a′ , so A′ ⊂ (K × T ) n A. Note that

G̃ = (K × T ) n A = (K × T ) n A′ ,

so g̃ = Lie G̃ is (n − 1)-algebraic and contains g = Lie(K n A). 

Example. Recall g1 = ⟨d, x, y⟩ with [d, x] = x, [d, y] = √2 y. Add a new generator δ so that [δ, y] = y and [δ, x] = 0 = [δ, d]. Call the resulting algebra g̃1 . Note that it is algebraic, as g̃1 = ⟨δ, y⟩ ⊕ ⟨d − √2 δ, x⟩. We see that g̃1 = b ⊕ b with b the 2-dimensional noncommutative Lie algebra, b = Lie Aff(1), the Lie algebra of the group of affine transformations of the line.

Example. Recall g2 = ⟨d, x, y⟩ with [d, x] = x, [d, y] = x + y. Adjoin δ satisfying [δ, x] = 0 = [δ, d] and [δ, y] = x. Let g̃2 = ⟨δ, d, x, y⟩. This is C(d − δ) n H with H = ⟨δ, x, y⟩ the Heisenberg Lie algebra.

122
24.4 Faithful representations of nilpotent Lie algebras

Let n be a f.dim nilpotent Lie algebra. We know n = Lie N with N unipotent, and exp : n → N an isomorphism. Furthermore, the induced group law on n is polynomial, P : n × n → n.

Proposition 24.12. Let O(N ) be the space of polynomial functions on N ≅ n, O(N ) = C[n] = Sn∗ . Then, O(N ) is invariant under the action of n by left-invariant vector fields. Moreover, we have a canonical filtration

O(N ) = ∪_{n≥1} Vn

where Vn ⊂ O(N ) are f.dim subspaces, V1 ⊂ V2 ⊂ · · · , and nVi ⊂ Vi−1 .

Prove this next time.

25 Lecture 25 (5/20): Last Class


25.1 Ado’s Theorem
We’re working towards proving Ado’s theorem. We will first prove it in the case of nilpotent Lie algebras.

Recall 25.1. Say n is a nilpotent Lie algebra; then we can write n = Lie N with N a unipotent group and exp : n → N an isomorphism. The induced group law on n, a deformation of addition, is a polynomial P : n × n → n.

We ended last time with the statement of the following proposition.

Proposition 25.2. Let O(N ) be the space of polynomial functions on N ≅ n, O(N ) = C[n] = Sn∗ . Then, O(N ) is invariant under the action of n by left-invariant vector fields. Moreover, we have a canonical filtration

O(N ) = ∪_{n≥1} Vn

where Vn ⊂ O(N ) are f.dim subspaces, V1 ⊂ V2 ⊂ · · · , and nVi ⊂ Vi−1 . (Margin: The existence of such a filtration makes O(N ) a 'locally finite module'; I think this is the terminology.)

Proof. Say x ∈ n, so it has an associated left-invariant vector field Lx . For f ∈ O(N ) = Sn∗ , we have, by definition,

(Lx f )(y) = ∂/∂t|_{t=0} f (y ∗ tx) = ∂/∂t|_{t=0} f (P (y, tx)).

Since f and P are both polynomials, we see that Lx f is a polynomial in y, so Lx f ∈ O(N ). Thus, O(N ) is invariant under the action of n by left-invariant vector fields.
Now we construct the filtration. Recall that n has a lower central series filtration

n = D0 (n) ⊃ D1 (n) ⊃ · · · ⊃ Dm (n) = 0 where Di (n) = [n, Di−1 (n)].

Taking annihilators of these spaces in n∗ , we get

0 = D0 (n)⊥ ⊂ D1 (n)⊥ ⊂ · · · ⊂ Dm (n)⊥ = n∗ .

123
Now pick a sufficiently large positive integer d, and give Di (n)⊥ degree d^i . This gives an increasing filtration F • on the symmetric algebra Sn∗ = O(N ). Write

P (x, y) = x + y + Σ_{i≥1} Qi (x, y) where Qi : n × n → [n, n]

and Qi is of degree i in y.⁵⁰ Now note that⁵¹

(Lx f )(y) = ∂x f (y) + ∂_{Q1 (x,y)} f (y).

Note that f ↦ ∂x f lowers the degree in F • .

Exercise. If d ≫ 0 is big enough, then the second term f ↦ ∂_{Q1 (x,y)} f also lowers the degree.

Hence, we may take Vn = F n (Sn∗ ) and will have Lx : Vn → Vn−1 . 

Example. Consider the Heisenberg Lie algebra H = ⟨x, y, c⟩ with [x, y] = c while [x, c] = 0 = [y, c] (i.e. c is central). Then,

e^{tx} e^{sy} = e^{tx + sy + ½ts c}

(all higher commutators vanish). If u = px + qy + rc with p, q, r ∈ C, then multiplication in these coordinates is given by

(p1 , q1 , r1 ) ∗ (p2 , q2 , r2 ) = (p1 + p2 , q1 + q2 , r1 + r2 + ½(p1 q2 − p2 q1 )).

One can alternatively describe things using unipotent upper triangular matrices:

[1 p1 r1] [1 p2 r2]   [1  p1+p2  r1+r2+p1 q2]
[0  1 q1] [0  1 q2] = [0    1       q1+q2   ]
[0  0  1] [0  0  1]   [0    0         1     ]

which is a slightly different, but isomorphic, group law. One can check that

Lc = ∂r ,  Lx = ∂p − ½ q ∂r ,  and  Ly = ∂q + ½ p ∂r .

Setting deg p = deg q = d and deg r = d² , these operators lower degree if d > 1.
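The displayed identity e^{tx} e^{sy} = e^{tx+sy+½ts c} can be checked directly in the 3 × 3 matrix model, where exponentials of strictly upper triangular matrices are polynomial. A small numpy sanity check (mine, not from the lecture):

```python
import numpy as np

# Matrix model of the Heisenberg Lie algebra inside strictly upper
# triangular 3x3 matrices: x = E12, y = E23, c = E13, with [x, y] = c.
X = np.zeros((3, 3)); X[0, 1] = 1.0
Y = np.zeros((3, 3)); Y[1, 2] = 1.0
C = np.zeros((3, 3)); C[0, 2] = 1.0

def expm_nilpotent(a):
    # exp(a) = 1 + a + a^2/2 since a^3 = 0 for strictly upper triangular a.
    return np.eye(3) + a + a @ a / 2

assert np.allclose(X @ Y - Y @ X, C)       # [x, y] = c
assert np.allclose(X @ C - C @ X, 0) and np.allclose(Y @ C - C @ Y, 0)

t, s = 0.7, -1.3
lhs = expm_nilpotent(t * X) @ expm_nilpotent(s * Y)
rhs = expm_nilpotent(t * X + s * Y + 0.5 * t * s * C)
assert np.allclose(lhs, rhs)               # e^{tx} e^{sy} = e^{tx+sy+ts c/2}
print("Heisenberg multiplication law verified")
```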

Corollary 25.3. Every f.dim nilpotent Lie algebra over C has a faithful f.dim representation, and there-
fore is isomorphic to a Lie subalgebra of the Lie algebra of strictly upper triangular matrices.

Proof. By definition, O(N ) is a faithful n-module. Hence, for some n, the space Vn is also faithful. 

We now prove Ado’s theorem.

Theorem 25.4 (Ado’s Theorem). Every f.dim Lie algebra over C has a faithful f.dim representation,
i.e. is a Lie subalgebra of gln (C).
50 Apply the Campbell–Hausdorff expansion to P (x, y) = log(ex ey ).
51 Keep in mind P (y, tx) = y + tx + Σi Qi (y, tx).

124
Proof. We know from last time that g can be embedded into an algebraic Lie algebra, so we may assume
g is itself algebraic, i.e. that g = Lie G where G = K n N with K reductive and N unipotent (i.e.
Lie N = n) with action of K. Thus g = k n n with k = Lie K and n = Lie N . Let z ⊂ k be the centralizer
of n. Since k is reductive and z is an ideal, there is a complementary ideal k′ s.t. k = k′ ⊕ z. Hence,

g = k n n = (k′ n n) ⊕ z.

Now, note that if g = g1 ⊕ g2 where gi has a faithful rep Vi , then g has the faithful rep V1 ⊕ V2 . Therefore, we may assume g is indecomposable, so assume that z = 0.
Now, k acts faithfully on n. The algebra g = k n n acts on O(N ), where x ∈ n acts by Lx and y ∈ k acts by Ly − Ry (the adjoint action). Thus, it also acts on the spaces Vn . Fix m so that the action of n on Vm is faithful. We claim all of g acts faithfully on Vm . Suppose that a nonzero y ∈ g acts by 0 on Vm . If y ∈ n, this directly contradicts the faithfulness of n on Vm . Otherwise, write y = (y1 , y2 ) (y1 ∈ k and y2 ∈ n), and pick w ∈ n so that a = [y, w] ≠ 0 (possible since z = 0). Then, a ∈ n acts by 0 on Vm , a contradiction. 

25.2 Last topic: Borel subalgebras and flag manifold


Note 10. Got distracted for a few minutes and missed some stuff that looks important... whoops. (Margin: TODO: Find what you missed and fill it in here.)

Definition 25.5. A Borel subalgebra is a Lie subalgebra conjugate to b+ . A Borel subgroup of G is a Lie subgroup conjugate to B+ . A parabolic subalgebra is a Lie subalgebra containing a Borel subalgebra. A parabolic subgroup is a Lie subgroup containing a Borel subgroup.

Since all pairs (h, Π) are conjugate, this definition does not depend on the choice of (h, Π).

Lemma 25.6. B+ is its own normalizer in G.

Proof. Take γ ∈ G such that γB+ γ −1 = B+ . Let H′ = γHγ −1 ⊂ B+ (H ⊂ B+ a maximal torus). It is easy to see that we can conjugate H′ back to H inside B+ . (Margin: Something about Levi decomposition and having a vanishing H 1 .) Therefore, we may assume that H′ = H. Thus, γ ∈ N (H), which we recall fits in an exact sequence

1 −→ H −→ N (H) −→ W −→ 1.

We also remark that γ preserves positive roots, so it preserves the set Π of simple roots. The only element of the Weyl group which preserves Π is the identity, so actually γ ∈ H ⊂ B+ , and we win. 

Corollary 25.7. The set of Borel subgroups (subalgebras) is G/B+ , a homogeneous space and complex
manifold. We call this the flag manifold of G.

Note that this manifold is canonically attached to G, and depends only on gss ⊂ g = Lie G.
Remark 25.8.

dim G/B+ = |R+ | = ½ (dim g − dim h) .
Example. If G = GLn (or SLn ), then we can take B+ to be the upper triangular matrices, and G/B+ =
Fn is the set of complete flags in Cn . For example, if G = SL2 , then G/B+ = CP1 is the Riemann sphere.
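As a quick sanity check (mine, not from the lecture) that the two counts in Remark 25.8 agree for g = sln , where dim g = n² − 1, dim h = n − 1, and |R+ | = n(n − 1)/2:

```python
def dim_flag_type_A(n):
    # |R+| for sl_n (type A_{n-1}): positive roots e_i - e_j with i < j.
    return n * (n - 1) // 2

def dim_flag_via_dims(n):
    # (dim g - dim h)/2 with dim sl_n = n^2 - 1 and dim h = n - 1.
    return ((n * n - 1) - (n - 1)) // 2

for n in range(2, 12):
    assert dim_flag_type_A(n) == dim_flag_via_dims(n)

# For SL_2 the flag manifold is CP^1, of complex dimension 1.
assert dim_flag_type_A(2) == 1
print("dim G/B+ counts agree for sl_n")
```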

Note that G/B+ is compact in the above example. This is in fact true in general.
Let Gc ⊂ G be the compact form.

125
Remark 25.9. gc + b+ = g. Note gc contains the elements eα − e−α and i(eα + e−α ), while eα , ieα ∈ b+ (α > 0 throughout this sentence). Hence, their sum contains all the eα 's and the Cartan subalgebra.

As a consequence, the orbit Gc · 1 ⊂ G/B+ contains a neighborhood of 1 ∈ G/B+ (since gc ↠ g/b+ ). By translation, we see Gc · 1 contains a neighborhood of each of its elements, so it is open. It is also closed since it is compact. Since G/B+ is connected, we conclude that Gc · 1 = G/B+ , so G/B+ is compact.

The above shows that Gc acts transitively on G/B+ . Its stabilizer is Stab(1) = Gc ∩ B+ .
Note 11. Distracted and missed more stuff. Sounds like the stabilizer is Hc = (S 1 )r , a maximal torus in Gc .
Corollary 25.10. G/B+ ≅ Gc /Hc .
Corollary 25.11 (Iwasawa Decomposition). The usual notation is K = Gc , N = N+ , and A = exp(ihc ) ⊂ H, the non-compact part of H. The multiplication map

K × A × N −→ G

is a diffeomorphism, so G = KAN . (Compare e.g. with the polar decomposition.)

Proof. As shown above, the map ϕ : Gc × B+ → G is surjective. Further,

ϕ(g1 , b1 ) = ϕ(g2 , b2 ) ⇐⇒ g1 = g2 s, b1 = s−1 b2 for some s ∈ Hc .

Let B′+ = AN+ , so B+ = Hc B′+ . Hence, Gc × B′+ → G is a diffeomorphism. Also, A × N+ → B′+ is a diffeomorphism, so

Gc × A × N+ −→ G

is a diffeomorphism. 

Another realization of the flag manifold One can construct the flag manifold alternatively as the orbit
of a highest weight line in an irreducible representation with regular highest weight. Say λ ∈ P+ dominant
integral regular, where regular means λ(hi ) ≥ 1 for all i (e.g. λ = ρ so ρ(hi ) = 1). Let Lλ be the irrep
with highest weight λ, and let vλ be a highest weight vector, so Cvλ ∈ PLλ . Let O := G · Cvλ ⊂ PLλ be
the orbit of this line.
Claim 25.12. O ≅ G/B+ .
Proof. What is the stabilizer S of Cvλ ? Clearly, S ⊃ B+ since vλ is a highest weight vector. Also, for α ∈ R+ , e−α vλ ≠ 0 as λ(hi ) ≥ 1.⁵² We see from this that the stabilizer of Cvλ in g is b+ . Hence, S normalizes b+ , so S ⊂ B+ , so S = B+ . This shows that O is a closed orbit in PLλ , so G/B+ is a complex (smooth) projective variety. 

Remark 25.13. Partial flag manifolds are also complex projective varieties. One can prove this similarly using non-regular weights.
52 also wrote hα vλ = mvλ with m > 0, but I don’t see why this is relevant

126
Borel fixed point theorem Only 8 minutes left, so let’s end with a bang.

Theorem 25.14. Let a be a solvable Lie algebra over C, and let V be a f.dim a-module. Let X ⊂ PV be a closed subset preserved by the corresponding connected group A. Then, there exists x ∈ X such that Ax = x.

Non-example. SL2 (C) ↷ P1 without fixed points (note SL2 (C) is not solvable).

Proof. Induct on n = dim a. The base case n = 0 is trivial. Since a is solvable, it has an ideal a′ of codimension 1. By induction, Y = X^{a′} , the set of a′-fixed points in X, is nonempty. Furthermore, a/a′ acts on Y , so we only need to prove the theorem when dim a = 1.

Say a = ⟨a⟩ with a acting by a linear operator a : V → V . We can scale a by complex numbers. In particular, by rotating, we may assume that the real parts of all its eigenvalues are different. Pick x0 ∈ X, and consider e^{ta} · x0 . If we send t → ∞, the eigenvalue with largest real part will 'dominate', resulting in the existence of a limit x ∈ X (no particular vector has a limit, but the whole line does). (Margin: Question: Use compactness of X?) This limit is fixed by a, so we win. 

Corollary 25.15. Any solvable subalgebra of g is contained in a Borel. Thus, Borels are precisely the maximal solvable subalgebras.

Proof. Say a ⊂ g is solvable. Then, it has a fixed point when acting on G/B+ ⊂ PLλ . This fixed point is a Borel subalgebra b, so exp(a) normalizes b, so a ⊂ b. 
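The limiting argument in the proof of Theorem 25.14 can be watched numerically. Below is a minimal sketch (mine, not from the lecture) for a single operator acting on X = P1 , with power iteration standing in for the flow e^{ta} as t → ∞:

```python
import numpy as np

# One-dimensional case of the proof: a acts with distinct real parts of
# eigenvalues; flowing a generic line converges to the dominant eigenline.
a = np.array([[2.0, 1.0],
              [0.0, 1.0]])       # eigenvalues 2 > 1; dominant eigenline C(1,0)

v = np.array([1.0, 1.0])         # a generic line in P^1
for _ in range(60):
    v = a @ v
    v /= np.linalg.norm(v)       # we only track the line, not the vector

w = a @ v
w /= np.linalg.norm(w)
assert np.allclose(v, w, atol=1e-8)                   # the limit line is a-fixed
assert np.allclose(np.abs(v), [1.0, 0.0], atol=1e-6)  # it is the dominant eigenline
print("limit of the flow is a fixed point in P^1")
```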

Corollary 25.16. Any element of g is contained in some Borel subalgebra.

Example. When g = gln , this says any matrix can be upper triangularized in some basis.
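A numerical illustration of this example (mine, not from the lecture): for a generic, hence diagonalizable, matrix, orthonormalizing an eigenbasis by QR produces a unitary change of basis that upper-triangularizes it (the general case is handled by the Schur decomposition):

```python
import numpy as np

rng = np.random.default_rng(1)
m = rng.normal(size=(4, 4))      # generic real matrix, diagonalizable over C

eigvals, p = np.linalg.eig(m)    # m = p @ diag(eigvals) @ p^{-1}
q, _ = np.linalg.qr(p)           # orthonormalize the eigenbasis
t = q.conj().T @ m @ q           # = r diag(eigvals) r^{-1}, upper triangular

assert np.allclose(np.tril(t, -1), 0, atol=1e-7)   # strictly lower part vanishes
assert np.allclose(np.diagonal(t), eigvals)        # eigenvalues on the diagonal
print("m is upper triangular in the unitary basis q")
```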

We don’t have time to give the proof (it’s in the notes), but similarly...

Proposition 25.17. Any nilpotent subalgebra of g (consisting of nilpotent elements) is contained in a conjugate of n+ . Hence, conjugates of n+ are the same thing as maximal nilpotent subalgebras.

One can show that the normalizer of n+ is B+ , so any maximal nilpotent subalgebra n is contained in a unique Borel b, and n = [b, b]. Therefore, maximal nilpotent subalgebras are also parameterized by the flag manifold G/B+ .

127
26 List of Marginal Comments

o Any left-invariant vector field is determined by its value at the identity . . . . . . . . . . . . . 2


o Remember: Always consider s.s. Lie algebras in characteristic 0 . . . . . . . . . . . . . . . . . . 7
o Directed edges in a Dynkin diagram point to the longer root . . . . . . . . . . . . . . . . . . . 10
o Remember: ρ = Σ_{i=1}^r ωi = ½ Σ_{α∈R+} α . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
o Note that Lie C× is not semisimple, and its representations are not completely reducible (e.g. think Jordan blocks), but not every rep of Lie C× lifts to one of C× since it is not simply connected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
o |λ| = Σi pi from before . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
o Secretly, S n U is an irreducible representation even if U is infinite dimensional, so no need to reduce to the finite dimensional case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
o TODO: Review this answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
o TODO: Make sense of this argument . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
o Question: What? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
o TODO: Fix typos below . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
o See e.g. the part of chapter 1 of Hartshorne where he talks about Hilbert polynomials and
numerical polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
o Remember: trace of a product is invariant under cyclic permutation . . . . . . . . . . . . . . . 26
o Remember: Q is the root lattice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
o ω 6= αj since (αj , αj ) = 2 > 1, I think . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
o I possibly made some typos below . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
o This is another instance of the double centralizer property . . . . . . . . . . . . . . . . . . . . . 35
o If it did, the exponents would all be integers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
o Like in the proof of PBW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
o This is because dim g[0] = r and each exponent corresponds to an irrep in g|sl2 . . . . . . . . . 46
o “covariant” here counts duals while “contravariant” counts non-duals. . . . . . . . . . . . . . . . 48
o t2 sits in an exact sequence with strictly upper triangular matrices and diagonal matrices. . . . 53
o Question: Why is H in this space? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
o Answer: Ever element of Hλ generates a f.dim representation . . . . . . . . . . . . . . . . . . . 60
o Implicitly, we’re assuming all our spaces are Hausdorff . . . . . . . . . . . . . . . . . . . . . . . 61
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
o Isn’t this supposed to be a math class? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
o Question: Why did he use  instead of ⊗? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
o Answer: It’s an external tensor product, not an internal one . . . . . . . . . . . . . . . . . . . . 70
o This was a homework problem once upon a time . . . . . . . . . . . . . . . . . . . . . . . . . . 71
o TODO: Add rendition of matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
o TODO: Add picture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

128
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
o Split form is so(n, n) so could be in either class depending on parity of n . . . . . . . . . . . . . 83
o TODO: Add picture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
o The ‘node’ is the valence 3 vertex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
o Answer: See beginning of tomorrow’s lecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
o Question: Get this corollary by regarding Gad as a real Lie group? . . . . . . . . . . . . . . . . 95
o Question: and P ≅ Rdim p ? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
o Note dim Gad may be less than dim G, e.g. if Z contains a torus . . . . . . . . . . . . . . . . . 99
o Possibly a typo below . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
o I don’t know if this reasoning works in general to always get the right sign in these graded
situations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
o Lv degree 0 operator so no sign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
o ιv is a chain homotopy from Lv to the zero map . . . . . . . . . . . . . . . . . . . . . . . . . . 103
o Question: Why do these degrees add to to dim g? . . . . . . . . . . . . . . . . . . . . . . . . . . 108
o We have an exterior power of a tensor product of dual spaces . . . . . . . . . . . . . . . . . . . 111
o Has at most n parts with transpose having at most m parts . . . . . . . . . . . . . . . . . . . . 111
o As a vector space, the cohomology of Fn (C) will be tensor product of cohomology of CPk for
k = 1, . . . , n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
o Question: What is a? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
o The existence of such a filtration make O(N ) a ‘locally finite module’ (I think this is the
terminology) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
o TODO: Find what you missed and fill it in here . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
o Something about Levi decomposition and having a vanishing H1 . . . . . . . . . . . . . . . . . 125
o Question: Use compactness of X? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

129
Index
δ-like sequence, 59 character orthogonality, 56
n-algebraic, 121 characters, 56
n-dimensional representation of g/k, 3 Chevalley-Eilenberg complex, 113
p-adic integers, 61 Chevalley-Eilenberg complex, 105
q-binomial theorem, 111 Clebsch-Gordan, 4
“Bott Periodicity”, 47 Clifford algebra, 39
1-coboundaries, 114 closed form, 49
1-cocycle, 72 closed forms, 102
1-cocycles, 114 closed Lie subgroup, 1
1-parameter subgroup, 3 closed manifold, 51
1st Galois cohomology, 73 cohomology of g with coefficients in V , 113
2nd countable, 61 compact, 57
compact directions, 82
abelian extension, 115
compact real form, 75
abstract root system, 8
complete symmetric function, 28
addable box, 32
completely reducible, 2
adjoint group of g, 70
complex type, 44
adjoint representation, 2
convolution operator, 60
Ado’s Theorem, 124
coroot, 9
algebraic, 121
coroot lattice, 9
angular momentum operator, 69
counting measure, 52
antidominant, 42
coweight lattice, 9
Ascoli-Arzela, 63
Coxeter number of g, 46
associated Legendre polynomials, 66
averaging measure, 53 De Rham Cohomology, 49
azimuthal quantum number, 68 de Rham cohomology group, 102
deformationally rigid, 117
Betti numbers, 103
differential k-form, 48
Borel subalgebra, 125
dominant, 10
Borel subgroup, 125
Double Centralizer Lemma, 19
bound states, 68
bounded operator, 57 element of maximal length, 42
Engel’s Theorem, 5
Cartan Decomposition, 99
equicontinuous, 62
Cartan involution, 82
exact form, 49
Cartan matrix, 10
exact forms, 102
Cartan subalgebra, 7, 96
exponent, 46
Cartan’s Criteria, 6
Cartan’s differentiation formula, 105 finite rank operators, 57
Cartan’s magic formula, 102, 103 flag manifold, 112, 125
Casimir element, 14 Frobenius Character Formula, 23
Cauchy identity, 27 fundamental representations, 14

130
fundamental theorem of calculus, 52 Lie ideal, 3
Fundamental Theorem of Invariant Theory, 25 lie subalgebra, 3
fundamental weights, 10 Lie subgroup, 1
linear, 95
Gaussian binomial coefficients, 111
Lorentz Lie algebra, 78
generalized Laguerre polynomial, 68
graded commutative, 103 magnetic quantum number, 68
graded derivation, 48 matrix coefficient, 54
Grassmannian, 111 matrix element, 17
maximal element, 42
Haar measure, 52
maximal torus, 71
Hamilton’s equations, 63
minuscule, 28, 29
Hamiltonian, 63
Molien formula, 27
hat function, 50
multiplicity of m, 46
height, 45
Heisenberg Lie algebra, 120 Narayana numbers, 22
highest weight, 11 negative, 9
highest weight module, 11 Newton polynomial, 19
Hilbert-Schmidt Theorem, 58 nilpotent, 5
homogeneous G-space, 1 nilradical, 119
homomorphism of Lie groups, 1 noncompact directions, 82
Howe duality, 27 nonvanishing, 51

inner class, 74 obstruction class, 117


inner derivations, 115 operator norm, 57
inner real form, 74 Orthogonality of matrix coefficients, 54
inner to gs0 , 74 outer derivations, 115
integrable, 51
integral, 10 parabolic subalgebra, 125
irreducible, 8 parabolic subgroup, 125
Iwasawa Decomposition, 126 partial flag manifold, 113
partition in n parts, 17
Jacobi identity, 2 partition of unity, 50
Jordan Decomposition, 99 Pauli exclusion principle, 70
Pauli matrices, 75
Killing form, 6
perfect, 53
kinetic energy, 63
Peter-Weyl, 55, 56
Laguerre equation, 67 Poincaré series, 109
Legendre differential equation, 66 Poisson bracket, 63
Legendre polynomials, 65 polynomial, 17
Leibniz rule, 103 positive, 9
Levi decomposition, 5, 118 potential energy, 63
Lie, 5 pre-compact sets, 57
Lie algebra, 2, 105 principal sl2 -subalgebra, 45

131
principal quantum number, 68 skew-symmetric isomorphism, 44
profinite group, 61 skew-symmetry, 2
solvable, 5
quasi-split, 74 spherical harmonics, 66
quaternionic trace map, 81 Spin group, 36
quaternionic type, 44 spinor representation, 36
quaternionic unitary Lie algebra, 83 spinor representations, 37
spinors, 36
radical, 5, 118
split semisimple Lie algebra, 72
rank, 7
standard complex, 105
real (complex) Lie group, 1
state of energy EN , 64
real forms, 73
stationary Schrödinger equation, 64
real type, 44 Stoke’s Theorem, 51
reduced, 8 Stone-Weierstrass Theorem, 60
reductive, 6, 96 symmetric isomorphism, 44
reductive Lie algebra, 53
regular, 98, 126 Third Lie Theorem, 121
relative Chevalley-Eilenberg complex, 110 total spin, 69
relative Lie algebra cohomology, 110 twisted conjugation, 73
Riesz representation theorem, 62 twisted homomorphism, 72
root, 7 twisted-linear, 72
root lattice, 9
uniform continuity, 58
root system, 7
unimodular, 52
Schrödinger’s equation, 64 unipotent, 98, 121
Schubert cell, 112 unitary representation, 2
Schubert cells, 112 universal enveloping algebra, 5
Schur algebra, 21
Vogan diagram, 80
Schur functor, 21
volume forms, 51
Schur polynomial, 23
Schur’s lemma, 2 weight lattice, 9
Schur-Weyl duality, 18, 19 weight lattice of G, 95
self-adjoint, 57 weights, 11
semisimple, 7, 91, 98 Weyl Character Formula, 12
semisimplification, 5 Weyl denominator, 100
separates points, 60 Weyl denominator formula, 12, 13
Serre relations, 11 Weyl dimension formula, 14
simple, 92 Weyl group, 8
simple root, 9 Weyl unitary trick, 54
skew-Howe duality, 109 Whitehead’s lemma, 113

132
