Vogan Diagrams for Symplectic Lie Algebras
Spring 2021
These are my course notes for “Lie Groups and Lie Algebras II” at MIT. Each lecture will get its own
“chapter.” These notes are live-TeXed or whatever, so there will likely be some (but hopefully not too
much) content missing where I typed more slowly than the lecturer spoke. They also, of course, reflect my
understanding (or lack thereof) of the material, so they are far from perfect; in particular, if things seem
confused or false at any point, this is me being confused, not the speaker. Finally, they contain many
typos, but ideally not enough to distract from the mathematics. With all that taken care of, enjoy and
happy mathing.
The instructor for this class is Pavel Etingof. This class overlaps once a week with a seminar that I
am attending, so that might cause issues in these notes.
Contents

1 Lecture 1 (2/16)
    1.1 Class stuff
    1.2 Review of material from last term
        1.2.1 Lie groups
        1.2.2 Lie subgroups
        1.2.3 Representations
        1.2.4 Lie algebras
        1.2.5 Exponential Map
        1.2.6 Fundamental Theorems of Lie Theory
        1.2.7 Representations of sl2(C), SL2(C)
        1.2.8 Universal enveloping algebra
        1.2.9 Solvable and nilpotent Lie algebras
        1.2.10 Semisimple and reductive Lie algebras
        1.2.11 Killing form
2 Lecture 2 (2/18)
    2.1 More general forms
    2.2 Semisimple Lie algebras
    2.3 Jordan decomposition and Cartan subalgebras
    2.4 Root decomposition
    2.5 Abstract Root Systems
        2.5.1 Positive and Simple Roots
        2.5.2 Dual root system
        2.5.3 Cartan matrix and Dynkin Diagrams
    2.6 Serre presentations
    2.7 Representation Theory
    2.8 Weyl Character Formula
3 Lecture 3 (2/23)
    3.1 Weyl dimension formula
    3.2 Tensor products of fundamental representations
    3.3 Representations of SLn(C)
    3.4 Representations of GLn(C)
4 Lecture 4 (2/25)
    4.1 Schur-Weyl duality
    4.2 Schur functors
    4.3 Characters of symmetric group
5 Lecture 5 (3/2)
    5.1 Invariant Theory
    5.2 Howe Duality
    5.3 Minuscule weights
6 Lecture 6 (3/4)
    6.1 Last Time
    6.2 This Time: minuscule weights
7 Lecture 7 (3/11)
    7.1 Fundamental weights/representations for classical Lie algebras
        7.1.1 Type Cn
        7.1.2 Type Bn
        7.1.3 Type Dn
8 Lecture 8 (3/16)
    8.1 Last time
    8.2 Clifford algebra
    8.3 Duals of irreps
9 Lecture 9 (3/18)
    9.1 Principal sl2-subalgebra, exponents of g
    9.2 Back to Real, Complex, Quaternionic Type
    9.3 Review of differential forms and integration on manifolds
        9.3.1 Top degree forms
10 Lecture 10 (3/25)
    10.1 Last time
    10.2 Volume Forms
    10.3 Stokes' Theorem
    10.4 Integration on (Real) Lie groups
    10.5 Representations of compact Lie groups
    10.6 Matrix coefficients
11 Lecture 11 (3/30)
    11.1 Matrix coefficients + Peter-Weyl
    11.2 Proving Peter-Weyl
        11.2.1 Analytic Background
12 Lecture 12 (4/1)
    12.1 Peter-Weyl, Proved
    12.2 Compact (2nd countable) topological groups
    12.3 Integration theory on compact top. groups
13 Lecture 13 (4/6)
    13.1 Hydrogen Atom
        13.1.1 Quantum version
15 Lecture 15 (4/13)
    15.1 Forms of a semisimple Lie algebra, continued
16 Lecture 16 (4/15)
    16.1 Twists of the compact form
    16.2 Real forms of classical groups
        16.2.1 Type An−1
    16.3 Type B
17 Lecture 17 (4/22)
    17.1 Last time
    17.2 Classification of real forms
        17.2.1 Type E
18 Lecture 18 (4/27)
    18.1 E type
    18.2 Classification of connected compact Lie groups
        18.2.1 Classification of semisimple compact Lie groups
19 Lecture 19 (4/29)
    19.1 Filling in a gap
    19.2 Polar decomposition
    19.3 Linear groups
    19.4 Connected complex reductive groups
    19.5 Maximal tori
20 Lecture 20 (5/4)
    20.1 Semisimple and unipotent elements
    20.2 Cartan Decomposition
    20.3 Integral form of character orthogonality
    20.4 Topology of Lie Groups
Index

List of Figures
1 An example graph giving an invariant function
2 A graph corresponding to the invariant function Tr(T^2)^2
3 The Dynkin Diagram Cn
4 The Dynkin Diagram Bn
5 The Dynkin Diagram Dn
6 The Dynkin Diagram An
7 The Dynkin Diagram E6
8 The Dynkin Diagrams D3 (left) and D2 (right)
9 An example Vogan diagram. White vertices have sign + and black vertices have sign −.
10 The Dynkin Diagram G2
11 A Dynkin diagram of type F4

List of Tables
1 Real forms of simple complex Lie algebras (except E6, E7, E8)
2 Real forms of all simple complex Lie algebras
1 Lecture 1 (2/16)
1.1 Class stuff
Homeworks are assigned/due on Thursdays. Lecture notes are available on Canvas, if you can access it.
Definition 1.1. A real (complex) Lie group is a real (complex) manifold G which is also a group
such that G × G ! G is regular (analytic). A homomorphism of Lie groups G ! H is a group
homomorphism given by a regular map.
Example. Real: Rn , U (n), SU(n), GLn (R), O(p, q), Sp2n (R)
complex: Cn , GLn (C), On (C), Sp2n (C)
Every Lie group G has a connected component of 1 denoted G◦ . This is a normal subgroup, and
G/G◦ is discrete and countable.
Say G is connected. Then its universal cover G̃ is a simply connected Lie group, and comes with a
map π : G̃ → G with ker π = Z, some central discrete subgroup, such that G̃/Z ≅ G.
Example. If G = S¹, then G̃ = R and Z = Z. Hence, in this case π1(S¹) = Z.
Definition 1.2. A Lie subgroup H ⊂ G is a subgroup which is also an immersed submanifold (i.e. H
is a Lie group and H ,! G is a regular map with injective differential at every point). A closed Lie
subgroup H ⊂ G is a subgroup which is an embedded submanifold (i.e. locally closed).
Remark 1.3. A closed Lie subgroup is equivalently a Lie subgroup which is closed in G.
Example. Q ⊂ R is a Lie subgroup, but not an embedded submanifold. However, Z ⊂ R is a closed Lie
subgroup.
Fact (Did not prove last semester). Any closed subgroup of G is a closed Lie subgroup.
Fact (Did prove last semester). Any connected Lie group is generated by any neighborhood of 1.
Definition 1.4. Let H ⊂ G be a closed Lie subgroup. Then, the quotient G/H is a manifold with a
transitive G-action, i.e. a homogeneous G-space. If H is a normal subgroup, then G/H is a Lie group.
1.2.3 Representations
Reps are actions of G on a vector space by linear transformations. We usually consider complex
representations, i.e. maps G → GL(V) ≅ GLn(C).
We get the usual notions from representation theory: homomorphisms of reps (intertwining operators
A : V → W), subreps, direct sums, duals, tensor products, irreps, indecomposable reps, etc.
Lemma 1.5 (Schur’s lemma). Let V, W be irreps. If they are not isomorphic then any A : V ! W is
trivial (A = 0). If they are isomorphic, then any A : V ! V is scalar multiplication (A = λ Id).
Example. G acts on itself by conjugation: g · x = gxg −1 . This induces g∗ : T1 G ! T1 G, and the map
Ad : g 7! g∗ gives the adjoint representation Ad : G ! GL(g).
Definition 1.6. A unitary representation is one with an invariant positive def Hermitian form (v, w),
i.e. (gv, gw) = (v, w), i.e. G ! U (n) ⊂ GLn (C).
Recall that in general, indecomposable (not a direct sum) is weaker than irreducible (no nontrivial
proper subreps). However, any unitary representation is a direct sum of irreducible representations (so
unitary indecomposable = unitary irreducible).
If G is finite (or, more generally, compact), then any representation is unitary. Take any positive definite
Hermitian form, and then average it over the group to get an invariant one. Thus, any finite dimensional
representation of G (finite or compact) is a direct sum of irreps (i.e. completely reducible).
Note that G acts on itself by right translations, i.e. g ◦ x = xg. This is a right action. Fix a ∈ T1G =: g.
Right translation gives rise to a tangent vector ag ∈ TgG at g. Doing this at every point gives rise
to a left-invariant vector field (since left multiplication commutes with right translation) Ra on G (i.e.
Ra|g = ag). (Side note: any left-invariant vector field is determined by its value at the identity.)
We know vector fields correspond to derivations of functions. We can consider the commutator
[Ra, Rb] = RaRb − RbRa,
another left-invariant derivation (vector field), so [Ra, Rb] = R[a,b] for some [a, b] ∈ g. Hence, for any
a, b ∈ g, we get in this way a commutator [a, b] ∈ g. This is a bilinear map g × g → g satisfying
• (skew-symmetry) [x, x] = 0 =⇒ [x, y] = −[y, x];
• (Jacobi identity) [[x, y], z] + [[y, z], x] + [[z, x], y] = 0.
Definition 1.7. A Lie algebra over any field k is a k-vector space g with a bilinear operation [−, −] :
g × g → g satisfying skew-symmetry + the Jacobi identity.
Example. If G = GLn (C), then g = gln (C) = Matn (C) with Lie bracket [A, B] = AB − BA
Definition 1.8. A Lie subalgebra h ⊂ g is a subspace invariant under [−, −]. A Lie ideal is a Lie
subalgebra h ⊂ g such that [g, h] ⊂ h.
If H ⊂ G is a Lie subgroup, then Lie H ⊂ Lie G is a Lie subalgebra. If H is normal, then Lie H is a Lie
ideal.
The same representation-theoretic notions apply to Lie algebras as well, e.g. an n-dimensional rep-
resentation of g over k is a homomorphism ϕ : g → gln(k) of Lie algebras, i.e. ϕ([a, b]) = [ϕ(a), ϕ(b)].
Now fix a ∈ g and consider the flow of the vector field Ra through the identity, i.e. the ODE g′(t) = Ra|g(t), g(0) = 1.
This has a unique solution, which we denote by g(t) = exp(ta). This defines a 1-parameter subgroup
ϕ : R → G, ϕ(t) = exp(ta). This satisfies ϕ(s + t) = ϕ(s)ϕ(t).
Example. When G = GLn(K) (and K = R, C), this is the usual matrix exponential
$$\exp(ta) = \sum_{n=0}^{\infty} \frac{t^n a^n}{n!}.$$
Setting t = 1 gives the exponential map exp : g → G. The differential of this map at the identity is
a map exp∗ : g ! g which is actually the identity map exp∗ = id. Hence, exp is invertible near identity,
and the inverse map is called log : U ⊂ G ! g (only defined on some open neighborhood U ⊂ G of the
identity).
This allows another definition of the commutator. One has
$$\log(\exp(a)\exp(b)) = a + b + \tfrac12[a,b] + \cdots,$$
where the · · · refers to higher order terms. The commutator measures the extent to which G◦ is
non-commutative, e.g. G◦ is commutative ⇐⇒ [−, −] = 0 on g.
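As a quick sanity check (my own, not from the lecture), the second-order truncation above can be verified numerically for small matrices; this sketch assumes scipy is available.

```python
import numpy as np
from scipy.linalg import expm, logm

# Two small elements of gl_3(R); "small" so the BCH series converges quickly.
rng = np.random.default_rng(0)
a = 0.01 * rng.standard_normal((3, 3))
b = 0.01 * rng.standard_normal((3, 3))

lhs = logm(expm(a) @ expm(b))            # log(exp(a) exp(b))
rhs = a + b + 0.5 * (a @ b - b @ a)      # a + b + [a, b]/2

# The difference is third order in the 1e-2 scale of a, b, so it should be tiny.
print(np.linalg.norm(lhs - rhs))
```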
Theorem 1.9. For any Lie group G, there is a bijection between connected Lie subgroups H ⊂ G and
Lie subalgebras h ⊂ g = Lie G, given (in one direction) by H ↦ Lie(H).
Theorem 1.10. Let G, K be Lie groups with G simply connected. Then,
$$\operatorname{Hom}(G, K) \xrightarrow{\;\sim\;} \operatorname{Hom}(\mathfrak{g}, \mathfrak{k}),$$
where g = Lie G and k = Lie K. This iso is given by taking the derivative at the identity.
Theorem 1.11 (Did not prove). For any finite dimensional real or complex Lie algebra g, there exists a
Lie group G such that g = Lie G.
Corollary 1.12. Say K = R, C. Then there is an equivalence of categories between simply connected
K-Lie groups and finite dimensional K-Lie algebras given by G 7! Lie G.
Any connected Lie group is of the form G/Γ where G is simply connected Lie group, and Γ ⊂ G is a
central, discrete subgroup.
These give good classification of Lie groups in terms of Lie algebras.
Recall 1.13.
$$\mathrm{SL}_2(\mathbb{C}) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} : ad - bc = 1 \right\} \quad\text{and}\quad \mathfrak{sl}_2(\mathbb{C}) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} : a + d = 0 \right\}.$$
The latter has the standard basis
$$e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},$$
satisfying
$$[e, f] = h, \qquad [h, e] = 2e, \qquad [h, f] = -2f.$$
Theorem 1.15.
(3) $$V_n \otimes V_m = \bigoplus_{i=1}^{\min(m,n)} V_{|m-n|+2i-1}$$
(Clebsch–Gordan, up to spelling).
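A quick dimension check (mine, not from the lecture), reading Vk as the k-dimensional irrep as the indexing above suggests:
$$V_2 \otimes V_3 = V_2 \oplus V_4, \qquad 2 \cdot 3 = 2 + 4.$$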
1.2.8 Universal enveloping algebra
Let g be a Lie algebra. Form the tensor algebra $T\mathfrak{g} = \bigoplus_{n\ge 0} \mathfrak{g}^{\otimes n}$. The universal enveloping algebra is
$$U(\mathfrak{g}) = \frac{T\mathfrak{g}}{\left(x \otimes y - y \otimes x - [x,y]\right)}.$$
Theorem 1.16 (PBW). Given an ordered basis x1, . . . , xn of g, the ordered monomials x1^{k1} · · · xn^{kn} form a basis of U(g) (so in particular they are linearly independent).
Define the lower central series L1(g) = g, L2(g) = [g, L1(g)], L3(g) = [g, L2(g)], . . . . We say g is nilpotent if Ln(g) = 0 for
some n. Similarly, with the derived series D1(g) = g, Dn+1(g) = [Dn(g), Dn(g)], we say g is solvable if Dn(g) = 0 for some n.
Remark 1.18. nilpotent =⇒ solvable, but the reverse does not always hold.
Example. The upper triangular matrices $\begin{pmatrix} * & * \\ 0 & * \end{pmatrix}$ form a solvable, but not nilpotent, Lie algebra.
The strictly upper triangular matrices $\begin{pmatrix} 0 & * \\ 0 & 0 \end{pmatrix}$ form a nilpotent one.
Theorem 1.19 (Lie). Working over C: if g is a f.d. solvable Lie algebra, then every irrep of g is
1-dimensional (this is false in positive characteristic). Hence, any f.d. rep has a basis in which all elements of
g act by upper triangular matrices.
Theorem 1.20 (Engel’s Theorem). A f.d. Lie algebra g is nilpotent ⇐⇒ all x ∈ g are nilpotent
(i.e. adx : g ! g given by adx(y) = [x, y] is nilpotent).
Definition 1.21. Let the radical rad(g) of g be the sum of all its solvable ideals (equivalently, the
largest solvable ideal).
Proposition 1.25. A semisimple Lie algebra is a direct sum of simple Lie algebras.
Really hard to classify general Lie algebras, but classifying (semi)simple Lie algebras is doable in
terms of Dynkin diagrams.
Definition 1.26. We say g is reductive if rad(g) = Z(g) (radical = center). Any reductive Lie algebra
is of the form g = h ⊕ gss with h abelian.
Recall the Killing form K(x, y) = Tr(ad x ∘ ad y), where ad x(z) = [x, z]. This is a symmetric bilinear form on g which is ad-invariant: K([z, x], y) + K(x, [z, y]) = 0.
2 Lecture 2 (2/18)
Note 1. A few minutes late.
Continuing where we left off I think.
Proposition 2.1. If BV (the trace form BV(x, y) = TrV(xy) of a representation V) is nondegenerate for some V, then g is reductive: g = gss ⊕ h with h abelian.
If g is semisimple, we can construct a G with Lie G = g. Take Aut(g) ⊂ GL(g), and let G = Aut(g)◦.
This may not be simply connected, but Lie G ≅ g.
Theorem 2.5. All Cartan subalgebras are conjugate under the action of the group G.
Hence, all Cartan subalgebras have the same dimension r, called the rank of g.
Example. For g = sln, the traceless diagonal matrices form a Cartan subalgebra h ≅ C^{n−1}, so rank sln = n − 1.
We have the root decomposition
$$\mathfrak{g} = \mathfrak{h} \oplus \bigoplus_{\alpha} \mathfrak{g}_\alpha, \qquad\text{where}\qquad \mathfrak{g}_\alpha = \{x \in \mathfrak{g} : [h, x] = \alpha(h)x \text{ for all } h \in \mathfrak{h}\}.$$
We call α ∈ h∗ \ 0 a root if gα ≠ 0. There are only finitely many roots since dim g < ∞. The set R ⊂ h∗
of roots is called the root system of g. Note that
$$[\mathfrak{g}_\alpha, \mathfrak{g}_\beta] \subset \mathfrak{g}_{\alpha+\beta}.$$
The Killing form gives a nondegenerate pairing gα × g−α → C. In fact, dim gα = 1 for all α ∈ R.
Example. A2 is the root system of sl3 . Here, dim sl3 = 8 and rank sl3 = 2, so |R| = 6. One can check
that these roots form the vertices of a regular hexagon.
Definition. Let E be a Euclidean space and R ⊂ E a finite set of nonzero vectors such that
(1) R spans E;
(2) for all α, β ∈ R, the number (β, α∨) := 2(β, α)/(α, α) is an integer;
(3) for all α ∈ R, the reflection sα maps R to itself.
Then we call R ⊂ E an abstract root system. We set rank(R) := dim E. We call it reduced if
α ∈ R =⇒ 2α ∉ R. We call R irreducible if we cannot write R = R1 ⊔ R2 (with E = E1 ⊕ E2 and
Ri ⊂ Ei root systems).
Fact. The set of roots of a semisimple Lie algebra form a reduced root system, which is irreducible iff
the Lie algebra is simple.
Definition 2.6. The Weyl group is the subgroup W ⊂ O(E) generated by sα for α ∈ R.
Note that the Weyl group is finite since it acts faithfully on the roots R (so subgroup of permutation
group of roots).
Example. In E = {x ∈ Rn : Σi xi = 0}, take the roots αij = ei − ej, where i, j ∈ [1, n] and i ≠ j, so there are n² − n roots. The reflections sαij = (ij) act by transpositions.
Hence, W = Sn is the symmetric group. This gives the root system An−1 (n ≥ 2).
2.5.1 Positive and Simple Roots
Definition 2.7. Fix a generic t ∈ E (a polarization), i.e. (t, α) ≠ 0 for all α ∈ R. We say α is positive w.r.t. t if (t, α) > 0 and negative if (t, α) < 0. We call α a
simple root if it is positive, but not the sum of two other positive roots.
Pavel drew a picture of A2 -root system with a choice of polarization. If you want to see a picture,
track down and look at my notes from last semester...
Notation 2.8. Let R+ be the set of positive roots, and R− be the set of negative roots. Let Π be the
set of simple roots. These all depend on the polarization (choice of t).
Fact.
(3) Π ⊂ R+ is a basis of E.
To each root α ∈ R we can attach a coroot α∨ ∈ E∗ defined by sα(α∨) = −α∨ and (α, α∨) = 2. Thus,
sα(x) = x − (x, α∨)α.
Definition 2.9. The root lattice is the Z-span of the roots (equivalently, the Z-span of the simple roots), i.e.
$$Q = \langle R \rangle = \langle \Pi \rangle = \left\{ \sum_{i=1}^{r} n_i \alpha_i : n_i \in \mathbb{Z} \right\}.$$
The coroot lattice is Q∨ = ⟨R∨⟩. The weight lattice P is the dual lattice to Q∨.
Inside the weight lattice are the fundamental weights ωi ∈ P satisfying (ωi, αj∨) = δij,
i.e. they are the dual basis to the simple coroots. A weight λ = Σi xiωi is called dominant if xi ≥ 0 for all i;
the set of dominant weights is denoted P+.
For simple roots αi, αj, one has
$$\mathbb{Z} \ni n_{\alpha_i \alpha_j} = (\alpha_i^\vee, \alpha_j) = \frac{2(\alpha_i, \alpha_j)}{(\alpha_i, \alpha_i)} =: a_{ij}.$$
The matrix A = (aij) is the Cartan matrix. Its entries satisfy:
• aii = 2 always.
• aij ≤ 0 if i ≠ j.
• aij = 0 ⇐⇒ aji = 0.
One can reduce classifying irreducible root systems to classifying indecomposable Cartan matrices.
We visualize these using Dynkin diagrams. There are r = rank(R ⊂ E) = dim E = |Π| vertices
corresponding to the simple roots. Vertex i is connected to vertex j by an (undirected) single edge if
aij = aji = −1. There is a (directed) double edge i ⇒ j if aij = −2 and aji = −1. There is a (directed) triple
edge from i to j if aij = −3 and aji = −1. (Directed edges in a Dynkin diagram point to the longer root.) Set
$$m_{ij} = \begin{cases} 2 & \text{if } a_{ij}a_{ji} = 0 \\ 3 & \text{if } a_{ij}a_{ji} = 1 \\ 4 & \text{if } a_{ij}a_{ji} = 2 \\ 6 & \text{if } a_{ij}a_{ji} = 3. \end{cases}$$
Let si = sαi be the simple reflections. These already generate the Weyl group W = ⟨si⟩, and satisfy
s_i² = 1, (si sj)^{mij} = 1. These are the defining relations (no other ones are needed).
Let g be a simple Lie algebra. Let α1, . . . , αr be the simple roots (choose a Cartan subalgebra and a
polarization of the root system). Then we get 1-dim spaces gαi = ⟨ei⟩ and g−αi = ⟨fi⟩. We can normalize
our generators so that
[ei, fi] =: hi = αi∨.
For fixed i, the elements ei, fi, hi generate an sl2-triple with the usual relations.
Theorem 2.10.
(4) For any reduced, irreducible root system, this defines a simple f.d. Lie algebra.
Corollary 2.11. Simple f.d. Lie algebras correspond bijectively to the Dynkin diagrams An , Bn , Cn , Dn , E6 , E7 , E8 , F4 , G2 .
For λ ∈ h∗, the Verma module is Mλ = U(g) ⊗U(b) Cλ, where Cλ = Cvλ is the 1-dimensional rep on which hvλ = λ(h)vλ and eivλ = 0. In particular, Mλ is a free module of rank 1
over U(n−).
Definition 2.12. A highest weight module for g with highest weight λ is a quotient of Mλ .
If V is f.d., there exists a “highest” weight λ s.t. λ + αi is not a weight for any i. For any λ ∈ h∗ , there
is a smallest quotient Mλ /Jλ = Lλ which is irreducible (but in general ∞-dimensional).
Theorem 2.14. Lλ is finite dimensional ⇐⇒ λ is a dominant, integral weight (i.e. λ ∈ P+ ).
In that case,
$$L_\lambda = \frac{M_\lambda}{\sum_i U(\mathfrak{g})\, f_i^{\,n_i+1} v_\lambda}, \qquad n_i := (\lambda, \alpha_i^\vee).$$
Consider the weight decomposition V = ⊕µ V[µ] (I missed the hypotheses on V needed to have this decomposition), with each V[µ] finite dimensional. Let
$$\chi_V = \sum_\mu \dim V[\mu] \cdot e^\mu \in \widehat{\mathbb{C}[P]}.$$
If V is finite dimensional, then this is in the usual (non-completed) group algebra C[P]. Note that, for
h ∈ h,
$$\operatorname{tr}_V(e^h) = \sum_\mu \dim V[\mu]\, e^{\mu(h)}.$$
The Weyl character formula states
$$\chi_{L_\lambda} = \frac{\sum_{w\in W} \det(w)\, e^{w(\lambda+\rho)-\rho}}{\prod_{\alpha\in R_+}\left(1 - e^{-\alpha}\right)}.$$
Example. If λ = 0, then Lλ = C with the trivial action, and χL0 = 1. Therefore, we get the Weyl
denominator formula
$$\sum_{w\in W} \det(w)\, e^{w\rho - \rho} = \prod_{\alpha\in R_+}\left(1 - e^{-\alpha}\right).$$
Next time we start discussing new material. Homework out tonight; due in a week.
3 Lecture 3 (2/23)
Today we start new material. We talked last time about the Weyl character formula, so a good place to
go next is the...
$$\chi_{L_\lambda}(e^h) = \operatorname{Tr}_{L_\lambda}(e^h) = \sum_{\beta \in P(L_\lambda)} \dim L_\lambda[\beta]\, e^{\beta(h)} = \frac{\sum_{w\in W} \det(w)\, e^{(w(\lambda+\rho)-\rho,\, h)}}{\prod_{\alpha\in R_+}\left(1 - e^{-\alpha(h)}\right)}.$$
Note that dim Lλ = χLλ (eh )|h=0 , but this is not so easy to compute directly. In fact, both the numerator
and the denominator vanish at h = 0.
$$\chi_{L_\lambda}(e^{t h_\rho}) = \frac{\sum_{w\in W} \det(w)\, e^{(w(\lambda+\rho)-\rho,\, t\rho)}}{\prod_{\alpha\in R_+}\left(1 - e^{-t(\alpha,\rho)}\right)} = e^{-t(\rho,\rho)}\, \frac{\sum_{w\in W} \det(w)\, e^{t(\lambda+\rho,\, w\rho)}}{\prod_{\alpha\in R_+}\left(1 - e^{-t(\alpha,\rho)}\right)}.$$
(Above, we've used that (·, ·) is W-invariant, and we've replaced w ↦ w⁻¹ at one point (noting det w =
det w⁻¹).) At this point, we recall the Weyl denominator formula:
$$\sum_{w\in W} \det(w)\, e^{w\rho} = e^{\rho} \prod_{\alpha\in R_+}\left(1 - e^{-\alpha}\right).$$
Applying it to the numerator, we get
$$\chi_{L_\lambda}(e^{t h_\rho}) = e^{-t(\rho,\rho)}\, \frac{e^{t(\rho,\lambda+\rho)} \prod_{\alpha\in R_+}\left(1 - e^{-t(\alpha,\lambda+\rho)}\right)}{\prod_{\alpha\in R_+}\left(1 - e^{-t(\alpha,\rho)}\right)} = e^{t(\lambda,\rho)} \prod_{\alpha\in R_+} \frac{1 - e^{-t(\alpha,\lambda+\rho)}}{1 - e^{-t(\alpha,\rho)}}.$$
Thus, taking t → 0, we now see that
$$\chi_{L_\lambda}(1) = \dim L_\lambda = \prod_{\alpha\in R_+} \frac{(\alpha, \lambda+\rho)}{(\alpha, \rho)}.$$
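A quick check of my own: for g = sl2, we have R+ = {α}, ρ = α/2, and λ = mω = mα/2, so
$$\dim L_\lambda = \frac{(\alpha, \lambda+\rho)}{(\alpha, \rho)} = \frac{(m+1)(\alpha,\alpha)/2}{(\alpha,\alpha)/2} = m + 1,$$
as expected.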
Consider
$$T_\lambda := \bigotimes_{i=1}^{r} L_{\omega_i}^{\otimes m_i}, \qquad\text{where } \lambda = \sum_i m_i\omega_i \in P_+.$$
Proposition 3.4. Let V be the subrepresentation of $\bigotimes_{i=1}^{r} L_{\omega_i}^{\otimes m_i}$ generated by vλ (the tensor product of the highest weight vectors). Then V ≅ Lλ.
Proof. Being finite dimensional, V decomposes as V = Lλ ⊕ ⊕_{µ≺λ} L_µ^{⊕ mλµ}, where µ ≺ λ means µ ∈ (λ − Q+) ∩ P+. Recall the Casimir element C ∈ U(g) (it even lies in its center): for
xi any basis of g with dual basis x∗i ∈ g under the Killing form,
$$C = \sum_i x_i x_i^* = \sum_{j=1}^{r} y_j^2 + 2\sum_{\alpha\in R_+} f_\alpha e_\alpha$$
(above, yj is some orthonormal basis of h and eα, fα are chosen so that (eα, fα) = 1). Then, C|Lµ acts via
multiplication by (µ, µ + 2ρ). We have shown previously that if µ ≺ λ and µ ≠ λ, then (µ, µ + 2ρ) ≠ (λ, λ + 2ρ).
However, since V is generated by vλ , we know that C|V = (λ, λ + 2ρ) IdV . Therefore, we must have
mλµ = 0 since C has no other eigenvalues.
Note that we then have
$$\alpha_i^\vee = (0, \ldots, 0, \underbrace{1}_{i}, \underbrace{-1}_{i+1}, 0, \ldots, 0) = e_i - e_{i+1} \qquad\text{for } i = 1, \ldots, n-1.$$
The fundamental weights ωi satisfy (ωi, ej − ej+1) = δij, and it is easy to see that these are
$$\omega_i = (\underbrace{1, 1, \ldots, 1}_{i\text{ times}}, 0, 0, \ldots, 0) \qquad\text{for } i = 1, \ldots, n-1.$$
We would like to construct representations corresponding to the fundamental weights. This turns out
to be easy. Let V = Cn be the standard/tautological representation. Let v1, . . . , vn ∈ V be the standard
basis. It is not hard to see that V is irreducible. What is the highest weight? Recall gαi is generated
by ei = Ei,i+1 . From this, it is not too hard to see that the highest weight vector (killed by all ei ) is v1 .
Note that h = (x1 , . . . , xn ) ∈ Cn0 satisfies hv1 = x1 v1 , so the highest weight is ω1 = (1, 0, . . . , 0). Hence,
V = Lω1 .
Note 2. Pavel occasionally draws pictures to illustrate points, but I'm currently too lazy to draw these
and add them to the notes...
Now consider the exterior powers $\bigwedge^m V$ for 1 ≤ m < n. This has basis vi1 ∧ vi2 ∧ · · · ∧ vim for i1 < i2 <
· · · < im. Say v is a highest weight vector. Then, E12 v = 0, E23 v = 0, . . . , En−1,n v = 0. Note that
$$E_{ij}\, v_{i_1} \wedge \cdots \wedge v_{i_m} = \begin{cases} 0 & \text{if } j \notin \{i_1, \ldots, i_m\}, \\ \pm\, v_{i_1} \wedge \cdots \wedge \underbrace{v_i}_{k\text{th place}} \wedge \cdots \wedge v_{i_m} & \text{if } i_k = j. \end{cases}$$
From this one sees that the highest weight vector is v1 ∧ · · · ∧ vm, of weight
$$\omega_m = (\underbrace{1, 1, \ldots, 1}_{m\text{ times}}, 0, 0, \ldots, 0),$$
so $\bigwedge^m V = L_{\omega_m}$.
In particular, $\bigwedge^{n-1} V \cong V^*$. More generally,
$$\left(\textstyle\bigwedge^k V\right)^* \cong \textstyle\bigwedge^{n-k} V.$$
Say λ = Σi miωi. Then, Lλ is the subrep in
$$\bigotimes_{i=1}^{n-1} \left(\textstyle\bigwedge^i V\right)^{\otimes m_i}$$
generated by the tensor product of the highest weight vectors. This is a fairly concrete construction of Lλ.
Thus, dominant integral weights for sln correspond to sequences p1 ≥ p2 ≥ · · · ≥ pn−1 ≥ 0
of nonnegative integers.
Example. When n = 2, get one number p1 ≥ 0 which is exactly our old friend sl2 y Vp1 .
Exercise. SLn(C) is simply connected, so it's fine to not distinguish between representations of it and of
sln(C).
Warning 3.7. gln (C) is not semisimple, so our general theory does not directly apply. However, gln (C) =
sln (C) ⊕ C so it’s close enough for us to be able to understand things.
On the Lie group side, GLn (C) is not quite a product of SLn (C), but instead one has
Proposition 3.8. Rep GLn consists of the reps of SLn × C× on which µn (embedded diagonally) acts trivially.
Example. When n = 1, we just have C×. The Lie algebra is Lie C× = Ch, so a rep of the group corresponds to
a choice of operator H : V → V such that e^{2πiH} = 1 (since e^{2πih} = 1). Hence, H is diagonalizable with
integer eigenvalues. Thus, every representation of C× is completely reducible (since H is diagonalizable),
and its irreps are 1-dimensional, corresponding to n ∈ Z: χn(z) = z^n. (Note that Lie C× is not semisimple,
and its representations are not completely reducible (e.g. think Jordan blocks); moreover, not every rep of
Lie C× lifts to one of C×, since C× is not simply connected.)
For SLn × C×, all representations will be completely reducible. The irreducible representations are
Lλ,N = Lλ ⊗ χN. What about for GLn, i.e. when does the center act trivially? For GLn, you get the
Lλ,N for which |λ| + N is divisible by n (here |λ| = Σ pi, as before).
We can look at this from another perspective. Cn ≅ h ⊂ gln, consisting of diagonal matrices, gives a
Cartan subalgebra (reductive Lie algebras have these as well). The dominant weights will correspond to
tuples (p1, p2, . . . , pn) with p1 ≥ p2 ≥ · · · ≥ pn ∈ Z. The fundamental weights will be ω1, ω2, . . . , ωn with
$$\omega_i = (\underbrace{1, 1, \ldots, 1}_{i\text{ times}}, 0, 0, \ldots, 0)$$
as before. Note that ωn ≠ 0 now (it gives the determinant character). Given λ = m1ω1 + · · · + mnωn, one has
$$L_\lambda \subset \bigotimes_k \left(\textstyle\bigwedge^k V\right)^{\otimes m_k} \qquad\text{with } V = \mathbb{C}^n,$$
and m1, . . . , mn−1 ≥ 0 while mn ∈ Z (possibly negative).
Remark 3.9. If χ is a 1-dim representation and k < 0, we can and do set χ^{⊗k} := (χ∗)^{⊗(−k)}.
So understanding polynomial representations will let us understand everything. We also see that general
matrix elements are of the form P(X)/det(X)^k, so they only extend to all matrices if k = 0 (otherwise we need the determinant to be invertible).
Note that λ1 ≥ · · · ≥ λn is a partition in n parts of
N = |λ| := λ1 + · · · + λn.
Example. The partition (5, 3, 2) corresponds to the Young diagram with rows of 5, 3, and 2 boxes.
Note that Lλ occurs in V^{⊗|λ|}, e.g. L(5,3,2) occurs in V^{⊗10}. Also, |λ| is the eigenvalue of id ∈ gln (when
acting on V^{⊗N}?).
(3) If n = dim V ≥ N, then the πλ exhaust all irreducible representations of SN (each occurring exactly
once).
The πλ correspond to partitions λ of N with ≤ n parts (this condition is meaningless if n > N ), and
this correspondence is independent of n. More on this next time.
4 Lecture 4 (2/25)
4.1 Schur-Weyl duality
We started talking about this last time. Recall we have V = Cn and GL(V ) = GLn (C) naturally acts
on this space. We formed V ⊗N so GL(V ) and gl(V ) = gln (C) have an induced action on V ⊗N . We
decompose
$$V^{\otimes N} = \bigoplus_{\lambda : |\lambda| = N} L_\lambda \otimes \pi_\lambda$$
into a direct sum of irreps Lλ, each with ‘multiplicity’ πλ (note λ ranges over partitions of N with ≤ n
parts). Recall πλ = HomGLn(Lλ, V^{⊗N}).
At the same time, SN acts on V ⊗N by permuting the factors, and this action commutes with the one
of GLn (C). We write SN y V ⊗N x GLn (C) to emphasize that the actions commute. As a consequence,
SN acts on each πλ .
Let A be the image of U (gln ) in EndC (V ⊗N ), and let B ⊂ EndC (V ⊗N ) be the image of the group
algebra CSN . Since the gln and SN actions on V ⊗N commute, these two subalgebras commute with each
other. Beyond this...
(3) If n ≥ N (so all partitions of N have ≤ n parts), then the collection {πλ } gives the full set of
irreducible representations of SN .
Slogan. Symmetric groups and general linear groups have equivalent representation theories.
Lemma 4.2. Let U be a complex vector space. Then, S^N U is spanned by vectors of the form x ⊗ · · · ⊗ x,
x ∈ U.
Proof. Enough to consider finite dimensional U, since any vector in S^N U lies in the symmetric power
of some finite-dimensional subspace of U. Then, S^N U is an irreducible representation of GL(U) (or of
gl(U)), and span{x ⊗ · · · ⊗ x : x ∈ U} is a nonzero subrepresentation, so it must be everything. (Secretly,
S^N U is an irreducible representation even if U is infinite dimensional, so there is no need to reduce to the
finite dimensional case.)
Lemma 4.3. If R is an associative C-algebra, then the algebra S^N R is generated by the elements
$$\Delta_N(x) := (x \otimes 1 \otimes \cdots \otimes 1) + (1 \otimes x \otimes 1 \otimes \cdots \otimes 1) + \cdots + (1 \otimes \cdots \otimes 1 \otimes x)$$
(N summands). One can think of this as x1 + x2 + · · · + xN with xi = 1 ⊗ · · · ⊗ 1 ⊗ x ⊗ 1 ⊗ · · · ⊗ 1, with x in the
ith slot.
Proof. Consider z1, . . . , zN ∈ C[z1, . . . , zN]. By the fundamental theorem on symmetric functions, there exists
a polynomial PN such that PN(Σ zi, Σ zi², . . . , Σ zi^N) = z1 · · · zN (Newton polynomial). We apply
$$P_N(\Delta_N(x), \Delta_N(x^2), \ldots, \Delta_N(x^N)) = x \otimes \cdots \otimes x.$$
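For instance, when N = 2 (a check of my own, not from lecture): P2(p1, p2) = (p1² − p2)/2, and indeed
$$\frac{\Delta_2(x)^2 - \Delta_2(x^2)}{2} = \frac{(x\otimes 1 + 1\otimes x)^2 - (x^2\otimes 1 + 1\otimes x^2)}{2} = x \otimes x.$$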
where Ui ranges over the full set of irreducible representations of B, and Wi ranges over the full set of
irreducible representations of A, and this decomposition is as a module over A ⊗ B. In particular, there
is a bijection between irreps of B and of A, and also B is the centralizer of A.
Proof. We can write $V = \bigoplus_{i=1}^{r} W_i \otimes U_i$ with the Ui irreps of B, and Wi = HomB(Ui, V). By definition,
the centralizer of B is $A = \operatorname{End}_B V = \bigoplus_{i=1}^{r} \operatorname{End}_F(W_i)$.
Question 4.5 (Audience). Why is the number of summands in the decomposition of V equal to the
number of summands in the decomposition of B?
Answer. All Ui must occur since B ↪ End(V) (so End(V) contains the regular rep), and so Wi ≠ 0 for all
i = 1, . . . , r. Similarly, A ↪ End(V), so all of its irreps must occur, so the Wi must be all of them. (TODO: Review this answer.)
“A good mathematical theorem is one that takes one minute to state and one hour to prove, and a
bad one is one that takes one hour to state but one minute to prove.” – Kirillov, paraphrased.
Now we return to Schur-Weyl duality.
Proof of Theorem 4.1. B is a direct sum of matrix algebras since representations of SN are completely
reducible. We need to show that A is the centralizer of B. We know A ⊂ Z(B), i.e. that A is contained in
the centralizer. Note that
Z(B) = S^N(End V)
is the algebra of endomorphisms of V^{⊗N} which commute with the permutation action of SN. The second lemma now
implies that Z(B) is generated by elements of the form ∆N(x) for x ∈ End V = gln. This is exactly the
action of x ∈ gln on V^{⊗N}, so ∆N(x) ∈ A, the image of the enveloping algebra. Hence, Z(B) ⊂ A. At
this point, the third lemma applies, and we obtain everything else:
$$V^{\otimes N} = \bigoplus_i W_i \otimes U_i.$$
Remark 4.7 (Sanity check). Number of irreps of finite group G = number of conj. classes of G. Conjugacy
classes of SN are determined by cycle types, but these exactly correspond to partitions of N .
Remark 4.8. Schur-Weyl duality gives a new proof that reps of gln are completely reducible.
Remark 4.9. The algebra A appearing above is called the Schur algebra. It is always a quotient of
U (gln ) since this is infinite-dimensional while A is finite-dimensional.
We've given an assignment λ ↦ S^λ V := HomSN(πλ, V^{⊗N}), the Schur functor. For example,
$$V \otimes V \otimes V = S^{(3)}V \otimes \mathbb{C}_+ \;\oplus\; S^{(1,1,1)}V \otimes \mathbb{C}_- \;\oplus\; S^{(2,1)}V \otimes \mathbb{C}^2 = S^3 V \oplus \textstyle\bigwedge^3 V \oplus S^{(2,1)}V \otimes \mathbb{C}^2.$$
Note that
$$V \otimes V \otimes V = (V \otimes V) \otimes V = \left(S^2 V \otimes V\right) \oplus \left(\textstyle\bigwedge^2 V \otimes V\right)$$
(the first factor contains S^3, S^{2,1} and the second contains $\bigwedge^3$, S^{2,1} for some reason), so
$$S^2 V \otimes V = S^3 V \oplus S^{2,1}V \qquad\text{and}\qquad \textstyle\bigwedge^2 V \otimes V = \textstyle\bigwedge^3 V \oplus S^{2,1}V.$$
$$\dim S^\lambda V = \prod_{\alpha > 0} \frac{(\lambda+\rho, \alpha)}{(\rho, \alpha)},$$
where as usual αij = ei − ej (for i < j). Note that (ρ, αij) = j − i and (λ, αij) = λi − λj. Hence,
$$\dim S^\lambda V = \prod_{1 \le i < j \le N} \frac{\lambda_i - \lambda_j + j - i}{j - i}.$$
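As a sanity check (mine, not from lecture), take λ = (2, 1) and V = C^N. The only nontrivial factors involve rows 1 and 2:
$$\dim S^{(2,1)} V = 2 \cdot \prod_{j=3}^{N} \frac{j+1}{j-1} \cdot \prod_{j=3}^{N} \frac{j-1}{j-2} = 2 \cdot \frac{N(N+1)}{6} \cdot (N-1) = \frac{N(N^2-1)}{3},$$
and indeed $N^3 = \binom{N+2}{3} + \binom{N}{3} + 2\cdot\frac{N(N^2-1)}{3}$, matching the decomposition of V ⊗ V ⊗ V above.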
Proposition 4.12. dim S^λ V = Pλ(N) is a polynomial in N of degree |λ| with Q-coefficients, and it has integer
roots all lying in [1 − λ1, k − 1], where k is the number of parts of λ (and the endpoints are always roots). Further, Pλ(N) takes integer values
at integers, which means it is a Z-linear combination of binomial coefficients $\binom{N}{m}$. (See e.g. the part of
chapter 1 of Hartshorne where he talks about Hilbert polynomials and numerical polynomials.)
Example. $\dim S^n V = \binom{N+n-1}{n} = P_{(n)}(N)$, and $\dim \bigwedge^n V = \binom{N}{n} = P_{(1,\ldots,1)}(N)$.
Example. Say a ≥ b. One can work out that
$$P_{(a,b)}(N) = \frac{a-b+1}{a+1}\binom{N+a-1}{a}\binom{N+b-2}{b} \;\overset{a=b}{=}\; \frac{1}{a+1}\binom{N+a-1}{N-1}\binom{N+a-2}{N-2}.$$
The a = b case gives Narayana numbers, which combinatorialists apparently care about.
4.3 Characters of symmetric group
Recall
$$\operatorname{ch} L_\lambda = \operatorname{Tr}\big|_{L_\lambda = S^\lambda V}\begin{pmatrix} x_1 & & \\ & \ddots & \\ & & x_n \end{pmatrix}.$$
By the Weyl character formula, this is
$$\operatorname{ch} L_\lambda = \frac{\sum_{\sigma\in S_n} \det(\sigma)\, \sigma\!\left(x_1^{\lambda_1+n-1} x_2^{\lambda_2+n-2}\cdots x_n^{\lambda_n}\right)}{\prod_{i<j}(x_i-x_j)} = \frac{\sum_{\sigma\in S_n} \det(\sigma)\, x_{\sigma(1)}^{\lambda_1+n-1}\cdots x_{\sigma(n)}^{\lambda_n}}{\prod_{i<j}(x_i-x_j)} = \frac{\det\!\left(x_i^{\lambda_j+n-j}\right)_{i,j}}{\prod_{i<j}(x_i-x_j)} = \frac{\det\!\left(x_i^{\lambda_j+n-j}\right)_{i,j}}{\det\!\left(x_i^{n-j}\right)_{i,j}} =: S_\lambda(x),$$
the Schur polynomial.
• I'm not sure how to type notes on what he's saying right now... The upshot is that if σ has mi
cycles of length i (note Σ_{i≥1} i·mi = N), then
$$\operatorname{Tr}(x, \sigma) = \prod_i \operatorname{Tr}\big|_V(x^i)^{m_i} = \prod_i \left(x_1^i + \cdots + x_n^i\right)^{m_i}.$$
• Recall Schur–Weyl: $V^{\otimes N} = \bigoplus_\lambda L_\lambda \otimes \pi_\lambda$, with x acting on the first factor and σ acting on the second.
Therefore,
$$\operatorname{Tr}(x, \sigma) = \sum_\lambda S_\lambda(x)\, \chi_\lambda(\sigma).$$
Thus,
$$\sum_\lambda S_\lambda(x)\, \chi_\lambda(\sigma) = \prod_i \left(x_1^i + \cdots + x_n^i\right)^{m_i},$$
i.e.
$$\sum_\lambda \left(\sum_{s\in S_n} \det(s)\, x_{s(1)}^{\lambda_1+n-1} \cdots x_{s(n)}^{\lambda_n}\right) \chi_\lambda(\sigma) = \prod_{i<j}(x_i - x_j)\, \prod_i \left(x_1^i + \cdots + x_n^i\right)^{m_i}.$$
Thus, χλ(σ) is the coefficient of $x_1^{\lambda_1+n-1} \cdots x_n^{\lambda_n}$ in the product
$$\prod_{1\le i<j\le n}(x_i - x_j)\, \prod_i \left(x_1^i + \cdots + x_n^i\right)^{m_i}.$$
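One can mechanize this coefficient extraction; the helper below is my own illustration (not from the lecture), assuming sympy is available, and reproduces the character table row of the 2-dimensional irrep π(2,1) of S3.

```python
from sympy import symbols, prod, expand, Poly

def frobenius_character(lam, cycle_type):
    """chi_lambda(sigma) as the coefficient of x1^(l1+n-1)...xn^(ln) in
    prod_{i<j}(x_i - x_j) * prod_i (x_1^i + ... + x_n^i)^{m_i}."""
    n = len(lam)
    xs = symbols(f"x1:{n+1}")
    vandermonde = prod(xs[i] - xs[j] for i in range(n) for j in range(i + 1, n))
    power_sums = prod(sum(x**i for x in xs) ** cycle_type.count(i)
                      for i in set(cycle_type))
    coeffs = Poly(expand(vandermonde * power_sums), *xs).as_dict()
    target = tuple(lam[i] + n - 1 - i for i in range(n))
    return coeffs.get(target, 0)

# chi_{(2,1)} on S_3: identity -> 2, transposition -> 0, 3-cycle -> -1.
print(frobenius_character([2, 1, 0], [1, 1, 1]),
      frobenius_character([2, 1, 0], [2, 1]),
      frobenius_character([2, 1, 0], [3]))
```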
5 Lecture 5 (3/2)
Last time we finished with the character formula for representations of the symmetric group using Schur-
Weyl duality. We will next take a quick look at invariant theory; in particular, we want to prove the
‘fundamental theorem of invariant theory’ (due to Weyl).
Note 3. Pavel said more things about how physicists think about tensors, but I didn’t care enough to
write it down.
We look for polynomial functions invariant under the GL(V )-action.
Example. If T is a linear operator, then det and Tr are both invariant functions.
It is enough to classify invariant functions of degree di with respect to each Ti. This is equivalent to
looking for invariants in
$$\bigotimes_i S^{d_i}\!\left(V^{\otimes m_i} \otimes (V^*)^{\otimes n_i}\right)^*$$
(above, S denotes the symmetric power). Finding invariant functions in this space looks formidable, but in fact it
isn't.
To describe such invariant functions, attach to each ‘variable’ Ti a vertex with ni outgoing edges and mi
incoming edges. Put on the plane di such vertices of each type i. Invariant functions can be built by
contractions of tensors: draw a graph by connecting vertices in a way which respects directions and which
makes use of each edge/stub attached to a vertex.
Example. Say T ∈ V ⊗2 ⊗ (V ∗ )⊗2 and we want a degree 3 invariant. Then we could form a graph as in
Figure 1.
Figure 1: An example graph giving an invariant function
Apparently these graphs are related to Feynman diagrams. To every such graph Γ, one can attach an
invariant FΓ .
Theorem 5.1 (Fundamental Theorem of Invariant Theory). Such functions FΓ , as Γ varies, span
the space of invariant functions.
Example. Say you have a linear operator T : V ! V , so T ∈ V ⊗ V ∗ . Say we want degree d invariant
polynomials. Then we need to start with d copies of a vertex with one outgoing edge and one incoming
edge. Then we need to connect them in some way. The graph in Figure 2, for example, corresponds to
the function FΓ = Tr(T 2 ) · Tr(T 2 ). Each cycle corresponds to the trace of T to the length of that cycle.
Hence, in this case, the theorem says that degree 4 invariant functions are spanned by Tr(T)^4, Tr(T)^2 Tr(T^2), Tr(T^2)^2, Tr(T) Tr(T^3), and Tr(T^4).
Hence, the algebra of invariant polynomials of T is generated by traces of powers of T, i.e. Tr(T), Tr(T^2), Tr(T^3), . . . .
Observe that these are not algebraically independent (e.g. the characteristic polynomial can be used to get
relations between them; if T is n × n, then one should only need to know Tr(T^i) for i ≤ n).
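A quick numerical illustration of such a relation (mine, not from lecture): for 2 × 2 matrices, Cayley–Hamilton gives T² = Tr(T)·T − det(T)·I, hence Tr(T³) = Tr(T)·Tr(T²) − det(T)·Tr(T).

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 2))

tr1, tr2, tr3 = (np.trace(np.linalg.matrix_power(T, k)) for k in (1, 2, 3))
# Cayley-Hamilton for 2x2: T^2 = tr(T) T - det(T) I, so tr(T^3) = tr(T) tr(T^2) - det(T) tr(T).
print(np.isclose(tr3, tr1 * tr2 - np.linalg.det(T) * tr1))  # True
```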
Proof of Theorem 5.1. An invariant function can be viewed as an element of the tensor product
$$\bigotimes_{i=1}^{k}\left(V^{\otimes m_i}\otimes V^{\otimes(-n_i)}\right)^{\otimes(-d_i)} = \operatorname{Hom}_{\mathbb C}\!\left(V^{\otimes\sum_i n_i d_i},\; V^{\otimes\sum_i m_i d_i}\right).$$
We want GL(V)-invariants in this space. By Schur–Weyl duality, nonzero invariants only exist if Σi ni di =
Σi mi di (i.e. the same number of incoming and outgoing arrows). If so, then Schur–Weyl duality also tells us
that the invariants are spanned by permutations (i.e. a matching of incoming arrows to outgoing arrows). (Question: why?)
This means that these invariants are spanned by the FΓ's for various Γ.
In order to pass from tensor products to symmetric power, simply project using symmetrization. This
will cause some a priori different graphs to correspond to the same invariant, but this does not affect
spanning.
Remark 5.2. SW duality tells us that if dim V ≫ 0 (dim V ≥ Σi ni di, I think), then invariants cor-
responding to different permutations are linearly independent (in the tensor product). Symmetrization
identifies some of these, but if you remove the redundant ones, what are left are still linearly independent.
From this one can deduce that for large dim V and fixed di, mi, ni, the invariants FΓ corresponding to
non-isomorphic graphs Γ are linearly independent. In this way, one gets a basis of the space of invariant
functions.
Example. Suppose T1, . . . , Tk are operators V → V, so all vertices have 2 arrows, one incoming and
one outgoing. Hence, all graphs Γ are unions of cycles. Each cycle gives the trace of the product of the
operators appearing in the cycle. (Remember: the trace of a product is invariant under cyclic permutation.)
The upshot is that the algebra of polynomial invariants of k linear maps T1, . . . , Tk : V → V is
generated by traces Tr(Ti1 · · · Tim) of cyclic words⁶ in T1, . . . , Tk. These generators are “asymptotically
algebraically independent” in the sense that for a fixed degree d and dim V sufficiently large, these generators do
not satisfy any nontrivial relations in degree d.
Corollary 5.3. There are no universal polynomial identities for (square) matrices of all sizes.
Proof. Suppose P(X1, . . . , Xn) = 0 for all X1, . . . , Xn. Introduce another variable Xn+1, and consider
F = Tr(P(X1, . . . , Xn)Xn+1) = 0. Traces of words are asymptotically independent, so if it vanishes for
all sizes of matrices, then P = 0.
If you fix a size, then such identities do exist, e.g. for size 1 you have XY − YX = 0 (i.e. multiplication
of scalars is commutative). For size 2, you have [(XY − YX)², Z] = 0.⁷ This fails for size 3.
In general, for size n, there is the Amitsur–Levitzki identity: for X1, . . . , X2n of size n,
$$\sum_{\sigma \in S_{2n}} \operatorname{sign}(\sigma)\, X_{\sigma(1)} \cdots X_{\sigma(2n)} = 0$$
(homework).
6: i.e. words defined only up to cyclic permutation.
7: Why? XY − YX has trace 0 and is generically diagonalizable, so it looks like diag(λ, −λ). Hence, its square looks like
λ²I, which is in the center.
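A quick numerical check (mine, not from the lecture) that the size-2 identity above holds for random 2 × 2 matrices and fails for 3 × 3 ones:

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(2)
for n in (2, 3):
    X, Y, Z = (rng.standard_normal((n, n)) for _ in range(3))
    residual = comm(comm(X, Y) @ comm(X, Y), Z)   # [(XY - YX)^2, Z]
    print(n, np.allclose(residual, 0))            # 2 True, 3 False
```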
5.2 Howe Duality
Let V, W be two finite-dimensional complex vector spaces. Then consider S^n(V ⊗ W) as a representation of GL(V) ×
GL(W).
Theorem (Howe duality). $S^n(V \otimes W) = \bigoplus_{\lambda : |\lambda| = n} S^\lambda V \otimes S^\lambda W$
(if λ has > dim V or > dim W parts, then the corresponding summand is 0).
Proof. Applying Schur–Weyl duality to each factor of $(V\otimes W)^{\otimes n} = V^{\otimes n} \otimes W^{\otimes n}$ and taking Sn-invariants for the diagonal action gives summands indexed by pairs
λ, µ : |λ| = |µ| = n, with multiplicity space $(\pi_\lambda \otimes \pi_\mu)^{S_n}$.
Now we know that the character of πλ is integer-valued (e.g. by the Frobenius formula), so πλ ≅ πλ* (character
fixed by complex conjugation), so $(\pi_\lambda \otimes \pi_\mu)^{S_n} = \operatorname{Hom}_{S_n}(\pi_\lambda, \pi_\mu)$.
Schur–Weyl duality tells us that the πλ are irreducible and pairwise non-isomorphic, so we conclude that
$$S^n(V \otimes W) = \bigoplus_{\lambda : |\lambda| = n} S^\lambda V \otimes S^\lambda W,$$
as desired.
Corollary (Cauchy identity).
$$\sum_\lambda s_\lambda(x)\, s_\lambda(y)\, z^{|\lambda|} = \prod_{i=1}^{r}\prod_{j=1}^{s} \frac{1}{1 - z x_i y_j}.$$
Proof. We use the Molien formula: let A : V → V be a linear operator on a f.d. vector space V (over
any field), and let S^n A : S^n V → S^n V be the induced action of A. Then
$$\sum_{n=0}^{\infty} \operatorname{Tr}(S^n A)\, z^n = \frac{1}{\det(1 - zA)}.$$
This is easy to prove. Let x1, . . . , xr be the eigenvalues of A (so r = dim V); then the eigenvalues of S^n A
are $x_1^{m_1} \cdots x_r^{m_r}$ with m1 + · · · + mr = n. Hence,
$$\operatorname{Tr}(S^n A) = \sum_{m_1 + \cdots + m_r = n} x_1^{m_1} \cdots x_r^{m_r} = h_n(x_1, \ldots, x_r),$$
the complete homogeneous symmetric polynomial, and $\sum_n h_n(x) z^n = \prod_i \frac{1}{1 - z x_i} = \frac{1}{\det(1 - zA)}$.
So let g ↷ V = C^r and h ↷ W = C^s (with eigenvalues x and y respectively). Then,
$$\operatorname{Tr} S^n(g \otimes h) = \operatorname{Tr}_{S^n(V\otimes W)}(g \otimes h) = \sum_{\lambda : |\lambda| = n} \operatorname{Tr}_{S^\lambda V}(g)\, \operatorname{Tr}_{S^\lambda W}(h) = \sum_{\lambda : |\lambda| = n} s_\lambda(x)\, s_\lambda(y).$$
Hence,
$$\sum_\lambda s_\lambda(x)\, s_\lambda(y)\, z^{|\lambda|} = \sum_n \operatorname{Tr} S^n(g \otimes h)\, z^n = \frac{1}{\det(1 - z(g\otimes h))} = \prod_{i,j} \frac{1}{1 - z x_i y_j}.$$
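A small check of the Molien formula (mine, assuming sympy is available), for A = diag(2, 3), where h_n(2, 3) = 3^{n+1} − 2^{n+1}:

```python
from sympy import symbols, series

z = symbols("z")
molien = 1 / ((1 - 2*z) * (1 - 3*z))   # 1/det(1 - zA) for A = diag(2, 3)

# Coefficient of z^n should be h_n(2,3) = 3^(n+1) - 2^(n+1).
expansion = series(molien, z, 0, 5).removeO()
for n in range(5):
    assert expansion.coeff(z, n) == 3**(n+1) - 2**(n+1)
print("Molien formula verified up to n = 4")
```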
Definition 5.6. A dominant, integral weight ω ∈ P+ is minuscule if for all positive coroots β, we have
(ω, β) ≤ 1 (i.e. (ω, β) ∈ {0, 1}). This is equivalent to requiring that for all coroots β, |(ω, β)| ≤ 1.
Example. For sln , all fundamental weights are minuscule. Recall the fundamental weights are ωi =
(1, 1, . . . , 1, 0, . . . , 0), so we see (ωi , ej − ek ) = 0, 1 (with j < k).
Lemma. Any nonzero minuscule weight is a fundamental weight.
Proof. The inner products (ω, αi∨) are 0 or 1 for minuscule weights. However, it can be 1 only for one i, since
otherwise (ω, θ∨) ≥ 2, where the maximal coroot is θ∨ = Σ mk αk∨ with mk > 0 for all k.
Proposition 5.9. A fundamental weight ωi is minuscule ⇐⇒ mi = 1, where θ∨ = Σi mi αi∨.
Proof. mi = (ωi, θ∨), so for minuscule ωi, we have mi = 1. If mi = 1, then for all coroots β, (ωi, β) ≤ 1
(e.g. since β ∈ θ∨ − Q+), so ωi is minuscule.
Theorem 5.10. ω ∈ P+ is minuscule ⇐⇒ all weights of Lω are in the Weyl group orbit of the highest
weight.
6 Lecture 6 (3/4)
Note 4. Video for last class not up yet, so we’ll see how much things make sense today...
Remark 6.2. Any integral weight can be conjugated to a dominant one via an element of the Weyl group.
Example. ω = 0.
Example. Say g = sln . All fundamental weights are minuscule ωi = (1, 1, 1, . . . , 1, 0, 0, . . . , 0) (with i
ones).
(Suppose ω ≠ 0 satisfies |(ω, β)| ≤ 1 for all coroots β and lies in Q, written ω = Σi mi αi; we derive a contradiction.) Then
$$0 < (\omega, \omega) = \sum_i m_i (\omega, \alpha_i),$$
so there's some index j s.t. mj and (ω, αj) are nonzero and of the same sign. Replacing ω by −ω if
needed, we may assume mj, (ω, αj) > 0. Since αj∨ is a positive multiple of αj, we also have (ω, αj∨) > 0.
By hypothesis, we know (ω, αj∨) ≤ 1, so (ω, αj∨) = 1. Consider
$$s_j \omega = \omega - (\omega, \alpha_j^\vee)\alpha_j = \omega - \alpha_j = \sum_i m_i' \alpha_i,$$
where m'i = mi if i ≠ j and m'j = mj − 1. Note that Σi |m'i| = Σi |mi| − 1, but sjω is also a counterexample
(modifying by the Weyl group does not affect the property; and ω ≠ αj since (αj, αj) = 2 > 1, I think).
Example. For G2, P = Q (weight lattice = root lattice), so there are no nonzero minuscule weights.
Proposition 6.7. A weight ω ∈ P+ is minuscule iff for all α ∈ Q+ (α 6= 0), ω − α is not dominant.
Proof. (⇒) Suppose ω = ωk is minuscule and α ∈ Q+ is nonzero. Suppose also that ωk − α is dominant.
We can write α = Σi mi αi with mi ∈ Z+. If mj = 0 for some j ≠ k, then this reduces to smaller rank
(we can delete vertex j from the Dynkin diagram⁸), so we may assume mj > 0 for j ≠ k. Then, for all positive
coroots β,
(α, β) = (ωk, β) − (ωk − α, β) ≤ (ωk, β) ≤ 1
(ωk − α dominant =⇒ (ωk − α, β) ≥ 0), with equality only if β involves αk∨. If β does not involve αk∨, then
(α, β) ≤ 0. In particular, (α, αi∨) ≤ 0 if i ≠ k and (α, αk∨) ≤ 1.
Now, if (α, αk∨) ≤ 0, we'd get (α, α) ≤ 0, a contradiction (α is a positive linear combination of positive
roots). Thus, (α, αk∨) = 1. As a consequence, mk > 0 (mk = 0 =⇒ (α, αk∨) ≤ 0, since α then only involves
simple roots/coroots with different indices and those entries in the Cartan matrix are always ≤ 0). Thus,
(α, θ∨) ≥ 1, so (ωk − α, θ∨) = 1 − (α, θ∨) ≤ 0, which forces ωk − α = 0. Hence, ωk ∈ Q, a contradiction to
the previous lemma.
(⇐) Suppose ω is not minuscule. We'll produce an α ∈ Q+ s.t. ω − α is dominant. Since ω is not
minuscule, there exists a positive root γ s.t. (ω, γ∨) ≥ 2. Consider⁹ ω − γ. We first claim this is not
conjugate to ω. Observe
(ω − γ, ω − γ) = (ω, ω) − 2(γ, ω) + (γ, γ).
Since 2(γ, ω)/(γ, γ) = (γ∨, ω) ≥ 2 > 1, we see that 2(γ, ω) > (γ, γ), so (ω − γ, ω − γ) < (ω, ω), which
means ω − γ ∉ Wω. Now, pick w ∈ W such that λ := w(ω − γ) ∈ P+. Then, λ ≠ ω, but ω − λ ∈ Q+
because ω − γ is a weight of Lω (for the vector¹⁰ fγ vω).
Remark 6.8. We have a classification of root systems/semisimple Lie algebras from last time, so in prin-
ciple, we could just go through the list and check which weights are minuscule. This would be unsatisfying,
so we don't do that.
Proposition 6.10. ω is minuscule ⇐⇒ the Weyl group W acts transitively on the weights of the irrep
Lω .
Proof. (⇒) Let µ be a weight of Lω. Pick w ∈ W such that wµ is dominant. Then, wµ = ω − α for some
α ∈ Q+. This implies that wµ = ω, so µ = w⁻¹ω.
(⇐) If ω is not minuscule, take γ as in the previous proof, and consider ω − γ, the weight of fγ vω ∈ Lω.
This is nonzero, so ω − γ is a weight not in the orbit of ω.
Remark 6.12. The converse of this is false. Think about reps of sl2 , for example.
8: Pass to the root subsystem generated by αi for i ≠ j.
9: “Just for fun, let us use representation theory.”
10: This is nonzero since hγ vω = (γ∨, ω) vω ≠ 0.
Corollary 6.13. The character of Lω is
$$\chi_\omega = \sum_{\lambda \in W\omega} e^\lambda.$$
You could also compute this using Weyl’s character formula. Comparing the two would lead to some
nontrivial identity.
Corollary 6.14. If α is a root of g, then Lω |(sl2 )α is a direct sum of 1-dimensional and 2-dimensional
representations of (sl2 )α .
Proof. Let v ∈ Lω be a highest weight vector for (sl2)α, of some weight λ. Then hαv = (λ, α∨)v, with |(λ, α∨)| ≤ 1 since ω is minuscule.
It can't be −1 (since v is a highest weight vector), so it's 0 or 1. Hence, v generates a 1-d or 2-d rep of (sl2)α.
Note 5. Pavel worked out an example looking at B2 , but I did not pay attention. I should go back and
watch the video and add it in later...
If ω ∈ P+ is minuscule and λ ∈ P+, then $L_\lambda \otimes L_\omega = \bigoplus_{\mu \in W\omega} L_{\lambda+\mu}$ (if λ + µ ∉ P+, the corresponding term drops out, i.e. we really sum over µ ∈ Wω s.t. λ + µ ∈ P+).
If λ + ν ∉ P+, then there exists i s.t. (λ + ν, αi∨) < 0. But (λ, αi∨) ≥ 0 and |(ν, αi∨)| ≤ 1, so
(λ + ν, αi∨) = (ν, αi∨) = −1, which means (λ, αi∨) = 0. We know (ρ, αi∨) = 1, so
(λ + ν + ρ, αi∨) = 0.
This means si(λ + ν + ρ) = λ + ν + ρ, so the terms det(w)e^{w(λ+ν+ρ)} and det(wsi)e^{wsi(λ+ν+ρ)} will cancel.
This justifies ignoring the terms not in P+.
Recall all fundamental representations of sln are minuscule.
Corollary 6.16. Let V = Cn be the vector representation of GLn. Then, for any partition λ,
$$V \otimes L_\lambda = \bigoplus_{i=1}^{n} L_{\lambda + e_i},$$
but only the first two and last terms survive. Hence, V ⊗ sl(V ) = V ⊕(two other irreps).
We can give a combinatorial interpretation of the previous corollary. Recall that partitions correspond
to Young diagrams, e.g. λ = (5, 3, 1, 1, 0) (say n = 5) is the diagram with rows of 5, 3, 1, 1 boxes.
Note that the weights λ + ei each correspond to adding a square to one row of λ. If adding the square
produces another Young diagram (preserves monotonicity), we call it an addable box. Thus, we see
that
$$V \otimes L_\lambda = \sum_{\lambda' = \lambda + \square} L_{\lambda'}$$
(sum over addable boxes). We can do the same thing for exterior powers.
Recall $L_{\omega_i} = \bigwedge^i V$, with weights given by (a1, . . . , an) s.t. aj ∈ {0, 1} and there are exactly i copies
of 1. Adding λ + (a1, . . . , an) corresponds to adding i boxes to different rows of λ. Since ωi is minuscule,
we have
$$\textstyle\bigwedge^i V \otimes L_\lambda = \bigoplus_{\substack{I \subset \{1,\ldots,n\} \\ |I| = i}} L_{\lambda + e_I}.$$
Graphically, $\bigwedge^i V \otimes L_\lambda$ is a sum over Young diagrams obtained from λ by adding i boxes in different
rows.
Example. Say n = 3. Let's compute $\bigwedge^2 V \otimes L_{(3,2,1)}$. The partition λ = (3, 2, 1) is the staircase Young diagram with rows of 3, 2, 1 boxes.
The diagrams in the sum are obtained by adding two boxes in different rows, i.e.
$$\textstyle\bigwedge^2 V \otimes L_{(3,2,1)} = L_{(4,3,1)} + L_{(4,2,2)} + L_{(3,3,2)}.$$
If we were over GL4, there would be extra summands, e.g. an L(4,1,1,1) and an L(3,2,1,1).
Proposition 6.17. Every coset of P/Q contains a unique minuscule weight, so there’s a bijection between
P/Q and minuscule weights.
Corollary 6.18. The number of miniscule weights is #P/Q = det A, where A is the Cartan matrix.
Example. Bn (o(2n + 1)) has det = 2, so there's only one (nonzero) minuscule weight. The corresponding
representation here is called the ‘spinor representation’.
Cn (sp(2n)) has det = 2, so there's again one minuscule weight. One can check that it corresponds to
the vector representation.
Dn (o(2n)) has det = 4, so 3 minuscule weights. These are the vector representation V and two spinor
representations S±.
More on rep theory of orthogonal and symplectic groups next time. After that, we will start looking
at the theory of compact Lie groups.
7 Lecture 7 (3/11)
7.1 Fundamental weights/representations for classical Lie algebras
7.1.1 Type Cn
We begin with the symplectic Lie algebra g = sp2n. Let $B = \sum_{i=1}^{n} x_i \wedge x_{i+n}$ be our symplectic form. A
natural choice of Cartan subalgebra is
$$\mathfrak{h} = \left\{ \begin{pmatrix} \operatorname{diag}(a_1, \ldots, a_n) & \\ & -\operatorname{diag}(a_1, \ldots, a_n) \end{pmatrix} \right\} \cong \mathbb{C}^n.$$
In Rn, the simple roots are
$$\alpha_i = e_i - e_{i+1} = \alpha_i^\vee \text{ for } i = 1, \ldots, n-1, \qquad\text{and}\qquad \alpha_n = 2e_n \text{ with } \alpha_n^\vee = e_n = \tfrac12\alpha_n.$$
[Figure 3: The Dynkin diagram Cn]
Recall 7.2. The number of minuscule weights equals the determinant of the Cartan matrix.
Here, the determinant is 2, so there is 1 nonzero minuscule weight. It is the weight ω1, corresponding to
the vertex on the left end of the Dynkin diagram.
Example. Consider the representation V = C^{2n}. Its weights are e1, . . . , en and −e1, . . . , −en. The Weyl
group is W = Sn ⋉ (Z/2Z)^n (Sn permutes while (Z/2Z)^n changes signs). Note V ≅ V∗. The highest
weight here is e1 = ω1, which is the minuscule weight.
What about other fundamental representations? Well, g has the same fundamental weights as GLn,
so maybe we should expect $L_{\omega_i} = \bigwedge^i V$? This is true for i = 1.
But it is not true for i = 2! $\bigwedge^2 V$ is not irreducible (but it has the correct highest weight). The symplectic
form B is non-degenerate, so we can invert it:
$$B^{-1} = \sum_i x^*_{i+n} \wedge x^*_i \in \textstyle\bigwedge^2 V,$$
and B⁻¹ ∈ ∧²V generates a copy of the trivial representation C. We can write
$$\textstyle\bigwedge^2 V = \mathbb{C}B^{-1} \oplus \bigwedge^2_0 V, \qquad\text{where}\qquad \bigwedge^2_0 V = \left\{ y \in \textstyle\bigwedge^2 V : (y, B) = 0 \right\}.$$
Exercise. $\bigwedge^2_0 V$ is irreducible.¹¹
Hence, $L_{\omega_2} = \bigwedge^2_0 V$, so our intuition was not bad.
Example. For sp(4), there are only two fundamental weights: V = C⁴ and ∧²V = C ⊕ ∧²₀V, which is 6-dimensional (so ∧²₀V is 5-dimensional).
Recall sp(4) ≅ o(5), which has a 5-dimensional vector representation.
Consider the full exterior algebra
$$\textstyle\bigwedge V := \bigoplus_{i=0}^{2n} \bigwedge^i V.$$
11: Look at weights, or write the character formula, or show it directly.
Recall we have B ∈ ∧²V∗. Hence, given T ∈ ∧^{i+1}V, we can form ιB T ∈ ∧^{i−1}V. In the other direction,
we may wedge with B⁻¹ to get mB : ∧^{i−1}V → ∧^{i+1}V.
Proposition 7.3.
(1) The operators mB, ιB, and h (where hT = (i − n)T for T ∈ ∧^i V) form an sl2-triple.
(3) $$\textstyle\bigwedge V = \bigoplus_{i=0}^{n} L_{\omega_i} \otimes L_{n-i},$$
where ω0 = 0 and L_{n−i} is the sl(2)-rep with highest weight n − i and dimension n − i + 1. (This is another instance of
the double centralizer property.)
(4) Every irreducible representation of sp2n occurs in V^{⊗N} for some N (since all fundamental reps
do).¹³
Proof. Homework.¹⁴
Remark 7.4. Note $\dim \bigwedge^i V = \binom{2n}{i}$, so these dimensions form a bell curve shape.
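As a dimension check of my own for n = 2 (i.e. sp4):
$$\dim \textstyle\bigwedge \mathbb{C}^4 = 2^4 = 16 = \underbrace{1\cdot 3}_{L_0\otimes L_2} + \underbrace{4\cdot 2}_{L_{\omega_1}\otimes L_1} + \underbrace{5\cdot 1}_{L_{\omega_2}\otimes L_0}.$$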
7.1.2 Type Bn
We now have the Lie algebra g = o2n+1. The simple roots here are αi = ei − ei+1 for i = 1, . . . , n − 1 and
αn = en. The coroots are αi∨ = αi for i = 1, . . . , n − 1 and αn∨ = 2en = 2αn. The fundamental weights
are
$$\omega_i = (\underbrace{1, \ldots, 1}_{i}, 0, \ldots, 0) \text{ for } i = 1, \ldots, n-1, \qquad\text{and}\qquad \omega_n = \left(\tfrac12, \ldots, \tfrac12\right).$$
The Weyl group is W = Sn ⋉ (Z/2Z)^n (permute coordinates and change signs). The Cartan matrix
[Figure 4: The Dynkin diagram Bn]
again has determinant 2 (it is the transpose of the previous one?), so there is only one nontrivial minuscule weight. This time it
is ωn.
12: This should follow from the rep theory of sl2.
13: This will not be true for the orthogonal groups and is “the reason our world exists” (something about spin in physics).
14: Should be “easy” after establishing you have an sl2-rep. Irreducibility should come from looking at characters (correct
highest weight and correct dimension).
Warning 7.5. Lω1 = V = C^{2n+1} is the vector representation, but it is not minuscule.
Example. The weights of C⁵ (for so(5)) include 0, but 0 is not a weight of a minuscule representation
(the Weyl group acts transitively on the weights).
Exercise. The $\bigwedge^i V$ are irreducible for 1 ≤ i ≤ n, so
$$\textstyle\bigwedge^i V = L_{\omega_i} \text{ for } i \le n-1 \qquad\text{and}\qquad \textstyle\bigwedge^n V = L_{2\omega_n}.$$
Remark 7.6. For Cn , we have an invariant skew-symmetric form. For Bn , we now have an invariant
symmetric form. Hence something falls off for symmetric powers instead of for exterior powers.
Now consider the spinor representation S := Lωn. What are its weights? It's minuscule, so its weights should be an orbit under the Weyl group. Hence,
they'll be (±½, ±½, . . . , ±½) for any choice of signs. Hence, dim S = 2ⁿ. What is the character of S?
We are using the quadratic form Q = x1x_{n+1} + · · · + xn x_{2n} + x²_{2n+1}, so o(2n + 1) fixes this form. The
natural Cartan subalgebra consists of the h with
$$e^h = \operatorname{diag}(x_1, \ldots, x_n, x_1^{-1}, \ldots, x_n^{-1}, 1).$$
Then
$$\chi_S = \sum_{\pm} x_1^{\pm\frac12} \cdots x_n^{\pm\frac12} = \left(x_1^{\frac12} + x_1^{-\frac12}\right)\cdots\left(x_n^{\frac12} + x_n^{-\frac12}\right).$$
What's up with these 1/2 powers? The square root of a complex number is only defined up to sign, so does
this make sense? What does this mean? It means this representation does not lift to the orthogonal
group SO(2n + 1). (If it did, the exponents would all be integers.) The point is that the orthogonal group is not simply connected, so not all Lie algebra
representations lift to it.¹⁵ For the same reason, S does not occur in V^{⊗N}. Elements of S are called
spinors.
Definition 7.8. The universal cover of SO2n+1 (C) is called the Spin group, denoted Spin2n+1 (C).
Example. In dimension 3, SL2(C) = Spin3 is a double cover of SO(3). What is the map SL2(C) → SO(3)?
Take the 3-dimensional representation of SL2; this is the adjoint representation, which has an invariant
form, the Killing form. The kernel of this map is Z/2Z ≅ {±I}, the center of SL2, so we see the exact
sequence
$$1 \to \mathbb{Z}/2\mathbb{Z} \to \mathrm{SL}_2 \to \mathrm{SO}_3 \to 1.$$
15: It will, however, lift to its universal cover.
16: π1(SO2(C)) = Z, apparently.
Theorem 7.9. For n ≥ 3, π1(SOn(C)) = Z/2Z, so Spinn(C) → SOn(C) is a double cover.¹⁶
Lemma 7.10. Let Xn = {(z1, . . . , zn) ∈ Cn : z1² + · · · + zn² = 1}. Then, Xn is simply connected for
n ≥ 3, and π2(Xn) = 1 for n ≥ 4.
Proof. Consider Xn^R = Xn ∩ Rn = S^{n−1} ⊂ Rn. This is simply connected for n ≥ 3, so it suffices to show
that Xn deformation retracts onto Xn^R. Consider some z ∈ Xn and write z = x + iy with x, y ∈ Rn.
Then,
1 = z² = x² − y² + 2ix · y =⇒ x² − y² = 1 and x · y = 0.
A deformation retraction is given by
$$f_t(x + iy) = \frac{x + ity}{\sqrt{x^2 - t^2 y^2}}.$$
Note that when n = 4, we have X4 ≅ {ad − bc = 1 : a, b, c, d ∈ C} = SL2(C), so this lemma recovers
the fact that SL2(C) is simply connected.
Proof of Theorem 7.9. We will induct in n. Note that we already know the theorem when n = 3. Note
that SOn y Xn transitively17 , so Xn is a homogeneous space. What is the stabilizer?
since if it preserves e1 it’ll also preserve the orthocomplement of e1 . Hence, Xn = SOn / SOn−1 , so we
have a fiber sequence SOn−1 SOn Xn . Thus, we get an exact sequence
The previous lemma shows π1 (Xn ) = 1 for n ≥ 3 and π2 (Xn ) = 1 for n ≥ 4. Hence, we win.
7.1.3 Type Dn
Finally, we consider the Lie algebra g = o2n . As usual, V = C2n is the vector representation. The
simple roots are α1 = e1 − e2 , . . . , αn−1 = en−1 − en and αn = en−1 + en . The fundamental weights are
ω1 = (1, 0, . . . , 0), ω2 = (1, 1, 0, . . . , 0) up to ωn−2 = (1, . . . , 1, 0, 0) and then
1 1 1 1 1 1
ωn−1 = ,..., ,− and ωn = ,..., , .
2 2 2 2 2 2
We know have two spinor representations, Lωn−1 = S− and Lωn = S+ . In this case, the Cartan
matrix has determinant det A = 4, so there are 3 miniscule fundamental weights. These are ωn−1 , ωn , ω1 .
37
ωn−1
ω1 ω2 ··· ωn−2
ωn
n
The Weyl group here is Sn n (Z/2Z)0 where the 0 subscript means elements whose coordinates sum to 0.
Remark 7.12. Exterior powers are irreducible for i ≤ n − 1 still. Hence, for i ≤ n − 2,
^i
V = Lωi .
Some aspects of orthogonal groups are uniform and some depend on even or odd. Some even depend
on residue mod 4, and some even depend on residue mod 8. This is related to Bott periodicity. More on
this on a homework.
∗
Example. S+ = S+ or S+
∗
= S− depending on n mod 4. When S+
∗
= S+ it has an invariant inner
product. Is it symmetric or skew-symmetric? This depends on n mod 8.
What do the spinor representations S± look like? The Weyl group allows us to permute factors and
change an even number of signs. Thus, the weights of S+ are the vectors
1 1
± ,...,±
2 2
with an even number of +’s while the weights of S− are those with an odd number of +’s ( ⇐⇒ an odd
number of −’s). Thus, we get the characters
n
!
1
−1
Y
χS± = xi +
2
xi 2 ,
i=1 ±
so S+ , S− don’t occur in V ⊗N and they don’t lift to SO2n . We again define Spin2n to be the universal
cover of SO2n (again a double cover by previous theorem).
8 Lecture 8 (3/16)
8.1 Last time
We talked about representations of of o(V ). When V = C2n this is type Dn . When C = C2n+1 , this is
type Bn . We also talked about spinor representations.
38
For o2n+1 , the spinor representation is associated to ωn = (1/2, 1/2, . . . , 1/2) and has dimension
dim S = 2n .
For o2n , there are two spinor representations S+ = Lωn and S− = Lωn−1 where ωn−1 = (1/2, . . . , 1/2, −1/2)
and ωn = (1/2, . . . , 1/2, 1/2).
We know they don’t occur in the tensor products of vector representations, so we have to do something
new.
We need to “extract a square root” roughly in the sense that the space of vectors is a “square root” of the
space of matrices. This is the idea behind Clifford algebras.
Definition 8.2. Let V be a f.d. k-vector space (k = k and char k 6= 2) with a symmetric (non-
degenerate) inner product (−, −). The Clifford algebra Cl(V ) of V is generated by V with defining
relations v 2 = 21 (v, v) for v ∈ V .
1
ab + ba = (a + b)2 − a2 − b2 = [(a + b, a + b) − (a, a) − (b, b)] = (a, b).
2
ab + ba = (a, b) · 1.
ai aj + aj ai = 0, bi bj + bj bi = 0, and ai bj + bj ai = δij .
39
This is again a deformation of V.
V
This is because the RHS’s of the defining relations for Cl(V ) all have degree strictly smaller than the
LHS’s so all vanish in the associated graded.
is an isomorphism.
In fact, PBW generalizes to “Lie superalgebras” and this theorem about Cl(V ) is a special case of this
generalization.
Proof. (Even case) Let us start with the even case. Pick a basis a1 , . . . , an , b1 , . . . , bn of V as before. Let
m = (a1 , . . . , an ). Define a representation
V
∂
ρ : Cl(V ) ! End M , ρ(ai )v = ai w and ρ(bi )w = w.
∂ai
Above,
∂ 0 if i 6= kj ∀j
(ak1 . . . akr ) =
∂ai (−1)j−1 a . . . ac . . . a
k1 kj kr if i = kj
(this is a (graded) derivation: ∂/∂ai (f · g) = (∂f /∂ai )g + (−1)deg f f (∂g/∂ai )). In addition to making
this a derivation, having the sign term above makes ρ a representation, e.g.
40
(exercise). Note that this a natural spanning set for Cl(V ): given I = (i1 < · · · < ik ) and J = (j1 <
· · · < jm ), set
cIJ := ai1 . . . aik bj1 . . . bjm ∈ Cl(V )
(the defining relations allow us to order the monomials at worse at the cost of some δij which will introduce
lower degree terms.). It is not immediately clear that these form a basis, but note that there are 22n of Like in the
them, so the theorem is equivalent to showing they are linearly independent. proof of
For this, consider ρ(cIJ ) = ai1 . . . aik ∂a∂j ... ∂
∂ajm : M ! M. PBW
1
X Y Y
αIJ cIJ · aj = αI0 ,J0 ai
j∈J0 i∈I
(the products in decreasing order of the j’s and increasing order of the i’s). This forces αI0 J0 = 0.
This completes the proof in the even case.
(Odd case) In the odd case, we also have some element z ∈ Cl(V ) with z 2 = 1. We sill have an action
Cl(V ) y M = (a1 , . . . , an ). In addition to ρ(ai )w = ai w and ρ(bi )w = ∂a
∂
w, we need to say how z
V
i
these two representations are called M+ and M− (they are not isomorphic18 ). We can consider the direct
sum
ρ = ρ+ ⊕ ρ− : Cl(V ) ! End M+ ⊕ End M− .
via
1 1
ξ(a ∧ b) = (ab − ba) = ab − (a, b).
2 2
Then, ξ is a Lie algebra homomorphism.
Proof. For skew-symmetric matrices in this form, one can work out that the commutator is
(exercise, using a ∧ b = 1
2 (a ⊗ b − b ⊗ a)). Now compute
1 1
[ρ(a ∧ b), ρ(c ∧ d)] = ab − (a, b), cd − (c, d)
2 2
18 The eigenvalue of z on the space of v ∈ M± s.t. bi v = 0 is ±1
41
= [ab, cd]
= abcd − cdab
= (b, c)ad − acbd − cdab
= (b, c)ad − (b, d)ac + acdb − cdab
= (b, c)ad − (b, d)ac + (a, c)db − cadb − cdab
= (b, c)ad − (b, d)ac + (a, c)db − (a, d)cb
= ρ ([a ∧ b, c ∧ d])
You might worry about error terms in the last equality, but they are
so we win.
We can now use this map ξ : o(V ) ! Cl(V ) to pull back the representations M (in the even case) and
M± (in the odd case) from before.
Exercise. When dim V = 2n,
ξ ∗ M = S+ ⊕ S− .
Veven Vodd
More precisely, S+ = (a1 , . . . , an ) and S− = (a1 , . . . , an ).
Exercise. If dim V = 2n + 1,
=S∼
ξ ∗ M+ ∼ = ξ ∗ M− .
For these, you’ll want to find highest weight vectors, compute their weights, and then compare di-
mensions.
So we realize the spinor representations as exterior algebras where o(V ) acts by some (0th, 1st, or
2nd order) differential operators.
42
Proposition 8.8. The highest weight of the dual representation is −µ = w0 λ, so µ = −w0 λ. Thus,
L∗λ = L−w0 λ .
• • ··· • •
and in fact, −w0 (αi ) = αn+1−i in this case. How do you see this? Consider the vector representation
V = Lω1 = Cn+1 of sln+1 . Its dual is
^n
V∗ = V = Lωn
so ωn gets exchanged with ω1 . Thus, −w0 must be the flip since it’s the only nontrivial automorphism.
Example. Let’s look at type E6 now. Again, −w0 is the flip. We won’t show this rigorously right now,
• • • • •
• •
• •
n
group is W = Sn n (Z/2Z)0 (vectors with sum 0). For n even, −1 ∈ W so −w0 = 1 (no flip). For n odd,
−1 6∈ W and w0 = (−1, −1, . . . , −1, 1) (exercise) with trivial permutation σ = id. Then,
1 1 1 1 1 1
w0 ωn = w0 , ,..., = − ,...,− , ,
2 2 2 2 2 2
43
so −w0 ωn = ωn−1 . Hence S+
∗
= S− and S−
∗
= S+ . Alternatively, you can see that the lowest weight of
S+ is −ωn−1 .
Remark 8.11. This gives some mod 4 periodic phenomena. To observe mod 8 periodicity, ask yourself,
“When do S, S+ , S− have symmetric invariant forms, and when do they have skew symmetric invariant
forms?”
Definition 8.12. A f.d. representation V of a group G or Lie algebra g is said to be of complex type
∼ V ∗ . It is real type if V ∼ ∼
if V 6= = V ∗ and there exists a symmetric isomorphism ϕ : V − ! V ∗ , i.e.
ϕ∗ = ϕ ( ⇐⇒ ϕ has a symmetric, invariant bilinear form). It is quaternionic type if V ∼ = V ∗ and ∃
∼
skew-symmetric isomorphism ϕ : V −
! V ∗ , i.e. ϕ∗ = −ϕ
∗
Theorem 8.13. For D2n , S+ = S+ . For D4n , S+ has a symmetric form (real type) while D4n+2 has a
skew-symmetric form (quaternionic type).
9 Lecture 9 (3/18)
Last time we ended while discussing reps of real, complex, and quaternionic type.
Recall 9.1. A f.d. irrep V of a group G or Lie algebra g is said to be complex type if V ∼
6 V ∗ . It is real
=
type if V ∼
= V ∗ and there exists a symmetric isom ϕ : V ! V ∗ (ϕ∗ = ϕ), and it is quaternionic type if
∼
V ∼
= V ∗ and there exists a skew-symmetric isom ϕ : V −
! V ∗ (ϕ∗ = −ϕ).
Remark 9.2. Schur says all isos V ! V ∗ are proportional to each other, to ϕ∗ = cϕ for some c ∈ C.
Taking double dual shows ϕ = c2 ϕ, so c = ±1 (i.e. real and quaternionic type are only possibilities).
There will be a similar statement for odd orthogonal groups. In order to prove these, we need to
understand when self-dual reps are real or quaternionic type.
r
X
f := (2ρ∨ , ωi )fi .
i=1
44
Then, [h, f ] = −2f and
X X X
[e, f ] = ei , (2ρ∨ , ωj )fj = (2ρ∨ , ωi )hi = h.
i j i
Then, (e, h, f ) defined as above genera an sl2 -subalgebra inside g, called the principal sl2 -subalgebra.
Example. If g = sln and V = Cn , then V |sl2 principal = Ln (the irred sl2 rep with highest weight n).
One can check that
0 1
0 n
..
. ∗ 0 n−2
0
, f = , and h =
e= .. .. ..
.
.. . . .
.
1
0 ∗ 0 −n
On the other hand, if you restrict V to a root subalgebra, then you can see that V |sl2 root = C2 ⊕(n−2)C.
Hence, the principal sl2 -subalgebra is essentially different (not conjugate) to root subalgebras (at least
for n ≥ 3).
A natural thing to look at is the restriction of the adjoint representation to the principal sl2 -subalgebra.
Write g|sl2 -principal = i LNi . What are these Ni ? Recall we can recover the decomposition of an
L
sl2 -rep if we know the dimensions of its weight spaces, so what are the eigenvalues of h = 2ρ∨ acting
adjointly on g. Write g = n− ⊕ h ⊕ n+ . Given x ∈ gα , we have
r
X r
X
(α, 2ρ∨ ) = ki (αi , 2ρ∨ ) = 2 ki .
i=1 i=1
Thus, dim g[0] = r and, for n > 0, dim g[2n] = #roots of height n.
How many roots are there of each height?
• There are exactly r roots of height 1. These are the simple roots.
• What about height 2. Picture a Dynkin diagram. We need to add two distinct roots (twice a root
45
is not a root), and they need to be connected (sum of orthogonal roots is not a root19 . Thus, the
number of height 2 roots equals the number of edges in the Dynkin diagram. Since the diagram is
a tree, this is r − 1.
Definition 9.6. An exponent of g is a positive integer m such that rm+1 < rm . The multiplicity of
m is rm = rm+1 .
Example. g = sln . The roots are αij = αi + · · · + αj where i ≤ j. This has height i + j − 1. If you look at
the Dynkin diagram, roots will be connected pieces and the height will be the number of vertices in that
piece. Hence, the # of root of height k is rk = n − k. Thus the exponents are {mi } = {1, 2, . . . , n − 1}, so
Sanity Check 9.9. Consider gln = V ⊗ V ∗ = Ln−1 ⊗ Ln−1 . Clebsch-Gordan tells us that this is
gln = Ln ⊗ Ln = L0 ⊕ L2 ⊕ L4 ⊕ · · · ⊕ L2n−2 .
What does all of this have to do with Spinor representations being real and quaternionic type?
46
Proposition 9.10. Ln is real for even n, and quaternionic for odd n.
(this is e.g. always the case if Dynkin diagram has no nontrivial automorphisms so −w0 = 1).
Restrict it to principal sl2 -subalgebra. The weights will be (µ, 2ρ∨ ) where µ is a weight of Lλ . This
will be largest when µ = λ, and this eigenvalue (λ, 2ρ∨ ) occurs just once. This is because any other
weight of Lλ is of the form λ − β with β = ki αi , ki ≥ 0 and β =
6 0. Hence,
P
X
(µ, 2ρ∨ ) = (λ, 2ρ∨ ) − 2 ki < (λ, 2ρ∨ ).
Thus,
M
Lλ |sl2 = Lm ⊕ cn Ln with m = (λ, 2ρ∨ ).
n<m
Proposition 9.12. Assume Lλ is not complex type. Then, Lλ is real type if (λ, 2ρ∨ ) is even, and is
quaternionic type if it is odd.
Application to Spinor representations Let g = o(2n). Recall the fundamental weights are ω1 =
(1, 0, . . . , 0), . . . , ωn−2 = (1, 1, . . . , 1, 0, 0), ωn−1 = 21 , . . . , 12 , 12 , ωn = 12 , . . . , 12 , − 12 . Hence,
X n(n − 1)
ρ = ρ∨ = ωi = (n − 1, n − 2, . . . , 1, 0) and (2ρ∨ , ωn ) = = (2ρ∨ , ωn−1 ).
2
n(n−1)
Fact. 2 is odd if n ≡ 2, 3 (mod 4) and is even if n ≡ 0, 1 (mod 4).
Corollary 9.13. If n = 0 (mod 4) then S± have symmetric forms. If n ≡ 2 (mod 4), then S± have
skew forms.
Proposition 9.14. The Spinor rep S is real ⇐⇒ n ≡ 0, 3 (mod 4), and is quaternionic ⇐⇒ n ≡ 1, 2
(mod 4).
Theorem 9.15 (“Bott Periodicity”). The behavior of spinor representations of o(m) depend on the
remainder r of m mod 8.
• r = 1, 7: S is of real type
• r = 3, 5: S is of quat type
47
• r = 0: S+ , S− are of real type
• r = 2, 6: S+
∨
= S− are of complex type
“Now, let’s move on. We’ve done enough representation theory, so let’s switch to another subject:
integration of Lie groups. This will help us do representation even better.” (paraphrase)
n
X ∂f
df = dxi .
i=1
∂xi
n
M
Ω• (M ) = Ωk (M )
k=0
of differential forms with multiplication given by wedge product ∧. The operation d in the previous
example extends to a degree 1 derivation of Ω∗ (M ), i.e. we have d : Ωk (M ) ! Ωk+1 (M ). It is defined by
It is a graded derivation in the sense that given homogeneous a ∈ Ωk (M ) and b ∈ Ω` (M ), one has
48
A closed form ω is one for which dω = 0. It is an exact form if ω = dη. Also, d2 = 0 (exact implies
closed), but the coverse is not always true.
Example. Consider S 1 = R/Z with coordinate x (mod 1). Then, dx ∈ Ω1 (S 1 ) is a closed form, but is
not exact: 6 ∃f ∈ C ∞ (S 1 ) s.t. dx = f (would need f = x + c but x not well-defined on circle, only up to
adding integers).
Ωkclosed (M )
Hk (M ) = .
Ωkexact (M )
f ∗ ω(v1 , . . . , vk ) = ω(f∗ v1 , . . . , f∗ vk )
where f∗ : Tp M ! Tf (p) N . Note that pullback commutes with ∧ and d. Also, (f ◦ g)∗ = g ∗ ◦ f ∗ .
Every element of Ωn (M ) looks like ω = f (x1 , . . . , xn )dx1 ∧ · · · ∧ dxn (in local coordinates). Say M = Rn
and ω has compact support (i.e. f has compact support). Then we define
Z Z
ω := f (x1 , . . . , xn )dx1 . . . dxn .
M Rn
but Z Z
∂xi
f (x1 , . . . , xn )dx1 . . . dxn = f (x1 , . . . , xn ) det dy1 . . . dyn ,
Rn Rn ∂yj
so there is a slight discrepancy (in one case have absolute values; in the other case we don’t). Hence,
integration of top forms only invariant under changes of variable that preserve orientation (det(jacobian) >
0). As a result, we will only be able to integrate differential forms on oriented manifolds.
On a general manifold, you cannot integration differential forms. You can, however, integrate densities.
These multiply by absolute value of determinant instead of by the determinant itself.
10 Lecture 10 (3/25)
Note 6. Haven’t seen last Thursday’s lecture yet...
49
10.1 Last time
Last time we talked about integration of (top degree) differential forms on manifolds. Say we have a
(real/smooth) manifold M of dimension dim M = n and we have ω ∈ Ωn (M ). To define the integral
Z
ω,
M
we need an orientation on M , i.e. a consistent way to say which bases of Tx M are ‘right-handed’ in the
tangent space. Say we have some
charts U, V of the manifold with coordinates xi , yi . Then the manifold
is orientable if always det ∂xj > 0.
∂yi
Say M is an oriented n-dim manifold, and suppose ω ∈ Ωn (M ) is a top degree form with compact
support. We won’t actually need to compact support condition, but good to know the integral will
converge. How do we define M ω?
R
Let K = supp ω. Cover K by finitely many balls Bi , and choose a Bi0 ⊂ Bi for each i, so these Bi0 ’s
already cover K. For a containment of balls B 0 ( B, we can define a hat function f ∈ C ∞ (B) satisfying
f > 0 on B 0 , f ≥ 0 on B, and f has compact support in B.
Example. In the one-dimensional case, just want some bump function. Can start with
0 if x ≤ 0
g(x) =
e−1/x if x > 0
Can then do something like multiply this by a parabola (?) to get a hat function in the 1-d case. Then
use this to get hat functions in any dimension.
fi
gi := P
j fj
which is well-defined in a neighborhood of K, and has support instead Bi . Note that gi = 1, so these
P
For independence, given two partitions of unity, consider their refines obtained by taken pairwise
products (i.e. consisting of functions gi hj ).
Remark 10.2. Integration like this also makes sense for manifolds with boundary. The only difference is
that at boundary points, the local model is Rn−1 × R≥0 instead of Rn . Integration also makes sense for
non-compactly supported forms; the integral just might diverge in these cases.
Remark 10.3. If you have an oriented manifold with boundary, then it induces a canonical orientation
on the boundary (a basis of tangent space at boundary is right-handed iff basis of whole thing obtained
50
by extending the given basis by a single vector pointing inwards is right-handed, or something like this).
We call f integrable, denoted f ∈ L1 (M, µ), if this integral is < ∞. In such cases, can define
R
M
f dµω
just as in measure theory.
Remark 10.5. Above discussion shows that there are no non-vanishing forms on non-orientable manifolds.
open
Example. ω = dx1 ∧ · · · ∧ dxn on M ⊂ Rn is nonvanishing. The corresponding measure is the usual
open
Lebesgue measure µ, so µ(U ) = vol(U ) is the usual volume of U ⊂ M .
Inspired by above example, nonvanishing forms are often called volume forms. Given a volume form
R
ω, vol(M ) = M ω ∈ R+ ∪ {∞}.
Proposition 10.6. If M is compact, then it has finite volume, and any continuous function on M belongs
to L1 (M, µ), i.e. is integrable.
Proof. Cover M = x∈M Ux with Ux a neighborhood of x so small that µ(Ux ) < ∞. Since M is compact,
S
this has a finite subcover U1 , . . . , UN . Thus, µ(M ) ≤ i µ(Ui ) < ∞, so M has finite measure. If f is
P
Notation 10.8. We let M denote M with the opposite orientation. Note that M ω = − M ω.
R R
51
Remark 10.9. When n = 1, we can consider an interval M = [a, b] with boundary consisting of two
points. Then Stoke’s theorem says
Z b
df (x) = f (b) − f (a)
a
Notation 10.13. We use µL to denote a choice of left-invariant Haar measure. We similarly define µR
as a choice of right-invariant Haar measure.
A natural question is does µL = µR , up to scaling at least? They will for abelian groups since left/right
translations are the same. What about for non-abelian groups?
Suppose V is a 1-dimensional real representation of a group G, so have ρV : G ! Aut(V ) = R× . We
can then define a rep |V | on the same underlying space with map ρ|V | = |ρV |. This is still a representation
since | · | is a character (i.e. homomorphism) on R× .
Vn ∗
Proposition 10.14. µL = µR ⇐⇒ | g | is a trivial representation of G.
Above, keep in mind that conjugation is what induces the adjoint action on g.
52
(2) If g is perfect, i.e. g = [g, g], then g is unimodular.
Corollary 10.19. A reductive Lie algebra (which is a direct sum of an abelian and a semisimple
Lie algebra) is unimodular.
(5) The Lie algebra tn of upper triangular matrices is not unimodular when n ≥ 2.
Note that if G is compact, it has finite volume G dg < ∞, so we may normalize dg so that this
R
integral is 1, i.e. require our Haar measure to be a probability measure. This gives us an actually unique
choice of measure for compact G.
Example. When G is finite, the unique Haar probabality measure is the averaging measure µ(U ) =
#U/#G.
Proof. Pick a positive Hermitian form B on V . We would like an invariant form, so consider the average
Z
Bav (v, w) = B(gv, gw)dg
G
which is invariant by construction (using right-invariant of dg) and well-defined since dg = 1 is finite!
R
G
Note that Bav (v, v) > 0 (for v 6= 0) since B(w, w) > 0 for w 6= 0. This gives a unitary structure on our
representation, completing the proof.
Corollary 10.24. Any finite dimensional representation of a compact Lie group is completely reducible.
Proof. Unitary reps are always completely irreducible. If W ⊂ V is a subrep, then so is W ⊥ ⊂ V and
W ⊕ W ⊥ = V (then induct).
53
Example. G = SU(n) is a simply-connected, compact Lie group (U (n) compact since rows are vectors
of unit length). It is simply-connected (and even doubly-connected) since SU(n)/ SU(n − 1) = S 2n−1 .
Thus, Rep SU(n) = Repsu(n) = Rep(n) (when sl(n) is the complexification of su(n)). Thus, we relearn
that f.d. reps of sl(n) are completely reducible. This proof strategy is called the Weyl unitary trick.
In fact we will show that every semisimple Lie algebra has a Lie group whose real form is a compact
Lie group.
computing the ij entry of the matrix for g in the given basis. This is a smooth function ψV,ij : G ! C
called a matrix coefficient. Note that this is independent of the normalization of the form (since scaling
√
the form by λ divides the orthonormal basis by λ), so it only depends on a choice of orthonormal basis.
Let W be another irrep of G. Say {wk } form an orthonormal basis for W .
Remark 10.26. δV W = 0 if V ∼
6 W . If δV W = 1, take V = W and require vi = wi (i.e. use same basis).
=
Note that Z Z
ρV (g) ⊗ ρW (g) = ρV ⊗W ∗ (g)dg ∈ End(V ⊗ W ) = End(V ⊗ W ∗ ).
G G
| {z }
P
11 Lecture 11 (3/30)
Last time we talked about matrix coefficients.
54
11.1 Matrix coefficients + Peter-Weyl
Recall 11.1. Let V be an irrep of a compact Lie group G. Let (−, −) be an invariant, positive Hermitian
form on V . Let v1 , . . . , vn be an orthonormal basis w.r.t this form. The matrix coefficients are the smooth
functions ψV,ij : G ! C given by
ψV,ij (g) = (ρV (g)vi , vj ).
Hence, ψV,ij (g) is the ijth coefficient of the matrix of ρV (g) written in the basis v1 , . . . , vn .
Recall 11.2. Z
δV W δik δj`
ψV,ij (g)ψW,k` (g)dg = .
G dim V
We were in the middle of proving this last time. We showed this integral is 0 when V ∼
6 W by making
=
use of the operator Z
P := ρV (g) ⊗ ρW (g)dg
G
G
on V ⊗ W = V ⊗ W ∗ . We showed that P : V ⊗ W ∗ ! (V ⊗ W ∗ ) maps into the space (V ⊗ W ∗ )G =
HomG (W, V ) which is 0 if V ∼
6 W . The integral we are interested in is simply (P (vi ⊗ wk ), vj ⊗ w` ), so
=
it must vanish when V =∼
6 W . Let’s now wrap up the rest of the proof.
The right summand takes values in U G , so must be 0. At the same time, ρC (g) = 1, so the left factor is
1. Hence, P = 1C ⊕ 0U is the projection to the trivial representation (the span of the identity operator
idV ∈ V ⊗ V ∗ ). From this we see that
Pn n
(x ⊗ y, i=1 vi ⊗ vi ) X (x, y) X
P (x ⊗ y) = Pn Pn v i ⊗ vi = vi ⊗ v i .
( i=1 vi ⊗ vi , i=1 vi ⊗ vi ) i=1 dim V
Pn
In particular, P (vi ⊗ vk ) = δik
dim V i=1 vi ⊗ vi , so
δik δj`
(P (vi ⊗ vk ), vj ⊗ v` ) = ,
dim V
Corollary 11.3. {ψV,ij : V ∈ Irrep(G) and i, j = 1, . . . , dim V } given an orthogonal set in L2 (G).
Theorem 11.4 (Peter-Weyl). This system is complete, i.e. the ψV,ij ’s form an orthogonal basis of
L2 (G).
55
Notation 11.5. Let L2alg (G) := usual linear span of the ψV,ij ’s.
Example. Say G = S 1 = R/2πZ is the circle. Then the irreps of G are the usual characters ψn (θ) = einθ
for n ∈ Z. PW says that these give an orthonormal basis for L2 (S 1 ) with inner product (f1 , f2 ) =
R 2π
2π 0 f1 (θ)f2 (θ)dθ. This recover the main theorem of Fourier analysis.
1
Corollary 11.7 (of Theorem 10.25, character orthogonality). For the characters
X
χV (g) = Tr ρV (g) = ψV,ii (g),
i
one has Z
χV (g)χW (g)dg = δV W .
G
Proof. Z X X δV W δik δik δV W X 2
χV (g)χW (g)dg = = δ = δV W .
G i
dim V dim V i ii
k
Corollary 11.8 (of Peter-Weyl). The characters χV (g) (V ∈ Irrep(G)) give an orthonormal basis of
L2 (G)G , the conjugation-invariant L2 -functions.
Proof of Corollary 11.8. Note that ξG : V ∈Irrep(G) (V ⊗ V ∗ )G ! L2 (G)G satisfies (and is determined
L
by) i vi ⊗ vi 7! χV (g). Thus, its image is the linear span of the χV (g)’s. Hence, it suffices to show that
P
L2alg (G)G is dense in L2 (G)G . For this, take some ψ ∈ Lg (G)G , so there’s a sequence ψn ∈ L2alg (G) with
ψn ! ψ as n ! ∞ (by Peter-Weyl). Let ψn0 := g ψn (gxg −1 )dg ∈ L2alg (G)G . Furthermore,
R
Z Z Z Z
n!∞
kψn0 −ψk = (gψn − ψ)dg = g(ψn − ψ)dg ≤ kg(ψn − ψ)k dg = kψn − ψk dg = kψn − ψk −−−−! 0,
G G G G
56
11.2 Proving Peter-Weyl
11.2.1 Analytic Background
Before we can prove PW, we need some more background in analysis. In particular, we need the know
about compact operators on Hilbert spaces.
Definition 11.10. Let H be a Hilbert space. A bounded operator A : H ! H is a linear map s.t.
there exists some C ≥ 0 s.t. for all v ∈ H, kAvk ≤ Ckvk. The set of such C is closed, so the minimal
such C is called the norm kAk of A. The space of bounded operators is denoted B(H) and is a Banach
space (Banach algebra even) with this norm.
Remark 11.11. kA + Bk ≤ kAk + kBk and kABk ≤ kAkkBk.
Definition 11.12. A bounded operator A on a Hilbert space H is called self-adjoint if (Av, w) = (v, Aw)
for all v, w ∈ H. We say A is compact if it is the limit of a sequence of finite rank operators (i.e.
n!∞
dim im(An ) < ∞) An : H ! H, i.e. kAn − Ak −−−−! 0. We let K(H) denote the space of compact
operators, the closure of the space Kf (H) of finite rank operators.
Remark 11.13. Kf (H) ⊂ B(H) is a 2-sided ideal, so K(H) is also a 2-sided ideal in B(H).
Lemma 11.14. If A is compact, then it maps bounded sets to pre-compact sets, i.e. sets with compact
closure.
Remark 11.15. A bounded operator will map bounded sets to bounded sets. A compact operator will
map bounded sets into compact sets.
If {vn } is a bounded sequence in H and A is a compact operator, then Avn will have a convergence
subsequence (with limit possible outside im A).
Not every bounded set in a Hilbert space has a convergent subsequence.
Example. Let e1 , e2 , . . . be orthonormal vectors in H. Then this is a bounded sequence with no con-
√
vergent subsequence (distance between any two vectors is 2).
As a consequence, we see that id : H ! H is compact ⇐⇒ dim H < ∞. Let’s prove the lemma now.
Proof of Lemma 11.14. Let vn ∈ H with kvn k ≤ 1, and say A : H ! H is compact. Choose An of finite
rank with kAn − Ak ≤ 1/n for all n. We do a usual diagonal trick. Note that, since An has finite rank,
{An vk }k≥1 lies in a compact set (a ball in a finite dim space).
Let vn1 be a subsequence of vn s.t. A1 vn1 converges. Let vn2 be a subseq of vn1 s.t. A2 vn2 converges, and
so on and so forth. Define wn := v n which (away from the first k elements) is a subseq of vnk . Note that
57
Proposition 11.16. Let K be a continuous function on [0, 1]2n . Define the operator BK on L2 ([0, 1]n )
by Z
(BK ψ)(y) := K(x, y)ψ(x)dx.
[0,1]n
Corollary 11.17. If M is a compact manifold with positive smooth measure dx, then for any continuous
K on M × M , the operator Z
(BK ψ) (y) := K(x, y)ψ(x)dx
M
is compact.
Proof. If f1 , . . . , fm is a partition of unity on M , then K(x, y) = i,j fi (x)fj (y)K(x, y). Defining
P
Kij (x, y) := fi (x)fj (y)K(x, y), we have BK = i,j BKij so it suffices to show BKij is compact, but
P
We won’t actually need this fact (the direction we haven’t proved). On the other hand, we will need
to below fact.
• A|Hλ = λ · Id
(Generalizes uses spectral theorem for Hermitian operators in f.dim linear algebra).
Example. When A is finite rank, this is just the spectral theorem for Hermitian operators in a f.d.
space. It says there exists an orthonormal basis in which A is diagonal with real eigenvalues.
Remark 11.19. Bounded operators in a Hilbert space do not have to have eigenvalues at all. For example,
consider multiplication by x on L2 ([0, 1]) (recall objects here are functions up to equality away from null
sets).
58
Proof of Hilbert-Schmidt. We first prove the theorem for the positive operator A2 . The idea is to find
the largest eigenvalue, take its orthocomplement, and then keep going...
Let β = kAk2 = supkvk=1 (A2 v, v) = supkvk=1 (Av, Av). WLOG we may assume β 6= 0 (otherwise
A = 0). Fix a sequence An of self-adjoint finite rank operators converging to A.20 Let βn = kAn k2 which
is in fact the maximal eigenvalue of A2n .21 Choose vn s.t. A2n vn = βn vn and kvn k = 1. Note that A2 vn
has a convergent subsequence, so we may assume wlog A2 vn ! w ∈ H. At the same time, A2n vn ! w
since kA2 vn − A2n vn k ≤ kA2 − A2n k ! 0 as n ! ∞. Since A2n vn = βn vn and βn ! β, we conclude that
⊥
vn ! β −1 w so A2 w = βw. Also, we know kwk = 1. Now replace H with hwi and continue.
In this way, we get a sequence of numbers β1 ≥ β2 ≥ β3 ≥ · · · ≥ 0 which either terminates (βn = 0
for n 0) or it’s infinite but tends to 0 (using compactness of A2 ). We have eigenvectors
q wj of norm 1
so that A wj = βj wj . This has a convergent subseq so βj ! 0 as kβj wj − βk wk k = βj2 + βk2 . Take a
2
vector v orthogonal to all wk . Then kAvk ≤ βk kvk, so kAvk = 0 =⇒ v ∈ ker A. This implies
M
H= Cwk ⊕ ker A2 .
d
k≥1
12 Lecture 12 (4/1)
12.1 Peter-Weyl, Proved
Let G be a compact Lie group. Recall we want to show that
Proof. We want to make use of the Hilbert-Schmidt theorem from last time. We start by constructing
a ‘δ-like sequence’ of continuous function hN (x) on G, supported on small neighborhoods of 1 which
shrink to 1 as N ! ∞. We require hN ≥ 0, hN is conjugation invariant, and G hN (x)dx = 1. Note that,
R
Note this is invariant under conjugation since it only depends on |log g|, so we have our sequence.
20 Replace with 12 (An + Atn )
21 This is a statement about matrices. Diagonalize to see this
59
Next, we define the convolution operator
Z Z
(BN ψ)(g) = hN (x)ψ(x−1 g)dx = hN (gy −1 )ψ(y)dy.
G G
Note that this is compact by Corollary 11.17 (applied to K(g, h) = hN (gy −1 )). Furthermore, BN is
self-adjoint since K(g, y) = K(y, g) (since hN invariant under inversion). Further, it commutes with both
left and right multiplication by G, so
M
L2 (G) = ker BN ⊕ Hλ
d
λ6=0
with Hλ the (f.dim) λ-eigenspace of BN . Each Hλ is G-invariant (say under left action) so Hλ ⊂ Question:
L2alg (G) = V V ⊗ V ∗ . Hence, for all N and any b ∈ Im BN and ε > 0, there exists f ∈ L2alg (G) s.t. Why is H in
L
kb − f k < ε. Note that for ϕ ∈ C(G) continuous, BN ϕ ! ϕ as N ! ∞ (k(BN ϕ − ϕ)k ! 0). We can this space?
pick fN ∈ L2alg (G) so that kBN ϕ − fN k < 1
and so see that kBN ϕ − fN k ! 0 as N ! ∞. Hence, L2alg
N Answer:
is dense in L2 , so we win.
Ever ele-
Lemma 12.2. Let G be a compact Lie group, and let G = G0 ⊃ G1 ⊃ G2 ⊃ . . . be a descending sequence ment of Hλ
of closed subgroups. Then it must stabilize, i.e. Gn = Gn+1 for n 0. generates a
f.dim repre-
Proof. We may assume the sequence has no repetitions, and then show it is finite. Assume not. The
sentation
dimensions have to stabilize, so we may assume dim Gi is the same for all i. Then, K = G0n is the same for
all n (since Lie algebras must be the same), and is normal in each of them. Then, G1 /K ⊃ G2 /K ⊃ . . .
is a sequence of finite groups, so it must stabilize.
Non-example. Z ⊃ 2Z ⊃ 4Z ⊃ 8Z ⊃ 16Z ⊃ . . .
Corollary 12.3. Any compact Lie group has a faithful, finite dimensional representation.
Proof. Pick a f.d. rep V1 of G, and let G1 = ker ρV1 . Then pick a rep V2 of G s.t. V2 |G1 is nontrivial,
and take G2 = ker (ρV1 ⊕ ρV2 ) = ker ρV2 |G2 . Continue in this way... By the lemma, this process can only
produce a finite sequence of non-isomorphic groups, so there’s a k s.t. every f.dim rep of G is trivial on
Gk . By Peter-Weyl, Gk acts trivially on L2 (G) which forces Gk = 1. Hence, V1 ⊕ · · · ⊕ Vk is a faithful
(unitary) representation of G, so G ,! U (V1 ⊕ · · · ⊕ Vk ).
Conversely, if a compact topological group has a faithful f.dim rep, then it’s a closed subgroup of U (n)
which implies that it is itself a (compact) Lie group.
Notation 12.4. We let C(G, C) denote the Banach space of continuous C-valued functions on G. This
is complete w.r.t the norm kf k = max |f |.
Theorem 12.5 (Stone-Weierstrass Theorem). Let X be a compact metric space. Let A ⊂ C(X, C)
be a unital subalgebra s.t.
(1) A is closed.
60
then A = C(X, C).
Theorem 12.6. L2alg (G) is dense in C(G, C) with this norm, so every continuous function on G can be
uniformly approximated by matrix coefficients of f.dim reps.
Proof. Let A = L2alg (X). It is obviously unital, closed, and closed under complex conjugation. Hence, it
suffices to check that it separates points. Fix any x, y ∈ G s.t. f (x) = f (y) for all f ∈ L2alg (G). Then,
for any f ∈ L2alg (G), one has f (1) = f (x−1 y), so g := x−1 y acts trivially on L2 (G) which forces g = 1,
i.e. x = y.
Lots of what we said for Lie groups didn’t really need the smooth structure; it mainly just needed
integration. So we’ll make sense of integration on compact, 2nd countable topolgoical groups, and then
reprove things in this more general setting. Implicitly,
we’re assum-
Example. Let
ϕ2 ϕ1 ing all our
· · · G3 G2 G1
spaces are
be a chain of surjective homomorphisms of finite groups. Then, the inverse limit Hausdorff
Y
G := lim Gn = (gi )i≥1 ∈ Gi : ϕi (gi+1 ) = gi for all i
n!∞
−
i≥1
is a profinite group. It is visibly an abstract group. To topologize it, we give it the weakest topology
in which all the projections pn : G ! Gn are continuous (with Gn discrete). Hence, a base of nbhds of 1
is given by ker pn .
Here, a sequence ~an = (an1 , an2 , . . . ) converges to ~a ⇐⇒ ∀k : ank eventually stabilizes to ak . Further,
this topology is metrizable with metric
for some fixed 0 < C < 1. Note that the natural map G ,! Gk is a closed embedding (using the
Q
k∈Z+
product topology on the target), so we see that G is compact.
61
12.3 Integration theory on compact top. groups
Let C(X, R) denote the space of R-valued continuous functions on X (X some compact 2nd countable
topological group). Note that this is a Banach space, complete w.r.t kf k = max |f |.
Fact (Riesz representation theorem). Finite volume Borel measures on X are the same thing as non-
negative22 , continuous linear functionals C(X, R) ! R. Given a measure µ, the corresponding functional
is I(f ) = Iµ (f ) := X f dµ.
R
(A measure is just a thing that let’s you integrate functions). In the above correspondence, µ is a
probability measure iff I(1) = 1. Any nonzero µ has positive, finite value and can be normalized to be a
probability measure.
Theorem 12.8 (Haar, von Neumann). Let G be a second countable compact group. Then G admits a
unique left-invariant probability measure which is also right-invariant.
Don’t need to second countable assumption above. In fact, for any locally compact topological group,
there’s some Haar measure (unique up to scaling) which is left-invariant or right-invariant, but usually
not both. We won’t prove that, but will prove the weaker version stated above.
Proof. Let {gi }i≥1 ⊂ G be a dense sequence in G (exists since G 2nd countable). Fix ci > 0 s.t.
P∞
i=1 ci = 1 (e.g. ci = 2 ). We use these to build an averaging operator
−i
A : C(G, R) −! C(G, R)
∞
" #
X
f 7−! x 7! ci f (xgi )
i=1
(absolutely convergent since f bounded on compact G). Note that kAk = 1 and that A is left-invariant.
∼ R ⊂ C(G, R) be the constant functions, so A|L = IdL . The distance from f ∈ C(G, R) to L (the
Let L =
“spread of f ”) is ν(f ) = 1
2 (max f − min f ).
We claim that ν(Af ) ≤ ν(f ) with equality iff f ∈ L. Indeed, choose some f 6∈ L. For any x ∈ G, we
can pick j s.t. f (xgj ) < max f . Then, (Af )(x) = ci f (xgi ) ≤ (1 − cj ) max f + cj (f xgj ) < max f . Thus,
P
max(Af ) < max f (since G compact). One similarly checks that min f < min(Af ), so ν(Af ) < ν(f ).
We now iterate. For f ∈ C(G, R), let fn = An f . This sequence is uniformly bounded by max |f | and
is equicontinuous, i.e. for all ε > 0 there is a neighborhood 1 3 U = Uε ⊂ s.t. for all x ∈ G and u ∈ U ,
To show this, it suffices to show that f is uniformly continuous, i.e. to find U s.t. for all x ∈ G and
u ∈ U , |f (x) − f (ux)| < ε. This would then imply
X X X
ci f (xgi ) − ci f (uxgi ) ≤ ci |f (xgi ) − f (uxgi )| < ε.
Hence, assume to the contrary that ∃ui ! 1 and xi ∈ G s.t. |f (xi ) − f (ui xi )| ≥ ε. Since G is compact,
the xi have a convergent subsequence, so we may assume xi ! x. Taking limits then shows that
0 = |f (x) − f (1 · x)| ≥ ε, a contradiction.
22 i.e. I(f ) ≥ 0 ⇐= f ≥ 0
62
Now we appeal to Ascoli-Arzela: A sequence fn in C(X) (X compact) which is uniformly bounded
and equicontinuous has a convergent subsequence.23
Hence we get fn(m) = An(m) f converging to some h ∈ C(G, R). Consider the spread
Taking the limit as m ! ∞, we have ν(h) ≥ ν(Ah) ≥ ν(h), so ν(Ah) = ν(h). Hence h is a constant, so
the assingment f 7! h ∈ L ∼
= R is a continuous linear function. It is clearly left-invariant, nonnegative,
and satisfies 1 7! 1. Thus, it gives our desired Haar probability measure/integral I : C(G, R) ! R.
This just leaves uniqueness. We can similarly construct a right invariant integral I∗ : C(G, R) ! R.
For any left-invariant integral J, we have J(f ) = J(I∗ (f )). If J(1) = 1, then this says J(f ) = I∗ (f ), so Question:
we get uniqueness. We also see that I(f ) = I∗ (f ), so I if bi-invariant. Why?
Next time we’ll generalize facts about compact Lie groups to these more general compact 2nd countable
groups, and then we’ll talk about hydrogen atoms I guess. Tuesday lecture at MIT.
13 Lecture 13 (4/6)
Today we learn some physics.
Notation 13.1. The configuration space is R3 (with sun at the origin) and let’s call the coordinates
x, y, z ∈ R. We put these together to form ~r = (x, y, z) whose length is r = |~r| = x2 + y 2 + z 2 . There’s
p
also momentum p~ = (px , py , pz ) and kinetic energy 12 p~2 as well as potential energy U (r) = − 1r . The
total energy is given by the Hamiltonian
1 1
H= p~ − .
2 r
the diagonal.
63
13.1.1 Quantum version
In quantum theory, classical observables become operators on some Hilbert space. In the present case,
this space is L2 (R3 ). We view x, y, z as operators given by multiplication by x, y, z.
Warning 13.3. These aren’t literally operators on L2 (R3 ), e.g. multiplication by x can move a function
outside L2 . In reality, these are only operators on some dense subspace of L2 (R3 ). We won’t worry about
this too much.
What about momentum? px −i∂x . The minus is a convention, but the i is important; smth
smth real classical observables should give rise to self-adjoint operators (i.e. Af · g = f · Ag which
R R
we sometimes write by saying A† = A). Also, the classical Hamiltonian gets replaced by the quantum
Hamiltonian
1 2 1 1 1
H +− ∂ + ∂y2 + ∂z2 − = − ∆ − .
2 x r 2 r
Hamilton’s equation now becomes f˙ = [H, f ] (usual commutator) and called Schrödinger’s equation.
Classical states were pairs (~r, p~) (6 coordinates), but quantum states are elements of a Hilbert space
ψ ∈ L2 (R3 ) (∞ coordinates) normalized so kψk = 1. We consider this ψ modulo ‘phase factors’24 (so
we’re looking at lines in L2 (R3 )). Classical states transform non-linearly, but these quantum states will
translate linearly. Then we have Schrödinger’s equation (for states)
i∂t ψ = Hψ.
More explicitly,
1 2 1
∂ + ∂y2 + ∂z2 ψ − ψ.
i∂t ψ = −
2 x r
How do you solve this? If H was just a matrix, the solution would be ψ(t) = eitH ψ(0) with exponential
given by the usual power series. If H is some infinite-dimensional operator, we can still take inspiration
from this. If we have an eigenvalue Hψ(0) = Λψ(0), then ψ(t) = e−itΛ ψ(0) is a solution; more generally,
we can take superpositions of these. Hence, we’d like an eigenbasis for H (note H is symmetric and even
self-adjoint25 ).
We want an orthonormal basis ψN of L2 (R3 ) so that HψN = EN ψN . We call ψN the state of
energy EN (note EN ∈ R since ψ self-adjoint). Consider ψ(x, y, z, 0) = cN ψN (x, y, z). Here one has
P
Thus, we only need to fine the eigenvectors ψN satisfying the stationary Schrödinger equation
HψN = EN ψN .
This is similar to the story of compact operators, but more complicated. H is not compact, and also
not bounded. It’s spectrum won’t be discrete. It’ll have a discrete part (called ‘bound states’ if I heard
correctly) as well as a continuous part (giving integrals instead of sums). At least, we can try to find the
discrete spectrum.
24 vectors of norm 1
25 pavel is distinguishing these two and seemingly claiming self-adjoint is something complicated
64
Goal. Solve this equation (Hψ = Eψ where E an eigenvalue), and figure out why we’re talking about
this in a Lie groups class.
Note everything is rotationally invariant, so we should utilize this symmetry. This amounts to passing
to spherical coordinates. 1/r is already in spherical coordinates. The Laplacian splits into two pieces,
r 2 ∆sph , the radial part and the spherical part. These are
1
∆ = ∆r +
2 1 1
∆r = ∂r2 + ∂r and ∆sph = ∂θ2 + ∂ϕ · sin ϕ∂ϕ .
r sin2 ϕ sin ϕ
Above, our spherical coordinates are (r, θ, ψ) where r the radius, θ the angle in the horizontal plane,
and ψ the angle in the vertical plane. Write ~r = r~u where |~u| = 1. We look for solutions of the form
ψ(r, ~u) = f (r)ξ(~u) (‘separation of variables’26 ).
First note that if ∆sph ξ + λξ = 0, then f satisfies an ODE depending on λ. Second, we claim that λ
will be positive. This is because
Z Z
∆sph ξ · ξ = − (∇ξ)2 ≤ 0 =⇒ λ ≥ 0.
What will be the equation for f ? It’s a “calculus exercise” to compute that f satisfies the ODE
2 2 λ
00
f + f0 + − 2 + 2E f = 0.
r r r
Here is where Lie groups start to come in. ∆sph acts on L2 (S 2 ) (really on some dense subspace) and
is rotationally invariant (since ∆, ∆r , and 1/r2 are; this is not obvious from its formula). Now, as
SO(3)-reps, we have
L2 (S 2 ) = L0 ⊕ L2 ⊕ L4 ⊕ . . . with dim Lk = k + 1
(apparently this was on some homework). Now, ∆sph preserves each L2` and acts on it by a scalar. Once
we compute these scalars, we’ll know all the eigenvectors and eigenvalues on this operator. What are
these scalars? There are a few ways to compute them. Here’s one...
Let w` be the 0-weight vector in L2` (recall it has weights 2`, (2` − 2), . . . , 0, . . . , (2 − 2`), −2`). It
turns out that h ∈ sl2 acts by −2i∂θ . Since w` is weight 0, ∂θ w` = 0, so it depends only on ϕ. In fact, it
is a degree ` polynomial in cos ϕ, so write w` = P` (cos ϕ). Recall that the Jacobian in passing between
sphereical and Euclidean coordinates is J = r2 sin ϕ. Hence (matrix coefficients?),
Z 1 Z π
Pm (z)Pn (z)dz = sin ϕ · Pm (cos ϕ) · Pn (cos ϕ)dϕ = 0 if m 6= n.
−1 0
So Pn is a degree n polynomial and they are orthogonal under uniform measure; this makes them Leg-
endre polynomials.
We can also calulate the action of ∆sph on P` . Recall that ∆sph = 1
∂2
sin2 ϕ θ
+ 1
sin ϕ ∂ϕ · sin ϕ∂ϕ and
note that (sin ϕ) −1
∂ϕ = ∂z . Using this (and independence from θ), one can show that
65
We want to compute λ. Write P` = Cz ` + . . . . We compute the leading term of the LHS:
Proposition 13.4. ∆sph acts on each L2` by the scalar −`(` + 1).
Notation 13.5. Let y`m denote the vector in L2` of weight 2m, e.g. y`0 = w` . This will be of the form
These functions are called spherical harmonics. These were known by quantum mechanics, but Laplace
studied the Laplace operator on the sphere.
Note these spherical harmonics actually have some dependency on θ now. We ignored that (sin ϕ)−2 ∂θ2
2
before, but now this will acts on y`m and generate a − 1−z
m
2 . We get
m2
∂z (1 − z 2 )∂z P`m (z) − P m (z) + `(` + 1)P`m (z) = 0.
1 − z2 `
This is called the Legendre differential equation. Note that −` ≤ m ≤ ` (in fact, it turns out these
are the only values of the parameters for which this equation has a solution which is smooth near x = 0).
This solution will be (almost?) a polynomial, unique up to scaling. One ends up with
m2
P`m = 1 − z 2 ∂z`+m (1 − z 2 )`
which is a polynomial when m is even. These are called associated Legendre polynomials.
Remark 13.6. This P`m is a matrix coefficient, so it’s a trigonometric polynomial. You can write this as a
polynomial of cos with some sin factor when the degree is odd (or something? I didn’t quite catch what
he was saying).
Let’s go back to the radial equation. Recall it is
2 2 `(` + 1)
f + f0 +
00
− + 2E f = 0.
r r r2
r
How do we deal with this? We start with the magic change of variables: write f (r) = r` e− n h 2r
.
n
Letting ρ = 2r/n, h must satisfy
1
ρh00 + (2` + 2 − ρ) h0 + n − ` − 1 + (1 + 2En2 )ρ h = 0.
4
We should choose n so that the last term goes away, i.e. we take n = √ 1
−2E
, i.e. E = − 2n1 2 .27 Thus, we
have
ρh00 + (2` + 2 − ρ)h0 + (n − ` − 1)h = 0, (13.1)
27 Since our potential is negative, one can show that E < 0 if you want a solution lying in L2
66
called the Laguerre equation. Look at solutions near ρ = 0. They will have the form h = ρs (1 + o(1))
(for two values of s). The characteristic equation for s is (only first two terms relevant for this)
with two solutions s = 0 and s = −2` − 1. We have a basis of two solutions, the first smooth and the
second having a singularity. We claim the solution corresponding to s = −2` − 1 is not possible. Observe Question:
Z Z Z Z Why?
2 2 2 2 2
|ψ| dxdydz = |f | |ξ| r2 sin ϕdrdθdϕ = |f | r2 dr · |ξ|
S2
| {z }
<∞
2
so our f should have the property that |f | r2 dr < ∞ (since we want a solution in L2 ). This is the case
R
iff Z
2
ρ2`+2 e−ρ |h(ρ)| dρ < ∞.
2
If h ∼ ρ−2`−1 as ρ ! 0, then ρ2`+2 |h(ρ)| ∼ ρ−2` as ρ ! 0, so if ` > 0, this is not integrable. Thus,
s = −2` − 1 not possible when ` > 0. Even when ` = 0, this is not possible: ψ(x, y, z) ∼ ψ ∼ r−1
near r = 0 and h ∼ ρ−1 =⇒ f ∼ r−1 near r = 0. Then, Hψ = − 12 ∆ψ − 1r ψ = Eψ + δ since
∆(1/r) ∼ δ0 (x, y, z). Thus, ψ won’t satisfy Schrodinger at the origin (as a distribution), so s = −2` − 1
is impossible even when ` = 0. Now allowing this behavior singles out a one dimensional span, the span
of the solution corresponding to s = 0.
We see that h must be regular at ρ = 0. Use power series method: h = an ρn . We then must
P
n≥0
have
X
k(k − 1)ak ρk−1 + (2` + 2 − ρ)kak ρk−1 + (n − ` − 1)ak ρk
We can shift
X
(k + 1)kak+1 ρk + (2` + 2)(k + 1)ak+1 ρk − kak ρk + (n − ` − 1)ak ρk
to get a recursion
(k + 1)(k + 2` + 2)ak+1 + (n − ` − 1 − k)ak = 0.
(1 + ` − n) . . . (k + ` − n)
ak = .
(2` + 2) . . . (2` + 1 + k) · k!
Thus,
X (1 + ` − n) . . . (k + ` − n) ρk
h(ρ) = .
(2` + 2) . . . (2` + 1 + k) k!
k≥0
We see that this converges for all ρ (ratio behaves like a power of k and then it’s divided by a factorial).
log h(ρ)
Exercise. ρ ! 1 when ρ ! +∞ except when the series terminates.
When does this series terminate? Well, when one of the factors in the numerator becomes 0, i.e. if
n − ` − 1 ∈ Z≥0 . In which case you get a polynomial of this degree n − ` − 1; it is denoted L2`+1
n−`−1 (ρ)
67
and called the generalized Laguerre polynomial. Recall, we need
Z
2
ρ2`+2 e−ρ |h(ρ)| dρ < ∞
We looked at convergence near 0 before, but there’s also convergence near ∞. This will fail unless h(ρ)
behaves like a polynomial (the alternative is it looks like eρ at infinity, so get something like e−ρ e2ρ
above).
Recall 13.7. The states with E < 0 are called bound states.
Theorem 13.8. The bound states of the hydrogen atom are, up to normalization,
r 2r
ψn,`,m (r, ϕ, θ) = r−` e− n L2`+1
n−`−1 y`m (ϕ, θ)
n
where n = 1, 2, 3, . . . , ` = 0, 1, . . . , n − 1, and −` ≤ m ≤ `.
Definition 13.9. We call n above the principal quantum number, ` the azimuthal quantum
number, and m the magnetic quantum number.
Remark 13.10. The energy can only take values E = − 2n1 2 . When an electron jumps between energy
levels, it emits a photon with energy/wavelength proportional to the difference 1
2n2 − 2n02 .
1
We still have not achieved what we wanted yet. These eigenfunctions do not form a base in the Hilbert
space. This functions ψn,`,m span a space L20 (R3 ) ( L2 (R3 ). For example, note that (Hψ, ψ) ≤ 0 for
ψ ∈ L20 (R3 ). This is not the case for all ψ ∈ L2 (R3 ). Recall H = − 21 ∆ − 1r , so in general
Z Z
1 2 1 2
(Hψ, ψ) = k∇ψk − |ψ|
2 r
Can cook up a ψ so this is positive. In addition to the discrete spectrum/bound states we found, there’s
also a continuous spectrum consisting of the whole positive real linear {r ≥ 0}, but we will not discuss
this. Pavel said more about this, but I didn’t follow.
Remark 13.11. For each n, there are n choices of ` values, and each (n, `) has 2` choices of m values.
Hence dim Wn = n2 is the dimension of the space of energy levels of n. In chemistry though, one observes
a 2n2 , so we’re missing something. That something is spin. The real Hilbert space is L2 (R3 ) × C2 .
There’s more to the story that we will talk about next time.
Recall 14.1. ` above is the azimuthal quantum number and m is the magnetic quantum number.
68
There’s a geometric SO(3)-symmetry so so(3) = Lie SO(3) acts by vector fields Lx , Ly , Lz . Set L ~ =
(Lx , Ly , Lz ) = ~r × p~ where ~r = (x, y, z) and p~ = (px , py , pz ) with px = −i∂x , etc. This L
~ = ~r × p~ is called
the angular momentum operator. Note that
Lx = −i (y∂z − z∂y ) .
These act on H-eigenspaces. Let Wn = ψ : Hψ = − 2n1 2 ψ = span {ψn,`,m : any `, m}. From our earlier
We know
Wn = L0 ⊕ L2 ⊕ · · · ⊕ L2n−2
as so(3)-reps.
Apparently, we studied the case where there’s one electron ‘orbiting’ a nucleus of charge +1, but this
also applies when there’s a larger nucleus. If the nucleus is to big, things aren’t too precise since there are
many electrons interacting with each (and that’s not taken into account here), but early in the periodic
table this is good enough. Isn’t this
Note 7. I’m finding it pretty hard to pay attention. supposed to
be a math
Because of chemistry stuff, our n2 seems like it should really be a 2n2 . We lost a factor of 2 in the
class?
physics. There’s a thing called spin (‘internal angular momentum’) that we did not take into account in
our model. This spin can be ± 21 .
On the side of mathematics, this means that the Hilbert space for the theorem should not be L2 (R3 ),
but should be H = L2 (R3 ) ⊗ C2 where this C2 is the 2-dimensional rep of so(3). On C2 , we have the
operator !
1
1 2 0
S= h=
2 0 − 21
whose eigenvalues are ± 12 . The total spin is m + s ∈ {m + 1/2, m − 1/2}. So the action of so(3) is
diagonal; the eigenvalues of h are 2m + 1 (or 2m − 1), odd numbers (‘odd highest weight’ or ‘half-integer
spin’); get a direct sum of representations L2k+1 . But Hamiltonian is the same, so instead of ψn,`,m , we
have
ψn,`,m,+ = ψn,`,m ⊗ v+ and ψn,`,m,− = ψn,`,m ⊗ v−
where ! !
1 0
v+ = and v− = .
0 1
C2
Vn = (L0 ⊕ · · · ⊕ L2n−2 )⊗|{z} = L1 +L1 +L3 +· · ·+L2n−3 +L2n−1 = 2L1 ⊕2L3 ⊕· · ·⊕2L2n−3 ⊕L2n−1
Clebsch-Gordan
L1
69
!
−1 0
This does not lift to a representation of SO(3), only to one of SU(2). The matrix ∈ SU(2)
0 −1
acts by −1. This is called an ‘anomaly’. The point is that quantum states are elements up to phase
factors, and this −1 is a phase factor, so we do have an SO(3)-action on the states; we just don’t have
one on vectors.
Say we have k electrons of the same energy E = − 2n1 2 . In quantum mechanics, if you have a particle
with state space V and another one with state space W , then the two together have state space V ⊗ W . If
particulars are indistinguishable from each, then you should mod out by permutation action. If elections
where labelled, we’d have state space Vn⊗k . This they are in fact indistinguishable, we need to mod out
by permutations. Hence, we would expect the state space to be V = S k Vn ; however, this is wrong.
Vk
The correct answer is V = V (k) = Vn since electrons are ferminons, not bosons (for bosons, do get
symmetric power).28
Remark 14.2 (Pauli exclusion principle). When k > 2n2 , we see V (k) = 0.
Remark 14.3. A generic operator will have eigenspaces of dimension ≤ 1, but here we have large di-
mensions dim Vn = 2n2 . This comes from symmetries grouping these eigenvalues into representations
(apparently, we’ve seen two so(3)-symmetries and there’s a third hidden one we’ll see now).
Remark 14.4. so(3) ⊕ so(3) ⊕ so(3) = so(4) ⊕ so(3) = so(4) ⊕ su(2). Answer: It’s
an external
There’s apparently also another symmetric which doesn’t commute with Hamiltonian, but which is
tensor prod-
sometimes useful to consider.
uct, not an
internal one
14.2 Back to math: automorphisms of semisimple Lie algebras
14.2.1 Summary of last semester
Let g be a semisimple complex Lie algebra. We saw that Aut(g) is a complex Lie group with Lie Aut(g) = g
(I think in general Lie Aut(g) = Der(g)). In particular, this means there is a connected Lie group Aut(g)0
with Lie algebra g. Furthermore, we showed last semester that Aut0 (g) acts transitively on the Cartan
subalgebras of g.
70
14.2.2 Maximal Tori
Let h ⊂ g be a Cartan subalgebra. Let H ⊂ Gad be the corresponding connected Lie subgroup. Elements
h ∈ H act on g = h ⊕ α∈R gα as follows: h|h = 1 and h|gαj = λj · id = ebj · id. Note that h|g−αj =
L
λ−1 −bj
. Furthermore, if α = mi αi , then (by compatibility with conjugation)
P
j =e
Y
h|gα = λm
i .
i
So if x ∈ h s.t. αi (x) = bi (so λj = eαj (x) ), then h|gα = eα(x) so we see we have
h
H∼
= ,
2πiP ∨
∼
i.e. x 7! e2πix defines an isomorphism h/P ∨ − ! H (recall: P ∨ is the coweight lattice). Note that
H∼ = (C× )rank(g) is a complex torus; we call it the maximal torus corresponding to h ⊂ g.
We want to study its normalizer
Proposition 14.6. N (H) = stabilizer of h ⊂ g, and contains H as a normal subgroup with quotient
N (H)/H ∼
= W isomorphic to the Weyl group.
Proof. Recall (sl2 )i ⊂ g attached to simple roots. These give maps ηi : SL2 (C) ! adG by fundamental
theorems of Lie theory. Set !!
0 1
Si = ηi = ηi (e − f ) ∈ Gad .
−1 0
This has the property that Ad(Si )|h = si (with si the simple reflection). Note that Si2 = ηi (−1) 6= 1 in This was a
general, so we do not have a homomorphism W ! adG, just some set-theoretic lift of W . For w ∈ W , homework
write w = si1 . . . sim and define w
e = Si1 . . . Sim ∈ Gad , so w
e acts on h by w. Furthermore, if w = w1 w2 , problem
then w
e=w e2 h for some h ∈ H s.t. h acts trivially on h. This implies that hH, w
e1 w e : w ∈ W i generates a once upon
subgroup N of Gad such that N ⊃ H (with H normal) and N/H = W . a time
By definition, N ⊂ N (H), so we only need to show equality. Consider some x ∈ N (H). Write
x(αi ) = αi0 . Note that these αi0 ’s give another system of simple roots. Since the Weyl group acts
transitively on systems of simple roots, there must be some w ∈ W such that w(αi0 ) = αp(i) where p is some
permutation of simple roots. Now consider w
e−1 x ∈ Gad . By construction, we have w
e−1 x(αi ) = αp(i) .
Note that Gad preserves all irreducible representations g (since it acts by inner automorphisms), so p = id.
Hence, w
e−1 x|h = 1, so w
e−1 x ∈ H, so x ∈ wH
e ⊂ N , and we win.
0 −! H −! N −! W −! 0
We’ve seen Aut(g) ⊃ Gad . Another obvious subgroup is Aut(D) ⊂ Aut(g) where D is the Dynkin
71
diagram. Moreover, Aut(D) y Gad , so we get a homomorphism
This is in fact injective; ξ|Gad = id and a nontrivial Dynkin diagram automorphism can’t act trivially on
g (something like this).
Proof. We need to show that ξ is surjective. Fix some a ∈ Aut(g). There exists a g ∈ Gad such that
ga(h) = h. We may replace a by ga, so assume WLOG that a(h) = h. By modifying a by an element of
N (H) · Aut(D),29 can assume a = 1 (acts trivially on h and each gαi ), so we win.
We have classified semisimple Lie algebras over C. What about their classification over other fields, in
particular over R?
Recall 14.9. A presentation of g by generators and relations ei , fi , hi contains only integers, so makes
sense over any ring.
For any field K (say, char K = 0), we have a Lie algebra gK defined by the same generators and
relations; we call this split semisimple Lie algebra. Over an algebraically closed field, every semisimple
Lie algebra is split, but this is not the case in general.
Let g be a s.s. LA over K which splits over some finite Galois extension L/K (e.g. K = R and
L = C), i.e. g ⊗K L = gL is a split s.s. Lie alg. Can we classify such g? Let Γ = Gal(L/K), so g = gΓL .
Therefore, g is determined by the action of Γ on gL . This action is twisted-linear:
Example. ρ0 (g ∈ Γ): preserves all generators and acts as Γ on scalars. This action gives rise to the split
s.s. Lie algebra over K, gΓL = gK .
Other actions will be of the form ρ(g) = η(g)ρ0 (g) with η : Γ ! Aut(gL ) not a homomorphism.
Instead,
η(gh)ρ0 (g)ρ0 (h) = ρ(gh) = ρ(g)ρ(h) = η(g)ρ0 (g)η(h)ρ0 (h),
so it’s almost a homomorphism but twisted by the Γ-action on Aut(gL ). This is what’s called a 1-cocycle
(or twisted homomorphism). Thus, any form of gK split over L is given by a 1-cocycle η; we call the
corresponding form gη .
it preserve the system of simple roots. Then use an automorphism of D to ensure a(αi ) = αi . Then, a|gαi acts by some
scalar. Can use an element of H to make all these scalars 1.
72
Need some a ∈ Aut(gL ) such that ρ1 (g)a = aρ2 (g) which translates to
(twisted conjugation).
Definition 14.11. Equivalence classes of 1-cocycles, up to twisted conjugation, form the (pointed) set
H1 (Γ, Aut(gL )) called the 1st Galois cohomology.
Proposition 14.12. Forms of gL over K are labelled by elements of H1 (Γ, Aut(gL )).
15 Lecture 15 (4/13)
15.1 Forms of a semisimple Lie algebra, continued
Let g be a s.s. Lie algebra over a field K of characteristic 0. Say there is a finite Galois extension
L ⊃ K such that g ⊗K L splits, i.e. is isomorphic to the standard semisimple Lie algebra gL given by the
Serre relations. We showed last time that such forms of gL over K are classified by the cohomology set
H1 (Γ, Aut(gL )).
Today we specialize to the case of main interest to us, i.e. K = R and L − C. That is, we wish to
classify real forms of complex semisimple Lie algebras.
Remark 15.1. There’s a parallel theory of forms for reductive Lie algebras.
In this case Γ = Gal(C/R) = Z/2Z. We computed last time that
Above, · denotes complex conjugation, g is the complexification of its split real form. s defined earlier
is well-defined up to twisted conjugation: s 7! asa−1 (for a ∈ Aut(gL )). Putting this all together, we
have...
Theorem 15.2. Real semisimple Lie algebras with complexification isomorphic to g (i.e. real forms
of g) are classified by s ∈ Aut(D) n Gad s.t. ss = 1 modulo the equivalence relations s ∼ asa−1 (for
a ∈ Aut(D) n Gad ; note · acts trivially on Aut(D)).
73
The bijection in the theorem is given by
s 7! gs := {x ∈ g : x = s(x)} .
Example. g1 = gR = {x ∈ g : x = x}.
Note that we can compute s and · to get the antilinear involution σs (x) = s(x) (note σs2 (x) = s(s(x)) =
s(s(x)) = s(x) = x). Hence, we can encode the real form gs using σs instead of s. In particular, note
that s gives rise to an element s0 ∈ Aut(D) = Out(g) = Aut(g)/ Inn(g) (Inn(g) = Gad ). Note that this
satisfies s20 ∈ 1, and that its conjugacy class is invariant under equivalences. This s0 permutes connected
components of the Dynkin diagram D (preserves some and matches others in pairs30 ). Hence, it’s enough
to consider to kinds of pictures.
∼
(1) D connected with s0 : D −
! D.
Proposition 15.3. If gR is semisimple, then it is a direct sum of simple Lie algebras, and the simple
Lie algebras are classified by such pictures.
(2) In the second, gR is simple, but g = gR ⊗R C has two summands (so is only semisimple).
Say g = a ⊕ a with a a simple complex Lie algebra. Write s = (g, h)s0 with g, h ∈ Aut(a). We know
s switches the summands and that ss = 1. This gives
so h = g −1 . Thus,
−1
s = (g, g −1 )s0 = (g, 1)s0 (g, 1) ∼ s0 ,
This just leaves the case when D is connected. We start with some new definitions.
Definition 15.4. We say gs is inner to gs0 if s0 = g ◦ s for some g ∈ Gad (i.e. for some inner
automorphism) ⇐⇒ s00 ∼ s0 . The inner class of s is the set of s0 which are inner to s. An inner real
form is a member of the inner class of the split form (i.e. s ∈ Gad ).
74
Note that any form is inner to a unique quasi-split form. The only quasi-split inner form is the split
form.
There is one (non-split) distinguished form.
Definition 15.6. The compact real form is the one corresponding to the automorphism determined
by
s(hi ) = −hi , s(ei ) = −fi , and s(fi ) = −ei .
Proof. (g = sl2 ) In this case, we have s(h) = −h, s(e) = −f , and s(f ) = −e. We have
Remark 15.8. Above we used that the compact real form restricted to any simple root (sl2 )i is the
corresponding compact real form.
75
Consider Aut(gc ) Killing is negative definite, so Aut(gc ) ⊂ O(gc ) is a closed subgroup in an orthogonal
group, and hence compact.31 Furthermore, Lie Aut(gc ) = gc (not hard to show). Thus,
Corollary 15.9. Let Gcad = Aut(gc )0 . Then, Gcad is a connected, compact Lie group with Lie algebra gc .
Remark 15.10. This gives a new proof that reps of complex semisimple Lie algebras are completely
reducible.
Exercise (Homework). For g = sln , show Gcad = PSU(n) = SU(n)/µn (where µn the nth roots of unity).
For g = son , show
SO(n) if n odd
Gcad =
SO(n)/{±1} if n even.
Exercise. s0 for the compact form is the involution corresponding to −w0 (dualiziation of representations).
Exercise. The compact form is never quasisplit.
t T
Thus, gs is the Lie algebra of traceless matrices A s.t. A = −JA J −1 (i.e. AJ + JA = 0).
Thus, A preserves the (skew)hermitian form defined by J.32 What is the signature of J? For
even n, we have
X
J =± (zi z n+1−i ± zn+1−i z i ) ,
closed
31 Why is O(n) compact? At A = 1 means j a2ij = 1 so O(n) ⊂ (S n )n
P
32 Ifyou take a Hermitian form and multiply by i, you get a skew-hermitian form (and vice versa), so the two types are
not so different
76
When n = 2p, J has signature (p, p). When n = 2p + 1, it has signature (p, p + 1) or (p + 1, p).
The upshot of all of this is that the quasi-split form is su(p, p) if n = 2p and su(p + 1, p) if
n = 2p + 1.
• There are other forms: su(p, n − p). These are neither compact nor quasi-split.
• split: so(n + 1, n)
• compact: so(2n + 1)
• no quasi-split forms since the corresponding Dynkin diagram has no symmetrices
• split: so(n, n)
• compact: so(2n)
• When n > 4, Aut(Dn ) = Z/2Z. When n = 4, Aut(D4 ) = S3 (claw graph). However, we only
care about conjugacy classes of involutions, and in either case, there’s only one nontrivial such
class: the one exchanging αn = en−1 + en and αn−1 = en−1 − en .
Note that the split form consists of matrices A satisfying A = −JAt J −1 where
1
1
J = .
..
1
To get the quasi-split form, we should use a matrix of the same structure, except it’s the
2 × 2 identity I2 in the center block (diagonal instead of antidiagonal at that point). Call this TODO: Add
matrix J.e Then the quasi-split forms consists of matrices satisfying A = −JA e t Je−1 , i.e. its rendition of
the Lie algebra of skew-symmetric matrices under J.
e This has signature (n + 1, n − 1), so the matrix
quasi-split form is so(n + 1, n − 1). (n ≥ 2)
(5) G2 has a split form Gs2 and compact form Gc2 . No Dynkin diagram automorphisms.
(6) F4 has a split form F4s and compact form F4c . No Dynkin diagram automorphisms.
(7) E6 has a split form E6s and compact form E6c . There is a Dynkin diagram automorphisms, so also
a quasi-split E6qs
(8) E7 has a split form E7s and compact form E7c . No Dynkin diagram automorphisms.
(9) E8 has a split form E8s and compact form E8c . No Dynkin diagram automorphisms.
77
Remark 15.12. There are some coincidences. For example,
su(1, 1) × su(1, 1) = sl2 (R) × sl2 (R) = so(2, 2) and su(2) × su(2) = so(4) and sl2 (C)R = so(3, 1).
• D3 = A3 gives sl4 (R) = so(3, 3), su4 = so(6), and su(3, 2) = so(4, 2).
Note we still have not classifies all real forms. We’ve just looked at the compact, and quasi-split forms.
There are still more.
16 Lecture 16 (4/15)
Last time we considered real forms of semisimple Lie algebras, and singled out a few particular forms of
note.
In particular, we defined the compact form of a semisimple Lie algebra. This had corresponding
involution ω : g ! g determined by ω(hi ) = −hi , ω(ei ) = −fi , and ω(fi ) = −ei . The corresponding
(real) Lie algebra was gc = {x ∈ g : ω(x) = x}.
i.e. ω(g)g = 1 where ω(g) := ωgω −1 . This is our old friend the cocycle condition. What’s different? gc has
a negative definite Killing form, so g = gc ⊗R C naturally has a positive Hermitian form (complexification
of −Killing).33 Fix some x ∈ g. Then,
adω(x) = −(adx)†
is the Hermitian adjoint (negated). Hence, gc acts by skew-Hermitian operators, i.e. if x ∈ gc , then
adx = −(adx)† . Therefore, when acting on group elements, we sill have ω(g) = (g † )−1 .
Now we see that the cocycle condition ω(g)g = 1 is equivalent to saying that g † = g, so the condition
on g is that it is a Hermitian operator on g.
33 Any orthonormal basis for gc is also an orthogonal basis for g
78
Fact. Any Hermitian operator on a space with positive Hermitian form is diagonalizable with real
eigenvalues.
For g = g † , we can write g = g(γ) as a sum of eigenspaces; moreover, this is a grading, i.e.
L
γ∈R
[g(γ), g(β)] ⊂ g(βγ). Since g is Hermitian, we can take its absolute value |g| : g ! g. This acts on g(γ)
−1
by |γ|. Define θ := f |g| : g ! g so θ|g(γ) = sign(γ). This is an automorphism satisfying
θ2 = 1 and θω = ωθ
−1
(second one since ω(θ) = θ† = θ).
Proof. Note that θ, g define the same real structure ⇐⇒ θ = aga−1 = agω(a)−1 for some a ∈ Gad . We
−1/2 −1/2
have take a = |g| (acts by |γ| of gγ ). Then,
−1/2 1/2 −1/2 −1/2 −1
|g| gω |g| = |g| g |g| = g |g| = θ,
so we win.
This replaces the mysterious equation ω(g)g = 1 with the simpler equation θ2 = 1.
Any real form is determined by a conjugacy class (conjugate by gc ) of such θ. Conversely, if two such
θ’s define the same real structure, then they will be conjugate under gc .
Claim 16.3. θ, ξ as above define the same real form ⇐⇒ they are conjugate by Aut(gc ).
−1 −1
Proof. (!) We have ξ = xθω (x) for some x ∈ Aut(g). Since ω(ξ) = ξ, we see that xθω (x) =
−1
ω(x)θx −1
. Set z := ω (x) x, so we have θz = z −1
θ and ω(z) = z −1
. Note that z = x x is a positive
†
so we win.
Theorem 16.4. Real forms of g are in bijection with conjugacy classes of involutions θ ∈ Aut(gc ) (a
compact Lie group), via θ 7! σθ : ω ◦ θ.
Corollary 16.5. Have a canonical (up to Aut of gc ) decomposition g = k ⊕ p where k is the 1-eigenspace
of θ, and p is the −1-eigenspace. In particular, k is a Lie subalgebra, and [k, p] ⊂ p (so p is a k-module).
Furthermore, [p, p] ⊂ k and gc = kc ⊕ pc (kc = k ∩ gc and pc = gc ∩ p). Finally,
gσ = kc + ipc .
Example. Say g = sl2 (C), and let gσ = sl2 (R) be the split form. In this case, k = C(e − f ). Compute p
as an exercise. Then, gc = kc ⊕ pc and gσ = kc + ipc .
79
Figure 9: An example Vogan diagram. White vertices have sign + and black vertices have sign −.
Proof. Consider a generic x ∈ kc . Note that all elements of gc are semisimple (act as skew-Hermitian
operators so are diagonalizable). Hence, x is regular semisimple.34 Let hC
+ ⊂ k be the centralizer. Its
c
complexification h+ = hc+ ⊗R C ⊂ k is a Cartan subalgebra of k (and still the centralizer of x). Let hc− ⊂ pc
be the maximal subspace s.t. hc = hc+ ⊕ hc− is a commutative subalgebra of gc .
We claim that h := hc ⊗R C ⊂ g is a Cartan subalgebra. It consists of semisimple elements by
construction (acts by normal operators on g). Suppose z ∈ g, [z, h] = 0. Write z = z+ + z− where z+ ∈ k
and z− ∈ p. Note that
Proof. Suppose otherwise, so α∨ ∈ h− . Then, θ(α∨ ) = −α∨ , so θ(gα ) = g−α . Then, σ(gα ) = ω ◦ θ(gα )...
(do this next time)
Corollary 16.8. For generic t ∈ g+ (regular semisimple) s.t. Re(t, α∨ ) 6= 0 for any coroot α∨ (possible
since no coroots in h− ), consider the polarization
R+ = {α ∈ R : Re(t, α∨ ) > 0} .
With a polarization as above, the simple roots get permuted, so θ(αi ) = αθ(i) where θ(i) gives the
action of θ on the Dynkin diagram of g. If θ(i) = i, then θ(ei ) = ±ei , θ(hi ) = hi , and θ(fi ) = ±fi . If
θ(i) 6= i, we can normalize the generators so that θ(ei ) = eθ(i) , θ(fi ) = fθ(i) , and θ(hi ) = hθ(i) .
We can encode this info in the Dynkin diagram, to produce a Vogan diagram. Any Vogan diagram TODO: Add
gives rise to a real form, and any real form comes from some Vogan diagram. However, different diagrams picture
can give rise to the same form (diagram depends on the choice of R+ with θ(R+ ) = R+ ).
Exercise (Homework). Compute the signature of the Killing form for gσ . It should be (dim p, dim k).
Deduce that for split form, dim k = |R+ |.
If gσ is in compact inner class, then rank(k) = rank(g), so they will share a Cartan subalgebra.
34 its centralizer is a Cartan algebra
80
16.2 Real forms of classical groups
16.2.1 Type An−1
The Dynkin diagram has two automorphisms (identity and flip), so there are two inner classes.
• We start with the compact inner class (i.e. θ an inner automorphism, conjugation by some element
of order ≤ 2 in PSU(n)).
Such an element can be lifted to g ∈ U (n) s.t. g 2 = 1. Then, θ(x) = gxg −1 . We know what g can
look like:
and we may assume p ≥ q. The corresponding real form will be su(p, q).
t
The compact form was su(n) : A = −A (and Tr A = 0). For the form attached to g, we need
t
A = −gA g −1 (and Tr A = 0). This is just the requirement that A be skew-Hermitian for the form
defined by g.
When n = 2, A1 has no automorphisms, so all forms are inner to the compact form. In this case,
there are only two forms: su(2) and su(1, 1) = sl(2, R).
Tr : gl(k, H) −! R
X .
A 7−! Re aii
16.3 Type B
There are no Dynkin diagram automorphisms, so all forms are inner. Furthermore SO(2n + 1) has trivial
center, so θ ∈ SO(2n + 1) of order 2. We know what all these elements look like (up to conjugation); we’ll
have θ = (− Id)2p ⊕ Id2q+1 with p + q = n. The corresponding real form is so(2p, 2q + 1), p = 0, . . . , n
(all distinct).
Holiday on Tuesday. Lecture on Thursday at MIT.
81
17 Lecture 17 (4/22)
Today we finish the classification of real forms.
Characterizing σ in terms of how much it differs from the compact form led us to characterizing real
forms in terms of an involution θ : g ! g. Given, θ, we write g = k ⊕ p with k the (+1)-eigenspace and p
the (−1)-eigenspace of θ. We intersect with gc to write gc = kc ⊕ pc , and then gσ = kc + ipc . Elements
are k are skew-Hermitian so expoentiate to unitary operators so we call kc the compact directions,
while ipc has hermitian elements expoentiating to hermitian operators so we call these the noncompact
directions (maybe typos in this sentence).
We also found a θ-stable Cartan subalgebra. While doing this, we had a lemma which we did not
prove.
Recall 17.1. We chose hc+ ⊂ kc and hc− ⊂ pc . Then formed hc = hc+ ⊕ hc− and extended C-linearly to get
h = h+ ⊕ h− .
Proof. If α∨ is a coroot in h− , then θ(α∨ ) = −α∨ , so θ(eα ) = e−α and vice versa.35 Therefore, eα +e−α ∈
k. Furthermore, [h+ , eα + e−α ] = 0 since α|h+ = 0 (since α ∈ h∗− ). We also know (eα , h+ ) = 0 = (e−α , h+ ) Question:
so eα + e−α ∈ h+ , a contradiction (since h+ maximal commutative subalgebra of k). Why?
• Bn
– Compact inner class (only one since Dynkin diagram has no nontrivial auto)
so(2p + 1, 2q) where p + q = n.
82
• Cn
Dynkin diagram has no nontrivial auto, so only one inner class. θ will be inner, so θ ∈ Sp2n (C)/ ± 1
and θ2 = 1. Thus, θ(x) = gxg −1 where g ∈ Sp2n and g 2 = ±1.
– g2 = 1
We may write V = C2n = V (1) ⊕ V (−1). These eigenspaces each carry a symplectic form, so
they are even dimensional. Hence dim V (1) = 2p and dim V (−1) = 2q with p + q = n. May
assume p ≥ q (change g −g). In this case, one finds
g σ = u(p, q, H),
the quaternionic unitary Lie algebra, the Lie algebra of symmetries of a quaternionic
Hermitian form of signature (p, q). Can calculate that in this case, k = sp2p ⊕ sp2q .
– g 2 = −1
In this case, we write V = C2n = V (i) ⊕ V (−i) with each eigenspace isotropic. This forces
V (±i) to be Lagrangian, both of dimension n. In this case, k = gln , and in obtains the split
form gσ = sp2n (R).
• Dn Split form is
so(n, n) so
– Compact inner class
could be in
We have θ ∈ Gad = SO(2n)/ ± 1 with θ2 = 1. Thus, θ(x) = gxg −1 where g ∈ SO(2n) and
either class
g 2 = ±1.
depending
∗ g =1
2
on parity of
Again write V = C2n = V (1) ⊕ V (−1). We need det g = 1, so dim V (−1) = even. Hence, n
dim V (1) = 2p and dim V (−1) = 2q with p + q = n. Again, may assume p ≥ q. In this
case, k = so2p ⊕ so2q and gσ = so(2p, 2q).
∗ g 2 = −1
Again C2n = V (i)⊕V (−i). These are Lagrangian as before, so dim V (i) = n = dim V (−i).
Thus, in this case k = gln and one gets gσ = so∗ (2n), the Lie algebra of symmetries of a
skew-Hermitian quaternionic form
– Other inner class
Same story except θ ∈ O(n)/ ± 1 so θ(x) = gxg −1 with det(g) = −1. Note that we cannot
have g 2 = −1 since that would imply V = V (i) ⊕ V (−i) both Lagrangian so det g = 1. Thus,
we have g 2 = 1 so V = V (1) ⊕ V (−1) with dim V (1) = 2p + 1 and dim V (−1) = 2q − 1 (and
q ≤ p + 1). One gets k = so(2p + 1) ⊕ so(2q − 1) and gσ = so(2p + 1, 2q − 1).
Remark 17.3. For real numbers, have symmetric and skew-symmetric forms. Symmetric have signature,
but all skew-symmetric are the same.
For complex numbers, have Hermitian and skew-Hermitian. These are the same (multiply by i), and
they have signature.
For quaternions, Hermitian and skew-Hermitian are again different.
83
Class Real forms
An−1 compact inner class su(p, q) with p ≥ q and p + q = n
An−1 split inner class sln (R), sl(n/2, H) if n even
Bn so(2p + 1, 2q) with p + q = n
Cn u(p, q, H) with p + q = n and p ≥ q, sp2n (R)
Dn compact inner class so(2p, 2q) with p + q = n and p ≥ q, so∗ (2n)
Dn other inner class so(2p + 1, 2q − 1) with p + q = n and q ≤ p + 1
G2 compact and split
F4 compact, split (k = sp(6) ⊕ sl(2)), and the other (k = so(9))
E6 split inner class split (k = sp(8)) and other (k = F4 )
so(6) = su(4)
so(4, 2) = su(2, 2) quasi-split
so∗ (6) = sl(2, H)
so(3, 3) = sl4 (R) split
so(5, 1) = su(3, 1)
We should also talk about exceptional Lie algebras. When dealing with these, one should consider
Vogan diagrams. Recall that these are formed by paring up vertices transposed by the involution, and
coloring the fixed vertices black or white.
Every real form gives rise to such a diagram, but there are some redundancies/equivalence relations.
Note that, of the exceptional diagrams, only E6 has automorphisms, so for the rest of them, we are just
coloring each vertex black or white.
Let’s consider the case when the automorphism of the Dynkin diagram is trivial (i.e. the compact
inner class), so the Vogan diagram is simply the Dynkin diagram + coloring.
What are the equivalences? First note that the compact form = all white vertices (θ = id) and so no
other diagram gives the compact form. Hence we consider only diagrams have ≥ 1 black vertex.
Say we have θ ∈ Gad giving our real form. This fixes all gα ’s (trivial aut on Dynkin), so θ ∈ H :=
exp(h). We color vertex i white if αi (θ) = 1 and black if αi (θ) = −1. Recall the Weyl group sits in an
exact sequence
1 −! H −! N (H) −! W −! 1.
Hence, we may modify θ by action of W . What do simple reflections do? Note that36 (since αk (θ) = ±1)
αj (θ) if aij even (±2)
αj (si (θ)) = si (αj )(θ) = (αj − aij αi )(θ) = αj (θ) · αi (θ)−aij =
α (θ)α (θ) if a odd.
j i ij
36 s (θ)
i s−1
:= sei θei for some sei ∈ N (H) lifting si ∈ W
84
Thus we get the follow equivalence relation: if we have a black vertex, then we can change the signs of
all its neighbors except • ⇔ ◦ or • ⇔ • (and the color of the vertex itself doesn’t change).37
The last one is the compact form. The other three are all equivalent so must correspond to the split
form. One can show that k is the span of the long roots and then that k = sl2 ⊕ sl2 (for the split form).
It must have rank 2 (same as G2 ) and dimension 6 = |R+ |.
• •
Example (F4 ). Now let’s consider the F4 case. Here we have the configurations (up to equivalence)
In fact, even two of these are equivalent. The last two are the same (since the right half can’t be affect
by the left half?). The first one is the compact form. What do the other two look like?
The roots of F4 are (±1, 0, 0, 0) and all its permutations, (±1, ±1, 0, 0) and all its permutations, and
(for a total of 8+24+16 = 48 roots). The (±1, 0, 0, 0) and (±1, ±1, 0, 0) roots generate
± 21 , ± 21 , ± 12 , ± 12
an so9 while the (±1/2, . . . , ±1/2) roots give the spinor representation S. Thus, F4 = so9 ⊕ S.
In the second case (•, ◦, ◦, ◦), one can check that θ acts by 1 on so(9) and by −1 on S, so k = so(9).
Hence, this will not give the split form sin |R+ | = 24 6= 32 = dim so(9).
Thus, (◦, ◦, ◦, •) gives the split form. Note that here you can observe an sp(6) as the subdiagram
using vertices 1, 2, 3 (this is a copy of C3 ).
17.2.1 Type E
We start with the E6 split inner class. This corresponds to the nontrivial automorphism so only two TODO: Add
37 So change the colors of all neighbors of the black vertex except the neighbors with a double arrow coming into the black picture
one.
85
vertices get colored. They can be colored
The last three colorings are equivalent, so there are only 2 real forms in this class (neither compact). In
the first case (◦, ◦), you can check that k gives a copy of F4 . This is not the split form (we call it E61
instead) since dim F4 = 52 6= 36 = #R+ . Simple root generators of k are e1 + e5 , e2 + e4 , e3 , e6 (these all
obviously satisfy θ(e) = e and one can check that they in fact generate k as a Lie algebra); the Cartan
algebra will be spanned by h1 + h5 , h2 + h4 , h3 , h6 .
18 Lecture 18 (4/27)
We were working on classifying real forms of Lie algebras last time. We filled out much of Table 1, but we
still need to handle the exception Lie algebras of E type. We will complete the table this time, forming
Table 2.
18.1 E type
We looked at the split inner class of E6 last time. This is corresponding to the nontrivial automorphism,
so there are only two colored vertices. In fact, there are only two equiv classes of colorings: (+, +)
and {(+, −), (−, +), (−, −)} (+ is colored white). These correspond to E61 with k = F4 and E6spl with
k = sp(8) = C4 .
This brings us to the compact inner class. Hence, the Vogan diagram is the Dynkin diagram with
white and black vertices. We will be able to treat E6 , E7 , E8 more-or-less simultaneously. If all vertices
are white, we get the compact forms E6c , E7c , E8c . Hence, we may restrict ourselves to the case when we
have at least 1 black vertex.
By applying equivalence transformations (i.e. change colors of neighbors of a black/- vertex), we can
86
achieve
(?) (?) (−) (?) ··· (?)
(?)
i.e. force the vertex above the branch to be a minus. Similarly, we can then achieve
(−)
• For E6 , this is (?) − (?) so has colorings ++ or {+−, −+, −−}. This means we get 2 classes on the
right. Now, the nodal transformation actually turns ++ into −+, so the two classes on the right The ‘node’ is
are one in the same (?). the valence 3
Sounds like one ends up with the real forms su(3) and su(2, 1) for this right leg. vertex
• For E7 , the right leg is (?) − (?) − (?). You end up with the forms su(4), su(3, 1), and su(2, 2). To
get su(4) you use + + +. For su(3, 1), you use {− + +, − − +, + − −, + + −}, and for su(2, 2) you
can use {− + −, − − −, + − +}.
Example. Consider + + −. These are the values of an involution on simple roots. Something liek
these signs correspond to ratios of adjacent values so + + − gives
1
1
1
−1
• For E8 , end up with su(5), su(4, 1), su(3, 2). One has
(su(5)) + + ++
(su(4, 1)) − + ++, − − ++, etc.
(su(3, 2)) + − ++, etc.
87
Note that the nodal transformation take us between these classes, so they are again all equivalent
(?)
(?)
i.e. the right leg (+ the node) are all −’s (though the bottom vertex may become a + when applying the
nodal transformation).
If we really want the bottom vertex to be minus, we may arrange
(−)
or
(?) (?) (−) (+) (−) ··· (−)
(−)
(−)
so the only two classes are ++ on the left leg and +− on the left leg. Hence, there are at most 2
non-compact real forms in the compact inner class. To finish the classification, we just produce 2
these two forms.
– Could consider
(+) (+) (+) (+) (−)
(+)
Looking at all the +’s, we see that so(10) ⊂ k. Note that the root α1 (the sole −) is miniscule.
Hence, any positive root either does not contain α1 or contains it with coefficient 1. It it does
not, we are in the D5 (the so(10)). We see that k = so(10) × gl(1) = so(10) × so(2). We will Question:
call this form E62 . Why?
88
– Another possibility is
(−)
We see that k contains A5 , i.e. k ⊃ sl(6). The remaining weight is not miniscule, so there will
be other toos in the Lie algebra. One can show that k = sl(6) × sl(2) (homework). We will
call this form E63 .
(−)
so only two classes for left leg, ++ and +−. Hence, again at most 2 remaining real forms, so enough
to product them.
– Consider
(+)
We have E7 ⊂ k, so this is not the split form (dim E7 = 133, dim E8 = 248, and dim kspl = 120).
Note that E8 has no miniscule weights. One can show (homework) that in this case k = E7 ×sl2 .
We call this form E81 .
– Second option is
(+)
We see D7 ⊂ k. Can show k = D8 = so(16), and that this is the split form. Sanity check:
dim k = 16
2 = 120.
Remark 18.1. E8 = so(16) ⊕ R where R = p is a 128-dimensional representation of k, the
spinor representation S+ (or S− )
89
This just leaves E7 . There are a priori 4 variants, but two of them will be equivalent. Specifically,
(−)
is equivalent to (apply transformation to first − from the left and then to leftmost vertex)
(−)
(−)
(+)
(+)
(+)
(−)
90
which is equiv to (rightmost vertex)
(−)
The upshot is that when we have a + on the right leg, all configurations of the left leg are equivalent.
Thus, there are only ≤ 3 possible non-compact real forms of E7 . These will all be different:
• First consider
(+) (+) (+) (+) (+) (−)
(+)
We have E6 ⊂ k. This is not split since dim E6 = 78 but dim kspl = #R+ = 12 (dim E7 − 7) = 63.
The − root above is miniscule. One gets that k = E6 ⊕ so(2). We call this E71 . It is the “most
compact” of the non-compact real forms (dim k maximal).
• Now consider
(−) (+) (+) (+) (+) (+)
(+)
(−)
In this case, we have sl(7) ⊂ k, but in fact k = sl(8) of dimension 82 − 1 = 63 as it should be.
91
Proof idea. Cover X by (finitely many) small balls. Connect the centers of all these balls (use straight
lines in local coordinates or choose a Riemannian metric and then use geodesics); this gives a finite graph
Γ. Then, π1 (Γ) is finitely generated (with # generates at most number of loops in Γ), and π1 (Γ) π1 (M ).
This is because any closed path from a vertex x0 to itself can be deformed to a graph walk.
Theorem 18.5. Let g be a semisimple complex Lie algebra, and let Gcad be the compact adjoint group.
Then, π1 (Gc ) = P ∨ /Q∨ is a finite group of order det(Cartan). In particular, the universal cover G
ad
gc is
ad
also a compact Lie group.
Proof. Let K Gcad be a finite cover, so K a compact connected Lie group. Let Z = ker(K ! Gcad ).
K is compact, and its f.d. irreps are a subset of the f.d. irreps of Lie KC = g (since K connected by
fundamental theorems), i.e. they are Lλ for λ ∈ S where P+ ∩Q ⊂ S ⊂ P+ . Note that the representations
of Gcad are in bijection with P+ ∩ Q. Question:
Z acts by scalar χλ on each Lλ . Since Lλ+µ ⊂ Lλ ⊗ Lµ , we see that χλ+µ = χλ χµ . Also χλ = 1 Why?
for λ ∈ Q (reps of Gcad ). This implies that χλ depends only on λ mod Q, so get χ : P/Q ! Z ∨
Answer: See
(with Z ∨ the character group). Now, Peter-Weyl says χ is surjective (all characters of Z must occur in
∨
beginning of
L2 (K) = λ∈S Lλ ⊗ L∗λ ). The dual map gives an embedding Z ,! (P/Q) = P ∨ /Q∨ . Thus, you cannot
L
tomorrow’s
have covers of big degree.
lecture
We next show π1 (adGc ) is finite. We know it is finitely generated and abelian, so it is of the form
Zr ⊕ F for F some finite group. Let Γ ≤ π1 (Gcad ) be a subgroup of index N . This gives an N -sheeted
covering K Gcad with kernel Z = π1 /Γ, so |Z| = N and Z ,! P ∨ /Q∨ , so N ≤ |P ∨ /Q∨ |. Thus, we
must have r = 0, so π1 (adGc ) = F .
Hence, K = G g c is compact and RepG
ad
gc = Repg = hL : λ ∈ P i, so we must have Z = P ∨ /Q∨ =
ad λ
∨
(P/Q) .
Corollary 18.6.
(1) If g is a simple complex Lie algebra, then the simply connected group Gc with Lie Gc = gc is compact
with center P ∨ /Q∨ .
Ln
(2) Let g = i=1 gi be a semisimple complex Lie algebra. Let Gci be the corresponding simply connected
compact Lie groups, and let Zi = Pi∨ /Q∨ c
i . Then, any connected Lie group with Lie algebra g is
compact, and of the form Qn
i=1 Gci
with Z ⊂ Z1 × . . . × Zn .
Z
Hence, any semisimple connected compact Lie group is of this form.
Example. SU(2) is a simple Lie group even though it has the nontrivial normal subgroup Z/2Z.
Remark 18.8. Abelian connected compact Lie groups are simply tori (S 1 )n . Their universal cover is Rn
so G = Rn /L with L discrete, so G = (S 1 )m × Rn−m and compactness forces n = m.
Corollary 18.9. Any connected compact Lie group is the quotient of T × K by a finite central subgroup,
where T is a torus and K is semisimple and simply connected.
92
Proof. Let L be a connected, compact Lie group, and set l = Lie L. We write l = t ⊕ k with t abelian
and k semisimple. Let T = exp(t) ⊂ L, a Lie subgroup. Note that Lie T ⊂ z(l) = t so T = T is closed, so
compact, so a torus. Similarly define K = exp(k) which is also closed (K compact by previous theorem).
The natural map T × K ! L is a surjective submersion, so a finite covering, so Z = ker(T × K ! L) is
finite central and L = (T × K)/Z.
19 Lecture 19 (4/29)
19.1 Filling in a gap
We start by filling in a gap in the proof at the end of last time. We need to explain why representations
of Gcad are related to dominant weights in the root lattice.
Let g be a semisimple complex Lie algebra, and let G be a connected, simply connected Lie group
with Lie algebra g. Let π : G ! Gad be the natural covering map, and let Z = ker π. Hence, Z = Z(G) ∼
=
π1 (Gad ) is the center of G (and fundamental group of Gad ).
Recall 19.1. f.dim reps of G are in bijection with f.d. representations of g (since G simply connected).
In particular, irreducible ones are the Lλ with λ ∈ P+ .
The center Z will act on Lλ by scalars, i.e. via a character χλ : Z ! C× . Since Lλ+µ ⊂ Lλ ⊗ Lµ , we
see that χλ+µ = χλ χµ . Thus, more generally,
Y
χP ki ωi = χkωii .
i
Recall 19.2 (Exercise 31.10 in the notes). If λ(hi ) are large enough, then for all roots α ∈ R, Lλ+α ⊂
Lλ ⊗ g. n o
λ(h )+1
(More specifically, this follows from Hom(Lµ , Lλ ⊗ V ) = v ∈ V [µ − λ] : ei i v = 0 )
χ : P/Q −! Hom(Z, C× ),
i.e. it gives a pairing χ : P/Q × Z ! C× . This is what we used in the proof (gives Z ! Hom(P/Q, C× ) =
P ∨ /Q∨ ).
Remark 19.3. In particular, χ|Q = 1 tells us that Lλ lifts to a rep of Gad when λ ∈ Q.
93
i.e. R positive Hermitian.
Remark 19.5. Also, every matrix is sum of a Hermitian matrix with a skew-Hermitian one. Take real
part + i(imaginary part). Real part Hermitian and i(imaginary part) skew-Hermitian.
Proof of Recall. Take R = (A† A)1/2 (note A† A positive Hermitian, so can take square root). Then,
U = A(A† A)−1/2 . This gives existence. For uniqueness, say U1 R1 = U2 R2 . Then, U2−1 U1 R1 = R2 . Let
U = U2−1 U1 , so U R1 = R2 . Take adjoint to see R1 U −1 = R2 and from this conclude that U = Id.
We want to generalize this to any real semisimple group. Let gσ ⊂ g be a real form of g with
corresponding Lie group Gσ ⊂ Gad . Note this is a closed subgroup (if not, closure has a larger Lie
algebra, but every element of it still fixed by σ). Recall the decompositions
g = k ⊕ p, gc = kc ⊕ pc , and gσ = kc ⊕ ipc .
P σ = exp(ipc ) ⊂ Gσ .
Warning 19.6. This is not a group in general, e.g. since pc is not a Lie algebra but a module over kc .
Alternatively, pc acts by Hermitian matrices, so P σ does as well, but products of Hermitian matrices
need not be Hermitian.
onto positive Hermitian matrices is a diffeomorphism. Why? Take log of the eigenvalues to get inverse
∼
log : Herm>0 (n) −
! iu(n). The map in the statement is a restriction of this one.
Corollary 19.8. P σ ∼
= RN where N = dim p.
µ : K σ × P σ −! Gσ
94
Corollary 19.10. Gσ ∼
= K σ × Rdim p ∼ K σ with ∼
= denoting diffeomorphism and ∼ denoting homotopy
equivalence here.
closed
Hence topology of semisimple Lie groups largely reduces to topology of compact Lie groups (K σ ⊂
Gcad ).
Corollary 19.11. Gad = Gcad × P with P ⊂ Gad acting on g by Hermitian positive operators. Hence, Question:
Get this
π1 (Gad ) = π1 (Gcad ) = P ∨ /Q∨ corollary
by regarding
(P here the weight lattice).
Gad as a real
Corollary 19.12. Say G is a semisimple complex Lie group with center Z = Z(G). Then, Z ⊂ Gc , so Lie group?
c
coincides with the center of G . Question:
In particular, the restriction of f.dim reps from G to Gc is an equivalence. and P ∼=
This generalizes straightforwardly to any complex semisimple Lie group G instead of Gad , i.e. G = Rdim p ?
Gc × P and RepG = RepGc .
Warning 19.13. G and Gc have the same topology, but G and Gσ do not. Gσ ’s topology is related to
that of K σ . In particular, it can happen that G is simply connected but Gσ is not.
Example. Say G = SL2 (C) and Gσ = SL2 (R) is its split form. Note that SL2 (R) ⊃ SO(2) ∼
= S 1 , and in
fact we have a polar decomposition
SL2 (R) = SO(2) × P
Example. Take Gσ = SLn (C) (regarded as a real Lie group). Then, K σ = SU(n) and P σ = positive
Hermitian matrices of determinant 1. Then, we recover the usual polar decomposition.
Example. If Gσ = SLn (R), then K σ = SOn and P σ is positive symmetric matrices of det 1. This gives
the usual real polar decomposition.
Definition 19.14. We say G is linear if it admits a faithful f.dim representation, i.e. it can be realized
as a subgroup of GLn .
Example. Every semisimple complex Lie group is linear. Let PG ⊂ G be the weight lattice of G (so
λ ∈ PG ⇐⇒ Lλ |π1 (G) = 1). If PG /Q is cyclic, we can take λ a generator, and then Lλ will be faithful.
P/Q is cyclic for all reduced irreducible root systems except D2n , where it’s Z/2Z × Z/2Z. For so(4n),
take λ1 , λ2 to generate PG /Q, and then L = Lλ1 ⊕ Lλ2 is faithful.
We can characterize real linear semisimple Lie groups as well. Say gσ ⊂ g is a real form with
corresponding Lie group Gσ ⊂ G. Then, Gσ is linear since G is, and all semisimple linear real groups are
of this form.
95
Example. Let Gσ = Sp2n (R) so K σ = U (n). Note Gσ ⊂ Sp2n (C) which is simply connected and
(m)
π1 (U (n)) = Z. For every integer m ≥ 2, Sp2n (R) has an m-sheeted cover Sp2n (R) with no f.dim faithful
representations (in fact, all its f.dim reps will factor through Sp2n (R)).
Exercise (Homework). Classify simply connected real semisimple linear Lie groups.39
C× × SLn (C)
GLn (C) = .
µn
Z ⊂ (S 1 )r × Gc0 ⊂ (C× )r × G0 .
RepG ∼
= RepK,
Recall 19.17. Cartan subalgebras of g are conjugate, even when equipped with system of simple roots
(use Weyl group acts (simply) transitively on systems of simple roots).
Lemma 19.19. All Cartan subalgebras (with systems of simple roots) of gc are conjugate.
39 Something something find those where k is semisimple (not just reductive)
96
Proof. Let (hc1 , Π1 ) and (gc2 , Π2 ) be two such things. Then, there exists some g ∈ G so that g(hc1 , Π1 )g −1 =
(hc2 , Π2 ). Also, g(hc1 , Π1 )g −1 = (hc2 , Π2 ) (where g = ω(g)). Thus,
−1
g −1 g(hc1 , Π1 ) g −1 g = (hc1 , Π1 ),
Remark 19.20. Also Cartan subalgebras in g are in bijection with maximal tori in G.
Theorem 19.22 (to be proved next time). Every element of Gc is contained in a maximal torus.
Warning 19.23. This is false for complex groups (e.g. there exists non-semisimple elements like a matrix
with nontrivial Jordan block).
20 Lecture 20 (5/4)
Let K be a compact connected Lie group. We proved last time that all maximal tori in K are conjugate
(even with a choice of positive root system). The point was that maximal tori T ⊂ K are in bijection
with Cartan subalgebras t ⊂ k = Lie K.
Today, we would like to prove the following theorem.
Theorem 20.1. Every element of a connected compact Lie group K is contained in a maximal torus.
(A generic element will be contained in the unique maximal torus which is its center, but a special
element may be contained in many, e.g. a central element is contained in all)
Proof. The complexification KC =: G will be a reductive connected group with k = gc (g = Lie C). We
may assume WLOG that K is semisimple (reductive groups are products of semisimple groups with torii,
40 Since hc maximal =⇒ H c closed
97
up to finite quotient). Let K 0 ⊂ K be the subset of elements contained in a maximal torus. Also, fix
some maximal torus T ⊂ K. Consider the map
f : K ×T −! K
(k, t) 7−! ktk −1 .
Note that K 0 = im(f ), so K 0 is compact (so closed in K). Hence, K \ K 0 is open. Now, say x ∈ K is
regular if the centralizer zx ⊂ k of x in the Lie algebra has dimension ≤ r := rank K. The set of such
elements Kreg ⊂ K is open (rank is lower semicontinuous) and nonempty (many regular elements in gc
and exponentials of small regular elements will also be regular). On the other hand, any regular element
x is contained in exp(zx ) which is a maximal torus. Therefore, Kreg ⊂ K 0 so K \ K 0 ⊂ K \ Kreg . The Question:
set of non-regular elements is defined by polynomial equations . Polynomials cannot vanish on an open
41
Why?
set unless they vanish identically; these polynomials don’t vanish identically (regular elements exist), so
K \ K 0 is empty.
Proof. If T ⊂ K is a maximal torus, then exp : Lie T ! T is surjective (since T commutative so exp a
homomorphism with image containing an open neighborhood of identity). Applying this for all maximal
tori gives the result.
!
−1 1
Non-example. In G = SL2 (C), SL2 (R), is not in the image of the exponential map. It is
−1
the exponential of a matrix ! !
−1 1 πi ?
= exp ,
−1 πi
but it’s not the exponential of a traceless matrix.
Definition 20.3. We say that g ∈ G is semisimple (resp. unipotent) if for every f.dim rep ρ : G !
GL(V ), the operator ρ(g) is semisimple42 (resp. unipotent43 ).
Remark 20.4. For Lie algebras, we defined an element to be semisimple iff adx was a semisimple operator,
but this is the same as ρ(x) being semisimple for any rep ρ : g ! End(V ) since x ∈ g semisimple iff it’s
contained in a Cartan subalgebra.
Similarly, g ∈ G will be semisimple iff it’s contained in a maximal torus.
We won’t delve into this theory here, but developing it is done in a series of homework exercises.
41 ranker smaller than expected, so certain minors have to vanish
42 diagonalizable since V a C-rep. In general, ‘semisimple’ means diagonalizable over algebraic closure
43 only eigenvalue is 1
98
Exercise. Let Y be a faithful f.dim rep of G. Then, g ∈ G is semisimple (resp. unipotent) iff ρY (g) is
semisimple (resp. unipotent).
∼
Exercise. The exponential map exp : N (g) −
! U(G) gives a homeomorphism from the nilpotent elements
of g to the unipotent elements of G.
Exercise. Let Z = Z(G) ⊂ G be the center of G, and let π : G ! G/Z =: Gad be the natural projection. Note
∼ dim Gad may
(1) U(G) −
! U(G/Z) is a homeomorphism.
be less than
Example. If G is a torus, then G/Z = 1, so tori have no nontrivial unipotent elements. dim G, e.g.
if Z contains
(2) SS(G) = π −1 (SS(G/Z)) where SS(·) denotes semisimple elements. a torus
Gσ = Kσ AKσ ,
Remark 20.8. This extends to reductive groups e.g. by forming the decomposition separately for the
torus and semisimple factors.
Example. Let Gσ = GLn (C). Then, Kσ = U (n) and A = {positive diagonal matrices}. This says any
invertible matrix g over C is of the form u1 au2 where u1 , u2 are unitary and a is diagonal matrix with
positive entries.
99
Example. Let Gσ = GL+
n (R), invertible real matrices with positive determinant. Any g ∈ Gσ can be
written as g = O1 aO2 with Oi orthogonal with determinant 1 and a positive diagonal.
Easy to go from this to GLn (R) = O(n)A SO(n) with A consisting of diagonal matrices with pos.
entries.
Proof of first example. Write g = U R the polar decomposition, so U unitary and R positive hermitian.
−1 −1
We may diagonalize R = U 0 a (U 0 ) with U 0 unitary. Then, g = U U 0 a (U 0 ) .
Lemma 20.9 (Homework). hc− is a maximal abelian subalgebra of pc , and all such subalgebras are
conjugate under Kσ .
Proof of Theorem 20.6. We know Gσ = Kσ Pσ . Hence, it is enough to show that every element p ∈
Pσ is conjugate to an element of a by action of Kσ . This follows from the Lemma. Consider hcp− a
maximal abelian subalgebra of pc containing i log p. Then, by the lemma, there exists g ∈ K c such that
Ad(g)(hcp− ) = hc− . Thus, Ad(g)(i log p) ∈ hc− , so Ad(g)(log p) ∈ ihc− , so gpg −1 ∈ exp(ihc− ) = A.
on C(K)Kad , continuous function invariant under adjoint action.44 But every f ∈ C(K)Kad is determined
by its values on T (since every element conjugate to an element of T ), so we should be able to write this
inner product just in terms of T . That is, we should have
Z
(f, g) = f (t)g(t)w(t)dt
T
for some weight function w(t). All our functions are Weyl group invariant, this weight should be W -
invariant as well.
What is w(t)? You can compute it directly by doing a computation in differential geometry. However,
we will not have to do this, because we secretly know what it is from the Weyl character formula.
1 2
where w(t) = #W |∆(t)| and ∆(t) is the Weyl denominator
Y
∆(t) = (1 − α(t)).
α∈R+
44 If you wanted, could have taken L2 (K)Kad instead; it doesn’t matter
100
Example. Take K = U (n) with
z
1
..
: zj ∈ C with |zj | = 1 ∼= (S 1 )n .
T =
.
zn
The (positive) roots are α = αjm = ej − em , i.e. α(t) = zj /zm . We see that
Z Z 2
1 Y zj dθ1 . . . dθn
f (k)dk = f (z1 , . . . , zn ) 1− where zj = eiθj .
U (n) n! T j<m
zm (2π)n
Proof of theorem 20.10. We know the characters χλ (k) are dense in C(K)Kad , so it’s enough to check
this equality for f = χλ , the character of Lλ . Characters are orthogonal, so
Z
χλ (k)dk = (χλ , 1) = (χλ , χ0 ) = δ0,λ .
K
Compare this with (use Weyl character formula for first equality45 and Weyl denominator formula for Possibly a
the second) typo below
2 R P
sign(w)(w(λ + ρ))(t) Y
Z
1 Y 1 Y
χλ (t) (1 − α(t)) dt = T w∈W
−1
Q (1 − α(t)) (1 − α(t)−1 )dt
#W T #W ρ(t) α∈R+ (1 − α(t) )
α∈R+ α∈R+ α∈R+
Z ! !
1 X X
= sign(w)w(λ + ρ)(t) sign(w)w(ρ)−1 (t) dt
#W T w
w∈W
1 X
1 = 1.
#W
w∈W
You can reverse this. If you do the differential geometry calculation giving the integral formula, then
45 Also useR α(t) ∈ S 1 so α(t) = α(t)−1
46 Think, imθ · e−inθ = 0 when n 6= m
S1 e
101
you can use it to obtain the Weyl character formula instead. This is what Weyl did.
ker d|Ωi
Hi (M ) = Hi (M, C) = .
im d|Ωi−1
Forms in ker d are called closed forms while those in im d are called exact forms.
ιX : Ωj ! Ωj−1 .
n
X X ∂
LX (f dxi1 ∧ · · · ∧ dxir ) = (LX f )dxi1 ∧· · ·∧dxir + f ·dxi1 ∧· · ·∧aij ∧· · ·∧dxir where X = ai .
j=1 i
∂xi
Friday is a holiday, so homework due date moved to Monday. There will be one more homework after
the current one, due on Monday of the last week.
Recall 21.1. Let M be a manifold. Its cohomology Hi (M, C) can be computed using the de Rham
complex
d d d
0 ! Ω0 (M ) −
! Ω1 (M ) −
! ... −
! Ωn (M ) ! 0,
where n = dim M . Here, Ωi (M ) is the space of (smooth, C-valued) differential i-forms, and d is the de
Rham differential determined by
102
This satisfies d2 , and the cohomology of this complex
ker d
Hi (M, C) =
im d
is the cohomology of M .
Recall 21.3. There is a product structure on cohomology. If ω ∈ Ωi and ξ ∈ Ωj , can get an (i + j)-form
ω ∧ ξ ∈ Ωi+j . Moreover,
d(ω ∧ ξ) = dω ∧ ξ + (−1)deg ω ω ∧ dξ
(Above, you can think of the sign as coming from commuting d past ω). The Leibniz rule above tells I don’t know
us that ∧ descnds to if this rea-
n
soning works
M
∗ n
H (M ) = H (M )
i=0 in general
giving it the structure of an associative graded commutative algebra. Graded commutative means to always
get the right
ab = (−1)deg(a) deg(b) ba. sign in these
graded situ-
Remark 21.4. Let f : M ! N be a differential (i.e. smooth) map. Then, we get a pullback f ∗ : Ωi (N ) ! ations
Ωi (M ) which commutes with d and preserves ∧. Hence, it induces a graded algebra homomorphism
f ∗ : H∗ (M ) ! H∗ (N ).
Exercise. Say ft : M ! N is a smooth family of maps for t ∈ (0, 1) (i.e. f : (0, 1) × M ! N smooth).
Then, ft∗ : H∗ (N ) ! H∗ (M ) is independent of t. Hint: show that if dω = 0, then ∂ ∗
∂t ft ω is exact.
(f ∗ does not change under deformations of f ).
Before turning to Lie groups, we recall Cartan’s magic formulas. Let v be a vector field on M . Then
we get Lie derivative Lv : Ωi ! Ωi as well as a contraction operator ιv : Ωi ! Ωi−1 . This latter operator
is defined by
ιv (gdf1 ∧ · · · ∧ dfr ) = Alt (g · ιv f1 df2 ∧ · · · ∧ dfr ) ,
average the application of Lv over all permutations (or something like this). One can check
ιv (ω ∧ ξ) = ιv ω ∧ ξ + (−1)deg ω ω ∧ ιv ξ
and Lv (ω ∧ ξ) = Lv ω ∧ ξ + ω ∧ Lv ξ. Lv degree 0
operator so
Lemma 21.5 (Cartan’s magic formula). Lv = ιv d + dιv .
no sign
Note that ιv ◦ d + d ◦ ιv = [ιv , d] is a (graded) commutator.
ιv is a chain
homotopy
103
from Lv
to the zero
map
Proof. We showed last semester that the commutator of two derivations is a derivation. The same holds
true for graded commutators, so [ιv , d] is a derivation of degree 0 (exercise). Hence, we can check this
equality on generators in a local chart.
That is, we may assume ω = f or ω = df (everything else is a wedge/product of these). We see
since ιv f = 0. Similarly,
Lv df = dLv f = dιv df = (ιv d + dιv )df
since d2 f = 0.
(A path in G gives a homotopy of actions of its elements, so anything in the path component of 1
acts via the identity).
Theorem 21.9. Suppose of compact connected Lie group G acts on M . Then H∗ (M ) is computed by
the complex Ω∗ (M )G ⊂ Ω∗ (M ) of G-invariant forms.
Ω∗ (M ) = Ω∗ (M )1 ⊕ Ω∗ (M )0 = Ω∗ (M )G ⊕ ker P.
Thus, ω = dη for some η = η1 + η0 ∈ Ωi−1 (M ). Thus, ω = dη1 + dη0 =⇒ dη1 = 0 so ω = dη0 which
means that Ω∗ (M )0 is exact (it has zero cohomology).
Corollary 21.10. Let G be a compact Lie group. Then H∗ (G) is computed by Ω∗ (G)G , the complex of
left-invariant differential forms.
Recall that the space of left-invariant vector fields is isomorphic to the Lie algebra Lie G. By the same
reasoning, one shows that
^i
Ωi (G)G ∼
= g∗ where g = (Lie G)C .
104
That is, cohomology of a compact Lie group is computed using a complex of the form
^2 ^n
0 −! C −! g∗ −! g∗ −! . . . −! g∗ −! 0
(this gives a way to see cohomology of a compact Lie group is finite dimensional).
Before proving a description entirely in terms of the Lie algebra, we need another lemma from differ-
ential geometry.
Proof Sketch. (1) RHS(f v0 , v1 , . . . , vm ) = f · RHS(v0 , . . . , vm ) so the RHS is linear over functions (in
each variable v0 , v1 , . . . , vm ).
(2) Now it’s enough to check this when vi = ∂
∂xki . Say ω = f dxj1 ∧ · · · ∧ dxjm . This it’s a “straight-
forward” calculation to verify this equality.
Corollary 21.13. If ω ∈ Ω∗ (G)G is left-invariant and v0 , v1 , . . . , vm are left-invariant vector fields, then
X
dω(v0 , . . . , vm ) = (−1)i+j ω ([vi , vj ], v0 , . . . , vbi , . . . , vbj , . . . , vm ) . (21.1)
0≤i<j≤m
Corollary 21.14. (21.1) defines the differential in the complex Ω∗ (G)G computed the cohomology of a
compact, connected Lie group.
makes sense for any Lie algebra g (now that we’ve defined the differential just in terms of the Lie bracket).
Definition 21.15. This complex is called the standard complex (or Chevally-Eilenberg complex)
of g, denoted CE∗ (g). Its cohomology is called Lie algebra cohomology of g, and is denoted by H∗ (g).
This makes sense for any Lie algebra over any field. One has d2 = 0 because of the Jacobi identity.
Remark 21.17. There is an algebra structure on CE ∗ (g) induced by ∧ which descends to H∗ (g), making
it an associative graded commutative algebra. This isomorphism of the previous prop is one of graded
algebras.
Note we need G compact to compute its cohomology using its Lie algebra.
105
V∗
Example. Say g is abelian. Then d = 0 since all Lie brackets vanish. Thus H∗ (g) = g∗ is the exterior
algebra of the dual of g.
Non-example. If you replace the circle by its universal cover, you get R and H∗ (R) 6= H∗ (S 1 ) =
H∗ ((Lie R)C ).
Corollary 21.18. Finite covers of compact Lie groups induce an isomorphism in H∗ (−; C).
Non-example. SU(2) ∼
= S 3 is a double cover of SO(3) ∼
= RP3 . The have different integral cohomology.
∼
H∗ (M ) ⊗ H∗ (N ) −
! H∗ (M × N )
as graded algebras.
0
(a ⊗ b)(a0 ⊗ b0 ) = (−1)deg(b) deg(a ) (aa0 ⊗ bb0 ).
is an injection, but is not an isomorphism in general. What is true is that the image is dense w.r.t an
appropriate topology. This makes proving Künnth a bit subtle.
However, for Lie groups, Künnth formula comes for free:
106
as a graded algebra, where we’re taking g-invariants under the adjoint action.
Hence, we only need to show that d = 0 on this space. This is easy to see from invariance, e.g.
Similarly with vi replacing v0 above. Equation (21.1) tells us that the alternating sum of these (which
are all 0) is 2dω(v0 , v1 , . . . ), so d = 0.
V2
Example. Say ω ∈ g∗ . Then,
If ω is ad-invariant, then
ω([x, y], z) + ω(y, [x, z]) = 0 = ω([y, x], z) + ω(x, [y, z]) = 0 = ω([z, w], y) + ω(x, [z, y]).
V∗ V∗
Use the Weyl character formula. We have g = h ⊕ gα , so g∗ = h∗ ⊕ g∗α . Hence,
L L V
α∈R α∈R
letting r = rank(g), ^∗ Y
ch g∗ (t) = 2r (1 + α(t))
α∈R
^ ∗ g ^∗
dim g∗ = ch g∗ , ch C
Z
1 Y Y
= 2r (1 + α(t)) (1 − α(t))dt
#W T
α∈R α∈R
2r
Z Y
= (1 − α(t2 ))dt
#W T
α∈R
2r
Z
= w(t2 )dt.
#W T
107
We change variables t 7! t2 to see that this is equal to
2r
Z Z
1
w(t)dt = 2r since w(t)dt = (ch C, ch C) = 1.
#W T #W T
Why did we get a power of 2? This is related to the fact that the cohomology of a Lie group is a
graded Hopf algebra. Let m : G × G ! G be the multiplication map. This induces a coproduct
∆ : H∗ (G) ! H∗ (G × G) ∼
= H∗ (G) ⊗ H∗ (G)
map. This is coassociative in the sense that (∆ ⊗ id) ◦ ∆ = (id ⊗∆) ◦ ∆ and is an algebra homomorphism.
This makes H∗ (G) a graded bialgebra.
Exercise. Deduce from this that H∗ (G) is a free (graded commutative) algebra. Hence, all generators are
odd.47
Corollary 21.23.
^ ∗ g
∼
^
H∗ (G) = g∗ = (ξ1 , . . . , ξk ) where deg ξi = 2mi + 1.
Corollary 21.24.
^∗
H∗ (G) = (ξ1 , . . . , ξr ) and deg ξi = 2mi + 1
We will discuss what these numbers are next time. They turn out to be the exponents of G (See
section 9.1).
22 Lecture 22 (5/11)
Last time we discussed the (complex) cohomology of Lie groups. In the end, we saw that the cohomology
of a compact Lie group is a free graded algebra with generators in odd degrees, computed as the invariants
of the exterior algebra on the dual of the Lie algebra.
Why do
V g
3 ∗
Exercise. g = C spanned by the triple product ([x, y], z) (a linear functional on g⊗3 .
these de-
From this it follows that m1 = 1. grees add to
47 An even generator would give nontrivial cohomology in arbitrarily high degree to dim g?
108
Example (g simple of rank 2). We get m2 = 2 for A2 , m2 = 3 for B2 = C2 , m2 = 5 for G2 , etc. This is
because m2 = #R+ − m1 = #R+ − 1 for these cases.
Theorem 22.2. The numbers mi are the exponents of g defined in Section 9.1. In other words, the
degrees 2mi + 1 of generators of the cohomology ring are the dimensions of simple modules occurring in
the decomposition of g over its principal sl2 -subalgebra.
V∗ g
Remark 22.4. The Poincaré polynomial P (z) of g∗ is given by the formula
(1 + z)r
Z Y
P (z) = (1 + zα(t))(1 − α(t)).
#W T α∈R
Hence, the above theorem is equivalent to the statement that this integral equals 2mi +1
Q
i (1t ).
We will prove this for the case of type A.
Corollary 22.5. For g = sln , we have mi = i. Equivalently, the same is true for g = gln if we add
m0 = 0.
with sum taken over λ with ≤ n parts. There are exactly 2n such symmetric partitions λ; they consist
of a sequence of hooks (k, 1k−1 ) with decreasing values of k. The degree of such a hook is 2k − 1, and so
we see that
P (z) = (1 + z)(1 + z 3 ) . . . (1 + z 2n−1 ).
Corollary 22.7.
^∗
H∗ (U (n)) = (ξ1 , ξ3 , . . . , ξ2n−1 )
109
with subscripts denoting the degrees, and
^∗
H∗ (SU(n)) = (ξ3 , . . . , ξ2n−1 ).
Remark 22.8. For gln , one gets the same cohomology even integrally. This is not true for other Lie
algebras.
Notation 22.10. For k ⊂ g a pair of Lie algebras (over any field, of any dimension), let
^ ∗ g ∗ k
CEi (g, k) := .
k
Definition 22.11. The complex CE• (g, k) is called the relative Chevalley-Eilenberg complex, and
its cohomology is called the relative Lie algebra cohomology, denoted H• (g, k).
◦
Going back to compact Lie groups, we have CE• (g, K) = CE• (g, k)K/K , so
Corollary 22.12.
◦
H∗ (G/K) ∼
= H∗ (g, k)K/K
as algebras.
Thus computation of the cohomology of G/K reduced to the computation of relative Lie algebra
cohomology, which is again purely algebraic.
110
Hence, the differential CE• (g, K) is 0 and thus
^ • K
H∗ (G/K) ∼
= (g/k)∗ ,
Example (Grassmannians). Let G = U (n + m) and K = U (n) × U (m), so that G/K is the Grass-
mannian Gn+m,n (C) ∼
= Gn+m,m (C) (the manifold of m− (or n−)dimensional subspaces of Cm+n ). The
element z = In ⊕ (−Im ) acts by −1 on g/k = V ⊗ W ∗ ⊕ W ⊗ V ∗ , where V, W are the tautological rep-
resentations of U (n) and U (m). So we get that the Grassmannian has cohomology only in even degrees,
and that
^2i U (n)×U (m)
H2i (Gm+n,m (C)) = (V ⊗ W ∗ ⊕ W ⊗ V ∗ ) .
We can therefore use skew Howe duality to see that dim Hom2i (Gm+n,m (C)) = Ni (n, m), where Ni (n, m) We have
is the number of partitions λ whose Young diagram has i boxes and fits into the m × n rectangle. an exterior
To compute Ni (m, n), consider the generating function fn,m (q) = i Ni (n, m)q i . Denote by pi the power of a
P
qm − 1
n+m [m + n]q !
fm,n (q) = = where [m]q := and [mq ]! := [1]q [2]q . . . [m]q .
n q [mq ]![nq ]! q−1
X n + m m
Y 1
zn = .
n q j=0
1 − qj z
n≥0
X n + m 1
zn = .
m (1 − z)m+1
n≥0
111
have a complete flag
0 = F0 ⊂ F1 ⊂ · · · ⊂ Fn+m = Cm+n .
Given an m-dimensional subspace V ⊂ Cm+n , let `j be the smallest integer for which dim(F`j ∩ V ) = j.
Then,
1 ≤ `1 < `2 < · · · < `m ≤ m + n,
Definition 22.15. The subset Sλ of the Grassmannian is called the Schubert cell corresponding to λ.
We see that Gm+n,m (C) has a cell decomposition into a disjoint union of Schubert cells. This allows
one to rederive the same formula for the Poincaré polynomial of the Grassmannian from the following
fact from algebraic topology:
Proposition 22.16. If X is a connected cell complex which only has even-dimensional cells, then the
cohomology of X vanishes in odd degrees, and the groups H2i (X; Z) are free abelian groups of ranks
b2i (X), where the Betti number b2i (X) is just the number of cells in X of dimension i. Moreover, X is
simply connected.
Definition 22.18. The flag manifold Fn (C) is the space of all flags
0 = V0 ⊂ V1 ⊂ · · · ⊂ Vn = Cn with dim Vi = i.
It is a homogeneous space since Fn = G/T , where G = U (n) and T = U (1)n is a maximal torus in G.
We have fibrations π : Fn (C) ! CPn−1 sending (V1 , . . . , Vn−1 ) to Vn−1 , whose fiber is the space of
flags in Vn−1 , i.e. is Fn−1 (C). By induction48 , one argues that the flag manifolds can be decomposed
into even-dimensional cells isomorphic to Cr (also called Schubert cells). Thus, the Betti numbers of
Fn vanish in odd degrees, and in even degrees they are given by the generating function As a vector
X space, the
b2i (Fn )q n = [n]q ! = (1 + q)(1 + q + q 2 ) . . . (1 + q + · · · + q n−1 ). cohomology
of Fn (C)
Remark 22.19. There is also a map πm : Fm+n (C) ! Gm+n,m (C) sending (V1 , . . . , Vn+m−1 ) 7! Vm .
will be ten-
This is a fibration with fiber Fm (C) × Fn (C). From this one gets another proof of the formula for Betti
sor prod-
numbers of the Grassmannian.
uct of coho-
48 The fiber bundle will become trivial over the cells?
mology of
CPk for k =
112
1, . . . , n − 1
We can also define the partial flag manifold FS (C) for S ⊂ [1, n − 1], i.e. it is the space of partial
flags (Vs : s ∈ S) with Vs ⊂ Cn , dim Vs = s, and Vs ⊂ Vt if s < t. These include both (complete) flag
manifolds and Grassmannians.
Exercise. Let S = {n1 , n1 + n2 , . . . , n1 + · · · + nk−1 } and let nk = n − n1 − · · · − nk−1 . Show that the
even Betti numbers of the partial flag manifold are the coefficients of the polynomials
[n]q !
PS (q) = ,
[n1 ]q ! . . . [nk ]q !
while the odd Betti numbers vanish. Also, show the partial flag manifold is simply connected.
so it looks like
^2
0 ! V ! Hom(g, V ) ! Hom( g, V ) ! · · · .
for ω ∈ CEm .
g
CE∗ (g, V ) = (Ω∗ (G) ⊗ V )
The cohomology of this complex CE• (g, V ) is called the cohomology of g with coefficients in V
and is denoted H∗ (g, V ). The cohomology we studied before is simply Hi (g) = Hi (g, C).
Proposition 23.1. If G is compact and V is a f.dim nontrivial irrep, then Hi (g, V ) = 0 for all i > 0.
(I missed the explanation, but this follows from what we did before. Something about cohomology
being computed using invariant forms so all nontrivial irreps drop out or something, who knows).
Remark 23.2. In general, H0 (g, V ) = V g is g-invariants.
Proposition 23.3 (Whitehead’s lemma). If g is semisimple, then H1 (g, V ) = H2 (g, V ) = 0 for any
f.dim V .
113
Proof. Can assume V irreducible. If V 6= C, this follows from previous prop, so say V = C. The standard
complex starts
0 d
^2
0!C−
! g∗ −
! g∗ ! · · · .
V g
2
Above, d(f ) = f ([x, y]). Hence, H1 (g, C) = HomLie (g, C) = 0. Similarly, H2 (g, C) = g∗ . Why is
this 0? Can assume g simple. This is the space of g-invariant skew-symmetric homomorphisms A : g ! g∗
(A∗ = −A). Note that Homg (g, g∗ ) = CK is 1-dimensional, spanned by the killing form. The Killing
form is symmetric, not skew-symmetric, so there are no skew-symmetric invariant forms.
Remark 23.4. If you look at cohomology of non-semisimple Lie algebras or with coefficients in an infinite-
dimensional rep, then things are more complicated.
23.1.2 i=1
and 1-coboundaries B 1 (g, V ) = {ω = dv : v ∈ V } (i.e. ω(x) = xv which satisfies [x, y]v = x(yv) −
y(xv)).
classifies extensions 0 ! W ! Y ! V ! 0 of V by W .
For this to be a representation, we need ρY ([x, y]) = [ρY (x), ρY (y)]. Note that
!
ρW (x)ρW (y) ρW (x)ω(y) + ω(x)ρV (y)
ρY (x)ρY (y) = .
0 ρV (w)ρV (y)
114
Hence, ρY is a representation ⇐⇒ ω ∈ Z 1 (g, Homk (V, W )).
Exercise. If Y1 , Y2 are two such representations, then Y1 ∼
= Y2 as extensions iff ω1 −ω2 ∈ B 1 (g, Homk (V, W )).
Note 9. I have been pretty distracted most of this lecture, so I keep missing small things.
We’re talking about semidirect products now.
This comes with a natural surjection g ⋉ V → g. What are the splittings x ↦ (ω(x), x) of this
map? The condition for ω is precisely the 1-cocycle condition: ω([x, y]) = xω(y) − yω(x), so we need
ω ∈ Z¹(g, V). Note that V acts on g ⋉ V by automorphisms: w · (v, x) = (v + xw, x). We call this
'conjugation by w.'
Exercise. Sections s1, s2 are conjugate ⇐⇒ ω1 − ω2 ∈ B¹(g, V), i.e. they differ by a coboundary.
Let's see yet another interpretation. Consider V = g, the adjoint representation. Consider ω : g → g
with ω ∈ Z¹(g, g). Then,
ω([x, y]) = [x, ω(y)] − [y, ω(x)] = [x, ω(y)] + [ω(x), y],
so ω ∈ Der(g), i.e. ω is a cocycle iff it is a derivation. The coboundaries ω ∈ B¹(g, g) are the inner
derivations, ω(x) = [d, x] for some d ∈ g. Thus,
H¹(g, g) = Der(g)/Inn(g) = Out(g).
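For semisimple g, Whitehead's lemma gives H¹(g, g) = 0, i.e. every derivation is inner. Here is a small sanity check of that for sl2 (my own SymPy sketch; the structure constants are hard-coded and the helper names are mine): the space of solutions of the defining linear equations for a derivation is 3-dimensional, which equals dim Inn(sl2) = dim sl2.

    import sympy as sp

    # Structure constants of sl2 in the basis (e, h, f): [h,e]=2e, [h,f]=-2f, [e,f]=h.
    def bracket(i, j):
        table = {(1, 0): [2, 0, 0], (1, 2): [0, 0, -2], (0, 2): [0, 1, 0]}
        if (i, j) in table:
            return sp.Matrix(table[(i, j)])
        if (j, i) in table:
            return -sp.Matrix(table[(j, i)])
        return sp.zeros(3, 1)

    d = sp.Matrix(3, 3, list(sp.symbols('d0:9')))   # candidate derivation: D(x_a) = sum_k d[k,a] x_k
    eqs = []
    for a in range(3):
        for b in range(3):
            lhs = d * bracket(a, b)                  # D([x_a, x_b]) written in the basis
            rhs = sum((d[k, a] * bracket(k, b) + d[k, b] * bracket(a, k) for k in range(3)),
                      sp.zeros(3, 1))                # [D x_a, x_b] + [x_a, D x_b]
            eqs += list(lhs - rhs)

    A, _ = sp.linear_eq_to_matrix(eqs, list(d))
    print(9 - A.rank())   # 3: every derivation of sl2 is inner, so H^1(sl2, sl2) = 0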
23.1.3 i = 2
The second cohomology H²(g, V) classifies abelian extensions
0 → V → g̃ → g → 0.
Example. The Heisenberg algebra H has basis x, y, c with c central ([x, c] = [y, c] = 0) and [x, y] = c.
Let V = ⟨c⟩, a 1-dimensional abelian ideal. We have H/V = k² = ⟨x, y⟩ (abelian quotient). This gives an
exact sequence
0 → k → H → k² → 0.
In general, choose a vector-space splitting g̃ = g ⊕ V and encode the bracket of g̃ by a skew-symmetric map ω : Λ²g → V.
What is the condition on ω for this to be a Lie algebra structure? The condition is given by the Jacobi
identity (the bracket is already skew-symmetric by definition). One checks that it satisfies Jacobi ⇐⇒ ω ∈
Z²(g, V) (exercise). Furthermore, g̃1 ≅ g̃2 (as extensions) iff ω1 − ω2 = dη ∈ B²(g, V).
In particular, H²(k²) = k, with a nonzero element corresponding to the Heisenberg algebra (up to
scaling).
Deformations. Consider a deformation of the bracket
[x, y]_t = [x, y] + t c1(x, y) + t² c2(x, y) + ··· ,
a formal power series. These coefficients will be maps ci : Λ²g → g. We want the above to be a Lie
bracket (i.e. satisfy Jacobi) for all t; that is, it should give a Lie algebra structure on g[[t]] so that, mod
t, you recover the original one.
We'd like to understand/analyze things term-by-term. We start with first order analysis, working mod
t². That is, we work with g[t]/t²g[t] = g ⊕ tg. Note we have an exact sequence
0 → tg → g ⊕ tg → g → 0
with tg abelian, so we have an abelian extension of g by itself with zero commutator. Hence, the condition
on c1 is that it should be a 2-cocycle: c1 ∈ Z²(g, g). Isomorphisms of deformations are given by
a = 1 + ta1 + t²a2 + ··· with ai ∈ Endk(g). (Margin question: what is a?) Possible first order deformations
c1 are classified up to isomorphism by H²(g, g). In particular, if H²(g, g) = 0, then all deformations of g
are in fact trivial (isomorphic to c1 = c2 = c3 = ··· = 0):
(1) Since c1 ∈ Z²(g, g) = B²(g, g), we can kill it by a suitable a(1) = 1 + ta1 + ···. This gives
[x, y]t = [x, y] + t²c2(x, y) + ···.
(2) Now one discovers that c2 ∈ Z²(g, g) = B²(g, g), so we can kill it as well by a(2) =
1 + t²a2 + ···. This gives [x, y]t = [x, y] + t³c3(x, y) + ···.
(3) Now one continues. Use the composition ··· ◦ a(3) ◦ a(2) ◦ a(1) =: a (this makes sense since only
finitely many degrees are involved in each step). This transforms the original deformation to the trivial
one with [x, y]t = [x, y].
Example. Say g = k² = ⟨x, y⟩ ([x, y] = 0). Then, H²(g, g) = k², so we have a 2-parameter deformation.
Can take [x, y] = tx + sy with t, s ∈ C. If (t, s) ≠ 0, all are isomorphic as Lie algebras (though not as
deformations) by the action of GL2(C). Can always bring it to the form [x, y] = y (if s ≠ 0, replace y by
y′ = tx + sy, so [x, y′] = sy′, and rescale x), i.e. to the Lie algebra
aff1 := { ( a b ; 0 0 ) : a, b ∈ C } ⊂ gl2 (the Lie algebra of the affine group of the line).
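Concretely, inside aff1 one can take x = E11 and y = E12, which indeed satisfy [x, y] = y (a quick NumPy check of my own; this particular choice of matrices is just one convenient realization):

    import numpy as np

    x = np.array([[1.0, 0.0], [0.0, 0.0]])   # E_11
    y = np.array([[0.0, 1.0], [0.0, 0.0]])   # E_12
    print(np.allclose(x @ y - y @ x, y))      # True: [x, y] = y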
Suppose g is a Lie algebra with c1 ∈ Z²(g, g) and [c1] ≠ 0 ∈ H²(g, g). Can we lift our deformation mod
t³? Can we find c2 so that
[x, y]t = [x, y] + tc1(x, y) + t²c2(x, y)
satisfies Jacobi? If c1 = 0, the condition would be dc2 = 0 (that it is a cocycle). In general, the condition
is
dc2 = ½[c1, c1] ∈ Homk(Λ³g, g),
where [−, −] is the Schouten bracket (this is some explicit quadratic expression we won’t write down).
Exercise. [c1 , c1 ] is a cocycle.
Hence we get a cohomology class [c1, c1] ∈ H³(g, g). To get a lifting (i.e. to solve dc2 = ½[c1, c1]),
this needs to be a coboundary, i.e. the obstruction class [[c1 , c1 ]] needs to vanish.
Remark 23.13. Solving these extension problems depends on the choices you make along the way (i.e.
whether or not you can find c2 depends on what you choose for c1 ), so things can get hairy fast.
One can also consider deformations of modules. So you have g and a module V , and you want to
deform it to a module V[[t]]. Say ρ = ρV : g → End V. We now want ρt = ρ + tρ1 + t²ρ2 + ··· to be an action of g on V[[t]].
We start again with first order analysis (i.e. with working mod t2 ): ρt = ρ+tρ1 +O(t2 ). Note V [t]/t2 V [t] =
V ⊕ tV so we get an extension
0 → tV → V ⊕ tV → V → 0
of modules. We see that first order deformations of V , up to isomorphism, are classified by H1 (g, Endk V ) =
Ext1 (V, V ). Deformations are a non-linear problem, so we are not done yet.
Can we lift ρt = ρ + tρ1 + O(t²) modulo t³? Again, one gets that we need dρ2 = [ρ1, ρ1]. Thus, you
have an obstruction class [[ρ1 , ρ1 ]] ∈ H2 (g, Endk V ) and you can lift iff it vanishes.
Theorem 23.15 (Levi decomposition). Let g be a f.dim Lie algebra over R or C. The exact sequence
0 → rad(g) → g → gss → 0
splits.
Once we establish this, we’ll use it to prove the 3rd fundamental theorem (that every f.dim Lie algebra
is the Lie algebra of some simply connected Lie group).
Tuesday’s lecture will be prerecorded and posted online at the usual time. No zoom meeting/real-time
class meeting on Tuesday.
24 Lecture 24 (5/18)
Last time we introduced the Levi decomposition theorem.
Recall 24.1 (Levi decomposition). Let g be a f.dim Lie algebra over R or C. The exact sequence
0 → rad(g) → g → gss → 0
splits. In particular, g ≅ gss ⋉ rad(g), and gss acts on the radical rad(g).
Proof. Choose a vector-space splitting g ≅ gss ⊕ rad(g), and let ω record the rad(g)-component of the
bracket of two elements of gss, where a, b ∈ rad(g) and x, y ∈ gss denote typical elements. Since rad(g) is
solvable, it has the filtration by its derived series rad(g) = D⁰ ⊃ D¹ ⊃ ···, with D^{i+1} = [D^i, D^i]; let Dⁿ be
the last nonzero term (we suppose Dⁿ ≠ 0). We can replace g by g/Dⁿ and then use induction on dim g to
assume that ω = 0 mod Dⁿ, i.e. ω : Λ²gss → Dⁿ. Now, Dⁿ is an abelian ideal in g. Hence, since Dⁿ is abelian, our
commutator satisfies Jacobi iff ω is a 2-cocycle. Now, Whitehead's lemma says that H²(gss, Dⁿ) = 0, so
ω = dη is a coboundary. Now, we can use η to modify the splitting, so that ω becomes 0.
We would like to prove the 3rd Lie theorem (any f.dim Lie algebra is the Lie algebra of a Lie group)
and also Ado's theorem (any f.dim Lie algebra has a faithful rep). Doing so will require some more
technology, which brings us to....
Definition 24.2. The nilradical of a solvable Lie algebra a is the subset n ⊂ a of nilpotent elements (i.e. a ∈ a s.t. ad(a) is
nilpotent).
This is an ideal containing [a, a] (commutator of two triangular matrices is strictly upper triangular), so
a/n is abelian.
The characters λ1, ..., λk ∈ (a/n)* (the weights occurring in the adjoint action of a) are a spanning set.
If not, there would be an element of a, not in n, whose adjoint matrix is nilpotent. Note that some λi may
be zero (e.g. if a is nilpotent, they are all 0).
Now let d be a derivation of a, so e^{td} is a 1-parameter group of automorphisms of a. If b lies in a
Jordan–Hölder factor of weight λ, then [ã, e^{td}(b)] = λ(e^{−td}(ã)) e^{td}(b), i.e. e^{td}(b) has weight
λt := λ ∘ e^{−td}. Hence, if λ occurs in a, then so does λt. By 'occurs' we mean shows up as a Jordan–Hölder
factor. Only finitely many characters can occur, so this 1-parameter family must be constant. Thus,
e^{td}λi = λi for all i. Therefore, e^{td} acts trivially on (a/n)*, so it acts trivially on a/n, i.e. d|_{a/n} = 0,
which exactly says d(a) ⊂ n.
24.2 Exponentiating nilpotent and solvable Lie algebras, and 3rd Lie theorem
Say g is a f.dim solvable Lie algebra over K = R or C.
Theorem 24.5. There exists a simply connected Lie group G with Lie algebra g with the exponential
map exp : g → G being a diffeomorphism. Moreover, if g is nilpotent and we identify g with G via exp : g ≅ G, then
multiplication is given by a polynomial
p : g × g → g.
Example. Say g is the Heisenberg Lie algebra H = ⟨x, y, c⟩ with [x, y] = c and [x, c] = 0 = [y, c].
Equivalently, H is the Lie algebra of strictly upper triangular 3×3 matrices:
H = { ( 0 ∗ ∗ ; 0 0 ∗ ; 0 0 0 ) },
and
exp ( 0 α γ ; 0 0 β ; 0 0 0 ) = ( 1 α γ + αβ/2 ; 0 1 β ; 0 0 1 ).
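Since X³ = 0 for a strictly upper triangular 3×3 matrix X, the exponential series terminates and reproduces the closed form above; here is a quick numerical check (my own sketch, with arbitrary values of α, β, γ):

    import numpy as np
    from scipy.linalg import expm

    alpha, beta, gamma = 2.0, 3.0, 5.0
    X = np.array([[0.0, alpha, gamma],
                  [0.0, 0.0,   beta ],
                  [0.0, 0.0,   0.0  ]])
    closed_form = np.array([[1.0, alpha, gamma + alpha * beta / 2],
                            [0.0, 1.0,   beta],
                            [0.0, 0.0,   1.0]])
    print(np.allclose(expm(X), closed_form))                 # True
    print(np.allclose(expm(X), np.eye(3) + X + X @ X / 2))   # True: the series stops at X^2/2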
Proof of Theorem 24.5. Induct on dim g. Suppose the theorem is known for all Lie algebras of dimension < dim g. Fix
χ : g → K a nontrivial character (exists since g is solvable). Let g0 := ker χ, an ideal of codimension 1 in
g. Hence, g = Kd ⊕ g0 is a semidirect product (d acts as a derivation of g0). We know by the inductive
assumption that g0 = Lie(G0) for some G0 with
exp : g0 → G0
a diffeomorphism. One then writes down an explicit group law on K × G0 and checks that (1) this is a group law, defining a Lie group G with Lie(G) = g and exp : g → G a diffeomorphism.
More precisely,
Example.
exp ( α β ; 0 0 ) = ( e^α  ((e^α − 1)/α)·β ; 0  1 ).
Definition 24.6. If g is nilpotent, the corresponding simply connected group G is called unipotent (it
acts by unipotent operators in the adjoint representation).
Theorem 24.7 (Third Lie Theorem). Every f.dim Lie algebra over R or C is the Lie algebra of some
Lie group.
Proof. By Levi decomposition, we have g = gss ⋉ a where a = rad(g) is solvable. By previous theorem,
a = Lie A with A simply connected. Furthermore, since gss is semisimple, we can write gss = Lie Gss with
Gss simply connected. Furthermore, Gss acts on a by automorphisms, so it acts on A by automorphisms.
We can now form G = Gss ⋉ A and by construction Lie G = g.
Corollary 24.8. A simply connected complex Lie group G has homotopy type of its semisimple part Gss ,
and hence of Gcss . Specifically,
G ≅ Gcss × R^m
as a manifold.
Remark 24.9. Almost the same thing holds for real groups. If G is a simply connected real Lie group,
we also have G ∼ Gss (homotopy equivalent) and Gss ∼ Kss , the simply connected compact group
corresponding to kss ⊂ k, the semisimple part of k = gσss .
The upshot is that any Lie group has the homotopy type of some compact Lie group (its maximal
compact subgroup).
Definition 24.10. A f.dim Lie algebra g over C is algebraic if it is the Lie algebra of an algebraic
group, i.e. g = Lie(G) where G = K ⋉ N with K reductive and Lie N nilpotent (i.e. N unipotent).
Non-example. Consider g1 = ⟨d, x, y⟩ with [x, y] = 0, [d, x] = x and [d, y] = √2·y.
Exercise. This is not an algebraic Lie algebra, ultimately because √2 is irrational.
Proposition 24.11. Every f.dim Lie algebra over C is a Lie subalgebra of an algebraic Lie algebra.
Proof. We first make a definition. Say g is n-algebraic if g = Lie G and G = K ⋉ A, where K is reductive
and a = Lie A is solvable with dim(a/n) ≤ n. Note 0-algebraic = algebraic.
Now on to the proof. Any g is of the form g = gss ⋉ a with a = rad(g) solvable. Thus, any f.dim
Lie algebra is n-algebraic for some n. Hence, it suffices to show that an n-algebraic Lie algebra can be
embedded into an (n − 1)-algebraic Lie algebra (when n ≥ 1).
Suppose g is n-algebraic, so g = Lie G and G = K ⋉ A with K reductive and A simply connected with
a = Lie(A) solvable satisfying dim(a/n) = n. Pick d ∈ a but not in n s.t. d is K-invariant (exists since K
acts trivially on a/n and since reps of K are completely reducible). Thus, ad(d) is a derivation of a, so
we can write
a = a(β1) ⊕ ··· ⊕ a(βr)
as a sum of generalized eigenspaces for ad(d). This decomposition is K-stable (K commutes with d).
Pick a character χ : a → C so that χ(d) = 1.
Consider the subgroup Γ ⊂ C generated by the βi; Γ ≅ Z^m. Let α1, ..., αm be a basis and write
βi = Σ_{j=1}^{m} bij αj,  bij ∈ Z.
Let T = (C×)^m, an m-dimensional torus. We make T act on a via: z = (z1, ..., zm) ∈ T acts on a(βi) by the scalar
z|_{a(βi)} = Π_{j=1}^{m} zj^{bij}.
Set
G̃ = T ⋉ G = (K × T) ⋉ A.
Note that α|_{a(βi)} = Σj bij αj = βi, where α ∈ Lie T corresponds to (α1, ..., αm). Thus, d and α have the
same eigenvalues (α semisimple, d possibly not), and
G̃ = (K × T) ⋉ A = (K × T) ⋉ A′.
Example. Recall g2 = ⟨d, x, y⟩ with [d, x] = x, [d, y] = x + y. Adjoin δ satisfying [δ, x] = 0 = [δ, d] and
[δ, y] = x. Let g̃2 = ⟨δ, d, x, y⟩. This is C(d − δ) ⋉ H with H = ⟨δ, x, y⟩ the Heisenberg Lie algebra.
24.4 Faithful representations of nilpotent Lie algebras
Let n be a f.dim nilpotent Lie algebra. We know n = Lie N with N unipotent, and exp : n → N an
isomorphism. Furthermore, the induced group law on n is polynomial, P : n × n → n.
Proposition 24.12. Let O(N) be the space of polynomial functions on N ≅ n, O(N) = C[n] = Sn*.
Then, O(N) is invariant under the action of n by left-invariant vector fields. Moreover, we have a
canonical filtration
O(N) = ∪_{n≥1} Vn,
where Vn ⊂ O(N) are f.dim subspaces, V1 ⊂ V2 ⊂ ···, and nVi ⊂ Vi−1.
Recall 25.1. Say n is a nilpotent Lie algebra; then we can write n = Lie N with N a unipotent group and
exp : n → N an isomorphism. The induced group law on n, a deformation of addition, is a polynomial
P : n × n → n.
Proposition 25.2. Let O(N) be the space of polynomial functions on N ≅ n, O(N) = C[n] = Sn*. Then,
O(N) is invariant under the action of n by left-invariant vector fields. Moreover, we have a canonical
filtration
O(N) = ∪_{n≥1} Vn,
where Vn ⊂ O(N) are f.dim subspaces, V1 ⊂ V2 ⊂ ···, and nVi ⊂ Vi−1. (Margin note: the existence of such
a filtration makes O(N) a 'locally finite module'; I think this is the terminology.)
Proof. Say x ∈ n, so it has an associated left-invariant vector field Lx. For f ∈ O(N) = Sn*, we have,
by definition,
(Lx f)(y) = (∂/∂t)|_{t=0} f(y ∗ tx) = (∂/∂t)|_{t=0} f(P(y, tx)).
Since f, P are both polynomials, we see that Lx f is a polynomial in y, so Lx f ∈ O(N ). Thus, O(N ) is
invariant under the action of n by left-invariant vector fields.
Now we give the filtration. Recall that n has the lower central series filtration n = D1(n) ⊃ D2(n) ⊃ ···,
where Di+1(n) = [n, Di(n)].
Now pick a sufficiently large positive integer d, and give Di(n)^⊥ degree d^i. This gives an increasing filtration
F• on the symmetric algebra Sn* = O(N). Write
P(x, y) = x + y + Σ_{i≥1} Qi(x, y), where Qi : n × n → [n, n].
Example. Consider the Heisenberg Lie algebra H = ⟨x, y, c⟩ with [x, y] = c while [x, c] = 0 = [y, c] (i.e.
c central). Then,
e^{tx} e^{sy} = e^{tx + sy + (1/2)ts·c},
so in the coordinates (p, q, r) ↔ px + qy + rc the group law is
(p1, q1, r1) ∗ (p2, q2, r2) = (p1 + p2, q1 + q2, r1 + r2 + ½(p1 q2 − p2 q1)).
You can alternatively describe things using upper triangular nilpotent matrices:
( 1 p1 r1 ; 0 1 q1 ; 0 0 1 ) ( 1 p2 r2 ; 0 1 q2 ; 0 0 1 ) = ( 1 p1+p2 r1+r2+p1q2 ; 0 1 q1+q2 ; 0 0 1 ),
which is a slightly different, but isomorphic, group law. One can check that
Lc = ∂r,  Lx = ∂p − (q/2)∂r,  and  Ly = ∂q + (p/2)∂r.
Setting deg p = deg q = d and deg r = d², these operators lower degree if d > 1.
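As a sanity check on these operators (my own SymPy sketch in the coordinates (p, q, r) above), one can verify that [Lx, Ly] = Lc, mirroring [x, y] = c:

    import sympy as sp

    p, q, r = sp.symbols('p q r')
    f = sp.Function('f')(p, q, r)

    Lx = lambda g: sp.diff(g, p) - q / 2 * sp.diff(g, r)
    Ly = lambda g: sp.diff(g, q) + p / 2 * sp.diff(g, r)
    Lc = lambda g: sp.diff(g, r)

    # [Lx, Ly] f - Lc f should vanish identically
    print(sp.simplify(Lx(Ly(f)) - Ly(Lx(f)) - Lc(f)))   # 0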
Corollary 25.3. Every f.dim nilpotent Lie algebra over C has a faithful f.dim representation, and there-
fore is isomorphic to a Lie subalgebra of the Lie algebra of strictly upper triangular matrices.
Proof. By definition, O(N ) is a faithful n-module. Hence, for some n, the space Vn is also faithful.
Theorem 25.4 (Ado’s Theorem). Every f.dim Lie algebra over C has a faithful f.dim representation,
i.e. is a Lie subalgebra of gln (C).
50 Apply the Campbell–Hausdorff expansion to P(x, y) = log(e^x e^y).
51 Keep in mind P(y, tx) = y + tx + Σ_i Qi(y, tx).
Proof. We know from last time that g can be embedded into an algebraic Lie algebra, so we may assume
g is itself algebraic, i.e. that g = Lie G where G = K ⋉ N with K reductive and N unipotent (i.e.
Lie N = n) with an action of K. Thus g = k ⋉ n with k = Lie K and n = Lie N. Let z ⊂ k be the centralizer
of n. Since k is reductive and z is an ideal, there is a complementary ideal k0 s.t. k = k0 ⊕ z. Hence,
g = k ⋉ n = (k0 ⋉ n) ⊕ z.
Now, note that if g = g1 ⊕ g2 where gi has a faithful rep Vi , then g has faithful rep V1 ⊕ V2 . Therefore,
we may assume g is indecomposable, so assume that z = 0.
Now, k acts faithfully on n. g = k ⋉ n acts on O(N) where x ∈ n acts by Lx, and y ∈ k acts by Ly − Ry
(adjoint action). Thus, it also acts on the spaces Vn . Fix n so that the action of n on Vn is faithful. We
claim all of g acts faithfully on Vn . Suppose that nonzero y ∈ g acts by 0 on Vn . Write y = (y1 , y2 )
(y1 ∈ k and y2 ∈ n), and pick z ∈ n so that a = [y, z] ≠ 0 (possible since the centralizer z is 0). Then, a ∈ n acts by 0 on
Vn , a contradiction.
Given a Cartan subalgebra h ⊂ g and a choice of simple roots Π, one gets the Borel subalgebra b+ = h ⊕ n+ and the corresponding Borel subgroup B+ ⊂ G. Since all pairs (h, Π) are conjugate, this definition does not depend on the choice of (h, Π).
Corollary 25.7. The set of Borel subgroups (subalgebras) is G/B+ , a homogeneous space and complex
manifold. We call this the flag manifold of G.
Note that this manifold is canonically attached to G, and depends only on gss ⊂ g = Lie G.
Remark 25.8.
dim G/B+ = |R+| = ½ (dim g − dim h).
Example. If G = GLn (or SLn ), then we can take B+ to be the upper triangular matrices, and G/B+ =
Fn is the set of complete flags in Cn . For example, if G = SL2 , then G/B+ = CP1 is the Riemann sphere.
Note that G/B+ is compact in the above example. This is in fact true in general.
Let Gc ⊂ G be the compact form.
Remark 25.9. gc + b+ = g. Note gc contains things like eα − e−α and i(eα + e−α) while b+ ∋ eα, ieα
(α > 0 throughout this sentence). Hence, their sum contains all the eα's and the Cartan subalgebra.
As a consequence, the orbit Gc · 1 ⊂ G/B+ contains a neighborhood of 1 ∈ G/B+ (since gc surjects onto g/b+).
By translation, we see Gc · 1 contains a neighborhood of each of its elements, so it is open. It is also closed
since it is compact. Since G/B+ is connected, we conclude that Gc · 1 = G/B+, i.e. Gc → G/B+ is surjective,
so G/B+ is compact.
The above shows that Gc acts transitively on G/B+. The stabilizer of the base point is Stab(1) = Gc ∩ B+.
Note 11. Distracted and missed more stuff
Sounds like the stabilizer is H c = (S 1 )r , a maximal torus in Gc .
Corollary 25.10. G/B+ ≅ Gc/Hc.
Corollary 25.11 (Iwasawa Decomposition). The usual notation is K = Gc, N = N+, and A =
exp(i hc) ⊂ H, the non-compact part of H. The multiplication map
K × A × N → G
is a diffeomorphism, so G = KAN.
(Compare e.g. with Polar decomposition)
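For G = GLn(C), the Iwasawa decomposition is essentially a refinement of the QR decomposition: g = kr with k unitary and r upper triangular, and r is further split into its positive diagonal part a and a unit upper triangular part n. Here is a small NumPy illustration (my own sketch, not from lecture; for SLn one would also normalize determinants):

    import numpy as np

    rng = np.random.default_rng(0)
    g = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

    k, r = np.linalg.qr(g)             # g = k r with r upper triangular
    d = np.diag(r)
    k = k @ np.diag(d / np.abs(d))     # push the phases of diag(r) into k, so k stays unitary
    a = np.diag(np.abs(d))             # the "A" factor: positive diagonal
    n = np.diag(1 / d) @ r             # the "N" factor: unit upper triangular

    print(np.allclose(k @ a @ n, g))               # True: g = k a n
    print(np.allclose(k.conj().T @ k, np.eye(3)))  # True: k is unitary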
Let B+′ = AN+, so B+ = Hc B+′. Hence, Gc × B+′ → G is a diffeomorphism. Also, A × N+ → B+′ is a
diffeomorphism, so
Gc × A × N+ → G
is a diffeomorphism.
Another realization of flag manifold One can construct the flag manifold alternatively as the orbit
of a highest weight line in an irreducible representation with regular highest weight. Say λ ∈ P+ dominant
integral regular, where regular means λ(hi ) ≥ 1 for all i (e.g. λ = ρ so ρ(hi ) = 1). Let Lλ be the irrep
with highest weight λ, and let vλ be a highest weight vector, so Cvλ ∈ PLλ . Let O := G · Cvλ ⊂ PLλ be
the orbit of this line.
Claim 25.12. O ≅ G/B+.
Proof. What is the stabilizer S of Cvλ? Clearly, S ⊃ B+ since vλ is a highest weight vector. Also, for
α ∈ R+ , e−α vλ 6= 0 as λ(hi ) ≥ 1.52 We see from this that the stabilizer of Cvλ in g is b+ . Hence, S
normalizes b+ , so S ⊂ B+ , so S = B+ . This shows that O is a closed orbit in PLλ , so G/B+ is a complex
(smooth) projective variety.
Remark 25.13. Partial flag manifolds are also complex projective varieties. Can prove similarly using
non-regular weights.
52 also wrote hα vλ = mvλ with m > 0, but I don’t see why this is relevant
Borel fixed point theorem Only 8 minutes left, so let’s end with a bang.
Theorem 25.14. Let a be a solvable Lie algebra over C, and let V be a f.dim a-module. Let X ⊂ PV be
a closed subset preserved by the corresponding group A. Then, there exists x ∈ X such that Ax = x.
Proof. Induct on n = dim a. The base case n = 0 is trivial. Since a is solvable, it has an ideal a0 of codimension
1. By induction, Y = X^{a0} ≠ ∅. Furthermore, a/a0 acts on Y, so we only need to prove the theorem when
dim a = 1.
Say a = ⟨a⟩ with a acting by a linear operator a : V → V. We can scale a by complex numbers.
In particular, by rotating, we may assume that the real parts of all its eigenvalues are different. Pick
x0 ∈ X, and consider e^{ta} · x0. If we send t → ∞, the eigenvalue with largest real part will 'dominate',
resulting in the existence of a limit x ∈ X (no particular vector has a limit, but the whole line does).
This limit is fixed by a, so we win. (Margin question: use compactness of X?)
Corollary 25.15. Any solvable subalgebra of g is contained in a Borel. Thus, Borels are simply maximal
solvable subalgebras.
Proof. Say a ⊂ g is solvable. Then, it has a fixed point when acting on G/B+ ⊂ PLλ. This fixed point is a Borel
subalgebra b, so exp(a) normalizes b, so a ⊂ b.
Example. When g = gln , this says any matrix can be upper triangularized in some basis.
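A numerical illustration of the gln case (my own sketch): the complex Schur decomposition conjugates any single matrix, i.e. the one-dimensional (hence solvable) subalgebra it spans, into the Borel subalgebra of upper triangular matrices by a unitary change of basis.

    import numpy as np
    from scipy.linalg import schur

    rng = np.random.default_rng(1)
    x = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

    # Complex Schur decomposition: x = u t u^*, with u unitary and t upper triangular.
    t, u = schur(x, output='complex')
    print(np.allclose(u @ t @ u.conj().T, x))    # True
    print(np.allclose(np.tril(t, -1), 0))        # True: t is upper triangular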
We don’t have time to give the proof (it’s in the notes), but similarly...
One can show that the normalizer of n+ is B+, so any maximal nilpotent subalgebra n is contained
in a unique Borel b, and n = [b, b]. Therefore, maximal nilpotent subalgebras are also parameterized by
the flag manifold G/B+ .
26 List of Marginal Comments
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
o Split form is so(n, n) so could be in either class depending on parity of n . . . . . . . . . . . . . 83
o TODO: Add picture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
o The ‘node’ is the valence 3 vertex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
o Answer: See beginning of tomorrow’s lecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
o Question: Get this corollary by regarding Gad as a real Lie group? . . . . . . . . . . . . . . . . 95
o Question: and P ≅ R^{dim p}? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
o Question: Why? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
o Note dim Gad may be less than dim G, e.g. if Z contains a torus . . . . . . . . . . . . . . . . . 99
o Possibly a typo below . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
o I don’t know if this reasoning works in general to always get the right sign in these graded
situations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
o Lv degree 0 operator so no sign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
o ιv is a chain homotopy from Lv to the zero map . . . . . . . . . . . . . . . . . . . . . . . . . . 103
o Question: Why do these degrees add up to dim g? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
o We have an exterior power of a tensor product of dual spaces . . . . . . . . . . . . . . . . . . . 111
o Has at most n parts with transpose having at most m parts . . . . . . . . . . . . . . . . . . . . 111
o As a vector space, the cohomology of Fn (C) will be tensor product of cohomology of CPk for
k = 1, . . . , n − 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
o Question: What is a? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
o The existence of such a filtration make O(N ) a ‘locally finite module’ (I think this is the
terminology) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
o TODO: Find what you missed and fill it in here . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
o Something about Levi decomposition and having a vanishing H1 . . . . . . . . . . . . . . . . . 125
o Question: Use compactness of X? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Index
δ-like sequence, 59 character orthogonality, 56
n-algebraic, 121 characters, 56
n-dimensional representation of g/k, 3 Chevalley-Eilenberg complex, 113
p-adic integers, 61 Chevalley-Eilenberg complex, 105
q-binomial theorem, 111 Clebsch-Gordan, 4
“Bott Periodicity”, 47 Clifford algebra, 39
1-coboundaries, 114 closed form, 49
1-cocycle, 72 closed forms, 102
1-cocycles, 114 closed Lie subgroup, 1
1-parameter subgroup, 3 closed manifold, 51
1st Galois cohomology, 73 cohomology of g with coefficients in V , 113
2nd countable, 61 compact, 57
compact directions, 82
abelian extension, 115
compact real form, 75
abstract root system, 8
complete symmetric function, 28
addable box, 32
completely reducible, 2
adjoint group of g, 70
complex type, 44
adjoint representation, 2
convolution operator, 60
Ado’s Theorem, 124
coroot, 9
algebraic, 121
coroot lattice, 9
angular momentum operator, 69
counting measure, 52
antidominant, 42
coweight lattice, 9
Ascoli-Arzela, 63
Coxeter number of g, 46
associated Legendre polynomials, 66
averaging measure, 53 De Rham Cohomology, 49
azimuthal quantum number, 68 de Rham cohomology group, 102
deformationally rigid, 117
Betti numbers, 103
differential k-form, 48
Borel subalgebra, 125
dominant, 10
Borel subgroup, 125
Double Centralizer Lemma, 19
bound states, 68
bounded operator, 57 element of maximal length, 42
Engel’s Theorem, 5
Cartan Decomposition, 99
equicontinuous, 62
Cartan involution, 82
exact form, 49
Cartan matrix, 10
exact forms, 102
Cartan subalgebra, 7, 96
exponent, 46
Cartan’s Criteria, 6
Cartan’s differentiation formula, 105 finite rank operators, 57
Cartan’s magic formula, 102, 103 flag manifold, 112, 125
Casimir element, 14 Frobenius Character Formula, 23
Cauchy identity, 27 fundamental representations, 14
fundamental theorem of calculus, 52 Lie ideal, 3
Fundamental Theorem of Invariant Theory, 25 Lie subalgebra, 3
fundamental weights, 10 Lie subgroup, 1
linear, 95
Gaussian binomial coefficients, 111
Lorentz Lie algebra, 78
generalized Laguerre polynomial, 68
graded commutative, 103 magnetic quantum number, 68
graded derivation, 48 matrix coefficient, 54
Grassmannian, 111 matrix element, 17
maximal element, 42
Haar measure, 52
maximal torus, 71
Hamilton’s equations, 63
minuscule, 28, 29
Hamiltonian, 63
Molien formula, 27
hat function, 50
multiplicity of m, 46
height, 45
Heisenberg Lie algebra, 120 Narayana numbers, 22
highest weight, 11 negative, 9
highest weight module, 11 Newton polynomial, 19
Hilbert-Schmidt Theorem, 58 nilpotent, 5
homogeneous G-space, 1 nilradical, 119
homomorphism of Lie groups, 1 noncompact directions, 82
Howe duality, 27 nonvanishing, 51
principal quantum number, 68 skew-symmetric isomorphism, 44
profinite group, 61 skew-symmetry, 2
solvable, 5
quasi-split, 74 spherical harmonics, 66
quaternionic trace map, 81 Spin group, 36
quaternionic type, 44 spinor representation, 36
quaternionic unitary Lie algebra, 83 spinor representations, 37
spinors, 36
radical, 5, 118
split semisimple Lie algebra, 72
rank, 7
standard complex, 105
real (complex) Lie group, 1
state of energy EN , 64
real forms, 73
stationary Schrödinger equation, 64
real type, 44 Stokes' Theorem, 51
reduced, 8 Stone-Weierstrass Theorem, 60
reductive, 6, 96 symmetric isomorphism, 44
reductive Lie algebra, 53
regular, 98, 126 Third Lie Theorem, 121
relative Chevalley-Eilenberg complex, 110 total spin, 69
relative Lie algebra cohomology, 110 twisted conjugation, 73
Riesz representation theorem, 62 twisted homomorphism, 72
root, 7 twisted-linear, 72
root lattice, 9
uniform continuity, 58
root system, 7
unimodular, 52
Schrödinger’s equation, 64 unipotent, 98, 121
Schubert cell, 112 unitary representation, 2
Schubert cells, 112 universal enveloping algebra, 5
Schur algebra, 21
Vogan diagram, 80
Schur functor, 21
volume forms, 51
Schur polynomial, 23
Schur’s lemma, 2 weight lattice, 9
Schur-Weyl duality, 18, 19 weight lattice of G, 95
self-adjoint, 57 weights, 11
semisimple, 7, 91, 98 Weyl Character Formula, 12
semisimplification, 5 Weyl denominator, 100
separates points, 60 Weyl denominator formula, 12, 13
Serre relations, 11 Weyl dimension formula, 14
simple, 92 Weyl group, 8
simple root, 9 Weyl unitary trick, 54
skew-Howe duality, 109 Whitehead’s lemma, 113