
Lie Algebras and their Representations

Lectures by David Stewart


Notes by David Mehrle
dfm33@cam.ac.uk

Cambridge University
Mathematical Tripos Part III
Michaelmas 2015

Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Lie Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

3 Representations of sl(2) . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

4 Major Results on Lie Algebras . . . . . . . . . . . . . . . . . . . . . . . 24

5 Representations of Semisimple Lie Algebras . . . . . . . . . . . . . . . 34

6 Classification of Complex Semisimple Lie Algebras . . . . . . . . . . 60

Last updated May 29, 2016.

Contents by Lecture
Lecture 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Lecture 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Lecture 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Lecture 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Lecture 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Lecture 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

Lecture 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

Lecture 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Lecture 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

Lecture 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26

Lecture 11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

Lecture 12 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

Lecture 13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

Lecture 14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

Lecture 15 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Lecture 16 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Lecture 17 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

Lecture 18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

Lecture 19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

Lecture 20 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

Lecture 21 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Lecture 22 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Lecture 23 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Lecture 1 8 October 2015

1 Introduction
There are lecture notes online.
We'll start with a bit of history, because it's easier to understand something
when you know where it comes from. Fundamentally, mathematicians wanted to
solve equations, which is a rather broad statement, but it motivates things
like Fermat's Last Theorem: solving x^3 + y^3 = z^3. Galois theory begins with
wanting to solve equations. One of Galois's fundamental ideas was not to try
to write down a solution, but to study the symmetries of the equations.
Sophus Lie was motivated by this to do the same with differential equations:
could you say something about the symmetries of the solutions? This technique
is used a lot in physics. It led him to the study of Lie groups and,
subsequently, Lie algebras.

Example 1.1. The prototypical Lie group is the circle.

A Lie group G is, fundamentally, a group with a smooth structure on it.

The group has an identity e ∈ G. Multiplying e by a ∈ G moves it to the
corresponding point of the manifold. Importantly, if the group isn't abelian,
then aba^{-1}b^{-1} is not the identity. We call this the commutator [a, b].
Letting a and b tend to the identity, we get tangent vectors X, Y and a
commutator [X, Y] by seeing where [a, b] tends. These tangent vectors are
elements of the Lie algebra. We'll make all this precise later. We'll classify
the simple Lie algebras, which ends up using the fundamental objects called
root systems. Root systems are so fundamental to mathematics that this course
might better be described as an introduction to root systems by way of Lie
algebras.

Definition 1.2. Let k be a field. A Lie algebra g is a vector space over k
with a bilinear bracket [-, -] : g × g → g satisfying

(i) Antisymmetry: [X, X] = 0 for all X ∈ g;

(ii) Jacobi identity: [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0.

The Jacobi identity is probably easier to think of as

    [X, [Y, Z]] = [[X, Y], Z] + [Y, [X, Z]].

Bracketing with X satisfies the product rule! Indeed, [X, -] is a derivation.
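For matrix Lie algebras the derivation form of the Jacobi identity can be
checked by machine; here is a small numerical sanity check (not part of the
notes) using numpy, with the bracket [A, B] = AB - BA:

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = (rng.standard_normal((3, 3)) for _ in range(3))

def br(A, B):
    """Commutator bracket [A, B] = AB - BA."""
    return A @ B - B @ A

# [X, [Y, Z]] = [[X, Y], Z] + [Y, [X, Z]]  (Leibniz form of the Jacobi identity)
lhs = br(X, br(Y, Z))
rhs = br(br(X, Y), Z) + br(Y, br(X, Z))
assert np.allclose(lhs, rhs)
```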

Definition 1.3. A derivation δ of an algebra A is a linear endomorphism
A → A that satisfies δ(ab) = a δ(b) + δ(a) b.

Remark 1.4. From now on, our Lie algebras g will always be finite dimensional.
Most of the time, k = C (but not always!). We'll sometimes point out how things
go wrong in characteristic p > 0.

Example 1.5.

(i) If V is any vector space, equip V with the trivial bracket [a, b] = 0 for
all a, b ∈ V.

Lecture 2 10 October 2015

(ii) If A is any associative algebra, equip A with [a, b] = ab - ba for all a, b ∈ A.

(iii) Let g = M_{n×n}(k), the n × n matrices over a field k. This is often
written gl_n(k), or gl(n) when the field is understood. This is an example of
an associative algebra, so define [A, B] = AB - BA.
There is an important basis for gl(n) consisting of the E_ij for 1 ≤ i, j ≤ n,
where E_ij is the matrix whose entries are all zero except for the (i, j)-entry,
which is 1. First observe

    [E_ij, E_rs] = δ_jr E_is - δ_is E_rj.

This equation gives the structure constants for gl(n).
We can calculate that

    [E_ii - E_jj, E_rs] = (δ_ir - δ_is - δ_jr + δ_js) E_rs;

in particular,

    [E_ii - E_jj, E_rs] =   0      if {i, j} ∩ {r, s} = ∅,
                            E_rs   if i = r, j ≠ s,
                           -E_rs   if j = r, i ≠ s,
                            2E_rs  if i = r, j = s.
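These structure constants are easy to verify by machine; a small numpy sketch
(illustrative only, not part of the notes):

```python
import numpy as np

n = 4

def E(i, j):
    """Matrix unit E_ij (1-indexed): 1 in the (i, j)-entry, 0 elsewhere."""
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

def br(A, B):
    return A @ B - B @ A

d = lambda a, b: 1.0 if a == b else 0.0

# [E_ij, E_rs] = δ_jr E_is - δ_is E_rj, checked on a few index patterns
for i, j, r, s in [(1, 2, 2, 3), (1, 2, 3, 1), (2, 2, 2, 2), (1, 3, 3, 1)]:
    assert np.allclose(br(E(i, j), E(r, s)),
                       d(j, r) * E(i, s) - d(i, s) * E(r, j))
```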

(iv) If A is any algebra over k, then Der_k(A) ⊂ End_k(A), the derivations of
A, is a Lie algebra. For α, β ∈ Der(A), define [α, β] = α ∘ β - β ∘ α. This is
a valid Lie algebra so long as [α, β] is still a derivation.

Definition 1.6. A subspace h ⊂ g is a Lie subalgebra if h is closed under the
Lie bracket of g.
Definition 1.7. Define the derived subalgebra D(g) = ⟨[X, Y] | X, Y ∈ g⟩.
Example 1.8. An important subalgebra of gl(n) is sl(n) := {X ∈ gl(n) |
tr X = 0}. This is a simple Lie algebra of type A_{n-1}. In fact, you can
check that sl(n) is the derived subalgebra of gl(n):

    sl(n) = [gl(n), gl(n)] = D(gl(n)).

Example 1.9. Lie subalgebras of gl(n) which preserve a bilinear form. Let
Q : V × V → k be a bilinear form. We say that X ∈ gl(V) preserves Q if

    Q(Xv, w) + Q(v, Xw) = 0

for all v, w ∈ V. Recall that if we pick a basis for V, we can represent Q by
a matrix M, so that Q(v, w) = v^T M w. Then X preserves Q if and only if

    v^T X^T M w + v^T M X w = 0  for all v, w,

if and only if

    X^T M + M X = 0.
Recall that a Lie algebra g is a k-vector space with a bilinear operation
[-, -] : g × g → g satisfying antisymmetry and the Jacobi identity.
We had some examples, such as g = gl(V) = End_k(V). If you pick a basis,
this is M_{n×n}(k). Given any associative algebra, we can turn it into a Lie
algebra with bracket [X, Y] = XY - YX.


Example 1.10. Another example: if Q : V × V → k is a bilinear form, the set
of X ∈ gl(V) preserving Q is a Lie subalgebra of gl(V). Taking a basis, Q is
represented by a matrix M with Q(v, w) = v^T M w, and X preserves Q if and
only if X^T M + M X = 0.
The most important case is where Q is non-degenerate, i.e. Q(v, w) = 0 for
all w ∈ V if and only if v = 0.

Example 1.11. Consider the bilinear form represented by the matrix

    M = [  0    I_n ]
        [ -I_n   0  ].

M represents an alternating form, and the set of endomorphisms of V = k^{2n}
preserving it is the symplectic Lie algebra (of rank n), denoted sp(2n). If
X ∈ gl(V) is written in block form as

    X = [ A  B ]
        [ C  D ],

check that X preserves M if and only if

    X = [ A    B   ]
        [ C  -A^T  ]

with B, C symmetric matrices. A basis for this consists of the elements

• H_{i,i+n} = E_ii - E_{i+n,i+n}

• E_ij - E_{j+n,i+n}

• E_{i,j+n} + E_{j,i+n}

• E_{i+n,j} + E_{j+n,i}

for 1 ≤ i, j ≤ n. This is a simple Lie algebra of type C_n for char k ≠ 2.
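One can check by machine that these spanning elements satisfy the condition
X^T M + M X = 0 from Example 1.9; a numpy sketch for n = 2 (illustrative
only, and listing the symmetric families with some repetitions):

```python
import numpy as np

n = 2
I = np.eye(n)
# alternating form M = [[0, I], [-I, 0]]
M = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

def E(i, j):
    """Matrix unit in gl(2n), 1-indexed."""
    A = np.zeros((2 * n, 2 * n))
    A[i - 1, j - 1] = 1.0
    return A

basis = []
for i in range(1, n + 1):
    basis.append(E(i, i) - E(i + n, i + n))          # H_{i,i+n}
    for j in range(1, n + 1):
        if i != j:
            basis.append(E(i, j) - E(j + n, i + n))
        basis.append(E(i, j + n) + E(j, i + n))
        basis.append(E(i + n, j) + E(j + n, i))

# every element X satisfies X^T M + M X = 0
ok = all(np.allclose(X.T @ M + M @ X, 0) for X in basis)
assert ok
```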

Example 1.12. There are also

• orthogonal Lie algebras of type D_n, so(2n), preserving

    M = [  0   I_n ]
        [ I_n   0  ];

• orthogonal Lie algebras of type B_n, so(2n + 1), preserving

    M = [  0   I_n  0 ]
        [ I_n   0   0 ]
        [  0    0   1 ].

Example 1.13. b_n is the Borel algebra of upper triangular n × n matrices,
and n_n is the nilpotent algebra of strictly upper triangular n × n matrices.

Definition 1.14. A linear map f : g → h between two Lie algebras g, h is a Lie
algebra homomorphism if f([X, Y]) = [f(X), f(Y)].


Definition 1.15. We say a subspace j is a subalgebra of g if j is closed under
the Lie bracket. A subalgebra j is an ideal of g if [X, Y] ∈ j for all X ∈ g,
Y ∈ j.

Definition 1.16. The center of g, denoted Z(g), is

    Z(g) = {X ∈ g | [X, Y] = 0 for all Y ∈ g}.

Exercise 1.17. Check that Z(g) is an ideal using the Jacobi identity.

Proposition 1.18.

(1) If f : h → g is a homomorphism of Lie algebras, then ker f is an ideal;

(2) If j ⊂ g is a linear subspace, then j is an ideal if and only if the
quotient bracket [X + j, Y + j] = [X, Y] + j makes g/j into a Lie algebra;

(3) If j is an ideal of g, then the quotient map g → g/j is a Lie algebra
homomorphism;

(4) If g and h are both Lie algebras, then g ⊕ h becomes a Lie algebra under
[(X, A), (Y, B)] = ([X, Y], [A, B]).

Exercise 1.19. Prove Proposition 1.18.

Remark 1.20. The category of Lie algebras, Lie, forms a semi-abelian category.
It’s closed under taking kernels but not under taking cokernels. The representa-
tion theory of Lie algebras does, however, form an abelian category.

Definition 1.21. The following notions are really two ways of thinking about
the same thing.

(a) A representation of g on a vector space V is a homomorphism of Lie
algebras ρ : g → gl(V).

(b) An action of g on a vector space V is a bilinear map r : g × V → V
satisfying r([X, Y], v) = r(X, r(Y, v)) - r(Y, r(X, v)). We also say that V
is a g-module if this holds.

Given an action r of g on V, we can make a representation of g by defining
ρ : g → gl(V) by ρ(X)(v) = r(X, v).

Example 1.22. This is the most important example of a representation. For any
Lie algebra g, one always has the adjoint representation ad : g → gl(g),
defined by ad(X)(Y) = [X, Y]. The fact that ad gives a representation follows
from the Jacobi identity.
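For sl(2) with its standard basis, the homomorphism property
ad([X, Y]) = [ad X, ad Y] can be verified numerically; a numpy sketch (not
part of the notes):

```python
import numpy as np

# standard basis of sl(2): e, h, f
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])
basis = [e, h, f]

def br(A, B):
    return A @ B - B @ A

def ad(X):
    """Matrix of ad(X) = [X, -] in the ordered basis (e, h, f)."""
    cols = []
    for B in basis:
        C = br(X, B)
        # a commutator of sl(2) elements lies in sl(2), so C = a*e + b*h + c*f
        # with a = C[0,1], b = C[0,0], c = C[1,0]
        cols.append([C[0, 1], C[0, 0], C[1, 0]])
    return np.array(cols).T

# ad is a Lie algebra homomorphism: ad([X, Y]) = [ad X, ad Y]
for X in basis:
    for Y in basis:
        assert np.allclose(ad(br(X, Y)), br(ad(X), ad(Y)))
```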

Definition 1.23. If W is a subspace of a g-module V, then W is a g-submodule
if W is stable under the action of g: g(W) ⊆ W.

Example 1.24.

(1) Suppose j is an ideal in g. Then ad(X)(Y) = [X, Y] ∈ j for all X ∈ g and
Y ∈ j, so j is a submodule of the adjoint representation.

Lecture 3 13 October 2015

(2) If W ⊆ V is a submodule, then V/W is another g-module via X(v + W) =
Xv + W.

(3) If V is a g-module, then the dual space V* = Hom_k(V, k) has the
structure of a g-module via (Xφ)(v) = -φ(Xv).
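In matrices, the dual action of X on V* is given by -X^T; that this is again
a representation can be sanity-checked with numpy (illustrative only, not
part of the notes):

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((2, 3, 3))

def br(A, B):
    return A @ B - B @ A

def dual(A):
    """Action on V*: (X·φ)(v) = -φ(Xv), i.e. the matrix -A^T."""
    return -A.T

# dual is again a representation: dual([X, Y]) = [dual(X), dual(Y)]
assert np.allclose(dual(br(X, Y)), br(dual(X), dual(Y)))
```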
Last time we developed a category of Lie algebras, and said what homomor-
phisms of Lie algebras were, as well as defining kernels and cokernels. There
are a few more definitions that we should point out.
Definition 1.25. A Lie algebra is simple if it has no nontrivial ideals.
We also moved on and discussed representations. Recall
Definition 1.26. A representation of g on V (equivalently, a g-module
structure on V) is a Lie algebra homomorphism g → gl(V).

To complete the category g-Mod of g-modules, let's define a map of
g-modules.

Definition 1.27. Let V, W be g-modules. Then a linear map φ : V → W is a
g-module map if Xφ(v) = φ(Xv) for all X ∈ g.
         φ
    V ------→ W
    |         |
    X         X
    ↓    φ    ↓
    V ------→ W

Proposition 1.28. If φ : V → W is a g-module map, then ker φ is a submodule
of V.

Exercise 1.29. Prove Proposition 1.28.

Definition 1.30. A g-module V (resp. representation) is simple (resp.
irreducible) if V has no non-trivial submodules.

We write V = V₁ ⊕ V₂ if V₁ and V₂ are submodules with V = V₁ ⊕ V₂ as vector
spaces.
How can you build new representations from old ones? There are several ways.
If V, W are g-modules, then V ⊕ W becomes a g-module via X(v, w) = (Xv, Xw).
There's another way to build new representations via the tensor product. In
fact, g-Mod is more than just an abelian category: it's a monoidal category
via the tensor product. Given representations V, W of g, we can turn the
tensor product V ⊗ W into a representation by defining the action on simple
tensors as

    X(v ⊗ w) = (Xv) ⊗ w + v ⊗ (Xw)

and then extending linearly.
We can iterate on multiple copies of V, say, to get tensor powers

    V^{⊗r} = V ⊗ V ⊗ ··· ⊗ V   (r times).


Definition 1.31. The r-th symmetric power of V, with basis e₁, …, e_n, is the
vector space with basis e_{i_1} ··· e_{i_r} for i_1 ≤ i_2 ≤ ··· ≤ i_r. This
is denoted S^r(V). The action of g on S^r(V) is

    X(e_{i_1} ··· e_{i_r}) = X(e_{i_1}) e_{i_2} ··· e_{i_r}
        + e_{i_1} X(e_{i_2}) ··· e_{i_r} + ··· + e_{i_1} ··· e_{i_{r-1}} X(e_{i_r}).

Definition 1.32. The r-th alternating power of a g-module V, denoted Λ^r(V),
is the vector space with basis {e_{i_1} ∧ e_{i_2} ∧ ··· ∧ e_{i_r} | i_1 <
i_2 < ··· < i_r}, if V has basis e₁, …, e_n. The action is functionally the
same as on the symmetric power:

    X(e_{i_1} ∧ ··· ∧ e_{i_r}) = X(e_{i_1}) ∧ e_{i_2} ∧ ··· ∧ e_{i_r}
        + ··· + e_{i_1} ∧ e_{i_2} ∧ ··· ∧ e_{i_{r-1}} ∧ X(e_{i_r}).

We also have the rule that

    e_{i_1} ∧ ··· ∧ e_{i_j} ∧ ··· ∧ e_{i_k} ∧ ··· ∧ e_{i_r}
        = -e_{i_1} ∧ ··· ∧ e_{i_k} ∧ ··· ∧ e_{i_j} ∧ ··· ∧ e_{i_r}.

Exercise 1.33. What are the dimensions of the symmetric and alternating
powers?

Example 1.34. Let

    X = [ 0  1 ]
        [ 0  0 ]

and let V = k² with basis {e₁, e₂}. Let g = kX ⊂ gl(V). Then V ⊗ V has basis
e₁ ⊗ e₁, e₁ ⊗ e₂, e₂ ⊗ e₁, and e₂ ⊗ e₂.
Observe Xe₁ = 0, Xe₂ = e₁. Therefore,

    X(e₁ ⊗ e₁) = 0
    X(e₁ ⊗ e₂) = e₁ ⊗ e₁
    X(e₂ ⊗ e₁) = e₁ ⊗ e₁
    X(e₂ ⊗ e₂) = e₁ ⊗ e₂ + e₂ ⊗ e₁

As a linear transformation V ⊗ V → V ⊗ V, X is represented by the matrix

    [ 0  1  1  0 ]
    [ 0  0  0  1 ]
    [ 0  0  0  1 ]
    [ 0  0  0  0 ].

A basis for Λ²V is {e₁ ∧ e₂}, and here X(e₁ ∧ e₂) = Xe₁ ∧ e₂ + e₁ ∧ Xe₂ =
0 ∧ e₂ + e₁ ∧ e₁ = 0. So X is the zero map on the alternating square.
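In matrix terms the action on V ⊗ V is X ⊗ I + I ⊗ X, which can be checked
against the matrix above with numpy's Kronecker product (a sketch, not part
of the notes):

```python
import numpy as np

X = np.array([[0., 1.], [0., 0.]])
I = np.eye(2)

# In the basis e1⊗e1, e1⊗e2, e2⊗e1, e2⊗e2, the rule
# X(v ⊗ w) = Xv ⊗ w + v ⊗ Xw corresponds to the matrix X⊗I + I⊗X.
T = np.kron(X, I) + np.kron(I, X)

expected = np.array([
    [0., 1., 1., 0.],
    [0., 0., 0., 1.],
    [0., 0., 0., 1.],
    [0., 0., 0., 0.],
])
assert np.allclose(T, expected)
```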

Exercise 1.35. Work out the preceding example for the symmetric square, and
the tensor cube.

2 Lie Groups
Lots of stuff in this section requires differential geometry and some analysis.

Lecture 4 15 October 2015

Definition 2.1. A Hausdorff, second countable topological space X is called a
manifold if each point has an open neighborhood (nbhd) U homeomorphic to an
open subset of R^N by a homeomorphism φ : U → φ(U) ⊂ R^N.
The pair (U, φ) of an open subset of X and such a homeomorphism is called a
chart: given open subsets U and V of X with U ∩ V ≠ ∅, and charts (U, φ_U)
and (V, φ_V), we have a diffeomorphism φ_V ∘ φ_U^{-1} : φ_U(U ∩ V) →
φ_V(U ∩ V) of open subsets of R^N.

We think of a manifold as a space which looks locally like R^N for some N.

Example 2.2.

(a) R¹ and S¹ are one-dimensional manifolds;

(b) S² and S¹ × S¹ are two-dimensional manifolds. The torus carries a group
structure; S² does not.
Definition 2.3. A function f : M → N is called smooth if composition with the
appropriate charts is smooth. That is, for U ⊆ M, V ⊆ N, and charts
φ : U → R^M, ψ : V → R^N, the map ψ ∘ f ∘ φ^{-1} : R^M → R^N is smooth where
defined.

Definition 2.4. A Lie group is a manifold G together with the structure of a
group such that the multiplication map µ : G × G → G and the inversion map
i : G → G are smooth.
Exercise 2.5. The fact that inversion is smooth actually follows from the
fact that multiplication is smooth, by looking in a neighborhood of the
identity. Prove it!

To avoid some subtleties of differential geometry, we will assume that M is
embedded in R^N for some (possibly large) N. This is possible under certain
tame hypotheses by Nash's Theorem.
Example 2.6.

(1) GL(n) := {n × n matrices over R with non-zero determinant}. There is
only a single chart: embed it into R^{n²}.

(2) SL(n) := {g ∈ GL(n) | det g = 1}.

(3) If Q : R^n × R^n → R is a bilinear form, then

    G(Q) = {g ∈ M(n) | Q(v, w) = Q(gv, gw) for all v, w ∈ R^n}.
Recall that a Lie group is a manifold with a group structure such that the
group operations are smooth. For example, SL(n).

Definition 2.7. Let G and H be two Lie groups. Then a map f : G → H is a Lie
group homomorphism if f is a group homomorphism and a smooth map of
manifolds.


Let G be a Lie group and let G° be the connected component of G containing
the identity.

Proposition 2.8. For any Lie group G, the set G° is an open normal subgroup
of G. Moreover, if U is any open neighborhood of the identity in G°, then
G° = ⟨U⟩.

Proof. The first thing we need to show is that G° is a subgroup. Since G is a
manifold, its connected components are path connected. Suppose a, b ∈ G°.
Then we can find paths γ, δ : [0, 1] → G° with γ(0) = e = δ(0), γ(1) = a, and
δ(1) = b. Then t ↦ µ(γ(t), δ(t)) gives a path from the identity to ab. Hence
G° is closed under multiplication. Similarly, t ↦ i(γ(t)) gives a path from e
to a^{-1}, so G° is closed under inversion.
Why is G° normal? The map g ↦ aga^{-1} is a diffeomorphism of G that fixes
e, and therefore it also preserves G°.
By replacing U with U ∩ U^{-1}, we can arrange that U contains the inverse
of every element of U. Now U^n = U · U ··· U = {u₁u₂ ··· u_n | u_i ∈ U} is
open, as it is the union of the open cosets u₁u₂ ··· u_{n-1}U over all
(u₁, …, u_{n-1}). Set H = ⋃_{n≥0} U^n. This is an open subgroup of G°
containing ⟨U⟩. It is also closed, since G°∖H = ⋃_{a∉H} aH is open as a
union of diffeomorphic translates of an open set, so H is the complement of
an open set.
The connected component G° is a minimal set that is both open and closed, so
H = G°.

Definition 2.9. Any open neighborhood of the identity is called a germ or
nucleus.

Corollary 2.10. If f and g are two homomorphisms from G to H with G
connected, then f = g if and only if f|_U = g|_U for some germ U of G.

Definition 2.11. Let M ⊂ R^N be a manifold. The tangent space of M at p ∈ M is

    T_p(M) = {v ∈ R^N | there is a curve φ : (-ε, ε) → M with φ(0) = p, φ′(0) = v}.

One can show that this is a vector space. Scalar multiplication is easy: if
φ′(0) = v, then the curve t ↦ φ(λt) has derivative λv at 0. Addition follows
by looking at addition in charts.

Let's single out a very important tangent space when we replace M with a Lie
group G.

Definition 2.12. If G is a Lie group, then we denote T_e(G) by g and call it
the Lie algebra of G.

We don't a priori know that this is actually a Lie algebra as we defined it
previously, but we can at least see what kind of vectors live in a Lie
algebra by looking at the Lie group.
Example 2.13. Let's calculate sl_n = T_{I_n}(SL_n). Let v = d/dt|_{t=0} g(t)
= g′(0) for a path g(t) in SL_n through the identity. By the condition for
membership in SL_n, we have det g(t) = 1 = det(g_ij(t)). Write out

Lecture 5 17 October 2015

the determinant explicitly.

ÿ n
ź
1 “ detpgij ptqq “ p´1qsgnpσq giσpiq ptq
σ PSn i “1

Now differentiate both sides with respect to t and evaluate at t “ 0:

ÿ n
ÿ ź
0“ p´1qsgnpσq g1jσp jq p0q giσpiq p0q
σ PSn j “1 i‰ j

Observe that g is a path through the identity, so gp0q “ In . Thus, gij p0q “ δij .
Therefore, we are left with
n ˇ
ÿ d ˇˇ
0“ g ptq “ tr v
dt ˇt“0 jj
j“1

Hence, slpnq is the traceless n ˆ n matrices.


Because manifolds and tangent spaces have the same dimension, this tells
us that dim SLn “ dim sln “ n2 ´ 1.
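The identity behind this computation, det(I + tv) = 1 + t·tr(v) + O(t²), can
be sanity-checked numerically (a numpy sketch, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal((4, 4))

# d/dt det(I + t v) at t = 0 equals tr(v): approximate with a small t
t = 1e-6
num = (np.linalg.det(np.eye(4) + t * v) - 1.0) / t
assert abs(num - np.trace(v)) < 1e-4
```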

Example 2.14. Now consider G = G(Q) = {g ∈ GL(n) | Q(v, w) = Q(gv, gw)}.
If Q is represented by a matrix M, we have Q(v, w) = v^T M w, and
g ∈ G(Q) ⟺ g^T M g = M.
Now let g(t) be a path through the identity of G(Q). We see that

    0 = d/dt|_{t=0} (g(t)^T M g(t)) = g′(0)^T M + M g′(0),

so the Lie algebra here consists of those matrices X with X^T M + M X = 0.

Definition 2.15. Let f : M → N be a map of manifolds. Then given p ∈ M, we
define df_p : T_p(M) → T_{f(p)}(N) as follows. Given v ∈ T_p(M) with
v = φ′(0) for some path φ, define df_p(v) = w where w = (f ∘ φ)′(0) =
d/dt|_{t=0} (f ∘ φ).

We need to check that df_p is well-defined, that is, given another path φ₁
through p with φ₁′(0) = v, we have

    d/dt|_{t=0} (f ∘ φ)(t) = d/dt|_{t=0} (f ∘ φ₁)(t),

but this is true by the multivariable chain rule.

2.1 The Exponential Map

If we have a Lie group G and its Lie algebra T_e(G), we want to find a way to
map elements of the algebra back onto the group. This is the exponential map.
It produces a natural way of molding T_e(G) onto G such that exp restricts to
a homomorphism on any line in T_e(G), that is,
exp(aX) exp(bX) = exp((a + b)X).


Remark 2.16. Some preliminaries for the exponential map.

• Left multiplication by g ∈ G gives a smooth map L_g : G → G, and
differentiating gives a map

    (dL_g)_h : T_h(G) → T_{gh}(G).

• Recall the theorem alternatively called the Picard-Lindelöf Theorem or the
Cauchy-Lipschitz Theorem: for any first-order ODE of the form y′ = F(x, y),
there is a unique continuously differentiable local solution through the
point (x₀, y₀).

The following definition is not rigorous, but it will suffice for what we're
trying to do. To define it (marginally more) rigorously, we need to talk
about vector bundles on a manifold.

Definition 2.17. A vector field on M is a choice of tangent vector for each
m ∈ M, varying smoothly over M, which we write as a map M → TM. (More
precisely, a vector field is a smooth section of the tangent bundle
π : TM → M, that is, an X : M → TM such that π ∘ X = id_M.)

Definition 2.18. If G is a Lie group, then a vector field v is left-invariant
if dL_g(v(h)) = v(gh).

Given X ∈ T_h(G), we construct a left-invariant vector field v_X by
v_X(g) = (dL_{gh^{-1}})_h(X). It's clear that all left-invariant vector
fields arise in this way: as soon as you know a left-invariant vector field
at one point, you know it everywhere.

Proposition 2.19 ("The flow of v_X is complete"). Let g ∈ G and X ∈ T_g(G),
with v_X the associated left-invariant vector field. Then there exists a
unique curve γ_g : R → G such that γ_g(0) = g and γ_g′(t) = v_X(γ_g(t)).

Proof. First we reduce to the case g = e. Given γ_e, define γ_g(t) = gγ_e(t).
Then

    γ_g(0) = gγ_e(0) = ge = g,

and moreover

    γ_g′(t) = (dL_g)(γ_e′(t)) = (dL_g)(v_X(γ_e(t)));

now apply left invariance to see that

    (dL_g)(v_X(γ_e(t))) = v_X(gγ_e(t)) = v_X(γ_g(t)).

Therefore γ_g′(t) = v_X(γ_g(t)) and γ_g(0) = g, so we have reduced to the
case g = e.
Now, to establish the existence of γ_e, we solve the equation v_X(γ_e(t)) =
γ_e′(t) with initial condition γ_e(0) = e in a small neighborhood of zero,
and then push the solution along to all of R.


Using the existence part of the Cauchy-Lipschitz theorem for ODEs, there is
some ε > 0 such that γ_e can be defined on an open interval (-ε, ε). We show
that there is no maximal such ε. For each s ∈ (-ε, ε), define a new curve
α_s : (-ε + |s|, ε - |s|) → G via α_s(t) = γ_e(s + t).
Then α_s(0) = γ_e(s) and

    α_s′(t) = γ_e′(s + t) = v_X(γ_e(s + t)) = v_X(α_s(t)).    (1)

By the uniqueness part of the Cauchy-Lipschitz theorem, there is a unique
solution to (1) for |s| + |t| < ε. But notice that t ↦ γ_e(s)γ_e(t) is
another solution to (1), because

    (γ_e(s)γ_e(t))′ = dL_{γ_e(s)} γ_e′(t)
                    = dL_{γ_e(s)} v_X(γ_e(t))
                    = v_X(L_{γ_e(s)} γ_e(t))
                    = v_X(γ_e(s)γ_e(t)).

Therefore, we have

    γ_e(s + t) = γ_e(s)γ_e(t).

We can use this equation to extend the range of γ_e to (-3ε/2, 3ε/2), via

    γ_e(t) = γ_e(-ε/2) γ_e(t + ε/2)   for t ∈ (-3ε/2, ε/2),
    γ_e(t) = γ_e(ε/2) γ_e(t - ε/2)    for t ∈ (-ε/2, 3ε/2).

Repeating this indefinitely defines a curve γ_e : R → G with the required
properties.

Definition 2.20. The curves γ_g guaranteed by the previous proposition are
called integral curves of v_X.

Definition 2.21. The exponential map of G is the map exp : g = T_e(G) → G
given by exp(X) = γ_e(1), where γ_e : R → G is the integral curve associated
to the left-invariant vector field v_X. (We choose γ_e(1) because we want
exp to be its own derivative, like e^x.)

Proposition 2.22. Every Lie group homomorphism φ : R → G is of the form
φ(t) = exp(tX) for X = φ′(0) ∈ g.

Proof. Let X = φ′(0) ∈ g and let v_X be the associated left-invariant vector
field. We have φ(0) = e, and by the uniqueness part of the Cauchy-Lipschitz
Theorem, we need only show that φ is an integral curve of v_X. Note that, as
in the proof of Proposition 2.19,

    φ(s + t) = φ(s)φ(t),

which implies that φ′(s) = (dL_{φ(s)})_e(φ′(0)) = v_X(φ(s)).
Conversely, we must show that φ(t) = exp(tX) is a Lie group homomorphism
R → G, that is, exp((t + s)X) = exp(tX) exp(sX). To do this, we will use
Lecture 6 20 October 2015

ODE uniqueness. Let γ be the integral curve associated to X, i.e. the
solution to v_X(γ(t)) = γ′(t) with initial condition γ(0) = e. Let θ be the
integral curve associated to aX for a ∈ R, solving

    θ′(t) = v_{aX}(θ(t))    (2)

with initial condition θ(0) = e. We will show that γ(at) is a solution to
(2), and therefore γ(at) = θ(t) by ODE uniqueness.
To that end,

    v_{aX}(γ(at)) = dL_{γ(at)}(aX)
                  = a dL_{γ(at)}(X)
                  = a v_X(γ(at))
                  = a γ′(at) = d/dt (γ(at)).

Therefore θ(t) = γ(at). But as in the proof of Proposition 2.19, γ(s + t) =
γ(s)γ(t); in particular γ(a + b) = γ(a)γ(b). Since exp(aX) = θ(1) = γ(a) by
definition,

    exp((a + b)X) = γ(a + b) = γ(a)γ(b) = exp(aX) exp(bX).

Exercise 2.23. Let δ : R → g be the curve δ(t) = tX for some X ∈ g. Show that

    d/dt|_{t=0} exp(δ(t)) = X.

This is really just saying that (d exp)₀ : g → g is the identity map.
Example 2.24. For G = GL(V), the exponential map is just the map

    X ↦ 1 + X + X²/2! + X³/3! + ··· = Σ_{n=0}^∞ X^n/n!.

One can check that d/dt|_{t=0} exp(tX) = X, and that exp is a homomorphism
when restricted to any line through zero.
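For matrices the series can be summed directly; here is a small numpy sketch
(the helper `expm` is our own, not a library function), using a nilpotent X
so that the series terminates:

```python
import numpy as np

def expm(A, terms=30):
    """Matrix exponential via the power series sum of A^n / n!."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n   # term is now A^n / n!
        out = out + term
    return out

X = np.array([[0., 1.], [0., 0.]])   # nilpotent: X^2 = 0
t = 3.7
# the series truncates: exp(tX) = I + tX
assert np.allclose(expm(t * X), np.eye(2) + t * X)
```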
This next definition is not used much, but it's nice to have the terminology.

Definition 2.25. We refer to the images of lines R·X ⊆ g under the
exponential map as 1-parameter subgroups.
Let's summarize what we've done so far:

(1) We defined left-invariant vector fields v such that (dL_g)_h(v(h)) =
v(gh). (In the notes the subscript (-)_h is often dropped, and each such
field is v_X for some X ∈ g.)


(2) We used analysis to prove the existence of integral curves φ
corresponding to v_X, and defined exp(X) = φ(1).

(3) exp restricts to a homomorphism on every line of g through 0.

(4) The image of any such line is called a 1-parameter subgroup.

Example 2.26. For SL(2), if H = [ 1 0 ; 0 -1 ] ∈ sl(2), then

    exp(tH) = [ e^t    0    ]
              [ 0    e^{-t} ],

which is part of the split torus { diag(t, t^{-1}) | t ∈ R, t ≠ 0 }. If we
look at

    X = [ 0 1 ]        exp(tX) = [ 1 t ]
        [ 0 0 ],                 [ 0 1 ];

    Z = [ 0  1 ]       exp(tZ) = [  cos t  sin t ]
        [ -1 0 ],                [ -sin t  cos t ].

The image of t ↦ exp(tZ) is isomorphic to S¹ and called a non-split torus.
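These one-parameter subgroups can be checked numerically; a numpy sketch
using a hand-rolled series `expm` (illustrative only, not part of the notes):

```python
import numpy as np

def expm(A, terms=40):
    """Matrix exponential by summing the power series (fine for small A)."""
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

t = 0.8
H = np.array([[1., 0.], [0., -1.]])
Z = np.array([[0., 1.], [-1., 0.]])

# exp(tH) is the diagonal torus element diag(e^t, e^{-t})
assert np.allclose(expm(t * H), np.diag([np.exp(t), np.exp(-t)]))
# exp(tZ) is the rotation matrix [[cos t, sin t], [-sin t, cos t]]
assert np.allclose(expm(t * Z),
                   [[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
# both land in SL(2): determinant 1
assert np.isclose(np.linalg.det(expm(t * H)), 1.0)
```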

2.2 The Lie Bracket

We need a way to multiply exponentials. That is, a formula of the form

    exp(X) exp(Y) = exp(X + Y + C)    (3)

where C encodes the non-commutativity of X and Y.


To develop such a formula, let’s look to GLpnq for inspiration. The left hand
side of Equation 3 becomes

X2 Y2
ˆ ˙ˆ ˙
LHS of Equation 3 “ 1`X` `... 1`Y` `...
2 2
X2 Y2
“ 1`X`Y` ` XY ` `...
2 2
So what do we need for C on the right hand side? Up to quadratic terms, what
we want is equal to ´ ¯
exp X ` Y ` 12 rX, Ys ` . . . .
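This quadratic term can be seen numerically: for small X, Y, the matrix log
of exp(X)exp(Y) agrees with X + Y + ½[X, Y] up to cubic error. A numpy
sketch with hand-rolled `expm`/`logm_near_I` helpers (our own names, not
library functions):

```python
import numpy as np

def expm(A, terms=40):
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

def logm_near_I(G, terms=40):
    """log via the series (G-1) - (G-1)^2/2 + ..., valid near the identity."""
    A = G - np.eye(G.shape[0])
    out = np.zeros_like(A)
    term = np.eye(G.shape[0])
    for n in range(1, terms):
        term = term @ A
        out = out + ((-1) ** (n + 1)) * term / n
    return out

rng = np.random.default_rng(3)
eps = 5e-3
X, Y = eps * rng.standard_normal((2, 3, 3))

nu = logm_near_I(expm(X) @ expm(Y))
second_order = X + Y + 0.5 * (X @ Y - Y @ X)
# agreement up to the cubic terms, i.e. error of order eps^3
assert np.max(np.abs(nu - second_order)) < 1e-4
```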

This is where the Lie bracket becomes important.

Observe that since (d exp)₀ : g → g is the identity mapping, it follows from
the inverse function theorem that exp restricts to a diffeomorphism on some
open neighborhood U of 0. Call the inverse log.

Definition 2.27. For g near the identity in GL(n),

    log(g) = (g - 1) - (g - 1)²/2 + (g - 1)³/3 - ···.

Exercise 2.28. Check that log ∘ exp and exp ∘ log are the identity on g and G
where defined.

Moreover, there must be a possibly smaller neighborhood V ⊆ U of 0 such that
multiplication µ : exp(V) × exp(V) → G has image in exp(U). It follows that
there is a unique smooth mapping ν : V × V → U such that

    exp(X) exp(Y) = µ(exp X, exp Y) = exp(ν(X, Y)).

Notice that exp(0) exp(0) = exp(0 + 0) = exp(0), so it must be that
ν(0, 0) = 0. Now Taylor expand ν around (0, 0) to see that

    ν(X, Y) = ν₁(X, Y) + ½ ν₂(X, Y) + (higher order terms),

where ν₁ collects the linear terms, ν₂ the quadratic terms, and so on.
Let's try to figure out what ν₁ and ν₂ are.
Since exp is a homomorphism on lines in g, we have ν(aX, bX) = (a + b)X. In
particular, ν(X, 0) = X and ν(0, Y) = Y. Since ν₁ collects the linear terms,
ν₁(X, Y) is linear in both X and Y, and we see that

    (a + b)X = ν(aX, bX) = ν₁(aX, bX) + ½ ν₂(aX, bX) + ···.

The terms beyond the linear ones vanish, by comparing the left-hand side to
the right-hand side. Setting a = 0, b = 1 gives ν₁(0, X) = X, and likewise
a = 1, b = 0 gives ν₁(X, 0) = X. Therefore,

    ν₁(X, Y) = ν₁(X, 0) + ν₁(0, Y) = X + Y.

So we have

    ν(X, Y) = (X + Y) + ½ ν₂(X, Y) + (higher order terms).

To figure out what ν₂ is, consider

    X = ν(X, 0) = X + 0 + ½ ν₂(X, 0) + ···.

Therefore ν₂(X, 0) = 0, and similarly ν₂(0, Y) = 0. So the quadratic term ν₂
contains neither an X² nor a Y² term. Similarly, 2X = ν(X, X) = ν₁(X, X)
forces ν₂(X, X) = 0. Therefore ν₂ must be antisymmetric in its two variables.
Definition 2.29. The antisymmetric, bilinear form [-, -] : g × g → g defined
by [X, Y] = ν₂(X, Y) is called the Lie bracket on g.

So this justifies calling g an algebra, if not yet a Lie algebra. To know
that g is a Lie algebra, we need to know that [-, -] obeys the Jacobi
identity.
Proposition 2.30. If F : G → H is a Lie group homomorphism and X ∈ g =
T_e(G), then exp(dF_e(X)) = F(exp(X)); that is, the following diagram
commutes:

         F
    G ------→ H
    ↑         ↑
    exp       exp
         dF
    g ------→ h

Lecture 7 22 October 2015

Proof. Let γ(t) = F(exp(tX)). This gives us a Lie group homomorphism R → H.
Taking the derivative,

    γ′(0) = dF_e(X)

by the chain rule. Now by Proposition 2.22, any Lie group homomorphism
φ : R → H is of the form exp(tY) for Y = φ′(0), so γ(t) = exp(t dF_e(X)).
Plug in t = 1 to get the proposition.

Proposition 2.31. If G is a connected Lie group and f, g : G → H are
homomorphisms, then f = g if and only if df = dg.

Proof. If f = g, then clearly df = dg.
Conversely, assume df = dg. There is an open neighborhood U of e in G on
which exp is invertible with inverse log. Then for a ∈ U, by Proposition
2.30, we have

    f(a) = f(exp(log(a)))
         = exp(df_e(log a))
         = exp(dg_e(log a))
         = g(exp(log(a))) = g(a).

So by Corollary 2.10, f and g agree everywhere.

Proposition 2.32. If f : G → H is a Lie group homomorphism, then df is a
homomorphism of Lie algebras. That is, df([X, Y]) = [df(X), df(Y)].

Proof. Take X, Y ∈ g sufficiently close to zero. Then

    f(exp(X) exp(Y)) = f(exp(X)) f(exp(Y)).

Expanding the left-hand side,

    f(exp(X) exp(Y)) = f(exp(X + Y + ½[X, Y] + ···))
                     = exp(df_e(X + Y + ½[X, Y] + ···)).

On the right-hand side,

    f(exp(X)) f(exp(Y)) = exp(df_e(X)) exp(df_e(Y))
                        = exp(df_e(X) + df_e(Y) + ½[df_e(X), df_e(Y)] + ···),

using Proposition 2.30 to pull the df inside on the left and on the right.
Therefore,

    exp(df_e(X) + df_e(Y) + ½ df_e([X, Y]) + ···)
        = exp(df_e(X) + df_e(Y) + ½[df_e(X), df_e(Y)] + ···).

Taking logs and comparing quadratic terms gives the result.

Given f : G → H, Proposition 2.30 tells us that f(exp X) = exp(df_e(X)), and
Proposition 2.32 tells us that df_e([X, Y]) = [df_e(X), df_e(Y)]. We're going
to use these to prove the Jacobi identity.


2.3 The Lie bracket revisited


Define a map ψ‚ : G Ñ AutpGq by g ÞÑ ψg , where ψg is the conjugation by
g: ψg phq “ ghg´1 . We can easily check that ψg is an automorphism, and that ψgh “ ψg ψh , so the map ψ‚ is a homomorphism as well.
Notice that ψg peq “ e so that dψg : Te G Ñ Te G. By the chain rule,

dψgh “ dψg dψh .

So the maps dψg give a homomorphism of groups G Ñ GLpTe Gq “ GLpgq. We call this Ad : G Ñ GLpgq, and define it on X “ d{dt|t“0 hptq P g
by
pAd gqX “ d{dt|t“0 ghptqg´1 “ gXg´1 .
In particular, notice that Ad e is the identity in GLpgq. So we can differentiate
again at the identity to get a map ad “ d Ad : g Ñ glpgq.

Proposition 2.33. We have pad XqpYq “ rX, Ys. In particular, we have the Jacobi
identity
adprX, Ysq “ rad X, ad Ys.

Proof. By definition, Ad g “ dψg . In order to compute dψg pYq for Y P g, we


need to compute γ1 p0q for γptq “ gpexp tYqg´1 . Moreover, since pd expq0 is the
identity mapping on g, we may as well compute β1 p0q where β “ exp´1 ˝γ.
Now letting g “ exp X,

βptq “ exp´1 pexppXq expptYq expp´Xqq


“ exp´1 pexppX ` tY ` 1/2 rX, tYs ` . . .q expp´Xqq
“ exp´1 pexpptY ` 1/2 rX, tYs ´ 1/2 rtY, Xs ` . . .qq
“ tY ` rX, tYs ` phigher order termsq

Thus, Adpexp XqpYq “ β1 p0q “ Y ` rX, Ys ` (higher order terms). By Proposi-


tion 2.30 we have

Adpexp Xq “ exppad Xq “ 1 ` ad X ` 1/2 pad Xq2 ` (higher order terms)

Comparing the two sides here after application to Y,

Y ` rX, Ys ` (higher order terms) “ Y ` pad XqpYq ` (higher order terms)

and therefore pad XqpYq “ rX, Ys as required. Finally ad “ d Ad so ad is a


Lie algebra homomorphism by Proposition 2.32. Hence, we get the Jacobi
identity.

Finally, let’s see that the bracket on gln was correct. Let gptq be a curve in G
with g1 p0q “ X, and note that
0 “ d{dt|t“0 pgptqgptq´1 q “ X ` d{dt|t“0 gptq´1 ùñ d{dt|t“0 gptq´1 “ ´X


Then,
pad XqpYq “ pd AdqpXqpYq
“ d{dt|t“0 pAd gptqqY
“ d{dt|t“0 gptqYgptq´1
“ XY ` d{dt|t“0 Ygptq´1
“ XY ´ YX

3 Representations of slp2q
One of the themes of Lie theory is to understand the representation theory of
slp2q, which can then be used to understand the representations of larger Lie
algebras, which are in some sense built from a bunch of copies of slp2q put
together. This is also a good flavor for other things we’ll do later.
From now on, in this section, we’ll work over C. Recall
slp2q “ { [ a b ; c ´a ] | a, b, c P C }
with the Lie bracket rX, Ys “ XY ´ YX. There’s an important basis for slp2q,
given by
„  „  „ 
1 0 0 1 0 0
H“ X“ Y“ .
0 ´1 0 0 1 0
These basis elements have relations
rH, Xs “ 2X, rH, Ys “ ´2Y, rX, Ys “ H.
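These relations can be verified directly with a few lines of code. The sketch below is an illustrative aside (the helper names `mul`, `brk`, `scale` are mine, not from the notes), treating 2 ˆ 2 matrices as plain Python lists.

```python
# Numeric check of the sl(2) commutation relations.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def brk(A, B):
    """Lie bracket [A, B] = AB - BA."""
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(AB[0]))]
            for i in range(len(AB))]

def scale(c, A):
    return [[c * x for x in row] for row in A]

H = [[1, 0], [0, -1]]
X = [[0, 1], [0, 0]]
Y = [[0, 0], [1, 0]]

assert brk(H, X) == scale(2, X)    # [H, X] = 2X
assert brk(H, Y) == scale(-2, Y)   # [H, Y] = -2Y
assert brk(X, Y) == H              # [X, Y] = H
```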
Example 3.1. What are some representations of slp2q?
(1) The trivial representation, slp2q Ñ glp1q given by X, Y, H ÞÑ 0.
(2) The natural/defining/standard representation that comes from the inclusion slp2q Ď glp2q, wherein slp2q acts on C2 by 2 ˆ 2 matrices.
(3) For any Lie algebra g, the adjoint representation ad : g Ñ glpgq. For slp2q,
this is a map slp2q Ñ glp3q. Let’s work out how this representation works
on the basis.
X H Y
ad X 0 ´2X H
ad H 2X 0 ´2Y
ad Y ´H 2Y 0
Therefore, relative to the basis pX, H, Yq, the matrices of ad X, ad H, and ad Y in this representation are
ad X “ [ 0 ´2 0 ; 0 0 1 ; 0 0 0 ],   ad H “ [ 2 0 0 ; 0 0 0 ; 0 0 ´2 ],   ad Y “ [ 0 0 0 ; ´1 0 0 ; 0 2 0 ]
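The ad matrices above can be recomputed mechanically. The following sketch (an aside; helper names are mine) uses the fact that a traceless 2 ˆ 2 matrix [ a b ; c ´a ] equals bX ` aH ` cY, so its coordinates in the basis pX, H, Yq are pb, a, cq.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def brk(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(AB[0]))]
            for i in range(len(AB))]

X = [[0, 1], [0, 0]]
H = [[1, 0], [0, -1]]
Y = [[0, 0], [1, 0]]

def coords(M):
    # coordinates of a traceless 2x2 matrix b*X + a*H + c*Y in basis (X, H, Y)
    return [M[0][1], M[0][0], M[1][0]]

def ad(A):
    # column j of ad A holds the coordinates of [A, j-th basis element]
    cols = [coords(brk(A, B)) for B in (X, H, Y)]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

assert ad(X) == [[0, -2, 0], [0, 0, 1], [0, 0, 0]]
assert ad(H) == [[2, 0, 0], [0, 0, 0], [0, 0, -2]]
assert ad(Y) == [[0, 0, 0], [-1, 0, 0], [0, 2, 0]]
```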

Lecture 8 24 October 2015

(4) The map ρ : slp2q Ñ glpCrx, ysq given by X ÞÑ xB{By, Y ÞÑ yB{Bx, and H ÞÑ xB{Bx ´ yB{By. Under ρ, the span of the monomials of a given degree is stable.

• monomials of degree zero are constant functions, so this is just the


trivial module.
• monomials of degree one λx ` µy give the standard representation if we set x “ p1, 0qT and y “ p0, 1qT .

• monomials of degree two give the adjoint representation.


• for monomials of degree k, denote the corresponding representation
by Γk .

(5) Γ3 “ Cxx3 , x2 y, xy2 , y3 y and X, Y, H act on it as in the previous example.


The matrices of the basis elements, relative to the basis px3 , x2 y, xy2 , y3 q, are
X “ [ 0 1 0 0 ; 0 0 2 0 ; 0 0 0 3 ; 0 0 0 0 ],   H “ [ 3 0 0 0 ; 0 1 0 0 ; 0 0 ´1 0 ; 0 0 0 ´3 ],   Y “ [ 0 0 0 0 ; 3 0 0 0 ; 0 2 0 0 ; 0 0 1 0 ]
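These matrices can be generated straight from the differential-operator description in Example 3.1(4). The sketch below (an aside; all names are mine) builds them on the monomial basis x3 , x2 y, xy2 , y3 and checks that the slp2q relations survive.

```python
deg = 3
basis = [(deg - j, j) for j in range(deg + 1)]   # (i, j) stands for x^i y^j
idx = {m: k for k, m in enumerate(basis)}

def matrix_of(action):
    """Matrix of a linear map given by its effect on basis monomials."""
    n = len(basis)
    M = [[0] * n for _ in range(n)]
    for col, m in enumerate(basis):
        for mono, c in action(m):
            M[idx[mono]][col] += c
    return M

def Xact(m):   # X = x d/dy sends x^i y^j to j x^(i+1) y^(j-1)
    i, j = m
    return [((i + 1, j - 1), j)] if j > 0 else []

def Yact(m):   # Y = y d/dx sends x^i y^j to i x^(i-1) y^(j+1)
    i, j = m
    return [((i - 1, j + 1), i)] if i > 0 else []

def Hact(m):   # H = x d/dx - y d/dy scales x^i y^j by i - j
    i, j = m
    return [(m, i - j)]

X3, Y3, H3 = matrix_of(Xact), matrix_of(Yact), matrix_of(Hact)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def brk(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(AB[0]))]
            for i in range(len(AB))]

assert X3 == [[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3], [0, 0, 0, 0]]
assert H3 == [[3, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -3]]
assert brk(X3, Y3) == H3   # [X, Y] = H holds in the representation
```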

It turns out that all of the finite-dimensional irreducible representations of slp2q appear as the monomials of a fixed degree inside the representation on Crx, ys. Notice that for Γ3 , the matrix of H is diagonal. It turns out that this is always the case: under any finite-dimensional complex representation of slp2q, say on V, the element H acts diagonalizably on V.
We can decompose V into eigenspaces for H, V “ Àλ Vλ , where λ runs over the eigenvalues of H, which we call the weights of the representation V.

Exercise 3.2. Check that the map in Example 3.1(4) is indeed a representation of slp2q. That is, check that ρprA, Bsq f “ pρpAqρpBq ´ ρpBqρpAqq f for all f P Crx, ys.

ρpHq is a diagonalizable endomorphism under any representation ρ : g Ñ glpVq with V finite-dimensional over C. Let V “ Àλ Vλ be an eigenspace decomposition for V, where λ runs over the eigenvalues for the action of H. If v P Vλ , then Hv “ λv.
We’ll classify all the finite-dimensional complex irreducible representations
of slp2q. Let’s start with an easy proposition.

Proposition 3.3. Let V be any slp2q-module and let v P Vα . Then

(1) Xv P Vα`2 ;

(2) Hv P Vα ;

(3) Yv P Vα´2 .

Proof. (1) HXv “ XHv ` rH, Xsv “ αXv ` 2Xv “ pα ` 2qXv;

(2) HHv “ Hαv “ αHv;


(3) HYv “ YHv ` rH, Ysv “ αYv ´ 2Yv “ pα ´ 2qYv.

So that was easy. It won’t get much harder.

Proposition 3.4. Let V be a finite dimensional representation of slp2q, and let


v P Vα with v ‰ 0. Then

(1) Y n`1 v “ 0 for some n P N; and,

(2) if V is irreducible, Xv “ 0, and n is minimal such that Y n`1 v “ 0, then V “ xY n v, Y n´1 v, . . . , vy.

(3) Further, with conditions as in (2), we have that as an xHy-module,

V “ V´n ‘ V´n`2 ‘ . . . ‘ Vn .

Proof. First let’s prove (1). Look at the set tv, Yv, Y 2 v, . . .u. Because V is finite-dimensional, we can choose n P N minimal such that v, Yv, . . . , Y n`1 v are linearly dependent. Then, we can write
Y n`1 v “ řni“0 ai Y i v.

Now apply H ´ pα ´ 2n ´ 2qI to this vector Y n`1 v. Proposition 3.3 says


that Y n`1 v is in the weight space Vα´2n´2 , and H ´ pα ´ 2n ´ 2qI should act as
the zero operator on this weight space because every element in Vα´2n´2 has
H-eigenvalue α ´ 2n ´ 2. Therefore,
0 “ řni“0 ai ppα ´ 2iq ´ pα ´ 2n ´ 2qqY i v “ řni“0 ai ¨ 2pn ´ i ` 1qY i v.

Since no term 2pn ´ i ` 1q is zero for i “ 0, . . . , n, we must have that ai “ 0 for


all i, since the tY i v | 0 ď i ď nu are linearly independent.
So Y n`1 v “ 0. To establish the second claim, we use the following lemma.

Lemma 3.5. Let v P Vα , and assume Xv “ 0. Then XY m v “ mpα ´ m ` 1qY m´1 v.

Proof. By induction on m. For base case m “ 1,

XYv “ YXv ` rX, Ysv “ 0 ` Hv “ αv

For the inductive step,

XY m v “ YXY m´1 v ` rX, YsY m´1 v


“ Ypm ´ 1qpα ´ m ` 2qY m´2 v ` Y m´1 pα ´ 2m ` 2qv
“ ppm ´ 1qpα ´ m ` 2q ` pα ´ 2m ` 2qqY m´1 v
“ mpα ´ m ` 1qY m´1 v


Proof of Proposition 3.4 continued. Now, given this lemma, we can prove (2). Observe that by the lemma, for W “ xv, Yv, . . . , Y n vy, we have XW Ă W. Also, by the previous result, HW Ă W, and clearly YW Ă W. Therefore, W is a nonzero submodule of V and, because V is irreducible, W “ V.
Finally, let’s prove (3). Putting m “ n ` 1 into Lemma 3.5, we get that
0 “ pn ` 1qpα ´ nqY n v, so α “ n.
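Lemma 3.5 can be sanity-checked numerically in Γ3 , using the matrices from Example 3.1(5). This is an aside (the variable names are mine): v is the highest weight vector x3 , of weight α “ 3, and we verify XY m v “ mpα ´ m ` 1qY m´1 v for each m.

```python
X = [[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3], [0, 0, 0, 0]]
Y = [[0, 0, 0, 0], [3, 0, 0, 0], [0, 2, 0, 0], [0, 0, 1, 0]]

def app(M, u):
    """Apply a matrix to a coordinate vector."""
    return [sum(M[i][k] * u[k] for k in range(len(u))) for i in range(len(u))]

alpha = 3
v = [1, 0, 0, 0]            # highest weight vector x^3
assert app(X, v) == [0, 0, 0, 0]   # X v = 0, as required by the lemma

w = v
for m in range(1, 5):
    w_prev, w = w, app(Y, w)                  # now w = Y^m v
    expected = [m * (alpha - m + 1) * c for c in w_prev]
    assert app(X, w) == expected              # X Y^m v = m(alpha - m + 1) Y^(m-1) v
```

Note that at m “ 4 both sides are zero, matching the vanishing 0 “ pn ` 1qpα ´ nqY n v used above.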

The nice thing about slp2q representations is that they’re parameterized by integers, so we can draw them! We go about it like this. For V “ Àni“0 V´n`2i a decomposition into weight spaces, the picture is:

V´n Ø V´n`2 Ø ¨ ¨ ¨ Ø Vn´2 Ø Vn , with X stepping right, Y stepping left, and H fixing each Vλ .

Corollary 3.6. To summarize what we’ve seen so far,

(1) each finite-dimensional irreducible representation has an eigenvector for


H with a maximal possible integral value, which we call the highest
weight.

(2) Any two irreducible modules of the same highest weight r are isomorphic.
We call such a module Γr .

(3) Γr has dimension r ` 1.

(4) The eigenvalues of H are equally spaced around 0.

(5) All eigenspaces are 1-dimensional.

(6) All even steps between n and ´n are filled.

(7) We can reconstruct V by starting with a highest weight vector v ‰ 0 such


that Xv “ 0. Then V “ xv, Yv, . . . , Y n vy.

It follows from the theory of associative algebras that any g-module has
a Jordan-Hölder series. That is, given any finite-dimensional representation
W of slp2q, we can explicitly decompose into composition factors (irreducible
subquotients) by the following algorithm:

(1) identify a highest weight vector v of weight r, say;

(2) generate a submodule xvy from this vector;

(3) write down a composition factor Γr ;

(4) repeat after replacing W by W{xvy.

This gives us a decomposition of W into irreducible factors.
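The algorithm above can be run purely on weights: repeatedly peel off the highest remaining weight r, removing the weight string r, r ´ 2, . . . , ´r of a Γr factor. A small sketch (the function name is mine; run here on the degree ď 3 monomials, whose weights are i ´ j for x i y j ):

```python
from collections import Counter

def decompose(weights):
    """Multiplicities of the Gamma_r in a module with the given H-eigenvalues."""
    c = Counter(weights)
    factors = []
    while any(n > 0 for n in c.values()):
        r = max(w for w, n in c.items() if n > 0)
        factors.append(r)
        for w in range(-r, r + 1, 2):     # remove the weights r, r-2, ..., -r
            c[w] -= 1
            assert c[w] >= 0, "not the character of an sl(2)-module"
    return sorted(factors, reverse=True)

# monomials of degree <= 3 in C[x, y]
weights = [i - j for i in range(4) for j in range(4 - i)]
assert sorted(weights, reverse=True) == [3, 2, 1, 1, 0, 0, -1, -1, -2, -3]
assert decompose(weights) == [3, 2, 1, 0]   # Gamma_3 + Gamma_2 + Gamma_1 + Gamma_0
```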

Example 3.7.

Lecture 9 27 October 2015

(1) Consider the standard representation slp2q Ď glp2q. This decomposes as V “ V´1 ‘ V1 .

(2) The adjoint representation decomposes with weight spaces t´2, 0, 2u, as
V “ V´2 ‘ V0 ‘ V2 .

(3) The representation W on degree 3 polynomials in Crx, ys has weight spaces


V´3 ‘ V´1 ‘ V1 ‘ V3 .

(4) Consider the slp2q-submodule V of Crx, ys generated by all monomials of


degree at most three. H acts as xB{B x ´ yB{B y , and sends xi y j to pi ´ jqxi y j ,
and so we can calculate the weights on the basis tx3 , x2 y, xy2 , y3 , x2 , xy, y2 , x, y, 1u.
The weights are (with multiplicity) 3, 2, 1, 1, 0, 0, ´1, ´1, ´2, ´3, and

V “ V3 ‘ V2 ‘ V1 ‘ V0 ‘ V´1 ‘ V´2 ‘ V´3 .

There is a factor of Γ3 as V3 ‘ V1 ‘ V´1 ‘ V´3 , and from there we can


decompose further.

This is remarkable! For most finite simple groups, we can’t classify their
representations over C. Not even for finite groups of Lie type. So this is really a
simple representation theory, and remarkably it’s complete.

Example 3.8. Recall that Γ1 is the standard representation xxy ‘ xyy, where y
has eigenvalue ´1 and x has eigenvalue `1.
A basis for the tensor product Γ1 b Γ1 is x b x, x b y, y b x, y b y. The action
of H on this module is Hx “ x and Hy “ ´y. Then,

Hpx b xq “ Hx b x ` x b Hx “ 2x b x
Hpx b yq “ 0
Hpy b xq “ 0
Hpy b yq “ ´2y b y

The weight diagram has weights ´2, 0, 0, 2, so this decomposes as Γ2 ‘ Γ0 .

So Γ1 b Γ1 “ Γ2 ‘ Γ0 . We can even find a basis for Γ2 and Γ0 in this manner:


for Γ2 , the basis is tx b x, x b y ` y b x, y b yu and Γ0 has basis tx b y ´ y b xu.
Observe more generally that in a tensor product one simply “adds weights.” Let V and W be slp2q-modules. If tv1 , . . . , vn u is a basis of H-eigenvectors of V and tw1 , . . . , wm u is a basis of H-eigenvectors of W, then tvi b w j u is a basis of H-eigenvectors of V b W. If Hvi “ λi vi and Hw j “ µ j w j , then Hpvi b w j q “ pλi ` µ j qvi b w j .
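This “add the weights” rule makes tensor product decompositions a purely combinatorial computation. A sketch (an aside; helper names are mine) recovering Γ1 b Γ1 “ Γ2 ‘ Γ0 :

```python
from collections import Counter

def decompose(weights):
    # peel off highest weights, as in the composition-factor algorithm
    c = Counter(weights)
    factors = []
    while any(n > 0 for n in c.values()):
        r = max(w for w, n in c.items() if n > 0)
        factors.append(r)
        for w in range(-r, r + 1, 2):
            c[w] -= 1
    return sorted(factors, reverse=True)

def gamma(r):
    """Weights of Gamma_r: r, r-2, ..., -r."""
    return list(range(-r, r + 1, 2))

def tensor(vw, ww):
    """Weights of V tensor W: all pairwise sums."""
    return [a + b for a in vw for b in ww]

assert decompose(tensor(gamma(1), gamma(1))) == [2, 0]   # Gamma_2 + Gamma_0
assert decompose(tensor(gamma(2), gamma(1))) == [3, 1]   # the Clebsch-Gordan pattern
```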


4 Major Results on Lie Algebras


Definition 4.1. Let g be a Lie algebra. Then the derived subalgebra D pgq “
rg, gs is the span of the commutators of g. Inductively, we define the lower
central series D0 pgq “ g, D1 pgq “ rg, gs, and Dk pgq “ rg, Dk´1 gs.
Similarly, the derived series D i pgq is given by D 0 pgq “ g, D k pgq “ rD k´1 pgq, D k´1 pgqs.
It will be important for us that Di pgq and Di pgq are characteristic ideals,
meaning that they are stable under all derivations of g.
Proposition 4.2.
(1) Dk pgq Ď Dk´1 pgq;
(2) Dk pgq is a characteristic ideal;
(3) Dk pgq{Dk`1 pgq is abelian;
(4) Dk pgq{Dk`1 pgq is central in g{Dk`1 pgq .

Proof.
(1) By induction. For k “ 1, we have D pgq Ď g clearly. Given Dk pgq Ď
Dk´1 pgq, we can take brackets on both sides to see that

Dk`1 pgq “ rg, Dk pgqs Ď rg, Dk´1 pgqs “ Dk pgq.

(2) To see that it’s an ideal, for each k we have that

rX, Dk pgqs Ď Dk`1 pgq Ď Dk pgq

for all X P g. To see that this ideal is characteristic, let α P Derpgq. Then

αpDk`1 pgqq “ α prg, Dk pgqsq “ rαpgq, Dk pgqs ` rg, αpDk pgqqs

But αpgq Ď g, so rαpgq, Dk pgqs Ď Dk`1 pgq. By induction,

rg, αpDk pgqqs Ď Dk pgq Ď Dk`1 pgq.

Hence, αpDk`1 pgqq Ď Dk`1 pgq.

(3) If X ` Dk`1 pgq, Y ` Dk`1 pgq are elements of Dk pgq{Dk`1 pgq , then

rX ` Dk`1 pgq, Y ` Dk`1 pgqs “ rX, Ys ` Dk`1 pgq “ 0 ` Dk`1 pgq

as required, because X P g and Y P Dk pgq, so rX, Ys P Dk`1 pgq.

(4) Let X P g, and Y P Dk pgq. Then

rX ` Dk`1 pgq, Y ` Dk`1 pgqs “ rX, Ys ` Dk`1 pgq.

Yet rX, Ys P rg, Dk pgqs “ Dk`1 pgq. Hence,

rX ` Dk`1 pgq, Y ` Dk`1 pgqs “ 0.


Proposition 4.3.
(1) D k pgq Ď D k´1 pgq;
(2) D k pgq is a characteristic ideal;
(3) D k pgq{D k`1 pgq is abelian;
(4) D k pgq Ď Dk pgq.

Exercise 4.4. Prove Proposition 4.3.

Definition 4.5. If Dk pgq “ 0 for some k, then we say that g is nilpotent. If on the other hand D k pgq “ 0 for some k, then we say that g is solvable. If g has no nonzero solvable ideals, then we say that g is semisimple.

Remark 4.6. Everyone seems to use the term “solvable” nowadays, which is an
unfortunate Americanism. If you say “soluble,” you will be understood.

Note that a nilpotent Lie algebra is necessarily solvable by Proposition 4.3(4),


but a solvable Lie algebra need not be nilpotent.
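The standard example is the two-dimensional nonabelian Lie algebra with basis A, B and rA, Bs “ B: it is solvable but not nilpotent. A quick matrix check of this (an aside; names are mine), realizing it inside upper triangular 2 ˆ 2 matrices:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def brk(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(AB[0]))]
            for i in range(len(AB))]

A = [[1, 0], [0, 0]]
B = [[0, 1], [0, 0]]

assert brk(A, B) == B                     # so D(g) = <B>
assert brk(B, B) == [[0, 0], [0, 0]]      # D^2(g) = [<B>, <B>] = 0: g is solvable
# but the lower central series stalls: [g, <B>] contains [A, B] = B,
# so D_k(g) = <B> for every k >= 1 and g is not nilpotent
```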

Remark 4.7. For much of this chapter, we will have g Ă glpVq. There is a
theorem due to Ado which guarantees that there is a faithful representation
g Ñ glpVq. As such, we have the notion of an element X P g being V-nilpotent
if it is a nilpotent endomorphism of V. This is distinct from g being nilpotent as
a Lie algebra.

Theorem 4.8 (Engel’s Theorem). Let k be an arbitrary field, and let g Ď glpVq
be a Lie algebra such that every element of g is nilpotent (for every X P g, there
is N such that X N “ 0). Then there is some nonzero v P V such that Xv “ 0 for
all X P g.

To prove this, we’ll need a lemma.

Lemma 4.9. If X P glpVq is nilpotent, then ad X P Endpgq is nilpotent.

Proof. Suppose X N “ 0. Then by induction, one can show that

pad Xq M Y “ řMi“0 p´1qi pM choose iq X M´i YX i .

Now take M “ 2N ´ 1. Then for each i, either M ´ i or i is at least N, so the right-hand side vanishes identically for all Y. Hence, pad Xq2N´1 “ 0.
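The bound pad Xq2N´1 “ 0 can be checked numerically: take X “ E12 P glp2q, so N “ 2, and compute ad X as a 4 ˆ 4 matrix on the basis E11 , E12 , E21 , E22 . This is an illustrative aside (names are mine):

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def brk(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(AB[0]))]
            for i in range(len(AB))]

def flat(M):
    return [M[i][j] for i in range(2) for j in range(2)]

# basis E11, E12, E21, E22 of gl(2)
E = [[[1 if (i, j) == (r, c) else 0 for j in range(2)] for i in range(2)]
     for r in range(2) for c in range(2)]

X = E[1]   # E12: X^2 = 0, so N = 2 in the lemma
cols = [flat(brk(X, e)) for e in E]
adX = [[cols[j][i] for j in range(4)] for i in range(4)]   # matrix of ad X

zero = [[0] * 4 for _ in range(4)]
adX2 = mul(adX, adX)
adX3 = mul(adX2, adX)
assert adX2 != zero   # (ad X)^2 is still nonzero...
assert adX3 == zero   # ...but (ad X)^(2N-1) = (ad X)^3 = 0
```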

Now that we’re given this, the proof of Engel’s theorem is a clever application
of linear algebra by an induction argument. This is not the way it was first
proved, but the proof has been cleaned up over the years to be much more
elegant.

Lecture 10 29 October 2015

Proof of Theorem 4.8. Induction on the dimension of g. Let dim g “ n.


For n “ 1, if g “ xXy, then suppose X N “ 0 but X N ´1 ‰ 0. Then there is
some nonzero v P V such that X N ´1 v ‰ 0. And so X N ´1 v is our desired vector.
For n ą 1, now assume that we have the result for all Lie algebras h with
dim h ă n. Claim that g has a codimension 1 ideal.
To prove this claim, let h be any maximal proper subalgebra. Since the
subalgebra generated by one element is a proper subalgebra when dim g ą 1,
and h is maximal, it cannot be that h “ 0.
Let h act on g by the adjoint action. By Lemma 4.9, ad h Ď glpgq consists of nilpotent endomorphisms of g, and hence also of g{h. Note that dim g{h ă n since dim h ě 1, so by induction there is Y P gzh such that Y ` h is killed by h. In particular, pad hqY Ď h, so h ‘ xYy is a subalgebra of g. But h is maximal among proper subalgebras, so g “ h ‘ xYy as a vector space. Thus, h is a codimension 1 ideal.
Let W “ tv P V | hv “ 0u. This is nonzero by the inductive hypothesis, since dim h ă dim g. But now YW Ď W, because for any X P h and w P W, XYw “ YXw ` rX, Ysw; here YXw “ 0 because X P h and w P W, and rX, Ysw “ 0 because rX, Ys “ pad XqpYq P h.
Now Y is nilpotent on W, so Y N “ 0 but Y N ´1 ‰ 0 for some N. Thus, there is
w P W such that Y N ´1 w ‰ 0, but YpY N ´1 wq “ 0. Therefore, gpY N ´1 wq “ 0.

Remark 4.10. This is basically the only theorem we’ll talk about that works
over fields of arbitrary characteristic. The rest of the theorems we’ll talk about
will fail in general, or at least for positive characteristic.

Last time we proved Engel’s theorem. Before we move on, let’s point out a
corollary to this.

Corollary 4.11. Under the same hypotheses of Engel’s theorem, Theorem 4.8,
then there is a basis of V with respect to which all elements of g can be repre-
sented by strictly upper triangular matrices.

Proof. Theorem 4.8 guarantees a nonzero v P V such that Xv “ 0 for all X P g. Now by induction on dimension, there is a basis v2 ` xvy, . . . , vn ` xvy of the quotient module V{xvy satisfying the conclusion; that is, each Xpvi ` xvyq lies in xv2 ` xvy, . . . , vi´1 ` xvyy. Then tv, v2 , . . . , vn u is the desired basis.

Now we’ll do the other major theorem of Lie algebras that allows the theory
of complex semisimple Lie algebras to go so far with so little work.

Theorem 4.12 (Lie’s Theorem). Let k “ C. Let g Ď glpVq be a Lie subalgebra.


Suppose g is solvable. Then there is a common eigenvector for all of the elements
of g.

Proof. By induction on the dimension of g. Let n “ dim g.


If n “ 1, this is trivial because for any nonzero X P g, the fact that C is
algebraically closed guarantees an eigenvector. Any other nonzero element of g
is a multiple of X, and therefore shares this eigenvector.


Now assume the result for all h with dim h ă n. We first find a codimension
1 ideal of g. For this, observe that D pgq “ rg, gs is strictly contained in g by
solvability (if not, then it’s never the case that D k pgq is zero). Also observe that
the quotient g{D pgq is abelian. Now by the correspondence theorem, each ideal of g{D pgq corresponds to an ideal of g containing D pgq.
Any subspace of g{D pgq is an ideal since it’s abelian. So let h be the lift of
any codimension 1 subspace of g{D pgq, and this is the required codimension
1 ideal of g. Observe that h is also solvable since D k phq Ď D k pgq. So we can
apply the inductive hypothesis to obtain a nonzero v P V such that for all X P h,
Xv “ λpXqv for some λ P h˚ “ HomC ph, Cq.
Take Y P gzh such that g “ kY ‘ h as a vector space.
Let W “ tw P V | Xw “ λpXqw @X P hu. We know that W ‰ 0 because it contains the vector v produced by the inductive hypothesis.
We want to see that YW Ď W. If we can do this, then considering Y as a linear transformation on W, Y has an eigenvector w P W. Since g “ xYy ‘ h as vector spaces, w is then a common eigenvector for all of g. The fact that YW Ď W will follow from Lemma 4.13 (which is more general).

Lemma 4.13. Let h be an ideal in g Ď glpVq with λ : h Ñ C a linear functional.


Set W “ tv P V | Xv “ λpXqv @X P hu. Then YW Ď W for any Y P g.

Proof. Apply X P h to Yv for some v P W. We have that

XYv “ YXv ` rX, Ysv


“ λpXqYv ` λprX, Ysqv.

We want to show now that λprX, Ysq “ 0. This is a bit of work. Take w P W and
consider U “ xw, Yw, Y 2 w, . . .y. Clearly, YU Ď U. We claim that XU Ď U for all
X P h, and according to a basis tw, Yw, Y 2 w, . . . , Y i wu for U, X is represented by
an upper triangular matrix with λpXq on the diagonal.
We prove this claim by induction on i. For i “ 0, Xw “ λpXqw P U.
Now for k ď i,

XY k w “ YXY k´1 w ` rX, YsY k´1 w.

Note that rX, Ys P h and Y k´1 w is a previous basis vector, so by induction we


may express rX, YsY k´1 w as a linear combination of w, Yw, Y 2 w, . . . , Y k´1 w.
rX, YsY k´1 w “ řk´1 i“0 ai Y i w. (4)

And

YXY k´1 w “ λpXqY k w ` plinear combination of w, Yw, Y 2 w, . . . , Y k´1 wq (5)
So by (4) and (5), we have that
XY k w “ λpXqY k w ` plinear combination of w, Yw, Y 2 w, . . . , Y k´1 wq.


Therefore, according to the basis tw, Yw, . . . , Y i wu, X looks like the upper triangular matrix
[ λpXq ˚ ¨ ¨ ¨ ˚ ; 0 λpXq ¨ ¨ ¨ ˚ ; . . . ; 0 0 ¨ ¨ ¨ λpXq ]

Therefore, for any X P h, tr X|U “ pdim UqλpXq. This holds in particular for
rX, Ys, so
tr prX, Ys|U q “ pdim Uq ¨ λprX, Ysq
but the trace of a commutator is zero. So we get that λprX, Ysq “ 0, as required.

Corollary 4.14. Let V be a g-module where g is solvable. Then there is a basis


of V with respect to which g acts by upper triangular matrices.

The proof of this corollary is very similar to the proof of Corollary 4.11, so we won’t repeat it here. It’s in the notes.

Proposition 4.15 (Jordan Decomposition). Let X P glpVq. Then there exist


polynomials Ps ptq, Pn ptq such that

(1) Xs “ Ps pXq is diagonalizable, Xn “ Pn pXq is nilpotent, and X “ Xs ` Xn .

(2) rXs , Xn s “ rX, Xn s “ rX, Xs s “ 0

(3) If A P glpVq with rX, As “ 0, then rXn , As “ rXs , As “ 0.

(4) If XW Ď W for any W Ď V, then Xn W Ď W and Xs W Ď W

(5) If D and N are such that rD, Ns “ 0 and X “ D ` N with D diagonalizable,


N nilpotent, then D “ Xs and N “ Xn .

Proof. The hard part of this theorem is constructing the polynomials. Everything
else follows from that. Let χ X ptq be the characteristic polynomial of X. We can
factor this as
χ X ptq “ pt ´ λ1 qe1 pt ´ λ2 qe2 ¨ ¨ ¨ pt ´ λr qer ,
where the λi are the distinct eigenvalues of X. Note that pt ´ λi q is coprime to
pt ´ λ j q for all i ‰ j.
Then by the Chinese Remainder Theorem, we can find a polynomial Ps ptq
such that
Ps ptq ” λi pmod pt ´ λi qei q
for all i. Define further Pn ptq “ t ´ Ps ptq. Let Xs “ Ps pXq and Xn “ Pn pXq.

(1) Clearly we have that X “ Xs ` Xn , since t “ Ps ptq ` Pn ptq. Let Vi “ kerpX ´ λi Iqei be the generalized eigenspaces of X, and note that V “ Ài Vi . Since Ps ptq “ λi ` pt ´ λi qei qi ptq for some polynomial qi ptq, we have that Xs acts on each Vi as the scalar λi , so Xs is diagonalizable. By definition Xn “ X ´ Xs , so Xn |Vi “ pX ´ λi Iq|Vi . So Xn is nilpotent, as required.

Lecture 11 31 October 2015

(2) Since Xs and Xn are polynomials in X, then rXs , Xn s “ 0.

(3) If A P glpVq commutes with X, then it also commutes with Xs and Xn


because they are polynomial in X.

(4) Likewise, if W Ď V is stable under X, it is also stable under Xn and Xs .

(5) We have Xs ´ D “ N ´ Xn with everything in sight commuting. So Xs ´ D


is both diagonalizable and nilpotent, and is thus zero.

Observe that if X is a matrix in Jordan block form, with the eigenvalues λ1 , . . . , λn (not necessarily distinct) on the diagonal and 1’s in the superdiagonal positions within each block, then Xs is the diagonal matrix with the λi on the diagonal, and Xn “ X ´ Xs is the matrix with the 1’s immediately above the diagonal.
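For a matrix already in Jordan form the decomposition is visible on inspection. The sketch below (an aside; my own check, leaning on the uniqueness in part (5)) takes X with a 2 ˆ 2 Jordan block for eigenvalue 2 and a 1 ˆ 1 block for 3, and verifies that the diagonal part D and superdiagonal part N satisfy the defining conditions, so Xs “ D and Xn “ N.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

X = [[2, 1, 0],
     [0, 2, 0],
     [0, 0, 3]]
D = [[2, 0, 0], [0, 2, 0], [0, 0, 3]]   # diagonal part (constant within each block)
N = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]   # superdiagonal part

zero = [[0] * 3 for _ in range(3)]
assert [[D[i][j] + N[i][j] for j in range(3)] for i in range(3)] == X
assert mul(D, N) == mul(N, D)   # [D, N] = 0: eigenvalues within a block agree
assert mul(N, N) == zero        # N is nilpotent
# D is diagonal, hence diagonalizable; by uniqueness (part 5), Xs = D, Xn = N
```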

Corollary 4.16. For X P glpVq, we have pad Xqn “ ad Xn and pad Xqs “ ad Xs .

Proof. According to some basis tv1 , . . . , vn u of V, Xs is diagonal with entries


d1 , . . . , dn . Relative to this basis, let Eij be the standard basis of glpVq. Then
calculate
pad Xs qpEij q “ rXs , Eij s “ pdi ´ d j qEij
So the Eij are a basis of eigenvectors for ad Xs . So ad Xs is diagonalizable.
Furthermore, ad Xn is nilpotent by Lemma 4.9. Also,

ad X “ adpXs ` Xn q “ ad Xs ` ad Xn

and as rXn , Xs s “ 0, then rad Xn , ad Xs s “ adrXn , Xs s “ 0.

We’ve seen that taking traces can be a useful tool. This continues to be the
case, and is formalized in the following definition.


Definition 4.17. Let ρ : g Ñ glpVq be a representation of a Lie algebra g. Then


the Killing form with respect to ρ (or V) is the symmetric bilinear form given
by
BV pX, Yq “ trV ρpXqρpYq
for X, Y P g. When ρ “ ad and V “ g, we have ‘the’ Killing form

BpX, Yq “ tr ad X ad Y

Remark 4.18 (Historical interlude). Killing invented Lie algebras independently


of Lie when he was thinking about the infinitesimal transformations of a space,
whereas Lie wanted to study differential equations. Killing is more-or-less
responsible for the program of the classification of Lie algebras, but it was
completed by Élie Cartan.
Note that the Killing form is a symmetric bilinear form; this isn’t too hard to
see because the trace is linear and the definition is symmetric. The Killing form
has the nice property that it’s invariant under the adjoint action of g.
Proposition 4.19. Bρ prX, Ys, Zq “ Bρ pX, rY, Zsq.
Proof. Use the cyclic invariance of trace.

trpρrX, YsρZq “ tr ppρpXqρpYq ´ ρpYqρpXqqρpZqq


“ tr pρpXqρpYqρpZqq ´ trpρpYqρpXqρpZqq
“ tr pρpXqρpYqρpZqq ´ tr pρpXqρpZqρpYqq
“ tr pρpXqpρpYqρpZq ´ ρpZqρpYqqq
“ tr pρpXqρrY, Zsq

Theorem 4.20 (Cartan’s Criterion). Let g Ď glpVq be a Lie subalgebra. If


BV pX, Yq “ trpXYq is identically zero on g ˆ g, then g is solvable.
Proof. It suffices to show that every element of D pgq is nilpotent, since then by
Corollary 4.11, we have that some basis of V with respect to which all elements
of g can be represented by strictly upper triangular matrices, and repeated
commutators of strictly upper triangular matrices eventually vanish. More
precisely, if every element of D pgq is nilpotent, then D pgq is a nilpotent ideal, so
Dk pD pgqq “ 0 for some k. Now by induction D i pgq Ď Di pgq, so D k`1 pgq “ 0.
So take X P D pgq and write X “ D ` N for D diagonalizable and N nilpotent. Work with a basis such that D is diagonal, say with entries λ1 , . . . , λn , and let D̄ denote the complex-conjugate matrix of D. We will show that
trpD̄Xq “ ři λi λ̄i “ 0.
It suffices to show this because each λi λ̄i “ |λi |2 is nonnegative, and a sum of nonnegative terms is zero only when each term is zero individually; then D “ 0, so X “ N is nilpotent.
Since X is a sum of commutators, say of terms rYi , Zi s, it will suffice to show that trpD̄rY, Zsq “ 0

Lecture 12 3 November 2015

for Y, Z P g. But trpD̄rY, Zsq “ trprD̄, YsZq by Proposition 4.19. By hypothesis, we will be done if we can show that ad D̄ takes g to itself, in which case we say that D̄ normalizes g.
Since ad D “ ad Xs “ pad Xqs is a polynomial in ad X by Corollary 4.16, ad D normalizes g. Taking a basis of glpVq relative to which ad D is diagonal, ad D̄ is also diagonal, with eigenvalues the complex conjugates of those of ad D; in particular ad D̄ and ad D stabilize the same subspaces, so ad D̄ stabilizes g.

Remark 4.21 (Very very tangential aside). This proof is kind of cheating. We
proved it for specifically the complex numbers. The statement is true for any
algebraically closed field of characteristic zero, but we can use the Lefschetz
principle that says that any statement in any first order model theory that holds
for any algebraically closed field of characteristic zero is true for all such fields.
So really we should check that we can express this statement in some first order
model theory. But this remark is safe to ignore for our purposes.
Corollary 4.22. A Lie algebra g is solvable if and only if Bpg, D pgqq “ 0.
Proof. Assume first that g is solvable and consider the adjoint representation of
g. By the corollary to Lie’s theorem, Corollary 4.14, there is a basis of g relative
to which each endomorphism ad X is upper triangular. But now adrX, Ys “
rad X, ad Ys and the commutator of any two upper triangular matrices is strictly
upper triangular.
So for X P g and Y P D pgq, the previous paragraph shows that ad Y is strictly upper triangular, since Y is a sum of commutators. And by our choice of basis, ad X is upper triangular. The product of an upper-triangular matrix and a strictly upper-triangular matrix is strictly upper triangular, hence traceless, so
BpX, Yq “ trpad X ad Yq “ 0

Conversely, assume Bpg, D pgqq is identically zero. Then BpD pgq, D pgqq “ 0, and so by Cartan’s Criterion (Theorem 4.20), ad D pgq is solvable. Then D k pad D pgqq “ 0 for some k. But ad is a Lie algebra homomorphism, so adpD k`1 pgqq “ 0 as well. Therefore, D k`1 pgq Ď ker ad, and ker ad “ Zpgq is abelian. Hence D k`2 pgq “ 0, so g is solvable.

“The Killing Form”, which sounds kind of like a television detective drama.
Previously on the Killing Form, we saw Cartan’s Criterion: if g Ď glpVq and BV
is identically zero on g, then g is solvable. We also showed that g is solvable if
and only if Bpg, D pgqq “ 0.
There are a bunch of easy-ish consequences of Cartan’s Criterion.
Definition 4.23. For an alternating or symmetric bilinear form F : V ˆ V Ñ k,
the radical of F is

radpFq “ tv P V | Fpv, wq “ 0 for all w P Vu.


and if W Ď V is a subspace, define

W K “ tv P V | Fpv, wq “ 0 for all w P Wu

Note that rad F “ V K . If rad F “ 0 we say that F is non-degenerate.


Corollary 4.24 (Corollary of Theorem 4.20). The Lie algebra g is semisimple if
and only if B is non-degenerate.
Proof. Assume g is semisimple. Consider rad B. If Y, Z P g, and X P rad B, then

0 “ BpX, rY, Zsq “ BprX, Ys, Zq

Since Z was arbitrary, this tells us that rX, Ys P rad B. Hence, rad B is an ideal.
But B vanishes identically on rad B, so Cartan’s Criterion (Theorem 4.20) shows
us that rad B is a solvable ideal. But g is semisimple, so rad B “ 0, which implies
B is nondegenerate.
Conversely, assume g is not semisimple. Take b a non-trivial solvable ideal.
Then for some k, we have that D k`1 pbq “ 0 but D k pbq ‰ 0. Now take some
nonzero X P D k pbq. For any Y P g, consider pad X ad Yq2 .
Since the D i pbq are characteristic ideals of b, they are stable under ad Y for every Y P g. Now apply pad X ad Yq2 to g:
g ÝÑ g ÝÑ D k pbq ÝÑ D k pbq ÝÑ D k`1 pbq “ 0,
where the four maps are ad Y, ad X, ad Y, ad X in turn, using that D k pbq is a characteristic ideal in b.

So ad X ad Y is a nilpotent endomorphism, and therefore has trace zero. Therefore, X P rad B but X ‰ 0.
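For g “ slp2q, nondegeneracy can be confirmed explicitly from the ad matrices of Example 3.1(3). The sketch below (an aside; names are mine) forms the Gram matrix of B in the basis pX, H, Yq and checks its determinant is nonzero.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tr(M):
    return sum(M[i][i] for i in range(len(M)))

adX = [[0, -2, 0], [0, 0, 1], [0, 0, 0]]
adH = [[2, 0, 0], [0, 0, 0], [0, 0, -2]]
adY = [[0, 0, 0], [-1, 0, 0], [0, 2, 0]]

basis = [adX, adH, adY]
gram = [[tr(mul(a, b)) for b in basis] for a in basis]  # B(U_i, U_j)
assert gram == [[0, 0, 4], [0, 8, 0], [4, 0, 0]]

# 3x3 determinant, expanded along the first row
det = (gram[0][0] * (gram[1][1] * gram[2][2] - gram[1][2] * gram[2][1])
       - gram[0][1] * (gram[1][0] * gram[2][2] - gram[1][2] * gram[2][0])
       + gram[0][2] * (gram[1][0] * gram[2][1] - gram[1][1] * gram[2][0]))
assert det == -128   # nonzero, so B is nondegenerate on sl(2)
```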

Corollary 4.25. If g is a semisimple Lie algebra and I is an ideal, then I K is an


ideal and g “ I ‘ I K . Moreover B is nondegenerate on I.
Proof. Recall that I K “ tX P g | BpX, Yq “ 0@Y P Iu. This is an ideal, because
given any X P I K and Y P g, let Z P I. Then

BprX, Ys, Zq “ BpX, rY, Zsq “ 0,

because rY, Zs P I as I is an ideal. Hence, rX, Ys P I K since Z was arbitrary.


By general considerations of vector spaces, g “ I ` I K .
Now consider I X I K . This is an ideal of g on which B is identically zero.
Therefore, by Cartan’s criterion, adpI X I K q is solvable. So there is some k such
that D k padpI X I K qq “ 0.
Since ad is a Lie algebra homomorphism, then adpD k pI X I K qq “ 0 as well.
Hence D k pI X I K q Ď ker ad, but ker ad is abelian, so

D k`1 pI X I K q “ rD k pI X I K q, D k pI X I K qs “ 0.
Hence, I X I K is a solvable ideal of g, which means that I X I K “ 0 since g is
semisimple.
Finally, B is nondegenerate on I: given any nonzero X P I, since B is nondegenerate on g there is Y P g such that BpX, Yq ‰ 0. Write Y “ YI ` YI K with YI P I and YI K P I K . Then BpX, YI K q “ 0, so BpX, YI q “ BpX, Yq ‰ 0.


Corollary 4.26. If g is semisimple, then g “ D pgq. (Terminology: g is perfect).


Proof. Let h “ D pgqK . Claim that h is an ideal. If X P h, Y P g and Z P D pgq,
then rY, Zs P D pgq, and

BprX, Ys, Zq “ BpX, rY, Zsq “ 0

Therefore, rX, Ys P h, so h is an ideal.


By Corollary 4.25 g – h ‘ D pgq as vector spaces, but h – g{D pgq is abelian. So
h is a solvable ideal, and hence zero.

We can get lots of mileage out of Cartan’s Criterion.


Corollary 4.27. Let g be semisimple and ρ : g Ñ glpVq with dim V “ 1. Then ρ
is the trivial representation.
Proof. ρpgq is abelian, and a quotient of g by ker ρ. It therefore factors through
the largest abelian quotient, g{D pgq, so D pgq Ď ker ρ. But by Corollary 4.26,
g “ D pgq Ď ker ρ.

So why is “semisimple” a good word for “has no solvable ideals?”


Corollary 4.28. Let g be semisimple. Then g – g1 ‘ . . . ‘ gn , where each gi is a
simple ideal of g.
Proof. If g is not simple, then let h be any nonzero proper ideal. Then as before, hK is an ideal and the Killing form vanishes identically on h X hK , so h X hK “ 0 and therefore g “ h ‘ hK . Repeat with h and hK until the summands are themselves simple. This terminates because dim h ă dim g and dim hK ă dim g.

Corollary 4.29. Let ρ : g Ñ h be any homomorphism with g semisimple. Then


ρpgq is also semisimple.
Proof. ker ρ is an ideal, so as before g – ker ρ ‘ pker ρqK , with B|ker ρ and
B|pker ρqK nondegenerate. So ρpgq – g{ ker ρ – pker ρqK is semisimple.

Corollary 4.30. Let g be semisimple. If ρ : g Ñ glpVq is a representation of g, then BV is non-degenerate on ρpgq.
Proof. We know that the image of g under ρ is also semisimple. Let s “ tY P
ρpgq | BV pX, Yq “ 0 @ X P ρpgqu. Then as usual s is an ideal of ρpgq on which BV
is identically zero, and thus is zero by Cartan’s Criterion.

Remark 4.31. There is a huge (infinite-dimensional) associative algebra Upgq


called the universal enveloping algebra, together with an embedding g ãÑ Upgq
of Lie algebras. The representation theory of Upgq is the same as that of g. It’s
constructed as the quotient of the tensor algebra
Tpgq “ Àně0 gbn

by the ideal I “ xpX b Y ´ Y b Xq ´ rX, Ys | X, Y P gy; that is, Upgq “ Tpgq{I.

Lecture 13 5 November 2015

Remark 4.32. Many infinite-dimensional Lie algebras start by considering the


Loop algebra g b Crt, t´1 s with g some finite-dimensional complex semisimple
Lie algebra. This is not the direct sum of simple Lie algebras, but it does not
have solvable ideals. To get Kac-Moody algebras, one takes a central extension
pg such that pg sits in the exact sequence

0 Ñ Cc Ñ pg Ñ g b Crt, t´1 s Ñ 0

with Cc Ď Zppgq. Then rg is pg ‘ Cd as vector spaces, with d acting as tB{Bt on pg
and as 0 on c.

5 Representations of Semisimple Lie Algebras


In this section, we will explore the representation theory of semisimple Lie
algebras. The first result is Weyl’s theorem on complete reducibility of rep-
resentations. To that end, we first define the Casimir operator, which is a
distinguished element (up to scalar multiples) of ZpUpgqq.
Definition 5.1. Let g be a subalgebra of glpVq, and let BV be the Killing form
relative to V. If g is semisimple, then BV is non-degenerate on g. Take a basis
for g, say U1 , U2 , . . . , Udim g and let tUi1 | 1 ď i ď dim gu be the dual basis under
BV , that is, BV pUi , Uj1 q “ δij . Then “the” Casimir operator with respect to V is
CV “ ři Ui Ui1 , the sum running over i “ 1, . . . , dim g.

Exercise 5.2. The word “the” is in quotes above because it’s not obvious the
definition doesn’t depend on the choice of basis. Check that CV doesn’t depend
on the choice of basis for g.
“The Casimir operator” sounds like the name of a spy thriller. Let’s see an
example.
Example 5.3. Let g “ slp2q Ď glp2q. Then as before, X “ E12 , Y “ E21 , and
H “ E11 ´ E22 . Then,

BV pX, Yq “ 1
BV pY, Yq “ BV pX, Xq “ BV pX, Hq “ BV pY, Hq “ 0
BV pH, Hq “ 2

So if tUi u “ tX, Y, Hu is a basis for g and tUi1 u “ tY, X, 1{2 Hu is the dual basis
for g under BV , then
CV “ XY ` YX ` 1{2 H 2 .
As an element of glp2q, CV “ 3{2 I.
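As a quick numerical sanity check (an illustration, not part of the lecture), the following Python snippet builds the trace form on V “ C2 , verifies the dual-basis relations claimed above, and recomputes CV :

```python
import numpy as np

# Standard basis of sl(2) acting on the standard representation V = C^2.
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

# Trace form relative to V: B_V(A, B) = tr(AB).
def BV(A, B):
    return np.trace(A @ B)

basis = [X, Y, H]
dual = [Y, X, H / 2]   # claimed dual basis: B_V(U_i, U_j') = delta_ij

# Verify the duality relations from the example.
for i, U in enumerate(basis):
    for j, Up in enumerate(dual):
        assert np.isclose(BV(U, Up), 1.0 if i == j else 0.0)

# Casimir operator C_V = XY + YX + (1/2) H^2.
CV = sum(U @ Up for U, Up in zip(basis, dual))
print(CV)            # (3/2) times the identity
print(np.trace(CV))  # 3 = dim sl(2)
```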


Proposition 5.4. Let CV be the Casimir operator for g with respect to a repre-
sentation g Ñ glpVq. Then

(1) tr CV “ dim g;

(2) if W Ď V is a g-submodule, then CV W Ď W;

(3) for X P g, rX, CV s “ 0.

Proof.

(1) tr CV “ ři trpUi Ui1 q “ ři BV pUi , Ui1 q “ ři 1 “ dim g.

(2) Follows since Ui , Ui1 P g and W is a g-submodule, so Ui Ui1 W Ď W.


(3) Define coefficients aij by rX, Ui s “ řj aij Uj . We have that

aij “ BV prX, Ui s, Uj1 q “ ´BV pUi , rX, Uj1 sq

and therefore rX, Uj1 s “ ´ řk akj Uk1 . So
rX, CV s “ ři rX, Ui Ui1 s “ ři Ui rX, Ui1 s ` ři rX, Ui sUi1
“ ´ ři,j a ji Ui Uj1 ` ři,j aij Uj Ui1 “ 0,

where the last equality follows by relabelling i Ø j in the second sum.
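The three parts of Proposition 5.4 can also be checked numerically for the adjoint representation of slp2q. The sketch below (illustrative only; the coordinate map is a choice made for the illustration) computes the ad-matrices in the basis pX, Y, Hq, the Killing-form dual basis pY{4, X{4, H{8q, and the resulting Casimir:

```python
import numpy as np

X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
basis = [X, Y, H]

def coords(M):
    # Coordinates of a traceless 2x2 matrix [[a, b], [c, -a]] in (X, Y, H).
    return np.array([M[0, 1], M[1, 0], M[0, 0]])

def ad(A):
    # Matrix of ad(A) on sl(2), columns = coords of [A, basis element].
    return np.column_stack([coords(A @ U - U @ A) for U in basis])

def B(A, A2):
    # Killing form B(A, A2) = tr(ad A ad A2).
    return np.trace(ad(A) @ ad(A2))

assert np.isclose(B(X, Y), 4) and np.isclose(B(H, H), 8)

dual = [Y / 4, X / 4, H / 8]        # dual basis with respect to B
C = sum(ad(U) @ ad(Up) for U, Up in zip(basis, dual))

print(np.trace(C))                  # 3 = dim sl(2)   (Prop 5.4(1))
for U in basis:                     # [ad U, C] = 0   (Prop 5.4(3))
    assert np.allclose(ad(U) @ C - C @ ad(U), 0)
```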

Lemma 5.5 (Schur’s Lemma). Let g be a Lie algebra over an algebraically closed
field. Let V be an irreducible finite-dimensional representation of g. Then
dim Homg pV, Vq “ 1.

Proof. Let θ be a non-zero map in Homg pV, Vq. Because we work over an
algebraically closed field, θ has an eigenvector with eigenvalue λ, say.
Then θ ´ λI is clearly a g-module map, with a nonzero kernel. But kerpθ ´ λIq
is a g-submodule of V, and V is irreducible, so kerpθ ´ λIq “ V. Hence, θ “
λI.

The next theorem says that representations of semisimple Lie algebras are
completely reducible into a direct sum of irreducible representations, much like
representations of finite groups.

Theorem 5.6 (Weyl’s Theorem). Let g be a semisimple complex Lie algebra,


and let V be a representation of g with W Ď V a g-submodule. Then there is a
g-stable complement to W in V, that is, a g-submodule W 1 such that V – W ‘ W 1 .


Proof. This is yet another incredibly clever piece of linear algebra. There are
several cases, which we prove in order of increasing generality.

Case 1: Assume first that W is codimension 1 in V and irreducible.

Proof of Case 1. First observe that V{W is a 1-dimensional g-module which (by
Corollary 4.27) is trivial for g. That is, gV Ď W. This implies that CV V Ď W.
Because rX, CV s “ 0 by Proposition 5.4, we have that XpCV pvqq “ CV pXpvqq for
all v P V. Therefore, CV is a g-module map. So CV |W “ λIW by Lemma 5.5
(using W irreducible). Now 1{λ CV is a projection homomorphism from V to W.
Dividing by λ is okay, since tr CV “ dim g ñ λ ‰ 0. Thus V – W ‘ kerp1{λ CV q,
and kerp1{λ CV q is the required g-stable complement of W.

Case 2: Assume that W is codimension 1.

Proof of Case 2. By induction on dim V. If dim V “ 1 there’s nothing to check.


In general, we can assume that W has a nontrivial g-submodule Z (or else W
is irreducible and we refer to case 1). Consider V{Z Ą W{Z . By an isomorphism
theorem, we have that
V{W – pV{Zq{pW{Zq.
So W{Z is codimension 1 in V{Z and so by induction we can assume there is W 1 a
g-submodule of V such that

V{Z – W 1{Z ‘ W{Z .
But Z is codimension 1 in W 1 , so by induction again there is U Ă W 1 a g-
submodule such that W 1 – U ‘ Z. So V – W ‘ U by the following chain of
isomorphisms
V{W – pW{Z ‘ W 1{Zq{pW{Zq – W 1{Z – pZ ‘ Uq{Z – U.

Case 3: Assume that W is irreducible.

Proof of Case 3. Consider HomC pV, Wq. We know that this is a g-module via
pXαqpvq “ ´αpXvq ` Xpαpvqq. Similarly, there is an action of g on HomC pW, Wq.
Consider a restriction map R : HomC pV, Wq Ñ HomC pW, Wq. This is a
g-module homomorphism, because for w P W,

XpRpαqqpwq “ XpRpαqpwqq ´ RpαqpXpwqq


“ Xpαpwqq ´ αpXpwqq
“ pXpαqqpwq
“ RpXpαqqpwq

Lecture 14 7 November 2015

Now note that Xpαq “ 0 for all X P g precisely if α is a g-module map. By


this observation, HomC pW, Wq contains a g-submodule Homg pW, Wq, which
is trivial and 1-dimensional by Lemma 5.5 since W is irreducible. The module
M :“ R´1 pHomg pW, Wqq is a submodule of HomC pV, Wq.
Now kerpR| M q has codimension 1 in M, as its image Homg pW, Wq has dimension
1. So by Case 2 we have that M – kerpR| M q ‘ Cψ for some ψ P HomC pV, Wq.
But Cψ is 1-dimensional, so g acts trivially on this space. Therefore, ψ is a
g-module map. Moreover, ψ is nonzero because otherwise RpMq “ 0. Again, by
scaling, we can arrange that ψ is a projection V Ñ W, so there is a complement
W 1 to W in V, that is, V – W ‘ W 1 .

Case 4: The whole theorem.

Proof of Case 4. Proof by induction on dim V. If dim V “ 1, then we are done by


Corollary 4.27.
If dim V ą 1, let W Ď V. If W is irreducible, then this was done in Case 3.
Otherwise, W has a nontrivial submodule Z; pick Z maximal among nontrivial
submodules of W. Then W{Z is irreducible, because submodules of W{Z are
submodules of W containing Z, and Z is maximal so there are none. Since Z is
nontrivial, dim V{Z ă dim V, so by induction W{Z has a complement in V{Z , of
the form W 1{Z . So

V{Z – W{Z ‘ W 1{Z .
Then because Z is nontrivial, dim W 1 ă dim V. So by induction, Z has a
complement U in W 1 , with W 1 – U ‘ Z. Then
V{W – pW{Z ‘ W 1{Zq{pW{Zq – W 1{Z – pZ ‘ Uq{Z – U.

Hence, V – W ‘ U and U, W are g-invariant.

This concludes the proof of Theorem 5.6.

Exercise 5.7. Show that if π : V Ñ V satisfies π 2 “ π, then V “ im π ‘ ker π.
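A minimal numerical illustration of this exercise (not part of the notes), using an assumed non-orthogonal projection on C3 :

```python
import numpy as np

# A (non-orthogonal) projection pi on C^3: pi^2 = pi.
pi = np.array([[1., 1., 0.],
               [0., 0., 0.],
               [0., 0., 1.]])
assert np.allclose(pi @ pi, pi)

# Every v splits as pi(v) + (v - pi(v)), with the two pieces in
# im(pi) and ker(pi) respectively, so V = im(pi) + ker(pi);
# the sum is direct since pi fixes im(pi) and kills ker(pi).
v = np.array([2., 3., 5.])
assert np.allclose(pi @ (v - pi @ v), 0)   # v - pi(v) lies in ker(pi)
assert np.allclose(pi @ (pi @ v), pi @ v)  # pi(v) lies in im(pi)
```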

Remark 5.8 (Important Examinable Material). Previously, on Complete Reducibil-


ity, Rick Moranis invents a machine to reduce things to their component parts.
By a cruel twist of fate, he is the victim of his own invention, and thereby his
consciousness gets trapped in a single glucose molecule. This is the story of that
glucose molecule’s fight to reunite itself with the rest of its parts, and thereby
reform Rick Moranis.

By Weyl’s Theorem, we see that if g is a complex semisimple finite-dimensional


Lie algebra, and V is a finite-dimensional representation, then any submodule
W has a complement W 1 and V – W ‘ W 1 .

Corollary 5.9. A simple induction yields that under these hypotheses, V –


W1 ‘ W2 ‘ . . . ‘ Wn , where Wi are simple modules.


Theorem 5.10. Let g Ď glpVq be semisimple. For any X P g, let X “ Xs ` Xn be


the Jordan decomposition of X into semisimple Xs and nilpotent Xn parts. Then
Xs , Xn P g.

Proof. The idea here is to write g as the intersection of some subalgebras for
which the result is obvious. Let W be a simple submodule of V, and define sW
to be the component of the stabilizer of W in glpVq that is traceless, that is,

sW “ tX P glpVq | XW Ď W and tr X|W “ 0u.

Claim that g Ď sW . First, W is a submodule so it is stabilized by g, and also


the image of g in glpWq is by Corollary 4.29 also semisimple, so it equals its own derived subalgebra.
Therefore, every element of g is a sum of commutators, all of whose trace must
be zero. So we conclude that g Ď sW . This tells us that tr X|W “ 0.
Note that Xs and Xn , being polynomials in X, stabilize everything that X
does, and also the trace of Xn |W is zero since Xn |W is nilpotent, and

trpXs |W q “ trpX|W ´ Xn |W q “ trpX|W q ´ trpXn |W q “ 0.

Therefore, Xs , Xn P sW for each W.


Now let n be the normalizer of g in glpVq,

n “ nglpV q pgq “ tX P glpVq | rX, gs Ď gu.

Clearly g Ď n, and also Xs , Xn P n, being polynomials in X.


To finish the proof, claim that g is precisely the intersection
g1 “ n X ŞW sW ,

where the intersection runs over all irreducible submodules W Ď V.

Since g1 Ă n, g is an ideal of g1 . Then g is a submodule of g1 under the adjoint ac-


tion of g. So by Weyl’s Theorem, g1 – g ‘ U as g-modules for some g-submodule
U.
So we want to show that U “ 0. Take Y P U. We have rY, gs Ď g as g is an
ideal of g1 . But also ad gU Ď U, so rY, gs Ď U. Therefore, rY, gs Ď U X g “ 0.
Thus, Y commutes with every element of g. Hence, Y is a g-module map
from V to V. So Y stabilizes every irreducible submodule W, so by Schur’s
lemma Y|W “ λidW for some scalar λ.
Now tr Y|W “ 0 for all irreducible W Ď V because Y P sW for each W.
Therefore, tr λidW “ 0 ùñ λ “ 0, so Y|W “ 0 for all irreducible W. But
V – Ài Wi for Wi irreducible, so Y “ 0. Hence U “ 0 and g1 “ g; since we
showed Xs , Xn P g1 , we conclude Xs , Xn P g.

If g is as in the theorem, we can define an abstract Jordan decomposition by


ad X “ pad Xqs ` pad Xqn . And because ad is faithful for g semisimple, we have
that g – ad g Ď glpgq.
But by the theorem pad Xqs and pad Xqn are elements of ad g, and hence
are of the form ad Xs and ad Xn for some elements Xs and Xn of g. Therefore,
ad X “ adpXs ` Xn q and the faithfulness of ad implies X “ Xs ` Xn .
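For a concrete feel for the Jordan decomposition inside slp3q, here is an assumed example (chosen for the illustration; this is not from the lecture): a traceless matrix split into its diagonalizable and nilpotent parts.

```python
import numpy as np

# An assumed traceless matrix with Jordan decomposition X = Xs + Xn.
X  = np.array([[1., 1., 0.],
               [0., 1., 0.],
               [0., 0., -2.]])
Xs = np.diag([1., 1., -2.])        # semisimple (diagonalizable) part
Xn = X - Xs                        # nilpotent part, here E_12

assert np.isclose(np.trace(X), 0)          # X lies in sl(3)
assert np.allclose(Xn @ Xn, 0)             # Xn is nilpotent
assert np.allclose(Xs @ Xn - Xn @ Xs, 0)   # [Xs, Xn] = 0
```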

Lecture 15 10 November 2015

Suppose g Ď glpVq for some V, then write X “ Xs ` Xn relative to V. By


Corollary 4.16, ad Xs “ pad Xqs and ad Xn “ pad Xqn . So the abstract Jordan
decomposition as just defined agrees with the usual notion.
Moreover, this is true under any representation.

Corollary 5.11 (Preservation of Jordan Decomposition). Let ρ : g Ñ glpVq be


any representation of a semisimple Lie algebra g. Let X P g with abstract Jordan
decomposition X “ Xs ` Xn . Then ρpXqs “ ρpXs q and ρpXqn “ ρpXn q; that is,
ρpXq “ ρpXs q ` ρpXn q is the Jordan decomposition of ρpXq.

We really rely on the semisimplicity of g here. This fails spectacularly in


positive characteristic.

Proof. The idea is that we want to compare ρpad Xq with ad ρpXq in some sense
(because these are not quite well-defined).
By Corollary 4.29, ρpgq is semisimple. Therefore, by Theorem 5.10, ρpXqs , ρpXqn P
ρpgq. So it remains to check that ρpXs q is semisimple and ρpXn q is nilpotent,
and then we may apply Proposition 4.15(5) to claim that ρpXs q “ ρpXqs and
ρpXn q “ ρpXqn .
Let Zi be a basis of eigenvectors of ad Xs in g. That is,

adpXs qZi “ λi Zi

for some λi . Then ρpZi q span ρpgq and

adpρpXs qqρpZi q “ rρpXs q, ρpZi qs “ ρprXs , Zi sq “ λi ρpZi q

so that adpρpXs qq has a basis of eigenvectors and is therefore semisimple (diag-


onalizable). Similarly, adpρpXn qq is nilpotent and commutes with adpρpXs qq
and adpρpXqq.
Accordingly, ρpXq “ ρpXn q ` ρpXs q is the Jordan decomposition of ρpXq. But
by the remarks above this is the Jordan decomposition of ρpXq relative to V.
This means precisely that ρpXqs “ ρpXs q and ρpXn q “ ρpXqn .

Remark 5.12. There is another way to do this that uses the Killing form instead
of complete reducibility, but it’s a bit of a case of using a sledgehammer to crack
a nut. An alternative approach to Theorem 5.10 not using Weyl’s theorem is to
prove that when g is semisimple, every derivation D of g is inner, that is, of the
form D “ ad X for some X P g. Equivalently, ad g “ Derpgq.
Given that result, to prove Theorem 5.10 write ad X “ xs ` xn in glpgq for
some X P g. As xs and xn are also derivations of g, then xs “ ad Xs and
xn “ ad Xn for some Xs , Xn P g. From the injectivity of ad, we get X “ Xs ` Xn
and rXs , Xn s “ 0. It’s an easy exercise to see that Xs and Xn are semisimple and
nilpotent, respectively. This gives us the Jordan decomposition of X.

Remark 5.13 (Important Examinable Material). Last time we were talking about
Jordan Decomposition, which is a recent Channel 4 documentary following
the trials and tribulations of supermodel Jordan Price, wherein she is struck


by a previously undetected case of leprosy. Most episodes focus on major


reconstructive surgery, wherein her body parts are reattached. But unfortunately
her doctors are so overworked that her knee is put back on backwards, so she
has to walk around in a crablike fashion. This doesn’t last too long, however,
because soon her other knee becomes detached.

Previously, on The Jordan Decomposition, for a complex semisimple Lie al-


gebra we have, for any X P g, elements Xs and Xn in g such that under any
representation ρ, ρpXqs “ ρpXs q and ρpXqn “ ρpXn q. The power of this will become
apparent in representation theory.
But to set that up, we need to generalize some of the facts about the repre-
sentation theory of sl2 to other Lie algebras.
Recall for g “ slp2q we have a decomposition g “ g0 pHq ‘ g2 pHq ‘ g´2 pHq,
where gλ pHq denotes the generalized λ-eigenspace of adpHq. Here, g0 pHq “
xHy, g2 pHq “ xXy, and g´2 pHq “ xYy.

Definition 5.14. A Cartan subalgebra of a semisimple complex Lie algebra g


is an abelian subalgebra consisting of ad-diagonalizable elements, which is
maximal with respect to these properties.

Now, we could have just said diagonalizable elements, because we know


there is an intrinsic notion of diagonalizability in Lie algebras, but for g semisim-
ple ad is a faithful representation anyway.

Definition 5.15. Let h be a Cartan subalgebra of g, and let H P h. Then define


the centralizer of H in g as

cpHq “ cg pHq “ tX P g | rX, Hs “ 0u.

Lemma 5.16. Let h be a Cartan subalgebra of g. Suppose H P h such that the


dimension of cg pHq is minimal over all elements H P h. Then cg pHq “ cg phq “
ŞXPh cg pXq.

Proof. Notice that for any S P h, S is central in cg pHq if and only if cg pHq Ď cg pSq.
We shall show that if S is not central, then a linear combination of S and H has
a smaller centralizer in g, thus finding a contradiction.
First, we will construct a suitable basis for g. Start with a basis tc1 , . . . , cn u
of cg pHq X cg pSq. Since S P h and h is abelian, S commutes with H, so ad S
preserves cg pHq; moreover ad S acts diagonalizably on cg pHq because S is
ad-diagonalizable. So we can extend this to a basis for cg pHq consisting of
eigenvectors for ad S, say by adjoining tx1 , . . . , x p u.
Similarly, we can extend tci u to a basis of cg pSq of eigenvectors for ad H by
adjoining ty1 , . . . , yq u. Then

tc1 , . . . , cn , x1 , . . . , x p , y1 , . . . , yq u

is a basis of cg pHq ` cg pSq.


As ad S and ad H commute, we can complete to a basis of g, say by tw1 , . . . , wr u
of simultaneous eigenvectors for S and H.


Note that rS, x j s ‰ 0 because x j P cg pHqzcg pSq, and also rH, y j s ‰ 0. Let
rH, wi s “ θi wi and rS, wi s “ σi wi with θi , σi ‰ 0. Thus if we choose λ ‰ 0 such
that λ ‰ ´σ`{θ` for any `, w j doesn’t commute with S ` λH for any j. Moreover,
xi and yi don’t commute with S ` λH by construction, so the only things that
commute with S ` λH are linear combinations of the ci – things that commute
with both H and S. Therefore, cg pS ` λHq “ cg pSq X cg pHq.
Since S is not central in cg pHq, cg pHq Ę cg pSq, so this is a subspace of smaller
dimension. This is a contradiction, because dim cg pHq was assumed to be the
smallest possible.

Lemma 5.17. Suppose H is any element of g. Then rgλ pHq, gµ pHqs Ď gλ`µ pHq.
Additionally, if g is a semisimple Lie algebra, then the restriction of the Killing
form to g0 pHq is non-degenerate, where H satisfies the hypotheses of Lemma 5.16.

Proof. To show the first part, one proves by induction that

padpHq ´ pλ ` µqIqk prX, Ysq “ řkj“0 `kj˘ rpadpHq ´ λIq j X, padpHq ´ µIqk´ j Ys.

This just comes down to repeated application of the Jacobi identity. If k “ 1, this
is actually just the Jacobi identity.
Hence if X P gλ pHq and Y P gµ pHq, then we can take k sufficiently large (e.g.
k “ 2 dim g) such that either padpHq ´ λIq j X or padpHq ´ µIqk´ j Y vanishes, so
rX, Ys is in the generalized eigenspace of λ ` µ.
For the second statement, if Y P gλ pHq with λ ‰ 0, then ad Y maps each
generalized eigenspace of ad H into a different one. Furthermore, so does ad Y ˝ ad X for X P g0 pHq.
So this endomorphism ad Y ˝ ad X is traceless. Therefore, BpX, Yq “ 0 for such
X, Y. Therefore, g0 pHq is perpendicular to all the other weight spaces for H.
But the Killing form is non-degenerate on g, so we should be able to find
some Z such that BpX, Zq ‰ 0. But this Z must be in g0 pHq, because all other
weight spaces are perpendicular to g0 pHq. Hence, B is non-degenerate on
g0 pHq.
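The first statement of the lemma is easy to see concretely in slp3q: for diagonal H the matrices Eij are genuine eigenvectors of ad H, and the eigenvalues add under the bracket. A quick check (illustrative only, with an arbitrarily chosen H):

```python
import numpy as np

def E(i, j):
    M = np.zeros((3, 3)); M[i, j] = 1.0
    return M

a = np.array([1., 2., -3.])        # arbitrary traceless diagonal
H = np.diag(a)
comm = lambda A, B: A @ B - B @ A

# E_12 has ad(H)-eigenvalue a1 - a2, E_23 has a2 - a3;
# their bracket lands in the (a1 - a3)-eigenspace.
X, Y = E(0, 1), E(1, 2)
assert np.allclose(comm(H, X), (a[0] - a[1]) * X)
assert np.allclose(comm(H, Y), (a[1] - a[2]) * Y)
Z = comm(X, Y)                     # = E_13
assert np.allclose(comm(H, Z), (a[0] - a[2]) * Z)
```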

Theorem 5.18. Let h be a Cartan subalgebra of a semisimple Lie algebra g. Then


cg phq “ h. The Cartan subalgebra is self-centralizing.

Proof. Choose H P h such that the dimension of cg pHq is minimal over all
elements H P h. Then cg phq “ cg pHq by Lemma 5.16, so it suffices to show that
cg pHq “ h.
Since h is abelian, we have that h Ď cg pHq.
Conversely, if X P cg pHq has Jordan decomposition X “ Xs ` Xn , then X
commutes with H implies that Xs commutes with H by Proposition 4.15.
We know that Xs is semisimple, and commutes with H, so commutes with
all elements of the Cartan subalgebra h because cg phq “ cg pHq by Lemma 5.16.
But h is maximal among abelian subalgebras consisting of semisimple elements. Xs
is semisimple and commutes with everything in h, so must be in h.
Therefore Xs P h. So we are done if Xn “ 0.

Lecture 16 12 November 2015

For any Y P cg phq, we see by the above that Ys is central in cg phq, so ad Ys acts
by zero on cg phq. Therefore, ad Y “ ad Yn is nilpotent for arbitrary Y P cg phq, so
every element of ad cg phq is nilpotent. Then by the corollary to Engel’s Theorem
(Corollary 4.11), there is a basis of cg phq such that each ad Y is strictly upper
triangular for Y P cg phq. Hence,

BpX, Yq “ trpad X ad Yq “ 0

for all Y P cg phq. But the Killing form is nondegenerate on restriction to cg pHq “
g0 pHq by Lemma 5.17, so it must be that ad X “ 0. However, ad X “ ad Xn and
ad is injective because g is semisimple, so Xn “ 0.
Therefore, for any X P cg pHq, X “ Xs and Xs P h, so cg pHq Ď h.

Previously on Cartan Subalgebras, we had maximal diagonalizable abelian


subalgebras h of g. We showed that cg phq “ h and moreover h “ cg pHq for some
H P h.
Remark 5.19. It’s not clear that any two Cartan subalgebras have the same
dimension. But in fact, it’s true that they all have the same dimension, and
moreover they are all centralizers of regular semisimple elements. Additionally,
all Cartan subalgebras are conjugate under the adjoint action of a group G
whose Lie algebra is g.
Definition 5.20. We say that an element of g is regular if its centralizer dimen-
sion in g is minimal.
Remark 5.21. The definition that we gave is not the original definition of Cartan
subalgebra. Another useful one is that h is a self-normalizing nilpotent subal-
gebra, that is, h satisfies ng phq “ h. Then it is automatically maximal among
nilpotent subalgebras. But then it is unclear when Cartan subalgebras exist,
and remains unknown in many cases. Another definition is that h is the central-
izer of a maximal torus, where a torus is any abelian subalgebra consisting of
semisimple elements.
Given a representation V for a semisimple Lie algebra g, we can decompose
V into simultaneous eigenspaces for h (since ρphq is still abelian and diagonaliz-
able). Write V “ Àα Vα for these eigenspaces. For v P Vα we have Hv “ αpHqv
for some function α : h Ñ C.
We can check that α is a linear function h Ñ C, that is, α P h˚ .
Definition 5.22. The vectors of eigenvalues α are called the weights of the
representation V, and the Vα are the corresponding weight spaces.
Let’s compare this to what we were doing with slp2q. In slp2q, we had weight
spaces for just one element H, and the Cartan subalgebra h was just spanned by
H. So these α were really just the eigenvalues of H.
Example 5.23. Let’s now consider g “ slp3q, the Lie algebra of traceless 3 ˆ 3
matrices over C. It’s easy to check that for slp3q, a Cartan subalgebra is
h “ tdiagpa1 , a2 , a3 q | a1 ` a2 ` a3 “ 0u .


Any other Cartan subalgebra is given by conjugating these matrices. Let’s define
some elements Li of h˚ by Li pdiagpa1 , a2 , a3 qq “ ai .

For the standard representation slp3q œ C3 , a basis of simultaneous eigenspaces
for h is just the standard basis e1 , e2 , e3 . We have that
diagpa1 , a2 , a3 q ei “ ai ei “ Li pdiagpa1 , a2 , a3 qq ei ,
so this representation decomposes as V “ VL1 ‘ VL2 ‘ VL3 , where VLi “ xei y.

Example 5.24. The previous example is a bit simple, so let’s do something


more interesting. Consider the adjoint representation, in which we have that
rH, Eij s “ pai ´ a j qEij when H “ diagpa1 , a2 , a3 q, and so a basis of simultaneous
eigenspaces is
tEij | i ‰ ju Y tdiagp1, ´1, 0q, diagp1, 0, ´1qu.

The representation decomposes as

V “ h ‘ VL1 ´ L2 ‘ VL1 ´ L3 ‘ VL2 ´ L3 ‘ VL2 ´ L1 ‘ VL3 ´ L2 ‘ VL3 ´ L1 ,

where VLi ´ L j “ xEij y.
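The eigenvalue computation rH, Eij s “ pai ´ a j qEij in this example is easy to confirm numerically (an illustration, not part of the notes; the particular diagonal entries are an arbitrary choice):

```python
import numpy as np

def E(i, j):
    M = np.zeros((3, 3))
    M[i, j] = 1.0
    return M

a = np.array([3., -1., -2.])       # a1 + a2 + a3 = 0, so H lies in h
H = np.diag(a)

# [H, E_ij] = (a_i - a_j) E_ij : each E_ij spans the weight space V_{L_i - L_j}.
for i in range(3):
    for j in range(3):
        if i != j:
            bracket = H @ E(i, j) - E(i, j) @ H
            assert np.allclose(bracket, (a[i] - a[j]) * E(i, j))
```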

Definition 5.25. Given a semisimple Lie algebra g and a Cartan subalgebra h of


g, the Cartan decomposition of g is given by
g “ h ‘ Àα‰0 gα ,

where gα is a weight space for the adjoint action of h on g with weight α. These
nonzero weights are called roots.

Proposition 5.26. g is a semisimple Lie Algebra, h a Cartan subalgebra. Then


(1) g0 “ h;
(2) rgα , gβ s Ď gα`β ;
(3) the restriction of B to h is non-degenerate;
(4) the roots α P h˚ span h˚ ;
(5) Bpgα , gβ q ‰ 0 ðñ α “ ´β;
(6) if α is a root, then so is ´α;
(7) if X P gα , Y P g´α , then BpH, rX, Ysq “ αpHqBpX, Yq for H P h;
(8) rgα , g´α s ‰ 0.

Proof.

(1) Apply Theorem 5.18.

Lecture 17 17 November 2015

(2) This is a special case of Lemma 5.17, but it’s important enough that we
should do it again. This is what Fulton and Harris call the fundamental
calculation. Let X P gα , Y P gβ .

rH, rX, Yss “ rrH, Xs, Ys ` rX, rH, Yss


“ rαpHqX, Ys ` rX, βpHqYs
“ pα ` βqpHqrX, Ys.

(3) Second part of Lemma 5.17 together with (1).

(4) If the roots don’t span h˚ , then there is some nonzero H P h on which every
root vanishes: αpHq “ 0 for all roots α P h˚ . Since g can be decomposed in
terms of the gα , we see that rH, Xs “ 0 for all X P g, that is, H P Zpgq. But g
is semisimple, so Zpgq “ 0 and H “ 0, a contradiction.

(5) We calculate

αpHqBpX, Yq “ BprH, Xs, Yq


“ BpH, rX, Ysq
“ ´BpH, rY, Xsq
“ ´BprH, Ys, Xq “ ´βpHqBpX, Yq

so pαpHq ` βpHqqBpX, Yq “ 0, so either BpX, Yq “ 0 or α ` β “ 0.

(6) If α is a root, but ´α is not a root, then given any X P gα , we have


BpX, Yq “ 0 for all Y P g by (5), but B is non-degenerate so it must be
X “ 0.

(7) BpH, rX, Ysq “ BprH, Xs, Yq “ αpHqBpX, Yq.

(8) B is non-degenerate, so given X P gα , there is some Y such that BpX, Yq ‰ 0.


Choose H P h such that αpHq ‰ 0, and then

BpH, rX, Ysq “ αpHqBpX, Yq ‰ 0,

so rX, Ys ‰ 0.
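Part (5) can be illustrated in slp3q by computing the Killing form from ad-matrices. In the sketch below (not part of the notes) the basis of slp3q and the coordinate map are choices made for the illustration:

```python
import numpy as np

def E(i, j):
    M = np.zeros((3, 3))
    M[i, j] = 1.0
    return M

# Basis of sl(3): the six E_ij (i != j) plus diag(1,-1,0), diag(0,1,-1).
basis = [E(i, j) for i in range(3) for j in range(3) if i != j]
basis += [np.diag([1., -1., 0.]), np.diag([0., 1., -1.])]

def coords(M):
    # Coordinates of a traceless 3x3 matrix in the basis above.
    off = [M[i, j] for i in range(3) for j in range(3) if i != j]
    return np.array(off + [M[0, 0], -M[2, 2]])

def ad(A):
    return np.column_stack([coords(A @ U - U @ A) for U in basis])

def B(A, A2):
    # Killing form B(A, A2) = tr(ad A ad A2).
    return np.trace(ad(A) @ ad(A2))

# B(g_alpha, g_beta) != 0 iff alpha = -beta (Prop 5.26(5)):
assert not np.isclose(B(E(0, 1), E(1, 0)), 0)  # alpha = L1-L2, beta = L2-L1
assert np.isclose(B(E(0, 1), E(0, 2)), 0)      # beta = L1-L3 != -alpha
assert np.isclose(B(E(0, 1), np.diag([1., -1., 0.])), 0)  # beta = 0
```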

Last time we introduced the Cartan decomposition of a semisimple Lie


algebra g. This is all building up to finding a set of subalgebras of g, each
isomorphic to sl2 .

Proposition 5.27.

(1) There is Tα P h, called the coroot associated to α, such that BpTα , Hq “


αpHq for all H P h, and rX, Ys “ BpX, YqTα for X P gα , Y P g´α .

(2) αpTα q ‰ 0.

(3) rrgα , g´α s, gα s ‰ 0.


(4) If α is a root, Xα P gα , then we can find Yα P g´α such that sα “ xXα , Yα , Hα “


rXα , Yα sy – sl2 .

Proof. (1) For existence, recall that B|h is nondegenerate, and hence induces
an isomorphism h Ñ h˚ via H ÞÑ BpH, ´q. Define Tα to be the preimage
of α under this map. Now compute

BpH, BpX, YqTα q “ BpX, YqBpH, Tα q


“ αpHqBpX, Yq
“ BpH, rX, Ysq

the last line by Proposition 5.26(7). Now

BpH, BpX, YqTα ´ rX, Ysq “ 0,

and since H is arbitrary and B non-degenerate, then BpX, YqTα ´ rX, Ys “


0.

(2) Suppose αpTα q “ 0. Take X P gα , Y P g´α . Then

rTα , Xs “ αpTα qX “ 0,

rTα , Ys “ ´αpTα qY “ 0.
If X P gα , Y P g´α with BpX, Yq “ 1, then rX, Ys “ Tα by part (1).
So we have a subalgebra, s “ xX, Y, Tα y with D psq “ xTα y. The adjoint rep-
resentation ad s of this subalgebra is a solvable subalgebra of ad g Ď glpgq.
By Lie’s Theorem, ad s consists of upper triangular matrices, so ad D psq
consists of strictly upper triangular matrices. Therefore, ad Tα P ad D psq
is nilpotent. But ad Tα is also semisimple, because Tα P h. Therefore, ad Tα
is both semisimple and nilpotent and must be zero. Hence, Tα “ 0.

(3) Take X P gα , Y P g´α with BpX, Yq ‰ 0. For Z P gα , we have that

rrX, Ys, Zs “ rBpX, YqTα , Zs


“ BpX, YqαpTα qZ

This is nonzero if Z is.

(4) Take Xα P gα . Find Yα P g´α such that


BpXα , Yα q “ 2{αpTα q.

Set Hα “ p2{BpTα , Tα qq Tα “ p2{αpTα qq Tα (note αpTα q “ BpTα , Tα q).
Now check the sl2 relations. We have that

rXα , Yα s “ BpXα , Yα qTα “ Hα


rHα , Xα s “ p2{αpTα qq rTα , Xα s “ 2Xα .
Similarly,
rHα , Yα s “ ´2Yα .
So this is isomorphic to sl2 .
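For α “ L1 ´ L2 in slp3q, the triple produced by this construction is (with the normalizations above) Xα “ E12 , Yα “ E21 , Hα “ diagp1, ´1, 0q. A quick check of the sl2 relations (illustration only):

```python
import numpy as np

# The sl(2)-triple s_alpha inside sl(3) for the root alpha = L1 - L2,
# a concrete instance of Proposition 5.27(4).
Xa = np.zeros((3, 3)); Xa[0, 1] = 1.0   # E_12 in g_alpha
Ya = np.zeros((3, 3)); Ya[1, 0] = 1.0   # E_21 in g_{-alpha}
Ha = np.diag([1., -1., 0.])             # H_alpha = [X_alpha, Y_alpha]

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(Xa, Ya), Ha)       # [X, Y] = H
assert np.allclose(comm(Ha, Xa), 2 * Xa)   # [H, X] = 2X
assert np.allclose(comm(Ha, Ya), -2 * Ya)  # [H, Y] = -2Y
```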

Proposition 5.28 (“Weights Add”). Let g be semisimple with Cartan decompo-


sition g “ h ‘ Àα gα , and let V, W be g-modules with Vα , Wα the corresponding
weight spaces. Then

(1) gα Vβ Ď Vα`β

(2) Vα b Wβ Ď pV b Wqα`β

Lemma 5.29.

(1) If V is a finite-dimensional representation, then V|sα is a finite-dimensional


representation of sα .

(2) If V is a representation for g, then


ÿ
Vβ`nα
n PZ

is an sα -submodule.

(3) βpHα q P Z for all roots β and α, and Hα P h.

Proof.

(1) This follows from generic facts about restriction of representations.

(2) sα “ gα ‘ g´α ‘ rgα , g´α s. So this space is mapped to itself by sα .

(3) The eigenvalues of Hα on V|sα are integers, but each Vβ is a set of eigenvec-
tors on which Hα acts by the scalar βpHα q. Hence, βpHα q is an integer.

Proposition 5.30. The root spaces gα are 1-dimensional. The only roots
proportional to α are ˘α. In particular, twice a root is not a root.

Proof. For the first part, let α be a root. Let’s assume that dim gα ą 1. Then let
Y be a nonzero element of g´α . Then we can arrange that there is a nonzero
Xα P gα such that BpXα , Yq “ 0: take two independent elements of gα , scale
appropriately, and add them together.
Now let Yα be such that sα “ xXα , Yα , Hα y – sl2 . We have

rXα , Ys “ BpXα , YqTα “ 0.

So Y is killed by Xα , but rHα , Ys “ ´2Y, since g´α ‘ h ‘ gα is a representation of


sα , and Y P g´α . So Y is in the ´2 weight-space for Hα . But Y is killed by ad Xα .
This is incompatible with the representation theory for sl2 , because ad Xα should

Lecture 18 19 November 2015

raise Y into the 0 weight-space; in particular, we should have that rXα , Ys “ Hα ,


yet Hα ‰ 0. This is a contradiction. Hence dim gα ď 1.
To see that the only roots proportional to α are ˘α, assume that there is ζ P C
with β “ ζα and β is a root. Then 2ζ “ ζαpHα q “ βpHα q P Z by Lemma 5.29.
Exchanging α and β, we see that 2ζ ´1 P Z. The two conditions 2ζ, 2ζ ´1 P Z
limit the possibilities to ζ P t˘1{2 , ˘1, ˘2u.
We must exclude ˘1{2 and ˘2 from these possibilities. Since the negative of
a root is a root, we only need to check this for ζ “ 1{2 and ζ “ 2. Further, by
exchanging α and β, we need only check the case that ζ “ 2.
So assume β “ 2α. Define

a “ h ‘ gα ‘ g´α ‘ g2α ‘ g´2α .

This is a representation for sα – sl2 . But if X P gα and Xα P sα X gα , then


rXα , Xs “ 0 because rXα , Xs P rgα , gα s “ 0 (we know that rgα , gα s “ 0 since
gα is 1-dimensional). This contradicts the representation theory of sl2 , because
the highest weight space is g2α , yet it is not in the image of ad Xα .
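Both conclusions can be seen concretely in slp3q: for a regular diagonal element H (an arbitrary choice below, made for the illustration) the nonzero eigenvalues ai ´ a j of ad H are pairwise distinct and come in pairs tc, ´cu.

```python
# ad H for the regular element H = diag(1, 2, -3) of sl(3):
# its eigenvalue on E_ij is a_i - a_j.
a = [1., 2., -3.]
eigvals = sorted(a[i] - a[j] for i in range(3) for j in range(3) if i != j)

# The six nonzero eigenvalues are distinct, so each root space is
# 1-dimensional, and the set is symmetric: alpha a root iff -alpha is.
assert len(set(eigvals)) == 6
assert sorted(-c for c in eigvals) == eigvals
```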

Proposition 5.31 (Facts about sα ).

(1) sα “ s´α .

(2) Hα “ ´H´α .

Previously, we showed that the root spaces of a semisimple complex Lie


algebra were 1-dimensional, and that g is composed of copies of sl2 , given by

sα “ g´α ‘ rg´α , gα s ‘ gα .

We would like to expand our theory of representations of slp2q to other Lie


algebras, including weight diagrams. These will in general be difficult to draw,
but at least for slp3q we can draw them in 2-dimensions.
For slp3q, recall that we had linear functionals L1 , L2 , L3 spanning h˚ , satisfy-
ing L1 ` L2 ` L3 “ 0. So we can represent the weights in the plane CrL1 , L2 , L3 s{xL1 `
L2 ` L3 y.

Example 5.32.

(1) Let V “ C3 be the standard representation. Then the weights of V are


L1 , L2 , and L3 .
(Weight diagram: the three weights L1 , L2 , L3 form a triangle about the origin.)

(2) Let V – slp3q via the adjoint representation. Weights of V are Li ´ L j for
i ‰ j.
(Weight diagram: the six roots Li ´ L j , i ‰ j, form a hexagon about the origin.)

(3) The dual representation of the standard representation has weights ´Li ,
and therefore the diagram

(Weight diagram: ´L1 , ´L2 , ´L3 , the triangle of the standard representation reflected through the origin.)

Definition 5.33. We define the weight lattice

ΛW “ tβ P h˚ | βpHα q P Z for all roots αu.

We let wtpVq denote the set of weights in a representation V of g. And by


Lemma 5.29(3), wtpVq Ď ΛW .

There is additional symmetry arising from the subalgebras sα – slp2q. For


instance, the fact that the weight multiplicities are symmetric about the origin.
So define hyperplanes

Ωα “ tβ P h˚ | βpHα q “ 0u.

Our symmetry amounts to saying that wtpVq is closed under reflections Wα


across Ωα .
More explicitly, to see that the weights are closed under these reflections,
compute
Wα pβq “ β ´ p2βpHα q{αpHα qq α “ β ´ βpHα qα,
using αpHα q “ 2.
Take the sα -submodule Z “ řnPZ Vβ`nα . Pick v P Vβ , say; then Hα v “
βpHα qv.
In Z, we must be able to find w such that

Hα w “ ´βpHα qw.


Now w lies in Vβ`mα for some m P Z, so
´βpHα q “ βpHα q ` mαpHα q ùñ m “ ´2βpHα q{αpHα q “ ´βpHα q.
Therefore, the element v P Vβ of the β-weight-space corresponds to w P
Vβ`mα “ Vβ´βpHα qα as required. In fact, we obtain an isomorphism Vβ –
Vβ´βpHα qα .

Remark 5.34 (Notation). The integer
βpHα q “ 2βpHα q{αpHα q “ 2βpTα q{αpTα q
is often denoted by xβ, α_ y :“ βpHα q, and α_ is the coroot to α (in the case of
Lie algebras, α_ “ Tα as we defined it).
The important thing to remember is that xβ, α_ y is the number of α’s you
need to take off β to reflect β in the hyperplane perpendicular to α.
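A small computation illustrating this slogan in slp3q, with β “ L1 ´ L3 and α “ L1 ´ L2 (weights written as coefficient vectors in pL1 , L2 , L3 q; an illustration, not part of the notes):

```python
import numpy as np

# Weights as coefficient vectors in (L1, L2, L3).
L = np.eye(3)
alpha = L[0] - L[1]                 # root L1 - L2; H_alpha = diag(1, -1, 0)

def pairing(beta, i, j):
    # <beta, alpha_v> = beta(H_alpha) = b_i - b_j for alpha = L_i - L_j.
    return beta[i] - beta[j]

beta = L[0] - L[2]                  # root L1 - L3
n = pairing(beta, 0, 1)
assert n == 1                       # take one copy of alpha off beta
reflected = beta - n * alpha
assert np.allclose(reflected, L[1] - L[2])   # W_alpha(L1 - L3) = L2 - L3
```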

Definition 5.35. Given a semisimple Lie algebra g, we define the Weyl group
W as the group generated by the hyperplane reflections Wα ,

W :“ xtWα | α is a root of guy

In fact, W is a finite group. Note that W preserves wtpVq for any representa-
tion V of g.
In order to generalize the idea of a highest weight vector as we had for slp2q,
it will be convenient to pick a total ordering on ΛW . In ΛW b R, we choose a
linear map ` : ΛW b R Ñ R and declare α ą β if and only if `pαq ą `pβq. For
this to order ΛW totally, choose the gradient of ` irrational with respect to the
weight lattice, so that ` is injective on ΛW .

Example 5.36. In slp3q,


» fi
1 0 0
—0
`pαq “ α – ?1 ´ 1 0 ffi
2 fl
0 0 ´ ?1
2

In this case, `pL1 q “ 1, `pL2 q “ ?1 ´ 1, `pL3 q “ ´ ?1 . With this choice of `,


2 2

L1 ą ´L3 ą ´L2 ą 0 ą L2 ą L3 ą ´L1 .

We can also check that

L1 ´ L3 ą L1 ´ L2 ą L2 ´ L3 ą 0.
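These ordering claims can be verified directly (illustration only):

```python
import numpy as np

# l(alpha) = alpha(diag(1, 1/sqrt(2) - 1, -1/sqrt(2))), with weights
# written as coefficient vectors in (L1, L2, L3).
h = np.array([1.0, 1 / np.sqrt(2) - 1, -1 / np.sqrt(2)])
l = lambda w: w @ h

L1, L2, L3 = np.eye(3)
vals = [l(L1), l(-L3), l(-L2), 0, l(L2), l(L3), l(-L1)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # L1 > -L3 > ... > -L1
assert l(L1 - L3) > l(L1 - L2) > l(L2 - L3) > 0
```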

Definition 5.37. Given a semisimple Lie algebra g, denote by R the collection


of roots, and define R` “ tα P R | α ą 0u, and R´ “ tα P R | α ă 0u.

Lecture 19 21 November 2015

Lemma 5.38. The subalgebras sα span g as a vector space.

Proof. We clearly get all root spaces gα in this way, since gα Ď sα , so it’s just a
matter of checking that we get the whole of the Cartan. By Proposition 5.26, the
dual h˚ is spanned by the roots. Now the Killing form gives an isomorphism h Ñ h˚ under which Tα ÞÑ α. But Tα P sα for each α, as Tα is a multiple
of Hα .

Remark 5.39. The Weight Lattice is a game show derived from a Japanese con-
cept wherein participants are suspended from a large metal lattice over the
course of a week, while their families and friends must throw a sufficient quantity of food to them so that they gain enough weight to touch the ground. The
winners get a trip to the Bahamas, while the rest are humiliated for their fast
metabolism.

Recall that the Weight Lattice is

ΛW “ tλ P h˚ | λpHα q P Z for all roots αu.

Proposition 5.40. Let g be semisimple, and let V be a finite-dimensional representation for g. Then

(1) V has a highest weight, λ say, such that Vλ ‰ 0 and Vβ “ 0 for any β ą λ
using the functional `;

(2) If α is a highest weight, and β P R` is a positive root, then gβ Vα “ 0.

(3) Given any nonzero v P Vλ , where λ is a highest weight, then the subspace
W generated by all vectors Yα1 ¨ ¨ ¨ Yαk v with αi P R` and Yαi P g´αi for all
k ě 0 is an irreducible g-submodule.

(4) If V is irreducible, then W “ V.

Proof.

(1) Just take λ maximal under the ordering subject to Vλ ‰ 0. Such a weight
space exists because we assumed that V is finite dimensional.

(2) Since gβ Vα Ď Vα`β and `pα ` βq “ `pαq ` `pβq ą `pαq since β P R` , but α
was a highest weight, so Vα`β “ 0.

(3) Let’s first show that W is a submodule. By construction, W is stable


under all g´α for α P R` . Also, v is a weight vector, hence stable under h,
and since weights add, each Yα1 ¨ ¨ ¨ Yαk v is also a weight vector of weight
λ ´ α1 ´ α2 ´ ¨ ¨ ¨ ´ αk . So W is stable under h. So it remains to show that
W is stable under Xα P gα for α P R` .


Since gα v “ 0 for all α P R` , (as v is a highest weight vector), we proceed


by induction on i, showing that Vpiq “ xYαk ¨ ¨ ¨ Yα1 v | 0 ď k ď iy is stable
under each X β P gβ with β P R` . We have the result for i “ 0 above.
Assume now that this holds for i1 ă i. Then calculate

X β Yαi ¨ ¨ ¨ Yα1 v “ rX β , Yαi sYαi´1 ¨ ¨ ¨ Yα1 v ` Yαi X β Yαi´1 ¨ ¨ ¨ Yα1 v (6)

Note that
Yαi´1 ¨ ¨ ¨ Yα1 v P Vpi´1q ,
so
X β Yαi´1 ¨ ¨ ¨ Yα1 v P Vpi´1q
by induction, and therefore

Yαi X β Yαi´1 ¨ ¨ ¨ Yα1 v P Vpiq .

This deals with the second term on the right hand side of (6). To deal with
the first term on the right hand side of (6), notice that, as before,

Yαi´1 ¨ ¨ ¨ Yα1 v P Vpi´1q ,

and rX β , Yαi s is an element of a root space or in h. If rX β , Yαi s P gα for


α P R` , it follows that

rX β , Yαi sYαi´1 ¨ ¨ ¨ Yα1 v P Vpiq

by induction. Similarly if rX β , Yαi s P g´α , or if rX β , Yα s P h.


This shows that W is in fact a submodule.
To see that W is irreducible, write W “ W1 ‘ W2 , and suppose the highest
weight vector v “ v1 ` v2 with v1 P W1 , v2 P W2 . Then

Hv “ λpHqv “ λpHqv1 ` λpHqv2 ,

so under projection π : W Ñ W1 , we have that v1 is also a highest weight


vector for W1 , and similarly v2 is a highest weight vector for W2 . So if
v1 , v2 ‰ 0, then v1 and v2 span a subspace of Wλ of dimension larger than
1. This is a contradiction, since Wλ is 1-dimensional, spanned by v.

(4) Since W is a non-zero submodule of an irreducible module, then W “ V.

Proposition 5.41. Irreducible g-modules are determined up to isomorphism by their highest weight: if V and W are two irreducible representations with highest weight λ, then V – W.

Proof. Let v, w be highest weight vectors for V and W, respectively. Let U be the submodule of V ‘ W generated by pv, wq. By Proposition 5.40(3), U is irreducible. Since the projections U Ñ V and U Ñ W are nonzero, they must be isomorphisms, whence V – U – W.


What possibilities are there for highest weights?


Definition 5.42. Let E “ RΛW . Then for α a root, define

E`α “ tβ P E | βpHα q ą 0u,
E´α “ tβ P E | βpHα q ă 0u,

so that E “ Ωα \ E`α \ E´α . Recall that Ωα “ tβ P h˚ | βpHα q “ 0u.


Now number the positive roots by α1 , . . . , αn and define

Wp˘,˘,...,˘q “ E˘α1 X E˘α2 X ¨ ¨ ¨ X E˘αn .

In particular,

Rą0 ΛW` “ Wp`,`,...,`q

is called the fundamental Weyl chamber, and

ΛW` “ ΛW X W p`,`,...,`q

is the set of dominant weights, where the bar denotes topological closure (these things lie in some Rn ).
Proposition 5.43. If λ is a highest weight for some finite-dimensional representation, then λ P ΛW` .
Proof. Suppose a weight β lies in E´α for some α P R` . Then Wα pβq is also a weight, and

`pWα pβqq “ `pβ ´ βpHα qαq “ `pβq ´ βpHα q`pαq.

Note that βpHα q ă 0 and `pαq ą 0, so `pWα pβqq ą `pβq: there is a higher weight in the representation. Hence the highest weight λ lies in no E´α , i.e. λ P ΛW` .

Example 5.44. Let’s work this out in detail for slp3q. The roots are α “ L1 ´ L2 ,
β “ L2 ´ L3 , α ` β “ L1 ´ L3 , ´α, ´β, ´α ´ β. We depicted these before.

[Figure: the six roots of slp3q, the hyperplanes Ωα , Ω β , Ωα`β , and the weight lattice ΛW .]

Lecture 20 24 November 2015

In this picture, E`α is the half-plane bounded by Ωα containing α, and E´α is the half-plane bounded by Ωα that contains ´α. Similarly for E˘β and E˘α`β . The fundamental Weyl chamber is the region bounded by Ω β and Ωα containing α ` β.
The dominant weights are the λ P ΛW such that λpHµ q ě 0 for all µ P R` . We claim that ΛW` is generated by L1 and ´L3 . Clearly L1 and ´L3 span h˚ , and for any dominant weight λ, we require that

λpHα q P Zě0 , λpHβ q P Zě0 , λpHα`β q P Zě0 .

The point is that once we know λ on Hα and Hβ , then we know it on Hα`β . If λpHα q “ a and λpHβ q “ b, then λ “ aL1 ´ bL3 . To check this: L1 pHα q “ 1, ´L3 pHα q “ 0, L1 pHβ q “ 0, ´L3 pHβ q “ 1.
So any irreducible module is isomorphic to Γa,b for some a, b P Zě0 , where Γa,b has highest weight aL1 ´ bL3 . Moreover, all such must exist.
Γ1,0 – V is the standard representation with highest weight L1 , and Γ0,1 is its dual V ˚ . Moreover, Γa,b must be contained in the tensor product pΓ1,0 qba b pΓ0,1 qbb .
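The dual-basis property of L1 and ´L3 against Hα , Hβ can be confirmed directly (hypothetical Python sketch; the Cartan elements are encoded by their diagonals, my own convention):

```python
import numpy as np

# Diagonals of the Cartan elements of sl(3)
H_alpha = np.array([1, -1, 0])    # H_alpha = E11 - E22
H_beta = np.array([0, 1, -1])     # H_beta = E22 - E33
L1 = np.array([1, 0, 0])
L3 = np.array([0, 0, 1])

# The four pairings quoted in the text:
assert ((L1 @ H_alpha, (-L3) @ H_alpha),
        (L1 @ H_beta, (-L3) @ H_beta)) == ((1, 0), (0, 1))

# Hence lambda = a L1 - b L3 pairs to (a, b) against (H_alpha, H_beta):
a, b = 3, 1                        # e.g. the highest weight of Gamma_{3,1}
lam = a * L1 - b * L3
assert lam @ H_alpha == a and lam @ H_beta == b
```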

Remark 5.45. Last time we were talking about Weyl Chambers, which is a 1990’s
adult entertainment film by BDSM specialists ”Blood and Chains.” For reasons
of decency I can’t go into the details.

Example 5.46. Let’s construct Γ3,1 . The representation is generated by a highest weight vector of weight λ “ 3L1 ´ L3 .


[Figure: the weight λ “ 3L1 ´ L3 plotted against the hyperplanes Ωα , Ω β , Ωα`β .]

The weights are stable under the reflection in the hyperplanes Ωαi , so we
reflect in these hyperplanes to find other roots.

[Figure: the W-orbit of λ, obtained by reflecting in the hyperplanes Ωαi .]

Once we’ve done so, we know that a weight µ and its reflection over any
hyperplane Ωαi bound a string of weights for a copy of slp2q, so we should fill

in all the steps in-between these weights as the weight spaces of that slp2q
representation.

[Figure: the border of the weight diagram, filled in using slp2q-strings between reflected weights.]

This forms a border of the weight-space of the representation, and we can


use the same rule again to fill in the dots inside the borders; along any line
parallel to the roots, we have another slp2q representation with highest and
lowest weights on the border of the weight space, so we fill in all the even steps
in-between.


[Figure: the full set of weights of Γ3,1 .]

We don’t yet know the multiplicities of the weights (= dimensions of the


weight spaces), but we can use the following rule.

Fact 5.47 (Rule of Thumb). Multiplicities of weights of irreducible slp3q representations increase by one with each step in towards the origin from a hexagonal border, and remain stable moving in towards the origin from a triangular border.

We draw concentric circles for each multiplicity past the first. So the repre-
sentation Γ3,1 has the weight diagram as below.


[Figure: the weight diagram of Γ3,1 with multiplicities indicated by concentric circles.]

Example 5.48. Suppose V is the standard representation V – Γ1,0 .
Then Sym2 pVq has as weights the sums Li ` L j for i ď j (pairs of weights of V, with repetition allowed), and we see that this is Γ2,0 when we compare the weight diagrams for Sym2 pVq and Γ2,0 . Hence, Sym2 pVq – Γ2,0 is irreducible, as in the weight diagram below.


By adding each weight of V ˚ to each weight of Γ2,0 , we get the weights of
Sym2 pVq b V ˚ . The highest of these (under `) is 2L1 ´ L3 , so Γ2,1 occurs as a
summand. Removing the weights of Γ2,1 , we are left with the weight diagram
of Γ1,0 “ V. Therefore, Sym2 pVq b V ˚ – Γ2,1 ‘ Γ1,0 . (Dimension check: 6 ¨ 3 “ 18 “ 15 ` 3.)
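As an independent check of this decomposition, one can compare characters numerically using Schur polynomials via the bialternant formula (a tool not developed in these notes, so the block below is a sketch under that assumption): on the SLp3q torus, χpSym2 Vq ¨ χpV ˚ q agrees with χpΓ2,1 q ` χpΓ1,0 q, since V ˚ – Λ2 V corresponds to the partition p1, 1q.

```python
import numpy as np

def schur(part, x):
    """Schur polynomial via the bialternant (ratio-of-determinants) formula."""
    n = len(x)
    lam = list(part) + [0] * (n - len(part))
    num = np.linalg.det(np.array([[xi ** (lam[j] + n - 1 - j) for j in range(n)]
                                  for xi in x]))
    den = np.linalg.det(np.array([[xi ** (n - 1 - j) for j in range(n)]
                                  for xi in x]))
    return num / den

x = np.array([1.3, 0.7, 1 / (1.3 * 0.7)])       # a point with x1*x2*x3 = 1
lhs = schur([2], x) * schur([1, 1], x)          # char(Sym^2 V) * char(V*)
rhs = schur([3, 1], x) + schur([2, 1, 1], x)    # char(Gamma_{2,1}) + char(Gamma_{1,0})
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)
```

(With x1 x2 x3 “ 1 the partition p2, 1, 1q gives the character of V “ Γ1,0 .)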

Remark 5.49. Clearly you can see this is a wonderful source of exam questions
(hint hint). In fact, this has many applications in physics, where decomposing
the tensor product of two representations into irreducible direct summands
corresponds to what comes out of the collision of two particles, so it’s not
surprising that there are many algorithms and formulas for this kind of thing.

Let’s summarize what we now know from our investigation of representations of slp3q. Let g be a semisimple complex Lie algebra, h its Cartan subalgebra, R its set of roots, ΛW the weight lattice, W the Weyl group with accompanying reflecting hyperplanes Ωα . Pick a linear functional ` with irrational slope with respect to the weight lattice, giving the dominant weights ΛW` and a decomposition R “ R` \ R´ .

Definition 5.50. Let α be a positive root which is not expressible as the sum of
two positive roots. Then we say α is a simple root.

Definition 5.51. The rank of g is the dimension of the Cartan subalgebra h.

Fact 5.52.

(1) Over Z, the simple roots generate all roots: if S is the set of simple
roots, then R Ď ZS.

(2) The number of simple roots is equal to the rank of g.

(3) Any root is expressible as ω ¨ α for ω P W, α a simple root.

(4) The Weyl group is generated by reflections Wα for all simple roots α.

(5) The Weyl group acts simply transitively on the set of decompositions of R
into positive and negative parts. (The action has only one orbit, and if the
action of any element σ has a fixed point, then σ is the identity of W).

(6) The elements Hα such that α is a simple root generate the lattice

ZtHα | α P Ru Ď h.

Lecture 22 28 November 2015

(7) Define the fundamental dominant weights ωα for each simple root α by
the property that ωα pHβ q “ δαβ for α, β simple roots. They generate the
weight lattice ΛW .

(8) The set Zě0 tωα u is precisely the set of dominant weights.

(9) Every representation has a dominant highest weight, and there exists one
and only one representation with this highest weight up to isomorphism.

(10) The set of weights of a representation is stable under the Weyl group,
and moreover we can use slp2q-theory to establish the set of weights (but
maybe not the multiplicities) in a given representation.

(11) The multiplicities are not obvious.

The fact that the multiplicities are not obvious is the motivation for the next
section.

5.1 Multiplicity Formulae


Let’s define an inner product on h˚ via pα, βq “ BpTα , Tβ q. Recall that Tα P h is
dual to α P h˚ , with BpTα , Hq “ αpHq for any H P h.

Proposition 5.53 (Freudenthal’s Multiplicity Formula). Given a semisimple Lie algebra g and an irreducible Γλ with highest weight λ, then

cpµq nµ pΓλ q “ 2 řαPR` řkě1 pµ ` kα, αq nµ`kα pΓλ q

where nµ pΓλ q “ dimpΓλ qµ and cpµq is defined by

cpµq “ }λ ` ρ}2 ´ }µ ` ρ}2 , with ρ “ p1{2q řαPR` α.
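As a sanity check, the following Python sketch (my own, not from the lectures) runs the formula for the adjoint representation Γ1,1 of slp3q, whose weights strictly above µ “ 0 are the three positive roots with multiplicity 1, and recovers the known multiplicity 2 of the zero weight.

```python
import numpy as np

# Roots of sl(3) as vectors in the plane x + y + z = 0 inside R^3;
# the standard dot product is proportional to the Killing form here.
alpha = np.array([1, -1, 0])      # L1 - L2
beta = np.array([0, 1, -1])       # L2 - L3
Rplus = [alpha, beta, alpha + beta]
rho = sum(Rplus) / 2              # half-sum of positive roots = (1, 0, -1)

lam = alpha + beta                # highest weight of the adjoint rep Gamma_{1,1}

# Known multiplicities of the weights lying strictly above mu = 0:
n = {(1, -1, 0): 1, (0, 1, -1): 1, (1, 0, -1): 1}

mu = np.array([0, 0, 0])
c = np.dot(lam + rho, lam + rho) - np.dot(mu + rho, mu + rho)   # c(0) = 8 - 2 = 6
rhs = 0.0
for a in Rplus:
    k = 1
    while tuple(mu + k * a) in n:
        rhs += 2 * np.dot(mu + k * a, a) * n[tuple(mu + k * a)]
        k += 1
print(rhs / c)   # n_0 = 2, the multiplicity of the zero weight
```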

Remark 5.54. Freudenthal’s Multiplicity Formula is a sequel to Complete Reducibil-


ity, wherein Rick Moranis plays the mad scientist Freudenthal, who invents
a chemical formula that duplicates DNA. Unfortunately, the bad guy (played
by Bill Murray) gets a hold of this formula and takes a shower in it, making
thousands of Bill Murrays. He then manages to infiltrate the Pentagon and get
the nuclear codes, and the planet is destroyed within hours.

Remark 5.55. Today we’ll be talking about Root Systems, which is an upcoming
indie film about the fallout from the Fukushima nuclear reactor. Some ginger
from near the plant mutates and starts to grow out of control. And since
it’s a major component of Japanese cuisine, it wants to take revenge on the
people who’ve been eating it for so long. At first it just pops out of the ground


and squirts hot ginger at people’s faces, but it has more diabolical intentions.
Eventually, it finds an underground internet cable and starts sending messages
to the world’s leaders. To show that it means business, it deletes all cat videos
from the internet. To try and stop the mutant ginger, some samurai warriors,
the X-men, Batman and Captain America are sent to destroy it. But they’re
ultimately unsuccessful, and the ginger takes over the world.

6 Classification of Complex Semisimple Lie Algebras

In this section, we can basically forget everything we’ve talked about so far and distill the information about Lie algebras into a few basic facts that will be all that we need to classify the Lie algebras.

Definition 6.1. Let E “ Rn for some n P N, equipped with an inner product p , q. Then a root system on E is a finite set R such that

(R1) R spans E as a vector space.

(R2) α P R ðñ ´α P R, and kα R R for all k ‰ ˘1.

(R3) For each α P R, the reflection Wα in the hyperplane αK perpendicular to α maps R to R.

(R4) For roots α, β P R, the real number n βα “ 2pβ, αq{pα, αq is an integer.

Exercise 6.2. Show that the root system of a Lie algebra forms an abstract root
system.

Remark 6.3. Note that for g semisimple, n βα would be βpHα q, and

Wα pβq “ β ´ βpHα qα “ β ´ n βα α.

What are the possibilities for n βα ? It turns out there are very few. We have

n βα “ 2 p}β}{}α}q cos θ,

where θ is the angle between α and β. Hence nαβ n βα “ 4 cos2 θ P Z. Since cos2 θ ď 1, we see that

n βα nαβ P t0, 1, 2, 3, 4u.

If β ‰ ˘α then cos2 θ ă 1, so each of n βα , nαβ lies between ´3 and 3. Furthermore, n βα has the same sign as nαβ , and if |nαβ |, |n βα | ą 1, then |nαβ | “ |n βα | “ 2, and so cos2 θ “ 1 ùñ α “ ˘β.


We may therefore assume that nαβ P t0, ˘1u. The options for n βα are:

n βα :     3,    2,    1,    0,   ´1,    ´2,    ´3
nαβ :      1,    1,    1,    0,   ´1,    ´1,    ´1
cos θ :  ?3{2, ?2{2,  1{2,   0,  ´1{2, ´?2{2, ´?3{2        (7)
θ :       π{6,  π{4,  π{3,  π{2, 2π{3,  3π{4,  5π{6
}β}{}α} :  ?3,   ?2,    1,    ˚,    1,    ?2,    ?3
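The rows of Table 7 satisfy two identities that make them easy to check mechanically: nαβ n βα “ 4 cos2 θ, and n βα {nαβ “ p}β}{}α}q2 (a small Python sketch, not from the lectures):

```python
import math

# rows: (n_beta_alpha, n_alpha_beta, theta, length ratio |beta|/|alpha|)
rows = [
    (3, 1, math.pi / 6, math.sqrt(3)),
    (2, 1, math.pi / 4, math.sqrt(2)),
    (1, 1, math.pi / 3, 1.0),
    (0, 0, math.pi / 2, None),          # ratio unconstrained when orthogonal
    (-1, -1, 2 * math.pi / 3, 1.0),
    (-2, -1, 3 * math.pi / 4, math.sqrt(2)),
    (-3, -1, 5 * math.pi / 6, math.sqrt(3)),
]
for nba, nab, theta, ratio in rows:
    # n_ba * n_ab = 4 cos^2(theta)
    assert abs(nba * nab - 4 * math.cos(theta) ** 2) < 1e-9
    if ratio is not None:
        # n_ba / n_ab = (|beta| / |alpha|)^2
        assert abs(nba / nab - ratio ** 2) < 1e-9
print("table consistent")
```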

Some consequences of this table are the following facts. This and the next
proposition should settle any outstanding proofs owed for Fact 5.52.

Fact 6.4.

(1) In an α-string through β, there are at most 4 elements.

(2) If α and β are roots, α ‰ ˘β, then pβ, αq ą 0 implies that α ´ β is a root. If
pβ, αq ă 0, then α ` β is a root. If pβ, αq “ 0 then β ` α and α ´ β are either
both roots or both non-roots.

(3) As with RΛW , we can find ` : E Ñ R irrational with respect to ZR such


that it separates R into R “ R` \ R´ . With respect to this ordering, we
say that a positive root α is simple if α ‰ β ` γ for any β, γ P R` .

(4) If α, β are distinct simple roots, then α ´ β and β ´ α are not roots.

(5) If α, β are distinct simple roots, then the angle between them is not acute:
pα, βq ď 0.

(6) The set of simple roots S is linearly independent.

(7) Every positive root is a nonnegative integral combination of the simple


roots.

Proof. (1) Let α, β be roots, with α ‰ ˘β. Then consider an α-string through β
given by tβ ´ pα, β ´ pp ` 1qα, . . . , β ` qαu. We have

Wα pβ ` qαq “ Wα pβq ` qWα pαq.

The left hand side is β ´ pα, and the right hand side is β ´ n βα α ´ qα, so

β ´ pα “ β ´ n βα α ´ qα.

So p ´ q “ n βα , so |p ´ q| ď 3. So there are at most 4 elements in this


string.
Relabelling, we may assume p “ 0, so q is an integer no more than 3.


(2) To see this, we inspect Table 7: either nαβ or n βα is ˘1; without loss, say n βα “ ˘1. Then Wα pβq “ β ´ n βα α P R, which is β ´ α or β ` α according to the sign of pβ, αq. The remaining assertion follows since, by Fact 6.4(1), all elements in the interior of the α-string through β are also roots.

(3) Same thing we did before.

(4) If either α ´ β or β ´ α is a root, then both are. So one of them is a positive


root. If say α ´ β was a positive root, then α “ β ` pα ´ βq is not a simple
root.

(5) If not, then either α ´ β or β ´ α is a root by inspection of Table 7. This is


in contradiction to Fact 6.4(4).
ř
(6) Assume that ři ni αi “ 0, and renumber so that n1 , . . . , nk ě 0 and nk`1 , . . . , nm ď 0. Then let

v “ řki“1 ni αi “ ´ řmj“k`1 n j α j .

Now consider the inner product of v with itself:

0 ď pv, vq “ ´ řki“1 řmj“k`1 ni n j pαi , α j q.

Note that ni ě 0, n j ď 0, and pαi , α j q ď 0 by Fact 6.4(5), so the right hand side is ď 0. So it must be that v “ 0. But then

0 “ `p0q “ `pvq “ řki“1 ni `pαi q,

with each ni ě 0 and `pαi q ą 0. So the ni are all zero for 1 ď i ď k, and similarly for k ` 1 ď j ď m.

(7) Assume not. Then there is β P R` with `pβq minimal such that β R ZS. But
since β is not simple, β “ β 1 ` β 2 for some β 1 , β 2 P R` , and `pβ 1 q, `pβ 2 q ă
`pβq. But by minimality of β, β 1 and β 2 are expressible as sums of simple
roots so also is β.

Now recall that the Weyl group W “ xWα | α P Ry injects into S|R| so in
particular, W is finite.

Lemma 6.5. Let W0 “ xWα | α P Sy, where S is the set of simple roots of a root
system R. Then every positive root is sent by elements of W0 to a simple root,
and furthermore W “ W0 .

Lecture 23 1 December 2015

Proof. Let α P R` . To prove that α is sent by elements of W0 to a member of S, define the height of α by htpαq “ ři ni , where α “ ři ni αi for αi P S.
First claim that there is γ P S such that pγ, αq ą 0. If not, then

pα, αq “ ři ni pα, αi q ď 0,

because ni ě 0 and pα, αi q ď 0 for all i. This is a contradiction, because pα, αq ą 0.
So htpWγ pαqq “ htpα ´ nαγ γq ă htpαq, and we’re done by induction.
Finally, to show that W0 “ W, it’s an exercise to check that for g P W, we
have gWα g´1 “ Wgα . It suffices to show that Wα for α P R` is in W0 , since W is
generated by such Wα . Let σ P W0 be the element sending the simple root αi to
α, say σαi “ α. Then
Wα “ Wσαi “ σWαi σ´1 P W0 .
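For slp3q the lemma can be watched in action: the simple reflections generate a group of order 6 (the symmetric group S3 permuting L1 , L2 , L3 ), and every positive root is W0 -conjugate to a simple one. The following is an illustrative Python sketch of my own (the permutation-matrix encoding of reflections is an assumption, valid because Wα for α “ Li ´ L j swaps the i-th and j-th coordinates):

```python
import numpy as np

def refl(i, j):
    """W_alpha for alpha = L_i - L_j acts on R^3 by swapping coordinates i, j."""
    m = np.eye(3, dtype=int)
    m[[i, j]] = m[[j, i]]
    return m

s_a, s_b = refl(0, 1), refl(1, 2)    # simple reflections for alpha, beta

# generate W0 = <s_a, s_b> by closing under multiplication
elems = {tuple(map(tuple, np.eye(3, dtype=int)))}
changed = True
while changed:
    changed = False
    for g in list(elems):
        for s in (s_a, s_b):
            h = tuple(map(tuple, s @ np.array(g)))
            if h not in elems:
                elems.add(h)
                changed = True
print(len(elems))   # 6 = |S_3|

# every positive root is conjugate under W0 to a simple root
positive = {(1, -1, 0), (0, 1, -1), (1, 0, -1)}
simple = {(1, -1, 0), (0, 1, -1)}
for r in positive:
    orbit = {tuple(np.array(g) @ np.array(r)) for g in elems}
    assert orbit & simple
```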

Remark 6.6. Recall:

• If α, β are distinct simple roots then pα, βq ď 0. In fact, the angle between them is π{2, 2π{3, 3π{4 or 5π{6.

• The set of simple roots is linearly independent and every positive root is a
nonnegative integral combination of simple roots.

• Every root is conjugate to a simple root under W.

• W “ W0 “ xWα | α P Sy.

To classify root systems (and thereby semisimple Lie algebras), we will


classify the Dynkin diagrams.

Definition 6.7. A Dynkin diagram consists of a collection of nodes, one for


each simple root, and some lines between them indicating the angle between
them. Furthermore, we put an arrow to indicate which of the two roots is longer.

If there are just two nodes, the number of edges between them records the angle between the corresponding simple roots: no edge for θ “ π{2, a single edge for θ “ 2π{3, a double edge for θ “ 3π{4, and a triple edge for θ “ 5π{6.

In the simplest cases,


Diagram | Lie Algebra | Root System
(single node) | slp2q | A1
(two disconnected nodes) | slp2q ˆ slp2q | A1 ˆ A1
(single edge) | slp3q | A2
(double edge) | spp4q | B2 “ C2
(triple edge) | G2 | G2

(The diagram and root-system pictures are omitted in these notes.)

Definition 6.8. A root system is irreducible if its Dynkin diagram is connected.

Theorem 6.9 (Classification Theorem). The Dynkin diagrams of irreducible root


systems are as follows:


Type Diagram

An (n ě 1)

Bn pn ě 2q

Cn pn ě 3q

Dn pn ě 4q

E6

E7

E8

F4

G2
The families An , Bn , Cn , Dn are the Lie algebras of classical type. The others,
E6 , E7 , E8 , F4 and G2 , are of exceptional type.

Type Lie Algebra Simple Roots


An slpn ` 1q L 1 ´ L 2 , . . . , L n ´ L n `1
Bn sop2n ` 1q L 1 ´ L 2 , L 2 ´ L 3 , . . . , L n ´1 ´ L n , L n
Cn spp2nq L1 ´ L2 , L2 ´ L3 , . . . , Ln´1 ´ Ln , 2Ln
Dn sop2nq L1 ´ L2 , L2 ´ L3 , . . . , Ln´1 ´ Ln , Ln´1 ` Ln
To prove the Classification Theorem, we will consider Coxeter diagrams.
Definition 6.10. Define a Coxeter diagram to be a Dynkin diagram without the
arrows (so in effect, we assume all root lengths are 1).
Proof of Theorem 6.9. Let ei , i “ 1, . . . , n, denote the unit vectors along the simple roots. Coming from the Coxeter diagram, we know that pei , ei q “ 1 and, if i ‰ j, pei , e j q “ 0, ´1{2, ´?2{2, or ´?3{2, according as the number of edges between them is 0, 1, 2, 3, respectively. Hence, 4pei , e j q2 is the number of edges between ei and e j .


Now we classify the admissible diagrams, that is, the possible Coxeter diagrams coming from valid Dynkin diagrams. This is done in the following steps:

(1) Clearly, any (connected) subdiagram of an admissible diagram is admissible. So we consider only connected diagrams. If a diagram is not connected, its connected components correspond to simple Lie algebras.

(2) There are at most n ´ 1 pairs of connected vertices. In particular, there are no loops.

Proof. If ei and e j are connected, then 2pei , e j q ď ´1. Hence

0 ă při ei , ři ei q “ ři pei , ei q ` 2 řiăj pei , e j q ď n ´ p# of edgesq.
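For example, a triangle (the smallest loop) is already impossible: for three pairwise-joined unit vectors, }e1 ` e2 ` e3 }2 “ 3 ´ 3 “ 0, so the vectors could not be linearly independent. A Python sketch of my own, working from the Gram matrix:

```python
import numpy as np

# Gram matrix of three unit vectors pairwise joined by a single edge (cos = -1/2)
G = np.array([[1.0, -0.5, -0.5],
              [-0.5, 1.0, -0.5],
              [-0.5, -0.5, 1.0]])
v = np.ones(3)
print(v @ G @ v)                  # |e1 + e2 + e3|^2 = 3 - (# of edges) = 0
assert abs(v @ G @ v) < 1e-12     # not > 0, so no such independent e_i exist
```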

(3) No node has more than three edges coming into it.

Proof. Label the central node e1 , and suppose e2 , . . . , en are connected to it. By (2), there are no loops, so none of e2 , . . . , en are connected to each other; hence te2 , . . . , en u is orthonormal. By Gram–Schmidt, extend to an orthonormal basis by adding some en`1 with

spante2 , . . . , en`1 u “ spante1 , . . . , en u.

We must have pen`1 , e1 q ‰ 0, since e1 does not lie in spante2 , . . . , en u. Write

e1 “ řn`1 i“2 pe1 , ei qei .

Then

1 “ pe1 , e1 q “ řn`1 i“2 pe1 , ei q2 ,

and since pe1 , en`1 q2 ą 0, we get řn i“2 pe1 , ei q2 ă 1. As 4pe1 , ei q2 is the number of edges between e1 and ei ,

p# of edges out of e1 q “ 4 řn i“2 pe1 , ei q2 ă 4,

and the result follows from the admissible values for pei , e j q.
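A Gram-matrix check illustrates the bound: with four single edges at a central node, the five vectors would already be linearly dependent (their Gram determinant vanishes). A Python sketch, not from the notes:

```python
import numpy as np

# Star: a central node joined to 4 others by single edges (cos = -1/2)
G = np.eye(5)
G[0, 1:] = -0.5
G[1:, 0] = -0.5
print(np.linalg.det(G))            # 0: the five vectors would be dependent
assert abs(np.linalg.det(G)) < 1e-9
```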

(4) (Shrinking Lemma) In any admissible diagram, we can shrink any string
of the form ˝ ´ ˝ ´ ¨ ¨ ¨ ´ ˝ down to one node to get another admissible
diagram.


Proof. Let e1 , . . . , er be the vectors along the string and replace them with e “ e1 ` ¨ ¨ ¨ ` er . Then

pe, eq “ ři pei , ei q ` 2 ppe1 , e2 q ` pe2 , e3 q ` ¨ ¨ ¨ ` per´1 , er qq “ r ´ pr ´ 1q “ 1.

And for each ek in the diagram but not in the string being shrunk, pe, ek q satisfies the desired conditions, since pe, ek q is either pe1 , ek q (when ek is a neighbor of e1 ) or per , ek q (when ek is a neighbor of er ); ek cannot neighbor two nodes of the string, for that would create a loop.

(5) Immediately, from (3) and (4), we now see that G2 is the only connected Dynkin diagram with a triple bond. Moreover, we cannot have a connected diagram containing two double edges, since shrinking the string between them would yield a node with four edges, which is disallowed by (3). For the same reason we can’t have a double edge together with a node of degree three, and we can also exclude two nodes of degree three, again by (3) after shrinking. (The diagrams drawn in lecture are omitted here.)

(6) There are a few other things to rule out to complete the classification, for instance the five-node string e1 ´ e2 ñ e3 ´ e4 ´ e5 with a double edge between e2 and e3 :

(8)

Proof. To rule out (8), let v “ e1 ` 2e2 and w “ 3e3 ` 2e4 ` e5 . Then we calculate that

pv, wq2 “ 18 “ }v}2 }w}2 .

So if θ is the angle between v and w, then cos2 θ “ 1, so v and w are linearly dependent. This is a contradiction because the ei are supposed to be linearly independent.
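The Cauchy–Schwarz equality in this proof can be verified numerically from the Gram matrix of the diagram (Python sketch; the Gram entries just encode the bonds of (8) as described above):

```python
import math
import numpy as np

# Gram matrix for the string e1 - e2 = e3 - e4 - e5 (double bond between e2, e3)
G = np.eye(5)
pairs = {(0, 1): -0.5, (1, 2): -math.sqrt(2) / 2, (2, 3): -0.5, (3, 4): -0.5}
for (i, j), c in pairs.items():
    G[i, j] = G[j, i] = c

v = np.array([1.0, 2, 0, 0, 0])     # v = e1 + 2 e2
w = np.array([0.0, 0, 3, 2, 1])     # w = 3 e3 + 2 e4 + e5
vw, vv, ww = v @ G @ w, v @ G @ v, w @ G @ w
print(vw ** 2, vv * ww)             # both 18: Cauchy-Schwarz is an equality,
assert abs(vw ** 2 - vv * ww) < 1e-9   # so v and w would be linearly dependent
```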


(7) Similar considerations rule out the remaining candidate diagrams (omitted in these notes), completing the classification.