Lecture Notes On Lie Algebras and Lie Groups: Luiz Agostinho Ferreira
August - 2011
Contents

3 Representation theory of Lie algebras
3.1 Introduction
3.2 The notion of weights
CHAPTER 1. ELEMENTS OF GROUP THEORY

Example 1.2 Consider the set of all human beings, living and dead, and define a binary operation as follows: for any two persons take their latest common forefather. For the case of two brothers this would be their father; for two cousins their common grandfather; for a mother and her son, the mother's father, etc. Whether this set is closed under such an operation depends, of course, on how we understand everything to have started.
Example 1.3 Take a rectangular box and imagine three mutually orthogonal axes, x, y and z, passing through the center of the box, each of them orthogonal to two sides of the box. Consider the set of three rotations:
There are some redundancies in this definition, and the axioms c) and d) could, in fact, be replaced by the weaker ones:
c') There exists an element e in G, called a left identity, such that eg = g for every g ∈ G.
1.1. THE CONCEPT OF GROUP 7
products.
The definition of abstract group given above is not the only possible one. There is an alternative definition that does not require inverse and identity elements. We could define a group as follows:
Definition 1.2 (alternative) Take the definition of group given above (assuming it is a non-empty set) and replace axioms c) and d) by: “For any given elements g1, g2 ∈ G there exists a unique g satisfying g1 g = g2 and also a unique g′ satisfying g′ g1 = g2”.
This definition is equivalent to the previous one since it implies that, given any two elements g1 and g2, there must exist unique elements eL1 and eL2 in G such that eL1 g1 = g1 and eL2 g2 = g2. But it also implies that there exists a unique g such that g1 g = g2. Therefore, using associativity, we get

eL1 g2 = eL1 (g1 g) = (eL1 g1) g = g1 g = g2

From the uniqueness of eL2 we conclude that eL1 = eL2. Thus this alternative definition implies the existence of a unique left identity element eL. On the other hand it also implies that for every g ∈ G there exists a unique gL⁻¹ such that gL⁻¹ g = eL. Consequently axioms c') and d') follow from the alternative axiom above.
Example 1.5 The set of real numbers is a group under addition, but it is not under multiplication, division, or subtraction: the last two operations are not associative, and the element zero has no inverse under multiplication. The natural numbers under addition are not a group since there are no inverse elements.
Example 1.7 The set of rotations of a box discussed in example 1.3 is a group
under composition of rotations when the identity operation I is added to the
set. In fact the set of all rotations of a body in 3 dimensions (or in any number
of dimensions) is a group under the composition of rotations. This is called
the rotation group and is denoted SO(3).
Example 1.8 The set of all human beings, living and dead, with the operation defined in example 1.2 is not a group: there is no identity element, there are no inverse elements, and the operation is not associative.
(Figure: diagrammatic representation of permutations and of their composition, from example 1.9; the diagram shown maps (1, 2, 3) to (2, 3, 1).)
Example 1.10 The N-th roots of unity form a group under multiplication. These roots are exp(i2πm/N) with m = 0, 1, 2, ..., N−1. The identity element is 1 (m = 0) and the inverse of exp(i2πm/N) is exp(i2π(N−m)/N). This group is called the cyclic group of order N and is denoted by ZN.
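The closure and inverse formulas above can be checked numerically; a minimal sketch in Python (the helper name roots_of_unity is ours, not the notes'):

```python
import cmath

# The N-th roots of unity exp(i 2 pi m / N), m = 0 .. N-1.
def roots_of_unity(N):
    return [cmath.exp(2j * cmath.pi * m / N) for m in range(N)]

N = 6
Z = roots_of_unity(N)

# Closure: the product of the m-th and k-th roots is the (m+k mod N)-th root.
for m in range(N):
    for k in range(N):
        assert abs(Z[m] * Z[k] - Z[(m + k) % N]) < 1e-12

# The inverse of exp(i 2 pi m / N) is exp(i 2 pi (N - m) / N).
for m in range(1, N):
    assert abs(Z[m] * Z[N - m] - 1) < 1e-12
```

The index arithmetic mod N is exactly why this group is isomorphic to addition of integers mod N.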
We say two elements, g1 and g2 , of a group commute with each other if their
product is independent of the order, i.e., if g1 g2 = g2 g1 . If all elements of a
given group commute with one another then we say that this group is abelian.
The real numbers under addition or multiplication (without zero) form an
abelian group. The cyclic groups Zn (see example 1.10) are abelian for any n. The symmetric group Sn (see example 1.9) is not abelian for n > 2, but it is abelian for n = 2.
Let us consider some groups of order two, i.e., with two elements. The elements
0 and 1 form a group under addition modulo 2. We have
0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0 (1.2)
The elements 1 and −1 also form a group, but under multiplication. We have

1 · 1 = 1, 1 · (−1) = −1, (−1) · 1 = −1, (−1) · (−1) = 1 (1.3)
The symmetric group of degree 2, S2 (see example 1.9), has two elements, e and a, as shown in fig. 1.3.

(Figure 1.3: the elements e and a of S2 as permutation diagrams.)

They satisfy

e · e = e, e · a = a · e = a, a · a = e (1.4)
These three examples of groups are in fact different realizations of the same abstract group. If we make the identifications shown in fig. 1.4 we see that the structure of these groups is the same. We say that these groups are isomorphic.
(Figure 1.4: identifications 0 ∼ 1 ∼ e and 1 ∼ −1 ∼ a between the three realizations.)
Definition 1.3 Two groups G and G′ are isomorphic if their elements can be put into a one-to-one correspondence which is preserved under the composition laws of the groups. The mapping between these two groups is called an isomorphism.
Analogously one can define mappings of a given group G into itself, i.e.,
for each element g ∈ G one associates another element g 0 . The one-to-one
mappings which respect the product law of G are called automorphisms of G.
In other words, an automorphism of G is an isomorphism of G onto itself.
Example 1.12 Consider again the cyclic group Z6 and the mapping σ : Z6 → Z6 defined by σ(g) = g⁻¹. This is an automorphism of Z6.
In fact the above example is just a particular case of the automorphism of any
abelian group where a given element is mapped into its inverse.
Notice that if σ and σ′ are two automorphisms of a group G, then their composition σσ′ is also an automorphism of G. Such composition is an associative operation. In addition, since automorphisms are one-to-one mappings, they are invertible. Therefore, if one considers the set of all automorphisms of a group G together with the identity mapping of G into G, one gets a group which is called the automorphism group of G.
Any element ḡ of G gives rise to an automorphism. Indeed, define the mapping σḡ : G → G by σḡ(g) ≡ ḡ g ḡ⁻¹. Then

σḡ(g g′) = ḡ g g′ ḡ⁻¹ = ḡ g ḡ⁻¹ ḡ g′ ḡ⁻¹ = σḡ(g) σḡ(g′) (1.8)
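Equation (1.8) can be tested concretely on a small group; a sketch in Python, taking S3 with permutations written as tuples (our own encoding, not the notes'):

```python
from itertools import permutations

# An element of S3 is a tuple p, where p[i] is the image of i.
def compose(p, q):            # (p ∘ q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

# sigma_gbar(g) = gbar g gbar^{-1}; check it is one-to-one and
# preserves products, as in eq. (1.8).
for gbar in S3:
    sigma = {g: compose(compose(gbar, g), inverse(gbar)) for g in S3}
    assert sorted(sigma.values()) == sorted(S3)       # a bijection of the group
    for g1 in S3:
        for g2 in S3:
            assert sigma[compose(g1, g2)] == compose(sigma[g1], sigma[g2])
```

Such σḡ are the inner automorphisms; for an abelian group they all reduce to the identity mapping.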
1.2 Subgroups
A subset H of a group G which satisfies the group postulates under the same
composition law used for G, is said to be a subgroup of G. The identity element
and the whole group G itself are subgroups of G. They are called improper
subgroups. All other subgroups of a group G are called proper subgroups. If H
is a subgroup of G, and K a subgroup of H then K is a subgroup of G.
In order to determine whether a subset H of a group G is a subgroup, we have to check only two of the four group postulates: that the product of any two elements of H is in H (closure) and that the inverse of each element of H is in H. The associativity property is guaranteed since the composition law is the same as the one used for G. As G has an identity element, it follows from the closure and inverse-element properties of H that this identity element is also in H.
Example 1.13 The real numbers form a group under addition. The integer numbers are a subset of the real numbers and also form a group under addition. Therefore the integers are a subgroup of the reals under addition.
However the reals without zero also form a group under multiplication, but the
integers (with or without zero) do not. Consequently the integers are not a
subgroup of the reals under multiplication.
Example 1.14 Take G to be the group of all integers under addition, H1 to be all even integers under addition, H2 all multiples of 2² = 4 under addition, H3 all multiples of 2³ = 8 under addition, and so on. Then we have

G:  ..., −2, −1, 0, 1, 2, ...
H1: ..., −4, −2, 0, 2, 4, ...
H2: ..., −8, −4, 0, 4, 8, ...
H3: ..., −16, −8, 0, 8, 16, ...
Hn: ..., −2·2ⁿ, −2ⁿ, 0, 2ⁿ, 2·2ⁿ, ...
We see that each group is a subgroup of all groups above it, i.e.

G ⊃ H1 ⊃ H2 ⊃ ... ⊃ Hn ⊃ ... (1.10)

Moreover there is a one-to-one correspondence between any two groups of this list such that the composition law is preserved. Therefore all these groups are isomorphic to one another

G ∼ H1 ∼ H2 ∼ ... ∼ Hn ∼ ... (1.11)

This shows that a group can be isomorphic to one of its proper subgroups. The same cannot happen for finite groups.
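The isomorphism in (1.11) is the map n → 2ᵏ n. One cannot verify an infinite-group statement exhaustively, but a sketch in Python checks its defining properties on a sample of integers:

```python
# The map phi(n) = 2**k * n sends G = Z into H_k (the multiples of 2**k).
# On a sample it is one-to-one and preserves the composition law (addition),
# illustrating the isomorphism G ~ H_k of eq. (1.11).
k = 3
phi = lambda n: 2**k * n

sample = range(-50, 51)
images = [phi(n) for n in sample]

assert all(m % 2**k == 0 for m in images)            # lands in H_k
assert len(set(images)) == len(list(sample))         # one-to-one on the sample
assert all(phi(a + b) == phi(a) + phi(b)             # homomorphism
           for a in range(-5, 6) for b in range(-5, 6))
```

For a finite group no such map can exist: a bijection onto a proper subset of a finite set is impossible.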
(Figure: permutation diagrams of e, a, a², ..., aⁿ⁻¹, the powers of the cyclic permutation a of n objects.)
Definition 1.5 The order of a finite group is the number of elements it has.
Corollary 1.1 If the order of a finite group is a prime number then it has no
proper subgroups.
The proof involves the concept of cosets and it is given in section 1.4. A
finite group of prime order is necessarily a cyclic group and can be generated
from any of its elements other than the identity element.
We say an element g of a group G is conjugate to an element g 0 ∈ G if there
exists ḡ ∈ G such that
g = ḡg 0 ḡ −1 (1.12)
This concept of conjugate elements establishes an equivalence relation on the
group. Indeed, g is conjugate to itself (just take ḡ = e), and if g is conjugate to
g 0 , so is g 0 conjugate to g (since g 0 = ḡ −1 gḡ). In addition, if g is conjugate to g 0
and g 0 to g 00 , i.e. g 0 = g̃g 00 g̃ −1 , then g is conjugate to g 00 , since g = ḡg̃g 00 g̃ −1 ḡ −1 .
One can use such equivalence relation to divide the group G into classes.
Definition 1.6 The set of elements of a group G which are conjugate to each
other constitute a conjugacy class of G.
Obviously different conjugacy classes have no common elements. The identity element e constitutes a conjugacy class by itself in any group. Indeed, if g′ is conjugate to the identity e, e = g g′ g⁻¹, then g′ = e.
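The splitting of a group into conjugacy classes can be computed directly; a sketch in Python for S3, with permutations encoded as tuples (our encoding, not the notes'):

```python
from itertools import permutations

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

# The conjugacy class of g is the set of all gbar g gbar^{-1}, gbar in G.
def conjugacy_class(g):
    return frozenset(compose(compose(gbar, g), inverse(gbar)) for gbar in S3)

classes = {conjugacy_class(g) for g in S3}

# S3 splits into three disjoint classes: the identity alone, the three
# transpositions, and the two 3-cycles.
assert sorted(len(c) for c in classes) == [1, 2, 3]
assert frozenset([(0, 1, 2)]) in classes     # {e} is a class by itself
```

Note that the classes have different sizes and, except for {e}, are not subgroups.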
Given a subgroup H of a group G we can form the set of elements g −1 Hg
where g is any fixed element of G and H stands for any element of the subgroup
H. This set is also a subgroup of G and is said to be a conjugate subgroup of
H in G. In fact the conjugate subgroups of H are all isomorphic to H, since if
h1 , h2 ∈ H and h1 h2 = h3 we have that h01 = g −1 h1 g and h02 = g −1 h2 g satisfy
h01 h02 = g −1 h1 gg −1 h2 g = g −1 h1 h2 g = g −1 h3 g = h03 (1.13)
Notice that the images of two different elements of H, under conjugation by
g ∈ G, can not be the same. Because if they were the same we would have
g −1 h1 g = g −1 h2 g → g(g −1 h1 g)g −1 = h2 → h1 = h2 (1.14)
and that is a contradiction.
By choosing various elements g ∈ G we can form different conjugate subgroups
of H in G. However it may happen that for all g ∈ G we have
g −1 Hg = H (1.15)
This means that all conjugate subgroups of H in G are not only isomorphic
to H but are identical to H. In this case we say that the subgroup H is an
invariant subgroup of G. This implies that, given an element h1 ∈ H we can
find, for any element g ∈ G, an element h2 ∈ H such that
g −1 h1 g = h2 → h1 g = gh2 (1.16)
Definition 1.8 Given an element g of a group G we can form the set of all
elements of G which commute with g, i.e., all x ∈ G such that xg = gx. This
set is called the centralizer of g and it is a subgroup of G.
Indeed, if x1 commutes with g, so does its inverse, since

x1⁻¹ (x1 g) x1⁻¹ = x1⁻¹ (g x1) x1⁻¹ → g x1⁻¹ = x1⁻¹ g (1.18)

Notice that although all elements of the centralizer commute with the given element g, they do not have to commute among themselves, and therefore the centralizer is not necessarily an abelian subgroup of G.
Definition 1.9 The center of a group G is the set of all elements of G which
commute with all elements of G.
We could say that the center of G is the intersection of the centralizers of all elements of G. The center of a group G is a subgroup of G, and it is abelian, since by definition its elements have to commute with one another. In addition, it is an (abelian) invariant subgroup.
Example 1.17 The set of all unitary n × n matrices forms a group, called U(n), under matrix multiplication. That is because if U1 and U2 are unitary (U1† = U1⁻¹ and U2† = U2⁻¹) then U3 ≡ U1 U2 is also unitary. In addition the inverse of U is just U†, and the identity is the unit n × n matrix. The unitary matrices with unit determinant constitute a subgroup, because the product of two of them, as well as their inverses, have unit determinant. That subgroup is denoted SU(n). It is an invariant subgroup of U(n) because the conjugation of a matrix of unit determinant by any unitary matrix gives a matrix of unit determinant, i.e. det(U M U†) = det M = 1, with U ∈ U(n) and M ∈ SU(n). Therefore, U(n) is not simple. However, it is not semisimple either, because it has an abelian invariant subgroup constituted by the matrices R ≡ e^{iθ} 1l_{n×n}, with θ real. Indeed, the multiplication of any two R's is again in the set of matrices R, and the inverse of R is R⁻¹ = e^{−iθ} 1l_{n×n}, which is again a matrix in the set. Notice
the subgroup constituted by the matrices R is isomorphic to U (1), the group of
1×1 unitary matrices, i.e. phases e^{iθ}. Since the matrices R commute with any unitary matrix, it follows that they are invariant under conjugation by elements of U(n). Therefore, the subgroup U(1) is an abelian invariant subgroup of U(n), and so U(n) is not semisimple. The subgroup U(1) is in fact the center of U(n), i.e. the set of matrices commuting with all unitary matrices. Notice that such a U(1) is not a subgroup of SU(n), since its elements do not have unit determinant. However, the discrete subset of matrices e^{2πim/n} 1l_{n×n} with m = 0, 1, 2, ..., (n − 1) have unit determinant and belong to SU(n). They certainly commute with all n × n matrices, and constitute the center of SU(n). Those matrices form an abelian invariant subgroup of SU(n), which is isomorphic to Zn. Therefore, SU(n) is not semisimple.
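The properties of these central matrices are easy to confirm numerically; a sketch with numpy for n = 3:

```python
import numpy as np

n = 3
# The matrices R_m = exp(2 pi i m / n) * 1l, m = 0 .. n-1: each has unit
# determinant (so it belongs to SU(n)) and commutes with every matrix.
for m in range(n):
    R = np.exp(2j * np.pi * m / n) * np.eye(n)
    assert abs(np.linalg.det(R) - 1) < 1e-12     # det = exp(2 pi i m) = 1
    A = np.arange(n * n).reshape(n, n) + 1j      # an arbitrary test matrix
    assert np.allclose(R @ A, A @ R)             # scalar matrices commute
```

The determinant condition is the point: a generic phase e^{iθ} 1l has determinant e^{inθ}, which equals 1 only for the n discrete values θ = 2πm/n.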
g = h1 h2 ...hn (1.19)
From these requirements it follows that the subgroups Hi have only the identity e in common. Because if f ≠ e were a common element of, say, H2 and H5, then the element g = h1 f h3 h4 f⁻¹ h6 ... hn could also be written as g = h1 f⁻¹ h3 h4 f h6 ... hn, contradicting the uniqueness of the decomposition. Every subgroup Hi is an invariant subgroup of G, because if h′i ∈ Hi then
Given two groups G and G0 we can construct another group by taking the
direct product of G and G0 as follows: the elements of G00 = G ⊗ G0 are formed
by the pairs (g, g 0 ) where g ∈ G and g 0 ∈ G0 . The composition law for G00 is
defined by
(g1 , g10 )(g2 , g20 ) = (g1 g2 , g10 g20 ) (1.22)
1.4 Cosets
Given a group G and a subgroup H of G we can divide the group G into
disjoint sets such that any two elements of a given set differ by an element of
H multiplied from the right. That is, we construct the sets
If g = e the set eH is the subgroup H itself. All elements in a set gH are different, because if g h1 = g h2 then h1 = h2. Therefore the number of elements of a given set gH is the same as the number of elements of the subgroup H. Also an element of a set gH is not contained in any other set g′H with g′H ≠ gH. Because if g h1 = g′ h2 then g = g′ h2 h1⁻¹, and therefore g would be contained in g′H and consequently gH ≡ g′H. Thus we have split the group G into disjoint sets, each with the same number of elements, and a given element g ∈ G belongs to one and only one of these sets.
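This partition into equal-sized cosets can be seen concretely; a small sketch in Python, taking Z6 under addition mod 6 with the subgroup H = {0, 3} (our choice of example, not the notes'):

```python
# Cosets gH in G = Z6 (addition mod 6) with H = {0, 3}.
G = set(range(6))
H = {0, 3}

cosets = {frozenset((g + h) % 6 for h in H) for g in G}

# The cosets all have |H| elements, are disjoint, and cover G,
# so |G| = (number of cosets) * |H|  — Lagrange's counting m = k n.
assert all(len(c) == len(H) for c in cosets)
assert set().union(*cosets) == G
assert len(cosets) * len(H) == len(G)
```

Here there are 3 cosets of 2 elements each, and 6 = 3 · 2.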
Therefore, if m is the order of G, n the order of H, and k the number of distinct cosets gH, we have

m = k n (1.23)
When H is an invariant subgroup of G, the set of cosets gH forms a group, and it is called the factor group or the quotient group, denoted G/H. In order to show this we consider the product of two elements of two different cosets. We get

g h1 g′ h2 = g g′ (g′⁻¹ h1 g′) h2 = g g′ h3 h2 (1.24)

where we have used the fact that H is invariant, and therefore there exists h3 ∈ H such that g′⁻¹ h1 g′ = h3.
third coset, namely gg 0 H. If we had taken any other elements of the cosets
gH and g 0 H, their product would produce an element of the same coset gg 0 H.
Consequently we can introduce, in a well defined way, the product of elements
of the coset space G/H, namely
gHg 0 H ≡ gg 0 H (1.25)
The invariant subgroup H plays the role of the identity element, since

H g′H = g′H H = g′H (1.26)
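The well-definedness of the coset product (1.25) is worth checking by brute force; a sketch in Python for Z6 with the invariant subgroup H = {0, 3} (our example, not the notes'):

```python
# H = {0, 3} is invariant in the abelian group Z6, so the coset product
# gH g'H = (g + g')H is well defined: the answer does not depend on
# which representatives of the two cosets are combined.
H = {0, 3}
coset = lambda g: frozenset((g + h) % 6 for h in H)

for g in range(6):
    for gp in range(6):
        target = coset(g + gp)
        for a in coset(g):          # any representative of gH
            for b in coset(gp):     # any representative of g'H
                assert coset(a + b) == target

# H itself (the coset of 0) acts as the identity of the factor group G/H.
assert all(coset(0 + g) == coset(g) for g in range(6))
```

The resulting factor group has the 3 cosets as elements and is isomorphic to Z3.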
1.5 Representations
The concept of abstract groups we have been discussing plays an important role in Physics. However, its importance only appears when some quantities in the physical theory realize, in a concrete way, the structure of the abstract group. Here enters the concept of a representation of an abstract group.
Suppose we have a set of operators D1, D2, ... acting on a vector space V,

Di |v⟩ = |v′⟩ ;  |v⟩, |v′⟩ ∈ V (1.32)

We can define the product of these operators by the composition of their action, i.e., an operator D3 is the product of two other operators D1 and D2 if

D1 (D2 |v⟩) = D1 |v′⟩ = D3 |v⟩ (1.33)

for all |v⟩ ∈ V. We then write

D1 · D2 = D3 (1.34)
Suppose that these operators form a group under this product law. We call it
an operator group or group of transformations.
If we can associate to each element g of an abstract group G an operator,
which we shall denote by D(g), such that the group structure of G is preserved,
i.e., if for g, g 0 ∈ G we have
D(g)D(g 0 ) = D(gg 0 ) (1.35)
then we say that such set of operators is a representation of the abstract group
G in the representation space V . In fact, the mapping between the operator
group D and the abstract group G is a homomorphism. In addition to eq.(1.35)
one also has that
D(g −1 ) = D−1 (g)
D(e) = 1 (1.36)
where 1 stands for the unit operator in D.
Definition 1.10 The dimension of the representation is the dimension of the
representation space.
Notice that we can associate the same operator to two or more elements of G, but we cannot do the converse. In the case where there is a one-to-one correspondence between the elements of the abstract group and the set of operators, i.e., to one operator D there is only one element g associated, we say that we have a faithful representation.
Example 1.22 The unit matrix of any order is a trivial representation of any group. Indeed, if we associate all elements of a given group to the operator 1l, we have that the relation 1l · 1l = 1l reproduces the composition law of the group g · g′ = g″. This is an example of an extremely non-faithful representation.
If the operators are linear, i.e.

D( |v⟩ + |v′⟩ ) = D |v⟩ + D |v′⟩
D( a |v⟩ ) = a D |v⟩ (1.37)

with |v⟩, |v′⟩ ∈ V and a being a c-number, we say they form a linear representation of G.
Given a basis |v_i⟩ (i = 1, 2, ..., n) of the vector space V (of dimension n), we can construct the matrix representatives of the operators D of a given representation. The action of an operator D on an element |v_i⟩ of the basis produces an element of the vector space, which can be written as a linear combination of the basis

D |v_i⟩ = |v_j⟩ D_{ji} (1.38)

The coefficients D_{ji} of this expansion constitute the matrix representative of the operator D. Indeed, we have

D′ (D |v_i⟩) = D′ |v_j⟩ D_{ji} = |v_k⟩ D′_{kj} D_{ji} = |v_k⟩ (D′D)_{ki} (1.39)
So we can now associate the matrix D_{ij} to the element of the abstract group that is associated to the operator D. We have then what is called a matrix representation of the abstract group. Notice that the matrices in each representation have to be non-singular because of the existence of the inverse element. In addition the unit element e is always represented by the unit matrix, i.e., D_{ij}(e) = δ_{ij}.
Example 1.23 In example 1.9 we have defined the group Sn. We can construct a representation for this group in terms of n × n matrices as follows: take a vector space Vn and let |v_i⟩, i = 1, 2, ..., n, be a basis of Vn. One can define n! operators that, acting on the basis, permute its elements, reproducing the n! permutations of n elements. Using (1.38) one then obtains the matrices. For instance, in the case of S3, consider the matrices
instance, in the case of S3 , consider the matrices
1 0 0 0 1 0
D(a0 ) = 0 1 0 ; D(a1 ) = 1 0 0 ;
0 0 1 0 0 1
24 CHAPTER 1. ELEMENTS OF GROUP THEORY
1 0 0 0 0 1
D(a2 ) = 0 0 1 ; D(a3 ) = 0 1 0 ;
0 1 0 1 0 0
0 1 0 0 0 1
D(a4 ) = 0 0 1
; D(a5 ) = 1 0 0
(1.40)
1 0 0 0 1 0
0 0 1
one can check that the matrices given above play the role of the operators
permuting the basis too
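The closure of the six matrices of (1.40) under multiplication, which is what makes them a representation of S3, can be verified directly; a sketch with numpy:

```python
import numpy as np

# The six matrices of eq. (1.40).
D = [np.array(M) for M in [
    [[1, 0, 0], [0, 1, 0], [0, 0, 1]],   # D(a0)
    [[0, 1, 0], [1, 0, 0], [0, 0, 1]],   # D(a1)
    [[1, 0, 0], [0, 0, 1], [0, 1, 0]],   # D(a2)
    [[0, 0, 1], [0, 1, 0], [1, 0, 0]],   # D(a3)
    [[0, 1, 0], [0, 0, 1], [1, 0, 0]],   # D(a4)
    [[0, 0, 1], [1, 0, 0], [0, 1, 0]],   # D(a5)
]]

as_tuple = lambda M: tuple(map(tuple, M))
table = {as_tuple(M) for M in D}

# Closure: the product of any two is again in the set, so the matrices
# realize the composition law of S3 (a faithful 3-dimensional rep).
for A in D:
    for B in D:
        assert as_tuple(A @ B) in table
    # each matrix is invertible, with inverse (its transpose) in the set
    assert as_tuple(A.T) in table
```

Since all six matrices are distinct, the representation is faithful.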
Lemma 1.1 (Schur) Any matrix which commutes with all matrices of a given irreducible representation of a group G must be a multiple of the unit matrix.
Proof Let A be a matrix that commutes with all matrices D(g) of a given irreducible representation of G, i.e.

A |v⟩ = λ |v⟩ (1.55)
For any g 0 ∈ G
D†(g′) H D(g′) = (1/N) Σ_{g∈G} D†(g g′) D(g g′) = H (1.58)
v† H v = v′† H′ v′ = Σ_{i=1}^{d} H′_{ii} |v′_i|² (1.60)

where v′_i are the components of v′. Since the v′_i are arbitrary we conclude that each entry H′_{ii} of H′ is real and positive. We then define a diagonal real matrix h with entries h_{ii} = √(H′_{ii}), i.e. H′ = hh. Therefore

H = U H′ U† = U h h U† ≡ S S (1.61)
and so
D0† (g)D0 (g) = 1l (1.64)
Therefore the representation D(g) is equivalent to the unitary representation D′(g). This result, as we will discuss later, is also true for compact Lie groups.
Analogously, the elements of a given conjugacy class have the same character. Indeed, from definition 1.6, if two elements g′ and g″ are conjugate, g′ = g g″ g⁻¹, then in any representation D one has Tr(D(g′)) = Tr(D(g″)). Nothing prevents, however, the elements of two different conjugacy classes from having the same character in some particular representation. In fact, this happens in the representation discussed in example 1.22.
All these four theorems are also true for compact Lie groups (see definition in chapter 2) with the replacement of the sum (1/N(G)) Σ_{g∈G} by the invariant integration ∫_G Dg over the group manifold.
Characters are also used to prove theorems about the number of inequiva-
lent irreducible representations of a finite group.
Theorem 1.8 The sum of the squares of the dimensions of the inequivalent
irreducible representations of a finite group G is equal to the order of G.
Definition 1.14 If all the matrices of a representation are real the represen-
tation is said to be real.
and so
DR (g) = C ∗ D∗ (g)(C ∗ )−1 (1.75)
Therefore
D∗ (g) = (C −1 C ∗ )−1 D(g)C −1 C ∗ (1.76)
and D is equivalent to D∗. However the converse is not always true, i.e., if D is equivalent to D∗ it does not mean that D is equivalent to a real representation. So we classify the representations into three classes regarding the relation between D and D∗.
Notice that if D is potentially real or pseudo real then its characters are real.
Example 1.24 The rotation group on the plane, denoted SO(2), can be represented by the matrices

R(θ) = [  cos θ   sin θ ]
       [ −sin θ   cos θ ]   (1.77)

such that

R(θ) [ x ] = [  x cos θ + y sin θ ]
     [ y ]   [ −x sin θ + y cos θ ]   (1.78)

One can easily check that R(θ)R(ϕ) = R(θ + ϕ). This group is abelian, and according to corollary 1.2 such a representation is reducible. Indeed, one gets

M⁻¹ R(θ) M = [ e^{iθ}     0     ]
             [   0     e^{−iθ} ]   (1.79)

where

M = [ 1  i ]
    [ i  1 ]   (1.80)
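Both the diagonalization and the composition law R(θ)R(ϕ) = R(θ+ϕ) can be checked numerically; a sketch with numpy (the eigenvalue ordering on the diagonal depends on conventions, so the check below only compares the two phases as a set):

```python
import numpy as np

def R(t):
    return np.array([[ np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

theta, phi = 0.7, 0.4
M = np.array([[1, 1j],
              [1j, 1]])

# Conjugation by M diagonalizes R(theta): off-diagonal entries vanish
# and the diagonal carries the phases exp(+i theta) and exp(-i theta).
Dg = np.linalg.inv(M) @ R(theta) @ M
assert abs(Dg[0, 1]) < 1e-12 and abs(Dg[1, 0]) < 1e-12
evals = sorted(np.diag(Dg), key=lambda z: z.imag)
expect = sorted([np.exp(1j * theta), np.exp(-1j * theta)], key=lambda z: z.imag)
assert np.allclose(evals, expect)

# Abelian composition law: R(theta) R(phi) = R(theta + phi).
assert np.allclose(R(theta) @ R(phi), R(theta + phi))
```

The two one-dimensional blocks e^{±iθ} are the irreducible (complex) representations into which this real 2-dimensional representation decomposes.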
χ_D(a0) = 3
χ_D(a1) = χ_D(a2) = χ_D(a3) = 1
χ_D(a4) = χ_D(a5) = 0 (1.83)

Therefore

(1/6) Σ_{i=0}^{5} | χ_D(a_i) |² = 2 (1.84)
From theorem 1.7 one sees that such 3-dimensional representation is not irre-
ducible. Indeed, the one dimensional subspace generated by the vector
| w3 ⟩ = (1/√3) (1, 1, 1)^T (1.85)
| w_i ⟩ = | v_j ⟩ Λ_{ji} (1.87)

where i, j = 1, 2, 3 and

Λ = [  1/√2   1/√6   1/√3 ]
    [ −1/√2   1/√6   1/√3 ]
    [   0    −2/√6   1/√3 ]   (1.88)
Therefore
(1/6) Σ_{i=0}^{5} | χ_{D″}(a_i) |² = 1 (1.93)
According to theorem 1.7 the representation D″ is irreducible. Consequently the 3-dimensional representation D defined in (1.40) is completely reducible. It decomposes into the irreducible 2-dimensional representation D″ and the 1-dimensional representation given by 1.
We have seen so far that S3 has two irreducible representations: the two-dimensional representation D″, and the scalar (one-dimensional) representation where all elements are represented by the number 1. Since 2² + 1² = 5, and since the order of S3 is 6, we observe from theorem 1.8 that one irreducible representation of dimension 1 is missing. That is easy to construct, and in fact any Sn group has it. It is the representation where the permutations made of an even number of simple permutations are represented by 1, and those made of an odd number by −1. Since the composition of permutations adds up the numbers of simple permutations, it follows that this is indeed a representation. Therefore, this sign assignment gives the missing one-dimensional irreducible representation of S3.
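The sign representation can be realized concretely as the determinant of the permutation matrices; a sketch with numpy and itertools (our encoding of permutations as tuples):

```python
import numpy as np
from itertools import permutations

# Permutation matrix convention: D(p)_{ij} = 1 if p sends j to i.
def perm_matrix(p):
    M = np.zeros((len(p), len(p)))
    for j, i in enumerate(p):
        M[i, j] = 1
    return M

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))

# det of a permutation matrix is +1 for even, -1 for odd permutations,
# and det(AB) = det(A) det(B), so p -> det D(p) is the missing
# one-dimensional representation.
sign = {p: int(round(float(np.linalg.det(perm_matrix(p))))) for p in S3}

assert sorted(sign.values()) == [-1, -1, -1, 1, 1, 1]
for p in S3:
    for q in S3:
        assert sign[compose(p, q)] == sign[p] * sign[q]
```

With this, 2² + 1² + 1² = 6 reproduces the order of S3, as theorem 1.8 requires.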
36 CHAPTER 2. LIE GROUPS AND LIE ALGEBRAS
then

x′ = f(x) (2.4)
If the elements of a group G form a topological space and if the functions F(x, x′) and f(x) are continuous functions of their arguments, then we say that G is a topological group. Notice that in a topological group we have to have some compatibility between the algebraic and the topological structures.
When the elements of a group G constitute a manifold and when the functions F(x, x′) and f(x), discussed above, possess derivatives of all orders with respect to their arguments, i.e., are analytic functions, we say the group G is a Lie group. This definition can be given in a formal way.
For more details about the geometrical concepts involved here see [HEL 78,
CBW 82, ALD 86, FLA 63].
Example 2.1 The real numbers under addition constitute a Lie group. Indeed, we can use a real variable x to parametrize the group elements. Then, for two elements with parameters x and x′, the composition function in (2.2) is given by

F(x, x′) = x + x′ (2.5)

and the parameter of the inverse element is

f(x) = −x (2.6)
Example 2.2 The group of rotations on the plane, discussed in example 1.24, is a Lie group. In fact the groups of rotations on IR^n, denoted by SO(n), are Lie groups. These are the groups of orthogonal n × n real matrices O with unit determinant (O^T O = 1l, det O = 1).
Example 2.3 The groups GL(n) and SL(n) discussed in example 1.16 are Lie groups, as well as the group SU(n) discussed in example 1.17.
Example 2.4 The groups Sn and Zn discussed in examples 1.9 and 1.10 are not Lie groups.
constitute a basis for the tangent space T_p M. Then, any tangent vector V_p on T_p M can be written as a linear combination of this basis

V_p = V_p^i ∂/∂x^i (2.8)
Now suppose that we vary the point p along a differentiable curve. As we
do that we obtain vectors tangent to the curve at each of its points. These
tangent vectors are continuously and differentiably related. If we choose a
tangent vector on Tp M for each point p of the manifold M such that this set
of vectors are differentiably related in the manner described above we obtain
what is called a vector field. Given a set of local coordinates on M we can write a vector field V, in that coordinate neighbourhood, in terms of the basis ∂/∂x^i, with components V^i that are differentiable functions of these coordinates:

V = V^i(x) ∂/∂x^i (2.9)
Given two vector fields V and W in a coordinate neighbourhood we can evaluate their composite action on a function f. We have

W(V f) = W^j (∂V^i/∂x^j) (∂f/∂x^i) + W^j V^i ∂²f/(∂x^j ∂x^i) (2.10)
Due to the second term on the r.h.s of (2.10) the operator W V is not a vector
field and therefore the ordinary composition of vector fields is not a vector
field. However if we take the commutator of the linear operators V and W we
get
[V, W] = ( V^i ∂W^j/∂x^i − W^i ∂V^j/∂x^i ) ∂/∂x^j (2.11)
and this is again a vector field. So, the set of vector fields close under the
operation of commutation and they form what is called a Lie algebra.
Definition 2.2 A Lie algebra G is a vector space over a field k with a bilinear composition law

(x, y) → [x, y]
[ax + by, z] = a[x, z] + b[y, z] ;  [x, ay + bz] = a[x, y] + b[x, z] (2.12)

satisfying

1. [x, x] = 0
2. [x, [y, z]] + [z, [x, y]] + [y, [z, x]] = 0 (Jacobi identity)

Notice that property 1, together with bilinearity, implies antisymmetry of the bracket. Expanding

0 = [x + y, x + y] = [x, x] + [x, y] + [y, x] + [y, y] = [x, y] + [y, x] (2.13)

we conclude [x, y] = −[y, x].
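A standard concrete Lie algebra is a space of matrices with the commutator [x, y] = xy − yx as the bracket; the axioms above can be checked numerically on random matrices, as in this numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
comm = lambda x, y: x @ y - y @ x      # matrix commutator as the bracket

x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))
a, b = 2.0, -1.5

# bilinearity, eq. (2.12)
assert np.allclose(comm(x, a * y + b * z), a * comm(x, y) + b * comm(x, z))
# axiom 1 and the antisymmetry that follows from it, eq. (2.13)
assert np.allclose(comm(x, x), 0)
assert np.allclose(comm(x, y), -comm(y, x))
# axiom 2, the Jacobi identity
assert np.allclose(comm(x, comm(y, z)) + comm(z, comm(x, y))
                   + comm(y, comm(z, x)), 0)
```

For matrices the Jacobi identity follows from associativity of the matrix product, which is why it holds identically here.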
(a, b) → a + b (2.14)
and
(a, b) → ab (2.15)
called respectively addition and multiplication such that
a (b + c) = ab + ac
(a + b) c = ac + bc
This defines a mapping between the tangent spaces of G since, given Vg0 in
Tg0 G, we have associated a tangent vector Wg00 in Tg00 G. The vector Wg00 does
not have necessarily to coincide with the value of the vector field V at Tg00 G,
namely Vg00 . However, when that happens we say that the vector field V is a
left invariant vector field on G, since that transformation was induced by left
translations on G.
The commutator of two left invariant vector fields, V and V̄, is again a left invariant vector field. To check this, consider the commutator of these vector fields at the group element g′. According to (2.11)
Ṽ_{g′} ≡ [V, V̄]_{g′} = ( V^i_{g′} ∂V̄^j_{g′}/∂x′^i − V̄^i_{g′} ∂V^j_{g′}/∂x′^i ) ∂/∂x′^j (2.17)

Pushing this forward with the left translation x′ → x″ one gets

[V, V̄]_{g″} = V^k_{g′} ∂/∂x′^k ( V̄^l_{g′} ∂x″^j/∂x′^l ) ∂/∂x″^j − ( V ↔ V̄ )
            = ( V^i_{g′} ∂V̄^j_{g′}/∂x′^i − V̄^i_{g′} ∂V^j_{g′}/∂x′^i ) (∂x″^k/∂x′^j) ∂/∂x″^k
            = Ṽ^j_{g′} (∂x″^k/∂x′^j) ∂/∂x″^k (2.18)

the second-derivative terms ∂²x″/∂x′ ∂x′ cancelling between the two terms of the commutator.
So, Ṽ is also left invariant. Therefore the set of left invariant vector fields form
a Lie algebra. They constitute in fact a Lie subalgebra of the Lie algebra of
all vector fields on G.
Definition 2.4 A vector subspace H of a Lie algebra G is said to be a Lie
subalgebra of G if it closes under the Lie bracket, i.e.
[H , H] ⊂ H (2.19)
these constants contain all the information about the Lie algebra of G. Since
the relation above is point independent we are going to fix the tangent plane
to G at the identity element, Te G, as the vector space of the Lie algebra of G.
We could have defined right invariant vector fields in a similar way. Their Lie
algebra is isomorphic to the Lie algebra of the left-invariant fields.
A one parameter subgroup of a Lie group G is a differentiable curve, i.e., a
differentiable mapping from the real numbers onto G, t → g(t) such that
g(t)g(s) = g(t + s)
g(0) = e (2.21)
This means that the straight line on the tangent plane to G at the identity
element, Te G, is mapped onto the one parameter subgroup of G, g(t). This is
called the exponential mapping of the Lie algebra of G (Te G) onto G. In fact,
it is possible to prove that in general, the exponential mapping is an analytic
mapping of Te G onto G and that it maps a neighbourhood of the zero element
of Te G in a one to one manner onto a neighbourhood of the identity element
of G. In several cases this mapping can be extended globally on G.
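The exponential mapping can be made concrete for SO(2); a sketch in Python that builds exp(tA) from its Taylor series (the helper expm is ours, with a fixed number of terms adequate for small t):

```python
import numpy as np

# Truncated series exp(A) = sum_k A^k / k!
def expm(A, terms=30):
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # a tangent vector at the identity of SO(2)

g = lambda t: expm(t * A)
t, s = 0.3, 0.5

# g(t) is the rotation by angle t, and t -> g(t) is a one-parameter
# subgroup: g(t) g(s) = g(t + s), g(0) = e, as in eq. (2.21).
assert np.allclose(g(t), [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
assert np.allclose(g(t) @ g(s), g(t + s))
assert np.allclose(g(0.0), np.eye(2))
```

The straight line tA in the tangent space at the identity is thus mapped onto the one-parameter subgroup of rotations.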
For more details about the exponential mapping and other geometrical
concepts involved here see [HEL 78, ALD 86, CBW 82, AUM 77].
2.4. BASIC NOTIONS ON LIE ALGEBRAS 43
for any T, T 0 ∈ G.
g = 1 + i ε^a T_a (2.37)
2. D(aT ) = aD(T )
Notice that given an element T of a Lie algebra G, one can define a transformation in G as

T : G → G′ = [ T , G ] (2.40)
Using the Jacobi identity one can easily verify that the commutator of the composition of two such transformations reproduces the Lie bracket operation on G, i.e.
[T , [T0 , G ]] − [T0 , [T , G ]] = [[T , T0 ], G ] (2.41)
Therefore such transformations define a representation of G on G, which is
called the adjoint representation of G. Obviously, it has the same dimension
as G. Introducing the coefficients d_{ba}(T) as
[ T , Ta ] ≡ Tb dba (T ) (2.42)
and so
[ d(T ) , d(T 0 ) ] = d([ T , T 0 ]) (2.44)
Therefore, the matrices defined in (2.42) constitute a matrix representation of G, which is the adjoint representation of G. Using (2.23) and (2.42) one gets that d_{cb}(T_a) is indeed equal to i f_{ab}^c, as obtained in (2.39).
Notice that if G has an invariant subalgebra H, i.e. [ G , H ] ⊂ H, then from
(2.41) one observes that the vector space of H defines a representation of G,
which is in fact an invariant subspace of the adjoint representation. Therefore,
for non-simple Lie algebras, the adjoint representation is not irreducible.
In a given finite dimensional representation D of a Lie algebra we define
the quantity
η D (T, T 0 ) ≡ T r (D(T )D(T 0 )) (2.45)
which is symmetric and bilinear
1. η D (T, T 0 ) = η D (T 0 , T )
Definition 2.8 A Lie algebra is said to be abelian if all its elements commute
with one another.
In this case all the structure constants vanish and consequently the Killing
form is zero. However there might exist some representation D of an abelian
algebra for which the bilinear form (2.45) is not zero.
Definition 2.9 A subalgebra H of G is said to be an invariant subalgebra (or ideal) if
[H, G] ⊂ H (2.50)
From (2.27) we see that the Lie algebra of an invariant subgroup of a group G is an invariant subalgebra of the Lie algebra of G.
Definition 2.10 We say a Lie algebra G is simple if it has no invariant subal-
gebras, except zero and itself, and it is semisimple if it has no invariant abelian
subalgebras.
for every T 0 ∈ G.
For the proof see chap. III of [JAC 79] or sec. 6 of appendix E of [COR 84].
Using the cyclic property of the trace one sees that fabc is antisymmetric with
respect to all its three indices. Notice that, in general, fabc is not a structure
constant.
For a compact semisimple Lie algebra we have from (2.53) that fab^c = fabc, and therefore the commutation relations (2.23) can be written as
the basis of the algebra su(2) of this group can be taken to be (half of) the Pauli matrices (Ti ≡ σi /2)

T1 = (1/2) ( 0  1 )  ;   T2 = (1/2) ( 0  −i )  ;   T3 = (1/2) ( 1   0 )      (2.57)
           ( 1  0 )                 ( i   0 )                 ( 0  −1 )
The matrices (2.57) define what is called the spinor (2-dimensional) represen-
tation of the algebra su(2).
From (2.39) we obtain the adjoint representation (3-dimensional) of su(2):

           ( 0  0   0 )              (  0  0  1 )              ( 0  −1  0 )
d(T1 ) = i ( 0  0  −1 )  ;  d(T2 ) = i (  0  0  0 )  ;  d(T3 ) = i ( 1   0  0 )      (2.60)
           ( 0  1   0 )              ( −1  0  0 )              ( 0   0  0 )
The Killing form computed from (2.60) is ηij = Tr( d(Ti ) d(Tj ) ) = 2δij . So, it is non degenerate. This is in agreement with theorem 2.1, since this algebra is simple. According to definition 2.11 this is a compact algebra.
The trace form (2.45) in the spinor representation is given by
ηij^s = Tr( D(Ti ) D(Tj ) ) = (1/2) δij      (2.62)
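A quick numerical cross-check (an addition, assuming NumPy) of the two trace forms — the spinor form (2.62) and the Killing form computed from the adjoint matrices (2.60) — which come out proportional, as theorem 2.4 below asserts:

```python
import numpy as np

# Spinor representation (2.57): T_i = sigma_i / 2
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
T = [s / 2 for s in sig]

# Adjoint representation (2.60)
d = [1j * np.array(m) for m in (
    [[0, 0, 0], [0, 0, -1], [0, 1, 0]],
    [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
    [[0, -1, 0], [1, 0, 0], [0, 0, 0]])]

eta_spinor = np.array([[np.trace(T[i] @ T[j]).real for j in range(3)] for i in range(3)])
eta_killing = np.array([[np.trace(d[i] @ d[j]).real for j in range(3)] for i in range(3)])

assert np.allclose(eta_spinor, np.eye(3) / 2)    # eq. (2.62)
assert np.allclose(eta_killing, 2 * np.eye(3))   # Killing form: non-degenerate
assert np.allclose(eta_spinor, eta_killing / 4)  # proportional, as theorem 2.4 asserts
```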
T± = T1 ± iT2 (2.63)
Notice that, formally, these are not elements of the algebra su(2) since we have taken complex linear combinations of the generators. These are elements of the complex algebra denoted by A1 .
Using (2.58) one finds
[T3 , T± ] = ±T±
[T+ , T− ] = 2T3 (2.64)
T3 | j, mi = m | j, mi (2.66)
The operators T± raise and lower the eigenvalue of T3 since using (2.64)
T3 T± | j, mi = ([T3 , T± ] + T± T3 ) | j, mi
= (m ± 1) T± | j, mi (2.67)
We are interested in finite dimensional representations and therefore there can only exist a finite number of eigenvalues m in a given representation. Consequently there
2.5. SU(2) AND SL(2): LIE ALGEBRA PROTOTYPES 51
must exist a state which possesses the highest eigenvalue of T3 , which we denote j:
T+ | j, ji = 0      (2.68)
The other states of the representation are obtained from | j, ji by applying T−
successively on it. Again, since the representation is finite there must exist a
positive integer l such that
(T− )l+1 | j, ji = 0 (2.69)
Using (2.63) one can write the Casimir operator (2.65) as
C = T3² + (1/2) ( T+ T− + T− T+ )      (2.70)
So, using (2.64), (2.66) and (2.68)
C | j, ji = ( T3² + (1/2) [ T+ , T− ] + T− T+ ) | j, ji = j (j + 1) | j, ji      (2.71)
Since C commutes with all generators of the algebra, any state of the repre-
sentation is an eigenstate of C with the same eigenvalue
C | j, mi = j (j + 1) | j, mi (2.72)
where | j, mi = (T− )^n | j, ji for m = j − n and n ≤ l. From Schur’s lemma (see lemma 1.1), in an irreducible representation the Casimir operator has to be proportional to the identity matrix and so
C = j(j + 1)1l (2.73)
Using (2.70) one can write
T+ T− = C − T32 + T3 (2.74)
Therefore, applying T+ on both sides of (2.69) and using (2.74),
0 = T+ T− (T− )^l | j, ji = ( j(j + 1) − (j − l)² + (j − l) ) (T− )^l | j, ji      (2.75)
Since, by assumption the state (T− )l | j, ji does exist, one must have
j(j + 1) − (j − l)2 + (j − l) = (2j − l)(l + 1) = 0 (2.76)
Since l is a positive integer, the only possible solution is l = 2j. Therefore we conclude that j must be an integer or half-integer, that the eigenvalues of T3 range over j, j − 1, ..., −j, and that the representation has dimension 2j + 1.
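The whole construction can be tested concretely. The sketch below (not from the text) builds T3 and T± in the standard normalized basis | j, m⟩ — an assumption, since the states (T−)^n | j, j⟩ used above are unnormalized — and checks (2.64), the Casimir eigenvalue j(j + 1), and l = 2j:

```python
import numpy as np

def spin_rep(j):
    """Matrices of T3, T+, T- in the basis |j,m>, m = j, j-1, ..., -j
    (standard normalization sqrt(j(j+1) - m(m+1)) assumed)."""
    dim = int(2 * j + 1)
    m = np.array([j - n for n in range(dim)])
    T3 = np.diag(m)
    Tp = np.zeros((dim, dim))
    for n in range(1, dim):
        # T+ |j,m> = sqrt(j(j+1) - m(m+1)) |j,m+1>
        Tp[n - 1, n] = np.sqrt(j * (j + 1) - m[n] * (m[n] + 1))
    return T3, Tp, Tp.T

for j in (0.5, 1, 1.5, 2):
    T3, Tp, Tm = spin_rep(j)
    dim = int(2 * j + 1)
    assert np.allclose(T3 @ Tp - Tp @ T3, Tp)       # [T3, T+] = +T+  (2.64)
    assert np.allclose(Tp @ Tm - Tm @ Tp, 2 * T3)   # [T+, T-] = 2 T3 (2.64)
    C = T3 @ T3 + (Tp @ Tm + Tm @ Tp) / 2           # Casimir (2.70)
    assert np.allclose(C, j * (j + 1) * np.eye(dim))  # (2.72)-(2.73)
    # lowering annihilates after l = 2j steps, as in (2.76)
    assert np.allclose(np.linalg.matrix_power(Tm, int(2 * j) + 1), 0)
```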
This defines a 2-dimensional representation of sl(2) which differs from the spinor representation of su(2), given in (2.57), by a factor i in L2 . One can check that they satisfy
From these commutation relations one can obtain the adjoint representation
of sl(2), using (2.39)
           ( 0   0   0 )              ( 0  0  −1 )              ( 0  1  0 )
d(L1 ) =   ( 0   0  −1 )  ;  d(L2 ) = ( 0  0   0 )  ;  d(L3 ) = ( 1  0  0 )      (2.79)
           ( 0  −1   0 )              ( 1  0   0 )              ( 0  0  0 )
According to (2.49), the Killing form of sl(2) is given by
ηij = Tr( d(Li ) d(Lj ) ) = 2 diag( 1, −1, 1 )      (2.80)
sl(2) is a simple algebra and we see that its Killing form is indeed non-
degenerate (see theorem 2.1). From definition 2.11 we conclude sl(2) is a
non-compact Lie algebra.
The trace form (2.45) in the 2-dimensional representation (2.77) of sl(2) is
ηij^{2-dim} = Tr( Li Lj ) = (1/2) diag( 1, −1, 1 )      (2.81)
Similarly to the case of su(2), this trace form is proportional to the Killing form, η^{2-dim} = η/4.
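Since (2.77) itself is not reproduced in this excerpt, the following sketch assumes L1 = σ1/2, L2 = iσ2/2, L3 = σ3/2 — a reconstruction of the stated "factor i in L2" — and checks the commutation relations against the adjoint (2.79) and the indefinite trace form (2.81):

```python
import numpy as np

# Assumed 2-dim rep of sl(2): L1 = s1/2, L2 = i s2/2, L3 = s3/2 (all real)
L1 = np.array([[0., .5], [.5, 0.]])
L2 = np.array([[0., .5], [-.5, 0.]])
L3 = np.array([[.5, 0.], [0., -.5]])

comm = lambda a, b: a @ b - b @ a
assert np.allclose(comm(L1, L2), -L3)  # consistent with the adjoint matrices (2.79)
assert np.allclose(comm(L3, L1), L2)
assert np.allclose(comm(L2, L3), -L1)

# trace form (2.81): (1/2) diag(1,-1,1) -> indefinite, so sl(2) is non-compact
eta2 = np.array([[np.trace(a @ b) for b in (L1, L2, L3)] for a in (L1, L2, L3)])
assert np.allclose(eta2, np.diag([1., -1., 1.]) / 2)
```

The indefinite sign in η22 is exactly what distinguishes the non-compact sl(2) from the compact su(2).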
The operators
L± ≡ L1 ± L2 (2.82)
according to (2.78), satisfy commutation relations identical to (2.64)
Proof P does not contain any element of H and contains all elements of G
which are not in H. Using the cyclic property of the trace
Therefore
[H, P] ⊂ P. (2.89)
2
This theorem does not apply to non-compact algebras because the trace form does not provide a Euclidean-type metric, i.e. there can exist null vectors which are orthogonal to themselves. As an example consider sl(2).
[L1 + L2 , L1 − L2 ] = 2L3
[L1 + L2 , L3 ] = −(L1 + L2 ) (2.92)
So
[H, P] ⊂ H + P (2.93)
Notice P is a subalgebra too
[H, G] ⊂ H (2.95)
[H, P] ⊂ P (2.96)
[H, P] = 0 (2.97)
Theorem 2.4 For a simple Lie algebra the invariant bilinear trace form defined in eq. (2.45) is the same in all representations up to an overall constant. Consequently they are all proportional to the Killing form.
Proof Using the definition (2.31) of the adjoint representation and the invari-
ance property (2.48) of η D (T, T 0 ) we have
η −1 η D = λ1l → η D = λη (2.103)
T r(Hi Tm ) = 0 (2.109)
or
[Hi , Tm ] = (hi )mn Tn (2.112)
where we have defined the matrices
Tm → Umn Tn
(hi )mn → (U hi U † )mn (2.115)
Using the Jacobi identity and the commutation relations (2.116) we have that
if α and β are roots then
[Hi , [Eα , Eβ ]] = −[Eα , [Eβ , Hi ]] − [Eβ , [Hi , Eα ]]
= (αi + βi ) [Eα , Eβ ] (2.122)
Since the algebra is closed under the commutator we have that [Eα , Eβ ] must
be an element of the algebra. We have then three possibilities
1. α + β is a root of the algebra and then [Eα , Eβ ] ∼ Eα+β
2. α + β is not a root and then [Eα , Eβ ] = 0
3. α + β = 0 and consequently [Eα , Eβ ] must be an element of the Cartan
subalgebra since it commutes with all Hi .
Since in a semisimple Lie algebra the roots are not degenerate (see (2.121)), we conclude from (2.122) that 2α is never a root.
We then see that the knowledge of the roots of the algebra provides all
the information about the commutation relations and consequently about the
structure of the algebra. From what we have learned so far, we can write the
commutation relations of a semisimple Lie algebra G as
[Hi , Hj ] = 0 (2.123)
[Hi , Eα ] = αi Eα (2.124)
Nαβ Eα+β if α + β is a root
[Eα , Eβ ] = Hα if α + β = 0 (2.125)
0 otherwise
       ( 1   0  0 )          ( 0  0  1 )
λ3 =   ( 0  −1  0 )  ;  λ4 = ( 0  0  0 )  ;
       ( 0   0  0 )          ( 1  0  0 )

       ( 0  0  −i )          ( 0  0  0 )
λ5 =   ( 0  0   0 )  ;  λ6 = ( 0  0  1 )  ;
       ( i  0   0 )          ( 0  1  0 )

       ( 0  0   0 )                  ( 1  0   0 )
λ7 =   ( 0  0  −i )  ;  λ8 = (1/√3)  ( 0  1   0 )      (2.135)
       ( 0  i   0 )                  ( 0  0  −2 )
The trace form in such matrix representation is given by
where the structure constants fijk are completely antisymmetric (see (2.56)) and are given in table 2.1. The diagonal matrices λ3 and λ8 are the generators
i j k    fijk
1 2 3    2
1 4 7    1
1 5 6    −1
2 4 6    1
2 5 7    1
3 4 5    1
3 6 7    −1
4 5 8    √3
6 7 8    √3

Table 2.1: The non-vanishing structure constants fijk of su(3) (up to antisymmetry)
of the Cartan subalgebra. One can easily check that they satisfy the conditions of definition 2.13. We see that the remaining matrices play the role of Tm in (2.112). Therefore we can construct the step operators as linear combinations of them. However, as in the su(2) case, these are complex linear combinations and the step operators are not really generators of su(3). Doing that, and normalizing the generators conveniently, we obtain the Weyl-Cartan basis for such algebra
H1 = (1/√2) λ3 ;   H2 = (1/√2) λ8 ;
E±α1 = (1/2)(λ1 ± iλ2 ) ;   E±α2 = (1/2)(λ6 ± iλ7 ) ;   E±α3 = (1/2)(λ4 ± iλ5 )      (2.138)
So they satisfy
[Root diagram of su(3): the six roots ±α1 , ±α2 , ±α3 , with α3 = α1 + α2 .]
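The basis (2.138) can be checked numerically. The sketch below (an illustration, not from the text) builds the Gell-Mann matrices, forms H1 , H2 and the step operators, and reads off the roots from [Hi , Eα ] = αi Eα :

```python
import numpy as np

# Gell-Mann matrices (2.135)
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l8 = np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)

# Weyl-Cartan basis (2.138)
H = [l3 / np.sqrt(2), l8 / np.sqrt(2)]
E1, E2, E3 = (l1 + 1j * l2) / 2, (l6 + 1j * l7) / 2, (l4 + 1j * l5) / 2

def root(e):
    """Read off alpha from [H_i, E_alpha] = alpha_i E_alpha."""
    alpha = []
    for h in H:
        c = h @ e - e @ h
        r = c[e != 0][0] / e[e != 0][0]   # the proportionality constant
        assert np.allclose(c, r * e)
        alpha.append(r.real)
    return np.array(alpha)

a1, a2, a3 = root(E1), root(E2), root(E3)
assert np.allclose(a3, a1 + a2)               # alpha3 = alpha1 + alpha2
assert np.isclose(a1 @ a1, a2 @ a2)           # all roots of equal length
assert np.isclose(a1 @ a2 / (a1 @ a1), -0.5)  # 120 degrees between simple roots
```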
2α.β/α²   2α.β/β²    θ       α²/β²
   0         0       π/2     undetermined
   1         1       π/3     1
  −1        −1       2π/3    1
   1         2       π/4     2
  −1        −2       3π/4    2
   1         3       π/6     3
  −1        −3       5π/6    3

Table 2.2: The possible scalar products, angles and ratios of squared lengths for the roots
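The entries of table 2.2 can be verified mechanically: writing m = 2α.β/α² and n = 2α.β/β², one has 4 cos²θ = mn, the sign of m fixes the sign of cos θ, and α²/β² = n/m. A small consistency check (an addition to the text):

```python
import math

# rows of table 2.2: (2a.b/a^2, 2a.b/b^2, theta in degrees, a^2/b^2)
table = [(0, 0, 90, None), (1, 1, 60, 1), (-1, -1, 120, 1),
         (1, 2, 45, 2), (-1, -2, 135, 2), (1, 3, 30, 3), (-1, -3, 150, 3)]

for m, n, theta, ratio in table:
    c = math.cos(math.radians(theta))
    assert math.isclose(4 * c * c, m * n, abs_tol=1e-12)  # cos^2(theta) = mn/4
    assert (c >= 0) == (m >= 0)                           # sign of m fixes sign of cos
    if m != 0:
        assert math.isclose(ratio, n / m)                 # a^2/b^2 = n/m
```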
0 ≤ mn ≤ 4 (2.151)
This condition is very restrictive and from it we get that the possible values of scalar products, angles and ratios of squared lengths between any two roots are those given in table 2.2. For the case of α being parallel or anti-parallel to β we have cos θ = ±1 and consequently mn = 4. In this case the possible values of m and n are
1. 2α.β/α² = ±2 and 2α.β/β² = ±2

2. 2α.β/α² = ±1 and 2α.β/β² = ±4

3. 2α.β/α² = ±4 and 2α.β/β² = ±1
T1 (α) = (1/2) (Eα + E−α )
T2 (α) = (1/2i) (Eα − E−α )      (2.152)
which satisfy the commutation relations
We now want to show that if α and β are roots of a given Lie algebra G,
then σα (β) is also a root. Let us introduce the operator
and so
[σα (x).H, Ẽβ ] = x.β Ẽβ (2.162)
However, if we perform a reflection twice we get back to where we started, i.e.,
σ 2 = 1. Therefore denoting σα (x) by y we get that σα (y) = x, and then from
(2.162)
[y.H, Ẽβ ] = σα (y).β Ẽβ (2.163)
and so
[Hi , Ẽβ ] = σα (β)i Ẽβ (2.164)
Therefore Ẽβ , defined in (2.157), is a step operator corresponding to the root
σα (β). Consequently if α and β are roots, σα (β) is necessarily a root (similarly
σβ (α) ).
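The Weyl reflection, and the closure of the root system under it, is easy to verify numerically. The sketch below uses an explicit 2-dimensional realization of the A2 roots, an assumption made only for illustration:

```python
import numpy as np

def weyl(alpha, x):
    """Weyl reflection sigma_alpha(x) = x - (2 alpha.x / alpha^2) alpha."""
    return x - 2 * np.dot(alpha, x) / np.dot(alpha, alpha) * alpha

# A2 (su(3)) roots: simple roots at 120 degrees, a3 = a1 + a2, and negatives
a1 = np.array([1.0, 0.0])
a2 = np.array([-0.5, np.sqrt(3) / 2])
roots = [a1, a2, a1 + a2, -a1, -a2, -(a1 + a2)]

# sigma_alpha(beta) is again a root, and sigma_alpha^2 = identity
for a in roots:
    for b in roots:
        image = weyl(a, b)
        assert any(np.allclose(image, r) for r in roots)
        assert np.allclose(weyl(a, image), b)
```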
Example 2.7 In section 2.7 we have discussed the algebra of the group SU (3). The root diagram with the planes perpendicular to the roots is given in figure 2.3. One can see that the root diagram is invariant under Weyl reflections.
We have
Figure 2.3: The planes orthogonal to the roots of A2 (SU (3) or SL(3))
σ1 σ2 :   α1 → α2 ,    α2 → −α3 ,   α3 → −α1 ;
         −α1 → −α2 ,  −α2 → α3 ,   −α3 → α1

σ2 σ1 :   α1 → −α3 ,   α2 → α1 ,    α3 → −α2 ;      (2.165)
         −α1 → α3 ,   −α2 → −α1 ,  −α3 → α2
Definition 2.15 The Weyl group of a Lie algebra, or of its root system, is
the finite discrete group generated by the Weyl reflections.
From the considerations above we see that the Weyl group leaves invariant
the root system. However it does not contain all the symmetries of the root
system. The inversion α ↔ −α is certainly a symmetry of the root system of
any semisimple Lie algebra but, in general, it is not an element of the Weyl group. In the case of su(3) discussed in example 2.7 the inversion can not be written in terms of reflections. In addition, the root diagram of su(3) is invariant under rotations of π/3, and this operation is not an element of the Weyl group of su(3).
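This can be checked explicitly for su(3): generating the group from the two reflections in the simple roots gives six elements, and −1 is not among them. A sketch (assuming an explicit 2D realization of the simple roots):

```python
import numpy as np

def refl(alpha):
    """Matrix of the Weyl reflection through the plane orthogonal to alpha."""
    a = np.asarray(alpha, dtype=float)
    return np.eye(2) - 2 * np.outer(a, a) / (a @ a)

# reflections in the two simple roots of A2 (at 120 degrees)
s1 = refl([1.0, 0.0])
s2 = refl([-0.5, np.sqrt(3) / 2])

# generate the Weyl group by closure
group = [np.eye(2)]
frontier = [np.eye(2)]
while frontier:
    new = []
    for g in frontier:
        for s in (s1, s2):
            h = s @ g
            if not any(np.allclose(h, k) for k in group):
                group.append(h)
                new.append(h)
    frontier = new

assert len(group) == 6                                     # Weyl group of su(3)
assert not any(np.allclose(g, -np.eye(2)) for g in group)  # inversion is NOT in it
```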
1. Φ does not contain zero, spans a Euclidean space of the same dimension as the rank of the Lie algebra G, and the number of elements of Φ is equal to dim G − rank G.
Notice that if the root diagram decomposes into two or more disjoint and
mutually orthogonal subdiagrams then the corresponding Lie algebra is not
simple. Suppose the rank of the algebra is r and that the diagram decomposes
into two orthogonal subdiagrams of dimensions m and n such that m + n = r.
By taking basis vi (i = 1, 2...m) and uk (k = 1, 2...n) in each subdiagram we can
split the generators of the Cartan subalgebra into two subsets of the form Hv ≡
v.H and Hu ≡ u.H. From (2.158) we see that the generators Hv commute
with all step operators corresponding to roots in the subdiagram generated by
uk , and vice versa. In addition, since the sum of a root of one subdiagram
with a root of the other is not a root, we conclude that the corresponding step
operators commute. Therefore each subdiagram corresponds to an invariant subalgebra of the Lie algebra whose root diagram is their union.
2.10. WEYL CHAMBERS AND SIMPLE ROOTS 73
[Figure 2.4: two mutually orthogonal su(2) root diagrams, with roots ±α and ±β, and a Weyl chamber indicated.]
Example 2.8 The root diagram shown in figure 2.4 is made of two orthogonal diagrams. Since each one is the diagram of an su(2) algebra we conclude, from the discussion above, that it corresponds to the algebra su(2)⊕su(2). Remember that the ratio of the squared lengths of the orthogonal roots is undetermined in this case (see table 2.2).
[Figure: the root diagram of su(3) with the planes 1, 2 and 3 orthogonal to the roots, and the Fundamental Weyl Chamber indicated.]
Definition 2.18 We say a positive root is a simple root if it can not be written
as the sum of two positive roots.
Proof If α.β > 0 we see from table 2.2 that either 2α.β/α² or 2α.β/β² is equal to 1. Without loss of generality we can take 2α.β/α² = 1. Therefore
σα (β) = β − (2α.β/α²) α = β − α      (2.166)
So, from the invariance of the root system under the Weyl group, β − α is also
a root, as well as α − β. The proof for the case α.β < 0 is similar. 2
Theorem 2.6 Let α and β be distinct simple roots. Then α − β is not a root
and α.β ≤ 0.
Theorem 2.7 Let α1 , α2 ,... αr be the set of all simple roots of a semisimple
Lie algebra G. Then r = rank G and each root α of G can be written as
r
X
α= na α a (2.167)
a=1
where na are integers, and they are positive or zero if α is a positive root and
negative or zero if α is a negative root.
Proof Suppose the simple roots are linearly dependent. Denote by xa and −yb the positive and negative coefficients, respectively, of a vanishing linear combination of the simple roots. Then write
Σ_{a=1}^{s} xa αa = Σ_{b=s+1}^{r} yb αb ≡ v      (2.168)

Taking the scalar product of v with itself and using theorem 2.6,

v² = Σ_{a,b} xa yb αa .αb ≤ 0      (2.169)
Since v is a vector on a Euclidean space it follows that the only possibility is v² = 0, and so v = 0. But this implies xa = yb = 0 and consequently the simple roots must be linearly independent. Now let α be a positive root. If it is not simple then α = β + γ with β and γ both positive. If β and/or γ are not simple we can write them as the sum of two positive roots. Notice that α can not appear in the expansion of β and/or γ in terms of two positive roots, since if x is a vector of the Fundamental Weyl Chamber we have x.α = x.β + x.γ. Since they are all positive roots we have x.α > x.β and x.α > x.γ. Therefore β or γ can not be written as α + δ with δ a positive root. For the same reason
β and γ will not appear in the expansion of any further root appearing in
this process. Thus, we can continue such a process until α is written as a sum of simple roots, i.e. α = Σ_{a=1}^{r} na αa with each na being zero or a positive integer. Since, for semisimple Lie algebras, the roots come in pairs (α and −α) it follows that the negative roots are written in terms of the simple roots in the same way, with na being zero or negative integers. We then see that the set of simple roots spans the root space. Since they are linearly independent, they form a basis and consequently r = rank G. 2
2.11. CARTAN MATRIX AND DYNKIN DIAGRAMS 77
4. From the properties of the roots discussed in section 2.8 we see that
6. The Cartan matrix is symmetric only when all roots have the same length.
Example 2.11 The algebra of SO(3) or SU (2) has only one simple root and
therefore its Cartan matrix is trivial, i.e., K = 2.
Example 2.13 From figure 2.6 we see that the Cartan matrix of A2 (su(3)
or sl(3)) is
K = (  2  −1 )      (2.177)
    ( −1   2 )
Figure 2.7: The root diagram and Fundamental Weyl chamber of so(5) (or
sp(2))
Example 2.15 The last simple Lie algebra of rank 2 is the exceptional algebra
G2 . Its root diagram is shown in figure 2.8. It has 12 roots and therefore
dimension 14. The Fundamental Weyl Chamber is the shaded region. The
positive roots are the ones labelled from 1 to 6 on the diagram. The simple
roots are α1 and α2 . The Cartan matrix is given by
K = (  2  −1 )      (2.179)
    ( −3   2 )
We have seen that the relevant information contained in the Cartan matrix
is given by its off-diagonal elements. We have also seen that if Kab 6= 0 then
one of Kab or Kba is necessarily equal to −1. Therefore the information of the
off-diagonal elements can be given by the positive integers Kab Kba (no sum in
[Figure 2.8: the root diagram of G2 , with the roots α1 , ..., α6 and their negatives, and the Fundamental Weyl Chamber indicated.]
a and b). These integers can be encoded in a diagram called Dynkin diagram
which is constructed in the following way:
2. Join the point a to the point b by Kab Kba lines. Remember that the
number of lines can be 0, 1, 2 or 3.
3. If the number of lines joining the points a and b exceeds 1, put an arrow on the lines directed towards the one whose corresponding simple root has a shorter length than the other.
αa²/αb² = Kab /Kba = 1/(Kab Kba )      (2.180)

and consequently αb² ≥ αa². So the number of lines joining two points in a Dynkin diagram gives the ratio of the squared lengths of the corresponding simple roots.
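The rules above can be exercised on the rank-2 Cartan matrices; note that the so(5) matrix used below is an assumed form, since (2.178) is not reproduced in this excerpt:

```python
# (name, Cartan matrix, expected number of lines in the Dynkin diagram)
for name, K, lines in [("A2", [[2, -1], [-1, 2]], 1),
                       ("B2", [[2, -1], [-2, 2]], 2),   # so(5)/sp(2); assumed form
                       ("G2", [[2, -1], [-3, 2]], 3)]:
    # number of lines joining the two points is K12 * K21
    assert K[0][1] * K[1][0] == lines
    # (2.180): a1^2/a2^2 = K12/K21 = 1/(K12*K21), since K12 = -1 here
    assert abs(K[0][1] / K[1][0] - 1 / (K[0][1] * K[1][0])) < 1e-12
```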
Example 2.16 The Cartan matrix of the algebra of SO(3) or SU (2) is simply
K = 2. It has only one simple root and therefore its Dynkin diagram is just a
point. The algebra of SU (3) on the other hand has two simple roots. From its
Cartan matrix given in example 2.13 and the rules above we see that its Dynkin
diagram is formed by two points linked by just one line. Using the rules above
one can easily construct the Dynkin diagrams for the algebras discussed in
examples 2.11 - 2.15. They are given in figure 2.9.
The root system postulates, given in definition 2.16, impose severe restrictions on the possible Dynkin diagrams. In section 2.15 we will classify the admissible diagrams, and we will see that there exist only nine types of simple Lie algebras.
We have said that for non-simple algebras the Cartan matrix has a block diagonal form. This implies that the corresponding Dynkin diagram is not connected. Therefore a Lie algebra is simple if and only if its Dynkin diagram is connected.
We say a Lie algebra is simply laced if the points of its Dynkin diagram
are joined by at most one link. This means all the roots of the algebra have
the same length.
α. (β + (r + 1)α) ≤ 0 (2.182)
α. (β + sα) ≥ 0 (2.183)
of β + pα under σα has to be β − qα, and vice versa, since they are the roots
that are most distant from the hyperplane perpendicular to α. We then have
2α.(β − qα)
σα (β − qα) = β − qα − α = β + pα (2.186)
α2
and since the only possible values of 2α.β/α² are 0, ±1, ±2 and ±3 we get that
q − p = 2α.β/α² = 0, ±1, ±2, ±3      (2.187)
Denoting β − qα by γ we see that for the α-root string through γ we have
q = 0 and therefore the possible values of p are 0, 1, 2 and 3. Consequently
the number of roots in any string can not exceed 4.
For a simply laced Lie algebra the only possible values of 2α.β/α² are 0 and ±1. Therefore the root strings, in this case, can have at most two roots.
Notice that if α and β are distinct simple roots, we necessarily have q = 0,
since β − α is never a root in this case. So
[E−α , Eβ ] = [Eα , E−β ] = 0 (2.188)
If, in addition, α.β = 0 we get from (2.187) that p = 0 and consequently α + β
is not a root either. For a semisimple Lie algebra, since if α is a root then −α
is also a root, it follows that
[Eα , Eβ ] = [E−α , E−β ] = 0 (2.189)
for α and β simple roots and α.β = 0. We can read this result from the Dynkin
diagram since, if two points are not linked then the corresponding simple roots
are orthogonal.
Example 2.17 For the algebra of SU (3) we see from the diagram shown in figure 2.6 that the α1 -root string through α2 contains only two roots, namely α2 and α3 = α2 + α1 .
Example 2.18 From the root diagram shown in figure 2.7 we see that, for the algebra of SO(5), the α1 -root string through α2 contains three roots: α2 , α3 = α1 + α2 , and α4 = α2 + 2α1 .
Example 2.19 The algebra G2 is the only simple Lie algebra which can have
root strings with four roots. From the diagram shown in figure 2.8 we see that
the α1 -root string through α2 contains the roots α2 , α3 = α2 +α1 , α5 = α2 +2α1
and α6 = 2α2 + 3α1 .
where na are given by (2.167). The only roots of height one are the simple
roots. This definition classifies the roots according to a hierarchy. We can
reconstruct the root system of a Lie algebra from its Dynkin diagram starting
from the roots of lowest height as we now explain.
Given the Dynkin diagram we can easily construct the Cartan matrix. We
know that the diagonal elements are always 2. The off diagonal elements are
zero whenever the corresponding points (simple roots) of the diagram are not
linked. When they are linked we have Kab (or Kba ) equal to −1 and Kba (or Kab ) equal to minus the number of links between those points.
Example 2.20 The Dynkin diagram of SO(7) is given in figure 2.10
We see that the simple root 3 (according to the rules of section 2.11 ) has a
length smaller than that of the other two. So we have K23 = −2 and K32 = −1.
Since the roots 1 and 2 have the same length we have K12 = K21 = −1. K13
and K31 are zero because there are no links between the roots 1 and 3. Therefore
    (  2  −1   0 )
K = ( −1   2  −2 )      (2.191)
    (  0  −1   2 )
Once the Cartan matrix has been determined from the Dynkin diagram, one obtains all the roots of the algebra from the Cartan matrix. We are interested in
semisimple Lie algebras. Therefore, since in such case the roots come in pairs
α and −α, we have to find just the positive roots. We now give an algorithm
for determining the roots of a given height n from those of height n − 1. The
steps are
2.13. COMMUTATION RELATIONS FROM DYNKIN DIAGRAMS 85
where p and q are the highest positive integers such that α(l) + pαb and
α(l) −qαb are roots. The integer q can be determined by looking at the set
of roots of height smaller than l (which have already been determined)
and checking what is the root of smallest height of the form α(l) − mαb .
One then finds p from (2.193). If p does not vanish, α(l) + αb is a root.
Notice that if p ≥ 2 one also determines roots of height greater than
l + 1. By applying this procedure using all simple roots and all roots of
height l one determines all roots of height l + 1.
4. The process finishes when no root of a given height l + 1 is found. That is because there can not exist roots of height l + 2 if there are no roots of height l + 1.
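The algorithm just described can be sketched in code (an illustrative implementation, not from the text), representing each positive root α by the integer coefficients na of (2.167):

```python
def positive_roots(K):
    """All positive roots of the algebra with Cartan matrix K, generated
    height by height, each root stored as its tuple of coefficients n_a."""
    r = len(K)
    roots = {tuple(1 if i == a else 0 for i in range(r)) for a in range(r)}
    frontier = set(roots)
    while frontier:
        new = set()
        for n in frontier:
            for b in range(r):
                # q: highest integer such that alpha - q*alpha_b is a root
                q, m = 0, list(n)
                m[b] -= 1
                while tuple(m) in roots:
                    q += 1
                    m[b] -= 1
                # q - p = 2 alpha.alpha_b / alpha_b^2 = sum_a n_a K_ab  (2.187), (2.198)
                p = q - sum(n[a] * K[a][b] for a in range(r))
                if p > 0:  # then alpha + alpha_b is a root
                    m = list(n)
                    m[b] += 1
                    if tuple(m) not in roots:
                        roots.add(tuple(m))
                        new.add(tuple(m))
        frontier = new
    return roots

# su(3): positive roots alpha1, alpha2, alpha1 + alpha2
assert positive_roots([[2, -1], [-1, 2]]) == {(1, 0), (0, 1), (1, 1)}
# so(7), Cartan matrix (2.191): 9 positive roots, so dim = 2*9 + 3 = 21
assert len(positive_roots([[2, -1, 0], [-1, 2, -2], [0, -1, 2]])) == 9
```

Representing roots by their coefficient tuples makes the "look at roots of smaller height" step a plain set lookup.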
Therefore we have shown that the root system of a Lie algebra can be
determined from its Dynkin diagram. In some cases it is more practical to
determine the root system using the Weyl reflections through hyperplanes
perpendicular to the simple roots.
The root which has the highest height is called the highest root of the algebra and is generally denoted ψ. For simple Lie algebras the highest root is unique. The integer h(ψ) + 1 = Σ_{a=1}^{rank G} ma + 1, where ψ = Σ_{a=1}^{rank G} ma αa , is called the Coxeter number of the algebra.
We now show how to determine the commutation relations from the root
system of the algebra. We have been using the Cartan-Weyl basis introduced
in (2.134). However the commutation relations take a simpler form in the so
called Chevalley basis . In this basis the Cartan subalgebra generators are
given by
Ha ≡ 2αa .H / αa²      (2.194)
where αa (a = 1, 2, ... rank G) are the simple roots and αa .H = αa^i Hi , where Hi are the Cartan subalgebra generators in the Cartan-Weyl basis and αa^i are the components of the simple root αa in that basis, i.e. [Hi , Eαa ] = αa^i Eαa . The generators Ha are not orthonormal like the Hi . From (2.134) and (2.170) we have that
Tr(Ha Hb ) = 4αa .αb / (αa² αb²) = (2/αa²) Kab      (2.195)
The generators Ha obviously commute among themselves
[Ha , Hb ] = 0 (2.196)
The commutation relations between Ha and the step operators are given by (see (2.124))
[ Ha , Eα ] = (2α.αa / αa²) Eα = Kαa Eα      (2.197)
where we have defined Kαa ≡ 2α.αa / αa². Since α can be written as in (2.167) we see that Kαa is a linear combination, with integer coefficients all of the same sign, of the a-th column of the Cartan matrix
Kαa = 2α.αa / αa² = Σ_{b=1}^{rank G} nb Kba      (2.198)
where α = Σ_{b=1}^{rank G} nb αb . Notice that the factor multiplying Eα on the r.h.s
of (2.197) is an integer. In fact this is a property of the Chevalley basis. All
the structure constants of the algebra in this basis are integer numbers. The
commutation relations (2.197) are determined once one knows the root system
of the algebra.
We now consider the commutation relations between step operators. From (2.125)

              Nαβ Eα+β      if α + β is a root
[ Eα , Eβ ] = Hα = ma Ha    if α + β = 0          (2.199)
              0             otherwise
from the root system of the algebra and also from the Jacobi identity . Let us
explain now how to do that.
Notice that from the antisymmetry of the Lie bracket
Nαβ = −Nβα (2.200)
for any pair of roots α and β. The structure constants Nαβ are defined up to
rescaling of the step operators. If we make the transformation
Eα → ρα Eα (2.201)
keeping the Cartan subalgebra generators unchanged, then from (2.199) the
structure constants Nαβ must transform as
Nαβ → (ρα ρβ / ρα+β ) Nαβ      (2.202)
and
ρα ρ−α = 1 (2.203)
As we have said in section 2.9, any symmetry of the root diagram can be ele-
vated to an automorphism of the corresponding Lie algebra. In any semisimple
Lie algebra the transformation α → −α is a symmetry of the root diagram
since if α is a root so is −α. We then define the transformation σ : G → G as
σ(Ha ) = −Ha ; σ(Eα ) = ηα E−α (2.204)
and σ 2 = 1. From the commutation relations (2.196), (2.197) and (2.199) one
sees that such transformation is an automorphism if
ηα η−α = 1
Nαβ = (ηα ηβ / ηα+β ) N−α,−β      (2.205)
Using the freedom to rescale the step operators as in (2.202) one sees that it is
possible to satisfy (2.205) and make (2.204) an automorphism. In particular
it is possible to choose all ηα equals to −1 and therefore
Nαβ = −N−α,−β (2.206)
Consider the α-root string through β given by (2.181). Using the Jacobi
identity for the step operators Eα , E−α and Eβ+nα , where p > n > 1 and p is
the highest integer such that β + pα is a root, we obtain from (2.199) that
Nβ+nα,−α Nβ+(n−1)α,α − Nβ+nα,α Nβ+(n+1)α,−α = 2α.(β + nα)/α²      (2.207)
Notice that the second term on the l.h.s of this equation vanishes when n = p
, since β + (p + 1)α is not a root. Adding up the equations (2.207) for n taking
the values 1, 2, ... p , we obtain that
Nβ+α,−α Nβα = p (2α.β/α²) + 2 ( p + (p − 1) + (p − 2) + ... + 1 )
            = p (q + 1)                                          (2.208)
Tr([ Eα , Eβ ] E−α−β ) = Nαβ (2/(α + β)²)
                       = −Tr([ E−α , E−β ] Eα+β )
                       = −Tr([ Eα+β , E−α ] E−β )
                       = −Nα+β,−α (2/β²)            (2.209)
Consequently
Nα+β,−α = − ( β²/(α + β)² ) Nαβ      (2.210)
Substituting this into (2.208) we get
Nαβ² = p (q + 1) (α + β)²/β²      (2.211)
and similarly
Nβ,−α² = q (p + 1) (β − α)²/β²      (2.212)
The relation (2.211) can be put in a simpler form. From (2.187) we have
that (see section 25.1 of [HUM 72])
We want to show the r.h.s of this relation is zero. We distinguish two cases:
1. In the case where α² ≥ β² we have | 2α.β/α² | ≤ | 2α.β/β² |. From table 2.2 we see that the possible values of 2α.β/α² are −1, 0 or 1. In the first case we
get that the first factor on the r.h.s of (2.213) vanishes. On the other
two cases we have that α.β ≥ 0 and then (α + β)2 is strictly larger than
both, α2 and β 2 . Since we are assuming α + β is a root and since, as
we have said at the end of section 2.8, there can be no more than two
different root lengths in each component of a root system, we conclude
that α2 = β 2 . For the same reason β + 2α can not be a root since
(β + 2α)2 > (β + α)2 and therefore p = 1. But this implies that the
second factor on the r.h.s of (2.213) vanishes.
[Ha , Hb ] = 0 (2.216)
[ Ha , Eα ] = (2α.αa / αa²) Eα = Kαa Eα      (2.217)
              (q + 1) ε(α, β) Eα+β       if α + β is a root
[ Eα , Eβ ] = Hα = 2α.H/α² = ma Ha       if α + β = 0        (2.218)
              0                          otherwise
where we have denoted ε(α, β) the sign of the structure constant Nαβ , i.e.
Nαβ = (q +1)ε(α, β). These signs, also called cocycles, are determined through
the Jacobi identity as explained in section 2.14. As we have said before q is
the highest positive integer such that β − qα is a root. However when α + β
is a root, which is the case we are interested in (2.218), it is true that q is
also the highest positive integer such that α − qβ is a root. The reason is the
following: in a semisimple Lie algebra the roots always appear in pairs (α and
−α). Therefore if β − α is a root so is α − β. In addition we have seen in
section 2.12 that the root strings are unbroken and they can have at most four
roots. Therefore, since we are assuming that α + β is a root, the only possible
way of not satisfying what we said before is to have, let us say, the α-root
string through β as β − 2α, β − α, β, β + α; and the β-root string through α
as α − β, α, α + β or α − β, α, α + β, α + 2β. But from (2.187) we have
2α.β/α² = 1      (2.219)
and
2α.β/β² = 0 or −1      (2.220)
which are clearly incompatible.
We have said in section 2.12 that for a simply laced Lie algebra there can
be at most two roots in a root string. Therefore if α + β is a root α − β is not,
and therefore q = 0. Consequently the structure constants Nαβ are always ±1
for a simply laced algebra.
Consider three roots α, β and γ such that their sum vanishes. The Jacobi
identity for their corresponding step operators yields, using (2.216) - (2.218)
and also
(1/γ²)(qαβ + 1) = (1/α²)(qβγ + 1) = (1/β²)(qγα + 1)      (2.225)
2.14. FINDING THE COCYCLES ε(α, β) 93
Further relations are found by considering Jacobi identities for three step op-
erators corresponding to roots adding up to a fourth root. Now such identities
yield relations involving products of two cocycles. However, in many situations
there are only two non vanishing terms in the Jacobi identity. Consider three
roots α, β and γ such that α + β, β + γ and α + β + γ are roots but α + γ
is not a root. Then the Jacobi identity for the corresponding step operators
yields
and
(qαβ + 1)(qα+β,γ + 1) = (qβγ + 1)(qβ+γ,α + 1) (2.228)
There remains to consider the cases where the three terms in the Jacobi identity
for three step operators do not vanish. Such thing happens when we have three
roots α, β and γ such that α + β, α + γ, β + γ and α + β + γ are roots as
well. We now classify all cases where that happens. We shall denote long roots
by µ, ν, ρ, ... and short roots by e, f , g, ... . From the properties of roots discussed in section 2.8 one gets that 2µ.ν/µ², 2µ.e/µ² and 2e.f /e² equal 0 or ±1. Let us consider the possible cases:
1. All three roots are long. If µ + ν is a root then (µ + ν)²/µ² = 2 + 2µ.ν/µ². Since µ + ν can not be longer than µ one gets 2µ.ν/µ² = −1. So µ + ν is a long root and if µ + ν + ρ is also a root one gets by the same argument that 2(µ + ν).ρ/µ² = −1. Therefore µ + ρ and ν + ρ can not be roots simultaneously since that would imply, by the same arguments, 2µ.ρ/µ² = 2ν.ρ/µ² = −1, which is in contradiction with the result above.
2. Two roots are long and one short. If µ + e is a root then (µ + e)²/µ² = 1 + e²/µ² + 2µ.e/µ². Since µ + e can not be longer than µ it follows that 2µ.e/µ² = −1. Therefore µ + e is a short root since (µ + e)² = e². So, if µ + e + ν is a root then (µ + e + ν)²/ν² = 1 + (µ + e)²/µ² + 2(µ + e).ν/ν² and therefore 2(µ + e).ν/ν² = −1. Consequently µ + ν and ν + e can not be roots simultaneously since that would imply, by the same arguments, 2µ.ν/ν² = 2ν.e/ν² = −1.
3. Two roots are short and one long. Analogously, if e + f and µ + e + f are roots one gets 2(e + f ).µ/µ² = −1 independently of e + f being short or long. So, it is impossible for µ + e and µ + f to be both roots since one would get 2µ.e/µ² = 2µ.f /µ² = −1.
4. All three roots are short. If e + f is a root then (e + f )²/e² = 2 + 2e.f /e² and there exist three possibilities:

   (a) 2e.f /e² = −1 and e + f is a short root.

   (b) 2e.f /e² = 1 and (e + f )²/e² = 3 (can only happen in G2 ).

   (c) 2e.f /e² = 0 and (e + f )²/e² = 2 (can only happen in Bn , Cn and F4 ).
In section 2.8 we have seen that the possible ratios of the squared lengths of the
roots are 1, 2 and 3. Therefore there can not exist roots with three different
lengths in the same irreducible root system, since if $\frac{\beta^2}{\alpha^2} = 2$ and $\frac{\beta^2}{\gamma^2} = 3$ then
$\frac{\gamma^2}{\alpha^2} = \frac{2}{3}$, which is not one of the allowed ratios.
Consider the case 4.b and let g be the third short root. Then if e + g is a
root we have $\frac{(e+g)^2}{(e+f)^2} = \frac{2}{3} + \frac{2e\cdot g}{(e+f)^2} = 1$ or $\frac{1}{3}$. But this is impossible since $\frac{2e\cdot g}{(e+f)^2}$
would not be an integer. So the second case is ruled out, since we would not
have e + f, e + g, f + g and e + f + g all roots.
Consider the case 4.c. If e + g is a root then $\frac{(e+g)^2}{(e+f)^2} = 1 + \frac{1}{2}\frac{2e\cdot g}{g^2} = 1$ or
$\frac{1}{2}$. Therefore $\frac{2e\cdot g}{g^2} = 0$ or $-1$. Similarly, if f + g is a root we get $\frac{2f\cdot g}{g^2} = 0$
or $-1$. But if e + f + g is also a root then it has to be a short root, since
$\frac{(e+f+g)^2}{(e+f)^2} = \frac{3}{2} + \frac{2(e+f)\cdot g}{(e+f)^2}$. Consequently $\frac{2(e+f)\cdot g}{(e+f)^2} = -1$ and $\frac{(e+f+g)^2}{(e+f)^2} = \frac{1}{2}$. It then
follows that $\frac{2e\cdot g}{g^2} + \frac{2f\cdot g}{g^2} = \frac{2(e+f)\cdot g}{(e+f)^2}\,\frac{(e+f)^2}{g^2} = -2$. Therefore in the case 4.c we can
have e + f, e + g, f + g and e + f + g all roots if $e\cdot f = 0$, $\frac{2e\cdot g}{g^2} = \frac{2f\cdot g}{g^2} = -1$.
Consider the case 4.a. Again, if e + g is a root then $\frac{(e+g)^2}{g^2} = 2 + \frac{2e\cdot g}{g^2} = 1$ or
$2$. So $\frac{2e\cdot g}{g^2} = 0$ or $-1$. Similarly, if f + g is a root, $\frac{2f\cdot g}{g^2} = 0$ or $-1$. If e + f + g is
also a root then $\frac{(e+f+g)^2}{g^2} = 2 + \frac{2(e+f)\cdot g}{g^2} = 1$ or $2$. Therefore $\frac{2(e+f)\cdot g}{g^2} = 0$ or $-1$.
Consequently $\frac{2e\cdot g}{g^2}$ and $\frac{2f\cdot g}{g^2}$ can not be both $-1$. Suppose then $\frac{2e\cdot g}{g^2} = 0$ and
consequently e + g is a long root, i.e. $\frac{(e+g)^2}{g^2} = 2$. According to the arguments
used in case 4.c we get that e + f + g is a short root and then $\frac{2f\cdot g}{g^2} = -1$.
We then conclude that the only possibility for the occurrence of three short
roots e, f and g, such that the sum of any two of them and e + f + g are all roots,
is that two of them are orthogonal, let us say $e\cdot f = 0$ and $\frac{2e\cdot g}{g^2} = \frac{2f\cdot g}{g^2} = -1$.
This can only happen in the algebras Cn or F4. Therefore none of the three
terms in the Jacobi identity for the corresponding step operators will vanish,
and one obtains a nontrivial relation among the corresponding cocycles.
According to the discussion in section 2.12, any root string in an algebra where
the ratio of the squared lengths of roots is 1 or 2 can have at most 3 roots.
From (2.187) we see that $q_{ef} = 1$ and $q_{ge} = q_{fg} = q_{e+f,g} = q_{g+e,f} = q_{f+g,e} = 0$,
which fixes the factors $(q+1)$ appearing in that relation.
1. The cocycles involving two negative roots, ε(−α, −β) with α and β both
positive, are determined from those involving two positive roots through
the relation (2.222).
2. The cocycles involving one positive and one negative root, ε(−α, β) with
α and β both positive, are also determined from those involving
two positive roots through the relations (2.224) and (2.222). Indeed, if
−α + β is a positive root we write −α + β = γ, and if it is negative we
write −α + β = −γ, with γ positive in both cases. Therefore from (2.224)
and (2.222) it follows that ε(−α, β) = ε(−γ, −α) = −ε(γ, α) in the first case,
and ε(−α, β) = ε(β, γ) in the second case.
Using such an algorithm one can then verify that there will be one cocycle to
be chosen freely for each positive non-simple root of the algebra. Once those
cocycles are chosen, all the others are uniquely determined.
Therefore each point of the diagram will be associated to a unit vector $\varepsilon_a$, and
these are all linearly independent. They satisfy

$$2\varepsilon_a\cdot\varepsilon_b = \frac{2\alpha_a\cdot\alpha_b}{\sqrt{\alpha_a^2}\sqrt{\alpha_b^2}} = -\sqrt{K_{ab}K_{ba}} \qquad (2.232)$$

Now, from theorem 2.6 we have that $\varepsilon_a\cdot\varepsilon_b \le 0$, and therefore from (2.174)

$$2\varepsilon_a\cdot\varepsilon_b = 0,\ -1,\ -\sqrt{2},\ -\sqrt{3} \qquad (2.233)$$

which correspond to minus the square root of the number of lines joining
the points a and b. We shall call a set of unit vectors satisfying (2.233) an
admissible set.

One notices that by omitting some $\varepsilon_a$'s, the remaining ones form an admissible set, whose diagram is obtained from the original one by omitting the
corresponding points and all lines attached to them. So we have the obvious
lemma.
$$0 < \left(\sum_{a=1}^{r}\varepsilon_a\right)^2 = r + 2\sum_{\text{pairs}} \varepsilon_a\cdot\varepsilon_b \qquad (2.235)$$

And from (2.233) we see that if $\varepsilon_a$ and $\varepsilon_b$ are linked, then $2\varepsilon_a\cdot\varepsilon_b \le -1$. In order
to keep the inequality we see that the number of linked pairs of points must
be smaller than or equal to r − 1. □
Proof: If a diagram has a loop, we see from lemma 2.2 that the loop itself
would be an admissible diagram. But that would violate lemma 2.3, since the
number of linked pairs of vertices is equal to the number of vertices. □

Lemma 2.4 The number of lines attached to a given vertex can not exceed
three.
$$\varepsilon_a\cdot\varepsilon_b = 0 \qquad a, b = 1, 2, 3, \dots k \qquad (2.236)$$

So we can write

$$\eta = \sum_{a=1}^{k} (\eta\cdot\varepsilon_a)\,\varepsilon_a + (\eta\cdot\varepsilon_0)\,\varepsilon_0 \qquad (2.237)$$
Corollary 2.2 The only connected diagram which has a triple link is the one
shown in figure 2.12 and it corresponds to the exceptional Lie algebra G2 .
$$\epsilon^2 = k + 2\sum_{\text{pairs}} \varepsilon_a\cdot\varepsilon_b \qquad (2.241)$$

But since $2\varepsilon_a\cdot\varepsilon_b = -1$ for $\varepsilon_a$ and $\varepsilon_b$ being nearest neighbours, we have

$$\epsilon^2 = k + (k-1)(-1) = 1 \qquad (2.242)$$

or
$$\eta\cdot\epsilon = 0 \qquad (2.244)$$

But since η and the $\varepsilon_a$ belong to an admissible diagram, we have that they satisfy
(2.233). Therefore η and $\epsilon$ also satisfy (2.233), and consequently D′ is an
admissible diagram.
Corollary 2.4 Any admissible diagram can not have subdiagrams of the form
shown in figure 2.14.
The reason is that by lemma 2.3 we would obtain that the diagrams shown
in figure 2.15 are subdiagrams of admissible diagrams. From lemmas 2.2 and
2.4 we see that this is impossible.
So, from the results obtained so far we see that an admissible diagram has
to have one of the forms shown in figure 2.16.
Consider the diagram B) of figure 2.16, and define the vectors

$$\epsilon = \sum_{a=1}^{p} a\,\varepsilon_a \qquad ; \qquad \eta = \sum_{a=1}^{q} a\,\eta_a \qquad (2.245)$$
Figure 2.15:
2.15. THE CLASSIFICATION OF SIMPLE LIE ALGEBRAS 101
Figure 2.16:
Therefore

$$\epsilon^2 = \sum_{a=1}^{p} a^2 + 2\sum_{\text{pairs}} ab\,\varepsilon_a\cdot\varepsilon_b = \sum_{a=1}^{p} a^2 - \sum_{a=1}^{p-1} a(a+1) = p^2 - \sum_{a=1}^{p-1} a = p^2 - p(p-1)/2 = p(p+1)/2 \qquad (2.246)$$

where we have used the fact that $2\varepsilon_a\cdot\varepsilon_b = -1$ for $\varepsilon_a$ and $\varepsilon_b$ being nearest
neighbours and $2\varepsilon_a\cdot\varepsilon_b = 0$ otherwise. In a similar way we obtain that

$$\eta^2 = q(q+1)/2 \qquad (2.247)$$

and so

$$\epsilon\cdot\eta = pq\,\varepsilon_p\cdot\eta_q = -pq/\sqrt{2} \qquad (2.249)$$
Using the Schwarz inequality

$$(\epsilon\cdot\eta)^2 \le \epsilon^2\,\eta^2 \qquad (2.250)$$

we have from (2.246), (2.247) and (2.249) that

$$p^2 q^2 < p(p+1)\,q(q+1)/2 \qquad (2.251)$$

Since the equality can not hold, because $\epsilon$ and η are linearly independent, eq.
(2.251) can be written as

$$(p-1)(q-1) < 2 \qquad (2.252)$$

There are then three possibilities:

1. p = q = 2

2. q = 1 and p arbitrary

3. p = 1 and q arbitrary
Figure 2.17:
Figure 2.18:
In the first case we have the diagram 2.17 which corresponds to the excep-
tional Lie algebra of rank 4 denoted F4 . In the other two cases we obtain the
diagram of figure 2.18 which corresponds to the classical Lie algebras so(2r+1)
or Sp(r) depending on the direction of the arrow.
Consider now the diagram D) of figure 2.16 and define the vectors

$$\epsilon = \sum_{a=1}^{p-1} a\,\varepsilon_a \qquad \eta = \sum_{a=1}^{q-1} a\,\eta_a \qquad \rho = \sum_{a=1}^{s-1} a\,\rho_a \qquad (2.253)$$

The vectors $\epsilon$, η, ρ and ψ (see diagram D) in figure 2.16) are linearly independent. Since $\psi^2 = 1$ we have from (2.254)
Figure 2.19:
Notice that (ψ · ρ) has to be different from zero since $\epsilon$, η, ρ and ψ are linearly
independent, and we get the inequality
This ends the search for connected admissible diagrams. We have only to
consider the arrows on the diagrams with double and triple links. When that
is done we obtain all possible connected Dynkin diagrams corresponding to
the simple Lie algebras. We list in figure 2.21 the diagrams we have obtained,
giving the name of the corresponding algebra in both the physicist's and the
mathematician's notations.
Figure 2.20:
Chapter 3

Representation theory of Lie algebras
3.1 Introduction
In this chapter we shall develop further the concepts introduced in section 1.5
for group representations. The concept of a representation of a Lie algebra
is analogous to that of a group. A set of operators D1 , D2 , . . . acting on
a vector space V is a representation of a Lie algebra in the representation
space V if we can define an operation between any two of these operators such
that it reproduces the commutation relations of the Lie algebra. We will be
interested mainly in matrix representations, and the operation will be the usual
commutator of matrices. In addition we shall consider the representations of
compact Lie algebras and Lie groups only, since the representation theory of
non compact Lie groups is beyond the scope of these lecture notes.
Some results on the representation theory of finite groups can be extended
to the case of compact Lie groups. In some sense this is true because the
volume of the group space is finite for the case of compact Lie groups, and
therefore the integration over the group elements converges. We state without
proof two important results on the representation theory of compact Lie groups
which are also true for finite groups:
$$H_i \mid\mu\rangle = \mu_i \mid\mu\rangle \qquad i = 1, 2, 3, \dots r\ (\text{the rank}) \qquad (3.1)$$

$$H_\alpha \mid\mu\rangle = \frac{2\alpha\cdot\mu}{\alpha^2} \mid\mu\rangle \qquad (3.2)$$

and consequently we have that

$$\frac{2\alpha\cdot\mu}{\alpha^2}\ \text{is an integer for any root}\ \alpha \qquad (3.3)$$

Any vector µ satisfying this condition is a weight, and in fact this is the
only condition a weight has to satisfy. From (2.148) we see that any root is a
weight, but the converse is not true. Notice that $\frac{2\alpha\cdot\mu}{\mu^2}$ does not have to be an
integer, and therefore the table 2.2 does not apply to the weights.
A weight is called dominant if it lies in the Fundamental Weyl Chamber or
on its borders. Obviously a dominant weight has a non negative scalar product
with any positive root. It is possible to find among the dominant weights, r
weights λa, a = 1, 2, ... r, satisfying

$$\frac{2\lambda_a\cdot\alpha_b}{\alpha_b^2} = \delta_{ab} \qquad \text{for any simple root}\ \alpha_b \qquad (3.4)$$
3.2. THE NOTION OF WEIGHTS 109
In other words, we can find r dominant weights which are orthogonal to all
simple roots except one. These weights are called fundamental weights. They
play an important role in representation theory as we will see below.
Consider now a simple root αa and any weight µ. From (3.3) we have that

$$\frac{2\mu\cdot\alpha_a}{\alpha_a^2} = m_a = \text{integer} \qquad (3.5)$$
where we have used the fact that the simple co-roots are given by

$$\alpha_a^\vee = \frac{2\alpha_a}{\alpha_a^2} \qquad (3.12)$$
Any co-root can be written as a linear combination of the simple co-roots with
integer coefficients all of the same sign. To show that, we observe from theorem
2.7 that

$$\alpha^\vee = \frac{2\alpha}{\alpha^2} = \sum_{a=1}^{r} n_a\,\frac{\alpha_a^2}{\alpha^2}\,\alpha_a^\vee \qquad (3.13)$$

and from (3.4) we get

$$n_a = \frac{2\lambda_a\cdot\alpha}{\alpha_a^2} \qquad (3.14)$$

Therefore

$$\alpha^\vee = \sum_{a=1}^{r} \frac{2\lambda_a\cdot\alpha}{\alpha^2}\,\alpha_a^\vee \equiv \sum_{a=1}^{r} m_a\,\alpha_a^\vee \qquad (3.15)$$

since from (3.3) we have that $\frac{2\lambda_a\cdot\alpha}{\alpha^2}$ is an integer. In addition these integers are
all of the same sign, since all λa's lie in the Fundamental Weyl Chamber or on
its border.
Let ν be a vector defined by

$$\nu = \sum_{a=1}^{r} k_a\,\lambda_a \qquad (3.16)$$

where λa are the fundamental weights and ka are arbitrary integers. Using
(3.15) and (3.4) we get

$$\frac{2\alpha\cdot\nu}{\alpha^2} = \alpha^\vee\cdot\nu = \sum_{a,b} m_a k_b\,\frac{2\lambda_b\cdot\alpha_a}{\alpha_a^2} = \sum_a m_a k_a \qquad (3.17)$$
lattice forms an abelian group under the addition of vectors. The root lattice is
an invariant subgroup and consequently the coset space Λ/Λr has the structure
of a group (see section 1.4). One can show that Λ/Λr corresponds to the center
of the covering group of the algebra whose weight lattice is Λ.
We will show that all the weights of a given irreducible representation of a
compact Lie algebra lie in the same coset.
Before giving some examples we would like to discuss the relation between
the simple roots and the fundamental weights, which constitute two bases for
the root (or weight) space. Since any root is a weight, we have that the simple
roots can be written as integer linear combinations of the fundamental weights.
Using (3.4) one gets that the integer coefficients are the entries of the Cartan
matrix, i.e.

$$\alpha_a = \sum_b K_{ab}\,\lambda_b \qquad (3.18)$$
and then

$$\lambda_a = \sum_b K^{-1}_{ab}\,\alpha_b \qquad (3.19)$$

So the fundamental weights are not, in general, written as integer linear combinations of the simple roots.
Example 3.1 SU(2) has only one simple root and consequently only one fundamental weight. Choosing a normalization such that α = 1, we have that

$$\frac{2\lambda\cdot\alpha}{\alpha^2} = 1 \qquad \text{and so} \qquad \lambda = \frac{1}{2} \qquad (3.20)$$

Therefore the weight lattice of SU(2) is formed by the integer and half-integer
numbers, and the root lattice only by the integers. Then

$$\Lambda/\Lambda_r = \mathbb{Z}_2 \qquad (3.21)$$

which is the center of SU(2).
Example 3.2 SU (3) has two fundamental weights since it has rank two. They
can be constructed solving (3.4) or equivalently (3.19). The Cartan matrix of
SU (3) and its inverse are given by (see example 2.13)
$$K = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} \qquad K^{-1} = \frac{1}{3}\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \qquad (3.22)$$

So, from (3.19), we get that the fundamental weights are

$$\lambda_1 = \frac{1}{3}(2\alpha_1 + \alpha_2) \qquad \lambda_2 = \frac{1}{3}(\alpha_1 + 2\alpha_2) \qquad (3.23)$$
Figure 3.1: The simple roots and the fundamental weights λ1 and λ2 of SU(3).
In example 2.10 we have seen that the simple roots of SU(3) are given by
$\alpha_1 = (1, 0)$ and $\alpha_2 = \left(-1/2, \sqrt{3}/2\right)$. Therefore

$$\lambda_1 = \left(\frac{1}{2}, \frac{\sqrt{3}}{6}\right) \qquad \lambda_2 = \left(0, \frac{\sqrt{3}}{3}\right) \qquad (3.24)$$
The vectors representing the fundamental weights are given in figure 3.1.
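The passage from the Cartan matrix to the fundamental weights in (3.22)–(3.24) is easy to check numerically. A minimal sketch in Python (the helper names are ours; the root coordinates are those of example 2.10):

```python
import math

# Simple roots of SU(3) in Cartesian coordinates (example 2.10)
a1 = (1.0, 0.0)
a2 = (-0.5, math.sqrt(3) / 2)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Cartan matrix K_ab = 2 alpha_a . alpha_b / alpha_b^2, as in (3.22)
K = [[2 * dot(x, y) / dot(y, y) for y in (a1, a2)] for x in (a1, a2)]
print(K)  # [[2.0, -1.0], [-1.0, 2.0]]

# Invert the 2x2 Cartan matrix and apply (3.19): lambda_a = sum_b (K^-1)_ab alpha_b
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
Kinv = [[K[1][1] / det, -K[0][1] / det], [-K[1][0] / det, K[0][0] / det]]
lam1 = tuple(Kinv[0][0] * a1[i] + Kinv[0][1] * a2[i] for i in range(2))
lam2 = tuple(Kinv[1][0] * a1[i] + Kinv[1][1] * a2[i] for i in range(2))
print(lam1)  # (0.5, 0.2886...) = (1/2, sqrt(3)/6)
print(lam2)  # (0.0, 0.5773...) = (0, sqrt(3)/3)

# Check the defining property (3.4): 2 lambda_a . alpha_b / alpha_b^2 = delta_ab
for a, l in enumerate((lam1, lam2)):
    for b, al in enumerate((a1, a2)):
        assert abs(2 * dot(l, al) / dot(al, al) - (1.0 if a == b else 0.0)) < 1e-12
```

The final loop verifies directly that the computed vectors satisfy the defining relation (3.4) of the fundamental weights.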
$$\mid\mu'\rangle \equiv E_\alpha \mid\mu\rangle \qquad (3.25)$$

satisfies

$$H_i \mid\mu'\rangle = H_i E_\alpha \mid\mu\rangle = (E_\alpha H_i + [H_i, E_\alpha]) \mid\mu\rangle = (\mu_i + \alpha_i)\, E_\alpha \mid\mu\rangle \qquad (3.26)$$
Therefore $E_\alpha\mid\mu\rangle$ has weight µ + α, and more generally the state $E_{\alpha_1} E_{\alpha_2}\dots E_{\alpha_n}\mid\mu\rangle$ has weight µ + α1 + . . . + αn.
For this reason the weights in an irreducible representation differ by a sum
of roots, and consequently they all lie in the same coset in Λ/Λr. Since that
is the center of the covering group, we see that the weights of an irreducible
representation are associated to only one element of the center.

In a finite dimensional representation the number of weights is finite, since
it is at most the number of base states (remember the weights can be degenerate). Therefore, by applying sequences of step operators corresponding to
positive roots on a given state we will eventually get zero. So, an irreducible
finite dimensional representation possesses a state $\mid\lambda\rangle$ such that $E_\alpha\mid\lambda\rangle = 0$
for any positive root α.
This state is called the highest weight state of the representation, and λ is the
highest weight. It is possible to show that there is only one highest weight
in an irrep. and only one highest weight state associated to it. That is, the
highest weight is unique and non degenerate.
All other states of the representation are obtained from the highest weight
state by the application of a sequence of step operators corresponding to neg-
ative roots. The state defined by
states of the form (3.29). To see this, let β be a positive root and α any of
the negative roots appearing in (3.29). Then we have

In the cases where β − α1 is a negative root, or it is not a root, or even β − α1 = 0,
we obtain that the second term on the r.h.s. of (3.30) is a state of the form
(3.29). In the case β − α1 is a positive root we continue the process until all
positive step operators act directly on the highest weight state $\mid\lambda\rangle$, and consequently
annihilate it. Therefore the state (3.30) is a linear combination of the states
(3.29).
The weight lattice Λ is invariant under the Weyl group. If µ is a weight, and
therefore satisfies (3.3), it follows that σβ(µ) also satisfies (3.3) for any root
β, and so is a weight. To show this we use the fact that σβ(x) · σβ(y) = x · y
and σβ² = 1. Then (denoting γ = σβ(α))

$$\frac{2\alpha\cdot\sigma_\beta(\mu)}{\alpha^2} = \frac{2\mu\cdot\sigma_\beta(\alpha)}{\sigma_\beta(\alpha)^2} = \frac{2\gamma\cdot\mu}{\gamma^2} = \text{integer} \qquad (3.31)$$
Moreover, we can show that the set of weights of a given representation, which
is a finite subset of Λ, is invariant under the Weyl group. The state defined by

$$\mid\bar\mu\rangle \equiv S_\alpha \mid\mu\rangle \qquad (3.32)$$

Since the vector x is arbitrary, we obtain that the state $\mid\bar\mu\rangle$ has weight σα(µ).
Example 3.3 In example 3.1 we have seen that the only fundamental
weight of SU(2) is λ = 1/2. Therefore the dominant weights of SU(2) are
the positive integers and half-integers. Each one of these dominant weights
corresponds to an irreducible representation of SU(2). Then we have that
λ = 0 corresponds to the scalar representation, λ = 1/2 to the spinorial rep., which
is the fundamental rep. of SU(2) (dim = 2), λ = 1 to the vectorial rep., which
is the adjoint of SU(2) (dim = 3), and so on.
p and q are the greatest positive integers for which µ + pα and µ − qα are weights
of the representation. One can show that all vectors of the form µ + nα, with
n integer and −q ≤ n ≤ p, are weights of the representation. Therefore the
weights form unbroken strings, called weight strings, of the form

$$\mu + p\alpha\ ;\ \mu + (p-1)\alpha\ ;\ \dots\ \mu + \alpha\ ;\ \mu\ ;\ \mu - \alpha\ ;\ \dots\ \mu - q\alpha \qquad (3.39)$$
We have shown in the last section that the set of weights of a representation is
invariant under the Weyl group. The effect of the action of the Weyl reflection
σα on a weight is to add or subtract a multiple of the root α, since $\sigma_\alpha(\mu) = \mu - \frac{2\mu\cdot\alpha}{\alpha^2}\,\alpha$, and from (3.3) we have that $\frac{2\mu\cdot\alpha}{\alpha^2}$ is an integer. Therefore the weight
string (3.39) is invariant under the Weyl reflection σα. In fact, σα reverses the
string (3.39), and consequently we have that

$$\sigma_\alpha(\mu + p\alpha) = \mu - q\alpha = \mu - \frac{2\mu\cdot\alpha}{\alpha^2}\,\alpha - p\alpha \qquad (3.40)$$

and so

$$\frac{2\mu\cdot\alpha}{\alpha^2} = q - p \qquad (3.41)$$
This result is similar to (2.187), which was obtained for root strings. However,
notice that the possible values of q − p in this case are not restricted to the
values given in (2.187) (q − p can, in principle, have any integer value). In the
case where µ is the highest weight of the representation we have that p is zero if
α is a positive root, and q is zero if α is negative. The relation (3.41) provides
a practical way of finding the weights of the representation. In some cases it is
easier to find some weights of a given representation by taking successive Weyl
reflections of the highest weight. However, this method does not provide, in
general, all the weights of the representation.
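Relation (3.41) can be turned into an algorithm for generating the full weight system from the highest weight. Working with the Dynkin labels $m_a = 2\mu\cdot\alpha_a/\alpha_a^2$, subtracting the simple root αa changes the labels by the a-th row of the Cartan matrix, and for each weight the string in the direction αa extends q = p + ma steps down. A rough sketch (the function name and the bookkeeping are ours; it is adequate for the small representations used in the examples of this chapter):

```python
# Generate the weight system of an irrep from its highest weight using (3.41):
# for each simple root, q - p = 2 mu.alpha_a / alpha_a^2, so once p is known
# the string mu, mu - alpha_a, ..., mu - q alpha_a consists of weights.

def weight_system(highest, cartan):
    r = len(highest)
    weights = {tuple(highest)}
    frontier = [tuple(highest)]
    while frontier:
        mu = frontier.pop()
        for a in range(r):
            # p = how many times alpha_a can be added while staying in the system
            p, up = 0, mu
            while True:
                up = tuple(up[b] + cartan[a][b] for b in range(r))
                if up not in weights:
                    break
                p += 1
            q = p + mu[a]                      # (3.41): q - p = m_a
            down = mu
            for _ in range(max(q, 0)):         # descend the alpha_a string
                down = tuple(down[b] - cartan[a][b] for b in range(r))
                if down not in weights:
                    weights.add(down)
                    frontier.append(down)
    return weights

K_su3 = [[2, -1], [-1, 2]]
print(sorted(weight_system([1, 0], K_su3)))  # triplet: 3 weights
print(sorted(weight_system([1, 1], K_su3)))  # adjoint: 7 distinct weights
```

For the highest weight (1, 1) of su(3) this returns the six roots plus the zero weight, in agreement with example 3.6 below (multiplicities, of course, are not produced by this bookkeeping).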
Once the weights are known one has to calculate their multiplicities. There
exists a formula, due to Kostant, which expresses the multiplicities directly as
a sum over the elements of the Weyl group. However, it is not easy to use
this formula in practice. There exists a recursive formula, called Freudenthal's
formula,
3.4. WEIGHT STRINGS AND MULTIPLICITIES 119
$$\left[(\lambda + \delta)^2 - (\mu + \delta)^2\right] m(\mu) = 2\sum_{\alpha>0}\sum_{n=1}^{p(\alpha)} \alpha\cdot(\mu + n\alpha)\, m(\mu + n\alpha) \qquad (3.42)$$

where

$$\delta \equiv \frac{1}{2}\sum_{\alpha>0}\alpha \qquad (3.43)$$
The first summation on the r.h.s. of (3.42) is over the positive roots and the second one
over all positive integers n such that µ + nα is a weight of the representation,
and we have denoted by p(α) the highest such value of n. By starting with m(λ) = 1,
one can use (3.42) to calculate the multiplicities of the weights from the higher
ones to the lower ones.
If the states $\mid\mu\rangle_1$ and $\mid\mu\rangle_2$ have the same weight, i.e. µ is degenerate,
then the weight σα(µ) is also degenerate and has the same multiplicity as µ.
Using (3.32) we obtain that the states

have weight σα(µ), and their linear independence follows from the linear independence of $\mid\mu\rangle_1$ and $\mid\mu\rangle_2$. Indeed,

So, if $\mid\mu\rangle_1$ and $\mid\mu\rangle_2$ are linearly independent one must have
x1 = x2 = 0, and so $\mid\sigma_\alpha(\mu)\rangle_1$ and $\mid\sigma_\alpha(\mu)\rangle_2$ are also linearly independent.
Therefore all the weights of a representation which are conjugate under the
Weyl group have the same multiplicity. This fact can be used to make
Freudenthal's formula more efficient in the calculation of the multiplicities.
Example 3.5 Using the results of example 2.14 we have that the Cartan matrix of so(5) and its inverse are

$$K = \begin{pmatrix} 2 & -1 \\ -2 & 2 \end{pmatrix} \qquad K^{-1} = \frac{1}{2}\begin{pmatrix} 2 & 1 \\ 2 & 2 \end{pmatrix} \qquad (3.46)$$

Then, using (3.19), we get that the fundamental weights of so(5) are

$$\lambda_1 = \frac{1}{2}(2\alpha_1 + \alpha_2) \qquad \lambda_2 = \alpha_1 + \alpha_2 \qquad (3.47)$$
where α1 and α2 are the simple roots of so(5). Let us consider the fundamental representation with highest weight λ1. The scalar products of λ1 with the
positive roots of so(5) are

$$\frac{2\lambda_1\cdot\alpha_1}{\alpha_1^2} = 1 \qquad \frac{2\lambda_1\cdot\alpha_2}{\alpha_2^2} = 0$$
$$\frac{2\lambda_1\cdot(\alpha_1+\alpha_2)}{(\alpha_1+\alpha_2)^2} = 1 \qquad \frac{2\lambda_1\cdot(2\alpha_1+\alpha_2)}{(2\alpha_1+\alpha_2)^2} = 1 \qquad (3.48)$$

Therefore using (3.41) (with p = 0, since λ1 is the highest weight) we get that

$$\lambda_2\ ;\quad \lambda_2 - \alpha_2 = \alpha_1\ ;\quad \lambda_2 - \alpha_1 - \alpha_2 = 0\ ;$$
$$\lambda_2 - 2\alpha_1 - \alpha_2 = -\alpha_1\ ;\quad \lambda_2 - 2\alpha_1 - 2\alpha_2 = -(\alpha_1 + \alpha_2) \qquad (3.50)$$
3.5. THE WEIGHT δ 121
Again these weights are not degenerate and the representation has dimension
5. This is the vector representation of so(5).
Example 3.6 Consider the irrep. of su(3) with highest weight λ = α3 =
α1 + α2, i.e., the highest positive root. Using (3.41) and performing Weyl
reflections one can check that the weights of such a rep. are all the roots plus the
zero weight. Since the roots are conjugate to α3 = λ under the Weyl group, we
conclude that they are non-degenerate weights. The multiplicity of the zero
weight can be calculated from Freudenthal's formula. From (3.43) we have
that, in this case, δ = α3, and so from (3.42) we get

$$\left(4\alpha_3^2 - \alpha_3^2\right) m(0) = 2\left(m(\alpha_1)\,\alpha_1^2 + m(\alpha_2)\,\alpha_2^2 + m(\alpha_3)\,\alpha_3^2\right) \qquad (3.51)$$

Since m(α1) = m(α2) = m(α3) = 1 and α1² = α2² = α3², we obtain that
m(0) = 2. So there are two states with zero weight and consequently the
representation has dimension 8. This is the adjoint of su(3).
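Example 3.6 can be reproduced numerically from Freudenthal's formula (3.42). The sketch below hard-codes the su(3) data (roots normalized to α² = 1, δ = α3, and every nonzero weight of the adjoint a root of multiplicity one) and solves for m(0):

```python
import math

# Positive roots of su(3) in Cartesian coordinates, normalized to alpha^2 = 1
a1 = (1.0, 0.0)
a2 = (-0.5, math.sqrt(3) / 2)
a3 = (0.5, math.sqrt(3) / 2)          # a3 = a1 + a2
positive_roots = [a1, a2, a3]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

lam = a3                               # highest weight of the adjoint
delta = a3                             # (3.43): half the sum of positive roots

# Known input data: nonzero weights of the adjoint are roots, multiplicity 1
mult = {a1: 1, a2: 1, a3: 1}

mu = (0.0, 0.0)
lpd = (lam[0] + delta[0], lam[1] + delta[1])
lhs_coeff = dot(lpd, lpd) - dot(delta, delta)      # (lambda+delta)^2 - (mu+delta)^2
rhs = 0.0
for al in positive_roots:
    n = 1
    while True:                        # sum over the weights mu + n*alpha
        w = (mu[0] + n * al[0], mu[1] + n * al[1])
        m = next((v for k, v in mult.items()
                  if abs(k[0] - w[0]) < 1e-9 and abs(k[1] - w[1]) < 1e-9), 0)
        if m == 0:
            break
        rhs += 2 * dot(al, w) * m
        n += 1
print(rhs / lhs_coeff)  # ~2.0, the multiplicity of the zero weight
```

The left-hand coefficient is 3α3² and the right-hand side is 6α3², giving m(0) = 2, in agreement with the text.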
the coefficient of αb in σαa (β) is still nb , and consequently σαa (β) has at least
one positive coefficient. So, σαa (β) is a positive root, and it is different from
αa , since αa is the image of −αa under σαa . Therefore we have proved the
following lemma.
Lemma 3.1 If αa is a simple root, then σαa permutes the positive roots other
than αa .
From this lemma it follows that
$$\sigma_{\alpha_a}(\delta) = \delta - \alpha_a \qquad (3.52)$$

and consequently

$$\frac{2\delta\cdot\alpha_a}{\alpha_a^2} = 1 \qquad \text{for any simple root}\ \alpha_a \qquad (3.53)$$
From the definition (3.43) it follows that δ is a vector on the root (or weight)
space and therefore can be written in terms of the simple roots or the funda-
mental weights. Writing
$$\delta = \sum_{b=1}^{r} x_b\,\lambda_b \qquad (3.54)$$
for any g ∈ G, and where $d^{s'}_{\ s}(g)$ is the matrix representing g in the adjoint
representation, i.e. $g\,T_s\,g^{-1} = T_{s'}\,d^{s'}_{\ s}(g)$ (see (2.31)).
Consider now a representation D of G and construct the operator

Notice that such an operator can only be defined on a given representation, since
it involves the product of operators and not Lie brackets of the generators.
We then have

$$D(g)\,C_n^{(D)} = \Gamma^{s_1 s_2 \dots s_n}\, D\!\left(gT_{s_1}g^{-1}\right) D\!\left(gT_{s_2}g^{-1}\right)\dots D\!\left(gT_{s_n}g^{-1}\right) D(g)$$
$$= d^{s_1'}_{\ s_1}(g)\dots d^{s_n'}_{\ s_n}(g)\,\Gamma^{s_1\dots s_n}\, D\!\left(T_{s_1'}\right)\dots D\!\left(T_{s_n'}\right) D(g)$$
$$= \Gamma^{s_1'\dots s_n'}\, D\!\left(T_{s_1'}\right)\dots D\!\left(T_{s_n'}\right) D(g)$$
$$= C_n^{(D)}\, D(g) \qquad (3.59)$$

So, we have shown that $C_n^{(D)}$ commutes with any matrix of the representation,

$$\left[C_n^{(D)},\, D(g)\right] = 0 \qquad (3.60)$$
Ar SU (r + 1) 2, 3, 4, . . . r + 1
Br SO(2r + 1) 2, 4, 6, . . . 2r
Cr Sp(r) 2, 4, 6 . . . 2r
Dr SO(2r) 2, 4, 6 . . . 2r − 2, r
E6 2, 5, 6, 8, 9, 12
E7 2, 6, 8, 10, 12, 14, 18
E8 2, 8, 12, 14, 18, 20, 24, 30
F4 2, 6, 8, 12
G2 2, 6
Table 3.1: The orders of the Casimir operators for the simple Lie Groups
Then taking

$$\Gamma^{s_1 s_2 \dots s_n} \equiv \frac{1}{n!}\sum_{\text{permutations}} \text{Tr}\left(D'\left(T_{s_1} T_{s_2}\dots T_{s_n}\right)\right) \qquad (3.63)$$

we get Casimir operators. However, one finds that after the symmetrization procedure very few tensors of the form above survive. It follows that a semisimple
Lie algebra of rank r possesses r functionally independent invariant Casimir
operators. Their orders, for the simple Lie algebras, are given in table 3.1.
and

$$C_2^{(D)} \equiv \eta^{st}\, D(T_s)\, D(T_t) \qquad (3.65)$$
Since the Casimir operator commutes with all generators, we have from
Schur's lemma 1.1 that in an irreducible representation it must be proportional to the unit matrix. Denoting by λ the highest weight of the irreducible
representation D, we have

$$C_2^{(D)}\mid\lambda\rangle = \left(\sum_{i=1}^{r}\lambda_i^2 + \sum_{\alpha>0}\frac{\alpha^2}{2}\,[D(E_\alpha), D(E_{-\alpha})]\right)\mid\lambda\rangle = \left(\lambda^2 + \sum_{\alpha>0}\frac{\alpha^2}{2}\,H_\alpha\right)\mid\lambda\rangle = \left(\lambda^2 + \sum_{\alpha>0}\alpha\cdot\lambda\right)\mid\lambda\rangle \qquad (3.67)$$

where we have used (3.28) and (2.125). So, if D, with highest weight λ, is
irreducible, we can write using (3.43) that

$$C_2^{(D)} = \lambda\cdot(\lambda + 2\delta)\,1\!\!1 = \left((\lambda+\delta)^2 - \delta^2\right) 1\!\!1 \qquad (3.68)$$

where $1\!\!1$ is the unit matrix in the representation D under consideration.
Example 3.7 In the case of SU(2) the quadratic Casimir operator is J², i.e., the
square of the angular momentum. Indeed, from example 3.1 we have that
α = 1, and then δ = 1/2, and therefore $C_2^{(D)} = \lambda(\lambda + 1)$. Since λ is a positive
integer or half-integer, we see that these are really the eigenvalues of J².
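The eigenvalue formula (3.68) is easy to evaluate once δ is known. A small sketch checking the SU(2) value λ(λ + 1) and computing the familiar value 4/3 for the triplet of su(3) (in the α² = 1 normalization of example 3.2; the helper names are ours):

```python
import math

# Eigenvalue of the quadratic Casimir from (3.68): C2 = lambda . (lambda + 2 delta)

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def casimir(lam, delta):
    return dot(lam, [l + 2 * d for l, d in zip(lam, delta)])

# su(2): weights are numbers, delta = 1/2, so C2 = lambda*(lambda + 1)
for lam in (0.5, 1.0, 1.5):
    assert abs(casimir([lam], [0.5]) - lam * (lam + 1)) < 1e-12

# su(3) triplet: lambda_1 = (1/2, sqrt(3)/6), delta = alpha_3 = (1/2, sqrt(3)/2)
c2 = casimir((0.5, math.sqrt(3) / 6), (0.5, math.sqrt(3) / 2))
print(c2)  # 1.3333... = 4/3
```

The su(3) value 4/3 depends on the normalization α² = 1; other normalizations rescale it accordingly.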
3.7 Characters
In definition 1.13 we defined the character of an element g of a group G in a
given finite dimensional representation of G, with highest weight λ, as being
the trace of the matrix that represents that element, i.e.
χλ (g) ≡ Tr (D (g)) (3.69)
Obviously equivalent representations (see section 1.5) have the same characters. Analogously, two conjugate elements, $g_1 = g_3 g_2 g_3^{-1}$, have the same character in all representations. Therefore the conjugacy classes can be labelled
by the characters.
Example 3.8 Using (2.27) and the commutation relations (2.58) for the algebra of so(3) (or su(2)), one gets that

$$e^{i\frac{\pi}{2}T_2}\, T_3\, e^{-i\frac{\pi}{2}T_2} = T_1 \qquad (3.70)$$

and consequently

$$e^{i\frac{\pi}{2}T_2}\, e^{i\theta T_3}\, e^{-i\frac{\pi}{2}T_2} = e^{i\theta T_1} \qquad (3.71)$$

An analogous result is obtained if we interchange the roles of the generators T1,
T2 and T3. Therefore the rotations by a given angle θ, no matter the axis, are
conjugate. The conjugacy classes of SO(3) are defined by the angle of rotation,
and the characters in a representation of spin j are given by

$$\chi^j(\theta) = \chi^j\left(e^{i\theta T_3}\right) = \sum_{m=-j}^{j} e^{im\theta} \qquad (3.72)$$
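Summing the geometric series in (3.72) gives the closed form χ^j(θ) = sin((2j + 1)θ/2) / sin(θ/2), which is presumably the content of the equation (3.73) referred to below; a quick numerical check:

```python
import math

def character_sum(j, theta):
    """chi^j(theta) as the sum (3.72), over m = -j, -j+1, ..., j."""
    twoj = round(2 * j)
    ms = [(-twoj + 2 * k) / 2 for k in range(twoj + 1)]
    return sum(complex(math.cos(m * theta), math.sin(m * theta)) for m in ms)

def character_closed(j, theta):
    # closed form of the geometric sum
    return math.sin((2 * j + 1) * theta / 2) / math.sin(theta / 2)

for j in (0.5, 1, 1.5, 2):
    for theta in (0.3, 1.0, 2.5):
        assert abs(character_sum(j, theta) - character_closed(j, theta)) < 1e-12
print("character sum agrees with its closed form")
```

As θ → 0 the closed form tends to 2j + 1, the dimension of the representation, in agreement with example 3.9 below.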
where the summation is over the elements σ of the Weyl group W, and where
sign σ is 1 (−1) if the element σ of the Weyl group is formed by an even (odd)
number of reflections. δ is the same as the one defined in (3.43). This relation
is called the Weyl character formula.
The character can also be calculated once one knows the multiplicities of
the weights of the representation. From (3.69) and (3.74) we have that
$$\chi_\lambda(\theta) = \text{Tr}\left(D_\lambda\left(e^{i\theta\cdot H}\right)\right) = \sum_\mu m(\mu)\, e^{i\theta\cdot\mu} \qquad (3.77)$$
µ
where the summation is over the weights of the representation and m (µ) are
their multiplicities. These can be obtained from Freudenthal’s formula (3.42).
In the scalar representation the elements of the group are represented by
the unity and the highest weight is zero. So setting λ = 0 in (3.76) we obtain
what is called the Weyl denominator formula
$$\sum_{\sigma\in W} (\text{sign}\,\sigma)\, e^{i\sigma(\delta)\cdot\theta} = e^{i\delta\cdot\theta}\prod_{\alpha>0}\left(1 - e^{-i\alpha\cdot\theta}\right) \qquad (3.78)$$
The dimension of the representation can be obtained from the Weyl char-
acter formula (3.76) noticing that
Example 3.9 In the case of SO(3) (or SU(2)) we have that α = 1, δ = 1/2,
and consequently we have from (3.81) that

$$\dim D^j = 2j + 1 \qquad (3.82)$$

This result can also be obtained from (3.73) by taking the limit θ → 0 and
using L'Hôpital's rule.
(m1, m2)                dimension
(1, 0) (triplet)        3
(0, 1) (anti-triplet)   3
(2, 0)                  6
(0, 2)                  6
(1, 1) (adjoint)        8
(3, 0)                  10
(0, 3)                  10
(2, 1)                  15
(1, 2)                  15

Table 3.2: The dimensions of the smallest irreps. of SU(3)
(m1 , m2 ) dimension
(1, 0) (spinor) 4
(0, 1) (vector) 5
(2, 0) (adjoint) 10
(0, 2) 14
(1, 1) 16
(3, 0) 20
(0, 3) 30
(2, 1) 35
(1, 2) 40
Table 3.3: The dimensions of the smallest irreps. of SO(5) (or Sp(2))
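The entries of tables 3.2 and 3.3 can be reproduced from the Weyl dimension formula, dim = Π_{α>0} (λ + δ)·α / δ·α, obtained from the character formula in the limit θ → 0 (presumably the content of the omitted (3.81)). A numerical sketch, with root coordinates chosen consistently with examples 3.2 and 3.5 (the helper names are ours):

```python
import math

# Weyl dimension formula: dim = prod_{alpha>0} (lambda+delta).alpha / delta.alpha

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def dim(m1, m2, lam1, lam2, pos_roots):
    lam = [m1 * a + m2 * b for a, b in zip(lam1, lam2)]
    delta = [a + b for a, b in zip(lam1, lam2)]   # delta = lambda_1 + lambda_2
    num, den = 1.0, 1.0
    for al in pos_roots:
        num *= dot([l + d for l, d in zip(lam, delta)], al)
        den *= dot(delta, al)
    return round(num / den)

# su(3): simple roots and fundamental weights from example 3.2
a1, a2 = (1.0, 0.0), (-0.5, math.sqrt(3) / 2)
a3 = (a1[0] + a2[0], a1[1] + a2[1])
l1 = (0.5, math.sqrt(3) / 6)
l2 = (0.0, math.sqrt(3) / 3)
print([dim(m1, m2, l1, l2, [a1, a2, a3])
       for m1, m2 in [(1, 0), (0, 1), (1, 1), (3, 0), (2, 1)]])
# [3, 3, 8, 10, 15], as in table 3.2

# so(5): alpha_1 short, alpha_2 long (conventions of example 3.5)
b1, b2 = (1.0, 0.0), (-1.0, 1.0)
pos = [b1, b2, (0.0, 1.0), (1.0, 1.0)]            # a1, a2, a1+a2, 2a1+a2
w1 = (0.5, 0.5)                                    # lambda_1 = a1 + a2/2
w2 = (0.0, 1.0)                                    # lambda_2 = a1 + a2
print([dim(m1, m2, w1, w2, pos)
       for m1, m2 in [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]])
# [4, 5, 10, 14, 16], as in table 3.3
```

The results are independent of the overall normalization of the roots, since the formula is a ratio.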
We give in figures 3.4 and 3.5 the dimensions of the fundamental represen-
tations of the simple Lie algebras (extracted from [DYN 57]).
and so

$$(\mu' - \mu)\,\langle\mu'\mid\mu\rangle = 0 \qquad (3.89)$$

and consequently states with different weights are orthogonal. In the case a
weight is degenerate, it is possible to find an orthogonal basis for the subspace
generated by the states corresponding to that degenerate weight. We shall then
denote the base states of the representation by $\mid\mu, k\rangle$, where µ is the
corresponding weight and k is an integer that runs from 1 to m(µ),
the multiplicity of µ. We can always normalize these states such that
where the sum is over the states of weight µ + α. Therefore, from (3.91) one
has

$$D(E_\alpha)_{(\mu',k')\,(\mu,k)} = \langle\mu+\alpha, k'\mid E_\alpha\mid\mu, k\rangle\,\delta_{\mu',\mu+\alpha} \qquad (3.95)$$

The matrix elements of Hi are known once we have the weights of the
representation, since from (3.1) and (3.90)

$$D(H_i)_{(\mu',k')\,(\mu,k)} = \langle\mu', k'\mid H_i\mid\mu, k\rangle = \mu_i\,\delta_{\mu',\mu}\,\delta_{k',k} \qquad (3.96)$$
Therefore, in order to construct the matrix representation of the algebra
we have to calculate the "transition amplitudes" $\langle\mu+\alpha, l\mid E_\alpha\mid\mu, k\rangle$. Notice
that from (3.87)

$$\langle\mu+\alpha, l\mid E_\alpha\mid\mu, k\rangle^\dagger = \langle\mu, k\mid E_{-\alpha}\mid\mu+\alpha, l\rangle \qquad (3.97)$$

Now, using the commutation relation (see (2.218))

$$[E_\alpha, E_{-\alpha}] = \frac{2\alpha\cdot H}{\alpha^2} \qquad (3.98)$$

one gets

$$\langle\mu, k\mid [E_\alpha, E_{-\alpha}]\mid\mu, k\rangle = \langle\mu, k\mid \frac{2\alpha\cdot H}{\alpha^2}\mid\mu, k\rangle = \frac{2\alpha\cdot\mu}{\alpha^2} \qquad (3.99)$$
$$= \langle\mu, k\mid E_\alpha E_{-\alpha}\mid\mu, k\rangle - \langle\mu, k\mid E_{-\alpha} E_\alpha\mid\mu, k\rangle$$
$$= \sum_{l=1}^{m(\mu-\alpha)} \langle\mu, k\mid E_\alpha\mid\mu-\alpha, l\rangle\langle\mu-\alpha, l\mid E_{-\alpha}\mid\mu, k\rangle - \sum_{l=1}^{m(\mu+\alpha)} \langle\mu, k\mid E_{-\alpha}\mid\mu+\alpha, l\rangle\langle\mu+\alpha, l\mid E_\alpha\mid\mu, k\rangle$$
134 CHAPTER 3. REPRESENTATION THEORY OF LIE ALGEBRAS
$$\sum_{k'=1}^{m(\mu+\beta)} \langle\mu+\alpha+\beta, l\mid E_\alpha\mid\mu+\beta, k'\rangle\langle\mu+\beta, k'\mid E_\beta\mid\mu, k\rangle - \sum_{k'=1}^{m(\mu+\alpha)} \langle\mu+\alpha+\beta, l\mid E_\beta\mid\mu+\alpha, k'\rangle\langle\mu+\alpha, k'\mid E_\alpha\mid\mu, k\rangle$$
$$= (q+1)\,\varepsilon(\alpha, \beta)\,\langle\mu+\alpha+\beta, l\mid E_{\alpha+\beta}\mid\mu, k\rangle \qquad (3.103)$$

where q is the highest positive integer such that β − qα (or equivalently α − qβ,
since we are assuming α + β is a root) is a root, and ε(α, β) are signs determined
from the Jacobi identities (see section 2.14).
We now give some examples to illustrate how to use (3.100) and (3.103)
to construct matrix representations. This method is very general, and consequently difficult to use when the representation (or the algebra) is big. There
are other methods which work better in specific cases.
3.8. CONSTRUCTION OF MATRIX REPRESENTATIONS 135
$$\mid j, m\rangle \qquad m = -j, -j+1, \dots, j-1, j \qquad (3.104)$$

$$[H, E_\pm] = \pm E_\pm \qquad [E_+, E_-] = H \qquad (3.106)$$

where H = 2α · H/α², with α being the only positive root of SU(2). In section
2.5 we have used the basis

$$T_3\mid j, m\rangle = m\mid j, m\rangle \qquad (3.108)$$

Using the relation (3.100), which is the same as taking the expectation
value on the state $\mid j, m\rangle$ of both sides of the second relation in (3.107), we get

where we have used the fact that $T_+^\dagger = T_-$ (see (3.87)). Notice that $T_+\mid j, j\rangle = 0$, since j is the highest weight, and so

$$\mid\langle j, j\mid T_+\mid j, j-1\rangle\mid^2 = 2j \qquad (3.111)$$
Clearly, such a result could also be obtained directly from (3.101). The other
matrix elements of T+ can then be obtained recursively from (3.110). Indeed,

Therefore

$$\mid\langle j, m+1\mid T_+\mid j, m\rangle\mid^2 = j(j+1) - m(m+1) \qquad (3.112)$$

and since

$$\langle j, m+1\mid T_+\mid j, m\rangle^\dagger = \langle j, m\mid T_-\mid j, m+1\rangle \qquad (3.113)$$

we get

$$\mid\langle j, m-1\mid T_-\mid j, m\rangle\mid^2 = j(j+1) - m(m-1) \qquad (3.114)$$

The phases of such matrix elements can be chosen to vanish, since in SU(2)
we do not have a relation like (3.103) to relate them. Therefore, we get

$$T_\pm\mid j, m\rangle = \sqrt{j(j+1) - m(m\pm 1)}\,\mid j, m\pm 1\rangle \qquad (3.115)$$

and so,

$$D^{(j)}_{m',m}(T_+) = \langle j, m'\mid T_+\mid j, m\rangle = \sqrt{j(j+1) - m(m+1)}\,\delta_{m',m+1}$$
$$D^{(j)}_{m',m}(T_-) = \langle j, m'\mid T_-\mid j, m\rangle = \sqrt{j(j+1) - m(m-1)}\,\delta_{m',m-1} \qquad (3.116)$$
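Formula (3.116), together with (3.108), fixes the spin-j matrices completely. A short sketch constructing them and checking the su(2) relation [T+, T−] = 2T3 (the helper names are ours):

```python
import math

# Build the spin-j matrices from (3.108) and (3.116) and verify [T+, T-] = 2 T3

def spin_matrices(j):
    n = round(2 * j) + 1
    ms = [j - k for k in range(n)]                     # m = j, j-1, ..., -j
    T3 = [[ms[r] if r == c else 0.0 for c in range(n)] for r in range(n)]
    Tp = [[0.0] * n for _ in range(n)]
    Tm = [[0.0] * n for _ in range(n)]
    for c, m in enumerate(ms):
        if m < j:                                       # T+ raises m by one
            Tp[c - 1][c] = math.sqrt(j * (j + 1) - m * (m + 1))
        if m > -j:                                      # T- lowers m by one
            Tm[c + 1][c] = math.sqrt(j * (j + 1) - m * (m - 1))
    return T3, Tp, Tm

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

T3, Tp, Tm = spin_matrices(1.5)
comm = sub(mul(Tp, Tm), mul(Tm, Tp))                   # [T+, T-]
for r in range(4):
    for c in range(4):
        assert abs(comm[r][c] - 2 * T3[r][c]) < 1e-12
print("spin-3/2 matrices satisfy [T+, T-] = 2 T3")
```

The relations [T3, T±] = ±T± can be checked the same way; for j = 1/2 the construction reproduces the Pauli-matrix representation up to the usual factor of 1/2.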
$$\frac{2\lambda_1\cdot\alpha_1}{\alpha_1^2} = \frac{2\lambda_1\cdot\alpha_3}{\alpha_3^2} = 1 \qquad (3.117)$$
[Figure: the weights λ1, λ1 − α1 and λ1 − α3 of the triplet representation of SU(3).]
$$\mid 1\rangle \equiv \mid\lambda_1\rangle\ ;\quad \mid 2\rangle \equiv \mid\lambda_1 - \alpha_1\rangle\ ;\quad \mid 3\rangle \equiv \mid\lambda_1 - \alpha_3\rangle \qquad (3.120)$$

we obtain from (3.117), (3.118) and (3.119) that the matrices representing the
Cartan subalgebra generators are

$$D^{\lambda_1}(H_1) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad D^{\lambda_1}(H_2) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \qquad (3.121)$$
$$D^{\lambda_1}(E_{\alpha_2}) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & e^{i\varphi} \\ 0 & 0 & 0 \end{pmatrix} \qquad D^{\lambda_1}(E_{-\alpha_2}) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & e^{-i\varphi} & 0 \end{pmatrix}$$

$$D^{\lambda_1}(E_{\alpha_3}) = \begin{pmatrix} 0 & 0 & e^{i(\theta+\varphi)} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad D^{\lambda_1}(E_{-\alpha_3}) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ e^{-i(\theta+\varphi)} & 0 & 0 \end{pmatrix}$$
In general, the phases θ and ϕ are chosen to vanish. The algebra of SU(3)
is generated by taking real linear combinations of the matrices Ha (a = 1, 2),
(Eα + E−α) and i(Eα − E−α). On the other hand, the algebra of SL(3) is generated by the same matrices, but the third one does not have the factor i. Notice
that in this way the triplet representation of the group SU(3) is unitary whilst
the triplet of SL(3) is not.
[Figure: the weights λ2, λ2 − α2 and λ2 − α3 of the anti-triplet representation of SU(3).]
$$\mid 1\rangle \equiv \mid\lambda_2\rangle\ ;\quad \mid 2\rangle \equiv \mid\lambda_2 - \alpha_2\rangle\ ;\quad \mid 3\rangle \equiv \mid\lambda_2 - \alpha_3\rangle \qquad (3.128)$$

Using the Cartan matrix of SU(3) (see example 2.13), (3.4) and (3.118), we
get that the matrices which represent the Cartan subalgebra generators in the
Chevalley basis are

$$D^{\lambda_2}(H_1) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \qquad D^{\lambda_2}(H_2) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad (3.129)$$
Using (3.95) we get that the only non-vanishing matrix elements of the step
operators are

where, according to (3.130) and (3.131), we have introduced the phases θ, ϕ and
φ. From (3.87) we obtain the matrices for the negative step operators. Using
the fact that (q + 1) ε(α1, α2) = 1, we get from (3.103) that these phases have to
satisfy

$$\theta + \varphi = \phi + \pi \qquad (3.133)$$
Therefore the matrices which represent the step operators in the anti-triplet
representation are

$$D^{\lambda_2}(E_{\alpha_1}) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & e^{i\theta} \\ 0 & 0 & 0 \end{pmatrix} \qquad D^{\lambda_2}(E_{-\alpha_1}) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & e^{-i\theta} & 0 \end{pmatrix} \qquad (3.134)$$

$$D^{\lambda_2}(E_{\alpha_2}) = \begin{pmatrix} 0 & e^{i\varphi} & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad D^{\lambda_2}(E_{-\alpha_2}) = \begin{pmatrix} 0 & 0 & 0 \\ e^{-i\varphi} & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$

$$D^{\lambda_2}(E_{\alpha_3}) = -\begin{pmatrix} 0 & 0 & e^{i(\theta+\varphi)} \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad D^{\lambda_2}(E_{-\alpha_3}) = -\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ e^{-i(\theta+\varphi)} & 0 & 0 \end{pmatrix}$$
So, these matrices are obtained from those of the triplet by making the change E_{\pm\alpha_1} \leftrightarrow E_{\pm\alpha_2} and E_{\pm\alpha_3} \leftrightarrow -E_{\pm\alpha_3}. From (3.121) and (3.129) we see that the Cartan subalgebra generators are also interchanged.
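The anti-triplet matrices (3.129) and (3.134) can be subjected to the same numerical check as the triplet. A minimal sketch assuming NumPy, with the phases \theta = \varphi = 0:

```python
import numpy as np

def E(i, j):
    """Return the 3x3 matrix unit with a 1 in row i, column j (0-based)."""
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

# anti-triplet matrices from eqs. (3.129) and (3.134) with theta = phi = 0
H1, H2 = np.diag([0., 1., -1.]), np.diag([1., -1., 0.])
Ea1, Ea2, Ea3 = E(1, 2), E(0, 1), -E(0, 2)

comm = lambda a, b: a @ b - b @ a
K = [[2, -1], [-1, 2]]                     # Cartan matrix of SU(3)

# Chevalley relations [H_a, E_alpha_b] = K_{ba} E_alpha_b
for a, H in enumerate((H1, H2)):
    for b, Eb in enumerate((Ea1, Ea2)):
        assert np.allclose(comm(H, Eb), K[b][a] * Eb)

# [E_alpha, E_-alpha] = H_alpha (the negative step operators are transposes)
assert np.allclose(comm(Ea1, Ea1.T), H1)
assert np.allclose(comm(Ea2, Ea2.T), H2)
assert np.allclose(comm(Ea3, Ea3.T), H1 + H2)

# the commutator of the simple step operators closes on E_alpha3
assert np.allclose(comm(Ea1, Ea2), Ea3)
```

The assertions pass with the minus sign of E_{\alpha_3} included, and fail without it, which illustrates why the phase condition (3.133) forces that sign.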
Taking orthonormal bases | \mu, l \rangle and | \mu', l' \rangle for V^{\lambda} and V^{\lambda'} respectively, we can construct an orthonormal basis for V^{\lambda \otimes \lambda'} as

| \mu + \mu', k \rangle = \sum_{l=1}^{m(\mu)} \sum_{l'=1}^{m(\mu')} C^{k}_{l,l'} \, | \mu, l \rangle \otimes | \mu', l' \rangle    (3.143)

where m(\mu) and m(\mu') are the multiplicities of \mu and \mu' in V^{\lambda} and V^{\lambda'} respectively, and k = 1, 2, \ldots, m(\mu + \mu'), with m(\mu + \mu') being the multiplicity of \mu + \mu' in V^{\lambda \otimes \lambda'}. Clearly, m(\mu + \mu') = m(\mu) m(\mu'). The constants C^{k}_{l,l'} are
Example 3.12 Let us consider the tensor product of two spinor representations of SU(2). As discussed in section 3.8.1, each spinor representation is two dimensional, with states | \frac12, \frac12 \rangle and | \frac12, -\frac12 \rangle satisfying

T_3 | \frac12, \pm\frac12 \rangle = \pm \frac{1}{2} | \frac12, \pm\frac12 \rangle    (3.144)

and (see (3.115))

T_+ | \frac12, \frac12 \rangle = 0 ; \qquad T_+ | \frac12, -\frac12 \rangle = | \frac12, \frac12 \rangle

T_- | \frac12, \frac12 \rangle = | \frac12, -\frac12 \rangle ; \qquad T_- | \frac12, -\frac12 \rangle = 0    (3.145)
One can easily construct the irreducible components by taking the highest weight state | \frac12, \frac12 \rangle \otimes | \frac12, \frac12 \rangle and acting with the lowering operator. One gets

D^{\frac12 \otimes \frac12}(T_-) | \frac12, \frac12 \rangle \otimes | \frac12, \frac12 \rangle = (T_- \otimes 1 + 1 \otimes T_-) | \frac12, \frac12 \rangle \otimes | \frac12, \frac12 \rangle
= | \frac12, -\frac12 \rangle \otimes | \frac12, \frac12 \rangle + | \frac12, \frac12 \rangle \otimes | \frac12, -\frac12 \rangle

and

\left( D^{\frac12 \otimes \frac12}(T_-) \right)^2 | \frac12, \frac12 \rangle \otimes | \frac12, \frac12 \rangle = 2 \, | \frac12, -\frac12 \rangle \otimes | \frac12, -\frac12 \rangle    (3.146)

and

\left( D^{\frac12 \otimes \frac12}(T_-) \right)^3 | \frac12, \frac12 \rangle \otimes | \frac12, \frac12 \rangle = 0    (3.147)

On the other hand, notice that

D^{\frac12 \otimes \frac12}(T_\pm) \left( | \frac12, -\frac12 \rangle \otimes | \frac12, \frac12 \rangle - | \frac12, \frac12 \rangle \otimes | \frac12, -\frac12 \rangle \right) = 0    (3.148)

Therefore, one gets that the states

| 1, 1 \rangle \equiv | \frac12, \frac12 \rangle \otimes | \frac12, \frac12 \rangle

| 1, 0 \rangle \equiv \left( | \frac12, -\frac12 \rangle \otimes | \frac12, \frac12 \rangle + | \frac12, \frac12 \rangle \otimes | \frac12, -\frac12 \rangle \right) / \sqrt{2}

| 1, -1 \rangle \equiv | \frac12, -\frac12 \rangle \otimes | \frac12, -\frac12 \rangle    (3.149)
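The decomposition worked out above can be verified numerically with Kronecker products. A minimal sketch assuming NumPy; the ordering convention used for np.kron (first factor on the left) is our choice:

```python
import numpy as np

# spin-1/2 matrices in the basis {|1/2,1/2>, |1/2,-1/2>}
T3 = np.diag([0.5, -0.5])
Tp = np.array([[0., 1.], [0., 0.]])
Tm = Tp.T

# representation on the tensor product: D(T) = T x 1 + 1 x T
I2 = np.eye(2)
D3 = np.kron(T3, I2) + np.kron(I2, T3)
Dp = np.kron(Tp, I2) + np.kron(I2, Tp)
Dm = np.kron(Tm, I2) + np.kron(I2, Tm)

up, dn = np.array([1., 0.]), np.array([0., 1.])
hw = np.kron(up, up)            # highest weight state |1/2,1/2> x |1/2,1/2>

# lowering the highest weight reproduces (3.146) and (3.147)
s1 = Dm @ hw                    # proportional to |1,0>
s2 = Dm @ s1
assert np.allclose(s2, 2 * np.kron(dn, dn))   # eq. (3.146)
assert np.allclose(Dm @ s2, 0)                # eq. (3.147): a spin-1 triplet

# the antisymmetric combination (3.148) is annihilated by all generators:
# it is the singlet in the decomposition 1/2 x 1/2 = 1 + 0
singlet = (np.kron(dn, up) - np.kron(up, dn)) / np.sqrt(2)
for D in (D3, Dp, Dm):
    assert np.allclose(D @ singlet, 0)
```

The four-dimensional product space thus splits into the triplet (3.149) plus the singlet, the familiar Clebsch-Gordan decomposition for two spins 1/2.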
Example 3.14 In example 3.6 we have seen that the weights of the adjoint representation of SU(3) are its roots plus the null weight, which is two-fold degenerate. So, let us denote the states as

| 0 \rangle \equiv E_{-\alpha_1} | \alpha_1 \rangle    (3.154)

| \alpha_1 \rangle ; \quad | 0 \rangle ; \quad | -\alpha_1 \rangle    (3.155)
where the numbers inside the parentheses are the U(1) eigenvalues.
Index
tangent space, 35
tangent vector, 35
tensor product representation, 27
topological group, 34
trace form, 45
transformations, group of, 21
unitary representation, 25
vector field
definition, 36
left invariant, 38
tangent vector, 35
weight
definition, 106
dominant, 106
fundamental, 107
highest, 112
lattice, 108
minimal, 114
strings, 116
weight lattice, 108
weight strings, 116
Weyl chambers, 71
Weyl character formula, 125