The Mathematics any Physicist
Should Know
Thomas Hjortgaard Danielsen
Contents

Preface

I  Representation Theory of Groups and Lie Algebras

1 Peter-Weyl Theory
  1.1 Foundations of Representation Theory
  1.2 The Haar Integral
  1.3 Matrix Coefficients
  1.4 Characters
  1.5 The Peter-Weyl Theorem

2 Structure Theory for Lie Algebras
  2.1 Basic Notions
  2.2 Semisimple Lie Algebras
  2.3 The Universal Enveloping Algebra

3 Basic Representation Theory of Lie Algebras
  3.1 Lie Groups and Lie Algebras
  3.2 Weyl's Theorem

4 Root Systems
  4.1 Weights and Roots
  4.2 Root Systems for Semisimple Lie Algebras
  4.3 Abstract Root Systems
  4.4 The Weyl Group

5 The Highest Weight Theorem
  5.1 Highest Weights
  5.2 Verma Modules
  5.3 The Case sl(3, C)

6 Infinite-dimensional Representations
  6.1 Gårding Subspace
  6.2 Induced Lie Algebra Representations
  6.3 Self-Adjointness
  6.4 Applications to Quantum Mechanics

II  Geometric Analysis and Spin Geometry

7 Clifford Algebras
  7.1 Elementary Properties
  7.2 Classification of Clifford Algebras
  7.3 Representation Theory

8 Spin Groups
  8.1 The Clifford Group
  8.2 Pin and Spin Groups
  8.3 Double Coverings
  8.4 Spin Group Representations

9 Topological K-Theory
  9.1 The K-Functors
  9.2 The Long Exact Sequence
  9.3 Exterior Products and Bott Periodicity
  9.4 Equivariant K-theory
  9.5 The Thom Isomorphism

10 Characteristic Classes
  10.1 Connections on Vector Bundles
  10.2 Connections on Associated Vector Bundles*
  10.3 Pullback Bundles and Pullback Connections
  10.4 Curvature
  10.5 Metric Connections
  10.6 Characteristic Classes
  10.7 Orientation and the Euler Class
  10.8 Splitting Principle, Multiplicative Sequences
  10.9 The Chern Character

11 Differential Operators
  11.1 Differential Operators on Manifolds
  11.2 The Principal Symbol
  11.3 Dirac Bundles and the Dirac Operator
  11.4 Sobolev Spaces
  11.5 Elliptic Complexes

12 The Atiyah-Singer Index Theorem
  12.1 K-Theoretic Version
  12.2 Cohomological Version

A Table of Clifford Algebras
B Calculation of Fundamental Groups

Bibliography
Index
Preface
When following courses given by Ryszard Nest at the University of Copenhagen, you can be almost certain that a reference to the Atiyah-Singer Index Theorem will appear at least once during the course. Thus it was an obvious project for me to find out what this apparently great theorem was all about. However, from the beginning I was well aware that this was not an easy task and that it would be necessary for me to delve into a lot of other subjects involved in its formulation before the goal could be reached. It has never been my intention to actually prove the theorem (well, except for a few moments of utter overambitiousness) but merely to pave a road for my own understanding. This road leads through such varied subjects as K-theory, characteristic classes and elliptic theory. I have tried to treat each subject as thoroughly and self-containedly as I could, even though this meant including material which wasn't strictly necessary for the Index Theorem.
The starting point is of course my own prerequisites when I began my work half a year ago, that is, a solid foundation in Riemannian geometry, algebraic topology (notably homology and cohomology) and pseudodifferential calculus on Euclidean space. From here we develop, at first in a systematic way, topological K-theory. The approach is via vector bundles, as it can be found in, for instance, [Atiyah] or [Hatcher]; no C*-algebras are involved. In the first two sections the basic theory is outlined and most proofs are given. In the third section we present the famous Bott Periodicity Theorem, without giving a proof. The last two sections are dedicated to the Thom Isomorphism. To this end we introduce equivariant K-theory (that is, K-theory involving group actions), a slight generalization of the K-theory treated in the first sections. I follow the outline given in the classical article [Segal]. One could argue that equivariant K-theory could have been introduced from the very beginning; however, I have chosen not to, in order not to blur the introductory presentation with too many technicalities.
The second chapter deals with the Chern-Weil approach to characteristic classes of vector bundles. The first four sections are devoted to the basic theory of connections on vector bundles. From the curvature forms and invariant polynomials we construct characteristic classes, in particular the Chern and Pontrjagin classes, and discuss their relationships. In the following section the Euler class of oriented bundles is defined. I have relied heavily on [Morita] and [Milnor, Stasheff] when working out these sections, but [Madsen, Tornehave] has also provided valuable inspiration. The chapter ends with a discussion of certain characteristic classes constructed not from invariant polynomials but from invariant formal power series. Examples of such classes are the Todd class, the total Â-class and the Chern character. No effort has been made to include "great theorems"; in fact there are really no major results in this chapter. It serves as a tool box to be applied to the construction of the topological index.
The third chapter revolves around differential operators on manifolds. In the standard literature on this subject not much care is taken when transferring the differential operators and principal symbols from Euclidean space to manifolds. I've tried to remedy this, giving a precise and detailed treatment. To this I have added a lot of examples of "classical" differential operators, such as the Laplacian, Hodge-de Rham operators, Dirac operators etc., calculating their formal adjoints and principal symbols. To shed some light on the analytic properties we introduce Sobolev spaces. Essentially there are two different definitions: in the first, Sobolev spaces are defined in terms of connections, and in the second they are defined by "clutching" local Euclidean Sobolev spaces. We prove that the two definitions agree when the underlying manifold is compact, and we show how to extend differential operators to continuous operators between the Sobolev spaces. The major results, such as the Sobolev Embedding Theorem, the Rellich Lemma and Elliptic Regularity, are given without proofs. We then move on to elliptic complexes, which provide us with a link to the K-theory developed in the first chapter.
In the fourth and final chapter the Index Theorem is presented. We construct the so-called topological index map from the K-group K(TM) to the integers and state the Index Theorem, which says that the topological index, when evaluated on the specific K-class determined by the symbol of an elliptic differential operator, is in fact equal to the Fredholm index. I give a short sketch of the proof based on the original 1968 article by Atiyah and Singer. Then, by introducing the cohomological Thom isomorphism, Thom defect classes etc. and drawing heavily on the theory developed in the previous chapters, we manage to deduce the famous cohomological index formula. To demonstrate the power of the Index Theorem, we prove two corollaries, namely the generalized Gauss-Bonnet Theorem and the fact that any elliptic differential operator on a compact manifold of odd dimension has index 0.
I would like to thank Professor Ryszard Nest for his guidance and inspiration, as well as his answers to my ever-increasing number of questions.
Copenhagen, March 2008. Thomas Hjortgaard Danielsen.
Part I
Representation Theory of
Groups and Lie Algebras
Chapter 1
Peter-Weyl Theory
1.1 Foundations of Representation Theory
We begin by introducing some basic but fundamental notions and results regarding representation theory of topological groups. Soon, however, we shall restrict our focus to compact groups, and later to Lie groups and their Lie algebras.
To define the notion of a representation, let V denote a separable Banach space and equip B(V), the space of bounded linear maps V → V, with the strong operator topology, i.e. the topology on B(V) generated by the seminorms ‖A‖_x = ‖Ax‖. Let Aut(V) ⊆ B(V) denote the group of invertible linear maps and equip it with the subspace topology, which turns it into a topological group.
Definition 1.1 (Representation). By a continuous representation of a topological group G on a separable Banach space V we understand a continuous group homomorphism π : G → Aut(V). We also say that V is given the structure of a G-module. If π is an injective homomorphism, the representation is called faithful.

By the dimension of the representation we mean the dimension of the vector space on which the group is represented. If V is infinite-dimensional, the representation is said to be infinite-dimensional as well.
In what follows, a group without further specification will always mean a locally compact topological group, and by a representation we will always understand a continuous representation. The reason why we demand the groups to be locally compact will become apparent in the next section.

We will distinguish between real and complex representations depending on whether V is a real or complex Banach space. Without further qualification, the representations considered will all be complex.

The requirement that π be strongly continuous can be a little hard to handle, so here is an equivalent condition which is more practical:
Proposition 1.2. Let π : G → Aut(V) be a group homomorphism. Then the following conditions are equivalent:
1) π is continuous w.r.t. the strong operator topology on Aut(V), i.e. π is a continuous representation.
2) The map G × V → V given by (g, v) ↦ π(g)v is continuous.

For a proof see [1], Proposition 18.8.
Example 1.3. The simplest example one can think of is the trivial representation: let G be a group and V a Banach space, and consider the map G ∋ g ↦ id_V. This is obviously a continuous group homomorphism and hence a representation.
Now, let G be a matrix Lie group (i.e. a closed subgroup of GL(n, C)). Choosing a basis for C^n we get an isomorphism Aut(C^n) ≅ GL(n, C), and we can thus define a representation of G on C^n simply by the inclusion map G → GL(n, C). This is obviously a continuous representation of G, called the defining representation.
We can form new representations out of old ones. If (π1, V1) and (π2, V2) are representations of G on Banach spaces, we can form their direct sum π1 ⊕ π2, the representation of G on V1 ⊕ V2 (which has been given the norm ‖(x, y)‖ = ‖x‖ + ‖y‖, turning V1 ⊕ V2 into a Banach space) given by

(π1 ⊕ π2)(g)(x, y) = (π1(g)x, π2(g)y).

If we have a countable family (H_i)_{i∈I} of Hilbert spaces, we can form the direct sum Hilbert space ⊕_{i∈I} H_i, the vector space of sequences (x_i), x_i ∈ H_i, satisfying Σ_{i∈I} ‖x_i‖²_{H_i} < ∞. Equipped with the inner product ⟨(x_i), (y_i)⟩ = Σ_{i∈I} ⟨x_i, y_i⟩ this is again a Hilbert space. If we have a countable family (π_i, H_i) of representations such that sup_{i∈I} ‖π_i(g)‖ < ∞ for each g ∈ G, then we can form the direct sum representation ⊕_{i∈I} π_i on ⊕_{i∈I} H_i by

(⊕_{i∈I} π_i)(g)(x_i) = (π_i(g)x_i).

Finally, if (π1, H1) and (π2, H2) are representations on Hilbert spaces, we can form their tensor product: equip the tensor product vector space H1 ⊗ H2 with the inner product

⟨x1 ⊗ x2, y1 ⊗ y2⟩ = ⟨x1, y1⟩⟨x2, y2⟩,

which turns H1 ⊗ H2 into a Hilbert space, and define the tensor product representation π1 ⊗ π2 by

(π1 ⊗ π2)(g)(x ⊗ y) = π1(g)x ⊗ π2(g)y.
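Both constructions are concrete at the matrix level: the direct sum becomes a block-diagonal matrix and the tensor product a Kronecker product. The following sketch (Python/NumPy; the parametrization of a couple of SU(2) elements is an illustrative assumption, not taken from the text) checks the homomorphism property for both:

```python
import numpy as np

def su2(theta, phi):
    # an illustrative (hypothetical) parametrization of some SU(2) elements
    return np.array([[np.cos(theta), -np.exp(1j*phi)*np.sin(theta)],
                     [np.exp(-1j*phi)*np.sin(theta), np.cos(theta)]])

A, B = su2(0.7, 1.1), su2(-0.4, 2.5)

def direct_sum(M, N):
    # (pi1 ⊕ pi2)(g) realized as a block-diagonal matrix
    Z = np.zeros((2, 2), dtype=complex)
    return np.block([[M, Z], [Z, N]])

# pi1 = defining representation, pi2 = its entrywise complex conjugate
pi1 = lambda g: g
pi2 = lambda g: g.conj()

# homomorphism property of the direct sum
lhs_sum = direct_sum(pi1(A @ B), pi2(A @ B))
rhs_sum = direct_sum(pi1(A), pi2(A)) @ direct_sum(pi1(B), pi2(B))
assert np.allclose(lhs_sum, rhs_sum)

# homomorphism property of the tensor product: np.kron plays the role of ⊗
lhs_tens = np.kron(pi1(A @ B), pi2(A @ B))
rhs_tens = np.kron(pi1(A), pi2(A)) @ np.kron(pi1(B), pi2(B))
assert np.allclose(lhs_tens, rhs_tens)
```

The tensor check is just the mixed-product property kron(M, N) kron(M′, N′) = kron(MM′, NN′), which is exactly what makes π1 ⊗ π2 a homomorphism.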
Definition 1.4 (Unitary Representation). By a unitary representation of a
group G we understand a representation π on a Hilbert space H such that π(g)
is a unitary operator for each g ∈ G.
Obviously the trivial representation is a unitary representation, as is the defining representation of any subgroup of the unitary group U(n). In the next section we show unitarity of some more interesting representations.
Definition 1.5 (Intertwiner). Let two representations (π1, V1) and (π2, V2) of the same group G be given. By an intertwiner or an intertwining map between π1 and π2 we understand a bounded linear map T : V1 → V2 satisfying T ∘ π1(g) = π2(g) ∘ T for all g ∈ G, i.e. rendering the square

    V1 ──T──> V2
    │π1(g)    │π2(g)
    ↓         ↓
    V1 ──T──> V2

commutative. The set of all intertwining maps is denoted Hom_G(V1, V2).
A bijective intertwiner with bounded inverse between two representations is called an equivalence of representations, and the two representations are said to be equivalent. This is denoted π1 ≅ π2.

It's easy to see that Hom_G(V1, V2) is a vector space and that Hom_G(V, V) is an algebra. The dimension of Hom_G(V1, V2) is called the intertwining number of the two representations. If π1 ≅ π2 via an intertwiner T, then we have π2(g) = T ∘ π1(g) ∘ T⁻¹. Since we can thus express the one in terms of the other, for almost any purpose the two representations can be regarded as the same.
Proposition 1.6. Hom_G respects direct sums in the sense that

Hom_G(V1 ⊕ V2, W) ≅ Hom_G(V1, W) ⊕ Hom_G(V2, W) and   (1.1)
Hom_G(V, W1 ⊕ W2) ≅ Hom_G(V, W1) ⊕ Hom_G(V, W2).      (1.2)

Proof. For the first isomorphism we define

Φ : Hom_G(V1 ⊕ V2, W) → Hom_G(V1, W) ⊕ Hom_G(V2, W)

by Φ(T) := (T|_{V1}, T|_{V2}). It is easy to check that this is indeed an element of the latter space. It has an inverse Φ⁻¹ given by

Φ⁻¹(T1, T2)(v1, v2) := T1(v1) + T2(v2),

and this proves the first isomorphism. The second can be proved in the same way.
Definition 1.7. Given a representation (π, V ) of a group G, we say that a
linear subspace U ⊆ V is π-invariant or just invariant if π(g)U ⊆ U for all
g ∈ G.
If U is a closed invariant subspace for a representation π of G on V, we automatically get a representation of G on U simply by restricting all the π(g)'s to U (U needs to be closed so that it is itself a Banach space). This is clearly a representation, and we will denote it π|_U (although it is the π(g)'s that are restricted to U, not π).
Here is a simple way to check invariance of a given subspace, at least in the case of a unitary representation.

Lemma 1.8. Let (π, H) be a unitary representation of G, let H = U ⊕ U^⊥ be a decomposition of H and denote by P : H → U the orthogonal projection onto U. If U is π-invariant, then so is U^⊥. Furthermore, U is π-invariant if and only if P ∘ π(g) = π(g) ∘ P for all g ∈ G.
Proof. Assume that U is invariant. To show that U^⊥ is invariant, let v ∈ U^⊥. We need to show that π(g)v ∈ U^⊥, i.e. that ⟨π(g)v, u⟩ = 0 for all u ∈ U. But that's easy, exploiting unitarity of π(g):

⟨π(g)v, u⟩ = ⟨π(g⁻¹)(π(g)v), π(g⁻¹)u⟩ = ⟨v, π(g⁻¹)u⟩,

which is 0 since π(g⁻¹)u ∈ U and v ∈ U^⊥. Thus U^⊥ is invariant.

Now assume again that U is invariant; then U^⊥ is invariant by the above. We split x ∈ H into x = Px + (1 − P)x and calculate

P ∘ π(g)x = P(π(g)(Px + (1 − P)x)) = Pπ(g)Px + Pπ(g)(1 − P)x.

The first term is π(g)Px, since π(g)Px ∈ U, and the second term is zero, since π(g)(1 − P)x ∈ U^⊥. Thus we have the desired formula.
Conversely, assume that P ∘ π(g) = π(g) ∘ P. Every vector u ∈ U is of the form Px for some x ∈ H. Since

π(g)u = π(g)(Px) = P(π(g)x) ∈ U,

U is an invariant subspace.
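The lemma is easy to test numerically. In the sketch below (Python/NumPy; the choice of the group Z/4 acting on C² by rotations is an illustrative assumption, not an example from the text), the orthogonal projection onto an invariant line commutes with every π(g), while the projection onto a non-invariant line does not:

```python
import numpy as np

def rot(t):
    # rotation by angle t, viewed as a unitary operator on C^2
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]], dtype=complex)

# unitary representation of Z/4 on C^2: k -> rotation by k*pi/2
group = [rot(k * np.pi / 2) for k in range(4)]

u = np.array([1, 1j]) / np.sqrt(2)           # eigenvector of every rotation
P_inv = np.outer(u, u.conj())                # projection onto the invariant line
P_bad = np.diag([1.0, 0.0]).astype(complex)  # projection onto span{e1}: not invariant

# invariant subspace  <=>  projection commutes with every pi(g)
assert all(np.allclose(P_inv @ g, g @ P_inv) for g in group)
assert not all(np.allclose(P_bad @ g, g @ P_bad) for g in group)
```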
For any representation (π, V) there are two obvious invariant subspaces, namely V itself and {0}. We shall focus a lot on representations having no invariant subspaces except these two:

Definition 1.9. A representation is called irreducible if it has no closed invariant subspaces except the trivial ones. The set of equivalence classes of finite-dimensional irreducible representations of a group G is denoted Ĝ.

A representation is called completely reducible if it is equivalent to a direct sum of finite-dimensional irreducible representations.
Any 1-dimensional representation is obviously irreducible, and if the group is abelian the converse is actually true; we prove this in Proposition 1.14.

If (π1, V1) and (π2, V2) are irreducible representations, then the direct sum π1 ⊕ π2 is not irreducible, since V1 is a π1 ⊕ π2-invariant subspace of V1 ⊕ V2:

(π1 ⊕ π2)(g)(v, 0) = (π1(g)v, 0).

The question is more subtle for tensor products of irreducible representations. Deciding whether the tensor product of two irreducible representations is irreducible and, if not, writing it as a direct sum of irreducible representations is a branch of representation theory known as Clebsch-Gordan theory.
Lemma 1.10. Let (π1, V1) and (π2, V2) be equivalent representations. Then π1 is irreducible if and only if π2 is irreducible.

Proof. By the symmetry of the problem, it is sufficient to verify that irreducibility of π1 implies irreducibility of π2. Let T : V1 → V2 denote the intertwiner, which by the Open Mapping Theorem is a linear homeomorphism. Assume that U ⊆ V2 is a closed invariant subspace. Then T⁻¹U ⊆ V1 is closed and π1-invariant:

π1(g)T⁻¹U = T⁻¹π2(g)U ⊆ T⁻¹U.

But this means that T⁻¹U is either {0} or V1, i.e. U is either {0} or V2.
Example 1.11. Consider the group SL(2, C) viewed as a real (hence 6-dimensional) Lie group. We consider the following four complex representations of the real Lie group SL(2, C) on C²:

ρ(A)ψ := Aψ,        ρ̄(A)ψ := Āψ,
ρ̃(A)ψ := (Aᵀ)⁻¹ψ,   ρ̄̃(A)ψ := (A*)⁻¹ψ,

where Ā means complex conjugation of all the entries of A. All four are clearly irreducible. They are important in physics, where they are called spinorial representations. Physicists have a habit of writing everything in coordinates; thus ψ will usually be written ψ_α, where α = 1, 2, but the exact notation varies according to which representation we have imposed on C² (i.e. according to how ψ transforms, as the physicists say). In other words, they view C² not as a vector space but rather as an SL(2, C)-module. The notations are

ψ_α ∈ C²,   ψ_α̇ ∈ C̄²,   ψ^α ∈ C̃²,   ψ^α̇ ∈ C̃̄².
The representations are not all mutually inequivalent: the map ϕ : C² → C² given by the matrix

( 0  −1 )
( 1   0 )

intertwines ρ with ρ̃, and it intertwines ρ̄ with ρ̄̃. On the other hand, ρ and ρ̄ are actually inequivalent, as we will see in Section 1.4. These two representations are called the fundamental representations of SL(2, C).
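The intertwining relation ϕ ∘ ρ(A) = ρ̃(A) ∘ ϕ amounts to the matrix identity JA = (Aᵀ)⁻¹J whenever det A = 1, where J is the matrix above. A numerical sanity check on random SL(2, C) elements (the sampling scheme below is an arbitrary choice, and of course a check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2c():
    # random complex 2x2 matrix rescaled to have determinant 1
    M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    return M / np.sqrt(np.linalg.det(M))

J = np.array([[0, -1], [1, 0]], dtype=complex)  # the intertwining matrix

for _ in range(20):
    A = random_sl2c()
    # J intertwines rho(A) = A with rho~(A) = (A^T)^{-1}
    assert np.allclose(J @ A, np.linalg.inv(A.T) @ J)
    # the entrywise-conjugated version: J intertwines rho-bar with its tilde partner
    assert np.allclose(J @ A.conj(), np.linalg.inv(A.conj().T) @ J)
```

Behind the check is the exact identity JAJ⁻¹ = (Aᵀ)⁻¹ for det A = 1, which one verifies directly from the adjugate formula for the inverse.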
In short, representation theory has two goals: 1) given a group, to find all its irreducible representations, and 2) given a representation of this group, to split it (if possible) into a direct sum of irreducibles. The rest of this chapter deals with the second problem (at least for compact groups), and in the end we will achieve some powerful results (Schur Orthogonality and the Peter-Weyl Theorem). Chapter 5 revolves around the first problem of finding irreducible representations.

But already at this stage we are able to state and prove two quite interesting results. The first is known as Schur's Lemma. We prove a slightly more general version than is usually seen, allowing the representations to be infinite-dimensional.
Theorem 1.12 (Schur's Lemma). Let (π1, H1) and (π2, H2) be two irreducible unitary representations of a group G, and suppose that F : H1 → H2 is an intertwiner. Then either F is an equivalence of representations or F is the zero map.

If (π, H) is an irreducible unitary representation of G and F ∈ B(H) is a linear map which commutes with all π(g), then F = λ id_H.
Proof. The proof utilizes a neat result from Gelfand theory: if A is a commutative unital C*-algebra which is also an integral domain (i.e. ab = 0 implies a = 0 or b = 0), then A ≅ C. The proof of this is rather simple. Gelfand's Theorem states that there exists a compact Hausdorff space X such that A ≅ C(X). To reach a contradiction, assume that X is not a one-point set, and pick two distinct points x and y. Since X is a normal topological space, we can find disjoint open neighborhoods U and V around x and y, and the Urysohn Lemma gives us two nonzero continuous functions f and g on X, the first supported in U and the second in V; their product is thus zero. This contradicts the assumption that A = C(X) is an integral domain. Therefore X contains only one point, and thus C(X) ≅ C.
With this result in mind we return to Schur's Lemma. F being an intertwiner means that F ∘ π1(g) = π2(g) ∘ F, and using unitarity of π1(g) and π2(g) we get that

F* ∘ π2(g) = π1(g) ∘ F*,

where F* is the hermitian adjoint of F. This yields

(FF*) ∘ π2(g) = F ∘ π1(g) ∘ F* = π2(g) ∘ (FF*).
In the last equality we also used that F intertwines the two representations. Consider the C*-algebra A = C*(id_{H2}, FF*), the C*-algebra generated by id_{H2} and FF*. It is a commutative unital C*-algebra, and all its elements are of the form Σ_{n=0}^∞ a_n (FF*)ⁿ. They commute with π2(g):

(Σ_{n=0}^∞ a_n (FF*)ⁿ) π2(g) = Σ_{n=0}^∞ a_n ((FF*)ⁿ π2(g)) = Σ_{n=0}^∞ a_n (π2(g)(FF*)ⁿ) = π2(g) Σ_{n=0}^∞ a_n (FF*)ⁿ.
We only need to show that A is an integral domain. Assume ST = 0. Since π2(g)S = Sπ2(g), it is easy to see that ker S is π2-invariant. As π2 is irreducible, ker S is either H2 or {0}. In the first case S = 0 and we are done; in the second case S is injective, and so T must be the zero map. This means that A = C id_{H2}; in particular, there exists a λ ∈ C such that FF* = λ id_{H2}. Likewise, one shows that F*F = λ′ id_{H1}. Thus we see

λF = (FF*)F = F(F*F) = λ′F,

which implies F = 0 or λ = λ′. In the second case, if λ = λ′ = 0 then F*Fv = 0 for all v ∈ H1, and hence

0 = ⟨v, F*Fv⟩ = ⟨Fv, Fv⟩,

i.e. F = 0. If λ = λ′ and λ ≠ 0, then it is not hard to see that λ^{-1/2}F is unitary, and that F therefore is an isomorphism.

The second claim is an immediate consequence of the proof of the first.
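The second part of Schur's Lemma can be probed numerically: for an irreducible representation, the matrices commuting with every π(g) should form exactly a 1-dimensional space, the scalars. The sketch below uses the standard 2-dimensional irreducible representation of the symmetric group S3 (an illustrative choice, not an example from the text); the commutant is computed as the nullspace of the vectorized commutator map at a pair of generators:

```python
import numpy as np

c, s = np.cos(2*np.pi/3), np.sin(2*np.pi/3)
g1 = np.array([[c, -s], [s, c]])          # a 3-cycle acting as rotation by 120 degrees
g2 = np.array([[1.0, 0.0], [0.0, -1.0]])  # a transposition acting as a reflection
I = np.eye(2)

def commutator_map(g):
    # X -> gX - Xg in row-major vectorized form: (g ⊗ I - I ⊗ g^T) vec(X)
    return np.kron(g, I) - np.kron(I, g.T)

# stack the equations for both generators; commuting with them forces
# commuting with the whole group
M = np.vstack([commutator_map(g1), commutator_map(g2)])
sing = np.linalg.svd(M, compute_uv=False)
null_dim = int(np.sum(sing < 1e-10))

# Schur: the commutant of an irreducible representation is 1-dimensional
assert null_dim == 1
```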
The content of this can be summed up as follows: if π1 and π2 are irreducible unitary representations of G on H1 and H2, then Hom_G(H1, H2) ≅ C if π1 and π2 are equivalent, and Hom_G(H1, H2) = {0} if π1 and π2 are inequivalent.
Corollary 1.13. Let (π, H1) and (ρ, H2) be finite-dimensional unitary representations which decompose into irreducibles as

π = ⊕_{i∈I} m_i δ_i  and  ρ = ⊕_{i∈I} n_i δ_i.

Then dim Hom_G(H1, H2) = Σ_{i∈I} n_i m_i.
Proof. Denoting the representation spaces of the irreducible representations δ_i by V_i, we get from (1.1) and (1.2) that

Hom_G(H1, H2) = ⊕_{i∈I} ⊕_{j∈I} m_i n_j Hom_G(V_i, V_j),

and by Schur's Lemma the dimension formula now follows.
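A small numerical check of the dimension formula, with G = Z/3 and its 1-dimensional characters playing the role of the irreducibles δ_i (the specific multiplicities below are illustrative choices): since Z/3 is generated by a single element, the intertwiners are exactly the nullspace of T ↦ ρ(g)T − Tπ(g) at the generator.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)  # value of the character chi_1 at the generator of Z/3

# at the generator: pi = chi0 ⊕ chi0 ⊕ chi1 (multiplicities m = (2, 1)),
#                   rho = chi0 ⊕ chi1 ⊕ chi1 (multiplicities n = (1, 2))
pi  = np.diag([1, 1, w])
rho = np.diag([1, w, w])

# intertwiners solve rho T - T pi = 0; row-major vec gives (rho ⊗ I - I ⊗ pi^T)
I = np.eye(3)
M = np.kron(rho, I) - np.kron(I, pi.T)
sing = np.linalg.svd(M, compute_uv=False)
dim_hom = int(np.sum(sing < 1e-10))

# Corollary 1.13 predicts sum_i n_i m_i = 1*2 + 2*1 = 4
assert dim_hom == 4
```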
Now for the promised result on abelian groups:

Proposition 1.14. Let G be an abelian group and (π, H) a unitary representation of G. If π is irreducible, then π is 1-dimensional.

Proof. Since G is abelian we have π(g)π(h) = π(h)π(g) for all g, i.e. each π(h) is an intertwiner of π with itself. Since π is irreducible, Schur's Lemma says that π(h) = λ(h) id_H. Thus every 1-dimensional subspace of H is invariant, and by irreducibility H must itself be 1-dimensional.
Example 1.15. With the previous proposition we are in a position to determine the set of irreducible complex representations of the circle group T = R/Z. Since this is an abelian group, we have found all the irreducible representations once we know all the 1-dimensional representations. A 1-dimensional representation is just a homomorphism R/Z → C*, so let's find them. It is well known that the only continuous homomorphisms R → C* are those of the form x ↦ e^{2πiax} for some a ∈ R. But since we also want periodicity with period 1, only integer values of a are allowed. Thus T̂ consists of the homomorphisms

ρ_n(x) = e^{2πinx},  n ∈ Z.
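A quick numerical sanity check (the sample points x, y below are arbitrary choices): each ρ_n is a homomorphism and has period 1, so it descends to T = R/Z, while a non-integer exponent fails periodicity:

```python
import numpy as np

def rho(n, x):
    # the character rho_n of T = R/Z
    return np.exp(2j * np.pi * n * x)

x, y = 0.37, 0.81
for n in range(-3, 4):
    # homomorphism property: rho_n(x + y) = rho_n(x) rho_n(y)
    assert np.isclose(rho(n, x + y), rho(n, x) * rho(n, y))
    # well-defined on R/Z: period 1
    assert np.isclose(rho(n, x + 1), rho(n, x))

# a non-integer exponent a = 1/2 is a homomorphism of R but is not 1-periodic,
# so it does not descend to T
assert not np.isclose(np.exp(2j*np.pi*0.5*(x + 1)), np.exp(2j*np.pi*0.5*x))
```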
Proposition 1.16. Every finite-dimensional unitary representation is completely reducible.

Proof. If the representation is irreducible, we are done, so assume we have a unitary representation π : G → Aut(H) and let {0} ≠ U ⊆ H be a proper invariant subspace. The point is that U^⊥ is invariant as well, cf. Lemma 1.8. If both π|_U and π|_{U^⊥} are irreducible, we are done; if one of them is not, we find an invariant subspace and repeat the argument. Since the representation is finite-dimensional and 1-dimensional representations are irreducible, the process must terminate.
1.2 The Haar Integral
In the representation theory of locally compact groups (also known as harmonic
analysis) the notions of Haar integral and Haar measure play a key role.
Some preliminary definitions: let X be a locally compact Hausdorff space and C_c(X) the space of continuous complex-valued functions on X with compact support. By a positive integral on X we understand a linear functional I : C_c(X) → C such that I(f) ≥ 0 if f ≥ 0. The Riesz Representation Theorem tells us that to each such positive integral there exists a unique Radon measure µ on the Borel algebra B(X) such that

I(f) = ∫_X f dµ.

We say that this measure µ is associated with the positive integral.
Now, let G be a group. For each g₀ ∈ G we have two maps L_{g₀} and R_{g₀}, left and right translation, on the set of complex-valued functions on G, given by

(L_{g₀}f)(g) = f(g₀⁻¹g),   (R_{g₀}f)(g) = f(gg₀).

These obviously satisfy L_{g₁g₂} = L_{g₁}L_{g₂} and R_{g₁g₂} = R_{g₁}R_{g₂}.
Definition 1.17 (Haar Measure). Let G be a locally compact group. A nonzero positive integral I on G is called a left Haar integral if I(L_g f) = I(f) for all g ∈ G and f ∈ C_c(G). Similarly, a nonzero positive integral is called a right Haar integral if I(R_g f) = I(f) for all g ∈ G and f ∈ C_c(G). An integral which is both a left and a right Haar integral is called a Haar integral.

The measures associated with left and right Haar integrals are called left and right Haar measures. The measure associated with a Haar integral is called a Haar measure.
Example 1.18. On (Rⁿ, +) the Lebesgue integral is a Haar integral: it is obviously positive, and it is well known that the Lebesgue integral is translation invariant:

∫_{Rⁿ} f(x + a) dx = ∫_{Rⁿ} f(−a + x) dx = ∫_{Rⁿ} f(x) dx.

The associated Haar measure is of course the Lebesgue measure mₙ.
On the circle group (T, ·) we define an integral I by

C(T) ∋ f ↦ (1/2π) ∫₀^{2π} f(e^{it}) dt.

As before, this is obviously a positive integral, and since

I(L_{e^{ia}} f) = (1/2π) ∫₀^{2π} f(e^{−ia}e^{it}) dt = (1/2π) ∫₀^{2π} f(e^{i(−a+t)}) dt = (1/2π) ∫₀^{2π} f(e^{it}) dt,

again by translation invariance of the Lebesgue measure, I is a left Haar integral on T. Likewise one can show that it is a right Haar integral as well, and hence a Haar integral. The associated Haar measure on T is also called the arc measure.
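Translation invariance of I can also be checked by quadrature, approximating the arc-measure integral by an equidistant average over [0, 2π) (the test function f and the translation parameter a below are arbitrary choices):

```python
import numpy as np

def haar_T(f, n=20000):
    # arc-measure integral (1/2pi) ∫ f(e^{it}) dt as an equidistant average
    t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    return np.mean(f(np.exp(1j * t)))

# an arbitrary continuous test function on the circle
f = lambda z: np.real(z)**2 + np.cos(np.imag(z))
a = 1.234  # translate by the group element e^{ia}

lhs = haar_T(lambda z: f(np.exp(-1j*a) * z))  # integral of the left-translated function
rhs = haar_T(f)
assert np.isclose(lhs, rhs, atol=1e-8)
```

For smooth periodic integrands the equidistant average converges extremely fast, so the two sides agree to high precision.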
In both cases the groups were abelian, and in both cases the left Haar integrals were also right Haar integrals. This is no mere coincidence, for if G is an abelian group we have L_{g₀} = R_{g₀⁻¹}, and thus a positive integral is a left Haar integral if and only if it is a right Haar integral.
The following central theorem, attributed to Alfred Haar and acclaimed as one of the most important mathematical discoveries of the 20th century, states existence and uniqueness of left and right Haar integrals on locally compact groups.

Theorem 1.19. Every locally compact group G possesses a left Haar integral and a right Haar integral, and these are unique up to multiplication by a positive constant.

If G is compact, then the two integrals coincide and the corresponding Haar measure is finite.

It would be far beyond the scope of this thesis to delve into the proof. The existence part is a hard job, so we just send some acknowledging thoughts to Alfred Haar and accept it as a fact of life.
Now we restrict our focus to compact groups, on which, as we have just seen, we have a finite Haar measure. The importance of this finiteness is manifested in the following result:

Theorem 1.20 (Unitarization). Let G be a compact group and (π, H) a representation on a Hilbert space (H, ⟨·,·⟩). Then there exists an inner product ⟨·,·⟩_G on H, equivalent to ⟨·,·⟩, which makes π a unitary representation.
Proof. Since the measure is finite, we can integrate all bounded measurable functions over G. Let us assume the measure to be normalized, i.e. µ(G) = 1. For x1, x2 ∈ H the map g ↦ ⟨π(g)x1, π(g)x2⟩ is continuous (by Proposition 1.2), hence bounded and measurable, i.e. integrable. Now define a new inner product by

⟨x1, x2⟩_G := ∫_G ⟨π(g)x1, π(g)x2⟩ dg.   (1.3)

That this is a genuine inner product is not hard to see: it is obviously sesquilinear by the properties of the integral, and it is conjugate-symmetric, as the original inner product is. Finally, if x ≠ 0 then π(g)x ≠ 0 (π(g) is invertible), and thus ‖π(g)x‖ > 0 for all g ∈ G. Since the map g ↦ ‖π(g)x‖² is continuous, we have ⟨x, x⟩_G = ∫_G ‖π(g)x‖² dg > 0.
By translation invariance of the Haar measure we get

⟨π(h)x1, π(h)x2⟩_G = ∫_G ⟨π(gh)x1, π(gh)x2⟩ dg = ∫_G ⟨π(g)x1, π(g)x2⟩ dg = ⟨x1, x2⟩_G.

Thus π is unitary w.r.t. this new inner product.
We just need to show that the two norms ‖·‖ and ‖·‖_G corresponding to the two inner products are equivalent, i.e. that there exists a constant C such that ‖·‖ ≤ C‖·‖_G and ‖·‖_G ≤ C‖·‖. To this end, consider the map g ↦ ‖π(g)x‖² for some x ∈ H. It is continuous, hence sup_{g∈G} ‖π(g)x‖² < ∞ for all x, and the Uniform Boundedness Principle now says that C := sup_{g∈G} ‖π(g)‖ < ∞. Therefore

‖x‖² = ∫_G ‖x‖² dg = ∫_G ‖π(g⁻¹)π(g)x‖² dg ≤ C² ∫_G ‖π(g)x‖² dg = C²‖x‖²_G.

Conversely, we see

‖x‖²_G = ∫_G ‖π(g)x‖² dg ≤ ∫_G ‖π(g)‖²‖x‖² dg ≤ C² ∫_G ‖x‖² dg = C²‖x‖².

This proves the claim.
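For a finite group the integral in (1.3) becomes a finite average, which makes the unitarization trick easy to test numerically. In the sketch below (illustrative choices throughout: a rotation representation of Z/3 deliberately conjugated by a non-unitary matrix S), the averaged Gram matrix M encodes the new inner product ⟨x, y⟩_G = x*My, and every π(g) is unitary with respect to it:

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

S = np.array([[2.0, 1.0], [0.0, 0.5]])  # invertible but far from unitary
Sinv = np.linalg.inv(S)
# a representation of Z/3 that is NOT unitary for the standard inner product
group = [S @ rot(2*np.pi*k/3) @ Sinv for k in range(3)]

# averaged inner product <x, y>_G = (1/|G|) sum_g <pi(g)x, pi(g)y>,
# encoded by the Gram matrix M via <x, y>_G = x^H M y
M = sum(g.conj().T @ g for g in group) / len(group)

for g in group:
    # unitarity w.r.t. the new inner product: pi(g)^H M pi(g) = M
    assert np.allclose(g.conj().T @ M @ g, M)

# M is a genuine inner product: hermitian and positive definite
assert np.allclose(M, M.conj().T)
assert np.all(np.linalg.eigvalsh(M) > 0)
```

The averaging works for exactly the reason given in the proof: replacing g by gh merely permutes the group elements in the sum.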
If we combine this result with Proposition 1.16 we get
Corollary 1.21. Every finite-dimensional representation of a compact group is
completely reducible.
The Peter-Weyl Theorem which we prove later in this chapter provides a
strong generalization of this result in that it states that every Hilbert space
representation of a compact group is completely reducible.
We end this section by introducing the so-called modular function, a function which provides a link between left and right Haar integrals.
Let $G$ be a topological group and $I : f \mapsto \int_G f(g)\, dg$ a left Haar integral. Let $h \in G$ and consider the integral $\widetilde{I}_h : f \mapsto \int_G f(gh^{-1})\, dg$. This is positive and satisfies
$$\widetilde{I}_h(L_{g_0}f) = \int_G f(g_0^{-1}gh^{-1})\, dg = \int_G f(gh^{-1})\, dg = \widetilde{I}_h(f),$$
i.e. it is a left Haar integral. By the uniqueness part of Haar's Theorem there exists a positive constant $c$ such that $\widetilde{I}_h(f) = cI(f)$. We define the modular function $\Delta : G \to \mathbb{R}_+$ by assigning this constant to the group element $h$, i.e.
$$\int_G f(gh^{-1})\, dg = \Delta(h)\int_G f(g)\, dg.$$
It is not hard to see that this is indeed a homomorphism: on one hand we have
$$\int_G f(g(hk)^{-1})\, dg = \Delta(hk)\int_G f(g)\, dg,$$
and on the other hand we have that this equals
$$\int_G f(gk^{-1}h^{-1})\, dg = \Delta(h)\int_G f(gk^{-1})\, dg = \Delta(h)\Delta(k)\int_G f(g)\, dg.$$
Since this holds for all integrable functions $f$, we must have $\Delta(hk) = \Delta(h)\Delta(k)$. One can show that $\Delta$ is in fact a continuous group homomorphism, and thus, in the case of $G$ being a Lie group, a Lie group homomorphism.
If $\Delta$ is identically 1, that is, if every right Haar integral satisfies
$$\int_G f(hg)\, dg = \int_G f(g)\, dg \tag{1.4}$$
for all $h$, then the group $G$ is called unimodular. Eq. (1.4) says that an equivalent condition for a group to be unimodular is that all right Haar integrals are also left Haar integrals. As we have seen previously in this section, abelian groups and compact groups are unimodular.
1.3 Matrix Coefficients
Definition 1.22 (Matrix Coefficient). Let $(\pi, V)$ be a finite-dimensional representation of a compact group $G$. By a matrix coefficient for the representation $\pi$ we understand a map $G \to \mathbb{C}$ of the form
$$m_{v,\varphi}(g) = \varphi(\pi(g)v)$$
for fixed $v \in V$ and $\varphi \in V^*$.

If we pick a basis $\{e_1, \ldots, e_n\}$ for $V$ and let $\{\varepsilon_1, \ldots, \varepsilon_n\}$ denote the corresponding dual basis, then we see that the $m_{e_i,\varepsilon_j}(g) = \varepsilon_j(\pi(g)e_i)$ are precisely the entries of the matrix representation of $\pi(g)$; hence the name matrix coefficient.

If $V$ comes with an inner product $\langle\,,\rangle$, then by the Riesz Theorem all matrix coefficients are of the form $m_{v,w}(g) = \langle\pi(g)v, w\rangle$ for fixed $v, w \in V$. By Theorem 1.20 we can always assume that this is the case.
Denote by $C(G)_\pi$ the space of linear combinations of matrix coefficients of $\pi$. Since a matrix coefficient is obviously a continuous map, $C(G)_\pi \subseteq C(G) \subseteq L^2(G)$. Thus, we can take the inner product of two functions in $C(G)_\pi$. Note, however, that the elements of $C(G)_\pi$ need not all be matrix coefficients for $\pi$.
The following technical lemma is an important ingredient in the proof of the
Schur Orthogonality Relations which is the main result of this section.
Lemma 1.23. Let $(\pi, H)$ be a finite-dimensional unitary representation of a compact group $G$. Define the map $T_\pi : \operatorname{End}(H) \to C(G)$ by
$$T_\pi(A)(g) = \operatorname{Tr}(\pi(g) \circ A). \tag{1.5}$$
Then $C(G)_\pi = \operatorname{im} T_\pi$.

Proof. Given a matrix coefficient $m_{v,w}$, we should produce a linear map $A : H \to H$ such that $m_{v,w} = T_\pi(A)$. Consider the map $L_{v,w} : H \to H$ defined by $L_{v,w}(u) = \langle u, w\rangle v$; the claim is that this is the desired map $A$. To see this we need to calculate $\operatorname{Tr} L_{v,w}$, and we claim that the result is $\langle v, w\rangle$. Since both sides are sesquilinear in the indices ($L_{av+bv',w} = aL_{v,w} + bL_{v',w}$, and antilinearly in $w$), it is enough to check the claim on elements of an orthonormal basis $\{e_1, \ldots, e_n\}$ for $H$:
$$\operatorname{Tr} L_{e_i,e_i} = \sum_{k=1}^n \langle L_{e_i,e_i}e_k, e_k\rangle = \sum_{k=1}^n \langle e_k, e_i\rangle\langle e_i, e_k\rangle = 1,$$
while for $i \neq j$
$$\operatorname{Tr} L_{e_i,e_j} = \sum_{k=1}^n \langle L_{e_i,e_j}e_k, e_k\rangle = \sum_{k=1}^n \langle e_k, e_j\rangle\langle e_i, e_k\rangle = 0.$$
Thus, $\operatorname{Tr} L_{v,w} = \langle v, w\rangle$. Finally, since
$$L_{v,w} \circ \pi(g)u = \langle\pi(g)u, w\rangle v = \langle u, \pi(g^{-1})w\rangle v = L_{v,\pi(g^{-1})w}u,$$
we see that
$$T_\pi(L_{v,w})(g) = \operatorname{Tr}(\pi(g) \circ L_{v,w}) = \operatorname{Tr}(L_{v,w} \circ \pi(g)) = \langle v, \pi(g^{-1})w\rangle = \langle\pi(g)v, w\rangle = m_{v,w}(g).$$
Conversely, we should show that any map $T_\pi(A)$ is a linear combination of matrix coefficients. Some linear-algebraic manipulation should be enough to convince the reader that for any $A \in \operatorname{End}(H)$ we have $A = \sum_{i,j=1}^n \langle Ae_j, e_i\rangle L_{e_i,e_j}$ w.r.t. some orthonormal basis $\{e_1, \ldots, e_n\}$. But then we readily see
$$T_\pi(A)(g) = T_\pi\Big(\sum_{i,j=1}^n \langle Ae_j, e_i\rangle L_{e_i,e_j}\Big)(g) = \sum_{i,j=1}^n \langle Ae_j, e_i\rangle T_\pi(L_{e_i,e_j})(g) = \sum_{i,j=1}^n \langle Ae_j, e_i\rangle m_{e_i,e_j}(g).$$
Theorem 1.24 (Schur Orthogonality I). Let $(\pi_1, H_1)$ and $(\pi_2, H_2)$ be two unitary, irreducible finite-dimensional representations of a compact group $G$. If $\pi_1$ and $\pi_2$ are equivalent, then we have $C(G)_{\pi_1} = C(G)_{\pi_2}$. If they are not, then $C(G)_{\pi_1} \perp C(G)_{\pi_2}$ inside $L^2(G)$.
Before the proof, a few remarks on the integral of a vector-valued function are in order. Suppose that $f : G \to H$ is a continuous function into a finite-dimensional Hilbert space. Choosing a basis $\{e_1, \ldots, e_n\}$ for $H$ we can write $f$ in its components, $f = \sum_{i=1}^n f^i e_i$, which are also continuous, and define
$$\int_G f(g)\, dg := \sum_{i=1}^n \Big(\int_G f^i(g)\, dg\Big) e_i.$$
It is a simple change-of-basis calculation to verify that this is independent of the basis in question. Furthermore, one readily verifies that it is left-invariant and satisfies
$$\Big\langle\int_G f(g)\, dg,\ v\Big\rangle = \int_G \langle f(g), v\rangle\, dg \quad\text{and}\quad A\int_G f(g)\, dg = \int_G Af(g)\, dg$$
when $A \in \operatorname{End}(H)$.
Proof of Theorem 1.24. If $\pi_1$ and $\pi_2$ are equivalent, there exists an isomorphism $T : H_1 \to H_2$ such that $T\pi_1(g) = \pi_2(g)T$. For $A \in \operatorname{End}(H_1)$ we see that
$$T_{\pi_2}(TAT^{-1})(g) = \operatorname{Tr}(\pi_2(g)TAT^{-1}) = \operatorname{Tr}(T^{-1}\pi_2(g)TA) = \operatorname{Tr}(\pi_1(g)A) = T_{\pi_1}(A)(g).$$
Hence the map sending $T_{\pi_1}(A)$ to $T_{\pi_2}(TAT^{-1})$ is the identity $\operatorname{id} : C(G)_{\pi_1} \to C(G)_{\pi_2}$, proving that the two spaces are equal.

Now we show the second claim. Define for fixed $w_1 \in H_1$ and $w_2 \in H_2$ the map $S_{w_1,w_2} : H_1 \to H_2$ by
$$S_{w_1,w_2}(v) = \int_G \langle\pi_1(g)v, w_1\rangle\,\pi_2(g^{-1})w_2\, dg.$$
$S_{w_1,w_2}$ is in $\operatorname{Hom}_G(H_1, H_2)$, since by left-invariance
$$S_{w_1,w_2}(\pi_1(h)v) = \int_G \langle\pi_1(gh)v, w_1\rangle\,\pi_2(g^{-1})w_2\, dg = \int_G \langle\pi_1(g)v, w_1\rangle\,\pi_2(hg^{-1})w_2\, dg = \pi_2(h)\int_G \langle\pi_1(g)v, w_1\rangle\,\pi_2(g^{-1})w_2\, dg = \pi_2(h)S_{w_1,w_2}(v).$$
Assume that we can find two matrix coefficients $m_{v_1,w_1}$ and $m_{v_2,w_2}$ for $\pi_1$ and $\pi_2$ that are not orthogonal, i.e. we assume that
$$0 \neq \int_G m_{v_1,w_1}(g)\overline{m_{v_2,w_2}(g)}\, dg = \int_G \langle\pi_1(g)v_1, w_1\rangle\overline{\langle\pi_2(g)v_2, w_2\rangle}\, dg = \int_G \langle\pi_1(g)v_1, w_1\rangle\langle\pi_2(g^{-1})w_2, v_2\rangle\, dg.$$
From this we read off $\langle S_{w_1,w_2}v_1, v_2\rangle \neq 0$, so that $S_{w_1,w_2} \neq 0$. Since it is an intertwiner, Schur's Lemma tells us that $S_{w_1,w_2}$ is an isomorphism. By contraposition, the second claim is proved.
In the case of two matrix coefficients for the same representation, we have the following result.

Theorem 1.25 (Schur Orthogonality II). Let $(\pi, H)$ be a unitary, finite-dimensional irreducible representation of a compact group $G$. For two matrix coefficients $m_{v_1,w_1}$ and $m_{v_2,w_2}$ we have
$$\langle m_{v_1,w_1}, m_{v_2,w_2}\rangle = \frac{1}{\dim H}\langle v_1, v_2\rangle\langle w_2, w_1\rangle. \tag{1.6}$$
Proof. As in the proof of Theorem 1.24, define $S_{w_1,w_2} : H \to H$ by
$$S_{w_1,w_2}(v) = \int_G \langle\pi(g)v, w_1\rangle\,\pi(g^{-1})w_2\, dg = \int_G \pi(g^{-1})L_{w_2,w_1}\pi(g)v\, dg.$$
We see that
$$\langle m_{v_1,w_1}, m_{v_2,w_2}\rangle = \int_G \langle\pi(g)v_1, w_1\rangle\overline{\langle\pi(g)v_2, w_2\rangle}\, dg = \int_G \langle\pi(g)v_1, w_1\rangle\langle\pi(g^{-1})w_2, v_2\rangle\, dg = \Big\langle\int_G \langle\pi(g)v_1, w_1\rangle\,\pi(g^{-1})w_2\, dg,\ v_2\Big\rangle = \langle S_{w_1,w_2}v_1, v_2\rangle.$$
Furthermore, since $S_{w_1,w_2}$ commutes with $\pi(g)$, Schur's Lemma yields a complex number $\lambda(w_1, w_2)$ such that $S_{w_1,w_2} = \lambda(w_1, w_2)\operatorname{id}_H$. The operator $S_{w_1,w_2}$ is linear in $w_2$ and antilinear in $w_1$, hence $\lambda(w_1, w_2)$ is a sesquilinear form on $H$. We now take the trace on both sides of the equation $S_{w_1,w_2} = \lambda(w_1, w_2)\operatorname{id}_H$. The right-hand side is easy: it is just $\lambda(w_1, w_2)\dim H$. For the left-hand side we calculate, using $\operatorname{Tr} L_{w_2,w_1} = \langle w_2, w_1\rangle$ and the invariance of the trace under conjugation,
$$\operatorname{Tr} S_{w_1,w_2} = \int_G \operatorname{Tr}\big(\pi(g^{-1})L_{w_2,w_1}\pi(g)\big)\, dg = \int_G \operatorname{Tr} L_{w_2,w_1}\, dg = \langle w_2, w_1\rangle.$$
That is, we get $\lambda(w_1, w_2) = (\dim H)^{-1}\langle w_2, w_1\rangle$, and hence
$$S_{w_1,w_2} = (\dim H)^{-1}\langle w_2, w_1\rangle\operatorname{id}_H.$$
By substituting this into the equation $\langle m_{v_1,w_1}, m_{v_2,w_2}\rangle = \langle S_{w_1,w_2}v_1, v_2\rangle$ the desired result follows.
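For a finite group the Haar integral becomes the average over the group, and (1.6) can be checked numerically. The sketch below (our own example, not from the text) uses the 2-dimensional irreducible representation of the symmetric group $S_3$, realized by real orthogonal (hence unitary) matrices:

```python
import numpy as np

# Finite-group analogue of Schur Orthogonality II (Theorem 1.25): the
# normalized Haar integral is the average over the 6 elements of S3,
# realized as the dihedral group D3 acting on the plane.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])          # rotation by 120 degrees
f = np.array([[1.0, 0.0], [0.0, -1.0]])  # a reflection
G = [np.linalg.matrix_power(r, k) @ np.linalg.matrix_power(f, e)
     for k in range(3) for e in range(2)]  # all 6 group elements

def inner(x, y):
    # Hilbert-space inner product <x, y>, linear in the first argument
    return np.vdot(y, x)

def m(v, w, g):
    # matrix coefficient m_{v,w}(g) = <pi(g)v, w>
    return inner(g @ v, w)

rng = np.random.default_rng(0)
v1, w1, v2, w2 = (rng.standard_normal(2) + 1j * rng.standard_normal(2)
                  for _ in range(4))

# L2 inner product of two matrix coefficients: average over the group
lhs = sum(m(v1, w1, g) * np.conj(m(v2, w2, g)) for g in G) / len(G)
rhs = inner(v1, v2) * inner(w2, w1) / 2  # (1.6) with dim H = 2

print(np.allclose(lhs, rhs))
```

The agreement is exact up to floating-point precision, for any choice of the four vectors.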
1.4 Characters
Definition 1.26 (Class Function). For a group $G$, a class function is a function on $G$ which is constant on conjugacy classes. The sets of square-integrable resp. continuous class functions on $G$ are denoted $L^2(G, \mathrm{class})$ and $C(G, \mathrm{class})$.

It is not hard to see that the closure of $C(G, \mathrm{class})$ inside $L^2(G)$ is $L^2(G, \mathrm{class})$. Thus, $L^2(G, \mathrm{class})$ is a Hilbert space. Given an irreducible finite-dimensional representation, the set of continuous class functions inside $C(G)_\pi$ is very small:

Lemma 1.27. Let $(\pi, H)$ be a finite-dimensional irreducible unitary representation of a compact group $G$. Then the only class functions inside $C(G)_\pi$ are complex scalar multiples of $T_\pi(\operatorname{id}_H)$.
Proof. To formulate the requirement on a class function, consider the representation $\rho$ of $G$ on $C(G)$ given by $(\rho(g)f)(x) = f(g^{-1}xg)$; in terms of this, a function $f$ is a class function if and only if $\rho(g)f = f$ for all $g$.

For reasons which will become clear shortly, we introduce another representation $\Pi$ of $G$ on $\operatorname{End}(H)$ by
$$\Pi(g)A = \pi(g)A\pi(g^{-1}).$$
Equipping $\operatorname{End}(H)$ with the inner product $\langle A, B\rangle := \operatorname{Tr}(B^*A)$, it is easy to see that $\Pi$ becomes unitary. The linear map $T_\pi : \operatorname{End}(H) \to C(G)_\pi$ which we introduced in Lemma 1.23 is an intertwiner of the representations $\Pi$ and $\rho$:
$$T_\pi(\Pi(g)A)(x) = \operatorname{Tr}\big(\pi(x)\pi(g)A\pi(g^{-1})\big) = \operatorname{Tr}\big(\pi(g^{-1}xg)A\big) = \big(\rho(g)T_\pi(A)\big)(x).$$
$T_\pi$ was surjective by Lemma 1.23. To show injectivity we define $\widetilde{T}_\pi := \sqrt{\dim H}\,T_\pi$ and show that this is unitary. Since the linear maps $L_{v,w}$ span $\operatorname{End}(H)$, it is enough to show unitarity on these. But first we need some facts concerning $L_{v,w}$:
$$\langle L_{v,w}x, y\rangle = \langle\langle x, w\rangle v, y\rangle = \langle x, w\rangle\langle v, y\rangle = \langle x, \overline{\langle v, y\rangle}\,w\rangle = \langle x, \langle y, v\rangle w\rangle = \langle x, L_{w,v}y\rangle,$$
showing that $L_{v,w}^* = L_{w,v}$. Furthermore,
$$L_{w',v'} \circ L_{v,w}x = L_{w',v'}\big(\langle x, w\rangle v\big) = \langle x, w\rangle\langle v, v'\rangle w' = \langle v, v'\rangle L_{w',w}x.$$
With the inner product on $\operatorname{End}(H)$ these results now yield
$$\langle L_{v,w}, L_{v',w'}\rangle = \operatorname{Tr}(L_{w',v'} \circ L_{v,w}) = \operatorname{Tr}\big(\langle v, v'\rangle L_{w',w}\big) = \langle v, v'\rangle\langle w', w\rangle.$$
Since $T_\pi(L_{v,w})(x) = m_{v,w}(x)$, Schur Orthogonality II gives
$$\langle\widetilde{T}_\pi(L_{v,w}), \widetilde{T}_\pi(L_{v',w'})\rangle = \dim H\,\langle m_{v,w}, m_{v',w'}\rangle = \langle v, v'\rangle\langle w', w\rangle = \langle L_{v,w}, L_{v',w'}\rangle.$$
Thus $\widetilde{T}_\pi$ is unitary and in particular injective.

Now we come to the actual proof: let $\varphi \in C(G)_\pi$ be a class function. Since $\widetilde{T}_\pi$ is bijective, there is a unique $A \in \operatorname{End}(H)$ for which $\varphi = \widetilde{T}_\pi(A)$. That $\widetilde{T}_\pi$ intertwines $\Pi$ and $\rho$ leads to
$$\varphi(g^{-1}xg) = (\rho(g)\varphi)(x) = \big(\rho(g)\widetilde{T}_\pi(A)\big)(x) = \widetilde{T}_\pi(\Pi(g)A)(x) = \widetilde{T}_\pi\big(\pi(g)A\pi(g^{-1})\big)(x),$$
and since $\varphi$ was a class function, injectivity gives $\pi(g)A\pi(g^{-1}) = A$, i.e. $A$ intertwines $\pi$ with itself. But $\pi$ was irreducible, which by Schur's Lemma implies $A = \lambda\operatorname{id}_H$, and hence $\varphi$ is a scalar multiple of $T_\pi(\operatorname{id}_H)$.
In particular, there exists a unique class function $\varphi_0$ in $C(G)_\pi$ which is positive at $e$ and has $L^2$-norm 1: namely, writing $\varphi_0 = \widetilde{T}_\pi(A)$ we have
$$\|\varphi_0\|_2^2 = \|\widetilde{T}_\pi(A)\|_2^2 = \|A\|^2 = \operatorname{Tr}(A^*A),$$
so if $\varphi_0$ is to have norm 1 and be positive at $e$, then $A$ is forced to be $(\dim H)^{-1/2}\operatorname{id}_H$, so that $\varphi_0$ is given by $\varphi_0(g) = \operatorname{Tr}\pi(g)$. This is a function of particular interest:
Definition 1.28 (Character). Let $(\pi, V)$ be a finite-dimensional representation of a group $G$. By the character of $\pi$ we mean the function $\chi_\pi : G \to \mathbb{C}$ given by
$$\chi_\pi(g) = \operatorname{Tr}\pi(g).$$
If $\chi$ is the character of an irreducible representation, $\chi$ is called an irreducible character.

The character is a class function, and in the case of two representations $\pi_1$ and $\pi_2$ being equivalent via the intertwiner $T$, i.e. $\pi_2(g) = T\pi_1(g)T^{-1}$, we have $\chi_{\pi_1} = \chi_{\pi_2}$. Thus, equivalent representations have the same character. Actually, the converse is also true; we show that at the end of the section.

Suppose that $G$ is a topological group and that $H$ is a Hilbert space with orthonormal basis $\{e_1, \ldots, e_n\}$. Then we can calculate the trace as
$$\operatorname{Tr}\pi(g) = \sum_{i=1}^n \langle\pi(g)e_i, e_i\rangle,$$
which shows that $\chi_\pi \in C(G)_\pi$. In due course we will prove some powerful orthogonality relations for irreducible characters. But first we will see that the character behaves nicely with respect to the direct sum and tensor product operations on representations.
Proposition 1.29. Let $(\pi_1, V_1)$ and $(\pi_2, V_2)$ be two finite-dimensional representations of the group $G$. The characters of $\pi_1 \oplus \pi_2$ and $\pi_1 \otimes \pi_2$ are then given by
$$\chi_{\pi_1\oplus\pi_2}(g) = \chi_{\pi_1}(g) + \chi_{\pi_2}(g) \quad\text{and}\quad \chi_{\pi_1\otimes\pi_2}(g) = \chi_{\pi_1}(g)\chi_{\pi_2}(g). \tag{1.7}$$
Proof. Equip $V_1$ and $V_2$ with inner products and pick orthonormal bases $(e_i)$ and $(f_j)$ for $V_1$ and $V_2$ respectively. Then the vectors $(e_i, 0), (0, f_j)$ form an orthonormal basis for $V_1 \oplus V_2$ w.r.t. the inner product
$$\langle(v_1, v_2), (w_1, w_2)\rangle := \langle v_1, w_1\rangle + \langle v_2, w_2\rangle.$$
Thus we see
$$\chi_{\pi_1\oplus\pi_2}(g) = \operatorname{Tr}(\pi_1\oplus\pi_2)(g) = \sum_{i=1}^m \big\langle(\pi_1\oplus\pi_2)(g)(e_i, 0), (e_i, 0)\big\rangle + \sum_{j=1}^n \big\langle(\pi_1\oplus\pi_2)(g)(0, f_j), (0, f_j)\big\rangle = \sum_{i=1}^m \langle\pi_1(g)e_i, e_i\rangle + \sum_{j=1}^n \langle\pi_2(g)f_j, f_j\rangle = \chi_{\pi_1}(g) + \chi_{\pi_2}(g).$$
Likewise, the vectors $e_i \otimes f_j$ constitute an orthonormal basis for $V_1 \otimes V_2$ w.r.t. the inner product
$$\langle v_1\otimes v_2, w_1\otimes w_2\rangle := \langle v_1, w_1\rangle\langle v_2, w_2\rangle,$$
and hence
$$\chi_{\pi_1\otimes\pi_2}(g) = \operatorname{Tr}(\pi_1\otimes\pi_2)(g) = \sum_{i,j}\big\langle(\pi_1\otimes\pi_2)(g)(e_i\otimes f_j), e_i\otimes f_j\big\rangle = \sum_{i,j}\langle\pi_1(g)e_i, e_i\rangle\langle\pi_2(g)f_j, f_j\rangle = \Big(\sum_{i=1}^m \langle\pi_1(g)e_i, e_i\rangle\Big)\Big(\sum_{j=1}^n \langle\pi_2(g)f_j, f_j\rangle\Big) = \chi_{\pi_1}(g)\chi_{\pi_2}(g).$$
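The two identities in (1.7) are just the familiar trace rules for block-diagonal and Kronecker-product matrices, which can be sanity-checked numerically (the matrices below are arbitrary stand-ins for $\pi_1(g)$ and $\pi_2(g)$):

```python
import numpy as np

# Trace of a direct (block-diagonal) sum is the sum of traces; trace of a
# Kronecker (tensor) product is the product of traces -- cf. (1.7).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))  # plays the role of pi1(g) on V1
B = rng.standard_normal((2, 2))  # plays the role of pi2(g) on V2

direct_sum = np.block([[A, np.zeros((3, 2))],
                       [np.zeros((2, 3)), B]])
tensor = np.kron(A, B)

print(np.isclose(np.trace(direct_sum), np.trace(A) + np.trace(B)))
print(np.isclose(np.trace(tensor), np.trace(A) * np.trace(B)))
```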
The following lemma, stating the promised orthogonality relations for characters, shows that the irreducible characters form an orthonormal set in $C(G)$. The Schur Orthogonality Relations are important ingredients in the proof; hence from now on we need the groups to be compact.
Lemma 1.30. Let $(\pi_1, V_1)$ and $(\pi_2, V_2)$ be two finite-dimensional irreducible representations of a compact group $G$. Then the following hold:

1) $\pi_1 \cong \pi_2$ implies $\langle\chi_{\pi_1}, \chi_{\pi_2}\rangle = 1$.

2) $\pi_1 \ncong \pi_2$ implies $\langle\chi_{\pi_1}, \chi_{\pi_2}\rangle = 0$.

Proof. In the first case, we have a bijective intertwiner $T : V_1 \to V_2$. Choose an inner product on $V_1$ and an orthonormal basis $(e_i)$ for $V_1$. Define an inner product on $V_2$ by declaring $T$ to be unitary. Then $(Te_i)$ is an orthonormal basis for $V_2$. Let $n = \dim V_1 = \dim V_2$. The expressions $\chi_{\pi_1}(g) = \sum_{i=1}^n\langle\pi_1(g)e_i, e_i\rangle$ and $\chi_{\pi_2}(g) = \sum_{j=1}^n\langle\pi_2(g)Te_j, Te_j\rangle$ along with (1.6) yield
$$\langle\chi_{\pi_1}, \chi_{\pi_2}\rangle = \sum_{i,j=1}^n \int_G \langle\pi_1(g)e_i, e_i\rangle\overline{\langle\pi_2(g)Te_j, Te_j\rangle}\, dg = \sum_{i,j=1}^n \int_G \langle\pi_1(g)e_i, e_i\rangle\overline{\langle T\pi_1(g)e_j, Te_j\rangle}\, dg = \sum_{i,j=1}^n \int_G \langle\pi_1(g)e_i, e_i\rangle\overline{\langle\pi_1(g)e_j, e_j\rangle}\, dg = \frac{1}{n}\sum_{i,j=1}^n \langle e_i, e_j\rangle\langle e_j, e_i\rangle = \frac{1}{n}\sum_{i=1}^n 1 = 1.$$
In the second case, if $\pi_1$ and $\pi_2$ are non-equivalent then by Theorem 1.24 we have $C(G)_{\pi_1} \perp C(G)_{\pi_2}$. Since $\chi_{\pi_1} \in C(G)_{\pi_1}$ and $\chi_{\pi_2} \in C(G)_{\pi_2}$, the result follows.
This leads to the main result on characters:
Theorem 1.31. Let $\pi$ be a finite-dimensional representation of a compact group $G$. Then $\pi$ decomposes according to
$$\pi \cong \bigoplus_{\pi_i\in\widehat{G}} \langle\chi_\pi, \chi_{\pi_i}\rangle\,\pi_i.$$
Proof. Proposition 1.16 says that $\pi \cong \bigoplus m_i\pi_i$ where each $\pi_i$ is irreducible and $m_i$ is the number of times that $\pi_i$ occurs in $\pi$. From Proposition 1.29 it follows that $\chi_\pi = \sum_i m_i\chi_{\pi_i}$, and hence by orthonormality of the irreducible characters (Lemma 1.30) that $m_i = \langle\chi_\pi, \chi_{\pi_i}\rangle$.
Example 1.32. A very simple example to illustrate this is the following. Consider the 2-dimensional representation $\pi$ of $\mathbb{T}$ given by
$$x \mapsto \frac{1}{2}\begin{pmatrix} e^{2\pi inx} + e^{2\pi imx} & -e^{2\pi inx} + e^{2\pi imx} \\ -e^{2\pi inx} + e^{2\pi imx} & e^{2\pi inx} + e^{2\pi imx} \end{pmatrix}$$
for $n, m \in \mathbb{Z}$. It is easily seen to be a continuous homomorphism $\mathbb{T} \to \operatorname{Aut}(\mathbb{C}^2)$ with character $\chi_\pi(x) = e^{2\pi imx} + e^{2\pi inx}$. But the two terms are irreducible characters for $\mathbb{T}$, cf. Example 1.15, and by Theorem 1.31 we have $\pi \cong \rho_n \oplus \rho_m$.
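For a finite group the same recipe works verbatim, with the average over the group in place of the Haar integral. As an illustration (our own example, not from the text), the 3-dimensional permutation representation of $S_3$ can be decomposed against the known character table of $S_3$ (trivial, sign, and 2-dimensional standard irreducible):

```python
from itertools import permutations

# Multiplicities m_i = <chi_pi, chi_irr> for the permutation representation
# of S3, with the Haar integral replaced by the group average.
perms = list(permutations(range(3)))

def parity(p):
    # sign of a permutation via counting inversions
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return (-1) ** inv

chi_perm = [sum(1 for i, j in enumerate(p) if i == j) for p in perms]  # fixed points
chi_triv = [1] * len(perms)
chi_sign = [parity(p) for p in perms]
chi_std  = [f - 1 for f in chi_perm]  # character of the 2-dim standard irrep

def mult(chi, chi_irr):
    # <chi, chi_irr> = (1/|G|) sum_g chi(g) * conj(chi_irr(g)); all values real here
    return sum(a * b for a, b in zip(chi, chi_irr)) / len(perms)

print([mult(chi_perm, c) for c in (chi_triv, chi_sign, chi_std)])  # → [1.0, 0.0, 1.0]
```

The multiplicities $1, 0, 1$ say that the permutation representation is trivial $\oplus$ standard; and $\langle\chi_{\mathrm{std}}, \chi_{\mathrm{std}}\rangle = 1$ confirms irreducibility of the standard representation, in the spirit of Corollary 1.33.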
Corollary 1.33. For finite-dimensional representations $\pi_1$, $\pi_2$ and $\pi$ of a compact group we have:

1) $\pi_1 \cong \pi_2$ if and only if $\chi_{\pi_1} = \chi_{\pi_2}$.

2) $\pi$ is irreducible if and only if $\langle\chi_\pi, \chi_\pi\rangle = 1$.

Proof. For the first statement, the only-if part is true by the remarks following the definition of the character. To see the converse, assume that $\chi_{\pi_1} = \chi_{\pi_2}$. Then for each irreducible representation $\rho$ we must have $\langle\chi_{\pi_1}, \chi_\rho\rangle = \langle\chi_{\pi_2}, \chi_\rho\rangle$, and therefore $\pi_1$ and $\pi_2$ are equivalent to the same decomposition into irreducible representations; hence they are equivalent.

If $\pi$ is irreducible then Lemma 1.30 states that $\langle\chi_\pi, \chi_\pi\rangle = 1$. Conversely, assume $\langle\chi_\pi, \chi_\pi\rangle = 1$ and decompose $\pi$ into irreducibles: $\pi \cong \bigoplus m_i\pi_i$. Orthonormality of the irreducible characters again gives $\langle\chi_\pi, \chi_\pi\rangle = \sum_i m_i^2$. From this it is immediate that precisely one $m_i$ is 1 while the rest are 0, i.e. $\pi \cong \pi_i$. Therefore $\pi$ is irreducible.

Considering the representations $\rho$ and $\overline{\rho}$ from Example 1.11, we see that the corresponding characters satisfy $\chi_{\overline{\rho}} = \overline{\chi_\rho}$, and since $\chi_\rho$ is genuinely complex-valued, the two characters are certainly not equal. Hence the representations are inequivalent.
1.5 The Peter-Weyl Theorem
The single most important theorem in the representation theory of compact
topological groups is the Peter-Weyl Theorem. It has numerous consequences,
some of which we will mention at the end of this section.
Theorem 1.34 (Peter-Weyl I). Let $G$ be a compact group. Then the subspace
$$M(G) := \bigoplus_{\pi\in\widehat{G}} C(G)_\pi$$
of $C(G)$ is dense in $L^2(G)$.

In other words, the linear span of all matrix coefficients of the finite-dimensional irreducible representations of $G$ is dense in $L^2(G)$.
Proof. We want to show that $\overline{M(G)} = L^2(G)$. We prove it by contradiction and assume that $M(G)^\perp \neq 0$. Now, suppose that $M(G)^\perp$ (which is a closed subspace of $L^2(G)$ and hence a Hilbert space itself) contains a finite-dimensional $R$-invariant subspace $W$ ($R$ is the right-regular representation) such that $R|_W$ is irreducible (we prove below that this is a consequence of the assumption $M(G)^\perp \neq 0$). Then we can pick a finite orthonormal basis $(\varphi_i)$ for $W$, and then for $0 \neq f \in W$
$$f(x) = \sum_{i=1}^N \langle f, \varphi_i\rangle\varphi_i(x).$$
This is a standard result in Hilbert space theory. Then we see that
$$f(g) = (R|_W(g)f)(e) = \sum_{i=1}^N \langle R|_W(g)f, \varphi_i\rangle\varphi_i(e).$$
Since $R|_W$ is a finite-dimensional irreducible representation, the map $g \mapsto \langle R|_W(g)f, \varphi_i\rangle$ is a matrix coefficient. But this means that $f \in M(G)$, hence a contradiction.
Now let us prove the existence of the finite-dimensional right-invariant subspace. Let $f_0 \in M(G)^\perp$ be nonzero. As $C(G)$ is dense in $L^2(G)$, we can find a $\varphi \in C(G)$ such that $\langle\widehat{\varphi}, f_0\rangle \neq 0$, where $\widehat{\varphi}(g) = \varphi(g^{-1})$. Define $K \in C(G\times G)$ by $K(x, y) = \varphi(xy^{-1})$ and let $T : L^2(G) \to L^2(G)$ be the integral operator with $K$ as its kernel:
$$Tf(x) = \int_G K(x, y)f(y)\, dy.$$
According to functional analysis, this is a well-defined compact operator, and it commutes with $R(g)$:
$$T \circ R(g)f(x) = \int_G K(x, y)R(g)f(y)\, dy = \int_G \varphi(xy^{-1})f(yg)\, dy = \int_G \varphi(xgy^{-1})f(y)\, dy = \int_G K(xg, y)f(y)\, dy = R(g)(Tf)(x).$$
In the third equality we exploited the invariance of the measure under the right translation $y \mapsto yg^{-1}$.

Since $R(g)$ is unitary, the adjoint $T^*$ of $T$ also commutes with $R(g)$:
$$T^* \circ R(g) = T^* \circ R(g^{-1})^* = \big(R(g^{-1}) \circ T\big)^* = \big(T \circ R(g^{-1})\big)^* = R(g) \circ T^*.$$
Thus, the self-adjoint compact operator $T^*T$ commutes with $R(g)$. The Spectral Theorem for compact operators yields a direct sum decomposition of $L^2(G)$:
$$L^2(G) = \ker(T^*T) \oplus \Big(\bigoplus_{\lambda\neq 0} E_\lambda\Big),$$
where all the eigenspaces $E_\lambda$ are finite-dimensional. They are also $R$-invariant, for if $f \in E_\lambda$ then
$$T^*T(R(g)f) = R(g)(T^*T)f = R(g)(\lambda f) = \lambda(R(g)f), \tag{1.8}$$
i.e. $R(g)f \in E_\lambda$. Actually $M(G)$ is $R$-invariant: all its functions are of the form $\sum_{i=1}^n a_i\langle\pi_i(x)\varphi_i, \psi_i\rangle$, and since
$$R(g)f(x) = f(xg) = \sum_{i=1}^n a_i\langle\pi_i(x)(\pi_i(g)\varphi_i), \psi_i\rangle,$$
we see that $R(g)f \in M(G)$. But then also $M(G)^\perp$ is invariant. If $P : L^2(G) \to M(G)^\perp$ denotes the orthogonal projection, then by Lemma 1.8 $P$ commutes with $R(g)$, and a calculation like (1.8) reveals that the spaces $PE_\lambda$ are all $R$-invariant subspaces of $M(G)^\perp$. These are very good candidates for the subspace we wanted: they are finite-dimensional and $R$-invariant, so we can restrict $R$ to a representation on them. We just need to verify that at least one of them is nonzero. So assume that all $PE_\lambda = 0$. This means, by definition of $P$, that $\bigoplus_\lambda E_\lambda \subseteq M(G)$ and hence that $M(G)^\perp \subseteq \big(\bigoplus_\lambda E_\lambda\big)^\perp = \ker T^*T \subseteq \ker T$, where the last inclusion follows since $f \in \ker T^*T$ implies $0 = \langle T^*Tf, f\rangle = \langle Tf, Tf\rangle$, i.e. $Tf = 0$. But applied to the $f_0 \in M(G)^\perp$ we picked at the beginning, we have
$$Tf_0(e) = \int_G \varphi(ey^{-1})f_0(y)\, dy = \int_G \widehat{\varphi}(y)f_0(y)\, dy = \langle\widehat{\varphi}, f_0\rangle \neq 0,$$
and as $Tf_0$ is continuous, $Tf_0 \neq 0$ as an $L^2$ function. Thus, we must have at least one $\lambda$ for which $PE_\lambda \neq 0$. If $R$ restricted to this space is not irreducible, it contains a nontrivial subspace on which it is. Thus, we have proved the result.
What we actually showed in the course of the proof is that for each nonzero $f$ we can find a finite-dimensional subspace $U \subseteq L^2(G)$ which is $R$-invariant and restricted to which $R$ is irreducible. We can show exactly the same thing for the left regular representation $L$; all we need to alter is the definition of $K$, which should now be $K(x, y) = \varphi(x^{-1}y)$. This observation will come in useful now, when we prove the promised generalization of Corollary 1.21:
Theorem 1.35 (Peter-Weyl II). Let $(\pi, H)$ be any (possibly infinite-dimensional) representation of a compact group $G$ on a Hilbert space $H$. Then $\pi \cong \bigoplus \pi_i$ where the $\pi_i$ are finite-dimensional irreducible representations of $G$, i.e. $\pi$ is completely reducible.
Proof. By virtue of Theorem 1.20 we can choose a new inner product on $H$ turning $\pi$ into a unitary representation.

Then we consider the set $\Sigma$ of collections of finite-dimensional invariant subspaces of $H$ restricted to which $\pi$ is irreducible, i.e. an element $(U_i)_{i\in I}$ of $\Sigma$ is a collection of subspaces of $H$ satisfying the mentioned properties. We equip $\Sigma$ with the ordering $\subseteq$ defined by $(U_i)_{i\in I} \subseteq (U_j)_{j\in J}$ if $\bigoplus_i U_i \subseteq \bigoplus_j U_j$. It is easily seen that $(\Sigma, \subseteq)$ is inductively ordered, hence Zorn's Lemma yields a maximal element $(V_i)_{i\in I}$. To show the desired conclusion, namely that
$$H = \bigoplus_{i\in I} V_i,$$
we assume that $W := \big(\bigoplus V_i\big)^\perp \neq 0$. We have a contradiction if we can find in $W$ a finite-dimensional $\pi$-invariant subspace on which $\pi$ is irreducible, so that is our goal.
First we remark that $W$ is $\pi$-invariant, since it is the orthogonal complement of an invariant subspace; thus we can restrict $\pi$ to a representation on $W$. Now we will define an intertwiner $T : W \to L^2(G)$ between $\pi|_W$ and the left regular representation $L$. Fix a unit vector $x_0 \in W$ and define $(Ty)(g) = \langle y, \pi(g)x_0\rangle$. $Ty : G \to \mathbb{C}$ is clearly continuous, and since $Tx_0(e) = \|x_0\|^2 \neq 0$, $Tx_0$ is nonzero in $L^2(G)$; hence $T$ is nonzero as a linear map. $T$ is continuous, as the Cauchy-Schwarz inequality and unitarity of $\pi(g)$ give
$$|Ty(g)| = |\langle y, \pi(g)x_0\rangle| \leq \|y\|\|x_0\|,$$
that is, $\|T\| \leq \|x_0\|$. $T$ is an intertwiner:
$$(T \circ \pi(h))y(g) = \langle\pi(h)y, \pi(g)x_0\rangle = \langle y, \pi(h^{-1}g)x_0\rangle = \big(L(h) \circ T\big)y(g).$$
The adjoint $T^* : L^2(G) \to W$ (which is nonzero, as $T$ is) is an intertwiner as well, for taking the adjoint of the above equation yields $\pi(h)^* \circ T^* = T^* \circ L(h)^*$ for all $h$. Using unitarity we get $\pi(h^{-1}) \circ T^* = T^* \circ L(h^{-1})$, i.e. $T^*$ is indeed an intertwiner.
As $T^*$ is nonzero, there is an $f_0 \in L^2(G)$ such that $T^*f_0 \neq 0$. But by the remark following the proof of the first Peter-Weyl Theorem we can find a nontrivial finite-dimensional $L$-invariant subspace $U \subseteq L^2(G)$ containing $f_0$. Then $T^*U \subseteq W$ is finite-dimensional, nontrivial (it contains $T^*f_0$) and $\pi$-invariant, for if $T^*f \in T^*U$, then $\pi(h) \circ T^*f = T^* \circ L(h)f \in T^*U$. Inside $T^*U$ we can now find a subspace on which $\pi$ is irreducible; hence the contradiction.
An immediate corollary of this is:
Corollary 1.36. An irreducible representation of a compact group is automat-
ically finite-dimensional.
In particular the second Peter-Weyl Theorem says that the left regular rep-
resentation is completely reducible. In many textbooks this is the statement of
the Peter-Weyl Theorem. The proof of this is not much different from the proof
we gave for the first version of the Peter-Weyl Theorem, and from this it would
also be possible to derive our second version of the Peter-Weyl Theorem. I chose
the version with matrix coefficients since it can be used immediately to provide
elegant proofs of some results in Fourier theory, which we now discuss.
Theorem 1.37. Let $G$ be a compact group. The set of irreducible characters constitutes an orthonormal basis for the Hilbert space $L^2(G, \mathrm{class})$. In particular, every square-integrable class function $f$ on $G$ can be written
$$f = \sum_{\pi\in\widehat{G}} \langle f, \chi_\pi\rangle\chi_\pi,$$
the convergence being $L^2$-convergence.
Proof. Let $P_\pi : L^2(G) \to C(G)_\pi$ denote the orthogonal projection onto $C(G)_\pi$. It is not hard to see that $P_\pi$ maps class functions to class functions, hence $P_\pi(L^2(G, \mathrm{class})) \subseteq C(G)_\pi \cap C(G, \mathrm{class})$, the last space being the 1-dimensional $\mathbb{C}\chi_\pi$ by Lemma 1.27. Hence the space
$$M(G, \mathrm{class}) := M(G) \cap C(G, \mathrm{class}) = \bigoplus_{\pi\in\widehat{G}} C(G)_\pi \cap C(G, \mathrm{class})$$
has the set of irreducible characters of $G$ as an orthonormal basis. To see that the characters also form an orthonormal basis for the Hilbert space $L^2(G, \mathrm{class})$, assume that there exists an $f \in L^2(G, \mathrm{class})$ which is orthogonal to all the characters. Then, since $P_\pi f$ is just a scalar multiple of $\chi_\pi$, we see
$$P_\pi f = \langle P_\pi f, \chi_\pi\rangle\chi_\pi = \langle f, \chi_\pi\rangle\chi_\pi = 0,$$
where we exploited the self-adjointness of the projection $P_\pi$ in the middle equality. Thus we must have $f \in M(G)^\perp$, which by Peter-Weyl I implies $f = 0$.
Specializing to the circle group $\mathbb{T}$ yields the existence of Fourier series. First of all, since $\mathbb{T}$ is abelian, all functions defined on it are class functions, and functions on $\mathbb{T}$ are nothing but functions on $\mathbb{R}$ with period 1. Specializing the above theorem to this case then states that the irreducible characters $e^{2\pi inx}$ constitute an orthonormal basis for $L^2(\mathbb{T}, \mathrm{class})$ and that we have an expansion of any such square-integrable class function
$$f = \sum_{n\in\mathbb{Z}} c_n(f)e^{2\pi inx} \tag{1.9}$$
where $c_n(f)$ is the $n$'th Fourier coefficient
$$c_n(f) = \langle f, \rho_n\rangle = \int_0^1 f(x)e^{-2\pi inx}\, dx.$$
It is important to stress that the convergence in (1.9) is only $L^2$-convergence. If we put some restrictions on $f$, such as differentiability or continuous differentiability, we can achieve pointwise or uniform convergence of the series. We will not travel further into this realm of harmonic analysis.
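The $L^2$ nature of the convergence in (1.9) is easy to observe numerically. The sketch below (the square wave and the grid sizes are our own choices for the demonstration) computes partial Fourier sums of a discontinuous function and checks that the $L^2$ error decreases as more characters are included, even though pointwise convergence at the jump fails:

```python
import numpy as np

# Partial sums of the Fourier series (1.9) for a square wave of period 1,
# with c_n approximated by a Riemann sum on a uniform grid.
x = np.linspace(0, 1, 4096, endpoint=False)
f = np.where(x < 0.5, 1.0, -1.0)  # a discontinuous class function on T

def partial_sum(N):
    s = np.zeros_like(x, dtype=complex)
    for n in range(-N, N + 1):
        cn = np.mean(f * np.exp(-2j * np.pi * n * x))  # c_n(f)
        s += cn * np.exp(2j * np.pi * n * x)
    return s.real

errors = [np.sqrt(np.mean((f - partial_sum(N)) ** 2)) for N in (1, 4, 16, 64)]
print(all(e1 > e2 for e1, e2 in zip(errors, errors[1:])))  # errors strictly decrease
```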
Chapter 2
Structure Theory for Lie
Algebras
2.1 Basic Notions
Although we succeeded in Chapter 1 in proving some fairly strong results, we must realize that there is a limit to how much we can say about topological groups, compact or not. For instance the Peter-Weyl Theorem tells us that every representation of a compact group is completely reducible, but if we don't know the irreducible representations, then what's the use? Therefore we shift our focus to Lie groups. The central difference, when regarding Lie groups, is of course that we have their Lie algebras at our disposal. Often these are much easier to handle than the groups themselves, while at the same time saying quite a lot about the group. Therefore we need to study Lie algebras and their representation theory.

In this section we focus solely on Lie algebras, developing the tools necessary for the representation theory of the later chapters. We will only consider Lie algebras over the fields $\mathbb{R}$ and $\mathbb{C}$ (commonly denoted $\mathbb{K}$), although many of the results in this chapter carry over to arbitrary (possibly algebraically closed) fields of characteristic 0.
Definition 2.1 (Lie Algebra). A Lie algebra $\mathfrak{g}$ over $\mathbb{K}$ is a $\mathbb{K}$-vector space $\mathfrak{g}$ equipped with a bilinear map $[\,,] : \mathfrak{g}\times\mathfrak{g} \to \mathfrak{g}$ satisfying

1) $[X, Y] = -[Y, X]$ (antisymmetry),

2) $[[X, Y], Z] + [[Y, Z], X] + [[Z, X], Y] = 0$ (Jacobi identity).

A Lie subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ is a subspace of $\mathfrak{g}$ which is closed under the bracket, i.e. for which $[\mathfrak{h}, \mathfrak{h}] \subseteq \mathfrak{h}$. A Lie subalgebra $\mathfrak{h}$ for which $[\mathfrak{h}, \mathfrak{g}] \subseteq \mathfrak{h}$ is called an ideal.

In this thesis all Lie algebras will be finite-dimensional unless otherwise specified.
Example 2.2. The first examples of Lie algebras are algebras of matrices. By $\mathfrak{gl}(n, \mathbb{R})$ and $\mathfrak{gl}(n, \mathbb{C})$ we denote the sets of real resp. complex $n\times n$ matrices equipped with the commutator bracket. It is trivial to verify that these are indeed Lie algebras. The list below contains the definitions of some of the classical Lie algebras. They are all subalgebras of the two Lie algebras just mentioned, and it is a matter of routine calculation to verify that these examples are indeed closed under the commutator bracket:
$$\mathfrak{sl}(n, \mathbb{R}) = \{X \in \mathfrak{gl}(n, \mathbb{R}) \mid \operatorname{Tr} X = 0\}$$
$$\mathfrak{sl}(n, \mathbb{C}) = \{X \in \mathfrak{gl}(n, \mathbb{C}) \mid \operatorname{Tr} X = 0\}$$
$$\mathfrak{so}(n) = \{X \in \mathfrak{gl}(n, \mathbb{R}) \mid X + X^t = 0\}$$
$$\mathfrak{so}(m, n) = \{X \in \mathfrak{gl}(m+n, \mathbb{R}) \mid X^tI_{m,n} + I_{m,n}X = 0\}$$
$$\mathfrak{so}(n, \mathbb{C}) = \{X \in \mathfrak{gl}(n, \mathbb{C}) \mid X + X^t = 0\}$$
$$\mathfrak{u}(n) = \{X \in \mathfrak{gl}(n, \mathbb{C}) \mid X + X^* = 0\}$$
$$\mathfrak{u}(m, n) = \{X \in \mathfrak{gl}(m+n, \mathbb{C}) \mid X^*I_{m,n} + I_{m,n}X = 0\}$$
$$\mathfrak{su}(n) = \{X \in \mathfrak{gl}(n, \mathbb{C}) \mid X + X^* = 0,\ \operatorname{Tr} X = 0\}$$
$$\mathfrak{su}(m, n) = \{X \in \mathfrak{gl}(m+n, \mathbb{C}) \mid X^*I_{m,n} + I_{m,n}X = 0,\ \operatorname{Tr} X = 0\}$$
where $I_{m,n}$ is the block-diagonal matrix whose first $m\times m$ block is the identity and whose last $n\times n$ block is minus the identity.

Another interesting example is the endomorphism algebra $\operatorname{End}_{\mathbb{K}}(V)$ for some $\mathbb{K}$-vector space $V$, finite-dimensional or not. Equipped with the commutator bracket $[A, B] = AB - BA$ this becomes a Lie algebra over $\mathbb{K}$, as one can check. To emphasize the Lie algebra structure, it is sometimes denoted $\mathfrak{gl}(V)$. We stick to $\operatorname{End}(V)$.
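As a quick numerical sanity check (the basis matrices below are a standard choice of ours, not taken from the text), one can verify on $\mathfrak{su}(2)$ both closure under the commutator bracket and the Jacobi identity from Definition 2.1:

```python
import numpy as np

# A basis of su(2): traceless anti-Hermitian 2x2 complex matrices.
X1 = np.array([[1j, 0], [0, -1j]])
X2 = np.array([[0, 1], [-1, 0]], dtype=complex)
X3 = np.array([[0, 1j], [1j, 0]])
basis = [X1, X2, X3]

def bracket(A, B):
    return A @ B - B @ A

def in_su2(X):
    # su(2): anti-Hermitian and traceless
    return np.allclose(X, -X.conj().T) and np.isclose(np.trace(X), 0)

closed = all(in_su2(bracket(A, B)) for A in basis for B in basis)
jacobi = all(np.allclose(bracket(bracket(A, B), C) + bracket(bracket(B, C), A)
                         + bracket(bracket(C, A), B), 0)
             for A in basis for B in basis for C in basis)
print(closed and jacobi)
```

By bilinearity it suffices to check both properties on basis elements, which is exactly what the two loops do.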
We always have the trivial ideals in $\mathfrak{g}$, namely 0 and $\mathfrak{g}$ itself. If $\mathfrak{g}$ is a Lie algebra and $\mathfrak{h}$ is an ideal in $\mathfrak{g}$, then we can form the quotient algebra $\mathfrak{g}/\mathfrak{h}$ in the following way: the underlying vector space is the vector space $\mathfrak{g}/\mathfrak{h}$, and this we equip with the bracket
$$[X + \mathfrak{h}, Y + \mathfrak{h}] = [X, Y] + \mathfrak{h}.$$
Using the ideal property it is easily checked that this is indeed well-defined and satisfies the properties of a Lie algebra.
Definition 2.3 (Lie Algebra Homomorphism). Let $\mathfrak{g}$ and $\mathfrak{g}'$ be Lie algebras over $\mathbb{K}$. A $\mathbb{K}$-linear map $\varphi : \mathfrak{g} \to \mathfrak{g}'$ is called a Lie algebra homomorphism if it satisfies $[\varphi(X), \varphi(Y)] = \varphi([X, Y])$ for all $X, Y \in \mathfrak{g}$. If $\varphi$ is bijective, it is called a Lie algebra isomorphism.

An example of a Lie algebra homomorphism is the canonical map $\kappa : \mathfrak{g} \to \mathfrak{g}/\mathfrak{h}$ mapping $X$ to $X + \mathfrak{h}$. It is easy to see that the image of a Lie algebra homomorphism is a Lie subalgebra of $\mathfrak{g}'$ and that the kernel of a homomorphism is an ideal in $\mathfrak{g}$. Another interesting example is the so-called adjoint representation $\operatorname{ad} : \mathfrak{g} \to \operatorname{End}(\mathfrak{g})$ given by $\operatorname{ad}(X)Y = [X, Y]$. We see that $\operatorname{ad}(X)$ is linear, hence an endomorphism of $\mathfrak{g}$, and that the map $X \mapsto \operatorname{ad}(X)$ is linear. By virtue of the Jacobi identity it respects the bracket operation, and thus $\operatorname{ad}$ is a Lie algebra homomorphism.
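The adjoint representation can also be inspected numerically. The sketch below (again with a hypothetical, standard basis of $\mathfrak{su}(2)$, chosen by us) computes the matrix of $\operatorname{ad}(X)$ in that basis and checks the homomorphism property $\operatorname{ad}([X, Y]) = [\operatorname{ad}(X), \operatorname{ad}(Y)]$:

```python
import numpy as np

X1 = np.array([[1j, 0], [0, -1j]])
X2 = np.array([[0, 1], [-1, 0]], dtype=complex)
X3 = np.array([[0, 1j], [1j, 0]])
basis = [X1, X2, X3]

def bracket(A, B):
    return A @ B - B @ A

def coords(X):
    # coordinates of X in the chosen basis, found by solving a linear system
    # on the flattened matrices; for X in su(2) the solution is real
    B = np.array([b.ravel() for b in basis]).T
    c, *_ = np.linalg.lstsq(B, X.ravel(), rcond=None)
    return c.real

def ad(X):
    # matrix of ad(X): the j-th column holds the coordinates of [X, b_j]
    return np.column_stack([coords(bracket(X, b)) for b in basis])

# ad is a Lie algebra homomorphism: ad([X, Y]) = [ad(X), ad(Y)]
homomorphism = all(np.allclose(ad(bracket(A, B)), bracket(ad(A), ad(B)))
                   for A in basis for B in basis)
print(homomorphism)
```

This is precisely the Jacobi identity in matrix form: expanding both sides of $\operatorname{ad}([X, Y])Z = [\operatorname{ad}(X), \operatorname{ad}(Y)]Z$ reproduces the identity from Definition 2.1.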
In analogy with vector spaces and rings we have the following.

Proposition 2.4. Let $\varphi : \mathfrak{g} \to \mathfrak{g}'$ be a Lie algebra homomorphism and $\mathfrak{h} \subseteq \mathfrak{g}$ an ideal which contains $\ker\varphi$. Then there exists a unique Lie algebra homomorphism $\overline{\varphi} : \mathfrak{g}/\mathfrak{h} \to \mathfrak{g}'$ such that $\varphi = \overline{\varphi}\circ\kappa$. In the case $\mathfrak{h} = \ker\varphi$ and $\mathfrak{g}' = \operatorname{im}\varphi$, the induced map is an isomorphism.

If $\mathfrak{h}$ and $\mathfrak{k}$ are ideals in $\mathfrak{g}$, then there exists a natural isomorphism $(\mathfrak{h}+\mathfrak{k})/\mathfrak{k} \xrightarrow{\ \sim\ } \mathfrak{h}/(\mathfrak{h}\cap\mathfrak{k})$.
Definition 2.5 (Centralizer). For any element $X \in \mathfrak{g}$ we define the centralizer $C(X)$ of $X$ to be the set of elements of $\mathfrak{g}$ which commute with $X$. For any subalgebra $\mathfrak{h}$ of $\mathfrak{g}$, the centralizer $C(\mathfrak{h})$ of $\mathfrak{h}$ is the set of all elements of $\mathfrak{g}$ that commute with all elements of $\mathfrak{h}$. The centralizer of $\mathfrak{g}$ itself is called the center and is denoted $Z(\mathfrak{g})$.

For a subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ we define the normalizer $N(\mathfrak{h})$ of $\mathfrak{h}$ to be the set of all $X \in \mathfrak{g}$ for which $[X, \mathfrak{h}] \subseteq \mathfrak{h}$.

We immediately see that the centralizer of $X$ is just $\ker\operatorname{ad}(X)$, hence a subspace; by the Jacobi identity it is even a subalgebra of $\mathfrak{g}$. Furthermore we see that
$$C(\mathfrak{h}) = \bigcap_{X\in\mathfrak{h}} C(X)$$
and that $Z(\mathfrak{g}) = \ker\operatorname{ad}$. Hence the center is an ideal. Finally, a subalgebra of $\mathfrak{g}$ is an ideal if and only if its normalizer is all of $\mathfrak{g}$.

Now consider the so-called derived algebra $D\mathfrak{g} := [\mathfrak{g}, \mathfrak{g}]$, which is clearly an ideal. $\mathfrak{g}$ is called abelian if $D\mathfrak{g} = 0$, i.e. if $[X, Y] = 0$ for all $X, Y \in \mathfrak{g}$. Every 1-dimensional Lie algebra is abelian by antisymmetry of the bracket.
Definition 2.6 (Simple Lie Algebra). A nontrivial Lie algebra is called indecomposable if its only ideals are the trivial ones: $\mathfrak{g}$ and 0. A nontrivial Lie algebra is called simple if it is indecomposable and $D\mathfrak{g} \neq 0$.

Any 1-dimensional Lie algebra is indecomposable, and as the next proposition shows, the requirement $D\mathfrak{g} \neq 0$ is there just to get rid of these trivial examples:

Proposition 2.7. A Lie algebra is simple if and only if it is indecomposable and $\dim\mathfrak{g} \geq 2$.

Proof. If $\mathfrak{g}$ is simple then it is not abelian, hence we must have $\dim\mathfrak{g} \geq 2$. Conversely, assume that $\mathfrak{g}$ is indecomposable and $\dim\mathfrak{g} \geq 2$. As $D\mathfrak{g}$ is an ideal, we can only have $D\mathfrak{g} = 0$ or $D\mathfrak{g} = \mathfrak{g}$. In the first case $\mathfrak{g}$ is abelian and hence all subspaces are ideals; since $\dim\mathfrak{g} \geq 2$, nontrivial ideals then exist, contradicting indecomposability. Therefore $D\mathfrak{g} = \mathfrak{g} \neq 0$.
Now consider the following sequence of ideals: $D^1\mathfrak{g} := D\mathfrak{g}$, $D^2\mathfrak{g} := [D\mathfrak{g}, D\mathfrak{g}]$, ..., $D^n\mathfrak{g} := [D^{n-1}\mathfrak{g}, D^{n-1}\mathfrak{g}]$, the so-called derived series. Obviously we have $D^{m+n}\mathfrak{g} = D^m(D^n\mathfrak{g})$. To see that these are really ideals we use induction: we have already seen that $D^1\mathfrak{g}$ is an ideal, so assume that $D^{n-1}\mathfrak{g}$ is an ideal. Let $X, X' \in D^{n-1}\mathfrak{g}$ and let $Y \in \mathfrak{g}$ be arbitrary. Then by the Jacobi identity
$$[[X, X'], Y] = -[[X', Y], X] - [[Y, X], X'].$$
Since $D^{n-1}\mathfrak{g}$ is an ideal, $[X', Y], [Y, X] \in D^{n-1}\mathfrak{g}$, showing that $[[X, X'], Y] \in D^n\mathfrak{g}$.
Definition 2.8 (Solvable Lie Algebra). A Lie algebra is called solvable if
there exists an N such that D^N g = 0.
Abelian Lie algebras are solvable, since we can take N = 1. On the other
hand, simple Lie algebras are definitely not solvable, for we showed in the proof
of Proposition 2.7 that Dg = g which implies that D^n g = g for all n.
Proposition 2.9. Let g be a Lie algebra.
1) If g is solvable, then so are all subalgebras of g.
2) If g is solvable and ϕ : g −→ g′ is a Lie algebra homomorphism, then im ϕ
is solvable.
3) If h ⊆ g is a solvable ideal such that g/h is solvable, then g is solvable.
4) If h and k are solvable ideals of g, then so is h + k.
Proof. 1) It should be clear that Dh ⊆ Dg. Hence, by induction, D^i h ⊆ D^i g
and since D^N g = 0 for some N, then D^N h = 0 as well.
2) Since ϕ is a Lie algebra homomorphism, we have D(ϕ(g)) = ϕ(Dg), and
again by induction D^i (ϕ(g)) = ϕ(D^i g). Thus, D^N g = 0 implies D^N (ϕ(g)) = 0.
3) Assume there is an N for which D^N (g/h) = 0 and consider the canonical
map κ : g −→ g/h. Like above we have D^i (g/h) = D^i (κ(g)) = κ(D^i g). Thus,
since D^N (g/h) = 0, we have κ(D^N g) = 0, i.e. D^N g ⊆ h. But h was also solvable,
so we can find an M for which D^M h = 0. Then

D^{M+N} g = D^M (D^N g) ⊆ D^M h = 0

i.e. g is solvable.
4) By 3) of this proposition it is enough to prove that (h+k)/k is solvable. By
Proposition 2.4 there exists an isomorphism (h + k)/k ≅ h/(h ∩ k), and the right
hand side is solvable since it is the image of the canonical map h −→ h/(h ∩ k).
The last point of this proposition yields the existence of a maximal solvable
ideal in g, namely if h and k are solvable ideals, then h + k will be a solvable
ideal containing both. Thus the sum of all solvable ideals is a solvable ideal. This
works since the Lie algebra is finite-dimensional. By construction, it is unique.
Definition 2.10 (Radical). The maximal solvable ideal, the existence of which
we have just verified, is called the radical of g and is denoted Rad g.
A Lie algebra g is called semisimple if Rad g = 0.
Since all solvable ideals are contained in Rad g another way of formulating
semisimplicity would be to say that it has no nonzero solvable ideals. In this
sense, semisimple Lie algebras are as far as possible from being solvable.
In the next section we prove some equivalent conditions for semisimplicity.
Proposition 2.11. Semisimple Lie algebras have trivial centers.
Proof. The center is an abelian, hence solvable, ideal, and is therefore trivial
by definition.
We now consider a concept closely related to solvability. Again we consider
a sequence of ideals: g^0 := g, g^1 := Dg, g^2 := [g, g^1], . . . , g^n := [g, g^{n−1}]. It
shouldn't be too hard to see that D^i g ⊆ g^i.
Definition 2.12 (Nilpotent Lie Algebra). A Lie algebra g is called nilpotent
if there exists an N such that g^N = 0.
Since D^i g ⊆ g^i, nilpotency of g implies solvability of g. The converse statement
is not true in general. So schematically:

abelian ⇒ nilpotent ⇒ solvable

in other words, solvability and nilpotency are in some sense generalizations of
being abelian.
Here is a proposition analogous to Proposition 2.9:
Proposition 2.13. Let g be a Lie algebra.
1) If g is nilpotent, then so are all its subalgebras.
2) If g is nilpotent and ϕ : g −→ g′ is a Lie algebra homomorphism, then
im ϕ is nilpotent.
3) If g/Z(g) is nilpotent, then g is nilpotent.
4) If g is nilpotent, then Z(g) ≠ 0.
Proof. 1) In analogy with the proof of Proposition 2.9 a small induction argument
shows that if h ⊆ g is a subalgebra, then h^i ⊆ g^i. Thus, g^N = 0 implies
h^N = 0.
2) We have already seen that ϕ(g)^1 = ϕ(Dg). Furthermore

ϕ(g)^2 = [ϕ(g), ϕ(g)^1] = [ϕ(g), ϕ(Dg)] = ϕ([g, Dg]) = ϕ(g^2)

and by induction we get ϕ(g)^i = ϕ(g^i). Hence nilpotency of g implies nilpotency
of ϕ(g).
3) Letting κ : g −→ g/Z(g) denote the canonical homomorphism, we see that

(g/Z(g))^i = (κ(g))^i = κ(g^i) = g^i/Z(g).

Thus, if (g/Z(g))^N = 0 then g^N ⊆ Z(g). But then g^{N+1} = [g, g^N] ⊆ [g, Z(g)] =
0, hence g is nilpotent.
4) As g is nilpotent there is a smallest n such that g^n ≠ 0 and g^{n+1} = 0. This
means that [g, g^n] = 0, i.e. everything in g^n commutes with all elements of g.
Thus, 0 ≠ g^n ⊆ Z(g).
Definition 2.14. An element X ∈ g is called ad-nilpotent if ad(X) is a nilpotent
linear map, i.e. if there exists an N such that ad(X)^N = 0.
If the Lie algebra is a subalgebra of an algebra of endomorphisms (for in-
stance End(V )), it makes sense to ask if the elements themselves are nilpotent.
In this case nilpotency and ad-nilpotency of an element X need not be the same.
For instance in End(V ) we have the identity I, which is obviously not nilpo-
tent. However, ad(I) = 0, and thus I is ad-nilpotent. The implication in the
other direction (nilpotency implies ad-nilpotency), however, is true:
Lemma 2.15. Let g be a Lie algebra of endomorphisms of some vector space.
If X ∈ g is nilpotent, then it is ad-nilpotent.
Proof. We associate to A ∈ g two linear maps λA, ρA : End(V ) −→ End(V )
by λA(B) = AB and ρA(B) = BA. It’s easy to see that they commute, and
that ad(A) = λA − ρA.
As A is nilpotent, λA and ρA are also nilpotent, so we can find an N for which
λA^N = ρA^N = 0. Since they commute, we can use the binomial formula and get

ad(A)^{2N} = (λA − ρA)^{2N} = Σ_{j=0}^{2N} (−1)^j (2N choose j) λA^{2N−j} ρA^j

which is zero since every term contains either λA or ρA to a power of at least N.
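Lemma 2.15 can be checked numerically. In the sketch below the matrix A is an arbitrary illustrative choice satisfying A^3 = 0 (so N = 3), and by the binomial argument ad(A)^6 should kill every matrix.

```python
import numpy as np

# A strictly upper-triangular matrix is nilpotent; here A @ A @ A = 0, so N = 3.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

def ad_power(A, B, k):
    """Apply ad(A) = [A, .] to B a total of k times."""
    for _ in range(k):
        B = A @ B - B @ A
    return B

# Basis matrices E_ij of End(V); ad(A)^(2N) should vanish on all of them.
basis = [np.eye(3)[:, [i]] @ np.eye(3)[[j], :]
         for i in range(3) for j in range(3)]
ok = all(np.allclose(ad_power(A, B, 6), 0) for B in basis)
print(ok)  # True
```

In fact ad(A)^5 already vanishes here, since in each binomial term one of the two factors appears to a power of at least 3.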
An equivalent formulation of nilpotency of a Lie algebra is that there exists an
N such that ad(X1) · · · ad(XN )Y = 0 for all X1, . . . , XN , Y ∈ g. In particular, if
g is nilpotent, then there exists an N such that ad(X)^N = 0 for all X ∈ g, i.e. X
is ad-nilpotent. Thus, for a nilpotent Lie algebra g, all elements are ad-nilpotent.
That the converse is actually true is the statement of Engel’s Theorem, which
will be a corollary to the following theorem.
Theorem 2.16. Let V be a finite-dimensional vector space and g ⊆ End(V ) be
a subalgebra consisting of nilpotent linear endomorphisms. Then there exists a
nonzero v ∈ V with Av = 0 for all A ∈ g.
Proof. We will prove this by induction over the dimension of g. First, assume
dim g = 1. Then g = KA for some nonzero A ∈ g. As A is nilpotent there is a
smallest N such that A^N ≠ 0 and A^{N+1} = 0, i.e. we can find a vector w ∈ V
with A^N w ≠ 0 and A(A^N w) = A^{N+1} w = 0. Since all elements of g are scalar
multiples of A the vector A^N w will qualify.
Now assuming that the theorem holds for all Lie algebras of dimension strictly
less than n, we should prove that it holds for n-dimensional algebras as well.
The algebra g consists of nilpotent endomorphisms on V , hence by the previous
lemma, all elements are ad-nilpotent. Consider a subalgebra h ≠ g of g which
thus also consists of ad-nilpotent elements. For A ∈ h we have that ad(A)h ⊆ h
since h as a subalgebra is closed under brackets. We can form the vector space
g/h and define a linear map ad(A) : g/h −→ g/h by

ad(A)(B + h) = (ad(A)B) + h.

This is well defined for if B + h = B′ + h, then B − B′ ∈ h and therefore

ad(A)(B′ + h) = ad(A)B′ + h = ad(A)B′ + ad(A)(B − B′) + h = ad(A)B + h
= ad(A)(B + h).

This map is again nilpotent since ad(A)^N (B + h) = (ad(A)^N B) + h = h = [0].
So, the situation now is that we have a subalgebra ad(h) of End(g/h) with
dim ad(h) ≤ dim h < dim g = n. Our induction hypothesis then yields an element
0 ≠ [B0] = B0 + h ∈ g/h on which ad(A) is zero for all A ∈ h. This means
that [A, B0] ∈ h for all A ∈ h, i.e. the normalizer N(h) of h is strictly larger than h.
Now assume that h is any maximal subalgebra h ≠ g. Then since N(h) is
a strictly larger subalgebra we must have N(h) = g and consequently h is an
ideal. Then g/h is a Lie algebra with canonical Lie algebra homomorphism
κ : g −→ g/h and g/h must have dimension 1 for, assuming otherwise, we could
find a 1-dimensional subalgebra k ≠ g/h in g/h, and then κ^{−1}(k) ≠ g would be
a subalgebra strictly larger than h. This is a contradiction, hence dim g/h = 1
and g ≅ h ⊕ KA0 for some nonzero A0 ∈ g \ h.
So far, so good. Now we come to the real proof of the existence of the nonzero
vector v ∈ V . h was an ideal of dimension n − 1, hence the induction hypothesis
assures that the subspace W := {v ∈ V | ∀B ∈ h : Bv = 0} is nonzero. We
will show that each linear map A ∈ g (which maps V −→ V ) can be restricted
to a map W −→ W and that it as such a map is still nilpotent. This will, in
particular, hold for A0 which by nilpotency will have the eigenvalue 0 and hence
a nonzero eigenvector v ∈ W associated to the eigenvalue 0. This will be the
desired vector, for all linear maps in g can according to the decomposition above
be written as B + λA0 for some B ∈ h, and Bv = 0 since v was chosen to be in
W.
Thus, to finish the proof we only need to see that W is invariant. So let A ∈ g
be any map. Since h is an ideal, [A, h] ⊆ h and hence for w ∈ W
B(Aw) = A(Bw) − [A, B]w = 0
for all B ∈ h. This shows that Aw ∈ W and hence that W is invariant. A
restriction of a nilpotent map is clearly nilpotent. This completes the proof.
From this we can prove
Corollary 2.17 (Engel’s Theorem). A Lie algebra is nilpotent if and only if
all its elements are ad-nilpotent.
Proof. We have already shown the 'only if' part. To show the 'if' part we
again invoke induction over the dimension of g. If dim g = 1 then g is abelian,
hence nilpotent.
Now set n = dim g and assume that the result holds for all Lie algebras with
dimension strictly less than n. All the elements of g are ad-nilpotent, hence
ad(g) is a subalgebra of End(g) consisting of nilpotent elements and the previous
theorem yields an element 0 ≠ X ∈ g for which ad(Y )(X) = 0 for all Y ∈ g, i.e.
X is contained in the center Z(g) which is therefore a nonzero ideal, and g/Z(g)
is a Lie algebra whose dimension is strictly less than n. Furthermore all elements
of g/Z(g) are ad-nilpotent, since by definition of the quotient bracket

ad([A])[B] = ad(A)B + Z(g)

we have that ad(A)^N = 0 implies ad([A])^N = 0. Thus g/Z(g) consists solely
of ad-nilpotent elements. Consequently the induction hypothesis assures that
g/Z(g) is nilpotent. Then by Proposition 2.13 g is nilpotent.
2.2 Semisimple Lie Algebras
The primary goal of this section is to reach some equivalent formulations of
semisimplicity. Our approach to this will be via the so-called Cartan Criterion
for solvability which we will prove shortly.
First we need a quite powerful result from linear algebra regarding ’advanced
diagonalization’:
Theorem 2.18 (SN-Decomposition). Let V be a finite-dimensional vector
space over K and let A ∈ End(V ). Then there exist unique commuting linear
maps S, N ∈ End(V ), S being diagonalizable and N being nilpotent, satisfying
A = S + N. This is called the SN-decomposition. In fact, S and N can be realized
as polynomials in A without constant terms.
Furthermore, if A = S + N is the SN-decomposition of A, then ad(S) + ad(N)
is the SN-decomposition of ad(A).
We will not prove this; for a proof the reader is referred to, for instance, [5]
Section 4.3.
Cartan’s Criterion gives a sufficient condition for solvability based on the trace
of certain matrices. Therefore the following lemma is necessary.
Lemma 2.19. Let V be a finite-dimensional vector space, W1 and W2 be subspaces
of End(V ) and define M := {B ∈ End(V ) | ad(B)W1 ⊆ W2}. If A ∈ M
satisfies Tr(AB) = 0 for all B ∈ M then A is nilpotent.
Proof. Let A ∈ M satisfy the required condition, and consider the SN-decomposition
A = S + N. We are done if we can show that S = 0. Well, S is
diagonalizable, so we can find a basis {e1, . . . , en} for V in which S has the form
diag(a1, . . . , an). We will show that all these eigenvalues are 0, and we do so in
a curious way: We define E := span_Q{a1, . . . , an} ⊆ K to be the subspace of K
over the rationals spanned by the eigenvalues. If we can show that this space, or
equivalently its dual space E∗, consisting of Q-linear maps E −→ Q, is 0, then
we are done.
So, let ϕ ∈ E∗ be arbitrary. The basis we chose for V readily gives us a
basis for End(V ), consisting of the maps Eij, where Eij is the linear map determined by
Eij ej = ei and Eij ek = 0 for k ≠ j. Then we see

(ad(S)Eij)ej = [S, Eij]ej = SEij ej − Eij Sej = Sei − aj Eij ej = (ai − aj)ei
while [S, Eij]ek = 0 for k ≠ j, i.e. ad(S)Eij = (ai − aj)Eij.
Now, let B ∈ End(V ) denote the linear map which in the basis {e1, . . . , en} is
diag(ϕ(a1), . . . , ϕ(an)). As with S we have that ad(B)Eij = (ϕ(ai) − ϕ(aj))Eij.
There exists a polynomial p = Σ_{n=1}^N cn X^n without constant term which maps
ai − aj to ϕ(ai − aj) = ϕ(ai) − ϕ(aj) (it's a matter of solving some equations
to find the coefficients cn). Then we have

p(ad S)Eij = cN (ad S)^N Eij + · · · + c1 (ad S)Eij
= cN (ai − aj)^N Eij + · · · + c1 (ai − aj)Eij
= p(ai − aj)Eij = (ϕ(ai) − ϕ(aj))Eij

which says that p(ad S) = ad B. A statement in the SN-decomposition was that
ad S, being the diagonalizable part of ad A, is itself a polynomial expression
in ad A without constant term, which implies that ad B is a polynomial in
ad A without constant term. Since A ∈ M we have that ad(A)W1 ⊆ W2, and
since ad(B) was a polynomial expression in ad(A) then also ad(B)W1 ⊆ W2,
i.e. B ∈ M, and therefore by assumption Tr(AB) = 0. The trace of AB is
the sum Σ_{i=1}^n ai ϕ(ai), and applying ϕ to the equation Tr(AB) = 0 we get
Σ_{i=1}^n ϕ(ai)^2 = 0, i.e. ϕ(ai) = 0 (the ϕ(ai) are rationals, hence ϕ(ai)^2 ≥ 0). Therefore
we must have ϕ = 0, which was what we wanted.
Theorem 2.20 (Cartan’s Criterion). Let V be a finite-dimensional vector
space and g ⊆ End(V ) a subalgebra. If Tr(AB) = 0 for all A ∈ g and all B ∈ Dg
then g is solvable.
Proof. As D^n g = D^{n−1}(Dg) ⊆ (Dg)^{n−1} we see that g will be solvable if Dg is
nilpotent. To show that Dg is nilpotent we invoke Engel's Theorem and Lemma
2.15 which combined say that Dg is nilpotent if all X ∈ Dg are nilpotent.
To this end we use the preceding lemma with W1 = g and W2 = Dg and
M = {B ∈ End(V ) | [B, g] ⊆ Dg}. Notice that g ⊆ M. The reverse inclusion
need not hold.
Now, let A ∈ Dg be arbitrary; we need to show that it is nilpotent, and by
virtue of the previous lemma it suffices to verify that Tr(AB) = 0 for all B ∈ M.
By linearity of the trace we may assume A to be of the form [X, Y ] for X, Y ∈ g,
and we have, in general, that

Tr([X, Y ]B) = Tr(XY B) − Tr(Y XB) = Tr(Y BX) − Tr(BY X)
= Tr([Y, B]X) = Tr(X[Y, B]). (2.1)

Since B ∈ M and Y ∈ g we have by construction of M that [Y, B] ∈ Dg. But
then by assumption in the theorem we have that Tr(AB) = Tr([X, Y ]B) =
0.
With this powerful tool we can prove the promised equivalent conditions for
a Lie algebra to be semisimple. One of them involves the so-called Killing form:
Definition 2.21 (Killing Form). By the Killing form for a Lie algebra g over
K we understand the bilinear form B : g × g −→ K given by
B(X, Y ) = Tr(ad(X) ad(Y )).
Proposition 2.22. The Killing form is a symmetric bilinear form satisfying
B([X, Y ], Z) = B(X, [Y, Z]). (2.2)
Furthermore, if ϕ is any Lie algebra automorphism of g then B(ϕ(X), ϕ(Y )) =
B(X, Y ).
Proof. B is obviously bilinear, and symmetry is a consequence of the property
of the trace: Tr(AB) = Tr(BA). Eq. (2.2) follows from (2.1). If ϕ : g −→
g is a Lie algebra automorphism, then another way of writing the equation
[ϕ(X), ϕ(Y )] = ϕ([X, Y ]) is ad(ϕ(X)) ◦ ϕ = ϕ ◦ ad(X). Therefore

B(ϕ(X), ϕ(Y )) = Tr(ϕ ◦ ad(X) ◦ ad(Y ) ◦ ϕ^{−1}) = Tr(ad(X) ad(Y ))
= B(X, Y ).
Calculating the Killing form directly from the definition is immensely complicated.
Fortunately, for some of the classical Lie algebras we have a much simpler
formula:

B(X, Y ) = 2(n + 1) Tr(XY ),  for X, Y ∈ sl(n + 1, K), sp(2n, K)
           (2n − 1) Tr(XY ),  for X, Y ∈ so(2n + 1, K)
           2(n − 1) Tr(XY ),  for X, Y ∈ so(2n, K).     (2.3)
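Formula (2.3) is easy to test numerically for sl(2, R) (the case n = 1, where it predicts B(X, Y ) = 4 Tr(XY )). The sketch below computes ad-matrices in the standard basis H, E, F; the coordinate function is an assumption tied to this particular basis.

```python
import numpy as np

# Standard basis H, E, F of sl(2, R).
H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
basis = [H, E, F]

def coords(X):
    """Coordinates of a traceless 2x2 matrix X = aH + bE + cF."""
    return np.array([X[0, 0], X[0, 1], X[1, 0]])

def ad_matrix(X):
    """Matrix of ad(X) in the basis (H, E, F)."""
    return np.column_stack([coords(X @ B - B @ X) for B in basis])

def killing(X, Y):
    """B(X, Y) = Tr(ad X ad Y), computed from the definition."""
    return np.trace(ad_matrix(X) @ ad_matrix(Y))

# sl(n+1, K) with n = 1: the formula predicts B(X, Y) = 2(n+1) Tr(XY) = 4 Tr(XY).
ok = all(np.isclose(killing(X, Y), 4 * np.trace(X @ Y))
         for X in basis for Y in basis)
print(ok)  # True
```

For instance B(H, H) = 8 = 4 Tr(H²) and B(E, F) = 4 = 4 Tr(EF), matching the formula.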
Lemma 2.23. If g is a Lie algebra with Killing form B and h ⊆ g is an ideal,
then B|h×h is the Killing form of h.
Proof. First a general remark: If ϕ : V −→ V is a linear map, and W ⊆
V is a subspace for which im ϕ ⊆ W, then Tr ϕ = Tr(ϕ|W ): Namely, pick a
basis {e1, . . . , ek} for W and extend it to a basis {e1, . . . , ek, . . . , en} for V .
Let {ε^1, . . . , ε^n} denote the corresponding dual basis. As ϕ(v) ∈ W we have
ε^{k+i}(ϕ(v)) = 0 and hence

Tr ϕ = Σ_{i=1}^n ε^i(ϕ(ei)) = Σ_{i=1}^k ε^i(ϕ(ei)) = Tr(ϕ|W ).
Now, let X, Y ∈ h; then as h is an ideal: ad(X)g ⊆ h and ad(Y )g ⊆ h, which
means that the image of ad(X) ad(Y ) lies inside h. It should be obvious that
the adjoint representation of h is just ad(X)|h for X ∈ h. Therefore

Bh(X, Y ) = Tr(ad(X)|h ad(Y )|h) = Tr((ad(X) ad(Y ))|h) = B|h×h(X, Y ).
Theorem 2.24. If g is a Lie algebra, then the following are equivalent:
1) g is semisimple i.e. Rad g = 0.
2) g has no nonzero abelian ideals.
3) The Killing form B of g is non-degenerate.
4) g is a direct sum of simple Lie algebras: g = g1 ⊕ · · · ⊕ gn.
Proof. We first prove that 1 and 2 are equivalent. If g is semisimple, then g
has no nonzero solvable ideals and since abelian ideals are solvable, no nonzero
abelian ideals either. Conversely, if Rad g ≠ 0, then, since Rad g is solvable,
there is a smallest N for which D^N (Rad g) ≠ 0 and D^{N+1}(Rad g) = 0. Then
D^N (Rad g) is a nonzero abelian ideal. So by contraposition, if no nonzero abelian
ideals exist, then g is semisimple.
Now we show that 1 implies 3. We consider the so-called radical of the Killing
form B namely the subspace
h := {X ∈ g | ∀ Y ∈ g : B(X, Y ) = 0}.
h is an ideal for if X ∈ h and Y ∈ g, then for all Z ∈ g:
B([X, Y ], Z) = B(X, [Y, Z]) = 0
i.e. [X, Y ] ∈ h. Obviously B is non-degenerate if and only if h = 0.
Now we assume that Rad g = 0 and want to show that h = 0. We can do
this by showing that h is solvable, for then h ⊆ Rad g. First we use the Cartan
Criterion on the Lie algebra ad(h), showing that this is solvable: By definition
of h we have that 0 = B(X, Y ) = Tr(ad(X) ad(Y )) for all X ∈ h and Y ∈ g;
in particular it holds for all X ∈ Dh. In other words we have Tr(AB) = 0 for
all A ∈ ad(Dh) = D(ad h) and all B ∈ ad h. Hence the Cartan Criterion tells us
that ad h is solvable, i.e. 0 = D^N (ad h) = ad(D^N h). This says that D^N h ⊆ Z(g),
implying that D^{N+1} h = 0. Thus, h is solvable and consequently equals 0.
Then we prove 3 implies 2. Assume that h = 0, assume k to be an abelian
ideal, and let X ∈ k and Y ∈ g. Since the adjoint representations map according
to (exploiting the ideal property of k)

g −−ad(Y )−→ g −−ad(X)−→ k −−ad(Y )−→ k −−ad(X)−→ Dk = 0

we have (ad(X) ad(Y ))^2 = 0, that is, ad(X) ad(Y ) is nilpotent. Since nilpotent
matrices have zero trace we see that 0 = Tr(ad(X) ad(Y )) = B(X, Y ). This
implies X ∈ h, i.e. k ⊆ h and thus the desired conclusion.
We then proceed to show that 1 implies 4. Suppose g is semisimple, and
let h ⊆ g be any ideal. We consider its "orthogonal complement" w.r.t. B:
h⊥ := {X ∈ g | ∀Y ∈ h : B(X, Y ) = 0}. This is again an ideal in g for if X ∈ h⊥
and Y ∈ g, then for all Z ∈ h we have [Y, Z] ∈ h and hence

B([X, Y ], Z) = B(X, [Y, Z]) = 0

saying that [X, Y ] ∈ h⊥. To show that we have a decomposition g = h ⊕ h⊥ we
need to show that the ideal h ∩ h⊥ is zero. We can do this by showing that it
is solvable, for then semisimplicity forces it to be zero. By some remarks earlier
in this proof, solvability of h ∩ h⊥ would be a consequence of ad(h ∩ h⊥) being
solvable. To show that ad(h ∩ h⊥) is solvable we invoke the Cartan Criterion: For
X ∈ D(h ∩ h⊥) ⊆ h ∩ h⊥ and Y ∈ h ∩ h⊥ we have Tr(ad(X) ad(Y )) = B(X, Y ) = 0
since, in particular, X ∈ h and Y ∈ h⊥. Thus, the Cartan Criterion renders
solvability of ad(h ∩ h⊥), implying h ∩ h⊥ = 0 and g = h ⊕ h⊥.
After these preliminary remarks we proceed via induction over the dimension
of g. If dim g = 2, then g is simple, for any nontrivial ideal in g would have
to be 1-dimensional, hence abelian, and such do not exist. Assume now that
dim g = n and that the result is true for Lie algebras of dimension strictly less
than n. Suppose that g1 is a minimal nonzero ideal in g; then g1 is simple since
dim g1 ≥ 2 and since any nontrivial ideal in g1 would be an ideal in g properly
contained in g1, contradicting minimality. Then we have g = g1 ⊕ g1⊥ with g1⊥
semisimple, for if k is any abelian ideal in g1⊥ then it is an abelian ideal in g, and
these do not exist. Then by the induction hypothesis we have g1⊥ = g2 ⊕ · · · ⊕ gn, a
sum of simple Lie algebras, hence g = g1 ⊕ g2 ⊕ · · · ⊕ gn, a sum of simple algebras.
Finally we show that 4 implies 2. So consider g := g1 ⊕ · · · ⊕ gn and let h ⊆ g
be an abelian ideal. It is not hard to verify that hi := h ∩ gi is an abelian ideal
in gi, thus hi = gi or hi = 0. As hi is abelian and gi is not, we can rule out the
first possibility, i.e. hi = 0 and hence h = 0.
During the proof we saw that any ideal in a semisimple Lie algebra has a
complementary ideal. This is important enough to be stated as a separate result:
Proposition 2.25. Let g be a semisimple Lie algebra and h ⊆ g an ideal. Then
h⊥ := {X ∈ g | ∀Y ∈ h : B(X, Y ) = 0} is an ideal in g and g = h ⊕ h⊥.
Another very important concept in the discussion to follow is that of complexification.
Definition 2.26 (Complexification). Let V be a real vector space. By the
complexification VC of the vector space V we understand VC := V ⊕ iV which
equipped with the scalar multiplication
(a + ib)(v1 + iv2) = (av1 − bv2) + i(av2 + bv1)
becomes a complex vector space.
If g is a real Lie algebra, the complexification gC of g is the vector space g⊕ig
equipped with the bracket
[X1 + iX2, Y1 + iY2] = ([X1, Y1] − [X2, Y2]) + i([X1, Y2] + [X2, Y1])
(note that this is not the usual direct sum bracket!). It is easily checked that
gC is a complex Lie algebra.
Other presentations of this subject define the complexification of g by gC =
g ⊗R C, where C is considered a 2-dimensional real vector space. By writing
C = R ⊕ iR and using distributivity of the tensor product, this definition is seen
to be equivalent to ours.
Example 2.27. The classical real Lie algebras mentioned earlier have the following
complexifications

gl(n, R)C ≅ gl(n, C)
sl(n, R)C ≅ sl(n, C)
so(n)C ≅ so(n, C)
so(m, n)C ≅ so(m + n, C)
u(n)C ≅ gl(n, C)
u(m, n)C ≅ gl(m + n, C)
su(n)C ≅ sl(n, C)
su(m, n)C ≅ sl(m + n, C).
Let's prove a few of them. For the first one, pick an element X of gl(n, C)
and split it in real and imaginary parts X = X1 + iX2. It is an easy exercise to
verify that the map X ↦ X1 + iX2 is a Lie algebra isomorphism gl(n, C) ≅
gl(n, R)C.
To prove u(n)C ≅ gl(n, C), let X ∈ gl(n, C) and write it as

X = (X − X∗)/2 + i (X + X∗)/(2i).

It is not hard to see that both (X − X∗)/2 and (X + X∗)/(2i) are skew-adjoint, i.e.
elements of u(n). Again it is a trivial calculation to show that

X ↦ (X − X∗)/2 + i (X + X∗)/(2i)

is a Lie algebra isomorphism gl(n, C) ≅ u(n)C. The other identities are verified
in a similar fashion.
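The decomposition used for u(n)C ≅ gl(n, C) can be sanity-checked numerically; in the sketch below the random 3 × 3 matrix is just an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

A = (X - X.conj().T) / 2       # candidate element of u(3)
B = (X + X.conj().T) / (2j)    # also a candidate element of u(3)

# Both parts are skew-adjoint (M + M* = 0) and A + iB reassembles X.
print(np.allclose(A + A.conj().T, 0),
      np.allclose(B + B.conj().T, 0),
      np.allclose(A + 1j * B, X))  # True True True
```

This mirrors the splitting of a complex number into real and imaginary parts, with u(n) playing the role of the reals.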
Proposition 2.28. A Lie algebra g is semisimple if and only if gC is semisimple.
Proof. Let B denote the Killing form of g and BC the Killing form of gC.
Our first task is to relate them. If {X1, . . . , Xn} is a basis for g as an R-vector
space, then {X1, . . . , Xn} is also a basis for gC as a C-vector space. Therefore
for X, Y ∈ g the linear map ad(X) ad(Y ) will have the same matrix whether it
is considered a linear map on g or gC. In particular their traces will be equal,
which amounts to saying that B(X, Y ) = BC(X, Y ). In other words

BC|g×g = B. (2.4)

Now assume g to be semisimple or, equivalently, B to be non-degenerate. Then
B(X, Y ) = 0 for all Y ∈ g implies X = 0. To show that BC is non-degenerate,
let X ∈ gC satisfy BC(X, Y ) = 0 for all Y ∈ gC. Then it particularly holds for
all Y ∈ g. Write X = A1 + iA2 where A1, A2 ∈ g; then by (2.4)

0 = BC(A1, Y ) + iBC(A2, Y ) = B(A1, Y ) + iB(A2, Y )

for all Y ∈ g. Hence by non-degeneracy of B (and since B(A1, Y ) and B(A2, Y )
are real) we have A1 = A2 = 0, i.e. X = 0. Thus BC is non-degenerate.
Now assume BC to be non-degenerate and suppose B(X, Y ) = 0 for all Y ∈ g.
This particularly holds for the basis elements: B(X, Xk) = 0 for k = 1, . . . , n.
By (2.4) we also have BC(X, Xk) = 0, and since {X1, . . . , Xn} was also a basis
for gC we get BC(X, Y ) = 0 for all Y ∈ gC and thus by non-degeneracy of BC
that X = 0, i.e. B is non-degenerate.
Up till now we have talked a lot about semisimple Lie algebras and their
amazing properties. But we have not yet encountered one single example of a
semisimple Lie algebra. The rest of this section aims to remedy that. The first
thing we do is to introduce a class of Lie algebras which contains the semisimple
ones:
Definition 2.29 (Reductive Lie Algebra). A Lie algebra g is called reductive
if for each ideal a ⊆ g there is an ideal b ⊆ g such that g = a ⊕ b.
From Proposition 2.25 it follows that semisimple Lie algebras are reductive.
So schematically we have
simple ⇒ semisimple ⇒ reductive.
Note how these classes of Lie algebras are somehow opposite to the classes of
abelian, solvable or nilpotent algebras.
The next proposition characterizes the semisimple Lie algebras among the
reductive ones:
Proposition 2.30. If g is reductive, then g = Dg⊕Z(g) and Dg is semisimple.
Thus a reductive Lie algebra is semisimple if and only if its center is trivial.
Proof. Let Σ be the set of direct sums a1 ⊕ · · · ⊕ ak where a1, . . . , ak are
indecomposable ideals (i.e. they contain only trivial ideals). The elements of Σ
are themselves ideals. Let a ∈ Σ be an element of maximal dimension. As g is
reductive, there exists an ideal b such that g = a ⊕ b. We want to show that
b = {0} (and hence g = a), so assume for contradiction that b ≠ {0} and let
b′ ⊆ b be a smallest nonzero indecomposable ideal (which always exists, for
if b contains no proper ideals, then b is indecomposable). But then a ⊕ b′ ∈ Σ,
contradicting maximality of a, and therefore g = a ∈ Σ.
Now let's write

g = (a1 ⊕ · · · ⊕ aj) ⊕ (aj+1 ⊕ · · · ⊕ ak) =: g1 ⊕ g2

where a1, . . . , aj are 1-dimensional and aj+1, . . . , ak are higher dimensional and
thus simple. Therefore g1 is abelian and g2 is semisimple (by Theorem 2.24) and
by definition of the direct sum bracket we have

Dg = D(a1 ⊕ · · · ⊕ ak) = Da1 ⊕ · · · ⊕ Dak = Daj+1 ⊕ · · · ⊕ Dak = g2.
This shows that Dg is semisimple. We now only have to justify that g1 equals
the center. We have g1 ⊆ Z(g), for in the decomposition g = g1 ⊕ g2

[(X, 0), (Y, Z)] = ([X, Y ], [0, Z]) = 0

since g1 is abelian. Conversely, let X ∈ Z(g). We decompose it X = X1 + · · · + Xk
according to the decomposition of g in indecomposable ideals. Then Xi ∈ Z(ai),
which means that Xi = 0 for i > j and hence X ∈ g1.
The next result will help us mass-produce examples of reductive Lie algebras:
Proposition 2.31. Let g be a Lie subalgebra of gl(n, R) or gl(n, C). If g has
the property that X ∈ g implies X∗ ∈ g (where X∗ is the conjugate transpose of
X), then g is reductive.
Proof. Define a real inner product on g by ⟨X, Y⟩ = Re Tr(XY∗). This is a
genuine inner product: it is symmetric, since Tr(A∗) is the complex conjugate
of Tr(A):

⟨Y, X⟩ = Re Tr(Y X∗) = Re Tr(Y∗∗ X∗) = Re Tr((XY∗)∗) = Re Tr(XY∗) = ⟨X, Y⟩,

and it is positive definite, for Tr(XX∗) is nothing but the sum of the squared
norms of the columns of X, which is 0 if and only if X = 0.
Assuming a to be an ideal in g, let a⊥ be the complementary subspace w.r.t.
the inner product just defined. Then as vector spaces it holds that g = a ⊕ a⊥.
For this to be a Lie algebra direct sum we need a⊥ to be an ideal. Let X ∈ a⊥
and Y ∈ g; then for all Z ∈ a

⟨[X, Y ], Z⟩ = Re Tr(XY Z∗ − Y XZ∗) = − Re Tr(XZ∗Y − XY Z∗)
= − Re Tr(X(Y∗Z)∗ − X(ZY∗)∗) = −⟨X, [Y∗, Z]⟩

which is 0 as X ∈ a⊥ and [Y∗, Z] ∈ a since Y∗ ∈ g. Thus a⊥ is an ideal.
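The trace identity at the heart of this proof, ⟨[X, Y], Z⟩ = −⟨X, [Y∗, Z]⟩, can be verified on random matrices; the 3 × 3 size and the seed below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y, Z = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
           for _ in range(3))

def ip(A, B):
    """The real inner product <A, B> = Re Tr(A B*)."""
    return np.real(np.trace(A @ B.conj().T))

lhs = ip(X @ Y - Y @ X, Z)                       # <[X, Y], Z>
rhs = -ip(X, Y.conj().T @ Z - Z @ Y.conj().T)    # -<X, [Y*, Z]>
print(np.isclose(lhs, rhs))  # True
```

The identity holds for arbitrary complex matrices, which is exactly why closure under conjugate transposition is the only property of g the proof needs.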
Obviously gl(n, R) and gl(n, C) are closed under conjugate transposition
and are therefore reductive. They are not semisimple, as their centers
contain the scalar matrices diag(a, . . . , a) for a ∈ R or a ∈ C respectively,
violating Proposition 2.11.
The Lie algebras so(n) are semisimple for n ≥ 3. Recall that so(n) is the set
of real n × n matrices X for which X + X∗ = 0. From the definition it is clear
that if X ∈ so(n) then also X∗ ∈ so(n). Hence so(n) is reductive for all n. so(2)
is a 1-dimensional (hence abelian) Lie algebra and thus is not semisimple. Let
us show that so(3) is semisimple. Thanks to Proposition 2.30 this boils down to
verifying that its center is trivial. So assume
X =
(  0   a   b )
( −a   0   c )
( −b  −c   0 )

to be an element of the center of so(3). In particular it has to commute with
the two matrices

A1 =
(  0   1   0 )
( −1   0   0 )
(  0   0   0 )

and

A2 =
(  0   0   1 )
(  0   0   0 )
( −1   0   0 ).

We have

A1X =
( −a   0   c )
(  0  −a  −b )
(  0   0   0 )

and

XA1 =
( −a   0   0 )
(  0  −a   0 )
(  c  −b   0 ).

As these two matrices should be equal we immediately get that b = c = 0.
Furthermore

A2X =
(  0   0   0 )
(  0   0   0 )
(  0  −a   0 )

and

XA2 =
(  0   0   0 )
(  0   0  −a )
(  0   0   0 )

and we get a = 0. Thus X = 0, and the center is trivial. Generalizing this to
higher dimensions one can show that so(n) is semisimple for n ≥ 3. Now since
so(n, C) = so(n)C (cf. Example 2.27), Proposition 2.28 says that also so(n, C) is
semisimple for n ≥ 3.
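Semisimplicity of so(3) can also be confirmed through condition 3 of Theorem 2.24: its Killing form is non-degenerate. The numerical sketch below is an illustration; the basis L1, L2, L3 and the coordinate map are assumptions tied to this particular realization.

```python
import numpy as np

# Basis of so(3): the three infinitesimal rotations.
L1 = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])
L2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
L3 = np.array([[0., 0., 0.], [0., 0., 1.], [0., -1., 0.]])
basis = [L1, L2, L3]

def coords(X):
    """Coordinates of a skew-symmetric X in the basis L1, L2, L3."""
    return np.array([X[0, 1], X[0, 2], X[1, 2]])

def ad_matrix(X):
    """Matrix of ad(X) in the basis (L1, L2, L3)."""
    return np.column_stack([coords(X @ B - B @ X) for B in basis])

# Gram matrix of the Killing form B(X, Y) = Tr(ad X ad Y) on the basis.
gram = np.array([[np.trace(ad_matrix(X) @ ad_matrix(Y)) for Y in basis]
                 for X in basis])
print(np.round(np.linalg.det(gram)))  # -8.0: nonzero, hence non-degenerate
```

In fact the Gram matrix comes out as −2·I, consistent with formula (2.3), which for so(2n + 1) with n = 1 gives B(X, Y) = Tr(XY).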
The Lie algebra u(n) is reductive. These are just the n × n complex matrices
satisfying X + X∗ = 0, and again it is clear that u(n) is closed under
conjugate transposition and hence reductive. It is not semisimple, since the matrices
diag(ia, . . . , ia) for a ∈ R are all in the center. However the subalgebra su(n) is
semisimple for n ≥ 2 (su(1) is zero-dimensional), as can be seen by an argument
analogous to the one given above. Since its complexification is sl(n, C), this is also
semisimple for n ≥ 2. But sl(n, C) is also the complexification of sl(n, R), which
is therefore also semisimple for n ≥ 2. By the same argument also so(m, n) for
m + n ≥ 3 and su(m, n) for m + n ≥ 2 are semisimple, since their complexifications
are. Wrapping up, the following Lie algebras are semisimple:

sl(n, R) for n ≥ 2
sl(n, C) for n ≥ 2
so(n) for n ≥ 3
so(m, n) for m + n ≥ 3
so(n, C) for n ≥ 3
su(n) for n ≥ 2
su(m, n) for m + n ≥ 2.
2.3 The Universal Enveloping Algebra
For a finite-dimensional vector space V we have the tensor algebra T(V ) defined
by

T(V ) = ⊕_{n=0}^∞ V^{⊗n}.

From this one can form various quotients. One of the more important is the
symmetric algebra S(V ) where we mod out the ideal I generated by elements
of the form X ⊗ Y − Y ⊗ X. The resulting algebra is commutative by construction.
If {X1, . . . , Xn} is a basis for V , then one can show that the set
{X1^{i1} · · · Xn^{in} | i1, . . . , in ∈ N0} (we define X^0 = 1) will be a basis for S(V ), which
is thus (unlike the exterior algebra) infinite-dimensional. If we set I = (i1, . . . , ik)
we will use the short-hand notation XI for Xi1 · · · Xik. We define the length of
I to be |I| = k and write j ≤ I if j ≤ i1, . . . , ik.
Definition 2.32 (Universal Enveloping Algebra). Let g be a Lie algebra.
By a universal enveloping algebra of g we understand a pair (U, i) of an
associative unital algebra U and a linear map i : g −→ U with i([X, Y ]) =
i(X)i(Y ) − i(Y )i(X), satisfying that for any pair (A, ϕ) of an associative unital
algebra A and a linear map ϕ : g −→ A with ϕ([X, Y ]) = ϕ(X)ϕ(Y ) − ϕ(Y )ϕ(X)
there is a unique algebra homomorphism ϕ̄ : U −→ A with ϕ = ϕ̄ ◦ i.
In other words any linear map ϕ : g −→ A satisfying the above condition
factorizes through U, rendering the following diagram commutative:

    g ---i---> U
     \         |
      ϕ        | ϕ̄
       \       v
        `----> A

As for the symmetric algebra, multiplication in a universal enveloping algebra is
written by juxtaposition.
Proposition 2.33. Let g be a Lie algebra and J the two-sided ideal in T(g)
generated by elements of the form X ⊗ Y − Y ⊗ X − [X, Y ]. If i denotes the
restriction of the canonical map κ : T(g) −→ T(g)/J to g then (T(g)/J, i) is a
universal enveloping algebra for g. It is unique up to algebra isomorphism.
Proof. Uniqueness first. Assume that (U, i) and (Ũ, ĩ) are universal enveloping
algebras for g. Since ĩ : g −→ Ũ is a linear map satisfying the bracket condition,
the universal property of (U, i) yields an algebra homomorphism ϕ : U −→ Ũ
so that ĩ = ϕ ◦ i. Likewise for i : g −→ U the universal property of (Ũ, ĩ) yields
an algebra homomorphism ψ : Ũ −→ U so that i = ψ ◦ ĩ. Composing these gives
that i = ψ ◦ ϕ ◦ i, i.e. ψ ◦ ϕ makes the following diagram commutative:

    g ---i---> U
     \         |
      i        | ψ ◦ ϕ
       \       v
        `----> U

But obviously idU also makes the diagram commute and by uniqueness ψ ◦ ϕ =
idU. Likewise one shows that ϕ ◦ ψ = idŨ, thus U and Ũ are isomorphic.
To show existence we just need to verify that (T(g)/J, i) is really a universal
enveloping algebra. Well, first of all

i([X, Y ]) = κ([X, Y ]) = [X, Y ] + J
= [X, Y ] + X ⊗ Y − Y ⊗ X − [X, Y ] + J = X ⊗ Y − Y ⊗ X + J
= (X + J) ⊗ (Y + J) − (Y + J) ⊗ (X + J)
= κ(X)κ(Y ) − κ(Y )κ(X) = i(X)i(Y ) − i(Y )i(X).
Now suppose that ϕ : g −→ A is a linear map satisfying ϕ([X, Y]) = ϕ(X)ϕ(Y) − ϕ(Y)ϕ(X). Since ϕ is linear, it factorizes uniquely through T(g): if ι : g −→ T(g) denotes the inclusion, there is an algebra homomorphism ϕ₀ : T(g) −→ A with ϕ = ϕ₀ ◦ ι. On the generators of J we see that

ϕ₀(X ⊗ Y − Y ⊗ X − [X, Y]) = ϕ₀(X ⊗ Y) − ϕ₀(Y ⊗ X) − ϕ₀([X, Y])
= ϕ(X)ϕ(Y) − ϕ(Y)ϕ(X) − ϕ([X, Y]) = 0.

Thus, vanishing on J, ϕ₀ factorizes uniquely through T(g)/J by an algebra homomorphism ϕ̄ : T(g)/J −→ A, i.e. ϕ = ϕ̄ ◦ i. This proves existence.
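The quotient construction can also be made computational: in T(g)/J, any word in the generators may be rewritten into a fixed order using the relation uv = vu + [u, v]. A minimal sketch for g = sl(2, C) with the ordering X < H < Y (the brackets [H, X] = 2X, [X, Y] = H, [H, Y] = −2Y are the standard ones; the helper names are ours). That the ordered monomials actually form a basis is the Poincaré–Birkhoff–Witt theorem, which is a separate fact not proved here:

```python
from collections import defaultdict

# Ordering on the standard basis of sl(2, C).
ORDER = {'X': 0, 'H': 1, 'Y': 2}

# Brackets [u, v] for u > v in the ordering, as word -> coefficient:
# [H, X] = 2X, [Y, X] = -H, [Y, H] = 2Y.
BRACKET = {('H', 'X'): {('X',): 2},
           ('Y', 'X'): {('H',): -1},
           ('Y', 'H'): {('Y',): 2}}

def normalize(element):
    """Rewrite a linear combination of words (dict: tuple -> coeff)
    into ordered monomials, using u v = v u + [u, v] in T(g)/J."""
    result = defaultdict(int)
    stack = list(element.items())
    while stack:
        word, coeff = stack.pop()
        if coeff == 0:
            continue
        for i in range(len(word) - 1):
            u, v = word[i], word[i + 1]
            if ORDER[u] > ORDER[v]:
                # Out-of-order adjacent pair: replace u v by v u + [u, v].
                stack.append((word[:i] + (v, u) + word[i + 2:], coeff))
                for bracket_word, c in BRACKET[(u, v)].items():
                    stack.append((word[:i] + bracket_word + word[i + 2:], coeff * c))
                break
        else:
            result[word] += coeff  # word already in order
    return {w: c for w, c in result.items() if c != 0}

# i(X)i(Y) - i(Y)i(X) = i([X, Y]) = i(H) in U(sl(2, C)):
assert normalize({('X', 'Y'): 1, ('Y', 'X'): -1}) == {('H',): 1}
```

The rewriting terminates because each step either shortens a word (the bracket term) or reduces its number of inversions (the swap).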
  • 4. 4 7.3 Representation Theory . . . . . . . . . . . . . . . . . . . . . . . . 121 8 Spin Groups 125 8.1 The Clifford Group . . . . . . . . . . . . . . . . . . . . . . . . . . 125 8.2 Pin and Spin Groups . . . . . . . . . . . . . . . . . . . . . . . . . 128 8.3 Double Coverings . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 8.4 Spin Group Representations . . . . . . . . . . . . . . . . . . . . . 135 9 Topological K-Theory 139 9.1 The K-Functors . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 9.2 The Long Exact Sequence . . . . . . . . . . . . . . . . . . . . . . 144 9.3 Exterior Products and Bott Periodicity . . . . . . . . . . . . . . 149 9.4 Equivariant K-theory . . . . . . . . . . . . . . . . . . . . . . . . . 151 9.5 The Thom Isomorphism . . . . . . . . . . . . . . . . . . . . . . . 155 10 Characteristic Classes 163 10.1 Connections on Vector Bundles . . . . . . . . . . . . . . . . . . . 163 10.2 Connections on Associated Vector Bundles* . . . . . . . . . . . . 166 10.3 Pullback Bundles and Pullback Connections . . . . . . . . . . . . 172 10.4 Curvature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 10.5 Metric Connections . . . . . . . . . . . . . . . . . . . . . . . . . . 178 10.6 Characteristic Classes . . . . . . . . . . . . . . . . . . . . . . . . 180 10.7 Orientation and the Euler Class . . . . . . . . . . . . . . . . . . . 186 10.8 Splitting Principle, Multiplicative Sequences . . . . . . . . . . . . 190 10.9 The Chern Character . . . . . . . . . . . . . . . . . . . . . . . . . 197 11 Differential Operators 201 11.1 Differential Operators on Manifolds . . . . . . . . . . . . . . . . . 201 11.2 The Principal Symbol . . . . . . . . . . . . . . . . . . . . . . . . 205 11.3 Dirac Bundles and the Dirac Operator . . . . . . . . . . . . . . . 210 11.4 Sobolev Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220 11.5 Elliptic Complexes . . . . . . . . . . . . . . . . . . . . . . . . . . 
227 12 The Atiyah-Singer Index Theorem 233 12.1 K-Theoretic Version . . . . . . . . . . . . . . . . . . . . . . . . . 233 12.2 Cohomological Version . . . . . . . . . . . . . . . . . . . . . . . . 236 A Table of Clifford Algebras 245 B Calculation of Fundamental Groups 247 Bibliography 251 Index 252
Preface

When following courses given by Ryszard Nest at the University of Copenhagen, you can be almost certain that a reference to the Atiyah-Singer Index Theorem will appear at least once during the course. Thus it was an obvious project for me to find out what this apparently great theorem was all about. However, from the beginning I was well aware that this was not an easy task, and that it would be necessary for me to delve into a lot of other subjects involved in its formulation before the goal could be reached. It has never been my intention to actually prove the theorem (well, except for a few moments of utter overambitiousness) but merely to pave a road for my own understanding. This road leads through such varied subjects as K-theory, characteristic classes and elliptic theory. I have tried to treat each subject as thoroughly and self-contained as I could, even though this meant including material which wasn't really necessary for the Index Theorem. The starting point is of course my own prerequisites when I began my work half a year ago, that is, a solid foundation in Riemannian geometry, algebraic topology (notably homology and cohomology) and pseudodifferential calculus on Euclidean space.

From here we develop at first, in a systematic way, topological K-theory. The approach is via vector bundles, as it can be found in for instance [Atiyah] or [Hatcher]; no C*-algebras are involved. In the first two sections the basic theory will be outlined and most proofs will be given. In the third section we present the famous Bott Periodicity Theorem, without giving a proof. The last two sections are dedicated to the Thom Isomorphism. To this end we introduce equivariant K-theory (that is, K-theory involving group actions), a slight generalization of the K-theory treated in the first sections. I follow the outline given in the classical article [Segal]. One could argue that equivariant K-theory could have been introduced from the very beginning; however, I have chosen not to, in order not to blur the introductory presentation with too many technicalities.

The second chapter deals with the Chern-Weil approach to characteristic classes of vector bundles. The first four sections are devoted to the study of the basic theory of connections on vector bundles. From the curvature forms and invariant polynomials we construct characteristic classes, in particular Chern and Pontrjagin classes, and their relationships will be discussed. In the following section the Euler class of oriented bundles is defined. I have relied heavily on [Morita] and [Milnor, Stasheff] when working out these sections, but [Madsen, Tornehave] has also provided valuable inspiration. The chapter ends with a discussion of certain characteristic classes constructed not from invariant polynomials but from invariant formal power series. Examples of such classes are the Todd class, the total Â-class and the Chern character. No effort has been made to include "great theorems"; in fact there are really no major results in this chapter. It serves as a toolbox to be applied to the construction of the topological index.

The third chapter revolves around differential operators on manifolds. In the standard literature on this subject not much care is taken when transferring differential operators and principal symbols from Euclidean space to manifolds. I've tried to remedy this, giving a precise and detailed treatment. To this I have added a lot of examples of "classical" differential operators, such as the Laplacian, Hodge-de Rham operators, Dirac operators etc., calculating their formal adjoints and principal symbols. To shed some light on the analytic properties we introduce Sobolev spaces. Essentially there are two different definitions: in the first, Sobolev spaces are defined in terms of connections, and in the second they are defined as the "clutching" of local Euclidean Sobolev spaces. We prove that the two definitions agree when the underlying manifold is compact, and we show how to extend differential operators to continuous operators between the Sobolev spaces. The major results, such as the Sobolev Embedding Theorem, the Rellich Lemma and Elliptic Regularity, are given without proofs. We then move on to elliptic complexes, which provide us with a link to the K-theory developed in the first chapter.

In the fourth and final chapter the Index Theorem is presented. We construct the so-called topological index map from the K-group K(TM) to the integers and state the Index Theorem, which says that the index function, when evaluated on the specific K-class determined by the symbol of an elliptic differential operator, is in fact equal to the Fredholm index. I give a short sketch of the proof based on the original 1968 article by Atiyah and Singer. Then, by introducing the cohomological Thom isomorphism, Thom defect classes etc., and drawing heavily on the theory developed in the previous chapters, we manage to deduce the famous cohomological index formula. To demonstrate the power of the Index Theorem, we prove two corollaries, namely the generalized Gauss-Bonnet Theorem and the fact that any elliptic differential operator on a compact manifold of odd dimension has index 0.

I would like to thank Professor Ryszard Nest for his guidance and inspiration, as well as answers to my increasing amount of questions.

Copenhagen, March 2008
Thomas Hjortgaard Danielsen
Part I

Representation Theory of Groups and Lie Algebras
Chapter 1

Peter-Weyl Theory

1.1 Foundations of Representation Theory

We begin by introducing some basic but fundamental notions and results regarding representation theory of topological groups. Soon, however, we shall restrict our focus to compact groups and later to Lie groups and their Lie algebras.

To define the notion of a representation, let V denote a separable Banach space and equip B(V), the space of bounded linear maps V → V, with the strong operator topology, i.e. the topology on B(V) generated by the seminorms ‖A‖_x = ‖Ax‖. Let Aut(V) ⊆ B(V) denote the group of invertible linear maps and equip it with the subspace topology, which turns it into a topological group.

Definition 1.1 (Representation). By a continuous representation of a topological group G on a separable Banach space V we understand a continuous group homomorphism π : G → Aut(V). We also say that V is given the structure of a G-module. If π is an injective homomorphism, the representation is called faithful. By the dimension of the representation we mean the dimension of the vector space on which it is represented. If V is infinite-dimensional, the representation is said to be infinite-dimensional as well.

In what follows, a group without further specification will always denote a locally compact topological group, and by a representation we will always understand a continuous representation. The reason why we demand the groups to be locally compact should become apparent in the next section. We will distinguish between real and complex representations depending on whether V is a real or complex Banach space. Without further qualification, the representations considered will all be complex.

The requirement that π be strongly continuous can be a little hard to handle, so here is an equivalent condition which is more applicable:

Proposition 1.2. Let π : G → Aut(V) be a group homomorphism.
Then the following conditions are equivalent:

1) π is continuous w.r.t. the strong operator topology on Aut(V), i.e. π is a continuous representation.

2) The map G × V → V given by (g, v) ↦ π(g)v is continuous.

For a proof see [1], Proposition 18.8.
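Before turning to examples, here is a small numerical illustration of Definition 1.1 (my own sketch, not from the text): the rotation matrices define a continuous homomorphism from the circle group into Aut(C²), and the homomorphism property can be checked directly. The function name and sample angles are arbitrary.

```python
import numpy as np

def pi(t):
    # Rotation by the angle t: a continuous homomorphism from the circle
    # group into Aut(C^2), i.e. a continuous representation.
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]], dtype=complex)

s, t = 0.4, 1.1
# Homomorphism property: pi(s + t) = pi(s) pi(t).
assert np.allclose(pi(s + t), pi(s) @ pi(t))
# Each pi(t) is invertible with inverse pi(-t), so pi really lands in Aut(C^2).
assert np.allclose(pi(t) @ pi(-t), np.eye(2))
```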
Example 1.3. The simplest example one can think of is the trivial representation: let G be a group and V a Banach space, and consider the map G ∋ g ↦ id_V. This is obviously a continuous group homomorphism and hence a representation.

Now, let G be a matrix Lie group (i.e. a closed subgroup of GL(n, C)). Choosing a basis for Cⁿ we get an isomorphism Aut(Cⁿ) ≅ GL(n, C), and we can thus define a representation of G on Cⁿ simply by the inclusion map G → GL(n, C). This is obviously a continuous representation of G, called the defining representation.

We can form new representations out of old ones. If (π1, V1) and (π2, V2) are representations of G on Banach spaces, we can form their direct sum π1 ⊕ π2, the representation of G on V1 ⊕ V2 (which has been given the norm ‖(x, y)‖ = ‖x‖ + ‖y‖, turning V1 ⊕ V2 into a Banach space) given by

(π1 ⊕ π2)(g)(x, y) = (π1(g)x, π2(g)y).

If we have a countable family (H_i)_{i∈I} of Hilbert spaces, we can form the direct sum Hilbert space ⊕_{i∈I} H_i, the vector space of sequences (x_i), x_i ∈ H_i, satisfying Σ_{i∈I} ‖x_i‖² < ∞. Equipped with the inner product ⟨(x_i), (y_i)⟩ = Σ_{i∈I} ⟨x_i, y_i⟩ this is again a Hilbert space. If we have a countable family (π_i, H_i) of representations such that sup_{i∈I} ‖π_i(g)‖ < ∞ for each g ∈ G, then we can form the direct sum ⊕_{i∈I} π_i of the representations on ⊕_{i∈I} H_i by

(⊕_{i∈I} π_i)(g)(x_i) = (π_i(g)x_i).

Finally, if (π1, H1) and (π2, H2) are representations on Hilbert spaces, we can form their tensor product: equip the tensor product vector space H1 ⊗ H2 with the inner product

⟨x1 ⊗ x2, y1 ⊗ y2⟩ = ⟨x1, y1⟩⟨x2, y2⟩,

which turns H1 ⊗ H2 into a Hilbert space, and define the tensor product representation π1 ⊗ π2 by

(π1 ⊗ π2)(g)(x ⊗ y) = π1(g)x ⊗ π2(g)y.

Definition 1.4 (Unitary Representation). By a unitary representation of a group G we understand a representation π on a Hilbert space H such that π(g) is a unitary operator for each g ∈ G.
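In finite dimensions the direct sum and tensor product constructions above can be sketched concretely (a NumPy sketch with toy matrices of my own choosing; the direct sum becomes a block-diagonal matrix and the tensor product a Kronecker product):

```python
import numpy as np

def direct_sum(A, B):
    # Block-diagonal matrix: (pi1 ⊕ pi2)(g) acting on V1 ⊕ V2.
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]),
                   dtype=complex)
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

def tensor(A, B):
    # Kronecker product: (pi1 ⊗ pi2)(g) acting on H1 ⊗ H2.
    return np.kron(A, B)

# Toy values of two representations at one group element g:
theta = 0.7
pi1 = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]], dtype=complex)
pi2 = np.array([[np.exp(1j * theta)]])

# The homomorphism property survives both constructions, e.g.
# (pi1 ⊗ pi2)(g)(pi1 ⊗ pi2)(g) = (pi1 ⊗ pi2)(g^2):
assert np.allclose(direct_sum(pi1 @ pi1, pi2 @ pi2),
                   direct_sum(pi1, pi2) @ direct_sum(pi1, pi2))
assert np.allclose(tensor(pi1 @ pi1, pi2 @ pi2),
                   tensor(pi1, pi2) @ tensor(pi1, pi2))
```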
Obviously the trivial representation is a unitary representation, as is the defining representation of any subgroup of the unitary group U(n). In the next section we show unitarity of some more interesting representations.

Definition 1.5 (Intertwiner). Let two representations (π1, V1) and (π2, V2) of the same group G be given. By an intertwiner or an intertwining map between π1 and π2 we understand a bounded linear map T : V1 → V2 rendering the following diagram commutative for every g ∈ G:

          T
     V1 -----> V2
      |         |
 π1(g)|         |π2(g)
      v         v
     V1 -----> V2
          T

i.e. satisfying T ∘ π1(g) = π2(g) ∘ T for all g ∈ G. The set of all intertwining maps is denoted Hom_G(V1, V2).
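As an illustration of Definition 1.5 (a sketch with hypothetical choices, not an example from the text): the rotation representation of the circle on C² is intertwined with the diagonal representation t ↦ diag(e^{it}, e^{-it}) by the unitary matrix whose columns are the common eigenvectors of all the rotation matrices.

```python
import numpy as np

def pi1(t):
    # Rotation representation of the circle on C^2.
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]], dtype=complex)

def pi2(t):
    # Diagonal representation: direct sum of the characters e^{it}, e^{-it}.
    return np.diag([np.exp(1j * t), np.exp(-1j * t)])

# Columns (1, -i)/sqrt(2) and (1, i)/sqrt(2) are eigenvectors of every pi1(t);
# T intertwines pi2 with pi1:  T ∘ pi2(t) = pi1(t) ∘ T.
T = np.array([[1, 1], [-1j, 1j]], dtype=complex) / np.sqrt(2)

for t in (0.3, 1.7, 2.9):
    assert np.allclose(T @ pi2(t), pi1(t) @ T)
```

Since T is invertible (indeed unitary), it is an equivalence of representations in the sense defined below.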
A bijective intertwiner with bounded inverse between two representations is called an equivalence of representations, and the two representations are said to be equivalent. This is denoted π1 ≅ π2. It is easy to see that Hom_G(V1, V2) is a vector space, and that Hom_G(V, V) is an algebra. The dimension of Hom_G(V1, V2) is called the intertwining number of the two representations. If π1 ≅ π2 via an intertwiner T, then we have π2(g) = T ∘ π1(g) ∘ T⁻¹. Since we can thus express the one in terms of the other, for almost any purpose the two representations can be regarded as the same.

Proposition 1.6. Hom_G respects direct sums in the sense that

Hom_G(V1 ⊕ V2, W) ≅ Hom_G(V1, W) ⊕ Hom_G(V2, W)  and   (1.1)
Hom_G(V, W1 ⊕ W2) ≅ Hom_G(V, W1) ⊕ Hom_G(V, W2).   (1.2)

Proof. For the first isomorphism we define Φ : Hom_G(V1 ⊕ V2, W) → Hom_G(V1, W) ⊕ Hom_G(V2, W) by Φ(T) := (T|_{V1}, T|_{V2}). It is easy to check that this is indeed an element of the latter space. It has an inverse Φ⁻¹ given by Φ⁻¹(T1, T2)(v1, v2) := T1(v1) + T2(v2), and this proves the first isomorphism. The second can be proved in the same way.

Definition 1.7. Given a representation (π, V) of a group G, we say that a linear subspace U ⊆ V is π-invariant, or just invariant, if π(g)U ⊆ U for all g ∈ G.

If U is a closed invariant subspace for a representation π of G on V, we automatically get a representation of G on U, simply by restricting all the π(g)'s to U (U should be a Banach space, and therefore we need U to be closed). This is clearly a representation, and we will denote it π|_U (although we are restricting the π(g)'s to U and not π).

Here is a simple condition with which to check invariance of a given subspace, at least in the case of a unitary representation.

Lemma 1.8. Let (π, H) be a unitary representation of G, let H = U ⊕ U⊥ be a decomposition of H, and denote by P : H → U the orthogonal projection onto U. If U is π-invariant then so is U⊥.
Furthermore, U is π-invariant if and only if P ∘ π(g) = π(g) ∘ P for all g ∈ G.

Proof. Assume that U is invariant. To show that U⊥ is invariant, let v ∈ U⊥. We need to show that π(g)v ∈ U⊥, i.e. that ⟨π(g)v, u⟩ = 0 for all u ∈ U. But that is easy, exploiting unitarity of π(g):

⟨π(g)v, u⟩ = ⟨π(g⁻¹)(π(g)v), π(g⁻¹)u⟩ = ⟨v, π(g⁻¹)u⟩,

which is 0 since π(g⁻¹)u ∈ U and v ∈ U⊥. Thus U⊥ is invariant.

Now assume U to be invariant; then U⊥ is invariant as well by the above. We split x ∈ H into x = Px + (1 − P)x and calculate

P ∘ π(g)x = P(π(g)(Px + (1 − P)x)) = Pπ(g)Px + Pπ(g)(1 − P)x.

The first term is π(g)Px, since π(g)Px ∈ U, and the second term is zero, since π(g)(1 − P)x ∈ U⊥. Thus we have the desired formula.
Conversely, assume that P ∘ π(g) = π(g) ∘ P. Every vector u ∈ U is of the form Px for some x ∈ H. Since π(g)u = π(g)(Px) = P(π(g)x) ∈ U, U is an invariant subspace.

For any representation (π, V) it is easy to see two obvious invariant subspaces, namely V itself and {0}. We shall focus a lot on representations having no invariant subspaces except these two:

Definition 1.9. A representation is called irreducible if it has no closed invariant subspaces except the trivial ones. The set of equivalence classes of finite-dimensional irreducible representations of a group G is denoted Ĝ.

A representation is called completely reducible if it is equivalent to a direct sum of finite-dimensional irreducible representations.

Any 1-dimensional representation is obviously irreducible, and if the group is abelian the converse is actually true; we prove this in Proposition 1.14.

If (π1, V1) and (π2, V2) are irreducible representations, then the direct sum π1 ⊕ π2 is not irreducible, since V1 is a π1 ⊕ π2-invariant subspace of V1 ⊕ V2:

(π1 ⊕ π2)(g)(v, 0) = (π1(g)v, 0).

The question is more subtle when considering tensor products of irreducible representations. Whether or not the tensor product of two irreducible representations is irreducible, and if not, how to write it as a direct sum of irreducible representations, is a branch of representation theory known as Clebsch-Gordan theory.

Lemma 1.10. Let (π1, V1) and (π2, V2) be equivalent representations. Then π1 is irreducible if and only if π2 is irreducible.

Proof. Given the symmetry of the problem, it is sufficient to verify that irreducibility of π1 implies irreducibility of π2. Let T : V1 → V2 denote the intertwiner, which by the Open Mapping Theorem is a linear homeomorphism. Assume that U ⊆ V2 is a closed invariant subspace. Then T⁻¹U ⊆ V1 is closed and π1-invariant:

π1(g)T⁻¹U = T⁻¹π2(g)U ⊆ T⁻¹U.

But this means that T⁻¹U is either {0} or V1, i.e. U is either {0} or V2.

Example 1.11.
Consider the group SL(2, C) viewed as a real (hence 6-dimensional) Lie group. We consider the following four complex representations of the real Lie group SL(2, C) on C²:

ρ(A)ψ := Aψ,   ρ̄(A)ψ := Āψ,   ρ̃(A)ψ := (Aᵀ)⁻¹ψ,   ρ̄̃(A)ψ := (A*)⁻¹ψ,

where Ā simply means complex conjugation of all the entries. All four are clearly irreducible. They are important in physics, where they are called spinorial representations. The physicists have a habit of writing everything in coordinates; thus ψ will usually be written ψ_α, where α = 1, 2, but the exact notation will vary according to which representation we have imposed on C² (i.e. according to how ψ transforms, as the physicists say). In other words, they view C² not as a vector space but rather as an SL(2, C)-module. The notations are ψ_α ∈ C², ψ_α̇ ∈ C̄², ψ^α ∈ C̃² and ψ^α̇ ∈ C̄̃², one for each of the four module structures.
The representations are not all mutually inequivalent: the map ϕ : C² → C² given by the matrix

( 0  −1 )
( 1   0 )

intertwines ρ with ρ̃, and intertwines ρ̄ with ρ̄̃. On the other hand, ρ and ρ̄ are actually inequivalent, as we will see in Section 1.4. These two representations are called the fundamental representations of SL(2, C).

In short, representation theory has two goals: 1) given a group, find all its irreducible representations, and 2) given a representation of this group, split it (if possible) into a direct sum of irreducibles. The rest of this chapter deals with the second problem (at least for compact groups), and in the end we will achieve some powerful results (Schur orthogonality and the Peter-Weyl Theorem). Chapter 5 revolves around the first problem of finding irreducible representations. But already at this stage we are able to state and prove two quite interesting results.

The first result is known as Schur's Lemma. We prove a slightly more general version than is usually seen, allowing the representations to be infinite-dimensional.

Theorem 1.12 (Schur's Lemma). Let (π1, H1) and (π2, H2) be two irreducible unitary representations of a group G, and suppose that F : H1 → H2 is an intertwiner. Then either F is an equivalence of representations or F is the zero map. If (π, H) is an irreducible unitary representation of G and F ∈ B(H) is a linear map which commutes with all π(g), then F = λ id_H.

Proof. The proof utilizes a neat result from Gelfand theory: suppose that A is a commutative unital C*-algebra which is also an integral domain (i.e. ab = 0 implies a = 0 or b = 0); then A ≅ Ce, where e denotes the unit. The proof is rather simple. Gelfand's Theorem states that there exists a compact Hausdorff space X such that A ≅ C(X). To reach a contradiction, assume that X is not a one-point set, and pick two distinct points x and y.
Then, since X is a normal topological space, we can find disjoint open neighborhoods U and V around x and y, and the Urysohn Lemma gives us two nonzero continuous functions f and g on X, the first one supported in U and the second in V; their product is thus zero. This contradicts the assumption that A ≅ C(X) is an integral domain. Therefore X can contain only one point, and thus C(X) ≅ C.

With this result in mind we return to Schur's Lemma. F being an intertwiner means that F ∘ π1(g) = π2(g) ∘ F, and using unitarity of π1(g) and π2(g) we get F* ∘ π2(g) = π1(g) ∘ F*, where F* is the Hermitian adjoint of F. This yields

(FF*) ∘ π2(g) = F ∘ π1(g) ∘ F* = π2(g) ∘ (FF*).

In the last equality we also used that F intertwines the two representations. Consider the C*-algebra A = C*(id_{H2}, FF*), the C*-algebra generated by id_{H2} and FF*. It is a commutative unital C*-algebra, and all its elements are of the form Σ_{n=0}^∞ a_n (FF*)ⁿ. They commute with π2(g):

(Σ_{n=0}^∞ a_n (FF*)ⁿ) π2(g) = Σ_{n=0}^∞ a_n ((FF*)ⁿ π2(g)) = Σ_{n=0}^∞ a_n (π2(g)(FF*)ⁿ) = π2(g) Σ_{n=0}^∞ a_n (FF*)ⁿ.

We only need to show that A is an integral domain. Assume ST = 0. Since π2(g)S = Sπ2(g), it is easy to see that ker S is π2-invariant. π2 is irreducible,
so ker S is either H2 or {0}. In the first case S = 0 and we are done; in the second case S is injective, and so T must be the zero map. This means that A = C id_{H2}; in particular, there exists a λ ∈ C such that FF* = λ id_{H2}. Likewise, one shows that F*F = λ′ id_{H1}. Thus we see

λF = F(F*F) = (FF*)F = λ′F,

which implies F = 0 or λ = λ′. In the second case, if λ = λ′ = 0, then F*Fv = 0 for all v ∈ H1, and hence 0 = ⟨v, F*Fv⟩ = ⟨Fv, Fv⟩, i.e. F = 0. If λ = λ′ and λ ≠ 0, then it is not hard to see that λ^{-1/2}F is unitary, and that F therefore is an isomorphism.

The second claim is an immediate consequence of the proof of the first.

The content of this can be summed up as follows: if π1 and π2 are irreducible unitary representations of G on H1 and H2, then Hom_G(H1, H2) ≅ C if π1 and π2 are equivalent, and Hom_G(H1, H2) = {0} if π1 and π2 are inequivalent.

Corollary 1.13. Let (π, H1) and (ρ, H2) be finite-dimensional unitary representations which decompose into irreducibles as

π = ⊕_{i∈I} m_i δ_i  and  ρ = ⊕_{i∈I} n_i δ_i.

Then dim Hom_G(H1, H2) = Σ_{i∈I} n_i m_i.

Proof. Denoting the representation spaces of the irreducible representations by V_i, we get from (1.1) and (1.2) that

Hom_G(H1, H2) = ⊕_{i∈I} ⊕_{j∈I} n_i m_j Hom_G(V_i, V_j),

and by Schur's Lemma the dimension formula now follows.

Now for the promised result on abelian groups:

Proposition 1.14. Let G be an abelian group and (π, H) a unitary representation of G. If π is irreducible then π is 1-dimensional.

Proof. Since G is abelian we have π(g)π(h) = π(h)π(g), i.e. each π(h) is an intertwiner. Since π is irreducible, Schur's Lemma says that π(h) = λ(h) id_H. Thus, each 1-dimensional subspace of H is invariant, and by irreducibility H is 1-dimensional.

Example 1.15. With the previous lemma we are in a position to determine the set of irreducible complex representations of the circle group T = R/Z.
Since this is an abelian group, we have found all the irreducible representations once we know all the 1-dimensional ones. A 1-dimensional representation is just a homomorphism R/Z → C*, so let us find them. It is well known that the only continuous homomorphisms R → C* are those of the form x ↦ e^{2πiax} for some a ∈ R. But since we also want periodicity with period 1, only integer values of a are allowed. Thus T̂ consists of the homomorphisms ρ_n(x) = e^{2πinx} for n ∈ Z.

Proposition 1.16. Every finite-dimensional unitary representation is completely reducible.
Proof. If the representation is irreducible then we are done, so assume we have a unitary representation π : G → Aut(H) and let {0} ≠ U ⊆ H be an invariant subspace. The point is that U⊥ is invariant as well, cf. Lemma 1.8. If both π|_U and π|_{U⊥} are irreducible we are done. If one of them is not, we find an invariant subspace and perform the above argument once again. Since the representation is finite-dimensional and since 1-dimensional representations are irreducible, the argument must stop at some point.

1.2 The Haar Integral

In the representation theory of locally compact groups (also known as harmonic analysis) the notions of Haar integral and Haar measure play a key role. Some preliminary definitions: let X be a locally compact Hausdorff space and C_c(X) the space of complex-valued functions on X with compact support. By a positive integral on X is understood a linear functional I : C_c(X) → C such that I(f) ≥ 0 if f ≥ 0. The Riesz Representation Theorem tells us that to each such positive integral there exists a unique Radon measure µ on the Borel algebra B(X) such that

I(f) = ∫_X f dµ.

We say that this measure µ is associated with the positive integral.

Now, let G be a group. For each g0 ∈ G we have two maps L_{g0} and R_{g0}, left and right translation, on the set of complex-valued functions on G, given by

(L_{g0} f)(g) = f(g0⁻¹g),   (R_{g0} f)(g) = f(gg0).

These obviously satisfy L_{g1 g2} = L_{g1} L_{g2} and R_{g1 g2} = R_{g1} R_{g2}.

Definition 1.17 (Haar Measure). Let G be a locally compact group. A nonzero positive integral I on G is called a left Haar integral if I(L_g f) = I(f) for all g ∈ G and f ∈ C_c(G). Similarly, a nonzero positive integral is called a right Haar integral if I(R_g f) = I(f) for all g ∈ G and f ∈ C_c(G). An integral which is both a left and a right Haar integral is called a Haar integral. The measures associated with left and right Haar integrals are called left and right Haar measures.
The measure associated with a Haar integral is called a Haar measure.

Example 1.18. On (Rⁿ, +) the Lebesgue integral is a Haar integral: it is obviously positive, and it is well known that the Lebesgue integral is translation invariant:

∫_{Rⁿ} f(x + a) dx = ∫_{Rⁿ} f(−a + x) dx = ∫_{Rⁿ} f(x) dx.

The associated Haar measure is of course the Lebesgue measure m_n.

On the circle group (T, ·) we define an integral I by

C(T) ∋ f ↦ (1/2π) ∫_0^{2π} f(e^{it}) dt.

As before, this is obviously a positive integral, and since

I(L_{e^{ia}} f) = (1/2π) ∫_0^{2π} f(e^{−ia} e^{it}) dt = (1/2π) ∫_0^{2π} f(e^{i(−a+t)}) dt = (1/2π) ∫_0^{2π} f(e^{it}) dt,
again by exploiting translation invariance of the Lebesgue measure, I is a left Haar integral on T. Likewise one can show that it is a right Haar integral as well, and hence a Haar integral. The associated Haar measure on T is also called the arc measure.

In both cases the groups were abelian, and in both cases the left Haar integrals were also right Haar integrals. This is no mere coincidence, for if G is an abelian group we have L_{g0} = R_{g0⁻¹}, and thus a positive integral is a left Haar integral if and only if it is a right Haar integral.

The following central theorem, attributed to Alfréd Haar and acclaimed as one of the most important mathematical discoveries of the 20th century, states existence and uniqueness of left and right Haar integrals on locally compact groups.

Theorem 1.19. Every locally compact group G possesses a left Haar integral and a right Haar integral, and these are unique up to multiplication by a positive constant. If G is compact then the two integrals coincide, and the corresponding Haar measure is finite.

It would be far beyond the scope of this thesis to delve into the proof of this. The existence part of the proof is a hard job, so we just send some acknowledging thoughts to Alfréd Haar and accept it as a fact of life.

Now we restrict focus to compact groups, on which, as we have just seen, we have a finite Haar measure. The importance of this finiteness is manifested in the following result:

Theorem 1.20 (Unitarization). Let G be a compact group and (π, H) a representation on a Hilbert space (H, ⟨·, ·⟩). Then there exists an inner product ⟨·, ·⟩_G on H, equivalent to ⟨·, ·⟩, which makes π a unitary representation.

Proof. Since the measure is finite, we can integrate all bounded measurable functions over G. Let us assume the measure to be normalized, i.e. that µ(G) = 1. For x1, x2 ∈ H the map g ↦ ⟨π(g)x1, π(g)x2⟩ is continuous (by Proposition 1.2), hence bounded and measurable, i.e. integrable.
Now define a new inner product by

⟨x1, x2⟩_G := ∫_G ⟨π(g)x1, π(g)x2⟩ dg.   (1.3)

That this is a genuine inner product is not hard to see: it is obviously sesquilinear by the properties of the integral, and it is conjugate-symmetric, as the original inner product is conjugate-symmetric. Finally, if x ≠ 0 then π(g)x ≠ 0 (π(g) is invertible) and thus ‖π(g)x‖ > 0 for all g ∈ G. Since the map g ↦ ‖π(g)x‖² is continuous, we have ⟨x, x⟩_G = ∫_G ‖π(g)x‖² dg > 0.

By the translation invariance of the Haar measure we get

⟨π(h)x1, π(h)x2⟩_G = ∫_G ⟨π(gh)x1, π(gh)x2⟩ dg = ∫_G ⟨π(g)x1, π(g)x2⟩ dg = ⟨x1, x2⟩_G.

Thus π is unitary w.r.t. this new inner product. We just need to show that the two norms ‖·‖ and ‖·‖_G corresponding to the two inner products are equivalent, i.e. that there exists a constant C such that ‖·‖ ≤ C‖·‖_G and ‖·‖_G ≤ C‖·‖. To this end, consider the map g ↦ ‖π(g)x‖² for some x ∈ H. It is a continuous map, hence sup_{g∈G} ‖π(g)x‖² < ∞ for all x, and
the Uniform Boundedness Principle now says that C := sup_{g∈G} ‖π(g)‖ < ∞. Therefore

‖x‖² = ∫_G ‖x‖² dg = ∫_G ‖π(g⁻¹)π(g)x‖² dg ≤ C² ∫_G ‖π(g)x‖² dg = C² ‖x‖²_G.

Conversely, we see

‖x‖²_G = ∫_G ‖π(g)x‖² dg ≤ ∫_G ‖π(g)‖² ‖x‖² dg ≤ C² ∫_G ‖x‖² dg = C² ‖x‖².

This proves the claim.

If we combine this result with Proposition 1.16 we get:

Corollary 1.21. Every finite-dimensional representation of a compact group is completely reducible.

The Peter-Weyl Theorem, which we prove later in this chapter, provides a strong generalization of this result, in that it states that every Hilbert space representation of a compact group is completely reducible.

We end this section by introducing the so-called modular function, which provides a link between left and right Haar integrals. Let G be a topological group and I : f ↦ ∫_G f(g) dg a left Haar integral. Let h ∈ G and consider the integral Ĩ_h : f ↦ ∫_G f(gh⁻¹) dg. This is positive and satisfies

Ĩ_h(L_{g0} f) = ∫_G f(g0⁻¹gh⁻¹) dg = ∫_G f(gh⁻¹) dg = Ĩ_h(f),

i.e. it is a left Haar integral. By the uniqueness part of Haar's Theorem there exists a positive constant c such that Ĩ_h(f) = cI(f). We define the modular function ∆ : G → R₊ by assigning this constant to the group element h, i.e.

∫_G f(gh⁻¹) dg = ∆(h) ∫_G f(g) dg.

It is not hard to see that this is indeed a homomorphism: on one hand we have

∫_G f(g(hk)⁻¹) dg = ∆(hk) ∫_G f(g) dg,

and on the other hand we have that this equals

∫_G f(gk⁻¹h⁻¹) dg = ∆(h) ∫_G f(gk⁻¹) dg = ∆(h)∆(k) ∫_G f(g) dg.

Since this holds for all integrable functions f, we must have ∆(hk) = ∆(h)∆(k). One can show that this is in fact a continuous group homomorphism, and thus, in the case of G being a Lie group, a Lie group homomorphism. If ∆ is identically 1, that is, if every right Haar integral satisfies

∫_G f(hg) dg = ∫_G f(g) dg   (1.4)

for all h, then the group G is called unimodular. Eq. (1.4) says that an equivalent condition for a group to be unimodular is that all right Haar integrals are also left Haar integrals. As we have seen previously in this section, abelian groups and compact groups are unimodular.
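The averaging argument behind Theorem 1.20 can be tried out concretely for a finite group, where the normalized Haar integral is just the average over the group elements (a sketch; the group Z/5 and the matrices below are my own toy choices, not from the text):

```python
import numpy as np

n = 5
# A representation of the cyclic group Z/5 on C^2 that is NOT unitary in the
# standard inner product: conjugate a diagonal root-of-unity matrix by a
# non-unitary change of basis S.
S = np.array([[1.0, 2.0], [0.0, 1.0]], dtype=complex)
D = np.diag([np.exp(2j * np.pi / n), np.exp(-2j * np.pi / n)])
A = S @ D @ np.linalg.inv(S)                      # generator; A^n = I
reps = [np.linalg.matrix_power(A, k) for k in range(n)]

assert not np.allclose(A.conj().T @ A, np.eye(2))  # not unitary as it stands

# Averaged Gram matrix: <x, y>_G = y^* M x with M = (1/n) Σ_g π(g)^* π(g),
# the finite-group analogue of the Haar-integral construction (1.3).
M = sum(U.conj().T @ U for U in reps) / n

# Invariance: <π(h)x, π(h)y>_G = <x, y>_G, i.e. π(h)^* M π(h) = M.
for U in reps:
    assert np.allclose(U.conj().T @ M @ U, M)
```

In the new inner product every π(h) is unitary, exactly as the theorem asserts; for a compact group the finite average is replaced by the integral against the normalized Haar measure.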
1.3 Matrix Coefficients

Definition 1.22 (Matrix Coefficient). Let (π, V) be a finite-dimensional representation of a compact group G. By a matrix coefficient for the representation π we understand a map G → C of the form

m_{v,ϕ}(g) = ϕ(π(g)v)

for fixed v ∈ V and ϕ ∈ V*.

If we pick a basis {e1, . . . , en} for V and let {ε1, . . . , εn} denote the corresponding dual basis, then we see that the m_{e_i,ε_j}(g) = ε_j(π(g)e_i) are precisely the entries of the matrix representation of π(g) — hence the name matrix coefficient. If V comes with an inner product ⟨·, ·⟩, then by the Riesz Theorem all matrix coefficients are of the form m_{v,w}(g) = ⟨π(g)v, w⟩ for fixed v, w ∈ V. By Theorem 1.20 we can always assume that this is the case.

Denote by C(G)_π the space of linear combinations of matrix coefficients. Since a matrix coefficient is obviously a continuous map, C(G)_π ⊆ C(G) ⊆ L²(G). Thus we can take the inner product of two functions in C(G)_π. Note, however, that the elements of C(G)_π need not all be matrix coefficients for π.

The following technical lemma is an important ingredient in the proof of the Schur Orthogonality Relations, which is the main result of this section.

Lemma 1.23. Let (π, H) be a finite-dimensional unitary representation of a compact group G. Define the map T_π : End(H) → C(G) by

T_π(A)(g) = Tr(π(g) ∘ A).   (1.5)

Then C(G)_π = im T_π.

Proof. Given a matrix coefficient m_{v,w}, we should produce a linear map A : H → H such that m_{v,w} = T_π(A). Consider the map L_{v,w} : H → H defined by L_{v,w}(u) = ⟨u, w⟩v; the claim is that this is the desired map A. To see this we need to calculate Tr L_{v,w}, and we claim that the result is ⟨v, w⟩. Since L_{v,w} is sesquilinear in its indices (L_{av+bv′,w} = aL_{v,w} + bL_{v′,w}), it is enough to check this on elements of an orthonormal basis {e1, . . . , en} for H:

Tr L_{e_i,e_i} = Σ_{k=1}^n ⟨L_{e_i,e_i} e_k, e_k⟩ = Σ_{k=1}^n ⟨e_k, e_i⟩⟨e_i, e_k⟩ = 1,

while for i ≠ j

Tr L_{e_i,e_j} = Σ_{k=1}^n ⟨L_{e_i,e_j} e_k, e_k⟩ = Σ_{k=1}^n ⟨e_k, e_j⟩⟨e_i, e_k⟩ = 0.
Thus, $\operatorname{Tr} L_{v,w} = \langle v, w \rangle$. Finally, since
$$L_{v,w} \circ \pi(g)u = \langle \pi(g)u, w \rangle v = \langle u, \pi(g^{-1})w \rangle v = L_{v,\pi(g^{-1})w}\,u,$$
we see that
$$T_\pi(L_{v,w})(g) = \operatorname{Tr}(\pi(g) \circ L_{v,w}) = \operatorname{Tr}(L_{v,w} \circ \pi(g)) = \langle v, \pi(g^{-1})w \rangle = \langle \pi(g)v, w \rangle = m_{v,w}(g).$$
Conversely, we should show that any map $T_\pi(A)$ is a linear combination of matrix coefficients. Some linear algebraic manipulations should be enough to
convince the reader that for any $A \in \operatorname{End}(H)$ we have $A = \sum_{i,j=1}^n \langle Ae_j, e_i \rangle L_{e_i,e_j}$ w.r.t. some orthonormal basis $\{e_1, \dots, e_n\}$. But then we readily see
$$T_\pi(A)(g) = T_\pi\Big(\sum_{i,j=1}^n \langle Ae_j, e_i \rangle L_{e_i,e_j}\Big)(g) = \sum_{i,j=1}^n \langle Ae_j, e_i \rangle T_\pi(L_{e_i,e_j})(g) = \sum_{i,j=1}^n \langle Ae_j, e_i \rangle m_{e_i,e_j}(g).$$

Theorem 1.24 (Schur Orthogonality I). Let $(\pi_1, H_1)$ and $(\pi_2, H_2)$ be two unitary, irreducible finite-dimensional representations of a compact group $G$. If $\pi_1$ and $\pi_2$ are equivalent, then we have $C(G)_{\pi_1} = C(G)_{\pi_2}$. If they are not, then $C(G)_{\pi_1} \perp C(G)_{\pi_2}$ inside $L^2(G)$.

Before the proof, a few remarks on the integral of a vector-valued function are in order. Suppose that $f : G \longrightarrow H$ is a continuous function into a finite-dimensional Hilbert space. Choosing a basis $\{e_1, \dots, e_n\}$ for $H$ we can write $f$ in its components $f = \sum_{i=1}^n f_i e_i$, which are also continuous, and define
$$\int_G f(g)\,dg := \sum_{i=1}^n \Big(\int_G f_i(g)\,dg\Big) e_i.$$
It is a simple change-of-basis calculation to verify that this is independent of the basis in question. Furthermore, one readily verifies that it is left-invariant and satisfies
$$\Big\langle \int_G f(g)\,dg,\, v \Big\rangle = \int_G \langle f(g), v \rangle\,dg \qquad\text{and}\qquad A\int_G f(g)\,dg = \int_G Af(g)\,dg$$
when $A \in \operatorname{End}(H)$.

Proof of Theorem 1.24. If $\pi_1$ and $\pi_2$ are equivalent, there exists an isomorphism $T : H_1 \longrightarrow H_2$ such that $T\pi_1(g) = \pi_2(g)T$. For $A \in \operatorname{End}(H_1)$ we see that
$$T_{\pi_2}(TAT^{-1})(g) = \operatorname{Tr}(\pi_2(g)TAT^{-1}) = \operatorname{Tr}(T^{-1}\pi_2(g)TA) = \operatorname{Tr}(\pi_1(g)A) = T_{\pi_1}(A)(g).$$
Hence the map sending $T_{\pi_1}(A)$ to $T_{\pi_2}(TAT^{-1})$ is the identity $\operatorname{id} : C(G)_{\pi_1} \longrightarrow C(G)_{\pi_2}$, proving that the two spaces are equal.

Now we show the second claim. Define for fixed $w_1 \in H_1$ and $w_2 \in H_2$ the map $S_{w_1,w_2} : H_1 \longrightarrow H_2$ by
$$S_{w_1,w_2}(v) = \int_G \langle \pi_1(g)v, w_1 \rangle\, \pi_2(g^{-1})w_2\,dg.$$
$S_{w_1,w_2}$ lies in $\operatorname{Hom}_G(H_1, H_2)$, since by left-invariance
$$S_{w_1,w_2}\pi_1(h)(v) = \int_G \langle \pi_1(gh)v, w_1 \rangle\, \pi_2(g^{-1})w_2\,dg = \int_G \langle \pi_1(g)v, w_1 \rangle\, \pi_2(hg^{-1})w_2\,dg = \pi_2(h)\int_G \langle \pi_1(g)v, w_1 \rangle\, \pi_2(g^{-1})w_2\,dg = \pi_2(h)S_{w_1,w_2}(v).$$
Assume that we can find two matrix coefficients $m_{v_1,w_1}$ and $m_{v_2,w_2}$ for $\pi_1$ and $\pi_2$ that are not orthogonal, i.e.
we assume that
$$0 \neq \int_G m_{v_1,w_1}(g)\overline{m_{v_2,w_2}(g)}\,dg = \int_G \langle \pi_1(g)v_1, w_1 \rangle \overline{\langle \pi_2(g)v_2, w_2 \rangle}\,dg = \int_G \langle \pi_1(g)v_1, w_1 \rangle \langle \pi_2(g^{-1})w_2, v_2 \rangle\,dg.$$
From this we read off $\langle S_{w_1,w_2}v_1, v_2 \rangle \neq 0$, so that $S_{w_1,w_2} \neq 0$. Since it is an intertwiner, Schur's Lemma tells us that $S_{w_1,w_2}$ is an isomorphism. By contraposition, the second claim is proved.

In the case of two matrix coefficients for the same representation, we have the following result.

Theorem 1.25 (Schur Orthogonality II). Let $(\pi, H)$ be a unitary, finite-dimensional irreducible representation of a compact group $G$. For two matrix coefficients $m_{v_1,w_1}$ and $m_{v_2,w_2}$ we have
$$\langle m_{v_1,w_1}, m_{v_2,w_2} \rangle = \frac{1}{\dim H}\langle v_1, v_2 \rangle\langle w_2, w_1 \rangle. \qquad (1.6)$$

Proof. As in the proof of Theorem 1.24 define $S_{w_1,w_2} : H \longrightarrow H$ by
$$S_{w_1,w_2}(v) = \int_G \langle \pi(g)v, w_1 \rangle\, \pi(g^{-1})w_2\,dg = \int_G \pi(g^{-1})L_{w_2,w_1}\pi(g)v\,dg.$$
We see that
$$\langle m_{v_1,w_1}, m_{v_2,w_2} \rangle = \int_G \langle \pi(g)v_1, w_1 \rangle\overline{\langle \pi(g)v_2, w_2 \rangle}\,dg = \int_G \langle \pi(g)v_1, w_1 \rangle\langle \pi(g^{-1})w_2, v_2 \rangle\,dg = \int_G \big\langle \langle \pi(g)v_1, w_1 \rangle\, \pi(g^{-1})w_2,\, v_2 \big\rangle\,dg = \langle S_{w_1,w_2}v_1, v_2 \rangle.$$
Furthermore, since $S_{w_1,w_2}$ commutes with $\pi(g)$, Schur's Lemma yields a complex number $\lambda(w_1, w_2)$ such that $S_{w_1,w_2} = \lambda(w_1, w_2)\operatorname{id}_H$. The operator $S_{w_1,w_2}$ is linear in $w_2$ and anti-linear in $w_1$, hence $\lambda(w_1, w_2)$ is a sesquilinear form on $H$. We now take the trace on both sides of the equation $S_{w_1,w_2} = \lambda(w_1, w_2)\operatorname{id}_H$: the right-hand side is easy, it is just $\lambda(w_1, w_2)\dim H$. For the left-hand side we calculate
$$\operatorname{Tr} S_{w_1,w_2} = \int_G \operatorname{Tr}\big(\pi(g^{-1})L_{w_2,w_1}\pi(g)\big)\,dg = \int_G \operatorname{Tr} L_{w_2,w_1}\,dg = \langle w_2, w_1 \rangle.$$
That is, we get $\lambda(w_1, w_2) = (\dim H)^{-1}\langle w_2, w_1 \rangle$, and hence
$$S_{w_1,w_2} = (\dim H)^{-1}\langle w_2, w_1 \rangle\operatorname{id}_H.$$
By substituting this into the equation $\langle m_{v_1,w_1}, m_{v_2,w_2} \rangle = \langle S_{w_1,w_2}v_1, v_2 \rangle$ the desired result follows.

1.4 Characters

Definition 1.26 (Class Function). For a group $G$, a class function is a function on $G$ which is constant on conjugacy classes. The sets of square-integrable resp. continuous class functions on $G$ are denoted $L^2(G, \text{class})$ and $C(G, \text{class})$.

It is not hard to see that the closure of $C(G, \text{class})$ inside $L^2(G)$ is $L^2(G, \text{class})$. Thus, $L^2(G, \text{class})$ is a Hilbert space. Given an irreducible finite-dimensional representation, the set of continuous class functions inside $C(G)_\pi$ is very small:

Lemma 1.27.
Let $(\pi, H)$ be a finite-dimensional irreducible unitary representation of a compact group $G$; then the only class functions inside $C(G)_\pi$ are complex scalar multiples of $T_\pi(\operatorname{id}_H)$.
Proof. To formulate the requirement on a class function, consider the representation $\rho$ of $G$ on $C(G)$ given by $(\rho(g)f)(x) = f(g^{-1}xg)$; in terms of this, a function $f$ is a class function if and only if $\rho(g)f = f$ for all $g$. For reasons which will become clear shortly, we introduce another representation $\Pi$ of $G$ on $\operatorname{End}(H)$ by $\Pi(g)A = \pi(g)A\pi(g^{-1})$. Equipping $\operatorname{End}(H)$ with the inner product $\langle A, B \rangle := \operatorname{Tr}(B^*A)$, it is easy to see that $\Pi$ becomes unitary.

The linear map $T_\pi : \operatorname{End}(H) \longrightarrow C(G)_\pi$ which we introduced in Lemma 1.23 is an intertwiner of the representations $\Pi$ and $\rho$:
$$T_\pi(\Pi(g)A)(x) = \operatorname{Tr}\big(\pi(x)\pi(g)A\pi(g^{-1})\big) = \operatorname{Tr}(\pi(g^{-1}xg)A) = (\rho(g)T_\pi(A))(x).$$
$T_\pi$ was surjective by Lemma 1.23. To show injectivity we define $\widetilde T_\pi := \sqrt{\dim H}\,T_\pi$ and show that this is unitary. Since the linear maps $L_{v,w}$ span $\operatorname{End}(H)$, it is enough to show unitarity on these. But first we need some facts concerning $L_{v,w}$:
$$\langle L_{v,w}x, y \rangle = \langle \langle x, w \rangle v, y \rangle = \langle x, w \rangle\langle v, y \rangle = \langle x, \langle y, v \rangle w \rangle = \langle x, L_{w,v}y \rangle,$$
showing that $L_{v,w}^* = L_{w,v}$. Furthermore
$$L_{w',v'} \circ L_{v,w}x = L_{w',v'}(\langle x, w \rangle v) = \langle \langle x, w \rangle v, v' \rangle w' = \langle v, v' \rangle\langle x, w \rangle w' = \langle v, v' \rangle L_{w',w}x.$$
With the inner product on $\operatorname{End}(H)$ these results now yield
$$\langle L_{v,w}, L_{v',w'} \rangle = \operatorname{Tr}(L_{w',v'} \circ L_{v,w}) = \operatorname{Tr}(\langle v, v' \rangle L_{w',w}) = \langle v, v' \rangle\langle w', w \rangle.$$
Since $T_\pi(L_{v,w})(x) = m_{v,w}(x)$, Schur Orthogonality II gives
$$\langle \widetilde T_\pi(L_{v,w}), \widetilde T_\pi(L_{v',w'}) \rangle = \dim H\,\langle m_{v,w}, m_{v',w'} \rangle = \langle v, v' \rangle\langle w', w \rangle = \langle L_{v,w}, L_{v',w'} \rangle.$$
Thus $\widetilde T_\pi$ is unitary and in particular injective.

Now we come to the actual proof: let $\varphi \in C(G)_\pi$ be a class function. $\widetilde T_\pi$ is bijective, so there is a unique $A \in \operatorname{End}(H)$ for which $\varphi = \widetilde T_\pi(A)$. That $\widetilde T_\pi$ intertwines $\Pi$ and $\rho$ leads to
$$\varphi(g^{-1}xg) = (\rho(g)\varphi)(x) = (\rho(g)\widetilde T_\pi(A))(x) = \widetilde T_\pi(\Pi(g)A)(x) = \widetilde T_\pi(\pi(g)A\pi(g^{-1}))(x),$$
and since $\varphi$ was a class function we get that $\pi(g)A\pi(g^{-1}) = A$, i.e. $A$ intertwines $\pi$. But $\pi$ was irreducible, which by Schur's Lemma implies $A = \lambda\operatorname{id}_H$, and hence $\varphi$ is a scalar multiple of $T_\pi(\operatorname{id}_H)$.
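The identities for the rank-one operators $L_{v,w}$ used above lend themselves to a quick numerical sanity check. The sketch below is illustrative only (the helper names `inner` and `L` are mine); it uses numpy, where `np.vdot(y, x)` conjugates its first argument and therefore computes $\langle x, y \rangle$ in the convention of the text (conjugate-linear in the second slot):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
v, w, v2, w2 = (rng.standard_normal(n) + 1j * rng.standard_normal(n) for _ in range(4))

inner = lambda x, y: np.vdot(y, x)       # <x, y>, conjugate-linear in y
L = lambda a, b: np.outer(a, b.conj())   # matrix of u |-> <u, b> a

# Tr L_{v,w} = <v, w>  (Lemma 1.23)
assert np.isclose(np.trace(L(v, w)), inner(v, w))
# L_{v,w}^* = L_{w,v}
assert np.allclose(L(v, w).conj().T, L(w, v))
# L_{w2,v2} o L_{v,w} = <v, v2> L_{w2,w}
assert np.allclose(L(w2, v2) @ L(v, w), inner(v, v2) * L(w2, w))
# <L_{v,w}, L_{v2,w2}> = Tr(L_{v2,w2}^* L_{v,w}) = <v, v2><w2, w>
assert np.isclose(np.trace(L(w2, v2) @ L(v, w)), inner(v, v2) * inner(w2, w))
```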
In particular there exists a unique class function $\varphi_0$ which is positive on $e$ and which has $L^2$-norm 1: namely, writing $\varphi_0 = \widetilde T_\pi(A)$ with $A$ a scalar multiple of $\operatorname{id}_H$, we have
$$\|\varphi_0\|_2^2 = \|\widetilde T_\pi(A)\|_2^2 = \|A\|^2 = \operatorname{Tr}(A^*A),$$
so if $\varphi_0$ is to have norm 1 and be positive on $e$, then $A$ is forced to be $(\dim H)^{-1/2}\operatorname{id}_H$, so that $\varphi_0$ is given by $\varphi_0(g) = \operatorname{Tr}\pi(g)$. This is a function of particular interest:
Definition 1.28 (Character). Let $(\pi, V)$ be a finite-dimensional representation of a group $G$. By the character of $\pi$ we mean the function $\chi_\pi : G \longrightarrow \mathbb{C}$ given by $\chi_\pi(g) = \operatorname{Tr}\pi(g)$. If $\chi$ is the character of an irreducible representation, $\chi$ is called an irreducible character.

The character is a class function, and if two representations $\pi_1$ and $\pi_2$ are equivalent via the intertwiner $T$, i.e. $\pi_2(g) = T\pi_1(g)T^{-1}$, we have $\chi_{\pi_1} = \chi_{\pi_2}$. Thus, equivalent representations have the same character. Actually, the converse is also true; we show that at the end of the section.

Suppose that $G$ is a topological group and that $H$ is a Hilbert space with orthonormal basis $\{e_1, \dots, e_n\}$. Then we can calculate the trace as
$$\operatorname{Tr}\pi(g) = \sum_{i=1}^n \langle \pi(g)e_i, e_i \rangle,$$
which shows that $\chi_\pi \in C(G)_\pi$. In due course we will prove some powerful orthogonality relations for irreducible characters. But first we will see that the character behaves nicely with respect to the direct sum and tensor product operations on representations.

Proposition 1.29. Let $(\pi_1, V_1)$ and $(\pi_2, V_2)$ be two finite-dimensional representations of the group $G$. The characters of $\pi_1 \oplus \pi_2$ and $\pi_1 \otimes \pi_2$ are then given by
$$\chi_{\pi_1 \oplus \pi_2}(g) = \chi_{\pi_1}(g) + \chi_{\pi_2}(g) \qquad\text{and}\qquad \chi_{\pi_1 \otimes \pi_2}(g) = \chi_{\pi_1}(g)\chi_{\pi_2}(g). \qquad (1.7)$$

Proof. Equip $V_1$ and $V_2$ with inner products and pick orthonormal bases $(e_i)$ and $(f_j)$ for $V_1$ and $V_2$ respectively. Then the vectors $(e_i, 0), (0, f_j)$ form an orthonormal basis for $V_1 \oplus V_2$ w.r.t. the inner product $\langle (v_1, v_2), (w_1, w_2) \rangle := \langle v_1, w_1 \rangle + \langle v_2, w_2 \rangle$. Thus we see
$$\chi_{\pi_1 \oplus \pi_2}(g) = \operatorname{Tr}(\pi_1 \oplus \pi_2)(g) = \sum_{i=1}^m \big\langle (\pi_1 \oplus \pi_2)(g)(e_i, 0), (e_i, 0) \big\rangle + \sum_{j=1}^n \big\langle (\pi_1 \oplus \pi_2)(g)(0, f_j), (0, f_j) \big\rangle = \sum_{i=1}^m \langle \pi_1(g)e_i, e_i \rangle + \sum_{j=1}^n \langle \pi_2(g)f_j, f_j \rangle = \chi_{\pi_1}(g) + \chi_{\pi_2}(g).$$
Likewise, the vectors $e_i \otimes f_j$ constitute an orthonormal basis for $V_1 \otimes V_2$ w.r.t.
the inner product $\langle v_1 \otimes v_2, w_1 \otimes w_2 \rangle := \langle v_1, w_1 \rangle\langle v_2, w_2 \rangle$, and hence
$$\chi_{\pi_1 \otimes \pi_2}(g) = \operatorname{Tr}(\pi_1 \otimes \pi_2)(g) = \sum_{i,j=1}^{m,n} \big\langle (\pi_1 \otimes \pi_2)(g)(e_i \otimes f_j),\, e_i \otimes f_j \big\rangle = \sum_{i,j=1}^{m,n} \langle \pi_1(g)e_i, e_i \rangle\langle \pi_2(g)f_j, f_j \rangle = \sum_{i=1}^m \langle \pi_1(g)e_i, e_i \rangle \sum_{j=1}^n \langle \pi_2(g)f_j, f_j \rangle = \chi_{\pi_1}(g)\chi_{\pi_2}(g).$$
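For a concrete test of (1.7), one can take a copy of $S_3$ realized as the symmetry group of a triangle, acting on $\mathbb{R}^2$ by rotations and reflections, together with its 1-dimensional sign representation $g \mapsto \det g$. This matrix model is my own choice for illustration, not something fixed in the text:

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

flip = np.diag([1.0, -1.0])
# six elements: three rotations and three reflections of the triangle
G = [rot(2*np.pi*k/3) for k in range(3)] + [flip @ rot(2*np.pi*k/3) for k in range(3)]

def direct_sum(a, b):
    out = np.zeros((a.shape[0] + b.shape[0], a.shape[1] + b.shape[1]))
    out[:a.shape[0], :a.shape[1]] = a
    out[a.shape[0]:, a.shape[1]:] = b
    return out

for g in G:
    sgn = np.array([[np.linalg.det(g)]])   # the 1-dimensional sign representation
    # chi_{pi1 (+) pi2} = chi_{pi1} + chi_{pi2}
    assert np.isclose(np.trace(direct_sum(g, sgn)), np.trace(g) + np.trace(sgn))
    # chi_{pi1 (x) pi2} = chi_{pi1} * chi_{pi2}
    assert np.isclose(np.trace(np.kron(g, sgn)), np.trace(g) * np.trace(sgn))
```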
The following lemma, stating the promised orthogonality relations for characters, shows that the irreducible characters form an orthonormal set in $C(G)$. The Schur Orthogonality Relations are important ingredients in the proof; thus henceforth we need the groups to be compact.

Lemma 1.30. Let $(\pi_1, V_1)$ and $(\pi_2, V_2)$ be two finite-dimensional irreducible representations of a compact group $G$. Then the following hold:
1) $\pi_1 \cong \pi_2$ implies $\langle \chi_{\pi_1}, \chi_{\pi_2} \rangle = 1$.
2) $\pi_1 \not\cong \pi_2$ implies $\langle \chi_{\pi_1}, \chi_{\pi_2} \rangle = 0$.

Proof. In the first case, we have a bijective intertwiner $T : V_1 \longrightarrow V_2$. Choose an inner product on $V_1$ and an orthonormal basis $(e_i)$ for $V_1$. Define an inner product on $V_2$ by declaring $T$ to be unitary. Then $(Te_i)$ is an orthonormal basis for $V_2$. Let $n = \dim V_1 = \dim V_2$. The expressions $\chi_{\pi_1}(g) = \sum_{i=1}^n \langle \pi_1(g)e_i, e_i \rangle$ and $\chi_{\pi_2}(g) = \sum_{j=1}^n \langle \pi_2(g)Te_j, Te_j \rangle$ along with (1.6) yield
$$\langle \chi_{\pi_1}, \chi_{\pi_2} \rangle = \sum_{i,j=1}^n \int_G \langle \pi_1(g)e_i, e_i \rangle\overline{\langle \pi_2(g)Te_j, Te_j \rangle}\,dg = \sum_{i,j=1}^n \int_G \langle \pi_1(g)e_i, e_i \rangle\overline{\langle T\pi_1(g)e_j, Te_j \rangle}\,dg = \sum_{i,j=1}^n \int_G \langle \pi_1(g)e_i, e_i \rangle\overline{\langle \pi_1(g)e_j, e_j \rangle}\,dg = \frac{1}{n}\sum_{i,j=1}^n \langle e_i, e_j \rangle\langle e_j, e_i \rangle = \frac{1}{n}\sum_{i=1}^n 1 = 1.$$
In the second case, if $\pi_1$ and $\pi_2$ are non-equivalent then by Theorem 1.24 we have $C(G)_{\pi_1} \perp C(G)_{\pi_2}$. Since $\chi_{\pi_1} \in C(G)_{\pi_1}$ and $\chi_{\pi_2} \in C(G)_{\pi_2}$, the result follows.

This leads to the main result on characters:

Theorem 1.31. Let $\pi$ be a finite-dimensional representation of a compact group $G$. Then $\pi$ decomposes according to
$$\pi \cong \bigoplus_{\pi_i \in \widehat G} \langle \chi_\pi, \chi_{\pi_i} \rangle\,\pi_i.$$

Proof. Proposition 1.16 says that $\pi \cong \bigoplus m_i\pi_i$, where the $\pi_i$ are irreducible and $m_i$ is the number of times that $\pi_i$ occurs in $\pi$. From Proposition 1.29 it follows that $\chi_\pi = \sum_i m_i\chi_{\pi_i}$, and hence by Lemma 1.30, the orthonormality of the irreducible characters, that $m_i = \langle \chi_\pi, \chi_{\pi_i} \rangle$.

Example 1.32. A very simple example to illustrate this is the following. Consider the 2-dimensional representation $\pi$ of $\mathbb{T}$ given by
$$x \longmapsto \frac{1}{2}\begin{pmatrix} e^{2\pi inx} + e^{2\pi imx} & -e^{2\pi inx} + e^{2\pi imx} \\ -e^{2\pi inx} + e^{2\pi imx} & e^{2\pi inx} + e^{2\pi imx} \end{pmatrix}$$
for $n, m \in \mathbb{Z}$.
It is easily seen to be a continuous homomorphism $\mathbb{T} \longrightarrow \operatorname{Aut}(\mathbb{C}^2)$ with character $\chi_\pi(x) = e^{2\pi imx} + e^{2\pi inx}$. But the two terms are irreducible characters for $\mathbb{T}$, cf. Example 1.15, and by Theorem 1.31 we have $\pi \cong \rho_n \oplus \rho_m$.
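Theorem 1.31 can likewise be tried out on a finite group, where the Haar integral becomes the average over the group. A sketch (my own example, using the standard character table of $S_3$) decomposing the permutation representation of $S_3$ on $\mathbb{C}^3$, which should contain the trivial and the standard 2-dimensional irreducible once each, and the sign representation not at all:

```python
import numpy as np
from itertools import permutations

perms = list(permutations(range(3)))
# character of the permutation representation: number of fixed points
chi_perm = [sum(1 for i in range(3) if p[i] == i) for p in perms]

def chi_sign(p):
    # sign of a permutation via counting inversions
    inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
    return (-1) ** inv

chi_triv = [1 for p in perms]
chi_sgn = [chi_sign(p) for p in perms]
chi_std = [f - 1 for f in chi_perm]    # the standard 2-dimensional irreducible

def mult(chi, chi_irr):
    # <chi, chi_irr> with the Haar integral replaced by the group average
    return sum(a * np.conj(b) for a, b in zip(chi, chi_irr)) / len(perms)

assert np.isclose(mult(chi_perm, chi_triv), 1)
assert np.isclose(mult(chi_perm, chi_std), 1)
assert np.isclose(mult(chi_perm, chi_sgn), 0)
```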
Corollary 1.33. For finite-dimensional representations $\pi_1$, $\pi_2$ and $\pi$ of a compact group we have:
1) $\pi_1 \cong \pi_2$ if and only if $\chi_{\pi_1} = \chi_{\pi_2}$.
2) $\pi$ is irreducible if and only if $\langle \chi_\pi, \chi_\pi \rangle = 1$.

Proof. For the first statement, the only-if part is true by the remarks following the definition of the character. To see the converse, assume that $\chi_{\pi_1} = \chi_{\pi_2}$. Then for each irreducible representation $\rho$ we must have $\langle \chi_{\pi_1}, \chi_\rho \rangle = \langle \chi_{\pi_2}, \chi_\rho \rangle$, and therefore $\pi_1$ and $\pi_2$ are equivalent to the same direct sum of irreducible representations; hence they are equivalent.

If $\pi$ is irreducible then Lemma 1.30 states that $\langle \chi_\pi, \chi_\pi \rangle = 1$. Conversely, assume $\langle \chi_\pi, \chi_\pi \rangle = 1$ and decompose $\pi$ into irreducibles: $\pi \cong \bigoplus m_i\pi_i$. Orthonormality of the irreducible characters again gives $\langle \chi_\pi, \chi_\pi \rangle = \sum m_i^2$. From this it is immediate that there is precisely one $m_i$ which is 1, while the rest are 0, i.e. $\pi \cong \pi_i$. Therefore $\pi$ is irreducible.

Considering the representations $\rho$ and $\overline\rho$ from Example 1.11, we see that the corresponding characters satisfy $\chi_{\overline\rho} = \overline{\chi_\rho}$, and since $\chi_\rho$ is a genuinely complex map, the two characters are certainly not equal. Hence the representations are inequivalent.

1.5 The Peter-Weyl Theorem

The single most important theorem in the representation theory of compact topological groups is the Peter-Weyl Theorem. It has numerous consequences, some of which we will mention at the end of this section.

Theorem 1.34 (Peter-Weyl I). Let $G$ be a compact group. Then the subspace
$$\mathcal{M}(G) := \bigoplus_{\pi \in \widehat G} C(G)_\pi$$
of $C(G)$ is dense in $L^2(G)$. In other words, the linear span of all matrix coefficients of the finite-dimensional irreducible representations of $G$ is dense in $L^2(G)$.

Proof. We want to show that $\overline{\mathcal{M}(G)} = L^2(G)$. We prove it by contradiction and assume that $\mathcal{M}(G)^\perp \neq 0$.
Now, suppose that $\mathcal{M}(G)^\perp$ (which is a closed subspace of $L^2(G)$ and hence a Hilbert space itself) contains a finite-dimensional $R$-invariant subspace $W$ ($R$ being the right-regular representation) such that $R|_W$ is irreducible (we prove below that this is a consequence of the assumption $\mathcal{M}(G)^\perp \neq 0$). Then we can pick a finite orthonormal basis $(\varphi_i)$ for $W$, and then for $0 \neq f \in W$
$$f(x) = \sum_{i=1}^N \langle f, \varphi_i \rangle\varphi_i(x).$$
This is a standard result in Hilbert space theory. Then we see that
$$f(g) = (R|_W(g)f)(e) = \sum_{i=1}^N \langle R|_W(g)f, \varphi_i \rangle\varphi_i(e).$$
Since $R|_W$ is a finite-dimensional irreducible representation, the map $g \longmapsto \langle R|_W(g)f, \varphi_i \rangle$ is a matrix coefficient. But this means that $f \in \mathcal{M}(G)$, hence a contradiction.
Now, let us prove the existence of the finite-dimensional $R$-invariant subspace. Let $f_0 \in \mathcal{M}(G)^\perp$ be nonzero. As $C(G)$ is dense in $L^2(G)$ we can find a $\varphi \in C(G)$ such that $\langle \widehat\varphi, f_0 \rangle \neq 0$, where $\widehat\varphi(g) = \varphi(g^{-1})$. Define $K \in C(G \times G)$ by $K(x, y) = \varphi(xy^{-1})$ and let $T : L^2(G) \longrightarrow L^2(G)$ be the integral operator with $K$ as its kernel:
$$Tf(x) = \int_G K(x, y)f(y)\,dy.$$
According to functional analysis, this is a well-defined compact operator, and it commutes with $R(g)$:
$$T \circ R(g)f(x) = \int_G K(x, y)R(g)f(y)\,dy = \int_G \varphi(xy^{-1})f(yg)\,dy = \int_G \varphi(xgy^{-1})f(y)\,dy = \int_G K(xg, y)f(y)\,dy = R(g)(Tf)(x).$$
In the third equality we exploited the invariance of the measure under the right translation $y \longmapsto yg^{-1}$. Since $R(g)$ is unitary, the adjoint $T^*$ of $T$ also commutes with $R(g)$:
$$T^* \circ R(g) = T^* \circ R(g^{-1})^* = (R(g^{-1}) \circ T)^* = (T \circ R(g^{-1}))^* = R(g) \circ T^*.$$
Thus, the self-adjoint compact operator $T^*T$ commutes with $R(g)$. The Spectral Theorem for compact operators yields a direct sum decomposition of $L^2(G)$:
$$L^2(G) = \ker(T^*T) \oplus \bigoplus_{\lambda \neq 0} E_\lambda,$$
where all the eigenspaces $E_\lambda$ are finite-dimensional. They are also $R$-invariant, for if $f \in E_\lambda$ then
$$T^*T(R(g)f) = R(g)(T^*T)f = R(g)(\lambda f) = \lambda(R(g)f), \qquad (1.8)$$
i.e. $R(g)f \in E_\lambda$. Actually $\mathcal{M}(G)$ is $R$-invariant: all its functions are of the form $\sum_{i=1}^n a_i\langle \pi_i(x)\varphi_i, \psi_i \rangle$, and since
$$R(g)f(x) = f(xg) = \sum_{i=1}^n a_i\langle \pi_i(x)(\pi_i(g)\varphi_i), \psi_i \rangle,$$
we see that $R(g)f \in \mathcal{M}(G)$. But then $\mathcal{M}(G)^\perp$ is invariant as well. If $P : L^2(G) \longrightarrow \mathcal{M}(G)^\perp$ denotes the orthogonal projection, then by Lemma 1.8, $P$ commutes with $R(g)$, and a calculation like (1.8) reveals that the $PE_\lambda$ are all $R$-invariant subspaces of $\mathcal{M}(G)^\perp$. These are very good candidates for the subspace we want: they are finite-dimensional and $R$-invariant, so we can restrict $R$ to a representation on them. We just need to verify that at least one of them is nonzero. So assume that the $PE_\lambda$ are all 0.
This means by definition of $P$ that $\bigoplus_\lambda E_\lambda \subseteq \overline{\mathcal{M}(G)}$ and hence that $\mathcal{M}(G)^\perp \subseteq (\bigoplus_\lambda E_\lambda)^\perp = \ker T^*T \subseteq \ker T$, where the last inclusion follows since $f \in \ker T^*T$ implies $0 = \langle T^*Tf, f \rangle = \langle Tf, Tf \rangle$, i.e. $Tf = 0$. But applied to the $f_0 \in \mathcal{M}(G)^\perp$ we picked at the beginning, we have
$$Tf_0(e) = \int_G \varphi(ey^{-1})f_0(y)\,dy = \int_G \widehat\varphi(y)f_0(y)\,dy = \langle \widehat\varphi, f_0 \rangle \neq 0,$$
and as $Tf_0$ is continuous, $Tf_0 \neq 0$ as an $L^2$ function. Thus, we must have at least one $\lambda$ for which $PE_\lambda \neq 0$. If $R$ restricted to this space is not irreducible, it contains a nontrivial subspace on which it is. Thus, we have proved the result.
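For a finite group the Haar integral is the average over the group and $\mathcal{M}(G)$ is all of $L^2(G)$, so both Theorem 1.34 and Schur Orthogonality I can be checked exactly. A sketch, realizing $S_3$ as the symmetry group of a triangle (this concrete matrix model is my own choice): the matrix coefficients of the three irreducibles — trivial, sign $= \det$, and the standard 2-dimensional one — give $1 + 1 + 4 = 6 = |G|$ functions on $G$, which span $\mathbb{C}^G$, and the coefficient spaces of inequivalent irreducibles are mutually orthogonal:

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

flip = np.diag([1.0, -1.0])
G = [rot(2*np.pi*k/3) for k in range(3)] + [flip @ rot(2*np.pi*k/3) for k in range(3)]

triv = np.ones(len(G))
sign = np.array([np.linalg.det(g) for g in G])
std = {(i, j): np.array([g[i, j] for g in G]) for i in range(2) for j in range(2)}

# 6 matrix coefficients on a 6-element group: they span all of L^2(G)
coeffs = np.vstack([triv, sign] + list(std.values()))
assert np.linalg.matrix_rank(coeffs) == 6

avg = lambda f, h: np.mean(f * np.conj(h))   # L^2(G) inner product (group average)
assert np.isclose(avg(triv, sign), 0)        # C(G)_triv  is orthogonal to C(G)_sign
for m in std.values():
    assert np.isclose(avg(triv, m), 0) and np.isclose(avg(sign, m), 0)
```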
What we have actually shown in the course of the proof is that for each nonzero $f$ we can find a finite-dimensional subspace $U \subseteq L^2(G)$ which is $R$-invariant and, restricted to which, $R$ is irreducible. We can show exactly the same thing for the left regular representation $L$; all we need to alter is the definition of $K$, which should be $K(x, y) = \varphi(x^{-1}y)$. This observation will come in useful now, when we prove the promised generalization of Corollary 1.21:

Theorem 1.35 (Peter-Weyl II). Let $(\pi, H)$ be any (possibly infinite-dimensional) representation of a compact group $G$ on a Hilbert space $H$. Then $\pi \cong \bigoplus \pi_i$ where each $\pi_i$ is a finite-dimensional irreducible representation of $G$, i.e. $\pi$ is completely reducible.

Proof. By virtue of Theorem 1.20 we can choose a new inner product on $H$ turning $\pi$ into a unitary representation.

Then we consider the set $\Sigma$ of collections of finite-dimensional invariant subspaces of $H$ restricted to which $\pi$ is irreducible, i.e. an element $(U_i)_{i \in I}$ of $\Sigma$ is a collection of subspaces of $H$ satisfying the mentioned properties. We equip $\Sigma$ with the ordering $\subseteq$ defined by $(U_i)_{i \in I} \subseteq (U_j)_{j \in J}$ if $\bigoplus_i U_i \subseteq \bigoplus_j U_j$. It is easily seen that $(\Sigma, \subseteq)$ is inductively ordered, hence Zorn's Lemma yields a maximal element $(V_i)_{i \in I}$. To show the desired conclusion, namely that
$$H = \bigoplus_{i \in I} V_i,$$
we assume that $W := (\bigoplus V_i)^\perp \neq 0$. We have a contradiction if we can find inside $W$ a finite-dimensional $\pi$-invariant subspace on which $\pi$ is irreducible, so that is our goal.

First we remark that $W$ is $\pi$-invariant since it is the orthogonal complement of an invariant subspace; thus we can restrict $\pi$ to a representation on $W$. Now we will define an intertwiner $T : W \longrightarrow L^2(G)$ between $\pi|_W$ and the left regular representation $L$. Fix a unit vector $x_0 \in W$ and define
$$(Ty)(g) = \langle y, \pi(g)x_0 \rangle.$$
$Ty : G \longrightarrow \mathbb{C}$ is clearly continuous, and since $Tx_0(e) = \|x_0\|^2 = 1 \neq 0$, $Tx_0$ is nonzero in $L^2(G)$; hence $T$ is nonzero as a linear map.
$T$ is continuous, as the Cauchy-Schwarz inequality and unitarity of $\pi(g)$ give
$$|Ty(g)| = |\langle y, \pi(g)x_0 \rangle| \leq \|y\|\|x_0\|,$$
that is, $\|T\| \leq \|x_0\|$. $T$ is an intertwiner:
$$(T \circ \pi(h))y(g) = \langle \pi(h)y, \pi(g)x_0 \rangle = \langle y, \pi(h^{-1}g)x_0 \rangle = (L(h) \circ T)y(g).$$
The adjoint $T^* : L^2(G) \longrightarrow W$ (which is nonzero, as $T$ is) is an intertwiner as well, for taking the adjoint of the above equation yields $\pi(h)^* \circ T^* = T^* \circ L(h)^*$ for all $h$. Using unitarity we get $\pi(h^{-1}) \circ T^* = T^* \circ L(h^{-1})$, i.e. $T^*$ is also an intertwiner.

As $T^*$ is nonzero, there is an $f_0 \in L^2(G)$ such that $T^*f_0 \neq 0$. But by the remark following the proof of the first Peter-Weyl Theorem we can find a nontrivial finite-dimensional $L$-invariant subspace $U \subseteq L^2(G)$ containing $f_0$. Then $T^*U \subseteq W$ is finite-dimensional, nontrivial (it contains $T^*f_0$) and $\pi$-invariant, for if $T^*f \in T^*U$, then $\pi(h) \circ T^*f = T^* \circ L(h)f \in T^*U$. Inside $T^*U$ we can now find a subspace on which $\pi$ is irreducible, hence the contradiction.

An immediate corollary of this is:

Corollary 1.36. An irreducible representation of a compact group is automatically finite-dimensional.
In particular, the second Peter-Weyl Theorem says that the left regular representation is completely reducible. In many textbooks this is the statement of the Peter-Weyl Theorem. Its proof is not much different from the proof we gave for the first version, and from it one could also derive our second version. I chose the version with matrix coefficients since it can be used immediately to provide elegant proofs of some results in Fourier theory, which we now discuss.

Theorem 1.37. Let $G$ be a compact group. The irreducible characters constitute an orthonormal basis for the Hilbert space $L^2(G, \text{class})$. In particular, every square-integrable class function $f$ on $G$ can be written
$$f = \sum_{\pi \in \widehat G} \langle f, \chi_\pi \rangle\chi_\pi,$$
the convergence being $L^2$-convergence.

Proof. Let $P_\pi : L^2(G) \longrightarrow C(G)_\pi$ denote the orthogonal projection onto $C(G)_\pi$. It is not hard to see that $P_\pi$ maps class functions to class functions, hence $P_\pi(L^2(G, \text{class})) \subseteq C(G)_\pi \cap C(G, \text{class})$, the last space being the 1-dimensional $\mathbb{C}\chi_\pi$ by Lemma 1.27. Hence the space
$$\mathcal{M}(G, \text{class}) := \mathcal{M}(G) \cap C(G, \text{class}) = \bigoplus_{\pi \in \widehat G} C(G)_\pi \cap C(G, \text{class})$$
has as orthonormal basis the set of irreducible characters of $G$. To see that the characters also form an orthonormal basis for the Hilbert space $L^2(G, \text{class})$, assume that there exists an $f \in L^2(G, \text{class})$ which is orthogonal to all the characters. Then, since $P_\pi f$ is just a scalar multiple of $\chi_\pi$, we see
$$P_\pi f = \langle P_\pi f, \chi_\pi \rangle\chi_\pi = \langle f, \chi_\pi \rangle\chi_\pi = 0,$$
where in the second equality we exploited self-adjointness of the projection $P_\pi$. Thus we must have $f \in \mathcal{M}(G)^\perp$, which by Peter-Weyl I implies $f = 0$.

Specializing to the circle group $\mathbb{T}$ yields the existence of Fourier series. First of all, since $\mathbb{T}$ is abelian, all functions defined on it are class functions, and functions on $\mathbb{T}$ are nothing but functions on $\mathbb{R}$ with period 1.
Specializing the above theorem to this case then says that the irreducible characters $e^{2\pi inx}$ constitute an orthonormal basis for $L^2(\mathbb{T})$ and that we have an expansion of any square-integrable function
$$f = \sum_{n \in \mathbb{Z}} c_n(f)e^{2\pi inx}, \qquad (1.9)$$
where $c_n(f)$ is the $n$'th Fourier coefficient
$$c_n(f) = \langle f, \rho_n \rangle = \int_0^1 f(x)e^{-2\pi inx}\,dx.$$
It is important to stress that the convergence in (1.9) is only $L^2$-convergence. If we put some restrictions on $f$, such as differentiability or continuous differentiability, we can achieve pointwise or uniform convergence of the series. We will not travel further into this realm of harmonic analysis.
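For a trigonometric polynomial the Fourier coefficients can be recovered exactly by a Riemann sum over equispaced points, which makes for a convenient numerical illustration of (1.9); the sample function below is of course an arbitrary choice:

```python
import numpy as np

# Recover c_n(f) = \int_0^1 f(x) e^{-2 pi i n x} dx by averaging over a uniform grid;
# for a trigonometric polynomial of low degree this quadrature is exact.
N = 2048
x = np.arange(N) / N
f = 3 * np.exp(2j*np.pi*2*x) - 1j * np.exp(-2j*np.pi*5*x)   # c_2 = 3, c_{-5} = -1j

def c(n):
    return np.mean(f * np.exp(-2j*np.pi*n*x))

assert np.isclose(c(2), 3)
assert np.isclose(c(-5), -1j)
assert np.isclose(c(0), 0, atol=1e-10)   # all other coefficients vanish
```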
Chapter 2
Structure Theory for Lie Algebras

2.1 Basic Notions

Although we succeeded in Chapter 1 in proving some fairly strong results, we must realize that there is a limit to how much we can say about topological groups, compact or not. For instance, the Peter-Weyl Theorem tells us that every representation of a compact group is completely reducible, but if we don't know the irreducible representations, then what's the use? Therefore we change our focus to Lie groups. The central difference when regarding Lie groups is of course that we have their Lie algebras at our disposal. Often these are much easier to handle than the groups themselves, while at the same time saying quite a lot about the group. Therefore we need to study Lie algebras and their representation theory. In this section we focus solely on Lie algebras, developing the tools necessary for the representation theory of the later chapters.

We will only consider Lie algebras over the fields $\mathbb{R}$ and $\mathbb{C}$ (commonly denoted $\mathbb{K}$), although many of the results in this chapter carry over to arbitrary (possibly algebraically closed) fields of characteristic 0.

Definition 2.1 (Lie Algebra). A Lie algebra $\mathfrak{g}$ over $\mathbb{K}$ is a $\mathbb{K}$-vector space $\mathfrak{g}$ equipped with a bilinear map $[\,\cdot\,,\,\cdot\,] : \mathfrak{g} \times \mathfrak{g} \longrightarrow \mathfrak{g}$ satisfying
1) $[X, Y] = -[Y, X]$ (antisymmetry),
2) $[[X, Y], Z] + [[Y, Z], X] + [[Z, X], Y] = 0$ (Jacobi identity).
A Lie subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ is a subspace of $\mathfrak{g}$ which is closed under the bracket, i.e. for which $[\mathfrak{h}, \mathfrak{h}] \subseteq \mathfrak{h}$. A Lie subalgebra $\mathfrak{h}$ for which $[\mathfrak{h}, \mathfrak{g}] \subseteq \mathfrak{h}$ is called an ideal.

In this thesis all Lie algebras will be finite-dimensional unless otherwise specified.

Example 2.2. The first examples of Lie algebras are algebras of matrices. By $\mathfrak{gl}(n, \mathbb{R})$ and $\mathfrak{gl}(n, \mathbb{C})$ we denote the sets of real resp. complex $n \times n$ matrices equipped with the commutator bracket. It is trivial to verify that these are indeed Lie algebras. The list below contains the definitions of some of the classical Lie algebras.
They are all subalgebras of the two Lie algebras just mentioned. It is a matter of routine calculations to verify that these examples are indeed
closed under the commutator bracket.
$$\begin{aligned}
\mathfrak{sl}(n, \mathbb{R}) &= \{X \in \mathfrak{gl}(n, \mathbb{R}) \mid \operatorname{Tr} X = 0\} \\
\mathfrak{sl}(n, \mathbb{C}) &= \{X \in \mathfrak{gl}(n, \mathbb{C}) \mid \operatorname{Tr} X = 0\} \\
\mathfrak{so}(n) &= \{X \in \mathfrak{gl}(n, \mathbb{R}) \mid X + X^t = 0\} \\
\mathfrak{so}(m, n) &= \{X \in \mathfrak{gl}(m + n, \mathbb{R}) \mid X^tI_{m,n} + I_{m,n}X = 0\} \\
\mathfrak{so}(n, \mathbb{C}) &= \{X \in \mathfrak{gl}(n, \mathbb{C}) \mid X + X^t = 0\} \\
\mathfrak{u}(n) &= \{X \in \mathfrak{gl}(n, \mathbb{C}) \mid X + X^* = 0\} \\
\mathfrak{u}(m, n) &= \{X \in \mathfrak{gl}(m + n, \mathbb{C}) \mid X^*I_{m,n} + I_{m,n}X = 0\} \\
\mathfrak{su}(n) &= \{X \in \mathfrak{gl}(n, \mathbb{C}) \mid X + X^* = 0,\ \operatorname{Tr} X = 0\} \\
\mathfrak{su}(m, n) &= \{X \in \mathfrak{gl}(m + n, \mathbb{C}) \mid X^*I_{m,n} + I_{m,n}X = 0,\ \operatorname{Tr} X = 0\}
\end{aligned}$$
Here $I_{m,n}$ is the block-diagonal matrix whose first $m \times m$ block is the identity and whose last $n \times n$ block is minus the identity.

Another interesting example is the endomorphism algebra $\operatorname{End}_{\mathbb{K}}(V)$ for some $\mathbb{K}$-vector space $V$, finite-dimensional or not. Equipped with the commutator bracket $[A, B] = AB - BA$ this becomes a Lie algebra over $\mathbb{K}$, as one can check. To emphasize the Lie algebra structure it is sometimes denoted $\mathfrak{gl}(V)$; we stick to $\operatorname{End}(V)$.

We always have the trivial ideals in $\mathfrak{g}$, namely $0$ and $\mathfrak{g}$ itself. If $\mathfrak{g}$ is a Lie algebra and $\mathfrak{h}$ is an ideal in $\mathfrak{g}$, then we can form the quotient algebra $\mathfrak{g}/\mathfrak{h}$ in the following way: the underlying vector space is the vector space $\mathfrak{g}/\mathfrak{h}$, which we equip with the bracket $[X + \mathfrak{h}, Y + \mathfrak{h}] = [X, Y] + \mathfrak{h}$. Using the ideal property it is easily checked that this is indeed well-defined and satisfies the properties of a Lie algebra.

Definition 2.3 (Lie Algebra Homomorphism). Let $\mathfrak{g}$ and $\mathfrak{g}'$ be Lie algebras over $\mathbb{K}$. A $\mathbb{K}$-linear map $\varphi : \mathfrak{g} \longrightarrow \mathfrak{g}'$ is called a Lie algebra homomorphism if it satisfies $[\varphi(X), \varphi(Y)] = \varphi[X, Y]$ for all $X, Y \in \mathfrak{g}$. If $\varphi$ is bijective, it is called a Lie algebra isomorphism.

An example of a Lie algebra homomorphism is the canonical map $\kappa : \mathfrak{g} \longrightarrow \mathfrak{g}/\mathfrak{h}$ mapping $X$ to $X + \mathfrak{h}$. It is easy to see that the image of a Lie algebra homomorphism is a Lie subalgebra of $\mathfrak{g}'$ and that the kernel of a homomorphism is an ideal in $\mathfrak{g}$. Another interesting example is the so-called adjoint representation $\operatorname{ad} : \mathfrak{g} \longrightarrow \operatorname{End}(\mathfrak{g})$ given by $\operatorname{ad}(X)Y = [X, Y]$.
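The "routine calculations" can also be outsourced to the computer: for random elements one checks that the defining conditions survive the bracket, and that the Jacobi identity holds. A minimal sketch for $\mathfrak{su}(3)$ (the helper names are mine; note that the trace of a skew-Hermitian matrix is purely imaginary, so subtracting the trace part preserves skew-Hermiticity):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

def random_su(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    x = a - a.conj().T                        # skew-Hermitian
    return x - np.trace(x) / n * np.eye(n)    # project out the (imaginary) trace

def bracket(x, y):
    return x @ y - y @ x

X, Y = random_su(n), random_su(n)
Z = bracket(X, Y)
assert np.allclose(Z + Z.conj().T, 0)         # still skew-Hermitian
assert np.isclose(np.trace(Z), 0)             # still traceless

# Jacobi identity for the commutator bracket on gl(n, R)
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
assert np.allclose(bracket(bracket(A, B), C)
                   + bracket(bracket(B, C), A)
                   + bracket(bracket(C, A), B), 0)
```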
We see that $\operatorname{ad}(X)$ is linear, hence an endomorphism, and that the map $X \longmapsto \operatorname{ad}(X)$ is linear. By virtue of the Jacobi identity it respects the bracket operation, and thus $\operatorname{ad}$ is a Lie algebra homomorphism.

In analogy with vector spaces and rings we have the following

Proposition 2.4. Let $\varphi : \mathfrak{g} \longrightarrow \mathfrak{g}'$ be a Lie algebra homomorphism and $\mathfrak{h} \subseteq \mathfrak{g}$ an ideal which contains $\ker\varphi$; then there exists a unique Lie algebra homomorphism $\overline\varphi : \mathfrak{g}/\mathfrak{h} \longrightarrow \mathfrak{g}'$ such that $\varphi = \overline\varphi \circ \kappa$. In the case that $\mathfrak{h} = \ker\varphi$ and $\mathfrak{g}' = \operatorname{im}\varphi$, the induced map is an isomorphism. If $\mathfrak{h}$ and $\mathfrak{k}$ are ideals in $\mathfrak{g}$, then there exists a natural isomorphism $(\mathfrak{h} + \mathfrak{k})/\mathfrak{k} \overset{\sim}{\longrightarrow} \mathfrak{h}/(\mathfrak{h} \cap \mathfrak{k})$.

Definition 2.5 (Centralizer). For any element $X \in \mathfrak{g}$ we define the centralizer $C(X)$ of $X$ to be the set of elements in $\mathfrak{g}$ which commute with $X$. Let $\mathfrak{h}$ be any subalgebra of $\mathfrak{g}$. The centralizer $C(\mathfrak{h})$ of $\mathfrak{h}$ is the set of all elements of
$\mathfrak{g}$ that commute with all elements of $\mathfrak{h}$. The centralizer of $\mathfrak{g}$ is called the center and is denoted $Z(\mathfrak{g})$. For a subalgebra $\mathfrak{h}$ of $\mathfrak{g}$ we define the normalizer $N(\mathfrak{h})$ of $\mathfrak{h}$ to be the set of all elements $X \in \mathfrak{g}$ for which $[X, \mathfrak{h}] \subseteq \mathfrak{h}$.

We immediately see that the centralizer of $X$ is just $\ker\operatorname{ad}(X)$; by the Jacobi identity it is a subalgebra of $\mathfrak{g}$. Furthermore we see that $C(\mathfrak{h}) = \bigcap_{X \in \mathfrak{h}} C(X)$ and that $Z(\mathfrak{g}) = \ker\operatorname{ad}$; hence the center, being the kernel of a Lie algebra homomorphism, is an ideal. Finally, a subalgebra of $\mathfrak{g}$ is an ideal if and only if its normalizer is $\mathfrak{g}$.

Now consider the so-called derived algebra $D\mathfrak{g} := [\mathfrak{g}, \mathfrak{g}]$, which is clearly an ideal. $\mathfrak{g}$ is called abelian if $D\mathfrak{g} = 0$, i.e. if $[X, Y] = 0$ for all $X, Y \in \mathfrak{g}$. Every 1-dimensional Lie algebra is abelian by antisymmetry of the bracket.

Definition 2.6 (Simple Lie Algebra). A nontrivial Lie algebra is called indecomposable if its only ideals are the trivial ones: $\mathfrak{g}$ and $0$. A nontrivial Lie algebra is called simple if it is indecomposable and $D\mathfrak{g} \neq 0$.

Any 1-dimensional Lie algebra is indecomposable, and as the next proposition shows, the requirement $D\mathfrak{g} \neq 0$ serves only to rule out these trivial examples:

Proposition 2.7. A Lie algebra is simple if and only if it is indecomposable and $\dim\mathfrak{g} \geq 2$.

Proof. If $\mathfrak{g}$ is simple then it is not abelian, hence we must have $\dim\mathfrak{g} \geq 2$. Conversely, assume that $\mathfrak{g}$ is indecomposable and $\dim\mathfrak{g} \geq 2$. As $D\mathfrak{g}$ is an ideal we can only have $D\mathfrak{g} = 0$ or $D\mathfrak{g} = \mathfrak{g}$. In the first case $\mathfrak{g}$ is abelian, so all subspaces are ideals, and since $\dim\mathfrak{g} \geq 2$, nontrivial ideals exist, contradicting indecomposability. Therefore $D\mathfrak{g} = \mathfrak{g} \neq 0$.

Now let us consider the following sequence of ideals,
$$D^1\mathfrak{g} := D\mathfrak{g},\quad D^2\mathfrak{g} := [D\mathfrak{g}, D\mathfrak{g}],\quad \dots,\quad D^n\mathfrak{g} := [D^{n-1}\mathfrak{g}, D^{n-1}\mathfrak{g}],$$
the so-called derived series. Obviously we have $D^{m+n}\mathfrak{g} = D^m(D^n\mathfrak{g})$. To see that these are really ideals we use induction: we have already seen that $D^1\mathfrak{g}$ is an ideal, so assume that $D^{n-1}\mathfrak{g}$ is an ideal. Let $X, X' \in D^{n-1}\mathfrak{g}$ and let $Y \in \mathfrak{g}$ be arbitrary. Then by the Jacobi identity
$$[[X, X'], Y] = -[[X', Y], X] - [[Y, X], X'].$$
Since $D^{n-1}\mathfrak{g}$ is an ideal, $[X', Y], [Y, X] \in D^{n-1}\mathfrak{g}$, showing that $[[X, X'], Y] \in D^n\mathfrak{g}$.

Definition 2.8 (Solvable Lie Algebra). A Lie algebra is called solvable if there exists an $N$ such that $D^N\mathfrak{g} = 0$.

Abelian Lie algebras are solvable, since we can take $N = 1$. On the other hand, simple Lie algebras are definitely not solvable, for we showed in the proof of Proposition 2.7 that $D\mathfrak{g} = \mathfrak{g}$, which implies that $D^n\mathfrak{g} = \mathfrak{g}$ for all $n$.

Proposition 2.9. Let $\mathfrak{g}$ be a Lie algebra.
1) If $\mathfrak{g}$ is solvable, then so are all subalgebras of $\mathfrak{g}$.
2) If $\mathfrak{g}$ is solvable and $\varphi : \mathfrak{g} \longrightarrow \mathfrak{g}'$ is a Lie algebra homomorphism, then $\operatorname{im}\varphi$ is solvable.
3) If $\mathfrak{h} \subseteq \mathfrak{g}$ is a solvable ideal such that $\mathfrak{g}/\mathfrak{h}$ is solvable, then $\mathfrak{g}$ is solvable.
4) If $\mathfrak{h}$ and $\mathfrak{k}$ are solvable ideals of $\mathfrak{g}$, then so is $\mathfrak{h} + \mathfrak{k}$.

Proof. 1) It should be clear that $D\mathfrak{h} \subseteq D\mathfrak{g}$. Hence, by induction, $D^i\mathfrak{h} \subseteq D^i\mathfrak{g}$, and since $D^N\mathfrak{g} = 0$ for some $N$, we have $D^N\mathfrak{h} = 0$ as well.
2) Since $\varphi$ is a Lie algebra homomorphism, we have $D(\varphi(\mathfrak{g})) = \varphi(D\mathfrak{g})$, and again by induction $D^i(\varphi(\mathfrak{g})) = \varphi(D^i\mathfrak{g})$. Thus, $D^N\mathfrak{g} = 0$ implies $D^N(\varphi(\mathfrak{g})) = 0$.
3) Assume there is an $N$ for which $D^N(\mathfrak{g}/\mathfrak{h}) = 0$ and consider the canonical map $\kappa : \mathfrak{g} \longrightarrow \mathfrak{g}/\mathfrak{h}$. As above, we have $D^i(\mathfrak{g}/\mathfrak{h}) = D^i(\kappa(\mathfrak{g})) = \kappa(D^i\mathfrak{g})$. Thus, since $D^N(\mathfrak{g}/\mathfrak{h}) = 0$, we have $\kappa(D^N\mathfrak{g}) = 0$, i.e. $D^N\mathfrak{g} \subseteq \mathfrak{h}$. But $\mathfrak{h}$ was also solvable, so we can find an $M$ for which $D^M\mathfrak{h} = 0$. Then
$$D^{M+N}\mathfrak{g} = D^M(D^N\mathfrak{g}) \subseteq D^M\mathfrak{h} = 0,$$
i.e. $\mathfrak{g}$ is solvable.
4) By 3) of this proposition it is enough to prove that $(\mathfrak{h} + \mathfrak{k})/\mathfrak{k}$ is solvable. By Proposition 2.4 there exists an isomorphism $(\mathfrak{h} + \mathfrak{k})/\mathfrak{k} \overset{\sim}{\longrightarrow} \mathfrak{h}/(\mathfrak{h} \cap \mathfrak{k})$, and the right-hand side is solvable since it is the image of the canonical map $\mathfrak{h} \longrightarrow \mathfrak{h}/(\mathfrak{h} \cap \mathfrak{k})$.

The last point of this proposition yields the existence of a maximal solvable ideal in $\mathfrak{g}$: if $\mathfrak{h}$ and $\mathfrak{k}$ are solvable ideals, then $\mathfrak{h} + \mathfrak{k}$ is a solvable ideal containing both. Thus the sum of all solvable ideals is a solvable ideal. This works since the Lie algebra is finite-dimensional. By construction, it is unique.

Definition 2.10 (Radical). The maximal solvable ideal, the existence of which we have just verified, is called the radical of $\mathfrak{g}$ and is denoted $\operatorname{Rad}\mathfrak{g}$. A Lie algebra $\mathfrak{g}$ is called semisimple if $\operatorname{Rad}\mathfrak{g} = 0$.

Since all solvable ideals are contained in $\operatorname{Rad}\mathfrak{g}$, another way of formulating semisimplicity is to say that $\mathfrak{g}$ has no nonzero solvable ideals. In this sense, semisimple Lie algebras are as far as possible from being solvable. In the next section we prove some equivalent conditions for semisimplicity.

Proposition 2.11. Semisimple Lie algebras have trivial centers.

Proof. The center is an abelian, hence solvable, ideal, and is therefore trivial by definition.
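The derived series can be computed numerically for a concrete matrix Lie algebra, representing a subspace of matrices by an orthonormal basis of their flattenings and using an SVD to extract the span of all brackets. A sketch (my own construction) for the solvable algebra $\mathfrak{b}$ of upper-triangular $3 \times 3$ matrices, whose derived series $\mathfrak{b} \supseteq D\mathfrak{b} \supseteq D^2\mathfrak{b} \supseteq D^3\mathfrak{b} = 0$ has dimensions $6, 3, 1, 0$:

```python
import numpy as np
from itertools import product

def span_basis(mats, tol=1e-10):
    # orthonormal basis (as rows) of the span of the given matrices, flattened
    if not mats:
        return np.zeros((0, 9))
    M = np.array([m.flatten() for m in mats])
    _, s, vh = np.linalg.svd(M)
    return vh[: int((s > tol).sum())]

def derived(basis_rows):
    # span of all brackets [X, Y] of basis elements, i.e. the derived algebra
    mats = [b.reshape(3, 3) for b in basis_rows]
    return span_basis([x @ y - y @ x for x, y in product(mats, mats)])

# b = upper-triangular 3x3 matrices, spanned by E_ij with i <= j
E = lambda i, j: np.eye(3)[:, [i]] @ np.eye(3)[[j], :]
b = span_basis([E(i, j) for i in range(3) for j in range(i, 3)])

dims = [b.shape[0]]
while b.shape[0] > 0:
    b = derived(b)
    dims.append(b.shape[0])

# Db is the strictly upper-triangular matrices, D^2 b is spanned by E_13
assert dims == [6, 3, 1, 0]
```

The strict decrease of the dimensions until 0 is reached is exactly solvability; note that the loop as written would not terminate for a non-solvable algebra such as $\mathfrak{sl}(3, \mathbb{R})$, where the derived series stabilizes at a nonzero algebra.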
We now consider a concept closely related to solvability. Again we consider a sequence of ideals:
$$\mathfrak{g}^0 := \mathfrak{g},\quad \mathfrak{g}^1 := D\mathfrak{g},\quad \mathfrak{g}^2 := [\mathfrak{g}, \mathfrak{g}^1],\quad \dots,\quad \mathfrak{g}^n := [\mathfrak{g}, \mathfrak{g}^{n-1}].$$
It should not be too hard to see that $D^i\mathfrak{g} \subseteq \mathfrak{g}^i$.

Definition 2.12 (Nilpotent Lie Algebra). A Lie algebra $\mathfrak{g}$ is called nilpotent if there exists an $N$ such that $\mathfrak{g}^N = 0$.

Since $D^i\mathfrak{g} \subseteq \mathfrak{g}^i$, nilpotency of $\mathfrak{g}$ implies solvability of $\mathfrak{g}$. The converse statement is not true in general. So, schematically,
$$\text{abelian} \Rightarrow \text{nilpotent} \Rightarrow \text{solvable};$$
in other words, solvability and nilpotency are in some sense generalizations of being abelian. Here is a proposition analogous to Proposition 2.9.

Proposition 2.13. Let $\mathfrak{g}$ be a Lie algebra.
1) If $\mathfrak{g}$ is nilpotent, then so are all its subalgebras.
2) If $\mathfrak{g}$ is nilpotent and $\varphi : \mathfrak{g} \longrightarrow \mathfrak{g}'$ is a Lie algebra homomorphism, then $\operatorname{im}\varphi$ is nilpotent.
3) If $\mathfrak{g}/Z(\mathfrak{g})$ is nilpotent, then $\mathfrak{g}$ is nilpotent.
4) If $\mathfrak{g}$ is nilpotent, then $Z(\mathfrak{g}) \neq 0$.

Proof. 1) In analogy with the proof of Proposition 2.9, a small induction argument shows that if $\mathfrak{h} \subseteq \mathfrak{g}$ is a subalgebra, then $\mathfrak{h}^i \subseteq \mathfrak{g}^i$. Thus, $\mathfrak{g}^N = 0$ implies $\mathfrak{h}^N = 0$.
2) We have already seen that $\varphi(\mathfrak{g})^1 = \varphi(D\mathfrak{g})$. Furthermore
$$\varphi(\mathfrak{g})^2 = [\varphi(\mathfrak{g}), \varphi(\mathfrak{g})^1] = [\varphi(\mathfrak{g}), \varphi(D\mathfrak{g})] = \varphi([\mathfrak{g}, D\mathfrak{g}]) = \varphi(\mathfrak{g}^2),$$
and by induction we get $\varphi(\mathfrak{g})^i = \varphi(\mathfrak{g}^i)$. Hence nilpotency of $\mathfrak{g}$ implies nilpotency of $\varphi(\mathfrak{g})$.
3) Letting $\kappa : \mathfrak{g} \longrightarrow \mathfrak{g}/Z(\mathfrak{g})$ denote the canonical homomorphism, we see that $(\mathfrak{g}/Z(\mathfrak{g}))^i = (\kappa(\mathfrak{g}))^i = \kappa(\mathfrak{g}^i)$. Thus, if $(\mathfrak{g}/Z(\mathfrak{g}))^N = 0$ then $\mathfrak{g}^N \subseteq Z(\mathfrak{g})$. But then $\mathfrak{g}^{N+1} = [\mathfrak{g}, \mathfrak{g}^N] \subseteq [\mathfrak{g}, Z(\mathfrak{g})] = 0$, hence $\mathfrak{g}$ is nilpotent.
4) As $\mathfrak{g}$ is nilpotent there is a largest $n$ with $\mathfrak{g}^n \neq 0$, so that $\mathfrak{g}^{n+1} = 0$. This means that $[\mathfrak{g}, \mathfrak{g}^n] = 0$, i.e. everything in $\mathfrak{g}^n$ commutes with all elements of $\mathfrak{g}$. Thus, $0 \neq \mathfrak{g}^n \subseteq Z(\mathfrak{g})$.

Definition 2.14. An element $X \in \mathfrak{g}$ is called ad-nilpotent if $\operatorname{ad}(X)$ is a nilpotent linear map, i.e. if there exists an $N$ such that $\operatorname{ad}(X)^N = 0$.

If the Lie algebra is a subalgebra of an algebra of endomorphisms (for instance $\operatorname{End}(V)$), it makes sense to ask whether the elements themselves are nilpotent. In this case nilpotency and ad-nilpotency of an element $X$ need not be the same. For instance, in $\operatorname{End}(V)$ we have the identity $I$, which is obviously not nilpotent; however, $\operatorname{ad}(I) = 0$, and thus $I$ is ad-nilpotent. The reverse implication, however, is true:

Lemma 2.15. Let $\mathfrak{g}$ be a Lie algebra of endomorphisms of some vector space. If $X \in \mathfrak{g}$ is nilpotent, then it is ad-nilpotent.

Proof. We associate to $A \in \mathfrak{g}$ two linear maps $\lambda_A, \rho_A : \operatorname{End}(V) \longrightarrow \operatorname{End}(V)$ by $\lambda_A(B) = AB$ and $\rho_A(B) = BA$. It is easy to see that they commute and that $\operatorname{ad}(A) = \lambda_A - \rho_A$. As $A$ is nilpotent, $\lambda_A$ and $\rho_A$ are also nilpotent, so we can find an $N$ for which $\lambda_A^N = \rho_A^N = 0$.
Since they commute, we can use the binomial formula and get

ad(A)^{2N} = (λ_A − ρ_A)^{2N} = Σ_{j=0}^{2N} (−1)^j (2N choose j) λ_A^{2N−j} ρ_A^j,

which is zero since every term contains either λ_A or ρ_A to a power of at least N.

An equivalent formulation of nilpotency of a Lie algebra is that there exists an N such that ad(X_1) · · · ad(X_N)Y = 0 for all X_1, ..., X_N, Y ∈ g. In particular, if g is nilpotent, then there exists an N such that ad(X)^N = 0 for all X ∈ g, i.e. X is ad-nilpotent. Thus, for a nilpotent Lie algebra g, all elements are ad-nilpotent. That the converse is actually true is the statement of Engel's Theorem, which will be a corollary to the following theorem.

Theorem 2.16. Let V be a finite-dimensional vector space and g ⊆ End(V) a subalgebra consisting of nilpotent linear endomorphisms. Then there exists a nonzero v ∈ V with Av = 0 for all A ∈ g.
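Before turning to the proof, Lemma 2.15 and the remark about the identity can be illustrated concretely. The following is a minimal numerical sketch (assuming numpy is available); the helper `ad_matrix` is ours, not part of the text:

```python
import numpy as np

# A nilpotent 3x3 Jordan block: A^3 = 0, so the binomial argument above
# gives ad(A)^5 = 0 (each term carries a power >= 3 of A on one side).
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

def ad_matrix(X):
    """Matrix of ad(X): B -> XB - BX on End(C^3), in the basis of
    matrix units E_ij flattened row by row (a 9x9 matrix)."""
    n = X.shape[0]
    cols = []
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n)); E[i, j] = 1.0
            cols.append((X @ E - E @ X).ravel())
    return np.stack(cols, axis=1)

adA = ad_matrix(A)
assert np.allclose(np.linalg.matrix_power(adA, 5), 0)  # nilpotent => ad-nilpotent

# The converse fails: the identity is ad-nilpotent (ad(I) = 0) but not nilpotent.
I = np.eye(3)
assert np.allclose(ad_matrix(I), 0)
assert not np.allclose(np.linalg.matrix_power(I, 9), 0)
```

The 9×9 matrix of ad(A) is built column by column from the images of the matrix units, which makes the nilpotency check a plain matrix-power computation.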
Proof. We will prove this by induction over the dimension of g. First, assume dim g = 1. Then g = KA for some nonzero A ∈ g. As A is nilpotent there is an N such that A^N ≠ 0 and A^{N+1} = 0, i.e. we can find a vector w ∈ V with A^N w ≠ 0 and A(A^N w) = A^{N+1} w = 0. Since all elements of g are scalar multiples of A, the vector A^N w will qualify.

Now, assuming that the theorem holds for all Lie algebras of dimension strictly less than n, we must prove that it holds for n-dimensional algebras as well. The algebra g consists of nilpotent endomorphisms on V, hence by the previous lemma all its elements are ad-nilpotent. Consider a subalgebra h ≠ g of g, which thus also consists of ad-nilpotent elements. For A ∈ h we have ad(A)h ⊆ h, since h, as a subalgebra, is closed under brackets. We can form the vector space g/h and define a linear map ad(A) : g/h −→ g/h by ad(A)(B + h) = (ad(A)B) + h. This is well defined, for if B + h = B′ + h, then B − B′ ∈ h and therefore

ad(A)(B′ + h) = ad(A)B′ + h = ad(A)B′ + ad(A)(B − B′) + h = ad(A)B + h = ad(A)(B + h).

This map is again nilpotent, since ad(A)^N(B + h) = (ad(A)^N B) + h = h = [0]. So the situation now is that we have a subalgebra ad(h) of End(g/h) with dim ad(h) ≤ dim h < dim g = n. Our induction hypothesis then yields an element 0 ≠ [B_0] = B_0 + h ∈ g/h on which ad(A) is zero for all A ∈ h. This means that [A, B_0] ∈ h for all A ∈ h, i.e. the normalizer N(h) of h is strictly larger than h. Now assume that h is a maximal proper subalgebra of g. Then, since N(h) is a strictly larger subalgebra, we must have N(h) = g, and consequently h is an ideal. Then g/h is a Lie algebra with canonical Lie algebra homomorphism κ : g −→ g/h, and g/h must have dimension 1: assuming otherwise, we could find a 1-dimensional subalgebra k ≠ g/h of g/h, and then κ^{-1}(k) ≠ g would be a subalgebra strictly larger than h.
This is a contradiction, hence dim g/h = 1 and g ≅ h ⊕ KA_0 for some nonzero A_0 ∈ g \ h. So far, so good. Now we come to the real proof of the existence of the nonzero vector v ∈ V. Since h is an ideal of dimension n − 1, the induction hypothesis ensures that the subspace

W := {v ∈ V | ∀B ∈ h : Bv = 0}

is nonzero. We will show that each linear map A ∈ g (which maps V −→ V) restricts to a map W −→ W, and that as such a map it is still nilpotent. This will in particular hold for A_0, which by nilpotency has the eigenvalue 0 on W and hence a nonzero eigenvector v ∈ W associated to the eigenvalue 0. This is the desired vector, for every linear map in g can, according to the decomposition above, be written as B + λA_0 for some B ∈ h, and Bv = 0 since v was chosen to be in W. Thus, to finish the proof we only need to see that W is invariant. So let A ∈ g be any map. Since h is an ideal, [A, h] ⊆ h, and hence for w ∈ W

B(Aw) = A(Bw) − [A, B]w = 0

for all B ∈ h. This shows that Aw ∈ W and hence that W is invariant. A restriction of a nilpotent map is clearly nilpotent. This completes the proof.

From this we can prove

Corollary 2.17 (Engel's Theorem). A Lie algebra is nilpotent if and only if all its elements are ad-nilpotent.
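Theorem 2.16 can be sanity-checked numerically on a concrete algebra of nilpotent endomorphisms. A small sketch (assuming numpy); the choice of the strictly upper triangular matrices is ours:

```python
import numpy as np

# g = strictly upper triangular 3x3 matrices, a Lie algebra of nilpotent
# endomorphisms of C^3.  Theorem 2.16 promises a nonzero v with Av = 0 for
# every A in g; here the common kernel is spanned by e_1.
basis = []
for i in range(3):
    for j in range(i + 1, 3):
        E = np.zeros((3, 3)); E[i, j] = 1.0
        basis.append(E)

# A vector killed by every basis element lies in the null space of the
# 9x3 matrix obtained by stacking the basis matrices.
stacked = np.vstack(basis)
_, s, Vt = np.linalg.svd(stacked)
v = Vt[-1]                        # right-singular vector of the smallest singular value
assert s[-1] < 1e-12              # the stacked map has nontrivial null space
for A in basis:
    assert np.allclose(A @ v, 0)  # v is a common null vector
assert np.linalg.norm(v) > 0.5    # and it is nonzero (a unit vector, in fact)
```

The SVD is just a convenient way to extract a null-space vector of the stacked linear map; any null-space routine would do.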
Proof. We have already shown the 'only if' part. To show the 'if' part we again invoke induction over the dimension of g. If dim g = 1 then g is abelian, hence nilpotent. Now set n = dim g and assume that the result holds for all Lie algebras of dimension strictly less than n. All the elements of g are ad-nilpotent, hence ad(g) is a subalgebra of End(g) consisting of nilpotent elements, and the previous theorem yields an element 0 ≠ X ∈ g for which ad(Y)(X) = 0 for all Y ∈ g, i.e. X is contained in the center Z(g), which is therefore a nonzero ideal, and g/Z(g) is a Lie algebra whose dimension is strictly less than n. Furthermore, all elements of g/Z(g) are ad-nilpotent: since by definition of the quotient bracket ad([A])[B] = (ad(A)B) + Z(g), we have that ad(A)^N = 0 implies ad([A])^N = 0. Thus g/Z(g) consists solely of ad-nilpotent elements. Consequently the induction hypothesis ensures that g/Z(g) is nilpotent, and then by Proposition 2.13, g is nilpotent.

2.2 Semisimple Lie Algebras

The primary goal of this section is to reach some equivalent formulations of semisimplicity. Our approach to this will be via the so-called Cartan Criterion for solvability, which we will prove shortly. First we need a quite powerful result from linear algebra regarding 'advanced diagonalization':

Theorem 2.18 (SN-Decomposition). Let V be a finite-dimensional vector space over K and let A ∈ End(V). Then there exist unique commuting linear maps S, N ∈ End(V), S being diagonalizable and N being nilpotent, satisfying A = S + N. This is called the SN-decomposition. In fact, S and N can be realized as polynomials in A without constant terms. Furthermore, if A = S + N is the SN-decomposition of A, then ad(S) + ad(N) is the SN-decomposition of ad(A).

We will not prove this.¹ Cartan's Criterion gives a sufficient condition for solvability based on the trace of certain matrices. Therefore the following lemma is necessary.

Lemma 2.19.
Let V be a finite-dimensional vector space, let W_1 and W_2 be subspaces of End(V), and define M := {B ∈ End(V) | ad(B)W_1 ⊆ W_2}. If A ∈ M satisfies Tr(AB) = 0 for all B ∈ M, then A is nilpotent.

Proof. Let A ∈ M satisfy the required condition, and consider the SN-decomposition A = S + N. We are done if we can show that S = 0. Well, S is diagonalizable, so we can find a basis {e_1, ..., e_n} for V in which S has the form diag(a_1, ..., a_n). We will show that all these eigenvalues are 0, and we do so in a curious way: we define E := span_Q{a_1, ..., a_n} ⊆ K to be the vector space over the rationals spanned by the eigenvalues. If we can show that this space, or equivalently its dual space E*, consisting of Q-linear maps E −→ Q, is 0, then we are done. So, let ϕ ∈ E* be arbitrary. The basis we chose for V readily gives us a basis for End(V), consisting of the maps E_ij, where E_ij is the linear map determined by E_ij e_j = e_i and E_ij e_k = 0 for k ≠ j. Then we see

(ad(S)E_ij)e_j = [S, E_ij]e_j = SE_ij e_j − E_ij Se_j = Se_i − a_j E_ij e_j = (a_i − a_j)e_i

¹ For a proof the reader is referred to, for instance, [5], Section 4.3.
while [S, E_ij]e_k = 0 for k ≠ j, i.e. ad(S)E_ij = (a_i − a_j)E_ij. Now, let B ∈ End(V) denote the linear map which in the basis {e_1, ..., e_n} is diag(ϕ(a_1), ..., ϕ(a_n)). As with S we have ad(B)E_ij = (ϕ(a_i) − ϕ(a_j))E_ij. There exists a polynomial p = Σ_{n=1}^N c_n X^n without constant term which maps a_i − a_j to ϕ(a_i − a_j) = ϕ(a_i) − ϕ(a_j) (it's a matter of solving some equations to find the coefficients c_n). Then we have

p(ad S)E_ij = c_N (ad S)^N E_ij + · · · + c_1 (ad S)E_ij = c_N (a_i − a_j)^N E_ij + · · · + c_1 (a_i − a_j)E_ij = p(a_i − a_j)E_ij = (ϕ(a_i) − ϕ(a_j))E_ij,

which says that p(ad S) = ad B. A statement in the SN-decomposition was that ad S, being the diagonalizable part of ad A, is itself a polynomial expression in ad A without constant term, which implies that ad B is a polynomial in ad A without constant term. Since A ∈ M we have ad(A)W_1 ⊆ W_2, and since ad(B) is a polynomial expression in ad(A), also ad(B)W_1 ⊆ W_2, i.e. B ∈ M, and therefore by assumption Tr(AB) = 0. The trace of AB is the sum Σ_{i=1}^n a_i ϕ(a_i), and applying ϕ to the equation Tr(AB) = 0 we get Σ_{i=1}^n ϕ(a_i)^2 = 0, i.e. ϕ(a_i) = 0 (the ϕ(a_i) are rationals, hence ϕ(a_i)^2 ≥ 0). Therefore we must have ϕ = 0, which was what we wanted.

Theorem 2.20 (Cartan's Criterion). Let V be a finite-dimensional vector space and g ⊆ End(V) a subalgebra. If Tr(AB) = 0 for all A ∈ g and all B ∈ Dg, then g is solvable.

Proof. As D^n g = D^{n−1}(Dg) ⊆ (Dg)^{n−1}, we see that g will be solvable if Dg is nilpotent. To show that Dg is nilpotent we invoke Engel's Theorem and Lemma 2.15, which combined say that Dg is nilpotent if all X ∈ Dg are nilpotent. To this end we use the preceding lemma with W_1 = g, W_2 = Dg and M = {B ∈ End(V) | [B, g] ⊆ Dg}. Notice that g ⊆ M; the reverse inclusion need not hold. Now, let A ∈ Dg be arbitrary. We need to show that it is nilpotent, and by virtue of the previous lemma it suffices to verify that Tr(AB) = 0 for all B ∈ M.
A is a sum of elements of the form [X, Y] with X, Y ∈ g, so by linearity of the trace we may assume A = [X, Y]. In general we have

Tr([X, Y]B) = Tr(XY B) − Tr(Y XB) = Tr(Y BX) − Tr(BY X) = Tr([Y, B]X) = Tr(X[Y, B]). (2.1)

Since B ∈ M and Y ∈ g, we have by construction of M that [Y, B] ∈ Dg. But then, by the assumption of the theorem, Tr(AB) = Tr([X, Y]B) = 0.

With this powerful tool we can prove the promised equivalent conditions for a Lie algebra to be semisimple. One of them involves the so-called Killing form:

Definition 2.21 (Killing Form). By the Killing form of a Lie algebra g over K we understand the bilinear form B : g × g −→ K given by B(X, Y) = Tr(ad(X) ad(Y)).

Proposition 2.22. The Killing form is a symmetric bilinear form satisfying

B([X, Y], Z) = B(X, [Y, Z]). (2.2)

Furthermore, if ϕ is any Lie algebra automorphism of g, then B(ϕ(X), ϕ(Y)) = B(X, Y).
Proof. B is obviously bilinear, and symmetry is a consequence of the trace property Tr(AB) = Tr(BA). Eq. (2.2) follows from (2.1). If ϕ : g −→ g is a Lie algebra automorphism, then another way of writing the equation [ϕ(X), ϕ(Y)] = ϕ([X, Y]) is ad(ϕ(X)) ◦ ϕ = ϕ ◦ ad(X). Therefore

B(ϕ(X), ϕ(Y)) = Tr(ϕ ◦ ad(X) ◦ ad(Y) ◦ ϕ^{-1}) = Tr(ad(X) ad(Y)) = B(X, Y).

Calculating the Killing form directly from the definition is immensely complicated. Fortunately, for some of the classical Lie algebras we have a much simpler formula:

B(X, Y) = 2(n + 1) Tr(XY)  for X, Y ∈ sl(n + 1, K) or sp(2n, K),
B(X, Y) = (2n − 1) Tr(XY)  for X, Y ∈ so(2n + 1, K),
B(X, Y) = 2(n − 1) Tr(XY)  for X, Y ∈ so(2n, K). (2.3)

Lemma 2.23. If g is a Lie algebra with Killing form B and h ⊆ g is an ideal, then B|_{h×h} is the Killing form of h.

Proof. First a general remark: if ϕ : V −→ V is a linear map and W ⊆ V is a subspace with im ϕ ⊆ W, then Tr ϕ = Tr(ϕ|_W). Namely, pick a basis {e_1, ..., e_k} for W and extend it to a basis {e_1, ..., e_k, ..., e_n} for V. Let {ε^1, ..., ε^n} denote the corresponding dual basis. As ϕ(v) ∈ W we have ε^{k+i}(ϕ(v)) = 0 and hence

Tr ϕ = Σ_{i=1}^n ε^i(ϕ(e_i)) = Σ_{i=1}^k ε^i(ϕ(e_i)) = Tr(ϕ|_W).

Now let X, Y ∈ h. As h is an ideal, ad(X)g ⊆ h and ad(Y)g ⊆ h, which means that the image of ad(X) ad(Y) lies inside h. It should be obvious that the adjoint representation of h is just ad(X)|_h for X ∈ h. Therefore

B_h(X, Y) = Tr(ad(X)|_h ad(Y)|_h) = Tr((ad(X) ad(Y))|_h) = B|_{h×h}(X, Y).

Theorem 2.24. If g is a Lie algebra, then the following are equivalent:
1) g is semisimple, i.e. Rad g = 0.
2) g has no nonzero abelian ideals.
3) The Killing form B of g is non-degenerate.
4) g is a direct sum of simple Lie algebras: g = g_1 ⊕ · · · ⊕ g_n.

Proof. We first prove that 1 and 2 are equivalent. If g is semisimple, then g has no nonzero solvable ideals, and since abelian ideals are solvable, no nonzero abelian ideals either.
Conversely, if Rad g ≠ 0, then, since Rad g is solvable, there is an N for which D^N(Rad g) ≠ 0 and D^{N+1}(Rad g) = 0. Then D^N(Rad g) is a nonzero abelian ideal. So by contraposition, if no nonzero abelian ideals exist, then g is semisimple.

Now we show that 1 implies 3. We consider the so-called radical of the Killing form B, namely the subspace h := {X ∈ g | ∀Y ∈ g : B(X, Y) = 0}. h is an ideal, for if X ∈ h and Y ∈ g, then for all Z ∈ g:

B([X, Y], Z) = B(X, [Y, Z]) = 0
i.e. [X, Y] ∈ h. Obviously B is non-degenerate if and only if h = 0. Now we assume that Rad g = 0 and want to show that h = 0. We can do this by showing that h is solvable, for then h ⊆ Rad g = 0. First we use the Cartan Criterion on the Lie algebra ad(h) to show that it is solvable: by definition of h we have 0 = B(X, Y) = Tr(ad(X) ad(Y)) for all X ∈ h and Y ∈ g; in particular this holds for all X ∈ Dh. In other words, we have Tr(AB) = 0 for all A ∈ ad(Dh) = D(ad h) and all B ∈ ad h. Hence the Cartan Criterion tells us that ad h is solvable, i.e. 0 = D^N(ad h) = ad(D^N h). This says that D^N h ⊆ Z(g), implying D^{N+1} h = 0. Thus h is solvable and consequently equals 0.

Then we prove 3 implies 2. Assume that h = 0, let k be an abelian ideal, and let X ∈ k and Y ∈ g. Exploiting the ideal property of k, the composition (ad(X) ad(Y))^2 maps along the chain

g −ad(Y)→ g −ad(X)→ k −ad(Y)→ k −ad(X)→ Dk = 0,

so (ad(X) ad(Y))^2 = 0, i.e. ad(X) ad(Y) is nilpotent. Since nilpotent maps have zero trace, we see that 0 = Tr(ad(X) ad(Y)) = B(X, Y). This implies X ∈ h, i.e. k ⊆ h = 0, and thus the desired conclusion.

We then proceed to show that 1 implies 4. Suppose g is semisimple, and let h ⊆ g be any ideal. We consider its "orthogonal complement" w.r.t. B: h⊥ := {X ∈ g | ∀Y ∈ h : B(X, Y) = 0}. This is again an ideal in g, for if X ∈ h⊥ and Y ∈ g, then for all Z ∈ h we have [Y, Z] ∈ h and hence B([X, Y], Z) = B(X, [Y, Z]) = 0, saying that [X, Y] ∈ h⊥. To show that we have a decomposition g = h ⊕ h⊥, we need to show that the ideal h ∩ h⊥ is zero. We can do this by showing that it is solvable, for then semisimplicity forces it to be zero. By some remarks earlier in this proof, solvability of h ∩ h⊥ would be a consequence of ad(h ∩ h⊥) being solvable.
To show that ad(h ∩ h⊥) is solvable we invoke the Cartan Criterion: for X ∈ D(h ∩ h⊥) ⊆ h ∩ h⊥ and Y ∈ h ∩ h⊥ we have Tr(ad(X) ad(Y)) = B(X, Y) = 0 since, in particular, X ∈ h and Y ∈ h⊥. Thus the Cartan Criterion renders ad(h ∩ h⊥) solvable, implying that h ∩ h⊥ is solvable. Hence h ∩ h⊥ = 0 and g = h ⊕ h⊥. After these preliminary remarks we proceed via induction over the dimension of g. If dim g = 2, then g is simple, for any nontrivial ideal in g would have to be 1-dimensional, hence abelian, and such do not exist. Assume now that dim g = n and that the result is true for semisimple Lie algebras of dimension strictly less than n. Suppose that g_1 is a minimal nonzero ideal in g. Then g_1 is simple, since dim g_1 ≥ 2 and any nontrivial ideal in g_1 would be an ideal in g properly contained in g_1, contradicting minimality. Then we have g = g_1 ⊕ g_1⊥ with g_1⊥ semisimple, for if k is an abelian ideal in g_1⊥ then it is an abelian ideal in g, and these do not exist. Then by the induction hypothesis g_1⊥ = g_2 ⊕ · · · ⊕ g_n, a sum of simple Lie algebras, hence g = g_1 ⊕ g_2 ⊕ · · · ⊕ g_n, a sum of simple algebras.

Finally we show that 4 implies 2. So consider g := g_1 ⊕ · · · ⊕ g_n and let h ⊆ g be an abelian ideal. It is not hard to verify that h_i := h ∩ g_i is an abelian ideal in g_i; thus h_i = g_i or h_i = 0. As h_i is abelian and g_i is not, we can rule out the first possibility, i.e. h_i = 0 and hence h = 0.

During the proof we saw that any ideal in a semisimple Lie algebra has a complementary ideal. This is important enough to be stated as a separate result:

Proposition 2.25. Let g be a semisimple Lie algebra and h ⊆ g an ideal. Then h⊥ := {X ∈ g | ∀Y ∈ h : B(X, Y) = 0} is an ideal in g and g = h ⊕ h⊥.

Another very important concept in the discussion to follow is that of complexification.
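Criterion 3 of Theorem 2.24, together with formula (2.3), can be verified numerically for a concrete algebra. A sketch for sl(2, R) (so n + 1 = 2 and B(X, Y) = 4 Tr(XY)), assuming numpy; the basis {H, E, F} and the helpers are our own:

```python
import numpy as np

# Standard basis of sl(2, R): [H,E] = 2E, [H,F] = -2F, [E,F] = H.
H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
basis = [H, E, F]
flat = np.stack([b.ravel() for b in basis], axis=1)   # 4x3, columns = flattened basis

def coords(M):
    """Coordinates of M in the basis (H, E, F), via least squares."""
    c, *_ = np.linalg.lstsq(flat, M.ravel(), rcond=None)
    return c

def ad(X):
    """3x3 matrix of ad(X) in the basis (H, E, F)."""
    return np.stack([coords(X @ b - b @ X) for b in basis], axis=1)

# Killing form matrix K_ij = Tr(ad(X_i) ad(X_j)).
K = np.array([[np.trace(ad(X) @ ad(Y)) for Y in basis] for X in basis])

# Non-degenerate, i.e. criterion 3 of Theorem 2.24: det K != 0.
assert abs(np.linalg.det(K)) > 1e-9

# Formula (2.3) for sl(n+1, K) with n = 1: B(X, Y) = 4 Tr(XY).
for X in basis:
    for Y in basis:
        assert np.isclose(np.trace(ad(X) @ ad(Y)), 4 * np.trace(X @ Y))
```

In this basis the Killing form matrix comes out as [[8, 0, 0], [0, 0, 4], [0, 4, 0]], visibly non-degenerate.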
Definition 2.26 (Complexification). Let V be a real vector space. By the complexification V_C of the vector space V we understand V_C := V ⊕ iV, which, equipped with the scalar multiplication

(a + ib)(v_1 + iv_2) = (av_1 − bv_2) + i(av_2 + bv_1),

becomes a complex vector space. If g is a real Lie algebra, the complexification g_C of g is the vector space g ⊕ ig equipped with the bracket

[X_1 + iX_2, Y_1 + iY_2] = ([X_1, Y_1] − [X_2, Y_2]) + i([X_1, Y_2] + [X_2, Y_1])

(note that this is not the usual direct sum bracket!). It is easily checked that g_C is a complex Lie algebra.

Other presentations of this subject define the complexification of g by g_C = g ⊗_R C, where C is considered a 2-dimensional real vector space. By writing C = R ⊕ iR and using distributivity of the tensor product, this definition is seen to be equivalent to ours.

Example 2.27. The classical real Lie algebras mentioned earlier have the following complexifications:

gl(n, R)_C ≅ gl(n, C)    sl(n, R)_C ≅ sl(n, C)
so(n)_C ≅ so(n, C)       so(m, n)_C ≅ so(m + n, C)
u(n)_C ≅ gl(n, C)        u(m, n)_C ≅ gl(m + n, C)
su(n)_C ≅ sl(n, C)       su(m, n)_C ≅ sl(m + n, C).

Let's prove a few of them. For the first one, pick an element X of gl(n, C) and split it into real and imaginary parts, X = X_1 + iX_2. It is an easy exercise to verify that the map X ↦ X_1 + iX_2 is a Lie algebra isomorphism gl(n, C) → gl(n, R)_C. To prove u(n)_C ≅ gl(n, C), let X ∈ gl(n, C) and write it as

X = (X − X*)/2 + i · (X + X*)/(2i).

It is not hard to see that both (X − X*)/2 and (X + X*)/(2i) are skew-adjoint, i.e. elements of u(n). Again it is a trivial calculation to show that X ↦ (X − X*)/2 + i(X + X*)/(2i) is a Lie algebra isomorphism gl(n, C) → u(n)_C. The other identities are verified in a similar fashion.

Proposition 2.28. A Lie algebra g is semisimple if and only if g_C is semisimple.

Proof. Let B denote the Killing form of g and B_C the Killing form of g_C. Our first task is to relate them. If {X_1, ..., X_n} is a basis for g as an R-vector space, then {X_1, ..., X_n} is also a basis for g_C as a C-vector space. Therefore, for X, Y ∈ g, the linear map ad(X) ad(Y) has the same matrix whether it is considered a linear map on g or on g_C. In particular their traces are equal, which amounts to saying that B(X, Y) = B_C(X, Y). In other words,

B_C|_{g×g} = B. (2.4)

Now assume g to be semisimple, or, equivalently, B to be non-degenerate. Then B(X, Y) = 0 for all Y ∈ g implies X = 0. To show that B_C is non-degenerate, let X ∈ g_C satisfy B_C(X, Y) = 0 for all Y ∈ g_C. Then it holds in particular for all Y ∈ g. Write X = A_1 + iA_2 where A_1, A_2 ∈ g; then by (2.4)

0 = B_C(A_1, Y) + iB_C(A_2, Y) = B(A_1, Y) + iB(A_2, Y)

for all Y ∈ g. Hence by non-degeneracy of B we have A_1 = A_2 = 0, i.e. X = 0. Thus B_C is non-degenerate.

Now assume B_C to be non-degenerate and suppose B(X, Y) = 0 for all Y ∈ g. This holds in particular for the basis elements: B(X, X_k) = 0 for k = 1, ..., n. By (2.4) we also have B_C(X, X_k) = 0, and since {X_1, ..., X_n} is also a basis for g_C, we get B_C(X, Y) = 0 for all Y ∈ g_C. Thus, by non-degeneracy of B_C, X = 0, i.e. B is non-degenerate.

Up till now we have talked a lot about semisimple Lie algebras and their amazing properties, but we have not yet encountered a single example of a semisimple Lie algebra. The rest of this section tends to remedy that. The first thing we do is to introduce a class of Lie algebras which contains the semisimple ones:

Definition 2.29 (Reductive Lie Algebra). A Lie algebra g is called reductive if for each ideal a ⊆ g there is an ideal b ⊆ g such that g = a ⊕ b.

From Proposition 2.25 it follows that semisimple Lie algebras are reductive. So schematically we have

simple ⇒ semisimple ⇒ reductive.

Note how these classes of Lie algebras are somehow opposite to the classes of abelian, solvable or nilpotent algebras. The next proposition characterizes the semisimple Lie algebras among the reductive ones.

Proposition 2.30. If g is reductive, then g = Dg ⊕ Z(g) and Dg is semisimple. Thus a reductive Lie algebra is semisimple if and only if its center is trivial.

Proof.
Let Σ be the set of direct sums a_1 ⊕ · · · ⊕ a_k, where a_1, ..., a_k are indecomposable ideals (i.e. ideals containing only trivial ideals). The elements of Σ are themselves ideals. Let a ∈ Σ be an element of maximal dimension. As g is reductive, there exists an ideal b such that g = a ⊕ b. We want to show that b = {0} (and hence g = a), so assume for contradiction that b ≠ {0} and let b_0 ⊆ b be a nonzero indecomposable ideal of smallest dimension (which always exists, for if b contains no proper nonzero ideals, then b itself is indecomposable). But then a ⊕ b_0 ∈ Σ, contradicting maximality of a, and therefore g = a ∈ Σ. Now let's write

g = (a_1 ⊕ · · · ⊕ a_j) ⊕ (a_{j+1} ⊕ · · · ⊕ a_k) =: g_1 ⊕ g_2,

where a_1, ..., a_j are 1-dimensional and a_{j+1}, ..., a_k are of higher dimension and thus simple. Therefore g_1 is abelian and g_2 is semisimple (by Theorem 2.24), and by definition of the direct sum bracket we have

Dg = D(a_1 ⊕ · · · ⊕ a_k) = Da_1 ⊕ · · · ⊕ Da_k = Da_{j+1} ⊕ · · · ⊕ Da_k = g_2.
This shows that Dg is semisimple. We now only have to justify that g_1 equals the center. We have g_1 ⊆ Z(g), for in the decomposition g = g_1 ⊕ g_2, with X, Y ∈ g_1 and Z ∈ g_2,

[(X, 0), (Y, Z)] = [X, Y] + [0, Z] = 0,

since g_1 is abelian. Conversely, let X ∈ Z(g). We decompose it as X = X_1 + · · · + X_k according to the decomposition of g into indecomposable ideals. Then X_i ∈ Z(a_i), which means that X_i = 0 for i > j, and hence X ∈ g_1.

The next result will help us mass-produce examples of reductive Lie algebras.

Proposition 2.31. Let g be a Lie subalgebra of gl(n, R) or gl(n, C). If g has the property that X ∈ g implies X* ∈ g (where X* is the conjugate transpose of X), then g is reductive.

Proof. Define a real inner product on g by ⟨X, Y⟩ = Re Tr(XY*). This is a genuine inner product. It is symmetric:

⟨Y, X⟩ = Re Tr(Y X*) = Re Tr((XY*)*) = Re conj(Tr(XY*)) = Re Tr(XY*) = ⟨X, Y⟩,

and it is positive definite, for Tr(XX*) is nothing but the sum of the squared norms of the columns of X, which is 0 if and only if X = 0. Assuming a to be an ideal in g, let a⊥ be the complementary subspace w.r.t. the inner product just defined. Then as vector spaces g = a ⊕ a⊥. For this to be a Lie algebra direct sum we need a⊥ to be an ideal. Let X ∈ a⊥ and Y ∈ g; then for all Z ∈ a

⟨[X, Y], Z⟩ = Re Tr(XY Z* − Y XZ*) = −Re Tr(XZ*Y − XY Z*) = −Re Tr(X(Y*Z)* − X(ZY*)*) = −⟨X, [Y*, Z]⟩,

which is 0 as X ∈ a⊥ and [Y*, Z] ∈ a, since Y* ∈ g. Thus a⊥ is an ideal.

Obviously gl(n, R) and gl(n, C) are closed under conjugate transposition and are therefore reductive. They are not semisimple, as their centers contain the scalar matrices diag(a, ..., a) for a ∈ R or a ∈ C respectively, violating Proposition 2.11. The Lie algebras so(n) are semisimple for n ≥ 3. Recall that so(n) is the set of real n × n matrices X for which X + X* = 0. From the definition it is clear that if X ∈ so(n) then also X* ∈ so(n). Hence so(n) is reductive for all n.
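Proposition 2.30 reduces semisimplicity of a reductive algebra to triviality of its center; for so(3) this can be checked numerically. A sketch assuming numpy (the basis and the stacking construction are ours):

```python
import numpy as np

# Basis of so(3), the real antisymmetric 3x3 matrices.
L1 = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])
L2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
L3 = np.array([[0., 0., 0.], [0., 0., 1.], [0., -1., 0.]])
basis = [L1, L2, L3]

# X = c1 L1 + c2 L2 + c3 L3 is central iff [X, Li] = 0 for all i, i.e. iff
# c lies in the kernel of the stacked linear map c -> ([X,L1], [X,L2], [X,L3]).
columns = []
for B in basis:
    brackets = [B @ L - L @ B for L in basis]
    columns.append(np.concatenate([M.ravel() for M in brackets]))
M = np.stack(columns, axis=1)   # a 27x3 matrix

# Trivial center <=> trivial kernel <=> full column rank.
assert np.linalg.matrix_rank(M) == 3
```

The same stacking trick works for so(n) with any n, replacing the basis by all E_ij − E_ji with i < j.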
so(2) is a 1-dimensional (hence abelian) Lie algebra and thus not semisimple. Let us show that so(3) is semisimple. Thanks to Proposition 2.30 this boils down to verifying that its center is trivial. So assume

X = [[0, a, b], [−a, 0, c], [−b, −c, 0]]

to be an element of the center of so(3). In particular it has to commute with the two matrices

A_1 = [[0, 1, 0], [−1, 0, 0], [0, 0, 0]]  and  A_2 = [[0, 0, 1], [0, 0, 0], [−1, 0, 0]].

We have

A_1 X = [[−a, 0, c], [0, −a, −b], [0, 0, 0]]  and  X A_1 = [[−a, 0, 0], [0, −a, 0], [c, −b, 0]].
As these two matrices must be equal, we immediately get b = c = 0. Furthermore

A_2 X = [[0, 0, 0], [0, 0, 0], [0, −a, 0]]  and  X A_2 = [[0, 0, 0], [0, 0, −a], [0, 0, 0]],

and we get a = 0. Thus X = 0, and the center is trivial. Generalizing this to higher dimensions, one can show that so(n) is semisimple for n ≥ 3. Since so(n, C) = so(n)_C (cf. Example 2.27), Proposition 2.28 says that so(n, C) is also semisimple for n ≥ 3.

The Lie algebra u(n) is reductive: it consists of the n × n complex matrices satisfying X + X* = 0, and again it is clear that u(n) is closed under conjugate transposition, hence reductive. It is not semisimple, since the matrices diag(ia, ..., ia) for a ∈ R all lie in the center. However, the subalgebra su(n) is semisimple for n ≥ 2 (su(1) is zero-dimensional), as can be seen by an argument analogous to the one given above. Since its complexification is sl(n, C), this is also semisimple for n ≥ 2. But sl(n, C) is also the complexification of sl(n, R), which is therefore semisimple for n ≥ 2 as well. By the same argument, so(m, n) for m + n ≥ 3 and su(m, n) for m + n ≥ 2 are semisimple, since their complexifications are. Wrapping up, the following Lie algebras are semisimple:

sl(n, R) for n ≥ 2    sl(n, C) for n ≥ 2
so(n) for n ≥ 3       so(m, n) for m + n ≥ 3    so(n, C) for n ≥ 3
su(n) for n ≥ 2       su(m, n) for m + n ≥ 2.

2.3 The Universal Enveloping Algebra

For a finite-dimensional vector space V we have the tensor algebra T(V) defined by

T(V) = ⊕_{n=0}^∞ V^{⊗n}.

From this one can form various quotients. One of the more important is the symmetric algebra S(V), where we mod out the ideal I generated by elements of the form X ⊗ Y − Y ⊗ X. The resulting algebra is commutative by construction. If {X_1, ..., X_n} is a basis for V, then one can show that the set {X_1^{i_1} · · · X_n^{i_n} | i_1, ..., i_n ∈ N_0} (we define X^0 = 1) is a basis for S(V), which is thus (unlike the exterior algebra) infinite-dimensional.
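The monomial basis of S(V) just described can be counted degree by degree: the degree-d monomials X_1^{i_1} · · · X_n^{i_n} with i_1 + · · · + i_n = d correspond to size-d multisets drawn from n symbols, of which there are C(n + d − 1, d). A small sketch of this count (standard library only; the helper name is ours):

```python
from itertools import combinations_with_replacement
from math import comb

# Degree-d monomials in commuting variables X_1, ..., X_n correspond to
# multisets of size d drawn from n symbols.
def num_monomials(n, d):
    return sum(1 for _ in combinations_with_replacement(range(n), d))

for n in range(1, 5):
    for d in range(0, 6):
        assert num_monomials(n, d) == comb(n + d - 1, d)
```

So each graded piece of S(V) is finite-dimensional, while their direct sum is infinite-dimensional, as stated.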
If we set I = (i_1, ..., i_k), we will use the short-hand notation X_I for X_{i_1} · · · X_{i_k}. We define the length of I to be |I| = k and write j ≤ I if j ≤ i_1, ..., i_k.

Definition 2.32 (Universal Enveloping Algebra). Let g be a Lie algebra. By a universal enveloping algebra of g we understand a pair (U, i) of an associative unital algebra U and a linear map i : g −→ U with i([X, Y]) = i(X)i(Y) − i(Y)i(X), satisfying that for any pair (A, ϕ) of an associative unital algebra A and a linear map ϕ : g −→ A with ϕ([X, Y]) = ϕ(X)ϕ(Y) − ϕ(Y)ϕ(X), there is a unique algebra homomorphism ϕ̄ : U −→ A with ϕ = ϕ̄ ◦ i.
In other words, any linear map ϕ : g −→ A satisfying the above condition factorizes through U, rendering the following diagram commutative:

    g --i--> U
     \       |
      ϕ      | ϕ̄
       \     v
        `--> A

As for the symmetric algebra, multiplication in a universal enveloping algebra is written by juxtaposition.

Proposition 2.33. Let g be a Lie algebra and J the two-sided ideal in T(g) generated by elements of the form X ⊗ Y − Y ⊗ X − [X, Y]. If i denotes the restriction to g of the canonical map κ : T(g) −→ T(g)/J, then (T(g)/J, i) is a universal enveloping algebra for g. It is unique up to algebra isomorphism.

Proof. Uniqueness first. Assume that (U, i) and (Ũ, ĩ) are universal enveloping algebras for g. Since ĩ : g −→ Ũ is a linear map satisfying the bracket condition, the universal property of (U, i) yields an algebra homomorphism ϕ : U −→ Ũ with ĩ = ϕ ◦ i. Likewise, for i : g −→ U the universal property of (Ũ, ĩ) yields an algebra homomorphism ψ : Ũ −→ U with i = ψ ◦ ĩ. Composing these gives i = ψ ◦ ϕ ◦ i, i.e. ψ ◦ ϕ : U −→ U satisfies (ψ ◦ ϕ) ◦ i = i. But obviously id_U satisfies id_U ◦ i = i as well, and by the uniqueness part of the universal property, ψ ◦ ϕ = id_U. Likewise one shows that ϕ ◦ ψ = id_Ũ; thus U and Ũ are isomorphic.

To show existence we need to verify that (T(g)/J, i) really is a universal enveloping algebra. Well, first of all,

i([X, Y]) = κ([X, Y]) = [X, Y] + J = [X, Y] + (X ⊗ Y − Y ⊗ X − [X, Y]) + J = X ⊗ Y − Y ⊗ X + J = (X + J)(Y + J) − (Y + J)(X + J) = κ(X)κ(Y) − κ(Y)κ(X) = i(X)i(Y) − i(Y)i(X).

Now suppose that ϕ : g −→ A is a linear map satisfying ϕ([X, Y]) = ϕ(X)ϕ(Y) − ϕ(Y)ϕ(X). Since ϕ is linear, it factorizes uniquely through T(g), yielding an algebra homomorphism ϕ′ : T(g) −→ A with ϕ = ϕ′ ◦ ι, where ι : g → T(g) is the inclusion.
On the generators of J we see that

ϕ′(X ⊗ Y − Y ⊗ X − [X, Y]) = ϕ′(X ⊗ Y) − ϕ′(Y ⊗ X) − ϕ′([X, Y]) = ϕ(X)ϕ(Y) − ϕ(Y)ϕ(X) − ϕ([X, Y]) = 0.

Thus, vanishing on J, ϕ′ factorizes uniquely through T(g)/J by an algebra homomorphism ϕ̄ : T(g)/J −→ A, i.e. ϕ = ϕ̄ ◦ i. This proves existence.
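Any representation of g gives a pair (A, ϕ) as in Definition 2.32 and therefore factors through the universal enveloping algebra. The compatibility condition ϕ([X, Y]) = ϕ(X)ϕ(Y) − ϕ(Y)ϕ(X) can be spot-checked for ϕ = ad on sl(2, R); a sketch assuming numpy, with basis and helpers of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Basis of sl(2, R).
H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
basis = [H, E, F]
flat = np.stack([b.ravel() for b in basis], axis=1)

def coords(M):
    """Coordinates of M in the basis (H, E, F)."""
    c, *_ = np.linalg.lstsq(flat, M.ravel(), rcond=None)
    return c

def ad(X):
    """3x3 matrix of ad(X) in the basis (H, E, F)."""
    return np.stack([coords(X @ b - b @ X) for b in basis], axis=1)

# phi = ad satisfies phi([X,Y]) = phi(X)phi(Y) - phi(Y)phi(X) (the Jacobi
# identity), so it extends to an algebra map U(sl(2)) -> End(sl(2)).
for _ in range(20):
    X = sum(c * b for c, b in zip(rng.standard_normal(3), basis))
    Y = sum(c * b for c, b in zip(rng.standard_normal(3), basis))
    lhs = ad(X @ Y - Y @ X)                   # phi([X, Y])
    rhs = ad(X) @ ad(Y) - ad(Y) @ ad(X)       # phi(X)phi(Y) - phi(Y)phi(X)
    assert np.allclose(lhs, rhs)
```

This is of course just the Jacobi identity in matrix form, but it is exactly the hypothesis on (A, ϕ) that Proposition 2.33 feeds into the factorization.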