ELEMENTS OF LINEAR AND MULTILINEAR ALGEBRA
John M. Erdman
Portland State University
2010
© John M. Erdman
It is not essential for the value of an education that every idea be understood at the
time of its accession. Any person with a genuine intellectual interest and a wealth
of intellectual content acquires much that he only gradually comes to understand
fully in the light of its correlation with other related ideas. . . . Scholarship is a
progressive process, and it is the art of so connecting and recombining individual
items of learning by the force of one's whole character and experience that nothing
is left in isolation, and each idea becomes a commentary on many others.
- NORBERT WIENER
PREFACE
This set of notes is an activity-oriented introduction to the study of linear and multilinear
algebra. The great majority of the results in beginning linear and multilinear algebra are
straightforward and can be verified by the thoughtful student. Indeed, that is the main point of
these notes: to convince the beginner that the subject is accessible. In the material that follows there are
numerous indicators that suggest activity on the part of the reader: words such as proposition,
example, exercise, and corollary, if not followed by a proof or a reference to a proof, are
invitations to verify the assertions made. When the proof of a theorem appears to me to be too
difficult for the average student to (re)invent and I have no improvements to offer to the standard
proofs, I provide references to standard treatments. These notes were written for a 2-term course in
linear/multilinear algebra for seniors and first year graduate students at Portland State University.
The prerequisites for working through this material are quite modest. Elementary properties
of the real number system, the arithmetic of matrices, ability to solve systems of linear equations,
and the ability to evaluate the determinant of a square matrix are assumed. A few examples and
exercises depend on differentiation and/or integration of real valued functions, but no particular
skill with either is required.
There are of course a number of advantages and disadvantages in consigning a document to
electronic life. One advantage is the rapidity with which links implement cross-references. Hunting
about in a book for lemma 3.14.23 can be time-consuming (especially when an author engages in
the entirely logical but utterly infuriating practice of numbering lemmas, propositions, theorems,
corollaries, and so on, separately). A perhaps more substantial advantage is the ability to correct
errors, add missing bits, clarify opaque arguments, and remedy infelicities of style in a timely
fashion. The correlative disadvantage is that a reader returning to the web page after a short
time may find everything (pages, definitions, theorems, sections) numbered differently. (LaTeX is an
amazing tool.) I will change the date on the title page to inform the reader of the date of the last
nontrivial update (that is, one that affects numbers or cross-references).
The most serious disadvantage of electronic life is impermanence. In most cases when a web
page vanishes so, for all practical purposes, does the information it contains. For this reason (and
the fact that I want this material to be freely available to anyone who wants it) I am making use of
a Share Alike license from Creative Commons. It is my hope that anyone who finds this material
useful will correct what is wrong, add what is missing, and improve what is clumsy. For more
information on Creative Commons licenses see http://creativecommons.org/. Concerning the
text itself, please send corrections, suggestions, complaints, and all other comments to the author
at
erdman@pdx.edu
NOTATION AND TERMINOLOGY
Definitions.
(S, +) is a semigroup if it satisfies axioms (1)–(2).
(S, +) is a monoid if it satisfies axioms (1)–(3).
(S, +) is a group if it satisfies axioms (1)–(4).
(S, +) is an Abelian group if it satisfies axioms (1)–(5).
(S, +, m) is a ring if it satisfies axioms (1)–(8).
(S, +, m) is a commutative ring if it satisfies axioms (1)–(8) and (12).
(S, +, m) is a unital ring (or ring with identity, or unitary ring) if it satisfies
axioms (1)–(9).
(S, +, m) is a division ring (or skew field) if it satisfies axioms (1)–(11).
(S, +, m) is a field if it satisfies axioms (1)–(12).
Remarks.
A binary operation is often written additively, (x, y) ↦ x + y, if it is commutative and
multiplicatively, (x, y) ↦ xy, if it is not. This is by no means always the case: in a
commutative ring (the real numbers or the complex numbers, for example), both addition
and multiplication are commutative.
When no confusion is likely to result we often write 0 for 0_S and 1 for 1_S.
Many authors require a ring to satisfy axioms (1)–(9).
It is easy to see that axiom (10) holds in any unital ring except the trivial ring S = {0}.
Convention: Unless the contrary is stated we will assume that every unital ring is non-
trivial.
Greek Letters
Fraktur Fonts
In these notes Fraktur fonts are used (most often for families of sets and families of linear maps).
Below are the Roman equivalents for each letter. When writing longhand or presenting material
on a blackboard it is usually best to substitute script English letters.
VECTOR SPACES
        j
   R --------> T
   |           |
   h           f
   ↓           ↓
   S --------> U
        k

The diagram is said to commute if k ∘ h = f ∘ j. Diagrams need not be rectangular. For instance,

   R
   |  ↘ d
   h     ↘
   ↓       ↘
   S --------> U
        k

is a commutative diagram if d = k ∘ h.
1.2.4. Example. Here is one diagrammatic way of stating the associative law for composition of
functions: If h : R → S, g : S → T , and f : T → U and we define j and k so that the triangles in
the diagram

        j
   R --------> T
   |        ↗  |
   h      g    f
   ↓    ↗      ↓
   S --------> U
        k

commute (that is, j = g ∘ h and k = f ∘ g), then the assertion that the rectangle commutes,
k ∘ h = f ∘ j, is precisely the associative law f ∘ (g ∘ h) = (f ∘ g) ∘ h.
Let A ⊆ S and ι = ι_{A,S} : A → S be the inclusion mapping of A into S. If f : S → T , then the
restriction of f to A is the function f|A := f ∘ ι : A → T ; that is, the diagram

   S
   ↑  ↘ f
   ι     ↘
   A ------> T
       f|A

commutes.
Suppose that g : A → T and A ⊆ S. A function f : S → T is an extension of g to S if f|A = g,
that is, if the diagram

   S
   ↑  ↘ f
   ι     ↘
   A ------> T
        g

commutes.
1.2.8. Notation. If S, T , and U are nonempty sets and if f : S → T and g : S → U , then we
define the function (f, g) : S → T × U by
(f, g)(s) = (f (s), g(s)).
Suppose, on the other hand, that we are given a function h mapping S into the Cartesian product
T × U . Then for each s ∈ S the image h(s) is an ordered pair, which we will write as (h¹(s), h²(s)).
(The superscripts have nothing to do with powers.) Notice that we now have functions h¹ : S → T
and h² : S → U . These are the components of h. In abbreviated notation h = (h¹, h²).
1.2.9. Notation. Let f : S → U and g : T → V be functions between sets. Then f × g denotes
the map
f × g : S × T → U × V : (s, t) ↦ (f (s), g(t)).
8 1. VECTOR SPACES
1.2.10. Exercise. Let S be a set and a : S × S → S be a function such that the diagram

               a × id
   S × S × S ---------> S × S
      |                   |
   id × a                 a                          (D1)
      ↓                   ↓
    S × S  ------------> S
                a

commutes. What is (S, a)? Hint. Interpret a as, for example, addition (or multiplication).
1.2.11. Convention. We will have use for a standard one-element set, which, if we wish, we can
regard as the Cartesian product of an empty family of sets. We will denote it by 1. For each set
S there is exactly one function from S into 1. We will denote it by ε_S. If no confusion is likely to
arise we write ε for ε_S.
1.2.12. Exercise. Let S be a set and suppose that a : S × S → S and η : 1 → S are functions such
that both diagram (D1) above and the diagram (D2) which follows commute.

           η × id             id × η
   1 × S ---------> S × S <---------- S × 1
         ↘            |             ↙
            ↘         a          ↙                   (D2)
               ↘      ↓       ↙
                      S

(The slanted arrows are the obvious bijections.) What is (S, a, η)?
1.2.15. Notation. Let S be a set. We denote by σ the interchange (or switching) operation
on S × S. That is,
σ : S × S → S × S : (s, t) ↦ (t, s).
1.2.16. Exercise. Let S be a set and suppose that a : S × S → S, η : 1 → S, and ι : S → S are
functions such that the diagrams (D1), (D2), and (D3) above commute. Suppose further that the
following diagram commutes.

              σ
   S × S --------> S × S
        ↘            ↙
         a          a                                (D4)
           ↘      ↙
              S

What is (S, a, η, ι)?
1.2.17. Exercise. Let f : G → H be a function between Abelian groups. Suppose that the diagram

           f × f
   G × G --------> H × H
      |               |
      +               +
      ↓               ↓
      G ------------> H
            f

commutes. What does the commutativity of this diagram say about the function f ?
1.3. Rings
Recall that an ordered triple (R, +, ·) is a ring if (R, +) is an Abelian group, (R, ·) is a semigroup,
and the distributive laws (see Some Algebraic Objects 8) hold. The ring is unital if, in addition,
(R, ·) is a monoid.
1.3.1. Proposition. The additive identity of a ring is an annihilator. That is, for every element
a of a ring 0a = a0 = 0.
1.3.2. Proposition. If a and b are elements of a ring, then (−a)b = a(−b) = −(ab) and
(−a)(−b) = ab.
1.3.3. Proposition. Let a and b be elements of a unital ring. Then 1 − ab is invertible if and only
if 1 − ba is.
Hint for proof . Look at the product of 1 − ba and 1 + bca where c is the inverse of 1 − ab.
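A sketch of the computation the hint suggests (using c(1 − ab) = (1 − ab)c = 1):
(1 − ba)(1 + bca) = 1 − ba + b[(1 − ab)c]a = 1 − ba + ba = 1,
(1 + bca)(1 − ba) = 1 − ba + b[c(1 − ab)]a = 1 − ba + ba = 1,
so 1 + bca is a two-sided inverse of 1 − ba.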
1.3.4. Definition. An element a of a ring is left cancellable if ab = ac implies that b = c. It
is right cancellable if ba = ca implies that b = c. A ring has the cancellation property if
every nonzero element of the ring is both left and right cancellable.
1.3.5. Exercise. Every division ring has the cancellation property.
1.3.6. Definition. A nonzero element a of a ring is a zero divisor (or divisor of zero) if there
exists a nonzero element b of the ring such that (i) ab = 0 or (ii) ba = 0.
Most everyone agrees that a nonzero element a of a ring is a left divisor of zero if it satisfies (i)
for some nonzero b and a right divisor of zero if it satisfies (ii) for some nonzero b. Beyond that, agreement
on terminology ceases. Some authors ([6], for example) use the definition above for divisor of zero;
others ([18], for example) require a divisor of zero to be both a left and a right divisor of zero; and
yet others ([19], for example) avoid the issue entirely by defining zero divisors only for commutative
rings. Palmer in [27] makes the most systematic distinctions: a zero divisor is defined as above;
an element which is both a left and a right zero divisor is a two-sided zero divisor ; and if the same
nonzero b makes both (i) and (ii) hold a is a joint zero divisor.
1.3.7. Proposition. A division ring has no zero divisors. That is, if ab = 0 in a division ring,
then a = 0 or b = 0.
1.3.8. Proposition. A ring has the cancellation property if and only if it has no zero divisors.
1.3.9. Example. Let G be an Abelian group. Then Hom(G) is a unital ring (under the operations
of addition and composition).
1.3.10. Definition. A function f : R S between rings is a (ring) homomorphism if
f (x + y) = f (x) + f (y) (1.1)
and
f (xy) = f (x)f (y) (1.2)
for all x and y in R. If in addition R and S are unital rings and
f (1_R) = 1_S (1.3)
then f is a unital (ring) homomorphism.
Obviously a ring homomorphism f : R → S is a group homomorphism of R and S regarded as
Abelian groups. The kernel of f as a ring homomorphism is the kernel of f as a homomorphism
of Abelian groups; that is, ker f = {x ∈ R : f (x) = 0}.
If f⁻¹ exists and is also a ring homomorphism, then f is an isomorphism from R to S. If an
isomorphism from R to S exists, then R and S are isomorphic.
for all x ∈ S. Under this operation (called pointwise scalar multiplication) the Abelian group
F(S, F) becomes a vector space. When F = R we write F(S) for F(S, R).
1.4.6. Example. As a special case of example 1.4.5, we may regard Euclidean n-space Rⁿ as a
vector space.
1.4.7. Example. As another special case of example 1.4.5, we may regard the set R^∞ of all
sequences of real numbers as a vector space.
1.4.8. Example. Yet another special case of example 1.4.5 is the vector space M_{m×n}(F) of m × n
matrices of members of F (where m, n ∈ N). We will use M_n(F) as shorthand for M_{n×n}(F) and
M_n for M_n(R).
1.4.9. Exercise. Let V be the set of all real numbers. Define an operation of addition by
x ⊕ y = the maximum of x and y
for all x, y ∈ V . Define an operation of scalar multiplication by
α ⊙ x = αx
for all α ∈ R and x ∈ V . Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector
space.
1.4.10. Exercise. Let V be the set of all real numbers x such that x > 0. Define an operation of
addition by
x ⊕ y = xy
for all x, y ∈ V . Define an operation of scalar multiplication by
α ⊙ x = x^α
for all α ∈ R and x ∈ V . Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector
space.
1.4.11. Exercise. Let V be R², the set of all ordered pairs (x, y) of real numbers. Define an
operation of addition by
(u, v) ⊕ (x, y) = (u + x + 1, v + y + 1)
for all (u, v) and (x, y) in V . Define an operation of scalar multiplication by
α ⊙ (x, y) = (αx, αy)
for all α ∈ R and (x, y) ∈ V . Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector
space.
1.4.12. Exercise. Let V be the set of all n × n matrices of real numbers. Define an operation of
addition by
A ⊕ B = ½(AB + BA)
for all A, B ∈ V . Define an operation of scalar multiplication by
α ⊙ A = 0
for all α ∈ R and A ∈ V . Prove or disprove: under the operations ⊕ and ⊙ the set V is a vector
space. (If you have forgotten how to multiply matrices, look in any beginning linear algebra text.)
1.4.13. Proposition. If x is a vector and α is a scalar, then αx = 0 if and only if α = 0 or x = 0.
In example 1.1.9 we saw how to make the family of equivalence classes of directed segments in
the plane into an Abelian group. We may also define scalar multiplication on these equivalence
classes by declaring that
(1) if α > 0, then α · PQ = PR where P , Q, and R are collinear, P does not lie between Q and
R, and the length of the directed segment (P, R) is α times the length of (P, Q);
(2) if α = 0, then α · PQ = PP ; and
(3) if α < 0, then α · PQ = PR where P , Q, and R are collinear, P does lie between Q and R,
and the length of the directed segment (R, P ) is |α| times the length of (P, Q).
1.4.14. Exercise. Show that the scalar multiplication presented above is well-defined and that
it makes the Abelian group of equivalence classes of directed segments in the plane into a vector
space.
1.4.15. Remark. Among the methods for proving elementary facts about Euclidean geometry of
the plane three of the most common are synthetic geometry, analytic geometry, and vector geometry.
In synthetic geometry points do not have coordinates, lines do not have equations, and vectors
are not mentioned; but standard theorems from Euclid's Elements are used. Analytic geometry
makes use of a coordinate system in terms of which points are assigned coordinates and lines are
described by equations; little or no use is made of vectors or major theorems of Euclidean geometry.
Vector geometry uses vectors as defined in the preceding exercise, but does not rely on Euclidean
theorems or coordinate systems. Although there is nothing illogical about mixing these methods in
establishing a result, it is interesting to try to construct separate proofs of some elementary results
using each method in turn. That is what the next four exercises are about.
1.4.16. Exercise. Use each of the three geometric methods described above to show that the
diagonals of a parallelogram bisect each other.
1.4.17. Exercise. Use each of the three geometric methods described above to show that if the
diagonals of a quadrilateral bisect each other then the quadrilateral is a parallelogram.
1.4.18. Exercise. Use each of the three geometric methods described above to show that the line
joining the midpoints of the non-parallel sides of a trapezoid is parallel to the bases and its length
is half the sum of the lengths of the bases.
1.4.19. Exercise. Use each of the three geometric methods described above to show that the line
segments joining the midpoints of adjacent sides of an arbitrary quadrilateral form a parallelogram.
1.4.20. Exercise. Three vertices of a parallelogram P QRS in 3-space are P = (1, 3, 2), Q =
(4, 5, 3), and R = (2, 1, 0). What are the coordinates of the point S, opposite Q?
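A quick coordinate check (a sketch, taking the coordinates exactly as printed above): in a
parallelogram P QRS the diagonals P R and QS bisect each other, so S = P + R − Q.

    import numpy as np

    # Vertex opposite Q in parallelogram PQRS: S = P + R - Q.
    P = np.array([1, 3, 2])
    Q = np.array([4, 5, 3])
    R = np.array([2, 1, 0])
    print(P + R - Q)   # [-1 -1 -1]

So, with these coordinates, S = (−1, −1, −1).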
1.5. Subspaces
1.5.1. Definition. A subset M of a vector space V is a subspace of V if it is a vector space under
the operations it inherits from V .
1.5.2. Notation. For a vector space V we will write M ≼ V to indicate that M is a subspace
of V . To distinguish this concept from other uses of the word subspace (topological subspace, for
example) writers frequently use the expressions linear subspace, vector subspace, or linear manifold.
1.5.3. Proposition. A nonempty subset M of a vector space V is a subspace of V if and only
if it is closed under addition and scalar multiplication. (That is: if x and y belong to M , so does
x + y; and if x belongs to M and α ∈ F, then αx belongs to M .)
1.5.4. Example. In each of the following cases prove or disprove that the set of points (x, y, z) in
R³ satisfying the indicated condition is a subspace of R³.
(a) x + 2y − 3z = 4.
(b) (x − 1)/2 = (y + 2)/3 = z/4.
(c) x + y + z = 0 and x − y + z = 1.
(d) x = −z and x = z.
(e) x² + y² = z.
(f) x/2 = (y − 3)/5.
1.5.5. Proposition. Let M be a family of subspaces of a vector space V . Then the intersection
⋂ M of this family is itself a subspace of V .
1.5.6. Exercise. Let A be a nonempty set of vectors in a vector space V . Explain carefully why it
makes sense to say that the intersection of the family of all subspaces containing A is the smallest
subspace of V which contains A.
1.5.7. Exercise. Find and describe geometrically the smallest subspace of R³ containing the
vectors (0, 3, 6) and (0, 1, 2).
1.5.8. Exercise. Find and describe geometrically the smallest subspace of R³ containing the
vectors (2, 3, 3) and (0, 3, 2).
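For such questions the subspace sought is the span of the given vectors, and the rank of the matrix
having the vectors as rows distinguishes a line (rank 1) from a plane (rank 2). A small numerical
sketch:

    import numpy as np

    # Exercise 1.5.7: (0, 3, 6) = 3 * (0, 1, 2), so the span is a line.
    print(np.linalg.matrix_rank(np.array([[0, 3, 6], [0, 1, 2]])))   # 1

    # Exercise 1.5.8: the two vectors are independent, so the span is a plane.
    print(np.linalg.matrix_rank(np.array([[2, 3, 3], [0, 3, 2]])))   # 2

In 1.5.7 the subspace is the line through the origin with direction (0, 1, 2); in 1.5.8 it is a plane
through the origin.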
1.5.9. Example. Let R^∞ denote the vector space of all sequences of real numbers. (See exam-
ple 1.4.5.) In each of the following a subset of R^∞ is described. Prove or disprove that it is a
subspace of R^∞.
(a) Sequences that have infinitely many zeros (for example, (1, 1, 0, 1, 1, 0, 1, 1, 0, . . . )).
(b) Sequences which are eventually zero. (A sequence (xₖ) is eventually zero if there is an
index n₀ such that xₙ = 0 whenever n ≥ n₀.)
(c) Sequences that are absolutely summable. (A sequence (xₖ) is absolutely summable if
∑_{k=1}^∞ |xₖ| < ∞.)
(d) Bounded sequences. (A sequence (xₖ) is bounded if there is a positive number M such
that |xₖ| ≤ M for every k.)
(e) Decreasing sequences. (A sequence (xₖ) is decreasing if xₙ₊₁ ≤ xₙ for each n.)
(f) Convergent sequences. (A sequence (xₖ) is convergent if there is a number ℓ such that the
sequence is eventually in every neighborhood of ℓ; that is, if there is a number ℓ such that
for every ε > 0 there exists n₀ ∈ N such that |xₙ − ℓ| < ε whenever n ≥ n₀.)
(g) Arithmetic progressions. (A sequence (xₖ) is arithmetic if it is of the form (a, a + k,
a + 2k, a + 3k, . . . ) for some constant k.)
(h) Geometric progressions. (A sequence (xₖ) is geometric if it is of the form (a, ka, k²a, k³a, . . . )
for some constant k.)
1.5.10. Notation. Here are some frequently encountered families of functions:
F = F[a, b] = {f : f is a real valued function on the interval [a, b]} (1.4)
P = P[a, b] = {p : p is a polynomial function on [a, b]} (1.5)
P₄ = P₄[a, b] = {p ∈ P : the degree of p is less than 4} (1.6)
Q₄ = Q₄[a, b] = {p ∈ P : the degree of p is equal to 4} (1.7)
C = C[a, b] = {f ∈ F : f is continuous} (1.8)
D = D[a, b] = {f ∈ F : f is differentiable} (1.9)
K = K[a, b] = {f ∈ F : f is a constant function} (1.10)
B = B[a, b] = {f ∈ F : f is bounded} (1.11)
J = J [a, b] = {f ∈ F : f is integrable} (1.12)
(A function f ∈ F is bounded if there exists a number M ≥ 0 such that |f (x)| ≤ M for all x in
[a, b]. It is (Riemann) integrable if it is bounded and ∫ₐᵇ f (x) dx exists.)
1.5.11. Exercise. For a fixed interval [a, b], which sets of functions in the list 1.5.10 are vector
subspaces of which?
1.5.12. Notation. If A and B are subsets of a vector space then the sum of A and B, denoted by
A + B, is defined by
A + B := {a + b : a ∈ A and b ∈ B}.
The set A − B is defined similarly. For a set {a} containing a single element we write a + B instead
of {a} + B.
1.5.13. Exercise. Let M and N be subspaces of a vector space V . Consider the following subsets
of V .
(1) M ∪ N . (A vector v belongs to M ∪ N if it belongs to either M or N .)
(2) M + N .
(3) M \ N . (A vector v belongs to M \ N if it belongs to M but not to N .)
(4) M ∩ N .
For each of the sets (1)–(4) above, either prove that it is a subspace of V or give a counterexample
to show that it need not be a subspace of V .
1.5.14. Definition. Let M and N be subspaces of a vector space V . If M ∩ N = {0} and
M + N = V , then V is the (internal) direct sum of M and N . In this case we write
V = M ⊕ N .
In this case the subspaces M and N are complementary and each is the complement of the
other.
1.5.15. Example. In R³ let M be the line x = y = z, N be the line x = ½y = ⅓z, and L = M + N .
Then L = M ⊕ N .
1.5.16. Example. Let M be the plane x + y + z = 0 and N be the line x = y = z in R³. Then
R³ = M ⊕ N .
1.5.17. Example. Let C = C[−1, 1] be the vector space of all continuous real valued functions
on the interval [−1, 1]. A function f in C is even if f (−x) = f (x) for all x ∈ [−1, 1]; it is odd
if f (−x) = −f (x) for all x ∈ [−1, 1]. Let Co = {f ∈ C : f is odd } and Ce = {f ∈ C : f is even }.
Then C = Co ⊕ Ce .
1.5.18. Example. Let C = C[0, 1] be the family of continuous real valued functions on the
interval [0, 1]. Define
f₁(t) = t and f₂(t) = t⁴
for 0 ≤ t ≤ 1. Let M be the set of all functions of the form αf₁ + βf₂ where α, β ∈ R. And let N
be the set of all functions g in C which satisfy
∫₀¹ t g(t) dt = 0 and ∫₀¹ t⁴ g(t) dt = 0.
Then C = M ⊕ N .
1.5.19. Exercise. In the preceding example let g(t) = t² for 0 ≤ t ≤ 1. Find polynomials f ∈ M
and h ∈ N such that f = g + h.
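A computational sketch of one way to carry this out (the exercise itself asks for a hand computation):
seek f = a t + b t⁴ in M and set h = f − g; membership of h in N imposes the two integral conditions,
which become linear equations in a and b.

    import sympy as sp

    t, a, b = sp.symbols('t a b')
    f = a*t + b*t**4
    h = f - t**2                                 # we need h to lie in N
    eq1 = sp.integrate(t * h, (t, 0, 1))         # must vanish
    eq2 = sp.integrate(t**4 * h, (t, 0, 1))      # must vanish
    print(sp.solve([eq1, eq2], [a, b]))          # {a: 3/7, b: 9/14}

So f (t) = (3/7)t + (9/14)t⁴ and h = f − g satisfy the requirements.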
1.5.20. Theorem (Vector Decomposition Theorem). Let V be a vector space such that V = M ⊕ N .
Then for every vector v ∈ V there exist unique vectors m ∈ M and n ∈ N such that v = m + n.
1.5.21. Exercise. Define what it means for a vector space V to be the direct sum of subspaces
M₁, . . . , Mₙ. Show (using your definition) that if V is the direct sum of these subspaces, then for
every v ∈ V there exist unique vectors mₖ ∈ Mₖ (for k = 1, . . . , n) such that v = m₁ + · · · + mₙ.
1.6.13. Example. In the vector space C[0, π] of continuous functions on the interval [0, π] define
the vectors f , g, and h by
f (x) = x
g(x) = sin x
h(x) = cos x
for 0 ≤ x ≤ π. Then f , g, and h are linearly independent.
1.6.14. Example. In the vector space C[0, π] of continuous functions on [0, π] let f , g, h, and j
be the vectors defined by
f (x) = 1
g(x) = x
h(x) = cos x
j(x) = cos²(x/2)
for 0 ≤ x ≤ π. Then f , g, h, and j are linearly dependent.
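A sketch of why: the half-angle identity cos²(x/2) = (1 + cos x)/2 gives j = ½ f + ½ h, so
j − ½ f − ½ h = 0 is a nontrivial linear relation among the four vectors.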
1.6.15. Exercise. Let a, b, and c be distinct real numbers. Show that the vectors (1, 1, 1), (a, b, c),
and (a², b², c²) form a linearly independent subset of R³.
1.6.16. Exercise. In the vector space C[0, 1] define the vectors f , g, and h by
f (x) = x
g(x) = eˣ
h(x) = e⁻ˣ
for 0 ≤ x ≤ 1. Are f , g, and h linearly independent?
1.6.17. Exercise. Let u = (λ, 1, 0), v = (1, λ, 1), and w = (0, 1, λ). Find all values of λ which
make {u, v, w} a linearly dependent subset of R³.
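One way to organize the computation (a sketch; λ as reconstructed above): the three vectors are
dependent exactly when the matrix having them as rows is singular.

    import sympy as sp

    lam = sp.symbols('lambda')
    M = sp.Matrix([[lam, 1, 0], [1, lam, 1], [0, 1, lam]])
    print(sp.factor(M.det()))        # lambda*(lambda**2 - 2)
    print(sp.solve(M.det(), lam))    # [0, -sqrt(2), sqrt(2)]

So the set is linearly dependent precisely for λ = 0 and λ = ±√2.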
1.6.18. Exercise. Suppose that {u, v, w} is a linearly independent set in a vector space V . Show
that the set {u + v, u + w, v + w} is linearly independent in V .
1.7.5. Example. Let M_{m×n} be the vector space of all m × n matrices of real numbers. For
1 ≤ i ≤ m and 1 ≤ j ≤ n let E^{ij} be the m × n matrix whose entry in the i-th row and j-th column
is 1 and all of whose other entries are 0. Then {E^{ij} : 1 ≤ i ≤ m and 1 ≤ j ≤ n} is a basis for M_{m×n}.
1.7.6. Exercise. A 2 × 2 matrix
   [ a  b ]
   [ c  d ]
has zero trace if a + d = 0. Show that the set of all such matrices is a subspace of M_{2×2} and
find a basis for it.
1.7.7. Exercise. Let U be the set of all matrices of real numbers of the form
   [ u  u−x ]
   [ 0   x  ]
and V be the set of all real matrices of the form
   [ v   0 ]
   [ w  −v ].
Find bases for U, V, U + V, and U ∩ V.
To show that every nontrivial vector space has a basis we need to invoke Zorn's lemma, a set
theoretic assumption which is equivalent to the axiom of choice. To this end we need to know
about such things as partial orderings, chains, and maximal elements.
1.7.8. Definition. A relation on a set S is a subset of the Cartesian product S × S. If the
relation is denoted by ≤, then it is conventional to write x ≤ y (or equivalently, y ≥ x) rather than
(x, y) ∈ ≤.
1.7.9. Definition. A relation ≤ on a set S is reflexive if x ≤ x for all x ∈ S. It is transitive
if x ≤ z whenever x ≤ y and y ≤ z. It is antisymmetric if x = y whenever x ≤ y and y ≤ x.
A relation which is reflexive, transitive, and antisymmetric is a partial ordering. A partially
ordered set is a set on which a partial ordering has been defined.
1.7.10. Example. The set R of real numbers is a partially ordered set under the usual relation ≤.
1.7.11. Example. A family A of subsets of a set S is a partially ordered set under the relation ⊆.
When A is ordered in this fashion it is said to be ordered by inclusion.
1.7.12. Example. Let F(S) be the family of real valued functions defined on a set S. For f ,
g ∈ F(S) write f ≤ g if f (x) ≤ g(x) for every x ∈ S. This is a partial ordering on F(S). It is
known as pointwise ordering.
1.7.13. Definition. Let A be a subset of a partially ordered set S. An element u ∈ S is an upper
bound for A if a ≤ u for every a ∈ A. An element m in the partially ordered set S is maximal
if there is no element of the set which is strictly greater than m; that is, m is maximal if c = m
whenever c ∈ S and c ≥ m. An element m in S is the largest element of S if m ≥ s for every
s ∈ S.
Similarly an element l ∈ S is a lower bound for A if l ≤ a for every a ∈ A. An element
m in the partially ordered set S is minimal if there is no element of the set which is strictly less
than m; that is, m is minimal if c = m whenever c ∈ S and c ≤ m. An element m in S is the
smallest element of S if m ≤ s for every s ∈ S.
1.7.14. Example. Let S = {a, b, c} be a three-element set. The family P(S) of all subsets of S is
partially ordered by inclusion. Then S is the largest element of P(S) and, of course, it is also a
maximal element of P(S). The family Q(S) of all proper subsets of S has no largest element; but
it has three maximal elements {b, c}, {a, c}, and {a, b}.
1.7.15. Proposition. A linearly independent subset of a vector space V is a basis for V if and
only if it is a maximal linearly independent subset.
1.7.16. Proposition. A spanning subset for a nontrivial vector space V is a basis for V if and
only if it is a minimal spanning set for V .
1.7.17. Definition. Let S be a partially ordered set with partial ordering ≤.
1.7.24. Proposition. Let M be a subspace of a vector space V . Then there exists a subspace N
of V such that V = M ⊕ N .
1.7.25. Lemma. Let V be a vector space with a finite basis {e₁, . . . , eₙ} and let v = ∑ₖ₌₁ⁿ αₖeₖ
be a vector in V . If p ∈ Nₙ and αₚ ≠ 0, then {e₁, . . . , eₚ₋₁, v, eₚ₊₁, . . . , eₙ} is a basis for V .
1.7.26. Proposition. If some basis for a vector space V contains n elements, then every linearly
independent subset of V with n elements is also a basis.
Hint for proof . Suppose {e₁, . . . , eₙ} is a basis for V and {v₁, . . . , vₙ} is linearly independent
in V . Start by using lemma 1.7.25 to show that (after perhaps renumbering the eₖ's) the set
{v₁, e₂, . . . , eₙ} is a basis for V .
1.7.27. Corollary. If a vector space V has a finite basis B, then every basis for V is finite and
contains the same number of elements as B.
1.7.28. Definition. A vector space is finite dimensional if it has a finite basis and the dimen-
sion of the space is the number of elements in this (hence any) basis for the space. The dimension
of a finite dimensional vector space V is denoted by dim V . If the space does not have a finite basis,
it is infinite dimensional.
Corollary 1.7.27 can be generalized to arbitrary vector spaces.
1.7.29. Theorem. If B and C are bases for a vector space V , then B and C are cardinally
equivalent; that is, there exists a bijection from B onto C.
Proof. See [29], page 45, Theorem 1.12.
1.7.30. Proposition. Let V be a vector space and suppose that V = U ⊕ W . Prove that if B
is a basis for U and C is a basis for W , then B ∪ C is a basis for V . From this conclude that
dim V = dim U + dim W .
1.7.31. Definition. The transpose of an n × n matrix A = [a_ij] is the matrix Aᵗ = [a_ji]
obtained by interchanging the rows and columns of A.
LINEAR TRANSFORMATIONS
2.1. Linearity
2.1.1. Definition. Let V and W be vector spaces over the same field F. A function T : V → W is
linear if T (x + y) = T x + T y and T (αx) = αT x for all x, y ∈ V and α ∈ F. For linear functions
it is a matter of convention to write T x instead of T (x) whenever it does not cause confusion. (Of
course, we would not write T x + y when we intend T (x + y).) Linear functions are frequently called
linear transformations or linear maps.
2.1.2. Notation. If V and W are vector spaces (over the same field F) the family of all linear
functions from V into W is denoted by L(V, W ). When V = W we condense the notation L(V, V )
to L(V ) and we call the members of L(V ) operators.
2.1.3. Example. Let V and W be vector spaces over a field F. For S, T ∈ L(V, W ) define S + T
by
(S + T )(x) := Sx + T x
for all x ∈ V . For T ∈ L(V, W ) and α ∈ F define αT by
(αT )(x) := α(T x)
for all x ∈ V . Under these operations L(V, W ) is a vector space.
2.1.4. Proposition. If S : V → W and T : W → X are linear maps between vector spaces, then
the composite T ∘ S of these two functions is a linear map from V into X.
2.1.5. Convention. If S : V → W and T : W → X are linear maps between vector spaces, then
the composite linear map of these two functions is nearly always written as T S rather than T ∘ S.
In the same vein, T ² = T T , T ³ = T T T , and so on.
2.1.6. Exercise. Use the notation of definition 1.4.1 and suppose that (V, +, M ) and (W, +, M )
are vector spaces over a common field F and that T ∈ Hom(V, W ) is such that the diagram

          T
   V ---------> W
   |            |
   Mλ           Mλ
   ↓            ↓
   V ---------> W
          T

commutes for every λ ∈ F. Show that T is linear.
2.1.8. Example. Let a < b and C¹ = C¹([a, b]) be the set of all continuously differentiable real
valued functions on the interval [a, b]. (Recall that a function is continuously differentiable
if it has a derivative and the derivative is continuous.) Then differentiation
D : C¹ → C : f ↦ f ′
is linear.
2.1.9. Example. Let R^∞ be the vector space of all sequences of real numbers and define
S : R^∞ → R^∞ : (x₁, x₂, x₃, . . . ) ↦ (0, x₁, x₂, . . . ).
Then S is a linear operator. It is called the unilateral shift operator.
2.1.10. Definition. Let T : V → W be a linear transformation between vector spaces. Then ker T ,
the kernel (or nullspace) of T , is defined to be the set of all x in V such that T x = 0. Also,
ran T , the range of T (or the image of T ), is the set of all y in W such that y = T x for some x
in V . The rank of T is the dimension of its range and the nullity of T is the dimension of its
kernel.
2.1.11. Definition. Let T : V → W be a linear transformation between vector spaces and let A
be a subset of V . Define T→(A) := {T x : x ∈ A}. This is the (direct) image of A under T .
2.1.12. Proposition. Let T : V → W be a linear map between vector spaces and M ≼ V . Then
T→(M ) is a subspace of W . In particular, the range of a linear map is a subspace of the codomain
of the map.
2.1.13. Definition. Let T : V → W be a linear transformation between vector spaces and let B be
a subset of W . Define T←(B) := {x ∈ V : T x ∈ B}. This is the inverse image of B under T .
2.1.14. Proposition. Let T : V → W be a linear map between vector spaces and M ≼ W . Then
T←(M ) is a subspace of V . In particular, the kernel of a linear map is a subspace of the domain
of the map.
2.1.15. Exercise. Let T : R³ → R³ : x = (x₁, x₂, x₃) ↦ (x₁ + 3x₂ − 2x₃, x₁ − 4x₃, x₁ + 6x₂).
(a) Identify the kernel of T by describing it geometrically and by giving its equation(s).
(b) Identify the range of T by describing it geometrically and by giving its equation(s).
2.1.16. Exercise. Let T be the linear map from R³ to R³ defined by
T (x, y, z) = (2x + 6y − 4z, 3x + 9y − 6z, 4x + 12y − 8z).
Describe the kernel of T geometrically and give its equation(s). Describe the range of T
geometrically and give its equation(s).
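A sketch of how such computations can be checked (with the minus signs restored as printed above):
the kernel is the nullspace of the standard matrix of T and the range is its column space.

    import sympy as sp

    A = sp.Matrix([[2, 6, -4], [3, 9, -6], [4, 12, -8]])
    print(A.rank())          # 1: the range is a line, the kernel a plane
    print(A.nullspace())     # two vectors spanning the plane x + 3y - 2z = 0
    print(A.columnspace())   # one vector along (2, 3, 4)

Here every row of A is a multiple of (1, 3, −2), which is why the kernel is the plane x + 3y − 2z = 0
and the range is the line through the origin determined by (2, 3, 4).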
2.1.17. Exercise. Let C = C[a, b] be the vector space of all continuous real valued functions on
the interval [a, b] and C¹ = C¹[a, b] be the vector space of all continuously differentiable real valued
functions on [a, b]. Let D : C¹ → C be the linear transformation defined by
Df = f ′
and let T : C → C¹ be the linear transformation defined by
(T f )(x) = ∫ₐˣ f (t) dt
for all f ∈ C and x ∈ [a, b].
(a) Compute (and simplify) (DT f )(x).
(b) Compute (and simplify) (T Df )(x).
(c) Find the kernel of T .
(d) Find the range of T .
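Parts (a) and (b) are instances of the fundamental theorem of calculus, and a symbolic sketch of
them looks like this:

    import sympy as sp

    x, t, a = sp.symbols('x t a')
    f = sp.Function('f')
    Tf = sp.Integral(f(t), (t, a, x))
    print(sp.diff(Tf, x))                                    # f(x): DT is the identity
    print(sp.integrate(sp.Derivative(f(t), t), (t, a, x)))   # f(x) - f(a)

So (DT f )(x) = f (x) while (T Df )(x) = f (x) − f (a); from the first of these one can read off
that T is injective (part (c)), and its range consists of those g ∈ C¹ with g(a) = 0 (part (d)).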
2.1.18. Proposition. A linear map T : V → W between vector spaces is injective if and only if
ker T = {0}.
2.2.2. Definition. A linear map T : V → W between vector spaces is left invertible (or has a
left inverse, or is a section) if there exists a linear map L : W → V such that LT = I_V .
The map T is right invertible (or has a right inverse, or is a retraction) if there exists
a linear map R : W → V such that T R = I_W . We say that T is invertible (or has an inverse,
or is an isomorphism) if there exists a linear map T⁻¹ : W → V which is both a left and a right
inverse for T . If there exists an isomorphism between two vector spaces V and W , we say that the
spaces are isomorphic and we write V ≅ W .
2.2.3. Exercise. Show that an operator T ∈ L(V ) is invertible if it satisfies the equation
T ² − T + I_V = 0.
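A sketch of the verification: the equation gives T ² = T − I_V , so
T (I_V − T ) = T − T ² = T − (T − I_V ) = I_V ,
and the same computation with the factors reversed gives (I_V − T )T = I_V . Hence T is invertible
with T⁻¹ = I_V − T .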
2.2.4. Example. The unilateral shift operator S on the vector space R^∞ of all sequences of real
numbers (see 2.1.9) is injective but not surjective. It is left invertible but not right invertible.
2.2.5. Proposition. A linear map T : V → W between vector spaces is invertible if and only if
it has both a left inverse and a right inverse.
2.2.6. Proposition. A linear map between vector spaces is invertible if and only if it is bijective.
2.2.8. Proposition. An operator T on a vector space V is invertible if and only if it has a unique
right inverse.
Hint for proof . Consider ST + S − I_V , where S is the unique right inverse for T .
2.2.10. Example. Let B be a basis for a vector space V over a field F. Then V ≅ l_c(B, F).
Hint for proof . Recall the notational conventions made in 1.7.22.
2.2.11. Proposition. Let S, T ∈ L(V, W ) where V and W are vector spaces over a field F; and
let B be a basis for V . If S(e) = T (e) for every e ∈ B, then S = T .
2.2.13. Proposition. A linear transformation has a left inverse if and only if it is injective.
2.2.14. Proposition. A linear transformation has a right inverse if and only if it is surjective.
2.2.15. Proposition. Let V and W be vector spaces over a field F and B be a basis for V . If
f : B → W , then there exists a unique linear map T_f : V → W which is an extension of f .
2.2.16. Exercise. Let S be a set, V be a vector space over a field F, and f : S → V be a bijection.
Explain how to use f to make S into a vector space isomorphic to V .
2.5.3. Exercise. According to convention 2.5.2 above, what is the value of f_e when e and f are
elements of the basis B?
2.5.4. Proposition. Let V be a vector space with basis B. For every v ∈ V define a function v∗
on V by
v∗(x) = ∑_{e∈B} x_e v_e   for all x ∈ V .
Then v∗ is a linear functional on V .
2.5.5. Notation. In the preceding proposition 2.5.4 the value v∗(x) of v∗ at x is often written as
⟨x, v⟩.
2.5.6. Exercise. Consider the notation 2.5.5 above in the special case that the scalar field F = R.
Then ⟨ , ⟩ is an inner product on the vector space V . (For a definition of inner product see 5.1.1.)
2.5.7. Exercise. In the special case that the scalar field F = C, things above are usually done a
bit differently. For v ∈ V the function v∗ is defined by
v∗(x) = ⟨x, v⟩ = ∑_{e∈B} x_e v̄_e .
Why do you think things are done this way?
2.5.8. Proposition. Let v be a nonzero vector in a vector space V and B be a basis for V which
contains the vector v. Then there exists a linear functional f ∈ V ∗ such that f (v) = 1 and f (e) = 0
for every e ∈ B \ {v}.
2.5.9. Corollary. Let M be a subspace of a vector space V and v a vector in V which does not
belong to M . Then there exists f ∈ V ∗ such that f (v) = 1 and f→(M ) = {0}.
2.5.10. Corollary. If v is a vector in a vector space V and f (v) = 0 for every f ∈ V ∗, then v = 0.
2.5.11. Definition. Let F be a field. A family F of F-valued functions on a set S containing at
least two points separates points of S if for every x, y ∈ S such that x ≠ y there exists f ∈ F
such that f (x) ≠ f (y).
2.5.12. Corollary. For every nontrivial vector space V , the dual space V ∗ separates points of V .
2.5.13. Proposition. Let V be a vector space with basis B. The map Φ : V → V ∗ : v ↦ v∗ (see
proposition 2.5.4) is linear and injective.
The next result is the Riesz-Fréchet theorem for finite dimensional vector spaces with basis. It
is important to keep in mind that the result does not hold for infinite dimensional vector spaces
(see proposition 2.5.18) and that the mapping Φ depends on the basis which has been chosen for
the vector space.
2.5.14. Theorem. Let V be a finite dimensional vector space with basis B. Then the map Φ
defined in the preceding proposition 2.5.13 is an isomorphism. Thus for every f ∈ V ∗ there exists
a unique vector a ∈ V such that a∗ = f .
2.5.15. Definition. Let V be a vector space with basis {e^λ : λ ∈ Λ}. A basis {ε^λ : λ ∈ Λ} for V ∗
is the dual basis for V ∗ if it satisfies
ε^λ(e^μ) = 1 if λ = μ, and ε^λ(e^μ) = 0 if λ ≠ μ.
2.5.16. Theorem. Every finite dimensional vector space V with a basis has a unique dual basis
for its dual space. In fact, if {e₁, . . . , eₙ} is a basis for V , then {(e₁)∗, . . . , (eₙ)∗} is the dual basis
for V ∗.
2.5.17. Corollary. If a vector space V is finite dimensional, then so is its dual space and
dim V ∗ = dim V .
In proposition 2.5.13 we showed that the map
Φ : V → V ∗ : v ↦ v∗
is always an injective linear map. In corollary 2.5.17 we showed that if V is finite dimensional, then
so is V ∗ and Φ is an isomorphism between V and V ∗. This is never true in infinite dimensional
spaces.
2.5.18. Proposition. If V is infinite dimensional, then Φ is not an isomorphism.
Hint for proof . Let B be a basis for V . Is there a functional g ∈ V ∗ such that g(e) = 1 for
every e ∈ B? Could such a functional be Φ(x) for some x ∈ V ?
2.5.19. Proposition. Let V be a vector space over a field F. For every x in V define
x̂ : V ∗ → F : f ↦ f (x).
(a) The vector x̂ belongs to V ∗∗ for each x ∈ V .
(b) Let Γ_V be the map from V to V ∗∗ which takes x to x̂. (When no confusion is likely we
write Γ for Γ_V , so that Γ(x) = x̂ for each x ∈ V .) The function Γ is linear.
(c) The function Γ is injective.
2.5.20. Proposition. If V is a finite dimensional vector space, then the map Γ : V → V ∗∗ defined
in the preceding proposition 2.5.19 is an isomorphism.
2.5.21. Proposition. If V is infinite dimensional, then the mapping Γ (defined in 2.5.19) is not
an isomorphism.
Hint for proof . Let B be a basis for V and g ∈ V ∗ be as in proposition 2.5.18. Show that if we
let C₀ be {e∗ : e ∈ B}, then the set C₀ ∪ {g} is linearly independent and can therefore be extended
to a basis C for V ∗. Find an element Ψ in V ∗∗ such that Ψ(g) = 1 and Ψ(φ) = 0 for every other
φ ∈ C. Can Ψ be x̂ for some x ∈ V ?
2.6. Annihilators
2.6.1. Notation. Let V be a vector space and M ⊆ V . Then
M ⊥ := {f ∈ V ∗ : f (x) = 0 for all x ∈ M }.
We say that M ⊥ is the annihilator of M . (The reasons for using the familiar orthogonal
complement notation M ⊥ (usually read "M perp") will become apparent when we study inner
product spaces, where orthogonality actually makes sense.)
2.6.2. Exercise. Find the annihilator in (R²)∗ of the vector (1, 1) in R². (Express your answer in
terms of the standard dual basis for (R²)∗.)
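A sketch of the computation: writing f = α e₁∗ + β e₂∗ in terms of the standard dual basis, we have
f (1, 1) = α + β, so f annihilates (1, 1) exactly when β = −α; that is, {(1, 1)}⊥ is the span of
e₁∗ − e₂∗.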
terms injective and surjective may not make sense when applied to morphisms in a category that
is not concrete.)
A morphism g : B → C is left cancellable if whenever morphisms f₁, f₂ : A → B
satisfy gf₁ = gf₂, then f₁ = f₂. Saunders Mac Lane suggested calling left cancellable morphisms
monic morphisms. The distinction between monic morphisms and monomorphisms turns out to
be slight. In these notes almost all of the morphisms we encounter are monic if and only if they are
monomorphisms. As an easy exercise prove that any injective morphism in a (concrete) category
is monic. The converse sometimes fails.
In the same vein Mac Lane suggested calling a right cancellable morphism (that is, a morphism
f : A → B such that whenever morphisms g₁, g₂ : B → C satisfy g₁f = g₂f , then g₁ = g₂) an
epic morphism. Again it is an easy exercise to show that in a (concrete) category any epimorphism
is epic. The converse, however, fails in some rather common categories.
3.1.9. Definition. The terminology for inverses of morphisms in categories is essentially the same
as for functions. Let α : S → T and β : T → S be morphisms in a category. If βα = I_S , then β is a
left inverse of α and, equivalently, α is a right inverse of β. We say that the morphism α is
an isomorphism (or is invertible) if there exists a morphism β : T → S which is both a left and
a right inverse for α. Such a morphism β is denoted by α⁻¹ and is called the inverse of α.
3.1.10. Proposition. If a morphism in some category has both a left and a right inverse, then it
is invertible.
In any concrete category one can inquire whether every bijective morphism (that is, every map
which is both a monomorphism and an epimorphism) is an isomorphism. We saw in proposi-
tion 2.2.6 that in the category VEC the answer is yes. In the next example the answer is no.
3.1.11. Example. In the category POSET of partially ordered sets and order preserving maps
not every bijective morphism is an isomorphism.
3.1.12. Example. If in the category CG of example 3.1.6 the monoid G is a group, then every
morphism in CG is an isomorphism.
3.2. Functors
3.2.1. Definition. If A and B are categories a (covariant) functor F from A to B (written
F : A → B) is a pair of maps: an object map F which associates with each object S in A an
object F (S) in B and a morphism map (also denoted by F ) which associates with each morphism
f ∈ Mor(S, T ) in A a morphism F (f ) ∈ Mor(F (S), F (T )) in B, in such a way that
(1) F (g ∘ f ) = F (g) ∘ F (f ) whenever g ∘ f is defined in A; and
(2) F (id_S) = id_{F(S)} for every object S in A.
Forgetful functors can forget about properties as well. If G is an object in the category of
Abelian groups, the functor which forgets about commutativity in Abelian groups would take G
into the category of groups.
It was mentioned in the preceding section that all the categories that are of interest in these
notes are concrete categories (ones in which the objects are sets with additional structure and the
morphisms are maps which preserve, in some sense, this additional structure). We will have several
occasions to use a special type of forgetful functor: one which forgets about all the structure of
the objects except the underlying set and which forgets any structure preserving properties of the
morphisms. If A is an object in some concrete category C, we denote by |A| its underlying set.
And if f : A → B is a morphism in C we denote by |f | the map from |A| to |B| regarded simply
as a function between sets. It is easy to see that | |, which takes objects in C to objects in SET
(the category of sets and maps) and morphisms in C to morphisms in SET, is a covariant functor.
In the category VEC of vector spaces and linear maps, for example, | | causes a vector space V
to forget about both its addition and scalar multiplication (|V | is just a set). And if T : V → W
is a linear transformation, then |T | : |V | → |W | is just a map between sets; it has forgotten
about preserving the operations.
3.2.3. Notation. Let f : S → T be a function between sets. Then we define f→(A) = {f (x) : x ∈
A} and f←(B) = {x ∈ S : f (x) ∈ B}. We say that f→(A) is the image of A under f and that
f←(B) is the preimage of B under f .
3.2.4. Definition. A partially ordered set is order complete if every nonempty subset has a
supremum (that is, a least upper bound) and an infimum (a greatest lower bound).
3.2.5. Definition. Let S be a set. Then the power set of S, denoted by P(S), is the family of
all subsets of S.
3.2.6. Example (The power set functors). Let S be a nonempty set.
(a) The power set P(S) of S partially ordered by is order complete.
(b) The class of order complete partially ordered sets and order preserving maps is a category.
(c) For each function f between sets let P(f ) = f→. Then P is a covariant functor from the
category of sets and functions to the category of order complete partially ordered sets and
order preserving maps.
(d) For each function f between sets let P(f ) = f←. Then P is a contravariant functor from
the category of sets and functions to the category of order complete partially ordered sets
and order preserving maps.
3.2.7. Definition. Let T : V → W be a linear map between vector spaces. For every g ∈ W ∗
let T ∗(g) = g ∘ T . Notice that T ∗(g) ∈ V ∗. The map T ∗ from the vector space W ∗ into the vector
space V ∗ is the (vector space) adjoint map of T .
3.2.8. CAUTION. In inner product spaces we will use the same notation T ∗ for a different map.
If T : V → W is a linear map between inner product spaces, then the (inner product space) adjoint
transformation T ∗ maps W to V (not W ∗ to V ∗).
3.2.9. Example (The vector space duality functor). Let T ∈ L(V, W ) where V and W are vector
spaces over a field F. Then the pair of maps V ↦ V ∗ and T ↦ T ∗ is a contravariant functor from
the category of vector spaces and linear maps into itself. Show that (the morphism map of) this
functor is linear. (That is, show that (S + T )∗ = S∗ + T ∗ and (αT )∗ = αT ∗ for all S, T ∈ L(V, W )
and α ∈ F.)
There are several quite different results that in various texts are labeled as the fundamental
theorem of linear algebra. Many of them seem to me not to be particularly fundamental because
they apply only to finite dimensional inner product spaces or, what amounts to the same thing,
matrices. I feel the following result deserves the name because it holds for arbitrary linear maps
between arbitrary vector spaces.
3.2.10. Theorem (Fundamental Theorem of Linear Algebra). For every linear map T : V → W
between vector spaces the following hold.
(1) ker T ∗ = (ran T )⊥;
(2) ran T ∗ = (ker T )⊥;
(3) ker T = (ran T ∗)⊥; and
(4) ran T = (ker T ∗)⊥.
Hint for proof . Showing in (2) that (ker T )⊥ ⊆ ran T ∗ takes some thought. Let f ∈ (ker T )⊥.
We wish to find a g ∈ W ∗ such that f = T ∗g. By propositions 2.1.14 and 1.7.24 there exists a
subspace M of V such that V = ker T ⊕ M . Let ι : M → V be the inclusion mapping of M into V
and T|M be the restriction of T to M . Use proposition 2.2.13 to show that T|M has a left inverse
S : W → M . Let g := f ∘ S. Use theorem 1.5.20.
3.2.11. Exercise. What is the relationship between a linear map T being injective and its adjoint
T ∗ being surjective? between T being surjective and T ∗ being injective?
3.3.1. Definition. Let S be a set and C be a concrete category. An object F in C together with a
map ι : S → |F | is free on S if for every object A in C and every map f : S → |A| there exists a
unique morphism f̃ : F → A in C such that |f̃ | ∘ ι = f ; that is, such that the diagram

        ι
   S ------> |F |            F
     ↘        |              |
      f      |f̃ |            f̃
        ↘     ↓              ↓
             |A|             A

commutes.
We will be interested in free vector spaces; that is, free objects in the category VEC of vector
spaces and linear maps. Naturally, merely defining a concept does not guarantee its existence. It
turns out, in fact, that free vector spaces exist on arbitrary sets. (See exercise 3.3.5.)
3.3.2. Exercise. In the preceding definition reference to the forgetful functor is often omitted and
the accompanying diagram is often drawn as follows:
        ι
   S ------> F
     ↘       |
      f      f̃
        ↘    ↓
             A
It certainly looks a lot simpler. Why do you suppose I opted for the more complicated version?
3.3.3. Proposition. If two objects in some concrete category are free on the same set, then they
are isomorphic.
3.3.5. Example. If S is an arbitrary nonempty set and F is a field, then there exists a vector
space V over F which is free on S. This vector space is unique (up to isomorphism).
Hint for proof . Given the set S let V be the set of all F-valued functions on S which have finite
support. Define addition and scalar multiplication pointwise. The map ι : s ↦ χ_{s}, taking each
element s ∈ S to the characteristic function of {s}, is the desired injection. To verify that V is free
over S it must be shown that for every vector space W and every function f : S → |W | there exists
a unique linear map f̃ : V → W which makes the following diagram commute.

        ι
   S ------> |V |            V
     ↘        |              |
      f      |f̃ |            f̃
        ↘     ↓              ↓
             |W |            W
3.4.1. Definition. Let A₁ and A₂ be objects in a category. A triple (P, π₁, π₂), where P is an
object and π₁ : P → A₁ and π₂ : P → A₂ are morphisms, is a product of A₁ and A₂ if for every
object B and every pair of morphisms F₁ : B → A₁ and F₂ : B → A₂ there exists a unique
morphism G : B → P such that π₁G = F₁ and π₂G = F₂.

              B
           ↙  |  ↘
        F₁    G    F₂
      ↙       ↓       ↘
   A₁ <------ P ------> A₂
         π₁       π₂

Similarly, a triple (P, j₁, j₂), where j₁ : A₁ → P and j₂ : A₂ → P are morphisms, is a coproduct
of A₁ and A₂ if for every object B and every pair of morphisms F₁ : A₁ → B and F₂ : A₂ → B
there exists a unique morphism G : P → B such that Gj₁ = F₁ and Gj₂ = F₂.

   A₁ ------> P <------ A₂
         j₁       j₂
      ↘       |       ↙
        F₁    G    F₂
           ↘  ↓  ↙
              B
3.4.2. Proposition. In an arbitrary category products and coproducts (if they exist) are essentially
unique.
Essentially unique means unique up to isomorphism. Thus in the preceding proposition the
claim is that if (P, π₁, π₂) and (Q, ρ₁, ρ₂) are both products of two given objects, then P ≅ Q.
3.4.3. Definition. Let V and W be vector spaces over the same field F. To make the Cartesian
product V W into a vector space we define addition by
(v, w) + (v′, w′) = (v + v′, w + w′)
(where v, v′ ∈ V and w, w′ ∈ W ), and we define scalar multiplication by
α(v, w) = (αv, αw)
(where α ∈ F, v ∈ V , and w ∈ W ). The resulting vector space we call the (external) direct
sum of V and W . It is conventional to use the same notation V W for external direct sums that
we use for internal direct sums.
3.4.4. Example. The external direct sum of two vector spaces (as defined in 3.4.3) is a vector
space.
3.4.5. Example. In the category of vector spaces and linear maps the external direct sum is both
a product and a coproduct.
3.4.6. Example. In the category of sets and maps (functions) the product and the coproduct are
not the same.
3.4.7. Proposition. Let U , V , and W be vector spaces. If U ≅ W , then U ⊕ V ≅ W ⊕ V .
3.4.8. Example. The converse of the preceding proposition is not true.
3.4.9. Definition. Let V₀, V₁, V₂, . . . be vector spaces (over the same field). Then their (ex-
ternal) direct sum, which is denoted by ⊕_{k=0}^∞ V_k, is defined to be the set of all functions
v : Z⁺ → ⋃_{k=0}^∞ V_k with finite support such that v(k) = v_k ∈ V_k for each k ∈ Z⁺. The usual
pointwise addition and scalar multiplication make this set into a vector space.
3.5. Quotients
If T : V → W is a linear map between vector spaces and M is a subspace of V contained in ker T ,
then there exists a unique linear map T̃ : V /M → W which makes the diagram

   V ---------> W
   |         ↗
   π       T̃
   ↓     ↗
  V /M

commute (here π is the quotient map and the top arrow is T ).
Furthermore, T̃ is injective if and only if ker T = M ; and T̃ is surjective if and only if T is.
3.5.6. Corollary. If T : V → W is a linear map between vector spaces, then ran T ≅ V / ker T .
For obvious reasons the next result is usually called the rank-plus-nullity theorem. (It is also
sometimes listed as part of the fundamental theorem of linear algebra.)
3.5.7. Proposition. Let T : V → W be a linear map between vector spaces. If V is finite dimen-
sional, then
rank T + nullity T = dim V.
3.5.8. Corollary. If M is a subspace of a finite dimensional vector space V , then dim V /M =
dim V − dim M .
is said to be exact at Vₙ if ran jₙ = ker jₙ₊₁. A sequence is exact if it is exact at each of its
constituent vector spaces. A sequence of vector spaces and linear maps of the form

          j        k
0 ---> U ---> V ---> W ---> 0
is a short exact sequence. (Here 0 denotes the trivial 0-dimensional vector space, and the
unlabeled arrows are the obvious linear maps.)
3.6.2. Proposition. The sequence

          j        k
0 ---> U ---> V ---> W ---> 0

is exact if and only if j is injective, k is surjective, and ran j = ker k.
3.6.6. Example. Let U and V be vector spaces. Then the following sequence is short exact:

          ι₁            π₂
0 ---> U ---> U ⊕ V ---> V ---> 0.

3.6.7. Proposition. Consider the following diagram of vector spaces and linear maps.

           j        k
0 ---> U ----> V ----> W ----> 0
       |f      |g      |h
       ↓       ↓       ↓
0 ---> U' ---> V' ---> W' ---> 0
           j'       k'

If the rows are exact and the left square commutes, then there exists a unique linear map h : W →
W' which makes the right square commute.
3.6.8. Proposition (The Short Five Lemma). Consider the following diagram of vector spaces
and linear maps

           j        k
0 ---> U ----> V ----> W ----> 0
       |f      |g      |h
       ↓       ↓       ↓
0 ---> U' ---> V' ---> W' ---> 0
           j'       k'
where the rows are exact and the squares commute. Then the following hold.
(a) If g is surjective, so is h.
(b) If f is surjective and g is injective, then h is injective.
(c) If f and h are surjective, so is g.
(d) If f and h are injective, so is g.
3.6.9. Proposition. Show that if

          j        k
0 ---> U ---> V ---> W ---> 0

is an exact sequence of vector spaces and linear maps, then V ≅ U ⊕ W .
Hint for proof . Consider the following diagram and use proposition 3.6.8.

          j         k
0 ---> U ----> V ----> W ---> 0
       |id     |g      |id
       ↓       ↓       ↓
0 ---> U ---> U ⊕ W ---> W ---> 0
          ι₁        π₂

(Here ι₁ is the insertion of U into U ⊕ W , π₂ the projection onto W , and g is the map to be
constructed.)
3.6.14. Proposition. If V₀, V₁, . . . , Vₙ are finite dimensional vector spaces and the sequence

          dₙ                       d₁
0 ---> Vₙ ---> Vₙ₋₁ ---> · · · ---> V₁ ---> V₀ ---> 0

is exact, then ∑ₖ₌₀ⁿ (−1)ᵏ dim Vₖ = 0.
THE SPECTRAL THEOREM FOR VECTOR SPACES
4.1. Projections
Much of mathematical research consists of analyzing complex objects by writing them as a combi-
nation of simpler objects. In the case of vector space operators the simpler objects, the fundamental
building blocks, are projection operators.
4.1.1. Definition. Let V be a vector space. An operator E L(V ) is a projection operator
if it is idempotent; that is, if E 2 = E.
4.1.2. Proposition. If E is a projection operator on a vector space V , then
V = ran E ⊕ ker E.
4.1.3. Proposition. Let V be a vector space and E, F ∈ L(V ). If E + F = I_V and EF = 0, then
E and F are projection operators and V = ran E ⊕ ran F .
4.1.4. Proposition. Let V be a vector space and E₁, . . . , Eₙ ∈ L(V ). If ∑ₖ₌₁ⁿ Eₖ = I_V and
EᵢEⱼ = 0 whenever i ≠ j, then each Eₖ is a projection operator and V = ⊕ₖ₌₁ⁿ ran Eₖ.
4.1.5. Proposition. If E is a projection operator on a vector space V , then ran E = {x ∈ V :
Ex = x}.
4.1.6. Proposition. Let E and F be projection operators on a vector space V . Then E + F = I_V
if and only if EF = F E = 0 and ker E = ran F .
4.1.7. Definition. Let V be a vector space and suppose that V = M ⊕ N . We know from an
earlier theorem 1.5.20 that for each v ∈ V there exist unique vectors m ∈ M and n ∈ N such that
v = m + n. Define a function E_NM : V → V by E_NM v = m. The function E_NM is called the
projection of V along N onto M . (This terminology is, of course, optimistic. We must prove
that E_NM is in fact a projection operator.)
4.1.8. Proposition. If M ⊕ N is a direct sum decomposition of a vector space V , then the function
E_NM defined in 4.1.7 is a projection operator whose range is M and whose kernel is N .
4.1.9. Proposition. If M ⊕ N is a direct sum decomposition of a vector space V , then E_NM +
E_MN = I_V and E_NM E_MN = 0.
4.1.10. Proposition. If E is a projection operator on a vector space V , then there exist M , N ≼ V
such that E = E_NM .
4.1.11. Exercise. Let M be the line y = 2x and N be the y-axis in R². Find [E_MN ] and [E_NM ].
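A numerical sketch of this exercise: resolving (x, y) = m + n with m on the line y = 2x and n on
the y-axis gives m = (x, 2x), so the matrix of E_NM (with respect to the standard basis) is as
below, and E_MN = I − E_NM.

    import numpy as np

    E_NM = np.array([[1.0, 0.0],
                     [2.0, 0.0]])
    E_MN = np.eye(2) - E_NM
    print(np.allclose(E_NM @ E_NM, E_NM))   # True: E_NM is idempotent
    print(np.allclose(E_NM @ E_MN, 0))      # True: consistent with 4.1.9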
4.1.12. Exercise. Let E be the projection of R³ onto the plane 3x − y + 2z = 0 along the z-axis.
Find the matrix representation [E] (of E with respect to the standard basis of R³).
4.1.13. Exercise. Let F be the projection of R³ onto the z-axis along the plane 3x − y + 2z = 0.
Where does F take the point (4, 5, 1)?
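A sketch of 4.1.12 and 4.1.13 (reading the plane as 3x − y + 2z = 0, as above): to project (x, y, z)
along the z-axis onto the plane, subtract t e₃ where 3x − y + 2(z − t) = 0, i.e. t = (3x − y + 2z)/2;
then F = I − E.

    import numpy as np

    E = np.array([[ 1.0, 0.0, 0.0],
                  [ 0.0, 1.0, 0.0],
                  [-1.5, 0.5, 0.0]])      # [E] with respect to the standard basis
    F = np.eye(3) - E                     # projection onto the z-axis along the plane
    print(np.allclose(E @ E, E))          # True
    print(F @ np.array([4.0, 5.0, 1.0]))  # [0.  0.  4.5]

With these signs, F takes (4, 5, 1) to (0, 0, 9/2).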
4.1.14. Exercise. Let P be the plane in R³ whose equation is x + 2y − z = 0 and L be the line
whose equations are x/3 = y = z/2. Let E be the projection of R³ along L onto P and F be the
projection of R³ along P onto L. Find [E] and [F ].
4.2. Algebras
4.2.1. Definition. Let (A, +, M ) be a vector space over a field F which is equipped with another
binary operation · : A × A → A : (a, b) ↦ ab in such a way that (A, +, ·) is a ring. If additionally
the equations
α(ab) = (αa)b = a(αb) (4.1)
hold for all a, b ∈ A and α ∈ F, then (A, +, M, ·) is an algebra over the field F (sometimes
referred to as a linear associative algebra). We abuse notation in the usual way by writing
such things as, "Let A be an algebra." We say that an algebra A is unital if its underlying ring
(A, +, ·) is. And it is commutative if its ring is.
4.2.2. Example. A field may be regarded as an algebra over itself.
4.2.3. Example. If S is a nonempty set, then the vector space F(S, F) (see example 1.4.5) is
a commutative unital algebra under pointwise multiplication, which is defined for all f ,
g ∈ F(S, F) by
(f g)(s) = f (s)g(s)
for all s ∈ S. The constant function 1 (that is, the function whose value at each s ∈ S is 1) is the
multiplicative identity.
4.2.4. Example. If V is a vector space, then the set L(V ) of linear operators on V is a unital
algebra under pointwise addition, pointwise scalar multiplication, and composition.
4.2.5. Notation. In the following material we make the notational convention that if B and C are
subsets of (a ring or) an algebra A, then BC denotes the set of all sums of products of elements in
B and C. That is,
BC := {b₁c₁ + · · · + bₙcₙ : n ∈ N; b₁, . . . , bₙ ∈ B; and c₁, . . . , cₙ ∈ C}.
And, of course, if b ∈ A, then bC = {b}C.
4.2.6. Definition. A map f : A → B between algebras is an (algebra) homomorphism if it
is a linear map between A and B as vector spaces which preserves multiplication (in the sense of
equation (1.2)). In other words, an algebra homomorphism is a linear ring homomorphism. It is
a unital (algebra) homomorphism if it preserves identities (as in (1.3)). The kernel of an
algebra homomorphism f : A → B is, of course, {a ∈ A : f (a) = 0}.
If f⁻¹ exists and is also an algebra homomorphism, then f is an isomorphism from A to B.
If an isomorphism from A to B exists, then A and B are isomorphic.
Here are three essentially obvious facts about algebra homomorphisms.
4.2.7. Proposition. Every bijective algebra (or ring) homomorphism is an isomorphism.
4.2.8. Proposition. If f : A B is an isomorphism between algebras (or rings) and A is unital,
then so is B and f is a unital homomorphism.
an algebra. Thus we will take F[x] to be the polynomial algebra with coefficients in F; it has as its
basis {xⁿ : n = 0, 1, 2, . . . }.
4.5.9. Definition. A nonzero polynomial p, being an element of l_c(Z⁺, A), has finite support. So
there exists n₀ ∈ Z⁺ such that pₙ = 0 whenever n > n₀. The smallest such n₀ is the degree of
the polynomial. We denote it by deg p. A polynomial of degree 0 is a constant polynomial.
The zero polynomial (the additive identity of l(Z⁺, A)) is a special case; while it is also a constant
polynomial, some authors assign it no degree whatever, while others let its degree be −∞.
If p is a polynomial of degree n, then pₙ is the leading coefficient of p. A polynomial is
monic if its leading coefficient is 1.
4.5.10. Example. Let A be a unital commutative algebra. If p is a nonzero polynomial in l_c(Z⁺, A), then
p = Σ_{k=0}^{n} p_k x^k   where n = deg p.
This is the standard form of the polynomial p. Keep in mind that each coefficient p_k belongs to the algebra A. Also notice that it does not really matter whether we write p as Σ_{k=0}^{n} p_k x^k or as Σ_{k=0}^{∞} p_k x^k; so frequently we write just Σ_k p_k x^k.
4.5.11. Remark. Recall that there is occasionally a slight ambiguity in notation for sets. For example, if we consider the (complex) solutions to an algebraic equation E of degree n, we know that, counting multiplicities, there are n solutions to the equation. So it is common practice to write, "Let {x_1, x_2, . . . , x_n} be the set of solutions to E." Notice that in this context there may be repeated elements of the set. The cardinality of the set may be strictly less than n. However, when we encounter the expression, "Let {x_1, x_2, . . . , x_n} be a set of . . . ," it is usually the intention of the author that the elements are distinct, that is, that the cardinality of the set is n.
A similar ambiguity arises in polynomial notation. If, for example, p = Σ_{k=0}^{n} p_k x^k and q = Σ_{k=0}^{n} q_k x^k are both polynomials of degree n, we ordinarily write their sum as p + q = Σ_{k=0}^{n} (p_k + q_k) x^k even though the resulting sum may very well have degree strictly less than n. On the other hand, when one sees, "Consider a polynomial p = Σ_{k=0}^{n} p_k x^k such that . . . ," it is usually intended that p have degree n; that is, that p is written in standard form.
4.5.12. Proposition. If p and q are polynomials with coefficients in a unital commutative algebra A, then
(i) deg(p + q) ≤ max{deg p, deg q}, and
(ii) deg(pq) ≤ deg p + deg q.
If A is a field, then equality holds in (ii).
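The degree inequalities are easy to watch in action. A small sympy sketch (sympy assumed; the two polynomials are arbitrary illustrations, not taken from the text):

import sympy as sp

x = sp.symbols('x')
p = sp.Poly(x**2 + x, x)
q = sp.Poly(-x**2 + 1, x)
print((p + q).degree())   # 1 < max(2, 2): strict inequality can occur in (i)
print((p * q).degree())   # 4 = 2 + 2: equality in (ii), since Q is a field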
4.5.13. Example. If A is a unital commutative algebra, then so is l(A, A) under pointwise operations of addition, multiplication, and scalar multiplication.
4.5.14. Definition. Let A be a unital commutative algebra over a field F. For each polynomial p = Σ_{k=0}^{n} p_k x^k with coefficients in F define
p̃ : A → A : a ↦ Σ_{k=0}^{n} p_k a^k.
Then p̃ is the polynomial function on A determined by the polynomial p. Also for fixed a ∈ A define
τ_a : F[x] → A : p ↦ p̃(a).
The mapping τ_a is the polynomial functional calculus determined by the element a.
It is important to distinguish between the concepts of polynomials with coefficients in an algebra and polynomial functions. Also important is the distinction between the indeterminate x in l(Z⁺, A) and x used as a variable for a polynomial function. (See 4.5.18.)
4.5.15. Exercise. Let A be a unital commutative algebra over a field F. Then for each a ∈ A the polynomial functional calculus τ_a : F[x] → A defined in 4.5.14 is a unital algebra homomorphism.
4.5.16. Proposition. Let A be a unital commutative algebra over a field F. The map
F[x] → l(A, A) : p ↦ p̃
is a unital algebra homomorphism.
4.5.17. Exercise. Under the homomorphism defined in 4.5.16 what is the image of the indeterminate x? Under the homomorphism τ_a (defined in 4.5.14) what is the image of the indeterminate x?
The following example is intended to illustrate the importance of distinguishing between poly-
nomials and polynomial functions.
4.5.18. Example. Let F = {0, 1} be the two-element field. The polynomials p = x + x² + x³ and q = x in the polynomial algebra F[x] show that the homomorphism defined in 4.5.16 need not be injective.
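A two-line check of this example in Python (plain integer arithmetic mod 2; no library needed):

# Over the two-element field, p = x + x^2 + x^3 and q = x are distinct polynomials
# but induce the same polynomial function.
p = lambda a: (a + a**2 + a**3) % 2
q = lambda a: a % 2
print([(a, p(a), q(a)) for a in (0, 1)])   # p and q agree at both points of F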
4.5.19. Proposition. Let A be a unital algebra of finite dimension m over a field F. For every a ∈ A there exists a polynomial p ∈ F[x] such that 1 ≤ deg p ≤ m and p̃(a) = 0.
4.6.4. Notation. If T is an operator on a finite dimensional vector space over a field F, we denote
its minimal polynomial in F[x] by mT .
4.6.5. Proposition. Let V be a finite dimensional vector space over a field F and T ∈ L(V). Then the minimal polynomial m_T for T is unique.
4.6.6. Proposition. An operator T on a finite dimensional vector space is invertible if and only
if the constant term of its minimal polynomial is not zero.
4.6.7. Exercise. Explain how, for an invertible operator T on a finite dimensional vector space,
we can write its inverse as a polynomial in T .
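One way to experiment with 4.6.6 and 4.6.7 numerically is to hunt for the first linear dependence among I, T, T², . . . ; this yields the monic minimal polynomial, and its nonzero constant term then expresses T⁻¹ as a polynomial in T. A Python sketch (numpy assumed; the matrix is a hypothetical example, and the dependence test is numerical rather than exact):

import numpy as np

def minimal_polynomial(T, tol=1e-9):
    """Coefficients c_0, ..., c_k (monic) of the minimal polynomial of T."""
    n = T.shape[0]
    powers = [np.eye(n).ravel()]                 # vectorized I, T, T^2, ...
    for k in range(1, n + 1):
        v = np.linalg.matrix_power(T, k).ravel()
        M = np.column_stack(powers)
        c, *_ = np.linalg.lstsq(M, v, rcond=None)
        if np.linalg.norm(M @ c - v) < tol:      # T^k depends on lower powers
            return np.concatenate([-c, [1.0]])
        powers.append(v)

T = np.array([[3.0, 1.0], [0.0, 3.0]])           # hypothetical example matrix
m = minimal_polynomial(T)                        # (t - 3)^2, i.e. [9, -6, 1]
k = len(m) - 1
# From m_0 I + m_1 T + ... + T^k = 0 and m_0 != 0 (see 4.6.6):
Tinv = -sum(m[j] * np.linalg.matrix_power(T, j - 1) for j in range(1, k + 1)) / m[0]
assert np.allclose(Tinv, np.linalg.inv(T))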
4.6.8. Definition. If F is a field and p, p_1 ∈ F[x], we say that p_1 divides p if there exists q ∈ F[x] such that p = p_1 q.
4.6.9. Proposition. Let T be an operator on a finite dimensional vector space over a field F. If p ∈ F[x] and p̃(T) = 0, then m_T divides p.
0 → J_{m_T} → F[x] → ran τ_T → 0
is exact. Furthermore, the dimension of the range of the functional calculus associated with the operator T is the degree of its minimal polynomial.
4.6.17. Definition. Let t_0, t_1, . . . , t_n be distinct elements of a field F. For 0 ≤ k ≤ n define p_k ∈ F[x] by
p_k = Π_{j=0, j≠k}^{n} (x - t_j)/(t_k - t_j).
4.6.18. Proposition (Lagrange Interpolation Formula). The polynomials defined in 4.6.17 form a basis for the vector space V of all polynomials with coefficients in F and degree less than or equal to n, and for each polynomial q ∈ V
q = Σ_{k=0}^{n} q(t_k) p_k.
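The basis property in 4.6.18 can be checked exactly with rational arithmetic. A Python sketch (standard library only; the nodes and the test polynomial are arbitrary choices, not the text's):

from fractions import Fraction

ts = [-1, 0, 1, 2]                       # hypothetical distinct nodes t_0, ..., t_n

def p(k, x):
    """The k-th Lagrange basis polynomial of 4.6.17, evaluated at x."""
    out = Fraction(1)
    for j, t in enumerate(ts):
        if j != k:
            out *= Fraction(x - t, ts[k] - t)
    return out

q = lambda x: 2*x**3 - x + 5             # any polynomial of degree <= n
for x in range(-3, 4):                   # q = sum_k q(t_k) p_k, at every test point
    assert sum(q(t) * p(k, x) for k, t in enumerate(ts)) == q(x)
print("Lagrange interpolation formula verified")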
4.6.19. Exercise. Use the Lagrange Interpolation Formula to find the polynomial with coefficients in R and degree no greater than 3 whose values at -1, 0, 1, and 2 are, respectively, 6, 2, -2, and 6.
4.6.20. Proposition. Let F be a field and p, q, and r be polynomials in F[x]. If p is a prime in
F[x] and p divides qr, then p divides q or p divides r.
4.6.21. Proposition. Let F be a field. Then every nonzero ideal in F[x] is principal.
Hint for proof . If J is a nonzero ideal in F[x] consider the principal ideal generated by any
member of J of smallest degree.
4.6.22. Definition. Let p_1, . . . , p_n be polynomials, not all zero, with coefficients in a field F. A monic polynomial d such that d divides each p_k (k = 1, . . . , n) and such that any polynomial which divides each p_k also divides d is the greatest common divisor of the p_k's. The polynomials p_k are relatively prime if their greatest common divisor is 1.
4.6.23. Proposition. Any finite set of polynomials (not all zero) with coefficients in a field F has a greatest common divisor.
4.6.24. Theorem (Unique Factorization). Let F be a field. A nonconstant monic polynomial in F[x] can be factored in exactly one way (except for the order of the factors) as a product of monic primes in F[x].
4.6.25. Definition. Let F be a field and p(x) ∈ F[x]. An element r ∈ F is a root of p(x) if p(r) = 0.
4.6.26. Proposition. Let F be a field and p(x) ∈ F[x]. Then r is a root of p(x) if and only if x - r is a factor of p(x).
4.7.10. Proposition. Let M and N be complementary subspaces of a vector space V (that is, V is the direct sum of M and N) and let T be an operator on V. If M is invariant under T, then M^⊥ is invariant under T^t, and if T is reduced by the pair (M, N), then T^t is reduced by the pair (M^⊥, N^⊥).
If 𝒯 ⊆ L(V) let
Lat 𝒯 := ∩_{T ∈ 𝒯} Lat T.
We say that Lat T (or Lat 𝒯) is trivial if it contains only the trivial invariant subspaces {0} and V.
Hint for proof. For dim V ≥ 2 let M be a nonzero proper subspace of V. Choose nonzero vectors x ∈ M and y ∈ M^c. Define T : V → V : v ↦ f(v)y where f is a functional in V∗ such that f(x) = 1.
4.8.5. Definition. Let V be a vector space. A subalgebra A of L(V) is transitive if for every x ≠ 0 and y in V there exists an operator T in A such that y = Tx.
4.8.6. Proposition. Let V be a vector space. A subalgebra A of L(V ) is transitive if and only if
Lat A is trivial.
4.8.8. Example. The field C of complex numbers is algebraically closed; the field R of real numbers
is not.
4.8.9. Theorem (Burnside's Theorem). Let V be a finite dimensional vector space over an algebraically closed field. Then L(V) has no proper subalgebra which is transitive.
4.8.10. Corollary. Let V be a finite dimensional complex vector space of dimension at least 2.
Then every proper subalgebra of L(V ) has a nontrivial invariant subspace.
4.8.11. Example. The preceding result does not hold for real vector spaces.
be the projection onto M_k along the complementary subspace N_k. The projections E_1, . . . , E_n are the projections associated with the direct sum decomposition V = M_1 ⊕ · · · ⊕ M_n.
4.9.3. Proposition. If M_1 ⊕ · · · ⊕ M_n is a direct sum decomposition of a vector space V, then the family {E_1, E_2, . . . , E_n} of the associated projections is a resolution of the identity.
In the following definition we make use of the familiar notion of the determinant of a matrix
even though we have not yet developed the theory of determinants. We will eventually do this.
4.9.4. Definition. Let V be a vector space over a field F and T ∈ L(V). An element λ ∈ F is an eigenvalue of T if ker(T - λI) ≠ {0}. The collection of all eigenvalues of T is its point spectrum, denoted by σ_p(T).
4.9.5. Definition. If F is a field and A is an n × n matrix of elements of F, we define the characteristic polynomial c_A of A to be the determinant of A - xI. (Note that det(A - xI) is a polynomial.) Some authors prefer the characteristic polynomial to be monic, and consequently define it to be the determinant of xI - A. As you would expect, the characteristic polynomial c_T of an operator T on a finite dimensional space (with basis B) is the characteristic polynomial of the matrix representation of that operator (with respect to B). Making use of some standard facts (which we have not yet proved) about determinants (see section 7.3) we see that λ ∈ F is an eigenvalue of the matrix A (or of its associated linear transformation) if and only if it is a root of the characteristic polynomial c_A.
4.9.6. Proposition. If T is an operator on a finite dimensional vector space, then σ_p(T) = σ(T).
4.9.7. Exercise. Let
A = [ 1 1 1 ]
    [ 1 1 1 ]
    [ 1 1 1 ].
The characteristic polynomial of A is λ^p (λ - 3)^q where p = ___ and q = ___ .
The minimal polynomial of A is λ^r (λ - 3)^s where r = ___ and s = ___ .
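The blanks can be checked mechanically. A quick numpy sketch (note that np.poly uses the monic convention det(xI - A), the alternative mentioned in 4.9.5):

import numpy as np

A = np.ones((3, 3))
print(np.linalg.eigvals(A))   # 3, 0, 0: the roots of the characteristic polynomial
print(np.poly(A))             # [1, -3, 0, 0], i.e. x^3 - 3x^2 = x^2 (x - 3)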
4.9.8. Exercise. Let T be the operator on R⁴ whose matrix representation is
[  0  1  0  1 ]
[ -2  3  0  1 ]
[ -2  1  2  1 ]
[ -2  1  0  3 ].
The characteristic polynomial of T is (λ - 2)^p where p = ___ .
4.9.25. Proposition. If two matrices A and B are similar, then they have the same spectrum.
Hint for proof . You may use familiar facts about determinants that we have not yet proved.
4.9.26. Definition. An operator on a vector space is nilpotent if some power of the operator
is 0.
4.9.27. Proposition. An operator T on a finite dimensional complex vector space is nilpotent if and only if σ(T) = {0}.
4.9.28. Notation. Let λ_1, . . . , λ_n be elements of a field F. Then diag(λ_1, . . . , λ_n) denotes the n × n matrix whose entries are all zero except on the main diagonal where they are λ_1, . . . , λ_n. Such a matrix is a diagonal matrix.
4.9.29. Definition. Let V be a vector space of finite dimension n. An operator T on V is
diagonalizable if it has n linearly independent eigenvectors (or, equivalently, if V has a basis of
eigenvectors of T ).
4.9.30. Proposition. Let A be an n × n matrix with entries from a field F. Then A, regarded as an operator on Fⁿ, is diagonalizable if and only if it is similar to a diagonal matrix.
4.11.3. Corollary. Every operator on a finite dimensional complex vector space can be written as
the sum of two commuting operators, one diagonalizable and the other nilpotent.
4.11.4. Exercise. Let T be the operator on R² whose matrix representation is
[  2  1 ]
[ -1  4 ].
(a) Explain briefly why T is not diagonalizable.
(b) Find the diagonalizable and nilpotent parts of T.
Answer: D = [ a b ]       N = [ -c  c ]
            [ b a ]  and      [ -c  c ]
where a = ___ , b = ___ , and c = ___ .
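For experimentation, sympy's Jordan form gives the decomposition directly: split off the diagonal of J and conjugate back. A sketch (sympy assumed; the matrix is the one displayed above, whose lost minus sign was recovered from the answer template):

import sympy as sp

T = sp.Matrix([[2, 1], [-1, 4]])
P, J = T.jordan_form()                      # T = P J P^{-1}
D = P * sp.diag(*J.diagonal()) * P.inv()    # diagonalizable part
N = T - D                                   # nilpotent part
print(D, N, N**2)                           # D = 3I and N^2 = 0
assert sp.simplify(D*N - N*D) == sp.zeros(2, 2)   # D and N commute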
4.11.5. Exercise. Let T be the operator on R³ whose matrix representation is
[  0  0  3 ]
[  2 -1  2 ]
[ -2  1 -5 ].
(a) Find D and N, the diagonalizable and nilpotent parts of T. Express these as polynomials in T.
(b) Find a matrix S which diagonalizes D.
(c) Let
[D_1] = [ -2  1  1 ]
        [  1 -2  1 ]
        [  1  1 -2 ]
and
[N_1] = [  2 -1  2 ]
        [  1  1  1 ]
        [ -3  0 -3 ].
Show that D_1 is diagonalizable, that N_1 is nilpotent, and that T = D_1 + N_1. Why does this not contradict the uniqueness claim made in theorem 4.11.2?
4.11.6. Exercise. Let T be the operator on R⁴ whose matrix representation is
[  0  1  0  1 ]
[ -2  3  0  1 ]
[ -2  1  2  1 ]
[ -2  1  0  3 ].
(a) The characteristic polynomial of T is (λ - 2)^p where p = ___ .
(b) The minimal polynomial of T is (λ - 2)^r where r = ___ .
(c) The diagonalizable part of T is
D = [ a b b b ]
    [ b a b b ]
    [ b b a b ]
    [ b b b a ]
where a = ___ and b = ___ .
(d) The nilpotent part of T is
N = [ a b c b ]
    [ a b c b ]
    [ a b c b ]
    [ a b c b ]
where a = ___ , b = ___ , and c = ___ .
4.11.7. Exercise. Let T be the operator on R⁵ whose matrix representation is
[ 1 0 0 1 1 ]
[ 0 1 2 3 3 ]
[ 0 0 1 2 2 ]
[ 1 1 1 0 1 ]
[ 1 1 1 1 2 ].
(a) Find the characteristic polynomial of T.
Answer: c_T(λ) = (λ + 1)^p (λ - 1)^q where p = ___ and q = ___ .
(b) Find the minimal polynomial of T.
Answer: m_T(λ) = (λ + 1)^r (λ - 1)^s where r = ___ and s = ___ .
(c) Find the eigenspaces V_1 and V_2 of T.
Answer: V_1 = span{(a, 1, b, a, a)} where a = ___ and b = ___ ; and
V_2 = span{(1, a, b, b, b), (b, b, b, 1, a)} where a = ___ and b = ___ .
In this chapter all vector spaces (and algebras) have complex or real scalars.
5.1.5. Definition. When a (real or complex) vector space has been equipped with an inner product we define the norm of a vector x by
‖x‖ := √⟨x, x⟩.
(This somewhat optimistic terminology is justified in proposition 5.1.13 below.)
5.1.6. Theorem. In every inner product space the Schwarz inequality
|⟨x, y⟩| ≤ ‖x‖ ‖y‖
holds for all vectors x and y.
Hint for proof. Let V be an inner product space and fix vectors x, y ∈ V. For every scalar α we know that
0 ≤ ⟨x - αy, x - αy⟩.     (5.1)
Expand the right hand side of (5.1) into four terms and write ⟨y, x⟩ in polar form: ⟨y, x⟩ = re^{iθ}, where r > 0 and θ ∈ R. Then in the resulting inequality consider those α of the form te^{iθ} where t ∈ R. Notice that now the right side of (5.1) is a quadratic polynomial in t. What can you say about its discriminant?
5.1.7. Exercise. If a_1, . . . , a_n > 0, then
( Σ_{j=1}^{n} a_j ) ( Σ_{k=1}^{n} 1/a_k ) ≥ n².
The proof of this is obvious from the Schwarz inequality if we choose x and y to be what?
5.1.8. Exercise. In this exercise notice that part (a) is a special case of part (b).
(a) Show that if a, b, c > 0, then ((1/2)a + (1/3)b + (1/6)c)² ≤ (1/2)a² + (1/3)b² + (1/6)c².
(b) Show that if a_1, . . . , a_n, w_1, . . . , w_n > 0 and Σ_{k=1}^{n} w_k = 1, then
( Σ_{k=1}^{n} a_k w_k )² ≤ Σ_{k=1}^{n} a_k² w_k.
5.1.9. Exercise. Show that if Σ_{k=1}^{∞} a_k² converges, then Σ_{k=1}^{∞} (1/k) a_k converges absolutely.
5.1.10. Example. A sequence (a_k) of (real or) complex numbers is said to be square summable if Σ_{k=1}^{∞} |a_k|² < ∞. The vector space of all square summable sequences of real numbers (respectively, complex numbers) is denoted by l²(R) (respectively, l²(C)). When no confusion will result, both are denoted by l². If a, b ∈ l², define
⟨a, b⟩ = Σ_{k=1}^{∞} a_k b̄_k.
(It must be shown that this definition makes sense and that it makes l² into an inner product space.)
5.1.11. Definition. Let V be a complex (or real) vector space. A function ‖·‖ : V → R : x ↦ ‖x‖ is a norm on V if
(i) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y ∈ V;
(ii) ‖αx‖ = |α| ‖x‖ for all x ∈ V and α ∈ C (or R); and
(iii) if ‖x‖ = 0, then x = 0.
The expression ‖x‖ may be read as "the norm of x" or "the length of x".
A vector space on which a norm has been defined is a normed linear space (or normed vector space). A vector in a normed linear space which has norm 1 is a unit vector.
5.1.12. Proposition. If ‖·‖ is a norm on a vector space V, then ‖x‖ ≥ 0 for every x ∈ V and ‖0‖ = 0.
As promised in definition 5.1.5 we can verify the (somewhat obvious) fact that every inner product space is a normed linear space (and therefore a topological, in fact a metric, space).
5.1.13. Proposition. Let V be an inner product space. The map x ↦ ‖x‖ defined on V in 5.1.5 is a norm on V.
5.1.14. Proposition (The parallelogram law). If x and y are vectors in an inner product space, then
‖x + y‖² + ‖x - y‖² = 2‖x‖² + 2‖y‖².
5.1.15. Example. Consider the space C([0, 1]) of continuous complex valued functions defined on [0, 1]. Under the uniform norm ‖f‖_u := sup{|f(x)| : 0 ≤ x ≤ 1} the vector space C([0, 1]) is a normed linear space, but there is no inner product on C([0, 1]) which induces this norm.
Hint for proof. Use the preceding proposition.
5.1.16. Proposition (The polarization identity). If x and y are vectors in a complex inner product space, then
⟨x, y⟩ = (1/4)(‖x + y‖² - ‖x - y‖² + i‖x + iy‖² - i‖x - iy‖²).
5.1.17. Exercise. What is the corresponding formula for the polarization identity in a real inner product space?
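Both the polarization identity and the parallelogram law are easy to test numerically, and the failure of the law for the uniform norm (example 5.1.15) can be seen by hand. A Python sketch (numpy assumed; the inner product is taken linear in the first variable, as in this chapter):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)
ip = lambda u, v: np.vdot(v, u)            # <u, v>
n = lambda u: np.sqrt(ip(u, u).real)       # ||u||

rhs = (n(x+y)**2 - n(x-y)**2 + 1j*n(x+1j*y)**2 - 1j*n(x-1j*y)**2) / 4
assert np.isclose(ip(x, y), rhs)                                  # polarization
assert np.isclose(n(x+y)**2 + n(x-y)**2, 2*n(x)**2 + 2*n(y)**2)   # parallelogram

# By contrast, for the uniform norm on C([0,1]) with f = 1 and g = x:
# ||f+g||^2 + ||f-g||^2 = 4 + 1 = 5, but 2||f||^2 + 2||g||^2 = 4.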
5.2. Orthogonality
5.2.1. Definition. Vectors x and y in an inner product space H are orthogonal (or perpendicular) if ⟨x, y⟩ = 0. In this case we write x ⊥ y. Subsets A and B of H are orthogonal if a ⊥ b for every a ∈ A and b ∈ B. In this case we write A ⊥ B.
5.2.2. Proposition. Let a be a vector in an inner product space H. Then a ⊥ x for every x ∈ H if and only if a = 0.
5.2.3. Proposition (The Pythagorean theorem). If x ⊥ y in an inner product space, then
‖x + y‖² = ‖x‖² + ‖y‖².
5.2.4. Definition. If M and N are subspaces of an inner product space H we use the notation H = M ⊕ N to indicate not only that H is the sum of M and N but also that M and N are orthogonal. Thus we say that H is the (internal) orthogonal direct sum of M and N.
5.2.5. Proposition. If M and N are subspaces of an inner product space H and H is the orthogonal direct sum of M and N, then it is also the vector space direct sum of M and N.
As is the case with vector spaces in general, we make a distinction between internal and external
direct sums.
5.2.6. Definition. Let V and W be inner product spaces. For (v, w) and (v′, w′) in V × W and α ∈ C define
(v, w) + (v′, w′) = (v + v′, w + w′)
and
α(v, w) = (αv, αw).
This results in a vector space, which is the (external) direct sum of V and W. To make it into an inner product space define
⟨(v, w), (v′, w′)⟩ = ⟨v, v′⟩ + ⟨w, w′⟩.
This makes the direct sum of V and W into an inner product space. It is the (external orthogonal) direct sum of V and W and is denoted by V ⊕ W.
58 5. THE SPECTRAL THEOREM FOR INNER PRODUCT SPACES
5.2.7. CAUTION. Notice that the same notation ⊕ is used for both internal and external direct sums and for both vector space direct sums (see definitions 1.5.14 and 3.4.3) and orthogonal direct sums. So when we see the symbol V ⊕ W it is important to be alert to context, to know which category we are in: vector spaces or inner product spaces, especially as it is common practice to omit the word "orthogonal" as a modifier to "direct sum" even in cases when it is intended.
5.2.8. Example. In R² let M be the x-axis and L be the line whose equation is y = x. If we think of R² as a (real) vector space, then it is correct to write R² = M ⊕ L. If, on the other hand, we regard R² as a (real) inner product space, then R² ≠ M ⊕ L (because M and L are not perpendicular).
5.2.9. Notation. Let V be an inner product space, x ∈ V, and A, B ⊆ V. If x ⊥ a for every a ∈ A, we write x ⊥ A; and if a ⊥ b for every a ∈ A and b ∈ B, we write A ⊥ B. We define A^⊥, the orthogonal complement of A, to be {x ∈ V : x ⊥ A}. We write A^⊥⊥ for (A^⊥)^⊥.
5.2.10. CAUTION. The superscript ⊥ is here used quite differently than in our study of vector spaces (see 2.6.1). These two uses are, however, by no means unrelated! It is an instructive exercise to make explicit exactly what this relationship is.
5.2.11. Proposition. If A is a subset of an inner product space V, then A^⊥ is a subspace of V and A^⊥ = (span A)^⊥. Furthermore, if A ⊆ B ⊆ V, then B^⊥ ⊆ A^⊥.
5.2.12. Definition. When a nonzero vector x in an inner product space V is divided by its norm, the resulting vector u = x/‖x‖ is clearly a unit vector. We say that u results from normalizing the vector x. A subset E of V is orthonormal if every pair of distinct vectors in E are orthogonal and every vector in E has length one. If, in addition, V is the span of E, then E is an orthonormal basis for V.
5.2.13. Definition. Let V be an inner product space and E = {e^1, e^2, . . . , e^n} be a finite orthonormal subset of V. For each k ∈ N_n let x_k := ⟨x, e^k⟩. This scalar is called the Fourier coefficient of x with respect to E. The vector s := Σ_{k=1}^{n} x_k e^k is the Fourier sum of x with respect to E.
5.2.14. Proposition. Let notation be as in 5.2.13. Then s = x if and only if x ∈ span E.
5.2.15. Proposition. Let notation be as in 5.2.13. Then x - s ⊥ e^k for k = 1, . . . , n and therefore x - s ⊥ s.
The next result gives us a recipe for converting a finite linearly independent subset of an inner
product space into an orthonormal basis for the span of that set.
5.2.16. Theorem (Gram-Schmidt Orthonormalization). Let A = {a^1, a^2, . . . , a^n} be a finite linearly independent subset of an inner product space V. Define vectors e^1, . . . , e^n recursively by setting
e^1 := ‖a^1‖⁻¹ a^1
and for 2 ≤ m ≤ n
e^m := ‖a^m - s^m‖⁻¹ (a^m - s^m)
where s^m := Σ_{k=1}^{m-1} ⟨a^m, e^k⟩ e^k is the Fourier sum for a^m with respect to E_{m-1} := {e^1, . . . , e^{m-1}}. Then E_n is an orthonormal basis for the span of A.
It should be clear from the proof of the preceding theorem that finiteness plays no essential
role. The theorem remains true for countable linearly independent sets (as does its proof).
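The recursion in theorem 5.2.16 translates directly into code. A numpy sketch (the input rows are an arbitrary independent set, not from the text):

import numpy as np

def gram_schmidt(A):
    """Orthonormalize the rows of A by the recursion of theorem 5.2.16."""
    es = []
    for a in A:
        s = sum(np.dot(a, e) * e for e in es)   # Fourier sum of a w.r.t. E_{m-1}
        v = a - s
        es.append(v / np.linalg.norm(v))
    return np.array(es)

A = np.array([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
E = gram_schmidt(A)
print(np.round(E @ E.T, 10))   # the identity matrix: the rows are orthonormal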
5.2.17. Corollary. Every finite dimensional inner product space has an orthonormal basis.
5.2.18. Example. Let R[x] be the inner product space of real polynomials whose inner product is defined by
⟨p, q⟩ := ∫_{-1}^{1} p̃(x) q̃(x) dx
for all p, q ∈ R[x]. Application of the Gram-Schmidt process to the set {1, x, x², x³, . . . } of real polynomials produces an orthonormal sequence of polynomials known as the Legendre polynomials. Compute the first four of these.
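Symbolic integration makes this computation painless. A sympy sketch (it produces the orthonormalized versions; the classical Legendre polynomials are the rescalings taking the value 1 at x = 1):

import sympy as sp

x = sp.symbols('x')
ip = lambda p, q: sp.integrate(p * q, (x, -1, 1))
es = []
for a in [1, x, x**2, x**3]:
    v = sp.expand(a - sum(ip(a, e) * e for e in es))   # Gram-Schmidt step
    es.append(sp.simplify(v / sp.sqrt(ip(v, v))))
print(es)   # sqrt(2)/2, sqrt(6)*x/2, and two more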
5.2.19. Example. Let R[x] be the inner product space of real polynomials whose inner product is defined by
⟨p, q⟩ := ∫_{0}^{∞} p̃(x) q̃(x) e^{-x} dx
for all p, q ∈ R[x]. Application of the Gram-Schmidt process to the set {1, x, x², x³, . . . } of real polynomials produces an orthonormal sequence of polynomials known as the Laguerre polynomials. Compute the first four of these.
Hint for proof. Integration by parts or familiarity with the gamma function allows us to conclude that ∫_{0}^{∞} x^n e^{-x} dx = n! for each n ∈ N.
5.2.20. Proposition. If M is a subspace of a finite dimensional inner product space V, then V = M ⊕ M^⊥.
5.2.21. Example. The subspace l_c(N, R) of the inner product space l²(R) (see example 5.1.10) shows that the preceding proposition does not hold for infinite dimensional spaces.
5.2.22. Proposition. Let M be a subspace of an inner product space V. Then
(a) M ⊆ M^⊥⊥;
(b) equality need not hold in (a); but
(c) if V is finite dimensional, then M = M^⊥⊥.
5.2.23. Proposition. If S is a set of mutually perpendicular vectors in an inner product space and 0 ∉ S, then the set S is linearly independent.
5.2.24. Proposition. Let M and N be subspaces of an inner product space V. Then
(a) (M + N)^⊥ = (M ∪ N)^⊥ = M^⊥ ∩ N^⊥ and
(b) if V is finite dimensional, then (M ∩ N)^⊥ = M^⊥ + N^⊥.
5.2.25. Proposition. Let S, T : H → K be linear maps between inner product spaces H and K. If ⟨Sx, y⟩ = ⟨Tx, y⟩ for every x ∈ H and y ∈ K, then S = T.
5.2.26. Proposition. If H is a complex inner product space and T ∈ L(H) satisfies ⟨Tz, z⟩ = 0 for all z ∈ H, then T = 0.
Hint for proof. In the hypothesis replace z first by x + y and then by x + iy.
5.2.27. Example. Give an example to show that the preceding result does not hold for real inner
product spaces.
5.2.28. Example. Let H be a complex inner product space and a ∈ H. The function defined on H by x ↦ ⟨x, a⟩ is a linear functional on H.
5.2.29. Theorem (Riesz-Fréchet Theorem). If f is a linear functional on a finite dimensional inner product space H, then there exists a unique vector a ∈ H such that
f(x) = ⟨x, a⟩
for every x ∈ H.
5.3.6. Proposition. Let a be an element of a unital ∗-algebra. Then λ ∈ σ(a) if and only if λ̄ ∈ σ(a∗).
5.3.7. Definition. An element a of a complex ∗-algebra A is normal if a∗a = aa∗. It is self-adjoint (or Hermitian) if a∗ = a. It is skew-Hermitian if a∗ = -a. And it is unitary if a∗a = aa∗ = 1. The set of all self-adjoint elements of A is denoted by H(A), the set of all normal elements by N(A), and the set of all unitary elements by U(A).
Oddly, and perhaps somewhat confusingly, history has dictated an alternative, but parallel, language for real algebras, especially algebras of matrices and linear maps. An element a of a real ∗-algebra A is symmetric if a∗ = a. It is skew-symmetric if a∗ = -a. And it is orthogonal if a∗a = aa∗ = 1.
5.3.8. Example. Complex conjugation is an involution on the algebra C of complex numbers.
5.3.9. Example. Transposition (see definition 1.7.31) is an involution on the real algebra M_n of n × n matrices.
5.3.10. Example. Let a < b. The map f ↦ f̄ taking a function to its complex conjugate is an involution on the complex algebra C([a, b], C) of continuous complex valued functions on [a, b].
5.3.11. Proposition. For every element a of a ∗-algebra A there exist unique self-adjoint elements u and v in A such that a = u + iv.
Hint for proof. Consider u = (1/2)(a + a∗) and v = (1/2i)(a - a∗). The self-adjoint element u is called the real part of a and v the imaginary part of a.
5.3.12. Corollary. An element of a ∗-algebra is normal if and only if its real part and its imaginary part commute.
5.3.13. Definition. Let H and K be complex inner product spaces and T : H → K be a linear map. If there exists a function T∗ : K → H which satisfies
⟨Tx, y⟩ = ⟨x, T∗y⟩
for all x ∈ H and y ∈ K, then T∗ is the adjoint of T. If a linear map T has an adjoint we say that T is adjointable. Denote the set of all adjointable maps from H to K by A(H, K) and write A(H) for A(H, H).
When H and K are real vector spaces, the adjoint of T is usually called the transpose of T and the notation T^t is used (rather than T∗).
5.3.14. Proposition. Let T : H → K be a linear map between complex inner product spaces. If the adjoint of T exists, then it is unique. (That is, there is at most one function T∗ : K → H that satisfies ⟨Tx, y⟩ = ⟨x, T∗y⟩ for all x ∈ H and y ∈ K.)
Similarly, of course, if T : H → K is a linear map between real inner product spaces and if the transpose of T exists, then it is unique.
5.3.15. Example. Let C = C([0, 1]) be the inner product space defined in example 5.1.4 and J_0 = {f ∈ C : f(0) = 0}. Then the inclusion map ι : J_0 → C is an example of a map which is not adjointable.
5.3.16. Example. Let U be the unilateral shift operator on l² (see example 5.1.10)
U : l² → l² : (x_1, x_2, x_3, . . . ) ↦ (0, x_1, x_2, . . . );
then its adjoint is given by
U∗ : l² → l² : (x_1, x_2, x_3, . . . ) ↦ (x_2, x_3, x_4, . . . ).
5.3.17. Example (Multiplication operators). Let φ be a fixed continuous complex valued function on the interval [a, b]. On the inner product space C = C([a, b], C) (see example 5.1.4) define
M_φ : C → C : f ↦ φf.
Then M_φ is an adjointable operator on C.
5.3.18. Proposition. Let T : H → K be a linear map between complex inner product spaces. If the adjoint of T exists, then it is linear.
And, similarly, of course, if the transpose of a linear map between real inner product spaces
exists, then it is linear. In the sequel we will forgo the dubious helpfulness of mentioning every
obvious real analog of results holding for complex inner product spaces.
5.3.19. Proposition. Let T : H → K be a linear map between complex inner product spaces. If the adjoint of T exists, then so does the adjoint of T∗ and T∗∗ = T.
5.3.20. Proposition. Let S : H → K and T : K → L be linear maps between complex inner product spaces. If S and T both have adjoints, then so does their composite T S and
(T S)∗ = S∗ T∗.
5.3.21. Proposition. If T : H → K is an invertible linear map between complex inner product spaces and both T and T⁻¹ have adjoints, then T∗ is invertible and (T∗)⁻¹ = (T⁻¹)∗.
5.3.22. Proposition. Let S and T be operators on a complex inner product space H. Then (S + T)∗ = S∗ + T∗ and (αT)∗ = ᾱT∗ for every α ∈ C.
5.3.23. Example. If H is a complex inner product space, then A(H) is a unital complex ∗-algebra. It is a unital subalgebra of L(H).
5.3.24. Theorem. Let T be an adjointable operator on an inner product space H. Then
(a) ker T∗ = (ran T)^⊥ and
(b) ran T∗ ⊆ (ker T)^⊥. If H is finite dimensional, then equality holds in (b).
5.3.25. Theorem. Let T be an adjointable operator on an inner product space H. Then
(a) ker T = (ran T∗)^⊥ and
(b) ran T ⊆ (ker T∗)^⊥. If H is finite dimensional, then equality holds in (b).
5.3.26. Proposition. Every linear map between finite dimensional complex inner product spaces
is adjointable.
Hint for proof. Use the Riesz-Fréchet theorem.
5.3.27. Corollary. If H is a finite dimensional inner product space then L(H) is a unital ∗-algebra.
5.3.28. Proposition. An adjointable operator T on a complex inner product space H is unitary if and only if it preserves inner products (that is, if and only if ⟨Tx, Ty⟩ = ⟨x, y⟩ for all x, y ∈ H).
Similarly, an adjointable operator on a real inner product space is orthogonal if and only if it preserves inner products.
5.3.29. Exercise. Let T : H → K be a linear map between finite dimensional complex inner product spaces. Find the matrix representation of T∗ in terms of the matrix representation of T. Also, for a linear map T : H → K between finite dimensional real inner product spaces find the matrix representation of T^t in terms of the matrix representation of T.
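With respect to orthonormal bases one expects the conjugate transpose, and that guess can be confirmed numerically. A numpy sketch (random matrices and vectors, chosen only for illustration):

import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))   # T : C^2 -> C^3
Tstar = T.conj().T                                           # candidate for [T*]
ip = lambda u, v: np.vdot(v, u)                              # <u, v>
for _ in range(5):
    x = rng.normal(size=2) + 1j * rng.normal(size=2)
    y = rng.normal(size=3) + 1j * rng.normal(size=3)
    assert np.isclose(ip(T @ x, y), ip(x, Tstar @ y))        # <Tx, y> = <x, T*y>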
5.3.30. Proposition. Every eigenvalue of a Hermitian operator on a complex inner product space is real.
Hint for proof. Let x be an eigenvector associated with an eigenvalue λ of an operator A. Consider λ‖x‖².
5.3.31. Proposition. Let A be a Hermitian operator on a complex inner product space. Prove that eigenvectors associated with distinct eigenvalues of A are orthogonal.
Hint for proof. Let x and y be eigenvectors associated with distinct eigenvalues λ and μ of A. Start your proof by showing that λ⟨x, y⟩ = μ⟨x, y⟩.
5.3.32. Proposition. Let N be a normal operator on a complex inner product space H. Then ‖Nx‖ = ‖N∗x‖ for every x ∈ H.
5.4.11. Proposition. Let p and q be projections in a ∗-algebra. Then the following are equivalent:
(a) pq = p;
(b) qp = p;
(c) q - p is a projection.
5.4.12. Definition. Let p and q be projections in a ∗-algebra. If any of the conditions in the preceding result holds, then we write p ≤ q. In this case we say that p is a subprojection of q or that p is smaller than q.
5.4.13. Proposition. If A is a ∗-algebra, then the relation ≤ defined in 5.4.12 is a partial ordering on P(A). If A is unital, then 0 ≤ p ≤ 1 for every p ∈ P(A).
5.4.14. Notation. If H, M, and N are subspaces of an inner product space, then the assertion H = M ⊕ N may be rewritten as M = H ⊖ N (or N = H ⊖ M).
5.4.15. Proposition. Let P and Q be projections on an inner product space H. Then the following are equivalent:
(a) P ≤ Q;
(b) ‖Px‖ ≤ ‖Qx‖ for all x ∈ H; and
(c) ran P ⊆ ran Q.
In this case Q - P is a projection whose kernel is ran P + ker Q and whose range is ran Q ⊖ ran P.
The next two results are optional: they will not be used in the sequel.
5.4.16. Proposition. Suppose p and q are projections in a ∗-algebra A. If pq = qp, then the infimum of p and q, which we denote by p ∧ q, exists with respect to the partial ordering ≤, and p ∧ q = pq. The infimum p ∧ q may exist even when p and q do not commute. A necessary and sufficient condition that p ⊥ q hold is that both p ∧ q = 0 and pq = qp hold.
5.4.17. Proposition. Suppose p and q are projections in a ∗-algebra A. If p ⊥ q, then the supremum of p and q, which we denote by p ∨ q, exists with respect to the partial ordering ≤, and p ∨ q = p + q. The supremum p ∨ q may exist even when p and q are not orthogonal.
where λ_1, . . . , λ_n are the (distinct) eigenvalues of N and {P_1, . . . , P_n} is the orthogonal resolution of the identity whose orthogonal projections are associated with the corresponding eigenspaces M_1, . . . , M_n.
Proof. See [29], theorems 10.13 and 10.21; or [16], chapter 8, theorems 20 and 22, and chapter 9, theorem 9.
5.5.7. Exercise. Let H be the self-adjoint matrix
[  2    1+i ]
[ 1-i    3  ].
(a) Use the spectral theorem to write H as a linear combination of orthogonal projections.
Answer: H = λ_1 P_1 + λ_2 P_2 where λ_1 = ___ , λ_2 = ___ ,
P_1 = (1/3) [   2   -1-i ]
            [ -1+i    1  ],
and P_2 = (1/3) [  1    1+i ]
                [ 1-i    2  ].
(b) Find a square root of H.
Answer: √H = (1/3) [  4    1+i ]
                   [ 1-i    5  ].
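A numerical version of part (a) (numpy assumed; np.linalg.eigh returns the eigenvalues of a Hermitian matrix in increasing order together with orthonormal eigenvectors):

import numpy as np

H = np.array([[2, 1 + 1j], [1 - 1j, 3]])
w, U = np.linalg.eigh(H)
P = [np.outer(U[:, k], U[:, k].conj()) for k in range(2)]    # orthogonal projections
assert np.allclose(w[0]*P[0] + w[1]*P[1], H)                 # spectral decomposition
R = np.sqrt(w[0])*P[0] + np.sqrt(w[1])*P[1]                  # a square root of H
assert np.allclose(R @ R, H)
print(w, np.round(3*P[0], 6), np.round(3*R, 6))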
5.5.8. Exercise. Let
N = (1/3) [ 4+2i   1-i    1-i  ]
          [ 1-i    4+2i   1-i  ]
          [ 1-i    1-i    4+2i ].
(a) The matrix N is normal because
N∗N = NN∗ = [ a b b ]
            [ b a b ]
            [ b b a ]
where a = ___ and b = ___ .
(b) According to the spectral theorem N can be written as a linear combination of orthogonal projections. Written in this form N = λ_1 P_1 + λ_2 P_2 where λ_1 = ___ , λ_2 = ___ ,
P_1 = [ a a a ]
      [ a a a ]
      [ a a a ],
and P_2 = [  b  -a  -a ]
          [ -a   b  -a ]
          [ -a  -a   b ],
where a = ___ and b = ___ .
(c) A unitary matrix U which diagonalizes N is
U = [ a   b   c  ]
    [ a  -b   c  ]
    [ a   d  -2c ]
where a = ___ , b = ___ , c = ___ , and d = ___ .
We now pause for a very brief review of differential calculus. The central concept here is
differentiability. A function f between normed linear spaces is said to be differentiable at a point p
if (when the point (p, f (p)) is translated to the origin) the function is tangent to some continuous
linear map. In this chapter (much of which is just chapter 13 of my online text [12]) we make this
idea precise and record a few important facts about differentiability. A more detailed and leisurely treatment can be found in my ProblemText in Advanced Calculus [11], chapters 25-29.
There are two sorts of textbooks on differential calculus: concept oriented and computation
oriented. It is my belief that students who understand the concepts behind differentiation can
do the calculations, while students who study calculations only often get stuck. Among the most
masterful presentations of concept oriented differential calculus are [8] (volume I, chapter 8) and
[22] (chapter 3). As of this writing the latter book is available without charge at the website of
one of the authors:
http://www.math.harvard.edu/~shlomo/docs/Advanced_Calculus.pdf
The material in this chapter will benefit primarily those whose only encounter with multivariate calculus has been through partial derivatives and a chain rule that looks something like
∂w/∂u = (∂w/∂x)(∂x/∂u) + (∂w/∂y)(∂y/∂u) + (∂w/∂z)(∂z/∂u).     (6.1)
The approach here is intended to be more geometric, emphasizing the role of tangency.
6.1. Tangency
6.1.1. Notation. Let V and W be normed linear spaces and a V . (If you are unfamiliar with,
or uncomfortable working in, normed linear spaces, just pretend that all the spaces involved are
n-dimensional Euclidean spaces. The only thing you may lose by so doing is the pleasant feeling of
assurance that differential calculus is no harder in infinite dimensional spaces than on the real line.)
We denote by Fa (V, W ) the family of all functions defined on a neighborhood of a taking values
in W . That is, f belongs to Fa (V, W ) if there exists an open set U such that a U dom f V
and if the image of f is contained in W . We shorten Fa (V, W ) to Fa when no confusion will result.
Notice that for each a V , the set Fa is closed under addition and scalar multiplication. (As
usual, we define the sum of two functions f and g in Fa to be the function f + g whose value at
x is f (x) + g(x) whenever x belongs to dom f dom g.) Despite the closure of Fa under these
operations, Fa is not a vector space. (Why not?)
6.1.2. Definition. Let V and W be normed linear spaces. A function f in F_0(V, W) belongs to O(V, W) if there exist numbers c > 0 and δ > 0 such that
‖f(x)‖ ≤ c‖x‖
whenever ‖x‖ < δ.
A function f in F_0(V, W) belongs to o(V, W) if for every c > 0 there exists δ > 0 such that
‖f(x)‖ ≤ c‖x‖
whenever ‖x‖ < δ. Notice that f belongs to o(V, W) if and only if f(0) = 0 and
lim_{h→0} ‖f(h)‖ / ‖h‖ = 0.
6.2.3. Definition. Let V and W be normed linear spaces, a ∈ V, and f ∈ F_a(V, W). We say that f is differentiable at a if there exists a continuous linear map which is tangent at 0 to Δf_a. If such a map exists, it is called the differential of f at a and is denoted by df_a. Thus df_a is just a member of B(V, W) which satisfies df_a ≃ Δf_a. We denote by D_a(V, W) the family of all functions in F_a(V, W) which are differentiable at a. We often shorten this to D_a.
We establish next that there can be at most one bounded linear map tangent to Δf_a.
6.2.4. Proposition. Let V and W be normed linear spaces and a ∈ V. If f ∈ D_a(V, W), then its differential is unique.
6.2.5. Exercise. Let
f : R³ → R² : (x, y, z) ↦ (x²y - 7, 3xz + 4y)
and a = (1, 1, 0). Use the definition of differential to find df_a. Hint. Work with the matrix representation of df_a. Since the differential must belong to B(R³, R²), its matrix representation is a 2 × 3 matrix
M = [ r s t ]
    [ u v w ].
Use the requirement that ‖h‖⁻¹ ‖Δf_a(h) - Mh‖ → 0 as h → 0 to discover the identity of the entries in M.
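A finite-difference check of the answer is often reassuring. A numpy sketch for this f (the step size is an arbitrary small number; the point a is as printed above):

import numpy as np

f = lambda x, y, z: np.array([x**2 * y - 7, 3*x*z + 4*y])
a = np.array([1.0, 1.0, 0.0])
h = 1e-6
M = np.column_stack([(f(*(a + h*e)) - f(*a)) / h for e in np.eye(3)])
print(np.round(M, 4))   # approximates [df_a]; compare with the entries found by hand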
6.2.6. Exercise. Let F : R² → R⁴ be defined by F(x, y) = (y, x², 4 - xy, 7x), and let p = (1, 1). Use the definition of "differentiable" to show that F is differentiable at p. Find the (matrix representation of the) differential of F at p.
6.2.7. Proposition. Let V and W be normed linear spaces and a ∈ V. If f ∈ D_a, then Δf_a ∈ O; thus, every function which is differentiable at a point is continuous there.
6.2.8. Proposition. Let V and W be normed linear spaces and a ∈ V. Suppose that f, g ∈ D_a(V, W) and that α ∈ R. Then
(1) αf is differentiable at a and
d(αf)_a = α df_a;
(2) also, f + g is differentiable at a and
d(f + g)_a = df_a + dg_a.
Suppose further that φ ∈ D_a(V, R). Then
(3) φf ∈ D_a(V, W) and
d(φf)_a = dφ_a · f(a) + φ(a) df_a.
It seems to me that the version of the chain rule given in (6.1), although (under appropriate
hypotheses) a correct equation, really says very little. The idea that should be conveyed is that
the best linear approximation to the composite of two smooth functions is the composite of their
best linear approximations.
6.2.9. Theorem (The Chain Rule). Let V, W, and X be normed linear spaces with a ∈ V. If f ∈ D_a(V, W) and g ∈ D_{f(a)}(W, X), then g ∘ f ∈ D_a(V, X) and
d(g ∘ f)_a = dg_{f(a)} ∘ df_a.
Proof. Our hypotheses are Δf_a ≃ df_a and Δg_{f(a)} ≃ dg_{f(a)}. By proposition 6.2.7 Δf_a ∈ O. Then by proposition 6.1.5(f)
Δg_{f(a)} ∘ Δf_a ≃ dg_{f(a)} ∘ Δf_a     (6.2)
and by proposition 6.1.5(e)
dg_{f(a)} ∘ Δf_a ≃ dg_{f(a)} ∘ df_a.     (6.3)
According to proposition 6.2.2(d)
Δ(g ∘ f)_a ≃ Δg_{f(a)} ∘ Δf_a.     (6.4)
From (6.2), (6.3), (6.4), and proposition 6.1.5(a) it is clear that
Δ(g ∘ f)_a ≃ dg_{f(a)} ∘ df_a.
Since dg_{f(a)} ∘ df_a is a bounded linear transformation, the desired conclusion is an immediate consequence of proposition 6.2.4.
6.2.10. Exercise. Derive (under appropriate hypotheses) equation (6.1) from theorem 6.2.9.
6.2.11. Exercise. Let T be a linear map from Rn to Rm and p Rn . Find dTp .
6.2.12. Example. Let T be a symmetric n × n matrix and let p ∈ Rⁿ. Define a function f : Rⁿ → R by f(x) = ⟨Tx, x⟩. Then
df_p(h) = 2⟨Tp, h⟩
for every h ∈ Rⁿ.
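This formula is another easy target for a numerical check (numpy assumed; T, p, and h are random choices made only for illustration):

import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(4, 4)); T = (T + T.T) / 2     # a symmetric matrix
p, h = rng.normal(size=4), rng.normal(size=4)
f = lambda x: x @ T @ x                            # f(x) = <Tx, x>
t = 1e-6
print((f(p + t*h) - f(p)) / t, 2 * (T @ p) @ h)    # nearly equal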
if this limit exists. This directional derivative is also called the Gâteaux differential (or Gâteaux variation) of f, and is sometimes denoted by δf(a; v). Many authors require that in the preceding definition v be a unit vector. We will not adopt this convention.
Recall that for 0 ≠ v ∈ V the curve ℓ : R → V defined by ℓ(t) = a + tv is the parametrized line through a in the direction of v. In the following proposition, which helps illuminate our use of the adjective "directional", we understand the domain of f ∘ ℓ to be the set of all numbers t for which the expression f(ℓ(t)) makes sense; that is,
dom(f ∘ ℓ) = {t ∈ R : a + tv ∈ dom f}.
Since a is an interior point of the domain of f, the domain of f ∘ ℓ contains an open interval about 0.
6.3.4. Proposition. If f ∈ D_a(V, W) and 0 ≠ v ∈ V, then the directional derivative D_v f(a) exists and is the tangent vector to the curve f ∘ ℓ at 0 (where ℓ is the parametrized line through a in the direction of v). That is,
D_v f(a) = D(f ∘ ℓ)(0).
6.3.5. Example. Let f(x, y) = ln(x² + y²)^{1/2}. Then D_v f(a) = 7/10 when a = (1, 1) and v = (3/5, 4/5).
6.3.6. Proposition. If f ∈ D_a(V, W), then for every nonzero v in V
D_v f(a) = df_a(v).
6.3.7. Proposition. Let φ : U → R be a scalar field on a subset U of Rⁿ. If φ is differentiable at a point a in U and dφ_a is not the zero functional, then the maximum value of the directional derivative D_u φ(a), taken over all unit vectors u in Rⁿ, is achieved when u points in the direction of the gradient ∇φ(a). The minimum value is achieved when u points in the opposite direction, -∇φ(a).
What role do partial derivatives play in all this? Conceptually, not much of one. They are just directional derivatives in the directions of the standard basis vectors of Rⁿ. They are, however, useful for computation. For example, if F is a mapping from Rⁿ to Rᵐ differentiable at a point a, then the matrix representation of dF_a is an m × n matrix (the so-called Jacobian matrix) whose entry in the j-th row and k-th column is the partial derivative F^j_k = ∂F^j/∂x^k (where F^j is the j-th coordinate function of F). And if φ is a differentiable scalar field on Rⁿ, then its gradient can be represented as (∂φ/∂x^1, . . . , ∂φ/∂x^n).
6.3.8. Exercise. Take any elementary calculus text and derive every item called a chain rule in
that text from theorem 6.2.9.
CHAPTER 7
7.1. Permutations
A bijective map σ : X → X from a set X onto itself is a permutation of the set. If x_1, x_2, . . . , x_n are distinct elements of a set X, then the permutation of X that maps x_1 ↦ x_2, x_2 ↦ x_3, . . . , x_{n-1} ↦ x_n, x_n ↦ x_1 and leaves all other elements of X fixed is a cycle (or cyclic permutation) of length n. A cycle of length 2 is a transposition. Permutations σ_1, . . . , σ_n of a set X are disjoint if each x ∈ X is moved by at most one σ_j; that is, if σ_j(x) ≠ x for at most one j ∈ N_n := {1, 2, . . . , n}.
7.1.1. Proposition. If X is a nonempty set, the set of permutations of X is a group under composition.
Notice that if σ and τ are disjoint permutations of a set X, then στ = τσ. If X is a set with n elements, then the group of permutations of X (which we may identify with the group of permutations of the set N_n) is the symmetric group on n elements (or on n letters); it is denoted by S_n.
7.1.2. Proposition. Any permutation σ ≠ id_X of a finite set X can be written as a product (composite) of cycles of length at least 2.
Proof. See [28], chapter 8, theorem 1.
A permutation of a finite set X is even if it can be written as the product of an even number of transpositions, and it is odd if it can be written as a product of an odd number of transpositions.
7.1.3. Proposition. Every permutation of a finite set is either even or odd, but not both.
Proof. See [28], chapter 8, theorem 3.
7.1.4. Definition. The sign of a permutation σ, denoted by sgn σ, is +1 if σ is even and -1 if σ is odd.
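The sign can be computed by counting inversions, since the parity of the number of inversions equals the parity of the permutation. A short Python function (permutations are written here as tuples of {0, . . . , n-1}):

def sgn(p):
    """Sign of a permutation: -1 raised to the number of inversions."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

print(sgn((1, 0, 2)))   # a transposition: -1
print(sgn((1, 2, 0)))   # a 3-cycle, a product of two transpositions: +1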
7.2.4. Proposition. If U, V, and W are vector spaces over a field F, then so is L²(U, V; W). Furthermore the spaces L(U, L(V, W)) and L²(U, V; W) are (naturally) isomorphic.
Hint for proof. The isomorphism is implemented by the map
F : L(U, L(V, W)) → L²(U, V; W) : φ ↦ φ̂
where φ̂(u, v) := (φ(u))(v) for all u ∈ U and v ∈ V.
7.2.5. Definition. A multilinear map f : Vⁿ → W from the n-fold product V × · · · × V of a vector space V into a vector space W is alternating if f(v^1, . . . , v^n) = 0 whenever v^i = v^j for some i ≠ j.
7.2.6. Exercise. Let V = R² and f : V² → R : (v, w) ↦ v_1 w_2. Is f bilinear? Is it alternating?
7.2.7. Exercise. Let V = R² and g : V² → R : (v, w) ↦ v_1 + w_2. Is g bilinear? Is it alternating?
7.2.8. Exercise. Let V = R² and h : V² → R : (v, w) ↦ v_1 w_2 - v_2 w_1. Is h bilinear? Is it alternating? If {e^1, e^2} is the usual basis for R², what is h(e^1, e^2)?
7.2.9. Definition. If V and W are vector spaces, a multilinear map f : Vⁿ → W is skew-symmetric if
f(v^1, . . . , v^n) = (sgn σ) f(v^{σ(1)}, . . . , v^{σ(n)})
for all σ ∈ S_n.
7.2.10. Proposition. Suppose that V and W are vector spaces. Then every alternating multilinear map f : Vⁿ → W is skew-symmetric.
Hint for proof. Consider f(u + v, u + v) in the bilinear case.
7.2.11. Remark. If a function f : Rⁿ → R is differentiable, then at each point a in Rⁿ the differential of f at a is a linear map from Rⁿ into R. Thus we regard df : a ↦ df_a (the differential of f) as a map from Rⁿ into L(Rⁿ, R). It is natural to inquire whether the function df is itself differentiable. If it is, its differential at a (which we denote by d²f_a) is a linear map from Rⁿ into L(Rⁿ, R); that is
d²f_a ∈ L(Rⁿ, L(Rⁿ, R)).
In the same vein, since d²f maps Rⁿ into L(Rⁿ, L(Rⁿ, R)), its differential (if it exists) belongs to L(Rⁿ, L(Rⁿ, L(Rⁿ, R))). It is moderately unpleasant to contemplate what an element of L(Rⁿ, L(Rⁿ, R)) or of L(Rⁿ, L(Rⁿ, L(Rⁿ, R))) might look like. And clearly as we pass to even higher order differentials things look worse and worse. It is comforting to discover that an element of L(Rⁿ, L(Rⁿ, R)) may be regarded as a map from (Rⁿ)² into R which is bilinear (that is, linear in both of its variables), and that an element of L(Rⁿ, L(Rⁿ, L(Rⁿ, R))) may be thought of as a map from (Rⁿ)³ into R which is linear in each of its three variables. More generally, if V_1, V_2, V_3, and W are arbitrary vector spaces it will be possible to identify the vector space L(V_1, L(V_2, W)) with the space of bilinear maps from V_1 × V_2 to W, the vector space L(V_1, L(V_2, L(V_3, W))) with the trilinear maps from V_1 × V_2 × V_3 to W, and so on (see, for example, proposition 7.2.4).
7.3. Determinants
7.3.1. Definition. A field F is of characteristic zero if n · 1 = 0 for no n ∈ N.
7.3.2. Convention. In the following material on determinants, we will assume that the scalar
fields underlying all the vector spaces and algebras we encounter are of characteristic zero. Thus
multilinear functions will be alternating if and only if they are skew-symmetric. (See exercises 7.2.10
and 7.3.7.)
7.3.3. Remark. Let A be a unital commutative algebra. In the sequel we identify the algebra Aⁿ × · · · × Aⁿ (n factors) with the algebra M_n(A) of n × n matrices of elements of A by regarding the term a^k in (a^1, . . . , a^n) ∈ Aⁿ × · · · × Aⁿ as the k-th column vector of an n × n matrix of elements of A. There are many standard notations for the same thing: M_n(A), Aⁿ × · · · × Aⁿ (n factors), (Aⁿ)ⁿ, A^{n×n}, and A^{n²}, for example.
The identity matrix, which we usually denote by I, in M_n(A) is (e^1, . . . , e^n), where e^1, . . . , e^n are the standard basis vectors for Aⁿ; that is, e^1 = (1_A, 0, 0, . . . ), e^2 = (0, 1_A, 0, 0, . . . ), and so on.
7.3.4. Definition. Let A be a unital commutative algebra. A determinant function is an alternating multilinear map D : M_n(A) → A such that D(I) = 1_A.
7.3.5. Proposition. Let V = Rⁿ. Define
Δ : Vⁿ → R : (v^1, . . . , v^n) ↦ Σ_{σ∈S_n} (sgn σ) v^1_{σ(1)} · · · v^n_{σ(n)}.
Then Δ is a determinant function on M_n(R).
7.3.10. Proposition. Let A be a unital commutative algebra (over a field of characteristic zero) and n ∈ N. A determinant function exists on M_n(A). Hint. Consider
det : M_n(A) → A : (a^1, . . . , a^n) ↦ Σ_{σ∈S_n} (sgn σ) a^1_{σ(1)} · · · a^n_{σ(n)}.
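The hint is executable as it stands. A Python sketch of this signed sum over permutations (standard library only; the sample matrix is the one from exercise 4.9.8 as reconstructed there):

import math
from itertools import permutations

def sgn(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
              if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(A):
    n = len(A)
    # entry A[s[k]][k] is the s(k)-th coordinate of the k-th column a^k
    return sum(sgn(s) * math.prod(A[s[k]][k] for k in range(n))
               for s in permutations(range(n)))

A = [[0, 1, 0, 1], [-2, 3, 0, 1], [-2, 1, 2, 1], [-2, 1, 0, 3]]
print(det(A))   # 16 = 2^4, consistent with exercise 4.9.8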
7.4.2. Proposition. In the category of vector spaces and linear maps if tensor products exist, then they are unique (up to isomorphism).
7.4.3. Proposition. In the category of vector spaces and linear maps tensor products exist.
Hint for proof. Let U and V be vector spaces over a field F. Consider the free vector space lc(U × V) = lc(U × V, F). Define
η : U × V → lc(U × V) : (u, v) ↦ χ_{{(u,v)}}.
Write u ∗ v instead of χ_{{(u,v)}}. Then let
S_1 = {(u_1 + u_2) ∗ v - u_1 ∗ v - u_2 ∗ v : u_1, u_2 ∈ U and v ∈ V},
S_2 = {(αu) ∗ v - α(u ∗ v) : α ∈ F, u ∈ U, and v ∈ V},
S_3 = {u ∗ (v_1 + v_2) - u ∗ v_1 - u ∗ v_2 : u ∈ U and v_1, v_2 ∈ V},
S_4 = {u ∗ (αv) - α(u ∗ v) : α ∈ F, u ∈ U, and v ∈ V},
S = span(S_1 ∪ S_2 ∪ S_3 ∪ S_4), and
U ⊗ V = lc(U × V)/S.
Also define
τ : U × V → U ⊗ V : (u, v) ↦ [u ∗ v].
Then show that U ⊗ V and τ satisfy the conditions stated in definition 7.4.1.
7.4.4. Notation. It is conventional to write u ⊗ v for τ(u, v) = [u ∗ v]. Tensors of the form u ⊗ v are called elementary tensors (or decomposable tensors or homogeneous tensors).
7.4.5. Proposition. Let u and v be elements of finite dimensional vector spaces U and V, respectively. If u ⊗ v = 0, then either u = 0 or v = 0.
Hint for proof. Argue by contradiction. Suppose there exist u′ ≠ 0 in U and v′ ≠ 0 in V such that u′ ⊗ v′ = 0. Use proposition 2.5.8 to choose linear functionals f ∈ U∗ and g ∈ V∗ such that f(u′) = g(v′) = 1. Consider the map B defined on U × V by B(u, v) = f(u)g(v).
7.4.6. CAUTION. One needs to exercise some care in dealing with elementary tensors: keep in mind that
(1) not every member of U ⊗ V is of the form u ⊗ v;
(2) the representation of a tensor as an elementary tensor, even when it is possible, fails to be unique; and
(3) the family of elementary tensors (although it spans U ⊗ V) is by no means linearly independent.
k ∈ N_n.
Hint for proof. Extend {e^1, . . . , e^n} to a basis E for U. Fix k ∈ N_n. Let ψ ∈ V∗. Consider the scalar valued function B defined on U × V by B(u, v) = e^k(u) ψ(v), where e^k is as defined in proposition 2.5.4. Prove that ψ(v^k) = 0.
7.4.8. Proposition. If {e^i}_{i=1}^{m} and {f^j}_{j=1}^{n} are bases for the finite dimensional vector spaces U and V, respectively, then the family {e^i ⊗ f^j : i = 1, . . . , m; j = 1, . . . , n} is a basis for U ⊗ V.
7.4.9. Corollary. If U and V are finite dimensional vector spaces, then so is U ⊗ V and
dim(U ⊗ V) = (dim U)(dim V).
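For U = Rᵐ and V = Rⁿ the Kronecker product is one concrete model of e^i ⊗ f^j, and the dimension count in 7.4.9 becomes visible directly. A numpy sketch:

import numpy as np

m, n = 2, 3
E = np.eye(m)                                 # columns: basis of R^m
F = np.eye(n)                                 # columns: basis of R^n
B = np.array([np.kron(E[:, i], F[:, j]) for i in range(m) for j in range(n)])
print(B.shape, np.linalg.matrix_rank(B))      # (6, 6) and rank 6 = 2 * 3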
7.4.10. Proposition. Let U and V be finite dimensional vector spaces and {f^j}_{j=1}^{n} be a basis for V. Then for every element t ∈ U ⊗ V there exist unique vectors u^1, . . . , u^n ∈ U such that
t = Σ_{j=1}^{n} u^j ⊗ f^j.
TENSOR ALGEBRAS
8.1.9. Example. If the dimension of a vector space V is 3 or less, then every homogeneous element of the corresponding Grassmann algebra is decomposable.
8.1.10. Example. If the dimension of a (finite dimensional) vector space V is at least four, then there exist homogeneous elements in the corresponding Grassmann algebra which are not decomposable.
Hint for proof. Let e^1, e^2, e^3, and e^4 be distinct basis elements of V and consider (e^1 ∧ e^2) + (e^3 ∧ e^4).
8.1.11. Proposition. The elements v_1, v_2, . . . , v_p in a vector space V are linearly independent if and only if v_1 ∧ v_2 ∧ · · · ∧ v_p ≠ 0 in the corresponding Grassmann algebra Λ(V).
8.1.12. Proposition. Let T : V → W be a linear map between finite dimensional vector spaces. Then there exists a unique extension of T to a unital algebra homomorphism Λ(T) : Λ(V) → Λ(W). This extension maps Λ^k(V) into Λ^k(W) for each k ∈ N.
8.1.13. Example. The pair of maps V ↦ Λ(V) and T ↦ Λ(T) is a covariant functor from the category of vector spaces and linear maps to the category of unital algebras and unital algebra homomorphisms.
8.1.14. Proposition. If V is a vector space of dimension d, then dim Λ^p(V) is the binomial coefficient (d choose p) for 0 ≤ p ≤ d.
8.1.15. Convention. Let V be a d-dimensional vector space. Since the map λ ↦ λ·1_{Λ(V)} is an obvious isomorphism between F and the one-dimensional space Λ^0(V), we identify these two spaces, and refer to an element of Λ^0(V) as a scalar. And since Λ^d(V) is also one-dimensional, its elements are frequently referred to as pseudoscalars.
8.1.16. Proposition. If V is a finite dimensional vector space, ω ∈ Λ^p(V), and μ ∈ Λ^q(V), then
ω ∧ μ = (-1)^{pq} μ ∧ ω.
T^k(V) ⊗ T^m(V) = T^{k+m}(V)
and extending by linearity to all of T(V). The resulting algebra is the tensor algebra of V (or generated by V).
8.2.3. Proposition. The tensor algebra T (V ) as defined in 8.2.2 is in fact a unital algebra.
8.2.4. Proposition. Let V be a finite dimensional vector space and J be the ideal in the tensor algebra T(V) generated by the set of all elements of the form v ⊗ v where v ∈ V. Then the quotient algebra T(V)/J is the Grassmann algebra over V.
8.2.5. Notation. If x and y are elements of the tensor algebra T(V), then in the quotient algebra T(V)/J the product of [x] and [y] is written using "wedge" notation; that is,
[x] ∧ [y] = [x ⊗ y].
8.2.6. Notation. If V is a vector space over F and k ∈ N we denote by Alt^k(V) the set of all alternating k-linear maps from V^k into F. (The space Alt^1(V) is just V∗.) Additionally, take Alt^0(V) = F.
8.2.7. Example. If V is a finite dimensional vector space and k > dim V, then Alt^k(V) = {0}.
8.2.8. Definition. Let p, q ∈ N. We say that a permutation σ ∈ S_{p+q} is a (p, q)-shuffle if σ(1) < · · · < σ(p) and σ(p + 1) < · · · < σ(p + q). The set of all such permutations is denoted by S(p, q).
8.2.9. Example. Give an example of a (4, 5)-shuffle permutation σ of the set N_9 = {1, . . . , 9} such that σ(7) = 4.
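A (p, q)-shuffle is determined by the set {σ(1), . . . , σ(p)}, so all of them can be generated from combinations. A Python sketch (standard library only) that also answers the counting behind 8.2.9:

from itertools import combinations

def shuffles(p, q):
    """All (p, q)-shuffles of {1, ..., p+q} as tuples (s(1), ..., s(p+q))."""
    base = range(1, p + q + 1)
    for first in combinations(base, p):          # the images s(1) < ... < s(p)
        yield first + tuple(k for k in base if k not in first)

S = list(shuffles(4, 5))
print(len(S))                                    # 126 = C(9, 4)
print(next(s for s in S if s[6] == 4))           # a (4,5)-shuffle with s(7) = 4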
8.2.10. Definition. Let V be a vector space over a field of characteristic 0. For p, q ∈ N define
∧ : Alt^p(V) × Alt^q(V) → Alt^{p+q}(V) : (ω, μ) ↦ ω ∧ μ
where
(ω ∧ μ)(v^1, . . . , v^{p+q}) = Σ_{σ∈S(p,q)} (sgn σ) ω(v^{σ(1)}, . . . , v^{σ(p)}) μ(v^{σ(p+1)}, . . . , v^{σ(p+q)}).
8.2.11. Exercise. Show that definition 8.2.10 is not overly optimistic by verifying that if ω ∈ Alt^p(V) and μ ∈ Alt^q(V), then ω ∧ μ ∈ Alt^{p+q}(V).
8.2.12. Proposition. The multiplication defined in 8.2.10 is associative. That is, if ω ∈ Alt^p(V), μ ∈ Alt^q(V), and ν ∈ Alt^r(V), then
(ω ∧ μ) ∧ ν = ω ∧ (μ ∧ ν).
8.2.13. Exercise. Let V be a finite dimensional vector space over a field of characteristic zero. Explain in detail how to make Alt^k(V) (or, if you prefer, Alt^k(V∗)) into a vector space for each k ∈ Z and how to make the collection of these into a Z-graded algebra. Show that this algebra is the Grassmann algebra generated by V. Hint. Take Alt^k(V) = {0} for each k < 0 and extend the definition of the wedge product so that if α ∈ Alt^0(V) = F and ω ∈ Alt^p(V), then α ∧ ω = αω.
8.2.14. Proposition. Let ω^1, . . . , ω^p be members of Alt^1(V) (that is, linear functionals on V). Then
(ω^1 ∧ · · · ∧ ω^p)(v^1, . . . , v^p) = det[ ω^j(v^k) ]_{j,k=1}^{p}
for all v^1, . . . , v^p ∈ V.
8.2.15. Proposition. If {e^1, . . . , e^n} is a basis for an n-dimensional vector space V, then
{e^{σ(1)} ∧ · · · ∧ e^{σ(p)} : σ ∈ S(p, n - p)}
is a basis for Alt^p(V).
8.2.16. Proposition. For T : V → W a linear map between vector spaces define
Alt^p(T) : Alt^p(W) → Alt^p(V) : ω ↦ Alt^p(T)(ω)
where (Alt^p(T)(ω))(v^1, . . . , v^p) = ω(Tv^1, . . . , Tv^p) for all v^1, . . . , v^p ∈ V. Then Alt^p is a contravariant functor from the category of vector spaces and linear maps into itself.
8.2.17. Exercise. Let V be an n-dimensional vector space and T ∈ L(V). If T is diagonalizable, then
c_T(λ) = Σ_{k=0}^{n} (-1)^k tr(Alt^{n-k}(T)) λ^k.
DIFFERENTIAL MANIFOLDS
The purpose of this chapter and the next is to examine an important and nontrivial example
of a Grassmann algebra: the algebra of differential forms on a differentiable manifold. If your
background includes a study of manifolds, skip this chapter. If you are pressed for time, reading just
the first section of the chapter should enable you to make sense out of most of the ensuing material.
That section deals with familiar manifolds in low (three or less) dimensional Euclidean spaces. The
major weakness of this presentation is that it treats manifolds in a non-coordinate-free manner as
subsets of some larger Euclidean space. (A helix, for example, is a 1-manifold embedded in 3-space.)
The rest of the chapter gives a (very brief) introduction to a more satisfactory coordinate-free way of
viewing manifolds. For a less sketchy view of the subject read one of the many splendid introductory
books in the field. I particularly like [3] and [21].
9.1. Manifolds in R³
A 0-manifold is a point (or finite collection of points).
A function is smooth if it is infinitely differentiable (that is, if it has derivatives of all orders).
A curve is a continuous image of a closed line segment in R. If C is a curve, the choice of an
interval [a, b] and a continuous function f such that C = f ([a, b]) is a parametrization of C.
If the function f is smooth, we say that C is a smooth curve.
A 1-manifold is a curve (or finite collection of curves). A 1-manifold is flat if it is contained in some line in R³. For example, the line segment connecting two points in R³ is a flat 1-manifold.
A surface is a continuous image of a closed rectangular region in R². If S is a surface, then the choice of a rectangle R = [a_1, b_1] × [a_2, b_2] and a continuous function f such that S = f(R) is a parametrization of S. If the function f is smooth, we say that S is a smooth surface.
A 2-manifold is a surface (or finite collection of surfaces). A 2-manifold is flat if it is contained in some plane in R³. For example, the triangular region connecting the points (1, 0, 0), (0, 1, 0), and (0, 0, 1) is a flat 2-manifold.
A solid is a continuous image of the 3-dimensional region determined by a closed rectangular parallelepiped (to avoid a six-syllable word many people say "rectangular solid" or even just "box") in R³. If E is a solid, then the choice of a rectangular parallelepiped P = [a_1, b_1] × [a_2, b_2] × [a_3, b_3] and a continuous function f such that E = f(P) is a parametrization of E. If the function f is smooth, we say that E is a smooth solid.
A 3-manifold is a solid (or finite collection of solids).
9.2.3. Definition. Let m, n ∈ ℕ and let U be an open subset of ℝ^m. A function F : U → ℝ^n is smooth (or infinitely differentiable, or C∞) if the differential d^p F_a exists for every p ∈ ℕ and every a ∈ U. We denote by C∞(U, ℝ^n) the family of all smooth functions from U into ℝ^n.
9.2.4. Definition. Let M be a topological space and n ∈ ℕ. A pair (U, φ), where U is an open subset of M and φ : U → Ũ is a homeomorphism from U to an open subset Ũ of ℝ^n, is called an (n-dimensional coordinate) chart. (The notation here is a bit redundant. If we know the function φ, then we also know its domain U. Indeed, do not be surprised to see reference to the chart φ or to the chart U.)
Let p ∈ M. A chart (U, φ) is said to contain p if p ∈ U and is said to be a chart (centered) at p if φ(p) = 0. A family of n-dimensional coordinate charts whose domains cover M is an
(n-dimensional) atlas for M . If such an atlas exists, the space M is said to be locally
Euclidean.
9.2.5. Notation. Let n ∈ ℕ. For 1 ≤ k ≤ n the function π_k : ℝ^n → ℝ : x = (x^1, x^2, . . . , x^n) ↦ x^k is the kth coordinate projection. If φ : M → ℝ^n is a chart on a topological space, one might reasonably expect the n component functions of φ (that is, the functions π_k ∘ φ) to be called φ^1, . . . , φ^n. But this is uncommon. People seem to like φ and ψ as names for charts. But then the components of these maps have names such as x^1, . . . , x^n, or y^1, . . . , y^n. Thus we usually end up with something like φ(p) = (x^1(p), . . . , x^n(p)). The numbers x^1(p), . . . , x^n(p) are called the local coordinates of p.
Two common exceptions to this notational convention occur in the cases when n = 2 or n = 3.
In the former case you are likely to see things like φ = (x, y) and ψ = (u, v) for charts on 2-manifolds. Similarly, for 3-manifolds expect to see notations such as φ = (x, y, z) and ψ = (u, v, w).
9.2.6. Definition. A second countable Hausdorff topological space M equipped with an n-dimensional
atlas is a topological n-manifold (or just a topological manifold).
9.2.7. Definition. Charts φ and ψ of a topological n-manifold M are said to be (smoothly) compatible if the transition maps ψ ∘ φ⁻¹ and φ ∘ ψ⁻¹ are smooth. An atlas on M is a smooth
atlas if every pair of its charts is smoothly compatible. Two atlases on M are (smoothly)
compatible (or equivalent) if every chart of one atlas is smoothly compatible with every chart
of the second; that is, if their union is a smooth atlas on M .
9.2.8. Proposition. Every smooth atlas on a topological manifold is contained in a unique maximal
smooth atlas.
9.2.9. Definition. A maximal smooth atlas on a topological manifold M is a differential structure on M. A topological n-manifold which has been given a differential structure is a smooth n-manifold (or a differential n-manifold, or a C∞ n-manifold).
NOTE: From now on we will be concerned only with differential manifolds; so the modifier
smooth will ordinarily be omitted when we refer to charts, to atlases, or to manifolds. Thus it
will be understood that by manifold we mean a topological n-manifold (for some fixed n) equipped
with a differential structure to which all the charts we mention belong.
9.2.10. Example. Let U be an open subset of ℝ^n for some n ∈ ℤ⁺ and let ι : U → ℝ^n : x ↦ x be the inclusion map. Then {ι} is a smooth atlas for U. We make the convention that when an open subset of ℝ^n is regarded as an n-manifold we will suppose, unless the contrary is explicitly stated, that the inclusion map ι is a chart in its differentiable structure.
9.2.11. Example (An atlas for S¹). Let S¹ = {(x, y) ∈ ℝ² : x² + y² = 1} be the unit circle in ℝ² and U = {(x, y) ∈ S¹ : y ≠ 1}. Define φ : U → ℝ to be the projection of points in U from the point (0, 1) onto the x-axis; that is, if p = (x, y) is a point in U, then (φ(p), 0) is the unique point on the x-axis which is collinear with both (0, 1) and (x, y).
(1) Find an explicit formula for φ.
(2) Let V = {(x, y) ∈ S¹ : y ≠ −1}. Find an explicit formula for the projection ψ of points in V from (0, −1) onto the x-axis.
(3) The maps φ and ψ are bijections from U and V, respectively, onto ℝ.
(4) The set {φ, ψ} is a (smooth) atlas for S¹.
9.2.12. Example (An atlas for the n-sphere). The previous example can be generalized to the n-sphere S^n = {x ∈ ℝ^{n+1} : ‖x‖ = 1}. The generalization of the mapping ψ is called the stereographic projection from the south pole and the generalization of φ is the stereographic projection from the north pole. (Find a simple expression for the transition maps.)
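For the reader who wants to check the computations in 9.2.11 against this example, here is a sketch: collinearity of (0, 1), (x, y), and (φ(x, y), 0) forces φ(x, y) = x/(1 − y) on U, and similarly ψ(x, y) = x/(1 + y) on V. Since x² = (1 − y)(1 + y) on S¹, we have φ(x, y) ψ(x, y) = 1; so the transition map is ψ ∘ φ⁻¹(t) = 1/t on φ(U ∩ V) = ℝ \ {0}, which is certainly smooth.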
9.2.13. Example (Another atlas for S¹). Let U = {(x, y) ∈ S¹ : x ≠ 1}. For (x, y) ∈ U let φ(x, y) be the angle (measured counterclockwise) at the origin from (1, 0) to (x, y). (So φ(x, y) ∈ (0, 2π).) Let V = {(x, y) ∈ S¹ : y ≠ 1}. For (x, y) ∈ V let ψ(x, y) be π/2 plus the angle (measured counterclockwise) at the origin from (0, 1) to (x, y). (So ψ(x, y) ∈ (π/2, 5π/2).) Then {φ, ψ} is a (smooth) atlas for S¹.
9.2.14. Example (The projective plane P²). Let P² be the set of all lines through the origin in ℝ³. Such a line is determined by a nonzero vector lying on the line. Two nonzero vectors x = (x^1, x^2, x^3) and y = (y^1, y^2, y^3) determine the same line if there exists λ ∈ ℝ, λ ≠ 0, such that y = λx. In this case we write x ∼ y. It is clear that ∼ is an equivalence relation. We regard a member of P² as an equivalence class of nonzero vectors. Let U_k = {[x] ∈ P² : x^k ≠ 0} for k = 1, 2, 3. Also let
φ : U_1 → ℝ² : [(x, y, z)] ↦ (y/x, z/x);
ψ : U_2 → ℝ² : [(x, y, z)] ↦ (x/y, z/y);
χ : U_3 → ℝ² : [(x, y, z)] ↦ (x/z, y/z).
The preceding sets and maps are well defined; and {φ, ψ, χ} is a (smooth) atlas for P².
9.2.15. Example (The general linear group). Let G = GL(n, ℝ) be the group of nonsingular n × n matrices of real numbers. If a = [a_jk] and b = [b_jk] are members of G define
d(a, b) = ( Σ_{j,k=1}^{n} (a_jk − b_jk)² )^{1/2}.
The function d is a metric on G. Define
φ : G → ℝ^{n²} : a = [a_jk] ↦ (a_11, . . . , a_1n, a_21, . . . , a_2n, . . . , a_n1, . . . , a_nn).
Then {φ} is a (smooth) atlas on G. (Be a little careful here. There is one point that is not completely obvious.)
9.2.16. Example. Let
I : ℝ → ℝ : x ↦ x,
a : ℝ → ℝ : x ↦ arctan x, and
c : ℝ → ℝ : x ↦ x³.
Each of {I}, {a}, and {c} is a (smooth) atlas for ℝ. Which of these are equivalent?
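A sketch of the relevant computations: I ∘ a⁻¹ = tan is smooth on a(ℝ) = (−π/2, π/2) and a ∘ I⁻¹ = arctan is smooth on ℝ, so {I} and {a} are equivalent. On the other hand I ∘ c⁻¹ : x ↦ x^{1/3} fails to be differentiable at 0, so {c} is compatible with neither of the other two atlases.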
NOTE: It now makes sense to say (and is true) that a single chart on a manifold is a smooth map.
9.3.2. Proposition. Suppose that a map F : M → N between manifolds is smooth at a point m and that (W, φ) and (X, ψ) are charts at m and F(m), respectively, such that F(W) ⊆ X. Then ψ ∘ F ∘ φ⁻¹ is smooth at φ(m).
9.3.3. Proposition. If F : M → N and G : N → P are smooth maps between manifolds, then G ∘ F is smooth.
9.3.4. Proposition. Every smooth map F : M → N between manifolds is continuous.
9.3.5. Example. Consider the 2-manifold S² with the differential structure generated by the stereographic projections from the north and south poles (see example 9.2.12) and the 2-manifold P² with the differentiable structure generated by the atlas given in example 9.2.14. The map F : S² → P² : (x, y, z) ↦ [(x, y, z)] is smooth. (Think of F as taking a point (x, y, z) in S² to the line in ℝ³ passing through this point and the origin.)
9.4.7. Proposition. If F : M → N is a smooth mapping between manifolds and b and c are curves tangent at a point m ∈ M, then the curves F ∘ b and F ∘ c are tangent at F(m) in N.
9.4.8. Definition. Since the family of little-oh functions is closed under addition, it is obvious that tangency at a point m is an equivalence relation on the family of curves at m. We denote the equivalence class containing the curve c by c̃ or, if we wish to emphasize the role of the point m, by c̃_m. Each equivalence class c̃_m is a geometric tangent vector at m and the family of all such vectors is the geometric tangent space at m. The geometric tangent space at m is denoted by T̃_m (or, if we wish to emphasize the role of the manifold M, by T̃_m(M)).
The language (involving the words vector and space) in the preceding definition is highly optimistic. So far we have a set of equivalence classes with no vector space structure. The key to providing T̃_m with such a structure is exercise 2.2.16. There we found that a set S may be given a vector space structure by transferring the structure from a known vector space V to the set S by means of a bijection f : S → V. We will show, in particular, that if M is an n-manifold, then for each m ∈ M the geometric tangent space T̃_m can be given the vector space structure of ℝ^n.
9.4.9. Definition. Let φ be a chart containing a point m in an n-manifold M and let u be a nonzero vector in ℝ^n. For every t ∈ ℝ such that φ(m) + tu belongs to the range of φ, let
c_u(t) = φ⁻¹( φ(m) + tu ).
Notice that since c_u is the composite of smooth functions and since c_u(0) = m, it is clear that c_u is a curve at m in M.
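For example, if M is an open subset of ℝ^n and φ is the inclusion chart ι of example 9.2.10, then c_u(t) = m + tu; in this case the curves c_u are just the parametrized straight lines through m.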
9.4.10. Example. If M is an n-manifold, then the curves c_{e_1}, . . . , c_{e_n} obtained by means of the preceding definition from the standard basis vectors e_1, . . . , e_n of ℝ^n will prove to be very useful. We will shorten the notation somewhat and write c_k for c_{e_k} (1 ≤ k ≤ n). We think of the curves c_1, . . . , c_n as being linearly independent directions in the tangent space at m (see proposition 9.4.14). We call these curves the standard basis curves at m determined by φ. It is important to keep in mind that these curves depend on the choice of the chart φ; the notation c_1, . . . , c_n fails to remind us of this.
9.4.11. Proposition. Let φ be a chart at a point m in an n-manifold. Then the map
C_φ : T̃_m → ℝ^n : c̃ ↦ D(φ ∘ c)(0)
is well-defined and bijective.
Notice that initially we had no way of adding curves b and c at a point m in a manifold or of multiplying them by scalars. Now, however, we can use the bijection C_φ to transfer the vector space structure from ℝ^n to the tangent space T̃_m. Thus we add equivalence classes b̃ and c̃ in the obvious fashion. The formula is
b̃ + c̃ = C_φ⁻¹( C_φ(b̃) + C_φ(c̃) ).  (9.1)
Similarly, if b is a curve at m and α is a scalar, then
α b̃ = C_φ⁻¹( α C_φ(b̃) ).  (9.2)
9.4.12. Corollary. At every point m in an n-manifold the geometric tangent space Tem may be
regarded as a vector space isomorphic to Rn .
As remarked previously, we have defined the vector space structure of the geometric tangent space at m in terms of the mapping C_φ, which in turn depends on the choice of a particular chart φ. From this it might appear that addition and scalar multiplication on the tangent space depend on φ. Happily this is not so.
9.4.13. Proposition. Let m be a point in an n-manifold. The vector space structure of the geo-
metric tangent space Tem is independent of the particular chart used to define it.
9.4.15. Exercise. We know from the preceding proposition that if c̃ belongs to T̃_m, then there exist scalars α_1, . . . , α_n such that c̃ = Σ_{k=1}^{n} α_k c̃_k. Find these scalars.
If F : M → N is a smooth mapping between manifolds and m is a point in M, we denote by dF̃_m the mapping that takes each geometric tangent vector c̃ in T̃_m to the corresponding geometric tangent vector (F ∘ c)~ in T̃_{F(m)}. That dF̃_m is well-defined is clear from proposition 9.4.7. The point of proposition 9.4.17 below is to make the notation for this particular map plausible.
9.4.16. Definition. If F : M → N is a smooth mapping between manifolds and m ∈ M, then the function
dF̃_m : T̃_m → T̃_{F(m)} : c̃ ↦ (F ∘ c)~
is the differential of F at m.
9.4.17. Proposition. Let F : M → N be a smooth map from an n-manifold M to a p-manifold N. For every m ∈ M and every pair of charts φ at m and ψ at F(m) the following diagram commutes

    T̃_m ----dF̃_m----> T̃_{F(m)}
     |C_φ                 |C_ψ
     ↓                    ↓
    ℝ^n --d(ψ∘F∘φ⁻¹)_{φ(m)}--> ℝ^p

and consequently dF̃_m is a linear map. (The maps C_φ and C_ψ are defined as in proposition 9.4.11.)
The map C_φ has been used to provide the geometric tangent space T̃_m at a point m on an n-manifold with the vector space structure of ℝ^n. With equal ease it may be used to provide T̃_m with a norm. By defining
‖c̃‖ = ‖C_φ(c̃)‖
it is clear that we have made T̃_m into a normed linear space (but in a way that does depend on the choice of the chart φ). Furthermore, under this definition C_φ is an isometric isomorphism between T̃_m and ℝ^n. Thus, in particular, we may regard T̃_m and T̃_{F(m)} as Euclidean spaces. From the preceding proposition we see that the map dF̃_m is (continuous and) linear, being the composite of the mapping d(ψ∘F∘φ⁻¹)_{φ(m)} with the two isometric isomorphisms C_φ and C_ψ⁻¹.
If we use the mapping C_φ to identify T̃_m and ℝ^n as Banach spaces and, similarly, C_ψ to identify T̃_{F(m)} and ℝ^p, then the continuous linear maps dF̃_m and d(ψ∘F∘φ⁻¹)_{φ(m)} are also identified. The notation which we have used for the mapping dF̃_m is thus a reasonable one since, as we have just seen, this mapping can be identified with the differential at φ(m) of the local representative of F and since its definition does not depend on the charts φ and ψ. To further strengthen the case for the plausibility of this notation, consider what happens if M and N are open subsets of ℝ^n and ℝ^p, respectively (regarded as manifolds whose differential structure is generated in each case by the appropriate identity map). The corresponding local representative of F is F itself, in which case the bottom map in the diagram for proposition 9.4.17 is simply dF_m.
The preceding definition also helps to justify the usual intuitive picture for familiar n-manifolds
of the tangent space being a copy of Rn placed at m. Suppose that for a particular n-manifold
M there exists a smooth inclusion map of M into a higher dimensional Euclidean space Rp . (For
example, the inclusion map of the 2-manifold S2 into R3 is smooth.) Then taking F in the preceding
9.4.19. Example. Same example as the preceding except this time let U = {(x, y, z) ∈ S² : z ≠ −1} and σ : U → ℝ² be the stereographic projection of S² from the south pole. That is,
σ(x, y, z) = ( x/(1 + z), y/(1 + z) ).
9.4.20. Proposition (A Chain Rule for Maps Between Manifolds). If F : M → N and G : N → P are smooth mappings between manifolds, then
d̃(G ∘ F)_m = dG̃_{F(m)} ∘ dF̃_m  (9.3)
for every m ∈ M.
smooth real valued functions defined on a neighborhood of m). We write f ∼ g if there exists a neighborhood of m on which f and g agree. Then ∼ is clearly an equivalence relation on C∞_m(M, ℝ). The corresponding equivalence classes are germs of smooth functions at m. If f is a member of C∞_m(M, ℝ), we denote the germ containing f by f̂. The family of all germs of smooth real valued functions at m is denoted by G_m(M) (or just G_m). Addition, multiplication, and scalar multiplication of germs are defined as you would expect. For f̂, ĝ ∈ G_m and α ∈ ℝ let
f̂ + ĝ = (f + g)^
f̂ ĝ = (f g)^
α f̂ = (α f)^
(As usual, the domain of f + g and of f g is taken to be dom f ∩ dom g.)
9.5.2. Proposition. If m is a point in a manifold, then the set Gm of germs of smooth functions
at m is (under the operations defined above) a unital commutative algebra.
9.5.3. Definition. Let m be a point in a manifold. A derivation on the algebra G_m of germs of smooth functions at m is a linear functional v ∈ G_m* which satisfies Leibniz's rule
v(f̂ ĝ) = f(m) v(ĝ) + v(f̂) g(m)
for all f̂, ĝ ∈ G_m. Another name for a derivation on the algebra G_m is an algebraic tangent vector at m. The set of all algebraic tangent vectors at m (that is, derivations on G_m) will be called the algebraic tangent space at m and will be denoted by T̂_m(M) (or just T̂_m).
The idea here is to bring to manifolds the concept of directional derivative; as the next example shows, directional derivatives at points in a normed linear space are derivations on the algebra of (germs of) smooth functions.
9.5.4. Example. Let V be a normed linear space and let a and v be vectors in V. Define D_{v,a} on germs f̂ in G_a(V) by
D_{v,a}(f̂) := D_v f(a).
(Here, D_v f(a) is the usual directional derivative of f at a in the direction of v from beginning calculus.) Then D_{v,a} is well-defined and is a derivation on G_a. (Think of the operator D_{v,a} as being differentiation in the direction of the vector v followed by evaluation at a.)
9.5.5. Proposition. If m is a point in a manifold, v is a derivation on G_m, and k is a smooth real valued function constant in some neighborhood of m, then v(k̂) = 0.
Hint for proof. Recall that 1 · 1 = 1.
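Here is the computation the hint points to: v(1̂) = v(1̂ · 1̂) = 1 · v(1̂) + v(1̂) · 1 = 2 v(1̂), so v(1̂) = 0; and if k has constant value α near m, then k̂ = α 1̂, so linearity gives v(k̂) = α v(1̂) = 0.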
It is an easy matter to show that the terminology algebraic tangent space adopted in defini-
tion 9.5.3 is not overly optimistic. It is a vector space.
9.5.6. Proposition. If m is a point in a manifold, then the tangent space Tbm is a vector space
under the usual pointwise definition of addition and scalar multiplication.
We now look at an example which establishes a first connection between the geometric and the
algebraic tangent spaces.
9.5.7. Example. Let m be a point in a manifold M and c̃ be a vector in the geometric tangent space T̃_m. Define
v_c̃ : G_m → ℝ : f̂ ↦ D(f ∘ c)(0).
Then v_c̃ is well-defined and belongs to the algebraic tangent space T̂_m.
The notation v_c̃(f̂) of the preceding example is not particularly attractive. In the following material we will ordinarily write just v_c(f). Although strictly speaking this is incorrect, it should not lead to confusion. We have shown that v_c̃(f̂) depends only on the equivalence classes c̃ and f̂, not on the representatives chosen. Thus we do not distinguish between v_b(g) and v_c(f) provided that b and c belong to the same member of T̃_m and f and g to the same germ.
Example 9.5.7 is considerably more general than it may at first appear. Later in this section we will show that the association c̃ ↦ v_c̃ is an isomorphism between the tangent spaces T̃_m and T̂_m. In particular, there are no derivations on G_m other than those induced by curves at m.
For the moment, however, we wish to make plausible for an n-manifold the use of the notation
∂/∂x^k |_m
for the derivation v_{c̃_k}, where c_k is the kth standard basis curve at the point m determined by the chart φ = (x^1, . . . , x^n) (see the notational convention in 9.2.5 and example 9.4.10). The crux of the matter is the following proposition, which says that if c_u = φ⁻¹ ∘ b_u, where φ is a chart at m and b_u is the parametrized line through φ(m) in the direction of the vector u, then the value at a germ f̂ of the derivation v_{c_u} may be found by taking the directional derivative in the direction u of the local representative f ∘ φ⁻¹.
9.5.8. Proposition. Let φ be a chart containing the point m in an n-manifold. If u ∈ ℝ^n and b_u : ℝ → ℝ^n : t ↦ φ(m) + tu, then c_u := φ⁻¹ ∘ b_u is a curve at m and
v_{c_u}(f) = D_u(f ∘ φ⁻¹)(φ(m))
for all f̂ ∈ G_m.
Hint for proof. Use proposition 25.5.9 in [11].
The formula
(∂f/∂x^k)(m) = (f ∘ φ⁻¹)_k(φ(m))
(see 9.5.10 and 9.5.11) should make it clear that ∂/∂x^k depends on all the components of the chart φ and not just on the single component x^k. In any case, here is a concrete counterexample.
9.5.12. Example. Consider ℝ² as a 2-manifold (with the usual differential structure generated by the atlas whose only member is the identity map on ℝ²). Let φ = (x^1, x^2) be the identity map on ℝ² and ψ = (y^1, y^2) be the map defined by ψ : ℝ² → ℝ² : (u, v) ↦ (u, u + v). Clearly φ and ψ are charts containing the point m = (1, 1) and x^1 = y^1. We see, however, that ∂/∂x^1 ≠ ∂/∂y^1 by computing
(∂f/∂x^1)(1, 1) and (∂f/∂y^1)(1, 1)
for the function f : ℝ² → ℝ : (u, v) ↦ u²v.
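Carrying out the computation: since φ is the identity chart, (∂f/∂x^1)(1, 1) = 2uv |_(1,1) = 2. On the other hand ψ⁻¹(y^1, y^2) = (y^1, y^2 − y^1), so f ∘ ψ⁻¹(y^1, y^2) = (y^1)²(y^2 − y^1), whose first partial derivative is 2y^1 y^2 − 3(y^1)²; evaluating at ψ(1, 1) = (1, 2) gives (∂f/∂y^1)(1, 1) = 4 − 3 = 1 ≠ 2.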
9.5.13. Proposition (Change of Variables Formula). Let φ = (x^1, . . . , x^n) and ψ = (y^1, . . . , y^n) be charts at a point m on an n-manifold. Then
∂/∂x^k |_m = Σ_{j=1}^{n} (∂y^j/∂x^k)(m) ∂/∂y^j |_m  (9.5)
for 1 ≤ k ≤ n.
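It is instructive to check (9.5) against example 9.5.12: there y^1 = x^1 and y^2 = x^1 + x^2, so ∂y^1/∂x^1 = ∂y^2/∂x^1 = 1 and formula (9.5) gives ∂/∂x^1 |_m = ∂/∂y^1 |_m + ∂/∂y^2 |_m. Applied to the function f of that example this yields 1 + 1 = 2, in agreement with the direct computation.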
9.5.14. Remark. If φ = (x^1, . . . , x^n) and ψ = (y^1, . . . , y^n) are charts on an n-manifold which overlap (that is, dom φ ∩ dom ψ ≠ ∅), then the preceding change of variables formula holds for all m in dom φ ∩ dom ψ. Thus in the interests of economy of notation, symbols indicating evaluation at m are normally omitted. Then (9.5) becomes
∂/∂x^k = Σ_{j=1}^{n} (∂y^j/∂x^k) ∂/∂y^j  (9.6)
with the understanding that it may be applied to any real valued function which is smooth on dom φ ∩ dom ψ.
9.5.15. Exercise. Regard the 2-sphere in ℝ³ as a 2-manifold whose differentiable structure is generated by the stereographic projections from the north and south poles (see example 9.2.12). Let σ be the stereographic projection from the south pole. That is,
σ(x, y, z) = ( x/(1 + z), y/(1 + z) )
for all (x, y, z) ∈ S² such that z ≠ −1. Let ψ be defined by
ψ(x, y, z) = (y, z)
for all (x, y, z) ∈ S² such that x > 0. (It is easy to see that ψ is a chart.) Also define a real valued function f by
f(x, y, z) = x/y + y/z
for all (x, y, z) ∈ S² such that x, y, z > 0. Let m = (1/2, 1/2, 1/√2).
For these data verify by explicit computation formula (9.5). That is, by computing both sides show that for k = 1 and k = 2 the formula
(∂f/∂x^k)(m) = (∂y^1/∂x^k)(m) (∂f/∂y^1)(m) + (∂y^2/∂x^k)(m) (∂f/∂y^2)(m)
is correct for the charts σ and ψ, the function f, and the point m given in the preceding paragraph.
9.5.16. Proposition (Another version of the chain rule). Let G : M → N be a smooth map between manifolds of dimensions n and p, respectively, φ = (x^1, . . . , x^n) be a chart containing the point m in M, and ψ = (y^1, . . . , y^p) be a chart containing G(m) in N. Then
∂(f ∘ G)/∂x^k (m) = Σ_{j=1}^{p} ∂(y^j ∘ G)/∂x^k (m) · ∂f/∂y^j (G(m))  (9.7)
whenever f ∈ C∞_{G(m)} and 1 ≤ k ≤ n.
9.5.17. Remark. If one is willing to adopt a sufficiently relaxed attitude towards notation, many complicated looking formulas can be put in simpler form. Convince yourself that it is not beyond the realm of possibility for one to encounter equation (9.7) written in the form
∂z/∂x^k = Σ_{j=1}^{p} (∂z/∂y^j)(∂y^j/∂x^k).
9.5.18. Exercise. Consider the 2-manifold S² with the differential structure generated by the stereographic projections from the north and south poles (see example 9.2.12) and the 2-manifold P² with the differentiable structure generated by the atlas given in example 9.2.14. Recall from example 9.3.5 that the map F : S² → P² : (x, y, z) ↦ [(x, y, z)] is smooth. Define
h([(x, y, z)]) = (x + y)/(2z)
whenever [(x, y, z)] ∈ P² and z ≠ 0; and let m = (1/2, 1/2, 1/√2). It is clear that h is well-defined. (You may assume it is smooth in a neighborhood of F(m).) Let σ = (u, v) be the stereographic projection of S² from the south pole (see 9.2.12).
(a) Using the definitions of ∂/∂u |_m and ∂/∂v |_m compute ∂(h∘F)/∂u (m) and ∂(h∘F)/∂v (m).
(b) Let χ be the chart in P² defined in example 9.2.14. Use this chart, which contains F(m), and the version of the chain rule given in proposition 9.5.16 to compute (independently of part (a)) ∂(h∘F)/∂u (m) and ∂(h∘F)/∂v (m).
Let m be a point in an n-manifold. In example 9.5.7 we defined, for each c̃ in the geometric tangent space T̃_m, a function
v_c̃ : G_m → ℝ : f̂ ↦ D(f ∘ c)(0)
and showed that v_c̃ is (well-defined and) a derivation on the space G_m of germs at m. Thus the map v : c̃ ↦ v_c̃ takes members of T̃_m to members of T̂_m. The next few propositions lead to the conclusion that v : T̃_m → T̂_m is a vector space isomorphism and that consequently the two definitions of tangent space are essentially the same. Subsequently we will drop the diacritical marks tilde and circumflex that we have used to distinguish the geometric and algebraic tangent spaces and instead write just T_m for the tangent space at m. We will allow context to dictate whether a tangent vector (that is, a member of the tangent space) is to be interpreted as an equivalence class of curves or as a derivation. Our first step is to show that the map v is linear.
9.5.19. Proposition. Let m be a point in an n-manifold and
v : T̃_m → T̂_m : c̃ ↦ v_c̃
be the map defined in example 9.5.7. Then
(i) v_{b̃+c̃} = v_b̃ + v_c̃ and
(ii) v_{αc̃} = α v_c̃
for all b̃, c̃ ∈ T̃_m and α ∈ ℝ.
9.5.20. Proposition. Let m be a point in an n-manifold. Then the map
v : T̃_m → T̂_m : c̃ ↦ v_c̃
(defined in 9.5.7) is injective.
In order to show that the map v : T̃_m → T̂_m is surjective we will need to know that the tangent vectors ∂/∂x^1 |_m, . . . , ∂/∂x^n |_m span the tangent space T̂_m. The crucial step in this argument depends on adapting the (second order) Taylor's formula so that it holds on finite dimensional manifolds.
9.5.21. Lemma. Let φ = (x^1, . . . , x^n) be a chart containing a point m in an n-manifold and f be a member of C∞_m. Then there exist a neighborhood U of m and smooth functions s_jk (for 1 ≤ j, k ≤ n) such that
f = f(m) + Σ_{j=1}^{n} (x^j − a^j)(f ∘ φ⁻¹)_j(a) + Σ_{j,k=1}^{n} (x^j − a^j)(x^k − a^k) s_jk
where a = φ(m).
Hint for proof. Apply Taylor's formula to the local representative f ∘ φ⁻¹.
9.5.22. Proposition. If φ = (x^1, . . . , x^n) is a chart containing a point m in an n-manifold, then the derivations ∂/∂x^1 |_m, . . . , ∂/∂x^n |_m span the algebraic tangent space T̂_m. In fact, if w is an arbitrary element of T̂_m, then
w = Σ_{j=1}^{n} w(x̂^j) ∂/∂x^j |_m.
    T̃_m ----dF̃_m----> T̃_{F(m)}
     |v                  |v
     ↓                   ↓
    T̂_m ----dF̂_m----> T̂_{F(m)}
In light of the isomorphism between the geometric and algebraic tangent spaces to a manifold
(see 9.5.24) and the equivalence of the respective differential maps (proved in the preceding propo-
sition), we will for the most part write just Tm for either type of tangent space and dFm for either
differential of a smooth map. In situations where the difference is important context should make
it clear which one is intended.
9.5.29. Corollary. If F : M → N is a smooth map between finite dimensional manifolds and m ∈ M, then dF̂_m is a linear transformation from the algebraic tangent space T̂_m into T̂_{F(m)}.
9.5.30. Proposition (Yet another chain rule). If F : M → N and G : N → P are smooth maps between finite dimensional manifolds and m ∈ M, then
d̂(G ∘ F)_m = dĜ_{F(m)} ∘ dF̂_m.
In the next exercise we consider the tangent space T_a at a point a in ℝ. Here, as usual, the differential structure on ℝ is taken to be the one generated by the atlas whose only chart is the identity map I on ℝ. The tangent space is one-dimensional; it is generated by the tangent vector ∂/∂I |_a. This particular notation is, as far as I know, never used; some alternative standard notations are d/dx |_a, d/dt |_a, and d/dI |_a. And, of course, d/dx |_a (f̂) is written as df/dx (a).
9.5.31. Exercise. Let a ∈ ℝ and g ∈ C∞_a. Find dg/dx (a).
9.5.32. Exercise. If f ∈ C∞_m, where m is a point in some n-manifold, and w ∈ T_m, then df_m(w) belongs to T_{f(m)}. Since the tangent space at f(m) is one-dimensional, there exists α ∈ ℝ such that
df_m(w) = α d/dI |_{f(m)}.
Show that α = w(f).
containing m. Then
df_m = Σ_{k=1}^{n} (∂f/∂x^k)(m) dx^k |_m.
Hint for proof. There exist scalars α_1, . . . , α_n such that df_m = Σ_{j=1}^{n} α_j dx^j |_m. (Why?) Consider
( Σ_{j=1}^{n} α_j dx^j |_m ) ( ∂/∂x^k |_m ).
Notice that this proposition provides some meaning (and justification) for the conventional formula frequently trotted out in beginning calculus courses:
df = (∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz.
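For instance, for the function f(x, y, z) = x²yz on the 3-manifold ℝ³ the proposition gives df_m = 2xyz dx |_m + x²z dy |_m + x²y dz |_m at each point m.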
CHAPTER 10
and
⋀(M) = ⋃ { ⋀(T_m) : m ∈ M }.
10.3.2. Definition. A differential form is a section of ⋀(M). Thus ω is a differential form if ω(m) ∈ ⋀(T_m) for every m ∈ M. Similarly, ω is a differential k-form (or just a k-form) if it is a section of ⋀^k(M). Notice that for 1-forms this definition agrees with the one given in 10.2.2 since ⋀^1(M) = T*M. Also notice that a 0-form is just a real valued function on M (because of our
It should be kept in mind that the coefficients a_{j_1...j_k} in this expression are functions and that they depend on the choice of coordinate system (chart). The k-form ω is smooth with respect to the chart φ if all the coefficients a_{j_1...j_k} are smooth real valued functions on U. A k-form defined on all of M is smooth if it is smooth with respect to every chart on M. A differential form is smooth if its component in ⋀^k(M) is smooth for every k. The set of smooth differential forms on M is denoted by C∞(M, ⋀(M)) and the set of smooth k-forms by C∞(M, ⋀^k(M)).
10.3.5. Convention. In the sequel all k-forms are smooth differential k-forms and all differential
forms are smooth.
The next theorem defines a mapping d on differential forms called the exterior differenti-
ation operator.
10.3.6. Theorem. If M is an n-manifold, then there exists a unique linear map
d : C∞(M, ⋀(M)) → C∞(M, ⋀(M))
which satisfies
(1) d( C∞(M, ⋀^k(M)) ) ⊆ C∞(M, ⋀^{k+1}(M));
where d^{k−1} and d^k are exterior differentiation operators. Elements of ker d^k are called closed k-forms and elements of ran d^{k−1} are exact k-forms. In other words, a k-form ω is closed if dω = 0. It is exact if there exists a (k − 1)-form η such that ω = dη.
10.5.2. Proposition. Every exact differential form is closed.
10.5.3. Proposition. If ω and μ are closed differential forms, so is ω ∧ μ.
10.5.4. Proposition. If ω is an exact form and μ is a closed form, then ω ∧ μ is exact.
10.5.5. Example. Let φ = (x, y, z) : U → ℝ³ be a chart on a 3-manifold and ω = a dx + b dy + c dz be a 1-form on U. If ω is exact, then
∂c/∂y = ∂b/∂z,  ∂a/∂z = ∂c/∂x,  and  ∂b/∂x = ∂a/∂y.
10.5.6. Exercise. Determine if each of the following 1-forms is exact in ℝ². If it is, specify the 0-form of which it is the differential.
(1) y e^{xy} dx + x e^{xy} dy;
(2) x sin y dx + x cos y dy; and
(3) ( arctan y/√(1 − x²) + 3x² ) dx + ( arcsin x/(1 + y²) + e^{√y}/(2√y) ) dy.
10.5.7. Exercise. Explain why solving the initial value problem
e^x cos y + 2x − e^x (sin y) y′ = 0,  y(0) = π/3
is essentially the same thing as showing that the 1-form (e^x cos y + 2x) dx − e^x (sin y) dy is exact.
Do it.
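(A check on your answer: the 1-form above is d(e^x cos y + x²), as one verifies by differentiating; so solutions of the initial value problem are given implicitly by e^x cos y + x² = C, and the initial condition y(0) = π/3 gives C = cos(π/3) = 1/2.)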
β := Σ_{j=1}^{k} (−1)^{j−1} x^{i_j} dx^{i_1} ∧ ⋯ ∧ d̂x^{i_j} ∧ ⋯ ∧ dx^{i_k}
and
h(ω) := g β.
In the definition of β, the circumflex above the term dx^{i_j} indicates that that term is deleted. For example, if k = 3, then
β = x^{i_1} dx^{i_2} ∧ dx^{i_3} − x^{i_2} dx^{i_1} ∧ dx^{i_3} + x^{i_3} dx^{i_1} ∧ dx^{i_2}.
For each k extend h to all of ⋀^k(U) by requiring it to be linear. Thus
h( ⋀^k(U) ) ⊆ ⋀^{k−1}(U).
10.6.2. Theorem (Poincaré's lemma). If U is a nonempty open convex subset of ℝ^n and p ≥ 1, then every closed p-form on U is exact.
Hint for proof. Let p be a fixed integer such that 1 ≤ p ≤ n. Let i_1, . . . , i_p be distinct integers between 1 and n, and let a be a 0-form on U. Define
ω := dx^{i_1} ∧ ⋯ ∧ dx^{i_p},
μ := a ω,
and, for 1 ≤ k ≤ n, define
ω^k := a_k dx^k ∧ ω.
(Here, a_k = ∂a/∂x^k.)
Now, do the following.
(a) Show that g_k(x) = ∫_0^1 a_k(tx) t^p dt for 1 ≤ k ≤ n and x ∈ U.
(b) Show that β^k = x^k ω − dx^k ∧ β for 1 ≤ k ≤ n (where β^k is the form β associated with ω^k).
(c) Compute h(ω^k) for 1 ≤ k ≤ n.
(d) Show that dμ = Σ_{k=1}^{n} ω^k.
(e) Compute h(dμ).
(f) Show that dβ = p ω.
(g) Compute d(h(μ)).
(h) Compute (d/dt)( t^p a(tx) ).
(i) Show that p g + Σ_{k=1}^{n} g_k x^k = a.
(j) Show that (dh + hd)(μ) = μ.
10.6.3. Proposition. If ω is a 1-form on a nonempty convex open subset U of ℝ³ with curl ω = 0, then there exists a 0-form f on U such that ω = grad f.
10.6.4. Exercise. Use the proof of Poincaré's lemma to find a 0-form on ℝ³ whose gradient is
(2xyz³ − y²z) dx + (x²z³ − 2xyz) dy + (3x²yz² − xy²) dz.
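However you produce it, an answer is easy to verify: for f(x, y, z) = x²yz³ − xy²z one checks that ∂f/∂x = 2xyz³ − y²z, ∂f/∂y = x²z³ − 2xyz, and ∂f/∂z = 3x²yz² − xy², so this f (up to an added constant) works.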
10.6.5. Exercise. Consider the 1-form η = e^z dx + x dy in ℝ³ and let ω = dη. Use the proof of Poincaré's lemma to find another 1-form λ = a dx + b dy + c dz such that ω = dλ. Explain carefully what happens at z = 0. Find (∂^n a/∂z^n)(0, 0, 0) for every integer n ≥ 0.
10.6.6. Proposition. If ω is a 1-form on a nonempty convex open subset U of ℝ³ with div ω = 0, then there exists a 1-form η on U such that ω = curl η.
10.6.7. Exercise. Let ω = 2xyz dx + x³z² dy − yz² dz. Check that div ω = 0. Use the proof of Poincaré's lemma to find a 1-form whose curl is ω.
10.6.8. Proposition. Every smooth real valued function f on a nonempty convex open subset U of ℝ³ is div ω for some 1-form ω on U.
10.6.9. Exercise. In the proof of the preceding proposition, what needs to be changed if U lies
in R2 ?
10.6.10. Exercise. Use the proof of Poincaré's lemma to find a 1-form on ℝ³ whose divergence is the function
f : (x, y, z) ↦ xy − y²z + xz³.
CHAPTER 11
HOMOLOGY AND COHOMOLOGY
in (11.1) slightly:
(⋀^k F)(ω)(v_1, . . . , v_k) = ω( dF(v_1), . . . , dF(v_k) ).  (11.2)
Denote by ⋀F the map induced by the maps ⋀^k F which takes the ℤ-graded algebra ⋀(N) to the ℤ-graded algebra ⋀(M).
11.1.5. Example. For each k ∈ ℤ⁺ the pair of maps M ↦ ⋀^k(M) and F ↦ ⋀^k(F) (as defined in 10.3.1 and 11.1.4) is a contravariant functor from the category of smooth manifolds and smooth maps to the category of vector spaces and linear maps.
11.1.6. Proposition. If F : M → N is a smooth function between smooth manifolds, ω ∈ ⋀^j(N), and μ ∈ ⋀^k(N), then
(⋀^{j+k} F)(ω ∧ μ) = (⋀^j F)(ω) ∧ (⋀^k F)(μ).
11.1.7. Exercise. In example 11.1.5 you showed that ⋀^k was a functor for each k. What about ⋀ itself? Is it a functor? Explain.
11.1.8. Proposition. If F : M → N is a smooth function between smooth manifolds, then
d ∘ ⋀F = ⋀F ∘ d.
11.1.9. Exercise. Let V = {0} be the 0-dimensional Euclidean space. Compute the kth de Rham cohomology group H^k(V) for all k ∈ ℤ.
11.1.10. Exercise. Compute H^k(ℝ) for all k ∈ ℤ.
11.1.11. Exercise. Let U be the union of m disjoint open intervals in ℝ. Compute H^k(U) for all k ∈ ℤ.
11.1.12. Exercise. Let U be an open subset of ℝ^n. For [ω] ∈ H^j(U) and [μ] ∈ H^k(U) define
[ω][μ] = [ω ∧ μ] ∈ H^{j+k}(U).
Explain why proposition 10.5.3 is necessary for this definition to make sense. Prove also that this definition does not depend on the representatives chosen from the equivalence classes. Show that this definition makes H*(U) = ⊕_{k∈ℤ} H^k(U) into a ℤ-graded algebra. This is the de Rham cohomology algebra of U.
11.1.13. Definition. Let F : M → N be a smooth function between smooth manifolds. For each integer k define
H^k(F) : H^k(N) → H^k(M) : [ω] ↦ [ (⋀^k F)(ω) ].
Denote by H*(F) the induced map which takes the ℤ-graded algebra H*(N) into H*(M).
11.1.14. Example. With the definitions given in 11.1.12 and 11.1.13, H* becomes a contravariant functor from the category of open subsets of ℝ^n and smooth maps to the category of ℤ-graded algebras and their homomorphisms.
of vector spaces and linear maps is a cochain complex if d^k ∘ d^{k−1} = 0 for all k ∈ ℤ. Such a sequence may be denoted by (V*, d) or just by V*.
11.2.2. Definition. We generalize definition 11.1.1 in the obvious fashion. If V* is a cochain complex, then the kth cohomology group H^k(V*) is defined to be ker d^k / ran d^{k−1}. (As before, this group is actually a vector space.) In this context the elements of V^k are often called k-cochains, elements of ker d^k are k-cocycles, elements of ran d^{k−1} are k-coboundaries, and d is the coboundary operator.
    · · · ---> V^k ----d----> V^{k+1} ---> · · ·
               |G_k           |G_{k+1}
               ↓              ↓
    · · · ---> W^k ----d----> W^{k+1} ---> · · ·
commutes.
11.2.4. Proposition. Let G : V* → W* be a cochain map between cochain complexes. For each k ∈ ℤ define
G_k* : H^k(V*) → H^k(W*) : [v] ↦ [G_k(v)]
whenever v is a cocycle in V^k. Then the maps G_k* are well defined and linear.
Hint for proof. To prove that G_k* is well-defined we need to show two things: that G_k(v) is a cocycle in W^k and that the definition does not depend on the choice of representative v.
11.2.5. Definition. A sequence
0 → U* --F--> V* --G--> W* → 0
of cochain complexes and cochain maps is (short) exact if for every k ∈ ℤ the sequence
0 → U^k --F_k--> V^k --G_k--> W^k → 0
is exact. For such a short exact sequence
0 → U* --F--> V* --G--> W* → 0
there is, for each k ∈ ℤ, a well-defined linear connecting map η_k : H^k(W*) → H^{k+1}(U*).
Hint for proof. If w is a cocycle in W^k, then, since G_k is surjective, there exists v ∈ V^k such that w = G_k(v). It follows that dv ∈ ker G_{k+1} = ran F_{k+1} so that dv = F_{k+1}(u) for some u ∈ U^{k+1}. Let η_k([w]) = [u].
11.3.3. CAUTION. Notice that there is a slight conflict between this notation, when applied to the vector space ℝ of real numbers, and the usual notation for closed intervals on the real line. In ℝ the closed segment [a, b] is the same as the closed interval [a, b] provided that a ≤ b. If a > b, however, the closed segment [a, b] is the same as the segment [b, a]; it contains all numbers c such that b ≤ c ≤ a, whereas the closed interval [a, b] is empty.
11.3.4. Definition. A subset C of a vector space V is convex if the closed segment [a, b] is contained in C whenever a, b ∈ C.
11.3.5. Definition. Let A be a subset of a vector space V. The convex hull of A is the smallest convex subset of V which contains A.
11.3.6. Exercise. Show that definition 11.3.5 makes sense by showing that the intersection of
a family of convex subsets of a vector space is itself convex. Then show that a constructive
characterization is equivalent; that is, prove that the convex hull of A is the set of all convex
combinations of elements of A.
11.3.7. Definition. A set S = {v0 , v1 , . . . , vp } of p + 1 vectors in a vector space V is convex
independent if the set {v1 v0 , v2 v0 , . . . , vp v0 } is linearly independent in V .
11.3.8. Definition. An affine subspace of a vector space V is any translate of a linear subspace
of V .
11.3.9. Example. The line whose equation is y = 2x − 5 is not a linear subspace of ℝ². But it is an affine subspace: it is the line determined by the equation y = 2x (which is a linear subspace of ℝ²) translated downwards parallel to the y-axis by 5 units.
11.3.10. Definition. Let p ∈ ℤ⁺. The closed convex hull of a convex independent set S = {v_0, . . . , v_p} of p + 1 vectors in some vector space is a closed p-simplex. It is denoted by [s] or by [v_0, . . . , v_p]. The integer p is the dimension of the simplex. The open p-simplex determined by the set S is the set of all convex combinations Σ_{k=0}^{p} α_k v_k of elements of S where each α_k > 0. The open simplex will be denoted by (s) or by (v_0, . . . , v_p). We make the special convention that a single vector {v} is both a closed and an open 0-simplex.
If [s] is a simplex in ℝ^n then the plane of [s] is the affine subspace of ℝ^n having the least dimension which contains [s]. It turns out that the open simplex (s) is the interior of [s] in the plane of [s].
11.3.11. Definition. Let [s] = [v0 , . . . , vp ] be a closed p -simplex in Rn and {j0 , . . . , jq } be a
nonempty subset of {0, 1, . . . , p}. Then the closed q -simplex [t] = [vj0 , . . . , vjq ] is a closed q-face
of [s]. The corresponding open simplex (t) is an open q-face of [s]. The 0 -faces of a simplex are
called the vertices of the simplex.
Note that distinct open faces of a closed simplex [s] are disjoint and that the union of all the
open faces of [s] is [s] itself.
11.3.12. Definition. Let [s] = [v_0, . . . , v_p] be a closed p-simplex in ℝ^n. We say that two orderings (v_{i_0}, . . . , v_{i_p}) and (v_{j_0}, . . . , v_{j_p}) of the vertices are equivalent if (j_0, . . . , j_p) is an even permutation of (i_0, . . . , i_p). (This is an equivalence relation.) For p ≥ 1 there are exactly two equivalence classes; these are the orientations of [s]. An oriented simplex is a simplex together with one of these orientations. The oriented simplex determined by the ordering (v_0, . . . , v_p) will be denoted by ⟨v_0, . . . , v_p⟩. If, as above, [s] is written as [v_0, . . . , v_p], then we may shorten ⟨v_0, . . . , v_p⟩ to ⟨s⟩. Of course, none of the preceding makes sense for 0-simplexes. We arbitrarily assign them two orientations, which we denote by + and −. Thus ⟨s⟩ and −⟨s⟩ have opposite orientations.
11.3.13. Definition. A finite collection K of open simplexes in ℝ^n is a simplicial complex if the following conditions are satisfied:
(1) if (s) ∈ K and (t) is an open face of [s], then (t) ∈ K; and
11.3.22. Definition. Let K be a simplicial complex. The number β_p := dim H_p(K) is the pth Betti number of the complex K. And χ(K) := Σ_{p=0}^{dim K} (−1)^p β_p is the Euler characteristic of K.
11.3.23. Proposition. Let K be a simplicial complex. For 0 ≤ p ≤ dim K let α_p be the number of p-simplexes in K. That is, α_p = dim C_p(K). Then
χ(K) = Σ_{p=0}^{dim K} (−1)^p α_p.
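For example, if K triangulates the boundary of a triangle (three vertices and three open 1-simplexes), then χ(K) = 3 − 3 = 0; adjoining the open 2-simplex gives a complex with χ = 3 − 3 + 1 = 1.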
(3) (F*ω)_m(v) = ω_{F(m)}( dF_m(v) ) for every 1-form ω on N, every m ∈ M, and every v ∈ T_m.
11.4.8. Proposition. If F : M → N is a smooth map between n-manifolds, then F* is a cochain map from the cochain complex (⋀(N), d) to the cochain complex (⋀(M), d). That is, the diagram

    ⋀^p(N) ----d----> ⋀^{p+1}(N)
     |F*              |F*
     ↓                ↓
    ⋀^p(M) ----d----> ⋀^{p+1}(M)

commutes.
CHAPTER 12
STOKES' THEOREM
where the right hand side is an ordinary Riemann integral. If ⟨v_0⟩ is a 0-simplex, we make a special definition
∫_{⟨v_0⟩} f = f(v_0)
for every 0-form f.
Extend the preceding definition to p-chains by requiring the integral to be linear as a function of simplexes; that is, if c = Σ a_s ⟨s⟩ is a p-chain (in some simplicial complex) and ω is a p-form, define
∫_c ω = Σ a_s ∫_{⟨s⟩} ω.
The tangential component of F, written F_T, may be regarded as the 1-form Σ_{k=1}^{n} F^k dx^k.
P
Make sense of the preceding definition in terms of the definition of the integral of 1 -forms over
a smoothly triangulated manifold. For simplicity take n = 2. Hint. Suppose we have the following:
(1) ht0 , t1 i (with t0 < t1 ) is an oriented 1 -simplex in R;
(2) V is an open subset of R2 ;
(3) c : J V is an injective smooth curve in V , where J is an open interval containing [t0 , t1 ];
and
(4) = a dx + b dy is a smooth 1 -form on V .
First show that
c (dx) (t) = Dc1 (t)
for t0 t t1 . (We drop the notational distinction between c and its extension cs to J. Since
the tangent space Tt is one-dimensional for every t, we identify Tt with R. Choose v (in (3) of
proposition 11.4.7) to be the usual basis vector in R, the number 1.)
Show in a similar fashion that
c (dy) (t) = Dc2 (t) .
Then write an expression for c () (t). Finally conclude that 1 (ht0 , t1 i) is indeed equal to
R
R t1
t0 h(a, b) c, Dci as claimed in 12.1.
12.1.4. Exercise. Let S¹ be the unit circle in ℝ² oriented counterclockwise and let F be the vector field on ℝ² defined by F(x, y) = (2x³ − y³) i + (x³ + y³) j. Use your work in exercise 12.1.3 to calculate ∫_{S¹} F_T. Hint. You may use without proof two facts: (1) the integral does not depend on the parametrization (triangulation) of the curve, and (2) the results of exercise 12.1.3 hold also for simple closed curves in ℝ²; that is, for curves c : [t_0, t_1] → ℝ² which are injective on the open interval (t_0, t_1) but which satisfy c(t_0) = c(t_1).
12.1.5. Notation. Let H^n = {x ∈ ℝ^n : x^n ≥ 0}. This is the upper half-space of ℝ^n.
12.1.6. Definition. An n-manifold with boundary is defined in the same way as an n-manifold except that the range of a chart is assumed to be an open subset of H^n.
The interior of H^n, denoted by int H^n, is defined to be {x ∈ ℝ^n : x^n > 0}. (Notice that this is the interior of H^n regarded as a subset of ℝ^n, not of H^n.) The boundary of H^n, denoted by ∂H^n, is defined to be {x ∈ ℝ^n : x^n = 0}.
If M is an n-manifold with boundary, a point m ∈ M belongs to the interior of M (denoted by int M) if φ(m) ∈ int H^n for some chart φ. And it belongs to the boundary of M (denoted by ∂M) if φ(m) ∈ ∂H^n for some chart φ.
12.1.7. Theorem. Let M and N be smooth n-manifolds with boundary and F : M → N be a smooth diffeomorphism. Then both int M and ∂M are smooth manifolds (without boundary). The interior of M has dimension n and the boundary of M has dimension n − 1. The mapping F induces smooth diffeomorphisms int F : int M → int N and ∂F : ∂M → ∂N.
Proof. Consult the marvelous text [2], proposition 7.2.6.
12.1.8. Exercise. Let V be an open subset of ℝ³, F : V → ℝ³ be a smooth vector field, and (S, K, h) be a smoothly triangulated 2-manifold such that S ⊆ V. It is conventional to define the normal component of F over S, often denoted by ∬_S F_N, by the formula
∬_S F_N = ∬_K ⟨F ∘ h, n⟩
F = a i + b j + c k (where a, b, and c are smooth functions) and let ω = a dy ∧ dz + b dz ∧ dx + c dx ∧ dy. This 2-form ω is conventionally called the normal component of F and is denoted by F_N. Notice that F_N is just ∗η where η is the 1-form associated with the vector field F. Hint. Proceed as follows.
(a) Show that the vector n(u, v) is perpendicular to the surface S at h(u, v) for each (u, v) in [K] by showing that it is perpendicular to D(h ∘ c)(0) whenever c is a smooth curve in [K] such that c(0) = (u, v).
(b) Let u and v (in that order) be the coordinates in the plane of [K] and x, y, and z (in that order) be the coordinates in ℝ³. Show that h*(dx) = h^1_1 du + h^1_2 dv. Also compute h*(dy) and h*(dz).
Remark. If at each point in [K] we identify the tangent plane to ℝ² with ℝ² itself and if we use conventional notation, the v which appears in (3) of proposition 11.4.7 is just not written. One keeps in mind that the components of h and all the differential forms are functions on (a neighborhood of) [K].
(c) Now find h*(ω). (Recall that ω = F_N is defined above.)
(d) Show for each simplex (s) in K that
∫_{⟨s⟩} ω = ∬_{[s]} ⟨F ∘ h, n⟩.
(e) Finally show that if ⟨s_1⟩, . . . , ⟨s_n⟩ are the oriented 2-simplexes of K and c = Σ_{k=1}^{n} ⟨s_k⟩, then
∫_c ω = ∬_{[K]} ⟨F ∘ h, n⟩.
Proof. This is an important and standard theorem, which appears in many versions and with
many different proofs. See, for example, [2], theorem 7.2.6; [20], chapter XVII, theorem 2.1; [26],
theorem 10.8; or [33], theorems 4.7 and 4.9.
Recall that when we say in Stokes' theorem that the integration operator is a cochain map, we are saying that the following diagram commutes.

    · · · --d--> ⋀^p(M) ----d----> ⋀^{p+1}(M) --d--> · · ·
                 |∫                |∫
                 ↓                 ↓
    · · · -----> C^p(K) ---------> C^{p+1}(K) -----> · · ·
This last equation (12.2) can be written in terms of integration over oriented simplexes:
∫_{⟨s⟩} dω = ∫_{∂⟨s⟩} ω.  (12.3)
In more conventional notation all mention of the triangulating simplicial complex K and of the map h is suppressed. This is justified by the fact that it can be shown that the value of the integral is independent of the particular triangulation used. Then when the equations of the form (12.3) are added over all the (p + 1)-simplexes comprising K we arrive at a particularly simple formulation of (the conclusion of) Stokes' theorem:
∫_M dω = ∫_{∂M} ω.  (12.4)
One particularly important topic that has been glossed over in the preceding is a discussion of
orientable manifolds (those which possess nowhere vanishing volume forms), their orientations, and
the manner in which an orientation of a manifold with boundary induces an orientation on its
boundary. One of many places where you can find a careful development of this material is in
sections 6.5 and 7.2 of [2].
12.2.2. Theorem. Let ω be a 1-form on a connected open subset U of ℝ². Then ω is exact on U if and only if ∮_C ω = 0 for every simple closed curve C in U.
Proof. See [9], chapter 2, proposition 1.
12.2.3. Example. Let
ω = (−y dx + x dy)/(x² + y²).
On the region ℝ² \ {(0, 0)} the 1-form ω is closed but not exact.
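To see that ω is closed, write ω = a dx + b dy with a = −y/(x² + y²) and b = x/(x² + y²) and compute ∂b/∂x = (y² − x²)/(x² + y²)² = ∂a/∂y, so dω = 0. That ω is not exact follows from theorem 12.2.2: parametrizing S¹ by (cos t, sin t) gives ∮_{S¹} ω = ∫_0^{2π} dt = 2π ≠ 0.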
12.2.4. Exercise. What classical theorem do we get from the version of Stokes' theorem given by equation (12.4) in the special case that M is a flat 1-manifold (with boundary) in ℝ and ω is a 0-form defined on some open set in ℝ which contains M? Explain.
12.2.5. Exercise. What classical theorem do we get from the version of Stokes' theorem given by equation (12.4) in the special case that M is a (not necessarily flat) 1-manifold (with boundary) in ℝ³ and ω is a 0-form defined on some open subset of ℝ³ which contains M? Explain.
12.2.6. Exercise. What classical theorem do we get from the version of Stokes' theorem given by equation (12.4) in the special case that M is a flat 2-manifold (with boundary) in ℝ² and ω is the 1-form associated with a vector field F : U → ℝ² defined on an open subset U of ℝ² which contains M? Explain.
12.2.7. Exercise. Use exercise 12.2.6 to compute ∫_{S¹} (2x³ − y³) dx + (x³ + y³) dy (where S¹ is the unit circle oriented counterclockwise).
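One way to organize the computation, assuming you have identified the theorem in 12.2.6 as Green's theorem: the integral equals ∬_D ( ∂(x³ + y³)/∂x − ∂(2x³ − y³)/∂y ) dA = ∬_D 3(x² + y²) dA over the closed unit disk D; in polar coordinates this is ∫_0^{2π} ∫_0^1 3r³ dr dθ = 3π/2.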
12.2.8. Exercise. Let F(x, y) = (y, x) and let C_a and C_b be the circles centered at the origin with radii a and b, respectively, where a < b. Suppose that C_a is oriented clockwise and C_b is oriented counterclockwise. Find
∫_{C_a} F · dr + ∫_{C_b} F · dr.
12.2.9. Exercise. What classical theorem do we get from the version of Stokes' theorem given by equation (12.4) in the special case that M is a (not necessarily flat) 2-manifold (with boundary) in ℝ³ and ω is the 1-form associated with a vector field F : U → ℝ³ defined on an open subset U of ℝ³ which contains M? Explain.
12.2.10. Exercise. What classical theorem do we get from the version of Stokes' theorem given by equation (12.4) in the special case that M is a (flat) 3-manifold (with boundary) in ℝ³ and ω = ∗η where η is the 1-form associated with a vector field F : U → ℝ³ defined on an open subset U of ℝ³ which contains M? Explain.
12.2.11. Exercise. Your good friend Fred R. Dimm calls you on his cell phone seeking help with a math problem. He says that he wants to evaluate the integral of the normal component of the vector field on ℝ³ whose coordinate functions are x, y, and z (in that order) over the surface of a cube whose edges have length 4. Fred is concerned that he's not sure of the coordinates of the vertices of the cube. How would you explain to Fred (over the phone) that it doesn't matter where the cube is located and that it is entirely obvious that the value of the surface integral he is interested in is 192?
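The computation you would talk Fred through, assuming you have identified the theorem in 12.2.10 as the divergence theorem: div F = ∂x/∂x + ∂y/∂y + ∂z/∂z = 3, so the surface integral is 3 times the volume of the cube, that is, 3 · 4³ = 192, regardless of where the cube sits.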
CHAPTER 13
GEOMETRIC ALGEBRA
13.1.5. Proposition. If v is a 1-vector in Cl(2, 0), then v² = ‖v‖². (Here, v² means vv.)
13.1.6. Corollary. Every nonzero 1-vector v in Cl(2, 0) has an inverse (with respect to Clifford multiplication). As usual, the inverse of v is denoted by v⁻¹.
13.1.7. Exercise. Justify the claims made in the last two sentences of definition 13.1.2.
The magnitude (or norm) of a 1-vector is just its length. We will also assign a magnitude (or
norm) to bivectors. The motivation for the following definition is that we wish a bivector v w to
represent an equivalence class of directed regions in the plane, two such regions being equivalent
if they have the same area and the same orientation (positive = counterclockwise or negative =
clockwise). So we will take the magnitude of v w to be the area of the parallelogram generated
by v and w.
13.1.8. Definition. Let v and w be 1-vectors in Cl(2, 0). Define ‖v ∧ w‖ := ‖v‖‖w‖ sin θ where θ is the angle between v and w (0 ≤ θ ≤ π).
13.1.9. Proposition. If v and w are 1-vectors in Cl(2, 0), then ‖v ∧ w‖ is the area of the parallelogram generated by v and w.
13.1.10. Proposition. If v and w are 1-vectors in Cl(2, 0), then v ∧ w = ±‖v ∧ w‖ e12 = det(v, w) e12.
13.1.11. Exercise. Suppose you know how to find the Clifford product of any two elements of
Cl(2, 0). Explain how to use equation (13.1) to recapture formulas defining the inner product and
the wedge product of two 1-vectors in Cl(2, 0).
A vector in the plane is usually regarded as an equivalence class of directed segments. (Two
directed segments are taken to be equivalent if they lie on parallel lines, have the same length,
and point in the same direction.) Each such equivalence class of directed segments has a standard
representative, the one whose tail is at the origin. Two standard representatives are parallel if one
is a nonzero multiple of the other. By a common abuse of language, where we conflate the notions
of directed segments and vectors, we say that two nonzero vectors in the plane are parallel if one
is a scalar multiple of the other.
13.1.12. Proposition. Two nonzero vectors in Cl(2, 0) are parallel if and only if their Clifford
product commutes.
13.1.13. Definition. Two elements, a and b, in a semigroup (or ring, or algebra) anticommute if ba = −ab.
13.1.14. Example. The bivector e12 anticommutes with all vectors in Cl(2, 0).
13.1.15. Proposition. Two vectors in Cl(2, 0) are perpendicular if and only if their Clifford prod-
uct anticommutes.
Let v and w be nonzero vectors in Cl(2, 0). We can write v as the sum of two vectors v∥ and v⊥, where v∥ is parallel to w and v⊥ is perpendicular to w. The vector v∥ is the parallel component of v and v⊥ is the perpendicular component of v (with respect to w).
13.1.16. Proposition. Let v and w be nonzero vectors in Cl(2, 0). Then v∥ = ⟨v, w⟩ w⁻¹ and v⊥ = (v ∧ w) w⁻¹.
13.1.17. Definition. If v and w are nonzero 1-vectors in Cl(2, 0), the reflection of v across w is the map v = v∥ + v⊥ ↦ v∥ − v⊥.
13.1.18. Proposition. If v and w are nonzero 1-vectors in Cl(2, 0), then the reflection v′ = v∥ − v⊥ of v across w is given by v′ = w v w⁻¹.
Although Cl(2, 0) is a real algebra, it contains a field isomorphic to the complex numbers. We start with the observation that e12² = −1. This prompts us to give a new name to e12. We will call it ι. In the space ⋀^0(ℝ²) ⊕ ⋀^2(ℝ²), of scalars plus pseudoscalars, this element ι serves the same role as the complex number i does in the complex plane ℂ.
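The observation is a one-line computation: e12² = e1 e2 e1 e2 = −e1 e1 e2 e2 = −1, using e2 e1 = −e1 e2 and e1² = e2² = 1.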
with ζ⁻¹ = ζ̄ / |ζ|².
13.1.21. Proposition. The space ⋀^0(ℝ²) ⊕ ⋀^2(ℝ²) (with Clifford multiplication) is a field and is isomorphic to the field ℂ of complex numbers.
Complex numbers serve two purposes in the plane. They implement rotations via their representation in polar form z = re^{iθ} and, as points, they represent vectors. Recall that in the Clifford algebra Cl(2, 0) we have ⋀^1(ℝ²) = ℝ². So here vectors live in ⋀^1(ℝ²) while (the analogs of) complex numbers live in ⋀^0(ℝ²) ⊕ ⋀^2(ℝ²). Since these are both two-dimensional vector spaces, they are isomorphic. It turns out that there exists a simple, natural, and very useful isomorphism between these spaces: left multiplication by e1.
13.1.22. Notation. In the remainder of this section we will shorten ⋀(ℝ²) to ⋀ and ⋀^k(ℝ²) to ⋀^k for k = 0, 1, and 2.
13.1.23. Proposition. The map from ⋀^1 to ⋀^0 ⊕ ⋀^2 defined by v ↦ ṽ := e1 v is a vector space isomorphism. The mapping is also an isometry in the sense that |ṽ| = ‖v‖. The mapping is its own inverse; that is, e1 ṽ = v.
13.1.24. Definition. For θ ∈ ℝ define e^{ιθ} by the usual power series
e^{ιθ} := Σ_{k=0}^{∞} (ιθ)^k / k! = cos θ + (sin θ) ι.
13.1.25. Exercise. Discuss the convergence problems (or lack thereof) that arise in the preceding
definition.
Next we develop a formula for the positive (counterclockwise) rotation of a nonzero vector v through an angle of θ radians. Let v′ be the vector in its rotated position. First move these 1-vectors to their corresponding points in the (Cl(2, 0) analog of the) complex plane. Regarding these two points, ζ = e1 v and ζ′ = e1 v′ ∈ ⋀^0 ⊕ ⋀², as complex numbers (via proposition 13.1.21), we see that ζ′ = ζ e^{ιθ}. Then move ζ and ζ′ back to ⋀^1 to obtain the formula in the following proposition.
13.1.26. Proposition. Let v be a nonzero 1-vector in Cl(2, 0). After being rotated by θ radians in the positive direction it becomes the vector v′. Then
v′ = e^{−ιθ} v = v e^{ιθ}.
In definition 13.1.2 and exercise 13.1.7 we established that a necessary condition for Cl(2, 0) under Clifford multiplication to be an algebra is that certain relations involving the basis elements 1, e1, e2, and e12 hold. This doesn't prove, however, that if these relations are satisfied, then Cl(2, 0) under Clifford multiplication is an algebra. This lacuna is not hard to fill.
13.1.27. Proposition. The Clifford algebra Cl(2, 0) is, in fact, a unital algebra isomorphic to the
matrix algebra M2 (R).
Hint for proof. Try mapping e1 to
[ 1  0 ]
[ 0 −1 ]
and e2 to
[ 0  1 ]
[ 1  0 ].
13.1.28. Example. The Clifford algebra Cl(2, 0) is not a Z+ -graded algebra.
13.1.29. Proposition. The Clifford algebra Cl(2, 0) does not have the cancellation property.
Hint for proof. Consider uv and uw where u = e2 − e12, v = e1 + e2, and w = 1 + e2.
13.2.3. Notation. The object defined in 13.2.1, the vector space ⋀(ℝ³) together with Clifford multiplication, turns out to be a unital algebra and is a second example of a Clifford algebra. We will see in the next chapter why this algebra is denoted by Cl(3, 0).
In the preceding section we commented on the fact that, geometrically speaking, we picture a vector in the plane as an equivalence class of directed intervals (that is, directed line segments), two intervals being equivalent if they lie on parallel lines, have the same length, and point in the same direction. In a similar fashion it is helpful to have a geometrical interpretation of a 2-blade. Let us say that a directed interval in 3-space is an oriented parallelogram generated by a pair of vectors. Two such intervals will be said to be equivalent if they lie in parallel planes, have the same area, and have the same orientation. We may choose as a standard representative of the 2-blade v ∧ w the parallelogram formed by putting the tail of v at the origin and the tail of w at the head of v. (Complete the parallelogram by adding, successively, the vectors −v and −w.) The standard representative of the 2-blade w ∧ v = −v ∧ w is the same parallelogram with the opposite orientation: put the tail of w at the origin and the tail of v at the head of w.
13.2.4. Proposition. The Clifford algebra Cl(3, 0) is indeed a unital algebra.
Hint for proof. Consider the subalgebra of M4(ℝ) generated by matrices such as
γ1 = [ 0 0 0 1 ; 0 0 1 0 ; 0 1 0 0 ; 1 0 0 0 ],
γ2 = [ 0 0 1 0 ; 0 0 0 −1 ; 1 0 0 0 ; 0 −1 0 0 ], and
γ3 = [ 1 0 0 0 ; 0 1 0 0 ; 0 0 −1 0 ; 0 0 0 −1 ]
(each γ_k² = I and the γ_k pairwise anticommute).
13.2.5. Exercise. The proof of the preceding proposition provides a representation of the algebra Cl(3, 0) as an 8-dimensional algebra of matrices of real numbers. Another, perhaps more interesting, representation of the same algebra is by means of the Pauli spin matrices:
σ1 = [ 0 1 ; 1 0 ],  σ2 = [ 0 −i ; i 0 ],  and  σ3 = [ 1 0 ; 0 −1 ].
Let C(2) be the real vector space of 2 × 2 matrices with complex entries.
(a) Show that although the complex vector space M2(ℂ) is 4-dimensional, C(2) is 8-dimensional.
(b) Show that {I, σ1, σ2, σ3, σ2σ3, σ3σ1, σ1σ2, σ1σ2σ3} is a linearly independent subset of C(2). (Here, I is the 2 × 2 identity matrix.)
(c) Find an explicit isomorphism from Cl(3, 0) to C(2).
As was the case in the plane we wish to assign magnitudes to 2-blades. We adopt the same
convention: the magnitude of v w is the area of the parallelogram generated by v and w.
13.2.6. Definition. Let v and w be 1-vectors in Cl(3, 0). Define ‖v ∧ w‖ := ‖v‖‖w‖ sin θ where θ is the angle between v and w (0 ≤ θ ≤ π).
13.2.7. Proposition. Let u be a unit vector in ℝ³ and P be a plane through the origin perpendicular to u. Then the reflection v′ of a vector v ∈ ℝ³ with respect to P is given by
v′ = −u v u.
Hint for proof. If v∥ and v⊥ are, respectively, the parallel and perpendicular components of v with respect to u, then v′ = v⊥ − v∥.
13.2.8. Exercise. Using the usual vector notation, how would you express v′ in the preceding
proposition?
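Both the proposition and the exercise can be tested numerically in the Pauli representation of 13.2.5. In the sketch below (Python with NumPy) a vector v in R3 is encoded as v1σ1 + v2σ2 + v3σ3; the particular u and v chosen, and the comparison against the classical formula v − 2(v · u)u (one way of answering 13.2.8), are illustrative assumptions rather than part of the proposition.

    # Reflection across the plane perpendicular to u: v' = -u v u.
    import numpy as np

    s = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

    def encode(v):
        """Encode v = (v1, v2, v3) as v1*sigma1 + v2*sigma2 + v3*sigma3."""
        return sum(c * m for c, m in zip(v, s))

    u = np.array([0.0, 0.0, 1.0])   # unit normal; P is the xy-plane
    v = np.array([1.0, 2.0, 3.0])

    reflected = -encode(u) @ encode(v) @ encode(u)
    expected = v - 2 * (v @ u) * u  # classical reflection formula
    assert np.allclose(reflected, encode(expected))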
13.2.9. Exercise. Find the area of the triangle in R3 whose vertices are the origin, (5, 4, 2),
and (1, 4, 6).
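Since the magnitude of the 2-blade a ∧ b equals the length of the classical cross product a × b, the area of the triangle is half the area of the parallelogram generated by the two edge vectors. A short numerical sketch (Python with NumPy), using the vertices as printed:

    # Area of the triangle with vertices 0, a, b.
    import numpy as np

    a = np.array([5.0, 4.0, 2.0])
    b = np.array([1.0, 4.0, 6.0])
    area = 0.5 * np.linalg.norm(np.cross(a, b))
    print(area)   # prints 18.0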
13.2.10. Example. To rotate a vector in 3-space by 2θ radians about an axis determined by a
nonzero vector a, choose unit vectors u1 and u2 perpendicular to a so that the angle between u1
and u2 is θ radians. Let P1 and P2 be the planes through the origin perpendicular to u1 and u2,
respectively. Then the map v ↦ v′ = R⁻¹vR, where R = u1u2, is the desired rotation (it is the
composite of the reflections with respect to P1 and P2 given by 13.2.7).
13.2.11. Example. It is clear that a 90° rotation in the positive (counterclockwise) direction about
the z-axis in R3 will take the point (0, 2, 3) to the point (−2, 0, 3). Show how the formula derived
in the previous example gives this result when we choose u1 = e1, u2 = (1/√2) e1 + (1/√2) e2, a = (0, 0, 1),
and v = 2e2 + 3e3.
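This verification can also be carried out by machine in the Pauli representation. The sketch below (Python with NumPy) encodes vectors as in the check after 13.2.8 and confirms that v ↦ R⁻¹vR sends (0, 2, 3) to (−2, 0, 3).

    # Rotation by 90 degrees about the z-axis via R = u1 u2.
    import numpy as np

    s = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

    def encode(v):
        return sum(c * m for c, m in zip(v, s))

    u1 = np.array([1.0, 0.0, 0.0])
    u2 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)   # 45 degrees from u1
    R = encode(u1) @ encode(u2)

    V = encode(np.array([0.0, 2.0, 3.0]))
    rotated = np.linalg.inv(R) @ V @ R
    assert np.allclose(rotated, encode(np.array([-2.0, 0.0, 3.0])))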
CHAPTER 14
CLIFFORD ALGEBRAS
In this last chapter we will barely scratch the surface of the fascinating subject of Clifford
algebras. For a more substantial introduction there are now, fortunately, many sources available.
Among my favorites are [1], [7], [10], [14], [15], [24], [25], and [31].
14.4.4. Theorem. Let V be a finite dimensional real vector space, let B be a symmetric bilinear
form on V , and let Q be the quadratic form associated with B. Then there exist p, q ∈ Z+ such
that if E = (e1 , . . . , en ) is a B-orthonormal basis for V and $v = \sum_{k=1}^{n} v_k e_k$, then
$$Q(v) = \sum_{k=1}^{p} v_k^2 - \sum_{k=p+1}^{p+q} v_k^2.$$
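By Sylvester's law of inertia the pair (p, q) does not depend on the B-orthonormal basis chosen, and it can be read off from the eigenvalues of any symmetric matrix representing B. The sketch below (Python with NumPy; the particular matrix is an arbitrary illustrative choice) shows the computation.

    # Signature (p, q) of a symmetric bilinear form from eigenvalues.
    import numpy as np

    B = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0],
                  [0.0, 0.0, -1.0]])
    eigenvalues = np.linalg.eigvalsh(B)
    p = int(np.sum(eigenvalues > 0))
    q = int(np.sum(eigenvalues < 0))
    print(p, q)   # prints 2 1 for this B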
Bibliography
1. Rafal Ablamowicz and Garret Sobczyk (eds.), Lectures on Clifford (Geometric) Algebras and Applications, Birkhäuser, Boston, 2004. 125
2. Ralph Abraham, Jerrold E. Marsden, and Tudor Ratiu, Manifolds, Tensor Analysis, and Applications, Addison-Wesley, Reading, MA, 1983. 114, 115, 116
3. Richard L. Bishop and Richard J. Crittenden, Geometry of Manifolds, Academic Press, New York, 1964. 83, 99
4. William C. Brown, A Second Course in Linear Algebra, John Wiley, New York, 1988. 127
5. Stewart S. Cairns, A simple triangulation method for smooth manifolds, Bull. Amer. Math. Soc. 67 (1961), 389–390. 110
6. P. M. Cohn, Basic Algebra: Groups, Rings and Fields, Springer, London, 2003. 9
7. John S. Denker, An Introduction to Clifford Algebra, 2006, http://www.av8n.com/physics/clifford-intro.htm. 125
8. J. Dieudonné, Treatise on Analysis, Volumes I and II, Academic Press, New York, 1969, 1970. 67
9. Manfredo P. do Carmo, Differential Forms and Applications, Springer-Verlag, Berlin, 1994. 116
10. Chris Doran and Anthony Lasenby, Geometric Algebra for Physicists, Cambridge University Press, Cambridge, 2003. 125
11. John M. Erdman, A ProblemText in Advanced Calculus, http://web.pdx.edu/~erdman/PTAC/PTAClicensepage.html. 67, 86, 90
12. John M. Erdman, A Companion to Real Analysis, 2007, http://web.pdx.edu/~erdman/CRA/CRAlicensepage.html. 67
13. Douglas R. Farenick, Algebras of Linear Transformations, Springer-Verlag, New York, 2001. 48, 50
14. D. J. H. Garling, Clifford Algebras: An Introduction, Cambridge University Press, Cambridge, 2011. 125
15. David Hestenes and Garret Sobczyk, Clifford Algebra to Geometric Calculus, D. Reidel, Dordrecht, 1984. 125
16. Kenneth Hoffman and Ray Kunze, Linear Algebra, second ed., Prentice Hall, Englewood Cliffs, NJ, 1971. 52, 65
17. S. T. Hu, Differentiable Manifolds, Holt, Rinehart, and Winston, New York, 1969. 110
18. Thomas W. Hungerford, Algebra, Springer-Verlag, New York, 1974. 9
19. Saunders Mac Lane and Garrett Birkhoff, Algebra, Macmillan, New York, 1967. 9
20. Serge Lang, Fundamentals of Differential Geometry, Springer-Verlag, New York, 1999. 115
21. John M. Lee, Introduction to Smooth Manifolds, Springer, New York, 2003. 83, 99, 110
22. Lynn H. Loomis and Shlomo Sternberg, Advanced Calculus, Jones and Bartlett, Boston, 1990. 67
23. Pertti Lounesto, Counterexamples in Clifford Algebras, 1997/2002, http://users.tkk.fi/~ppuska/mirror/Lounesto/counterexamples.htm. 128
24. Pertti Lounesto, Clifford Algebras and Spinors, second ed., Cambridge University Press, Cambridge, 2001. 125
25. Douglas Lundholm and Lars Svensson, Clifford algebra, geometric algebra, and applications, 2009, http://arxiv.org/abs/0907.5356v1. 125
26. Ib Madsen and Jørgen Tornehave, From Calculus to Cohomology: de Rham Cohomology and Characteristic Classes, Cambridge University Press, Cambridge, 1997. 115
27. Theodore W. Palmer, Banach Algebras and the General Theory of ∗-Algebras I–II, Cambridge University Press, Cambridge, 1994/2001. 9
28. Charles C. Pinter, A Book of Abstract Algebra, second ed., McGraw-Hill, New York, 1990. 73
29. Steven Roman, Advanced Linear Algebra, second ed., Springer-Verlag, New York, 2005. 19, 65, 76
30. I. M. Singer and J. A. Thorpe, Lecture Notes on Elementary Topology and Geometry, Springer-Verlag, New York, 1967. 110
31. John Snygg, A New Approach to Differential Geometry using Clifford's Geometric Algebra, Springer, New York, 2012. 125
32. Gerard Walschap, Metric Structures in Differential Geometry, Springer-Verlag, New York, 2004. 99
33. Frank W. Warner, Foundations of Differentiable Manifolds and Lie Groups, Springer-Verlag, New York, 1983. 110, 115
Index
map, 30
Mor(S, T ) (morphisms from S to T ), 29
morphisms, 29
  composition of, 29
mT (minimal polynomial for T ), 45
multilinear
  form, 73
  functional, 73
  map, 73
multiplication
  Clifford, 119
  pointwise, 40
multiplicative
  identity, 1
  inverse, 1
N (set of natural numbers), 2
Nn (first n natural numbers), 2
N(A) (normal elements of A), 61
nilpotent, 51
n-linear function, 73
n-manifold, 84
nondegenerate bilinear form, 126
norm, 56
  of a bivector, 120
  uniform, 57
normal, 61
normalizing a vector, 58
normed
  linear space, 56
  vector space, see normed linear space
nullity, 22
nullspace, 22
numbers
  special sets of, 2
object
  free, 32
  map, 30
objects, 29
odd
  permutation, 73
odd function, 14
O(V, W ) (big-oh functions), 67
o(V, W ) (little-oh functions), 67, 86
ω(v) (a one-form acting on a vector field), 98
one-form
  differential, 98
one-to-one, 6
one-to-one correspondence, 6
onto, 6
open
  face of a simplex, 108
  simplex, 108
  unit disc, 2
operation
  binary, 1
operator, 21
  diagonalizable, 51
  exterior differentiation, 99
  nilpotent, 51
  projection, 39
  similarity, 50
opposite orientation, 82
order
  complete, 31
  preserving, 29
ordered
  basis, 82
  by inclusion, 17
  partially, 17
  pointwise, 17
ordering
  linear, 18
  partial
    of projections, 64
  total, 18
orientation
  of a simplex, 108
  of a vector space, 82
oriented
  simplex, 108
  vector space, 82
orthogonal, 57
  complement, 58
  direct sum
    projections associated with, 64
  element of a real ∗-algebra, 61
  inner product projections, 63
  projection, 63
    associated with an orthogonal direct sum decomposition, 64
  projections
    in a ∗-algebra, 63
  resolution of the identity, 64
  with respect to a bilinear form, 126
orthonormal, 58
  basis, 58
orthonormal, B-, 126
overlap maps, 83
P(A) (projections on a ∗-algebra), 63
P(S) (power set of S), 31
P(J) (polynomial functions on J), 16
Pn (J) (polynomial functions of degree strictly less than n), 16
P2 (projective plane), 85
parallelogram law, 57
parametrization
  of a curve, 83
  of a solid, 83
  of a surface, 83
partial
  ordering, 17
    of projections, 64
Pauli spin matrices, 123
p-boundaries, 109
p-chains, 109
p-coboundary
  simplicial, 110
p-cochain, 110
p-cocycle
restriction, 7
retraction, 23
reverse orientation, 82
ρM (cotangent bundle projection), 98
Riesz-Fréchet theorem
  for inner product spaces, 59
  for vector spaces, 26
right
  -handed basis, 82
  cancellable, 30
  ideal, 41
  inverse
    of a morphism, 30
  invertible linear map, 23
R
  as a vector space, 11
  as an Abelian group, 5
ring, 1
  commutative, 1
  division, 1
  homomorphism, 10
    unital, 10
  with identity, 1
Rn
  as a vector space, 11
  as an Abelian group, 5
root, 47
row vector, 18
r-skeleton, 109
S1 (unit circle), 2
S1 (unit circle), 84
Sn (n-sphere), 85
scalar, 10, 80, 119, 122
  field, 70
  multiplication
    pointwise, 11
Schwarz inequality, 56
section, 23, 97
segment, 5
segment, closed, 107
self-adjoint, 61
semigroup, 1
separates points, 26
sequence
  bounded, 13
  convergent, 13
  decreasing, 13
  exact, 35
series
  formal power, 43
sesquilinear, 55
SET
  the category, 29
shift
  unilateral, 22
short exact sequence, 36, 107
shuffle, 81
σ (interchange operation), 8
σ(a), σA (a) (the spectrum of a), 42
σp (A), σp (T ) (point spectrum), 49
sign
  of a permutation, 73
similar, 50
simple
  algebra, 41
simplex
  closed, 108
  dimension of a, 108
  face of a, 108
  open, 108
  oriented, 108
  plane of a, 108
  vertex of a, 108
simplicial
  coboundary, 110
  cocycle, 110
  cohomology group, 110
  complex, 108
    dimension of a, 109
simplicial homology group, 109
skeleton of a complex, 109
skew-Hermitian, 61
skew-symmetric, 61, 74
smallest, 17
  subspace containing a set of vectors, 13
smooth, 84
  k-form, 99
  atlas, 84
  curve, 83
  differential form, 99
  differential one-form, 98
  function, 83
  locally, 99
  manifold, 84
  map between manifolds, 85
  solid, 83
  submanifold, 110
  surface, 83
  triangulation, 110
  vector field, 97
smoothly compatible
  atlases, 84
  charts, 84
solid, 83
  parametrization of a, 83
  smooth, 83
space
  algebraic tangent, 89
  geometric tangent, 87
  inner product, 55
  normed linear, 56
  vector, 10
span, 15
  of the empty set, 15
span(A) (the span of A), 15
spectral theorem
  for complex inner product spaces, 64
  for vector spaces, 51
spectrum, 42
  point, 49
unitarily
  diagonalizable, 64
  equivalent, 64
unitary, 61
unitization, 42
universal, 125
  mapping
    diagram, 32
    property, 32
upper
  bound, 17
  half-space, 114
V∗ (dual space of V ), 25
v∗ (= e1 v), 121
v∥ (parallel component of v), 120
v⊥ (perpendicular component of v), 120
variation
  Gâteaux, 70
vector, 10, 119, 122
  algebraic tangent, 89
  column, 18
  decomposition theorem, 14
  field, 97
  geometric tangent, 87
  norm of a, 56
  row, 18
  space, 10
    adjoint map, 31
    complex, 10
    dimension of a, 19
    free, 32
    normed, 56
    orientation of a, 82
    oriented, 82
    real, 10
    spectral theorem for, 51
    trivial, 10
  unit, 56
VEC
  the category, 29
vertex
  of a simplex, 108
vf (action of a vector field on a function), 97
vol (volume element or form), 82
volume
  element, 82
wedge product, 79
Z (set of integers), 2
zero
  divisor, 9
  form, 98
Z+ -graded algebra, 79
Z^k(M ) (closed k-forms), 105
Zorn's lemma, 18
Z^p(K) (space of simplicial p-cocycles), 110
Z_p(K) (space of simplicial p-cycles), 109