TEXTS AND READINGS IN MATHEMATICS 52

Functional Analysis

S. Kesavan

HINDUSTAN BOOK AGENCY
Texts and Readings in Mathematics
Advisory Editor
C. S. Seshadri, Chennai Mathematical Institute, Chennai.
Managing Editor
Rajendra Bhatia, Indian Statistical Institute, New Delhi.
Editors
R. B. Bapat, Indian Statistical Institute, New Delhi.
V. S. Borkar, Tata Inst. of Fundamental Research, Mumbai.
Probal Chaudhuri, Indian Statistical Institute, Kolkata.
V. S. Sunder, Inst. of Mathematical Sciences, Chennai.
M. Vanninathan, TIFR Centre, Bangalore.
Functional Analysis
S. Kesavan
Institute of Mathematical Sciences
Chennai
HINDUSTAN BOOK AGENCY
Published in India by Hindustan Book Agency (India)
email: info@hindbook.com
http://www.hindbook.com
All export rights for this edition vest exclusively with Hindustan Book
Agency (India). Unauthorized export is a violation of Copyright Law
and is subject to legal action.
Preface

The present book grew out of notes prepared by myself while lecturing to graduate students at the Tata Institute of Fundamental Research (Bangalore Centre) and the Institute of Mathematical Sciences, Chennai. The material presented in this book is standard and is ideally suited for a course which can be followed by master's students who have covered the necessary prerequisites mentioned earlier. While covering all the standard material, I have also tried to illustrate the use of various theorems via examples taken from differential equations and the calculus of variations, either through brief sections or through the exercises. In fact, this book is well suited for students who would like to pursue a research career in the applications of mathematics. In particular, familiarity with the material presented in this book will facilitate studying my earlier book, published nearly two decades ago, 'Topics in Functional Analysis and Applications' (Wiley Eastern, now called New Age International), which serves as a functional analytic introduction to the theory of partial differential equations.
The first chapter of the present book gives a rapid revision of linear
algebra, topology and measure theory. Important definitions, examples
and results are recalled and no proofs are given. At the end of each
section, the reader is referred to a standard text on that topic. This
chapter has been included only for reference purposes and it is not intended that it be covered in a course based on this book.
Chapter 6 introduces the Lebesgue spaces and also presents the theory of one of the simplest classes of Sobolev spaces.
Occasionally, some hints for their solution are provided. It is hoped that the students will benefit by solving them.
Chennai, May 2008.
S. Kesavan
Notations
Certain general conventions followed throughout the text regarding
notations are described below. All other specific notations are explained
as and when they appear in the text.
• Sets (including vector spaces and their subspaces) and also linear
transformations between vector spaces are denoted by upper case
Latin letters.
• To distinguish between the scalar zero and the null (or zero) vector, the latter is denoted by the zero in boldface, i.e. $\mathbf{0}$. This is also used to denote the zero linear transformation and the zero linear functional.
Contents

1 Preliminaries 1
  1.1 Linear Spaces 1
  1.2 Topological Spaces 11
  1.3 Measure and Integration 18

3 Hahn-Banach Theorems 69
  3.1 Analytic Versions 69
  3.2 Geometric Versions 77
  3.3 Vector Valued Integration 81
  3.4 An Application to Optimization Theory 85
  3.5 Exercises 92

6 $L^p$ Spaces 162
  6.1 Basic properties 162
  6.2 Duals of $L^p$ Spaces 168
  6.3 The Spaces $L^p(\Omega)$ 170
  6.4 The Spaces $W^{1,p}(a, b)$ 178
  6.5 Exercises 188

Bibliography 265
Index 266
Chapter 1
Preliminaries
1.1 Linear Spaces

Addition of vectors in a vector space $V$ satisfies the following properties:

(i) (commutativity) for all $x$ and $y \in V$, we have
$$x + y = y + x;$$
(ii) (associativity) for all $x$, $y$ and $z \in V$, we have
$$x + (y + z) = (x + y) + z;$$
(iii) there exists a unique vector $\mathbf{0} \in V$, called the zero or the null vector, such that, for every $x \in V$,
$$x + \mathbf{0} = x;$$
(iv) for every $x \in V$, there exists a unique vector $-x \in V$ such that
$$x + (-x) = \mathbf{0}.$$
Definition 1.1.3 Let $V$ be a vector space and let $x_1, \ldots, x_n$ be vectors in $V$. A linear combination of these vectors is any vector of the form $\alpha_1 x_1 + \cdots + \alpha_n x_n$, where the $\alpha_i$, $1 \le i \le n$, are scalars. A linear relation between these vectors is an equation of the form
$$\alpha_1 x_1 + \cdots + \alpha_n x_n = \mathbf{0}. \; •$$
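For instance, in $\mathbb{R}^2$ the vectors $x_1 = (1, 0)$, $x_2 = (0, 1)$ and $x_3 = (1, 1)$ satisfy the non-trivial linear relation
$$x_1 + x_2 - x_3 = \mathbf{0},$$
while the only linear relation between $x_1$ and $x_2$ alone is the trivial one, with all coefficients equal to zero.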
Example 1.1.2 The space $\mathbb{R}^N$ (respectively, $\mathbb{C}^N$) has a basis which is defined as follows. Let $1 \le i \le N$. Let $e_i$ be the vector whose $i$-th component is unity and all other components are zero. Then $\{e_1, \ldots, e_N\}$ is a basis for $\mathbb{R}^N$ (respectively, $\mathbb{C}^N$) and is called the standard basis. •
where the $a_i$ and $b_i$ are real numbers and $x$ is the variable. Let $\alpha \in \mathbb{R}$. Assume, without loss of generality, that $m \le n$. Define
$$(p + q)(x) \;=\; \sum_{i=0}^{n} (a_i + b_i) x^i$$
(with the convention that $a_i = 0$ for $i > m$) and
$$(\alpha p)(x) \;=\; \sum_{i=0}^{m} \alpha a_i x^i.$$
Let $W_i$, $1 \le i \le n$, be subspaces of a vector space $V$. Their span consists of all vectors of the form
$$v \;=\; w_1 + \cdots + w_n,$$
where $w_i \in W_i$ for each $1 \le i \le n$. The spaces are said to be independent if an element in the span is zero if, and only if, each $w_i = \mathbf{0}$. In particular, it follows that, if the $W_i$ are independent, then for all $1 \le i, j \le n$ such that $i \ne j$, we have $W_i \cap W_j = \{\mathbf{0}\}$. Further, every element in the span will have a unique decomposition into vectors from the spaces $W_i$.
Definition 1.1.7 Let $V$ be a vector space and let $W_i$, $1 \le i \le n$, be subspaces. Then, $V$ is said to be the direct sum of the $W_i$ if the spaces $W_i$ are independent and their span is the space $V$. In this case we write
$$V \;=\; W_1 \oplus W_2 \oplus \cdots \oplus W_n. \; •$$
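As a simple illustration, let $V = \mathbb{R}^3$, $W_1 = \{(x, y, 0) \mid x, y \in \mathbb{R}\}$ and $W_2 = \{(0, 0, z) \mid z \in \mathbb{R}\}$. These subspaces are independent and every vector decomposes uniquely as
$$(x, y, z) \;=\; (x, y, 0) + (0, 0, z),$$
so that $\mathbb{R}^3 = W_1 \oplus W_2$.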
Definition 1.1.8 Let $V$ and $W$ be vector spaces (over the same base field). A linear transformation, or linear operator, is a mapping $T : V \to W$ such that for all $x$ and $y \in V$ and for all scalars $\alpha$ and $\beta$, we have
$$T(\alpha x + \beta y) \;=\; \alpha T(x) + \beta T(y).$$
If W is the base field (which is a vector space over itself), then a linear
transformation from V into W is called a linear functional on V. •
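For example, on $V = \mathbb{R}^N$ the coordinate map $x = (x_1, \ldots, x_N) \mapsto x_1$ is a linear functional, as is the map
$$x \;\mapsto\; \sum_{i=1}^{N} a_i x_i$$
for any fixed scalars $a_1, \ldots, a_N$; indeed, every linear functional on $\mathbb{R}^N$ is of this form.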
Let $\dim(V) = n$ and $\dim(W) = m$, and let $\{v_1, \ldots, v_n\}$ and $\{w_1, \ldots, w_m\}$ be bases for $V$ and $W$ respectively. A linear transformation $T : V \to W$ is then completely determined by the relations
$$T(v_j) \;=\; \sum_{i=1}^{m} t_{ij} w_i, \quad 1 \le j \le n. \qquad (1.1.1)$$
The coefficients $(t_{ij})$ in the above relation form a matrix with $m$ rows and $n$ columns. Such a matrix is referred to as an $m \times n$ matrix. The $j$-th column of the matrix represents the coefficients in the expansion of $T(v_j)$ in terms of the basis $\{w_i\}$ of $W$. Of course, if we change the bases for $V$ and $W$, the same linear transformation will be given by another matrix. In particular, let $\dim(V) = n$ and let $T : V \to V$ be a linear operator. Let $T$ be represented by the $n \times n$ matrix (also known as a square matrix of order $n$) $T = (t_{ij})$ with respect to a given basis. If we change the basis, then $T$ will be represented by another $n \times n$ matrix $\widetilde{T} = (\tilde{t}_{ij})$ and the two will be connected by a relation of the form
$$\widetilde{T} \;=\; P^{-1} T P,$$
where $P$ is called the change of basis matrix and represents the linear transformation which maps one basis to another. The matrix $P^{-1}$ represents the inverse of this change of basis mapping and is the inverse matrix of $P$. In this case, the matrices $T$ and $\widetilde{T}$ are said to be similar. The identity matrix $I$ represents the identity mapping $x \mapsto x$ for all $x \in V$ for any fixed basis of $V$. For a given basis, if $T : V \to V$ is invertible, then the matrix representing $T^{-1}$ will be the inverse of the matrix representing $T$.
A square matrix is said to be diagonal if all its off-diagonal entries are zero. An $n \times n$ square matrix $A = (a_{ij})$ is said to be upper triangular (respectively, lower triangular) if $a_{ij} = 0$ for all $1 \le j < i \le n$ (respectively, $a_{ij} = 0$ for all $1 \le i < j \le n$). It can be shown that every matrix with complex entries is similar to an upper triangular matrix. A matrix is said to be diagonalizable if it is similar to a diagonal matrix.
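For instance, the matrix
$$T \;=\; \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$
is upper triangular but not diagonalizable: both its eigenvalues equal $1$, so if it were similar to a diagonal matrix, that matrix would be the identity and then $T = PIP^{-1} = I$, which is not the case.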
Given an $m \times n$ matrix and two vector spaces of dimensions $n$ and $m$ respectively, along with a basis for each of them, the matrix can be used, as in relation (1.1.1), to define a linear transformation between these two spaces. Thus, there is a one-to-one correspondence between matrices and linear transformations between vector spaces of appropriate dimension, once the bases are fixed.
Definition 1.1.11 If $T = (t_{ij})$ is an $m \times n$ matrix, then the $n \times m$ matrix $T' = (t_{ji})$, formed by interchanging the rows and the columns of the matrix $T$, is called the transpose of the matrix $T$. If $T = (t_{ij})$ is an $m \times n$ matrix with complex entries, then the $n \times m$ matrix $T^* = (t^*_{ij})$, where $t^*_{ij} = \overline{t_{ji}}$ (the bar denoting complex conjugation), is called the adjoint of the matrix $T$. •
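To illustrate, for the $2 \times 3$ matrix below, the transpose and the adjoint are obtained by interchanging rows and columns and, in the latter case, conjugating each entry:
$$T = \begin{pmatrix} 1 & i & 0 \\ 2 & 3 & 1-i \end{pmatrix}, \qquad T' = \begin{pmatrix} 1 & 2 \\ i & 3 \\ 0 & 1-i \end{pmatrix}, \qquad T^* = \begin{pmatrix} 1 & 2 \\ -i & 3 \\ 0 & 1+i \end{pmatrix}.$$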
If $x$ and $y \in \mathbb{R}^n$ (respectively, $\mathbb{C}^n$), then $y'x$ (respectively, $y^*x$) represents the 'usual' scalar product of vectors in $\mathbb{R}^n$ (respectively, $\mathbb{C}^n$) given by $\sum_{i=1}^{n} x_i y_i$ (respectively, $\sum_{i=1}^{n} x_i \overline{y_i}$). If the scalar product is zero, we say that the vectors are orthogonal to each other and write $x \perp y$. If $W$ is a subspace and $x$ is a vector orthogonal to all vectors in $W$, we write $x \perp W$.
Definition 1.1.12 Let T be an m x n matrix. Then its row rank is
defined as the number of linearly independent row vectors of the matrix
and the column rank is the number of independent column vectors of
the matrix. •
The column rank is none other than the rank of the linear transformation defined by $T$ and the row rank is the rank of the transformation defined by its transpose $T'$.
Proposition 1.1.2 For any matrix, the row and column ranks are equal
and the common value is called the rank of the matrix. •
Corollary 1.1.1 An $n \times n$ matrix is invertible if, and only if, its nullity (the dimension of its null space) is zero or, equivalently, its rank is $n$. Equivalently, a linear operator on a finite dimensional space is one-to-one if, and only if, it is onto. •
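As an illustration, the $3 \times 3$ matrix
$$T \;=\; \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 0 & 0 & 1 \end{pmatrix}$$
has row rank $2$ (the second row is twice the first, while the first and third rows are linearly independent) and column rank $2$ (the second column is twice the first), in accordance with Proposition 1.1.2; its nullity is $1$ and hence it is not invertible.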
Definition 1.1.14 Let $T$ be an $n \times n$ matrix with complex entries and $T^*$ its adjoint. The matrix is said to be normal if
$$TT^* = T^*T.$$
It is said to be hermitian if $T^* = T$, and unitary if
$$TT^* = T^*T = I.$$
It is said to be positive semi-definite if, for every vector $x \in \mathbb{C}^n$,
$$x^*Tx \;\ge\; 0,$$
and positive definite if, in addition, $x^*Tx > 0$ whenever $x \ne \mathbf{0}$. •
Remark 1.1.2 A hermitian matrix is equal to its adjoint and the inverse
of a unitary matrix is its adjoint. A matrix T, with real entries, which
is equal to its transpose is called symmetric and one whose inverse is
its own transpose is called orthogonal. In case the matrix T has real
entries, we can still define positive semi-definiteness (or positive definite-
ness) by considering real column vectors x in the above definition. •
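For instance, the matrix
$$A \;=\; \begin{pmatrix} 2 & 1+i \\ 1-i & 3 \end{pmatrix}$$
is hermitian (and hence normal), and it is also positive definite, since its eigenvalues ($1$ and $4$) are positive. The rotation matrix
$$Q \;=\; \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
is orthogonal, since $Q'Q = QQ' = I$.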
(ii) If $S$ and $T$ are $n \times n$ matrices, then
$$\det(ST) \;=\; \det(S)\det(T).$$
In particular, $T$ is invertible if, and only if, $\det(T) \ne 0$ and
$$\det(T^{-1}) \;=\; (\det(T))^{-1}.$$
(iii) If $T$ is an $n \times n$ matrix, then
$$\det(T') \;=\; \det(T). \; •$$
If $T$ is an invertible $n \times n$ matrix, then, for every vector $b$, there exists a unique solution $x$ to the system
$$Tx = b.$$
This is because the corresponding linear transformation is invertible and
hence onto (which gives the existence of the solution) and one-to-one
(which gives the uniqueness of the solution). If T is singular, this is no
longer the case and we have the following result.
If $T$ is singular, then, for a given vector $b$, the system
$$Tx = b$$
has no solution or has an infinite number of solutions. The latter possibility occurs if, and only if,
$$b'u = 0$$
for every vector $u$ such that $T'u = \mathbf{0}$.
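To illustrate, consider the singular matrix
$$T \;=\; \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix}.$$
The equation $T'u = \mathbf{0}$ has the solutions $u = t(2, -1)$, $t \in \mathbb{R}$. For $b = (1, 2)$ we have $b'u = 0$, and indeed $Tx = b$ has infinitely many solutions, namely all $x = (x_1, x_2)$ with $x_1 + x_2 = 1$; for $b = (1, 1)$ we have $b'u = t \ne 0$ when $t \ne 0$, and the system has no solution.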
A scalar $\lambda$ is said to be an eigenvalue of $T$ if there exists a non-zero vector $u$ (called an eigenvector) such that
$$Tu \;=\; \lambda u.$$
If $T$ is an $n \times n$ hermitian matrix, then all its eigenvalues are real and can be arranged as $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$; further, there exists an orthonormal basis $\{v_1, \ldots, v_n\}$ of corresponding eigenvectors, i.e. $Tv_i = \lambda_i v_i$ and
$$v_i^* v_j \;=\; \begin{cases} 1, & \text{if } i = j,\\ 0, & \text{if } i \ne j. \end{cases}$$
For $x \ne \mathbf{0}$, define the Rayleigh quotient $R_T(x) = \dfrac{x^*Tx}{x^*x}$ and set $V_i = \mathrm{span}\{v_1, \ldots, v_i\}$. Then
$$\lambda_i \;=\; R_T(v_i) \;=\; \max_{x \in V_i} R_T(x) \;=\; \min_{x \perp V_{i-1}} R_T(x) \;=\; \min_{W \subset \mathbb{C}^n,\ \dim(W) = i}\; \max_{x \in W} R_T(x).$$
In particular,
$$\lambda_1 \;=\; \min_{x \ne \mathbf{0}} R_T(x) \quad \text{and} \quad \lambda_n \;=\; \max_{x \ne \mathbf{0}} R_T(x).$$
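As a quick illustration, take the hermitian (in fact, real symmetric) matrix
$$T \;=\; \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},$$
whose eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 3$, with orthonormal eigenvectors $v_1 = \tfrac{1}{\sqrt{2}}(1, -1)$ and $v_2 = \tfrac{1}{\sqrt{2}}(1, 1)$. One checks that $1 \le R_T(x) \le 3$ for every $x \ne \mathbf{0}$, with the minimum attained at $v_1$ and the maximum at $v_2$, in accordance with the formulas above.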
For more details on linear spaces, the reader is referred to, for in-
stance, Artin [1].
1.2 Topological Spaces

Definition 1.2.1 A topological space $\{X, \mathcal{T}\}$ consists of a set $X$ together with a collection $\mathcal{T}$ of subsets of $X$, called open sets, such that $X$ and $\emptyset$ belong to $\mathcal{T}$, an arbitrary union of open sets is open, and a finite intersection of open sets is open. A subset of $X$ is said to be closed if its complement is open. •

It is clear from the above definition that $X$ and $\emptyset$ are both open and closed. Further, a finite union and an arbitrary intersection of closed sets are closed. It then follows that given any set $A \subset X$, there is a smallest closed set containing it. This is called the closure of the set $A$ and is usually denoted by $\overline{A}$. If $\overline{A} = X$, we say that $A$ is dense in $X$. Similarly, given any set $A \subset X$, there is a largest open set contained in $A$. This set is called the interior of $A$ and is denoted by $A^\circ$. A set $A \subset X$ is said to be nowhere dense if $(\overline{A})^\circ = \emptyset$.
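For example, in $\mathbb{R}$ with its usual topology, the set $\mathbb{Q}$ of rational numbers is dense, since $\overline{\mathbb{Q}} = \mathbb{R}$, whereas the set $\mathbb{Z}$ of integers is nowhere dense: $\mathbb{Z}$ is already closed and contains no open interval, so $(\overline{\mathbb{Z}})^\circ = \emptyset$.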
If $\{X, \mathcal{T}\}$ is a topological space and if $Y \subset X$, then $Y$ inherits a natural topology from that of $X$. The open sets are those of the form $U \cap Y$, where $U$ is open in $X$.
The set B(x; r) described above is called the (open) ball centred at x and
of radius r. It is a simple exercise to check that open balls themselves
are open sets. It is also immediate to see that this topology is Hausdorff.
On $\mathbb{R}$ or $\mathbb{C}$, we have the 'usual' metric defined by
$$d(x, y) \;=\; |x - y|.$$
The topology induced by this metric will be called the 'usual' topology on $\mathbb{R}$ or $\mathbb{C}$, as the case may be. Similarly, on $\mathbb{R}^N$ (or $\mathbb{C}^N$), we have the 'usual' Euclidean distance which defines a metric on that space: if $x = (x_1, \ldots, x_N)$ and $y = (y_1, \ldots, y_N)$ are vectors in $\mathbb{R}^N$ (respectively, $\mathbb{C}^N$), then
$$d(x, y) \;=\; \left( \sum_{i=1}^{N} |x_i - y_i|^2 \right)^{1/2}.$$
Definition 1.2.7 Let $\{X_i, \mathcal{T}_i\}$, $i = 1, 2$, be two topological spaces and let $f : X_1 \to X_2$ be a given function. We say that $f$ is continuous if $f^{-1}(U)$ is an open set in $X_1$ for every open set $U$ in $X_2$. If $f$ is a bijection such that both $f$ and $f^{-1}$ are continuous, then $f$ is said to be a homeomorphism and the two topological spaces are said to be homeomorphic to each other. •
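For instance, the open interval $(-1, 1)$ and $\mathbb{R}$ (both with their usual topologies) are homeomorphic: the map
$$f(x) \;=\; \frac{x}{1 - |x|}$$
is a continuous bijection of $(-1, 1)$ onto $\mathbb{R}$ with continuous inverse $f^{-1}(y) = y/(1 + |y|)$.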
Let $\{X, d\}$ be a metric space, let $E \subset X$ and let $x \in X$. Define
$$d(x, E) \;=\; \inf_{y \in E} d(x, y).$$
This is called the distance of the point $x$ from the set $E$. The following proposition is easy to prove.

Proposition 1.2.3 Let $\{X, d\}$ be a metric space and let $E \subset X$. Then
(i) for all $x$ and $y \in X$, we have
$$|d(x, E) - d(y, E)| \;\le\; d(x, y);$$
(ii) $\overline{E} = \{x \in X \mid d(x, E) = 0\}$. •
Definition 1.2.8 Let $\{X, d\}$ be a metric space. A sequence $\{x_n\}$ in $X$ is said to be Cauchy if, for every $\varepsilon > 0$, there exists a positive integer $N$ such that
$$d(x_k, x_l) < \varepsilon \quad \text{for every } k \ge N,\ l \ge N. \; •$$
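For example, in the space $X = (0, 1]$ with the usual metric, the sequence $x_n = 1/n$ is Cauchy, since $d(x_k, x_l) = |1/k - 1/l| < \varepsilon$ whenever $k, l \ge N > 1/\varepsilon$, but it has no limit in $X$. Thus a Cauchy sequence need not converge; a metric space in which every Cauchy sequence converges is said to be complete.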
From the above definition it follows that a subbase for the weak topology generated by the $f_i$ is the collection of all sets of the form $f_i^{-1}(U)$, where $U$ is an arbitrary open set in $X_i$ and the index $i$ ranges over the indexing set $I$. A typical neighbourhood of a point $x \in X$ will therefore be a finite intersection of sets of the form $f_i^{-1}(U_i)$, where $U_i$ is a neighbourhood of $f_i(x)$ in $X_i$.
Thus, sets of the form $\prod_{i \in I} U_i$, where $U_i = X_i$ for all $i \ne i_0$ (an arbitrary element of $I$) and $U_{i_0}$ is open in $X_{i_0}$, form a subbase for the product topology. A base for the topology is the collection of all sets of the form $\prod_{i \in I} U_i$ where $U_i = X_i$ for all but a finite number of indices and, for those indices, $U_i$ is an open set in $X_i$.
A metric space $\{X, d\}$ is said to be totally bounded if, for every $\varepsilon > 0$, there exist finitely many points $x_1, \ldots, x_n$ in $X$ such that
$$X \;\subset\; \bigcup_{i=1}^{n} B(x_i; \varepsilon). \; •$$
Proposition 1.2.6 Let {X, d} be a metric space. The following state-
ments are equivalent:
(i) X is compact.
(ii) X is sequentially compact.
(iii) Every infinite subset of X has a limit point.
(iv) X is complete and totally bounded. •
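For instance, the closed interval $[0, 1]$ with the usual metric satisfies all four conditions and is compact. The open interval $(0, 1)$ is totally bounded but not complete (the Cauchy sequence $\{1/n\}$ has no limit in $(0, 1)$) and hence is not compact; indeed, the open cover $\{(1/n, 1)\}_{n \ge 2}$ admits no finite subcover.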