Melnikov
Green’s Functions
and Infinite Products
Two traditional mathematical concepts, classical in their own fields, are brought for-
ward in this brief volume. Reviewing these concepts separately, with no connection
to each other, would definitely look natural, but bringing them together into a single
book format is quite a different story. The point is that the concepts are drawn from
subject areas of mathematics that have no evident points of contiguity. That is why
the reader might be intrigued with our intention in this book to explore their mutual
fusion. This endeavor provides a basis for a challenging and nontrivial investigation.
The first of the two concepts is the Green’s function. It represents an important
topic in standard courses of differential equations and is customarily covered in
most texts in the field. The second concept, of infinite product, belongs, in turn, to
classical mathematical analysis. As to Green’s functions for partial differential equa-
tions, it is not a common practice in existing textbooks for careful consideration to
be given to procedures used for their construction. On the other hand, the standard
texts on mathematical analysis do not usually confront the infinite product repre-
sentation of elementary functions. A simultaneous review of just these two subject
areas (the construction of Green’s functions and the infinite product representation
of elementary functions) constitutes the context of the present book.
Green’s functions for the two-dimensional Laplace equation are most widely rep-
resented in relevant texts. They are conventionally constructed using the method of
images, conformal mapping, or eigenfunction expansion. The present volume fo-
cuses on the construction of such Green’s functions for a wide range of boundary-
value problems. A comprehensive review of the traditional methods is provided,
with emphasis on the infinite-product-containing expressions of Green’s functions,
which are obtained by the method of images. This provides a background for the
central theme in this book, which is the development of an innovative approach to
the representation of elementary functions in terms of infinite products.
The intention in the present volume is not just to familiarize the reader in the
traditional manner with the state of things in the area, but rather to reach beyond tra-
ditions. That is, we plan not only to introduce the classical topics of the construction
of Green’s functions and the infinite product representation of elementary functions,
but also to present a challenging investigation into the intersection of these fields.
To be well prepared for the presentation in this book, the reader is required to have
a reasonably solid background in the standard undergraduate courses of calculus
and differential equations. In addition, the reader would benefit from at least a
basic familiarity with numerical analysis.
There is good reason to believe that this piece of work is original. To the author’s
best knowledge, there are no analogous books available on the market. That is why
we anticipate that the book will not be overlooked by the professional community.
It might, for example, be adopted as supplementary reading for an undergraduate
course or as a seminar topic within the scope of a pure or applied mathematics
curriculum. Infinite Product Representation of Elementary Functions, A Further
Linking of Differential Equations with Calculus, or Broadening the Use of Green’s
Functions might be the title for such a course or seminar topic.
The initial results on the Green’s-function-based approach to the infinite product
representation of elementary functions were reported not long ago, and the first
printed publications on progress in this field appeared just recently. It then took us over
three years to ultimately come up with this book, which was originally intended as
a text for an elective course within the computational sciences Ph.D. program just
launched at Middle Tennessee State University.
It is with pleasure and gratitude that the author acknowledges the editorial ser-
vices provided by the staff of Birkhäuser Boston, with special thanks to Tom Grasso,
senior mathematics editor, for his professional treatment of nontrivial situations.
Although the editing process was not fast, smooth, or painless, it has significantly
improved the quality of the presentation and definitely made this book a much better
read.
The opening phase of our work on this project was partially funded by a 2008
Summer Research Grant awarded by the Faculty Research and Creative Activity
Committee at Middle Tennessee State University. This created a propitious work
environment, promoted progress at later stages of the project, and made a decisive
contribution to its prompt completion.
Murfreesboro, USA Yuri A. Melnikov
Contents
1 Introduction
2 Infinite Products and Elementary Functions
2.1 Euler’s Classical Representations
2.2 Alternative Derivations
2.3 Other Elementary Functions
2.4 Chapter Exercises
3 Green’s Functions for the Laplace Equation
3.1 Construction by the Method of Images
3.2 Method of Conformal Mapping
3.3 Chapter Exercises
4 Green’s Functions for ODE
4.1 Construction by Defining Properties
4.2 Method of Variation of Parameters
4.3 Chapter Exercises
5 Eigenfunction Expansion
5.1 Background of the Approach
5.2 Cartesian Coordinates
5.3 Polar Coordinates
5.4 Chapter Exercises
6 Representation of Elementary Functions
6.1 Method of Images Extends Frontiers
6.2 Trigonometric Functions
6.3 Hyperbolic Functions
6.4 Chapter Exercises
7 Hints and Answers to Chapter Exercises
7.1 Chapter 2
7.2 Chapter 3
7.3 Chapter 4
Chapter 1
Introduction
Our objective in putting together this volume has been to develop a supplementary
text for an elective upper-division undergraduate or graduate course/seminar that
might be offered within the scope of a pure or applied mathematics curriculum.
A quite unexpected treatment is delivered herein on two subjects that one might
hardly have anticipated considering together in a single book. This makes the book
an original and unique read, and a good choice for those who are open to challenges
and welcome the unexpected.
The reader is invited on an interesting voyage, with the subject matter resting
upon two concepts taken from different subject areas of mathematics. These con-
cepts are (i) the infinite product, which represents a standard topic in courses of
mathematical analysis, and (ii) the Green’s function, representing a significant topic
in courses on differential equations. To be more specific, we will concentrate in this
book on the infinite product representation of elementary functions and the Green’s
function of boundary-value problems for the two-dimensional Laplace equation.
It would probably not be an exaggeration to assert that none of the existing rel-
evant textbooks in mathematics covers both the concepts that are to be explored
herein. Consequently, the two concepts have probably never been considered to-
gether in a single traditionally offered course. The reader might therefore be con-
cerned about the reason for presenting both concepts in our volume. Indeed, what
is the driving force for considering them together this time around? The resolution
of this concern will be found on examining the very recent developments reported
in [27] and [28]. It appears that a diligent analysis of the two concepts reveals an
unlooked-for outcome that happens to be extremely rewarding. A novel approach
to the approximation of functions was discovered, one that provides previously un-
reported infinite product representations for some elementary functions.
According to the title, the present volume is not designed to focus exclusively on
either differential equations or mathematical analysis. The subtitle brings a neces-
sary clarification. It suggests that both the subject areas, a fusion of which represents
an unbreakable background for our presentation, are going to be covered to a certain
extent, with emphasis on the establishment of their productive linking.
The motivation for writing this book is due in large measure to the many years of
our work on the construction of computer-friendly expressions for Green’s functions
for applied partial differential equations. The results of that work have been reported
in a series of publications of the past decades. To get a sense of this work and its
distinctive features, the reader might examine some of our publications [11, 12, 17, 21,
23, 25]. The most complete and useful list of efficient representations of Green’s
functions recently constructed can be found in [16].
It was just recently, however, that the idea emerged to compare alternative forms
of Green’s functions that had been obtained for a variety of boundary-value prob-
lems stated for the two-dimensional Laplace equation. The comparison appeared
really nontrivial. It ultimately gave birth to a score of infinite product representa-
tions of some trigonometric and hyperbolic functions. The idea is not, of course,
new. The reader might recall some other areas of mathematics in which a compari-
son of equivalent but different-looking forms of some statement results in interesting
developments.
As might be learned from mathematical analysis, the theory of infinite prod-
ucts is closely related to that of infinite series. The latter, in turn, represents one of
the major driving forces in the core courses of undergraduate mathematics. Infinite
series usually receive more or less complete and detailed coverage in standard un-
dergraduate texts in both pure and applied mathematics curricula. Indeed, Taylor,
Fourier, and other types of series represent a convenient tool for mathematical anal-
ysis. They traditionally play a significant role in the standard courses of calculus,
complex variables, differential equations, numerical analysis, and others.
Infinite products, in contrast, are not that well and fully covered in standard texts
on mathematical analysis. Nevertheless, they quite frequently arise in different areas
of mathematics [5, 20], and are, as well as infinite series, successfully implemented
as a tool in the description of a number of subjects, such as the approximation of
functions in particular. Although the fundamental results on the infinite product rep-
resentation of elementary functions can be traced back to Euler’s era [1], mathemati-
cians all over the globe are still working in this area [2–4, 7, 14, 22, 24], reporting
on different theoretical and computational aspects of this topic.
Since representation of elementary functions by infinite products constitutes the
leading theme in this work, it would be appropriate to provide the reader with at
least a brief introduction to the basic terminology as well as to the chief concepts of
infinite products. An introduction of this sort would be, in our opinion, reasonable
to make this book more consistent, self-contained, and easier to read.
In addition, the reader will later be familiarized with the concept of Green’s func-
tion for the two-dimensional Laplace equation. This concept will be briefly reviewed
in the introduction and discussed then in more detail in Chaps. 3, 4, and 5, where
we give an overview of the major solution methods that are traditionally used for
the construction of Green’s functions. In doing so, our special emphasis will be
on the method of images, which results, for some problems, in an infinite-product-
containing representation of the Green’s function.
We begin with a brief review of the fundamentals of infinite products by assum-
ing that a1 , a2 , . . . , ak , . . . represent nonzero complex numbers, and consider the
product
\[ a_1 \cdot a_2 \cdots a_k \cdots = \prod_{k=1}^{\infty} a_k. \qquad (1.1) \]
The concept of convergence must be, of course, of the same critical importance
for infinite products as it is for infinite series. To introduce this concept, we form the
finite product
\[ P_N = \prod_{k=1}^{N} a_k = a_1 \cdot a_2 \cdots a_N \]
and call it the Nth partial product of (1.1). It is said that the infinite product in (1.1)
is convergent (we will also use the wording the infinite product converges) if there
exists a finite limit \(P \neq 0\) to which the sequence
\[ P_1, P_2, \ldots, P_N, \ldots \qquad (1.2) \]
of partial products converges. That is,
\[ P = \lim_{N\to\infty} P_N. \]
If the infinite product in (1.1) converges, then its general term necessarily satisfies
\[ \lim_{k\to\infty} a_k = 1. \qquad (1.3) \]
This follows from the obvious relation
\[ a_N = \frac{P_N}{P_{N-1}} \qquad (1.4) \]
for the general term of the product in (1.1) in terms of two successive partial prod-
ucts. Indeed, taking the limit in (1.4), one obtains for a convergent infinite product
\[ \lim_{N\to\infty} a_N = \frac{\lim_{N\to\infty} P_N}{\lim_{N\to\infty} P_{N-1}} = \frac{P}{P} = 1. \]
Hence, the condition in (1.3) is necessary for the infinite product in (1.1) to con-
verge. To give some illustrations of this claim, we consider a few examples. Take
the infinite product
\[ \prod_{k=1}^{\infty} \frac{(k+1)^2}{k(k+2)} \qquad (1.5) \]
and explore its convergence by taking a close look at its Nth partial product, written
down explicitly as
\[ P_N = \prod_{k=1}^{N} \frac{(k+1)^2}{k(k+2)}
 = \frac{2^2}{1\cdot 3}\cdot\frac{3^2}{2\cdot 4}\cdot\frac{4^2}{3\cdot 5}\cdots
   \frac{(N-1)^2}{(N-2)N}\cdot\frac{N^2}{(N-1)(N+1)}\cdot\frac{(N+1)^2}{N(N+2)}. \]
Nearly all the factors cancel, leaving \(P_N = 2(N+1)/(N+2)\), so that
\[ \lim_{N\to\infty} P_N = \lim_{N\to\infty} \frac{2(N+1)}{N+2} = 2. \]
Thus, the infinite product in (1.5) really does converge. And what about the con-
dition in (1.3)? It is obviously met, since
\[ \lim_{k\to\infty} a_k = \lim_{k\to\infty} \frac{(k+1)^2}{k(k+2)} = 1. \]
For another example of the necessity of the condition in (1.3), consider the fol-
lowing infinite product:
\[ \prod_{k=2}^{\infty} \frac{k^2-1}{k^2}. \qquad (1.6) \]
To check for convergence, consider its Nth partial product, which reads in this
case
\[ P_N = \prod_{k=2}^{N} \frac{k^2-1}{k^2} = \prod_{k=2}^{N} \frac{(k-1)(k+1)}{k^2}
 = \frac{1\cdot 3}{2^2}\cdot\frac{2\cdot 4}{3^2}\cdot\frac{3\cdot 5}{4^2}\cdots
   \frac{(N-2)N}{(N-1)^2}\cdot\frac{(N-1)(N+1)}{N^2} = \frac{N+1}{2N}, \]
providing
\[ \lim_{N\to\infty} P_N = \lim_{N\to\infty} \frac{N+1}{2N} = \frac{1}{2}. \]
Hence, the infinite product in (1.6) is indeed convergent. It is also clear that the
condition in (1.3) is met. That is,
\[ \lim_{k\to\infty} \frac{k^2-1}{k^2} = 1. \]
Consider, finally, the infinite product
\[ \prod_{k=2}^{\infty} \frac{k+(-1)^k}{k}. \qquad (1.7) \]
To explore its convergence, its odd-index partial product \(P_{2N-1}\) and even-index partial product \(P_{2N}\) should be
analyzed separately. The point is that these partial products look formally different.
We can show, however, that the sequence
\[ P_3, P_5, P_7, \ldots, P_{2N-1}, \ldots \]
of the odd-index partial products and the sequence
\[ P_2, P_4, P_6, P_8, \ldots, P_{2N}, \ldots \]
of the even-index partial products of (1.7) converge to the same value, implying that
the infinite product in (1.7) is convergent. To verify this conjecture, take a look first
at the odd-index partial product \(P_{2N-1}\):
\[ P_{2N-1} = \prod_{k=2}^{2N-1} \frac{k+(-1)^k}{k}
 = \frac{3}{2}\cdot\frac{2}{3}\cdot\frac{5}{4}\cdot\frac{4}{5}\cdots
   \frac{2N-1}{2N-2}\cdot\frac{2N-2}{2N-1}, \]
which clearly converges to 1. The even-index partial product \(P_{2N}\) differs from \(P_{2N-1}\)
only by the extra factor \((2N+1)/(2N)\), which also tends to 1, so both sequences converge
to the same value, implying that the infinite product in (1.7) is indeed convergent, with
its value equal to unity. The limit of its general term is also equal to unity:
\[ \lim_{k\to\infty} \frac{k+(-1)^k}{k} = 1. \]
Thus, the analysis just completed for the infinite products in (1.5), (1.6), and (1.7)
simply illustrates the necessity (which has actually been proven earlier) of the con-
dition in (1.3) for the convergence of an infinite product. To show that the condition
in (1.3) is not sufficient for convergence, we may offer a single counterexample. Let
us take the product
\[ \prod_{k=1}^{\infty} \frac{k+1}{k}, \qquad (1.8) \]
the general term of which approaches unity as k goes to infinity, yet the product is
divergent. The divergence can be proven by showing that the Nth partial product,
\[ P_N = \frac{2}{1}\cdot\frac{3}{2}\cdot\frac{4}{3}\cdots\frac{N}{N-1}\cdot\frac{N+1}{N} = N+1, \]
is unbounded as N goes to infinity. This provides convincing evidence of the diver-
gence of the product in (1.8).
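As a quick numerical illustration of these three examples, the following short Python sketch (the helper name partial_product is ours, introduced only for this illustration) evaluates a few partial products directly:

```python
from math import prod

def partial_product(term, start, N):
    """N-th partial product of an infinite product whose k-th factor is term(k)."""
    return prod(term(k) for k in range(start, N + 1))

# Product (1.5): factors (k+1)^2/(k(k+2)); partial products approach 2.
print(partial_product(lambda k: (k + 1) ** 2 / (k * (k + 2)), 1, 1000))  # ~1.998
# Product (1.6): factors (k^2-1)/k^2; partial products approach 1/2.
print(partial_product(lambda k: (k * k - 1) / (k * k), 2, 1000))         # ~0.5005
# Product (1.8): factors (k+1)/k; partial products grow without bound.
print(partial_product(lambda k: (k + 1) / k, 1, 1000))                   # 1001.0
```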
The example just considered allows us to declare that the condition in (1.3), being
necessary for the convergence of infinite products, is not, however, sufficient. In
other words, if the condition in (1.3) is not met, then the product in (1.1) diverges.
If, however, the condition in (1.3) is met, then the product might either converge or
diverge.
It is interesting to recall that the situation with infinite products just discussed
resembles that for infinite series. That is, if an infinite series \(\sum_{k=1}^{\infty} b_k\) converges,
then the limit of its general term \(b_k\) must be zero. The converse assertion, that a
series necessarily converges if the limit of its general term is zero, is, however,
untrue. This point is traditionally illustrated in calculus with the remarkable example
of the harmonic series \(\sum_{k=1}^{\infty} 1/k\), which diverges despite the fact that its general
term \(1/k\) approaches zero as \(k\) goes to infinity.
Let us revisit the infinite product in (1.1) and assume that all its terms are
nonzero, which means that if its general term is rewritten as \(a_k = 1 + \beta_k\), then
\(\beta_k \neq -1\) for every k. Now rewrite (1.1) as
\[ \prod_{k=1}^{\infty} (1 + \beta_k). \qquad (1.9) \]
From the necessary condition for convergence of the product in (1.1), it follows
that if the product in (1.9) converges, then the following condition
\[ \lim_{k\to\infty} \beta_k = 0 \qquad (1.10) \]
must be satisfied.
\[ \sum_{k=1}^{\infty} \log(1 + \beta_k). \qquad (1.11) \]
\[ \sum_{k=1}^{\infty} |\beta_k| \qquad (1.13) \]
is convergent.
Proof of the latter claim can immediately be accomplished by the limit com-
parison test. Indeed, since convergence of either the series in (1.12) or the series
in (1.13) implies (1.10), we take the limit
\[ \lim_{\beta_k\to 0} \frac{\log(1+\beta_k)}{\beta_k} \]
diverges.
As with infinite series, the commutativity property [5] holds for absolutely con-
vergent infinite products but does not do so for conditionally convergent ones. This
means that the order of factors in an absolutely convergent infinite product can be
arbitrarily rearranged without affecting the product value. If, however, an infinite
product converges only conditionally, then a rearrangement of its factors may change
its value or even destroy its convergence.
To illustrate the failure of commutativity in the conditionally convergent case, we will present an example of
a conditionally convergent infinite product and show that by rearranging the order
of its factors we can obtain for it an arbitrarily preassigned value. In doing so, recall
the infinite product in (1.7) and rewrite it as
\[ \prod_{k=2}^{\infty} \left(1 + \frac{(-1)^k}{k}\right). \qquad (1.14) \]
As we have recently proved, this product converges to the value of unity. The con-
vergence is, however, conditional, because the related product
\[ \prod_{k=2}^{\infty} \left(1 + \frac{1}{k}\right), \]
whose factors are obtained by replacing \(\beta_k\) with \(|\beta_k|\), diverges, as follows from (1.8).
Let M and N be two positive integers, and rearrange the order of factors in (1.14)
in such a way that segments TM of M factors representing sums alternate with seg-
ments TN of N factors representing differences. The first of the segments TM ap-
pears as
\[ T_M = \left(1+\frac{1}{2}\right)\left(1+\frac{1}{4}\right)\left(1+\frac{1}{6}\right)\cdots\left(1+\frac{1}{2M}\right), \]
while the first of the segments \(T_N\) reads
\[ T_N = \left(1-\frac{1}{3}\right)\left(1-\frac{1}{5}\right)\left(1-\frac{1}{7}\right)\cdots\left(1-\frac{1}{2N+1}\right). \]
Rewrite the segments TM and TN in a compact form. That is,
\[ T_M = \frac{3}{2}\cdot\frac{5}{4}\cdot\frac{7}{6}\cdots\frac{2M+1}{2M} = \frac{(2M+1)!!}{(2M)!!} \]
and
\[ T_N = \frac{2}{3}\cdot\frac{4}{5}\cdot\frac{6}{7}\cdots\frac{2N}{2N+1} = \frac{(2N)!!}{(2N+1)!!}. \]
This makes the (M + N)kth partial product P(M+N )k of the rearranged infinite
product equal to
\[ P_{(M+N)k} = \frac{(2Mk+1)!!\,(2Nk)!!}{(2Mk)!!\,(2Nk+1)!!}. \qquad (1.15) \]
To compute the value of the rearranged infinite product, one is required to take
a limit of its partial products in (1.15) as k approaches infinity. Before going any
further with the limit process, we recall the classical [5] Wallis formula
\[ \lim_{k\to\infty} \frac{(2k)!!}{(2k-1)!!\,\sqrt{k}} = \sqrt{\pi} \]
and convert it to a form that is more convenient for the development that follows. In
doing so, rewrite the above as
\[ \lim_{k\to\infty} \frac{(2k-1)!!\,\sqrt{k}}{(2k)!!} = \frac{1}{\sqrt{\pi}}. \]
Clearly, upon multiplying the numerator and denominator in the above by \((2k+1)\sqrt{k}\),
we transform it into
\[ \lim_{k\to\infty} \frac{(2k+1)!!\;k}{(2k)!!\,\sqrt{k}\,(2k+1)}
 = \lim_{k\to\infty} \frac{(2k-1)!!\,\sqrt{k}}{(2k)!!} = \frac{1}{\sqrt{\pi}}. \]
The limit on the left-hand side can be decomposed into the product of two limits:
\[ \lim_{k\to\infty} \frac{(2k+1)!!}{(2k)!!\,\sqrt{k}} \cdot \lim_{k\to\infty} \frac{k}{2k+1} = \frac{1}{\sqrt{\pi}}. \]
Since the second of the two limits is 1/2, the above relation reads
\[ \lim_{k\to\infty} \frac{(2k+1)!!}{(2k)!!\,\sqrt{k}} = \frac{2}{\sqrt{\pi}}, \]
which can be considered an equivalent version of Wallis’s formula.
We recall now the rearranged infinite product in (1.14) and compute its value V
by taking the limit of its (M + N)kth partial product P(M+N )k in (1.15) as k ap-
proaches infinity. This yields
\[ V = \lim_{k\to\infty} \frac{(2Mk+1)!!\,(2Nk)!!}{(2Mk)!!\,(2Nk+1)!!}
 = \sqrt{\frac{M}{N}}\;\lim_{k\to\infty} \frac{(2Mk+1)!!}{(2Mk)!!\,\sqrt{Mk}}
   \cdot \lim_{k\to\infty} \frac{(2Nk)!!\,\sqrt{Nk}}{(2Nk+1)!!}, \]
which, in light of the second version of Wallis’s formula, reads
\[ V = \sqrt{\frac{M}{N}} \cdot \frac{2}{\sqrt{\pi}} \cdot \frac{\sqrt{\pi}}{2} = \sqrt{\frac{M}{N}}. \]
Hence, the infinite product in (1.14), rearranged in the way just described, might
either increase or decrease in value depending upon the integers M and N . Indeed,
if the rearrangement is made, say, such that every four factors representing sums
(M = 4) are followed by a single factor representing a difference (N = 1), then
the value of the resultant infinite product is twice the value of the original infinite
product in (1.14).
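The rearrangement effect is easy to observe numerically. The following Python sketch (an illustration of ours; the block count and function name are arbitrary choices) multiplies alternating segments of M "sum" factors and N "difference" factors and shows the partial products approaching \(\sqrt{M/N}\):

```python
def rearranged_partial_product(M, N, blocks):
    """Partial product after `blocks` segments of M factors (1 + 1/(2j))
    alternated with N factors (1 - 1/(2j+1))."""
    p = 1.0
    plus, minus = 0, 0  # how many "sum" / "difference" factors used so far
    for _ in range(blocks):
        for _ in range(M):
            plus += 1
            p *= 1.0 + 1.0 / (2 * plus)
        for _ in range(N):
            minus += 1
            p *= 1.0 - 1.0 / (2 * minus + 1)
    return p

print(rearranged_partial_product(4, 1, 100000))  # close to 2.0, i.e. sqrt(M/N)
```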
Completing our brief review on the fundamentals of infinite products, we turn to
functional products and let βk (z) be a function defined on a set S for each positive
integer k. Then we say that the infinite product
\[ \prod_{k=1}^{\infty} \bigl(1 + \beta_k(z)\bigr) \qquad (1.16) \]
converges uniformly in S if the condition \(1 + \beta_k(z) \neq 0\) holds for all k, and the
sequence of partial products
\[ P_N(z) = \prod_{k=1}^{N} \bigl(1 + \beta_k(z)\bigr) \]
is uniformly convergent on S.
Let βk (z) represent continuous functions in S for each k. It can be shown that if
the infinite product in (1.16) converges uniformly in S, then the product value P (z)
is continuous in S.
Keep in mind that the key theme in the present volume is the representation of
elementary functions in terms of infinite products. We believe that with the review
just completed, the reader is prepared to cope with the rest of the material in this
book where infinite products emerge.
Our introductory review takes a turn, at this point, to the second of the two con-
cepts, which, along with the concept of infinite product, represents the keystone
in this volume. That is the concept of Green’s function. In order to introduce the
Green’s function notion for the two-dimensional Laplace equation, we consider,
in two-dimensional Euclidean space, a simply connected region Ω bounded by a
piecewise smooth contour L, and formulate a boundary-value problem in which the
Poisson equation
\[ \nabla^2 u(P) = -f(P), \quad P \in \Omega, \qquad (1.17) \]
is subject to the homogeneous boundary condition
\[ B[u(P)] = 0, \quad P \in L. \qquad (1.18) \]
Assume that the corresponding homogeneous problem, in which the Laplace equation
\[ \nabla^2 u(P) = 0, \quad P \in \Omega, \qquad (1.19) \]
has only the trivial solution u(P ) = 0. If so, then the solution u(P ) for the problem
in (1.17) and (1.18) can be expressed [8, 15, 18] in the integral form
\[ u(P) = \int_{\Omega} G(P,Q)\, f(Q)\, d\Omega(Q), \qquad (1.20) \]
with the kernel G(P , Q) being called the Green’s function to the homogeneous
boundary-value problem in (1.19) and (1.18).
The relation in (1.20) reveals a special feature of the Green’s function. Indeed,
once the latter is available, solution of the problem in (1.17) and (1.18) is a matter
of computing the integral in (1.20), which can be considered a direct consequence
of the second Green’s formula [8].
We use some standard terminology in our book for the arguments P and Q of
the Green’s function. They are commonly referred to as the observation point for P
(another customarily used term is the field point) and the source point for Q.
For any location of the source point Q ∈ Ω, the Green’s function G(P , Q), as
a function of the coordinates of the field point P , possesses the following defining
properties:
\[ B_P\bigl[G(P,Q)\bigr] = 0, \quad P \in L. \]
In compliance with the defining properties, the Green’s function G(P , Q) of the
problem in (1.19) and (1.18) can be expressed as
\[ G(P,Q) = \frac{1}{2\pi} \ln\frac{1}{|P-Q|} + R(P,Q), \]
with the second additive component R(P , Q) referred to as the regular part of the
Green’s function. R(P , Q) represents a function of the coordinates of P that is harmonic
everywhere in Ω. That is,
\[ \nabla_P^2 R(P,Q) = 0, \quad P \in \Omega. \]
A given Green’s function constructed by two different methods might have two
different appearances. One expression might appear in a computer-friendly form,
whereas the other might not be readily computable or simple to use. Later in this
volume, a number of different methods will be reviewed that produce a variety of
different forms of Green’s functions for boundary-value problems for the Laplace
equation.
Some of the available Green’s functions can be completely expressed in terms of
elementary functions. As an example, one might recall the classical [5, 18] Green’s
function
\[ G(x,y;\xi,\eta) = \frac{1}{4\pi} \ln\frac{(x-\xi)^2 + (y+\eta)^2}{(x-\xi)^2 + (y-\eta)^2} \qquad (1.21) \]
for the Dirichlet problem posed in the half-plane {−∞ < x < ∞, 0 < y < ∞}.
It is evident that the logarithmic term generated by the denominator in (1.21) constitutes the
singular part of G(x, y; ξ, η), whereas the component
\[ R(x,y;\xi,\eta) = \frac{1}{4\pi} \ln\bigl[(x-\xi)^2 + (y+\eta)^2\bigr] \]
represents its regular part.
Representations of the type in (1.21) are compact and convenient to work with in
various applications. It is worth noting, however, that there unfortunately exist only
a few such closed analytical forms of Green’s functions available in the literature.
Some other Green’s functions for the Laplace equation that are available in lit-
erature are expressed in a form containing elementary functions and trigonometric
series, such as
\[ G(r,\varphi;\varrho,\psi) = \frac{1}{2\pi}\left[\frac{1}{\beta}
 - 2\beta \sum_{n=1}^{\infty} \frac{(r\varrho)^n}{n(n+\beta)} \cos n(\varphi-\psi)\right] \]
\[ \qquad\qquad - \frac{1}{4\pi}\ln\bigl[r^2 - 2r\varrho\cos(\varphi-\psi) + \varrho^2\bigr]
 - \frac{1}{4\pi}\ln\bigl[1 - 2r\varrho\cos(\varphi-\psi) + r^2\varrho^2\bigr], \qquad (1.22) \]
obtained in [16] for the mixed boundary-value problem
\[ \left[\frac{\partial}{\partial r} + \beta\right] G(r,\varphi;\varrho,\psi)\bigg|_{r=1} = 0, \quad \beta > 0. \qquad (1.23) \]
Representations of the type in (1.22) are also quite convenient for practical im-
plementations, because their singular components are explicitly expressed, while
the series in their regular parts are uniformly convergent. This makes forms of the
type in (1.22) computer-friendly and allows efficient computing by a truncation of
the series. A number of Green’s functions obtained in such a mixed form can be
found in [16].
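As an illustration of such a truncation, here is a minimal Python sketch (ours, not from the original text) that evaluates the form shown in (1.22) by keeping a finite number of series terms; the function name and the default truncation level are arbitrary choices:

```python
import math

def green_disk_robin(r, phi, rho, psi, beta, n_terms=50):
    """Sketch: evaluate a Green's function of the form (1.22) by truncating
    its uniformly convergent series; the logarithmic parts, which carry the
    singularity, are computed in closed form."""
    d = phi - psi
    series = sum((r * rho) ** n / (n * (n + beta)) * math.cos(n * d)
                 for n in range(1, n_terms + 1))
    g = (1.0 / beta - 2.0 * beta * series) / (2.0 * math.pi)
    g -= math.log(r * r - 2 * r * rho * math.cos(d) + rho * rho) / (4 * math.pi)
    g -= math.log(1 - 2 * r * rho * math.cos(d) + (r * rho) ** 2) / (4 * math.pi)
    return g

print(green_disk_robin(0.4, 1.0, 0.7, 2.0, beta=1.0))
```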
In most other cases, however, Green’s functions of boundary-value problems for
the Laplace equation cannot be expressed in either an elementary function form or
in a mixed form of the type in (1.22). For example, we have the classical [18] Fourier
double-series form
\[ G(x,y;\xi,\eta) = \frac{4}{ab} \sum_{m,n=1}^{\infty}
   \frac{\sin\mu x \,\sin\nu y \,\sin\mu\xi \,\sin\nu\eta}{\mu^2 + \nu^2} \qquad (1.24) \]
of the Green’s function for the Dirichlet problem stated on the rectangle {0 < x < a,
0 < y < b}, where the parameters μ and ν are expressed in terms of the summation
indices m and n, and the rectangle’s dimensions a and b as
\[ \mu = \frac{m\pi}{a} \quad\text{and}\quad \nu = \frac{n\pi}{b}. \]
is available in [5]. The standard abbreviation Re denotes the real part of a function
of a complex variable. Note that the above representation and the one in (1.22) are
mathematically equivalent, but the two forms are not equivalent computationally.
Indeed, the one in (1.25) is less suitable for computer implementation than that
in (1.22), because the regular part in (1.25) requires greater computational effort.
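For comparison, a straightforward truncation of the double series in (1.24) can be sketched as follows (again an illustration of ours; the truncation orders are arbitrary, and convergence degrades as the observation point approaches the source point):

```python
import math

def green_rectangle_dirichlet(x, y, xi, eta, a, b, m_max=60, n_max=60):
    """Sketch: truncate the double Fourier series (1.24) for the Green's
    function of the Dirichlet problem on the rectangle 0<x<a, 0<y<b."""
    total = 0.0
    for m in range(1, m_max + 1):
        mu = m * math.pi / a
        for n in range(1, n_max + 1):
            nu = n * math.pi / b
            total += (math.sin(mu * x) * math.sin(nu * y) *
                      math.sin(mu * xi) * math.sin(nu * eta)) / (mu * mu + nu * nu)
    return 4.0 * total / (a * b)

print(green_rectangle_dirichlet(0.3, 0.4, 0.6, 0.5, a=1.0, b=1.0))
```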
The multiplicity of forms in which Green’s functions can potentially be repre-
sented is instrumental in the present book. It represents the driving force of the in-
vestigation reported in Chap. 6. Taking advantage of this multiplicity, we will later
derive some interesting infinite product representations of trigonometric and hyper-
bolic functions by comparison of some alternative forms of Green’s functions for
the Laplace equation.
As to the material of the present volume, it is organized in five chapters. The
focus in Chap. 2 is on the classical [1] Euler infinite product representation of the
trigonometric and hyperbolic sine and cosine functions. We explain how Euler de-
rived them and also review some other known derivation procedures developed later.
In addition, the reader is introduced to the derivation of some other infinite product
representations of elementary functions that are available in mathematical hand-
books [6] or [9].
In Chaps. 3 and 5, we turn to Green’s functions of boundary-value problems
for the two-dimensional Laplace equation. Our effort is specific because we do not
concentrate on theoretical aspects but mostly analyze a variety of methods that are
traditionally used for the construction of Green’s functions. In doing so, special
attention is paid to the methods of images, conformal mapping, and eigenfunction
expansion. By extending the frontiers of the method of images, we obtain alternative
infinite-product-containing expressions for some classical Green’s functions. This
provides background for the key developments in the present work.
Chapter 4 makes a sharp turn, departing from the field of partial differential equa-
tions and focusing instead on ordinary differential equations. It might seem that this
material lies outside the book’s focus because it is not directly related to Green’s
functions for the Laplace equation. But the purpose of Chap. 4 is to prepare the
reader for a better comprehension of our work in Chap. 5, where we return to the
Laplace equation. A vast number of Green’s functions are obtained for this equa-
tion by means of the method of eigenfunction expansion. It is worth noting that this
method is applicable and appears efficient not only for the Laplace equation but also
for many other applied partial differential equations.
Chapter 6 is central to the book. An innovative approach [28] is presented and
developed for expressing elementary functions in terms of infinite products. Our
work on Green’s functions, discussed in detail earlier in the book, is instrumental
for Chap. 6. A number of infinite product expansions of elementary functions are
obtained. Some of those are simply alternatives to the forms that are already avail-
able in the classical literature. Some others were derived, however, for functions
whose infinite product representations are unavailable in existing handbooks.
To enhance the usefulness of this volume as a textbook many illustrative exam-
ples are offered in most of its sections to assist the instructor in class preparation
and in giving the student more effective material for study. Every chapter begins
with a review guide outlining the basic concepts covered in the chapter. To reflect
the widespread idea that a text is only as good as its problems, a set of carefully
designed challenging exercises is available at the end of each chapter. The exercises
provide opportunities for the reader to explore the concepts of the chapter in more
detail. Hints, comments, and answers to most of those exercises are available in the
book.
The author hopes that the discussion initiated in this brief volume will stimulate
the reader’s interest in learning more about our approach to the representation of
elementary functions in terms of infinite products. He believes that the book might
arouse the reader’s curiosity and awaken a desire to better understand the nature of
the intersection of the subjects of Green’s functions and infinite products. This could
promote further progress in this challenging field that bridges the divide between the
two subjects.
Chapter 2
Infinite Products and Elementary Functions
The objective in this chapter is to lay out a working background for dealing with
infinite products and their possible applications. The reader will be familiarized
with a specific topic that is not often included in traditional texts on related courses
of mathematical analysis, namely the infinite product representation of elementary
functions.
It is known [9] that the theory of some special functions is, to a certain extent,
linked to infinite products. In this regard, one might recall, for example, the elliptic
integrals, gamma function, Riemann’s zeta function, and others. But note that spe-
cial functions are not targeted in this book at all. Our scope is limited exclusively to
the use of infinite products for the representation of elementary functions.
We will recall and discuss those infinite product representations of elementary
functions that are available in the current literature. Note that they have been de-
rived by different methods, but the number of them is limited. In Sect. 2.1, Euler’s
classical derivation procedure will be analyzed. His elegant elaborations in this field
were directed toward the derivation of infinite product representations for trigono-
metric as well as the hyperbolic sine and cosine functions. The work of Euler on
infinite products was inspirational [26] for many generations of mathematicians. It
will be frequently referred to in this brief volume as well.
Some alternative derivation techniques proposed for infinite product representa-
tions of trigonometric functions will be reviewed in detail in Sect. 2.2. The closing
Sect. 2.3 brings to the reader’s attention a variety of possible techniques for the
derivation of infinite product forms of other elementary functions. We will instruct
the reader on how to obtain the infinite product representations of elementary func-
tions that are available in standard texts and handbooks.
2.1 Euler’s Classical Representations

Euler’s classical infinite product representations of the trigonometric sine and cosine
functions read
\[ \sin x = x \prod_{k=1}^{\infty}\left(1 - \frac{x^2}{k^2\pi^2}\right) \qquad (2.1) \]
and
\[ \cos x = \prod_{k=1}^{\infty}\left(1 - \frac{4x^2}{(2k-1)^2\pi^2}\right). \qquad (2.2) \]
It is due to the close relationship between the trigonometric and hyperbolic functions, which represent the analytic
continuation of the trigonometric functions into the complex plane, that the infinite
product representations
\[ \sinh x = x \prod_{k=1}^{\infty}\left(1 + \frac{x^2}{k^2\pi^2}\right) \qquad (2.3) \]
and
\[ \cosh x = \prod_{k=1}^{\infty}\left(1 + \frac{4x^2}{(2k-1)^2\pi^2}\right) \qquad (2.4) \]
for the hyperbolic sine and cosine functions directly follow from (2.1) and (2.2),
respectively.
As we will show later, Euler’s direct approach can be successfully applied to the
derivation of the representations in (2.3) and (2.4).
To let the reader enjoy the elegance of the approach, we will consider first the
case of the representation in (2.1) and follow it in some detail. In doing so, we write
down the trigonometric sine function, using Euler’s formula, in the exponential form
\[ \sin x = \frac{e^{ix} - e^{-ix}}{2i}, \]
and replace the exponential functions with their limit expressions reducing the above
to
\[ \sin x = \lim_{n\to\infty} \frac{1}{2i}\left[\left(1+\frac{ix}{n}\right)^n - \left(1-\frac{ix}{n}\right)^n\right]
 = -\frac{i}{2}\lim_{n\to\infty}\left[\left(1+\frac{ix}{n}\right)^n - \left(1-\frac{ix}{n}\right)^n\right]. \qquad (2.5) \]
We then apply Newton’s binomial formula to both polynomials in the brackets.
This yields
\[ \left(1+\frac{ix}{n}\right)^n = 1 + n\frac{ix}{n} + \frac{n(n-1)}{2!}\left(\frac{ix}{n}\right)^2 + \cdots
 = \sum_{k=0}^{n} \binom{n}{k}\left(\frac{ix}{n}\right)^k \qquad (2.6) \]
and
\[ \left(1-\frac{ix}{n}\right)^n = 1 - n\frac{ix}{n} + \frac{n(n-1)}{2!}\left(\frac{ix}{n}\right)^2 - \cdots
 = \sum_{k=0}^{n} (-1)^k \binom{n}{k}\left(\frac{ix}{n}\right)^k. \qquad (2.7) \]
Once these expressions are substituted into (2.5), all the real terms in the brackets
(the terms in even powers of x) cancel out. As soon as the common factor of 2ix
is factored out in the remaining odd-power terms of x, the right-hand side of (2.5)
reduces to a compact form, and we have
\[ \sin x = x \lim_{n\to\infty} \sum_{k=0}^{(n-1)/2} (-1)^k \binom{n}{2k+1} \frac{x^{2k}}{n^{2k+1}}. \qquad (2.8) \]
Of all the stages in Euler’s procedure, which, as a whole, represents a real work of
art, the next stage is perhaps the most critical and decisive. Factoring the polynomial
in (2.8) into the trigonometric form
\[ \sin x = x \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 - \frac{1+\cos 2k\pi/n}{1-\cos 2k\pi/n}\cdot\frac{x^2}{n^2}\right), \]
To take the limit, the second additive term in the parentheses of the above finite
product is multiplied and divided by the factor k 2 π 2 . This yields
\[ \sin x = x \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 - \frac{x^2\,k^2\pi^2}{n^2 k^2\pi^2 \tan^2 k\pi/n}\right)
 = x \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 - \frac{x^2}{k^2\pi^2}\left(\frac{k\pi/n}{\tan k\pi/n}\right)^2\right), \]
Since
\[ \lim_{\vartheta\to 0} \frac{\vartheta}{\tan\vartheta} = 1, \]
taking the limit as n approaches infinity yields the classical Euler representation in (2.1),
\[ \sin x = x \prod_{k=1}^{\infty}\left(1 - \frac{x^2}{k^2\pi^2}\right). \]
It is instructive to compare this form with the Maclaurin series expansion
\[ \sin x = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!} \]
of the sine function. These two forms share a common feature and are different at the
same time. As to the common feature, both the partial products of Euler’s infinite
product representation and the partial sums of the Maclaurin expansion are odd-degree
polynomials. But what makes the two forms different is that the partial products of
Euler’s representation are somewhat more relevant to the sine function. That is, they
share the same zeros \(x_k = k\pi\) with the original sine function, whereas the Maclaurin ex-
pansion does not. It is evident that this property of the infinite product representation
could be essential in applications.
To examine the convergence pattern of Euler’s representation and compare it to
that of Maclaurin’s series, the reader is invited to take a close look at Figs. 2.1
and 2.2. Sequences of the Euler partial products and Maclaurin partial sums are
depicted, illustrating the difference between the two formulations.
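The comparison behind Figs. 2.1 and 2.2 can also be reproduced numerically. The following Python sketch (ours, for illustration only) prints a few Euler partial products of (2.1) next to Maclaurin partial sums of the same degree:

```python
import math

def euler_partial_product(x, K):
    """K-th partial product of Euler's representation (2.1) of sin x."""
    p = x
    for k in range(1, K + 1):
        p *= 1.0 - x * x / (k * k * math.pi * math.pi)
    return p

def maclaurin_partial_sum(x, K):
    """Maclaurin polynomial of sin x of degree 2K+1."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(K + 1))

x = 2.5
for K in (2, 5, 10):
    print(K, euler_partial_product(x, K), maclaurin_partial_sum(x, K), math.sin(x))
```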
As to the derivation of the infinite product representation of the trigonometric
cosine function, which was shown in (2.2), we diligently follow the procedure just
described for the sine function. That is, after using Euler’s formula
\[ \cos x = \frac{e^{ix} + e^{-ix}}{2}, \]
we substitute the Newtonian polynomials from (2.6) and (2.7) into the right-hand
side of the above relation. It can readily be seen that, in contrast to the case of the
sine function, all the odd-power terms cancel out; and we subsequently arrive at the
following even-degree polynomial-containing representation
\[ \cos x = \lim_{n\to\infty} \sum_{k=0}^{(n-1)/2} (-1)^k \binom{n}{2k} \frac{x^{2k}}{n^{2k}} \]
for the cosine function. The polynomial under the limit sign can be factored in a
similar way as in (2.8). In this case, we obtain
\[ \cos x = \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 - \frac{1+\cos(2k-1)\pi/n}{1-\cos(2k-1)\pi/n}\cdot\frac{x^2}{n^2}\right), \]
or
\[ \cos x = \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 - \frac{x^2 \cos^2 (2k-1)\pi/2n}{n^2 \sin^2 (2k-1)\pi/2n}\right)
 = \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 - \frac{x^2}{n^2 \tan^2 (2k-1)\pi/2n}\right). \]
Similarly to the case of the sine function, we take the limit in the above relation,
which requires some additional algebra. That is, the second additive term in the
parentheses of the finite product is multiplied and divided by (2k − 1)2 π 2 /4n2 .
This yields
\[ \cos x = \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 - \frac{4x^2\,(2k-1)^2\pi^2}{4n^2\,(2k-1)^2\pi^2 \tan^2 (2k-1)\pi/2n}\right), \]
or
\[ \cos x = \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 - \frac{4x^2}{(2k-1)^2\pi^2}\left(\frac{(2k-1)\pi/2n}{\tan(2k-1)\pi/2n}\right)^2\right). \]
The latter, in turn, reads ultimately as the classical Euler expansion for the cosine
shown in (2.2):
\[ \cos x = \prod_{k=1}^{\infty}\left(1 - \frac{4x^2}{(2k-1)^2\pi^2}\right). \]
Note that, similarly to the case of the sine function, the above infinite product
representation also shares the zeros xk = (2k − 1)π/2 with the original cosine func-
tion.
The convergence pattern of the above infinite product representation can be ob-
served in Fig. 2.3.
We turn now to the case of the hyperbolic sine function whose expansion is pre-
sented in (2.3). Its derivation can be conducted in a manner similar to that for the
trigonometric sine. Indeed, representing the hyperbolic sine function with Euler’s
formula
\[ \sinh x = \frac{e^{x} - e^{-x}}{2}, \]
one customarily expresses both the exponential functions in the limit form. This
results in
\[ \sinh x = \frac{1}{2}\lim_{n\to\infty}\left[\left(1+\frac{x}{n}\right)^n - \left(1-\frac{x}{n}\right)^n\right]. \qquad (2.9) \]
Once the Newton binomial formula is used for both polynomials in the brackets,
one obtains
\[ \left(1+\frac{x}{n}\right)^n = 1 + n\frac{x}{n} + \frac{n(n-1)}{2!}\left(\frac{x}{n}\right)^2 + \cdots
 = \sum_{k=0}^{n} \binom{n}{k}\left(\frac{x}{n}\right)^k \]
and
\[ \left(1-\frac{x}{n}\right)^n = 1 - n\frac{x}{n} + \frac{n(n-1)}{2!}\left(\frac{x}{n}\right)^2 - \cdots
 = \sum_{k=0}^{n} (-1)^k \binom{n}{k}\left(\frac{x}{n}\right)^k. \]
As in the derivation of the trigonometric sine function, all the even-power terms
in x in (2.9) cancel out, while the remaining odd-power terms possess a common
factor of 2x. Once the latter is factored out, the expression in (2.9) simplifies to the
compact form
\[ \sinh x = x \lim_{n\to\infty} \sum_{k=0}^{(n-1)/2} \binom{n}{2k+1} \frac{x^{2k}}{n^{2k+1}}, \]
which factors as
\[ \sinh x = x \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 + \frac{x^2}{n^2}\cdot\frac{1+\cos 2k\pi/n}{1-\cos 2k\pi/n}\right). \]
That is,
\[ \sinh x = x \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 + \frac{x^2 \cos^2 k\pi/n}{n^2 \sin^2 k\pi/n}\right)
 = x \lim_{n\to\infty} \prod_{k=1}^{(n-1)/2}
   \left(1 + \frac{x^2}{n^2 \tan^2 k\pi/n}\right). \]
Taking the limit as in the case of the trigonometric sine function, one ultimately
transforms the above relation into the classical Euler form in (2.3):
\[ \sinh x = x \prod_{k=1}^{\infty}\left(1 + \frac{x^2}{k^2\pi^2}\right). \]
On the other hand, using the binomial formula, the left-hand side of the above
can be expanded as
Equating the imaginary parts of the left-hand sides in (2.10) and (2.11), we obtain
\[ \sin(2n+1)w = (2n+1)\cos^{2n} w\,\sin w
 - \binom{2n+1}{3}\cos^{2n-2} w\,\sin^3 w + \cdots + (-1)^n \sin^{2n+1} w \]
\[ = \sin w\left[(2n+1)\cos^{2n} w
 - \binom{2n+1}{3}\cos^{2n-2} w\,\sin^2 w + \cdots + (-1)^n \sin^{2n} w\right]. \qquad (2.12) \]
Since the second factor (the one in the brackets) contains only even exponents
of the sine and cosine functions, it can be represented as a polynomial \(P_n(\sin^2 w)\),
whose degree in \(\sin^2 w\) never exceeds n. On the other hand, for any fixed value of
n, the left-hand side of (2.12) takes on the value zero at the n points \(w_k = k\pi/(2n+1)\),
\(k = 1, 2, 3, \ldots, n\), on the open interval \((0, \pi/2)\). This implies that the zeros of
\(P_n(s)\) are the values \(s_k = \sin^2 w_k\), allowing the polynomial to be expressed as
\[ P_n(s) = \beta \prod_{k=1}^{n}\left(1 - \frac{s}{\sin^2 w_k}\right), \qquad (2.13) \]
The limit on the left-hand side of the above is 2n + 1, while the limit on the
right-hand side is equal to 1. This suggests for the factor β the value 2n + 1, and the
relation in (2.14) transforms into
\[ \sin(2n+1)w = (2n+1)\sin w \prod_{k=1}^{n}
   \left(1 - \frac{\sin^2 w}{\sin^2\bigl(k\pi/(2n+1)\bigr)}\right). \qquad (2.15) \]
Setting \(w = x/(2n+1)\) in (2.15) yields
\[ \sin x = (2n+1)\sin\frac{x}{2n+1}\,\prod_{k=1}^{n}
   \left(1 - \frac{\sin^2\bigl(x/(2n+1)\bigr)}{\sin^2\bigl(k\pi/(2n+1)\bigr)}\right). \qquad (2.16) \]
Since
\[ \lim_{n\to\infty} (2n+1)\sin\frac{x}{2n+1} = x, \]
while
\[ \lim_{n\to\infty} \frac{\sin\bigl(x/(2n+1)\bigr)}{\sin\bigl(k\pi/(2n+1)\bigr)} = \frac{x}{k\pi}, \]
the relation in (2.16) transforms, as n approaches infinity, into the classical Euler
representation in (2.1):
\[ \sin x = x \prod_{k=1}^{\infty}\left(1 - \frac{x^2}{k^2\pi^2}\right). \]
Clearly, the derivation procedure just reviewed is based on a totally different idea
compared to that used by Euler. Recall another alternative derivation of the Euler
representation of the sine function, which can be carried out using the Laurent series
expansion [5]
\[ \cot z - \frac{1}{z} = \sum_{k=-\infty}^{\infty} \left(\frac{1}{z-k\pi} + \frac{1}{k\pi}\right) \qquad (2.17) \]
for the cotangent function of a complex variable. Note that the summation in (2.17)
assumes that the k = 0 term is omitted.
Evidently, finitely many of the terms of the above series have isolated singular points
(poles) in any bounded region D of the complex plane. If, however, those few terms
of the series in (2.17) are removed, then the remaining series becomes absolutely and
uniformly convergent in D. This assertion can be justified by con-
sidering the general term
\[ \frac{1}{z-k\pi} + \frac{1}{k\pi} = \frac{z}{k\pi(z-k\pi)} \]
of the series, for which the following estimate holds:
\[ \left|\frac{z}{k\pi(z-k\pi)}\right| = \left|\frac{z}{k^2\pi(z/k-\pi)}\right|
 \le \frac{T}{\pi\,|T/k-\pi|}\cdot\frac{1}{k^2}, \]
where T represents the upper bound of the modulus of the variable z, that is, |z| < T .
It can be shown that the first factor on the right-hand side of the above inequality
has the finite limit T /π 2 as k approaches infinity. Thus, the series in (2.17) con-
verges (at the rate of 1/k 2 ) absolutely and uniformly in any bounded region. In
other words, both the left-hand side and the right-hand side in (2.17) are regular
functions at z = 0. This makes it possible for the series in (2.17) to be integrated
term by term. Taking advantage of this fact, we integrate both sides in (2.17) along
a path joining the origin z = 0 to a point z ∈ D. This yields
\[ \log\frac{\sin z}{z}\bigg|_{0}^{z}
 = \sum_{k=-\infty}^{\infty}\left[\log(z-k\pi) + \frac{z}{k\pi}\right]_{0}^{z}, \]
and after choosing the branch of the logarithm that vanishes at the origin, we obtain
\[ \log\frac{\sin z}{z}
 = \sum_{k=-\infty}^{\infty}\left[\log\left(1-\frac{z}{k\pi}\right) + \frac{z}{k\pi}\right]
 = \sum_{k=-\infty}^{\infty}\log\left[\left(1-\frac{z}{k\pi}\right)\exp\left(\frac{z}{k\pi}\right)\right]
 = \log\prod_{k=-\infty}^{\infty}\left(1-\frac{z}{k\pi}\right)\exp\left(\frac{z}{k\pi}\right). \qquad (2.18) \]
Recall that the factor with k = 0 is omitted in the above infinite product. Coupling
then the kth factor \(\bigl(1-\frac{z}{k\pi}\bigr)\exp\bigl(\frac{z}{k\pi}\bigr)\) with the \((-k)\)th factor
\(\bigl(1+\frac{z}{k\pi}\bigr)\exp\bigl(-\frac{z}{k\pi}\bigr)\), one obtains \(1 - z^2/(k^2\pi^2)\), so that (2.18)
reduces to the Euler representation of \(\sin z\) in (2.1).
So, two different derivations for the expansion in (2.1) have been reviewed in this
section. They are alternative to the classical Euler procedure discussed in Sect. 2.1.
This issue will be revisited again in Chap. 6, where yet another alternative deriva-
tion procedure for infinite product representations of elementary functions will be
presented. It was recently proposed by the author and reported in [27, 28], and is
based on a novel approach.
The objective in the next section is to introduce the reader to a limited number
of infinite product representations of elementary functions that can be found in the
current literature.
Leaving the \(\sin^2\frac{y}{2}\) factor in the numerator in its current form while expressing
the other three sine factors with the aid of the classical Euler infinite product repre-
sentation in (2.1), one obtains
\[ \cos x - \cos y = \sin^2\frac{y}{2}\cdot\frac{y^2-x^2}{2}
   \prod_{k=1}^{\infty}\left(1-\frac{(y+x)^2}{4k^2\pi^2}\right)
   \prod_{k=1}^{\infty}\left(1-\frac{(y-x)^2}{4k^2\pi^2}\right)
   \left[\frac{y}{2}\prod_{k=1}^{\infty}\left(1-\frac{y^2}{4k^2\pi^2}\right)\right]^{-2}, \]
which can be rewritten in a more compact form. To proceed with this, we combine
all the three infinite products into a single product form. This yields
\[ \cos x - \cos y = 2\,\frac{y^2-x^2}{y^2}\,\sin^2\frac{y}{2}
   \prod_{k=1}^{\infty}\frac{\bigl[1-\frac{(y+x)^2}{4k^2\pi^2}\bigr]\bigl[1-\frac{(y-x)^2}{4k^2\pi^2}\bigr]}{\bigl(1-\frac{y^2}{4k^2\pi^2}\bigr)^2}, \]
or, after performing elementary algebra on the expression under the product sign,
we have
\[ \cos x - \cos y = 2\left(1-\frac{x^2}{y^2}\right)\sin^2\frac{y}{2}
   \prod_{k=1}^{\infty}\frac{[4k^2\pi^2-(x+y)^2][4k^2\pi^2-(x-y)^2]}{(4k^2\pi^2-y^2)^2}. \]
Upon factoring the differences of squares under the product sign, the above rela-
tion transforms into
\[ \cos x - \cos y = 2\left(1-\frac{x^2}{y^2}\right)\sin^2\frac{y}{2}
   \prod_{k=1}^{\infty}\frac{[2k\pi+(x+y)][2k\pi-(x+y)]}{(2k\pi+y)^2}
   \cdot\frac{[2k\pi+(x-y)][2k\pi-(x-y)]}{(2k\pi-y)^2}. \]
At this point, we regroup the numerator factors under the product sign. That is,
we combine the first and fourth factors, as well as the second and third factors. This
yields
\[ \cos x - \cos y = 2\left(1-\frac{x^2}{y^2}\right)\sin^2\frac{y}{2}
   \prod_{k=1}^{\infty}\frac{[(2k\pi+y)+x][(2k\pi+y)-x]}{(2k\pi+y)^2}
   \cdot\frac{[(2k\pi-y)-x][(2k\pi-y)+x]}{(2k\pi-y)^2}, \]
reducing the above relation to the form
\[ \cos x - \cos y = 2\left(1-\frac{x^2}{y^2}\right)\sin^2\frac{y}{2}
   \prod_{k=1}^{\infty}\frac{(2k\pi+y)^2-x^2}{(2k\pi+y)^2}\cdot\frac{(2k\pi-y)^2-x^2}{(2k\pi-y)^2}
 = 2\left(1-\frac{x^2}{y^2}\right)\sin^2\frac{y}{2}
   \prod_{k=1}^{\infty}\left(1-\frac{x^2}{(2k\pi+y)^2}\right)\left(1-\frac{x^2}{(2k\pi-y)^2}\right). \]
and simply trace out the procedure described earlier in detail for the case of (2.20):
\[ \cosh x - \cos y = \cos ix - \cos y = 2\sin\frac{y+ix}{2}\,\sin\frac{y-ix}{2}
 = 2\,\frac{\sin^2\frac{y}{2}}{\sin^2\frac{y}{2}}\,\sin\frac{y+ix}{2}\,\sin\frac{y-ix}{2} \]
\[ = 2\sin^2\frac{y}{2}\cdot\frac{y+ix}{2}\prod_{k=1}^{\infty}\left(1-\frac{(y+ix)^2}{4k^2\pi^2}\right)
   \cdot\frac{y-ix}{2}\prod_{k=1}^{\infty}\left(1-\frac{(y-ix)^2}{4k^2\pi^2}\right)
   \cdot\left[\frac{y^2}{4}\prod_{k=1}^{\infty}\left(1-\frac{y^2}{4k^2\pi^2}\right)^2\right]^{-1}. \]
Upon grouping all the infinite product factors, the above reads
\[ 2\,\frac{y^2+x^2}{y^2}\,\sin^2\frac{y}{2}
   \prod_{k=1}^{\infty}\frac{\bigl[1-\frac{(y+ix)^2}{4k^2\pi^2}\bigr]\bigl[1-\frac{(y-ix)^2}{4k^2\pi^2}\bigr]}{\bigl(1-\frac{y^2}{4k^2\pi^2}\bigr)^2}
 = 2\left(1+\frac{x^2}{y^2}\right)\sin^2\frac{y}{2}
   \prod_{k=1}^{\infty}\frac{(2k\pi+y)^2+x^2}{(2k\pi+y)^2}\cdot\frac{(2k\pi-y)^2+x^2}{(2k\pi-y)^2} \]
\[ = 2\left(1+\frac{x^2}{y^2}\right)\sin^2\frac{y}{2}
   \prod_{k=1}^{\infty}\left(1+\frac{x^2}{(2k\pi+y)^2}\right)\left(1+\frac{x^2}{(2k\pi-y)^2}\right). \]
The expansion
\[ \cos\frac{\pi x}{4} - \sin\frac{\pi x}{4} = \prod_{k=1}^{\infty}\left(1+(-1)^k\frac{x}{2k-1}\right) \qquad (2.22) \]
is listed in [9], for example, as #1.433. This infinite product converges at the slow rate
of 1/k. We can offer two alternative expansions of the function \(\cos\frac{\pi x}{4} - \sin\frac{\pi x}{4}\)
whose convergence rate is notably faster compared to that of (2.22). To derive the
first such expansion, we convert the difference of trigonometric functions in (2.22)
to a single cosine function. This can be done by multiplying and dividing it by a
factor of \(\sqrt{2}/2\):
\[ \cos\frac{\pi x}{4} - \sin\frac{\pi x}{4}
 = \sqrt{2}\left(\frac{\sqrt{2}}{2}\cos\frac{\pi x}{4} - \frac{\sqrt{2}}{2}\sin\frac{\pi x}{4}\right)
 = \sqrt{2}\left(\cos\frac{\pi}{4}\cos\frac{\pi x}{4} - \sin\frac{\pi}{4}\sin\frac{\pi x}{4}\right)
 = \sqrt{2}\,\cos\frac{\pi(1+x)}{4}. \]
Upon expressing the above cosine function by the classical Euler infinite product
form in (2.2), the first alternative version of the expansion in (2.22) appears as
\[ \cos\frac{\pi x}{4} - \sin\frac{\pi x}{4} = \sqrt{2}\,\cos\frac{\pi(1+x)}{4}
 = \sqrt{2}\prod_{k=1}^{\infty}\left(1 - \frac{(1+x)^2}{4(2k-1)^2}\right). \qquad (2.23) \]
If in contrast to the derivation just completed, the left-hand side of (2.22) is sim-
ilarly expressed as a single sine function
\[ \cos\frac{\pi x}{4} - \sin\frac{\pi x}{4} = \sqrt{2}\,\sin\frac{\pi(1-x)}{4}, \]
then one arrives, with the aid of the classical Euler infinite product form for the sine
function in (2.1), at another alternative representation to that in (2.22),
\[ \cos\frac{\pi x}{4} - \sin\frac{\pi x}{4}
 = \frac{\pi\sqrt{2}}{4}\,(1-x)\prod_{k=1}^{\infty}\left(1 - \frac{(1-x)^2}{16k^2}\right). \qquad (2.24) \]
It is evident that the versions in (2.23) and (2.24) are more efficient computation-
ally than that in (2.22). Indeed, they converge at the rate 1/k 2 , in contrast to the rate
1/k for the expansion in (2.22).
As to the representations in (2.23) and (2.24), it can be shown that the relative
convergence of the latter must be slightly faster. This assertion directly follows from
observation of the denominators in their fractional components. Indeed, the inequal-
ity
\[ 4(2k-1)^2 = 16k^2 - 16k + 4 < 16k^2 \]
holds for any positive integer k, since 16k − 4 > 0.
Relative convergence of the representations in (2.22) and (2.24) can be observed
in Figs. 2.5 and 2.6, where their second, fifth, and tenth partial products are plotted
on the interval [0, π].
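A similar numerical comparison of the representations in (2.23) and (2.24) can be sketched in Python as follows (an illustration of ours; the test point and truncation levels are arbitrary):

```python
import math

def lhs(x):
    return math.cos(math.pi * x / 4) - math.sin(math.pi * x / 4)

def partial_2_23(x, K):
    """K-th partial product of the representation in (2.23)."""
    p = math.sqrt(2.0)
    for k in range(1, K + 1):
        p *= 1.0 - (1.0 + x) ** 2 / (4.0 * (2 * k - 1) ** 2)
    return p

def partial_2_24(x, K):
    """K-th partial product of the representation in (2.24)."""
    p = math.pi * math.sqrt(2.0) / 4.0 * (1.0 - x)
    for k in range(1, K + 1):
        p *= 1.0 - (1.0 - x) ** 2 / (16.0 * k * k)
    return p

x = 0.3
for K in (2, 5, 10):
    print(K, partial_2_23(x, K), partial_2_24(x, K), lhs(x))
```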
Derivation of the next infinite product representation of an elementary function,
which is available in [9] (see #1.434),
\[ \cos^2 x = \frac{1}{4}(\pi+2x)^2
   \prod_{k=1}^{\infty}\left(1 - \frac{(\pi+2x)^2}{4k^2\pi^2}\right)^2, \qquad (2.25) \]
which is presented in [9] as #1.435. If the sine functions in the numerator and de-
nominator are expressed in terms of the classical Euler form, then (2.26) reads
\[ \frac{\sin\pi(x+a)}{\sin\pi a}
 = \frac{\pi(x+a)\prod_{k=1}^{\infty}\bigl[1-\frac{\pi^2(x+a)^2}{k^2\pi^2}\bigr]}
        {\pi a\prod_{k=1}^{\infty}\bigl[1-\frac{\pi^2 a^2}{k^2\pi^2}\bigr]}. \]
\[ \sin\pi a - \sin\pi x = 2\sin\frac{\pi(a-x)}{2}\cos\frac{\pi(a+x)}{2} \]
and
\[ \sin\pi a + \sin\pi x = 2\sin\frac{\pi(a+x)}{2}\cos\frac{\pi(a-x)}{2}. \]
With this, we regroup the numerator in (2.29) as
\[ \frac{(a+x)(a-x)}{a^2}
   \prod_{k=1}^{\infty}\frac{\bigl[1-\frac{(a+x)^2}{k^2}\bigr]\bigl[1-\frac{(a-x)^2}{k^2}\bigr]}{\bigl(1-\frac{a^2}{k^2}\bigr)^2}, \]
So, grouping the first factor with the third, and the second with the fourth, one
converts the numerator in (2.30) into
\[ \bigl[(k-a)^2 - x^2\bigr]\bigl[(k+a)^2 - x^2\bigr], \]
and finally to
\[ \left(1-\frac{x^2}{a^2}\right)
   \prod_{k=1}^{\infty}\left(1-\frac{x^2}{(k-a)^2}\right)\left(1-\frac{x^2}{(k+a)^2}\right). \]
Consider next the identity
\[ \frac{\sin 3x}{\sin x} = -\prod_{k=-\infty}^{\infty}\left[1-\left(\frac{2x}{x+k\pi}\right)^2\right]. \qquad (2.31) \]
To verify this identity, we decompose first the difference of squares in the product
as
\[ -\prod_{k=-\infty}^{\infty}\left[1-\left(\frac{2x}{x+k\pi}\right)^2\right]
 = -\prod_{k=-\infty}^{\infty}\left(1-\frac{2x}{x+k\pi}\right)\left(1+\frac{2x}{x+k\pi}\right) \]
and then convert the above infinite product to an equivalent form. Namely, by split-
ting off the term with k = 0, which is evidently equal to −3, and pairing the kth and
the −kth terms, the above product transforms into
\[ 3\prod_{k=1}^{\infty}\left(1-\frac{2x}{x+k\pi}\right)\left(1-\frac{2x}{x-k\pi}\right)
   \left(1+\frac{2x}{x+k\pi}\right)\left(1+\frac{2x}{x-k\pi}\right) \]
and then into
\[ 3\prod_{k=1}^{\infty}\frac{k\pi-x}{x+k\pi}\cdot\frac{-k\pi-x}{x-k\pi}
   \cdot\frac{3x+k\pi}{x+k\pi}\cdot\frac{3x-k\pi}{x-k\pi}. \]
Clearly, the first two factors under the product sign cancel, leaving the right-hand
side of (2.31) as
\[ 3\prod_{k=1}^{\infty}\frac{3x+k\pi}{x+k\pi}\cdot\frac{3x-k\pi}{x-k\pi}. \qquad (2.32) \]
As to the left-hand side in (2.31), we reduce both the sine functions in it to the
infinite product form
\[ \frac{\sin 3x}{\sin x}
 = 3\prod_{k=1}^{\infty}\frac{1-\frac{(3x)^2}{k^2\pi^2}}{1-\frac{x^2}{k^2\pi^2}}
 = 3\prod_{k=1}^{\infty}\frac{9x^2-k^2\pi^2}{x^2-k^2\pi^2}, \]
which is identical to the expression in (2.32). Thus, the identity in (2.31) is ulti-
mately verified.
We turn next to an infinite product representation of another elementary function,
\[ \frac{\cosh x - \cos\alpha}{1-\cos\alpha}
 = \prod_{k=-\infty}^{\infty}\left[1+\left(\frac{x}{2k\pi+\alpha}\right)^2\right], \qquad (2.33) \]
which is listed in [9] as #1.438. To verify this identity, we transform its left-hand
side as
\[ \frac{\cosh x - \cos\alpha}{1-\cos\alpha}
 = \frac{\cos ix - \cos\alpha}{1-\cos\alpha}
 = \frac{\sin\frac{\alpha+ix}{2}\,\sin\frac{\alpha-ix}{2}}{\sin^2\frac{\alpha}{2}}. \]
We then express the sine functions by the classical Euler infinite product form,
and perform some obvious elementary transformations. This yields
\[ \frac{\alpha^2+x^2}{\alpha^2}
   \prod_{k=1}^{\infty}\frac{\bigl[4k^2\pi^2-(\alpha+ix)^2\bigr]\bigl[4k^2\pi^2-(\alpha-ix)^2\bigr]}{(2k\pi+\alpha)^2\,(2k\pi-\alpha)^2}, \]
which transforms as
\[ \left(1+\frac{x^2}{\alpha^2}\right)
   \prod_{k=1}^{\infty}\frac{(2k\pi-\alpha-ix)(2k\pi+\alpha+ix)}{(2k\pi+\alpha)^2}
   \cdot\frac{(2k\pi-\alpha+ix)(2k\pi+\alpha-ix)}{(2k\pi-\alpha)^2}. \]
Combining the first factor with the third, and the second with the fourth in the
numerator, one converts the above into
\[ \left(1+\frac{x^2}{\alpha^2}\right)
   \prod_{k=1}^{\infty}\frac{(2k\pi-\alpha)^2+x^2}{(2k\pi-\alpha)^2}\cdot\frac{(2k\pi+\alpha)^2+x^2}{(2k\pi+\alpha)^2}, \]
that is,
\[ \left(1+\frac{x^2}{\alpha^2}\right)
   \prod_{k=1}^{\infty}\left(1+\frac{x^2}{(2k\pi-\alpha)^2}\right)\left(1+\frac{x^2}{(2k\pi+\alpha)^2}\right). \qquad (2.34) \]
It can be shown that the above infinite product (where the multiplication is as-
sumed from one to infinity) transforms to that in (2.33), where we “sum” from neg-
ative infinity to positive infinity. To justify this assertion, we formally break down
the product in (2.34) into two pieces,
\[ \left(1+\frac{x^2}{\alpha^2}\right)
   \prod_{m=1}^{\infty}\left(1+\frac{x^2}{(2m\pi-\alpha)^2}\right)
   \cdot\prod_{k=1}^{\infty}\left(1+\frac{x^2}{(2k\pi+\alpha)^2}\right), \]
and change the multiplication index in the first of the products via m = −k. This
converts the above expression to
\[ \left(1+\frac{x^2}{\alpha^2}\right)
   \prod_{k=-1}^{-\infty}\left(1+\frac{x^2}{(2k\pi+\alpha)^2}\right)
   \cdot\prod_{k=1}^{\infty}\left(1+\frac{x^2}{(2k\pi+\alpha)^2}\right), \]
which is just the right-hand side of the relation in (2.33). Thus, the identity in (2.33)
is verified.
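A quick numerical check of the identity in (2.33) (an illustration of ours, using a symmetric truncation of the doubly infinite product) might look as follows:

```python
import math

def lhs(x, alpha):
    return (math.cosh(x) - math.cos(alpha)) / (1.0 - math.cos(alpha))

def rhs(x, alpha, K=2000):
    """Symmetric truncation of the doubly infinite product in (2.33)."""
    p = 1.0
    for k in range(-K, K + 1):
        p *= 1.0 + (x / (2 * k * math.pi + alpha)) ** 2
    return p

print(lhs(0.7, 1.2), rhs(0.7, 1.2))  # the two values should nearly coincide
```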
We have finished our review of infinite product expansions of elementary func-
tions that can be directly derived with the aid of the classical Euler representations
for the trigonometric sine and cosine functions.
A few expansions, whose derivation will be conducted in the remaining part of
this section, illustrate a variety of other possible approaches to the problem. Let us
recall an alternative to Euler’s infinite product expansion (2.1) of the trigonometric
sine function. That is,
\[ \sin x = x\prod_{k=1}^{\infty}\cos\frac{x}{2^k}, \qquad (2.35) \]
which also has been known for centuries and is listed, in particular, in [9] as #1.439.
A formal comment is appropriate as to the convergence of the infinite product
in (2.35). It converges to nonzero values of the sine function for any value of the
variable x that does not make the argument of the cosine equal to π/2 + nπ , whereas
it diverges to zero at such values of x, matching zero values of the sine function.
The derivation strategy that we are going to pursue in the case of (2.35) is based
on the definition of the value of an infinite product. The strategy has two stages.
First, a compact expression must be derived for the Kth partial product PK (x),
\[ P_K(x) = \prod_{k=1}^{K}\cos\frac{x}{2^k}, \]
of the infinite product in (2.35). Then the limit of PK (x) is obtained as K ap-
proaches infinity.
To obtain a compact form of the partial product PK (x) for (2.35), we rewrite its
first factor \(\cos\frac{x}{2}\) as
\[ \cos\frac{x}{2} = \frac{2\sin\frac{x}{2}\cos\frac{x}{2}}{2\sin\frac{x}{2}} = \frac{\sin x}{2\sin\frac{x}{2}}. \]
Similarly, the second factor \(\cos\frac{x}{2^2}\) and the third factor \(\cos\frac{x}{2^3}\) in \(P_K(x)\) turn out
to be
\[ \cos\frac{x}{2^2} = \frac{2\sin\frac{x}{2^2}\cos\frac{x}{2^2}}{2\sin\frac{x}{2^2}} = \frac{\sin\frac{x}{2}}{2\sin\frac{x}{2^2}} \]
and
\[ \cos\frac{x}{2^3} = \frac{2\sin\frac{x}{2^3}\cos\frac{x}{2^3}}{2\sin\frac{x}{2^3}} = \frac{\sin\frac{x}{2^2}}{2\sin\frac{x}{2^3}}. \]
Proceeding like this with the next-to-last factor \(\cos\frac{x}{2^{K-1}}\) and the last factor
\(\cos\frac{x}{2^K}\) in \(P_K(x)\), we express them as
\[ \cos\frac{x}{2^{K-1}} = \frac{2\sin\frac{x}{2^{K-1}}\cos\frac{x}{2^{K-1}}}{2\sin\frac{x}{2^{K-1}}}
 = \frac{\sin\frac{x}{2^{K-2}}}{2\sin\frac{x}{2^{K-1}}} \]
and
\[ \cos\frac{x}{2^K} = \frac{2\sin\frac{x}{2^K}\cos\frac{x}{2^K}}{2\sin\frac{x}{2^K}}
 = \frac{\sin\frac{x}{2^{K-1}}}{2\sin\frac{x}{2^K}}. \]
Once all the factors are put together, we have a series of cancellations, and the
partial product PK (x) eventually reduces to the form
\[ P_K(x) = \frac{\sin x}{2^K \sin\frac{x}{2^K}}. \qquad (2.36) \]
Since \(\lim_{K\to\infty} 2^K\sin\frac{x}{2^K} = x\), the limit of \(P_K(x)\) as K approaches infinity
is \(\sin x / x\), which completes the derivation of (2.35). A similar representation holds for
the hyperbolic sine function,
\[ \sinh x = x\prod_{k=1}^{\infty}\cosh\frac{x}{2^k}, \qquad (2.37) \]
which is available in [20]. It is evident that its derivation can also be conducted in
exactly the same way as for the one in (2.35).
In what follows, the strategy just illustrated will be applied to the derivation of
an infinite product representation for another elementary function, that is,
\[ \frac{1}{1-x} = \prod_{k=0}^{\infty}\bigl(1 + x^{2^k}\bigr), \quad |x| < 1. \qquad (2.38) \]
It can also be found in [20]. To proceed with the derivation, we transform the general
term in (2.38) as
\[ 1 + x^{2^k} = \frac{1 - x^{2^{k+1}}}{1 - x^{2^k}} \]
and write down the Kth partial product PK (x) of the representation in (2.38) ex-
plicitly as
\[ P_K(x) = \prod_{k=0}^{K}\bigl(1 + x^{2^k}\bigr)
 = \frac{1-x^2}{1-x}\cdot\frac{1-x^4}{1-x^2}\cdot\frac{1-x^8}{1-x^4}\cdots
   \frac{1-x^{2^K}}{1-x^{2^{K-1}}}\cdot\frac{1-x^{2^{K+1}}}{1-x^{2^K}}. \]
It is evident that nearly all the terms in the above product cancel. Indeed, the only
terms left are the denominator 1 − x of the first factor and the numerator \(1-x^{2^{K+1}}\) of
the last factor. This reduces the partial product \(P_K(x)\) to the compact form
\[ \prod_{k=0}^{K}\bigl(1 + x^{2^k}\bigr) = \frac{1-x^{2^{K+1}}}{1-x}, \]
whose limit, for |x| < 1,
\[ \lim_{K\to\infty}\prod_{k=0}^{K}\bigl(1 + x^{2^k}\bigr)
 = \lim_{K\to\infty}\frac{1-x^{2^{K+1}}}{1-x} = \frac{1}{1-x}, \]
confirms the representation in (2.38).
One can observe the simple relation
\[ P_K(x) = \sum_{n=0}^{2^{K+1}-1} x^n \]
between the partial product of (2.38) and the partial sum of the series. This observa-
tion means that the infinite product in (2.38) converges, at least formally, at a much
faster rate.
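The relation between the partial products of (2.38) and the partial sums of the geometric series is easy to confirm numerically; the following Python sketch (ours, for illustration) prints both side by side:

```python
def product_partial(x, K):
    """Partial product of (2.38) with factors (1 + x^(2^k)), k = 0..K."""
    p = 1.0
    for k in range(K + 1):
        p *= 1.0 + x ** (2 ** k)
    return p

def geometric_partial(x, N):
    """Partial sum 1 + x + ... + x^(N-1) of the geometric series."""
    return sum(x ** n for n in range(N))

x = 0.6
for K in range(5):
    # the K-th partial product reproduces the geometric sum with 2^(K+1) terms
    print(K, product_partial(x, K), geometric_partial(x, 2 ** (K + 1)), 1.0 / (1.0 - x))
```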
To complete the review of methods customarily used for the infinite product rep-
resentation of elementary functions, let us recall an approach to the square root
function \(\sqrt{1+x}\), which is described in [20], for example. The function is first trans-
formed as
\[ \sqrt{1+x} = \frac{2(x+1)}{x+2}\sqrt{(1+x)\,\frac{(x+2)^2}{4(x+1)^2}}, \qquad (2.39) \]
and the radicand on the right-hand side is then simplified as
\[ (1+x)\,\frac{(x+2)^2}{4(x+1)^2} = \frac{(x+2)^2}{4(x+1)}, \]
resulting in
\[ \sqrt{1+x} = \frac{2(x+1)}{x+2}\sqrt{\frac{(x+2)^2}{4(x+1)}}
 = \frac{2(x+1)}{x+2}\sqrt{\frac{x^2+4x+4}{4x+4}}
 = \frac{2(x+1)}{x+2}\sqrt{1+\frac{x^2}{4x+4}}. \qquad (2.40) \]
We then apply to the radical factor on the right-hand side of (2.40) the same transformation
that has just been applied to the function \(\sqrt{1+x}\) in (2.39). This yields
\[ \sqrt{1+x} = \frac{2(x+1)}{x+2}\cdot
   \frac{2\bigl(\frac{x^2}{4(x+1)}+1\bigr)}{\frac{x^2}{4(x+1)}+2}
   \sqrt{1+\frac{\bigl(\frac{x^2}{4(x+1)}\bigr)^2}{4\bigl(\frac{x^2}{4(x+1)}+1\bigr)}}. \]
Proceeding further with this algorithm, one arrives at the infinite product repre-
sentation
\[ \sqrt{1+x} = \prod_{k=0}^{\infty}\frac{2(A_k+1)}{A_k+2} \qquad (2.41) \]
for the square root function, where the parameter \(A_k\) is obtained from the recurrence
\[ A_0 = x \quad\text{and}\quad A_{k+1} = \frac{A_k^2}{4(A_k+1)}, \quad k = 0, 1, 2, \ldots. \]
2.4 Chapter Exercises 41
It appears that the convergence rate of the expansion in (2.41) is extremely fast.
This assertion is illustrated with Fig. 2.7, where the partial products P0 , P1 , and P2
of the representation are depicted.
This completes the review that we intended to provide the reader of infinite prod-
uct representations of elementary functions available in the current literature.
In the next chapter, the reader’s attention will be directed to a totally different
subject. Namely, we will begin a review of a collection of methods that are tra-
ditionally used for the construction of Green’s functions for the two-dimensional
Laplace equation.
The purpose for such a sharp turn is twofold. First, we aim at giving a more
comprehensive, in comparison with other relevant sources, review of the available
procedures for the construction of Green’s functions for a variety of boundary-value
problems for the Laplace equation. Second, one of those procedures represents a
significant issue for Chap. 6, where an innovative approach will be discussed for the
expression of elementary functions in terms of infinite products.
2.1 Use Euler’s approach and derive the infinite product representation in (2.4) for
the hyperbolic cosine function.
2.3 Derive the infinite product representation in (2.37) for the hyperbolic sine func-
tion.
a sin x + b cos x,
sin x + sin y.
cos x + cos y.
cot x + cot y.
coth x + coth y.
Chapter 3
Green’s Functions for the Laplace Equation
Our recent work reported in [27, 28] provides convincing evidence of a surprising
linkage between the topics of approximation of functions and the Green’s function
for some partial differential equations. The linkage appears promising and extremely
productive. It has generated an unlooked-for approach to the infinite product repre-
sentation of elementary functions.
Our work here focuses on a comprehensive review of two standard methods that
can potentially be (and are, actually) used for the construction of Green’s functions
to boundary-value problems for the two-dimensional Laplace equation. These are
the method of images, which is reviewed in Sect. 3.1, and the method of conformal
mapping, whose review is given in Sect. 3.2.
The present chapter is primarily designed to provide a preparatory basis for
Chap. 6, which plays a central role in the entire volume. An innovative approach
is proposed in that chapter to the infinite product representation of some elementary
functions, in particular for a number of trigonometric and hyperbolic functions.
∇ 2 u(P ) = 0, P ∈ , (3.2)
T u(P ) = 0, P ∈ L, (3.3)
m
1
R(P , Q) = ± ln |P − Q∗j | (3.4)
2π
j =1
a harmonic function at every point P in (since all the source points Q∗j are outside
). The plus sign in (3.4) corresponds to a sink, and the minus to a source. Clearly,
G(P , Q), with such a regular component R(P , Q), represents a harmonic function
at every point P ∈ , except at P = Q. In addition, the boundary condition in (3.3)
is supposed to be satisfied by appropriately choosing locations for Q∗1 , Q∗2 , . . . , Q∗m .
That is, the trace of the singular component −T [ 2π
1
ln |P − Q|] on the boundary line
L is supposed to be compensated by T [R(P , Q)].
Example 3.1 For the first example on the use of the method of images, we consider
a classical case of the Dirichlet problem for the upper half-plane (x, y) = {−∞ <
x < ∞, y > 0}, and construct its Green’s function.
The influence of the unit source at a point Q(ξ, η) (the singular component of
the Green’s function)
1
− ln (x − ξ )2 + (y − η)2
4π
can be compensated, in this case, with a single unit sink placed at the point
Q∗ (ξ, −η) located in the lower half-plane and symmetric to Q(ξ, η) about the
boundary y = 0 of the half-plane. With the influence of this sink given as
1
ln (x − ξ )2 + (y + η)2 ,
4π
3.1 Construction by the Method of Images 45
the Green’s function of the Dirichlet problem for the upper half-plane is finally
found as
1 (x − ξ )2 + (y + η)2
G(x, y; ξ, η) = ln . (3.5)
4π (x − ξ )2 + (y − η)2
Example 3.2 As our next example, we consider another classical case of the Dirich-
let problem for the quarter-plane (one may refer to it as the infinite wedge of π/2),
(r, ϕ) = {0 < r < ∞, 0 < ϕ < π/2}.
Since the distance between two points (r1 , ϕ1 ) and (r2 , ϕ2 ) is defined in polar
coordinates as
r12 − 2r1 r2 cos(ϕ1 − ϕ2 ) + r22 ,
the singular component of the Green’s function G(r, ϕ; , ψ) reads as
1 2
− ln r − 2r cos(ϕ − ψ) + 2
, (3.6)
4π
which represents the response at an observation point M(r, ϕ) ∈ to the unit source
(labeled here and later with a plus sign) placed at A( , ψ) ∈ (see Fig. 3.1).
In order to compensate the trace of the function in (3.6) (or in other words, to
support the Dirichlet condition) on the boundary segment y = 0, we place a unit
sink (labeled with an asterisk) at D( , 2π − ψ). The influence of this sink is given
by
1 2
ln r − 2r cos ϕ − (2π − ψ) + 2 . (3.7)
4π
Similarly, with a unit sink at B( , π − ψ), whose influence is defined as
1 2
ln r − 2r cos ϕ − (π − ψ) + 2
, (3.8)
4π
46 3 Green’s Functions for the Laplace Equation
1 2
r 2 − 2r cos(ϕ − (nπ − ψ)) + 2
G(r, ϕ; , ψ) = ln , (3.10)
4π r 2 − 2r cos(ϕ − ((n − 1)π + ψ)) + 2
n=1
represents the Green’s function of the Dirichlet problem for the infinite wedge {0 <
r < ∞, 0 < ϕ < π/2}.
Example 3.3 Note that if compensatory sources and sinks are placed, for the infinite
wedge (r, ϕ) = {0 < r < ∞, 0 < ϕ < π/2}, in a manner different from that just
described in Example 3.2, then the method of images enables us to construct the
Green’s function for a certain mixed boundary-value problem.
Proceeding in compliance with the scheme depicted in Fig. 3.2, one obtains the
Green’s function
1 r 2 − 2r cos(ϕ − (2π − ψ)) + 2
G(r, ϕ; , ψ) = ln
4π r 2 − 2r cos(ϕ − ψ) + 2
r 2 − 2r cos(ϕ − (π + ψ)) + 2
× (3.11)
r 2 − 2r cos(ϕ − (π − ψ)) + 2
In a series of examples that follow, we show that although the method of im-
ages appears productive for a number of boundary-value problems stated on infinite
wedges, it does not work for some of them.
Example 3.4 Consider the Dirichlet problem for the wedge of π/3, (r, ϕ) = {0 <
r < ∞, 0 < ϕ < π/3}. To construct the Green’s function, the reader could follow,
in this case, the procedure in detail by examining the scheme depicted in Fig 3.3.
1 2
− ln r − 2r cos(ϕ − ψ) + 2
4π
of the Green’s function on the boundary fragment ϕ = 0, we place a compensatory
unit sink at F ( , 2π − ψ), while another unit sink is required at B( , 2π/3 − ψ)
to support the Dirichlet condition on ϕ = π/3. To compensate the trace of the latter
sink on the boundary fragment ϕ = 0, a unit source is required at E( , 4π/3 +
ψ). The trace of the latter source is compensated on ϕ = π/3 with a unit sink at
D( , 4π/3 − ψ), while the trace of this sink is compensated on ϕ = 0 with a unit
source placed at C( , 2π/3 + ψ).
Thus, the aggregate influence of the five compensatory sources and sinks located
outside , as shown in Fig. 3.3, represents the regular component R(r, ϕ; , ψ) of
the Green’s function of the Dirichlet problem for the wedge of π/3. The Green’s
function itself is ultimately obtained in the form
3
r 2 − 2r cos(ϕ − ( 2nπ
3 − ψ)) +
2
1
G(r, ϕ; , ψ) = ln 2(n−1)π
. (3.12)
n=1 r − 2r cos(ϕ − ( + ψ)) + 2
4π 2
3
In contrast to the case of the mixed problem considered in Example 3.3 for the
wedge of π/2, the method of images fails for the problem considered in the next
example.
48 3 Green’s Functions for the Laplace Equation
Example 3.5 To follow the procedure in detail and observe its failure for the
Dirichlet–Neumann problem stated for the wedge of π/3, the reader is referred to
the scheme of Fig 3.4.
Example 3.6 Consider the case of the Dirichlet problem stated on the infinite wedge
of π/4, (r, ϕ) = {0 < r < ∞, 0 < ϕ < π/4}.
The scheme depicted in Fig 3.5 allows the reader to follow the procedure in
detail and helps ultimately to obtain the Green’s function that we are looking for in
the compact form
4
r 2 − 2r cos(ϕ − ( nπ
2 − ψ)) +
2
1
G(r, ϕ; , ψ) = ln (n−1)π
. (3.13)
n=1 r − 2r cos(ϕ − ( + ψ)) + 2
4π 2
2
Example 3.7 The Green’s function of the mixed problem for the wedge of π/4 can
also be obtained by the method of images. To justify this claim, consider the state-
ment with the Dirichlet and Neumann conditions imposed on the boundary segments
ϕ = 0 and ϕ = π/4, respectively.
3.1 Construction by the Method of Images 49
In order to trace out the image method, we examine the scheme shown in Fig 3.6.
Combining the influence of the eight sources and sinks that emerge in this case, we
obtain the Green’s function that we are looking for in the form
1 2
r 2 − 2r cos(ϕ − ( (2n−1)π + ψ)) + 2
G(r, ϕ; , ψ) = ln 2
(2n−1)π
n=1 r − 2r cos(ϕ − ( − ψ)) +
4π 2 2
2
Example 3.8 As to the Dirichlet problem for the wedge of π/6, the scheme of the
method of images results in twelve unit sources and sinks, the aggregate of which
represents the Green’s function of interest, which appears in the form
6
r 2 − 2r cos(ϕ − ( nπ
3 − ψ)) +
2
1
G(r, ϕ; , ψ) = ln (n−1)π
. (3.15)
n=1 r − 2r cos(ϕ − ( + ψ)) + 2
4π 2
3
50 3 Green’s Functions for the Laplace Equation
Example 3.9 Observing the expression derived earlier for the Green’s function of
the Dirichlet problem on the wedge of π/2 and presented in (3.10) along with the
one obtained for the wedge of π/4 (see (3.13)), we arrive at the generalization
k
1 2
r 2 − 2r cos(ϕ − ( 2nπ
k−1 − ψ)) +
2
G(r, ϕ; , ψ) = ln (n−1)π
, (3.16)
n=1 r − 2r cos(ϕ − ( 2k−1 + ψ)) +
4π 2 2
representing the Green’s function of the Dirichlet problem for the wedge of π/2k ,
where k = 0, 1, 2, . . . .
It is worth noting that the case of k = 0, which corresponds to the wedge of π or
in other words, to the upper half-plane y > 0, reads from (3.16) as
1 r 2 − 2r cos(ϕ + ψ) + 2
G(r, ϕ; , ψ) = ln 2 ,
4π r − 2r cos(ϕ − ψ) + 2
representing the Green’s function derived earlier (see (3.5)) and expressed here in
polar coordinates.
Example 3.10 Upon analyzing the expressions in (3.12) and (3.15), obtained for the
wedges of π/3 and π/6, we obtain the Green’s function of the Dirichlet problem
for the wedge of π/(3 · 2k ) in the form
k
1 r 2 − 2r cos(ϕ − ( 2nπk − ψ)) + 2
3·2
G(r, ϕ; , ψ) = ln 3·2
, (3.17)
4π r 2 − 2r cos(ϕ − ( 2(n−1)π + ψ)) + 2
n=1 3·2k
Example 3.11 Consider the Dirichlet problem for the infinite wedge (r, ϕ) = {0 <
r < ∞, 0 < ϕ < 2π/3} and try to construct its Green’s function.
The failure of the method can be observed, in this case, with the aid of the scheme
shown in Fig. 3.7. Let a unit source (which produces the singular component of the
Green’s function) be located at A( , ψ) ∈ . To compensate its trace on the frag-
ment ϕ = 0 of the boundary of , place a compensatory sink at D( , 2π − ψ) ∈ / .
3.1 Construction by the Method of Images 51
The trace of the latter on the boundary fragment ϕ = 2π/3 is compensated, in turn,
with a unit source at C( , 4π/3 + ψ) ∈ / , whose trace on ϕ = 0 should be compen-
sated with a unit sink at B( , 2π/3 − ψ), which is, unfortunately, located inside .
And this is what justifies the failure of the method. Why so? Because compensatory
sources and sinks cannot, according to the definition of the Green’s function, be
located inside .
Thus, the above example illustrates the fact that the method of images may fail in
the construction of the Green’s function for the Dirichlet problem stated on a wedge
that allows cyclic symmetry. To observe some other cases in which the method fails,
the reader is invited (in the chapter exercises) to apply the procedure to other wedges
(of 2π/5 or 2π/7, for example) also allowing the cyclic symmetry.
Based on the experience gained so far, it sounds reasonable to make the following
observation. The method of images appears workable for Dirichlet problems stated
on the wedges of π/k, where k represents an integer. But a word of caution is
appropriate as to the above assertion. It is just an assertion, and the reader is strongly
encouraged to prove it rigorously.
Example 3.12 For the next illustrative example on the effective implementation of
the method of images, let us apply it to the construction of Green’s function for
another classical case of the Dirichlet problem stated on the disk of radius a.
The strategy of tackling the current problem with the method of images is based
on an obvious observation concerning the shape of equipotential lines in the field
generated by a point source or sink. Since these lines represent concentric circles
centered at the generating point, the following statement looks reasonable. That is,
for every location A of a unit source inside the disk, there exists a proper location
B of the compensatory unit sink outside the disk such that the circumference of the
disk is an equipotential line for the field generated by both the source and the sink.
Applying the strategy just described, we assume that the disk is centered at the
origin of the polar coordinate system r, ϕ and let the unit source generating the
52 3 Green’s Functions for the Laplace Equation
singular component
1 2
− ln r − 2r cos(ϕ − ψ) + 2
(3.18)
4π
of the Green’s function at M(r, ϕ) be located at a point A( , ψ) (see Fig. 3.8). Let
also C(a, ϕ) be an arbitrary point on the circumference of the disk. It is evident that
the point B( , ψ), where the compensatory unit sink
1 2
ln r − 2r cos(ϕ − ψ) + 2
(3.19)
4π
is located, must be on the extension of the radial line of A. In other words, the
angular coordinate ψ of B must be the same as that of A. As to the radial coordinate
of B, it should be determined from the condition that the sum of (3.18) and (3.19)
is a constant, say λ, when M is taken to C (r = a). For the sake of convenience, we
express λ as
1
λ=− ln μ. (3.20)
4π
This yields
1 a 2 − 2a cos(ϕ − ψ) + 2 1
ln 2 =− ln μ,
4π a − 2a cos(ϕ − ψ) + 2 4π
or
a 2 − 2a cos(ϕ − ψ) + 2
= μ a 2 − 2a cos(ϕ − ψ) + 2
.
Making the substitution
=ω , (3.21)
we transform the above equation into
a 2 − 2a cos(ϕ − ψ) + 2
= μ a 2 − 2aω cos(ϕ − ψ) + ω2 2
. (3.22)
3.1 Construction by the Method of Images 53
Clearly, the equation in (3.22) must hold for any value of ϕ − ψ. So, by assuming,
for instance, ϕ − ψ = π/2 (which implies cos(ϕ − ψ) = 0), we reduce (3.22) to
a2 + 2
= μ a 2 + ω2 2
. (3.23)
This simply means that μω = 1, that is, the values of μ and ω are reciprocals.
Substitution of μ = 1/ω into (3.23) yields
1 2
a2 + 2
= a +ω 2
.
ω
The above equation can be rewritten as
a 2 (ω − 1) = 2
ω(ω − 1). (3.24)
= a2/ and μ = 2
/a 2 .
Thus, we have found the location where the point B(a 2 / , ψ) should be placed.
Such a point is usually referred to as the image of A about the circumference of the
disk. We also found the value of λ in (3.20):
1 2
λ=− ln 2 .
4π a
To complete the construction of the Green’s function, observe that the unit sink
at B generates the potential field
1 a4 a2
ln 2 − 2r cos(ϕ − ψ) + r 2
4π
at a point M(r, ϕ) inside the disk. Hence, the potential field generated at M(r, ϕ) by
both the unit source at A and the compensatory unit sink at B is defined as
1 a 4 − 2r a 2 cos(ϕ − ψ) + r 2 2
ln 2 2 . (3.25)
4π (r − 2r cos(ϕ − ψ) + 2 )
which reduces to
1 a 4 − 2r a 2 cos(ϕ − ψ) + r 2 2
G(r, ϕ; , ψ) = ln 2 2 . (3.26)
4π a (r − 2r cos(ϕ − ψ) + 2 )
(ξ, η). But what about the third defining property? Does the function in (3.29) vanish
on the boundary L of ? The answer is yes, because from the fact that the function
w(z, ζ ) maps L onto the circumference of the unit disk, it follows that
w(z, ζ ) = 1 for z ∈ L,
Example 3.13 Let the method of conformal mapping be applied to the Dirichlet
problem stated on the unit disk |z| ≤ 1.
The family of functions w(z, ζ ) that maps the unit disk conformally onto itself,
with a point z = ζ being mapped onto the disk’s center, is defined [5] as
z−ζ
w(z, ζ ) = eiβ · ,
zζ − 1
where β is a real parameter that is responsible for the rotation of the disk about its
center. For the sake of uniqueness, we neglect the rotation by assuming β = 0.
In compliance with (3.29), one arrives at the expression
1 z−ζ 1 zζ − 1
G(z, ζ ) = − ln = ln (3.32)
2π zζ − 1 2π z−ζ
The denominator in the argument of the logarithm in (3.32) represents the dis-
tance between z and ζ , which is
|z − ζ | = r 2 − 2r cos(ϕ − ψ) + 2 . (3.34)
Substituting (3.33) and (3.34) into (3.32), we finally obtain the Green’s function
of the Dirichlet problem for the unit disk:
1 r 2 2 − 2r cos(ϕ − ψ) + 1
G(r, ϕ; , ψ) = ln 2 . (3.35)
4π r − 2r cos(ϕ − ψ) + 2
The reader may compare this representation with the one derived earlier in
Sect. 3.1 (see (3.26), where a is to be set equal to unity).
Example 3.14 The method of conformal mapping will be used here to construct
the Green’s function of the Dirichlet problem for the infinite strip = {−∞ < x <
∞, 0 ≤ y ≤ π}.
In a course on complex analysis [5], the reader may have learned that the family
of functions
ez − eζ
w(z, ζ ) = eiβ (3.36)
ez − eζ
maps the infinite strip conformally onto the unit disk |w| ≤ 1, while the point
z = ζ is mapped onto the disk’s center w = 0. For the sake of uniqueness, we assume
β = 0 for the rotation parameter.
Before substituting the mapping function from (3.36) into (3.29), we express the
observation point and the source point in Cartesian coordinates
z = x + iy and ζ = ξ + iη,
and then transform the modulus of the numerator in (3.36) by means of the classical
Euler formula
ez − eζ = Re2 ez − eζ + Im2 ez − eζ ,
where the real and the imaginary parts read
Re ez − eζ = ex cos y − eξ cos η
and
Im ez − eζ = ex sin y − eξ sin η.
Trivial complex algebra further yields
ez − eζ = e2x + e2ξ − 2e(x+ξ ) cos(y − η)
= e · 1 − 2e(x−ξ ) cos(y − η) + e2(x−ξ ) .
ξ
58 3 Green’s Functions for the Laplace Equation
This puts the Green’s function we are looking for in the form
An equivalent but more compact form for this Green’s function can be obtained
by multiplying both the numerator and the denominator in (3.37) by e(ξ −x) . This
yields
1 e(x−ξ ) + e(ξ −x) − 2 cos(y + η)
G(x, y; ξ, η) = ln (x−ξ ) ,
4π e + e(ξ −x) − 2 cos(y − η)
which transforms, by dividing the numerator and the denominator of the argument
of the logarithm by 2, into the equivalent form
1 cosh(x − ξ ) − cos(y + η)
G(x, y; ξ, η) = ln . (3.38)
4π cosh(x − ξ ) − cos(y − η)
Example 3.15 The half-plane = {−∞ < x < ∞, y ≥ 0} maps conformally onto
the unit disk (with the point z = ζ mapped onto the disk’s center, w = 0) by a family
of functions, one of which is [5]
z−ζ
w(z, ζ ) = .
z−ζ
Thus, the Green’s function of the Dirichlet problem for the Laplace equation on
the half-plane is given by
1 z−ζ
G(z, ζ ) = − ln ,
2π z−ζ
1 (x − ξ )2 + (y + η)2
G(x, y; ξ, η) = ln , (3.39)
4π (x − ξ )2 + (y − η)2
1 r 2 − 2r cos(ϕ + ψ) + 2
G(r, ϕ; , ψ) = ln 2 . (3.40)
4π r − 2r cos(ϕ − ψ) + 2
Recall that the representations of the Green’s function for the half-plane shown
in (3.39) and (3.40) were already obtained in Sect. 3.1 by the method of images (see
Examples 3.1 and 3.9).
3.3 Chapter Exercises 59
Note that in each of the problems reviewed so far in this section, the regions under
consideration are mapped conformally onto the unit disk by an elementary function.
The problem that we will face in Example 3.16 below represents a challenge. The
point is that it aims at the Green’s function of the Dirichlet problem stated on a
rectangle. But the rectangle cannot [5], unfortunately, be mapped conformally onto
the interior of a disk by an elementary function.
Example 3.16 Construct the Green’s function of the Dirichlet problem stated for the
two-dimensional Laplace equation on the rectangle = {0 ≤ x ≤ a, 0 ≤ y ≤ b}.
From [5] the reader will have learned that the rectangle maps onto the unit disk
(with a point z = ζ mapped onto the disk’s center w = 0) conformally by the func-
tion
W (z − ζ ; ω1 , ω2 ) · W (z + ζ ; ω1 , ω2 )
w(z, ζ ) =
W (z − ζ ; ω1 , ω2 ) · W (z + ζ ; ω1 , ω2 )
defined in terms of the special function W (t; ω1 , ω2 ), which is called the Weierstrass
elliptic function. The parameters ω1 and ω2 are determined through the dimensions
of the rectangle as ω1 = 2a and ω2 = 2ib. To compute the Weierstrass function prac-
tically, the reader might go with its series representation given in [9], for example,
as
∞
1 1 1
W (t; ω1 , ω2 ) = 2 + − ,
t (t − 2mω1 − 2nω2 )2 (2mω1 + 2nω2 )2
m,n=0
where in the summation, we assume that the indices m and n are not equal to zero
simultaneously.
Thus, the Green’s function of the Dirichlet problem for the rectangle is ex-
pressed in terms of the Weierstrass function as
3.1 Derive the Green’s function presented in (3.13) for the Dirichlet problem stated
for the Laplace equation on the infinite wedge of π/4.
3.2 Derive the Green’s function presented in (3.15) for the Dirichlet problem stated
for the Laplace equation on the infinite wedge of π/6.
60 3 Green’s Functions for the Laplace Equation
3.3 Show that the method of images fails in the construction of the Green’s function
of the Dirichlet problem on the infinite wedge of 2π/5.
3.4 Show that the method of images fails in the construction of the Green’s function
of the Dirichlet problem on the infinite wedge of 2π/7.
3.5 Prove that the method of images is efficient for the construction of the Green’s
function of the Dirichlet problem stated on the infinite wedge of π/k, where k is an
integer.
Chapter 4
Green’s Functions for ODE
As was convincingly shown in Chap. 3, the methods of images and conformal map-
ping are helpful in obtaining Green’s functions for the two-dimensional Laplace
equation. But it is worth noting, at the same time, that the number of problems for
which these methods are productive, is notably limited. To support this assertion, re-
call that mixed boundary-value problems with Robin conditions imposed on a piece
of the boundary are not within the reach of these methods.
Another alternative approach to the construction of Green’s functions for many
partial differential equations is the method of eigenfunction expansion. But we do
not, however, immediately proceed with its coverage, postponing it for Chap. 5,
where its potential will be explored in full detail. The reason for that is methodolog-
ical. Certain preparatory work would help prior to turning to this method. In doing
so, we change topics by shifting from partial to ordinary differential equations.
Our objective is to assist the reader with an easier grasp of the material of Chap. 5,
where intensive work will be resumed on the Laplace equation. But before going
any further with this, the topic of Green’s functions for linear ordinary differential
equations will be explored here in some detail. A consistent use of the experience
gained in the current chapter will prove critical for later work on the method of
eigenfunction expansion.
∂g(x, s) ∂g(x, s) 1
lim − lim =− ,
x→s + ∂x x→s − ∂x p0 (s)
Two standard approaches to the construction of Green’s functions for linear ordi-
nary differential equations are traditionally recommended [5, 8]. The first of them is
based, as we already mentioned, on the defining properties just listed and represents,
in fact, a constructive proof of the existence and uniqueness theorem for the given
statement. The idea of the second approach is different. It is rooted in Lagrange’s
method of variation of parameters, which is usually used for finding particular solu-
tions to inhomogeneous linear equations.
To trace out the procedure of the approach based on the defining properties, let
functions y1 (x) and y2 (x) constitute a fundamental set of solutions for the equation
4.1 Construction by Defining Properties 63
in (4.1). That is, y1 (x) and y2 (x) are particular solutions of the equation that are
linearly independent on (a, b).
In compliance with property 1 of the definition, for any arbitrarily fixed value of
s ∈ (a, b), the Green’s function g(x, s) must be a solution of the equation in (4.1)
in (a, s) (on the left of s), as well as in (s, b) (on the right of s). Since any solution
of (4.1) can be expressed as a linear combination of y1 (x) and y2 (x), one may write
g(x, s) in the following form:
2 yj (x)Aj (s), for a ≤ x ≤ s,
g(x, s) = (4.3)
j =1 yj (x)Bj (s), for s ≤ x ≤ b,
2
Cj (s)yj (s) = 0 (4.4)
i=1
Another linear equation in C1 (s) and C2 (s) can be derived by turning to prop-
erty 3. This yields the equation
2
dyi (s) 1
Ci (s) =− . (4.6)
dx p0 (s)
i=1
Hence, the relation in (4.4) along with that in (4.6) forms a system of two si-
multaneous linear algebraic equations in C1 (s) and C2 (s). The determinant of the
coefficient matrix in this system is not zero, because it represents the Wronskian for
the fundamental set of solutions {yj (x), j = 1, 2}.
64 4 Green’s Functions for ODE
Thus, the system in (4.4) and (4.6) has a unique solution. In other words, one can
readily obtain explicit expressions for C1 (s) and C2 (s). This implies that, in view
of (4.5), two linear relations are already available for the four functions Aj (s) and
Bj (s). In order to obtain them, we take advantage of property 4. In doing so, let us
first break down the forms Mi (y(a), y(b)) in (4.2) into two additive components as
Mi y(a), y(b) = Si y(a) + Ti y(b) (i = 1, 2),
2
Si y(a) = αi,k−1 y (k−1) (a)
k=1
and
2
Ti y(b) = βi,k−1 y (k−1) (b).
k=1
In compliance with property 4, we substitute the expression for g(x, s) from (4.3)
into (4.2), and we obtain
Mi g(a, s), g(b, s) ≡ Si g(a, s) + Ti g(b, s) = 0 (i = 1, 2). (4.7)
Since the operator Si in (4.7) governs the values of g(a, s) at the left endpoint
x = a of the interval [a, b], while the operator Ti governs the values of g(b, s) at the
right endpoint x = b, the upper branch
2
yj (x)Aj (s)
j =1
of g(x, s) from (4.3) goes to Si (g(a, s)), while the lower branch
2
yj (x)Bj (s)
j =1
2
Si g(a, s) Aj (s) + Ti g(b, s) Bj (s) = 0 (i = 1, 2).
j =1
Replacing the expressions for Aj (s) in the above system with the differences
Bj (s)–Cj (s) in accordance with (4.5), one rewrites the system in the form
2
Si g(a, s) Bj (s) − Cj (s) + Tj g(b, s) Bj (s) = 0 (i = 1, 2).
j =1
4.1 Construction by Defining Properties 65
Combining the terms with Bj (s) and moving the term with Cj (s) to the right-
hand side, we obtain
2
2
Si g(a, s) + Ti g(b, s) Bj (s) = Si g(a, s) Cj (s) (i = 1, 2).
j =1 j =1
Upon recalling the relation from (4.7), the above equations can finally be rewrit-
ten in the form
2
2
Mi g(a, s), g(b, s) Bj (s) = Si g(a, s) Cj (s) (i = 1, 2). (4.8)
j =1 j =1
Thus, the relations in (4.8) constitute a system of two linear algebraic equations
in Bj (s). The coefficient matrix of this system is nonsingular, because the forms Mi
are linearly independent. The right-hand-side vector in (4.8) is defined in terms of
the values of Cj (s), which have already been found. The system has, consequently,
a unique solution for B1 (s) and B2 (s). So, once these are available, unique expres-
sions for Aj (s) can readily be obtained from (4.5).
Hence, upon substituting the expressions obtained for Aj (s) and Bj (s) into (4.3),
we obtain an explicit representation for the Green’s function that we are looking for.
In what follows, a series of examples is presented, where a number of different
boundary-value problems are considered, illustrating the described approach to the
construction of Green’s functions in detail.
Example 4.1 We start with a simple boundary-value problem in which the differen-
tial equation
d 2 y(x)
= 0, x ∈ (0, a), (4.9)
dx 2
is subject to the boundary conditions
dy(0) dy(a)
= 0, + hy(a) = 0, (4.10)
dx dx
with h representing a nonzero constant.
Before going any further with the construction procedure, we must make sure
that the unique Green’s function to the problem in (4.9) and (4.10) really does exist.
That is, we are required to check whether the problem has only the trivial solution.
The most elementary set of functions constituting a fundamental set of solutions
for the equation in (4.9) is represented by
Therefore, the general solution yg (x) for (4.9) can be written as a linear combination
of y1 (x) and y2 (x),
yg (x) = D1 + D2 x,
66 4 Green’s Functions for ODE
D2 = 0,
hD1 + (1 + ah)D2 = 0.
It is evident that the only solution for the system is D1 = D2 = 0. This implies
that the problem in (4.9) and (4.10) is well posed. There thus exists a unique Green’s
function g(x, s). And according to the defining property 1, one can look for it in the
form
A1 (s) + xA2 (s), for 0 ≤ x ≤ s,
g(x, s) = (4.11)
B1 (s) + xB2 (s), for s ≤ x ≤ a.
Introducing then, as suggested in (4.5), C1 (s) = B1 (s) − A1 (s) and C2 (s) =
B2 (s) − A2 (s), we form a system of linear algebraic equations in these unknowns
written as
C1 (s) + sC2 (s) = 0,
C2 (s) = −1,
whose unique solution is C1 (s) = s and C2 (s) = −1.
The first boundary condition in (4.10), being satisfied with the upper branch of
g(x, s), results in A2 (s) = 0. Recall that the upper branch is chosen because x = 0
belongs to the domain 0 ≤ x ≤ s. Since B2 (s) = C2 (s) + A2 (s), we conclude that
B2 (s) = −1.
The second boundary condition in (4.10), being treated with the lower branch of
g(x, s), yields
B2 (s) + h B1 (s) + aB2 (s) = 0,
resulting in B1 (s) = (1 + ah)/ h. And finally, since A1 (s) = B1 (s) − C1 (s), we find
that
A1 (s) = 1 + h(a − s) / h.
Substituting these into (4.11), we ultimately obtain the Green’s function
dy(0) dy(a)
= 0, = 0. (4.13)
dx dx
4.1 Construction by Defining Properties 67
It is evident that the boundary-value problem in (4.9) and (4.13) has no Green’s
function, because it allows infinitely many solutions (any function y(x) = const
represents a solution) and is therefore ill posed. This conclusion is also justified by
the form of the Green’s function in (4.12). Indeed, if h = 0, then g(x, s) in (4.12) is
undefined.
On the other hand, if h → ∞, then the boundary conditions in (4.10) transform
into
dy(0)
= 0, y(a) = 0, (4.14)
dx
and the Green’s function of the problem in (4.9) and (4.14) can be obtained from
that in (4.12) by taking a limit as h → ∞, resulting in
a − s, for 0 ≤ x ≤ s,
g(x, s) = (4.15)
a − x, for s ≤ x ≤ a.
Example 4.2 Let us construct the Green’s function for the boundary-value problem
stated by the differential equation
d 2 y(x)
− k 2 y(x) = 0, x ∈ (0, ∞), (4.16)
dx 2
subject to boundary conditions imposed as
It can readily be shown that the conditions of existence and uniqueness for the
Green’s function are met in this case. Indeed, since a fundamental set of solutions
for the equation in (4.16) can be written as
1 −ks 1 ks
C1 (s) = − e , C2 (s) = e .
2k 2k
The first condition in (4.17) implies
while the second condition results in B1 (s) = 0, because the exponential function
ekx is unbounded as x approaches infinity. And the only way to satisfy the second
condition in (4.17) is to set B1 (s) equal to zero. This immediately yields
1 −ks
A1 (s) = e ,
2k
and the relation in (4.19) consequently provides
1 −ks
A2 (s) = − e .
2k
Hence, based on the known values of C2 (s) and A2 (s), one obtains
1 ks
B2 (s) = e − e−ks .
2k
Upon substituting the values of the coefficients Aj (s) and Bj (s) just found
into (4.18), one finally obtains the Green’s function
to the problem posed by (4.16) and (4.17). It is evident that we can rewrite it in the
compact form
1 −k|x−s|
g(x, s) = e − e−k(x+s) , for 0 ≤ x, s ≤ a.
2k
Example 4.3 Consider a boundary-value problem for the equation in (4.16) but
stated over a different domain,
d 2 y(x)
− k 2 y(x) = 0, x ∈ (0, a), (4.21)
dx 2
and subject to boundary conditions written as
dy(0) dy(a)
y(0) = y(a), = . (4.22)
dx dx
4.1 Construction by Defining Properties 69
So the relations in (4.24) and (4.25), along with those in (4.23), form a system of
four linear algebraic equations in A1 (s), A2 (s), B1 (s), and B2 (s). To find the values
of A1 (s) and B1 (s), we add (4.24) and (4.25) to each other. This provides us with
ek(a−s) e−ks
A1 (s) = , B1 (s) = .
2k(eka − 1) 2k(eka − 1)
To find the values of A2 (s) and B2 (s), we subtract (4.25) from (4.24). This results
in
A2 (s) − B2 (s)e−ka = 0. (4.28)
Rewriting the second relation from (4.23) in the form
1 ks
−A2 (s) + B2 (s) = e , (4.29)
2k
70 4 Green’s Functions for ODE
eks ek(a+s)
A2 (s) = , B2 (s) = .
2k(eka − 1) 2k(eka − 1)
Substituting the values of A1 (s), A2 (s), B1 (s), and B2 (s) just found into (4.18),
we finally obtain the compact form
e−k(|x−s|−a) + ek|x−s|
g(x, s) = , for 0 ≤ x, s ≤ a, (4.30)
2k(eka − 1)
of the Green’s function to the boundary-value problem posed by (4.21) and (4.22).
Example 4.4 For another example, let the second-order equation with variable co-
efficients
d dy
(mx + b) = 0, x ∈ (0, a), (4.31)
dx dx
be subject to the boundary conditions
dy(0)
= 0, y(a) = 0, (4.32)
dx
where we assume that m > 0 and b > 0, which implies that mx + b = 0 on the
interval [0, a].
The fundamental set of solutions
required for the construction of the Green’s function for the problem in (4.31)
and (4.32) can be obtained by two successive integrations of the governing equa-
tion. Indeed, the first integration yields
dy
(mx + b) = C1 .
dx
Dividing the above equation through by mx + b and multiplying by dx, we sep-
arate variables,
dx
dy = C1 ,
mx + b
and finally obtain the general solution of the equation in (4.31) in the form
C1
y(x) = ln(mx + b) + C2 ,
m
which implies that the functions in (4.33) indeed constitute a fundamental set of
solutions for (4.31).
4.1 Construction by Defining Properties 71
It can be easily shown that the problem in (4.31) and (4.32) has only the trivial
solution. Hence, there exists a unique Green’s function, which can be represented in
the form
A1 (s) + ln (mx + b)A2 (s), for 0 ≤ x ≤ s,
g(x, s) = (4.34)
B1 (s) + ln (mx + b)B2 (s), for s ≤ x ≤ a.
Tracing out our construction procedure, we obtain the system of linear algebraic
equations
C1 (s) + ln (ms + b)C2 (s) = 0,
mC2 (s) = −1,
in Cj (s) = Bj (s) − Aj (s) (j = 1, 2). Its solution is
1 1
C1 (s) = ln (ms + b), C2 (s) = − . (4.35)
m m
The first boundary condition in (4.32) yields A2 (s) = 0. Consequently, we have
B2 (s) = −1/m. The second condition in (4.32) gives
Example 4.5 Construct the Green’s function of the boundary-value problem for the
differential equation
d dy(x)
x = 0, x ∈ (0, a), (4.37)
dx dx
subject to boundary conditions written as
dy(a)
|y(0)| < ∞, + hy(a) = 0. (4.38)
dx
72 4 Green’s Functions for ODE
y1 (x) ≡ 1, y2 (x) ≡ ln x.
The problem in (4.37) and (4.38) is well posed (has only the trivial solution),
allowing a unique Green’s function in the form
C1 (s) + ln s C2 (s) = 0,
s −1 C2 (s) = −s −1 ,
Hence, B1 (s) = 1/ah + ln a, and ultimately, A1 (s) = 1/ah − ln s/a. Thus, sub-
stituting the values of Aj (s) and Bj (s) just found into (4.39), we obtain the Green’s
function that we are looking for in the form
1 ln(s/a), for 0 ≤ x ≤ s,
g(x, s) = − (4.40)
ah ln(x/a), for s ≤ x ≤ a.
It is clearly seen that if the parameter h is equal to zero, then the Green’s func-
tion in (4.40) is undefined. This agrees with the setting in (4.37) and (4.38), which
becomes ill posed if h = 0.
d 2 y(x) dy(x)
p0 (x) + p1 (x) + p2 (x)y(x) = −f (x), x ∈ (a, b), (4.41)
dx 2 dx
with a right-hand-side function f (x) continuous on [a, b], subject to the homoge-
neous boundary conditions in (4.2) can be expressed by the integral
b
y(x) = g(x, s) f (s) ds. (4.42)
a
proceeds as follows. First, differentiate the function y(x) in (4.44) using the product
rule
y (x) = C1 y1 + C1 y1 + C2 y2 + C2 y2 , (4.45)
and then, keeping in mind the degree of freedom mentioned above, we make a sim-
plifying assumption as
C1 y1 + C2 y2 = 0, (4.46)
transforming (4.45) into
y (x) = C1 y1 + C2 y2 . (4.47)
Hence, the second derivative of y(x), is expressed as
p0 y1 + p1 y1 + p2 y1 = 0
as well as
p0 y2 + p1 y2 + p2 y2 = 0.
This reduces the equation in (4.49) to
The relations in (4.46) and (4.50) represent a system of linear algebraic equations
in C1 (x) and C2 (x). The system is well posed (has a unique solution) because the
determinant of its coefficient matrix is the Wronskian
Since s represents the variable of integration, the factors y1 (x) and y2 (x) of the
integral containing terms representing functions of x can be formally moved inside
the integrals. And once this is done and the two integral terms are combined, we
obtain
x y1 (s)y2 (x) − y1 (x)y2 (s)
y(x) = H1 y1 (x) + H2 y2 (x) + f (s)ds. (4.51)
a p0 (s)W (s)
To determine values of H1 and H2 , we satisfy the boundary conditions in (4.43)
with the above expression for y(x). This yields the system of linear algebraic equa-
tions
y1 (a) y2 (a) H1 0
= (4.52)
y1 (b) y2 (b) H2 P (a, b)
in H1 and H2 , where P (a, b) is defined as
b R(b, s)
P (a, b) = f (s) ds
a p0 (s)W (s)
and
R(b, s) = y1 (b)y2 (s) − y1 (s)y2 (b).
With this, we arrive at the solution to the system in (4.52) in the form
b y2 (a)R(b, s)f (s)
H1 = − ds
a p0 (s)R(a, b)W (s)
and
b y1 (a)R(b, s)f (s)
H2 = ds.
a p0 (s)R(a, b)W (s)
76 4 Green’s Functions for ODE
Upon substituting these into (4.51), we obtain the solution of the boundary-value
problem posed in (4.41) and (4.43) as
x R(x, s)f (s) b R(a, x)R(b, s)f (s)
y(x) = − ds + ds.
a p0 (s)W (s) a p0 (s)R(a, b)W (s)
R(a, x)R(b, s)
g(x, s) = , x ≤ s,
p0 (s)R(a, b)W (s)
After a trivial but quite cumbersome transformation, the above expression can be
simplified to
R(a, s)R(b, x)
g(x, s) = , x ≥ s.
p0 (s)R(a, b)W (s)
Thus, since the solution to the problem posed in (4.41) and (4.43) is found as
a single integral of the type in (4.42), we conclude that the kernel function g(x, s)
in (4.53) does in fact represent the Green’s function to the corresponding homoge-
neous boundary-value problem.
So, the approach based on the method of variation of parameters can success-
fully be used to actually construct Green’s functions. We present below a number
of examples illustrating some peculiarities of this approach that emerge in practical
situations.
Example 4.6 Apply the procedure based on the method of variation of parameters to
the construction of the Green’s function for the homogeneous equation correspond-
ing to
d 2 y(x)
+ k 2 y(x) = −f (x), x ∈ (0, a), (4.54)
dx 2
and subject to the homogeneous boundary conditions
The system of linear algebraic equations in C1 (x) and C2 (x), which has been
derived, in general, in (4.46) and (4.50), appears in this case as
From the first condition in (4.55), it follows that H1 = 0, while the second con-
dition yields
a
cos k(a − s)f (s)ds − H2 k sin ka = 0,
0
from which we immediately obtain
a cos k(a − s)
H2 = f (s) ds.
0 k sin ka
78 4 Green’s Functions for ODE
Upon substituting the values of H1 and H2 just found into (4.57) and correspond-
ingly regrouping the integrals, one obtains
x sin k(x − s) a cos k(a − s)
y(x) = f (s) ds + cos kx f (s) ds. (4.58)
0 k 0 k sin ka
Both of the above integrals can be combined and written in a compact single-
integral form. In helping the reader to proceed more easily through this transforma-
tion, we add formally the term
a
0 · f (s) ds
x
to the first of the two integrals in (4.58) and break down the second one as
x cos k(a − s) a cos k(a − s)
cos kx f (s) ds + cos kx f (s) ds.
0 k sin ka x k sin ka
Then y(x) is represented as a sum of four definite integrals, in two of which the
integration is carried out from 0 to x. In the other two integrals, we integrate from
x to a. This transforms (4.58) into
x sin k(x − s) x cos k(a − s)
y(x) = f (s) ds + cos kx f (s) ds
0 k 0 k sin ka
a a cos k(a − s)
+ 0 · f (s)ds + cos kx f (s) ds.
x x k sin ka
Note that in the first integral above, the variables x and s satisfy the inequal-
ity x ≥ s, since x represents the upper limit of integration, whereas in the second
integral x, is the lower limit, implying x ≤ s.
Hence, the representation for y(x) that we just came up with can be viewed as
the single integral
a
y(x) = g(x, s)f (s) ds, (4.59)
0
4.2 Method of Variation of Parameters 79
Thus, since the solution of the boundary-value problem stated in (4.54) and (4.55)
is expressed as the integral in (4.59), g(x, s) represents the Green’s function to the
homogeneous boundary-value problem corresponding to that in (4.54) and (4.55).
d 2 y(x)
− k 2 y(x) = −f (x), (4.61)
dx 2
subject to the homogeneous boundary conditions
as its fundamental set of solutions. Hence, the general solution can be represented
for (4.61) itself by
y(x) = C1 (x)ekx + C2 (x)e−kx . (4.63)
Tracing out the procedure of Lagrange’s method, one obtains expressions for
C1 (x) and C2 (x) in the form
x 1 −ks
C1 (x) = − e f (s) ds + H1
0 2k
and
x 1 ks
C2 (x) = e f (s) ds + H2 .
0 2k
By virtue of substitution of these into (4.63), we obtain
x 1
y(x) = H1 ekx + H2 e−kx − sinh k(x − s)f (s) ds. (4.64)
0 k
80 4 Green’s Functions for ODE
The first boundary condition y (0) = 0 in (4.62) implies that H1 = H2 , while the
second condition y(a) = 0 yields
a sinh k(a − s)
H1 = H2 = f (s) ds.
0 2k cosh ka
Example 4.8 Let us return to the equation in (4.61), and let it be subject to the
following boundary conditions:
It can readily be checked that there exists a unique Green’s function for the homo-
geneous boundary-value problem corresponding to that posed by (4.61) and (4.66).
The reader is recommended to justify this fact in Exercise 4.6.
The general solution of the equation in (4.61) was earlier presented in (4.64). In
the present case, however, it is going to be more beneficial to express it, in con-
trast to the mixed hyperbolic–exponential form in (4.64), completely in terms of
exponential functions. That is,
x 1 k(s−x)
y(x) = H1 ekx + H2 e−kx + e − ek(x−s) f (s)ds. (4.67)
0 2k
The point is that the form in (4.67) will be more practical in view of the neces-
sity to treat the boundedness condition |y(∞)| < ∞ in the discussion that follows.
Indeed, splitting off both the exponential terms under the integral sign and group-
ing together the terms containing the factor of ekx and those containing the factor
of e−kx , we transform (4.67) into
x e−ks x eks
y(x) = H1 − f (s) ds ekx + H2 + f (s) ds e−kx . (4.68)
0 2k 0 2k
4.2 Method of Variation of Parameters 81
It is clearly seen that the boundedness condition |y(∞)| < ∞ implies that the
factor of the positive exponential term ekx in (4.68) must equal zero as x approaches
infinity. This implies
∞ 1 −ks
H1 = e f (s) ds,
0 2k
while the first condition in (4.66) subsequently yields
k−h ∞ k − h −ks
H2 = H1 = e f (s) ds.
k+h 0 2k(k + h)
Upon substituting the expressions for H1 and H2 just found into (4.67) and
rewriting its integral component again in a more compact hyperbolic-function-
containing form, we obtain
x 1 ∞ 1 −ks kx
y(x) = − sinh k(x − s)f (s)ds + e e + h∗ e−kx f (s) ds,
0 k 0 2k
where h∗ = (k − h)/(k + h). From this representation, the Green’s function to the
problem in (4.61) and (4.66) ultimately appears as
In the example that follows, a boundary-value problem for another equation with
variable coefficients is considered.
Example 4.9 Consider another second-order linear equation with variable coeffi-
cients
d 2 2 dy(x)
β x +1 = −f (x), x ∈ (0, a), (4.70)
dx dx
subject to the boundary conditions
the form
x 1 β(s − x)
y(x) = arctan f (s)ds + D1 + D2 arctan βx.
0 β 1 + β 2 xs
for the homogeneous problem corresponding to that in (4.70) and (4.71), where
K = β arctan βa.
We believe that having developed the necessary flexibility in dealing with ordi-
nary differential equations, the reader will feel comfortable enough working on the
material in the next chapter.
4.1 Show that the trivial solution is the only solution to the boundary-value problem
stated in (4.21) and (4.22).
4.3 Prove that the boundary-value problem stated in (4.54) and (4.55) is uniquely
solvable.
4.4 Construct the Green’s function for the homogeneous equation corresponding
to (4.54), subject to the boundary conditions
y(0) = y (1) = 0.
4.3 Chapter Exercises 83
4.5 Construct the Green’s function for the homogeneous equation corresponding
to (4.61) subject to the boundary conditions
4.6 Prove that the boundary-value problem stated in (4.61) and (4.66) is well posed.
4.7 Prove that the boundary-value problem stated in (4.70) and (4.71) is well posed.
Chapter 5
Eigenfunction Expansion
Having departed for a while from the main focus of the book in the previous chapter,
where the emphasis was on ordinary differential equations, we are going to return
in the present chapter to partial differential equations. The reader will be provided
with a comprehensive review of another approach that has been traditionally em-
ployed for the construction of Green’s functions for partial differential equations.
The method of eigenfunction expansion will be used, representing one of the most
productive and recommended methods in the field.
Our objective in reviewing the method of eigenfunction expansion is twofold.
First, we want to assist the reader in the derivation of Green’s functions for a variety
of applied partial differential equations. Our second goal is to lay out a preparatory
basis for Chap. 6, which is, to a certain extent, central for the entire volume. In
that chapter, upon comparison of different forms of Green’s functions, some infinite
product representations are derived for a number of trigonometric and hyperbolic
functions.
After presenting introductory comments in the brief section below, we develop
a procedure based on the eigenfunction expansion method to derive a number of
Green’s functions in Sects. 5.2 and 5.3. The first of these touches upon problems
stated in Cartesian coordinates, while problems formulated in polar coordinates are
dealt with in Sect. 5.3.
Earlier, in Chap. 3, the reader was familiarized with two standard approaches that
are traditionally used for the construction of Green’s functions for the Laplace equa-
tion in two dimensions. These approaches are based on the methods of images
and conformal mapping. Another standard approach in the field is based on the
method of eigenfunction expansion [15, 18]. The number of problems for which
this method appears productive is notably wider than the number of problems suc-
cessfully treated by either of the two other methods.
In the introduction to this book, it was mentioned that the solution u(P ) to the
well-posed boundary-value problem
∇ 2 u(P ) = −f (P ), P ∈ , (5.1)
B u(P ) = 0, P ∈ L, (5.2)
stated for the Poisson (inhomogeneous) equation can be expressed in the integral
form
u(P ) = G(P , Q)f (Q)d(Q) (5.3)
In what follows, the particulars of the approach based on the method of eigen-
function expansion and its specific features are clarified and explained as we pass
through a series of illustrative examples in which problems are stated in Cartesian
coordinates. In the first example, the reader has an opportunity to go into a more or
less detailed description of the approach.
Example 5.1 We revisit the Dirichlet problem for the Laplace equation stated on the
infinite strip = {−∞ < x < ∞, 0 < y < b}.
This problem has already been considered in Chap. 3. Two equivalent forms of
its Green’s function were presented there in (3.37) and (3.38). They were obtained
by the method of conformal mapping. We are going to describe now an alternative
derivation procedure, which will be explained by turning to the following boundary-
value problem:
∂ 2 u(x, y) ∂ 2 u(x, y)
+ = −f (x, y), (x, y) ∈ , (5.4)
∂x 2 ∂y 2
u(x, 0) = u(x, b) = 0. (5.5)
5.2 Cartesian Coordinates 87
∂ 2 U (x, y) ∂ 2 U (x, y)
+ = 0, (x, y) ∈ ,
∂x 2 ∂y 2
U (x, 0) = U (x, b) = 0,
corresponding to (5.4) and (5.5), then its solution U (x, y) is given by
∞
U (x, y) = Xn (x)Yn (y).
n=1
d 2 Yn (y)
+ ν 2 Yn (y) = 0, y ∈ (0, b),
dy 2
Yn (0) = Yn (b) = 0,
which represents the expansion of the two variable function u(x, y) in a Fourier sine
series with respect to one of the variables.
The right-hand-side function f (x, y) in (5.4) is also expanded in terms of Yn (y):
∞
f (x, y) = fn (x) sin νy. (5.7)
n=1
88 5 Eigenfunction Expansion
Once the expansions from (5.6) and (5.7) are substituted into (5.4), we obtain
∞ 2
∞
d un (x)
− ν un (x) sin νy = −
2
fn (x) sin νy.
dx 2
n=1 n=1
Equating the coefficients of the two series in the above relation yields the ordi-
nary differential equation
d 2 un (x)
− ν 2 un (x) = −fn (x), −∞ < x < ∞, (5.8)
dx 2
in the coefficients un (x) of the series in (5.6). Clearly, the boundedness conditions
must be imposed on un (x) to make the problem setting in (5.8) and (5.9) well posed.
To construct the Green’s function of the above boundary-value problem, we may
choose either the approach employing the defining properties or the one based on the
method of variation of parameters. Choosing the latter, we trace out its procedure,
which was described in detail in Sect. 4.2. That is, we express the general solution
to (5.8) in the form
un (x) = C1 (x)eνx + C2 (x)e−νx , (5.10)
which yields the well-posed system of linear algebraic equations
νx
e e−νx C1 (x) 0
=
νeνx −νe−νx C2 (x) −f (x)
1 −νx 1 νx
C1 (x) = − e fn (x), C2 (x) = e fn (x).
2ν 2ν
Expressions for C1 (x) and C2 (x),
x
1
C1 (x) = − e−νξ fn (ξ ) dξ + D1
2ν −∞
and
x
1
C2 (x) = eνξ fn (ξ ) dξ + D2 ,
2ν −∞
are found by integration. Substituting these into (5.10), we obtain
x x
1 1
un (x) = e−νx eνξ fn (ξ ) dξ + D2 − eνx e−νξ fn (ξ ) dξ + D1 .
2ν −∞ 2ν −∞
(5.11)
5.2 Cartesian Coordinates 89
The first boundedness condition in (5.9) requires that the factor of e−νx in (5.11)
be zero as x approaches negative infinity. This yields D2 = 0. The second condition
in (5.9), in turn, implies that the factor of eνx is zero as x approaches infinity. This
yields
∞
1
D1 = − e−νξ fn (ξ ) dξ.
2ν −∞
After the values of D1 and D2 that were just found are substituted into (5.11),
the solution of the boundary-value problem posed in (5.8) and (5.9) is found as
∞ x
1 ν(x−ξ ) 1 ν(ξ −x)
un (x) = e fn (ξ ) dξ + e − eν(x−ξ ) fn (ξ ) dξ,
−∞ 2ν −∞ 2ν
By substitution of the above into (5.12) and then substituting the coefficients
un (x) in (5.6), the solution to the problem in (5.4) and (5.5) is obtained in the form
b ∞ ∞
1 e−ν|x−ξ |
u(x, y) = sin νy sin νηf (ξ, η) dξ dη, (5.13)
0 −∞ π n
n=1
of the integral representation from (5.13) represents the Green’s function to the ho-
mogeneous boundary-value problem corresponding to that in (5.4) and (5.5).
The series in (5.14) is nonuniformly convergent. Due to the logarithmic singular-
ity, it diverges, in fact, when the observation point (x, y) coincides with the source
point (ξ, η). This makes the above series form of the Green’s function somewhat
90 5 Eigenfunction Expansion
inconvenient for numerical implementations. But the situation can be radically im-
proved, because the series is actually summable. To sum it, we transform (5.14)
into
∞
1 e−ν|x−ξ |
G(x, y; ξ, η) = cos ν(y − η) − cos ν(y + η)
2π n
n=1
∞ ∞ −ν|x−ξ |
1 e−ν|x−ξ | e
= cos ν(y − η) − cos ν(y + η)
2π n n
n=1 n=1
(5.15)
which holds if its parameters meet the constraints p < 1 and 0 ≤ ϑ < 2π .
It is evident that the series in (5.15) are of the type in (5.16) and that the con-
straints on the parameters p and ϑ are met. Indeed, it is clear that
e−ν|x−ξ | ≤ 1
and
0 ≤ ν(y − η) < 2π and 0 ≤ ν(y + η) < 2π.
Hence, the series in (5.15) appear summable, which yields the analytical repre-
sentation
1 1 − 2eω(x−ξ ) cos ω(y + η) + e2ω(x−ξ )
G(x, y; ξ, η) = ln (5.17)
4π 1 − 2eω(x−ξ ) cos ω(y − η) + e2ω(x−ξ )
for the Green’s function to the homogeneous boundary-value problem correspond-
ing to that in (5.4) and (5.5). Here ω = π/b.
At this point, the reader is referred to the expression in (3.37) of Chap. 3, which
was obtained (by the method of conformal mapping) as the Green’s function of the
Dirichlet problem for the infinite strip = {−∞ < x < ∞, 0 < y < π} of width π .
Clearly, if we assume b = π , implying ω = 1, then the expression in (5.17) reduces
to that of (3.37).
Note that, similarly to the conversion of the representation in (3.37) into that
in (3.38) undertaken in Sect. 3.2, the expression shown in (5.17) converts into
1 cosh ω(x − ξ ) − cos ω(y + η)
G(x, y; ξ, η) = ln . (5.18)
4π cosh ω(x − ξ ) − cos ω(y − η)
Recall that the conversion is accomplished by multiplying the numerator and de-
nominator in (5.17) by the factor 2eω(ξ −x) , with subsequent use of the Euler formula
for the hyperbolic cosine function.
5.2 Cartesian Coordinates 91
Another shorthand expression for (5.17) can be obtained upon introducing the
complex variables
z = x + iy and ζ = ξ + iη
for the observation point (x, y) and the source point (ξ, η). Indeed, recalling the
Euler formula
ez = ex (cos y + i sin y)
for the complex exponent, one reduces the representation in (5.17) to a compact
form. That is,
1 |1 − eω(z−ζ ) |
G(x, y; ξ, η) = ln , (5.19)
2π |1 − eω(z−ζ ) |
where the bar on ζ stands for the complex conjugate.
Example 5.2 We turn to the construction of the Green’s function for the Dirichlet
problem
u(x, 0) = u(x, b) = u(0, y) = 0 (5.20)
stated for the Laplace equation on the semi-infinite strip = {0 < x < ∞,
0 < y < b}.
d 2 un (x)
− ν 2 un (x) = −fn (x), 0 < x < ∞,
dx 2
un (0) = 0, lim |un (x)| < ∞,
x→∞
and after employing the summation formula from (5.16), the above representation
transforms to the closed analytical form
which is usually given for this Green’s function in the literature [18], can readily be
verified using the algebra explained earlier in Example 5.1.
So far, we have used the method of eigenfunction expansion as an alternative way
to construct some Green’s functions already available in the literature. In what fol-
lows, in contrast, the method is applied to a mixed boundary-value problem whose
Green’s function probably cannot be obtained otherwise.
∂ 2 u(x, y) ∂ 2 u(x, y)
+ = −f (x, y), (x, y) ∈ , (5.23)
∂x 2 ∂y 2
stated on the semi-infinite strip = {0 < x < ∞, 0 < y < b}, and subject, in this
case, to the boundary conditions
∂u(0, y)
− βu(0, y) = 0, u(x, 0) = u(x, b) = 0, β ≥ 0. (5.24)
∂x
By virtue of the Fourier sine-series expansions
∞
nπ
u(x, y) = un (x) sin νy, ν= ,
b
n=1
and
∞
f (x, y) = fn (x) sin νy,
n=1
5.2 Cartesian Coordinates 93
d 2 un (x)
− ν 2 un (x) = −fn (x), 0 < x < ∞,
dx 2
dun (0)
− βun (0) = 0, lim |un (x)| < ∞,
dx x→∞
in the coefficients un (x) of the above series expansion for u(x, y).
Following the procedure of the method of variation of parameters, the Green’s
function gn (x, ξ ) to the above problem is found in the form
−νξ νx ∗ −νx
1 e (e + β e ), for x ≤ ξ,
gn (x, ξ ) = (5.25)
2ν e−νx (eνξ + β ∗ e−νξ ), for x ≥ ξ,
Recall now the cases covered earlier in Examples 5.1 and 5.2, where we managed
to sum the series expansions of Green’s functions. Observe, for example, how the
series in (5.14) converts to the closed analytical form in (5.18). In contrast to those
cases, the series in (5.27) cannot be completely summed, but we can observe that its
singular component can be split off, radically improving its computability.
Since the truncation of the series in (5.27) does not work, an extra effort is re-
quired to enhance its computability. One possible way of doing so was proposed
in [12]. The idea is to split the expression for gn (x, ξ ) in (5.25) into two parts,
one of which contains the components responsible for the singularity and allows a
complete summation, while the other part leads to a uniformly convergent series. In
doing so, we rewrite the coefficient gn (x, ξ ) in the form
1 ν − β −ν(x+ξ )
gn (x, ξ ) = e−ν|x−ξ | + e , for 0 < x, ξ < ∞,
2ν ν +β
and represent the factor (ν − β)/(ν + β) of its second exponential function e−ν(x+ξ )
as
ν −β 2β
=1− .
ν +β ν +β
This yields
1 2β −ν(x+ξ )
gn (x, ξ ) = e−ν|x−ξ | + e−ν(x+ξ ) − e .
2ν ν +β
Upon substituting the above into (5.27), we rewrite the latter as
∞
1 1 −ν|x−ξ |
G(x, y; ξ, η) = e + e−ν(x+ξ ) sin νy sin νη
b ν
n=1
∞
2β e−ν(x+ξ ) nπ
− sin νy sin νη, ν= .
b ν(ν + β) b
n=1
5.2 Cartesian Coordinates 95
Clearly, the first of the above two series is summable. The summation can be ac-
complished in the same way as in Examples 5.1 and 5.2. The second series does not
allow a summation. But it is uniformly convergent, and we may leave it in its current
form without significantly deteriorating the computability of the whole expression.
Thus, a computer-friendly representation of the Green’s function to the mixed
boundary-value problem in (5.24) for the Laplace equation posed on the semi-
infinite strip = {0 < x < ∞, 0 < y < b} is finally obtained as
The exponential and trigonometric factors of the general term in this series never
exceed unity. Since the parameter β is nonnegative, we arrive at the following esti-
mate for the absolute value of RN :
∞
∞
1 1
|RN (x, y; ξ, η)| ≤ ≤
ν(ν + β) ν2
n=N+1 n=N +1
∞
∞
b2 1 b2 1
N
1
= 2 = − .
π n2 π 2 n2 n2
n=N+1 n=1 n=1
Notice first that the above inequality is quite compact and very simple to use.
Second, it provides a uniform estimate and is therefore valid at any point in .
Third, from our derivation, it follows that it gives a relatively coarse estimate. The
latter makes it advisable to revisit the analysis of (5.30). In doing so, we replace
its trigonometric factors with unity and express the parameter ν in terms of n. This
yields
∞
∞
e−ν(x+ξ ) b2 e−ν(x+ξ )
|RN (x, y; ξ, η)| ≤ = 2 ,
ν(ν + β) π n(n + β0 )
n=N+1 n=N +1
where β0 = βb/π .
In the case of β0 ≥ 1, the above estimate might be improved. That is,
∞ ∞
b2 e−ν(x+ξ ) b2 e−ν(x+ξ ) e−ν(x+ξ )
N
|RN (x, y; ξ, η)| ≤ = − .
π2 n(n + 1) π 2 n(n + 1) n(n + 1)
n=N+1 n=1 n=1
Note that the infinite series in the brackets is summable. Using the standard sum-
mation formula [9]
∞
pn 1−p 1
=1− ln , p2 < 1,
n(n + 1) p 1−p
n=1
where p = e−ν(x+ξ ) and ν = π/b, we arrive at the following estimate for the re-
mainder in (5.30)
b2 e−ν(x+ξ )
N
−ν(x+ξ )
|RN (· · · )| ≤ 1 + e ν(x+ξ )
− 1 ln 1 − e − . (5.32)
π2 n(n + 1)
n=1
The above estimate, unlike that in (5.31), is nonuniform. Indeed, its right-hand-
side depends on the observation and source points to which the estimate is applied.
So, it is more flexible in practical computing. This, in other words, allows the user
to apply different truncations for the series in (5.31) in different zones of in order
to keep a certain desired accuracy level for the entire region.
The improvement that has been achieved by the recent transformation of the
series-only form of the Green’s function in (5.27) appears to be outstanding. This
can be fully appreciated when the profile depicted in Fig. 5.2 is compared with those
shown earlier, in Fig. 5.1. The representation in (5.28), with only the tenth partial
sum of its series component is accounted for.
Example 5.4 The method of eigenfunction expansion will be used here to construct
the Green’s function of the Dirichlet problem for the Laplace equation stated on a
rectangle.
This is our second look at the problem. It was considered in Chap. 3 where its
Green’s function was obtained by the method of conformal mapping (see the repre-
sentation in (3.41)). The representation in (3.41) is expressed in terms of a special
(Weierstrass) function that is not yet tabulated, making it inconvenient in computing.
5.2 Cartesian Coordinates 97
∂ 2 u(x, y) ∂ 2 u(x, y)
+ = −f (x, y), (x, y) ∈ , (5.33)
∂x 2 ∂y 2
u(x, 0) = u(x, b) = u(0, y) = u(a, y) = 0, (5.34)
on the rectangle = {0 < x < a, 0 < y < b}, where f (x, y) is assumed to be
integrable (continuous) on the closure of .
It is assumed that the reader has learned from a course on differential equations
(see, for example, [5, 15]) that the components in the set of functions
and expand also the right-hand-side function f (x, y) in (5.33) in the double Fourier
sine series
∞
f (x, y) = fm,n sin μx sin νy. (5.36)
m,n=1
Once the expansions from (5.35) and (5.36) are substituted into (5.33), we have
∞
∞
− μ + ν um,n sin μx sin νy = −
2 2
fm,n sin μx sin νy.
m,n=1 m,n=1
Equating the coefficients of the series from the left-hand side and the right-hand
side in the above equation yields
fm,n
um,n = .
μ2 + ν 2
With the aid of the Euler–Fourier formula, the coefficients fm,n in the series
of (5.36) are expressed as
b a
4
fm,n = f (ξ, η) sin μξ sin νη dξ dη.
ab 0 0
By substitution of the expression for fm,n into the above formula for the coef-
ficients um,n , and then substituting the coefficients um,n into (5.35), we obtain the
solution of the problem posed by (5.33) and (5.34) in the form
b a ∞
4 sin μx sin νy sin μξ sin νη
u(x, y) = f (ξ, η) dξ dη.
0 0 ab μ2 + ν 2
m,n=1
Since the solution of the problem in (5.33) and (5.34) is expressed in the integral
form of (5.3), the kernel of the above,
∞
4 sin μx sin νy sin μξ sin νη
G(x, y; ξ, η) = , (5.37)
ab μ2 + ν 2
m,n=1
represents the Green’s function of the Dirichlet problem stated on the rectangle =
{0 < x < a, 0 < y < b}.
It is evident that the computability represents a critical issue for the double series
in (5.37). Addressing this issue in the forthcoming analysis, let a = π and b = π for
simplicity. This transforms (5.37) into
∞
4 sin mx sin ny sin mξ sin nη
G(x, y; ξ, η) = , (5.38)
π2 m2 + n 2
m,n=1
which is the Green’s function for the square = {0 < x < π, 0 < y < π}.
5.2 Cartesian Coordinates 99
To examine the convergence rate of the series in (5.38), we depict, in Fig. 5.3,
profiles of its (M, N )th partial sum for various values of the truncation parameters
M and N . The x-coordinate of the field point is fixed at x = π/2, while the source
point (ξ, η) is chosen as (π/2, 2).
Two important observations can be made from the data in Fig. 5.3, and both
of them indicate a low computational potential of the expression in (5.38). First, the
logarithmic singularity is poorly approximated when the series is truncated. Second,
a high-frequency oscillation dramatically reduces its practicality.
Note that the oscillation cannot be entirely eliminated in the case of M = 100
and N = 100. This implies, in particular, that the accuracy in computing derivatives
of the Green’s function (which are frequently required in applications) should to be
even much lower than that of the function itself.
Hence, some work is required to enhance the computational potential of the rep-
resentation in (5.38). In [25], for example, it was proposed to rearrange the double-
summation in (5.38) as
∞
∞
4 sin mx sin mξ
sin ny sin nη,
π2 m2 + n 2
n=1 m=1
∞
∞
4 1 cos m(x − ξ ) − cos m(x + ξ )
sin ny sin nη.
π2 2 m2 + n2
n=1 m=1
∞
∞
2 cos m(x − ξ ) cos m(x + ξ )
− sin ny sin nη. (5.39)
π2 m2 + n2 m2 + n 2
n=1 m=1
100 5 Eigenfunction Expansion
where the parameter β is assumed to be bounded, since 0 < β < 2π , each of the
m-series in (5.39) is analytically summable. Carrying out the summation, we reduce
the double series in (5.38) to
∞
1 cosh n(π − |x − ξ |) − cosh n(π − (x + ξ ))
sin ny sin nη,
π n sinh nπ
n=1
or
∞
2
Tn (x, ξ ) sin ny sin nη, (5.40)
π
n=1
the series as
∞
2 sinh nπ cosh nx − sinh nx cosh nπ
sinh nξ sin ny sin nη,
π n sinh nπ
n=1
or
∞
2 cosh nx − sinh nx coth nπ
sinh nξ sin ny sin nη.
π n
n=1
Adding and subtracting the term of sinh nx in the numerator, we rewrite the above
as
∞
2 1
cosh nx − sinh nx + sinh nx(1 − coth nπ) sinh nξ sin ny sin nη,
π n
n=1
It can readily be shown that the second of the above two series appears analyt-
ically summable. To proceed with the summation, we convert its hyperbolic func-
tions into exponential form. This yields
∞
21
(1 − coth nπ) sinh nx sinh nξ sin ny sin nη
π n
n=1
∞
2 1 −nx enξ − e−nξ
+ e sin ny sin nη,
π n 2
n=1
When the brackets are removed in the first of the above two series, it breaks
into four pieces, each of which allows analytical summation in compliance with the
102 5 Eigenfunction Expansion
standard formula from (5.16). This converts the Green’s function in (5.40) into
1 1 − 2e−(x−ξ ) cos(y + η) + e−2(x−ξ )
G(x, y; ξ, η) = ln
2π 1 − 2e−(x−ξ ) cos(y − η) + e−2(x−ξ )
1 1 − 2e−(x+ξ ) cos(y − η) + e−2(x+ξ )
+ ln
2π 1 − 2e−(x+ξ ) cos(y + η) + e−2(x+ξ )
∞
2 sinh nx sinh nξ
− sin ny sin nη.
π nenπ sinh nπ
n=1
z = x + iy and ζ = ξ + iη,
as introduced earlier in Example 5.1, is used for the points (x, y) and (ξ, η).
The computational superiority of the version in (5.41) over those in (5.38)
and (5.40) cannot be disputed, mainly because the basic logarithmic singularity of
the Green’s function is analytically expressed in (5.41). Indeed, it is contained in the
term
1 1
ln . (5.42)
2π |1 − e(z−ζ ) |
To verify this fact, we expand the exponent e(z−ζ ) in a Taylor series and substitute
it into (5.42). This yields
1 1 1 1 1
ln = − ln (z − ζ ) + (z − ζ )2
+ (z − ζ ) 3
+ · · ·
2π |1 − e (z−ζ ) | 2π 2! 3!
1 1 1
=− ln |z − ζ |1 + (z − ζ ) + (z − ζ ) + · · ·
2
2π 2! 3!
1 1 1 1 1
= ln − ln 1 + (z − ζ ) + (z − ζ ) + · · · ,
2
2π z − ζ 2π 2! 3!
where the first logarithmic term in fact represents the fundamental solution of the
Laplace equation, while the second logarithm is a regular function that vanishes at
z = ζ.
5.2 Cartesian Coordinates 103
Table 5.1 Convergence of (5.41) for the source point (3π/4, π/2)
Field point, Truncation parameter, N
x/π 10 20 50 100 200
Table 5.2 Convergence of (5.41) for the source point (0.99π, π/2)
Field point, Truncation parameter, N
x/π 10 20 50 100 200
where
z1 = (x + π) + iy, ζ1 = (ξ + π) + iη,
z2 = (x − π) + iy, ζ2 = (ξ − π) + iη,
Since the second additive term cosh nπ cosh n(x + ξ ) in the numerator is never
negative, the estimation procedure can be continued as
∞
enπ cosh n(x − ξ ) ∞
e−nπ cosh nx
|RN (x, y; ξ, η)| ≤ ≤
πne2nπ sinh nπ πn sinh nπ
n=N+1 n=N +1
∞
∞
1 e−nπ 1 e−nπ
N
e−nπ
≤ = − .
π n π n n
n=N+1 n=1 n=1
The infinite series in the above expression is analytically summable [9], leading
ultimately to the estimate
1 e−nπ N
|RN (x, y; ξ, η)| ≤ ln − ,
1 − e−π n
n=1
which indicates extremely rapid convergence of the series in (5.43), where the error
level of order, say, 10−8 can be attained for any location of the field and the source
point, with the truncation parameter as low as N = 5. The superiority of the latter
form of the Green’s function over all other forms obtained so far is illustrated in
Fig. 5.5, where its profile G(π/2, y; π/2, 2) is exhibited as in Figs. 5.3 and 5.4.
5.3 Polar Coordinates 105
Note that the representation in (5.43) of the Green’s function of the Dirichlet
problem for the Laplace equation appears by far more computer-friendly than those
of (5.38), (5.40), and (5.41). Two features of (5.43) support this claim (an analyti-
cal form of the basic logarithmic singularity and uniform convergence of its series
term). The latter feature allows complete elimination of the high-frequency oscil-
lation by truncating the series to its low partial sums. The fifth partial sum, for
example, is accounted for in Fig. 5.5.
We begin our presentation in this section with a problem that has already been
treated twice in the present volume. In Chap. 3, the classical expression of the
Green’s function
1 a 4 − 2ra 2 cos(ϕ − ψ) + r 2 2
G(r, ϕ; , ψ) = ln 2 2 (5.44)
4π a (r − 2r cos(ϕ − ψ) + 2 )
of the Dirichlet problem for a disk of radius a was constructed by the methods
of images and conformal mapping. This time around, the eigenfunction expansion
method will be used for its derivation. We thereby provide the necessary background
for later application of this method to the construction of Green’s functions to some
other problems for which the methods of images and conformal mapping fail.
Example 5.5 On the disk = {0 < r < a, 0 ≤ ϕ < 2π} of radius a, the boundary-
value problem
lim |u(r, ϕ)| < ∞ and u(a, ϕ) = 0 (5.45)
r→0
where ddψ represents the area element in polar coordinates. It gives the Green’s
function G(r, ϕ; , ψ) in which we are interested.
Taking into account the 2π -periodicity of the solution u(r, ϕ) of the problem
in (5.45) and (5.46) with respect to the variable ϕ, we expand it in the trigonometric
Fourier series
∞
1
u(r, ϕ) = u0 (r) + ucn (r) cos nϕ + usn (r) sin nϕ . (5.48)
2
n=1
By substitution of the expansions from (5.48) and (5.49) into (5.46) and equating
the corresponding coefficients of the series on both sides, we derive the following
linear ordinary differential equation:
d dun (r) n2
r − 2 un (r) = −rfn (r), n = 0, 1, 2, . . . , (5.50)
dr dr r
in the coefficients un (r) of the expansion in (5.48). At the current stage of our de-
velopment, we omit the superscripts on un (r) and fn (r) for notational convenience.
The relations in (5.45) imply that the solution un (r) of (5.50) should satisfy the
boundary conditions
It is worth noting that the fundamental set of solutions of the homogeneous equa-
tion corresponding to (5.50) for the case n = 0 differs from that for the case n ≥ 1.
This means that in constructing the Green’s function to the boundary-value problem
in (5.50) and (5.51), the two cases must be considered separately.
In the case n = 0, the boundary-value problem in (5.50) and (5.51) reduces to
d du0 (r)
r = −rf0 (r), (5.52)
dr dr
5.3 Polar Coordinates 107
with the functions u(r) = ln r and u(r) = 1 representing a fundamental set of so-
lutions for the homogeneous equation corresponding to (5.52). Hence, the general
solution for (5.52) can be written, in the variation of parameters method, as
Substituting this into (5.52) and following the routine of the method, we obtain
and
r
C1 (r) = ln f0 ()d + D2 .
0
Once the above quantities are substituted into (5.54) and the integral terms are
combined, the general solution of (5.52) is found as
r
u0 (r) = ln f0 ()d + D1 ln r + D2 .
0 r
The values
a
D1 = 0 and D2 = − ln f0 ()d
0 a
of the constants D1 and D2 are obtained by taking advantage of the boundary con-
ditions in (5.53). Upon substituting these into the above expression for u0 (r), the
solution of the boundary-value problem in (5.52) and (5.53) reads as
r a
u0 (r) = ln f0 ()d − ln f0 ()d,
0 r 0 a
and after proceeding through the variation of parameters routine, we derive the
above as
r n n
1 r
un (r) = − fn ()d + D1 r n + D2 r −n . (5.57)
0 2n r
The boundary conditions in (5.51) yield
a n n
1 1
D2 = 0 and D1 = − fn ()d.
0 2n a2
Upon substituting these into (5.57), we obtain
r n n a n n
1 r 1 r r
un (r) = − fn ()d + − fn ()d,
0 2n r 0 2n a2
or using a more compact notation,
a
un (r) = gn (r, )fn ()d, (5.58)
0
and
a
usn (r) = gn (r, )fns ()d, (5.60)
0
5.3 Polar Coordinates 109
and
2π
1
fns (r) = f (, ψ) sin nψdψ, n = 1, 2, 3, . . . . (5.62)
π 0
Upon substituting the expressions for fnc () and fns () from (5.61) and (5.62)
into (5.55), (5.59), and (5.60), and then the coefficients u0 (r), ucn (r), and usn (r)
into (5.55), we obtain the solution of the boundary-value problem posed by (5.45)
and (5.46) in the form
∞
2π a 1 g0 (r, )
u(r, ϕ) = + gn (r, )(cos nϕ cos nψ + sin nϕ sin nψ)
0 0 π 2
n=1
× f (, ψ)ddψ,
which can be written in a more compact form, after the factor of gn (r, ) in the
above series reads as a single trigonometric function. That is,
a ∞
2π 1 g0 (r, )
u(r, ϕ) = + gn (r, ) cos n(ϕ − ψ) f (, ψ)ddψ.
0 0 π 2
n=1
(5.63)
Since the expression ddψ represents the area element in polar coordinates,
we observe that the solution of the boundary-value problem in (5.45) and (5.46) is
obtained in the form of (5.48). Indeed, the integration in (5.63) is taken over the
entire disk . This allows us to conclude that the kernel in the above integral,
∞
1
G(r, ϕ; , ψ) = g0 (r, ) + 2 gn (r, ) cos n(ϕ − ψ) , (5.64)
2π
n=1
represents the Green’s function of the Dirichlet problem for the Laplace equation on
the disk of radius a.
To proceed with the summation of the series term in G(r, ϕ; , ψ), either branch
of g0 (r, ) and gn (r, ) can be used. Taking, for instance, the branch valid for r ≤
and substituting into (5.64), we have
∞
n n
1 1 r r
G(r, ϕ; , ψ) = − ln + − cos n(ϕ − ψ) .
2π a n a2
n=1
1 a 4 − 2ra 2 cos(ϕ − ψ) + r 2 2
G(r, ϕ; , ψ) = ln 2 2
4π a (r − 2r cos(ϕ − ψ) + 2 )
of the Green’s function of the Dirichlet problem on the disk of radius a.
where the series is partially summable. By applying the standard summation formula
from (5.16), we convert the above representation to
1 1
G(r, ϕ; , ψ) = − ln − L1 (r, ϕ; , ψ) − L2 (r, ϕ; , ψ)
2π aβ a
∞
n
2aβ r
− cos n(ϕ − ψ) ,
n(n + aβ) a 2
n=1
where
2
1 r r
L1 (r, ϕ; , ψ) = ln 1 − 2 cos(ϕ − ψ) +
2
and
2
1 r r
L2 (r, ϕ; , ψ) = ln 1 − 2 2 cos(ϕ − ψ) + .
2 a a2
Following trivial transformations, this cumbersome form reduces to a more com-
pact one as
1 1 a3
G(r, ϕ; , ψ) = + ln
2π aβ |z − ζ ||zζ − a 2 |
∞
2aβ r n
− cos n(ϕ − ψ) . (5.68)
n(n + aβ) a 2
n=1
Clearly, the series in (5.68) converges at the rate 1/n2 , making the entire repre-
sentation convenient for computer implementation.
It is worth noting that the boundary condition in (5.65) reduces to Dirichlet type
if the parameter β is taken to infinity. In compliance with this note, the limit of the
expression in (5.68) as β approaches infinity should represent the Green’s function
for the Dirichlet problem on the disk of radius a. Indeed, taking the limit in (5.68),
one arrives at
∞
1 a3 2 r n
G(r, ϕ; , ψ) = ln − cos n(ϕ − ψ) , (5.69)
2π |z − ζ ||zζ − a 2 | n a2
n=1
112 5 Eigenfunction Expansion
1 |zζ − a 2 |
G(r, ϕ; , ψ) = ln
2π a|z − ζ |
of the Green’s function for the Dirichlet problem on the disk of radius a, which was
just derived in Example 5.5.
So far in this section, we have dealt with more or less trivial problems, those
whose Green’s functions can be found in existing texts on partial differential equa-
tions. In the examples that follow, we turn to a series of boundary-value problems
for the Laplace equation whose Green’s functions are not so readily available.
Example 5.7 Consider the Dirichlet problem stated for the Laplace equation on the
annular region = {a < r < b, 0 ≤ ϕ < 2π}, and look for a computer-friendly
form of its Green’s function.
Acting in compliance with our strategy, we subject the Poisson equation (5.46)
of Example 5.5 to the boundary conditions
and a solution for the above problem is found in the integral form
b
u0 (r) = g0 (r, )f0 ()d, (5.74)
a
5.3 Polar Coordinates 113
∞
1
G(r, ϕ; , ψ) = g0 (r, ) + 2 gn (r, ) cos n(ϕ − ψ) (5.79)
2π
n=1
in (5.78) represents the Green’s function to the Dirichlet problem for the Laplace
equation stated on the annular region = {a < r < b, 0 ≤ ϕ < 2π}.
Close analysis shows that the representation in (5.79) does not guarantee high
level of accuracy in computing the Green’s function. Causing this is the appearance
114 5 Eigenfunction Expansion
of the coefficient gn (r, ), which reveals two different types of singularity from
the series in (5.79). The first of the singularities is of principal logarithmic type,
which shows up whenever the field point (r, ϕ) approaches the source point (, ψ),
whereas the second singularity could be called the near-boundary type. It shows up
whenever both the field and the source point approach either the inner r = a or the
outer r = b fragment of the boundary of .
The accuracy level attainable in the direct valuation of the expansion in (5.79) can
be observed in Fig. 5.6, where the profile G(r, ϕ; 2.0, 4π/9) of the Green’s function
for the annular region with a = 1.0 and b = 3.0 is depicted. The series in (5.79)
was truncated to its tenth partial sum, which is clearly insufficient for a reasonable
approximation.
To find out how the order of a partial sum affects the accuracy level attain-
able by the expansion in (5.79), we present Fig. 5.7. As in Fig. 5.6, the profile
G(r, ϕ; 2.0, 4π/9) of the Green’s function is depicted, with the 100th partial sum of
the series in (5.79) accounted for. Clearly, such a radical increase in the order of the
partial sum notably improves the accuracy level overall, but it still remains low very
close to the angular coordinate ψ of the source point.
5.3 Polar Coordinates 115
As follows from the analysis of the data in Figs. 5.6 and 5.7, the costly involve-
ment of higher partial sums in computing the nonuniform convergent series in (5.79)
can hardly be considered productive. However, an effective way of improving the
convergence of the series prior to its computer implementation might be found. This
can be done by some analytical work on the coefficient gn (r, ). Taking its branch,
which is valid for r ≤ , we have
1 1 1 1
gn (r, ) = − + b2n − 2n r 2n − a 2n
2n(r)n b2n − a 2n b2n b2n
1 a 2n 1
= + b2n − 2n r 2n − a 2n .
2n(r)n b2n (b2n − a 2n ) b2n
With this, the series in (5.79) breaks into two pieces, the first of which, the one
associated with the term
a 2n
,
b2n (b2n − a 2n )
is uniformly convergent. The other series, the one associated with the term 1/b2n ,
is nonuniformly convergent. But it allows a complete summation using the standard
formula from (5.16). This yields a computer-friendly form for the Green’s function
as
∞
1
G(r, ϕ; , ψ) = g0 (r, ) + 2 gn∗ (r, ) cos n(ϕ − ψ)
2π
n=1
1 a4 − 2a 2 r cos(ϕ − ψ) + r 2 2
+ ln
4π r 2 − 2r cos(ϕ − ψ) + 2
1 b4 − 2b2 r cos(ϕ − ψ) + r 2 2
+ ln 4 2 , (5.80)
4π b r − 2a 2 b2 r cos(ϕ − ψ) + a 4 2
Recall that the expression for G(r, ϕ; , ψ) in (5.80) is valid for r ≤ . The fol-
lowing steps should be taken to convert the expression in (5.80) to a form valid for
r ≥ : (i) choose the corresponding branch of g0 (r, ); (ii) interchange the variables
r with in (5.81); and (iii) replace the denominator of the second logarithmic term
in (5.80) with
b4 2 − 2a 2 b2 r cos(ϕ − ψ) + a 4 r 2 .
A shorthand complex-variable-based notation can be introduced for the argu-
ments of the logarithmic terms in (5.80) so that the expression for the Green’s func-
116 5 Eigenfunction Expansion
tion of the Dirichlet problem on the annular region of radii a and b finally reads
∞
1
G(r, ϕ; , ψ) = g0 (r, ) + 2 gn∗ (r, ) cos n(ϕ − ψ)
2π
n=1
1 − zζ ||b2 − zζ |
|a 2
+ ln , (5.82)
2π |z − ζ ||b2 z − a 2 ζ |
where the factor |b2 z − a 2 ζ | in the denominator holds for r ≤ , while for r ≥ it
must be replaced with |b2 ζ − a 2 z|.
It can easily be shown that the representation in (5.82) is notably more efficient
than that of (5.79). Two features support this assertion. First, the principal singular-
ity term is expressed analytically. Second, the series in (5.82) converges uniformly,
allowing a fairly accurate valuation at a relatively low cost. The smooth graph in
Fig. 5.8 convincingly supports the efficient computability of the above representa-
tion of the Green’s function.
As in Figs. 5.6 and 5.7, the profile G(r, ϕ; 2.0, 4π/9) of the Green’s function is
depicted in Fig. 5.8 for the annulus of radii a = 1.0 and b = 3.0, with the series
in (5.82) truncated to its tenth partial sum.
and their forms valid for r ≥ can be obtained from those above by interchanging
the variables r and .
Upon substituting g0 (r, ) and gn (r, ) just shown into (5.79) and proceeding
through some algebra, we obtain the computer-friendly form
∞
1 |b2 z − a 2 ζ ||a 2 − zζ |
∗
G(r, ϕ; , ψ) = ln + gn (r, ) cos n(ϕ − ψ)
2π a 2 |z|2 |z − ζ ||b2 − zζ | n=1
(5.84)
of the Green’s function to the “Dirichlet–Neumann” problem for the annulus of
radii a and b. The r ≤ branch of gn∗ (r, ) is found, in this case, as
while for its r ≥ branch, the variables r and in gn∗ (r, ) must be interchanged.
In addition, the factors |z| and |b2 z − a 2 ζ | in the argument of the logarithmic term
in (5.84) hold for r ≤ , while for r ≥ they must be replaced with |ζ | and |b2 ζ −
a 2 z|, respectively.
The series in (5.84) converges uniformly, allowing an accurate immediate valua-
tion of the Green’s function by truncating the series to the N th partial sum. To verify
this claim, the reader is encouraged to take a close look at the coefficient gn∗ (r, ) of
the series in (5.84). It is also recommended that some profiles of this representation
be depicted for different values of the truncation parameter N .
Example 5.9 Consider another mixed boundary-value problem for the Laplace
equation, that is, the “Neumann–Dirichlet” problem
∂u(b, ϕ)
= 0, u(a, ϕ) = 0, (5.85)
∂r
stated on the annular region = {a < r < b, 0 ≤ ϕ < 2π}.
The Green’s function to the homogeneous boundary-value problem correspond-
ing to that in (5.46) and (5.85), constructed by the eigenfunction expansion method,
appears, this time around, again as the series representation in (5.79). Expressions
for the coefficients g0 (r, ) and gn (r, ) of the series that are valid for r ≤ are
found in this case as
where
a 2n (2n − b2n )(a 2n + r 2n )
gn∗ (r, ) = . (5.87)
n(b2 r)n (b2n + a 2n )
Note that to obtain the branch of G(r, ϕ; , ψ) valid for r ≥ , the variables r
and in (5.87) should be interchanged, while the factors |ζ | and |b2 z − a 2 ζ | in the
argument of the logarithmic term in (5.86) must be replaced with |z| and |b2 ζ −a 2 z|,
respectively.
Uniform convergence of the series in (5.86) can again be justified upon analysis
of its coefficient gn∗ (r, ). To illustrate this assertion, the reader is advised to depict
some profiles of the Green’s function by playing around with different values of the
truncation parameter N .
5.2 Construct the Green’s function of the Laplace equation for the boundary-value
problem
∂u(x, b)
u(0, y) = u(x, 0) = =0
∂y
stated on the semi-infinite strip = {0 < x < ∞, 0 < y < b}.
5.3 Construct the Green’s function of the Laplace equation for the boundary-value
problem
∂u(0, y) ∂u(x, b)
= u(x, 0) = =0
∂x ∂y
5.4 Chapter Exercises 119
5.4 Use the method of eigenfunction expansion to construct the Green’s function of
the Laplace equation for the mixed boundary-value problem
∂u(a, y)
u(0, y) = u(x, 0) = u(x, b) = + βu(a, y) = 0,
∂x
where β ≥ 0, stated on the rectangle = {0 < x < a, 0 < y < b}.
5.5 Construct the Green’s function of the Laplace equation for the “Dirichlet–
mixed” problem
∂u(b, ϕ)
u(a, ϕ) = 0, + βu(b, ϕ) = 0, β ≥ 0,
∂r
stated on the annular region = {a < r < b, 0 ≤ ϕ < 2π}. Notice that in the case
β = 0, your representation of the Green’s function reduces to that of Example 5.8,
whereas in the case of β approaching infinity, it reduces to the form derived in
Example 5.7.
5.6 Construct the Green’s function of the Laplace equation for the “mixed–
Dirichlet” problem
∂u(a, ϕ)
u(b, ϕ) = 0, − βu(a, ϕ) = 0, β ≥ 0,
∂r
stated on the annular region = {a < r < b, 0 ≤ ϕ < 2π}. Treat the cases in which
the parameter β either is equal to zero or approaches infinity.
5.7 Use the method of eigenfunction expansion to construct the Green’s function of
the Laplace equation for the “Neumann–mixed” problem
∂u(a, ϕ) ∂u(b, ϕ)
= 0, + βu(b, ϕ) = 0, β > 0,
∂r ∂r
stated on the annular region = {a < r < b, 0 ≤ ϕ < 2π}. Explain why, in the
case β = 0, the problem is ill posed, implying that its Green’s function does not
exist. Consider also the case of β approaching infinity.
5.8 Use the method of eigenfunction expansion to construct the Green’s function of
the Laplace equation for the “mixed–Neumann” boundary-value problem
∂u(a, ϕ) ∂u(b, ϕ)
− βu(a, ϕ) = 0, = 0,
∂r ∂r
stated on the annular region = {a < r < b, 0 ≤ ϕ < 2π}. Treat the cases in which
the parameter β either is equal to zero or approaches infinity.
120 5 Eigenfunction Expansion
5.9 Use the method of eigenfunction expansion to construct the Green’s function of
the Laplace equation for the “mixed–mixed” boundary-value problem
∂u(a, ϕ) ∂u(b, ϕ)
− β1 u(a, ϕ) = 0, + βu(b, ϕ) = 0, β1 , β2 > 0,
∂r ∂r
stated on the annular region = {a < r < b, 0 ≤ ϕ < 2π}. Explain why in the case
that both the parameters β1 and β2 are equal to zero, the problem is ill posed, im-
plying that its Green’s function does not exist. Observe also that some other Green’s
functions for the annular region obtained earlier in this chapter follow from the
present one.
Chapter 6
Representation of Elementary Functions
While the first five chapters in this book have touched upon more or less standard
topics, the material of the present chapter goes in another direction. The reader will
probably find it surprising. Indeed, the notions of infinite product and Green’s func-
tion, discussed in detail earlier in this volume, have customarily been included in
texts on mathematical analysis and differential equations, respectively. The present
chapter, in contrast, discusses an unusual idea that has never been explored in texts
before. That is, a technique, reported for the first time in [27, 28], is employed here
for obtaining infinite product representations for a number of elementary functions.
The technique is based on the comparison of alternative expressions of Green’s
functions for the two-dimensional Laplace equation that are constructed by different
methods. Some standard boundary-value problems posed on regions of a regular
configuration are considered. Classical closed analytical forms of Green’s functions
for such problems are compared with those obtained by the method of images in
the infinite product form. This comparison appears extremely fruitful. It provides a
number of infinite product representations for some trigonometric and hyperbolic
functions.
As outlined in Chap. 3, the method of images is useful for obtaining closed ana-
lytical expressions of Green’s functions for a certain class of boundary-value prob-
lems posed for the Laplace equation. The sphere of successful implementation of
this method is limited, however, to a narrow class of problems. We begin our presen-
tation in Sect. 6.1 by considering problems for which the method of images does not
represent the best choice for the construction of the Green’s function, because some
other classical methods allow one to obtain the Green’s function in a more compact
computer-friendly form. But it is worth noting that Green’s functions themselves are
not considered as the ultimate goal. The form in which they are expressed is what is
at issue.
To broaden the limited frontiers of successful application of the method of im-
ages, one arrives at expressions of Green’s functions in terms of infinite products.
Those expressions are no match to the compact ones available in the literature and
obtained by other classical methods (see Chaps. 3 and 5 for examples). But what
makes such expressions of Green’s functions really valuable is that they are used
here for the derivation of some identities involving infinite products.
The method of images, which is traditionally used for the construction of Green’s
functions for the Laplace equation, is well described in Chap. 3. The idea behind
the method is to find the location and intensity of point sources and sinks outside
the region in such a way that the homogeneous boundary conditions imposed on the
region’s boundary are satisfied for any location of a unit source inside the region.
The method of images represents one of the standard approaches in the field.
From Chap. 3, the reader may conclude that the complete list of problems al-
lowing a successful implementation of the method of images is quite short. This is
indeed true for the list of such problems for which the method results in a closed
analytical form of Green’s functions. It includes only the Dirichlet problem for a
half-plane; the Dirichlet, Neumann, and Dirichlet–Neumann problems for a quarter-
plane; the Dirichlet problem for a disk; and the Dirichlet problem for some infinite
wedge-shaped regions.
In [27] and [28] a nontrivial accomplishment was reported for the first time on the
application of the method of images to the derivation of infinite product represen-
tations of elementary functions. The method was used for the derivation of Green’s
functions for the infinite and semi-infinite strip, with Dirichlet and Neumann bound-
ary conditions imposed. Comparison of such infinite-product-containing representa-
tions of Green’s functions with their classical analytical forms brings an unexpected
discovery.
To lay out a working background for our approach to the derivation of infinite
product representations for a number of elementary functions, we will revisit some
classical expressions of Green’s functions for the Laplace equation that have con-
ventionally been obtained by a variety of methods. Alternative representations of
those Green’s functions are later constructed here by the method of images in the
infinite product form. Comparison of the two representations of the same Green’s
function entails a number of “summation” formulas for infinite functional products.
Example 6.1 We begin our presentation by considering the Dirichlet problem for
the Laplace equation stated on the infinite strip = {−∞ < x < ∞, 0 < y < b}.
The closed analytical form
1 1 − 2eω(x−ξ ) cos ω(y + η) + e2ω(x−ξ ) π
G(x, y; ξ, η) = ln , ω = , (6.1)
2π 1 − 2eω(x−ξ ) cos ω(y − η) + e 2ω(x−ξ ) b
6.1 Method of Images Extends Frontiers 123
of the Green’s function for this problem is available in standard texts [15, 18] on
partial differential equations. As follows from Chaps. 3 and 5, it can be derived
by either the method of conformal mapping or the method of eigenfunction expan-
sion. In [16], for example, it was obtained by a modified version of the method
of eigenfunction expansion. That version was first proposed in [12]. It provides a
computer-friendly form of the Green’s function, which becomes possible due to ei-
ther complete (as in the case under consideration) or partial summation of its series
representation.
+ +
with the unit sources S2,0 and S2,b located at D(ξ, −2b + η) and E(ξ, 2b + η). The
responses to these at (x, y) are given as
1 2
G+2,0 (x, y; ξ, −2b + η) = − ln (x − ξ )2 + y − (−2b + η)
2π
and
1 2
G+
2,b (x, y; ξ, 2b + η) = − ln (x − ξ )2 + y − (2b + η) .
2π
Traces of the functions G+ +
2,0 (x, y; ξ, −2b + η) and G2,b (x, y; ξ, 2b + η) on y = 0
− −
and y = b can, in turn, be compensated with the unit sinks S3,0 and S3,b located at
F (ξ, −2b − η) and H (ξ, 4b − η), respectively.
Following the described procedure of properly placing compensatory unit
sources that alternate with unit sinks, the Green’s function G = G(x, y; ξ, η) that
we are looking for is obtained in the infinite series form
∞
∞
− +
G = G+
0 + G2i−1,0 + G−
2i−1,b + G2i,0 + G+
2i,b .
i=1 i=1
Since the terms of this series represent logarithmic functions, its N th partial sum
N
− N
+
SN (x, y; ξ, η) = G+
0 + G2i−1,0 + G−
2i−1,b + G2i,0 + G+
2i,b
i=1 i=1
Taking the limit as N approaches infinity, we obtain the final form of the Green’s
function that we are looking for as
∞
1 (x − ξ )2 + (y + η − 2nb)2
G(x, y; ξ, η) = ln . (6.2)
2π n=−∞ (x − ξ )2 + (y − η + 2nb)2
Thus, (6.2) provides another representation of the Green’s function for the
Dirichlet problem for the Laplace equation stated on the infinite strip. The above
can be considered an alternative to the classical form presented in (6.1). It is evi-
dent, however, that the representation in (6.2) cannot be recommended for practical
use, since it is not that computer-friendly compared to the closed form in (6.1). But
computability is not an issue in the discussion that follows.
The radicand in either (6.1) or (6.2) is a fraction whose numerator and denomi-
nator represent a distance between two points. Hence, the radicands are nonnegative
6.1 Method of Images Extends Frontiers 125
∞
(x − ξ )2 + [y + (η − 2nb)]2 1 − 2eω(x−ξ ) cos ω(y + η) + e2ω(x−ξ )
= .
n=−∞
(x − ξ )2 + [y − (η − 2nb)]2 1 − 2eω(x−ξ ) cos ω(y − η) + e2ω(x−ξ )
This relation can be interpreted as a “summation” formula for the infinite prod-
uct. In order to reduce the above to a more compact form, assume that b = π , and
introducing the parameters β = x − ξ , 2t = y + η and 2u = y − η, we obtain the
multivariable identity
∞
β 2 + 4(t − nπ)2 1 − 2eβ cos 2t + e2β
= . (6.3)
n=−∞
β 2 + 4(u + nπ)2 1 − 2eβ cos 2u + e2β
To obtain ranges for the parameters β, t , and u in (6.3), we recall that both the
observation point (x, y) and the source point (ξ, η) are interior to the infinite strip
. This makes the identity in (6.3) valid (at least, formally) for
given that the parameters β and u are not equal zero at the same time.
But it is important to note that if the product in (6.3) happens to be uniformly
convergent for a wider range of the variables t and u, then the constraints on these
variables in (6.4) can be revised accordingly.
The identity in (6.3), along with other identities to be derived in this section, will
play a significant role in the further development.
The scheme presented in Fig. 6.2 may help the reader to follow the procedure
of the method of images which is similar to that described earlier for the Dirichlet
problem.
We look for an alternative representation to (6.5) of the Green’s function for the
Dirichlet–Neumann problem stated on the infinite strip. It can again be obtained
as an aggregate response to an infinite number of properly spaced unit sources and
sinks. Their locations are chosen in compliance with the following pattern.
126 6 Representation of Elementary Functions
Continuing this process and proceeding in compliance with the scheme described
in Example 6.1, the Green’s function that we are looking for is ultimately obtained
in the following infinite product form:
∞
1 (x − ξ )2 + (y + η + 4nb)2
G(x, y; ξ, η) = ln
2π n=−∞ (x − ξ )2 + (y − η + 4nb)2
(x − ξ )2 + [y − η + 2(2n + 1)b]2
× , (6.6)
(x − ξ )2 + [y + η + 2(2n + 1)b]2
which can be viewed as an alternative to the closed analytical form exhibited earlier
in (6.5).
By comparison of the equivalent expressions in (6.6) and (6.5), one arrives at the
multivariable identity
∞
(x − ξ )2 + [y − η + 2(2n + 1)b]2
n=−∞
(x − ξ )2 + [y + η + 2(2n + 1)b]2
(x − ξ )2 + (y + η + 4nb)2
×
(x − ξ )2 + (y − η + 4nb)2
1 + 2eω(x−ξ ) cos ω(y − η) + e2ω(x−ξ )
=
1 − 2eω(x−ξ ) cos ω(y − η) + e2ω(x−ξ )
1 − 2eω(x−ξ ) cos ω(y + η) + e2ω(x−ξ )
× .
1 + 2eω(x−ξ ) cos ω(y + η) + e2ω(x−ξ )
To obtain a more compact form for this relation, we assume b = π/2, which
evidently implies that ω = 1, and introduce the parameters β = x − ξ , t = y + η,
and u = y − η. This yields
∞
[β 2 + (t + 2nπ)2 ][β 2 + (u + (2n + 1)π)2 ]
n=−∞
[β 2 + (u + 2nπ)2 ][β 2 + (t + (2n + 1)π)2 ]
The above identity, along with that in (6.3) and some others to be obtained later
in this section, is crucial for the major issue of the present chapter, which is the
derivation of infinite product representations for some elementary functions.
We turn now to other classical Green’s functions and apply our technique based
on the method of images to some boundary-value problems formulated for the
Laplace equation on a semi-infinite strip.
128 6 Representation of Elementary Functions
Example 6.3 Consider first the Dirichlet problem on the semi-infinite strip =
{0 < x < ∞, 0 < y < b}. The classical compact form
1 1 − 2eω(x+ξ ) cos ω(y − η) + e2ω(x+ξ )
G(x, y; ξ, η) = ln
2π 1 − 2eω(x−ξ ) cos ω(y − η) + e2ω(x−ξ )
1 − 2eω(x−ξ ) cos ω(y + η) + e2ω(x−ξ ) π
× , ω= , (6.8)
1 − 2eω(x+ξ ) cos ω(y + η) + e2ω(x+ξ ) b
of its Green’s function can be found in most classical sources. In [16], for example,
it was obtained by the modified version of the method of eigenfunction expansion.
Another form of the Green’s function for the problem under consideration will
be obtained here with the aid of the method of images. To trace its procedure in a
way similar to that described in detail in Examples 6.1 and 6.2, the reader is invited
to follow, in this case, the derivation scheme depicted in Fig. 6.3.
The potential field generated by a unit source acting at an arbitrary point A(ξ, η)
in can be compensated on the edges y = 0 and y = b with unit sources and sinks
placed at the regular set of points B(ξ, −η), C(ξ, 2b − η), D(ξ, −2b + η), E(ξ, 2b +
η), F (ξ, −2b − η), H (ξ, 4b − η), and so on. All these points are located outside
of . In other words, these sources and sinks allow us to satisfy the homogeneous
Dirichlet boundary conditions imposed on the edges y = 0 and y = b of .
As to the boundary condition imposed on the edge x = 0, the influence of the
sources and sinks acting at A, B, C, D, E, F , H , and so on can, in turn, be
compensated on that boundary line with unit sources and sinks if we place them
at another set of points K(−ξ, η), L(−ξ, −η), N(−ξ, 2b − η), P (−ξ, −2b + η),
R(−ξ, 2b + η), S(−ξ, −2b − η), T (−ξ, 4b − η), and so on. It is evident that the
latter sources and sinks do not conflict with the boundary conditions on y = 0 and
y = b.
Thus, upon combining the influence of all the compensatory sources and sinks
shown in Fig. 6.3, one arrives at an alternative form to (6.8) of the Green’s function
of the Dirichlet problem for the semi-infinite strip = {0 < x < ∞, 0 < y < b}.
After some trivial algebra, it is ultimately obtained in the infinite product-containing
6.1 Method of Images Extends Frontiers 129
form
∞
1 (x − ξ )2 + (y + η − 2nb)2
G(x, y; ξ, η) = ln
2π n=−∞ (x − ξ )2 + (y − η + 2nb)2
(x + ξ )2 + (y − η + 2nb)2
× . (6.9)
(x + ξ )2 + (y + η − 2nb)2
Note that the above identity, along with those of (6.3) and (6.7), creates a back-
ground for our further work on the infinite product representation of elementary
functions.
Example 6.4 As another example for the semi-infinite strip = {0 < x < ∞, 0 <
y < b}, we consider a mixed boundary-value problem. That is, let Dirichlet condi-
tions be imposed on the boundary fragments y = 0 and y = b, while the Neumann
condition is imposed on x = 0. The compact form
1 1 − 2eω(x+ξ ) cos ω(y + η) + e2ω(x+ξ )
G(x, y; ξ, η) = ln
2π 1 − 2eω(x−ξ ) cos ω(y − η) + e2ω(x−ξ )
1 − 2eω(x−ξ ) cos ω(y + η) + e2ω(x−ξ ) π
× , ω = , (6.11)
1 − 2eω(x+ξ ) cos ω(y − η) + e 2ω(x+ξ ) b
130 6 Representation of Elementary Functions
of the Green’s function for this Dirichlet–Neumann problem is presented, for exam-
ple, in [16].
An alternative form to (6.11) of the Green’s function can be derived with the aid
of the scheme exhibited in Fig. 6.4. As the previous example suggests, the traces
of the fundamental solution (the field generated by a unit source acting at an arbi-
trary point A(ξ, η) in ) on the edges y = 0 and y = b are compensated with unit
sources and sinks placed at a set of points exterior to : B(ξ, −η), C(ξ, 2b − η),
D(ξ, −2b + η), E(ξ, 2b + η), F (ξ, −2b − η), H (ξ, 4b − η), and so on.
To satisfy the Neumann condition imposed on the edge x = 0, the influence of
the sources and sinks acting at A, B, C, D, E, F , H , and so on can, similarly
to the Dirichlet problem, be compensated with unit sources and sinks if we place
them at the set of points K(−ξ, η), L(−ξ, −η), N (−ξ, 2b − η), P (−ξ, −2b + η),
R(−ξ, 2b + η), S(−ξ, −2b − η), T (−ξ, 4b − η), and so on exterior to . The or-
der of sources and sinks is, however, different from that suggested earlier for the
Dirichlet problem.
Proceeding further with the method of images, one arrives at the infinite-product-
containing representation
∞
1 (x − ξ )2 + (y + η − 2nb)2
G(x, y; ξ, η) = ln
2π n=−∞ (x − ξ )2 + (y − η + 2nb)2
(x + ξ )2 + (y + η − 2nb)2
× (6.12)
(x + ξ )2 + (y − η + 2nb)2
(x + ξ )2 + (y + η − 2nb)2
×
(x + ξ )2 + (y − η + 2nb)2
6.2 Trigonometric Functions 131
where the parameter t can take on any real value. As to the parameter u, it cannot
equal nπ , with n = 0, ±1, ±2, . . . .
It is evident that the identity that we just arrived at in (6.14) holds if the identity
∞
t − nπ sin t
= (6.15)
n=−∞
u + nπ sin u
holds as well. The above identity represents, in fact, an infinite product expansion
of the two-variable function
sin t
F (t, u) = .
sin u
132 6 Representation of Elementary Functions
∞ ∞
t − nπ t (t − kπ)(t + kπ)
=
n=−∞
u + nπ u (u + kπ)(u − kπ)
k=1
∞ ∞
t t 2 − k2π 2 t t 2 − u2 + u2 − k 2 π 2
= =
u u2 − k 2 π 2 u u2 − k 2 π 2
k=1 k=1
∞
t t 2 − u2
= 1+ .
u u2 − k2π 2
k=1
The form that the product in (6.15) reduces to implies [5, 9] that it converges
uniformly if the series
∞
t 2 − u2
u2 − k 2 π 2
k=1
does. But the above represents the p-series (also referred to in some sources as
the generalized harmonic series) with convergence rate of order 1/k 2 . Hence, it
converges uniformly [9] for any finite value of t and u = kπ . This makes it possible
to conclude that the constraints put on the parameters t and u in Sect. 6.1 (see (6.4))
can be revised. This, in turn, implies that the product in (6.15) converges uniformly
to a value of the function F (t, u) at any point (t, u) in its domain.
In what follows, the reader will be introduced to a number of infinite product
representations for single-variable trigonometric functions that can be obtained from
the identities in (6.14) or (6.15). Note that most of the representations, we will arrive
at in this section were reported for the first time in [27] and [28].
Let us revisit the two-variable identity in (6.15) and assume u = π/2. The iden-
tity transforms in this case to the expansion
∞
2(t − nπ)
sin t = (6.16)
n=−∞
(2n + 1)π
Thus, it appears that an infinite product representation is obtained for the trigono-
metric sine function. But this raises a natural question about the relationship be-
tween the representation in (6.17) and the classical [9] Euler expansion
∞
t2
sin t = t 1− 2 2 , (6.18)
k π
k=1
which has been referenced and dealt with multiple times in this volume.
Close analysis shows that the forms in (6.17) and (6.18) are unrelated, meaning
that neither of them follows from the other. This makes it possible to assert that
(6.17) is simply an alternative to (6.18).
Note that the representation in (6.18) has been around in mathematics for the past
two hundred and fifty plus years, owing to the genius of Leonhard Euler [1]. It is
obvious that his name needs no recommendation. It is known to everyone who is at
least superficially familiar with the history of the natural sciences. That phenomenal
Swiss mathematician made countless decisive contributions to different areas of
mathematics, mechanics, and engineering sciences.
To provide the reader with some perspective on the intellectual greatness of Eu-
ler, we recall a comment of another giant, who represents an indisputable authority
in the mathematical sciences. Being impressed with and inspired by the beauty and
elegance of Euler’s ideas, which had influenced a huge army of his pupils and fol-
lowers, the French mathematician and physicist Pierre Simon Laplace [26] once
exclaimed: “Read Euler, read Euler, he is the master of us all.” What could be more
convincing than such recognition!
It can be seen clearly that the infinite products in (6.17) and (6.18) converge at
the same rate. This assertion follows from the form of their general terms. Indeed,
both products converge at the same rate 1/k2 . It appears from close observation,
however, that the actual convergence of the product in (6.17) is somewhat faster than
that in (6.18). This observation by no means conflicts with the a priori estimate, but
rather gives a comparison of the practical convergencies of the two expansions.
The latter point is well illustrated in Figs. 6.5 and 6.6, where, to give a clear
view of the convergence rate of both representations, we display graphs of their Kth
partial products
(17)
2t
K
4t 2 − π 2
= 1+
π (1 − 4k 2 )π 2
K k=1
and
(18) K
t2
=t 1− .
k2π 2
K k=1
134 6 Representation of Elementary Functions
for the cosine function directly follows from the expansion in (6.17), representing
an alternative to another classical [9] Euler form
∞
4t 2
cos t = 1− . (6.20)
(2k − 1)2 π 2
k=1
We revisit again the identity in (6.15) and let the parameter t there equal π/2.
This converts (6.15) to the representation
6.2 Trigonometric Functions 135
∞ ∞
(1 − 2n)π 2t + π
csc t = = −1 +
n=−∞
2(t + nπ) n=−∞ 2(t + nπ)
∞
π 1 − 4t 2
= 1+ (6.21)
2t 4(t 2 − k 2 π 2 )
k=1
The equivalence of the two infinite products in the above identity is evident,
given that each of them is unchanged by the replacement of the multiplication index
n with −n.
It appears that the identity shown in (6.15) might help in deriving alternative
forms for other rare infinite product expansions that are available in the literature.
To verify this assertion, we recall the representation
∞ 2
sin 3t 2t
=− 1− , (6.22)
sin t n=−∞
t + nπ
∞
t − nπ sin t
=
n=−∞
u + nπ sin u
This yields
∞ ∞
sin At A A2 t 2 − k 2 π 2 A (A2 − B 2 )t 2
= = 1+ 2 2 , Bx = nπ, (6.23)
sin Bt B B t −k π
2 2 2 2 B B t − k2π 2
k=1 k=1
136 6 Representation of Elementary Functions
∞
sin 3t 3t − nπ
= , t = nπ, (6.24)
sin t n=−∞
t + nπ
∞ ∞
∞
3t − nπ (3t − kπ)(3t + kπ) 9t 2 − k 2 π 2
=3 =3 .
n=−∞
t + nπ (t + nπ)(t − kπ) t 2 − k2π 2
k=1 k=1
As to the product in (6.22), we can prove that it reduces to the same expression.
Indeed, after trivial transformations we have
∞
2
2t
− 1−
n=−∞
t + nπ
∞
2 2
2t 2t
=3 1− 1−
t + kπ t − kπ
k=1
∞
16t 4 4t 2 4t 2
=3 1+ − +
(t 2 − k 2 π 2 )2 (t + kπ)2 (t − kπ)2
k=1
∞
∞
9t 4 − 10k 2 π 2 t 2 + k 4 π 4 9t 2 − k 2 π 2
=3 = 3 .
(t 2 − k 2 π 2 )2 t 2 − k2 π 2
k=1 k=1
Thus, the expansions in (6.22) and (6.24) are indeed equivalent. The reader is
encouraged, in Exercise 6.5, to obtain a graphical illustration of the equivalence of
these two expansions.
Note that the expansion in (6.23) can be obtained by directly expressing sin At
and sin Bt with the aid of the representation in (6.17). Indeed, the latter suggests
that
∞ ∞
2At 4A2 t 2 − π 2 2At 4(A2 t 2 − k 2 π 2 )
sin At = 1+ = ,
π (1 − 4k 2 )π 2 π (1 − 4k 2 )π 2
k=1 k=1
while
∞ ∞
2Bt 4B 2 t 2 − π 2 2Bt 4(B 2 t 2 − k 2 π 2 )
sin Bt = 1+ = .
π (1 − 4k 2 )π 2 π (1 − 4k 2 )π 2
k=1 k=1
6.2 Trigonometric Functions 137
Thus we have
∞
sin At A A2 t 2 − k 2 π 2
= .
sin Bt B B 2t 2 − k2π 2
k=1
Interestingly enough, the Euler expansion in (6.18) also directly leads to that
in (6.23).
We continue with the derivation of infinite product representations for trigono-
metric functions. In doing so, let us assume A = 2 and B = 1 in (6.23). This yields
∞
sin 2t 2t − nπ
= 2 cos t = .
sin t n=−∞
t + nπ
This immediately yields another uniformly convergent expansion for the cosine
function,
∞ ∞
1 2t − nπ 3t 2
cos t = = 1+ 2 , (6.25)
2 n=−∞ t + nπ t − k2 π 2
k=1
which represents yet another alternative to those exhibited earlier in (6.19)
and (6.20). Yet another infinite product representation for the cosine function can be
obtained from that in (6.23) if we assume there A = 1/2 and B = 1. This yields
∞
sin t/2 t − 2nπ
= ,
sin t n=−∞
2(t + nπ)
or
∞
sin2 t/2 1 (t − 2nπ)2
= = ,
sin2 t 2(1 + cos t) n=−∞ 4(t + nπ)2
from which, solving for cos t, we obtain another alternative infinite product repre-
sentation for the cosine function:
∞
1 4(t + nπ)2
cos t = −1 + . (6.26)
2 n=−∞ (t − 2nπ)2
Since the degree of the polynomial in k in the denominator is two units higher
than that in the numerator (four against two), we conclude that the product in (6.26)
converges at the rate 1/k 2 .
138 6 Representation of Elementary Functions
Another alternative infinite product expansion for the cosine function directly
follows from the identity
∞
(t − nπ)2 1 − cos 2t
= ,
n=−∞
(u + nπ) 2 1 − cos 2u
which we saw earlier in (6.14). Indeed, assuming in the above t := t/2 and u = π/4,
we reduce it to
∞
4(t − 2nπ)2
cos t = 1 − . (6.27)
n=−∞
(1 + 4n)2 π 2
sin t sin t
= = tan t,
sin u sin(π/2 − t)
6.2 Trigonometric Functions 139
The uniform convergence of this representation, for any value of t in the domain
of the tangent function, clearly follows from the analysis of the expansion in (6.15)
that was completed earlier in this section.
An alternative to the infinite product representation in (6.28) for the tangent func-
tion can be obtained from the identity in (6.7). Indeed, if the parameter β is set equal
to zero in (6.7), then the latter reads as
∞
(t + 2nπ)2 [u + (2n + 1)π]2 (1 − cos t)(1 + cos u)
=
n=−∞
(u + 2nπ)2 [t + (2n + 1)π]2 (1 − cos u)(1 + cos t)
t u
= tan2 cot2 .
2 2
It is evident that the above identity holds, for values of t and u from the domain
of the function tan2 2t cot2 u2 if the identity
∞
t u (t + 2nπ)[u + (2n + 1)π]
tan cot = (6.29)
2 2 n=−∞ (u + 2nπ)[t + (2n + 1)π]
also holds.
The identity in (6.29) can be further transformed. Assuming u = π/2 and
t/2 := t, we arrive at
∞
2(3 + 4n)(t + nπ)
tan t = . (6.30)
n=−∞
(1 + 4n)[2t + (2n + 1)π]
Uniform convergence of this infinite product, for any value of t in the domain of
the tangent function, can be verified if we transform it into
∞
6t 4(9 − 16k 2 )(t 2 − k 2 π 2 )
tan t =
2t − π (1 − 16k 2 )[(2t + π)2 − 4k 2 π 2 ]
k=1
converges at the rate 1/k 2 , the infinite product in (6.30) uniformly converges to the
tangent function at every point in its domain.
Figure 6.8 gives a clear view of the convergence rate of the expansions in (6.28)
and (6.30). The 10th partial products are shown. It is important to note that one of
these expansions approximates exact values of the tangent function strictly from
above, whereas the other one does so strictly from below. Note also that this
sandwich-type feature holds for every value of the truncation parameter K, mak-
ing convenient the simultaneous use of both expansions in (6.28) and (6.30).
It is evident that the relation that we obtained in (6.30) yields the infinite product
representation
∞
(1 + 4n)[2t + (2n + 1)π]
cot t = , t = nπ (6.31)
n=−∞
2(3 + 4n)(t + nπ)
for the cotangent function follows from the expansion for the tangent function
in (6.28).
By the way, the representation in (6.31) can be directly obtained from that
in (6.29) by letting t = π/2 and making the substitution u/2 := t.
Another infinite product representation of a trigonometric function can directly
be obtained from that in (6.29), that is,
∞
tan At (At + nπ)[2Bt + (1 + 2n)π]
= . (6.33)
tan Bt n=−∞ (Bt + nπ)[2At + (1 + 2n)π]
This follows from the identity in (6.29) if a single variable t is introduced there as
t/2 := At and u/2 := Bt, where A and B are real constants that meet the following
constraints:
In Exercise 6.8, the reader is advised to analyze the representation in (6.33) and
determine its convergence rate. This can be accomplished using the approach em-
ployed earlier in this section.
In the next section, we will show that the use of the identities derived earlier in
Sect. 6.1 can be extended to another type of elementary functions. Namely, those
identities also allow one to derive some infinite product representations for some
hyperbolic functions.
The identities derived earlier in Sect. 6.1 (see (6.3), (6.7), (6.10), and (6.13)) are also
helpful in obtaining some infinite product representations for hyperbolic functions.
But before we proceed with specifics, let us revisit some of the infinite product
expansions obtained in Sect. 6.2 for trigonometric functions and figure out what
those expansions transform to with the aid of the analytic continuation formulas
Similarly to the conversion of the classical Euler infinite product expansion for
the trigonometric sine function (see (2.1) of Chap. 2) into the expansion for the
hyperbolic sine function in (2.3), which was accomplished with the aid of the first
of the formulas in (6.34), the expansion
∞
2t 4t 2 − π 2
sin t = 1+
π (1 − 4k 2 )π 2
k=1
from (6.25), for example, and utilizing the second of the formulas in (6.34), we have
∞
3t 2
cosh t = 1+ . (6.36)
t2 + k2 π 2
k=1
142 6 Representation of Elementary Functions
for the trigonometric cosine shown in (6.26) also works. But before going to its
analytic continuation, we convert it to an equivalent form as
∞
16(t 2 − n2 π 2 )2
cos t = −1 + 2 .
(t 2 − 4n2 π 2 )2
k=1
This yields
∞
16(t 2 + n2 π 2 )2
cosh t = −1 + 2 . (6.37)
(t 2 + 4n2 π 2 )2
k=1
Another alternative to the two infinite product representations for cosh t just pre-
sented follows from
∞
4(t − 2nπ)2
cos t = 1 − ,
n=−∞
(1 + 4n)2 π 2
we then obtain
∞
4t 2 16(t 2 + 4n2 π 2 )2
cosh t = 1 + . (6.38)
π2 (1 − 16n2 )2 π 4
k=1
Some other infinite product representations of trigonometric functions can also
be immediately converted upon analytic continuation. Revisiting, for example, the
relation
∞
sin At A A2 t 2 − k 2 π 2
= ,
sin Bt B B 2 t 2 − k2 π 2
k=1
derived earlier in Sect. 6.2, we convert it into a corresponding relation written in
terms of hyperbolic functions, which reads as
∞
sinh At A A2 t 2 + k 2 π 2
= . (6.39)
sinh Bt B B 2t 2 + k2π 2
k=1
can be further extended. We are not going to explore this track in more detail but
encourage the reader to do so.
In the remaining part of this section, we will be investigating the potential of
the identities derived in Sect. 6.1. To begin, we revisit first the identity in (6.3). By
assuming for the variables t and u the values t = 0 and u = π/2, we transform (6.3)
into the single-variable identity
∞
β 2 + 4n2 π 2 (1 − eβ )2
= .
n=−∞
β 2 + (1 + 2n)2 π 2 (1 + eβ )2
This delivers the following dual expansion for the hyperbolic tangent function:
∞
t 2 + n2 π 2
tanh t = ± 2 , (6.42)
n=−∞
4t 2 + (1 + 2n)2 π 2
with the minus sign, the variable t is assumed less then zero.
It can be shown that the infinite product representations in (6.41) and (6.42) con-
verge uniformly for −∞ < t < ∞. We will verify this assertion by the method used
earlier for the product in (6.15). In doing so, we isolate first the term with n = 0
in (6.41), which is equal to
4t 2
,
4t 2 + π 2
144 6 Representation of Elementary Functions
and then pair the terms with n = k and n = −k. This transforms the expansion
in (6.41) into
∞
4t 2 16(t 2 + k 2 π 2 )2
tanh2 t = . (6.43)
4t 2 + π 2 [4t 2 + (1 + 2k)2 π 2 ][4t 2 + (1 − 2k)2 π 2 ]
k=1
16(t 2 + k 2 π 2 )2
[4t 2 + (1 + 2k)2 π 2 ][4t 2 + (1 − 2k)2 π 2 ]
π 2 [8t 2 + (1 − 8k 2 )π 2 ]
1− . (6.44)
[4t 2 + (1 + 2k)2 π 2 ][4t 2 + (1 − 2k)2 π 2 ]
Hence, since the numerator in the second additive term in (6.44) represents a
second-degree polynomial in k, whereas the degree of its denominator is four, we
conclude that the infinite product in (6.41) is indeed convergent, with convergence
rate of order 1/k2 .
Another infinite product representation for a hyperbolic function can be obtained
from another multivariable identity also derived earlier in Sect. 6.1. That is, assign-
ing in (6.7) the values of 0 and π for the variables t and u, respectively, we read it
as
∞
1 − eβ 4 (β 2 + 4n2 π 2 )[β 2 + 4(1 + n)2 π 2 ]
= ,
1 + eβ n=−∞
[β 2 + (1 + 2n)2 π 2 ]2
∞
16(t 2 + n2 π 2 )[t 2 + (1 + n)2 π 2 ]
tanh4 t = (6.45)
n=−∞
[4t 2 + (1 + 2n)2 π 2 ]2
The uniform convergence of the above infinite product can be proven for any real
value of t in the way applied earlier to the identity in (6.41).
6.3 Hyperbolic Functions 145
The identity in (6.10) can also be used to derive some infinite product represen-
tations for hyperbolic functions. Indeed, assuming the values of 0 and π for the
variables t and u, respectively, we arrive at the expansion
∞
α β (β 2 + 4n2 π 2 )[α 2 + (1 + 2n)2 π 2 ]
coth2 tanh2 = . (6.46)
2 2 n=−∞ (α 2 + 4n2 π 2 )[β 2 + (1 + 2n)2 π 2 ]
It is worth noting that the expansion in (6.40) follows from that in (6.46) if α is
taken to infinity. If, on the other hand, the limit is taken in (6.46) as β approaches
infinity, then one arrives at the expansion
\coth^2\frac{\alpha}{2} = \prod_{n=-\infty}^{\infty}\frac{\alpha^2+(1+2n)^2\pi^2}{\alpha^2+4n^2\pi^2},
which reads as
\coth^2 t = \prod_{n=-\infty}^{\infty}\frac{4t^2+(1+2n)^2\pi^2}{4(t^2+n^2\pi^2)}, \qquad t \neq 0,   (6.47)
if the variable t is introduced in terms of α as t := α/2. Note that the above expan-
sion has been derived from the identity of (6.10). But it can also be directly obtained
as a reciprocal of the expansion in (6.41).
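A direct numerical check of (6.47) is also instructive. In the Python sketch below (illustrative names; the doubly infinite product is truncated symmetrically at |n| <= N), the truncation error decays like 1/N, in agreement with the 1/k^2 behavior of the paired factors.

import math

def coth_sq_via_647(t, N=2000):
    # Symmetric truncation of (6.47), valid for t != 0:
    #   coth^2 t ~ prod_{n=-N..N} [4 t^2 + (1+2n)^2 pi^2] / [4 (t^2 + n^2 pi^2)]
    p = 1.0
    for n in range(-N, N + 1):
        num = 4.0 * t * t + (1 + 2 * n) ** 2 * math.pi ** 2
        den = 4.0 * (t * t + (n * math.pi) ** 2)
        p *= num / den
    return p

if __name__ == "__main__":
    t = 0.8
    print(coth_sq_via_647(t), 1.0 / math.tanh(t) ** 2)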
Dual expansion for the hyperbolic cotangent function follows from that in (6.47) as

\coth t = \pm\frac{1}{2}\prod_{n=-\infty}^{\infty}\frac{4t^2+(1+2n)^2\pi^2}{t^2+n^2\pi^2}, \qquad t \neq 0,   (6.48)

with the plus sign corresponding to positive values of t. To examine the convergence of the expansion in (6.47), we pair the terms with n = k and n = -k, converting it first to

\coth^2 t = \frac{4t^2+\pi^2}{4t^2}\prod_{k=1}^{\infty}\frac{[4t^2+(1+2k)^2\pi^2][4t^2+(1-2k)^2\pi^2]}{16(t^2+k^2\pi^2)^2}

and then to

\coth^2 t = \frac{4t^2+\pi^2}{4t^2}\prod_{k=1}^{\infty}\left\{1+\frac{\pi^2[8t^2+(1-8k^2)\pi^2]}{16(t^2+k^2\pi^2)^2}\right\}.
Since the numerator of the second additive component in the braces represents a
second-degree polynomial in k, while the degree of the denominator polynomial is
two units higher, the expansion in (6.47) converges at a rate of order 1/k^2.
Clearly enough, the expansions in (6.47) and (6.48) can be directly obtained from
those in (6.41) and (6.42), respectively.
An interesting infinite product representation for a single-variable hyperbolic function follows from the two-variable identity in (6.46). Indeed, if a new variable t is introduced there as α = 2At and β = 2Bt, where A and B represent real constants, with A ≠ 0, then the right-hand side of the identity in (6.46) transforms into the infinite product expansion

\prod_{n=-\infty}^{\infty}\frac{(B^2t^2+n^2\pi^2)[4A^2t^2+(1+2n)^2\pi^2]}{(A^2t^2+n^2\pi^2)[4B^2t^2+(1+2n)^2\pi^2]}   (6.49)

of the function

F(t) = \frac{\tanh^2 Bt}{\tanh^2 At},
whose domain clearly is the set of all real numbers except for t = 0. Hence, the
expansion in (6.49) must converge uniformly in the domain of F (t). This statement
can, moreover, be strengthened with the assertion that the expansion in (6.49) converges at t = 0 as well, and that its value at t = 0 is the limit of F(t) as t approaches zero. That is,

\lim_{t\to 0}\frac{\tanh^2 Bt}{\tanh^2 At} = \frac{B^2}{A^2}.
This assertion is not evident. To verify it, we split off the term with n = 0 in (6.49). This yields

\prod_{n=-\infty}^{\infty}\frac{(B^2t^2+n^2\pi^2)[4A^2t^2+(1+2n)^2\pi^2]}{(A^2t^2+n^2\pi^2)[4B^2t^2+(1+2n)^2\pi^2]}
 = \frac{B^2(4A^2t^2+\pi^2)}{A^2(4B^2t^2+\pi^2)}\prod_{k=1}^{\infty}\frac{(B^2t^2+k^2\pi^2)[4A^2t^2+(1+2k)^2\pi^2]}{(A^2t^2+k^2\pi^2)[4B^2t^2+(1+2k)^2\pi^2]}
 \times\prod_{k=1}^{\infty}\frac{(B^2t^2+k^2\pi^2)[4A^2t^2+(1-2k)^2\pi^2]}{(A^2t^2+k^2\pi^2)[4B^2t^2+(1-2k)^2\pi^2]}.
Combining the two infinite products into one, we can rewrite the above relation as

\prod_{n=-\infty}^{\infty}\frac{(B^2t^2+n^2\pi^2)[4A^2t^2+(1+2n)^2\pi^2]}{(A^2t^2+n^2\pi^2)[4B^2t^2+(1+2n)^2\pi^2]}
 = \frac{B^2(4A^2t^2+\pi^2)}{A^2(4B^2t^2+\pi^2)}\prod_{k=1}^{\infty}\frac{(B^2t^2+k^2\pi^2)^2\,[16A^4t^4+8(1+4k^2)A^2t^2\pi^2+(1-4k^2)^2\pi^4]}{(A^2t^2+k^2\pi^2)^2\,[16B^4t^4+8(1+4k^2)B^2t^2\pi^2+(1-4k^2)^2\pi^4]}.
k=1
It can readily be seen that the general term in the last infinite product represents
unity at t = 0, implying that the value of the expansion in (6.49) at t = 0 is indeed
B^2/A^2. This allows us finally to obtain the expansion

\frac{\tanh^2 Bt}{\tanh^2 At} = \prod_{n=-\infty}^{\infty}\frac{(B^2t^2+n^2\pi^2)[4A^2t^2+(1+2n)^2\pi^2]}{(A^2t^2+n^2\pi^2)[4B^2t^2+(1+2n)^2\pi^2]},

which holds for any real value of t.
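The limiting value just discussed is easy to probe numerically. In the Python sketch below (illustrative names; the product is truncated symmetrically at |n| <= N), the truncated product agrees with tanh^2 Bt / tanh^2 At away from zero and approaches B^2/A^2 as t becomes small.

import math

def product_649(t, A, B, N=400):
    # Symmetric truncation of the expansion (6.49)
    p = 1.0
    for n in range(-N, N + 1):
        num = (B * B * t * t + (n * math.pi) ** 2) * (4 * A * A * t * t + (1 + 2 * n) ** 2 * math.pi ** 2)
        den = (A * A * t * t + (n * math.pi) ** 2) * (4 * B * B * t * t + (1 + 2 * n) ** 2 * math.pi ** 2)
        p *= num / den
    return p

if __name__ == "__main__":
    A, B = 2.0, 3.0
    for t in (1.0, 0.1, 1e-3):
        exact = (math.tanh(B * t) / math.tanh(A * t)) ** 2
        print(t, product_649(t, A, B), exact, (B / A) ** 2)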
Another of the identities derived in Sect. 6.1 leads, in a similar manner, to the expansion

\frac{(1+e^{\alpha})^2(1+e^{\beta})^2}{(1+e^{2\alpha})(1+e^{2\beta})} = \prod_{n=-\infty}^{\infty}\frac{16[\alpha^2+(1-2n)^2\pi^2][\beta^2+(1-2n)^2\pi^2]}{[4\alpha^2+(1+4n)^2\pi^2][4\beta^2+(1+4n)^2\pi^2]}.   (6.50)
Assuming here α = β = t in (6.50), we arrive at

\frac{(1+e^{t})^2}{1+e^{2t}} = \prod_{n=-\infty}^{\infty}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2},   (6.51)

or, converting the left-hand side of the above to a hyperbolic function form, we transform (6.51) into

\frac{1+\cosh t}{\cosh t} = \prod_{n=-\infty}^{\infty}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2}.
This yields the infinite product representation

\operatorname{sech} t = -1 + \prod_{n=-\infty}^{\infty}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2}   (6.52)
of the hyperbolic secant function. The above representation converges uniformly for
any value of t. We verify this assertion by our customary procedure. For that, the
infinite product in (6.52) is transformed as
\prod_{n=-\infty}^{\infty}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2}
 = \frac{4(t^2+\pi^2)}{4t^2+\pi^2}\prod_{k=1}^{\infty}\frac{16[t^2+(1+2k)^2\pi^2][t^2+(1-2k)^2\pi^2]}{[4t^2+(1+4k)^2\pi^2][4t^2+(1-4k)^2\pi^2]}.
After some trivial algebra with the general term of the above product, the infinite
product representation of the hyperbolic secant function exhibited in (6.52) converts
into the equivalent form
\operatorname{sech} t = -1 + \frac{4(t^2+\pi^2)}{4t^2+\pi^2}\prod_{k=1}^{\infty}\left\{1+\frac{3\pi^2[8t^2+\pi^2(5-32k^2)]}{[4t^2+(1+4k)^2\pi^2][4t^2+(1-4k)^2\pi^2]}\right\}.   (6.53)
From a comparison of the highest degree of the multiplication index k in the nu-
merator (which is two) and the denominator (which is four) of (6.53), it follows that
the expansion in (6.52) converges uniformly for any value of t, and its convergence
rate is of order 1/k^2.
To give a sense of the actual convergence of the expansion derived in (6.52), it is instructive to graph its partial products

-1 + \prod_{n=-N}^{N}\frac{4[t^2+(1-2n)^2\pi^2]}{4t^2+(1+4n)^2\pi^2}

for several increasing values of the truncation parameter N.
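A few such values can be tabulated with the Python sketch below (illustrative names); at a fixed t the partial products approach sech t roughly like 1/N, consistent with the 1/k^2 rate of the paired form in (6.53).

import math

def sech_partial(t, N):
    # Partial product of (6.52), truncated symmetrically at |n| <= N
    p = 1.0
    for n in range(-N, N + 1):
        num = 4.0 * (t * t + (1 - 2 * n) ** 2 * math.pi ** 2)
        den = 4.0 * t * t + (1 + 4 * n) ** 2 * math.pi ** 2
        p *= num / den
    return -1.0 + p

if __name__ == "__main__":
    t = 1.0
    for N in (5, 20, 80, 320):
        print(N, sech_partial(t, N), 1.0 / math.cosh(t))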
6.2 Using the method of eigenfunction expansion, derive the expression presented
in (6.5) for the Green’s function of the Dirichlet–Neumann problem stated for the
Laplace equation on the infinite strip {−∞ < x < ∞, 0 < y < b}.
6.3 Using the method of eigenfunction expansion, derive the expression presented
in (6.8) for the Green’s function of the Dirichlet problem stated for the Laplace
equation on the semi-infinite strip {0 < x < ∞, 0 < y < b}.
6.4 Using the method of eigenfunction expansion, derive the expression presented
in (6.11) for the Green’s function of the mixed problem stated for the Laplace equa-
tion on the semi-infinite strip {0 < x < ∞, 0 < y < b}.
6.5 Illustrate the equivalence of the expansions presented in (6.22) and (6.24) by
graphing their partial products.
6.6 Prove the convergence of the infinite product representation derived in (6.27)
for the cosine function.
6.7 Prove the convergence of the infinite product representation derived in (6.28)
for the tangent function.
6.8 Determine the convergence rate of the infinite product representation obtained
in (6.33).
6.9 Derive an infinite product representation for the function tan x − cot x.
7.1 Chapter 2
which implies

\varphi = \arcsin\frac{b}{\sqrt{a^2+b^2}} = \arctan\frac{b}{a}.

We thus have

a\sin x + b\cos x = \sqrt{a^2+b^2}\,\sin(x+\varphi).
2.6 Expressing the sum of sine functions in the statement in the product form

\sin x + \sin y = 2\sin\frac{x+y}{2}\cos\frac{x-y}{2} = 2\sin\frac{x+y}{2}\sin\frac{\pi-(x-y)}{2}

and replacing the sine functions with their Euler infinite product representations, one arrives at

\sin x + \sin y = \frac{(x+y)[\pi-(x-y)]}{2}\prod_{k=1}^{\infty}\left(1-\frac{(x+y)^2}{4k^2\pi^2}\right)\left(1-\frac{(\pi-(x-y))^2}{4k^2\pi^2}\right).
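This expansion is easy to verify numerically; the Python sketch below (illustrative names) truncates the product at N factors and compares it with sin x + sin y computed directly. The prefactor (x+y)[π−(x−y)]/2 collects the leading factors of the two Euler products.

import math

def sin_sum_product(x, y, N=4000):
    # Truncated Euler-product form of sin x + sin y derived above
    p = (x + y) * (math.pi - (x - y)) / 2.0
    for k in range(1, N + 1):
        k2pi2 = 4.0 * (k * math.pi) ** 2
        p *= (1.0 - (x + y) ** 2 / k2pi2) * (1.0 - (math.pi - (x - y)) ** 2 / k2pi2)
    return p

if __name__ == "__main__":
    x, y = 0.7, 0.3
    print(sin_sum_product(x, y), math.sin(x) + math.sin(y))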
Similarly, if the sum of two cosine functions is written in the product form

\cos x + \cos y = 2\cos\frac{x+y}{2}\cos\frac{x-y}{2}

and each of the cosine functions on the right-hand side is replaced with its Euler infinite product representation in (2.2), then one obtains

\cos x + \cos y = 2\prod_{k=1}^{\infty}\left(1-\frac{(x+y)^2}{(2k-1)^2\pi^2}\right)\left(1-\frac{(x-y)^2}{(2k-1)^2\pi^2}\right).
Converting, in turn, the sum of cotangent functions to the form

\cot x + \cot y = \frac{\sin(x+y)}{\sin x\,\sin y}

and expressing the sine functions on the right-hand side with the Euler representation in (2.1), one obtains

\cot x + \cot y = \frac{x+y}{xy}\prod_{k=1}^{\infty}\frac{1-\frac{(x+y)^2}{k^2\pi^2}}{\left(1-\frac{x^2}{k^2\pi^2}\right)\left(1-\frac{y^2}{k^2\pi^2}\right)}.
2.11 Converting the sum of hyperbolic cotangent functions into the product

\coth x + \coth y = \frac{\sinh(x+y)}{\sinh x\,\sinh y}

and using the classical Euler infinite product representation in (2.3) for the right-hand side, one arrives at

\coth x + \coth y = \frac{x+y}{xy}\prod_{k=1}^{\infty}\frac{1+\frac{(x+y)^2}{k^2\pi^2}}{\left(1+\frac{x^2}{k^2\pi^2}\right)\left(1+\frac{y^2}{k^2\pi^2}\right)}.
7.2 Chapter 3
3.3 In an attempt to construct the Green's function for the Dirichlet problem for the Laplace equation on the infinite wedge Ω = {(r, ϕ): 0 < r < ∞, 0 < ϕ < 2π/5}, let the unit source (which produces the singular component of the Green's function) be located at (ρ, ψ) ∈ Ω. To compensate its trace on the fragment ϕ = 0 of the boundary of Ω, place a unit sink at (ρ, 2π − ψ) ∉ Ω. The trace of the latter on the boundary fragment ϕ = 2π/5 is compensated, in turn, with a unit source at (ρ, 4π/5 + ψ) ∉ Ω, whose trace on ϕ = 0 must be compensated with a unit sink at (ρ, 6π/5 − ψ) ∉ Ω, whose trace on ϕ = 2π/5 requires a compensation by a source at (ρ, 8π/5 + ψ) ∉ Ω. To compensate the latter's trace on ϕ = 0, we must put a sink at (ρ, 2π/5 − ψ), which is, unfortunately, located inside Ω. This explains the failure of the method in this case.
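The bookkeeping of these reflections can be automated; the Python sketch below (with an arbitrarily chosen source angle ψ) alternately reflects the current image across the edges ϕ = 0 and ϕ = 2π/5 and stops as soon as an image falls back inside the wedge.

import math

ALPHA = 2.0 * math.pi / 5.0          # opening angle of the wedge

def reflect_across_zero(theta):
    return (-theta) % (2.0 * math.pi)

def reflect_across_alpha(theta):
    return (2.0 * ALPHA - theta) % (2.0 * math.pi)

if __name__ == "__main__":
    psi = 0.15 * math.pi             # source angle, anywhere in (0, ALPHA)
    theta, sign, edge = psi, +1, 0   # edge 0: reflect across phi = 0; edge 1: across phi = ALPHA
    for step in range(1, 9):
        theta = reflect_across_zero(theta) if edge == 0 else reflect_across_alpha(theta)
        sign, edge = -sign, 1 - edge
        kind = "source" if sign > 0 else "sink"
        inside = 0.0 < theta < ALPHA
        print(f"step {step}: {kind} at angle {theta / math.pi:.3f}*pi, inside wedge: {inside}")
        if inside:                   # a compensating image lands inside the wedge: the method fails
            break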
7.3 Chapter 4
4.1 Express the general solution of the equation in (4.21) as

y(x) = D_1\exp(kx) + D_2\exp(-kx),

where D_1 and D_2 are arbitrary constants. The first of the conditions in (4.22) yields

D_1 + D_2 = D_1\exp ka + D_2\exp(-ka),

while the second yields

D_1 - D_2 = D_1\exp ka - D_2\exp(-ka).

These two relations represent the homogeneous system of linear algebraic equations

\begin{pmatrix} 1-\exp ka & 1-\exp(-ka) \\ 1-\exp ka & \exp(-ka)-1 \end{pmatrix}
\begin{pmatrix} D_1 \\ D_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},

having only the trivial solution D_1 = D_2 = 0, because the coefficient matrix of the system is regular. Indeed, its determinant

2(1-\exp ka)\,(\exp(-ka)-1)

is nonzero.
y_g(x) = D_1\ln(mx+b) + D_2.
The first of the boundary conditions in (4.32) yields mD1 /b = 0, while the second
condition yields
D1 ln(ma + b) + D2 = 0.
Hence, D_1 = D_2 = 0, which implies that the boundary-value problem stated in (4.31) and (4.32) has only the trivial solution and is therefore well posed.
4.3 Proving that the problem in (4.54) and (4.55) has a unique solution is equiv-
alent to showing that the corresponding homogeneous problem has only the trivial
solution, which is indeed true. To support this claim, express the general solution of
the homogeneous equation as

y(x) = D_1\cos kx + D_2\sin kx.

The first boundary condition in (4.55) yields D_1 = 0, while from the second condition it follows that

D_1\cos ka - D_2\cos ka = 0,

implying that D_2 is also zero.
7.4 Chapter 5
Here and further in the answers for Exercises 5.2–5.4, the complex variable no-
tations z = x + iy and ζ = ξ + iη are used for the field point and the source point
respectively.
G(x, y; \xi, \eta) = \frac{1}{2\pi}\ln\frac{\left|1-\exp\frac{\pi(z+\zeta)}{b}\right|\left|1-\exp\frac{\pi(z-\zeta)}{b}\right|}{\left|1-\exp\frac{\pi(z-\zeta)}{2b}\right|\left|1-\exp\frac{\pi(z+\zeta)}{2b}\right|}
 - \frac{4}{b}\sum_{n=1}^{\infty}\frac{(\beta-\nu)\sinh\nu x\,\sinh\nu\xi}{\nu[(\nu+\beta)\exp 2\nu a+(\nu-\beta)]}\sin\nu y\,\sin\nu\eta,

where ν = nπ/b.
5.5 Tracing out the standard procedure of the method of eigenfunction expansion,
one obtains the Green’s function in the Fourier series form
G(r, \varphi; \rho, \psi) = \frac{1}{2\pi}\left[k_0(r,\rho) + 2\sum_{n=1}^{\infty}k_n(r,\rho)\cos n(\varphi-\psi)\right],

where the coefficient k_n(r, ρ) is found, for r ≤ ρ, as

k_n(r,\rho) = \frac{(r^{2n}-a^{2n})[n(b^{2n}+\rho^{2n})+\beta b(b^{2n}-\rho^{2n})]}{2n(r\rho)^n[n(b^{2n}+a^{2n})+\beta b(b^{2n}-a^{2n})]}.
Note that the expression for kn (r, ρ), valid for r ≥ ρ, can be obtained from the above
with the variables r and ρ interchanged.
A close analysis reveals a slow convergence of the series in the above expres-
sion for G(r, ϕ; ρ, ψ). Indeed, it converges at the rate 1/n, notably diminishing the
practicality of the representation. After improving the convergence in the way de-
scribed in the current chapter, a computer-friendly form for the Green’s function is
ultimately obtained as
G(r, \varphi; \rho, \psi) = \frac{1}{2\pi}\left[\ln\frac{|a^2-z\bar{\zeta}|}{|z||z-\zeta|} + k_0(r,\rho) + \sum_{n=1}^{\infty}k_n^{*}(r,\rho)\cos n(\varphi-\psi)\right],

where the coefficient k_n^{*}(r, ρ) of the series component is found, for r ≤ ρ, as

k_n^{*}(r,\rho) = \frac{(r^{2n}-a^{2n})(a^{2n}-\rho^{2n})(\beta b-n)}{n(r\rho)^n[b^{2n}(\beta b+n)-a^{2n}(\beta b-n)]},
while for an expression that is valid for r ≥ ρ, we interchange the variables r and ρ.
Here and further in the answer to Exercise 5.7, the complex variable notations
z = r(cos ϕ + i sin ϕ) and ζ = ρ(cos ψ + i sin ψ) are used for the field point and the
source point, respectively.
where the function k0∗ (r, ρ) and the coefficient kn∗ (r, ρ) of the series component are
found, for r ≤ ρ, as
k_0^{*}(r,\rho) = \frac{1}{\beta b}\left(1+\beta b\ln\frac{b}{\rho}\right)

and

k_n^{*}(r,\rho) = \frac{(r^{2n}+a^{2n})(a^{2n}+\rho^{2n})(\beta b-n)}{n(r\rho)^n[b^{2n}(\beta b+n)+a^{2n}(\beta b-n)]},
while the variables r and ρ must be interchanged in the above expressions for
k0∗ (r, ρ) and kn∗ (r, ρ) to make them valid for r ≥ ρ.
7.5 Chapter 6
6.6 To prove the convergence of the infinite product

\prod_{n=-\infty}^{\infty}\frac{4(t-2n\pi)^2}{(1+4n)^2\pi^2},

pair the terms with n = k and n = -k, which converts the product to

\frac{4t^2}{\pi^2}\prod_{k=1}^{\infty}\frac{16(t^2-4k^2\pi^2)^2}{(1-16k^2)^2\pi^4}.

The deviation of the general term of the latter product from unity has a numerator of degree two in k, while its denominator is of degree four. Hence, the infinite product in (6.27) indeed converges, and its convergence rate is of order 1/k^2.
This reveals that the convergence rate of the representation in (6.33) is of order 1/k^2.
6.9 The sought-for representation reads as

\tan t - \cot t = 2\prod_{n=-\infty}^{\infty}\frac{4t-(1+2n)\pi}{2(2t-n\pi)}.
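The expansion can be checked numerically by truncating the product symmetrically in n (a minimal Python sketch; t must stay away from the zeros of sin 2t):

import math

def tan_minus_cot(t, N=3000):
    # Symmetric truncation of the doubly infinite product above
    p = 2.0
    for n in range(-N, N + 1):
        p *= (4.0 * t - (1 + 2 * n) * math.pi) / (2.0 * (2.0 * t - n * math.pi))
    return p

if __name__ == "__main__":
    t = 0.4
    print(tan_minus_cot(t), math.tan(t) - 1.0 / math.tan(t))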
Index

N
Necessary condition, 6
Necessity, 4, 6, 80
Negative infinity, 37, 89
Neumann problem, 44
Newtonian polynomial, 21
Newton's binomial formula, 19
Nontrivial, vii, 2, 122
Nontrivial situations, viii
Nonzero, 6, 37, 55, 62, 65, 123, 145, 154
Nonzero complex number, 2
Normal direction, 44
Novel approach, 1, 28
Numerical analysis, viii, 2
Numerical differentiation, 18

O
Observation point, 11, 45, 54–57, 89, 91, 125
Odd-degree polynomial, 20
Odd-index partial product, 5
Opening terms, 27
Order of factors, 8, 9
Ordinary differential equations, 15, 61, 72, 82, 85

P
Parameters, 13, 59, 90, 93, 99, 120, 125, 127, 129, 131, 132
Partial differential equations, vii, 15, 43, 61, 85, 112, 123
Partial product, 3–7, 9, 10, 20, 32, 37–41, 133, 138, 140, 148, 149
Partial sum, 7, 20, 93, 96, 99, 100, 105, 114–117, 124
Particular solution, 62, 63, 73, 74
Piecewise smooth contour, 11
Pioneering results, 18
Poisson equation, 11, 92, 105, 112
Polar coordinates, 45, 50, 56, 58, 85, 105, 106, 109
Pole, 27
Polynomial, 19, 21, 23, 25, 137, 144, 146
Polynomial-containing representation, 21
Positive infinity, 37
Positive integers, 9, 10
Potential field, 53, 128
Practical implementations, 13
Preparatory basis, 43, 85
Principal value, 7
Product form, 28, 29, 34, 151, 152
Product value, 8, 11
Professional treatment, viii
Prominent mathematician, 18
Property of finite products, 3

Q
Quadratic equation, 53
Quarter-plane, 45, 46, 122

R
Radial coordinate, 52
Radial line, 52
Radical factor, 40
Radicand, 40, 124
Rate of convergence, 31, 41, 99, 103, 133, 141, 148, 157
Real component, 55
Real part, 14
Real terms, 19
Rearranged infinite product, 9, 10
Rearrangement, 8, 10
Reciprocals, 53
Rectangle, 13, 59, 96–98, 119
Recurrence, 40
Region, 44, 59, 80, 95, 96, 112–114, 116, 117, 119–122
Regular component, 43, 44, 47
Regular function, 27, 102
Regular part, 12–14
Regularization, 14
Relative convergence, 32
Review guide, 15
Riemann's zeta function, 17
Right-hand side, 11, 19, 21, 26, 27, 36, 37, 40, 65, 98, 138, 146, 152, 153
Robin problem, 44
Root, 53
Rotation parameter, 57
Routine exercise, 61

S
Scheme of the method, 43, 49
Second Green's formula, 11
Second-order differential equation, 61, 73
Self-contained, 2
Seminar topic, viii
Sequence of partial products, 10
Series representation, 14, 59, 100, 116, 117, 123
Sharp turn, 15, 41
Simple pole, 55
Simply connected region, 11, 54
Single branch, 7
Singular component, 13, 44, 45, 47, 50, 52, 94, 153
Singular part, 12
Singular point, 27, 106
Sink, 44–51, 122, 123, 125, 128, 130, 153
Source, 41, 44, 47, 49, 51, 72, 122, 126, 128, 130, 132, 153
Source point, 11, 44, 53–57, 89, 91, 93, 96, 99, 103, 104, 114, 125, 155, 156
Special feature, 11, 80
Special functions, 17, 59
Square root function, 40
Standard abbreviation, 14
Standard courses, vii, 2
Standard limit, 20
Subject areas of mathematics, vii, 1
Successive partial products, 4
Sufficient condition, 7
Sum of the series, 40, 94, 114
Summation, 26, 59, 90, 92, 94–96, 99–101, 109, 111, 115, 122, 123, 125
Summation indices, 13
Supplementary reading, viii
Surprising linkage, 43

T
Taylor series, 8, 102
Terminological issue, 8
Terminology, 11, 44
Theoretical aspects, 14
Theory of infinite products, 2
Tom Grasso, viii
Trace of the function, 45, 46, 124, 126
Traditional instrument, 17
Traditional methods, vii
Trigonometric functions, 17, 18, 31, 131, 132, 137, 138, 141, 142
Trigonometric series, 13
Trigonometric sine function, 18, 19, 23–25, 37, 133, 141
Trivial solution, 11, 62, 65, 69, 71–73, 77, 79, 82, 154
Trivial trigonometric transformation, 19, 22
Truncation of the series, 13, 94
Two-dimensional Euclidean space, 11
Two-dimensional Laplace equation, vii, 1, 2, 11, 14, 24, 41, 43, 54, 59, 61, 93, 121

U
Unbounded, 6, 68, 80
Undergraduate course of calculus, viii
Undergraduate course of differential equations, viii
Undergraduate mathematics, 2
Undergraduate textbook, 61
Unexpected treatment, 1
Uniform convergence, 105, 118, 132, 139, 144, 145
Unique solution, 11, 64–66, 73, 74, 79, 154
Uniqueness, 56, 57
Unit disk, 13, 14, 54, 56–59
Unit sink, 44, 45, 47, 48, 51, 53, 123, 124, 126, 153
Unit source, 44–51, 53, 122–126, 128, 130, 153
Unity, 3, 6, 8, 57, 95, 96, 147
Unlooked-for approach, 43
Unlooked-for outcome, 1
Upper bound, 27
Upper-division course/seminar, 1
Upper half-plane, 44, 45, 50

V
Value, 3, 5–10, 25, 26, 37, 39, 53, 54, 63–65, 68–72, 75, 77, 78, 89, 95, 99, 104, 107, 117, 118, 131, 132, 135, 139, 140, 143–148
Variation of parameters, 62, 72, 73, 76, 88, 93, 107, 108

W
Wallis formula, 9, 10
Weierstrass elliptic function, 59
Well posed, 11, 62, 63, 66, 72, 74, 81, 83, 88, 110, 154
Word of caution, 51
Work of art, 19

Z
Zero terms, 3