Melnikov, Green's Functions and Infinite Products, 2011


Yuri A. Melnikov

Green’s Functions
and Infinite Products

Bridging the Divide


Yuri A. Melnikov
Department of Mathematical Sciences
Computational Sciences Program
Middle Tennessee State University
Murfreesboro, TN 37132-0001
USA
ymelniko@mtsu.edu

ISBN 978-0-8176-8279-8 e-ISBN 978-0-8176-8280-4


DOI 10.1007/978-0-8176-8280-4
Springer New York Dordrecht Heidelberg London

Library of Congress Control Number: 2011937161

Mathematics Subject Classification (2010): 40A20, 65N80

© Springer Science+Business Media, LLC 2011


All rights reserved. This work may not be translated or copied in whole or in part without the written
permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York,
NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in
connection with any form of information storage and retrieval, electronic adaptation, computer software,
or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are
not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject
to proprietary rights.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.birkhauser-science.com)


To my beloved grandchildren:
Yulya, Afanasy, and Mashen’ka
Preface

Two traditional mathematical concepts, classical in their own fields, are brought for-
ward in this brief volume. Reviewing these concepts separately, with no connection
to each other, would definitely look natural, but bringing them together into a single
book format is quite a different story. The point is that the concepts are drawn from
subject areas of mathematics that have no evident points of contiguity. That is why
the reader might be intrigued by our intention in this book to explore their mutual
fusion. This endeavor provides a basis for a challenging and nontrivial investigation.
The first of the two concepts is the Green’s function. It represents an important
topic in standard courses of differential equations and is customarily covered in
most texts in the field. The second concept, of infinite product, belongs, in turn, to
classical mathematical analysis. As to Green’s functions for partial differential equa-
tions, it is not a common practice in existing textbooks for careful consideration to
be given to procedures used for their construction. On the other hand, the standard
texts on mathematical analysis do not usually confront the infinite product repre-
sentation of elementary functions. A simultaneous review of just these two subject
areas (the construction of Green’s functions and the infinite product representation
of elementary functions) constitutes the context of the present book.
Green’s functions for the two-dimensional Laplace equation are most widely rep-
resented in relevant texts. They are conventionally constructed using the method of
images, conformal mapping, or eigenfunction expansion. The present volume fo-
cuses on the construction of such Green’s functions for a wide range of boundary-
value problems. A comprehensive review of the traditional methods is provided,
with emphasis on the infinite-product-containing expressions of Green’s functions,
which are obtained by the method of images. This provides a background for the
central theme in this book, which is the development of an innovative approach to
the representation of elementary functions in terms of infinite products.
The intention in the present volume is not just to familiarize the reader in the
traditional manner with the state of things in the area, but rather to reach beyond tra-
ditions. That is, we plan not only to introduce the classical topics of the construction
of Green’s functions and the infinite product representation of elementary functions,
but also to present a challenging investigation into the intersection of these fields.


To be well prepared for the presentation in this book, the reader is required to have
a reasonably solid background in the standard undergraduate courses of calculus
and differential equations. In addition, the reader would definitely benefit from a
superficial knowledge of the basics of numerical analysis.
There is good reason to believe that this piece of work is original. To the author’s
best knowledge, there are no analogous books available on the market. That is why
we anticipate that the book will not be overlooked by the professional community.
It might, for example, be adopted as supplementary reading for an undergraduate
course or as a seminar topic within the scope of a pure or applied mathematics
curriculum. Infinite Product Representation of Elementary Functions, A Further
Linking of Differential Equations with Calculus, or Broadening the Use of Green’s
Functions might be the title for such a course or seminar topic.
The very first results on the Green’s-function-based approach to the infinite product
representation of elementary functions were reported not long ago. The first printed
publications on progress in this field appeared just recently. It then took us over
three years to ultimately come up with this book, which was originally intended as
a text for an elective course within the computational sciences Ph.D. program just
launched at Middle Tennessee State University.
It is with pleasure and gratitude that the author acknowledges the editorial ser-
vices provided by the staff of Birkhäuser Boston, with special thanks to Tom Grasso,
senior mathematics editor, for his professional treatment of nontrivial situations.
Although the editing process was not fast, smooth, and painless, it has significantly
improved the quality of the presentation and definitely made this book a much better
read.
The opening phase of our work on this project was partially funded by a 2008
Summer Research Grant awarded by the Faculty Research and Creative Activity
Committee at Middle Tennessee State University. This created a propitious work
environment, promoted progress at later stages of the project, and made a decisive
contribution to its prompt completion.
Murfreesboro, USA Yuri A. Melnikov
Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Infinite Products and Elementary Functions . . . . . . . . . . . . . . 17
2.1 Euler’s Classical Representations . . . . . . . . . . . . . . . . . . 17
2.2 Alternative Derivations . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 Other Elementary Functions . . . . . . . . . . . . . . . . . . . . . 28
2.4 Chapter Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3 Green’s Functions for the Laplace Equation . . . . . . . . . . . . . . 43
3.1 Construction by the Method of Images . . . . . . . . . . . . . . . 43
3.2 Method of Conformal Mapping . . . . . . . . . . . . . . . . . . . 54
3.3 Chapter Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4 Green’s Functions for ODE . . . . . . . . . . . . . . . . . . . . . . . 61
4.1 Construction by Defining Properties . . . . . . . . . . . . . . . . . 61
4.2 Method of Variation of Parameters . . . . . . . . . . . . . . . . . 72
4.3 Chapter Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5 Eigenfunction Expansion . . . . . . . . . . . . . . . . . . . . . . . . 85
5.1 Background of the Approach . . . . . . . . . . . . . . . . . . . . 85
5.2 Cartesian Coordinates . . . . . . . . . . . . . . . . . . . . . . . . 86
5.3 Polar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . 105
5.4 Chapter Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6 Representation of Elementary Functions . . . . . . . . . . . . . . . . 121
6.1 Method of Images Extends Frontiers . . . . . . . . . . . . . . . . 122
6.2 Trigonometric Functions . . . . . . . . . . . . . . . . . . . . . . . 131
6.3 Hyperbolic Functions . . . . . . . . . . . . . . . . . . . . . . . . 141
6.4 Chapter Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 149
7 Hints and Answers to Chapter Exercises . . . . . . . . . . . . . . . . 151
7.1 Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
7.2 Chapter 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
7.3 Chapter 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153


7.4 Chapter 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155


7.5 Chapter 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Chapter 1
Introduction

Our objective in putting together this volume has been to develop a supplementary
text for an elective upper-division undergraduate or graduate course/seminar that
might be offered within the scope of a pure or applied mathematics curriculum.
A quite unexpected treatment is delivered herein on two subjects that one might
hardly have anticipated considering together in a single book. This makes the book
an original and unique read, and a good choice for those who are open to challenges
and welcome the unexpected.
The reader is invited on an interesting voyage, with the subject matter resting
upon two concepts taken from different subject areas of mathematics. These con-
cepts are (i) the infinite product, which represents a standard topic in courses of
mathematical analysis, and (ii) the Green’s function, representing a significant topic
in courses on differential equations. To be more specific, we will concentrate in this
book on the infinite product representation of elementary functions and the Green’s
function of boundary-value problems for the two-dimensional Laplace equation.
It would probably not be an exaggeration to assert that none of the existing rel-
evant textbooks in mathematics covers both the concepts that are to be explored
herein. Consequently, the two concepts have probably never been considered to-
gether in a single traditionally offered course. The reader might therefore be con-
cerned about the reason for presenting both concepts in our volume. Indeed, what
is the driving force for considering them together this time around? The resolution
of this concern will be found on examining the very recent developments reported
in [27] and [28]. It appears that a diligent analysis of the two concepts reveals an
unlooked-for outcome that happens to be extremely rewarding. A novel approach
was discovered to the approximation of functions, which provides never before re-
ported infinite product representations for some elementary functions.
According to the title, the present volume is not designed to focus exclusively on
either differential equations or mathematical analysis. The subtitle brings a neces-
sary clarification. It suggests that both the subject areas, a fusion of which represents
an unbreakable background for our presentation, are going to be covered to a certain
extent, with emphasis on the establishment of their productive linking.
The motivation for writing this book is due in large measure to the many years of
our work on the construction of computer-friendly expressions for Green’s functions


for applied partial differential equations. The results of that work have been reported
in a series of publications over the past decades. To get a sense of this work and its
distinctive features, the reader might examine some of our publications [11, 12, 17, 21,
23, 25]. The most complete and useful list of efficient representations of Green’s
functions recently constructed can be found in [16].
It was just recently, however, that the idea emerged to compare alternative forms
of Green’s functions that had been obtained for a variety of boundary-value prob-
lems stated for the two-dimensional Laplace equation. The comparison appeared
really nontrivial. It ultimately gave birth to a score of infinite product representa-
tions of some trigonometric and hyperbolic functions. The idea is not, of course,
new. The reader might recall some other areas of mathematics in which a compari-
son of equivalent but different-looking forms of some statement results in interesting
developments.
As might be learned from mathematical analysis, the theory of infinite prod-
ucts is closely related to that of infinite series. The latter, in turn, represents one of
the major driving forces in the core courses of undergraduate mathematics. Infinite
series usually receive more or less complete and detailed coverage in standard un-
dergraduate texts in both pure and applied mathematics curricula. Indeed, Taylor,
Fourier, and other types of series represent a convenient tool for mathematical anal-
ysis. They traditionally play a significant role in the standard courses of calculus,
complex variables, differential equations, numerical analysis, and others.
Infinite products, in contrast, are not that well and fully covered in standard texts
on mathematical analysis. Nevertheless, they quite frequently arise in different areas
of mathematics [5, 20], and are, as well as infinite series, successfully implemented
as a tool in the description of a number of subjects, such as the approximation of
functions in particular. Although the fundamental results on the infinite product rep-
resentation of elementary functions can be traced back to Euler’s era [1], mathemati-
cians all over the globe are still working in this area [2–4, 7, 14, 22, 24], reporting
on different theoretical and computational aspects of this topic.
Since representation of elementary functions by infinite products constitutes the
leading theme in this work, it would be appropriate to provide the reader with at
least a brief introduction to the basic terminology as well as to the chief concepts of
infinite products. An introduction of this sort would be, in our opinion, reasonable
to make this book more consistent, self-contained, and easier to read.
In addition, the reader will later be familiarized with the concept of Green’s func-
tion for the two-dimensional Laplace equation. This concept will be briefly reviewed
in the introduction and discussed then in more detail in Chaps. 3, 4, and 5, where
we give an overview of the major solution methods that are traditionally used for
the construction of Green’s functions. In doing so, our special emphasis will be
on the method of images, which results, for some problems, in an infinite-product-
containing representation of the Green’s function.
We begin with a brief review of the fundamentals of infinite products by assum-
ing that a1 , a2 , . . . , ak , . . . represent nonzero complex numbers, and consider the

product

a_1 · a_2 · ... · a_k · ... = ∏_{k=1}^{∞} a_k. (1.1)
The concept of convergence must be, of course, of the same critical importance
for infinite products as it is for infinite series. To introduce this concept, we form the
finite product

P_N = ∏_{k=1}^{N} a_k = a_1 · a_2 · ... · a_N
and call it the Nth partial product of (1.1). It is said that the infinite product in (1.1)
is convergent (we will also use the wording the infinite product converges) if there
exists a finite limit P = 0 to which the sequence

P_1, P_2, ..., P_N, ... (1.2)

of partial products of (1.1) converges as N approaches infinity. That is,

P = lim_{N→∞} P_N.

The number P is called the value of (1.1).
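As a quick numerical illustration of these definitions, the following sketch computes the sequence of partial products P_1, ..., P_N for a sample product. The term a_k = 1 + 1/k² is our own illustrative choice, not taken from the text; its product is known classically to have the value sinh(π)/π, and the partial products indeed settle toward that finite nonzero limit.

```python
from math import sinh, pi

def partial_products(term, N):
    """Return the sequence of partial products P_1, ..., P_N of the product of term(k)."""
    seq, p = [], 1.0
    for k in range(1, N + 1):
        p *= term(k)
        seq.append(p)
    return seq

# Sample term a_k = 1 + 1/k^2 (an illustrative choice); the classical value
# of prod_{k=1}^inf (1 + 1/k^2) is sinh(pi)/pi.
P = partial_products(lambda k: 1.0 + 1.0 / k ** 2, 5000)
```

Since every factor here exceeds 1, the partial products increase monotonically, and the convergence of the tail is governed by the decay of 1/k².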


Similarly to the case of infinite series, if an infinite product is not convergent,
then we say that it diverges or is divergent.
An important comment must be offered at this point as to the convergence of
an infinite product. If some of the terms in (1.1) are equal to zero, then the infinite
product is said to be convergent if it converges when the zero terms are excluded.
With this, the value of the infinite product with zero terms is said to be zero. This
comment is required for the infinite product to possess a property of finite products.
That is, the value of a convergent infinite product is zero if and only if at least one
of its terms is zero.
Note also that if none of the terms ak of the infinite product in (1.1) is zero,
and the limit of the sequence in (1.2) is zero (P = 0), then we say that the product
diverges to zero.
Hence, two options are on the table as to the convergence of an infinite product.
Namely, an infinite product either converges (when P is finite, but not zero), or
diverges (when P is either infinite or zero).
It is evident that if the infinite product in (1.1) converges, then the limit of its
general term must equal unity:

lim_{k→∞} a_k = 1. (1.3)

This assertion immediately follows from the obvious relation


a_N = P_N / P_{N−1} (1.4)

for the general term of the product in (1.1) in terms of two successive partial prod-
ucts. Indeed, taking the limit in (1.4), one obtains for a convergent infinite product

lim_{N→∞} a_N = (lim_{N→∞} P_N) / (lim_{N→∞} P_{N−1}) = P/P = 1.

Hence, the condition in (1.3) is necessary for the infinite product in (1.1) to con-
verge. To give some illustrations of this claim, we consider a few examples. Take
the infinite product

∏_{k=1}^{∞} (k + 1)^2 / (k(k + 2)) (1.5)

and explore its convergence by taking a close look at its N th partial product, written
down explicitly as


P_N = ∏_{k=1}^{N} (k + 1)^2 / (k(k + 2))
= 2^2/(1·3) · 3^2/(2·4) · 4^2/(3·5) · ... · (N − 1)^2/((N − 2)N) · N^2/((N − 1)(N + 1)) · (N + 1)^2/(N(N + 2)).

It is evident that after a series of cancellations in the above expression, P_N becomes

P_N = 2(N + 1)/(N + 2),
verifying that the limit of the above is a finite number. Indeed,

lim_{N→∞} P_N = lim_{N→∞} 2(N + 1)/(N + 2) = 2.

Thus, the infinite product in (1.5) really does converge. And what about the con-
dition in (1.3)? It is obviously met, since

lim_{k→∞} a_k = lim_{k→∞} (k + 1)^2 / (k(k + 2)) = 1.

For another example of the necessity of the condition in (1.3), consider the fol-
lowing infinite product:
∏_{k=2}^{∞} (k^2 − 1)/k^2. (1.6)

To check for convergence, consider its Nth partial product, which reads in this
case


P_N = ∏_{k=2}^{N} (k^2 − 1)/k^2 = ∏_{k=2}^{N} (k − 1)(k + 1)/k^2
= (1·3)/2^2 · (2·4)/3^2 · (3·5)/4^2 · ... · ((N − 2)N)/((N − 1)^2) · ((N − 1)(N + 1))/N^2 = (N + 1)/(2N),

providing

lim_{N→∞} P_N = lim_{N→∞} (N + 1)/(2N) = 1/2.
Hence, the infinite product in (1.6) is indeed convergent. It is also clear that the
condition in (1.3) is met. That is,
lim_{k→∞} (k^2 − 1)/k^2 = 1.
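The telescoping behavior derived above for the products in (1.5) and (1.6) is easy to confirm numerically; the brief sketch below compares brute-force partial products against the closed forms 2(N + 1)/(N + 2) and (N + 1)/(2N).

```python
def partial_product(term, start, N):
    """Brute-force partial product prod_{k=start}^{N} term(k)."""
    p = 1.0
    for k in range(start, N + 1):
        p *= term(k)
    return p

N = 10_000
# Product (1.5): telescoping gives P_N = 2(N+1)/(N+2), approaching 2.
P5 = partial_product(lambda k: (k + 1) ** 2 / (k * (k + 2)), 1, N)
# Product (1.6): telescoping gives P_N = (N+1)/(2N), approaching 1/2.
P6 = partial_product(lambda k: (k * k - 1) / (k * k), 2, N)
```

At N = 10,000 both partial products agree with the telescoped expressions to rounding error and sit within a fraction of a percent of their limits 2 and 1/2.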

To explore the convergence of the next infinite product,



∏_{k=2}^{∞} (k + (−1)^k)/k, (1.7)

its odd-index partial product P_{2N−1} and even-index partial product P_{2N} should be
analyzed separately. The point is that these partial products look formally different.
We can show, however, that the sequence
P_3, P_5, P_7, ..., P_{2N−1}

of the odd-index partial products, as well as the sequence

P_2, P_4, P_6, P_8, ..., P_{2N}

of the even-index partial products of (1.7) converge to the same value, implying that
the infinite product in (1.7) is convergent. To verify this conjecture, take a look first
at the odd-index partial product P_{2N−1}:

P_{2N−1} = ∏_{k=2}^{2N−1} (k + (−1)^k)/k = 3/2 · 2/3 · 5/4 · 4/5 · ... · (2N − 1)/(2N − 2) · (2N − 2)/(2N − 1),

which implies that

P_3, P_5, P_7, ..., P_{2N−1}

represents just the sequence of 1’s. Hence, 1 is its limit.
The even-index partial product P_{2N} of (1.7) is, in turn,

P_{2N} = ∏_{k=2}^{2N} (k + (−1)^k)/k
= 3/2 · 2/3 · 5/4 · 4/5 · 7/6 · ... · (2N − 1)/(2N − 2) · (2N − 2)/(2N − 1) · (2N + 1)/(2N) = (2N + 1)/(2N),

which clearly converges to 1, implying that the infinite product in (1.7) is indeed
convergent, with the value of unity. The necessary condition in (1.3) is again met,
since the limit of the general term also equals unity:

lim_{k→∞} (k + (−1)^k)/k = 1.
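A numerical check of the two subsequences of partial products of (1.7) can be sketched as follows; the odd-index partial products are identically 1, while the even-index ones equal (2N + 1)/(2N), exactly as derived above.

```python
def partial_product(upper):
    """Partial product of (1.7): prod_{k=2}^{upper} (k + (-1)^k) / k."""
    p = 1.0
    for k in range(2, upper + 1):
        p *= (k + (-1) ** k) / k
    return p

odd_values = [partial_product(2 * n - 1) for n in range(2, 8)]   # P_3, P_5, ..., each = 1
even_values = [partial_product(2 * n) for n in range(1, 8)]      # P_2, P_4, ..., = (2N+1)/(2N)
```

Both subsequences approach 1, which is why the product as a whole converges to unity.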
Thus, the analysis just completed for the infinite products in (1.5), (1.6), and (1.7)
simply illustrates the necessity (which has actually been proven earlier) of the con-
dition in (1.3) for the convergence of an infinite product. To show that the condition
in (1.3) is not sufficient for convergence, we may offer a single counterexample. Let
us take the product

∏_{k=1}^{∞} (k + 1)/k, (1.8)
the general term of which approaches unity as k goes to infinity, yet the product is
divergent. The divergence can be proven by showing that the Nth partial product,

P_N = 2/1 · 3/2 · 4/3 · ... · N/(N − 1) · (N + 1)/N = N + 1,
is unbounded as N goes to infinity. This provides convincing evidence of the diver-
gence of the product in (1.8).
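The unbounded growth of the partial products of (1.8), despite the general term approaching unity, shows up immediately in a computation; the telescoped value N + 1 is reproduced to rounding error.

```python
def P(N):
    """N-th partial product of (1.8): prod_{k=1}^{N} (k + 1) / k."""
    p = 1.0
    for k in range(1, N + 1):
        p *= (k + 1) / k
    return p

values = [P(10), P(100), P(1000)]   # approximately 11, 101, 1001: unbounded growth
```

The factors (k + 1)/k tend to 1, yet the running product grows linearly in N, which is precisely the counterexample the text describes.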
The example just considered allows us to declare that the condition in (1.3), being
necessary for the convergence of infinite products, is not, however, sufficient. In
other words, if the condition in (1.3) is not met, then the product in (1.1) diverges.
If, however, the condition in (1.3) is met, then the product might either converge or
diverge.
It is interesting to recall that the situation with infinite products just discussed
resembles that for infinite series. That is, if an infinite series Σ_{k=1}^{∞} b_k converges,
then the limit of its general term b_k must be zero. The converse assertion, that a
series necessarily converges if the limit of its general term is zero, is, however,
untrue. This point is traditionally illustrated in calculus with the remarkable example
of the harmonic series Σ_{k=1}^{∞} 1/k, which diverges despite the fact that its general
term 1/k approaches zero as k goes to infinity.
Let us revisit the infinite product in (1.1) and assume that all its terms are
nonzero, which means that if its general term is rewritten as a_k = 1 + β_k, then
β_k ≠ −1 for every k. Now rewrite (1.1) as

∏_{k=1}^{∞} (1 + β_k). (1.9)

From the necessary condition for convergence of the product in (1.1), it follows
that if the product in (1.9) converges, then the following condition

lim_{k→∞} β_k = 0 (1.10)

must be satisfied.

Taking logarithms in (1.9), one obtains the series

Σ_{k=1}^{∞} log(1 + β_k). (1.11)

Since the logarithm is a multiple-valued function, a single branch of the logarithmic
function in (1.11) (say, the principal one) can be chosen.
Let the number S_N represent a partial sum of (1.11), and assume that the se-
ries converges. This implies that a finite limit S of S_N exists as N goes to infinity.
From (1.11), it follows in this case that the partial product P_N of (1.9) is expressed
in terms of S_N as P_N = e^{S_N}. By the continuity property, we conclude that the value
P of the product in (1.9) is expressed in terms of the sum S of the series in (1.11)
as P = e^S, which cannot, of course, be zero.
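The relation P_N = e^{S_N} between partial products and partial sums of the logarithmic series holds at every truncation level, as a quick sketch confirms; the terms β_k = 1/k² below are our own illustrative choice (they satisfy β_k ≠ −1 and β_k → 0).

```python
from math import log, exp

S, P = 0.0, 1.0
checks = []
for k in range(1, 1001):
    b = 1.0 / k ** 2          # illustrative beta_k
    S += log(1.0 + b)         # partial sum S_N of the series (1.11)
    P *= 1.0 + b              # partial product P_N of (1.9)
    checks.append(abs(P - exp(S)))
```

The two accumulations agree to rounding error at every step, and the resulting P is necessarily positive, consistent with P = e^S never vanishing.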
As with infinite series, the notion of absolute convergence can also be introduced
for the infinite product in (1.9). Namely, we say that the product in (1.9) converges
absolutely if the product


∏_{k=1}^{∞} (1 + |β_k|)

converges. A necessary and sufficient condition for absolute convergence of the


above product is that the series

Σ_{k=1}^{∞} β_k

be absolutely convergent. Clearly, this assertion is equivalent to another one, that


the series

Σ_{k=1}^{∞} |log(1 + β_k)| (1.12)

is convergent if and only if the series



Σ_{k=1}^{∞} |β_k| (1.13)

is convergent.
Proof of the latter claim can immediately be accomplished by the limit com-
parison test. Indeed, since convergence of either the series in (1.12) or the series
in (1.13) implies (1.10), we take the limit

lim_{β_k→0} log(1 + β_k)/β_k

and expand the logarithm in a Taylor series. This yields

lim_{β_k→0} log(1 + β_k)/β_k = lim_{β_k→0} (1/β_k)(β_k − β_k^2/2 + β_k^3/3 − ···)
= lim_{β_k→0} (1 − β_k/2 + β_k^2/3 − ···) = 1,

which justifies our assertion.
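The key ratio in this limit comparison can also be tabulated numerically; the values approach 1 as β_k shrinks, in line with the Taylor expansion above (the sample values of β_k are arbitrary).

```python
from math import log

betas = [10.0 ** (-j) for j in range(1, 8)]          # sample beta values, shrinking to 0
ratios = [log(1.0 + b) / b for b in betas]
# each ratio equals 1 - b/2 + b^2/3 - ...; it tends to 1 as b shrinks
```

The ratios increase monotonically toward 1, reflecting the leading correction term −β_k/2.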


Another terminological issue is also important for the material in this book. That
is, we say that the product in (1.9) converges conditionally if it converges, whereas
the product


∏_{k=1}^{∞} |1 + β_k|

diverges.
As with infinite series, the commutativity property [5] holds for absolutely con-
vergent infinite products but does not do so for conditionally convergent ones. This
means that the order of factors in an absolutely convergent infinite product can be
arbitrarily rearranged without affecting the product value. If, however, an infinite
product converges conditionally, then a rearrangement may affect the product’s con-
vergence in the sense that it might change its value.
For a justification of the commutativity property, we will present an example of
a conditionally convergent infinite product and show that by rearranging the order
of its factors we can obtain for it an arbitrarily preassigned value. In doing so, recall
the infinite product in (1.7) and rewrite it as
∏_{k=2}^{∞} (1 + (−1)^k/k). (1.14)

As we have recently proved, this product converges to the value of unity. The con-
vergence is, however, conditional because the product in (1.8),
∏_{k=2}^{∞} (1 + 1/k),

as we also recently figured out, is divergent.


To illustrate the fact that the order of factors in (1.14) matters, or to show, in
other words, that rearranging the order of its factors might affect its value, observe
that the factors with a plus sign in (1.14) alternate with those having a minus sign.
Indeed,
∏_{k=2}^{∞} (1 + (−1)^k/k) = (1 + 1/2)(1 − 1/3)(1 + 1/4)(1 − 1/5) ··· .

Let M and N be two positive integers, and rearrange the order of factors in (1.14)
in such a way that segments T_M of M factors representing sums alternate with seg-
ments T_N of N factors representing differences. The first of the segments T_M ap-
pears as

T_M = (1 + 1/2)(1 + 1/4)(1 + 1/6) · ... · (1 + 1/(2M)),

while the first of the segments T_N reads

T_N = (1 − 1/3)(1 − 1/5)(1 − 1/7) · ... · (1 − 1/(2N + 1)).
Rewrite the segments T_M and T_N in a compact form. That is,

T_M = 3/2 · 5/4 · 7/6 · ... · (2M + 1)/(2M) = (2M + 1)!!/(2M)!!

and

T_N = 2/3 · 4/5 · 6/7 · ... · (2N)/(2N + 1) = (2N)!!/(2N + 1)!!.

This makes the (M + N)kth partial product P_{(M+N)k} of the rearranged infinite
product equal to

P_{(M+N)k} = (2Mk + 1)!!(2Nk)!! / ((2Mk)!!(2Nk + 1)!!). (1.15)
To compute the value of the rearranged infinite product, one is required to take
a limit of its partial products in (1.15) as k approaches infinity. Before going any
further with the limit process, we recall the classical [5] Wallis formula

lim_{k→∞} (2k)!! / ((2k − 1)!! √k) = √π

and convert it to a form that is more convenient for the development that follows. In
doing so, rewrite the above as

lim_{k→∞} (2k − 1)!! √k / (2k)!! = 1/√π.
Clearly, upon multiplying the numerator and denominator in the above by
(2k + 1)√k, we transform it into

lim_{k→∞} (2k + 1)!! · k / ((2k)!! · √k · (2k + 1)) = lim_{k→∞} (2k − 1)!! √k / (2k)!! = 1/√π.

The limit on the left-hand side can be decomposed into the product of two limits:

lim_{k→∞} (2k + 1)!! / ((2k)!! √k) · lim_{k→∞} k/(2k + 1) = 1/√π.

Since the second of the two limits is 1/2, the above relation reads

lim_{k→∞} (2k + 1)!! / ((2k)!! √k) = 2/√π,

which can be considered an equivalent version of Wallis’s formula.
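This equivalent version of Wallis’s formula can be checked directly by accumulating the double-factorial ratio factor by factor, which avoids forming huge factorials.

```python
from math import sqrt, pi

def wallis_ratio(k):
    """(2k+1)!! / ((2k)!! * sqrt(k)), accumulated factor by factor."""
    r = 1.0
    for j in range(1, k + 1):
        r *= (2 * j + 1) / (2 * j)
    return r / sqrt(k)

vals = [wallis_ratio(k) for k in (10, 1000, 100_000)]
target = 2.0 / sqrt(pi)   # approximately 1.1284
```

The error shrinks roughly like 1/k, so the approach to 2/√π is slow but steady.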
We recall now the rearranged infinite product in (1.14) and compute its value V
by taking the limit of its (M + N)kth partial product P_{(M+N)k} in (1.15) as k ap-
proaches infinity. This yields

V = lim_{k→∞} (2Mk + 1)!!(2Nk)!! / ((2Mk)!!(2Nk + 1)!!)
= √(M/N) · lim_{k→∞} (2Mk + 1)!! / ((2Mk)!! √(Mk)) · lim_{k→∞} (2Nk)!! √(Nk) / (2Nk + 1)!!,

which, in light of the second version of Wallis’s formula, reads

V = √(M/N) · (2/√π) · (√π/2) = √(M/N).
Hence, the infinite product in (1.14), rearranged in the way just described, might
either increase or decrease in value depending upon the integers M and N . Indeed,
if the rearrangement is made, say, such that every four factors representing sums
(M = 4) are followed by a single factor representing a difference (N = 1), then
the value of the resultant infinite product is twice the value of the original infinite
product in (1.14).
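The effect of rearrangement can be observed directly: multiplying the factors of (1.14) in their natural order drives the partial products to 1, while grouping four “plus” factors per one “minus” factor (M = 4, N = 1) drives them toward √(M/N) = 2. A sketch:

```python
def natural_partial(n_factors):
    """Partial product of (1.14) in its natural order, k = 2, 3, ..."""
    p = 1.0
    for k in range(2, n_factors + 2):
        p *= 1.0 + (-1) ** k / k
    return p

def rearranged_partial(M, N, segments):
    """Alternate segments of M plus-factors (1 + 1/(2m)) and N minus-factors
    (1 - 1/(2n + 1)), consuming each family of indices in increasing order."""
    p, m, n = 1.0, 0, 0
    for _ in range(segments):
        for _ in range(M):
            m += 1
            p *= 1.0 + 1.0 / (2 * m)
        for _ in range(N):
            n += 1
            p *= 1.0 - 1.0 / (2 * n + 1)
    return p

natural_value = natural_partial(100_000)            # tends to 1
shuffled_value = rearranged_partial(4, 1, 20_000)   # tends to sqrt(4/1) = 2
```

The same factors, taken in a different order, thus yield a different value, which is exactly the failure of commutativity for conditionally convergent products.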
Completing our brief review on the fundamentals of infinite products, we turn to
functional products and let βk (z) be a function defined on a set S for each positive
integer k. Then we say that the infinite product


∏_{k=1}^{∞} (1 + β_k(z)) (1.16)

converges uniformly in S if the condition 1 + β_k(z) ≠ 0 holds for all k, and the
sequence of partial products


P_N(z) = ∏_{k=1}^{N} (1 + β_k(z))

of (1.16) converges uniformly in S to a function P (z) that never vanishes in S.


Clearly, the infinite product in (1.16) converges uniformly if the series

Σ_{k=1}^{∞} β_k(z)

is uniformly convergent on S.

Let βk (z) represent continuous functions in S for each k. It can be shown that if
the infinite product in (1.16) converges uniformly in S, then the product value P (z)
is continuous in S.
Keep in mind that the key theme in the present volume is the representation of
elementary functions in terms of infinite products. We believe that with the review
just completed, the reader is prepared to cope with the rest of the material in this
book where infinite products emerge.
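As a preview of the functional products treated in Chap. 2, Euler’s classical representation sin πz = πz ∏_{k=1}^{∞} (1 − z²/k²) can be probed numerically; the truncated product approaches sin πz, with a tail error on the order of z²/N (the sample point z = 0.3 is our own choice).

```python
from math import sin, pi

def euler_sin(z, N):
    """Truncated Euler product pi*z*prod_{k=1}^{N} (1 - z^2/k^2) for sin(pi*z)."""
    p = pi * z
    for k in range(1, N + 1):
        p *= 1.0 - (z * z) / (k * k)
    return p

errs = [abs(euler_sin(0.3, N) - sin(pi * 0.3)) for N in (10, 100, 10_000)]
```

The slow 1/N decay of the truncation error is typical of this classical product, and it is one of the motivations for seeking alternative infinite product representations later in the book.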
Our introductory review takes a turn, at this point, to the second of the two con-
cepts, which, along with the concept of infinite product, represents the keystone
in this volume. That is the concept of Green’s function. In order to introduce the
Green’s function notion for the two-dimensional Laplace equation, we consider,
in two-dimensional Euclidean space, a simply connected region Ω bounded by a
piecewise smooth contour L, and formulate a boundary-value problem in which the
Poisson equation

∇^2 u(P) = −f(P), P ∈ Ω, (1.17)

is subject to the homogeneous boundary condition

B[u(P)] = 0, P ∈ L, (1.18)

where ∇^2 represents the Laplace operator, the right-hand side f(P) is a function
continuous in Ω, and B is an operator of boundary conditions.
Assume that the problem in (1.17) and (1.18) is well posed, implying that it has
a unique solution. This means that the corresponding homogeneous problem, where
the boundary condition in (1.18) is imposed on L for the Laplace equation

∇^2 u(P) = 0, P ∈ Ω, (1.19)

has only the trivial solution u(P) = 0. If so, then the solution u(P) for the problem
in (1.17) and (1.18) can be expressed [8, 15, 18] in the integral form

u(P) = ∫∫_Ω G(P, Q) f(Q) dΩ(Q), (1.20)

with the kernel G(P, Q) being called the Green’s function to the homogeneous
boundary-value problem in (1.19) and (1.18).
The relation in (1.20) reveals a special feature of the Green’s function. Indeed,
once the latter is available, solution of the problem in (1.17) and (1.18) is a matter
of computing the integral in (1.20), which can be considered a direct consequence
of the second Green’s formula [8].
We use some standard terminology in our book for the arguments P and Q of
the Green’s function. They are commonly referred to as the observation point for P
(another customarily used term is the field point) and the source point for Q.
For any location of the source point Q ∈ Ω, the Green’s function G(P, Q), as
a function of the coordinates of the field point P , possesses the following defining
properties:

1. At any location of P ∈ Ω, except at P = Q, G(P, Q) is a harmonic function of P, that is,
   ∇_P^2 G(P, Q) = 0, P ≠ Q.
2. For P = Q, G(P, Q) possesses a logarithmic singularity of the type
   (1/2π) ln(1/|P − Q|).
3. G(P, Q) satisfies the boundary condition in (1.18), that is,
   B_P[G(P, Q)] = 0, P ∈ L.

In compliance with the defining properties, the Green’s function G(P, Q) of the
problem in (1.19) and (1.18) can be expressed as

G(P, Q) = (1/2π) ln(1/|P − Q|) + R(P, Q),

with the second additive component R(P, Q) referred to as the regular part of the
Green’s function. R(P, Q) represents a harmonic, everywhere in Ω, function of the
coordinates of P. That is,

∇_P^2 R(P, Q) = 0, P ∈ Ω and Q ∈ Ω.

A given Green’s function constructed by two different methods might have two
different appearances. One expression might appear in a computer-friendly form,
whereas the other might not be readily computable or simple to use. Later in this
volume, a number of different methods will be reviewed that produce a variety of
different forms of Green’s functions for boundary-value problems for the Laplace
equation.
Some of the available Green’s functions can be completely expressed in terms of
elementary functions. As an example, one might recall the classical [5, 18] Green’s
function

$$G(x,y;\xi,\eta) = \frac{1}{4\pi}\ln\frac{(x-\xi)^{2}+(y+\eta)^{2}}{(x-\xi)^{2}+(y-\eta)^{2}} \qquad (1.21)$$
for the Dirichlet problem posed in the half-plane {−∞ < x < ∞, 0 < y < ∞}.
It is evident that the denominator component in (1.21) constitutes the singular
part of G(x, y; ξ, η), whereas the component
$$R(x,y;\xi,\eta) = \frac{1}{4\pi}\ln\bigl[(x-\xi)^{2}+(y+\eta)^{2}\bigr]$$

represents its regular part.
Representations of the type in (1.21) are compact and convenient to work with in
various applications. It is worth noting, however, that there unfortunately exist only
a few such closed analytical forms of Green’s functions available in the literature.
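The defining properties of a Green's function of this kind are easy to spot-check numerically on the closed form in (1.21). The following minimal sketch (our own illustration, not part of the text) verifies that G vanishes on the boundary y = 0, that it is symmetric in the observation and source points, and that it is harmonic away from P = Q:

```python
import math

def G(x, y, xi, eta):
    # Green's function (1.21) for the Dirichlet problem in the half-plane y > 0
    num = (x - xi)**2 + (y + eta)**2
    den = (x - xi)**2 + (y - eta)**2
    return math.log(num / den) / (4 * math.pi)

# G vanishes on the boundary y = 0 (the Dirichlet condition)
print(G(1.3, 0.0, 0.4, 2.0))

# G is symmetric with respect to the observation and source points
print(G(1.3, 0.7, 0.4, 2.0) - G(0.4, 2.0, 1.3, 0.7))

# Away from P = Q the five-point Laplacian of G is numerically zero
h = 1e-2
lap = (G(1.3 + h, 0.7, 0.4, 2.0) + G(1.3 - h, 0.7, 0.4, 2.0)
       + G(1.3, 0.7 + h, 0.4, 2.0) + G(1.3, 0.7 - h, 0.4, 2.0)
       - 4 * G(1.3, 0.7, 0.4, 2.0)) / h**2
print(abs(lap) < 1e-4)
```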
Some other Green’s functions for the Laplace equation that are available in the literature are expressed in a form containing elementary functions and trigonometric series, such as
$$G(r,\varphi;\rho,\psi) = \frac{1}{2\pi}\left[\frac{1}{\beta} - 2\beta\sum_{n=1}^{\infty}\frac{(r\rho)^{n}}{n(n+\beta)}\cos n(\varphi-\psi)\right] - \frac{1}{4\pi}\ln\bigl(r^{2} - 2r\rho\cos(\varphi-\psi)+\rho^{2}\bigr) - \frac{1}{4\pi}\ln\bigl(1 - 2r\rho\cos(\varphi-\psi)+r^{2}\rho^{2}\bigr), \qquad (1.22)$$
obtained in [16] for the mixed boundary-value problem
$$\left(\frac{\partial}{\partial r} + \beta\right) G(r,\varphi;\rho,\psi)\bigg|_{r=1} = 0, \qquad \beta > 0, \qquad (1.23)$$
posed on the unit disk {0 < r < 1, 0 < ϕ ≤ 2π}.
Clearly, the regular part of the Green’s function presented in (1.22) can be written
as
$$R(r,\varphi;\rho,\psi) = -\frac{1}{4\pi}\ln\bigl(1 - 2r\rho\cos(\varphi-\psi)+r^{2}\rho^{2}\bigr) + \frac{1}{2\pi}\left[\frac{1}{\beta} - 2\beta\sum_{n=1}^{\infty}\frac{(r\rho)^{n}}{n(n+\beta)}\cos n(\varphi-\psi)\right].$$
Representations of the type in (1.22) are also quite convenient for practical implementations, because their singular components are explicitly expressed, while the series in their regular parts are uniformly convergent. This makes forms of the type in (1.22) computer-friendly and allows efficient computing by truncation of the series. A number of Green’s functions obtained in such a mixed form can be found in [16].
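Such a truncation is simple to carry out in practice. The sketch below is our own illustration (function name ours; the normalization is the one fixed by the singular part (1/2π) ln(1/|P − Q|)): the logarithmic components of (1.22) are kept in closed form, only the uniformly convergent series is truncated, and the mixed boundary condition (1.23) is spot-checked by a central difference in r:

```python
import math

def G_disk(r, phi, rho, psi, beta, terms=60):
    # Mixed form (1.22): closed-form logarithmic parts plus a truncated series.
    # The series terms decay like (r*rho)**n / n**2, so a modest truncation
    # level already gives near machine-precision accuracy when r*rho < 1.
    th = phi - psi
    s = sum((r * rho)**n / (n * (n + beta)) * math.cos(n * th)
            for n in range(1, terms + 1))
    return ((1 / beta - 2 * beta * s) / (2 * math.pi)
            - math.log(r*r - 2*r*rho*math.cos(th) + rho*rho) / (4 * math.pi)
            - math.log(1 - 2*r*rho*math.cos(th) + (r*rho)**2) / (4 * math.pi))

# Spot check of the boundary condition (1.23): dG/dr + beta*G = 0 at r = 1
beta, rho, phi, psi, h = 2.0, 0.5, 0.7, 0.0, 1e-5
dGdr = (G_disk(1 + h, phi, rho, psi, beta)
        - G_disk(1 - h, phi, rho, psi, beta)) / (2 * h)
print(abs(dGdr + beta * G_disk(1.0, phi, rho, psi, beta)) < 1e-6)
```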
In most other cases, however, Green’s functions of boundary-value problems for
the Laplace equation cannot be expressed in either an elementary function form or
in a mixed form of the type in (1.22). For example, we have the classical [18] Fourier
double-series form
$$G(x,y;\xi,\eta) = \frac{4}{ab}\sum_{m,n=1}^{\infty}\frac{\sin\mu x\,\sin\nu y\,\sin\mu\xi\,\sin\nu\eta}{\mu^{2}+\nu^{2}} \qquad (1.24)$$
of the Green’s function for the Dirichlet problem stated on the rectangle {0 < x < a,
0 < y < b}, where the parameters μ and ν are expressed in terms of the summation
indices m and n, and the rectangle’s dimensions a and b as
$$\mu = \frac{m\pi}{a} \quad\text{and}\quad \nu = \frac{n\pi}{b}.$$
The computability of the series representations of the type in (1.24) is limited due to their nonuniform convergence. The latter is unavoidable because any Green’s
function for the Laplace equation is not regular by definition and does not, therefore,
meet the convergence requirements for Fourier series [5]. Hence, a certain regular-
ization is required in order to convert the Green’s function shown in (1.24) to a form
appropriate for immediate computer implementation. Some recommendations for
such a conversion can be found in [16, 19, 24].
It is worth noting that multiple forms of Green’s functions are available for many
boundary-value problems for the Laplace equation. To illustrate this point, recall the
mixed problem in (1.23) for the unit disk and its Green’s function shown in (1.22).
An alternative to the representation in (1.22) of this Green’s function,
$$G(r,\varphi;\rho,\psi) = \frac{1}{2\pi}\left[\frac{1}{2}\ln\frac{1-2r\rho\cos(\varphi-\psi)+r^{2}\rho^{2}}{r^{2}-2r\rho\cos(\varphi-\psi)+\rho^{2}} - \frac{1}{\beta}\right] + \frac{1}{\pi}\,\mathrm{Re}\left[\bigl(r\rho\, e^{i(\varphi-\psi)}\bigr)^{-\beta}\int_{0}^{r\rho\, e^{i(\varphi-\psi)}}\frac{\zeta^{\beta-1}\,d\zeta}{1-\zeta}\right], \qquad (1.25)$$
is available in [5]. The standard abbreviation Re denotes the real part of a function of a complex variable. Note that this representation and the one in (1.22) are mathematically equivalent; computationally, however, the two forms are not equally convenient. Indeed, the one in (1.25) is less suitable for computer implementation than that of (1.22), because the regular part in (1.25) requires greater computational effort.
The multiplicity of forms in which Green’s functions can potentially be repre-
sented is instrumental in the present book. It represents the driving force of the in-
vestigation reported in Chap. 6. Taking advantage of this multiplicity, we will later
derive some interesting infinite product representations of trigonometric and hyper-
bolic functions by comparison of some alternative forms of Green’s functions for
the Laplace equation.
As to the material of the present volume, it is organized in five chapters. The
focus in Chap. 2 is on the classical [1] Euler infinite product representation of the
trigonometric and hyperbolic sine and cosine functions. We explain how Euler de-
rived them and also review some other known derivation procedures developed later.
In addition, the reader is introduced to the derivation of some other infinite product
representations of elementary functions that are available in mathematical hand-
books [6] or [9].
In Chaps. 3 and 5, we turn to Green’s functions of boundary-value problems
for the two-dimensional Laplace equation. Our effort is specific because we do not
concentrate on theoretical aspects but mostly analyze a variety of methods that are
traditionally used for the construction of Green’s functions. In doing so, special
attention is paid to the methods of images, conformal mapping, and eigenfunction
expansion. By extending the frontiers of the method of images, we obtain alternative
infinite-product-containing expressions for some classical Green’s functions. This
provides background for the key developments in the present work.
Chapter 4 makes a sharp turn, departing from the field of partial differential equa-
tions and focusing instead on ordinary differential equations. It might seem that this
material lies outside the book’s focus because it is not directly related to Green’s
functions for the Laplace equation. But the purpose of Chap. 4 is to prepare the
reader for a better comprehension of our work in Chap. 5, where we return to the
Laplace equation. A vast number of Green’s functions are obtained for this equa-
tion by means of the method of eigenfunction expansion. It is worth noting that this
method is applicable and appears efficient not only for the Laplace equation but also
for many other applied partial differential equations.
Chapter 6 is central to the book. An innovative approach [28] is presented and
developed for expressing elementary functions in terms of infinite products. Our
work on Green’s functions, discussed in detail earlier in the book, is instrumental
for Chap. 6. A number of infinite product expansions of elementary functions are
obtained. Some of those are simply alternatives to the forms that are already avail-
able in the classical literature. Some others were derived, however, for functions
whose infinite product representations are unavailable in existing handbooks.
To enhance the usefulness of this volume as a textbook, many illustrative examples are offered in most of its sections to assist the instructor in class preparation
and in giving the student more effective material for study. Every chapter begins
with a review guide outlining the basic concepts covered in the chapter. To reflect
the widespread idea that a text is only as good as its problems, a set of carefully
designed challenging exercises is available at the end of each chapter. The exercises
provide opportunities for the reader to explore the concepts of the chapter in more
detail. Hints, comments, and answers to most of those exercises are available in the
book.
The author hopes that the discussion initiated in this brief volume will stimulate the reader’s interest in learning more about our approach to the representation of elementary functions in terms of infinite products. He believes that the book might
arouse the reader’s curiosity and awaken a desire to better understand the nature of
the intersection of the subjects of Green’s functions and infinite products. This could
promote further progress in this challenging field that bridges the divide between the
two subjects.
Chapter 2
Infinite Products and Elementary Functions

The objective in this chapter is to lay out a working background for dealing with
infinite products and their possible applications. The reader will be familiarized
with a specific topic that is not often included in traditional texts on related courses
of mathematical analysis, namely the infinite product representation of elementary
functions.
It is known [9] that the theory of some special functions is, to a certain extent,
linked to infinite products. In this regard, one might recall, for example, the elliptic
integrals, gamma function, Riemann’s zeta function, and others. But note that spe-
cial functions are not targeted in this book at all. Our scope is limited exclusively to
the use of infinite products for the representation of elementary functions.
We will recall and discuss those infinite product representations of elementary
functions that are available in the current literature. Note that they have been de-
rived by different methods, but the number of them is limited. In Sect. 2.1, Euler’s
classical derivation procedure will be analyzed. His elegant elaborations in this field
were directed toward the derivation of infinite product representations for trigono-
metric as well as the hyperbolic sine and cosine functions. The work of Euler on
infinite products was inspirational [26] for many generations of mathematicians. It
will be frequently referred to in this brief volume as well.
Some alternative derivation techniques proposed for infinite product representa-
tions of trigonometric functions will be reviewed in detail in Sect. 2.2. The closing
Sect. 2.3 brings to the reader’s attention a variety of possible techniques for the
derivation of infinite product forms of other elementary functions. We will instruct
the reader on how to obtain the infinite product representations of elementary func-
tions that are available in standard texts and handbooks.

2.1 Euler’s Classical Representations


Both infinite series and infinite products could potentially be helpful in the area of
approximation of functions. Infinite series represent a traditional instrument in con-
temporary mathematics. One of its classical implementations is the representation of

functions, which is applicable to different areas of mathematical analysis. Approximation of functions and numerical differentiation and integration can be pointed
out as some, but not the only, such areas. Although infinite products have also been
known and developed for centuries [26], and can potentially be used in solving a
variety of mathematical problems, the range of their known implementations is not
as broad as that of infinite series.
The focus in the present volume is on just one of many possible implementations
of infinite products, namely the representation of elementary functions. Pioneering
results in this field were obtained over two hundred fifty years ago. They are as-
sociated with the name of one of the most prominent mathematicians of all time,
Leonhard Euler. According to historians [26], his mind had been preoccupied with
this topic for quite a long span of time. And it took him nearly ten years to ulti-
mately derive the following now classical representation for the trigonometric sine
function:
$$\sin x = x\prod_{k=1}^{\infty}\left(1-\frac{x^{2}}{k^{2}\pi^{2}}\right). \qquad (2.1)$$
We will analyze in this section the derivation procedure proposed by Euler and
also review, in further sections, some other procedures proposed later for the deriva-
tion of the representation in (2.1). Euler also showed that his procedure appears
effective for the trigonometric cosine function and derived the following infinite
product representation:
$$\cos x = \prod_{k=1}^{\infty}\left(1-\frac{4x^{2}}{(2k-1)^{2}\pi^{2}}\right). \qquad (2.2)$$

It is evident from the classical relations
$$\sin iz = i\sinh z \quad\text{and}\quad \cos iz = \cosh z$$
between the trigonometric and hyperbolic functions, which represent the analytic
continuation of the trigonometric functions into the complex plane, that the infinite
product representations
$$\sinh x = x\prod_{k=1}^{\infty}\left(1+\frac{x^{2}}{k^{2}\pi^{2}}\right) \qquad (2.3)$$

and
$$\cosh x = \prod_{k=1}^{\infty}\left(1+\frac{4x^{2}}{(2k-1)^{2}\pi^{2}}\right) \qquad (2.4)$$
for the hyperbolic sine and cosine functions directly follow from (2.1) and (2.2),
respectively.
As we will show later, Euler’s direct approach can be successfully applied to the
derivation of the representations in (2.3) and (2.4).
To let the reader enjoy the elegance of the approach, we will consider first the
case of the representation in (2.1) and follow it in some detail. In doing so, we write
down the trigonometric sine function, using Euler’s formula, in the exponential form
$$\sin x = \frac{e^{ix}-e^{-ix}}{2i},$$
and replace the exponential functions with their limit expressions reducing the above
to
$$\sin x = \frac{1}{2i}\lim_{n\to\infty}\left[\left(1+\frac{ix}{n}\right)^{n}-\left(1-\frac{ix}{n}\right)^{n}\right] = -\frac{i}{2}\lim_{n\to\infty}\left[\left(1+\frac{ix}{n}\right)^{n}-\left(1-\frac{ix}{n}\right)^{n}\right]. \qquad (2.5)$$
We then apply Newton’s binomial formula to both polynomials in the brackets.
This yields
$$\left(1+\frac{ix}{n}\right)^{n} = 1 + n\,\frac{ix}{n} + \frac{n(n-1)}{2!}\left(\frac{ix}{n}\right)^{2} + \cdots = \sum_{k=0}^{n}\binom{n}{k}\left(\frac{ix}{n}\right)^{k} \qquad (2.6)$$

and
$$\left(1-\frac{ix}{n}\right)^{n} = 1 - n\,\frac{ix}{n} + \frac{n(n-1)}{2!}\left(\frac{ix}{n}\right)^{2} - \cdots = \sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\left(\frac{ix}{n}\right)^{k}. \qquad (2.7)$$

Once these expressions are substituted into (2.5), all the real terms in the brackets
(the terms in even powers of x) cancel out. As soon as the common factor of 2ix
is factored out in the remaining odd-power terms of x, the right-hand side of (2.5)
reduces to a compact form, and we have

(n−1)/2  
2k + 1 x 2k
sin x = x lim (−1)k . (2.8)
n→∞ n n2k+1
k=0

Of all the stages in Euler’s procedure, which, as a whole, represents a real work of
art, the next stage is perhaps the most critical and decisive. Factoring the polynomial
in (2.8) into the trigonometric form
$$\sin x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{1+\cos 2k\pi/n}{1-\cos 2k\pi/n}\cdot\frac{x^{2}}{n^{2}}\right),$$
after trivial trigonometric transformations, we obtain
$$\sin x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{x^{2}\cos^{2}k\pi/n}{n^{2}\sin^{2}k\pi/n}\right) = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{x^{2}}{n^{2}\tan^{2}k\pi/n}\right).$$
To take the limit, the second additive term in the parentheses of the above finite
product is multiplied and divided by the factor k 2 π 2 . This yields
$$\sin x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{x^{2}}{k^{2}\pi^{2}}\cdot\frac{k^{2}\pi^{2}}{n^{2}\tan^{2}k\pi/n}\right) = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{x^{2}}{k^{2}\pi^{2}}\left(\frac{k\pi/n}{\tan k\pi/n}\right)^{2}\right),$$

which can be written, on account of the standard limit
$$\lim_{\vartheta\to 0}\frac{\vartheta}{\tan\vartheta} = 1,$$
as the classical Euler representation
$$\sin x = x\prod_{k=1}^{\infty}\left(1-\frac{x^{2}}{k^{2}\pi^{2}}\right).$$

An interesting observation can be drawn from a comparison of the above infinite product form with the classical Maclaurin series expansion

$$\sin x = \sum_{k=0}^{\infty}\frac{(-1)^{k}x^{2k+1}}{(2k+1)!}$$

of the sine function. These two forms share a common feature but also differ in an important way. As to the common feature, both the partial products of Euler’s infinite product representation and the partial sums of the Maclaurin expansion are odd-degree polynomials. What makes the two forms different is that the partial products of Euler’s representation are somewhat more faithful to the sine function: they share the same zeros x_k = kπ with the original sine function, whereas the partial sums of the Maclaurin expansion do not. It is evident that this property of the infinite product representation could be essential in applications.
To examine the convergence pattern of Euler’s representation and compare it to
that of Maclaurin’s series, the reader is invited to take a close look at Figs. 2.1
and 2.2. Sequences of the Euler partial products and Maclaurin partial sums are
depicted, illustrating the difference between the two formulations.
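The comparison behind such plots is easy to reproduce. In the sketch below (our own illustration; function names are ours), the partial products of (2.1) and the partial sums of the Maclaurin expansion are evaluated at a common point; note also that every partial product vanishes exactly at x = kπ, just as sin x does:

```python
import math

def euler_partial_product(x, K):
    # K-th partial product of the Euler representation (2.1)
    p = x
    for k in range(1, K + 1):
        p *= 1 - x**2 / (k * math.pi)**2
    return p

def maclaurin_partial_sum(x, K):
    # Maclaurin partial sum with terms up to x**(2K + 1)
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(K + 1))

x = 2.0
print(math.sin(x))
for K in (2, 5, 10):
    print(K, euler_partial_product(x, K), maclaurin_partial_sum(x, K))

# The partial products share the zeros of sin x:
print(euler_partial_product(math.pi, 10))   # 0.0
```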
As to the derivation of the infinite product representation of the trigonometric
cosine function, which was shown in (2.2), we diligently follow the procedure just
described for the sine function. That is, after using Euler’s formula
$$\cos x = \frac{e^{ix}+e^{-ix}}{2}$$
Fig. 2.1 Convergence of the series expansion for sin x

Fig. 2.2 Convergence of the product expansion for sin x

and expressing the exponential functions in the limit form
$$\cos x = \frac{1}{2}\lim_{n\to\infty}\left[\left(1+\frac{ix}{n}\right)^{n}+\left(1-\frac{ix}{n}\right)^{n}\right],$$

we substitute the Newtonian polynomials from (2.6) and (2.7) into the right-hand
side of the above relation. It can readily be seen that, in contrast to the case of the
sine function, all the odd-power terms cancel out; and we subsequently arrive at the
following even-degree polynomial-containing representation
$$\cos x = \lim_{n\to\infty}\sum_{k=0}^{(n-1)/2}(-1)^{k}\binom{n}{2k}\frac{x^{2k}}{n^{2k}}$$

for the cosine function. The polynomial under the limit sign can be factored in a
similar way as in (2.8). In this case, we obtain
$$\cos x = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{1+\cos(2k-1)\pi/n}{1-\cos(2k-1)\pi/n}\cdot\frac{x^{2}}{n^{2}}\right),$$
Fig. 2.3 Convergence of the product expansion for cos x

which, after a trivial trigonometric transformation, becomes
$$\cos x = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{x^{2}\cos^{2}(2k-1)\pi/2n}{n^{2}\sin^{2}(2k-1)\pi/2n}\right) = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{x^{2}}{n^{2}\tan^{2}(2k-1)\pi/2n}\right).$$

Similarly to the case of the sine function, we take the limit in the above relation,
which requires some additional algebra. That is, the second additive term in the
parentheses of the finite product is multiplied and divided by (2k − 1)2 π 2 /4n2 .
This yields
$$\cos x = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{4x^{2}}{(2k-1)^{2}\pi^{2}}\cdot\frac{(2k-1)^{2}\pi^{2}}{4n^{2}\tan^{2}(2k-1)\pi/2n}\right),$$

which immediately transforms into

$$\cos x = \lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1-\frac{4x^{2}}{(2k-1)^{2}\pi^{2}}\left(\frac{(2k-1)\pi/2n}{\tan(2k-1)\pi/2n}\right)^{2}\right).$$

The latter, in turn, reads ultimately as the classical Euler expansion for the cosine
shown in (2.2):
$$\cos x = \prod_{k=1}^{\infty}\left(1-\frac{4x^{2}}{(2k-1)^{2}\pi^{2}}\right).$$
Note that, similarly to the case of the sine function, the above infinite product
representation also shares the zeros xk = (2k − 1)π/2 with the original cosine func-
tion.
The convergence pattern of the above infinite product representation can be ob-
served in Fig. 2.3.
We turn now to the case of the hyperbolic sine function whose expansion is pre-
sented in (2.3). Its derivation can be conducted in a manner similar to that for the
trigonometric sine. Indeed, representing the hyperbolic sine function with Euler’s
formula
$$\sinh x = \frac{e^{x}-e^{-x}}{2},$$
one customarily expresses both the exponential functions in the limit form. This
results in
$$\sinh x = \frac{1}{2}\lim_{n\to\infty}\left[\left(1+\frac{x}{n}\right)^{n}-\left(1-\frac{x}{n}\right)^{n}\right]. \qquad (2.9)$$
Once the Newton binomial formula is used for both polynomials in the brackets,
one obtains
$$\left(1+\frac{x}{n}\right)^{n} = 1 + n\,\frac{x}{n} + \frac{n(n-1)}{2!}\,\frac{x^{2}}{n^{2}} + \cdots = \sum_{k=0}^{n}\binom{n}{k}\left(\frac{x}{n}\right)^{k}$$

and

$$\left(1-\frac{x}{n}\right)^{n} = 1 - n\,\frac{x}{n} + \frac{n(n-1)}{2!}\,\frac{x^{2}}{n^{2}} - \cdots = \sum_{k=0}^{n}(-1)^{k}\binom{n}{k}\left(\frac{x}{n}\right)^{k}.$$

As in the derivation of the trigonometric sine function, all the even-power terms
in x in (2.9) cancel out, while the remaining odd-power terms possess a common
factor of 2x. Once the latter is factored out, the expression in (2.9) simplifies to the
compact form
$$\sinh x = x\lim_{n\to\infty}\sum_{k=0}^{(n-1)/2}\binom{n}{2k+1}\frac{x^{2k}}{n^{2k+1}},$$

which factors as

$$\sinh x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1+\frac{x^{2}}{n^{2}}\cdot\frac{1+\cos 2k\pi/n}{1-\cos 2k\pi/n}\right).$$

Elementary trigonometric transformations yield
$$\sinh x = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1+\frac{x^{2}\cos^{2}k\pi/n}{n^{2}\sin^{2}k\pi/n}\right) = x\lim_{n\to\infty}\prod_{k=1}^{(n-1)/2}\left(1+\frac{x^{2}}{n^{2}\tan^{2}k\pi/n}\right).$$
Fig. 2.4 Convergence of the product expansion for sinh x

Taking the limit as in the case of the trigonometric sine function, one ultimately
transforms the above relation into the classical Euler form in (2.3):
$$\sinh x = x\prod_{k=1}^{\infty}\left(1+\frac{x^{2}}{k^{2}\pi^{2}}\right).$$

The convergence pattern of the above product representation can be observed in Fig. 2.4.
As to the derivation procedure for the case of the hyperbolic cosine function, we
will not go through its specifics, because it can be accomplished in exactly the same
way as that for the trigonometric cosine. To better understand the peculiarities of the procedure, the reader is nevertheless strongly encouraged to work through its details.
It is worth noting that since Euler, various procedures have been proposed for the derivation of the infinite product representations of the trigonometric and hyperbolic functions. In the next section, we review some of those procedures.
Over a dozen infinite product representations of elementary functions are avail-
able in current handbooks (see, for example, [9]). The present volume reviews them
in detail and describes, in addition, an interesting approach to the problem based
on the construction of Green’s functions for the two-dimensional Laplace equation.
This results, in particular, in infinite product representations [28] alternative to those
in (2.1) and (2.2) for the trigonometric sine and cosine functions. A number of other-
wise unavailable infinite product representations will also be derived for some other
trigonometric and hyperbolic functions.

2.2 Alternative Derivations


In all fairness, Euler’s derivation of the infinite product representations of the
trigonometric (hyperbolic) sine and cosine functions, which were reviewed in
Sect. 2.1, must be referred to as classical. This assertion is unreservedly justified
by the chronology. Indeed, Euler was the first to propose his derivation.
The reader will later be exposed to an unusual approach to the representation of elementary functions by infinite products, which was proposed by the author. This approach resulted [28] in novel representations for many elementary functions.
But before going any further into the details of that approach, let us revisit the clas-
sical Euler representation of the trigonometric sine function, and proceed through
some of its other derivations that are well known and can readily be found in the
classical literature [5] on the subject.
The first of those derivations can be handled with DeMoivre’s formula [5] for
a complex number in trigonometric form. It will be written down here for its odd
(2n + 1) exponent:

$$(\cos w + i\sin w)^{2n+1} = \cos(2n+1)w + i\sin(2n+1)w. \qquad (2.10)$$

On the other hand, using the binomial formula, the left-hand side of the above
can be expanded as

$$(\cos w + i\sin w)^{2n+1} = \cos^{2n+1}w + i(2n+1)\cos^{2n}w\,\sin w - \binom{2n+1}{2}\cos^{2n-1}w\,\sin^{2}w - i\binom{2n+1}{3}\cos^{2n-2}w\,\sin^{3}w + \cdots + (-1)^{n}\,i\sin^{2n+1}w. \qquad (2.11)$$

Equating the imaginary parts in (2.10) and (2.11), we obtain

$$\sin(2n+1)w = (2n+1)\cos^{2n}w\,\sin w - \binom{2n+1}{3}\cos^{2n-2}w\,\sin^{3}w + \cdots + (-1)^{n}\sin^{2n+1}w$$
$$= \sin w\left[(2n+1)\cos^{2n}w - \binom{2n+1}{3}\cos^{2n-2}w\,\sin^{2}w + \cdots + (-1)^{n}\sin^{2n}w\right]. \qquad (2.12)$$

Since the second factor (the one in the brackets) contains only even exponents of the sine and cosine functions, it can be represented as a polynomial P_n(sin² w) whose degree in sin² w never exceeds n. On the other hand, for any fixed value of n, the left-hand side of (2.12) takes on the value zero at the n points w_k = kπ/(2n + 1), k = 1, 2, 3, . . . , n, on the open interval (0, π/2). This implies that the zeros of P_n(s) are the values s_k = sin² w_k, allowing the polynomial to be expressed as
$$P_n(s) = \beta\prod_{k=1}^{n}\left(1-\frac{s}{\sin^{2}w_k}\right), \qquad (2.13)$$
where the factor β is yet to be determined. In going through its determination, we can rewrite the relation in (2.12), in light of (2.13), in the following compact form

$$\frac{\sin(2n+1)w}{\sin w} = \beta\prod_{k=1}^{n}\left[1-\left(\frac{\sin w}{\sin w_k}\right)^{2}\right] \qquad (2.14)$$
in terms of wk and take the limit as w approaches zero:
$$\lim_{w\to 0}\frac{\sin(2n+1)w}{\sin w} = \beta\lim_{w\to 0}\prod_{k=1}^{n}\left[1-\left(\frac{\sin w}{\sin w_k}\right)^{2}\right].$$
The limit on the left-hand side of the above is 2n + 1, while the limit of the product on the right-hand side is equal to 1. This suggests the value 2n + 1 for the factor β, and the relation in (2.14) transforms into
$$\sin(2n+1)w = (2n+1)\sin w\prod_{k=1}^{n}\left[1-\left(\frac{\sin w}{\sin(k\pi/(2n+1))}\right)^{2}\right]. \qquad (2.15)$$

Substituting x = (2n + 1)w, we rewrite (2.15) as
$$\sin x = (2n+1)\sin\frac{x}{2n+1}\prod_{k=1}^{n}\left[1-\left(\frac{\sin(x/(2n+1))}{\sin(k\pi/(2n+1))}\right)^{2}\right]. \qquad (2.16)$$

Since
$$\lim_{n\to\infty}(2n+1)\sin\frac{x}{2n+1} = x,$$

while

$$\lim_{n\to\infty}\frac{\sin(x/(2n+1))}{\sin(k\pi/(2n+1))} = \frac{x}{k\pi},$$
the relation in (2.16) transforms, as n approaches infinity, into the classical Euler
representation in (2.1):
$$\sin x = x\prod_{k=1}^{\infty}\left(1-\frac{x^{2}}{k^{2}\pi^{2}}\right).$$
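It is instructive to note that, unlike the limiting form (2.1), the relation in (2.16) is an exact identity for every finite n. This is easy to confirm numerically; the sketch below is our own illustration (function name ours):

```python
import math

def rhs_2_16(x, n):
    # right-hand side of (2.16): a finite product with n factors
    m = 2 * n + 1
    p = m * math.sin(x / m)
    for k in range(1, n + 1):
        p *= 1 - (math.sin(x / m) / math.sin(k * math.pi / m))**2
    return p

x = 1.2
for n in (1, 5, 50):
    # each difference sits at the round-off level for every n
    print(n, rhs_2_16(x, n) - math.sin(x))
```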
Clearly, the derivation procedure just reviewed is based on a totally different idea
compared to that used by Euler. Recall another alternative derivation of the Euler
representation of the sine function, which can be carried out using the Laurent series
expansion [5]
$$\cot z - \frac{1}{z} = \sum_{k=-\infty}^{\infty}\left(\frac{1}{z-k\pi}+\frac{1}{k\pi}\right) \qquad (2.17)$$

for the cotangent function of a complex variable. Note that the summation in (2.17)
assumes that the k = 0 term is omitted.
Evidently, the opening terms of the above series have isolated singular points
(poles) in any bounded region D of the complex plane. If, however, a few initial
terms of the series in (2.17) are truncated, then the series becomes absolutely and
uniformly convergent in a bounded region. This assertion can be justified by considering the general term

$$\frac{1}{z-k\pi}+\frac{1}{k\pi} = \frac{z}{k\pi(z-k\pi)}$$
of the series, for which the following estimate holds:

$$\left|\frac{z}{k\pi(z-k\pi)}\right| = \left|\frac{z}{k^{2}\pi(z/k-\pi)}\right| \le \frac{T}{\pi\,|T/k-\pi|}\cdot\frac{1}{k^{2}},$$
where T represents the upper bound of the modulus of the variable z, that is, |z| < T .
It can be shown that the first factor on the right-hand side of the above inequality
has the finite limit T /π 2 as k approaches infinity. Thus, the series in (2.17) con-
verges (at the rate of 1/k 2 ) absolutely and uniformly in any bounded region. In
other words, both the left-hand side and the right-hand side in (2.17) are regular
functions at z = 0. This makes it possible for the series in (2.17) to be integrated
term by term. Taking advantage of this fact, we integrate both sides in (2.17) along
a path joining the origin z = 0 to a point z ∈ D. This yields
$$\log\frac{\sin z}{z}\bigg|_{z=0}^{z=z} = \sum_{k=-\infty}^{\infty}\left[\log(z-k\pi)+\frac{z}{k\pi}\right]_{z=0}^{z=z},$$

and after choosing the branch of the logarithm that vanishes at the origin, we obtain
$$\log\frac{\sin z}{z} = \sum_{k=-\infty}^{\infty}\left[\log\left(1-\frac{z}{k\pi}\right)+\frac{z}{k\pi}\right] = \sum_{k=-\infty}^{\infty}\log\left[\left(1-\frac{z}{k\pi}\right)\exp\frac{z}{k\pi}\right]. \qquad (2.18)$$

Exponentiating (2.18), we rewrite it as
$$\sin z = z\prod_{k=-\infty}^{\infty}\left(1-\frac{z}{k\pi}\right)\exp\frac{z}{k\pi}. \qquad (2.19)$$

Recall that the k = 0 factor is omitted in the above infinite product. Coupling then the kth factor

$$\left(1-\frac{z}{k\pi}\right)\exp\frac{z}{k\pi}$$
and the −kth factor

$$\left(1+\frac{z}{k\pi}\right)\exp\left(-\frac{z}{k\pi}\right)$$
in (2.19), we ultimately obtain the classical Euler representation of (2.1):
$$\sin z = z\prod_{k=1}^{\infty}\left(1-\frac{z^{2}}{k^{2}\pi^{2}}\right).$$
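The cancellation of the exponential convergence factors under this pairing is easy to observe numerically. A sketch of ours (finite truncation, complex arithmetic; the function name is our own):

```python
import cmath
import math

def paired_product(z, K):
    # truncation of (2.19) with the k-th and (-k)-th factors paired;
    # the exponential factors cancel, leaving (1 - z**2 / (k*pi)**2)
    p = z
    for k in range(1, K + 1):
        f_plus = (1 - z / (k * math.pi)) * cmath.exp(z / (k * math.pi))
        f_minus = (1 + z / (k * math.pi)) * cmath.exp(-z / (k * math.pi))
        p *= f_plus * f_minus
    return p

z = 0.8 + 0.3j
print(abs(paired_product(z, 5000) - cmath.sin(z)) < 1e-3)   # True
```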

So, two different derivations for the expansion in (2.1) have been reviewed in this
section. They are alternative to the classical Euler procedure discussed in Sect. 2.1.
This issue will be revisited again in Chap. 6, where yet another alternative deriva-
tion procedure for infinite product representations of elementary functions will be
presented. It was recently proposed by the author and reported in [27, 28], and is
based on a novel approach.
The objective in the next section is to introduce the reader to a limited number
of infinite product representations of elementary functions that can be found in the
current literature.

2.3 Other Elementary Functions


The classical Euler representations of the trigonometric and hyperbolic sine and co-
sine functions, whose derivation has been reproduced in this volume, could be em-
ployed in obtaining infinite product expansions for some other elementary functions.
However, only a limited number of those expansions are available in the literature.
All of them are listed in handbooks on the subject (see, for example, [6, 9]).
In this section, we are going to revisit the expressions for elementary functions in terms of infinite products that are available in the literature and advise the reader on methods that could be applied for their derivation. In doing so, we begin with the representation

$$\cos x - \cos y = 2\left(1-\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\left[1-\frac{x^{2}}{(2k\pi+y)^{2}}\right]\left[1-\frac{x^{2}}{(2k\pi-y)^{2}}\right] \qquad (2.20)$$
listed in [9] as #1.432(1). In order to derive it, the difference of cosines on the left-
hand side of (2.20) can be converted to the product form
$$\cos x - \cos y = 2\sin\frac{y+x}{2}\sin\frac{y-x}{2}$$

and then multiplied and divided by the factor sin²(y/2), yielding

$$\cos x - \cos y = \frac{2\sin^{2}\frac{y}{2}}{\sin^{2}\frac{y}{2}}\,\sin\frac{y+x}{2}\,\sin\frac{y-x}{2}.$$
Leaving the sin²(y/2) factor in the numerator in its current form while expressing the other three sine factors with the aid of the classical Euler infinite product representation in (2.1), one obtains

$$\cos x - \cos y = \sin^{2}\frac{y}{2}\cdot\frac{y^{2}-x^{2}}{2}\prod_{k=1}^{\infty}\left[1-\frac{(y+x)^{2}}{4k^{2}\pi^{2}}\right]\prod_{k=1}^{\infty}\left[1-\frac{(y-x)^{2}}{4k^{2}\pi^{2}}\right]\times\left[\frac{y}{2}\prod_{k=1}^{\infty}\left(1-\frac{y^{2}}{4k^{2}\pi^{2}}\right)\right]^{-2},$$

which can be rewritten in a more compact form. To proceed with this, we combine
all the three infinite products into a single product form. This yields
$$\cos x - \cos y = 2\,\frac{y^{2}-x^{2}}{y^{2}}\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\frac{\bigl[1-\frac{(y+x)^{2}}{4k^{2}\pi^{2}}\bigr]\bigl[1-\frac{(y-x)^{2}}{4k^{2}\pi^{2}}\bigr]}{\bigl(1-\frac{y^{2}}{4k^{2}\pi^{2}}\bigr)^{2}},$$

or, after performing elementary algebra on the expression under the product sign,
we have
$$\cos x - \cos y = 2\left(1-\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\frac{[4k^{2}\pi^{2}-(x+y)^{2}][4k^{2}\pi^{2}-(x-y)^{2}]}{(4k^{2}\pi^{2}-y^{2})^{2}}.$$

Upon factoring the differences of squares under the product sign, the above rela-
tion transforms into
$$\cos x - \cos y = 2\left(1-\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\frac{[2k\pi+(x+y)][2k\pi-(x+y)]}{(2k\pi+y)^{2}}\cdot\frac{[2k\pi+(x-y)][2k\pi-(x-y)]}{(2k\pi-y)^{2}}.$$
At this point, we regroup the numerator factors under the product sign. That is,
we combine the first and fourth factors, as well as the second and third factors. This
yields
$$\cos x - \cos y = 2\left(1-\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\frac{[(2k\pi+y)+x][(2k\pi+y)-x]}{(2k\pi+y)^{2}}\cdot\frac{[(2k\pi-y)-x][(2k\pi-y)+x]}{(2k\pi-y)^{2}},$$
reducing the above relation to the form
$$\cos x - \cos y = 2\left(1-\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\frac{(2k\pi+y)^{2}-x^{2}}{(2k\pi+y)^{2}}\cdot\frac{(2k\pi-y)^{2}-x^{2}}{(2k\pi-y)^{2}}$$
$$= 2\left(1-\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\left[1-\frac{x^{2}}{(2k\pi+y)^{2}}\right]\left[1-\frac{x^{2}}{(2k\pi-y)^{2}}\right].$$
This completes the derivation of the representation in (2.20).
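A quick numerical check of (2.20) — a sketch of ours, with a function name of our own choosing — compares a truncated product with the left-hand side directly:

```python
import math

def rhs_2_20(x, y, K):
    # K-th partial product of the representation (2.20)
    p = 2 * (1 - x**2 / y**2) * math.sin(y / 2)**2
    for k in range(1, K + 1):
        p *= ((1 - x**2 / (2*k*math.pi + y)**2)
              * (1 - x**2 / (2*k*math.pi - y)**2))
    return p

x, y = 1.1, 2.4
print(math.cos(x) - math.cos(y))
print(rhs_2_20(x, y, 10000))   # agrees to about five decimal places
```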
A derivation procedure similar to that just described for the expansion in (2.20)
can be employed for obtaining another infinite product expression of an elementary
function. This is the representation
$$\cosh x - \cos y = 2\left(1+\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\left[1+\frac{x^{2}}{(2k\pi+y)^{2}}\right]\left[1+\frac{x^{2}}{(2k\pi-y)^{2}}\right], \qquad (2.21)$$
which is also available in the existing literature (see #1.432(2) in [9]).
To put the derivation procedure for the relation in (2.21) on the effective track
just used in the case of the representation in (2.20), we express the hyperbolic cosine
function in terms of the trigonometric cosine,
$$\cosh x = \cos ix,$$
and simply trace out the procedure described earlier in detail for the case of (2.20):

y + ix y − ix
cosh x − cos y = cos ix − cos y = 2 sin sin
2 2
y
sin2 2 y + ix y − ix
=2 sin sin
sin2 y2 2 2

  
2 y y + ix (y + ix)2
= 2 sin · 1−
2 2 4k 2 π 2
k=1

∞
  ∞ 2 −1
y − ix (y − ix)2 y2  y2
× 1− 1− 2 2 .
2 4k 2 π 2 4 4k π
k=1 k=1

Upon grouping all the infinite product factors, the above reads
$$2\,\frac{y^{2}+x^{2}}{y^{2}}\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\frac{\bigl[1-\frac{(y+ix)^{2}}{4k^{2}\pi^{2}}\bigr]\bigl[1-\frac{(y-ix)^{2}}{4k^{2}\pi^{2}}\bigr]}{\bigl(1-\frac{y^{2}}{4k^{2}\pi^{2}}\bigr)^{2}},$$

and then transforms as

$$2\left(1+\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\frac{[4k^{2}\pi^{2}-(ix+y)^{2}][4k^{2}\pi^{2}-(ix-y)^{2}]}{(4k^{2}\pi^{2}-y^{2})^{2}}$$
$$= 2\left(1+\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\frac{[2k\pi-(ix+y)][2k\pi+(ix+y)]}{(2k\pi+y)^{2}}\cdot\frac{[2k\pi-(ix-y)][2k\pi+(ix-y)]}{(2k\pi-y)^{2}}$$
$$= 2\left(1+\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\frac{(2k\pi+y)^{2}+x^{2}}{(2k\pi+y)^{2}}\cdot\frac{(2k\pi-y)^{2}+x^{2}}{(2k\pi-y)^{2}}$$
$$= 2\left(1+\frac{x^{2}}{y^{2}}\right)\sin^{2}\frac{y}{2}\prod_{k=1}^{\infty}\left[1+\frac{x^{2}}{(2k\pi+y)^{2}}\right]\left[1+\frac{x^{2}}{(2k\pi-y)^{2}}\right].$$

We turn now to another infinite product representation of an elementary function that is available in the literature,

$$\cos\frac{\pi x}{4} - \sin\frac{\pi x}{4} = \prod_{k=1}^{\infty}\left[1+\frac{(-1)^{k}x}{2k-1}\right], \qquad (2.22)$$

listed in [9], for example, as #1.433. This infinite product converges at the slow rate
of 1/k. We can offer two alternative expansions of the function
cos(πx/4) − sin(πx/4)
whose convergence rate is notably faster compared to that of (2.22). To derive the
first such expansion, we convert the difference of trigonometric functions in (2.22)
to a single cosine function. This can be done by multiplying and dividing it by a factor of √2/2:
cos(πx/4) − sin(πx/4) = √2 [(√2/2) cos(πx/4) − (√2/2) sin(πx/4)]

= √2 [cos(π/4) cos(πx/4) − sin(π/4) sin(πx/4)] = √2 cos(π(1 + x)/4).
Upon expressing the above cosine function by the classical Euler infinite product
form in (2.2), the first alternative version of the expansion in (2.22) appears as
cos(πx/4) − sin(πx/4) = √2 cos(π(1 + x)/4) = √2 ∏_{k=1}^∞ [1 − (1 + x)²/(4(2k − 1)²)].   (2.23)
If, in contrast to the derivation just completed, the left-hand side of (2.22) is similarly expressed as a single sine function

cos(πx/4) − sin(πx/4) = √2 sin(π(1 − x)/4),

then one arrives, with the aid of the classical Euler infinite product form for the sine function in (2.1), at another alternative representation to that in (2.22),

cos(πx/4) − sin(πx/4) = (π√2/4)(1 − x) ∏_{k=1}^∞ [1 − (1 − x)²/(16k²)].   (2.24)
Fig. 2.5 Convergence of the representation in (2.22)

Fig. 2.6 Convergence of the representation in (2.24)

It is evident that the versions in (2.23) and (2.24) are more efficient computationally than that in (2.22). Indeed, they converge at the rate 1/k², in contrast to the rate 1/k for the expansion in (2.22).
As to the representations in (2.23) and (2.24), it can be shown that the relative
convergence of the latter must be slightly faster. This assertion directly follows from
observation of the denominators in their fractional components. Indeed, the inequality

4(2k − 1)² = 16k² − 16k + 4 < 16k²

holds for every positive integer k, since 16k − 4 > 0.
Relative convergence of the representations in (2.22) and (2.24) can be observed
in Figs. 2.5 and 2.6, where their second, fifth, and tenth partial products are plotted
on the interval [0, π].
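This comparison can also be made numerically. In the Python sketch below (ours; the helper names and the test point x = 0.5 are arbitrary choices), the Kth partial products of (2.22) and (2.24) are evaluated against the common value cos(πx/4) − sin(πx/4):

```python
import math

def p22(x, K):
    # Kth partial product of (2.22); its factors approach 1 at the rate 1/k
    p = 1.0
    for k in range(1, K + 1):
        p *= 1.0 + (-1)**k * x / (2*k - 1)
    return p

def p24(x, K):
    # Kth partial product of (2.24); its factors approach 1 at the rate 1/k**2
    p = math.sqrt(2.0) * math.pi / 4.0 * (1.0 - x)
    for k in range(1, K + 1):
        p *= 1.0 - (1.0 - x)**2 / (16.0 * k**2)
    return p

x, K = 0.5, 500
exact = math.cos(math.pi * x / 4) - math.sin(math.pi * x / 4)
err22 = abs(p22(x, K) - exact)
err24 = abs(p24(x, K) - exact)
```

At K = 500 and x = 0.5 the slowly convergent form (2.22) is still off in the fourth decimal place, while (2.24) is already accurate to about five.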
Derivation of the next infinite product representation of an elementary function,
which is available in [9] (see #1.434),
cos²x = (1/4)(π + 2x)² ∏_{k=1}^∞ [1 − (π + 2x)²/(4k²π²)]²,   (2.25)
is as straightforward as it gets. Indeed, once the cosine function is converted to the sine form

cos²x = sin²(π/2 + x),

the implementation of the classical representation for the sine function in (2.1) completes the job.
At this point, we turn to another representation,
sin π(x + a)/sin πa = [(x + a)/a] ∏_{k=1}^∞ [1 − x/(k − a)][1 + x/(k + a)],   (2.26)
which is presented in [9] as #1.435. If the sine functions in the numerator and denominator are expressed in terms of the classical Euler form, then (2.26) reads

sin π(x + a)/sin πa = {π(x + a) ∏_{k=1}^∞ [1 − π²(x + a)²/(k²π²)]} / {πa ∏_{k=1}^∞ [1 − π²a²/(k²π²)]}.
And upon performing a chain of straightforward transformations, the above representation converts ultimately into (2.26),

sin π(x + a)/sin πa = [(x + a)/a] ∏_{k=1}^∞ [1 − (x + a)²/k²] / [1 − a²/k²]

= [(x + a)/a] ∏_{k=1}^∞ [1 − (x + a)/k][1 + (x + a)/k] / {[1 − a/k][1 + a/k]}

= [(x + a)/a] ∏_{k=1}^∞ {[(k − a) − x]/(k − a)} · {[(k + a) + x]/(k + a)}

= [(x + a)/a] ∏_{k=1}^∞ [1 − x/(k − a)][1 + x/(k + a)].
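As a quick numerical check of (2.26) (a Python sketch of ours, with arbitrary sample values of x and a), the truncated product can be compared with the ratio of sines directly:

```python
import math

def product_2_26(x, a, K=100000):
    # Truncated right-hand side of (2.26)
    p = (x + a) / a
    for k in range(1, K + 1):
        p *= (1.0 - x / (k - a)) * (1.0 + x / (k + a))
    return p

x, a = 0.3, 0.25
lhs = math.sin(math.pi * (x + a)) / math.sin(math.pi * a)
rhs = product_2_26(x, a)
```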
For another infinite product representation of an elementary function available in the literature, we turn to

1 − sin²(πx)/sin²(πa) = ∏_{k=−∞}^∞ [1 − x²/(k − a)²],   (2.27)

which is listed as #1.436 in [9].
To proceed with the derivation in this case, we convert the infinite product in (2.27) to an equivalent form. In doing so, we isolate the term with k = 0 (which is equal to 1 − x²/a²) of the product, and group the kth and the −kth terms by pairs. This transforms the relation in (2.27) into

1 − sin²(πx)/sin²(πa) = (1 − x²/a²) ∏_{k=1}^∞ [1 − x²/(k − a)²][1 − x²/(k + a)²].   (2.28)
To verify the above identity, transform its left-hand side as

1 − sin²(πx)/sin²(πa) = [sin²(πa) − sin²(πx)]/sin²(πa)

and decompose the numerator as a difference of squares:

[sin²(πa) − sin²(πx)]/sin²(πa) = (sin πa − sin πx)(sin πa + sin πx)/sin²(πa).   (2.29)
At the next step, convert the difference and the sum of the sine functions in (2.29) to the product forms

sin πa − sin πx = 2 sin(π(a − x)/2) cos(π(a + x)/2)

and

sin πa + sin πx = 2 sin(π(a + x)/2) cos(π(a − x)/2).

With this, we regroup the numerator in (2.29) as

2 sin(π(a + x)/2) cos(π(a + x)/2) · 2 sin(π(a − x)/2) cos(π(a − x)/2),

where the first double product represents the sine function sin π(a + x), while the second double product is sin π(a − x). This finally transforms the left-hand side in (2.28) into

sin π(a + x) sin π(a − x)/sin²(πa).
At this point, replacing all the sine functions with their classical Euler infinite product form, we rewrite the above as

[(a + x)(a − x)/a²] ∏_{k=1}^∞ [1 − (a + x)²/k²][1 − (a − x)²/k²] / [1 − a²/k²]²,

which transforms into

[(a² − x²)/a²] ∏_{k=1}^∞ [k² − (a + x)²][k² − (a − x)²] / [(k − a)²(k + a)²].   (2.30)
The numerator under the infinite product sign can be decomposed as

(k − a − x)(k + a + x)(k − a + x)(k + a − x).

So, grouping the first factor with the third, and the second with the fourth, one converts the numerator in (2.30) into

[(k − a)² − x²][(k + a)² − x²],

which transforms (2.30) to

(1 − x²/a²) ∏_{k=1}^∞ [(k − a)² − x²][(k + a)² − x²] / [(k − a)²(k + a)²]

and finally to

(1 − x²/a²) ∏_{k=1}^∞ [1 − x²/(k − a)²][1 − x²/(k + a)²].
This completes the derivation of the representation in (2.27).
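Again, a quick numerical confirmation is possible. The Python sketch below is our own illustration (the sample values are arbitrary) and uses the one-sided, paired form (2.28) of the product:

```python
import math

def product_2_28(x, a, K=100000):
    # Truncated right-hand side of (2.28), the paired form of (2.27)
    p = 1.0 - x**2 / a**2
    for k in range(1, K + 1):
        p *= (1.0 - x**2 / (k - a)**2) * (1.0 - x**2 / (k + a)**2)
    return p

x, a = 0.2, 0.45
lhs = 1.0 - math.sin(math.pi * x)**2 / math.sin(math.pi * a)**2
rhs = product_2_28(x, a)
```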
The next infinite product representation of an elementary function that will be reviewed here is also taken from [9]. It is #1.437:

sin 3x/sin x = −∏_{k=−∞}^∞ [1 − (2x/(x + kπ))²].   (2.31)
To verify this identity, we decompose first the difference of squares in the product as

−∏_{k=−∞}^∞ [1 − (2x/(x + kπ))²] = −∏_{k=−∞}^∞ [1 − 2x/(x + kπ)][1 + 2x/(x + kπ)]

and then convert the above infinite product to an equivalent form. Namely, by splitting off the term with k = 0, which is evidently equal to −3, and pairing the kth and the −kth terms, the above product transforms into

3 ∏_{k=1}^∞ [1 − 2x/(x + kπ)][1 − 2x/(x − kπ)][1 + 2x/(x + kπ)][1 + 2x/(x − kπ)]

and

3 ∏_{k=1}^∞ [(kπ − x)/(x + kπ)][(−kπ − x)/(x − kπ)][(3x + kπ)/(x + kπ)][(3x − kπ)/(x − kπ)].
Clearly, the first two factors under the product sign cancel, leaving the right-hand side of (2.31) as

3 ∏_{k=1}^∞ [(3x + kπ)/(x + kπ)][(3x − kπ)/(x − kπ)].   (2.32)

As to the left-hand side in (2.31), we reduce both of the sine functions in it to the infinite product form

sin 3x/sin x = 3 ∏_{k=1}^∞ [1 − 9x²/(k²π²)] / [1 − x²/(k²π²)] = 3 ∏_{k=1}^∞ (9x² − k²π²)/(x² − k²π²),

which is identical to the expression in (2.32). Thus, the identity in (2.31) is ultimately verified.
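The identity in (2.31) can also be confirmed numerically by truncating the doubly infinite product symmetrically. The sketch below is ours; x = 0.8 is an arbitrary test point:

```python
import math

def product_2_31(x, K=100000):
    # Right-hand side of (2.31), truncated to |k| <= K
    p = -3.0  # the k = 0 factor: 1 - (2x/x)**2 = -3
    for k in range(1, K + 1):
        p *= (1.0 - (2*x / (x + k*math.pi))**2) * (1.0 - (2*x / (x - k*math.pi))**2)
    return -p

x = 0.8
lhs = math.sin(3*x) / math.sin(x)
rhs = product_2_31(x)
```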
We turn next to an infinite product representation of another elementary function,

(cosh x − cos α)/(1 − cos α) = ∏_{k=−∞}^∞ [1 + (x/(2kπ + α))²],   (2.33)

which is listed in [9] as #1.438. To verify this identity, we transform its left-hand side as

(cosh x − cos α)/(1 − cos α) = (cos ix − cos α)/(1 − cos α) = sin((α + ix)/2) sin((α − ix)/2) / sin²(α/2).
We then express the sine functions by the classical Euler infinite product form, and perform some obvious elementary transformations. This yields

{[(α + ix)/2][(α − ix)/2] ∏_{k=1}^∞ [1 − (α + ix)²/(4k²π²)][1 − (α − ix)²/(4k²π²)]} / {(α²/4) ∏_{k=1}^∞ [1 − α²/(4k²π²)]²},

or

[(α² + x²)/α²] ∏_{k=1}^∞ [4k²π² − (α + ix)²][4k²π² − (α − ix)²] / [(2kπ + α)²(2kπ − α)²],

which transforms as

(1 + x²/α²) ∏_{k=1}^∞ (2kπ − α − ix)(2kπ + α + ix)(2kπ − α + ix)(2kπ + α − ix) / [(2kπ + α)²(2kπ − α)²].

Combining the first factor with the third, and the second with the fourth in the numerator, one converts the above into

(1 + x²/α²) ∏_{k=1}^∞ [(2kπ − α)² + x²][(2kπ + α)² + x²] / [(2kπ − α)²(2kπ + α)²],
which can be represented as

(1 + x²/α²) ∏_{k=1}^∞ [1 + x²/(2kπ − α)²][1 + x²/(2kπ + α)²].   (2.34)
It can be shown that the above infinite product (where the multiplication is assumed from one to infinity) transforms to that in (2.33), where we "sum" from negative infinity to positive infinity. To justify this assertion, we formally break down the product in (2.34) into two pieces,

(1 + x²/α²) ∏_{m=1}^∞ [1 + x²/(2mπ − α)²] · ∏_{k=1}^∞ [1 + x²/(2kπ + α)²],

and change the multiplication index in the first of the products via m = −k. This converts the above expression to

(1 + x²/α²) ∏_{k=−1}^{−∞} [1 + x²/(2kπ + α)²] · ∏_{k=1}^∞ [1 + x²/(2kπ + α)²],

which is just the right-hand side of the relation in (2.33). Thus, the identity in (2.33)
is verified.
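As before, the identity can be checked numerically. In our Python sketch (sample values arbitrary) the doubly infinite product is truncated at |k| = K:

```python
import math

def product_2_33(x, alpha, K=100000):
    # Right-hand side of (2.33), truncated to |k| <= K
    p = 1.0 + (x / alpha)**2  # the k = 0 factor
    for k in range(1, K + 1):
        p *= (1.0 + (x / (2*k*math.pi + alpha))**2) * (1.0 + (x / (2*k*math.pi - alpha))**2)
    return p

x, alpha = 0.9, 1.1
lhs = (math.cosh(x) - math.cos(alpha)) / (1.0 - math.cos(alpha))
rhs = product_2_33(x, alpha)
```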
We have finished our review of infinite product expansions of elementary functions that can be directly derived with the aid of the classical Euler representations for the trigonometric sine and cosine functions.
A few expansions, whose derivation will be conducted in the remaining part of this section, illustrate a variety of other possible approaches to the problem. Let us recall an alternative to Euler's infinite product expansion (2.1) of the trigonometric sine function. That is,

sin x = x ∏_{k=1}^∞ cos(x/2^k),   (2.35)
which also has been known for centuries and is listed, in particular, in [9] as #1.439.
A formal comment is appropriate as to the convergence of the infinite product in (2.35). It converges to the (nonzero) values of the sine function for any value of the variable x at which none of the cosine factors has an argument equal to π/2 + nπ, whereas it diverges to zero at the remaining values of x, matching the zero values of the sine function.
The derivation strategy that we are going to pursue in the case of (2.35) is based
on the definition of the value of an infinite product. The strategy has two stages.
First, a compact expression must be derived for the Kth partial product

P_K(x) = ∏_{k=1}^K cos(x/2^k)

of the infinite product in (2.35). Then the limit of P_K(x) is obtained as K approaches infinity.
To obtain a compact form of the partial product P_K(x) for (2.35), we rewrite its first factor cos(x/2) as

cos(x/2) = [2 sin(x/2) cos(x/2)] / [2 sin(x/2)] = sin x / [2 sin(x/2)].

Similarly, the second factor cos(x/2²) and the third factor cos(x/2³) in P_K(x) turn out to be

cos(x/2²) = [2 sin(x/2²) cos(x/2²)] / [2 sin(x/2²)] = sin(x/2) / [2 sin(x/2²)]

and

cos(x/2³) = [2 sin(x/2³) cos(x/2³)] / [2 sin(x/2³)] = sin(x/2²) / [2 sin(x/2³)].

Proceeding like this with the next-to-the-last factor cos(x/2^{K−1}) and the last factor cos(x/2^K) in P_K(x), we express them as

cos(x/2^{K−1}) = sin(x/2^{K−2}) / [2 sin(x/2^{K−1})]

and

cos(x/2^K) = sin(x/2^{K−1}) / [2 sin(x/2^K)].
Once all the factors are put together, we have a series of cancellations, and the partial product P_K(x) eventually reduces to the form

P_K(x) = sin x / [2^K sin(x/2^K)].   (2.36)

Upon multiplying the numerator and denominator in (2.36) by x and regrouping the factors,

P_K(x) = x sin x / [x 2^K sin(x/2^K)] = {(x/2^K) / sin(x/2^K)} · (sin x)/x,

the partial product of the representation in (2.35) is prepared for taking the limit. Thus, we finally obtain

lim_{K→∞} P_K(x) = ∏_{k=1}^∞ cos(x/2^k) = lim_{K→∞} {(x/2^K) / sin(x/2^K)} · (sin x)/x = (sin x)/x,

which completes the derivation of the representation in (2.35).
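Unlike the fixed-rate products considered earlier, the representation (2.35) converges extremely fast: by (2.36), P_K differs from (sin x)/x by the factor x/[2^K sin(x/2^K)] = 1 + O(4^{−K}). A short Python sketch of ours illustrates this:

```python
import math

def sine_viete(x, K):
    # x times the Kth partial product of (2.35)
    p = x
    for k in range(1, K + 1):
        p *= math.cos(x / 2**k)
    return p

x = 2.0
approx = sine_viete(x, 25)   # already accurate to machine precision
```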
Recall another infinite product representation,

sinh x = x ∏_{k=1}^∞ cosh(x/2^k),   (2.37)

which is available in [20]. It is evident that its derivation can also be conducted in exactly the same way as for the one in (2.35).
In what follows, the strategy just illustrated will be applied to the derivation of an infinite product representation for another elementary function, that is,

1/(1 − x) = ∏_{k=0}^∞ (1 + x^{2^k}),   |x| < 1.   (2.38)

It can also be found in [20]. To proceed with the derivation, we transform the general term in (2.38) as

1 + x^{2^k} = (1 − x^{2^{k+1}}) / (1 − x^{2^k})

and write down the Kth partial product P_K(x) of the representation in (2.38) explicitly as

P_K(x) = ∏_{k=0}^K (1 + x^{2^k}) = (1 − x²)/(1 − x) · (1 − x⁴)/(1 − x²) · (1 − x⁸)/(1 − x⁴) · … · (1 − x^{2^K})/(1 − x^{2^{K−1}}) · (1 − x^{2^{K+1}})/(1 − x^{2^K}).

It is evident that nearly all the terms in the above product cancel. Indeed, the only terms left are the denominator 1 − x of the first factor and the numerator 1 − x^{2^{K+1}} of the last factor. This reduces the partial product P_K(x) to the compact form

∏_{k=0}^K (1 + x^{2^k}) = (1 − x^{2^{K+1}}) / (1 − x),

whose limit, as K approaches infinity, is

lim_{K→∞} ∏_{k=0}^K (1 + x^{2^k}) = lim_{K→∞} (1 − x^{2^{K+1}}) / (1 − x) = 1/(1 − x)

for values of x such that |x| < 1.
From a comparison of the infinite product representation in (2.38) with the Maclaurin series ∑_{k=0}^∞ x^k of the function 1/(1 − x), it follows that the two are equivalent to each other, the Kth partial product of (2.38) coinciding with the partial sum of the series that contains 2^{K+1} terms. This observation means that the infinite product in (2.38) converges, at least formally, at a much faster rate.
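Both the telescoping and the equivalence with the geometric series are easy to verify in a few lines of Python (our sketch; the variable names are arbitrary):

```python
def geom_product(x, K):
    # Partial product of (2.38) with factors k = 0, ..., K
    p = 1.0
    for k in range(K + 1):
        p *= 1.0 + x ** (2**k)
    return p

x = 0.5
p4 = geom_product(x, 4)                    # five factors, k = 0..4
s32 = sum(x**n for n in range(2**5))       # geometric partial sum, 2**5 = 32 terms
limit = geom_product(x, 20)                # practically equal to 1/(1 - x) already
```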
To complete the review of methods customarily used for the infinite product representation of elementary functions, let us recall an approach to the square root function √(1 + x), which is described in [20], for example. The function is first transformed as

√(1 + x) = [2(x + 1)/(x + 2)] √[(1 + x)(x + 2)²/(4(x + 1)²)],   (2.39)

and the radicand on the right-hand side is then simplified as

(1 + x)(x + 2)²/(4(x + 1)²) = (x + 2)²/(4(x + 1)),

resulting in

√(1 + x) = [2(x + 1)/(x + 2)] √[(x + 2)²/(4(x + 1))]

= [2(x + 1)/(x + 2)] √[(x² + 4x + 4)/(4x + 4)] = [2(x + 1)/(x + 2)] √[1 + x²/(4x + 4)].   (2.40)

This suggests for the radical factor

√[1 + x²/(4(x + 1))]

of the right-hand side in (2.40) the same transformation that has just been applied to the function √(1 + x) in (2.39). Writing x₁ = x²/(4(x + 1)) for brevity, this yields

√(1 + x) = [2(x + 1)/(x + 2)] · [2(x₁ + 1)/(x₁ + 2)] √[1 + x₁²/(4(x₁ + 1))].

Proceeding further with this algorithm, one arrives at the infinite product representation

√(1 + x) = ∏_{k=0}^∞ 2(A_k + 1)/(A_k + 2)   (2.41)

for the square root function, where the parameter A_k can be obtained from the recurrence

A₀ = x,   A_{k+1} = A_k²/(4(A_k + 1)),   k = 0, 1, 2, … .
Fig. 2.7 Convergence of the expansion in (2.41)

It appears that the convergence rate of the expansion in (2.41) is extremely fast.
This assertion is illustrated with Fig. 2.7, where the partial products P0 , P1 , and P2
of the representation are depicted.
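The recurrence is straightforward to implement. The Python sketch below (ours; names arbitrary) reproduces the behavior seen in Fig. 2.7, where P₂ already approximates the square root to three or four digits:

```python
import math

def sqrt_product(x, K):
    # Partial product P_K of (2.41): factors k = 0, ..., K built from the
    # recurrence A_0 = x, A_{k+1} = A_k**2 / (4 * (A_k + 1))
    a, p = x, 1.0
    for _ in range(K + 1):
        p *= 2.0 * (a + 1.0) / (a + 2.0)
        a = a * a / (4.0 * (a + 1.0))
    return p

x = 3.0
p2 = sqrt_product(x, 2)   # three factors only, already close to sqrt(4) = 2
```

Since √(1 + x) = P_K √(1 + A_{K+1}) and A_{k+1} is roughly A_k²/4, the error is squared at every step, which explains the extremely fast convergence.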
This completes our intended review of the infinite product representations of elementary functions available in the current literature.
In the next chapter, the reader's attention will be directed to a totally different subject. Namely, we will begin a review of a collection of methods that are traditionally used for the construction of Green's functions for the two-dimensional Laplace equation.
The purpose of such a sharp turn is twofold. First, we aim at giving a review of the available procedures for the construction of Green's functions for a variety of boundary-value problems for the Laplace equation that is more comprehensive than those in other relevant sources. Second, one of those procedures plays a significant role in Chap. 6, where an innovative approach will be discussed for the expression of elementary functions in terms of infinite products.

2.4 Chapter Exercises

2.1 Use Euler’s approach and derive the infinite product representation in (2.4) for
the hyperbolic cosine function.

2.2 Verify the infinite product representation in (2.24).

2.3 Derive the infinite product representation in (2.37) for the hyperbolic sine function.

2.4 Verify the infinite product representation

cos x − sin x = ∏_{n=1}^∞ [1 + (−1)^n 4x/((2n − 1)π)].
2.5 Derive an infinite product representation for the function

a sin x + b cos x,

where a and b are real constants.

2.6 Derive an infinite product representation for the function

sin x + sin y.

2.7 Derive an infinite product representation for the function

cos x + cos y.

2.8 Verify the infinite product representation

tan x + cot x = (1/x) ∏_{k=1}^∞ [1 + 4x²/(k²π² − 4x²)].

2.9 Derive an infinite product representation for the function

cot x + cot y.

2.10 Verify the infinite product representation

cosh x − cosh y = [(x² − y²)/2] ∏_{k=1}^∞ [1 + (x² + y²)/(2k²π²) + (x² − y²)²/(16k⁴π⁴)].

2.11 Derive an infinite product representation for the function

coth x + coth y.
Chapter 3
Green’s Functions for the Laplace Equation

Our recent work reported in [27, 28] provides convincing evidence of a surprising
linkage between the topics of approximation of functions and the Green’s function
for some partial differential equations. The linkage appears promising and extremely
productive. It has generated an unlooked-for approach to the infinite product representation of elementary functions.
Our work here focuses on a comprehensive review of two standard methods that
can potentially be (and actually are) used for the construction of Green's functions for boundary-value problems for the two-dimensional Laplace equation. These are
the method of images, which is reviewed in Sect. 3.1, and the method of conformal
mapping, whose review is given in Sect. 3.2.
The present chapter is primarily designed to provide a preparatory basis for
Chap. 6, which plays a central role in the entire volume. An innovative approach
is proposed in that chapter to the infinite product representation of some elementary
functions, in particular for a number of trigonometric and hyperbolic functions.

3.1 Construction by the Method of Images


We begin our review of the collection of methods that are traditionally used for the
construction of Green’s functions for the two-dimensional Laplace equation with
the method of images. It is probably the simplest of all and represents one of the
classical approaches to the problem. It is included in nearly every text on partial
differential equations.
The scheme of the method is transparent and its algorithm is straightforward, but its applicability is very limited. Only a few closed forms of Green's function,
as expressed in terms of elementary functions, can be obtained by the method of
images. The objective of the method is to obtain a closed analytical form of the
regular component R(P , Q) of the Green’s function
G(P, Q) = −(1/2π) ln |P − Q| + R(P, Q),   P, Q ∈ Ω,   (3.1)

Y.A. Melnikov, Green's Functions and Infinite Products, 43
DOI 10.1007/978-0-8176-8280-4_3, © Springer Science+Business Media, LLC 2011
44 3 Green’s Functions for the Laplace Equation

for the well-posed boundary-value problem

∇²u(P) = 0,   P ∈ Ω,   (3.2)

T[u(P)] = 0,   P ∈ L,   (3.3)

stated for the Laplace equation.
Commonly used terminology [5, 8, 13, 15, 18] applies in this volume to the
setting in (3.2) and (3.3). Namely, the latter is said to be the Dirichlet problem if
T represents the identity operator T ≡ I . The Neumann problem corresponds to
T ≡ ∂/∂n, where n is the normal direction to the boundary L. The case of T ≡
∂/∂n − β, where β is a function of the coordinates of P , is usually referred to as the
Robin problem; in some sources it is called the mixed problem.
Recall that the singular component −(1/2π) ln |P − Q| of G(P, Q) can be interpreted as the response at a field (observation) point P to a unit source placed at an arbitrary point Q. With this in mind, the regular component R(P, Q) of G(P, Q) is intended, in the method of images, to be expressed as a response to a finite number of unit sources and sinks placed at points Q*₁, Q*₂, …, Q*_m outside the region Ω. None of those sources and sinks can, according to the definition of the Green's function, be located inside Ω. This makes the regular component

R(P, Q) = ∑_{j=1}^m (±1/(2π)) ln |P − Q*_j|   (3.4)
a harmonic function at every point P in Ω (since all the source points Q*_j are outside Ω). The plus sign in (3.4) corresponds to a sink, and the minus to a source. Clearly, G(P, Q), with such a regular component R(P, Q), represents a harmonic function at every point P ∈ Ω, except at P = Q. In addition, the boundary condition in (3.3) is supposed to be satisfied by appropriately choosing locations for Q*₁, Q*₂, …, Q*_m. That is, the trace −T[(1/2π) ln |P − Q|] of the singular component on the boundary line L is supposed to be compensated by T[R(P, Q)].

Example 3.1 For the first example on the use of the method of images, we consider a classical case of the Dirichlet problem for the upper half-plane Ω(x, y) = {−∞ < x < ∞, y > 0}, and construct its Green's function.

The influence of the unit source at a point Q(ξ, η) (the singular component of the Green's function)

−(1/4π) ln[(x − ξ)² + (y − η)²]

can be compensated, in this case, with a single unit sink placed at the point Q*(ξ, −η) located in the lower half-plane and symmetric to Q(ξ, η) about the boundary y = 0 of the half-plane. With the influence of this sink given as

(1/4π) ln[(x − ξ)² + (y + η)²],
Fig. 3.1 Derivation of the Green’s function for the quarter-plane

the Green’s function of the Dirichlet problem for the upper half-plane is finally
found as
1 (x − ξ )2 + (y + η)2
G(x, y; ξ, η) = ln . (3.5)
4π (x − ξ )2 + (y − η)2
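The defining properties of (3.5) are easy to confirm numerically: it vanishes for y = 0, it is symmetric in the observation and source points, and it is positive inside the half-plane. A Python sketch of ours (function and sample values arbitrary):

```python
import math

def g_half_plane(x, y, xi, eta):
    # Green's function (3.5) of the Dirichlet problem for the upper half-plane
    num = (x - xi)**2 + (y + eta)**2
    den = (x - xi)**2 + (y - eta)**2
    return math.log(num / den) / (4.0 * math.pi)

boundary_value = g_half_plane(1.0, 0.0, 0.3, 0.8)   # observation point on y = 0
```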

Example 3.2 As our next example, we consider another classical case of the Dirichlet problem for the quarter-plane (one may refer to it as the infinite wedge of π/2), Ω(r, ϕ) = {0 < r < ∞, 0 < ϕ < π/2}.

Since the distance between two points (r₁, ϕ₁) and (r₂, ϕ₂) is defined in polar coordinates as

√[r₁² − 2r₁r₂ cos(ϕ₁ − ϕ₂) + r₂²],

the singular component of the Green's function G(r, ϕ; ρ, ψ) reads as

−(1/4π) ln[r² − 2rρ cos(ϕ − ψ) + ρ²],   (3.6)

which represents the response at an observation point M(r, ϕ) ∈ Ω to the unit source (labeled here and later with a plus sign) placed at A(ρ, ψ) ∈ Ω (see Fig. 3.1).
In order to compensate the trace of the function in (3.6) (or in other words, to support the Dirichlet condition) on the boundary segment y = 0, we place a unit sink (labeled with an asterisk) at D(ρ, 2π − ψ). The influence of this sink is given by

(1/4π) ln[r² − 2rρ cos(ϕ − (2π − ψ)) + ρ²].   (3.7)

Similarly, with a unit sink at B(ρ, π − ψ), whose influence is defined as

(1/4π) ln[r² − 2rρ cos(ϕ − (π − ψ)) + ρ²],   (3.8)
46 3 Green’s Functions for the Laplace Equation

Fig. 3.2 Dirichlet–Neumann problem for the quarter-plane

we compensate the trace of (3.6) on the boundary segment x = 0, while to compensate the traces of the functions in (3.7) and (3.8) on x = 0 and y = 0, respectively, a unit source is required at C(ρ, π + ψ), with the influence

−(1/4π) ln[r² − 2rρ cos(ϕ − (π + ψ)) + ρ²].   (3.9)

Hence, the sum of the components in (3.6), (3.7), (3.8), and (3.9), which converts to the compact form

G(r, ϕ; ρ, ψ) = (1/4π) ln ∏_{n=1}^2 [r² − 2rρ cos(ϕ − (nπ − ψ)) + ρ²] / [r² − 2rρ cos(ϕ − ((n − 1)π + ψ)) + ρ²],   (3.10)

represents the Green's function of the Dirichlet problem for the infinite wedge {0 < r < ∞, 0 < ϕ < π/2}.
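A direct numerical check (our own sketch, with an arbitrarily placed source) confirms that (3.10) satisfies the Dirichlet condition on both edges of the wedge, ϕ = 0 and ϕ = π/2:

```python
import math

def g_quarter(r, phi, rho, psi):
    # Green's function (3.10) for the wedge 0 < phi < pi/2
    g = 0.0
    for n in (1, 2):
        num = r**2 - 2*r*rho*math.cos(phi - (n*math.pi - psi)) + rho**2
        den = r**2 - 2*r*rho*math.cos(phi - ((n - 1)*math.pi + psi)) + rho**2
        g += math.log(num / den)
    return g / (4.0 * math.pi)

rho, psi = 1.5, 0.7   # an arbitrary source point inside the wedge
```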

Example 3.3 Note that if compensatory sources and sinks are placed, for the infinite wedge Ω(r, ϕ) = {0 < r < ∞, 0 < ϕ < π/2}, in a manner different from that just described in Example 3.2, then the method of images enables us to construct the Green's function for a certain mixed boundary-value problem.

Proceeding in compliance with the scheme depicted in Fig. 3.2, one obtains the Green's function

G(r, ϕ; ρ, ψ) = (1/4π) ln{ [r² − 2rρ cos(ϕ − (2π − ψ)) + ρ²] / [r² − 2rρ cos(ϕ − ψ) + ρ²] × [r² − 2rρ cos(ϕ − (π + ψ)) + ρ²] / [r² − 2rρ cos(ϕ − (π − ψ)) + ρ²] }   (3.11)

of the Dirichlet–Neumann boundary-value problem for the infinite wedge of π/2, with Dirichlet and Neumann boundary conditions imposed on the boundary segments y = 0 and x = 0, respectively.
Fig. 3.3 Dirichlet problem for the infinite wedge of π/3

In a series of examples that follow, we show that although the method of images appears productive for a number of boundary-value problems stated on infinite wedges, it does not work for some of them.

Example 3.4 Consider the Dirichlet problem for the wedge of π/3, Ω(r, ϕ) = {0 < r < ∞, 0 < ϕ < π/3}. To construct the Green's function, the reader could follow, in this case, the procedure in detail by examining the scheme depicted in Fig. 3.3.

In order to compensate the influence of the singular component

−(1/4π) ln[r² − 2rρ cos(ϕ − ψ) + ρ²]
of the Green’s function on the boundary fragment ϕ = 0, we place a compensatory
unit sink at F ( , 2π − ψ), while another unit sink is required at B( , 2π/3 − ψ)
to support the Dirichlet condition on ϕ = π/3. To compensate the trace of the latter
sink on the boundary fragment ϕ = 0, a unit source is required at E( , 4π/3 +
ψ). The trace of the latter source is compensated on ϕ = π/3 with a unit sink at
D( , 4π/3 − ψ), while the trace of this sink is compensated on ϕ = 0 with a unit
source placed at C( , 2π/3 + ψ).
Thus, the aggregate influence of the five compensatory sources and sinks located outside Ω, as shown in Fig. 3.3, represents the regular component R(r, ϕ; ρ, ψ) of the Green's function of the Dirichlet problem for the wedge of π/3. The Green's function itself is ultimately obtained in the form

G(r, ϕ; ρ, ψ) = (1/4π) ln ∏_{n=1}^3 [r² − 2rρ cos(ϕ − (2nπ/3 − ψ)) + ρ²] / [r² − 2rρ cos(ϕ − (2(n − 1)π/3 + ψ)) + ρ²].   (3.12)

In contrast to the case of the mixed problem considered in Example 3.3 for the
wedge of π/2, the method of images fails for the problem considered in the next
example.
48 3 Green’s Functions for the Laplace Equation

Fig. 3.4 Failure of the method of images for a mixed problem

Example 3.5 To follow the procedure in detail and observe its failure for the Dirichlet–Neumann problem stated for the wedge of π/3, the reader is referred to the scheme of Fig. 3.4.

Clearly, the Dirichlet condition on ϕ = 0 is supported with a unit sink placed at F(ρ, 2π − ψ). To allow this sink to support the Neumann condition on ϕ = π/3, a unit sink is also required at C(ρ, 2π/3 + ψ). As to the Neumann condition on ϕ = π/3, the unit source at A(ρ, ψ) must be supported with a unit source at B(ρ, 2π/3 − ψ), which, in turn, should be paired with a unit sink placed at E(ρ, 4π/3 + ψ). The latter sink has to be paired with a unit sink at D(ρ, 4π/3 − ψ) for the Neumann condition supported on ϕ = π/3. If we now take a look at the two sinks at C(ρ, 2π/3 + ψ) and D(ρ, 4π/3 − ψ), they do not, unfortunately, support the Dirichlet condition on the boundary fragment ϕ = 0. And this is what indicates, in fact, the failure of the method for the mixed problem under consideration.
With the next example, we extend the number of problems stated on infinite
wedges for which the method of images does work.

Example 3.6 Consider the case of the Dirichlet problem stated on the infinite wedge of π/4, Ω(r, ϕ) = {0 < r < ∞, 0 < ϕ < π/4}.

The scheme depicted in Fig. 3.5 allows the reader to follow the procedure in detail and helps ultimately to obtain the Green's function that we are looking for in the compact form

G(r, ϕ; ρ, ψ) = (1/4π) ln ∏_{n=1}^4 [r² − 2rρ cos(ϕ − (nπ/2 − ψ)) + ρ²] / [r² − 2rρ cos(ϕ − ((n − 1)π/2 + ψ)) + ρ²].   (3.13)

Example 3.7 The Green’s function of the mixed problem for the wedge of π/4 can
also be obtained by the method of images. To justify this claim, consider the state-
ment with the Dirichlet and Neumann conditions imposed on the boundary segments
ϕ = 0 and ϕ = π/4, respectively.
Fig. 3.5 Dirichlet problem stated on the infinite wedge of π/4

Fig. 3.6 Dirichlet–Neumann problem on the infinite wedge of π/4

In order to trace out the image method, we examine the scheme shown in Fig. 3.6. Combining the influence of the eight sources and sinks that emerge in this case, we obtain the Green's function that we are looking for in the form
G(r, ϕ; ρ, ψ) = (1/4π) ln ∏_{n=1}^2 { [r² − 2rρ cos(ϕ − ((2n − 1)π/2 + ψ)) + ρ²] / [r² − 2rρ cos(ϕ − ((2n − 1)π/2 − ψ)) + ρ²] × [r² − 2rρ cos(ϕ − (nπ − ψ)) + ρ²] / [r² − 2rρ cos(ϕ − ((n − 1)π + ψ)) + ρ²] }.   (3.14)

Example 3.8 As to the Dirichlet problem for the wedge of π/6, the scheme of the method of images results in twelve unit sources and sinks, the aggregate of which represents the Green's function of interest, which appears in the form

G(r, ϕ; ρ, ψ) = (1/4π) ln ∏_{n=1}^6 [r² − 2rρ cos(ϕ − (nπ/3 − ψ)) + ρ²] / [r² − 2rρ cos(ϕ − ((n − 1)π/3 + ψ)) + ρ²].   (3.15)
50 3 Green’s Functions for the Laplace Equation

Analysis of the boundary-value problems for infinite wedges considered so far allows a couple of generalizations. The following two examples are presented in order to provide the reader with details.

Example 3.9 Observing the expression derived earlier for the Green's function of the Dirichlet problem on the wedge of π/2 and presented in (3.10), along with the one obtained for the wedge of π/4 (see (3.13)), we arrive at the generalization

G(r, ϕ; ρ, ψ) = (1/4π) ln ∏_{n=1}^{2^k} [r² − 2rρ cos(ϕ − (nπ/2^{k−1} − ψ)) + ρ²] / [r² − 2rρ cos(ϕ − ((n − 1)π/2^{k−1} + ψ)) + ρ²],   (3.16)

representing the Green's function of the Dirichlet problem for the wedge of π/2^k, where k = 0, 1, 2, … .
It is worth noting that the case of k = 0, which corresponds to the wedge of π, or in other words, to the upper half-plane y > 0, reads from (3.16) as

G(r, ϕ; ρ, ψ) = (1/4π) ln{[r² − 2rρ cos(ϕ + ψ) + ρ²] / [r² − 2rρ cos(ϕ − ψ) + ρ²]},

representing the Green's function derived earlier (see (3.5)) and expressed here in polar coordinates.
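The generalization (3.16) can be probed numerically for any k. The sketch below (ours; source location and test points arbitrary) checks that the resulting function satisfies the homogeneous Dirichlet condition on both edges of the wedge of π/2^k:

```python
import math

def g_wedge(r, phi, rho, psi, k):
    # Green's function (3.16) for the wedge 0 < phi < pi / 2**k
    step = math.pi / 2**(k - 1)   # angular spacing of the image sources
    g = 0.0
    for n in range(1, 2**k + 1):
        num = r**2 - 2*r*rho*math.cos(phi - (n*step - psi)) + rho**2
        den = r**2 - 2*r*rho*math.cos(phi - ((n - 1)*step + psi)) + rho**2
        g += math.log(num / den)
    return g / (4.0 * math.pi)

# wedge of pi/8 (k = 3), with the source at (1.2, 0.3) inside the wedge
edge1 = g_wedge(2.0, 0.0, 1.2, 0.3, 3)
edge2 = g_wedge(2.0, math.pi / 8, 1.2, 0.3, 3)
```

For k = 0 the loop contains a single term with step 2π, and the formula reduces to the half-plane Green's function quoted above.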

Example 3.10 Upon analyzing the expressions in (3.12) and (3.15), obtained for the wedges of π/3 and π/6, we obtain the Green's function of the Dirichlet problem for the wedge of π/(3 · 2^k) in the form

G(r, ϕ; ρ, ψ) = (1/4π) ln ∏_{n=1}^{3·2^k} [r² − 2rρ cos(ϕ − (2nπ/(3·2^k) − ψ)) + ρ²] / [r² − 2rρ cos(ϕ − (2(n − 1)π/(3·2^k) + ψ)) + ρ²],   (3.17)

where k = 0, 1, 2, … . Observe that the case k = 0 corresponds to the wedge of π/3, while the case of k = 1 is associated with the wedge of π/6, and so on.

Earlier in this section, we presented a convincing example in which the method of images fails for a mixed (Dirichlet–Neumann) boundary-value problem stated on a wedge. Note that the method is not necessarily effective even for the Dirichlet problem on a wedge. The following example is presented to justify this assertion.

Example 3.11 Consider the Dirichlet problem for the infinite wedge Ω(r, ϕ) = {0 < r < ∞, 0 < ϕ < 2π/3} and try to construct its Green's function.

The failure of the method can be observed, in this case, with the aid of the scheme shown in Fig. 3.7. Let a unit source (which produces the singular component of the Green's function) be located at A(ρ, ψ) ∈ Ω. To compensate its trace on the fragment ϕ = 0 of the boundary of Ω, place a compensatory sink at D(ρ, 2π − ψ) ∉ Ω.

Fig. 3.7 Failure of the method of images for a Dirichlet problem

The trace of the latter on the boundary fragment ϕ = 2π/3 is compensated, in turn, with a unit source at C(ρ, 4π/3 + ψ) ∉ Ω, whose trace on ϕ = 0 should be compensated with a unit sink at B(ρ, 2π/3 − ψ), which is, unfortunately, located inside Ω. And this is what justifies the failure of the method. Why so? Because compensatory sources and sinks cannot, according to the definition of the Green's function, be located inside Ω.
Thus, the above example illustrates the fact that the method of images may fail in
the construction of the Green’s function for the Dirichlet problem stated on a wedge
that allows cyclic symmetry. To observe some other cases in which the method fails,
the reader is invited (in the chapter exercises) to apply the procedure to other wedges
(of 2π/5 or 2π/7, for example) also allowing the cyclic symmetry.
Based on the experience gained so far, it sounds reasonable to make the following
observation. The method of images appears workable for Dirichlet problems stated
on the wedges of π/k, where k represents an integer. But a word of caution is
appropriate as to the above assertion. It is just an assertion, and the reader is strongly
encouraged to prove it rigorously.

Example 3.12 For the next illustrative example on the effective implementation of
the method of images, let us apply it to the construction of Green’s function for
another classical case of the Dirichlet problem stated on the disk of radius a.

The strategy of tackling the current problem with the method of images is based
on an obvious observation concerning the shape of equipotential lines in the field
generated by a point source or sink. Since these lines represent concentric circles
centered at the generating point, the following statement looks reasonable. That is,
for every location A of a unit source inside the disk, there exists a proper location
B of the compensatory unit sink outside the disk such that the circumference of the
disk is an equipotential line for the field generated by both the source and the sink.
Applying the strategy just described, we assume that the disk is centered at the
origin of the polar coordinate system r, ϕ and let the unit source generating the

Fig. 3.8 Derivation of the Green’s function for disk

singular component

−(1/4π) ln(r² − 2rϱ cos(ϕ − ψ) + ϱ²)    (3.18)

of the Green's function at M(r, ϕ) be located at a point A(ϱ, ψ) (see Fig. 3.8). Let also C(a, ϕ) be an arbitrary point on the circumference of the disk. It is evident that the point B(ϱ₁, ψ), where the compensatory unit sink

(1/4π) ln(r² − 2rϱ₁ cos(ϕ − ψ) + ϱ₁²)    (3.19)

is located, must be on the extension of the radial line of A. In other words, the
angular coordinate of B must be the same ψ as that of A. As to the radial coordinate ϱ₁ of B, it should be determined from the condition that the sum of (3.18) and (3.19)
is a constant, say λ, when M is taken to C (r = a). For the sake of convenience, we
express λ as
λ = −(1/4π) ln μ.    (3.20)

This yields

(1/4π) ln[(a² − 2aϱ₁ cos(ϕ − ψ) + ϱ₁²)/(a² − 2aϱ cos(ϕ − ψ) + ϱ²)] = −(1/4π) ln μ,
or
 
a² − 2aϱ cos(ϕ − ψ) + ϱ² = μ(a² − 2aϱ₁ cos(ϕ − ψ) + ϱ₁²).
Making the substitution
ϱ₁ = ωϱ,    (3.21)
we transform the above equation into
 
a² − 2aϱ cos(ϕ − ψ) + ϱ² = μ(a² − 2aωϱ cos(ϕ − ψ) + ω²ϱ²).    (3.22)

Clearly, the equation in (3.22) must hold for any value of ϕ − ψ. So, by assuming,
for instance, ϕ − ψ = π/2 (which implies cos(ϕ − ψ) = 0), we reduce (3.22) to
 
a² + ϱ² = μ(a² + ω²ϱ²).    (3.23)

Subtracting (3.23) from (3.22), we in turn have

2aϱ cos(ϕ − ψ) = 2μaωϱ cos(ϕ − ψ).

This simply means that μω = 1, that is, the values of μ and ω are reciprocals.
Substitution of μ = 1/ω into (3.23) yields

a² + ϱ² = (1/ω)(a² + ω²ϱ²).
The above equation can be rewritten as

a²(ω − 1) = ϱ²ω(ω − 1).    (3.24)

So, ω = 1 represents one of the roots of the quadratic equation in (3.24). It is evident that this root is meaningless, because the relation in (3.21) suggests, in this case, that the compensatory point B (see Fig. 3.8) is the same as A (?!). The second root ω = a²/ϱ² of (3.24) implies that

ϱ₁ = a²/ϱ and μ = ϱ²/a².

Thus, we have found the location where the point B(a²/ϱ, ψ) should be placed.
Such a point is usually referred to as the image of A about the circumference of the
disk. We also found the value of λ in (3.20):

λ = −(1/4π) ln(ϱ²/a²).
To complete the construction of the Green’s function, observe that the unit sink
at B generates the potential field

(1/4π) ln(a⁴/ϱ² − 2r(a²/ϱ) cos(ϕ − ψ) + r²)

at a point M(r, ϕ) inside the disk. Hence, the potential field generated at M(r, ϕ) by
both the unit source at A and the compensatory unit sink at B is defined as

(1/4π) ln[(a⁴ − 2rϱa² cos(ϕ − ψ) + r²ϱ²)/(ϱ²(r² − 2rϱ cos(ϕ − ψ) + ϱ²))].    (3.25)

In other words, (3.25) represents a function that is harmonic everywhere inside the disk except for the source point (ϱ, ψ). And in addition, the function in (3.25)

takes on the constant value −(1/4π) ln(ϱ²/a²) on the boundary of the disk. Thus, compensating the function in (3.25) with the opposite value −(1/4π) ln(a²/ϱ²), one ultimately obtains the Green's function of the Dirichlet problem for the disk of radius
a in the form
G(r, ϕ; ϱ, ψ) = (1/4π) ln[(a⁴ − 2rϱa² cos(ϕ − ψ) + r²ϱ²)/(ϱ²(r² − 2rϱ cos(ϕ − ψ) + ϱ²))] − (1/4π) ln(a²/ϱ²),

which reduces to
G(r, ϕ; ϱ, ψ) = (1/4π) ln[(a⁴ − 2rϱa² cos(ϕ − ψ) + r²ϱ²)/(a²(r² − 2rϱ cos(ϕ − ψ) + ϱ²))].    (3.26)

Note that another form, equivalent to the compact one above,

G(r, ϕ; ϱ, ψ) = (1/2π) ln(|z ζ̄ − a²|/(a|z − ζ|)),
is often used in the literature for the representation in (3.26), where the complex
variable notation is employed for the observation point z = r(cos ϕ + i sin ϕ) and
the source point ζ = ϱ(cos ψ + i sin ψ).
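The equivalence of the two representations can be confirmed numerically. Below is a small Python sketch (the naming is our own), which evaluates (3.26) and the complex-variable form at the same point; they agree to machine precision because |z ζ̄ − a²|² = a⁴ − 2rϱa² cos(ϕ − ψ) + r²ϱ².

```python
import cmath
import math

def green_disk(r, phi, rho, psi, a):
    """Green's function (3.26) of the Dirichlet problem for the disk of radius a."""
    c = math.cos(phi - psi)
    num = a**4 - 2*r*rho*a**2*c + r**2*rho**2
    den = a**2*(r**2 - 2*r*rho*c + rho**2)
    return math.log(num/den)/(4*math.pi)

def green_disk_complex(r, phi, rho, psi, a):
    """The complex-variable form (1/2pi) ln(|z*conj(zeta) - a^2| / (a*|z - zeta|))."""
    z = cmath.rect(r, phi)        # observation point
    zeta = cmath.rect(rho, psi)   # source point
    return math.log(abs(z*zeta.conjugate() - a**2)/(a*abs(z - zeta)))/(2*math.pi)
```

Evaluating both at an interior point gives identical values, and (3.26) vanishes identically on the circumference r = a, as it must.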
It is important to note that all the Green’s functions presented in this section
are expressed in a closed analytical form (in terms of a finite number of elementary
functions). Later in this book, we will further expand the sphere of productive use of
the method of images. But the purpose of that expansion will be different from just
obtaining Green’s functions. The method will be employed to obtain specific repre-
sentations of some Green’s functions with a subsequent focus on the approximation
of elementary functions.

3.2 Method of Conformal Mapping


Another method that has traditionally been employed for the construction of Green’s
functions for the two-dimensional Laplace equation is the method of conformal
mapping [8, 13, 16]. It is rooted in the classical topic of complex analysis. To introduce its background, let w(z, ζ) represent a function of a complex variable that maps a simply connected region Ω bounded by L conformally onto the interior of
the unit disk |w| ≤ 1, while the point z = ζ is mapped into the center w = 0 of the
disk, that is, w(ζ, ζ ) = 0.
It is worth noting that the conformal mapping of a simply connected region onto
a disk is not unique. Indeed, it is defined up to an arbitrary rotation about the disk’s
center.
As the reader may have learned from a course in complex analysis [5], the
Green’s function to the Dirichlet problem
∇²u(P) = 0, P ∈ Ω,    (3.27)
u(P) = 0, P ∈ L,    (3.28)

can be expressed in terms of the mapping function w(z, ζ ) as


G(P, Q) = −(1/2π) ln |w(z, ζ)|,    (3.29)

where z = x + iy represents the observation point P , while ζ = ξ + iη represents
the source point Q.
This statement can readily be justified. In doing so, we observe that since w = w(z, ζ) performs a conformal mapping of Ω, it is an analytic function of z in Ω, with w(z, ζ) = 0 only for z = ζ, while dw/dz ≠ 0 everywhere in Ω, with z = ζ included. Consequently, z = ζ represents a simple zero of w(z, ζ). That is why one can express the latter in the form

w(z, ζ) = (z − ζ)Φ(z, ζ),    (3.30)

with Φ(z, ζ) representing an analytic function of z in Ω that is nonzero at z = ζ, that is, Φ(ζ, ζ) ≠ 0.
Since an analytic function of an analytic function is also analytic, we have that
the function
ln Φ(z, ζ) = ln |Φ(z, ζ)| + i arg Φ(z, ζ)    (3.31)
is analytic in Ω. Then the real component ln |Φ(z, ζ)| in (3.31) represents a harmonic function in Ω, and so, obviously, is

−(1/2π) ln |Φ(z, ζ)|.

Hence, in light of the relation in (3.30), the function in (3.29) reads as

−(1/2π) ln |w(z, ζ)| = −(1/2π) ln |z − ζ| − (1/2π) ln |Φ(z, ζ)|,
which can be rewritten in terms of the Cartesian coordinates of z and ζ as

−(1/2π) ln |w(z, ζ)| = −(1/2π) ln √((x − ξ)² + (y − η)²) − (1/2π) ln |Φ(z, ζ)|.
The reader can easily figure out that the component

−(1/2π) ln √((x − ξ)² + (y − η)²)

is a harmonic function of x and y almost everywhere in Ω. More specifically, it is harmonic at every point (x, y) ∈ Ω, except at (x, y) = (ξ, η). This implies that the function in (3.29), as a function of x and y, is also harmonic everywhere in Ω, except at (x, y) = (ξ, η).
So, what has already been shown is that the function in (3.29) meets two of the
three defining properties of the Green’s function. Indeed, it is harmonic everywhere
in Ω, except at (x, y) = (ξ, η), and possesses a logarithmic singularity as (x, y) →

(ξ, η). But what about the third defining property? Does the function in (3.29) vanish
on the boundary L of Ω? The answer is yes, because from the fact that the function
w(z, ζ ) maps L onto the circumference of the unit disk, it follows that

|w(z, ζ)| = 1 for z ∈ L,

based on which we have


−(1/2π) ln |w(z, ζ)| = 0 for z ∈ L.

Thus, (3.29) really represents the Green’s function for the Dirichlet problem
stated in (3.27) and (3.28).
In what follows, we present a few examples of the construction of Green’s func-
tions by the method of conformal mapping.

Example 3.13 Let the method of conformal mapping be applied to the Dirichlet
problem stated on the unit disk |z| ≤ 1.
The family of functions w(z, ζ ) that maps the unit disk conformally onto itself,
with a point z = ζ being mapped onto the disk’s center, is defined [5] as
w(z, ζ) = e^{iβ} (z − ζ)/(z ζ̄ − 1),
where β is a real parameter that is responsible for the rotation of the disk about its
center. For the sake of uniqueness, we neglect the rotation by assuming β = 0.
In compliance with (3.29), one arrives at the expression

G(z, ζ) = −(1/2π) ln |(z − ζ)/(z ζ̄ − 1)| = (1/2π) ln |(z ζ̄ − 1)/(z − ζ)|    (3.32)

for the Green’s function that we are looking for.


Expressing the observation point z and the source point ζ in polar coordinates

z = r(cos ϕ + i sin ϕ) and ζ = ϱ(cos ψ + i sin ψ),

we transform the numerator in the argument of the logarithm in (3.32) as

z ζ̄ − 1 = r(cos ϕ + i sin ϕ)ϱ(cos ψ − i sin ψ) − 1
= rϱ[(cos ϕ cos ψ + sin ϕ sin ψ) + i(sin ϕ cos ψ − cos ϕ sin ψ)] − 1
= [rϱ cos(ϕ − ψ) − 1] + irϱ sin(ϕ − ψ).

The modulus of the above appears as



|z ζ̄ − 1| = √([rϱ cos(ϕ − ψ) − 1]² + [rϱ sin(ϕ − ψ)]²)
= √(r²ϱ² − 2rϱ cos(ϕ − ψ) + 1).    (3.33)

The denominator in the argument of the logarithm in (3.32) represents the dis-
tance between z and ζ , which is

|z − ζ| = √(r² − 2rϱ cos(ϕ − ψ) + ϱ²).    (3.34)

Substituting (3.33) and (3.34) into (3.32), we finally obtain the Green’s function
of the Dirichlet problem for the unit disk:
G(r, ϕ; ϱ, ψ) = (1/4π) ln[(r²ϱ² − 2rϱ cos(ϕ − ψ) + 1)/(r² − 2rϱ cos(ϕ − ψ) + ϱ²)].    (3.35)

The reader may compare this representation with the one derived earlier in
Sect. 3.1 (see (3.26), where a is to be set equal to unity).
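The suggested comparison can also be carried out numerically. In the sketch below (ours, with hypothetical names), (3.35) is checked against a direct evaluation of −(1/2π) ln |w(z, ζ)| using the mapping function of this example.

```python
import cmath
import math

def green_unit_disk(r, phi, rho, psi):
    """Green's function (3.35) of the Dirichlet problem for the unit disk."""
    c = math.cos(phi - psi)
    return math.log((r**2*rho**2 - 2*r*rho*c + 1)/(r**2 - 2*r*rho*c + rho**2))/(4*math.pi)

def green_from_map(r, phi, rho, psi):
    """G = -(1/2pi) ln|w(z, zeta)| with the Moebius map w = (z - zeta)/(z*conj(zeta) - 1)."""
    z = cmath.rect(r, phi)
    zeta = cmath.rect(rho, psi)
    w = (z - zeta)/(z*zeta.conjugate() - 1)
    return -math.log(abs(w))/(2*math.pi)
```

Both evaluations coincide, and on the circumference r = 1 the numerator and denominator of (3.35) become equal, so the Green's function vanishes there.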

Example 3.14 The method of conformal mapping will be used here to construct
the Green's function of the Dirichlet problem for the infinite strip Ω = {−∞ < x < ∞, 0 ≤ y ≤ π}.

In a course on complex analysis [5], the reader may have learned that the family
of functions
w(z, ζ) = e^{iβ} (e^z − e^ζ)/(e^z − e^{ζ̄})    (3.36)
maps the infinite strip Ω conformally onto the unit disk |w| ≤ 1, while the point
z = ζ is mapped onto the disk’s center w = 0. For the sake of uniqueness, we assume
β = 0 for the rotation parameter.
Before substituting the mapping function from (3.36) into (3.29), we express the
observation point and the source point in Cartesian coordinates
z = x + iy and ζ = ξ + iη,

and then transform the modulus of the numerator in (3.36) by means of the classical
Euler formula
    
|e^z − e^ζ| = √(Re²(e^z − e^ζ) + Im²(e^z − e^ζ)),
where the real and the imaginary parts read
 
Re(e^z − e^ζ) = e^x cos y − e^ξ cos η

and

Im(e^z − e^ζ) = e^x sin y − e^ξ sin η.
Trivial complex algebra further yields

|e^z − e^ζ| = √(e^{2x} + e^{2ξ} − 2e^{(x+ξ)} cos(y − η))
= e^ξ √(1 − 2e^{(x−ξ)} cos(y − η) + e^{2(x−ξ)}).

The modulus of the denominator in (3.36) transforms similarly into



|e^z − e^{ζ̄}| = e^ξ √(1 − 2e^{(x−ξ)} cos(y + η) + e^{2(x−ξ)}).

This puts the Green’s function we are looking for in the form

G(x, y; ξ, η) = (1/4π) ln[(1 − 2e^{(x−ξ)} cos(y + η) + e^{2(x−ξ)})/(1 − 2e^{(x−ξ)} cos(y − η) + e^{2(x−ξ)})].    (3.37)

An equivalent but more compact form for this Green’s function can be obtained
by multiplying both the numerator and the denominator in (3.37) by e(ξ −x) . This
yields
G(x, y; ξ, η) = (1/4π) ln[(e^{(x−ξ)} + e^{(ξ−x)} − 2 cos(y + η))/(e^{(x−ξ)} + e^{(ξ−x)} − 2 cos(y − η))],
which transforms, by dividing the numerator and the denominator of the argument
of the logarithm by 2, into the equivalent form

G(x, y; ξ, η) = (1/4π) ln[(cosh(x − ξ) − cos(y + η))/(cosh(x − ξ) − cos(y − η))].    (3.38)
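As a quick sanity check (our own sketch, not part of the book), (3.38) can be compared with (3.37), and the boundary behavior can be tested: the Green's function must vanish on both edges y = 0 and y = π of the strip.

```python
import math

def green_strip(x, y, xi, eta):
    """Green's function (3.38) of the Dirichlet problem for the strip 0 < y < pi."""
    return math.log((math.cosh(x - xi) - math.cos(y + eta)) /
                    (math.cosh(x - xi) - math.cos(y - eta)))/(4*math.pi)

def green_strip_exp(x, y, xi, eta):
    """The exponential form (3.37), before the reduction to cosh."""
    t = x - xi
    num = 1 - 2*math.exp(t)*math.cos(y + eta) + math.exp(2*t)
    den = 1 - 2*math.exp(t)*math.cos(y - eta) + math.exp(2*t)
    return math.log(num/den)/(4*math.pi)
```

At y = 0 one has cos(y + η) = cos(y − η), and at y = π both cosines equal −cos η, so the logarithm vanishes on both edges.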

Example 3.15 The half-plane Ω = {−∞ < x < ∞, y ≥ 0} maps conformally onto
the unit disk (with the point z = ζ mapped onto the disk’s center, w = 0) by a family
of functions, one of which is [5]
w(z, ζ) = (z − ζ)/(z − ζ̄).

Thus, the Green’s function of the Dirichlet problem for the Laplace equation on
the half-plane Ω is given by

G(z, ζ) = −(1/2π) ln |(z − ζ)/(z − ζ̄)|,

which reads in Cartesian coordinates as

G(x, y; ξ, η) = (1/4π) ln[((x − ξ)² + (y + η)²)/((x − ξ)² + (y − η)²)],    (3.39)

while in polar coordinates it is

G(r, ϕ; ϱ, ψ) = (1/4π) ln[(r² − 2rϱ cos(ϕ + ψ) + ϱ²)/(r² − 2rϱ cos(ϕ − ψ) + ϱ²)].    (3.40)

Recall that the representations of the Green’s function for the half-plane shown
in (3.39) and (3.40) were already obtained in Sect. 3.1 by the method of images (see
Examples 3.1 and 3.9).
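The agreement of the Cartesian and polar forms is again easy to verify numerically (sketch and names are ours):

```python
import math

def green_half_plane_cartesian(x, y, xi, eta):
    """Green's function (3.39) for the half-plane y > 0."""
    return math.log(((x - xi)**2 + (y + eta)**2) /
                    ((x - xi)**2 + (y - eta)**2))/(4*math.pi)

def green_half_plane_polar(r, phi, rho, psi):
    """The same Green's function in the polar form (3.40)."""
    num = r**2 - 2*r*rho*math.cos(phi + psi) + rho**2
    den = r**2 - 2*r*rho*math.cos(phi - psi) + rho**2
    return math.log(num/den)/(4*math.pi)

# convert the polar points to Cartesian coordinates and compare the two forms
r, phi, rho, psi = 1.3, 0.8, 0.6, 2.1
g1 = green_half_plane_polar(r, phi, rho, psi)
g2 = green_half_plane_cartesian(r*math.cos(phi), r*math.sin(phi),
                                rho*math.cos(psi), rho*math.sin(psi))
```

The identity (x − ξ)² + (y ∓ η)² = r² − 2rϱ cos(ϕ ∓ ψ) + ϱ² makes the two values coincide, and the function vanishes on the boundary y = 0.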

Note that in each of the problems reviewed so far in this section, the regions under
consideration are mapped conformally onto the unit disk by an elementary function.
The problem that we will face in Example 3.16 below represents a challenge. The
point is that it aims at the Green’s function of the Dirichlet problem stated on a
rectangle. But the rectangle cannot [5], unfortunately, be mapped conformally onto
the interior of a disk by an elementary function.

Example 3.16 Construct the Green’s function of the Dirichlet problem stated for the
two-dimensional Laplace equation on the rectangle Ω = {0 ≤ x ≤ a, 0 ≤ y ≤ b}.

From [5] the reader will have learned that the rectangle maps onto the unit disk
(with a point z = ζ mapped onto the disk’s center w = 0) conformally by the func-
tion
w(z, ζ) = [W(z − ζ̄; ω₁, ω₂) · W(z + ζ; ω₁, ω₂)]/[W(z − ζ; ω₁, ω₂) · W(z + ζ̄; ω₁, ω₂)],
defined in terms of the special function W (t; ω1 , ω2 ), which is called the Weierstrass
elliptic function. The parameters ω1 and ω2 are determined through the dimensions
of the rectangle as ω1 = 2a and ω2 = 2ib. To compute the Weierstrass function prac-
tically, the reader might go with its series representation given in [9], for example,
as
W(t; ω₁, ω₂) = 1/t² + Σ_{m,n=0}^{∞} [1/(t − 2mω₁ − 2nω₂)² − 1/(2mω₁ + 2nω₂)²],

where in the summation, we assume that the indices m and n are not equal to zero
simultaneously.
Thus, the Green's function of the Dirichlet problem for the rectangle Ω is expressed in terms of the Weierstrass function as

G(z, ζ) = (1/2π) ln |[W(z − ζ; 2a, 2ib) · W(z + ζ̄; 2a, 2ib)]/[W(z − ζ̄; 2a, 2ib) · W(z + ζ; 2a, 2ib)]|.    (3.41)
In Chap. 5, we will revisit the construction of the Green’s function of the Dirichlet
problem for a rectangle. An alternative form for that in (3.41) will be derived using
the method of eigenfunction expansion.

3.3 Chapter Exercises

3.1 Derive the Green’s function presented in (3.13) for the Dirichlet problem stated
for the Laplace equation on the infinite wedge of π/4.

3.2 Derive the Green’s function presented in (3.15) for the Dirichlet problem stated
for the Laplace equation on the infinite wedge of π/6.

3.3 Show that the method of images fails in the construction of the Green’s function
of the Dirichlet problem on the infinite wedge of 2π/5.

3.4 Show that the method of images fails in the construction of the Green’s function
of the Dirichlet problem on the infinite wedge of 2π/7.

3.5 Prove that the method of images is efficient for the construction of the Green’s
function of the Dirichlet problem stated on the infinite wedge of π/k, where k is an
integer.
Chapter 4
Green’s Functions for ODE

As was convincingly shown in Chap. 3, the methods of images and conformal map-
ping are helpful in obtaining Green’s functions for the two-dimensional Laplace
equation. But it is worth noting, at the same time, that the number of problems for which these methods are productive is notably limited. To support this assertion, recall that mixed boundary-value problems with Robin conditions imposed on a piece
of the boundary are not within the reach of these methods.
An alternative approach to the construction of Green's functions for many partial differential equations is the method of eigenfunction expansion. We do not, however, immediately proceed with its coverage, postponing it to Chap. 5, where its potential will be explored in full detail. The reason is methodological: certain preparatory work will help before turning to this method. With that in mind, we change topics by shifting from partial to ordinary differential equations.
Our objective is to assist the reader with an easier grasp of the material of Chap. 5,
where intensive work will be resumed on the Laplace equation. But before going
any further with this, the topic of Green’s functions for linear ordinary differential
equations will be explored here in some detail. A consistent use of the experience
gained in the current chapter will prove critical for later work on the method of
eigenfunction expansion.

4.1 Construction by Defining Properties


In contrast to partial differential equations, for which the construction of Green’s
functions represents in most cases a challenge, the case of linear ordinary differential
equations is to a large extent a routine exercise. It is included and discussed in nearly
every undergraduate textbook in the field. A standard procedure is based on defining
properties of Green’s functions.
Since the discussion in Chap. 5 is limited to second-order differential equations,
our presentation here will be related to the homogeneous equation
L[y(x)] ≡ p₀(x) d²y(x)/dx² + p₁(x) dy(x)/dx + p₂(x)y(x) = 0,    (4.1)


subject to the homogeneous boundary conditions


Mᵢ[y(a), y(b)] ≡ Σ_{k=1}^{2} [α_{i,k−1} d^{k−1}y(a)/dx^{k−1} + β_{i,k−1} d^{k−1}y(b)/dx^{k−1}] = 0, i = 1, 2,    (4.2)
where the coefficients pⱼ(x) in (4.1) are continuous functions on (a, b), with leading coefficient p₀(x) ≠ 0, while Mᵢ[y(a), y(b)] in (4.2) represent linearly independent forms with constant coefficients α_{i,k−1} and β_{i,k−1}. It is assumed that, for every fixed subscript i, at least one of the coefficients α_{i,k−1} and β_{i,k−1} is nonzero. This keeps the total number of boundary conditions in (4.2) at exactly two.
From a course on differential equations [5, 10], one learns that if the boundary-
value problem stated in (4.1) and (4.2) is well posed (if, in other words, the problem
has only the trivial solution y(x) ≡ 0), then it has a unique Green’s function.
We call g(x, s) the Green’s function for the boundary-value problem stated
in (4.1) and (4.2) if as a function of its first variable x, it meets the following defining
properties for every s ∈ (a, b):
1. On both intervals [a, s) and (s, b], g(x, s) is a continuous function having con-
tinuous derivatives up to second order and satisfies the homogeneous equation
in (4.1) on (a, s) and (s, b), i.e.,
   
L[g(x, s)] = 0, x ∈ (a, s); L[g(x, s)] = 0, x ∈ (s, b).

2. For x = s, g(x, s) is continuous,

lim_{x→s⁺} g(x, s) − lim_{x→s⁻} g(x, s) = 0.

3. The first-order derivative of g(x, s) is discontinuous when x = s, provided that

lim_{x→s⁺} ∂g(x, s)/∂x − lim_{x→s⁻} ∂g(x, s)/∂x = −1/p₀(s),

where p0 (s) represents the leading coefficient in (4.1).


4. g(x, s) satisfies the boundary conditions in (4.2), i.e.,
 
Mᵢ[g(a, s), g(b, s)] = 0 (i = 1, 2).

Two standard approaches to the construction of Green’s functions for linear ordi-
nary differential equations are traditionally recommended [5, 8]. The first of them is
based, as we already mentioned, on the defining properties just listed and represents,
in fact, a constructive proof of the existence and uniqueness theorem for the given
statement. The idea of the second approach is different. It is rooted in Lagrange’s
method of variation of parameters, which is usually used for finding particular solu-
tions to inhomogeneous linear equations.
To trace out the procedure of the approach based on the defining properties, let
functions y1 (x) and y2 (x) constitute a fundamental set of solutions for the equation

in (4.1). That is, y1 (x) and y2 (x) are particular solutions of the equation that are
linearly independent on (a, b).
In compliance with property 1 of the definition, for any arbitrarily fixed value of
s ∈ (a, b), the Green’s function g(x, s) must be a solution of the equation in (4.1)
in (a, s) (on the left of s), as well as in (s, b) (on the right of s). Since any solution
of (4.1) can be expressed as a linear combination of y1 (x) and y2 (x), one may write
g(x, s) in the following form:


g(x, s) = { Σ_{j=1}^{2} yⱼ(x)Aⱼ(s), for a ≤ x ≤ s,
          { Σ_{j=1}^{2} yⱼ(x)Bⱼ(s), for s ≤ x ≤ b,    (4.3)

where Aj (s) and Bj (s) (j = 1, 2) represent functions to be determined. Clearly,


the total number of these functions is four, and to complete the construction, we are
required to find them. To succeed with such an endeavor, an appropriate engine must
be created. Reviewing available resources for that, we go to the remaining defining
properties of the Green’s function. A close analysis shows that one linear equation
can be obtained for Aj (s) and Bj (s) with the aid of property 2, a single linear
equation coming from property 3, and another two linear equations from property 4.
From the strategy just sketched, it follows that a system of four linear algebraic
equations can be obtained in the four functions Aj (s) and Bj (s) (j = 1, 2). The
question that remains unanswered, however, is whether this system is well posed.
To answer this question, one must take a close look at the coefficient matrix of
the system to find out whether it is nonsingular. This requires an analysis of the
proposed strategy in full detail.
First, it is evident that by virtue of property 2, which stipulates the continuity of
g(x, s) at x = s, one derives the linear algebraic equation


Σ_{j=1}^{2} Cⱼ(s)yⱼ(s) = 0    (4.4)

in the two unknown functions

Cj (s) = Bj (s) − Aj (s) (j = 1, 2). (4.5)

Another linear equation in C1 (s) and C2 (s) can be derived by turning to prop-
erty 3. This yields the equation


Σ_{j=1}^{2} Cⱼ(s) dyⱼ(s)/dx = −1/p₀(s).    (4.6)

Hence, the relation in (4.4) along with that in (4.6) forms a system of two si-
multaneous linear algebraic equations in C1 (s) and C2 (s). The determinant of the
coefficient matrix in this system is not zero, because it represents the Wronskian for
the fundamental set of solutions {yj (x), j = 1, 2}.

Thus, the system in (4.4) and (4.6) has a unique solution. In other words, one can
readily obtain explicit expressions for C1 (s) and C2 (s). This implies that, in view
of (4.5), two linear relations are already available for the four functions Aj (s) and
Bj (s). In order to obtain them, we take advantage of property 4. In doing so, let us
first break down the forms Mi (y(a), y(b)) in (4.2) into two additive components as
     
Mᵢ[y(a), y(b)] = Sᵢ[y(a)] + Tᵢ[y(b)] (i = 1, 2),

with the forms Sᵢ[y(a)] and Tᵢ[y(b)] being defined as

Sᵢ[y(a)] = Σ_{k=1}^{2} α_{i,k−1} y^{(k−1)}(a)

and
Tᵢ[y(b)] = Σ_{k=1}^{2} β_{i,k−1} y^{(k−1)}(b).
In compliance with property 4, we substitute the expression for g(x, s) from (4.3)
into (4.2), and we obtain
     
Mᵢ[g(a, s), g(b, s)] ≡ Sᵢ[g(a, s)] + Tᵢ[g(b, s)] = 0 (i = 1, 2).    (4.7)

Since the operator Si in (4.7) governs the values of g(a, s) at the left endpoint
x = a of the interval [a, b], while the operator Ti governs the values of g(b, s) at the
right endpoint x = b, the upper branch


Σ_{j=1}^{2} yⱼ(x)Aⱼ(s)

of g(x, s) from (4.3) goes into Sᵢ[g(a, s)], while the lower branch

Σ_{j=1}^{2} yⱼ(x)Bⱼ(s)

of g(x, s) must be substituted into Tᵢ[g(b, s)], resulting in

Σ_{j=1}^{2} (Sᵢ[yⱼ(a)]Aⱼ(s) + Tᵢ[yⱼ(b)]Bⱼ(s)) = 0 (i = 1, 2).

Replacing the expressions for Aⱼ(s) in the above system with the differences Bⱼ(s) − Cⱼ(s) in accordance with (4.5), one rewrites the system in the form

Σ_{j=1}^{2} (Sᵢ[yⱼ(a)](Bⱼ(s) − Cⱼ(s)) + Tᵢ[yⱼ(b)]Bⱼ(s)) = 0 (i = 1, 2).
j =1

Combining the terms with Bj (s) and moving the term with Cj (s) to the right-
hand side, we obtain


Σ_{j=1}^{2} (Sᵢ[yⱼ(a)] + Tᵢ[yⱼ(b)])Bⱼ(s) = Σ_{j=1}^{2} Sᵢ[yⱼ(a)]Cⱼ(s) (i = 1, 2).

Upon recalling the relation from (4.7), the above equations can finally be rewrit-
ten in the form

Σ_{j=1}^{2} Mᵢ[yⱼ(a), yⱼ(b)]Bⱼ(s) = Σ_{j=1}^{2} Sᵢ[yⱼ(a)]Cⱼ(s) (i = 1, 2).    (4.8)

Thus, the relations in (4.8) constitute a system of two linear algebraic equations
in Bj (s). The coefficient matrix of this system is nonsingular, because the forms Mi
are linearly independent. The right-hand-side vector in (4.8) is defined in terms of
the values of Cj (s), which have already been found. The system has, consequently,
a unique solution for B1 (s) and B2 (s). So, once these are available, unique expres-
sions for Aj (s) can readily be obtained from (4.5).
Hence, upon substituting the expressions obtained for Aj (s) and Bj (s) into (4.3),
we obtain an explicit representation for the Green’s function that we are looking for.
In what follows, a series of examples is presented, where a number of different
boundary-value problems are considered, illustrating the described approach to the
construction of Green’s functions in detail.
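The four-step strategy translates directly into a short symbolic-computation routine. The following SymPy sketch is entirely our own (the book provides no code; all names are hypothetical): it assembles the continuity, jump, and boundary equations and solves them for Aⱼ(s) and Bⱼ(s), illustrated here on the setting of Example 4.1 below.

```python
import sympy as sp

def green_function(y1, y2, p0, bc1, bc2, x, s):
    """Construct g(x, s) for L[y] = 0 from the four defining properties.

    y1, y2   -- a fundamental set of solutions (expressions in x)
    p0       -- leading coefficient of the equation (expression in x)
    bc1, bc2 -- boundary functionals: each maps the (upper, lower) branches
                to an expression that must vanish
    Returns the pair (upper, lower): g for x <= s and for s <= x.
    """
    A1, A2, B1, B2 = sp.symbols('A1 A2 B1 B2')
    upper = A1*y1 + A2*y2                  # branch on a <= x <= s
    lower = B1*y1 + B2*y2                  # branch on s <= x <= b
    eqs = [
        (lower - upper).subs(x, s),                              # property 2: continuity
        sp.diff(lower - upper, x).subs(x, s) + 1/p0.subs(x, s),  # property 3: unit jump
        bc1(upper, lower),                                       # property 4
        bc2(upper, lower),
    ]
    sol = sp.solve(eqs, [A1, A2, B1, B2], dict=True)[0]
    return sp.simplify(upper.subs(sol)), sp.simplify(lower.subs(sol))

# the setting of Example 4.1: y'' = 0 on (0, a), y'(0) = 0, y'(a) + h*y(a) = 0
x, s, a, h = sp.symbols('x s a h', positive=True)
up, lo = green_function(sp.Integer(1), x, sp.Integer(1),
                        lambda u, l: sp.diff(u, x).subs(x, 0),
                        lambda u, l: sp.diff(l, x).subs(x, a) + h*l.subs(x, a),
                        x, s)
```

The routine recovers the closed-form branches of (4.12) automatically, which mirrors the hand computation carried out in Example 4.1.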

Example 4.1 We start with a simple boundary-value problem in which the differen-
tial equation
d²y(x)/dx² = 0, x ∈ (0, a),    (4.9)
is subject to the boundary conditions
dy(0)/dx = 0, dy(a)/dx + hy(a) = 0,    (4.10)
dx dx
with h representing a nonzero constant.
Before going any further with the construction procedure, we must make sure
that the unique Green’s function to the problem in (4.9) and (4.10) really does exist.
That is, we are required to check whether the problem has only the trivial solution.
The most elementary set of functions constituting a fundamental set of solutions
for the equation in (4.9) is represented by

y1 (x) ≡ 1 and y2 (x) ≡ x.

Therefore, the general solution yg (x) for (4.9) can be written as a linear combination
of y1 (x) and y2 (x),
yg (x) = D1 + D2 x,

where D1 and D2 represent arbitrary constants. Substitution of yg (x) into the


boundary conditions of (4.10) yields the homogeneous system of linear algebraic
equations in D1 and D2

D2 = 0,
hD1 + (1 + ah)D2 = 0.

It is evident that the only solution for the system is D1 = D2 = 0. This implies
that the problem in (4.9) and (4.10) is well posed. There thus exists a unique Green’s
function g(x, s). And according to the defining property 1, one can look for it in the
form
g(x, s) = { A₁(s) + xA₂(s), for 0 ≤ x ≤ s,
          { B₁(s) + xB₂(s), for s ≤ x ≤ a.    (4.11)
Introducing then, as suggested in (4.5), C1 (s) = B1 (s) − A1 (s) and C2 (s) =
B2 (s) − A2 (s), we form a system of linear algebraic equations in these unknowns
written as
C1 (s) + sC2 (s) = 0,
C2 (s) = −1,
whose unique solution is C1 (s) = s and C2 (s) = −1.
The first boundary condition in (4.10), being satisfied with the upper branch of
g(x, s), results in A2 (s) = 0. Recall that the upper branch is chosen because x = 0
belongs to the domain 0 ≤ x ≤ s. Since B2 (s) = C2 (s) + A2 (s), we conclude that
B2 (s) = −1.
The second boundary condition in (4.10), being treated with the lower branch of
g(x, s), yields
 
B₂(s) + h[B₁(s) + aB₂(s)] = 0,
resulting in B1 (s) = (1 + ah)/ h. And finally, since A1 (s) = B1 (s) − C1 (s), we find
that
 
A₁(s) = [1 + h(a − s)]/h.
Substituting these into (4.11), we ultimately obtain the Green’s function

g(x, s) = (1/h) { 1 + h(a − s), for 0 ≤ x ≤ s,
                { 1 + h(a − x), for s ≤ x ≤ a,    (4.12)

that we are looking for.


Take a look again at the problem setting in (4.9) and (4.10). If h = 0, then the
boundary conditions in (4.10) are

dy(0)/dx = 0, dy(a)/dx = 0.    (4.13)

It is evident that the boundary-value problem in (4.9) and (4.13) has no Green’s
function, because it allows infinitely many solutions (any function y(x) = const
represents a solution) and is therefore ill posed. This conclusion is also justified by
the form of the Green’s function in (4.12). Indeed, if h = 0, then g(x, s) in (4.12) is
undefined.
On the other hand, if h → ∞, then the boundary conditions in (4.10) transform
into
dy(0)/dx = 0, y(a) = 0,    (4.14)
and the Green’s function of the problem in (4.9) and (4.14) can be obtained from
that in (4.12) by taking a limit as h → ∞, resulting in
g(x, s) = { a − s, for 0 ≤ x ≤ s,
          { a − x, for s ≤ x ≤ a.    (4.15)
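The defining properties of the Green's function just constructed are easy to verify directly. In the small sketch below (ours), note that p₀(x) ≡ 1, so the x-derivative must drop by exactly one unit across x = s.

```python
def g(x, s, a=2.0, h=3.0):
    """Green's function (4.12) of the problem y'' = 0, y'(0) = 0, y'(a) + h*y(a) = 0."""
    return (1 + h*(a - s))/h if x <= s else (1 + h*(a - x))/h

def gx(x, s):
    """x-derivative of g on each smooth branch (the branches are linear in x)."""
    return 0.0 if x < s else -1.0

# the jump of the derivative across x = s equals -1/p0(s) = -1
jump = gx(0.8 + 1e-9, 0.8) - gx(0.8 - 1e-9, 0.8)
```

Both branches satisfy the trivial equation y″ = 0, g is continuous at x = s, gₓ(0, s) = 0, and gₓ(a, s) + h·g(a, s) = −1 + h·(1/h) = 0, so all four defining properties hold.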

Example 4.2 Let us construct the Green’s function for the boundary-value problem
stated by the differential equation

d²y(x)/dx² − k²y(x) = 0, x ∈ (0, ∞),    (4.16)
subject to boundary conditions imposed as

y(0) = 0, |y(∞)| < ∞. (4.17)

It can readily be shown that the conditions of existence and uniqueness for the
Green’s function are met in this case. Indeed, since a fundamental set of solutions
for the equation in (4.16) can be written as

y₁(x) ≡ e^{kx}, y₂(x) ≡ e^{−kx},

its general solution is


y_g(x) = D₁e^{kx} + D₂e^{−kx}.
The first condition in (4.17) implies D1 + D2 = 0, while the second condition
in (4.17) requires D1 = 0, resulting in D2 = 0. This ensures, in fact, existence of
a unique Green’s function of the formulation in (4.16) and (4.17), which can be
expressed in the following form:

g(x, s) = { A₁(s)e^{kx} + A₂(s)e^{−kx}, for x ≤ s,
          { B₁(s)e^{kx} + B₂(s)e^{−kx}, for s ≤ x.    (4.18)
Defining Ci (s) = Bi (s) − Ai (s) (i = 1, 2), one obtains the following well-posed
system of linear algebraic equations:

e^{ks}C₁(s) + e^{−ks}C₂(s) = 0,
ke^{ks}C₁(s) − ke^{−ks}C₂(s) = −1,

in C1 (s) and C2 (s). Its solution is

C₁(s) = −(1/2k)e^{−ks}, C₂(s) = (1/2k)e^{ks}.
The first condition in (4.17) implies

A1 (s) + A2 (s) = 0, (4.19)

while the second condition results in B1 (s) = 0, because the exponential function
ekx is unbounded as x approaches infinity. And the only way to satisfy the second
condition in (4.17) is to set B1 (s) equal to zero. This immediately yields

A₁(s) = (1/2k)e^{−ks},
and the relation in (4.19) consequently provides

A₂(s) = −(1/2k)e^{−ks}.
Hence, based on the known values of C2 (s) and A2 (s), one obtains

B₂(s) = (1/2k)(e^{ks} − e^{−ks}).
Upon substituting the values of the coefficients Aj (s) and Bj (s) just found
into (4.18), one finally obtains the Green’s function

g(x, s) = (1/2k) { e^{k(x−s)} − e^{−k(x+s)}, for x ≤ s,
                 { e^{k(s−x)} − e^{−k(x+s)}, for s ≤ x,    (4.20)

to the problem posed by (4.16) and (4.17). It is evident that we can rewrite it in the compact form

g(x, s) = (1/2k)(e^{−k|x−s|} − e^{−k(x+s)}), for 0 ≤ x, s < ∞.
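The compact form lends itself to a direct numerical check (sketch and names are ours): it vanishes at x = 0, satisfies y″ − k²y = 0 away from s (tested here with a central second difference), and its x-derivative drops by one unit across x = s.

```python
import math

def g(x, s, k=1.5):
    """Compact form of (4.20): Green's function of y'' - k^2*y = 0 on (0, infinity),
    with y(0) = 0 and y bounded at infinity."""
    return (math.exp(-k*abs(x - s)) - math.exp(-k*(x + s)))/(2*k)

# residual of the differential equation at a point away from s,
# estimated with a central second difference
k, s, h = 1.5, 1.2, 1e-5
x = 2.0
residual = (g(x + h, s) - 2*g(x, s) + g(x - h, s))/h**2 - k**2*g(x, s)
```

The residual is of the order of the finite-difference truncation error, confirming that the compact form solves the homogeneous equation on both sides of s.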
2k

Example 4.3 Consider a boundary-value problem for the equation in (4.16) but
stated over a different domain,

d²y(x)/dx² − k²y(x) = 0, x ∈ (0, a),    (4.21)
and subject to boundary conditions written as

y(0) = y(a), dy(0)/dx = dy(a)/dx.    (4.22)

This boundary-value problem represents an important type of formulation in the applied sciences. The relations in (4.22) specify conditions of a-periodicity of the solution to be found.
the solution to be found.
Using the experience accumulated so far, the reader can easily show that the
above problem has only the trivial solution, providing existence of a unique Green’s
function for it.
Clearly, the beginning stage of the construction procedure can be reiterated from
Example 4.2. We again express the Green’s function as in (4.18), and recall the
coefficients C1 (s) and C2 (s):
C₁(s) = −(1/2k)e^{−ks}, C₂(s) = (1/2k)e^{ks}.    (4.23)
2k 2k
Satisfying the first condition in (4.22), we utilize the upper branch of the Green’s
function from (4.18) in order to compute the value of y(0), while its lower branch
is used for computing the value of y(a). This results in

A₁(s) + A₂(s) = B₁(s)e^{ka} + B₂(s)e^{−ka}.    (4.24)

Upon satisfying the second condition in (4.22), we compute the derivative of


y(x) at x = 0 using the upper branch from (4.18), while the value of the derivative
of y(x) at x = a is computed using the lower branch. This yields

A₁(s) − A₂(s) = B₁(s)e^{ka} − B₂(s)e^{−ka}.    (4.25)

So the relations in (4.24) and (4.25), along with those in (4.23), form a system of
four linear algebraic equations in A1(s), A2(s), B1(s), and B2(s). To find the values
of A1(s) and B1(s), we add (4.24) and (4.25). This provides us with

$$A_1(s)-B_1(s)e^{ka}=0, \tag{4.26}$$

while the first relation in (4.23) can be rewritten in the form

$$-A_1(s)+B_1(s)=-\frac{1}{2k}e^{-ks}. \tag{4.27}$$

Solving equations (4.26) and (4.27) simultaneously, we obtain

$$A_1(s)=\frac{e^{k(a-s)}}{2k(e^{ka}-1)},\qquad B_1(s)=\frac{e^{-ks}}{2k(e^{ka}-1)}.$$
To find the values of A2(s) and B2(s), we subtract (4.25) from (4.24). This results
in

$$A_2(s)-B_2(s)e^{-ka}=0. \tag{4.28}$$

Rewriting the second relation from (4.23) in the form

$$-A_2(s)+B_2(s)=\frac{1}{2k}e^{ks}, \tag{4.29}$$
70 4 Green’s Functions for ODE

we solve equations (4.28) and (4.29) simultaneously. This yields

$$A_2(s)=\frac{e^{ks}}{2k(e^{ka}-1)},\qquad B_2(s)=\frac{e^{k(a+s)}}{2k(e^{ka}-1)}.$$

Substituting the values of A1(s), A2(s), B1(s), and B2(s) just found into (4.18),
we finally obtain the compact form

$$g(x,s)=\frac{e^{-k(|x-s|-a)}+e^{k|x-s|}}{2k(e^{ka}-1)},\quad \text{for } 0\le x,s\le a, \tag{4.30}$$

of the Green’s function to the boundary-value problem posed by (4.21) and (4.22).
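The periodicity conditions (4.22) and the unit jump of the x-derivative of (4.30) can be confirmed by a short numerical sketch (illustrative values k = 0.9, a = 2, s = 0.6, chosen arbitrarily):

```python
import math

def g(x, s, k, a):
    # Green's function (4.30) of the a-periodic problem (4.21)-(4.22)
    d = 2 * k * (math.exp(k * a) - 1)
    return (math.exp(-k * (abs(x - s) - a)) + math.exp(k * abs(x - s))) / d

k, a, s, h = 0.9, 2.0, 0.6, 1e-6

assert abs(g(0.0, s, k, a) - g(a, s, k, a)) < 1e-12   # y(0) = y(a)

d0 = (g(h, s, k, a) - g(0.0, s, k, a)) / h            # y'(0)
da = (g(a, s, k, a) - g(a - h, s, k, a)) / h          # y'(a)
assert abs(d0 - da) < 1e-4                            # y'(0) = y'(a)

d_plus = (g(s + 2 * h, s, k, a) - g(s + h, s, k, a)) / h
d_minus = (g(s - h, s, k, a) - g(s - 2 * h, s, k, a)) / h
assert abs((d_plus - d_minus) + 1.0) < 1e-4           # unit jump at x = s
```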

Example 4.4 For another example, let the second-order equation with variable
coefficients

$$\frac{d}{dx}\Bigl[(mx+b)\frac{dy}{dx}\Bigr]=0,\quad x\in(0,a), \tag{4.31}$$

be subject to the boundary conditions

$$\frac{dy(0)}{dx}=0,\qquad y(a)=0, \tag{4.32}$$

where we assume that m > 0 and b > 0, which implies that mx + b ≠ 0 on the
interval [0, a].
The fundamental set of solutions

$$y_1(x)\equiv 1,\qquad y_2(x)\equiv \ln(mx+b), \tag{4.33}$$

required for the construction of the Green's function for the problem in (4.31)
and (4.32) can be obtained by two successive integrations of the governing equa-
tion. Indeed, the first integration yields

$$(mx+b)\frac{dy}{dx}=C_1.$$

Dividing the above equation through by mx + b and multiplying by dx, we
separate variables,

$$dy=C_1\,\frac{dx}{mx+b},$$

and finally obtain the general solution of the equation in (4.31) in the form

$$y(x)=\frac{C_1}{m}\ln(mx+b)+C_2,$$
which implies that the functions in (4.33) indeed constitute a fundamental set of
solutions for (4.31).

It can be easily shown that the problem in (4.31) and (4.32) has only the trivial
solution. Hence, there exists a unique Green's function, which can be represented in
the form

$$g(x,s)=\begin{cases} A_1(s)+\ln(mx+b)\,A_2(s), & \text{for } 0\le x\le s,\\ B_1(s)+\ln(mx+b)\,B_2(s), & \text{for } s\le x\le a. \end{cases} \tag{4.34}$$

Tracing out our construction procedure, we obtain the system of linear algebraic
equations

$$C_1(s)+\ln(ms+b)\,C_2(s)=0,\qquad mC_2(s)=-1,$$

in C_j(s) = B_j(s) − A_j(s) (j = 1, 2). Its solution is

$$C_1(s)=\frac{1}{m}\ln(ms+b),\qquad C_2(s)=-\frac{1}{m}. \tag{4.35}$$
The first boundary condition in (4.32) yields A_2(s) = 0. Consequently, we have
B_2(s) = −1/m. The second condition in (4.32) gives

$$B_1(s)+\ln(ma+b)\,B_2(s)=0,$$

resulting in B_1(s) = [\ln(ma+b)]/m, which provides us with

$$A_1(s)=\frac{1}{m}\ln\frac{ma+b}{ms+b}.$$

Substituting the values of A_j(s) and B_j(s) just found into (4.34), we obtain the
Green's function that we are looking for in the form

$$g(x,s)=\frac{1}{m}\begin{cases} \ln\dfrac{ma+b}{ms+b}, & \text{for } 0\le x\le s,\\[8pt] \ln\dfrac{ma+b}{mx+b}, & \text{for } s\le x\le a. \end{cases} \tag{4.36}$$
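The kernel (4.36) can be checked against its defining properties; in the sketch below the values m = 2, b = 1.5, a = 3, s = 1.2 are illustrative only. Note that for the self-adjoint operator in (4.31), the jump condition reads (ms + b)[g′] = −1:

```python
import math

def g(x, s, m, b, a):
    # Green's function (4.36) of the problem (4.31)-(4.32)
    if x <= s:
        return math.log((m * a + b) / (m * s + b)) / m
    return math.log((m * a + b) / (m * x + b)) / m

m, b, a, s, eps = 2.0, 1.5, 3.0, 1.2, 1e-6

assert abs(g(a, s, m, b, a)) < 1e-12                          # y(a) = 0
assert abs(g(eps, s, m, b, a) - g(0.0, s, m, b, a)) < 1e-9    # y'(0) = 0

# weighted jump of the derivative at x = s: (ms + b) * [g'] = -1
d_plus = (g(s + 2 * eps, s, m, b, a) - g(s + eps, s, m, b, a)) / eps
d_minus = 0.0   # the upper branch does not depend on x
assert abs((m * s + b) * (d_plus - d_minus) + 1.0) < 1e-4
```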
Sometimes in the applied sciences, we face boundary-value problems on finite
intervals, where one of the endpoints is singular for the governing differential equa-
tion. Our algorithm can be successfully used for constructing Green's functions of
such problems as well. As an illustration of this point, we offer the following
example.

Example 4.5 Construct the Green's function of the boundary-value problem for the
differential equation

$$\frac{d}{dx}\Bigl[x\,\frac{dy(x)}{dx}\Bigr]=0,\quad x\in(0,a), \tag{4.37}$$

subject to boundary conditions written as

$$|y(0)|<\infty,\qquad \frac{dy(a)}{dx}+hy(a)=0. \tag{4.38}$$

Note that the boundedness condition at x = 0 is written in a shorthand form. It is


understood in the sense that the limit of y(x) as x approaches zero is bounded.
The left endpoint x = 0 of the domain is a point of singularity for the governing
equation. Therefore, instead of formulating a traditional boundary condition at this
point, we require in (4.38) for y(0) to be bounded.
Clearly, a fundamental set of solutions of the equation in (4.37) can be written as

$$y_1(x)\equiv 1,\qquad y_2(x)\equiv \ln x.$$

The problem in (4.37) and (4.38) is well posed (has only the trivial solution),
allowing a unique Green's function in the form

$$g(x,s)=\begin{cases} A_1(s)+\ln x\,A_2(s), & \text{for } 0\le x\le s,\\ B_1(s)+\ln x\,B_2(s), & \text{for } s\le x\le a. \end{cases} \tag{4.39}$$

In compliance with our procedure, we form a system of linear algebraic equations

$$C_1(s)+\ln s\,C_2(s)=0,\qquad s^{-1}C_2(s)=-s^{-1},$$

whose solution is C_1(s) = \ln s and C_2(s) = −1.


The boundedness of the Green's function at x = 0 implies A_2(s) = 0. Consequently,
B_2(s) = −1, while the second condition in (4.38) yields

$$B_2(s)/a+h\bigl[B_1(s)+\ln a\,B_2(s)\bigr]=0.$$

Hence, B_1(s) = 1/(ah) + \ln a, and ultimately, A_1(s) = 1/(ah) − \ln(s/a). Thus,
substituting the values of A_j(s) and B_j(s) just found into (4.39), we obtain the
Green's function that we are looking for in the form

$$g(x,s)=\frac{1}{ah}-\begin{cases} \ln(s/a), & \text{for } 0\le x\le s,\\ \ln(x/a), & \text{for } s\le x\le a. \end{cases} \tag{4.40}$$

It is clearly seen that if the parameter h is equal to zero, then the Green’s func-
tion in (4.40) is undefined. This agrees with the setting in (4.37) and (4.38), which
becomes ill posed if h = 0.
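For h ≠ 0, the defining properties of (4.40) can again be confirmed numerically. In the sketch below the parameter h of (4.38) is renamed `hp` to keep the step size separate; the values a = 2, h = 0.7, s = 0.8 are illustrative only. For the singular operator in (4.37), the jump condition is the weighted one, s[g′] = −1:

```python
import math

def g(x, s, a, hp):
    # Green's function (4.40); hp plays the role of the parameter h in (4.38)
    return 1.0 / (a * hp) - (math.log(s / a) if x <= s else math.log(x / a))

a, hp, s, eps = 2.0, 0.7, 0.8, 1e-6

# mixed condition at x = a: g'(a) + h g(a) = 0
d_a = (g(a, s, a, hp) - g(a - eps, s, a, hp)) / eps
assert abs(d_a + hp * g(a, s, a, hp)) < 1e-4

# weighted jump at x = s: s * [g'] = -1 (the upper branch is constant in x)
d_plus = (g(s + 2 * eps, s, a, hp) - g(s + eps, s, a, hp)) / eps
assert abs(s * d_plus + 1.0) < 1e-4
```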

4.2 Method of Variation of Parameters


We turn now to another approach that has traditionally been applied to the construc-
tion of Green’s functions for ordinary differential equations. This is the one rooted
in Lagrange’s method of variation of parameters. The idea behind this approach
is based on the following classical assertion referred to, in some sources (see, for
example, [8]), as Hilbert’s theorem.

If g(x,s) represents the Green's function of the homogeneous boundary-value
problem posed by (4.1) and (4.2), then a unique solution of the corresponding
inhomogeneous equation

$$p_0(x)\frac{d^2y(x)}{dx^2}+p_1(x)\frac{dy(x)}{dx}+p_2(x)y(x)=-f(x),\quad x\in(a,b), \tag{4.41}$$

with a right-hand-side function f(x) continuous on [a,b], subject to the homogeneous
boundary conditions in (4.2), can be expressed by the integral

$$y(x)=\int_a^b g(x,s)f(s)\,ds. \tag{4.42}$$

For simplicity in the presentation that follows, we consider a particular case of
the boundary conditions in (4.2). That is, choosing the coefficients of (4.2) as

$$\alpha_{1,0}=\beta_{2,0}=1,$$

while

$$\alpha_{1,1}=\beta_{1,0}=\beta_{1,1}=\alpha_{2,0}=\alpha_{2,1}=\beta_{2,1}=0,$$

reduces (4.2) to

$$y(a)=0,\qquad y(b)=0. \tag{4.43}$$
The boundary-value problem stated in (4.41) and (4.43) has a unique solution,
which implies, of course, that the corresponding homogeneous problem has only
the trivial solution.
Let y1 (x) and y2 (x) represent two linearly independent particular solutions of the
homogeneous equation corresponding to that in (4.41). We then express the general
solution of (4.41) itself, in compliance with the method of variation of parameters,
in the form
$$y(x)=C_1(x)y_1(x)+C_2(x)y_2(x), \tag{4.44}$$
where C1 (x) and C2 (x) represent functions that are at least twice differentiable and
yet to be found.
The idea of expressing the solution in the form of (4.44) might not look reason-
able, since the equation in (4.41) delivers just a single relation for the two functions
C_1(x) and C_2(x). This leaves a certain degree of freedom in choosing a second
relation, which allows us to define C_1(x) and C_2(x) uniquely. Lagrange's method
of variation of parameters provides an effective and elegant choice of such a
relation.
The direct substitution of y(x) from (4.44) into (4.41) would result in a cumber-
some single second-order differential equation in the two unknown functions C_1(x)
and C_2(x). In order to avoid such an unfortunate complication, Lagrange's method

proceeds as follows. First, differentiate the function y(x) in (4.44) using the product
rule,

$$y'(x)=C_1'y_1+C_1y_1'+C_2'y_2+C_2y_2', \tag{4.45}$$

and then, keeping in mind the degree of freedom mentioned above, make the sim-
plifying assumption

$$C_1'y_1+C_2'y_2=0, \tag{4.46}$$

transforming (4.45) into

$$y'(x)=C_1y_1'+C_2y_2'. \tag{4.47}$$

Hence, the second derivative of y(x) is expressed as

$$y''(x)=C_1'y_1'+C_1y_1''+C_2'y_2'+C_2y_2''. \tag{4.48}$$

We now substitute the expressions for y(x), y'(x), and y''(x)
from (4.44), (4.47), and (4.48) into (4.41), yielding for its left-hand side

$$p_0\bigl(C_1'y_1'+C_1y_1''+C_2'y_2'+C_2y_2''\bigr)+p_1\bigl(C_1y_1'+C_2y_2'\bigr)+p_2\bigl(C_1y_1+C_2y_2\bigr).$$

Rearranging the order of terms, we rewrite this as

$$C_1\bigl(p_0y_1''+p_1y_1'+p_2y_1\bigr)+C_2\bigl(p_0y_2''+p_1y_2'+p_2y_2\bigr)+p_0\bigl(C_1'y_1'+C_2'y_2'\bigr). \tag{4.49}$$

Since y_1 = y_1(x) and y_2 = y_2(x) represent particular solutions of the homoge-
neous equation corresponding to (4.41), we have

$$p_0y_1''+p_1y_1'+p_2y_1=0$$

as well as

$$p_0y_2''+p_1y_2'+p_2y_2=0.$$

Setting the expression in (4.49) equal to the right-hand side −f(x) of (4.41) thus
reduces the equation to

$$C_1'(x)y_1'(x)+C_2'(x)y_2'(x)=-f(x)p_0^{-1}(x). \tag{4.50}$$

The relations in (4.46) and (4.50) represent a system of linear algebraic equations
in C_1'(x) and C_2'(x). The system is well posed (has a unique solution) because the
determinant of its coefficient matrix equals, up to sign, the Wronskian

$$W(x)=y_1'(x)y_2(x)-y_1(x)y_2'(x)\neq 0$$

of the fundamental set of solutions y_1(x) and y_2(x).


Solving the system in (4.46) and (4.50), we obtain

$$C_1'(x)=-\frac{y_2(x)f(x)}{p_0(x)W(x)},\qquad C_2'(x)=\frac{y_1(x)f(x)}{p_0(x)W(x)}.$$

Straightforward integration of the above expressions yields

$$C_1(x)=-\int_a^x \frac{y_2(s)f(s)}{p_0(s)W(s)}\,ds+H_1$$

and

$$C_2(x)=\int_a^x \frac{y_1(s)f(s)}{p_0(s)W(s)}\,ds+H_2,$$

where H_1 and H_2 are arbitrary constants of integration.
Upon substituting these expressions for C_1(x) and C_2(x) into (4.44), we obtain
the general solution of the equation in (4.41) in the form

$$y(x)=H_1y_1(x)+H_2y_2(x)+y_2(x)\int_a^x \frac{y_1(s)f(s)}{p_0(s)W(s)}\,ds-y_1(x)\int_a^x \frac{y_2(s)f(s)}{p_0(s)W(s)}\,ds.$$

Since s represents the variable of integration, the factors y_1(x) and y_2(x), which
depend on x only, can formally be moved inside the integrals. Once this is done and
the two integral terms are combined, we obtain

$$y(x)=H_1y_1(x)+H_2y_2(x)+\int_a^x \frac{y_1(s)y_2(x)-y_1(x)y_2(s)}{p_0(s)W(s)}\,f(s)\,ds. \tag{4.51}$$
a p0 (s)W (s)
To determine the values of H_1 and H_2, we satisfy the boundary conditions in (4.43)
with the above expression for y(x). This yields the system of linear algebraic equa-
tions

$$\begin{pmatrix} y_1(a) & y_2(a)\\ y_1(b) & y_2(b)\end{pmatrix}\begin{pmatrix} H_1\\ H_2\end{pmatrix}=\begin{pmatrix} 0\\ P(a,b)\end{pmatrix} \tag{4.52}$$

in H_1 and H_2, where P(a,b) is defined as

$$P(a,b)=\int_a^b \frac{R(b,s)}{p_0(s)W(s)}\,f(s)\,ds$$

and

$$R(b,s)=y_1(b)y_2(s)-y_1(s)y_2(b).$$

With this, we arrive at the solution of the system in (4.52) in the form

$$H_1=-\int_a^b \frac{y_2(a)R(b,s)f(s)}{p_0(s)R(a,b)W(s)}\,ds$$

and

$$H_2=\int_a^b \frac{y_1(a)R(b,s)f(s)}{p_0(s)R(a,b)W(s)}\,ds.$$

Upon substituting these into (4.51), we obtain the solution of the boundary-value
problem posed in (4.41) and (4.43) as

$$y(x)=-\int_a^x \frac{R(x,s)f(s)}{p_0(s)W(s)}\,ds+\int_a^b \frac{R(a,x)R(b,s)f(s)}{p_0(s)R(a,b)W(s)}\,ds.$$

This representation can be rewritten in the single-integral form

$$y(x)=\int_a^b g(x,s)f(s)\,ds, \tag{4.53}$$

whose kernel function g(x,s) is found in two pieces. For x ≤ s, it is defined as

$$g(x,s)=\frac{R(a,x)R(b,s)}{p_0(s)R(a,b)W(s)},\quad x\le s,$$

while for x ≥ s, we obtain

$$g(x,s)=\frac{R(a,x)R(b,s)-R(x,s)R(a,b)}{p_0(s)R(a,b)W(s)},\quad x\ge s.$$

After a trivial but quite cumbersome transformation, the above expression can be
simplified to

$$g(x,s)=\frac{R(a,s)R(b,x)}{p_0(s)R(a,b)W(s)},\quad x\ge s.$$
Thus, since the solution to the problem posed in (4.41) and (4.43) is found as
a single integral of the type in (4.42), we conclude that the kernel function g(x, s)
in (4.53) does in fact represent the Green’s function to the corresponding homoge-
neous boundary-value problem.
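The two-branch formula just derived is easy to exercise on the simplest possible case. The sketch below (assuming the sign convention W = y1′y2 − y1y2′ that keeps the formulas above consistent) reproduces the classical kernel x(1 − s), x ≤ s, of the problem y″ = −f(x), y(0) = y(1) = 0, built from y1 ≡ 1 and y2 ≡ x:

```python
def green(x, s, y1, y2, dy1, dy2, a, b, p0):
    # two-branch formula with R(u, v) = y1(u) y2(v) - y1(v) y2(u)
    R = lambda u, v: y1(u) * y2(v) - y1(v) * y2(u)
    W = dy1(s) * y2(s) - y1(s) * dy2(s)   # Wronskian convention of the text
    if x <= s:
        return R(a, x) * R(b, s) / (p0(s) * R(a, b) * W)
    return R(a, s) * R(b, x) / (p0(s) * R(a, b) * W)

# test case: y'' = -f on (0, 1) with y(0) = y(1) = 0
y1, y2 = (lambda x: 1.0), (lambda x: x)
dy1, dy2 = (lambda x: 0.0), (lambda x: 1.0)
one = lambda x: 1.0

# known result: g(x, s) = x(1 - s) for x <= s, symmetric in x and s
assert abs(green(0.3, 0.6, y1, y2, dy1, dy2, 0.0, 1.0, one) - 0.3 * (1 - 0.6)) < 1e-12
assert abs(green(0.6, 0.3, y1, y2, dy1, dy2, 0.0, 1.0, one) - 0.3 * (1 - 0.6)) < 1e-12
```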
So, the approach based on the method of variation of parameters can success-
fully be used to actually construct Green’s functions. We present below a number
of examples illustrating some peculiarities of this approach that emerge in practical
situations.

Example 4.6 Apply the procedure based on the method of variation of parameters to
the construction of the Green's function for the homogeneous equation correspond-
ing to

$$\frac{d^2y(x)}{dx^2}+k^2y(x)=-f(x),\quad x\in(0,a), \tag{4.54}$$

and subject to the homogeneous boundary conditions

$$y'(0)=0,\qquad y'(a)=0. \tag{4.55}$$

We assume that the right-hand-side function f(x) in (4.54) is continuous and
therefore integrable on (0, a).

It can easily be shown that, provided ka is not an integer multiple of π (which
we assume in what follows, so that sin ka ≠ 0), the homogeneous problem corre-
sponding to that in (4.54) and (4.55) has only the trivial solution. This implies that
the conditions for the existence and uniqueness of the Green's function are met,
and the latter can be constructed.
Since the functions y_1(x) ≡ sin kx and y_2(x) ≡ cos kx represent a fundamental
set of solutions of the corresponding homogeneous equation, the general solution
of (4.54) can be expressed as

$$y(x)=C_1(x)\sin kx+C_2(x)\cos kx. \tag{4.56}$$

The system of linear algebraic equations in C_1'(x) and C_2'(x), which has been
derived, in general, in (4.46) and (4.50), appears in this case as

$$\begin{pmatrix} \sin kx & \cos kx\\ k\cos kx & -k\sin kx\end{pmatrix}\begin{pmatrix} C_1'(x)\\ C_2'(x)\end{pmatrix}=\begin{pmatrix} 0\\ -f(x)\end{pmatrix},$$

providing us with the following solution:

$$C_1'(x)=-\frac{1}{k}\cos kx\,f(x),\qquad C_2'(x)=\frac{1}{k}\sin kx\,f(x).$$

Integrating, we obtain

$$C_1(x)=-\frac{1}{k}\int_0^x \cos ks\,f(s)\,ds+H_1$$

and

$$C_2(x)=\frac{1}{k}\int_0^x \sin ks\,f(s)\,ds+H_2.$$

Upon substituting these into (4.56) and carrying out an obvious transformation,
we obtain

$$y(x)=-\frac{1}{k}\int_0^x \sin k(x-s)f(s)\,ds+H_1\sin kx+H_2\cos kx. \tag{4.57}$$
0 k
To determine the values of H_1 and H_2, we differentiate y(x):

$$y'(x)=-\int_0^x \cos k(x-s)f(s)\,ds+H_1k\cos kx-H_2k\sin kx.$$

From the first condition in (4.55), it follows that H_1 = 0, while the second con-
dition yields

$$-\int_0^a \cos k(a-s)f(s)\,ds-H_2k\sin ka=0,$$

from which we immediately obtain

$$H_2=-\int_0^a \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds.$$

Upon substituting the values of H_1 and H_2 just found into (4.57) and correspond-
ingly regrouping the integrals, one obtains

$$y(x)=-\int_0^x \frac{\sin k(x-s)}{k}\,f(s)\,ds-\cos kx\int_0^a \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds. \tag{4.58}$$

Both of the above integrals can be combined and written in a compact single-
integral form. To help the reader proceed more easily through this transformation,
we formally add the term

$$\int_x^a 0\cdot f(s)\,ds$$

to the first of the two integrals in (4.58) and break the second one down as

$$-\cos kx\int_0^x \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds-\cos kx\int_x^a \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds.$$

Then y(x) is represented as a sum of four definite integrals, in two of which the
integration is carried out from 0 to x, while in the other two we integrate from
x to a. This transforms (4.58) into

$$y(x)=-\int_0^x \frac{\sin k(x-s)}{k}\,f(s)\,ds-\cos kx\int_0^x \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds$$
$$\qquad+\int_x^a 0\cdot f(s)\,ds-\cos kx\int_x^a \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds.$$

Grouping the integrals by pairs, we have

$$y(x)=-\int_0^x \biggl[\frac{\sin k(x-s)}{k}+\cos kx\,\frac{\cos k(a-s)}{k\sin ka}\biggr]f(s)\,ds-\cos kx\int_x^a \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds$$
$$=-\int_0^x \frac{\cos ks\,\cos k(a-x)}{k\sin ka}\,f(s)\,ds-\cos kx\int_x^a \frac{\cos k(a-s)}{k\sin ka}\,f(s)\,ds.$$

Note that in the first integral above, the variables x and s satisfy the inequality
x ≥ s, since x represents the upper limit of integration, whereas in the second
integral, x is the lower limit, implying x ≤ s.
Hence, the representation for y(x) that we have just arrived at can be viewed as
the single integral

$$y(x)=\int_0^a g(x,s)f(s)\,ds, \tag{4.59}$$

whose kernel function g(x,s) is defined in two pieces as

$$g(x,s)=-\frac{1}{k\sin ka}\begin{cases} \cos kx\,\cos k(a-s), & \text{for } x\le s,\\ \cos ks\,\cos k(a-x), & \text{for } s\le x. \end{cases} \tag{4.60}$$

Thus, since the solution of the boundary-value problem stated in (4.54) and (4.55)
is expressed as the integral in (4.59), g(x, s) represents the Green’s function to the
homogeneous boundary-value problem corresponding to that in (4.54) and (4.55).
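This kernel can be tested against a manufactured solution. In the sketch below (illustrative values k = 1.1, a = 2, chosen so that ka is not a multiple of π), y(x) = cos(2πx/a) satisfies y′(0) = y′(a) = 0, so with f(x) = ((2π/a)² − k²)cos(2πx/a) the quadrature (4.59) must reproduce y; composite Simpson quadrature is applied separately on each smooth piece of the kernel:

```python
import math

def simpson(F, lo, hi, n=400):
    # composite Simpson rule with n (even) subintervals
    h = (hi - lo) / n
    s = F(lo) + F(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * F(lo + i * h)
    return s * h / 3

def g(x, s, k, a):
    # Green's function of y'' + k^2 y = -f with y'(0) = y'(a) = 0,
    # normalized so that g'' + k^2 g = -delta(x - s)
    c = -1.0 / (k * math.sin(k * a))
    if x <= s:
        return c * math.cos(k * x) * math.cos(k * (a - s))
    return c * math.cos(k * s) * math.cos(k * (a - x))

k, a = 1.1, 2.0
f = lambda s: ((2 * math.pi / a) ** 2 - k ** 2) * math.cos(2 * math.pi * s / a)

x = 0.7
y = simpson(lambda s: g(x, s, k, a) * f(s), 0, x) + \
    simpson(lambda s: g(x, s, k, a) * f(s), x, a)
assert abs(y - math.cos(2 * math.pi * x / a)) < 1e-6
```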

Example 4.7 Consider the inhomogeneous equation

$$\frac{d^2y(x)}{dx^2}-k^2y(x)=-f(x), \tag{4.61}$$

subject to the homogeneous boundary conditions

$$y'(0)=0,\qquad y(a)=0. \tag{4.62}$$

It can be shown that the homogeneous boundary-value problem corresponding


to that of (4.61) and (4.62) has only the trivial solution, which means that the above
problem itself has a unique solution. This justifies the existence and uniqueness of
the Green’s function.
Earlier in Example 4.2, while dealing with the homogeneous equation corre-
sponding to (4.61), we presented the set of functions

$$y_1(x)\equiv e^{kx},\qquad y_2(x)\equiv e^{-kx}$$

as its fundamental set of solutions. Hence, the general solution of (4.61) itself can
be represented by

$$y(x)=C_1(x)e^{kx}+C_2(x)e^{-kx}. \tag{4.63}$$

Tracing out the procedure of Lagrange's method, one obtains expressions for
C_1(x) and C_2(x) in the form

$$C_1(x)=-\frac{1}{2k}\int_0^x e^{-ks}f(s)\,ds+H_1$$

and

$$C_2(x)=\frac{1}{2k}\int_0^x e^{ks}f(s)\,ds+H_2.$$

By virtue of substituting these into (4.63), we obtain

$$y(x)=H_1e^{kx}+H_2e^{-kx}-\frac{1}{k}\int_0^x \sinh k(x-s)f(s)\,ds. \tag{4.64}$$

The first boundary condition y'(0) = 0 in (4.62) implies that H_1 = H_2, while the
second condition y(a) = 0 yields

$$H_1=H_2=\int_0^a \frac{\sinh k(a-s)}{2k\cosh ka}\,f(s)\,ds.$$

Substituting these into (4.64), we obtain

$$y(x)=\int_0^a \frac{\cosh kx\,\sinh k(a-s)}{k\cosh ka}\,f(s)\,ds-\frac{1}{k}\int_0^x \sinh k(x-s)f(s)\,ds.$$

Conducting transformations of the above integrals in compliance with our rou-
tine, we ultimately obtain the Green's function g(x,s) of the homogeneous
boundary-value problem corresponding to that in (4.61) and (4.62) in the form

$$g(x,s)=\frac{1}{k\cosh ka}\begin{cases} \cosh kx\,\sinh k(a-s), & \text{for } x\le s,\\ \cosh ks\,\sinh k(a-x), & \text{for } s\le x. \end{cases} \tag{4.65}$$
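As before, a short numerical sketch (illustrative values k = 0.8, a = 1.5, s = 0.4) confirms the boundary conditions (4.62) and the unit jump of the derivative of (4.65):

```python
import math

def g(x, s, k, a):
    # Green's function (4.65): y'(0) = 0, y(a) = 0 for y'' - k^2 y = -f
    c = 1.0 / (k * math.cosh(k * a))
    if x <= s:
        return c * math.cosh(k * x) * math.sinh(k * (a - s))
    return c * math.cosh(k * s) * math.sinh(k * (a - x))

k, a, s, h = 0.8, 1.5, 0.4, 1e-6

assert abs(g(a, s, k, a)) < 1e-12                     # y(a) = 0
d0 = (g(h, s, k, a) - g(0.0, s, k, a)) / h
assert abs(d0) < 1e-4                                 # y'(0) = 0

d_plus = (g(s + 2 * h, s, k, a) - g(s + h, s, k, a)) / h
d_minus = (g(s - h, s, k, a) - g(s - 2 * h, s, k, a)) / h
assert abs((d_plus - d_minus) + 1.0) < 1e-4           # unit jump at x = s
```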

The example below focuses on another special feature of Lagrange’s method. It


is designed to demonstrate a capability of the method in managing problems stated
on an unbounded region with boundedness conditions imposed.

Example 4.8 Let us return to the equation in (4.61), and let it be subject to the
following boundary conditions:

$$y'(0)-hy(0)=0,\qquad |y(\infty)|<\infty. \tag{4.66}$$

It can readily be checked that there exists a unique Green's function for the homo-
geneous boundary-value problem corresponding to that posed by (4.61) and (4.66).
The reader is recommended to justify this fact in Exercise 4.6.
The general solution of the equation in (4.61) was earlier presented in (4.64). In
the present case, however, it is going to be more beneficial to express it, in con-
trast to the mixed hyperbolic–exponential form in (4.64), completely in terms of
exponential functions. That is,

$$y(x)=H_1e^{kx}+H_2e^{-kx}+\frac{1}{2k}\int_0^x \bigl[e^{k(s-x)}-e^{k(x-s)}\bigr]f(s)\,ds. \tag{4.67}$$

The point is that the form in (4.67) will be more practical in view of the neces-
sity of treating the boundedness condition |y(∞)| < ∞ in the discussion that follows.
Indeed, splitting the integral into its two exponential terms and grouping together
the terms containing the factor e^{kx} and those containing the factor e^{-kx}, we
transform (4.67) into

$$y(x)=\biggl[H_1-\int_0^x \frac{e^{-ks}}{2k}\,f(s)\,ds\biggr]e^{kx}+\biggl[H_2+\int_0^x \frac{e^{ks}}{2k}\,f(s)\,ds\biggr]e^{-kx}. \tag{4.68}$$

It is clearly seen that the boundedness condition |y(∞)| < ∞ requires the coeffi-
cient of the growing exponential term e^{kx} in (4.68) to vanish as x approaches
infinity. This implies

$$H_1=\int_0^\infty \frac{1}{2k}e^{-ks}f(s)\,ds,$$

while the first condition in (4.66) subsequently yields

$$H_2=\frac{k-h}{k+h}H_1=\int_0^\infty \frac{k-h}{2k(k+h)}\,e^{-ks}f(s)\,ds.$$

Upon substituting the expressions for H_1 and H_2 just found into (4.67) and
rewriting its integral component again in a more compact hyperbolic-function-
containing form, we obtain

$$y(x)=-\frac{1}{k}\int_0^x \sinh k(x-s)f(s)\,ds+\int_0^\infty \frac{1}{2k}e^{-ks}\bigl(e^{kx}+h^*e^{-kx}\bigr)f(s)\,ds,$$

where h^* = (k−h)/(k+h). From this representation, the Green's function of the
problem in (4.61) and (4.66) ultimately appears as

$$g(x,s)=\frac{1}{2k}\begin{cases} e^{-ks}\bigl(e^{kx}+h^*e^{-kx}\bigr), & \text{for } x\le s,\\[4pt] e^{-kx}\bigl(e^{ks}+h^*e^{-ks}\bigr), & \text{for } s\le x. \end{cases} \tag{4.69}$$
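For the semi-infinite domain, the checks are the mixed condition at x = 0, decay as x grows, and the unit jump at x = s. In the sketch below the parameter h of (4.66) is renamed `hc`, and the values k = 1.2, h = 0.5, s = 0.9 are illustrative only:

```python
import math

def g(x, s, k, hc):
    # Green's function (4.69) on the half-line; h* = (k - h)/(k + h)
    hs = (k - hc) / (k + hc)
    if x <= s:
        return math.exp(-k * s) * (math.exp(k * x) + hs * math.exp(-k * x)) / (2 * k)
    return math.exp(-k * x) * (math.exp(k * s) + hs * math.exp(-k * s)) / (2 * k)

k, hc, s, eps = 1.2, 0.5, 0.9, 1e-6

# mixed condition at x = 0: g'(0) - h g(0) = 0
d0 = (g(eps, s, k, hc) - g(0.0, s, k, hc)) / eps
assert abs(d0 - hc * g(0.0, s, k, hc)) < 1e-4

assert g(50.0, s, k, hc) < 1e-20                      # decay at infinity

d_plus = (g(s + 2 * eps, s, k, hc) - g(s + eps, s, k, hc)) / eps
d_minus = (g(s - eps, s, k, hc) - g(s - 2 * eps, s, k, hc)) / eps
assert abs((d_plus - d_minus) + 1.0) < 1e-4           # unit jump at x = s
```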

In the example that follows, a boundary-value problem for another equation with
variable coefficients is considered.

Example 4.9 Consider another second-order linear equation with variable coeffi-
cients,

$$\frac{d}{dx}\Bigl[\bigl(\beta^2x^2+1\bigr)\frac{dy(x)}{dx}\Bigr]=-f(x),\quad x\in(0,a), \tag{4.70}$$

subject to the boundary conditions

$$y(0)=0,\qquad y(a)=0. \tag{4.71}$$

The above boundary-value problem is well posed (the reader is recommended to
justify this assertion in Exercise 4.7), ensuring the existence of a unique Green's
function for the corresponding homogeneous problem. Since by now the reader
should have gained a great deal of experience, we will describe the construction
procedure only briefly.
Due to the self-adjoint form of the equation in (4.70), two components of its
fundamental set of solutions can in this case be obtained by two successive inte-
grations of the corresponding homogeneous equation. This gives us y_1(x) ≡ 1 and
y_2(x) ≡ arctan βx, yielding the general solution of the inhomogeneous equation in
the form

$$y(x)=\frac{1}{\beta}\int_0^x \arctan\frac{\beta(s-x)}{1+\beta^2xs}\,f(s)\,ds+D_1+D_2\arctan\beta x.$$

By satisfying the boundary conditions in (4.71), one determines the constants D_1
and D_2:

$$D_1=0,\qquad D_2=\int_0^a \frac{\arctan\beta a-\arctan\beta s}{\beta\arctan\beta a}\,f(s)\,ds.$$

Substituting these into the above expression for the general solution of (4.70) and
rearranging the integral terms, one obtains the solution of the original boundary-
value problem as

$$y(x)=\int_0^a g(x,s)f(s)\,ds,$$

where the kernel g(x,s) represents the Green's function

$$g(x,s)=\frac{1}{K}\begin{cases} \arctan\beta x\,\bigl(\arctan\beta a-\arctan\beta s\bigr), & \text{for } 0\le x\le s,\\[4pt] \arctan\beta s\,\bigl(\arctan\beta a-\arctan\beta x\bigr), & \text{for } s\le x\le a, \end{cases} \tag{4.72}$$

for the homogeneous problem corresponding to that in (4.70) and (4.71), where
K = β arctan βa.
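A final sketch (illustrative values β = 1.4, a = 1, s = 0.35) confirms the boundary conditions, the symmetry of the kernel (4.72), and the weighted jump (β²s² + 1)[g′] = −1 appropriate to the self-adjoint operator in (4.70):

```python
import math

def g(x, s, beta, a):
    # Green's function (4.72) with K = beta * arctan(beta a)
    K = beta * math.atan(beta * a)
    A = math.atan(beta * a)
    if x <= s:
        return math.atan(beta * x) * (A - math.atan(beta * s)) / K
    return math.atan(beta * s) * (A - math.atan(beta * x)) / K

beta, a, s, eps = 1.4, 1.0, 0.35, 1e-6

assert abs(g(0.0, s, beta, a)) < 1e-12                            # y(0) = 0
assert abs(g(a, s, beta, a)) < 1e-12                              # y(a) = 0
assert abs(g(0.2, 0.6, beta, a) - g(0.6, 0.2, beta, a)) < 1e-12   # symmetry

d_plus = (g(s + 2 * eps, s, beta, a) - g(s + eps, s, beta, a)) / eps
d_minus = (g(s - eps, s, beta, a) - g(s - 2 * eps, s, beta, a)) / eps
assert abs((beta ** 2 * s ** 2 + 1) * (d_plus - d_minus) + 1.0) < 1e-3
```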

We believe that having developed the necessary flexibility in dealing with ordi-
nary differential equations, the reader will feel comfortable enough working on the
material in the next chapter.

4.3 Chapter Exercises

4.1 Show that the trivial solution is the only solution to the boundary-value problem
stated in (4.21) and (4.22).

4.2 Justify the well-posedness of the boundary-value problem stated in (4.31)


and (4.32).

4.3 Prove that the boundary-value problem stated in (4.54) and (4.55) is uniquely
solvable.

4.4 Construct the Green’s function for the homogeneous equation corresponding
to (4.54), subject to the boundary conditions

$$y(0)=y'(1)=0.$$

4.5 Construct the Green’s function for the homogeneous equation corresponding
to (4.61) subject to the boundary conditions

$$y'(0)-hy(0)=y(a)=0.$$

4.6 Prove that the boundary-value problem stated in (4.61) and (4.66) is well posed.

4.7 Prove that the boundary-value problem stated in (4.70) and (4.71) is well posed.
Chapter 5
Eigenfunction Expansion

Having departed for a while from the main focus of the book in the previous chapter,
where the emphasis was on ordinary differential equations, we are going to return
in the present chapter to partial differential equations. The reader will be provided
with a comprehensive review of another approach that has been traditionally em-
ployed for the construction of Green’s functions for partial differential equations.
We will use the method of eigenfunction expansion, one of the most productive
and widely recommended methods in the field.
Our objective in reviewing the method of eigenfunction expansion is twofold.
First, we want to assist the reader in the derivation of Green’s functions for a variety
of applied partial differential equations. Our second goal is to lay out a preparatory
basis for Chap. 6, which is, to a certain extent, central for the entire volume. In
that chapter, upon comparison of different forms of Green’s functions, some infinite
product representations are derived for a number of trigonometric and hyperbolic
functions.
After presenting introductory comments in the brief section below, we develop
a procedure based on the eigenfunction expansion method to derive a number of
Green’s functions in Sects. 5.2 and 5.3. The first of these touches upon problems
stated in Cartesian coordinates, while problems formulated in polar coordinates are
dealt with in Sect. 5.3.

5.1 Background of the Approach

Earlier, in Chap. 3, the reader was familiarized with two standard approaches that
are traditionally used for the construction of Green’s functions for the Laplace equa-
tion in two dimensions. These approaches are based on the methods of images
and conformal mapping. Another standard approach in the field is based on the
method of eigenfunction expansion [15, 18]. The number of problems for which
this method appears productive is notably wider than the number of problems suc-
cessfully treated by either of the two other methods.

Y.A. Melnikov, Green’s Functions and Infinite Products, 85


DOI 10.1007/978-0-8176-8280-4_5, © Springer Science+Business Media, LLC 2011

In the introduction to this book, it was mentioned that the solution u(P) of the
well-posed boundary-value problem

$$\nabla^2u(P)=-f(P),\quad P\in\Omega, \tag{5.1}$$

$$B\bigl[u(P)\bigr]=0,\quad P\in L, \tag{5.2}$$

stated for the Poisson (inhomogeneous) equation, can be expressed in the integral
form

$$u(P)=\int_{\Omega}G(P,Q)f(Q)\,d\Omega(Q) \tag{5.3}$$

in terms of the Green's function G(P,Q) of the corresponding homogeneous prob-
lem (stated for the Laplace equation).
This gives a hint as to a possible technique for constructing Green’s functions.
It could aim at expressing the solution u(P ) to the problem in (5.1) and (5.2) in
the integral representation form of (5.3). In other words, when solving the problem
in (5.1) and (5.2), the goal is not just to obtain its solution u(P ) by any means and in
any form. The intention is rather more specific. The solution should be in the form
of (5.3), providing an explicit expression for G(P , Q). An algorithm that uses the
method of eigenfunction expansion appears efficient in such an endeavor.

5.2 Cartesian Coordinates

In what follows, the particulars of the approach based on the method of eigen-
function expansion and its specific features are clarified and explained as we pass
through a series of illustrative examples in which problems are stated in Cartesian
coordinates. In the first example, the reader is given a more or less detailed
description of the approach.

Example 5.1 We revisit the Dirichlet problem for the Laplace equation stated on
the infinite strip Ω = {−∞ < x < ∞, 0 < y < b}.

This problem has already been considered in Chap. 3. Two equivalent forms of
its Green’s function were presented there in (3.37) and (3.38). They were obtained
by the method of conformal mapping. We are going to describe now an alternative
derivation procedure, which will be explained by turning to the following boundary-
value problem:

$$\frac{\partial^2u(x,y)}{\partial x^2}+\frac{\partial^2u(x,y)}{\partial y^2}=-f(x,y),\quad (x,y)\in\Omega, \tag{5.4}$$

$$u(x,0)=u(x,b)=0. \tag{5.5}$$

In addition to the conditions in (5.5), it is assumed that the function u(x,y) is
bounded as x approaches infinity,

$$\lim_{x\to-\infty}\bigl|u(x,y)\bigr|<\infty,\qquad \lim_{x\to\infty}\bigl|u(x,y)\bigr|<\infty,$$

while the right-hand-side function f(x,y) is integrable on Ω, implying that the
improper integral

$$\int_{\Omega}f(x,y)\,d\Omega(x,y)$$

is convergent.
Recall that if the classical separation of variables method is applied to the homo-
geneous problem

$$\frac{\partial^2U(x,y)}{\partial x^2}+\frac{\partial^2U(x,y)}{\partial y^2}=0,\quad (x,y)\in\Omega,$$

$$U(x,0)=U(x,b)=0,$$

corresponding to (5.4) and (5.5), then its solution U(x,y) is given by

$$U(x,y)=\sum_{n=1}^{\infty}X_n(x)Y_n(y).$$

This yields the following eigenvalue problem in Y_n(y):

$$\frac{d^2Y_n(y)}{dy^2}+\nu^2Y_n(y)=0,\quad y\in(0,b),$$

$$Y_n(0)=Y_n(b)=0,$$

whose eigenvalues and eigenfunctions are found as ν = nπ/b, with n = 1, 2, 3, ...,
and Y_n(y) = sin νy.
Following then, with the above in mind, the procedure of the eigenfunction ex-
pansion method, we express the solution u(x,y) of the problem in (5.4) and (5.5) in
terms of the eigenfunctions Y_n(y),

$$u(x,y)=\sum_{n=1}^{\infty}u_n(x)\sin\nu y, \tag{5.6}$$

which represents the expansion of the function u(x,y) of two variables in a Fourier
sine series with respect to one of the variables.
The right-hand-side function f(x,y) in (5.4) is also expanded in terms of Y_n(y):

$$f(x,y)=\sum_{n=1}^{\infty}f_n(x)\sin\nu y. \tag{5.7}$$

Once the expansions from (5.6) and (5.7) are substituted into (5.4), we obtain

$$\sum_{n=1}^{\infty}\biggl[\frac{d^2u_n(x)}{dx^2}-\nu^2u_n(x)\biggr]\sin\nu y=-\sum_{n=1}^{\infty}f_n(x)\sin\nu y.$$

Equating the coefficients of the two series in the above relation yields the ordi-
nary differential equation

$$\frac{d^2u_n(x)}{dx^2}-\nu^2u_n(x)=-f_n(x),\quad -\infty<x<\infty, \tag{5.8}$$

in the coefficients u_n(x) of the series in (5.6). Clearly, the boundedness conditions

$$\lim_{x\to-\infty}\bigl|u_n(x)\bigr|<\infty,\qquad \lim_{x\to\infty}\bigl|u_n(x)\bigr|<\infty, \tag{5.9}$$

must be imposed on u_n(x) to make the problem setting in (5.8) and (5.9) well posed.
To construct the Green's function of the above boundary-value problem, we may
choose either the approach employing the defining properties or the one based on the
method of variation of parameters. Choosing the latter, we trace out its procedure,
which was described in detail in Sect. 4.2. That is, we express the general solution
of (5.8) in the form

$$u_n(x)=C_1(x)e^{\nu x}+C_2(x)e^{-\nu x}, \tag{5.10}$$
which yields the well-posed system of linear algebraic equations

$$\begin{pmatrix} e^{\nu x} & e^{-\nu x}\\ \nu e^{\nu x} & -\nu e^{-\nu x}\end{pmatrix}\begin{pmatrix} C_1'(x)\\ C_2'(x)\end{pmatrix}=\begin{pmatrix} 0\\ -f_n(x)\end{pmatrix}$$

in C_1'(x) and C_2'(x), whose solution is obtained as

$$C_1'(x)=-\frac{1}{2\nu}e^{-\nu x}f_n(x),\qquad C_2'(x)=\frac{1}{2\nu}e^{\nu x}f_n(x).$$

The expressions for C_1(x) and C_2(x),

$$C_1(x)=-\frac{1}{2\nu}\int_{-\infty}^x e^{-\nu\xi}f_n(\xi)\,d\xi+D_1$$

and

$$C_2(x)=\frac{1}{2\nu}\int_{-\infty}^x e^{\nu\xi}f_n(\xi)\,d\xi+D_2,$$

are found by integration. Substituting these into (5.10), we obtain

$$u_n(x)=\biggl[\frac{1}{2\nu}\int_{-\infty}^x e^{\nu\xi}f_n(\xi)\,d\xi+D_2\biggr]e^{-\nu x}+\biggl[D_1-\frac{1}{2\nu}\int_{-\infty}^x e^{-\nu\xi}f_n(\xi)\,d\xi\biggr]e^{\nu x}. \tag{5.11}$$

The first boundedness condition in (5.9) requires that the coefficient of e^{-\nu x}
in (5.11) vanish as x approaches negative infinity. This yields D_2 = 0. The second
condition in (5.9), in turn, requires that the coefficient of e^{\nu x} vanish as x
approaches infinity. This yields

$$D_1=\frac{1}{2\nu}\int_{-\infty}^{\infty}e^{-\nu\xi}f_n(\xi)\,d\xi.$$

After the values of D_1 and D_2 just found are substituted into (5.11), the solution
of the boundary-value problem posed in (5.8) and (5.9) is found as

$$u_n(x)=\int_{-\infty}^{\infty}\frac{1}{2\nu}e^{\nu(x-\xi)}f_n(\xi)\,d\xi+\int_{-\infty}^x \frac{1}{2\nu}\bigl[e^{\nu(\xi-x)}-e^{\nu(x-\xi)}\bigr]f_n(\xi)\,d\xi,$$

which reads as the single integral

$$u_n(x)=\int_{-\infty}^{\infty}g_n(x,\xi)f_n(\xi)\,d\xi, \tag{5.12}$$

whose kernel is expressed as

$$g_n(x,\xi)=\frac{1}{2\nu}e^{-\nu|x-\xi|},\quad -\infty<x,\xi<\infty.$$

Thus, the above represents the Green's function of the homogeneous boundary-
value problem corresponding to that in (5.8) and (5.9).
With the aid of the Euler–Fourier formula, the coefficient fn (ξ ) in the series
of (5.7) is expressed through the right-hand-side function of the equation in (5.4) as
$$f_n(\xi)=\frac{2}{b}\int_0^b f(\xi,\eta)\sin\nu\eta\,d\eta.$$

Substituting the above into (5.12), and then substituting the coefficients u_n(x)
into (5.6), we obtain the solution of the problem in (5.4) and (5.5) in the form

$$u(x,y)=\int_0^b\!\int_{-\infty}^{\infty}\Biggl[\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{e^{-\nu|x-\xi|}}{n}\sin\nu y\,\sin\nu\eta\Biggr]f(\xi,\eta)\,d\xi\,d\eta, \tag{5.13}$$

which suggests that, in view of (5.3), the kernel

$$G(x,y;\xi,\eta)=\frac{1}{\pi}\sum_{n=1}^{\infty}\frac{e^{-\nu|x-\xi|}}{n}\sin\nu y\,\sin\nu\eta \tag{5.14}$$

of the integral representation from (5.13) represents the Green’s function to the ho-
mogeneous boundary-value problem corresponding to that in (5.4) and (5.5).
The series in (5.14) is nonuniformly convergent. Due to the logarithmic singular-
ity, it diverges, in fact, when the observation point (x, y) coincides with the source
point (ξ, η). This makes the above series form of the Green’s function somewhat

inconvenient for numerical implementations. But the situation can be radically
improved, because the series is actually summable. To sum it, we transform (5.14)
into

$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\sum_{n=1}^{\infty}\frac{e^{-\nu|x-\xi|}}{n}\bigl[\cos\nu(y-\eta)-\cos\nu(y+\eta)\bigr]$$
$$=\frac{1}{2\pi}\Biggl[\sum_{n=1}^{\infty}\frac{e^{-\nu|x-\xi|}}{n}\cos\nu(y-\eta)-\sum_{n=1}^{\infty}\frac{e^{-\nu|x-\xi|}}{n}\cos\nu(y+\eta)\Biggr] \tag{5.15}$$

and recall the classical [5, 6, 9] summation formula

$$\sum_{n=1}^{\infty}\frac{p^n}{n}\cos n\vartheta=-\frac{1}{2}\ln\bigl(1-2p\cos\vartheta+p^2\bigr), \tag{5.16}$$

which holds if its parameters meet the constraints p < 1 and 0 ≤ ϑ < 2π .
It is evident that the series in (5.15) are of the type in (5.16), with p = e^{−ω|x−ξ|}
and ϑ = ω(y∓η), where ω = π/b, and that the constraints on the parameters p and ϑ
are met. Indeed, it is clear that

$$e^{-\omega|x-\xi|}\le 1$$

and

$$0\le \omega|y\mp\eta|<2\pi.$$

Hence, the series in (5.15) can be summed, which yields the analytical represen-
tation

$$G(x,y;\xi,\eta)=\frac{1}{4\pi}\ln\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}} \tag{5.17}$$
for the Green’s function to the homogeneous boundary-value problem correspond-
ing to that in (5.4) and (5.5). Here ω = π/b.
At this point, the reader is referred to the expression in (3.37) of Chap. 3, which was obtained (by the method of conformal mapping) as the Green’s function of the Dirichlet problem for the infinite strip Ω = {−∞ < x < ∞, 0 < y < π} of width π. Clearly, if we assume b = π, implying ω = 1, then the expression in (5.17) reduces to that of (3.37).
Note that, similarly to the conversion of the representation in (3.37) into that in (3.38) undertaken in Sect. 3.2, the expression shown in (5.17) converts into

G(x, y; ξ, η) = (1/4π) ln [ (cosh ω(x − ξ) − cos ω(y + η)) / (cosh ω(x − ξ) − cos ω(y − η)) ].   (5.18)
Recall that the conversion is accomplished by multiplying the numerator and de-
nominator in (5.17) by the factor 2eω(ξ −x) , with subsequent use of the Euler formula
for the hyperbolic cosine function.
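A quick numerical cross-check of the summation is possible: away from the source point, a partial sum of the series form (5.14) must agree with the closed form (5.18). The sketch below is our own (the point choices and names are illustrative, not the author's):

```python
import math

def G_series(x, y, xi, eta, b, N=400):
    # partial sum of (5.14): (1/pi) sum e^{-nu|x-xi|} sin(nu y) sin(nu eta)/n
    s = 0.0
    for n in range(1, N + 1):
        nu = n * math.pi / b
        s += math.exp(-nu * abs(x - xi)) * math.sin(nu * y) * math.sin(nu * eta) / n
    return s / math.pi

def G_closed(x, y, xi, eta, b):
    # closed analytical form (5.18), with omega = pi/b
    w = math.pi / b
    num = math.cosh(w * (x - xi)) - math.cos(w * (y + eta))
    den = math.cosh(w * (x - xi)) - math.cos(w * (y - eta))
    return math.log(num / den) / (4.0 * math.pi)

b = math.pi  # strip of width pi, so omega = 1
assert abs(G_series(0.4, 1.0, 0.0, 2.0, b) - G_closed(0.4, 1.0, 0.0, 2.0, b)) < 1e-12
```

With |x − ξ| = 0.4 the series terms decay like e^{−0.4n}, so the partial sum is converged to machine precision long before N = 400; at the source point itself the series form would diverge logarithmically, exactly as discussed above.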
5.2 Cartesian Coordinates 91

Another shorthand expression for (5.17) can be obtained upon introducing the complex variables

z = x + iy   and   ζ = ξ + iη

for the observation point (x, y) and the source point (ξ, η). Indeed, recalling the Euler formula

e^z = e^x (cos y + i sin y)

for the complex exponential, one reduces the representation in (5.17) to the compact form

G(x, y; ξ, η) = (1/2π) ln ( |1 − e^{ω(z−ζ̄)}| / |1 − e^{ω(z−ζ)}| ),   (5.19)

where the bar over ζ stands for the complex conjugate.

Example 5.2 We turn to the construction of the Green’s function for the Dirichlet problem

u(x, 0) = u(x, b) = u(0, y) = 0   (5.20)

stated for the Laplace equation on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b}.

Consider the boundary-value problem posed in (5.4) and (5.20) on Ω. In addition to the conditions in (5.20), it is required that the function u(x, y) be bounded as x approaches infinity, while the right-hand-side function f(x, y) in (5.4) is assumed to be integrable on Ω.
Following the scheme of the eigenfunction expansion method, the functions u(x, y) and f(x, y) are expanded, analogously to the case in Example 5.1, in the Fourier sine series of (5.6) and (5.7). This yields the boundary-value problem

d²un(x)/dx² − ν² un(x) = −fn(x),   0 < x < ∞,

un(0) = 0,   lim_{x→∞} |un(x)| < ∞,

in the coefficients un (x) of the series in (5.6).


Recall that the Green’s function gn(x, ξ) to the above problem was obtained earlier, in Chap. 4 (see the form in (4.20)). Using our current notation, we express it as

gn(x, ξ) = (1/2ν) [ e^{−ν|x−ξ|} − e^{−ν(x+ξ)} ],   0 ≤ x, ξ < ∞.

Tracing out the procedure used for the setting in Example 5.1, a series expansion of the Green’s function for the homogeneous boundary-value problem corresponding to that in (5.4) and (5.20) is obtained as

G(x, y; ξ, η) = (2/b) Σ_{n=1}^∞ gn(x, ξ) sin νy sin νη,

and after employing the summation formula from (5.16), the above representation transforms to the closed analytical form

G(x, y; ξ, η) = (1/2π) ln [ |1 − e^{ω(z−ζ̄)}| |1 − e^{ω(z+ζ̄)}| / ( |1 − e^{ω(z−ζ)}| |1 − e^{ω(z+ζ)}| ) ],   (5.21)

where ω = π/b. The equivalence of this to the form

G(x, y; ξ, η) = (1/4π) ln [ (cosh ω(x + ξ) − cos ω(y − η)) / (cosh ω(x − ξ) − cos ω(y − η)) × (cosh ω(x − ξ) − cos ω(y + η)) / (cosh ω(x + ξ) − cos ω(y + η)) ],   (5.22)

which is usually given for this Green’s function in the literature [18], can readily be verified using the algebra explained earlier in Example 5.1.
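That equivalence can also be confirmed numerically. The sketch below (our own code; the conjugated exponents follow the complex shorthand of (5.19)) evaluates both forms at an arbitrary interior point:

```python
import cmath
import math

def G_complex(x, y, xi, eta, b):
    # complex-modulus form (5.21), omega = pi/b
    w = math.pi / b
    z, zeta = complex(x, y), complex(xi, eta)
    num = abs(1 - cmath.exp(w * (z - zeta.conjugate()))) * \
          abs(1 - cmath.exp(w * (z + zeta.conjugate())))
    den = abs(1 - cmath.exp(w * (z - zeta))) * \
          abs(1 - cmath.exp(w * (z + zeta)))
    return math.log(num / den) / (2.0 * math.pi)

def G_hyperbolic(x, y, xi, eta, b):
    # hyperbolic form (5.22)
    w = math.pi / b
    r = ((math.cosh(w * (x + xi)) - math.cos(w * (y - eta))) /
         (math.cosh(w * (x - xi)) - math.cos(w * (y - eta))) *
         (math.cosh(w * (x - xi)) - math.cos(w * (y + eta))) /
         (math.cosh(w * (x + xi)) - math.cos(w * (y + eta))))
    return math.log(r) / (4.0 * math.pi)

b = 2.0
assert abs(G_complex(0.7, 0.5, 1.2, 1.1, b) - G_hyperbolic(0.7, 0.5, 1.2, 1.1, b)) < 1e-12
```

The agreement is exact up to rounding, since the two expressions differ only by the identity |1 − e^{u+iv}|² = e^{u} · 2(cosh u − cos v).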
So far, we have used the method of eigenfunction expansion as an alternative way
to construct some Green’s functions already available in the literature. In what fol-
lows, in contrast, the method is applied to a mixed boundary-value problem whose
Green’s function probably cannot be obtained otherwise.

Example 5.3 Consider the Poisson equation

∂²u(x, y)/∂x² + ∂²u(x, y)/∂y² = −f(x, y),   (x, y) ∈ Ω,   (5.23)

stated on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b}, and subject, in this case, to the boundary conditions

∂u(0, y)/∂x − βu(0, y) = 0,   u(x, 0) = u(x, b) = 0,   β ≥ 0.   (5.24)
By virtue of the Fourier sine-series expansions

u(x, y) = Σ_{n=1}^∞ un(x) sin νy,   ν = nπ/b,

and

f(x, y) = Σ_{n=1}^∞ fn(x) sin νy,

we obtain the boundary-value problem

d²un(x)/dx² − ν² un(x) = −fn(x),   0 < x < ∞,

dun(0)/dx − βun(0) = 0,   lim_{x→∞} |un(x)| < ∞,

in the coefficients un(x) of the above series expansion for u(x, y).
Following the procedure of the method of variation of parameters, the Green’s function gn(x, ξ) to the above problem is found in the form

gn(x, ξ) = (1/2ν) { e^{−νξ}(e^{νx} + β* e^{−νx}),  for x ≤ ξ,
                    e^{−νx}(e^{νξ} + β* e^{−νξ}),  for x ≥ ξ,   (5.25)

where β* = (ν − β)/(ν + β).
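The defining properties of gn(x, ξ), namely the mixed condition at x = 0, continuity at x = ξ, and the unit jump of the derivative across x = ξ, can be verified directly. A sketch (our own helper names; the branch derivatives are taken analytically):

```python
import math

nu, beta, xi = 2.0, 0.5, 0.8
bs = (nu - beta) / (nu + beta)        # beta* in (5.25)

def g_left(x):    # branch of (5.25) valid for x <= xi
    return math.exp(-nu * xi) * (math.exp(nu * x) + bs * math.exp(-nu * x)) / (2 * nu)

def g_right(x):   # branch valid for x >= xi
    return math.exp(-nu * x) * (math.exp(nu * xi) + bs * math.exp(-nu * xi)) / (2 * nu)

def dg_left(x):   # x-derivative of the left branch
    return math.exp(-nu * xi) * (math.exp(nu * x) - bs * math.exp(-nu * x)) / 2

def dg_right(x):  # x-derivative of the right branch
    return -math.exp(-nu * x) * (math.exp(nu * xi) + bs * math.exp(-nu * xi)) / 2

# mixed condition at x = 0:  g'(0) - beta g(0) = 0
assert abs(dg_left(0.0) - beta * g_left(0.0)) < 1e-14
# continuity at x = xi
assert abs(g_left(xi) - g_right(xi)) < 1e-14
# unit jump of the derivative, since g'' - nu^2 g = -delta(x - xi)
assert abs((dg_right(xi) - dg_left(xi)) + 1.0) < 1e-12
```

These three checks, together with boundedness as x → ∞ (both branches decay like e^{−νx}), characterize gn uniquely.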


The solution to (5.23) and (5.24) is obtained then as

u(x, y) = ∫₀ᵇ ∫₀^∞ [ (2/b) Σ_{n=1}^∞ gn(x, ξ) sin νy sin νη ] f(ξ, η) dξ dη.   (5.26)

This implies that the kernel

G(x, y; ξ, η) = (2/b) Σ_{n=1}^∞ gn(x, ξ) sin νy sin νη   (5.27)

of (5.26) represents the Green’s function to the problem in (5.24).


Thus, the Green’s function that we are looking for is formally obtained. But what about the computability of the representation in (5.27)? In contrast to closed analytical forms, such as those obtained in Examples 5.1 and 5.2, the one in (5.27) is not, unfortunately, suitable for immediate computer implementation. This is so because the series in (5.27) does not (and cannot) converge uniformly.
touched upon this phenomenon earlier in this book. Any series-only representation
of a Green’s function for the two-dimensional Laplace equation cannot uniformly
converge due to the logarithmic singularity in the Green’s function.
To give the reader a sense of the level of accuracy attainable by the series expansion in (5.27), Fig. 5.1 exhibits profiles of G(x, y; ξ, η) for various partial sums. The problem was specified by the parameters b = 1 and β = 0.5. The source point is fixed as (0.1, 0.5), and the profile of G(x, 0.3; 0.1, 0.5) is depicted in a neighborhood of the source point.
Although increasing the order of a partial sum provides a reasonable improvement, the approximation in the immediate vicinity of the source point still remains quite poor. In other words, any attempt to approximate the Green’s function with a partial sum of (5.27) is ineffectual, at least in a neighborhood of the source point (ξ, η).

Fig. 5.1 Profiles of different partial sums of the series in (5.27)

Recall now the cases covered earlier in Examples 5.1 and 5.2, where we managed
to sum the series expansions of Green’s functions. Observe, for example, how the
series in (5.14) converts to the closed analytical form in (5.18). In contrast to those
cases, the series in (5.27) cannot be completely summed, but we can observe that its
singular component can be split off, radically improving its computability.
Since the truncation of the series in (5.27) does not work, an extra effort is required to enhance its computability. One possible way of doing so was proposed in [12]. The idea is to split the expression for gn(x, ξ) in (5.25) into two parts, one of which contains the components responsible for the singularity and allows a complete summation, while the other part leads to a uniformly convergent series. In doing so, we rewrite the coefficient gn(x, ξ) in the form

gn(x, ξ) = (1/2ν) [ e^{−ν|x−ξ|} + ((ν − β)/(ν + β)) e^{−ν(x+ξ)} ],   0 < x, ξ < ∞,

and represent the factor (ν − β)/(ν + β) of its second exponential function e^{−ν(x+ξ)} as

(ν − β)/(ν + β) = 1 − 2β/(ν + β).
This yields

gn(x, ξ) = (1/2ν) [ e^{−ν|x−ξ|} + e^{−ν(x+ξ)} − (2β/(ν + β)) e^{−ν(x+ξ)} ].
Upon substituting the above into (5.27), we rewrite the latter as

G(x, y; ξ, η) = (1/b) Σ_{n=1}^∞ (1/ν) [ e^{−ν|x−ξ|} + e^{−ν(x+ξ)} ] sin νy sin νη
              − (2β/b) Σ_{n=1}^∞ (e^{−ν(x+ξ)}/(ν(ν + β))) sin νy sin νη,   ν = nπ/b.

Clearly, the first of the above two series is summable. The summation can be accomplished in the same way as in Examples 5.1 and 5.2. The second series does not allow a summation. But it is uniformly convergent, and we may leave it in its current form without significantly deteriorating the computability of the whole expression.
Thus, a computer-friendly representation of the Green’s function to the mixed boundary-value problem in (5.24) for the Laplace equation posed on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b} is finally obtained as

G(x, y; ξ, η) = (1/2π) ln [ |1 − e^{ω(z+ζ)}| |1 − e^{ω(z−ζ̄)}| / ( |1 − e^{ω(z−ζ)}| |1 − e^{ω(z+ζ̄)}| ) ]
              − (2β/b) Σ_{n=1}^∞ (e^{−ν(x+ξ)}/(ν(ν + β))) sin νy sin νη,   ω = π/b.   (5.28)
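The splitting can be checked numerically: away from the source point, the raw series (5.27) and the split form (5.28) must agree to machine precision. A sketch (our own code and point choices; the conjugate bars in the logarithm follow our reading of the complex shorthand of Example 5.2):

```python
import cmath
import math

b, beta = 1.0, 0.5
w = math.pi / b

def g_n(x, xi, nu):
    # coefficient (5.25)
    bs = (nu - beta) / (nu + beta)
    lo, hi = min(x, xi), max(x, xi)
    return math.exp(-nu * hi) * (math.exp(nu * lo) + bs * math.exp(-nu * lo)) / (2 * nu)

def G_raw(x, y, xi, eta, N=400):
    # partial sum of the series-only form (5.27)
    s = 0.0
    for n in range(1, N + 1):
        nu = n * math.pi / b
        s += g_n(x, xi, nu) * math.sin(nu * y) * math.sin(nu * eta)
    return 2 * s / b

def G_split(x, y, xi, eta, N=400):
    # closed logarithmic part of (5.28) plus its uniformly convergent series
    z, zeta = complex(x, y), complex(xi, eta)
    num = abs(1 - cmath.exp(w * (z + zeta))) * abs(1 - cmath.exp(w * (z - zeta.conjugate())))
    den = abs(1 - cmath.exp(w * (z - zeta))) * abs(1 - cmath.exp(w * (z + zeta.conjugate())))
    s = math.log(num / den) / (2 * math.pi)
    for n in range(1, N + 1):
        nu = n * math.pi / b
        s -= (2 * beta / b) * math.exp(-nu * (x + xi)) / (nu * (nu + beta)) \
             * math.sin(nu * y) * math.sin(nu * eta)
    return s

pt = (0.40, 0.30, 0.10, 0.50)   # (x, y; xi, eta), away from the source point
assert abs(G_raw(*pt) - G_split(*pt)) < 1e-10
```

Near the source point only G_split remains accurate at modest truncations, which is precisely the motivation for the splitting.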

If β = 0, then the above reduces to the Green’s function

G(x, y; ξ, η) = (1/2π) ln [ |1 − e^{ω(z+ζ)}| |1 − e^{ω(z−ζ̄)}| / ( |1 − e^{ω(z−ζ)}| |1 − e^{ω(z+ζ̄)}| ) ]   (5.29)

of the boundary-value problem

∂u(0, y)/∂x = 0,   u(x, 0) = u(x, b) = 0,

for the Laplace equation on the region Ω = {0 < x < ∞, 0 < y < b}.
We turn again to the expression in (5.28). Its series component converges at a rapid rate. To be more specific, we estimate its N th remainder

RN(x, y; ξ, η) = Σ_{n=N+1}^∞ (e^{−ν(x+ξ)}/(ν(ν + β))) sin νy sin νη.   (5.30)

The exponential and trigonometric factors of the general term in this series never exceed unity. Since the parameter β is nonnegative, we arrive at the following estimate for the absolute value of RN:

|RN(x, y; ξ, η)| ≤ Σ_{n=N+1}^∞ 1/(ν(ν + β)) ≤ Σ_{n=N+1}^∞ 1/ν²
                = (b²/π²) Σ_{n=N+1}^∞ 1/n² = (b²/π²) [ Σ_{n=1}^∞ 1/n² − Σ_{n=1}^N 1/n² ].

The infinite series in parentheses can be summed [9], yielding

|RN(x, y; ξ, η)| ≤ (b²/π²) [ π²/6 − Σ_{n=1}^N 1/n² ].   (5.31)
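A sketch verifying that (5.31) indeed dominates the actual remainder; the "true" remainder is approximated here by a long partial sum, and the parameter values are our own illustrative choices:

```python
import math

b, beta = 1.0, 0.5

def remainder(N, x, y, xi, eta, M=20000):
    # numerical value of R_N in (5.30), truncated far out at M terms
    s = 0.0
    for n in range(N + 1, M + 1):
        nu = n * math.pi / b
        s += math.exp(-nu * (x + xi)) / (nu * (nu + beta)) \
             * math.sin(nu * y) * math.sin(nu * eta)
    return s

def bound_531(N):
    # right-hand side of the uniform estimate (5.31)
    return (b / math.pi) ** 2 * (math.pi ** 2 / 6 - sum(1 / n ** 2 for n in range(1, N + 1)))

for N in (5, 10, 20):
    assert abs(remainder(N, 0.3, 0.4, 0.2, 0.6)) <= bound_531(N)
```

As the text notes, the bound is coarse: the true remainder decays exponentially in N through the factor e^{−ν(x+ξ)}, while (5.31) decays only like 1/N.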

Notice first that the above inequality is quite compact and very simple to use. Second, it provides a uniform estimate and is therefore valid at any point in Ω. Third, from our derivation, it follows that it gives a relatively coarse estimate. The latter makes it advisable to revisit the analysis of (5.30). In doing so, we replace its trigonometric factors with unity and express the parameter ν in terms of n. This yields

|RN(x, y; ξ, η)| ≤ Σ_{n=N+1}^∞ e^{−ν(x+ξ)}/(ν(ν + β)) = (b²/π²) Σ_{n=N+1}^∞ e^{−ν(x+ξ)}/(n(n + β₀)),

where β₀ = βb/π.
In the case of β₀ ≥ 1, the above estimate might be improved. That is,

|RN(x, y; ξ, η)| ≤ (b²/π²) Σ_{n=N+1}^∞ e^{−ν(x+ξ)}/(n(n + 1)) = (b²/π²) [ Σ_{n=1}^∞ e^{−ν(x+ξ)}/(n(n + 1)) − Σ_{n=1}^N e^{−ν(x+ξ)}/(n(n + 1)) ].

Note that the infinite series in the brackets is summable. Using the standard summation formula [9]

Σ_{n=1}^∞ p^n/(n(n + 1)) = 1 − ((1 − p)/p) ln(1/(1 − p)),   p² < 1,

where p = e^{−ν(x+ξ)} and ν = π/b, we arrive at the following estimate for the remainder in (5.30):

|RN(x, y; ξ, η)| ≤ (b²/π²) [ 1 + (e^{ν(x+ξ)} − 1) ln(1 − e^{−ν(x+ξ)}) − Σ_{n=1}^N e^{−nν(x+ξ)}/(n(n + 1)) ].   (5.32)

The above estimate, unlike that in (5.31), is nonuniform. Indeed, its right-hand side depends on the observation and source points to which the estimate is applied. It is therefore more flexible in practical computing: it allows the user to apply different truncations of the series in (5.28) in different zones of Ω in order to maintain a certain desired accuracy level for the entire region.
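Both the auxiliary summation formula and the nonuniform estimate (5.32) are easy to exercise numerically. A sketch (names and parameter values are ours):

```python
import math

def psum_closed(p):
    # sum_{n>=1} p^n/(n(n+1)) = 1 - ((1-p)/p) ln(1/(1-p)),  valid for p^2 < 1
    return 1.0 - (1.0 - p) / p * math.log(1.0 / (1.0 - p))

def psum_partial(p, N=5000):
    return sum(p ** n / (n * (n + 1)) for n in range(1, N + 1))

# check the closed-form summation formula
assert abs(psum_partial(0.7) - psum_closed(0.7)) < 1e-12

def bound_532(N, x, xi, b=1.0):
    # right-hand side of (5.32), with nu = pi/b (the case beta0 >= 1 is assumed)
    nu = math.pi / b
    p = math.exp(-nu * (x + xi))
    head = sum(p ** n / (n * (n + 1)) for n in range(1, N + 1))
    return (b / math.pi) ** 2 * (psum_closed(p) - head)

# the nonuniform bound shrinks as the points move away from the edge x = 0,
# which is exactly what permits zone-dependent truncation
assert bound_532(10, 1.0, 1.0) < bound_532(10, 0.1, 0.1)
```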
The improvement achieved by this transformation of the series-only form of the Green’s function in (5.27) is remarkable. This can be fully appreciated when the profile depicted in Fig. 5.2 is compared with those shown earlier in Fig. 5.1. In Fig. 5.2, the representation in (5.28) is plotted with only the tenth partial sum of its series component accounted for.

Example 5.4 The method of eigenfunction expansion will be used here to construct
the Green’s function of the Dirichlet problem for the Laplace equation stated on a
rectangle.
This is our second look at the problem. It was considered in Chap. 3 where its
Green’s function was obtained by the method of conformal mapping (see the repre-
sentation in (3.41)). The representation in (3.41) is expressed in terms of a special
(Weierstrass) function that is not yet tabulated, making it inconvenient in computing.

Fig. 5.2 Convergence of the representation in (5.28)

The objective in the current example is to derive an alternative representation of the Green’s function for the Laplace equation on a rectangle. We aim, in other words, at a form that is more easily computable. In doing so, consider the boundary-value problem

∂²u(x, y)/∂x² + ∂²u(x, y)/∂y² = −f(x, y),   (x, y) ∈ Ω,   (5.33)

u(x, 0) = u(x, b) = u(0, y) = u(a, y) = 0,   (5.34)

on the rectangle Ω = {0 < x < a, 0 < y < b}, where f(x, y) is assumed to be integrable (continuous) on the closure of Ω.
It is assumed that the reader has learned from a course on differential equations (see, for example, [5, 15]) that the components in the set of functions

Um,n(x, y) = sin μx sin νy,

where μ = mπ/a and ν = nπ/b, with m, n = 1, 2, 3, . . . , represent eigenfunctions of the Dirichlet problem for the Laplace operator on the rectangle Ω. Indeed, one can directly check that every component in the set Um,n(x, y) satisfies the conditions in (5.34) and is also a solution of the static Klein–Gordon equation

∂²Um,n(x, y)/∂x² + ∂²Um,n(x, y)/∂y² + λ² Um,n(x, y) = 0,   (x, y) ∈ Ω,

if the parameter λ is defined in terms of the indices m and n as λ² = μ² + ν².


This motivates the strategy of our approach to the problem in (5.33) and (5.34): we represent its solution u(x, y) in the eigenfunction expansion (double Fourier sine series) form

u(x, y) = Σ_{m,n=1}^∞ um,n sin μx sin νy   (5.35)

and expand also the right-hand-side function f(x, y) in (5.33) in the double Fourier sine series

f(x, y) = Σ_{m,n=1}^∞ fm,n sin μx sin νy.   (5.36)

Once the expansions from (5.35) and (5.36) are substituted into (5.33), we have

− Σ_{m,n=1}^∞ (μ² + ν²) um,n sin μx sin νy = − Σ_{m,n=1}^∞ fm,n sin μx sin νy.

Equating the coefficients of the series on the left-hand side and the right-hand side of the above equation yields

um,n = fm,n/(μ² + ν²).
With the aid of the Euler–Fourier formula, the coefficients fm,n in the series of (5.36) are expressed as

fm,n = (4/ab) ∫₀ᵇ ∫₀ᵃ f(ξ, η) sin μξ sin νη dξ dη.

By substitution of the expression for fm,n into the above formula for the coefficients um,n, and then substituting the coefficients um,n into (5.35), we obtain the solution of the problem posed by (5.33) and (5.34) in the form

u(x, y) = ∫₀ᵇ ∫₀ᵃ [ (4/ab) Σ_{m,n=1}^∞ (sin μx sin νy sin μξ sin νη)/(μ² + ν²) ] f(ξ, η) dξ dη.

Since the solution of the problem in (5.33) and (5.34) is expressed in the integral form of (5.3), the kernel of the above,

G(x, y; ξ, η) = (4/ab) Σ_{m,n=1}^∞ (sin μx sin νy sin μξ sin νη)/(μ² + ν²),   (5.37)

represents the Green’s function of the Dirichlet problem stated on the rectangle Ω = {0 < x < a, 0 < y < b}.
It is evident that the computability represents a critical issue for the double series in (5.37). Addressing this issue in the forthcoming analysis, let a = π and b = π for simplicity. This transforms (5.37) into

G(x, y; ξ, η) = (4/π²) Σ_{m,n=1}^∞ (sin mx sin ny sin mξ sin nη)/(m² + n²),   (5.38)

which is the Green’s function for the square Ω = {0 < x < π, 0 < y < π}.

Fig. 5.3 Convergence of the representation of (5.38)

To examine the convergence rate of the series in (5.38), we depict, in Fig. 5.3,
profiles of its (M, N )th partial sum for various values of the truncation parameters
M and N . The x-coordinate of the field point is fixed at x = π/2, while the source
point (ξ, η) is chosen as (π/2, 2).
Two important observations can be made from the data in Fig. 5.3, and both
of them indicate a low computational potential of the expression in (5.38). First, the
logarithmic singularity is poorly approximated when the series is truncated. Second,
a high-frequency oscillation dramatically reduces its practicality.
Note that the oscillation cannot be entirely eliminated even in the case of M = 100 and N = 100. This implies, in particular, that the accuracy in computing derivatives of the Green’s function (which are frequently required in applications) can be expected to be even lower than that of the function itself.

Hence, some work is required to enhance the computational potential of the representation in (5.38). In [25], for example, it was proposed to rearrange the double summation in (5.38) as

(4/π²) Σ_{n=1}^∞ [ Σ_{m=1}^∞ (sin mx sin mξ)/(m² + n²) ] sin ny sin nη,

which after some trivial algebra reads

(4/π²) Σ_{n=1}^∞ [ (1/2) Σ_{m=1}^∞ (cos m(x − ξ) − cos m(x + ξ))/(m² + n²) ] sin ny sin nη.

Breaking the m-series into two, the above is transformed into

(2/π²) Σ_{n=1}^∞ [ Σ_{m=1}^∞ cos m(x − ξ)/(m² + n²) − Σ_{m=1}^∞ cos m(x + ξ)/(m² + n²) ] sin ny sin nη.   (5.39)
n=1 m=1

Fig. 5.4 Convergence of the representation in (5.40)

In compliance with the standard [9] summation formula

Σ_{m=1}^∞ cos mβ/(m² + α²) = (π/2α) · cosh α(π − β)/sinh απ − 1/(2α²),

where the parameter β is assumed to be bounded, since 0 < β < 2π, each of the m-series in (5.39) is analytically summable. Carrying out the summation, we reduce the double series in (5.38) to

(1/π) Σ_{n=1}^∞ [ (cosh n(π − |x − ξ|) − cosh n(π − (x + ξ)))/(n sinh nπ) ] sin ny sin nη,

or

(2/π) Σ_{n=1}^∞ Tn(x, ξ) sin ny sin nη,   (5.40)

where the coefficient Tn(x, ξ) is defined as

Tn(x, ξ) = (1/(n sinh nπ)) { sinh n(π − x) sinh nξ,  x ≥ ξ,
                             sinh n(π − ξ) sinh nx,  x ≤ ξ. }
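One practical remark on evaluating (5.40): a naive implementation of Tn overflows in double precision for n beyond roughly 225, since sinh nπ then exceeds the largest representable float. Factoring out the dominant exponentials avoids this. The sketch below is our own (the rescaled formula, valid for both branches with hi = max(x, ξ) and lo = min(x, ξ), is an assumption-free algebraic rewriting):

```python
import math

def T_n(n, x, xi):
    # T_n of (5.40), written with scaled exponentials so that no
    # overflowing sinh is ever formed:
    #   T_n = e^{-n(hi-lo)} (1 - e^{-2n(pi-hi)}) (1 - e^{-2n lo})
    #         / (2n (1 - e^{-2n pi}))
    lo, hi = min(x, xi), max(x, xi)
    return (math.exp(-n * (hi - lo))
            * (1 - math.exp(-2 * n * (math.pi - hi)))
            * (1 - math.exp(-2 * n * lo))
            / (2 * n * (1 - math.exp(-2 * n * math.pi))))

def G_single(x, y, xi, eta, N=300):
    # partial sum of the single-series form (5.40)
    return 2 / math.pi * sum(T_n(n, x, xi) * math.sin(n * y) * math.sin(n * eta)
                             for n in range(1, N + 1))

x, y, xi, eta = 2.0, 1.0, 1.0, 2.0
# symmetry of the Green's function in observation and source points
assert abs(G_single(x, y, xi, eta) - G_single(xi, eta, x, y)) < 1e-12
# vanishing on the boundary of the square
assert abs(G_single(math.pi, y, xi, eta)) < 1e-12
assert abs(G_single(x, 0.0, xi, eta)) < 1e-12
```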

To analyze the convergence of the single-series form in (5.40), we depict, in Fig. 5.4, profiles of its N th partial sum in a manner similar to that of Fig. 5.3.
Comparison of the data of Figs. 5.3 and 5.4 clearly indicates that the single-series
expression of the Green’s function works slightly better in the approximation of the
basic logarithmic singularity. But on the other hand, a high-frequency oscillation is
still there for the single-series form. Moreover, it becomes even more notable. So,
neither of the two series representations of the Green’s function obtained so far is
computationally efficient, leaving room for further improvement.
A significant step in that direction can be provided by accelerating the conver-
gence of the form in (5.40). This can be done by operating on either branch of the
coefficient Tn (x, ξ ). For specificity we choose the one valid for x ≥ ξ , and transform

the series as

(2/π) Σ_{n=1}^∞ [ (sinh nπ cosh nx − sinh nx cosh nπ)/(n sinh nπ) ] sinh nξ sin ny sin nη,

or

(2/π) Σ_{n=1}^∞ [ (cosh nx − sinh nx coth nπ)/n ] sinh nξ sin ny sin nη.

Adding and subtracting the term sinh nx in the numerator, we rewrite the above as

(2/π) Σ_{n=1}^∞ (1/n) [ cosh nx − sinh nx + sinh nx (1 − coth nπ) ] sinh nξ sin ny sin nη,

from which, upon removing the brackets, we have

(2/π) Σ_{n=1}^∞ (1/n) sinh nx (1 − coth nπ) sinh nξ sin ny sin nη

+ (2/π) Σ_{n=1}^∞ (1/n) (cosh nx − sinh nx) sinh nξ sin ny sin nη.

It can readily be shown that the second of the above two series is analytically summable. To proceed with the summation, we convert its hyperbolic functions into exponential form. This yields

(2/π) Σ_{n=1}^∞ (1/n) (1 − coth nπ) sinh nx sinh nξ sin ny sin nη

+ (2/π) Σ_{n=1}^∞ (1/n) e^{−nx} ((e^{nξ} − e^{−nξ})/2) sin ny sin nη,

which transforms, by means of elementary algebra, into

(1/2π) Σ_{n=1}^∞ (1/n) [ e^{−n(x−ξ)} − e^{−n(x+ξ)} ] [ cos n(y − η) − cos n(y + η) ]

− (2/π) Σ_{n=1}^∞ (sinh nx sinh nξ)/(n e^{nπ} sinh nπ) sin ny sin nη.
n=1

When the brackets are removed in the first of the above two series, it breaks
into four pieces, each of which allows analytical summation in compliance with the
102 5 Eigenfunction Expansion

standard formula from (5.16). This converts the Green’s function in (5.40) into

1 1 − 2e−(x−ξ ) cos(y + η) + e−2(x−ξ )
G(x, y; ξ, η) = ln
2π 1 − 2e−(x−ξ ) cos(y − η) + e−2(x−ξ )

1 1 − 2e−(x+ξ ) cos(y − η) + e−2(x+ξ )
+ ln
2π 1 − 2e−(x+ξ ) cos(y + η) + e−2(x+ξ )

2  sinh nx sinh nξ
− sin ny sin nη.
π nenπ sinh nπ
n=1

Following some elementary transformations, the logarithmic terms reduce to a more compact form,

G(x, y; ξ, η) = (1/2π) ln [ |1 − e^{z−ζ̄}| |1 − e^{z+ζ̄}| / ( |1 − e^{z−ζ}| |1 − e^{z+ζ}| ) ]

              − (2/π) Σ_{n=1}^∞ (sinh nx sinh nξ)/(n e^{nπ} sinh nπ) sin ny sin nη,   (5.41)

where the complex variable notation

z = x + iy   and   ζ = ξ + iη,

as introduced earlier in Example 5.1, is used for the points (x, y) and (ξ, η).
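The two single-series forms can be checked against each other numerically. The sketch below (our own code; both the Tn of (5.40) and the coefficient of the series in (5.41) are rewritten with scaled exponentials to avoid overflow, and the conjugate bars follow our reading of (5.41)):

```python
import cmath
import math

def T_n(n, x, xi):
    # stable form of T_n from (5.40)
    lo, hi = min(x, xi), max(x, xi)
    return (math.exp(-n * (hi - lo))
            * (1 - math.exp(-2 * n * (math.pi - hi)))
            * (1 - math.exp(-2 * n * lo))
            / (2 * n * (1 - math.exp(-2 * n * math.pi))))

def G_540(x, y, xi, eta, N=300):
    return 2 / math.pi * sum(T_n(n, x, xi) * math.sin(n * y) * math.sin(n * eta)
                             for n in range(1, N + 1))

def G_541(x, y, xi, eta, N=300):
    # logarithmic part of (5.41) via complex moduli
    z, zeta = complex(x, y), complex(xi, eta)
    num = abs(1 - cmath.exp(z - zeta.conjugate())) * abs(1 - cmath.exp(z + zeta.conjugate()))
    den = abs(1 - cmath.exp(z - zeta)) * abs(1 - cmath.exp(z + zeta))
    s = math.log(num / den) / (2 * math.pi)
    for n in range(1, N + 1):
        # sinh(nx) sinh(n xi) / (n e^{n pi} sinh(n pi)), scaled exponentials
        c = (math.exp(n * (x + xi - 2 * math.pi))
             * (1 - math.exp(-2 * n * x)) * (1 - math.exp(-2 * n * xi))
             / (2 * n * (1 - math.exp(-2 * n * math.pi))))
        s -= 2 / math.pi * c * math.sin(n * y) * math.sin(n * eta)
    return s

pt = (2.0, 1.0, 1.0, 2.0)
assert abs(G_540(*pt) - G_541(*pt)) < 1e-12
```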
The computational superiority of the version in (5.41) over those in (5.38) and (5.40) cannot be disputed, mainly because the basic logarithmic singularity of the Green’s function is expressed analytically in (5.41). Indeed, it is contained in the term

(1/2π) ln ( 1 / |1 − e^{z−ζ}| ).   (5.42)
To verify this fact, we expand the exponential function e^{z−ζ} in a Taylor series and substitute it into (5.42). This yields

(1/2π) ln (1/|1 − e^{z−ζ}|) = −(1/2π) ln | (z − ζ) + (1/2!)(z − ζ)² + (1/3!)(z − ζ)³ + · · · |

= −(1/2π) ln [ |z − ζ| · |1 + (1/2!)(z − ζ) + (1/3!)(z − ζ)² + · · · | ]

= (1/2π) ln (1/|z − ζ|) − (1/2π) ln |1 + (1/2!)(z − ζ) + (1/3!)(z − ζ)² + · · · |,

where the first logarithmic term in fact represents the fundamental solution of the Laplace equation, while the second logarithm is a regular function that vanishes at z = ζ.

Table 5.1 Convergence of (5.41) for the source point (3π/4, π/2)

Field point,   Truncation parameter, N
x/π            10         20         50         100        200
0.185          .0299291   .0299291   .0299291   .0299291   .0299291
0.385          .0767002   .0767002   .0767002   .0767002   .0767002
0.585          .1752671   .1752671   .1752671   .1752671   .1752671
0.785          .3851038   .3851034   .3851033   .3851033   .3851033
0.985          .0171972   .0171947   .0171937   .0171936   .0171936

Table 5.2 Convergence of (5.41) for the source point (0.99π, π/2)

Field point,   Truncation parameter, N
x/π            10         20         50         100        200
0.185          .0010733   .0010733   .0010733   .0010733   .0010733
0.385          .0027066   .0027066   .0027066   .0027066   .0027066
0.585          .0057374   .0057367   .0057363   .0057363   .0057363
0.785          .0137082   .0136948   .0136931   .0136928   .0136928
0.985          .3066126   .2703686   .2567230   .2560739   .2560668

It is worth noting that although the basic logarithmic singularity is explicit in (5.41), the representation as a whole still has a computational drawback. That is, its convergence rate varies with the location of the field and source points. In other words, the convergence of the series in (5.41) is nonuniform. Indeed, the series converges at a relatively fast rate unless both (x, y) and (ξ, η) are close to the boundary segment x = π. This feature of series expansions of Green’s functions could be referred to as the near-boundary singularity. The data in Tables 5.1 and 5.2 are presented to illustrate the near-boundary singularity of the representation in (5.41).
The data in Table 5.1 were obtained for a source point relatively remote from the boundary segment x = π, whereas in Table 5.2, the source point is quite close to the boundary. As can be seen from Table 5.1, the data are nearly indifferent to the truncation, indicating rapid convergence of the series (only the last row varies slightly with N). The data in Table 5.2 are, in contrast, significantly affected by N, revealing poor convergence when both the field point and the source point approach the boundary.
The convergence of the representation in (5.41) can be further improved. By an elementary transformation, it reduces to a form that contains a series that is uniformly convergent. That is,

G(x, y; ξ, η) = (1/2π) ln [ |1 − e^{z−ζ̄}| |1 − e^{z+ζ̄}| / ( |1 − e^{z−ζ}| |1 − e^{z+ζ}| ) ]

              + (1/4π) ln [ |1 − e^{z₁+ζ̄₁}| |1 − e^{z₂+ζ̄₂}| / ( |1 − e^{z₁+ζ₁}| |1 − e^{z₂+ζ₂}| ) ]

              + Σ_{n=1}^∞ [ (e^{nπ} cosh n(x − ξ) − cosh nπ cosh n(x + ξ))/(πn e^{2nπ} sinh nπ) ] S(y, η),   (5.43)

where

z₁ = (x + π) + iy,   ζ₁ = (ξ + π) + iη,
z₂ = (x − π) + iy,   ζ₂ = (ξ − π) + iη,

and S(y, η) = sin ny sin nη.


To ensure accurate computation of values of G(x, y; ξ, η), we obtain an estimate of the series remainder in (5.43). In doing so, we write it down as

|RN(x, y; ξ, η)| = | Σ_{n=N+1}^∞ [ (e^{nπ} cosh n(x − ξ) − cosh nπ cosh n(x + ξ))/(πn e^{2nπ} sinh nπ) ] S(y, η) |

≤ Σ_{n=N+1}^∞ | (e^{nπ} cosh n(x − ξ) − cosh nπ cosh n(x + ξ))/(πn e^{2nπ} sinh nπ) |.

Since the second additive term cosh nπ cosh n(x + ξ) in the numerator is never negative, the estimation procedure can be continued as

|RN(x, y; ξ, η)| ≤ Σ_{n=N+1}^∞ (e^{nπ} cosh n(x − ξ))/(πn e^{2nπ} sinh nπ) ≤ Σ_{n=N+1}^∞ (e^{−nπ} cosh nx)/(πn sinh nπ)

≤ (1/π) Σ_{n=N+1}^∞ e^{−nπ}/n = (1/π) [ Σ_{n=1}^∞ e^{−nπ}/n − Σ_{n=1}^N e^{−nπ}/n ].

The infinite series in the above expression is analytically summable [9], leading ultimately to the estimate

|RN(x, y; ξ, η)| ≤ (1/π) [ ln(1/(1 − e^{−π})) − Σ_{n=1}^N e^{−nπ}/n ],

which indicates extremely rapid convergence of the series in (5.43): an error level of order, say, 10⁻⁸ can be attained for any location of the field and the source point with the truncation parameter as low as N = 5. The superiority of the latter form of the Green’s function over all other forms obtained so far is illustrated in Fig. 5.5, where its profile G(π/2, y; π/2, 2) is exhibited as in Figs. 5.3 and 5.4.
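The estimate is cheap to tabulate, and doing so confirms the claim just made. A sketch (the function name is ours):

```python
import math

def bound_543(N):
    # remainder estimate for the series in (5.43):
    #   |R_N| <= (1/pi) [ ln(1/(1 - e^{-pi})) - sum_{n=1}^N e^{-n pi}/n ]
    total = math.log(1.0 / (1.0 - math.exp(-math.pi)))
    head = sum(math.exp(-n * math.pi) / n for n in range(1, N + 1))
    return (total - head) / math.pi

# five terms already guarantee an error below 1e-8 (the bound is about 4e-10)
assert bound_543(5) < 1e-8
assert bound_543(10) < bound_543(5)
```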

Fig. 5.5 Convergence of the representation of (5.43)

Note that the representation in (5.43) of the Green’s function of the Dirichlet problem for the Laplace equation is by far more computer-friendly than those of (5.38), (5.40), and (5.41). Two features of (5.43) support this claim: an analytical form of the basic logarithmic singularity and uniform convergence of its series term. The latter feature allows complete elimination of the high-frequency oscillation by truncating the series to its low partial sums. The fifth partial sum, for example, is accounted for in Fig. 5.5.

5.3 Polar Coordinates

We begin our presentation in this section with a problem that has already been treated twice in the present volume. In Chap. 3, the classical expression of the Green’s function

G(r, ϕ; ρ, ψ) = (1/4π) ln [ (a⁴ − 2rρa² cos(ϕ − ψ) + r²ρ²) / (a² (r² − 2rρ cos(ϕ − ψ) + ρ²)) ]   (5.44)

of the Dirichlet problem for a disk of radius a was constructed by the methods of images and conformal mapping. This time around, the eigenfunction expansion method will be used for its derivation. We thereby provide the necessary background for later application of this method to the construction of Green’s functions for some other problems for which the methods of images and conformal mapping fail.
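Before rederiving (5.44), it is instructive to confirm its defining properties numerically. The sketch below (our own code and parameter choices) checks that the closed form vanishes on the boundary, is symmetric in the observation and source points, and is positive inside the disk:

```python
import math

def G_disk(r, phi, rho, psi, a=1.0):
    # classical form (5.44) for the Dirichlet problem on a disk of radius a
    c = math.cos(phi - psi)
    num = a ** 4 - 2 * r * rho * a ** 2 * c + (r * rho) ** 2
    den = a ** 2 * (r ** 2 - 2 * r * rho * c + rho ** 2)
    return math.log(num / den) / (4 * math.pi)

a = 2.0
# vanishes when the observation point reaches the boundary r = a
assert abs(G_disk(a, 0.7, 0.5, 1.9, a)) < 1e-14
# symmetric in the observation and source points
assert abs(G_disk(0.4, 0.7, 1.1, 1.9, a) - G_disk(1.1, 1.9, 0.4, 0.7, a)) < 1e-14
# positive inside, as a two-dimensional Dirichlet Green's function should be
assert G_disk(0.4, 0.7, 1.1, 1.9, a) > 0
```

Positivity follows from the identity num − den = (a² − r²)(a² − ρ²) > 0 for interior points, and the boundary property from num = den at r = a.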

Example 5.5 On the disk Ω = {0 < r < a, 0 ≤ ϕ < 2π} of radius a, the boundary-value problem

lim_{r→0} |u(r, ϕ)| < ∞   and   u(a, ϕ) = 0   (5.45)

is considered for the Poisson equation

(1/r) ∂/∂r ( r ∂u(r, ϕ)/∂r ) + (1/r²) ∂²u(r, ϕ)/∂ϕ² = −f(r, ϕ),   (r, ϕ) ∈ Ω.   (5.46)

Note that the boundedness condition as r approaches zero is required in the above problem, because r = 0 represents a singular point of the governing equation.
As the reader has already learned in this section, our objective in the method of eigenfunction expansion is to express the solution of the problem in (5.45) and (5.46) in integral form, which in this case reads as

u(r, ϕ) = ∫₀^{2π} ∫₀ᵃ G(r, ϕ; ρ, ψ) f(ρ, ψ) ρ dρ dψ,   (5.47)

where ρ dρ dψ represents the area element in polar coordinates. This gives the Green’s function G(r, ϕ; ρ, ψ) in which we are interested.
Taking into account the 2π-periodicity of the solution u(r, ϕ) of the problem in (5.45) and (5.46) with respect to the variable ϕ, we expand it in the trigonometric Fourier series

u(r, ϕ) = (1/2) u0(r) + Σ_{n=1}^∞ [ un^c(r) cos nϕ + un^s(r) sin nϕ ].   (5.48)

The right-hand-side function f(r, ϕ) in (5.46) is also represented by the Fourier series

f(r, ϕ) = (1/2) f0(r) + Σ_{n=1}^∞ [ fn^c(r) cos nϕ + fn^s(r) sin nϕ ].   (5.49)

By substitution of the expansions from (5.48) and (5.49) into (5.46) and equating the corresponding coefficients of the series on both sides, we derive the following linear ordinary differential equation

d/dr ( r dun(r)/dr ) − (n²/r) un(r) = −r fn(r),   n = 0, 1, 2, . . . ,   (5.50)

in the coefficients un(r) of the expansion in (5.48). At the current stage of our development, we omit the superscripts on un(r) and fn(r) for notational convenience.
The relations in (5.45) imply that the solution un(r) of (5.50) should satisfy the boundary conditions

lim_{r→0} |un(r)| < ∞   and   un(a) = 0.   (5.51)

It is worth noting that the fundamental set of solutions of the homogeneous equation corresponding to (5.50) for the case n = 0 differs from that for the case n ≥ 1. This means that in constructing the Green’s function for the boundary-value problem in (5.50) and (5.51), the two cases must be considered separately.
In the case n = 0, the boundary-value problem in (5.50) and (5.51) reduces to

d/dr ( r du0(r)/dr ) = −r f0(r),   (5.52)

lim_{r→0} |u0(r)| < ∞   and   u0(a) = 0,   (5.53)

with the functions u(r) = ln r and u(r) = 1 representing a fundamental set of solutions for the homogeneous equation corresponding to (5.52). Hence, the general solution of (5.52) can be written, by the method of variation of parameters, as

u0(r) = C1(r) ln r + C2(r).   (5.54)

Substituting this into (5.52) and following the routine of the method, we obtain

C1′(r) = −r f0(r)   and   C2′(r) = r ln r f0(r).

Integration of these expressions yields

C1(r) = − ∫₀ʳ ρ f0(ρ) dρ + D1

and

C2(r) = ∫₀ʳ ρ ln ρ f0(ρ) dρ + D2.
Once the above quantities are substituted into (5.54) and the integral terms are combined, the general solution of (5.52) is found as

u0(r) = ∫₀ʳ ln(ρ/r) ρ f0(ρ) dρ + D1 ln r + D2.

The values

D1 = 0   and   D2 = − ∫₀ᵃ ln(ρ/a) ρ f0(ρ) dρ

of the constants D1 and D2 are obtained by taking advantage of the boundary conditions in (5.53). Upon substituting these into the above expression for u0(r), the solution of the boundary-value problem in (5.52) and (5.53) reads as

u0(r) = ∫₀ʳ ln(ρ/r) ρ f0(ρ) dρ − ∫₀ᵃ ln(ρ/a) ρ f0(ρ) dρ,

which can be rewritten in the single-integral form

u0(r) = ∫₀ᵃ g0(r, ρ) f0(ρ) ρ dρ,   (5.55)

where the kernel

g0(r, ρ) = − { ln(ρ/a),  for r ≤ ρ,
               ln(r/a),  for r ≥ ρ, }   (5.56)

represents the Green’s function of the homogeneous problem corresponding to that posed by (5.52) and (5.53).
We turn now to the case n ≥ 1. That is, we consider the boundary-value problem in (5.50) and (5.51) as it is. Since the equation in (5.50) is of Cauchy–Euler type, its fundamental set of solutions can be formed with the functions u(r) = r^n and u(r) = r^{−n}. This yields the general solution of (5.50) in the form

un(r) = C1(r) r^n + C2(r) r^{−n},

and after proceeding through the variation of parameters routine, we derive the above as

un(r) = − ∫₀ʳ (1/2n) [ (r/ρ)^n − (ρ/r)^n ] ρ fn(ρ) dρ + D1 r^n + D2 r^{−n}.   (5.57)

The boundary conditions in (5.51) yield

D2 = 0   and   D1 = ∫₀ᵃ (1/2n) [ (1/ρ)^n − (ρ/a²)^n ] ρ fn(ρ) dρ.

Upon substituting these into (5.57), we obtain

un(r) = − ∫₀ʳ (1/2n) [ (r/ρ)^n − (ρ/r)^n ] ρ fn(ρ) dρ + ∫₀ᵃ (1/2n) [ (r/ρ)^n − (rρ/a²)^n ] ρ fn(ρ) dρ,

or, using a more compact notation,

un(r) = ∫₀ᵃ gn(r, ρ) fn(ρ) ρ dρ,   (5.58)

where the kernel gn(r, ρ) is defined in two pieces. For r ≤ ρ, it reads as

gn(r, ρ) = (1/2n) [ (r/ρ)^n − (rρ/a²)^n ],   for r ≤ ρ,

while for r ≥ ρ, we have

gn(r, ρ) = (1/2n) [ (ρ/r)^n − (rρ/a²)^n ],   for r ≥ ρ.
The expression for un(r) in (5.58) suggests that the cosine and sine coefficients in the Fourier series of (5.48) can be written as

un^c(r) = ∫₀ᵃ gn(r, ρ) fn^c(ρ) ρ dρ   (5.59)

and

un^s(r) = ∫₀ᵃ gn(r, ρ) fn^s(ρ) ρ dρ,   (5.60)

where, in compliance with the Euler–Fourier formulas,

fn^c(ρ) = (1/π) ∫₀^{2π} f(ρ, ψ) cos nψ dψ,   n = 0, 1, 2, . . . ,   (5.61)

and

fn^s(ρ) = (1/π) ∫₀^{2π} f(ρ, ψ) sin nψ dψ,   n = 1, 2, 3, . . . .   (5.62)

Upon substituting the expressions for fnc () and fns () from (5.61) and (5.62)
into (5.55), (5.59), and (5.60), and then the coefficients u0 (r), ucn (r), and usn (r)
into (5.55), we obtain the solution of the boundary-value problem posed by (5.45)
and (5.46) in the form
  ∞
2π a 1 g0 (r, ) 
u(r, ϕ) = + gn (r, )(cos nϕ cos nψ + sin nϕ sin nψ)
0 0 π 2
n=1
× f (, ψ)ddψ,

which can be written in a more compact form, after the factor of gn (r, ) in the
above series reads as a single trigonometric function. That is,
  a ∞

2π 1 g0 (r, ) 
u(r, ϕ) = + gn (r, ) cos n(ϕ − ψ) f (, ψ)ddψ.
0 0 π 2
n=1
(5.63)
Since the expression ρ dρ dψ represents the area element in polar coordinates, we observe that the solution of the boundary-value problem in (5.45) and (5.46) is obtained in the integral form of (5.3). Indeed, the integration in (5.63) is taken over the entire disk Ω. This allows us to conclude that the kernel in the above integral,

    G(r,ϕ;ρ,ψ) = (1/2π) [g₀(r,ρ) + 2 Σ_{n=1}^∞ g_n(r,ρ) cos n(ϕ − ψ)],    (5.64)
represents the Green's function of the Dirichlet problem for the Laplace equation on the disk of radius a.
To proceed with the summation of the series term in G(r,ϕ;ρ,ψ), either branch of g₀(r,ρ) and g_n(r,ρ) can be used. Taking, for instance, the branch valid for r ≤ ρ and substituting into (5.64), we have

    G(r,ϕ;ρ,ψ) = (1/2π) {−ln(ρ/a) + Σ_{n=1}^∞ (1/n)[(r/ρ)^n − (rρ/a²)^n] cos n(ϕ − ψ)}.

Recalling the summation formula from (5.16), we arrive at

    G(r,ϕ;ρ,ψ) = (1/2π) {−ln(ρ/a) − (1/2) ln[1 − 2(r/ρ) cos(ϕ − ψ) + (r/ρ)²]
                  + (1/2) ln[1 − 2(rρ/a²) cos(ϕ − ψ) + (rρ/a²)²]},
which, after some trivial algebra, reads as the familiar form

    G(r,ϕ;ρ,ψ) = (1/4π) ln[(a⁴ − 2rρa² cos(ϕ − ψ) + r²ρ²) / (a²(r² − 2rρ cos(ϕ − ψ) + ρ²))]

of the Green's function of the Dirichlet problem on the disk of radius a.
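As an illustrative check of our own (not part of the book's derivation), the truncated series form (5.64) can be compared against the closed form just obtained; for a field point well inside the disk, the partial sums converge geometrically.

```python
import math

# Compare the truncated eigenfunction series (5.64) with the closed form
# (1/4π) ln[(a⁴ − 2rρa²cosθ + r²ρ²)/(a²(r² − 2rρcosθ + ρ²))] for the disk.
# Written for the branch r <= rho.

def G_series(r, rho, phi, psi, a, N):
    s = -math.log(rho / a)  # g0 branch for r <= rho
    for n in range(1, N + 1):
        s += ((r / rho) ** n - (r * rho / a**2) ** n) * math.cos(n * (phi - psi)) / n
    return s / (2 * math.pi)

def G_closed(r, rho, phi, psi, a):
    th = phi - psi
    num = a**4 - 2 * r * rho * a**2 * math.cos(th) + r**2 * rho**2
    den = a**2 * (r**2 - 2 * r * rho * math.cos(th) + rho**2)
    return math.log(num / den) / (4 * math.pi)

r, rho, phi, psi, a = 0.4, 0.8, 1.0, 2.2, 1.5
print(abs(G_series(r, rho, phi, psi, a, 200) - G_closed(r, rho, phi, psi, a)))
```

Since r/ρ = 0.5 here, the truncation error decays like (1/2)^N, and two hundred terms agree with the closed form to machine precision.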

In the following example, we introduce a derivation that yields a computer-friendly representation of a Green's function, as an alternative to the classical form, whose computer implementation is not as straightforward.

Example 5.6 Let us turn to the mixed boundary-value problem

    ∂u(a,ϕ)/∂r + βu(a,ϕ) = 0,  β > 0,    (5.65)

stated on the disk Ω = {0 < r < a, 0 ≤ ϕ < 2π}. Recall that due to the singularity of the Laplace operator at the point r = 0, the boundedness condition as r approaches zero also applies, to make the problem well posed.
Tracing out the procedure of the method of eigenfunction expansion for the setting in (5.46) and (5.65), we expand its solution u(r,ϕ) and the right-hand-side function f(r,ϕ) of the equation in (5.46) in the Fourier series of (5.48) and (5.49), respectively. This yields, for our setting, the boundary-value problem

    d/dr (r du_n(r)/dr) − (n²/r) u_n(r) = −r f_n(r),  n = 0, 1, 2, . . . ,    (5.66)

    lim_{r→0} |u_n(r)| < ∞,  and  du_n(a)/dr + βu_n(a) = 0,    (5.67)
in the coefficients u_n(r) of the expansion in (5.48).
As in the treatment of the problem in Example 5.5, the cases n = 0 and n ≥ 1 have to be considered separately. For n = 0, the Green's function of the homogeneous boundary-value problem corresponding to that in (5.66) and (5.67) is found as

    g₀(r,ρ) = 1/(aβ) − ln(ρ/a),  for r ≤ ρ,

while the case n ≥ 1 yields

    g_n(r,ρ) = (1/2n) [(r/ρ)^n + (rρ/a²)^n] − [aβ/(n(n + aβ))] (rρ/a²)^n,  for r ≤ ρ.

This leads to the Green's function of the homogeneous setting corresponding to that in (5.46) and (5.65) in the form

    G(r,ϕ;ρ,ψ) = (1/2π) {1/(aβ) − ln(ρ/a)
                  + Σ_{n=1}^∞ (1/n) [(r/ρ)^n + (rρ/a²)^n − (2aβ/(n + aβ))(rρ/a²)^n] cos n(ϕ − ψ)},

where the series is partially summable. By applying the standard summation formula from (5.16), we convert the above representation to

    G(r,ϕ;ρ,ψ) = (1/2π) {1/(aβ) − ln(ρ/a) − L₁(r,ϕ;ρ,ψ) − L₂(r,ϕ;ρ,ψ)
                  − Σ_{n=1}^∞ [2aβ/(n(n + aβ))] (rρ/a²)^n cos n(ϕ − ψ)},

where

    L₁(r,ϕ;ρ,ψ) = (1/2) ln[1 − 2(r/ρ) cos(ϕ − ψ) + (r/ρ)²]

and

    L₂(r,ϕ;ρ,ψ) = (1/2) ln[1 − 2(rρ/a²) cos(ϕ − ψ) + (rρ/a²)²].
Following trivial transformations, this cumbersome form reduces to a more compact one as

    G(r,ϕ;ρ,ψ) = (1/2π) {1/(aβ) + ln[a³/(|z − ζ||z̄ζ − a²|)]
                  − Σ_{n=1}^∞ [2aβ/(n(n + aβ))] (rρ/a²)^n cos n(ϕ − ψ)},    (5.68)

where z = re^{iϕ} and ζ = ρe^{iψ}.

Clearly, the series in (5.68) converges at the rate 1/n², making the entire representation convenient for computer implementation.
It is worth noting that the boundary condition in (5.65) reduces to the Dirichlet type if the parameter β is taken to infinity. In compliance with this note, the limit of the expression in (5.68) as β approaches infinity should represent the Green's function for the Dirichlet problem on the disk of radius a. Indeed, taking the limit in (5.68), one arrives at

    G(r,ϕ;ρ,ψ) = (1/2π) {ln[a³/(|z − ζ||z̄ζ − a²|)] − Σ_{n=1}^∞ (2/n)(rρ/a²)^n cos n(ϕ − ψ)},    (5.69)

where the series sums as

    Σ_{n=1}^∞ (2/n)(rρ/a²)^n cos n(ϕ − ψ) = −2 ln[|z̄ζ − a²|/a²],

transforming (5.69) to the familiar form

    G(r,ϕ;ρ,ψ) = (1/2π) ln[|z̄ζ − a²|/(a|z − ζ|)]

of the Green's function for the Dirichlet problem on the disk of radius a, which was just derived in Example 5.5.
So far in this section, we have dealt with more or less trivial problems, those
whose Green’s functions can be found in existing texts on partial differential equa-
tions. In the examples that follow, we turn to a series of boundary-value problems
for the Laplace equation whose Green’s functions are not so readily available.

Example 5.7 Consider the Dirichlet problem stated for the Laplace equation on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}, and look for a computer-friendly form of its Green's function.

Acting in compliance with our strategy, we subject the Poisson equation (5.46)
of Example 5.5 to the boundary conditions

u(a, ϕ) = 0, u(b, ϕ) = 0. (5.70)

It appears that the procedure of the method of eigenfunction expansion is efficient


in this case. Tracing it out, we expand the functions u(r, ϕ) and f (r, ϕ) in (5.46) in
the Fourier series shown earlier in (5.48) and (5.49). This yields the boundary-value
problem
un (a) = 0 and un (b) = 0 (5.71)
stated for the equation in (5.50) of Example 5.5.
Recall again that the cases n = 0 and n ≥ 1 in (5.50) must be treated separately. For n = 0, the boundary-value problem in (5.50) and (5.71) transforms, for this setting, into

    d/dr (r du₀(r)/dr) = −r f₀(r),    (5.72)

    u₀(a) = 0,  and  u₀(b) = 0,    (5.73)

and a solution for the above problem is found in the integral form

    u₀(r) = ∫_a^b g₀(r,ρ) f₀(ρ)ρ dρ,    (5.74)

where the kernel

    g₀(r,ρ) = (1/ln(b/a)) × { ln(r/a) ln(b/ρ),  for r ≤ ρ;
                              ln(ρ/a) ln(b/r),  for r ≥ ρ }    (5.75)

represents the Green's function of the homogeneous problem corresponding to that posed by (5.72) and (5.73).
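A small check of our own (not from the book) confirms the defining properties of (5.75): the kernel vanishes at both boundaries of the annulus and is continuous across r = ρ.

```python
import math

# Sanity check of the n = 0 annulus kernel (5.75): Dirichlet conditions at
# r = a and r = b, and continuity across the source radius r = rho.

def g0(r, rho, a, b):
    if r <= rho:
        return math.log(r / a) * math.log(b / rho) / math.log(b / a)
    return math.log(rho / a) * math.log(b / r) / math.log(b / a)

a, b, rho = 1.0, 3.0, 1.7
assert abs(g0(a, rho, a, b)) < 1e-14 and abs(g0(b, rho, a, b)) < 1e-14
assert abs(g0(rho - 1e-12, rho, a, b) - g0(rho + 1e-12, rho, a, b)) < 1e-9
print("g0 checks passed")
```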
Following our procedure for the solution of the problem in (5.50) and (5.71), in the case of n ≥ 1, we arrive at

    u_n(r) = ∫_a^b g_n(r,ρ) f_n(ρ)ρ dρ,    (5.76)

with the kernel function found as

    g_n(r,ρ) = [(rρ)^{−n}/(2n(b^{2n} − a^{2n}))] × { (b^{2n} − ρ^{2n})(r^{2n} − a^{2n}),  for r ≤ ρ;
                                                     (b^{2n} − r^{2n})(ρ^{2n} − a^{2n}),  for r ≥ ρ }    (5.77)

representing the Green's function of the homogeneous problem corresponding to that posed by (5.50) and (5.71).
Upon substituting the expressions from (5.74) and (5.76) into the expansion of (5.48), the solution to the boundary-value problem stated by (5.46) and (5.70) reduces to the volume integral

    u(r,ϕ) = ∫₀^{2π} ∫_a^b (1/π) [g₀(r,ρ)/2 + Σ_{n=1}^∞ g_n(r,ρ) cos nϕ cos nψ
             + Σ_{n=1}^∞ g_n(r,ρ) sin nϕ sin nψ] f(ρ,ψ)ρ dρ dψ,

which can be rewritten as

    u(r,ϕ) = ∫₀^{2π} ∫_a^b (1/π) [g₀(r,ρ)/2 + Σ_{n=1}^∞ g_n(r,ρ) cos n(ϕ − ψ)] f(ρ,ψ)ρ dρ dψ.    (5.78)
As soon as the solution to the problem in (5.46) and (5.70) appears in the integral form of (5.3), we conclude that the kernel function

    G(r,ϕ;ρ,ψ) = (1/2π) [g₀(r,ρ) + 2 Σ_{n=1}^∞ g_n(r,ρ) cos n(ϕ − ψ)]    (5.79)

in (5.78) represents the Green's function of the Dirichlet problem for the Laplace equation stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}.
Close analysis shows that the representation in (5.79) does not guarantee a high level of accuracy in computing the Green's function. The cause is the behavior of the coefficient g_n(r,ρ), which gives rise to two different types of singularity in the series of (5.79). The first of the singularities is of the principal logarithmic type, which shows up whenever the field point (r,ϕ) approaches the source point (ρ,ψ), whereas the second singularity could be called the near-boundary type. It shows up whenever both the field and the source point approach either the inner (r = a) or the outer (r = b) fragment of the boundary of Ω.

Fig. 5.6 Profile of the representation of (5.79), with N = 10

Fig. 5.7 Profile of the representation of (5.79), with N = 100
The accuracy level attainable in the direct evaluation of the expansion in (5.79) can
be observed in Fig. 5.6, where the profile G(r, ϕ; 2.0, 4π/9) of the Green’s function
for the annular region with a = 1.0 and b = 3.0 is depicted. The series in (5.79)
was truncated to its tenth partial sum, which is clearly insufficient for a reasonable
approximation.
To find out how the order of a partial sum affects the accuracy level attain-
able by the expansion in (5.79), we present Fig. 5.7. As in Fig. 5.6, the profile
G(r, ϕ; 2.0, 4π/9) of the Green’s function is depicted, with the 100th partial sum of
the series in (5.79) accounted for. Clearly, such a radical increase in the order of the
partial sum notably improves the accuracy level overall, but it still remains low very
close to the angular coordinate ψ of the source point.

As follows from the analysis of the data in Figs. 5.6 and 5.7, the costly involvement of higher partial sums in computing the nonuniformly convergent series in (5.79) can hardly be considered productive. However, an effective way of improving the convergence of the series prior to its computer implementation can be found. This can be done by some analytical work on the coefficient g_n(r,ρ). Taking its branch valid for r ≤ ρ, we have

    g_n(r,ρ) = (1/(2n(rρ)^n)) [1/(b^{2n} − a^{2n}) − 1/b^{2n} + 1/b^{2n}] (b^{2n} − ρ^{2n})(r^{2n} − a^{2n})
             = (1/(2n(rρ)^n)) [a^{2n}/(b^{2n}(b^{2n} − a^{2n})) + 1/b^{2n}] (b^{2n} − ρ^{2n})(r^{2n} − a^{2n}).

With this, the series in (5.79) breaks into two pieces, the first of which, the one associated with the term

    a^{2n}/(b^{2n}(b^{2n} − a^{2n})),

is uniformly convergent. The other series, the one associated with the term 1/b^{2n}, is nonuniformly convergent. But it allows a complete summation using the standard formula from (5.16). This yields a computer-friendly form for the Green's function as

    G(r,ϕ;ρ,ψ) = (1/2π) [g₀(r,ρ) + 2 Σ_{n=1}^∞ g_n*(r,ρ) cos n(ϕ − ψ)]
               + (1/4π) ln[(a⁴ − 2a²rρ cos(ϕ − ψ) + r²ρ²) / (r² − 2rρ cos(ϕ − ψ) + ρ²)]
               + (1/4π) ln[(b⁴ − 2b²rρ cos(ϕ − ψ) + r²ρ²) / (b⁴r² − 2a²b²rρ cos(ϕ − ψ) + a⁴ρ²)],    (5.80)

where the coefficient g_n*(r,ρ) is found as

    g_n*(r,ρ) = a^{2n}(b^{2n} − ρ^{2n})(r^{2n} − a^{2n}) / [2n(b²rρ)^n(b^{2n} − a^{2n})].    (5.81)

Recall that the expression for G(r,ϕ;ρ,ψ) in (5.80) is valid for r ≤ ρ. The following steps should be taken to convert the expression in (5.80) to a form valid for r ≥ ρ: (i) choose the corresponding branch of g₀(r,ρ); (ii) interchange the variables r and ρ in (5.81); and (iii) replace the denominator of the second logarithmic term in (5.80) with

    b⁴ρ² − 2a²b²rρ cos(ϕ − ψ) + a⁴r².
Fig. 5.8 Profile of the representation of (5.82), with N = 10

A shorthand complex-variable-based notation can be introduced for the arguments of the logarithmic terms in (5.80), so that the expression for the Green's function of the Dirichlet problem on the annular region of radii a and b finally reads

    G(r,ϕ;ρ,ψ) = (1/2π) [g₀(r,ρ) + 2 Σ_{n=1}^∞ g_n*(r,ρ) cos n(ϕ − ψ)]
               + (1/2π) ln[|a² − z̄ζ||b² − z̄ζ| / (|z − ζ||b²z − a²ζ|)],    (5.82)

where the factor |b²z − a²ζ| in the denominator holds for r ≤ ρ, while for r ≥ ρ it must be replaced with |b²ζ − a²z|.
It can easily be shown that the representation in (5.82) is notably more efficient
than that of (5.79). Two features support this assertion. First, the principal singular-
ity term is expressed analytically. Second, the series in (5.82) converges uniformly,
allowing a fairly accurate evaluation at a relatively low cost. The smooth graph in
Fig. 5.8 convincingly supports the efficient computability of the above representa-
tion of the Green’s function.
As in Figs. 5.6 and 5.7, the profile G(r, ϕ; 2.0, 4π/9) of the Green’s function is
depicted in Fig. 5.8 for the annulus of radii a = 1.0 and b = 3.0, with the series
in (5.82) truncated to its tenth partial sum.
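The claimed superiority of (5.82) over (5.79) can be seen directly from the decay of the series coefficients. The following sketch (ours, not from the book) evaluates both coefficients at a point where the field and source radii coincide — the worst case for the raw series.

```python
import math

# Compare the decay of the raw coefficients g_n in (5.79) with the
# singularity-extracted coefficients g_n* in (5.82), at r = rho (worst case).

def gn(r, rho, n, a, b):
    lo, hi = min(r, rho), max(r, rho)
    return ((b**(2*n) - hi**(2*n)) * (lo**(2*n) - a**(2*n))
            / (2 * n * (r * rho) ** n * (b**(2*n) - a**(2*n))))

def gn_star(r, rho, n, a, b):
    lo, hi = min(r, rho), max(r, rho)
    return (a**(2*n) * (b**(2*n) - hi**(2*n)) * (lo**(2*n) - a**(2*n))
            / (2 * n * (b**2 * r * rho) ** n * (b**(2*n) - a**(2*n))))

a, b, r, rho = 1.0, 3.0, 2.0, 2.0
for n in (1, 5, 20):
    print(n, gn(r, rho, n, a, b), gn_star(r, rho, n, a, b))
```

At r = ρ the raw coefficient behaves like 1/(2n), explaining the slow convergence in Figs. 5.6 and 5.7, while g_n* carries the extra geometric factor (a²/b²)^n and is already negligible by n = 20.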

Example 5.8 We turn now to a mixed boundary-value problem. As such, the “Dirichlet–Neumann” setting

    u(a,ϕ) = 0,  ∂u(b,ϕ)/∂r = 0,    (5.83)

for the Laplace equation is considered on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}.
Tracing out the procedure of the method of eigenfunction expansion for the setting in (5.46) and (5.83), we arrive at the series representation in (5.79) for the Green's function of the corresponding homogeneous boundary-value problem. Expressions for the coefficients g₀(r,ρ) and g_n(r,ρ) of the series term in (5.79) that are valid for r ≤ ρ are found, in this case, as

    g₀(r,ρ) = ln(r/a)  and  g_n(r,ρ) = (b^{2n} + ρ^{2n})(r^{2n} − a^{2n}) / [2n(rρ)^n(b^{2n} + a^{2n})],

and their forms valid for r ≥ ρ can be obtained from those above by interchanging the variables r and ρ.
Upon substituting the g₀(r,ρ) and g_n(r,ρ) just shown into (5.79) and proceeding through some algebra, we obtain the computer-friendly form

    G(r,ϕ;ρ,ψ) = (1/2π) {ln[|b²z − a²ζ||a² − z̄ζ| / (a|z||z − ζ||b² − z̄ζ|)]
                  + Σ_{n=1}^∞ g_n*(r,ρ) cos n(ϕ − ψ)}    (5.84)

of the Green's function of the “Dirichlet–Neumann” problem for the annulus of radii a and b. The r ≤ ρ branch of g_n*(r,ρ) is found, in this case, as

    g_n*(r,ρ) = a^{2n}(b^{2n} + ρ^{2n})(a^{2n} − r^{2n}) / [n(b²rρ)^n(b^{2n} + a^{2n})],

while for its r ≥ ρ branch, the variables r and ρ in g_n*(r,ρ) must be interchanged. In addition, the factors |z| and |b²z − a²ζ| in the argument of the logarithmic term in (5.84) hold for r ≤ ρ, while for r ≥ ρ they must be replaced with |ζ| and |b²ζ − a²z|, respectively.
The series in (5.84) converges uniformly, allowing an accurate immediate evaluation of the Green's function by truncating the series to its Nth partial sum. To verify this claim, the reader is encouraged to take a close look at the coefficient g_n*(r,ρ) of the series in (5.84). It is also recommended that some profiles of this representation be depicted for different values of the truncation parameter N.

Example 5.9 Consider another mixed boundary-value problem for the Laplace equation, that is, the “Neumann–Dirichlet” problem

    ∂u(a,ϕ)/∂r = 0,  u(b,ϕ) = 0,    (5.85)

stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}.
The Green's function of the homogeneous boundary-value problem corresponding to that in (5.46) and (5.85), constructed by the eigenfunction expansion method, appears, this time around, again as the series representation in (5.79). Expressions for the coefficients g₀(r,ρ) and g_n(r,ρ) of the series that are valid for r ≤ ρ are found in this case as

    g₀(r,ρ) = ln(b/ρ)  and  g_n(r,ρ) = (b^{2n} − ρ^{2n})(r^{2n} + a^{2n}) / [2n(rρ)^n(b^{2n} + a^{2n})],

while for r ≥ ρ, the variables r and ρ in the above must be interchanged.



As in the derivation in the previous example, we substitute the above expressions for the components g₀(r,ρ) and g_n(r,ρ) into (5.79). Performing then some trivial algebra, the r ≤ ρ branch of the Green's function that we are looking for is obtained in the computer-friendly form

    G(r,ϕ;ρ,ψ) = (1/2π) {ln[|ζ||b²z − a²ζ||b² − z̄ζ| / (b³|z − ζ||a² − z̄ζ|)]
                  + Σ_{n=1}^∞ g_n*(r,ρ) cos n(ϕ − ψ)},    (5.86)

where

    g_n*(r,ρ) = a^{2n}(ρ^{2n} − b^{2n})(a^{2n} + r^{2n}) / [n(b²rρ)^n(b^{2n} + a^{2n})].    (5.87)
Note that to obtain the branch of G(r,ϕ;ρ,ψ) valid for r ≥ ρ, the variables r and ρ in (5.87) should be interchanged, while the factors |ζ| and |b²z − a²ζ| in the argument of the logarithmic term in (5.86) must be replaced with |z| and |b²ζ − a²z|, respectively.
Uniform convergence of the series in (5.86) can again be justified by analysis of its coefficient g_n*(r,ρ). To illustrate this assertion, the reader is advised to depict some profiles of the Green's function by experimenting with different values of the truncation parameter N.

5.4 Chapter Exercises


5.1 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the boundary-value problem

    u(x, 0) = ∂u(x,b)/∂y = 0,

stated on the infinite strip Ω = {−∞ < x < ∞, 0 < y < b}.

5.2 Construct the Green's function of the Laplace equation for the boundary-value problem

    u(0, y) = u(x, 0) = ∂u(x,b)/∂y = 0

stated on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b}.

5.3 Construct the Green's function of the Laplace equation for the boundary-value problem

    ∂u(0,y)/∂x = u(x, 0) = ∂u(x,b)/∂y = 0

stated on the semi-infinite strip Ω = {0 < x < ∞, 0 < y < b}.

5.4 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the mixed boundary-value problem

    u(0, y) = u(x, 0) = u(x, b) = ∂u(a,y)/∂x + βu(a, y) = 0,

where β ≥ 0, stated on the rectangle Ω = {0 < x < a, 0 < y < b}.

5.5 Construct the Green's function of the Laplace equation for the “Dirichlet–mixed” problem

    u(a, ϕ) = 0,  ∂u(b,ϕ)/∂r + βu(b, ϕ) = 0,  β ≥ 0,

stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Notice that in the case
β = 0, your representation of the Green’s function reduces to that of Example 5.8,
whereas in the case of β approaching infinity, it reduces to the form derived in
Example 5.7.

5.6 Construct the Green's function of the Laplace equation for the “mixed–Dirichlet” problem

    u(b, ϕ) = 0,  ∂u(a,ϕ)/∂r − βu(a, ϕ) = 0,  β ≥ 0,

stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Treat the cases in which
the parameter β either is equal to zero or approaches infinity.

5.7 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the “Neumann–mixed” problem

    ∂u(a,ϕ)/∂r = 0,  ∂u(b,ϕ)/∂r + βu(b, ϕ) = 0,  β > 0,

stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Explain why, in the
case β = 0, the problem is ill posed, implying that its Green’s function does not
exist. Consider also the case of β approaching infinity.

5.8 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the “mixed–Neumann” boundary-value problem

    ∂u(a,ϕ)/∂r − βu(a, ϕ) = 0,  ∂u(b,ϕ)/∂r = 0,

stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Treat the cases in which
the parameter β either is equal to zero or approaches infinity.
120 5 Eigenfunction Expansion

5.9 Use the method of eigenfunction expansion to construct the Green's function of the Laplace equation for the “mixed–mixed” boundary-value problem

    ∂u(a,ϕ)/∂r − β₁u(a, ϕ) = 0,  ∂u(b,ϕ)/∂r + β₂u(b, ϕ) = 0,  β₁, β₂ > 0,

stated on the annular region Ω = {a < r < b, 0 ≤ ϕ < 2π}. Explain why in the case
that both the parameters β1 and β2 are equal to zero, the problem is ill posed, im-
plying that its Green’s function does not exist. Observe also that some other Green’s
functions for the annular region obtained earlier in this chapter follow from the
present one.
Chapter 6
Representation of Elementary Functions

While the first five chapters in this book have touched upon more or less standard
topics, the material of the present chapter goes in another direction. The reader will
probably find it surprising. Indeed, the notions of infinite product and Green’s func-
tion, discussed in detail earlier in this volume, have customarily been included in
texts on mathematical analysis and differential equations, respectively. The present
chapter, in contrast, discusses an unusual idea that has never been explored in texts
before. That is, a technique, reported for the first time in [27, 28], is employed here
for obtaining infinite product representations for a number of elementary functions.
The technique is based on the comparison of alternative expressions of Green’s
functions for the two-dimensional Laplace equation that are constructed by different
methods. Some standard boundary-value problems posed on regions of a regular
configuration are considered. Classical closed analytical forms of Green’s functions
for such problems are compared with those obtained by the method of images in
the infinite product form. This comparison appears extremely fruitful. It provides a
number of infinite product representations for some trigonometric and hyperbolic
functions.
As outlined in Chap. 3, the method of images is useful for obtaining closed ana-
lytical expressions of Green’s functions for a certain class of boundary-value prob-
lems posed for the Laplace equation. The sphere of successful implementation of
this method is limited, however, to a narrow class of problems. We begin our presen-
tation in Sect. 6.1 by considering problems for which the method of images does not
represent the best choice for the construction of the Green’s function, because some
other classical methods allow one to obtain the Green’s function in a more compact
computer-friendly form. But it is worth noting that Green’s functions themselves are
not considered as the ultimate goal. The form in which they are expressed is what is
at issue.
To broaden the limited frontiers of successful application of the method of im-
ages, one arrives at expressions of Green’s functions in terms of infinite products.
Those expressions are no match for the compact ones available in the literature and
obtained by other classical methods (see Chaps. 3 and 5 for examples). But what
makes such expressions of Green’s functions really valuable is that they are used
here for the derivation of some identities involving infinite products.

Y.A. Melnikov, Green's Functions and Infinite Products, DOI 10.1007/978-0-8176-8280-4_6,
© Springer Science+Business Media, LLC 2011
122 6 Representation of Elementary Functions

The reader will be surprised by the number of infinite product representations


of trigonometric and hyperbolic functions derived in Sects. 6.2 and 6.3. They were
obtained with the aid of the infinite-product-containing identities that we managed
to obtain in Sect. 6.1. Some of these representations are just alternatives for those
classical ones already available in the literature [9], while others were unavailable
prior to the first report on them in [27, 28].

6.1 Method of Images Extends Frontiers

The method of images, which is traditionally used for the construction of Green’s
functions for the Laplace equation, is well described in Chap. 3. The idea behind
the method is to find the location and intensity of point sources and sinks outside
the region in such a way that the homogeneous boundary conditions imposed on the
region’s boundary are satisfied for any location of a unit source inside the region.
The method of images represents one of the standard approaches in the field.
From Chap. 3, the reader may conclude that the complete list of problems al-
lowing a successful implementation of the method of images is quite short. This is
indeed true for the list of such problems for which the method results in a closed
analytical form of Green’s functions. It includes only the Dirichlet problem for a
half-plane; the Dirichlet, Neumann, and Dirichlet–Neumann problems for a quarter-
plane; the Dirichlet problem for a disk; and the Dirichlet problem for some infinite
wedge-shaped regions.
In [27] and [28] a nontrivial accomplishment was reported for the first time on the
application of the method of images to the derivation of infinite product represen-
tations of elementary functions. The method was used for the derivation of Green’s
functions for the infinite and semi-infinite strip, with Dirichlet and Neumann bound-
ary conditions imposed. Comparison of such infinite-product-containing representa-
tions of Green’s functions with their classical analytical forms brings an unexpected
discovery.
To lay out a working background for our approach to the derivation of infinite
product representations for a number of elementary functions, we will revisit some
classical expressions of Green’s functions for the Laplace equation that have con-
ventionally been obtained by a variety of methods. Alternative representations of
those Green’s functions are later constructed here by the method of images in the
infinite product form. Comparison of the two representations of the same Green’s
function entails a number of “summation” formulas for infinite functional products.

Example 6.1 We begin our presentation by considering the Dirichlet problem for the Laplace equation stated on the infinite strip Ω = {−∞ < x < ∞, 0 < y < b}. The closed analytical form

    G(x, y; ξ, η) = (1/2π) ln √[(1 − 2e^{ω(x−ξ)} cos ω(y + η) + e^{2ω(x−ξ)}) / (1 − 2e^{ω(x−ξ)} cos ω(y − η) + e^{2ω(x−ξ)})],  ω = π/b,    (6.1)

Fig. 6.1 Derivation of an alternative representation for (6.1)

of the Green’s function for this problem is available in standard texts [15, 18] on
partial differential equations. As follows from Chaps. 3 and 5, it can be derived
by either the method of conformal mapping or the method of eigenfunction expan-
sion. In [16], for example, it was obtained by a modified version of the method
of eigenfunction expansion. That version was first proposed in [12]. It provides a
computer-friendly form of the Green’s function, which becomes possible due to ei-
ther complete (as in the case under consideration) or partial summation of its series
representation.

In what follows, it will be explicitly shown how another (alternative to that


in (6.1)) expression can be obtained by the method of images for the Green’s func-
tion of the Dirichlet problem for the infinite strip. To follow the method, the reader
is advised to take a close look at the scheme presented in Fig. 6.1.
We place a unit source S⁺₀ at an arbitrary point A(ξ, η) inside Ω. The response to S⁺₀ at a point M(x, y) represents the fundamental solution

    G⁺₀(x, y; ξ, η) = −(1/2π) ln √((x − ξ)² + (y − η)²)

of the Laplace equation.
Clearly, the function G⁺₀(x, y; ξ, η) conflicts with the Dirichlet conditions on the boundary fragments y = 0 and y = b (it does not vanish on these lines). To compensate the traces of G⁺₀(x, y; ξ, η) on y = 0 and y = b, we place two unit sinks S⁻₁,₀ and S⁻₁,b at the points B(ξ, −η) and C(ξ, 2b − η), which represent the images of (ξ, η) about the lines y = 0 and y = b, respectively. The responses to these sinks at (x, y) evidently are

    G⁻₁,₀(x, y; ξ, −η) = (1/2π) ln √((x − ξ)² + (y + η)²)

and

    G⁻₁,b(x, y; ξ, 2b − η) = (1/2π) ln √((x − ξ)² + [y − (2b − η)]²).
But the functions G⁻₁,₀(x, y; ξ, −η) and G⁻₁,b(x, y; ξ, 2b − η) leave nonzero traces on the boundary lines y = 0 and y = b. These traces can be compensated with the unit sources S⁺₂,₀ and S⁺₂,b located at D(ξ, −2b + η) and E(ξ, 2b + η). The responses to these at (x, y) are given as

    G⁺₂,₀(x, y; ξ, −2b + η) = −(1/2π) ln √((x − ξ)² + [y − (−2b + η)]²)

and

    G⁺₂,b(x, y; ξ, 2b + η) = −(1/2π) ln √((x − ξ)² + [y − (2b + η)]²).

Traces of the functions G⁺₂,₀(x, y; ξ, −2b + η) and G⁺₂,b(x, y; ξ, 2b + η) on y = 0 and y = b can, in turn, be compensated with the unit sinks S⁻₃,₀ and S⁻₃,b located at F(ξ, −2b − η) and H(ξ, 4b − η), respectively.
Following the described procedure of properly placing compensatory unit sources that alternate with unit sinks, the Green's function G = G(x, y; ξ, η) that we are looking for is obtained in the infinite series form

    G = G⁺₀ + Σ_{i=1}^∞ (G⁻_{2i−1,0} + G⁻_{2i−1,b}) + Σ_{i=1}^∞ (G⁺_{2i,0} + G⁺_{2i,b}).

Since the terms of this series represent logarithmic functions, its Nth partial sum

    S_N(x, y; ξ, η) = G⁺₀ + Σ_{i=1}^N (G⁻_{2i−1,0} + G⁻_{2i−1,b}) + Σ_{i=1}^N (G⁺_{2i,0} + G⁺_{2i,b})

can be written as a single logarithm of a product:

    S_N(x, y; ξ, η) = (1/2π) ln ∏_{n=−N}^{N} √[((x − ξ)² + (y + η − 2nb)²) / ((x − ξ)² + (y − η + 2nb)²)].

Taking the limit as N approaches infinity, we obtain the final form of the Green's function that we are looking for as

    G(x, y; ξ, η) = (1/2π) ln ∏_{n=−∞}^{∞} √[((x − ξ)² + (y + η − 2nb)²) / ((x − ξ)² + (y − η + 2nb)²)].    (6.2)

Thus, (6.2) provides another representation of the Green's function for the Dirichlet problem for the Laplace equation stated on the infinite strip. The above can be considered an alternative to the classical form presented in (6.1). It is evident, however, that the representation in (6.2) cannot be recommended for practical use, since it is not as computer-friendly as the closed form in (6.1). But computability is not an issue in the discussion that follows.
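It is instructive to watch the partial sums of (6.2) against the boundary conditions. In the sketch below (ours, not from the book), the summands cancel pairwise on y = 0, so every truncation vanishes there exactly, while on y = b the telescoping residual decays like 1/N.

```python
import math

# Partial sums of the image-series representation (6.2) for the infinite strip.
# Note (1/2π) ln √(·) = (1/4π) ln(·), so we sum plain logarithms and divide by 4π.

def SN(x, y, xi, eta, b, N):
    s = 0.0
    for n in range(-N, N + 1):
        s += math.log(((x - xi)**2 + (y + eta - 2*n*b)**2)
                      / ((x - xi)**2 + (y - eta + 2*n*b)**2))
    return s / (4 * math.pi)

x, xi, eta, b = 0.3, -0.2, 0.6, 1.0
print(SN(x, 0.0, xi, eta, b, 50))      # exactly 0 on y = 0 for any truncation
for N in (10, 100, 1000):
    print(N, SN(x, b, xi, eta, b, N))  # residual on y = b shrinks as N grows
```

The slow O(1/N) decay on y = b is precisely why (6.2) is a poor computational tool, even though it is the ideal vehicle for the identities derived next.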
The radicand in either (6.1) or (6.2) is a fraction whose numerator and denominator each represent the squared distance between two points. Hence, the radicands are nonnegative quantities, which allows us to obtain the identity

    ∏_{n=−∞}^{∞} ((x − ξ)² + [y + (η − 2nb)]²) / ((x − ξ)² + [y − (η − 2nb)]²)
    = (1 − 2e^{ω(x−ξ)} cos ω(y + η) + e^{2ω(x−ξ)}) / (1 − 2e^{ω(x−ξ)} cos ω(y − η) + e^{2ω(x−ξ)}).

This relation can be interpreted as a “summation” formula for the infinite product. In order to reduce the above to a more compact form, we assume that b = π, and introducing the parameters β = x − ξ, 2t = y + η, and 2u = y − η, we obtain the multivariable identity

    ∏_{n=−∞}^{∞} (β² + 4(t − nπ)²) / (β² + 4(u + nπ)²) = (1 − 2e^β cos 2t + e^{2β}) / (1 − 2e^β cos 2u + e^{2β}).    (6.3)

To obtain ranges for the parameters β, t, and u in (6.3), we recall that both the observation point (x, y) and the source point (ξ, η) are interior to the infinite strip Ω. This makes the identity in (6.3) valid (at least formally) for

    −∞ < β < ∞,  0 < t < π,  0 ≤ u < π/2,    (6.4)

given that the parameters β and u are not equal to zero at the same time.
But it is important to note that if the product in (6.3) happens to be uniformly
convergent for a wider range of the variables t and u, then the constraints on these
variables in (6.4) can be revised accordingly.
The identity in (6.3), along with other identities to be derived in this section, will
play a significant role in the further development.
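A quick numerical spot check of (6.3) (our own sketch, not from the book) is reassuring: truncating the doubly infinite product symmetrically at |n| ≤ N, the paired factors behave like 1 + O(1/n²), so the truncated product approaches the right-hand side with an O(1/N) tail.

```python
import math

# Numerical verification of the "summation" formula (6.3), with the doubly
# infinite product truncated symmetrically at |n| <= N.

def lhs(beta, t, u, N=2000):
    p = 1.0
    for n in range(-N, N + 1):
        p *= (beta**2 + 4 * (t - n * math.pi) ** 2) / (beta**2 + 4 * (u + n * math.pi) ** 2)
    return p

def rhs(beta, t, u):
    return ((1 - 2 * math.exp(beta) * math.cos(2 * t) + math.exp(2 * beta))
            / (1 - 2 * math.exp(beta) * math.cos(2 * u) + math.exp(2 * beta)))

beta, t, u = 0.7, 1.1, 0.4
print(lhs(beta, t, u), rhs(beta, t, u))  # the two values agree to several digits
```

Symmetric truncation is essential here: the individual factors carry O(1/n) deviations that cancel only between the n and −n terms.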

Example 6.2 Reviewing other classical Green's functions, we consider a mixed boundary-value problem for the Laplace equation on the infinite strip Ω = {−∞ < x < ∞, 0 < y < b}, with the Dirichlet condition imposed on y = 0, while the Neumann condition is imposed on y = b. Recall the Green's function for this problem, which is expressed in [16] as

    G(x, y; ξ, η) = (1/2π) ln √[ (1 + 2e^{ω(x−ξ)} cos ω(y − η) + e^{2ω(x−ξ)}) / (1 − 2e^{ω(x−ξ)} cos ω(y − η) + e^{2ω(x−ξ)})
                    × (1 − 2e^{ω(x−ξ)} cos ω(y + η) + e^{2ω(x−ξ)}) / (1 + 2e^{ω(x−ξ)} cos ω(y + η) + e^{2ω(x−ξ)}) ],  ω = π/2b.    (6.5)

The scheme presented in Fig. 6.2 may help the reader to follow the procedure
of the method of images which is similar to that described earlier for the Dirichlet
problem.

We look for an alternative representation to (6.5) of the Green’s function for the
Dirichlet–Neumann problem stated on the infinite strip. It can again be obtained
as an aggregate response to an infinite number of properly spaced unit sources and
sinks. Their locations are chosen in compliance with the following pattern.

Fig. 6.2 Derivation of an alternative representation for (6.5)

To compensate the trace of the fundamental solution G⁺₀(x, y; ξ, η) on the boundary line y = 0, a unit sink S⁻₁,₀ is placed at the point B(ξ, −η), with the response at M(x, y) given by

    G⁻₁,₀(x, y; ξ, −η) = (1/2π) ln √((x − ξ)² + (y + η)²).

As to the Neumann condition on y = b, it can be supported by placing a unit source S⁺₁,b at the point C(ξ, 2b − η). This yields

    G⁺₁,b(x, y; ξ, 2b − η) = −(1/2π) ln √((x − ξ)² + [y − (2b − η)]²).

The trace of the function G⁺₁,b(x, y; ξ, 2b − η) on the boundary line y = 0 can, in turn, be compensated with a unit sink S⁻₂,₀ placed at D(ξ, −2b + η), with the response at (x, y)

    G⁻₂,₀(x, y; ξ, −2b + η) = (1/2π) ln √((x − ξ)² + [y − (−2b + η)]²),

while the Neumann condition on y = b can be supported with a unit sink S⁻₂,b located at E(ξ, 2b + η), whose response at (x, y) reads as

    G⁻₂,b(x, y; ξ, 2b + η) = (1/2π) ln √((x − ξ)² + [y − (2b + η)]²).

The trace of the function G⁻₂,b(x, y; ξ, 2b + η) on y = 0 can, in turn, be compensated with a unit source S⁺₃,₀ placed at F(ξ, −2b − η). The response of this source reads as

    G⁺₃,₀(x, y; ξ, −2b − η) = −(1/2π) ln √((x − ξ)² + [y + (2b + η)]²),

while the Neumann condition on y = b can be supported with a unit sink S⁻₃,b at H(ξ, 4b − η), whose response at (x, y) is given as

    G⁻₃,b(x, y; ξ, 4b − η) = (1/2π) ln √((x − ξ)² + [y − (4b − η)]²).

Continuing this process and proceeding in compliance with the scheme described in Example 6.1, the Green's function that we are looking for is ultimately obtained in the following infinite product form:

    G(x, y; ξ, η) = (1/2π) ln ∏_{n=−∞}^{∞} √[ ((x − ξ)² + (y + η + 4nb)²) / ((x − ξ)² + (y − η + 4nb)²)
                    × ((x − ξ)² + [y − η + 2(2n + 1)b]²) / ((x − ξ)² + [y + η + 2(2n + 1)b]²) ],    (6.6)

which can be viewed as an alternative to the closed analytical form exhibited earlier in (6.5).
By comparison of the equivalent expressions in (6.6) and (6.5), one arrives at the
multivariable identity

$$\prod_{n=-\infty}^{\infty}\frac{(x-\xi)^2+\bigl[y-\eta+2(2n+1)b\bigr]^2}{(x-\xi)^2+\bigl[y+\eta+2(2n+1)b\bigr]^2}\times\frac{(x-\xi)^2+(y+\eta+4nb)^2}{(x-\xi)^2+(y-\eta+4nb)^2}$$
$$=\frac{1+2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}\times\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1+2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}.$$

To obtain a more compact form for this relation, we assume b = π/2, which
evidently implies that ω = 1, and introduce the parameters β = x − ξ , t = y + η,
and u = y − η. This yields

$$\prod_{n=-\infty}^{\infty}\frac{\bigl[\beta^2+(t+2n\pi)^2\bigr]\bigl[\beta^2+\bigl(u+(2n+1)\pi\bigr)^2\bigr]}{\bigl[\beta^2+(u+2n\pi)^2\bigr]\bigl[\beta^2+\bigl(t+(2n+1)\pi\bigr)^2\bigr]}=\frac{(1-2e^{\beta}\cos t+e^{2\beta})(1+2e^{\beta}\cos u+e^{2\beta})}{(1-2e^{\beta}\cos u+e^{2\beta})(1+2e^{\beta}\cos t+e^{2\beta})}.\qquad(6.7)$$

The above identity, along with that in (6.3) and some others to be obtained later
in this section, is crucial for the major issue of the present chapter, which is the
derivation of infinite product representations for some elementary functions.
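Identities of this kind are easy to sanity-check numerically. The sketch below (plain Python; the helper names `lhs_67` and `rhs_67` are ours, not the book's) truncates the doubly infinite product in (6.7) symmetrically at $n=\pm K$ and compares the result with the closed-form right-hand side.

```python
import math

def lhs_67(beta, t, u, K):
    """Symmetric truncation of the infinite product in (6.7) at n = -K..K."""
    p = 1.0
    for n in range(-K, K + 1):
        p *= (beta**2 + (t + 2*n*math.pi)**2) * (beta**2 + (u + (2*n + 1)*math.pi)**2)
        p /= (beta**2 + (u + 2*n*math.pi)**2) * (beta**2 + (t + (2*n + 1)*math.pi)**2)
    return p

def rhs_67(beta, t, u):
    """Closed-form right-hand side of (6.7)."""
    e1, e2 = math.exp(beta), math.exp(2 * beta)
    num = (1 - 2*e1*math.cos(t) + e2) * (1 + 2*e1*math.cos(u) + e2)
    den = (1 - 2*e1*math.cos(u) + e2) * (1 + 2*e1*math.cos(t) + e2)
    return num / den

# The two sides agree to several decimal places already at moderate K.
print(lhs_67(0.3, 1.0, 0.5, 2000), rhs_67(0.3, 1.0, 0.5))
```

Since the logarithms of the factors decay like $1/n^2$, the truncation error of the symmetric partial product shrinks roughly like $1/K$.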
We turn now to other classical Green’s functions and apply our technique based
on the method of images to some boundary-value problems formulated for the
Laplace equation on a semi-infinite strip.

Fig. 6.3 Derivation of an alternative representation for (6.8)

Example 6.3 Consider first the Dirichlet problem on the semi-infinite strip $\Omega=\{0<x<\infty,\ 0<y<b\}$. The classical compact form

$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\left[\frac{1-2e^{\omega(x+\xi)}\cos\omega(y-\eta)+e^{2\omega(x+\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}\times\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x+\xi)}\cos\omega(y+\eta)+e^{2\omega(x+\xi)}}\right],\qquad\omega=\frac{\pi}{b},\qquad(6.8)$$

of its Green’s function can be found in most classical sources. In [16], for example,
it was obtained by the modified version of the method of eigenfunction expansion.
Another form of the Green’s function for the problem under consideration will
be obtained here with the aid of the method of images. To trace its procedure in a
way similar to that described in detail in Examples 6.1 and 6.2, the reader is invited
to follow, in this case, the derivation scheme depicted in Fig. 6.3.

The potential field generated by a unit source acting at an arbitrary point $A(\xi,\eta)$ in $\Omega$ can be compensated on the edges $y=0$ and $y=b$ with unit sources and sinks placed at the regular set of points $B(\xi,-\eta)$, $C(\xi,2b-\eta)$, $D(\xi,-2b+\eta)$, $E(\xi,2b+\eta)$, $F(\xi,-2b-\eta)$, $H(\xi,4b-\eta)$, and so on. All these points are located outside of $\Omega$. In other words, these sources and sinks allow us to satisfy the homogeneous Dirichlet boundary conditions imposed on the edges $y=0$ and $y=b$ of $\Omega$.

As to the boundary condition imposed on the edge $x=0$, the influence of the sources and sinks acting at $A$, $B$, $C$, $D$, $E$, $F$, $H$, and so on can, in turn, be compensated on that boundary line with unit sources and sinks if we place them at another set of points $K(-\xi,\eta)$, $L(-\xi,-\eta)$, $N(-\xi,2b-\eta)$, $P(-\xi,-2b+\eta)$, $R(-\xi,2b+\eta)$, $S(-\xi,-2b-\eta)$, $T(-\xi,4b-\eta)$, and so on. It is evident that the latter sources and sinks do not conflict with the boundary conditions on $y=0$ and $y=b$.

Thus, upon combining the influence of all the compensatory sources and sinks shown in Fig. 6.3, one arrives at an alternative form to (6.8) of the Green's function of the Dirichlet problem for the semi-infinite strip $\Omega=\{0<x<\infty,\ 0<y<b\}$. After some trivial algebra, it is ultimately obtained in the infinite-product-containing

form

$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\prod_{n=-\infty}^{\infty}\frac{(x-\xi)^2+(y+\eta-2nb)^2}{(x-\xi)^2+(y-\eta+2nb)^2}\times\frac{(x+\xi)^2+(y-\eta+2nb)^2}{(x+\xi)^2+(y+\eta-2nb)^2}.\qquad(6.9)$$

Similarly to the development in the problems considered in Examples 6.1 and 6.2, by equating the arguments of the logarithmic functions of the alternative expressions in (6.9) and (6.8), one obtains in this case the following multivariable identity:

$$\prod_{n=-\infty}^{\infty}\frac{\bigl[(x-\xi)^2+(y+\eta-2nb)^2\bigr]\bigl[(x+\xi)^2+(y-\eta+2nb)^2\bigr]}{\bigl[(x-\xi)^2+(y-\eta+2nb)^2\bigr]\bigl[(x+\xi)^2+(y+\eta-2nb)^2\bigr]}$$
$$=\frac{1-2e^{\omega(x+\xi)}\cos\omega(y-\eta)+e^{2\omega(x+\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}\times\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x+\xi)}\cos\omega(y+\eta)+e^{2\omega(x+\xi)}}.$$

To convert the above identity to a more compact form, we assume $b=\pi$ and introduce the parameters $\alpha=x+\xi$, $\beta=x-\xi$, $t=y+\eta$, and $u=y-\eta$. This reduces the identity to

$$\prod_{n=-\infty}^{\infty}\frac{\bigl[\beta^2+(t-2n\pi)^2\bigr]\bigl[\alpha^2+(u+2n\pi)^2\bigr]}{\bigl[\beta^2+(u+2n\pi)^2\bigr]\bigl[\alpha^2+(t-2n\pi)^2\bigr]}=\frac{(1-2e^{\alpha}\cos u+e^{2\alpha})(1-2e^{\beta}\cos t+e^{2\beta})}{(1-2e^{\beta}\cos u+e^{2\beta})(1-2e^{\alpha}\cos t+e^{2\alpha})}.\qquad(6.10)$$

Note that the above identity, along with those of (6.3) and (6.7), creates a back-
ground for our further work on the infinite product representation of elementary
functions.
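The identity in (6.10) can be checked the same way as (6.7). In the sketch below (Python; the function names are ours), the right-hand side is evaluated through the elementary rewrite $1-2e^{a}\cos s+e^{2a}=2e^{a}(\cosh a-\cos s)$, whose exponential prefactors cancel in the ratio.

```python
import math

def lhs_610(alpha, beta, t, u, K):
    """Symmetric truncation of the product in (6.10) at n = -K..K."""
    p = 1.0
    for n in range(-K, K + 1):
        p *= (beta**2 + (t - 2*n*math.pi)**2) * (alpha**2 + (u + 2*n*math.pi)**2)
        p /= (beta**2 + (u + 2*n*math.pi)**2) * (alpha**2 + (t - 2*n*math.pi)**2)
    return p

def rhs_610(alpha, beta, t, u):
    """Right-hand side of (6.10), via 1 - 2*e**a*cos(s) + e**(2a) = 2*e**a*(cosh(a) - cos(s));
    the exponential prefactors cancel in the ratio."""
    return ((math.cosh(alpha) - math.cos(u)) * (math.cosh(beta) - math.cos(t))
            / ((math.cosh(beta) - math.cos(u)) * (math.cosh(alpha) - math.cos(t))))

print(lhs_610(0.8, 0.2, 1.1, 0.4, 2000), rhs_610(0.8, 0.2, 1.1, 0.4))
```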

Example 6.4 As another example for the semi-infinite strip $\Omega=\{0<x<\infty,\ 0<y<b\}$, we consider a mixed boundary-value problem. That is, let Dirichlet conditions be imposed on the boundary fragments $y=0$ and $y=b$, while the Neumann condition is imposed on $x=0$. The compact form

$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\left[\frac{1-2e^{\omega(x+\xi)}\cos\omega(y+\eta)+e^{2\omega(x+\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}\times\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x+\xi)}\cos\omega(y-\eta)+e^{2\omega(x+\xi)}}\right],\qquad\omega=\frac{\pi}{b},\qquad(6.11)$$

Fig. 6.4 Derivation of an alternative representation for (6.11)

of the Green's function for this Dirichlet–Neumann problem is presented, for example, in [16].

An alternative form to (6.11) of the Green's function can be derived with the aid of the scheme exhibited in Fig. 6.4. As the previous example suggests, the traces of the fundamental solution (the field generated by a unit source acting at an arbitrary point $A(\xi,\eta)$ in $\Omega$) on the edges $y=0$ and $y=b$ are compensated with unit sources and sinks placed at a set of points exterior to $\Omega$: $B(\xi,-\eta)$, $C(\xi,2b-\eta)$, $D(\xi,-2b+\eta)$, $E(\xi,2b+\eta)$, $F(\xi,-2b-\eta)$, $H(\xi,4b-\eta)$, and so on.

To satisfy the Neumann condition imposed on the edge $x=0$, the influence of the sources and sinks acting at $A$, $B$, $C$, $D$, $E$, $F$, $H$, and so on can, similarly to the Dirichlet problem, be compensated with unit sources and sinks if we place them at the set of points $K(-\xi,\eta)$, $L(-\xi,-\eta)$, $N(-\xi,2b-\eta)$, $P(-\xi,-2b+\eta)$, $R(-\xi,2b+\eta)$, $S(-\xi,-2b-\eta)$, $T(-\xi,4b-\eta)$, and so on exterior to $\Omega$. The order of sources and sinks is, however, different from that suggested earlier for the Dirichlet problem.
Proceeding further with the method of images, one arrives at the infinite-product-
containing representation

$$G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\prod_{n=-\infty}^{\infty}\frac{(x-\xi)^2+(y+\eta-2nb)^2}{(x-\xi)^2+(y-\eta+2nb)^2}\times\frac{(x+\xi)^2+(y+\eta-2nb)^2}{(x+\xi)^2+(y-\eta+2nb)^2}\qquad(6.12)$$

for the Green’s function under consideration.


Setting equal the arguments of the logarithmic functions in (6.11) and (6.12), we obtain the following multivariable identity:

$$\prod_{n=-\infty}^{\infty}\frac{(x-\xi)^2+(y+\eta-2nb)^2}{(x-\xi)^2+(y-\eta+2nb)^2}\times\frac{(x+\xi)^2+(y+\eta-2nb)^2}{(x+\xi)^2+(y-\eta+2nb)^2}$$
$$=\frac{1-2e^{\omega(x+\xi)}\cos\omega(y+\eta)+e^{2\omega(x+\xi)}}{1-2e^{\omega(x-\xi)}\cos\omega(y-\eta)+e^{2\omega(x-\xi)}}\times\frac{1-2e^{\omega(x-\xi)}\cos\omega(y+\eta)+e^{2\omega(x-\xi)}}{1-2e^{\omega(x+\xi)}\cos\omega(y-\eta)+e^{2\omega(x+\xi)}}.$$
Similarly to the case of the Dirichlet problem considered earlier in Example 6.3, we simplify the above identity by assuming $b=\pi$ and introducing the parameters $\alpha=x+\xi$, $\beta=x-\xi$, $t=y+\eta$, and $u=y-\eta$. This yields

$$\prod_{n=-\infty}^{\infty}\frac{\bigl[\beta^2+(t-2n\pi)^2\bigr]\bigl[\alpha^2+(t-2n\pi)^2\bigr]}{\bigl[\beta^2+(u+2n\pi)^2\bigr]\bigl[\alpha^2+(u+2n\pi)^2\bigr]}=\frac{(1-2e^{\alpha}\cos t+e^{2\alpha})(1-2e^{\beta}\cos t+e^{2\beta})}{(1-2e^{\beta}\cos u+e^{2\beta})(1-2e^{\alpha}\cos u+e^{2\alpha})}.\qquad(6.13)$$
The infinite-product-containing identities that have been derived so far in this
section (see (6.3), (6.7), (6.10), and (6.13)) will be repeatedly referred to in
Sects. 6.2 and 6.3. They will play a key role in our study. We will use them for
the derivation of such representations for some elementary (trigonometric and hy-
perbolic) functions that have not been reported before.

6.2 Trigonometric Functions


At this point, the reader is prepared for a new turn in our presentation. We are going
to address a subject area that bridges the topics of Green’s function and infinite prod-
uct. The infinite product representation of elementary functions will be explored to
a certain extent. The infinite-product-containing identities derived in the previous
section create a convenient background for our work.
We begin by recalling first the identity presented in (6.3) of Sect. 6.1 and assume a zero value for the parameter $\beta$. This converts the identity into the compact form

$$\prod_{n=-\infty}^{\infty}\frac{(t-n\pi)^2}{(u+n\pi)^2}=\frac{1-\cos 2t}{1-\cos 2u}=\sin^2 t\,\csc^2 u,\qquad(6.14)$$

where the parameter $t$ can take on any real value. As to the parameter $u$, it cannot equal $n\pi$, with $n=0,\pm1,\pm2,\ldots$.
It is evident that the identity that we just arrived at in (6.14) holds if the identity

$$\prod_{n=-\infty}^{\infty}\frac{t-n\pi}{u+n\pi}=\frac{\sin t}{\sin u}\qquad(6.15)$$

holds as well. The above identity represents, in fact, an infinite product expansion of the two-variable function

$$F(t,u)=\frac{\sin t}{\sin u}.$$

To analyze the convergence of the infinite product representation in (6.15), we isolate its term with $n=0$, which clearly is $t/u$, and group the pairs of terms with $n=k$ and $n=-k$. This yields

$$\prod_{n=-\infty}^{\infty}\frac{t-n\pi}{u+n\pi}=\frac{t}{u}\prod_{k=1}^{\infty}\frac{(t-k\pi)(t+k\pi)}{(u+k\pi)(u-k\pi)}=\frac{t}{u}\prod_{k=1}^{\infty}\frac{t^2-k^2\pi^2}{u^2-k^2\pi^2}=\frac{t}{u}\prod_{k=1}^{\infty}\frac{t^2-u^2+u^2-k^2\pi^2}{u^2-k^2\pi^2}=\frac{t}{u}\prod_{k=1}^{\infty}\left(1+\frac{t^2-u^2}{u^2-k^2\pi^2}\right).$$

The form that the product in (6.15) reduces to implies [5, 9] that it converges uniformly if the series

$$\sum_{k=1}^{\infty}\frac{t^2-u^2}{u^2-k^2\pi^2}$$

does. But the above behaves, up to a constant factor, like the $p$-series (also referred to in some sources as the generalized harmonic series), with terms decaying at the rate $1/k^2$. Hence, it converges uniformly [9] for any finite value of $t$ and $u\neq k\pi$. This makes it possible to conclude that the constraints put on the parameters $t$ and $u$ in Sect. 6.1 (see (6.4)) can be relaxed. This, in turn, implies that the product in (6.15) converges uniformly to a value of the function $F(t,u)$ at any point $(t,u)$ in its domain.
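A few lines of Python make the convergence just discussed tangible. The sketch below (the helper name `partial_615` is ours) evaluates the symmetric partial product of (6.15) and compares it with $\sin t/\sin u$; enlarging $K$ steadily shrinks the error, consistent with the $1/k^2$ term decay.

```python
import math

def partial_615(t, u, K):
    """Symmetric truncation of (6.15) at n = -K..K."""
    p = 1.0
    for n in range(-K, K + 1):
        p *= (t - n * math.pi) / (u + n * math.pi)
    return p

t, u = 1.0, 0.7
for K in (10, 100, 1000):
    print(K, partial_615(t, u, K), math.sin(t) / math.sin(u))
```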
In what follows, the reader will be introduced to a number of infinite product representations for single-variable trigonometric functions that can be obtained from the identities in (6.14) or (6.15). Note that most of the representations we will arrive at in this section were reported for the first time in [27] and [28].
Let us revisit the two-variable identity in (6.15) and assume $u=\pi/2$. The identity transforms in this case into the expansion

$$\sin t=\prod_{n=-\infty}^{\infty}\frac{2(t-n\pi)}{(2n+1)\pi}\qquad(6.16)$$

of the sine function in an infinite product. Uniform convergence of this expansion evidently follows from the analysis that we just completed for the infinite product in (6.15).

The expansion in (6.16) can be transformed by means of the approach applied earlier to the relation in (6.15). That is, by isolating the term with $n=0$, which equals $2t/\pi$, and coupling the terms with $n=k$ and $n=-k$, we can convert the expansion in (6.16) into

$$\sin t=\frac{2t}{\pi}\prod_{k=1}^{\infty}\frac{4(t^2-k^2\pi^2)}{(1-4k^2)\pi^2},$$

which, after some trivial algebra, reads

$$\sin t=\frac{2t}{\pi}\prod_{k=1}^{\infty}\left(1+\frac{4t^2-\pi^2}{(1-4k^2)\pi^2}\right).\qquad(6.17)$$

Thus, it appears that an infinite product representation is obtained for the trigonometric sine function. But this raises a natural question about the relationship between the representation in (6.17) and the classical [9] Euler expansion

$$\sin t=t\prod_{k=1}^{\infty}\left(1-\frac{t^2}{k^2\pi^2}\right),\qquad(6.18)$$

which has been referenced and dealt with multiple times in this volume.

Close analysis shows that the forms in (6.17) and (6.18) are unrelated, meaning that neither of them follows from the other. This makes it possible to assert that (6.17) is simply an alternative to (6.18).
Note that the representation in (6.18) has been around in mathematics for the past
two hundred and fifty plus years, owing to the genius of Leonhard Euler [1]. It is
obvious that his name needs no recommendation. It is known to everyone who is at
least superficially familiar with the history of the natural sciences. That phenomenal
Swiss mathematician made countless decisive contributions to different areas of
mathematics, mechanics, and engineering sciences.
To provide the reader with some perspective on the intellectual greatness of Eu-
ler, we recall a comment of another giant, who represents an indisputable authority
in the mathematical sciences. Being impressed with and inspired by the beauty and
elegance of Euler’s ideas, which had influenced a huge army of his pupils and fol-
lowers, the French mathematician and physicist Pierre Simon Laplace [26] once
exclaimed: “Read Euler, read Euler, he is the master of us all.” What could be more
convincing than such recognition!
It can be seen clearly that the infinite products in (6.17) and (6.18) converge at the same rate. This assertion follows from the form of their general terms: indeed, both products converge at the rate $1/k^2$. It appears from close observation, however, that the actual convergence of the product in (6.17) is somewhat faster than that in (6.18). This observation by no means conflicts with the a priori estimate, but rather compares the practical convergence of the two expansions.

The latter point is well illustrated in Figs. 6.5 and 6.6, where, to give a clear view of the convergence rate of both representations, we display graphs of their $K$th partial products

$$\Pi_K^{(6.17)}=\frac{2t}{\pi}\prod_{k=1}^{K}\left(1+\frac{4t^2-\pi^2}{(1-4k^2)\pi^2}\right)\qquad\text{and}\qquad\Pi_K^{(6.18)}=t\prod_{k=1}^{K}\left(1-\frac{t^2}{k^2\pi^2}\right).$$
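The two truncations can be tabulated directly, which is essentially what Figs. 6.5 and 6.6 display. The sketch below (Python; `p17` and `p18` are our ad hoc names for the $K$th partial products of (6.17) and (6.18)) prints both against the exact sine.

```python
import math

def p17(t, K):
    """K-th partial product of (6.17)."""
    p = 2 * t / math.pi
    for k in range(1, K + 1):
        p *= 1 + (4 * t * t - math.pi**2) / ((1 - 4 * k * k) * math.pi**2)
    return p

def p18(t, K):
    """K-th partial product of Euler's expansion (6.18)."""
    p = t
    for k in range(1, K + 1):
        p *= 1 - t * t / (k * k * math.pi**2)
    return p

# Tabulate the two truncations against the exact sine.
for t in (0.5, 1.5, 3.0):
    print(t, math.sin(t), p17(t, 10), p18(t, 10))
```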

Fig. 6.5 Convergence of the expansions in (6.17) and (6.18), K = 5

Fig. 6.6 Convergence of the expansions in (6.17) and (6.18), K = 10

The case $K=5$ is depicted in Fig. 6.5, along with that of $K=10$ shown in Fig. 6.6. These illustrations provide a sense of the convergence rate of both expansions, (6.17) (diamonds) and (6.18) (boxes).
The infinite product representation

$$\cos t=\sin\left(\frac{\pi}{2}-t\right)=\frac{\pi-2t}{\pi}\prod_{k=1}^{\infty}\left(1+\frac{4t(t-\pi)}{(1-4k^2)\pi^2}\right)\qquad(6.19)$$

for the cosine function directly follows from the expansion in (6.17), representing an alternative to another classical [9] Euler form

$$\cos t=\prod_{k=1}^{\infty}\left(1-\frac{4t^2}{(2k-1)^2\pi^2}\right).\qquad(6.20)$$

We revisit again the identity in (6.15) and let the parameter $t$ there equal $\pi/2$. This converts (6.15) to the representation

$$\csc t=\prod_{n=-\infty}^{\infty}\frac{(1-2n)\pi}{2(t+n\pi)}=\prod_{n=-\infty}^{\infty}\left(-1+\frac{2t+\pi}{2(t+n\pi)}\right)=\frac{\pi}{2t}\prod_{k=1}^{\infty}\left(1+\frac{\pi^2-4t^2}{4(t^2-k^2\pi^2)}\right)\qquad(6.21)$$

for the cosecant function.

From the appearance of the second additive term in the parentheses in (6.21), it is evident that the infinite product converges uniformly to values of the cosecant function at every point in the domain of this function, and the convergence rate of this representation is $1/k^2$.
Observe that from the representations exhibited in (6.21) and (6.16), it follows that

$$\prod_{n=-\infty}^{\infty}\frac{2(t-n\pi)}{(2n+1)\pi}\equiv\prod_{n=-\infty}^{\infty}\frac{2(t+n\pi)}{(1-2n)\pi}.$$

The equivalence of the two infinite products in the above identity is evident, given that each of them is unchanged by the replacement of the product index $n$ with $-n$.
It appears that the identity shown in (6.15) might help in deriving alternative forms for other rare infinite product expansions that are available in the literature. To verify this assertion, we recall the representation

$$\frac{\sin 3t}{\sin t}=-\prod_{n=-\infty}^{\infty}\left[1-\left(\frac{2t}{t+n\pi}\right)^{2}\right],\qquad(6.22)$$

which appears in [9].


If both the variables $t$ and $u$ in

$$\prod_{n=-\infty}^{\infty}\frac{t-n\pi}{u+n\pi}=\frac{\sin t}{\sin u}$$

are expressed in terms of a single variable as $t:=At$ and $u:=Bt$, where $A$ and $B$ are real constants, then the above relation transforms into

$$\frac{\sin At}{\sin Bt}=\prod_{n=-\infty}^{\infty}\frac{At-n\pi}{Bt+n\pi}.$$

This yields

$$\frac{\sin At}{\sin Bt}=\frac{A}{B}\prod_{k=1}^{\infty}\frac{A^2t^2-k^2\pi^2}{B^2t^2-k^2\pi^2}=\frac{A}{B}\prod_{k=1}^{\infty}\left(1+\frac{(A^2-B^2)t^2}{B^2t^2-k^2\pi^2}\right),\qquad Bt\neq n\pi,\qquad(6.23)$$

from which the compact expansion

$$\frac{\sin 3t}{\sin t}=\prod_{n=-\infty}^{\infty}\frac{3t-n\pi}{t+n\pi},\qquad t\neq n\pi,\qquad(6.24)$$

follows as a particular case, apparently representing an alternative to the expansion in (6.22). A close analysis reveals, however, the equivalence of the expansions in (6.24) and (6.22). This assertion can readily be verified using the procedure that was applied earlier to the product in (6.15). That is, we isolate the terms with $n=0$ in (6.22) and (6.24), which are equal to $-3$ and $3$, respectively, and pair the terms with $n=k$ and $n=-k$. This ultimately transforms the product in (6.24) into

$$\prod_{n=-\infty}^{\infty}\frac{3t-n\pi}{t+n\pi}=3\prod_{k=1}^{\infty}\frac{(3t-k\pi)(3t+k\pi)}{(t+k\pi)(t-k\pi)}=3\prod_{k=1}^{\infty}\frac{9t^2-k^2\pi^2}{t^2-k^2\pi^2}.$$

As to the product in (6.22), we can prove that it reduces to the same expression. Indeed, after trivial transformations we have

$$-\prod_{n=-\infty}^{\infty}\left[1-\left(\frac{2t}{t+n\pi}\right)^{2}\right]=3\prod_{k=1}^{\infty}\left[1-\left(\frac{2t}{t+k\pi}\right)^{2}\right]\left[1-\left(\frac{2t}{t-k\pi}\right)^{2}\right]$$
$$=3\prod_{k=1}^{\infty}\left(1+\frac{16t^4}{(t^2-k^2\pi^2)^2}-\frac{4t^2}{(t+k\pi)^2}-\frac{4t^2}{(t-k\pi)^2}\right)=3\prod_{k=1}^{\infty}\frac{9t^4-10k^2\pi^2t^2+k^4\pi^4}{(t^2-k^2\pi^2)^2}=3\prod_{k=1}^{\infty}\frac{9t^2-k^2\pi^2}{t^2-k^2\pi^2}.$$

Thus, the expansions in (6.22) and (6.24) are indeed equivalent. The reader is
encouraged, in Exercise 6.5, to obtain a graphical illustration of the equivalence of
these two expansions.
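A numerical illustration of the equivalence is immediate: for every symmetric truncation at $n=\pm K$, the two partial products coincide pair by pair, exactly as the algebra above shows. The sketch below (Python; the helper names are ours) demonstrates this, and also checks both truncations against the exact ratio $\sin 3t/\sin t$.

```python
import math

def p622(t, K):
    """Symmetric truncation of (6.22) at n = -K..K."""
    p = -1.0
    for n in range(-K, K + 1):
        p *= 1 - (2 * t / (t + n * math.pi))**2
    return p

def p624(t, K):
    """Symmetric truncation of (6.24) at n = -K..K."""
    p = 1.0
    for n in range(-K, K + 1):
        p *= (3 * t - n * math.pi) / (t + n * math.pi)
    return p

t = 0.6
print(p622(t, 2000), p624(t, 2000), math.sin(3 * t) / math.sin(t))
```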
Note that the expansion in (6.23) can be obtained by directly expressing $\sin At$ and $\sin Bt$ with the aid of the representation in (6.17). Indeed, the latter suggests that

$$\sin At=\frac{2At}{\pi}\prod_{k=1}^{\infty}\left(1+\frac{4A^2t^2-\pi^2}{(1-4k^2)\pi^2}\right)=\frac{2At}{\pi}\prod_{k=1}^{\infty}\frac{4(A^2t^2-k^2\pi^2)}{(1-4k^2)\pi^2},$$

while

$$\sin Bt=\frac{2Bt}{\pi}\prod_{k=1}^{\infty}\left(1+\frac{4B^2t^2-\pi^2}{(1-4k^2)\pi^2}\right)=\frac{2Bt}{\pi}\prod_{k=1}^{\infty}\frac{4(B^2t^2-k^2\pi^2)}{(1-4k^2)\pi^2}.$$

Thus we have

$$\frac{\sin At}{\sin Bt}=\frac{A}{B}\prod_{k=1}^{\infty}\frac{A^2t^2-k^2\pi^2}{B^2t^2-k^2\pi^2}.$$

Interestingly enough, the Euler expansion in (6.18) also directly leads to that in (6.23).
We continue with the derivation of infinite product representations for trigonometric functions. In doing so, let us assume $A=2$ and $B=1$ in (6.23). This yields

$$\frac{\sin 2t}{\sin t}=2\cos t=\prod_{n=-\infty}^{\infty}\frac{2t-n\pi}{t+n\pi}.$$

This immediately yields another uniformly convergent expansion for the cosine function,

$$\cos t=\frac{1}{2}\prod_{n=-\infty}^{\infty}\frac{2t-n\pi}{t+n\pi}=\prod_{k=1}^{\infty}\left(1+\frac{3t^2}{t^2-k^2\pi^2}\right),\qquad(6.25)$$

which represents yet another alternative to those exhibited earlier in (6.19) and (6.20). Yet another infinite product representation for the cosine function can be obtained from that in (6.23) if we assume there $A=1/2$ and $B=1$. This yields

$$\frac{\sin t/2}{\sin t}=\prod_{n=-\infty}^{\infty}\frac{t-2n\pi}{2(t+n\pi)},$$

or

$$\frac{\sin^2 t/2}{\sin^2 t}=\frac{1}{2(1+\cos t)}=\prod_{n=-\infty}^{\infty}\frac{(t-2n\pi)^2}{4(t+n\pi)^2},$$

from which, solving for $\cos t$, we obtain another alternative infinite product representation for the cosine function:

$$\cos t=-1+\frac{1}{2}\prod_{n=-\infty}^{\infty}\frac{4(t+n\pi)^2}{(t-2n\pi)^2}.\qquad(6.26)$$

To analyze the convergence of this form, an approach will be applied that is similar to that used earlier for the form in (6.15). That is, by isolating the term with $n=0$, which is equal to $4$, and pairing the terms with $n=k$ and $n=-k$, we rewrite the representation in (6.26) in the form

$$\cos t=-1+2\prod_{k=1}^{\infty}\frac{16(t^2-k^2\pi^2)^2}{(t^2-4k^2\pi^2)^2}=-1+2\prod_{k=1}^{\infty}\left(1+\frac{3t^2(5t^2-8k^2\pi^2)}{(t^2-4k^2\pi^2)^2}\right).$$

Since the degree of the polynomial in $k$ in the denominator is two units higher than that in the numerator (four against two), we conclude that the product in (6.26) converges at the rate $1/k^2$.

Fig. 6.7 Different product expansions of the cosine function

Another alternative infinite product expansion for the cosine function directly follows from the identity

$$\prod_{n=-\infty}^{\infty}\frac{(t-n\pi)^2}{(u+n\pi)^2}=\frac{1-\cos 2t}{1-\cos 2u},$$

which we saw earlier in (6.14). Indeed, assuming in the above $t:=t/2$ and $u=\pi/4$, we reduce it to

$$\cos t=1-\prod_{n=-\infty}^{\infty}\frac{4(t-2n\pi)^2}{(1+4n)^2\pi^2}.\qquad(6.27)$$

The reader is encouraged to analyze the convergence of this representation in the way suggested earlier.
So, as follows from our presentation, the alternative expansions obtained thus far for the cosine function (see (6.19), (6.20), (6.25), (6.26), and (6.27)) all converge at the same rate $1/k^2$, although our experience reveals some differences in their practical convergence. To justify this assertion, take a look at Fig. 6.7, which gives a view of the actual convergence: the 10th partial products are depicted for each expansion involved. Note that of all the expansions, the one in (6.27) (smaller box curve) appears to be the "most accurate," followed by the one in (6.19) (cross curve) and the classical Euler expansion in (6.20) (circle curve).
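The practical comparison behind Fig. 6.7 can be reproduced in a few lines. The sketch below (Python; the function names are ours) implements partial products of the Euler form (6.20), of (6.25), and of (6.27) written in its paired form, and prints them against the exact cosine.

```python
import math

PI2 = math.pi**2

def cos_620(t, K):
    """K-th partial product of the Euler form (6.20)."""
    p = 1.0
    for k in range(1, K + 1):
        p *= 1 - 4 * t * t / ((2 * k - 1)**2 * PI2)
    return p

def cos_625(t, K):
    """K-th partial product of (6.25)."""
    p = 1.0
    for k in range(1, K + 1):
        p *= 1 + 3 * t * t / (t * t - k * k * PI2)
    return p

def cos_627(t, K):
    """K-th truncation of (6.27), in its paired form."""
    p = 1.0
    for k in range(1, K + 1):
        p *= 16 * (t * t - 4 * k * k * PI2)**2 / ((1 - 16 * k * k)**2 * PI2**2)
    return 1 - 4 * t * t / PI2 * p

for t in (0.5, 1.0):
    print(t, math.cos(t), cos_620(t, 10), cos_625(t, 10), cos_627(t, 10))
```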
The expansion in (6.15) has already been used in this section for the derivation of a number of infinite product representations of trigonometric functions. And yet it can be helpful in the development of some other infinite product expansions. To support this claim, let us show that it can, for example, be used to generate such an expansion for the trigonometric tangent function. In doing so, we introduce a single variable in (6.15) by leaving the $t$ variable as it is, while expressing the $u$ variable as $u:=\pi/2-t$. This yields for the right-hand side in (6.15)

$$\frac{\sin t}{\sin u}=\frac{\sin t}{\sin(\pi/2-t)}=\tan t,$$

transforming the whole relation in (6.15) into

$$\tan t=\prod_{n=-\infty}^{\infty}\frac{2(t-n\pi)}{(1+2n)\pi-2t}.\qquad(6.28)$$

The uniform convergence of this representation, for any value of $t$ in the domain of the tangent function, clearly follows from the analysis of the expansion in (6.15) that was completed earlier in this section.
An alternative to the infinite product representation in (6.28) for the tangent function can be obtained from the identity in (6.7). Indeed, if the parameter $\beta$ is set equal to zero in (6.7), then the latter reads as

$$\prod_{n=-\infty}^{\infty}\frac{(t+2n\pi)^2\bigl[u+(2n+1)\pi\bigr]^2}{(u+2n\pi)^2\bigl[t+(2n+1)\pi\bigr]^2}=\frac{(1-\cos t)(1+\cos u)}{(1-\cos u)(1+\cos t)}=\tan^2\frac{t}{2}\cot^2\frac{u}{2}.$$

It is evident that the above identity holds, for values of $t$ and $u$ from the domain of the function $\tan^2\frac{t}{2}\cot^2\frac{u}{2}$, if the identity

$$\tan\frac{t}{2}\cot\frac{u}{2}=\prod_{n=-\infty}^{\infty}\frac{(t+2n\pi)\bigl[u+(2n+1)\pi\bigr]}{(u+2n\pi)\bigl[t+(2n+1)\pi\bigr]}\qquad(6.29)$$

also holds.
The identity in (6.29) can be further transformed. Assuming $u=\pi/2$ and $t/2:=t$, we arrive at

$$\tan t=\prod_{n=-\infty}^{\infty}\frac{2(3+4n)(t+n\pi)}{(1+4n)\bigl[2t+(2n+1)\pi\bigr]}.\qquad(6.30)$$

Uniform convergence of this infinite product, for any value of $t$ in the domain of the tangent function, can be verified if we transform it into

$$\tan t=\frac{6t}{2t+\pi}\prod_{k=1}^{\infty}\frac{4(9-16k^2)(t^2-k^2\pi^2)}{(1-16k^2)\bigl[(2t+\pi)^2-4k^2\pi^2\bigr]}$$

and rewrite it in an equivalent form as

$$\tan t=\frac{6t}{2t+\pi}\prod_{k=1}^{\infty}\left(1+\frac{(4t-\pi)\bigl[8t+(1+16k^2)\pi\bigr]}{(1-16k^2)\bigl[(2t+\pi)^2-4k^2\pi^2\bigr]}\right).$$

Since the series

$$\sum_{k=1}^{\infty}\frac{(4t-\pi)\bigl[8t+(1+16k^2)\pi\bigr]}{(1-16k^2)\bigl[(2t+\pi)^2-4k^2\pi^2\bigr]}$$

Fig. 6.8 Convergence pattern of the expansions in (6.28) and (6.30)

converges at the rate $1/k^2$, the infinite product in (6.30) converges uniformly to the tangent function at every point in its domain.

Figure 6.8 gives a clear view of the convergence rate of the expansions in (6.28) and (6.30); the 10th partial products are shown. It is important to note that one of these expansions approximates the exact values of the tangent function strictly from above, whereas the other one does so strictly from below. Note also that this sandwich-type feature holds for every value of the truncation parameter $K$, making convenient the simultaneous use of both expansions in (6.28) and (6.30).
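Both tangent expansions are straightforward to evaluate as symmetric partial products. The sketch below (Python; the names are ours) compares truncations of (6.28) and (6.30) with the exact tangent; printing a range of $K$ values also makes the sandwich behavior easy to observe.

```python
import math

def tan_628(t, K):
    """Symmetric truncation of (6.28) at n = -K..K."""
    p = 1.0
    for n in range(-K, K + 1):
        p *= 2 * (t - n * math.pi) / ((1 + 2 * n) * math.pi - 2 * t)
    return p

def tan_630(t, K):
    """Symmetric truncation of (6.30) at n = -K..K."""
    p = 1.0
    for n in range(-K, K + 1):
        p *= 2 * (3 + 4 * n) * (t + n * math.pi) / ((1 + 4 * n) * (2 * t + (2 * n + 1) * math.pi))
    return p

t = 0.5
for K in (5, 10, 50):
    print(K, tan_628(t, K), tan_630(t, K), math.tan(t))
```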
It is evident that the relation that we obtained in (6.30) yields the infinite product representation

$$\cot t=\prod_{n=-\infty}^{\infty}\frac{(1+4n)\bigl[2t+(2n+1)\pi\bigr]}{2(3+4n)(t+n\pi)},\qquad t\neq n\pi,\qquad(6.31)$$

for the cotangent function, while the alternative infinite product expansion

$$\cot t=\prod_{n=-\infty}^{\infty}\frac{(1+2n)\pi-2t}{2(t-n\pi)},\qquad t\neq n\pi,\qquad(6.32)$$

for the cotangent function follows from the expansion for the tangent function in (6.28).

By the way, the representation in (6.31) can be directly obtained from that in (6.29) by letting $t=\pi/2$ and making the substitution $u/2:=t$.
Another infinite product representation of a trigonometric function can directly be obtained from that in (6.29), that is,

$$\frac{\tan At}{\tan Bt}=\prod_{n=-\infty}^{\infty}\frac{(At+n\pi)\bigl[2Bt+(1+2n)\pi\bigr]}{(Bt+n\pi)\bigl[2At+(1+2n)\pi\bigr]}.\qquad(6.33)$$

This follows from the identity in (6.29) if a single variable $t$ is introduced there as $t/2:=At$ and $u/2:=Bt$, where $A$ and $B$ are real constants that meet the following constraints:

$$At\neq(1+2n)\pi/2\quad\text{and}\quad Bt\neq n\pi,\qquad n=0,\pm1,\pm2,\ldots.$$

In Exercise 6.8, the reader is advised to analyze the representation in (6.33) and determine its convergence rate. This can be accomplished using the approach employed earlier in this section.
In the next section, we will show that the use of the identities derived earlier in
Sect. 6.1 can be extended to another type of elementary functions. Namely, those
identities also allow one to derive some infinite product representations for some
hyperbolic functions.

6.3 Hyperbolic Functions

The identities derived earlier in Sect. 6.1 (see (6.3), (6.7), (6.10), and (6.13)) are also helpful in obtaining some infinite product representations for hyperbolic functions. But before we proceed with specifics, let us revisit some of the infinite product expansions obtained in Sect. 6.2 for trigonometric functions and figure out what those expansions transform to with the aid of the analytic continuation formulas

$$i\sin iz=-\sinh z;\qquad\cos iz=\cosh z.\qquad(6.34)$$

Similarly to the conversion of the classical Euler infinite product expansion for the trigonometric sine function (see (2.1) of Chap. 2) into the expansion for the hyperbolic sine function in (2.3), which was accomplished with the aid of the first of the formulas in (6.34), the expansion

$$\sin t=\frac{2t}{\pi}\prod_{k=1}^{\infty}\left(1+\frac{4t^2-\pi^2}{(1-4k^2)\pi^2}\right)$$

derived in (6.17) converts into the expansion

$$\sinh t=\frac{2t}{\pi}\prod_{k=1}^{\infty}\left(1-\frac{4t^2+\pi^2}{(1-4k^2)\pi^2}\right)\qquad(6.35)$$

of the hyperbolic sine function.


Some infinite product representations of the hyperbolic cosine function can also be directly obtained from those derived earlier for the trigonometric cosine. Taking

$$\cos t=\prod_{k=1}^{\infty}\left(1+\frac{3t^2}{t^2-k^2\pi^2}\right)$$

from (6.25), for example, and utilizing the second of the formulas in (6.34), we have

$$\cosh t=\prod_{k=1}^{\infty}\left(1+\frac{3t^2}{t^2+k^2\pi^2}\right).\qquad(6.36)$$

The alternative representation

$$\cos t=-1+\frac{1}{2}\prod_{n=-\infty}^{\infty}\frac{4(t+n\pi)^2}{(t-2n\pi)^2}$$

for the trigonometric cosine shown in (6.26) also works. But before going to its analytic continuation, we convert it to an equivalent form as

$$\cos t=-1+2\prod_{k=1}^{\infty}\frac{16(t^2-k^2\pi^2)^2}{(t^2-4k^2\pi^2)^2}.$$

This yields

$$\cosh t=-1+2\prod_{k=1}^{\infty}\frac{16(t^2+k^2\pi^2)^2}{(t^2+4k^2\pi^2)^2}.\qquad(6.37)$$
Another alternative to the two infinite product representations for $\cosh t$ just presented follows from

$$\cos t=1-\prod_{n=-\infty}^{\infty}\frac{4(t-2n\pi)^2}{(1+4n)^2\pi^2},$$

shown in (6.27). Converting it first to the equivalent form

$$\cos t=1-\frac{4t^2}{\pi^2}\prod_{k=1}^{\infty}\frac{16(t^2-4k^2\pi^2)^2}{(1-16k^2)^2\pi^4},$$

we then obtain

$$\cosh t=1+\frac{4t^2}{\pi^2}\prod_{k=1}^{\infty}\frac{16(t^2+4k^2\pi^2)^2}{(1-16k^2)^2\pi^4}.\qquad(6.38)$$
Some other infinite product representations of trigonometric functions can also be immediately converted upon analytic continuation. Revisiting, for example, the relation

$$\frac{\sin At}{\sin Bt}=\frac{A}{B}\prod_{k=1}^{\infty}\frac{A^2t^2-k^2\pi^2}{B^2t^2-k^2\pi^2},$$

derived earlier in Sect. 6.2, we convert it into a corresponding relation written in terms of hyperbolic functions, which reads as

$$\frac{\sinh At}{\sinh Bt}=\frac{A}{B}\prod_{k=1}^{\infty}\frac{A^2t^2+k^2\pi^2}{B^2t^2+k^2\pi^2}.\qquad(6.39)$$

The infinite product representations shown in (6.35)–(6.39) are converted from the corresponding representations obtained earlier for trigonometric functions, with analytic continuation used as an instrument for that. The list of such conversions can be further extended. We are not going to explore this track in more detail but encourage the reader to do so.
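The converted hyperbolic expansions can be verified in the same way as their trigonometric parents. The sketch below (Python; the function names are ours) checks partial products of (6.35) and (6.36) against the library `sinh` and `cosh`.

```python
import math

def sinh_635(t, K):
    """K-th partial product of (6.35)."""
    p = 2 * t / math.pi
    for k in range(1, K + 1):
        p *= 1 - (4 * t * t + math.pi**2) / ((1 - 4 * k * k) * math.pi**2)
    return p

def cosh_636(t, K):
    """K-th partial product of (6.36)."""
    p = 1.0
    for k in range(1, K + 1):
        p *= 1 + 3 * t * t / (t * t + k * k * math.pi**2)
    return p

t = 1.0
print(sinh_635(t, 10), math.sinh(t))
print(cosh_636(t, 10), math.cosh(t))
```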
In the remaining part of this section, we will be investigating the potential of the identities derived in Sect. 6.1. To begin, we revisit first the identity in (6.3). By assuming for the variables $t$ and $u$ the values $t=0$ and $u=\pi/2$, we transform (6.3) into the single-variable identity

$$\prod_{n=-\infty}^{\infty}\frac{\beta^2+4n^2\pi^2}{\beta^2+(1+2n)^2\pi^2}=\frac{(1-e^{\beta})^2}{(1+e^{\beta})^2}.$$

The exponential expression on the right-hand side can be rewritten in terms of a hyperbolic function, transforming the above into

$$\tanh^2\frac{\beta}{2}=\prod_{n=-\infty}^{\infty}\frac{\beta^2+4n^2\pi^2}{\beta^2+(1+2n)^2\pi^2},\qquad(6.40)$$

from which, by introducing the variable $t:=\beta/2$, we have

$$\tanh^2 t=\prod_{n=-\infty}^{\infty}\frac{4(t^2+n^2\pi^2)}{4t^2+(1+2n)^2\pi^2}.\qquad(6.41)$$

This delivers the following dual expansion for the hyperbolic tangent function:

$$\tanh t=\pm\prod_{n=-\infty}^{\infty}\frac{2\sqrt{t^2+n^2\pi^2}}{\sqrt{4t^2+(1+2n)^2\pi^2}},\qquad(6.42)$$

where the expansion with the plus sign holds for $t\geq 0$, while for the expansion with the minus sign, the variable $t$ is assumed to be less than zero.
It can be shown that the infinite product representations in (6.41) and (6.42) converge uniformly for $-\infty<t<\infty$. We will verify this assertion by the method used earlier for the product in (6.15). In doing so, we isolate first the term with $n=0$ in (6.41), which is equal to

$$\frac{4t^2}{4t^2+\pi^2},$$

and then pair the terms with $n=k$ and $n=-k$. This transforms the expansion in (6.41) into

$$\tanh^2 t=\frac{4t^2}{4t^2+\pi^2}\prod_{k=1}^{\infty}\frac{16(t^2+k^2\pi^2)^2}{\bigl[4t^2+(1+2k)^2\pi^2\bigr]\bigl[4t^2+(1-2k)^2\pi^2\bigr]}.\qquad(6.43)$$

We then rewrite the general term

$$\frac{16(t^2+k^2\pi^2)^2}{\bigl[4t^2+(1+2k)^2\pi^2\bigr]\bigl[4t^2+(1-2k)^2\pi^2\bigr]}$$

of the infinite product of (6.43) in the equivalent form

$$1-\frac{\pi^2\bigl[8t^2+(1-8k^2)\pi^2\bigr]}{\bigl[4t^2+(1+2k)^2\pi^2\bigr]\bigl[4t^2+(1-2k)^2\pi^2\bigr]}.\qquad(6.44)$$

Hence, since the numerator in the second additive term in (6.44) represents a second-degree polynomial in $k$, whereas the degree of its denominator is four, we conclude that the infinite product in (6.41) is indeed convergent, with a convergence rate of order $1/k^2$.
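The convergence argument just given can be observed numerically. The sketch below (Python; the helper name is ours) evaluates the symmetric partial product of (6.41) and compares it with $\tanh^2 t$.

```python
import math

def tanh_sq_641(t, K):
    """Symmetric truncation of (6.41) at n = -K..K."""
    p = 1.0
    for n in range(-K, K + 1):
        p *= 4 * (t * t + n * n * math.pi**2) / (4 * t * t + (1 + 2 * n)**2 * math.pi**2)
    return p

t = 1.0
print(tanh_sq_641(t, 1000), math.tanh(t)**2)
```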
Another infinite product representation for a hyperbolic function can be obtained from another multivariable identity also derived earlier in Sect. 6.1. That is, assigning in (6.7) the values of $0$ and $\pi$ for the variables $t$ and $u$, respectively, we read it as

$$\left(\frac{1-e^{\beta}}{1+e^{\beta}}\right)^{4}=\prod_{n=-\infty}^{\infty}\frac{(\beta^2+4n^2\pi^2)\bigl[\beta^2+4(1+n)^2\pi^2\bigr]}{\bigl[\beta^2+(1+2n)^2\pi^2\bigr]^2},$$

which converts to the infinite product expansion

$$\tanh^4\frac{\beta}{2}=\prod_{n=-\infty}^{\infty}\frac{(\beta^2+4n^2\pi^2)\bigl[\beta^2+4(1+n)^2\pi^2\bigr]}{\bigl[\beta^2+(1+2n)^2\pi^2\bigr]^2}.$$

The above can be rewritten as

$$\tanh^4 t=\prod_{n=-\infty}^{\infty}\frac{16(t^2+n^2\pi^2)\bigl[t^2+(1+n)^2\pi^2\bigr]}{\bigl[4t^2+(1+2n)^2\pi^2\bigr]^2}\qquad(6.45)$$

with the equivalent form

$$\tanh^4 t=\prod_{n=-\infty}^{\infty}\left(1+\frac{\pi^2\bigl[8\bigl(t^2-n(n+1)\pi^2\bigr)-\pi^2\bigr]}{\bigl[4t^2+(1+2n)^2\pi^2\bigr]^2}\right).$$

The uniform convergence of the above infinite product can be proven for any real value of $t$ in the way applied earlier to the identity in (6.41).

The identity in (6.10) can also be used to derive some infinite product representations for hyperbolic functions. Indeed, assuming the values of $0$ and $\pi$ for the variables $t$ and $u$, respectively, we arrive at the expansion

$$\coth^2\frac{\alpha}{2}\tanh^2\frac{\beta}{2}=\prod_{n=-\infty}^{\infty}\frac{(\beta^2+4n^2\pi^2)\bigl[\alpha^2+(1+2n)^2\pi^2\bigr]}{(\alpha^2+4n^2\pi^2)\bigl[\beta^2+(1+2n)^2\pi^2\bigr]}.\qquad(6.46)$$

It is worth noting that the expansion in (6.40) follows from that in (6.46) if $\alpha$ is taken to infinity. If, on the other hand, the limit is taken in (6.46) as $\beta$ approaches infinity, then one arrives at the expansion

$$\coth^2\frac{\alpha}{2}=\prod_{n=-\infty}^{\infty}\frac{\alpha^2+(1+2n)^2\pi^2}{\alpha^2+4n^2\pi^2},$$

which reads as

$$\coth^2 t=\prod_{n=-\infty}^{\infty}\frac{4t^2+(1+2n)^2\pi^2}{4(t^2+n^2\pi^2)},\qquad t\neq 0,\qquad(6.47)$$

if the variable $t$ is introduced in terms of $\alpha$ as $t:=\alpha/2$. Note that the above expansion has been derived from the identity of (6.10). But it can also be directly obtained as a reciprocal of the expansion in (6.41).
The dual expansion for the hyperbolic cotangent function follows from that in (6.47) as

$$\coth t=\pm\prod_{n=-\infty}^{\infty}\frac{\sqrt{4t^2+(1+2n)^2\pi^2}}{2\sqrt{t^2+n^2\pi^2}},\qquad t\neq 0,\qquad(6.48)$$

with the plus sign holding for $t>0$, while the minus sign holds for $t<0$.

Uniform convergence of the expansion in (6.47), for any nonzero value of t, becomes evident after we reduce it to

\[
\coth^{2}t=\frac{4t^{2}+\pi^{2}}{4t^{2}}
\prod_{k=1}^{\infty}\frac{[4t^{2}+(1+2k)^{2}\pi^{2}]\,[4t^{2}+(1-2k)^{2}\pi^{2}]}{16(t^{2}+k^{2}\pi^{2})^{2}}
\]
146 6 Representation of Elementary Functions

and then to

\[
\coth^{2}t=\frac{4t^{2}+\pi^{2}}{4t^{2}}
\prod_{k=1}^{\infty}\left\{1+\frac{\pi^{2}\,[8t^{2}+(1-8k^{2})\pi^{2}]}{16(t^{2}+k^{2}\pi^{2})^{2}}\right\}.
\]

Since the numerator of the second additive component in the braces represents a second-degree polynomial in k, while the degree of the denominator polynomial is two units higher, the expansion in (6.47) converges at a rate of order 1/k².
Clearly enough, the expansions in (6.47) and (6.48) can be directly obtained from
those in (6.41) and (6.42), respectively.
An interesting infinite product representation of a single-variable hyperbolic function follows from the two-variable identity in (6.46). Indeed, if a new variable t is introduced there as α = 2At and β = 2Bt, where A and B represent real constants, with A ≠ 0, then the right-hand side of the identity in (6.46) transforms into the infinite product expansion

\[
\prod_{n=-\infty}^{\infty}\frac{(B^{2}t^{2}+n^{2}\pi^{2})\,[4A^{2}t^{2}+(1+2n)^{2}\pi^{2}]}{(A^{2}t^{2}+n^{2}\pi^{2})\,[4B^{2}t^{2}+(1+2n)^{2}\pi^{2}]} \tag{6.49}
\]

of the function

\[
F(t)=\frac{\tanh^{2}Bt}{\tanh^{2}At},
\]

whose domain clearly is the set of all real numbers except for t = 0. Hence, the expansion in (6.49) must converge uniformly in the domain of F(t). This statement can, however, be strengthened with the assertion that the expansion in (6.49) converges at t = 0 as well, and that its value at t = 0 is the limit of F(t) as t approaches zero. That is,

\[
\lim_{t\to 0}\frac{\tanh^{2}Bt}{\tanh^{2}At}=\frac{B^{2}}{A^{2}}.
\]
This assertion is not evident. To verify it, we split off the n = 0 term in (6.49), which yields

\[
\prod_{n=-\infty}^{\infty}\frac{(B^{2}t^{2}+n^{2}\pi^{2})\,[4A^{2}t^{2}+(1+2n)^{2}\pi^{2}]}{(A^{2}t^{2}+n^{2}\pi^{2})\,[4B^{2}t^{2}+(1+2n)^{2}\pi^{2}]}
=\frac{B^{2}(4A^{2}t^{2}+\pi^{2})}{A^{2}(4B^{2}t^{2}+\pi^{2})}
\prod_{k=1}^{\infty}\frac{(B^{2}t^{2}+k^{2}\pi^{2})\,[4A^{2}t^{2}+(1+2k)^{2}\pi^{2}]}{(A^{2}t^{2}+k^{2}\pi^{2})\,[4B^{2}t^{2}+(1+2k)^{2}\pi^{2}]}
\times\prod_{k=1}^{\infty}\frac{(B^{2}t^{2}+k^{2}\pi^{2})\,[4A^{2}t^{2}+(1-2k)^{2}\pi^{2}]}{(A^{2}t^{2}+k^{2}\pi^{2})\,[4B^{2}t^{2}+(1-2k)^{2}\pi^{2}]}.
\]
k=1

Combining the two infinite products into one, we can rewrite the above relation as

\[
\prod_{n=-\infty}^{\infty}\frac{(B^{2}t^{2}+n^{2}\pi^{2})\,[4A^{2}t^{2}+(1+2n)^{2}\pi^{2}]}{(A^{2}t^{2}+n^{2}\pi^{2})\,[4B^{2}t^{2}+(1+2n)^{2}\pi^{2}]}
=\frac{B^{2}(4A^{2}t^{2}+\pi^{2})}{A^{2}(4B^{2}t^{2}+\pi^{2})}
\prod_{k=1}^{\infty}\frac{(B^{2}t^{2}+k^{2}\pi^{2})^{2}\,[16A^{4}t^{4}+8(1+4k^{2})A^{2}t^{2}\pi^{2}+(1-4k^{2})^{2}\pi^{4}]}{(A^{2}t^{2}+k^{2}\pi^{2})^{2}\,[16B^{4}t^{4}+8(1+4k^{2})B^{2}t^{2}\pi^{2}+(1-4k^{2})^{2}\pi^{4}]}.
\]

It can readily be seen that the general term of the last infinite product equals unity at t = 0, implying that the value of the expansion in (6.49) at t = 0 is indeed B²/A². This finally allows us to obtain the expansion

\[
\frac{\tanh^{2}Bt}{\tanh^{2}At}
=\prod_{n=-\infty}^{\infty}\frac{(B^{2}t^{2}+n^{2}\pi^{2})\,[4A^{2}t^{2}+(1+2n)^{2}\pi^{2}]}{(A^{2}t^{2}+n^{2}\pi^{2})\,[4B^{2}t^{2}+(1+2n)^{2}\pi^{2}]},
\]

which is valid for any real value of t.
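Since the expansion is claimed for every real t, including the removable point t = 0, a small numerical probe is reassuring. This snippet is an added illustration; the choices A = 1, B = 2, and the truncation level are arbitrary:

```python
import math

def tanh_ratio_partial(t, A, B, N=20000):
    """Symmetric partial product of the expansion for tanh^2(Bt)/tanh^2(At), t != 0."""
    p = 1.0
    for n in range(-N, N + 1):
        num = (B * B * t * t + n * n * math.pi ** 2) * (4 * A * A * t * t + (1 + 2 * n) ** 2 * math.pi ** 2)
        den = (A * A * t * t + n * n * math.pi ** 2) * (4 * B * B * t * t + (1 + 2 * n) ** 2 * math.pi ** 2)
        p *= num / den
    return p

# Away from t = 0 the product tracks the ratio itself; for t near 0 it
# approaches the limiting value B^2/A^2 = 4 when A = 1, B = 2.
at_half = tanh_ratio_partial(0.5, 1.0, 2.0)
near_zero = tanh_ratio_partial(1e-4, 1.0, 2.0)
```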


Another infinite product expansion of a multivariable function can be obtained from the identity presented in (6.13) of Sect. 6.1. In doing so, we assume for the variables t and u in (6.13) the values t = π and u = π/2. This yields

\[
\frac{(1+e^{\alpha})^{2}(1+e^{\beta})^{2}}{(1+e^{2\alpha})(1+e^{2\beta})}
=\prod_{n=-\infty}^{\infty}\frac{16\,[\alpha^{2}+(1-2n)^{2}\pi^{2}]\,[\beta^{2}+(1-2n)^{2}\pi^{2}]}{[4\alpha^{2}+(1+4n)^{2}\pi^{2}]\,[4\beta^{2}+(1+4n)^{2}\pi^{2}]}. \tag{6.50}
\]

To simplify the above relation, we take advantage of its specific (symmetric) form. Indeed, introducing a new variable t by setting t = α = β, we reduce (6.50) to

\[
\frac{(1+e^{t})^{4}}{(1+e^{2t})^{2}}
=\prod_{n=-\infty}^{\infty}\frac{16\,[t^{2}+(1-2n)^{2}\pi^{2}]^{2}}{[4t^{2}+(1+4n)^{2}\pi^{2}]^{2}},
\]

which further reduces to

\[
\frac{(1+e^{t})^{2}}{1+e^{2t}}
=\prod_{n=-\infty}^{\infty}\frac{4\,[t^{2}+(1-2n)^{2}\pi^{2}]}{4t^{2}+(1+4n)^{2}\pi^{2}}. \tag{6.51}
\]

Converting the left-hand side of the above to a hyperbolic function form, we transform (6.51) into

\[
\frac{1+\cosh t}{\cosh t}
=\prod_{n=-\infty}^{\infty}\frac{4\,[t^{2}+(1-2n)^{2}\pi^{2}]}{4t^{2}+(1+4n)^{2}\pi^{2}}.
\]

This leads to the compact infinite product expansion

\[
\operatorname{sech} t=-1+\prod_{n=-\infty}^{\infty}\frac{4\,[t^{2}+(1-2n)^{2}\pi^{2}]}{4t^{2}+(1+4n)^{2}\pi^{2}} \tag{6.52}
\]

Fig. 6.9 Convergence of the representation in (6.52)

of the hyperbolic secant function. The above representation converges uniformly for any value of t. We verify this assertion by our customary procedure. For that, the infinite product in (6.52) is transformed as

\[
\prod_{n=-\infty}^{\infty}\frac{4\,[t^{2}+(1-2n)^{2}\pi^{2}]}{4t^{2}+(1+4n)^{2}\pi^{2}}
=\frac{4(t^{2}+\pi^{2})}{4t^{2}+\pi^{2}}
\prod_{k=1}^{\infty}\frac{16\,[t^{2}+(1+2k)^{2}\pi^{2}]\,[t^{2}+(1-2k)^{2}\pi^{2}]}{[4t^{2}+(1+4k)^{2}\pi^{2}]\,[4t^{2}+(1-4k)^{2}\pi^{2}]}.
\]

After some trivial algebra with the general term of the above product, the infinite product representation of the hyperbolic secant function exhibited in (6.52) converts into the equivalent form

\[
\operatorname{sech} t=-1+\frac{4(t^{2}+\pi^{2})}{4t^{2}+\pi^{2}}
\prod_{k=1}^{\infty}\left\{1+\frac{3\pi^{2}\,[8t^{2}+\pi^{2}(5-32k^{2})]}{[4t^{2}+(1+4k)^{2}\pi^{2}]\,[4t^{2}+(1-4k)^{2}\pi^{2}]}\right\}. \tag{6.53}
\]
From a comparison of the highest degree of the multiplication index k in the numerator (which is two) and in the denominator (which is four) of (6.53), it follows that the expansion in (6.52) converges uniformly for any value of t, and its convergence rate is of order 1/k².

To give a sense of the actual convergence of the expansion derived in (6.52), graphs of its partial products

\[
\operatorname{sech} t\approx-1+\prod_{n=-N}^{N}\frac{4\,[t^{2}+(1-2n)^{2}\pi^{2}]}{4t^{2}+(1+4n)^{2}\pi^{2}}
\]

are depicted in Fig. 6.9 for N = 5, 10, and 50.
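The partial products behind Fig. 6.9 are straightforward to reproduce numerically; a minimal sketch (the function name is ours, not the book's):

```python
import math

def sech_partial(t, N):
    """Partial product of (6.52): -1 + prod over n = -N..N of 4[t^2+(1-2n)^2 pi^2]/[4t^2+(1+4n)^2 pi^2]."""
    p = 1.0
    for n in range(-N, N + 1):
        p *= 4.0 * (t * t + (1 - 2 * n) ** 2 * math.pi ** 2) / (4.0 * t * t + (1 + 4 * n) ** 2 * math.pi ** 2)
    return -1.0 + p

# Increasing N from 5 to 50 visibly tightens the fit to sech t, as in Fig. 6.9.
errors = [abs(sech_partial(1.2, N) - 1.0 / math.cosh(1.2)) for N in (5, 10, 50)]
```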



6.4 Chapter Exercises


6.1 Using the method of eigenfunction expansion, derive the expression presented
in (6.1) for the Green’s function of the Dirichlet problem stated for the Laplace
equation on the infinite strip {−∞ < x < ∞, 0 < y < b}.

6.2 Using the method of eigenfunction expansion, derive the expression presented
in (6.5) for the Green’s function of the Dirichlet–Neumann problem stated for the
Laplace equation on the infinite strip {−∞ < x < ∞, 0 < y < b}.

6.3 Using the method of eigenfunction expansion, derive the expression presented
in (6.8) for the Green’s function of the Dirichlet problem stated for the Laplace
equation on the semi-infinite strip {0 < x < ∞, 0 < y < b}.

6.4 Using the method of eigenfunction expansion, derive the expression presented
in (6.11) for the Green’s function of the mixed problem stated for the Laplace equa-
tion on the semi-infinite strip {0 < x < ∞, 0 < y < b}.

6.5 Illustrate the equivalence of the expansions presented in (6.22) and (6.24) by
graphing their partial products.

6.6 Prove the convergence of the infinite product representation derived in (6.27)
for the cosine function.

6.7 Prove the convergence of the infinite product representation derived in (6.28)
for the tangent function.

6.8 Determine the convergence rate of the infinite product representation obtained
in (6.33).

6.9 Derive an infinite product representation for the function

tan x − cot x

and determine its convergence rate.


Chapter 7
Hints and Answers to Chapter Exercises

7.1 Chapter 2

2.5 Transforming the function in the statement as


  
\[
a\sin x+b\cos x=\sqrt{a^{2}+b^{2}}\left(\frac{a}{\sqrt{a^{2}+b^{2}}}\sin x+\frac{b}{\sqrt{a^{2}+b^{2}}}\cos x\right)
\]

and introducing the argument

\[
\varphi=\arccos\frac{a}{\sqrt{a^{2}+b^{2}}},
\]

which implies

\[
\varphi=\arcsin\frac{b}{\sqrt{a^{2}+b^{2}}}=\arctan\frac{b}{a},
\]

we have

\[
a\sin x+b\cos x=\sqrt{a^{2}+b^{2}}\,\sin(x+\varphi),
\]

which, in compliance with the Euler expansion in (2.1), yields

\[
a\sin x+b\cos x=\sqrt{a^{2}+b^{2}}\,\bigl(x+\arctan(b/a)\bigr)
\prod_{k=1}^{\infty}\left(1-\frac{(x+\arctan(b/a))^{2}}{k^{2}\pi^{2}}\right).
\]
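This closed form is easy to spot-check numerically (an added illustration; it presumes a > 0 so that arctan(b/a) is the correct phase):

```python
import math

def sum_expansion(x, a, b, K=20000):
    """Truncated Euler-product form of a*sin(x) + b*cos(x) from Exercise 2.5."""
    z = x + math.atan2(b, a)  # equals x + arctan(b/a) for a > 0
    p = math.sqrt(a * a + b * b) * z
    for k in range(1, K + 1):
        p *= 1.0 - z * z / (k * k * math.pi ** 2)
    return p

approx = sum_expansion(0.5, 2.0, 1.0)
exact = 2.0 * math.sin(0.5) + 1.0 * math.cos(0.5)
```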

2.6 Expressing the sum of sine functions in the statement in the product form
 
\[
\sin x+\sin y=2\sin\frac{x+y}{2}\cos\frac{x-y}{2}=2\sin\frac{x+y}{2}\sin\frac{\pi-(x-y)}{2}
\]

Y.A. Melnikov, Green’s Functions and Infinite Products, 151


DOI 10.1007/978-0-8176-8280-4_7, © Springer Science+Business Media, LLC 2011
152 7 Hints and Answers to Chapter Exercises

and replacing the sine functions with their Euler infinite product representations, one arrives at

\[
\sin x+\sin y=\frac{(x+y)\,[\pi-(x-y)]}{2}
\prod_{k=1}^{\infty}\left(1-\frac{(x+y)^{2}}{4k^{2}\pi^{2}}\right)\left(1-\frac{(\pi-(x-y))^{2}}{4k^{2}\pi^{2}}\right),
\]

which can be rewritten, after an elementary algebra, as

\[
\sin x+\sin y=\frac{(x+y)\,[\pi-(x-y)]}{2}
\prod_{k=1}^{\infty}\left(1-\frac{(x+y)^{2}+(\pi-(x-y))^{2}}{4k^{2}\pi^{2}}+\frac{(x+y)^{2}(\pi-(x-y))^{2}}{16k^{4}\pi^{4}}\right).
\]

2.7 If the sum of cosine functions is expressed in the product form

\[
\cos x+\cos y=2\cos\frac{x+y}{2}\cos\frac{x-y}{2}
\]

and each of the cosine functions on the right-hand side is replaced with its Euler infinite product representation in (2.2), then one obtains

\[
\cos x+\cos y=2\prod_{k=1}^{\infty}\left(1-\frac{(x+y)^{2}}{(2k-1)^{2}\pi^{2}}\right)\left(1-\frac{(x-y)^{2}}{(2k-1)^{2}\pi^{2}}\right),
\]

which converts, with the aid of elementary algebra, into

\[
\cos x+\cos y=2\prod_{k=1}^{\infty}\left(1-\frac{2(x^{2}+y^{2})}{(2k-1)^{2}\pi^{2}}+\frac{(x^{2}-y^{2})^{2}}{(2k-1)^{4}\pi^{4}}\right).
\]
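A quick truncation test (our added illustration) confirms the combined form:

```python
import math

def cos_sum_partial(x, y, K=20000):
    """Truncated combined product for cos(x) + cos(y) from Exercise 2.7."""
    p = 2.0
    for k in range(1, K + 1):
        m = (2 * k - 1) ** 2 * math.pi ** 2
        p *= 1.0 - 2.0 * (x * x + y * y) / m + (x * x - y * y) ** 2 / m ** 2
    return p

approx = cos_sum_partial(0.7, 0.3)
exact = math.cos(0.7) + math.cos(0.3)
```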

2.9 Using the elementary trigonometric identity

\[
\cot x+\cot y=\frac{\sin(x+y)}{\sin x\,\sin y}
\]

and expressing the sine functions on the right-hand side with the Euler representation in (2.1), one obtains

\[
\cot x+\cot y=\frac{x+y}{xy}
\prod_{k=1}^{\infty}\frac{1-\frac{(x+y)^{2}}{k^{2}\pi^{2}}}{\bigl(1-\frac{x^{2}}{k^{2}\pi^{2}}\bigr)\bigl(1-\frac{y^{2}}{k^{2}\pi^{2}}\bigr)},
\]

which transforms ultimately into

\[
\cot x+\cot y=\frac{x+y}{xy}
\prod_{k=1}^{\infty}\left(1-\frac{xy(xy+2k^{2}\pi^{2})}{(k^{2}\pi^{2}-x^{2})(k^{2}\pi^{2}-y^{2})}\right).
\]
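The cotangent sum admits the same kind of check (our added illustration; x and y must avoid multiples of π):

```python
import math

def cot_sum_partial(x, y, K=20000):
    """Truncated product form of cot(x) + cot(y) from Exercise 2.9."""
    p = (x + y) / (x * y)
    for k in range(1, K + 1):
        m = k * k * math.pi ** 2
        p *= 1.0 - (x * y) * (x * y + 2.0 * m) / ((m - x * x) * (m - y * y))
    return p

approx = cot_sum_partial(0.6, 0.9)
exact = math.cos(0.6) / math.sin(0.6) + math.cos(0.9) / math.sin(0.9)
```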

2.11 Converting the sum of hyperbolic cotangent functions into the product

\[
\coth x+\coth y=\frac{\sinh(x+y)}{\sinh x\,\sinh y}
\]

and using the classical Euler infinite product representation in (2.3) for the right-hand side, one arrives at

\[
\coth x+\coth y=\frac{x+y}{xy}
\prod_{k=1}^{\infty}\frac{1+\frac{(x+y)^{2}}{k^{2}\pi^{2}}}{\bigl(1+\frac{x^{2}}{k^{2}\pi^{2}}\bigr)\bigl(1+\frac{y^{2}}{k^{2}\pi^{2}}\bigr)},
\]

which transforms into

\[
\coth x+\coth y=\frac{x+y}{xy}
\prod_{k=1}^{\infty}\left(1+\frac{xy(2k^{2}\pi^{2}-xy)}{(k^{2}\pi^{2}+x^{2})(k^{2}\pi^{2}+y^{2})}\right).
\]

7.2 Chapter 3
3.3 In an attempt to construct the Green's function for the Dirichlet problem for the Laplace equation on the infinite wedge Ω = {(r, ϕ): 0 < r < ∞, 0 < ϕ < 2π/5}, let the unit source (which produces the singular component of the Green's function) be located at (ρ, ψ) ∈ Ω. To compensate its trace on the fragment ϕ = 0 of the boundary of Ω, place a unit sink at (ρ, 2π − ψ) ∉ Ω. The trace of the latter on the boundary fragment ϕ = 2π/5 is compensated, in turn, with a unit source at (ρ, 4π/5 + ψ) ∉ Ω, whose trace on ϕ = 0 must be compensated with a unit sink at (ρ, 6π/5 − ψ) ∉ Ω, whose trace on ϕ = 2π/5 requires a compensation by a source at (ρ, 8π/5 + ψ) ∉ Ω. And to compensate the latter's trace on ϕ = 0, we must put a sink at (ρ, 2π/5 − ψ), which is, unfortunately, located inside Ω. This is what causes the method to fail in this case.
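The chain of reflections described above is mechanical enough to script. The sketch below is our own illustration of the argument, with angles reduced modulo 2π; it reproduces the sequence of image angles and shows that the fifth image indeed falls back inside the wedge:

```python
import math

def image_angles(psi, wedge=2.0 * math.pi / 5.0, steps=5):
    """Alternately reflect the source angle across phi = 0 and phi = wedge."""
    angles, a = [], psi
    for i in range(steps):
        # reflection across phi = 0 sends a to -a; across phi = wedge, to 2*wedge - a
        a = (-a) % (2.0 * math.pi) if i % 2 == 0 else (2.0 * wedge - a) % (2.0 * math.pi)
        angles.append(a)
    return angles

seq = image_angles(0.3)
```

For ψ = 0.3 the first four image angles all lie outside (0, 2π/5), while the last one equals 2π/5 − ψ, i.e. inside the wedge — exactly the obstruction described in the text.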

7.3 Chapter 4
4.1 Express the general solution of the equation in (4.21) as

\[
y_{g}(x)=D_{1}\exp kx+D_{2}\exp(-kx),
\]

where D₁ and D₂ are arbitrary constants. The first of the conditions in (4.22) yields

\[
D_{1}+D_{2}=D_{1}\exp ka+D_{2}\exp(-ka),
\]

while from the second condition, we have

\[
D_{1}-D_{2}=D_{1}\exp ka-D_{2}\exp(-ka).
\]

These two relations represent the homogeneous system of linear algebraic equations

\[
\begin{pmatrix}1-\exp ka & 1-\exp(-ka)\\ 1-\exp ka & \exp(-ka)-1\end{pmatrix}
\begin{pmatrix}D_{1}\\ D_{2}\end{pmatrix}
=\begin{pmatrix}0\\ 0\end{pmatrix},
\]

having only the trivial solution D₁ = D₂ = 0, because the coefficient matrix of the system is regular. Indeed, its determinant

\[
2(1-\exp ka)\bigl(\exp(-ka)-1\bigr)
\]

is nonzero.
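A direct numerical evaluation confirms the determinant computation; this snippet is an added check with arbitrary sample values of k and a:

```python
import math

def coefficient_determinant(k, a):
    """Determinant of the 2x2 coefficient matrix of the homogeneous system above."""
    E, e = math.exp(k * a), math.exp(-k * a)
    return (1 - E) * (e - 1) - (1 - e) * (1 - E)

k, a = 1.3, 0.7
expanded = coefficient_determinant(k, a)
closed = 2.0 * (1 - math.exp(k * a)) * (math.exp(-k * a) - 1)
```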

4.2 Express the general solution of the equation in (4.31) as

\[
y_{g}(x)=D_{1}\ln(mx+b)+D_{2}.
\]

The first of the boundary conditions in (4.32) yields mD₁/b = 0, while the second condition yields

\[
D_{1}\ln(ma+b)+D_{2}=0.
\]

Hence, D₁ = D₂ = 0, which implies that the boundary-value problem stated in (4.31) and (4.32) has only the trivial solution; that is, it is well posed.

4.3 Proving that the problem in (4.54) and (4.55) has a unique solution is equivalent to showing that the corresponding homogeneous problem has only the trivial solution, which is indeed true. To support this claim, express the general solution of the homogeneous equation as

\[
y_{g}(x)=D_{1}\sin kx+D_{2}\cos kx.
\]

The first boundary condition in (4.55) yields D₁ = 0, while from the second condition it follows that

\[
D_{1}\cos ka-D_{2}\cos ka=0,
\]

implying that D₂ is also zero.

4.4 The Green's function is found as

\[
g(x,s)=\frac{1}{k\cos k}
\begin{cases}
\sin kx\,\cos k(s-1), & x\le s,\\[2pt]
\sin ks\,\cos k(x-1), & x\ge s.
\end{cases}
\]

4.5 The Green's function is obtained in the form

\[
g(x,s)=\frac{1}{kE(a)}
\begin{cases}
E(x)\sinh k(s-a), & x\le s,\\[2pt]
E(s)\sinh k(x-a), & x\ge s,
\end{cases}
\]

where

\[
E(p)=\exp kp+\lambda\exp(-kp)\quad\text{and}\quad\lambda=\frac{k-h}{k+h}.
\]

7.4 Chapter 5

5.1 A closed form of the Green's function is obtained as

\[
G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln
\frac{\bigl|1+\exp\frac{\pi(z-\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|}
{\bigl|1+\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\zeta)}{2b}\bigr|}.
\]

Here and further in the answers to Exercises 5.2–5.4, the complex variable notations z = x + iy and ζ = ξ + iη are used for the field point and the source point, respectively, with the bar denoting complex conjugation.

5.2 The Green's function is obtained in the form

\[
G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\left(
\frac{\bigl|1+\exp\frac{\pi(z-\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|}
{\bigl|1+\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\zeta)}{2b}\bigr|}
\times
\frac{\bigl|1+\exp\frac{\pi(z+\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\zeta)}{2b}\bigr|}
{\bigl|1+\exp\frac{\pi(z+\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\bar\zeta)}{2b}\bigr|}\right).
\]

5.3 The Green's function is obtained in the form

\[
G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln\left(
\frac{\bigl|1+\exp\frac{\pi(z-\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|}
{\bigl|1+\exp\frac{\pi(z-\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\zeta)}{2b}\bigr|}
\times
\frac{\bigl|1+\exp\frac{\pi(z+\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\bar\zeta)}{2b}\bigr|}
{\bigl|1+\exp\frac{\pi(z+\bar\zeta)}{2b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\zeta)}{2b}\bigr|}\right).
\]

5.4 A computer-friendly expression for the Green's function is obtained in the form

\[
G(x,y;\xi,\eta)=\frac{1}{2\pi}\ln
\frac{\bigl|1-\exp\frac{\pi(z+\bar\zeta)}{b}\bigr|\,\bigl|1-\exp\frac{\pi(z-\bar\zeta)}{b}\bigr|}
{\bigl|1-\exp\frac{\pi(z-\zeta)}{b}\bigr|\,\bigl|1-\exp\frac{\pi(z+\zeta)}{b}\bigr|}
-\frac{4}{b}\sum_{n=1}^{\infty}\frac{(\beta-\nu)\sinh\nu x\,\sinh\nu\xi}{\nu\,[(\nu+\beta)\exp 2\nu a+(\nu-\beta)]}\sin\nu y\,\sin\nu\eta,
\]

where ν = nπ/b.

5.5 Tracing out the standard procedure of the method of eigenfunction expansion, one obtains the Green's function in the Fourier series form

\[
G(r,\varphi;\rho,\psi)=\frac{1}{2\pi}\left(k_{0}(r,\rho)+2\sum_{n=1}^{\infty}k_{n}(r,\rho)\cos n(\varphi-\psi)\right),
\]

where the function k₀(r, ρ) is found as

\[
k_{0}(r,\rho)=\frac{1}{1+\beta b\ln(b/a)}
\begin{cases}
\ln(r/a)\,\bigl(1+\beta b\ln(b/\rho)\bigr), & r\le\rho,\\[2pt]
\ln(\rho/a)\,\bigl(1+\beta b\ln(b/r)\bigr), & r\ge\rho,
\end{cases}
\]

while the expression for the coefficient kₙ(r, ρ), valid for r ≤ ρ, reads

\[
k_{n}(r,\rho)=\frac{(r^{2n}-a^{2n})\,[n(b^{2n}+\rho^{2n})+\beta b(b^{2n}-\rho^{2n})]}{2n(r\rho)^{n}\,[n(b^{2n}+a^{2n})+\beta b(b^{2n}-a^{2n})]},\qquad r\le\rho.
\]

Note that the expression for kₙ(r, ρ) valid for r ≥ ρ can be obtained from the above with the variables r and ρ interchanged.

A close analysis reveals a slow convergence of the series in the above expression for G(r, ϕ; ρ, ψ). Indeed, it converges at the rate 1/n, notably diminishing the practicality of the representation. After improving the convergence in the way described in the current chapter, a computer-friendly form for the Green's function is ultimately obtained as

\[
G(r,\varphi;\rho,\psi)=\frac{1}{2\pi}\left(\ln\frac{|a^{2}-z\bar\zeta|}{|z|\,|z-\zeta|}+k_{0}(r,\rho)+\sum_{n=1}^{\infty}k_{n}^{*}(r,\rho)\cos n(\varphi-\psi)\right),
\]

where the coefficient kₙ*(r, ρ) of the series component is found, for r ≤ ρ, as

\[
k_{n}^{*}(r,\rho)=\frac{(r^{2n}-a^{2n})(a^{2n}-\rho^{2n})(\beta b-n)}{n(r\rho)^{n}\,[b^{2n}(\beta b+n)-a^{2n}(\beta b-n)]},
\]

while for an expression that is valid for r ≥ ρ, we interchange the variables r and ρ.

Here and further in the answer to Exercise 5.7, the complex variable notations z = r(cos ϕ + i sin ϕ) and ζ = ρ(cos ψ + i sin ψ) are used for the field point and the source point, respectively.

5.7 A computer-friendly form of the Green's function is obtained as

\[
G(r,\varphi;\rho,\psi)=\frac{1}{2\pi}\left(\ln\frac{|z|\,|\zeta|^{2}}{|z-\zeta|\,|a^{2}-z\bar\zeta|}+k_{0}^{*}(r,\rho)-\sum_{n=1}^{\infty}k_{n}^{*}(r,\rho)\cos n(\varphi-\psi)\right),
\]

where the function k₀*(r, ρ) and the coefficient kₙ*(r, ρ) of the series component are found, for r ≤ ρ, as

\[
k_{0}^{*}(r,\rho)=\frac{1}{\beta b}\left(1+\beta b\ln\frac{b}{\rho}\right)
\]

and

\[
k_{n}^{*}(r,\rho)=\frac{(r^{2n}+a^{2n})(a^{2n}+\rho^{2n})(\beta b-n)}{n(r\rho)^{n}\,[b^{2n}(\beta b+n)+a^{2n}(\beta b-n)]},
\]

while the variables r and ρ must be interchanged in the above expressions for k₀*(r, ρ) and kₙ*(r, ρ) to make them valid for r ≥ ρ.

7.5 Chapter 6
6.6 To prove the convergence of the infinite product

\[
\prod_{n=-\infty}^{\infty}\frac{4(t-2n\pi)^{2}}{(1+4n)^{2}\pi^{2}},
\]

convert it to the form

\[
\frac{4t^{2}}{\pi^{2}}\prod_{k=1}^{\infty}\frac{16(t^{2}-4k^{2}\pi^{2})^{2}}{(1-16k^{2})^{2}\pi^{4}}
\]

and then transform it into

\[
\frac{4t^{2}}{\pi^{2}}\prod_{k=1}^{\infty}\left(1+\frac{(\pi^{2}-4t^{2})\,\bigl((32k^{2}-1)\pi^{2}-4t^{2}\bigr)}{(1-16k^{2})^{2}\pi^{4}}\right).
\]

Hence, the infinite product in (6.27) indeed converges, and its convergence rate is of order 1/k².
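The convergence can also be observed numerically. In the added check below (the function name and truncation level are ours), the symmetric partial products settle quickly; their limit evaluates numerically to 1 − cos t, consistent with the role this product plays in the cosine representation (6.27):

```python
import math

def hint_product_partial(t, N=20000):
    """Symmetric partial product of the doubly infinite product from the hint to 6.6."""
    p = 1.0
    for n in range(-N, N + 1):
        p *= 4.0 * (t - 2.0 * n * math.pi) ** 2 / ((1 + 4 * n) ** 2 * math.pi ** 2)
    return p

value = hint_product_partial(1.0)
```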

6.7 The infinite product

\[
\prod_{n=-\infty}^{\infty}\frac{2(t-n\pi)}{(1+2n)\pi-2t}
\]

in (6.28) converges because it is equivalent to

\[
\frac{2t}{\pi-2t}\prod_{k=1}^{\infty}\frac{4(t^{2}-k^{2}\pi^{2})}{[(1+2k)\pi-2t]\,[(1-2k)\pi-2t]},
\]

which can be rewritten as

\[
\frac{2t}{\pi-2t}\prod_{k=1}^{\infty}\left(1-\frac{\pi(\pi-4t)}{[(1+2k)\pi-2t]\,[(1-2k)\pi-2t]}\right),
\]

revealing the convergence rate of order 1/k².

6.8 Show that the representation in the statement is equivalent to

\[
\frac{A(2Bt+\pi)}{B(2At+\pi)}
\prod_{k=1}^{\infty}\frac{(A^{2}t^{2}-k^{2}\pi^{2})\,[(1+2k)\pi+2Bt]\,[(1-2k)\pi+2Bt]}{(B^{2}t^{2}-k^{2}\pi^{2})\,[(1+2k)\pi+2At]\,[(1-2k)\pi+2At]},
\]

which, in turn, is equivalent to

\[
\frac{A(2Bt+\pi)}{B(2At+\pi)}
\prod_{k=1}^{\infty}\left(1+\frac{\pi t(A-B)\,[\pi t(A+B)+4(ABt^{2}+k^{2}\pi^{2})]}{(B^{2}t^{2}-k^{2}\pi^{2})\,[(1+2k)\pi+2At]\,[(1-2k)\pi+2At]}\right).
\]

This reveals that the convergence rate of the representation in (6.33) is of order 1/k².

6.9 Transform the function in the statement as

\[
\tan t-\cot t=\frac{\sin t}{\cos t}-\frac{\cos t}{\sin t}=-2\cot 2t
\]

and replace the cotangent function with the infinite product (see (6.32)). This yields

\[
\tan t-\cot t=2\prod_{n=-\infty}^{\infty}\frac{4t-(1+2n)\pi}{2(2t-n\pi)}.
\]

The above converts into the infinite product

\[
\tan t-\cot t=\frac{4t-\pi}{2t}\prod_{k=1}^{\infty}\left(1+\frac{\pi(\pi-8t)}{4(4t^{2}-k^{2}\pi^{2})}\right),
\]

whose convergence rate is of the order of 1/k².
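As elsewhere, the final form is easy to validate numerically (an added check; t must avoid multiples of π/2):

```python
import math

def tan_minus_cot_partial(t, K=20000):
    """Truncated product form of tan(t) - cot(t) from Exercise 6.9."""
    p = (4.0 * t - math.pi) / (2.0 * t)
    for k in range(1, K + 1):
        p *= 1.0 + math.pi * (math.pi - 8.0 * t) / (4.0 * (4.0 * t * t - k * k * math.pi ** 2))
    return p

approx = tan_minus_cot_partial(0.6)
exact = math.tan(0.6) - 1.0 / math.tan(0.6)
```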

