Zeta Functions
Johar M. Ashfaque
1.1 Introduction
Leonhard Euler lived from 1707 to 1783 and is, without a doubt, one of the most influential mathematicians of all time. His work covered many areas of mathematics including algebra, trigonometry, graph theory, mechanics and, most relevantly, analysis.
Although Euler demonstrated an obvious genius when he was a child, it took until 1735 for his talents to be fully recognised. It was then that he solved what was known as the Basel problem, a problem posed in 1644 and named after Euler's home town [?]. This problem asks for an exact expression for the sum of the series

Σ_{n=1}^∞ 1/n² = 1 + 1/4 + 1/9 + 1/16 + ⋯,   (1)
which Euler calculated to be exactly equal to π 2 /6. Going beyond this, he also calculated that
Σ_{n=1}^∞ 1/n⁴ = 1 + 1/16 + 1/81 + 1/256 + ⋯ = π⁴/90,   (2)
among other specific values of a series that later became known as the Riemann zeta function, which is
classically defined in the following way.
Definition 1.1 For ℜ(s) > 1, the Riemann zeta function is defined as

ζ(s) = Σ_{n=1}^∞ 1/n^s = 1 + 1/2^s + 1/3^s + 1/4^s + ⋯
This allows us to write the sums in the above equations (1) and (2) simply as ζ(2) and ζ(4) respectively.
A few years later Euler constructed a general proof that gave exact values for all ζ(2n) for n ∈ N. These
were the first instances of the special values of the zeta function and are still amongst the most interesting.
However, when they were discovered, it was still unfortunately the case that analysis of ζ(s) was restricted
only to the real numbers. It wasn’t until the work of Bernhard Riemann that the zeta function was to
be fully extended to all of the complex numbers by the process of analytic continuation and it is for this
reason that the function is commonly referred to as the Riemann zeta function. From this, we are able
to calculate more special values of the zeta function and understand its nature more completely.
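As a quick aside (not part of the original argument), these two values are easy to check numerically; the following is a minimal sketch in Python, where the helper name is our own.

```python
import math

def zeta_partial(s, terms=200000):
    """Partial sum of the zeta series: sum of 1/n^s for n = 1..terms."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

print(zeta_partial(2), math.pi**2 / 6)   # both approximately 1.64493
print(zeta_partial(4), math.pi**4 / 90)  # both approximately 1.08232
```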
We will be discussing some classical values of the zeta function as well as some more modern ones and will
require no more than an undergraduate understanding of analysis (especially Taylor series of common
functions) and complex numbers. The only tool that we will be using extensively in addition to these will be the 'Big Oh' notation, which we shall now define.
Definition 1.2 We say that f(x) = g(x) + O(h(x)) as x → k if there exists a constant C > 0 such that |f(x) − g(x)| ≤ C|h(x)| for all x sufficiently close to k.
This may seem a little alien at first and it is true that, to the uninitiated, it can take a little while to digest. However, its use is simpler than its definition would suggest, and so we will move on to more important matters.
1.2 The Euler Product Formula for ζ(s)
Although we will not give a complete proof here, one is referred to [?]. However, it is worth noting that

∏_{n=1}^m (1 + a_n) = exp{ Σ_{n=1}^m ln(1 + a_n) }.
Theorem 1.4 Let p denote the prime numbers. For ℜ(s) > 1,

ζ(s) = ∏_p (1 − p^{−s})^{−1}.
If we continue this process of siphoning off primes we can see that, by the Fundamental Theorem of Arithmetic,

ζ(s) ∏_p (1 − p^{−s}) = 1,

which rearranges to give the theorem.
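As an illustration of the theorem (ours, not part of the proof), the sketch below compares a truncated Euler product over the primes below 1000 with a truncated series for ζ(2); the helper names are our own.

```python
def primes_up_to(n):
    """Return all primes <= n by the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def euler_product(s, bound=1000):
    """Truncated Euler product: product over primes p <= bound of (1 - p^-s)^-1."""
    result = 1.0
    for p in primes_up_to(bound):
        result *= 1.0 / (1.0 - p**(-s))
    return result

print(euler_product(2))                          # approx 1.6448...
print(sum(1.0 / n**2 for n in range(1, 10**6)))  # approx 1.6449...
```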
1.3 The Bernoulli Numbers
We will now move on to the study of Bernoulli numbers, a sequence of rational numbers that pop up
frequently when considering the zeta function. We are interested in them because they are intimately
related to some special values of the zeta function and are present in some rather remarkable identities.
We already have an understanding of Taylor series and the analytic power that they provide and so we
can now begin with the definition of the Bernoulli numbers. This section will follow Chapter 6 in [?].
Definition 1.5 The Bernoulli numbers B_n are defined to be the coefficients in the series expansion

x/(e^x − 1) = Σ_{n=0}^∞ B_n x^n / n!.
It is a result from complex analysis that this series converges for |x| < 2π but, other than this, we cannot
gain much of an understanding from the implicit definition. Please note also that, although the left hand side would appear to become infinite at x = 0, it does not.
Corollary 1.6 We can calculate the Bernoulli numbers by the recursion formula

0 = Σ_{j=0}^{k−1} (k choose j) B_j,   k ≥ 2,

where B_0 = 1.
Proof. Let us first replace e^x − 1 with its Taylor series to see that

x = (e^x − 1) Σ_{n=0}^∞ B_n x^n / n! = ( Σ_{j=1}^∞ x^j / j! ) ( Σ_{n=0}^∞ B_n x^n / n! ).

Comparing the coefficients of x^k for k ≥ 2 on both sides, we find

0 = Σ_{j=0}^{k−1} B_j / ((k − j)! j!) = (1/k!) Σ_{j=0}^{k−1} (k choose j) B_j.

Note that the inverse k! term is irrelevant to the recursion formula. This completes the proof.
The first few Bernoulli numbers are therefore

B_0 = 1, B_1 = −1/2, B_2 = 1/6, B_3 = 0, B_4 = −1/30, B_5 = 0, B_6 = 1/42, B_7 = 0.
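The recursion of Corollary 1.6 translates directly into exact rational arithmetic; here is a minimal Python sketch (the helper is our own) using the fractions module.

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(m):
    """Return [B_0, ..., B_m] via the recursion 0 = sum_{j=0}^{k-1} C(k,j) B_j."""
    B = [Fraction(1)]
    for k in range(2, m + 2):
        # C(k, k-1) B_{k-1} = -sum_{j=0}^{k-2} C(k, j) B_j, and C(k, k-1) = k.
        total = sum(comb(k, j) * B[j] for j in range(k - 1))
        B.append(-total / k)
    return B

print(bernoulli_numbers(8))
# [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
```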
Lemma 1.7 The values of the odd Bernoulli numbers (except B_1) are zero.
Proof. As we know the values of B_0 and B_1, we can remove the first two terms from Definition 1.5 and rearrange to get

x/(e^x − 1) + x/2 = 1 + Σ_{n=2}^∞ B_n x^n / n!,
which then simplifies to give

(x/2) · (e^x + 1)/(e^x − 1) = 1 + Σ_{n=2}^∞ B_n x^n / n!.
We can then multiply both the numerator and denominator of the left hand side by exp(−x/2) to get

(x/2) · (e^{x/2} + e^{−x/2})/(e^{x/2} − e^{−x/2}) = 1 + Σ_{n=2}^∞ B_n x^n / n!.   (3)
By substituting x → −x into the left hand side of this equation we can see that it is an even function and hence invariant under this transformation. Hence, as the odd Bernoulli numbers multiply odd powers of x, the right hand side can only be invariant under the same transformation if the values of the odd coefficients are all zero.
As we have already discussed, Euler found a way of calculating exact values of ζ(2n) for n ∈ N. He did this using the properties of the Bernoulli numbers, although he originally did it using the infinite product for the sine function. The relationship between the zeta function and the Bernoulli numbers is not obvious but the proof of it is quite satisfying.

Theorem 1.8 For n ∈ N,

ζ(2n) = (2π)^{2n} |B_{2n}| / (2 (2n)!).

To prove this theorem, we will be using the original proof attributed to Euler and reproduced in [?]. This will be done by finding two separate expressions for z cot(z) and then comparing them. We will be using a real analytic proof, which is slightly longer than a complex analytic proof, an example of which can be found in [?].
Proof. Substitute x = 2iz into equation (3) and observe that, because the odd Bernoulli numbers are zero, we can write this as

(x/2) · (e^{x/2} + e^{−x/2})/(e^{x/2} − e^{−x/2}) = iz · (e^{iz} + e^{−iz})/(e^{iz} − e^{−iz}) = 1 + Σ_{n=1}^∞ (−4)^n B_{2n} z^{2n} / (2n)!.   (4)

Noting that the left hand side is equal to z cot(z) completes the proof.
Proof. Recall that 2 cot(2z) = cot(z) + cot(z + π/2). If we continually iterate this formula we will find that

cot(z) = (1/2^n) Σ_{j=0}^{2^n − 1} cot((z + jπ)/2^n),   (5)

which can be proved by induction. Removing the j = 0 and j = 2^{n−1} terms and recalling that cot(z + π/2) = −tan(z) gives us

cot(z) = (1/2^n) cot(z/2^n) − (1/2^n) tan(z/2^n) + (1/2^n) [ Σ_{j=1}^{2^{n−1}−1} cot((z + jπ)/2^n) + Σ_{j=2^{n−1}+1}^{2^n − 1} cot((z + jπ)/2^n) ].
All we have to do now is observe that, as cot(z + π) = cot(z), we can say that

Σ_{j=2^{n−1}+1}^{2^n − 1} cot((z + jπ)/2^n) = Σ_{j=1}^{2^{n−1}−1} cot((z − jπ)/2^n),

so that

cot(z) = (1/2^n) cot(z/2^n) − (1/2^n) tan(z/2^n) + (1/2^n) Σ_{j=1}^{2^{n−1}−1} [ cot((z + jπ)/2^n) + cot((z − jπ)/2^n) ].   (6)
Proof. In order to obtain this, we first multiply both sides of equation (6) by z to get

z cot(z) = (z/2^n) cot(z/2^n) − (z/2^n) tan(z/2^n) + (1/2^n) Σ_{j=1}^{2^{n−1}−1} [ z cot((z + jπ)/2^n) + z cot((z − jπ)/2^n) ].   (7)
Let us now take the limit of the right hand side as n tends to infinity. First recall that the Taylor series for x cot(x) and x tan(x) can respectively be expressed as

x cot(x) = 1 + O(x²)

and

x tan(x) = x² + O(x⁴).
Hence, if we substitute x = z/2^n into both of these we can see that

lim_{n→∞} (z/2^n) cot(z/2^n) = 1   (8)

and

lim_{n→∞} (z/2^n) tan(z/2^n) = 0.   (9)
Now we have dealt with the expressions outside the summation and so we need to consider the ones inside. To make things slightly easier for the moment, let us consider both of the expressions at the same time. Using Taylor series again, we can see that

(z/2^n) cot((z ± jπ)/2^n) = z/(z ± jπ) + O(4^{−n}).   (10)
Substituting equations (8), (9) and (10) into the right hand side of equation (7) gives

z cot(z) = 1 + lim_{n→∞} Σ_{j=1}^{2^{n−1}−1} [ z/(z + jπ) + z/(z − jπ) + O(4^{−n}) ],

and since z/(z + jπ) + z/(z − jπ) = −2z²/(j²π² − z²) and the error terms vanish in the limit, this becomes

z cot(z) = 1 − 2 Σ_{j=1}^∞ z²/(j²π² − z²).   (11)
Proof. Take the summand of equation (11) and multiply both the numerator and denominator by (jπ)^{−2} to obtain

z cot(z) = 1 − 2 Σ_{j=1}^∞ (z/jπ)² / (1 − (z/jπ)²).
But, we can note that the summand can be expanded as an infinite geometric series. Hence we can write this as

z cot(z) = 1 − 2 Σ_{j=1}^∞ Σ_{n=1}^∞ (z/jπ)^{2n},
which equals

z cot(z) = 1 − 2 Σ_{n=1}^∞ ζ(2n) (z/π)^{2n}

as long as the geometric series converges (i.e. |z| < π). Note that exchanging the summations in such a way is valid as both of the series are absolutely convergent.
Now, we can complete the proof of Theorem 1.8 by equating equations (4) and (11) to see that

1 + Σ_{n=1}^∞ (−4)^n B_{2n} z^{2n} / (2n)! = 1 − 2 Σ_{n=1}^∞ ζ(2n) (z/π)^{2n}.
If we then strip away the 1 terms and remove the summations we obtain the identity

(−4)^n B_{2n} z^{2n} / (2n)! = −2 ζ(2n) z^{2n} / π^{2n},
which rearranges to complete the proof of Theorem 1.8 as required.
Now that we have proven this beautiful formula (thanks again, Euler) we can use it to calculate the values of the zeta function at the positive even integers. First, let us rewrite the result of Theorem 1.8 to give us

ζ(2n) = (2π)^{2n} |B_{2n}| / (2 (2n)!).
From this, we can easily calculate specific values such as

ζ(6) = Σ_{n=1}^∞ 1/n⁶ = (2π)⁶ |B_6| / (2 · 6!) = π⁶/945,

ζ(8) = Σ_{n=1}^∞ 1/n⁸ = (2π)⁸ |B_8| / (2 · 8!) = π⁸/9450,
and so on. It is unfortunate that no similar formula has been discovered for ζ(2n + 1).
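As a quick numerical check of these values (our own sketch, not in the original), one can compare Euler's formula against the closed forms:

```python
import math

def zeta_even(n, B2n):
    """Euler's formula: zeta(2n) = (2*pi)^(2n) * |B_{2n}| / (2 * (2n)!)."""
    return (2 * math.pi)**(2 * n) * abs(B2n) / (2 * math.factorial(2 * n))

# B_6 = 1/42 and B_8 = -1/30, as given by the recursion above.
print(zeta_even(3, 1 / 42), math.pi**6 / 945)    # both approximately 1.01734
print(zeta_even(4, -1 / 30), math.pi**8 / 9450)  # both approximately 1.00408
```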
There have been recent results concerning the values of the zeta function at odd integers. For example, Apéry's proof of the irrationality of ζ(3) in 1979, or Matilde Lalín's integral representations of ζ(3) and ζ(5) by the use of Mahler measures.
Special values for ζ(−2n) and ζ(−2n + 1) have been found, the latter of which also involves the Bernoulli numbers. To get to them, however, we will first have to take a whirlwind tour through some of the properties of the Gamma function.
Recall that for ℜ(s) > 0 the Gamma function is defined as Γ(s) = ∫_0^∞ t^{s−1} e^{−t} dt. Now, this function initially looks rather daunting and irrelevant. We will see, however, that it does have many fascinating properties. Among the most basic are the following two identities ...
Corollary 1.14 Γ(s) has the recursive property
Γ(s + 1) = sΓ(s). (12)
Proof. We can prove this by performing a basic integration by parts on the Gamma function. Note that

Γ(s + 1) = ∫_0^∞ t^s e^{−t} dt = [−t^s e^{−t}]_0^∞ + s ∫_0^∞ t^{s−1} e^{−t} dt = 0 + sΓ(s)

as required.
Remark 1.16 We can use the fact that Γ(s) = Γ(s + 1)/s to see that, as s tends to 0, Γ(s) → ∞. We
can also use this recursive relation to prove that the Gamma function has poles at all of the negative
integers.
If we switch to polar co-ordinates using the change of variables x = r cos(θ), y = r sin(θ), and note that dy dx = r dθ dr, we have

I = ∫_0^{2π} ∫_0^∞ r exp(−r²) dr dθ = π ∫_0^∞ 2r exp(−r²) dr = π[−exp(−r²)]_0^∞ = π.

We can then separate the original integral into two separate integrals to obtain

I = ( ∫_{−∞}^∞ exp(−x²) dx ) ( ∫_{−∞}^∞ exp(−y²) dy ) = π.

Noting that the two integrals are identical and are also both even functions, we can see that integrating one of them from zero to infinity completes the proof as required.
Corollary 1.18 Consider the double factorial n!₂ = n(n − 2)(n − 4)⋯, which terminates at 1 or 2 depending on whether n is odd or even respectively. Then for n ∈ N,

Γ((2n + 1)/2) = √π (2n − 1)!₂ / 2^n.
Proof. We will prove this by induction. The base case is Γ(3/2) = (1/2)Γ(1/2) = √π/2. Now consider that

Γ((2(n + 1) + 1)/2) = Γ((2n + 3)/2) = ((2n + 1)/2) · Γ((2n + 1)/2) = ((2n + 1)/2) · √π (2n − 1)!₂ / 2^n = √π (2n + 1)!₂ / 2^{n+1}.

Noting that the rightmost expression is exactly the claimed formula for n + 1 completes the proof.
Remark. We can use the relationship Γ(s + 1) = sΓ(s) to see, for example, that

Γ(5/2) = 3√π/4,  Γ(7/2) = 15√π/8,

etc.
This chapter will use a slightly different definition of the Gamma function and will follow source [?]. First let us consider the definition of the very important Euler constant γ.

Definition 1.20 The Euler constant is defined as

γ = lim_{h→∞} ( Σ_{n=1}^h 1/n − log(h) ).

We will then use Gauss' definition for the Gamma function, which can be written as follows.

Definition 1.21 Let

Γ_h(s) = h^s / ( s(1 + s)(1 + s/2)⋯(1 + s/h) )

and

Γ(s) = lim_{h→∞} Γ_h(s).
This does not seem immediately obvious but the relationship is true and is proven for ℜ(s) > 0 in [?].
So now that we have these definitions we can work on a well known theorem.
Theorem 1.22 The Gamma function can be written as the following infinite product:

1/Γ(s) = s e^{γs} ∏_{n=1}^∞ (1 + s/n) e^{−s/n}.
Proof. Before we start with the derivation, let us note that the infinite product is convergent because the exponential term forces it. Now that we have cleared that from our conscience, we will begin by using Definition 1.21 and say that

Γ_h(s) = h^s / ( s(1 + s)(1 + s/2)⋯(1 + s/h) ).
Now we can also see that

h^s = exp(s log(h)) = exp( s( log(h) − 1 − 1/2 − ⋯ − 1/h ) ) · exp( s( 1 + 1/2 + ⋯ + 1/h ) ).
We can then observe that

Γ_h(s) = (1/s) · (e^s/(1 + s)) · (e^{s/2}/(1 + s/2)) ⋯ (e^{s/h}/(1 + s/h)) · exp( s( log(h) − 1 − 1/2 − ⋯ − 1/h ) ),
which we can write as the product

Γ_h(s) = (1/s) exp( s( log(h) − 1 − 1/2 − ⋯ − 1/h ) ) ∏_{n=1}^h e^{s/n}/(1 + s/n).
All we need to do now is to take the limit of this as h tends to infinity and use Definition 1.20 to prove
the theorem as required.
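A truncated version of this product is easy to test numerically; the sketch below (assuming the standard value of γ) compares it with Python's built-in math.gamma.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def gamma_via_product(s, terms=100000):
    """Evaluate Gamma(s) from a truncated Weierstrass product for 1/Gamma(s)."""
    inverse = s * math.exp(EULER_GAMMA * s)
    for n in range(1, terms + 1):
        inverse *= (1 + s / n) * math.exp(-s / n)
    return 1.0 / inverse

print(gamma_via_product(0.5), math.gamma(0.5))  # both approximately 1.77245
```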
This theorem is very interesting as it allows us to prove two really quite beautiful identities, known as the
Euler Reflection formulae. But before we do this, we are going to need another way of dealing with the
sine function. It should be noted that the method of approach that we are going to use is not completely
rigorous. However, it can be proven rigorously using the Weierstrass Factorisation Theorem, a discussion of which can be found in [?].
Theorem 1.24 The Gamma function has the following reflective relation:

1/(Γ(s)Γ(1 − s)) = sin(πs)/π.
Proof. We can use Theorem 1.22 to see that

1/(Γ(s)Γ(−s)) = −s² e^{γs−γs} ∏_{n=1}^∞ (1 + s/n)(1 − s/n) e^{s/n−s/n} = −s² ∏_{n=1}^∞ (n² − s²)/n².

Using the Euler sine product sin(πs) = πs ∏_{n=1}^∞ (1 − s²/n²), this gives 1/(Γ(s)Γ(−s)) = −s sin(πs)/π, and since Γ(1 − s) = −sΓ(−s) the result follows.
Corollary 1.25 The Gamma function also has the reflectional formula

1/(Γ(s)Γ(−s)) = −s sin(πs)/π.
Proof. This can easily be shown using a slight variation of the previous proof. However, an alternate
proof can be constructed by considering Theorem 1.24 and Corollary 1.14.
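Both reflection formulae are easy to sanity-check numerically; a minimal sketch:

```python
import math

# Check 1/(Gamma(s) Gamma(1-s)) = sin(pi s)/pi at a few non-integer points.
for s in (0.3, 0.5, 0.8):
    lhs = 1.0 / (math.gamma(s) * math.gamma(1 - s))
    rhs = math.sin(math.pi * s) / math.pi
    print(s, lhs, rhs)  # lhs and rhs agree to machine precision
```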
Definition 2.1 For ℜ(s) > 1 and a > 0, the Hurwitz zeta function is defined as

ζ(s, a) = Σ_{n=0}^∞ 1/(n + a)^s.

Remark. It is obvious that ζ(s, 1) = ζ(s) and from this we can see that if we can prove results for the Hurwitz zeta function that are valid when a = 1, then we obtain results for the regular zeta function automatically. Let us then begin with our first big result.
Theorem 2.2 For ℜ(s) > 1, the Hurwitz zeta function can be expressed as the infinite integral

ζ(s, a) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 − e^{−x}) dx.   (15)
Proposition 2.4 The following identity holds for the zeta function:

(2^s − 1) ζ(s) = ζ(s, 1/2) = (2^s/Γ(s)) ∫_0^∞ x^{s−1} e^x / (e^{2x} − 1) dx.

Proof. First note that

(2^s − 1) ζ(s) = 2^s + (2/3)^s + (2/5)^s + (2/7)^s + ⋯ = ζ(s, 1/2).
By Theorem 2.2 we then have

ζ(s, 1/2) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−x/2} / (1 − e^{−x}) dx.

We can then multiply the numerator and denominator by exp(x/2) and perform the substitution x = 2y to obtain the identity

ζ(s, 1/2) = (2/Γ(s)) ∫_0^∞ (2y)^{s−1} e^y / (e^{2y} − 1) dy = (2^s/Γ(s)) ∫_0^∞ y^{s−1} e^y / (e^{2y} − 1) dy

as required.
There are many variants of the Hurwitz zeta function and we will only prove identities involving the more
obvious ones.
Definition 2.6 We define the alternating Hurwitz zeta function as ζ̄(s, a) = Σ_{n=0}^∞ (−1)^n/(n + a)^s, and the alternating zeta function as ζ̄(s) = ζ̄(s, 1).
Theorem 2.7 For ℜ(s) > 1, the alternating Hurwitz zeta function can also be written as

ζ̄(s, a) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 + e^{−x}) dx.
Corollary 2.8 The alternating zeta function can also be written as

ζ̄(s) = (1 − 2^{1−s}) ζ(s) = (1/Γ(s)) ∫_0^∞ x^{s−1} / (e^x + 1) dx.

Proof. We have already done the hard work in Theorem 2.7 and as such, all that remains to be proven is that ζ̄(s) = (1 − 2^{1−s}) ζ(s), which can be easily shown by expanding the series.
Proposition 2.11 For a > 1,

Σ_{s=2}^∞ ζ(s, a) = 1/(a − 1).

Proof. If s ∈ N then

ζ(s, a) = (1/(s − 1)!) ∫_0^∞ x^{s−1} e^{−ax} / (1 − e^{−x}) dx.
Hence

Σ_{s=2}^∞ ζ(s, a) = lim_{k→∞} [ Σ_{s=2}^k ∫_0^∞ (x^{s−1}/(s − 1)!) · e^{−ax}/(1 − e^{−x}) dx ],
which

= lim_{k→∞} [ ∫_0^∞ (e^{−ax}/(1 − e^{−x})) ( Σ_{s=2}^∞ x^{s−1}/(s − 1)! − Σ_{s=k+1}^∞ x^{s−1}/(s − 1)! ) dx ].
Noting that the first term inside the curved brackets is just the Taylor series for e^x − 1 then gives us

Σ_{s=2}^∞ ζ(s, a) = ∫_0^∞ (e^x − 1) e^{−ax}/(1 − e^{−x}) dx − lim_{k→∞} [ ∫_0^∞ (e^{−ax}/(1 − e^{−x})) ( e^x − Σ_{s=1}^k x^{s−1}/(s − 1)! ) dx ].
Now, we can see that the expression inside the limit tends to zero as k tends to infinity, which then leaves us with the rather inspiring identity

Σ_{s=2}^∞ ζ(s, a) = ∫_0^∞ (e^x − 1)/(e^{ax}(1 − e^{−x})) dx,

and since the integrand simplifies to e^{(1−a)x}, this evaluates to 1/(a − 1) for a > 1.
Proof. Simply substitute a = 1 into the previous result to complete the proof.
We can also prove a set of similar results using the same technique as in Proposition 2.11. The two easiest
examples are given in the propositions below.
We can then integrate this to see that, for a > 1, this converges to the given limit as required.
Lemma 2.15 For a ∈ N,

(y − 1)/(y^a(y + 1)) = 2(−1)^{a+1} [ 1/(y + 1) + Σ_{k=1}^{a−1} (−1)^k/y^k ] − 1/y^a,

which can be verified by partial fractions.
Proposition 2.16 If H′_n represents the nth alternating harmonic number then, for an integer a > 2, it is true that

Σ_{s=2}^∞ ζ̄(s, a) = (−1)^a ( ln(4) − 2H′_{a−2} ) − 1/(a − 1).
Proof. If we employ the same methods as used in the proof of Proposition 2.11 we can easily see that

Σ_{s=2}^∞ ζ̄(s, a) = ∫_0^∞ e^{−ax}(e^x − 1)/(1 + e^{−x}) dx.
We can then make the substitution x = ln(y) and dx = dy/y to see that this transforms to

Σ_{s=2}^∞ ζ̄(s, a) = ∫_1^∞ (y − 1)/(y^a(y + 1)) dy.
We can then use Lemma 2.15, remove the k = 1 term from the summation and separate the integrals to find that the above equation

= [ 2(−1)^{a+1}( ln(y + 1) − ln(y) ) + 1/((a − 1)y^{a−1}) ]_1^∞ + 2(−1)^{a+1} ∫_1^∞ Σ_{k=2}^{a−1} (−1)^k/y^k dy.
If we now compute the value of the left-most expression and integrate the right-most (assuming that we can exchange the summation and integral), we see that

Σ_{s=2}^∞ ζ̄(s, a) = (−1)^a ln(4) − 1/(a − 1) + 2(−1)^{a+1} Σ_{k=2}^{a−1} (−1)^k/(k − 1) = (−1)^a ( ln(4) − 2H′_{a−2} ) − 1/(a − 1)

as required.
Corollary 2.17 The sum of the alternating Hurwitz zeta functions is irrational.
Corollary 2.18 The sum of the regular alternating zeta functions diverges.
3.2 Euler’s Reflection Formula
Using this representation of Γ(s) and making use of the Euler sine product,

Γ(s)Γ(1 − s) = −sΓ(s)Γ(−s) = (1/s) ∏_{n=1}^∞ 1/(1 − s²/n²) = (1/s) · πs/sin(πs) = π/sin(πs).
which, for some C > 0, gives ϑ(t) ≤ C t^{−1/2}.
Theorem 4.1 We have that ζ(s) extends analytically onto C except for a simple pole at s = 1 with residue 1. By defining

Λ(s) = π^{−s/2} Γ(s/2) ζ(s)

we obtain the functional equation

Λ(s) = Λ(1 − s).

In particular, the zeta function satisfies the functional equation

π^{−s/2} Γ(s/2) ζ(s) = π^{−(1−s)/2} Γ((1 − s)/2) ζ(1 − s).
Proof. Define

φ(s) = ∫_1^∞ t^{s/2} (ϑ(t) − 1) dt/t + ∫_0^1 t^{s/2} ( ϑ(t) − 1/√t ) dt/t.
Note. ϑ(t) − 1 = 2 Σ_{n=1}^∞ e^{−πn²t} → 0 as t → ∞, so the first integral converges.

Note. Since ϑ(t) ≤ C t^{−1/2}, the second integral will also converge.
We now evaluate the second integral, assuming ℜ(s) > 1:

∫_0^1 t^{s/2} ϑ(t) dt/t − ∫_0^1 t^{(s−1)/2} dt/t = ∫_0^1 t^{s/2} ϑ(t) dt/t − 2/(s − 1).
Thus

φ(s) = 2 Σ_{n=1}^∞ ∫_0^∞ e^{−πn²t} t^{s/2} dt/t + 2/s + 2/(1 − s)

for ℜ(s) > 1. Recalling that ∫_0^∞ e^{−ct} t^s dt/t = c^{−s} Γ(s) and letting c → πn², s → s/2, we have

(1/2) φ(s) = π^{−s/2} ζ(s) Γ(s/2) + 1/s + 1/(1 − s).
Here a, b and c are real numbers with a > 0 and b² − 4ac < 0, and Z(s) denotes the zeta function attached to the quadratic form Q(u, v) = au² + buv + cv² defined in the next section.

Z(s) can be continued analytically to the whole complex plane except for the simple pole at s = 1. The value of Z(k) for k = 2, 3, ... is determined in terms of infinite series of the form

Σ_{n=1}^∞ cot^r(nπτ)/n^{2k−1},   (r = 1, 2, ..., k)

where

τ = (b + √(b² − 4ac))/(2a).
5.1 Introduction
Let a, b and c be real numbers with a > 0 and D = 4ac − b² > 0, so that the quadratic form Q(u, v) = au² + buv + cv² satisfies

Q(u, v) ≥ λ(u² + v²),  with  λ = (1/2)( a + c − √((a − c)² + b²) ) > 0,

for all real numbers u and v. Then the double series defining Z(s) converges absolutely for σ > 1 and uniformly in every half plane

σ ≥ 1 + ε   (ε > 0).
Thus Z(s) is an analytic function of s for σ > 1. Furthermore, it can be continued analytically to the whole complex plane except for the simple pole at s = 1 and satisfies the functional equation

(√D/(2π))^s Γ(s) Z(s) = (√D/(2π))^{1−s} Γ(1 − s) Z(1 − s).
Recall 5.1

τ = (b + √(b² − 4ac))/(2a).
Setting

x = b/(2a),  y = √D/(2a),  τ = x + iy = (b + i√D)/(2a),

we have

τ + τ̄ = b/a  and  τ τ̄ = c/a,
so that Q(m, n) = a(m + nτ)(m + nτ̄) = a|m + nτ|², and

Z(s) = Σ_{(m,n)≠(0,0)} 1/(a^s |m + nτ|^{2s}),  σ > 1.
Separating the terms with n = 0, we have

Z(s) = (2/a^s) Σ_{m=1}^∞ 1/m^{2s} + (2/a^s) Σ_{n=1}^∞ Σ_{m=−∞}^∞ 1/|m + nτ|^{2s},  σ > 1.
We wish to evaluate the second term and therefore apply the Poisson summation formula

Σ_{m=−∞}^∞ f(m) = Σ_{m=−∞}^∞ ∫_{−∞}^∞ f(u) cos(2mπu) du

to the function

f(t) = 1/|t + τ|^{2s}
to obtain

Σ_{m=−∞}^∞ 1/|m + τ|^{2s} = Σ_{m=−∞}^∞ ∫_{−∞}^∞ cos(2mπu)/|u + τ|^{2s} du
 = Σ_{m=−∞}^∞ ∫_{−∞}^∞ cos(2mπu)/{(u + x)² + y²}^s du
 = Σ_{m=−∞}^∞ ∫_{−∞}^∞ cos(2mπ(t − x))/(t² + y²)^s dt
 = Σ_{m=−∞}^∞ cos(2mπx) ∫_{−∞}^∞ cos(2mπt)/(t² + y²)^s dt,
since the integrals involving the sine function vanish, which yields

Σ_{m=−∞}^∞ 1/|m + τ|^{2s} = (2/y^{2s−1}) ∫_0^∞ dt/(1 + t²)^s + (4/y^{2s−1}) Σ_{m=1}^∞ cos(2mπx) ∫_0^∞ cos(2mπyt)/(1 + t²)^s dt,  σ > 1.
Now, we wish to evaluate the two integrals by first making the substitution

u = t²/(1 + t²),

which gives

1/(1 + t²) = 1 − u,  du = 2t dt/(1 + t²)² = 2u^{1/2}(1 − u)^{3/2} dt,

and therefore

∫_0^∞ dt/(1 + t²)^s = (1/2) ∫_0^1 (1 − u)^{s−3/2} u^{−1/2} du = (1/2) B(s − 1/2, 1/2) = Γ(s − 1/2)√π/(2Γ(s)).
We also have

∫_0^∞ cos(2mπyt)/(1 + t²)^s dt = (√π (mπy)^{s−1/2}/Γ(s)) K_{s−1/2}(2mπy),  σ > 1/2.
Thus

Σ_{m=−∞}^∞ 1/|m + τ|^{2s} = Γ(s − 1/2)√π/(y^{2s−1} Γ(s)) + (4√π/(y^{2s−1} Γ(s))) Σ_{m=1}^∞ (mπy)^{s−1/2} cos(2mπx) K_{s−1/2}(2mπy),  σ > 1,
such that for n ≥ 1

Σ_{m=−∞}^∞ 1/|m + nτ|^{2s} = Γ(s − 1/2)√π/(n^{2s−1} y^{2s−1} Γ(s)) + (4√π/(n^{2s−1} y^{2s−1} Γ(s))) Σ_{m=1}^∞ (mnπy)^{s−1/2} cos(2mnπx) K_{s−1/2}(2mnπy).
Hence

Z(s) = 2a^{−s} ζ(2s) + 2a^{−s} (Γ(s − 1/2)√π/Γ(s)) y^{1−2s} ζ(2s − 1) + (8a^{−s} y^{1/2−s} π^s/Γ(s)) Σ_{n=1}^∞ Σ_{m=1}^∞ n^{1−2s} (mn)^{s−1/2} cos(2mnπx) K_{s−1/2}(2mnπy).

Collecting the terms with mn = k, this becomes

Z(s) = 2a^{−s} ζ(2s) + 2a^{−s} (Γ(s − 1/2)√π/Γ(s)) y^{1−2s} ζ(2s − 1) + (8a^{−s} y^{1/2−s} π^s/Γ(s)) Σ_{k=1}^∞ Σ_{n|k} n^{1−2s} k^{s−1/2} cos(2kπx) K_{s−1/2}(2kπy),
that is,

Z(s) = 2a^{−s} ζ(2s) + 2a^{−s} (Γ(s − 1/2)√π/Γ(s)) y^{1−2s} ζ(2s − 1) + (2a^{−s} y^{1/2−s} π^s/Γ(s)) H(s),

where

H(s) = 4 Σ_{k=1}^∞ σ_{1−2s}(k) k^{s−1/2} cos(2kπx) K_{s−1/2}(2kπy).
Recall 5.2

K_{−ν}(y) = K_ν(y)

and

k^{−ν/2} σ_ν(k) = k^{ν/2} σ_{−ν}(k),
leading to

H(s) = H(1 − s).

For

φ(s) = (ay/π)^s Γ(s) Z(s)

we have

φ(s) = φ(1 − s).

Since

ay = √D/2,

we have

(√D/(2π))^s Γ(s) Z(s) = (√D/(2π))^{1−s} Γ(1 − s) Z(1 − s),

which is the functional equation of Z(s).
6.1 Introduction
The theory of the Laplace transform has a long and rich history. Among the many mathematicians who could be named, Euler, Lagrange and Laplace played important roles in realising the importance of the Laplace transform for solving not only differential equations but also difference equations. Euler used the Laplace transform in order to solve certain differential equations, whereas it was Laplace who understood the true essence of the theory of the Laplace transform in solving both differential and difference equations.
For a complex-valued function x, defined for t > 0, the Laplace transform of x(t) is defined by

X(s) = ∫_0^∞ x(t) e^{−st} dt,   (18)

and we write L{x(t)} = X(s).
In a similar fashion we can obtain the Laplace transform of e^{at}.

Example 2: Let x(t) = sin(ωt), for ω ∈ R. Then

X(s) = ∫_0^∞ sin(ωt) e^{−st} dt.
Integrating the right hand side by parts twice, we obtain

∫_0^∞ sin(ωt) e^{−st} dt = ω/s² − (ω²/s²) ∫_0^∞ sin(ωt) e^{−st} dt.

Rearranging, we find

((s² + ω²)/s²) ∫_0^∞ sin(ωt) e^{−st} dt = ω/s²  ⟹  ∫_0^∞ sin(ωt) e^{−st} dt = ω/(s² + ω²).
Thus

L{sin(ωt)} = ω/(s² + ω²)

as required.
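A direct numerical integration confirms this; the sketch below (using SciPy's quad, with ω = 2 and s = 3 as arbitrary test values) compares the integral against ω/(s² + ω²) = 2/13.

```python
import numpy as np
from scipy.integrate import quad

omega, s = 2.0, 3.0
value, _ = quad(lambda t: np.sin(omega * t) * np.exp(-s * t), 0, np.inf)
print(value, omega / (s**2 + omega**2))  # both approximately 0.153846
```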
The integral on the right hand side can be integrated by parts once to obtain

∫_0^∞ e^{−st} x′(t) dt = [x(t)e^{−st}]_0^∞ + s ∫_0^∞ x(t)e^{−st} dt = −x(0) + s ∫_0^∞ e^{−st} x(t) dt.

Hence

L{x′(t)} = sX(s) − x(0)

as required.
6.7 Scaling property of the Laplace Transform
6.9 Examples
Example 1: Let x(t) = e^{6t} sin(3t). Then applying the shift property of the Laplace transform, we have

L{e^{6t} sin(3t)} = 3/((s − 6)² + 9)

as the Laplace transform for x(t) = e^{6t} sin(3t).
In general, by the shift property of the Laplace transform, we conclude that

L{e^{at} sin(ωt)} = ω/((s − a)² + ω²).
Example 3: Let x(t) = e^{9t} cos(8t). Then applying the shift property of the Laplace transform, we have

L{e^{9t} cos(8t)} = (s − 9)/((s − 9)² + 64)

as the Laplace transform for x(t) = e^{9t} cos(8t).
In general, by the shift property of the Laplace transform, we conclude that

L{e^{at} cos(ωt)} = (s − a)/((s − a)² + ω²).
6.10 Inverse Laplace Transform
If for a given function X(s), we can find a function x(t) such that L{x(t)} = X(s), then the inverse
Laplace transform is denoted
L−1 {X(s)} = x(t)
and is unique.
The inverse Laplace transform is also linear.
Example: Find

L^{−1}{ (12s − 6)/((s + 5)(s − 3)(s + 7)) }.

We use equation (23) and equation (27) to obtain

L^{−1}{ (12s − 6)/((s + 5)(s − 3)(s + 7)) } = L^{−1}{ 33/(8(s + 5)) + 3/(8(s − 3)) − 9/(2(s + 7)) } = (33/8)e^{−5t} + (3/8)e^{3t} − (9/2)e^{−7t}.
Theorem 6.1 Let x(t) be a piecewise continuous function on the interval [0, ∞) that is of exponential order a; that is,

|x(t)| ≤ K e^{at},  t ≥ 0,

for real constants K and a, where K is positive. Then the Laplace transform L{x(t)} = X(s) exists for s > a.
In this section, we seek to compute the Laplace transform of a convolution. Let us begin by reminding ourselves what we mean by a convolution. The convolution, on the interval [0, ∞), is defined as

(x ∗ y)(t) = ∫_0^t x(τ) y(t − τ) dτ.   (28)
To simplify the repeated integral we introduce the shifted unit step function U(t − τ) to obtain

∫_0^∞ ( ∫_0^t x(τ) y(t − τ) dτ ) e^{−st} dt = ∫_0^∞ ( ∫_0^∞ U(t − τ) x(τ) y(t − τ) dτ ) e^{−st} dt.
Changing the order of integration, see [?, p. 187], we have

∫_0^∞ ( ∫_0^∞ U(t − τ) x(τ) y(t − τ) dτ ) e^{−st} dt = ∫_0^∞ x(τ) ( ∫_0^∞ U(t − τ) y(t − τ) e^{−st} dt ) dτ.

Therefore

∫_0^∞ x(τ) ( ∫_0^∞ U(t − τ) y(t − τ) e^{−st} dt ) dτ = ∫_0^∞ x(τ) e^{−sτ} Y(s) dτ = Y(s) ∫_0^∞ e^{−sτ} x(τ) dτ = X(s)Y(s).
Example: Solve

f(t) + ∫_0^t (t − u) f(u) du = sin(2t).

We begin by taking the Laplace transform of both sides of the equation to obtain

L{f(t)} = 2s²/((s² + 1)(s² + 4)) = −2/(3(s² + 1)) + 8/(3(s² + 4)),

and inverting term by term gives f(t) = −(2/3) sin(t) + (4/3) sin(2t).
One application of the Laplace transform is to solve differential equations. In this section, we consider
ordinary differential equations or ODEs. The schema behind the use of Laplace transforms to solve ODEs
is shown in the following diagram:
[ODE in x(t)] --(Laplace transform L)--> [algebraic equation in X(s)] --(solve)--> [X(s)] --(inverse transform L⁻¹)--> [x(t)]

Figure 1: Ordinary differential equations (ODEs) can be solved generally, or by way of Laplace transforms, to obtain the same solution in both cases. This figure appears in [?].
Example: Solve

y″(t) − y(t) = t − 2

given y(2) = 3 and y′(2) = 0.

We begin by moving the initial conditions to t = 0. This is done by setting x(t) = y(t + 2). Then x′(t) = y′(t + 2) and x″(t) = y″(t + 2). Substituting t → t + 2 and x(t) = y(t + 2) into the differential equation, the initial value problem becomes

x″(t) − x(t) = t,  x(0) = 3,  x′(0) = 0.
Taking the Laplace transform of both sides and using the linearity of the Laplace transform, see subsection [6.4], we have

L{x″(t)} − L{x(t)} = L{t}.

Using equation (22) and equation (25), we obtain

s² X(s) − s x(0) − x′(0) − X(s) = 1/s².

Using the initial values, x(0) = 3 and x′(0) = 0, and simplifying gives

X(s) = (1 + 3s³)/(s²(s² − 1)) = 2/(s − 1) + 1/(s + 1) − 1/s².
From the table of Laplace transforms, we find the inverse Laplace transform of X(s) is

x(t) = 2e^t + e^{−t} − t.

But x(t) = y(t + 2), therefore y(t) = x(t − 2). Hence we have

y(t) = 2e^{t−2} + e^{−(t−2)} − (t − 2).
The Laplace transform can be used to solve a system of ordinary differential equations.
Example 1: Find the solution to the initial value problem

x′ = y + sin(t),   x(0) = 2,
y′ = x + 2cos(t),  y(0) = 0.
We begin by taking the Laplace transform of both sides of both equations, and using the initial conditions, to obtain

sX(s) − 2 = Y(s) + 1/(s² + 1),
sY(s) = X(s) + 2s/(s² + 1),
knowing the Laplace transform of sin(t) from section [6.2], the Laplace transform of cos(t) from section [6.3] and using equation (21). We proceed by eliminating either X(s) or Y(s). Eliminating X(s) gives

(s² − 1) Y(s) = 2s²/(s² + 1) + 2 + 1/(s² + 1).
Thus

Y(s) = (4s² + 3)/((s² + 1)(s² − 1)) = 1/(2(s² + 1)) − 7/(4(s + 1)) + 7/(4(s − 1)).
Using equation (27) gives

L^{−1}{Y(s)} = (1/2) L^{−1}{1/(s² + 1)} − (7/4) L^{−1}{1/(s + 1)} + (7/4) L^{−1}{1/(s − 1)}.

Hence

y(t) = (1/2) sin(t) − (7/4) e^{−t} + (7/4) e^t.
We find x(t) simply: from y′ = x + 2cos(t) we have x = y′ − 2cos(t), and

y′ = (1/2) cos(t) + (7/4) e^{−t} + (7/4) e^t.

Therefore, we obtain

x(t) = (7/4) e^{−t} + (7/4) e^t − (3/2) cos(t).
Definition: The transfer function R(s) is defined as the ratio of the Laplace transform of the output y(t) to the Laplace transform of the input x(t), as [?, ?, ?] suggest, given that the initial conditions are zero. This is equivalent to

R(s) = Y(s)/X(s).
Consider a second order differential equation

a y″ + b y′ + c y = x(t)

where a, b, c are constants, with y(0) = 0 and y′(0) = 0. Taking the Laplace transform of both sides, using equation (20) and equation (22), gives

a( s²Y(s) − s y(0) − y′(0) ) + b( sY(s) − y(0) ) + c Y(s) = X(s),

and using the fact that the initial conditions are zero, we have

R(s) = Y(s)/X(s) = 1/(as² + bs + c).

We will also need its inverse transform L^{−1}{R(s)}.
Example: Using the convolution theorem, obtain the solution to the following initial value problem:

y″ − 2y′ + y = x(t)

given y(0) = −1 and y′(0) = 1. We know the form of the transfer function and in our case we have

R(s) = 1/(as² + bs + c) = 1/(s² − 2s + 1) = 1/(s − 1)².

From the table of Laplace transforms, see [?, ?, ?], the inverse Laplace transform of R(s) is

r(t) = L^{−1}{ 1/(s − 1)² } = t e^t.

We now solve the homogeneous problem, y″ − 2y′ + y = 0, using the initial conditions y(0) = −1 and y′(0) = 1 to obtain

y(t) = (2t − 1) e^t.

Hence, using the convolution theorem, equation (28), the solution to the initial value problem is

(x ∗ r)(t) + y(t) = ∫_0^t x(τ) e^{t−τ} (t − τ) dτ + (2t − 1) e^t.
6.16 Dirac Delta Functional
We can find a relation between δ(t) and the unit step function U(t):

∫_{−∞}^t δ(x − a) dx = U(t − a).

Thus

δ(t − a) = U′(t − a);

that is, the derivative of the unit step function is the Dirac delta function.
Example: Boundary-value problem. A beam of length 2λ is embedded in a support on the left and free on the right. The vertical deflection of the beam a distance x away from the support is denoted by y(x). If the beam has a concentrated load L on it at the centre of the beam, then the deflection must satisfy the boundary value problem

EI y⁗(x) = L δ(x − λ),
y(0) = y′(0) = y″(2λ) = y‴(2λ) = 0,

where E (the modulus of elasticity) and I (the moment of inertia) are constants. We shall solve for the displacement y(x) in terms of the constants λ, L, E, and I.
We begin by taking the Laplace transform of both sides to obtain

L{y⁗(x)} = s⁴Y(s) − s³y(0) − s²y′(0) − s y″(0) − y‴(0) = s⁴Y(s) − s y″(0) − y‴(0).

Thus

s⁴Y(s) − s y″(0) − y‴(0) = (L/EI) L{δ(x − λ)} = (L/EI) e^{−λs}.

Let A = y″(0) and B = y‴(0); then

Y(s) = A/s³ + B/s⁴ + (L/EI) e^{−λs}/s⁴.
We now need to use equation (25) and equation (30) to find the inverse Laplace transform of Y(s):

y(x) = (A/2) x² + (B/6) x³ + (L/(6EI)) (x − λ)³ U(x − λ).
We are given that y″(2λ) = y‴(2λ) = 0, so differentiating twice and thrice respectively, we obtain

0 = 6A + 12λB + 6λ(L/EI)

and

0 = 6B + 6(L/EI).
Hence, the solution to the problem is

y(x) = (L/(6EI)) ( 3λx² − x³ + (x − λ)³ U(x − λ) ).
In this section, we show how to use the Laplace transform to solve one-dimensional linear partial differential equations. Partial differential equations are also known as PDEs.
There are 3 main steps in order to solve a PDE using the Laplace transform:
1. Begin by taking the Laplace transform with one of the two variables, usually t. This will give an
ODE of the transform of the unknown function.
2. Solving the ODE, we shall obtain the transform of the unknown function.
3. By taking the inverse Laplace transform, we obtain the solution to the original problem.
In this section, through the use of the Laplace transform, we seek solutions to initial-boundary value problems involving the heat equation. The one-dimensional partial differential equation

∂u/∂t = c² ∂²u/∂x²   (29)

is known as the heat equation, where c² is known as the thermal diffusivity of the material.
Example 1: Solve

∂u/∂t = ∂²u/∂x²,  x > 0,  t > 0,

given

u(x, 0) = 0,  u(0, t) = δ(t),  lim_{x→∞} u(x, t) = 0.

We have to solve the heat equation for positive x and t, with c² = 1, subject to the boundary conditions above. Taking the Laplace transform with respect to t (using u(x, 0) = 0) gives sU = ∂²U/∂x², which has the general solution

U(x, s) = A(s) e^{√s x} + B(s) e^{−√s x}.
Applying the boundary conditions, with L{f(t)} = F(s), we obtain

lim_{x→∞} U(x, s) = lim_{x→∞} ∫_0^∞ e^{−st} u(x, t) dt = ∫_0^∞ e^{−st} lim_{x→∞} u(x, t) dt = 0,

so that A(s) = 0, and

U(0, s) = B(s) = L{δ(t)} = 1.
Therefore

U(x, s) = e^{−√s x}.

From the tables of the Laplace transforms we obtain the inverse Laplace transform as

L^{−1}{ e^{−√s x} } = (x/(2√(πt³))) e^{−x²/(4t)}.

Hence

u(x, t) = (x/(2√(πt³))) e^{−x²/(4t)}.
Example 2: We find the temperature w(x, t) in a semi-infinite laterally insulated bar extending from x = 0 along the x-axis to infinity, assuming that the original temperature is 0, w(x, t) → 0 as x → ∞ for every fixed t ≥ 0, and w(0, t) = 1/√t.

We have to solve the heat equation for positive x and t subject to the boundary conditions

w(0, t) = 1/√t,  lim_{x→∞} w(x, t) = 0.
Taking the Laplace transform with respect to t gives

sW = c² ∂²W/∂x²  ⟹  ∂²W/∂x² − (s/c²) W = 0.

Notice that we have obtained an ODE for W in the variable x, which has the general solution

W(x, s) = A(s) e^{√s x/c} + B(s) e^{−√s x/c}.

Applying the boundary conditions gives A(s) = 0 and

W(0, s) = B(s) = L{1/√t} = √(π/s).
Therefore

W(x, s) = √(π/s) e^{−√s x/c}.

From the tables of the Laplace transforms, we obtain the inverse

L^{−1}{ e^{−√s x/c}/√s } = (1/√(πt)) e^{−x²/(4c²t)}.

Hence

w(x, t) = (1/√t) e^{−x²/(4c²t)}.
By the convolution theorem, see section [6.12], we can also express the solution as

w(x, t) = (x/(2c√π)) ∫_0^t τ^{−3/2} e^{−x²/(4c²τ)} (t − τ)^{−1/2} dτ.
For t ≥ 0 the unit step function is the same as 1. Therefore the Laplace transform of U(t) is

L{U(t)} = L{1} = 1/s.
7.1 Relation to Laplace Transform
We have

M(f(at); p) = ∫_0^∞ f(at) t^{p−1} dt.

Substituting x = at, where dx = a dt, we obtain

M(f(at); p) = a^{−p} ∫_0^∞ f(x) x^{p−1} dx = a^{−p} F(p).
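For example, taking f(t) = e^{−t}, whose Mellin transform is F(p) = Γ(p), the scaling property predicts M(e^{−at}; p) = a^{−p}Γ(p); a quick SciPy check (a = 3 and p = 2.5 chosen arbitrarily):

```python
import math
from scipy.integrate import quad

a, p = 3.0, 2.5
value, _ = quad(lambda t: math.exp(-a * t) * t**(p - 1), 0, math.inf)
print(value, a**(-p) * math.gamma(p))  # both approximately 0.08528
```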
7.4 Multiplication by t^a
7.5 Derivative
We have

M(f′(t); p) = ∫_0^∞ f′(t) t^{p−1} dt = [t^{p−1} f(t)]_0^∞ − (p − 1) ∫_0^∞ f(t) t^{p−2} dt,

which gives

M(f′(t); p) = −(p − 1) M(f(t); p − 1)

provided t^{p−1} f(t) → 0 both as t → 0 and as t → ∞.
For the n-th derivative this produces

M(f^{(n)}(t); p) = (−1)^n (p − 1)(p − 2)(p − 3)⋯(p − n) F(p − n),

provided that the extension to higher derivatives of the conditions as t → 0 and as t → ∞ holds up to the (n − 1)th derivative.
Knowing that

(p − 1)(p − 2)(p − 3)⋯(p − n) = (p − 1)!/(p − n − 1)! = Γ(p)/Γ(p − n),

the expression for the n-th derivative can be expressed as

M(f^{(n)}(t); p) = (−1)^n ( Γ(p)/Γ(p − n) ) F(p − n).
7.6 Another Property of the Derivative

M(t^n f^{(n)}(t); p) = (−1)^n ( (p + n − 1)!/(p − 1)! ) F(p).
7.7 Integral

By making use of the derivative property of the Mellin transform we can easily derive this property. We begin by choosing to write f(t) = ∫_0^t h(u) du, where f′(t) = h(t). As a result, we obtain

M(f′(t); p) = M(h(t); p) = −(p − 1) M( ∫_0^t h(u) du; p − 1 ).
Rearranging gives

−(1/(p − 1)) M(h(t); p) = M( ∫_0^t h(u) du; p − 1 ).

Replacing p by p + 1, we arrive at the desired identity

−(1/p) M(h(t); p + 1) = M( ∫_0^t h(u) du; p ).
7.8 Example 1
7.9 Example 2
7.10 The Mellin Transform & The Zeta Function
The Mellin transform is an integral transform that helps to transform the symmetries of the ϑ functions to the symmetries of the ζ functions. The Mellin transform of 1/(e^x − 1), with ℜ(s) > 1, is computed as

M( 1/(e^x − 1); s ) = ∫_0^∞ x^{s−1} · 1/(e^x − 1) dx
 = ∫_0^∞ x^{s−1} Σ_{n=1}^∞ e^{−nx} dx
 = Σ_{n=1}^∞ ∫_0^∞ x^{s−1} e^{−nx} dx
 = Σ_{n=1}^∞ ∫_0^∞ (t/n)^{s−1} e^{−t} dt/n
 = Σ_{n=1}^∞ n^{−s} ∫_0^∞ t^{s−1} e^{−t} dt
 = ζ(s) Γ(s).
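As a final sanity check (ours, not in the original), for s = 2 the identity predicts ∫_0^∞ x/(e^x − 1) dx = ζ(2)Γ(2) = π²/6:

```python
import math
from scipy.integrate import quad

# Integrand x/(e^x - 1), extended by its limit 1 at x = 0.
value, _ = quad(lambda x: x / math.expm1(x) if x > 0 else 1.0, 0, math.inf)
print(value, math.pi**2 / 6)  # both approximately 1.644934
```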