Note 1 Annotated
Introduction
We already know that if a function is "very nice", then one can express it as a power
series; for example

e^x = Σ_{k=0}^∞ x^k/k! for all x;   log(1 + x) = −Σ_{n=1}^∞ (−x)^n/n for |x| < 1.

More generally,

f(x) = Σ_{k=0}^∞ f^{(k)}(a) (x − a)^k/k!   for all x sufficiently close to a.
Unfortunately, most functions are not that nice (indeed we will be talking about piece-
wise smooth or piecewise continuous functions). The basic idea of Joseph Fourier
is to represent such functions by an infinite sum of trigonometric functions.
For simplicity, first suppose f is a function on [0, 2π]. We hope to express it as

Σ_{k=0}^∞ c_k sin(kx − τ_k) = a_0 + Σ_{k=1}^∞ (a_k cos kx + b_k sin kx).
Consider the following (initial) boundary value problem (heat equation):

∂u/∂t = u_t = α² u_xx = α² ∂²u/∂x²,  0 < x < 1, t > 0;   (1)

u(0, t) = u(1, t) = 0 for all t ≥ 0,   u(x, 0) = f(x), 0 ≤ x ≤ 1.   (2)
How do we solve the equation? It seems very complicated. The following is probably
due to Joseph Fourier.
Let's start by guessing: what if u(x, t) = T(t)X(x) (where T is a function of t only and
X is a function of x only)? Of course this is usually not possible even for a simple function
such as (x + t)², but the solution will be a linear combination of such forms. Anyway, do
we know how to solve this simple case?
Note that then we have a very simple form: T'(t)X(x) = α² T(t)X''(x), and hence

T'(t)/(α² T(t)) = X''(x)/X(x).
Notice that the LHS depends only on t while the RHS depends only on x. For that to
happen, they must both be equal to a constant. Thus, there exists a real number λ such that

T'(t)/(α² T(t)) = λ   (3)

X''(x)/X(x) = λ.   (4)
We will now solve (3) and (4). Note that they are just ordinary linear differential
equations. We now recall from (1) that we must have X(0) = X(1) = 0.
First note that for λ > 0, (4) has no nontrivial solution (i.e., a solution that is not
identically zero). This is because its general solution is of the form (we will come back to
this for more details later)
X(x) = C₁ e^{√λ x} + C₂ e^{−√λ x}.
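To see concretely why only the trivial solution survives when λ > 0, note that the boundary conditions X(0) = X(1) = 0 give a 2×2 linear system for C₁, C₂ whose determinant is nonzero. The following Python sketch is our own illustration (the sample values of λ are arbitrary), not part of the notes:

```python
# With X(x) = C1*exp(sqrt(lam)*x) + C2*exp(-sqrt(lam)*x), the conditions
# X(0) = X(1) = 0 read
#   C1 + C2 = 0,
#   C1*exp(sqrt(lam)) + C2*exp(-sqrt(lam)) = 0.
# The system's determinant exp(-sqrt(lam)) - exp(sqrt(lam)) is nonzero for
# lam > 0, so C1 = C2 = 0 is the only possibility.
import math

def bc_determinant(lam):
    r = math.sqrt(lam)
    # determinant of the coefficient matrix [[1, 1], [exp(r), exp(-r)]]
    return math.exp(-r) - math.exp(r)

for lam in (0.5, 1.0, 4.0, 25.0):
    assert bc_determinant(lam) != 0  # only the trivial solution survives
print(bc_determinant(4.0))
```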
The boundary conditions X(0) = X(1) = 0 thus force λ < 0, and the admissible values
turn out to be λ = −(kπ)² with X_k(x) = sin kπx, k ∈ N. For each k ∈ N, we then have
T' = −(kπ)² α² T and hence T(t) = c e^{−(kαπ)² t}.
Let us write T_k(t) = e^{−(kαπ)² t}. Then for each k ∈ N, T_k(t)X_k(x) = e^{−(kαπ)² t} sin kπx is a
solution to (1).
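For the curious, the claim that T_k(t)X_k(x) solves the heat equation can be sanity-checked numerically with finite differences. The following Python sketch is our own illustration (the choices k = 3, α = 0.5, and the step h are arbitrary), not part of the notes:

```python
# Check that u(x, t) = exp(-(k*alpha*pi)^2 t) * sin(k*pi*x) satisfies
# u_t = alpha^2 * u_xx, by comparing central finite differences.
import math

def u(x, t, k=3, alpha=0.5):
    # candidate solution T_k(t) X_k(x)
    return math.exp(-((k * alpha * math.pi) ** 2) * t) * math.sin(k * math.pi * x)

def heat_residual(x, t, k=3, alpha=0.5, h=1e-4):
    # central finite differences for u_t and u_xx
    u_t = (u(x, t + h, k, alpha) - u(x, t - h, k, alpha)) / (2 * h)
    u_xx = (u(x + h, t, k, alpha) - 2 * u(x, t, k, alpha)
            + u(x - h, t, k, alpha)) / h ** 2
    return abs(u_t - alpha ** 2 * u_xx)

print(heat_residual(0.3, 0.1))  # tiny: only finite-difference error remains
print(u(0.0, 0.2), abs(u(1.0, 0.2)) < 1e-12)  # boundary values vanish
```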
Let us note that if u1 (x, t) and u2 (x, t) are solutions of (1) then c1 u1 + c2 u2 is also a
solution of (1).
Hence, formally,

u(x, t) = Σ_{k=1}^∞ c_k e^{−(kαπ)² t} sin kπx

satisfies (1) and (2). We did not claim that this is the only solution that satisfies (1) and
(2). But we did find one such solution, provided we are able to write

f(x) = Σ_{k=1}^∞ c_k sin kπx
such that
(a)
(b)
Note that they hold, for example, when the RHS converges uniformly (but they could
hold under much weaker conditions). However, as we did not assume that you have taken
MA3110/MA3210, and indeed Fourier did not care about these issues, we may just assume
both (a) and (b) hold most of the time in this module. Of course, if there is enough
interest among students, I will go through it in the module. Anyway, we will come back
to this in more detail.
Simple Examples
(1) f (x) = sin 2πx + 7 sin 20πx
(6) Consider the following (initial) boundary value problem (wave equation; cf. §37, 45):

u_tt = u_xx,  0 < x < π, t > 0;   u(0, t) = u(π, t) = 0 for all t ≥ 0,   (5)
Similar to power series expansion, we would like to express a function as a sum
of trigonometric functions. Here is the most common example:

f(x) = a_0 + Σ_{k=1}^∞ (a_k cos kx + b_k sin kx)

which is a (formal) trigonometric series, also called a Fourier series. Obviously, we cannot
hope that the representation is pointwise (infinite sum!) for all functions. In particular,
as the RHS is periodic with period 2π (if it converges), we may conclude that f must also be
periodic. However, in general, we do not need pointwise equality in practice. Indeed, we
only need a representation that is able to do the jobs that we need to do.
Let us now try to find some types of functions that have such a representation.
Recall that for a function to have a power series expansion, the function has to be an-
alytic (at the reference point 0) and hence infinitely differentiable. However, the necessary
conditions for a function (indeed possibly a generalized function) to have a Fourier
series are much weaker. We are not going to discuss the collection of all functions that
have a Fourier series; instead, we will just concentrate on functions that are not "too
badly behaved". The following (piecewise defined) functions are the basic elements in this
module.
Remark 2.2. This is not the definition used in your textbook, but the two definitions
are equivalent. Indeed, it is a consequence of the uniform extension theorem (for continuous
functions; refer to your notes on continuous functions and uniformly continuous functions in
MA2108). In particular, we have
(1) both f(a⁺), f(b⁻) exist, and
(2) f(x_i⁺), f(x_i⁻) exist for all i = 1, · · · , n − 1.
Remark:
(1) Continuous functions on [a, b] are piecewise continuous on [a, b]. However, contin-
uous functions on (a, b) need not be piecewise continuous on [a, b].
(2) Similar to continuous functions, it is easy to verify that products and linear combinations
of piecewise continuous functions are again piecewise continuous.
(3) Those who know the definition of the Riemann integral may want to verify that
piecewise continuous functions on [a, b] are Riemann integrable.
More examples:
Similarly, we can define piecewise smooth functions (to be exact, it should be called
piecewise continuously differentiable functions):
Definition A function f is said to be piecewise smooth (cf. p 27) on [a, b] if both f′
and f are piecewise continuous on [a, b].
[Example]
Remark 2.3. Again, a piecewise smooth function need not be differentiable at all points.
Moreover, the following consequence of L'Hôpital's rule may be useful to us.

Fact: if lim_{x→a⁺} f′(x) = l, then lim_{x→a⁺} (f(x) − f(a⁺))/(x − a) = l. A similar fact holds for the left limit.
∫_a^b f(x) dx = Σ_{j=1}^n ∫_{x_{j−1}}^{x_j} f(x) dx = Σ_{j=1}^n ∫_{x_{j−1}}^{x_j} g_j(x) dx

where x_0 = a < x_1 < · · · < x_n = b is such that f is uniformly continuous on each
(x_{j−1}, x_j), and g_j on [x_{j−1}, x_j] is the continuous extension of f from (x_{j−1}, x_j); this is
always possible by the uniform extension theorem for continuous functions (cf. MA2108).
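The piece-by-piece integration above is exactly how one would integrate a piecewise continuous function in code. A short Python sketch (our own example, not from the notes, with an arbitrary function that jumps at x = 1):

```python
# Integrate a piecewise continuous function over [a, b] by summing the
# integrals over the subintervals on which it is continuous, as in the
# formula above. Here f(x) = x on (0, 1) and f(x) = 3 - x on (1, 2).
def midpoint(g, a, b, n=10000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = midpoint(lambda x: x, 0.0, 1.0) + midpoint(lambda x: 3 - x, 1.0, 2.0)
print(total)  # exact value: 1/2 + 3/2 = 2
```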
∫ P f dx = P F₁ − P′ F₂ + P″ F₃ − · · · + (−1)^m P^{(m)} F_{m+1} + C

where P^{(j)} is the j-th derivative of P, F₁ is an antiderivative of f, and in general F_{j+1} is
an antiderivative of F_j.
3. If P is a polynomial (or any other nice function) and f is only piecewise continuous,
then we will do step 1 first before step 2. Note that in general, it may not be true that

∫_a^b P′ f dx = f(b)P(b) − f(a)P(a) − ∫_a^b P f′ dx   (∗)

Exercise: Can you find the correct formula for (∗), assuming f is uniformly contin-
uous on (a, c) and (c, b)?
Periodic functions We know that trigonometric functions are periodic. What does
that mean?
A function f(x) on R is said to be periodic if there exists a > 0 such that f(x + a) = f(x)
for all x ∈ R.
We say f is periodic with period α if α is (usually) the smallest positive number (if it
exists) such that the above-mentioned identity holds with a = α.
Examples: cos x, sin x are periodic functions with period 2π. sin 2πx + cos πx is a
periodic function with period 2.
Remark 2.4. (1) If f is a periodic function with period τ , then for any k ∈ Z, we
have f (x + kτ ) = f (x) for all x ∈ R.
(2) If f is a function on (0, a], a > 0, then it can be extended to a periodic function
on R with period a or a/k, k ∈ N.
Hence, we have

∫_{−π}^{π} ((e^{imx} + e^{−imx})/2) e^{inx} dx = π δ(m − n),  if m, n ∈ N ∪ {0}.
Since e^{imx} = cos mx + i sin mx (Euler's formula), we have the following orthogonality
properties (m, n ∈ N ∪ {0}):
∫_{−π}^{π} cos(nx) cos(mx) dx = real part of ∫_{−π}^{π} e^{imx} cos nx dx = π δ(m − n).

The above is equal to 2π if m = n = 0. Of course, one could also use trigonometric
identities for the above computation. Similarly,

∫_{−π}^{π} cos(nx) sin(mx) dx = 0  (even if n = m)

∫_{−π}^{π} sin(nx) sin(mx) dx = π δ(m − n).
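These orthogonality relations are easy to confirm numerically. The following Python sketch is our own illustration (the midpoint rule and the sample values of m, n are arbitrary choices), not part of the notes:

```python
# Numerically check the orthogonality relations on [-pi, pi].
import math

def integrate(g, a=-math.pi, b=math.pi, n=20000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

cos_cos_same = integrate(lambda x: math.cos(3 * x) * math.cos(3 * x))  # ≈ pi
cos_cos_diff = integrate(lambda x: math.cos(3 * x) * math.cos(5 * x))  # ≈ 0
cos_sin_same = integrate(lambda x: math.cos(4 * x) * math.sin(4 * x))  # ≈ 0, even though n = m
sin_sin_same = integrate(lambda x: math.sin(2 * x) * math.sin(2 * x))  # ≈ pi
print(cos_cos_same, cos_cos_diff, cos_sin_same, sin_sin_same)
```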
The idea of finding the Fourier series expansion is indeed very simple. For illustration, let
us first assume it is possible to write

f(x) = Σ_{k=1}^∞ b_k sin kx  (which is usually known as the Fourier sine series on [0, π]).

Multiply both sides by sin nx and integrate over [0, π]. Now, if we can interchange the
infinite sum with the integral, we have

∫_0^π f(x) sin nx dx = Σ_{k=1}^∞ b_k ∫_0^π sin kx sin nx dx = b_n π/2,

and hence b_n = (2/π) ∫_0^π f(x) sin nx dx.
Indeed, the above computation is valid for finite sums, and thus Fourier probably thought
that it should also be true for infinite sums. After Fourier, many mathematicians have
provided sufficient conditions for interchanging the infinite sum and the integral
(note that it is always possible to do that if it is only a finite sum, since integration is
known to be 'linear'). Riemann saw that it is possible if one has uniform convergence of
the infinite sum. However, in general, it is possible (to interchange) under much weaker
conditions than uniform convergence (unfortunately, this requires Lebesgue's work in
measure and integration, MA4262).
Our computation above only shows that if f(x) has a Fourier sine series, then b_k
must be of the above-mentioned form. We have yet to prove that the Fourier sine series
is a meaningful expansion. Indeed, most of the work on whether the expansion is
meaningful was not done by Fourier. For completeness of this subject, we will justify it
later in the course.
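As a sanity check on the recipe for b_n, the following Python sketch (our own illustration, not part of the notes) computes b_n = (2/π)∫_0^π f(x) sin nx dx numerically for f(x) = x and compares it with the closed form −2(−1)^n/n obtained by integration by parts:

```python
# Numerically compute Fourier sine coefficients on [0, pi] for f(x) = x
# and compare with the exact values -2*(-1)^n / n.
import math

def sine_coeff(f, n, N=20000):
    # b_n = (2/pi) * integral of f(x) sin(n x) over [0, pi], midpoint rule
    h = math.pi / N
    total = sum(f((i + 0.5) * h) * math.sin(n * (i + 0.5) * h) for i in range(N))
    return (2 / math.pi) * total * h

for n in range(1, 6):
    exact = -2 * (-1) ** n / n
    print(n, round(sine_coeff(lambda x: x, n), 6), exact)
```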
Some examples
(1) x ∼ −2 Σ_{k=1}^∞ ((−1)^k/k) sin kx

(2) 1 ∼ (4/π) Σ_{k=1}^∞ sin(2k − 1)x/(2k − 1).

(3) x² ∼ (2/π) Σ_{k=1}^∞ (−(−1)^k π²/k + 2((−1)^k − 1)/k³) sin kx.

1 = (4/π) Σ_{k=1}^∞ sin(2k − 1)x/(2k − 1)  for x ∈ (0, π)
and
Question 2. How about b_0 + Σ_{k=1}^∞ b_k sin kx?
Question 3. While Fourier sine series gives us an odd function, what does Fourier
cosine series give us?
For any function f on [−π, π], one could always write it as a sum of an odd function
and an even function. Indeed, note that the function fE (x) = (f (x) + f (−x))/2 is even
and fO (x) = (f (x) − f (−x))/2 is odd and f (x) = fE (x) + fO (x) for all x. Note that f is
piecewise continuous on [−π, π] if and only if both fE and fO are piecewise continuous.
We could then expand fE as a Fourier cosine series and fO as a Fourier sine series and
hence we have a Fourier series:
f(x) = a_0 + Σ_{k=1}^∞ a_k cos kx + Σ_{k=1}^∞ b_k sin kx.
(Note that the RHS of the above equality is a function of period 2π if it converges.)
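The even/odd splitting above is easy to verify in code. A Python sketch (our own, not from the notes, with an arbitrary test function):

```python
# Split a function into its even part fE and odd part fO, then confirm
# that f = fE + fO, that fE is even, and that fO is odd.
def even_part(f, x):
    return (f(x) + f(-x)) / 2

def odd_part(f, x):
    return (f(x) - f(-x)) / 2

f = lambda x: x ** 3 + x ** 2 + 1  # hypothetical test function
for x in (0.5, -1.2, 2.0):
    assert abs(even_part(f, x) + odd_part(f, x) - f(x)) < 1e-12  # f = fE + fO
    assert abs(even_part(f, x) - even_part(f, -x)) < 1e-12       # fE is even
    assert abs(odd_part(f, x) + odd_part(f, -x)) < 1e-12         # fO is odd
print("decomposition verified")
```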
We could also compute directly just as before and observe that for any n ∈ N

∫_{−π}^{π} (a_0 + Σ_{k=1}^∞ a_k cos kx + Σ_{k=1}^∞ b_k sin kx) sin nx dx

= a_0 ∫_{−π}^{π} sin nx dx + Σ_{k=1}^∞ a_k ∫_{−π}^{π} cos kx sin nx dx + Σ_{k=1}^∞ b_k ∫_{−π}^{π} sin kx sin nx dx

= b_n ∫_{−π}^{π} sin² nx dx = b_n π
Finally, we have

∫_{−π}^{π} f(x) dx = ∫_{−π}^{π} (a_0 + Σ_{k=1}^∞ a_k cos kx + Σ_{k=1}^∞ b_k sin kx) dx = ∫_{−π}^{π} a_0 dx = 2π a_0.

Hence a_0 = (1/2π) ∫_{−π}^{π} f(x) dx.
Remark 2.5. (1) After checking through the above, we do not see why there is a need for
f to be periodic. All we need to compute a_n and b_n is the fact that f is piecewise continuous
(indeed, it is enough to assume that f is Riemann integrable). The question is why
the representation should be meaningful.
(2) Note that if f(x) = f(−x), then b_k is zero for all k (sin kx is an odd function).
Similarly, if f(x) = −f(−x), then a_k is zero for all k.
More examples
In general, a periodic function may have a period other than 2π. What if the period of
a function is not 2π? For example, the function sin 2πx or cos πx + sin 3πx. Then it is
natural for us to modify our representation accordingly. In general, if f has period L, then

f(x) = a_0 + Σ_{k=1}^∞ a_k cos(2kπx/L) + Σ_{k=1}^∞ b_k sin(2kπx/L).
And we can compute the Fourier series just as before:

a_n = (2/L) ∫_{−L/2}^{L/2} f(x) cos(2πnx/L) dx

b_n = (2/L) ∫_{−L/2}^{L/2} f(x) sin(2πnx/L) dx

a_0 = (1/L) ∫_{−L/2}^{L/2} f(x) dx.
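The period-L formulas can be sanity-checked on a function whose coefficients we know in advance. A Python sketch (our own illustration; the test function is our own choice): for f(x) = cos πx + 3 sin 2πx, which has period L = 2, only a_1 = 1 and b_2 = 3 should survive.

```python
# Compute a_n and b_n with the period-L formulas (midpoint rule) for
# f(x) = cos(pi x) + 3 sin(2 pi x), period L = 2.
import math

L = 2.0
f = lambda x: math.cos(math.pi * x) + 3 * math.sin(2 * math.pi * x)

def coeff(trig, n, N=20000):
    # a_n (trig = cos) or b_n (trig = sin) via (2/L) * ∫_{-L/2}^{L/2} f * trig
    h = L / N
    total = 0.0
    for i in range(N):
        x = -L / 2 + (i + 0.5) * h
        total += f(x) * trig(2 * math.pi * n * x / L)
    return (2 / L) * total * h

print(round(coeff(math.cos, 1), 6))  # a_1, should be ≈ 1
print(round(coeff(math.sin, 2), 6))  # b_2, should be ≈ 3
print(round(coeff(math.cos, 2), 6), round(coeff(math.sin, 1), 6))  # ≈ 0, 0
```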
Remark (1) Once again, we see that it is sufficient to assume the integral makes sense
in order to compute a_n, b_n. Is there any reason that f must be periodic?
(2) Instead of repeating the computation, it is also possible to obtain these by a change of
variables from the Fourier series that we have already obtained (see §8). Indeed, a change of
variables can be a useful tool in cutting down computation.
[Exercise]
1. f (x) = |x| on [-1,1].
Relationship of Fourier series, Fourier sine series and Fourier cosine series
Let f be a function on [0, π] such that f (0) = f (π) = 0. Then we can do any of the
following.
(1) Extend f to a function on R of period π.
(2) Extend f to an odd function on R of period 2π.
(3) Extend f to an even function on R of period 2π.
[Exercise] Find the Fourier series, Fourier cosine series and Fourier sine series for each
of the following functions:
(1) x sin x on [0, π];
(2) x on [0, π];
(3) x on [0,1];
(4) 1 + 2x on [0, 1].
(5) f(x) = −1 if 0 < x < 1/2, and f(x) = 1 if 1/2 < x < 1.