Cryptography Theory
Jordi-Lluı́s Figueras
October 9, 2014
Contents

Some words

I Mathematical modelling

2 Dimensional analysis and scaling.
  2.1 Dimensions and units.
  2.2 Laws and unit free laws.
  2.3 Pi theorem.
    2.3.1 Example 1: Atomic bomb.
    2.3.2 Example 2: Heat transfer problem.
  2.4 Scaling.

II Analytical methods.

3 Perturbation methods.
  3.1 Regular perturbations.
    3.1.1 Poincaré-Lindstedt method.
    3.1.2 Big O and little o notation.
  3.2 Singular perturbations.
  3.3 Boundary layers.
  3.4 The WKB approximation.

4 Calculus of variations.
  4.1 Variational problems.
  4.2 Necessary conditions for extrema.
    4.2.1 Normed linear spaces.
    4.2.2 Derivatives of functionals.
  4.3 The simplest problem.
  4.4 Generalizations.
    4.4.1 Higher derivatives.

5 Dynamical systems.
  5.1 Discrete dynamical systems.
    5.1.1 Equilibria and stability.
  5.2 Continuous dynamical systems.
    5.2.1 Vector fields and phase space portraits.
    5.2.2 Stationary orbits and stability.
    5.2.3 Periodic orbits.
  5.3 Chaotic systems.

7 Sturm-Liouville problems.

8 Theory of transforms.
  8.1 Laplace transform.
  8.2 Fourier transform.
  8.3 Other transforms.

9 Integral equations.
  9.1 Volterra equations.
  9.2 Fredholm equations.
    9.2.1 Fredholm equations with degenerate kernel.
    9.2.2 Symmetric kernels.
  9.3 Perturbation methods.

Appendices
Some words

These lecture notes are based mainly on the book Applied Mathematics: Third edition, written by J. David Logan, and on the lecture notes written by Professor Lars-Erik Persson; see his web page http://staff.www.ltu.se/~larserik/. The main purpose of these notes is to summarize all the topics covered in the course Tillämpad Matematik taught at Uppsala University by the author during the Fall of 2014. I strongly recommend going to the sources for a better and fuller exposition of the selected topics.
Notice that a lot of exercises appear in these lecture notes. I follow the idea that mathematics is learnt through exercises!
Chapter 1
What is applied mathematics.
Applied mathematics is a broad subject area dealing with those problems that come from
the real world. Applied mathematics deals with all the stages for solving these problems,
namely:
1. Find a mathematical model that describes the problem.
2. Solve or analyze the model with analytical or numerical methods.
3. Compare the model's results with experimental results. In case they disagree qualitatively, go back and reformulate the problem.
Figure 1.1: Schematic representation of the stages involved in finding a solution of a real world problem.
Part I
Mathematical modelling
Dimensional analysis and scaling methods deal with the first stage in applied mathematics: finding a mathematical model. With their help we can try to construct a mathematical model or, at least, shed light on some of the properties of the problem that we have at hand.
Chapter 2
Dimensional analysis and scaling.
Dimensional analysis is a useful tool for finding mathematical models when the physical law
we are studying is unit free.
Observation 3. Usually we use the SI system in physical problems, but other sets of fundamental dimensions may be used in other contexts. For example, in economics we could use dimensions like population, wealth, happiness...
Table 2.1: SI fundamental dimensions. Disclaimer: the third column contains a non-standard notation for the symbols.
Table 2.2: Some SI derived units with respect to the fundamental dimensions. Dimensionless dimensions are expressed as 1.
2.2 Laws and unit free laws.

A (physical) law relating quantities q1 , . . . , qn is an equation

f (q1 , . . . , qn ) = 0.
With these, we can create the dimension matrix. It is the m × n matrix with integer coefficients

A =
( a_{1,1} · · · a_{1,n} )
( a_{2,1} · · · a_{2,n} )
(   ...    ...    ...  )
( a_{m,1} · · · a_{m,n} ).
The definition of law looks a little bit curious, doesn’t it? Let’s see some examples of
laws:
Example 4 (Energy preservation). Given a system that depends on the position (q), velocity (p) and mass (m), the law of energy preservation in its most classical setting says that the sum of the kinetic and potential energy is constant. That is,

m p²/2 + V (q) = C.

Thus, in this example, the function f depends on the three variables p, q, m and it is

f (p, q, m) = m p²/2 + V (q) − C.
Example 5 (Hooke’s law). The force F needed to extend (or compress) a spring by some
distance L is proportional to this distance. That is,
F = kL.
In this case, the associated function is

f (F, L) = F − kL.

Notice that Hooke's law implies that the constant k is not dimensionless. This observation should be kept in mind.
Example 6 (Atomic bomb). As we will see in Section 2.3.1, the radius r of the blast wave of an explosion, the time t, the air density ρ and the released energy E are related by a law

f (r, t, ρ, E) = 0.
Now that we have seen plenty of examples of laws, and seen that to all laws there is a
function associated to it, could you think of a law that has no f related to it? It is hard
to imagine it. Once we talk about relations between dimensions/units/quantities, equations
appear. And, from each equation, we get a law!
Laws are important because they give us relations between the variables involved. If we know the law, then we know exactly their relation; but just knowing that there is a law already tells us that there is some relation.
A unit free law is a law that does not depend on the choice of units. More concretely, a law

f (q1 , . . . , qn ) = 0

depending on n quantities q1 , . . . , qn and m < n units L1 , . . . , Lm is unit free if, for any λ1 , . . . , λm > 0, the law is also true for the new variables q̂i obtained by measuring in the new units L̂i = λi Li . That is,

f (q̂1 , . . . , q̂n ) = 0.
Example 7. An example of a unit free law is
f (x, g, t) = x − g t²/2 = 0,    (2.1)
where x denotes position (L), g the constant of the gravitational field (L/T 2 ) and t time
(T ).
If L̂ = λ1 L, T̂ = λ2 T then, since g has units L/T², the rescaled quantities are x̂ = λ1 x, ĝ = (λ1 /λ2²) g and t̂ = λ2 t, and we get that

f (x̂, ĝ, t̂) = λ1 x − (λ1 /λ2²) g (λ2 t)²/2 = λ1 f (x, g, t) = 0,

so the law is unit free.
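This invariance can also be checked numerically. The following sketch (not part of the original notes; the rescaling convention x̂ = λ1 x, ĝ = (λ1/λ2²) g, t̂ = λ2 t is assumed) verifies that rescaling the length and time units leaves the law (2.1) satisfied:

```python
def f(x, g, t):
    # The law (2.1): x - (1/2) g t^2 = 0.
    return x - 0.5 * g * t**2

# A state satisfying the law: free fall for t = 3 s with g = 9.8 m/s^2.
g, t = 9.8, 3.0
x = 0.5 * g * t**2
assert abs(f(x, g, t)) < 1e-12

# Rescale the length and time units by arbitrary factors lam1, lam2.
for lam1, lam2 in [(100.0, 60.0), (0.001, 3600.0)]:
    x_hat = lam1 * x                  # x has dimension L
    t_hat = lam2 * t                  # t has dimension T
    g_hat = (lam1 / lam2**2) * g      # g has dimension L/T^2
    # The law still holds in the rescaled variables.
    assert abs(f(x_hat, g_hat, t_hat)) < 1e-9
```

The same check fails for a law that is not unit free, e.g. g(x, t) = x − t.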
2.3 Pi theorem.
Theorem 8. Let
f (q1 , . . . , qn ) = 0
be a unit free physical law that relates the dimensioned quantities q1 , . . . , qn . Let L1 , . . . , Lm
(where m < n) be the fundamental dimensions with
[qi ] = L1^{a_{1,i}} · · · Lm^{a_{m,i}} .
and let r = rank(A), where A is the dimension matrix. Then there are n − r independent
dimensionless quantities
π1 , . . . , πn−r
that can be formed from q1 , . . . , qn . That is, for all i,

πi = q1^{α_{1,i}} · · · qn^{α_{n,i}} .

Moreover, the law is equivalent to one of the form

F (π1 , . . . , πn−r ) = 0.
I will not prove this theorem. If you are interested in seeing a proof, please have a look
at Logan’s book.
What is important in this theorem is the information that we can get from it. Let's discuss several examples:
Example 9. Suppose that a given unit free law can be reduced to just one dimensionless variable π1 . Then, the Pi Theorem states that this law is equivalent to a law of the form

F (π1 ) = 0,

with π1 = q1^{α_{1,1}} · · · qn^{α_{n,1}} . Now, since we can suppose that, generically, zeros of functions of one variable are discrete, π1 can only attain discrete values. Hence,

π1 = C,

with C a constant. This means that we get a relation between the variables qi of the form

q1^{α_{1,1}} · · · qn^{α_{n,1}} = C.
Example 10. Suppose now that given a unit free law Pi theorem asserts that there are two
dimensionless variables π1 , π2 . Then, this law is equivalent to one with
F (π1 , π2 ) = 0.
Now, since the zero set of a function of two variables is, generically, a curve, and using (if possible!) the Implicit Function Theorem, we get a relation between π1 and π2 of the form

π1 = g(π2 ).
The previous examples use the following principle that can be deduced using the
Implicit Function Theorem:
Principle 11. Let f : Rn → R be a smooth function. Then, the zero set {z ∈ Rn : f (z) = 0} is, generically, an (n − 1)-dimensional hypersurface. Furthermore, if z = (z1 , . . . , zn ), for most of the zi there exists a function gi : Rn−1 → R such that the zero set is locally equivalent to the solutions of the equation

zi = gi (z1 , . . . , zi−1 , zi+1 , . . . , zn ).
2.3.1 Example 1: Atomic bomb.

Consider a unit free law

f (r, t, ρ, E) = 0

that depends on the radius r, the air density ρ, the time t and the energy E. Now, [t] = T , [r] = L, [E] = M L²/T² and [ρ] = M/L³. The dimension matrix is
( 1 0 −2  0 )
( 0 1  2 −3 )
( 0 0  1  1 )

with columns ordered as (t, r, E, ρ) and rows as (T, L, M).
Notice that there are n = 4 dimensioned quantities and the rank of the dimension matrix
is 3. Hence, Pi Theorem asserts that there is just 1 dimensionless quantity π1 that can be
formed from these 4 quantities. Also, the law is equivalent to
F (π1 ) = 0.
One checks that

π1 = r⁵ ρ / (t² E),

and the law becomes

r⁵ ρ / (t² E) = C.
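The rank computation in this example can be sketched in a few lines. Here numpy's matrix rank and a hand-supplied exponent vector (the column order t, r, E, ρ of this sketch is an assumption, matching the matrix above) confirm that the null space of the dimension matrix produces the dimensionless quantity π1 = r⁵ρ/(t²E):

```python
import numpy as np

# Dimension matrix of the atomic bomb example: columns are the quantities
# (t, r, E, rho), rows the fundamental dimensions (T, L, M).
A = np.array([[1, 0, -2,  0],
              [0, 1,  2, -3],
              [0, 0,  1,  1]], dtype=float)

n = A.shape[1]                       # n = 4 dimensioned quantities
r = np.linalg.matrix_rank(A)         # r = 3
assert n - r == 1                    # exactly one dimensionless quantity

# The exponent vectors of dimensionless products span the null space of A.
# For pi_1 = r^5 rho / (t^2 E) the exponents, in the order (t, r, E, rho), are:
v = np.array([-2.0, 5.0, -1.0, 1.0])
assert np.allclose(A @ v, 0.0)       # pi_1 is indeed dimensionless
```

Any other dimensionless combination is a power of π1, since the null space here is one dimensional.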
2.4 Scaling.
Scaling is another procedure useful in formulating mathematical models. Scaling means measuring the variables on a scale of the correct magnitude. A lot of systems evolve in time, but not all of them are well measured in seconds: it is not the same to measure time when we study galaxies as when we study atomic reactions. Another example is measuring distances: galaxies and atomic reactions are not measured using the same scale. Every problem has its own scale (for each of the dimensions), and this scale, called the characteristic scale, is the one that should be used.
Once the characteristic scale is identified, a new dimensionless variable is formed by dividing the original variable by its characteristic scale. For example, in the case of time in galaxies, the
characteristic scale could be something around tc = 10⁶ years, and the dimensionless time will be

t̄ = t/tc .

After scaling all the variables of the model at hand, we get a dimensionless form of the problem. This process is called non-dimensionalization.
Part II
Analytical methods.
Chapter 3
Perturbation methods.
Perturbation methods are used for studying problems that are close to a known problem.
Example 12. When considering the motion of planets, it is well known that the 2 body problem (e.g. the Sun-Earth system) has a well known solution: the bodies orbit around their center of mass along elliptical orbits. In this setting, if we consider the problem of 3 masses with one of them much smaller than the other two (e.g. the Sun-Earth-satellite system), then we have a perturbed system.
The idea behind the perturbation methods is computing approximate solutions of the
system in terms of Taylor (or other) expansions.
Example 13. Consider the equation
x2 − 1 + εx = 0.
For ε = 0 the equation has solutions x(0) = ±1. Without loss of generality, set x(0) = 1. It is natural to expect that for small enough values of ε, solutions of the equation will be close to x(0) = 1. If we make the ansatz that the solutions can be written in Taylor form

x(ε) = x0 + x1 ε + x2 ε² + · · · ,

then the coefficients xi can be computed by substituting the expansion into the equation and matching powers of ε.
In general, perturbation methods apply to problems of the form

F (t, y, y′, y″, . . . , y⁽ⁿ⁾, ε) = 0.
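A minimal sketch of the computation in Example 13 (the coefficients below are derived by matching powers of ε, they are not taken from the notes): at order ε⁰ one gets x0 = 1, at order ε¹ one gets 2x0x1 + x0 = 0, and at order ε² one gets x1² + 2x0x2 + x1 = 0.

```python
import math

def exact_root(eps):
    # The root of x^2 - 1 + eps*x = 0 that lies near x = 1.
    return (-eps + math.sqrt(eps**2 + 4.0)) / 2.0

def series_root(eps):
    # Matching powers of eps in (x0 + x1*eps + x2*eps^2)^2 - 1 + eps*x = 0:
    #   eps^0: x0^2 = 1                 -> x0 = 1
    #   eps^1: 2*x0*x1 + x0 = 0         -> x1 = -1/2
    #   eps^2: x1^2 + 2*x0*x2 + x1 = 0  -> x2 = 1/8
    return 1.0 - eps / 2.0 + eps**2 / 8.0

for eps in [0.1, 0.01]:
    err = abs(exact_root(eps) - series_root(eps))
    assert err < 2.0 * eps**3      # truncation error of the second order expansion
```

Since this toy problem is a quadratic equation, the exact root is available and the quality of the truncated expansion can be measured directly.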
Consider the initial value problem

m v′ = −a v + b v²,    v(0) = V0 ,    (3.1)

with b ≪ a. First, we introduce the dimensionless variables

y = v/V0 ,    τ = a t/m,

and the small parameter ε = b V0 /a ≪ 1.
After this change of variables, the solution to Equation (3.1) when ε = 0 is
y0 (t) = e−t .
Now, performing the ansatz that solutions to Equation (3.1) are of the form y0 (t) + ε y1 (t) + ε² y2 (t) + h.o.t., we get

y0′(t) + ε y1′(t) + ε² y2′(t) + h.o.t. = −y0 (t) − ε y1 (t) − ε² y2 (t) + ε y0 (t)² + ε² 2 y0 (t) y1 (t) + h.o.t.,

and, matching powers of ε,

y0 (t) = e^{−t} ,
y1 (t) = e^{−t} − e^{−2t} ,
y2 (t) = e^{−t} − 2 e^{−2t} + e^{−3t} .
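The three orders computed above can be checked against direct numerical integration. The sketch below (using a hand-rolled classical Runge-Kutta step, an implementation choice of this example) compares the truncated series with the numerical solution of the scaled equation y′ = −y + ε y², y(0) = 1:

```python
import math

def rk4(eps, t_end, n_steps=4000):
    # Classical Runge-Kutta for y' = -y + eps*y^2, y(0) = 1.
    f = lambda y: -y + eps * y * y
    h, y = t_end / n_steps, 1.0
    for _ in range(n_steps):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        y += (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return y

def series(eps, t):
    # Second order regular perturbation approximation.
    y0 = math.exp(-t)
    y1 = math.exp(-t) - math.exp(-2*t)
    y2 = math.exp(-t) - 2*math.exp(-2*t) + math.exp(-3*t)
    return y0 + eps*y1 + eps**2*y2

eps, t = 0.05, 1.0
assert abs(rk4(eps, t) - series(eps, t)) < eps**3   # error of order eps^3
```

The discrepancy shrinks like ε³, as the truncation after the ε² term predicts.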
3.1.1 Poincaré-Lindstedt method.

Consider the oscillator

x″ + x + ε x³ = 0,    x(0) = 1,  x′(0) = 0.

Notice that ω0 = 1. Using the change of coordinates τ = ωt we get the initial value problem

ω² ẍ(τ ) + x(τ ) + ε x(τ )³ = 0,    x(0) = 1,  ẋ(0) = 0.
x″ + x + ε x⁵ = 0.

3.1.2 Big O and little o notation.

We say that f (s) = o(g(s)) as s → A if

lim_{s→A} f (s)/g(s) = 0.
1. x2 = o(x) as x → 0.
2. sin(x) = O(x) as x → 0.
3. ln(1 + x) = O(x) as x → 0.
This notation is very useful because it helps us to specify, among other things, how the approximate solutions computed using perturbation methods approximate the true solutions.
Example 21. In Example 17 it can be proved that the true solution x(τ ) and the approximate solution x0 (τ ) + ε x1 (τ ) satisfy x(τ ) − (x0 (τ ) + ε x1 (τ )) = O(ε²).

3.2 Singular perturbations.

A perturbation problem is singular when the problem for ε = 0 is qualitatively different from the problem for ε ≠ 0. Typical instances are the following:

1. The small parameter multiplies the highest derivative in an ODE. For example,

ε y″ + y′ + y³ = 0.
2. The small parameter multiplies the term with highest degree in an algebraic equation.
For example,
εx4 + x + 1 = 0.
As a concrete example, consider

ε x⁵ + x − 1 = 0.    (3.3)

Notice that for ε = 0 Equation (3.3) has just one solution, x = 1, but for ε ≠ 0 it has 5 solutions. Hence, a regular perturbation produces one family of solutions, given by a Taylor expansion in ε around x = 1, but the four other solutions are missing. What is wrong with Equation (3.3)? The problem is that the degree of the polynomial changes with the value of ε.
Let's try to find a way of computing them. Perform the change of variables

x = y/f (ε),

with an unknown function f (ε). Applying this change of variables to Equation (3.3) we obtain the equation

f (ε)⁻⁴ ε y⁵ + y − f (ε) = 0.    (3.4)

Now, choosing f (ε) = ε^{1/4} the leading coefficient of Equation (3.4) is 1 and we obtain

y⁵ + y − ε^{1/4} = 0.
Now, for this last equation we can perform a regular perturbation analysis and obtain the five solutions at ε = 0:

0,  e^{πi/4} ,  e^{3πi/4} ,  e^{5πi/4} ,  e^{7πi/4} ,

that is, y = 0 and the four fourth roots of −1 (the solutions of y⁵ + y = 0).
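These five roots can be checked numerically. The sketch below (using numpy.roots; the tolerance thresholds are arbitrary choices) locates the one regular root near x = 1 and the four singular roots, which grow like ε^{−1/4} times a fourth root of −1:

```python
import numpy as np

eps = 1e-4
# Coefficients of eps*x^5 + x - 1, in numpy's descending-degree order.
roots = np.roots([eps, 0.0, 0.0, 0.0, 1.0, -1.0])

# One root is a regular perturbation of x = 1 ...
regular = min(roots, key=lambda z: abs(z - 1.0))
assert abs(regular - 1.0) < 2 * eps

# ... and the other four blow up like eps^(-1/4) times a fourth root of -1.
large = sorted((z for z in roots if abs(z) > 2.0), key=np.angle)
predicted = sorted((eps**-0.25 * np.exp(1j * np.pi * (2*k + 1) / 4)
                    for k in (-2, -1, 0, 1)), key=np.angle)
assert len(large) == 4
for z, p in zip(large, predicted):
    assert abs(z - p) / abs(p) < 0.1   # leading order prediction, O(eps^1/4) error
```

With ε = 10⁻⁴ the four singular roots have modulus close to ε^{−1/4} = 10, as the rescaling x = y/ε^{1/4} predicts.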
Exercise 23. Give a second order approximation of the following algebraic equations:
1. εx3 + x + 1 = 0.
2. εx3 + 1 = 0.
3. εx6 + x2 − 2x − 2 = 0.
For the case of how to solve ODEs with singular perturbations, see Sections 3.3 and 3.4.
3.3 Boundary layers.

Figure 3.1: Graph of the function y(x), for x ∈ [0, 1], for the values ε = 0.1, 0.01, 0.001 of the parameter ε.
Consider the boundary value problem

ε y″ + (1 + ε) y′ + y = 0,    y(0) = 0,  y(1) = 1,    (3.5)

with 0 < ε ≪ 1. It cannot be solved as in the regular case: with a Taylor expansion of the form

y(x) = y0 + ε y1 + · · · ,

the leading term satisfies y0′ + y0 = 0, so y0 = C e^{−x} with C a constant, and this cannot satisfy both boundary values.
This is overcome by approximating the solution to (3.5) by inner and outer solutions.

Outer layer:

The outer layer is the region away from x = 0. Since there εy″ and εy′ are small, the outer solution is computed from

yo′ + yo = 0,    yo (1) = 1,

which has solution yo (x) = e^{1−x} .
Inner layer:
We scale the boundary value problem (3.5) with

τ = x/f (ε),

obtaining

(ε/f (ε)²) ÿ + ((1 + ε)/f (ε)) ẏ + y = 0.    (3.6)

This last ODE has coefficients

ε/f (ε)² ,  ε/f (ε) ,  1/f (ε) ,  1.

We will choose f (ε) so that the leading coefficient ε/f (ε)² has the same order as another one while the remaining two are small in comparison. This leads to the choice f (ε) = ε.
Then, an approximate solution of Equation (3.6) of order O(ε) is yi (x) = a(1 − e^{−x/ε} ).
Now, the problem is finding the value of the constant a in the inner approximation so that the two solutions match. This matching should be done outside the inner (x = O(ε)) and outer (x = O(1)) regions, for example when x = O(√ε). We perform the change of variables ν = x/√ε and impose the condition

lim_{ε→0} yi (√ε ν) = lim_{ε→0} yo (√ε ν).

Since yi (√ε ν) → a and yo (√ε ν) → yo (0), the matching condition gives a = yo (0).
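The matching can be tested numerically. The sketch below works with the model problem εy″ + (1 + ε)y′ + y = 0, y(0) = 0, y(1) = 1, which is consistent with the scaled equation (3.6); this particular problem happens to be exactly solvable, so the uniform matched approximation e^{1−x} − e^{1−x/ε} can be compared with the exact solution directly:

```python
import math

eps = 1e-3

def exact(x):
    # Exact solution of eps*y'' + (1+eps)*y' + y = 0, y(0) = 0, y(1) = 1.
    # Characteristic polynomial factors as (eps*r + 1)(r + 1) = 0,
    # giving the two rates r = -1 and r = -1/eps.
    A = 1.0 / (math.exp(-1.0) - math.exp(-1.0 / eps))
    return A * (math.exp(-x) - math.exp(-x / eps))

def matched(x):
    # Uniform approximation: outer solution plus inner layer correction.
    return math.exp(1.0 - x) - math.exp(1.0 - x / eps)

for x in [0.0, 0.001, 0.01, 0.1, 0.5, 1.0]:
    assert abs(exact(x) - matched(x)) < 1e-6
```

For this constant-coefficient problem the matched approximation agrees with the exact solution up to exponentially small terms, which is why the tolerance can be taken so tight.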
3.4 The WKB approximation.

The WKB method applies to equations of the form

ε² y″ − q(x) y = 0,    0 < ε ≪ 1,    (3.7)

with q(x) > 0. The time independent Schrödinger equation

−(ℏ²/(2m)) y″ + (V (x) − E) y = 0

is an example where the WKB method can be applied.
Let's consider Equation (3.7). The method consists of making the ansatz that the solution is of the form

y(x) = e^{u(x)/ε} .
Thus, we obtain the equation

ε f′ + f² − q(x) = 0,

where f = u′. Finally, using a regular perturbation of f ,

f = f0 + ε f1 + h.o.t.,

we obtain

f0 = ±√(q(x)) ,
f1 = −q′(x)/(4 q(x)) .
In terms of y it is

y(x) = q(x)^{−1/4} e^{±(1/ε) ∫_a^x √(q(s)) ds} (1 + O(ε)).
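The claim that the residual of the truncated expansion is O(ε²) can be verified numerically. The sketch below uses the hypothetical coefficient q(x) = 1 + x² (any smooth positive q would do) and a central difference for the derivative of f:

```python
import math

# WKB for eps^2 y'' = q(x) y with the (assumed) coefficient q(x) = 1 + x^2.
q  = lambda x: 1.0 + x*x
qp = lambda x: 2.0*x

f0 = lambda x: math.sqrt(q(x))           # f0 = sqrt(q)
f1 = lambda x: -qp(x) / (4.0*q(x))       # f1 = -q'/(4q)

def residual(x, eps):
    # eps*f' + f^2 - q with f = f0 + eps*f1; f' by a central difference.
    h = 1e-6
    f = lambda t: f0(t) + eps*f1(t)
    fp = (f(x + h) - f(x - h)) / (2.0*h)
    return eps*fp + f(x)**2 - q(x)

for eps in [1e-1, 1e-2]:
    # The O(1) and O(eps) terms cancel by construction of f0 and f1,
    # so the residual is O(eps^2).
    assert abs(residual(0.7, eps)) < 5.0 * eps**2
```

Shrinking ε by a factor of 10 shrinks the residual by roughly a factor of 100, confirming the order of the approximation.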
Exercise 27. Apply the WKB method to the following equations:

1. ε y″ + x y = 0,  0 < ε ≪ 1.

2. y″ + λ cos(x) y = 0,  1 ≪ λ.
Chapter 4
Calculus of variations.
The calculus of variations deals with the study of minimizing/maximizing functionals. These
are functions that map functions to the reals. Examples are: minimize the length of a curve,
maximize the area given a fixed length, minimize the area...
A point x0 is a local minimum of a function f if, for all x in a neighbourhood of x0 ,

f (x0 ) ≤ f (x).
A global minimum will be a local minimum for all neighbourhoods. Similarly, we define
local and global maxima.
If the function f is differentiable, a necessary condition for x0 to be a local minimum is that

∇f (x0 ) = 0.
Notation 31. We will use the following normed vector spaces:

• The space C⁰(U, Rᵐ) of continuous functions

f : U → Rᵐ

with norm

‖f ‖0 := sup_{x∈U} ‖f (x)‖.

• The space Cʳ(U, Rᵐ) of r times continuously differentiable functions

f : U → Rᵐ

with norm

‖f ‖r := Σ_{k=0}^{r} ‖Dᵏ f ‖0 .

• The space of bounded functions

f : U → C

with norm

‖f ‖U := sup_{x∈U} |f (x)|.
The problem that we are interested in is, given a functional, find its (local) minima or
maxima.
Let’s see some examples of functionals.
Example 32. • Let x0 ∈ R, then J : C 0 (R, R) → R with
J(f ) := f (x0 )
is a functional.
Exercise 33 (Brachistochrone problem). Let p = (0, b) and q = (a, 0) be two points lying on a vertical plane under the force of gravity g (vertical). Let a wire joining p and q be given by the graph of a function y(x).
Prove that the time that a bead takes to travel along the wire, starting at p and finishing at q, is

T = ∫_0^a √(1 + y′(x)²) / √(2g(b − y(x))) dx.
In a more general setting, in classical calculus of variations the functionals considered are of the form

J(y) = ∫_a^b L(x, y, y′) dx.
Exercise 34. Prove that the vector spaces in Notation 31 are normed vector spaces with the
norms specified there.
Notice that in the definition of the directional derivative we use an auxiliary construction: a function from R to R given by

ε ↦ J(y0 + εv).
Exercise 35. Compute the directional derivatives of the following functionals at the specified
point y0 and with direction v:
1. J(y) = ∫_0^1 y² dx,  y0 = cos(x),  v = sin(x).

2. J(y) = ∫_0^1 y′² dx,  y0 = cos(x),  v = sin(x).

3. J(y) = ∫_0^1 cos(y) dx,  y0 = x,  v = x².
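As a numerical illustration of directional derivatives (a sketch; the trapezoid rule and the central difference are implementation choices, and it gives away the answer to item 1), the value for J(y) = ∫₀¹ y² dx, y0 = cos, v = sin can be checked against the analytic result ∫₀¹ 2 cos(x) sin(x) dx = sin²(1):

```python
import math

def J(y, n=20000):
    # J(y) = \int_0^1 y(x)^2 dx, by the trapezoid rule.
    h = 1.0 / n
    s = 0.5*(y(0.0)**2 + y(1.0)**2) + sum(y(i*h)**2 for i in range(1, n))
    return h * s

def directional_derivative(J, y0, v, eps=1e-5):
    # Central difference approximation of (d/de) J(y0 + e*v) at e = 0.
    Jp = J(lambda x: y0(x) + eps*v(x))
    Jm = J(lambda x: y0(x) - eps*v(x))
    return (Jp - Jm) / (2.0*eps)

dJ = directional_derivative(J, math.cos, math.sin)
# Analytically, delta J(y0, v) = \int_0^1 2 cos(x) sin(x) dx = sin(1)^2.
assert abs(dJ - math.sin(1.0)**2) < 1e-6
```

Since J is quadratic in ε along the line y0 + εv, the central difference is exact up to quadrature error, which is why the agreement is so tight.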
Now, with the help of directional derivatives we can give necessary conditions for the
existence of minima/maxima of functionals.
Theorem 36. Let J : A ⊂ X → R be a functional defined on an open subset of a normed
vector space X . If y0 ∈ A is a minimum (maximum) of J, then
δJ(y0 , v) = 0
for all v where the directional derivative exists.
Exercise 37. Consider the functional J : C⁰([2, 4], R) → R,

J(y) = ∫_2^4 y(x)² dx.
defined for functions y ∈ C²[a, b] with the extra conditions y(a) = A, y(b) = B. The function L is assumed to be twice differentiable in [a, b] × R².
When computing directional derivatives it is required that v(a) = v(b) = 0, so that J(y + εv) is well-defined.
From Exercise 38 we deduce that a necessary condition for y0 to be a minimum is that

∫_a^b [ ∂y L(x, y0 , y0′ ) v + ∂y′ L(x, y0 , y0′ ) v′ ] dx = 0    (4.2)
Extremals of functionals whose Lagrangian satisfies ∂y L = 0 obey

∂y′ L = C,

with C being a constant.
Also, if ∂x L = 0, then
L − y 0 ∂y0 L = C,
with C being a constant.
Finally, if ∂y0 L = 0, then
∂y L = 0.
4.4 Generalizations.
There are different ways of generalizing the Euler-Lagrange equations.
where y ∈ C⁴[a, b] satisfies the boundary conditions y(a) = A1 , y(b) = B1 , y′(a) = A2 , y′(b) = B2 . In this case, and proceeding as before, we obtain that the (generalized) Euler-Lagrange equations are

∂y L − (d/dx) ∂y′ L + (d²/dx²) ∂y″ L = 0.
More generally, in the case of the Lagrangian L depending on the derivatives of y up to n-th order, y ∈ C²ⁿ[a, b], y should satisfy boundary conditions up to the (n − 1)-th derivatives, and the Euler-Lagrange equations are

∂y L − (d/dx) ∂y′ L + (d²/dx²) ∂y″ L + · · · + (−1)ⁿ (dⁿ/dxⁿ) ∂y⁽ⁿ⁾ L = 0.
Exercise 43. Find the extremals of the functional
J(y) = ∫_0^2 √(1 + (y″)²) dx,

with y(0) = 0, y′(0) = 1, y(2) = 1, y′(2) = 1.
with boundary conditions y(a) = A and y(b) free. In this case, we get the system of equations
∂y L − (d/dx) ∂y′ L = 0,
∂y′ L(b, y(b), y′(b)) = 0.
Exercise 45. Find the extremal paths connecting two points lying on a cylinder.
1. J(y) = ∫_0^1 (y² + y′² − 2y sin(x)) dx.

2. J(y) = ∫_1^2 y′²/x³ dx,

where y(1) = 1 and y(2) = 0.
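For item 2, the Euler-Lagrange equation d/dx (2y′/x³) = 0 gives y′ = Cx³, so the extremal fitting the boundary conditions is y(x) = (16 − x⁴)/15. The sketch below (a numerical check, not a proof) verifies that admissible perturbations do not decrease the functional:

```python
import math

def J(yp, n=20000, a=1.0, b=2.0):
    # J(y) = \int_1^2 y'(x)^2 / x^3 dx, by the trapezoid rule.
    h = (b - a) / n
    g = lambda x: yp(x)**2 / x**3
    return h * (0.5*(g(a) + g(b)) + sum(g(a + i*h) for i in range(1, n)))

# Candidate extremal: y' = C x^3 gives y = (16 - x^4)/15, y(1) = 1, y(2) = 0.
y_star  = lambda x: (16.0 - x**4) / 15.0
yp_star = lambda x: -4.0 * x**3 / 15.0
assert abs(y_star(1.0) - 1.0) < 1e-12 and abs(y_star(2.0)) < 1e-12

J_star = J(yp_star)
# Perturb by admissible directions v with v(1) = v(2) = 0.
vp = lambda x: math.pi * math.cos(math.pi * (x - 1.0))   # v(x) = sin(pi(x-1))
for t in [0.5, -0.5, 0.1]:
    J_pert = J(lambda x: yp_star(x) + t * vp(x))
    assert J_pert > J_star        # the extremal is in fact a minimiser here
```

Here the cross term of the perturbation integrates to zero (as equation (4.2) demands), so J(y* + tv) exceeds J(y*) by exactly t² ∫ v′²/x³ dx > 0.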
3. J(y) = ∫_0^2 (y² + y′² + 2y eˣ) dx.
Exercise 49. Show that the area of a surface given by the graph of a function z = f (x, y)
defined on a domain D is given by the double integral
∬_D √(1 + (∂x f )² + (∂y f )²) dx dy.
1. J(y) = ∫_0^1 (y′² + y²) dx.

2. J(y) = ∫_1^e ( (1/2) x² y′² − (1/8) y² ) dx,

where y(1) = 1 and y(e) is free.
Chapter 5
Dynamical systems.
A dynamical system is a rule for time evolution on a state space. This means that, given a space describing all possible states, a dynamical system is a set of rules that describes how a given initial state evolves in time.
Example 51. Consider the phase space R describing the height of a particle. Then an
example of a dynamical system on this phase space is
xn+1 = xn − 1.
This dynamical system describes how the height of the particle evolves in (discrete) time.
Example 52. As before, consider the phase space R describing the height of a particle.
Another example of a dynamical system on this phase space is
ẋ = −1.
This dynamical system describes how the height of the particle evolves in (continuous) time.
As shown in the previous two examples, there are different types of dynamical systems. A general classification of all possible dynamical systems is beyond the scope of these lecture notes but, roughly, there is a dichotomy depending on the nature of time: discrete or continuous.
In these notes we will concentrate on two types of dynamical systems. When the time evolution is discrete, we will consider dynamical systems described by the iteration of a map
F : X → X,
where X will represent the phase space. When the time evolution is continuous, we will
consider dynamical systems described by ODEs
ẋ = F (x, t),
where x ∈ X.
The goal of studying a dynamical system is to describe, if possible, the behaviour of the
particles when time evolves. Questions that are usually asked are: do all particles converge
to a point? Do all particles converge to a set? Are there stationary particles? How is the
evolution of the particles near a stationary one?
5.1 Discrete dynamical systems.

Exercise 53. Consider the dynamical system that models the evolution of your savings in your bank account. This system is given by
xn+1 = xn + rxn ,
with r ∈ R being a real parameter. Describe the evolution of all initial states x0 under this
system.
Exercise 54. Find a closed formula of the forward iterations of the (linear) dynamical
system
xn+1 = Axn
in terms of the eigenvalues and eigenvectors of the matrix A. Consider the case that all
eigenvalues have multiplicity one.
Apply this to the system with

A =
( 2 1 )
( 1 1 ).
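The closed formula of Exercise 54 can be sketched numerically for this matrix (without giving the formula away symbolically). The code below, a sketch using numpy's eigendecomposition, compares Aⁿx0 computed via P Dⁿ P⁻¹ with direct iteration:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# Diagonalise A = P D P^{-1}; then x_n = A^n x_0 = P D^n P^{-1} x_0.
eigvals, P = np.linalg.eig(A)

def iterate_closed_form(x0, n):
    return P @ np.diag(eigvals**n) @ np.linalg.inv(P) @ x0

def iterate_directly(x0, n):
    x = x0.copy()
    for _ in range(n):
        x = A @ x
    return x

x0 = np.array([1.0, -1.0])
for n in [1, 5, 20]:
    assert np.allclose(iterate_closed_form(x0, n), iterate_directly(x0, n))
```

Since one eigenvalue exceeds 1 in modulus and the other is smaller than 1, generic orbits grow along the leading eigenvector, which is the qualitative behaviour the closed formula reveals.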
In Exercises 53 and 54 we could find an explicit formula for the evolution of the system. Usually this is not the case.

Definition 55. A fixed point of a discrete dynamical system xn+1 = f (xn ) is a point x∗ such that

f (x∗ ) = x∗ .
Exercise 56. Find all fixed points of the system in Exercise 53.
Exercise 57. Find all fixed points of the system
xn+1 = x2n + a.
Definition 58. Given a discrete dynamical system xn+1 = f (xn ), a periodic orbit of
period k is a point x∗ such that its forward evolution is k-periodic. That is,
f k (x∗ ) = x∗ .
For example, every point x ≠ 0 of the system

xn+1 = −xn

is a periodic orbit of period 2.
Exercise 60. Find all fixed points and period 2 orbits of the system
xn+1 = 1 − a xn² + b yn ,
yn+1 = xn .
Fixed points and periodic orbits constitute the simplest orbits a dynamical system has. Once they are computed, the next question one should ask is: how do nearby orbits behave? Do they converge to the fixed point? Are they repelled? In order to answer these questions, we should introduce linear stability.

Definition 61. Given a dynamical system xn+1 = f (xn ), we will say that a fixed (or periodic) point x∗ is asymptotically stable if all points y in a neighbourhood of it satisfy

lim_{n→+∞} d(f ⁿ(y), f ⁿ(x∗ )) = 0.
5.2 Continuous dynamical systems.

Linear ODEs,

ẋ = A x,

can be solved explicitly in terms of the eigenvalues and eigenvectors of the matrix A. For the ODEs for which no explicit solution is known, other methods of study are needed. Let's see some of them in the next subsections.
For the sake of simplicity, from now on we will only consider autonomous ODEs: ∂t F (x, t) = 0.
5.2.1 Vector fields and phase space portraits.

A vector field F can be visualized by noting that solutions of ẋ = F (x) approximately satisfy

x(t + h) ≈ x(t) + h F (x(t)),

where h is a small time advance. Of course, this last equation is just an approximation of how solutions of the vector field behave.
The phase portrait of an ODE is the portrait of all its solutions.
Exercise 67. Plot the vector fields and phase portraits of the following vector fields:
1.
ẋ = 1.
2.
ẋ = x.
3.
ẋ = 2
ẏ = −1
4.
ẋ = x + y
ẏ = y − x
5.
ẋ = x2 + y 2
ẏ = cos(x)
Exercise 68. Consider the following vector fields in polar coordinates. Plot them and their
phase portraits:
1.
ṙ = 1
θ̇ = 1
2.
ṙ = r3 − r
θ̇ = 1
3.
ṙ = 0
θ̇ = r
A stationary orbit (or equilibrium) of the ODE ẋ = F (x) is a point x0 such that

F (x0 ) = 0.
As in the discrete case, the stability of a stationary orbit is dictated by the linearization around it. If the spectrum of the Jacobian matrix

DF (x0 )

is strictly contained in the open left half of the complex plane, then the stationary orbit is asymptotically stable. Similarly, if part of the spectrum lies in the open right half plane, then it is unstable.
Exercise 70. Find the stationary orbits of the following ODEs and study their stability:
1.
ẋ = x2 + x.
3.
ẋ = x2 + y 2 − 1
ẏ = x − y
1. Polar coordinates:
ṙ = r3 − r
θ̇ = 1
2. Cartesian coordinates:
ẋ = x3 − x + y 2 x − y
ẏ = y 3 − y + x2 y + x
5.3 Chaotic systems.

Roughly speaking, a system is chaotic on a set U if, for nearby initial conditions x0 and y0 , the distances between the orbits,

d(xn , yn ),

diverge. This condition ensures that the system is sensitive to initial conditions.
Observation 72. The definition of a chaotic system is more technical than the idea ex-
pressed above. It involves two other conditions: that the system has a dense set of periodic
orbits and that it is topologically mixing: all neighborhoods in U mix under the action of the
system.
Exercise 73. Convince yourself that the discrete dynamical system defined on the unit interval [0, 1] by

xn+1 = f (xn ),

where

f (x) = 2x for x < 1/2,  and  f (x) = 2 − 2x for x ≥ 1/2,

is chaotic. That is, prove that it has a dense set of periodic orbits and it is sensitive to initial conditions.
Exercise 74. Convince yourself that the discrete dynamical system defined on the circle [0, 1]/Z by

xn+1 = f (xn ),

where

f (x) = 2x (mod 1),

is chaotic. That is, prove that it has a dense set of periodic orbits and it is sensitive to initial conditions.
Where could we find this dynamical system?
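Sensitivity to initial conditions for the tent map of Exercise 73 can be observed directly (a sketch; the initial points and the separation threshold 0.1 are arbitrary choices):

```python
def tent(x):
    # The tent map on [0, 1].
    return 2.0*x if x < 0.5 else 2.0 - 2.0*x

# Two initial conditions that differ by only 10^-9.
x, y, steps = 0.2, 0.2 + 1e-9, 0
while abs(x - y) < 0.1 and steps < 100:
    x, y = tent(x), tent(y)
    steps += 1

# The tiny initial separation is roughly doubled at each iteration, so
# the orbits separate after about log2(0.1 / 1e-9), i.e. ~27, steps.
assert abs(x - y) > 0.1
assert steps < 40
```

No matter how close the two initial conditions are taken, the doubling pushes them apart after logarithmically many iterations — that is the sensitivity asked for in the exercise.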
Chapter 6
Introduction to partial differential equations.
Partial differential equations occur in a lot of problems in applied mathematics. They model
chemical reactions, physical laws, reaction-diffusion systems in biology and chemistry, gas
dynamics, fluid dynamics... The list is endless.
The goal of this chapter is to give an introduction to this topic.
For example, the reaction-diffusion equation

ut = ∆u + f (u)

describes how the concentration of one or more substances distributed in space changes under the influence of two processes: diffusion, which causes the substances to spread out, and reactions, which cause the substances to transform themselves.
1.
ut = cos(x).
2.
utx = x.
3.
uxx = y.
1.
uux = cos(x) + sin(y).
2.
ut + uxt = 1.
3.
ut + cux = f (x, t).
Use the change of coordinates z = x − ct, t = t.
Exercise 80. Find all solutions of the heat equation ut = k uxx of the form u(x, t) = U (z), with z = x/√(kt).
Most of the time, solving a PDE analytically is impossible. We will study some examples where analytic solutions are known.
Giving a classification of all PDEs is beyond the scope of this text. Nevertheless, we will see that some PDEs can be classified into three categories: elliptic (those that govern equilibrium phenomena), hyperbolic (those that govern wave propagation) and parabolic (those that govern diffusion processes).
The model examples are:

• Elliptic:
uxx + uyy = 0.

• Hyperbolic:
utt = uxx .

• Parabolic:
ut = uxx .
A linear PDE can be written as Lu = f , where f is a known function and u is the unknown. The linear operator L is of the form

L = Σ_{i,j} a_{i,j} (x, t) ∂ₓⁱ ∂ₜʲ .
1.
uxx + cos(x)ux = 0.
2. uxx + (ux )² = 0.
Another way of seeing that a PDE is linear is checking that it satisfies the following two properties:

1. L(u + w) = Lu + Lw.

2. L(cu) = c Lu for any constant c.
Linear PDEs satisfy the following interesting property: superposition. If u1 and u2 are solutions of the PDE

Lu = 0,

then so is any linear combination c1 u1 + c2 u2 . Moreover, any solution of the inhomogeneous equation

Lu = f    (6.1)

is of the form

up + c1 u1 + · · · + cn un ,

where up is a particular solution of Equation (6.1), ci ∈ R, and the ui are solutions of the equation Lu = 0.
Another type of superposition is the following: if u(x, t; α) is a solution of the equation Lu = 0 for each α, then

∫_{−∞}^{∞} g(α) u(x, t; α) dα

is also a solution, provided the integral converges and can be differentiated under the integral sign.
∆u = uxx + uyy = 0.
Its solutions model equilibrium problems because, for example, these are the time indepen-
dent solutions of the heat and the wave equations.
There are different ways of stating the Laplace’s equation, depending on which type of
conditions we impose on the boundaries.
• (Dirichlet condition.) Given a region Ω ⊂ R²,

∆u = f,
u|∂Ω = g(x).
Observation 89. In general, and under mild conditions, all initial conditions evolve, under the heat equation flow, to the solution of Laplace's equation. This is the same as saying that they are globally attracting solutions.
Exercise 90. Consider the two dimensional heat equation on the unit square Ω = [0, 1]2
with boundary condition u|∂Ω = 0. If we assume that all functions on it with these boundary
conditions are of the form
Σ_{n,m} a_{n,m} sin(2πnx) sin(2πmy),
Exercise 94. Use the eigenfunction expansion method to solve the heat equation
ut = kuxx
Exercise 95. Use the eigenfunction expansion method to solve the PDE
ut = kuxx + sin(πx)
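As a concrete instance of the eigenfunction expansion idea (the boundary conditions u(0, t) = u(1, t) = 0 and the initial condition sin(πx) are assumptions of this sketch, since the exercise statements above are truncated), the single-mode solution u(x, t) = e^{−kπ²t} sin(πx) of ut = k uxx can be checked against an explicit finite-difference scheme:

```python
import math

# u_t = k u_xx on [0, 1], u(0, t) = u(1, t) = 0, u(x, 0) = sin(pi x).
# The eigenfunction expansion collapses to one mode:
#   u(x, t) = exp(-k pi^2 t) sin(pi x).
k, T = 1.0, 0.1
nx, nt = 50, 2000
dx, dt = 1.0/nx, T/nt
r = k*dt/dx**2               # = 0.125, below the stability bound 1/2

u = [math.sin(math.pi*i*dx) for i in range(nx+1)]
for _ in range(nt):
    u = [0.0] + [u[i] + r*(u[i+1] - 2.0*u[i] + u[i-1])
                 for i in range(1, nx)] + [0.0]

exact = lambda x, t: math.exp(-k*math.pi**2*t) * math.sin(math.pi*x)
err = max(abs(u[i] - exact(i*dx, T)) for i in range(nx+1))
assert err < 1e-3
```

Initial data with several Fourier modes work the same way: each mode sin(nπx) simply decays with its own rate e^{−kn²π²t}, which is the content of the eigenfunction expansion method.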
Chapter 7
Sturm-Liouville problems.

The differential equation

−(p(x) y′)′ + q(x) y = λ y,    x ∈ [a, b],    (7.1)

together with the boundary conditions

a1 y(a) + a2 y′(a) = 0,
b1 y(b) + b2 y′(b) = 0,

defines a Sturm-Liouville problem (SLP). If both functions p and q are continuous and p does not change sign in [a, b], then the SLP is called regular.
Notice that solutions of Equation (7.1) depend on the parameter λ, and that y(x) = 0 is always a solution. A value λ for which Equation (7.1) has a non trivial solution is called an eigenvalue, and the corresponding solution is called an eigenvector.
Regular SLPs have infinitely many eigenvalues, with associated eigenvectors of finite multiplicity. These eigenvectors form a complete, orthogonal set. Moreover, all eigenvalues are real.
Exercise 96. Find all eigenvalues and eigenvectors of the regular SLP problems:
1.
y 00 = λy,
with boundary conditions y(0) = y(π) = 0.
2.
y 00 = λy,
with boundary conditions y(0) = y(l) = 0.
3.
y 00 + y 0 = λy,
with boundary conditions y 0 (0) = y(1) = 0.
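The eigenvalues of the first problem, λn = −n², can be confirmed numerically (a sketch using a finite-difference discretization; this is a check of the answer, not the requested analytic solution):

```python
import numpy as np

# The regular SLP y'' = lambda y, y(0) = y(pi) = 0 has eigenvalues
# lambda_n = -n^2, with eigenvectors sin(n x).
n = 400
h = np.pi / (n + 1)
# Second-difference operator on the n interior grid points (Dirichlet BCs).
D2 = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
      + np.diag(np.ones(n-1), -1)) / h**2
eigvals = np.sort(np.linalg.eigvalsh(D2))[::-1]   # closest to zero first

for m in range(1, 6):
    assert abs(eigvals[m-1] + m**2) < 1e-2        # lambda_m ~ -m^2
```

The discretized eigenvalues are −(4/h²) sin²(mh/2) ≈ −m²(1 − (mh)²/12), so refining the grid makes the agreement with −m² as tight as desired.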
Chapter 8
Theory of transforms.
The idea behind the theory of transforms is to transform our problem into an easier one.
The transforms applied are usually of integral type: the Laplace transform and the Fourier
transform. The former is applied to functions defined on the positive real line, while the
latter is applied to functions defined on the whole line. As we will see, the theories behind
these two transforms parallel each other.
In general, a transform is applied to a problem in order to change an undesirable
property of it into one that is easier to handle. For example, the Fourier and Laplace
transforms are useful for turning differential equations into algebraic equations.
Example 97. Let's illustrate the idea behind transform methods.
Consider the nonlinear system of equations

x^2 y^3 z = 8
xy = 7
x^3 y^5 / z = 1

This system is, in principle, intractable. But if we apply the logarithm transform, we get
the system

2X + 3Y + Z = log(8)
X + Y = log(7)
3X + 5Y − Z = 0

where X = log(x), Y = log(y), Z = log(z). This last system is easily solvable. Once we
have its solution, we pull it back with the help of the inverse of the logarithm transform,
the exponential.
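The two steps of Example 97 can be carried out numerically; this is a sketch (numpy is my choice of tool) that solves the log-transformed linear system and pulls the solution back with the exponential.

```python
import numpy as np

# Log-transformed system of Example 97:
#   2X + 3Y + Z = log 8,  X + Y = log 7,  3X + 5Y - Z = 0,
# with X = log x, Y = log y, Z = log z.
A = np.array([[2.0, 3.0, 1.0],
              [1.0, 1.0, 0.0],
              [3.0, 5.0, -1.0]])
b = np.array([np.log(8.0), np.log(7.0), 0.0])
X, Y, Z = np.linalg.solve(A, b)

# Pull back with the inverse transform, the exponential.
x, y, z = np.exp([X, Y, Z])
```

Substituting x, y, z back into the original nonlinear system recovers the right-hand sides 8, 7 and 1.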
1. 1.
2. t.
3. t^n.
4. e^{at}.
where the integration path is a vertical line in the complex plane, traversed from bottom to
top, and a is chosen so that all singularities of the function Y lie to the left of the vertical
line with real part a.
The Laplace transform satisfies very nice properties. Furthermore, the inverse transform
of a product is the convolution:

L^{-1}(Y_1 Y_2)(t) = (y_1 ∗ y_2)(t).
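The convolution property can be checked symbolically; a sketch assuming sympy, with the test functions y_1(t) = e^{-t} and y_2(t) = e^{-2t} (my choice, not from the notes).

```python
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

# y1(t) = e^{-t} and y2(t) = e^{-2t}, so Y1(s) Y2(s) = 1/((s+1)(s+2)).
# Convolution (y1 * y2)(t) = \int_0^t y1(tau) y2(t - tau) d tau.
conv = sp.integrate(sp.exp(-tau) * sp.exp(-2 * (t - tau)), (tau, 0, t))

# Inverse Laplace transform of the product Y1(s) Y2(s).
inv = sp.inverse_laplace_transform(1 / ((s + 1) * (s + 2)), s, t)
```

Both expressions equal e^{-t} - e^{-2t}, illustrating L^{-1}(Y_1 Y_2) = y_1 ∗ y_2.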
Exercise 102. Prove that the Laplace transform defines a linear map. That is, L(af +bg) =
aL(f ) + bL(g), where a and b are constants.
Have a look at Table 8.1, where some of the most common Laplace transforms appear.
Exercise 103. With the help of Laplace transforms, solve the following problems:
y(t)                                 Y(s)
1                                    1/s,  s > 0
t^n with n > 0 integer               n!/s^{n+1},  s > 0
e^{at}                               1/(s − a),  s > a
sin(at)                              a/(s^2 + a^2),  s > 0
cos(at)                              s/(s^2 + a^2),  s > 0
sinh(at)                             a/(s^2 − a^2),  s > |a|
cosh(at)                             s/(s^2 − a^2),  s > |a|
e^{bt} sin(at)                       a/((s − b)^2 + a^2),  s > b
e^{bt} cos(at)                       (s − b)/((s − b)^2 + a^2),  s > b
t^n e^{at}                           n!/(s − a)^{n+1},  s > a
H(t − a)                             e^{−as}/s,  s > 0
δ(t − a)                             e^{−as}
H(t − a)f(t − a)                     F(s)e^{−as}
f(t)e^{−at}                          F(s + a)
erf(√t)                              1/(s√(s + 1)),  s > 0
(1/√(πt)) e^{−a^2/(4t)}              (1/√s) e^{−a√s},  s > 0
1 − erf(a/(2√t))                     (1/s) e^{−a√s},  s > 0
(a/(2√(πt^3))) e^{−a^2/(4t)}         e^{−a√s},  s > 0

Table 8.1: Laplace transforms. H(t) is the Heaviside function, while δ(t) is the Dirac delta.
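A few entries of Table 8.1 can be verified with a computer algebra system; a sketch assuming sympy is available.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
a = sp.Symbol('a', positive=True)

# Check three table entries; noconds=True drops the convergence conditions.
F1 = sp.laplace_transform(sp.exp(a * t), t, s, noconds=True)       # 1/(s - a)
F2 = sp.laplace_transform(sp.sin(a * t), t, s, noconds=True)       # a/(s^2 + a^2)
F3 = sp.laplace_transform(t**3 * sp.exp(a * t), t, s, noconds=True)  # 3!/(s - a)^4
```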
1. The ODE

u'' + u' + u = sin(t).

2. The ODE

∑_{k=0}^{N} a_k u^{(k)} = ∑_{k=0}^{M} b_k x^k,

3. The PDE

u_t + u_x = x, x > 0, t > 0,
u(x, 0) = 0, x > 0,
u(0, t) = 0, t > 0.

4. The PDE

u_t − u_{xx} = 0, x > 0, t > 0,
u(x, 0) = 0, x > 0,
u(0, t) = 1, t > 0,
u(x, t) is bounded.

5. The PDE

u_t − ku_{xx} = 0, x > 0, t > 0,
u(x, 0) = 0, x > 0,
u(0, t) = g(t), t > 0.

Write the solution in the form u(x, t) = ∫_0^t K(x, t − τ)g(τ) dτ.
Observation 104. The definition of the Fourier transform presented in these notes differs
from those given in other textbooks. Although I personally do not like it, I follow the
notation of Logan's book in order not to create any confusion.
The Fourier transform satisfies similar properties to the ones satisfied by the Laplace
transform.
Exercise 105. Prove that the Fourier transform defines a linear map. That is, F(af +bg) =
aF(f ) + bF(g), where a and b are constants.
Exercise 106. Compute the Fourier transform of n-th derivative y (n) and of the convolution
y1 ∗ y2 .
In Table 8.2 there are some Fourier transforms.
y(t)               ŷ(s)
δ(t − a)           e^{ias}
e^{−at^2}          √(π/a) e^{−s^2/(4a)}
H(t)               πδ(s) − i/s
e^{−a|t|}          2a/(a^2 + s^2)
y^{(n)}(t)         (−is)^n ŷ(s)
(y_1 ∗ y_2)(t)     ŷ_1(s)ŷ_2(s)

Table 8.2: Fourier transforms. H(t) is the Heaviside function, while δ(t) is the Dirac delta.
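An entry of Table 8.2 can be cross-checked by direct numerical quadrature, using the convention ŷ(s) = ∫ y(t) e^{ist} dt fixed by the δ(t − a) row. The values a = 1, s = 2 and the truncation window are choices for this sketch.

```python
import numpy as np

# Check F{e^{-a|t|}} = 2a/(a^2 + s^2) numerically for a = 1, s = 2.
# The window [-40, 40] truncates a tail of size ~e^{-40}, which is negligible.
a, s = 1.0, 2.0
t = np.linspace(-40.0, 40.0, 200001)
dt = t[1] - t[0]
f = np.exp(-a * np.abs(t)) * np.exp(1j * s * t)

# Composite trapezoid rule for the integral over the window.
yhat = np.sum((f[1:] + f[:-1]) / 2) * dt

exact = 2 * a / (a**2 + s**2)
```

The real part agrees with 2a/(a^2 + s^2) = 0.4, and the imaginary part vanishes by symmetry.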
Exercise 107. With the help of Fourier transforms, solve the following problems:
1. The ODE
u'' − u = f(x), x ∈ R.
Example 108. The periodic Fourier transform is useful for solving PDEs whose solutions
are periodic in one of the variables; for example, the heat equation on the circle. It is the
PDE

u_t = u_{xx},

with u(t, x) = u(t, x + 1), u(0, x) = f(x).
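The heat equation on the circle can be solved with the discrete Fourier transform: the mode e^{2πinx} decays by the factor e^{-(2πn)^2 t}. A minimal sketch, taking f(x) = sin(2πx) as initial datum (my choice):

```python
import numpy as np

# Solve u_t = u_xx with period-1 boundary conditions by FFT:
# multiply each Fourier coefficient by exp(-(2 pi n)^2 t).
N = 128
x = np.arange(N) / N
f = np.sin(2 * np.pi * x)

t = 0.01
n = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers n
u_hat = np.fft.fft(f) * np.exp(-(2 * np.pi * n)**2 * t)
u = np.real(np.fft.ifft(u_hat))

# For a single mode the answer is exact up to roundoff:
exact = np.exp(-(2 * np.pi)**2 * t) * np.sin(2 * np.pi * x)
```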
Another example arises when dealing with sequences. Then we can use the Z transform,
defined as

Z(x)(z) = ∑_{n=0}^{∞} x_n z^{−n}.
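For example, for the geometric sequence x_n = a^n the Z transform has the closed form z/(z − a) for |z| > |a|; a quick numerical check (the values of a and z below are my choices):

```python
# Partial sum of the Z transform of x_n = a^n at a point z with |z| > |a|;
# it converges geometrically to z/(z - a).
a, z = 0.5, 2.0
partial = sum(a**n * z**(-n) for n in range(200))
closed = z / (z - a)
```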
Integral equations.
An integral equation is an equation where the unknown is a function and integrals are
involved.
Example 109.

∫_0^1 f(x) dx = f(2).

Example 110.

∫_0^x f(t) dt = f(0) + f(x).
Exercise 111. Prove that a solution of Equation (9.2) is a solution of the ODE (9.1).
Two classical examples of linear integral equations are the Volterra and Fredholm
equations. The former is of the form

∫_a^x k(x, y)u(y) dy − λu(x) = f(x),  a ≤ x ≤ b,

while the latter has fixed limits of integration:

∫_a^b k(x, y)u(y) dy − λu(x) = f(x),  a ≤ x ≤ b.

In both examples, the unknown function is u, while k and f are known. The function k is
usually called the kernel.
59
60 CHAPTER 9. INTEGRAL EQUATIONS.
Observation 112. Notice that both problems look similar. They differ only in the fact
that for the Volterra equation the limits of integration depend on x, while for the Fredholm
equation they are fixed. As we will see, this small detail changes dramatically the way each
problem is addressed.
Let's discuss these equations in more detail. Notice that both equations can be written
in the form

(K − λId)u = f, (9.3)
where K denotes the linear integral operator. Hence, the equations will have a solution u if
the function f is in the range of the linear operator K − λId. For example, if it is invertible:

u = (K − λId)^{−1} f.
Observation 113. If the operator K − λId fails to be invertible, it is still possible that for
some (but not all) f Equation (9.3) has solutions.
To study the invertibility of K − λId it is important to understand for which values of λ
the eigenvalue equation

Ku = λu

has nontrivial solutions. For these values, invertibility fails.
The following exercise shows why studying the spectrum of a linear operator A is useful
for solving linear systems.
Exercise 114. Consider the real symmetric n × n matrix A. Give a solution of the nonho-
mogeneous system
Av = λv + f
in terms of the eigenvalues and eigenvectors of the matrix A. Use the fact that there exists
an orthogonal basis of eigenvectors, and that the eigenvalues are all real.
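A sketch of the construction asked for in Exercise 114: expand f in the orthogonal eigenvector basis of A and divide each coefficient by μ_i − λ. The matrix, f and λ below are arbitrary test data of my choosing.

```python
import numpy as np

# Solve A v = lam * v + f, i.e. (A - lam*I) v = f, via the spectral
# decomposition A = Q diag(mu) Q^T of a real symmetric matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # symmetric test matrix (my choice)
f = rng.standard_normal(4)
lam = 10.0                             # chosen away from the spectrum of A

mu, Q = np.linalg.eigh(A)              # A = Q @ diag(mu) @ Q.T
v = Q @ ((Q.T @ f) / (mu - lam))       # v = sum_i (q_i . f)/(mu_i - lam) q_i
```

Since (A − λI)v = Q diag(μ_i − λ) Qᵀ v = f, the vector v solves the nonhomogeneous system.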
There are special cases where the Volterra equation has an easy solution. Let’s see some
of these.
Exercise 115. Suppose that the kernel k does not depend on the first variable x (k(x, t) =
g(t)). Prove that a solution of Equation (9.4) satisfies the ODE
u'(x) = (1/λ)(g(x)u(x) − f'(x)).
1. ∫_0^x u(t) dt = u(x) + x.

2. ∫_0^x tu(t) dt = 2u(x) + cos(x).
Exercise 117. Suppose that the kernel k in Equation (9.4) is of the form k(x, t) = g(x − t).
Prove that Equation (9.4) can be solved by means of the Laplace transform.
(Hint: Remember that the Laplace transform of the convolution is the product of the
Laplace transforms.)
1. u(x) + ∫_0^x (x − t)u(t) dt = x.

2. u(x) = ∫_0^x e^{x−t} u(t) dt.
In general, Volterra equations are solved by means of Picard's method. If we write
down the Volterra equation as

u = K̂u + f̂,

with K̂ = (1/λ)K and f̂ = (−1/λ)f, then the solution is of the form

u = ∑_{n=0}^{∞} K̂^n f̂, (9.5)

where K̂^n denotes the n-th composition of the operator K̂. This series is called the Neu-
mann series.
There is a theorem that assures that this procedure works.
Theorem 119. If f and k are continuous, then the solution to the Volterra equation is given
by (9.5).
Observation 120. Since solution (9.5) involves an infinite series, approximate solutions
are required. It can be proven that

|K̂^n f| ≤ max |f| · ((b − a)^n max |k/λ|^n) / n!.
Exercise 122. Find approximate solutions to the following Volterra equations using Neu-
mann series:

1. u(x) + λ ∫_0^x u(s) ds = x.

2. λu(x) + ∫_0^x (x − s)u(s) ds = x.
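The Neumann series for the first equation of Exercise 122 (with λ = 1, my choice) can be summed numerically by Picard iteration, u_{k+1} = x − ∫_0^x u_k(s) ds. Differentiating the equation gives u' + u = 1, u(0) = 0, so the exact solution is u(x) = 1 − e^{−x}.

```python
import numpy as np

# Picard iteration for u(x) + \int_0^x u(s) ds = x on [0, 1].
# Grid size and iteration count are choices for this sketch.
N = 2000
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]

def cumtrapz(u):
    # Cumulative trapezoid approximation of the integral of u from 0 to x.
    c = np.zeros_like(u)
    c[1:] = np.cumsum(h * (u[1:] + u[:-1]) / 2)
    return c

u = x.copy()
for _ in range(30):                 # partial sums of the Neumann series
    u = x - cumtrapz(u)

exact = 1.0 - np.exp(-x)
```

The iteration converges factorially fast, as suggested by the bound in Observation 120.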
In the case of Volterra equations we saw that all the linear equations have a solution,
given by the Neumann series. In the case of Fredholm equations, this is no longer true.
However, as we will see, there are cases that we can treat.
In this special case, the solution to the Fredholm equation (9.8) can be reduced to a finite
dimensional linear algebra problem. Notice that it is equivalent to

∑_{i=0}^{n} α_i(x) ∫_a^b β_i(y)u(y) dy − λu(x) = f(x). (9.9)
Multiplying Equation (9.9) by β_j(x) and integrating with respect to x, we obtain the n
linear equations of the form

∑_{i=0}^{n} (α_i, β_j)(β_i, u) − λ(β_j, u) = (β_j, f).
Observation 123. Notice that the linear system (9.10) has a solution for all f if and only
if λ is not an eigenvalue of the matrix A.
In this case, the following theorem, sometimes called the Fredholm alternative, is easily
proven.
Theorem 124. Consider the Fredholm equation (9.8) with degenerate kernel. If λ is
not an eigenvalue of the matrix A, then the problem has a unique solution. If, on the contrary,
it is an eigenvalue, then the problem has either no solutions or infinitely many.
Exercise 125. Solve the Fredholm equation

∫_0^1 xtu(t) dt + u(x) = cos(2πx).
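This equation has the degenerate kernel k(x, t) = xt, so u(x) = cos(2πx) − cx with c = ∫_0^1 t u(t) dt; since ∫_0^1 t cos(2πt) dt = 0, one gets c = 0 and u(x) = cos(2πx). A numerical cross-check by discretization (grid size is my choice):

```python
import numpy as np

# Discretize the Fredholm equation with the trapezoid rule on a uniform grid:
# (K + I) u = f with K[i, j] = x_i * t_j * w_j, w_j the quadrature weights.
N = 1000
x = np.linspace(0.0, 1.0, N + 1)
w = np.full(N + 1, x[1] - x[0])
w[0] /= 2.0
w[-1] /= 2.0

K = np.outer(x, x) * w
f = np.cos(2 * np.pi * x)
u = np.linalg.solve(K + np.eye(N + 1), f)
```

The computed u matches cos(2πx) up to quadrature error.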
Theorem 127. If the kernel of the Fredholm equation is symmetric, continuous
and non-degenerate, then the eigenvalue problem

Ku = λu

has infinitely many eigenvalues λ_i, each with finite multiplicity, and they can be ordered

The coefficients a_k are equal to ∫_a^b f(x)φ_k(x) dx.
Notice that, in the setting of the previous theorem, solving the linear equation

Ku − λu = f

is easy once we know all the eigenvalues and eigenfunctions of the operator K. See Exercise
114 for an analogous solution.
Exercise 129. Find approximate solutions of the following equations by means of perturba-
tion series around ε = 0:

1. u(x) + ε ∫_0^x u(s)^2 ds = 1, 0 ≤ x ≤ 1.

2. u(x) + ∫_0^x u(s) ds + ε(u(x)^3 − u(x)) = x, 0 ≤ x ≤ 1.

3. u(x) + ∫_0^1 u(s) ds + εu'(x) = 1, 0 ≤ x ≤ 1.
Appendices

Appendix A

Solving some ODEs.

y'' + p(x)y' + q(x)y = 0.