Differential Equations Notes
Philip Korman
Department of Mathematical Sciences
University of Cincinnati
Cincinnati Ohio 45221-0025
Contents
Introduction  vii
9 Numerical Computations  324
9.1 The Capabilities of Software Systems, Like Mathematica . . 324
9.2 Solving Boundary Value Problems . . . . . . . . . . . . . . . 326
9.3 Solving Nonlinear Boundary Value Problems . . . . . . . . . 330
9.4 Direction Fields . . . . . . . . . . . . . . . . . . . . . . . . . 332
10 Appendix  338
10.1 The Chain Rule and Its Descendants . . . . . . . . . . . . . . 338
10.2 Partial Fractions . . . . . . . . . . . . . . . . . . . . . . . . . 339
10.3 Eigenvalues and Eigenvectors . . . . . . . . . . . . . . . . . . 340
Introduction
This book is based on several courses that I taught at the University of
Cincinnati. Chapters 1-4 are based on the course Differential Equations
for sophomores in science and engineering. Chapters 7 and 8 are based on
the course Fourier Series and PDE. While some theoretical material is
either quoted, or just mentioned without proof, I have tried to show all of
the details when doing problems. I have tried to use plain language, and
not to be too wordy. I think that an extra word of explanation often has as
much potential to confuse a student as to help. I have also tried not
to overwhelm students with new information. I forgot who said it first: one
should teach the truth, nothing but the truth, but not the whole truth.
I hope that the experts will find this book useful as well. It presents
several important topics that are hard to find in the literature: Massera's
theorem, Lyapunov's inequality, Picone's form of Sturm's comparison theorem, the sideways heat equation, periodic population models, hands-on
numerical solution of nonlinear boundary value problems, etc. The book
also contains new exposition of some standard topics. We have completely
revamped the presentation of the Frobenius method for series solution of
differential equations, so that the regular singular points are now hopefully in the past. In the proof of the existence and uniqueness theorem,
we have replaced the standard Picard iterations with monotone iterations,
which should be easier for students to absorb. There are many other fresh
touches throughout the book. The book contains a number of interesting
problems, including some original ones, published by the author over the
years in the Problem Sections of SIAM Review, EJDE, and other journals.
All of the challenging problems are provided with hints, making them easy
to solve for instructors.
How important are differential equations? Here is what Isaac Newton
said: "It is useful to solve differential equations." And what he knew was
just the beginning. Today differential equations are used widely in science
Chapter 1
Integration by Guess-and-Check
Many problems in differential equations end with a computation of an integral. One even uses the term "integration" of a differential equation instead
of "solution". We need to be able to compute integrals quickly.
Recall the product rule
(f g)′ = f g′ + f ′g .
Example ∫ x e^x dx. We need to find the function whose derivative is x e^x. Starting with the initial guess x e^x, whose derivative x e^x + e^x has an extra term e^x, we subtract e^x:
∫ x e^x dx = x e^x − e^x + c .
Similarly,
∫ x cos 3x dx = (1/3) x sin 3x + (1/9) cos 3x + c .
We see that the initial guess is the product f(x)g(x), chosen in such a way that f(x)g′(x) gives the integrand.

Example ∫ x e^{5x} dx. Starting with the initial guess (1/5) x e^{5x}, we compute its derivative x e^{5x} + (1/5) e^{5x}, and then adjust the guess to obtain
∫ x e^{5x} dx = (1/5) x e^{5x} − (1/25) e^{5x} + c .

Example ∫ x² sin 3x dx. The initial guess is −(1/3) x² cos 3x. Its derivative
[−(1/3) x² cos 3x]′ = x² sin 3x − (2/3) x cos 3x
has an extra term −(2/3) x cos 3x. To remove this term, we modify our guess: −(1/3) x² cos 3x + (2/9) x sin 3x. Its derivative
[−(1/3) x² cos 3x + (2/9) x sin 3x]′ = x² sin 3x + (2/9) sin 3x
has an extra term (2/9) sin 3x. So we make the final adjustment
∫ x² sin 3x dx = −(1/3) x² cos 3x + (2/9) x sin 3x + (2/27) cos 3x + c .
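Answers found by guess-and-check are easy to confirm: differentiate the guess and compare with the integrand. As a sanity check of the last result, here is a small numerical confirmation (Python is used purely as a calculator; the function names are ours):

```python
import math

def F(x):
    # Candidate antiderivative found above for the integrand x^2 sin(3x)
    return (-x**2*math.cos(3*x)/3 + 2*x*math.sin(3*x)/9
            + 2*math.cos(3*x)/27)

def integrand(x):
    return x**2 * math.sin(3*x)

# F'(x) should equal the integrand; compare with a central difference
for x in (0.3, 1.0, 2.7):
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2*h)
    assert abs(deriv - integrand(x)) < 1e-6
```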
Example ∫ x (x² + 4)^{1/2} dx. Differentiation of the guess (x² + 4)^{3/2} gives
d/dx (x² + 4)^{3/2} = 3x (x² + 4)^{1/2} ,
three times the integrand, so that
∫ x (x² + 4)^{1/2} dx = (1/3) (x² + 4)^{3/2} + c .
Example ∫ 1/((x² + 1)(x² + 4)) dx. Instead of using partial fractions, let us try to split the integrand as
1/(x² + 1) − 1/(x² + 4) .
We see that this is off by a factor. The correct formula is
1/((x² + 1)(x² + 4)) = (1/3) [1/(x² + 1) − 1/(x² + 4)] .
Then
∫ 1/((x² + 1)(x² + 4)) dx = (1/3) tan⁻¹ x − (1/6) tan⁻¹ (x/2) + c .
Sometimes one can guess the splitting twice, as in the following case.
Example ∫ 1/(x²(1 − x²)) dx. Here
1/(x²(1 − x²)) = 1/x² + 1/(1 − x²) = 1/x² + 1/((1 − x)(1 + x)) = 1/x² + (1/2)·1/(1 − x) + (1/2)·1/(1 + x) .
Then
∫ 1/(x²(1 − x²)) dx = −1/x − (1/2) ln(1 − x) + (1/2) ln(1 + x) + c .

1.2 Background
Suppose we need to find a function y(x) so that
y′(x) = x .
This is a differential equation, because it involves a derivative of the unknown function. This is a first order equation, as it only involves the first derivative. Its solution is, of course,
(2.1)  y(x) = x²/2 + c ,
where c is an arbitrary constant. To solve the initial value problem
(2.2)  y′(x) = x ,  y(0) = 5 ,
we begin with the general solution given in formula (2.1), and then evaluate it at x = 0:
y(0) = c = 5 .
So that c = 5, and the solution of the problem (2.2) is
(2.3)  y(x) = x²/2 + 5 .
1.2.1 Linear Equations
Recall that the derivative of ∫ p(x) dx is p(x). Consider the equation
(2.4)  y′ + p(x) y = g(x) ,
where p(x) and g(x) are given functions. This is a linear equation, as we have a linear function of y and y′. Because we know p(x), we can calculate the function
μ(x) = e^{∫ p(x) dx} ,
called the integrating factor, with the derivative
(2.5)  μ′(x) = p(x) e^{∫ p(x) dx} = p(x) μ(x) .
Multiplying the equation (2.4) by μ(x) gives
(2.6)  μ y′ + μ y p(x) = μ g(x) .
Let us use the product rule and the formula (2.5) to calculate the derivative
d/dx [μ y] = μ y′ + μ′ y = μ y′ + μ y p(x) .
So that we may rewrite (2.6) in the form
(2.7)  d/dx [μ y] = μ g(x) .
Example Solve
y′ + 2xy = x ,  y(0) = 2 .
Here p(x) = 2x, and g(x) = x. Compute
μ(x) = e^{∫ 2x dx} = e^{x²} .
The equation (2.7) takes the form
d/dx [e^{x²} y] = x e^{x²} .
Integrate both sides, and then perform integration by a substitution u = x² (or use guess-and-check):
e^{x²} y = ∫ x e^{x²} dx = (1/2) e^{x²} + c ;
y(x) = 1/2 + c e^{−x²} .
From the initial condition
y(0) = 1/2 + c = 2 ,
so that c = 3/2. Answer: y(x) = 1/2 + (3/2) e^{−x²} .
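The answer can be checked directly: substitute it back into the equation and the initial condition. A small numerical verification (Python as a calculator, with our own function name):

```python
import math

def y(x):
    # Answer found above: y = 1/2 + (3/2) e^(-x^2)
    return 0.5 + 1.5*math.exp(-x*x)

assert y(0.0) == 2.0                        # initial condition y(0) = 2
for x in (-1.0, 0.7, 2.0):
    h = 1e-6
    dy = (y(x + h) - y(x - h)) / (2*h)      # numerical y'(x)
    assert abs(dy + 2*x*y(x) - x) < 1e-6    # y' + 2xy = x
```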
Example Solve
y′ + (1/t) y = cos 2t ,  y(π/2) = 1 .
Here the independent variable is t, y = y(t), but the method is, of course, the same. Compute
μ(t) = e^{∫ (1/t) dt} = e^{ln t} = t ,
and then
d/dt [t y] = t cos 2t .
Integrate both sides, and perform integration by parts:
t y = ∫ t cos 2t dt = (1/2) t sin 2t + (1/4) cos 2t + c ;
y(t) = (1/2) sin 2t + (1/4) (cos 2t)/t + c/t .
From the initial condition
(1/2) sin π + (1/4) (cos π)/(π/2) + c/(π/2) = 1 ,
which gives c = π/2 + 1/4, and so
(2.8)  y(t) = (1/2) sin 2t + (1/4) (cos 2t)/t + (π/2 + 1/4)/t .
This function y(t) gives us a curve, called the integral curve. The initial condition tells us that y = 1 when t = π/2, so that the point (π/2, 1) lies on the integral curve. What is the maximal interval on which the solution (2.8) is valid? I.e., starting with the initial point t = π/2, how far can we continue the solution to the left and to the right of the initial point? We see from (2.8) that the maximal interval is (0, ∞). At t = 0, the solution y(t) is undefined.
Example Solve
x dy/dx + 2y = sin x ,  y(−π) = 2 .
Here the equation is not in the form (2.4), for which the theory applies. We divide the equation by x:
dy/dx + (2/x) y = (sin x)/x .
Now the equation is in the right form, with p(x) = 2/x and g(x) = (sin x)/x. As before, we compute, using the properties of logarithms,
μ(x) = e^{∫ (2/x) dx} = e^{2 ln x} = e^{ln x²} = x² .
And then
d/dx [x² y] = x² (sin x)/x = x sin x .
Integrating both sides (by parts on the right),
x² y = ∫ x sin x dx = −x cos x + sin x + c ;
y(x) = −(cos x)/x + (sin x)/x² + c/x² .
From the initial condition
−1/π + c/π² = 2 .
Solve for c:
c = 2π² + π .
Answer: y(x) = −(cos x)/x + (sin x)/x² + (2π² + π)/x². This solution is valid on the interval (−∞, 0) (that is how far it can be continued to the left and to the right, starting from the initial point x = −π).
Example Solve
dy/dx = 1/(y − x) ,  y(1) = 0 .
We have a problem: not only is this equation not in the right form, this is a nonlinear equation, because 1/(y − x) is not a linear function of y. We need a little trick. Let us pretend that dy and dx are numbers, and take reciprocals of both sides of the equation, getting
dx/dy = y − x ,
or
dx/dy + x = y .
This is a linear equation for the inverse function x = x(y), with the integrating factor μ(y) = e^{∫ dy} = e^y. Integrating,
e^y x = ∫ y e^y dy = y e^y − e^y + c ,
or
x(y) = y − 1 + c e^{−y} .
To find c, we need an initial condition. The original initial condition tells us that y = 0 for x = 1. For the inverse function x(y) this translates to x(0) = 1. So that c = 2.
Answer: x(y) = y − 1 + 2e^{−y} (see Figure 1.1).

Figure 1.1: The integral curve x = y − 1 + 2e^{−y}, with the initial point marked

The rigorous justification of this method is based on the formula for the derivative of an inverse function, that we recall next. Let y = y(x) be some function, and y₀ = y(x₀). Let x = x(y) be its inverse function. Then x₀ = x(y₀), and we have
dx/dy (y₀) = 1 / (dy/dx (x₀)) .
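As a quick sanity check on the answer, one can verify numerically that the curve x = y − 1 + 2e^{−y} passes through the initial point and satisfies dx/dy = y − x (Python used only as a calculator):

```python
import math

def x_of_y(y):
    # The integral curve found above
    return y - 1 + 2*math.exp(-y)

# Initial point: y = 0 corresponds to x = 1
assert x_of_y(0) == 1.0

# dx/dy = y - x along the curve (central-difference check)
for y in (-0.5, 0.0, 1.3):
    h = 1e-6
    dxdy = (x_of_y(y + h) - x_of_y(y - h)) / (2*h)
    assert abs(dxdy - (y - x_of_y(y))) < 1e-8
```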
1.3 Separable Equations
Background
Suppose we have a function F (y), and y in turn depends on x, y = y(x). So
that, in effect, F depends on x. To differentiate F with respect to x, we use
the chain rule from Calculus:
d/dx F(y(x)) = F′(y(x)) dy/dx .
The Method
Suppose we are given two functions F(y) and G(x), and let us use the corresponding lower case letters to denote their derivatives, so that F′(y) = f(y) and G′(x) = g(x). Our goal is to solve the following equation (to find the general solution):
(3.1)  f(y) dy/dx = g(x) .
This is a nonlinear equation. We begin by rewriting this equation, using the upper case functions:
F′(y) dy/dx = G′(x) .
By the chain rule, this says that d/dx F(y(x)) = d/dx G(x), so that F(y(x)) = G(x) + c. In practice one separates the variables and writes this as
∫ f(y) dy = ∫ g(x) dx ,
Example Solve
dy/dx = x(1 + y²) .
To separate the variables, we multiply by dx, and divide by (1 + y²):
∫ dy/(1 + y²) = ∫ x dx ;
tan⁻¹ y = x²/2 + c ,  which gives y(x) = tan(x²/2 + c).
Example Solve
(xy² + x) dx + e^x dy = 0 .
Writing the left side as x(y² + 1) dx + e^x dy, we separate the variables:
∫ dy/(y² + 1) = −∫ x e^{−x} dx ;
tan⁻¹ y = x e^{−x} + e^{−x} + c .
Answer: y(x) = tan(x e^{−x} + e^{−x} + c) .
Recall that, for a definite integral with a variable upper limit,
d/dx ∫_a^x f(t) dt = f(x) ,
which allows us to work with integrals that are not elementary.
Example Solve
dy/dx = e^{x²} y² ,  y(1) = 2 .
Separation of variables
∫ dy/y² = ∫ e^{x²} dx
gives on the right an integral that cannot be evaluated in elementary functions. We shall change it to a definite integral, as above. We choose a = 1, because the initial condition was given at x = 1:
∫ dy/y² = ∫_1^x e^{t²} dt + c ;
−1/y = ∫_1^x e^{t²} dt + c .
From the initial condition, c = −1/2, so that
y(x) = 1 / (1/2 − ∫_1^x e^{t²} dt) .
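Although the integral here is not elementary, the formula is perfectly usable numerically. A small check (with a midpoint-rule quadrature; Python used as a calculator) that the formula satisfies both the equation and the initial condition:

```python
import math

def I(x, n=2000):
    # Midpoint rule for the non-elementary integral from 1 to x of e^(t^2) dt
    h = (x - 1.0) / n
    return h * sum(math.exp((1.0 + (k + 0.5)*h)**2) for k in range(n))

def y(x):
    # Solution found above: y = 1 / (1/2 - integral)
    return 1.0 / (0.5 - I(x))

assert abs(y(1.0) - 2.0) < 1e-12            # initial condition y(1) = 2
x, h = 1.1, 1e-5
dydx = (y(x + h) - y(x - h)) / (2*h)        # numerical derivative
assert abs(dydx - math.exp(x*x) * y(x)**2) < 1e-2   # y' = e^(x^2) y^2
```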
1.3.1 Problems
I. Integrate by Guess-and-Check.
1. ∫ x e^{5x} dx.
2. ∫ x sin 3x dx.
3. ∫ x e^{x/2} dx.
4. ∫ x cos 2x dx.  Answer. (1/2) x sin 2x + (1/4) cos 2x + c.
5. ∫ x/√(x² + 1) dx.  Answer. √(x² + 1) + c.
6. ∫ x/((x² + 1)(x² + 2)) dx.  Answer. (1/2) ln(x² + 1) − (1/2) ln(x² + 2) + c.
7. ∫ 1/((x² + 1)(x² + 9)) dx.  Answer. (1/8) tan⁻¹ x − (1/24) tan⁻¹ (x/3) + c.
8. ∫ (ln x)⁵/x dx.  Answer. (1/6)(ln x)⁶ + c.
9. ∫ x² e^{x³} dx.  Answer. (1/3) e^{x³} + c.
10. ∫ e^{2x} sin 3x dx.  Answer. e^{2x} ((2/13) sin 3x − (3/13) cos 3x) + c.
Hint: Look for the antiderivative in the form A e^{2x} sin 3x + B e^{2x} cos 3x, and determine the constants A and B by differentiation.
II. Find the general solution of the linear problems.
1. y′ + (1/x) y = cos x.  Answer. y = sin x + (cos x)/x + c/x.
2. xy′ + 2y = e^{−x}.  Answer. y = c/x² − (x + 1)e^{−x}/x².
3. x⁴y′ + 3x³y = x²e^x.  Answer. y = c/x³ + (x − 1)e^x/x³.
4. dy/dx = 2x(x² + y).  Answer. y = ce^{x²} − x² − 1.
5. xy′ − 2y = xe^{1/x}.  Answer. y = cx² − x²e^{1/x}.
6. y′ + 2y = sin 3x.  Answer. y = ce^{−2x} + (2/13) sin 3x − (3/13) cos 3x.
7. x(yy′ − 1) = y².
Hint: Set v = y². Then v′ = 2yy′, and one obtains a linear equation for v = v(x).
Answer. y² = −2x + cx².
III. Find the solution of the initial value problem, and state the maximum interval on which this solution is valid.
1. y′ + (1/x) y = cos x, y(π/2) = 1.  Answer. y = (cos x + x sin x)/x; (0, ∞).
2. xy′ + (2 + x)y = 1, y(−2) = 0.  Answer. y = 1/x − 1/x² + 3e^{−x−2}/x²; (−∞, 0).
3. x(y′ − y) = e^x, y(1) = 1/e.
4. (t + 2) dy/dt + y = 5, y(1) = 1.  Answer. y = (5t − 2)/(t + 2); (−2, ∞).
5. ty′ − 2y = t⁴ cos t, y(π/2) = 0.
6. Show that the solution of y′ + a(x)y = f(x) may be written as
y(x) = (∫_{x₀}^x μ(t) f(t) dt + c) / μ(x) ,  where μ(x) = e^{∫_{x₀}^x a(t) dt} .
IV. Solve the separable equations.
1. dy/dx = 2/(x(y³ + 1)).  Answer. y⁴/4 + y − 2 ln x = c.
2. e^x dx − y dy = 0, y(0) = 1.  Answer. y = √(2e^x − 1).
3. (x²y² + y²) dx − yx dy = 0.  Answer. y = e^{x²/2 + ln x + c} = c x e^{x²/2}.
5. (y − xy + x − 1) dx + x dy = 0, y(1) = 0.  Answer. y = (ex − e^x)/(ex).
6. y′ = e^{x²} y, y(2) = 1.  Answer. y = e^{∫_2^x e^{t²} dt}.
7. y′ = xy² + xy, y(0) = 2.  Answer. y = 2e^{x²/2}/(3 − 2e^{x²/2}).
1.4 Some Special Equations
1.4.1 Homogeneous Equations
Let f(t) be a given function. If we set here t = y/x, we obtain a function f(y/x), which is a function of the two variables x and y, but it depends on them in a special way. One calls functions of the form f(y/x) homogeneous. For example,
(y − 4x)/(x − y)
is a homogeneous function, because we can put it into the form
(y − 4x)/(x − y) = (y/x − 4)/(1 − y/x) ,
so that here f(t) = (t − 4)/(1 − t). Our goal is to solve the homogeneous equation
(4.1)  dy/dx = f(y/x) .
Set v = y/x. Since y is a function of x, the same is true of v = v(x). Solving for y, y = xv, we express by the product rule
dy/dx = v + x dv/dx .
Switching to v in (4.1) gives
v + x dv/dx = f(v) .
This is a separable equation! Indeed, after taking v to the right, we can separate the variables:
(4.2)  ∫ dv/(f(v) − v) = ∫ dx/x .
After solving this equation for v(x), we can express the original unknown y = xv(x).
In practice, one should try to remember the formula (4.2).
Example Solve
dy/dx = (x² + 3y²)/(2xy) ,  y(1) = −2 .
Set v = y/x, or y = xv. Observing that x/y = 1/v, we have
v + x dv/dx = (1/2)(1/v) + (3/2) v .
Simplify:
x dv/dx = (1/2)(1/v) + (1/2) v = (1 + v²)/(2v) .
Separating the variables gives
∫ 2v/(1 + v²) dv = ∫ dx/x .
We now obtain the solution, by doing the following steps (observe that ln c is another way to write an arbitrary constant):
ln(1 + v²) = ln x + ln c = ln cx ;
1 + v² = cx ;
v = ±√(cx − 1) ;
y(x) = xv = ±x√(cx − 1) .
From the initial condition
y(1) = ±√(c − 1) = −2 .
It follows that we need to select minus, and c = 5.
Answer: y(x) = −x√(5x − 1) .
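The answer is again easy to verify by substitution into the equation and the initial condition; a quick numerical check (Python as a calculator):

```python
import math

def y(x):
    # The answer found above, y = -x sqrt(5x - 1), defined for x >= 1/5
    return -x * math.sqrt(5*x - 1)

assert y(1.0) == -2.0                       # initial condition y(1) = -2
for x in (0.5, 1.0, 2.0):
    h = 1e-6
    dy = (y(x + h) - y(x - h)) / (2*h)      # numerical y'(x)
    rhs = (x**2 + 3*y(x)**2) / (2*x*y(x))
    assert abs(dy - rhs) < 1e-6
```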
There is a more general definition: a function f(x, y) is called homogeneous if
f(tx, ty) = f(x, y)  for all t > 0 .
Setting t = 1/x, we see that
f(x, y) = f(tx, ty) = f(1, y/x) ,
so that f(x, y) is a function of y/x, and the old definition applies.
Example Solve
dy/dx = y/(x + √(xy)) ,  with x > 0 .
It is more straightforward to use the new definition to verify that the function f(x, y) = y/(x + √(xy)) is homogeneous. For all t > 0, we have
f(tx, ty) = (ty)/(tx + √((tx)(ty))) = y/(x + √(xy)) = f(x, y) .
Substituting y = xv gives
v + xv′ = xv/(x + √(x·xv)) = v/(1 + √v) .
We proceed to separate the variables:
x dv/dx = v/(1 + √v) − v = −v^{3/2}/(1 + √v) ;
∫ (1 + √v)/v^{3/2} dv = −∫ dx/x ;
−2v^{−1/2} + ln v = −ln x + c .
The integral on the left was evaluated by performing division, and splitting it into two pieces. Finally, we replace v by y/x:
−2√(x/y) + ln(y/x) = −ln x + c ;
−2√(x/y) + ln y = c .
We have obtained an implicit representation of a family of solutions.
1.4.2 The Logistic Population Model
Let y(t) denote the number of rabbits on a tropical island at time t. The
simplest model of population growth is
y′ = ay ,  y(0) = y₀ .
Here a > 0 is a given constant. This model assumes that initially the number
of rabbits was equal to some number y0 > 0, while the rate of change of
population, given by y 0 (t), is proportional to the number of rabbits. The
population of rabbits grows, which results in a faster and faster rate of
growth. One expects an explosive growth. Indeed, solving the equation, we
get
y(t) = c e^{at} .
From the initial condition y(0) = c = y₀, which gives us y(t) = y₀ e^{at}, an
exponential growth. This is the notorious Malthusian model of population
growth. Is it realistic? Yes, sometimes, for a limited time. If the initial
number of rabbits y0 is small, then for a while their number may grow
exponentially.
A more realistic model, which may be used for a long time, is the logistic
model:
y′ = ay − by² ,  y(0) = y₀ .
To solve it, set v(t) = 1/y(t). By the generalized power rule, v′ = −y^{−2} y′, so that we can rewrite the last equation as
v′ = −av + b ,
or
v′ + av = b .
This is a linear equation for v(t)! To solve it, we follow the familiar steps, and then we return to the original unknown function y(t):
μ(t) = e^{∫ a dt} = e^{at} ;
d/dt [e^{at} v] = b e^{at} ;
e^{at} v = b ∫ e^{at} dt = (b/a) e^{at} + c ;
v = b/a + c e^{−at} ;
y(t) = 1/v = 1/(b/a + c e^{−at}) .
From the initial condition
y(0) = 1/(b/a + c) = y₀ ;
c = 1/y₀ − b/a ;
y(t) = 1/(b/a + (1/y₀ − b/a) e^{−at}) .
The problem is solved. Observe that lim_{t→+∞} y(t) = a/b, no matter what initial value y₀ we take. The number a/b is called the carrying capacity. It tells us the number of rabbits, in the long run, that our island will support. A typical solution curve, called the logistic curve, is given in Figure 1.2.
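The limiting behavior is easy to see numerically from the solution formula; a small check (the values a = 2, b = 1, y₀ = 0.1 are our own illustrative choices):

```python
import math

def logistic(t, a=2.0, b=1.0, y0=0.1):
    # y(t) = 1 / (b/a + (1/y0 - b/a) e^(-a t)), derived above
    return 1.0 / (b/a + (1.0/y0 - b/a) * math.exp(-a*t))

assert abs(logistic(0.0) - 0.1) < 1e-12      # initial value y(0) = y0
assert abs(logistic(40.0) - 2.0) < 1e-9      # approaches a/b = 2
# The logistic equation itself: y' = a y - b y^2
t, h = 1.0, 1e-6
dy = (logistic(t + h) - logistic(t - h)) / (2*h)
assert abs(dy - (2.0*logistic(t) - logistic(t)**2)) < 1e-6
```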
1.4.3 Bernoulli's Equation
Example Solve
y′ = y + t/√y .
We let v(t) = y^{3/2}, v′(t) = (3/2) y^{1/2} y′, obtaining a linear equation for v, which we solve as usual:
(2/3) v′ = v + t ;
v′ − (3/2) v = (3/2) t ;
μ(t) = e^{−∫ (3/2) dt} = e^{−(3/2)t} ;
d/dt [e^{−(3/2)t} v] = (3/2) t e^{−(3/2)t} ;
e^{−(3/2)t} v = ∫ (3/2) t e^{−(3/2)t} dt = −t e^{−(3/2)t} − (2/3) e^{−(3/2)t} + c ;
v = −t − 2/3 + c e^{(3/2)t} .
Returning to the original variable y, we have the answer: y = (−t − 2/3 + c e^{(3/2)t})^{2/3} .
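A numerical substitution check of this answer (Python as a calculator; the value c = 1 is our arbitrary choice, and the point t = 1 lies where y > 0):

```python
import math

c = 1.0  # an arbitrary constant; c = 1 chosen for the check

def y(t):
    v = -t - 2.0/3.0 + c*math.exp(1.5*t)    # v = y^(3/2)
    return v ** (2.0/3.0)

# Verify y' = y + t/sqrt(y) with a central difference
t, h = 1.0, 1e-6
dy = (y(t + h) - y(t - h)) / (2*h)
assert abs(dy - (y(t) + t/math.sqrt(y(t)))) < 1e-6
```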
For z = z(t) we obtain the linear equation
z′ − 2(1 − t) z = 1 ,
with the integrating factor
μ(t) = e^{−∫ 2(1−t) dt} = e^{t²−2t} ;
d/dt [e^{t²−2t} z] = e^{t²−2t} ;
e^{t²−2t} z = ∫ e^{t²−2t} dt .
The last integral cannot be evaluated through elementary functions (Mathematica can evaluate it through a special function, called Erfi). So we leave this integral unevaluated. Then we get z from the last formula, after which we express v, and finally y. We have obtained a family of solutions:
y(t) = t + 1 + e^{t²−2t} / ∫ e^{t²−2t} dt .
(The usual arbitrary constant c is now inside the integral.) Another solution: y = t + 1 (corresponding to v = 0).
Example Solve
(4.3)  y′ + 2y² = 6/t² .
We look for a particular solution in the form y(t) = a/t. Plugging this in, we determine a = 2, so that p(t) = 2/t is a particular solution (a = −3/2 is also a possibility). The substitution y(t) = 2/t + v(t) produces Bernoulli's equation
v′ + (8/t) v + 2v² = 0 .
Solving it, we obtain v(t) = 7/(ct⁸ − 2t), and v = 0. Solutions: y(t) = 2/t + 7/(ct⁸ − 2t), and also y = 2/t.
Let us outline an alternative approach to the last problem. Set y = 1/z, where z = z(t) is a new unknown function. Plugging this into (4.3), then clearing the denominators, we have
−z′/z² + 2/z² = 6/t² ;
Example Solve
y′² + y² = 1 ,  y(0) = 1 .
This equation is not solved for the derivative y′(x). Solving for y′(x), and then separating the variables, one may indeed find the solution. Instead, let us assume that
y′(x) = sin t ,
where t is a parameter. From the equation,
y = √(1 − sin² t) = √(cos² t) = cos t .
Then
dx = dy/y′(x) = (−sin t dt)/(sin t) = −dt ,
so that dx/dt = −1, which gives
x = −t + c .
We have obtained the solution in parametric form: x = −t + c, y = cos t, i.e., y = cos(c − x) (observe from the equation that y ≥ 0). This problem admits another solution: y = 1.
For the equation
y′⁵ + y′ = x
we do not have the option of solving for y′(x). Parametric integration appears to be the only way to solve it. We let y′(x) = t, so that from the equation, x = t⁵ + t, and dx = (5t⁴ + 1) dt. Then
dy = y′(x) dx = t(5t⁴ + 1) dt ,
so that dy/dt = t(5t⁴ + 1), which gives y = (5/6)t⁶ + (1/2)t² + c. We obtained a family of solutions in parametric form:
x = t⁵ + t ,  y = (5/6)t⁶ + (1/2)t² + c .
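A parametric family like this is checked by computing the slope dy/dx = (dy/dt)/(dx/dt), which should equal the parameter t, and substituting it into the original equation; a small numerical confirmation:

```python
# Check x = t^5 + t, y = (5/6) t^6 + (1/2) t^2 + c:
# dy/dx = (dy/dt)/(dx/dt) = t, so (dy/dx)^5 + dy/dx must equal x.
for t in (-1.0, 0.5, 2.0):
    dxdt = 5*t**4 + 1
    dydt = t*(5*t**4 + 1)
    slope = dydt / dxdt          # this is y'(x), and it equals t
    x = t**5 + t
    assert abs(slope**5 + slope - x) < 1e-12
```

The constant c plays no role here, since only derivatives enter the equation.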
1.4.6 Some Applications
Example Find all decreasing functions y = f(x) such that the triangle formed by the tangent line at any point (x₀, f(x₀)), the line x = x₀, and the x-axis has the same area a > 0.

The triangle formed by the tangent line, the line x = x₀, and the x-axis

The tangent line at (x₀, f(x₀)) crosses the x-axis at x₁ = x₀ − f(x₀)/f′(x₀). It follows that the horizontal side of our triangle is −f(x₀)/f′(x₀), while the vertical side is f(x₀). The area of this right triangle is then
−(1/2) f²(x₀)/f′(x₀) = a .
(Observe that f′(x₀) < 0, so that the area is positive.) The point x₀ was arbitrary, so that we replace it by x, and then we replace f(x) by y, and f′(x) by y′:
−(1/2) y²/y′ = a ;  or  −y′/y² = 1/(2a) .
We solve this differential equation by taking the antiderivatives of both sides:
1/y = (1/(2a)) x + c .
Answer: y(x) = 2a/(x + 2ac) .
.
x
Example A tank holding 10L (liters) is originally filled with water. A saltwater mixture is pumped into the tank at a rate of 2L per minute. This
mixture contains 0.3 kg of salt per liter. The excess fluid is flowing out of
the tank at the same rate (2L per minute). How much salt does the tank
contain after 4 minutes?
This is a family of hyperbolas. One of them is y =
Let t be the time (in minutes) since the mixture started flowing, and let y(t) denote the amount of salt in the tank at time t. The derivative y′(t) gives the rate of change of salt per minute, and it is equal to the difference between the rate at which salt flows in and the rate it flows out. The salt is pumped in at a rate of 0.6 kg per minute. The density of salt at time t is y(t)/10 (so that each liter of the solution in the tank contains y(t)/10 kg of salt). Then the salt flows out at the rate 2·y(t)/10 = 0.2 y(t) kg/min. The difference of these two rates is y′(t), so that
y′ = 0.6 − 0.2y .
This is a linear differential equation. Initially there was no salt in the tank, so that y(0) = 0 is our initial condition. Solving this equation together with the initial condition, we have y(t) = 3 − 3e^{−0.2t}. After 4 minutes, we have y(4) = 3 − 3e^{−0.8} ≈ 1.65 kg of salt in the tank.
Now suppose a patient has alcohol poisoning, and doctors are pumping
in water to flush his stomach out. One can compute similarly the weight of
poison left in the stomach at time t.
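Mixing problems like these are easy to double-check numerically once the solution formula is in hand; for the salt tank above (Python as a calculator):

```python
import math

def salt(t):
    # y' = 0.6 - 0.2 y, y(0) = 0  =>  y(t) = 3 - 3 e^(-0.2 t)
    return 3.0 - 3.0*math.exp(-0.2*t)

assert salt(0.0) == 0.0                      # tank starts with no salt
assert abs(salt(4.0) - 1.65) < 0.01          # about 1.65 kg after 4 minutes
# The ODE itself: y'(t) = 0.6 - 0.2 y(t)
t, h = 2.0, 1e-6
dy = (salt(t + h) - salt(t - h)) / (2*h)
assert abs(dy - (0.6 - 0.2*salt(t))) < 1e-8
```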
1.5 Exact Equations
An exact equation has the form
(5.1)  M(x, y) + N(x, y) dy/dx = 0 .
Here the functions M(x, y) and N(x, y) are given. In the above example, M = y² and N = 2xy.
Definition The equation (5.1) is called exact if there is a function ψ(x, y), with continuous derivatives up to second order, so that we can rewrite (5.1) in the form
(5.2)  d/dx ψ(x, y(x)) = 0 .
The solution of the exact equation is (c is an arbitrary constant)
(5.3)  ψ(x, y) = c .
There are two natural questions: when is the equation (5.1) exact, and if it is exact, how does one find ψ(x, y)?
Theorem 1.5.1 Assume that the functions M(x, y), N(x, y), M_y(x, y) and N_x(x, y) are continuous in some disc D: (x − x₀)² + (y − y₀)² < r², around some point (x₀, y₀). Then the equation (5.1) is exact in D if and only if the following partial derivatives are equal:
(5.4)  M_y(x, y) = N_x(x, y) .
This theorem says two things: if the equation is exact, then the partials are equal; and conversely, if the partials are equal, then the equation is exact.
Proof: 1. Assume that the equation (5.1) is exact, so that it can be written in the form (5.2). Performing the differentiation in (5.2), we write it as
ψ_x + ψ_y y′ = 0 .
But this is the same equation as (5.1), so that
ψ_x = M ,  ψ_y = N .
Taking the second partials,
ψ_xy = M_y ,  ψ_yx = N_x ,
and since ψ_xy = ψ_yx, we conclude that M_y = N_x.
2. Conversely, assume that M_y = N_x. To find ψ(x, y), we need to solve
(5.5)  ψ_x = M(x, y) ,  ψ_y = N(x, y) .
Take the antiderivative in x of the first equation:
ψ(x, y) = ∫ M(x, y) dx + h(y) ,
where h(y) is an arbitrary function of y. Plugging this into the second equation gives
h′(y) = N(x, y) − ∂/∂y ∫ M(x, y) dx ≡ p(x, y) .
Observe that we denoted by p(x, y) the right side of the last equation. It turns out that p(x, y) does not really depend on x! Indeed, taking the partial derivative in x,
∂/∂x p(x, y) = N_x − M_y = 0 .
The partials are not the same, so this equation is not exact, and our theory does not apply.
Example Solve (for x > 0)
(y/x + 6x) dx + (ln x − 2) dy = 0 .
Here M(x, y) = y/x + 6x and N(x, y) = ln x − 2. Compute
M_y = 1/x = N_x ,
and so the equation is exact. To find ψ(x, y), we observe that the equations (5.5) take the form
ψ_x = y/x + 6x ,  ψ_y = ln x − 2 .
Take the antiderivative in x of the first equation:
ψ(x, y) = y ln x + 3x² + h(y) ,
where h(y) is an arbitrary function of y. Plug this ψ(x, y) into the second equation:
ψ_y = ln x + h′(y) = ln x − 2 ,
which gives
h′(y) = −2 .
Integrating, h(y) = −2y, and so ψ(x, y) = y ln x + 3x² − 2y, giving us the solution
y ln x + 3x² − 2y = c ,
which may be solved for y: y = (c − 3x²)/(ln x − 2). Observe that when we were solving for h(y), we chose the integration constant to be zero, because at the next step we set ψ(x, y) equal to c, an arbitrary constant.
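Both the exactness test M_y = N_x and the claim that ψ reproduces M and N can be confirmed numerically; a small check for the example above (Python as a calculator, using central differences for the partial derivatives):

```python
import math

def M(x, y): return y/x + 6*x
def N(x, y): return math.log(x) - 2.0
def psi(x, y): return y*math.log(x) + 3*x**2 - 2*y

h = 1e-6
for (x, y) in ((1.5, 2.0), (3.0, -1.0)):
    # Exactness test: M_y = N_x
    My = (M(x, y + h) - M(x, y - h)) / (2*h)
    Nx = (N(x + h, y) - N(x - h, y)) / (2*h)
    assert abs(My - Nx) < 1e-6
    # psi_x = M and psi_y = N
    assert abs((psi(x + h, y) - psi(x - h, y))/(2*h) - M(x, y)) < 1e-5
    assert abs((psi(x, y + h) - psi(x, y - h))/(2*h) - N(x, y)) < 1e-6
```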
Example Find the value of b for which the equation
(y e^{2xy} + x) dx + b x e^{2xy} dy = 0
is exact, and then solve it. Here M = y e^{2xy} + x and N = b x e^{2xy}, so that
M_y = e^{2xy} + 2xy e^{2xy} ,  N_x = b e^{2xy} + 2bxy e^{2xy} ,
and these are equal exactly when b = 1. With b = 1, the equation
(y e^{2xy} + x) dx + x e^{2xy} dy = 0
is exact. Integrating ψ_y = x e^{2xy} in y gives
ψ(x, y) = (1/2) e^{2xy} + h(x) ,
and then ψ_x = y e^{2xy} + h′(x) = y e^{2xy} + x, so that h′(x) = x, h(x) = x²/2. Solution: e^{2xy} + x² = c.
1.6 Existence and Uniqueness of Solution
Example Solve
y′ = √y ,  y(0) = 0 .
The function f(x, y) = √y is continuous (for y ≥ 0), but its partial derivative in y, f_y(x, y) = 1/(2√y), is not even defined at the initial point (0, 0). The theorem does not apply. One checks that the function y = x²/4 solves our initial value problem (for x ≥ 0). But here is another solution: y = 0. (Having two different solutions of the same initial value problem is like having two primadonnas in the same theater.)
Observe that the theorem guarantees existence of solution only on some interval (it is not "happily ever after").
Example Solve for y = y(t)
y′ = y² ,  y(0) = 1 .
Here f(t, y) = y² and f_y(t, y) = 2y are continuous functions. The theorem applies. By separation of variables, we determine the solution y(t) = 1/(1 − t). As the time t approaches 1, this solution disappears, by going to infinity. This phenomenon is sometimes called blow up in finite time.
1.7 Numerical Solution by Euler's Method
These computations imply that y(0.05) ≈ 1.05, y(0.1) ≈ 1.11, and y(0.15) ≈ 1.18. If you need to approximate the solution on the interval (0, 0.4), you
have to make five more steps. Of course, it is better to program a computer.
Euler's method uses the tangent line approximation, or the first two terms of the Taylor series approximation. One can use more terms of the Taylor series, and develop more sophisticated methods (which is done in books on numerical methods). But here is a question: if it is so easy to compute numerical approximations of solutions, why bother learning analytical solutions? The reason is that we seek not just to solve a differential equation, but to understand it. What happens if the initial condition changes? The equation may include some parameters; what happens if they change? What happens to solutions in the long term?
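The tangent-line stepping described above is indeed easy to program. A minimal sketch (the test equation y′ = y, y(0) = 1 below is our own illustrative choice, not the equation from the text):

```python
def euler(f, x0, y0, h, steps):
    """Approximate the solution of y' = f(x, y), y(x0) = y0 by Euler's method."""
    x, y = x0, y0
    for _ in range(steps):
        y = y + h*f(x, y)    # tangent-line step: first two Taylor terms
        x = x + h
    return y

# Assumed test equation: y' = y, y(0) = 1, with exact solution e^x.
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
# approx is close to e = 2.71828..., with an error of order h
```

Halving the step h roughly halves the error, which is the first-order accuracy of the method.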
1.7.1 Problems
1. xy′ = y + 2x.  Answer. y = cx + 2x ln x.
2. dy/dx = (y² + 2x)/y.
3. Answer. y = (x + cx + x ln x)/(c + ln x).
4. dy/dx = (y² + 2xy)/x², y(1) = 2.  Answer. y = 2x²/(3 − 2x).
5. xy′ − y = x tan(y/x).  Answer. sin(y/x) = cx.
6. y′ = (x² + y²)/(xy), y(1) = −2.  Answer. y = −√(x² ln x² + 4x²).
7. y′ = (y + x^{1/2}y^{3/2})/(xy), with x > 0, y > 0.  Answer. 2√(y/x) = ln x + c.
1. y′ − (1/x) y = y², y(2) = −2.  Answer. y = 2x/(2 − x²).
2. dy/dx = (y² + 1)/(2y).  Answer. y = ±√(ce^x − 1).
3. y′ + x ∛y = 3y.  Answer. y = (x/3 + 1/6 + ce^{2x})^{3/2}, and y = 0.
4. y′ + xy = y³.
5. The equation
dy/dx = (y² + 2x)/y .
Solve the exact equations:
1. (2x + 3x²y) dx + (x³ − 3y²) dy = 0.  Answer. x² + x³y − y³ = c.
2. (2x + sin y) dx + (x cos y − y) dy = 0.  Answer. x² + x sin y − (1/2) y² = c.
3. (x/(x² + y⁴)) dx + (2y³/(x² + y⁴)) dy = 0.  Answer. x² + y⁴ = c.
4. Find a simpler solution for the preceding problem.
7. Find the value of b for which the following equation is exact, and then solve the equation, using that value of b:
(y e^{xy} + 2x) dx + b x e^{xy} dy = 0 .
Answer. b = 1, y = (1/x) ln(c − x²).
1. Use parametric integration to solve
y′³ + y′ = x .
Answer. x = t³ + t, y = (3/4)t⁴ + (1/2)t² + c.
2. Use parametric integration to solve
y = ln(1 + y′²) .
Answer. x = 2 tan⁻¹ t + c, y = ln(1 + t²). Another solution: y = 0.
3. Use parametric integration to solve
y′ + sin(y′) = x .
Answer. x = t + sin t, y = (1/2)t² + t sin t + cos t + c.
4. A tank has 100 L of water-salt mixture, which initially contains 10 kg of salt. Water is flowing in at a rate of 5 L per minute. The new mixture flows out at the same rate. How much salt remains in the tank after an hour?
Answer. 0.5 kg.
5. A tank has 100 L of water-salt mixture, which initially contains 10 kg of salt. A water-salt mixture is flowing in at a rate of 3 L per minute, and each liter of it contains 0.1 kg of salt. The new mixture flows out at the same rate. How much salt remains in the tank after t minutes?
Answer. 10 kg.
6. Water is being pumped into a patient's stomach at a rate of 0.5 L per minute to flush out 300 grams of alcohol poisoning. The excess fluid is flowing out at the same rate. The stomach holds 3 L. The patient can be discharged when the amount of poison drops to 50 grams. How long should this procedure last?
7. Find all curves y = f(x) with the following property: if you draw a tangent line at any point (x, f(x)) on this curve, and continue the tangent line until it intersects the x-axis, then the point of intersection is x/2.
Answer. y = cx².
8. Find all positive decreasing functions y = f(x) with the following property: in the triangle formed by the vertical line going down from the curve, the x-axis and the tangent line to this curve, the sum of the two sides adjacent to the right angle is constant, equal to a > 0.
Answer. y − a ln y = x + c.
10. Show that any solution y(x) of
y′ = (x²y² + 1)/(x²(y² + 1))
is unbounded as x → ∞.
Hint: Rewrite the equation as y′ = (y² + 1/x²)/(y² + 1). Assume, on the contrary, that y(x) is bounded when x is large. Then y′(x) exceeds a positive constant for all large x, and therefore y(x) tends to infinity, a contradiction (observe that 1/x² becomes negligible for large x).
11. Solve
x(y′ − e^y) + 2 = 0 .
Hint: Divide the equation by e^y, then set v = e^{−y}, obtaining a linear equation for v = v(x).
Answer. y = −ln(x + cx²).
12. Solve the integral equation
y(x) = ∫₀ˣ y(t) dt + x + 1 .
Is it good to have two solutions of the same initial value problem? What went wrong? (Why does the existence and uniqueness theorem not apply here?)
2. Find all y₀ for which the following problem has a unique solution:
y′ = x/(y² − 2x) ,  y(2) = y₀ .
3. Show that the function y = x|x|/4 solves the problem
y′ = √|y| ,  y(0) = 0 ,
for all x. Can you find another solution?
Hint: Consider separately the cases when x > 0, x < 0, and x = 0.
4. Show that the problem (here y = y(t))
y′ = y^{2/3} ,  y(0) = 0
has infinitely many solutions.
Hint: Consider y(t) that is equal to zero for t < a, and to (t − a)³/27 for t ≥ a, where a > 0 is any constant.
For the initial value problem
(8.1)  y′ = f(x, y) ,  y(x₀) = y₀ ,
we shall prove a more general existence and uniqueness theorem than the one stated before. We define a rectangular box B around the initial point (x₀, y₀) to be the set of points (x, y) satisfying x₀ − a ≤ x ≤ x₀ + a and y₀ − b ≤ y ≤ y₀ + b, for some positive a and b. It is known from Calculus that in case f(x, y) is continuous on B, it is bounded on B, so that for some constant M > 0
(8.2)  |f(x, y)| ≤ M on B.
We assume, additionally, that f(x, y) satisfies the Lipschitz condition
(8.3)  |f(x, y₂) − f(x, y₁)| ≤ L |y₂ − y₁|
for any two points (x, y₁) and (x, y₂) in B. Then the initial value problem (8.1) has a unique solution, which is defined for x on the interval (x₀ − b/M, x₀ + b/M) in case b/M < a, and on the interval (x₀ − a, x₀ + a) if b/M ≥ a.
Proof: Assume, for definiteness, that b/M < a. We shall prove the existence of solutions first, and we shall assume that x > x₀ (the case when x < x₀ is similar). Integrating the equation in (8.1) over the interval (x₀, x), we convert the initial value problem (8.1) into an equivalent integral equation
(8.4)  y(x) = y₀ + ∫_{x₀}^x f(t, y(t)) dt .
(If y(x) solves (8.4), then y(x₀) = y₀, and by differentiation y′ = f(x, y).) By (8.2), −M ≤ f(t, y(t)) ≤ M, and then any solution of (8.4) lies between two straight lines:
y₀ − M(x − x₀) ≤ y(x) ≤ y₀ + M(x − x₀) .
For x₀ ≤ x ≤ x₀ + b/M these lines stay in the box B. (In the other case, when b/M ≥ a, these lines stay in B for x₀ ≤ x ≤ x₀ + a.) We denote φ(x) = y₀ + M(x − x₀), and call this function a supersolution, while ψ(x) = y₀ − M(x − x₀) is called a subsolution.
1. The case when f(x, y) is increasing in y. Starting with the subsolution, define the iterates
y₁(x) = y₀ + ∫_{x₀}^x f(t, ψ(t)) dt ,  y₂(x) = y₀ + ∫_{x₀}^x f(t, y₁(t)) dt ,
and in general y_{n+1}(x) = y₀ + ∫_{x₀}^x f(t, yₙ(t)) dt. We claim that, for x₀ ≤ x ≤ x₀ + b/M, the following inequalities hold:
(8.5)  ψ(x) ≤ y₁(x) ≤ y₂(x) ≤ ⋯ ≤ yₙ(x) ≤ ⋯ .
Indeed,
y₂(x) = y₀ + ∫_{x₀}^x f(t, y₁(t)) dt ≥ y₀ + ∫_{x₀}^x f(t, ψ(t)) dt = y₁(x) ,
using that ψ(x) ≤ y₁(x), and the monotonicity of f(x, y). So that y₁(x) ≤ y₂(x), and the other inequalities in (8.5) are established similarly. Next, we claim that for all x in the interval x₀ < x ≤ x₀ + b/M, all of these iterates lie below the supersolution φ(x), so that we have
(8.6)  yₙ(x) ≤ φ(x)  for all n .
The bounded and increasing sequence yₙ(x) then converges to a limit y(x), and we may pass to the limit in
yₙ(x) = y₀ + ∫_{x₀}^x f(t, y_{n−1}(t)) dt ,
concluding that y(x) gives the desired solution of the integral equation (8.4).
2. The general case. Define g(x, y) = f(x, y) + Ay. If we choose the constant A large enough, then the new function g(x, y) will be increasing in y. Indeed, using our condition (8.3),
g(x, y₂) − g(x, y₁) = f(x, y₂) − f(x, y₁) + A(y₂ − y₁) ≥ −L(y₂ − y₁) + A(y₂ − y₁) > 0
for any two points (x, y₁) and (x, y₂) in B, provided that A > L, and y₂ > y₁. We now consider an equivalent equation
y′ + Ay = f(x, y) + Ay = g(x, y) .
Multiplying both sides by the integrating factor e^{Ax}, we put this equation into the form
d/dx [e^{Ax} y] = e^{Ax} g(x, y) .
Set z(x) = e^{Ax} y(x); then y(x) = e^{−Ax} z(x), and the new unknown function z(x) satisfies
(8.7)  z′ = e^{Ax} g(x, e^{−Ax} z) ,  z(x₀) = e^{Ax₀} y₀ ,
to which the case of increasing right-hand side applies.
Turning to the uniqueness, assume that y(x) and z(x) are two solutions of (8.1); then
z(x) = y₀ + ∫_{x₀}^x f(t, z(t)) dt .
Subtracting the corresponding equation (8.4) for y(x), and using the Lipschitz condition (8.3), we get, for x in [x₀, x₀ + 1/(2L)],
|y(x) − z(x)| ≤ ∫_{x₀}^x |f(t, y(t)) − f(t, z(t))| dt ≤ L ∫_{x₀}^x |y(t) − z(t)| dt
≤ L(x − x₀) max_{[x₀, x₀+1/(2L)]} |y(x) − z(x)| ≤ (1/2) max_{[x₀, x₀+1/(2L)]} |y(x) − z(x)| .
It follows that
max_{[x₀, x₀+1/(2L)]} |y(x) − z(x)| ≤ (1/2) max_{[x₀, x₀+1/(2L)]} |y(x) − z(x)| ,
which implies that max_{[x₀, x₀+1/(2L)]} |y(x) − z(x)| = 0, so that y(x) = z(x) on [x₀, x₀ + 1/(2L)].
Let x₁ = x₀ + 1/(2L). We have just proved that y(x) = z(x) on [x₀, x₀ + 1/(2L)], and in particular y(x₁) = z(x₁). Repeating (if necessary) the same argument on [x₁, x₁ + 1/(2L)], and so on, we will eventually conclude that y(x) = z(x) on (x₀, x₀ + b/M).
(8.8)    u(x) ≤ K + ∫_{x0}^{x} a(t) u(t) dt .

Then

u(x) ≤ K e^{∫_{x0}^{x} a(t) dt} , for x ≥ x0 .

Divide the inequality (8.8) by its right hand side (which is positive)

a(x) u(x) / ( K + ∫_{x0}^{x} a(t) u(t) dt ) ≤ a(x) .

Integrating both sides over (x0, x) (the numerator of the fraction on the left is equal to the derivative of its denominator), gives

ln ( K + ∫_{x0}^{x} a(t) u(t) dt ) − ln K ≤ ∫_{x0}^{x} a(t) dt ,

so that

K + ∫_{x0}^{x} a(t) u(t) dt ≤ K e^{∫_{x0}^{x} a(t) dt} ,

and then, using (8.8) once more, u(x) ≤ K e^{∫_{x0}^{x} a(t) dt}.
z′ = f(x, z) ,  z(x0) = z0 .
y(x) will remain close over any bounded interval (x0 , x0 + p). This is known
as continuous dependence of solutions, with respect to the initial condition.
We begin the proof of the claim by observing that z(x) satisfies

z(x) = z0 + ∫_{x0}^{x} f(t, z(t)) dt .

Subtracting this from the corresponding integral equation for y(x),

z(x) − y(x) = z0 − y0 + ∫_{x0}^{x} [ f(t, z(t)) − f(t, y(t)) ] dt ,

so that, by the Lipschitz condition,

|z(x) − y(x)| ≤ |z0 − y0| + ∫_{x0}^{x} L |z(t) − y(t)| dt .

By the Gronwall lemma, applied with u(x) = |z(x) − y(x)| and K = |z0 − y0|,

|z(x) − y(x)| ≤ |z0 − y0| e^{L (x − x0)} ,

which is small on any bounded interval, provided that |z0 − y0| is small.
Hint: Observe that u(x0) = 0. When t is close to x0, u(t) is small. But then u²(t) < u(t).
4. Show that if a function x(t) satisfies

0 ≤ dx/dt ≤ x² for all t, and x(0) = 0,

then x(t) ≡ 0.

Hint: Show that x(t) = 0 for t > 0. In case t < 0, introduce new variables y and s, by setting x = −y and t = −s, so that s > 0.
Chapter 2
2.1.1

µ(t) = e^{−∫ (1/t) dt} = e^{−ln t} = 1/t ;

(d/dt) [ (1/t) v ] = 1 ;

(1/t) v = t + c1 ;

y′ = v = t² + c1 t ;

y(t) = t³/3 + c1 t²/2 + c2 .

Here c1 and c2 are arbitrary constants.
y″ + 2x y′² = 0 .

Again, y is missing in the equation. By setting y′ = v, with y″ = v′, we obtain a first order equation. This time the equation for v(x) is not linear, but we can separate the variables. We have:

v′ + 2x v² = 0 ;

dv/dx = −2x v² .
dv/v² = −2x dx ;

−1/v = −x² − c1 ;

y′ = v = 1/(x² + c1) ;

y(x) = ∫ dx/(x² + c1) = (1/√c1) arctan (x/√c1) + c2 ,

in case c1 > 0. If the constant is negative, we write it as −c1², with a new c1. Then, by partial fractions,

1/(x² − c1²) = (1/(2c1)) [ 1/(x − c1) − 1/(x + c1) ] ,

and integration gives

y(x) = (1/(2c1)) ln |x − c1| − (1/(2c1)) ln |x + c1| + c4 .
2.1.2

y″ + y y′³ = 0 .

All three functions appearing in the equation are functions of x, but x itself is not present in the equation. On the curve y = y(x), the slope y′ is a function of x, but it is also a function of y. We set y′ = v(y), and v(y) will be our new unknown function. By the chain rule

y″(x) = (d/dx) v(y) = v′(y) (dy/dx) = v′ v ,
v v′ + y v³ = 0 , or v (v′ + y v²) = 0 .

If the first factor is zero, y′ = v = 0, we obtain a family of solutions y = c. Setting the second factor to zero

dv/dy + y v² = 0 ,

we have a separable equation. We solve it by separating the variables

dv/v² = −y dy ;

−1/v = −y²/2 − c1 , so that 1/v = (y² + 2c1)/2 ;

dy/dx = v = 2/(y² + 2c1) .

To find y(x) we need to solve another first order equation. Again, we separate the variables

∫ (y² + 2c1) dy = ∫ 2 dx ;

y³/3 + 2c1 y = 2x + c2 .
dy/dx = −y/(X − x) ,

so that X = x − y (dx/dy). The target moves along the x axis with speed a, so that dX/dt = a, and therefore dX/dy = a (dt/dy). Differentiating X = x − y (dx/dy) in y gives dX/dy = −y (d²x/dy²), which leads to

(1.2)    dt/dy = −(y/a) (d²x/dy²) .

On the other hand, v = ds/dt, and ds = √(dx² + dy²), so that dt = (1/v) ds = (1/v) √(dx² + dy²), and then

(1.3)    dt/dy = −(1/v) √( (dx/dy)² + 1 ) .

(Observe that dt/dy < 0, so that minus is needed in front of the square root.) Comparing (1.2) with (1.3), and writing x′(y) = dx/dy, x″(y) = d²x/dy², we arrive at the equation of the motion for the drone

y x″(y) = (a/v) √( x′²(y) + 1 ) .
(Figure: the pursuit curve y = f(x), showing the points c(x0, y0) and c(x, y), with the target moving along the x axis.)
Setting p(y) = x′(y), so that x″ = p′, the equation becomes y p′ = (a/v) √(p² + 1). Separating the variables,

∫ dp/√(p² + 1) = (a/v) ∫ dy/y ;

ln ( p + √(p² + 1) ) = (a/v) (ln y + ln c) .

We write this as

p + √(p² + 1) = c y^{a/v}  (with a new c) .

Then

(1.4)    √(p² + 1) = c y^{a/v} − p ,

and squaring both sides,

p² + 1 = c² y^{2a/v} − 2c y^{a/v} p + p² ,

which gives

p = (1/2) ( c y^{a/v} − (1/c) y^{−a/v} ) .

Assuming that p = 0 at the initial moment, when y = y0, we get c = y0^{−a/v}, so that

p = dx/dy = (1/2) [ (y/y0)^{a/v} − (y/y0)^{−a/v} ] .

Integrating in y, and using that x = x0 at y = y0,

x(y) = ( y0/(2(1 + a/v)) ) [ (y/y0)^{1 + a/v} − 1 ] − ( y0/(2(1 − a/v)) ) [ (y/y0)^{1 − a/v} − 1 ] + x0 .
2.2

(2.1)    a y″ + b y′ + c y = 0 ,

where a, b and c are given numbers. This is arguably the most important class of differential equations, because it arises when applying Newton's second law of motion (or when modeling electrical oscillations). If y(t) denotes the displacement of an object at time t, then this equation relates the displacement with the velocity y′(t) and the acceleration y″(t).
Observe that if y(t) is a solution, then so is 2y(t). Indeed, plug 2y(t) into the equation:

a (2y)″ + b (2y)′ + c (2y) = 2 (a y″ + b y′ + c y) = 0 .

The same argument shows that c1 y(t) is a solution for any constant c1. If y1(t) and y2(t) are two solutions, a similar argument will show that y1(t) + y2(t) and y1(t) − y2(t) are also solutions. More generally, a linear combination of two solutions, c1 y1(t) + c2 y2(t), is also a solution, for any constants c1 and c2. Indeed,

a (c1 y1(t) + c2 y2(t))″ + b (c1 y1(t) + c2 y2(t))′ + c (c1 y1(t) + c2 y2(t))
= c1 (a y1″ + b y1′ + c y1) + c2 (a y2″ + b y2′ + c y2) = 0 .

This is called the linear superposition property of solutions. The term homogeneous refers to the right hand side of this equation being zero. Homogeneous equations possessing the linear superposition property are called linear.
We now try to find a solution of the equation (2.1) in the form y = e^{rt}, where r is a constant to be determined. We have y′ = r e^{rt} and y″ = r² e^{rt}, so that plugging into the equation (2.1) gives

a r² + b r + c = 0 .

This is a quadratic equation for r, called the characteristic equation. If r is a root (solution) of this equation, then e^{rt} solves our differential equation (2.1). When solving a quadratic equation, it is possible to encounter two real roots, one (repeated) real root, or two complex conjugate roots. We shall look at these cases in turn.
2.2.1

The roots r1 and r2 are real, and r2 ≠ r1. Then e^{r1 t} and e^{r2 t} are two solutions, and their linear combination gives us the general solution

y(t) = c1 e^{r1 t} + c2 e^{r2 t} .

As we have two constants to play with, one can prescribe two additional conditions for the solution to satisfy.
Example Solve

y″ + 4y′ + 3y = 0 ,  y(0) = 2 ,  y′(0) = −1 .

We prescribe that at time zero the displacement is 2, and the velocity is −1. These two conditions are usually referred to as the initial conditions, and together with the differential equation, they form an initial value problem. The characteristic equation is

r² + 4r + 3 = 0 .

Solving it (say by factoring as (r + 1)(r + 3) = 0), we get its roots r1 = −1, and r2 = −3. The general solution is then

y(t) = c1 e^{−t} + c2 e^{−3t} .

We have y(0) = c1 + c2. Compute y′(t) = −c1 e^{−t} − 3c2 e^{−3t}, and therefore y′(0) = −c1 − 3c2. The initial conditions tell us that

c1 + c2 = 2
−c1 − 3c2 = −1 .

We have two equations to find the two unknowns c1 and c2. We find that c1 = 5/2, and c2 = −1/2 (say by adding the equations).

Answer: y(t) = (5/2) e^{−t} − (1/2) e^{−3t} .
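Answers like this one are easy to double-check numerically. A quick sketch (plain Python, not part of the text), approximating the derivatives by central differences:

```python
import math

def y(t):
    # the solution found above
    return 2.5 * math.exp(-t) - 0.5 * math.exp(-3 * t)

def residual(t, h=1e-4):
    # y'' + 4y' + 3y, derivatives approximated by central differences
    d1 = (y(t + h) - y(t - h)) / (2 * h)
    d2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2
    return d2 + 4 * d1 + 3 * y(t)

print(y(0.0))        # initial displacement: 2.0
print(residual(1.0)) # approximately 0
```

The residual is zero up to discretization error at every t, and y(0) = 2 holds exactly.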
Example Solve
y″ − 4y = 0 .
(a is a given constant),

y(t) = c1 e^{−t/3} + c2 e^{t/3} .

Compute y′(t) = −(1/3) c1 e^{−t/3} + (1/3) c2 e^{t/3}, and then the initial conditions give

y(0) = c1 + c2 = 2
y′(0) = −(1/3) c1 + (1/3) c2 = a .

Solving this system of two equations for c1 and c2 (by multiplying the second equation through by 3, and adding the result to the first equation), we get c2 = 1 + (3/2) a, and c1 = 1 − (3/2) a. The solution is

y(t) = ( 1 − (3/2) a ) e^{−t/3} + ( 1 + (3/2) a ) e^{t/3} .
a r² + b r + c = a (r − r1)(r − r2) .
2.2.2

b = −2a r1 .

To plug y2(t) = t e^{r1 t} into the equation, we compute its derivatives y2′(t) = e^{r1 t} + r1 t e^{r1 t} = e^{r1 t}(1 + r1 t), and similarly y2″(t) = e^{r1 t}(2 r1 + r1² t). Then

y(t) = c1 e^{t/3} + c2 t e^{t/3} .

Example Solve

y″ − 4y′ + 4y = 0 ,  y(0) = 1, y′(0) = 2 .
2.3
2.3.1
e^{iθ} = 1 + iθ + (1/2!)(iθ)² + (1/3!)(iθ)³ + (1/4!)(iθ)⁴ + (1/5!)(iθ)⁵ + ⋯

= 1 + iθ − (1/2!)θ² − i(1/3!)θ³ + (1/4!)θ⁴ + i(1/5!)θ⁵ − ⋯

= ( 1 − (1/2!)θ² + (1/4!)θ⁴ + ⋯ ) + i ( θ − (1/3!)θ³ + (1/5!)θ⁵ − ⋯ )

= cos θ + i sin θ .

We have derived Euler's formula:

e^{iθ} = cos θ + i sin θ .

Replacing θ by −θ, we have

(3.2)    e^{−iθ} = cos θ − i sin θ .

Adding the last two formulas, and dividing by 2, gives

cos θ = ( e^{iθ} + e^{−iθ} ) / 2 .
Similarly, subtracting them gives

sin θ = ( e^{iθ} − e^{−iθ} ) / (2i) .

2.3.2
Using Euler's formula, for any numbers p and q,

( e^{(p+iq)t} + e^{(p−iq)t} ) / 2 = e^{pt} ( e^{iqt} + e^{−iqt} ) / 2 = e^{pt} cos qt ,

( e^{(p+iq)t} − e^{(p−iq)t} ) / (2i) = e^{pt} ( e^{iqt} − e^{−iqt} ) / (2i) = e^{pt} sin qt .
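These identities are quick to sanity-check with complex floating-point arithmetic; a small sketch (not part of the text):

```python
import cmath, math

p, q, t = 0.3, 1.7, 0.9

# the two combinations of complex exponentials from the text
lhs_cos = (cmath.exp((p + 1j * q) * t) + cmath.exp((p - 1j * q) * t)) / 2
lhs_sin = (cmath.exp((p + 1j * q) * t) - cmath.exp((p - 1j * q) * t)) / (2j)

print(abs(lhs_cos - math.exp(p * t) * math.cos(q * t)))  # ~0
print(abs(lhs_sin - math.exp(p * t) * math.sin(q * t)))  # ~0
```

Euler's formula itself can be tested the same way, e.g. e^{iπ} + 1 = 0 up to rounding.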
y(π/3) = 2 ,  y′(π/3) = 4 .

y(π/3) = c1 cos (2π/3) + c2 sin (2π/3) = −(1/2) c1 + (√3/2) c2 = 2 ,

y′(π/3) = −2 c1 sin (2π/3) + 2 c2 cos (2π/3) = −√3 c1 − c2 = 4 .
2.3.3
Problems
Answer. y = 2 tan⁻¹ (x/2) .

2. y y″ + (y′)² = 0.

Answer. 3 (y ln y − y) + c1 y = x + c2, and y = c.

Answer. y = tan x.
y″ y − y′² = (2x − 1) y′² ;

(y″ y − y′²)/y′² = 2x − 1 ;

( −y/y′ )′ = 2x − 1 .

Integrating, with the constant of integration equal to 1/4,

−y/y′ = x² − x + 1/4 = (2x − 1)²/4 ;

y′/y = −4/(2x − 1)² .

Answer. y = e^{4x/(2x−1)} .
III. Solve the linear second order equations, with constant coefficients.

1. y″ + 4y′ + 3y = 0.    Answer. y = c1 e^{−t} + c2 e^{−3t}.

2. y″ − 3y′ = 0.    Answer. y = c1 + c2 e^{3t}.

3. 2y″ + y′ − y = 0.    Answer. y = c1 e^{−t} + c2 e^{t/2}.

4. y″ − 3y = 0.    Answer. y = c1 e^{−√3 t} + c2 e^{√3 t}.

5. 3y″ − 5y′ − 2y = 0.
6. y″ − 9y = 0, y(0) = 3, y′(0) = 3.

8. y″ + y′ − 6y = 0, y(0) = 2, y′(0) = −3.

Answer. y = 3 + 2 e^{5t}.

Answer. y = (7/5) e^{−3t} + (3/5) e^{2t}.
9. 4y″ − y = 0.

2. 4y″ − 4y′ + y = 0.    Answer. y = c1 e^{t/2} + c2 t e^{t/2}.

3. y″ − 2y′ + y = 0, y(0) = 0, y′(0) = 2.    Answer. y = 2 t e^t.
4. 9y″ − 6y′ + y = 0, y(0) = 1, y′(0) = 2.

V. Using Euler's formula, compute:  1. e^{iπ}  2. e^{iπ/2}  3. e^{i3π/4}  4. e^{(9/2)πi}  5. 2e^{iπ/4}

6. Show that cos 2θ = cos²θ − sin²θ, and sin 2θ = 2 sin θ cos θ.

Hint: Begin with e^{i2θ} = (cos θ + i sin θ)². Apply Euler's formula on the left, and square out on the right. Then equate the real and imaginary parts.

7. Show that sin 3θ = 3 cos²θ sin θ − sin³θ, and cos 3θ = −3 sin²θ cos θ + cos³θ.

Hint: Begin with e^{i3θ} = (cos θ + i sin θ)³. Apply Euler's formula on the left, and cube out on the right. Then equate the real and imaginary parts.
VI. Solve the linear second order equations, with constant coefficients.

1. y″ + 4y′ + 8y = 0.

2. y″ + 16y = 0.

6. y″ − y′ + y = 0.    Answer. y = e^{t/2} ( c1 cos (√3/2) t + c2 sin (√3/2) t ).

7. 4y″ + 8y′ + 5y = 0, y(π) = 0, y′(π) = 4.    Answer. y = −8 e^{π−t} cos (t/2).

8. y″ + y = 0, y(π/4) = 0, y′(π/4) = 1.    Answer. y = sin(t − π/4).
VII.

1. Consider the equation (y = y(t))

y″ + b y′ + c y = 0 ,

with positive constants b and c. Show that all of its solutions tend to zero, as t → ∞.

2. Consider the equation

y″ + b y′ − c y = 0 ,

with positive constants b and c. Assume that some solution is bounded, as t → ∞. Show that this solution tends to zero, as t → ∞.
2.4
Linear Systems
Recall that a system of two equations (here the numbers a, b, c, d, g and h are given, while x and y are unknowns)

a x + b y = g
c x + d y = h

has a unique solution if and only if the determinant of the system is non-zero, | a  b ; c  d | = ad − bc ≠ 0. This is justified by explicitly solving the system:

x = (dg − bh)/(ad − bc) ,  y = (ah − cg)/(ad − bc) .
It is also easy to justify that the determinant is zero, ad − bc = 0, if and only if one row is a multiple of the other.

We now consider the initial value problem

(4.1)    y″ + p(t) y′ + g(t) y = f(t) ,  y(t0) = α ,  y′(t0) = β .

The coefficient functions p(t) and g(t), and the function f(t) are assumed to be given. The constants t0, α and β are also given, so that at some initial time t = t0, the values of the solution and its derivative are prescribed. It is natural to ask the following questions. Is there a solution to this problem? If so, is the solution unique, and how far can it be continued?
Theorem 2.4.1 Assume that the functions p(t), g(t) and f (t) are continuous on some interval (a, b) that includes t0 . Then the problem (4.1) has a
solution, and only one solution. This solution can be continued to the left
and to the right of the initial point t0 , so long as t remains in (a, b).
If the functions p(t), g(t) and f(t) are continuous for all t, then the solution can be continued for all t, −∞ < t < ∞. This is better than what we had for first order equations (where blow up in finite time was possible). Why? Because the equation here is linear. Linearity pays!

Corollary 2.4.1 Let z(t) be a solution of (4.1) with the same initial data as y(t): z(t0) = α and z′(t0) = β. Then z(t) = y(t) for all t.
Let us now study the homogeneous equation (for y = y(t))

(4.2)    y″ + p(t) y′ + g(t) y = 0 ,

with given coefficient functions p(t) and g(t). Although this equation looks relatively simple, its analytical solution is totally out of reach, in general. (One has to either solve it numerically, or use infinite series.) In this section we shall study some theoretical aspects. In particular, we shall prove that a linear combination of two solutions, which are not constant multiples of one another, gives the general solution. Define the Wronskian of two differentiable functions y1(t) and y2(t) as the determinant

W(t) = | y1(t)  y2(t) ; y1′(t)  y2′(t) | = y1(t) y2′(t) − y1′(t) y2(t) .

(Named in honor of the Polish mathematician J. M. Wronski, 1776–1853.) Sometimes the Wronskian is written as W(y1, y2)(t) to stress its dependence on y1(t) and y2(t). For example,

W(cos 2t, sin 2t)(t) = | cos 2t  sin 2t ; −2 sin 2t  2 cos 2t | = 2 cos² 2t + 2 sin² 2t = 2 .
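This computation can be confirmed numerically, with the derivatives approximated by central differences; a quick sketch (not part of the text):

```python
import math

def wronskian(f, g, t, h=1e-6):
    # W(f, g)(t) = f(t) g'(t) - f'(t) g(t), derivatives by central differences
    fp = (f(t + h) - f(t - h)) / (2 * h)
    gp = (g(t + h) - g(t - h)) / (2 * h)
    return f(t) * gp - fp * g(t)

f = lambda t: math.cos(2 * t)
g = lambda t: math.sin(2 * t)

print(wronskian(f, g, 0.7))  # approximately 2, and the same for every t
```

The value is (up to discretization error) the constant 2, independent of t, in agreement with the determinant above.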
Given the Wronskian and one of the functions, one can determine the
other one.
Example If f(t) = t, and W(f, g)(t) = t² e^t, find g(t).

Solution: Here f′(t) = 1, and so

W(f, g)(t) = | t  g(t) ; 1  g′(t) | = t g′(t) − g(t) = t² e^t .

This is a linear first order equation for g(t). We solve it as usual, obtaining

g(t) = t e^t + c t .
If g(t) = c f(t), with some constant c, we compute that W(f, g)(t) = 0, for all t. The converse statement is not true. For example, the functions f(t) = t² and

g(t) = { t² if t ≥ 0 ; −t² if t < 0 }

are not constant multiples of one another, but W(f, g)(t) = 0. This is seen by computing the Wronskian separately in case t ≥ 0, and for t < 0. (Observe that g(t) is a differentiable function, with g′(0) = 0.)
Theorem 2.4.2 Let y1(t) and y2(t) be two solutions of (4.2), and W(t) is their Wronskian. Then

(4.3)    W(t) = c e^{−∫ p(t) dt} ,

where c is some constant.
This is a remarkable fact. Even though we do not know y1 (t) and y2 (t),
we can compute their Wronskian.
Proof: We differentiate the Wronskian W(t) = y1(t) y2′(t) − y1′(t) y2(t):

W′ = y1 y2″ + y1′ y2′ − y1′ y2′ − y1″ y2 = y1 y2″ − y1″ y2 .

Because y1 is a solution of (4.2), we have y1″ + p(t) y1′ + g(t) y1 = 0, or y1″ = −p(t) y1′ − g(t) y1, and similarly y2″ = −p(t) y2′ − g(t) y2. With these formulas, we continue

W′ = y1 (−p(t) y2′ − g(t) y2) − (−p(t) y1′ − g(t) y1) y2 = −p(t) (y1 y2′ − y1′ y2) = −p(t) W .

We obtained a linear first order equation for W(t). Solving it, we conclude (4.3).
Corollary 2.4.2 We see from (4.3) that either W(t) = 0 for all t, when c = 0, or else W(t) is never zero, in case c ≠ 0.
Theorem 2.4.3 Let y1 (t) and y2 (t) be two non-trivial solutions of (4.2),
and W (t) is their Wronskian. Then W (t) = 0, if and only if y1 (t) and y2 (t)
are constant multiples of each other.
We just saw that if two functions are constant multiples of each other,
then their Wronskian is zero, while the converse statement is not true, in
general. But if these functions happen to be solutions of (4.2), then the
converse statement is true.
Proof: Assume that the Wronskian of two solutions y1(t) and y2(t) is zero. In particular it is zero at any point t0, so that

| y1(t0)  y2(t0) ; y1′(t0)  y2′(t0) | = 0 .
the same as y1(t). By Corollary 2.4.1, it follows that z(t) = y1(t), so that y2(t) is a constant multiple of y1(t), for all t.
Definition We say that two solutions y1(t) and y2(t) of (4.2) form a fundamental set, if for any other solution z(t), we can find two constants c1⁰ and c2⁰, so that z(t) = c1⁰ y1(t) + c2⁰ y2(t). In other words, the linear combination c1 y1(t) + c2 y2(t) gives us all solutions of (4.2).
Theorem 2.4.4 Let y1 (t) and y2 (t) be two solutions of (4.2), that are not
constant multiples of one another. Then they form a fundamental set.
Proof: Let y(t) be a solution of the equation (4.2). Let us try to find the constants c1 and c2, so that z(t) = c1 y1(t) + c2 y2(t) satisfies the same initial conditions as y(t), so that

(4.4)    c1 y1(t0) + c2 y2(t0) = y(t0) ,  c1 y1′(t0) + c2 y2′(t0) = y′(t0) .
2.5
We shall give some practical applications of the theory from the last section.
But first, we recall the functions sinh t and cosh t.
(Figure 2.1: the graph of y = cosh t.)
2.5.1

One defines

cosh t = ( e^t + e^{−t} ) / 2 ,  and  sinh t = ( e^t − e^{−t} ) / 2 .

Compute

(d/dt) cosh t = sinh t ,  and  (d/dt) sinh t = cosh t .
These formulas are similar to those for cosine and sine. By squaring out, one sees that

cosh² t − sinh² t = 1 , for all t .
(There are other similar formulas.) We see that the derivatives, and the
algebraic properties of the new functions are similar to those for cosine and
sine. However, the graphs of sinh t and cosh t look totally different: they are
not periodic, and they are unbounded, see Figures 2.1 and 2.2.
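Both the identity and the derivative formulas are easy to confirm numerically; a quick sketch (not part of the text):

```python
import math

# cosh^2 t - sinh^2 t = 1, at a few sample points
for t in (-2.0, 0.0, 1.5):
    assert abs(math.cosh(t)**2 - math.sinh(t)**2 - 1.0) < 1e-12

# d/dt cosh t = sinh t, checked with a central difference at t = 0.8
h = 1e-6
d = (math.cosh(0.8 + h) - math.cosh(0.8 - h)) / (2 * h)
print(abs(d - math.sinh(0.8)))  # ~0
```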
2.5.2

For the equation

y″ − a² y = 0

you remember that the functions e^{at} and e^{−at} form a fundamental set, and y(t) = c1 e^{at} + c2 e^{−at} is the general solution. But y = sinh at is also a solution, because y′ = a cosh at and y″ = a² sinh at = a² y. Similarly, cosh at
(Figure 2.2: the graph of y = sinh t.)
3c1 + 3c2 = 9 .
y = c1 cos 2(t − π/5) + c2 sin 2(t − π/5) .
2.5.3

Suppose that for the equation

y″ − ( 2t/(1 − t²) ) y′ + ( 2/(1 − t²) ) y = 0

one solution, y1(t) = t, is known. Call the other solution y(t). By the Theorem 2.4.2, we can calculate the Wronskian of two solutions:

W(t, y) = | t  y(t) ; 1  y′(t) | = t y′(t) − y(t) = c e^{∫ 2t/(1−t²) dt} .

We shall set here c = 1, because we need just one solution, which is not a constant multiple of t. Then

t y′ − y = e^{∫ 2t/(1−t²) dt} = e^{−ln(1−t²)} = 1/(1 − t²) .
Dividing through by t²,

(d/dt) ( y/t ) = 1/( t² (1 − t²) ) ;  y = t ∫ dt/( t² (1 − t²) ) .
2.5.4 Problems

Answer. 3 e^{2t}.

1. y″ − 2y′ + y = 0,  y1(t) = e^t.

2. (t − 2) y″ − t y′ + 2y = 0,  y1(t) = e^t.

Answer. y = c1 e^t + c2 (t² − 2t + 2).

3. t y″ + 2y′ + t y = 0,  y1(t) = (sin t)/t.

Answer. y = c1 (sin t)/t + c2 (cos t)/t.
III. Express the solution, by using the hyperbolic sine and cosine functions.

1. y″ − 4y = 0, y(0) = 0, y′(0) = 1/3.    Answer. y = (1/6) sinh 2t.

2. y″ − 9y = 0, y(0) = 2, y′(0) = 0.

3. y″ − y = 0, y(0) = 3, y′(0) = 5.

IV. Solve the problem, by using the general solution centered at the initial point.

1. y″ + y = 0, y(π/8) = 0, y′(π/8) = 3.

2. y″ + 4y = 0, y(π/4) = 0, y′(π/4) = −4.    Answer. y = −2 sin 2(t − π/4) = −2 sin(2t − π/2) = 2 cos 2t.

3. y″ − 2y′ − 3y = 0, y(1) = 1, y′(1) = 7.
2.6 Non-homogeneous Equations

We consider the non-homogeneous equation

(6.1)    y″ + p(t) y′ + g(t) y = f(t) .

Here the coefficient functions p(t) and g(t), and the function f(t) are given. The corresponding homogeneous equation is

(6.2)    y″ + p(t) y′ + g(t) y = 0 .
giving A = 4/5, and Y(t) = (4/5) cos 2t. Answer: y(t) = (4/5) cos 2t + c1 cos 3t + c2 sin 3t.

This was an easy example, because the y′ term was missing. If the y′ term is present, we need to look for Y(t) in the form Y(t) = A cos 2t + B sin 2t.
Prescription 1 If the right side of the equation (6.1) has the form a cos ωt + b sin ωt, with constants a, b and ω, then look for a particular solution in the form Y(t) = A cos ωt + B sin ωt. More generally, if the right side of the equation has the form (at² + bt + c) cos ωt + (et² + gt + h) sin ωt, then look for a particular solution in the form Y(t) = (At² + Bt + C) cos ωt + (Et² + Gt + H) sin ωt. Even more generally, if the polynomials are of higher power, we make the corresponding adjustments.
Example Solve y″ − 3y′ + 9y = 4 cos 2t + sin 2t.

We look for a particular solution in the form Y(t) = A cos 2t + B sin 2t. Plug this in, then combine the like terms

−4A cos 2t − 4B sin 2t − 3 (−2A sin 2t + 2B cos 2t) + 9 (A cos 2t + B sin 2t) = 4 cos 2t + sin 2t ;

(5A − 6B) cos 2t + (6A + 5B) sin 2t = 4 cos 2t + sin 2t .

Equating the coefficients of cos 2t and sin 2t gives 5A − 6B = 4 and 6A + 5B = 1, so that A = 26/61, and B = −19/61. The general solution of the corresponding homogeneous equation

y″ − 3y′ + 9y = 0

is y = c1 e^{(3/2)t} cos ((3√3/2) t) + c2 e^{(3/2)t} sin ((3√3/2) t). Answer: y(t) = (26/61) cos 2t − (19/61) sin 2t + c1 e^{(3/2)t} cos ((3√3/2) t) + c2 e^{(3/2)t} sin ((3√3/2) t).
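Matching coefficients in such examples always reduces to a small linear system. Here is an exact-arithmetic sketch (not part of the text) solving a 2×2 system by Cramer's rule; the sample system 5A − 6B = 4, 6A + 5B = 1 is the one read off here for y″ − 3y′ + 9y = 4 cos 2t + sin 2t:

```python
from fractions import Fraction

def solve2(a, b, c, d, g, h):
    # Cramer's rule for  a*A + b*B = g,  c*A + d*B = h
    det = a * d - b * c
    return Fraction(d * g - b * h, det), Fraction(a * h - c * g, det)

A, B = solve2(5, -6, 6, 5, 4, 1)
print(A, B)  # 26/61 -19/61
```

Exact rational arithmetic avoids the rounding that creeps in when such coefficients are computed in floating point.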
Example Solve y″ + 2y′ + y = t − 1.

On the right we see a linear polynomial. We look for a particular solution in the form Y(t) = At + B. Plugging this in, gives

2A + At + B = t − 1 .

Equating the corresponding coefficients, we have A = 1, and 2A + B = −1, so that B = −3. Then, Y(t) = t − 3. The general solution of the corresponding homogeneous equation

y″ + 2y′ + y = 0
−2A = 1
2A − 2B = 0
2A + B − 2C = 0 .

From the first equation, A = −1/2, from the second one, B = −1/2, and from the third, C = A + (1/2)B = −3/4. So that Y(t) = −(1/2) t² − (1/2) t − 3/4. The general solution of the corresponding homogeneous equation is y(t) = c1 e^{−2t} + c2 e^t.

Answer: y(t) = −(1/2) t² − (1/2) t − 3/4 + c1 e^{−2t} + c2 e^t .
The last two examples lead to the following prescription.

Prescription 2 If the right side of the equation (6.1) is a polynomial of degree n: a0 tⁿ + a1 tⁿ⁻¹ + ⋯ + a_{n−1} t + a_n, look for a particular solution as a polynomial of degree n: A0 tⁿ + A1 tⁿ⁻¹ + ⋯ + A_{n−1} t + A_n, with the coefficients to be determined.

And on to the final possibility.

Prescription 3 If the right side of the equation (6.1) is a polynomial of degree n, times an exponential: ( a0 tⁿ + a1 tⁿ⁻¹ + ⋯ + a_{n−1} t + a_n ) e^{αt}, look for a particular solution as a polynomial of degree n times the same exponential: ( A0 tⁿ + A1 tⁿ⁻¹ + ⋯ + A_{n−1} t + A_n ) e^{αt}, with the coefficients to be determined.
Example Solve y″ + y = t e^{2t}.

We look for a particular solution in the form Y(t) = (At + B) e^{2t}. Compute Y′(t) = A e^{2t} + 2(At + B) e^{2t} = (2At + A + 2B) e^{2t}, and Y″(t) = 2A e^{2t} + 2(2At + A + 2B) e^{2t} = (4At + 4A + 4B) e^{2t}. Plug Y(t) in:

4At e^{2t} + 4A e^{2t} + 4B e^{2t} + At e^{2t} + B e^{2t} = t e^{2t} .

Equating the coefficients of t e^{2t} and e^{2t} gives 5A = 1 and 4A + 5B = 0, so that A = 1/5, B = −4/25, and Y(t) = ( (1/5) t − 4/25 ) e^{2t}.

Answer: y(t) = ( (1/5) t − 4/25 ) e^{2t} + c1 cos t + c2 sin t.
2.7
The prescriptions from the previous section do not always work. In this
section we sketch a fix. More details can be found in the book of W.E.
Boyce and R.C. DiPrima. A more general approach for finding Y (t) will be
developed in the next section.
Example Solve y 00 + y = sin t.
We try Y (t) = A sin t+B cos t, according to the Prescription 1. Plugging
Y (t) in, we get
0 = sin t ,
which is impossible. Why did we strike out? Because A sin t and B cos t are
solutions of the corresponding homogeneous equation. Let us multiply the initial guess by t, and try Y = At sin t + Bt cos t. Calculate Y′ = A sin t +
t2
4
11
8 t.
6At + 2B = t .

It follows that 6A = 1, and 2B = 0, giving A = 1/6, and B = 0. Answer: y(t) = (1/6) t³ e^t + c1 e^t + c2 t e^t .

2.8
We assume that y1(t) and y2(t) form a fundamental solution set for the corresponding homogeneous equation

(8.2)    y″ + p(t) y′ + g(t) y = 0 .

(So that c1 y1(t) + c2 y2(t) is the general solution of (8.2).) We look for a particular solution of (8.1) in the form

(8.3)    Y(t) = u1(t) y1(t) + u2(t) y2(t) ,

with some functions u1(t) and u2(t), that we shall choose to satisfy the following two equations

(8.4)    u1′(t) y1(t) + u2′(t) y2(t) = 0
         u1′(t) y1′(t) + u2′(t) y2′(t) = f(t) .

We have a system of two linear equations to find u1′(t) and u2′(t). Its determinant

W(t) = | y1(t)  y2(t) ; y1′(t)  y2′(t) |

is the Wronskian of y1(t) and y2(t). We know that W(t) ≠ 0 for all t, because y1(t) and y2(t) form a fundamental solution set. By Cramer's rule (or by elimination), we solve

(8.5)    u1′(t) = −f(t) y2(t)/W(t) ,  u2′(t) = f(t) y1(t)/W(t) .
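These formulas translate directly into a numerical procedure: integrate u1′ and u2′, then form Y = u1 y1 + u2 y2. A sketch (not part of the text) for y″ + y = tan t, taking y1 = cos t, y2 = sin t, for which W = 1, and using trapezoid quadrature:

```python
import math

def trapezoid(func, a, b, n=2000):
    # composite trapezoid rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (func(a) + func(b)) + sum(func(a + k * h) for k in range(1, n))
    return s * h

f = math.tan
y1, y2 = math.cos, math.sin  # fundamental set for y'' + y = 0, W = 1

def Y(t):
    # particular solution from u1' = -f y2 / W, u2' = f y1 / W
    u1 = trapezoid(lambda s: -f(s) * y2(s), 0.0, t)
    u2 = trapezoid(lambda s: f(s) * y1(s), 0.0, t)
    return u1 * y1(t) + u2 * y2(t)

t = 0.5
# closed form of this particular solution: sin t - cos t ln(sec t + tan t)
closed = math.sin(t) - math.cos(t) * math.log(1 / math.cos(t) + math.tan(t))
print(abs(Y(t) - closed))  # ~0
```

Any two particular solutions differ by a homogeneous solution, so the specific closed form compared against here corresponds to the choice u1(0) = u2(0) = 0 made in the code.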
W(t) = | sin t  cos t ; cos t  −sin t | = −sin² t − cos² t = −1 ,

so that, by (8.5),

u1′(t) = −tan t · cos t/(−1) = sin t ,  u2′(t) = tan t · sin t/(−1) = −sin² t/cos t .

Integrating, u1(t) = −cos t. We set the constant of integration to zero, because we only need one particular solution. Integrating the second formula,

u2(t) = −∫ (sin² t/cos t) dt = −∫ ( (1 − cos² t)/cos t ) dt = ∫ ( −sec t + cos t ) dt
2.9
2.9.1
where g_t(t, s) denotes the partial derivative in t. To differentiate the integral ∫_a^t g(s) ds, we use the fundamental theorem of Calculus:

(d/dt) ∫_a^t g(s) ds = g(t) .

For an integral whose integrand also depends on t,

(d/dt) ∫_a^t g(t, s) ds = ∫_a^t g_t(t, s) ds + g(t, t) ,

so that, in effect, we combine the previous two formulas. Let now z(t) and f(t) be some functions, then the last formula gives

(9.1)    (d/dt) ∫_0^t z(t − s) f(s) ds = ∫_0^t z′(t − s) f(s) ds + z(0) f(t) .

2.9.2
(9.2)    y″ + p y′ + g y = f(t) ,

where p and g are given numbers, and f(t) is a given function. Let z(t) denote the solution of the corresponding homogeneous equation, satisfying

(9.3)    z″ + p z′ + g z = 0 ,  z(0) = 0 ,  z′(0) = 1 .

Then

Y(t) = ∫_0^t z(t − s) f(s) ds

is a particular solution of (9.2). Indeed, using (9.1) and the initial conditions in (9.3),

Y′(t) = ∫_0^t z′(t − s) f(s) ds ;  Y″(t) = ∫_0^t z″(t − s) f(s) ds + f(t) .

Then

Y″(t) + p Y′(t) + g Y(t) = ∫_0^t [ z″(t − s) + p z′(t − s) + g z(t − s) ] f(s) ds + f(t) = f(t) .
Here the integral is zero, because z(t) satisfies the homogeneous equation
(with constant coefficients p and g) at all values of its argument t, including
t s.
Example Let us now revisit the equation
y 00 + y = tan t .
Solving
z 00 + z = 0, z(0) = 0, z 0 (0) = 1 ,
we get z(t) = sin t. Then

Y(t) = ∫_0^t sin(t − s) tan s ds .
Writing sin(t − s) = sin t cos s − cos t sin s, and integrating, it is easy to obtain the solution we had before.
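The same convolution formula works for any right hand side. For example, for y″ + y = t it gives Y(t) = ∫₀ᵗ sin(t − s) s ds = t − sin t, which a quick quadrature confirms (a sketch, not part of the text):

```python
import math

def conv(t, f, n=2000):
    # Y(t) = integral_0^t sin(t - s) f(s) ds, composite trapezoid rule
    h = t / n
    g = lambda s: math.sin(t - s) * f(s)
    return h * (0.5 * (g(0.0) + g(t)) + sum(g(k * h) for k in range(1, n)))

t = 1.3
print(abs(conv(t, lambda s: s) - (t - math.sin(t))))  # ~0
```

One checks directly that Y = t − sin t satisfies Y″ + Y = t with Y(0) = Y′(0) = 0, as the formula guarantees.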
2.10
2.10.1
Vibrating Spring
Let y = y(t) denote the displacement of a spring from its natural position. Its motion is governed by Newton's second law

m a = f .
The acceleration a = y″(t). We assume that the only force f, acting on the spring, is its own restoring force, which by Hooke's law is f = −ky, for small displacements. Here the physical constant k > 0 describes the stiffness (or the hardness) of the spring. We have

m y″ = −k y .

Divide both sides by the mass m of the spring, and denote k/m = ω² (or ω = √(k/m)), obtaining

y″ + ω² y = 0 .
y(t) = c1 cos ωt + c2 sin ωt = √(c1² + c2²) [ ( c1/√(c1² + c2²) ) cos ωt + ( c2/√(c1² + c2²) ) sin ωt ]

= A [ (c1/A) cos ωt + (c2/A) sin ωt ] ,

where we have denoted A = √(c1² + c2²). Observe that (c1/A)² + (c2/A)² = 1, which means that we can find an angle δ, such that cos δ = c1/A, and sin δ = c2/A. Then our solution takes the form

y(t) = A ( cos ωt cos δ + sin ωt sin δ ) = A cos(ωt − δ) .
So that any harmonic motion is just a shifted cosine curve of amplitude A = √(c1² + c2²), and of period 2π/ω. We see that the larger ω is, the smaller is the period, and the more frequent the oscillations are. So that ω gives us the frequency of oscillations. The constants c1 and c2 can be computed, once the initial displacement y(0), and the initial velocity y′(0) are given.

Example Solving the initial value problem

y″ + 4y = 0 ,  y(0) = 3 ,  y′(0) = −8 ,

one gets y(t) = 3 cos 2t − 4 sin 2t. This solution is a periodic function, with the amplitude 5, the frequency 2, and the period π.
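The amplitude–phase conversion used in this example is easy to verify numerically; a quick sketch (not part of the text):

```python
import math

c1, c2, w = 3.0, -4.0, 2.0
A = math.hypot(c1, c2)       # amplitude sqrt(c1^2 + c2^2)
delta = math.atan2(c2, c1)   # phase, with cos(delta) = c1/A, sin(delta) = c2/A

# check the identity c1 cos(wt) + c2 sin(wt) = A cos(wt - delta)
for t in (0.0, 0.4, 2.5):
    y = c1 * math.cos(w * t) + c2 * math.sin(w * t)
    assert abs(y - A * math.cos(w * t - delta)) < 1e-9

print(A)  # 5.0
```

Using `atan2` picks the angle δ in the correct quadrant automatically, which matters here since c2 < 0.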
The equation

(10.1)    y″ + ω² y = f(t)
models the case when an external force, with acceleration equal to f(t), is applied to the spring. Indeed, the equation of motion is now

m y″ = −k y + m f(t) ,

from which we get (10.1), dividing by m.

Let us consider the case of a periodic forcing term

(10.2)    y″ + ω² y = a sin νt ,

where a > 0 is the amplitude of the external force, and ν is the forcing frequency. If ν ≠ ω, we look for a particular solution of this non-homogeneous equation in the form Y(t) = A sin νt. Plugging this in, gives A = a/(ω² − ν²). The general solution of (10.2), which is

y(t) = ( a/(ω² − ν²) ) sin νt + c1 cos ωt + c2 sin ωt ,

is a superposition (sum) of the harmonic motion, and the response term ( a/(ω² − ν²) ) sin νt to the external force. We see that the solution is still bounded, although not periodic anymore, in general (such functions are called quasiperiodic).

A very important case is when ν = ω, so that the forcing frequency is the same as the natural frequency. Then a particular solution has the form Y(t) = At sin ωt + Bt cos ωt, so that solutions become unbounded, as time t increases. This is the case of resonance, when a bounded external force produces unbounded response. Large displacements will break the spring.
Resonance is a serious engineering concern.
Example y″ + 4y = sin 2t, y(0) = 0, y′(0) = 1.

Both the natural and forcing frequencies are equal to 2. The fundamental set of the corresponding homogeneous equation consists of sin 2t and cos 2t. We search for a particular solution in the form Y(t) = At sin 2t + Bt cos 2t. As before, we compute Y(t) = −(1/4) t cos 2t. Then the general solution is y(t) = −(1/4) t cos 2t + c1 cos 2t + c2 sin 2t. Using the initial conditions, we calculate c1 = 0 and c2 = 5/8, so that y(t) = −(1/4) t cos 2t + (5/8) sin 2t. The term −(1/4) t cos 2t introduces oscillations, with the amplitude (1/4) t increasing without bound, as time t → ∞. (It is customary to call this unbounded term a secular term, which seems to imply that the harmonic terms are divine.)
(Figure 2.3: the graph of the secular term y = −(1/4) t cos 2t, oscillating between the lines y = (1/4) t and y = −(1/4) t.)
2.10.2
Problems
3 cos t 1
sin t.
5
5
Answer. y = t + 4 + c1 et + c2 et t.
Answer. y = (1/8) ( 2t² − 6t + 1 ) + c1 cos 2t + c2 sin 2t.
1
13e2t e2t 6e3t
Answer. y = e3t t
+
.
5
50
2
25
6. 4y 00 + 8y 0 + 5y = 2t sin 3t.
2
16
31
24
t
t
Answer. y = t
+
sin 3t +
cos 3t + c1 et cos + c2 et sin .
5
25 1537
1537
2
2
7. y″ + y = 2 e^{4t} + t².

Answer. y = (2/17) e^{4t} + t² − 2 + c1 cos t + c2 sin t.
00
0
8. y y = 2 sin t cos 2t.
1
Answer. y = cos t sin t + cos 2t + sin 2t + c1 + c2 et .
2
Answer. y = t + c1 e 2 t + c2 et .
III. Write down the form in which one should look for a particular solution,
but DO NOT compute the coefficients.
1. y″ − 4y′ + 4y = 3t e^{2t} + sin 4t − t².
1. y″ − 2y′ + y = e^t/(1 + t²).

Answer. y = c1 e^t + c2 t e^t − (1/2) e^t ln(1 + t²) + t e^t tan⁻¹ t.

2. y″ − 2y′ + y = 2 e^t.

Answer. y = t² e^t + c1 e^t + c2 t e^t.

3. y″ + 2y′ + y = e^{−t}/t.

Answer. y = −t e^{−t} + t e^{−t} ln t + c1 e^{−t} + c2 t e^{−t}.

4. y″ + y = sec t.

Answer. y = c1 cos t + c2 sin t + ln(cos t) cos t + t sin t.

5. y″ + 2y′ + y = 2 e^{3t}.

Answer. y = (1/8) e^{3t} + c1 e^{−t} + c2 t e^{−t}.

6. y″ + 4y = sin 2t, y(0) = 0, y′(0) = 1.

Answer. y = −(1/4) t cos 2t + (5/8) sin 2t.
V. Verify that the functions y1 (t) and y2 (t) form a fundamental solution
set for the corresponding homogeneous equation, and then use variation of
parameters, to find the general solution.
1. y1(t) = t², y2(t) = t⁻¹.

Answer. y = −1/2 + (1/4) t³ + c1 t² + c2 t⁻¹.

2. t y″ − (1 + t) y′ + y = t² e^{3t} ,  y1(t) = t + 1, y2(t) = e^t.

Answer. y = (1/12) e^{3t} (2t − 1) + c1 (t + 1) + c2 e^t.

3. y1(t) = 1/t, y2(t) = t² + 1.

Show that Y(t) = ∫_0^t ( (t − s)^{n−1}/(n − 1)! ) f(s) ds gives a solution of the n-th order equation

y^{(n)} = f(t) .
Write down the solution, assuming that α < 6. What is the smallest value of the dissipation constant α, which will prevent the spring from oscillating?

Answer. No oscillations for α ≥ 6.

6. Consider forced vibrations of a dissipative spring

y″ + α y′ + 9y = sin 3t .

Write down the general solution for

(i) α = 0  (ii) α ≠ 0.

What does friction do to the resonance?
2.10.3

Let r = r(t) denote the distance of some meteor from the center of the Earth. The motion of the meteor is governed by Newton's law of gravitation

(10.3)    m r″ = − m M G / r² .

Here m is the mass of the meteor, M denotes the mass of the Earth, and G is the universal gravitational constant. Let a be the radius of the Earth. If an object is sitting on Earth's surface, then r = a, and r″ = −g, so that from (10.3)

g = M G / a² .
(10.4)    r″ = − g a²/r² .

Multiplying this equation by r′, we may rewrite it as

(d/dt) [ (1/2) r′² − g a²/r ] = 0 ;

(10.5)    (1/2) r′²(t) − g a²/r(t) = c .

So that the energy of the meteor, E(t) = (1/2) r′²(t) − g a²/r(t), remains constant at all time. (That is why the gravitational force field is called conservative.) We can now express r′(t) = −√( 2c + 2g a²/r(t) ), and calculate the motion of the meteor r(t) by separation of variables. However, as we are not riding on the meteor, this seems to be not worth the effort. What really concerns us is the velocity of impact, as the meteor hits the Earth.
Let us assume that the meteor begins its journey with zero velocity r′(0) = 0, and at a distance so large that we may assume r(0) = ∞. Then the energy of the meteor at time t = 0 is zero, E(0) = 0. As the energy remains constant at all time, the energy at the time of impact is also zero. At the time of impact, we have r = a, and the velocity of impact we denote by v (r′ = −v). Then from (10.5)

(1/2) v² − g a²/a = 0 ,

and the velocity of impact is

v = √(2 g a) .

Food for thought: the velocity of impact is the same, as it would have been achieved by free fall from height a.
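Plugging in numbers (the values of g and a below are assumed, not from the text) gives the familiar escape velocity of about 11.2 km/s; a quick sketch:

```python
import math

g = 9.81     # gravitational acceleration in m/s^2 (assumed value)
a = 6.371e6  # Earth's radius in m (assumed value)

v = math.sqrt(2 * g * a)  # impact velocity v = sqrt(2 g a)
print(v)  # roughly 1.12e4 m/s, i.e. about 11.2 km/s
```

By symmetry of the energy argument, this is also the minimal launch speed needed to escape Earth's gravity (ignoring air resistance).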
Let us now revisit the harmonic oscillations of a spring:

y″ + ω² y = 0 .

Multiplying by y′,

(d/dt) [ (1/2) y′² + (1/2) ω² y² ] = 0 ;

(1/2) y′² + (1/2) ω² y² = constant .

With the energy E(t) conserved, no wonder the motion of the spring was periodic.
2.10.4 Damped Oscillations

(10.6)    y″ + α y′ + ω² y = 0 .

Let us see what effect the extra term α y′ has on the energy of the spring, E(t) = (1/2) y′² + (1/2) ω² y². We differentiate the energy, and express from the equation (10.6), y″ = −α y′ − ω² y, obtaining

E′(t) = y′ y″ + ω² y y′ = y′ (−α y′ − ω² y) + ω² y y′ = −α y′² .

We see that E′(t) ≤ 0, and the energy decreases. This is an example of a dissipative motion. We expect the amplitude of oscillations to decrease with time. We call α the dissipation (or damping) coefficient.

To solve the equation (10.6), we write down its characteristic equation

r² + α r + ω² = 0 ,

whose roots are

r = ( −α ± √(α² − 4ω²) ) / 2 .

There are three cases to consider.
If α² − 4ω² < 0, the roots are complex. If we denote 4ω² − α² = q², the roots are −α/2 ± i q/2. The general solution

y(t) = c1 e^{−(α/2) t} sin (q/2) t + c2 e^{−(α/2) t} cos (q/2) t

exhibits damped oscillations (the amplitude of oscillations tends to zero, as t → ∞).

In the case of a repeated root, the general solution

y(t) = c1 e^{−(α/2) t} + c2 t e^{−(α/2) t}

tends to zero as t → ∞, without oscillating.

If α² − 4ω² > 0, we denote q = √(α² − 4ω²); then the roots are r1 = (−α + q)/2, and r2 = (−α − q)/2. Both roots are negative, because q < α. The general solution

y(t) = c1 e^{r1 t} + c2 e^{r2 t}

tends to zero as t → ∞, without oscillating. We see that large enough dissipation kills the oscillations.
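The three cases are governed by the sign of α² − 4ω². A small sketch (not part of the text; the symbols α and ω are as in the damped equation) classifying the roots of r² + αr + ω² = 0 with complex arithmetic:

```python
import cmath

def roots(alpha, omega):
    # roots of r^2 + alpha r + omega^2 = 0
    disc = cmath.sqrt(alpha**2 - 4 * omega**2)
    return (-alpha + disc) / 2, (-alpha - disc) / 2

r1, r2 = roots(1.0, 3.0)   # alpha^2 - 4 omega^2 < 0: damped oscillations
print(r1.real, r1.imag)    # real part -0.5, nonzero imaginary part

r1, r2 = roots(8.0, 3.0)   # alpha^2 - 4 omega^2 > 0: two negative real roots
print(r1, r2)
```

In every case with α > 0 the real parts are negative, matching the conclusion that all solutions decay.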
2.11
Further Applications
2.11.1
(11.2)    Y(t) = A sin(νt − δ) ,

with the constants A and δ depending on A1 and A2. So, let us look for a particular solution directly in the form (11.2). We transform the forcing term as a linear combination of sin(νt − δ) and cos(νt − δ):

sin νt = sin( (νt − δ) + δ ) = sin(νt − δ) cos δ + cos(νt − δ) sin δ .
A( 2 2 ) = cos
A = sin .
( 2
1
.
2 )2 + 2 2
, or = tan1 2
.
tan = 2
2
2
We computed a particular solution
1
Y (t) = p 2
sin(t ), where = tan1 2
.
2
2
2
2
2
( ) +
We now make a physically reasonable assumption that the damping coefficient γ is small, so that γ² − 4ω² < 0. Then the characteristic equation for
the homogeneous equation corresponding to (11.1),
r² + γr + ω² = 0 ,
has a pair of complex roots −γ/2 ± iq/2, with q = √(4ω² − γ²), and the general solution of (11.1) is
y(t) = c₁ e^{−γt/2} cos(qt/2) + c₂ e^{−γt/2} sin(qt/2) + [ 1/√( (ω² − ν²)² + γ²ν² ) ] sin(νt − δ) .
The first two terms of this solution are called the transient oscillations,
because they quickly tend to zero, as the time t goes on (sic transit gloria
mundi). So the last term, Y(t), describes the oscillations in the
long run. We see that oscillations of Y(t) are bounded, no matter what is
the frequency ν of the forcing term. The resonance is gone! Moreover, the
largest amplitude of Y(t) occurs not at ν = ω, but at a slightly smaller
value of ν. Indeed, the maximal amplitude happens when the quantity in
the denominator, (ω² − ν²)² + γ²ν², is the smallest. This quantity is a
quadratic polynomial in ν², and its minimum occurs at ν = √(ω² − γ²/2) .
2.11.2
Recall the step function
u_c(t) = 0 if t ≤ c, and u_c(t) = 1 if t > c .
Example Solve the initial value problem
y'' + 4y = f(t), y(0) = 0, y'(0) = 3 ,
where
f(t) = 0 if t ≤ π/4, and f(t) = t + 1 if t > π/4 .
(No external force is applied to the spring before the time t = π/4, and the
force is equal to t + 1 afterwards. The forcing function can be written as
f(t) = u_{π/4}(t)(t + 1).)
The problem naturally breaks down into two parts. When 0 < t ≤ π/4,
we are solving
y'' + 4y = 0 .
Its general solution is y(t) = c₁ cos 2t + c₂ sin 2t. From the initial conditions,
we find that c₁ = 0, c₂ = 3/2, so that
y(t) = (3/2) sin 2t, for t ≤ π/4 .
(11.4)
At later times, when t > π/4, our equation is
y'' + 4y = t + 1 .
(11.5)
But what are the new initial conditions at the time t = π/4? Clearly, we
can get them from (11.4):
y(π/4) = 3/2 , y'(π/4) = 0 .
(11.6)
The general solution of (11.5) is y(t) = (1/4)t + 1/4 + c₁ cos 2t + c₂ sin 2t. Calculating c₁ and c₂ from the initial conditions in (11.6), we find that
y(t) = (3/2) sin 2t, if t ≤ π/4 ,
y(t) = (1/4)t + 1/4 + (1/8) cos 2t + (5/4 − π/16) sin 2t, if t > π/4 .
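One can verify that the two pieces match at t = π/4, and that the second piece satisfies y'' + 4y = t + 1. A minimal Python check (the finite-difference step and the sample point are arbitrary choices):

```python
import math

PI4 = math.pi / 4

def y(t):
    # the piecewise solution assembled above
    if t <= PI4:
        return 1.5 * math.sin(2 * t)
    return 0.25 * t + 0.25 + 0.125 * math.cos(2 * t) + (1.25 - math.pi / 16) * math.sin(2 * t)

def ode_residual(t, h=1e-4):
    """Residual of y'' + 4y - (t + 1) on the second branch, via central differences."""
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return ypp + 4 * y(t) - (t + 1)
```

Both one-sided limits at π/4 equal 3/2, and the residual on the second branch is at the level of the finite-difference error.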
2.11.3
Oscillations of a Pendulum
Assume that a small ball of mass m is attached to one end of a rigid rod of
length l, while the other end of the rod is attached to the ceiling. Assume
also that the mass of the rod itself is so small, that it can be neglected.
Clearly, the ball will move on an arch of a circle of radius l. Let θ = θ(t)
be the angle that the pendulum makes with the vertical line, at the time
t. We assume that θ > 0 if the pendulum is to the left of the vertical line,
and θ < 0 to the right of the vertical. If the pendulum moves by an angle of
θ radians, it covers the distance lθ = lθ(t). It follows that lθ'(t) gives the
velocity, and lθ''(t) the acceleration. We assume that the only force acting
on the mass is the force of gravity. Only the projection of this force on
the tangent line to the circle is active, which is mg cos(θ + π/2) = −mg sin θ.
Newton's second law of motion gives
mlθ''(t) = −mg sin θ .
(Minus, because the force works to decrease the angle θ, when θ > 0.)
Denoting g/l = ω², we have the pendulum equation
θ''(t) + ω² sin θ(t) = 0 .
[Figure: a pendulum — a rod of length l, making an angle θ with the vertical, with the mass m at its end.]
If the oscillation angle θ(t) is small, then sin θ(t) ≈ θ(t), giving us again
a harmonic oscillator
θ''(t) + ω²θ(t) = 0 ,
this time as a model of small oscillations of a pendulum.
2.11.4
Sympathetic Oscillations
Suppose that we have two pendulums hanging from the ceiling, and they are
coupled (connected) through a weightless spring. Let x1 denote the angle
the left pendulum makes with the vertical line. We consider this angle to
be positive if the pendulum is to the left of the vertical, and x1 < 0 if
the pendulum is to the right of the vertical. Let x2 be the angle the right
pendulum makes with the vertical, with the same assumptions on its sign.
We assume that x1 and x2 are small in absolute value, which means that
each pendulum separately can be modeled by a harmonic oscillator. For
coupled pendulums the model is
x₁'' + ω²x₁ = −k(x₁ − x₂) ,
x₂'' + ω²x₂ = k(x₁ − x₂) .
(11.7)
2.12
2.12.1
Fourier Series
For vectors in three dimensions, one of the central notions is that of the
scalar product (also known as the inner product or the dot product).
For two vectors x = (x₁, x₂, x₃) and y = (y₁, y₂, y₃), the scalar product is
(x, y) = x₁y₁ + x₂y₂ + x₃y₃, the norm is ||x|| = √(x, x), and the cosine of the
angle between x and y is (x, y)/(||x|| ||y||).
Similarly, for two functions f(t) and g(t), defined on the interval (−π, π),
one defines their scalar product as
(f, g) = ∫_{−π}^{π} f(t) g(t) dt .
We call the functions orthogonal if (f, g) = 0. For example, (sin t, cos t) =
∫_{−π}^{π} sin t cos t dt = 0, so that sin t and cos t are orthogonal. (Observe that the
orthogonality of these functions has nothing to do with the angle at which
their graphs intersect.) The notion of scalar product allows us to define the
norm of a function
||f|| = √(f, f) = √( ∫_{−π}^{π} f²(t) dt ) .
For example,
|| sin t|| = √( ∫_{−π}^{π} sin²t dt ) = √( ∫_{−π}^{π} ( 1/2 − (1/2) cos 2t ) dt ) = √π .
Similarly, for any positive integer n, || sin nt|| = √π, || cos nt|| = √π, and
||1|| = √(2π).
∫_{−π}^{π} cos nt dt = 0 ;  ∫_{−π}^{π} sin nt dt = 0 ;  ∫_{−π}^{π} sin nt cos mt dt = 0 .
The last three integrals are computed by using trig identities. If we divide these functions by their norms, we shall obtain an orthonormal set of
functions
1/√(2π) , cos t/√π , cos 2t/√π , . . . , cos nt/√π , . . . , sin t/√π , sin 2t/√π , . . . , sin nt/√π , . . . ,
which is similar to the vectors i, j and k. It is known that these functions form a complete set, so that any function f (t) can be represented
as their linear combination. Similarly to the formula for vectors (12.1), we
decompose an arbitrary function f (t) as
f(t) = α₀ (1/√(2π)) + Σ_{n=1}^{∞} ( αₙ cos nt/√π + βₙ sin nt/√π ) ,
where
α₀ = ( f(t), 1/√(2π) ) = (1/√(2π)) ∫_{−π}^{π} f(t) dt ,
αₙ = ( f(t), cos nt/√π ) = (1/√π) ∫_{−π}^{π} f(t) cos nt dt ,
βₙ = ( f(t), sin nt/√π ) = (1/√π) ∫_{−π}^{π} f(t) sin nt dt .
It is customary to rewrite this decomposition as
f(t) = a₀ + Σ_{n=1}^{∞} ( aₙ cos nt + bₙ sin nt ) ,
with
a₀ = (1/(2π)) ∫_{−π}^{π} f(t) dt ,
aₙ = (1/π) ∫_{−π}^{π} f(t) cos nt dt ,
bₙ = (1/π) ∫_{−π}^{π} f(t) sin nt dt .
Example Let f(t) = t + π on the interval (−π, π). Compute
a₀ = (1/(2π)) ∫_{−π}^{π} (t + π) dt = (1/(2π)) · (1/2)(t + π)² |_{−π}^{π} = π .
Using guess-and-check integration,
aₙ = (1/π) ∫_{−π}^{π} (t + π) cos nt dt = (1/π) [ (t + π) (sin nt)/n + (cos nt)/n² ] |_{−π}^{π} = 0 ,
bₙ = (1/π) ∫_{−π}^{π} (t + π) sin nt dt = (1/π) [ −(t + π) (cos nt)/n + (sin nt)/n² ] |_{−π}^{π}
= −(2/n) cos nπ = −(2/n) (−1)ⁿ = (2/n) (−1)^{n+1} .
(Observe that cos nπ is equal to 1 for even n, and to −1 for odd n, which
may be combined as cos nπ = (−1)ⁿ.) We obtained a Fourier series for the
function f(t):
f(t) = π + Σ_{n=1}^{∞} (2/n) (−1)^{n+1} sin nt .
Consider now forced vibrations of a spring, with the forcing given by its
Fourier series:
y'' + ω²y = f(t) = a₀ + Σ_{n=1}^{∞} ( aₙ cos nt + bₙ sin nt ) .
Let us assume that ω ≠ n, for any integer n (to avoid resonance). According
to our theory, we look for a particular solution in the form Y(t) = A₀ +
Σ_{n=1}^{∞} (Aₙ cos nt + Bₙ sin nt). Plugging this in, we find
Y(t) = a₀/ω² + Σ_{n=1}^{∞} [ aₙ/(ω² − n²) cos nt + bₙ/(ω² − n²) sin nt ] .
The general solution is then
y(t) = a₀/ω² + Σ_{n=1}^{∞} [ aₙ/(ω² − n²) cos nt + bₙ/(ω² − n²) sin nt ] + c₁ cos ωt + c₂ sin ωt .
We see that the coefficients in the m-th harmonics (in cos mt and sin mt)
are large, provided that the natural frequency ω is selected to be close to m.
That is basically what happens, when one is turning the tuning knob of a
radio set. (The knob controls ω, while your favorite station broadcasts at a
frequency m, so that its signal has the form f(t) = aₘ cos mt + bₘ sin mt.)
2.13
Euler's Equation
Preliminaries
Define the sign function
sign(t) = 1 if t > 0, and sign(t) = −1 if t < 0 .
Observe that
(d/dt)|t| = sign(t), for all t ≠ 0 ,
as follows by considering separately the cases t > 0, and t < 0.
By the chain rule, we have
(d/dt) f(|t|) = f'(|t|) sign(t), for all t ≠ 0 .
(13.1)
In particular,
(d/dt)|t|^r = r|t|^{r−1} sign(t) .
Consider the Euler equation
t²y'' + aty' + by = 0 ,
(13.2)
where a and b are given numbers. Assume first that t > 0. We look for
solution in the form y = t^r, with the constant r to be determined. Plugging
this in, gives
t² r(r − 1)t^{r−2} + at r t^{r−1} + bt^r = 0 .
Dividing by t^r, we obtain the characteristic equation
r(r − 1) + ar + b = 0 .
Example t²y'' + 2ty' − 2y = 0.
The characteristic equation
r(r − 1) + 2r − 2 = 0
has roots r₁ = −2, and r₂ = 1. Solution: y(t) = c₁t^{−2} + c₂t, valid for all
t ≠ 0. Another general solution valid for all t ≠ 0 is y(t) = c₁|t|^{−2} + c₂|t| =
c₁t^{−2} + c₂|t|. This is a truly different solution! Why such an unexpected
complexity? If one divides this equation by t², then the functions p(t) = 2/t
and g(t) = −2/t² from our general theory, are both discontinuous at t = 0.
We have a singularity at t = 0, and, in general, solution y(t) is not even
defined at t = 0 (as we see in this example), and that is the reason for the
complexity. However, when solving initial value problems, it does not matter
which form of the general solution one uses. For example, if we prescribe
some initial conditions at t = −1, then both forms of the general solution
are valid only on the interval (−∞, 0), and on that interval both forms are
equivalent.
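Both forms of the general solution can be checked numerically. Here is a small sketch (the coefficient values 3 and 5, the sample points, and the finite-difference step are arbitrary choices) that evaluates the residual of t²y'' + 2ty' − 2y:

```python
def residual(y, t, h=1e-5):
    """Residual of t^2 y'' + 2 t y' - 2 y at t, using central differences."""
    yp = (y(t + h) - y(t - h)) / (2 * h)
    ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2
    return t ** 2 * ypp + 2 * t * yp - 2 * y(t)

y_a = lambda t: 3 * t ** -2 + 5 * t            # c1 t^{-2} + c2 t
y_b = lambda t: 3 * abs(t) ** -2 + 5 * abs(t)  # c1 |t|^{-2} + c2 |t|
```

The residuals vanish (up to finite-difference error) at points on either side of the singularity t = 0, for both forms.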
We now turn to the cases of equal roots, and of complex roots, for the
characteristic equation. We could proceed similarly to the linear equations
with constant coefficients. Instead, we make a change of the independent variable from t to a new variable s, by letting t = e^s, or s = ln t. By the chain
rule
dy/dt = (dy/ds)(ds/dt) = (dy/ds)(1/t) .
Using the product rule, and then the chain rule,
d²y/dt² = (d/dt)[ (dy/ds)(1/t) ] = (d²y/ds²)(1/t²) − (dy/ds)(1/t²) .
Then Euler's equation (13.2) becomes
d²y/ds² − dy/ds + a(dy/ds) + by = 0 .
This is a linear equation with constant coefficients! We can solve it for
any a and b. Let us use primes again to denote the derivatives in s in this
equation. Then we have
y'' + (a − 1)y' + by = 0 .
(13.4)
Its characteristic equation is
r² + (a − 1)r + b = 0 .
Example t²y'' + ty' + 4y = 0, y(−1) = 0, y'(−1) = 3.
Here a = 1, b = 4, so that the characteristic equation r² + 4 = 0
has a pair of complex roots ±2i. Here p = 0, q = 2, and the general solution,
valid for both positive and negative t, is
y(t) = c₁ cos(2 ln |t|) + c₂ sin(2 ln |t|) .
From the first initial condition, y(−1) = c₁ = 0, so that y(t) = c₂ sin(2 ln |t|).
To use the second initial condition, we need to differentiate y(t) at t =
−1. Observe that for negative t, |t| = −t, and so y(t) = c₂ sin(2 ln(−t)).
Then y'(t) = c₂ cos(2 ln(−t)) (2/t), and y'(−1) = −2c₂ = 3, giving c₂ = −3/2.
Answer: y(t) = −(3/2) sin(2 ln |t|).
Example t²y'' − 3ty' + 4y = t − 2, t > 0.
This is a non-homogeneous equation. We look for a particular solution in the
form Y = At + B. Plugging this in, we compute Y = t − 1/2. The fundamental
solution set of the corresponding homogeneous equation is given by t² and
t² ln t, as we saw above. The general solution is y = t − 1/2 + c₁t² + c₂t² ln t.
2.14
2.14.1
For a complex number x + iy, one can use the point (x, y) to represent it.
This turns the usual plane into the complex plane. The point (x, y) can also
be identified by its polar coordinates (r, θ). We shall always take r > 0.
Then
z = x + iy = r cos θ + ir sin θ = r(cos θ + i sin θ)
gives us the polar form of a complex number z. Using Euler's formula, we can
also write z = re^{iθ}. For example, −2i = 2e^{i3π/2}, because the point (0, −2)
corresponds to r = 2, θ = 3π/2. The n-th roots of z are given by
z^{1/n} = r^{1/n} e^{i(θ + 2πm)/n} , m = 0, 1, . . . , n − 1 .
(14.1)
Here r^{1/n} is the positive n-th root of the positive number r. (The high
school n-th root.) When m varies from 0 to n − 1, we get different values,
and then the roots repeat themselves. There are n complex n-th roots of
any complex number (and in particular, of any real number). All of these
roots lie on a circle of radius r^{1/n} around the origin, and the difference in
polar angles between any two neighbors is 2π/n.
Example Solve the equation: z⁴ + 16 = 0.
We need the four complex roots of −16 = 16e^{i(π + 2πm)}. The formula (14.1)
gives
(−16)^{1/4} = 2e^{i(π/4 + πm/2)} , m = 0, 1, 2, and 3 .
When m = 0, we get the root 2e^{iπ/4} = √2 + i√2; the other roots are computed
similarly. They come in two complex conjugate pairs: √2 ± i√2
and −√2 ± i√2. In the complex plane, they all lie on a circle of radius 2, and
the difference in polar angles between any two neighbors is π/2.
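The formula (14.1) is easy to implement with complex arithmetic. A minimal illustrative sketch (the function name nth_roots is ours) computing the four roots of z⁴ + 16 = 0:

```python
import cmath
import math

def nth_roots(w, n):
    """All n complex n-th roots of w, following formula (14.1)."""
    r, theta = abs(w), cmath.phase(w)
    return [r ** (1 / n) * cmath.exp(1j * (theta + 2 * math.pi * m) / n)
            for m in range(n)]

roots = nth_roots(-16, 4)   # the four solutions of z^4 + 16 = 0
```

Each returned value z satisfies z⁴ = −16, and √2 + i√2 appears among them.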
Example Solve the equation: z³ + 8 = 0.
We need the three complex roots of −8. One of them is z₁ = −2 = 2e^{iπ},
and the other two lie on the circle of radius 2, at an angle 2π/3 away, so
that z₂ = 2e^{iπ/3} = 1 + √3 i, and z₃ = 2e^{−iπ/3} = 1 − √3 i. (Alternatively, the
roots may be computed directly by the formula (14.1).)
[Figure: the three roots of z³ + 8 = 0, on the circle of radius 2.]
2.14.2
a₀y'''' + a₁y''' + a₂y'' + a₃y' + a₄y = 0 ,
solutions, if the root is repeated three times, and so on.) The following cases
may occur for the n-th order equation (14.4).
Case 1 r₁ is a simple real root. Then it brings e^{r₁t} into the fundamental
set.
Case 2 r₁ is a real root repeated s times. Then it brings the following s
solutions into the fundamental set: e^{r₁t}, te^{r₁t}, . . ., t^{s−1}e^{r₁t}.
Case 3 p + iq and p − iq are simple complex roots. They contribute e^{pt} cos qt
and e^{pt} sin qt to the fundamental set.
Case 4 p + iq and p − iq are repeated s times each. They bring the following
2s solutions into the fundamental set: e^{pt} cos qt and e^{pt} sin qt, te^{pt} cos qt and
te^{pt} sin qt, . . ., t^{s−1}e^{pt} cos qt and t^{s−1}e^{pt} sin qt.
Example y'''' − y = 0. The characteristic equation is
r⁴ − 1 = 0 .
We solve it by factoring
(r − 1)(r + 1)(r² + 1) = 0 .
The roots are 1, −1, i, −i. The general solution: y(t) = c₁e^t + c₂e^{−t} +
c₃ cos t + c₄ sin t.
Example y''' − 3y'' + 3y' − y = 0. The characteristic equation is
r³ − 3r² + 3r − 1 = 0 .
This is a cubic equation. You probably did not study how to solve it by
Cardano's formula. Fortunately, you may remember that the quantity on
the left is an exact cube:
(r − 1)³ = 0 .
Let us suppose you did not know the formula for the cube of a difference.
Then one can guess that r = 1 is a root. This means that the cubic polynomial can be factored, with one factor being r 1. The other factor is
then found by the long division. The other factor is a quadratic polynomial,
whose roots are easy to find.
Example y''' − y'' + 3y' + 5y = 0. The characteristic equation is
r³ − r² + 3r + 5 = 0 .
We need to guess a root. The procedure for guessing a root (for textbook
examples) is a simple one: try r = 0, r = ±1, r = ±2, and then give up.
We see that r = −1 is a root. It follows that the first factor is r + 1, and
the second one is found by the long division:
(r + 1)(r² − 2r + 5) = 0 .
The roots of the quadratic are r₂ = 1 − 2i, and r₃ = 1 + 2i. The general
solution: y(t) = c₁e^{−t} + c₂e^t cos 2t + c₃e^t sin 2t.
Example y'''' + 2y'' + y = 0. The characteristic equation is
r⁴ + 2r² + 1 = 0 .
It can be solved by factoring
(r² + 1)² = 0 .
(Or one could set z = r², and obtain a quadratic equation for z.) The
roots are i, −i, each repeated twice. The general solution: y(t) = c₁ cos t +
c₂ sin t + c₃t cos t + c₄t sin t.
Example y'''' + 16y = 0. The characteristic equation is
r⁴ + 16 = 0 .
Its solutions are the four complex roots of −16, computed above: ±√2 ± √2 i.
The general solution is
y(t) = c₁e^{√2 t} cos √2 t + c₂e^{√2 t} sin √2 t + c₃e^{−√2 t} cos √2 t + c₄e^{−√2 t} sin √2 t .
2.14.3
Non-Homogeneous Equations
The theory is parallel to the second order case. Again, we need a particular
solution.
Example y⁽⁵⁾ + 9y''' = 3t − sin 2t.
The characteristic equation r⁵ + 9r³ = r³(r² + 9) = 0 has roots 0, 0, 0, ±3i, so
that the general solution of the corresponding homogeneous equation is
c₁ + c₂t + c₃t² + c₄ cos 3t + c₅ sin 3t. We produce a particular solution in the
form Y(t) = Y₁(t) + Y₂(t),
where Y₁(t) is a particular solution of y⁽⁵⁾ + 9y''' = 3t, and Y₂(t) is a
particular solution of y⁽⁵⁾ + 9y''' = −sin 2t. We guess that Y₁(t) = At⁴,
and compute A = 1/72, and that Y₂(t) = B cos 2t, which gives B = −1/40. We
have Y(t) = (1/72)t⁴ − (1/40) cos 2t.
Answer: y(t) = (1/72)t⁴ − (1/40) cos 2t + c₁ + c₂t + c₃t² + c₄ cos 3t + c₅ sin 3t.
2.14.4
Problems
I. Solve the following initial value problems.
2. y'' + y = f(t), where f(t) = 0 for 0 < t < π, and f(t) = t for t > π,
y(0) = 2, y'(0) = 0.
II. Find the general solution, valid for t > 0.
1. t²y'' − 2ty' + 2y = 0.
Answer. y = c₁t + c₂t².
2. t²y'' + ty' + 4y = 0.
Answer. y = c₁ cos(2 ln t) + c₂ sin(2 ln t).
3. t²y'' + 5ty' + 4y = 0.
Answer. y = c₁t^{−2} + c₂t^{−2} ln t.
4. t²y'' + 5ty' + 5y = 0.
5. t²y'' − 3ty' = 0.
Answer. y = c₁ + c₂t⁴.
6. y'' + (1/(4t²)) y = 0.
Answer. y = c₁√t + c₂√t ln t.
III. Find the general solution, valid for all t ≠ 0.
1. t²y'' + ty' + 4y = 0.
Answer. y = c₁ cos(2 ln |t|) + c₂ sin(2 ln |t|).
2. 2t²y'' − ty' + y = 0.
Answer. y = c₁|t|^{1/2} + c₂t.
3. 2t²y'' − ty' + y = t² − 3.
Answer. y = (1/3)t² − 3 + c₁|t|^{1/2} + c₂t.
4. t²y'' − ty' + 5y = 0, y(1) = 0, y'(1) = 2.
Answer. y = t sin(2 ln t).
IV. Find all of the roots of the following equations.
2. r³ + 27 = 0.
Answer. −3, 3/2 + (3√3/2) i, 3/2 − (3√3/2) i.
3. r⁴ − 16 = 0.
Answer. ±2, ±2i.
4. r³ − 3r² + r + 1 = 0.
Answer. 1, 1 − √2, 1 + √2.
5. r⁴ + 1 = 0.
Answer. e^{iπ/4}, e^{i3π/4}, e^{i5π/4}, e^{i7π/4}.
6. r⁴ + 4 = 0.
Answer. 1 + i, 1 − i, −1 + i, −1 − i.
7. r⁴ + 8r² + 16 = 0.
Answer. ±2i, each repeated twice.
8. r⁴ + 5r² + 4 = 0.
Answer. ±i, ±2i.
V. Find the general solution.
1. y''' − y = 0.
Answer. y = c₁e^t + c₂e^{−t/2} cos(√3 t/2) + c₃e^{−t/2} sin(√3 t/2).
2. y''' − 3y'' + y' + y = 0.
Answer. y = c₁e^t + c₂e^{(1−√2)t} + c₃e^{(1+√2)t}.
3. y⁽⁴⁾ − 8y'' + 16y = 0.
Answer. y = c₁e^{2t} + c₂te^{2t} + c₃e^{−2t} + c₄te^{−2t}.
4. y⁽⁴⁾ + 8y'' + 16y = 0.
Answer. y = c₁ cos 2t + c₂ sin 2t + c₃t cos 2t + c₄t sin 2t.
5. y⁽⁴⁾ + y = 0.
Answer. y = c₁e^{t/√2} cos(t/√2) + c₂e^{t/√2} sin(t/√2) + c₃e^{−t/√2} cos(t/√2) + c₄e^{−t/√2} sin(t/√2).
6. y''' − y = t².
Answer. y = −t² + c₁e^t + c₂e^{−t/2} cos(√3 t/2) + c₃e^{−t/2} sin(√3 t/2).
7. y⁽⁶⁾ − y'' = 0.
Answer. y = c₁ + c₂t + c₃e^t + c₄e^{−t} + c₅ cos t + c₆ sin t.
8. y⁽⁸⁾ − y⁽⁶⁾ = sin t.
Answer. y = (1/2) sin t + c₁ + c₂t + c₃t² + c₄t³ + c₅t⁴ + c₆t⁵ + c₇e^t + c₈e^{−t}.
9. y'''' + 4y = 4t² − 1.
Answer. y = t² − 1/4 + c₁e^t cos t + c₂e^t sin t + c₃e^{−t} cos t + c₄e^{−t} sin t.
VII. Solve the following initial value problems.
1. y''' + 4y' = 0, y(0) = 1, y'(0) = 1, y''(0) = 2.
Answer. y = 3/2 − (1/2) cos 2t + (1/2) sin 2t.
VIII.
1. Find the linear homogeneous differential equation of the lowest possible
order, which has the following functions as its solutions: 1, e^{−2t}, sin t.
Answer. y'''' + 2y''' + y'' + 2y' = 0.
2. Find the general solution of
(t + 1)²y'' − 4(t + 1)y' + 6y = 0 .
Hint: Look for the solution in the form y = (t + 1)^r.
Answer. y = c₁(t + 1)² + c₂(t + 1)³.
3. Solve
2y'y''' − 3y''² = 0 .
(This equation is connected to the Schwarzian derivative, defined as
Sf = f'''/f' − (3/2)(f''/f')² ;
our equation says that Sy = 0.)
2.15
E(0)/a(0) = constant.
The equation
y'' + n²y = 0
has a solution y(t) = sin nt. The larger n is, the more roots this solution has,
and so it oscillates faster. In 1836, J.C.F. Sturm discovered the following
theorem.
Theorem 2.15.1 (The Sturm Comparison Theorem.) Let y(t) and v(t) be
respectively non-trivial solutions of the following equations
(15.1)
y 00 + b(t)y = 0 ,
(15.2)
v 00 + b1 (t)v = 0 .
Assume that the given continuous functions b(t), and b₁(t) satisfy
b₁(t) ≥ b(t), for all t .
(15.3)
In case b₁(t) = b(t) on some interval, assume additionally that y(t) and v(t)
are not constant multiples of one another. Then v(t) has a root between any
two consecutive roots of y(t).
Proof:
Let t1 < t2 be two consecutive roots of y(t), y(t1 ) = y(t2 ) = 0.
We may assume that y(t) > 0 on (t1 , t2 ) (in case y(t) < 0 on (t1 , t2 ), we
may consider y(t), which is also a solution of (15.1)). Assume, contrary to
what we want to prove, that v(t) has no roots on (t1 , t2 ). We may assume
that v(t) > 0 on (t1 , t2 ).
Multiply the equation (15.2) by y(t), and subtract from that the equation
(15.1), multiplied by v(t). The result may be written as
( v'y − vy' )' + (b₁ − b)yv = 0 .
Integrating this over (t₁, t₂), gives
v(t₁)y'(t₁) − v(t₂)y'(t₂) + ∫_{t₁}^{t₂} (b₁ − b)yv dt = 0 .
(15.4)
All three terms on the left are non-negative. If b₁(t) > b(t) on some subinterval of (t₁, t₂), then the third term is strictly positive, and we have
a contradiction. We now consider the case when b₁(t) = b(t) for all t ∈
(t₁, t₂). We claim that v(t) cannot vanish at either t₁ or t₂, so that v(t₁) >
0, and v(t₂) > 0. Indeed, in case v(t₁) = 0, we consider the function
z(t) = [ y'(t₁)/v'(t₁) ] v(t). This function is a solution of (15.1), and z(t₁) = y(t₁) =
0, z'(t₁) = y'(t₁), so that by the uniqueness of solutions for initial value
0, z 0 (t1 ) = y 0 (t1 ), so that by the uniqueness of solutions for initial value
problems, z(t) = y(t) for all t, and then y(t) and v(t) are constant multiples
of one another, which is not allowed. It follows that v(t1 ) > 0, and similarly
we prove that v(t₂) > 0. The uniqueness theorem for initial value problems
also implies that y'(t₁) > 0, and y'(t₂) < 0 (otherwise, y(t) ≡ 0). Then the
first two terms in (15.4) are strictly positive, and we have a contradiction in
(15.4).
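For a concrete illustration of the theorem, take b(t) = 1 and b₁(t) = 4: y(t) = sin t has consecutive roots 0 and π, and v(t) = sin 2t must vanish between them (it does, at π/2). A small numerical sketch locating that root (the scan-and-bisect routine is an illustrative choice):

```python
import math

def first_root_between(f, a, b, steps=10000):
    """Locate a sign change of f strictly inside (a, b), then refine by bisection."""
    h = (b - a) / steps
    t = a + h
    while t + h < b:
        if f(t) == 0.0:
            return t
        if f(t) * f(t + h) < 0:
            lo, hi = t, t + h
            for _ in range(80):          # bisection refinement
                mid = (lo + hi) / 2
                if f(lo) * f(mid) > 0:
                    lo = mid
                else:
                    hi = mid
            return (lo + hi) / 2
        t += h
    return None

# y(t) = sin t solves y'' + y = 0, with consecutive roots 0 and pi;
# v(t) = sin 2t solves v'' + 4v = 0, and must vanish between them.
root = first_root_between(lambda t: math.sin(2 * t), 0.0, math.pi)
```

The located root agrees with π/2, as the theorem predicts.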
[Figure: the solution v(t) vanishes between t₁ and t₂, two consecutive roots of y(t).]
Lemma 2.15.1 Let u(t) and v(t) be respectively non-trivial solutions of
u'' + q(t)u = 0, and v'' + q₁(t)v = 0, with given continuous functions satisfying
q₁(t) ≥ q(t)
(15.5)
on some interval (a, b). Then v(t)
oscillates faster than u(t), provided that both functions are positive. Namely,
if v(t) > 0 on (a, b), then u(t) > 0 on (a, b).
If, on the other hand, u(t) > 0 on (a, b) and u(b) = 0, then v(t) must
vanish on (a, b].
Proof: As before, we have
( v'u − vu' )' ≤ 0, for x ∈ (a, b) .
(15.6)
Assume, contrary to what we want to prove, that u(ξ) = 0 at some ξ ∈ (a, b).
Case 1. The inequality in (15.5) is strict on some sub-interval of (a, ξ). Then
the same is true for the inequality (15.6). Integrating (15.6) over (a, ξ),
we have
−v(ξ)u'(ξ) < 0 ,
which is a contradiction, because u'(ξ) ≤ 0, and v(ξ) > 0.
We shall need the following Calculus formula, discovered by Mauro Picone in 1909.
Lemma 2.15.2 (Picone's Identity) Assume that the functions a(t) and a₁(t)
are differentiable, the functions u(t) and v(t) are twice differentiable, and
v(t) > 0 for all t. Then
[ (u/v)( v a u' − u a₁ v' ) ]' = u(au')' − (u²/v)(a₁v')' + (a − a₁)u'² + a₁ [ u' − (u/v)v' ]² .
Proof: Expanding the derivative on the left, we have
[ (u/v)( v a u' − u a₁ v' ) ]' = [ u a u' − (u²/v) a₁ v' ]'
= a u'² + [ u(au')' − (u²/v)(a₁v')' ] − a₁ v' (u²/v)' ≡ A + B + C .
The middle term B is the same as the first two terms on the right in Picone's
identity. It remains to prove that A + C is equal to the sum of the last two
terms on the right in Picone's identity. We expand C, and after a
cancellation, we get
A + C = a u'² − a₁ [ 2(u/v) u'v' − (u/v)² v'² ]
= (a − a₁) u'² + a₁ [ u'² − 2(u/v) u'v' + (u/v)² v'² ] = (a − a₁) u'² + a₁ [ u' − (u/v)v' ]² .
Consider now the general second order equation
p(t)u'' + q(t)u' + r(t)u = 0 .
The given functions p(t), q(t) and r(t) are assumed to be differentiable, and
p(t) > 0 for all t. We divide this equation by p(t),
u'' + [ q(t)/p(t) ] u' + [ r(t)/p(t) ] u = 0 ,
and then multiply by the integrating factor a(t) = e^{∫ [q(t)/p(t)] dt}. Denoting b(t) =
[ r(t)/p(t) ] a(t), we put the equation into the self-adjoint form
( a(t)u' )' + b(t)u = 0 .
Theorem 2.15.2 (The Sturm-Picone Comparison Theorem) Let u(t) and v(t)
be respectively non-trivial solutions of the equations
( a(t)u' )' + b(t)u = 0 ,
( a₁(t)v' )' + b₁(t)v = 0 .
Assume that the given differentiable functions a(t), a1 (t), and continuous
functions b(t), b1 (t) satisfy
b₁(t) ≥ b(t), and 0 < a₁(t) ≤ a(t)
for all t.
In case a1 (t) = a(t) and b1 (t) = b(t) for all t, assume additionally that u(t)
and v(t) are not constant multiples of one another. Then v(t) has a root
between any two consecutive roots of u(t).
Proof:
The proof is similar to that of the Sturm Comparison Theorem.
Let t1 < t2 be two consecutive roots of u(t), u(t1 ) = u(t2 ) = 0. As before,
we may assume that u(t) > 0 on (t1 , t2 ). Assume, contrary to what we
want to prove, that v(t) has no roots on (t1 , t2 ). We may assume, as before,
that v(t) > 0 on (t₁, t₂). We apply Picone's identity. Expressing from the
corresponding equations, (a(t)u')' = −b(t)u and (a₁(t)v')' = −b₁(t)v, we
rewrite Picone's identity as
[ (u/v)( v a u' − u a₁ v' ) ]' = (b₁ − b)u² + (a − a₁)u'² + a₁ [ u' − (u/v)v' ]² .
Integrating over (t₁, t₂), and using that u(t₁) = u(t₂) = 0, we get
0 = ∫_{t₁}^{t₂} (b₁ − b)u² dt + ∫_{t₁}^{t₂} (a − a₁)u'² dt + ∫_{t₁}^{t₂} a₁ [ u' − (u/v)v' ]² dt .
(In case v(t₁) = 0, we have v'(t₁) > 0 by the uniqueness of solutions for
initial value problems. Then lim_{t→t₁} u²/v = lim_{t→t₁} 2uu'/v' = 0. Similarly, in case
v(t₂) = 0, the upper limit vanishes for the integral on the left.) The integrals
on the right are non-negative. We obtain an immediate contradiction, unless
a₁(t) = a(t) and b₁(t) = b(t) for all t ∈ (t₁, t₂). In such a case, we must also
have u' − (u/v)v' = 0 on (t₁, t₂) (so that all three integrals vanish). But then
u'/u = v'/v, and integrating we see that u(t) and v(t) are constant multiples of
one another, contradicting our assumption.
We define q⁺(t) = max(q(t), 0), the positive part of the function q(t).
Similarly, one defines q⁻(t) = min(q(t), 0), the negative part of the function
q(t). Clearly, q(t) = q⁺(t) + q⁻(t). Our next goal is the famous Lyapunov
inequality. It will follow from the following lemma.
Lemma 2.15.3 Assume that u(t) is twice continuously differentiable, and
it satisfies the following conditions on some interval (0, b) (here q(t) is a
given continuous function)
u'' + q(t)u = 0, u(0) = u(b) = 0, u(t) > 0 on (0, b) .
(15.9)
Then
∫₀ᵇ q⁺(t) dt > 4/b .
Proof: Let v(t) be the solution of
v'' + q⁺(t)v = 0, v(0) = 0, v'(0) = c > 0 .
(15.10)
By Lemma 2.15.1, v(t) must vanish on (0, b]. Let t₂ ∈ (0, b] be the first root
of v(t), so that v(t) > 0 on (0, t₂). (In case q⁻(t) ≡ 0, we have t₂ = b,
because v(t) is a constant multiple of u(t).) Integrating (15.10) twice (treating
q⁺(t)v as a known quantity),
v(t) = ct − ∫₀ᵗ (t − s)q⁺(s)v(s) ds , for t ∈ [0, t₂] .
(15.11)
From v(t₂) = 0, it follows that c = (1/t₂) ∫₀^{t₂} (t₂ − s)q⁺(s)v(s) ds. Substituting this
back into (15.11), we express
t₂ v(t) = (t₂ − t) ∫₀ᵗ s q⁺(s)v(s) ds + t ∫ₜ^{t₂} (t₂ − s) q⁺(s)v(s) ds .
Let t₀ ∈ (0, t₂) be the point of maximum of v(t). Setting t = t₀, and estimating
v(s) ≤ v(t₀) in both integrals,
t₂ v(t₀) < v(t₀) [ (t₂ − t₀) ∫₀^{t₀} s q⁺(s) ds + t₀ ∫_{t₀}^{t₂} (t₂ − s) q⁺(s) ds ]
≤ v(t₀) ∫₀^{t₂} (t₂ − s) s q⁺(s) ds .
Dividing by t₂ v(t₀), gives
1 < ∫₀^{t₂} (1 − s/t₂) s q⁺(s) ds ≤ ∫₀ᵇ (1 − s/b) s q⁺(s) ds < (b/4) ∫₀ᵇ q⁺(s) ds ,
which implies our inequality. (On the last step we estimated the function
(1 − s/b)s by its maximum value of b/4.)
Theorem 2.15.3 (Lyapunov's inequality) If a non-trivial solution of the
equation
u'' + q(t)u = 0
has two roots on an interval (a, b), then
∫ₐᵇ q⁺(t) dt > 4/(b − a) .
Proof: Let a ≤ t₁ < t₂ ≤ b be two consecutive roots of u(t). We may
assume that u(t) > 0 on (t₁, t₂), and use the above lemma:
∫ₐᵇ q⁺(t) dt ≥ ∫_{t₁}^{t₂} q⁺(t) dt > 4/(t₂ − t₁) ≥ 4/(b − a) .
(We may declare the point t₁ to be the origin, to use the above lemma.)
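As a sanity check of the inequality: u(t) = sin t solves u'' + u = 0 and vanishes at 0 and π; here q⁺ ≡ 1, so ∫₀^π q⁺ dt = π, and indeed π > 4/π. A tiny illustrative sketch (the helper function name is ours):

```python
import math

def lyapunov_necessary(q_plus_integral, a, b):
    """Lyapunov's necessary condition for a solution with two roots in (a, b)."""
    return q_plus_integral > 4 / (b - a)

# For u(t) = sin t on (0, pi): q+ = 1, so the integral over (0, pi) is pi.
ok = lyapunov_necessary(math.pi, 0.0, math.pi)
```

Conversely, if ∫ q⁺ dt over a unit interval is only 1 (which is less than 4), no non-trivial solution can have two roots there.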
Chapter 3
f(x) = a₀ + a₁x + a₂x² + · · · + aₙxⁿ + · · · .
Expressing the coefficients through the derivatives of f(x) at zero gives the
Maclaurin series
f(x) = f(0) + f'(0)x + (f''(0)/2!)x² + · · · + (f⁽ⁿ⁾(0)/n!)xⁿ + · · · = Σ_{n=0}^{∞} (f⁽ⁿ⁾(0)/n!) xⁿ .
It is known that for each f(x) there is a number R, so that the Maclaurin
series converges inside the interval (−R, R), and diverges when |x| > R. R
is called the radius of convergence. For some f(x), we have R = ∞ (for
example, for sin x, cos x, eˣ), while for some series we have R = 0, and in
general 0 ≤ R ≤ ∞.
Computing the Maclaurin series for some specific functions, gives:
sin x = x − x³/3! + x⁵/5! − x⁷/7! + · · · = Σ_{n=0}^{∞} (−1)ⁿ x^{2n+1}/(2n + 1)! ,
cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + · · · = Σ_{n=0}^{∞} (−1)ⁿ x^{2n}/(2n)! ,
eˣ = 1 + x + x²/2! + x³/3! + x⁴/4! + · · · = Σ_{n=0}^{∞} xⁿ/n! ,
1/(1 − x) = 1 + x + x² + · · · + xⁿ + · · · .
The last series, called the geometric series, converges on the interval (−1, 1),
so that R = 1.
Maclaurin's series gives an approximation of f(x), for x close to zero.
For example, sin x ≈ x gives a reasonably good approximation for |x| small.
If we add one more term of the Maclaurin series, sin x ≈ x − x³/6, then, say
on the interval (−0.3, 0.3), we get an excellent approximation.
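The quality of these two approximations can be measured directly; a short illustrative sketch (the sampling grid is an arbitrary choice):

```python
import math

def max_err(f, g, a, b, n=1000):
    """Maximum of |f - g| on a uniform grid over [a, b]."""
    pts = (a + (b - a) * k / n for k in range(n + 1))
    return max(abs(f(x) - g(x)) for x in pts)

err1 = max_err(math.sin, lambda x: x, -0.3, 0.3)               # sin x ~ x
err2 = max_err(math.sin, lambda x: x - x ** 3 / 6, -0.3, 0.3)  # sin x ~ x - x^3/6
```

On (−0.3, 0.3) the one-term approximation is off by less than 0.005, while adding the cubic term reduces the error by roughly two more orders of magnitude.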
If one needs the Maclaurin series for sin x², one begins with the series for
sin x, and then replaces each x by x², obtaining
sin x² = x² − x⁶/3! + x¹⁰/5! − · · · = Σ_{n=0}^{∞} (−1)ⁿ x^{4n+2}/(2n + 1)! .
One can split Maclaurin's series into a sum of series with either even or
odd powers:
Σ_{n=0}^{∞} (f⁽ⁿ⁾(0)/n!) xⁿ = Σ_{n=0}^{∞} (f⁽²ⁿ⁾(0)/(2n)!) x^{2n} + Σ_{n=0}^{∞} (f⁽²ⁿ⁺¹⁾(0)/(2n + 1)!) x^{2n+1} .
In the following series only the odd powers have non-zero coefficients:
Σ_{n=1}^{∞} [ (1 − (−1)ⁿ)/n ] xⁿ = 2x + (2/3)x³ + (2/5)x⁵ + · · · = Σ_{n=0}^{∞} [ 2/(2n + 1) ] x^{2n+1} .
All of the series above were centered at 0. One can replace zero by any
number a, obtaining Taylor's series
f(x) = f(a) + f'(a)(x − a) + (f''(a)/2!)(x − a)² + · · · + (f⁽ⁿ⁾(a)/n!)(x − a)ⁿ + · · ·
= Σ_{n=0}^{∞} (f⁽ⁿ⁾(a)/n!) (x − a)ⁿ .
3.1.2
A Toy Problem
Let y₁(x) denote the solution of the problem
y'' + y = 0, y(0) = 1, y'(0) = 0 .
(1.2)
By y2 (x) we denote the solution of the same equation, together with the
initial conditions y(0) = 0, y 0 (0) = 1. Clearly, y1 (x) and y2 (x) are not
constant multiples of each other. Therefore, they form a fundamental set,
giving us the general solution y(x) = c1 y1 (x) + c2 y2 (x).
Let us now compute y₁(x), the solution of (1.2). From the initial conditions, we already know the first two terms of its Maclaurin series
y(x) = y(0) + y'(0)x + (y''(0)/2!)x² + · · · + (y⁽ⁿ⁾(0)/n!)xⁿ + · · · .
To get more terms, we need to compute the derivatives of y(x) at zero. From
the equation (1.2), y''(0) = −y(0) = −1. We now differentiate the equation
(1.2): y''' + y' = 0, and then set x = 0:
y'''(0) = −y'(0) = 0 .
Differentiating again, we have y'''' + y'' = 0, and setting x = 0,
y''''(0) = −y''(0) = 1 .
On the next step:
y⁽⁵⁾(0) = −y'''(0) = 0 .
We see that all derivatives of odd order vanish at x = 0, while the derivatives
of even order alternate between 1 and −1. We write down the Maclaurin
series:
y₁(x) = 1 − x²/2! + x⁴/4! − · · · = cos x .
Similarly,
y₂(x) = x − x³/3! + x⁵/5! − · · · = sin x .
Consider now the general homogeneous equation
P(x)y'' + Q(x)y' + R(x)y = 0 ,
(1.3)
where continuous functions P(x), Q(x) and R(x) are given. We shall always
denote by y₁(x) the solution of (1.3) satisfying the initial conditions y(0) = 1,
y'(0) = 0, and by y₂(x) the solution of (1.3) satisfying the initial conditions
y(0) = 0, y'(0) = 1. If one needs to solve (1.3), together with the given
initial conditions
y(0) = α, y'(0) = β ,
then the solution is
y(x) = α y₁(x) + β y₂(x) .
3.1.3
Example Solve y'' + xy' + 2y = 0.
Differentiating the equation n times gives y⁽ⁿ⁺²⁾ + xy⁽ⁿ⁺¹⁾ + (n + 2)y⁽ⁿ⁾ = 0,
and setting x = 0, we obtain the recurrence relation
y⁽ⁿ⁺²⁾(0) = −(n + 2) y⁽ⁿ⁾(0) .
To compute y₂(x), we use the initial conditions y(0) = 0, y'(0) = 1.
It is clear that all derivatives of even order are zero at x = 0. Let us continue
with the derivatives of odd order: y'''(0) = −3y'(0) = −3,
y⁽⁵⁾(0) = −5y'''(0) = (−1)² · 5 · 3 · 1 ,
y⁽⁷⁾(0) = −7y⁽⁵⁾(0) = (−1)³ · 7 · 5 · 3 · 1 .
And in general,
y⁽²ⁿ⁺¹⁾(0) = (−1)ⁿ (2n + 1)(2n − 1) · · · 3 · 1 .
Then the Maclaurin series for y₂(x) is
y₂(x) = Σ_{n=0}^{∞} (y⁽ⁿ⁾(0)/n!) xⁿ = x + Σ_{n=1}^{∞} (y⁽²ⁿ⁺¹⁾(0)/(2n + 1)!) x^{2n+1}
= x + Σ_{n=1}^{∞} (−1)ⁿ [ 1/(2n(2n − 2) · · · 4 · 2) ] x^{2n+1} = Σ_{n=0}^{∞} (−1)ⁿ [ 1/(2ⁿ n!) ] x^{2n+1} .
To compute y₁(x), we use the initial conditions y(0) = 1, y'(0) = 0. Similarly to the above, we see from the recurrence relation that all derivatives
of odd order vanish at x = 0, while the even ones satisfy
y⁽²ⁿ⁾(0) = (−1)ⁿ 2n(2n − 2) · · · 4 · 2 , for n = 1, 2, 3, . . . .
This leads to
y₁(x) = 1 + Σ_{n=1}^{∞} (−1)ⁿ [ 1/((2n − 1)(2n − 3) · · · 3 · 1) ] x^{2n} .
The general solution is
y(x) = c₁ [ 1 + Σ_{n=1}^{∞} (−1)ⁿ x^{2n}/((2n − 1)(2n − 3) · · · 3 · 1) ] + c₂ Σ_{n=0}^{∞} (−1)ⁿ x^{2n+1}/(2ⁿ n!) .
Recall the Leibniz formula for the n-th derivative of a product:
(f g)⁽ⁿ⁾ = f⁽ⁿ⁾g + n f⁽ⁿ⁻¹⁾g' + (n(n − 1)/2) f⁽ⁿ⁻²⁾g'' + · · · + (n(n − 1)/2) f''g⁽ⁿ⁻²⁾ + n f'g⁽ⁿ⁻¹⁾ + f g⁽ⁿ⁾ ,
(1.4)
where the coefficients of f⁽ⁿ⁻ᵏ⁾g⁽ᵏ⁾ are the binomial coefficients n!/(k!(n − k)!).
k!(n k)!
For this equation, any a is a regular point. Let us find the general solution
as an infinite series, centered at a = 0, the Maclauren series for y(x). We
differentiate both sides of this equation n times. When we use the formula
(1.4) to differentiate the first term, only three terms are non-zero, because
121
(n+2)
n2 2n + 4 (n)
(0) =
y (0) .
2
This relation is too involved to get a neat formula for y (n) (0) as a function
of n. However, it can be used to crank out the derivatives at zero, as many
as you wish. To compute y1 (x), we use the initial conditions y(0) = 1 and
y 0 (0) = 0. We see from the recurrence relation that all of the derivatives of
odd order are zero at x = 0. Setting n = 0 in the recurrence relation, we
have
y 00 (0) = 2y(0) = 2 .
When n = 2,
y 0000(0) = 2y 00 (0) = 4 .
Using these derivatives in the Maclauren series, we have
1
y1 (x) = 1 x2 + x4 + .
6
To compute y₂(x), we use the initial conditions y(0) = 0 and y'(0) = 1.
We see from the recurrence relation that all of the derivatives of even order
are zero. When n = 1, we get
y'''(0) = −(3/2) y'(0) = −3/2 .
Setting n = 3, we have
y⁽⁵⁾(0) = −(7/2) y'''(0) = 21/4 .
Using these derivatives in the Maclaurin series, we conclude
y₂(x) = x − (1/4) x³ + (7/160) x⁵ + · · · .
The general solution is
y(x) = c₁ [ 1 − x² + (1/6) x⁴ + · · · ] + c₂ [ x − (1/4) x³ + (7/160) x⁵ + · · · ] .
Suppose that we wish to solve the above equation, together with the
initial conditions: y(0) = 2, y'(0) = 3. Then y(0) = c₁y₁(0) + c₂y₂(0) =
c₁ = 2, and y'(0) = c₁y₁'(0) + c₂y₂'(0) = c₂ = 3. It follows that y(x) =
2y₁(x) + 3y₂(x). If we need to approximate y(x) near x = 0, say on the
interval (−0.3, 0.3), then
y(x) ≈ 2 [ 1 − x² + (1/6) x⁴ ] + 3 [ x − (1/4) x³ + (7/160) x⁵ ] .
Example Solve y'' − xy = 0, with initial conditions prescribed at x = 1.
We need to compute Taylor's series about a = 1: y(x) = Σ_{n=0}^{∞} (y⁽ⁿ⁾(1)/n!)(x − 1)ⁿ.
From the equation,
y''(1) = y(1) .
To get higher derivatives, we differentiate our equation n times, and then
set x = 1:
y⁽ⁿ⁺²⁾ − ny⁽ⁿ⁻¹⁾ − xy⁽ⁿ⁾ = 0 ;
y⁽ⁿ⁺²⁾(1) = ny⁽ⁿ⁻¹⁾(1) + y⁽ⁿ⁾(1) , n = 1, 2, . . . .
To compute y₁(x), we use the initial conditions y(1) = 1, y'(1) = 0. Then
y''(1) = 1. Setting n = 1 in the recurrence relation gives y'''(1) = y(1) + y'(1)
= 1. When n = 2, y⁽⁴⁾(1) = 2y'(1) + y''(1) = 1. Then, for n = 3,
y⁽⁵⁾(1) = 3y''(1) + y'''(1) = 4 .
We have
y₁(x) = 1 + (1/2)(x − 1)² + (1/6)(x − 1)³ + (1/24)(x − 1)⁴ + (1/30)(x − 1)⁵ + · · · .
Again, it does not seem possible to get a simple formula for the coefficients.
To compute y₂(x), we use the initial conditions y(1) = 0, y'(1) = 1.
Then y''(1) = y(1) = 0. Setting n = 1 in the recurrence relation: y⁽³⁾(1) =
y(1) + y'(1) = 1. When n = 2, y⁽⁴⁾(1) = 2y'(1) + y''(1) = 2. Then, for n = 3,
y⁽⁵⁾(1) = 3y''(1) + y'''(1) = 1. We have
y₂(x) = (x − 1) + (1/6)(x − 1)³ + (1/12)(x − 1)⁴ + (1/120)(x − 1)⁵ + · · · .
3.2
We consider again the equation
P(x)y'' + Q(x)y' + R(x)y = 0 ,
with a point a at which we want to compute the solution as a series y(x) = Σ_{n=0}^{∞} (y⁽ⁿ⁾(a)/n!)(x − a)ⁿ.
If P(a) = 0, we have a problem: one cannot compute y''(a) from the equation
(and the same problem for higher derivatives). However, if a is a simple
root of P(x), it turns out that one can still use series to produce a solution.
Namely, we assume that P(x) = (x − a)P₁(x), with P₁(a) ≠ 0. We call
x = a a mildly singular point. Dividing the equation by P₁(x), and calling
q(x) = Q(x)/P₁(x), r(x) = R(x)/P₁(x), we put it into the form
(x − a)y'' + q(x)y' + r(x)y = 0 .
The functions q(x) and r(x) are continuous near a. In case a = 0, we have
xy'' + q(x)y' + r(x)y = 0 .
(2.1)
It turns out that for this equation we cannot expect to obtain two linearly
independent solutions, by prescribing y₁(0) = 1, y₁'(0) = 0, and y₂(0) = 0,
y₂'(0) = 1. Instead, we shall look for a solution in the form of a power series.
Example Solve xy'' + 3y' − 2y = 0.
We look for a solution in the form
y = Σ_{n=0}^{∞} aₙxⁿ .
Then
y' = Σ_{n=1}^{∞} aₙn x^{n−1} ,  y'' = Σ_{n=2}^{∞} aₙn(n − 1) x^{n−2} .
Multiplying the equation by x (which gives x²y'' + 3xy' − 2xy = 0), and
plugging in the series, we obtain
Σ_{n=2}^{∞} aₙn(n − 1)xⁿ + Σ_{n=1}^{∞} 3aₙn xⁿ − Σ_{n=0}^{∞} 2aₙx^{n+1} = 0 .
(2.2)
The third series is not lined up with the other two. We therefore replace
n by n − 1 in that series, obtaining
Σ_{n=0}^{∞} 2aₙx^{n+1} = Σ_{n=1}^{∞} 2aₙ₋₁xⁿ .
We rewrite (2.2):
Σ_{n=2}^{∞} aₙn(n − 1)xⁿ + Σ_{n=1}^{∞} 3aₙn xⁿ − Σ_{n=1}^{∞} 2aₙ₋₁xⁿ = 0 .
(2.3)
We shall use the following fact: if Σ_{n=1}^{∞} bₙxⁿ = 0 for all x, then bₙ = 0 for
all n = 1, 2, . . .. Our goal is to combine the three series in (2.3) into a single
one, so that we can set all of the resulting coefficients to zero. The x term is
present in the second and the third series, but not in the first. However, we
can start the first series at n = 1, because at n = 1 its coefficient is zero.
So that we have
Σ_{n=1}^{∞} aₙn(n − 1)xⁿ + Σ_{n=1}^{∞} 3aₙn xⁿ − Σ_{n=1}^{∞} 2aₙ₋₁xⁿ = 0 .
Now for all n ≥ 1, the xⁿ term is present in all three series, so that we can
combine them into one series. We therefore just lift the coefficients:
aₙn(n − 1) + 3aₙn − 2aₙ₋₁ = 0 .
We solve for aₙ:
aₙ = [ 2/(n(n + 2)) ] aₙ₋₁ , n ≥ 1 .
Starting with a₀ = 1, we compute
a₁ = 2/(1 · 3) ,
a₂ = [ 2/(2 · 4) ] a₁ = 2²/[(1 · 2)(3 · 4)] = 2³/(2! 4!) ,
a₃ = [ 2/(3 · 5) ] a₂ = 2⁴/(3! 5!) ,
and, in general, aₙ = 2^{n+1}/(n!(n + 2)!) .
Answer: y(x) = 1 + Σ_{n=1}^{∞} [ 2^{n+1}/(n!(n + 2)!) ] xⁿ = Σ_{n=0}^{∞} [ 2^{n+1}/(n!(n + 2)!) ] xⁿ .
Example Solve xy'' − 3y' − 2y = 0.
Proceeding as before, one arrives at the recurrence relation
aₙ = [ 2/(n(n − 4)) ] aₙ₋₁ , n ≥ 1 ,
which breaks down at n = 4. We therefore look for a solution in the form of
a series starting with the x⁴ term, y = Σ_{n=4}^{∞} aₙxⁿ. Plugging this series
into the equation
x²y'' − 3xy' − 2xy = 0
(which is the original equation, multiplied by x), gives
Σ_{n=4}^{∞} aₙn(n − 1)xⁿ − Σ_{n=4}^{∞} 3aₙn xⁿ − Σ_{n=5}^{∞} 2aₙ₋₁xⁿ = 0 .
(2.5)
It follows that aₙ = [ 2/(n(n − 4)) ] aₙ₋₁ for n ≥ 5, with a₄ arbitrary. Taking
a₄ = 1, we compute
aₙ = 24 · 2^{n−4}/(n!(n − 4)!) , n ≥ 4 ,
so that
y(x) = Σ_{n=4}^{∞} [ 24 · 2^{n−4}/(n!(n − 4)!) ] xⁿ .
Example Solve the equation

x y'' + y' + x y = 0 .

This is Bessel's equation of order zero. We look for a solution in the form y = \sum_{n=0}^\infty a_n x^n. Substituting the series into the equation, multiplied by x, gives

\sum_{n=2}^\infty a_n n(n-1) x^n + \sum_{n=1}^\infty a_n n x^n + \sum_{n=0}^\infty a_n x^{n+2} = 0 .

Shifting n \to n-2 in the last series,

\sum_{n=2}^\infty a_n n(n-1) x^n + \sum_{n=1}^\infty a_n n x^n + \sum_{n=2}^\infty a_{n-2} x^n = 0 .
None of the series has a constant term. The x term is present only in the
second series. Its coefficient is a1 , and so
a1 = 0 .
The terms involving x^n, starting with n = 2, are present in all series, so that (after combining the series)

a_n n(n-1) + a_n n + a_{n-2} = 0 ,
giving

a_n = -\frac{1}{n^2}\, a_{n-2} .

This recurrence relation tells us that all odd coefficients are zero, a_{2n+1} = 0. Starting with a_0 = 1, compute a_2 = -\frac{1}{2^2}, a_4 = -\frac{1}{4^2} a_2 = (-1)^2 \frac{1}{2^2\, 4^2}, a_6 = -\frac{1}{6^2} a_4 = (-1)^3 \frac{1}{2^2\, 4^2\, 6^2}, and in general,

a_{2n} = (-1)^n \frac{1}{2^2\, 4^2\, 6^2 \cdots (2n)^2} = (-1)^n \frac{1}{(2\cdot 4\cdot 6 \cdots 2n)^2} = (-1)^n \frac{1}{2^{2n}\,(n!)^2} .
We then have

y = 1 + \sum_{n=1}^\infty a_{2n} x^{2n} = 1 + \sum_{n=1}^\infty (-1)^n \frac{1}{2^{2n}(n!)^2}\, x^{2n} = \sum_{n=0}^\infty (-1)^n \frac{1}{2^{2n}(n!)^2}\, x^{2n} .

We have obtained Bessel's function of order zero of the first kind, with the customary notation: J_0(x) = \sum_{n=0}^\infty (-1)^n \frac{1}{2^{2n}(n!)^2}\, x^{2n} .
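A numerical sanity check of this series is easy with the standard library alone: the partial sums converge rapidly, and the coefficients satisfy the recurrence a_{2n} = -a_{2n-2}/(2n)^2 used above (a quick sketch; the reference value J_0(1) = 0.7651976865579666 is the standard tabulated one):

```python
from math import factorial

def J0_partial(x, terms=25):
    # Partial sum of J_0(x) = sum (-1)^n x^(2n) / (2^(2n) (n!)^2)
    return sum((-1) ** n * x ** (2 * n) / (2 ** (2 * n) * factorial(n) ** 2)
               for n in range(terms))

# Compare with the known value of J_0(1)
assert abs(J0_partial(1.0) - 0.7651976865579666) < 1e-12

# The coefficients a_{2n} = (-1)^n / (2^(2n) (n!)^2) obey a_{2n} = -a_{2n-2}/(2n)^2
for n in range(1, 10):
    a_cur = (-1) ** n / (2 ** (2 * n) * factorial(n) ** 2)
    a_prev = (-1) ** (n - 1) / (2 ** (2 * n - 2) * factorial(n - 1) ** 2)
    assert abs(a_cur + a_prev / (2 * n) ** 2) < 1e-18
```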
Let us obtain this solution again, by computing the derivatives at x = 0 of the solution with y(0) = 1, y'(0) = 0. Differentiating the equation n times and setting x = 0 gives (n+1)\, y^{(n+1)}(0) + n\, y^{(n-1)}(0) = 0, or

y^{(n+1)}(0) = -\frac{n}{n+1}\, y^{(n-1)}(0) .

It follows that all derivatives of odd order vanish at zero, while

y^{(2n)}(0) = -\frac{2n-1}{2n}\, y^{(2n-2)}(0) = (-1)^2 \frac{(2n-1)(2n-3)}{2n(2n-2)}\, y^{(2n-4)}(0) = \cdots = (-1)^n \frac{(2n-1)(2n-3)\cdots 3\cdot 1}{2n(2n-2)\cdots 2}\, y(0) = (-1)^n \frac{(2n-1)(2n-3)\cdots 3\cdot 1}{2^n\, n!} .

Then

y(x) = \sum_{n=0}^\infty \frac{y^{(2n)}(0)}{(2n)!}\, x^{2n} = \sum_{n=0}^\infty (-1)^n \frac{1}{2^{2n}(n!)^2}\, x^{2n} .

We obtained again Bessel's function of order zero of the first kind, J_0(x) = \sum_{n=0}^\infty (-1)^n \frac{1}{2^{2n}(n!)^2}\, x^{2n} .
In case of the initial conditions y(0) = 0 and y'(0) = 1, there is no solution (the recurrence relation above is not valid, because the relation x y^{(n+2)} \to 0 as x \to 0 is not true in this case). In fact, the second solution of Bessel's equation cannot be continuously differentiable at x = 0. Indeed, the Wronskian of any two solutions is equal to c\, e^{-\int \frac{1}{x}\, dx} = \frac{c}{x}. We have W(y_1, y_2) = y_1 y_2' - y_1' y_2 = \frac{c}{x}. The solution J_0(x) satisfies J_0(0) = 1, J_0'(0) = 0. Therefore, the other solution, or its derivative, must be discontinuous at x = 0. It turns out that the other solution, called Bessel's function of the second kind, and denoted Y_0(x), has a term involving \ln x in its series representation; see, e.g., [3].
3.3

We shall deal only with the Maclaurin series, so that a = 0; the general case is similar. We consider the equation

(3.1)   x^2 y'' + x p(x) y' + q(x) y = 0 ,

with given functions p(x) and q(x). The characteristic equation at x = 0 is r(r-1) + p(0) r + q(0) = 0, and its roots determine the powers of x with which the series solutions begin.

Example Solve the equation

(3.4)   2x^2 y'' - x y' + (1 + x) y = 0 .

Dividing by 2, we put it into the form (3.1):

x^2 y'' - \frac{1}{2} x y' + \left( \frac{1}{2} + \frac{1}{2} x \right) y = 0 ,

so that p(x) = -\frac{1}{2} and q(x) = \frac{1}{2} + \frac{1}{2} x. The characteristic equation

r(r-1) - \frac{1}{2} r + \frac{1}{2} = 0

has roots r_1 = \frac{1}{2} and r_2 = 1.
The case r = 1/2. We set y = x^{1/2} v. Plugging this into (3.4), and simplifying, we arrive at

2x^2 v'' + x v' + x v = 0 .

Substituting v = \sum_{n=0}^\infty a_n x^n gives

(3.5)   \sum_{n=2}^\infty 2 a_n n(n-1) x^n + \sum_{n=1}^\infty a_n n x^n + \sum_{n=0}^\infty a_n x^{n+1} = 0 .

To line up the powers, we shift n \to n-1 in the last series. The first series we may begin at n = 1, instead of n = 2, because its coefficient at n = 1 is zero. We have

\sum_{n=1}^\infty 2 a_n n(n-1) x^n + \sum_{n=1}^\infty a_n n x^n + \sum_{n=1}^\infty a_{n-1} x^n = 0 .

We can now combine these series into a single series. Setting its coefficients to zero, we have

2 a_n n(n-1) + a_n n + a_{n-1} = 0 ,

which gives us the recurrence relation

a_n = -\frac{1}{n(2n-1)}\, a_{n-1} .

Starting with a_0 = 1, compute a_1 = -\frac{1}{1\cdot 1}, a_2 = -\frac{1}{2\cdot 3}\, a_1 = (-1)^2 \frac{1}{(1\cdot 2)(1\cdot 3)}, a_3 = -\frac{1}{3\cdot 5}\, a_2 = (-1)^3 \frac{1}{(1\cdot 2\cdot 3)(1\cdot 3\cdot 5)}, and in general

a_n = (-1)^n \frac{1}{n!\; 1\cdot 3\cdot 5 \cdots (2n-1)} .

We obtain the first solution

y_1(x) = x^{1/2} v(x) = x^{1/2} \left[ 1 + \sum_{n=1}^\infty \frac{(-1)^n}{n!\; 1\cdot 3\cdot 5\cdots (2n-1)}\, x^n \right] .
The case r = 1. We set y = x v. Plugging this into (3.4), and simplifying, we get

x^3 v'' + \frac{3}{2} x^2 v' + \frac{1}{2} x^2 v = 0 .

Dividing by x^2, we have a mildly singular equation

x v'' + \frac{3}{2} v' + \frac{1}{2} v = 0 .

We multiply this equation by 2x, for convenience:

(3.6)   2x^2 v'' + 3x v' + x v = 0 .

Substituting v = \sum_{n=0}^\infty a_n x^n gives

\sum_{n=2}^\infty 2 a_n n(n-1) x^n + \sum_{n=1}^\infty 3 a_n n x^n + \sum_{n=0}^\infty a_n x^{n+1} = 0 ,

and after shifting the index in the last series,

\sum_{n=1}^\infty 2 a_n n(n-1) x^n + \sum_{n=1}^\infty 3 a_n n x^n + \sum_{n=1}^\infty a_{n-1} x^n = 0 .

This gives the recurrence relation

a_n = -\frac{1}{n(2n+1)}\, a_{n-1} .

Starting with a_0 = 1, compute a_1 = -\frac{1}{1\cdot 3}, a_2 = -\frac{1}{2\cdot 5}\, a_1 = (-1)^2 \frac{1}{(1\cdot 2)(1\cdot 3\cdot 5)}, a_3 = -\frac{1}{3\cdot 7}\, a_2 = (-1)^3 \frac{1}{(1\cdot 2\cdot 3)(1\cdot 3\cdot 5\cdot 7)}, and in general

a_n = (-1)^n \frac{1}{n!\; 1\cdot 3\cdot 5 \cdots (2n+1)} .

The second linearly independent solution is then

y_2(x) = x \left[ 1 + \sum_{n=1}^\infty \frac{(-1)^n}{n!\; 1\cdot 3\cdot 5 \cdots (2n+1)}\, x^n \right] .

The general solution is, of course, y(x) = c_1 y_1 + c_2 y_2 .
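Both Frobenius series can be verified term by term in exact rational arithmetic. The sketch below checks the coefficient identity obtained by substituting y = x^r \sum a_n x^n into 2x^2 y'' - x y' + (1+x) y = 0, for each of the two roots:

```python
from fractions import Fraction

def check_frobenius(r, denom):
    # denom(n) is the factor in the recurrence a_n = -a_{n-1} / denom(n)
    a = [Fraction(1)]
    for n in range(1, 12):
        a.append(-a[n - 1] / denom(n))
    # Coefficient of x^(n+r) in 2x^2 y'' - x y' + (1+x) y:
    #   2 a_n (n+r)(n+r-1) - a_n (n+r) + a_n + a_{n-1}, which must vanish.
    for n in range(1, 12):
        p = Fraction(n) + r
        assert 2 * a[n] * p * (p - 1) - a[n] * p + a[n] + a[n - 1] == 0

check_frobenius(Fraction(1, 2), lambda n: Fraction(n * (2 * n - 1)))  # y_1, r = 1/2
check_frobenius(Fraction(1), lambda n: Fraction(n * (2 * n + 1)))     # y_2, r = 1
```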
Example Solve the equation

x^2 y'' + x y' + \left( x^2 - \frac{1}{9} \right) y = 0 .

This is Bessel's equation of order \frac{1}{3}. Here p(x) = 1, and q(x) = x^2 - \frac{1}{9}. The characteristic equation

r(r-1) + r - \frac{1}{9} = 0

has roots r_1 = \frac{1}{3} and r_2 = -\frac{1}{3}.
The case r = -1/3. We set y = x^{-1/3} v. Plugging this into the equation, simplifying, and multiplying by 3x, we get

3x^2 v'' + x v' + 3x^2 v = 0 .

Substituting v = \sum_{n=0}^\infty a_n x^n gives

\sum_{n=2}^\infty 3 a_n n(n-1) x^n + \sum_{n=1}^\infty a_n n x^n + \sum_{n=0}^\infty 3 a_n x^{n+2} = 0 ,

and after shifting the index in the last series,

\sum_{n=2}^\infty 3 a_n n(n-1) x^n + \sum_{n=1}^\infty a_n n x^n + \sum_{n=2}^\infty 3 a_{n-2} x^n = 0 .

The x term is present only in the second series. Its coefficient must be zero, so that

(3.8)   a_1 = 0 .

The term x^n, with n \ge 2, is present in all three series. Setting its coefficient to zero,

3 a_n n(n-1) + a_n n + 3 a_{n-2} = 0 ,

which gives the recurrence relation

a_n = -\frac{3}{n(3n-2)}\, a_{n-2} .

All odd coefficients are zero (because of (3.8)), while for the even ones, our recurrence relation gives

a_{2n} = -\frac{3}{2n(6n-2)}\, a_{2n-2} .

Starting with a_0 = 1, we compute a_2 = -\frac{3}{2\cdot 4}, a_4 = (-1)^2 \frac{3^2}{(2\cdot 4)(4\cdot 10)}, a_6 = (-1)^3 \frac{3^3}{(2\cdot 4\cdot 6)(4\cdot 10\cdot 16)}, and in general,

a_{2n} = (-1)^n \frac{3^n}{(2\cdot 4 \cdots 2n)(4\cdot 10 \cdots (6n-2))} .

This gives the first solution

y_1(x) = x^{-1/3} \left[ 1 + \sum_{n=1}^\infty \frac{(-1)^n\, 3^n}{(2\cdot 4\cdots 2n)(4\cdot 10\cdots (6n-2))}\, x^{2n} \right] .
The case r = 1/3 is handled the same way. It follows that all odd coefficients are zero, while the even ones satisfy

a_{2n} = -\frac{3}{2n(6n+2)}\, a_{2n-2} = -\frac{1}{2n(2n+2/3)}\, a_{2n-2} .

We then derive the second solution

y_2(x) = x^{1/3} \left[ 1 + \sum_{n=1}^\infty \frac{(-1)^n}{(2\cdot 4\cdots 2n)\,\big( (2+2/3)(4+2/3)\cdots (2n+2/3) \big)}\, x^{2n} \right] .
3.3.1 Problems

I. Find the Maclaurin series of the following functions, and state their radius of convergence.

1. \sin x^2 .   2. \frac{1}{1+x^2} .   3. e^{x^3} .
II. 1. Find the Taylor series of f(x) centered at a.

(ii) f(x) = e^x, a = 1.   (iii) f(x) = \frac{1}{x}, a = 1.

3. Show that

\sum_{n=1}^\infty \frac{1+(-1)^n}{n^2}\, x^n = \frac{1}{2} \sum_{n=1}^\infty \frac{1}{n^2}\, x^{2n} .

4. Show that

\sum_{n=0}^\infty \frac{n+3}{n!\,(n+1)}\, x^{n+1} = \sum_{n=1}^\infty \frac{n+2}{(n-1)!\; n}\, x^n .
III. Find the general solution, using power series centered at a (find the recurrence relation, and two linearly independent solutions).

1. y'' - x y' - y = 0, a = 0.

Answer. y_1(x) = \sum_{n=0}^\infty \frac{x^{2n}}{2^n n!} ,   y_2(x) = \sum_{n=0}^\infty \frac{2^n n!}{(2n+1)!}\, x^{2n+1} .

2. y'' - x y' + 2y = 0, a = 0.

Answer. y_1(x) = 1 - x^2 ,   y_2(x) = x - \frac{1}{6} x^3 - \frac{1}{120} x^5 - \cdots .

3. y'' - x y' - y = 0, a = 1.

Answer. y_1(x) = 1 + \frac{1}{2}(x-1)^2 + \frac{1}{6}(x-1)^3 + \frac{1}{6}(x-1)^4 + \cdots ,
y_2(x) = (x-1) + \frac{1}{2}(x-1)^2 + \frac{1}{2}(x-1)^3 + \frac{1}{4}(x-1)^4 + \cdots .

4. (x^2 + 1) y'' + x y' + y = 0, a = 0.

Answer. The recurrence relation: y^{(n+2)}(0) = -(n^2+1)\, y^{(n)}(0) ;

y_1(x) = 1 - \frac{1}{2} x^2 + \frac{5}{24} x^4 - \cdots ,   y_2(x) = x - \frac{1}{3} x^3 + \frac{1}{6} x^5 - \cdots .
IV. Find one series solution of the following mildly singular equations (here a = 0).

1. 2x y'' + y' + x y = 0.

Answer. y = 1 - \frac{x^2}{2\cdot 3} + \frac{x^4}{(2\cdot 4)(3\cdot 7)} - \frac{x^6}{(2\cdot 4\cdot 6)(3\cdot 7\cdot 11)} + \cdots = 1 + \sum_{n=1}^\infty \frac{(-1)^n x^{2n}}{2^n n!\; 3\cdot 7\cdot 11 \cdots (4n-1)} .

2. x y'' + y' - y = 0.

Answer. y = \sum_{n=0}^\infty \frac{x^n}{(n!)^2} .

3. x y'' + 2y' + y = 0.

Answer. y = \sum_{n=0}^\infty \frac{(-1)^n}{n!\,(n+1)!}\, x^n .

4. x y'' + y' - 2x y = 0.

Answer. y = 1 + \sum_{n=1}^\infty \frac{1}{2^n (n!)^2}\, x^{2n} = \sum_{n=0}^\infty \frac{1}{2^n (n!)^2}\, x^{2n} .
V.

1. Find one series solution of the equation

x y'' - 4y' + y = 0 .

Hint: Look for a solution in the form \sum_{n=5}^\infty a_n x^n, starting with a_5 = 1 (the recurrence relation a_n n(n-5) = -a_{n-1} forces a_0 = \cdots = a_4 = 0, and leaves a_5 arbitrary).

Answer. y = x^5 + 120 \sum_{n=6}^\infty \frac{(-1)^{n-5}}{n!\,(n-5)!}\, x^n .
2. Find one series solution of the following mildly singular equation (here a = 0)

x y'' - 2y' - 2y = 0 .

3. Find one series solution of the following mildly singular equation (here a = 0)

x y'' + y = 0 .

Hint: Look for a solution in the form \sum_{n=1}^\infty a_n x^n, starting with a_1 = 1.

Answer. y = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n!\,(n-1)!}\, x^n .
Answer. y = x^{1/2} \left[ 1 + \sum_{n=1}^\infty \frac{(-1)^n x^{2n}}{(2n+1)!} \right] = x^{-1/2} \left[ x + \sum_{n=1}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!} \right] = x^{-1/2} \sin x .
Chapter 4

Laplace Transform

We first recall improper integrals. For example,

\int_0^\infty e^{-2t}\, dt = -\frac{1}{2} e^{-2t} \Big|_0^\infty = \frac{1}{2} .

Here we did not plug in the upper limit t = \infty, but rather computed the limit as t \to \infty (the limit is zero). This is an example of a convergent integral. On the other hand, the integral

\int_1^\infty \frac{dt}{t} = \ln t \Big|_1^\infty = \infty

is divergent. Integrating by parts,

\int_0^\infty t e^{-2t}\, dt = \left( -\frac{t}{2} e^{-2t} - \frac{1}{4} e^{-2t} \right) \Big|_0^\infty = \frac{1}{4} .
4.1.2 The Laplace Transform

Let the function f(t) be defined on the interval [0, \infty). Let s > 0 be a positive parameter. We define the Laplace transform of f(t) as

F(s) = \int_0^\infty e^{-st} f(t)\, dt ,

provided that this integral converges. It is customary to use the corresponding capital letters to denote the Laplace transform (so that the Laplace transform of g(t) is denoted by G(s), of h(t) by H(s), etc.). We also use the operator notation for the Laplace transform: L(f(t)).
We now build up a collection of Laplace transforms.

L(1) = \int_0^\infty e^{-st}\, dt = -\frac{e^{-st}}{s} \Big|_0^\infty = \frac{1}{s} ;

L(t) = \int_0^\infty e^{-st}\, t\, dt = \left( -\frac{e^{-st}}{s}\, t - \frac{e^{-st}}{s^2} \right) \Big|_0^\infty = \frac{1}{s^2} .

Integrating by parts, for any positive integer n,

L(t^n) = \int_0^\infty e^{-st}\, t^n\, dt = -\frac{e^{-st} t^n}{s} \Big|_0^\infty + \frac{n}{s} \int_0^\infty e^{-st}\, t^{n-1}\, dt = \frac{n}{s}\, L(t^{n-1}) .

Applying this recurrence repeatedly,

L(t^n) = \frac{n!}{s^{n+1}} .

The next class of functions are the exponentials e^{at}, where a is some number:

L(e^{at}) = \int_0^\infty e^{-st} e^{at}\, dt = -\frac{1}{s-a}\, e^{-(s-a)t} \Big|_0^\infty = \frac{1}{s-a} , provided that s > a .
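These table entries can be confirmed numerically by approximating the defining integral; the sketch below truncates the improper integral at a large T (a crude trapezoidal rule, assuming the tail past T is negligible):

```python
from math import exp, factorial

def laplace_num(f, s, T=60.0, N=120000):
    # Trapezoidal approximation of F(s) = integral_0^inf e^(-st) f(t) dt,
    # truncated at T (valid when e^(-sT) f(T) is negligible).
    h = T / N
    total = 0.5 * (f(0.0) + exp(-s * T) * f(T))
    for k in range(1, N):
        t = k * h
        total += exp(-s * t) * f(t)
    return total * h

# L(1) = 1/s, L(t^n) = n!/s^(n+1), L(e^(at)) = 1/(s-a) for s > a
assert abs(laplace_num(lambda t: 1.0, 2.0) - 0.5) < 1e-4
assert abs(laplace_num(lambda t: t ** 3, 1.0) - factorial(3)) < 1e-3
assert abs(laplace_num(lambda t: exp(0.5 * t), 1.5) - 1.0) < 1e-4
```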
The Laplace transform is linear:

(1.1)   L(c_1 f(t) + c_2 g(t)) = c_1 F(s) + c_2 G(s) ,

because a similar property holds for integrals (and the Laplace transform is an integral). This expands considerably the set of functions for which we can write down the Laplace transform. For example, with a > 0,

L(\cosh at) = L\left( \frac{1}{2} e^{at} + \frac{1}{2} e^{-at} \right) = \frac{1}{2}\, \frac{1}{s-a} + \frac{1}{2}\, \frac{1}{s+a} = \frac{s}{s^2 - a^2} , for s > a .

Similarly,

L(\sinh at) = \frac{a}{s^2 - a^2} , for s > a .

We can compute the Laplace transform of any polynomial. For example,

L(2t^5 - 3t^2 + 5) = \frac{240}{s^6} - \frac{6}{s^3} + \frac{5}{s} .

Turning to the trigonometric functions,

L(\cos at) = \int_0^\infty e^{-st} \cos at\, dt = \frac{e^{-st}(-s\cos at + a\sin at)}{a^2+s^2} \Big|_0^\infty = \frac{s}{s^2+a^2} .

(One guesses that the antiderivative of e^{-st}\cos at is of the form A e^{-st}\cos at + B e^{-st}\sin at, and then evaluates the constants A and B by differentiation.) Similarly,

L(\sin at) = \int_0^\infty e^{-st} \sin at\, dt = \frac{e^{-st}(-s\sin at - a\cos at)}{a^2+s^2} \Big|_0^\infty = \frac{a}{s^2+a^2} .

The shift formula: for any number c,

(1.2)   L(e^{ct} f(t)) = \int_0^\infty e^{-(s-c)t} f(t)\, dt = F(s-c) .

For example,

L(e^{5t} \sin 3t) = \frac{3}{(s-5)^2 + 9} .

Another example:

L(e^{-2t} \cosh 3t) = \frac{s+2}{(s+2)^2 - 9} ;   L(e^{t}\, t^5) = \frac{5!}{(s-1)^6} .
4.1.3 The Inverse Laplace Transform

This is just going from F(s) back to f(t). We denote it as f(t) = L^{-1}(F(s)). We have

L^{-1}(c_1 F(s) + c_2 G(s)) = c_1 f(t) + c_2 g(t) .

This is just the formula (1.1), read backward. Each of the formulas for the Laplace transform leads to the corresponding formula for its inverse:

L^{-1}\left( \frac{1}{s^{n+1}} \right) = \frac{t^n}{n!} ,   L^{-1}\left( \frac{s}{s^2+a^2} \right) = \cos at ,   L^{-1}\left( \frac{1}{s^2+a^2} \right) = \frac{1}{a}\sin at ,   L^{-1}\left( \frac{1}{s-a} \right) = e^{at} ,
and so on. To compute L^{-1}, one often uses partial fractions, as well as the inverse of the shift formula (1.2):

(1.3)   L^{-1}(F(s-c)) = e^{ct} f(t) .

Example Find L^{-1}\left( \frac{3s-5}{s^2+4} \right). We have

L^{-1}\left( \frac{3s-5}{s^2+4} \right) = 3 L^{-1}\left( \frac{s}{s^2+4} \right) - 5 L^{-1}\left( \frac{1}{s^2+4} \right) = 3\cos 2t - \frac{5}{2}\sin 2t .

Example Find L^{-1}\left( \frac{2}{(s-5)^4} \right). Without the shift, we would have the function \frac{2}{s^4}. We begin by inverting this function, L^{-1}\left( \frac{2}{s^4} \right) = \frac{t^3}{3}, and then account for the shift, according to the shift formula (1.3):

L^{-1}\left( \frac{2}{(s-5)^4} \right) = e^{5t}\, \frac{t^3}{3} .
Example Find L^{-1}\left( \frac{s+7}{s^2-s-6} \right). Factoring the denominator, and using partial fractions,

\frac{s+7}{s^2-s-6} = \frac{s+7}{(s-3)(s+2)} = \frac{2}{s-3} - \frac{1}{s+2} ,

which gives

L^{-1}\left( \frac{s+7}{s^2-s-6} \right) = 2e^{3t} - e^{-2t} .

Example Find L^{-1}\left( \frac{2s-1}{s^2+2s+5} \right). The denominator cannot be factored, so we complete the square

\frac{2s-1}{s^2+2s+5} = \frac{2s-1}{(s+1)^2+4} = \frac{2(s+1)-3}{(s+1)^2+4} ,

and then adjust the numerator, so that it involves the same shift (as in the denominator). Without the shift, we have the function \frac{2s-3}{s^2+4}, with the inverse Laplace transform equal to 2\cos 2t - \frac{3}{2}\sin 2t. By the shift formula (1.3),
L^{-1}\left( \frac{2s-1}{s^2+2s+5} \right) = 2e^{-t}\cos 2t - \frac{3}{2}\, e^{-t}\sin 2t .

4.2
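Both algebraic decompositions used in these examples can be spot-checked exactly with rational arithmetic (a small sketch; evaluating both sides at a few sample points away from the poles):

```python
from fractions import Fraction

# (s+7)/((s-3)(s+2)) = 2/(s-3) - 1/(s+2)
for s in [Fraction(1), Fraction(7, 2), Fraction(-5)]:
    lhs = (s + 7) / ((s - 3) * (s + 2))
    rhs = Fraction(2) / (s - 3) - Fraction(1) / (s + 2)
    assert lhs == rhs

# (2s-1)/(s^2+2s+5) = (2(s+1) - 3)/((s+1)^2 + 4)
for s in [Fraction(0), Fraction(2), Fraction(-7, 3)]:
    lhs = (2 * s - 1) / (s * s + 2 * s + 5)
    rhs = (2 * (s + 1) - 3) / ((s + 1) ** 2 + 4)
    assert lhs == rhs
```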
Integrating by parts,

L(f'(t)) = \int_0^\infty e^{-st} f'(t)\, dt = e^{-st} f(t) \Big|_0^\infty + s \int_0^\infty e^{-st} f(t)\, dt .

Let us assume that f(t) does not grow too fast, as t \to \infty, so that |f(t)| \le b e^{at}, for some positive constants a and b. If we now choose s > a, then the limit as t \to \infty is zero, while the lower limit gives -f(0). We conclude

(2.1)   L(f'(t)) = -f(0) + s F(s) .

To compute the Laplace transform of f''(t), we use the formula (2.1) twice:

(2.2)   L(f''(t)) = L((f'(t))') = -f'(0) + s L(f'(t)) = -f'(0) - s f(0) + s^2 F(s) .

In general,

(2.3)   L(f^{(n)}(t)) = -f^{(n-1)}(0) - s f^{(n-2)}(0) - \cdots - s^{n-1} f(0) + s^n F(s) .
Example Solve the initial value problem

y'' + 3y' + 2y = 0 ,   y(0) = -1, \; y'(0) = 4 .

Applying the Laplace transform to both sides, using (2.1) and (2.2), gives

(s^2 + 3s + 2)\, Y(s) + s - 1 = 0 ;   Y(s) = \frac{1-s}{s^2+3s+2} .

To get the solution, it remains to find the inverse Laplace transform y(t) = L^{-1}(Y(s)). We factor the denominator, and use partial fractions

\frac{1-s}{s^2+3s+2} = \frac{1-s}{(s+1)(s+2)} = \frac{2}{s+1} - \frac{3}{s+2} .

Answer: y(t) = 2e^{-t} - 3e^{-2t} .

Example Solve the initial value problem

y'' - 4y' + 5y = 0 ,   y(0) = 1, \; y'(0) = -2 .

Applying the Laplace transform, and completing the square,

Y(s) = \frac{s-6}{s^2-4s+5} = \frac{(s-2)-4}{(s-2)^2+1} ,

so that y(t) = e^{2t}\cos t - 4 e^{2t}\sin t .
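Both answers can be verified directly, with the derivatives coded by hand (a quick sketch; the initial conditions are as reconstructed from the transforms above):

```python
from math import exp, sin, cos

# First example: y = 2e^(-t) - 3e^(-2t), for y'' + 3y' + 2y = 0, y(0) = -1, y'(0) = 4
y = lambda t: 2 * exp(-t) - 3 * exp(-2 * t)
yp = lambda t: -2 * exp(-t) + 6 * exp(-2 * t)
ypp = lambda t: 2 * exp(-t) - 12 * exp(-2 * t)
for t in [0.0, 0.3, 1.0, 2.5]:
    assert abs(ypp(t) + 3 * yp(t) + 2 * y(t)) < 1e-12
assert y(0.0) == -1.0 and yp(0.0) == 4.0

# Second example: z = e^(2t)(cos t - 4 sin t), for z'' - 4z' + 5z = 0, z(0) = 1, z'(0) = -2
z = lambda t: exp(2 * t) * (cos(t) - 4 * sin(t))
zp = lambda t: exp(2 * t) * (-2 * cos(t) - 9 * sin(t))
zpp = lambda t: exp(2 * t) * (-13 * cos(t) - 16 * sin(t))
for t in [0.0, 0.3, 1.0, 2.5]:
    assert abs(zpp(t) - 4 * zp(t) + 5 * z(t)) < 1e-9
assert z(0.0) == 1.0 and zp(0.0) == -2.0
```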
Example Solve the initial value problem

y'' + \omega^2 y = 5\cos 2t ,   y(0) = 1, \; y'(0) = 0 ,   with \omega \ne 2 .

Applying the Laplace transform,

(s^2 + \omega^2)\, Y(s) = \frac{5s}{s^2+4} + s ;   Y(s) = \frac{5s}{(s^2+4)(s^2+\omega^2)} + \frac{s}{s^2+\omega^2} .

The second term is easy to invert. To find the inverse Laplace transform of the first term, we use the guess-and-check method (or partial fractions)

\frac{s}{(s^2+4)(s^2+\omega^2)} = \frac{1}{\omega^2-4} \left( \frac{s}{s^2+4} - \frac{s}{s^2+\omega^2} \right) .

Answer: y(t) = \frac{5}{\omega^2-4}\, (\cos 2t - \cos \omega t) + \cos \omega t .

When \omega approaches 2, the amplitude of the oscillations becomes large.

Differentiating the definition of the Laplace transform

F(s) = \int_0^\infty e^{-st} f(t)\, dt

in s, we obtain F'(s) = -\int_0^\infty e^{-st}\, t f(t)\, dt, or

(2.4)   L(t f(t)) = -F'(s) .

For example,

L(t \sin 2t) = -\frac{d}{ds}\, \frac{2}{s^2+4} = \frac{4s}{(s^2+4)^2} .
Example Solve the initial value problem

y'' + 4y = 5\cos 2t ,   y(0) = 0, \; y'(0) = 0 .

Applying the Laplace transform,

(s^2+4)\, Y(s) = \frac{5s}{s^2+4} ;   Y(s) = \frac{5s}{(s^2+4)^2} .

Then, using (2.4), y(t) = \frac{5}{4}\, t \sin 2t. We see that the amplitude of oscillations (which is equal to \frac{5}{4} t) tends to infinity with time.

Example Solve the initial value problem for the fourth order equation

y'''' - y = 0 ,   y(0) = 1, \; y'(0) = 0, \; y''(0) = 1, \; y'''(0) = 0 .

Using the formula (2.3),

Y(s) = \frac{s^3+s}{s^4-1} = \frac{s(s^2+1)}{(s^2-1)(s^2+1)} = \frac{s}{s^2-1} ,

so that y(t) = \cosh t .
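A direct check of the resonance answer (a sketch, with the derivatives written out by hand; the forcing 5 cos 2t is as reconstructed above):

```python
from math import sin, cos, cosh, sinh

# y = (5/4) t sin 2t should satisfy y'' + 4y = 5 cos 2t, y(0) = y'(0) = 0
y = lambda t: 1.25 * t * sin(2 * t)
yp = lambda t: 1.25 * sin(2 * t) + 2.5 * t * cos(2 * t)
ypp = lambda t: 5 * cos(2 * t) - 5 * t * sin(2 * t)
for t in [0.0, 0.4, 1.3, 3.1]:
    assert abs(ypp(t) + 4 * y(t) - 5 * cos(2 * t)) < 1e-12
assert y(0.0) == 0.0 and yp(0.0) == 0.0

# Fourth order example: for y = cosh t, y'''' = cosh t = y, so y'''' - y = 0
# identically; the initial values are y(0) = 1 and y'(0) = sinh 0 = 0.
assert cosh(0.0) == 1.0 and sinh(0.0) == 0.0
```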
4.2.1
Step Functions
Sometimes an external force acts only over some time interval. One uses
step functions to model such forces. The basic step function is the Heaviside
function uc (t), defined for any positive constant c by
u_c(t) = 0 for t < c, and u_c(t) = 1 for t \ge c .

(The graph of u_c(t) is a unit step of height 1 at t = c.)
We compute its Laplace transform:

L(u_c(t)) = \int_0^\infty e^{-st} u_c(t)\, dt = \int_c^\infty e^{-st}\, dt = \frac{e^{-cs}}{s} ;   L^{-1}\left( \frac{e^{-cs}}{s} \right) = u_c(t) .

More generally, for a function f(t) with the transform F(s),

(2.5)   L(u_c(t) f(t-c)) = \int_0^\infty e^{-st} u_c(t) f(t-c)\, dt = \int_c^\infty e^{-st} f(t-c)\, dt = e^{-cs} F(s) .

For example,

L^{-1}\left( e^{-2s}\, \frac{1}{s^4} \right) = \frac{1}{6}\, u_2(t)\, (t-2)^3 .

Example Solve

y'' + 9y = u_2(t) - u_4(t) ,   y(0) = 1, \; y'(0) = 0 .

Here the forcing term is equal to 1, for 2 \le t < 4, and is zero for other t. Taking the Laplace transform, then solving for Y(s), we have

s^2 Y(s) - s + 9 Y(s) = \frac{e^{-2s}}{s} - \frac{e^{-4s}}{s} ;
Y(s) = \frac{s}{s^2+9} + e^{-2s}\, \frac{1}{s(s^2+9)} - e^{-4s}\, \frac{1}{s(s^2+9)} .

Using partial fractions,

\frac{1}{s(s^2+9)} = \frac{1}{9} \left( \frac{1}{s} - \frac{s}{s^2+9} \right) ,

and therefore

L^{-1}\left( \frac{1}{s(s^2+9)} \right) = \frac{1}{9} - \frac{1}{9}\cos 3t .

Answer: y(t) = \cos 3t + u_2(t) \left[ \frac{1}{9} - \frac{1}{9}\cos 3(t-2) \right] - u_4(t) \left[ \frac{1}{9} - \frac{1}{9}\cos 3(t-4) \right] .
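The two ingredients of this solution can be checked separately: the partial fraction identity in exact arithmetic, and the fact that h(t) = (1 - cos 3t)/9 is the response to a unit step (a sketch):

```python
from fractions import Fraction
from math import cos

# 1/(s(s^2+9)) = (1/9)(1/s - s/(s^2+9)), checked at sample rational points
for s in [Fraction(1), Fraction(-2), Fraction(5, 3)]:
    assert Fraction(1) / (s * (s * s + 9)) == \
        Fraction(1, 9) * (Fraction(1) / s - s / (s * s + 9))

# h(t) = 1/9 - (1/9) cos 3t satisfies h'' + 9h = 1, h(0) = h'(0) = 0
h = lambda t: (1 - cos(3 * t)) / 9
hpp = lambda t: cos(3 * t)  # since h'' = (9 cos 3t)/9
for t in [0.0, 0.5, 1.7]:
    assert abs(hpp(t) + 9 * h(t) - 1) < 1e-12
```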
4.3

Imagine a rod lying along the t-axis, with a weight density \rho(t). Subdividing it into n pieces of length \Delta t, its total weight is

m = \lim_{\Delta t \to 0} \sum_{i=1}^n \rho(t_i)\, \Delta t = \int_{-\infty}^\infty \rho(t)\, dt .

Assume now that the rod is moved to a new position in the (t, y) plane, with each point (t, 0) moved to a point (t, f(t)), where f(t) is a given function. What is the work needed for this move? For the piece i, the work is approximated by f(t_i)\rho(t_i)\Delta t. The total work is then

W = \lim_{\Delta t \to 0} \sum_{i=1}^n f(t_i)\rho(t_i)\, \Delta t = \int_{-\infty}^\infty \rho(t) f(t)\, dt .

Assume now that the rod has unit weight, m = 1, and the entire weight is pushed into a single point t = 0. The resulting distribution of weight is called the delta distribution or the delta function, and is denoted \delta(t). It has the following properties:

(i) \delta(t) = 0, for t \ne 0 ;

(ii) \int_{-\infty}^\infty \delta(t)\, dt = 1 ;

(iii) \int_{-\infty}^\infty \delta(t) f(t)\, dt = f(0) .

The last formula holds, because work is expended only to move the weight 1 at t = 0, the distance of f(0). Observe that \delta(t) is not a usual function, like the ones studied in calculus. (If a usual function is equal to zero, except at one point, its integral is zero, over any interval.) One can think of \delta(t) as the limit of the functions

f_\varepsilon(t) = \frac{1}{2\varepsilon} if -\varepsilon < t < \varepsilon, and f_\varepsilon(t) = 0 for other t,

as \varepsilon \to 0. (Observe that \int_{-\infty}^\infty f_\varepsilon(t)\, dt = 1.)

Similarly, for the delta function concentrated at a point t_0:

(i) \delta(t-t_0) = 0, for t \ne t_0 ;  (ii) \int_{-\infty}^\infty \delta(t-t_0)\, dt = 1 ;  (iii) \int_{-\infty}^\infty \delta(t-t_0) f(t)\, dt = f(t_0) .

Using the properties (i) and (iii), we compute the Laplace transform, for any t_0 \ge 0,

L(\delta(t-t_0)) = \int_0^\infty \delta(t-t_0)\, e^{-st}\, dt = e^{-s t_0} .

In particular,

L(\delta(t)) = 1 .   Correspondingly, L^{-1}\left( e^{-s t_0} \right) = \delta(t - t_0) .
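The limit L(delta(t - t_0)) = e^(-s t_0) can be seen numerically through the box approximation f_eps described above, whose transform has a closed form (a sketch):

```python
from math import exp

def laplace_of_box(t0, s, eps):
    # Laplace transform of the box f_eps(t) = 1/(2 eps) on (t0 - eps, t0 + eps):
    # the integral of e^(-st)/(2 eps) over that interval, in closed form.
    return (exp(-s * (t0 - eps)) - exp(-s * (t0 + eps))) / (2 * eps * s)

# As eps -> 0 this tends to e^(-s t0) = L(delta(t - t0))
for t0, s in [(2.0, 1.0), (0.5, 3.0)]:
    assert abs(laplace_of_box(t0, s, 1e-6) - exp(-s * t0)) < 1e-8
```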
Example Solve

y'' + 2y' + 5y = \delta(t-2) ,   y(0) = 0, \; y'(0) = 0 .

Taking the Laplace transform, and completing the square,

Y(s) = \frac{e^{-2s}}{s^2+2s+5} = e^{-2s}\, \frac{1}{(s+1)^2+4} .

Since L^{-1}\left( \frac{1}{(s+1)^2+4} \right) = \frac{1}{2} e^{-t}\sin 2t, we conclude that

y(t) = \frac{1}{2}\, u_2(t)\, e^{-(t-2)} \sin 2(t-2) .
4.4
The problem

y'' + y = 0 ,   y(0) = 0, \; y'(0) = 1

has solution y = \sin t. If we now add a forcing term g(t), and consider

y'' + y = g(t) ,   y(0) = 0, \; y'(0) = 0 ,

then the solution is

(4.1)   y(t) = \int_0^t \sin(t-v)\, g(v)\, dv .

The integral on the right is an example of a convolution. In general, the convolution of two functions f(t) and g(t) is

(f * g)(t) = \int_0^t f(t-v)\, g(v)\, dv .

Example Compute t * t^2. Here f(t) = t, g(t) = t^2, and

t * t^2 = \int_0^t (t-v)\, v^2\, dv = t \int_0^t v^2\, dv - \int_0^t v^3\, dv = \frac{t^4}{3} - \frac{t^4}{4} = \frac{t^4}{12} .

Convolution is symmetric: changing the variable u = t - v,

g * f = \int_0^t g(u) f(t-u)\, du = \int_0^t f(t-u) g(u)\, du = f * g .
Theorem The Laplace transform of a convolution is the product of the Laplace transforms: L(f * g) = F(s) G(s).

Indeed,

L(f * g) = \int_0^\infty e^{-st} \int_0^t f(t-v) g(v)\, dv\, dt = \iint_D e^{-st} f(t-v) g(v)\, dv\, dt ,

where the double integral on the right hand side is taken over the region D of the tv-plane, which is an infinite wedge 0 < v < t in the first quadrant. We now evaluate this double integral by using the reverse order of repeated integrations:

(4.2)   \iint_D e^{-st} f(t-v) g(v)\, dv\, dt = \int_0^\infty g(v) \left( \int_v^\infty e^{-st} f(t-v)\, dt \right) dv .

Changing the variable in the inner integral,

\int_v^\infty e^{-st} f(t-v)\, dt = e^{-sv} F(s) ,

so that (4.2) equals \int_0^\infty e^{-sv} g(v) F(s)\, dv = F(s) G(s), as claimed.

Example Find L^{-1}\left( \frac{s^2}{(s^2+4)^2} \right). By the theorem,

L^{-1}\left( \frac{s^2}{(s^2+4)^2} \right) = \cos 2t * \cos 2t = \int_0^t \cos 2(t-v)\, \cos 2v\, dv .
Using that

\cos 2(t-v) = \cos 2t \cos 2v + \sin 2t \sin 2v ,

we conclude

L^{-1}\left( \frac{s^2}{(s^2+4)^2} \right) = \cos 2t \int_0^t \cos^2 2v\, dv + \sin 2t \int_0^t \sin 2v\, \cos 2v\, dv = \frac{1}{2}\, t\cos 2t + \frac{1}{4}\sin 2t .

Example Consider the vibrations of a spring at resonance:

y'' + y = 3\cos t ,   y(0) = 0, \; y'(0) = 0 .

Applying the Laplace transform, Y(s) = \frac{3}{s^2+1}\cdot \frac{s}{s^2+1}, and so, using convolutions,

y(t) = 3\sin t * \cos t = \frac{3}{2}\, t\sin t .
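These convolution identities are easy to test numerically with a simple quadrature (a sketch; trapezoidal rule on the defining integral):

```python
from math import sin, cos

def conv(f, g, t, N=4000):
    # Trapezoidal approximation of (f*g)(t) = integral_0^t f(t-v) g(v) dv
    h = t / N
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, N):
        v = k * h
        total += f(t - v) * g(v)
    return total * h

# t * t^2 = t^4/12
assert abs(conv(lambda u: u, lambda u: u * u, 2.0) - 2.0 ** 4 / 12) < 1e-5
# sin t * sin t = (1/2)(sin t - t cos t), and 3 sin t * cos t = (3/2) t sin t
t = 1.5
assert abs(conv(sin, sin, t) - 0.5 * (sin(t) - t * cos(t))) < 1e-5
assert abs(3 * conv(sin, cos, t) - 1.5 * t * sin(t)) < 1e-5
```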
The Tautochrone curve

(Figure: a curve through the origin in the first quadrant, with a particle descending from a point (x, y) through an intermediate position (x_1, v).)

Assume that we have a curve through the origin in the first quadrant of the xy-plane, and a particle slides down this curve, under the influence of the force of gravity. The initial velocity at the starting point is assumed to be
zero. We wish to find the curve so that the time T it takes to reach the bottom at (0,0) is the same, for any starting point (x, y). This historical curve, called the tautochrone (which means loosely the same time in Greek), was found by Christian Huygens in 1673. He was motivated by the construction of a clock pendulum whose period is independent of its amplitude.

Let (x_1, v) be any intermediate position of the particle, v < y. Let s = f(v) be the length of the curve from (0,0) to (x_1, v). Of course, the length s depends also on the time t, and \frac{ds}{dt} gives the speed of the particle. The kinetic energy of the particle at (x_1, v) is due to the decrease of its potential energy (m is the mass of the particle):

\frac{1}{2}\, m \left( \frac{ds}{dt} \right)^2 = m g (y - v) .

By the chain rule \frac{ds}{dt} = \frac{ds}{dv}\, \frac{dv}{dt} = f'(v)\, \frac{dv}{dt}, so that

\frac{1}{2} \left( f'(v) \right)^2 \left( \frac{dv}{dt} \right)^2 = g(y-v) ;   f'(v) \left| \frac{dv}{dt} \right| = \sqrt{2g}\, \sqrt{y-v} .

Integrating over the time interval (0, T), during which the particle descends from v = y to v = 0, we obtain

(4.3)   \int_0^y \frac{f'(v)}{\sqrt{y-v}}\, dv = \int_0^T \sqrt{2g}\, dt = \sqrt{2g}\, T .

To find the function f', we need to solve the integral equation (4.3). We may rewrite it as a convolution:

(4.4)   y^{-1/2} * f'(y) = \sqrt{2g}\, T .

Recall that

(4.5)   L\left( y^{-1/2} \right) = \sqrt{\frac{\pi}{s}} ,

while the constant on the right of (4.4) has the transform \sqrt{2g}\, T\, \frac{1}{s}. We now apply the Laplace transform to the equation (4.4), and get

\sqrt{\frac{\pi}{s}}\; L(f'(y)) = \sqrt{2g}\, T\, \frac{1}{s} .
Therefore

L(f'(y)) = \frac{\sqrt{2g}\, T}{\sqrt{\pi}}\, \frac{1}{\sqrt{s}} = a \sqrt{\frac{\pi}{s}} ,

where we denoted a = \frac{T\sqrt{2g}}{\pi}. Using (4.5) again,

(4.6)   f'(y) = a\, y^{-1/2} .

We have ds = \sqrt{dx^2 + dy^2}, and so f'(y) = \frac{ds}{dy} = \sqrt{1 + \left( \frac{dx}{dy} \right)^2}. We use this expression in (4.6):

(4.7)   \sqrt{1 + \left( \frac{dx}{dy} \right)^2} = a\, \frac{1}{\sqrt{y}} .

This is a first order equation. We could solve it for \frac{dx}{dy}, and then separate the variables. But it seems easier to use the parametric integration technique. To do that, we solve this equation for y:

(4.8)   y = \frac{a^2}{1 + \left( \frac{dx}{dy} \right)^2} ,

and set

(4.9)   \frac{dx}{dy} = \frac{1 + \cos\theta}{\sin\theta} .
Then from (4.8),

y = \frac{a^2 \sin^2\theta}{\sin^2\theta + (1+\cos\theta)^2} = \frac{a^2 \sin^2\theta}{2 + 2\cos\theta} = \frac{a^2 (1-\cos^2\theta)}{2(1+\cos\theta)} = \frac{a^2}{2}\, (1 - \cos\theta) .

It follows that dy = \frac{a^2}{2} \sin\theta\, d\theta. Then from (4.9), we get

dx = \frac{1+\cos\theta}{\sin\theta}\, dy = \frac{a^2}{2}\, (1+\cos\theta)\, d\theta .

Integrating, we obtain the parametric equations of the tautochrone,

x = \frac{a^2}{2}\, (\theta + \sin\theta) ,   y = \frac{a^2}{2}\, (1 - \cos\theta) ,

which is a cycloid.
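The defining property, a descent time independent of the starting point, can be confirmed numerically. The sketch below integrates dt = ds/v along the cycloid (with a = 1 and g = 9.8, hypothetical values chosen just for the check) and compares the result with the closed-form time pi a / sqrt(2g):

```python
from math import cos, pi, sqrt

def descent_time(theta0, a=1.0, g=9.8, N=200000):
    # T = (a/sqrt(g)) * integral_0^theta0 cos(theta/2)/sqrt(cos theta - cos theta0) d theta,
    # from ds = a^2 cos(theta/2) d theta and v = a sqrt(g (cos theta - cos theta0)).
    # Midpoint rule copes with the integrable 1/sqrt singularity at theta0.
    h = theta0 / N
    total = 0.0
    for k in range(N):
        th = (k + 0.5) * h
        total += cos(th / 2) / sqrt(cos(th) - cos(theta0))
    return (1.0 * a / sqrt(g)) * total * h

T_exact = pi * 1.0 / sqrt(2 * 9.8)
# Two very different starting heights give (numerically) the same descent time
assert abs(descent_time(1.0) - T_exact) < 0.01
assert abs(descent_time(2.5) - T_exact) < 0.01
```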
4.5
Distributions
A distribution assigns a number to each smooth test function \varphi(t), vanishing outside a bounded interval. A usual function f(t) acts on test functions by

(5.10)   (f, \varphi) = \int_{-\infty}^\infty f(t)\, \varphi(t)\, dt .

This action is linear:

(f, c_1 \varphi_1 + c_2 \varphi_2) = c_1 \int_{-\infty}^\infty f(t)\varphi_1\, dt + c_2 \int_{-\infty}^\infty f(t)\varphi_2\, dt = c_1 (f, \varphi_1) + c_2 (f, \varphi_2) ,

for any two constants c_1 and c_2, and any two test functions \varphi_1(t) and \varphi_2(t). This way usual functions can be viewed as distributions. Integrating by parts in (5.10) (the test functions vanish for large |t|),

(f', \varphi) = \int_{-\infty}^\infty f'(t)\varphi(t)\, dt = -\int_{-\infty}^\infty f(t)\varphi'(t)\, dt = -(f, \varphi') ,

which lets us consider f'(t) in the sense of distributions.

Example The delta distribution. Define

(\delta(t), \varphi) = \varphi(0) .

Example The derivative of |t|. We have

(|t|', \varphi) = -(|t|, \varphi') = \int_{-\infty}^0 t\, \varphi'(t)\, dt - \int_0^\infty t\, \varphi'(t)\, dt .

Integrating by parts in each integral,

(|t|', \varphi) = -\int_{-\infty}^0 \varphi(t)\, dt + \int_0^\infty \varphi(t)\, dt = (2H(t) - 1, \varphi) ,

where H(t) is the Heaviside step function, so that |t|' = 2H(t) - 1.
4.5.1 Problems

I. Find the Laplace transforms of the following functions.

1. 5 + 2t^3 + e^{-4t} .   Answer. \frac{5}{s} + \frac{12}{s^4} + \frac{1}{s+4} .

2. 2\sin 3t - t^3 .   Answer. \frac{6}{s^2+9} - \frac{6}{s^4} .

3. \cosh 2t - e^{4t} .   Answer. \frac{s}{s^2-4} - \frac{1}{s-4} .

4. e^{2t}\cos 3t .   Answer. \frac{s-2}{(s-2)^2+9} .

5. \frac{t^3 - 3t}{t} .   Answer. \frac{2}{s^3} - \frac{3}{s} .

6. e^{-3t}\, t^4 .   Answer. \frac{24}{(s+3)^5} .
II. Find the inverse Laplace transforms of the following functions.

1. \frac{2}{s^2+4} - \frac{2}{s^3} .   Answer. \sin 2t - t^2 .

2. \frac{2s}{s^2-9} - \frac{2}{s+3} .

3. \frac{1}{s^2+s} .   Answer. 1 - e^{-t} .

4. \frac{1}{s^3+s} .   Answer. 1 - \cos t .

5. \frac{1}{s^2-3s} .   Answer. \frac{1}{3} e^{3t} - \frac{1}{3} .

6. \frac{1}{(s^2+1)(s^2+4)} .   Answer. \frac{1}{3}\sin t - \frac{1}{6}\sin 2t .

7. \frac{1}{s^2+2s+10} .   Answer. \frac{1}{3}\, e^{-t}\sin 3t .

8. \frac{1}{s^2+s-2} .   Answer. \frac{1}{3} e^{t} - \frac{1}{3} e^{-2t} .

9. \frac{s}{s^2+s+1} .   Answer. e^{-\frac{1}{2}t} \left[ \cos \frac{\sqrt{3}}{2} t - \frac{1}{\sqrt{3}} \sin \frac{\sqrt{3}}{2} t \right] .

10. \frac{s-1}{s^2-s-2} .   Answer. \frac{1}{3} e^{2t} + \frac{2}{3} e^{-t} .
11. \frac{s}{4s^2 - 4s + 5} .   Answer. e^{\frac{1}{2}t} \left( \frac{1}{4}\cos t + \frac{1}{8}\sin t \right) .
III. Using the Laplace transform, solve the following initial value problems.

1. y'' + 3y' + 2y = 0, \; y(0) = 1, \; y'(0) = -2 .   Answer. y = e^{-2t} .

2. y'' + 2y' + 5y = 0, \; y(0) = 1, \; y'(0) = -2 .   Answer. y = e^{-t} \left( \cos 2t - \frac{1}{2}\sin 2t \right) .

3. y'' + 4y = 5\sin t, \; y(0) = 0, \; y'(0) = 1 .   Answer. y = \frac{5}{3}\sin t - \frac{1}{3}\sin 2t .

4. y'' + 2y' + 2y = e^{t}, \; y(0) = 0, \; y'(0) = 1 .   Answer. y = \frac{1}{5} e^{t} - \frac{1}{5} e^{-t}(\cos t - 3\sin t) .

5. y'''' - y = 0, \; y(0) = 0, \; y'(0) = 1, \; y''(0) = 0, \; y'''(0) = 0 .   Answer. y = \frac{1}{2}\sin t + \frac{1}{2}\sinh t .
IV.

1. Show that L\left( t^{-1/2} \right) = \sqrt{\frac{\pi}{s}} .

Hint: After letting x = t^{1/2},

L\left( t^{-1/2} \right) = \int_0^\infty e^{-st}\, t^{-1/2}\, dt = 2\int_0^\infty e^{-sx^2} dx = \int_{-\infty}^\infty e^{-sx^2} dx .

Denote I = \int_{-\infty}^\infty e^{-sx^2} dx. Then

I^2 = \int_{-\infty}^\infty e^{-sx^2} dx \int_{-\infty}^\infty e^{-sy^2} dy = \iint e^{-s(x^2+y^2)}\, dA .

This is a double integral over the entire xy-plane. Evaluate it by using polar coordinates, to obtain I^2 = \frac{\pi}{s}, so that L\left( t^{-1/2} \right) = \sqrt{\frac{\pi}{s}} .

2. Show that \int_0^\infty \frac{\sin x}{x}\, dx = \frac{\pi}{2} .

Hint: Consider f(t) = \int_0^\infty \frac{\sin tx}{x}\, dx, and calculate the Laplace transform F(s) = \frac{\pi}{2}\, \frac{1}{s} .

Answer. y(t) = \frac{9}{16} e^{4t} + \frac{1}{16}(7 + 4t) , \quad y(t) = -e^{t} + \frac{3}{8} e^{4t} + \frac{1}{8}(3 + 4t) .
V.

1. A function f(t) is equal to 1 for 1 \le t \le 5, and is equal to 0 for all other t \ge 0. Represent f(t) as a difference of two step functions, and find its Laplace transform.

Answer. f(t) = u_1(t) - u_5(t), \quad F(s) = \frac{e^{-s}}{s} - \frac{e^{-5s}}{s} .

2.   Answer. F(s) = \frac{2}{s^3} - 2\, \frac{e^{-4s}}{s} .

3. Sketch the graph of the function u_2(t) - 2u_3(t), and find its Laplace transform.

4. A function g(t) is equal to 1 for 0 \le t \le 5, and is equal to -2 for t > 5. Represent g(t) using step functions, and find its Laplace transform.

Answer. g(t) = 1 - 3u_5(t), \quad G(s) = \frac{1}{s} - 3\, \frac{e^{-5s}}{s} .
5. Find the inverse Laplace transform of \frac{1}{s^2}\left( 2e^{-s} - 3e^{-4s} \right) .

Answer. 2u_1(t)(t-1) - 3u_4(t)(t-4) .

6. Find the inverse Laplace transform of e^{-2s}\, \frac{3s-1}{s^2+4} .

Answer. u_2(t) \left[ 3\cos 2(t-2) - \frac{1}{2}\sin 2(t-2) \right] .

7. Find the inverse Laplace transform of e^{-s}\, \frac{1}{s^2+s-6} .

Answer. u_1(t) \left[ \frac{1}{5} e^{2t-2} - \frac{1}{5} e^{-3t+3} \right] .

8. Find the inverse Laplace transform of e^{-\frac{\pi}{2}s}\, \frac{1}{s^2+2s+5}, and simplify the answer.

Answer. -\frac{1}{2}\, u_{\pi/2}(t)\, e^{-t+\pi/2}\sin 2t .

9. Solve

y'' + y = 2u_1(t) - u_5(t), \quad y(0) = 0, \; y'(0) = 2 .

10. Solve

y'' + 3y' + 2y = u_2(t), \quad y(0) = 0, \; y'(0) = -1 .

Answer. y(t) = e^{-2t} - e^{-t} + u_2(t) \left[ \frac{1}{2} - e^{-(t-2)} + \frac{1}{2} e^{-2(t-2)} \right] .

VI.

1. Show that L\left( u_c'(t) \right) = L(\delta(t-c)) .
2. Solve

y'' + 2y' + 10y = \delta(t-\pi), \quad y(0) = 0, \; y'(0) = 0 .

Answer. y(t) = -\frac{1}{3}\, u_\pi(t)\, e^{-(t-\pi)}\sin 3t .

5. Solve

4y'' + y = \delta(t), \quad y(0) = 0, \; y'(0) = 0 .

Answer. y(t) = \frac{1}{2}\sin \frac{1}{2} t .

6. Solve

4y'' + 4y' + 5y = \delta(t-2), \quad y(0) = 0, \; y'(0) = -1 .

Answer. y(t) = -e^{-\frac{1}{2}t}\sin t + \frac{1}{4}\, u_2(t)\, e^{-\frac{1}{2}(t-2)}\sin(t-2) .
VII.

1. Show that \sin t * 1 = 1 - \cos t. (Observe that \sin t * 1 \ne \sin t.)

2. Show that f(t) * \delta(t) = f(t), for any f(t). (So that the delta function plays the role of unity for convolution.)

3. Find the convolution t * \sin at .   Answer. \frac{at - \sin at}{a^2} .

4. Find the convolution \sin t * \sin t .   Answer. \frac{1}{2}(\sin t - t\cos t) .

5. Using convolutions, find the inverse Laplace transforms of the following functions.

\frac{s}{(s+1)(s^2+9)} .   Answer. -\frac{1}{10} e^{-t} + \frac{1}{10}\cos 3t + \frac{3}{10}\sin 3t .

\frac{s}{(s^2+1)^2} .   Answer. \frac{1}{2}\, t\sin t .

\frac{1}{(s^2+9)^2} .   Answer. \frac{1}{54}\sin 3t - \frac{1}{18}\, t\cos 3t .

6. Solve

y'' + 9y = \cos 3t, \quad y(0) = 0, \; y'(0) = 0 .   Answer. y(t) = \frac{1}{6}\, t\sin 3t .

7. Solve the initial value problem with a given forcing term g(t)

y'' + 4y = g(t), \quad y(0) = 0, \; y'(0) = 0 .

Answer. y(t) = \frac{1}{2} \int_0^t \sin 2(t-v)\, g(v)\, dv .
VIII.

1. Find the second derivative of |t| in the sense of distributions.   Answer. 2\delta(t) .

2. Find f(t), such that f^{(n)}(t) = \delta(t) .

Answer. f(t) = 0 if t < 0, and f(t) = \frac{t^{n-1}}{(n-1)!} if t > 0 .

3. Let f(t) = t^2 if t < 0, and f(t) = t^2 + 5 if t > 0. Find the derivative of f(t) in the sense of distributions.
Chapter 5

Linear Systems of Differential Equations

5.1

5.1.1

Given two column vectors

C_1 = [a_1; a_2; a_3] ,   C_2 = [b_1; b_2; b_3]

(we write vectors and matrices row by row, separated by semicolons), we can form their sum C_1 + C_2 = [a_1+b_1; a_2+b_2; a_3+b_3], the multiple x C_1 = [x a_1; x a_2; x a_3], and more generally the linear combination

x_1 C_1 + x_2 C_2 = [x_1 a_1 + x_2 b_1; \; x_1 a_2 + x_2 b_2; \; x_1 a_3 + x_2 b_3] ,

with any numbers x_1 and x_2. We shall be dealing only with the square matrices, like the following 3 \times 3 matrix

(1.1)   A = [a_{11} \; a_{12} \; a_{13}; \;\; a_{21} \; a_{22} \; a_{23}; \;\; a_{31} \; a_{32} \; a_{33}] .

We may regard A as composed of its columns, A = [C_1 \; C_2 \; C_3]. Then, for a vector x with components x_1, x_2, x_3, the product Ax is the linear combination of the columns of A:

A x = x_1 C_1 + x_2 C_2 + x_3 C_3 .
5.1.2

We wish to find the functions x_1(t), x_2(t) and x_3(t), which solve the following system of equations, with given constant coefficients a_{11}, \ldots, a_{33},

(1.3)   x_1' = a_{11} x_1 + a_{12} x_2 + a_{13} x_3 ,   x_2' = a_{21} x_1 + a_{22} x_2 + a_{23} x_3 ,   x_3' = a_{31} x_1 + a_{32} x_2 + a_{33} x_3 ,

together with prescribed values of x_1, x_2, x_3 at some initial time t_0. This problem can be written in the matrix form

(1.4)   x' = A x ,   x(t_0) = x_0 ,

where A is the matrix of the coefficients, x(t) is the unknown vector function with components x_1(t), x_2(t), x_3(t), and x_0 is the vector of the initial conditions. Indeed, on the left in (1.3) we have the components of the vector x'(t) = [x_1'(t); x_2'(t); x_3'(t)], while on the right we see the components of the vector Ax.

Let us observe that given two vector functions y(t) and z(t), which are solutions of the system x' = Ax, their linear combination c_1 y(t) + c_2 z(t) is also a solution of the same system, for any constants c_1 and c_2. Our system of differential equations is linear.

We now search for a solution of (1.4) in the form

(1.5)   x(t) = e^{\lambda t}\, \xi ,

with a constant \lambda and a constant vector \xi. Substituting (1.5) into the equation gives A\xi = \lambda \xi, so that \lambda must be an eigenvalue of A, and \xi a corresponding eigenvector.
If A has three linearly independent eigenvectors \xi_1, \xi_2, \xi_3, with the corresponding eigenvalues \lambda_1, \lambda_2, \lambda_3, then

(1.6)   x(t) = c_1 e^{\lambda_1 t}\xi_1 + c_2 e^{\lambda_2 t}\xi_2 + c_3 e^{\lambda_3 t}\xi_3

solves our system. We claim that (1.6) gives the general solution of our system, meaning that we can determine c_1, c_2, and c_3 to satisfy any initial conditions:

(1.7)   x(t_0) = c_1 e^{\lambda_1 t_0}\xi_1 + c_2 e^{\lambda_2 t_0}\xi_2 + c_3 e^{\lambda_3 t_0}\xi_3 = x_0 .
Example Solve

x' = [2 \; 1; \; 1 \; 2]\, x ,   x(0) = [1; 2] .

This is a 2 \times 2 system, so that we have only two terms in (1.6). We compute the eigenvalue \lambda_1 = 1, with the corresponding eigenvector \xi_1 = [1; -1], and \lambda_2 = 3, with the corresponding eigenvector \xi_2 = [1; 1]. The general solution is

x(t) = c_1 e^{t} [1; -1] + c_2 e^{3t} [1; 1] ,

or in components

x_1(t) = c_1 e^t + c_2 e^{3t} ,   x_2(t) = -c_1 e^t + c_2 e^{3t} .

The initial conditions give c_1 + c_2 = 1 and -c_1 + c_2 = 2, so that c_1 = -\frac{1}{2}, c_2 = \frac{3}{2}.
Example Solve

x' = [2 \; 1 \; 1; \; 1 \; 2 \; 1; \; 1 \; 1 \; 2]\, x ,   x(0) = [1; 0; 4] .

We calculate that this matrix has a double eigenvalue \lambda_1 = \lambda_2 = 1, which has two linearly independent eigenvectors [1; 0; -1] and [0; 1; -1], and a simple eigenvalue \lambda_3 = 4, with the eigenvector [1; 1; 1]. The general solution is

x(t) = c_1 e^{t} [1; 0; -1] + c_2 e^{t} [0; 1; -1] + c_3 e^{4t} [1; 1; 1] ,

or in components

x_1(t) = c_1 e^t + c_3 e^{4t} ,   x_2(t) = c_2 e^t + c_3 e^{4t} ,   x_3(t) = -c_1 e^t - c_2 e^t + c_3 e^{4t} .
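The eigenpairs claimed in this example can be verified with a few lines of stdlib Python (a quick sketch, exact in integer arithmetic):

```python
# Eigenpairs of A = [[2,1,1],[1,2,1],[1,1,2]] used in the example above
A = [[2, 1, 1], [1, 2, 1], [1, 1, 2]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

eigenpairs = [
    (1, [1, 0, -1]),   # double eigenvalue 1
    (1, [0, 1, -1]),
    (4, [1, 1, 1]),    # simple eigenvalue 4
]
for lam, xi in eigenpairs:
    assert matvec(A, xi) == [lam * c for c in xi]
```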
When a double eigenvalue \lambda_1 has only one linearly independent eigenvector \xi, one solution is x_1(t) = e^{\lambda_1 t}\xi, and a second linearly independent solution is sought in the form x_2(t) = t e^{\lambda_1 t}\xi + e^{\lambda_1 t}\eta, where the vector \eta satisfies

(1.8)   (A - \lambda_1 I)\, \eta = \xi .

Example Solve

x' = [1 \; 1; \; -1 \; 3]\, x .

This matrix has a double eigenvalue \lambda_1 = 2, with only one linearly independent eigenvector \xi = [1; 1], giving the solution x_1(t) = e^{2t}[1; 1]. We solve the system (1.8) for \eta = [\eta_1; \eta_2]:

-\eta_1 + \eta_2 = 1 ,   -\eta_1 + \eta_2 = 1 .

We discard the second equation, because it is a multiple of the first. The first equation has infinitely many solutions. But all we need is just one solution, that is not a multiple of \xi. So we set \eta_2 = 0, which gives \eta_1 = -1. We have computed the second linearly independent solution

x_2(t) = t e^{2t} [1; 1] + e^{2t} [-1; 0] ,

and the general solution is

x(t) = c_1 e^{2t} [1; 1] + c_2 \left( t e^{2t} [1; 1] + e^{2t} [-1; 0] \right) .

5.2

5.2.1
Recall that one differentiates complex valued functions much the same way as the real ones. For example,

\frac{d}{dt}\, e^{it} = i e^{it} ,

where i = \sqrt{-1} is treated the same way as any other constant. Any complex valued function f(t) can be written in the form f(t) = u(t) + iv(t), where u(t) and v(t) are real valued functions. It follows by the definition of derivative that f'(t) = u'(t) + iv'(t). For example, using Euler's formula,

\frac{d}{dt}\, e^{it} = \frac{d}{dt}\, (\cos t + i\sin t) = -\sin t + i\cos t = i(\cos t + i\sin t) = i e^{it} .

Any complex valued vector function x(t) can also be written as x(t) = u(t) + iv(t), where u(t) and v(t) are real valued vector functions. Again, we have x'(t) = u'(t) + iv'(t). If x(t) is a solution of our system (1.4), then

u'(t) + iv'(t) = A(u(t) + iv(t)) .

Separating the real and imaginary parts, we see that both u(t) and v(t) are real valued solutions of our system.
5.2.2

If \lambda = p + iq is a complex eigenvalue of A, with a corresponding eigenvector \xi, then the real and imaginary parts u(t) and v(t) of the complex solution e^{\lambda t}\xi give us two real valued solutions. In case of a 2 \times 2 matrix (when there are no other eigenvalues), the general solution is

(2.1)   x(t) = c_1 u(t) + c_2 v(t) .

(If one uses the other eigenvalue p - iq, and the corresponding eigenvector, the answer is the same.) We show in the exercises that one can choose the constants c_1 and c_2 to satisfy any initial condition x(t_0) = x_0.

Example Solve

x' = [-1 \; -2; \; 2 \; -1]\, x ,   x(0) = [2; 1] .

The eigenvalue \lambda_1 = -1 + 2i has the corresponding eigenvector \xi = [i; 1]. We rewrite the complex solution as

e^{(-1+2i)t} [i; 1] = e^{-t}(\cos 2t + i\sin 2t)\, [i; 1] = e^{-t} [-\sin 2t; \; \cos 2t] + i\, e^{-t} [\cos 2t; \; \sin 2t] .

The real and imaginary parts give us two linearly independent solutions, so that the general solution is

x(t) = c_1 e^{-t} [-\sin 2t; \; \cos 2t] + c_2 e^{-t} [\cos 2t; \; \sin 2t] .

We see from the form of the solutions that if a matrix A has all eigenvalues which are either negative or have negative real parts, then \lim_{t\to\infty} x(t) = 0 (all components of the vector x(t) tend to zero).
5.3

The initial value problem

(3.1)   x' = A x ,   x(0) = x_0

looks like a single equation. In case A and x_0 are constants (numbers), the solution of (3.1) is

(3.2)   x(t) = e^{At}\, x_0 .

In order to write the solution of our system in the form (3.2), we shall define the notion of the exponential of a matrix. First, we define powers of a matrix: A^2 = A \cdot A, A^3 = A^2 \cdot A, and so on. Starting with the Maclaurin series

e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots = \sum_{n=0}^\infty \frac{x^n}{n!} ,

we define, for a square matrix A (with A^0 = I),

e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \frac{A^4}{4!} + \cdots = \sum_{n=0}^\infty \frac{A^n}{n!} .
One can show that this series converges for any matrix A, so that we can compute e^A for any square matrix A (we shall prove this fact for diagonalizable matrices). If O is a square matrix with zero entries, then e^O = I.

We have

e^{At} = I + At + \frac{A^2 t^2}{2!} + \frac{A^3 t^3}{3!} + \frac{A^4 t^4}{4!} + \cdots ,

and then

\frac{d}{dt}\, e^{At} = A + A^2 t + \frac{A^3 t^2}{2!} + \frac{A^4 t^3}{3!} + \cdots = A e^{At} .

We conclude that the formula (3.2) gives the solution of the initial-value problem (3.1). (Observe that x(0) = e^{O} x_0 = x_0.)
Example Let A = [a \; 0; \; 0 \; b], where a and b are constants. Then A^n = [a^n \; 0; \; 0 \; b^n], and we have

e^A = [1 + a + \frac{a^2}{2!} + \frac{a^3}{3!} + \cdots \;\;\; 0; \;\;\; 0 \;\;\; 1 + b + \frac{b^2}{2!} + \frac{b^3}{3!} + \cdots] = [e^a \; 0; \; 0 \; e^b] .

Example Let A = [0 \; -1; \; 1 \; 0]. Then for any constant t, summing up the series (the powers of A repeat in cycles of four),

e^{At} = [\cos t \; -\sin t; \;\; \sin t \; \cos t] .
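The rotation formula can be confirmed by summing the defining series directly (a stdlib sketch for 2 x 2 matrices):

```python
import math

def mat_mult(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(M, terms=40):
    # e^M = sum_{n>=0} M^n / n!, summed term by term for a 2x2 matrix M
    result = [[1.0, 0.0], [0.0, 1.0]]   # starts at the identity (n = 0 term)
    power = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        power = mat_mult(power, M)
        result = [[result[i][j] + power[i][j] / math.factorial(n)
                   for j in range(2)] for i in range(2)]
    return result

t = 0.7
E = expm2([[0.0, -t], [t, 0.0]])
expected = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
for i in range(2):
    for j in range(2):
        assert abs(E[i][j] - expected[i][j]) < 1e-12
```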
This allows us to solve the system

(3.3)   x_1' = -x_2 ,   x_2' = x_1 ,   x_1(0) = \alpha, \; x_2(0) = \beta ,

in the form

[x_1(t); \; x_2(t)] = e^{At}\, [\alpha; \beta] = [\cos t \; -\sin t; \;\; \sin t \; \cos t]\, [\alpha; \beta] .

We see that the integral curves of our system are circles in the (x_1, x_2) plane. Indeed, the velocity vector [x_1'; x_2'] = [-x_2; x_1] is always
perpendicular to the position vector [x_1; x_2], so that |x(t)| stays constant.

Assume that the 3 \times 3 matrix A has three linearly independent eigenvectors \xi_1, \xi_2, \xi_3, with the corresponding eigenvalues \lambda_1, \lambda_2, \lambda_3. Form the matrix S = [\xi_1 \; \xi_2 \; \xi_3], with the eigenvectors as columns, and let \Lambda be the diagonal matrix with \lambda_1, \lambda_2, \lambda_3 on the diagonal. Working with the columns,

A S = [A\xi_1 \; A\xi_2 \; A\xi_3] = [\lambda_1 \xi_1 \; \lambda_2 \xi_2 \; \lambda_3 \xi_3] = S \Lambda ,

so that

(3.4)   S^{-1} A S = \Lambda .

Similarly,

(3.5)   A = S \Lambda S^{-1} .

One refers to the formulas (3.4) and (3.5) as giving the diagonalization of the matrix A. We see that a matrix with a complete set of 3 linearly independent eigenvectors can be diagonalized. An n \times n matrix A is diagonalizable, if it has a complete set of n linearly independent eigenvectors.

If A is diagonalizable, so that the formula (3.5) holds, then A^2 = A \cdot A = S\Lambda S^{-1} S\Lambda S^{-1} = S\Lambda^2 S^{-1}, and in general A^n = S\Lambda^n S^{-1}. We then have (for any real scalar t)

e^{At} = \sum_{n=0}^\infty \frac{A^n t^n}{n!} = \sum_{n=0}^\infty \frac{S\Lambda^n t^n S^{-1}}{n!} = S \left( \sum_{n=0}^\infty \frac{\Lambda^n t^n}{n!} \right) S^{-1} = S\, e^{\Lambda t}\, S^{-1} .
Example Let A = [2 \; -1; \; 1 \; 2]. Write At = 2tI + [0 \; -t; \; t \; 0]. Since the matrices 2tI and [0 \; -t; \; t \; 0] commute,

e^{At} = e^{2tI}\, e^{[0 \; -t; \; t \; 0]} = [e^{2t} \; 0; \; 0 \; e^{2t}]\, [\cos t \; -\sin t; \;\; \sin t \; \cos t] = e^{2t}\, [\cos t \; -\sin t; \;\; \sin t \; \cos t] .
Non-Homogeneous Systems

We shall solve the initial value problem

(3.6)   x' = A x + f(t) ,   x(t_0) = x_0 ,

with a given vector function f(t). The solution is

(3.7)   x(t) = e^{A(t-t_0)}\, x_0 + e^{At} \int_{t_0}^t e^{-As} f(s)\, ds .

How does one think of this formula? In case of one equation (when A is a number) we have an easy linear equation (with the integrating factor \mu = e^{-At}), for which (3.7) gives the solution. We use that \frac{d}{dt}\, e^{A(t-t_0)} = A e^{A(t-t_0)} to justify this formula for matrices.

Example Solve

x' = [0 \; -1; \; 1 \; 0]\, x + [1; t] ,   x(0) = [0; 3] .
By (3.7),

x(t) = e^{At}\, [0; 3] + e^{At} \int_0^t e^{-As}\, [1; s]\, ds .

We have

e^{At} = [\cos t \; -\sin t; \;\; \sin t \; \cos t] ,   e^{-As} = [\cos s \; \sin s; \;\; -\sin s \; \cos s] ,

so that

e^{-As}\, [1; s] = [\cos s + s\sin s; \;\; -\sin s + s\cos s] ,

and

\int_0^t e^{-As}\, [1; s]\, ds = \left[ \int_0^t (\cos s + s\sin s)\, ds; \;\; \int_0^t (-\sin s + s\cos s)\, ds \right] = [2\sin t - t\cos t; \;\; 2\cos t + t\sin t - 2] .

Multiplying by e^{At},

e^{At} \int_0^t e^{-As}\, [1; s]\, ds = [-t + 2\sin t; \;\; 2 - 2\cos t] .

Also e^{At}\, [0; 3] = [-3\sin t; \; 3\cos t]. We conclude that

x(t) = [-3\sin t; \; 3\cos t] + [-t + 2\sin t; \; 2 - 2\cos t] = [-t - \sin t; \;\; 2 + \cos t] .
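The final answer can be checked directly against the system, component by component (a sketch, with the derivatives written out by hand):

```python
from math import sin, cos

# x(t) = (-t - sin t, 2 + cos t) should solve x' = Ax + f(t) with
# A = [[0,-1],[1,0]], f(t) = (1, t), x(0) = (0, 3)
x1 = lambda t: -t - sin(t)
x2 = lambda t: 2 + cos(t)
x1p = lambda t: -1 - cos(t)
x2p = lambda t: -sin(t)

assert x1(0.0) == 0.0 and x2(0.0) == 3.0
for t in [0.0, 0.6, 1.9, 3.3]:
    assert abs(x1p(t) - (-x2(t) + 1)) < 1e-12   # x1' = -x2 + 1
    assert abs(x2p(t) - (x1(t) + t)) < 1e-12    # x2' = x1 + t
```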
5.3.1

We now describe the solution curves in the x_1 x_2-plane, near the origin (0,0), of the system

(3.8)   x' = A x ,

with a constant 2 \times 2 matrix A = [a_{11} \; a_{12}; \; a_{21} \; a_{22}], and x = [x_1; x_2].

If the eigenvalues of A are real and distinct, \lambda_1 \ne \lambda_2, with the corresponding eigenvectors \xi_1 and \xi_2, we know that the general solution is

x(t) = c_1 e^{\lambda_1 t}\xi_1 + c_2 e^{\lambda_2 t}\xi_2 .

We distinguish the following cases.

(i) Both eigenvalues are negative, \lambda_1 < \lambda_2 < 0. Then

x(t) \approx c_2 e^{\lambda_2 t}\xi_2 ,   as t \to \infty .

All solution curves (x_1(t), x_2(t)) tend to the origin (0,0) as t \to \infty, and they are tangent to the vector \xi_2. We have a stable node at the origin.

(ii) Both eigenvalues are positive, \lambda_1 > \lambda_2 > 0. All solution curves (x_1(t), x_2(t)) emerge from the origin (0,0), and they are tangent to the vector \xi_2. We have an unstable node at the origin.
(iii) The eigenvalues have different sign, \lambda_1 > 0 > \lambda_2. In case the initial point lies along the vector \xi_2 (so that c_1 = 0), the solution curve (x_1(t), x_2(t)) tends to the origin (0,0), as t \to \infty. All other solutions (when c_1 \ne 0) tend to infinity, and they are tangent to the vector \xi_1, as t \to \infty. We have a saddle at the origin. For example, if A = [1 \; 0; \; 0 \; -1], we have \lambda_1 = 1, \lambda_2 = -1, and x_1 = c_1 e^t, x_2 = c_2 e^{-t}. Express: x_2 = \frac{c_2}{e^t} = \frac{c_2}{x_1/c_1} = \frac{c_1 c_2}{x_1}. Denoting c = c_1 c_2, we see that solutions are the hyperbolas x_2 = \frac{c}{x_1}, which form a saddle near the origin.

Turning to the case of complex eigenvalues, we begin with a special matrix B = [p \; q; \; -q \; p], with the eigenvalues p \pm iq, and consider the system

(3.9)   y' = B y .
Its solutions are

y(t) = e^{Bt}\, [c_1; c_2] = e^{pt}\, [\cos qt \; \sin qt; \;\; -\sin qt \; \cos qt]\, [c_1; c_2] .

If p = 0, then any initial vector [c_1; c_2] is rotated around the origin (infinitely many times, as t \to \infty). We say that the origin is a center. If p < 0, the solutions spiral into the origin. We have a stable spiral at the origin. If p > 0, we have an unstable spiral.

Assume now that A is a 2 \times 2 matrix with complex eigenvalues p \pm iq, q \ne 0. Let \xi = u + iv be an eigenvector corresponding to p + iq, where u and v are real vectors. We have A(u+iv) = (p+iq)(u+iv) = pu - qv + i(pv + qu), and separating the real and imaginary parts,

(3.10)   A u = p u - q v , and A v = q u + p v .

Therefore, for the matrix P = [u \; v], with the columns u and v,

A P = [Au \; Av] = [pu - qv \;\; qu + pv] = P B ,

so that A = P B P^{-1}, and the picture for x' = Ax near the origin is that of y' = By, transformed by the linear map P: a center, a stable spiral, or an unstable spiral, according to the sign of p.
5.3.2
Problems
1. x =
"
3 1
1 3
x, x(0) =
"
1
3
. Answer:
"
0 1
2
2. x =
x, x(0) =
. Check your answer by reducing this
1 0
1
system to a second order equation for x1 (t). Answer:
0
x1 (t) = 21 et + 32 et
x2 (t) = 21 et + 32 et .
2 2 3
0
3. x0 = 2 3 2 x, x(0) = 1 . Answer:
4 2 5
2
x1 (t) = 3et 2e2t + 5e3t
x2 (t) = 4e2t + 5e3t
x3 (t) = 3et + 5e3t .
4 0 1
2
4. x0 = 2 1 0 x, x(0) = 5 . Answer:
2 0 1
0
x1 (t) = 2e2t 4e3t
2 1 1
2
5. x0 = 1 2 1 x, x(0) = 3 . Answer:
1 1 2
2
x1 (t) = 3et + e4t
x2 (t) = 2et + e4t
x3 (t) = et + e4t .
1
4 0
2
0
6. x = 4 7 0 x, x(0) = 6 . Answer:
0
0 5
1
x1 (t) = 2e3t (1 + 8t)
"
"
0 2
2
7. x =
x, x(0) =
. Check your answer by reducing this
2
0
1
system to a second order equation for x1 (t). Answer:
0
8. x =
"
3 2
2
3
x, x(0) =
"
0
1
. Answer:
1
2
2
1
9. x0 = 1
1
0 x, x(0) = 1 . Answer:
2
0 2 1
x1 (t) = cos t + 5 sin t
181
x =
"
1 1
1
1
x+
"
1
t
, x(0) =
"
"
1t
2
4
1
a
b
b a
x, with positive
dx
= ax + by
dt
dy
= mx + ny ,
dt
where a,b,m and n are real constants.
(3.11)
(ii) Assume that (0, 0) is a center for (3.11). Show that the equation (3.12)
is exact, and solve it.
Hint: One needs n = a (and also that b and m have the opposite signs),
in order for the matrix of (3.11) to have purely imaginary eigenvalues.
Answer. mx2 by 2 = c, a family of ellipses.
5. Let A be a real 3 3 constant matrix. Suppose that all solutions of
x0 = Ax are bounded as t +, and as t . Show that every
solution is periodic, and there is a common period for all solutions.
Hint: One of the eigenvalues of A must be zero, and the other two purely
imaginary.
6. Let x(t) and y(t) be two solutions of the system
x0 = Ax ,
with an n n matrix A. Show that 5x(t), and x(t) + y(t) are also solutions.
Show that the same is true for c1 x(t) + c2 y(t), with any numbers c1 and c2 .
Are the above conclusions true if the entries of A depend on t ?
7. Show that the series for eA converges for any diagonalizable matrix A.
Hint: If A = SS 1 , then eA = Se S 1 .
8. (i) Suppose that α + iβ is an eigenvalue of A, and ξ + iη is a corresponding
eigenvector. Show that α − iβ is also an eigenvalue of A, and ξ − iη is a
corresponding eigenvector. (A is an n × n matrix with real entries.)
(ii) Show that ξ and η are linearly independent. (They are not constant
multiples of each other.)
(iii) Show that the formula (2.1) gives the general solution of the system
x' = Ax, so that we can choose c1 and c2, with x(t0) = x0, for any initial
condition.
Hint: Decompose x0 as a linear combination of ξ and η, and then find c1
and c2.
x' = Ax .
2. Let A = [ 0 1 ; 0 0 ] . Show that
e^{At} = [ 1 t ; 0 1 ] ,
and that for A = [ 0 1 0 ; 0 0 1 ; 0 0 0 ],
e^{At} = [ 1 t t^2/2 ; 0 1 t ; 0 0 1 ] .
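Both identities can be checked straight from the definition of e^{At}, because these matrices are nilpotent: A^2 = 0 in the 2-by-2 case and A^3 = 0 in the 3-by-3 case, so the series terminates after finitely many terms. A sketch:

```python
import numpy as np

A2 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
A3 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0]])

def exp_nilpotent(A, t, nmax=5):
    # e^{At} = I + At + (At)^2/2! + ...  (terminates once A^k = 0)
    E, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, nmax):
        term = term @ (A * t) / k
        E = E + term
    return E

t = 1.7
assert np.allclose(exp_nilpotent(A2, t), [[1, t], [0, 1]])
assert np.allclose(exp_nilpotent(A3, t), [[1, t, t**2/2], [0, 1, t], [0, 0, 1]])
```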
3. Let A = [ 0 -1 0 ; 1 0 0 ; 0 0 2 ] . Show that
e^{At} = [ cos t  -sin t  0 ; sin t  cos t  0 ; 0  0  e^{2t} ] .
4. Let A = [ 3 -1 0 ; 1 3 0 ; 0 0 2 ] . Show that
e^{At} = [ e^{3t} cos t  -e^{3t} sin t  0 ; e^{3t} sin t  e^{3t} cos t  0 ; 0  0  e^{2t} ] .
5. Let A = [ 0 1 0 ; 0 0 0 ; 0 0 2 ] . Show that
e^{At} = [ 1 t 0 ; 0 1 0 ; 0 0 e^{2t} ] .
6. Let A = [ 0 1 ; 1 0 ] . Show that
e^{At} = [ cosh t  sinh t ; sinh t  cosh t ] .
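A numerical matrix exponential confirms this last identity, and also the rotation identity appearing in Exercise 3 above (where the sign in the matrix is an assumption, since minus signs are easily lost in print):

```python
import numpy as np
from scipy.linalg import expm

t = 0.9

# Exercise 6: A = [[0,1],[1,0]] gives the cosh/sinh matrix
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.allclose(expm(A*t), [[np.cosh(t), np.sinh(t)],
                               [np.sinh(t), np.cosh(t)]])

# Rotation block from Exercise 3: J = [[0,-1],[1,0]]  (sign assumed)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(expm(J*t), [[np.cos(t), -np.sin(t)],
                               [np.sin(t),  np.cos(t)]])
```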
5.4
c1 x1 + c2 x2 + ... + ck xk = 0 .
x1 = -(c2/c1) x2 - ... - (ck/c1) xk ,
A^T v = 0 ,
(4.4)
x' = A(t)x ,
where the n × n matrix A(t) depends on t. Let the vectors x1(t), x2(t), . . . , xn(t)
be linearly independent (at all t) solutions of this system. We use these vectors as columns of the matrix X(t) = [x1(t) x2(t) . . . xn(t)]. The matrix
X(t) is called a fundamental solution matrix, or a fundamental matrix for
short. If, moreover, X(0) = I (the identity matrix), we call X(t) the normalized fundamental matrix. We claim that the general solution of (4.4)
is
x(t) = c1 x1(t) + c2 x2(t) + ... + cn xn(t) = X(t)c ,
where c is the column vector c = [c1 c2 . . . cn]^T, and the ci's are arbitrary
constants. Indeed, if y(t) is any solution of (4.4), we choose the vector c0 so
that X(0)c0 = y(0) (or c0 = X^{-1}(0)y(0)). Then the two solutions of (4.4),
X(t)c0 and y(t), have the same initial values at t = 0. By the uniqueness
theorem (which holds for systems too), y(t) = X(t)c0.
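Numerically, a normalized fundamental matrix can be assembled column by column: solve x' = A(t)x once for each column of the identity as initial data. Then X(t)c reproduces the solution with x(0) = c. A sketch, with A(t) chosen arbitrarily:

```python
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    # an arbitrary 2x2 matrix with t-dependent entries (an assumption)
    return np.array([[0.0, 1.0],
                     [-1.0 - 0.5*np.sin(t), 0.0]])

def rhs(t, x):
    return A(t) @ x

def X(t):
    # normalized fundamental matrix: columns solve x' = A(t)x, X(0) = I
    cols = []
    for e in np.eye(2):
        sol = solve_ivp(rhs, (0.0, t), e, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.array(cols).T

# general solution: x(t) = X(t) c, with c = x(0)
c = np.array([1.0, 2.0])
direct = solve_ivp(rhs, (0.0, 3.0), c, rtol=1e-10, atol=1e-12).y[:, -1]
assert np.allclose(X(3.0) @ c, direct, atol=1e-6)
```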
Let Y (t) be another fundamental matrix Y (t) = [y1 (t) y2 (t) . . . yn (t)].
Its first column y1 (t) is a solution of (4.4), and so y1 (t) = X(t)d1 , where
d1 is a constant n-dimensional vector. Similarly, y2 (t) = X(t)d2, and so on.
Form a matrix D = [d1 d2 . . . dn ], with constant entries. Then
(4.5)
Y (t) = X(t)D ,
by the rules of matrix multiplication. Observe that the matrix D is non-singular (D = X^{-1}(t)Y(t), and D^{-1} = Y^{-1}(t)X(t)).
Any fundamental matrix X satisfies the equation (4.4), so that
X' = A(t)X .
Indeed, the first column on the left, which is x1', is equal to the first column
on the right, which is Ax1, and so on. Using the last formula, it is straightforward
to justify that the general solution of the non-homogeneous system
x' = A(t)x + f(t)
is given by
(4.6)
x(t) = X(t)c + X(t) ∫ X^{-1}(s)f(s) ds .
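Formula (4.6) can be tested numerically for a constant matrix A, where X(t) = e^{At} is the normalized fundamental matrix and the integral can be done by quadrature. A sketch, with A and f(t) chosen arbitrarily, compared against direct numerical integration of the system:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])                 # assumed matrix
f = lambda t: np.array([np.cos(t), 0.0])     # assumed forcing
c = np.array([1.0, 0.0])                     # = x(0), since X(0) = I
T = 2.0

# x(T) = X(T)c + X(T) * integral_0^T X^{-1}(s) f(s) ds, with X(t) = e^{At}
s = np.linspace(0.0, T, 4001)
integrand = np.array([expm(-A*si) @ f(si) for si in s])   # X^{-1}(s) = e^{-As}
integral = trapezoid(integrand, s, axis=0)
x_formula = expm(A*T) @ c + expm(A*T) @ integral

x_direct = solve_ivp(lambda t, x: A @ x + f(t), (0.0, T), c,
                     rtol=1e-10, atol=1e-12).y[:, -1]
assert np.allclose(x_formula, x_direct, atol=1e-5)
```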
Periodic Systems
We now consider n × n systems with periodic coefficients
(4.7)
x' = A(t)x .
We assume that all entries of the n × n matrix A(t) are functions of period p.
Any solution x(t) of (4.7) satisfies this system at all times t, in particular at
t + p: x'(t + p) = A(t + p)x(t + p), which implies that
x'(t + p) = A(t)x(t + p) ,
so that x(t + p) is also a solution of (4.7). Let X(t) be a fundamental matrix
of (4.7); then so is X(t + p), and by (4.5)
(4.8)
X(t + p) = X(t)D ,
We now consider the non-homogeneous system
(4.9)
z' = A(t)z + f(t) ,
where the vector f(t) also has period p. If X(t) is the normalized fundamental
matrix of (4.7) (so that X(0) = I), then by (4.6),
z(t) = X(t)z(0) + X(t) ∫_0^t X^{-1}(s)f(s) ds .
In particular,
z(p) = X(p)z(0) + b ,
where we have denoted b = X(p) ∫_0^p X^{-1}(s)f(s) ds. By the periodicity of
our system, z(t + p) is also a solution of (4.9), which is equal to z(p) at t = 0.
Therefore, using (4.6) again,
z(t + p) = X(t)z(p) + X(t) ∫_0^t X^{-1}(s)f(s) ds .
Then
z(2p) = X(p)z(p) + b = X(p) (X(p)z(0) + b) + b = X^2(p)z(0) + X(p)b + b .
By induction, for any integer m > 0,
(4.10)
z(mp) = X^m(p)z(0) + Σ_{k=0}^{m-1} X^k(p) b .
Any solution of (4.9) satisfies
x(t) = X(t)x(0) + X(t) ∫_0^t X^{-1}(s)f(s) ds .
We obtain a p-periodic solution, with x(p) = x(0), provided that the initial
vector x(0) satisfies
(4.11)
(I - X(p)) x(0) = b ,
where, as before, b = X(p) ∫_0^p X^{-1}(s)f(s) ds.
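Equation (4.11) is also a practical recipe: compute the monodromy matrix X(p), form the vector b, solve the linear system for x(0), and integrate from that initial vector to obtain the p-periodic solution. A sketch for a constant matrix A (constant coefficients are trivially p-periodic), with the data chosen arbitrarily:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])                 # assumed; eigenvalues -0.5 +- i
f = lambda t: np.array([np.cos(t), 0.0])     # p-periodic forcing (assumed)
p = 2*np.pi

Xp = expm(A*p)                               # X(p), the monodromy matrix
s = np.linspace(0.0, p, 8001)
vals = np.array([expm(-A*si) @ f(si) for si in s])   # X^{-1}(s) f(s)
b = Xp @ trapezoid(vals, s, axis=0)

# (I - X(p)) x(0) = b  picks out the periodic initial vector
x0 = np.linalg.solve(np.eye(2) - Xp, b)

sol = solve_ivp(lambda t, x: A @ x + f(t), (0.0, p), x0,
                rtol=1e-10, atol=1e-12)
assert np.allclose(sol.y[:, -1], x0, atol=1e-4)      # x(p) = x(0)
```

Here I − X(p) is invertible because the eigenvalues of A have negative real part, so no Floquet multiplier equals 1.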
Assume, contrary to what we want to prove, that the system (4.9) has no
p-periodic solutions. Then the system (4.11) has no solutions. This implies
that det (I - X(p)) = 0, and then det (I - X(p))^T = det (I - X(p)) = 0. It
follows that the system
(4.12)
(I - X(p))^T v = 0
has non-trivial solutions, and, by the Fredholm alternative, we can find a
non-trivial solution v0 of (4.12), which satisfies
(4.13)
(b, v0) ≠ 0 .
We now take the scalar product of (4.10) with v0, and use (4.14):
(z(mp), v0) = (X^m(p)z(0), v0) + Σ_{k=0}^{m-1} (X^k(p)b, v0)
= (z(0), (X^m(p))^T v0) + Σ_{k=0}^{m-1} (b, (X^k(p))^T v0)