Second Order Ode
Differential Equations
where the $a_i$ are real numbers, and we attempt to find a solution of the form
$y(t) = e^{rt}$. (Note the following notational convention:
\[ y^{(n)} := \frac{d^n y}{dt^n}, \]
while $y^n$ := $y$ raised to the $n$th power.) This attempt leads to the characteristic
equation (after dividing by $e^{rt}$):
With this kind of problem, the only hard part is to find the zeros of our
characteristic equation (1.1.2). Here are two helpful facts from algebra:
1. If the $a_i$ are integers, and $p/q$ is a rational root in lowest terms, then $p$ is
a factor of $a_n$ and $q$ is a factor of $a_0$.
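Fact 1 is easy to put to work with a computer algebra system. As a concrete illustration (the equation is my own example, not one from the notes), for $y''' - 6y'' + 11y' - 6y = 0$ the rational root candidates are the divisors of the constant term, and SymPy confirms the roots:

```python
import sympy as sp

r = sp.symbols('r')
# Characteristic polynomial of y''' - 6y'' + 11y' - 6y = 0 (a made-up example).
# Here a_0 = 1 (leading) and a_3 = -6 (constant), so any rational root p/q
# has p dividing -6 and q dividing 1: the candidates are +-1, +-2, +-3, +-6.
p = r**3 - 6*r**2 + 11*r - 6
roots = sp.solve(p, r)
print(sorted(roots))  # [1, 2, 3]
```

The general solution is then $y(t) = c_1 e^{t} + c_2 e^{2t} + c_3 e^{3t}$.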
where the $a_i$ are real numbers, and each function $g_i(t)$ is the product of some
combination of
4. Now if any of the terms of our guess are in fact solutions to the homogeneous
problem (i.e. (1.1.1)), then we adjust our guess by multiplying it by $t$ to a
power large enough to ensure that all of the terms of our new guess will
NOT be solutions of the homogeneous problem (1.1.1).
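This adjustment can be seen in a small SymPy check (the equation $y'' - y = e^t$ is an assumed example, not one from the notes): the naive guess $Ae^t$ already solves the homogeneous problem, so it must be multiplied by $t$.

```python
import sympy as sp

t, A = sp.symbols('t A')
# Naive guess A*e^t for y'' - y = e^t fails: it solves the homogeneous
# equation, so substituting it leaves residual 0, which can never equal e^t.
naive = A*sp.exp(t)
assert sp.simplify(naive.diff(t, 2) - naive) == 0
# Adjusted guess A*t*e^t: substituting determines A = 1/2.
guess = A*t*sp.exp(t)
residual = sp.simplify(guess.diff(t, 2) - guess - sp.exp(t))
A_val = sp.solve(residual, A)[0]
print(A_val)  # 1/2
```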
2 Nastier Equations
Now we present some methods for solving these nastier equations.
then
\[ Y(t) = -y_1(t)\int \frac{y_2(t)\,g(t)}{W(y_1,y_2)(t)}\,dt + y_2(t)\int \frac{y_1(t)\,g(t)}{W(y_1,y_2)(t)}\,dt \qquad (2.1.2) \]
Of course this method assumes that you have already found two linearly
independent solutions, $y_1$ and $y_2$.
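Formula (2.1.2) is mechanical once $y_1$ and $y_2$ are known; here is a sketch in SymPy for the assumed example $y'' + y = \sec(t)$, with $y_1 = \cos(t)$ and $y_2 = \sin(t)$:

```python
import sympy as sp

t = sp.symbols('t')
# Assumed example: y'' + y = sec(t); homogeneous solutions cos(t), sin(t).
y1, y2, g = sp.cos(t), sp.sin(t), sp.sec(t)
W = sp.simplify(y1*sp.diff(y2, t) - y2*sp.diff(y1, t))  # Wronskian; here W = 1
# Formula (2.1.2):
Y = -y1*sp.integrate(y2*g/W, t) + y2*sp.integrate(y1*g/W, t)
Y = sp.simplify(Y)  # mathematically t*sin(t) + cos(t)*log(cos(t))
assert sp.simplify(Y.diff(t, 2) + Y - g) == 0  # Y really solves the ODE
```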
In general, if we have an equation in which the lowest-order derivative that
appears is $y^{(n-m)}$, with $n > m > 0$, then the substitution $u = y^{(n-m)}$ will
simplify things. (It will reduce the order from $n$ to $m$.)
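For instance (my own example, with $n = 4$ and $m = 2$): the equation $y'''' - y'' = 0$ contains no derivative of order below $n - m = 2$, so $u = y''$ satisfies the second-order equation $u'' - u = 0$.

```python
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')
# In y'''' - y'' = 0 the lowest derivative present is y'' (n = 4, m = 2),
# so u = y'' obeys the order-2 equation u'' - u = 0.
u_sol = sp.dsolve(u(t).diff(t, 2) - u(t), u(t)).rhs
assert sp.simplify(u_sol.diff(t, 2) - u_sol) == 0
# Integrating u twice (picking up two more constants) recovers y.
```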
The trick to solving this equation is to introduce the change of variables
$x = \ln(t)$ (so $\frac{dx}{dt} = \frac{1}{t}$), and use the chain rule (and a bunch of scratch paper)
to derive the following equation relating $y$ and $x$:
In this last equation we have $y$ as a function of $x$, not $t$, and the derivatives are
with respect to $x$, not $t$. Now we have a constant coefficient equation whose
solution, say $y(x) = c_1 y_1(x) + c_2 y_2(x)$, is easy to find. Our solution to (2.3.1)
is then
\[ y(t) = c_1 y_1(\ln(t)) + c_2 y_2(\ln(t)). \qquad (2.3.3) \]
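A quick SymPy check of (2.3.3) on an assumed instance of (2.3.1), $t^2 y'' + t y' + y = 0$: the corresponding $x$-equation is $y'' + y = 0$, with solutions $\cos(x)$ and $\sin(x)$.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
c1, c2 = sp.symbols('c1 c2')
# Assumed Euler equation: t^2 y'' + t y' + y = 0. With x = ln(t) it becomes
# y'' + y = 0, so (2.3.3) gives y(t) = c1*cos(ln t) + c2*sin(ln t).
y = c1*sp.cos(sp.log(t)) + c2*sp.sin(sp.log(t))
residual = t**2*y.diff(t, 2) + t*y.diff(t) + y
assert sp.simplify(residual) == 0  # holds identically, for every c1 and c2
```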
Now for those of you who want practice with the chain rule, and those
mathematicians among you who don't want to accept stuff on faith, here is the
derivation:
We use the chain rule in the following form:
\[ \frac{d}{dt}\,g(x(t)) = g'(x(t))\,x'(t) \qquad \text{or} \qquad \frac{dg}{dt} = \frac{dg}{dx}\,\frac{dx}{dt} \qquad (2.3.4) \]
with $g = y$ and then again with $g = \frac{dy}{dx}$. Since $x = \ln(t)$ we get:
\[ \frac{dy}{dt} = \frac{dy}{dx}\,\frac{dx}{dt} = \frac{dy}{dx}\,\frac{1}{t} \qquad (2.3.5) \]
\begin{align*}
\frac{d^2 y}{dt^2} &= \frac{d}{dt}\left[\frac{dy}{dt}\right] = \frac{d}{dt}\left[\frac{1}{t}\,\frac{dy}{dx}\right] \\
&= -\frac{1}{t^2}\,\frac{dy}{dx} + \frac{1}{t}\,\frac{d}{dt}\left[\frac{dy}{dx}\right] && \text{by the product rule} \\
&= -\frac{1}{t^2}\,\frac{dy}{dx} + \frac{1}{t}\left[\frac{d^2 y}{dx^2}\,\frac{dx}{dt}\right] && \text{by the chain rule with } g = \tfrac{dy}{dx} \qquad (2.3.6) \\
&= -\frac{1}{t^2}\,\frac{dy}{dx} + \frac{1}{t^2}\,\frac{d^2 y}{dx^2}.
\end{align*}
Now we substitute into (2.3.1):
\begin{align*}
0 &= t^2\,\frac{d^2 y}{dt^2} + \alpha t\,\frac{dy}{dt} + \beta y \\
&= t^2\left[-\frac{1}{t^2}\,\frac{dy}{dx} + \frac{1}{t^2}\,\frac{d^2 y}{dx^2}\right] + \alpha t\left[\frac{1}{t}\,\frac{dy}{dx}\right] + \beta y \qquad (2.3.7) \\
&= y'' - y' + \alpha y' + \beta y = y'' + (\alpha - 1)\,y' + \beta y,
\end{align*}
where the primes now denote derivatives with respect to $x$.
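The identity (2.3.6) can be sanity-checked on a concrete function, say $y = x^3$ with $x = \ln(t)$ (my own test case):

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)
# Check (2.3.6): d^2y/dt^2 = -(1/t^2) dy/dx + (1/t^2) d^2y/dx^2,
# on the concrete choice y = x^3, x = ln(t).
y_of_x = x**3
lhs = sp.diff(y_of_x.subs(x, sp.log(t)), t, 2)
rhs = (-y_of_x.diff(x) + y_of_x.diff(x, 2)).subs(x, sp.log(t))/t**2
assert sp.simplify(lhs - rhs) == 0
```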
2.4 Reduction of Order
Suppose that we have one solution, $y_1(t)$ (not identically zero), of
To find a second solution, we let $y_2(t) := v(t)\,y_1(t)$. Substituting into (2.4.1)
yields
So by letting $u := v'$, we get
which is a linear first-order equation for $u$, and hopefully not too hard to solve.
Since $u = v'$, we integrate $u$ (our solution of the last ODE) to get $v$, and then
we multiply by $y_1$ to get $y_2$.
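The whole procedure can be sketched in SymPy for the assumed example $y'' - y = 0$ with known solution $y_1 = e^t$:

```python
import sympy as sp

t = sp.symbols('t')
v, u = sp.Function('v'), sp.Function('u')
y1 = sp.exp(t)  # one known solution of y'' - y = 0 (assumed example)
y2 = v(t)*y1
# Substitute y2 = v*y1 into y'' - y = 0; the terms with no derivative of v
# cancel (because y1 is a solution), leaving e^t * (v'' + 2 v').
expr = sp.expand(y2.diff(t, 2) - y2)
ode_u = expr.subs(v(t).diff(t, 2), u(t).diff(t)).subs(v(t).diff(t), u(t))
u_sol = sp.dsolve(ode_u, u(t)).rhs  # first-order linear: u = C1*exp(-2*t)
v_sol = sp.integrate(u_sol, t)      # integrate u to get v
y2_sol = sp.simplify(v_sol*y1)      # proportional to exp(-t)
assert sp.simplify(y2_sol.diff(t, 2) - y2_sol) == 0  # a genuine solution
```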
where
\[ p(x) = \frac{Q(x)}{P(x)} \qquad \text{and} \qquad q(x) = \frac{R(x)}{P(x)} \]
are rational functions. We look for solutions near a point $x_0$ where $P(x_0) \neq 0$,
and we look for them in the form
\[ y(x) = \sum_{j=0}^{\infty} a_j\,(x - x_0)^j, \qquad (2.5.2) \]
i.e. we assume that our solution is analytic, and we plug in its power series.
A few facts:
1. With the conditions which we have imposed on $P$, $Q$, and $R$, we will
have two linearly independent analytic solutions of our equation. (The
important part of this statement is the analyticity.)
2. If
\[ \sum_{j=0}^{\infty} a_j\,(x - x_0)^j = \sum_{j=0}^{\infty} b_j\,(x - x_0)^j, \qquad (2.5.3) \]
then $a_j = b_j$ for every $j$.
3. If $f(x) = \sum_{j=0}^{\infty} a_j\,(x - x_0)^j$, then
\[ a_j = \frac{f^{(j)}(x_0)}{j!}. \]
Because of the second item above, when substituting into the ODE (2.5.1), one
typically has to shift indices so that like powers of $(x - x_0)$ can be grouped
together. In particular, the following shifts need to be done quite frequently:
\begin{align*}
y'(x) &= \sum_{j=1}^{\infty} j\,a_j\,(x - x_0)^{j-1} = \sum_{j=0}^{\infty} (j+1)\,a_{j+1}\,(x - x_0)^j \\
y''(x) &= \sum_{j=2}^{\infty} j(j-1)\,a_j\,(x - x_0)^{j-2} = \sum_{j=0}^{\infty} (j+2)(j+1)\,a_{j+2}\,(x - x_0)^j \qquad (2.5.4)
\end{align*}
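The first shift in (2.5.4) can be verified term by term on a truncated series (my own check, using six terms):

```python
import sympy as sp

x, x0 = sp.symbols('x x0')
a = sp.IndexedBase('a')
N = 6  # number of terms kept in the truncated series (arbitrary)
y = sum(a[j]*(x - x0)**j for j in range(N))
# Differentiate term by term, then compare with the re-indexed sum: the
# coefficient of (x - x0)^j in y' is (j + 1)*a[j + 1].
lhs = sp.expand(sp.diff(y, x))
rhs = sp.expand(sum((j + 1)*a[j + 1]*(x - x0)**j for j in range(N - 1)))
assert sp.expand(lhs - rhs) == 0
```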
for some constants, $c_i$. If the functions $f_i$ are solutions to a linear ODE, then
the last statement is equivalent to saying that the solution $f_{i_0}$ can be obtained
from the other $f_i$ by the principle of superposition (see below).
The functions $f_1, \ldots, f_n$ are linearly independent on $I$ if they are not
linearly dependent. So, none of the $f_i$ are linear combinations of the other $f_i$.
If the $f_i$ are solutions, then none of the $f_i$ are superpositions of the others.
3.2 Results
3.1 Theorem (Existence and Uniqueness). If the $p_i(t)$ and $g(t)$ are all continuous
on the open interval $I$ (and $t_0$ is in $I$), then there exists a unique solution of
(3.0.2) which satisfies (3.0.3).
Trivial observations:
Now we start listing everything that we know about our homogeneous equation
(3.0.1) on an interval $I$ where we assume that the $p_i(t)$ are continuous.
In light of the first three points above, we can make more sense of the existence
and uniqueness theorem: since the general solution of (3.0.1) has $n$ undetermined
constants, $c_1, \ldots, c_n$, we need exactly $n$ initial conditions to get $n$ equations
for our $n$ unknowns, and therefore exactly one solution. As expected, (3.0.3)
supplies exactly $n$ conditions.
Now we turn to our inhomogeneous problem (3.0.2). We assume that
$y_1, \ldots, y_n$ are linearly independent solutions of our homogeneous problem
(3.0.1), and we assume that $Y(t)$ and $Z(t)$ are particular solutions of (3.0.2).
1. Any solution $y(t)$ of (3.0.2) can be put into the form $y(t) = c_1 y_1(t) +
\cdots + c_n y_n(t) + Y(t)$.
4. All of these results are based on the linearity of $L$, our differential operator,
which is due to the linearity of taking $m$ derivatives:
\[ \frac{d^m}{dx^m}\left[c_1 f_1(x) + \cdots + c_n f_n(x)\right] = c_1\,\frac{d^m}{dx^m} f_1(x) + \cdots + c_n\,\frac{d^m}{dx^m} f_n(x). \]
To prove (2) for example we compute:
\begin{align*}
L[y] &= L[c_1 y_1 + \cdots + c_n y_n + Y] \\
&= c_1 L[y_1] + \cdots + c_n L[y_n] + L[Y] \\
&= c_1 \cdot 0 + \cdots + c_n \cdot 0 + g(t) = g(t).
\end{align*}
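This computation can be mirrored in SymPy on an assumed example, $L[y] = y'' + y$ with $g(t) = t$ and particular solution $Y(t) = t$:

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
# Assumed example: L[y] = y'' + y, g(t) = t. Homogeneous solutions are
# cos(t) and sin(t); Y(t) = t is a particular solution. By linearity,
# y = c1*cos(t) + c2*sin(t) + t satisfies L[y] = g for every c1, c2.
y = c1*sp.cos(t) + c2*sp.sin(t) + t
assert sp.simplify(y.diff(t, 2) + y - t) == 0
```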
Copyright © 1999 Ivan Blank