
Second and Higher Order Differential Equations

1 Constant Coefficient Equations


The methods presented in this section work for nth order equations.

1.1 Homogeneous Equations


We consider the equation:

a_0 y^{(n)}(t) + a_1 y^{(n-1)}(t) + . . . + a_n y(t) = 0 (1.1.1)

where the a_i are real numbers, and we attempt to find a solution of the form
y(t) = e^{rt}. (Note the following notational convention:

y^{(n)} := d^n y / dt^n

while y^n := y raised to the nth power.) This attempt leads to the characteristic
equation (after dividing by e^{rt}):

P(r) := a_0 r^n + a_1 r^{n-1} + . . . + a_{n-1} r + a_n = 0 . (1.1.2)

The Fundamental Theorem of Algebra guarantees that we will have n (not
necessarily distinct) roots, r_i, of our characteristic equation.
The type of solutions which we get will now depend on whether the root
of our characteristic equation is real or complex.
Corresponding to a real root r repeated μ times we get the solutions:

e^{rt} , t e^{rt} , . . . , t^{μ-1} e^{rt} .

Corresponding to a complex root λ + iμ, we will always have its complex
conjugate λ - iμ (we are using the fact that the a_i are real), and we get
the solutions

e^{λt} sin(μt)   and   e^{λt} cos(μt) .

With this kind of problem, the only hard part is to find the zeros of our
characteristic equation (1.1.2). Here are two helpful facts from algebra:

1. If the a_i are integers, and p/q is a rational root, then p is a factor of a_n
and q is a factor of a_0.

2. If r_1 is a root of P(r), our characteristic polynomial, then (r - r_1) is a factor
of P(r). In other words, there is a polynomial Q(r) (which can be found
by long division) such that (r - r_1)Q(r) = P(r). (This fact is called
the Factor Theorem in some Precalculus texts.) Now the degree of Q
is smaller than the degree of P, so it will probably be easier to find the
roots of Q.
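As a quick numerical check of this recipe, the characteristic roots can be computed with `numpy.roots`, which takes the coefficients a_0, . . . , a_n. This is a sketch with a made-up example equation, not part of the notes:

```python
import numpy as np

# Hypothetical example: y''' - 3y'' + 3y' - y = 0.
# Characteristic polynomial (1.1.2): P(r) = r^3 - 3r^2 + 3r - 1 = (r - 1)^3,
# so r = 1 is a real root repeated 3 times.
coeffs = [1, -3, 3, -1]        # a_0, a_1, a_2, a_3
roots = np.roots(coeffs)       # numerical roots (may carry tiny imaginary parts)
print(np.round(roots, 4))

# A real root r = 1 repeated 3 times gives the solutions e^t, t*e^t, t^2*e^t.
```

Numerical root-finders smear a repeated root into a small cluster, which is why the roots are rounded before printing.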

1.2 Nonhomogeneous Equations


Now we consider the equation:

a_0 y^{(n)}(t) + a_1 y^{(n-1)}(t) + . . . + a_n y(t) = g_1(t) + . . . + g_m(t) (1.2.1)

where the a_i are real numbers, and each function g_i(t) is the product of some
combination of

1. t to a positive whole number power,

2. e to a constant times t, and

3. a sine or cosine of a constant times t.

We attempt to solve equation (1.2.1) by guessing the form of the answer.
This method is called the method of undetermined coefficients. Rather than
completely random guesswork, we have some guides:

1. We can solve (1.2.1) by adding our solutions to the m separate problems:

a_0 y^{(n)}(t) + a_1 y^{(n-1)}(t) + . . . + a_n y(t) = g_i(t) .

2. We never guess a sine without a cosine and vice versa.

3. If A_0 t^α f(t) is the leading term of our guess, then we should guess

A_0 t^α f(t) + A_1 t^{α-1} f(t) + . . . + A_{α-1} t f(t) + A_α f(t).

4. Now if any of the terms of our guess are in fact solutions to the homogeneous
problem (i.e. (1.1.1)), then we adjust our guess by multiplying it
by t to a power large enough to ensure that all of the terms of our new
guess will NOT be solutions of the homogeneous problem (1.1.1).
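The coefficient bookkeeping in these rules can be delegated to a computer algebra system. Here is a minimal sympy sketch with a made-up example, y'' + y = t e^t; the guess (A_0 t + A_1)e^t follows rule 3, and rule 4 does not apply since e^t does not solve y'' + y = 0:

```python
import sympy as sp

t, A0, A1 = sp.symbols('t A0 A1')

# Hypothetical example: y'' + y = t*exp(t).  Guess Y = (A0*t + A1)*exp(t).
Y = (A0*t + A1)*sp.exp(t)
residual = sp.expand(sp.diff(Y, t, 2) + Y - t*sp.exp(t))

# residual has the form (...)*t*exp(t) + (...)*exp(t); divide out exp(t)
# and set each polynomial coefficient of t to zero.
poly = sp.Poly(sp.expand(residual*sp.exp(-t)), t)
sol = sp.solve(poly.coeffs(), [A0, A1])
print(sol)   # {A0: 1/2, A1: -1/2}, i.e. Y = (t/2 - 1/2)*exp(t)
```

Dividing out e^t works here because every term of the residual carries the same exponential factor; with a sine or cosine forcing term one would instead match the coefficients of sin and cos separately.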

2 Nastier Equations
Now we present some methods for solving nastier equations.

2.1 Variation of Parameters


If p, q, and g are continuous on the open interval I, and if y1 and y2 are
linearly independent solutions of the homogeneous equation

y''(t) + p(t)y'(t) + q(t)y(t) = 0 , (2.1.1)

then

Y(t) = -y_1(t) ∫ [y_2(t)g(t) / W(y_1, y_2)(t)] dt + y_2(t) ∫ [y_1(t)g(t) / W(y_1, y_2)(t)] dt (2.1.2)

will be a solution of the inhomogeneous equation

y''(t) + p(t)y'(t) + q(t)y(t) = g(t) . (2.1.3)

Of course this method assumes that you have already found two linearly
independent solutions, y_1 and y_2.
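Formula (2.1.2) can be verified symbolically. A sketch with sympy for the made-up example y'' + y = sec(t), whose homogeneous solutions are y_1 = cos(t) and y_2 = sin(t):

```python
import sympy as sp

t = sp.symbols('t')

# Hypothetical example: y'' + y = sec(t), with y1 = cos(t), y2 = sin(t).
y1, y2, g = sp.cos(t), sp.sin(t), 1/sp.cos(t)

# Wronskian W(y1, y2) = y1*y2' - y1'*y2 = cos^2 + sin^2 = 1.
W = sp.simplify(y1*sp.diff(y2, t) - sp.diff(y1, t)*y2)

# Formula (2.1.2) (note the minus sign on the first term):
Y = -y1*sp.integrate(y2*g/W, t) + y2*sp.integrate(y1*g/W, t)

# Verify that Y solves the inhomogeneous equation (2.1.3):
print(sp.simplify(sp.diff(Y, t, 2) + Y - g))   # 0
```

This example is the classic case where undetermined coefficients fails (sec(t) is not of the form required in Section 1.2), which is exactly when variation of parameters earns its keep.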

2.2 When the Dependent Variable Is Missing


For a 2nd order ODE of the form

y''(t) = f(t, y') , (2.2.1)

the substitution v = y', v' = y'' leads to the 1st order ODE

v'(t) = f(t, v) . (2.2.2)

In general, if we have

y^{(n)}(t) = f(t, y^{(n-1)}, y^{(n-2)}, . . . , y^{(n-m)}) , (2.2.3)

where n - m > 0, then the substitution u = y^{(n-m)} will simplify things. (It
will reduce the order from n to m.)
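A small sympy sketch of the substitution, using the made-up example t y'' + y' = 0, i.e. y'' = -y'/t, which v = y' turns into a 1st order equation:

```python
import sympy as sp

t = sp.symbols('t')
v = sp.Function('v')

# Hypothetical example: t*y'' + y' = 0, i.e. y'' = -y'/t.
# With v = y', v' = y'' this is the 1st order ODE v' = -v/t.
vsol = sp.dsolve(sp.Eq(v(t).diff(t), -v(t)/t), v(t))
print(vsol)   # v(t) = C1/t, up to the name of the constant

# Check that the returned v really solves v' = -v/t:
vt = vsol.rhs
print(sp.simplify(vt.diff(t) + vt/t))   # 0

# Integrating v once more (introducing a second constant) recovers
# y(t) = C1*log(t) + C2.
```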

2.3 Euler Equations


We consider the equation:

t^2 y''(t) + αt y'(t) + βy(t) = 0 . (2.3.1)

The trick to solving this equation is to introduce the change of variables x =
ln(t) (so dx/dt = 1/t), and use the chain rule (and a bunch of scratch paper)
to derive the following equation relating y and x:

y''(x) + (α - 1)y'(x) + βy(x) = 0 . (2.3.2)

In this last equation we have y as a function of x, not t, and the derivatives are
with respect to x, not t. Now we have a constant coefficient equation whose
solution, say y(x) = c_1 y_1(x) + c_2 y_2(x), is easy to find. Our solution to (2.3.1)
is then

y(t) = c_1 y_1(ln(t)) + c_2 y_2(ln(t)) . (2.3.3)
Now for those of you who want practice with the chain rule, and those
mathematicians among you who don't want to accept stuff on faith, here is the
derivation:
We use the chain rule in the following form:

d/dt [g(x(t))] = g'(x(t)) x'(t)   or   dg/dt = (dg/dx)(dx/dt) (2.3.4)

with g = y and then again with g = dy/dx. Since x = ln(t) we get:

dy/dt = (dy/dx)(dx/dt) = (dy/dx)(1/t) (2.3.5)

d^2y/dt^2 = d/dt [dy/dt] = d/dt [(1/t)(dy/dx)]
          = -(1/t^2)(dy/dx) + (1/t) d/dt [dy/dx]          by the product rule
          = -(1/t^2)(dy/dx) + (1/t)(d^2y/dx^2)(dx/dt)     by the chain rule with g = dy/dx
          = -(1/t^2)(dy/dx) + (1/t^2)(d^2y/dx^2) . (2.3.6)
Now we substitute into (2.3.1):

0 = t^2 (d^2y/dt^2) + αt (dy/dt) + βy
  = t^2 [-(1/t^2)(dy/dx) + (1/t^2)(d^2y/dx^2)] + αt [(1/t)(dy/dx)] + βy (2.3.7)
  = y'' - y' + αy' + βy = y'' + (α - 1)y' + βy .
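To make the recipe concrete, here is a sympy sketch for the made-up Euler equation t^2 y'' + 2t y' - 2y = 0 (so α = 2, β = -2); the transformed equation (2.3.2) is y'' + y' - 2y = 0:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
r = sp.symbols('r')

# Hypothetical example: t^2*y'' + 2*t*y' - 2*y = 0, i.e. alpha = 2, beta = -2.
alpha, beta = 2, -2

# Characteristic equation of (2.3.2): r^2 + (alpha - 1)*r + beta = 0.
roots = sp.solve(r**2 + (alpha - 1)*r + beta, r)
print(sorted(roots))              # [-2, 1]

# So y(x) = c1*e^x + c2*e^(-2x), and by (2.3.3), y(t) = c1*t + c2*t**(-2).
# Verify y = t**(-2) directly in the original equation:
y = t**-2
residual = sp.simplify(t**2*y.diff(t, 2) + alpha*t*y.diff(t) + beta*y)
print(residual)                   # 0
```

Note t is declared positive so that x = ln(t) makes sense; for t < 0 one uses x = ln|t| instead.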

2.4 Reduction of Order
Suppose that we have one solution, y_1(t) (not identically zero), of

y''(t) + p(t)y'(t) + q(t)y(t) = 0 . (2.4.1)

To find a second solution, we let y_2(t) := v(t)y_1(t). Substituting into (2.4.1)
yields

0 = y_1 v'' + (2y_1' + py_1)v' + (y_1'' + py_1' + qy_1)v = y_1 v'' + (2y_1' + py_1)v' . (2.4.2)

So by letting u := v', we get

0 = y_1 u' + (2y_1' + py_1)u , (2.4.3)

which is a linear 1st order equation for u, and hopefully not too hard to solve.
Since u = v', we integrate u (our solution of the last ODE) to get v, and then
we multiply by y_1 to get y_2.
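A sympy sketch of these steps for the made-up example y'' - 2y' + y = 0, where y_1 = e^t is the known solution (so p(t) = -2):

```python
import sympy as sp

t = sp.symbols('t')
u = sp.Function('u')

# Hypothetical example: y'' - 2y' + y = 0 with known solution y1 = exp(t),
# so p(t) = -2 and equation (2.4.3) reads  y1*u' + (2*y1' + p*y1)*u = 0.
y1, p = sp.exp(t), -2
ode_u = sp.Eq(y1*u(t).diff(t) + (2*y1.diff(t) + p*y1)*u(t), 0)

usol = sp.dsolve(ode_u, u(t)).rhs      # here 2*y1' + p*y1 = 0, so u = constant
v = sp.integrate(usol, t)              # v' = u  =>  v = C1*t (up to constants)
y2 = v*y1
print(y2)                              # C1*t*exp(t): the second solution t*exp(t)

# Verify y2 in the original ODE:
print(sp.simplify(y2.diff(t, 2) - 2*y2.diff(t) + y2))   # 0
```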

2.5 Series Solutions


We consider the equation

y''(x) + p(x)y'(x) + q(x)y(x) = 0   or   P(x)y''(x) + Q(x)y'(x) + R(x)y(x) = 0 (2.5.1)

where

p(x) = Q(x)/P(x)   and   q(x) = R(x)/P(x)

are rational functions. We look for solutions near a point x_0 where P(x_0) ≠ 0.
We look for solutions of the form

y(x) = Σ_{j=0}^∞ a_j (x - x_0)^j , (2.5.2)

i.e. we assume that our solution is analytic, and we plug in its power series.
A few facts:
1. With the conditions which we have imposed on P, Q, and R, we will
have two linearly independent analytic solutions of our equation. (The
important part of this statement is the analyticity.)

2. If
Σ_{j=0}^∞ a_j (x - x_0)^j = Σ_{j=0}^∞ b_j (x - x_0)^j (2.5.3)
in an interval containing x_0, then a_j = b_j for all j.

3. If f(x) = Σ_{j=0}^∞ a_j (x - x_0)^j, then

a_j = f^{(j)}(x_0) / j! .
Because of the second item above, when substituting into the ODE (2.5.1), one
typically has to shift indices so that like powers of (x - x_0) can be grouped
together. In particular, the following shifts need to be done quite frequently.

y'(x) = Σ_{j=1}^∞ j a_j (x - x_0)^{j-1} = Σ_{j=0}^∞ (j+1) a_{j+1} (x - x_0)^j
                                                                         (2.5.4)
y''(x) = Σ_{j=2}^∞ j(j-1) a_j (x - x_0)^{j-2} = Σ_{j=0}^∞ (j+2)(j+1) a_{j+2} (x - x_0)^j
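As a sketch of how these shifts get used, take the made-up example y'' + y = 0 about x_0 = 0. Substituting (2.5.2), shifting as in (2.5.4), and matching coefficients (fact 2) gives (j+2)(j+1) a_{j+2} + a_j = 0, a recurrence we can iterate numerically:

```python
# Hypothetical example: y'' + y = 0 about x0 = 0.  Matching the coefficient
# of (x - x0)^j after the shifts in (2.5.4) gives
#     (j + 2)(j + 1)*a[j+2] + a[j] = 0,
# i.e. a[j+2] = -a[j] / ((j + 2)*(j + 1)).
def series_coeffs(a0, a1, n):
    """First n coefficients a_j determined by a0 = y(0), a1 = y'(0)."""
    a = [a0, a1]
    for j in range(n - 2):
        a.append(-a[j] / ((j + 2) * (j + 1)))
    return a

# With y(0) = 1, y'(0) = 0 we should recover the Taylor coefficients of
# cos(x): 1, 0, -1/2, 0, 1/24, 0, -1/720, ...
print(series_coeffs(1.0, 0.0, 8))
```

The two free data a_0 and a_1 correspond exactly to the two linearly independent analytic solutions promised in fact 1 (here cos and sin).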

3 General Theory of nth Order Linear Equations

We consider the homogeneous equation:

L[y] := y^{(n)}(t) + p_1(t)y^{(n-1)}(t) + . . . + p_{n-1}(t)y'(t) + p_n(t)y(t) = 0 , (3.0.1)

and the inhomogeneous equation:

L[y] = g(t) (3.0.2)

each with the n initial conditions

y(t_0) = y_0 , y'(t_0) = y_0' , . . . , y^{(n-1)}(t_0) = y_0^{(n-1)} . (3.0.3)

3.1 Preliminary Definitions


The functions f_1, . . . , f_n are linearly dependent on I if there are constants
k_1, . . . , k_n, not all zero, such that k_1 f_1(t) + . . . + k_n f_n(t) = 0 for all
t in I. Equivalently, the f_i are linearly dependent if one of them, say f_{i_0}, is a
linear combination of the remaining f_i, i.e.

f_{i_0} = Σ_{i ≠ i_0} c_i f_i ,

for some constants c_i. If the functions f_i are solutions to a linear ODE, then
the last statement is equivalent to saying that the solution f_{i_0} can be obtained
from the other f_i by the principle of superposition (see below).
The functions f_1, . . . , f_n are linearly independent on I if they are not
linearly dependent. So, none of the f_i is a linear combination of the other f_i.
If the f_i are solutions, then none of them is a superposition of the others.

3.2 Results
3.1 Theorem (Existence and Uniqueness). If the p_i(t) and g(t) are all continuous
on the open interval I (and t_0 is in I), then there exists a unique solution of
(3.0.2) which satisfies (3.0.3).

Trivial observations:

The coefficient of y^{(n)} is 1, not a function of t.

Equation (3.0.1) is a special case of equation (3.0.2) where g(t) ≡ 0, so
the theorem above for (3.0.2) with (3.0.3) applies to (3.0.1) with (3.0.3)
as well.

Now we start listing everything that we know about our homogeneous equation
(3.0.1) on an interval I where we assume that the p_i(t) are continuous.

1. There exist n linearly independent solutions y_1, . . . , y_n of (3.0.1) on I.
(Note that the order of our ODE is n.)

2. Every solution y of (3.0.1) on I is a linear combination of y_1, . . . , y_n.
In other words, there exist constants c_1, . . . , c_n such that y(t) =
c_1 y_1(t) + . . . + c_n y_n(t).

3. The Principle of Superposition: All linear combinations of the y_i are
solutions of (3.0.1) on I. In particular, sums, differences, and constant
multiples of solutions are solutions.

4. Given any n solutions y_1, . . . , y_n of (3.0.1), not necessarily
linearly independent, their Wronskian, W(y_1, . . . , y_n)(t), which is a
function of t on I, turns out to be either identically zero (i.e. zero for all
t in I), in which case the y_i are linearly dependent, or it is never zero in
I, in which case the y_i are linearly independent.

5. As a consequence of the last point, it suffices to check the Wronskian
of n solutions at a single point to see if they are linearly independent.
Abel's formula:

W(y_1, . . . , y_n)(t) = c exp[ -∫ p_1(t) dt ] .

In light of the first three points above, we can make more sense of the ex-
istence and uniqueness theorem: Since the general solution of (3.0.1) has n
undetermined constants, c1 , . . . , cn , we need exactly n initial conditions to
get n equations for our n unknowns, and therefore we need exactly n initial
conditions to get exactly one solution. (3.0.3) has n conditions as expected.

7
Now we turn to our inhomogeneous problem ( 3.0.2) . We assume that
y1 , . . . , yn are linearly independent solutions of our homogeneous problem
(3.0.1) , and we assume that Y (t) and Z(t) are particular solutions of (3.0.2) .

1. Any solution y(t) of (3.0.2) can be put into the form y(t) = c1 y1 (t) +
+ cn yn (t) + Y (t).

2. Anything of the form c1 y1 (t) + + cn yn (t) + Y (t) is a solution of (3.0.2)

3. Y (t) Z(t) is a solution of the homogeneous equation (3.0.1) , NOT the


inhomogeneous equation (3.0.2) .

4. All of these results are based on the linearity of L, our differential oper-
ator, which is due to the linearity of taking m derivatives:
dm dm dm
[c f
1 1 (x) + + c f
n n (x)] = c 1 f 1 (x) + + c n fn (x) .
dxm dxm dxm
To prove (2) for example we compute:

L[y] = L[c1 y1 + + cn yn + Y ]

= c1 L[y1 ] + + cn L[yn ] + L[Y ]

= c1 0 + + cn 0 + g(t) = g(t) .

Copyright 1999
c Ivan Blank

You might also like