Theme 2 Part I
1. Source
2. Introduction
3. One-step methods
4. Multistep methods
References
RJL Chapter: 5.0
Notes
▶ Differential equations are often divided into two classes,
ODEs and PDEs, according to the number of independent
variables, and are usually studied separately.
▶ A more meaningful division, however, is between initial
value problems (IVP), which usually model time-dependent
phenomena, and boundary value problems (BVP), which
generally model steady-state systems.
▶ We begin with a simple IVP for a time-dependent ODE.¹
¹ See Burden et al., Numerical Analysis, 10E, Cengage Learning.
Notes
▶ The IVP differs from the BVP in that we are given all the data at the initial time, $t_0 = 0$ say, and we march forward in time, computing approximations at successive times $t_1, t_2, \cdots$.
▶ Given $v_0 = u_0$, we want to compute approximations $v_1, v_2, \cdots$ satisfying $v_n \approx u(t_n)$.
▶ We will use $k$ to denote the time step, i.e., $t_n = nk$ for $n \ge 0$, and $k = t_{n+1} - t_n$ for a uniform grid.
▶ The notation $\Delta t$ is also commonly used for the time step.
Remark 1
Analysis of schemes for boundary value problems will be
considered in Theme 5, under elliptic partial differential
equations.
One-step method: general framework
▶ Any one-step method for approximating the solution, $u(t)$, of the initial-value problem can be written in the general form $v_{n+1} = v_n + k\,\phi(t_n, v_n; k)$, where $\phi$ is an increment function characterising the method.
Desirable property
An ideal difference-equation method would have the property that, given a tolerance $\varepsilon > 0$, a minimal number of mesh points could be used to ensure that the global error, $|u(t_n) - v_n|$, did not exceed $\varepsilon$ for any $n = 0, 1, \cdots, N$.
Recall: $u' = f(u, t)$, $u(t_0) = u_0$, $t > t_0$.
1. Forward Euler scheme
$$\delta^+ v_n = \frac{1}{k}\,[v_{n+1} - v_n] = f(v_n, t_n), \qquad n = 0, 1, \ldots$$
2. Backward Euler scheme
$$\delta^- v_n = \frac{1}{k}\,[v_n - v_{n-1}] = f(v_n, t_n), \qquad n = 1, 2, \ldots$$
▶ These are called one-step methods because the approximation at the mesh point $t_{n+1}$ involves information from only one of the previous mesh points, $t_n$.
▶ Although these methods might use function evaluation
information at points between tn and tn+1 , they do not
retain that information for direct use in future
approximations.
▶ All information used is obtained within the subinterval
over which the solution is being approximated.
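As a concrete illustration, here is a minimal Python sketch of both Euler schemes (the function names, the scalar setting, and the fixed-point treatment of the implicit equation are our choices, not part of the notes):

```python
import numpy as np

def forward_euler(f, u0, t0, T, k):
    """Forward Euler: v_{n+1} = v_n + k f(v_n, t_n)  (explicit)."""
    ts = np.arange(t0, T + k / 2, k)
    vs = np.empty(len(ts))
    vs[0] = u0
    for n in range(len(ts) - 1):
        vs[n + 1] = vs[n] + k * f(vs[n], ts[n])
    return ts, vs

def backward_euler(f, u0, t0, T, k, iters=50):
    """Backward Euler: v_{n+1} = v_n + k f(v_{n+1}, t_{n+1})  (implicit).
    The implicit equation is solved here by fixed-point iteration, which
    converges when k*L < 1 (L the Lipschitz constant of f)."""
    ts = np.arange(t0, T + k / 2, k)
    vs = np.empty(len(ts))
    vs[0] = u0
    for n in range(len(ts) - 1):
        w = vs[n]                        # initial guess: previous value
        for _ in range(iters):
            w = vs[n] + k * f(w, ts[n + 1])
        vs[n + 1] = w
    return ts, vs
```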
Definition 1
The truncation error is the amount by which the solution of
the differential equation fails to satisfy the approximate
(finite difference) equation, in normalized form.²
Example 1
The local truncation error of the trapezoidal method is
$$\tau_n = \frac{u(t_{n+1}) - u(t_n)}{k} - \frac{1}{2}\bigl[f(u(t_n)) + f(u(t_{n+1}))\bigr]$$
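To see the order of this truncation error, expand both terms about $t_n$, using $u' = f(u)$ in the autonomous form above:
\begin{align*}
\frac{u(t_{n+1}) - u(t_n)}{k}
  &= u'(t_n) + \frac{k}{2}\,u''(t_n) + \frac{k^2}{6}\,u'''(t_n) + O(k^3),\\
\frac{1}{2}\bigl[u'(t_n) + u'(t_{n+1})\bigr]
  &= u'(t_n) + \frac{k}{2}\,u''(t_n) + \frac{k^2}{4}\,u'''(t_n) + O(k^3),\\
\tau_n &= \Bigl(\frac{1}{6} - \frac{1}{4}\Bigr) k^2\,u'''(t_n) + O(k^3)
        = -\frac{k^2}{12}\,u'''(t_n) + O(k^3),
\end{align*}
so the trapezoidal method is second-order consistent.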
Definition 2
An approximate (finite difference) method is consistent with
the differential equation if the truncation error goes to zero
as the step size k goes to zero.
Definition 3
An approximate finite difference method is stable if the error goes to zero as the truncation error goes to zero.
² Normalized means in a form such that the terms in the difference equation approximate the corresponding terms in the differential equation.
▶ Consider a one-step method: we now study the convergence of the sequence $\{v_n\}_{0 \le n \le N} \subset \mathbb{R}^n$ to the solution $u(t)$, $t \in [a, b]$.
▶ The simplest method we consider is the one-step Euler's method given by
$$v_{n+1} = v_n + k f(v_n, t_n), \qquad v_0 = u(t_0), \qquad n = 0, 1, 2, \cdots$$
Unrolling the recursion $\|e_{n+1}\| \le (1 + kL)\,\|e_n\| + A$, derived in the proof below, gives
$$\begin{aligned}
\|e_n\| &\le (1 + kL)^n \|e_0\| + A \sum_{j=0}^{n-1} (1 + kL)^j \\
&= (1 + kL)^n \|e_0\| + A\,\frac{(1 + kL)^n - 1}{kL} \\
&\le e^{nkL} \|e_0\| + A\,\frac{e^{nkL} - 1}{kL}
\end{aligned}$$
We have used the fact that $(1 + kL)^n \le e^{nkL}$, using Lemma 1.
Theorem 1
Suppose that $f(u, t)$ is a continuous vector-valued function on
$$D = \{(u, t) \mid t \in [a, b],\ u \in \mathbb{R}^n\}$$
and satisfies a Lipschitz condition in the variable $u$ over $D$ with Lipschitz constant $L$. Suppose that $u''(t)$ exists and is continuous for $t \in [a, b]$ and that
$$\max_{t \in [a, b]} \|u''(t)\|_\infty = M.$$
Then the Euler approximations satisfy
$$\|u(t_n) - v_n\| \le \frac{kM}{2L}\left(e^{(t_n - t_0)L} - 1\right), \qquad n = 0, 1, \cdots, N.$$
Proof.
Exercise: See Assignment 2.
Proof.
The discretisation error at step n is given by en = vn − u(tn ).
Notice that v0 = u(t0 ) so that e0 = 0. Taylor’s theorem gives
$$u(t_{n+1}) = u(t_n) + k f(u(t_n), t_n) + k^2\,\frac{u''(\xi_n)}{2}, \qquad \xi_n \in [t_n, t_{n+1}]$$
Subtracting the above equation from the Euler scheme,
$$\begin{aligned}
e_{n+1} &= v_n - u(t_n) + k\bigl(f(v_n, t_n) - f(u(t_n), t_n)\bigr) - k^2\,\frac{u''(\xi_n)}{2} \\
&= e_n + k\bigl(f(v_n, t_n) - f(u(t_n), t_n)\bigr) - k^2\,\frac{u''(\xi_n)}{2}
\end{aligned}$$
Taking the norm of each side of the equation gives
$$\|e_{n+1}\| \le \|e_n\| + k\,\|f(v_n, t_n) - f(u(t_n), t_n)\| + \frac{k^2}{2} M \le (1 + kL)\,\|e_n\| + \frac{k^2}{2} M$$
using the Lipschitz condition on $f$ and $\|e_0\| = 0$. The result follows from Lemma 2.
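The first-order rate in Theorem 1 can be checked numerically. A small sketch on the illustrative test problem $u' = -u$ (our choice, not from the notes), halving $k$ and reporting the ratio of successive maximum errors, which should approach 2:

```python
import numpy as np

# Model problem u' = -u, u(0) = 1, exact solution u(t) = exp(-t).
f = lambda u, t: -u
exact = lambda t: np.exp(-t)

prev = None
for k in [0.1, 0.05, 0.025, 0.0125]:
    ts = np.arange(0.0, 1.0 + k / 2, k)
    v = np.empty_like(ts)
    v[0] = 1.0
    for n in range(len(ts) - 1):
        v[n + 1] = v[n] + k * f(v[n], ts[n])    # forward Euler step
    err = np.max(np.abs(v - exact(ts)))
    ratio = prev / err if prev is not None else float("nan")
    print(f"k = {k:7.4f}   max error = {err:.3e}   ratio = {ratio:5.2f}")
    prev = err
```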
Exercise 1
Consider the simple scalar linear initial value problem
$$u' = \lambda u + g(t), \qquad u(t_0) = u_0.$$
Taylor method of order $n$ (the superscript on $T^{(n)}$ denotes the order of the method):
$$v_0 = u_0, \qquad v_{n+1} = v_n + k\,T^{(n)}(t_n, v_n) \quad \text{for each } n = 0, 1, 2, \ldots$$
where
$$T^{(n)}(t_n, v_n) = f(t_n, v_n) + \frac{k}{2}\,f'(t_n, v_n) + \cdots + \frac{k^{n-1}}{n!}\,f^{(n-1)}(t_n, v_n)$$
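For the linear problem of Exercise 1 the required derivative is available in closed form, so the second-order Taylor method is straightforward to implement. A sketch with illustrative choices of $\lambda$ and $g$ (not from the notes):

```python
import numpy as np

# Second-order Taylor method for the linear problem u' = lam*u + g(t)
# (Exercise 1).  The total derivative of f(t, u) = lam*u + g(t) is
# f'(t, u) = g'(t) + lam*(lam*u + g(t)), available in closed form.
lam = -2.0                   # illustrative choices, not from the notes
g = lambda t: np.sin(t)
dg = lambda t: np.cos(t)

def taylor2(u0, t0, T, k):
    ts = np.arange(t0, T + k / 2, k)
    v = np.empty_like(ts)
    v[0] = u0
    for n in range(len(ts) - 1):
        fn = lam * v[n] + g(ts[n])         # f(t_n, v_n)
        fpn = dg(ts[n]) + lam * fn         # f'(t_n, v_n)
        v[n + 1] = v[n] + k * (fn + 0.5 * k * fpn)
    return ts, v
```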
Notes
▶ Polynomial approximations can also be used to achieve high
accuracy instead of using higher order finite difference
approximations.
▶ This introduces the concept of multistage methods, where intermediate values of the solution and its derivative are generated and used within a single time step.
Example 2
Consider the following 2-stage Runge-Kutta method
$$v^* = v_n + \frac{1}{2}\,k f(v_n, t_n), \qquad v_{n+1} = v_n + k f\!\left(v^*,\, t_n + \frac{k}{2}\right)$$
▶ The disadvantage of Taylor methods is the undesirable requirement of computing higher derivatives of $f(t, u)$.
▶ Runge-Kutta methods achieve the high-order truncation error of Taylor methods but eliminate the need to compute those derivatives.
▶ The Euler method, Taylor methods, and Runge-Kutta methods are called one-step methods because the approximation at the mesh point $t_{n+1}$ involves information from only one of the previous mesh points, $t_n$.
Midpoint Method
$$v_0 = u_0, \qquad v_{n+1} = v_n + k f\!\left(t_n + \frac{k}{2},\, v_n + \frac{k}{2}\,f(t_n, v_n)\right) \quad \text{for each } n = 0, 1, 2, \ldots$$
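A minimal Python sketch of this scheme, which is the 2-stage Runge-Kutta method of Example 2 written in one line (the helper name is ours):

```python
import numpy as np

def midpoint(f, u0, t0, T, k):
    """Midpoint method (the 2-stage Runge-Kutta scheme of Example 2):
    a half Euler step predicts the midpoint value, then the slope
    there is used for the full step."""
    ts = np.arange(t0, T + k / 2, k)
    vs = np.empty_like(ts)
    vs[0] = u0
    for n in range(len(ts) - 1):
        vstar = vs[n] + 0.5 * k * f(vs[n], ts[n])        # half Euler step
        vs[n + 1] = vs[n] + k * f(vstar, ts[n] + k / 2)  # midpoint slope
    return ts, vs
```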
Multistep methods
▶ The approximate solution is available at each of the mesh points $t_0, t_1, \cdots, t_n$ before the approximation at $t_{n+1}$ is obtained, and the error $|w_n - y(t_n)|$ tends to increase with $n$,
▶ so it seems reasonable to develop methods that use these more accurate previous data when approximating the solution at $t_{n+1}$.
▶ Methods using the approximation at more than one previous
mesh point to determine the approximation at the next
point are called multistep methods.
▶ We will now give a precise definition of these methods,
together with the definition of the two types of multistep
methods.
▶ Higher order accuracy is obtained by using multistep
methods that involve other previous values.
▶ Examples:
1. the midpoint method (leapfrog method), a two-step scheme distinct from the one-step midpoint method above:
$$\frac{1}{2k}\,[v_{n+1} - v_{n-1}] = f(v_n)$$
For an $m$-step method, $m$ starting values must be supplied:
$$w_0 = \alpha,\quad w_1 = \alpha_1,\quad w_2 = \alpha_2,\quad \cdots,\quad w_{m-1} = \alpha_{m-1}$$
Recall.
Fourth-order Adams-Bashforth method (explicit AB4):
$$w_0 = \alpha,\quad w_1 = \alpha_1,\quad w_2 = \alpha_2,\quad w_3 = \alpha_3$$
$$w_{n+1} = w_n + \frac{h}{24}\bigl[55 f(t_n, w_n) - 59 f(t_{n-1}, w_{n-1}) + 37 f(t_{n-2}, w_{n-2}) - 9 f(t_{n-3}, w_{n-3})\bigr]$$
for each $n = 3, 4, \cdots, N - 1$. The scheme has a LTE of $O(h^4)$.
Fourth-order Adams-Moulton method (implicit, three-step):
$$w_0 = \alpha,\quad w_1 = \alpha_1,\quad w_2 = \alpha_2$$
$$w_{n+1} = w_n + \frac{h}{24}\bigl[9 f(t_{n+1}, w_{n+1}) + 19 f(t_n, w_n) - 5 f(t_{n-1}, w_{n-1}) + f(t_{n-2}, w_{n-2})\bigr]$$
for each $n = 2, 3, \cdots, N - 1$. The scheme has a LTE of $O(h^4)$.
Starting Values & Accuracy
▶ The 4 or 3 starting values (in either explicit AB4 or
implicit AM4) must be specified, generally by assuming
w0 = α and generating the remaining values by either a
Runge-Kutta or Taylor method.
▶ The implicit methods are generally more accurate than the
explicit methods, but to apply an implicit method such as
the Adams-Moulton Method directly, we must solve the
implicit equation for wn+1 .
▶ This is not always possible, and even when it can be done the solution for $w_{n+1}$ may not be unique; a standard remedy is the predictor-corrector pairing sketched below.
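One standard arrangement, sketched below under this section's $f(t_n, w_n)$ convention, is an Adams fourth-order predictor-corrector: generate $w_1, w_2, w_3$ with RK4, predict $w_{n+1}$ with explicit AB4, then apply one correction with the implicit Adams-Moulton formula evaluated at the prediction (the function names are ours):

```python
import numpy as np

def rk4_step(f, t, w, h):
    """One classical fourth-order Runge-Kutta step, used here only to
    generate the starting values w_1, w_2, w_3."""
    k1 = f(t, w)
    k2 = f(t + h / 2, w + h / 2 * k1)
    k3 = f(t + h / 2, w + h / 2 * k2)
    k4 = f(t + h, w + h * k3)
    return w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def adams_pc(f, alpha, t0, T, h):
    """AB4 predictor with one Adams-Moulton correction per step: the
    explicit prediction supplies the w_{n+1} needed on the right-hand
    side of the implicit formula, so no equation solve is required."""
    ts = np.arange(t0, T + h / 2, h)
    w = np.empty_like(ts)
    w[0] = alpha
    for n in range(3):                         # RK4 starting values
        w[n + 1] = rk4_step(f, ts[n], w[n], h)
    F = [f(ts[n], w[n]) for n in range(4)]     # stored f-evaluations
    for n in range(3, len(ts) - 1):
        # predictor: explicit AB4
        wp = w[n] + h / 24 * (55 * F[n] - 59 * F[n - 1]
                              + 37 * F[n - 2] - 9 * F[n - 3])
        # corrector: implicit AM formula evaluated at the prediction
        w[n + 1] = w[n] + h / 24 * (9 * f(ts[n + 1], wp) + 19 * F[n]
                                    - 5 * F[n - 1] + F[n - 2])
        F.append(f(ts[n + 1], w[n + 1]))
    return ts, w
```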
Example 3
▶ Assume that we have used the Runge-Kutta method of order 4
with h = 0.2 to approximate the solutions to the initial
value problem
$$y' = y - t^2 + 1, \qquad 0 \le t \le 2, \qquad y(0) = 0.5$$
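A sketch of how such starting values are generated for this problem; the exact solution $y(t) = (t+1)^2 - \tfrac{1}{2}e^t$, which can be verified by differentiation, is used to report the error at each starting value:

```python
import numpy as np

# IVP of Example 3:  y' = y - t^2 + 1,  y(0) = 0.5.  Its exact solution
# y(t) = (t + 1)^2 - 0.5*exp(t) can be verified by differentiation.
f = lambda t, y: y - t ** 2 + 1
exact = lambda t: (t + 1) ** 2 - 0.5 * np.exp(t)

def rk4_step(f, t, w, h):
    k1 = f(t, w)
    k2 = f(t + h / 2, w + h / 2 * k1)
    k3 = f(t + h / 2, w + h / 2 * k2)
    k4 = f(t + h, w + h * k3)
    return w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, t, w = 0.2, 0.0, 0.5
for n in range(4):                  # starting values w_0, ..., w_3
    print(f"t = {t:.1f}   w = {w:.7f}   |error| = {abs(w - exact(t)):.2e}")
    if n < 3:
        w = rk4_step(f, t, w, h)
        t += h
```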