
Numerical Mathematics

Differential Equations
Pieter Collins

Department of Knowledge Engineering


Maastricht University
pieter.collins@maastrichtuniversity.nl

KEN1540, Block 5, April-May 2021

Introduction to Differential Equations 2
Differential equations . . . 3
Solutions of differential equations . . . 4
Analytical solution of differential equations . . . 5

Initial-Value Problems 6
Initial-value problems . . . 7
ODEs in Matlab . . . 9

Taylor Methods 10
Euler's method . . . 11
Taylor methods . . . 18

Runge-Kutta Methods 23
Midpoint method . . . 24
Modified Euler method . . . 29
Second-order methods . . . 30
Third-order methods . . . 33
Fourth-order methods . . . 35
Comparisons . . . 43

Multistep Methods 44
Multistep methods . . . 45
Adams-Bashforth methods . . . 46
Bootstrapping . . . 48

Implicit methods 51
Adams-Moulton methods . . . 53
Predictor-corrector . . . 55
Fully-implicit . . . 59
Gear's method . . . 60

Stability 63
Stability analysis . . . 65
Explicit methods . . . 66
Explicit vs implicit . . . 70
Stiff systems . . . 71

Multivariable systems 73
Multivariable systems . . . 74
Second-order systems . . . 76

Adaptive methods 79
Adaptive methods . . . 80
Bogacki-Shampine . . . 82
Adaptive multistep . . . 86

Global errors 87
Growth of errors . . . 88

Introduction to Differential Equations 2 / 89

Differential equations

A differential equation for a quantity y is a formula giving the time-derivative of y in terms of y itself:
ẏ = f (t, y).

If y is finite-dimensional, then we have an ordinary differential equation (ODE). Other kinds of differential
equation include partial differential equations and stochastic differential equations.

Examples
• Exponential growth ẏ = ky or decay ẏ = −ky .

• Logistic growth ẏ = ky(ymax − y).

• Decay and forcing ẏ = −ky + a sin(ωt)

• Linear (first order): ẏ + p(t)y = q(t)

• Damped oscillations ẍ + δ ẋ + kx = 0.
Equivalently: ẋ = v , v̇ = ẍ = −δv − kx.

3 / 89

Solutions of differential equations

Solution curves Differential equations have multiple solutions!


• If dy/dt = ky, then any function y(t) = Ce^{kt} is a solution!
  Check by substitution: dy/dt = Cke^{kt} = k(Ce^{kt}) = ky.
• If ẏ = y(1 − y), then any function y(t) = 1/(Ce^{−t} + 1) is a solution.

[Figure: solution curves y(t) = 1/(Ce^{−t} + 1) for several values of C, plotted over 0 ≤ t ≤ 5.]

Theorem If ẏ = f (t, y) where f is differentiable in y , then there is a unique solution curve through any point.
• Hence solution curves do not cross each other or merge.

4 / 89

Analytical solution of differential equations

For some differential equations, we can compute a solution analytically.

Linear first-order equations ẏ(t) + p(t)y(t) = q(t).


Multiply through by the integrating factor e^{∫p(t)dt}. Then by the product and chain rules
    (d/dt)[ e^{∫p(t)dt} y(t) ] = e^{∫p(t)dt} ( ẏ(t) + p(t)y(t) ) = e^{∫p(t)dt} q(t)
Hence the general solution is
    y(t) = e^{−∫p(t)dt} ( C + ∫ e^{∫p(t)dt} q(t) dt ).

Analytical solutions exist only in special cases, and even when they do exist they can be hard to calculate!

Instead, numerical methods are generally applicable and usually very reliable!

5 / 89

Initial-Value Problems 6 / 89

Initial-value problems

Initial-Value Problem Find a function y : [t0, tf] → R satisfying

    dy/dt = f(t, y),   y(t0) = y0.
We say t0 is the initial time, y0 is the initial state, and tf is the final time.
Common (but not necessary) to take t0 = 0.

Numerical solution Approximate y at points t0 < t1 < · · · < tN = tf as


y(ti ) ≈ wi , i = 0, . . . , N.

Step size The step sizes are hi = ti+1 − ti , so ti+1 = ti + hi .


Simplest is to take constant step size hi = h.
Then h = (tf − t0 )/N and ti = t0 + ih.

7 / 89

Chemical reactions

Example Reactions A −→ Y and Y + Y −→ B .


The concentrations a = [A] and y = [Y ] satisfy the differential equations
    da/dt = −k1 a;   dy/dt = k1 a − k2 y²
where k1 , k2 are rate constants.
Since a decays exponentially as a = a0 e^{−k1 t} where a0 = [A(0)], we have
    dy/dt = a0 k1 e^{−k1 t} − k2 y².
Use initial condition t0 = 0 and y0 = [Y (0)] = 0.
Henceforth, take constants a0 = k1 = k2 = 1 (in appropriate units) so
    dy/dt = e^{−t} − y².
8 / 89

Solution of ODEs in Matlab

Example Solve the following initial value problem in Matlab.


    dy/dt = a0 k1 e^{−k1 t} − k2 y²;   y(0) = 0
Set up the differential equation:
a0=1; k1=1; k2=1;
f = @(t,y) a0*k1*exp(-k1*t) - k2*y.^2;
t0=0; tf=5;
y0=0;

Solve using a built-in solver e.g. ode45:


[ts,ys]=ode45(f,[t0,tf],y0)
Other solvers are available, such as ode23, ode23s and ode15s.
Plot the solution:
plot(ts,ys)

9 / 89

Taylor Methods 10 / 89

Euler’s method

First-order approximation The differential equation


    dy/dt = f(t, y)
is approximated by
    δy/δt ≈ f(t, y).
Then
    y(t + δt) − y(t) = δy ≈ f(t, y) δt,
yielding the time-step of Euler's method
    y(t + δt) ≈ y(t) + f(t, y(t)) δt.
Alternatively, write
    y(t + h) ≈ y(t) + h f(t, y(t)).

Euler’s method Compute wi ≈ y(ti ) by w0 = y(t0 ) and


wi+1 = wi + hi f (ti , wi )
where hi = ti+1 − ti is the step size.

11 / 89
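
As a sketch, Euler's method for the running example dy/dt = e^{−t} − y² can be implemented in Matlab as follows (the variable names and step count are illustrative, not from the slides):
f = @(t,y) exp(-t) - y.^2;   % right-hand side of the example ODE
t0=0; tf=1; y0=0; N=10;      % N steps of size h = (tf-t0)/N
h = (tf-t0)/N;
t = t0; w = y0;
for i = 1:N
    w = w + h*f(t,w);        % w_{i+1} = w_i + h f(t_i, w_i)
    t = t + h;
end
w                            % approximates y(1); about 0.5329 for h = 0.1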

5
Euler’s method

Example Solve dy/dt = f(t, y) := e^{−t} − y² with y(0) = 0 up to time 1.


Initialise t0 = 0 and w0 = y0 = y(0) = 0.
Take tf = 1, and aim to approximate y(1) i.e. y(t) when t = 1.
Try a single step, h = 1.0:
Formula w1 = w0 + hf (t0 , w0 ).
Evaluate f(t0, w0) = e^{−t0} − w0² = e^{−0} − 0² = 1, so
y(1.0) ≈ w1 = w0 + hf (t0 , w0 ) = 0.0000 + 1.0×1.0000 = 1.0000.
The exact value is y(1) = 0.503347 (6 sf ).
Absolute error ≈ 0.50.
Relative error |w1 − y(1)|/|y(1)| = 0.4967/0.503347 ≈ 99%!
Very poor approximation...

12 / 89

Euler’s method

Example Solve dy/dt = f(t, y) := e^{−t} − y² with y(0) = 0 up to time 1.


Take n = 2 steps, each with step size h = 0.5.
Times t0 = 0, t1 = t0 + h = 0.5, t2 = t1 + h = 1.0.
Initialise w0 = y(t0 ) = y(0) = 0.000. Solution is y(1) = y(t2 ) ≈ w2
f(t0, w0) = f(0.0, 0.000) = e^{−0.0} − 0.000² = 1.000
y(0.5) ≈ w1 = w0 + h f(t0, w0) = 0.000 + 0.5×f(0.0, 0.000)
           = 0.0000 + 0.5×1.000 = 0.500.
f(t1, w1) = f(0.5, 0.500) = e^{−0.5} − 0.500² = 0.607 − 0.250 = 0.357
y(1.0) ≈ w2 = w1 + h f(t1, w1) = 0.500 + 0.5×f(0.5, 0.500)
           = 0.5000 + 0.5×0.357 = 0.678.
Better approximation, absolute error ≈ 0.17; relative error 35%.

13 / 89

Euler’s method

Example Use fixed step size h = 0.2:

i    ti        wi        f(ti, wi)    wi + h f(ti, wi)
0 0.00000 0.00000 1.00000 0.20000
1 0.20000 0.20000 0.77873 0.35575
2 0.40000 0.35575 0.54376 0.46450
3 0.60000 0.46450 0.33305 0.53111
4 0.80000 0.53111 0.16725 0.56456
5 1.00000 0.56456
Better approximation; absolute error |w5 − y(1.0)| ≈ 0.061, relative error 12%.
With step size h = 0.1, find y(1.0) ≈ w10 = 0.5329, relative error 5.8%.
Error approximately halves on halving the step-size: first order method!

14 / 89

Error of Euler’s method (Non-examinable)

Error estimate By Taylor's theorem, y(t + h) = y(t) + h ẏ(t) + (1/2)h² ÿ(τ).


The first derivative of y is given by the differential equation ẏ(t) = f (t, y(t)).
The second derivative of y is given by
    ÿ(t) = (d/dt) ẏ(t) = (d/dt) f(t, y(t)) = ∂f(t,y)/∂t + (∂f(t,y)/∂y)(dy(t)/dt)
         = ∂f(t,y)/∂t + (∂f(t,y)/∂y) f(t, y)
Using notation f,t = ∂f /∂t and f,y = ∂f /∂y we have
ÿ(t) = f,t (t, y(t)) + f,y (t, y(t)) f (t, y(t)).
Obtain local error estimate
    y(t1) − w1 = [ y(t0) + h0 ẏ(t0) + (1/2)h0² ÿ(τ) ] − [ w0 + h0 f(t0, w0) ]
               = (1/2)h0² ÿ(τ) = (h0²/2) [ f,t(τ, y(τ)) + f,y(τ, y(τ)) f(τ, y(τ)) ]
               = (h0²/2) [ f,t(τ, η) + f,y(τ, η) f(τ, η) ]   for some τ, η.
15 / 89

Euler’s method

Theorem (Local error bound for Euler’s method) Let y be the solution of the differential equation
dy/dt = f (t, y), and let
wi+1 = y(ti ) + hi f (ti , y(ti ))
be given by Euler’s method.
Then there exists τ ∈ [ti, ti+1] such that
    |wi+1 − y(ti+1)| = (1/2) |f,t(τ, y(τ)) + f,y(τ, y(τ)) f(τ, y(τ))| hi².
In particular, if y(t) ∈ [ymin, ymax] for all t ∈ [ti, ti+1], and
    |f,t(τ, η) + f,y(τ, η) f(τ, η)| ≤ M
for all τ ∈ [ti, ti+1] and η ∈ [ymin, ymax], then
    |wi+1 − y(ti+1)| ≤ (M/2) hi² = O(hi²).
16 / 89

Euler’s method

Growth of global errors Need to approximate y at some fixed time tf .


For step size h, require N = (tf − t0)/h steps.
For i ≥ 1, y(ti) ≠ wi, so the error wi+1 − y(ti+1) contains errors accumulated from previous time steps.
If f,y(τ, η) ≤ L, then the error ei at time ti satisfies |ei+1| ≤ e^{hi L} |ei| if the step itself were exact.

Theorem (Error bound for Euler’s method) If


    f,y(τ, η) ≤ L and |f,t(τ, η) + f,y(τ, η) f(τ, η)| ≤ M
for all τ ∈ [0, T] and η ∈ [ymin, ymax], then for a fixed step size h = T/N and for all i ≤ N,
    |wi − y(ti)| ≤ (M/2) h (exp(LT) − 1)/L.
Although local (single-step) error is O(h2 ), global error (up to a fixed time) is O(h), since we require O(1/h)
steps.

17 / 89

Second-order Taylor method (Non-examinable)

Second-order Taylor method Use second-order term in Taylor expansion to improve Euler’s method.
    y(t + h) ≈ y(t) + h f(t, y) + (1/2)h² [ f,t(t, y) + f,y(t, y) f(t, y) ],

so take
    wi+1 = wi + hi f(ti, wi) + (1/2)hi² [ f,t(ti, wi) + f,y(ti, wi) f(ti, wi) ].


18 / 89
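
A Matlab sketch of this second-order Taylor step for the running example, using the partial derivatives f,t(t, y) = −e^{−t} and f,y(t, y) = −2y worked out on the next slide (step size and step count are illustrative):
f  = @(t,y) exp(-t) - y.^2;
ft = @(t,y) -exp(-t);        % partial derivative f_{,t}
fy = @(t,y) -2*y;            % partial derivative f_{,y}
h = 0.1; t = 0; w = 0;
for i = 1:10
    w = w + h*f(t,w) + 0.5*h^2*( ft(t,w) + fy(t,w)*f(t,w) );
    t = t + h;
end
w                            % approx y(1); about 0.502675 for h = 0.1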

Second-order Taylor method (Non-examinable)

Example Let f(t, y) = e^{−t} − y², so f,t(t, y) = −e^{−t} and f,y(t, y) = −2y.
Initial condition y(0) = 0.
For step size h = 1.0,
f (0.0, 0.000) = 1.000, f,t(0.0, 0.000) = −1.000, f,y (0.0, 0.000) = 0.000;
    y(1.0) ≈ w1 = w0 + h f(t0, w0) + (1/2)h² [ f,t(t0, w0) + f,y(t0, w0) f(t0, w0) ]
               = 0.0000 + 1.0×1.0000 + (1/2)×1.0²×(−1.0000 + 0.0000×1.0000)
               = 0.0000 + 1.0000 − 0.5000 = 0.5000
Relative error 0.7%, much better!

19 / 89

Second-order Taylor method (Non-examinable)

Example Let f(t, y) = e^{−t} − y², so f,t(t, y) = −e^{−t} and f,y(t, y) = −2y.
Initial condition y(0) = 0.
For step size h = 0.5,
    y(0.5) ≈ w1 = w0 + h f(t0, w0) + (1/2)h² [ f,t(t0, w0) + f,y(t0, w0) f(t0, w0) ]
               = 0.0000 + 0.5×1.0000 + (1/2)×0.5²×(−1.0000 + 0.0000×1.0000)
               = 0.3750.
    y(1.0) ≈ w2 = w1 + h f(t1, w1) + (1/2)h² [ f,t(t1, w1) + f,y(t1, w1) f(t1, w1) ]
               = 0.3750 + 0.5×0.4659 + (1/2)×0.5²×(−0.6065 − 0.7500×0.4659)
               = 0.4885.
Relative error 3%, still much better than Euler’s method.
For h = 0.2, y(1.0) ≈ w5 = 0.500708, with relative error 0.52%.
For h = 0.1, y(1.0) ≈ w10 = 0.502675, with relative error 0.13%.
Halving the step size multiplies the error by a quarter, so the method is second order; global error O(h²).

20 / 89

Higher-order Taylor methods (Non-examinable)

Higher-order derivatives Consider (for simplicity) the autonomous differential equation ẏ = f (y).
Then
    ẏ = f(y);
    ÿ = f′(y) f(y);
    y^(3) = f′′(y) f(y)² + f′(y)² f(y);
    y^(4) = f′′′(y) f(y)³ + 4 f′′(y) f′(y) f(y)² + f′(y)³ f(y);
    y^(5) = f′′′′(y) f(y)⁴ + 7 f′′′(y) f′(y) f(y)³ + 4 f′′(y)² f(y)³ + 11 f′′(y) f′(y)² f(y)² + f′(y)⁴ f(y).
Higher-order Taylor methods Higher-order Taylor methods can be constructed from these derivatives, e.g. the third-order method
    wi+1 = wi + hi f(wi) + (hi²/2) f′(wi) f(wi) + (hi³/6) [ f′′(wi) f(wi)² + f′(wi)² f(wi) ].
21 / 89

Higher-order Taylor methods (Non-examinable)

Advantages and disadvantages of Taylor methods


Taylor methods require the computation of partial derivatives, which is inconvenient.
Taylor methods can be used to give rigorous bounds on local and global errors.
e.g. For the second-order Taylor method applied to ẏ = f(y), obtain local error
    |w1 − y(t1)| ≤ (h³/6) sup_{τ∈[t0,t1]} |y^(3)(τ)|
                ≤ (h³/6) sup_{η∈[ymin,ymax]} |f′′(η) f(η)² + f′(η)² f(η)|,
and global error O(h²).

22 / 89

Runge-Kutta Methods 23 / 89

Midpoint method

Midpoint method Expect the derivative ẏ(t + h/2) at the midpoint of the interval [t, t + h] to give a better approximation to the average change (1/h)(y(t + h) − y(t)) than the derivative ẏ(t) at the left endpoint.
Can show
    y(t + h) = y(t) + h ẏ(t) + O(h²);   y(t + h) = y(t) + h ẏ(t + h/2) + O(h³).
Would like to use the approximation
    y(t + h) ≈ y(t) + h f(t + h/2, y(t + h/2)).
Problem: We don't know y(t + h/2).

Solution: Estimate y(t + h/2) by Euler's method!
    y(t + h/2) ≈ y(t) + (h/2) f(t, y(t))
Then
    y(t + h) ≈ y(t) + h f( t + h/2, y(t) + (h/2) f(t, y(t)) ).

Obtain the midpoint method:
    wi+1 = wi + hi f( ti + (1/2)hi, wi + (1/2)hi f(ti, wi) ).

24 / 89
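
A Matlab sketch of the midpoint method for the running example (step size and step count are illustrative):
f = @(t,y) exp(-t) - y.^2;
h = 0.1; t = 0; w = 0;
for i = 1:10
    wmid = w + 0.5*h*f(t,w);          % Euler estimate of y(t + h/2)
    w = w + h*f(t + 0.5*h, wmid);     % full step using the midpoint slope
    t = t + h;
end
w                                     % approx y(1); about 0.50267 for h = 0.1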

Midpoint method

Example Solve ẏ = e^{−t} − y² with y(0) = 0 up to t = 1 using h = 0.5.


Times t0 = 0.0, t1 = 0.5, t2 = 1.0. w0 = y(t0 ) = 0.000. Solution y(1) = y(t2 ) ≈ w2 .
f(t0, w0) = f(0.0, 0.0000) = 1.0000
w1 = w0 + h f(t0 + (1/2)h, w0 + (1/2)h f(t0, w0))
   = w0 + h f(0.0 + (1/2)×0.5, 0.0000 + (1/2)×0.5×1.0000)
   = w0 + h f(0.25, 0.2500)          [f(0.25, 0.2500) = e^{−0.2500} − 0.2500² = 0.7788 − 0.0625 = 0.7163]
   = 0.0000 + 0.5×0.7163 = 0.3582
f(t1, w1) = f(0.5, 0.3582) = 0.4783
w2 = w1 + h f(t1 + (1/2)h, w1 + (1/2)h f(t1, w1))
   = w1 + h f(0.5 + (1/2)×0.5, 0.3582 + (1/2)×0.5×0.4783)
   = w1 + h f(0.75, 0.4777)          [f(0.75, 0.4777) = 0.2442]
   = 0.3582 + 0.5×0.2442 = 0.4802.

Exact value y(1) = 0.503347 (6 sf).


Relative error 4.5%, better than 12% for Euler’s method with h = 0.2.

25 / 89

Midpoint method

Example For ẏ = e^{−t} − y², y(0) = 0, take step-size h = 0.1.


Work to machine precision, display wi to 4 dp:
w1 = w0 + h f(t0 + (1/2)h, w0 + (1/2)h f(t0, w0))
   = w0 + h f(t0 + (1/2)h, w0 + (1/2)h f(0.0, 0.00000))
   = w0 + h f(0.0 + (1/2)×0.1, 0.00000 + (1/2)×0.1×1.0000)
   = w0 + h f(0.05, 0.05000)
   = 0.0000 + 0.1×0.9487 = 0.09487
w2 = w1 + h f(t1 + (1/2)h, w1 + (1/2)h f(t1, w1))
   = w1 + h f(t1 + (1/2)h, w1 + (1/2)h f(0.1, 0.09487))
   = w1 + h f(0.1 + (1/2)×0.1, 0.09487 + (1/2)×0.1×0.89584)
   = w1 + h f(0.15, 0.13966)
   = 0.09487 + 0.1×0.8412 = 0.17899.

26 / 89

Midpoint method

Example For ẏ = e^{−t} − y², y(0) = 0, take step-size h = 0.1.


Let ti+1/2 = ti + (1/2)h, ŵi+1/2 = wi + (1/2)h f(ti, wi), wi+1 = wi + h f(ti+1/2, ŵi+1/2).
Work to machine precision, display wi to 5 decimal places:
i    ti    wi        f(ti, wi)    ti+1/2    ŵi+1/2    f(ti+1/2, ŵi+1/2)    wi+1
0 0.0 0.00000 1.0000 0.05 0.05000 0.9487 0.09487
1 0.1 0.09487 0.8958 0.15 0.13966 0.8412 0.17899
2 0.2 0.17899 0.7867 0.25 0.21833 0.7311 0.25211
3 0.3 0.25211 0.6773 0.35 0.28597 0.6229 0.31440
4 0.4 0.31440 0.5715 0.45 0.34297 0.5200 0.36640
5 0.5 0.36640 0.4723 0.55 0.39001 0.4248 0.40888
6 0.6 0.40888 0.3816 0.65 0.42796 0.3389 0.44277
7 0.7 0.44277 0.3005 0.75 0.45780 0.2628 0.46905
8 0.8 0.46905 0.2293 0.85 0.48052 0.1965 0.48870
9 0.9 0.48870 0.1677 0.95 0.49709 0.1396 0.50267
10 1.0 0.50267
Exact value y(1) = 0.503347; relative error 0.13%.

27 / 89

Midpoint method

Example For ẏ = e^{−t} − y², y(0) = 0, approximating y(1) = 0.503347 (6 sf) with different step-sizes gives
n 1 2 5 10
h 1.0 0.5 0.2 0.1
y(1.0) ≈ wn 0.356531 0.480228 0.500418 0.502666
error 29% 4.6% 0.58% 0.14%
The midpoint method is second-order, with global errors O(h2 ).

28 / 89

Modified Euler (trapezoid) method

Modified Euler (trapezoid) method Trapezoid rule gives


    y(t + h) = y(t) + (h/2) [ ẏ(t) + ẏ(t + h) ] + O(h³).
Estimate y(t + h) by Euler's method, y(t + h) ≈ y(t) + h f(t, y(t)).
Then
    y(t + h) ≈ y(t) + (1/2)h [ f(t, y(t)) + f(t+h, y(t)+h f(t, y(t))) ].

Obtain the modified Euler (trapezoid) method:
    wi+1 = wi + (1/2)hi [ f(ti, wi) + f(ti+hi, wi+hi f(ti, wi)) ].


Example For ẏ = e^{−t} − y², y(0) = 0, approximating y(1) = 0.503347 (6 sf) with different step-sizes gives
n 1 2 5 10
h 1.0 0.5 0.2 0.1
wn 0.183940 0.468458 0.499972 0.502639
error 63% 6.9% 0.67% 0.14%

29 / 89

Second-order Runge-Kutta methods

General second-order Runge-Kutta method

    y(t + h) ≈ y(t) + h [ a1 f(t, y(t)) + a2 f(t + αh, y(t) + βh f(t, y(t))) ]

Differentiate with respect to h:
    ẏ(t + h) = a1 f(t, y(t)) + a2 f(t + αh, y(t) + βh f(t, y(t)))
               + h a2 [ α f,t(t + αh, y + βh f(t, y)) + β f,y(t + αh, y + βh f(t, y)) f(t, y) ]
Hence, setting h = 0,
    ẏ(t) = a1 f(t, y) + a2 f(t, y) = (a1 + a2) f(t, y)
    ÿ(t) = 2 a2 [ α f,t(t, y) + β f,y(t, y) f(t, y) ]
From Taylor's theorem,
    ẏ(t) = f(t, y);   ÿ(t) = f,t(t, y) + f,y(t, y) f(t, y).
Comparing coefficients gives a1 + a2 = 1 and 2a2α = 2a2β = 1, so
    β = α and a1 = 1 − 1/(2α), a2 = 1/(2α).
Obtain the method
    wi+1 = wi + hi [ (1 − 1/(2α)) f(ti, wi) + (1/(2α)) f(ti + αhi, wi + αhi f(ti, wi)) ]

30 / 89

Ralston’s method

General second-order Runge-Kutta method


    wi+1 = wi + hi [ (1 − 1/(2α)) f(ti, wi) + (1/(2α)) f(ti + αhi, wi + αhi f(ti, wi)) ]

Midpoint and modified Euler methods


Taking α = 1/2 gives the midpoint method with a1 = 0, a2 = 1.
Taking α = 1 gives the modified Euler method with a1 = a2 = 1/2.

Ralston's Method
Taking α = 2/3 gives a2 = 3/4, a1 = 1/4 and
    wi+1 = wi + (1/4)hi [ f(ti, wi) + 3 f(ti + (2/3)hi, wi + (2/3)hi f(ti, wi)) ]

This can be written in a standard form for Runge-Kutta methods:
    ki,1 = hi f(ti, wi);
    ki,2 = hi f(ti + (2/3)hi, wi + (2/3)ki,1);
    wi+1 = wi + (1/4)(ki,1 + 3 ki,2) = wi + (1/4)ki,1 + (3/4)ki,2
         = wi + ki   where ki = (1/4)ki,1 + (3/4)ki,2.

31 / 89

Runge-Kutta Methods

Example ẏ(t) = e^{−t} − y², y(0) = 0. Step size h = 0.1.


y(0.1) y(1.0)
Euler 0.1000000000000000 0.532904863460103
Midpoint 0.0948729424500714 0.502665926212565
Trapezoid 0.0947418709017980 0.502638707657163
Ralston 0.0948296905440380 0.502658823715687
Exact 0.094854320 · · · 0.5033467 · · ·

32 / 89

Third-order Runge-Kutta methods

Heun’s third-order method


    ki,1 = hi f(ti, wi);
    ki,2 = hi f(ti + (1/3)hi, wi + (1/3)ki,1);
    ki,3 = hi f(ti + (2/3)hi, wi + (2/3)ki,2);
    wi+1 = wi + (1/4)(ki,1 + 3 ki,3).


Local error O(h4 ), global error O(h3 ).

Kutta’s third-order method


    ki,1 = hi f(ti, wi);
    ki,2 = hi f(ti + (1/2)hi, wi + (1/2)ki,1);
    ki,3 = hi f(ti + hi, wi − ki,1 + 2 ki,2);
    wi+1 = wi + (1/6)(ki,1 + 4 ki,2 + ki,3).


33 / 89

Third-order Runge-Kutta methods

General third-order method (Non-examinable)


    ki,1 = hi f(ti, wi);
    ki,2 = hi f(ti + c2 hi, wi + a21 ki,1);
    ki,3 = hi f(ti + c3 hi, wi + a31 ki,1 + a32 ki,2);
    wi+1 = wi + b1 ki,1 + b2 ki,2 + b3 ki,3,
where
    b1 + b2 + b3 = 1;   c2 b2 + c3 b3 = 1/2;   c2² b2 + c3² b3 = 1/3;
    c2 = a21;   c3 = a31 + a32;   c2 b3 a32 = 1/6.
Can choose c2, c3 freely. Then
    b3 = (3c2 − 2)/(6c3(c2 − c3)),  b2 = (3c3 − 2)/(6c2(c3 − c2)),  b1 = 1 − b2 − b3,
    a32 = 1/(6 c2 b3),  a31 = c3 − a32,  a21 = c2.

34 / 89

Classical fourth-order Runge-Kutta method

Classical fourth-order Runge-Kutta method


    ki,1 = hi f(ti, wi);
    ki,2 = hi f(ti + (1/2)hi, wi + (1/2)ki,1);
    ki,3 = hi f(ti + (1/2)hi, wi + (1/2)ki,2);
    ki,4 = hi f(ti + hi, wi + ki,3);
    wi+1 = wi + (1/6)(ki,1 + 2ki,2 + 2ki,3 + ki,4)
Local error O(h⁵), global error O(h⁴).
May set ki = (1/6)(ki,1 + 2ki,2 + 2ki,3 + ki,4), so wi+1 = wi + ki.
Often drop the subscript i:
    k1 = h f(t, w);  k2 = h f(t + (1/2)h, w + (1/2)k1);
    k3 = h f(t + (1/2)h, w + (1/2)k2);  k4 = h f(t + h, w + k3);
    w′ = w + (1/6)(k1 + 2k2 + 2k3 + k4)

35 / 89
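
A Matlab sketch of the classical fourth-order Runge-Kutta method for the running example (step size and step count are illustrative):
f = @(t,y) exp(-t) - y.^2;
h = 0.2; t = 0; w = 0;
for i = 1:5
    k1 = h*f(t, w);
    k2 = h*f(t + h/2, w + k1/2);
    k3 = h*f(t + h/2, w + k2/2);
    k4 = h*f(t + h,   w + k3);
    w = w + (k1 + 2*k2 + 2*k3 + k4)/6;
    t = t + h;
end
w                                     % approx y(1); about 0.50333 for h = 0.2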

Fourth-order Runge-Kutta methods

Fourth-order Runge-Kutta method


    ki,1 = hi f(ti, wi);
    ki,2 = hi f(ti + (1/3)hi, wi + (1/3)ki,1);
    ki,3 = hi f(ti + (2/3)hi, wi − (1/3)ki,1 + ki,2);
    ki,4 = hi f(ti + hi, wi + ki,1 − ki,2 + ki,3);
    wi+1 = wi + (1/8)(ki,1 + 3ki,2 + 3ki,3 + ki,4)
Local error O(h⁵), global error O(h⁴).
May set ki = (1/8)(ki,1 + 3ki,2 + 3ki,3 + ki,4), so wi+1 = wi + ki.

36 / 89

Fourth-order Runge-Kutta methods

Example Solve ẏ(t) = e^{−t} − y² with y(0) = 0 using h = 0.5.


Initialise t0 = 0.0, w0 = y(t0 ) = y(0) = 0.0000.
First step h0 = h = 0.5, t1 = t0 + h0 = 0.0 + 0.5 = 0.5.

    k0,1 = h0 f(t0, w0) = 0.5 f(0.00, 0.0000)
         = 0.5 × 1.0000 = 0.5000
    k0,2 = h0 f(t0 + (1/2)h0, w0 + (1/2)k0,1) = 0.5 f(0.0 + (1/2)×0.5, 0.0000 + (1/2)×0.5000)
         = 0.5 × f(0.25, 0.2500) = 0.5 × 0.7163 = 0.3582
    k0,3 = h0 f(t0 + (1/2)h0, w0 + (1/2)k0,2) = 0.5 f(0.0 + (1/2)×0.5, 0.0000 + (1/2)×0.3582)
         = 0.5 × f(0.25, 0.1791) = 0.5 × 0.7467 = 0.3734
    k0,4 = h0 f(t0 + h0, w0 + k0,3) = 0.5 f(0.0 + 0.5, 0.0000 + 0.3734)
         = 0.5 × f(0.50, 0.3734) = 0.5 × 0.4671 = 0.2336
    k0 = (1/6)(k0,1 + 2k0,2 + 2k0,3 + k0,4)
       = (1/6)(0.5000 + 2×0.3582 + 2×0.3734 + 0.2336) = 0.3661
    w1 = w0 + k0 = 0.0000 + 0.3661 = 0.3661

37 / 89

Fourth-order Runge-Kutta methods

Example Solve ẏ(t) = e^{−t} − y² with y(0) = 0 using h = 0.5.


First step gave t1 = 0.5, w1 = 0.3661.
Second step h1 = h = 0.5, t2 = t1 + h1 = 0.5 + 0.5 = 1.0.

    k1,1 = h1 f(t1, w1) = 0.5 f(0.50, 0.3661)
         = 0.5 × 0.4725 = 0.2363
    k1,2 = h1 f(t1 + (1/2)h1, w1 + (1/2)k1,1) = 0.5 f(0.5 + (1/2)×0.5, 0.3661 + (1/2)×0.2363)
         = 0.5 × f(0.75, 0.4842) = 0.5 × 0.2379 = 0.1189
    k1,3 = h1 f(t1 + (1/2)h1, w1 + (1/2)k1,2) = 0.5 f(0.5 + (1/2)×0.5, 0.3661 + (1/2)×0.1189)
         = 0.5 × f(0.75, 0.4256) = 0.5 × 0.2913 = 0.1456
    k1,4 = h1 f(t1 + h1, w1 + k1,3) = 0.5 f(0.5 + 0.5, 0.3661 + 0.1456)
         = 0.5 × f(1.00, 0.5117) = 0.5 × 0.1060 = 0.0530
    k1 = (1/6)(k1,1 + 2k1,2 + 2k1,3 + k1,4)
       = (1/6)(0.2363 + 2×0.1189 + 2×0.1456 + 0.0530) = 0.1365
    y(1) ≈ w2 = w1 + k1 = 0.3661 + 0.1365 = 0.5025

38 / 89

Fourth-order Runge-Kutta methods

Example ẏ(t) = e^{−t} − y², y(0) = 0. Take h = 0.5. Take two steps. Calculate:
i ti wi ki,1 ki,2 ki,3 ki,4 ki
0 0.0 0.000000 0.500000 0.358150 0.373366 0.233564 0.366100
1 0.5 0.366100 0.236251 0.118946 0.145627 0.053008 0.136401
2 1.0 0.502501
Exact value y(1) = 0.503347 (6 dp ); relative error 0.17%.

39 / 89

Fourth-order Runge-Kutta methods
Example Take h = 0.2.

    k0,1 = h0 f(t0, w0)
         = 0.2 f(0.0, 0.00000) = 0.2 × 1.00000 = 0.20000
    k0,2 = h0 f(t0 + (1/2)h0, w0 + (1/2)k0,1)
         = 0.2 f(0.1, 0.10000) = 0.2 × 0.89484 = 0.17897
    k0,3 = h0 f(t0 + (1/2)h0, w0 + (1/2)k0,2)
         = 0.2 f(0.1, 0.08948) = 0.2 × 0.89683 = 0.17937
    k0,4 = h0 f(t0 + h0, w0 + k0,3)
         = 0.2 f(0.2, 0.17937) = 0.2 × 0.78656 = 0.15731
    k0 = (1/6)(k0,1 + 2k0,2 + 2k0,3 + k0,4)
       = (1/6)(0.20000 + 2×0.17897 + 2×0.17937 + 0.15731)
       = 0.17900
    w1 = w0 + k0 = 0.00000 + 0.17900 = 0.17900

40 / 89

Fourth-order Runge-Kutta methods

Example Take h = 0.2.


i ti wi ki,1 ki,2 ki,3 ki,4 ki
0 0.0 0.00000 0.20000 0.17897 0.17937 0.15731 0.17900
1 0.2 0.17900 0.15734 0.13489 0.13602 0.11422 0.13556
2 0.4 0.31456 0.11427 0.09367 0.09519 0.07618 0.09470
3 0.6 0.40925 0.07626 0.05929 0.06079 0.04568 0.06035
4 0.8 0.46960 0.04576 0.03281 0.03407 0.02284 0.03373
5 1.0 0.50333

41 / 89

Higher-order Runge-Kutta methods

Low order methods For n = 1, 2, 3, 4, there are methods of order n requiring n function evaluations per time
step.

Higher order methods At least six function evaluations are needed for a fifth order method.
Even more evaluations are needed for higher orders.

Choice of order Fourth-order methods usually provide a good balance between accuracy and efficiency.

42 / 89

Comparison of Runge-Kutta methods
Comparison ẏ(t) = e^{−t} − y², y(0) = 0.

h    y(0.1)    y(1.0)    Nf    e_rel


Euler 0.2 0.564559864473071 5 1.2 × 10−1
Ralston2 0.2 0.500286600094707 10 6.1 × 10−3
Heun3 0.2 0.503415367048022 15 1.4 × 10−4
RK4 0.2 0.503328891202093 20 3.6 × 10−5
Euler 0.1 0.1000000000000000 0.532904863460103 10 5.9 × 10−2
Ralston2 0.1 0.0948296905440380 0.502658823715687 20 1.3 × 10−3
Heun3 0.1 0.0948519042605422 0.503354541136427 30 1.5 × 10−5
RK4 0.1 0.0948541510517630 0.503345613873078 40 2.8 × 10−6
Euler 0.025 0.0961469752655123 0.510557320425266 40 1.4 × 10−2
Ralston2 0.05 0.0948491396932605 0.503183407918572 40 1.6 × 10−4
Heun3 0.077 0.503350170836445 39 6.3 × 10−6
RK4 0.1 0.0948541510517630 0.503345613873078 40 1.0 × 10−6
Exact 0.094854320 · · · 0.5033467 · · ·

43 / 89

Multistep Methods 44 / 89

Multistep methods

Idea Use information from previous (tj , wj ).

Stages An n-stage multistep method computes wi+1 using wi , . . . , wi−n+1 .

General form The general form of an n-stage linear multistep method is

    wi+1 = ∑_{j=0}^{n−1} aj wi−j + h ∑_{j=0}^{n−1} bj f(ti−j, wi−j).

45 / 89

Adams-Bashforth methods

Derivation
    y(ti + hi) = y(ti) + ∫_{ti}^{ti+hi} ẏ(τ) dτ = y(ti) + ∫_{ti}^{ti+hi} f(τ, y(τ)) dτ
Find a polynomial p(τ) approximating f(τ, y(τ)) over [ti, ti + hi].
Estimate p(τ) by interpolating f(τ, y(τ)) at τ = ti, ti−1, ti−2, . . . , ti−k+1.
Linear interpolation at (ti, wi) and (ti−1, wi−1) gives
    p(τ) = f(ti, wi) + ((τ − ti)/hi−1) [ f(ti, wi) − f(ti−1, wi−1) ]
Then for equally-sized steps,
    y(ti+1) ≈ wi+1 = wi + ∫_{ti}^{ti+h} ( f(ti, wi) + ((τ − ti)/h)[ f(ti, wi) − f(ti−1, wi−1) ] ) dτ
                   = wi + h f(ti, wi) + (1/2)h [ f(ti, wi) − f(ti−1, wi−1) ]
                   = wi + (1/2)h [ 3 f(ti, wi) − f(ti−1, wi−1) ]
46 / 89

Adams-Bashforth methods

Two-stage Adams-Bashforth method

    wi+1 = wi + (1/2)h [ 3 f(ti, wi) − f(ti−1, wi−1) ].
Local error E = (5/12) h³ y^(3)(τ) for some τ ∈ (ti−1, ti+1).
Global error O(h²); second-order.

Three-stage Adams-Bashforth method

    wi+1 = wi + (1/12)h [ 23 f(ti, wi) − 16 f(ti−1, wi−1) + 5 f(ti−2, wi−2) ].
Local error E = (3/8) h⁴ y^(4)(τ) for some τ ∈ (ti−2, ti+1).
Global error O(h³); third-order.

Four-stage Adams-Bashforth method

    wi+1 = wi + (1/24)h [ 55 f(ti, wi) − 59 f(ti−1, wi−1) + 37 f(ti−2, wi−2) − 9 f(ti−3, wi−3) ].
Local error E = (251/720) h⁵ y^(5)(τ). Global error O(h⁴); fourth-order.
47 / 89

Bootstrapping multistep methods

Bootstrapping The first step(s) of a multistep method require wi for i < 0 which is not available.
e.g. Two-stage Adams-Bashforth:
    w1 = w0 + (1/2)h [ 3 f(t0, w0) − f(t−1, w−1) ]
Use a Runge-Kutta method of the same order (or higher) to start the multistep method.

48 / 89
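
A Matlab sketch of the two-stage Adams-Bashforth method, bootstrapped with one step of Ralston's second-order method (indices are shifted by one because Matlab arrays start at 1; step size and step count are illustrative):
f = @(t,y) exp(-t) - y.^2;
h = 0.1; N = 10;
t = (0:N)*h; w = zeros(1,N+1); w(1) = 0;
k1 = f(t(1), w(1));                          % bootstrap w(2) with Ralston's method
k2 = f(t(1) + 2*h/3, w(1) + 2*h/3*k1);
w(2) = w(1) + h*(k1 + 3*k2)/4;
for i = 2:N                                  % w_{i+1} = w_i + (h/2)(3 f_i - f_{i-1})
    w(i+1) = w(i) + 0.5*h*( 3*f(t(i),w(i)) - f(t(i-1),w(i-1)) );
end
w(end)                                       % approx y(1); about 0.50167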

Multistep methods

Example Solve ẏ(t) = e^{−t} − y², y(0) = 0 at t = 1 using the two-stage Adams-Bashforth method with h = 0.5. Bootstrap using a second-order method.
Bootstrap using Ralston's second-order method:
    f(t0, w0) = f(0.0, 0.0000) = 1.0000,
    f(t0 + (2/3)h0, w0 + (2/3)h0 f(t0, w0)) = f(0.3333, 0.3333) = 0.6054,
    y(0.5) ≈ w1 = w0 + (1/4)h0 [ f(t0, w0) + 3 f(t0 + (2/3)h0, w0 + (2/3)h0 f(t0, w0)) ]
               = 0.0000 + (1/4)×0.5×(1.0000 + 3×0.6054)
               = 0.3520

Take one step of the Adams-Bashforth method to find w2:
    y(1.0) ≈ w2 = w1 + (1/2)h [ 3 f(t1, w1) − f(t0, w0) ]
               = 0.3520 + (1/2)×0.5×(3×f(0.5, 0.3520) − f(0.0, 0.0000))
               = 0.3520 + (1/2)×0.5×(3×0.48260 − 1.0000) = 0.4640.
Solution y(1) ≈ 0.4640 has absolute error 0.039, relative error 7.8%.

49 / 89

Multistep methods

Example Solve ẏ(t) = e^{−t} − y², y(0) = 0 at t = 1 using the two-stage Adams-Bashforth method with h = 0.1.
Bootstrap using Ralston's second-order method:
    w1 = w0 + (1/4)h0 [ f(t0, w0) + 3 f(t0 + (2/3)h0, w0 + (2/3)h0 f(t0, w0)) ].
Continue with the two-stage Adams-Bashforth method

    wi+1 = wi + (1/2)h [ 3 f(ti, wi) − f(ti−1, wi−1) ].


ti wi fi = f (ti , wi ) ti wi fi = f (ti , wi )
0.0 0.000000 1.000000 0.5 0.366485 0.472220
0.1 0.094830 0.895845 0.6 0.408752 0.381734
0.2 0.179206 0.786616 0.7 0.442401 0.300867
0.3 0.252407 0.677109 0.8 0.468444 0.229889
0.4 0.314642 0.571320 0.9 0.487884 0.168539
1.0 0.501670
So y(1.0) ≈ w10 = 0.501670.
Absolute error 0.0017, relative error 0.33%.

50 / 89

Implicit methods 51 / 89

Implicit methods

Idea Use an extra forward interpolation condition ẏ(ti+1) = f(ti+1, wi+1).
The formula for wi+1 then depends on f(ti+1, wi+1), leading to an algebraic equation for wi+1 which has to be
solved.

52 / 89

Adams-Moulton methods

Two-stage interpolation Interpolate (ti−1 , f (ti−1 , wi−1 )), (ti , f (ti , wi )) and (ti+1 , f (ti+1 , wi+1 )).
Assume equal step-size h.
    p(τ) = f(ti, wi) + ((τ − ti)/2h) [ f(ti+1, wi+1) − f(ti−1, wi−1) ]
                     + ((τ − ti)²/2h²) [ f(ti+1, wi+1) − 2 f(ti, wi) + f(ti−1, wi−1) ]
Then
    wi + ∫_{ti}^{ti+h} p(τ) dτ = wi + h f(ti, wi) + (1/4)h [ f(ti+1, wi+1) − f(ti−1, wi−1) ]
                               + (1/6)h [ f(ti+1, wi+1) − 2 f(ti, wi) + f(ti−1, wi−1) ]
so
    wi+1 = wi + (1/12)h [ 5 f(ti+1, wi+1) + 8 f(ti, wi) − f(ti−1, wi−1) ]

53 / 89

Adams-Moulton methods

Two-stage implicit Adams-Moulton method


    wi+1 = wi + (1/12)h [ 5 f(ti+1, wi+1) + 8 f(ti, wi) − f(ti−1, wi−1) ]
Local error E = −(1/24) h⁴ y^(4)(τ) for some τ ∈ (ti−1, ti+1).
Global error O(h³); third-order.

Three-stage implicit Adams-Moulton method
    wi+1 = wi + (1/24)h [ 9 f(ti+1, wi+1) + 19 f(ti, wi) − 5 f(ti−1, wi−1) + f(ti−2, wi−2) ]
Local error E = −(19/720) h⁵ y^(5)(τ) for some τ ∈ (ti−2, ti+1).
Global error O(h⁴); fourth-order.

Explicit versus implicit methods Implicit multistep methods have higher accuracy than explicit methods of the same order, e.g.
Adams-Bashforth three-stage method local error:
    E = (3/8) y^(4)(τ) h⁴ for some τ ∈ (ti−2, ti+1).
Adams-Moulton two-stage method local error:
    E = −(1/24) y^(4)(τ) h⁴ for some τ ∈ (ti−1, ti+1).

54 / 89

Predictor-corrector approach

Predictor-corrector approach
Use an estimate w̃i+1 on the right-hand side of the Adams-Moulton formula, e.g.
    wi+1 = wi + (1/12)h [ 5 f(ti+1, w̃i+1) + 8 f(ti, wi) − f(ti−1, wi−1) ]
Typically use an explicit Adams-Bashforth method with the same number of stages (and lower order) to obtain w̃i+1, e.g.
    w̃i+1 = wi + (1/2)h [ 3 f(ti, wi) − f(ti−1, wi−1) ].

Theorem The combination of an n-stage Adams-Bashforth method (order n) as a predictor with an n-stage Adams-Moulton method (order n + 1) as a corrector has order n + 1.
Alternatively, can use an (n + 1)-stage Adams-Bashforth method and an n-stage Adams-Moulton method to estimate the error, e.g.
    w̃i+1 = wi + (1/12)h [ 23 f(ti, wi) − 16 f(ti−1, wi−1) + 5 f(ti−2, wi−2) ].
May iterate the corrector formula to improve accuracy, but the improvement is usually not worth the additional effort.

55 / 89
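
A Matlab sketch of the two-stage Adams-Bashforth-Moulton predictor-corrector, with one corrector application per step and the starting value w1 ≈ 0.09485432 assumed in the worked example that follows:
f = @(t,y) exp(-t) - y.^2;
h = 0.1; N = 10;
t = (0:N)*h; w = zeros(1,N+1);
w(1) = 0; w(2) = 0.09485432;
for i = 2:N
    % predictor: two-stage Adams-Bashforth
    wp = w(i) + 0.5*h*( 3*f(t(i),w(i)) - f(t(i-1),w(i-1)) );
    % corrector: two-stage Adams-Moulton, evaluated at the predicted value
    w(i+1) = w(i) + h/12*( 5*f(t(i+1),wp) + 8*f(t(i),w(i)) - f(t(i-1),w(i-1)) );
end
w(end)                                       % approx y(1); about 0.50345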

Predictor-corrector approach

Example Solve ẏ = e^{−t} − y² with y(0) = 0 using h = 0.1.


Assume y(0.1) ≈ 0.09485432.
Times t0 = 0.0, t1 = 0.1, t2 = 0.2, . . . with w0 = 0.0 and w1 = 0.09485432.
Predictor from the two-stage Adams-Bashforth method:
    w̃2 = w1 + (1/2)h [ 3 f(t1, w1) − f(t0, w0) ] = 0.17923033.
Adams-Moulton formula
    wi+1 = wi + (1/12)h [ 5 f(ti+1, w̃i+1) + 8 f(ti, wi) − f(ti−1, wi−1) ]
Corrector
    w2 = w1 + (1/12)h [ 5 f(t2, w̃2) + 8 f(t1, w1) − f(t0, w0) ]
       = 0.0949 + (1/12)×0.1×( 5 f(0.2, 0.1792) + 8 f(0.1, 0.0949) − f(0.0, 0.0000) )
       = 0.0949 + (1/12)×0.1×( 5×0.7866 + 8×0.8958 − 1.0000 )
       = 0.17901896.
Applying the corrector again gives 0.17902212.
Repeated application of the corrector converges to 0.17902207.

The exact value is y(0.2) = 0.17900201 (8 sf).

56 / 89

Predictor-corrector approach

Example Solve ẏ = e^{−t} − y² with y(0) = 0 using h = 0.1.


Continue using t1 = 0.1, w1 = 0.0949, f (t1 , w1 ) = 0.8958, t2 = 0.2, w2 = 0.1790.
Important: Use the corrected value w2 to compute further, in both w̃3 and w3!
    w̃3 = w2 + (1/2)h [ 3 f(t2, w2) − f(t1, w1) ]
        = 0.1790 + (1/2)×0.1×( 3 f(0.2, 0.1790) − f(0.1, 0.0949) )
        = 0.1790 + (1/2)×0.1×( 3×0.7867 − 0.8958 ) = 0.25222940.
    w3 = w2 + (1/12)h [ 5 f(t3, w̃3) + 8 f(t2, w2) − f(t1, w1) ]
       = 0.1790 + (1/12)×0.1×( 5 f(0.3, 0.2522) + 8 f(0.2, 0.1790) − f(0.1, 0.0949) )
       = 0.1790 + (1/12)×0.1×( 5×0.6772 + 8×0.7867 − 0.8958 ) = 0.25221576.
    w̃4 = w3 + (1/2)h [ 3 f(t3, w3) − f(t2, w2) ]
        = 0.2522 + (1/2)×0.1×( 3 f(0.3, 0.2522) − f(0.2, 0.1790) ) = 0.31446243
    w4 = w3 + (1/12)h [ 5 f(t4, w̃4) + 8 f(t3, w3) − f(t2, w2) ]
       = 0.2522 + (1/12)×0.1×( 5 f(0.4, 0.3144) + 8 f(0.3, 0.2522) − f(0.2, 0.1790) )
       = 0.2522 + (1/12)×0.1×( 5×0.5714 + 8×0.6772 − 0.7867 ) = 0.31461683.

57 / 89

Predictor-corrector approach
Example Solve ẏ(t) = e^{−t} − y², y(0) = 0 at t = 1 using the two-stage Adams-Bashforth-Moulton predictor-corrector method with h = 0.1.

ti    w̃i    f̃i = f(ti, w̃i)    wi    fi = f(ti, wi)


0.0 0.00000000 1.00000000
0.1 0.09485432 0.89584008
0.2 0.17923033 0.78660724 0.17901896 0.78668296
0.3 0.25222940 0.67719855 0.25221576 0.67720543
0.4 0.31446243 0.57143343 0.31461683 0.57133630
0.5 0.36645700 0.47223993 0.36673920 0.47203302
0.6 0.40897734 0.38154917 0.40934481 0.38124846
0.7 0.44293043 0.30039794 0.44334435 0.30003109
0.8 0.46928659 0.22909906 0.46971515 0.22869665
0.9 0.48901809 0.16743097 0.48943762 0.16702048
1.0 0.503055859 0.11481424 0.50345044

y(1.0) ≈ w10 = 0.503450. Absolute error 0.00010, relative error 0.021%.


58 / 89

Fully-implicit approach (Non-examinable)

Fully-implicit approach
Try to solve the implicit equation for wi+1 exactly. Need to solve for z = wi+1 in
    z = wi + (1/12)h [ 5 f(ti+1, z) + 8 f(ti, wi) − f(ti−1, wi−1) ]
or equivalently
    gi+1(z) = (5/12) h f(ti+1, z) − z + wi + (1/12)h [ 8 f(ti, wi) − f(ti−1, wi−1) ] = 0.
Typically use the secant method, starting with wi and w̃i+1, or Newton's method, which requires computing f,y since g′(z) = (5/12) h f,y(ti+1, z) − 1.

59 / 89

Gear’s method (Non-examinable)

Gear’s method Fit a polynomial p to data (tj , wj ) for j = i−k+1, . . . , i, i+1.


Solve the equation ṗ(ti+1 ) = f (ti+1 , p(ti+1 )).

Backward-difference method For k = 2 and equally-sized steps h, have
    p(τ) = wi + ((τ − ti)/2h)(wi+1 − wi−1) + ((τ − ti)²/2h²)(wi+1 − 2wi + wi−1)
so
    ṗ(τ) = (1/2h)(wi+1 − wi−1) + ((τ − ti)/h²)(wi+1 − 2wi + wi−1).
Taking τ = ti+1 = ti + h gives
    ṗ(ti+1) = (1/2h)(wi+1 − wi−1) + (1/h)(wi+1 − 2wi + wi−1) = (1/2h)(3wi+1 − 4wi + wi−1).
So Gear's method is
    (1/2h)(3wi+1 − 4wi + wi−1) = f(ti+1, wi+1).
Rearranging gives
    wi+1 = (4/3)wi − (1/3)wi−1 + (2/3)h f(ti+1, wi+1).

60 / 89

Gear’s methods (Non-examinable)

Two-stage Gear’s (backward-difference) method


    wi+1 = (4/3)wi − (1/3)wi−1 + (2/3)h f(ti+1, wi+1).

Three-stage Gear’s (backward-difference) method


    wi+1 = (18/11)wi − (9/11)wi−1 + (2/11)wi−2 + (6/11)h f(ti+1, wi+1).

Four-stage Gear’s (backward-difference) method


    wi+1 = (48/25)wi − (36/25)wi−1 + (16/25)wi−2 − (3/25)wi−3 + (12/25)h f(ti+1, wi+1).

61 / 89

Gear’s method (Non-examinable)

Exercise Take f(t, y) = e^{−t} − y², h = 0.1, w0 = 0.0, w1 = 0.09485432. Find w2 using the backward-difference method.
Solve for w2 = z in
    g2(z) = (4/3)w1 − (1/3)w0 + (2/3)h f(t2, z) − z = 0

62 / 89
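
One way to carry out this exercise in Matlab is to hand g2 to a root-finder such as fzero, started near w1 (a sketch; the secant or Newton iteration of the previous slide would work equally well):
f = @(t,y) exp(-t) - y.^2;
h = 0.1; w0 = 0.0; w1 = 0.09485432; t2 = 0.2;
g2 = @(z) 4/3*w1 - 1/3*w0 + 2/3*h*f(t2,z) - z;   % implicit equation g2(w2) = 0
w2 = fzero(g2, w1)                               % the root lies near the previous value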

Stability 63 / 89

Stability analysis

Example Consider ẏ = λy with step size h.


Exact solution has
    y(t + h) = e^{λh} y(t).
For λ < 0, the exact solution is monotonically decreasing.
Euler’s method gives
wi+1 = wi + hf (wi ) = wi + h λwi = (1 + hλ)wi .
For −1 < hλ < 0, Euler’s method is monotonic decreasing.
For −2 < hλ < −1, Euler’s method oscillates and decreases.
For −∞ < hλ < −2, Euler’s method oscillates and grows.
Say Euler’s method is monotonically stable for −1 < hλ < 0, stable for −2 < hλ < 0 and unstable for
hλ < −2.
64 / 89

Stability analysis

Linear stability analysis Consider ẏ = λy with step size h; set α = λh.

Stability criteria
Method is unconditionally-stable if wi → 0 as i → ∞ for all hλ < 0.
Method is limit-stable if wi+1 /wi → 0 as hλ → −∞.

65 / 89

Stability analysis

Stability analyses Euler’s method has


wi+1 = (1 + α)wi ,
so is monotone and stable for −1 ≤ α ≤ 0, oscillatory and stable
for −2 < α ≤ −1, unstable for α < −2.
Ralston's method is stable for −2 ≤ α < 0, since
    wi+1 = wi + (1/4)h( λwi + 3λ(wi + (2/3)λh wi) ) = wi + αwi + (1/2)α²wi = (1 + α + (1/2)α²)wi.
Midpoint and modified Euler methods are also stable for −2 ≤ α < 0.
Third-order Runge-Kutta methods are stable for −2.51 ≤ α < 0.
The classical fourth-order Runge-Kutta method is stable for −2.78 ≤ α < 0.
The two-stage Adams-Bashforth method has
    wi+1 = wi + (1/2)h(3λwi − λwi−1) = (1 + (3/2)α)wi − (1/2)αwi−1.
Assuming wi = β^i w0 gives β² − (1 + (3/2)α)β + (1/2)α = 0.
Stable for −0.8 < α < 0, when |β| < 1 for all solutions.

66 / 89
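
These regimes are easy to check numerically; a small Matlab sketch for Euler's method on ẏ = λy (the values of hλ are chosen to land in each regime):
for alpha = [-0.5 -1.5 -2.5]          % alpha = h*lambda
    w = (1 + alpha).^(0:5);           % w_i = (1 + alpha)^i w_0 with w_0 = 1
    fprintf('h*lambda = %4.1f:', alpha); fprintf(' %8.3f', w); fprintf('\n');
end
% -0.5: monotone decay; -1.5: oscillating decay; -2.5: oscillating growth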

Stability analysis

Backward Euler method wi+1 = wi + hf (ti+1 , wi+1 ).


Have wi+1 = wi + hλwi+1 = wi + αwi+1, so (1 − α)wi+1 = wi, giving
    wi+1 = wi / (1 − α).
Unconditionally stable, nonoscillating.

Trapezoid (Crank-Nicolson) method wi+1 = wi + (1/2)h [ f(ti, wi) + f(ti+1, wi+1) ].

Then wi+1 = wi + (1/2)h(λwi + λwi+1) = wi + (1/2)αwi + (1/2)αwi+1, so (1 − α/2)wi+1 = (1 + α/2)wi. Hence
    wi+1 = ( (1 + α/2)/(1 − α/2) ) wi.
Unconditionally stable; oscillates if α < −2; the ratio wi+1/wi → −1 as α → −∞, so the method is not limit-stable.

67 / 89

Stability analysis

Backward-difference method (Non-examinable)


    wi+1 = (4/3)wi − (1/3)wi−1 + (2/3)h f(ti+1, wi+1).
Stability Substituting wi = β^i w0 yields
    β = 4/3 − (1/3)β⁻¹ + (2/3)αβ.
Hence (3 − 2α)β² − 4β + 1 = 0, so
    β = ( 2 ± √(1 + 2α) ) / (3 − 2α) = 1 / ( 2 ∓ √(1 + 2α) ),
with |β| < 1 for α < 0.
Unconditionally stable.

68 / 89

Stability analysis

Implicit Adams-Moulton (Non-examinable)


Suppose wi = β^i w0 and f(t, y) = λy with hλ = α. Then
    βwi = wi + (1/12)( 5αβwi + 8αwi − αwi/β ).
Simplifying gives
    (12 − 5α)β² − (12 + 8α)β + α = 0.
Hence
    β = [ (12 + 8α) ± √( (12 + 8α)² − 4α(12 − 5α) ) ] / ( 2(12 − 5α) ).
This has a positive stable branch and a negative branch which becomes unstable.
If wi−1 = wi/γ with γ > 0, and wi+1 = βwi, then
    β = (12 + 8α − α/γ) / (12 − 5α).
In practice, there is little danger of instability if w1 is chosen correctly.

69 / 89

Explicit vs implicit methods

Theorem No explicit Runge-Kutta or multistep method is unconditionally stable.

Theorem (Dahlquist barrier) An implicit multistep method can only be unconditionally stable if its order is at
most 2.

70 / 89

Stiff systems

Idea A stiff system contains fast and stable variables, which behave similarly to ẏi = λyi for λ ≪ 0.
These variables converge quickly to a pseudo-equilibrium, after which the dynamics is determined by the slow
variables.
Explicit methods are unstable for moderate step sizes, requiring either implicit methods, or small step sizes.

71 / 89

Stiff systems

Example Consider the system


ẏ = 1 − ty.
Eventually, any explicit method becomes unstable for any fixed step size!

Example The Fitzhugh-Nagumo equations are


    v̇ = (v − v³/3 − w + Iext) τ;
    ẇ = v + a − bw;
with parameters
Iext = 0.5; a = 0.7; b = 0.8; τ = 12.5;
The parameter τ describes the time-scale separation; the variable v changes quickly and w comparatively slowly.
The Fitzhugh-Nagumo equations are stiff: numerical solutions exhibit oscillations for step sizes starting around h = 0.04.

72 / 89

Multivariable systems 73 / 89

Multivariable systems

Example Predator-prey system with rabbits (r ) and stoats (s) satisfying


ṙ = r(3 − s); ṡ = s(r − 2);
with initial conditions
r(0) = 5; s(0) = 2.
To solve in Matlab, write the system in vector form. Introduce the state vector y with components y1 = r and
y2 = s.
Write in vector form:
    ẏ = [ẏ1; ẏ2] = [ṙ; ṡ] = [r(3 − s); s(r − 2)] = [y1(3 − y2); y2(y1 − 2)] =: f(t, y).
The initial conditions are
    y(0) = [y1(0); y2(0)] = [r(0); s(0)] = [5; 2] =: y0.

74 / 89

Multivariable systems

Example Predator-prey system with rabbits (r ) and stoats (s) satisfying


ṙ = r(3 − s); ṡ = s(r − 2); r(0) = 5, s(0) = 2.
Vector form
    y = [r; s];   ẏ = [r(3 − s); s(r − 2)] = [y1(3 − y2); y2(y1 − 2)] =: f(t, y);   y0 = [5; 2].
In Matlab,
f=@(t,y)[ y(1)*(3-y(2)); y(2)*(y(1)-2) ]
t0=0, tf=?
y0=[5;2]
Solve up to time tf:
ode45(f,[t0,tf],y0)
Alternatively, set up r and s rate equations separately and combine:
dotr=@(r,s)r*(3-s), dots=@(r,s)s*(r-2), r0=5, s0=2
f=@(t,y)[dotr(y(1),y(2));dots(y(1),y(2))], y0=[r0;s0]
75 / 89

Second-order systems

Example Damped driven pendulum


    ml² s̈ + δl ṡ + mgl sin(s) = A cos(ωt)
New variable v = ṡ gives equations
    ṡ = v;   v̇ = s̈ = −(δ/ml)v − (g/l) sin(s) + (A/ml²) cos(ωt).
Write in vector form y1 = s, y2 = v:
    y = [s; v];   ẏ = f(t, y) = [ y2; (A/ml²) cos(ωt) − (g/l) sin(y1) − (δ/ml)y2 ]
Run simulation with m = g = l = 1, ω = 1,
ẏ1 = y2 ; ẏ2 = A cos(ωt) − sin(y1 ) − δy2 .
With A = 0, δ = 1/4, s0 = 0, v0 = 2, get damping, slowly varying period.
With A = 3, δ = 1/4, s0 = 0 and v0 = 2 get chaos.

76 / 89
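
A Matlab sketch of this simulation with the parameters above (the final time 100 is illustrative):
A = 3; delta = 1/4;                     % A = 0 gives damped motion; A = 3 gives chaos
f = @(t,y) [ y(2); A*cos(t) - sin(y(1)) - delta*y(2) ];
y0 = [0; 2];                            % s0 = 0, v0 = 2
[ts,ys] = ode45(f, [0, 100], y0);
plot(ts, ys(:,1))                       % angle s against time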

Second-order systems

General formulation Suppose


    ẍ = g(t, x, ẋ);   x(0) = x0,  ẋ(0) = v0.
Introduce the new variable v = ẋ with v̇ = ẍ to obtain the system
    ẋ = v;
    v̇ = g(t, x, v)
Write in vector form with y1 = x, y2 = v and y = [y1, y2]ᵀ to obtain
    ẏ = [ẏ1; ẏ2] = [ẋ; v̇] = [v; g(t, x, v)] = [y2; g(t, y1, y2)] =: f(t, y).
Formulation in Matlab
g = @(t,x,v) ...
x0=..., v0=...
f = @(t,y) [y(2); g(t,y(1),y(2))]
y0 = [x0;v0]
odeXX(f,[t0,tf],y0)
77 / 89

Second-order multivariable systems

Example The motion of a small satellite around a planet of mass M is defined by the differential equation
    ẍ = −(GM/‖x‖³) x =: g(x)
Introduce the velocity v = ẋ. The state vector y is given by y = [x; v].
In components,
    x = [x1; x2];   v = [v1; v2] = [ẋ1; ẋ2];   y = [x1; x2; v1; v2].
The evolution of y is given by
    ẏ = [ẋ; v̇] = [v; g(x)]
Formulation in Matlab
g=@(x) (-G*M/norm(x)^3)*x;
f=@(t,y) [y(3:4); g(y(1:2))];
x0=[?;?]; v0=[?;?]; y0=[x0;v0];

78 / 89

Adaptive methods 79 / 89

Adaptive methods

Combine an order-n method y(t + h) ≈ y(t) + k with an order-(n − 1) method y(t + h) ≈ y(t) + κ.
Expect
    |y(t + h) − (y + k)| ≪ |y(t + h) − (y + κ)| ≈ |(y + k) − (y + κ)| = |k − κ|.
A tolerance of ε means that the error of a single step of size h should be at most εh.
A step is acceptable if the error estimate from the embedded lower-order method satisfies
    |k − κ| ≤ εh.

Matlab Matlab’s ode methods are all adaptive methods. The ode23 method is the Bogacki-Shampine method.

80 / 89

Adaptive methods

If |k − κ| > εh, the step is rejected and a smaller step size qh is attempted instead.
If |k − κ| ≤ εh, the step is accepted, and a larger step size of max{h, qh} is used for the next step.
For an order-n method with step size qh, if the error for step size h is approximately |k − κ|, expect single-step error |k − κ| q^{n+1}.
Require the error for step size qh to be less than qh ε, so |k − κ| q^{n+1} ≤ qh ε.
When taking a step of size h, set
    q = ( εh / (s|k − κ|) )^{1/n} ≤ ( εh / |k − κ| )^{1/n}
Here, s is a safety factor, typically s = 2.

81 / 89

Bogacki-Shampine method (Advanced)

Bogacki-Shampine method A third-order method with an embedded second-order method.


    k1 = h f(t, w);
    k2 = h f(t + (1/2)h, w + (1/2)k1);
    k3 = h f(t + (3/4)h, w + (3/4)k2);
    k  = (2/9)k1 + (3/9)k2 + (4/9)k3;
    k4 = h f(t + h, w + (2/9)k1 + (3/9)k2 + (4/9)k3) = h f(t + h, w + k);
    κ  = (7/24)k1 + (6/24)k2 + (8/24)k3 + (3/24)k4;
Error estimates
    y(t + h) − (y(t) + k) = O(h^{3+1}),   y(t + h) − (y(t) + κ) = O(h^{2+1}).
Note that
    |κ − k| = |(5/72)k1 − (1/12)k2 − (1/9)k3 + (1/8)k4| = (1/72)|5k1 − 6k2 − 8k3 + 9k4|.

82 / 89
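
A Matlab sketch of a single adaptive Bogacki-Shampine step with the accept/reject logic above; the step-size update follows the worked example on the next slides (safety factor s = 2, exponent 1/2):
f = @(t,y) exp(-t) - y.^2;
t = 0; w = 0; h = 1.0;
tol = 0.001; s = 2; n = 2;
accepted = false;
while ~accepted
    k1 = h*f(t, w);
    k2 = h*f(t + h/2,   w + k1/2);
    k3 = h*f(t + 3*h/4, w + 3*k2/4);
    k  = (2*k1 + 3*k2 + 4*k3)/9;                % third-order update
    k4 = h*f(t + h, w + k);
    kappa = (7*k1 + 6*k2 + 8*k3 + 3*k4)/24;     % embedded second-order update
    q = ( tol*h/(s*abs(k - kappa)) )^(1/n);     % step-size adjustment factor
    if abs(k - kappa) <= tol*h
        t = t + h; w = w + k; accepted = true;  % accept the step
    end
    h = q*h;                % retry with a smaller step, or adjust the next step
end
[t, w, h]                   % roughly t = 0.26, w = 0.2232, as in the worked example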

Bogacki-Shampine method (Advanced)

Example ẏ = e^{−t} − y², y(0) = 0, ε = 0.001.

For h = 1.0, find
    k0,1 = h f(t0, w0) = h f(0.00, 0.0000) = 1.0×1.0000 = 1.0000;
    k0,2 = h f(t0 + (1/2)h, w0 + (1/2)k0,1) = h f(0.50, 0.5000) = 1.0×0.3565 = 0.3565;
    k0,3 = h f(t0 + (3/4)h, w0 + (3/4)k0,2) = h f(0.75, 0.2674) = 1.0×0.4009 = 0.4009;
    k0 = (2/9)k0,1 + (3/9)k0,2 + (4/9)k0,3
       = (2/9)×1.0000 + (3/9)×0.3565 + (4/9)×0.4009 = 0.5192;
    k0,4 = h f(t0 + h, w0 + k0) = h f(1.0, 0.5192) = 1.0×0.0983 = 0.0983;
    κ0 = (7/24)k0,1 + (6/24)k0,2 + (8/24)k0,3 + (3/24)k0,4
       = (7/24)×1.0000 + (6/24)×0.3565 + (8/24)×0.4009 + (3/24)×0.0983 = 0.5267;
Then |k0 − κ0| = |0.5192 − 0.5267| = 0.0075 and εh = 0.001; reject step.
    q = ( (1/2)εh/|k − κ| )^{1/2} = ( (1/2)×0.001×1.0/0.0075 )^{1/2} = 0.067^{1/2} ≈ 0.26.
Try step size ĥ = qh = 0.26×1.00 = 0.26.

83 / 89

Bogacki-Shampine method (Advanced)

Example ẏ = e^{−t} − y², y(0) = 0, ε = 0.001.

For h = 0.26 (working with the unrounded value h ≈ 0.2586), find
    k0,1 = h f(t0, w0) = h f(0.00, 0.0000) = 0.2586×1.0000 = 0.2586;
    k0,2 = h f(t0 + (1/2)h, w0 + (1/2)k0,1) = h f(0.13, 0.1293) = 0.2586×0.8620 = 0.2229;
    k0,3 = h f(t0 + (3/4)h, w0 + (3/4)k0,2) = h f(0.19, 0.1672) = 0.2586×0.7958 = 0.2058;
    k0 = (2/9)k0,1 + (3/9)k0,2 + (4/9)k0,3
       = (2/9)×0.2586 + (3/9)×0.2229 + (4/9)×0.2058 = 0.2232;
    k0,4 = h f(t0 + h, w0 + k0) = h f(0.26, 0.2232) = 0.2586×0.7223 = 0.1868;
    κ0 = (7/24)k0,1 + (6/24)k0,2 + (8/24)k0,3 + (3/24)k0,4
       = (7/24)×0.2586 + (6/24)×0.2229 + (8/24)×0.2058 + (3/24)×0.1868 = 0.2231;
Then |k0 − κ0| = |0.22321 − 0.22308| = 1.3×10⁻⁴ ≤ εh = 2.6×10⁻⁴; accept.
    q = ( (1/2)εh/|k − κ| )^{1/2} = ( (1/2)×0.001×0.26/0.00013 )^{1/2} = 0.97^{1/2} ≈ 0.98.
Set t1 = t0 + h0 = 0.26 and w1 = w0 + k0 = 0.2232.
Try next step size qh = 0.98×0.26 = 0.25.

84 / 89

Bogacki-Shampine method (Advanced)

Example ẏ = e^{−t} − y², y(0) = 0, ε = 0.001.

With t1 = 0.26 (2 sf), w1 = 0.2232 (4 sf) and h = 0.25 (2 sf), find
    k1,1 = h f(t1, w1) = h f(0.26, 0.2232) = 0.25×0.7223 = 0.1837;
    k1,2 = h f(t1 + (1/2)h, w1 + (1/2)k1,1) = h f(0.39, 0.3150) = 0.25×0.5807 = 0.1477;
    k1,3 = h f(t1 + (3/4)h, w1 + (3/4)k1,2) = h f(0.45, 0.3340) = 0.25×0.5266 = 0.1339;
    k1 = (2/9)k1,1 + (3/9)k1,2 + (4/9)k1,3
       = (2/9)×0.1837 + (3/9)×0.1477 + (4/9)×0.1339 = 0.1495;
    k1,4 = h f(t1 + h, w1 + k1) = h f(0.52, 0.3727) = 0.25×0.4599 = 0.1169;
    κ1 = (7/24)k1,1 + (6/24)k1,2 + (8/24)k1,3 + (3/24)k1,4
       = (7/24)×0.1837 + (6/24)×0.1477 + (8/24)×0.1339 + (3/24)×0.1169 = 0.1497;
Then |k1 − κ1| = |0.14954 − 0.14973| = 0.00019 < 0.00025 = εh; accept.
    q = ( (1/2)εh/|k − κ| )^{1/2} = ( (1/2)×0.001×0.25/0.00019 )^{1/2} = 0.67^{1/2} ≈ 0.82.
Set t2 = t1 + h1 = 0.52, w2 = w1 + k1 = 0.2232 + 0.1495 = 0.3727.
Next step size h = qh = 0.82×0.25 = 0.21.

85 / 89

Adaptive multistep methods (Non-examinable)

Changing step size in a multistep method


To halve the step size in the two-stage Adams-Bashforth method, use
    y(t + (1/2)h) ≈ y(t) + ∫_t^{t+h/2} ( f(t, y(t)) + ((τ−t)/h)[ f(t, y(t)) − f(t−h, y(t−h)) ] ) dτ
                 = y(t) + (1/2)h f(t, y(t)) + (1/8)h [ f(t, y(t)) − f(t−h, y(t−h)) ]
                 = y(t) + (1/8)h [ 5 f(t, y(t)) − f(t−h, y(t−h)) ]
Hence
    ti+1 = ti + hi where hi = (1/2)hi−1
    wi+1 = wi + (1/4)hi [ 5 f(ti, wi) − f(ti−1, wi−1) ]

To double the step size, use
    ti+1 = ti + hi where hi = 2hi−1
    wi+1 = wi + hi [ 2 f(ti, wi) − f(ti−1, wi−1) ]

86 / 89

Global errors 87 / 89

Global errors

Growth of global errors If ẏ = λy, then y(t) = y(0)e^{λt}, so an error of size ε at time 0 becomes ε e^{λt} at time t.
In general,
    (d/dt)[ y1(t) − y2(t) ] = f(t, y1(t)) − f(t, y2(t)) = f,y(t, η) [ y1(t) − y2(t) ]

Error bound Combine the global growth of errors with the local error estimate:

Theorem If f,y(τ, y(τ)) ≤ L and the local error is C f^(n)(τ, y(τ)) h^{n+1}, then at times tn ∈ [0, T],
    |wn − y(tn)| ≤ C M ((exp(LT) − 1)/L) hⁿ,  where M = sup_{τ∈[0,T]} |f^(n)(τ, y(τ))|.
88 / 89

Global errors

Example (Non-examinable) Duffing oscillator


    ẍ + δẋ + αx + βx³ = γ cos(ωt).
Take α = β = γ = ω = 1, δ = 0.1; x(0) = 1.5, ẋ(0) = 0.0. Uncontrollable growth of local errors. Chaotic
motion.

89 / 89
