Differential Equations
Pieter Collins
Initial-Value Problems 6
Initial-value problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
ODEs in Matlab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Taylor Methods 10
Euler’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Taylor methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Runge-Kutta Methods 23
Midpoint method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Modified Euler method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Second-order methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Third-order methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Fourth-order methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Comparisons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Multistep Methods 44
Multistep methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Adams-Bashforth methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Bootstrapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Implicit methods 51
Adams-Moulton methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Predictor-corrector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Fully-implicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Gear’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Stability 63
Stability analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Explicit methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Explicit vs implicit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Stiff systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Multivariable systems 73
Multivariable systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Second-order systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Adaptive methods 79
Adaptive methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Bogacki-Shampine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Adaptive multistep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Global errors 87
Growth of errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Introduction to Differential Equations 2 / 89
Differential equations
A differential equation for a quantity y is a formula giving the time-derivative of y in terms of y itself:
ẏ = f (t, y).
If y is finite-dimensional, then we have an ordinary differential equation (ODE). Other kinds of differential
equation include partial differential equations and stochastic differential equations.
Examples
• Exponential growth ẏ = ky or decay ẏ = −ky .
• Damped oscillations ẍ + δ ẋ + kx = 0.
Equivalently: ẋ = v , v̇ = ẍ = −δv − kx.
3 / 89
Theorem If ẏ = f (t, y) where f is differentiable in y , then there is a unique solution curve through any point.
• Hence solution curves do not cross each other or merge.
4 / 89
Analytical solutions exist only in special cases! Even when they do exist, they may be hard to calculate!
5 / 89
Initial-Value Problems 6 / 89
Initial-value problems
An initial-value problem (IVP) couples the differential equation with an initial condition:
ẏ = f(t, y),   y(t0) = y0.
7 / 89
Chemical reactions
Solution of ODEs in Matlab
9 / 89
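A minimal sketch, assuming the standard ode45 interface, of solving the running example ẏ = e^{−t} − y², y(0) = 0 over [0, 1] (the handout’s own session is not reproduced here):

f = @(t,y) exp(-t) - y^2;     % right-hand side f(t,y)
[t,w] = ode45(f, [0 1], 0);   % adaptive explicit Runge-Kutta solver
w(end)                        % approximation to y(1) = 0.503347 (6 s.f.)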
Taylor Methods 10 / 89
Euler’s method
11 / 89
Euler’s method
12 / 89
Euler’s method
13 / 89
Euler’s method
i ti wi f (ti , wi ) wi + hf (ti , wi )
0 0.00000 0.00000 1.00000 0.20000
1 0.20000 0.20000 0.77873 0.35575
2 0.40000 0.35575 0.54376 0.46450
3 0.60000 0.46450 0.33305 0.53111
4 0.80000 0.53111 0.16725 0.56456
5 1.00000 0.56456
Better approximation; absolute error |w5 − y(1.0)| ≈ 0.061, relative error 12%.
With step size h = 0.1, find y(1.0) ≈ w10 = 0.5329, relative error 5.8%.
The error approximately halves on halving the step size: a first-order method!
14 / 89
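A minimal Matlab sketch of Euler’s method (an illustration, with step size h = 0.2 to reproduce the table above):

f = @(t,w) exp(-t) - w^2;
h = 0.2; t = 0; w = 0;
for i = 1:5
    w = w + h*f(t,w);   % Euler update: w_{i+1} = w_i + h f(t_i, w_i)
    t = t + h;
end
w                       % w_5 = 0.56456, vs exact y(1) = 0.503347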
Euler’s method
Theorem (Local error bound for Euler’s method) Let y be the solution of the differential equation
dy/dt = f (t, y), and let
wi+1 = y(ti ) + hi f (ti , y(ti ))
be given by Euler’s method.
Then there exists τ ∈ [ti, ti+1] such that
|wi+1 − y(ti+1)| = ½ |f,t(τ, y(τ)) + f,y(τ, y(τ)) f(τ, y(τ))| hi².
In particular, if y(t) ∈ [ymin, ymax] for all t ∈ [ti, ti+1], and
|f,t(τ, η) + f,y(τ, η) f(τ, η)| ≤ M
for all τ ∈ [ti, ti+1] and η ∈ [ymin, ymax], then
|wi+1 − y(ti+1)| ≤ (M/2) hi² = O(hi²).
16 / 89
Euler’s method
17 / 89
Second-order Taylor method Use the second-order term in the Taylor expansion to improve Euler’s method:
y(t + h) ≈ y(t) + h f(t, y) + ½h² (f,t(t, y) + f,y(t, y) f(t, y)),
so take
wi+1 = wi + hi f(ti, wi) + ½hi² (f,t(ti, wi) + f,y(ti, wi) f(ti, wi)).
18 / 89
Example Let f(t, y) = e^{−t} − y², so f,t(t, y) = −e^{−t} and f,y(t, y) = −2y.
Initial condition y(0) = 0.
For step size h = 1.0,
f (0.0, 0.000) = 1.000, f,t(0.0, 0.000) = −1.000, f,y (0.0, 0.000) = 0.000;
y(1.0) ≈ w1 = w0 + h f(t0, w0) + ½h² (f,t(t0, w0) + f,y(t0, w0) f(t0, w0))
= 0.0000 + 1.0×1.0000 + ½×1.0²×(−1.0000 + 0.0000×1.0000)
= 0.0000 + 1.0000 − 0.5000 = 0.5000
Relative error 0.7%, much better!
19 / 89
Second-order Taylor method (Non-examinable)
Example Let f(t, y) = e^{−t} − y², so f,t(t, y) = −e^{−t} and f,y(t, y) = −2y.
Initial condition y(0) = 0.
For step size h = 0.5,
y(0.5) ≈ w1 = w0 + h f(t0, w0) + ½h² (f,t(t0, w0) + f,y(t0, w0) f(t0, w0)) = 0.3750.
y(1.0) ≈ w2 = w1 + h f(t1, w1) + ½h² (f,t(t1, w1) + f,y(t1, w1) f(t1, w1)) = 0.4885.
Relative error 3%, still much better than Euler’s method.
For h = 0.2, y(1.0) ≈ w5 = 0.500708, with relative error 0.52%.
For h = 0.1, y(1.0) ≈ w10 = 0.502675, with relative error 0.13%.
Halving the step size multiplies the error by a quarter, so the method is second-order; global error O(h²).
20 / 89
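A Matlab sketch of the second-order Taylor method (with h = 0.1, matching the last result above):

f  = @(t,w) exp(-t) - w^2;
ft = @(t,w) -exp(-t);    % partial derivative f,t
fy = @(t,w) -2*w;        % partial derivative f,y
h = 0.1; t = 0; w = 0;
for i = 1:10
    w = w + h*f(t,w) + 0.5*h^2*(ft(t,w) + fy(t,w)*f(t,w));
    t = t + h;
end
w                        % w_10 = 0.502675, relative error 0.13%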
Higher-order derivatives Consider (for simplicity) the autonomous differential equation ẏ = f(y).
Then
ẏ = f(y);
ÿ = f′(y) f(y);
y⁽³⁾ = f″(y) f(y)² + f′(y)² f(y);
y⁽⁴⁾ = f‴(y) f(y)³ + 4 f″(y) f′(y) f(y)² + f′(y)³ f(y);
y⁽⁵⁾ = f⁗(y) f(y)⁴ + 7 f‴(y) f′(y) f(y)³ + 4 f″(y)² f(y)³ + 11 f″(y) f′(y)² f(y)² + f′(y)⁴ f(y).
Higher-order Taylor methods Higher-order Taylor methods can be constructed from these derivatives, e.g.
third-order:
wi+1 = wi + hi f(wi) + (hi²/2) f′(wi) f(wi) + (hi³/6) (f″(wi) f(wi)² + f′(wi)² f(wi)).
21 / 89
22 / 89
Runge-Kutta Methods 23 / 89
Midpoint method
Midpoint method Expect the derivative ẏ(t + h/2) at the midpoint of the interval [t, t + h] to give a better
approximation to the average change (y(t + h) − y(t))/h than the derivative ẏ(t) at the left endpoint.
Can show
y(t + h) = y(t) + h ẏ(t) + O(h²);   y(t + h) = y(t) + h ẏ(t + h/2) + O(h³).
Would like to use the approximation
y(t + h) ≈ y(t) + h f(t + h/2, y(t + h/2)).
Problem: we don’t know y(t + h/2).
24 / 89
Midpoint method
25 / 89
Midpoint method
26 / 89
Midpoint method
27 / 89
Midpoint method
Example For ẏ = e^{−t} − y², y(0) = 0, approximating y(1) = 0.503347 (6 s.f.) with different step sizes gives
n 1 2 5 10
h 1.0 0.5 0.2 0.1
y(1.0) ≈ wn 0.356531 0.480228 0.500418 0.502666
error 29% 4.6% 0.58% 0.14%
The midpoint method is second-order, with global errors O(h2 ).
28 / 89
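A Matlab sketch of the midpoint method (assuming the standard explicit midpoint rule, with h = 0.1 to match the last column above):

f = @(t,w) exp(-t) - w^2;
h = 0.1; t = 0; w = 0;
for i = 1:10
    k1 = f(t, w);                  % slope at the left endpoint
    k2 = f(t + h/2, w + h/2*k1);   % slope at the estimated midpoint
    w = w + h*k2;
    t = t + h;
end
w                                  % w_10 = 0.502666, error 0.14%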
Modified Euler (trapezoid) method
wi+1 = wi + ½hi (f(ti, wi) + f(ti + hi, wi + hi f(ti, wi))).
Example For ẏ = e^{−t} − y², y(0) = 0, approximating y(1) = 0.503347 (6 s.f.) with different step sizes gives
n 1 2 5 10
h 1.0 0.5 0.2 0.1
wn 0.183940 0.468458 0.499972 0.502639
error 63% 6.9% 0.67% 0.14%
29 / 89
30 / 89
Ralston’s method
Taking α = 2/3 gives a2 = 3/4, a1 = 1/4 and
wi+1 = wi + ¼hi (f(ti, wi) + 3 f(ti + ⅔hi, wi + ⅔hi f(ti, wi)))
31 / 89
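A sketch of a single Ralston step in Matlab (with h = 0.5; the value w1 ≈ 0.3520 is my own computation, not from the handout):

f = @(t,w) exp(-t) - w^2;
h = 0.5; t0 = 0; w0 = 0;
k1 = f(t0, w0);
w1 = w0 + h/4*(k1 + 3*f(t0 + 2*h/3, w0 + 2*h/3*k1))   % ~0.3520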
Runge-Kutta Methods
32 / 89
33 / 89
Third-order Runge-Kutta methods
34 / 89
35 / 89
36 / 89
Fourth-order Runge-Kutta methods
37 / 89
38 / 89
Example ẏ(t) = e^{−t} − y², y(0) = 0. Take h = 0.5. Take two steps. Calculate:
i ti wi ki,1 ki,2 ki,3 ki,4 ki
0 0.0 0.000000 0.500000 0.358150 0.373366 0.233564 0.366100
1 0.5 0.366100 0.236251 0.118946 0.145627 0.053008 0.136401
2 1.0 0.502501
Exact value y(1) = 0.503347 (6 dp ); relative error 0.17%.
39 / 89
Fourth-order Runge-Kutta methods
Example Take h = 0.2.
k0,1 = h0 f(t0, w0) = 0.2 f(0.0, 0.00000) = 0.2 × 1.00000 = 0.20000
k0,2 = h0 f(t0 + ½h0, w0 + ½k0,1) = 0.2 f(0.1, 0.10000) = 0.2 × 0.89484 = 0.17897
k0,3 = h0 f(t0 + ½h0, w0 + ½k0,2) = 0.2 f(0.1, 0.08948) = 0.2 × 0.89683 = 0.17937
k0,4 = h0 f(t0 + h0, w0 + k0,3) = 0.2 f(0.2, 0.17937) = 0.2 × 0.78656 = 0.15731
k0 = ⅙(k0,1 + 2k0,2 + 2k0,3 + k0,4) = ⅙(0.20000 + 2×0.17897 + 2×0.17937 + 0.15731) = 0.17900
w1 = w0 + k0 = 0.00000 + 0.17900 = 0.17900
40 / 89
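A Matlab sketch of the classical fourth-order Runge-Kutta method (with h = 0.2, as in the worked step above):

f = @(t,w) exp(-t) - w^2;
h = 0.2; t = 0; w = 0;
for i = 1:5
    k1 = h*f(t, w);
    k2 = h*f(t + h/2, w + k1/2);
    k3 = h*f(t + h/2, w + k2/2);
    k4 = h*f(t + h,   w + k3);
    w = w + (k1 + 2*k2 + 2*k3 + k4)/6;   % weighted average of slopes
    t = t + h;
end
w   % first step gives w_1 = 0.17900 as above; w_5 approximates y(1) = 0.503347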
41 / 89
Low order methods For n = 1, 2, 3, 4, there are methods of order n requiring n function evaluations per time
step.
Higher order methods At least six function evaluations are needed for a fifth-order method.
Even more evaluations are needed for higher orders.
Choice of order Fourth-order methods usually provide a good balance between accuracy and efficiency.
42 / 89
Comparison of Runge-Kutta methods
Comparison ẏ(t) = e^{−t} − y², y(0) = 0.
43 / 89
Multistep Methods 44 / 89
Multistep methods
45 / 89
Adams-Bashforth methods
Derivation
y(ti + hi) = y(ti) + ∫[ti, ti+hi] ẏ(τ) dτ = y(ti) + ∫[ti, ti+hi] f(τ, y(τ)) dτ
Find a polynomial p(τ) approximating f(τ, y(τ)) over [ti, ti + hi].
Estimate p(τ) by interpolating f(τ, y(τ)) at τ = ti, ti−1, ti−2, . . . , ti−k+1.
Linear interpolation at (ti, wi) and (ti−1, wi−1) gives
p(τ) = f(ti, wi) + ((τ − ti)/hi−1) (f(ti, wi) − f(ti−1, wi−1))
Then for equally-sized steps,
y(ti+1) ≈ wi+1 = wi + ∫[ti, ti+h] (f(ti, wi) + ((τ − ti)/h) (f(ti, wi) − f(ti−1, wi−1))) dτ
= wi + h f(ti, wi) + ½h (f(ti, wi) − f(ti−1, wi−1))
= wi + ½h (3f(ti, wi) − f(ti−1, wi−1))
46 / 89
Adams-Bashforth methods
Local error E = (5/12) h³ y⁽³⁾(τ) for some τ ∈ (ti−1, ti+1).
Global error O(h²); second-order.
47 / 89
Bootstrapping The first step(s) of a multistep method require wi for i < 0, which are not available.
e.g. the two-stage Adams-Bashforth method:
w1 = w0 + ½h (3f(t0, w0) − f(t−1, w−1))
Use a Runge-Kutta method of the same order (or higher) to start the multistep method.
48 / 89
Multistep methods
Example Solve ẏ(t) = e^{−t} − y², y(0) = 0 at t = 1 using the two-stage Adams-Bashforth method with
h = 0.5. Bootstrap using Ralston’s second-order method:
f(t0, w0) = f(0.0, 0.0000) = 1.0000,
f(t0 + ⅔h0, w0 + ⅔h0 f(t0, w0)) = f(0.3333, 0.3333) = 0.6054,
49 / 89
Multistep methods
Example Solve ẏ(t) = e^{−t} − y², y(0) = 0 at t = 1 using the two-stage Adams-Bashforth method with
h = 0.1.
Bootstrap using Ralston’s second-order method:
w1 = w0 + ¼h0 (f(t0, w0) + 3f(t0 + ⅔h0, w0 + ⅔h0 f(t0, w0))).
ti wi fi = f (ti , wi ) ti wi fi = f (ti , wi )
0.0 0.000000 1.000000 0.5 0.366485 0.472220
0.1 0.094830 0.895845 0.6 0.408752 0.381734
0.2 0.179206 0.786616 0.7 0.442401 0.300867
0.3 0.252407 0.677109 0.8 0.468444 0.229889
0.4 0.314642 0.571320 0.9 0.487884 0.168539
1.0 0.501670
So y(1.0) ≈ w10 = 0.501670.
Absolute error 0.0017, relative error 0.33%.
50 / 89
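A Matlab sketch of the two-stage Adams-Bashforth method with Ralston bootstrap (matching the table above, h = 0.1):

f = @(t,w) exp(-t) - w^2;
h = 0.1; t = 0:h:1; w = zeros(1,11);
k1 = f(t(1), w(1));   % bootstrap w(2) with one Ralston step
w(2) = w(1) + h/4*(k1 + 3*f(t(1)+2*h/3, w(1)+2*h/3*k1));
for i = 2:10          % w_{i+1} = w_i + h/2 (3 f_i - f_{i-1})
    w(i+1) = w(i) + h/2*(3*f(t(i),w(i)) - f(t(i-1),w(i-1)));
end
w(11)                 % w_10 = 0.501670, relative error 0.33%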
Implicit methods 51 / 89
Implicit methods
Idea Use an extra forward interpolation condition ẏ(ti+1) = f(ti+1, wi+1).
The formula for wi+1 then depends on f(ti+1, wi+1), leading to an algebraic equation for wi+1 which has to be
solved.
52 / 89
Adams-Moulton methods
Two-stage interpolation Interpolate (ti−1, f(ti−1, wi−1)), (ti, f(ti, wi)) and (ti+1, f(ti+1, wi+1)).
Assume equal step size h.
p(τ) = f(ti, wi) + ((τ − ti)/(2h)) (f(ti+1, wi+1) − f(ti−1, wi−1))
+ ((τ − ti)²/(2h²)) (f(ti+1, wi+1) − 2f(ti, wi) + f(ti−1, wi−1))
Then
wi + ∫[ti, ti+h] p(τ) dτ = wi + h f(ti, wi) + ¼h (f(ti+1, wi+1) − f(ti−1, wi−1))
+ ⅙h (f(ti+1, wi+1) − 2f(ti, wi) + f(ti−1, wi−1))
so
wi+1 = wi + (1/12) h (5f(ti+1, wi+1) + 8f(ti, wi) − f(ti−1, wi−1))
53 / 89
Adams-Moulton methods
Explicit versus implicit methods Implicit multistep methods have higher accuracy than explicit methods of
the same order. e.g.
Adams-Bashforth three-stage method local error:
E = (3/8) y⁽⁴⁾(τ) h⁴ for some τ ∈ (ti−2, ti+1).
Adams-Moulton two-stage method local error:
E = −(1/24) y⁽⁴⁾(τ) h⁴ for some τ ∈ (ti−1, ti+1).
54 / 89
Predictor-corrector approach
Use an estimate w̃i+1 on the right-hand side of the Adams-Moulton formula, e.g.
wi+1 = wi + (1/12) h (5f(ti+1, w̃i+1) + 8f(ti, wi) − f(ti−1, wi−1)).
Typically use an explicit Adams-Bashforth method with the same number of stages (and lower order) to obtain
w̃i+1, e.g.
w̃i+1 = wi + ½h (3f(ti, wi) − f(ti−1, wi−1)).
Theorem The combination of an n-stage Adams-Bashforth method (order n) as a predictor with an n-stage
Adams-Moulton method (order n + 1) as a corrector has order n + 1.
Alternatively, can use an (n + 1)-stage Adams-Bashforth method and an n-stage Adams-Moulton method to
estimate the error, e.g.
w̃i+1 = wi + (1/12) h (23f(ti, wi) − 16f(ti−1, wi−1) + 5f(ti−2, wi−2)).
May iterate the corrector formula to improve accuracy, but the improvement is usually not worth the additional
effort.
55 / 89
Predictor-corrector approach
Adams-Moulton formula
wi+1 = wi + (1/12) h (5f(ti+1, w̃i+1) + 8f(ti, wi) − f(ti−1, wi−1))
Corrector
w2 = w1 + (1/12) h (5f(t2, w̃2) + 8f(t1, w1) − f(t0, w0))
= 0.0949 + (1/12)×0.1×(5f(0.2, 0.1792) + 8f(0.1, 0.0949) − f(0.0, 0.0000))
= 0.0949 + (1/12)×0.1×(5×0.7866 + 8×0.8958 − 1.0000)
= 0.17901896.
Applying the corrector again gives 0.17902212.
Repeated application of the corrector converges to 0.17902207.
56 / 89
Predictor-corrector approach
w4 = w3 + (1/12) h (5f(t4, w̃4) + 8f(t3, w3) − f(t2, w2))
= 0.2522 + (1/12)×0.1×(5f(0.4, 0.3144) + 8f(0.3, 0.2522) − f(0.2, 0.1790))
= 0.2522 + (1/12)×0.1×(5×0.5714 + 8×0.6772 − 0.7867) = 0.31461683.
57 / 89
Predictor-corrector approach
Example Solve ẏ(t) = e^{−t} − y², y(0) = 0 at t = 1 using the two-stage Adams-Bashforth-Moulton
predictor-corrector method with h = 0.1.
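A Matlab sketch of the predictor-corrector loop (assumptions: AB2 predictor, AM2 corrector, Ralston bootstrap):

f = @(t,w) exp(-t) - w^2;
h = 0.1; t = 0:h:1; w = zeros(1,11);
k1 = f(t(1), w(1));   % bootstrap w(2) with one Ralston step
w(2) = w(1) + h/4*(k1 + 3*f(t(1)+2*h/3, w(1)+2*h/3*k1));
for i = 2:10
    wp = w(i) + h/2*(3*f(t(i),w(i)) - f(t(i-1),w(i-1)));   % predictor
    w(i+1) = w(i) + h/12*(5*f(t(i+1),wp) ...               % corrector
                        + 8*f(t(i),w(i)) - f(t(i-1),w(i-1)));
end
w(11)   % approximation to y(1) = 0.503347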
Fully-implicit approach
Try to solve the implicit equation for wi+1 exactly. Need to solve for z = wi+1 in
z = wi + (1/12) h (5f(ti+1, z) + 8f(ti, wi) − f(ti−1, wi−1)),
or equivalently
gi+1(z) = (5/12) h f(ti+1, z) − z + wi + (1/12) h (8f(ti, wi) − f(ti−1, wi−1)) = 0.
Typically use the secant method, starting with wi and w̃i+1, or Newton’s method, which requires computing f,y
since g′(z) = (5/12) h f,y(ti+1, z) − 1.
59 / 89
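A sketch of one fully-implicit step in Matlab, solving gi+1(z) = 0 with the built-in root-finder fzero (an assumption; the handout suggests the secant or Newton’s method):

f = @(t,w) exp(-t) - w^2;
h = 0.1; w0 = 0.0; w1 = 0.09485432;   % w1 as in the exercise below
g = @(z) 5/12*h*f(0.2,z) - z + w1 + h/12*(8*f(0.1,w1) - f(0.0,w0));
w2 = fzero(g, w1)   % w2 = 0.17902207, the converged corrector value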
60 / 89
Gear’s methods (Non-examinable)
61 / 89
Exercise Take f(t, y) = e^{−t} − y², h = 0.1, w0 = 0.0, w1 = 0.09485432. Find w2 using the
backward-difference method.
Solve for w2 = z in
g2(z) = (4/3)w1 − (1/3)w0 + (2/3)h f(t2, z) − z = 0
62 / 89
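A sketch of the exercise in Matlab (again assuming fzero as the root-finder):

f = @(t,w) exp(-t) - w^2;
h = 0.1; w0 = 0.0; w1 = 0.09485432;
g2 = @(z) 4/3*w1 - 1/3*w0 + 2/3*h*f(0.2,z) - z;
w2 = fzero(g2, w1)   % the backward-difference (Gear) value of w2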
Stability 63 / 89
Stability analysis
Stability criteria Consider the test equation ẏ = λy with λ < 0, whose exact solutions decay to zero.
A method is unconditionally stable if wi → 0 as i → ∞ for all hλ < 0.
A method is limit-stable if wi+1/wi → 0 as hλ → −∞.
65 / 89
Stability analysis
66 / 89
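As a sketch of how such an analysis proceeds (the standard computation for Euler’s method, stated here as an illustration): applying wi+1 = wi + h f(ti, wi) to the test equation ẏ = λy gives
wi+1 = wi + hλ wi = (1 + hλ) wi, so wi = (1 + hλ)ⁱ w0.
Hence wi → 0 iff |1 + hλ| < 1, i.e. −2 < hλ < 0: Euler’s method is only conditionally stable.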
Stability analysis
67 / 89
Stability analysis
68 / 89
Stability analysis
69 / 89
Theorem (Dahlquist barrier) An implicit multistep method can only be unconditionally stable if its order is at
most 2.
70 / 89
Stiff systems
Idea A stiff system contains fast, stable variables, which behave similarly to ẏi = λyi for λ ≪ 0.
These variables converge quickly to a pseudo-equilibrium, after which the dynamics is determined by the slow
variables.
Explicit methods are unstable for moderate step sizes, so stiff systems require either implicit methods or very
small step sizes.
71 / 89
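A sketch (an assumed example, not from the handout) of handling a stiff problem with Matlab’s implicit stiff solver ode15s:

f = @(t,y) -50*(y - cos(t));   % fast, stable variable: lambda = -50
[t,w] = ode15s(f, [0 1], 0);   % implicit method, suited to stiff systems
% ode45 would need much smaller steps here to remain stable.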
Stiff systems
72 / 89
Multivariable systems 73 / 89
Multivariable systems
74 / 89
Multivariable systems
Second-order systems
76 / 89
Second-order systems
Example The motion of a small satellite around a planet of mass M is defined by the differential equation
ẍ = −(GM/‖x‖³) x =: g(x)
Introduce the velocity v = ẋ. The state vector y is given by y = (x, v).
In components,
x = (x1, x2);   v = (v1, v2) = (ẋ1, ẋ2);   y = (x1, x2, v1, v2).
The evolution of y is given by
ẏ = (ẋ, v̇) = (v, g(x))
Formulation in Matlab
% assumes the constants G and M are defined; the '?' entries are the
% problem-specific initial position and velocity components
g=@(x) (-G*M/norm(x)^3)*x;      % gravitational acceleration g(x)
f=@(t,y) [y(3:4); g(y(1:2))];   % state vector y = [x1; x2; v1; v2]
x0=[?;?]; v0=[?;?]; y0=[x0;v0];
78 / 89
Adaptive methods 79 / 89
Adaptive methods
Combine an order-n method y(t + h) ≈ y(t) + k with an order-(n − 1) method y(t + h) ≈ y(t) + κ.
Expect
|y(t + h) − (y + k)| ≪ |y(t + h) − (y + κ)| ≈ |(y + k) − (y + κ)| = |k − κ|.
A tolerance of ε means that the error of a single step of size h should be at most εh.
A step is acceptable if the error estimate of the embedded lower-order method satisfies
|k − κ| ≤ εh.
Matlab Matlab’s ode methods are all adaptive methods. The ode23 method is the Bogacki-Shampine method.
80 / 89
Adaptive methods
If |k − κ| > εh, the step is rejected and a smaller step size qh is attempted instead.
If |k − κ| ≤ εh, the step is accepted, and a larger step size of max{h, qh} is used for the next step.
For an order-n method with step size qh, if the error for step size h is approximately |k − κ|, expect single-step
error |k − κ| qⁿ⁺¹.
Require the error for step size qh to be less than qhε, so |k − κ| qⁿ⁺¹ ≤ qhε.
When taking a step of size h, set
q = (εh / (s|k − κ|))^(1/n) ≤ (εh / |k − κ|)^(1/n)
Here, s is a safety factor, typically s = 2.
81 / 89
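A Matlab sketch of this step-size control (assumptions: an embedded Euler/midpoint pair with n = 2, tolerance tol, safety factor s = 2):

f = @(t,w) exp(-t) - w^2;
tol = 1e-4; s = 2; n = 2;   % tolerance, safety factor, method order
t = 0; w = 0; h = 0.5;
while t < 1
    h = min(h, 1 - t);                % do not step past t = 1
    k1 = f(t, w);
    k   = h*f(t + h/2, w + h/2*k1);   % order-2 (midpoint) increment
    kap = h*k1;                       % order-1 (Euler) increment
    q = (tol*h/(s*abs(k - kap)))^(1/n);
    if abs(k - kap) <= tol*h          % accept, and perhaps grow h
        t = t + h; w = w + k; h = max(h, q*h);
    else                              % reject, and shrink h
        h = q*h;
    end
end
w   % adaptive approximation to y(1)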
82 / 89
Bogacki-Shampine method (Advanced)
83 / 89
84 / 89
Bogacki-Shampine method (Advanced)
85 / 89
Hence
ti+1 = ti + hi where hi = ½hi−1
wi+1 = wi + ¼hi (5f(ti, wi) − f(ti−1, wi−1))
86 / 89
Global errors 87 / 89
Global errors
Growth of global errors If ẏ = λy, then y(t) = y(0)e^{λt}, so an error of size ε at time 0 becomes ε e^{λt} at time
t.
In general,
(d/dt)(y1(t) − y2(t)) = f(t, y1(t)) − f(t, y2(t)) = f,y(t, η) (y1(t) − y2(t))
for some η between y1(t) and y2(t).
Error bound Combine global growth of error with the local error estimate:
Theorem If |f,y(τ, y(τ))| ≤ L and the local error is C |f⁽ⁿ⁾(τ, y(τ))| hⁿ⁺¹, then at times tn ∈ [0, T],
|wn − y(tn)| ≤ CM ((exp(LT) − 1)/L) hⁿ, where M = sup_{τ∈[0,T]} |f⁽ⁿ⁾(τ, y(τ))|.
88 / 89
Global errors
89 / 89