It is an interesting exercise to solve equation (11) by means of methods (a) to (g). One can again use the above
program, but the functions f, f2, f3 and f4 must now be defined by the relations corresponding to equation (11).
Do you get the same results as before? If not, why not? Are the errors larger or smaller?
QUESTION 51
Solve the initial-value problem y' = y sin(πx), y(0) = 1: get starting values by the Runge–Kutta–Fehlberg method
for x = 0.2, x = 0.4 and x = 0.6, and then advance the solution to x = 1.0 by Milne's method and by the
Adams–Moulton method.
SOLUTION:
Given
  y' = y sin(πx),  y(0) = 1.
Analytical solution
Separate the variables:
  dy/dx = y sin(πx)
  ⇒ dy/y = sin(πx) dx  ⇒  ∫ dy/y = ∫ sin(πx) dx
  ⇒ ln|y| = −(1/π) cos(πx) + c
  ⇒ y = ±e^(−(1/π) cos(πx) + c),
and the particular solution which satisfies the initial value y(0) = 1 is then
  y(x) = e^((1 − cos(πx))/π).
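As a quick check of this solution (an added verification step, not part of the original text): differentiating gives
  d/dx e^((1 − cos(πx))/π) = e^((1 − cos(πx))/π) · sin(πx) = y sin(πx),
and y(0) = e^((1 − 1)/π) = e^0 = 1, so both the differential equation and the initial condition are satisfied.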
Numerical solution
The Adams–Moulton method is a multistep method, i.e. more than one past value is used to approximate the new
value. In Gerald, four multistep methods are discussed.
(1) The 3rd order Adams method has a local error of order 4 and a global error of order 3:
  yi+1 = yi + (h/12)(23 fi − 16 fi−1 + 5 fi−2).
(2) The 4th order Adams method (also called the Adams–Bashforth method) is
  yi+1 = yi + (h/24)(55 fi − 59 fi−1 + 37 fi−2 − 9 fi−3).
(3) Milne's method is
  yp = yi−3 + (4h/3)(2 fi − fi−1 + 2 fi−2)
  yi+1 = yi−1 + (h/3)( f(xi + h, yp) + 4 fi + fi−1 ).
(4) The Adams–Moulton method (also called the Adams predictor–corrector method) is
  yp = yi + (h/24)(55 fi − 59 fi−1 + 37 fi−2 − 9 fi−3)
  yi+1 = yi + (h/24)( 9 f(xi + h, yp) + 19 fi − 5 fi−1 + fi−2 ).
The last three methods all have a local error of order h⁵ and a global error of order h⁴, but Milne's method is
sometimes unstable, and the error constant for the Adams–Moulton method is smaller than that of the 4th order
Adams method (19/720 as opposed to 251/720).
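For reference, the local truncation error terms behind the constants quoted above are the standard ones (added
here for completeness):
  4th order Adams (Adams–Bashforth):  (251/720) h⁵ times the fifth derivative of y at an intermediate point,
  Adams–Moulton corrector:            −(19/720) h⁵ times the fifth derivative of y at an intermediate point,
  Milne corrector:                    −(1/90) h⁵ times the fifth derivative of y at an intermediate point.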
See the discussion in Gerald of the relative merits of these multistep methods and the Runge–Kutta methods.
Milne’s method and the Adams–Moulton method both need a set of four starting values already calculated by some
other technique, usually by one of the single–step methods. The initial one–step method should be as accurate as
possible, and at the very least should not have an order of error lower than that of the multi–step method we are
planning to use!
In this problem, the initial value at x = 0 is given, and we will use the Runge–Kutta–Fehlberg method with h = 0.2
to find the additional starting values at x = 0.2, x = 0.4 and x = 0.6. (The Runge–Kutta–Fehlberg method has
a local error of order h⁶, so it is more accurate than the Adams–Moulton and Milne methods.) After that, we will
use the multi–step methods to advance the solution to x = 0.8 and x = 1.0. For comparison, we have also run the
program when the four starting values are exact values.
The program and results follow. You should make sure that you can reproduce the table of results by using a
calculator.
PROGRAM AS00_2_2(output);
CONST
xstart = 0.0;
ystart = 1.0;
xend = 1.0;
h = 0.2;
i_end = round((xend - xstart)/h);
TYPE
result = array[0..i_end] of real;
VAR
x, y, exact, ff: result;
fst : text;
{ The declarations of the function f(x, y), the exact solution yexact(x) and the
  Runge-Kutta-Fehlberg step RKF(x, y) are not reproduced here; a sketch of these
  routines is given after the listing. }
PROCEDURE Initialise(VAR x, y, ff : result);
{ Procedure header assumed from the call in the main program; exact is global. }
VAR i : integer;
BEGIN
FOR i := 0 to i_end DO
BEGIN
x[i] := xstart + i*h;
exact[i] := yexact(x[i]);
END;
y[0] := ystart;
ff[0] := f(xstart,ystart);
FOR i := 1 to 3 DO
BEGIN
y[i] := RKF(x[i - 1], y[i - 1]);
{ y[i] := exact[i]; }
ff[i] := f(x[i],y[i]);
END; {For i}
FOR i := 4 to i_end DO
BEGIN
y[i] := 0.0;
ff[i] := 0.0;
END; {For i}
END; {Initialise}
PROCEDURE Milne(VAR x, exact, y, ff : result);
{ Procedure header assumed from the call in the main program. }
VAR i : integer;
    y_pred, f_pred, error : real;
BEGIN
writeln(fst);
writeln(fst, ' MILNE METHOD:'); writeln(fst);
writeln(fst,' x exact y y_pred ',
        ' y error');
writeln(fst,' ',xstart:3:1,
        ' ',ystart:14:11);
FOR i := 1 to 3 DO
BEGIN
  error := exact[i] - y[i];
  writeln(fst,' ',x[i]:3:1,
          ' ',exact[i]:14:11,
          ' ',y[i]:14:11, ' ', error:14:11);
END;
FOR i := 4 to i_end DO
BEGIN
  { Milne predictor }
  y_pred := y[i - 4] + (4*h/3)*
            (2*ff[i - 1] - ff[i - 2]
             + 2*ff[i - 3]);
  f_pred := f(x[i], y_pred);
  { Milne corrector }
  y[i] := y[i - 2] + (h/3)*(f_pred +
          4*ff[i - 1] + ff[i - 2]);
  ff[i] := f(x[i], y[i]);
  error := exact[i] - y[i];
  writeln(fst,' ',x[i]:3:1,' ',
          exact[i]:14:11,
          ' ', y_pred:14:11,' ', y[i]:14:11,
          ' ',error:14:11);
END; {For i}
END; {Milne}
PROCEDURE A_M(VAR x, exact, y, ff : result);
{ Procedure header assumed from the call in the main program. }
VAR i : integer;
    y_pred, f_pred, error : real;
BEGIN
writeln(fst);
writeln(fst, ' ADAMS-MOULTON METHOD:'); writeln(fst);
writeln(fst,' x exact y y_pred ',
        ' y error');
writeln(fst,' ',xstart:3:1,
        ' ',ystart:14:11);
FOR i := 1 to 3 DO
BEGIN
  error := exact[i] - y[i];
  writeln(fst,' ',x[i]:3:1,
          ' ',exact[i]:14:11,
          ' ',y[i]:14:11, ' ', error:14:11);
END;
FOR i := 4 to i_end DO
BEGIN
  { Adams-Bashforth predictor }
  y_pred := y[i - 1] + (h/24)*
            (55*ff[i - 1] - 59*ff[i - 2]
             + 37*ff[i - 3] - 9*ff[i - 4]);
  f_pred := f(x[i], y_pred);
  { Adams-Moulton corrector }
  y[i] := y[i - 1] + (h/24)*(9*f_pred +
          19*ff[i - 1] - 5*ff[i - 2] + ff[i-3]);
  ff[i] := f(x[i], y[i]);
  error := exact[i] - y[i];
  writeln(fst,' ',x[i]:3:1,' ',
          exact[i]:14:11,
          ' ', y_pred:14:11,' ', y[i]:14:11,
          ' ',error:14:11);
END; {For i}
END; {A_M}
BEGIN {Program}
assign(fst,'c:\tp\work\A00_2_2a.DAT');
rewrite(fst);
Initialise(x, y, ff);
Milne(x, exact, y, ff);
A_M(x, exact, y, ff);
close(fst);
END. {Program}
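The function f, the exact solution yexact and the Runge–Kutta–Fehlberg step RKF are called in the program but
their definitions are not reproduced above. A minimal sketch consistent with those calls follows; the use of the
standard Fehlberg 4(5) coefficients, the single fixed step of size h, and the return of the fifth-order estimate
are assumptions.

FUNCTION f(x, y : real) : real;
{ Right-hand side of the differential equation y' = y*sin(pi*x). }
BEGIN
  f := y*sin(pi*x);
END; {f}

FUNCTION yexact(x : real) : real;
{ Analytical solution y(x) = exp((1 - cos(pi*x))/pi). }
BEGIN
  yexact := exp((1 - cos(pi*x))/pi);
END; {yexact}

FUNCTION RKF(x, y : real) : real;
{ One Runge-Kutta-Fehlberg step of size h from (x, y); the fifth-order
  estimate is returned (assumed; the step size h is the global constant). }
VAR k1, k2, k3, k4, k5, k6 : real;
BEGIN
  k1 := h*f(x, y);
  k2 := h*f(x + h/4, y + k1/4);
  k3 := h*f(x + 3*h/8, y + 3*k1/32 + 9*k2/32);
  k4 := h*f(x + 12*h/13, y + 1932*k1/2197 - 7200*k2/2197 + 7296*k3/2197);
  k5 := h*f(x + h, y + 439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104);
  k6 := h*f(x + h/2, y - 8*k1/27 + 2*k2 - 3544*k3/2565
            + 1859*k4/4104 - 11*k5/40);
  RKF := y + 16*k1/135 + 6656*k3/12825 + 28561*k4/56430
           - 9*k5/50 + 2*k6/55;
END; {RKF}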
[Program output: the tables of x, the exact value, the predicted value, the corrected value and the error for the
Milne and Adams–Moulton methods, first with the Runge–Kutta–Fehlberg starting values and then with the exact
starting values, are not reproduced here.]
SEMESTER 1
ASSIGNMENT 02
QUESTION 1
Solve the boundary-value problem
  y'' = 4xy − x²y' + 2x³ + 6x² − 2,  y(0) = 0,  y(1) = 2,
by using the shooting method. Use the modified Euler method (with only one correction at each step),
and take h = 0.2. Start with an initial slope of y'(0) = 1.9 as a first attempt and y'(0) = 2.1 as a second
attempt. Then interpolate.
Compare the result with the analytical solution y = x⁴ − x² + 2x.
SOLUTION
To solve the boundary-value problem
  y'' = 4xy − x²y' + 2x³ + 6x² − 2,  y(0) = 0,  y(1) = 2,  (1)
by the shooting method, we use the modified Euler method with h = 0.2 and compare the result with the analytical
solution y = x⁴ − x² + 2x.
Set
  z = dy/dx = y',
then
  z' = dz/dx = y''.
The second–order differential equation (1) can thus be written as a system of two coupled first–order
differential equations:
  y' = z  (2)
  z' = 4xy − x²z + 2x³ + 6x² − 2 = g(x, y, z)  (3)
To solve (2)–(3) we must estimate z(0) = y'(0). For a chosen estimate z0 the algorithm is
Predictor:
  zp = zi + h zi'
  yp = yi + h yi'
Corrector:
  zi+1 = zi + h [zi' + z'(xi + h, yp, zp)]/2
  yi+1 = yi + h [yi' + y'(xi + h, yp, zp)]/2
Sequence of calculations
  x0 = 0
  y0 = 0
  z0 = estimate
  zi' = g(xi, yi, zi)
  zp = zi + h zi'
  yi' = zi
  yp = yi + h yi'
  zp' = g(xi + h, yp, zp)
  zi+1 = zi + h (zi' + zp')/2
  yp' = zp
  yi+1 = yi + h (yi' + yp')/2
  xi+1 = xi + h
Note that one could improve the algorithm slightly by using the corrected value zi+1 instead of zp to
calculate yp'.
This procedure is repeated, each time using the previous two estimates, until the difference between 2.0 and the
value of y(1) calculated for the last interpolated estimate z0 is less than a chosen tolerance.
Since (1) is a linear boundary-value problem, we expect the problem to be solved after only one interpolation.
This is confirmed by the results. Note that although the exact values for y(0) and y(1) are obtained, the
intermediate values are not very accurate. This is due to the inaccuracy of the modified Euler method
for the large step size h = 0.2.
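In formula form (an added restatement, with hypothetical names z1 and z2 for the two previous estimates and
y1(1), y2(1) for the corresponding computed end values), the next estimate of the initial slope is obtained from
the straight line through the two previous points:
  z3 = z1 + (z2 − z1)(2.0 − y1(1))/(y2(1) − y1(1)),
which is exactly the interpolation carried out in the program below.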
PROGRAM A3_1(output);
uses printer;
CONST
xinitial = 0.0;
xfinal = 1.0;
yinitial = 0.0;
yfinal = 2.0;
zinit1 = 1.9;
zinit2 = 2.1;
h = 0.2;
tolerance = 1e-7;
jmax = 10;
VAR
x, y, z : real;
i, imax : integer;
j : 1..jmax;
zi, yf : array[1..jmax] of real;
fst : text;
PROCEDURE Calculate(zinitial : real; VAR yend : real);
{ The header, local declarations and most of the body of Calculate fall on a
  page that is not reproduced here; the parameter names above are assumptions.
  A complete sketch of the procedure is given after the listing. }
BEGIN
  x := xinitial;
  y := yinitial;
  z := zinitial;
  writeln(fst);
  { ... remainder of the procedure not reproduced ... }
END; {Calculate}
PROCEDURE PrintExact;
VAR i:integer;
BEGIN
writeln(fst);
writeln(fst,' Exact solution');
writeln(fst,' x y z');
FOR i := 0 to imax DO
BEGIN
x := xinitial + i*h;
y := sqr(x)*(sqr(x) - 1) + 2*x;
z := x*(4*sqr(x) - 2) + 2;
writeln(fst,x:12:6, y:12:6, z:12:6);
END; {for i}
END; {PrintExact}
BEGIN {program}
assign(fst,'c:\apm311\as00_3_1.dat');
rewrite(fst);
writeln(fst); writeln(fst);
writeln(fst,' ***** ASSIGNMENT 3, QUESTION 1',
        ' *****');
imax := round((xfinal - xinitial)/h);
zi[1] := zinit1;
Calculate(zi[1], yf[1]);
zi[2] := zinit2;
Calculate(zi[2], yf[2]);
j := 2;
REPEAT
(* Interpolate the previous two initial values of z
to find the next estimate of zinitial: *)
zi[j + 1] := zi[j - 1] + (zi[j] - zi[j - 1])*
(yfinal - yf[j - 1])/(yf[j] - yf[j - 1]);
j := j + 1;
Calculate(zi[j], yf[j]);
UNTIL (abs(yf[j] - yfinal) < tolerance) or (j = jmax);
writeln(fst,' number of iterations = ',j);
PrintExact;
close(fst);
END.
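The function g and the procedure Calculate are called but not reproduced in the listing. A minimal sketch
consistent with the algorithm described above and with the printed output follows; the parameter name yend, the
local variable names and the exact output formatting are assumptions.

FUNCTION g(x, y, z : real) : real;
{ Right-hand side of z' = y'' = 4xy - x^2 z + 2x^3 + 6x^2 - 2. }
BEGIN
  g := 4*x*y - sqr(x)*z + 2*x*sqr(x) + 6*sqr(x) - 2;
END; {g}

PROCEDURE Calculate(zinitial : real; VAR yend : real);
{ Integrates the system (2)-(3) from x = 0 to x = 1 with the modified Euler
  method for the given initial slope zinitial, prints the results and returns
  the computed y(1) in yend. The globals x, y, z, imax and fst are used as in
  the rest of the program. }
VAR i : integer;
    zdash, zp, zpdash, yp, znew : real;
BEGIN
  x := xinitial;
  y := yinitial;
  z := zinitial;
  writeln(fst);
  writeln(fst, ' zinitial = ', zinitial:10:6);
  writeln(fst, '           x           y           z');
  writeln(fst, x:12:6, y:12:6, z:12:6);
  FOR i := 1 to imax DO
  BEGIN
    zdash  := g(x, y, z);               { z' at the current point }
    zp     := z + h*zdash;              { predictor for z }
    yp     := y + h*z;                  { predictor for y, since y' = z }
    zpdash := g(x + h, yp, zp);         { z' at the predicted point }
    znew   := z + h*(zdash + zpdash)/2; { corrector for z }
    y      := y + h*(z + zp)/2;         { corrector for y: y' at the predicted point is zp }
    z      := znew;
    x      := x + h;
    writeln(fst, x:12:6, y:12:6, z:12:6);
  END; {for i}
  yend := y;
  writeln(fst, ' error in final y = ', (yfinal - y):10:6);
END; {Calculate}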
zinitial = 1.900000
x y z
0.000000 0.000000 1.900000
0.200000 0.340000 1.550000
0.400000 0.619320 1.389509
0.600000 0.894354 1.600711
0.800000 1.257740 2.361848
1.000000 1.837653 3.847881
error in final y = 0.162347
zinitial = 2.100000
x y z
0.000000 0.000000 2.100000
0.200000 0.380000 1.752400
0.400000 0.700278 1.603861
0.600000 1.020087 1.845904
0.800000 1.436781 2.665891
1.000000 2.085070 4.247728
error in final y = -0.085070
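For reference, the interpolated slope printed in the next run follows from the interpolation formula given
earlier, using the two trial runs above:
  z = 1.9 + (2.1 − 1.9)(2.0 − 1.837653)/(2.085070 − 1.837653)
    = 1.9 + 0.2 × 0.162347/0.247417 ≈ 2.031233.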
zinitial = 2.031233
x y z
Exact solution
x y z
0.000000 0.000000 2.000000
0.200000 0.361600 1.632000
0.400000 0.665600 1.456000
0.600000 0.969600 1.664000
0.800000 1.369600 2.448000
1.000000 2.000000 4.000000
QUESTION 2
y'' = 2x²y' + xy + 2,  1 ≤ x ≤ 4.
Taking h = 1, set up the set of equations required to solve the problem by the finite difference method in
each of the following cases of boundary conditions:
SOLUTION
Given
y'' = 2x²y' + xy + 2,  1 ≤ x ≤ 4,  (1)
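The boundary-condition cases and the worked equations for this question are not reproduced beyond this point. As
a sketch of the standard central-difference substitution the question calls for (an assumption about the intended
approach), with h = 1 the derivatives at an interior node xi are replaced by
  y''(xi) ≈ (yi+1 − 2yi + yi−1)/h²,   y'(xi) ≈ (yi+1 − yi−1)/(2h),
so that (1) becomes, at each interior node xi (here xi = 2 and xi = 3),
  (1 − xi²) yi+1 − (2 + xi) yi + (1 + xi²) yi−1 = 2,
to which the equations implied by the particular boundary conditions must be added.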
QUESTION 63
(a) The function e^x is to be approximated by a fifth-order polynomial over the interval [−1, 1]. Why is a
Chebyshev series a better choice than a Taylor (or Maclaurin) expansion?
(b) Given the Chebyshev polynomials
  T0(x) = 1
  T1(x) = x
  T2(x) = 2x² − 1
  T3(x) = 4x³ − 3x
  T4(x) = 8x⁴ − 8x² + 1,
economize the series f(x) = 1 − x + 2x³ − 4x⁴ twice.
(c) Find the Padé approximation R3(x), with numerator of degree 2 and denominator of degree 1, to the function
f(x) = x² + x³.
SOLUTION
(a) Because the maximum error over the whole interval [−1, 1] is smaller: the Chebyshev expansion distributes the
error almost uniformly over the interval, whereas the Taylor (Maclaurin) series has zero error at x = 0 but the
error can be quite large near x = ±1.
(b)
  f(x) = 1 − x + 2x³ − 4x⁴
Economize once: we add/subtract T4 suitably scaled, so that the x⁴-term disappears. Here we must add
  (1/2) T4 = 4x⁴ − 4x² + 1/2,
and we get the economized series
  f`(x) = f(x) + (1/2) T4 = 3/2 − x − 4x² + 2x³.
Economize again: add T3 suitably scaled, so that the x³-term disappears. We must add
  −(1/2) T3(x) = −2x³ + (3/2) x,
and we get
  f``(x) = f`(x) − (1/2) T3(x) = 3/2 + (1/2) x − 4x².
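A remark on the accuracy (an added note, based on the standard bound |Tn(x)| ≤ 1 on [−1, 1]): each economization
step changes the function by at most the magnitude of the coefficient of the Chebyshev polynomial that is added,
so
  |f`(x) − f(x)| ≤ 1/2  and  |f``(x) − f`(x)| ≤ 1/2  on [−1, 1],
and the total error introduced by the two steps is at most 1.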
(c) Write R3(x) = (a0 + a1x + a2x²)/(1 + b1x). To find the four unknown values a0, a1, a2, b1, we must set the
coefficients of x⁰, x¹, x², x³ in the numerator of f(x) − R3(x), that is in
  (x² + x³)(1 + b1x) − (a0 + a1x + a2x²),
equal to zero. This gives us the equations
  −a0 = 0      ⟺  a0 = 0
  −a1 = 0      ⟺  a1 = 0
  1 − a2 = 0   ⟺  a2 = 1
  1 + b1 = 0   ⟺  b1 = −1,
so that
  R3(x) = x²/(1 − x).
[Remarks:
• Note that we should not also set the coefficient of x⁴ to zero! To find the four coefficients, we need 4
equations. To add one more equation would either be unnecessary or else lead to a contradiction.
• Please make sure that you can write down the general form of a Padé approximation R_N(x) with the
numerator and/or the denominator of a given degree! Remember that if the numerator (above the line)
is of degree n (i.e. a polynomial of degree n) and if the denominator (below the line) is of degree m,
then the Padé approximation is denoted by R_N where N = m + n. The constant term of the denominator
(the "b0" term) is always taken to be equal to 1, to ensure that R_N(0) is well defined. The number
of constants to be determined is then equal to N + 1 in the Padé approximation R_N(x). To uniquely
determine those N + 1 constants, we need N + 1 equations, which are obtained by setting the coefficients
of the powers of x, of orders 0 to N, equal to zero in the numerator of f(x) − R_N(x).]
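For reference, the general form described in the remark above is
  R_N(x) = (a0 + a1x + ... + a_n x^n)/(1 + b1x + ... + b_m x^m),   N = n + m,
and, as an added check of the answer in (c),
  x²/(1 − x) = x²(1 + x + x² + ...) = x² + x³ + x⁴ + ...,
which agrees with f(x) = x² + x³ up to and including the x³ term, i.e. in the first N + 1 = 4 coefficients.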