
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, VOL. 21, 1-11 (1985)

ACCELERATED ‘RELAXATION’ OR DIRECT SOLUTION?


FUTURE PROSPECTS FOR FEM

O. C. ZIENKIEWICZ AND R. LÖHNER
Institute for Numerical Methods in Engineering, University College of Swansea, Swansea, U.K.

SUMMARY
‘Dynamic’ or ‘viscous’ relaxation procedures have not gained much popularity in finite element analysis, in which the direct (Gaussian elimination) solution dominates. Reasons for this are various, the most important being the rather slow convergence generally achieved by such procedures. However, it is possible to accelerate this quite dramatically, and a method of doing so is shown in this paper. With the use of such acceleration and the inherent advantages of greatly reduced storage requirements and simplicity of programming, relaxation procedures promise an exciting possibility for the solution of large two- and three-dimensional problems in both linear and nonlinear ranges.

INTRODUCTION
The term ‘relaxation’ was first introduced into the numerical analysis literature by Southwell [1, 2] as a short name for ‘systematic relaxation of constraints’, a process of iteration which permitted a manual solution of the large equation systems associated with finite difference approximations. Indeed, his work opened up the possibility of dealing with reasonably sized problems. With the advent of computers, methods such as relaxation died out, as the machine cost of dealing with an approximation to human intelligence was too large.
A revival of the term, as ‘dynamic relaxation’, was introduced in the early 1960s by Otter and co-workers [3-5]. Here, the name was associated with the solution of such systems as

    K·u + f = 0    (1)

as an end product of the transient dynamic problem

    M·ü + C·u̇ + K·u + f = 0.    (2)

Solutions of (2) by an explicit finite difference scheme required little computer storage and could be programmed easily.
The ‘mass’ and ‘damping’ matrices introduced artificially into equation (2) could be chosen in such a way that a damped convergence of the form shown in Figure 1(a) was obtained. Various authors discuss how optimal structures of M and C are achieved [6-10]. If we set M = 0, the problem is completely viscous and an asymptotic convergence of the type illustrated in Figure 1(b) occurs. This type of ‘viscous’ relaxation, which in the mathematical literature is known as Jacobi iteration [15], has seldom been used in practice (examples are given in References 11 to 14), as it was considered by many to be slow and undesirable. It has, however, the considerable advantage of simplicity. For this reason we shall consider it alone in the rest of this paper,

0029-5981/85/010001-11$01.10    Received 12 August 1983
© 1985 by John Wiley & Sons, Ltd.    Revised 28 November 1983

Figure 1. Convergence behaviour of the (a) hyperbolic type and (b) parabolic type

concentrating on the steady-state solution of

    C·u̇ + K·u + f = 0,    (3)

where, in general, K·u = P(u) may be a nonlinear vector.
The accelerator of the convergence process which we shall introduce owes much to Southwell's ideas, and this paper is dedicated to the memory of both Southwell and Otter.

VISCOUS RELAXATION (VR) VS. DIRECT SOLUTION


In the numerical, explicit solution of equation (3) we shall always assume that C is a diagonal matrix (e.g. taking, for linear cases, C = diag K). The time step algorithm is given by

    Δu = Δt·C⁻¹·ψ(u),    (4)

where

    ψ(u) = −(K·u + f).    (5)

The time step Δt used is limited to be of the order

    Δt ≈ 1/λ_max,    (6)

where λ_max is the highest eigenvalue of the system matrix

    A = C⁻¹·K.    (7)
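As a concrete illustration of the scheme (4)-(6), a few lines of Python suffice. This sketch is ours, not the authors' program; the 1-D Laplacian test matrix, the constant source and the fixed step count are illustrative choices.

```python
import numpy as np

def vr_step(u, K, f, C_diag, dt):
    """One explicit step: du = dt * C^-1 * psi(u), with psi(u) = -(K.u + f)."""
    psi = -(K @ u + f)               # residual, equation (5)
    return u + dt * psi / C_diag     # equation (4), C diagonal

# Illustrative 1-D Laplacian with Dirichlet ends eliminated
n = 20
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = -np.ones(n)                      # constant source term
C_diag = np.diag(K).copy()           # C = diag K, as suggested in the text
dt = 0.5                             # ~ 1/lambda_max, since lambda_max(C^-1 K) < 2 here

u = np.zeros(n)
for _ in range(2000):
    u = vr_step(u, K, f, C_diag, dt)

u_exact = np.linalg.solve(K, -f)     # steady state of (1): K.u + f = 0
err = np.max(np.abs(u - u_exact))
```

With C = diag K, the largest eigenvalue of A = C⁻¹·K for this matrix stays below 2, so Δt = 0.5 respects the limit (6).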
Let us consider the exact solution of (3) for constant K, f, which is

    u = u_∞ + e^(−A·t)·(u₀ − u_∞),    (8)

where u_∞ is the solution of the steady-state problem (1). The matrix A can be decomposed into its eigenmodes as (see Appendix I)

    A = Y·Λ·Y⁻¹,    (9)

where Λ = diag(λ₁, …, λₙ) and Y is the matrix of eigenvectors. Using equation (9), we can now write for equation (8) (see Appendix I):

    u = u_∞ + Y·e^(−Λ·t)·Y⁻¹·(u₀ − u_∞).    (10)


As the importance of high eigenvalues decreases more rapidly than that of the lower ones at large times, the error in satisfying the steady-state solution becomes

    u − u_∞ = a·e^(−λ_min·t),    (11)

where a is a constant vector. This means that the time t needed to achieve a certain accuracy is

    t ~ 1/λ_min.    (12)

The total number of time steps NSTEP required is thus of the order

    NSTEP ~ λ_max/λ_min.    (13)
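The growth of this eigenvalue ratio with N can be checked directly; the sketch below (our illustration, again using a 1-D Laplacian with C = diag K) shows the ratio roughly quadrupling when N doubles, as expected for an operator of order 2m = 2.

```python
import numpy as np

def eig_ratio(n):
    """lambda_max / lambda_min of A = C^-1 K for a 1-D Laplacian with C = diag K."""
    K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    lam = np.linalg.eigvalsh(K) / 2.0     # C = 2I here, so A = K/2 (symmetric)
    return lam[-1] / lam[0]

# Doubling N should roughly quadruple the ratio; the growth factor approaches 4
r10, r20, r40 = eig_ratio(10), eig_ratio(20), eig_ratio(40)
growth = (r20 / r10, r40 / r20)
```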

Let us now consider a typical problem resulting in an equation system like (1) and deriving from a d-dimensional cube (d = 1, 2, 3) with N^d nodes/elements discretizing an operator of order 2m, and determine:

1. The number of operations (NOP) required in the computation of the unknowns u with a given accuracy;
2. The number of storage locations (NSTORE) needed.

In a single timestep:

    NOP ~ N^d and NSTORE ~ N^d.    (14)

Thus, for the total process of reaching a desired accuracy, the number of operations needed by the viscous relaxation process is

    NOP ~ NSTEP·N^d.

For the cube considered, the ratio of eigenvalues λ_max/λ_min is proportional to N^(2m), where 2m is the order of the operator associated with K (assuming an equal element subdivision). Thus, for the total process,

    NSTEP ~ N^(2m),    (15)

and

    NOP ~ N^(2m+d),    (16)

    NSTORE ~ N^d.    (17)

If we contrast this with a conventional Gaussian elimination process, in which a banded type of solution is used directly to solve equation (1), we have

    NOP ~ N^d · NBAND²,    (18)

and, with

    NBAND ~ N^(d−1),    (19)

this gives

    NOP ~ N^(3d−2),    (20)

and

    NSTORE ~ N^d · NBAND ~ N^(2d−1).    (21)
In Table I we compare the NOP and NSTORE orders for a problem of order 2m = 2 (thermal or elasticity problems) for one-, two- and three-dimensional situations.

Figure 2. Number of relaxation steps needed for the model domain (d-dimensional cube)

Table I. Order of NOP and NSTORE for a problem of order 2m = 2

                Direct methods        VR                  AVR
Dimension       NOP     NSTORE        NOP     NSTORE      NOP     NSTORE
1               N       N             N³      N           N²      N
2               N⁴      N³            N⁴      N²          N³      N²
3               N⁷      N⁵            N⁵      N³          N⁴      N³

Although the constants of proportionality cannot be determined, it is evident that relaxation is superior for all problems as far as storage requirements are concerned (thus making possible the use of micro/minicomputers for the solution of significant problems). However, in the operation count (and hence the primary cost on medium/big computers) the superiority only arises in three-dimensional problems. Perhaps this last finding accounts for its limited use by those whose problems are not limited by computer size.
To confirm the above findings ‘experimentally’, the number of steps was counted in a series of solutions of a Poisson equation with a constant source term and Dirichlet boundary conditions, using a regular subdivision. The viscous relaxation was terminated when the residual norm

    ||ψ|| = (ψᵀ·ψ)^(1/2)    (22)

became less than C/N². Figure 2 shows that the predicted result (16) is well reproduced.
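The experiment is easy to reproduce in outline. The following sketch is our reconstruction, not the original computation: it counts viscous-relaxation steps for a 1-D Poisson problem until a criterion of the form (22) is met, with the tolerance constant (written lowercase c to avoid a clash with the damping matrix) and the mesh sizes chosen arbitrarily.

```python
import numpy as np

def vr_steps(n, c=1.0):
    """Steps until ||psi|| = (psi.psi)^(1/2) < c/n^2, cf. equation (22)."""
    K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    f = -np.ones(n)
    C_diag = np.diag(K).copy()
    dt = 0.5                                  # ~ 1/lambda_max for A = C^-1 K
    u = np.zeros(n)
    for step in range(1, 200_000):
        psi = -(K @ u + f)
        if np.sqrt(psi @ psi) < c / n**2:
            return step
        u = u + dt * psi / C_diag
    return None

# Doubling n multiplies the count by roughly 2^2 = 4, matching NSTEP ~ N^2
# (the N-dependent tolerance adds a logarithmic factor, pushing it somewhat higher)
s10, s20 = vr_steps(10), vr_steps(20)
```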

ACCELERATED VISCOUS RELAXATION (AVR)


All users of iterative processes in general, and of Southwell's relaxation methods in particular, know that it is relatively easy to distribute the residuals uniformly, but that the process of their final reduction to zero is slow. In manual methods this could be overcome only by the use of block relaxation or similar overall corrections, after the observation that in each iteration similar changes Δu occur, changing the total residual very slowly.
In viscous relaxation an identical situation arises, the changes Δu in each step being similar in shape. It should therefore be possible to accelerate the process by periodically making an adjustment of greater magnitude in the direction of the Δu-vector.
Writing this for the new value u* as

    u* = u + α·Δu,    (23)

we can seek an estimate for α from the scalar equation

    Wᵀ·ψ(u*) = 0.    (24)

A natural choice for W is

    W = Δu,    (25)

which results, for linearized problems, in

    Δuᵀ·ψ(u + α·Δu) = −Δuᵀ·(K·u + α·K·Δu + f) = 0,    (26)

or

    α_L = Δuᵀ·ψ(u) / (Δuᵀ·K·Δu).    (27)

This line search method can be interpreted in a different manner. We have already observed that after some time of iteration the lowest eigenvalue λ_min predominates and that the solution can be written as

    u = u_∞ + a·e^(−λ_min·t).    (28)

Thus, differentiating,

    Δu = −λ_min·Δt·a·e^(−λ_min·t) = −λ_min·Δt·(u − u_∞),    (29)

so that

    u_∞ = u + (1/(Δt·λ_min))·Δu = u + α·Δu,    (30)

where

    α = 1/(λ_min·Δt).    (31)

To determine α, various procedures can be used. For instance, taking the increment of (29) yields

    Δ(Δu) = −λ_min·Δt·Δu    (32)

and, multiplying by Δuᵀ, we have

    α = −Δuᵀ·Δu / (Δuᵀ·Δ(Δu)).    (33)

This value of α can be shown to be, in fact, identical to α_L (viz. (27)) for linear problems (see Appendix II), but is simpler to evaluate. Further, an immediate indication of the settling down of the solution into the lowest eigenmode (λ_min) is given if the values of α are compared at subsequent time steps. We have found it optimal for linear problems to introduce the correction when two subsequent values of α do not differ by more than 5 per cent, or

    |α_new − α_old| ≤ 0.05·α_new.
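Putting the pieces together, the accelerated scheme can be sketched as follows. This is our reading of the procedure, not the authors' code: plain steps of equation (4), with the jump u* = u + α·Δu applied whenever two successive values of α from equation (33) agree within 5 per cent; the test problem is an illustrative 1-D Laplacian.

```python
import numpy as np

def avr_solve(K, f, dt, tol, max_steps=100_000):
    """Viscous relaxation with periodic acceleration: plain steps of equation (4),
    plus the jump u* = u + alpha*du (equation (23)) whenever two successive
    values of alpha from equation (33) agree within 5 per cent."""
    C_diag = np.diag(K).copy()
    u = np.zeros_like(f)
    du_old, alpha_old = None, None
    for step in range(1, max_steps + 1):
        psi = -(K @ u + f)                       # residual, equation (5)
        if np.sqrt(psi @ psi) < tol:             # stopping criterion, cf. (22)
            return u, step
        du = dt * psi / C_diag                   # equation (4)
        if du_old is not None:
            ddu = du - du_old                    # increment of the increment
            denom = du @ ddu
            if denom < 0.0:
                alpha = -(du @ du) / denom       # equation (33)
                if alpha_old is not None and abs(alpha - alpha_old) <= 0.05 * alpha:
                    u = u + alpha * du           # accelerated jump, equation (23)
                    du_old, alpha_old = None, None
                    continue
                alpha_old = alpha
        u = u + du
        du_old = du
    return u, max_steps

# Illustrative 1-D Laplacian test problem
n = 20
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = -np.ones(n)
u_avr, steps_avr = avr_solve(K, f, dt=0.5, tol=1.0 / n**2)
u_exact = np.linalg.solve(K, -f)
```

The guard on the sign of Δuᵀ·Δ(Δu) simply skips the correction while the estimate of α is not yet meaningful.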

Applying this acceleration process to the examples of Figure 2, we found a considerable reduction in the number of time steps needed to achieve the desired approximation. As one can observe from Figure 2, the order of NSTEP has dropped to NSTEP ~ N in place of NSTEP ~ N², with a consequent reduction of the operations required in AVR, which is also indicated in Table I.
As one can see, the method consists basically in the judicious combination of two algorithms, namely the Jacobi iteration and the steepest descent method [15]. Each algorithm applied by itself is slowly convergent, whereas the combination proposed here speeds up the convergence by an order of magnitude, while retaining the simplicity of the original algorithms.
Another observation to be made is that the evaluation of α from equation (32) is by no means mandatory. Other formulae are obtainable, for example:

1. Via line search (equation (27)).
2. Equation (33).
3. By multiplying equation (32) by Δ(Δu)ᵀ, which amounts to

       α = −Δ(Δu)ᵀ·Δu / (Δ(Δu)ᵀ·Δ(Δu)).    (34)

   This is the so-called Aitken process [12], used by Irons [13] in a similar context for the iterative computation of eigenvalues.
4. By evaluating α from equation (32) component by component. This form, which is particularly easy, was suggested by Karrholm [11] for the iterative computation of beams.

Obviously, other means of computing the acceleration factor are possible, but we have found that, among the possibilities numbered above, the combination of α as given by equation (32) and Δt = 0.8·Δt_max gave the fastest convergence (Δt_max being the maximum allowable time step).
With this acceleration now achieved, it seems advantageous to use AVR instead of direct methods for two-dimensional problems also.

EXAMPLES AND CONCLUSIONS


In the previous discussion we have concentrated on indicating the comparisons in efficiency in
general terms, without regard to programming differences, etc. In Figure 3 we show a comparison
in terms of actual computational time between AVR and a direct solution (using a frontal solver)
for the same linear problems stated in previous sections.
To show that the results are not limited to regular meshes or linear problems, a more realistic example is shown in Figure 4. We show the results for the linear case, the case of an exponential source function

    f = e^u,    (35)

which implies a global softening (for details see Reference 16), and the case of temperature-dependent conductivity

    k = 1 + u.    (36)

The number of steps required to reach the prescribed accuracy was nearly the same for all problems (180-190 steps), indicating a considerable edge over conventional nonlinear solution methods for big problems.

Figure 3. Comparison of actual computer time between AVR and a frontal solver for the two-dimensional model (N² nodes)

Figure 4. Comparison of computation times for a linear/nonlinear problem of heat conduction, ∇·(k∇u) + f = 0 with u = 0 on the boundary Γ. Panels: mesh; linear case (f = 1); nonlinear softening case (f = exp(u)); nonlinear hardening case (k = k(u) = 1 + u). Contour interval 0.02 in each case.

To prove that the proposed method is also applicable to ill-conditioned systems, we considered the problem shown in Figure 5. Convergence was obtained after 1000 steps without any difficulties, showing the robustness of the method.
The convergence rate for AVR, as can be seen from Figure 2, is similar to that attained by the conjugate gradient method (CGM), the number of operations in each step again being similar. The advantage we see in AVR over CGM is that it is not only applicable to equation systems derived by minimization of a functional, but also to other, general systems of equations.

Figure 5. Example of ill-conditioning: problem statement and solution. Uniform source; u fixed on part of the boundary and u,x = 0 elsewhere; equation ∇·(k∇u) + f = 0; discretization of 40 × 2 × 2 simplex elements.
Although the results presented in this paper are preliminary, they confirm the authors' view, held for some time, that for the majority of nonlinear problems on which current interest is focused it may well be necessary to abandon current practices and return to the simpler but more efficient iterative methods. This was indeed observed earlier in many plasticity solutions, where the viscoplastic analysis alternative often offered a more economical solution than direct approaches. The same appears true today in another context.

APPENDIX I: EXPANSION OF e^(−A·t)

If we assemble all the eigenvalue equations

    A·yᵢ = λᵢ·yᵢ    (37)

into one big matrix equation, we obtain

    A·Y = Y·Λ,    (38)

which, upon multiplication from the right by Y⁻¹, becomes

    A = Y·Λ·Y⁻¹.    (39)

Now expand e^(−A·t) into a series:

    e^(−A·t) = I − A·t + (A·t)²/2! − (A·t)³/3! + …    (40)

As a typical term in this series is given by

    Aⁿ = (Y·Λ·Y⁻¹)ⁿ = Y·Λⁿ·Y⁻¹,    (41)

we thus have

    e^(−A·t) = Y·e^(−Λ·t)·Y⁻¹.    (42)
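The identity (42) is easy to verify numerically. In this sketch (our illustration) the series (40) is summed directly for a small symmetric positive definite test matrix and compared with the eigenmode form, using Y⁻¹ = Yᵀ for symmetric A.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + np.eye(5)             # symmetric positive definite test matrix
t = 0.3

# Left side: the truncated series of equation (40)
lhs = np.zeros((5, 5))
term = np.eye(5)
for k in range(1, 60):
    lhs = lhs + term
    term = term @ (-A * t) / k      # next term: (-A t)^k / k!

# Right side: Y.e^(-Lambda t).Y^(-1), equation (42)
lam, Y = np.linalg.eigh(A)
rhs = Y @ np.diag(np.exp(-lam * t)) @ Y.T
```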

APPENDIX II: IDENTITY OF α AND α_L FOR LINEAR PROBLEMS AND REGULAR MESHES

From equation (27):

    α_L = Δuᵀ·ψ(u) / (Δuᵀ·K·Δu),    (43)

where

    ψ = −(K·u + f).    (44)

Recall further that

    Δu = Δt·C⁻¹·ψ = −Δt·C⁻¹·(K·u + f).    (45)

If the problem is linear, the increment of Δu becomes

    Δ(Δu) = −Δt·C⁻¹·K·Δu,    (46)

so that

    α_L = −Δuᵀ·C·Δu / (Δuᵀ·C·Δ(Δu)).    (47)

As C is diagonal, for Dirichlet problems on regular meshes this becomes

    α_L = −Δuᵀ·Δu / (Δuᵀ·Δ(Δu)),    (48)

i.e. equation (33).

REFERENCES

1. R. V. Southwell, Relaxation Methods in Engineering Science, Clarendon Press, Oxford, 1940.
2. R. V. Southwell, Relaxation Methods in Theoretical Physics, Vol. I, 1946; Vol. II, 1956, Clarendon Press, Oxford.
3. J. R. H. Otter, Mech. Eng. Des., 3, 183-185 (1965).
4. A. S. Day, ‘An introduction to dynamic relaxation’, The Engineer, 219, 218-221 (1965).
5. J. R. H. Otter, E. Cassel and R. E. Hobbs, ‘Dynamic relaxation’, Proc. Inst. Civ. Engrs., 35, 633-656 (1966).
6. W. L. Wood, ‘Comparison of dynamic relaxation with three other iterative methods’, The Engineer, 24 Nov., 683-687 (1967).
7. J. S. Brew and D. M. Brotton, ‘Nonlinear structural analysis by dynamic relaxation’, Int. j. numer. methods eng., 3, 463-483 (1971).
8. A. Pica and E. Hinton, ‘Efficient transient dynamic plate bending analysis with Mindlin plates’, Earthquake Eng. Struct. Dyn., 9, 23-31 (1981).
9. P. Underwood, ‘Dynamic relaxation: a review’, Research Lab. Report, Lockheed, Palo Alto (April 1981).
10. M. Papadrakakis, ‘A method for the automatic evaluation of the dynamic relaxation parameters’, Comp. Meth. Appl. Mech. Eng., 25, 35-48 (1981).
11. G. Karrholm, ‘A method of iteration applied to beams resting on springs’, Trans. Chalmers Univ. of Technology, Gothenburg, Sweden, No. 199 (1958).
12. A. C. Aitken, ‘On Bernoulli's numerical solution of algebraic equations’, Proc. Roy. Soc. Edinb., 46, 289-305 (1926).
13. B. M. Irons and R. C. Tuck, ‘A version of the Aitken accelerator for computer iteration’, Int. j. numer. methods eng., 1, 275-277 (1969).
14. J. T. Oden and J. E. Key, ‘Analysis of static and nonlinear response by explicit time integration’, Short Communication, Int. j. numer. methods eng., 7, 225-240 (1973).
15. L. Fox, An Introduction to Numerical Linear Algebra, Oxford University Press, London, 1964.
16. O. C. Zienkiewicz, The Finite Element Method, 3rd edn, McGraw-Hill, London, 1977.
