Accelerated ‘Relaxation’ or Direct Solution? Future Prospects for FEM
21, 1–11 (1985)
O. C. ZIENKIEWICZ AND R. LÖHNER
Institute for Numerical Methods in Engineering, University College of Swansea, Swansea, U.K.
SUMMARY
‘Dynamic’ or ‘viscous’ relaxation procedures have not gained much popularity in finite element analysis, in which the direct (Gaussian elimination) solution dominates. The reasons for this are various, the most important being the rather slow convergence generally achieved by such procedures. However, it is possible to accelerate this convergence quite dramatically, and a method of doing so is shown in this paper. With such acceleration, and with the inherent advantages of greatly reduced storage requirements and simplicity of programming, relaxation procedures promise an exciting possibility for the solution of large two- and three-dimensional problems in both the linear and nonlinear ranges.
INTRODUCTION
The term ‘relaxation’ was first introduced into the numerical analysis literature by Southwell¹,² as a short name for ‘systematic relaxation of constraints’, a process of iteration which permitted a manual solution of the large equation systems associated with finite difference approximations. Indeed, his work opened up the possibility of dealing with reasonably sized problems. With the advent of computers, methods such as relaxation died out, as the machine cost of dealing with an approximation to human intelligence was too large.
A revival of the term, as ‘dynamic relaxation’, was introduced in the early 1960s by Otter and co-workers.³⁻⁵ Here, the name was associated with the solution of such systems as

K·u + f = 0    (1)
as an end product of the transient dynamic problem
M·ü + C·u̇ + K·u + f = 0.    (2)
Solutions of (2) by an explicit finite difference scheme required little computer storage and could be
programmed easily.
The ‘mass’ and ‘damping’ matrices introduced artificially into equation (2) could be chosen in such a way that a damped convergence of the form shown in Figure 1(a) was obtained. Various authors discuss how optimal structures of M and C are achieved.⁶⁻¹⁰ If we set M = 0, the problem is completely viscous and an asymptotic convergence of the type illustrated in Figure 1(b) occurs.

This type of ‘viscous’ relaxation, which in the mathematical literature is known as Jacobi iteration,¹⁵ has seldom been used in practice (examples are given in References 11 to 14), as it was considered by many to be slow and undesirable. It has, however, the considerable advantage of simplicity. For this reason we shall consider it alone in the rest of this paper.
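As a concrete sketch of such a viscous relaxation scheme (our illustration, not code from the paper): the loop below advances C·u̇ + K·u + f = 0 by explicit pseudo-time steps, taking the artificial damping matrix C as the diagonal of K. The 1-D model matrix, Δt and tolerance are illustrative assumptions.

```python
import numpy as np

def viscous_relaxation(K, f, dt, max_steps=20000, tol=1e-8):
    """Plain 'viscous' (Jacobi-type) relaxation for K.u + f = 0:
    explicit pseudo-time stepping of C.du/dt + K.u + f = 0 with
    the artificial damping matrix C taken as diag(K)."""
    c = np.diag(K)                     # diagonal 'damping' coefficients
    u = np.zeros_like(f, dtype=float)
    for n in range(max_steps):
        r = K @ u + f                  # residual of the steady problem (1)
        if np.linalg.norm(r) < tol:
            break
        u = u - dt * r / c             # one relaxation (pseudo-time) step
    return u, n

# 1-D model problem: tridiagonal K from a second-order operator
N = 20
K = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
f = -np.ones(N) / (N + 1) ** 2
u, steps = viscous_relaxation(K, f, dt=0.9)
print(steps, np.linalg.norm(K @ u + f))
```

For this second-order operator the step count grows roughly as N^2, which is the slow convergence referred to above.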
Figure 1. Convergence behaviour of the (a) hyperbolic type and (b) parabolic type
where λ_max is the highest eigenvalue of the system matrix

A = C^{−1}·K.    (7)

Let us consider the exact solution of (3) for constant K, f, which is

u = u_∞ + e^{−A·t}·(u_0 − u_∞),    (8)
where u_∞ is the solution of the steady-state problem (1). The matrix A can be decomposed into its eigenmodes as (see Appendix I)

A = Y·Λ·Y^{−1},    (9)

where Λ = diag(λ_min, …, λ_max) contains the eigenvalues and Y the corresponding eigenvectors of A.

Using equation (9), equation (8) can be written mode by mode (see Appendix I): the slowest component decays as e^{−λ_min·t}, while stability of the explicit scheme restricts Δt to the order of 1/λ_max.
The total number of time steps NSTEP required is thus of the order

NSTEP ∼ λ_max/λ_min.
Let us now consider a typical problem resulting in an equation system like (1) and deriving from a d-dimensional cube (d = 1, 2, 3) with N^d nodes/elements discretizing an operator of order 2m, and determine:

1. the number of operations (NOP) required in the computation of the unknowns u with a given accuracy;
2. the number of storage locations (NSTORE) needed.

In a single time step

NOP ∼ N^d and NSTORE ∼ N^d.

Thus, for the total process of reaching a desired accuracy, the number of operations needed by the viscous relaxation process is

NOP ∼ NSTEP · N^d.
For the cube considered, the ratio of eigenvalues is proportional to N^{2m}, where 2m is the order of the operator associated with K (assuming an equal element subdivision). Thus, for the total process,

NOP ∼ N^{2m+d}    (16)

and

NSTORE ∼ N^d.    (17)
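This growth of the eigenvalue ratio can be observed numerically. A small sketch (our illustration) for the 1-D (d = 1, 2m = 2) model operator, taking C = I so that A = K:

```python
import numpy as np

# lambda_max/lambda_min of the 1-D discrete second-order operator
# should grow like N^2: doubling N roughly quadruples the ratio.
ratios = []
for N in (10, 20, 40, 80):
    K = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    lam = np.linalg.eigvalsh(K)        # symmetric, so real eigenvalues
    ratios.append(lam[-1] / lam[0])
    print(N, round(lam[-1] / lam[0], 1))
```

Doubling N roughly quadruples the ratio, i.e. λ_max/λ_min ∼ N^2 here.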
If we contrast this with a conventional Gaussian elimination process in which a banded type of solution is used directly to solve equation (1), we have

NOP ∼ N^d · NBAND^2,    (18)

and with

NBAND ∼ N^{d−1}

this gives

NOP ∼ N^{3d−2}

and

NSTORE ∼ N^d · NBAND ∼ N^{2d−1}.
In Table I we compare the NOP and NSTORE orders for a problem of order 2m = 2 (thermal or elasticity problems) for one-, two- and three-dimensional situations.
Figure 2. Number of relaxation steps needed for the model domain (d-dimensional cube)
Table I. Orders of NOP and NSTORE for the model problem (2m = 2)

      Gaussian elimination   Viscous relaxation        AVR
d       NOP     NSTORE        NOP     NSTORE        NOP     NSTORE
1       N       N             N^3     N             N^2     N
2       N^4     N^3           N^4     N^2           N^3     N^2
3       N^7     N^5           N^5     N^3           N^4     N^3
This line search method can be interpreted in a different manner. We have already observed that after some time of iteration the lowest eigenvalue λ_min predominates and that the solution can be written as

u = u_∞ + a·e^{−λ_min·t}.    (28)

Thus, differentiating,

Δu = −λ_min·Δt·a·e^{−λ_min·t} = −λ_min·Δt·(u − u_∞),    (29)

so that

u_∞ = u + (1/(λ_min·Δt))·Δu = u + α·Δu,    (30)

where

α = 1/(λ_min·Δt).    (31)
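For a single decaying mode this extrapolation is exact. A minimal numerical sketch (our illustration, with arbitrary values of u_∞, a, λ and Δt, not taken from the paper): three successive iterates of a scalar exponential decay suffice to form α from the increments and recover the steady-state value.

```python
import numpy as np

# Late-time behaviour of one mode: u(t) = u_inf + a*exp(-lam*t)
u_inf, a, lam, dt = 3.0, 1.0, 0.05, 0.1
t = np.array([0.0, dt, 2.0 * dt])
u = u_inf + a * np.exp(-lam * t)       # three successive 'iterates'

du0 = u[1] - u[0]                      # first increment
du1 = u[2] - u[1]
ddu = du1 - du0                        # second difference Delta(Delta u)
alpha = -du0 / ddu                     # ~ 1/(lam*dt) for small lam*dt
u_acc = u[0] + alpha * du0             # accelerated estimate of u_inf
print(u_acc)
```

With noise-free single-mode data the extrapolation is exact; in the full iteration it becomes accurate once the lowest eigenmode dominates.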
To determine α, various procedures can be used. For instance, taking the second derivative of (28) yields

Δ(Δu) = −λ_min·Δt·Δu    (32)

and multiplying by Δu^T we have

α = −(Δu^T·Δu)/(Δu^T·Δ(Δu)).

This value of α can be shown to be, in fact, identical to α_L (viz. (27)) for linear problems (see Appendix II), but is simpler to evaluate. Further, an immediate indication of the settling down of the solution into the lowest eigenmode (λ_min) is given if the values of α are compared at subsequent time steps. We have found it optimal for linear problems to introduce the correction when two subsequent values of α do not differ by more than 5 per cent, or

|α_{n+1} − α_n| ≤ 0.05·α_{n+1}.
Applying this acceleration process to the examples of Figure 2, we found a considerable reduction in the number of time steps needed to achieve the desired approximation. As one can observe from Figure 2, the order of NSTEP has dropped to NSTEP ∼ N in place of NSTEP ∼ N^2, with a consequent reduction of the operations required in AVR, which is also indicated in Table I.

As one can see, the method consists basically in the judicious combination of two algorithms, namely the Jacobi iteration and the steepest descent method.¹⁵ Each algorithm applied by itself is slowly convergent, whereas the combination proposed here speeds up the convergence by an order of magnitude, while retaining the simplicity of the original algorithms.
Another observation to be made is that the evaluation of α from equation (32) is by no means mandatory. Other formulae are obtainable, for example

α = −(Δ(Δu)^T·Δu)/(Δ(Δu)^T·Δ(Δu)).

This is the so-called Aitken process,¹² used by Irons¹³ in a similar context for the iterative computation of eigenvalues.
4. By evaluating α from equation (32) via

This form, which is particularly easy, was suggested by Karrholm¹¹ for the iterative computation of beams.
Obviously, other means of computing the acceleration factor are possible, but we have found that, among the possibilities numbered above, the combination of α as given by equation (32) and Δt = 0.8·Δt_max gave the fastest convergence (Δt_max being the maximum allowable time step).

With this acceleration now achieved, it seems advantageous to use AVR instead of direct methods for two-dimensional problems also.
Figure 3. Comparison of actual computer time between AVR (broken line) and a frontal solver (solid line) for the two-dimensional model (N^2 nodes)
Figure 4. Model problem ∇·(k∇u) + f = 0 with u = 0 on the boundary Γ: mesh and solution contours (contour interval 0.02) for the linear case (f = 1), a nonlinear softening case and a nonlinear hardening case (k a function of u)
To prove that the proposed method is also applicable to ill-conditioned systems, we considered the problem shown in Figure 5. Convergence was obtained after 1000 steps without any difficulty, showing the robustness of the method.

Figure 5. Ill-conditioned test problem: uniform source, equation ∇·(k∇u) + f = 0, with u fixed on part of the boundary and u,x = 0 elsewhere, discretized with 40 × 2 × 2 simplex elements

The convergence rate of AVR, as can be seen from Figure 2, is similar to that attained by the conjugate gradient method (CGM), the number of operations in each step again being similar. The
advantage we see in AVR over CGM is that it is applicable not only to equation systems derived by minimization of a functional, but also to other, general systems of equations.

Although the results presented in this paper are preliminary, they confirm the authors' view, held for some time, that for the majority of the nonlinear problems on which current interest is focused it may well be necessary to abandon current practices and return to the simpler but more efficient iterative methods. This was indeed observed earlier in many plasticity solutions, where the viscoplastic analysis alternative often offered a more economical solution than direct approaches. The same appears true today in another context.
thus:

e^{−A·t} = Y·e^{−Λ·t}·Y^{−1}
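This identity is easy to check numerically for any diagonalizable matrix. In the sketch below (an arbitrary 2 × 2 symmetric matrix and time, our illustration) the eigenmode form is compared with a direct Taylor-series evaluation of e^{−A·t}:

```python
import numpy as np

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # a small diagonalizable matrix
t = 0.7

# eigenmode form: exp(-A t) = Y exp(-Lambda t) Y^{-1}
lam, Y = np.linalg.eig(A)
E_modes = Y @ np.diag(np.exp(-lam * t)) @ np.linalg.inv(Y)

# direct Taylor-series evaluation of exp(-A t) for comparison
E_series = np.eye(2)
term = np.eye(2)
for k in range(1, 30):
    term = term @ (-A * t) / k
    E_series = E_series + term

print(np.max(np.abs(E_modes - E_series)))
```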
REFERENCES
11. G. Karrholm, ‘A method of iteration applied to beams resting on springs’, Trans. Chalmers Univ. of Technology, Göteborg, Sweden, No. 199 (1958).
12. A. C. Aitken, ‘On Bernoulli's numerical solution of algebraic equations’, Proc. Roy. Soc. Edinb., 46, 289–305 (1926).
13. B. M. Irons and R. C. Tuck, ‘A version of the Aitken accelerator for computer iteration’, Int. J. Numer. Methods Eng., 1, 275–277 (1969).
14. J. T. Oden and J. E. Key, ‘Analysis of static and nonlinear response by explicit time integration’, Short Communication, Int. J. Numer. Methods Eng., 7, 225–240 (1973).
15. L. Fox, Introduction to Numerical Linear Algebra, Oxford Univ. Press, London, 1964.
16. O. C. Zienkiewicz, The Finite Element Method, 3rd edn, McGraw-Hill, London, 1977.