Solving Systems of Linear Equations. Iterative Methods
Iterative methods
Iterative methods cannot compete with direct elimination methods for an arbitrary
matrix A. However, in certain types of problems, the system of linear equations has
many elements a_ij that are zero or close to zero (sparse systems). Under those
circumstances, iterative methods can be extremely fast. Iterative methods are also
efficient for solving partial differential equations by finite difference or finite
element methods.
The idea of the iterative solution of a linear system is to assume an initial
(trial) solution and use it to generate an improved solution. The procedure is
repeated until the solution converges to the desired accuracy. However,
for an iterative method to converge, the linear system of equations needs
to be diagonally dominant.
• SLE solution methods that provide an accurate solution through a finite number
of steps are called direct solution methods.
• Gaussian elimination, the determinant (Cramer) method, and decomposition methods are direct, but in
practice an exact solution is obtained only when no rounding errors are made.
• In addition, as n increases the number of arithmetic operations performed by
these methods grows very rapidly; for example, the Gaussian method requires about
O(2n^3/3) operations for n equations in n unknowns.
• Therefore, direct methods (Gaussian elimination) can be applied in practice only to solve
relatively small systems of equations.
• Next, we will examine the so-called iterative methods, which find an
approximate solution to any desired accuracy: the Jacobi and Seidel methods.
• These methods solve many systems of linear equations much faster than the
Gaussian method (especially when most of the matrix coefficients of the system are
zero).
Jacobi method
Transform the SLE Ax = b into fixed-point form by solving the i-th equation for x_i, then iterate:
x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k)) / a_ii,  i = 1, …, n.
• We finish the iterations when the distance between x^(k) and x^(k+1) is less than the desired
accuracy ε; for example, the distance can be measured as max_i |x_i^(k+1) − x_i^(k)|.
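The iteration and stopping criterion above can be sketched in Python with NumPy; the example matrix and right-hand side are illustrative values chosen to be diagonally dominant, not taken from the text:

```python
import numpy as np

def jacobi(A, b, eps=0.1, max_iter=100):
    """Jacobi iteration: every new x_i uses only values from the previous iterate."""
    D = np.diag(A)                       # diagonal entries a_ii
    R = A - np.diagflat(D)               # off-diagonal part of A
    x = np.zeros(len(b))                 # initial guess x^(0) = 0
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D          # x_i^(k+1) = (b_i - sum_{j!=i} a_ij x_j^(k)) / a_ii
        if np.max(np.abs(x_new - x)) < eps:   # max-norm stopping criterion
            return x_new, k
        x = x_new
    return x, max_iter

# Illustrative diagonally dominant system (exact solution is x = (1, 1, 1))
A = np.array([[10.0, 1.0, 1.0],
              [2.0, 10.0, 1.0],
              [2.0, 2.0, 10.0]])
b = np.array([12.0, 13.0, 14.0])
x, iters = jacobi(A, b)
```

Because each component of x_new depends only on the previous iterate x, the whole sweep can be written as a single vectorized expression.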
Exercises
Solve the system of linear equations to 0.1 accuracy by the Jacobi method:
We choose the initial guess x^(0) = 0 and calculate the first iteration and its error.
Since the error in this iteration is too big, we continue by calculating the next iteration.
• This condition, |a_ii| > Σ_{j≠i} |a_ij| for every row i, is called the diagonal dominance
condition, and the diagonal of a system to which it applies is called predominant.
• When the condition of the theorem is not satisfied and the Jacobi method
diverges, one can try to reorder the equations of the system so that the condition
holds.
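A small sketch of checking the dominance condition row by row; the 2x2 matrix is an invented example where swapping the two equations restores dominance:

```python
import numpy as np

def is_diagonally_dominant(A):
    """True if |a_ii| > sum_{j != i} |a_ij| holds for every row (strict dominance)."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

# The rows of A_bad violate dominance; swapping the two equations fixes it.
A_bad = np.array([[1.0, 5.0],
                  [4.0, 1.0]])
A_good = A_bad[[1, 0], :]    # reordered equations: [[4, 1], [1, 5]]
```

Reordering equations only permutes the rows of A (and the corresponding entries of b), so it does not change the solution of the system.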
Seidel method
As in the Jacobi method, transform the SLE. In the k-th iteration of the Seidel
process, the already calculated (updated) values x_i are used within the same
iteration to compute the remaining values x_j (j = i + 1, ..., n):
x_i^(k+1) = (b_i − Σ_{j<i} a_ij x_j^(k+1) − Σ_{j>i} a_ij x_j^(k)) / a_ii.
Exercises
Solve the system of linear equations to 0.1 accuracy by the Seidel method:
We choose the initial guess x^(0) = 0 and calculate the first iteration and its error.
Since the error in this iteration is too big, we continue by calculating the next iteration.
Notice that the approximations obtained by the Seidel method are closer to the real solution
than those obtained by the Jacobi method.
Convergence of the Seidel method
• How and when does the Seidel method converge?
• Like the Jacobi method, the Seidel method converges if the diagonal of the system is
predominant.
• However, the areas of convergence of these two methods do not coincide - there are
systems of equations for which the Jacobi method converges and the Seidel method
diverges and, conversely, there are systems for which the Seidel method converges and
the Jacobi method diverges.
• It can be shown that for systems to which both methods converge, the same accuracy is
achieved approximately twice as fast by the Seidel method.
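The "approximately twice as fast" claim can be checked numerically. The sketch below counts the sweeps each method needs on one invented diagonally dominant system (a single example, not a proof of the general statement):

```python
import numpy as np

def jacobi_sweeps(A, b, eps=1e-6, max_iter=500):
    """Count Jacobi sweeps until the max-norm update drops below eps."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros(len(b))
    for k in range(1, max_iter + 1):
        x_new = (b - R @ x) / D
        if np.max(np.abs(x_new - x)) < eps:
            return k
        x = x_new
    return max_iter

def seidel_sweeps(A, b, eps=1e-6, max_iter=500):
    """Count Gauss-Seidel sweeps: updated components are reused immediately."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < eps:
            return k
    return max_iter

# Illustrative diagonally dominant system
A = np.array([[10.0, 1.0, 1.0],
              [2.0, 10.0, 1.0],
              [2.0, 2.0, 10.0]])
b = np.array([12.0, 13.0, 14.0])
n_jacobi = jacobi_sweeps(A, b)
n_seidel = seidel_sweeps(A, b)
```

On strongly dominant systems like this one, Seidel typically reaches the same tolerance in roughly half the sweeps.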
Practical notes
There are multiple methods, and various computer packages available for solving
systems of linear equations. A researcher (or a student) faces the question: what
would be a good way to solve my problem? Should I invest my time in writing a
program, buy software that could do this job for me, learn how to use a sophisticated
numerical package, or attempt to find a matrix calculator on the Web?
The answer depends on the following factors:
a) the complexity of the system of equations (the size, conditioning, a general or sparse
matrix),
b) whether the problem is part of a larger computational problem or a stand-alone
task,
c) whether a one-time solution is needed, or multiple systems are to be solved.
A simple student problem can instantly be solved even with Excel. Excel has a number of functions to work with
matrices, in particular MINVERSE to find the inverse matrix, and MMULT for matrix multiplication. With Excel a
solution can be just a few clicks away using x = A−1b.
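In Python the same x = A^{-1}b recipe is a one-liner with NumPy; the 2x2 system below is an invented illustration with exact solution (2, 3):

```python
import numpy as np

# Illustrative system: 3x + y = 9, x + 2y = 8
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x_inv = np.linalg.inv(A) @ b      # x = A^{-1} b, the Excel MINVERSE/MMULT recipe
x_slv = np.linalg.solve(A, b)     # preferred: factorizes A instead of inverting it
```

For anything beyond toy sizes, np.linalg.solve is both faster and numerically safer than forming the explicit inverse.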
Software packages such as Mathematica, Maple, or MathCad have libraries for solving various systems of equations.
If the problem is part of a larger computational project, and the system of equations is not very large (less than a few
hundred equations) and well-conditioned, then using the quick and efficient programs of this chapter would be best.
However, for serious computational projects, it is advisable to use sophisticated packages developed by experts in
numerical analysis. The most well known commercial general libraries are NAG (Numerical Algorithmic Group), and
IMSL (International Mathematical and Statistical Library), both available in Fortran 90/95 and C/C++. The NAG
package also includes libraries for parallel calculations. The IMSL library is now a part of compilers such as Intel
Fortran and Intel C++. Additionally, there are also various special packages to solve multiple problems of linear
algebra that are absolutely free. LAPACK (Linear Algebra PACKage) is the most advanced linear algebra library. It
provides routines for solving systems of linear equations and eigenvalue problems. LAPACK has routines to handle
both real and complex matrices in single and double precision. The present core version has several implementations:
LAPACK95 uses features of Fortran 95, CLAPACK for C, LAPACK++ for C++ (it is being superseded by the
Template Numerical Toolkit, TNT), and JLAPACK for Java.
There are also two large numerical libraries that have multiple routines for linear
algebra problems. SLATEC is a comprehensive library of routines having over 1400
general purpose mathematical and statistical programs. The code was developed by a
group from several US National Laboratories and is therefore in the public domain. The
library was written in Fortran 77, but some routines have been translated to Fortran 90,
and it is possible to use SLATEC routines from C++ code.
The other large library is the GNU Scientific Library (or GSL). It is written in C.
The GSL is part of the GNU project and is distributed under the GNU General Public
License. GAMS - Guide to Available Mathematical Software from the National
Institute of Standards and Technology (NIST) is a practical starting point to find the
best routine for your problem. GAMS provides an excellent reference place and
orientation for available programs.
Exercises
Solve the systems of linear equations to 0.1 accuracy by the Jacobi and Seidel methods.
Compare the number of iterations:
1.
2.