Linear Systems - Iterative Methods


SOLUTIONS TO LINEAR SYSTEMS

ITERATIVE METHODS
Iterative Methods

 Numerical methods rely on iterative algorithms to find a numerical answer.

 Iterative methods are usually used for solving linear systems with a large number of variables, when elimination methods prove too tedious.

 An algorithm is a step-by-step procedure that produces a solution in a finite number of steps.
Iterative Methods
 Algorithm:
1. Define the iterative formula.
2. Initialize: find an initial value for the variable vector being approximated.
3. Iterate: derive a new estimate from the previous estimate using the iterative formula.
4. Termination test: stop the iteration when εk ≤ ε, where

ε = specified tolerable error

εk = error of the k-th iteration = |x(k) − x(k−1)|
Iterative Methods
A. Gauss Jacobi Method
 Steps:
1. Express each of the variables in the system in terms of the other variables (if possible, solve each equation for the variable x1, x2, ..., xn with the largest coefficient in that equation).
Iterative Methods
(Gauss Jacobi Method)
 Steps (cont.)
2. Set an initial guess, then solve for the new set of values using the initial-guess values.
3. Use the newly computed values of the variables to compute the next set of values. The general form is given by:

xi(k+1) = ( bi − Σj≠i aij xj(k) ) / aii,  i = 1, 2, ..., n
Iterative Methods
(Gauss Jacobi Method)
 Steps (cont.)
4. Stop the iteration once the termination test has been satisfied.
 The termination test simply compares the difference between the most recent value of each variable and the one preceding it against the specified tolerable error (ε).
Iterative Methods
(Gauss Jacobi Method)
 Example:
 Find an approximate solution to the following system of equations:

 Assume that the maximum tolerable error (ε) is 0.0075.
Iterative Methods
(Gauss Jacobi Method)
 Example (cont.):
Step 1: Express each of the variables in the system in terms of the other variables.

x1 = 2.6 − 0.2x2 − 0.4x3
x2 = −1.8 − 0.2x1 + 0.6x3
x3 = 1.6 + 0.4x1 + 0.4x2

Step 2: Assume: x10 = x20 = x30 = 0 (Initialization)

Then, x11 = 2.6, x21 = -1.8, x31 = 1.6
Iterative Method
Gauss Jacobi Method
 Example (cont.)
Steps 3&4: Use the newly computed values of the
variables to compute for the next set of values.
Perform termination test.

Given that: x11 = 2.6, x21 = -1.8, x31 = 1.6


x12 = 2.6 – 0.2(-1.8) – 0.4(1.6) = 2.32 error: x1 = 0.28
x22 = -1.8 – 0.2(2.6) + 0.6(1.6) = -1.36 x2 = 0.44
x32 = 1.6 + 0.4(2.6) + 0.4(-1.8) = 1.92 x3 = 0.32

Given that: x12 = 2.32, x22 = -1.36, x32 = 1.92


x13 = 2.6 – 0.2(-1.36) – 0.4(1.92) = 2.104 error: x1 = 0.216
x23 = -1.8 – 0.2(2.32) + 0.6(1.92) = -1.112 x2 = 0.248
x33 = 1.6 + 0.4(2.32) + 0.4(-1.36) = 1.984 x3 = 0.064
Iterative Method
Gauss Jacobi Method
 Example (cont.)

Given that: x13 = 2.104, x23 = -1.112, x33 = 1.984


x14 = 2.6 – 0.2(-1.112) – 0.4(1.984) = 2.0288 error: x1 = 0.0752
x24 = -1.8 – 0.2(2.104) + 0.6(1.984) = -1.0304 x2 = 0.0816
x34 = 1.6 + 0.4(2.104) + 0.4(-1.112) = 1.9968 x3 = 0.0128

Given that: x14 = 2.0288, x24 = -1.0304, x34 = 1.9968


x15 = 2.6 – 0.2(-1.0304) – 0.4(1.9968) = 2.00736 error: x1 = 0.02144
x25 = -1.8 – 0.2(2.0288) + 0.6(1.9968) = -1.00768 x2 = 0.02272
x35 = 1.6 + 0.4(2.0288) + 0.4(-1.0304) = 1.99936 x3 = 0.00256

Given that: x15 = 2.00736, x25 = -1.00768, x35 = 1.99936


x16 = 2.6 – 0.2(-1.00768) – 0.4(1.99936) = 2.001792 error: x1 = 0.005568
x26 = -1.8 – 0.2(2.00736) + 0.6(1.99936) = -1.001856 x2 = 0.005824
x36 = 1.6 + 0.4(2.00736) + 0.4(-1.00768) = 1.999872 x3 = 0.000512
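The Gauss-Jacobi procedure traced above can be sketched in a few lines of Python. This is a minimal illustration using the rearranged equations from Step 1 and the termination test with ε = 0.0075; the function name is ours, not part of any library.

```python
# Gauss-Jacobi iteration for the worked example. Every new value is
# computed from the PREVIOUS iteration's values only.

def jacobi_example(tol=0.0075, max_iter=50):
    x1, x2, x3 = 0.0, 0.0, 0.0           # initial guess (Step 2)
    for k in range(1, max_iter + 1):
        n1 = 2.6 - 0.2 * x2 - 0.4 * x3
        n2 = -1.8 - 0.2 * x1 + 0.6 * x3
        n3 = 1.6 + 0.4 * x1 + 0.4 * x2
        errs = (abs(n1 - x1), abs(n2 - x2), abs(n3 - x3))
        x1, x2, x3 = n1, n2, n3
        if max(errs) <= tol:             # termination test (Step 4)
            return k, (x1, x2, x3)
    return max_iter, (x1, x2, x3)

k, sol = jacobi_example()
print(k, sol)   # stops at k = 6 with x close to (2, -1, 2)
```

Running it reproduces the hand computation above, terminating at the sixth iteration.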
Iterative Method
Gauss Jacobi Method
All three errors at k = 6 are below ε = 0.0075, so the iteration stops with the approximate solution x1 ≈ 2.002, x2 ≈ -1.002, x3 ≈ 2.000 (the iterates are converging to the exact solution x1 = 2, x2 = -1, x3 = 2).
Iterative Method
Gauss Seidel Method
A modification of the Gauss-Jacobi method that usually converges faster. It uses the same concept, except that the latest value of a variable is used to compute the values of the other variables, even within the same iteration.

 Named after Carl Friedrich Gauss (1777-1855) and Philipp Ludwig von Seidel (1821-1896), both German mathematicians.
Iterative Method
Gauss Seidel Method
The general form is:

xi(k+1) = ( bi − Σj<i aij xj(k+1) − Σj>i aij xj(k) ) / aii,  i = 1, 2, ..., n
Iterative Methods
(Gauss Seidel Method)
 Example:
 Find an approximate solution to the following system of equations:

 Assume that the maximum tolerable error (ε) is 0.0075.
Iterative Methods
(Gauss Seidel Method)
 Example (cont.):
Step 1: Express each of the variables in the system in terms of the other variables.

x1 = 2.6 − 0.2x2 − 0.4x3
x2 = −1.8 − 0.2x1 + 0.6x3
x3 = 1.6 + 0.4x1 + 0.4x2

Step 2: Assume: x10 = x20 = x30 = 0 (Initialization)
Then, x11 = 2.6
x21 = -1.8 – 0.2(2.6) + 0.6(0) = -2.32
x31 = 1.6 + 0.4(2.6) + 0.4(-2.32) = 1.712
Iterative Methods
(Gauss Seidel Method)
Summary of values in tabular form:

  k     x1        x2        x3        ε(x1)     ε(x2)     ε(x3)
  0     0         0         0         -         -         -
  1     2.6       -2.32     1.712     2.6       2.32      1.712
  2     2.3792    -1.2486   2.05222   0.2208    1.07136   0.34022
  3     2.02884   -0.9744   2.02176   0.35036   0.27421   0.03046
  4     1.98618   -0.9842   2.0008    0.04266   0.00975   0.02096
  5     1.99652   -0.9988   1.99908   0.01033   0.01464   0.00172
  6     2.00013   -1.0006   1.99982   0.00362   0.00176   0.00074
Iterative Method
Gauss Seidel Method
All three errors at k = 6 are below ε = 0.0075, so the iteration stops at x1 ≈ 2.000, x2 ≈ -1.001, x3 ≈ 2.000. The final Gauss-Seidel errors (largest: 0.00362) are noticeably smaller than the final Gauss-Jacobi errors (largest: 0.005824), illustrating its faster convergence.
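The Gauss-Seidel variant differs from the Jacobi sketch only in that each newly computed value is used immediately within the same sweep. A minimal illustration, again using the rearranged equations of the example (our own function name, not library code):

```python
# Gauss-Seidel iteration for the same example: each assignment below
# immediately feeds the NEW value into the remaining equations.

def gauss_seidel_example(tol=0.0075, max_iter=50):
    x1, x2, x3 = 0.0, 0.0, 0.0
    for k in range(1, max_iter + 1):
        o1, o2, o3 = x1, x2, x3
        x1 = 2.6 - 0.2 * x2 - 0.4 * x3     # uses previous x2, x3
        x2 = -1.8 - 0.2 * x1 + 0.6 * x3    # already uses the NEW x1
        x3 = 1.6 + 0.4 * x1 + 0.4 * x2     # uses the NEW x1 and x2
        if max(abs(x1 - o1), abs(x2 - o2), abs(x3 - o3)) <= tol:
            return k, (x1, x2, x3)
    return max_iter, (x1, x2, x3)

k, sol = gauss_seidel_example()
print(k, sol)
```

Running it reproduces the table above, row by row.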
Condition for
Convergence/Divergence
 Iterative methods (such as Gauss-Jacobi and Gauss-Seidel) converge if their successive iterates approach the exact solution ever more closely. They diverge if the iterates drift farther and farther away from it.
 A sufficient condition for the convergence of the Gauss-Jacobi and Gauss-Seidel methods is that the coefficient matrix be diagonally dominant:
Condition for
Convergence/Divergence
 This means that if the absolute value of every diagonal element is larger than the sum of the absolute values of the other entries in its row, then the matrix is said to be diagonally dominant.
 If a matrix is diagonally dominant, then the Gauss-Jacobi and Gauss-Seidel methods converge to the solution.
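The row test is easy to automate. A small sketch; the example matrix is our reconstruction of the worked system's coefficients (any row scaling of it yields the same rearranged equations, so this is one consistent choice):

```python
# Row diagonal dominance: |a_ii| > sum of |a_ij| for j != i, every row.
# A sufficient (not necessary) condition for Jacobi/Seidel convergence.

def is_diagonally_dominant(A):
    for i, row in enumerate(A):
        off_diag = sum(abs(a) for j, a in enumerate(row) if j != i)
        if abs(row[i]) <= off_diag:
            return False
    return True

# Coefficient matrix reconstructed from the worked example
# (5x1 + x2 + 2x3 = 13, x1 + 5x2 - 3x3 = -9, -2x1 - 2x2 + 5x3 = 8):
A = [[5, 1, 2], [1, 5, -3], [-2, -2, 5]]
print(is_diagonally_dominant(A))   # True: both methods converged above
```

Each diagonal entry is 5 against an off-diagonal row sum of at most 4, which is why both iterations converged in the example.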
Condition for
Convergence/Divergence
 Another example: a system whose coefficient matrix is not diagonally dominant can sometimes be converted to a diagonally dominant one by interchanging its equations (rows).
Successive Over Relaxation (SOR)
 Successive over-relaxation (SOR) is a variant of the Gauss-Seidel method for solving a linear system of equations, resulting in faster convergence.
 A similar method can be used for any slowly converging iterative process.
 It was devised simultaneously by David M. Young and by Stanley Frankel in 1950 for the purpose of automatically solving linear systems on digital computers.
Successive Over Relaxation (SOR)

 Given a square system of n linear equations with unknown vector x:

Ax = b

where A is the n × n coefficient matrix, x is the vector of unknowns, and b is the right-hand-side vector.
Successive Over Relaxation (SOR)
 Then A can be decomposed into a diagonal component D, and strictly lower and strictly upper triangular components L and U:

A = D + L + U

 The system of linear equations may be rewritten as:

(D + ωL)x = ωb − [ωU + (ω − 1)D]x

for a constant ω > 0, called the relaxation factor.
Successive Over Relaxation (SOR)
 The method of successive over-relaxation is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as:

x(k+1) = (D + ωL)⁻¹ ( ωb − [ωU + (ω − 1)D] x(k) )

 However, by taking advantage of the triangular form of (D + ωL), the elements of x(k+1) can be computed sequentially using forward substitution:

xi(k+1) = (1 − ω) xi(k) + (ω / aii) ( bi − Σj<i aij xj(k+1) − Σj>i aij xj(k) ),  i = 1, 2, ..., n
Successive Over Relaxation (SOR)
 The choice of relaxation factor ω is not necessarily
easy, and depends upon the properties of the
coefficient matrix. For symmetric, positive-definite
matrices it can be proven that 0 < ω < 2 will lead
to convergence, but we are generally interested in
faster convergence rather than just convergence.

 If ω = 1, then SOR = Gauss Seidel.
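The component-wise forward substitution translates directly into code. A minimal sketch (our own function, not library code); the test system is again our reconstruction of the earlier worked example:

```python
# SOR via the component-wise update:
# x_i <- (1-w)*x_i + (w/a_ii) * (b_i - sum_{j != i} a_ij * x_j),
# where x[j] for j < i already holds this sweep's NEW value.

def sor(A, b, w=1.2, tol=1e-6, max_iter=200):
    """Successive over-relaxation; w = 1 reduces to Gauss-Seidel."""
    n = len(b)
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        max_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (1 - w) * x[i] + (w / A[i][i]) * (b[i] - s)
            max_change = max(max_change, abs(new - x[i]))
            x[i] = new
        if max_change <= tol:
            return k, x
    return max_iter, x

A = [[5, 1, 2], [1, 5, -3], [-2, -2, 5]]   # reconstructed example system
b = [13, -9, 8]
k, x = sor(A, b, w=1.0)   # w = 1: plain Gauss-Seidel
print(k, [round(v, 5) for v in x])
```

Because x is updated in place, the sum over j < i automatically uses the current sweep's values, which is exactly the forward substitution described above.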


Successive Over Relaxation (SOR)
Definition of positive-definite matrix: a symmetric matrix A is positive definite if xᵀAx > 0 for every non-zero vector x.
Symmetric Successive Over
Relaxation (SSOR)
 The symmetric successive over-relaxation (SSOR) method combines two SOR sweeps in such a way that the resulting iteration matrix is similar to a symmetric matrix when the coefficient matrix A of the linear system Ax = b is symmetric.
 SSOR is a forward SOR sweep followed by a backward SOR sweep, in which the unknowns are updated in reverse order.
 The similarity of the SSOR iteration matrix to a symmetric matrix permits the application of SSOR as a preconditioner for other iterative schemes for symmetric matrices. This is the primary motivation for SSOR, since its convergence rate is usually slower than the convergence rate of SOR with the optimal relaxation factor.
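The forward-then-backward structure of one SSOR sweep can be sketched as follows. A minimal illustration on the reconstructed example system (an assumption, as before); ω = 1 is used so each half-sweep is a plain Gauss-Seidel pass:

```python
# One SSOR sweep: a forward SOR pass over i = 0..n-1, then a backward
# pass over i = n-1..0, each using the freshest available values.

def ssor_sweep(A, b, x, w=1.0):
    n = len(b)
    for order in (range(n), range(n - 1, -1, -1)):  # forward, then backward
        for i in order:
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - w) * x[i] + (w / A[i][i]) * (b[i] - s)
    return x

A = [[5, 1, 2], [1, 5, -3], [-2, -2, 5]]
b = [13, -9, 8]
x = [0.0, 0.0, 0.0]
for _ in range(30):          # 30 symmetric sweeps: ample for this system
    x = ssor_sweep(A, b, x)
print([round(v, 6) for v in x])
```

In practice SSOR is used less as a standalone solver than as a preconditioner, as the slide notes.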
Exercises
1.

2.

 For both problems, solve using:
 Gauss Jacobi Method
 Gauss Seidel Method
 SOR, ω = 1.2
 Assume that the maximum tolerable error (ε) is 0.005. Limit to three decimal places.
Exercise
 A furniture manufacturer makes tables, chairs, and
sofas. In one month, the company has available 300
units of wood, 350 units of labor, and 225 units of
upholstery. The manufacturer wants a production
schedule for the month that uses all of these resources.
The different products require the following amounts of
resources.
            Table  Chair  Sofa
Wood          4      1      3
Labor         3      2      5
Upholstery    2      0      4

(a) Make a mathematical model of this production


problem.
(b) Find an approximate solution (accurate to within 10%).
Special Cases
A. Tests for the Existence of a Solution
 Given a linear system of equation in its matrix form.
AX = B
 If the determinant of A ≠ 0, then there exists a unique solution; we call the matrix A non-singular. However, when B = 0, we have the trivial solution X = 0.
 If the determinant of A = 0, then either no solution exists or an infinite number of solutions exists; the matrix A is called singular. When no solution exists, we have an inconsistent system of equations. When an infinite number of solutions exists, the system is dependent (under-determined).
Special Cases
A. Tests for the Existence of a Solution
 Generalization:
 If |A| = 0 and the RDs (revised determinants) of all the variables are non-zero, then no solution exists, meaning no values of x1 and x2 will be able to satisfy the given system of equations simultaneously.

From Cramer's Rule, |A| · Xi = RDi, so we get Xi = RDi / |A|.

 The previous generalization can actually be proven using this equation. Given that |A| = 0 and all RDs are non-zero (RDi ≠ 0), there is definitely no value of Xi that satisfies the equation: zero multiplied by any number cannot possibly yield a non-zero value. Therefore, when such a case arises, we conclude that there is no solution.
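The |A| = 0 with RDi ≠ 0 test is easy to check numerically. A sketch for a 2×2 system; since the slide's own example system did not survive, the singular system used here (x + y = 3, 2x + 2y = 5) is a hypothetical stand-in:

```python
# Existence test via Cramer's rule for a 2x2 system.
# Hypothetical singular example: x + y = 3, 2x + 2y = 5 (parallel lines).

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 1], [2, 2]]
b = [3, 5]
d = det2(A)                                     # |A| = 0 -> singular
rd1 = det2([[b[0], A[0][1]], [b[1], A[1][1]]])  # column 1 replaced by b
rd2 = det2([[A[0][0], b[0]], [A[1][0], b[1]]])  # column 2 replaced by b
# |A| = 0 and RD1, RD2 != 0  =>  inconsistent system, no solution
print(d, rd1, rd2)
```

Here |A| = 0 while both revised determinants are non-zero, so by the generalization above the system has no solution.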
Special Cases
A. Tests for the Existence of a Solution
 Example 1:

Recall: using Cramer's Rule, Xi = RDi / |A|, where RD = revised determinant (the determinant of A with the column of Xi replaced by the right-hand-side vector).

Both RD values are non-zero while |A| = 0. The lines are parallel and will not intersect; there is no solution.
Special Cases
 Example 2:

eqn. 1
eqn. 2

It must be noted that this system of equations represents one and the same line, i.e., eqn. 2 is obtained by multiplying eqn. 1 by 2.

Here |A| and both RD values are zero.
Special Cases
 Generalization:
 If |A| = 0 and the RDs of all the variables are also zero, then an infinite number of solutions exists, meaning an infinite set of values of X1 and X2 will satisfy the system of equations.
 Again, this generalization can be proven using the equation |A| · Xi = RDi. If it has been found that |A| = 0 and RDi = 0, then the equation becomes
0 · Xi = 0
 We know that zero multiplied by any value results in zero. Therefore, any value assigned to Xi will satisfy the equation. From this, we can conclude that under such conditions (|A| = 0 and RDi = 0), an infinite number of solutions is possible.
Special Cases
 Using Gaussian Elimination or the Gauss-Jordan method, the case in which no solution or infinitely many solutions exist may be detected immediately if, in any row of any iteration, one of the following is obtained:

0 0 ... 0 | 0 - infinite solutions
0 0 ... 0 | C - no solution

where C is any non-zero constant.
Special Cases
B. Number of unknowns less than the number of
equations (m > n)
 If m > n, i.e., the number of equations is more than the number of unknowns involved in those equations, then we have an over-determined system. It is not possible to solve such a system of equations without using an additional criterion (such as least squares), in which case we may achieve an appropriate solution.
Special Cases
C. Number of unknowns more than the number of
equations (m < n)
 If m < n, i.e., the number of equations is less than the number of unknowns (an under-determined system), then the system of equations will have an infinite number of solutions, each obtained by setting the surplus (n − m) unknowns to constant values and solving for the remaining m unknowns from the m equations.
Special Cases
 Example:
3X1 + X2 – X3 + 2X4 = 8
X1 + X2 + X3 – X4 = 4
m = 2 equations , n = 4 unknowns

Therefore, n – m = 4 – 2 = 2 unknowns to be equated to a


constant value.

 Therefore, to solve, assume values for (n − m) of the variables, then solve for the remaining m variables.
Special Cases
1. Assume arbitrarily that the n − m surplus variables have a value of 1, say X1 = 1 and X2 = 1. The system of equations then becomes:
3(1) + (1) - X3 + 2X4 = 8  →  -X3 + 2X4 = 4
(1) + (1) + X3 - X4 = 4  →  X3 - X4 = 2

 Solving for X3 and X4, we have X3 = 8 and X4 = 6. Therefore, (1, 1, 8, 6) is one of the possible solutions.
Special Cases
2. If each of the (n − m) variables is required to be zero:
X1 = 0 and X2 = 0

The system of equations becomes
-X3 + 2X4 = 8
X3 - X4 = 4

 Solving for X3 and X4, we get X3 = 16 and X4 = 12. Therefore, (0, 0, 16, 12) is another possible solution.
Special Cases

3. If we assume X1 = 3 and X2 = 2, the system becomes
-X3 + 2X4 = -3
X3 - X4 = -1

 Solving for X3 and X4, we get X3 = -5 and X4 = -4. Therefore, (3, 2, -5, -4) is another possible solution.
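All three particular solutions are easy to verify by substitution. A quick sketch (our own helper name) against the original pair of equations:

```python
# Verify candidate solutions of the underdetermined system
# 3x1 + x2 - x3 + 2x4 = 8  and  x1 + x2 + x3 - x4 = 4.

def satisfies(x):
    x1, x2, x3, x4 = x
    return (3*x1 + x2 - x3 + 2*x4 == 8) and (x1 + x2 + x3 - x4 == 4)

for sol in [(1, 1, 8, 6), (0, 0, 16, 12), (3, 2, -5, -4)]:
    print(sol, satisfies(sol))   # all three satisfy the system
```

Each choice of the two free variables X1, X2 yields a different solution, illustrating the infinite solution family.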
Special Cases
Definition of some terms
1. Basic Solution – a solution obtained when all (n − m) surplus variables are required to be zero.
Example: (0, 0, 16, 12)
2. Non-basic Solution – a solution obtained when at least one of the (n − m) variables is not equated to zero.
Example: (1, 1, 8, 6)
3. Feasible Solution – a solution where all values of the variables are non-negative.
Example: (1, 1, 8, 6)
Special Cases

4. Feasible Basic Solution – a basic solution where all values of the remaining m variables are non-negative.
Example: (0, 0, 16, 12)
5. Non-feasible Solution – a solution where at least one of the variables assumes a negative value.
Example: (3, 2, -5, -4)
6. Basic Variable – a variable that was not equated to zero.
7. Non-basic Variable – each of the (n − m) variables that were equated to zero.
8. Degenerate Solution – a solution where at least one of the basic variables has a value of zero.
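The definitions above can be applied mechanically. A small hypothetical helper (our own function, with the surplus variables X1, X2 at indices 0 and 1, as in the example):

```python
# Classify a solution tuple using the slide's definitions, assuming the
# non-basic (surplus) variables are at the given indices.

def classify(x, nonbasic=(0, 1)):
    labels = []
    labels.append("basic" if all(x[i] == 0 for i in nonbasic)
                  else "non-basic")
    labels.append("feasible" if all(v >= 0 for v in x)
                  else "non-feasible")
    return labels

print(classify((0, 0, 16, 12)))   # basic, feasible
print(classify((1, 1, 8, 6)))     # non-basic, feasible
print(classify((3, 2, -5, -4)))   # non-basic, non-feasible
```

This matches the worked examples: (0, 0, 16, 12) is the feasible basic solution, while (3, 2, -5, -4) is non-feasible.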
Exercise
 For what values of a does the linear system
x + y = 3
5x + 5y = a
have (a) no solution, (b) exactly one solution, (c) infinitely many solutions?
Exercise

 Given:
2X1 + X2 + X3 – 3X4 = 5
-2X1 + X2 - 2X3 + X4 = 2
Identify:
(a) a solution of the above system
(b) a basic feasible solution to the above
system
END OF PRESENTATION
