
SOLUTION OF EQUATIONS AND EIGEN VALUE PROBLEMS 25

13. (i) Write an iteration formula for finding the value of 1/N, where N is a real number.
(ii) Hence, evaluate 1/26, correct to four decimal places.
14. Find the root of the equation sin x = 1 + x^3, which lies in the interval (– 2, – 1), correct to
three decimal places.
15. Find the approximate root of x e^x = 3, correct to three decimal places.
In the following problems, find the root as specified using the iteration method/method of
successive approximations/fixed point iteration method.
16. Find the smallest positive root of x^2 – 5x + 1 = 0, correct to four decimal places.
17. Find the smallest positive root of x^5 – 64x + 30 = 0, correct to four decimal places.
18. Find the smallest negative root in magnitude of 3x^3 – x + 1 = 0, correct to four decimal
places.
19. Find the smallest positive root of x = e^(–x), correct to two decimal places.
20. Find the real root of the equation cos x = 3x – 1. (A.U. Nov./Dec. 2006)
21. The equation x^2 + ax + b = 0, has two real roots α and β. Show that the iteration method
(i) xk+1 = – (axk + b)/xk, is convergent near x = α, if | α | > | β |,
(ii) xk+1 = – b/(xk + a), is convergent near x = α, if | α | < | β |.

1.2 LINEAR SYSTEM OF ALGEBRAIC EQUATIONS

1.2.1 Introduction
Consider a system of n linear algebraic equations in n unknowns
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
... ... ... ...
an1x1 + an2x2 + ... + annxn = bn
where aij, i = 1, 2, ..., n, j = 1, 2, …, n, are the known coefficients, bi , i = 1, 2, …, n, are the known
right hand side values and xi , i = 1, 2, …, n are the unknowns to be determined.
In matrix notation we write the system as
Ax = b (1.33)

where

          [ a11  a12  ...  a1n ]          [ x1 ]          [ b1 ]
    A  =  [ a21  a22  ...  a2n ] ,   x =  [ x2 ] ,   b =  [ b2 ] .
          [ ...  ...  ...  ... ]          [ ...]          [ ...]
          [ an1  an2  ...  ann ]          [ xn ]          [ bn ]

The matrix [A | b], obtained by appending the column b to the matrix A is called the
augmented matrix. That is

            [ a11  a12  ...  a1n  b1 ]
    [A|b] = [ a21  a22  ...  a2n  b2 ]
            [ ...  ...  ...  ...  .. ]
            [ an1  an2  ...  ann  bn ]
We define the following.
(i) The system of equations (1.33) is consistent (has at least one solution), if
rank (A) = rank [A | b] = r.
If r = n, then the system has unique solution.
If r < n, then the system has an (n – r)-parameter family of infinitely many solutions.
(ii) The system of equations (1.33) is inconsistent (has no solution) if
rank (A) ≠ rank [A | b].
We assume that the given system is consistent.
The methods of solution of the linear algebraic system of equations (1.33) may be
classified as direct and iterative methods.
(a) Direct methods produce the exact solution after a finite number of steps (disregarding the
round-off errors). In these methods, we can determine the total number of operations (addi-
tions, subtractions, divisions and multiplications). This number is called the operational count
of the method.
(b) Iterative methods are based on the idea of successive approximations. We start with an
initial approximation to the solution vector x = x0, and obtain a sequence of approximate
vectors x0, x1, ..., xk, ..., which in the limit as k → ∞, converge to the exact solution vector x.
Now, we derive some direct methods.

1.2.2 Direct Methods


If the system of equations has some special forms, then the solution is obtained directly.
We consider two such special forms.
(a) Let A be a diagonal matrix, A = D. That is, we consider the system of equations
Dx = b as
a11x1 = b1
a22x2 = b2
... ... ... ... (1.34)
an–1, n – 1 xn– 1 = bn–1
annxn = bn
This system is called a diagonal system of equations. Solving directly, we obtain
xi = bi /aii , aii ≠ 0, i = 1, 2, ..., n. (1.35)
(b) Let A be an upper triangular matrix, A = U. That is, we consider the system of
equations Ux = b as

a11x1 + a12x2 + ... ... + a1n xn = b1


a22x2 + ... ... + a2n xn = b2
... ... ... ... (1.36)
an–1, n–1 xn–1 + an–1, n xn = bn–1
annxn = bn
This system is called an upper triangular system of equations. Solving for the unknowns
in the order xn, xn–1, ..., x1, we get
xn = bn/ann,
xn–1 = (bn–1 – an–1, nxn)/an–1, n–1,
... ... ... ...

x1 = ( b1 – Σ_{j=2}^{n} a1j xj ) / a11 .   (1.37)

The unknowns are obtained by back substitution and this procedure is called the back
substitution method.
Therefore, when the given system of equations is one of the above two forms, the solution
is obtained directly.
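The back substitution procedure described above translates directly into a short routine. The following is a minimal Python sketch (our illustration, not part of the text); `U` holds the upper triangular coefficients and `b` the right hand side:

```python
def back_substitute(U, b):
    """Solve the upper triangular system Ux = b by back substitution,
    working from the last equation upwards, as in (1.36)-(1.37)."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # i = n-1, ..., 0
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]             # requires U[i][i] != 0
    return x

# For example, 2x1 + x2 = 5, 4x2 = 8 gives x2 = 2, x1 = 1.5:
print(back_substitute([[2.0, 1.0], [0.0, 4.0]], [5.0, 8.0]))
```

Note that the loop runs over the unknowns in the order xn, xn–1, ..., x1, exactly as in the derivation above.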
Before we derive some direct methods, we define elementary row operations that can be
performed on the rows of a matrix.
Elementary row transformations (operations) The following operations on the rows of a
matrix A are called the elementary row transformations (operations).
(i) Interchange of any two rows. If we interchange the ith row with the jth row, then we
usually denote the operation as Ri ↔ Rj.
(ii) Division/multiplication of any row by a non-zero number p. If the ith row is multiplied by
p, then we usually denote this operation as pRi.
(iii) Adding/subtracting a scalar multiple of any row to any other row. If all the elements of
the jth row are multiplied by a scalar p and added to the corresponding elements of the ith
row, then, we usually denote this operation as Ri ← Ri + pRj. Note the order in which the
operation Ri + pRj is written. The elements of the jth row remain unchanged and the elements
of the ith row get changed.
These row operations change the form of A, but do not change the row-rank of A. The
matrix B obtained after the elementary row operations is said to be row equivalent with A. In
the context of the solution of the system of algebraic equations, the solution of the new system
is identical with the solution of the original system.
The above elementary operations performed on the columns of A (column C in place of
row R) are called elementary column transformations (operations). However, we shall be using
only the elementary row operations.

In this section, we derive two direct methods for the solution of the given system of
equations, namely, Gauss elimination method and Gauss-Jordan method.

1.2.2.1 Gauss Elimination Method


The method is based on the idea of reducing the given system of equations Ax = b, to an
upper triangular system of equations Ux = z, using elementary row operations. We know that
these two systems are equivalent. That is, the solutions of both the systems are identical. This
reduced system Ux = z, is then solved by the back substitution method to obtain the solution
vector x.
We illustrate the method using the 3 × 3 system
a11x1 + a12x2 + a13 x3 = b1
a21x1 + a22 x2 + a23 x3 = b2 (1.38)
a31x1 + a32 x2 + a33 x3 = b3
We write the augmented matrix [A | b] and reduce it to the following form
Gauss elimination
[A|b] → [U|z]
The augmented matrix of the system (1.38) is

    [ a11  a12  a13  b1 ]
    [ a21  a22  a23  b2 ]    (1.39)
    [ a31  a32  a33  b3 ]
First stage of elimination

We assume a11 ≠ 0. This element a11 in the 1 × 1 position is called the first pivot. We use
this pivot to reduce all the elements below this pivot in the first column as zeros. Multiply the
first row in (1.39) by a21/a11 and a31/a11 respectively and subtract from the second and third
rows. That is, we are performing the elementary row operations R2 – (a 21/a11)R1 and
R3 –(a31/a11)R1 respectively. We obtain the new augmented matrix as

    [ a11  a12      a13      b1      ]
    [ 0    a22^(1)  a23^(1)  b2^(1)  ]    (1.40)
    [ 0    a32^(1)  a33^(1)  b3^(1)  ]

where  a22^(1) = a22 – (a21/a11) a12,  a23^(1) = a23 – (a21/a11) a13,  b2^(1) = b2 – (a21/a11) b1,
       a32^(1) = a32 – (a31/a11) a12,  a33^(1) = a33 – (a31/a11) a13,  b3^(1) = b3 – (a31/a11) b1.

Second stage of elimination

We assume a22^(1) ≠ 0. This element a22^(1) in the 2 × 2 position is called the second pivot.
We use this pivot to reduce the element below this pivot in the second column to zero. Multiply
the second row in (1.40) by a32^(1)/a22^(1) and subtract from the third row. That is, we are
performing the elementary row operation R3 – (a32^(1)/a22^(1))R2. We obtain the new augmented
matrix as

    [ a11  a12      a13      b1      ]
    [ 0    a22^(1)  a23^(1)  b2^(1)  ]    (1.41)
    [ 0    0        a33^(2)  b3^(2)  ]

where  a33^(2) = a33^(1) – (a32^(1)/a22^(1)) a23^(1),   b3^(2) = b3^(1) – (a32^(1)/a22^(1)) b2^(1).

The element a33^(2) ≠ 0 is called the third pivot. This system is in the required upper
triangular form [U|z]. The solution vector x is now obtained by back substitution.

From the third row, we get x3 = b3^(2)/a33^(2).
From the second row, we get x2 = (b2^(1) – a23^(1) x3)/a22^(1).
From the first row, we get x1 = (b1 – a12 x2 – a13 x3)/a11.
In general, using a pivot, all the elements below that pivot in that column are made zeros.
Alternately, at each stage of elimination, we may also make the pivot as 1, by dividing
that particular row by the pivot.
Remark 15 When does the Gauss elimination method as described above fail? It fails when
any one of the pivots is zero or it is a very small number, as the elimination progresses. If a
pivot is zero, then division by it gives an overflow error, since division by zero is not defined. If a
pivot is a very small number, then division by it introduces large round-off errors and the
solution may contain large errors.
For example, we may have the system
2x2 + 5x3 = 7
7x1 + x2 – 2x3 = 6
2x1 + 3x2 + 8x3 = 13
in which the first pivot is zero.
Pivoting Procedures How do we avoid computational errors in Gauss elimination? To avoid
computational errors, we follow the procedure of partial pivoting. In the first stage of
elimination, the first column of the augmented matrix is searched for the largest element in
magnitude and brought as the first pivot by interchanging the first row of the augmented
matrix (first equation) with the row (equation) having the largest element in magnitude. In
the second stage of elimination, the second column is searched for the largest element in
magnitude among the n – 1 elements leaving the first element, and this element is brought as
the second pivot by interchanging the second row of the augmented matrix with the later row
having the largest element in magnitude. This procedure is continued until the upper triangular
system is obtained. Therefore, partial pivoting is done after every stage of elimination. There
is another procedure called complete pivoting. In this procedure, we search the entire matrix
A in the augmented matrix for the largest element in magnitude and bring it as the first pivot.

This requires not only an interchange of the rows, but also an interchange of the positions of
the variables. It is possible that the position of a variable is changed a number of times during
this pivoting. We need to keep track of the positions of all the variables. Hence, the procedure
is computationally expensive and is not used in any software.
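The elimination and partial pivoting steps described above can be sketched in Python as follows. This is an illustrative implementation of our own (the function name and list-based layout are not from the text):

```python
def gauss_solve(A, b):
    """Gauss elimination with partial pivoting, followed by back substitution.
    A is a list of rows, b the right hand side; returns the solution vector."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]     # augmented matrix [A|b]
    for k in range(n - 1):
        # partial pivoting: bring the largest |entry| in column k to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                    # zero entries below pivot
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n                                    # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

Applied to the system of Example 1.14 below, this sketch reproduces the solution x = (5, 6, –10, 8) found there.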
Remark 16 Gauss elimination method is a direct method. Therefore, it is possible to count
the total number of operations, that is, additions, subtractions, divisions and multiplications.
Without going into details, we mention that the total number of divisions and multiplications
(division and multiplication take the same amount of computer time) is n (n2 + 3n – 1)/3. The
total number of additions and subtractions (addition and subtraction take the same amount of
computer time) is n (n – 1)(2n + 5)/6.
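The counts quoted in Remark 16 are easy to tabulate. A small Python check of ours, for illustration:

```python
def mul_div_count(n):
    # total multiplications + divisions in Gauss elimination (Remark 16)
    return n * (n * n + 3 * n - 1) // 3

def add_sub_count(n):
    # total additions + subtractions in Gauss elimination (Remark 16)
    return n * (n - 1) * (2 * n + 5) // 6

for n in (3, 10, 100):
    print(n, mul_div_count(n), add_sub_count(n))
```

For n = 3 the formulas give 17 multiplications/divisions and 11 additions/subtractions; both counts grow like n^3/3 for large n.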
Remark 17 When the system of algebraic equations is large, how do we conclude that it is
consistent or not, using the Gauss elimination method? A way of determining the consistency
is from the form of the reduced system (1.41). We know that if the system is inconsistent then
rank (A) ≠ rank [A|b]. By checking the elements of the last rows, conclusion can be drawn
about the consistency or inconsistency.
Suppose that in (1.41), a33^(2) ≠ 0 and b3^(2) ≠ 0. Then, rank (A) = rank [A|b] = 3. The
system is consistent and has a unique solution.
Suppose that we obtain the reduced system as

    [ a11  a12      a13      b1      ]
    [ 0    a22^(1)  a23^(1)  b2^(1)  ] .
    [ 0    0        0        b3^(2)  ]
Then, rank (A) = 2, rank [A|b] = 3 and rank (A) ≠ rank [A|b]. Therefore, the system is
inconsistent and has no solution.
Suppose that we obtain the reduced system as

    [ a11  a12      a13      b1      ]
    [ 0    a22^(1)  a23^(1)  b2^(1)  ] .
    [ 0    0        0        0       ]
Then, rank (A) = rank [A|b] = 2 < 3. Therefore, the system has 3 – 2 = 1 parameter
family of infinite number of solutions.
Example 1.13 Solve the system of equations
x1 + 10x2 – x3 = 3
2x1 + 3x2 + 20x3 = 7
10x1 – x2 + 2x3 = 4
using the Gauss elimination with partial pivoting.
Solution We have the augmented matrix as

    [  1  10  –1  3 ]
    [  2   3  20  7 ]
    [ 10  –1   2  4 ]

We perform the following elementary row transformations and do the eliminations.

R1 ↔ R3 :
    [ 10  –1   2  4 ]
    [  2   3  20  7 ]
    [  1  10  –1  3 ]

R2 – (R1/5), R3 – (R1/10) :
    [ 10  –1     2     4   ]
    [  0   3.2  19.6   6.2 ]
    [  0  10.1  –1.2   2.6 ]

R2 ↔ R3 :
    [ 10  –1     2     4   ]
    [  0  10.1  –1.2   2.6 ]
    [  0   3.2  19.6   6.2 ]

R3 – (3.2/10.1)R2 :
    [ 10  –1     2          4       ]
    [  0  10.1  –1.2        2.6     ]
    [  0   0    19.98020    5.37624 ]
Back substitution gives the solution.
Third equation gives x3 = 5.37624/19.98020 = 0.26908.
Second equation gives x2 = (1/10.1)(2.6 + 1.2x3) = (1/10.1)(2.6 + 1.2(0.26908)) = 0.28940.
First equation gives x1 = (1/10)(4 + x2 – 2x3) = (1/10)(4 + 0.28940 – 2(0.26908)) = 0.37512.
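We can sanity-check the computed values by substituting them back into the original equations. The small verification script below is ours, for illustration; the residuals are nonzero only because the solution was rounded to five decimals:

```python
A = [[1, 10, -1], [2, 3, 20], [10, -1, 2]]
b = [3, 7, 4]
x = [0.37512, 0.28940, 0.26908]   # solution computed above

# residual of each equation: (row . x) - b_i, should be near zero
residuals = [sum(a * xi for a, xi in zip(row, x)) - bi
             for row, bi in zip(A, b)]
print(residuals)
```

Each residual comes out of order 10^–5, consistent with rounding to five decimal places.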
Example 1.14 Solve the system of equations
2x1 + x2 + x3 – 2x4 = – 10
4x1 + 2x3 + x4 = 8
3x1 + 2x2 + 2x3 = 7
x1 + 3x2 + 2x3 – x4 = – 5
using the Gauss elimination with partial pivoting.
Solution The augmented matrix is given by

    [ 2  1  1  –2  –10 ]
    [ 4  0  2   1    8 ]
    [ 3  2  2   0    7 ] .
    [ 1  3  2  –1   –5 ]
We perform the following elementary row transformations and do the eliminations.

R1 ↔ R2 :
    [ 4  0  2   1    8 ]
    [ 2  1  1  –2  –10 ]
    [ 3  2  2   0    7 ]
    [ 1  3  2  –1   –5 ]

R2 – (1/2)R1, R3 – (3/4)R1, R4 – (1/4)R1 :
    [ 4  0   2      1      8  ]
    [ 0  1   0    –5/2   –14  ]
    [ 0  2  1/2   –3/4     1  ]
    [ 0  3  3/2   –5/4    –7  ]

R2 ↔ R4 :
    [ 4  0   2      1      8  ]
    [ 0  3  3/2   –5/4    –7  ]
    [ 0  2  1/2   –3/4     1  ]
    [ 0  1   0    –5/2   –14  ]

R3 – (2/3)R2, R4 – (1/3)R2 :
    [ 4  0   2       1        8    ]
    [ 0  3  3/2    –5/4      –7    ]
    [ 0  0  –1/2    1/12     17/3  ]
    [ 0  0  –1/2  –25/12    –35/3  ]

R4 – R3 :
    [ 4  0   2       1        8    ]
    [ 0  3  3/2    –5/4      –7    ]
    [ 0  0  –1/2    1/12     17/3  ]
    [ 0  0   0    –13/6     –52/3  ]
Using back substitution, we obtain
x4 = (– 52/3)(– 6/13) = 8,
x3 = – 2 [17/3 – (1/12) x4] = – 2 [17/3 – (1/12)(8)] = – 10,
x2 = (1/3)[– 7 – (3/2) x3 + (5/4) x4] = (1/3)[– 7 – (3/2)(– 10) + (5/4)(8)] = 6,
x1 = (1/4)[8 – 2x3 – x4] = (1/4)[8 – 2(– 10) – 8] = 5.
Example 1.15 Solve the system of equations
3x1 + 3x2 + 4x3 = 20
2x1 + x2 + 3x3 = 13
x1 + x2 + 3x3 = 6
using the Gauss elimination method.
Solution Let us solve this problem by making the pivots as 1. The augmented matrix is given
by

    [ 3  3  4  20 ]
    [ 2  1  3  13 ] .
    [ 1  1  3   6 ]
We perform the following elementary row transformations and do the eliminations.

R1/3 :
    [ 1  1  4/3  20/3 ]
    [ 2  1   3    13  ]
    [ 1  1   3     6  ]

R2 – 2R1, R3 – R1 :
    [ 1   1   4/3   20/3 ]
    [ 0  –1   1/3  –1/3  ] .
    [ 0   0   5/3  –2/3  ]

Back substitution gives the solution as
x3 = (– 2/3)(3/5) = – 2/5,
x2 = 1/3 + (1/3) x3 = 1/3 + (1/3)(– 2/5) = 1/5,
x1 = 20/3 – x2 – (4/3) x3 = 20/3 – 1/5 + 8/15 = 105/15 = 7.
Example 1.16 Test the consistency of the following system of equations
x1 + 10x2 – x3 = 3
2x1 + 3x2 + 20x3 = 7
9x1 + 22x2 + 79x3 = 45
using the Gauss elimination method.
Solution We have the augmented matrix as

    [ 1  10  –1   3 ]
    [ 2   3  20   7 ] .
    [ 9  22  79  45 ]
We perform the following elementary row transformations and do the eliminations.

R2 – 2R1, R3 – 9R1 :
    [ 1   10  –1   3 ]
    [ 0  –17  22   1 ]
    [ 0  –68  88  18 ]

R3 – 4R2 :
    [ 1   10  –1   3 ]
    [ 0  –17  22   1 ] .
    [ 0    0   0  14 ]
Now, rank [A] = 2, and rank [A|b] = 3. Therefore, the system is inconsistent and has no
solution.

1.2.2.2 Gauss-Jordan Method


The method is based on the idea of reducing the given system of equations Ax = b, to a
diagonal system of equations Ix = d, where I is the identity matrix, using elementary row
operations. We know that the solutions of both the systems are identical. This reduced system
gives the solution vector x. This reduction is equivalent to finding the solution as x = A–1b.
Gauss - Jordan method
[A|b] → [I | X]
In this case, after the eliminations are completed, we obtain the augmented matrix for
a 3 × 3 system as
    [ 1  0  0  d1 ]
    [ 0  1  0  d2 ]    (1.42)
    [ 0  0  1  d3 ]

and the solution is xi = di, i = 1, 2, 3.



Elimination procedure The first step is same as in Gauss elimination method, that is, we
make the elements below the first pivot as zeros, using the elementary row transformations.
From the second step onwards, we make the elements below and above the pivots as zeros
using the elementary row transformations. Lastly, we divide each row by its pivot so that the
final augmented matrix is of the form (1.42). Partial pivoting can also be used in the solution.
We may also make the pivots as 1 before performing the elimination.
Let us illustrate the method.
Example 1.17 Solve the following system of equations
x1 + x2 + x3 = 1
4x1 + 3x2 – x3 = 6
3x1 + 5x2 + 3x3 = 4
using the Gauss-Jordan method (i) without partial pivoting, (ii) with partial pivoting.
Solution We have the augmented matrix as
    [ 1  1   1  1 ]
    [ 4  3  –1  6 ]
    [ 3  5   3  4 ]
(i) We perform the following elementary row transformations and do the eliminations.

R2 – 4R1, R3 – 3R1 :
    [ 1   1   1  1 ]
    [ 0  –1  –5  2 ] .
    [ 0   2   0  1 ]

R1 + R2, R3 + 2R2 :
    [ 1   0    –4  3 ]
    [ 0  –1    –5  2 ] .
    [ 0   0   –10  5 ]

R1 – (4/10)R3, R2 – (5/10)R3 :
    [ 1   0     0    1    ]
    [ 0  –1     0   –1/2  ] .
    [ 0   0   –10    5    ]

Now, making the pivots as 1, ((– R2), (R3/(– 10))) we get
    [ 1  0  0   1    ]
    [ 0  1  0   1/2  ] .
    [ 0  0  1  –1/2  ]
Therefore, the solution of the system is x1 = 1, x2 = 1/2, x3 = – 1/2.
(ii) We perform the following elementary row transformations and do the elimination.

R1 ↔ R2 :
    [ 4  3  –1  6 ]
    [ 1  1   1  1 ] .
    [ 3  5   3  4 ]

R1/4 :
    [ 1  3/4  –1/4  3/2 ]
    [ 1  1     1    1   ] .
    [ 3  5     3    4   ]

R2 – R1, R3 – 3R1 :
    [ 1  3/4   –1/4   3/2  ]
    [ 0  1/4    5/4  –1/2  ] .
    [ 0  11/4  15/4  –1/2  ]

R2 ↔ R3 :
    [ 1  3/4   –1/4   3/2  ]
    [ 0  11/4  15/4  –1/2  ] .
    [ 0  1/4    5/4  –1/2  ]

R2/(11/4) :
    [ 1  3/4  –1/4    3/2   ]
    [ 0  1    15/11  –2/11  ] .
    [ 0  1/4   5/4   –1/2   ]

R1 – (3/4)R2, R3 – (1/4)R2 :
    [ 1  0  –14/11  18/11 ]
    [ 0  1   15/11  –2/11 ] .
    [ 0  0   10/11  –5/11 ]

R3/(10/11) :
    [ 1  0  –14/11  18/11 ]
    [ 0  1   15/11  –2/11 ] .
    [ 0  0    1     –1/2  ]

R1 + (14/11)R3, R2 – (15/11)R3 :
    [ 1  0  0   1   ]
    [ 0  1  0   1/2 ] .
    [ 0  0  1  –1/2 ]
Therefore, the solution of the system is x1 = 1, x2 = 1/2, x3 = – 1/2.
Remark 18 The Gauss-Jordan method looks very elegant as the solution is obtained directly.
However, it is computationally more expensive than Gauss elimination. For large n, the total
number of divisions and multiplications for Gauss-Jordan method is almost 1.5 times the
total number of divisions and multiplications required for Gauss elimination. Hence, we do
not normally use this method for the solution of the system of equations. The most important
application of this method is to find the inverse of a non-singular matrix. We present this
method in the following section.

1.2.2.3 Inverse of a Matrix by Gauss-Jordan Method


As given in Remark 18, the important application of the Gauss-Jordan method is to
find the inverse of a non-singular matrix A. We start with the augmented matrix of A with the
identity matrix I of the same order. When the Gauss-Jordan procedure is completed, we ob-
tain
Gauss - Jordan method
[A | I] → [I | A–1]
since, AA–1 = I.
Remark 19 Partial pivoting can also be done using the augmented matrix [A|I]. However,
we cannot first interchange the rows of A and then find the inverse. Then, we would be finding
the inverse of a different matrix.
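The [A | I] → [I | A^(–1)] procedure can be sketched compactly in Python. This is our illustration (the function name is our own); it applies partial pivoting on the augmented matrix, as Remark 19 allows:

```python
def gauss_jordan_inverse(A):
    """Invert a non-singular matrix by Gauss-Jordan elimination on [A | I]."""
    n = len(A)
    # augment A with the identity matrix of the same order
    M = [list(map(float, row)) + [float(i == j) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        # partial pivoting: bring the largest |entry| in column k up to row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]           # make the pivot 1
        for i in range(n):                       # zero column k above and below
            if i != k:
                m = M[i][k]
                M[i] = [a - m * c for a, c in zip(M[i], M[k])]
    return [row[n:] for row in M]                # right half is A^(-1)
```

Applied to the matrix of Example 1.19 below, this sketch reproduces the inverse found there.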
Example 1.18 Find the inverse of the matrix
    [ 1  1   1 ]
    [ 4  3  –1 ]
    [ 3  5   3 ]

using the Gauss-Jordan method (i) without partial pivoting, and (ii) with partial pivoting.
Solution Consider the augmented matrix

    [ 1  1   1  1  0  0 ]
    [ 4  3  –1  0  1  0 ]
    [ 3  5   3  0  0  1 ] .
(i) We perform the following elementary row transformations and do the eliminations.

R2 – 4R1, R3 – 3R1 :
    [ 1   1   1   1  0  0 ]
    [ 0  –1  –5  –4  1  0 ] .
    [ 0   2   0  –3  0  1 ]

– R2 :
    [ 1  1  1   1   0  0 ]
    [ 0  1  5   4  –1  0 ]
    [ 0  2  0  –3   0  1 ]

R1 – R2, R3 – 2R2 :
    [ 1  0   –4   –3   1  0 ]
    [ 0  1    5    4  –1  0 ] .
    [ 0  0  –10  –11   2  1 ]

R3/(– 10) :
    [ 1  0  –4  –3      1      0     ]
    [ 0  1   5   4     –1      0     ] .
    [ 0  0   1  11/10  –2/10  –1/10  ]

R1 + 4R3, R2 – 5R3 :
    [ 1  0  0   14/10    2/10  –4/10 ]
    [ 0  1  0  –15/10    0      5/10 ] .
    [ 0  0  1   11/10   –2/10  –1/10 ]
− 1/10

Therefore, the inverse of the given matrix is given by

    [  7/5    1/5   –2/5  ]
    [ –3/2    0      1/2  ] .
    [ 11/10  –1/5   –1/10 ]
(ii) We perform the following elementary row transformations and do the eliminations.

R1 ↔ R2 :
    [ 4  3  –1  0  1  0 ]
    [ 1  1   1  1  0  0 ] .
    [ 3  5   3  0  0  1 ]

R1/4 :
    [ 1  3/4  –1/4  0  1/4  0 ]
    [ 1  1     1    1  0    0 ] .
    [ 3  5     3    0  0    1 ]

R2 – R1, R3 – 3R1 :
    [ 1  3/4   –1/4  0   1/4  0 ]
    [ 0  1/4    5/4  1  –1/4  0 ] .
    [ 0  11/4  15/4  0  –3/4  1 ]

R2 ↔ R3 :
    [ 1  3/4   –1/4  0   1/4  0 ]
    [ 0  11/4  15/4  0  –3/4  1 ] .
    [ 0  1/4    5/4  1  –1/4  0 ]

R2/(11/4) :
    [ 1  3/4  –1/4   0   1/4    0    ]
    [ 0  1    15/11  0  –3/11   4/11 ] .
    [ 0  1/4   5/4   1  –1/4    0    ]

R1 – (3/4)R2, R3 – (1/4)R2 :
    [ 1  0  –14/11  0   5/11  –3/11 ]
    [ 0  1   15/11  0  –3/11   4/11 ] .
    [ 0  0   10/11  1  –2/11  –1/11 ]

R3/(10/11) :
    [ 1  0  –14/11  0       5/11  –3/11 ]
    [ 0  1   15/11  0      –3/11   4/11 ] .
    [ 0  0    1     11/10  –1/5   –1/10 ]

R1 + (14/11)R3, R2 – (15/11)R3 :
    [ 1  0  0   7/5    1/5   –2/5  ]
    [ 0  1  0  –3/2    0      1/2  ] .
    [ 0  0  1  11/10  –1/5   –1/10 ]

Therefore, the inverse of the matrix is given by

    [  7/5    1/5   –2/5  ]
    [ –3/2    0      1/2  ] .
    [ 11/10  –1/5   –1/10 ]
Example 1.19 Using the Gauss-Jordan method, find the inverse of

    [ 2  2  3 ]
    [ 2  1  1 ] .    (A.U. Apr./May 2004)
    [ 1  3  5 ]
Solution We have the following augmented matrix.

    [ 2  2  3  1  0  0 ]
    [ 2  1  1  0  1  0 ] .
    [ 1  3  5  0  0  1 ]
We perform the following elementary row transformations and do the eliminations.

R1/2 :
    [ 1  1  3/2  1/2  0  0 ]
    [ 2  1   1    0   1  0 ] .
    [ 1  3   5    0   0  1 ]

R2 – 2R1, R3 – R1 :
    [ 1   1   3/2   1/2  0  0 ]
    [ 0  –1  –2    –1    1  0 ] .
    [ 0   2   7/2  –1/2  0  1 ]

R2 ↔ R3. Then, R2/2 :
    [ 1   1   3/2   1/2   0  0   ]
    [ 0   1   7/4  –1/4   0  1/2 ] .
    [ 0  –1  –2    –1     1  0   ]

R1 – R2, R3 + R2 :
    [ 1  0  –1/4   3/4   0  –1/2 ]
    [ 0  1   7/4  –1/4   0   1/2 ] .
    [ 0  0  –1/4  –5/4   1   1/2 ]

R3/(– 1/4) :
    [ 1  0  –1/4   3/4   0  –1/2 ]
    [ 0  1   7/4  –1/4   0   1/2 ] .
    [ 0  0   1     5    –4  –2   ]

R1 + (1/4)R3, R2 – (7/4)R3 :
    [ 1  0  0   2  –1  –1 ]
    [ 0  1  0  –9   7   4 ] .
    [ 0  0  1   5  –4  –2 ]

Therefore, the inverse of the given matrix is given by

    [  2  –1  –1 ]
    [ –9   7   4 ] .
    [  5  –4  –2 ]

REVIEW QUESTIONS

1. What is a direct method for solving a linear system of algebraic equations Ax = b ?


Solution Direct methods produce the solutions in a finite number of steps. The number
of operations, called the operational count, can be calculated.
2. What is an augmented matrix of the system of algebraic equations Ax = b ?
Solution The augmented matrix is denoted by [A | b], where A and b are the coeffi-
cient matrix and right hand side vector respectively. If A is an n × n matrix and b is an
n × 1 vector, then the augmented matrix is of order n × (n + 1).
3. Define the rank of a matrix.
Solution The number of linearly independent rows/columns of a matrix define the row-
rank/column-rank of that matrix. We note that row-rank = column-rank = rank.
4. Define consistency and inconsistency of a linear system of algebraic equations Ax = b.
Solution Let the augmented matrix of the system be [A | b].
(i) The system of equations Ax = b is consistent (has at least one solution), if
rank (A) = rank [A | b] = r.
If r = n, then the system has unique solution.
If r < n, then the system has (n – r) parameter family of infinite number of solutions.
(ii) The system of equations Ax = b is inconsistent (has no solution) if
rank (A) ≠ rank [A | b].
5. Define elementary row transformations.
Solution We define the following operations as elementary row transformations.
(i) Interchange of any two rows. If we interchange the ith row with the jth row, then we
usually denote the operation as Ri ↔ Rj.
(ii) Division/multiplication of any row by a non-zero number p. If the ith row is multi-
plied by p, then we usually denote this operation as pRi.

(iii) Adding/subtracting a scalar multiple of any row to any other row. If all the elements
of the jth row are multiplied by a scalar p and added to the corresponding elements
of the ith row, then, we usually denote this operation as Ri ← Ri + pRj. Note the
order in which the operation Ri + pRj is written. The elements of the jth row remain
unchanged and the elements of the ith row get changed.
6. Which direct methods do we use for (i) solving the system of equations Ax = b, and (ii)
finding the inverse of a square matrix A?
Solution (i) Gauss elimination method and Gauss-Jordan method. (ii) Gauss-Jordan
method.
7. Describe the principle involved in the Gauss elimination method.
Solution The method is based on the idea of reducing the given system of equations Ax
= b, to an upper triangular system of equations Ux = z, using elementary row opera-
tions. We know that these two systems are equivalent. That is, the solutions of both the
systems are identical. This reduced system Ux = z, is then solved by the back substitu-
tion method to obtain the solution vector x.
8. When does the Gauss elimination method fail?
Solution Gauss elimination method fails when any one of the pivots is zero or it is a
very small number, as the elimination progresses. If a pivot is zero, then division by it
gives over flow error, since division by zero is not defined. If a pivot is a very small
number, then division by it introduces large round off errors and the solution may con-
tain large errors.
9. How do we avoid computational errors in Gauss elimination?
Solution To avoid computational errors, we follow the procedure of partial pivoting. In
the first stage of elimination, the first column of the augmented matrix is searched for
the largest element in magnitude and brought as the first pivot by interchanging the
first row of the augmented matrix (first equation) with the row (equation) having the
largest element in magnitude. In the second stage of elimination, the second column is
searched for the largest element in magnitude among the n – 1 elements leaving the
first element, and this element is brought as the second pivot by interchanging the
second row of the augmented matrix with the later row having the largest element in
magnitude. This procedure is continued until the upper triangular system is obtained.
Therefore, partial pivoting is done after every stage of elimination.
10. Define complete pivoting in Gauss elimination.
Solution In this procedure, we search the entire matrix A in the augmented matrix for
the largest element in magnitude and bring it as the first pivot. This requires not only
an interchange of the equations, but also an interchange of the positions of the vari-
ables. It is possible that the position of a variable is changed a number of times during
this pivoting. We need to keep track of the positions of all the variables. Hence, the
procedure is computationally expensive and is not used in any software.
11. Describe the principle involved in the Gauss-Jordan method for finding the inverse of a
square matrix A.

Solution We start with the augmented matrix of A with the identity matrix I of the
same order. When the Gauss-Jordan elimination procedure using elementary row trans-
formations is completed, we obtain
Gauss - Jordan method
[A | I] → [I | A–1]
since, AA–1 = I.
12. Can we use partial pivoting in Gauss-Jordan method?
Solution Yes. Partial pivoting can also be done using the augmented matrix [A | I].
However, we cannot first interchange the rows of A and then find the inverse. Then, we
would be finding the inverse of a different matrix.

EXERCISE 1.2

Solve the following system of equations by Gauss elimination method.


1. 10x – 2y + 3z = 23
   2x + 10y – 5z = – 53
   3x – 4y + 10z = 33.
2. 3.15x – 1.96y + 3.85z = 12.95
   2.13x + 5.12y – 2.89z = – 8.61
   5.92x + 3.05y + 2.15z = 6.88.

3.  [ 2  2  1 ] [ x ]   [ 1 ]
    [ 4  3  3 ] [ y ] = [ 2 ] .
    [ 1  1  1 ] [ z ]   [ 3 ]

4.  [ 2  1  1  0 ] [ x1 ]   [  2  ]
    [ 4  0  2  2 ] [ x2 ] = [ – 3 ] .
    [ 3  2  2  1 ] [ x3 ]   [  1  ]
    [ 1  3  2  6 ] [ x4 ]   [  2  ]

Solve the following system of equations by Gauss-Jordan method.


5. 10x + y + z = 12
2x + 10y + z = 13 (A.U. Nov/Dec 2004)
x + y + 5z = 7.
6. x + 3y + 3z = 16
x + 4y + 3z = 18 (A.U. Apr/May 2005)
x + 3y + 4z = 19.
7. 10x – 2y + 3z = 23
   2x + 10y – 5z = – 53
   3x – 4y + 10z = 33.
8. x1 + x2 + x3 = 1
   4x1 + 3x2 – x3 = 6
   3x1 + 5x2 + 3x3 = 4.
Find the inverses of the following matrices by Gauss-Jordan method.

9.  [ 2  1  1 ]
    [ 3  2  3 ]    (A.U. Nov/Dec 2006)
    [ 1  4  9 ]

10. [  1   1   3 ]
    [  1   3  –3 ]    (A.U. Nov/Dec 2006)
    [ –2  –4  –4 ]

11. [ 2   2   6 ]
    [ 2   6  –6 ]
    [ 4  –8  –8 ]

12. [ 2   0  1 ]
    [ 3  –2  0 ]    (A.U. Nov/Dec 2005)
    [ 1   1  5 ]

Show that the following systems of equations are inconsistent using the Gauss elimination
method.
13. 2x1 + x2 – 3x3 = 0
    5x1 + 8x2 + x3 = 14
    4x1 + 13x2 + 11x3 = 25.
14. x1 – 3x2 + 4x3 = 2
    x1 + x2 – x3 = 0
    3x1 – x2 + 2x3 = 4.
Show that the following systems of equations have infinite number of solutions using the Gauss
elimination.
15. 2x1 + x2 – 3x3 = 0
    5x1 + 8x2 + x3 = 14
    4x1 + 13x2 + 11x3 = 28.
16. x1 + 5x2 – x3 = 0
    2x1 + 3x2 + x3 = 11
    5x1 + 11x2 + x3 = 22.

1.2.3 Iterative Methods


As discussed earlier, iterative methods are based on the idea of successive approximations.
We start with an initial approximation to the solution vector x = x0, to solve the system of
equations Ax = b, and obtain a sequence of approximate vectors x0, x1, ..., xk, ..., which in the
limit as k → ∞, converges to the exact solution vector x = A–1b. A general linear iterative
method for the solution of the system of equations Ax = b, can be written in matrix form as
x(k+1) = Hx(k) + c, k = 0, 1, 2, … (1.43)
where x(k+1) and x(k) are the approximations for x at the (k + 1)th and kth iterations respec-
tively. H is called the iteration matrix, which depends on A and c is a column vector, which
depends on A and b.
When to stop the iteration We stop the iteration procedure when the magnitudes of the
differences between the two successive iterates of all the variables are smaller than a given
accuracy or error tolerance or an error bound ε, that is,

| xi^(k+1) – xi^(k) | ≤ ε, for all i.    (1.44)

For example, if we require two decimal places of accuracy, then we iterate until
| xi^(k+1) – xi^(k) | < 0.005, for all i. If we require three decimal places of accuracy, then we iterate
until | xi^(k+1) – xi^(k) | < 0.0005, for all i.

Convergence property of an iterative method depends on the iteration matrix H.

Now, we derive two iterative methods for the solution of the system of algebraic equations
a11x1 + a12x2 + a13x3 = b1
a21x1 + a22x2 + a23x3 = b2 (1.45)
a31x1 + a32x2 + a33x3 = b3

1.2.3.1 Gauss-Jacobi Iteration Method


Sometimes, the method is called Jacobi method. We assume that the pivots aii ≠ 0, for
all i. Write the equations as

a11x1 = b1 – (a12x2 + a13x3)


a22x2 = b2 – (a21x1 + a23x3)
a33x3 = b3 – (a31x1 + a32x2)
The Jacobi iteration method is defined as

x1^(k+1) = (1/a11) [b1 – (a12 x2^(k) + a13 x3^(k))]
x2^(k+1) = (1/a22) [b2 – (a21 x1^(k) + a23 x3^(k))]
x3^(k+1) = (1/a33) [b3 – (a31 x1^(k) + a32 x2^(k))], k = 0, 1, 2, ...    (1.46)
Since, we replace the complete vector x(k) in the right hand side of (1.46) at the end of
each iteration, this method is also called the method of simultaneous displacement.
Remark 20 A sufficient condition for convergence of the Jacobi method is that the system of
equations is diagonally dominant, that is, the coefficient matrix A is diagonally dominant:
| aii | ≥ Σ_{j=1, j≠i}^{n} | aij |, for all i. Since this condition is sufficient but not necessary,
convergence may be obtained even if the system is not diagonally dominant. If the system is not
diagonally dominant, we may exchange
the equations, if possible, such that the new system is diagonally dominant and convergence
is guaranteed. However, such manual verification or exchange of equations may not be possible
for large systems that we obtain in application problems. The necessary and sufficient condition
for convergence is that the spectral radius of the iteration matrix H is less than one unit, that
is, ρ(H) < 1, where ρ(H) is the largest eigen value in magnitude of H. Testing of this condition
is beyond the scope of the syllabus.
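The diagonal dominance test of Remark 20 is easy to mechanise. A possible helper (our own sketch, not from the text) checks every row and requires strict inequality in at least one row:

```python
def is_diagonally_dominant(A):
    """Row test: |a_ii| >= sum of |a_ij| over j != i for every row,
    with strict inequality in at least one row."""
    n = len(A)
    strict = False
    for i in range(n):
        off_diagonal = sum(abs(A[i][j]) for j in range(n) if j != i)
        if abs(A[i][i]) < off_diagonal:
            return False
        if abs(A[i][i]) > off_diagonal:
            strict = True
    return strict

print(is_diagonally_dominant([[4, 1, 1], [1, 5, 2], [1, 2, 3]]))      # True (Example 1.20)
print(is_diagonally_dominant([[3, -6, 2], [-4, 1, -1], [1, -3, 7]]))  # False (Example 1.23)
```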
Remark 21 How do we find the initial approximations to start the iteration? If the system is
diagonally dominant, then the iteration converges for any initial solution vector. If no suitable
approximation is available, we can choose x = 0, that is, xi = 0 for all i. Then, the first iteration
gives xi = bi /aii, for all i.
Example 1.20 Solve the system of equations
4x1 + x2 + x3 = 2
x1 + 5x2 + 2x3 = – 6
x1 + 2x2 + 3x3 = – 4
using the Jacobi iteration method. Use the initial approximations as
(i) xi = 0, i = 1, 2, 3, (ii) x1 = 0.5, x2 = – 0.5, x3 = – 0.5.
Perform five iterations in each case.
Solution Note that the given system is diagonally dominant. Jacobi method gives the iterations
as
x1(k+1) = 0.25 [2 – (x2(k) + x3(k))]

x2(k+1) = 0.2 [– 6 – (x1(k) + 2x3(k))]


x3(k+1) = 0.33333 [– 4 – (x1(k) + 2x2(k))], k = 0, 1, ...
We have the following results.
(i) x1(0) = 0, x2(0) = 0, x3(0) = 0.
First iteration
x1(1) = 0.25 [2 – (x2(0) + x3(0))] = 0.5,
x2(1) = 0.2 [– 6 – (x1(0) + 2x3(0))] = – 1.2,
x3(1) = 0.33333 [– 4 – (x1(0) + 2x2(0))] = – 1.33333.
Second iteration
x1(2) = 0.25 [2 – (x2(1) + x3(1))] = 0.25 [2 – (– 1.2 – 1.33333)] = 1.13333,
x2(2) = 0.2 [– 6 – (x1(1) + 2x3(1))] = 0.2 [– 6 – (0.5 + 2(– 1.33333))] = – 0.76668,
x3(2) = 0.33333 [– 4 – (x1(1) + 2x2(1))] = 0.33333 [– 4 – (0.5 + 2(– 1.2))] = – 0.7.
Third iteration
x1(3) = 0.25 [2 – (x2(2) + x3(2))] = 0.25 [2 – (– 0.76668 – 0.7)] = 0.86667,
x2(3) = 0.2 [– 6 – (x1(2) + 2x3(2))] = 0.2 [– 6 – (1.13333 + 2(– 0.7))] = – 1.14667,
x3(3) = 0.33333 [– 4 – (x1(2) + 2x2(2))]
= 0.33333 [– 4 – (1.13333 + 2(– 0.76668))] = – 1.19998.
Fourth iteration
x1(4) = 0.25 [2 – (x2(3) + x3(3))] = 0.25 [2 – (– 1.14667 – 1.19999)] = 1.08666,
x2(4) = 0.2 [– 6 – (x1(3) + 2x3(3))] = 0.2 [– 6 – (0.86667 + 2(– 1.19998))] = – 0.89334,
x3(4) = 0.33333 [– 4 – (x1(3) + 2x2(3))]
= 0.33333 [– 4 – (0.86667 + 2(– 1.14667))] = – 0.85777.
Fifth iteration
x1(5) = 0.25 [2 – (x2(4) + x3(4))] = 0.25 [2 – (– 0.89334 – 0.85777)] = 0.93778,
x2(5) = 0.2 [– 6 – (x1(4) + 2x3(4))] = 0.2 [– 6 – (1.08666 + 2(– 0.85777))] = – 1.07422,
x3(5) = 0.33333 [– 4 – (x1(4) + 2x2(4))]
= 0.33333 [– 4 – (1.08666 + 2(– 0.89334))] = – 1.09998.
It is interesting to note that the iterations oscillate and converge to the exact solution
x1 = 1.0, x2 = – 1.0, x3 = – 1.0.
(ii) x1(0) = 0.5, x2(0) = – 0.5, x3(0) = – 0.5.
First iteration
x1(1) = 0.25 [2 – (x2(0) + x3(0))] = 0.25 [2 – (– 0.5 – 0.5)] = 0.75,
x2(1) = 0.2 [– 6 – (x1(0) + 2x3(0))] = 0.2 [– 6 – (0.5 + 2(– 0.5))] = – 1.1,
x3(1) = 0.33333 [– 4 – (x1(0) + 2x2(0))] = 0.33333 [– 4 – (0.5 + 2(– 0.5))] = – 1.16667.

Second iteration
x1(2) = 0.25 [2 – (x2(1) + x3(1))] = 0.25 [2 – (– 1.1 – 1.16667)] = 1.06667,
x2(2) = 0.2 [– 6 – (x1(1) + 2x3(1))] = 0.2 [– 6 – (0.75 + 2(– 1.16667))] = – 0.88333,
x3(2) = 0.33333 [– 4 – (x1(1) + 2x2(1))] = 0.33333 [– 4 – (0.75 + 2(– 1.1))] = – 0.84999.
Third iteration
x1(3) = 0.25 [2 – (x2(2) + x3(2))] = 0.25 [2 – (– 0.88333 – 0.84999)] = 0.93333,
x2(3) = 0.2 [– 6 – (x1(2) + 2x3(2))] = 0.2 [– 6 – (1.06667 + 2(– 0.84999))] = – 1.07334,
x3(3) = 0.33333 [– 4 – (x1(2) + 2x2(2))]
= 0.33333 [– 4 – (1.06667 + 2(– 0.88333))] = – 1.09999.
Fourth iteration
x1(4) = 0.25 [2 – (x2(3) + x3(3))] = 0.25 [2 – (– 1.07334 – 1.09999)] = 1.04333,
x2(4) = 0.2 [– 6 – (x1(3) + 2x3(3))] = 0.2 [– 6 – (0.93333 + 2(– 1.09999))] = – 0.94667,
x3(4) = 0.33333 [– 4 – (x1(3) + 2x2(3))]
= 0.33333 [– 4 – (0.93333 + 2(– 1.07334))] = – 0.92887.
Fifth iteration
x1(5) = 0.25 [2 – (x2(4) + x3(4))] = 0.25 [2 – (– 0.94667 – 0.92887)] = 0.96889,
x2(5) = 0.2 [– 6 – (x1(4) + 2x3(4))] = 0.2 [– 6 – (1.04333 + 2(– 0.92887))] = – 1.03712,
x3(5) = 0.33333 [– 4 – (x1(4) + 2x2(4))]
= 0.33333 [– 4 – (1.04333 + 2(– 0.94667))] = – 1.04999.
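The five iterations of case (i) can be checked with a short script. This is our own verification sketch; it uses the exact factor 1/3 where the text rounds to 0.33333, so the values agree with the table above to about four decimals.

```python
def jacobi_step(x):
    # One Jacobi sweep for the system of Example 1.20.
    x1, x2, x3 = x
    return (0.25 * (2 - (x2 + x3)),
            0.2 * (-6 - (x1 + 2 * x3)),
            (-4 - (x1 + 2 * x2)) / 3)

x = (0.0, 0.0, 0.0)
for _ in range(5):
    x = jacobi_step(x)
print([round(v, 5) for v in x])  # [0.93778, -1.07422, -1.1]
```

Continuing the loop past five sweeps shows the oscillating iterates settling at (1, – 1, – 1).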
Example 1.21 Solve the system of equations
26x1 + 2x2 + 2x3 = 12.6
3x1 + 27x2 + x3 = – 14.3
2x1 + 3x2 + 17x3 = 6.0
using the Jacobi iteration method. Obtain the result correct to three decimal places.
Solution The given system of equations is strongly diagonally dominant. Hence, we can
expect fast convergence. Jacobi method gives the iterations as
x1(k+1) = [12.6 – (2x2(k) + 2x3(k))]/26
x2(k+1) = [– 14.3 – (3x1(k) + x3(k))]/27
x3(k+1) = [6.0 – (2x1(k) + 3x2(k))]/17, k = 0, 1, ...
Choose the initial approximation as x1(0) = 0, x2(0) = 0, x3(0) = 0. We obtain the following
results.
First iteration
x1(1) = (1/26)[12.6 – (2x2(0) + 2x3(0))] = (1/26)[12.6] = 0.48462,
x2(1) = (1/27)[– 14.3 – (3x1(0) + x3(0))] = (1/27)[– 14.3] = – 0.52963,
x3(1) = (1/17)[6.0 – (2x1(0) + 3x2(0))] = (1/17)[6.0] = 0.35294.
Second iteration
x1(2) = (1/26)[12.6 – (2x2(1) + 2x3(1))] = (1/26)[12.6 – 2(– 0.52963 + 0.35294)] = 0.49821,
x2(2) = (1/27)[– 14.3 – (3x1(1) + x3(1))] = (1/27)[– 14.3 – (3(0.48462) + 0.35294)] = – 0.59655,
x3(2) = (1/17)[6.0 – (2x1(1) + 3x2(1))] = (1/17)[6.0 – (2(0.48462) + 3(– 0.52963))] = 0.38939.
Third iteration
x1(3) = (1/26)[12.6 – (2x2(2) + 2x3(2))] = (1/26)[12.6 – 2(– 0.59655 + 0.38939)] = 0.50006,
x2(3) = (1/27)[– 14.3 – (3x1(2) + x3(2))] = (1/27)[– 14.3 – (3(0.49821) + 0.38939)] = – 0.59941,
x3(3) = (1/17)[6.0 – (2x1(2) + 3x2(2))] = (1/17)[6.0 – (2(0.49821) + 3(– 0.59655))] = 0.39960.
Fourth iteration
x1(4) = (1/26)[12.6 – (2x2(3) + 2x3(3))] = (1/26)[12.6 – 2(– 0.59941 + 0.39960)] = 0.50000,
x2(4) = (1/27)[– 14.3 – (3x1(3) + x3(3))] = (1/27)[– 14.3 – (3(0.50006) + 0.39960)] = – 0.59999,
x3(4) = (1/17)[6.0 – (2x1(3) + 3x2(3))] = (1/17)[6.0 – (2(0.50006) + 3(– 0.59941))] = 0.39989.

We find | x1(4) – x1(3) | = | 0.5 – 0.50006 | = 0.00006,
| x2(4) – x2(3) | = | – 0.59999 + 0.59941 | = 0.00058,
| x3(4) – x3(3) | = | 0.39989 – 0.39960 | = 0.00029.

Three decimal places of accuracy have not been obtained at this iteration.
Fifth iteration
x1(5) = (1/26)[12.6 – (2x2(4) + 2x3(4))] = (1/26)[12.6 – 2(– 0.59999 + 0.39989)] = 0.50001,
x2(5) = (1/27)[– 14.3 – (3x1(4) + x3(4))] = (1/27)[– 14.3 – (3(0.50000) + 0.39989)] = – 0.60000,
x3(5) = (1/17)[6.0 – (2x1(4) + 3x2(4))] = (1/17)[6.0 – (2(0.50000) + 3(– 0.59999))] = 0.40000.
We find | x1(5) – x1(4) | = | 0.50001 – 0.5 | = 0.00001,
| x2(5) – x2(4) | = | – 0.6 + 0.59999 | = 0.00001,
| x3(5) – x3(4) | = | 0.4 – 0.39989 | = 0.00011.
Since all the errors in magnitude are less than 0.0005, the required solution is
x1 = 0.5, x2 = – 0.6, x3 = 0.4.
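The stopping test used in Example 1.21 can be automated. The sketch below (our own, not from the text) iterates the same scheme until every change falls below 0.0005 and confirms that this first happens at the fifth iteration.

```python
def step(x):
    # One Jacobi sweep for the system of Example 1.21.
    x1, x2, x3 = x
    return ((12.6 - 2 * (x2 + x3)) / 26,
            (-14.3 - (3 * x1 + x3)) / 27,
            (6.0 - (2 * x1 + 3 * x2)) / 17)

x, k = (0.0, 0.0, 0.0), 0
while True:
    x_new = step(x)
    k += 1
    # Stop when all components change by less than the three-decimal tolerance.
    if max(abs(a - c) for a, c in zip(x_new, x)) < 0.0005:
        break
    x = x_new
print(k, [round(v, 3) for v in x_new])  # 5 [0.5, -0.6, 0.4]
```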
Remark 22 What is the disadvantage of the Gauss-Jacobi method? At any iteration step, the
value of the first variable x1 is obtained using the values of the previous iteration. The value of
the second variable x2 is also obtained using the values of the previous iteration, even though
the updated value of x1 is available. In general, at every stage in the iteration, values of the
previous iteration are used even though the updated values of the previous variables are
available. If we use the updated values of x1, x2,..., xi–1 in computing the value of the variable
xi, then we obtain a new method called Gauss-Seidel iteration method.

1.2.3.2 Gauss-Seidel Iteration Method


As pointed out in Remark 22, we use the updated values of x1, x2,..., xi–1 in computing
the value of the variable xi. We assume that the pivots aii ≠ 0, for all i. We write the equations
as
a11x1 = b1 – (a12x2 + a13x3)
a22x2 = b2 – (a21x1 + a23x3)
a33x3 = b3 – (a31x1 + a32x2)
The Gauss-Seidel iteration method is defined as
x1(k+1) = (1/a11)[b1 – (a12x2(k) + a13x3(k))]
x2(k+1) = (1/a22)[b2 – (a21x1(k+1) + a23x3(k))]
x3(k+1) = (1/a33)[b3 – (a31x1(k+1) + a32x2(k+1))], k = 0, 1, 2, ... (1.47)
This method is also called the method of successive displacement.
We observe that (1.47) is the same as writing the given system as
a11x1(k+1) = b1 – (a12 x2(k) + a13 x3(k))
a21x1(k+1) + a22 x2(k+1) = b2 – a23 x3(k) (1.48)
a31x1(k+1) + a32 x2 (k+1) + a33x3(k+1) = b3
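A general n-variable version of (1.47) can be sketched as below. As with the Jacobi sketch, the function name and the tolerance-based stopping test are our own additions; the only change from the Jacobi sweep is that components are overwritten in place, so later rows see the freshly updated values.

```python
def gauss_seidel(A, b, x0, tol=0.0005, max_iter=100):
    """Sketch of the Gauss-Seidel (successive displacement) iteration.

    The updated values x_1, ..., x_{i-1} are used immediately when
    computing x_i, which is the only difference from the Jacobi sweep.
    """
    n = len(A)
    x = list(x0)
    for _ in range(max_iter):
        x_old = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]  # overwrite in place
        if all(abs(x[i] - x_old[i]) <= tol for i in range(n)):
            return x
    return x

# Strongly diagonally dominant system of Example 1.22 below.
x = gauss_seidel([[45, 2, 3], [-3, 22, 2], [5, 1, 20]], [58, 47, 67], [0.0, 0.0, 0.0])
print([round(v, 4) for v in x])  # close to [1.0, 2.0, 3.0]
```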

Remark 23 A sufficient condition for convergence of the Gauss-Seidel method is that the
system of equations is diagonally dominant, that is, the coefficient matrix A is diagonally
dominant. Since the condition is only sufficient, convergence may be obtained even if the
system is not diagonally dominant. If the system is not diagonally dominant, we may exchange
the equations, if possible, such that the new system is diagonally dominant and convergence
is guaranteed. The necessary and sufficient condition for convergence is that the spectral
radius of the iteration matrix H is less than one, that is, ρ(H) < 1, where ρ(H) is the largest
eigen value in magnitude of H. Testing of this condition is beyond the scope of the syllabus.
If both the Gauss-Jacobi and Gauss-Seidel methods converge, then Gauss-Seidel method
converges at least two times faster than the Gauss-Jacobi method.
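The relative speed of the two methods can be checked empirically. The sketch below (our own) counts the sweeps each method needs on the system of Example 1.22 for the same tolerance; on this example Gauss-Seidel needs noticeably fewer, consistent with the claim above.

```python
def iterate(A, b, seidel, tol=1e-6, max_iter=500):
    """Count sweeps until successive iterates differ by less than tol."""
    n = len(A)
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        old = list(x)
        new = list(x)
        for i in range(n):
            src = new if seidel else old  # Seidel reads freshly updated values
            s = sum(A[i][j] * src[j] for j in range(n) if j != i)
            new[i] = (b[i] - s) / A[i][i]
        x = new
        if max(abs(x[i] - old[i]) for i in range(n)) < tol:
            return k, x
    return max_iter, x

A = [[45, 2, 3], [-3, 22, 2], [5, 1, 20]]
b = [58, 47, 67]
kj, _ = iterate(A, b, seidel=False)
kg, _ = iterate(A, b, seidel=True)
print(kj, kg)  # Jacobi needs more sweeps than Gauss-Seidel here
```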
Example 1.22 Find the solution of the system of equations
45x1 + 2x2 + 3x3 = 58
– 3x1 + 22x2 + 2x3 = 47
5x1 + x2 + 20x3 = 67
correct to three decimal places, using the Gauss-Seidel iteration method.
Solution The given system of equations is strongly diagonally dominant. Hence, we can expect
fast convergence. Gauss-Seidel method gives the iteration
x1(k+1) = (58 – 2x2(k) – 3x3(k))/45,
x2(k+1) = (47 + 3x1(k+1) – 2x3(k))/22,
x3(k+1) = (67 – 5x1(k+1) – x2(k+1))/20.
Starting with x1(0) = 0, x2(0) = 0, x3(0) = 0, we get the following results.
First iteration
x1(1) = (58 – 2x2(0) – 3x3(0))/45 = 58/45 = 1.28889,
x2(1) = (47 + 3x1(1) – 2x3(0))/22 = (47 + 3(1.28889) – 2(0))/22 = 2.31212,
x3(1) = (67 – 5x1(1) – x2(1))/20 = (67 – 5(1.28889) – 2.31212)/20 = 2.91217.
Second iteration
x1(2) = (58 – 2x2(1) – 3x3(1))/45 = (58 – 2(2.31212) – 3(2.91217))/45 = 0.99198,
x2(2) = (47 + 3x1(2) – 2x3(1))/22 = (47 + 3(0.99198) – 2(2.91217))/22 = 2.00689,
x3(2) = (67 – 5x1(2) – x2(2))/20 = (67 – 5(0.99198) – 2.00689)/20 = 3.00166.
Third iteration
x1(3) = (58 – 2x2(2) – 3x3(2))/45 = (58 – 2(2.00689) – 3(3.00166))/45 = 0.99958,
x2(3) = (47 + 3x1(3) – 2x3(2))/22 = (47 + 3(0.99958) – 2(3.00166))/22 = 1.99979,
x3(3) = (67 – 5x1(3) – x2(3))/20 = (67 – 5(0.99958) – 1.99979)/20 = 3.00012.
Fourth iteration
x1(4) = (58 – 2x2(3) – 3x3(3))/45 = (58 – 2(1.99979) – 3(3.00012))/45 = 1.00000,
x2(4) = (47 + 3x1(4) – 2x3(3))/22 = (47 + 3(1.00000) – 2(3.00012))/22 = 1.99999,
x3(4) = (67 – 5x1(4) – x2(4))/20 = (67 – 5(1.00000) – 1.99999)/20 = 3.00000.

We find | x1(4) – x1(3) | = | 1.00000 – 0.99958 | = 0.00042,
| x2(4) – x2(3) | = | 1.99999 – 1.99979 | = 0.00020,
| x3(4) – x3(3) | = | 3.00000 – 3.00012 | = 0.00012.
Since all the errors in magnitude are less than 0.0005, the required solution is
x1 = 1.0, x2 = 1.99999, x3 = 3.0.
Rounding to three decimal places, we get x1 = 1.0, x2 = 2.0, x3 = 3.0.
Example 1.23 Computationally show that Gauss-Seidel method applied to the system of
equations
3x1 – 6x2 + 2x3 = 23
– 4x1 + x2 – x3 = – 8
x1 – 3x2 + 7x3 = 17
diverges. Take the initial approximations as x1 = 0.9, x2 = – 3.1, x3 = 0.9. Interchange the first
and second equations and solve the resulting system by the Gauss-Seidel method. Again take
the initial approximations as x1 = 0.9, x2 = – 3.1, x3 = 0.9, and obtain the result correct to two
decimal places. The exact solution is x1 = 1.0, x2 = – 3.0, x3 = 1.0.
Solution Note that the system of equations is not diagonally dominant. Gauss-Seidel method
gives the iteration
x1(k+1) = [23 + 6x2(k) – 2x3(k)]/3
x2(k+1) = [– 8 + 4x1(k+1) + x3(k)]
x3(k+1) = [17 – x1(k+1) + 3x2(k+1)]/7.


Starting with the initial approximations x1 = 0.9, x2 = – 3.1, x3 = 0.9, we obtain the
following results.
First iteration
x1(1) = [23 + 6x2(0) – 2x3(0)]/3 = [23 + 6(– 3.1) – 2(0.9)]/3 = 0.8667,
x2(1) = [– 8 + 4x1(1) + x3(0)] = [– 8 + 4(0.8667) + 0.9] = – 3.6332,
x3(1) = [17 – x1(1) + 3x2(1)]/7 = [17 – 0.8667 + 3(– 3.6332)]/7 = 0.7477.
Second iteration
x1(2) = [23 + 6x2(1) – 2x3(1)]/3 = [23 + 6(– 3.6332) – 2(0.7477)]/3 = – 0.0982,
x2(2) = [– 8 + 4x1(2) + x3(1)] = [– 8 + 4(– 0.0982) + 0.7477] = – 7.6451,
x3(2) = [17 – x1(2) + 3x2(2)]/7 = [17 + 0.0982 + 3(– 7.6451)]/7 = – 0.8339.
Third iteration
x1(3) = [23 + 6x2(2) – 2x3(2)]/3 = [23 + 6(– 7.6451) – 2(– 0.8339)]/3 = – 7.0676,
x2(3) = [– 8 + 4x1(3) + x3(2)] = [– 8 + 4(– 7.0676) – 0.8339] = – 37.1043,
x3(3) = [17 – x1(3) + 3x2(3)]/7 = [17 + 7.0676 + 3(– 37.1043)]/7 = – 12.4636.
It can be observed that the iterations are diverging very fast.
Now, we exchange the first and second equations to obtain the system
– 4x1 + x2 – x3 = – 8
3x1 – 6x2 + 2x3 = 23
x1 – 3x2 + 7x3 = 17.
The system of equations is now diagonally dominant. Gauss-Seidel method gives the iteration
x1(k+1) = [8 + x2(k) – x3(k)]/4
x2(k+1) = – [23 – 3x1(k+1) – 2x3(k)]/6
x3(k+1) = [17 – x1(k+1) + 3x2(k+1)]/7.
Starting with the initial approximations x1 = 0.9, x2 = – 3.1, x3 = 0.9, we obtain the
following results.
First iteration
x1(1) = [8 + x2(0) – x3(0)]/4 = [8 – 3.1 – 0.9]/4 = 1.0,
x2(1) = – [23 – 3x1(1) – 2x3(0)]/6 = – [23 – 3(1.0) – 2(0.9)]/6 = – 3.0333,
x3(1) = [17 – x1(1) + 3x2(1)]/7 = [17 – 1.0 + 3(– 3.0333)]/7 = 0.9857.
Second iteration
x1(2) = [8 + x2(1) – x3(1)]/4 = [8 – 3.0333 – 0.9857]/4 = 0.9953,
x2(2) = – [23 – 3x1(2) – 2x3(1)]/6 = – [23 – 3(0.9953) – 2(0.9857)]/6 = – 3.0071,
x3(2) = [17 – x1(2) + 3x2(2)]/7 = [17 – 0.9953 + 3(– 3.0071)]/7 = 0.9976.
Third iteration
x1(3) = [8 + x2(2) – x3(2)]/4 = [8 – 3.0071 – 0.9976]/4 = 0.9988,
x2(3) = – [23 – 3x1(3) – 2x3(2)]/6 = – [23 – 3(0.9988) – 2(0.9976)]/6 = – 3.0014,
x3(3) = [17 – x1(3) + 3x2(3)]/7 = [17 – 0.9988 + 3(– 3.0014)]/7 = 0.9996.
Fourth iteration
x1(4) = [8 + x2(3) – x3(3)]/4 = [8 – 3.0014 – 0.9996]/4 = 0.9998,
x2(4) = – [23 – 3x1(4) – 2x3(3)]/6 = – [23 – 3(0.9998) – 2(0.9996)]/6 = – 3.0002,
x3(4) = [17 – x1(4) + 3x2(4)]/7 = [17 – 0.9998 + 3(– 3.0002)]/7 = 0.9999.

We find | x1(4) – x1(3) | = | 0.9998 – 0.9988 | = 0.0010,
| x2(4) – x2(3) | = | – 3.0002 + 3.0014 | = 0.0012,
| x3(4) – x3(3) | = | 0.9999 – 0.9996 | = 0.0003.

Since all the errors in magnitude are less than 0.005, the required solution is
x1 = 0.9998, x2 = – 3.0002, x3 = 0.9999.
Rounding to two decimal places, we get x1 = 1.0, x2 = – 3.0, x3 = 1.0.
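Example 1.23 can be reproduced with a few lines. The helper below (our own sketch) runs a fixed number of Gauss-Seidel sweeps, first on the original non-dominant ordering, then on the system with the first two equations exchanged.

```python
def gs_sweeps(A, b, x, n_sweeps):
    """Run n_sweeps Gauss-Seidel sweeps in place and return the final iterate."""
    n = len(A)
    for _ in range(n_sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

x0 = [0.9, -3.1, 0.9]
# Original ordering: not diagonally dominant, the iterates blow up.
bad = gs_sweeps([[3, -6, 2], [-4, 1, -1], [1, -3, 7]], [23, -8, 17], list(x0), 3)
# First two equations exchanged: diagonally dominant, the iterates settle.
good = gs_sweeps([[-4, 1, -1], [3, -6, 2], [1, -3, 7]], [-8, 23, 17], list(x0), 4)
print(bad)   # roughly (-7.07, -37.10, -12.46), diverging fast
print(good)  # close to the exact solution (1, -3, 1)
```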

REVIEW QUESTIONS

1. Define an iterative procedure for solving a system of algebraic equations Ax = b. What


do we mean by convergence of an iterative procedure?
Solution A general linear iterative method for the solution of the system of equations
Ax = b can be written in matrix form as
x(k+1) = Hx(k) + c, k = 0, 1, 2, ...
where x(k+1) and x(k) are the approximations for x at the (k + 1)th and kth iterations
respectively. H is called the iteration matrix, which depends on A, and c is a column
vector, which depends on A and b. We start with an initial approximation to the solution
vector x = x0, and obtain a sequence of approximate vectors x0, x1, ..., xk, ... We say that the
iteration converges if in the limit as k → ∞, the sequence of approximate vectors x0,
x1,..., xk,... converge to the exact solution vector x = A–1 b.
2. How do we terminate an iterative procedure for the solution of a system of algebraic
equations Ax = b ?
Solution We terminate an iteration procedure when the magnitudes of the differences
between the two successive iterates of all the variables are smaller than a given accu-
racy or an error bound ε, that is,

| xi(k+1) – xi(k) | ≤ ε, for all i.
For example, if we require two decimal places of accuracy, then we iterate until
| xi(k+1) – xi(k) | < 0.005, for all i. If we require three decimal places of accuracy, then we
iterate until | xi(k+1) – xi(k) | < 0.0005, for all i.

3. What is the condition of convergence of an iterative procedure for the solution of a


system of linear algebraic equations Ax = b ?
Solution A sufficient condition for convergence of an iterative method is that the system
of equations is diagonally dominant, that is, the coefficient matrix A is diagonally
dominant: | aii | ≥ Σj≠i | aij |, for all i. Since the condition is only sufficient, convergence
may be obtained even if the system is not diagonally dominant. If the system is not
diagonally dominant, we may exchange the equations, if possible, such that the new
system is diagonally dominant and convergence is guaranteed. The necessary and
sufficient condition for convergence is that the spectral radius of the iteration matrix H
is less than one, that is, ρ(H) < 1, where ρ(H) is the largest eigen value in magnitude of H.
4. Which method, Gauss-Jacobi method or Gauss-Seidel method converges faster, for the
solution of a system of algebraic equations Ax = b ?
Solution If both the Gauss-Jacobi and Gauss-Seidel methods converge, then Gauss-
Seidel method converges at least two times faster than the Gauss-Jacobi method.

EXERCISE 1.3

Solve the following system of equations using the Gauss-Jacobi iteration method.
1. 20x + y – 2z = 17, 2. 27x + 6y – z = 85,
3x + 20y – z = – 18, x + y + 54z = 110,
2x – 3y + 20z = 25. (A.U. Nov/Dec 2006) 6x + 15y + 2z = 72. (A.U. May/June 2006)
3. x + 20y + z = – 18, 4. 10x + 4y – 2z = 20,
25x + y – 5z = 19, 3x + 12y – z = 28,
3x + 4y + 8z = 7. x + 4y + 7z = 2.
Solve the following system of equations using the Gauss-Seidel iteration method.
5. 27x + 6y – z = 85, 6. 4x + 2y + z = 14,
x + y + 54z = 110, x + 5y – z = 10,
6x + 15y + 2z = 72. x + y + 8z = 20.
(A.U. May/June 2006) (A.U. Apr/May 2005)
7. x + 3y + 52z = 173.61,
x – 27y + 2z = 71.31,
41x – 2y + 3z = 65.46. Start with x = 1, y = – 1, z = 3. (A.U. Apr/May 2004)
8. 20x – y – 2z = 17,
3x + 20y – z = – 18,
2x – 3y + 20z = 25. (A.U. Nov/Dec 2003)
9. x + 20y + z = – 18, 10. 10x + 4y – 2z = 20,
25x + y – 5z = 19, 3x + 12y – z = 28,
3x + 4y + 8z = 7. x + 4y + 7z = 2.

1.3 EIGEN VALUE PROBLEMS

1.3.1 Introduction
The concepts of eigen values and eigen vectors, and methods for finding them for a given
matrix, are very important for engineers and scientists.
Consider the eigen value problem
Ax = λ x. (1.49)
The eigen values of a matrix A are given by the roots of the characteristic equation
| A – λI | = 0. (1.50)
If the matrix A is of order n, then expanding the determinant, we obtain the character-
istic equation as
p(λ) = (– 1)n λn + a1λn–1 + ... + an–1 λ + an = 0. (1.51)
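For a 2 × 2 matrix the expansion (1.51) reduces to λ² – (a11 + a22)λ + (a11a22 – a12a21) = 0, so the eigen values follow directly from the trace and determinant via the quadratic formula. A small sketch (our own, assuming real eigen values):

```python
import math

def eigenvalues_2x2(A):
    """Roots of the characteristic equation lambda^2 - tr*lambda + det = 0."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    d = math.sqrt(tr * tr - 4 * det)  # assumes a non-negative discriminant
    return (tr + d) / 2, (tr - d) / 2

print(eigenvalues_2x2([[4, 1], [2, 3]]))  # (5.0, 2.0)
```

For larger matrices the characteristic polynomial is rarely formed explicitly; the iterative methods discussed later in the chapter are used instead.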
