Ma214 S23 Part06
Remark 6.4.2.
Observe that the Jacobi and Gauss-Seidel methods are stationary methods. In fact, we can construct our own stationary iterative method for a linear system of the form Ax = b through the following steps: choose a splitting A = M − N with M invertible, rewrite Ax = b as Mx = Nx + b, and iterate
$$x^{(k+1)} = M^{-1}\left(N x^{(k)} + b\right) = B x^{(k)} + c, \qquad k = 0, 1, 2, \cdots,$$
where B = M^{-1}N and c = M^{-1}b. The Jacobi method corresponds to taking M to be the diagonal part of A, and the Gauss-Seidel method to taking M to be the lower triangular part of A (including the diagonal).
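As an illustration, here is a minimal sketch of this construction in Python (the function name, the 2 × 2 test system, and the iteration count are our own choices, not from the notes):

```python
import numpy as np

def stationary_iteration(A, b, M, x0, num_iters=50):
    """Iterate x_{k+1} = B x_k + c with B = M^{-1} N and c = M^{-1} b,
    where A = M - N is a splitting of A with M invertible."""
    N = M - A
    B = np.linalg.solve(M, N)      # iteration matrix B = M^{-1} N
    c = np.linalg.solve(M, b)      # constant vector c = M^{-1} b
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        x = B @ x + c
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(A))            # Jacobi splitting: M = diagonal part of A
print(stationary_iteration(A, b, M, x0=[0.0, 0.0]))
# approaches the exact solution (1/11, 7/11), since rho(B) < 1 here
```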
Lemma 6.4.4.
For any subordinate matrix norm ‖·‖ : Mn(ℂ) → [0, ∞) and every A ∈ Mn(ℂ), we have ρ(A) ≤ ‖A‖.
Theorem 6.4.5.
For each A ∈ Mn(ℂ) and each ε > 0, there exists a subordinate matrix norm ‖·‖ such that
$$\rho(A) \le \|A\| < \rho(A) + \varepsilon.$$
Lemma 6.4.6.
Let B ∈ Mn(ℂ). Then ρ(B) < 1 if and only if lim_{k→∞} Bᵏ = 0.
Lemma 6.4.7.
Let B ∈ Mn(ℂ) with ρ(B) < 1. Then (I − B)⁻¹ exists and we have
$$(I - B)^{-1} = I + B + B^2 + \cdots.$$
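The identity can be checked numerically; the following sketch (our own, with an arbitrarily chosen B having ρ(B) < 1) truncates the series after 100 terms:

```python
import numpy as np

B = np.array([[0.5, 0.2],
              [0.1, 0.3]])
assert max(abs(np.linalg.eigvals(B))) < 1      # rho(B) < 1

partial_sum = np.zeros_like(B)
term = np.eye(2)
for _ in range(100):                           # I + B + B^2 + ... (truncated)
    partial_sum += term
    term = term @ B

print(np.allclose(partial_sum, np.linalg.inv(np.eye(2) - B)))   # True
```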
We now give the main convergence theorem for any stationary iterative method.
Proof.
We first assume that ρ(B) < 1 and prove that the sequence {x^(k)} converges for any initial guess x^(0).
$$x^{(k+1)} - x = \lambda_i^{k+1} v_i.$$
6.5 Exercises
Iterative Methods
1. Let A be a diagonally dominant matrix such that aᵢⱼ = 0 for every i, j ∈ {1, 2, ⋯, n} with i > j + 1. Does the naive Gaussian elimination method preserve the diagonal dominance? Justify your answer.
2. Let A be a diagonally dominant matrix. Show that all the diagonal elements of A are non-zero (i.e., aᵢᵢ ≠ 0 for i = 1, 2, ⋯, n). As a consequence, the iterative sequences of the Jacobi and Gauss-Seidel methods are well-defined if the coefficient matrix A in the linear system Ax = b is a diagonally dominant matrix.
3. Write the formula for the Jacobi iterative sequence of the system
Without performing the iterations, show that the sequence does not converge to
the exact solution of this system. Can you make a suitable interchange of rows
so that the resulting system is diagonally dominant?
4. Find the n×n matrix B and the n-dimensional vector c such that the Gauss-Seidel
method can be written in the form
x(k+1) = Bx(k) + c, k = 0, 1, 2, · · ·
5. Check for the convergence of the Jacobi and Gauss-Seidel methods for each of the following systems (a numerical cross-check is sketched after the systems).
i)
ii)
x1 − 2x2 + 2x3 = 1,
x1 + x2 − x3 = 1,
2x1 − 2x2 + x3 = 1.
iii)
x1 + x2 + 10x3 = −1,
2x1 + 3x2 + 5x3 = −6,
3x1 + 2x2 − 3x3 = 4.
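A numerical cross-check for such systems (not a substitute for the analysis the exercise asks for) is to compute the spectral radius of the two iteration matrices. The helper below is our own sketch, shown on the second system above:

```python
import numpy as np

def iteration_matrices(A):
    """Return the Jacobi and Gauss-Seidel iteration matrices of A."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)                   # strictly lower part of A
    U = np.triu(A, k=1)                    # strictly upper part of A
    B_jacobi = np.linalg.solve(D, -(L + U))
    B_gauss_seidel = np.linalg.solve(D + L, -U)
    return B_jacobi, B_gauss_seidel

# Second system above: x1 - 2 x2 + 2 x3 = 1, x1 + x2 - x3 = 1, 2 x1 - 2 x2 + x3 = 1.
A = np.array([[1.0, -2.0, 2.0],
              [1.0, 1.0, -1.0],
              [2.0, -2.0, 1.0]])
for name, B in zip(("Jacobi", "Gauss-Seidel"), iteration_matrices(A)):
    rho = max(abs(np.linalg.eigvals(B)))
    print(name, "converges" if rho < 1 else "diverges", f"(rho = {rho:.3f})")
```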
CHAPTER 7
Let A be an n × n matrix with real entries. Eigenvalues of A are defined as the roots
of the equation
det(λI − A) = 0. (7.1)
small perturbations in the coefficients of the characteristic polynomial can drastically change the eigenvalues that we are looking for. This is illustrated in the following example by Wilkinson, where we see that “the roots of polynomials are extremely sensitive to perturbations in the coefficients”.
The roots of the polynomial f(x) are 1, 2, ⋯, 10, and all these roots are simple roots. If we perturb this polynomial as F(x) = f(x) + 0.01 g(x), then all the roots lie in the interval [1, 3.5] (verified graphically). In fact, the largest root of the polynomial f(x) is 10, whereas the largest root of the polynomial F(x) is approximately equal to 3.398067. Thus, if the coefficient of x¹⁰ is perturbed by the small amount 0.01, the root 10 of f(x) moves a distance of approximately 6.6.
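This sensitivity is easy to reproduce; the sketch below (our own) takes g(x) = x¹⁰, as the remark about the coefficient of x¹⁰ suggests:

```python
import numpy as np

f = np.poly(np.arange(1, 11))   # coefficients of (x-1)(x-2)...(x-10)
F = f.copy()
F[0] += 0.01                    # perturb the coefficient of x^10 by 0.01

print(np.roots(f))              # the roots 1, 2, ..., 10
print(np.roots(F))              # the largest root drops to about 3.398
```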
Due to the two reasons discussed above, we look for an alternate method to compute the eigenvalues of a given matrix. One such method is the power method, which can be used to obtain the eigenvalue that is largest in magnitude among all the eigenvalues, together with a corresponding eigenvector. In Section 7.1, we present the power method and discuss the condition under which it can be applied. In Section 7.2 we prove the Gerschgorin theorem, which may be used as a tool to identify a class of matrices for which the power method can be applied successfully.
Remark 7.1.2.
A matrix may have a unique dominant eigenvalue or more than one dominant eigenvalue. Further, even if the dominant eigenvalue is unique, the corresponding algebraic and geometric multiplicities could be more than one, and the algebraic and geometric multiplicities may also differ. All these possibilities are illustrated in the following example.
Example 7.1.3.
1. The matrix
$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -2 & 1 \\ 0 & 0 & -1 \end{pmatrix}$$
has eigenvalues 1, −1, and −2. The matrix A has a unique dominant eigenvalue, namely −2, as it is the largest of all the eigenvalues in absolute value. Note that the dominant eigenvalue of A is a simple eigenvalue.
2. The matrix
$$B = \begin{pmatrix} 1 & 3 & 4 \\ 0 & 2 & 1 \\ 0 & 0 & -2 \end{pmatrix}$$
has eigenvalues 1, −2, and 2. According to our definition, the matrix B has
two dominant eigenvalues. They are −2 and 2. Note that both the dominant
eigenvalues of B are simple eigenvalues.
The matrix C1 has a unique dominant eigenvalue 2, which has algebraic mul-
tiplicity 2 and geometric multiplicity 1. The matrix C2 has a unique dominant
eigenvalue 2, whose algebraic and geometric multiplicities equal 2. The matrix
C3 has a unique dominant eigenvalue 2, which has algebraic multiplicity 2 and
geometric multiplicity 1.
As mentioned above, the power method is used to compute the dominant eigenvalue and a corresponding eigenvector of a given n × n matrix, provided this eigenvalue is unique. Thus, in the above examples, the power method can be used for the matrix A but not for B, even though B has distinct eigenvalues. Let us now detail the power method.
(H2) There exists a basis of ℝⁿ consisting of eigenvectors of A. That is, there exist v_1, v_2, ⋯, v_n satisfying Av_k = λ_k v_k for k = 1, 2, ⋯, n, and such that for each v ∈ ℝⁿ there exist unique real numbers c_1, c_2, ⋯, c_n such that
$$v = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n.$$
(H3) The initial guess x^(0) is chosen so that
$$x^{(0)} = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$$
for some scalars c_1, c_2, ⋯, c_n ∈ ℝ with c_1 ≠ 0 and x^(0) ∉ ⋃_{k=1}^{∞} Ker(Aᵏ).
$$x^{(0)} = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n, \qquad c_1 \neq 0.$$
• Using the assumption (7.2), we get |λ_j/λ_1| < 1 for j = 2, ⋯, n. Therefore, we have
$$\lim_{k\to\infty} \frac{A^k x^{(0)}}{\lambda_1^k} = c_1 v_1. \tag{7.5}$$
For c_1 ≠ 0, the right hand side of the above equation is a scalar multiple of the eigenvector v_1.
• From the above expression for Aᵏx^(0), we also see that
$$\lim_{k\to\infty} \frac{(A^{k+1} x^{(0)})_i}{(A^k x^{(0)})_i} = \lambda_1, \tag{7.6}$$
where i is any index such that the fractions on the left hand side are meaningful (which is the case when x^(0) ∉ ⋃_{k=1}^{∞} Ker(Aᵏ)).
The power method generates two sequences {µ_k} and {x^(k)}, using the results (7.5) and (7.6), that converge to the dominant eigenvalue λ_1 and a corresponding eigenvector v_1, respectively.
We now describe the steps involved in the power method for generating these two
sequences.
Step 2: Compute y^(1) = Ax^(0) and set µ_1 := y_i^(1), where i is an index such that ‖y^(1)‖∞ = |y_i^(1)|, and set
$$x^{(1)} := \frac{y^{(1)}}{\mu_1}.$$
Step 3: From x(1) , we can obtain µ2 and x(2) as in step 2.
Continue this procedure.
General form of the power method iterative sequences:
After choosing the initial vector x^(0) arbitrarily, we generate the sequences {µ_k} and {x^(k)} using the formulas
$$\mu_{k+1} = y_i^{(k+1)}, \qquad x^{(k+1)} = \frac{y^{(k+1)}}{\mu_{k+1}}, \tag{7.7}$$
where
$$y^{(k+1)} = A x^{(k)}, \qquad \|y^{(k+1)}\|_\infty = |y_i^{(k+1)}|, \tag{7.8}$$
for k = 0, 1, ⋯.
This iterative procedure is called the power method.
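A minimal sketch of the iteration (7.7)-(7.8) in Python (the function name and the stopping rule, a fixed number of iterations, are our own choices):

```python
import numpy as np

def power_method(A, x0, num_iters=25):
    """Iterates (7.7)-(7.8): y = A x, mu = the entry of y of largest
    modulus, x = y / mu.  Returns the final mu and x."""
    x = np.array(x0, dtype=float)
    mu = 0.0
    for _ in range(num_iters):
        y = A @ x                      # y^{(k+1)} = A x^{(k)}           (7.8)
        i = np.argmax(np.abs(y))       # index attaining the maximum norm
        mu = y[i]                      # mu_{k+1} = y_i^{(k+1)}          (7.7)
        x = y / mu                     # x^{(k+1)} = y^{(k+1)} / mu_{k+1}
    return mu, x
```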
Remark 7.1.4.
The scaling factor µ_k introduced in (7.7) makes sure that x^(k) has its maximum norm equal to 1, i.e., ‖x^(k)‖∞ = 1. This rules out the possibilities of lim_{k→∞} x^(k) being 0 or the vector x^(k) escaping to infinity.
We now discuss the sufficient conditions under which the power method converges.
$$x^{(0)} = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$$
for some scalars c_1, c_2, ⋯, c_n ∈ ℝ with c_1 ≠ 0 and x^(0) ∉ ⋃_{k=1}^{∞} Ker(Aᵏ).
Proof.
From the definition of x^(k+1), we have
$$x^{(k+1)} = m_{k+1} \lambda_1^{k+1} \left( c_1 v_1 + \sum_{j=2}^{n} \left(\frac{\lambda_j}{\lambda_1}\right)^{k+1} c_j v_j \right).$$
Taking the maximum norm on both sides and noting that ‖x^(k)‖∞ = 1 for every k, we get
$$1 = \left| m_{k+1} \lambda_1^{k+1} \right| \, \left\| c_1 v_1 + \sum_{j=2}^{n} \left(\frac{\lambda_j}{\lambda_1}\right)^{k+1} c_j v_j \right\|_\infty.$$
$$\lim_{k\to\infty} \mu_{k+1} = \lambda_1.$$
Note that the above theorem does not guarantee the convergence of the sequence {x^(k)} to an eigenvector of the dominant eigenvalue. However, if the dominant eigenvalue has an eigenvector with a unique dominant component, then this sequence converges, as discussed in the following theorem.
$$|v_{1j}| = \|v_1\|_\infty \tag{7.11}$$
Proof.
Let us first set up some notation. Let the eigenvectors v_1, v_2, ⋯, v_n be given by
$$v_i = (v_{i1}, v_{i2}, \cdots, v_{in})^T, \qquad i = 1, 2, \cdots, n.$$
Then
$$A x^{(0)} = c_1 \lambda_1 v_1 + c_2 \lambda_2 v_2 + \cdots + c_n \lambda_n v_n,$$
and more generally
$$A^k x^{(0)} = \left(\lambda_1^k c_1 v_{11} + \lambda_2^k c_2 v_{21} + \cdots + \lambda_n^k c_n v_{n1}, \; \cdots, \; \lambda_1^k c_1 v_{1n} + \lambda_2^k c_2 v_{2n} + \cdots + \lambda_n^k c_n v_{nn}\right)^T. \tag{7.14}$$
The maximum norm of the vector Aᵏx^(0) is going to be the modulus of one of its components. By (7.11), we have
$$\frac{|v_{1i}|}{|v_{1j}|} \le 1 \qquad \text{for } i = 1, 2, \cdots, n. \tag{7.15}$$
Observe that
$$\frac{(A^k x^{(0)})_i}{(A^k x^{(0)})_j} = \frac{\lambda_1^k c_1 v_{1i} + \lambda_2^k c_2 v_{2i} + \cdots + \lambda_n^k c_n v_{ni}}{\lambda_1^k c_1 v_{1j} + \lambda_2^k c_2 v_{2j} + \cdots + \lambda_n^k c_n v_{nj}} = \frac{v_{1i} + \frac{\lambda_2^k c_2}{\lambda_1^k c_1} v_{2i} + \cdots + \frac{\lambda_n^k c_n}{\lambda_1^k c_1} v_{ni}}{v_{1j} + \frac{\lambda_2^k c_2}{\lambda_1^k c_1} v_{2j} + \cdots + \frac{\lambda_n^k c_n}{\lambda_1^k c_1} v_{nj}}. \tag{7.16}$$
Note that the RHS in the equation (7.16) converges to v_{1i}/v_{1j} as k → ∞. Since
$$\left| \frac{v_{1i}}{v_{1j}} \right| < 1 \qquad \text{for } i \neq j, \tag{7.17}$$
we further obtain
$$\mu_k = \frac{(A^k x^{(0)})_j}{(A^{k-1} x^{(0)})_j} = \lambda_1 \, \frac{v_{1j} + \frac{\lambda_2^k c_2}{\lambda_1^k c_1} v_{2j} + \cdots + \frac{\lambda_n^k c_n}{\lambda_1^k c_1} v_{nj}}{v_{1j} + \frac{\lambda_2^{k-1} c_2}{\lambda_1^{k-1} c_1} v_{2j} + \cdots + \frac{\lambda_n^{k-1} c_n}{\lambda_1^{k-1} c_1} v_{nj}}. \tag{7.19}$$
Thus,
$$\lim_{k\to\infty} \mu_k = \lambda_1. \tag{7.20}$$
Further, we have
$$x^{(k)} = \frac{A^k x^{(0)}}{(A^k x^{(0)})_j} = \left( \frac{(A^k x^{(0)})_1}{(A^k x^{(0)})_j}, \cdots, \frac{(A^k x^{(0)})_{j-1}}{(A^k x^{(0)})_j}, 1, \frac{(A^k x^{(0)})_{j+1}}{(A^k x^{(0)})_j}, \cdots, \frac{(A^k x^{(0)})_n}{(A^k x^{(0)})_j} \right)^T.$$
Example 7.1.7.
Consider the matrix
$$A = \begin{pmatrix} 3 & 0 & 0 \\ -4 & 6 & 2 \\ 16 & -15 & -5 \end{pmatrix}.$$
The eigenvalues of this matrix are
λ1 = 3, λ2 = 1 and λ3 = 0.
Thus, the hypotheses (H1) and (H2) are satisfied. Choose the initial guess x^(0) = (1, 0.5, 0.25)ᵀ, which also satisfies the hypothesis (H3).
The first ten terms of the iterative sequences in the power method given by (7.7)-(7.8) were computed for the given matrix A.
These ten iterates suggest that the sequence {µ_k} converges to the eigenvalue λ_1 = 3 and the sequence {x^(k)} converges to (0.5, 0, 1)ᵀ = ½ v_1.
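Reusing the power_method sketch given after (7.7)-(7.8), one can reproduce this behaviour (the call below is our illustration; the limits 3 and (0.5, 0, 1)ᵀ are from the example):

```python
A = np.array([[ 3.0,   0.0,  0.0],
              [-4.0,   6.0,  2.0],
              [16.0, -15.0, -5.0]])
mu, x = power_method(A, x0=[1.0, 0.5, 0.25], num_iters=25)
print(mu)    # close to lambda_1 = 3
print(x)     # close to (0.5, 0, 1) = v_1 / 2
```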
1. The power method requires at the beginning that the matrix has only one dominant eigenvalue, and this information is generally unavailable.
2. Even when there is only one dominant eigenvalue, it is not clear how to choose the initial guess x^(0) such that it has a non-zero component (c_1 in the notation of the theorem) along the eigenvector v_1.
Note that in the above example, all the hypotheses are satisfied. Now let us ask the
question
“What happens when any of the hypotheses of power method is violated?”
We discuss these situations through examples.
Consider again the matrix B of Example 7.1.3, which has eigenvalues 1, −2, and 2. Clearly, the matrix B has two dominant eigenvalues, namely −2 and 2. We start with an initial guess x^(0) = (1, 1, 1)ᵀ; the first five iterates generated by the power method oscillate and show no sign of converging.
Thus we conclude that the power method when applied to a matrix which has more
than one dominant eigenvalue may not converge.
Let us now illustrate the situation when the hypothesis (H3) is violated.
Example 7.1.11 [Failure of hypothesis (H3): Initial guess x(0) is such that c1 = 0].
The matrix under consideration has eigenvalues λ_1 = 3, λ_2 = 1 and λ_3 = 0. The initial guess x^(0) is chosen so that c_1 = 0; in exact arithmetic the iterates then have no component along v_1.
This makes the iteration converge to λ_2, which is the next dominant eigenvalue.
Remark 7.1.12.
It is important that we understand the hypothesis (H3) on the initial guess x^(0) correctly. Note that (H3) says that the coefficient of v_1 (which was denoted by c_1) should be non-zero when x^(0) is expressed as
$$x^{(0)} = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n.$$
It would be incorrect to read (H3) as saying that the power method will converge to the second dominant eigenvalue whenever the first coordinate of the initial guess is zero. However, we may expect this to happen if c_1 = 0. The following example illustrates this fact.
Example 7.1.13.
Consider the matrix
$$A = \begin{pmatrix} 91.4 & -22.0 & -44.8 \\ 175.2 & -41.0 & -86.4 \\ 105.2 & -26.0 & -51.4 \end{pmatrix}.$$
Note that the matrix A satisfies the hypothesis (H1), since −5 is the unique dominant eigenvalue and it is also a simple eigenvalue. The matrix A satisfies the hypothesis (H2), as all eigenvalues are distinct and hence the eigenvectors form a basis for ℝ³. Thus the fate of the power method iterates depends solely on the choice of the initial guess x^(0) and whether it satisfies the hypothesis (H3).
• Let us take the initial guess x^(0) = (1, 0.5, 0.25)ᵀ. Note that c_1 ≠ 0 for this initial guess. Thus the initial guess satisfies the hypothesis (H3), and the iterative sequences generated by the power method converge to the dominant eigenvalue λ_1 = −5 and the corresponding eigenvector (up to a scalar multiple) (1/5) v_1.
• Let us take the initial guess x^(0) = (0, 0.5, 0.25)ᵀ. Note that c_1 ≠ 0 for this initial guess. Thus the initial guess satisfies the hypothesis (H3), and the iterative sequences generated by the power method converge to the dominant eigenvalue λ_1 = −5 and the corresponding eigenvector (up to a scalar multiple) (1/5) v_1. Compare this with Example 7.1.11. In the present case the first coordinate of the initial guess vector is zero, just as in Example 7.1.11. In Example 7.1.11 the power method iterates converged to the second dominant eigenvalue and the corresponding eigenvector, which does not happen in the present case. The reason is that in Example 7.1.11, c_1 = 0 for the initial guess chosen, whereas in the current example c_1 ≠ 0.
If A is invertible and its eigenvalue λ_n of smallest magnitude satisfies 0 < |λ_n| < |λ_{n−1}|, then 1/λ_n is the unique dominant eigenvalue of A⁻¹, and we can apply the power method to the matrix A⁻¹ to approximate the eigenvalue λ_n, provided the other two hypotheses of Theorem 7.1.5 are satisfied.
The main difficulty with this idea is computing A⁻¹. Instead of first computing A⁻¹ and then applying the power method, we can obtain an LU factorization of A once and then solve the system
$$A y^{(k+1)} = x^{(k)}$$
at every iteration of the power method using one forward substitution and one backward substitution. Keeping all the other steps of the power method unchanged, the resulting method is the inverse power method for computing the eigenvalue of A having the smallest absolute value and a corresponding eigenvector.
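A minimal sketch of this procedure (our own; it uses SciPy's lu_factor/lu_solve for the one-time factorization and the per-iteration triangular solves):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power_method(A, x0, num_iters=25):
    """Power method applied to A^{-1}: factor A = LU once, then solve
    A y^{(k+1)} = x^{(k)} at each step instead of forming A^{-1}."""
    lu, piv = lu_factor(A)               # one-time LU factorization
    x = np.array(x0, dtype=float)
    mu = 0.0
    for _ in range(num_iters):
        y = lu_solve((lu, piv), x)       # one forward + one backward substitution
        i = np.argmax(np.abs(y))
        mu = y[i]                        # converges to 1 / lambda_n
        x = y / mu
    return 1.0 / mu, x                   # approximation to lambda_n, eigenvector
```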
where
$$\rho_k = \sum_{\substack{j=1 \\ j \neq k}}^{n} |a_{kj}|, \tag{7.21}$$
and D_k denotes the closed disk in the complex plane with centre a_kk and radius ρ_k, i.e.,
$$D_k = \{\, z \in \mathbb{C} : |z - a_{kk}| \le \rho_k \,\}. \tag{7.22}$$
1. Each eigenvalue of A lies in one of the disks D_k. That is, no eigenvalue of A lies in ℂ \ ⋃_{k=1}^{n} D_k.
2. Suppose that among the disks D1 , D2 , · · · , Dn , there is a collection of m disks
whose union (denoted by R1 ) is disjoint from the union of the rest of the
n − m disks (denoted by R2 ). Then exactly m eigenvalues lie in R1 and n − m
eigenvalues lie in R2 (here each eigenvalue is counted as many times as its
algebraic multiplicity).
The disks D_k, k = 1, 2, ⋯, n, are called the Gerschgorin disks.
Proof.
We will prove only part (1), as it is easy; the proof of part (2) is beyond the scope of this course.
Let λ be an eigenvalue of A. Then there exists a v = (v_1, v_2, ⋯, v_n) ∈ ℝⁿ, v ≠ 0, such that
$$Av = \lambda v. \tag{7.23}$$
Let 1 ≤ r ≤ n be such that |v_r| = max{|v_1|, |v_2|, ⋯, |v_n|}. The r-th equation of the system (7.23), written as Av − λv = 0, reads
$$\sum_{j=1}^{n} a_{rj} v_j - \lambda v_r = 0,$$
so that
$$|\lambda - a_{rr}| \le \sum_{\substack{j=1 \\ j \neq r}}^{n} |a_{rj}| \, \frac{|v_j|}{|v_r|} \le \sum_{\substack{j=1 \\ j \neq r}}^{n} |a_{rj}|. \tag{7.26}$$
Observe that the right hand side of the inequality (7.26) is ρr . This proves that
λ ∈ Dr .
Example 7.2.2.
For the matrix
$$\begin{pmatrix} 4 & 1 & 1 \\ 0 & 2 & 1 \\ -2 & 0 & 9 \end{pmatrix},$$
the Gerschgorin disks, obtained using (7.22), are
$$D_1 = \{ z : |z - 4| \le 2 \}, \quad D_2 = \{ z : |z - 2| \le 1 \}, \quad D_3 = \{ z : |z - 9| \le 2 \}.$$
Draw a picture of these disks and observe that D_3 intersects neither D_1 nor D_2. By part (2) of Theorem 7.2.1, D_3 contains one eigenvalue and D_1 ∪ D_2 contains two eigenvalues, counting multiplicities. Note that the eigenvalues are approximately 4.6318, 1.8828 ∈ D_1 ∪ D_2 and 8.4853 ∈ D_3.
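These disks and eigenvalues can be checked numerically; the snippet below (our own) computes the centres, the radii ρ_k, and the eigenvalues:

```python
import numpy as np

A = np.array([[ 4.0, 1.0, 1.0],
              [ 0.0, 2.0, 1.0],
              [-2.0, 0.0, 9.0]])
centres = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centres)   # rho_k = sum of |a_kj|, j != k
print(list(zip(centres, radii)))   # disks: (4, 2), (2, 1), (9, 2)
print(np.linalg.eigvals(A))        # approx 4.6318, 1.8828, 8.4853
```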
Remark 7.2.3.
Gerschgorin's circle theorem is helpful in finding bounds for eigenvalues. For the matrix in Example 7.2.2, any number z in D_1 satisfies |z| ≤ 6. Similarly, any number z in D_2 satisfies |z| ≤ 3, and any number z in D_3 satisfies |z| ≤ 11. Since any eigenvalue λ lies in one of the three disks, we can conclude that |λ| ≤ 11.
Remark 7.2.4.
The main disadvantage of the power method discussed in Section 7.1 is that if a given matrix has more than one dominant eigenvalue, then the method may not converge. So, for a given matrix, we do not know in advance whether the power method will converge. Also, as the power method is reasonably slow (see Example 7.1.13 for an illustration), we may have to perform a reasonably large number of iterations before discovering that the method is not actually converging.
Thus, a tool to determine whether a given matrix has a unique dominant eigenvalue is highly desirable. The Gerschgorin theorem (Theorem 7.2.1) can sometimes be used to see whether the power method is applicable to a given matrix. For instance, in Example 7.2.2, we see that the power method can be used to obtain an approximation to the dominant eigenvalue.
Since a matrix A and its transpose (denoted by Aᵀ) have the same eigenvalues, we can apply the Gerschgorin Circle Theorem to Aᵀ and conclude the following corollary.
Corollary 7.2.5.
Let A be an n × n matrix. For each k = 1, 2, · · · , n, define τk by
$$\tau_k = \sum_{\substack{j=1 \\ j \neq k}}^{n} |a_{jk}|, \tag{7.27}$$
and B_k denotes the closed disk in the complex plane with centre a_kk and radius τ_k. That is,
$$B_k = \{\, z \in \mathbb{C} : |z - a_{kk}| \le \tau_k \,\}. \tag{7.28}$$
1. Each eigenvalue of A lies in one of the disks B_k. That is, no eigenvalue of A lies in ℂ \ ⋃_{k=1}^{n} B_k.
2. Suppose that among the disks B1 , B2 , · · · , Bn , there is a collection of m disks
whose union (denoted by C1 ) is disjoint from the union of the rest of the n −
m disks (denoted by C2 ). Then exactly m eigenvalues lie in C1 and n − m
eigenvalues lie in C2 (here each eigenvalue is counted as many times as its
algebraic multiplicity).
Optimal Bounds
Using the Gerschgorin theorem, we wish to identify an annular region in the complex
plane of the form
$$\{\, z \in \mathbb{C} : r \le |z| \le R \,\}, \tag{7.29}$$
for some non-negative real numbers r and R, containing all the eigenvalues of a given
matrix.
Let A = (a_ij) be an n × n matrix with real entries. For k = 1, 2, ⋯, n, consider the Gerschgorin disk D_k defined by (7.22). Then, for z ∈ D_k we have |z − a_kk| ≤ ρ_k, and therefore
$$|a_{kk}| - \rho_k \le |z| \le |a_{kk}| + \rho_k.$$
Therefore, by the Gerschgorin theorem, all the eigenvalues of A are contained in the annular region (7.29) with
$$r = \min_{1 \le k \le n} (|a_{kk}| - \rho_k), \qquad R = \max_{1 \le k \le n} (|a_{kk}| + \rho_k).$$
Applying the same argument to the disks B_k of Corollary 7.2.5, all the eigenvalues also lie in the annular region (7.29) with r and R replaced by
$$t = \min_{1 \le k \le n} (|a_{kk}| - \tau_k), \qquad T = \max_{1 \le k \le n} (|a_{kk}| + \tau_k).$$
Combining the two, all the eigenvalues of A lie in the annular region
$$\{\, z \in \mathbb{C} : \Lambda_* \le |z| \le \Lambda^* \,\},$$
where
$$\Lambda_* := \max\{r, t\} \quad \text{and} \quad \Lambda^* := \min\{R, T\}.$$
We call Λ_* and Λ^* the optimal bounds for the eigenvalues of A given by the Gerschgorin theorem. Note that this is the best possible information that we can obtain using the Gerschgorin theorem.
Example 7.2.6.
For the matrix
$$A = \begin{pmatrix} 6 & 0 & 2 \\ -1 & -5 & 0 \\ -2 & 2 & -3 \end{pmatrix},$$
the Gerschgorin disks can be obtained using (7.22) and are given by
$$D_1 = \{ z : |z - 6| \le 2 \}, \quad D_2 = \{ z : |z + 5| \le 1 \}, \quad D_3 = \{ z : |z + 3| \le 4 \}.$$
Thus, the smallest annular region, obtained using the Gerschgorin theorem, that contains all the eigenvalues of A is given by
$$\{\, z \in \mathbb{C} : 1 \le |z| \le 8 \,\}.$$
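The optimal bounds for this matrix can be computed directly from the definitions of r, R, t, T (the snippet is our own sketch):

```python
import numpy as np

A = np.array([[ 6.0,  0.0,  2.0],
              [-1.0, -5.0,  0.0],
              [-2.0,  2.0, -3.0]])
d = np.abs(np.diag(A))
rho = np.abs(A).sum(axis=1) - d     # row radii rho_k
tau = np.abs(A).sum(axis=0) - d     # column radii tau_k
r, R = (d - rho).min(), (d + rho).max()
t, T = (d - tau).min(), (d + tau).max()
print(max(r, t), min(R, T))         # optimal bounds: 1.0 and 8.0
```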
7.3 Exercises
1. The matrix
$$A = \begin{pmatrix} 2 & 0 & 0 \\ 2 & 1 & 0 \\ 3 & 0 & 1 \end{pmatrix}$$
2. The matrix
$$A = \begin{pmatrix} 5.4 & 0 & 0 \\ -113.0233 & -0.5388 & -0.6461 \\ -46.0567 & -6.4358 & -0.9612 \end{pmatrix}$$
has eigenvalues λ_1 = 5.4, λ_2 = 1.3 and λ_3 = −2.8, with corresponding eigenvectors v_1 = (0.2, −4.1, 2.7)ᵀ, v_2 = (0, 1.3, −3.7)ᵀ and v_3 = (0, 2.6, 9.1)ᵀ. To which eigenvalue, and to which corresponding eigenvector, does the power method converge if we start with the initial guess x^(0) = (0, 1, 1)ᵀ? Justify your answer.
3. Use Gerschgorin's circle theorem to determine the intervals in which the eigenvalues of the matrix
$$A = \begin{pmatrix} 0.5 & 0 & 0.2 \\ 0 & 3.15 & -1 \\ 0.57 & 0 & -7.43 \end{pmatrix}$$
lie, given that all eigenvalues of A are real. Show that the power method can be applied to this matrix to find the dominant eigenvalue without computing the eigenvalues explicitly. Compute the first three iterates of the power method sequences.
5. Construct the iterative sequences using the inverse power method for the matrix A given in Exercise 4. Starting with x^(0) = (1, 1, 1)ᵀ, perform five iterations.
6. Using an appropriate shifted inverse power method for the matrix A given in Exercise 4, construct the sequences {µ_k} and {x^(k)} that converge to λ_2 and an eigenvector of λ_2, respectively. Perform an adequate number of iterations to confirm the convergence numerically.
7. Use the Gerschgorin Circle theorem to determine bounds for the eigenvalues of the following matrices. Find the optimal bounds wherever possible, and draw pictures of all the regions given by the Gerschgorin circles.
$$\begin{pmatrix} 1 & 0 & 0 \\ -1 & 0 & 1 \\ -1 & -1 & 2 \end{pmatrix}, \quad \begin{pmatrix} 4 & -1 & 0 \\ -1 & 4 & -1 \\ -1 & -1 & 4 \end{pmatrix}, \quad \begin{pmatrix} 4.75 & 2.25 & -0.25 \\ 2.25 & 4.75 & 1.25 \\ -0.25 & 1.25 & 4.75 \end{pmatrix}, \quad \begin{pmatrix} 1 & 0 & -1 & 1 \\ 2 & 2 & -1 & 1 \\ 0 & 1 & 3 & -2 \\ 1 & 0 & 1 & 4 \end{pmatrix}.$$
8. Prove that the following two statements concerning n × n matrices are equivalent.
i) Every diagonally dominant matrix is invertible.
ii) Each of the eigenvalues of a matrix A belongs to at least one Gerschgorin disk corresponding to A.