Review of Properties of Eigenvalues and Eigenvectors
Take
\[
x = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}.
\]
We have
\[
Ax = \begin{pmatrix} -9 & 4 & 4 \\ -8 & 3 & 4 \\ -16 & 8 & 7 \end{pmatrix}
\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}
= \begin{pmatrix} -1 \\ -2 \\ 0 \end{pmatrix}
= (-1)\,x .
\]
Hence $\lambda = -1$ is such that there exists a nonzero vector $x$ such that $Ax = \lambda x$. Thus $\lambda = -1$ is an eigenvalue of $A$.
Similarly, if we take $\lambda = 3$ and
\[
x = \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix},
\]
we find that $Ax = \lambda x$. Thus, $\lambda = 3$ is also an eigenvalue of $A$.
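The two claims above can be checked numerically. The following is a minimal sketch (not part of the original notes), assuming the matrix $A$ and the two vectors as reconstructed above; the names `x1` and `x2` are purely illustrative.

```python
import numpy as np

# example matrix A as displayed above (assumed reconstruction of the notes' example)
A = np.array([[-9.0,  4.0, 4.0],
              [-8.0,  3.0, 4.0],
              [-16.0, 8.0, 7.0]])

x1 = np.array([1.0, 2.0, 0.0])   # claimed eigenvector for lambda = -1
x2 = np.array([1.0, 1.0, 2.0])   # claimed eigenvector for lambda = 3

print(A @ x1, (-1) * x1)   # both give [-1. -2.  0.]
print(A @ x2,   3  * x2)   # both give [ 3.  3.  6.]
```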
Let $\lambda$ be an eigenvalue of $A$. Then any nonzero $x$ such that $Ax = \lambda x$ is called an eigenvector of $A$.
Let $\lambda$ be an eigenvalue of $A$. Let
\[
W_{\lambda} = \{\, x \in \mathbb{C}^{\,n} : Ax = \lambda x \,\}.
\]
Then we have the following properties of $W_{\lambda}$:

(i) $\theta_n \in W_{\lambda}$, that is, $A\theta_n = \theta_n = \lambda\theta_n$.

(ii) $x, y \in W_{\lambda} \;\Rightarrow\; Ax = \lambda x,\ Ay = \lambda y \;\Rightarrow\; A(x+y) = Ax + Ay = \lambda(x+y) \;\Rightarrow\; x + y \in W_{\lambda}$.

(iii) For any constant $k$ we have $x \in W_{\lambda} \;\Rightarrow\; A(kx) = kAx = k\lambda x = \lambda(kx) \;\Rightarrow\; kx \in W_{\lambda}$.

Thus $W_{\lambda}$ is a subspace of $\mathbb{C}^{\,n}$. This is called the characteristic subspace or the eigensubspace corresponding to the eigenvalue $\lambda$.
Example: Consider the $A$ in the example on page 81. We have seen that $\lambda = -1$ is an eigenvalue of $A$. What is $W_{(-1)}$, the eigensubspace corresponding to $-1$?

We want to find all $x$ such that $Ax = -x$, that is, $(A+I)x = \theta$, that is, we want to find all solutions of the homogeneous system $Mx = \theta$, where
\[
M = A + I = \begin{pmatrix} -8 & 4 & 4 \\ -8 & 4 & 4 \\ -16 & 8 & 8 \end{pmatrix}.
\]
We can now use row reduction to find the general solution of the system:
\[
M = \begin{pmatrix} -8 & 4 & 4 \\ -8 & 4 & 4 \\ -16 & 8 & 8 \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} -8 & 4 & 4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
\qquad (R_2 \to R_2 - R_1,\; R_3 \to R_3 - 2R_1).
\]
Thus
\[
x_1 = \tfrac{1}{2}\,x_2 + \tfrac{1}{2}\,x_3 .
\]
Thus the general solution of $(A+I)x = \theta$ is
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
= x_2 \begin{pmatrix} \tfrac{1}{2} \\ 1 \\ 0 \end{pmatrix}
+ x_3 \begin{pmatrix} \tfrac{1}{2} \\ 0 \\ 1 \end{pmatrix}
= A_1 \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}
+ A_2 \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix},
\]
where $A_1$ and $A_2$ are arbitrary constants.
Thus $W_{(-1)}$ consists of all vectors of the form
\[
A_1 \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + A_2 \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}.
\]
Note: The vectors
\[
\begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix}, \qquad \begin{pmatrix} 1 \\ 0 \\ 2 \end{pmatrix}
\]
form a basis for $W_{(-1)}$ and therefore $\dim W_{(-1)} = 2$.
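The row reduction above can be reproduced with exact arithmetic. The following is an illustrative sketch (an assumption of this write-up, not part of the notes) using sympy, with the entries of $A$ as in the reconstructed example.

```python
from sympy import Matrix, eye

A = Matrix([[-9, 4, 4],
            [-8, 3, 4],
            [-16, 8, 7]])
M = A + eye(3)

rref, pivots = M.rref()
print(rref)           # row-reduced form with single pivot row [1, -1/2, -1/2]
print(M.nullspace())  # basis of W(-1): [1/2, 1, 0] and [1/2, 0, 1]; scaling by 2
                      # gives the basis (1, 2, 0), (1, 0, 2) found above
```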
What is $W_{(3)}$, the eigensubspace corresponding to the eigenvalue $3$ for the above matrix?

We need to find all solutions of $Ax = 3x$, i.e., $Ax - 3x = \theta$, i.e., $Nx = \theta$, where
\[
N = A - 3I = \begin{pmatrix} -12 & 4 & 4 \\ -8 & 0 & 4 \\ -16 & 8 & 4 \end{pmatrix}.
\]
Again we use row reduction:
\[
N = \begin{pmatrix} -12 & 4 & 4 \\ -8 & 0 & 4 \\ -16 & 8 & 4 \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} -12 & 4 & 4 \\[2pt] 0 & -\tfrac{8}{3} & \tfrac{4}{3} \\[2pt] 0 & \tfrac{8}{3} & -\tfrac{4}{3} \end{pmatrix}
\;\longrightarrow\;
\begin{pmatrix} -12 & 4 & 4 \\[2pt] 0 & -\tfrac{8}{3} & \tfrac{4}{3} \\[2pt] 0 & 0 & 0 \end{pmatrix}
\]
$(R_2 \to R_2 - \tfrac{2}{3}R_1,\; R_3 \to R_3 - \tfrac{4}{3}R_1$, then $R_3 \to R_3 + R_2)$, and
\[
12x_1 = 4x_2 + 4x_3, \qquad \tfrac{8}{3}\,x_2 = \tfrac{4}{3}\,x_3 ,
\]
so that
\[
x_3 = 2x_2, \qquad 12x_1 = 4x_2 + 8x_2 = 12x_2, \qquad x_2 = x_1 ,
\]
and hence
\[
x_2 = x_1, \qquad x_3 = 2x_2 = 2x_1 .
\]
The general solution is
\[
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = x_1 \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}.
\]
Thus $W_{(3)}$ consists of all vectors of the form
\[
c \begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix},
\]
where $c$ is an arbitrary constant.
Note: The vector
\[
\begin{pmatrix} 1 \\ 1 \\ 2 \end{pmatrix}
\]
forms a basis for $W_{(3)}$ and hence $\dim W_{(3)} = 1$.
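The same illustrative sympy sketch (again an assumption of this write-up) confirms the two dimensions just computed:

```python
from sympy import Matrix, eye

A = Matrix([[-9, 4, 4], [-8, 3, 4], [-16, 8, 7]])

W_minus1 = (A + eye(3)).nullspace()      # basis of W(-1)
W_3      = (A - 3 * eye(3)).nullspace()  # basis of W(3), a multiple of (1, 1, 2)

print(len(W_minus1), len(W_3))           # 2 and 1, i.e. dim W(-1) = 2, dim W(3) = 1
```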
Now when can a scalar $\lambda$ be an eigenvalue of a matrix $A$ of order $n$? We shall now investigate this question. Suppose $\lambda$ is an eigenvalue of $A$. Then there is a nonzero vector $x$ such that $Ax = \lambda x$

$\Rightarrow\; (A - \lambda I)x = \theta_n$ and $x \neq \theta_n$

$\Rightarrow$ the system $(A - \lambda I)x = \theta_n$ has at least one nonzero solution

$\Rightarrow$ nullity$(A - \lambda I) \geq 1$

$\Rightarrow$ rank$(A - \lambda I) < n$

$\Rightarrow$ $(A - \lambda I)$ is singular

$\Rightarrow$ $\det(A - \lambda I) = 0$.

Thus, $\lambda$ is an eigenvalue of $A$ $\Rightarrow$ $\det(A - \lambda I) = 0$.

Conversely, suppose $\lambda$ is a scalar such that $\det(A - \lambda I) = 0$. Then $(A - \lambda I)$ is singular

$\Rightarrow$ rank$(A - \lambda I) < n$

$\Rightarrow$ nullity$(A - \lambda I) \geq 1$

$\Rightarrow$ the system $(A - \lambda I)x = \theta_n$ has a nonzero solution

$\Rightarrow$ $\lambda$ is an eigenvalue of $A$.

Thus, any scalar $\lambda$ such that $\det(A - \lambda I) = 0$ is an eigenvalue. Combining the two we get:

$\lambda$ is an eigenvalue of $A$ $\iff$ $\det(A - \lambda I) = 0$ $\iff$ $\det(\lambda I - A) = 0$.
Now let $C(\lambda) = \det(\lambda I - A)$. Thus we see that:

The eigenvalues of a matrix $A$ are precisely the roots of $C(\lambda) = \det(\lambda I - A)$.
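As an illustrative numerical check (assumed, not part of the notes), the roots of the characteristic polynomial of the example matrix can be compared with the output of an eigenvalue routine; `np.poly` returns the coefficients of the monic characteristic polynomial of a square matrix.

```python
import numpy as np

A = np.array([[-9.0,  4.0, 4.0],
              [-8.0,  3.0, 4.0],
              [-16.0, 8.0, 7.0]])

coeffs = np.poly(A)           # coefficients of C(l) = l^3 - l^2 - 5l - 3 (up to rounding)
print(coeffs)
print(np.roots(coeffs))       # roots: 3, -1, -1 (up to ordering and rounding)
print(np.linalg.eigvals(A))   # the same values from the eigenvalue routine
```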
We have
\[
C(\lambda) \;=\;
\begin{vmatrix}
\lambda - a_{11} & -a_{12} & \cdots & -a_{1n} \\
-a_{21} & \lambda - a_{22} & \cdots & -a_{2n} \\
\vdots & \vdots & & \vdots \\
-a_{n1} & -a_{n2} & \cdots & \lambda - a_{nn}
\end{vmatrix}
\;=\; \lambda^{n} - \left(a_{11} + a_{22} + \cdots + a_{nn}\right)\lambda^{\,n-1} + \cdots + (-1)^{n}\det A .
\]
Thus $C(\lambda)$ is a polynomial of degree $n$. Note that the leading coefficient of $C(\lambda)$ is one and hence $C(\lambda)$ is a monic polynomial of degree $n$. This is called the CHARACTERISTIC POLYNOMIAL of $A$. The roots of the characteristic polynomial are the eigenvalues of $A$. The equation $C(\lambda) = 0$ is called the characteristic equation.
Sum of the roots of $C(\lambda)$ = sum of the eigenvalues of $A$ = $a_{11} + a_{22} + \cdots + a_{nn}$, and this is called the TRACE of $A$.

Product of the roots of $C(\lambda)$ = product of the eigenvalues of $A$ = $\det A$.
In our example on page 81 we have
\[
A = \begin{pmatrix} -9 & 4 & 4 \\ -8 & 3 & 4 \\ -16 & 8 & 7 \end{pmatrix},
\]
\[
C(\lambda) = \det(\lambda I - A) =
\begin{vmatrix}
\lambda + 9 & -4 & -4 \\
8 & \lambda - 3 & -4 \\
16 & -8 & \lambda - 7
\end{vmatrix}
=
\begin{vmatrix}
\lambda + 1 & -4 & -4 \\
\lambda + 1 & \lambda - 3 & -4 \\
\lambda + 1 & -8 & \lambda - 7
\end{vmatrix}
\qquad (C_1 \to C_1 + C_2 + C_3)
\]
\[
= (\lambda + 1)
\begin{vmatrix}
1 & -4 & -4 \\
1 & \lambda - 3 & -4 \\
1 & -8 & \lambda - 7
\end{vmatrix}
= (\lambda + 1)
\begin{vmatrix}
1 & -4 & -4 \\
0 & \lambda + 1 & 0 \\
0 & -4 & \lambda - 3
\end{vmatrix}
\qquad (R_2 \to R_2 - R_1,\; R_3 \to R_3 - R_1)
\]
\[
= (\lambda + 1)(\lambda + 1)(\lambda - 3) = (\lambda + 1)^{2}(\lambda - 3).
\]
Thus the characteristic polynomial is
\[
C(\lambda) = (\lambda + 1)^{2}(\lambda - 3).
\]
The eigenvalues are $-1$ (repeated twice) and $3$.
Sum of eigenvalues = $(-1) + (-1) + 3 = 1$ = Trace $A$ = sum of diagonal entries.

Product of eigenvalues = $(-1)(-1)(3) = 3 = \det A$.
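A brief numerical check of these two relations for the example matrix (an assumed sketch, not part of the notes):

```python
import numpy as np

A = np.array([[-9.0,  4.0, 4.0],
              [-8.0,  3.0, 4.0],
              [-16.0, 8.0, 7.0]])
eig = np.linalg.eigvals(A)

print(eig.sum(),  np.trace(A))        # both approximately 1:  (-1) + (-1) + 3 = trace A
print(eig.prod(), np.linalg.det(A))   # both approximately 3:  (-1)(-1)(3) = det A
```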
Thus, if $A$ is an $n \times n$ matrix, we define the CHARACTERISTIC POLYNOMIAL as
\[
C(\lambda) = \lvert \lambda I - A \rvert \qquad\qquad (1)
\]
and observe that this is a monic polynomial of degree $n$. When we factorize this as
\[
C(\lambda) = (\lambda - \lambda_1)^{a_1}(\lambda - \lambda_2)^{a_2} \cdots (\lambda - \lambda_k)^{a_k} \qquad\qquad (2)
\]
where $\lambda_1, \lambda_2, \ldots, \lambda_k$ are the distinct roots, these distinct roots are the distinct eigenvalues of $A$, and the multiplicities of these roots are called the algebraic multiplicities of these eigenvalues of $A$. Thus when $C(\lambda)$ is as in (2), the distinct eigenvalues are $\lambda_1, \lambda_2, \ldots, \lambda_k$ and the algebraic multiplicities of these eigenvalues are, respectively, $a_1, a_2, \ldots, a_k$.
For the matrix in the example on page 81 we found the characteristic polynomial on page 86 as
\[
C(\lambda) = (\lambda + 1)^{2}(\lambda - 3).
\]
Thus the distinct eigenvalues of this matrix are $\lambda_1 = -1$ and $\lambda_2 = 3$, and their algebraic multiplicities are respectively $a_1 = 2$ and $a_2 = 1$.
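The distinct eigenvalues together with their algebraic multiplicities can also be read off exactly with sympy; this is an assumed illustration, not part of the notes.

```python
from sympy import Matrix

A = Matrix([[-9, 4, 4], [-8, 3, 4], [-16, 8, 7]])
print(A.eigenvals())   # {-1: 2, 3: 1}: lambda_1 = -1 with a_1 = 2, lambda_2 = 3 with a_2 = 1
```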
If $\lambda_i$ is an eigenvalue of $A$, the eigensubspace corresponding to $\lambda_i$ is $W_{\lambda_i}$ and is defined as
\[
W_{\lambda_i} = \{\, x \in \mathbb{C}^{\,n} : Ax = \lambda_i x \,\}.
\]
The dimension of $W_{\lambda_i}$ is called the geometric multiplicity of the eigenvalue $\lambda_i$.

Let $\lambda_1, \lambda_2, \ldots, \lambda_s$ be $s$ distinct scalars (i.e., $\lambda_i \neq \lambda_j$ if $i \neq j$).
Consider
\[
p_i(\lambda) = \frac{(\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_{i-1})(\lambda - \lambda_{i+1})\cdots(\lambda - \lambda_s)}
{(\lambda_i - \lambda_1)(\lambda_i - \lambda_2)\cdots(\lambda_i - \lambda_{i-1})(\lambda_i - \lambda_{i+1})\cdots(\lambda_i - \lambda_s)}
= \prod_{\substack{j=1 \\ j \neq i}}^{s} \frac{(\lambda - \lambda_j)}{(\lambda_i - \lambda_j)}
\qquad \text{for } i = 1, 2, \ldots, s. \qquad (4)
\]
Then the $p_i(\lambda)$ are all polynomials of degree $s-1$. Further, notice that
\[
p_i(\lambda_1) = \cdots = p_i(\lambda_{i-1}) = p_i(\lambda_{i+1}) = \cdots = p_i(\lambda_s) = 0,
\qquad p_i(\lambda_i) = 1.
\]
Thus the $p_i(\lambda)$ are all polynomials of degree $s-1$ such that
\[
p_i(\lambda_j) = \delta_{ij} =
\begin{cases}
1 & \text{if } i = j \\
0 & \text{if } i \neq j
\end{cases}
\qquad\qquad (5)
\]
We call these the Lagrange interpolation polynomials. If $p(\lambda)$ is any polynomial of degree at most $s-1$, then it can be written as a linear combination of $p_1(\lambda), p_2(\lambda), \ldots, p_s(\lambda)$ as follows:
\[
p(\lambda) = p(\lambda_1)p_1(\lambda) + p(\lambda_2)p_2(\lambda) + \cdots + p(\lambda_s)p_s(\lambda),
\]
that is,
\[
p(\lambda) = \sum_{i=1}^{s} p(\lambda_i)\, p_i(\lambda).
\]
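A small assumed sketch of (4), (5) and this expansion: the scalars $-1, 2, 5$ and the test polynomial $q(t) = t^2 - 3t + 1$ are illustrative choices, not taken from the notes.

```python
lams = [-1.0, 2.0, 5.0]          # s = 3 distinct scalars (illustrative)
s = len(lams)

def p(i, t):
    """Lagrange polynomial p_i of (4), evaluated at t."""
    val = 1.0
    for j in range(s):
        if j != i:
            val *= (t - lams[j]) / (lams[i] - lams[j])
    return val

def q(t):
    """Test polynomial of degree s - 1 = 2."""
    return t**2 - 3*t + 1

# the delta property (5): p_i(lambda_j) = 1 if i == j, else 0
print([[round(p(i, lams[j]), 10) for j in range(s)] for i in range(s)])

# the expansion: q(t) equals the sum of q(lambda_i) * p_i(t)
t = 0.7
print(q(t), sum(q(lams[i]) * p(i, t) for i in range(s)))   # both about -0.61
```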
With this preliminary, we now proceed to study the properties of the eigenvalues and eigenvectors of an $n \times n$ matrix $A$.

Let $\lambda_1, \ldots, \lambda_k$ be the distinct eigenvalues of $A$. Let $\xi_1, \xi_2, \ldots, \xi_k$ be eigenvectors corresponding to these eigenvalues respectively; i.e., the $\xi_i$ are nonzero vectors such that
\[
A\xi_i = \lambda_i \xi_i, \qquad i = 1, 2, \ldots, k. \qquad\qquad (6)
\]
From (6) it follows that
\[
A^{2}\xi_i = A(A\xi_i) = A(\lambda_i \xi_i) = \lambda_i (A\xi_i) = \lambda_i^{2}\xi_i ,
\]
\[
A^{3}\xi_i = A(A^{2}\xi_i) = A(\lambda_i^{2}\xi_i) = \lambda_i^{2}(A\xi_i) = \lambda_i^{3}\xi_i ,
\]
and by induction we get
\[
A^{m}\xi_i = \lambda_i^{m}\xi_i \qquad \text{for any integer } m \geq 0. \qquad\qquad (7)
\]
(We interpret $A^{0}$ as $I$.)
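A quick assumed check of (7) for the eigenpair $(\lambda, \xi) = (3, (1,1,2)^{T})$ of the example matrix:

```python
import numpy as np

A = np.array([[-9.0,  4.0, 4.0],
              [-8.0,  3.0, 4.0],
              [-16.0, 8.0, 7.0]])
xi = np.array([1.0, 1.0, 2.0])   # eigenvector for lambda = 3

for m in range(4):
    print(m, np.linalg.matrix_power(A, m) @ xi, (3.0 ** m) * xi)   # the two vectors agree
```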
Now, let
\[
p(\lambda) = a_0 + a_1\lambda + \cdots + a_s\lambda^{s}
\]
be any polynomial. We define $p(A)$ as the matrix
\[
p(A) = a_0 I + a_1 A + \cdots + a_s A^{s}.
\]
Now
\[
p(A)\xi_i = \left(a_0 I + a_1 A + \cdots + a_s A^{s}\right)\xi_i
= a_0 \xi_i + a_1 A\xi_i + \cdots + a_s A^{s}\xi_i
\]
\[
= a_0 \xi_i + a_1 \lambda_i \xi_i + \cdots + a_s \lambda_i^{s}\xi_i \qquad \text{(by (7))}
\]
\[
= \left(a_0 + a_1 \lambda_i + \cdots + a_s \lambda_i^{s}\right)\xi_i = p(\lambda_i)\,\xi_i .
\]
Thus we have:

PROPERTY I. If $\lambda_i$ is any eigenvalue of $A$ and $\xi_i$ is an eigenvector corresponding to $\lambda_i$, then for any polynomial $p(\lambda)$ we have $p(A)\xi_i = p(\lambda_i)\,\xi_i$.
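An assumed numerical illustration of Property I, using the polynomial $p(t) = t^2 + 4t + 2$ (an arbitrary choice) and the eigenpair $(3, (1,1,2)^{T})$ of the example matrix:

```python
import numpy as np

A = np.array([[-9.0,  4.0, 4.0],
              [-8.0,  3.0, 4.0],
              [-16.0, 8.0, 7.0]])
xi, lam = np.array([1.0, 1.0, 2.0]), 3.0

pA = A @ A + 4 * A + 2 * np.eye(3)      # p(A) = A^2 + 4A + 2I
print(pA @ xi)                          # [23. 23. 46.]
print((lam**2 + 4*lam + 2) * xi)        # p(3) = 23, so again [23. 23. 46.]
```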
Now we shall prove that the eigenvectors $\xi_1, \xi_2, \ldots, \xi_k$ corresponding to the distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$ of $A$ are linearly independent.
In order to establish this linear independence, we must show that
\[
C_1\xi_1 + C_2\xi_2 + \cdots + C_k\xi_k = \theta_n \;\Longrightarrow\; C_1 = C_2 = \cdots = C_k = 0. \qquad\qquad (8)
\]
Now if in (4) and (5) we take $s = k$ and take the scalars to be the distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_k$ of $A$, then we get the Lagrange interpolation polynomials as
\[
p_i(\lambda) = \prod_{\substack{j=1 \\ j \neq i}}^{k} \frac{(\lambda - \lambda_j)}{(\lambda_i - \lambda_j)} ,
\qquad i = 1, 2, \ldots, k, \qquad\qquad (9)
\]
and
\[
p_i(\lambda_j) = \delta_{ij}. \qquad\qquad (10)
\]
Now, suppose
\[
C_1\xi_1 + C_2\xi_2 + \cdots + C_k\xi_k = \theta_n .
\]
For $1 \leq i \leq k$,
\[
p_i(A)\left[C_1\xi_1 + C_2\xi_2 + \cdots + C_k\xi_k\right] = p_i(A)\,\theta_n = \theta_n
\]
\[
\Longrightarrow\; C_1\, p_i(A)\xi_1 + C_2\, p_i(A)\xi_2 + \cdots + C_k\, p_i(A)\xi_k = \theta_n
\]
\[
\Longrightarrow\; C_1\, p_i(\lambda_1)\xi_1 + C_2\, p_i(\lambda_2)\xi_2 + \cdots + C_k\, p_i(\lambda_k)\xi_k = \theta_n \qquad \text{(by Property I)}
\]
\[
\Longrightarrow\; C_i\,\xi_i = \theta_n, \quad 1 \leq i \leq k, \qquad \text{by (10)}
\]
\[
\Longrightarrow\; C_i = 0, \quad 1 \leq i \leq k, \quad \text{since the } \xi_i \text{ are nonzero vectors.}
\]
Thus
\[
C_1\xi_1 + C_2\xi_2 + \cdots + C_k\xi_k = \theta_n \;\Longrightarrow\; C_1 = C_2 = \cdots = C_k = 0,
\]
proving (8). Thus we have:
we have
Eigen vectors corresponding to distinct eigenvalues of A are linearly
independent.
PROPERTY II
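A final assumed check of Property II for the example matrix, taking one eigenvector for each distinct eigenvalue and verifying linear independence via the rank:

```python
import numpy as np

v1 = np.array([1.0, 2.0, 0.0])   # eigenvector for lambda = -1
v2 = np.array([1.0, 1.0, 2.0])   # eigenvector for lambda = 3

print(np.linalg.matrix_rank(np.column_stack([v1, v2])))   # 2, so v1 and v2 are linearly independent
```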