Lecture 4 - (Spring 2024)
Determinants
Hence, the determinant of a $3\times 3$ matrix can be computed based on the determinants of its $2\times 2$ sub-matrices. □
Definition 4.1.2: For $n \ge 2$, the determinant of a square matrix $A = [a_{ij}] \in \mathbb{R}^{n\times n}$, denoted by $\det A$, is the sum of $n$ terms of the form $(-1)^{1+j} a_{1j}\det A(1|j)$, where $A(1|j) \in \mathbb{R}^{(n-1)\times(n-1)}$ is obtained by deleting the first row and the $j$th column of $A$. More precisely,

$\det A = \sum_{j=1}^{n} (-1)^{1+j} a_{1j}\det A(1|j)$.   (4.1.1)
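As a concrete illustration of (4.1.1), here is a minimal Python sketch (assuming NumPy is available; the function name det_by_expansion is ours, not from any library) that computes the determinant recursively by expansion along the first row.

```python
import numpy as np

def det_by_expansion(A):
    """Determinant via cofactor expansion along the first row, as in (4.1.1)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # A(1|j): delete the first row and the j-th column
        minor = np.delete(A[1:, :], j, axis=1)
        # with 0-based j, (-1)**j equals the sign (-1)**(1+j) of the 1-based formula
        total += (-1) ** j * A[0, j] * det_by_expansion(minor)
    return total

print(det_by_expansion([[1.0, 2.0], [3.0, 4.0]]))   # -2.0
```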
Definition 4.1.3: $(-1)^{1+j}\det A(1|j)$ is called the $(1,j)$th cofactor of $A$. In general, $(-1)^{i+j}\det A(i|j)$, where $A(i|j) \in \mathbb{R}^{(n-1)\times(n-1)}$ is formed by deleting the $i$th row and $j$th column of $A$, is called the $(i,j)$th cofactor of $A$, denoted $c_{ij}$. □
Theorem 4.1.4: Let $A \in \mathbb{R}^{n\times n}$. Then

$\det A = \sum_{j=1}^{n} a_{kj} c_{kj}$ ($k$th row expansion) $= \sum_{i=1}^{n} a_{il} c_{il}$ ($l$th column expansion).

That is, $\det A$ can be evaluated by expanding along any row or any column. □
Remark: The formula (4.1.1) is simply the expansion along the first row. □
Example 4.1.5: Consider
$A = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 3 & 1 & 2 & 2 \\ 1 & 0 & 2 & 1 \\ 2 & 0 & 0 & 1 \end{bmatrix}$.
To find $\det A$, perhaps the easiest way is to expand along the 2nd column, which contains the most zero entries. □
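A quick numerical check of Example 4.1.5 (a sketch assuming NumPy; only the single nonzero entry of the 2nd column contributes to the expansion):

```python
import numpy as np

A = np.array([[1, 0, 1, 0],
              [3, 1, 2, 2],
              [1, 0, 2, 1],
              [2, 0, 0, 1]], dtype=float)

# Expansion along the 2nd column: only a_22 = 1 is nonzero, with sign (-1)**(2+2) = +1
minor_22 = np.delete(np.delete(A, 1, axis=0), 1, axis=1)   # A(2|2)
det_by_column_2 = 1 * np.linalg.det(minor_22)

print(det_by_column_2, np.linalg.det(A))   # both equal 3 (up to rounding)
```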
Property: $\det A^T = \det A$ for any square matrix $A$.
[Proof]: Expanding $\det A$ along the 1st row of $A$ is the same as expanding $\det A^T$ along the 1st column of $A^T$. □
Property 4.2.3: Suppose $a \in \mathbb{R}$. Then $\det(aA) = a^{n}\det A$ for $A \in \mathbb{R}^{n\times n}$.
[Proof]: By induction. For $n=1$, the assertion is true. Suppose it is true for $n = k-1$, i.e., for any $B \in \mathbb{R}^{(k-1)\times(k-1)}$, $\det(aB) = a^{k-1}\det B$. Then for $n = k$,

$\det(aA) = \sum_{j=1}^{k} a\,a_{1j}\cdot\big[(1,j)\text{-cofactor of } aA\big]$
$\quad = \sum_{j=1}^{k} a\,a_{1j}(-1)^{1+j}\det\big[(aA)(1|j)\big]$  (by definition of cofactor; $(aA)(1|j)$ has dimension $(k-1)\times(k-1)$)
$\quad = \sum_{j=1}^{k} a\,a_{1j}(-1)^{1+j}\,a^{k-1}\det A(1|j)$  (by the assumption made about $n = k-1$)
$\quad = a^{k}\sum_{j=1}^{k} a_{1j}(-1)^{1+j}\det A(1|j) = a^{k}\det A$. □
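Property 4.2.3 is easy to confirm numerically; a minimal sketch, assuming NumPy and arbitrary sample data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
a = 2.5

lhs = np.linalg.det(a * A)
rhs = a ** 4 * np.linalg.det(A)   # a**n with n = 4
print(np.isclose(lhs, rhs))       # True
```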
Property 4.2.4: Let $A \in \mathbb{R}^{n\times n}$ and let $r = [r_1\; r_2\; \cdots\; r_n]$ be a row vector. Also let $A_i(r)$ denote the matrix obtained from $A$ by replacing its $i$th row with $r$. Then for any row vectors $p$ and $q$, $\det A_i(p+q) = \det A_i(p) + \det A_i(q)$; that is, the determinant is additive in each row (and, likewise, in each column). □

Example: Verify that
$\det\begin{bmatrix} 1 & 7 & 5 \\ 2 & 0 & 3 \\ 1 & 0 & 4 \end{bmatrix} = \det\begin{bmatrix} 1 & 7 & 5 \\ 2 & 0 & 3 \\ 1 & 7 & -1 \end{bmatrix} + \det\begin{bmatrix} 1 & 7 & 5 \\ 2 & 0 & 3 \\ 0 & -7 & 5 \end{bmatrix}$, where $[1\;\;0\;\;4] = [1\;\;7\;\;-1] + [0\;\;-7\;\;5]$. □
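A numerical check of this row additivity, using the decomposition in the example above (a sketch assuming NumPy):

```python
import numpy as np

top = [[1, 7, 5], [2, 0, 3]]
A  = np.array(top + [[1,  0,  4]], dtype=float)
A1 = np.array(top + [[1,  7, -1]], dtype=float)
A2 = np.array(top + [[0, -7,  5]], dtype=float)   # [1,0,4] = [1,7,-1] + [0,-7,5]

print(np.isclose(np.linalg.det(A), np.linalg.det(A1) + np.linalg.det(A2)))  # True
```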
Property 4.2.5: If $A$ has two rows (or columns) that are identical, then $\det A = 0$.
[Proof]: The proof is done by induction. The statement is true for $n=2$, since $\det\begin{bmatrix} a & b \\ a & b \end{bmatrix} = ab - ba = 0$. Suppose that the claim holds for $n = k-1$. Then for $n = k$, expanding $\det A$ along any row (or column) other than the two equal rows (or columns) gives $\det A = 0$. This is because every cofactor involves the determinant of a $(k-1)\times(k-1)$ matrix which still contains two identical rows (or columns) and is therefore zero by the induction hypothesis. □
Example: Consider $A = \begin{bmatrix} 1 & 1 & 0 & 7 \\ 2 & 3 & 4 & 5 \\ 1 & 1 & 0 & 7 \\ 1 & 5 & 4 & 3 \end{bmatrix}$. Let us evaluate $\det A$ via expansion along the second row:

$\det A = -2\det\begin{bmatrix} 1 & 0 & 7 \\ 1 & 0 & 7 \\ 5 & 4 & 3 \end{bmatrix} + 3\det\begin{bmatrix} 1 & 0 & 7 \\ 1 & 0 & 7 \\ 1 & 4 & 3 \end{bmatrix} - 4\det\begin{bmatrix} 1 & 1 & 7 \\ 1 & 1 & 7 \\ 1 & 5 & 3 \end{bmatrix} + 5\det\begin{bmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 1 & 5 & 4 \end{bmatrix}$.

Each $3\times 3$ determinant has two identical rows and hence vanishes, so $\det A = 0$, consistent with Property 4.2.5 (rows 1 and 3 of $A$ are identical). □
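A sketch verifying this example numerically (assuming NumPy): both $\det A$ and every minor in the second-row expansion vanish.

```python
import numpy as np

A = np.array([[1, 1, 0, 7],
              [2, 3, 4, 5],
              [1, 1, 0, 7],
              [1, 5, 4, 3]], dtype=float)

print(np.isclose(np.linalg.det(A), 0))          # True: rows 1 and 3 are identical
# every minor in the 2nd-row expansion also has two identical rows
for j in range(4):
    minor = np.delete(np.delete(A, 1, axis=0), j, axis=1)
    print(np.isclose(np.linalg.det(minor), 0))  # True for each j
```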
Property 4.2.6: If $B$ is obtained from $A$ by interchanging two rows (or columns) of $A$, then $\det B = -\det A$.
[Proof]: Suppose we interchange the $i$th row $r_i$ and the $j$th row $r_j$ of $A$ to get $B$, $i \ne j$. Denote by $\langle h, g\rangle$ the determinant of the matrix obtained from $A$ by replacing the $i$th row of $A$ by $h$ and the $j$th row of $A$ by $g$. Therefore $\det A = \langle r_i, r_j\rangle$ and $\det B = \langle r_j, r_i\rangle$. We know that $\langle h, h\rangle = 0$ for all $h$ (from Property 4.2.5) and that

$\langle h + p,\, g\rangle = \langle h, g\rangle + \langle p, g\rangle$, $\qquad \langle h,\, p + g\rangle = \langle h, p\rangle + \langle h, g\rangle$

from Property 4.2.4. Hence

$0 = \langle r_i + r_j,\; r_i + r_j\rangle = \underbrace{\langle r_i, r_i\rangle}_{0} + \langle r_i, r_j\rangle + \langle r_j, r_i\rangle + \underbrace{\langle r_j, r_j\rangle}_{0} = \langle r_i, r_j\rangle + \langle r_j, r_i\rangle = \det A + \det B$,

so $\det B = -\det A$. □
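A quick numerical check of the sign flip under a row interchange (a sketch assuming NumPy and arbitrary sample data):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = A.copy()
B[[0, 2]] = B[[2, 0]]          # interchange rows 1 and 3

print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))   # True
```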
Property 4.2.7: If $B$ is obtained from $A$ by multiplying the $i$th row of $A$ by a scalar $a \in \mathbb{R}$, then $\det B = a\det A$.
Property 4.2.8: If $B$ is obtained from $A$ by adding a multiple of one row (or one column) of $A$ to another row (or column), then $\det B = \det A$.
[Proof]: Suppose that the $i$th row $r_i$ of $A$ is replaced by $r_i + a\,r_j$ to get $B$. Then we have $\det B = \det A + \det M$, where $M$ is obtained from $A$ by replacing $r_i$ by $a\,r_j$ (Property 4.2.4). Since

$\det M = \det\begin{bmatrix} \vdots \\ a\,r_j \\ \vdots \\ r_j \\ \vdots \end{bmatrix} = a\det\begin{bmatrix} \vdots \\ r_j \\ \vdots \\ r_j \\ \vdots \end{bmatrix} = 0$

(Properties 4.2.7 and 4.2.5), it follows that $\det B = \det A$. For instance, if $B$ is obtained from a $3\times 3$ matrix $A = [a_{ij}]$ by adding $a$ times the first row to the second row, then

$\det B = \det A + \det\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a\,a_{11} & a\,a_{12} & a\,a_{13} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \det A + a\underbrace{\det\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}}_{0} = \det A$. □
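A minimal numerical check of Property 4.2.8 (a sketch assuming NumPy and arbitrary sample data):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
a = 4.0

B = A.copy()
B[1] = B[1] + a * B[0]         # add a times row 1 to row 2

print(np.isclose(np.linalg.det(B), np.linalg.det(A)))    # True
```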
For $A, B \in \mathbb{R}^{n\times n}$, a celebrated formula regarding the determinant of a product is $\det(AB) = \det A\,\det B$. Its proof is far from trivial, even though nothing is surprising here: when specialized to the scalar case $n = 1$ it reduces to $\det(ab) = \det a\,\det b$. Yet we are not empty-handed when extending to $n > 1$, bearing in mind that oftentimes we can say something using elementary matrices. Hence let us begin with elementary matrices of the three kinds; afterwards the general case will follow.
$P_{ij}$: interchange the $i$th and $j$th rows of $I$.
$S_i(a)$: multiply the $i$th row of $I$ by a nonzero scalar $a$.
$T_{ij}$: add a scalar multiple of the $i$th row of $I$ to the $j$th row of $I$.
We have
$\det P_{ij} = -\det I = -1$, from Property 4.2.6;
$\det S_i(a) = a\det I = a$, from Property 4.2.7;
$\det T_{ij} = \det I = 1$, from Property 4.2.8.
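The three kinds of elementary matrices and their determinants can be built and checked directly; a sketch assuming NumPy (the particular indices and scalars are arbitrary sample choices):

```python
import numpy as np

n = 4
I = np.eye(n)

P = I.copy(); P[[0, 2]] = P[[2, 0]]        # P_13: interchange rows 1 and 3
S = I.copy(); S[1, 1] = 5.0                # S_2(5): multiply row 2 by 5
T = I.copy(); T[2, 0] = 3.0                # T_13: add 3 times row 1 to row 3

print(np.linalg.det(P), np.linalg.det(S), np.linalg.det(T))   # -1.0, 5.0, 1.0

# det(E A) = det(E) det(A) for each elementary matrix E
A = np.random.default_rng(3).standard_normal((n, n))
for E in (P, S, T):
    print(np.isclose(np.linalg.det(E @ A), np.linalg.det(E) * np.linalg.det(A)))  # True
```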
Moreover, by the same properties, for any $A \in \mathbb{R}^{n\times n}$ we have $\det(P_{ij}A) = -\det A = \det P_{ij}\det A$, $\det(S_i(a)A) = a\det A = \det S_i(a)\det A$, and $\det(T_{ij}A) = \det A = \det T_{ij}\det A$. That is, the determinant of the product of an elementary matrix (of any kind) and a square matrix $A \in \mathbb{R}^{n\times n}$ equals the product of the respective determinants.
If $A$ is singular (i.e., $AX = I$ has no solution), then in $[G\;\;H]$, the reduced row echelon form of $[A\;\;I]$, the block $G$ must have at least one zero row. Writing the row reduction as $[G\;\;H] = E_1 E_2\cdots E_k\,[A\;\;I]$ for elementary matrices $E_1,\ldots,E_k$, we have $G = E_1 E_2\cdots E_k A$, and

$0 = \det G = \det(E_1 E_2\cdots E_k A) = \underbrace{\det E_1\cdots\det E_k}_{\ne 0}\,\det A$,

so $\det A = 0$.
Now consider the product formula. If $A$ is nonsingular, then $A$ can be written as a product of elementary matrices, $A = E_1 E_2\cdots E_k$, and hence

$\det(AB) = \det(E_1 E_2\cdots E_k B) = \det E_1\cdots\det E_k\,\det B = \det(\underbrace{E_1 E_2\cdots E_k}_{A})\,\det B = \det A\,\det B$.
If $A$ is singular, by following the previous arguments we can write $A = EG$, where $E$ is the product of a sequence of elementary matrices (hence $\det E \ne 0$) and $G$ contains zero rows at its bottom. This implies $AB = EGB$, and the square matrix $GB$ has zero rows at the bottom (why?). We then have
$\det(AB) = \det(EGB) = \det E\cdot\underbrace{\det(GB)}_{0} = 0$, and $\det A = \det E\cdot\underbrace{\det G}_{0} = 0$, so $\det A\,\det B = 0 = \det(AB)$. □
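A numerical check of the product formula, including the singular case (a sketch assuming NumPy and arbitrary sample data):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True

A[2] = A[0]                     # make A singular (two identical rows)
print(np.isclose(np.linalg.det(A @ B), 0.0, atol=1e-9))                       # True
```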
Corollary 4.2.11: If $A$ is nonsingular, then $\det A^{-1} = \dfrac{1}{\det A}$.
[Proof]: $AA^{-1} = I$, so $\det(AA^{-1}) = \det A\,\det A^{-1} = 1$. □
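A one-line numerical check of the corollary (a sketch assuming NumPy; the matrix is an arbitrary nonsingular example):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])    # nonsingular: det A = 5
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A)))   # True
```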
Note: In general,
$\det(M_1 + M_2) \ne \det M_1 + \det M_2$, for arbitrary $M_1$ and $M_2$.
Example: Take $M_1 = \begin{bmatrix} 1 & 2 \\ 2 & 5 \end{bmatrix}$ and $M_2 = \begin{bmatrix} 3 & 1 \\ 1 & 3 \end{bmatrix}$. Then $\det M_1 + \det M_2 = 1 + 8 = 9$, while $\det(M_1 + M_2) = \det\begin{bmatrix} 4 & 3 \\ 3 & 8 \end{bmatrix} = 23$. □
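The counterexample can be reproduced directly (a sketch assuming NumPy):

```python
import numpy as np

M1 = np.array([[1.0, 2.0], [2.0, 5.0]])   # det = 1
M2 = np.array([[3.0, 1.0], [1.0, 3.0]])   # det = 8
print(np.linalg.det(M1) + np.linalg.det(M2))   # 9.0
print(np.linalg.det(M1 + M2))                  # 23.0, not 9
```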
Motivation: For a square matrix $A$, we can use Gaussian elimination (or elementary row operations) to solve the linear equation $Ax = b$ or to find the inverse of $A$ (if it exists). A closed-form expression for the solution $A^{-1}b$ or for the inverse $A^{-1}$ can be obtained through elementary matrix multiplication. The determinant can be used to provide alternative closed-form expressions for $A^{-1}b$ and $A^{-1}$, directly in terms of the entries of $A$. This is the main focus of this section.
A. Representation of Inverse
To proceed, suppose $A = [a_{ij}] \in \mathbb{R}^{n\times n}$ and let $1 \le i \le n$ be fixed. Let $B$ be obtained by replacing the $j$th row of $A$ ($j \ne i$) by the $i$th row of $A$. Then $B$ has two identical rows and hence $\det B = 0$ (by Property 4.2.5). Let us evaluate $\det B$ by expanding along its $j$th row:

$0 = \det B = \sum_{k=1}^{n} b_{jk} c_{jk} = \sum_{k=1}^{n} a_{ik} c_{jk} = [\,a_{i1}\;\cdots\;a_{in}\,]\begin{bmatrix} c_{j1} \\ \vdots \\ c_{jn} \end{bmatrix}$, for $i \ne j$,   (4.3.1)

where $c_{jk}$ is the $(j,k)$th cofactor of $A$ (note that $A$ and $B$ share the same cofactors along the $j$th row, since that row is deleted when forming them). Equation (4.3.1) says that the sum of the products of the entries of one row of $A$ with the corresponding cofactors of a different row is zero.
Example:
$A = \begin{bmatrix} 1 & 1 & 0 & 7 \\ 2 & 3 & 4 & 5 \\ 1 & 2 & 3 & 6 \\ 1 & 5 & 4 & 3 \end{bmatrix}$, $\quad B = \begin{bmatrix} 1 & 1 & 0 & 7 \\ 2 & 3 & 4 & 5 \\ 1 & 1 & 0 & 7 \\ 1 & 5 & 4 & 3 \end{bmatrix}$ (replacing the 3rd row of $A$ by the 1st row).

$0 = \det B = \sum_{k=1}^{4} b_{3k} c_{3k}$ (expanding along the 3rd row) $= \underbrace{1}_{a_{11}}\cdot c_{31} + \underbrace{1}_{a_{12}}\cdot c_{32} + \underbrace{0}_{a_{13}}\cdot c_{33} + \underbrace{7}_{a_{14}}\cdot c_{34} = \sum_{k=1}^{4} a_{1k} c_{3k}$. □
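A sketch verifying (4.3.1) on this example (assuming NumPy; the helper cofactor is ours and uses 0-based indices, which give the same sign pattern as the 1-based formula):

```python
import numpy as np

def cofactor(A, i, j):
    """(i, j)-th cofactor of A, with 0-based indices."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[1, 1, 0, 7],
              [2, 3, 4, 5],
              [1, 2, 3, 6],
              [1, 5, 4, 3]], dtype=float)

# entries of the 1st row times the cofactors of the 3rd row sum to zero, as in (4.3.1)
s = sum(A[0, k] * cofactor(A, 2, k) for k in range(4))
print(np.isclose(s, 0.0))   # True
```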
Note that, for a fixed $1 \le i \le n$, (4.3.1) defines $n-1$ equations, which can be combined with the cofactor expansion along the $i$th row, $\det A = \sum_{k=1}^{n} a_{ik} c_{ik}$ (equation (4.3.2)), to get

$\underbrace{[\,a_{i1}\;\cdots\;a_{in}\,]}_{i\text{th row of } A}\;\mathrm{adj}A = [\,0\;\cdots\;0\;\det A\;0\;\cdots\;0\,] = \det A\,\underbrace{[\,0\;\cdots\;0\;1\;0\;\cdots\;0\,]}_{=:\,e_i,\ i\text{th row of } I_n}$,   (4.3.3)

with $\det A$ (respectively the $1$) appearing in the $i$th position. Here $\mathrm{adj}A$, the adjoint of $A$, is the $n\times n$ matrix whose $i$th column is $[\,c_{i1}\;\cdots\;c_{in}\,]^{T}$, i.e., the transpose of the cofactor matrix of $A$.
By collecting equation (4.3.3) for all $1 \le i \le n$ we have the key relation: denoting by $r_i \in \mathbb{R}^{1\times n}$ the $i$th row of $A$,

$\underbrace{\begin{bmatrix} r_1 \\ \vdots \\ r_n \end{bmatrix}}_{A}\,\mathrm{adj}A = \det A\,\underbrace{\begin{bmatrix} e_1 \\ \vdots \\ e_n \end{bmatrix}}_{I}$, i.e., $A\cdot\mathrm{adj}A = \det A\cdot I$.   (4.3.4)
Remarks:
$A^{-1} = \dfrac{1}{\det A}\,\mathrm{adj}A$ whenever $\det A \ne 0$.   (4.3.5)
(c) Equation (4.3.5) provides an elegant representation of the inverse. However, (4.3.5) is seldom used in practice since it involves the computation of many determinants, which is not preferred in numerical computation.
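Even so, (4.3.4) and (4.3.5) are easy to verify on a small matrix; a sketch assuming NumPy (the helper adjugate and the sample matrix are ours):

```python
import numpy as np

def adjugate(A):
    """adj(A): the transpose of the cofactor matrix of A."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])

print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))       # (4.3.4)
print(np.allclose(adjugate(A) / np.linalg.det(A), np.linalg.inv(A)))    # (4.3.5)
```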
Suppose $A \in \mathbb{R}^{n\times n}$ is nonsingular, i.e., $\det A \ne 0$. Equation (4.3.5) implies that the solution to $Ax = b$ is

$x = A^{-1}b = \dfrac{1}{\det A}\,\mathrm{adj}A\; b$.   (4.3.6)
Let us write

$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$, $\quad b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$, $\quad\text{and}\quad \mathrm{adj}A = \begin{bmatrix} c_{11} & \cdots & c_{n1} \\ \vdots & & \vdots \\ c_{1n} & \cdots & c_{nn} \end{bmatrix}$.
With (4.3.6) we immediately have

$\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \dfrac{1}{\det A}\begin{bmatrix} \sum_{j=1}^{n} c_{j1} b_j \\ \sum_{j=1}^{n} c_{j2} b_j \\ \vdots \\ \sum_{j=1}^{n} c_{jn} b_j \end{bmatrix}$.
In particular,

$x_1 = \dfrac{1}{\det A}\sum_{j=1}^{n} c_{j1} b_j = \dfrac{1}{\det A}\det A_1$, where $A_1 = \begin{bmatrix} b_1 & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & & \vdots \\ b_n & a_{n2} & \cdots & a_{nn} \end{bmatrix}$,

since $\sum_{j=1}^{n} c_{j1} b_j$ is exactly the cofactor expansion of $\det A_1$ along its first column (the cofactors of the first column of $A_1$ coincide with those of $A$).
In general, we have $x_k = \dfrac{1}{\det A}\det A_k$, where $A_k$ is obtained from $A$ by replacing the $k$th column by $b$:

$x_i = \dfrac{\det A_i}{\det A}$, where $A_i = \begin{bmatrix} a_{11} & \cdots & a_{1,i-1} & b_1 & a_{1,i+1} & \cdots & a_{1,n} \\ \vdots & & \vdots & \vdots & \vdots & & \vdots \\ a_{n1} & \cdots & a_{n,i-1} & b_n & a_{n,i+1} & \cdots & a_{n,n} \end{bmatrix}$ (replace the $i$th column of $A$ by $b$), $\quad 1 \le i \le n$.

This is Cramer's rule.
Example: Solve $\begin{bmatrix} 1 & 0 & 2 \\ -3 & 4 & 6 \\ -1 & -2 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 6 \\ 30 \\ 8 \end{bmatrix}$ by Cramer's rule. Here

$\det A = 44 \ne 0$, $\quad A_1 = \begin{bmatrix} 6 & 0 & 2 \\ 30 & 4 & 6 \\ 8 & -2 & 3 \end{bmatrix}$, $\quad A_2 = \begin{bmatrix} 1 & 6 & 2 \\ -3 & 30 & 6 \\ -1 & 8 & 3 \end{bmatrix}$, $\quad A_3 = \begin{bmatrix} 1 & 0 & 6 \\ -3 & 4 & 30 \\ -1 & -2 & 8 \end{bmatrix}$,

$x_1 = \dfrac{\det A_1}{\det A} = \dfrac{-40}{44} = -\dfrac{10}{11}$, $\quad x_2 = \dfrac{\det A_2}{\det A} = \dfrac{72}{44} = \dfrac{18}{11}$, $\quad x_3 = \dfrac{\det A_3}{\det A} = \dfrac{152}{44} = \dfrac{38}{11}$. □
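The example can be checked numerically; a sketch assuming NumPy, with $A$ and $b$ as in the example above (the loop implements the column-replacement formula of Cramer's rule):

```python
import numpy as np

A = np.array([[ 1.0,  0.0, 2.0],
              [-3.0,  4.0, 6.0],
              [-1.0, -2.0, 3.0]])
b = np.array([6.0, 30.0, 8.0])

detA = np.linalg.det(A)                      # 44
x = np.empty(3)
for k in range(3):
    Ak = A.copy()
    Ak[:, k] = b                             # replace the k-th column by b
    x[k] = np.linalg.det(Ak) / detA

print(x)                        # approximately [-10/11, 18/11, 38/11]
print(np.allclose(A @ x, b))    # True
```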
The following conditions are equivalent for a square matrix $A \in \mathbb{R}^{n\times n}$ to be nonsingular:
(1) The inverse of $A$ exists.
(2) $Ax = 0$ iff $x = 0$.
(3) $Ax = b$ has a unique solution for every $b$.
(4) The reduced row echelon form of $A$ is the identity matrix.
(5) $A$ can be expressed as a product of elementary matrices.
(6) $\det A \ne 0$.
4-4. Geometric Interpretations of Determinants
[Figure: the rectangle with side lengths $a$ and $d$ has area $ad = \det\begin{bmatrix} a & 0 \\ 0 & d \end{bmatrix}$; shearing it gives a parallelogram of the same area $ad = \det\begin{bmatrix} a & b \\ 0 & d \end{bmatrix}$; the general parallelogram determined by the column vectors $\begin{bmatrix} a \\ c \end{bmatrix}$ and $\begin{bmatrix} b \\ d \end{bmatrix}$ has area $ad - bc$ (when $ad - bc > 0$).]
The area of the parallelogram equals

$(a+b)(c+d) - 2\cdot\dfrac{(a+b)c}{2} - 2\cdot\dfrac{(c+d)b}{2} = ad - bc$.
In general, the absolute value of the determinant of the $2\times 2$ matrix $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is equal to the area of the parallelogram determined by the column vectors $\begin{bmatrix} a \\ c \end{bmatrix}$ and $\begin{bmatrix} b \\ d \end{bmatrix}$.
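A small numerical check of the bounding-rectangle computation above against the determinant (a sketch assuming NumPy; the values of $a, b, c, d$ are arbitrary sample choices with $ad - bc > 0$):

```python
import numpy as np

a, b, c, d = 3.0, 1.0, 1.0, 2.0

# bounding-rectangle computation: (a+b)(c+d) minus the two pairs of triangles
area = (a + b) * (c + d) - 2 * ((a + b) * c / 2) - 2 * ((c + d) * b / 2)

print(np.isclose(area, np.linalg.det(np.array([[a, b], [c, d]]))))   # True: both equal ad - bc
```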
Note: If $\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is singular, then $\det\begin{bmatrix} a & b \\ c & d \end{bmatrix} = 0$. In this case we must have $\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & ka \\ c & kc \end{bmatrix}$ for some scalar $k$, and hence the two column vectors are along the same line in the two-dimensional plane.