
Linear Algebra: Simultaneous Linear Equations



1. System of Linear Equations


Consider a system of m simultaneous linear equations in n unknowns:
    a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = c_1,
    a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = c_2,
    \vdots                                                  (*)
    a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = c_m.
– In matrix-vector notation, we can write this system as Ax = c, where

    A_{m \times n} = \begin{pmatrix}
        a_{11} & a_{12} & \dots & a_{1n} \\
        a_{21} & a_{22} & \dots & a_{2n} \\
        \vdots & \vdots & \ddots & \vdots \\
        a_{m1} & a_{m2} & \dots & a_{mn}
    \end{pmatrix}, \quad
    x_{n \times 1} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}, \quad
    c_{m \times 1} = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_m \end{pmatrix}.

The system of equations (*) is called homogeneous if c = 0, and non-homogeneous
if c ≠ 0.

In analyzing a system of linear equations (*), the following questions naturally arise:
(i) Existence: Does there exist a solution to (*)?
(ii) Uniqueness: If there exists a solution to (*), is it unique?
(iii) Computation: If there exists a solution to (*), how can we find such a solution?

2. Existence of Solutions
If the system of equations (*) is homogeneous, there is always a trivial solution,
namely x = 0.

#1. Give an example to illustrate that if the system of equations is non-homogeneous,
then, in general, a solution may not exist.
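– For instance (an illustrative example added here, not part of the original exercise), the system

    x_1 + x_2 = 1,
    x_1 + x_2 = 2

has no solution: the two left-hand sides are identical while the right-hand sides differ. Equivalently, rank(A) = 1 while rank(A_c) = 2.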

In general, given the system of equations (*), we would like to know, given A and c,
whether there is a solution to (*).

Consider the system Ax = c.


– The m × (n + 1) matrix

    A_c = \begin{pmatrix}
        a_{11} & a_{12} & \dots & a_{1n} & c_1 \\
        a_{21} & a_{22} & \dots & a_{2n} & c_2 \\
        \vdots & \vdots & \ddots & \vdots & \vdots \\
        a_{m1} & a_{m2} & \dots & a_{mn} & c_m
    \end{pmatrix}

is known as the augmented matrix.

– Note that the augmented matrix A_c can be interpreted as an ordered set of n + 1
column vectors A_1, A_2, ..., A_n, c.

Theorem 1:
Let A be an m × n matrix and c be a vector in R^m. Then the system of equations
Ax = c has a solution if and only if
    rank(A) = rank(A_c).
– Proof: To be discussed in class.
– Hints:
1. Keep in mind that the augmented matrix A_c can be interpreted as an ordered
set of n + 1 column vectors A_1, A_2, ..., A_n, c.
2. Recognize that Ax = c has a solution implies that c can be expressed as a linear
combination of the column vectors of A: A_1, A_2, ..., A_n.
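– The rank condition in Theorem 1 is easy to check numerically. A minimal NumPy sketch (the matrix and right-hand side below are made-up illustrative values, not from the notes):

    import numpy as np

    # A 3x2 system: three equations, two unknowns (made-up values).
    A = np.array([[1.0, 2.0],
                  [2.0, 4.0],
                  [1.0, 0.0]])
    c = np.array([3.0, 6.0, 1.0])

    A_c = np.column_stack([A, c])            # augmented matrix [A | c]
    rank_A = np.linalg.matrix_rank(A)
    rank_Ac = np.linalg.matrix_rank(A_c)

    # Theorem 1: a solution exists iff rank(A) == rank(A_c).
    print(rank_A, rank_Ac, rank_A == rank_Ac)   # 2 2 True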

3. Uniqueness of Solutions
Theorem 2:
Let A be an m × n matrix and c be a vector in R^m. Then the system of equations
Ax = c has a unique solution if and only if
    rank(A) = rank(A_c) = n.
– Proof: To be discussed in class.
– Hints:
1. Step 1: To show that if there exists a unique solution, then rank(A) = rank(A_c) = n.
- Given that there exists a unique solution, call it x (that is, Ax = c). Then, by
Theorem 1, rank(A) = rank(A_c).
- It remains to show that rank(A) = n.
- If rank(A) ≠ n, then it must be that rank(A) < n.

⇒ A_1, A_2, ..., A_n is a set of linearly dependent vectors.
⇒ There exists a vector α = (α_1, α_2, ..., α_n), α ≠ 0, such that Aα = 0.
- Now try to derive a contradiction with the fact that there exists a unique solution.

2. Step 2: To show that if rank(A) = rank(A_c) = n, then there exists a unique
solution.
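– A quick numerical illustration of the rank condition for uniqueness (made-up matrices, added here as a sketch):

    import numpy as np

    # Unique case: rank(A) = rank(A_c) = n = 2.
    A = np.array([[1.0, 1.0],
                  [1.0, -1.0]])
    c = np.array([2.0, 0.0])
    print(np.linalg.matrix_rank(A))     # 2
    print(np.linalg.solve(A, c))        # [1. 1.] -- the unique solution

    # Non-unique case: rank(B) = rank(B_d) = 1 < n = 2,
    # so (1, 0) and (0, 1) both solve Bx = d.
    B = np.array([[1.0, 1.0],
                  [2.0, 2.0]])
    d = np.array([1.0, 2.0])
    print(np.linalg.matrix_rank(B))                             # 1
    print(B @ np.array([1.0, 0.0]), B @ np.array([0.0, 1.0]))   # both equal d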

4. Calculation of Solutions
Consider the case of n linear equations in n unknowns. Let A be an n × n matrix,
and c be a vector in R^n. Consider the system of equations given by
    Ax = c.
#2. Prove that if rank(A) = n, then rank(A_c) = n.
– In view of this and Theorem 2, we have to check only if rank(A) = n to see whether
a unique solution exists.
    rank(A) = n ⇒ A is non-singular ⇒ A is invertible.
⇒ Premultiplying Ax = c by A^{-1} we get
    A^{-1}Ax = A^{-1}c ⇒ Ix = A^{-1}c ⇒ x = A^{-1}c.
So x = A^{-1}c is the solution.
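– As a numerical sketch (illustrative values only, not from the notes), the unique solution x = A^{-1}c can be computed as:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])        # rank(A) = 2 = n, so A is invertible
    c = np.array([3.0, 5.0])

    x = np.linalg.inv(A) @ c          # x = A^{-1} c
    print(x)                          # [0.8 1.4]
    # In practice np.linalg.solve(A, c) is preferred over forming the inverse explicitly.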
– In terms of calculating this solution, it remains to learn how to calculate A^{-1}, the
inverse of a non-singular matrix.
- This leads us naturally into the study of determinants.

5. Determinants
Let A be an n × n matrix. We can associate with A a number, denoted by |A|, called
the determinant of A.
The determinant of the n × n matrix is defined recursively as follows:
(1) For a 1 × 1 matrix, which is a number, we define the determinant to be the number
itself.
(2) For any m × m matrix A (m ≥ 2), the cofactor A_{ij} of the element a_{ij} is (-1)^{i+j} times
the determinant of the submatrix obtained from A by deleting row i and column j.
The determinant of the m × m matrix is then given by
    |A| = \sum_{j=1}^{m} a_{1j} A_{1j}.

– Thus using (2) and knowing (1), the determinant of a 2 × 2 matrix is
    a_{11}a_{22} - a_{12}a_{21}.

– This information can then be used in (2) again to obtain the determinant of a 3 × 3
matrix:
    a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}
    = a_{11}(a_{22}a_{33} - a_{32}a_{23}) - a_{12}(a_{21}a_{33} - a_{31}a_{23}) + a_{13}(a_{21}a_{32} - a_{31}a_{22}).
– This procedure can be continued to obtain the determinant of any n × n matrix.
– It is implicit in the definition of |A| that the expansion is done by the first row. However,
it can be shown that for every i = 1, 2, ..., n,
    |A| = \sum_{j=1}^{n} a_{ij} A_{ij},
so that expansion by any row will give the same result.
– Expansion by any column will also give the same result, that is, for every j = 1, 2, ..., n,
    |A| = \sum_{i=1}^{n} a_{ij} A_{ij}.
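– The recursive definition translates directly into code. A sketch in NumPy (inefficient for large n and purely illustrative; in practice one would call np.linalg.det):

    import numpy as np

    def det_by_cofactors(A):
        """Determinant by expansion along the first row (definition (2) above)."""
        A = np.asarray(A, dtype=float)
        m = A.shape[0]
        if m == 1:                    # definition (1): the 1x1 case
            return A[0, 0]
        total = 0.0
        for j in range(m):
            # Submatrix of A with row 1 and column j+1 deleted (0-based indices here).
            minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
            cofactor = (-1) ** j * det_by_cofactors(minor)   # (-1)^{1+(j+1)} = (-1)^j
            total += A[0, j] * cofactor
        return total

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 4.0],
                  [0.0, 2.0, 1.0]])
    print(det_by_cofactors(A), np.linalg.det(A))   # both ≈ -11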

Properties of Determinants:

(i) |A| = |A^T|.

(ii) The multiplication of any one row by a scalar k will change the determinant k-fold.

(iii) The interchange of any two rows will alter the sign, but not the numerical value, of
the determinant.

(iv) If one row is a multiple of another row, the determinant is zero.

(v) The addition of a multiple of any row to another row will leave the determinant
unaltered.

(vi) The expansion of a determinant by “alien” cofactors yields a value of zero. That is,
    \sum_{j=1}^{n} a_{ij} A_{kj} = 0, if i ≠ k.
[Here the expansion is by the i-th row, using cofactors of the k-th row.]

(vii) |AB| = |A| |B|.

– Properties (ii) – (v) hold if the word “row” is replaced uniformly by “column” in each
statement.

– The proofs will be discussed in class.
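– Several of these properties are easy to verify numerically. A small sketch with random matrices (illustrative only, not part of the proofs):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    # (iii) Interchanging two rows flips the sign of the determinant.
    A_swapped = A[[1, 0, 2, 3], :]
    print(np.isclose(np.linalg.det(A_swapped), -np.linalg.det(A)))   # True

    # (v) Adding a multiple of one row to another leaves the determinant unchanged.
    A_added = A.copy()
    A_added[2] += 3.0 * A[0]
    print(np.isclose(np.linalg.det(A_added), np.linalg.det(A)))      # True

    # (vii) |AB| = |A||B|.
    print(np.isclose(np.linalg.det(A @ B),
                     np.linalg.det(A) * np.linalg.det(B)))           # True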



6. Matrix Inversion
Theorem 3:
Let A be an n × n matrix. Then A is invertible if and only if |A| ≠ 0. Furthermore, in
case A is invertible, |A^{-1}| = |A|^{-1}.
– Proof: To be discussed in class.
– Hints:
1. Step 1: To show that if A is invertible then |A| ≠ 0. Use Property (vii).
2. Step 2: To show that if |A| ≠ 0 then A is invertible.
- It is equivalent to show that A is nonsingular.
- Suppose not. Then the column vectors of A, A_1, A_2, ..., A_n, are linearly dependent.
⇒ One column vector can be expressed as a linear combination of the other column vectors.
- Now use Property (v) to show that a contradiction arises.

It follows that for an n × n matrix A, the following statements are equivalent:
– A is invertible;
– A is non-singular;
– |A| ≠ 0;
– rank(A) = n;
– Column vectors of A are linearly independent.

Cofactor Matrix: For an n × n matrix A, we define the cofactor matrix of A to be the
n × n matrix given by

    C = \begin{pmatrix}
        A_{11} & A_{12} & \dots & A_{1n} \\
        A_{21} & A_{22} & \dots & A_{2n} \\
        \vdots & \vdots & \ddots & \vdots \\
        A_{n1} & A_{n2} & \dots & A_{nn}
    \end{pmatrix}.

Adjoint Matrix: The transpose of C is called the adjoint of A, and denoted by adj A;
that is, adj A = C^T.

By the rules of matrix multiplication,


    AC^T = \begin{pmatrix}
        \sum_{j=1}^{n} a_{1j}A_{1j} & \sum_{j=1}^{n} a_{1j}A_{2j} & \dots & \sum_{j=1}^{n} a_{1j}A_{nj} \\
        \sum_{j=1}^{n} a_{2j}A_{1j} & \sum_{j=1}^{n} a_{2j}A_{2j} & \dots & \sum_{j=1}^{n} a_{2j}A_{nj} \\
        \vdots & \vdots & \ddots & \vdots \\
        \sum_{j=1}^{n} a_{nj}A_{1j} & \sum_{j=1}^{n} a_{nj}A_{2j} & \dots & \sum_{j=1}^{n} a_{nj}A_{nj}
    \end{pmatrix}
    = \begin{pmatrix}
        |A| & 0 & \dots & 0 \\
        0 & |A| & \dots & 0 \\
        \vdots & \vdots & \ddots & \vdots \\
        0 & 0 & \dots & |A|
    \end{pmatrix}
    = |A| I.
– Note that this calculation is valid for any n × n matrix, invertible or otherwise.
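– The identity AC^T = |A| I can also be checked numerically. A sketch (illustrative values; the cofactor helper below is an added illustration, not from the notes):

    import numpy as np

    def cofactor_matrix(A):
        """Matrix of cofactors A_ij = (-1)^{i+j} times the (i, j) minor."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        C = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 4.0],
                  [0.0, 2.0, 1.0]])
    C = cofactor_matrix(A)
    print(np.allclose(A @ C.T, np.linalg.det(A) * np.eye(3)))   # True: AC^T = |A| I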

Inverse of a Matrix:

– If A is invertible, there is A^{-1} such that AA^{-1} = A^{-1}A = I.

– Consider the relation AC^T = |A| I. Premultiplying by A^{-1} we get
    C^T = |A| A^{-1}.
– Since A is invertible, we have |A| ≠ 0. Then we can divide by |A| and get
    A^{-1} = C^T / |A| = adj A / |A|.

- This gives a formula for computing the inverse of an invertible matrix A in terms
of the determinant and cofactors of A.
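– A small check of the formula A^{-1} = adj A / |A| against NumPy's built-in inverse (a 2 × 2 illustration with the cofactors written out by hand; values are made up):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    # Cofactors: A11 = 3, A12 = -1, A21 = -1, A22 = 2.
    C = np.array([[ 3.0, -1.0],
                  [-1.0,  2.0]])
    A_inv = C.T / np.linalg.det(A)                 # A^{-1} = adj A / |A|
    print(np.allclose(A_inv, np.linalg.inv(A)))    # True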

7. Cramer's Rule
Recall that we wanted to calculate the unique solution of a system of n equations in
n unknowns given by
    Ax = c,
where A is an n × n matrix and c is a vector in R^n.
– We found that the unique solution is given by
    x = A^{-1}c.
– Using the formula for A^{-1} derived above we conclude that
    x = A^{-1}c = (adj A / |A|) c.

Let us evaluate x_i using the above relationship (here e_i denotes the i-th unit row vector):

    x_i = e_i x = e_i (adj A / |A|) c
        = (A_{1i}  A_{2i}  \dots  A_{ni}) c / |A|
        = (c_1 A_{1i} + c_2 A_{2i} + \dots + c_n A_{ni}) / |A|
        = \frac{1}{|A|} \begin{vmatrix}
            a_{11} & \dots & a_{1,i-1} & c_1 & a_{1,i+1} & \dots & a_{1n} \\
            a_{21} & \dots & a_{2,i-1} & c_2 & a_{2,i+1} & \dots & a_{2n} \\
            \vdots &       & \vdots    & \vdots & \vdots &       & \vdots \\
            a_{n1} & \dots & a_{n,i-1} & c_n & a_{n,i+1} & \dots & a_{nn}
        \end{vmatrix}.

This gives us an easy way to compute the solution for x_i:
– Replace the i-th column of A by the vector c and find the determinant of this matrix.
– Dividing this number by the determinant of A yields the solution for x_i.
– This rule is known as Cramer's Rule.
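– Cramer's Rule translates directly into code. A NumPy sketch (illustrative values only, assuming |A| ≠ 0):

    import numpy as np

    def cramer_solve(A, c):
        """Solve Ax = c by Cramer's Rule (assumes A is square with |A| != 0)."""
        A = np.asarray(A, dtype=float)
        c = np.asarray(c, dtype=float)
        det_A = np.linalg.det(A)
        x = np.empty(len(c))
        for i in range(len(c)):
            A_i = A.copy()
            A_i[:, i] = c                    # replace the i-th column of A by c
            x[i] = np.linalg.det(A_i) / det_A
        return x

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    c = np.array([3.0, 5.0])
    print(cramer_solve(A, c))                # [0.8 1.4]
    print(np.linalg.solve(A, c))             # same result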

References
Must read the following sections from the textbook:
Section 9.1 (pages 188 – 194) [You can skip Theorems 9.1 and 9.2.],
Section 9.2 (pages 194 – 197).
This material is based on
1. Hadley, G., Linear Algebra, Massachusetts: Addison-Wesley, 1964 (chapters 3, 5),
2. Hohn, Franz E., Elementary Matrix Algebra, New Delhi: Amerind, 1971 (chapters
2, 6, 7),
3. Gale, David, The Theory of Linear Economic Models, New York: McGraw-Hill, 1960
(chapter 2).
Some of this material is also covered in
4. Dorfman, Robert, Paul A. Samuelson and Robert M. Solow, Linear Programming
and Economic Analysis, New York: McGraw-Hill, 1958 (Appendix B).
