
Solving Sets of Equations

[Images: The Nine Chapters on the Mathematical Art (九章算術), c. 150 B.C.E.; Carl Friedrich Gauss, 1777-1855]


Outline
• Existence, uniqueness, and conditioning
• Solving linear systems
• Special types of linear systems
• Software for linear systems

Numerical Methods © Wen-Chieh Lin 2


Systems of Linear Equations
• Given m × n matrix A and m-vector b, find unknown n-vector x satisfying Ax = b
• "Can b be expressed as linear combination of columns of A?"
• If so, coefficients of linear combination are given by components of solution vector x
• Solution may or may not exist, and may or may not be unique
• For now, we consider only m = n
Nonsingularity and Singularity
• An n × n matrix A is nonsingular if it has any of the following equivalent properties:
  • Inverse of A, denoted by A^-1, exists
  • det(A) ≠ 0
  • rank(A) = n
  • For any vector z ≠ 0, Az ≠ 0


Existence and Uniqueness
• Existence and uniqueness of solution to Ax = b depend on whether A is singular or nonsingular
• Can also depend on b, but only in the singular case



Geometric Interpretation
• In two dimensions, each equation determines a straight line in the plane
• Solution is the intersection point of the two lines
• If the two straight lines are not parallel (nonsingular), then the intersection point is unique



Geometric Interpretation (cont.)
• If the two straight lines are parallel (singular), then they either do not intersect (no solution) or else coincide (any point along the line is a solution)
• In higher dimensions, each equation determines a hyperplane; if the matrix is nonsingular, the intersection of the hyperplanes is a unique solution



Example: Nonsingularity
• 2 × 2 system
      -5 x1 + 3 x2 = b1
       7 x1 + 2 x2 = b2
  or in matrix-vector notation
      Ax = [ -5  3 ] [ x1 ] = [ b1 ] = b
           [  7  2 ] [ x2 ]   [ b2 ]
  is nonsingular regardless of value of b
• For example, if b = [1, 11]^T, then x = [1, 2]^T is the unique solution
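A 2 × 2 system like this one can be checked directly with Cramer's rule. A minimal pure-Python sketch (the helper name `solve_2x2` is my own, and the coefficients are those of the example above):

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 system by Cramer's rule; requires nonzero determinant."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("matrix is singular")
    x1 = (b1 * a22 - b2 * a12) / det
    x2 = (a11 * b2 - a21 * b1) / det
    return x1, x2

# Nonsingular example: det = (-5)(2) - (3)(7) = -31, never zero
x1, x2 = solve_2x2(-5, 3, 7, 2, 1, 11)  # -> (1.0, 2.0)
```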
Example: Singularity
• 2 × 2 system
      Ax = [ -5   3 ] [ x1 ] = [ b1 ] = b
           [ 10  -6 ] [ x2 ]   [ b2 ]
  is singular regardless of value of b
• With b = [1, 10]^T, there is no solution
• With b = [1, -2]^T, x = [t, (5t + 1)/3]^T is a solution for any real number t, so there are infinitely many solutions
Solving Linear Systems
• To solve a linear system, transform it into one whose solution is the same but easier to compute
• What type of transformation of a linear system leaves the solution unchanged?
• We can premultiply (from left) both sides of a linear system Ax = b by any nonsingular matrix M without affecting its solution
• Solution to MAx = Mb is given by
      x = (MA)^-1 Mb = A^-1 M^-1 M b = A^-1 b



Example: Permutation Matrix
• Permutation matrix P has exactly one 1 in each row and column and zeros elsewhere, i.e., an identity matrix with rows or columns permuted

      P = [ 1 0 0 0 ]        [ 9 6 2 13 ]         [ 9 6 2 13 ]
          [ 0 0 0 1 ]    A = [ 4 2 8 11 ]    PA = [ 3 2 6  8 ]
          [ 0 0 1 0 ]        [ 0 7 1  9 ]         [ 0 7 1  9 ]
          [ 0 1 0 0 ]        [ 3 2 6  8 ]         [ 4 2 8 11 ]

• Note that P^-1 = P^T

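The two facts on this slide can be verified directly. A pure-Python sketch using lists of rows (the `matmul` helper is my own):

```python
def matmul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

P = [[1, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0],
     [0, 1, 0, 0]]
A = [[9, 6, 2, 13],
     [4, 2, 8, 11],
     [0, 7, 1, 9],
     [3, 2, 6, 8]]

PA = matmul(P, A)                      # rows 2 and 4 of A are swapped
PT = [list(row) for row in zip(*P)]    # transpose of P
I = matmul(P, PT)                      # P @ P^T = identity, so P^-1 = P^T
```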


Example: Permutation Matrix (cont.)
• Premultiplying both sides of the system by a permutation matrix, PAx = Pb, reorders the rows, but the solution x is unchanged
• Postmultiplying A by a permutation matrix, APx = b, reorders the columns, which permutes the components of the original solution:
      x = (AP)^-1 b = P^-1 A^-1 b = P^T (A^-1 b)



Triangular Linear Systems
• What type of linear system is easy to solve?

      Ax = Lx = [ a11   0   ...   0  ] [ x1 ]   [ b1 ]
                [ a21  a22  ...   0  ] [ x2 ] = [ b2 ] = b
                [  :    :    .    :  ] [  : ]   [  : ]
                [ an1  an2  ...  ann ] [ xn ]   [ bn ]

      Ax = Ux = [ a11  a12  ...  a1n      ] [ x1 ]   [ b1 ]
                [  0   a22  ...   :       ] [ x2 ] = [ b2 ] = b
                [  :         .   a(n-1,n) ] [  : ]   [  : ]
                [  0   ...   0   ann      ] [ xn ]   [ bn ]
Triangular Matrices
• Two specific triangular forms are of particular interest
  • lower triangular: all entries above main diagonal are zero, aij = 0 for i < j
  • upper triangular: all entries below main diagonal are zero, aij = 0 for i > j

      L = [ a11   0   ...   0  ]      U = [ a11  a12  ...  a1n      ]
          [ a21  a22  ...   0  ]          [  0   a22  ...   :       ]
          [  :    :    .    :  ]          [  :         .   a(n-1,n) ]
          [ an1  an2  ...  ann ]          [  0   ...   0   ann      ]
Forward Substitution
• Forward substitution for lower triangular system Lx = b

      [ a11   0   ...   0  ] [ x1 ]   [ b1 ]
      [ a21  a22  ...   0  ] [ x2 ] = [ b2 ]
      [  :    :    .    :  ] [  : ]   [  : ]
      [ an1  an2  ...  ann ] [ xn ]   [ bn ]

      x1 = b1 / a11
      x2 = (b2 - a21 x1) / a22
      ...
      xi = (bi - sum_{j=1}^{i-1} aij xj) / aii,   i = 1, ..., n
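The formula above transcribes directly into code. A minimal pure-Python sketch for a lower triangular matrix stored as a list of rows:

```python
def forward_substitution(L, b):
    """Solve Lx = b for lower triangular L with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))  # sum_{j<i} a_ij x_j
        x[i] = (b[i] - s) / L[i][i]
    return x

# Example: x1 = 4/2 = 2, x2 = (5 - 1*2)/3 = 1
x = forward_substitution([[2.0, 0.0], [1.0, 3.0]], [4.0, 5.0])  # -> [2.0, 1.0]
```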
Backward Substitution
• Back substitution for upper triangular system Ux = b

      [ a11  a12  ...      a1n      ] [ x1     ]   [ b1     ]
      [  0    .            :        ] [  :     ]   [  :     ]
      [  :    0  a(n-1,n-1) a(n-1,n)] [ x(n-1) ] = [ b(n-1) ]
      [  0   ...    0      ann      ] [ xn     ]   [ bn     ]

      xn = bn / ann
      x(n-1) = (b(n-1) - a(n-1,n) xn) / a(n-1,n-1)
      ...
      xi = (bi - sum_{j=i+1}^{n} aij xj) / aii,   i = n, ..., 1
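The mirror-image sketch for an upper triangular system, solving from the last unknown upward:

```python
def back_substitution(U, b):
    """Solve Ux = b for upper triangular U with nonzero diagonal."""
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))  # sum_{j>i} a_ij x_j
        x[i] = (b[i] - s) / U[i][i]
    return x

# Example: x2 = 6/3 = 2, x1 = (5 - 1*2)/2 = 1.5
x = back_substitution([[2.0, 1.0], [0.0, 3.0]], [5.0, 6.0])  # -> [1.5, 2.0]
```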
Elimination
• We have been using the elimination method to solve linear equations since high school!

      a11 x1 + a12 x2 = b1      (multiply by -(a21/a11) and add to second equation)
      a21 x1 + a22 x2 = b2

      a11 x1 + a12 x2 = b1
      a22 x2 - (a21/a11) a12 x2 = b2 - (a21/a11) b1



Elimination (cont.)
• What I'm going to show is just a systematic way to describe the process so that we can
  • handle n linear equations with n unknowns
  • develop an algorithm consisting of a series of matrix multiplications to solve a system of linear equations



Elimination (cont.)
• To transform a general linear system into triangular form, we need to replace selected nonzero entries of the matrix by zeros:
      MA = L or MA = U
• This can be done by taking linear combinations of rows
• Let's consider a column vector for now:

      MA = M [ a1  a2  ...  an ] = [ Ma1  Ma2  ...  Man ]
Elimination (cont.)
• Consider a 2-vector
      a = [ a1 ]
          [ a2 ]
• If a1 ≠ 0, then
      [    1     0 ] [ a1 ]   [ a1 ]
      [ -a2/a1   1 ] [ a2 ] = [  0 ]


Elementary Elimination Matrices
• More generally, we can eliminate all entries below the kth position in an n-vector a by the transformation

      Mk a = [ 1  ...  0       0  ...  0 ] [ a1     ]   [ a1 ]
             [ :   .   :       :       : ] [  :     ]   [  : ]
             [ 0  ...  1       0  ...  0 ] [ ak     ] = [ ak ]
             [ 0  ... -m(k+1)  1  ...  0 ] [ a(k+1) ]   [ 0  ]
             [ :       :       :   .   : ] [  :     ]   [  : ]
             [ 0  ... -mn      0  ...  1 ] [ an     ]   [ 0  ]

      where mi = ai / ak,  i = k+1, ..., n
• Divisor ak, called the pivot, must be nonzero
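A small sketch building M_k explicitly and applying it to a vector (indices are 0-based here; the helper names `elimination_matrix` and `apply` are my own):

```python
def elimination_matrix(a, k):
    """Return M_k (list of rows) that zeros entries of a below position k."""
    n = len(a)
    if a[k] == 0:
        raise ZeroDivisionError("pivot must be nonzero")
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for i in range(k + 1, n):
        M[i][k] = -a[i] / a[k]   # -m_i = -a_i / a_k
    return M

def apply(M, a):
    return [sum(M[i][j] * a[j] for j in range(len(a))) for i in range(len(a))]

a = [2.0, 4.0, -2.0]
M1 = elimination_matrix(a, 0)
result = apply(M1, a)            # -> [2.0, 0.0, 0.0]
```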
Elementary Elimination Matrices (cont.)
• Matrix Mk, called an elementary elimination matrix, adds a multiple of row k to each subsequent row, with multipliers mi chosen so that the result is zero
• Rows 1, ..., k of matrix A are not affected!

      Mk a = [ 1  ...  0       0  ...  0 ] [ a1     ]   [ a1 ]
             [ :   .   :       :       : ] [  :     ]   [  : ]
             [ 0  ...  1       0  ...  0 ] [ ak     ] = [ ak ]
             [ 0  ... -m(k+1)  1  ...  0 ] [ a(k+1) ]   [ 0  ]
             [ :       :       :   .   : ] [  :     ]   [  : ]
             [ 0  ... -mn      0  ...  1 ] [ an     ]   [ 0  ]

      (upper-left k × k block is the identity I)
Elementary Elimination Matrices (cont.)
• Matrix Mk, called an elementary elimination matrix, adds a multiple of row k to each subsequent row, with multipliers mi chosen so that the result is zero
• Subsequent premultiplications by Mk's will not change the already processed rows 1, ..., k of matrix A:

      M(k+1) Mk ... M1 A = M(k+1) [ a11  ...   ...      ...  ]
                                  [  0    .              :   ]
                                  [  :       akk   ...   :   ]
                                  [  0   ...  0  a(k+1,k+1)  ]
                                  [  :        :      :       ]
                                  [  0   ...  0  a(n,k+1) ann]

  (rows 1, ..., k, already reduced, are not changed by M(k+1))
Properties of Elementary Elimination Matrices
• Mk is unit lower triangular and nonsingular
• Mk = I - mk ek^T, where ek is the kth column of the identity matrix and mk = [0, ..., 0, m(k+1), ..., mn]^T
• Mk^-1 = I + mk ek^T

      ek^T = [ 0  ...  0  1  0  ...  0 ]      (1 in kth position)

      Mk = [ 1  ...  0       0  ...  0 ]                       [ 0      ]
           [ :   .   :       :       : ]                       [ :      ]
           [ 0  ...  1       0  ...  0 ]                       [ 0      ]
           [ 0  ... -m(k+1)  1  ...  0 ] = I - mk ek^T,   mk = [ m(k+1) ]
           [ :       :       :   .   : ]                       [ :      ]
           [ 0  ... -mn      0  ...  1 ]                       [ mn     ]

      (the -m entries sit in the kth column)
Gaussian Elimination
To reduce general linear system Ax = b to upper triangular form:
• First choose M1, with a11 as pivot, to eliminate the first column of A below the first row
• System becomes M1Ax = M1b, but solution is unchanged
• Next choose M2, using a22 as pivot, to eliminate the second column of M1A below the second row
• System becomes M2M1Ax = M2M1b, but solution is unchanged
Gaussian Elimination (cont.)
• Process continues for each successive column until all subdiagonal entries have been zeroed

      M(n-1) ... M1 A x = M(n-1) ... M1 b
      MAx = Mb
      Ux = Mb

• Resulting upper triangular linear system can be solved by back substitution to obtain solution to original linear system Ax = b
• Process just described is called Gaussian elimination
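The elimination loop described above can be sketched directly; this version applies the row operations (the effect of the Mk's) without forming the matrices, and does no pivoting, so it assumes every pivot is nonzero (pivoting comes later in these slides):

```python
def gaussian_eliminate(A, b):
    """Reduce Ax = b to upper triangular Ux = Mb; returns (U, transformed b)."""
    n = len(b)
    U = [row[:] for row in A]        # work on copies
    c = b[:]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]    # multiplier m_i for row i
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
            c[i] -= m * c[k]
    return U, c

# The worked example from the next slides:
U, c = gaussian_eliminate([[4.0, -2.0, 1.0], [-3.0, -1.0, 4.0], [1.0, -1.0, 3.0]],
                          [15.0, 8.0, 13.0])
```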
LU Factorization
• U = MA is upper triangular
• A = M^-1 U
• It can be proved that M^-1 is unit lower triangular:
      M^-1 = (M(n-1) ... M1)^-1 = M1^-1 ... M(n-1)^-1
• So we have
      A = LU
  with L unit lower triangular and U upper triangular
• Gaussian elimination produces an LU factorization of the matrix into triangular factors
Solving Ax = b by LU Factorization
• Ax = b  ⇒  LUx = b
• Let y = Ux  ⇒  Ly = b
• Solve Ly = b by forward substitution
• Solve Ux = y by back substitution
• Note that y is the same as the transformed right-hand side in Gaussian elimination:
      Ux = Mb  ⇒  y = Mb
• Gaussian elimination and LU factorization are two ways of expressing the same solution process
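The whole process above can be sketched in a few lines: factor A = LU by Gaussian elimination (storing the multipliers in L), then forward- and back-substitute. No pivoting, so nonzero pivots are assumed; the function names are my own:

```python
def lu_factor(A):
    """Return (L, U) with L unit lower triangular and U upper triangular."""
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]
            L[i][k] = m                         # store the multiplier
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                          # forward substitution: Ly = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # back substitution: Ux = y
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

A = [[4.0, -2.0, 1.0], [-3.0, -1.0, 4.0], [1.0, -1.0, 3.0]]
L, U = lu_factor(A)
x = lu_solve(L, U, [15.0, 8.0, 13.0])   # approximately [2, -2, 3]
```

Because the factors are kept, `lu_solve` can be called again with a different b without refactoring A, which is the point of the next slide.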
Using LU for multiple right-hand sides
• If an LU factorization of matrix A is given, we can solve Ax = b for different b vectors as follows:
  • Ax = b  ⇒  LUx = b
  • Solve Ly = b using forward substitution
  • Then solve Ux = y using backward substitution
Example: Gaussian Elimination
• Use Gaussian elimination to solve linear system

      Ax = [  4  -2  1 ] [ x1 ]   [ 15 ]
           [ -3  -1  4 ] [ x2 ] = [  8 ] = b
           [  1  -1  3 ] [ x3 ]   [ 13 ]

      M1A = [   1   0  0 ] [  4  -2  1 ]   [ 4  -2    1    ]
            [  3/4  1  0 ] [ -3  -1  4 ] = [ 0  -2.5  4.75 ]
            [ -1/4  0  1 ] [  1  -1  3 ]   [ 0  -0.5  2.75 ]

      M1b = [   1   0  0 ] [ 15 ]   [ 15    ]
            [  3/4  1  0 ] [  8 ] = [ 19.25 ]
            [ -1/4  0  1 ] [ 13 ]   [  9.25 ]
Example: Gaussian Elimination (cont.)
• To eliminate the subdiagonal entry of the second column of M1A:

      M2M1A = [ 1   0    0 ] [ 4  -2    1    ]   [ 4  -2    1    ]
              [ 0   1    0 ] [ 0  -2.5  4.75 ] = [ 0  -2.5  4.75 ]
              [ 0  -0.2  1 ] [ 0  -0.5  2.75 ]   [ 0   0    1.8  ]

      M2M1b = [ 1   0    0 ] [ 15    ]   [ 15    ]
              [ 0   1    0 ] [ 19.25 ] = [ 19.25 ] = Mb
              [ 0  -0.2  1 ] [  9.25 ]   [  5.40 ]


Example: Gaussian Elimination (cont.)
• We have reduced the original system to the equivalent upper triangular system

      Ux = [ 4  -2    1    ] [ x1 ]   [ 15    ]
           [ 0  -2.5  4.75 ] [ x2 ] = [ 19.25 ] = Mb
           [ 0   0    1.8  ] [ x3 ]   [  5.4  ]

  which now can be solved by back substitution to obtain

      x = [ x1 ]   [  2 ]
          [ x2 ] = [ -2 ]
          [ x3 ]   [  3 ]
Example: Gaussian Elimination (cont.)
• To write out the LU factorization explicitly:

      L = M1^-1 M2^-1 = [   1   0    0 ] [ 1  0    0 ]   [   1   0    0 ]
                        [ -3/4  1    0 ] [ 0  1    0 ] = [ -3/4  1    0 ]
                        [  1/4  0    1 ] [ 0  0.2  1 ]   [  1/4  0.2  1 ]

      A = [  4  -2  1 ]   [   1   0    0 ] [ 4  -2    1    ]
          [ -3  -1  4 ] = [ -3/4  1    0 ] [ 0  -2.5  4.75 ] = LU
          [  1  -1  3 ]   [  1/4  0.2  1 ] [ 0   0    1.80 ]

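Multiplying these factors back together should reproduce A, which is a handy sanity check. A small sketch (the `matmul` helper is my own):

```python
def matmul(X, Y):
    """Multiply two matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

L = [[1.0, 0.0, 0.0], [-0.75, 1.0, 0.0], [0.25, 0.2, 1.0]]
U = [[4.0, -2.0, 1.0], [0.0, -2.5, 4.75], [0.0, 0.0, 1.8]]
A = matmul(L, U)   # should be close to [[4,-2,1], [-3,-1,4], [1,-1,3]]
```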


Gauss-Jordan Elimination
• In Gauss-Jordan elimination, the matrix is reduced to diagonal rather than triangular form
• Row combinations are used to eliminate entries above as well as below the diagonal



Gauss-Jordan Elimination (cont.)
• Elimination matrix used for a given column vector a is

      Mk a = [ 1  ...  0  -m1      0  ...  0 ] [ a1     ]   [ 0  ]
             [ :   .   :   :       :       : ] [  :     ]   [  : ]
             [ 0  ...  1  -m(k-1)  0  ...  0 ] [ a(k-1) ]   [ 0  ]
             [ 0  ...  0   1       0  ...  0 ] [ ak     ] = [ ak ]
             [ 0  ...  0  -m(k+1)  1  ...  0 ] [ a(k+1) ]   [ 0  ]
             [ :       :   :       :   .   : ] [  :     ]   [  : ]
             [ 0  ...  0  -mn      0  ...  1 ] [ an     ]   [ 0  ]

      where mi = ai / ak,  i = 1, ..., n  (i ≠ k)
Comparison with Gaussian Elimination
• In Gaussian elimination, Mk leaves the entries above the pivot alone:

      Mk a = [ 1  ...  0   0       0  ...  0 ] [ a1     ]   [ a1     ]
             [ :   .   :   :       :       : ] [  :     ]   [  :     ]
             [ 0  ...  1   0       0  ...  0 ] [ a(k-1) ]   [ a(k-1) ]
             [ 0  ...  0   1       0  ...  0 ] [ ak     ] = [ ak     ]
             [ 0  ...  0  -m(k+1)  1  ...  0 ] [ a(k+1) ]   [ 0      ]
             [ :       :   :       :   .   : ] [  :     ]   [  :     ]
             [ 0  ...  0  -mn      0  ...  1 ] [ an     ]   [ 0      ]

      where mi = ai / ak,  i = k+1, ..., n
Solving Ax = b by Gauss-Jordan Elimination

      M(n-1) ... M1 A x = M(n-1) ... M1 b
      MAx = Mb
      Dx = Mb      (D is a diagonal matrix)

• Gauss-Jordan elimination requires about n^3/2 multiplications and a similar number of additions, 50% more expensive than LU factorization (see textbook)
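A minimal sketch of the Gauss-Jordan loop: at each stage, entries both above and below the pivot are eliminated, leaving a diagonal system Dx = Mb. No pivoting, so nonzero pivots are assumed:

```python
def gauss_jordan(A, b):
    """Solve Ax = b by reducing A to diagonal form (no pivoting)."""
    n = len(b)
    D = [row[:] for row in A]
    c = b[:]
    for k in range(n):
        for i in range(n):
            if i == k:
                continue                 # eliminate above AND below the pivot
            m = D[i][k] / D[k][k]        # m_i = a_i / a_k
            for j in range(n):
                D[i][j] -= m * D[k][j]
            c[i] -= m * c[k]
    return [c[i] / D[i][i] for i in range(n)]   # Dx = c with D diagonal

x = gauss_jordan([[4.0, -2.0, 1.0], [-3.0, -1.0, 4.0], [1.0, -1.0, 3.0]],
                 [15.0, 8.0, 13.0])      # approximately [2, -2, 3]
```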


Row Interchanges
• Gaussian elimination breaks down if a leading diagonal entry of the remaining unreduced matrix is zero at any stage
• Easy fix: if the diagonal entry in column k is zero, then interchange row k with some subsequent row having a nonzero entry in column k, and then proceed as usual


Row Interchanges (cont.)
• If there is no nonzero entry on or below the diagonal in column k, then there is nothing to do at this stage, so skip to the next column
• A zero on the diagonal causes the resulting upper triangular matrix to be singular, but the LU factorization can still be completed
• Subsequent back substitution will fail, however, as it should for a singular matrix
Partial Pivoting
• In principle, any nonzero entry can be a pivot, but in practice, a pivot should be chosen to minimize error propagation
• To avoid amplifying previous rounding errors when multiplying the remaining portion of the matrix by an elementary elimination matrix, multipliers should not exceed 1 in magnitude
• This can be accomplished by choosing the entry of largest magnitude on or below the diagonal as pivot at each stage
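The rule above changes the elimination loop in only one place: before each stage, swap in the row whose entry in column k has the largest magnitude. A sketch combining this with back substitution (function name my own):

```python
def gauss_partial_pivot(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    U = [row[:] for row in A]
    c = b[:]
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(U[i][k]))  # pivot row
        if U[p][k] == 0:
            continue                     # whole column is zero; skip stage
        U[k], U[p] = U[p], U[k]          # interchange rows k and p
        c[k], c[p] = c[p], c[k]
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]        # |m| <= 1 by choice of pivot
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
            c[i] -= m * c[k]
    x = [0.0] * n                        # back substitution
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

# A zero pivot in position (1,1) is handled by the row interchange:
x = gauss_partial_pivot([[0.0, 1.0], [1.0, 1.0]], [1.0, 2.0])  # -> [1.0, 1.0]
```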
Partial Pivoting (cont.)
• Partial pivoting is necessary in practice for a numerically stable implementation of Gaussian elimination for general linear systems


Example: Pivoting and Precision
 Consider  x  By  C

Dx  Ey  F
 Without pivoting
x  By  C
( E  BD  ) y  F  CD 
F  CD  CD C
y  
E  BD  BD B
C  B(C B) C  C
x  0
 
Example: Pivoting and Precision (cont.)
• With pivoting (interchange the two rows):
      Dx + Ey = F
      εx + By = C
• Eliminating x from the second equation:
      Dx + Ey = F
      (B - Eε/D) y = C - Fε/D

      y = (C - Fε/D) / (B - Eε/D) ≈ C/B

      x = (F - E(C/B)) / D = (BF - CE) / (BD)
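The two derivations above can be run numerically. This sketch picks concrete values of my own choosing (B = C = D = E = 1, F = 2, so the true solution is x ≈ 1, y ≈ 1) and a tiny ε; in double precision the unpivoted formulas collapse x to 0, while the pivoted ones recover it:

```python
eps = 1e-20

# Without pivoting: eliminate x from the second equation using eps as pivot
m = 1.0 / eps                           # multiplier D/eps, huge
y = (2.0 - m * 1.0) / (1.0 - m * 1.0)   # (F - CD/eps) / (E - BD/eps) -> 1.0
x_naive = (1.0 - y) / eps               # (C - By) / eps -> 0.0, wrong

# With pivoting: interchange the rows, pivot on the coefficient D = 1
m = eps / 1.0                           # multiplier eps/D, tiny
y = (1.0 - m * 2.0) / (1.0 - m * 1.0)   # (C - F*eps/D) / (B - E*eps/D) -> 1.0
x_pivot = (2.0 - y) / 1.0               # (F - Ey) / D -> 1.0, correct

print(x_naive, x_pivot)   # 0.0 1.0
```

The catastrophic step without pivoting is 2.0 - 1e20, which rounds to -1e20 and loses the right-hand side entirely; with pivoting every multiplier is at most 1 in magnitude and nothing is lost.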