BTL Nhóm Q - 6.1-6.2
Introduction
Kirchhoff’s laws of electrical circuits state that both the net flow of current through each
junction and the net voltage drop around each closed loop of a circuit are zero. Suppose
that a potential of V volts is applied between the points A and G in the circuit and that i1 , i2 ,
i3 , i4 , and i5 represent current flow as shown in the diagram. Using G as a reference point,
Kirchhoff’s laws imply that the currents satisfy the following system of linear equations:
5i1 + 5i2 = V ,
i3 − i4 − i5 = 0,
2i4 − 3i5 = 0,
i1 − i2 − i3 = 0,
5i2 − 7i3 − 2i4 = 0.
[Figure: circuit diagram with nodes A through G, a source of V volts between A and G, resistors of 2Ω, 3Ω, 2Ω, 5Ω, 2Ω, 1Ω, 3Ω, and 4Ω on the branches, and branch currents i1, i2, i3, i4, and i5 labeled as in the equations above.]
The solution of systems of this type will be considered in this chapter. This application
is discussed in Exercise 29 of Section 6.6.
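As a concrete check, the circuit system can be solved numerically. The sketch below is illustrative, with an assumed value V = 5.35 volts; it uses numpy's library solver, which itself carries out the Gaussian elimination with partial pivoting developed in this chapter:

```python
import numpy as np

# Coefficient matrix for the unknown currents (i1, i2, i3, i4, i5),
# one row per Kirchhoff equation listed above.
A = np.array([[5,  5,  0,  0,  0],    # 5 i1 + 5 i2        = V
              [0,  0,  1, -1, -1],    # i3 - i4 - i5       = 0
              [0,  0,  0,  2, -3],    # 2 i4 - 3 i5        = 0
              [1, -1, -1,  0,  0],    # i1 - i2 - i3       = 0
              [0,  5, -7, -2,  0]])   # 5 i2 - 7 i3 - 2 i4 = 0
V = 5.35                              # assumed applied potential, in volts
b = np.array([V, 0, 0, 0, 0])
i = np.linalg.solve(A, b)
print(np.round(i, 4))   # approximately [0.66, 0.41, 0.25, 0.15, 0.10]
```

Substituting these currents back into each equation confirms that all five are satisfied.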
Linear systems of equations are associated with many problems in engineering and sci-
ence, as well as with applications of mathematics to the social sciences and the quantitative
study of business and economic problems.
In this chapter we consider direct methods for solving a linear system of n equations
in n variables. Such a system has the form
E1 : a11 x1 + a12 x2 + · · · + a1n xn = b1 ,
E2 : a21 x1 + a22 x2 + · · · + a2n xn = b2 ,
  ⋮                                            (6.1)
En : an1 x1 + an2 x2 + · · · + ann xn = bn .
In this system we are given the constants ai j , for each i, j = 1, 2, . . . , n, and bi , for each
i = 1, 2, . . . , n, and we need to determine the unknowns x1 , . . . , xn .
Copyright 2010 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s).
Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it.
CHAPTER 6 Direct Methods for Solving Linear Systems
Direct techniques are methods that theoretically give the exact solution to the system in
a finite number of steps. In practice, of course, the solution obtained will be contaminated by
the round-off error that is involved with the arithmetic being used. Analyzing the effect of
this round-off error and determining ways to keep it under control will be a major component
of this chapter.
A course in linear algebra is not assumed as a prerequisite for this chapter, so we
will include a number of the basic notions of the subject. These results will also be used
in Chapter 7, where we consider methods of approximating the solution to linear systems
using iterative methods.
6.1 Linear Systems of Equations
In the new system, E2 is used to eliminate the unknown x2 from E3 and E4 by performing
(E3 − 4E2 ) → (E3 ) and (E4 + 3E2 ) → (E4 ). This results in
E1 : x1 + x2       + 3x4 =  4,
E2 :     −x2 − x3 − 5x4 = −7,
E3 :           3x3 + 13x4 = 13,           (6.3)
E4 :               −13x4 = −13.
The system of equations (6.3) is now in triangular (or reduced) form and can be solved
for the unknowns by a backward-substitution process. Since E4 implies x4 = 1, we can
solve E3 for x3 to give
x3 = (1/3)(13 − 13x4) = (1/3)(13 − 13) = 0.
Continuing, E2 gives
x2 = −(−7 + 5x4 + x3 ) = −(−7 + 5 + 0) = 2,
and E1 gives
x1 = 4 − 3x4 − x2 = 4 − 3 − 2 = −1.
The solution to system (6.3), and consequently to system (6.2), is therefore x1 = −1,
x2 = 2, x3 = 0, and x4 = 1.
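The backward-substitution steps just carried out can be sketched in code. The following is a minimal illustration (not part of the original text) that solves the triangular system (6.3):

```python
def back_substitute(U, c):
    """Solve the upper triangular system U x = c by backward substitution,
    assuming every diagonal entry U[i][i] is nonzero."""
    n = len(c)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

# Triangular system (6.3): coefficients of E1..E4 and their right-hand sides.
U = [[1, 1, 0, 3], [0, -1, -1, -5], [0, 0, 3, 13], [0, 0, 0, -13]]
c = [4, -7, 13, -13]
print(back_substitute(U, c))   # [-1.0, 2.0, 0.0, 1.0]
```

The loop runs from the last row upward, exactly mirroring the order x4, x3, x2, x1 used above.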
Definition 6.1 An n × m (n by m) matrix is a rectangular array of elements with n rows and m columns
in which not only is the value of an element important, but also its position in the array.
The notation for an n × m matrix will be a capital letter such as A for the matrix and
lowercase letters with double subscripts, such as ai j , to refer to the entry at the intersection
of the ith row and jth column; that is,
A = [ai j ] = ⎡ a11  a12  · · ·  a1m ⎤
              ⎢ a21  a22  · · ·  a2m ⎥
              ⎢  ⋮    ⋮           ⋮  ⎥
              ⎣ an1  an2  · · ·  anm ⎦ .
Solution The matrix has two rows and three columns, so it is of size 2 × 3. Its entries are
described by a11 = 2, a12 = −1, a13 = 7, a21 = 3, a22 = 1, and a23 = 0.
An n × 1 matrix is called an n-dimensional column vector, and a 1 × n matrix is called
an n-dimensional row vector. Usually the unnecessary subscripts are omitted
for vectors, and a boldface lowercase letter is used for notation. Thus

x = ⎡ x1 ⎤
    ⎢ x2 ⎥
    ⎢  ⋮ ⎥
    ⎣ xn ⎦

is a column vector, and

y = [y1  y2  · · ·  yn ]

a row vector. In addition, row vectors often have commas inserted between the entries to
make the separation clearer. So you might see y written as y = [y1 , y2 , . . . , yn ].
An n × (n + 1) matrix can be used to represent the linear system (6.1)
by first constructing

A = [ai j ] = ⎡ a11  a12  · · ·  a1n ⎤              ⎡ b1 ⎤
              ⎢ a21  a22  · · ·  a2n ⎥    and   b = ⎢ b2 ⎥
              ⎢  ⋮    ⋮           ⋮  ⎥              ⎢  ⋮ ⎥
              ⎣ an1  an2  · · ·  ann ⎦              ⎣ bn ⎦

and then combining these matrices to form

[A, b] = ⎡ a11  a12  · · ·  a1n | b1 ⎤
         ⎢ a21  a22  · · ·  a2n | b2 ⎥
         ⎢  ⋮    ⋮           ⋮ |  ⋮ ⎥
         ⎣ an1  an2  · · ·  ann | bn ⎦ ,
[Margin note: Augmented refers to the fact that the right-hand side of the system has been included in the matrix.]

where the vertical dotted line is used to separate the coefficients of the unknowns from the
values on the right-hand side of the equations. The array [A, b] is called an augmented
matrix.
Repeating the operations involved in Example 1 with the matrix notation results in first
considering the augmented matrix:
⎡  1   1   0   3 |  4 ⎤
⎢  2   1  −1   1 |  1 ⎥
⎢  3  −1  −1   2 | −3 ⎥
⎣ −1   2   3  −1 |  4 ⎦ .
Performing the operations as described in that example produces the augmented matrices
⎡ 1   1   0   3 |   4 ⎤            ⎡ 1   1   0    3 |   4 ⎤
⎢ 0  −1  −1  −5 |  −7 ⎥    and     ⎢ 0  −1  −1   −5 |  −7 ⎥
⎢ 0  −4  −1  −7 | −15 ⎥            ⎢ 0   0   3   13 |  13 ⎥
⎣ 0   3   3   2 |   8 ⎦            ⎣ 0   0   0  −13 | −13 ⎦ .
[Margin note: A technique similar to Gaussian elimination first appeared during the Han dynasty in China in the text Nine Chapters on the Mathematical Art, which was written about 200 B.C.E. Joseph Louis Lagrange (1736–1813) described a technique similar to this procedure in 1778 for the case when the value of each equation is 0. Gauss gave a more general description in Theoria Motus corporum coelestium in sectionibus conicis solem ambientium, which described the least squares technique he used in 1801 to determine the orbit of the minor planet Ceres.]

The final matrix can now be transformed into its corresponding linear system, and
solutions for x1 , x2 , x3 , and x4 can be obtained. The procedure is called Gaussian elimination
with backward substitution.
The general Gaussian elimination procedure applied to the linear system

E1 : a11 x1 + a12 x2 + · · · + a1n xn = b1 ,
E2 : a21 x1 + a22 x2 + · · · + a2n xn = b2 ,
  ⋮                                            (6.4)
En : an1 x1 + an2 x2 + · · · + ann xn = bn ,

is handled in a similar manner. First form the augmented matrix Ã:

Ã = [A, b] = ⎡ a11  a12  · · ·  a1n | a1,n+1 ⎤
             ⎢ a21  a22  · · ·  a2n | a2,n+1 ⎥        (6.5)
             ⎢  ⋮    ⋮           ⋮ |    ⋮   ⎥
             ⎣ an1  an2  · · ·  ann | an,n+1 ⎦ ,
where A denotes the matrix formed by the coefficients. The entries in the (n + 1)st column
are the values of b; that is, ai,n+1 = bi for each i = 1, 2, · · · , n.
Provided a11 ≠ 0, we perform the operations corresponding to

(Ej − (aj1 /a11 )E1 ) → (Ej ),   for each j = 2, 3, . . . , n,

to eliminate the coefficient of x1 in each of these rows. Although the entries in rows 2, 3, . . . , n
are expected to change, for ease of notation we again denote the entry in the ith row and the
jth column by ai j . With this in mind, we follow a sequential procedure for i = 2, 3, . . . , n − 1
and perform the operation

(Ej − (aji /aii )Ei ) → (Ej ),   for each j = i + 1, i + 2, . . . , n,

provided aii ≠ 0. This eliminates (changes the coefficient to zero) xi in each row below the
ith for all values of i = 1, 2, . . . , n − 1. The resulting matrix has the form:
⎡ a11  a12  · · ·  a1n | a1,n+1 ⎤
⎢  0   a22  · · ·  a2n | a2,n+1 ⎥
⎢  ⋮     ⋱    ⋱     ⋮ |    ⋮   ⎥
⎣  0  · · ·   0    ann | an,n+1 ⎦ ,
where, except in the first row, the values of ai j are not expected to agree with those in the
original matrix Ã of (6.5). This triangular matrix represents a linear system with the same
solution set as the original system.
The new linear system is triangular,
so backward substitution can be performed. Solving the nth equation for xn gives

xn = an,n+1 / ann .

Solving the (n − 1)st equation for xn−1 and using the known value for xn yields

xn−1 = (an−1,n+1 − an−1,n xn ) / an−1,n−1 .

Continuing this process, we obtain

xi = (ai,n+1 − Σⱼ₌ᵢ₊₁ⁿ ai j xj ) / aii ,

for each i = n − 1, n − 2, · · · , 2, 1.
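The elimination and backward-substitution steps just described can be sketched together in code. This illustrative version (not the text's Algorithm 6.1) assumes each pivot aii is nonzero when it is used, and is applied here to the system of Example 1:

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination with backward substitution, with no row
    interchanges: assumes every pivot A[i, i] is nonzero when used."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for i in range(n - 1):                 # eliminate column i below row i
        for j in range(i + 1, n):
            m = A[j, i] / A[i, i]          # multiplier m_ji = a_ji / a_ii
            A[j, i:] -= m * A[i, i:]       # (E_j - m_ji E_i) -> (E_j)
            b[j] -= m * b[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):         # backward substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[1, 1, 0, 3], [2, 1, -1, 1], [3, -1, -1, 2], [-1, 2, 3, -1]])
b = np.array([4, 1, -3, 4])
print(gauss_solve(A, b))   # [-1.  2.  0.  1.]
```

The intermediate rows produced by the double loop are exactly the reduced augmented matrices displayed earlier.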
The Gaussian elimination procedure is described more precisely, although more intricately,
by forming a sequence of augmented matrices Ã(1) , Ã(2) , . . . , Ã(n) , where Ã(1) is the matrix
Ã given in (6.5) and Ã(k) , for each k = 2, 3, . . . , n, has entries ai j^(k) , where:

ai j^(k) = ⎧ ai j^(k−1) ,                                              when i = 1, 2, . . . , k − 1 and j = 1, 2, . . . , n + 1,
           ⎨ 0,                                                       when i = k, k + 1, . . . , n and j = 1, 2, · · · , k − 1,
           ⎩ ai j^(k−1) − (a i,k−1^(k−1) / a k−1,k−1^(k−1)) a k−1,j^(k−1) ,   when i = k, k + 1, . . . , n and j = k, k + 1, . . . , n + 1.
Thus

Ã(k) = ⎡ a11^(1)  a12^(1)  a13^(1)  · · ·  a1,k−1^(1)      a1k^(1)      · · ·  a1n^(1)  | a1,n+1^(1)     ⎤
       ⎢    0     a22^(2)  a23^(2)  · · ·  a2,k−1^(2)      a2k^(2)      · · ·  a2n^(2)  | a2,n+1^(2)     ⎥
       ⎢    ⋮        ⋱       ⋱               ⋮               ⋮                  ⋮      |     ⋮          ⎥
       ⎢    ⋮                 ⋱     ak−1,k−1^(k−1)   ak−1,k^(k−1)  · · · ak−1,n^(k−1)  | ak−1,n+1^(k−1) ⎥    (6.6)
       ⎢    ⋮                           0            akk^(k)       · · ·  akn^(k)      | ak,n+1^(k)     ⎥
       ⎢    ⋮                           ⋮              ⋮                   ⋮           |     ⋮          ⎥
       ⎣    0    · · · · · · · · ·      0            ank^(k)       · · ·  ann^(k)      | an,n+1^(k)     ⎦
represents the equivalent linear system for which the variable xk−1 has just been eliminated
from equations Ek , Ek+1 , . . . , En .
[Margin note: The pivot element for a specific column is the entry that is used to place zeros in the other entries in that column.]

The diagonal entry a22^(2) , called the pivot element, is 0, so the procedure cannot continue
in its present form. But operations (Ei ) ↔ (Ej ) are permitted, so a search is made of
the elements a32^(2) and a42^(2) for the first nonzero element. Since a32^(2) ≠ 0, the operation
(E2 ) ↔ (E3 ) is performed to obtain a new matrix,

Ã(2)′ = ⎡ 1  −1   2  −1 | −8 ⎤
        ⎢ 0   2  −1   1 |  6 ⎥
        ⎢ 0   0  −1  −1 | −4 ⎥
        ⎣ 0   0   2   4 | 12 ⎦ .

Since x2 is already eliminated from E3 and E4 , Ã(3) will be Ã(2)′ , and the computations
continue with the operation (E4 + 2E3 ) → (E4 ), giving

Ã(4) = ⎡ 1  −1   2  −1 | −8 ⎤
       ⎢ 0   2  −1   1 |  6 ⎥
       ⎢ 0   0  −1  −1 | −4 ⎥
       ⎣ 0   0   0   2 |  4 ⎦ .
Finally, the matrix is converted back into a linear system that has a solution equivalent to
the solution of the original system, and the backward substitution is applied:

x4 = 4/2 = 2,
x3 = [−4 − (−1)x4 ] / (−1) = 2,
x2 = [6 − x4 − (−1)x3 ] / 2 = 3,
x1 = [−8 − (−1)x4 − 2x3 − (−1)x2 ] / 1 = −7.
Example 2 illustrates what is done if akk^(k) = 0 for some k = 1, 2, . . . , n − 1. The kth
column of Ã(k−1) from the kth row to the nth row is searched for the first nonzero entry. If
apk^(k) ≠ 0 for some p, with k + 1 ≤ p ≤ n, then the operation (Ek ) ↔ (Ep ) is performed to
obtain Ã(k−1)′ . The procedure can then be continued to form Ã(k) , and so on. If apk^(k) = 0 for
each p, it can be shown (see Theorem 6.17 on page 398) that the linear system does not
have a unique solution and the procedure stops. Finally, if ann^(n) = 0, the linear system does
not have a unique solution, and again the procedure stops.
Algorithm 6.1 summarizes Gaussian elimination with backward substitution. The al-
gorithm incorporates pivoting when one of the pivots akk^(k) is 0 by interchanging the kth row
with the pth row, where p is the smallest integer greater than k for which apk^(k) ≠ 0.
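This zero-pivot handling can be sketched as follows. The code is an illustrative implementation, not the text's pseudocode; the test system is chosen so that the second pivot becomes zero, forcing the interchange (E2) ↔ (E3), and its reduction passes through the same matrices shown in the discussion above:

```python
import numpy as np

def gauss_solve_swap(aug):
    """Gaussian elimination on an augmented matrix, swapping in the first
    row below a zero pivot that has a nonzero entry in the pivot column."""
    a = aug.astype(float).copy()
    n = a.shape[0]
    for k in range(n - 1):
        if a[k, k] == 0.0:                    # pivot is zero: search below
            rows = np.nonzero(a[k+1:, k])[0]
            if rows.size == 0:
                raise ValueError("no unique solution")
            p = k + 1 + rows[0]
            a[[k, p]] = a[[p, k]]             # (E_k) <-> (E_p)
        for j in range(k + 1, n):
            a[j] -= (a[j, k] / a[k, k]) * a[k]
    if a[n-1, n-1] == 0.0:
        raise ValueError("no unique solution")
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # backward substitution
        x[i] = (a[i, n] - a[i, i+1:n] @ x[i+1:]) / a[i, i]
    return x

aug = np.array([[1, -1, 2, -1,  -8],
                [2, -2, 3, -3, -20],
                [1,  1, 1,  0,  -2],
                [1, -1, 4,  3,   4]])
print(gauss_solve_swap(aug))   # [-7.  3.  2.  2.]
```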
Illustration The purpose of this illustration is to show what can happen if Algorithm 6.1 fails. The
computations will be done simultaneously on two linear systems:
x1 + x2 + x3 = 4, x1 + x2 + x3 = 4,
2x1 + 2x2 + x3 = 6, and 2x1 + 2x2 + x3 = 4,
x1 + x2 + 2x3 = 6, x1 + x2 + 2x3 = 6.
These systems produce the augmented matrices

Ã = ⎡ 1  1  1 | 4 ⎤            Ã = ⎡ 1  1  1 | 4 ⎤
    ⎢ 2  2  1 | 6 ⎥    and         ⎢ 2  2  1 | 4 ⎥
    ⎣ 1  1  2 | 6 ⎦                ⎣ 1  1  2 | 6 ⎦ .

Since a11 = 1, we perform (E2 − 2E1 ) → (E2 ) and (E3 − E1 ) → (E3 ) to produce

Ã = ⎡ 1  1   1 |  4 ⎤          Ã = ⎡ 1  1   1 |  4 ⎤
    ⎢ 0  0  −1 | −2 ⎥    and       ⎢ 0  0  −1 | −4 ⎥
    ⎣ 0  0   1 |  2 ⎦              ⎣ 0  0   1 |  2 ⎦ .
At this point, a22 = a32 = 0. The algorithm requires that the procedure be halted, and no
solution to either system is obtained. Writing the equations for each system gives
x1 + x2 + x3 = 4, x1 + x2 + x3 = 4,
−x3 = −2, and −x3 = −4,
x3 = 2, x3 = 2.
The first linear system has an infinite number of solutions, which can be described by x3 = 2,
x2 = 2 − x1 , and x1 arbitrary. The second system is inconsistent: its last two equations require
both x3 = 4 and x3 = 2, so it has no solution.
Although Algorithm 6.1 can be viewed as the construction of the augmented matrices
Ã(1) , . . . , Ã(n) , the computations can be performed in a computer using only one n × (n + 1)
array for storage. At each step we simply replace the previous value of ai j by the new one.
In addition, we can store the multipliers mji in the locations of aji because aji has the value
0 for each i = 1, 2, . . . , n − 1 and j = i + 1, i + 2, . . . , n. Thus A can be overwritten by the
multipliers in the entries that are below the main diagonal (that is, the entries of the form aji ,
with j > i) and by the newly computed entries of Ã(n) on and above the main diagonal (the
entries of the form ai j , with j ≥ i). These values can be used to solve other linear systems
involving the original matrix A, as we will see in Section 6.5.
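The storage scheme just described can be sketched as follows. This is a minimal illustration (not from the text) of overwriting a single array with both the multipliers and the reduced entries; reading the result back as a unit lower-triangular L and an upper-triangular U recovers A = LU:

```python
import numpy as np

def eliminate_in_place(A):
    """Forward elimination that stores each multiplier m_ji in the a_ji
    slot it zeroes out, so one n x n array holds all the information.
    Assumes no row interchanges are needed."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for i in range(n - 1):
        for j in range(i + 1, n):
            m = A[j, i] / A[i, i]
            A[j, i+1:] -= m * A[i, i+1:]
            A[j, i] = m                  # store the multiplier, not 0
    return A

A = np.array([[1, 1, 0, 3], [2, 1, -1, 1], [3, -1, -1, 2], [-1, 2, 3, -1]])
compact = eliminate_in_place(A)
U = np.triu(compact)                     # reduced entries, on/above diagonal
L = np.tril(compact, -1) + np.eye(4)     # multipliers, below diagonal
print(np.allclose(L @ U, A))             # True: A = LU
```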
Operation Counts
Both the amount of time required to complete the calculations and the subsequent round-off
error depend on the number of floating-point arithmetic operations needed to solve a routine
problem. In general, the amount of time required to perform a multiplication or division on a
computer is approximately the same and is considerably greater than that required to perform
an addition or subtraction. The actual differences in execution time, however, depend on the
particular computing system. To demonstrate the counting operations for a given method,
we will count the operations required to solve a typical linear system of n equations in
n unknowns using Algorithm 6.1. We will keep the count of the additions/subtractions
separate from the count of the multiplications/divisions because of the time differential.
No arithmetic operations are performed until Steps 5 and 6 in the algorithm. Step
5 requires that (n − i) divisions be performed. The replacement of the equation Ej by
(Ej − mji Ei ) in Step 6 requires that mji be multiplied by each term in Ei , resulting in a total
of (n − i)(n − i + 1) multiplications. After this is completed, each term of the resulting
equation is subtracted from the corresponding term in Ej . This requires (n − i)(n − i + 1)
subtractions. For each i = 1, 2, . . . , n − 1, the operations required in Steps 5 and 6 are as
follows.
Multiplications/divisions
(n − i) + (n − i)(n − i + 1) = (n − i)(n − i + 2).
Additions/subtractions
(n − i)(n − i + 1).
The total number of operations required by Steps 5 and 6 is obtained by summing the
operation counts for each i. Recalling from calculus that
Σⱼ₌₁ᵐ 1 = m,    Σⱼ₌₁ᵐ j = m(m + 1)/2,    and    Σⱼ₌₁ᵐ j² = m(m + 1)(2m + 1)/6,

we have the following operation counts.
Multiplications/divisions

Σᵢ₌₁ⁿ⁻¹ (n − i)(n − i + 2) = Σᵢ₌₁ⁿ⁻¹ (n² − 2ni + i² + 2n − 2i)
                           = Σᵢ₌₁ⁿ⁻¹ (n − i)² + 2 Σᵢ₌₁ⁿ⁻¹ (n − i)
                           = Σᵢ₌₁ⁿ⁻¹ i² + 2 Σᵢ₌₁ⁿ⁻¹ i
                           = (n − 1)n(2n − 1)/6 + 2 · (n − 1)n/2
                           = (2n³ + 3n² − 5n)/6.

Additions/subtractions

Σᵢ₌₁ⁿ⁻¹ (n − i)(n − i + 1) = Σᵢ₌₁ⁿ⁻¹ (n² − 2ni + i² + n − i)
                           = Σᵢ₌₁ⁿ⁻¹ (n − i)² + Σᵢ₌₁ⁿ⁻¹ (n − i)
                           = Σᵢ₌₁ⁿ⁻¹ i² + Σᵢ₌₁ⁿ⁻¹ i
                           = (n − 1)n(2n − 1)/6 + (n − 1)n/2
                           = (n³ − n)/3.
The only other steps in Algorithm 6.1 that involve arithmetic operations are those
required for backward substitution, Steps 8 and 9. Step 8 requires one division. Step 9
requires (n − i) multiplications and (n − i − 1) additions for each summation term and
then one subtraction and one division. The total number of operations in Steps 8 and 9 is
as follows.
Multiplications/divisions

1 + Σᵢ₌₁ⁿ⁻¹ ((n − i) + 1) = 1 + Σᵢ₌₁ⁿ⁻¹ (n − i) + (n − 1)
                         = n + Σᵢ₌₁ⁿ⁻¹ i = n + (n − 1)n/2 = (n² + n)/2.

Additions/subtractions

Σᵢ₌₁ⁿ⁻¹ ((n − i − 1) + 1) = Σᵢ₌₁ⁿ⁻¹ (n − i) = Σᵢ₌₁ⁿ⁻¹ i = (n² − n)/2.
The total number of arithmetic operations in Algorithm 6.1 is therefore:

Multiplications/divisions

(2n³ + 3n² − 5n)/6 + (n² + n)/2 = n³/3 + n² − n/3.

Additions/subtractions

(n³ − n)/3 + (n² − n)/2 = n³/3 + n²/2 − 5n/6.
For large n, the total number of multiplications and divisions is approximately n3 /3,
as is the total number of additions and subtractions. Thus the amount of computation and
the time required increases with n in proportion to n3 , as shown in Table 6.1.
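These counts can be checked mechanically. The sketch below (illustrative, not from the text) tallies multiplications/divisions exactly as Steps 5, 6, 8, and 9 were described, and compares the result with the closed form n³/3 + n² − n/3 derived above:

```python
def ge_mul_div_count(n):
    """Count the multiplications/divisions Algorithm 6.1 performs on an
    n x n system, following the step-by-step tally in the text."""
    count = 0
    for i in range(1, n):                # elimination, i = 1, ..., n-1
        count += (n - i)                 # Step 5: n-i multiplier divisions
        count += (n - i) * (n - i + 1)   # Step 6: row-update multiplications
    count += 1                           # Step 8: x_n = a_{n,n+1} / a_{nn}
    for i in range(1, n):                # Step 9: rows n-1 down to 1
        count += (n - i) + 1             # n-i multiplications, one division
    return count

# The tally matches the closed form n^3/3 + n^2 - n/3 for every n tried.
for n in (3, 10, 50, 100):
    assert ge_mul_div_count(n) == (n**3 - n) // 3 + n**2
print(ge_mul_div_count(100))   # 343300
```

For n = 100, roughly a third of a million multiplications/divisions are needed, consistent with the n³/3 growth described above.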
EXERCISE SET 6.1
1. For each of the following linear systems, obtain a solution by graphical methods, if possible. Explain
the results from a geometrical standpoint.
a. x1 + 2x2 = 3, b. x1 + 2x2 = 3, c. x1 + 2x2 = 0, d. 2x1 + x2 = −1,
x1 − x2 = 0. 2x1 + 4x2 = 6. 2x1 + 4x2 = 0. 4x1 + 2x2 = −2,
x1 − 3x2 = 5.
2. For each of the following linear systems, obtain a solution by graphical methods, if possible. Explain
the results from a geometrical standpoint.
a. x1 + 2x2 = 0, b. x1 + 2x2 = 3, c. 2x1 + x2 = −1, d. 2x1 + x2 + x3 = 1,
x1 − x2 = 0. −2x1 − 4x2 = 6. x1 + x2 = 2, 2x1 + 4x2 − x3 = −1.
x1 − 3x2 = 5.
3. Use Gaussian elimination with backward substitution and two-digit rounding arithmetic to solve
the following linear systems. Do not reorder the equations. (The exact solution to each system is
x1 = 1, x2 = −1, x3 = 3.)
a. 4x1 − x2 + x3 = 8, b. 4x1 + x2 + 2x3 = 9,
2x1 + 5x2 + 2x3 = 3, 2x1 + 4x2 − x3 = −5,
x1 + 2x2 + 4x3 = 11. x1 + x2 − 3x3 = −9.
4. Use Gaussian elimination with backward substitution and two-digit rounding arithmetic to solve
the following linear systems. Do not reorder the equations. (The exact solution to each system is
x1 = −1, x2 = 1, x3 = 3.)
a. −x1 + 4x2 + x3 = 8,                        b. 4x1 + 2x2 − x3 = −5,
   (5/3)x1 + (2/3)x2 + (2/3)x3 = 1,              (1/9)x1 + (1/9)x2 − (1/3)x3 = −1,
   2x1 + x2 + 4x3 = 11.                          x1 + 4x2 + 2x3 = 9.
5. Use the Gaussian Elimination Algorithm to solve the following linear systems, if possible, and deter-
mine whether row interchanges are necessary:
a. x1 − x2 + 3x3 = 2, b. 2x1 − 1.5x2 + 3x3 = 1,
3x1 − 3x2 + x3 = −1, −x1 + 2x3 = 3,
x1 + x2 = 3. 4x1 − 4.5x2 + 5x3 = 1.
c. 2x1 = 3, d. x 1 + x2 + x4 = 2,
x1 + 1.5x2 = 4.5, 2x1 + x2 − x3 + x4 = 1,
− 3x2 + 0.5x3 = −6.6, 4x1 − x2 − 2x3 + 2x4 = 0,
2x1 − 2x2 + x3 + x4 = 0.8. 3x1 − x2 − x3 + 2x4 = −3.
6. Use the Gaussian Elimination Algorithm to solve the following linear systems, if possible, and deter-
mine whether row interchanges are necessary:
a. x2 − 2x3 = 4,                 b. x1 − (1/2)x2 + x3 = 4,
   x1 − x2 + x3 = 6,                2x1 − x2 − x3 + x4 = 5,
   x1 − x3 = 2.                     x1 + x2 + (1/2)x3 = 2,
                                    x1 − (1/2)x2 + x3 + x4 = 5.

2x1 − 6αx2 = 3,
3αx1 − x2 = 3/2.
x1 − x2 + αx3 = −2,
−x1 + 2x2 − αx3 = 3,
αx1 + x2 + x3 = 2.
n³/2 + n² − n/2 multiplications/divisions

and

n³/2 − n/2 additions/subtractions.
b. Make a table comparing the required operations for the Gauss-Jordan and Gaussian elimination
methods for n = 3, 10, 50, 100. Which method requires less computation?
16. Consider the following Gaussian-elimination-Gauss-Jordan hybrid method for solving the system
(6.4). First, apply the Gaussian-elimination technique to reduce the system to triangular form. Then
use the nth equation to eliminate the coefficients of xn in each of the first n − 1 rows. After this is
completed use the (n − 1)st equation to eliminate the coefficients of xn−1 in the first n − 2 rows, etc.
The system will eventually appear as the reduced system in Exercise 12.
a. Show that this method requires

   n³/3 + (3/2)n² − (5/6)n multiplications/divisions

   and

   n³/3 + n²/2 − (5/6)n additions/subtractions.
b. Make a table comparing the required operations for the Gaussian elimination, Gauss-Jordan,
and hybrid methods, for n = 3, 10, 50, 100.
17. Use the hybrid method described in Exercise 16 and two-digit rounding arithmetic to solve the systems
in Exercise 3.
18. Repeat Exercise 7 using the method described in Exercise 16.
19. Suppose that in a biological system there are n species of animals and m sources of food. Let xj
represent the population of the jth species, for each j = 1, · · · , n; bi represent the available daily
supply of the ith food; and ai j represent the amount of the ith food consumed on the average by a
member of the jth species. The linear system
represents an equilibrium where there is a daily supply of food to precisely meet the average daily
consumption of each species.
a. Let
A = [ai j ] = ⎡ 1  2  0  3 ⎤
              ⎢ 1  0  2  2 ⎥
              ⎣ 0  0  1  1 ⎦ ,
x = (xj ) = [1000, 500, 350, 400], and b = (bi ) = [3500, 2700, 900]. Is there sufficient food
to satisfy the average daily consumption?
b. What is the maximum number of animals of each species that could be individually added to the
system with the supply of food still meeting the consumption?
c. If species 1 became extinct, how much of an individual increase of each of the remaining
species could be supported?
d. If species 2 became extinct, how much of an individual increase of each of the remaining
species could be supported?
20. A Fredholm integral equation of the second kind is an equation of the form

    u(x) = f (x) + ∫ₐᵇ K(x, t)u(t) dt,

    where a and b and the functions f and K are given. To approximate the function u on the interval
    [a, b], a partition x0 = a < x1 < · · · < xm−1 < xm = b is selected and the equations

    u(xi ) = f (xi ) + ∫ₐᵇ K(xi , t)u(t) dt,   for each i = 0, · · · , m,

    are solved for u(x0 ), u(x1 ), · · · , u(xm ). The integrals are approximated using quadrature formulas
    based on the nodes x0 , · · · , xm . In our problem, a = 0, b = 1, f (x) = x², and K(x, t) = e^|x−t| .
a. Show that the linear system

   u(0) = f (0) + (1/2)[K(0, 0)u(0) + K(0, 1)u(1)],
   u(1) = f (1) + (1/2)[K(1, 0)u(0) + K(1, 1)u(1)]

   must be solved when the Trapezoidal rule is used.
b. Set up and solve the linear system that results when the Composite Trapezoidal rule is used with
n = 4.
c. Repeat part (b) using the Composite Simpson’s rule.
mjk = ajk^(k) / akk^(k)

will be much larger than 1. Round-off error introduced in the computation of one of the
terms akl^(k) is multiplied by mjk when computing ajl^(k+1) , which compounds the original error.
Also, when performing the backward substitution for

xk = (ak,n+1^(k) − Σⱼ₌ₖ₊₁ⁿ akj^(k) xj ) / akk^(k) ,

any error in the numerator can be dramatically increased by the division by a small akk^(k) .
m21 = 5.291/0.003000 = 1763.66…
rounds to the large number 1764. Performing (E2 − m21 E1 ) → (E2 ) and the appropriate
rounding gives the system
0.003000x1 + 59.14x2 ≈ 59.17
−104300x2 ≈ −104400,
instead of the exact system, which is
0.003000x1 + 59.14x2 = 59.17
−104309.376x2 = −104309.376.
The disparity in the magnitudes of m21 a13 and a23 has introduced round-off error, but the
round-off error has not yet been propagated. Backward substitution yields
x2 ≈ 1.001,
which is a close approximation to the actual value, x2 = 1.000. However, because of the
small pivot a11 = 0.003000,
x1 ≈ [59.17 − (59.14)(1.001)] / 0.003000 = −10.00
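The four-digit computation above can be replayed in code. The helper fl below is a simplification (rounding to four significant decimal digits via string formatting), not an exact emulation of the text's arithmetic model, but it reproduces the same numbers:

```python
def fl(x):
    """Round x to four significant digits: a simple string-formatting model
    of the text's four-digit rounding arithmetic."""
    return float(f"{x:.3e}")

# E1: 0.003000 x1 + 59.14 x2 = 59.17
# E2: 5.291   x1 -  6.130 x2 = 46.78   (exact solution: x1 = 10, x2 = 1)
m21 = fl(5.291 / 0.003000)              # 1764.0, a huge multiplier
a22 = fl(-6.130 - fl(m21 * 59.14))      # -104300.0
b2  = fl(46.78  - fl(m21 * 59.17))      # -104400.0
x2  = fl(b2 / a22)                      # 1.001, still close to the true 1.000
x1  = fl(fl(59.17 - fl(59.14 * x2)) / 0.003000)
print(x1, x2)                           # -10.0 1.001  (x1 is badly wrong)
```

Notice that the small error in x2 is magnified to a catastrophic error in x1 by the division by the tiny pivot 0.003000.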
6.2 Pivoting Strategies
Figure 6.1 [Graph in the (x1, x2)-plane of the lines E1 and E2, showing the exact solution (10, 1) and the computed approximation (−10, 1.001).]
Partial Pivoting
Example 1 shows how difficulties can arise when the pivot element akk^(k) is small relative to
the entries ai j^(k) , for k ≤ i ≤ n and k ≤ j ≤ n. To avoid this problem, pivoting is performed
by selecting an element apq^(k) with a larger magnitude as the pivot, and interchanging the
kth and pth rows. This can be followed by the interchange of the kth and qth columns, if
necessary.
The simplest strategy is to select an element in the same column that is below the
diagonal and has the largest absolute value; specifically, we determine the smallest p ≥ k
such that

|apk^(k) | = max_{k≤i≤n} |aik^(k) |

and perform (Ek ) ↔ (Ep ).
This requires that the operation (E2 ) ↔ (E1 ) be performed to produce the equivalent system
E1 : 5.291x1 − 6.130x2 = 46.78,
E2 : 0.003000x1 + 59.14x2 = 59.17.
The four-digit answers resulting from the backward substitution are the correct values
x1 = 10.00 and x2 = 1.000.
The technique just described is called partial pivoting (or maximal column pivoting)
and is detailed in Algorithm 6.2. The actual row interchanging is simulated in the algorithm
by interchanging the values of NROW in Step 5.
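A compact sketch of partial pivoting follows. It is illustrative rather than a transcription of Algorithm 6.2: rows are swapped physically here instead of being tracked through the NROW index vector. Applied in double precision to the two-equation system discussed above, it recovers the correct solution:

```python
import numpy as np

def gauss_partial_pivot(aug):
    """Gaussian elimination with partial (maximal column) pivoting on an
    augmented matrix, followed by backward substitution."""
    a = aug.astype(float).copy()
    n = a.shape[0]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(a[k:, k]))   # row with largest |a_ik|
        if a[p, k] == 0.0:
            raise ValueError("no unique solution")
        if p != k:
            a[[k, p]] = a[[p, k]]             # (E_k) <-> (E_p)
        for j in range(k + 1, n):
            a[j] -= (a[j, k] / a[k, k]) * a[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # backward substitution
        x[i] = (a[i, n] - a[i, i+1:n] @ x[i+1:]) / a[i, i]
    return x

aug = np.array([[0.003000, 59.14, 59.17],
                [5.291, -6.130, 46.78]])
print(gauss_partial_pivot(aug))   # approximately [10.  1.]
```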
Each multiplier m_ji in the partial pivoting algorithm has magnitude less than or equal
to 1. Although this strategy is sufficient for many linear systems, situations do arise when
it is inadequate.
The linear system

30.00x1 + 591400x2 = 591700,
5.291x1 − 6.130x2 = 46.78,

is the same as that in Examples 1 and 2 except that all the entries in the first equation have been multiplied by 10^4. The partial pivoting procedure described in Algorithm 6.2 with four-digit arithmetic leads to the same results as obtained in Example 1. The maximal value in the first column is 30.00, and the multiplier

m21 = 5.291/30.00 = 0.1764

leads to the system

30.00x1 + 591400x2 ≈ 591700,
−104300x2 ≈ −104400,

which has the same inaccurate solutions as in Example 1: x2 ≈ 1.001 and x1 ≈ −10.00.
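This failure can be reproduced by rounding every intermediate result to four significant digits (a sketch; the fl helper is my own, and the coefficients are those of the scaled system above):

```python
def fl(x):
    """Round x to four significant digits, mimicking four-digit arithmetic."""
    return float(f"{x:.4g}")

# First equation of Example 1 multiplied by 10^4; partial pivoting keeps row 1
# because 30.00 is the maximal entry in the first column.
a11, a12, b1 = 30.00, 591400.0, 591700.0
a21, a22, b2 = 5.291, -6.130, 46.78

m21 = fl(a21 / a11)                    # 0.1764
a22p = fl(a22 - fl(m21 * a12))         # -104300
b2p = fl(b2 - fl(m21 * b1))            # -104400
x2 = fl(b2p / a22p)                    # 1.001, not 1.000
x1 = fl(fl(b1 - fl(a12 * x2)) / a11)   # -10.00, not 10.00
```

The large pivot 30.00 does not help, because it is still small relative to the other entry 591400 in its row; this is exactly the defect that scaling repairs.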
Scaled Partial Pivoting

Scaled partial pivoting (or scaled-column pivoting) places in the pivot position the element that is largest relative to the entries in its row. The first step is to define, for each row, a scale factor

s_i = max_{1 ≤ j ≤ n} |a_ij|.

If we have s_i = 0 for some i, then the system has no unique solution since all entries in the ith row are 0. Assuming that this is not the case, the appropriate row interchange to place zeros in the first column is determined by choosing the least integer p with

|a_p1| / s_p = max_{1 ≤ k ≤ n} |a_k1| / s_k
and performing (E1 ) ↔ (Ep ). The effect of scaling is to ensure that the largest element
in each row has a relative magnitude of 1 before the comparison for row interchange is
performed.
In a similar manner, before eliminating the variable x_i we select the smallest integer p ≥ i with

|a_pi| / s_p = max_{i ≤ k ≤ n} |a_ki| / s_k

and perform the row interchange (Ei ) ↔ (Ep ) if i ≠ p. The scale factors s1 , . . . , sn are computed only once, at the start of the procedure. They are row dependent, so they must also be interchanged when row interchanges are performed.
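The selection rule can be sketched as follows (an illustrative helper of my own; a full implementation would compute the scale factors once at the start and carry them through the interchanges):

```python
def scaled_pivot_row(A, nrow, i):
    """Return the position p (smallest such p >= i) of the logical row whose
    entry in column i is largest relative to its scale factor s_k = max_j |a_kj|."""
    s = [max(abs(v) for v in row) for row in A]   # scale factors
    if 0.0 in s:
        raise ValueError("a zero row: the system has no unique solution")
    # max returns the first (smallest) index attaining the maximum ratio
    return max(range(i, len(A)), key=lambda k: abs(A[nrow[k]][i]) / s[nrow[k]])

# On the scaled system above, plain partial pivoting keeps row 1 (pivot 30.00),
# but the scaled ratios are 30.00/591400 versus 5.291/6.130, so scaled
# partial pivoting selects the second row and avoids the failure.
p = scaled_pivot_row([[30.00, 591400.0], [5.291, -6.130]], [0, 1], 0)
```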
The next example demonstrates using Maple and the LinearAlgebra library to perform
scaled partial pivoting with finite-digit rounding arithmetic.
Example 3 Solve the linear system using three-digit rounding arithmetic in Maple with the LinearAlgebra library.
we perform
AA4 := RowOperation(AA3, [2, 3])
giving
[ 1.09    .987    .832     4.21 ]
[ 0     −6.12   −.689    −6.16 ]
[ 0      6.57   −4.18   −18.6  ]
The multiplier m32 is computed by

m32 := AA4[3, 2] / AA4[2, 2]

which gives −1.07,
and the elimination step
AA5 := RowOperation(AA4, [3, 2], −m32)
results in the matrix
[ 1.09    .987    .832     4.21 ]
[ 0     −6.12   −.689    −6.16 ]
[ 0      .02    −4.92   −25.2  ]
We cannot use BackwardSubstitute on this matrix because of the entry .02 in the second column of the last row, that is, in what Maple knows as the (3, 2) position. This entry is nonzero only because of rounding, but we can remedy this minor problem by setting it to 0 with the command
AA5[3, 2] := 0
You can verify that this is correct with the command evalm(AA5).
Finally, backward substitution gives the solution x, which to 3 decimal digits is x1 = −0.436,
x2 = 0.430, and x3 = 5.12.
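The backward-substitution step that finishes the example is simple to state in code (a generic sketch in double precision, not Maple's BackwardSubstitute):

```python
def backward_substitute(U, c):
    """Solve Ux = c for an upper-triangular U by backward substitution."""
    n = len(c)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):            # last unknown first
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

# A small illustrative triangular system: 2x1 + x2 = 5, 4x2 = 8.
x = backward_substitute([[2.0, 1.0], [0.0, 4.0]], [5.0, 8.0])   # [1.5, 2.0]
```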
The first additional computations required for scaled partial pivoting result from the
determination of the scale factors; there are (n − 1) comparisons for each of the n rows, for
a total of
n(n − 1) comparisons.
To determine the correct first interchange, n divisions are performed, followed by n − 1
comparisons. So the first interchange determination adds
n divisions and (n − 1) comparisons.
The scaling factors are computed only once, so the second step requires
(n − 1) divisions and (n − 2) comparisons.
We proceed in a similar manner until there are zeros below the main diagonal in all but
the nth row. The final step requires that we perform
2 divisions and 1 comparison.
As a consequence, scaled partial pivoting adds a total of
n(n − 1) + Σ_{k=1}^{n−1} k = n(n − 1) + (n − 1)n/2 = (3/2) n(n − 1) comparisons   (6.7)
and

Σ_{k=2}^{n} k = ( Σ_{k=1}^{n} k ) − 1 = n(n + 1)/2 − 1 = (1/2)(n − 1)(n + 2) divisions.
If the scale factors were instead recomputed each time a row-interchange decision is made, the additional comparisons would number

Σ_{k=2}^{n} k(k − 1) = (1/3) n(n² − 1).

As a consequence, this modified pivoting technique would add O(n³/3) comparisons, in addition to the [n(n + 1)/2] − 1 divisions.
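The step-by-step accounting above can be checked against the closed forms with a short counting routine (a sanity check of my own, not from the text):

```python
def scaled_pivot_counts(n):
    """Count the extra comparisons and divisions added by scaled partial
    pivoting when the scale factors are computed only once."""
    comparisons = n * (n - 1)        # scale factors: n - 1 comparisons per row
    divisions = 0
    for step in range(1, n):         # interchange decisions, steps 1 .. n-1
        rows = n - step + 1          # candidate rows at this step: n, n-1, ..., 2
        divisions += rows            # one ratio |a_ki| / s_k per candidate row
        comparisons += rows - 1      # comparisons among those ratios
    return comparisons, divisions

# Matches (3/2) n(n-1) comparisons and n(n+1)/2 - 1 divisions for any n.
```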
Complete Pivoting
Pivoting can incorporate the interchange of both rows and columns. Complete (or maximal)
pivoting at the kth step searches all the entries ai j , for i = k, k + 1, . . . , n and j = k,
k +1, . . . , n, to find the entry with the largest magnitude. Both row and column interchanges
are performed to bring this entry to the pivot position. The first step of complete pivoting requires
that n2 − 1 comparisons be performed, the second step requires (n − 1)2 − 1 comparisons,
and so on. The total additional time required to incorporate complete pivoting into Gaussian
elimination is
Σ_{k=2}^{n} (k² − 1) = (1/6) n(n − 1)(2n + 5)
comparisons. Complete pivoting is, consequently, the strategy recommended only for sys-
tems where accuracy is essential and the amount of execution time needed for this method
can be justified.
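A sketch of the procedure (illustrative, with my own variable names): at step k both the pivot row and pivot column are searched over the trailing submatrix, and the column interchanges are recorded so the unknowns can be un-permuted afterwards.

```python
def gauss_complete_pivot(A, b):
    """Gaussian elimination with complete pivoting: each step searches the
    whole trailing submatrix for the entry of largest magnitude."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    perm = list(range(n))                  # tracks column interchanges
    for k in range(n - 1):
        # find (p, q) maximizing |a_ij| over i, j = k, ..., n-1
        p, q = max(((i, j) for i in range(k, n) for j in range(k, n)),
                   key=lambda ij: abs(A[ij[0]][ij[1]]))
        A[k], A[p] = A[p], A[k]            # row interchange
        b[k], b[p] = b[p], b[k]
        for row in A:                      # column interchange
            row[k], row[q] = row[q], row[k]
        perm[k], perm[q] = perm[q], perm[k]
        for i in range(k + 1, n):          # elimination
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    y = [0.0] * n                          # backward substitution
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * y[j] for j in range(i + 1, n))
        y[i] = (b[i] - s) / A[i][i]
    x = [0.0] * n
    for k in range(n):                     # undo the column permutation
        x[perm[k]] = y[k]
    return x

# On the system of Example 1, the first pivot chosen is 59.14 (a column swap),
# and the computed solution is close to (10, 1).
x = gauss_complete_pivot([[0.003000, 59.14], [5.291, -6.130]], [59.17, 46.78])
```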
EXERCISE SET 6.2
1. Find the row interchanges that are required to solve the following linear systems using
Algorithm 6.1.
a. x1 − 5x2 + x3 = 7,
   10x1 + 20x3 = 6,
   5x1 − x3 = 4.

b. x1 + x2 − x3 = 1,
   x1 + x2 + 4x3 = 2,
   2x1 − x2 + 2x3 = 3.

c. 2x1 − 3x2 + 2x3 = 5,
   −4x1 + 2x2 − 6x3 = 14,
   2x1 + 2x2 + 4x3 = 8.

d. x2 + x3 = 6,
   x1 − 2x2 − x3 = 4,
   x1 − x2 + x3 = 5.
2. Find the row interchanges that are required to solve the following linear systems using Algorithm 6.1.
a. 13x1 + 17x2 + x3 = 5,
   x2 + 19x3 = 1,
   12x2 − x3 = 0.

b. x1 + x2 − x3 = 0,
   12x2 − x3 = 4,
   2x1 + x2 + x3 = 5.

c. 5x1 + x2 − 6x3 = 7,
   2x1 + x2 − x3 = 8,
   6x1 + 12x2 + x3 = 9.

d. x1 − x2 + x3 = 5,
   7x1 + 5x2 − x3 = 8,
   2x1 + x2 + x3 = 7.
3. Repeat Exercise 1 using Algorithm 6.2.
4. Repeat Exercise 2 using Algorithm 6.2.
5. Repeat Exercise 1 using Algorithm 6.3.
6. Repeat Exercise 2 using Algorithm 6.3.
7. Repeat Exercise 1 using complete pivoting.
8. Repeat Exercise 2 using complete pivoting.
9. Use Gaussian elimination and three-digit chopping arithmetic to solve the following linear systems,
and compare the approximations to the actual solution.
a. 0.03x1 + 58.9x2 = 59.2,
   5.31x1 − 6.10x2 = 47.0.
   Actual solution [10, 1].

b. 3.03x1 − 12.1x2 + 14x3 = −119,
   −3.03x1 + 12.1x2 − 7x3 = 120,
   6.11x1 − 14.2x2 + 21x3 = −139.
   Actual solution [0, 10, 1/7].
31. Suppose that

2x1 + x2 + 3x3 = 1,
4x1 + 6x2 + 8x3 = 5,
6x1 + αx2 + 10x3 = 5,

with |α| < 10. For which of the following values of α will there be no row interchange required when solving this system using scaled partial pivoting?
a. α = 6   b. α = 9   c. α = −3
32. Construct an algorithm for the complete pivoting procedure discussed in the text.
33. Use the complete pivoting algorithm to repeat Exercise 9 in Maple with Digits := 10.
34. Use the complete pivoting algorithm to repeat Exercise 10 in Maple with Digits := 10.

6.3 Linear Algebra and Matrix Inversion
Definition 6.2 Two matrices A and B are equal if they have the same number of rows and columns, say
n × m, and if ai j = bi j , for each i = 1, 2, . . . , n and j = 1, 2, . . . , m.