Ala2 SSM PDF
Ala2 SSM PDF
Ala2 SSM PDF
Manual
for
ISBN 978–3–319–91040–6
To the Student
These solutions are a resource for students studying the second edition of our text Applied
Linear Algebra, published by Springer in 2018. An expanded solutions manual is available
for registered instructors of courses adopting it as the textbook.
Acknowledgements
We thank a number of people, who are named in the text, for corrections to the solutions
manuals that accompanied the first edition. Of course, as authors, we take full respon-
sibility for all errors that may yet appear. We encourage readers to inform us of any
misprints, errors, and unclear explanations that they may find, and will accordingly up-
date this manual on a timely basis. Corrections will be posted on the text’s dedicated web
site:
http://www.math.umn.edu/∼olver/ala2.html
Chapter 4. Orthogonality . . . . . . . . . . . . . . . . . . . . . 28
Chapter 6. Equilibrium . . . . . . . . . . . . . . . . . . . . . 44
Chapter 7. Linearity . . . . . . . . . . . . . . . . . . . . . . . 49
Chapter 9. Iteration . . . . . . . . . . . . . . . . . . . . . . . 70
♥ 1.1.3. (a) With Forward Substitution, we just start with the top equation and work down.
Thus 2 x = −6 so x = −3. Plugging this into the second equation gives 12 + 3y = 3, and so
y = −3. Plugging the values of x and y in the third equation yields −3 + 4(−3) − z = 7, and
so z = −22.
0
1.2.1. (a) 3 × 4, (b) 7, (c) 6, (d) ( −2 0 1 2 ), (e) 2
.
−6
1 2 3 1 2 3 4 1
1.2.2. Examples: (a) 4 5 6 4 7 2 .
, (c) 5 6 , (e)
7 8 9 7 8 9 3 3
! ! !
6 1 u 5
1.2.4. (b) A = , x= , b= ;
3 −2 v 5
2 −1 2 u 2
(d) A = −1 −1 3
, x = v , b = 1 ;
3 0 −2 w 1
1 0 1 −2 x −3
2 −1 2 −1 y −5
(f ) A =
, x = , b = .
0 −6 −4 2
z
2
1 3 2 −1 w 1
1 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
1.2.6. (a) I = 0
0 1 0 0,
O=
0
0 0 0 0.
0 0 0 1 0
0 0 0 0 0
0 0 0 0 1 0 0 0 0 0
(b) I + O = I , I O = O I = O. No, it does not.
1
c 2018 Peter J. Olver and Chehrzad Shakiban
2 Students’ Solutions Manual: Chapter 1
! 1 11 9
3 6 0
3
1.2.7. (b) undefined, (c) , (f ) −12 −12
,
−1 4 2
7 8 8
♦ 1.2.29. (a) The ith entry of A z is 1 ai1 + 1 ai2 + · · · + 1 ain = ai1 + · · · + ain , which is
1
the ith row sum. (b) Each row of W has n − 1 entries equal to n and one entry equal
1−n 1 1−n
to n and so its row sums are (n − 1) n + n = 0. Therefore, by part (a), W z = 0.
Consequently, the row sums of B = A W are the entries of B z = A W z = A 0 = 0, and the
result follows.
♥ 1.2.34. (a) This follows by direct computation. (b) (i )
! ! ! ! ! ! !
−2 1 1 −2 −2 1 −2 4 1 0 −1 4
= ( 1 −2 ) + (1 0) = + = .
3 2 1 0 3 2 3 −6 2 0 5 −6
11
1.3.1. (b) u = 1, v = −1; (d) x1 = 3 , x2 = − 10 2
3 , x3 = − 3 ;
(f ) a = 13 , b = 0, c = 34 , d = − 23 .
! !
1 7 4 2R1 +R2 1 7 4
1.3.2. (a) −→ . Back Substitution yields x2 = 2, x1 = −10.
−2 −9 2 0 5 10
1 −2 1 0 4R +R 1 −2 1 0 3 R +R 1 −2 1 0
1 3 2 2 3
(c)
0 2 −8 8
−→ 0
2 −8 8
−→ 0
2 −8 8 .
−4 5 9 −9 0 −3 13 −9 0 0 1 3
Back Substitution yields z = 3, y = 16, x = 29.
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 1 3
1 0 −2 0
−1 1 0 −2 0
−1
0 1 0 −1
2
0
1 0 −1
2.
(e) reduces to
0 −3 2 0
0
0 0 2 −3
6
−4 0 0 7 −5 0 0 0 −5 15
Back Substitution yields x4 = −3, x3 = − 23 , x2 = −1, x1 = −4.
1.3.15. (a) Add −2 times the second row to the first row of a 2 × n matrix.
(c) Add −5 times the third row to the second row of a 3 × n matrix.
1 0 0 0 1 0 0 3
0 1 0 0 0 1 0 0
1.3.16. (a) , (c) .
0 0 1 0
0 0 1 0
0 0 1 1 0 0 0 1
1.3.20. (a) Upper triangular; (b) both upper and lower unitriangular; (d) lower unitriangular.
! ! 1 0 0 −1 1 −1
1 0 1 3
−1
1.3.21. (a) L = , U= , (c) L = 1 0
, U = 0 2 0,
−1 1 0 3
1 0 1 0 0 3
1 0 0 −1 0 0
(e) L =
−2 1 0, U = 0
−3 0.
−1 −1 1 0 0 2
1.3.25. (a) aij = 0 for all i 6= j; (c) aij = 0 for all i > j and aii = 1 for all i.
!
1 1
1.3.27. False. For instance is regular. Only if the zero appear in the (1, 1) position
1 0
does it automatically preclude regularity of the matrix.
c 2018 Peter J. Olver and Chehrzad Shakiban
4 Students’ Solutions Manual: Chapter 1
! 0 −1
−1
1 ,
−1 .
1.3.32. (a) x = 2 , (c) x = (e) x =
3 0 5
2
! ! !
5 9
1 0 −1 3 − 11 1 11
1.3.33. (a) L = , U= ; x1 = , x2 = , x3 = .
−3 1 0 11 2 1 3
11 11
1 0 0
9 −2 −1 1 −2
2
(c) L =
− 3 1 0
, U= 0
− 13 1 ;
3
x1 = 2 ,
x2 = −9 .
2 5 3 −1
9 3 1 0 0 − 13
5 1
4
1 0 0 0 1 0 −1 0 14
1
0 1 0 0 0 2 3 −1 − − 5
4 14
(e) L =
, U =
7 ; x1 = , x2 = .
−1
3
2 1 0
0 0 − 27 2
1
1
4 14
0 − 12 −1 1 0 0 0 4 1 1
4 2
0 1 0 0
0 1 0
1 0 0 0
1.4.11. (a)
0 0 1, (c)
.
0 0 0 1
1 0 0
0 0 1 0
1 0 0 0 1
0 0 0 0
0 0 1
10 0 0 0 0 0 1 0 1 0 0
1.4.13. (b) , , ,
00 0 1
0 1 0 0
1 0 0 0
0 0 1 0 0 0 1 0 0 0 1 0
1 0 0 0 0 0 0 1 0 1 0 0
0 1 0 0 1 0 0 0 0 0 0 1
, , .
0 0 0 1
0 1 0 0
1 0 0 0
0 0 1 0 0 0 1 0 0 0 1 0
1.4.14. (a) True, since interchanging the same pair of rows twice brings you back to where you
started.
0 0 1
1.4.16. (a) 0 1 0
. (b) True.
1 0 0
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 1 5
1.4.19. Let ri , rj denote the rows of the matrix in question. After the first elementary row op-
eration, the rows are ri and rj + ri . After the second, they are ri − (rj + ri ) = − rj and
rj + ri . After the third operation, we are left with − rj and rj + ri + (− rj ) = ri .
! ! ! !
5
0 1 0 1 1 0 2 −1 2
;
1.4.21. (a) = , x=
1 0 2 −1 0 1 0 1 3
0 0 1 0 1 −3 1 0 0 1 0 2 −1
(c) 1 0 0 2 3 0 1 −3 1
0 = 0 1 0 , x= ;
0 1 0 1 0 2 0 2 1 0 0 9 0
0 0 1 0 0 1 0 0 1 0 0 0 1 4 −1 2 −1
1 0 0 0 2 3 1 0 0 1 0 0 0 1 0 0 −1
(e) = , x = .
0 1 0 0
1 4 −1 2
2 −5 1 0
0
0 3 −4
1
0 0 0 1 7 −1 2 3 7 −29 3 1 0 0 0 1 3
0 0 1 0 0 1 −1 1 1 0 0 0 1 −1 1 −3
0 1 0 0 0 1 1 0 0 1 0 0 0 1 1 0
1.4.22. (b)
= ;
1 0 0 0
1 −1 1 −3
0 1 1 0
0 0 −2 1
5 3
0 0 0 1 1 2 −1 1 1 3 2 1 0 0 0 2
solution: x = 4, y = 0, z = 1, w = 1.
1.4.26. False. Changing the permuation matrix typically changes the pivots.
2 1 1 3 −1 −1 1 0 0 3 −1 −1 2 1 1
1.5.1. (b) 3 2 1 1 0 1 1
−4 2 = 0 1 = −4 2 3 2 .
2 1 2 −1 0 1 0 0 1 −1 0 1 2 1 2
1 0 0 0
! !
0 1 1 2 0
1 0 0.
1.5.3. (a) , (c) , (e)
1 0 0 1 0 −6 1 0
0 0 0 1
!
2 1
−1 −1 1 3 3
1.5.6. (a) A = , B −1 = .
2 −1 − 1 1
3 3
!
1
2 1 0 3
(b) C = , C −1 = B −1 A−1 = .
3 0 1 − 23
0 0 0 1 1 0 0 0
0 0 1 0 0 0 1 0
1.5.9. (a)
, (c) .
0 1 0 0
0 0 0 1
1 0 0 0 0 1 0 0
!
1 −1 1
1.5.13. Since c is a scalar, A (c A) = c A−1 A = I .
c c
c 2018 Peter J. Olver and Chehrzad Shakiban
6 Students’ Solutions Manual: Chapter 1
1.5.16. If all the diagonal entries are nonzero, then D−1 D = I . On the other hand, if one of
diagonal entries is zero, then all the entries in that row are zero, and so D is singular.
5 1 5
1 3
− 8 8 8
− 8 8
1 1
1.5.25. (b)
3
; (d) no inverse; (f )
− 2 2 − 21 .
8 − 81
7
8 − 38 8
1
! ! ! !
1 0 01 1 3 1 3
1.5.26. (b) = ; (d) not possible;
3 1 −8 0 0 1 3 1
1 0 0 1 0 0 1 0 0 1 0 0 1 0 0
(f )
3 1 0 0 1
0
0 1 0 0 −1 0 0 1 0
0 0 1 2 0 1 0 3 1 0 0 1 0 0 8
1 0 3 1 0 0 1 2 0 1 2 3
0 1 00 1 4
0 1 0 = 3 5 5 .
0 0 1 0 0 1 0 0 1 2 1 2
i 0 −1
i 1
− 2 2
1.5.28. (a) , (c) 1− i −i 1
1 .
2 − 2i −1 −1 −i
9 −15 −8 3 2
5 2
17 17 2 2
1.5.31. (b) = , (d) 6 −10 −5
−1 = 3 ,
1 3
− 17 17 12 2 −1 2 1 5 0
1 0 1 1 4 3
0 0 −1 −1 11 1
(f ) = .
2 −1 −1 0 −7 4
2 −1 −1 −1 6 −2
1
8
1
1.5.32. (b) 4 ; (d) singular matrix; (f ) − 1 .
1 2
4 5
8
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 1 7
! ! ! ! !
0 1 0 4 1 0 −7 0 1 − 72
1.5.33. (b) = ,
1 0 −7 2 0 1 0 4 0 1
1 0 0 1 1 5 1 0 0 1 0 0 1 1 5
7
(d) 0 0 1 1 1 −2 = 2 1 0 0 −3 0 0 1
3 ,
0 1 0 2 −1 3 1 0 1 0 0 −7 0 0 1
1 −1 1 2 1 0 0 0 1 0 0 0 1 −1 1 2
1 −4 1 5 1
1 0 0 0
−3 0 0 0
1 0 −1
(f )
= .
1 2 −1 −1
1 −1 1 0
0 0 −2 0
0 0 1 0
3 1 1 6 3 − 43 1 1 0 0 0 4 0 0 0 1
7
! 1 3
−8 2
1.5.34. (b) , (d)
−2 , (f )
.
3 5
0
− 35
! 1 2 !
1 0
1 3 5
1.6.1. (b) , (d) 2 0, (f ) .
1 2 2 4 6
−1 2
3 1 !
−1 2 −3
1.6.2. AT =
−1 2, BT = ,
2 0 4
−1 1
! −1 6 −5
−2 0
(A B)T = B T AT = , (B A)T = AT B T =
5 −2 11
.
2 6
3 −2 7
1.6.5. (A B C)T = C T B T AT
1.6.8. (a) (A B)−T = ((A B)T )−1 = (B T AT )−1 = (AT )−1 (B T )−1 = A−T B −T .
! !
1 0 1 −2
(b) A B = , so (A B)−T = ,
2 1 0 1
! ! !
−T 0 −1 −T 1 −1 −T −T 1 −2
while A = ,B = , so A B = .
1 1 −1 2 0 1
♦ 1.6.20. True. Invert both sides of the equation AT = A, and use Lemma 1.32.
! ! ! !
1 1 1 0 1 0 1 1
1.6.25. (a) = ,
1 4 1 1 0 3 0 1
1 −1 −1 1 0 0 1 0 0 1 −1 −1
1
(c) −1 3 2 = −1 1 0 0 2 0 0 1
2 .
1
−1 2 0 −1 2 1 0 0 − 32 0 0 1
c 2018 Peter J. Olver and Chehrzad Shakiban
8 Students’ Solutions Manual: Chapter 1
1.7.1. (b) The solution is x = −4, y = −5, z = −1. Gaussian Elimination and Back Substitu-
tion requires 17 multiplications and 11 additions; Gauss–Jordan uses 20 multiplications and
0 −1 −1
11 additions; computing A−1 =
2 −8 −5 takes 27 multiplications and 12 additions,
3
2 −5 −3
while multiplying A−1 b = x takes another 9 multiplications and 6 additions.
1 2 0 1 0 0 1 2 0 −2
1.7.9. (a) −1 −1
1 = −1 1
00 1 1 3
, x= .
0 −2 3 0 −2 1 0 0 5 0
! ! !
−8 −10 −8.1
1.7.16. (a) , (b) , (c) . (d) Partial pivoting reduces the effect of
4 −4.1 −4.1
round off errors and results in a significantly more accurate answer.
6 0
5 1.2
1
1.7.20. (a) − 13 = −2.6
.
5 , (c)
1
−1.8
− 95 0
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 1 9
2
121 .0165
1 ! 38
− 13 −.0769 .3141
121 .
1.7.21. (a) = , (c) =
8 .6154
59
.2438
13 242
56 −.4628
− 121
1.8.13. True. For example, take a matrix in row echelon form with r pivots, e.g., the matrix A
with aii = 1 for i = 1, . . . , r, and all other entries equal to 0.
1.8.17. 1.
♦ 1.8.21. By Proposition 1.39, A can be reduced to row echelon form U by a sequence of elemen-
tary row operations. Therefore, as in the proof of the L U decomposition, A = E1 E2 · · · EN U
where E1−1 , . . . , EN
−1
are the elementary matrices representing the row operations. If A is
singular, then U = Z must have at least one all-zero row.
1.8.27. (b) k = 0 or k = 21 .
c 2018 Peter J. Olver and Chehrzad Shakiban
10 Students’ Solutions Manual: Chapter 1
!
2 −1
1.9.1. (a) Regular matrix, reduces to upper triangular form U = , so its determinant is 2.
0 1
−1 0 3
(b) Singular matrix, row echelon form U = 0 1 −2
, so its determinant is 0.
0 0 0
−2 1 3
(d) Nonsingular matrix, reduces to upper triangular form U = 0 1 −1
after one row
interchange, so its determinant is 6. 0 0 3
1.9.4. (a) True. By Theorem 1.52, A is nonsingular, so, by Theorem 1.18, A−1 exists.
! !
2 3 0 1
(c) False. For A = and B = , we have
−1 −2 0 0
!
2 4
det(A + B) = det = 0 6= −1 = det A + det B.
−1 −2
1.9.13.
a11 a12 a13 a14
a21 a22 a23 a24
det
=
a31 a32 a33 a34
a41 a42 a43 a44
a11 a22 a33 a44 − a11 a22 a34 a43 − a11 a23 a32 a44 + a11 a23 a34 a42 − a11 a24 a33 a42
+ a11 a24 a32 a43 − a12 a21 a33 a44 + a12 a21 a34 a43 + a12 a23 a31 a44 − a12 a23 a34 a41
+ a12 a24 a33 a41 − a12 a24 a31 a43 + a13 a21 a32 a44 − a13 a21 a34 a42 − a13 a22 a31 a44
+ a13 a22 a34 a41 − a13 a24 a32 a41 + a13 a24 a31 a42 − a14 a21 a32 a43 + a14 a21 a33 a42
+ a14 a22 a31 a43 − a14 a22 a33 a41 + a14 a23 a32 a41 − a14 a23 a31 a42 .
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual for
Chapter 2: Vector Spaces and Bases
Distributivity:
(c + d) (x + i y) = (c + d) x + i (c + d) y = (c x + d x) + i (c y + d y)
= c (x + i y) + d (x + i y),
c[ (x + i y) + (u + i v) ] = c (x + u) + (y + v) = (c x + c u) + i (c y + c v)
= c (x + i y) + c (u + i v).
Associativity of Scalar Multiplication:
c [ d (x + i y) ] = c [ (d x) + i (d y) ] = (c d x) + i (c d y) = (c d) (x + i y).
Unit for Scalar Multiplication: 1 (x + i y) = (1 x) + i (1 y) = x + i y.
Note: Identifying the complex number x + i y with the vector ( x, y )T ∈ R 2 respects the opera-
tions of vector addition and scalar multiplication, and so we are in effect reproving that R 2 is a
vector space.
11
c 2018 Peter J. Olver and Chehrzad Shakiban
12 Students’ Solutions Manual: Chapter 2
2.2.2. (a) Not a subspace; (c) subspace; (e) not a subspace; (g) subspace.
-1
-0.5 -1
0 -0.5
0.5 0
1
0.5 -1.5
1 -1.75
-2
-2.25
2 -2.5
1.5
1.25
0.75
-2
0.5
-1
-0.5
0
0.5
1
2.2.16. (a) Subspace; (c) not a subspace: the zero function does not satisfy the condition;
(e) subspace; (g) subspace.
♥ 2.2.24. (b) Since the only common solution to x = y and x = 3 y is x = y = 0, the lines only
! ! !
x a 3b
intersect at the origin. Moreover, every v = = + , where a = − 21 x + 23 y,
y a b
1 1
b= 2 x− 2 y, can be written as a sum of vectors on each line.
because the exponential e− y goes to zero faster than any power of y goes to ∞.
−1 2 5
2.3.1. 2
= 2 −1 − −4 .
3 2 1
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 2 13
1 1 0
2.3.3. (a) Yes, since −2 = 1 − 3 1 ;
−3 1 0
3 1 0 2
0 2 −1 0
(c) No, since the vector equation
=c +c
1 2
+c
3
does not have a
−1 0 3 1
solution. −2 1 0 −1
− 35 6 1
2.3.5. (b) The plane z = x− 5 y: 0
-1
2.3.8. (a) They span P (2) since ax2 +bx+c = 21 (a−2b+c)(x2 +1)+ 21 (a−c)(x2 −1)+b(x2 +x+1).
2.3.24. (a) Linearly dependent; (c) linearly independent; (e) linearly dependent.
c 2018 Peter J. Olver and Chehrzad Shakiban
14 Students’ Solutions Manual: Chapter 2
2.3.32. (a) They are linearly dependent since (x2 − 3) + 2(2 − x) − (x − 1)2 ≡ 0.
(b) They do not span P (2) .
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 2 15
! 1!
b1 3 2 .
2.5.1. (a) Image: all b = such that 4 b1 + b2 = 0; kernel spanned by
b2 1
5
b
− 4
1
(c) Image: all b = 7 .
b2 such that − 2 b1 + b2 + b3 = 0; kernel spanned by −8
b3
1
1
− 52 1
4 0
2
2.5.2. (a) 0 , 1
3 : 0 :
: plane; (b) 8 line; (e) point.
1 0 1 0
−1 1+t
2.5.4. (a) b = 2 2 + t ,
; (b) the general solution is x = where t is arbitrary.
−1 3+t
2.5.7. In each case, the solution is x = x⋆ + z, where x⋆ is the particular solution and z be-
2 5
1
− 7
6
⋆
longs to the kernel:
(b) x = −1
, z= z
− 17 ;
(d) x⋆ =
1, z = 0.
2
0 −3
1
T
2.5.8. The kernel has dimension n−1, with basis − rk−1 e1 +ek = − rk−1 , 0, . . . , 0, 1, 0, . . . , 0
for k = 2, . . . n. The image has dimension 1, with basis (1, rn , r2 n . . . , r(n−1)n )T .
−2 −1 −6
2.5.12. x⋆1 = , x⋆2 = ; x = x⋆1 + 4 x⋆2 = .
3 1 7
2 2 2
1
2.5.14. (a) By direct matrix multiplication: A x⋆1 = A x⋆2 =
−3 .
5
1 −4
(b) The general solution is x = x⋆1 + t (x⋆2 − x⋆1 ) = (1 − t) x⋆1 + t x⋆2 = 1 + t
2.
0 −2
c 2018 Peter J. Olver and Chehrzad Shakiban
16 Students’ Solutions Manual: Chapter 2
! ! ! !
1 1 3 −2
2.5.21. (a) image: ; coimage:
; kernel: ; cokernel: .
2 −3 1 1
1 0 1 −3
1 1 −3
1 −1 −3 2
(c) image:
1 , 0 ; coimage: ,
; kernel:
,
; cokernel:
1.
2 −3 1 0
2 3 1
1 2 0 1
2.5.23. (i ) rank = 1; dim img A = dim coimg A = 1, dim ker A = dim coker A = 1;
! !
−2 2
kernel basis: ; cokernel basis: ; compatibility conditions: 2 b1 + b2 = 0;
1 1
! ! !
1 1 −2
example: b = , with solution x = +z .
−2 0 1
(iii ) rank = 2; dim img A = dim coimg A = 2, dim ker A = 0, dim coker A = 1;
20
− 13
20 3
kernel: {0}; cokernel basis: 3 ; compatibility conditions: − 13 b1 +
13
13 b2 + b3 = 0;
1
1 1
example: b = −2 ,
with solution x = 0 .
2 0
(v ) rank = 2; dim img A = dim coimg A = 2, dim ker A = 1, dim coker A = 2; kernel
9 1
−4 4
−1
1
1
4 −4
basis: −1 ;
cokernel basis:
,
;
compatibility: − 49 b1 + 1
4 b2 + b3 = 0,
1
1
0
1 0
2
1 −1
1 6
4 b1 − 14 b2 + b4 = 0; example: b =
, with solution x = 0 + z −1 .
3
0 1
1
1 0 1
1
0 1 −3
2.5.24. (b) dim = 1; basis: 1 ; (d) dim = 3; basis:
,
, .
−3 2 −8
−1
2 −3 7
1 0
1 −1
2.5.26. (b)
,
.
1 0
0 1
2.5.29. Both sets are linearly independent and hence span a three-dimensional subspace of R 4 .
Moreover, w1 = v1 + v3 , w2 = v1 + v2 + 2 v3 , w3 = v1 + v2 + v3 all lie in the span of
v1 , v2 , v3 and hence, by Theorem 2.31(d) also form a basis for the subspace.
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 2 17
2.5.41. True. If ker A = ker B ⊂ R n , then both matrices have n columns, and so n − rank A =
dim ker A = dim ker B = n − rank B.
♦ 2.5.44. Since we know dim img A = r, it suffices to prove that w1 , . . . , wr are linearly indepen-
dent. Given
0 = c1 w1 + · · · + cr wr = c1 A v1 + · · · + cr A vr = A(c1 v1 + · · · + cr vr ),
we deduce that c1 v1 + · · · + cr vr ∈ ker A, and hence can be written as a linear combination
of the kernel basis vectors:
c1 v1 + · · · + cr vr = cr+1 vr+1 + · · · + cn vn .
But v1 , . . . , vn are linearly independent, and so c1 = · · · = cr = cr+1 = · · · = cn = 0, which
proves linear independence of w1 , . . . , wr .
−1 0 1 0 0
−1 0 1 0 −1
1 0 0 0
0 −1 1 0;
0 −1 0 1 0
2.6.3. (a) (c) .
0 1 0 −1
0
−1 0 0 1
0 0 1 −1 0 0 1 −1 0
0 0 0 1 −1
−1 0
0
1
0
−1 1 −1
2.6.4. (a) 1 circuit: ; (c) 2 circuits: , .
−1 0 1
1 1 0
0 1
c 2018 Peter J. Olver and Chehrzad Shakiban
18 Students’ Solutions Manual: Chapter 2
1 −1 0 0
1 0 −1 0
1 0 0 −1
♥ 2.6.7. (a) Tetrahedron:
0
1 −1 0
0 1 0 −1
0 0 1 −1
number of circuits = dim coker A = 3, number of faces = 4.
−1 1 0 0 0 0
−1 1 0 0
0 1 −1 0 0 0
♥ 2.6.9. (a) (i )
0 1 −1 0, (iii )
0 0 1 −1 0 0.
0 1 0 −1 0 1 0 0 −1 0
0 1 0 0 0 −1
(b)
−1 1 0 0 0 −1 1 0 0 0 −1 1 0 0 0
0 −1 1 0 0 0 −1 1 0 0 0 1 −1 0 0
, , .
0 0 −1 1 0
0 0 −1 1 0
0 1 0 −1 0
0 0 0 −1 1 0 1 0 0 −1 0 1 0 0 −1
♥ 2.6.10. G3 G4
(a)
1 −1 0 0
1 0 −1 0
1 −1 0
1 0 0 −1
(b) 1 0 −1 .
,
0
1 −1 0
0 1 −1 0
1 0 −1
0 0 1 −1
♥ 2.6.14. (a) Note that P permutes the rows of A, and corresponds to a relabeling of the ver-
tices of the digraph, while Q permutes its columns, and so corresponds to a relabeling of
the edges. (b) (i ) Equivalent, (iii ) inequivalent, (v ) equivalent.
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual for
Chapter 3: Inner Products and Norms
0.5
-0.5
-1
q q
♦ 3.1.7. kcvk = h cv , cv i = c2 h v , v i = | c | k v k.
19
c 2018 Peter J. Olver and Chehrzad Shakiban
20 Students’ Solutions Manual: Chapter 3
♦ 3.1.12. (a) k u + v k2 − k u − v k2 = h u + v , u + v i − h u − v , u − v i
= h u , u i + 2 h u , v i + h v , v i − h u , u i − 2 h u , v i + h v , v i = 4 h u , v i.
3.1.21. (a) h 1 , x i = 21 , k 1 k = 1, k x k = √1 ;
3
r
(c) h x , ex i = 1, k x k = √1 , k ex k = 1 2
2 (e − 1) .
3
r r
2 56
3.1.22. (b) h f , g i = 0, k f k = 3 , kg k = 15 .
Z 1
3.1.23. (a) Yes; (b) no, since it fails positivity: for instance, (1 − x)2 x dx = − 34 .
−1
√ √
3.1.26. No. For example, on [ − 1, 1 ], k 1 k = 2 , but k 1 k2 = 2 6= k 12 k = 2.
The second has a similar proof, or follows from symmetry, cf. Exercise 3.1.9.
To prove symmetry:
Z b Z b
hf ,gi = f (x) g(x) w(x) dx = g(x) f (x) w(x) dx = h g , f i.
a a
Z b
As for positivity, h f , f i = f (x)2 w(x) dx ≥ 0. Moreover, since w(x) > 0 and the inte-
a
grand is continuous, Exercise 3.1.29 implies that h f , f i = 0 if and only if f (x)2 w(x) ≡ 0
for all x, and so f (x) ≡ 0.
(b) If w(x0 ) < 0, then, by continuity, w(x) < 0 for x0 − δ ≤ x ≤ x0 + δ for some δ > 0. Now
choose f (x) 6≡ 0 so that f (x) = 0 whenever | x − x0 | > δ. Then
Z b Z x +δ
0
hf ,f i = f (x)2 w(x) dx = f (x)2 w(x) dx < 0, violating positivity.
a x0 −δ
q
3.1.32. (a) h f , g i = 23 , k f k = 1, k g k = 28
45 .
√ √
3.2.1. (a) | v1 · v2 | = 3 ≤ 5 = 5 5 = k v1 k k v2 k; angle: cos−1 53 ≈ .9273;
√ √ √
(c) | v1 · v2 | = 0 ≤ 2 6 = 2 12 = k v1 k k v2 k; angle: 21 π ≈ 1.5708.
√ √
3.2.4. (a) | v · w | = 5 ≤ 7.0711 = 5 10 = k v k k w k.
√ √
3.2.5. (b) | h v , w i | = 11 ≤ 11.7473 = 23 6 = k v k k w k.
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 3 21
kv + wk
kwk
θ
kv − wk
kvk
√
3.2.32. (a) k v1 + v2 k = 4 ≤ 2 5 = k v1 k + k v2 k,
√ √ √
(c) k v1 + v2 k = 14 ≤ 2 + 12 = k v1 k + k v2 k.
√ √ √
3.2.33. (a) k v1 + v2 k = 5 ≤ 5 + 10 = k v1 k + k v2 k.
c 2018 Peter J. Olver and Chehrzad Shakiban
22 Students’ Solutions Manual: Chapter 3
r
2 1 2 1 −2
3.2.34. (b) k f + g k = 3 + 2e + 4 e−1 − 2e ≈ 2.40105
r r
2 1 2
≤ 2.72093 ≈ 3 + 2 (e − e−2 ) = k f k + k g k.
r r
133 28
3.2.35. (a) k f + g k = 45 ≈ 1.71917 ≤ 2.71917 ≈ 1 + 45 = k f k + k g k.
Z s s
1 Z 1 Z 1
x 2 x
3.2.37. (a)
0
f (x) g(x) e dx
≤ f (x) e dx g(x)2 ex dx ,
0 0
s s s
Z 1h i2 Z 1 Z 1
x
f (x) + g(x) e dx ≤ f (x)2 ex dx + g(x)2 ex dx ;
0 0 0
r
√
(b) h f , g i = 1
2 (e2 − 1) = 3.1945 ≤ 3.3063 = e − 1 13 (e3 − 1) = k f k k g k,
r r
1 3 7 √
kf + g k = 2
3e +e +e− 3 = 3.8038 ≤ 3.8331 = e − 1 + 13 (e3 − 1) = k f k + k g k;
√
3 e2 − 1
(c) cos θ = q = .9662, so θ = .2607.
2 (e − 1)(e3 − 1)
3.3.1. k v + w k1 = 2 ≤ 2 = 1 + 1 = k v k1 + k w k1 ,
√
k v + w k2 = 2 ≤ 2 = 1 + 1 = k v k2 + k w k2 ,
√
k v + w k3 = 3 2 ≤ 2 = 1 + 1 = k v k3 + k w k3 ,
k v + w k∞ = 1 ≤ 2 = 1 + 1 = k v k∞ + k w k∞ .
1 2 1 1
3.3.6. (a) k f − g k1 = 2 = .5, k f − h k1 = 1 − π = .36338, k g − h k1 = 2 − π = .18169,
r r
1 3 4
so g, h are closest. (b) k f − g k2 = 3 = .57735, k f − h k2 = 2 − π = .47619,
r
5 2
k g − h k2 = 6 − π = .44352, so g, h are closest.
3 5
3.3.7. (a) k f + g k1 = 4 = .75 ≤ 1.3125 ≈ 1 + 16 = k f k1 + k g k1 ;
r r
31 7
(b) k f + g k2 = 48 ≈ .8036 ≤ 1.3819 ≈ 1 + 48 = k f k2 + k g k2 .
3.3.13. True for an inner product norm, but false in general. For example,
k e1 + e2 k1 = 2 = k e1 k1 + k e2 k1 .
Z b Z bh i
♦ 3.3.17. (a) k f + g k1 = | f (x) + g(x) | dx ≤ | f (x) | + | g(x) | dx
a a
Z b Z b
= | f (x) | dx + | g(x) | dx = k f k1 + k g k1 .
a a
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 3 23
1 1
3 3
3.3.20. (b) 2 ; (d) 2 .
3 3
−1 −1
0.75
0.25
-0.25
-0.75
-1 -1
-1
3.3.25.
18
3.3.28. (a) 5 x − 65 ; (c) 3
2 x − 21 ; (e) 3
√ x− 1
√ .
2 2 2 2
3.3.29. (a) Yes, (c) yes, (e) no, (g) no — its norm is not defined.
√ √ √
3.3.31. (a) k v k2 = 2 , k v k∞ = 1, and √1 2 ≤ 1 ≤ 2;
2
1
(c) k v k2 = 2, k v k∞ = 1, and 2 2 ≤ 1 ≤ 2.
n
X X n
X
k v k21 = | vi |2 + 2 | vi | | vj | ≤ n | vi |2 = n k v k22 .
i=1 i<j i=1
√ √ √ √
(ii ) (a) k v k2 = 2, k v k1 = 2, and 2 ≤ 2 ≤ 2 2.
(iii ) (a) v = c ej for some j = 1, . . . , n.
c 2018 Peter J. Olver and Chehrzad Shakiban
24 Students’ Solutions Manual: Chapter 3
♥ 3.3.40. (a) The maximum (absolute) value of fn (x) is 1 = k fn k∞ . On the other hand,
sZ sZ
∞ n √
k fn k2 = | fn (x) |2 dx = dx = 2n −→ ∞.
−∞ −n
(b) Suppose there exists a constant C such that k f k2 ≤ C k f k∞ for all functions. Then, in
√
particular, 2 n = k fn k2 ≤ C k fn k∞ = C for all n, which is impossible.
3
3.3.45. (a) 4, (c) .9.
! ! !
0 1 1 2 0 −2
3.3.47. False: For instance, if A = ,S= , then B = S −1 A S = , and
0 1 0 1 0 1
k B k∞ = 2 6= 1 = k A k∞ .
0.5
-0.5
-1
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 3 25
! ! 9 6 3
10 6 6 −8
6
3.4.22. (i ) ; positive definite. (iii ) ; positive definite. (v ) 6 0;
6 4 −8 13
3 0 3
−1
positive semi-definite; null directions: all nonzero scalar multiples of
1 .
1
! 21 12 9
9 −12
12
3.4.23. (iii ) , positive definite; (v ) 9 3, positive semi-definite.
−12 21
9 3 6
Note: Positive definiteness doesn’t change, since it only depends upon the linear indepen-
dence of the vectors.
1 e−1 1
2 (e2 − 1)
3.4.25. K =
e−1 1
2 (e2 − 1) 1
3 (e3 − 1)
1
2 (e2 − 1) 1
3 (e3 − 1) 1
4 (e4 − 1)
♦ 3.4.30. (a) is a special case of (b) since positive definite matrices are symmetric.
(b) By Theorem 3.34 if S is any symmetric matrix, then S T S = S 2 is always positive semi-
definite, and positive definite if and only if ker S = {0}, i.e., S is nonsingular. In particular,
if S = K > 0, then ker K = {0} and so K 2 > 0.
3.5.1. (a) Positive definite; (c) not positive definite; (e) positive definite.
! ! ! !
1 2 1 0 1 0 1 2
3.5.2. (a) = ; not positive definite.
2 3 2 1 0 −1 0 1
3 −1 3 1 0 0 3 0 0 1 − 13 1
1 14
(c) −1
5 1
= −3 1 0
0 3 0
0 1 73 ; positive definite.
3 8
3 1 5 1 7 1 0 0 7 0 0 1
1
1 2 − 12
3.5.4. K = 1
2 2 0
; yes, it is positive definite.
1
− 2 0 3
3.5.5. (a) (x + 4 y)2 − 15 y 2 ; not positive definite. (b) (x − 2 y)2 + 3 y 2 ; positive definite.
c 2018 Peter J. Olver and Chehrzad Shakiban
26 Students’ Solutions Manual: Chapter 3
1 0 2 x
3.5.7. (a) ( x y z )T
0 2 4 y ; not positive definite;
2 4 12 z
1 1 −2 x
(c) ( x y z )T
1 2 −3 y ; positive definite.
−2 −3 6 z
n
X
3.5.10. (b) tr K = kii > 0 since, according to Exercise 3.4.5, every diagonal entry of K is
i=1
positive.
! ! !
4 −12 2 0 2 −6
3.5.19. (b) = ,
−12 45 −6 3 0 3
√
√
2 √1 √1
2 1 1 2 √0 0
√2 2
(d)
1 2 1
=
√1 √3 0
0 √3 √1
.
2 2 2 6
1 1 2 √1 √1 √2 0 0 √2
2 6 3 3
! ! !
4 −2 2 0 2 −1
3.5.20. (a) = √ √ , (c) no factorization.
−2 4 −1 3 0 3
√
3.5.21. (b) z12 + z22 , where z1 = x1 − x2 , z2 = 3 x2 ;
r r
√
(d) z12 + z22 + z32 , where z1 = 3 x1 − √1 x2 − √1 x3 , z2 = 53 x2 − √1 x3 , z 3 = 28
5 x3 .
3 3 15
(
1, k even,
3.6.2. ek π i = cos k π + i sin k π = (−1)k =
−1, k odd.
√ 1 i 1 i
3.6.5. (a) i = eπ i /2 ; (b) i = eπ i /4 = √ + √ and e5 π i /4 = − √ − √ .
2 2 2 2
3.6.7. (a) 1/z moves in a clockwise direction around a circle of radius 1/r.
3.6.16. (b) cos 3 θ = cos3 θ − 3 cos θ sin2 θ, sin 3 θ = 3 cos θ sin2 θ − sin3 θ.
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 3 27
d λx d µx
♦ 3.6.24. (a) e = e cos ν x + i eµ x sin ν x = (µ eµ x cos ν x − ν eµ x sin ν x) +
dx dx
+ i (µ eµ x sin ν x + ν eµ x cos ν x) = (µ + i ν) eµ x cos ν x + i eµ x sin ν x = λ eλ x .
1 1
3.6.25. (a) 2 x+ 4 sin 2 x, (c) − 14 cos 2 x, (e) 3
8 x+ 1
4 sin 2 x + 1
32 sin 4 x.
3.6.48. (b) (i ) h x + i , x − i i = − 32 + i , k x + i k = k x − i k = √2 ;
3
√
13 4
(ii ) | h x + i , x − i i | = 3 ≤ 3 = k x + i k k x − i k,
k (x + i ) + (x − i ) k = k 2 x k = √2 ≤ √4 = k x + i k + k x − i k.
3 3
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual for
Chapter 4: Orthogonality
4.1.1. (a) Orthogonal basis (c) not a basis; (e) orthogonal basis.
4.1.5. (a) a = ± 1.
4.1.6. a = 2 b > 0.
1 0 0
4.1.9. False. Consider the basis v1 = v2 = 1 ,
v3 = 1 ,
Under the weighted in- 0 .
0 0 1
ner product, h v1 , v2 i = b > 0, since the coefficients of a, b, c appearing in the inner product
must be strictly positive.
28
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 4 29
T T
(d) u1 = √1 , √1 , u2 = − √2 , √1 .
3 3 15 15
√ 2 √ 2
(e) v = h v , u1 i u1 + h v , u2 i u2 = √7 u1 − 15
3 u2 ; k v k2 = 18 = √7 + − 15
3 .
3 3
h x , p1 i 1 h x , p2 i h x , p3 i 1
4.1.26. = , = 1, = 0, so x = 2 p1 (x) + p2 (x).
k p1 k2 2 k p2 k2 k p3 k2
T T T T
4.2.2. (a) √1 , 0, √1 , 0 , 0, √1 , 0, − √1 , 1 1 1 1
− 12 , 12 , 12 , 1
2 2 2 2 2, 2, −2, 2 , 2 .
T
4.2.4. (b) Starting with the basis 1
2 , 1, 0 , ( −1, 0, 1 )T , the Gram–Schmidt process pro-
T T
duces the orthonormal basis √1 , √2 , 0 , − 4
√ 2
, √ 5
, √ .
5 5 3 5 3 5 3 5
4.2.8. Applying the Gram–Schmidt process to the standard basis vectors e1 , e2 gives
1
√1 0 1 √
3 ,
2 2 3
(a) , ; (b) .
0 √1 0 √2
5 3
c 2018 Peter J. Olver and Chehrzad Shakiban
30 Students’ Solutions Manual: Chapter 4
!T !T
1+ i 1− i 3− i 1+ 3i
4.2.15. (a) , , √ , √ .
2 2 2 5 2 5
!T !T
1− i 1 −1 + 2 i 3 − i 3 i
4.2.16. (a) √ , √ ,0 , √ , √ , √ .
3 3 2 6 2 6 2 6
.5164 −.2189 .2529
0. .8165 .57735 .2582
−.5200
.5454
4.2.17. (a) .7071 , −.4082 ,
.57735
; (c)
.7746
,
.4926 ,
−.2380 .
.7071 .4082 −.57735 .2582 −.5200 −.3372
0. .4105 .6843
4.3.3. (a) True: Using the formula (4.31) for an improper 2 × 2 orthogonal matrix,
!2 !
cos θ sin θ 1 0
= .
sin θ − cos θ 0 1
4.3.9. In general, det(Q1 Q2 ) = det Q1 det Q2 . If both determinants are +1, so is their prod-
uct. Improper times proper is improper, while improper times improper is proper.
4.3.14. False. This is true only for row interchanges or multiplication of a row by −1.
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 4 31
√
1 −3 √1 − √2 5 − √1
5 5 5
4.3.27. (a) = ,
2 1 √2 √1 0 √7
5 5 5
√
√2 − √1 √1
√3 − √3
2 1 −1 5 30 6 5 5 5
q q q
5
(c)
0 1 3= 0 √1 0
6 2
7 15 ,
6 6 5
q q q
−1 −1 1 − √1 − 15 2 2
3 0 0 2 23
5
0 0 2
0 0 1
1 0 −1
(e) 1 = 0 0 1
0 4 0 1 4 .
−1 0 1 −1 0 0 0 0 2
2
4.3.28. 2 1 −1 √1 − 1
√ 3 0 2
1
3 2 3
√ 2 x
1
1 2 2
√ √
(ii ) (a) 0 2
= 3 0 3
0
2 −2 2
, (b)
y = −1 ;
√
2 z
2 −1 3 3 − √1 − √ 1
0 0 2 −1
2 3 2
♠ 4.3.29. 3 × 3 case:
4 1 0 .9701 −.2339 .0643 4.1231 1.9403 .2425
1 4 1 −.2571 1.9956
= .2425 .9354 0 3.773 .
0 1 4 0 .2650 .9642 0 0 3.5998
♥ 4.3.32. (a) If rank A = n, then the columns w1 , . . . , wn of A are linearly independent, and so
form a basis for its image. Applying the Gram–Schmidt process converts the column basis
w1 , . . . , wn to an orthonormal basis u1 , . . . , un of img A.
1
1 −1 √ − 32 √ √
5
5 5
(c) (i ) 2 3 = √2 1 .
3
5 0 3
0 2 2
0 3
! 1 0 0 ! 1 0
−1 0 0
4.3.34. (a) (i ) , (iii )
0 −1 0. (b) (i ) v = c , (iii ) v = c 0 + d 0 .
0 1 1
0 0 1 0 1
! !
−1.2361 .4472 .8944
4.3.35. Exercise 4.3.27: (a) b =
v , H1 = ,
1 2.0000 .8944 −.4472
! !
.4472 .8944 2.2361 −.4472
Q= , R= ;
.8944 −.4472 0 −3.1305
−.2361 .8944 0 −.4472
(c) b = 0 0
v 1 , H1 = 0 1 ,
−1 −.4472 0 −.8944
0 1 0 0
b = −.0954 ,
H2 = 0 .9129 .4082
v 2 ,
.4472 0 .4082 −.9129
.8944 −.1826 .4082 2.2361 1.3416 −1.3416
Q=
0 .9129 .4082
, R=
0 1.0954 2.556
;
−.4472 −.3651 .8165 0 0 1.633
c 2018 Peter J. Olver and Chehrzad Shakiban
32 Students’ Solutions Manual: Chapter 4
−1 0 0 −1 0 1 0 0
(e) b = 0 , H = 0 1 0 , = 0 , H2 = 0 1 0
v 1 1 vb
2 ,
−1 −1 0 0 0 0 0 1
0 0 −1 1 0 −1
Q= 0 1 0, R= 0 4 1
.
−1 0 0 0 0 −2
Exercise 4.3.29: 3 × 3 case:
−.1231 .9701 .2425 0
b =
v 1 , H1 =
.2425 −.9701 0,
1
0 0 0 1
0 1 0 0
b =
v −7.411 , H2 = 0 −.9642 .2650
2
1 0 .2650 .9642
.9701 −.2339 .0643 4.1231 1.9403 .2425
Q= .2425 .9354 −.2571 1.9956
, R= 0 3.773 ;
0 .2650 .9642 0 0 3.5998
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 4 33
1 1
−
⊥ 3 3
4.4.12. (a) W has basis 1 ,
0
, dim W ⊥ = 2;
0 1
2
(d) W ⊥ has basis
1 , dim W ⊥ = 1.
1
1 3
3
2 2
4.4.13. (a) 4 1 , 0 .
; (b)
−5 0 1
1 2
3 7 3 3
4.4.15. (a) w = 10 , z= 10 ; (c) w = −1 , z= 1 .
1 21 3 3
− 10
10
− 13 1
3
2
−1 6
3 3
4.4.16. (a) Span of 1 ,
0; dim W ⊥ = 2. (d) Span of ;
2 dim W ⊥ = 1.
0 1 1
Z 1
4.4.19. (a) h p , q i = p(x) q(x) dx = 0 for all q(x) = a + b x + c x2 , or, equivalently,
−1
Z 1 Z 1 Z 1
p(x) dx = x p(x) dx = x2 p(x) dx = 0. Writing p(x) = a + b x + c x2 + d x3 + e x4 ,
−1 −1 −1
2 2 2 2 2 2 2
the orthogonality conditions require 2 a + 3c+ 5 e = 0, 3 b+ 5 d = 0, 3 a+ 5 c+ 7 e = 0.
3 3 4 6 2 3 ⊥
(b) Basis: t − 5 t, t − 7t + 35 ; dim W = 2; (c) the preceding basis is orthogonal.
c 2018 Peter J. Olver and Chehrzad Shakiban
34 Students’ Solutions Manual: Chapter 4
4.4.29. Note: To show orthogonality of two subspaces, it suffices to check orthogonality of their
respective basis vectors.
! ! ! !
1 −2 1 2
(a) (i ) Image: ; cokernel: ; coimage: ; kernel: ;
2 1 −2 1
! ! ! !
1 −2 1 2
(ii ) · = 0; (iii ) · = 0.
2 1 −2 1
0 1 −3 −1 0 −3
(c) (i ) Image:
−1 , 0 ; cokernel: −2 ; coimage: 0 , 1 ; kernel: −2 ;
−2 3 1 −3 2 1
0 −3 1 −3 −1 −3 0 −3
(ii ) −1 · −2 = 0 · −2 = 0; (iii ) 0 · −2 = 1 · −2 = 0.
−2 1 3 1 −3 1 2 1
T
2 2
4.4.30. (a) The compatibility condition is 3 b1 + b2 = 0 and so the cokernel basis is 3,1 .
(c) There are no compatibility conditions, and so the cokernel is {0}.
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 4 35
3 h t3 , q1 i 3 Z 1
h t3 , q0 i 1 Z 1
= = t3 t dt, 0= = t3 dt.
5 k q1 k2 2 −1 k q0 k2 2 −1
5 ! d5
4.5.2. (a) q5 (t) = t5 − 10 3
9 t + 5
21 t= (t2 − 1)5 , (b) t5 = q5 (t) + 10
9 q3 (t) + 3
7 q1 (t).
10 ! dt5
s
k! dk 2k (k !)2 2
4.5.6. qk (t) = (t2 − 1)k , k qk k = .
(2 k) ! dtk (2 k) ! 2k + 1
q
♥ 4.5.10. (a) The roots of P2 (t) are ± √1 ; the roots of P3 (t) are 0, ± 3
5 .
3
4.5.22. A basis for the solution set is given by ex and e2 x . The Gram-Schmidt process yields
2 (e3 − 1) x
the orthogonal basis e2 x and e .
3 (e2 − 1)
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual for
Chapter 5: Minimization and Least Squares
5.1.4. Note: To minimize the distance between the point ( a, b )T to the line y = m x + c:
n o
(i ) in the ∞ norm we must minimize the scalar function f (x) = max | x − a |, | m x + c − b | ,
while (ii ) in the 1 norm we must minimize the scalar function f (x) = | x − a |+| m x + c − b |.
T
(i ) (b) all points on the line segment ( 0, y )T for 1 ≤ y ≤ 3; (d) − 23 , 3
2 .
T T
(ii ) (b) ( 0, 2 ) ; (d) all points on the line segment ( t, −t ) for −2 ≤ t ≤ −1.
5.1.7. This holds because the two triangles in the figure are congruent. According to Exercise
5.1.6(c), when k a k = k b k = 1, the distance is | sin θ | where θ is the angle between a, b, as
ilustrated:
b
a
θ
| a x0 + b y0 + c z 0 + d | 1
♥ 5.1.9. (a) The distance is given by √ . (b) √ .
a 2 + b 2 + c2 14
1 1
5.2.1. x = 2, y= = −2, with f (x, y, z) = − 32 . This is the global minimum because the
2 ,z
1 1 0
coefficient matrix
1 3 1 is positive definite.
0 1 1
5.2.7. (a) maximizer: x⋆ = ( 10/11, 3/11 )^T; maximum value: p(x⋆) = 16/11.
5.3.1. Closest point: ( 6/7, 38/35, 36/35 )^T ≈ ( .85714, 1.08571, 1.02857 )^T; distance: 1/√35 ≈ .16903.
5.4.1. (a) 2; (b) ( 8/5, 28/65 )^T = ( 1.6, .4308 )^T.
5.4.2. (b) x = −2/5, y = −1/2; (d) x = 1/3, y = 2, z = 4/3.
5.4.5. The solution is x⋆ = ( −1, 2, 3 )T . The least squares error is 0 because b ∈ img A and so
x⋆ is an exact solution.
5.4.8. The solutions are, of course, the same:
(b) Q = [ .8 −.43644 ; .4 .65465 ; .2 −.43644 ; .4 .43644 ],  R = [ 5 0 ; 0 4.58258 ],  x = ( −.04000, −.38095 )^T;
(d) Q = [ .18257 .36515 .12910 ; .36515 −.18257 .90370 ; 0 .91287 .12910 ; −.91287 0 .38730 ],
R = [ 5.47723 −2.19089 0 ; 0 1.09545 −3.65148 ; 0 0 2.58199 ],
x = ( .33333, 2.00000, .75000 )^T.
5.4.10. (a) ( −1/7, 0 )^T, (b) ( 9/14, 4/31 )^T.
5.5.3. (a) y = 3.9227 t − 7717.7; (b) $147,359 and $166,973.
5.5.6. (a) The least squares exponential is y = e4.6051−.1903 t and, at t = 10, y = 14.9059.
(b) Solving e4.6051−.1903 t = .01, we find t = 48.3897 ≈ 49 days.
♦ 5.5.12.
(1/m) Σ_{i=1}^{m} (t_i − t̄)² = (1/m) Σ t_i² − (2 t̄/m) Σ t_i + ((t̄)²/m) Σ 1 = t̄² − 2(t̄)² + (t̄)²  computed termwise, i.e.
(1/m) Σ_{i=1}^{m} (t_i − t̄)² = t²̄ − 2(t̄)² + (t̄)² = t²̄ − (t̄)²,
where t̄ and t²̄ denote the means of the t_i and t_i².
5.5.13. (a) y = 2 t − 7; (b) y = t² + 3 t + 6; (d) y = −t³ + 2 t² − 1. [Plots of the data and the least squares fits omitted.]
5.5.17. The quadratic least squares polynomial is y = 4480.5 + 6.05 t − 1.825 t2 , and y = 1500 at
t = 42.1038 seconds.
5.5.20. (a) p2(t) = 1 + t + (1/2) t², whose maximal error over [ 0, 1 ] is .218282.
5.5.22. p(t) = .9409 t + .4566 t² − .7732 t³ + .9330 t⁴. The graphs are very close over the interval 0 ≤ t ≤ 1; the maximum error is .005144 at t = .91916. The functions rapidly diverge above 1, with tan t → ∞ as t → π/2, whereas p(π/2) = 5.2882. The first graph is on the interval [ 0, 1 ] and the second on [ 0, π/2 ]. [Graphs omitted.]
♦ 5.5.31. q0(t) = 1,  q1(t) = t − t̄,  q2(t) = t² − ((t³̄ − t̄ t²̄)/(t²̄ − t̄²)) (t − t̄) − t²̄;
equivalently, q0 = t⁰, q1 = t¹ − t̄ t⁰, q2 = t² − ((t³̄ − t̄ t²̄)/(t²̄ − t̄²)) (t¹ − t̄ t⁰) − t²̄ t⁰;
moreover, ‖ q0 ‖² = 1,  ‖ q1 ‖² = t²̄ − t̄²,  ‖ q2 ‖² = t⁴̄ − (t²̄)² − (t³̄ − t̄ t²̄)²/(t²̄ − t̄²),
where the bars denote the indicated sample means.
5.5.35. (a) For example, an interpolating polynomial for the data (0, 0), (1, 1), (2, 2) is the straight
line y = t.
♥ 5.5.39. (a) f′(x) ≈ [ f(x + h) − f(x − h) ] / (2 h);  (b) f″(x) ≈ [ f(x + h) − 2 f(x) + f(x − h) ] / h².
♥ 5.5.40. (a) Trapezoid Rule: ∫_a^b f(x) dx ≈ (1/2)(b − a) [ f(x0) + f(x1) ].
(b) Simpson's Rule: ∫_a^b f(x) dx ≈ (1/6)(b − a) [ f(x0) + 4 f(x1) + f(x2) ].
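Both rules translate directly into code; the integrand below is an illustrative choice:

```python
def trapezoid(f, a, b):
    """Trapezoid rule on [a, b] with the two endpoints as nodes."""
    return 0.5 * (b - a) * (f(a) + f(b))

def simpson(f, a, b):
    """Simpson's rule on [a, b] with endpoints and midpoint as nodes."""
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

# Simpson's rule is exact for cubics: the integral of x^3 on [0, 1] is 1/4.
s = simpson(lambda x: x**3, 0.0, 1.0)
t = trapezoid(lambda x: x**3, 0.0, 1.0)
print(s, t)
```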
5.5.41. The sample matrix is A = [ 1 0 ; 0 1 ; −1 0 ]; the least squares solution to A x = y = ( 1, .5, .25 )^T gives g(t) = (3/8) cos π t + (1/2) sin π t. [Graph omitted.]
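The least squares coefficients can be reproduced with NumPy's `lstsq` applied to the sample matrix and data vector above:

```python
import numpy as np

# Least squares fit of g(t) = a cos(pi t) + b sin(pi t) to the sampled data.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 0.0]])
y = np.array([1.0, 0.5, 0.25])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)   # [0.375 0.5], i.e. a = 3/8, b = 1/2
```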
5.5.47. (a) 3/7 + (9/14) t; (b) 9/28 + (9/7) t − (9/14) t².
5.5.58. (a) 1/5 + (4/7)( −1/2 + (3/2) t ) = −3/35 + (6/7) t.
(a) u(x) = { −1.25 (x + 1)³ + 4.25 (x + 1) − 2,  −1 ≤ x ≤ 0;  1.25 x³ − 3.75 x² + .5 x + 1,  0 ≤ x ≤ 1 };
(c) u(x) = { (2/3)(x − 1)³ − (11/3)(x − 1) + 3,  1 ≤ x ≤ 2;  −(1/3)(x − 2)³ + 2 (x − 2)² − (5/3)(x − 2),  2 ≤ x ≤ 4 }.
[Graphs omitted.]
c 2018 Peter J. Olver and Chehrzad Shakiban
Students’ Solutions Manual: Chapter 5 41
5.5.68. (a) u(x) = −5.25 (x + 1)³ + 8.25 (x + 1)² − 2,  −1 ≤ x ≤ 0,  … ;
(c) u(x) = { 3.5 (x − 1)³ − 6.5 (x − 1)² + 3,  1 ≤ x ≤ 2;  −1.125 (x − 2)³ + 4 (x − 2)² − 2.5 (x − 2),  2 ≤ x ≤ 4 }.
[Graphs omitted.]
x³ − 2 x² + 1,  0 ≤ x ≤ 1,  … ,  −(x − 2)³ + (x − 2)² + (x − 2),  2 ≤ x ≤ 3. [Graph omitted.]
♥ 5.5.75. (a)
C0(x) = { 1 − (19/15) x + (4/15) x³,  0 ≤ x ≤ 1;
          −(7/15)(x − 1) + (4/5)(x − 1)² − (1/3)(x − 1)³,  1 ≤ x ≤ 2;
          (2/15)(x − 2) − (1/5)(x − 2)² + (1/15)(x − 2)³,  2 ≤ x ≤ 3 };
C1(x) = { (8/5) x − (3/5) x³,  0 ≤ x ≤ 1;
          1 − (1/5)(x − 1) − (9/5)(x − 1)² + (x − 1)³,  1 ≤ x ≤ 2;
          −(4/5)(x − 2) + (6/5)(x − 2)² − (2/5)(x − 2)³,  2 ≤ x ≤ 3 };
C2(x) = { −(2/5) x + (2/5) x³,  0 ≤ x ≤ 1;
          (4/5)(x − 1) + (6/5)(x − 1)² − (x − 1)³,  1 ≤ x ≤ 2;
          1 + (1/5)(x − 2) − (9/5)(x − 2)² + (3/5)(x − 2)³,  2 ≤ x ≤ 3 };
C3(x) = { (1/15) x − (1/15) x³,  0 ≤ x ≤ 1;
          −(2/15)(x − 1) − (1/5)(x − 1)² + (1/3)(x − 1)³,  1 ≤ x ≤ 2;
          (7/15)(x − 2) + (4/5)(x − 2)² − (4/15)(x − 2)³,  2 ≤ x ≤ 3 }.
[Graphs of the four cardinal splines omitted.]
(b) It suffices to note that any linear combination of natural splines is a natural spline. More-
over, u(xj ) = y0 C0 (xj ) + y1 C1 (xj ) + · · · + yn Cn (xj ) = yj , as desired.
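The cardinal interpolation property C0(x_j) = δ_{0j} used in part (b) can be checked numerically; the piecewise formula for C0 is written out in the function below:

```python
# Evaluate the cardinal natural spline C0 and check C0(0)=1, C0(1)=C0(2)=C0(3)=0.
def C0(x):
    if x <= 1:
        return 1 - 19/15 * x + 4/15 * x**3
    if x <= 2:
        s = x - 1
        return -7/15 * s + 4/5 * s**2 - 1/3 * s**3
    s = x - 2
    return 2/15 * s - 1/5 * s**2 + 1/15 * s**3

values = [C0(x) for x in (0, 1, 2, 3)]
print(values)   # approximately [1, 0, 0, 0]
```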
5.6.1. (a) (i) c0 = 0, c1 = −(1/2) i, c2 = c−2 = 0, c3 = c−1 = (1/2) i; (ii) (1/2) i e^{− i x} − (1/2) i e^{i x} = sin x.
(c) (i) c0 = 1/3, c1 = (3 − √3 i)/12, c2 = (1 − √3 i)/12, c3 = c−3 = 0, c4 = c−2 = (1 + √3 i)/12, c5 = c−1 = (3 + √3 i)/12;
(ii) ((1 + √3 i)/12) e^{−2 i x} + ((3 + √3 i)/12) e^{− i x} + 1/3 + ((3 − √3 i)/12) e^{i x} + ((1 − √3 i)/12) e^{2 i x}
= 1/3 + (1/2) cos x + (1/(2√3)) sin x + (1/6) cos 2x + (1/(2√3)) sin 2x.
[Plots of the interpolants omitted.]
The interpolants are accurate along most of the interval, but there is a noticeable problem
near the endpoints x = 0, 2 π. (In Fourier theory, [ 19, 61 ], this is known as the Gibbs phe-
nomenon.)
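The interpolation coefficients are exactly the values of a normalized discrete Fourier transform; for instance, for f(x) = sin x sampled at n = 4 equally spaced points, c1 = −i/2 as in 5.6.1(a):

```python
import numpy as np

# c_k = (1/n) sum_j f(x_j) e^{-i k x_j} at the nodes x_j = 2 pi j / n.
n = 4
f = np.sin(2 * np.pi * np.arange(n) / n)   # samples (0, 1, 0, -1)
c = np.fft.fft(f) / n
print(np.round(c, 10))   # c_1 = -0.5j, c_3 = +0.5j, the rest 0
```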
♠ 5.6.4. (a), (c) [Plots of the trigonometric interpolants omitted.]
♠ 5.6.10. [Plots of the compressed signals omitted.]
The average absolute errors are .018565 and .007981; the maximal errors are .08956 and
.04836, so the 21 mode compression is about twice as accurate.
♣ 5.6.13. Very few are needed. In fact, if you take too many modes, you do worse! For example, if ε = .1, the plots (omitted here) show the noisy signal and the effect of retaining 2 l + 1 = 3, 5, 11, 21 modes. Only the first three give reasonable results. When ε = .5 the effect is even more pronounced. [Plots omitted.]
♠ 5.6.17. (a) f = ( 0, 1/2, 1, 3/2 )^T;  c(0) = ( 0, 1, 1/2, 3/2 )^T;  c(1) = ( 1/2, −1/2, 1, −1/2 )^T;
c = c(2) = ( 3/4, −1/4 + (1/4) i, −1/4, −1/4 − (1/4) i )^T.
(c) f = ( 0, (1/4)π, (1/2)π, (3/4)π, π, (3/4)π, (1/2)π, (1/4)π )^T;
c(0) = ( 0, π, (1/2)π, (1/2)π, (1/4)π, (3/4)π, (3/4)π, (1/4)π )^T;
c(1) = ( (1/2)π, −(1/2)π, (1/2)π, 0, (1/2)π, −(1/4)π, (1/2)π, (1/4)π )^T;
c(2) = ( (1/2)π, −(1/4)π, 0, −(1/4)π, (1/2)π, −((1 + i)/8)π, 0, −((1 − i)/8)π )^T;
c = c(3) = ( (1/2)π, −((√2 + 1)/(8√2))π, 0, −((√2 − 1)/(8√2))π, 0, −((√2 − 1)/(8√2))π, 0, −((√2 + 1)/(8√2))π )^T.
♠ 5.6.18. (a) c = ( 1, −1, 1, −1 )^T,  f(0) = ( 1, 1, −1, −1 )^T,  f(1) = ( 2, 0, −2, 0 )^T,  f = f(2) = ( 0, 0, 4, 0 )^T.
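The reconstruction f = f(2) can be checked against NumPy's inverse FFT, which differs from the sum f_j = Σ_k c_k e^{i k x_j} only by the factor n:

```python
import numpy as np

# Reconstruct the sample values from the interpolation coefficients.
c = np.array([1.0, -1.0, 1.0, -1.0])
f = 4 * np.fft.ifft(c)          # n = 4 times numpy's normalized inverse FFT
print(np.round(f.real, 10))     # [0. 0. 4. 0.]
```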
Students’ Solutions Manual for
Chapter 6: Equilibrium
6.1.1. (a) K = [ 3 −2 ; −2 3 ]; (b) u = ( 18/5, 17/5 )^T = ( 3.6, 3.4 )^T; (c) the first mass has moved the farthest; (d) e = ( 18/5, −1/5, −17/5 )^T = ( 3.6, −.2, −3.4 )^T, so the first spring has stretched the most, while the third spring experiences the most compression.
6.1.3. Exercise 6.1.1: (a) K = [ 3 −2 ; −2 2 ]; (b) u = ( 7, 17/2 )^T = ( 7.0, 8.5 )^T;
(c) the second mass has moved the farthest;
(d) e = ( 7, 3/2 )^T = ( 7.0, 1.5 )^T, so the first spring has stretched the most.
6.1.8. (a) For maximum displacement of the bottom mass, the springs should be arranged from
weakest at the top to strongest at the bottom, so c1 = c = 1, c2 = c′ = 2, c3 = c′′ = 3.
6.2.2. (a) A = [ 1 −1 0 0 ; 1 0 −1 0 ; 1 0 0 −1 ; 0 1 −1 0 ; 0 1 0 −1 ];
(b) [ 3 −1 −1 ; −1 3 −1 ; −1 −1 2 ] ( u1, u2, u3 )^T = ( 3, 0, 0 )^T.
(c) u = ( 15/8, 9/8, 3/2 )^T = ( 1.875, 1.125, 1.5 )^T;
(d) y = v = A u = ( 3/4, 3/8, 15/8, −3/8, 9/8 )^T = ( .75, .375, 1.875, −.375, 1.125 )^T.
(e) The bulb will be brightest when connected to wire 3, which has the most current flowing through it.
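The nodal potentials in part (c) can be recovered by solving the reduced system from part (b) numerically:

```python
import numpy as np

# Solve the reduced equilibrium system K u = f for the nodal potentials.
K = np.array([[3.0, -1.0, -1.0],
              [-1.0, 3.0, -1.0],
              [-1.0, -1.0, 2.0]])
f = np.array([3.0, 0.0, 0.0])
u = np.linalg.solve(K, f)
print(u)   # [1.875 1.125 1.5], i.e. (15/8, 9/8, 3/2)
```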
♠ 6.2.6. None.
♠ 6.2.8. (a) The potentials remain the same, but the currents are all twice as large.
6.2.12. (a) True, since they satisfy the same systems of equilibrium equations K u = − AT C b = f .
(b) False, because the currents with the batteries are, by (6.37), y = C v = C A u + C b,
while for the current sources they are y = C v = C A u.
6.2.17. (a) If f are the current sources at the nodes and b the battery terms, then the nodal
voltage potentials satisfy AT C A u = f − AT C b.
(b) By linearity, the combined potentials (currents) are obtained by adding the potentials
(currents) due to the batteries and those resulting from the current sources.
(c) (1/2) P = p(u) = (1/2) u^T K u − u^T ( f − A^T C b ).
6.3.1. 8 cm
6.3.3. (a) For a unit horizontal force on the two nodes, the displacement vector is
u = ( 1.5, − .5, 2.5, 2.5 )T , so the left node has moved slightly down and three times as far
to the right, while the right node has moved five times as far up and to the right. Note that
the force on the left node is transmitted through the top bar to the right node, which ex-
plains why it moves significantly further. The stresses are e = ( .7071, 1, 0, − 1.5811 )T , so
the left and the top bar are elongated, the right bar is stress-free, and the reinforcing bar is
significantly compressed.
♥ 6.3.5. (a) A = [ 0 1 0 0 ; −1 0 1 0 ; 0 0 0 1 ; 0 0 1/√2 1/√2 ; −1/√2 1/√2 0 0 ];
(b) (3/2) u1 − (1/2) v1 − u2 = f1,
−(1/2) u1 + (3/2) v1 = g1,
−u1 + (3/2) u2 + (1/2) v2 = f2,
(1/2) u2 + (3/2) v2 = g2.
♣ 6.3.11. (a)
A = [ −1 0 0 1 0 0 0 0 0 0 0 0 ;
      0 −1 0 0 0 0 0 1 0 0 0 0 ;
      0 0 −1 0 0 0 0 0 0 0 0 1 ;
      0 0 0 1/√2 −1/√2 0 −1/√2 1/√2 0 0 0 0 ;
      0 0 0 1/√2 0 −1/√2 0 0 0 −1/√2 0 1/√2 ;
      0 0 0 0 0 0 0 1/√2 −1/√2 0 −1/√2 1/√2 ];
(b) v1 = ( 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0 )^T,  v2 = ( 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0 )^T,
v3 = ( 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1 )^T,  v4 = ( 0, 0, 0, 0, 0, 0, 0, 0, −1, 0, 1, 0 )^T,
v5 = ( 0, 0, 0, 0, 0, −1, 0, 0, 0, 1, 0, 0 )^T,  v6 = ( 0, 0, 0, 0, −1, 0, 1, 0, 0, 0, 0, 0 )^T;
(f ) For fi = ( fi , gi , hi )T we require f1 + f2 + f3 + f4 = 0, g1 + g2 + g3 + g4 = 0,
h1 + h2 + h3 + h4 = 0, h3 = g4 , h2 = f4 , g2 = f3 , i.e., there is no net horizontal force and
no net moment of force around any axis.
(g) You need to fix three nodes. Fixing two still leaves a rotation motion around the line
connecting them.
(h) Displacement of the top node: u4 = ( −1, −1, −1 )T ; since e = ( −1, 0, 0, 0 )T , only the
vertical bar experiences compression of magnitude 1.
6.3.14. (a) 3 n.
(b) Example: a triangle each of whose nodes is connected
to the ground by two additional, non-parallel bars.
♦ 6.3.18. (a) We are assuming that f ∈ img K = coimg A = img AT , cf. Exercise 3.4.32. Thus, we
can write f = AT h = AT C g where g = C −1 h.
(b) The equilibrium equations K u = f are AT C A u = AT C g which are the normal equa-
tions (5.36) for the weighted least squares solution to A u = g.
♥ 6.3.20. (a) A⋆ = [ 1/√2 1/√2 0 0 0 ; −1 0 1 0 0 ; 0 0 −1/√2 1/√2 1/√2 ];
K⋆ u = f⋆ where
K⋆ = [ 3/2 1/2 −1 0 0 ;
       1/2 1/2 0 0 0 ;
       −1 0 3/2 −1/2 −1/2 ;
       0 0 −1/2 1/2 1/2 ;
       0 0 −1/2 1/2 1/2 ].
(b) Unstable, since there are two mechanisms prescribed by the kernel basis elements
( 1, −1, 1, 1, 0 )T , which represents the same mechanism as when the end is fixed, and
( 1, −1, 1, 0, 1 )T , in which the roller and the right hand node move horizontally to the right,
while the left node moves down and to the right.
Students’ Solutions Manual for
Chapter 7: Linearity
7.1.16. (b) Not linear; codomain space = Mn×n . (d) Not linear; codomain space = Mn×n .
(f ) Linear; codomain space = R.
7.1.22. Iw[ c f + d g ] = ∫_a^b [ c f(x) + d g(x) ] w(x) dx = c ∫_a^b f(x) w(x) dx + d ∫_a^b g(x) w(x) dx = c Iw[ f ] + d Iw[ g ].
7.1.24. ∆[ c f + d g ] = ∂²/∂x² [ c f(x, y) + d g(x, y) ] + ∂²/∂y² [ c f(x, y) + d g(x, y) ]
= c ( ∂²f/∂x² + ∂²f/∂y² ) + d ( ∂²g/∂x² + ∂²g/∂y² ) = c ∆[ f ] + d ∆[ g ].
7.1.27. (b) dimension = 4; basis: [ 1 0 ; 0 0 ], [ 0 1 ; 0 0 ], [ 0 0 ; 1 0 ], [ 0 0 ; 0 1 ].
(d) dimension = 4; basis given by L0 , L1 , L2 , L3 , where Li [ a3 x3 + a2 x2 + a1 x + a0 ] = ai .
7.1.28. True. The dimension is 2, with basis [ 0 1 ; 0 0 ], [ 0 0 ; 0 1 ].
7.1.30. (a) a = ( 3, −1, 2 )^T, (c) a = ( 5/4, −1/2, 5/4 )^T.
7.1.41. (a) L = E ◦ D where D[ f (x) ] = f ′ (x), E[ g(x) ] = g(0). No, they do not commute:
D ◦ E is not even defined since the codomain of E, namely R, is not the domain of D, the
space of differentiable functions. (b) e = 0 is the only condition.
7.1.51. (a) The inverse is the scaling transformation that halves the length of each vector.
(b) The inverse is counterclockwise rotation by 45◦ . (d) No inverse.
7.1.52. (b) Function: [ 1/√2 1/√2 ; −1/√2 1/√2 ]; inverse: [ 1/√2 −1/√2 ; 1/√2 1/√2 ].
(d) Function: [ 1/2 1/2 ; 1/2 1/2 ]; no inverse.
♥ 7.1.58. (a) Every vector in V can be uniquely written as a linear combination of the basis ele-
ments: v = c1 v1 + · · · + cn vn . Assuming linearity, we compute
L[ v ] = L[ c1 v1 + · · · + cn vn ] = c1 L[ v1 ] + · · · + cn L[ vn ] = c1 w1 + · · · + cn wn .
Since the coefficients c1 , . . . , cn of v are uniquely determined, this formula serves to uniquely
define the function L : V → W . We must then check that the resulting function is linear.
Given any two vectors v = c1 v1 + · · · + cn vn , w = d1 v1 + · · · + dn vn in V , we have
L[ v ] = c1 w1 + · · · + cn wn , L[ w ] = d1 w1 + · · · + dn wn .
Then, for any a, b ∈ R,
L[ a v + b w ] = L[ (a c1 + b d1 ) v1 + · · · + (a cn + b dn ) vn ]
= (a c1 + b d1 ) w1 + · · · + (a cn + b dn ) wn
= a ( c1 w1 + · · · + cn wn ) + b ( d1 w1 + · · · + dn wn ) = a L[ v ] + b L[ w ],
proving linearity of L.
(b) The inverse is uniquely defined by the requirement that L−1 [ wi ] = vi , i = 1, . . . , n.
Note that L ◦ L−1 [ wi ] = L[ vi ] = wi , and hence L ◦ L−1 = I W since w1 , . . . , wn is a basis.
Similarly, L−1 ◦ L[ vi ] = L−1 [ wi ] = vi , and so L−1 ◦ L = I V .
(c) If A = ( v1 v2 . . . vn ), B = ( w1 w2 . . . wn ), then L has matrix representative B A−1 ,
while L−1 has matrix representative A B −1 .
(d) (i) L = [ 3 5 ; 1 2 ],  L^{−1} = [ 2 −5 ; −1 3 ].
7.2.1. (a) [ 1/√2 −1/√2 ; 1/√2 1/√2 ]. (i) The line y = x; (ii) the rotated square 0 ≤ x + y, x − y ≤ √2; (iii) the unit disk.
(c) [ −3/5 4/5 ; 4/5 3/5 ]. (i) The line 4 x + 3 y = 0; (ii) the rotated square with vertices ( 0, 0 )^T, ( 1/√2, 1/√2 )^T, ( 0, √2 )^T, ( −1/√2, 1/√2 )^T; (iii) the unit disk.
7.2.2. (a) L² = [ −1 0 ; 0 −1 ] represents a rotation by θ = π;
(b) L is clockwise rotation by 90◦ , or, equivalently, counterclockwise rotation by 270◦ .
7.2.5. The image is the line that goes through the image points ( −1, 2 )^T, ( −4, −1 )^T.
(a) Parallelogram with vertices ( 0, 0 )^T, ( 1, 2 )^T, ( 4, 3 )^T, ( 3, 1 )^T;
(c) parallelogram with vertices ( 0, 0 )^T, ( 2, 1 )^T, ( 3, 4 )^T, ( 1, 3 )^T;
(e) the line segment from ( −1/2, 1/2 )^T to ( 1, −1 )^T. [Plots omitted.]
7.2.9. (b) True. (d) False: in general circles are mapped to ellipses.
7.2.13. (b) [ 1 1 ; −1 1 ] = [ 1 0 ; −1 1 ] [ 1 0 ; 0 2 ] [ 1 1 ; 0 1 ]:
a shear of magnitude 1 along the x-axis, followed by a scaling in the y direction by a factor of 2, followed by a shear of magnitude −1 along the y-axis.
(d) [ 1 1 0 ; 1 0 1 ; 0 1 1 ] = [ 1 0 0 ; 1 1 0 ; 0 0 1 ] [ 1 0 0 ; 0 1 0 ; 0 −1 1 ] [ 1 0 0 ; 0 1 0 ; 0 0 2 ] [ 1 0 0 ; 0 −1 0 ; 0 0 1 ] [ 1 0 0 ; 0 1 −1 ; 0 0 1 ] [ 1 1 0 ; 0 1 0 ; 0 0 1 ]:
a shear of magnitude 1 along the x-axis that fixes the xz-plane, followed by a shear of magnitude −1 along the y-axis that fixes the xy-plane, followed by a reflection in the xz-plane, followed by a scaling in the z direction by a factor of 2, followed by a shear of magnitude −1 along the z-axis that fixes the xz-plane, followed by a shear of magnitude 1 along the y-axis that fixes the yz-plane.
7.2.15. (b) [ 1/2 1/2 ; 1/2 1/2 ].
7.2.17. [ 1 0 0 ; 0 1 0 ; 0 0 1 ] is the identity transformation;
[ 0 0 1 ; 0 1 0 ; 1 0 0 ] is a reflection in the plane x = z;
[ 0 1 0 ; 0 0 1 ; 1 0 0 ] is rotation by 120° around the line x = y = z.
7.2.19. det [ −1 0 ; 0 −1 ] = +1, representing a 180° rotation, while det [ −1 0 0 ; 0 −1 0 ; 0 0 −1 ] = −1, and so is a reflection — but through the origin, not a plane, since it doesn't fix any nonzero vectors.
7.2.24. (b) [ 1 −6 ; −3/4 3 ], (d) [ −1 0 ; 0 5 ].
7.2.25. (a) [ −3 −1 −2 ; 6 1 6 ; 1 1 0 ].
7.2.26. (b) bases: ( 1, 0, 0 )^T, ( −4, 0, 1 )^T, ( 0, 4, 3 )^T, and ( 1, −2 )^T, ( 2, 1 )^T; canonical form: [ 1 0 0 ; 0 0 0 ];
(d) bases: ( 1, 0, 0 )^T, ( 0, 1, 0 )^T, ( 1, −2, 3 )^T, and ( 1, 1, 2 )^T, ( 2, −1, 1 )^T, ( −1, −1, 1 )^T; canonical form: [ 1 0 0 ; 0 1 0 ; 0 0 0 ].
♦ 7.2.28. (a) Let Q have columns u1 , . . . , un , so Q is an orthogonal matrix. Then the matrix rep-
resentative in the orthonormal basis is
B = Q−1 A Q = QT A Q, and B T = QT AT (QT )T = QT A Q = B.
(b) Not necessarily. For example, if A = [ 1 0 ; 0 2 ], S = [ 1 −1 ; 1 0 ], then S^{−1} A S = [ 2 0 ; 1 1 ] is not symmetric.
7.3.1. (b) True. (d) False: in general circles are mapped to ellipses.
7.3.3. (a) (i ) The horizontal line y = −1; (ii ) the disk (x − 2)2 + (y + 1)2 ≤ 1 of radius 1
centered at ( 2, −1 )T ; (iii ) the square { 2 ≤ x ≤ 3, −1 ≤ y ≤ 0 }.
(c) (i ) The horizontal line y = 2; (ii ) the elliptical domain x2 − 4 x y + 5 y 2 + 6 x − 16 y + 12 ≤ 0;
(iii ) the parallelogram with vertices ( 1, 2 )T , ( 2, 2 )T , ( 4, 3 )T , ( 3, 3 )T .
7.3.4. (a) T3 ◦ T4[ x ] = [ −2 1 ; −1 0 ] x + ( 2, 2 )^T,
with [ −2 1 ; −1 0 ] = [ 1 2 ; 0 1 ] [ 0 1 ; −1 0 ],  ( 2, 2 )^T = [ 1 2 ; 0 1 ] ( 1, 0 )^T + ( 1, 2 )^T;
(c) T3 ◦ T6[ x ] = [ 3/2 3/2 ; 1/2 1/2 ] x + ( 2, 2 )^T,
with [ 3/2 3/2 ; 1/2 1/2 ] = [ 1 2 ; 0 1 ] [ 1/2 1/2 ; 1/2 1/2 ],  ( 2, 2 )^T = [ 1 2 ; 0 1 ] ( 1, 0 )^T + ( 1, 2 )^T.
7.4.15. v″ − 4 v = 0, so u(x) = c1 e^{2x}/x + c2 e^{−2x}/x. The solutions with c1 + c2 = 0 are continuously differentiable at x = 0, but only the zero solution is twice continuously differentiable.
7.4.24. (a) Not in the image. (b) x = ( 0, 1, 0 )^T + z ( −7/5, −6/5, 1 )^T.
7.4.25. (b) x = −1/7 + (3/7) z, y = 4/7 + (2/7) z, not unique; (d) u = 2, v = −1, w = 0, unique.
7.4.26. (b) u(x) = (1/6) eˣ sin x + c1 e^{2x/5} cos (4/5)x + c2 e^{2x/5} sin (4/5)x.
7.4.27. (b) u(x) = 1/4 − (1/4) cos 2x, (d) u(x) = −(1/10) cos x + (1/5) sin x + (11/10) e^{−x} cos 2x + (9/10) e^{−x} sin 2x.
7.4.32. (b) u(x) = −(1/9) x − (1/10) sin x + c1 e^{3x} + c2 e^{−3x},
(d) u(x) = (1/6) x eˣ − (1/18) eˣ + (1/4) e^{−x} + c1 eˣ + c2 e^{−2x}.
7.4.35. u(x) = −√7 cos x − √3 sin x.
7.4.36. (a) u(x) = (1/9) x + cos 3x + (1/27) sin 3x, (c) u(x) = 3 cos 2x + (3/10) sin 2x − (1/5) sin 3x.
♥ 7.4.46. (a) ∂u/∂t = −k² e^{−k² t + i k x} = ∂²u/∂x²; (c) e^{−k² t} cos k x, e^{−k² t} sin k x.
7.4.48. (a) Conjugated, (d) not conjugated.
♦ 7.4.51. L[ u ] = L[ v ] + i L[ w ] = f , and, since L is real, the real and imaginary parts of this
equation yield L[ v ] = f , L[ w ] = 0.
7.5.1. (a) [ 1 −1 ; 2 3 ], (c) [ 13/7 −10/7 ; 5/7 15/7 ].
7.5.2. Domain (a), codomain (b): [ 2 −3 ; 4 9 ]; domain (b), codomain (c): [ −5/2 3/2 ; 1/3 10/3 ];
domain (c), codomain (a): [ 6/7 −17/7 ; 5/7 5/7 ].
7.5.3. (b) [ 1 −2 0 ; 1/2 0 −3/2 ; 0 2/3 2 ].
7.5.4. Domain (a), codomain (b): [ 1 −2 0 ; 1 0 −3 ; 0 2 6 ]; domain (b), codomain (c): [ 1 −1 −1 ; 1 0 −1 ; 1/3 4/3 5/3 ].
7.5.5. Domain (a), codomain (a): [ 1 0 −1 ; 3 2 1 ]; domain (a), codomain (c): [ 2 0 −2 ; 8 8 4 ].
7.5.11. In all cases, L = L∗ if and only if its matrix representative A, with respect to the standard basis, is symmetric. (a) A = [ −1 0 ; 0 −1 ] = A^T, (c) A = [ 3 0 ; 0 3 ] = A^T.
7.5.14. (a) a12 = (1/2) a21, a13 = (1/3) a31, (1/2) a23 = (1/3) a32. (b) Example: [ 0 1 1 ; 2 1 2 ; 3 3 2 ].
7.5.24. Minimizer: ( 1/5, −1/5 )^T; minimum value: −1/5.
7.5.26. Minimizer: ( 2/3, 1/3 )^T; minimum value: −2.
7.5.28. (a) Minimizer: ( 7/13, 2/13 )^T; minimum value: −7/26.
(c) Minimizer: ( 12/13, 5/26 )^T; minimum value: −43/52.
7.5.29. (a) 1/3, (b) 6/11.
Students’ Solutions Manual for
Chapter 8: Eigenvalues and Singular Values
8.1.2. γ = log 2/100 ≈ .0069. After 10 years: 93.3033 gram; after 100 years: 50 gram;
after 1000 years: .0977 gram.
8.1.5. The solution is u(t) = u(0) e1.3 t . To double, we need e1.3 t = 2, so t = log 2/1.3 = .5332.
To quadruple takes twice as long, t = 1.0664.
To reach 2 million, the colony needs t = log 106 /1.3 = 10.6273.
♦ 8.1.7. (a) If u(t) ≡ u⋆ = −b/a, then du/dt = 0 = a u⋆ + b, hence it is a solution.
(b) v = u − u⋆ satisfies dv/dt = a v, so v(t) = c e^{a t}, and u(t) = c e^{a t} − b/a.
8.2.1. (a) Eigenvalues: 3, −1; eigenvectors: ( −1, 1 )^T, ( 1, 1 )^T.
(e) Eigenvalues: 4, 3, 1; eigenvectors: ( 1, −1, 1 )^T, ( −1, 0, 1 )^T, ( 1, 2, 1 )^T.
(g) Eigenvalues: 0, 1 + i, 1 − i; eigenvectors: ( 3, 1, 0 )^T, ( 3 − 2i, 3 − i, 1 )^T, ( 3 + 2i, 3 + i, 1 )^T.
(i) −1 is a simple eigenvalue, with eigenvector ( 2, −1, 1 )^T;
2 is a double eigenvalue, with eigenvectors ( −1/3, 0, 1 )^T, ( 2/3, 1, 0 )^T.
8.2.7. (a) Eigenvalues: i, −1 + i; eigenvectors: ( 1, 0 )^T, ( −1, 1 )^T.
(c) Eigenvalues: −3, 2 i; eigenvectors: ( −1, 1 )^T, ( 3/5 + (1/5) i, 1 )^T.
8.2.9. For n = 2, the eigenvalues are 0, 2, and the eigenvectors are ( −1, 1 )^T and ( 1, 1 )^T.
For n = 3, the eigenvalues are 0, 0, 3, and the eigenvectors are ( −1, 0, 1 )^T, ( −1, 1, 0 )^T, and ( 1, 1, 1 )^T.
8.2.12. True — by the same computation as in Exercise 8.2.10(a), c v is an eigenvector for the
same (real) eigenvalue λ.
♦ 8.2.24. (a) Starting with A v = λ v, multiply both sides by A−1 and divide by λ to obtain
A−1 v = (1/λ) v. Therefore, v is an eigenvector of A−1 with eigenvalue 1/λ.
(b) If 0 is an eigenvalue, then A is not invertible.
8.2.29. (b) False. For example, [ 0 −1 ; 1 0 ] has eigenvalues i, − i, whereas [ 1 0 ; 0 −1 ] has eigenvalues 1, −1.
8.2.31. (a) (ii) Q = [ 7/25 −24/25 ; −24/25 −7/25 ]. Eigenvalues −1, 1; eigenvectors ( 3/5, 4/5 )^T, ( 4/5, −3/5 )^T.
♦ 8.2.32. (a) det(B − λ I) = det(S^{−1} A S − λ I) = det[ S^{−1} (A − λ I) S ]
= det S^{−1} det(A − λ I) det S = det(A − λ I).
(b) The eigenvalues are the roots of the common characteristic equation. (c) Not usually.
If w is an eigenvector of B, then v = S w is an eigenvector of A and conversely.
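A quick numerical illustration of parts (a)-(b); the matrices A and S below are arbitrary choices:

```python
import numpy as np

# Similar matrices B = S^{-1} A S share the same characteristic polynomial
# and hence the same eigenvalues.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
S = np.array([[1.0, 2.0], [1.0, 1.0]])
B = np.linalg.inv(S) @ A @ S
eig_A = np.sort(np.linalg.eigvals(A))
eig_B = np.sort(np.linalg.eigvals(B))
print(eig_A, eig_B)   # both are [2. 3.]
```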
8.2.36. (a) The characteristic equation of a 3 × 3 matrix is a real cubic polynomial, and hence has at least one real root. (b) [ 0 1 0 0 ; −1 0 0 0 ; 0 0 0 1 ; 0 0 −1 0 ] has eigenvalues ± i.
8.2.39. False. For example, [ 0 0 1 ; 1 0 0 ; 0 1 0 ] has eigenvalues 1, −1/2 ± (√3/2) i.
♦ 8.2.44. (a) The axis of the rotation is the eigenvector v corresponding to the eigenvalue +1.
Since Q v = v, the rotation fixes the axis, and hence must rotate around it. Choose an
orthonormal basis u1 , u2 , u3 , where u1 is a unit eigenvector in the direction of the axis of
rotation, while u2 + i u3 is a complex eigenvector for the eigenvalue e i θ . In this basis, Q
has matrix form [ 1 0 0 ; 0 cos θ −sin θ ; 0 sin θ cos θ ], where θ is the angle of rotation.
(b) The axis is the eigenvector ( 2, −5, 1 )^T for the eigenvalue 1. The complex eigenvalue is 7/13 + (2√30/13) i, and so the angle is θ = cos^{−1}(7/13) ≈ 1.00219.
♥ 8.2.47. (a) M2 = [ 0 1 ; 1 0 ]: eigenvalues 1, −1; eigenvectors ( 1, 1 )^T, ( −1, 1 )^T.
(b) By part (a), O = A−1 pA (A) = A − (tr A) I + (det A)A−1 , and the formula follows upon
solving for A−1 . (c) tr A = 4, det A = 7 and one checks A2 − 4 A + 7 I = O.
(c) Gershgorin disks: | z − 2 | ≤ 3, | z | ≤ 1; eigenvalues: 1 ± i √2. [Plot omitted.]
♦ 8.2.55. (i ) Because A and its transpose AT have the same eigenvalues, which must therefore
belong to both DA and DAT .
(c) Gershgorin disks: | z − 2 | ≤ 1, | z | ≤ 3; eigenvalues: 1 ± i √2. [Plot omitted.]
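A sketch of the Gershgorin check in code. The matrix below is an assumed example: its row disks are |z − 2| ≤ 3 and |z| ≤ 1, and its eigenvalues are 1 ± i√2, each lying in the union of the disks:

```python
import numpy as np

A = np.array([[2.0, 3.0], [-1.0, 0.0]])   # assumed illustrative matrix
centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # off-diagonal row sums
eigs = np.linalg.eigvals(A)
in_union = [any(abs(z - c) <= r + 1e-12 for c, r in zip(centers, radii))
            for z in eigs]
print(eigs, all(in_union))   # eigenvalues 1 +/- 1.414i, True
```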
8.2.56. (a) False: [ 1 2 ; 2 5 ] is a counterexample.
8.3.2. (a) Eigenvalue: 2; eigenvector: ( 2, 1 )^T; not complete.
(c) Eigenvalues: 1 ± 2 i; eigenvectors: ( 1 ± i, 2 )^T; complete.
(e) Eigenvalue 3 has eigenspace basis ( 1, 1, 0 )^T, ( 1, 0, 1 )^T; not complete.
8.3.3. (a) Eigenvalues: −2, 4; the eigenvectors ( −1, 1 )^T, ( 1, 1 )^T form a basis for R².
(b) Eigenvalues: 1 − 3 i, 1 + 3 i; the eigenvectors ( i, 1 )^T, ( − i, 1 )^T are not real, so the dimension is 0.
(e) The eigenvalue 1 has eigenvector ( 1, 0, 0 )^T; the eigenvalue −1 has eigenvector ( 0, 0, 1 )^T. The eigenvectors span a two-dimensional subspace of R³.
8.3.13. In all cases, A = S Λ S^{−1}.
(a) S = [ 3 3 ; 1 2 ], Λ = [ 0 0 ; 0 −3 ].
(c) S = [ −3/5 + (1/5) i  −3/5 − (1/5) i ; 1 1 ], Λ = [ −1 + i 0 ; 0 −1 − i ].
(e) S = [ 0 21 1 ; 1 −10 6 ; 0 7 3 ], Λ = diag( 0, 7, −1 ).
(h) S = [ −4 3 1 0 ; −3 2 0 1 ; 0 6 0 0 ; 12 0 0 0 ], Λ = diag( −2, −1, 1, 2 ).
8.3.16. (a) [ 0 −1 0 ; 1 0 0 ; 0 0 1 ] = [ i −i 0 ; 1 1 0 ; 0 0 1 ] · diag( i, −i, 1 ) · [ −i/2 1/2 0 ; i/2 1/2 0 ; 0 0 1 ].
8.3.17. (a) Yes: distinct real eigenvalues −3, 2. (c) No: complex eigenvalues 1, −1/2 ± (√5/2) i.
8.3.18. In all cases, A = S Λ S^{−1}. (a) S = [ 1 −1 ; 1 1 ], Λ = [ 1 + i 0 ; 0 −1 + i ].
(c) S = [ 1  −1/2 − (1/2) i ; 1 1 ], Λ = [ 4 0 ; 0 −1 ].
8.3.19. Use the formula A = S Λ S^{−1}. For part (e) you can choose any other eigenvalues and eigenvectors you want to fill in S and Λ.
(a) [ 5/3 4/3 ; 8/3 1/3 ]; (e) example: [ 0 0 4 ; 0 1 0 ; 0 0 −2 ].
8.3.20. (a) [ 11 −6 ; 18 −10 ].
8.3.25. Let A = S Λ S^{−1}. Then A² = I if and only if Λ² = I, and so all its eigenvalues are ±1.
Examples: A = [ 3 −2 ; 4 −3 ], with eigenvalues 1, −1 and eigenvectors ( 1, 1 )^T, ( 1, 2 )^T; or, even simpler, A = [ 0 1 ; 1 0 ], with eigenvalues 1, −1 and eigenvectors ( 1, 1 )^T, ( −1, 1 )^T.
8.4.1. (a) {0}, the x-axis, the y-axis, the z-axis, the x y-plane, the x z-plane, the y z-plane, R 3 .
8.4.3. (a) Real: {0}, R 2 . Complex: {0}, the two (complex) lines spanned by each of the
complex eigenvectors ( i , 1 )T , ( − i , 1 )T , C 2 .
(d) Real: {0}, the three lines spanned by each of the eigenvectors ( 1, 0, 0 )T , ( 1, 0, −1 )T ,
( 0, 1, −1 )T , the three planes spanned by pairs of eigenvectors, R 3 . Complex: {0}, the three
(complex) lines spanned by each of the real eigenvectors, the three (complex) planes spanned
by pairs of eigenvectors, C 3 .
8.4.8. False.
8.5.1. (b) Eigenvalues: 7, 3; eigenvectors: (1/√2)( −1, 1 )^T, (1/√2)( 1, 1 )^T.
(d) Eigenvalues: 6, 1, −4; eigenvectors: ( 4/(5√2), 3/(5√2), 1/√2 )^T, ( −3/5, 4/5, 0 )^T, ( −4/(5√2), −3/(5√2), 1/√2 )^T.
8.5.2. (a) Eigenvalues (5 ± √17)/2; positive definite. (c) Eigenvalues 0, 1, 3; positive semi-definite.
8.5.5. (a) The characteristic equation p(λ) = λ2 − (a + d)λ + (a d − b c) = 0 has real roots if and
only if its discriminant is non-negative: 0 ≤ (a + d)2 − 4 (a d − b c) = (a − d)2 + 4 b c, which is
the necessary and sufficient condition for real eigenvalues.
(b) If A is symmetric, then b = c and so the discriminant is (a − d)2 + 4 b2 ≥ 0.
λ ‖ v ‖² = (A v) · v = v^T A^T v = v^T A v = v · (A v) = λ̄ ‖ v ‖²,
8.5.13. (a) [ −3 4 ; 4 3 ] = [ 1/√5 −2/√5 ; 2/√5 1/√5 ] [ 5 0 ; 0 −5 ] [ 1/√5 2/√5 ; −2/√5 1/√5 ];
(c) [ 1 1 0 ; 1 2 1 ; 0 1 1 ] = Q diag( 3, 1, 0 ) Q^T, where Q = [ 1/√6 −1/√2 1/√3 ; 2/√6 0 −1/√3 ; 1/√6 1/√2 1/√3 ].
8.5.14. (b) [ 5 −2 ; −2 5 ] = [ −1/√2 1/√2 ; 1/√2 1/√2 ] [ 7 0 ; 0 3 ] [ −1/√2 1/√2 ; 1/√2 1/√2 ]^T;
(d) [ 1 0 4 ; 0 1 3 ; 4 3 1 ] = Q diag( 6, 1, −4 ) Q^T, where
Q = [ 4/(5√2) −3/5 −4/(5√2) ; 3/(5√2) 4/5 −3/(5√2) ; 1/√2 0 1/√2 ].
8.5.15. (a) [ 57/25 −24/25 ; −24/25 43/25 ]. (c) None, since the eigenvectors are not orthogonal.
8.5.17. (b) 7 ( (1/√5) x + (2/√5) y )² + 2 ( −(2/√5) x + (1/√5) y )² = (7/5)(x + 2y)² + (2/5)(−2x + y)²;
(d) (1/2) ( (1/√3) x + (1/√3) y + (1/√3) z )² + ( −(1/√2) y + (1/√2) z )² + 2 ( −(2/√6) x + (1/√6) y + (1/√6) z )²
= (1/6)(x + y + z)² + (1/2)(−y + z)² + (1/3)(−2x + y + z)².
8.5.21. Principal stretches = eigenvalues: 4 + √3, 4 − √3, 1;
principal directions = eigenvectors: ( 1, −1 + √3, 1 )^T, ( 1, −1 − √3, 1 )^T, ( −1, 0, 1 )^T.
(b) (ii) Ellipse with semi-axes √2 and √(2/3), principal axes ( −1, 1 )^T, ( 1, 1 )^T. [Plot omitted.]
8.5.26. Only the identity matrix is both orthogonal and positive definite. Indeed, if K = K T > 0
is orthogonal, then K 2 = I , and so its eigenvalues are all ± 1. Positive definiteness im-
plies that all the eigenvalues must be +1, and hence its diagonal form is Λ = I . But then
K = Q I QT = I also.
♦ 8.5.27. (a) Set B = Q √Λ Q^T, where √Λ is the diagonal matrix with the square roots of the eigenvalues of A along the diagonal. Uniqueness follows from the fact that the eigenvectors and eigenvalues are uniquely determined. (Permuting them does not change the final form of B.)
(b) (i) (1/2) [ √3 + 1  √3 − 1 ; √3 − 1  √3 + 1 ]; (iii) [ √2 0 0 ; 0 √5 0 ; 0 0 √3 ].
8.5.29. (b) [ 2 −3 ; 1 6 ] = [ 2/√5 −1/√5 ; 1/√5 2/√5 ] [ √5 0 ; 0 3√5 ];
(d) [ 0 −3 8 ; 1 0 0 ; 0 4 6 ] = [ 0 −3/5 4/5 ; 1 0 0 ; 0 4/5 3/5 ] [ 1 0 0 ; 0 5 0 ; 0 0 10 ].
♥ 8.5.32. (i) This follows immediately from the spectral factorization. The rows of Λ Q^T are λ1 u1^T, . . . , λn un^T, and formula (8.37) follows from the alternative version of matrix multiplication given in Exercise 1.2.34.
(ii) (a) [ −3 4 ; 4 3 ] = 5 [ 1/5 2/5 ; 2/5 4/5 ] − 5 [ 4/5 −2/5 ; −2/5 1/5 ].
♦ 8.5.42. According to the discussion preceding the statement of Theorem 8.42,
λj = max { y^T Λ y : ‖ y ‖ = 1, y · e1 = · · · = y · e_{j−1} = 0 }.
Moreover, using (8.36), setting x = Q y and using the fact that Q is an orthogonal matrix, so (Q v) · (Q w) = v · w for any v, w ∈ R^n, we have
x^T A x = y^T Λ y,  ‖ x ‖ = ‖ y ‖,  y · ei = x · vi,
8.5.46. (a) Maximum: 3/4; minimum: 2/5. (c) Maximum: 2; minimum: 1/2.
8.6.1. (a) U = [ 1/√2 1/√2 ; −1/√2 1/√2 ], ∆ = [ 2 −2 ; 0 2 ];
(c) U = [ 3/√13 2/√13 ; −2/√13 3/√13 ], ∆ = [ 2 15 ; 0 −1 ];
♦ 8.6.4. If A is symmetric, its eigenvalues are real, and hence its Schur Decomposition is A = Q ∆ Q^T, where Q is an orthogonal matrix. But A^T = (Q ∆ Q^T)^T = Q ∆^T Q^T = A, and hence ∆^T = ∆ is a symmetric upper triangular matrix, which implies that ∆ = Λ is a diagonal matrix with the eigenvalues of A along its diagonal.
(e) Eigenvalues: −2, 0; Jordan basis: v1 = ( −1, 0, 1 )^T, v2 = ( 0, −1, 0 )^T, v3 = ( 0, −1, 1 )^T;
Jordan canonical form: [ −2 1 0 ; 0 −2 0 ; 0 0 0 ].
8.6.8. The eight matrices with 2's on the diagonal and each superdiagonal entry either 0 or 1:
[ 2 0 0 0 ; 0 2 0 0 ; 0 0 2 0 ; 0 0 0 2 ],  [ 2 1 0 0 ; 0 2 0 0 ; 0 0 2 0 ; 0 0 0 2 ],
[ 2 0 0 0 ; 0 2 1 0 ; 0 0 2 0 ; 0 0 0 2 ],  [ 2 0 0 0 ; 0 2 0 0 ; 0 0 2 1 ; 0 0 0 2 ],
[ 2 1 0 0 ; 0 2 0 0 ; 0 0 2 1 ; 0 0 0 2 ],  [ 2 1 0 0 ; 0 2 1 0 ; 0 0 2 0 ; 0 0 0 2 ],
[ 2 0 0 0 ; 0 2 1 0 ; 0 0 2 1 ; 0 0 0 2 ],  [ 2 1 0 0 ; 0 2 1 0 ; 0 0 2 1 ; 0 0 0 2 ].
8.6.11. True. All Jordan chains have length one, and so consist only of eigenvectors.
♦ 8.6.24. First, since Jλ,n is upper triangular, its eigenvalues are its diagonal entries, and hence λ
is the only eigenvalue. Moreover, v = ( v1 , v2 , . . . , vn )T is an eigenvector if and only if
(Jλ,n − λ I )v = ( v2 , . . . , vn , 0 )T = 0. This requires v2 = · · · = vn = 0, and hence v must be
a scalar multiple of e1 .
8.6.26. (a) {0}, the y-axis, R²; (c) {0}, the line spanned by the eigenvector ( 1, −2, 3 )^T, the plane spanned by ( 1, −2, 3 )^T, ( 0, 1, 0 )^T, and R³; (e) {0}, the x-axis, the w-axis, the xz-plane, the yw-plane, R⁴.
8.7.1. (a) √(3 ± √5); (c) 5√2; (e) √7, √2.
8.7.2. (a) [ 1 1 ; 0 2 ] = U Σ V^T, where
U = [ (√5 − 1)/√(10 − 2√5)  −(√5 + 1)/√(10 + 2√5) ; 2/√(10 − 2√5)  2/√(10 + 2√5) ],
Σ = [ √(3 + √5) 0 ; 0 √(3 − √5) ],
V^T = [ (√5 − 2)/√(10 − 4√5)  1/√(10 − 4√5) ; −(√5 + 2)/√(10 + 4√5)  1/√(10 + 4√5) ];
(c) [ 1 −2 ; −3 6 ] = [ −1/√10 ; 3/√10 ] ( 5√2 ) ( −1/√5  2/√5 );
(e) [ 2 1 0 −1 ; 0 −1 1 1 ] = [ −2/√5 1/√5 ; 1/√5 2/√5 ] [ √7 0 ; 0 √2 ] [ −4/√35 −3/√35 1/√35 3/√35 ; 2/√10 −1/√10 2/√10 1/√10 ].
8.7.4. (a) The eigenvalues of K = A^T A are 15/2 ± √221/2 = 14.933, .0667. The square roots of
these eigenvalues give us the singular values of A, i.e., 3.8643, .2588. The condition number
is 3.8643 / .2588 = 14.933.
(c) The singular values are 3.1624, .0007273, and so the condition number is
3.1624 / .0007273 = 4348.17; the matrix is slightly ill-conditioned.
♠ 8.7.6. In all cases, the large condition number results in an inaccurate solution.
(a) The exact solution is x = 1, y = −1; with three digit rounding, the computed solution is
x = 1.56, y = −1.56. The singular values of the coefficient matrix are 1615.22, .274885, and
the condition number is 5876.
8.7.8. Let A = v ∈ R^n be the matrix (column vector) in question. (a) It has one singular
value: ‖v‖; (b) P = v/‖v‖, Σ = ‖v‖, a 1 × 1 matrix, Q = (1); (c) v^+ = v^T/‖v‖².
8.7.10. Almost true, with but one exception — the zero matrix.
8.7.21. False. For example, the 2 × 2 diagonal matrix with diagonal entries 2 · 10^k and 10^{−k} for
k ≫ 0 has determinant 2 but condition number 2 · 10^{2k}.
8.7.34. (b) A = [ 1   1  1 ]       A^+ = [ 1/7    2/7  ]
                [ 2  −1  1 ],            [ 4/7   −5/14 ]
                                         [ 2/7    1/14 ],
x⋆ = A^+ ( 5, 2 )^T = ( 9/7, 15/7, 11/7 )^T.
8.7.38. We list the eigenvalues of the graph Laplacian; the singular values of the incidence
matrix are obtained by taking square roots.
(i) 4, 3, 1, 0;  (iii) (7+√5)/2, (5+√5)/2, (7−√5)/2, (5−√5)/2, 0.
8.8.1. Assuming ν = 1: (b) Mean = 1.275; variance = 3.995; standard deviation = 1.99875.
(d) Mean = .4; variance = 2.36; standard deviation = 1.53623.
8.8.2. Assuming ν = 1: (b) Mean = .36667; variance = 2.24327; standard deviation = 1.49775.
(d) Mean = 1.19365; variance = 10.2307; standard deviation = 3.19855.
8.8.6. Observe that the row vector containing the column sums of A is obtained by left multi-
plication by the row vector e = ( 1, . . . , 1 ) containing all 1s. Thus the column sums of A
are all zero if and only if e A = 0. But then clearly, e A B = 0.
♣ 8.8.8. (a) The singular values/principal variances are 31.8966, .93037, .02938, .01335, with
principal directions
( .08677, −.34555, .77916, −.27110, −.43873 )^T,  ( −.80181, −.44715, .08688, −.05630, .38267 )^T,
( .46356, −.05729, .31273, −.18453, .80621 )^T,  ( −.06779, .13724, −.29037, −.94302, −.05448 )^T.
(b) The fact that there are only 4 nonzero singular values tells us that the data lies in a
four-dimensional subspace. Moreover, the relative smallness of the two smallest singular
values indicates that the data can be viewed as a noisy representation of points in a two-
dimensional subspace.
Students’ Solutions Manual for
Chapter 9: Iteration
9.1.2. (a) u(k+1) = 1.0325 u(k), u(0) = 100, where u(k) represents the balance after k years.
(b) u(10) = 1.0325^10 × 100 = 137.69 dollars.
(c) u(k+1) = (1 + .0325/12) u(k) = 1.002708 u(k), u(0) = 100, where u(k) represents the
balance after k months. u(120) = (1 + .0325/12)^120 × 100 = 138.34 dollars.
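Both compounding schemes are instances of the scalar iteration u(k+1) = λ u(k); a minimal sketch, using the rates stated above:

```python
def compound(factor, periods, u0=100.0):
    # iterate u(k+1) = factor * u(k), starting from u(0) = u0
    u = u0
    for _ in range(periods):
        u *= factor
    return u

yearly = compound(1.0325, 10)           # annual compounding, 10 years
monthly = compound(1 + .0325/12, 120)   # monthly compounding, 120 months
```

As expected, the more frequent compounding yields the slightly larger balance.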
9.1.6. | u(k) | = | λ |^k | a | > | v(k) | = | µ |^k | b | provided
k > ( log | b | − log | a | ) / ( log | λ | − log | µ | ),
where the inequality relies on the fact that log | λ | > log | µ |.
9.1.10. Let u(k) represent the balance after k years. Then u(k+1) = 1.05 u(k) + 120, with
u(0) = 0. The equilibrium solution is u⋆ = −120/.05 = −2400, and so after k years the
balance is u(k) = (1.05^k − 1) · 2400. Then
u(10) = $1,509.35,  u(50) = $25,121.76,  u(200) = $4,149,979.40.
9.1.13. (a) u(k) = ( 3^k + (−1)^k ) / 2,  v(k) = ( −3^k + (−1)^k ) / 2;
(c) u(k) = ( (√5+2)(3−√5)^k + (√5−2)(3+√5)^k ) / (2√5),  v(k) = ( (3−√5)^k − (3+√5)^k ) / (2√5).
9.1.14. (a) u(k) = c1 (−1−√2)^k ( −√2, 1 )^T + c2 (−1+√2)^k ( √2, 1 )^T;
(b) u(k) = c1 ( 1/2 + (√3/2) i )^k ( (5−i√3)/2, 1 )^T + c2 ( 1/2 − (√3/2) i )^k ( (5+i√3)/2, 1 )^T
= a1 ( (5/2) cos(kπ/3) + (√3/2) sin(kπ/3), cos(kπ/3) )^T + a2 ( (5/2) sin(kπ/3) − (√3/2) cos(kπ/3), sin(kπ/3) )^T.
9.1.16. (a) It suffices to note that the Lucas numbers are the general Fibonacci numbers (9.16)
when a = L(0) = 2, b = L(1) = 1. (b) 2, 1, 3, 4, 7, 11, 18.
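The Lucas values in part (b) can be generated directly from the general Fibonacci recurrence:

```python
def general_fibonacci(a, b, n):
    # u(k+1) = u(k) + u(k-1), with u(0) = a, u(1) = b; returns u(0), ..., u(n)
    seq = [a, b]
    for _ in range(n - 1):
        seq.append(seq[-1] + seq[-2])
    return seq

lucas = general_fibonacci(2, 1, 6)   # the Lucas numbers L(0), ..., L(6)
```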
9.1.18. (b) [  4  1 ]^k   [ −1  −1 ] [ 3^k   0  ] [ −2  −1 ]
            [ −2  1 ]   = [  1   2 ] [  0   2^k ] [  1   1 ],

(d) [ 1 1 2 ]^k   [ 1   1  −1 ] [ 4^k   0     0    ] [  1/3    1/3   1/3 ]
    [ 1 2 1 ]   = [ 1  −2   0 ] [  0    1     0    ] [  1/6   −1/3   1/6 ]
    [ 2 1 1 ]     [ 1   1   1 ] [  0    0  (−1)^k  ] [ −1/2     0    1/2 ].
9.1.19. (b) ( u(k), v(k) )^T = [ −1  −1 ] (    3^k    )
                               [  1   2 ] ( −2^{k+1}  ),

(d) ( u(k), v(k), w(k) )^T = [ 1   1  −1 ] ( (2/3) 4^k )
                             [ 1  −2   0 ] (    1/3    )
                             [ 1   1   1 ] (     0     ).
9.1.22. (a) u(k) = 4/3 − (1/3)(−2)^k,  (c) u(k) = ( (5 − 3√5)(2+√5)^k + (5 + 3√5)(2−√5)^k ) / 10.
♠ 9.1.29. (a)
E1: principal axes: ( −1, 1 )^T, ( 1, 1 )^T; semi-axes: 1, 1/3; area: π/3.
E2: principal axes: ( −1, 1 )^T, ( 1, 1 )^T; semi-axes: 1, 1/9; area: π/9.
E3: principal axes: ( −1, 1 )^T, ( 1, 1 )^T; semi-axes: 1, 1/27; area: π/27.
E4: principal axes: ( −1, 1 )^T, ( 1, 1 )^T; semi-axes: 1, 1/81; area: π/81.
9.1.35. According to Theorem 8.32, the eigenvectors of T are real and form an orthogonal basis
of R n with respect to the Euclidean norm. The formula for the coefficients cj thus follows
directly from (4.8).
9.1.37. Separating the equation into its real and imaginary parts, we find
( x(k+1), y(k+1) )^T = [ µ  −ν ; ν  µ ] ( x(k), y(k) )^T.
The eigenvalues of the coefficient matrix are µ ± i ν, with eigenvectors ( 1, ∓i )^T, and so the
solution is
( x(k), y(k) )^T = ( (x(0) + i y(0))/2 ) (µ + i ν)^k ( 1, −i )^T + ( (x(0) − i y(0))/2 ) (µ − i ν)^k ( 1, i )^T.
9.1.41. (a) u(k) = 2^k c1 + (1/2) k 2^k c2,  v(k) = (1/3) 2^k c2;
(c) u(k) = (−1)^k ( c1 − k c2 + (1/2) k(k−1) c3 ),  v(k) = (−1)^k ( c2 − (k+1) c3 ),  w(k) = (−1)^k c3.
♥ 9.1.43. (a) The system has an equilibrium solution if and only if (T − I )u⋆ = b. In particular,
if 1 is not an eigenvalue of T , every b leads to an equilibrium solution.
(b) Since v(k+1) = T v(k) , the general solution is
9.2.1. (a) Eigenvalues: (5+√33)/2 ≈ 5.3723, (5−√33)/2 ≈ −.3723; spectral radius: (5+√33)/2 ≈ 5.3723.
(d) Eigenvalues: 4, −1 ± 4 i; spectral radius: √17 ≈ 4.1231.
9.2.2. (a) Eigenvalues: 2 ± 3 i; spectral radius: √13 ≈ 3.6056; not convergent.
(c) Eigenvalues: 4/5, 3/5, 0; spectral radius: 4/5; convergent.
9.2.3. (b) Unstable: eigenvalues (5+√73)/12 ≈ 1.12867, (5−√73)/12 ≈ −.29533;
(d) stable: eigenvalues −1, ± i; (e) unstable: eigenvalues 5/4, 1/4, 1/4.
9.2.6. A solution u(k) → 0 if and only if the initial vector u(0) = c1 v1 + · · · + cj vj is a linear
combination of the eigenvectors (or more generally, Jordan chain vectors) corresponding to
eigenvalues satisfying | λi | < 1 for i = 1, . . . , j.
9.2.10. Since ρ(c A) = | c | ρ(A), then c A is convergent if and only if | c | < 1/ρ(A). So, techni-
cally, there isn’t a largest c.
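The scaling rule ρ(c A) = | c | ρ(A) is easy to check directly on a diagonal matrix, where the spectral radius is just the largest absolute diagonal entry (the entries below are hypothetical):

```python
def rho_diag(entries):
    # spectral radius of a diagonal matrix: max |diagonal entry|
    return max(abs(d) for d in entries)

A = [2.0, -0.5, 1.5]      # hypothetical diagonal entries; rho(A) = 2
c = 0.499                 # any |c| < 1/rho(A) = 0.5 makes c*A convergent
scaled_rho = rho_diag([c * d for d in A])
```

Taking c any closer to 1/ρ(A) pushes the scaled spectral radius closer to 1, which is why no largest convergent c exists.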
9.2.23. (a) All scalar multiples of ( 1, 1 )^T; (c) all scalar multiples of ( −1, −2, 1 )^T.
9.2.24. (a) The eigenvalues are 1, 1/2, so the fixed points are stable, while all other solutions go
to a unique fixed point at rate (1/2)^k. When u(0) = ( 1, 0 )^T, then u(k) → ( 3/5, 3/5 )^T.
(c) The eigenvalues are −2, 1, 0, so the fixed points are unstable. Most solutions, specifically
those with a nonzero component in the dominant eigenvector direction, become unbounded.
However, when u(0) = ( 1, 0, 0 )T , then u(k) = ( −1, −2, 1 )T for k ≥ 1, and the solution
stays at a fixed point.
9.2.26. False: T has an eigenvalue of 1, but convergence requires that all eigenvalues be less
than 1 in modulus.
9.2.30. (a) 3/4, convergent; (c) 8/7, inconclusive; (e) 8/7, inconclusive; (f) .9, convergent.
9.2.31. (a) .671855, convergent; (c) .9755, convergent; (e) 1.1066, inconclusive.
9.2.32. (a) 2/3, convergent; (c) .9755, convergent; (e) .9437, convergent.
♦ 9.2.37. This follows directly from the fact, proved in Proposition 8.62, that the singular values
of a symmetric matrix are just the absolute values of its nonzero eigenvalues.
9.2.41. For instance, any diagonal matrix whose diagonal entries satisfy 0 < | aii | < 1.
9.2.42. (a) False: For instance, if A = [ 0  0 ; 0  1 ], S = [ 1  2 ; 0  1 ], then B = S^{−1} A S = [ 0  −2 ; 0  1 ],
and ‖B‖∞ = 2 ≠ 1 = ‖A‖∞. (c) True, since A and B have the same eigenvalues.
9.3.1. (b) Not a transition matrix; (d) regular transition matrix: ( 1/6, 5/6 )^T;
(e) not a regular transition matrix; (f) regular transition matrix: ( 1/3, 1/3, 1/3 )^T.
9.3.4. 2004: 37,000 city, 23,000 country; 2005: 38,600 city, 21,400 country; 2006: 39,880
city, 20,120 country; 2007: 40,904 city, 19,096 country; 2008: 41,723 city, 18,277 country;
Eventual: 45,000 in the city and 15,000 in the country.
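The listed populations follow a two-state Markov chain; the sketch below assumes migration rates of 5% city → country and 15% country → city, which are consistent with the figures quoted above:

```python
def step(u):
    # transition matrix T = [[.95, .15], [.05, .85]] (assumed rates)
    return [0.95 * u[0] + 0.15 * u[1],
            0.05 * u[0] + 0.85 * u[1]]

u = [37000.0, 23000.0]    # 2004 populations: city, country
after_one_year = step(u)  # should reproduce the 2005 figures
for _ in range(300):      # iterate toward the probability eigenvector
    u = step(u)
```

The limiting split 45,000 / 15,000 is the eigenvector of T for eigenvalue 1, scaled to the total population of 60,000.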
9.3.8. When in Atlanta he always goes to Boston; when in Boston he has a 50% probability of
going to either Atlanta or Chicago; when in Chicago he has a 50% probability of going to
either Atlanta or Boston. The transition matrix is regular because
T^4 = [ .375   .3125   .3125 ]
      [ .25    .5625   .5    ]
      [ .375   .125    .1875 ]
has all positive entries.
On average he visits Atlanta: 33.33%, Boston 44.44%, and Chicago: 22.22% of the time.
9.3.10. Numbering the vertices from top to bottom and left to right, the transition matrix is

T = [  0    1/4   1/4    0     0     0  ]
    [ 1/2    0    1/4   1/2   1/4    0  ]
    [ 1/2   1/4    0     0    1/4   1/2 ]
    [  0    1/4    0     0    1/4    0  ]
    [  0    1/4   1/4   1/2    0    1/2 ]
    [  0     0    1/4    0    1/4    0  ].

The probability eigenvector is ( 1/9, 2/9, 2/9, 1/9, 2/9, 1/9 )^T, and so the bug spends,
on average, twice as much time at the edge vertices as at the corner vertices.
9.3.14. The limit is [ 1/3  1/3  1/3 ]
                     [ 1/3  1/3  1/3 ]
                     [ 1/3  1/3  1/3 ].
9.3.19. All equal probabilities: z = ( 1/n, . . . , 1/n )^T.
9.3.22. True. If v = ( v1, v2, . . . , vn )^T is a probability eigenvector, then Σ_{i=1}^n vi = 1 and
Σ_{j=1}^n tij vj = λ vi for all i = 1, . . . , n. Summing the latter equations over i, we find
λ = λ Σ_{i=1}^n vi = Σ_{i=1}^n Σ_{j=1}^n tij vj = Σ_{j=1}^n vj = 1,
since the column sums of a transition matrix are all equal to 1.
♦ 9.3.24. The ith entry of v is vi = Σ_{j=1}^n tij uj. Since each tij ≥ 0 and uj ≥ 0, the sum vi ≥ 0
also. Moreover, Σ_{i=1}^n vi = Σ_{i,j=1}^n tij uj = Σ_{j=1}^n uj = 1, because all the column sums of T are
equal to 1, and u is a probability vector.
9.4.3. (a) Strictly diagonally dominant; (c) not strictly diagonally dominant;
(e) strictly diagonally dominant.
♠ 9.4.4. (a) x = 1/7 = .142857, y = −2/7 = −.285714;
(e) x = −1.9172, y = −.339703, z = −2.24204.
♠ 9.4.5. (c) Jacobi spectral radius = .547723, so Jacobi converges to the solution
x = 8/7 = 1.142857, y = 19/7 = 2.71429.
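The mechanics of a Jacobi sweep are easy to see on a small example; this sketch uses a hypothetical strictly diagonally dominant system (not the exercise's data), updating each variable from the previous iterate only:

```python
def jacobi(steps):
    # hypothetical system: 4x + y = 9, x + 3y = 10; exact solution
    # x = 17/11, y = 31/11. Both updates use the old (x, y).
    x = y = 0.0
    for _ in range(steps):
        x, y = (9.0 - y) / 4.0, (10.0 - x) / 3.0
    return x, y
```

Since the Jacobi iteration matrix here has spectral radius 1/√12 ≈ .289, the error shrinks by more than a factor of 3 per sweep.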
9.4.6. (a) u = ( .7857, .3571 )^T, (c) u = ( .3333, −1.0000, 1.3333 )^T.
9.4.11. False for elementary row operations of types 1 & 2, but true for those of type 3.
♠ 9.4.13. (a) x = 1/7 = .142857, y = −2/7 = −.285714;
(e) x = −1.9172, y = −.339703, z = −2.24204.
♠ 9.4.17. The solution is x = .083799, y = .21648, z = 1.21508. The Jacobi spectral radius
is .8166, and so it converges reasonably rapidly to the solution; indeed, after 50 iterations,
x(50) = .0838107, y (50) = .216476, z (50) = 1.21514. On the other hand, the Gauss–
Seidel spectral radius is 1.0994, and it slowly diverges; after 50 iterations, x(50) = −30.5295,
y (50) = 9.07764, z (50) = −90.8959.
♣ 9.4.22. (a) Diagonal dominance requires | z | > 4; (b) The solution is u = (.0115385,
−.0294314, −.0755853, .0536789, .31505, .0541806, −.0767559, −.032107, .0140468, .0115385)T .
It takes 41 Jacobi iterations and 6 Gauss–Seidel iterations to compute the first three deci-
mal places of the solution. (c) Computing the spectral radius, we conclude that the Jacobi
Method converges to the solution whenever | z | > 3.6387, while the Gauss–Seidel Method
converges for z < − 3.6386 or z > 2.
♥ 9.4.24. (a) u = ( 1.4, .2 )^T.
(b) The spectral radius is ρJ = .40825 and so it takes about −1/ log10 ρJ ≃ 2.57 iterations
to produce each additional decimal place of accuracy.
(c) The spectral radius is ρGS = .16667 and so it takes about −1/ log 10 ρGS ≃ 1.29 itera-
tions to produce each additional decimal place of accuracy.
(d) u(n+1) = [ 1 − ω               −(1/2) ω          ] u(n) + (        (3/2) ω        )
             [ −(1/3)(1 − ω) ω    (1/6) ω² − ω + 1   ]        ( (2/3) ω − (1/2) ω²    ).
(e) The SOR spectral radius is minimized when the two eigenvalues of Tω coincide, which
occurs when ω⋆ = 1.04555, at which value ρ⋆ = ω⋆ − 1 = .04555, so the optimal SOR
Method is almost 3.5 times as fast as Jacobi, and about 1.7 times as fast as Gauss–Seidel.
(f ) For Jacobi, about −5/ log10 ρJ ≃ 13 iterations; for Gauss–Seidel, about −5/ log10 ρGS =
7 iterations; for optimal SOR, about −5/ log10 ρSOR ≃ 4 iterations.
(g) To obtain 5 decimal place accuracy, Jacobi requires 12 iterations, Gauss–Seidel requires
6 iterations, while optimal SOR requires 5 iterations.
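A compact SOR sweep reproduces these counts; the system 2x + y = 3, x + 3y = 2 used below is inferred from the quoted solution (1.4, .2) and spectral radii, so treat it as an assumption:

```python
def sor(omega, steps):
    # SOR sweeps for 2x + y = 3, x + 3y = 2 (data inferred from the
    # quoted rho_J = .40825 and solution (1.4, .2); an assumption).
    # Each update uses the newest available values, relaxed by omega.
    x = y = 0.0
    for _ in range(steps):
        x = (1 - omega) * x + omega * (3.0 - y) / 2.0
        y = (1 - omega) * y + omega * (2.0 - x) / 3.0
    return x, y

gauss_seidel = sor(1.0, 60)    # omega = 1 reduces SOR to Gauss-Seidel
optimal = sor(1.04555, 40)     # near-optimal omega from part (e)
```

With ω⋆ ≈ 1.04555 the error contracts by a factor of about .046 per sweep, versus 1/6 for Gauss–Seidel.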
♣ 9.4.27. (a) x = .5, y = .75, z = .25, w = .5. (b) To obtain 5 decimal place accuracy, Jacobi
requires 14 iterations, Gauss–Seidel requires 8 iterations. One can get very good approxi-
mations of the spectral radii ρJ = .5, ρGS = .25, by taking ratios of entries of successive
iterates, or the ratio of norms of successive error vectors. (c) The optimal SOR Method has
ω = 1.0718, and requires 6 iterations to get 5 decimal place accuracy. The SOR spectral
radius is ρSOR = .0718.
♣ 9.4.31. The Jacobi spectral radius is ρJ = .909657. Using equation (9.76) to fix the SOR
parameter ω = 1.41307 actually slows down the convergence, since ρSOR = .509584 while
ρGS = .32373. Computing the spectral radius directly, the optimal SOR parameter is
ω⋆ = 1.17157 with ρ⋆ = .290435. Thus, optimal SOR is about 13 times as fast as Jacobi,
but only marginally faster than Gauss–Seidel.
♥ 9.4.35. (a) u(k+1) = u(k) + D^{−1} r(k) = u(k) − D^{−1} A u(k) + D^{−1} b
= u(k) − D^{−1} (L + D + U) u(k) + D^{−1} b = −D^{−1} (L + U) u(k) + D^{−1} b,
which agrees with (9.55).
♠ 9.5.1. In all cases, we use the normalized version (9.82) starting with u(0) = e1 ; the answers
are correct to 4 decimal places. (a) After 17 iterations, λ = 2.00002, u = ( −.55470, .83205 )T .
(c) After 38 iterations, λ = 3.99996, u = ( .57737, −.57735, .57734 )T .
(e) After 36 iterations, λ = 5.54911, u = ( −.39488, .71005, .58300 )T .
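The normalized iteration is only a few lines of code; here it is run on the matrix [ −1 −2 ; 3 4 ], a hypothetical example whose dominant eigenpair, λ = 2 with eigenvector proportional to ( −2, 3 )^T, matches the numbers quoted in (a):

```python
from math import sqrt

def power_method(A, steps):
    # normalized power iteration on a 2x2 matrix, starting from e1
    u = [1.0, 0.0]
    lam = 0.0
    for _ in range(steps):
        w = [A[0][0] * u[0] + A[0][1] * u[1],
             A[1][0] * u[0] + A[1][1] * u[1]]
        lam = sqrt(w[0]**2 + w[1]**2)   # converges to |lambda_1| here
        u = [w[0] / lam, w[1] / lam]
    return lam, u

# hypothetical matrix with eigenvalues 2 and 1, dominant eigenvector (-2, 3)
lam, u = power_method([[-1.0, -2.0], [3.0, 4.0]], 60)
```

The convergence rate is governed by the eigenvalue ratio |λ2/λ1| = 1/2, so roughly one bit of accuracy is gained per iteration.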
♠ 9.5.2. In each case, to find the dominant singular value of a matrix A, we apply the
Power Method to K = A^T A and take the square root of its dominant eigenvalue to
find the dominant singular value σ1 = √λ1 of A.
(a) K = [  2  −1 ]
        [ −1  13 ];  after 11 iterations, λ1 = 13.0902 and σ1 = 3.6180;
(c) K = [  5   2   2  −1 ]
        [  2   8   2  −4 ]
        [  2   2   1  −1 ]
        [ −1  −4  −1   2 ];  after 16 iterations, λ1 = 11.6055 and σ1 = 3.4067.
♦ 9.5.5. (a) If A v = λ v then A^{−1} v = (1/λ) v, and so v is also an eigenvector of A^{−1}.
(b) If λ1, . . . , λn are the eigenvalues of A, with | λ1 | > | λ2 | > · · · > | λn | > 0 (recalling that
0 cannot be an eigenvalue if A is nonsingular), then 1/λ1, . . . , 1/λn are the eigenvalues of A^{−1},
and 1/| λn | > 1/| λn−1 | > · · · > 1/| λ1 |, and so 1/λn is the dominant eigenvalue of A^{−1}. Thus,
applying the Power Method to A^{−1} will produce the reciprocal of the smallest (meaning the
one closest to 0) eigenvalue of A and its corresponding eigenvector.
(c) The rate of convergence of the algorithm is the ratio | λn/λn−1 | of the moduli of the
smallest two eigenvalues.
9.5.11. (a) Eigenvalues: 6.7016, .2984; eigenvectors: ( .3310, .9436 )^T, ( .9436, −.3310 )^T.
(c) Eigenvalues: 4.7577, 1.9009, −1.6586; eigenvectors:
( .2726, .7519, .6003 )^T, ( .9454, −.0937, −.3120 )^T, ( −.1784, .6526, −.7364 )^T.

9.5.13. (a) Eigenvalues: 2, 1; eigenvectors: ( −2, 3 )^T, ( −1, 1 )^T.
(c) Eigenvalues: 3.5842, −2.2899, 1.7057; eigenvectors:
( −.4466, −.7076, .5476 )^T, ( .1953, −.8380, −.5094 )^T, ( .7491, −.2204, .6247 )^T.
9.5.15. (a) It has eigenvalues ± 1, which have the same magnitude. The Q R factorization is
trivial, with Q = A and R = I . Thus, R Q = A, and so nothing happens.
√
(b) It has a pair of complex conjugate eigenvalues of modulus 7 and a real eigenvalue −1.
However, running the Q R iteration produces a block upper triangular matrix with the real
eigenvalue at position (3, 3) and a 2 × 2 upper left block that has the complex conjugate
eigenvalues of A as eigenvalues.
9.5.18. (a)
H = [ 1     0        0    ]      T = H A H = [ 8.0000    7.2801      0     ]
    [ 0   −.9615   .2747  ],                 [ 7.2801   20.0189    3.5660  ]
    [ 0    .2747   .9615  ]                  [   0       3.5660    4.9811  ].
9.5.21. (b)
H1 = [ 1      0     0     0    ]      A1 = [  3.0000  −2.2361  −1.0000    0     ]
     [ 0   −.8944   0   −.4472 ],          [ −2.2361   3.8000   2.2361   .4000  ]
     [ 0      0     1     0    ]           [    0      1.7889   2.0000  −5.8138 ]
     [ 0   −.4472   0    .8944 ]           [    0      1.4000  −4.4721   1.2000 ],

H2 = [ 1   0     0       0    ]      A2 = [  3.0000  −2.2361    .7875    .6163  ]
     [ 0   1     0       0    ],          [ −2.2361   3.8000  −2.0074  −1.0631  ]
     [ 0   0   −.7875  −.6163 ]           [    0     −2.2716  −3.2961   2.2950  ]
     [ 0   0   −.6163   .7875 ]           [    0        0       .9534   6.4961  ].
9.6.1. (b) V(1), V(2), V(3) are all spanned by ( 1/√5, −2/√5, 0 )^T;
(d) V(1): ( 1, 0, 0, 0 )^T; V(2): ( 1, 0, 0, 0 )^T, ( 0, 1, 0, 0 )^T;
V(3): ( 1, 0, 0, 0 )^T, ( 0, 1, 0, 0 )^T, ( 0, 0, 1, 0 )^T.
9.6.7. In each case, the last uk is the actual solution, with residual rk = f − K uk = 0.
(a) r0 = ( 2, 1 )^T, u1 = ( .76923, .38462 )^T, r1 = ( .07692, −.15385 )^T, u2 = ( .78571, .35714 )^T;
(c) r0 =
−2 , u1 = −.26933 ,
r1 =
−4.01995 , u2 = −.26933 ,
.21271 1.33333
♣ 9.6.8. Remarkably, after only two iterations, the method finds the exact solution: u3 = u⋆ =
( .0625, .125, .0625, .125, .375, .125, .0625, .125, .0625 )T , and hence the convergence is dramat-
ically faster than the other iterative methods.
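The finite-termination property behind this is easy to see in code; the following is a generic conjugate gradient sketch run on hypothetical 2 × 2 positive definite data (not the exercise's 9 × 9 system), which reaches the exact solution in at most two iterations:

```python
def conjugate_gradient(A, b, steps):
    # textbook CG for symmetric positive definite A, starting from x = 0
    n = len(b)
    x = [0.0] * n
    r = b[:]                        # residual r = b - A x
    p = r[:]                        # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(steps):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < 1e-28:          # residual numerically zero
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# hypothetical SPD system: 4x + y = 1, x + 3y = 2; solution (1/11, 7/11)
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], 2)
```

In exact arithmetic CG on an n × n system terminates in at most n steps, which is exactly the behavior observed in the exercise.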
♦ 9.6.14. dk = ‖rk‖² / ( rk^T A rk ) = ( xk^T A² xk − 2 b^T A xk + ‖b‖² ) / ( xk^T A³ xk − 2 b^T A² xk + b^T A b ).
[Plots of the Haar approximants for r = 2, 5, 10 omitted.]
For any integer 0 ≤ j ≤ 2^{r+1} − 1, on the interval j 2^{−r−1} < x < (j + 1) 2^{−r−1}, the value of
the Haar approximant of order r is constant and equal to the value of the function f(x) = x
at the midpoint:
sr(x) = f( (2j + 1) 2^{−r−2} ) = (2j + 1) 2^{−r−2}.
(Indeed, this is a general property of Haar wavelets.) Hence | sr(x) − f(x) | ≤ 2^{−r−2} → 0 as
r → ∞, proving convergence.
(c) Maximal deviation: r = 2: .0625, r = 5: .0078125, r = 10: .0002441.
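These deviations are exactly 2^{−r−2}, which a brute-force check of the midpoint rule confirms for f(x) = x:

```python
def max_deviation(r, samples=64):
    # f(x) = x is approximated on each interval (j 2^{-(r+1)}, (j+1) 2^{-(r+1)})
    # by its value at the midpoint, so the worst error is half the interval length
    h = 2.0 ** (-(r + 1))
    worst = 0.0
    for j in range(2 ** (r + 1)):
        mid = (j + 0.5) * h
        for s in range(samples + 1):
            x = j * h + s * h / samples
            worst = max(worst, abs(x - mid))
    return worst
```

The worst error is attained at the interval endpoints, half an interval away from the midpoint.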
♠ 9.7.2. (i) (a) The coefficients are c0 = −1/6, c0,0 = 0, cj,k = (2^j − 1 − 2k) 2^{−2j−2} for
k = 0, . . . , 2^j − 1 and j ≥ 1.
(b) [Plots of the Haar approximants for r = 2, 5, 10 omitted.]
In general, the Haar wavelet approximant sr(x) is constant on each subinterval k 2^{−r−1} <
x < (k + 1) 2^{−r−1} for k = 0, . . . , 2^{r+1} − 1 and equal to the value of the function f(x)
at the midpoint. This implies that the maximal error on each interval is bounded by the
deviation of the function from its value at the midpoint, which suffices to prove convergence
sr(x) → f(x) as r → ∞ provided f(x) is continuous.
(c) Maximal deviation: r = 2: .05729, r = 5: .007731, r = 10: .0002441.
♥ 9.7.4. (b)
W2 = [ 1   1   1   0 ]        W2^{−1} = (1/4) [ 1   1   1   1 ]
     [ 1   1  −1   0 ]                        [ 1   1  −1  −1 ]
     [ 1  −1   0   1 ]                        [ 2  −2   0   0 ]
     [ 1  −1   0  −1 ],                       [ 0   0   2  −2 ].
9.7.6. The function ϕ(x) = σ(x) − σ(x − 1) is merely a difference of two step functions, i.e., a box
function. The mother wavelet is thus
w(x) = ϕ(2x) − ϕ(2x − 1) = σ(x) − 2 σ( x − 1/2 ) + σ(x − 1).
♠ 9.7.8.
(a) Exercise 9.7.1: (a) The coefficients are c0 = .5181, c0,0 = −.1388, c1,0 = .05765, c1,1 =
−.1833, c2,0 = c2,1 = 0, c2,2 = .05908, c2,3 = −.2055, c3,0 = c3,1 = c3,2 = c3,3 = c3,4 =
c3,5 = −.01572, c3,6 = .05954, c3,7 = −.2167.
(b) [Plots of the approximants for r = 2, 5, 10 omitted.]
♦ 9.7.12. We compute
ψ(x) = ϕ(x + 1) = ϕ(2 (x + 1) − 1) + ϕ(2 (x + 1) − 2) = ϕ(2 x + 1) + ϕ(2 x) = ψ(2 x) + ψ(2 x − 1),
provided l ≠ m.
9.7.21. Almost true — the column sums of the coefficient matrix are both 1; however, the (2, 1)
entry is negative, which is not an allowed probability in a Markov process.
Students’ Solutions Manual for
Chapter 10: Dynamics
10.1.1. (i) (a) u(t) = c1 cos 2t + c2 sin 2t. (b) du/dt = [ 0  1 ; −4  0 ] u.
(c) u(t) = ( c1 cos 2t + c2 sin 2t, −2 c1 sin 2t + 2 c2 cos 2t )^T.
(d), (e) [Plots omitted.]
(ii) (a) u(t) = c1 e^{−2t} + c2 e^{2t}. (b) du/dt = [ 0  1 ; 4  0 ] u.
(c) u(t) = ( c1 e^{−2t} + c2 e^{2t}, −2 c1 e^{−2t} + 2 c2 e^{2t} )^T.
(d), (e) [Plots omitted.]
♦ 10.1.4. (a) Use the chain rule to compute dv/dt = −(du/dt)(−t) = −A u(−t) = −A v.
(b) Since v(t) = u(−t) parameterizes the same curve as u(t), but in the reverse direction.
(c) (i) dv/dt = [ 0  −1 ; 4  0 ] v; solution: v(t) = ( c1 cos 2t − c2 sin 2t, 2 c1 sin 2t + 2 c2 cos 2t )^T.
(ii) dv/dt = [ 0  −1 ; −4  0 ] v; solution: v(t) = ( c1 e^{2t} + c2 e^{−2t}, −2 c1 e^{2t} + 2 c2 e^{−2t} )^T.
10.1.8. False. If du/dt = A u, then the speed along the trajectory at the point u(t) is ‖A u(t)‖.
So the speed is constant only if ‖A u(t)‖ is constant. (Later, in Lemma 10.29, this will be
shown to correspond to A being a skew-symmetric matrix.)
♠ 10.1.10. In all cases, the t axis is plotted vertically, and the three-dimensional solution curves
( u(t), u̇(t), t )^T project to the phase plane trajectories ( u(t), u̇(t) )^T.
(ii ) Hyperbolic curves going away from the t axis in both directions:
10.1.11. u(t) = (7/5) e^{−5t} + (8/5) e^{5t},  v(t) = −(14/5) e^{−5t} + (4/5) e^{5t}.
10.1.20. (a) Linearly independent; (b) linearly independent; (d) linearly dependent;
(e) linearly independent.
10.1.24. dv/dt = S du/dt = S A u = S A S^{−1} v = B v.
♦ 10.1.25. (i) This is an immediate consequence of the preceding two exercises.
(ii) (a) u(t) = [ −1  1 ; 1  1 ] ( c1 e^{−2t}, c2 e^{2t} )^T,
(c) u(t) = [ −√2 i   √2 i ; 1  1 ] ( c1 e^{(1+i√2)t}, c2 e^{(1−i√2)t} )^T,
(e) u(t) = [ 4  3+2i  3−2i ; −2  −2−i  −2+i ; 1  1  1 ] ( c1, c2 e^{it}, c3 e^{−it} )^T.
10.1.26. (a) ( c1 e^{2t} + c2 t e^{2t}, c2 e^{2t} )^T,
(c) ( c1 e^{−3t} + c2 (1/2 + t) e^{−3t}, 2 c1 e^{−3t} + 2 c2 t e^{−3t} )^T,
(e) ( c1 e^{−3t} + c2 t e^{−3t} + c3 (1 + (1/2) t²) e^{−3t},  c2 e^{−3t} + c3 t e^{−3t},
     c1 e^{−3t} + c2 t e^{−3t} + (1/2) c3 t² e^{−3t} )^T.
10.1.27. (a) du/dt = [ 2  −1/2 ; 0  1 ] u, (c) du/dt = [ 0  0 ; 1  0 ] u,
(e) du/dt = [ 2  0  0 ; 0  −3  0 ; 2  3  0 ] u.
10.1.28. (a) No, since neither dui/dt, i = 1, 2, is a linear combination of u1, u2. Or note that the
trajectories described by the functions cross, violating uniqueness.
(b) No, since polynomial solutions of a two-dimensional system can be at most first order in t.
(d) Yes: du/dt = [ −1  0 ; 0  1 ] u. (e) Yes: du/dt = [ 2  3 ; −3  2 ] u.
which equals
A uj = e^{λt} Σ_{i=1}^{j} ( t^{j−i}/(j−i)! ) A wi
     = e^{λt} [ ( t^{j−1}/(j−1)! ) λ w1 + Σ_{i=2}^{j} ( t^{j−i}/(j−i)! ) ( λ wi + wi−1 ) ].
(b) At t = 0, we have uj (0) = wj , and the Jordan chain vectors are linearly independent.
10.2.9. The system is stable since ± i must be simple eigenvalues. Indeed, any 5 × 5 matrix
has 5 eigenvalues, counting multiplicities, and the multiplicities of complex conjugate eigen-
values are the same. A 6 × 6 matrix can have ± i as complex conjugate, incomplete double
eigenvalues, in addition to the simple real eigenvalues −1, −2, and in such a situation the
origin would be unstable.
10.2.14. (a) True, since the sum of the eigenvalues equals the trace, so at least one must be
positive or have positive real part in order that the trace be positive.
10.2.20. False. Only positive definite Hamiltonian functions lead to stable gradient flows.
and leave the Hamiltonian function constant. [Plot omitted.]
10.2.24. (a) The equilibrium solution satisfies A u⋆ = −b, and so v(t) = u(t) − u⋆ satisfies
dv/dt = du/dt = A u + b = A (u − u⋆) = A v,
10.3.1. (ii) A = [ −2  3 ; −1  1 ];
λ1 = −1/2 + i √3/2,  v1 = ( 3/2 − i √3/2, 1 )^T;
λ2 = −1/2 − i √3/2,  v2 = ( 3/2 + i √3/2, 1 )^T;
u1(t) = e^{−t/2} ( ( (3/2) c1 − (√3/2) c2 ) cos(√3 t/2) + ( (√3/2) c1 + (3/2) c2 ) sin(√3 t/2) ),
u2(t) = e^{−t/2} ( c1 cos(√3 t/2) + c2 sin(√3 t/2) );
stable focus; asymptotically stable.
10.3.2. (ii) u(t) = c1 e^{−t} ( 2 cos t − sin t, 5 cos t )^T + c2 e^{−t} ( 2 sin t + cos t, 5 sin t )^T;
stable focus; asymptotically stable.
10.3.3. (a) For the matrix A = [ −1  4 ; 1  −2 ]:
tr A = −3 < 0, det A = −2 < 0, ∆ = 17 > 0,
so this is an unstable saddle point.
(c) For the matrix A = [ 5  4 ; 1  2 ]:
tr A = 7 > 0, det A = 6 > 0, ∆ = 25 > 0,
so this is an unstable node.
10.4.1. (a) [ (4/3) e^t − (1/3) e^{−2t}    −(1/3) e^t + (1/3) e^{−2t} ]
            [ (4/3) e^t − (4/3) e^{−2t}    −(1/3) e^t + (4/3) e^{−2t} ],

(c) [ cos t   −sin t ]
    [ sin t    cos t ],

(e) [ e^{2t} cos t − 3 e^{2t} sin t       2 e^{2t} sin t               ]
    [ −5 e^{2t} sin t                     e^{2t} cos t + 3 e^{2t} sin t ].
10.4.2. (a) [ 1            0       0     ]
            [ 2 sin t      cos t   sin t ]
            [ 2 cos t − 2  −sin t  cos t ],

(c) [ e^{−2t} + t e^{−2t}        t e^{−2t}     t e^{−2t}     ]
    [ −1 + e^{−2t}               e^{−2t}       −1 + e^{−2t}  ]
    [ 1 − e^{−2t} − t e^{−2t}    −t e^{−2t}    1 − t e^{−2t} ].
10.4.11. The origin is asymptotically stable if and only if all solutions tend to zero as t → ∞.
Thus, all columns of e^{tA} tend to 0 as t → ∞, and hence lim_{t→∞} e^{tA} = O. Conversely, if
lim_{t→∞} e^{tA} = O, then any solution has the form u(t) = e^{tA} c, and hence u(t) → 0 as
t → ∞, proving asymptotic stability.
10.4.15. Set U(t) = A e^{tA}, V(t) = e^{tA} A. Then, by the matrix Leibniz formula (10.41),
dU/dt = A² e^{tA} = A U,  dV/dt = A e^{tA} A = A V,
while U(0) = A = V(0). Thus U(t) and V(t) solve the same initial value problem, hence, by
uniqueness, U(t) = V(t) for all t. Alternatively, one can use the power series formula (10.47):
A e^{tA} = Σ_{n=0}^{∞} ( t^n/n! ) A^{n+1} = e^{tA} A.
10.4.20. Lemma 10.28 implies det et A = et tr A = 1 for all t if and only if tr A = 0. (Even if tr A
is allowed to be complex, by continuity the only way this could hold for all t is if tr A = 0.)
♦ 10.4.25. (a) (d/dt) diag( e^{t d1}, . . . , e^{t dn} ) = diag( d1 e^{t d1}, . . . , dn e^{t dn} ) = D diag( e^{t d1}, . . . , e^{t dn} ).
Moreover, at t = 0, we have diag( e^{0 d1}, . . . , e^{0 dn} ) = I. Therefore, diag( e^{t d1}, . . . , e^{t dn} )
satisfies the defining properties of e^{tD}.
(c) 10.4.1: (a) [ 1  1 ; 1  4 ] diag( e^t, e^{−2t} ) [ 1  1 ; 1  4 ]^{−1}
= [ (4/3) e^t − (1/3) e^{−2t}   −(1/3) e^t + (1/3) e^{−2t} ; (4/3) e^t − (4/3) e^{−2t}   −(1/3) e^t + (4/3) e^{−2t} ];
(c) [ i  −i ; 1  1 ] diag( e^{it}, e^{−it} ) [ i  −i ; 1  1 ]^{−1} = [ cos t  −sin t ; sin t  cos t ];
(e) [ 3/5 − (1/5) i   3/5 + (1/5) i ; 1  1 ] diag( e^{(2+i)t}, e^{(2−i)t} ) [ 3/5 − (1/5) i   3/5 + (1/5) i ; 1  1 ]^{−1}
= [ e^{2t} cos t − 3 e^{2t} sin t   2 e^{2t} sin t ; −5 e^{2t} sin t   e^{2t} cos t + 3 e^{2t} sin t ].
10.4.2: (a) [ −1  0  0 ; 0  −i  i ; 2  1  1 ] diag( 1, e^{it}, e^{−it} ) [ −1  0  0 ; 0  −i  i ; 2  1  1 ]^{−1}
= [ 1  0  0 ; 2 sin t  cos t  sin t ; 2 cos t − 2  −sin t  cos t ];
♦ 10.4.28. (a) If U(t) = C e^{tB}, then dU/dt = C e^{tB} B = U B, and so U satisfies the differential
equation. Moreover, C = U(0). Thus, U(t) is the unique solution to the initial value prob-
lem dU/dt = U B, U(0) = C, where the initial value C is arbitrary.
10.4.32. (b) [ 1  0 ; t  1 ]: shear transformations in the y direction. The trajectories are lines
parallel to the y-axis. Points on the y-axis are fixed.
(d) [ cos 2t  −(1/2) sin 2t ; 2 sin 2t  cos 2t ]: elliptical rotations around the origin. The
trajectories are the ellipses x² + (1/4) y² = c. The origin is fixed.
10.4.33. (b) [ 1  0  t ; 0  1  0 ; 0  0  1 ]: shear transformations in the x direction, with magnitude
proportional to the z coordinate. The trajectories are lines parallel to the x axis. Points on
the xy plane are fixed.
(d) [ cos t  sin t  0 ; −sin t  cos t  0 ; 0  0  e^t ]: spiral motions around the z axis. The
trajectories are the positive and negative z axes, circles in the xy plane, and cylindrical
spirals (helices) winding around the z axis while going away from the xy plane at an
exponentially increasing rate. The only fixed point is the origin.
10.4.35.
[ [ 2 0 ; 0 0 ], [ 0 0 ; 1 0 ] ] = [ 0 0 ; −2 0 ],    [ [ 2 0 ; 0 0 ], [ 0 3 ; −3 0 ] ] = [ 0 6 ; 6 0 ],
[ [ 2 0 ; 0 0 ], [ 0 −1 ; 4 0 ] ] = [ 0 −2 ; −8 0 ],  [ [ 2 0 ; 0 0 ], [ 0 1 ; 1 0 ] ] = [ 0 2 ; −2 0 ].
10.4.41. In the matrix system dU/dt = A U, the equations in the last row are du_{nj}/dt = 0 for
j = 1, . . . , n, and hence the last row of U(t) is constant. In particular, for the exponen-
tial matrix solution U(t) = e^{tA}, the last row must equal the last row of the identity matrix
U(0) = I, which is e_n^T.
♦ 10.4.43. (a) ( x + t, y )^T: translations in the x direction.
(c) ( (x+1) cos t − y sin t − 1, (x+1) sin t + y cos t )^T: rotations around the point ( −1, 0 )^T.
10.4.44. (a) If U ≠ {0}, then the system has eigenvalues with positive real part corresponding
to exponentially growing solutions, and so the origin is unstable. If C ≠ {0}, then the sys-
tem has eigenvalues with zero real part corresponding to either bounded solutions, which
are stable but not asymptotically stable modes, or, if the eigenvalue is incomplete, polyno-
mially growing unstable modes.
10.4.45. (a) S = U = ∅, C = R²;
(b) S = span ( (2−√7)/3, 1 )^T, U = span ( (2+√7)/3, 1 )^T, C = ∅;
(d) S = span ( 1, −1, −1 )^T, U = span ( 1, 1, −1 )^T, C = span ( 2, 0, −3 )^T.
10.5.1. The vibrational frequency is ω = √(21/6) ≈ 1.87083, and so the number of hertz is
ω/(2π) ≈ .297752.
(c) Periodic of period 12: [Plot omitted.]
(f) Quasi-periodic: [Plot omitted.]
10.5.5. (a) √2, √7; (b) 4: each eigenvalue gives two linearly independent solutions;
(c) u(t) = r1 cos( √2 t − δ1 ) ( 2, 1 )^T + r2 cos( √7 t − δ2 ) ( −1, 2 )^T; (d) The solution
is periodic if only one frequency is excited, i.e., r1 = 0 or r2 = 0; all other solutions are
quasiperiodic.
10.5.7. (a) u(t) = r1 cos(t − δ1) + r2 cos( √5 t − δ2 ),  v(t) = r1 cos(t − δ1) − r2 cos( √5 t − δ2 );
(c) u(t) = ( r1 cos(t − δ1), r2 cos(2t − δ2), r3 cos(3t − δ3) )^T.
10.5.16. (a) u(t) = a t + b + 2 r cos( √5 t − δ ),  v(t) = −2 a t − 2 b + r cos( √5 t − δ ).
The unstable mode consists of the terms with a in them; it will not be excited if the initial
conditions satisfy u(t0) − 2 v(t0) = 0.
(c) u(t) = −2 a t − 2 b − ( (1−√13)/4 ) r1 cos( √((7+√13)/2) t − δ1 ) − ( (1+√13)/4 ) r2 cos( √((7−√13)/2) t − δ2 ),
v(t) = −2 a t − 2 b + ( (3−√13)/4 ) r1 cos( √((7+√13)/2) t − δ1 ) + ( (3+√13)/4 ) r2 cos( √((7−√13)/2) t − δ2 ),
w(t) = a t + b + r1 cos( √((7+√13)/2) t − δ1 ) + r2 cos( √((7−√13)/2) t − δ2 ).
The unstable mode is the term containing a; it will not be excited if the initial conditions
satisfy −2 u(t0) − 2 v(t0) + w(t0) = 0.
10.5.17. (a) Q = [ −1/√2   1/√2   0 ]      Λ = [ 4  0  0 ]
                 [    0      0    1 ],         [ 0  2  0 ]
                 [  1/√2   1/√2   0 ]          [ 0  0  2 ].
(b) Yes, because K is symmetric and has all positive eigenvalues.
(c) u(t) = ( cos 2t, (1/√2) sin √2 t, cos √2 t )^T.
(d) The solution u(t) is periodic with period √2 π.
(e) No: since the frequencies 2, √2 are not rational multiples of each other,
the general solution is quasi-periodic.
♠ 10.5.22. (a) Frequencies: ω1 = √( 3/2 − (1/2)√5 ) = .61803, ω2 = 1, ω3 = √( 3/2 + (1/2)√5 ) = 1.618034;
stable eigenvectors:
v1 = ( 2−√5, −1, −2+√5, 1 )^T,  v2 = ( −1, −1, −1, 1 )^T,  v3 = ( 2+√5, 1, −2−√5, 1 )^T;
unstable eigenvector: v4 = ( 1, −1, 1, 1 )^T.
In the lowest frequency mode, the nodes vibrate up and towards
each other and then down and away, the horizontal motion being less pronounced than the
vertical; in the next mode, the nodes vibrate in the directions of the diagonal bars, with one
moving towards the support while the other moves away; in the highest frequency mode,
they vibrate up and away from each other and then down and towards, with the horizontal
motion significantly more than the vertical; in the unstable mode the left node moves down
and to the right, while the right-hand node moves at the same rate up and to the right.
10.5.27. (a) u(t) = r1 cos( (1/√2) t − δ1 ) ( 1, 3 )^T + r2 cos( √(5/2) t − δ2 ) ( −3, 1 )^T,
(c) u(t) = r1 cos( √((3−√3)/2) t − δ1 ) ( (1+√3)/2, 1 )^T + r2 cos( √((3+√3)/2) t − δ2 ) ( (1−√3)/2, 1 )^T.
10.5.28. u1(t) = ( (√3−1)/(2√3) ) cos( √((3−√3)/2) t ) + ( (√3+1)/(2√3) ) cos( √((3+√3)/2) t ),
u2(t) = ( 1/(2√3) ) cos( √((3−√3)/2) t ) − ( 1/(2√3) ) cos( √((3+√3)/2) t ).
♣ 10.5.30. (a) We place the oxygen atom at the origin, one hydrogen at ( 1, 0 )^T and the other
at ( cos θ, sin θ )^T = ( −0.2588, 0.9659 )^T with θ = (105/180) π = 1.8326 radians. There are two in-
dependent vibrational modes, whose fundamental frequencies are ω1 = 1.0386, ω2 = 1.0229,
with corresponding eigenvectors v1 = ( .0555, −.0426, −.7054, 0., −.1826, .6813 )^T,
v2 = ( −.0327, −.0426, .7061, 0., −.1827, .6820 )^T. Thus, the (very slightly) higher frequency
mode has one hydrogen atom moving towards and the other away from the oxygen, which
also slightly vibrates, and then all reversing their motion, while in the lower frequency mode,
they simultaneously move towards and then away from the oxygen atom.
10.5.37. The solution is u(t) = ¼ (v+5) e^{−t} − ¼ (v+1) e^{−5t}, where v = u′(0) is the initial velocity.
This vanishes when e^{4t} = (v+1)/(v+5), which happens at some t = t⋆ > 0 provided (v+1)/(v+5) > 1, and
so the initial velocity must satisfy v < −5.
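The vanishing criterion can be sanity-checked numerically; v = −6 and v = −4 below are illustrative choices on either side of the threshold v = −5:

```python
import math

def u(t, v):
    # u(t) = (1/4)(v + 5) e^{-t} - (1/4)(v + 1) e^{-5t}, with v = u'(0)
    return 0.25 * (v + 5) * math.exp(-t) - 0.25 * (v + 1) * math.exp(-5 * t)

# for v < -5 the root t* = (1/4) log((v+1)/(v+5)) is positive and u(t*) = 0:
v = -6.0
t_star = 0.25 * math.log((v + 1) / (v + 5))
assert t_star > 0
assert abs(u(t_star, v)) < 1e-12

# for v > -5 (here v = -4) the solution stays positive, so it never vanishes:
assert all(u(0.01 * k, -4.0) > 0 for k in range(2001))
```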
10.5.39. (a) By Hooke's Law, the spring stiffness is k = 16/6.4 = 2.5. The mass is 16/32 = .5.
The equation of motion is .5 u′′ + 2.5 u = 0. The natural frequency is ω = √5 = 2.23607.
(b) The solution to the initial value problem .5 u′′ + u′ + 2.5 u = 0, u(0) = 2, u′(0) = 0, is
u(t) = e^{−t} ( 2 cos 2 t + sin 2 t ). (c) The system is underdamped, and the vibrations are less
rapid than those of the undamped system.
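A finite-difference sanity check that the claimed solution in (b) satisfies the damped equation and the initial conditions (the step size h and the sample grid are arbitrary choices):

```python
import math

def u(t):
    # claimed solution of .5 u'' + u' + 2.5 u = 0, u(0) = 2, u'(0) = 0
    return math.exp(-t) * (2 * math.cos(2 * t) + math.sin(2 * t))

h = 1e-4
for i in range(500):
    t = 0.1 + 0.02 * i                              # sample points in [0.1, 10.1]
    du = (u(t + h) - u(t - h)) / (2 * h)            # central first difference
    ddu = (u(t + h) - 2 * u(t) + u(t - h)) / h**2   # central second difference
    assert abs(0.5 * ddu + du + 2.5 * u(t)) < 1e-4  # ODE residual is tiny

assert abs(u(0.0) - 2.0) < 1e-12                    # u(0) = 2
assert abs((u(h) - u(-h)) / (2 * h)) < 1e-6         # u'(0) = 0
```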
(c) cos 9.5 t − cos 10 t = 2 sin .25 t sin 9.75 t; fast frequency: 9.75, beat frequency: .25.
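The sin · sin envelope arises from a difference of cosines via the sum-to-product identity cos B − cos A = 2 sin((A+B)/2) sin((A−B)/2), here with A = 10 t, B = 9.5 t. A quick numerical spot check over a grid of t values:

```python
import math

# sum-to-product identity with A = 10t, B = 9.5t:
#   cos 9.5t - cos 10t = 2 sin(.25 t) sin(9.75 t)
for k in range(3001):
    t = 0.01 * k
    lhs = math.cos(9.5 * t) - math.cos(10 * t)
    rhs = 2 * math.sin(0.25 * t) * math.sin(9.75 * t)
    assert abs(lhs - rhs) < 1e-12
```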
10.6.2. (a) u(t) = (1/27) cos 3 t − (1/27) cos 6 t,
(c) u(t) = ½ sin 2 t + e^{−t/2} ( cos( ½ √15 t ) − (√15/5) sin( ½ √15 t ) ).
10.6.3. (a) u(t) = (1/3) cos 4 t + (2/3) cos 5 t + (1/5) sin 5 t;
undamped periodic motion with fast frequency 4.5 and beat frequency .5.
(c) u(t) = − (60/29) cos 2 t + (5/29) sin 2 t − (56/29) e^{−5t} + 8 e^{−t};
the transient is an overdamped motion; the persistent motion is periodic.
u(t) = (4/17) cos 2 t + (16/17) sin 2 t + e^{−t} ( (30/17) cos 2 t − (1/17) sin 2 t ).
10.6.11. (b) u(t) = (3/2) e^{−t/3} − (1/2) e^{−t},
(d) u(t) = e^{−t/5} cos( t/10 ) + 2 e^{−t/5} sin( t/10 ).
10.6.12. u(t) = (165/41) e^{−t/4} cos( t/4 ) − (91/41) e^{−t/4} sin( t/4 ) − (124/41) cos 2 t + (32/41) sin 2 t
= 4.0244 e^{−.25 t} cos .25 t − 2.2195 e^{−.25 t} sin .25 t − 3.0244 cos 2 t + .7805 sin 2 t.
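The decimal form on the second line can be checked against the exact fractions on the first; they agree to the four quoted decimal places:

```python
from fractions import Fraction

# exact coefficients and their 4-digit decimal forms quoted in the solution
pairs = [(Fraction(165, 41), 4.0244), (Fraction(91, 41), 2.2195),
         (Fraction(124, 41), 3.0244), (Fraction(32, 41), 0.7805)]

for exact, decimal in pairs:
    assert abs(float(exact) - decimal) < 5e-5   # agreement to 4 decimal places
```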
10.6.17. (b) u(t) = ½ sin( √3 t ) ( 1, −1 )^T + r1 cos( √(4+√5) t − δ1 ) ( (−1−√5)/2, 1 )^T
                  + r2 cos( √(4−√5) t − δ2 ) ( (−1+√5)/2, 1 )^T,
(d) u(t) = ( 1/17, −12/17 )^T cos 2 t + r1 cos( √(5/3) t − δ1 ) ( −3, 1 )^T
         + r2 cos( t/√2 − δ2 ) ( 1, 2 )^T.
10.6.18. (a) The resonant frequencies are √((3−√3)/2) = .796225, √((3+√3)/2) = 1.53819.
(b) For example, a forcing function of the form cos( √((3+√3)/2) t ) w, where w = ( w1 , w2 )^T is not
orthogonal to the eigenvector ( −1−√3, 1 )^T, so that w2 ≠ (1 + √3 ) w1 , will excite resonance.
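The numbers in (a) and the orthogonality criterion in (b) can be verified numerically; w_safe below is an illustrative amplitude chosen so that w2 = (1+√3) w1, which is orthogonal to the eigenvector and hence does not excite resonance:

```python
import math

r3 = math.sqrt(3)
w1 = math.sqrt((3 - r3) / 2)   # lower resonant frequency
w2 = math.sqrt((3 + r3) / 2)   # higher resonant frequency
assert abs(w1 - 0.796225) < 1e-6
assert abs(w2 - 1.53819) < 1e-5

# forcing amplitude w excites resonance at the higher frequency unless it is
# orthogonal to the eigenvector (-1 - sqrt(3), 1):
v = (-1 - r3, 1.0)
w_safe = (1.0, 1 + r3)         # satisfies w2 = (1 + sqrt(3)) w1
assert abs(w_safe[0] * v[0] + w_safe[1] * v[1]) < 1e-12   # orthogonal: no resonance
```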
♣ 10.6.20. In each case, you need to force the system by cos(ω t) a, where ω 2 = λ is an eigenvalue
and a is not orthogonal to the corresponding eigenvector. In order not to excite an instability,
a also needs to be orthogonal to the kernel of the stiffness matrix, which is spanned by the
unstable mode vectors.
(a) Resonant frequencies: ω1 = .5412, ω2 = 1.1371, ω3 = 1.3066, ω4 = 1.6453;
eigenvectors:
v1 = ( .6533, .2706, .6533, −.2706 )^T,  v2 = ( .2706, .6533, −.2706, .6533 )^T,
v3 = ( .2706, −.6533, .2706, .6533 )^T,  v4 = ( −.6533, .2706, .6533, .2706 )^T;
no unstable modes.
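Since the stiffness matrix is symmetric, its eigenvectors should be mutually orthogonal; the four-digit vectors listed in (a) can be checked numerically. V is an illustrative name for the matrix whose columns are v1, ..., v4 (tolerances reflect the four-digit rounding of the data):

```python
import numpy as np

# columns are the eigenvectors v1, v2, v3, v4 quoted in part (a)
V = np.array([[ 0.6533,  0.2706,  0.2706, -0.6533],
              [ 0.2706,  0.6533, -0.6533,  0.2706],
              [ 0.6533, -0.2706,  0.2706,  0.6533],
              [-0.2706,  0.6533,  0.6533,  0.2706]])

G = V.T @ V                                    # Gram matrix of the eigenvectors
assert np.max(np.abs(G - np.eye(4))) < 1e-3    # orthonormal to 4-digit rounding
```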
(c) Resonant frequencies: ω1 = .3542, ω2 = .9727, ω3 = 1.0279, ω4 = 1.6894, ω5 = 1.7372;
eigenvectors:
v1 = ( −.0989, −.0706, 0, −.9851, .0989, −.0706 )^T,
v2 = ( −.1160, .6780, .2319, 0, −.1160, −.6780 )^T,
v3 = ( .1251, −.6940, 0, .0744, −.1251, −.6940 )^T,
v4 = ( .3914, .2009, −.7829, 0, .3914, −.2009 )^T,
v5 = ( .6889, .1158, 0, −.1549, −.6889, .1158 )^T;
unstable mode: z = ( 1, 0, 1, 0, 1, 0 )^T. To avoid exciting the unstable mode, the initial veloc-
ity must be orthogonal to the null eigenvector: z · u′(t0 ) = 0, i.e., there is no net horizontal
velocity.