A Tutorial Overview of
Vector and Matrix Norms
Prof. W. Kahan SUBJECT TO CHANGE: Do you have the latest version? Page 1 / 79
File: NormOvrv Tutorial Overview of Vector and Matrix Norms Version dated January 30, 2013 11:18 am
Abstract
Intended for new graduate students whose experience as undergraduates
may have prepared them inadequately to apply norms to numerical error-
analyses and to proofs of convergence, this tutorial surveys norms for
finite-dimensional real spaces in a way that may ease a transition to the
infinite-dimensional spaces of Functional Analysis. Among the topics
covered are some more useful than is implied by their absence from most
curricula. The notation is mostly standard but interpreted in ways not
always taught to undergraduates, so attendees may prepare for the tutorial
by reading just a few of my lecture notes for Math. H110 posted at
<eecs.berkeley.edu/~wkahan/MathH110/2dspaces.pdf> and
<.../pts.pdf> in that order, and afterwards <.../geo.pdf> and
<.../geos.pdf> skimmed lightly.
This tutorial omits proofs; almost all can be found in
<.../NORMlite.pdf>, <.../GIlite.pdf>, and a few other places cited.
Contents
Abstract Page 2
What are Norms for? Two Examples … 5-7
Part I: Vector Norms
The Trouble with Norms …, too many Unit Balls 9
Choosing a Norm 12-13
Dual Spaces 15-17
Changing a Basis 18
Real Inner-Product Spaces 19
Auerbach’s Parallelepiped Theorem 21
Fritz John’s Ellipsoid Theorem 22
Part II: Matrix Norms
Overloaded Notation 24
What must we know to choose an apt norm? 25
Mere Matrix Norms vs. Operator Norms 26-28
Maximized Ratios of Familiar Norms 29
Choosing a Norm 30
When is a Preassigned Matrix Norm Also an Operator Norm? 31
Orthogonally Invariant Matrix Norms 32
Dual Norms for Dual Matrix Spaces, and Norms for Bilinear Forms 33-4
Part III: Matrix Norms and Matrix Inverses
Condensed Review of Parts I & II 35
Sensitivities of Inverses to (Infinitesimal) Perturbations 36
Operator Norms of Inverses 37-8
… continued …
… Contents continued …
Equilibration and Preconditioning 39-40
Iterative Refinement 41
Diagonal Dominance and Schur Complements 42-3
Fredholm’s Alternatives 44
Generalized Inverses and Pseudo-Inverses 45
How Big must they be? Their Use can be Dangerous! 46-48
A Generality 49
Part IV: Matrix Norms and Eigenvalues
Matrix Norms exceed Eigenvalues 51
A Computed Eigenvector’s Error is an Angle 52
Clustered Eigenvalues’ Sensitivities to Perturbations 53
Gershgorin’s Circles enclose Eigenvalues; Extreme Singular Values 54-5
Eigenvalues’ Sensitivities 56-7
Perron-Frobenius Theory of Nonnegative Matrices, and Optimal Diagonal Equilibration 58-59
Part V: Matrix Norms and Real Symmetric Matrices’ Eigenvalues
Stationary Points and Values, Real Quadratic Forms, Congruence, Diagonalization 61-63
Stationary Points and Values, and Generalized Symmetric Eigenproblems 64
Simultaneous Diagonalization by Congruence 65-66
Real Symmetric Eigenproblem; Courant-Fischer Minimax 67
Absolute and Relative Perturbations 68
Partial Eigensystems and Spectral Gaps 69-73
Spectral Gaps, Invariant Subspaces, and Angles 74
Miscellany 75
Citations and Further Reading; Epilogue 76-79
Examples …
[ Figure: input DATA X and perturbed X + ∆X map under ƒ to RESULTS ƒ(X) and ƒ(X + ∆X) ; useful computed results F(X) lie inside a circle around ƒ(X) . E.g. Matrix Inversion; but inappropriate for log, acos, … ]
Our chosen norm may exaggerate uncertainties in actual input data and computed results whose correlations the norm disregards.
If F is “backward stable” but computed F(X) is very wrong, do we blame the victim ƒ for “ill condition” at X ?
Error-Analyses tend to excessive pessimism partly because they allow for unlikely conspiracies
among rounding errors, and partly because the chosen norms are often not the most suitable.
But we don't know z yet, much less the eigenvalues of the derivative ƒ`(z) (the Jacobian).
Contractive Map: ||ƒ(y) – ƒ(x)||/||y – x|| < λ < 1 whenever distinct x and y
lie in some sufficiently large region X .
Then either ||xₙ – z|| ≤ λⁿ·||x₀ – z|| → 0 so xₙ → z = ƒ(z) uniquely in X ,
or ultimately xₙ escapes from X , which is too small to hold a fixed-point.
And test the Contractive hypothesis: Is ||xₙ₊₁ – xₙ||/||xₙ – xₙ₋₁|| < 1 ? Until roundoff …
THE CATCH: All this makes sense only for an appropriately chosen norm ||…|| .
That is the trouble with norms: There are so many of them, apt choice may be hard.
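A numerical sketch of the iteration and the ratio test above (the map ƒ, starting point and ∞-norm here are all hypothetical choices; any norm could replace `np.inf`):

```python
import numpy as np

# A hypothetical contractive map on R^2: f(x) = c + A @ x with ||A||_inf = 0.6 < 1,
# so its unique fixed point is z = (I - A)^(-1) @ c.
A = np.array([[0.3, 0.1],
              [0.2, 0.4]])
c = np.array([1.0, 2.0])
f = lambda x: c + A @ x

x_prev = np.zeros(2)
x = f(x_prev)
ratios = []
for _ in range(30):
    x_next = f(x)
    # Test the Contractive hypothesis: ||x_{n+1} - x_n|| / ||x_n - x_{n-1}|| < 1 ?
    ratios.append(np.linalg.norm(x_next - x, np.inf) /
                  np.linalg.norm(x - x_prev, np.inf))
    x_prev, x = x, x_next

z = np.linalg.solve(np.eye(2) - A, c)   # the fixed point z = f(z)
print(max(ratios))                      # every observed ratio stays below 1
print(np.linalg.norm(x - z, np.inf))    # the iterates have converged to z
```

The observed ratios stay below ||A||∞ = 0.6, so the test correctly predicts convergence.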
Part I
Vector Norms
• Triangle Inequality: ||x + y|| ≤ ||x|| + ||y|| for all x and y . Equality need not imply parallelism.
If ||x|| is a norm, so is |||x||| := ||L⁻¹·x|| for any fixed invertible linear operator L .
If ||x|| and |||x||| are norms, so are max{||x||, |||x|||} , √(||x||² + |||x|||²) , ||x|| + |||x||| , … .
The Unit Ball of a norm ||x|| is the region B := { x : ||x|| ≤ 1 } . This B turns out to
be closed, bounded, centrally symmetric ( B = –B ) and convex with o strictly inside.
“Convex” means that, if x and y lie in B , so does ζ·x + (1– ζ)·y for 0 ≤ ζ ≤ 1 .
( The line segment joining “points” x and y lies inside B . )
Conversely, any region B closed, bounded, centrally symmetric and convex with o
strictly inside is the Unit Ball of the norm ||x|| := inf ( |ξ| for which x/ξ lies in B ) .
[ Figure: a unit ball B with boundary ∂B and center o ; points x (with ||x|| ≈ 2/3 ) and y . ]
Let B := [b1, b2, b3, …, bn] be a Basis for the (abstract?) space of vectors 𝐱 , so each
vector 𝐱 = B·x = ∑j bj·ξj is represented by a column-vector x = [ξ1; ξ2; ξ3; …; ξn] in
MATLAB’s notation. Thus, basis B is an invertible linear map from a space of columns x
to the (abstract?) space of vectors 𝐱 = B·x ; each 𝐱 has its own column x = B⁻¹·𝐱 .
Then ||𝐱|| will be computed as ||𝐱|| := ||x|| from the components ξj of column x .
If the basis B accepted first is unsatisfactory, a new basis B̂ := B·C⁻¹ can be chosen; here
C is an invertible matrix. The new representative of 𝐱 = B·x = B̂·x̂ is then column
x̂ = C·x , whence the new formula to compute ||𝐱|| becomes ||𝐱|| = ||C⁻¹·x̂|| .
Don’t Memorize! Re-derive!
Having chosen (or accepted) a basis in which vector x is represented by its column
x = [ξ1; ξ2; ξ3; …; ξn] , we can choose a formula by which to compute ||x|| := ||x|| from
the components ξj . Here are three familiar formulas and their Unit Balls for n = 3 :
||x||₁ := ∑j |ξj| (ball: the octahedron), ||x||₂ := √(∑j ξj²) (the sphere), ||x||∞ := maxj |ξj| (the cube).
These norms are unchanged by permutation of the elements of x ; these norms treat each
element the same as any other. What if some elements need closer scrutiny than others?
Let W be an invertible diagonal matrix; |||x||| := ||W–1·x|| is a norm too, maybe better.
Let C be an invertible matrix of a change of basis; |||x||| := ||C–1·x|| is a norm too.
A change of variables often induces a change of norm. Or is it vice-versa ?
Example: The control of electric power distribution networks involves voltages of widely
diverse magnitudes, ranging from those on cross-country transmission lines to those in
radio receivers of the microwave transmissions needed to control power generation.
Locations Voltage Ranges Better Units
1 Cross-Country Transmission Towers 250,000 - 1,000,000 Megavolts
2 Substation to Transformers on Telephone Poles 2,000 - 10,000 Kilovolts
3 In-house wall sockets to appliances 110 - 240 Volts
4 Computer power supplies to transistors 2 - 12 Volts
5 Transistor circuits’ on-off variations 0.001 - 0.01 Millivolts
6 Inputs to radio receivers’ antennas 0.000,001 - 0.000,01 Microvolts
Changes by 0.001 Volt are negligible in locations 1 - 4 , devastating in the last two.
Changes by 0.001 Unit are negligible in all locations.
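The table’s moral can be rehearsed numerically: in raw Volts the ∞-norm is dominated by the transmission line, so a 0.001 Volt change looks negligible; after a diagonal scaling W built from the “Better Units” (the representative voltages below are hypothetical, one per location), the same change is seen for the damage it does:

```python
import numpy as np

# Representative voltages at the six locations, in Volts (hypothetical values):
x = np.array([5e5, 5e3, 120.0, 5.0, 5e-3, 5e-6])

# A change of 0.001 Volt at every location:
dx = np.full(6, 1e-3)

# Unweighted, the change looks uniformly negligible relative to ||x||_inf ...
print(np.linalg.norm(dx, np.inf) / np.linalg.norm(x, np.inf))   # ~2e-9

# ... yet it is devastating at the last two locations. Scaling by the
# "Better Units" (Mega-, Kilo-, Volts, Volts, Milli-, Micro-) makes the
# weighted norm |||v||| := ||W^{-1} v||_inf weigh each location equitably:
W = np.diag([1e6, 1e3, 1.0, 1.0, 1e-3, 1e-6])
tri_norm = lambda v: np.linalg.norm(np.linalg.solve(W, v), np.inf)
print(tri_norm(dx) / tri_norm(x))   # large: the scaled norm sees the damage
```

In the scaled norm the 0.001 Volt change amounts to 1000 Microvolts at location 6, swamping the signal there, exactly as the table warns.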
Since x need not be a column, wT need not be a row, much less the “Transpose” of a
vector w . Except for Euclidean and other Inner-Product spaces, there is no necessary
relation between wT and w , just as Miss Carla and Master Carlo need not be related.
Compare Contravariant and Covariant Tensors.
Distinctions between dual spaces are obscured by other notations for wT·x like w'·x, w•x, <x, w>,
<x|w>, (x, w), … devised originally for Euclidean and other Inner-Product spaces, each one
Isomorphic to its dual. Many mathematicians expect context to disambiguate those notations.
E.g.: The space dual to x ’s contains the derivative µ`(x) of any scalar function µ(x)
because scalar dµ(x) = µ`(x)·dx .
From that follows ||x|| = max |wT·x|/||wT|| over wT ≠ oT . ( Non-trivial proof in general. )
Hölder’s Inequality: |wT·x| ≤ ||wT||·||x|| for all wT and x . ( Cf. Cauchy’s Inequality for ||…||₂ . )
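For the familiar dual pair ||…||₁ and ||…||∞ both Hölder’s Inequality and the maximizing formula for ||x|| are easy to check numerically (a random spot-check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)

# Hölder: |w^T x| <= ||w^T||_inf * ||x||_1 for every row w^T.
for _ in range(100):
    w = rng.standard_normal(5)
    assert abs(w @ x) <= np.linalg.norm(w, np.inf) * np.linalg.norm(x, 1) + 1e-12

# The maximum of |w^T x| / ||w^T||_inf over w^T is attained at w := sign(x),
# recovering ||x||_1 = max |w^T x| / ||w^T||_inf :
w_best = np.sign(x)
print(abs(w_best @ x))         # equals ...
print(np.linalg.norm(x, 1))    # ... ||x||_1
```

The maximizing row sign(x) has ∞-norm 1 and picks off every component’s magnitude, which is why equality is attained there.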
• Row(s) wT = [ω1, ω2, ω3, …, ωn] dual to column x = [ξ1; ξ2; ξ3; …; ξn] w.r.t. ||x|| :
» For ||x||2 : wT := xT (its transpose !) uniquely. (Complex conjugate transpose for complex spaces)
Just as the “columns” bj of B are basis vectors for the space of vectors 𝐱 , the “rows”
of B⁻¹ are basis vectors ejT (functionals) for the dual-space of linear functionals wT .
“Evaluation” functional ejT extracts 𝐱 ’s component ξj = ejT·𝐱 from column x = B⁻¹·𝐱 .
An inner-product space has a natural linear map between it and its dual space. Vector
𝐱 = B·x represented by column x maps to linear functional 𝐱• = xT·M·B⁻¹ represented
by row xT·M for some symmetric positive definite matrix M = MT , the space’s Metric.
M comes from the basis vectors in B := [b1, b2, b3, …, bn] thus: Mij = bi•bj . Similarly
𝐰• = wT·B⁻¹ represented by row wT maps to 𝐰 = B·M⁻¹·w . Now representative
columns x = B⁻¹·𝐱 and y = B⁻¹·𝐲 figure in the formula to obtain 𝐲•𝐱 = yT·M·x ; and
||𝐱|| = √(xT·M·x) but ||𝐰•|| = √(wT·M⁻¹·w) . Now 𝐱 and 𝐱• are duals w.r.t. ||…|| .
The Parallelogram Law: ||x + y||² + ||x – y||² ≡ 2·||x||² + 2·||y||² . … Jordan-von Neumann Th’m
But every other normed n-space is NOT Isomorphic to its dual space
except possibly if dimension n ≤ 2 .
||…||∞ ↔ ||…||1
Generally, flat spots on the surface of a normed space’s unit ball match up with vertices of the dual space’s unit ball, and vice-versa.
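The Jordan-von Neumann criterion is easy to probe numerically: the 2-norm satisfies the Parallelogram Law identically, while the 1-norm generically violates it, so no inner product can generate the 1-norm (random data, a sketch only):

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(4), rng.standard_normal(4)

def parallelogram_gap(norm):
    # ||x+y||^2 + ||x-y||^2 - 2||x||^2 - 2||y||^2 ; zero iff the Law holds here
    return norm(x + y)**2 + norm(x - y)**2 - 2*norm(x)**2 - 2*norm(y)**2

gap2 = parallelogram_gap(lambda v: np.linalg.norm(v, 2))
gap1 = parallelogram_gap(lambda v: np.linalg.norm(v, 1))
print(gap2)   # ~0: the 2-norm comes from an inner product
print(gap1)   # generically nonzero: the 1-norm does not
```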
The foregoing formula helps to provide a short proof of the most useful special case of …
Auerbach’s Theorem: Any n-space’s unit ball B can be circumscribed by at least one
Parallelepiped P whose 2n faces touch B at their midpoints.
An Ellipsoid is the level surface of a Positive Definite Quadratic Form; see pp. 62-63.
• Therefore any n-dimensional norm ||…|| can be approximated within a factor n^(±1/4)
by ||…||₂ at a cost of order n after the change of basis.
Often a change of basis involving nothing more onerous than choices of suitable units
(recall the voltages’ example on p. 12) allows one of the cheap norms ||…||∞, ||…||2 or
||…||1 to provide adequate error-estimates.
Fritz John’s Ellipsoid Theorem has vastly many other useful implications, some of which
are mentioned in my posted lecture notes <…/MathH110/NORMlite.pdf> .
However, if dimension n is too big or infinite, different norms may defy comparison.
Part II
Matrix Norms
All the linear operators L mapping one given normed space of vectors x to another
constitute a third vector space and therefore can be subjected to any of a vast horde of
norms. We prefer that each such Matrix Norm satisfy these four requirements:
• Positivity: ∞ > ||L|| > 0 for every L except ||O|| = 0 .
• Homogeneity: ||λ·L|| = |λ|·||L|| for every scalar λ . Let’s keep λ real.
• Triangle Inequality: ||K + L|| ≤ ||K|| + ||L|| for all K and L .
• Compatibility: ||L·x|| ≤ ||L||·||x|| for every vector x in the domain of L .
Overloaded Notation
Note that the last requirement involves three norms all written “||…||” . This notation is
heavily overloaded, obliging readers to disambiguate norms by close attention to the
linguistic type and context of each norm’s argument, just as we learned to distinguish
||wT|| from ||x|| . A Compatible Matrix norm is compatible with one or two vector norms
and consequently with their dual norms: Compatibility implies ||wT·L|| ≤ ||wT||·||L|| too.
This Compatibility requirement is trifling because it can always be met by scaling up the
norm offered for linear maps L : If µ := ( max ||L·x|| over ||L|| = ||x|| = 1 ) > 1 then
replace ||L|| by a scaled-up norm |||L||| := ||L||·µ to make it compatible.
(Then if identity I maps a space to itself but with a different norm, ||I|| can have any positive value! )
More generally, most of a second college course in linear algebra concerns the question
“What can be inferred about a linear operator 𝐋 from its matrix L given
only the natures of 𝐋 ’s domain- and target-spaces but not their bases?”
If only the spaces’ dimensions are known, only Rank(𝐋) := Rank(L) can be inferred.
If 𝐋 maps an otherwise undistinguished space to itself, only 𝐋 ’s Jordan Normal Form.
If 𝐋 maps one Euclidean space to another, only the Singular Values of 𝐋 .
If 𝐋 is a Symmetric map between a space and its dual space, only 𝐋 ’s Inertia.
( see Sylvester’s Inertia Theorem on p. 63 )
If too little is known about the domain- and target-spaces of linear operator 𝐋 , not much
about it can be inferred (numerically) from arbitrary matrix representations L of 𝐋 .
Here are three Matrix norms for m-by-n matrices L that take account only of its
elements Lij , not of the operator 𝐋 that L represents, so these norms’ theory is shallow:
• ||L||∑ := ∑i ∑j |Lij| ; • ||L||F := √( ∑i ∑j |Lij|² ) ( Frobenius ) ; • ||L||µ := n·maxi maxj |Lij| .
• Compatibilities: ||L||∑ & ||x||1 , ||L||F & ||x||2 , ||L||µ & ||x||∞ , among others.
Each of these norms possesses separately a property called Multiplicative Dominance :
||K·L|| ≤ ||K||·||L|| whenever matrices K and L can be multiplied.
This property implies Compatibility with inherited n-by-1 vector norms. Consequently
Multiplicative Dominance is so useful that we shall require it of all Matrix norms.
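Multiplicative Dominance is easy to exhibit for, say, the Frobenius norm (a random spot-check of the inequality, not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(200):
    K = rng.standard_normal((3, 4))
    L = rng.standard_normal((4, 5))
    # Multiplicative Dominance: ||K.L||_F <= ||K||_F * ||L||_F
    assert (np.linalg.norm(K @ L, 'fro')
            <= np.linalg.norm(K, 'fro') * np.linalg.norm(L, 'fro') + 1e-12)
print("Frobenius norm showed Multiplicative Dominance on this sample")
```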
Operator Norms are also called “Sup Norms” and “LUB-norms” (Least Upper Bound).
Operator Norms
• ||L|| := ( max ||L·x|| over ||x|| = 1 ) = ( max ||wT·L|| over ||wT|| = 1 )
= ( max | wT·L·x | over ||wT|| = ||x|| = 1 ) with ||…T|| dual to ||L·…||
Later we shall see how to test whether a given Matrix norm ||L|| is an Operator Norm too.
For now observe that Operator Norms possess Multiplicative Dominance in this way:
||K·L|| ≤ ||K||·||L|| if Domain(K) ⊇ Range(L) and both have the same vector norm.
And for linear maps L of a space to itself each Operator Norm satisfies ||L|| ≥ |λ| for
every eigenvalue λ of L ; and || I || = 1 .
A few Operator Norms can be computed at modest cost from the matrix L representing
L in suitably chosen bases for its domain- and target-spaces. Because there are many
uses for different norms, they must be distinguished by subscripts that clutter the notation:
Let ||L||αβ := ( max ||L·x||α/||x||β over x ≠ o ) = ( max ||wT·L||β/||wT||α over wT ≠ oT )
except we abbreviate ||L||αα to ||L||α . Most subscripts α and β will be 1, 2 or ∞ .
• ||L||∞1 = maxi maxj |Lij| = ||LT||∞1 = ||L||µ/n = ||LT||µ/m , the biggest Magnitude.
Also ||L||₂ = the biggest Eigenvalue of [ O LT ; L O ] , from the Singular Value Decomposition.
And ||L||₂ ≤ √(||L||₁·||L||∞) . Do you see why?
||L||12 , ||L||21 , ||L||1∞ and ||L||2∞ are little used because their computations cost too much.
If dimensions are not too big, then, just as any vector norm ||…|| can be approximated by
||…||1, ||…||2 or ||…||∞ perhaps after a change of basis, so can any Operator Norm …
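The three cheaply computed operator norms, and the bound ||L||₂ ≤ √(||L||₁·||L||∞) just asked about, can be sketched so (numpy’s `norm` computes all three; random data):

```python
import numpy as np

rng = np.random.default_rng(3)
L = rng.standard_normal((4, 6))

norm1    = np.linalg.norm(L, 1)       # biggest column sum = max ||L x||_1 / ||x||_1
norm_inf = np.linalg.norm(L, np.inf)  # biggest row sum    = max ||L x||_inf / ||x||_inf
norm2    = np.linalg.norm(L, 2)       # biggest singular value of L

# Why ||L||_2 <= sqrt(||L||_1 * ||L||_inf):  ||L||_2^2 = spectral radius of
# L^T.L <= ||L^T.L||_1 <= ||L^T||_1 * ||L||_1 = ||L||_inf * ||L||_1 .
print(norm2, np.sqrt(norm1 * norm_inf))
```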
There are analogous ratio-maxima for m-by-n Matrix norms; for instance
µF∑ := maxL ≠ O ||L||F/||L||∑ = 1 ; µ∑F = √m·n ; µµF = µµ∑ = n ; µFµ = √m/n ; µ∑µ = m .
Operator Norms inherit their ratio-maxima from their associated vector norms:
Recalling that ||L||αβ := maxx ≠ o ||L·x||α/||x||β , set µαβγδ := maxL ≠ O ||L||αβ/||L||γδ .
Then it so happens that µαβγδ = µαγ·µδβ .
Caution: If L is not square, then ratios µ… must take different dimensions into account.
These ratio-maxima µ… reveal how well or badly one norm can approximate another.
For example, given two diagonal matrices Vt and Vd with wildly diverse magnitudes on
their diagonals, but such that the nonzero elements of Vt·K·Vd span a modest range of
magnitudes (such a K is called a “Graded” matrix), then variations ∆K in K may be
gauged more equitably by norm |||∆K||| := ||Vt·∆K·Vd|| than by a familiar ||∆K|| .
If ||…|| is an Operator Norm, so is |||…||| , but derived from vector norms after basis
changes tantamount to scaling the vectors: |||x||| := ||Vd–1·x|| for x in the domain-space;
|||y||| := ||Vt·y|| for y in the target-space of ∆K whose |||∆K||| = maxx≠o |||∆K·x|||/|||x||| .
Another example: Compute a norm ⟦∆K⟧ := ||∆K̃|| from the elements ∆K̃ij := ∆Kij/Ωij
scaled by a given array of positive Weights 1/Ωij . When ||…|| = ||…||∞1 , the biggest
magnitude, ⟦…⟧ is called an “Elementwise” norm, especially if every Ωij := |Kij| > 0 .
To simplify the exploration that follows, suppose a given Matrix norm ⟦…⟧ for square
n-by-n matrices possesses Positivity, Homogeneity and the Triangle Inequality but, for
lack of defined vector norms, perhaps not Compatibility.
E.g. ⟦L⟧ = ||L||F ⇒ ||L|| = ||L||₂ ; ⟦L⟧ = ||L||µ ⇒ ||L|| = ||L||∞ ; ⟦L⟧ = ||L||∑ ⇒ ||L|| = ||L||₁ .
Given a Matrix norm ||L|| for m-by-n matrices L , the natural norm for their dual-
space of n-by-m matrices KT is ||KT|| := maxL≠O Trace(KT·L)/||L|| .
This places a severe strain upon our subscripting notation. For example …
Dual to the norm ||L||∞ = maxi ∑j |Lij| is ||KT||(∞) := ∑i maxj |Kij| ≥ ||KT||∞ = ||K||1 .
The strain is less severe for our orthogonally invariant norms; for example …
Part III:
Matrix Norms and Matrix Inverses
2. Among Matrix norms, the Operator Norms ||L|| := maxx≠o ||L⋅x||/||x|| have
the most useful properties ... multiplicative ..., minimality ... .
Gaussian Elimination is affected by equilibration only through its effect upon the order of
pivot selection. This effect may be thwarted if column exchanges are disallowed, and
then computed results can be undeservedly grossly inaccurate, remediable only by
Iterative Refinement. See striking examples posted on my <…/Math128/FailMode.pdf>
Iterative Refinement
attenuates ill effects of Ill-Condition or BIG dimensions or ... ?
Given F and c , let G stand for operations performed to solve “ F·z = c ” for z ;
e.g., triangular factorization of F , etc., or Conjugate Gradient iteration, ... .
Computation yields instead x := G·c ≈ z ... roundoff accumulates, iteration stops, ...
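One refinement step is cheap to sketch (all data hypothetical; here G solves with a deliberately perturbed copy of F , standing in for a rounded factorization, and residuals would ideally be accumulated in extra precision):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
F = rng.standard_normal((n, n)) + 10*np.eye(n)   # hypothetical, well conditioned
c = rng.standard_normal(n)
z = np.linalg.solve(F, c)                        # the target, for comparison only

# G: an *approximate* solver -- a factorization of a slightly wrong F.
F_rounded = np.round(F, 2)                       # stands in for rounded arithmetic
G = lambda rhs: np.linalg.solve(F_rounded, rhs)

x = G(c)                                         # first computed solution
for _ in range(5):                               # iterative refinement:
    r = c - F @ x                                #   residual
    x = x + G(r)                                 #   correct with the same cheap G

err0 = np.linalg.norm(G(c) - z)
err  = np.linalg.norm(x - z)
print(err0, err)                                 # refinement reduced the error
```

Each pass multiplies the error by roughly || I – G·F || , which is small whenever G is even a mediocre approximate inverse of F .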
Diagonal Dominance
Some matrices are obviously invertible.
Suppose L maps a space linearly to itself, and ||L|| < 1 . Then (I – L)⁻¹ exists
because Liouville’s series (I – L)⁻¹ = I + L + L² + L³ + L⁴ + … converges.
Often B is invertible though “ = ” replaces almost all “ > ” signs, but not always.
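A quick numerical illustration of the series when ||L|| < 1 (the ∞-norm and the matrix below are arbitrary choices):

```python
import numpy as np

L = np.array([[0.2, -0.3,  0.1],
              [0.0,  0.1,  0.4],
              [0.3,  0.0, -0.2]])
print(np.linalg.norm(L, np.inf))   # 0.6 < 1, so the series converges

# Liouville's series: (I - L)^(-1) = I + L + L^2 + L^3 + ...
S, term = np.eye(3), np.eye(3)
for _ in range(200):
    term = term @ L
    S += term

print(np.linalg.norm(S - np.linalg.inv(np.eye(3) - L), np.inf))  # ~0
```

The tail after k terms is bounded by ||L||ᵏ⁺¹/(1 – ||L||) , so convergence here is geometric with ratio 0.6.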
Schur Complements
S := Z – E·C⁻¹·D is the Schur Complement of C in B := [ C D ; E Z ] .
Without pivotal exchanges, Gaussian Elimination reduces B through
successive Schur Complements of B’s leading principal submatrices, like C ,
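One elimination pass exhibits this directly (a random example with a strong diagonal so no pivoting is needed; C is the leading 2-by-2 block):

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((5, 5)) + 5*np.eye(5)
C, D = B[:2, :2], B[:2, 2:]
E, Z = B[2:, :2], B[2:, 2:]

S = Z - E @ np.linalg.solve(C, D)    # Schur complement of C in B

# Gaussian elimination of the first two columns, no pivotal exchanges:
M = B.copy()
for k in range(2):
    for i in range(k + 1, 5):
        M[i, k:] -= (M[i, k] / M[k, k]) * M[k, k:]

print(np.linalg.norm(M[2:, 2:] - S))  # ~0: the trailing block IS S
```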
For Operator Norms ||...|| generally, the best that can be said appears to be ...
Theorem: F has at least one Generalized Inverse G with
||G|| ≤ √(Rank(F))/( min ||∆F|| over Rank(F - ∆F) < Rank(F) ) .
I hope this theorem has a short proof; mine is much too long.
Over-determined x :
[ Figure: tall F and g ]
Choose x with minimum ||x||₂ to minimize ||F·x – g||₂ .
Under-determined x :
[ Figure: wide F and g ]
Choose x with minimum ||x||₂ to satisfy F·x ≈ g .
A Generality
Often a computational task amounts to solving “ F(z) = o ” for z given F .
Errors and uncertainties due to roundoff etc. make F(z + δz) = δF instead.
Consequent uncertainty or error in z is δz ≈ F`(z)⁻¹·δF .
Uncertainty in data that specifies F is amplified in z by as much as ||F`(z)⁻¹|| .
HUGE amplification ||F`(z)⁻¹|| can occur only if DATA puts F CLOSE to a
SINGULARITY :
• A Pole (infinite value) of F`⁻¹ .
• Confluence (collapse of dimension; multiplicity of mapping F , of zeros z ).
• Exponentially-growing functions of variables unobviously near ∞ .
Common! See also my web page’s <…/WrongR.pdf>
Part IV:
Matrix Norms and Eigenvalues
Example: L := [ 0 10 0 0 0 0 ; 0 0 10 0 0 0 ; 0 0 0 10 0 0 ; 0 0 0 0 10 0 ; 0 0 0 0 0 10 ; ξ 0 0 0 0 0 ] ;
det(λ·I − L) = λ⁶ – 10⁵·ξ .
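A short computation makes the example’s point (ξ := 10⁻⁶, an arbitrary small choice): all six eigenvalues, which are 0 when ξ = 0, jump to the circle of radius (10⁵·ξ)^(1/6) ≈ 0.68, vastly larger than the perturbation ξ itself:

```python
import numpy as np

xi = 1e-6
L = np.diag([10.0]*5, k=1)   # 6-by-6: 10's on the superdiagonal, else zeros
L[5, 0] = xi                 # the tiny perturbation in the corner

lam = np.linalg.eigvals(L)
print(sorted(abs(lam)))            # all six share one magnitude ...
print((1e5 * xi) ** (1/6))         # ... namely (10^5 * xi)^(1/6), about 0.68

# A perturbation of size 1e-6 moved every eigenvalue by about 0.68:
# eigenvalues of non-normal matrices can be violently sensitive, even
# though every Operator Norm bounds each |eigenvalue| by ||L||.
```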
[ Figure: clusters of Gershgorin disks in the complex plane enclosing 3 , 2 and 2 eigenvalues respectively. ]
This theorem is useful mainly for matrices with Prominent if not Dominant diagonals.
Sometimes replacing L by V-1⋅L⋅V for a suitable matrix V , perhaps diagonal, perhaps
consisting of approximate eigenvectors, can reduce the sizes of at least some of the disks.
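Gershgorin’s theorem is a one-screen computation (a random example with a prominent diagonal; every eigenvalue must lie in at least one disk centered at a diagonal element with radius that row’s off-diagonal absolute sum):

```python
import numpy as np

rng = np.random.default_rng(6)
L = np.diag([10.0, 4.0, -3.0, 0.0]) + 0.5 * rng.standard_normal((4, 4))

centers = np.diag(L)
radii = np.sum(abs(L), axis=1) - abs(centers)   # off-diagonal row sums

for lam in np.linalg.eigvals(L):
    # each eigenvalue lies inside at least one Gershgorin disk
    assert any(abs(lam - c) <= r + 1e-12 for c, r in zip(centers, radii))
print("all eigenvalues enclosed by Gershgorin's circles")
```

Shrinking the off-diagonal part (e.g. by a diagonal similarity V⁻¹·L·V as suggested above) shrinks the radii and sharpens the enclosures.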
Appending rows and/or columns to n-by-n L cannot decrease its nth singular value.
If, instead of P > O , we have merely P ≥ O , then Perron Root ρ need not be simple; it and/or
elements of eigenvector p may vanish, and other eigenvalues of P may match ρ in magnitude.
Part V:
Matrix Norms and Real Symmetric Matrices’ Eigenvalues
The best source about eigensystems of real symmetric and Hermitian matrices
is B.N. Parlett’s book The Symmetric Eigenvalue Problem [1998] 426 pp., SIAM, Philadelphia
( Complex spaces, for which y' and H' are complex conjugate transposes, and H = H' is Hermitian
instead of real symmetric, and y'·H·x is complex conjugate to x'·H·y , will not be treated here. Besides,
the treatment of complex spaces would differ only slightly from real. )
Typically Φ(x) is some kind of Energy: Kinetic Energy if x stands for velocities or
momenta, Elastic Energy if for infinitesimal displacements from equilibrium, etc.
How does a change of basis from B to B̂ := B·C⁻¹ affect H ? cf. pp. 18-19
x changes to x̂ := C·x but w' to ŵ' := w'·C⁻¹ , and H to Ĥ := C'⁻¹·H·C⁻¹ , whence
Φ(x) = x'·H·x = x̂'·Ĥ·x̂ . ( Φ(x) need not be defined upon the vector space dual to x ’s . )
The relation between H and Ĥ := C'⁻¹·H·C⁻¹ is called a Congruence.
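These relations are cheap to verify numerically (random symmetric H and invertible C, both hypothetical): congruence preserves the values of the quadratic form, and, by Sylvester’s Inertia Theorem cited above, the signs of the eigenvalues, though not the eigenvalues themselves.

```python
import numpy as np

rng = np.random.default_rng(7)
H = rng.standard_normal((3, 3)); H = H + H.T    # real symmetric
C = rng.standard_normal((3, 3)) + 2*np.eye(3)   # an invertible change of basis

Ci = np.linalg.inv(C)
H_hat = Ci.T @ H @ Ci                           # the Congruence C'^-1 . H . C^-1

x = rng.standard_normal(3)
x_hat = C @ x                                   # new column of the same vector
print(x @ H @ x, x_hat @ H_hat @ x_hat)         # the same value of Phi

# Inertia (counts of +, 0, - eigenvalues) is shared; eigenvalues generally not:
print(np.sign(np.linalg.eigvalsh(H)), np.sign(np.linalg.eigvalsh(H_hat)))
```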
Each stationary value of ρ(x) and its stationary point x turn out to satisfy
H·x = ρ(x)·M·x and M·x ≠ o .
Hence this ρ and x are solutions of a Symmetric Generalized Eigenproblem.
If M–1 exists, each such ρ is an eigenvalue of B := H·M–1 . However, not all
eigenvalues ß of B need be stationary values of ρ(x) , not even if all are real.
e.g., H := [ 2 0 ; 0 0 ] , M := [ 0 1 ; 1 0 ] , x := [ ξ ; η ] , ρ(x) = ξ²/(ξη) ; B = H·M–1 = [ 0 2 ; 0 0 ] , ß = 0 . No stationary ρ .
Every real square matrix can be written as B = H·M–1 for some symmetric H = H' and M = M' .
Therefore, without further restrictions, Symmetric Generalized Eigenproblems and
unrestricted nonsymmetric eigenproblems fall prey to the same pathologies; cf. pp. 51-56.
Consequently we shall impose further restrictions upon Φ and Ψ in what follows. …
Suppose some linear combination, say η·H + µ·M , of real symmetric H and M
is positive definite, so some congruence diagonalizes H and M simultaneously.
How then may such a congruence C–1 be found?
Choose a second linear combination, say α·H – β·M , independent of the first
so α·µ + β·η ≠ 0 . A good choice would have ||α·H – β·M|| << || |α·H| + |β·M| ||
by virtue of substantial cancellation, but this may not be feasible. Thus a given
Symmetric Generalized Eigenproblem “ H·x = ρ·M·x ” is converted into a
Definite Symmetric Generalized Eigenproblem “ H̲·y = ρ̲·M̲·y ” in which
H̲ := α·H – β·M = H̲' and M̲ := η·H + µ·M = M̲' is positive definite.
Their eigenvalues are related: ρ̲ = (α·ρ – β)/(η·ρ + µ) ; ρ = (β + µ·ρ̲)/(α – η·ρ̲) .
A way to compute them and a desired congruence: Cholesky factorize M̲ = U'·U
and compute an eigendecomposition of W := U'–1·H̲·U–1 = W' = Q·Ω·Q' with
an orthogonal Q' = Q–1 and diagonal Ω of eigenvalues ρ̲ . Now C–1 := U–1·Q .
Thus does MATLAB’s eig . Since such eigenproblems can be pathological, their error-analysis isn’t
yet tidy enough for a succinct and memorable overview. See instead Ren-Cang Li’s §15.4 in L.
Hogben’s Handbook … cited under Further Reading, and my <…/Math128/GnSymEig.pdf> . No
comparable numerical scheme is known to find an Indefinite Symmetric Generalized Eigenproblem’s
congruence when it exists. What follows concerns problems that are easier and better understood. …
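The Cholesky-based recipe above can be sketched in Python for a 2-by-2 definite pencil. The matrices H and M below are hypothetical illustrations, the 2-by-2 symmetric eigendecomposition uses a single Jacobi rotation, and no claim is made that this matches MATLAB's eig internally:

```python
# Solve H*x = rho*M*x (M positive definite) via Cholesky M = U'*U,
# W := U'^-1 * H * U^-1 = Q*Omega*Q', congruence C^-1 := U^-1 * Q.
from math import sqrt, atan2, cos, sin

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cholesky2(M):
    # upper-triangular U with M = U'*U, for 2-by-2 positive definite M
    u11 = sqrt(M[0][0])
    u12 = M[0][1]/u11
    u22 = sqrt(M[1][1] - u12*u12)
    return [[u11, u12], [0.0, u22]]

def inv_upper2(U):
    # inverse of a 2-by-2 upper-triangular matrix
    a, b, d = U[0][0], U[0][1], U[1][1]
    return [[1.0/a, -b/(a*d)], [0.0, 1.0/d]]

def sym_eig2(W):
    # one Jacobi rotation diagonalizes symmetric 2-by-2 W = Q*diag(rho)*Q'
    t = 0.5*atan2(2.0*W[0][1], W[0][0] - W[1][1])
    c, s = cos(t), sin(t)
    Q = [[c, -s], [s, c]]
    D = matmul([[c, s], [-s, c]], matmul(W, Q))   # Q'*W*Q is diagonal
    return Q, [D[0][0], D[1][1]]

def def_sym_gen_eig(H, M):
    U = cholesky2(M)
    Ui = inv_upper2(U)
    UiT = [[Ui[0][0], 0.0], [Ui[0][1], Ui[1][1]]]  # (U')^-1 = (U^-1)'
    W = matmul(UiT, matmul(H, Ui))                  # W := U'^-1 * H * U^-1
    Q, rhos = sym_eig2(W)
    Cinv = matmul(Ui, Q)   # Cinv'*H*Cinv is diagonal, Cinv'*M*Cinv = I
    return rhos, Cinv

H = [[2.0, 1.0], [1.0, 3.0]]   # hypothetical symmetric H
M = [[2.0, 0.0], [0.0, 1.0]]   # hypothetical positive definite M
rhos, Cinv = def_sym_gen_eig(H, M)
# each column c of Cinv satisfies H*c = rho*M*c
for j, rho in enumerate(rhos):
    c = [Cinv[0][j], Cinv[1][j]]
    for i in range(2):
        Hc = H[i][0]*c[0] + H[i][1]*c[1]
        Mc = M[i][0]*c[0] + M[i][1]*c[1]
        assert abs(Hc - rho*Mc) < 1e-9
```

Because the congruence C–1 := U–1·Q reduces M to the identity, the generalized eigenvalues are exactly the ordinary eigenvalues of W .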
Especially when dimension n is big, occasions arise to compute only those eigenvalues
of H in some interval separated from the rest of E (H) by a Gap or two. To this end,
suppose n-by-m matrix F has m < n linearly independent columns approximating
(perhaps poorly) m eigenvectors of H , and suppose m-by-m real symmetric matrix M
(not necessarily diagonal) has a spectrum E (M) = {µ₁ ≥ µ₂ ≥ … ≥ µₘ} thought to
approximate part of E (H) . Let Residual R := H·F – F·M , which is computable, and let
β := ||F⁺||₂·||R||₂ in which F⁺ is the pseudoinverse of F ; for ||F⁺||₂ see p. 44.
Eigenvectors of H are orthogonal. If their estimates in F are too far from orthonormal,
||F⁺||₂ may be excessively big. A remedy for this is Reorthogonalization: …
Reorthogonalization:
One way replaces F by Q from the QR factorization F = Q·U with upper-triangular U
and Q'·Q = I . Other ways, one sometimes faster, one closer to F , are explored in
<www.eecs.berkeley.edu/~wkahan/Math128/NearestQ.pdf> .
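The QR way can be sketched with Modified Gram-Schmidt; the matrix F below is a hypothetical trio of far-from-orthonormal column estimates, not an example from the text:

```python
# Replace F by Q from F = Q*U with Q'*Q = I and U upper triangular,
# via Modified Gram-Schmidt.
from math import sqrt

def mgs_qr(F):
    n, m = len(F), len(F[0])
    V = [row[:] for row in F]          # working copy of the columns
    Q = [[0.0]*m for _ in range(n)]
    U = [[0.0]*m for _ in range(m)]
    for j in range(m):
        U[j][j] = sqrt(sum(V[i][j]**2 for i in range(n)))
        for i in range(n):
            Q[i][j] = V[i][j]/U[j][j]  # normalize column j
        for k in range(j+1, m):        # purge column j from later columns
            U[j][k] = sum(Q[i][j]*V[i][k] for i in range(n))
            for i in range(n):
                V[i][k] -= Q[i][j]*U[j][k]
    return Q, U

# two nearly parallel (far-from-orthonormal) eigenvector estimates
F = [[1.0, 1.0], [1e-4, 0.0], [0.0, 1e-4]]
Q, U = mgs_qr(F)
```

Afterwards Q spans the same column space as F while keeping ||Q⁺||₂ = ||Q||₂ = 1 (up to roundoff).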
After a (closely) orthonormal n-by-m matrix Q replaces F , the new residual becomes
R := H·Q – Q·M , and β := ||R||₂ . Then, as before, the m eigenvalues µᵢ of E (M)
estimate m of the n eigenvalues θⱼ in E (H) thus:
Among the n eigenvalues θⱼ in E (H) there are m each of which lies in a different
(possibly overlapping) interval wherein | µᵢ – θ | ≤ β for i = 1, 2, 3, …, m .
Now R can have its norm β := ||R||₂ minimized by the choice M := Q'·H·Q . Then those
m eigenvalues θⱼ fall into much narrower intervals when β is much tinier than the Spectral
Gaps between the rest of E (H) and those m or their estimates µᵢ . To describe these
Spectral Gaps we perform a (notional, not necessarily computed) change of coordinates:
Let n-by-n orthogonal [Q, Q̄] be obtained from any n–m orthonormal columns Q̄
orthogonal to Q , so n-by-n [Q, Q̄]'·[Q, Q̄] = I . This [Q, Q̄] provides a new ortho-
normal coordinate system in which the linear operator formerly represented by H is now
represented by [Q, Q̄]'·H·[Q, Q̄] . Of course, spectrum E ([Q, Q̄]'·H·[Q, Q̄]) = E (H) .
For i = 1, 2, 3, …, n define the Gaps γᵢ between the spectra E (M) and E (W) , in which
W := Q̄'·H·Q̄ and {η₁ ≥ η₂ ≥ … ≥ ηₙ} lists E (M) ∪ E (W) in descending order, thus:
If ηᵢ ∈ E (M) then γᵢ := minⱼ |ηᵢ – ωⱼ| over ωⱼ ∈ E (W) ; else γᵢ := minⱼ |ηᵢ – µⱼ| over µⱼ ∈ E (M) .
Let Gap γ := minᵢ γᵢ . Usually E (M) and E (W) are disjoint, and then γ > 0 .
Then Chi-Kwong Li & Ren-Cang Li [2005] proved VERY GENERALLY that every
|θᵢ – ηᵢ| ≤ β²/( γᵢ/2 + √( β² + γᵢ²/4 ) ) ≤ β²/( γ/2 + √( β² + γ²/4 ) ) ≤ min{ β , β²/γ } .
When β << γ these inequalities estimate that part of E (H) approximated by E (M) far
more tightly than β because somehow the rest of E (H) is known to be farther away.
SOMEHOW ?
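The Li & Li bound can be checked on a minimal 2-by-2 example with hypothetical numbers: take m = 1 and Q the first coordinate vector, so M = [h₁₁] , W = [h₂₂] , β = |h₂₁| and γ = |h₁₁ – h₂₂| .

```python
# 2-by-2 check of the quadratic residual bound
#   |theta_i - eta_i| <= beta^2/(gamma/2 + sqrt(beta^2 + gamma^2/4)).
from math import sqrt

a, b, c = 2.0, 0.1, 0.0           # H = [[a, b], [b, c]]
beta, gamma = abs(b), abs(a - c)  # residual norm and spectral gap

# eigenvalues of H, sorted to pair with eta = sorted (a, c)
mid, rad = (a + c)/2, sqrt(((a - c)/2)**2 + b**2)
thetas = sorted([mid + rad, mid - rad], reverse=True)
etas = sorted([a, c], reverse=True)

bound = beta**2/(gamma/2 + sqrt(beta**2 + gamma**2/4))
for theta, eta in zip(thetas, etas):
    assert abs(theta - eta) <= bound + 1e-12
# the chain of inequalities ends with min{beta, beta^2/gamma}
assert bound <= min(beta, beta**2/gamma) + 1e-12
```

In this 2-by-2 case the first inequality is attained exactly, so the bound cannot be improved in general.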
Error-bounds on the three previous pages bound Absolute Errors θᵢ – ηᵢ . What about
Relative Errors log(θᵢ/ηᵢ) ?
Again, n-by-m matrix Q has orthonormal columns, M := Q'·H·Q , R := H·Q – Q·M .
Now, provided ||R·M–1||₂ < ψ < 1 , the m eigenvalues µᵢ in E (M) estimate m of the
n eigenvalues θⱼ in E (H) thus: cf. p. 70
Among the n eigenvalues θⱼ in E (H) there are m each of which lies in a different
(possibly overlapping) interval wherein | log(θ/µᵢ) | < ψ for i = 1, 2, 3, …, m .
These bounds are advantageous only when ||R·M–1|| is a lot smaller than ||R||·||M–1|| .
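A similarly minimal 2-by-2 sketch, again with hypothetical numbers, illustrates the relative-error bound: with Q the first coordinate vector, M = [h₁₁] , R = (0, h₂₁)' and ||R·M–1||₂ = |h₂₁/h₁₁| .

```python
# Check that some eigenvalue of H lies within relative distance psi of mu
# whenever ||R*M^-1||_2 < psi < 1 (2-by-2, m = 1, Q = first coordinate vector).
from math import sqrt, log

a, b, c = 2.0, 0.1, 1.0    # H = [[a, b], [b, c]]
mu = a                     # E(M) = {h11} = {a}
psi = abs(b/a) + 1e-6      # any psi with ||R*M^-1||_2 = |b/a| < psi < 1

mid, rad = (a + c)/2, sqrt(((a - c)/2)**2 + b**2)
thetas = [mid + rad, mid - rad]   # E(H)
# some positive theta in E(H) satisfies |log(theta/mu)| < psi
assert min(abs(log(t/mu)) for t in thetas if t > 0) < psi
```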
Z comes from S via Deflation. Every Absolute Error |σⱼ – ζⱼ| ≤ ||E|| .
What about Relative Errors log(σⱼ/ζⱼ) ?
If either ||D–1·E|| < 2ψ < 1 or ||E·F–1|| < 2ψ < 1 then every | log(σⱼ/ζⱼ) | < ψ .
This bound persists, with 0/0 := 1 , even if some ζⱼ = 0 , so long as either ||D–1·E|| or ||E·F–1|| exists.
Example:
Let n-by-n S := [ D e ; o' f ] be upper bidiagonal with diagonal entries (1, 1, …, 1, f)
and superdiagonal entries (s, s, …, s, e) , so the pair ( s above 1 ) is missing from only
the first and last columns, and s > f >> 1 > e > 0 .
σ₁ ≈ s + 1 >> σₙ ≈ (s² – 1)/√(s²ⁿ – n·s² + n – 1) for s > 3 and n > 3 .
Deleting e causes relative error < ψ if e < 2ψ·f , though this e can exceed σₙ hugely.
For relative error-bounds improved substantially by a knowledge of gaps like γ , see Kahan [2012"].
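A tiny 2-by-2 instance of such a deflation, with hypothetical numbers e = 0.01 and f = 0.1 , checks the relative-error claim: here S = [[1, e], [0, f]] , deleting e gives Z = diag(1, f) , and ||E·F–1|| = e/f .

```python
# Deflation S -> Z = diag(1, f) by deleting e; the bound promises
# |log(sigma_j/zeta_j)| < psi whenever ||E*F^-1|| = e/f < 2*psi < 1.
from math import sqrt, log

e, f = 0.01, 0.1
psi = (e/f)/2 + 1e-6               # so e/f < 2*psi < 1

# singular values of S = [[1, e], [0, f]] via eigenvalues of S'*S
tr, det = 1 + e*e + f*f, f*f       # trace and det of S'*S; det(S'S) = det(S)^2
lam1 = (tr + sqrt(tr*tr - 4*det))/2
sigmas = sorted([sqrt(lam1), sqrt(det/lam1)], reverse=True)
zetas = [1.0, f]                   # singular values of Z, sorted likewise

for sig, zet in zip(sigmas, zetas):
    assert abs(log(sig/zet)) < psi
```

Here e = 0.01 exceeds nothing dramatic at n = 2 , but for larger n the same deletion is harmless relatively even when e dwarfs σₙ .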
Miscellany:
• Elaborations of Perturbation Theory: Ren-Cang Li’s §15 in Hogben [2007]
• Compute Eigenvalues to High Relative Accuracy: Z. Drmac’s §46 in Hogben
… of Graded Real Symmetric Matrices, Positive Definite Matrices (only ?)
… of Specially Structured Matrices: see his citations of P. Koev et al.
Another such example: pp. 12-13 of <…/HilbMats.pdf>
For a treatment of finite-dimensional linear algebra that prepares the diligent reader for
infinite dimensions, try Finite-Dimensional Linear Analysis, a Systematic Presentation
in Problem Form by I.M. Glazman & Ju.I. Ljubic, translated and edited by G.P. Barker
& G. Kuerti [1974] MIT Press. This huge text’s exposition consists of about 1500
problems with hints but none accompanied by a full solution. You must do it yourself.
Normed Linear Spaces 3rd ed. by M.M. Day [1973], Springer-Verlag, is a short (211
pages) brutally compressed overview of the situation in infinite-dimensional spaces. The
last chapter is a nine-page reader’s guide to the literature up to 1972.
For more about unitarily invariant Cross-Norms and Symmetric Gauge Functions
applicable to linear operators upon Hilbert spaces, see ch. V of Norm Ideals of
Completely Continuous Operators by R. Schatten [1960], Springer-Verlag.
B.N. Parlett's book The Symmetric Eigenvalue Problem [1998] 426 pp., SIAM,
Philadelphia, is the best source about the properties and computations of eigensystems of
real symmetric and Hermitian matrices.
Citations & Further Reading continues …
N.J. Higham [2002] Accuracy & Stability of Numerical Algorithms 2nd ed., ~700 pp.,
SIAM, Philadelphia, is the best text treating error-analysis of roundoff.
Often better than bounds upon norms of errors are elementwise bounds mentioned herein
at the bottom of p. 30. For more about them see Higham’s “A survey of componentwise
perturbation theory in numerical linear algebra” pp. 49-77 in …
W. Gautschi (ed.) Mathematics of Computation 1943-1993, A half century of
computational mathematics; Proc. of the Math. of Comp. 50th Anniv. Symposium,
9 - 13 Aug. 1993 in Vancouver B.C., American Math Soc.
Chandler Davis & W.M. Kahan [1969] “Some New Bounds on Perturbations of
Subspaces” pp. 863-868 in Bulletin Amer. Math. Soc. 75 #4. This describes bounds upon
angles between subspaces in a readable way, far more so than the most often cited …
Davis & Kahan [1970] “The Rotation of Eigenvectors by a Perturbation. III” pp. 1-46 in
SIAM J. Numer. Anal. 7 #1. Used herein on p. 74.
Citations & Further Reading continues …
Epilogue
I learned what little I know about norms etc. over half a century ago, and later turned in
a different direction. Consult appropriate Math. Dept. professors for this century’s
understandings of normed and more general metric spaces.
Still, when a decision is needed, it often amounts to distilling one number out of many:
Pass a test? Accept a candidate? Choose a purchase? Launch an enterprise? …
Such unavoidable decisions must occasionally be mistaken or, less often, very lucky.