Numerical Linear Algebra with Applications, Vol. 4(6), 459–468 (1997)
On Zero Locations of Predictor Polynomials
Fermin S. V. Bazán∗ and Licio H. Bezerra†
Departamento de Matemática, Universidade Federal de Santa Catarina, Florianópolis, SC 88040-900, Brazil
Predictor polynomials are often used in linear prediction methods mainly for extracting properties of physical
systems which are described by time series. The aforementioned properties are associated with a few zeros of
large polynomials and for this reason the zero locations of those polynomials must be analyzed. We present
a linear algebra approach for determining the zero locations of predictor polynomials, which enables us to
generalize some early results obtained by Kumaresan in the signal analysis field. We also present an analysis
of zero locations for time series having multiple zeros. © 1997 by John Wiley & Sons, Ltd.
KEY WORDS: predictor polynomials; eigenvalues; companion matrices; time series
1. Introduction
Linear prediction is a technique widely used in developing parametric models that represent
dynamic systems from discrete time signals, also known as time series. Once a successful model is obtained, it can be used, among other applications, for prediction and forecasting. An excellent survey of the relevance and applications of linear prediction can be found in Makhoul [12] (see also [6, 14]). The model most commonly used
for representing a large class of dynamic systems (and which will also be used here) is that
described by the impulse response of the system, which is either a real or complex-valued
∗ Correspondence to F. S. V. Bazán, Departamento de Matemática, Universidade Federal de Santa Catarina, Florianópolis, SC 88040-900, Brazil. E-mail: fermin@mtm.ufsc.br.
† E-mail: licio@mtm.ufsc.br.
Contract grant sponsor: CNPq, Brazil; contract grant number: 300487/94-0 (NV); contract grant sponsor:
Fundação CAPES, Brazil; contract grant number: BEX 3119/95-5.
Received 12 December 1996
function composed of a weighted sum of n damped/undamped complex exponentials. Thus,
the time series under analysis is
$$h_k = \sum_{j=1}^{n} r_j\, e^{s_j k \Delta t}, \qquad k = 0, 1, \ldots \tag{1.1}$$
where $r_j$ and $s_j$, with $s_i \neq s_j$ for $i \neq j$, are the parameters that hypothetically govern the system, $\Delta t$ is the sampling interval and $n$ is the order of the system. The $s_j$ are called poles and are constants which characterize the normal modes of the system, while the $r_j$, called residues,
describe how much each mode participates in the time series. A predictor polynomial then
predicts future samples of the signal by using past sample values. More precisely, a predictor
polynomial of degree $N$
$$P(t) = c_0 + c_1 t + \cdots + c_{N-1} t^{N-1} - t^N, \qquad N \geq n \tag{1.2}$$
predicts $h_{j+N}$ from knowledge of the $N$ preceding samples $\{h_j, h_{j+1}, \ldots, h_{j+N-1}\}$, according to the rule
$$c_0 h_j + c_1 h_{j+1} + \cdots + c_{N-1} h_{j+N-1} = h_{j+N}, \qquad j \geq 0 \tag{1.3}$$
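As a quick numerical illustration of (1.1) and (1.3), the following minimal sketch (not part of the original paper; the pole and residue values are arbitrary) builds a signal with $n = 2$ poles and checks that the predictor of minimal degree $N = n$, whose zeros are exactly $z_1, z_2$, reproduces every sample:

```python
import numpy as np

# Signal (1.1) with n = 2 poles: h_k = r1*z1^k + r2*z2^k, with z_j = exp(s_j*dt)
dt = 0.1
s = np.array([-0.5 + 6.0j, -1.0 + 15.0j])    # hypothetical poles s_j
r = np.array([1.0 + 0.5j, 2.0 - 1.0j])       # hypothetical residues r_j
z = np.exp(s * dt)                            # signal zeros z_j
k = np.arange(20)
h = (r[None, :] * z[None, :] ** k[:, None]).sum(axis=1)

# For N = n = 2 the predictor polynomial has z1, z2 as its zeros:
# P(t) = c0 + c1*t - t^2  with  t^2 - c1*t - c0 = (t - z1)(t - z2)
c1 = z[0] + z[1]
c0 = -z[0] * z[1]

# Prediction rule (1.3): c0*h_j + c1*h_{j+1} = h_{j+2}
predicted = c0 * h[:-2] + c1 * h[1:-1]
print(np.max(np.abs(predicted - h[2:])))      # ~1e-15: exact up to round-off
```

For $N > n$ the same recursion is satisfied by any polynomial having $z_1, \ldots, z_n$ among its zeros, which is the origin of the extraneous zeros discussed below.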
An interesting fact about predictor polynomials is that the system poles $s_j$ can be extracted from their zeros. Once these non-linear parameters are found, the task of determining the $r_j$ is simpler. In the linear prediction approach, then, the problem of constructing a parametric
model for a given signal is reduced to that of estimating the coefficients $c_i$. In practical experimental situations, however, although the $s_j$'s are related to $n$ zeros of the polynomial (the signal zeros), there are $N - n$ zeros (the extraneous zeros) without physical meaning that arise as a consequence of using a polynomial of order larger than necessary, because the system order, $n$, is not known in advance. For this reason, and mainly to establish criteria that enable us to discriminate the signal zeros from the extraneous ones, an analysis of the zero locations of predictor polynomials seems to be indispensable. With respect to this,
two approaches are known. The first, introduced by Kumaresan [11], uses properties of
prediction-error filters and is well understood in the signal analysis field, while the other, introduced by Cybenko [4], employs both a classical theorem of Fejér and some properties of orthogonal polynomials on the unit circle [5].
The goal of this work is to present a simpler approach to the problem, based mainly on linear algebra concepts. For this purpose, the concept of a predictor matrix is introduced, and the zero locations of predictor polynomials are then analyzed through the eigenvalues of their associated companion predictor matrices. In addition, the analysis is extended to the zero locations of predictor polynomials corresponding to time series containing multiple poles.
The paper is organized as follows: in Section 2 some basic results are presented and
the notation is introduced. Section 3 extends the concept of predictor polynomial to the
concept of predictor matrix; some theorems which explain much about the eigenvalue locations of companion predictor matrices are presented. The paper ends with an extension of the results to time series which contain multiple poles.
2. Preliminary Results and Notation
Let $H(l)$ be the $M \times N$ Hankel matrix indexed by an integer $l \geq 0$, whose $i$th column vector is $\hat{h}_{l+i-1} = [h_{l+i-1}\ h_{l+i}\ \cdots\ h_{l+M+i-2}]^T$:
$$H(l) = [\hat{h}_l\ \hat{h}_{l+1}\ \cdots\ \hat{h}_{l+N-1}] = \begin{bmatrix} h_l & h_{l+1} & \cdots & h_{l+N-1}\\ h_{l+1} & h_{l+2} & \cdots & h_{l+N}\\ \vdots & \vdots & & \vdots\\ h_{l+M-1} & h_{l+M} & \cdots & h_{l+M+N-2} \end{bmatrix}_{M \times N} \tag{2.1}$$
In relation to $H(l)$, it is easy to see that
$$H(l) = V Z^l R W \tag{2.2}$$
where $Z = \mathrm{diag}(z_1, \ldots, z_n)$, in which $z_j = e^{s_j \Delta t}$, $j = 1, 2, \ldots, n$, $R = \mathrm{diag}(r_1, \ldots, r_n)$, and $V = V(z_1, z_2, \ldots, z_n)$ is the $M \times n$ Vandermonde matrix described below
$$V = \begin{bmatrix} 1 & 1 & \cdots & 1\\ z_1 & z_2 & \cdots & z_n\\ \vdots & \vdots & & \vdots\\ z_1^{M-1} & z_2^{M-1} & \cdots & z_n^{M-1} \end{bmatrix}_{M \times n} \tag{2.3}$$
and $W$ is the transpose of the submatrix of $V$ formed by taking its first $N$ rows. A consequence of decomposition (2.2) is that, for all $l \geq 0$, $\mathrm{rank}(H(l)) = n$ whenever $M \geq N \geq n$ and $s_i \neq s_j$, $i \neq j$ [1]. It also follows that the column space of $H(l)$, denoted by $\mathcal{R}(H(l))$, is spanned by the column vectors of the matrix $V$, and that the null space of $H(l)$, denoted by $\mathcal{N}(H(l))$, is the same as that of $W$. The pseudo-inverse of a matrix $A$ of order $M \times N$ is defined as the unique matrix $A^{\dagger}$ of order $N \times M$ satisfying the conditions: (i) $A A^{\dagger} A = A$; (ii) $A^{\dagger} A A^{\dagger} = A^{\dagger}$; (iii) $(A^{\dagger} A)^H = A^{\dagger} A$; and (iv) $(A A^{\dagger})^H = A A^{\dagger}$; where the superscript $H$ denotes conjugate transposition. If $A$ has the full-rank factorization $A = BC$, where $\mathrm{rank}(A) = \mathrm{rank}(B) = \mathrm{rank}(C)$, then
$$A^{\dagger} = C^{\dagger} B^{\dagger}, \quad \text{with} \quad B^{\dagger} = (B^H B)^{-1} B^H, \quad C^{\dagger} = C^H (C C^H)^{-1} \tag{2.4}$$
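The following brief sketch (not from the paper; sizes and the random factors are arbitrary) checks formula (2.4) against a general-purpose pseudo-inverse routine for a rank-$n$ matrix given through a full-rank factorization:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, n = 8, 6, 3                                    # illustrative sizes, M >= N >= n

# A rank-n matrix through an explicit full-rank factorization A = B C
B = rng.standard_normal((M, n)) + 1j * rng.standard_normal((M, n))   # M x n, full column rank
C = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))   # n x N, full row rank
A = B @ C

# Pseudo-inverses of the factors, as in (2.4)
B_pinv = np.linalg.solve(B.conj().T @ B, B.conj().T)        # (B^H B)^{-1} B^H
C_pinv = C.conj().T @ np.linalg.inv(C @ C.conj().T)         # C^H (C C^H)^{-1}
A_pinv = C_pinv @ B_pinv

print(np.allclose(A_pinv, np.linalg.pinv(A)))               # True
```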
Further properties of pseudo-inverses can be found in Björck [3] and Stewart and Sun [13]. The spectrum of a matrix $A$ will be denoted by $\lambda(A)$. In the next section we present a theorem for which the following two lemmas will be required.
Lemma 2.1. Let $A \in \mathbb{C}^{M \times N}$ and $B \in \mathbb{C}^{N \times M}$, $M \geq N$, where $\mathbb{C}^{M \times N}$ denotes the set of all complex $M \times N$ matrices. Then $\lambda(AB) = \lambda(BA) \cup \{0\}$.
A proof for this lemma can be found in Horn and Johnson [9].
Lemma 2.2. Let $A \in \mathbb{C}^{M \times M}$ be a Hermitian matrix whose $i$th column vector is $a_i$; that is, $A = [a_1\ a_2\ \cdots\ a_M]$. Let $A^{\uparrow} = [a_2\ a_3\ \cdots\ a_M\ 0]$ and $A^{\downarrow} = [0\ a_1\ a_2\ \cdots\ a_{M-1}]$. Then $\lambda(A^{\uparrow}) = \overline{\lambda(A^{\downarrow})}$, where the bar denotes complex conjugation.
Proof
First, write $A^{\uparrow}$ and $A^{\downarrow}$ in block triangular form
$$A^{\uparrow} = \begin{bmatrix} F & 0 \\ f^H & 0 \end{bmatrix}, \qquad A^{\downarrow} = \begin{bmatrix} 0 & b^H \\ 0 & G \end{bmatrix}$$
where $F$ and $G$ are both $(M-1) \times (M-1)$ submatrices of $A$, and $0$, $f$, $b$ are all vectors in $\mathbb{C}^{M-1}$. Since $A$ is Hermitian, by simple inspection one sees that $F = G^H$. The assertion of the lemma follows as a consequence of the block triangular structure of these matrices: $\lambda(A^{\uparrow}) = \lambda(F) \cup \{0\} = \overline{\lambda(G)} \cup \{0\} = \overline{\lambda(A^{\downarrow})}$.
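A small numerical check of Lemma 2.2 (not from the paper; the random Hermitian matrix is arbitrary) might read:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 6
X = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
A = X + X.conj().T                             # Hermitian test matrix

zero = np.zeros((M, 1), dtype=complex)
A_up = np.hstack([A[:, 1:], zero])             # [a_2 ... a_M 0]
A_down = np.hstack([zero, A[:, :-1]])          # [0 a_1 ... a_{M-1}]

eig_up = np.sort_complex(np.linalg.eigvals(A_up))
eig_down_conj = np.sort_complex(np.linalg.eigvals(A_down).conj())
print(np.allclose(eig_up, eig_down_conj))      # True: lambda(A_up) = conj(lambda(A_down))
```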
3. The Zeros of Predictor Polynomials
We start by extending the notion of predictor polynomial to the concept of predictor matrix.
Let $H(l)$, $l \geq 0$, be a Hankel matrix of order $M \times N$. We say that a matrix $\Phi$ of order $N \times N$ is a predictor matrix if and only if $H(l+1) = H(l)\Phi$ for all integers $l \geq 0$. The matrix $\Phi$ plays a role analogous to that of a predictor polynomial: it predicts the new data sample $h_{l+N}$ from knowledge of the preceding $N$ samples $\{h_l, h_{l+1}, \ldots, h_{l+N-1}\}$. We observe that there is an infinite collection of matrices $\Phi$ satisfying this definition and that the signal zeros can always be extracted from the eigenvalues of any predictor matrix. Indeed, from the equality above, using equation (2.2) and the pseudo-inverse properties expressed in (2.4), one sees that
$$H(l+1) = H(l)\Phi \iff W\Phi = ZW \tag{3.1}$$
The assertion follows after observing that the rows of $W$ are left eigenvectors of $\Phi$ corresponding to the signal zeros $z_1, \ldots, z_n$. We shall analyze the locations of the zeros of the predictor polynomial $P(t) = t^N - (c_0 + c_1 t + \cdots + c_{N-1} t^{N-1})$, $N \geq n$, by analyzing the locations of the eigenvalues of a companion predictor matrix
$$\Phi = [e_2\ e_3\ \cdots\ e_N\ c] = \begin{bmatrix} 0 & 0 & \cdots & 0 & c_0\\ 1 & 0 & \cdots & 0 & c_1\\ 0 & 1 & \cdots & 0 & c_2\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \cdots & 1 & c_{N-1} \end{bmatrix}_{N \times N} \tag{3.2}$$
whose $N$th column vector, $c$, is a solution of the system of linear equations
$$H(l)c = \hat{h}_{l+N} \iff Wc = Z^N \mathbf{1} \tag{3.3}$$
where $\mathbf{1}$ denotes the vector with all components equal to unity. It is obvious that the above system is consistent and, since $H(l)$ is rank deficient (for $N > n$) and $\hat{h}_{l+N} \in \mathcal{R}(H(l))$, that it has an infinite number of solutions. Therefore, the zeros of $P(t)$, or equivalently, the eigenvalues of the companion matrix $\Phi$, depend on how one chooses the vector $c$. Interesting work regarding zeros of polynomials can be found in [7, 9, 10]. Generalizations on predictor matrices may be found in [2].
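To make the dependence on the choice of $c$ concrete, the following sketch (not from the paper; poles, residues and matrix sizes are arbitrary) forms $H(l)$ from a signal of type (1.1), computes two different solutions of $H(l)c = \hat{h}_{l+N}$, and compares the resulting companion eigenvalues:

```python
import numpy as np

def hankel_block(h, l, M, N):
    """H(l): M x N Hankel matrix with entries h_{l+i+j-2}."""
    return h[l + np.arange(M)[:, None] + np.arange(N)[None, :]]

dt, n, M, N, l = 0.1, 2, 12, 6, 0
s = np.array([-0.3 + 7.0j, -0.8 + 18.0j])        # hypothetical poles
r = np.array([1.0 - 0.4j, 0.7 + 1.1j])           # hypothetical residues
z = np.exp(s * dt)
k = np.arange(l + M + N + 1)
h = (r * z ** k[:, None]).sum(axis=1)

H = hankel_block(h, l, M, N)
rhs = h[l + N : l + N + M]                        # the vector \hat{h}_{l+N}

# Minimum-norm solution, and a second solution shifted by a null-space vector of H
c_min = np.linalg.lstsq(H, rhs, rcond=None)[0]
null_basis = np.linalg.svd(H)[2][n:].conj().T     # basis of N(H) = N(W)
c_other = c_min + null_basis @ np.ones(N - n)

for c in (c_min, c_other):
    Phi = np.column_stack([np.eye(N, dtype=complex)[:, 1:], c])   # companion matrix (3.2)
    eigs = np.linalg.eigvals(Phi)
    dist = [np.min(np.abs(eigs - zj)) for zj in z]                # are z_1, z_2 eigenvalues?
    print(np.max(dist), np.round(np.sort(np.abs(eigs)), 3))
```

In both cases the eigenvalues nearest $z_1, z_2$ agree with the signal zeros to machine precision; only the remaining $N - n$ eigenvalues move with the choice of $c$.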
Theorem 3.1. Let $\Phi^{\uparrow}$ be a companion matrix whose $N$th column vector is the minimum norm solution of (3.3). Then $\Phi^{\uparrow}$ has $n$ of its eigenvalues located at $z_i = e^{s_i \Delta t}$, $i = 1, \ldots, n$; the others, which are called the extraneous ones, fall inside the unit circle.
Proof
As has already been stated, the values $e^{s_i \Delta t}$, $i = 1, \ldots, n$, form part of the spectrum of any predictor matrix. Therefore we need only prove the second part of the theorem. We start by observing that the right eigenvectors associated with extraneous eigenvalues belong to the null space of $W$, and that the $N$th component of these eigenvectors cannot be zero. To see this, let $\phi$ be a unit length eigenvector associated with the extraneous eigenvalue $\gamma$:
$$\Phi^{\uparrow}\phi = \gamma\phi \iff \begin{bmatrix} 0\\ \phi_1\\ \vdots\\ \phi_{N-1} \end{bmatrix} + \phi_N\, c = \gamma \begin{bmatrix} \phi_1\\ \phi_2\\ \vdots\\ \phi_N \end{bmatrix}$$
If $\gamma = 0$ there is nothing to prove; assume then $\gamma \neq 0$. By observing the above relation, one sees that $\phi_N \neq 0$, since otherwise $\phi$ would be the zero vector, which is a contradiction. Now, let $\phi^{\downarrow} = [0\ \bar{\phi}_1\ \cdots\ \bar{\phi}_{N-1}]^H$. Left multiplying the above relation by $\phi^H$ yields
$$\gamma = \phi^H \Phi^{\uparrow} \phi = \phi^H \phi^{\downarrow} + \phi_N\, \phi^H c$$
But, since $c$ is the minimum norm solution of (3.3), $c \in [\mathcal{N}(W)]^{\perp}$, and so $\phi^H c = 0$. Hence, since $\|\phi^{\downarrow}\|^2 = 1 - |\phi_N|^2 < 1$,
$$|\gamma| = |\phi^H \phi^{\downarrow}| \leq \|\phi^{\downarrow}\| < 1$$
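A numerical illustration of Theorem 3.1 (again a sketch with arbitrary, hypothetical data, not part of the paper): with the minimum-norm choice of $c$, the $N - n$ extraneous eigenvalues of $\Phi^{\uparrow}$ all have modulus strictly less than one.

```python
import numpy as np

def hankel_block(h, l, M, N):
    return h[l + np.arange(M)[:, None] + np.arange(N)[None, :]]

dt, n, M, N, l = 0.1, 3, 15, 8, 0
s = np.array([-0.2 + 5.0j, -0.6 + 12.0j, -1.0 + 21.0j])   # hypothetical poles
r = np.array([1.0, 0.5 - 0.3j, 1.2 + 0.8j])               # hypothetical residues
z = np.exp(s * dt)
k = np.arange(l + M + N + 1)
h = (r * z ** k[:, None]).sum(axis=1)

H = hankel_block(h, l, M, N)
c = np.linalg.lstsq(H, h[l + N : l + N + M], rcond=None)[0]   # minimum-norm solution of (3.3)
Phi_up = np.column_stack([np.eye(N, dtype=complex)[:, 1:], c])

eigs = np.linalg.eigvals(Phi_up)
signal = np.array([eigs[np.argmin(np.abs(eigs - zj))] for zj in z])
extraneous = np.array([e for e in eigs if np.min(np.abs(e - signal)) > 1e-6])
print(np.max(np.abs(signal - z)))      # ~1e-10: signal zeros recovered
print(np.max(np.abs(extraneous)))      # < 1: extraneous zeros inside the unit circle
```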
Linear prediction may also be carried out in the reverse direction. For this, it is sufficient
to seek an N × N matrix which enables us to come back from the state l + 1 of the system
to the state l. Thus, one may define a backward predictor matrix in the following way: we
say that an $N \times N$ matrix $\Psi$ is a backward predictor matrix if and only if, for all $l \geq 0$, $H(l) = H(l+1)\Psi$. As before, one has the following equivalence
$$H(l) = H(l+1)\Psi \iff W\Psi = Z^{-1}W \tag{3.4}$$
from which, as before, $\{z_1^{-1}, z_2^{-1}, \ldots, z_n^{-1}\}$ are eigenvalues of $\Psi$ with the rows of $W$ as associated left eigenvectors. This is an important fact that may be useful for discriminating signal zeros corresponding to exponentially damped signals. We will see later that, provided the signal zeros are extracted from the eigenvalues of a suitable companion matrix, the extraneous zeros separate from the signal zeros in a natural way: the signal zeros fall outside the unit circle while the extraneous ones lie inside. To see this, we will consider backward predictor companion matrices of the form
$$\Phi^{\downarrow} = [d\ e_1\ e_2\ \cdots\ e_{N-1}] = \begin{bmatrix} d_0 & 1 & 0 & \cdots & 0\\ d_1 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & & \vdots\\ d_{N-2} & 0 & 0 & \cdots & 1\\ d_{N-1} & 0 & 0 & \cdots & 0 \end{bmatrix}_{N \times N} \tag{3.5}$$
where the first column vector, $d$, is a solution of the system of linear equations
$$H(l+1)d = \hat{h}_l \iff Wd = Z^{-1}\mathbf{1} \tag{3.6}$$
with associated backward predictor polynomial $Q(t) = d_{N-1} + d_{N-2}t + \cdots + d_0 t^{N-1} - t^N$. We will show that, provided the vector $d$ is selected via pseudo-inversion, the extraneous eigenvalues of $\Phi^{\downarrow}$, or equivalently the extraneous zeros of $Q(t)$, are the complex conjugates of those of $\Phi^{\uparrow}$. We first prove an auxiliary result in the following lemma.
Lemma 3.1. Let $\Lambda$ be the orthogonal projection onto the subspace $[\mathcal{N}(W)]^{\perp}$, and let $\Xi = I - \Lambda$, where $I$ is the $N \times N$ identity matrix. Write $\Phi^{\downarrow} = \Sigma^{\downarrow} + \Pi^{\downarrow}$, where $\Sigma^{\downarrow} = \Lambda\Phi^{\downarrow}$ and $\Pi^{\downarrow} = \Xi\Phi^{\downarrow}$. Then the signal eigenvalues of $\Phi^{\downarrow}$ are the non-zero eigenvalues of $\Sigma^{\downarrow}$, while the non-zero extraneous ones are the non-zero eigenvalues of $\Pi^{\downarrow}$.
Proof
Observe that $\Lambda = W^{\dagger}W$. Now, by using (3.4), $\Sigma^{\downarrow} = \Lambda\Phi^{\downarrow} = W^{\dagger}Z^{-1}W$. Hence, by applying Lemma 2.1, one sees that $\lambda(\Sigma^{\downarrow}) = \lambda(W^{\dagger}Z^{-1}W) = \lambda(Z^{-1}WW^{\dagger}) \cup \{0\} = \lambda(Z^{-1}) \cup \{0\}$, and so the assertion holds. On the other hand, let $\gamma$ be an extraneous eigenvalue of $\Phi^{\downarrow}$, and let $\phi$ be an associated right eigenvector. We prove that $\phi$ and $\gamma$ are an eigenpair of $\Pi^{\downarrow}$. In fact,
$$\gamma\phi = \Phi^{\downarrow}\phi = \Sigma^{\downarrow}\phi + \Pi^{\downarrow}\phi = W^{\dagger}Z^{-1}W\phi + \Pi^{\downarrow}\phi = \Pi^{\downarrow}\phi$$
because $\phi \in \mathcal{N}(W)$.
Remark 1
A similar result can be established for $\Phi^{\uparrow}$: its signal eigenvalues are the non-zero eigenvalues of $\Sigma^{\uparrow}$, while its non-zero extraneous eigenvalues are the non-zero eigenvalues of $\Pi^{\uparrow}$, where $\Sigma^{\uparrow} = \Lambda\Phi^{\uparrow}$ and $\Pi^{\uparrow} = \Xi\Phi^{\uparrow}$.
Theorem 3.2. Let $\Phi^{\downarrow}$ be a companion matrix whose first column vector $d$ is the minimum norm solution of the system of equations (3.6). Then $\Phi^{\downarrow}$ has $n$ eigenvalues located at $z_i^{-1} = e^{-s_i \Delta t}$, $i = 1, \ldots, n$, and the extraneous ones fall inside the unit circle and are the complex conjugates of those of $\Phi^{\uparrow}$.
Proof
By Lemma 3.1, the signal eigenvalues are the non-zero eigenvalues of $\Sigma^{\downarrow}$, that is, $z_i^{-1} = e^{-s_i \Delta t}$, $i = 1, \ldots, n$, and the non-zero extraneous eigenvalues are non-zero eigenvalues of $\Pi^{\downarrow}$. Next, let $q_i$ be the $i$th column vector of $\Xi$, i.e., $\Xi = [q_1\ q_2\ \cdots\ q_N]$. From this, we have
$$\Pi^{\uparrow} = [q_2\ q_3\ \cdots\ q_N\ 0] \quad \text{and} \quad \Pi^{\downarrow} = [0\ q_1\ q_2\ \cdots\ q_{N-1}]$$
because both column vectors $c$ and $d$ belong to $[\mathcal{N}(W)]^{\perp}$. But $\Xi$ is Hermitian; therefore, the final assertion of the theorem follows as a consequence of Lemma 2.2 and Theorem 3.1.
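The conjugate relation can also be observed numerically. The sketch below (not from the paper; data are arbitrary and complex-valued) builds the forward and backward companion matrices with minimum-norm $c$ and $d$ and compares their extraneous eigenvalues:

```python
import numpy as np

def hankel_block(h, l, M, N):
    return h[l + np.arange(M)[:, None] + np.arange(N)[None, :]]

def extraneous(eigs, signal):
    """Drop, for each known signal eigenvalue, the nearest eigenvalue; return the rest."""
    eigs = list(eigs)
    for zs in signal:
        eigs.pop(int(np.argmin(np.abs(np.array(eigs) - zs))))
    return np.array(eigs)

dt, n, M, N, l = 0.1, 2, 12, 6, 0
s = np.array([-0.3 + 7.0j, -0.8 + 18.0j])            # hypothetical complex poles
r = np.array([1.0 - 0.4j, 0.7 + 1.1j])               # hypothetical residues
z = np.exp(s * dt)
k = np.arange(l + M + N + 2)
h = (r * z ** k[:, None]).sum(axis=1)

H0, H1 = hankel_block(h, l, M, N), hankel_block(h, l + 1, M, N)
c = np.linalg.lstsq(H0, h[l + N : l + N + M], rcond=None)[0]      # minimum-norm forward
d = np.linalg.lstsq(H1, h[l : l + M], rcond=None)[0]              # minimum-norm backward
Phi_up = np.column_stack([np.eye(N, dtype=complex)[:, 1:], c])    # as in (3.2)
Phi_dn = np.column_stack([d, np.eye(N, dtype=complex)[:, : N - 1]])   # as in (3.5)

ext_up = np.sort_complex(extraneous(np.linalg.eigvals(Phi_up), z))
ext_dn = np.sort_complex(extraneous(np.linalg.eigvals(Phi_dn), 1.0 / z).conj())
print(np.max(np.abs(ext_up - ext_dn)))    # small: extraneous zeros come in conjugate pairs
```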
When modeling is performed with time domain data, as is the case when one analyzes
vibrating systems in modal analysis, for example, one deals with polynomials whose coefficients are real. In this case, as a consequence of Theorem 3.2, we have
Corollary 3.1. Let $\Phi^{\uparrow}$ and $\Phi^{\downarrow}$ be the matrices introduced in the above theorems. Then, provided that real-valued time series are analyzed, the extraneous eigenvalues of $\Phi^{\uparrow}$ match exactly with those of $\Phi^{\downarrow}$.
We would like to mention that the zero locations for predictor polynomials were earlier
studied by Kumaresan [11], in the context of linear prediction-error filter polynomials for
deterministic signals. He stated that the results of Corollary 3.1 hold regardless of whether
the time series is real or complex valued. However, as we saw, this result holds only for
real-valued time series.
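For a real-valued signal the distinction disappears: a sketch along the lines of the previous one (again with hypothetical data; the poles now appear in conjugate pairs so that the series is real) shows the two extraneous sets coinciding rather than merely being conjugate.

```python
import numpy as np

def hankel_block(h, l, M, N):
    return h[l + np.arange(M)[:, None] + np.arange(N)[None, :]]

def extraneous(eigs, signal):
    eigs = list(eigs)
    for zs in signal:
        eigs.pop(int(np.argmin(np.abs(np.array(eigs) - zs))))
    return np.array(eigs)

dt, n, M, N, l = 0.1, 4, 14, 8, 0
# Real signal: two damped cosines = four complex poles in conjugate pairs
s = np.array([-0.3 + 7.0j, -0.3 - 7.0j, -0.8 + 18.0j, -0.8 - 18.0j])
r = np.array([0.5, 0.5, 1.0, 1.0])
z = np.exp(s * dt)
k = np.arange(l + M + N + 2)
h = (r * z ** k[:, None]).sum(axis=1).real               # real-valued time series

H0, H1 = hankel_block(h, l, M, N), hankel_block(h, l + 1, M, N)
c = np.linalg.lstsq(H0, h[l + N : l + N + M], rcond=None)[0]
d = np.linalg.lstsq(H1, h[l : l + M], rcond=None)[0]
Phi_up = np.column_stack([np.eye(N)[:, 1:], c])
Phi_dn = np.column_stack([d, np.eye(N)[:, : N - 1]])

ext_up = np.sort_complex(extraneous(np.linalg.eigvals(Phi_up), z))
ext_dn = np.sort_complex(extraneous(np.linalg.eigvals(Phi_dn), 1.0 / z))
print(np.max(np.abs(ext_up - ext_dn)))                   # ~1e-10: identical extraneous sets
```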
4. Extension for Time Series with Multiple Poles
Time series containing multiple poles arise very often in mechanical and electromagnetic
systems, for instance after sampling response functions [8, 15] of the type
$$h(t) = \sum_{k=1}^{K} \sum_{q=1}^{n_k} r_{k,q}\, t^{q-1} e^{s_k t} \tag{4.1}$$
We will carry out an analysis regarding the zero locations of polynomials which arise in connection with time series related to the above function. For this, we assume $s_i \neq s_j$ for $i \neq j$, $n_k \geq 1$, $r_{k,n_k} \neq 0$, for $k = 1, 2, \ldots, K$, and that the multiplicities satisfy $n_1 + \cdots + n_K = n$. With the above assumptions we shall show that the aforementioned polynomials have zeros which behave similarly to those in the simple pole case. For simplicity we first assume that the signal has a single pole of multiplicity $n$; that is, we assume a time series of the form
$$h_k = \left[r_1 + (k\Delta t)\, r_2 + (k\Delta t)^2 r_3 + \cdots + (k\Delta t)^{n-1} r_n\right] e^{s k \Delta t}, \qquad k = 0, 1, \ldots \tag{4.2}$$
The basic idea is to follow a procedure like the one developed for the simple pole case. So, we start by seeking a factorization for the Hankel matrix $H(l)$. For this, let $z = e^{s\Delta t}$, and let $\hat{h}_j$, $j = l, l+1, \ldots, l+N-1$, denote the column vectors of $H(l)$. Using (4.2), we write
$$\hat{h}_j = \begin{bmatrix} h_j\\ h_{j+1}\\ \vdots\\ h_{j+M-1} \end{bmatrix} = V P^{\,j} r \tag{4.3}$$
where $V$ is an $M \times n$ matrix defined by
$$V = \begin{bmatrix} 1 & 0 & \cdots & 0\\ z & (\Delta t)z & \cdots & (\Delta t)^{n-1} z\\ z^2 & (2\Delta t)z^2 & \cdots & (2\Delta t)^{n-1} z^2\\ \vdots & \vdots & & \vdots\\ z^{M-1} & ((M-1)\Delta t)\, z^{M-1} & \cdots & ((M-1)\Delta t)^{n-1} z^{M-1} \end{bmatrix}_{M \times n} \tag{4.4}$$
and $P$ is the $n \times n$ matrix
$$P = z\, \Delta^{-1} G\, \Delta \tag{4.5}$$
with $\Delta = \mathrm{diag}(1, \Delta t, (\Delta t)^2, \ldots, (\Delta t)^{n-1})$, and $G$ the $n \times n$ upper triangular matrix defined so that its $i,j$ entry is
$$G_{i,j} = \binom{j-1}{i-1} = \frac{(j-1)!}{(i-1)!\,(j-i)!}, \qquad \text{for } j \geq i$$
That is, the non-null elements of the $j$th column of $G$ are the binomial coefficients of an expansion of the type $(a+b)^{j-1}$. On the other hand, $r = [r_1\ r_2\ \cdots\ r_n]^T$ is the vector containing the residues. Next, let $R$ be a symmetric $n \times n$ matrix defined so that its $i,j$ entry is
$$R_{i,j} = \frac{(i+j-2)!}{(i-1)!\,(j-1)!}\, r_{i+j-1} \quad \text{for } i \leq n-j+1, \qquad R_{i,j} = 0 \quad \text{otherwise} \tag{4.6}$$
In other words, the elements of the $j$th cross-diagonal of $R$ are obtained from the non-vanishing entries of the $j$th column of $G$ times the $j$th element of the vector $r$. If $n = 4$, for example, $G$ and $R$ are
$$G = \begin{bmatrix} 1 & 1 & 1 & 1\\ 0 & 1 & 2 & 3\\ 0 & 0 & 1 & 3\\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad R = \begin{bmatrix} r_1 & r_2 & r_3 & r_4\\ r_2 & 2r_3 & 3r_4 & 0\\ r_3 & 3r_4 & 0 & 0\\ r_4 & 0 & 0 & 0 \end{bmatrix} \tag{4.7}$$
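A short sketch (not from the paper; the pole and residue values are arbitrary) that builds $\Delta$, $G$, $R$ and $P$ from the formulas above for $n = 4$, checks them against (4.7), and anticipates the identity $PR = RP^T$ used below:

```python
import numpy as np
from math import comb, factorial

n, dt, z = 4, 0.1, np.exp((-0.5 + 6.0j) * dt)      # hypothetical pole s = -0.5 + 6i
r = np.array([1.0, -0.3, 0.7, 0.4])                # hypothetical residues, r_n != 0

Delta = np.diag(dt ** np.arange(n))
G = np.array([[comb(j - 1, i - 1) if j >= i else 0 for j in range(1, n + 1)]
              for i in range(1, n + 1)], dtype=float)
R = np.array([[factorial(i + j - 2) / (factorial(i - 1) * factorial(j - 1)) * r[i + j - 2]
               if i + j <= n + 1 else 0.0 for j in range(1, n + 1)]
              for i in range(1, n + 1)])
P = z * np.linalg.inv(Delta) @ G @ Delta           # (4.5)

print(G)                                           # matches G in (4.7)
print(np.allclose(R, R.T))                         # R is symmetric, as in (4.7)
print(np.allclose(P @ R, R @ P.T))                 # P R = R P^T
```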
Using (4.3), the Hankel matrix can now be written as
$$H(l) = [\hat{h}_l\ \hat{h}_{l+1}\ \cdots\ \hat{h}_{l+N-1}] = V P^{\,l}\, [R e_1,\ P R e_1,\ \ldots,\ P^{N-1} R e_1] \tag{4.8}$$
But, by direct calculation one can verify that $P R = R P^T$ and that
$$(P^{\,i-1})^T e_1 = \begin{bmatrix} z^{i-1}\\ ((i-1)\Delta t)\, z^{i-1}\\ \vdots\\ ((i-1)\Delta t)^{n-1} z^{i-1} \end{bmatrix}, \qquad i = 1, 2, \ldots \tag{4.9}$$
thus, a factorization for $H(l)$ is
$$H(l) = V R\, (P^{\,l})^T W \tag{4.10}$$
in which $W$ is obtained by transposing the submatrix of $V$ formed by its first $N$ rows. If we assume $M \geq N \geq n$, it can be proved that $V$ is of rank $n$ (see [15]) and that $R$ is nonsingular, since we assume $r_n \neq 0$. Hence, the factorization above is a full-rank factorization of $H(l)$ and thus $\mathrm{rank}(H(l)) = n$. With this decomposition at hand, one can always compute solutions of our familiar linear prediction equation, $H(l+1) = H(l)\Phi$. Furthermore, we should emphasize that if a solution is computed for some $l$, it serves to predict exactly the whole signal, since that solution does not depend on $l$, but only on the poles:
$$H(l+1) = H(l)\Phi \iff P^T W = W\Phi \tag{4.11}$$
After replacing $P$ by its canonical Jordan decomposition, $P = Q J Q^{-1}$, one sees that $z = e^{s\Delta t}$ is an eigenvalue of $\Phi$ of multiplicity $n$, with the rows of $Q^T W$ as associated left generalized eigenvectors; that is, any predictor matrix has the eigenvalue $z = e^{s\Delta t}$ of multiplicity $n$. Hence, if the predictor matrix is a companion one, the associated predictor polynomial has a zero of multiplicity $n$, which is $z = e^{s\Delta t}$. Here again, the location in the complex plane of the extra $N - n$ zeros depends on the choice of $\Phi$. But, if one chooses the companion
matrix so that its $N$th column vector is the minimal 2-norm solution of the system of linear equations $H(l)c = \hat{h}_{l+N}$, the procedure employed in the proof of Theorem 3.1 allows us, in the current situation, to prove that $\Phi$ (or, equivalently, the associated predictor polynomial $P(t)$) has $n$ repeated eigenvalues located at $z = e^{s\Delta t}$ and the remaining ones fall inside the unit circle, analogously to the single pole case. If one carries out linear prediction in the reverse direction, one obtains analogous results; that is, one may show that if $\Phi^{\downarrow}$ is a backward predictor companion matrix whose first column vector $d$ is the minimal 2-norm solution of $H(l+1)d = \hat{h}_l$, then $\Phi^{\downarrow}$ has an eigenvalue of multiplicity $n$ located at $z^{-1} = e^{-s\Delta t}$, while the extraneous ones correspond to the conjugates of those of $\Phi^{\uparrow}$.
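A sketch illustrating the multiple-pole claim (not from the paper; pole, residues and sizes are arbitrary, and the repeated eigenvalue is recovered only to the accuracy allowed by its defectiveness):

```python
import numpy as np

def hankel_block(h, l, M, N):
    return h[l + np.arange(M)[:, None] + np.arange(N)[None, :]]

dt, n, M, N, l = 0.1, 2, 12, 6, 0
s, r = -0.4 + 9.0j, np.array([1.0 + 0.3j, 0.8 - 0.5j])      # one pole of multiplicity n = 2
z = np.exp(s * dt)
k = np.arange(l + M + N + 1)
h = (r[0] + (k * dt) * r[1]) * np.exp(s * k * dt)           # time series of type (4.2)

H = hankel_block(h, l, M, N)
print(np.linalg.matrix_rank(H))                             # n = 2

c = np.linalg.lstsq(H, h[l + N : l + N + M], rcond=None)[0] # minimal 2-norm solution
Phi_up = np.column_stack([np.eye(N, dtype=complex)[:, 1:], c])
eigs = np.linalg.eigvals(Phi_up)

order = np.argsort(np.abs(eigs - z))
print(np.abs(eigs[order[:n]] - z))    # n eigenvalues clustered at z (split roughly eps^(1/n))
print(np.abs(eigs[order[n:]]))        # remaining moduli: strictly less than 1
```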
To analyze time series arising from (4.1), notice that in this case the column vector $\hat{h}_j$ can be written as
$$\hat{h}_j = \sum_{k=1}^{K} V_k P_k^{\,j}\, r_k \tag{4.12}$$
where $V_k$ is an $M \times n_k$ matrix and $P_k$ is an $n_k \times n_k$ matrix, both defined by replacing $z$ by $z_k = e^{s_k \Delta t}$ in (4.4) and (4.5), respectively, and $r_k$ is the residue vector associated with the pole $s_k$. Hence, one may prove that the decomposition expressed in (4.10) still holds, with the difference that $V$, $P$ and $R$ are now block matrices:
$$V = [V_1\ \cdots\ V_K], \qquad P = \mathrm{diag}(P_1, \ldots, P_K), \qquad R = \mathrm{diag}(R_1, \ldots, R_K)$$
where the $R_k$ are symmetric $n_k \times n_k$ matrices, defined analogously to (4.6) using the components of the residues related to the pole $s_k$. In its turn, $W^T$ is an $N \times n$ block matrix, obtained by taking the first $N$ rows of $V$.
5. Concluding Remarks
A decomposition of H(l) for the particular case l = 0 was presented by Wei and Majda
[15] and was employed to prove that the poles of the time series (4.1) can be extracted from
any polynomial whose coefficients are solutions of a linear system with coefficient matrix
H(0), although nothing was said regarding the extra N − n zeros. Here, using the fact that V
is full rank [15], we see from (4.10) that for all l ≥ 0, rank(H(l)) = n, which allows us to
conclude that the number of exponentials contained in any time series either expressed by
(1.1) or (4.1) can always be detected from the rank of $H(l)$. Another consequence of the factorization (4.10) is that the rather involved argument developed by Wei and Majda to prove the result on the extraction of poles from such polynomials becomes very simple, since it is a direct consequence of (4.11).
As a final remark, we would like to emphasize that the above factorization of $H(l)$ and the results obtained in the previous analysis enable us to prove theorems which are, in some sense, the companions of Theorem 3.1 and Theorem 3.2. That is, both Theorem 3.1 and Theorem 3.2 hold regardless of whether the time series has multiple poles or not. Since in practical applications the polynomial coefficients are only approximations of the true ones, an interesting problem which should be investigated is how the signal zeros change when the polynomial coefficients suffer small perturbations. We hope this work is useful for developing a zero perturbation analysis taking advantage of the rich resources of linear algebra. Once this is done, the problem of separating the signal zeros from the extraneous ones may be tackled by using backward predictor companion matrices.
Acknowledgements
The first author was supported by CNPq, Brazil, grant 300487/94-0 (NV). Part of this research was performed
while the second author was at CERFACS, Toulouse, France, and was supported by Fundação CAPES, Brazil,
grant number BEX 3119/95-5.
REFERENCES
1. F. S. V. Bazán and C. Bavastri. An optimized pseudo-inverse algorithm (OPIA) for multi-input
multi-output modal parameter identification. Mechanical Systems and Signal Processing, 10,
365–380, 1996.
2. L. H. Bezerra and F. S. V. Bazán. Eigenvalue locations of generalized companion predictor
matrices. Technical Report TR/PA/97/01, CERFACS, Toulouse, 1997.
3. Å. Björck. Numerical Methods for Least Squares Problems. SIAM, Philadelphia, 1996.
4. G. Cybenko. Locations of zeros of predictor polynomials. IEEE Trans. Automat. Control,
AC-27, 235–237, 1982.
5. P. J. Davis. Interpolation and Approximation. Blaisdell, New York, NY, 1963.
6. B. De Moor. The singular value decomposition and long and short spaces of noisy matrices.
IEEE Trans. on Signal Processing, 41, 2826–2838, 1993.
7. A. Edelman and H. Murakami. Polynomial roots from companion matrix eigenvalues. Math.
Comp., 64, 763–776, 1995.
8. T. L. Henderson. Geometric methods for determining system poles from transient response.
IEEE Trans. ASSP, 29, 982–988, 1981.
9. R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge,
1985.
10. F. Kittaneh. Singular values of companion matrices and bounds on zeros of polynomials. SIAM
J. Matrix Anal. Appl., 16, 333–340, 1995.
11. R. Kumaresan. On the zeros of the linear prediction-error filters for deterministic signals. IEEE
Trans. ASSP, ASSP-31, 217–221, 1983.
12. J. Makhoul. Linear prediction: a tutorial review. Proc. of the IEEE, 63, 561–580, 1975.
13. G. W. Stewart and J.-G. Sun. Matrix Perturbation Theory. Academic Press, San Diego, 1990.
14. A. van der Veen, E. F. Deprettere and A. L. Swindlehurst. Subspace-based signal analysis
using singular value decomposition. Proc. of the IEEE, 81, 1277–1309, 1993.
15. M. Wei and G. Majda. A new theoretical approach for Prony’s method. Linear Algebra Appl.,
136, 119–132, 1990.