Time Series Analysis
Semester 1, 2015-16

LINEAR TIME SERIES MODELS
1 Introduction
Time series are observations collected over time. Two aims of time series analysis are (i) to model the underlying mechanism that generated the observations, and (ii) to use the fitted model to forecast the series.

Observed series exhibit different features, and any credible model must be able to account for at least the salient ones.
Some examples of time series plots:

[Figure: annual rainfall (inches) plotted against year, together with a scatter plot of current-year rainfall (inches) against the previous year's rainfall.]
[Figure: colour property plotted against batch number, together with a scatter plot of each batch's colour property against the previous batch's.]
[Figure: Canadian hare abundance plotted against year, together with a scatter plot of current-year abundance against the previous year's.]
Note the stickiness in the time series plot: neighbouring values are very closely related, and the number does not change much from one year to the next. The scatter plot shows a strong positive correlation.
[Figure: average monthly temperature plotted against year, together with a scatter plot of the current month's temperature against the previous month's.]
2 Fundamental Concepts
2.1 Mean, Variance and Covariance
The sequence of random variables $\{Y_t : t = 0, 1, 2, \ldots\}$ is called a stochastic process.

The complete probability structure of such a process is determined by the joint probability distributions of all finite collections of the $Y$'s.

Fortunately, we often don't need to deal with these joint distributions; instead we need only consider the means, variances and covariances.

The mean function, $\mu_t$, of a stochastic process $\{Y_t : t = 0, 1, 2, \ldots\}$ is defined as
$$\mu_t = E(Y_t), \qquad t = 0, 1, 2, \ldots \tag{1}$$
Note that $\mu_t$ is just the expected value of the process at time $t$ and, in general, it can vary with time.
The autocovariance function, $\gamma_{t,s}$, is defined as
$$\gamma_{t,s} = \mathrm{Cov}(Y_t, Y_s) \tag{2}$$
where
$$\mathrm{Cov}(Y_t, Y_s) = E[(Y_t - \mu_t)(Y_s - \mu_s)] = E(Y_t Y_s) - \mu_t \mu_s. \tag{3}$$
The autocorrelation function, $\rho_{t,s}$, is defined as
$$\rho_{t,s} = \frac{\gamma_{t,s}}{\sqrt{\gamma_{t,t}\,\gamma_{s,s}}} = \frac{\mathrm{Cov}(Y_t, Y_s)}{\sqrt{\mathrm{Var}(Y_t)\,\mathrm{Var}(Y_s)}}. \tag{4}$$

Let $\{e_t\}$ be a sequence of independent, identically distributed random variables with zero mean and variance $\sigma_e^2$. The process
$$Y_t = Y_{t-1} + e_t, \qquad Y_1 = e_1 \tag{9}$$
is called a random walk process.
By repeated substitution it can easily be shown that
$$\left.\begin{aligned} Y_1 &= e_1\\ Y_2 &= e_2 + e_1\\ &\;\;\vdots\\ Y_t &= e_t + e_{t-1} + \cdots + e_1 \end{aligned}\right\} \tag{10}$$
From Equation (10), we obtain
$$E(Y_t) = \mu_t = E(e_t + e_{t-1} + \cdots + e_1) = 0 \quad \forall t \tag{11}$$
and, since the $e$'s are independent with common variance $\sigma_e^2$,
$$\mathrm{Var}(Y_t) = \mathrm{Var}(e_t + e_{t-1} + \cdots + e_1) = t\,\sigma_e^2.$$
[Figure: a simulated random walk plotted against time.]
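A minimal sketch of how such a random walk can be simulated, and of the zero mean and linearly growing variance derived above (NumPy assumed; the seed, the innovation variance $\sigma_e^2 = 1$ and the sample sizes are arbitrary choices):

import numpy as np

rng = np.random.default_rng(42)
n_series, n_steps = 10_000, 60

# Each row is one realisation Y_t = e_1 + ... + e_t with e_t ~ N(0, 1).
e = rng.standard_normal((n_series, n_steps))
Y = np.cumsum(e, axis=1)

t = 30
print(Y[:, t - 1].mean())   # approx 0           (mu_t = 0)
print(Y[:, t - 1].var())    # approx t * 1 = 30  (Var(Y_t) = t * sigma_e^2)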
Now consider the (moving average) process
$$Y_t = \frac{e_t + e_{t-1}}{2}. \tag{15}$$
Let's derive the first two moments of $Y_t$:
$$E(Y_t) = \mu_t = E\left\{\frac{e_t + e_{t-1}}{2}\right\} = 0$$
$$\mathrm{Var}(Y_t) = \mathrm{Var}\left\{\frac{e_t + e_{t-1}}{2}\right\} = \frac{\mathrm{Var}(e_t) + \mathrm{Var}(e_{t-1})}{4} = 0.5\,\sigma_e^2$$
$$\mathrm{Cov}(Y_t, Y_{t-1}) = \mathrm{Cov}\left\{\frac{e_t + e_{t-1}}{2},\; \frac{e_{t-1} + e_{t-2}}{2}\right\}$$
$$= \frac{\mathrm{Cov}(e_t, e_{t-1}) + \mathrm{Cov}(e_t, e_{t-2}) + \mathrm{Cov}(e_{t-1}, e_{t-1}) + \mathrm{Cov}(e_{t-1}, e_{t-2})}{4}$$
$$= \frac{\mathrm{Cov}(e_{t-1}, e_{t-1})}{4} = 0.25\,\sigma_e^2$$
or
$$\gamma_{t,t-1} = 0.25\,\sigma_e^2 \quad \forall t.$$
Further,
$$\mathrm{Cov}(Y_t, Y_{t-2}) = \mathrm{Cov}\left\{\frac{e_t + e_{t-1}}{2},\; \frac{e_{t-2} + e_{t-3}}{2}\right\} = 0$$
since the $e$'s are independent. Similarly, $\mathrm{Cov}(Y_t, Y_{t-k}) = 0$ for $k > 1$. Hence, we have
$$\gamma_{t,s} = \begin{cases} 0.5\,\sigma_e^2 & \text{for } |t-s| = 0\\ 0.25\,\sigma_e^2 & \text{for } |t-s| = 1\\ 0 & \text{for } |t-s| > 1 \end{cases} \tag{16}$$
and
$$\rho_{t,s} = \begin{cases} 1 & \text{for } |t-s| = 0\\ 0.5 & \text{for } |t-s| = 1\\ 0 & \text{for } |t-s| > 1 \end{cases} \tag{17}$$
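These results are easy to verify by simulation. A minimal sketch (NumPy assumed; the seed and series length are arbitrary) comparing sample autocorrelations of (15) with the theoretical values in (17):

import numpy as np

rng = np.random.default_rng(0)
e = rng.standard_normal(200_000)
Y = (e[1:] + e[:-1]) / 2          # Y_t = (e_t + e_{t-1}) / 2, as in (15)

def sample_acf(x, k):
    # Sample autocorrelation at lag k.
    if k == 0:
        return 1.0
    x = x - x.mean()
    return (x[k:] * x[:-k]).mean() / x.var()

for k in range(4):
    print(k, round(sample_acf(Y, k), 3))   # approx 1, 0.5, 0, 0 as in (17)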
2.4 Stationarity
A process is said to be strictly stationary if the joint dis-
tribution of Yt1 ; Yt2 ; : : : ; Ytn is the same as that of Yt1 k ; Yt2 k; : : : ;
for all choices of time points t1; t2; :::; tn and k:
This is a strong assumption and often difficult to estab-
lish in practice.
A weaker version, referred to as weak (or second-order)
stationarity requires that
I The mean of Yt is constant over time
I t;t k = 0;k for all time t and lag k:
16
Since the covariance of a stationary process depends only
on the time difference jt (t k)j and not on the actual
times t and t k , we can simply express the autocovari-
ance as k : Similarly, autocorrelations can be expressed
as k :
A first-order autoregressive, AR(1), process has the form
$$Y_t = \phi_0 + \phi_1 Y_{t-1} + e_t \tag{18}$$
where
$$E(e_t) = 0, \qquad E(e_t e_s) = \begin{cases} \sigma_e^2 & t = s\\ 0 & t \neq s. \end{cases}$$
For $|\phi_1| < 1$ it can be shown that $\gamma_s = \phi_1^s\,\mathrm{Var}(Y_t)$, so that
$$\rho_s = \frac{\gamma_s}{\gamma_0} = \frac{\phi_1^s\,\mathrm{Var}(Y_t)}{\mathrm{Var}(Y_t)} = \phi_1^s, \qquad s = 0, 1, 2, \ldots \tag{21}$$
Note that for $|\phi_1| < 1$ the autocorrelations decay with the lag length.

Plotting the autocorrelations against the lag length gives the correlogram.

For $|\phi_1| < 1$ the mean, variance and covariances of the $Y$ series are constants, independent of time. The series is therefore weakly or covariance stationary.
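A minimal sketch of the correlogram idea (NumPy assumed; $\phi_0 = 0$, $\phi_1 = 0.7$, the seed and the length are arbitrary choices): the sample autocorrelations of a simulated AR(1) series should decay like $\phi_1^s$.

import numpy as np

rng = np.random.default_rng(1)
phi1, n = 0.7, 100_000
e = rng.standard_normal(n)

Y = np.empty(n)
Y[0] = e[0]
for t in range(1, n):
    Y[t] = phi1 * Y[t - 1] + e[t]   # AR(1) with phi0 = 0

Yc = Y - Y.mean()
for s in range(1, 6):
    rho_s = (Yc[s:] * Yc[:-s]).mean() / Yc.var()
    print(s, round(rho_s, 3), round(phi1 ** s, 3))   # sample vs phi1**s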
An autoregressive process of order $p$, AR($p$), has the form
$$Y_t = \phi_0 + \phi_1 Y_{t-1} + \cdots + \phi_p Y_{t-p} + e_t \tag{22}$$
For $p = 2$, we have the AR(2) process, whose first two autocorrelations are
$$\rho_1 = \frac{\phi_1}{1 - \phi_2}, \qquad \rho_2 = \frac{\phi_1^2}{1 - \phi_2} + \phi_2.$$
For $k = 3, 4, \ldots$ the autocorrelations of an AR(2) process follow a second-order difference equation:
$$\rho_k = \phi_1 \rho_{k-1} + \phi_2 \rho_{k-2}$$
Stationarity conditions ensure that the acf dies out as the lag increases, as the sketch below illustrates.
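A minimal sketch of this recursion (plain Python; $\phi_1 = 0.5$, $\phi_2 = 0.3$ are arbitrary stationary values), started from $\rho_0 = 1$ and the $\rho_1$ given above:

phi1, phi2 = 0.5, 0.3                      # arbitrary stationary coefficients

rho = [1.0, phi1 / (1 - phi2)]             # rho_0 = 1 and rho_1 from above
for k in range(2, 10):
    rho.append(phi1 * rho[k - 1] + phi2 * rho[k - 2])

print([round(r, 3) for r in rho])          # acf dies out as the lag increases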
The AR(2) process may be expressed in terms of the lag operator $L$, defined by
$$Ly_t = y_{t-1}, \qquad L(Ly_t) = L^2 y_t = y_{t-2}.$$
In general, $L^s y_t = y_{t-s}$.

The AR(2) process is then
$$A(L)y_t = e_t \tag{25}$$
where
$$A(L) = 1 - \phi_1 L - \phi_2 L^2 \tag{26}$$
$A(L)$ is referred to as a polynomial in the lag operator.

Now,
$$A(L) = 1 - \phi_1 L - \phi_2 L^2 = (1 - \lambda_1 L)(1 - \lambda_2 L)$$
where the $\lambda$'s and $\phi$'s are connected by
$$\lambda_1 + \lambda_2 = \phi_1 \quad \text{and} \quad \lambda_1 \lambda_2 = -\phi_2.$$
The inverse $A^{-1}(L)$ may be written as
$$A^{-1}(L) = \frac{1}{(1 - \lambda_1 L)(1 - \lambda_2 L)} = \frac{c}{1 - \lambda_1 L} + \frac{d}{1 - \lambda_2 L}$$
where
$$c = -\lambda_1/(\lambda_2 - \lambda_1) \quad \text{and} \quad d = \lambda_2/(\lambda_2 - \lambda_1).$$
Then
$$y_t = A^{-1}(L)e_t = \frac{c}{1 - \lambda_1 L}\,e_t + \frac{d}{1 - \lambda_2 L}\,e_t \tag{27}$$
From the results for AR(1), stationarity of the AR(2) requires that $|\lambda_1| < 1$ and $|\lambda_2| < 1$.
The $\lambda$'s may be seen as the roots of the quadratic equation
$$\lambda^2 - \phi_1 \lambda - \phi_2 = 0$$
This follows from the fact that for a quadratic equation $x^2 + bx + c = 0$, the sum of the two roots equals $-b$ and the product of the two roots equals $c$.

This is known as the characteristic equation of the AR(2) process. The roots are
$$\lambda_1, \lambda_2 = \frac{\phi_1 \pm \sqrt{\phi_1^2 + 4\phi_2}}{2}$$
These roots are real or complex, depending on whether $\phi_1^2 + 4\phi_2 > 0$ or $< 0$. If the roots are complex, the autocorrelation coefficients will display sine-wave fluctuations, which dampen towards zero provided the complex roots have moduli less than 1.

Stationarity requires that the roots of the characteristic equation, whether real or complex, have moduli less than 1. This is often stated as: the roots lie within the unit circle.

An alternative statement is that the roots of $A(z) = 1 - \phi_1 z - \phi_2 z^2$ lie outside the unit circle. The roots of $A(z)$ are the values of $z$ that solve the equation
$$A(z) = 1 - \phi_1 z - \phi_2 z^2 = 0$$
These roots are the reciprocals of those of the characteristic equation, i.e.
$$z_j = 1/\lambda_j, \qquad j = 1, 2.$$
So, if the $\lambda$'s lie within the unit circle, the $z$'s must lie outside the unit circle.

So the stationarity condition is often stated as: the roots of the polynomial in the lag operator lie outside the unit circle. Both versions are checked numerically in the sketch below.
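A minimal sketch checking both versions of the condition (NumPy assumed; the coefficients are arbitrary stationary values):

import numpy as np

phi1, phi2 = 0.5, 0.3                 # arbitrary AR(2) coefficients

lam = np.roots([1, -phi1, -phi2])     # lambda^2 - phi1*lambda - phi2 = 0
z = np.roots([-phi2, -phi1, 1])       # A(z) = 1 - phi1*z - phi2*z^2 = 0

print(np.abs(lam))                    # moduli < 1  => stationary
print(np.abs(z))                      # moduli > 1, since z_j = 1 / lambda_j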
3.1 Solution of Difference Equations
Consider the first-order difference equation:
$$x_t = a_1 x_{t-1}$$
Trivial solution:
$$x_t = x_{t-1} = \cdots = 0$$
An obvious solution is
$$x_t = a_1^t,$$
since then $x_t = a_1 x_{t-1}$ gives
$$a_1^t = a_1(a_1^{t-1}) = a_1^t,$$
as required. Multiplying $a_1^t$ by an arbitrary constant gives another solution $Aa_1^t$, since
$$x_t = a_1 x_{t-1} \;\Rightarrow\; Aa_1^t = a_1(Aa_1^{t-1}) = Aa_1^t.$$
Characteristics of the solution (illustrated in the sketch after this list):
• If $|a_1| < 1$, $a_1^t \to 0$ as $t \to \infty$:
  direct convergence if $0 < a_1 < 1$;
  oscillatory convergence if $-1 < a_1 < 0$.
• If $|a_1| > 1$ the solution is not stable:
  for $a_1 > 1$ the solution $\to \infty$ as $t \to \infty$;
  for $a_1 < -1$ the solution oscillates explosively as $t \to \infty$.
• If $a_1 = 1$, any arbitrary constant $A$ satisfies the difference equation $x_t = x_{t-1}$.
• If $a_1 = -1$ the system is meta-stable:
  $a_1^t = 1$ if $t$ is even,
  $a_1^t = -1$ if $t$ is odd.
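A minimal sketch (plain Python; the starting value and coefficients are arbitrary) iterating $x_t = a_1 x_{t-1}$ for each case:

for a1 in (0.5, -0.5, 1.1, -1.0):   # convergent, oscillatory, unstable, meta-stable
    x, path = 1.0, []
    for _ in range(8):
        x = a1 * x
        path.append(round(x, 3))
    print(a1, path)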
Consider the second-order difference equation:
$$x_t = a_1 x_{t-1} + a_2 x_{t-2}$$
The solution to the first-order system suggests trying the solution
$$x_t = A\lambda^t.$$
If this is a solution, it must satisfy the difference equation:
$$A\lambda^t - a_1 A\lambda^{t-1} - a_2 A\lambda^{t-2} = 0$$
Dividing through by $A\lambda^{t-2}$, we find the values of $\lambda$ that satisfy
$$\lambda^2 - a_1 \lambda - a_2 = 0$$
The two solutions are
$$\lambda_1, \lambda_2 = \frac{a_1 \pm \sqrt{a_1^2 + 4a_2}}{2}$$
Each of these roots yields a valid solution of the second-order difference equation.

These solutions are not unique. For any two arbitrary constants $A_1$ and $A_2$, the linear combination $A_1\lambda_1^t + A_2\lambda_2^t$ also solves the difference equation:
$$A_1\lambda_1^t + A_2\lambda_2^t = a_1(A_1\lambda_1^{t-1} + A_2\lambda_2^{t-1}) + a_2(A_1\lambda_1^{t-2} + A_2\lambda_2^{t-2})$$
i.e.
$$A_1(\lambda_1^t - a_1\lambda_1^{t-1} - a_2\lambda_1^{t-2}) + A_2(\lambda_2^t - a_1\lambda_2^{t-1} - a_2\lambda_2^{t-2}) = 0.$$
Since $\lambda_1$ and $\lambda_2$ each solve the second-order difference equation, the terms in the brackets equal zero.

Therefore the solution to the second-order difference equation is
$$x_t = A_1\lambda_1^t + A_2\lambda_2^t \tag{28}$$
There are three possible cases for the solutions, depending on the sign of $a_1^2 + 4a_2$.

Case 1: $a_1^2 + 4a_2 > 0$

The roots are real and distinct:
$$x_t = A_1\lambda_1^t + A_2\lambda_2^t$$
If $|\lambda_1|$ and $|\lambda_2|$ are both $< 1$, the series is convergent. If either root exceeds 1 in modulus, the series is explosive.
Example.
$$x_t = 0.2x_{t-1} + 0.35x_{t-2}$$
$$x_t - 0.2x_{t-1} - 0.35x_{t-2} = 0$$
Characteristic equation:
$$\lambda^2 - 0.2\lambda - 0.35 = 0$$
$$\lambda_1, \lambda_2 = \frac{0.2 \pm \sqrt{0.04 + (4)(0.35)}}{2} = \frac{0.2 \pm \sqrt{1.44}}{2} = 0.7,\; -0.5$$
So $x_t = A_1(0.7)^t + A_2(-0.5)^t$: a convergent series.

Now suppose
$$x_t = 0.7x_{t-1} + 0.35x_{t-2}$$
$$x_t - 0.7x_{t-1} - 0.35x_{t-2} = 0$$
Characteristic equation:
$$\lambda^2 - 0.7\lambda - 0.35 = 0$$
$$\lambda_1, \lambda_2 = \frac{0.7 \pm \sqrt{0.49 + (4)(0.35)}}{2} = \frac{0.7 \pm \sqrt{1.89}}{2} = 1.037,\; -0.337$$
So $x_t = A_1(1.037)^t + A_2(-0.337)^t$: the series is explosive.
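A minimal sketch (NumPy assumed; the starting values are arbitrary) reproducing both examples: the characteristic roots via numpy.roots, and the resulting path behaviour:

import numpy as np

for a1, a2 in ((0.2, 0.35), (0.7, 0.35)):
    lam = np.roots([1, -a1, -a2])          # lambda^2 - a1*lambda - a2 = 0
    x = [1.0, 1.0]                         # arbitrary starting values
    for t in range(2, 40):
        x.append(a1 * x[t - 1] + a2 * x[t - 2])
    print(lam, round(x[-1], 4))            # all |roots| < 1 => path -> 0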
Case 2: $a_1^2 + 4a_2 = 0$

The two (repeated) roots are then
$$\lambda_1, \lambda_2 = \frac{a_1}{2}$$
Hence, a solution is
$$x_t = \left(\frac{a_1}{2}\right)^t$$
In this case, one can show that another solution is
$$x_t = t\left(\frac{a_1}{2}\right)^t$$
If this is a solution, it must satisfy $x_t - a_1 x_{t-1} - a_2 x_{t-2} = 0$, i.e.
$$t\left(\frac{a_1}{2}\right)^t - a_1(t-1)\left(\frac{a_1}{2}\right)^{t-1} - a_2(t-2)\left(\frac{a_1}{2}\right)^{t-2} = 0$$
Dividing through by $\left(\frac{a_1}{2}\right)^{t-2}$, we obtain
$$t\left(\frac{a_1}{2}\right)^2 - a_1(t-1)\frac{a_1}{2} - a_2(t-2) = 0$$
Collecting terms,
$$-\left(\frac{a_1^2}{4} + a_2\right)t + \left(\frac{a_1^2}{2} + 2a_2\right) = 0$$
Since $a_1^2 + 4a_2 = 0$, each of the bracketed terms equals zero.

So the general solution is
$$x_t = A_1\left(\frac{a_1}{2}\right)^t + A_2\, t\left(\frac{a_1}{2}\right)^t$$
The series will be explosive if $|a_1| > 2$ and convergent if $|a_1| < 2$.
Case 3: $a_1^2 + 4a_2 < 0$ (so $a_2 < 0$)

The roots are imaginary (complex):
$$\lambda_1 = \frac{a_1 + i\sqrt{|a_1^2 + 4a_2|}}{2}, \qquad \lambda_2 = \frac{a_1 - i\sqrt{|a_1^2 + 4a_2|}}{2}$$
Expressing the roots in polar-coordinate form, the solution can be written as
$$x_t = \beta_1 r^t \cos(t\theta + \beta_2)$$
where $\beta_1$ and $\beta_2$ are arbitrary constants,
$$r = (-a_2)^{0.5} \quad \text{and} \quad \cos\theta = \frac{a_1}{2(-a_2)^{0.5}}$$
The solution shows a wave-like pattern. Since the cosine function is bounded, the stability condition depends on $r$, i.e. on $(-a_2)^{0.5}$:
• $|a_2| = 1$: oscillation with unchanging amplitude;
• $|a_2| < 1$: damped oscillations;
• $|a_2| > 1$: explosive oscillations.
Example.
$$x_t = 1.6x_{t-1} - 0.9x_{t-2}$$
$$x_t - 1.6x_{t-1} + 0.9x_{t-2} = 0$$
$$\lambda_1, \lambda_2 = \frac{1.6 \pm \sqrt{(1.6)^2 - (4)(0.9)}}{2} = \frac{1.6 \pm 1.02i}{2} = 0.8 \pm 0.51i$$
$$r = (0.9)^{0.5} = 0.949$$
$$\cos\theta = \frac{1.6}{2(0.9)^{0.5}} = 0.843 \;\Rightarrow\; \theta = 0.567$$
$$x_t = \beta_1(0.949)^t \cos(0.567t + \beta_2)$$
The solution follows damped sine waves.
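A minimal sketch (NumPy assumed; $\beta_1 = 1$, $\beta_2 = 0$ chosen for concreteness) confirming $r$ and $\theta$ and showing the damped oscillation:

import numpy as np

a1, a2 = 1.6, -0.9
r = np.sqrt(-a2)                            # 0.949
theta = np.arccos(a1 / (2 * np.sqrt(-a2)))  # 0.567 radians
print(r, theta)

x = [1.0, r * np.cos(theta)]                # beta1 = 1, beta2 = 0
for t in range(2, 30):
    x.append(a1 * x[t - 1] + a2 * x[t - 2])
print([round(v, 3) for v in x])             # sine wave with shrinking amplitude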