HE4020 Econometric Time Series Analysis
Semester 1, 2015-16

LINEAR TIME SERIES MODELS
1 Introduction
Time series are observations collected over time. Two aims of time series
analysis are (i) to model the underlying mechanism that generated the
observations, and (ii) to use that model to forecast the series.
Observed series exhibit different features, and any credible model must be
able to account for at least the salient ones.
Some examples of time series plots:
[Figure: Los Angeles Annual Rainfall -- time series plot; x-axis: Year
(1880-1980), y-axis: Inches]

[Figure: Scatter Plot of LA Rainfall -- current year rainfall (inches)
against previous year rainfall (inches)]

There is considerable variation in rainfall over the years: some years are
very wet (e.g. 1883) while others are dry (e.g. 1983). The scatter plot does
not suggest strong correlation in rainfall from one year to the next.

[Figure: Time Series Plot of Colour Property -- x-axis: Batch (0-35),
y-axis: Colour Property (65-85)]

[Figure: Scatter Plot of Colour Property -- current batch colour property
against previous batch colour property]

The colour property of batches produced in an industrial chemical process.
The time series plot shows that consecutive batches tend to have similar
colour values. The scatter plot shows a small degree of positive association
between consecutive batches.
[Figure: Annual Number of Canadian Hare -- time series plot; x-axis: Year
(1905-1935), y-axis: Number]

[Figure: Scatter Plot of Canadian Hare -- current year number against
previous year number]
The time series plot shows "stickiness": neighbouring values are very
closely related, and the number does not change much from one year to the
next. The scatter plot shows strong positive correlation.
[Figure: Average Monthly Temperature -- time series plot; x-axis: Year
(1964-1976), y-axis: Temperature]

[Figure: Scatter Plot of Average Monthly Temp -- current month temperature
against the corresponding month's temperature in the previous year]

The time series displays a regular pattern called seasonality: observations
twelve months apart are related. All January and February temperatures are
low, while June, July and August are warmer. The twelve-month-lag scatter
plot shows strong positive correlation.

2 Fundamental Concepts
2.1 Mean, Variance and Covariance
The sequence of random variables $\{Y_t : t = 0, \pm 1, \pm 2, \ldots\}$ is
called a stochastic process.
The complete probability structure of such a process is determined by the
joint probability distributions of all finite collections of the $Y$'s.
Fortunately, we often don't need to deal with these joint distributions;
instead we need only consider the means, variances and covariances.
The mean function, $\mu_t$, of a stochastic process
$\{Y_t : t = 0, \pm 1, \pm 2, \ldots\}$ is defined as

    \mu_t = E(Y_t), \quad t = 0, \pm 1, \pm 2, \ldots    (1)

Note that $\mu_t$ is just the expected value of the process at time $t$ and,
in general, it can vary with time.
The autocovariance function, $\gamma_{t,s}$, is defined as

    \gamma_{t,s} = \mathrm{Cov}(Y_t, Y_s), \quad t, s = 0, \pm 1, \pm 2, \ldots    (2)

where

    \mathrm{Cov}(Y_t, Y_s) = E[(Y_t - \mu_t)(Y_s - \mu_s)] = E(Y_t Y_s) - \mu_t \mu_s.    (3)
(3)
The autocorrelation function, $\rho_{t,s}$, is defined as

    \rho_{t,s} = \mathrm{Corr}(Y_t, Y_s), \quad t, s = 0, \pm 1, \pm 2, \ldots    (4)

where

    \mathrm{Corr}(Y_t, Y_s) = \frac{\mathrm{Cov}(Y_t, Y_s)}{\sqrt{\mathrm{Var}(Y_t)\,\mathrm{Var}(Y_s)}} = \frac{\gamma_{t,s}}{\sqrt{\gamma_{t,t}\,\gamma_{s,s}}}    (5)

Note that

    \gamma_{t,t} = \mathrm{Var}(Y_t), \qquad \rho_{t,t} = 1
    \gamma_{t,s} = \gamma_{s,t}, \qquad \rho_{s,t} = \rho_{t,s}    (6)
    |\gamma_{t,s}| \le \sqrt{\gamma_{t,t}\,\gamma_{s,s}}, \qquad |\rho_{t,s}| \le 1
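These population quantities have natural sample analogues. As a minimal
sketch (assuming a stationary series so that one estimate per lag makes
sense; the function name and all variable names are illustrative, not from
the notes):

    import numpy as np

    def sample_acf(y, max_lag):
        # sample autocovariances gamma_k and autocorrelations rho_k, k = 0..max_lag
        y = np.asarray(y, dtype=float)
        n = len(y)
        ybar = y.mean()
        # gamma_k = (1/n) * sum_{t=k+1}^{n} (y_t - ybar)(y_{t-k} - ybar)
        gamma = np.array([np.sum((y[k:] - ybar) * (y[:n - k] - ybar)) / n
                          for k in range(max_lag + 1)])
        rho = gamma / gamma[0]          # rho_k = gamma_k / gamma_0
        return gamma, rho

    # e.g. for iid noise, rho_k should be near 0 for all k >= 1
    rng = np.random.default_rng(0)
    gamma, rho = sample_acf(rng.standard_normal(500), max_lag=5)
    print(np.round(rho, 3))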
The following results are useful to evaluate covariance properties of
various time series models:

    \mathrm{Cov}\Big[\sum_{i=1}^{m} c_i Y_{t_i}, \; \sum_{j=1}^{n} d_j Y_{s_j}\Big] = \sum_{i=1}^{m} \sum_{j=1}^{n} c_i d_j \,\mathrm{Cov}(Y_{t_i}, Y_{s_j})    (7)

    \mathrm{Var}\Big[\sum_{i=1}^{m} c_i Y_{t_i}\Big] = \sum_{i=1}^{m} c_i^2 \,\mathrm{Var}(Y_{t_i}) + 2 \sum_{i=2}^{m} \sum_{j=1}^{i-1} c_i c_j \,\mathrm{Cov}(Y_{t_i}, Y_{t_j})    (8)

where $c_1, c_2, \ldots, c_m$ and $d_1, d_2, \ldots, d_n$ are constants, and
$t_1, t_2, \ldots, t_m$ and $s_1, s_2, \ldots, s_n$ are time points.
2.2 Random Walk
Let $e_1, e_2, \ldots$ be a sequence of iid random variables, each with mean
zero and variance $\sigma_e^2$.
The observed time series $\{Y_t : t = 1, 2, \ldots\}$ generated by the
process

    Y_t = Y_{t-1} + e_t, \quad Y_1 = e_1    (9)

is called a random walk process.
By repeated substitution it can easily be shown that

    Y_1 = e_1
    Y_2 = e_2 + e_1
    \vdots    (10)
    Y_t = e_t + e_{t-1} + \cdots + e_1
From Equation (10), we obtain

    E(Y_t) = \mu_t = E(e_t + e_{t-1} + \cdots + e_1) = 0, \quad \forall t    (11)

and

    \mathrm{Var}(Y_t) = \mathrm{Var}(e_t + e_{t-1} + \cdots + e_1) = t\,\sigma_e^2    (12)

Note that while the mean is constant over time, the variance increases
with $t$.
Consider next the covariance function. Suppose $1 \le t \le s$. Then we have

    \gamma_{t,s} = \mathrm{Cov}(Y_t, Y_s)
                = \mathrm{Cov}(e_1 + \cdots + e_t, \; e_1 + \cdots + e_t + e_{t+1} + \cdots + e_s)
                = \sum_{i=1}^{s} \sum_{j=1}^{t} \mathrm{Cov}(e_i, e_j)
                = \sum_{j=1}^{t} \mathrm{Var}(e_j)
                = t\,\sigma_e^2    (13)
The autocorrelation function for the random walk is then given by

    \rho_{t,s} = \frac{\gamma_{t,s}}{\sqrt{\gamma_{t,t}\,\gamma_{s,s}}}
              = \frac{t\,\sigma_e^2}{\sqrt{(t\,\sigma_e^2)(s\,\sigma_e^2)}}
              = \sqrt{\frac{t}{s}}    (14)

Note that $\rho_{t,s}$ decreases as $|t - s|$ increases, but the rate of
decrease is smaller for larger values of $t$ (or $s$):

    \rho_{1,2} = \sqrt{1/2} = 0.707, \quad \rho_{1,3} = \sqrt{1/3} = 0.577, \ldots
    \rho_{10,11} = \sqrt{10/11} = 0.953, \quad \rho_{10,12} = \sqrt{10/12} = 0.913, \ldots
Below is a simulated random walk where the $e$'s are drawn from the standard
normal distribution.

[Figure: simulated random walk -- x-axis: Time (0-60), y-axis: Random Walk
(-2 to 8)]

Although the mean is zero, the series tends to wander away from the mean,
because the variance increases and adjacent values become more strongly
correlated over time.
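A minimal simulation sketch of this behaviour (the seed and replication
count are illustrative choices, not from the notes):

    import numpy as np

    rng = np.random.default_rng(42)      # illustrative seed
    T = 60
    e = rng.standard_normal(T)           # iid N(0,1) innovations
    Y = np.cumsum(e)                     # Y_t = e_1 + ... + e_t: a random walk

    # Var(Y_t) = t * sigma_e^2 grows linearly in t, so across many
    # replications the spread of Y_T should be about sqrt(T):
    reps = np.cumsum(rng.standard_normal((10_000, T)), axis=1)
    print(reps[:, -1].std(), np.sqrt(T))   # both close to sqrt(60) ~ 7.75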

2.3 A Moving Average
Consider

    Y_t = \frac{e_t + e_{t-1}}{2}    (15)

Let's derive the first two moments of $Y_t$:

    E(Y_t) = \mu_t = E\Big\{\frac{e_t + e_{t-1}}{2}\Big\} = 0

    \mathrm{Var}(Y_t) = \mathrm{Var}\Big\{\frac{e_t + e_{t-1}}{2}\Big\}
                     = \frac{\mathrm{Var}(e_t) + \mathrm{Var}(e_{t-1})}{4}
                     = 0.5\,\sigma_e^2

    \mathrm{Cov}(Y_t, Y_{t-1}) = \mathrm{Cov}\Big\{\frac{e_t + e_{t-1}}{2}, \; \frac{e_{t-1} + e_{t-2}}{2}\Big\}
        = \tfrac{1}{4}\big[\mathrm{Cov}(e_t, e_{t-1}) + \mathrm{Cov}(e_t, e_{t-2}) + \mathrm{Cov}(e_{t-1}, e_{t-1}) + \mathrm{Cov}(e_{t-1}, e_{t-2})\big]
        = \frac{\mathrm{Cov}(e_{t-1}, e_{t-1})}{4}
        = 0.25\,\sigma_e^2

or

    \gamma_{t,t-1} = 0.25\,\sigma_e^2, \quad \forall t
Further,

    \mathrm{Cov}(Y_t, Y_{t-2}) = \mathrm{Cov}\Big\{\frac{e_t + e_{t-1}}{2}, \; \frac{e_{t-2} + e_{t-3}}{2}\Big\} = 0

since the $e$'s are independent. Similarly, $\mathrm{Cov}(Y_t, Y_{t-k}) = 0$
for $k > 1$. Hence, we have

    \gamma_{t,s} = \begin{cases} 0.5\,\sigma_e^2 & \text{for } |t-s| = 0 \\ 0.25\,\sigma_e^2 & \text{for } |t-s| = 1 \\ 0 & \text{for } |t-s| > 1 \end{cases}    (16)

and

    \rho_{t,s} = \begin{cases} 1 & \text{for } |t-s| = 0 \\ 0.5 & \text{for } |t-s| = 1 \\ 0 & \text{for } |t-s| > 1 \end{cases}    (17)

Note that, for example, $\rho_{1,2} = \rho_{3,4} = \rho_{9,10} = 0.5$ and
$\rho_{1,3} = \rho_{5,7} = \rho_{9,11} = 0$. That is, $Y$ values one period
apart have correlation 0.5, and $Y$ values two or more periods apart are
uncorrelated. These results hold regardless of which time periods we are
considering. This leads us to the important concept of stationarity.
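As a quick numerical check (a sketch only; names and seed are illustrative),
the sample ACF of a simulated $Y_t = (e_t + e_{t-1})/2$ should be near 0.5
at lag 1 and near 0 beyond:

    import numpy as np

    rng = np.random.default_rng(1)
    e = rng.standard_normal(100_001)
    Y = (e[1:] + e[:-1]) / 2           # Y_t = (e_t + e_{t-1}) / 2

    y = Y - Y.mean()
    gamma0 = np.mean(y * y)            # ~ 0.5  (= 0.5 sigma_e^2 with sigma_e = 1)
    gamma1 = np.mean(y[1:] * y[:-1])   # ~ 0.25
    gamma2 = np.mean(y[2:] * y[:-2])   # ~ 0
    print(gamma1 / gamma0, gamma2 / gamma0)   # approx 0.5 and 0.0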

2.4 Stationarity
A process is said to be strictly stationary if the joint distribution of
$Y_{t_1}, Y_{t_2}, \ldots, Y_{t_n}$ is the same as that of
$Y_{t_1-k}, Y_{t_2-k}, \ldots, Y_{t_n-k}$ for all choices of time points
$t_1, t_2, \ldots, t_n$ and all lags $k$.
This is a strong assumption and often difficult to establish in practice.
A weaker version, referred to as weak (or second-order) stationarity,
requires only that
- the mean of $Y_t$ is constant over time;
- $\gamma_{t,t-k} = \gamma_{0,k}$ for all times $t$ and lags $k$.
Since the covariance of a stationary process depends only on the time
difference $|t - (t-k)|$ and not on the actual times $t$ and $t-k$, we can
simply express the autocovariance as $\gamma_k$. Similarly, the
autocorrelations can be expressed as $\rho_k$.

2.5 White noise
A white noise process is a sequence of independent, identically distributed
random variables $\{e_t\}$.
A white noise process has the following properties:
- constant mean: $E(e_t) = \mu_e$ for all $t$ (often taken to be 0)
- $\gamma_k = \begin{cases} \mathrm{Var}(e_t) & \text{for } k = 0 \\ 0 & \text{for } k \ne 0 \end{cases}$
- $\rho_k = \begin{cases} 1 & \text{for } k = 0 \\ 0 & \text{for } k \ne 0 \end{cases}$

3 Stationary Time Series Models
Consider a time series generated by the following model:

    Y_t = \alpha_0 + \alpha_1 Y_{t-1} + e_t    (18)

where

    E(e_t) = 0, \qquad E(e_t e_s) = \begin{cases} \sigma_e^2 & t = s \\ 0 & t \ne s \end{cases}

This is known as an autoregressive process of order 1, or AR(1).
How does $Y_t$ behave over time?

    Y_t = \alpha_0 + \alpha_1 Y_{t-1} + e_t
        = \alpha_0 + \alpha_1(\alpha_0 + \alpha_1 Y_{t-2} + e_{t-1}) + e_t
        = \alpha_0(1 + \alpha_1) + \alpha_1^2 Y_{t-2} + e_t + \alpha_1 e_{t-1}
        = \alpha_0(1 + \alpha_1) + \alpha_1^2(\alpha_0 + \alpha_1 Y_{t-3} + e_{t-2}) + e_t + \alpha_1 e_{t-1}
        = \alpha_0(1 + \alpha_1 + \alpha_1^2) + \alpha_1^3 Y_{t-3} + e_t + \alpha_1 e_{t-1} + \alpha_1^2 e_{t-2}

Continued substitution for the lagged $Y$ on the right-hand side leads to

    Y_t = \alpha_0(1 + \alpha_1 + \alpha_1^2 + \cdots) + e_t + \alpha_1 e_{t-1} + \alpha_1^2 e_{t-2} + \cdots

Taking expectations on both sides gives

    E(Y_t) = \alpha_0(1 + \alpha_1 + \alpha_1^2 + \cdots)

This expectation exists if the infinite geometric series converges. A
necessary and sufficient condition for this is $|\alpha_1| < 1$. The
expectation is then

    E(Y_t) = \frac{\alpha_0}{1 - \alpha_1}    (19)

Thus if $|\alpha_1| < 1$ the $Y$ series has a constant unconditional mean at
all points in time. In that case,

    Y_t = \mu + e_t + \alpha_1 e_{t-1} + \alpha_1^2 e_{t-2} + \cdots    (20)
Consider now the variance of $Y$:

    \mathrm{Var}(Y_t) = E(Y_t - \mu)^2
                     = E(e_t^2 + \alpha_1^2 e_{t-1}^2 + \alpha_1^4 e_{t-2}^2 + \cdots + 2\alpha_1 e_t e_{t-1} + \cdots)
                     = \sigma_e^2 (1 + \alpha_1^2 + \alpha_1^4 + \cdots)
                     = \frac{\sigma_e^2}{1 - \alpha_1^2}

Thus the $Y$ series has a constant unconditional variance, independent of
time.
Next consider the covariance of $Y$ over time. The covariance of $Y$ with
its lagged value is known as an autocovariance. The first-lag autocovariance
is defined as

    \gamma_1 = E[(Y_t - \mu)(Y_{t-1} - \mu)]

Now,

    Y_t - \mu = e_t + \alpha_1 e_{t-1} + \alpha_1^2 e_{t-2} + \cdots
    Y_{t-1} - \mu = e_{t-1} + \alpha_1 e_{t-2} + \alpha_1^2 e_{t-3} + \cdots

Hence,

    \gamma_1 = E[(e_t + \alpha_1 e_{t-1} + \alpha_1^2 e_{t-2} + \cdots)(e_{t-1} + \alpha_1 e_{t-2} + \alpha_1^2 e_{t-3} + \cdots)]

Taking expectations,

    \gamma_1 = \alpha_1 \sigma_e^2 (1 + \alpha_1^2 + \alpha_1^4 + \cdots) = \alpha_1 \mathrm{Var}(Y_t)

Similarly, the second-lag autocovariance is
$\gamma_2 = \alpha_1^2 \mathrm{Var}(Y_t)$, and in general
$\gamma_s = \alpha_1^s \mathrm{Var}(Y_t)$, $s = 0, 1, 2, \ldots$
Note that the autocovariances depend only on the lag length and are
independent of the time point.
Dividing the autocovariances by the variance gives the autocorrelations:

    \rho_s = \frac{\gamma_s}{\gamma_0} = \frac{\alpha_1^s \mathrm{Var}(Y_t)}{\mathrm{Var}(Y_t)} = \alpha_1^s, \quad s = 0, 1, 2, \ldots    (21)

Note that for $|\alpha_1| < 1$ the autocorrelations decay with the lag
length. Plotting the autocorrelations against the lag length gives the
correlogram.
For $|\alpha_1| < 1$ the mean, variance and covariances of the $Y$ series
are constants, independent of time. The series is therefore weakly (or
covariance) stationary.
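A brief sketch comparing the theoretical ACF $\rho_s = \alpha_1^s$ with a
simulated AR(1); the parameter values, seed and names are illustrative:

    import numpy as np

    rng = np.random.default_rng(7)
    alpha0, alpha1 = 0.0, 0.6          # |alpha1| < 1: stationary AR(1)
    T = 50_000
    Y = np.zeros(T)
    for t in range(1, T):
        Y[t] = alpha0 + alpha1 * Y[t - 1] + rng.standard_normal()

    y = Y - Y.mean()
    for s in range(1, 5):
        rho_hat = np.mean(y[s:] * y[:-s]) / np.mean(y * y)
        print(s, round(rho_hat, 3), round(alpha1**s, 3))   # sample vs alpha1^s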
An autoregressive process of order $p$, AR($p$), has the form

    Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \cdots + \alpha_p Y_{t-p} + e_t    (22)

For $p = 2$, we have the AR(2) process

    Y_t = \alpha_0 + \alpha_1 Y_{t-1} + \alpha_2 Y_{t-2} + e_t    (23)

Repeated substitution for the lagged $Y$'s on the right-hand side and taking
expectations would give the unconditional mean of $Y_t$. However, if $Y$ is
covariance stationary, the unconditional mean can be evaluated more easily
as follows:

    E(Y_t) = \alpha_0 + \alpha_1 E(Y_{t-1}) + \alpha_2 E(Y_{t-2}) + E(e_t)    (24)

    E(Y_t) = \frac{\alpha_0}{1 - \alpha_1 - \alpha_2}

Let $E(Y_t) = \mu = \alpha_0 / (1 - \alpha_1 - \alpha_2)$. Then
$\alpha_0 = \mu(1 - \alpha_1 - \alpha_2)$.
Substituting for $\alpha_0$ in the AR(2) model, we have

    Y_t = \mu(1 - \alpha_1 - \alpha_2) + \alpha_1 Y_{t-1} + \alpha_2 Y_{t-2} + e_t
    Y_t - \mu = \alpha_1(Y_{t-1} - \mu) + \alpha_2(Y_{t-2} - \mu) + e_t
    y_t = \alpha_1 y_{t-1} + \alpha_2 y_{t-2} + e_t

where $y_t = Y_t - \mu$.
Multiplying by $y_t$ and taking expectations, we have

    E(y_t^2) = \alpha_1 E(y_{t-1} y_t) + \alpha_2 E(y_{t-2} y_t) + E(e_t y_t)
    \gamma_0 = \alpha_1 \gamma_1 + \alpha_2 \gamma_2 + \sigma_e^2

Multiplying by $y_{t-1}$ and $y_{t-2}$ and taking expectations:

    \gamma_1 = \alpha_1 \gamma_0 + \alpha_2 \gamma_1
    \gamma_2 = \alpha_1 \gamma_1 + \alpha_2 \gamma_0

Substituting for $\gamma_1$ and $\gamma_2$ in the preceding equation and
simplifying:

    \gamma_0 = \frac{(1 - \alpha_2)\,\sigma_e^2}{(1 + \alpha_2)(1 - \alpha_1 - \alpha_2)(1 + \alpha_1 - \alpha_2)}

For stationarity this variance must be constant and positive. Sufficient
conditions for stationarity are that each term in the parentheses is
positive:

    \alpha_1 + \alpha_2 < 1
    \alpha_2 - \alpha_1 < 1
    |\alpha_2| < 1

Dividing the two equations above for $\gamma_1$ and $\gamma_2$ by
$\gamma_0$, we obtain the Yule-Walker equations for the AR(2) process:

    \rho_1 = \alpha_1 + \alpha_2 \rho_1
    \rho_2 = \alpha_1 \rho_1 + \alpha_2
From these, we can solve for $\rho_1$ and $\rho_2$:

    \rho_1 = \frac{\alpha_1}{1 - \alpha_2}, \qquad \rho_2 = \alpha_2 + \frac{\alpha_1^2}{1 - \alpha_2}

For $k = 3, 4, \ldots$ the autocorrelations of an AR(2) process follow a
second-order difference equation:

    \rho_k = \alpha_1 \rho_{k-1} + \alpha_2 \rho_{k-2}

The stationarity conditions ensure that the acf dies out as the lag
increases.
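As a sketch (the parameter values are illustrative), the AR(2) ACF can be
generated directly from the Yule-Walker recursion:

    import numpy as np

    def ar2_acf(alpha1, alpha2, max_lag):
        # ACF of a stationary AR(2) from the Yule-Walker relations
        rho = np.empty(max_lag + 1)
        rho[0] = 1.0
        rho[1] = alpha1 / (1 - alpha2)          # rho_1 = alpha_1 / (1 - alpha_2)
        for k in range(2, max_lag + 1):         # rho_k = alpha_1 rho_{k-1} + alpha_2 rho_{k-2}
            rho[k] = alpha1 * rho[k - 1] + alpha2 * rho[k - 2]
        return rho

    print(np.round(ar2_acf(0.5, 0.3, 8), 3))    # real roots: dies out geometrically
    print(np.round(ar2_acf(1.6, -0.9, 8), 3))   # complex roots: damped sine wave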
The AR(2) process may be expressed in terms of the lag operator $L$, defined
by

    L y_t = y_{t-1}
    L(L y_t) = L^2 y_t = y_{t-2}

In general, $L^s y_t = y_{t-s}$.
The AR(2) process is then

    A(L)\, y_t = e_t    (25)

where

    A(L) = 1 - \alpha_1 L - \alpha_2 L^2    (26)

$A(L)$ is referred to as a polynomial in the lag operator. Now,

    A(L) = 1 - \alpha_1 L - \alpha_2 L^2 = (1 - \lambda_1 L)(1 - \lambda_2 L)

where the $\lambda$'s and $\alpha$'s are connected by

    \lambda_1 + \lambda_2 = \alpha_1 \quad \text{and} \quad \lambda_1 \lambda_2 = -\alpha_2

The inverse $A^{-1}(L)$ may be written as

    A^{-1}(L) = \frac{1}{(1 - \lambda_1 L)(1 - \lambda_2 L)} = \frac{c}{1 - \lambda_1 L} + \frac{d}{1 - \lambda_2 L}

where

    c = \lambda_1 / (\lambda_1 - \lambda_2) \quad \text{and} \quad d = \lambda_2 / (\lambda_2 - \lambda_1)

Then

    y_t = A^{-1}(L)\, e_t = \frac{c}{1 - \lambda_1 L}\, e_t + \frac{d}{1 - \lambda_2 L}\, e_t    (27)
From the results for the AR(1), stationarity of the AR(2) requires that
$|\lambda_1| < 1$ and $|\lambda_2| < 1$.
The $\lambda$'s may be seen as the roots of the quadratic equation

    \lambda^2 - \alpha_1 \lambda - \alpha_2 = 0

This follows from the fact that for a quadratic equation $x^2 + bx + c = 0$,
the sum of the two roots equals $-b$ and the product of the two roots equals
$c$.
This is known as the characteristic equation of the AR(2) process. Its roots
are

    \lambda_1, \lambda_2 = \frac{\alpha_1 \pm \sqrt{\alpha_1^2 + 4\alpha_2}}{2}

These roots are real or complex, depending on whether
$\alpha_1^2 + 4\alpha_2 > 0$ or $< 0$. If the roots are complex, the
autocorrelation coefficients will display sine-wave fluctuations, which will
dampen towards zero provided the complex roots have moduli less than 1.
Stationarity requires that the roots of the characteristic equation, whether
real or complex, have moduli less than 1. This is often stated as "the roots
lie within the unit circle".
An alternative statement is that the roots of
$A(z) = 1 - \alpha_1 z - \alpha_2 z^2$ lie outside the unit circle. The
roots of $A(z)$ are the values of $z$ that solve the equation

    A(z) = 1 - \alpha_1 z - \alpha_2 z^2 = 0

These roots are the reciprocals of those of the characteristic equation,
i.e. $z_j = 1/\lambda_j$, $j = 1, 2$. So, if the $\lambda$'s lie within the
unit circle, the $z$'s must lie outside it. Hence the stationarity condition
is often stated as "the roots of the polynomial in the lag operator lie
outside the unit circle".
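A minimal stationarity check along these lines (a sketch; the function name
and example coefficients are illustrative choices):

    import numpy as np

    def ar2_is_stationary(alpha1, alpha2):
        # check |lambda_j| < 1 for the roots of lambda^2 - alpha1*lambda - alpha2 = 0
        lambdas = np.roots([1.0, -alpha1, -alpha2])   # characteristic equation
        return np.all(np.abs(lambdas) < 1.0), lambdas

    print(ar2_is_stationary(0.2, 0.35))   # roots 0.7 and -0.5: stationary
    print(ar2_is_stationary(0.7, 0.35))   # one root exceeds 1: not stationary
    print(ar2_is_stationary(1.6, -0.9))   # complex roots, modulus sqrt(0.9): stationary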

3.1 Solution of Difference Equations
Consider the first-order difference equation

    x_t = a_1 x_{t-1}

A trivial solution is $x_t = x_{t-1} = \cdots = 0$.
An obvious solution is $x_t = a_1^t$, since then $x_t = a_1 x_{t-1}$ gives

    a_1^t = a_1 (a_1^{t-1}) = a_1^t

Multiplying $a_1^t$ by an arbitrary constant gives another solution
$A a_1^t$, since

    x_t = a_1 x_{t-1} \;\Rightarrow\; A a_1^t = a_1 A a_1^{t-1} = A a_1^t

Characteristics of the solution:
- If $|a_1| < 1$, then $a_1^t \to 0$ as $t \to \infty$:
  direct convergence if $0 < a_1 < 1$;
  oscillatory convergence if $-1 < a_1 < 0$.
- If $|a_1| > 1$ the solution is not stable:
  for $a_1 > 1$ the solution $\to \infty$ as $t \to \infty$;
  for $a_1 < -1$ the solution oscillates explosively as $t \to \infty$.
- If $a_1 = 1$, any arbitrary constant $A$ satisfies the difference
  equation $x_t = x_{t-1}$.
- If $a_1 = -1$ the system is meta-stable: $a_1^t = 1$ if $t$ is even and
  $a_1^t = -1$ if $t$ is odd.
Consider the second-order difference equation

    x_t = a_1 x_{t-1} + a_2 x_{t-2}

The solution to the first-order system suggests trying the solution
$x_t = A\lambda^t$. If this is a solution, it must satisfy the difference
equation:

    A\lambda^t - a_1 A\lambda^{t-1} - a_2 A\lambda^{t-2} = 0

Dividing through by $A\lambda^{t-2}$, we find the values of $\lambda$ that
satisfy

    \lambda^2 - a_1 \lambda - a_2 = 0

The two solutions are

    \lambda_1, \lambda_2 = \frac{a_1 \pm \sqrt{a_1^2 + 4a_2}}{2}

Each of these roots yields a valid solution of the second-order difference
equation. These solutions are not unique: for any two arbitrary constants
$A_1$ and $A_2$, the linear combination $A_1 \lambda_1^t + A_2 \lambda_2^t$
also solves the difference equation:

    A_1 \lambda_1^t + A_2 \lambda_2^t = a_1 (A_1 \lambda_1^{t-1} + A_2 \lambda_2^{t-1}) + a_2 (A_1 \lambda_1^{t-2} + A_2 \lambda_2^{t-2})

    A_1 (\lambda_1^t - a_1 \lambda_1^{t-1} - a_2 \lambda_1^{t-2}) + A_2 (\lambda_2^t - a_1 \lambda_2^{t-1} - a_2 \lambda_2^{t-2}) = 0

Since $\lambda_1$ and $\lambda_2$ each solve the second-order difference
equation, the terms in the brackets equal zero.
Therefore the general solution of the second-order difference equation is

    x_t = A_1 \lambda_1^t + A_2 \lambda_2^t    (28)
There are three possible cases for the solutions, depending on the sign of
$a_1^2 + 4a_2$.
Case 1: $a_1^2 + 4a_2 > 0$
The roots are real and distinct:

    x_t = A_1 \lambda_1^t + A_2 \lambda_2^t

If $|\lambda_1|$ and $|\lambda_2|$ are both $< 1$, the series is convergent.
If either root exceeds 1 in absolute value, the series is explosive.
Example:

    x_t = 0.2 x_{t-1} + 0.35 x_{t-2}
    x_t - 0.2 x_{t-1} - 0.35 x_{t-2} = 0

Characteristic equation:

    \lambda^2 - 0.2\lambda - 0.35 = 0
    \lambda_1, \lambda_2 = \frac{0.2 \pm \sqrt{0.04 + (4)(0.35)}}{2} = \frac{0.2 \pm \sqrt{1.44}}{2} = 0.7, \; -0.5

So $x_t = A_1 (0.7)^t + A_2 (-0.5)^t$: a convergent series.
Suppose instead

    x_t = 0.7 x_{t-1} + 0.35 x_{t-2}
    x_t - 0.7 x_{t-1} - 0.35 x_{t-2} = 0

Characteristic equation:

    \lambda^2 - 0.7\lambda - 0.35 = 0
    \lambda_1, \lambda_2 = \frac{0.7 \pm \sqrt{0.49 + (4)(0.35)}}{2} = \frac{0.7 \pm \sqrt{1.89}}{2} = 1.037, \; -0.337

So $x_t = A_1 (1.037)^t + A_2 (-0.337)^t$: the series is explosive.
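A sketch verifying the first example numerically (the initial values below
are illustrative; the constants $A_1$, $A_2$ are pinned down by $x_0$ and
$x_1$):

    import numpy as np

    a1, a2 = 0.2, 0.35
    lam1, lam2 = 0.7, -0.5               # roots of lambda^2 - 0.2 lambda - 0.35 = 0

    # iterate x_t = 0.2 x_{t-1} + 0.35 x_{t-2} from illustrative initial values
    x = [1.0, 0.4]
    for t in range(2, 10):
        x.append(a1 * x[-1] + a2 * x[-2])

    # closed form x_t = A1 lam1^t + A2 lam2^t, with A1, A2 from x_0 and x_1
    A1, A2 = np.linalg.solve([[1, 1], [lam1, lam2]], x[:2])
    closed = [A1 * lam1**t + A2 * lam2**t for t in range(10)]
    print(np.allclose(x, closed))        # True: both paths agree and converge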
Case 2: $a_1^2 + 4a_2 = 0$
The two roots are then equal:

    \lambda_1, \lambda_2 = \frac{a_1}{2}

Hence, one solution is

    x_t = \Big(\frac{a_1}{2}\Big)^t

In this case we can show that another solution is

    x_t = t \Big(\frac{a_1}{2}\Big)^t

If this is a solution, it must satisfy
$x_t - a_1 x_{t-1} - a_2 x_{t-2} = 0$, i.e.

    t \Big(\frac{a_1}{2}\Big)^t - a_1 (t-1) \Big(\frac{a_1}{2}\Big)^{t-1} - a_2 (t-2) \Big(\frac{a_1}{2}\Big)^{t-2} = 0

Dividing through by $(a_1/2)^{t-2}$, we obtain

    t \Big(\frac{a_1}{2}\Big)^2 - a_1 (t-1) \frac{a_1}{2} - a_2 (t-2) = 0

Collecting terms,

    -\Big(\frac{a_1^2}{4} + a_2\Big) t + \Big(\frac{a_1^2}{2} + 2a_2\Big) = 0

Since $a_1^2 + 4a_2 = 0$, each of the bracketed terms equals zero.
So the general solution is

    x_t = A_1 \Big(\frac{a_1}{2}\Big)^t + A_2\, t \Big(\frac{a_1}{2}\Big)^t

The series will be explosive if $|a_1| > 2$ and convergent if $|a_1| < 2$.
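A quick check of the repeated-root case (illustrative values: $a_1 = 1$,
$a_2 = -0.25$, so $a_1^2 + 4a_2 = 0$ and $\lambda = 0.5$):

    a1, a2 = 1.0, -0.25                  # a1^2 + 4*a2 = 0, repeated root lambda = 0.5
    lam = a1 / 2

    # both (a1/2)^t and t*(a1/2)^t satisfy x_t = a1 x_{t-1} + a2 x_{t-2}
    for t in range(2, 8):
        s1 = lam**t - a1 * lam**(t - 1) - a2 * lam**(t - 2)
        s2 = t * lam**t - a1 * (t - 1) * lam**(t - 1) - a2 * (t - 2) * lam**(t - 2)
        print(t, round(s1, 12), round(s2, 12))   # both residuals are 0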
Case 3: $a_1^2 + 4a_2 < 0$ (which requires $a_2 < 0$)
The roots are imaginary:

    \lambda_1 = \frac{a_1 + i\sqrt{|a_1^2 + 4a_2|}}{2}, \qquad \lambda_2 = \frac{a_1 - i\sqrt{|a_1^2 + 4a_2|}}{2}

Expressing the roots in polar-coordinate form, the solution can be written
as

    x_t = \beta_1 r^t \cos(t\theta + \beta_2)

where $\beta_1$ and $\beta_2$ are arbitrary constants, and

    r = (-a_2)^{0.5} \quad \text{and} \quad \cos\theta = \frac{a_1}{2(-a_2)^{0.5}}

The solution shows a wave-like pattern. Since the cosine function is
bounded, the stability condition depends on $r$, i.e. on $(-a_2)^{0.5}$:
- $|a_2| = 1$: oscillation with unchanging amplitude;
- $|a_2| < 1$: damped oscillations;
- $|a_2| > 1$: explosive oscillations.
Example:

    x_t = 1.6 x_{t-1} - 0.9 x_{t-2}
    x_t - 1.6 x_{t-1} + 0.9 x_{t-2} = 0

    \lambda_1, \lambda_2 = \frac{1.6 \pm \sqrt{(1.6)^2 - (4)(0.9)}}{2} = \frac{1.6 \pm 1.02\,i}{2} = 0.8 \pm 0.51\,i

    r = (0.9)^{0.5} = 0.949
    \cos\theta = \frac{1.6}{2(0.9)^{0.5}} = 0.843 \;\Rightarrow\; \theta = 0.567

    x_t = \beta_1 (0.949)^t \cos(0.567\,t + \beta_2)

Damped sine waves.
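A sketch verifying the damped-oscillation example (the initial values,
which fix $\beta_1 = 1$ and $\beta_2 = 0$, are illustrative):

    import numpy as np

    a1, a2 = 1.6, -0.9
    r = np.sqrt(-a2)                     # modulus of the complex roots, ~0.949
    theta = np.arccos(a1 / (2 * r))      # ~0.567 radians

    # iterate the difference equation; amplitude should shrink like r^t
    x = [1.0, r * np.cos(theta)]         # start on the cosine path
    for t in range(2, 40):
        x.append(a1 * x[-1] + a2 * x[-2])

    print(np.round(x[:12], 3))           # oscillates with decaying amplitude
    print(np.allclose(x, [r**t * np.cos(theta * t) for t in range(40)]))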
