This version of the guide is for student users of RATS and EViews.
PREFACE
This Students' Manual is designed to accompany the fourth edition of Walter Enders'
Applied Econometric Time Series (AETS). As in the first edition, the text instructs by induction. The
method is to take a simple example and build towards more general models and econometric
procedures. A large number of examples are included in the body of each chapter. Many of the
mathematical proofs are performed in the text, and detailed examples of each estimation procedure
are provided. The approach is one of learning by doing. As such, the mathematical questions and the
suggested estimations at the end of each chapter are important.
The aim of this manual is NOT to provide the answers to each of the mathematical problems.
Those questions are answered in great detail in the Instructor's version of the manual. If your instructor
desires, he/she may provide you with the answers. Instead, the goal of this manual is to get you up
and running on RATS or EViews. The manual contains the code or workfiles that you can use
to read the data sets, so you will have all of the data needed to obtain the results reported in the
Questions and Exercises sections of AETS. Even if your instructor does not assign the exercises, I
encourage you to work through as many of these exercises as possible. RATS users should also
download the PowerPoint slides for RATS users on time-series.net.
There were several factors leading me to provide the partial programs for RATS and EViews
users. First, two versions of the RATS Programming Manual can be downloaded (at no charge) from
www.estima.com/enders or from www.time-series.net. The two Programming Manuals provide a
complete discussion of many of the programming tasks used in time-series econometrics. EViews
was included since it is a popular package that allows users to produce almost all of the results
obtained in the text. Adobe Acrobat allows you to copy a program from the *.pdf version of this
manual and paste it directly into RATS. EViews is a bit different. As such, I have created EViews
workfiles for almost all of the exercises in the text. This manual describes the contents of each
workfile and how each file was created.
CONTENTS

1. Difference Equations  page 4
2. Stationary Time-Series Models  page 6
3. Modeling Volatility  page 20
4. Models with Trend  page 28
5. Multiequation Time-Series Models  page 37
6. Cointegration and Error-Correction Models  page 45
7. Nonlinear Models and Breaks  page 56
Semester Project  page 61

(Each chapter entry contains Lecture Suggestions and Answers to Questions.)
CHAPTER 1
DIFFERENCE EQUATIONS
Introduction 1
1 Time-Series Models 1
2 Difference Equations and Their Solutions 7
3 Solution by Iteration 10
4 An Alternative Solution Methodology 14
5 The Cobweb Model 18
6 Solving Homogeneous Difference Equations 22
7 Particular Solutions for Deterministic Processes 31
8 The Method of Undetermined Coefficients 34
9 Lag Operators 40
10 Summary 43
Questions and Exercises 44
Online in the Supplementary Manual
APPENDIX 1.1 Imaginary Roots and de Moivre's Theorem
APPENDIX 1.2 Characteristic Roots in Higher-Order Equations
LEARNING OBJECTIVES
1. Explain how stochastic difference equations can be used for forecasting and illustrate
how such equations can arise from familiar economic models.
2. Explain what it means to solve a difference equation.
3. Demonstrate how to find the solution to a stochastic difference equation using the
iterative method.
4. Demonstrate how to find the homogeneous solution to a difference equation.
5. Illustrate the process of finding the homogeneous solution.
6. Show how to find homogeneous solutions in higher order difference equations.
7. Show how to find the particular solution to a deterministic difference equation.
8. Explain how to use the Method of Undetermined Coefficients to find the particular
solution to a stochastic difference equation.
9. Explain how to use lag operators to find the particular solution to a stochastic
difference equation.
Key Concepts
It is essential to understand that difference equations are capable of capturing the types of
dynamic models used in economics and political science. Towards this end, in my own classes, I simulate
a number of series and discuss how their dynamic properties depend on the parameters of the
data-generating process. Next, I show the students a number of macroeconomic variables--such as real GDP,
real exchange rates, interest rates, and rates of return on stock prices--and ask them to think about the
underlying dynamic processes that might be driving each variable. I also ask them to think about the
economic theory that bears on each of the variables. For example, the figure below shows the three
real exchange rate series used in Figure 3.5. You might see a tendency for the series to revert to a
long-run mean value. Nevertheless, the statistical evidence that real exchange rates are actually mean reverting
is debatable. Moreover, there is no compelling theoretical reason to believe that purchasing power parity
holds as a long-run phenomenon. The classroom discussion might center on the appropriate way to model
the tendency for the levels to meander. At this stage, the precise models are not important. The objective
is for you to conceptualize economic data in terms of difference equations.
It is also important to understand the distinction between convergent and divergent solutions. Be
sure to emphasize the relationship between characteristic roots and the convergence or divergence of a
sequence. Much of the current time-series literature focuses on the issue of unit roots. Question 5 at the
end of this chapter is designed to preview this important issue. The tools to emphasize are the method of
undetermined coefficients and lag operators.
[Figure: the pound, euro, and Swiss franc real exchange rate series, 2000-2012 (see Figure 3.5 of the text).]
CHAPTER 2
STATIONARY TIME-SERIES MODELS
1 Stochastic Difference Equation Models 47
2 ARMA Models 50
3 Stationarity 51
4 Stationarity Restrictions for an ARMA (p, q) Model 55
5 The Autocorrelation Function 60
6 The Partial Autocorrelation Function 64
7 Sample Autocorrelations of Stationary Series 67
8 Box-Jenkins Model Selection 76
9 Properties of Forecasts 79
10 A Model of the Interest Rate Spread 88
11 Seasonality 96
12 Parameter Instability and Structural Change 102
13 Combining Forecasts 109
14 Summary and Conclusions 112
Questions and Exercises 113
Online in the Supplementary Manual
Appendix 2.1: Estimation of an MA(1) Process
Appendix 2.2: Model Selection Criteria
LEARNING OBJECTIVES
1. Describe the theory of stochastic linear difference equations
2. Develop the tools used in estimating ARMA models.
3. Consider the time-series properties of stationary and nonstationary models.
4. Consider various test statistics to check for model adequacy. Several examples of
estimated ARMA models are analyzed in detail. It is shown how a properly estimated
model can be used for forecasting.
5. Derive the theoretical autocorrelation function for various ARMA processes
6. Derive the theoretical partial autocorrelation function for various ARMA processes
7. Show how the Box-Jenkins methodology relies on the autocorrelations and partial
autocorrelations in model selection.
8. Develop the complete set of tools for Box-Jenkins model selection.
9. Examine the properties of time-series forecasts.
10. Illustrate the Box-Jenkins methodology using a model of the term structure of
interest rates.
11. Show how to model series containing seasonal factors.
12. Develop diagnostic testing for model adequacy.
13. Show that combined forecasts typically outperform forecasts from a single model.
At several points in the text, I indicate that forecasting is a blend of the scientific method and art. I try to stress that
there are several guidelines that can be very helpful in selecting the most appropriate forecasting model:
1. Looking at the time path of a series is the single most important step in forecasting the series. Examining the
series allows you to see whether it has a clear trend and to get a reasonable idea of whether the trend is linear or nonlinear.
Similarly, a series may or may not have periods of excess volatility. Outliers and other potential problems with the data can
often be revealed by simply looking at the data. If the series does not seem to be stationary, there are several
transformations (see below) that can be used to produce a series that is likely to be covariance stationary.
2. In most circumstances, there will be several plausible models that fit the data. The in-sample and out-of-sample
properties of such models should be thoroughly compared.
3. It is standard to plot the forecasts in the same graph as the series being forecasted. Sometimes it is desirable to
place confidence intervals around the forecasted values. If you choose a transformation of the series [e.g., log(x)] you
should forecast the values of the series, not the transformed values.
4. The Box-Jenkins method will help you select a reasonable model. The steps in the Box-Jenkins methodology
entail (a short RATS sketch of these steps is given below):
Identification
Graph the data (see point 1 above) in order to determine whether any transformations are necessary
(logarithms, differencing, ...). Examine the ACF and the PACF of the transformed data in order to determine the
plausible models.
Estimation
Estimate the plausible models and select the best. You should entertain the possibility
of several models and estimate each. The best will have coefficients that are
statistically significant and a good fit (use the AIC or SBC to determine the fit).
Diagnostic Checking
Examine the ACF and PACF of the residuals to check for significant autocorrelations. Use the Q-statistics to
determine whether groups of autocorrelations are statistically significant.
Other diagnostic checks include (i) out-of-sample forecasting of known data values, (ii)
splitting the sample, and (iii) overfitting (adding a lagged value that should be
insignificant). You can also check for parameter instability and structural change using the methods discussed
in Section 12.
Forecasting
Use the methods discussed in Section 9 to compare the out-of-sample forecasts of the alternative models.
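As a concrete illustration, the short RATS sketch below strings the four steps together using instructions that appear in the sample programs later in this manual. It is only a template: the series name y and the AR(1) specification are placeholders rather than one of the text's examples.

* Identification: graph the series and examine the ACF and PACF of the (transformed) data
graph 1 ; # y
cor(partial=pacf,qstats,number=24,span=8) y
* Estimation: fit each plausible model and compare the fits with the AIC and SBC
boxjenk(constant,ar=1) y / resids
compute aic = %nobs*log(%rss) + 2*%nreg
compute sbc = %nobs*log(%rss) + %nreg*log(%nobs)
display 'aic = ' aic 'sbc = ' sbc
* Diagnostic checking: the residuals should show no significant autocorrelations
cor(partial=pacf,qstats,number=24,span=8) resids
* Forecasting: compare the out-of-sample forecasts of the surviving models (Section 9)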
5. My own view is that too many econometricians (students and professionals) overfit the data by including
marginally significant intermediate lags. For example, with quarterly data, someone might fit an ARMA model with
AR coefficients at lags 1, 4, and 9 and an MA coefficient at lag 7. Personally, I do not think that such models make
any sense. As suggested by the examples of the interest rate spread and the data in the file SIM_2.XLS, fitting
isolated lag coefficients is highly problematic.
Transforming the Variables: I use Figure M2 to illustrate the effects of differencing and over-differencing. The
first graph depicts 100 realizations of the unit root process yt = 1.5yt-1 − 0.5yt-2 + εt. If you examine the graph, it is
clear there is no tendency for mean reversion. This non-stationary series has a unit root that can be eliminated by
differencing. The second graph in the figure shows the first difference of the {yt} sequence: Δyt = 0.5Δyt-1 + εt. The
positive autocorrelation (ρ1 = 0.5) is reflected in the tendency for large (small) values of Δyt to be followed by other
large (small) values. It is simple to make the point that the {Δyt} sequence can be estimated using the Box-Jenkins
methodology. It is obvious to students that the ACF will reflect the positive autocorrelation. The third graph shows
the second difference: Δ²yt = 0.5Δ²yt-1 + εt − εt-1. Students are quick to understand the difficulties of estimating this
overdifferenced series. Due to the extreme volatility of the {Δ²yt} series, the current value of Δ²yt is not helpful in
predicting Δ²yt+1.
The effects of logarithmic data transformations are often taken for granted. I use a second figure to illustrate
the effects of the Box-Cox transformation. The first graph shows 100 realizations of the simulated AR(1) process:
yt = 5 + 0.5yt-1 + εt. The {εt} series is precisely the same as that used in constructing the graphs in Figure M2. In
fact, the only difference between the middle graph of Figure M2 and the first graph of the second figure involves the
presence of the intercept term. The effects of a logarithmic transformation can be seen by comparing the two
left-hand-side graphs of the second figure. It should be clear that the logarithmic transformation "smooths" the series. The
natural tendency is for students to think smoothing is desirable. However, I point out that actual data (such as asset
prices) can be quite volatile and that individuals may respond to the volatility of the data and not to the logarithm of the
data. Thus, there may be instances in which we do not want to reduce the variance actually present in the data. At
this time, I mention that the material in Chapter 3 shows how to estimate the conditional variance of a series. Two
Box-Cox transformations are shown in the right-hand-side graphs of the second figure. Notice that decreasing λ reduces
variability and that a small change in λ can have a pronounced effect on the variance.
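If you want to generate series like those in the figures yourself, the following RATS sketch (not part of the original manual; the seed, sample size, and series names are arbitrary) simulates the unit-root process and then takes first and second differences:

all 100
seed 2014                                  ;* any seed will do
set eps = %ran(1.0)                        ;* N(0,1) innovations
set y 1 2 = 0.0
set y 3 100 = 1.5*y{1} - 0.5*y{2} + eps    ;* yt = 1.5yt-1 - 0.5yt-2 + et (unit root)
dif y / dy                                 ;* first difference: dyt = 0.5dyt-1 + et
dif dy / d2y                               ;* second (over)difference
gra(footer='A unit-root process and its first and second differences') 3
# y ; # dy ; # d2y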
[Figure M2: the top panel shows 100 realizations of the unit-root process yt = 1.5yt-1 − 0.5yt-2 + εt; the unit root means that the sequence does not exhibit any tendency for mean reversion. The middle and bottom panels show the first difference and the second difference of the series.]
[Figure: the AR(1) process yt = 5 + 0.5yt-1 + εt and its Box-Cox transformations, four panels.]
The first graph shows 100 realizations of a simulated AR(1) process; by construction, the standard deviation of the
{yt} sequence is 0.609. The next three graphs show the results of Box-Cox transformations using values of λ = 0.5,
0.0, and −0.5, respectively. You can see that decreasing λ acts to smooth the sequence.
cor(partial=pacf,qstats,number=24,span=8) y1
graph 1
# y1
boxjenk(ar=1) y1 / resids      ;* estimates an AR(1) model and saves the
                               ;* residuals in the series called resids
cor(partial=pacf,qstats,number=24,span=8) y2
graph 1
# y2
*RATS contains a procedure to plot autocorrelation and partial autocorrelations. To use the
*procedure use the following two program lines:
source(noecho) c:\winrats\bjident.src
@bjident y2
boxjenk(ar=1) y2 / resids
;* estimates an AR(1) model and saves the residuals
cor(number=24,partial=partial,qstats,span=8) resids / cors
compute aic = %nobs*log(%rss) + 2*%nreg
compute sbc = %nobs*log(%rss) + %nreg*log(%nobs)
display 'aic = ' AIC 'sbc = ' sbc
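For reference, the two COMPUTE lines above implement the forms of the information criteria used in AETS, with T the number of usable observations (%nobs) and n the number of estimated coefficients (%nreg):
AIC = T·ln(RSS) + 2n
SBC = T·ln(RSS) + n·ln(T)
Smaller values of either criterion indicate a better fitting model.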
* To compare the MA(2) to the ARMA(1, 1) you need to be a bit careful. For a head-to-head
* comparison, you need to estimate the models over the same sample period. The ARMA(1, 1)
* uses 99 observations while the MA(2) uses all 100 observations.
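One way to carry out the head-to-head comparison in RATS is sketched below; the common sample of entries 2 through 100 simply mirrors the discussion above, and the code is not part of the original answer:

* Estimate both models over the common sample 2 to 100 and compare the criteria
boxjenk(ar=1,ma=1) y2 2 100 resids_arma
compute aic_arma = %nobs*log(%rss) + 2*%nreg
compute sbc_arma = %nobs*log(%rss) + %nreg*log(%nobs)
boxjenk(ma=2) y2 2 100 resids_ma2
compute aic_ma2 = %nobs*log(%rss) + 2*%nreg
compute sbc_ma2 = %nobs*log(%rss) + %nreg*log(%nobs)
display 'ARMA(1,1): aic =' aic_arma 'sbc =' sbc_arma
display 'MA(2):     aic =' aic_ma2 'sbc =' sbc_ma2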
10. The third column in SIM_2.XLS contains the 100 values of an AR(2) process; this series is entitled Y3. The
following programs will perform the tasks indicated in the text. Due to differences in data handling and rounding,
your answers need only approximate those reported in the text.
EViews users should see the notes to Question 8 above.
Sample Program for RATS Users
Use the first three lines from Question 7 or 8 to read in the data set. To graph the series use:
graph 1 ; # y3
boxjenk(ar=1) y3 / resids
*To estimate the AR(2) model with the single MA coefficient at lag 16 use:
boxjenk(ar=2,ma=||16||) y3 / resids
11. If you have not already done so, download the Programming Manual that accompanies this text and the data set
QUARTERLY.XLS.
a. Section 2.7 examines the price of finished goods as measured by the PPI. Form the logarithmic change in the PPI
as: dlyt = log(ppit) − log(ppit-1). Verify that an AR(||1,3||) model of the dlyt series has a better in-sample fit than an
AR(3) or an ARMA(1,1) specification.
EViews Users:
The file aets4_ch2_question11.wf1 contains the variables ppi and dly.
Programs for RATS USERS
*Read in the data set using:
cal(q) 1960 1
all 2012:4
open data c:\aets4\quarterly.xls
data(org=obs,format=xls)
tab(picture='*.##')
* Create dlyt = log(ppit) − log(ppit-1).
log ppi / ly
dif ly / dly
* Estimate the AR(3) model using
box(constant,ar=3) dly / resids
* For each, compare the fit using: (Be sure to estimate each over the same sample period)
com aic = %nobs*log(%rss) + 2*(%nreg)
com sbc = %nobs*log(%rss) + (%nreg)*log(%nobs)
display "aic = " aic "bic = " sbc
b. How does the out-of-sample fit of the AR(||1,3||) compare to that of the ARMA(1,1)?
Notes for EViews Users
The file aets4_ch2_question11.wf1 for instructors contains the one-step-ahead forecasts and a ±2 standard error
band for the ARMA(1,1) model. The graph is entitled graph_q11 and the forecasts are in the series dlyf. Note that
in creating the forecasts, you use the option STATIC. The dynamic forecasts are the multi-step-ahead forecasts
conditional on the initial observation. For example, click on the equation name eq_arma11 and then click on
Forecast. The Method option allows you to choose either method. If you choose the entire sample period (the
default) you should obtain:
Forecast: DLYF    Actual: DLY
Forecast sample: 1960Q1 2012Q4    Adjusted sample: 1960Q3 2012Q4
Included observations: 210
Root Mean Squared Error       0.009333
Mean Absolute Error           0.006253
Mean Abs. Percent Error       109.0738
Theil Inequality Coefficient  0.373616
  Bias Proportion             0.000003
  Variance Proportion         0.266415
  Covariance Proportion       0.733582
[The accompanying graph plots DLYF together with its ±2 S.E. band over 1965-2010.]
12. Section 2.9 of the Programming Manual that accompanies this text considers several seasonal models of the variable
Currency (Curr) in the data set QUARTERLY.XLS.
a. First-difference log(currt) and obtain the ACF and PACF of the resultant series. Does the seasonal pattern best
reflect an AR, MA, or a mixed pattern? Why is there a problem in estimating the first difference using the
Box-Jenkins methodology?
b. Now obtain the ACF and PACF of the seasonal difference of the first difference. What pattern is likely present
in the ACF and PACF?
b. Estimate the AR(7) and ARMA(1, 1) models over the period 1960Q1 to 2000Q3. Obtain the one-step-ahead
forecasts and the one-step-ahead forecast errors from each.
* Obtain the out-of-sample forecasts for the AR7 using:
com h = 50, start = 2012:4-h+1
set f_ar7 start * = 0.
do i = 1,h
 boxjenk(define=ar7,constant,ar=7,noprint) y * 2012:4-(h+1)+i
 forecast 1 1
 # ar7 f_ar7
end do
* Create the regression equation used to test for forecast unbiasedness (intercept = 0, slope = 1):
lin y ; # constant f_ar7
test(nozeros)
# 1 2
# 0 1
c. Construct the Diebold-Mariano test using the mean absolute error. How do the results compare to those
[Partial regression output for the model with the break dummies; the coefficient column did not survive extraction.]
Variable         Std. Error    t-Statistic    Prob.
C                0.221870       7.218226      0.0000
Y_BREAK(-1)      0.092321       2.756616      0.0066
DUMMY            0.573773      -0.391130      0.6963
DUMMY_Y          0.121476       4.472253      0.0000
In contrast, nobreak is the model estimated without any of the dummy variables. The file also contains the recursive
estimates, the recursive residuals, and the CUSUMs. To obtain these estimates, open EQ01 and select the View tab.
Select Stability Diagnostics to produce the recursive estimates, recursive residuals, and CUSUMs.
CHAPTER 3
MODELING VOLATILITY
1 Economic Time Series: The Stylized Facts 118
2 ARCH and GARCH Processes 123
3 ARCH and GARCH Estimates of Inflation 130
4 Three Examples of GARCH Models 134
5 A GARCH Model of Risk 141
6 The ARCH-M Model 143
7 Additional Properties of GARCH Processes 146
8 Maximum Likelihood Estimation of GARCH Models 152
9 Other Models of Conditional Variance 154
10 Estimating the NYSE U.S. 100 Index 158
11 Multivariate GARCH 165
12 Volatility Impulse Responses 172
13 Summary and Conclusions 174
Questions and Exercises 176
Online
Appendix 3.1 Multivariate GARCH Models is in the Supplementary Manual.
Learning Objectives
1. Examine the so-called stylized facts concerning the properties of economic time-series data.
2. Introduce the basic ARCH and GARCH models.
3. Show how ARCH and GARCH models have been used to estimate inflation rate
volatility.
4. Illustrate how GARCH models can capture the volatility of oil prices, real U.S. GDP,
and the interest rate spread.
5. Show how a GARCH model can be used to estimate risk in a particular sector of the
economy.
6. Explain how to estimate a time-varying risk premium using the ARCH-M model.
7. Explore the properties of the GARCH(1,1) model and forecasts from GARCH models.
8. Derive the maximum likelihood function for a GARCH process.
9. Explain several other important forms of GARCH models including IGARCH,
asymmetric TARCH, and EGARCH models.
10. Illustrate the process of estimating a GARCH model using the NYSE 100 Index.
11. Show how multivariate GARCH models can be used to capture volatility spillovers.
12. Develop volatility impulse response functions and illustrate the estimation technique
using exchange rate data.
1. The lower two graphs show the interaction of the ARCH error term and the magnitude of the
AR(1) coefficients. Increasing the magnitude of the AR(1) coefficient from 0.0, to 0.2, to 0.9,
increased the volatility of the simulated {yt} sequence. For your convenience, a copy of the figure is
reproduced below.
2. Instead of assigning Question 4 as a homework assignment, I use the three models to illustrate the properties of an
ARCH-M process. Consider the following three models:
Model 1: yt = 0.5yt-1 + εt
Model 2: yt = εt − (εt-1)²
Model 3: yt = 0.5yt-1 + εt − (εt-1)²
Model 1 is a pure AR(1) process that is familiar to the students. Model 2 is a pure ARCH-M process. When
the realized value of εt-1 is large in absolute value, the conditional expectation of yt is negative: Et-1yt = −(εt-1)². Thus,
Model 2 illustrates a simple process in which the conditional mean is negatively related to the absolute value of the
previous period's error term.
[Figure: four panels (a)-(d) showing simulated ARCH-error series, including yt = 0.2yt-1 + εt and yt = 0.9yt-1 + εt, 100 observations each.]
Suppose that all values of εi for i ≤ 0 are zero. Now, if the next 5 values of the εt sequence are (1, −1, −2, 1, 1), yt has
the time path shown in Figure 3M-1 (see the answer to Question 4 below). I use a projection of Figure 3M-1 to compare
the paths of the AR(1) and ARCH-M models. Model 3 combines the AR(1) model with the ARCH-M effect exhibited by
Model 2. The dotted line shown in Figure 3M-1 shows how the AR(1) and ARCH-M effects interact.
* Next, estimate an AR(1) model without an intercept and produce the ACF and PACF.
boxjenk(ar=1) y / resids
cor(partial=pacf,qstats,number=24,span=4,dfc=1) resids
* Now, define sqresid as the squared residuals from the AR(1) model and construct the ACF
* and PACF of these squared residuals.
set sqresid = resids*resids
cor(partial=pacf,qstats,number=24,span=4,dfc=1) sqresid
Instead of the GARCH instruction, users of older versions of RATS can use:
nonlin b1 a0 a1                        ;* prepares for a non-linear estimation of b1, a0, and a1
frml regresid = y - b1*y{1}            ;* defines the residual
frml archvar = a0 + a1*regresid(t-1)**2     ;* defines the variance
frml archlogl = (v=archvar(t)), -0.5*(log(v)+regresid(t)**2/v)  ;* defines the likelihood
boxjenk(ar=1) y                        ;* estimate an AR(1) in order to obtain an initial
compute b1=%beta(1)                    ;*   guess for the value of b1 and a0
compute a0=%seesq, a1=.3               ;* the initial guesses of a0 and a1
* Given the initial guesses and the definition of archlogl, the next line performs the
* non-linear estimation of b1, a0, and a1.
maximize(method=bhhh,recursive,iterations=75) archlogl 3 *
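In current versions of RATS, the same AR(1)-ARCH(1) model can be estimated in one step with the GARCH instruction mentioned above. A sketch is given below; the option names should be checked against the manual for your version (Q= sets the number of ARCH terms and P= the number of lagged conditional variances):

garch(p=0,q=1,regressors) / y
# y{1}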
6. The second series on the file ARCH.XLS contains 100 observations of a simulated ARCH-M process.
;* The second series on the file is called y_m. TABLE produces the desired
;* summary statistics.
data(format=xls,org=obs)
*Next create the annualized growth rate using:
log rgdp / ly ; dif ly / dly; set dlya = 400*dly
9. This program produces the results for the NYSE data used in Section 10. In some weeks, there
were not five trading days due to holidays and events such as 9/11. The data in the file sets the
rate of return equal to zero for such dates. Any capital gain or loss is attributed to the first day
after trading resumes.
Notes for EViews Users
1. In the Instructor's Manual the file nyse(returns).wf1 contains the answers to the questions for
this section. The GROUP spreadsheet contains the variables return and rate. The construction
of rate was discussed above.
2. The table labeled p160 contains the estimate of rate as an AR(2) and acf_squaredresid is the
ACF of the squared residuals from the AR(2) model.
3. The estimates on pages 160-163 are clearly labeled in the file.
Sample Program for RATS Users
* Read in the data using:
CAL(daily) 2000 1 3
all 2012:7:16
open data c:\aets4\chapter_3\nyse(returns).xlsx
data(org=obs,format=xlsx)
* Create Figure 3.3
gra(footer='Figure 3.3: Percentage Change in the NYSE US 100: (Jan 4, 2000 - $
July 16, 2012)',vla='percentage change',patterns) 1
# rate
* Create the histogram using:
stat(noprint) rate ; set standard = (rate-%mean)/%variance**.5
@histogram(distri=normal,maxgrid=50) standard
10. Use the data of the file EXRATES(DAILY).XLS to estimate a bivariate model of the pound
and euro exchange rates.
Notes for EViews Users
1. In the Instructor's Manual, the workfile aets4_ch3_q10.wf1 contains the answers to this question. The three
exchange rates (euro, pound, and sw) and their logarithmic changes (dleu, dluk, and dlsw) are contained in the file.
Include dlsw to reproduce the results in the text.
2. In order to estimate the CCC model it is necessary to combine the model of the mean and the format of the
variance into a SYSTEM. For the diagonal vech the following code was used:
system sys01
sys01.append dleu = c(1)
sys01.append dluk = c(2)
sys01.arch @diagvech c(indef) arch(1) garch(1)
Hence, the models of the mean are simply constants; c(1) and c(2) are the intercepts for the euro and pound
equations, respectively. The last instruction specifies a GARCH(1, 1) process for the conditional variances.
Similarly, the SYSTEM instructions for the CCC model are:
system sys02
sys02.append dleu = c(1)
sys02.append dluk = c(2)
sys02.arch @ccc c(indef) arch(1) garch(1)
These sets of instructions are in the SYSTEM tabs sys01 and sys02. The tabs diagvech and ccc contain the output.
Sample Program for RATS Users
* The following program will reproduce the results reported in the text. Simply eliminate the Swiss franc from the
* models below to answer Question 10.
OPEN DATA "C:\AETS4\Chapter_3\exrates(daily).xls"
CALENDAR(D) 2000:1:3
ALL 2013:04:26
DATA(FORMAT=XLS,ORG=COLUMNS)
* Fill in the missing values using
set euro = %if(%valid(euro),euro,euro{1})
set pound = %if(%valid(pound),pound,pound{1})
set sw1 = %if(%valid(sw),sw,sw{1})
set sw = 1/sw1 ;* convert to the same base currency
*Create the logarithmic changes of the three rates
log euro / leu ; dif leu / dleu
log pound / luk ; dif luk / dluk
log sw / lsw ; dif lsw / dlsw
* Create Figure 3.5 using:
labels pound sw euro;# 'Pound' 'Swiss Fr' 'Euro'
SPGRAPH
gra(footer='Figure 3.5: Daily Exchange Rates (Jan 3, 2000 - April 4, 2013)', $
vla='currency per dollar',patterns) 3
# euro / 1 ; # pound / 2; # sw / 12
GRTEXT(ENTRY=2000:6:1,Y=1.75,size=18) 'Pound'
GRTEXT(ENTRY=2000:6:1,Y=1.05,size=18) 'Euro'
GRTEXT(ENTRY=2000:6:1,Y=0.70,size=18) 'Sw. Franc'
SPGRAPH(DONE)
12. In Section 4, it was established that a reasonable model for the price of oil is an MA(1) with
the GARCH conditional variance: ht = 0.402 + 0.097ε²t-1 + 0.881ht-1.
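As a quick check on these estimates (this back-of-the-envelope calculation is not part of the question), note that the sum of the ARCH and GARCH coefficients is 0.097 + 0.881 = 0.978, so conditional volatility is highly persistent. Under the usual GARCH(1,1) stationarity condition, the implied unconditional variance is
σ² = 0.402/(1 − 0.097 − 0.881) ≈ 18.3.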
Notes for EViews Users
1. In the Instructor's Manual, the workfile aets4_ch3_q12.wf1 contains the spot price of oil and
the variable p = 100*dlog(spot). The time plot of spot is contained in graphspot.
2. The variable dummy was created using the second method discussed in Question 8 above. The
variable time was generated using
time = @trend+1
gra(footer='Figure 3.6: Weekly Values of the Spot Price of Oil: (May 15, 1987 - Nov 1, $
2013)',vla='dollars per barrel',patterns) 1
# spot 2000:1:2 *
* To create Figure 3.13, standardize the sample data to mean zero, variance one.
diff(standardize) rate / stdreturn
density(smooth=1.5) stdreturn / xdensity fdensity
set normalf = %density(xdensity)
* Next is a t(3) standardized to a variance of 1.0. (The variance of a
* non-standardized t(nu) is nu/(nu-2), which is 3 for nu=3.)
set tf = %tdensity(xdensity*sqrt(3.0),3.0)*sqrt(3.0)
scatter(patterns,nokbox,footer="Figure 3.13 Distribution of Oil Price Changes",style=line, $
key=below,klabels=||"Actual change","Normal density","t density"||) 3
# xdensity fdensity
# xdensity normalf
# xdensity tf
CHAPTER 4
MODELS WITH TREND
1 Deterministic and Stochastic Trends 181
2 Removing the Trend 189
3 Unit Roots and Regression Residuals 195
4 The Monte Carlo Method 200
5 Dickey-Fuller Tests 206
6 Examples of the Dickey-Fuller Test 210
7 Extensions of the Dickey-Fuller Test 215
8 Structural Change 227
9 Power and the Deterministic Regressors 235
10 Tests with More Power 238
11 Panel Unit Root Tests 243
12 Trends and Univariate Decompositions 247
13 Summary and Conclusions 254
Questions and Exercises 255
Learning Objectives
1. Formalize simple models of variables with a time-dependent mean.
2. Compare models with deterministic versus stochastic trends.
3. Show that the so-called unit root problem arises in standard regression and in time-series models.
4. Explain how Monte Carlo and simulation techniques can be used to derive critical
values for hypothesis testing.
5. Develop and illustrate the Dickey-Fuller and augmented Dickey-Fuller tests for the
presence of a unit root.
6. Apply the Dickey-Fuller tests to U.S. GDP and to real exchange rates.
7. Show how to apply the Dickey-Fuller test to series with serial correlation, moving
average terms, multiple unit roots, and seasonal unit roots.
8. Consider tests for unit roots in the presence of structural change.
9. Illustrate the lack of power of the standard Dickey-Fuller test.
10. Show that generalized least squares (GLS) detrending methods can enhance the
power of the Dickey-Fuller tests.
11. Explain how to use panel unit root tests in order to enhance the power of the
Dickey-Fuller test.
12. Decompose a series with a trend into its stationary and trend components.
Lecture Suggestions
1. A common misconception is that it is possible to determine whether or not a series is stationary
by visually inspecting the time path of the data. I try to dispel this notion using overhead
transparencies of the four graphs in Figure 4.2. I cover up the headings in the figures and ask the
students if they believe that the series are stationary. All agree that the two series in graphs (b) and (c)
of the figure are non-stationary. However, there is no simple way to determine whether the
series are trend stationary or difference stationary. I use these same overheads to explain why unit
root tests have very low power. Figure 4.2 (with the captions removed) is reproduced on
the next page for your convenience.
2. Much of the material in Chapter 4 relies on the material in Chapter 1. I remind students of the
relationship between characteristic roots, stability, and stationarity. At this point, I solve some of the
mathematical questions involving unit root processes. You can select from Questions 5, 6, 9 and 10 of
Chapter 1 and Question 1 of Chapter 4.
[Figure 4.2, with the captions removed: four panels (a)-(d) of simulated series, 100 observations each.]
Augmented Dickey-Fuller test statistic:  t-Statistic = -1.678217   Prob.* = 0.4399
Test critical values: 1% level -3.482453, 5% level -2.884291, 10% level -2.578981
Since the sample t-statistic is −1.678 and the 5% critical value is −2.88, it is not possible to reject the null
hypothesis of a unit root. Open the la series and select Unit Root Test from the View tab. For Test
Type select Augmented Dickey-Fuller and in Test for a unit root in select the Level
button. Include an Intercept (not a Trend and Intercept). As can be seen from the output above, the
Lag length was Automatic selection based on the t-statistic option using Maximum lags of 8.
3. In the Instructor's Manual, the results for the ERS test for the Australian and Canadian rates are in the Tables ers_a
and ers_c. Again, open the la series and select Unit Root Test from the View tab. For Test Type now
select Dickey-Fuller GLS (ERS) and in Test for a unit root in select the Level button. Include
an Intercept (not a Trend and Intercept).
4. In the Instructor's Manual, the workfile aets4_ch4_q4c.wf1 contains the answers to part c of this question. The
series y, y_tilde, yd, z1 and z2 are in the GROUP labeled data. You can examine data or the spreadsheet
ERSTEXT.XLS to see how the data were generated. The results of the ERS test are in the Table ers_test.
@hegyqnew(signif=0.05,criterion=nocrit,nlag=10) lx
5. The second column in the file BREAK.XLS contains the simulated data used in Section 8.
a. Plot the data to see if you can recognize the effects of the structural break.
b. Verify the results reported in Section 8.
Notes for EViews Users
1. The file aets4_ch4_section 8.wf1 reproduces the results from Section 8. The graph of y1 is in the GRAPH
graph01. In addition to the series y1 and y2, there are level shift and pulse dummy variables labeled dl and dp,
respectively. To create dummy variables, you should see the discussion in Chapter 2, Question 16. For now, note
that the dummy variables can be generated using:
time = @trend(1)+1
dl = @recode(time>50,1,0)
dp = @recode(time=50,1,0)
In the Instructor's Manual, the ACF is in the Table acf_y1; you can see that the series is quite persistent. A simple
Dickey-Fuller test (ignoring the break) is in df_y1 and the estimated model y1t = c + a1y1t-1 + a2timet + a3dlt + a4dpt
is in the Table estimatedmodel_y1.
2. The results for the series y2 are in similarly named Tabs.
RATS PROGRAM FOR PARTS A and B
The data set contains 100 simulated observations with a break occurring at t = 51. To replicate the results in
section 8, perform the following:
all 100
open data a:\break.xls
data(format=xls,org=obs)
set trend = t
* The graph of the series shown in Figure 4.10 was created using
graph 1 ; # y1
* The ACF is obtained using
cor(method=yule) y1
6. The file RGDP.XLS contains the real GDP data that was used to estimate (4.29).
Notes for EViews Users
1. In the Instructor's Manual, the workfile aets4_ch4_q6.wf1 contains the answers to this question. In addition to the
series in REAL.WF1, the file contains the log of real GDP (lrgdp) and the growth rate of real GDP (dlrgdp). Note that level
is a level shift dummy and dtrend is a dummy equal to zero until 1973Q1 and equal to the series 105, 106, 107, ...
thereafter. The dummies were created using a combination of the @recode, @date, @dateval, and @trend
functions:
level = @recode(@date<@dateval("1973/02"), 1, 0)
dtrend = @recode(@date<@dateval("1973/02"),0, @trend)
2. The Table eq_429 reports the results of the Dickey-Fuller test reported in (4.29) of the text and perrontest reports
the results of the Perron test for lrgdp.
3. To obtain the results in hpfilter, open the lrgdp series and select the PROC tab. In the OUTPUT series boxes,
enter hptrend and hpcycle and do not change the default value of Lambda = 1600. The plot of lrgdp, hptrend and
hpcycle is in the GRAPH hpseries.
4. The estimate for part d is in part_d and the residual plots are in residuals_partd.
Sample Program for RATS USERS
a. The following program will replicate the results in Section 6.
* Read in the data with:
cal(q) 1947 1
all 2012:4
open data c:\aets4\real.xls
data(format=xls,org=obs)
* Transform the variables
set time = t
log rgdp / ly
dif ly / dly
lin dly ; # constant time ly{1} dly{1}
exclude ; # time ly{1}
exc ; # constant time ly{1}
* You can create Figure 4.12 with:
set trgdp = rgdp/1000. ; set trcons = rcons/1000. ; set trinv = rinv/1000.
@hpfilter trgdp / hp_rgdp
@hpfilter trcons / hp_rcons
@hpfilter trinv / hp_rinv
spg(hfi=1,vfi=1)
gra(footer='Figure 4.12',pat,vla='trillions of 2005 dollars') 6
# trgdp / 2 ; # hp_rgdp / 1 ; # trcons / 2 ; # hp_rcons / 1 ; # trinv / 2 ; # hp_rinv
Grt(entry = 2003:1,y = 13,size=18) 'RGDP'
Grt(entry = 2004:1,y = 6.5,size=18) 'Consumption'
Grt(entry = 2004:1,y = 2.8,size=18) 'Investment'
spg(done)
7. The file PANEL.XLS contains the real exchange rate series used to perform the panel unit root tests reported in
Section 11.
a. Replicate the results of Section 11.
Notes for EViews Users
1. In the Instructor's Manual, the answers are contained in the workfile aets4_ch4_q7.wf1. The first step is to group
the log of the exchange rates as in the GROUP groupedrates. Go to the View tab and select Unit Root Test. In
the dialogue box, for Test type select Individual root-Im, Pesaran, Shin. Select the Level button and
include an Individual intercept.
Augmented Dickey-Fuller test statistic:  t-Statistic = -2.293050   Prob.* = 0.435
In order to reproduce these results, open the series lindprod and on the View menu select Unit Root
Test. In the dialogue box Test type select Augmented Dickey-Fuller. Select the Level and
Trend and intercept buttons. In the Lag length box, select t-statistic and use the default
value of 14.
2. To use eight lagged changes for the unemp series, open the series and on the View menu select Unit Root
Test. In the dialogue box Test type select Augmented Dickey-Fuller. Select the Level and
intercept buttons. There is no reason to incorporate a trend for the unemployment rate series. In the Lag
length box, select User specified and enter 8 in the dialogue box. You will obtain the results reported
in the Table part_b.
3. To use only one lagged change in the test, repeat the steps in part b but enter 1 in the dialogue box for Lag
length. The result should be identical to that in the Table labeled part_c.
4. The Table part_d reports the effects of regressing indprod on m1nsa. Note that the t-statistics are very high and
11. Chapter 6 of the Programming Manual analyzes the real GDP data in the file QUARTERLY(2012).XLS. Unlike
the real GDP data used in the text, the data in this file begin in 1960Q1. Perform parts a through e below using
this shorter data set.
Notes for EViews Users
1. In the Instructor's Manual, the workfile aets4_ch4_q11.wf1 contains the answers to all parts of this question.
From the Quick tab, estimate the equation: lrgdp c @trend. This equation is labeled eq_parta. From the
Proc menu, select Make residual series and name the series resid01. In the workfile, we obtained the ACF of
resid01 and named it part_a. The correlations show only a mild tendency to decay.
2. Perform a Dickey-Fuller test on lrgdp. If you use the AIC to select the lag length, you should obtain the results in
part_b:
Null Hypothesis: LRGDP has a unit root
Exogenous: Constant, Linear Trend
Lag Length: 2 (Automatic - based on AIC, maxlag=14)
Augmented Dickey-Fuller test statistic:  t-Statistic = -2.163125982   Prob.* = 0.50718420
3. Generate the series cycle as log(potent) − lrgdp. Perform a unit root test (without a trend). The Dickey-Fuller
results are in df_cycle and the DF-GLS results are in gls_cycle. Note that the DF-GLS test is more supportive of
stationarity than the DF test.
Selected Program for RATS Users
a. Form the log of real GDP as lyt = log(RGDP). Detrend the data with a linear time trend and form the
autocorrelations.
ANSWER:
*READ IN THE DATA USING:
cal(q) 1960 1
all 2012:4
open data c:\RatsManual\quarterly(2012).xls
data(org=obs,format=xls)
log rgdp / ly
set trend = t
lin ly / resids1 ; # constant trend
cor(number=8,picture='##.##') resids1
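For part b (the Dickey-Fuller test on lrgdp), a sketch along the lines of the Section 6 program given earlier in this chapter; the two lagged changes mirror the lag length that EViews selects with the AIC, and this code is not from the original answer:

dif ly / dly
lin dly ; # constant trend ly{1} dly{1} dly{2}
exc ; # trend ly{1}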
CHAPTER 5
MULTIEQUATION TIME-SERIES MODELS
1 Intervention Analysis 261
2 ADLs and Transfer Functions 267
3 An ADL of Terrorism in Italy 277
4 Limits to Structural Multivariate Estimation 281
5 Introduction to VAR Analysis 285
6 Estimation and Identification 290
7 The Impulse Response Function 294
8 Testing Hypotheses 303
9 Example of a Simple VAR: Domestic and Transnational Terrorism 309
10 Structural VARs 313
11 Examples of Structural Decompositions 317
12 Overidentified Systems 321
13 The Blanchard-Quah Decomposition 325
14 Decomposing Real and Nominal Exchange Rates: An Example 331
15 Summary and Conclusions 335
Questions and Exercises 337
Learning Objectives
1. Introduce intervention analysis and transfer function analysis.
2. Show that transfer function analysis can be a very effective tool for forecasting and
hypothesis testing when it is known that there is no feedback from the dependent to the
so-called independent variable.
3. Use data involving terrorism and tourism in Italy to explain the appropriate way to
estimate an autoregressive distributed lag (ADL).
4. Explain why the major limitation of transfer function and ADL models is that many
economic systems do exhibit feedback.
5. Introduce the concept of a vector autoregression (VAR).
6. Show how to estimate a VAR. Explain why a structural VAR is not identified from a
VAR in standard form.
7. Show how to obtain impulse response and variance decompositions.
8. Explain how to test for lag lengths, Granger causality, and exogeneity in a VAR.
9. Illustrate the process of estimating a VAR and obtaining the impulse responses
using transnational and domestic terrorism data.
10. Develop two new techniques, structural VARs and multivariate decompositions,
which blend economic theory and multiple time-series analysis.
11. Illustrate several types of restrictions that can be used to identify a structural VAR.
12. Show how to test overidentifying restrictions. The method is illustrated using both
macroeconomic and agricultural data.
13. Explain how the Blanchard-Quah restriction of long-run neutrality can be used to
identify a VAR.
14. The Blanchard-Quah decomposition is illustrated using real and nominal exchange
rates.
Key Concepts
1. Although it is possible to skip the estimation of transfer functions, Sections 1 through 3 act as an
introduction to VAR analysis. In a sense, VAR analysis can be viewed as a progression.
Intervention analysis treats {yt} as stochastic and {zt} as a deterministic process. Transfer function
analysis allows {zt} to be stochastic, but assumes that there is no feedback from the {yt} sequence to the {zt}
sequence. The notion of an autoregressive distributed lag (ADL) is also introduced here. Finally,
VAR analysis treats all variables symmetrically. In my classes I use Section 4 to explain the
limitations of intervention and transfer function analysis and to justify Sims' methodology.
2. I emphasize the distinction between the VAR residuals and the structural innovations. Questions
4, 5, and 6 at the end of the chapter are especially important. I work through one of these questions
in the classroom and assign the other two for homework. You might project Figure 5.7 in order to
illustrate the effects of alternative orderings in a Choleski decomposition. A large-sized version of
the figure is included here for your convenience.
[Figure 5.7: impulse responses for Model 1 and Model 2 under alternative Choleski decompositions. Panels (a)-(d) show the responses to a zt shock and to a yt shock over 20 periods. Legend: solid line = {yt} sequence.]
 Lag     LogL        LR         FPE         AIC         SC          HQ
  0    324.9193      NA       8.29e-06   -3.187320   -3.138187   -3.167441
  1    544.0122   429.5088    1.04e-06   -5.267448   -5.070917*  -5.187931*
  2    552.4162   16.22544    1.04e-06   -5.261546   -4.917617   -5.122392
  3    565.7126   25.27636    9.98e-07   -5.304085   -4.812758   -5.105293
  4    572.8451   13.34702    1.02e-06   -5.285595   -4.646870   -5.027166
  5    579.6017   12.44282    1.04e-06   -5.263383   -4.477260   -4.945316
  6    591.1367   20.89999    1.02e-06   -5.288482   -4.354961   -4.910778
  7    604.1574   23.20527    9.76e-07*  -5.328291   -4.247372   -4.890949
  8    611.7356   13.28055    9.92e-07   -5.314214   -4.085896   -4.817234
  9    622.3190   18.23280*   9.78e-07   -5.329891*  -3.954176   -4.773274
[Granger causality block for another dependent variable; only the df column (3, 3, 6) and the Prob. column (0.0010, 0.0005, 0.0000) survived extraction.]
Dependent variable: S

 Excluded      Chi-sq      df      Prob.
 DLIP          1.188741     3      0.7557
 DUR           4.423043     3      0.2193
 All          17.33896      6      0.0081
3. The selection Impulse will yield the impulse response function (see the Table impulseresponses). The
variance decompositions are in the Table variancedecompositions.
Sample Program for RATS Users
a. If you perform a test to determine whether st Granger causes lipt you should find that the F-statistic is 2.44 with a
prob-value of 0.065. How do you interpret this result?
* Continue with the program above. Note that the Granger-causality tests are produced with:
system(model=chap5)
var dlip dur s
lags 1 to 3
det constant
end(system)
estimate(residuals=resids3,out=sigma)
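Before settling on three lags, you can check the lag length with the @varlagselect procedure used elsewhere in this manual; a sketch (the maximum of eight lags is an arbitrary choice):

@varlagselect(lags=8,crit=aic) ; # dlip dur s
@varlagselect(lags=8,crit=bic) ; # dlip dur s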
11. This set of exercises uses data from the file entitled QUARTERLY.XLS in order to estimate the dynamic effects
of aggregate demand and supply shocks on industrial production and the inflation rate. Create the logarithmic change
in the index of industrial production (indprod) as Δlipt = ln(indprodt) − ln(indprodt-1) and the inflation rate (as
measured by the CPI) as inft = log(cpit) − log(cpit-1).
Notes for EViews Users
1. The workfile aets4_ch5_q11.wf1 contains the variables cpi and indprod. The variables inf and dlip were
generated as:
inf = log(cpi) - log(cpi(-1)) and dlip = log(indprod) - log(indprod(-1)).
2. The unit root tests are in the Tables parta_a and partb_b. To reproduce the results for dlip, open the series
dlip and select Unit Root Test from the View tab. It should be clear that the variable is stationary.
However, the results for inf are not as straightforward. If you select the AIC or SBC from the Lag length
dialogue box, it is just possible to reject the null at the 5% level. However, if you use the general-to-specific
method (i.e., the t-statistic option), it is not possible to reject the null of a unit root at conventional
significance levels.
3. The Table var_3lags contains the results for the 3-lag model. The residuals are in the series resid01 and
resid02. To perform the Granger causality tests, click on the SYSTEM bq_var, choose the View tab, select Lag
Structure, and then select Pairwise Granger Causality; you should obtain the results in the
Table grangercausality.
4. The impulse responses are in the
5. As described in the EViews manual, in order to perform the BQ decomposition it is necessary to construct the
pattern matrix (NA denotes an unrestricted element):
NA        NA
0.000000  NA
This is patc in the workfile. You can create this matrix with the following commands:
matrix(2,2) patc = na
patc(2,1) = 0
Now select Proc/Estimate Structural Factorization from the VAR tab. Next, click Matrix and
in the SVAR dialogue box choose Long-Run Pattern and enter patc. You should obtain the results in
the bq_matrix and bq_impulseresponses tables.
CHAPTER 6
COINTEGRATION AND ERROR-CORRECTION MODELS
Key Concepts
Figure 6.1 and Worksheet 6.1 illustrate the concept of cointegration. Worksheet 6.2 illustrates spurious
regressions. You can use Figures M6-1 and M6-2 below for further emphasis. The first two panels in Figure M6-1 show
100 realizations of two independent unit root processes. The {yt} and {zt} sequences were constructed as:
yt = 0.1 + yt-1 + εyt
and:
zt = 0.2 + zt-1 + εzt
Two sets of one hundred random numbers were drawn to represent the {εyt} and {εzt} sequences. Using the
initial values y0 = 0 and z0 = −5, the next 100 realizations of each were constructed using the formulas above. The drift
terms impart a positive trend to each. Since each sequence tends to increase over time, the two appear to move together.
The scatter plot in the third panel and the time plots in the fourth panel reflect this tendency. The spurious regression of
yt on zt appears to have a "good" fit. However, the regression coefficients are meaningless. The problem is that the error
term is a unit-root process; all deviations from the regression line are permanent.
In contrast, the simulated {yt} and {zt} sequences shown in Figure M6-2 are cointegrated. The two random-walk plus
noise processes were simulated as:
yt = yt-1 + εt + ηyt − ηyt-1
zt = zt-1 + εt + ηzt − ηzt-1
where εt, ηyt, and ηzt are computer-generated random numbers.
The series have the same stochastic trend. The scatter plot in the third panel and the time plots in the fourth
panel reflect the tendency of both to rise and fall together in response to the common {εt} shocks. The regression of yt on
zt yields a stationary error process. Hence, all deviations from the regression line are temporary.
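A RATS sketch of this kind of simulation is given below; the seed and series names are arbitrary, and this is not the code actually used to build Figure M6-2:

all 100
seed 2014
set eps  = %ran(1.0)           ;* innovation to the common stochastic trend
set etay = %ran(1.0)           ;* noise in y
set etaz = %ran(1.0)           ;* noise in z
set mu 1 1 = eps
set mu 2 100 = mu{1} + eps     ;* the common random-walk trend
set y = mu + etay              ;* random walk plus noise
set z = mu + etaz              ;* shares the same stochastic trend as y
gra(footer='Two cointegrated random-walk plus noise series') 2 ; # y ; # z
lin y / resids ; # constant z  ;* the residuals from this regression should be stationary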
Notes for EViews Users
1. In the Instructor's Manual, the workfiles aets4_ch6_figure62.wf1 and aets4_ch6_figure63.wf1 were used to construct
Figures 6.2 and 6.3. Notice that you can open the files to examine how the various series were constructed. You can
also examine the files used to create the two worksheets. These two files are named aets4_ch6_worksheet61.wf1 and
aets4_ch6_worksheet62.wf1.
2. EViews reports the results of error correction models differently than other software packages. In RATS, for
example, error correction models are estimated using the two-step Engle-Granger procedure. As such, the error
correction term is the residual from the long-run relationship. In EViews, the estimated model uses a one-step
maximum likelihood estimator. Thus the results can differ from those reported in the text.
As illustrated below, in EViews you can select an error correction model from the Quick tab. Select
Estimate VAR from the Quick menu. In VAR Specification, select Vector Error Correction.
If, for example, you estimate an error correction model for yt and zt without any deterministic regressors and 1 lag
of each variable, the output will be in the form:
Cointegrating Eq:    CointEq1
 Y(-1)               1.000000
 Z(-1)               β1

Error Correction:     D(Y)        D(Z)
 CointEq1             α1          α2
 D(Y(-1))             γ11         γ21
 D(Z(-1))             γ12         γ22
Here β1 is the estimated cointegrating parameter, the αi are the speed-of-adjustment coefficients, and the γij are the
coefficients on the lagged changes.
[Figure M6-1: 100 realizations of the unit-root processes yt = 0.1 + yt-1 + εyt and zt = 0.2 + zt-1 + εzt, the scatter plot of the two series, and the time plots of the y and z sequences.]
[Figure M6-2: panels show the simulated {yt} and {zt} sequences, their scatter plot, and the time plots of the two series.]
The simulated {yt} and {zt} sequences are both random-walk plus noise processes. Each meanders without any
tendency to return to a long-run mean value. The error terms are εyt = εt + ηyt − ηyt-1 and εzt = εt + ηzt − ηzt-1.
Since each has the same stochastic trend, the {yt} and {zt} series are cointegrated.
The scatter plot of zt and yt captures the tendency
of both series to move together. The regression of
yt on zt yields: yt = 0.889zt + 0.007.
;* Take first-differences
You can obtain the lag length for the VAR using one of the three selection criterion:
@varlagselect(lags=4,crit=gtos) ; # y z w
@varlagselect(lags=4,crit=aic) ; # y z w
@varlagselect(lags=4,crit=bic) ; # y z w
* To reproduce the results in Section 9, use the CATS procedure or use the file entitled johmle.src. Estimate the model
2. Use Quick and Estimate Regression to estimate the desired regression; enter
tbill c r5 r10
to obtain the results reported in the TABLE partb.
From the Proc tab select Make Residual Series and call the residuals residuals. Open this residual
series and select Unit Root Test from the View tab. The TABLE partb_englegranger contains the results.
For part c of the question, repeat using the other interest rates as the dependent variables. For example the
TABLE partc uses r10 as the dependent variable, the residuals are called residuals_partc and the results of the
test are in the TABLE partc_englegranger.
3. The TABLE vec contains the estimated error correction model. Open the GROUP var to see how the results were
obtained. Select the View tab, select Cointegration test and select button 2 (Intercept (no
trend) in CE - no intercept in VAR). The lag interval should be 1 to 7. You should obtain the
results reported in the TABLE partd.
Sample Program for RATS Users
* Read in the data set using the first 4 lines from Question 3. Next perform the unit root tests using dfunit.src
@dfunit(maxlags=8,method=gtos,signif=0.05) tbill
* Estimate the long run relationship using:
linreg(define=rshort) tbill / resids1
# constant r5 r10
* Compile johmle.src and then enter the commands:
@johmle(lags=8,determ=rc)
# tbill r5 r10
5. In Question 4, the Engle-Granger methodology found that the long-run equilibrium relationship for the three interest
rates was
TBILLt = 0.367 − 1.91R5t + 2.74R10t
Notes for EViews Users
1. In the Instructor's Manual, the workfile aets4_ch6_q5.wf1 continues with the results of Question 4. Recall that the
TABLE vec contains the error-correction model. To proceed with the question, open var01 and from the Estimate
tab select button 2 in VAR Type, enter the Endogenous Variables in the order tbill r5 r10 and use only
2 lagged changes; the results are in var01. Be sure to select button 2 (Vector Error Correction) in the VAR
Type dialogue box. Now select the Cointegration tab and select button 2. The estimated model is in the
TABLE parta.
2. From var01, select the Impulse tab. If you use the defaults you will obtain the graph in partb:
[Graph partb: the default impulse responses from var01, a 3-by-3 panel of the responses of TBILL, R5, and R10 to innovations in each of the three rates (e.g., Response of TBILL to R5, Response of R5 to TBILL, Response of R5 to R5, Response of R5 to R10, Response of R10 to R5), each plotted over 10 periods.]
3. To obtain the variance decompositions, from var01, select Variance Decompositions from the View tab. The
results are in partc.
Sample Program for RATS Users
* Find the appropriate lag length for the VAR with:
@varlagselect(crit=aic,signif=0.05,lags=8) ; # tbill r5 r10
* Estimate the error correcting model using:
system(model=q4)
variables tbill r5 r10
lags 1 to 3
*det constant
ect rshort ; * rshort was defined in the linreg instruction above
end(system)
estimate(outsigma=v)
8. Chapter 6 of the Programming Manual uses the variables Tbill and Tb1yr in the file QUARTERLY.XLS to
illustrate both the Johansen and Engle-Granger cointegration tests.
Notes for EViews Users
1. In the Instructor's Manual, the workfile aets4_ch6_q8.wf1 contains the variables tbill and tb1yr. The tbill rate is the
same one as used in Question 4. The TABLE parta_tbill contains the results of the Dickey-Fuller test for tbill. To
reproduce the results, open tbill and from the View tab select Unit Root Test. In the dialogue box, choose the
Augmented Dickey-Fuller test and for Lag length select t-statistic with a Maximum Lags of 8.
Repeat for tb1yr.
2. Use Quick and Estimate Regression to estimate the desired regression; enter
tbill c tb1yr
to obtain the results reported in the TABLE partb. The equation is named eq_partb.
From the Proc tab select Make Residual Series and call the residuals residuals. Open this residual
series and select Unit Root Test from the View tab. The TABLE partc contains the results using 7 lags.
3. The TABLE partd contains the estimated error correction model. Open the GROUP var to see how the results
were obtained. Select Estimate VAR from the Quick menu. In VAR Specification, select Vector
Error Correction and use 7 lags and on the Cointegration tab use option 2 (constant in the
cointegrating vector only).
4. Open vec and select Cointegration Test from the View tab. Use 7 lags and select button 2 for the choice
of deterministic trends. The results are in parte.
The question is answered in the Programming Manual using RATS.
9. The file COINT_PPP.XLS contains monthly values of the Japanese, Canadian, and Swiss consumer price levels
and the bilateral exchange rates with the United States. The file also contains the U.S. consumer price level. The
names on the individual series should be self-evident. For example, JAPANCPI is the Japanese price level and
JAPANEX is the bilateral Japanese/U.S. exchange rate. The starting date for all variables is January 1974 while the
availability of the variables is such that most end near the end of 2013. The price indices have been normalized to
equal 100 in January 1973 and only the U.S. price index is seasonally adjusted.
CHAPTER 7
NONLINEAR MODELS AND BREAKS
1 Linear Versus Nonlinear Adjustment 408
2 Simple Extensions of the ARMA Model 410
3 Testing for Nonlinearity 413
4 Threshold Autoregressive Models 420
5 Extensions of the TAR Model 427
6 Three Threshold Models 433
7 Smooth Transition Models 439
8 Other Regime Switching Models 445
9 Estimates of STAR Models 449
10 Generalized Impulse Responses and Forecasting 453
11 Unit Roots and Nonlinearity 461
12 More on Endogenous Structural Breaks 466
13 Summary and Conclusions 474
Key Concepts
1. Once you abandon the linear framework, it is necessary to select a specific nonlinear alternative. Unfortunately, the
literature does not provide a solid framework for this task. It is possible to estimate a series as a GAR, bilinear, TAR,
LSTAR, ESTAR, Markov switching, or ANN process. General tests for nonlinearity do not have a specific alternative
hypothesis, and Lagrange multiplier tests are generally consistent with a number of nonlinear alternatives. The issue can
be illustrated by the estimated industrial production series beginning on page 419. Although a nonlinear model may be
appropriate, the final TAR specification is a bit doubtful. My own view is that an underlying theoretical model should
guide the model selection process. For example, in Section 7, a TAR model was used since theory suggests that
low-terrorism states should be more persistent than high-terrorism states. To make the point, I rely heavily on Questions
1 and 2. Question 1 is designed to give the student practice in formulating a nonlinear model that is consistent with an
underlying economic model. Question 2 asks the student to think about the nature of the nonlinearity that is suggested
by any particular nonlinear estimation. In guiding the class discussion, you might want to make an overhead
transparency of Figure M7.1 below.
2. The estimation of many nonlinear models requires the use of a software package with a programming language.
Although the syntax explained in the RATS Programming Manual may not be directly compatible with your software
package, the logic will be nearly identical. As such, you can have your students read the following sections of the
Programming Manual: Nonlinear Least Squares in Chapter 1.4, Do Loops in Chapter 3.1, If-Then-Else Blocks in Chapter
4.1, and Estimating a Threshold Autoregression (beginning on page 130).
3. For RATS users, the answers to Questions 8, 9 and 10 are in the Programming Manual. As such, they are not
reproduced here.
The regression of y on a constant, y(-1), y2(-1), y3(-1) and y4(-1) yields:

               Coefficient    Std. Error    t-Statistic     Prob.
C                 1.211127      0.214742       5.639923    0.0000
Y(-1)             1.258764      0.076699      16.41179     0.0000
Y2(-1)           -0.026978      0.027325      -0.987323    0.3245
Y3(-1)           -0.032880      0.002854     -11.52066     0.0000
Y4(-1)           -0.003187      0.000658      -4.843551    0.0000
You can perform the LM test by selecting Coefficient Diagnostics from the View tab. Then select
Wald Test and enter c(3) = c(4) = c(5) = 0. (A RATS version of this test is sketched after the sample program below.)
5. To estimate the GAR model, from the Quick tab, estimate the regression:
y c y(-1) y(-2) y2(-1). The results are reported in the TABLE gar.
Sample Program for RATS Users
all 250
;* The first three lines read in the data set
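Once the data have been read in, the auxiliary regression in the table above, the LM-type exclusion test, and the GAR regression of part 5 can be reproduced along the following lines (a sketch; the series is assumed to be named y, as in the EViews workfile):
set y2 = y**2
set y3 = y**3
set y4 = y**4
linreg y                           ;* the auxiliary regression reported above
# constant y{1} y2{1} y3{1} y4{1}
exclude                            ;* joint test that the nonlinear terms can be excluded
# y2{1} y3{1} y4{1}
linreg y                           ;* the GAR model of part 5
# constant y{1} y{2} y2{1}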
5. The file GRANGER.XLS contains the interest rate series used to estimate the TAR and M-TAR models in Section
11.
Notes for EViews Users
1. In the Instructor's Manual, the workfile endersgranger.wf1 contains the results reported in Section 11. The file
contains the variable s generated as s = r_10 - r_short. The variables ds, drl and drs were generated
using ds = d(s), drl = d(r_10) and drs = d(r_short). The unit root test for s is reported in the
TABLE df_spread. From the View tab, select Unit Root Test, include only an intercept (no trend), and
use a User Specified lag of 1.
2. In order to estimate the TAR model, it is first necessary to generate the threshold variables s_plus and s_minus.
Given the threshold value used in the text, you can use the @recode function as follows:
s_plus = @recode(s(-1)+0.27>0,s(-1)+0.27,0)
s_minus = @recode(s(-1)+0.27<=0,s(-1)+0.27,0)
To obtain the results in the TABLE p464_tar, select Estimate Equation from the View tab and
enter: s s_plus s_minus s(-1). Do not include separate intercept terms as these are already embedded in
the continuous form of the TAR model.
3. In order to estimate the M-TAR model, it is necessary to create the threshold variables mtar_plus and mtar_minus.
Given that the threshold for the M-TAR model is zero, this is accomplished using
mtar_plus = @recode(ds(-1)>0,s(-1)-1.64,0)
mtar_minus = @recode(ds(-1)<=0,s(-1)-1.64,0)
To obtain the results in the TABLE p264_mtar, select Estimate Equation from the View tab and
enter: ds mtar_plus mtar_minus ds(-1).
4. The M-TAR error-correction model reported on page 465 of the text is estimated as follows. Select Estimate Equation from
the View tab and enter:
drl mtar_plus mtar_minus drs(-1 to -2) drl(-1 to -2)
drs mtar_plus mtar_minus drs(-1 to -2) drl(-1 to -2)
The results are in the TABLES drl_mtar and drs_mtar. You can open the EQUATION drlmtar to
experiment with the estimates.
RATS programmers can estimate the TAR and M-TAR models using
cal 1958 1 4                       ;* The data are quarterly, beginning in 1958Q1
all 8 1994:1                       ;* and ending in 1994Q1
open data c:\aets3\granger.xls     ;* These four lines read in the data set
data(format=xls,org=obs)
set spread = r_10 - r_short
dif spread / ds
* Perform the Dickey-Fuller test on the spread. Save the residuals as resids
lin ds / resids; # constant spread{1} ds{1}
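The threshold regressions themselves can then be sketched as a direct translation of the EViews instructions above, with the %if function playing the role of @recode (the TAR regression in levels and the M-TAR regression in first differences, exactly as entered in EViews):
set s_plus  = %if(spread{1}+0.27>0,spread{1}+0.27,0.0)
set s_minus = %if(spread{1}+0.27<=0,spread{1}+0.27,0.0)
linreg spread                      ;* the TAR regression
# s_plus s_minus spread{1}
set mtar_plus  = %if(ds{1}>0,spread{1}-1.64,0.0)
set mtar_minus = %if(ds{1}<=0,spread{1}-1.64,0.0)
linreg ds                          ;* the M-TAR regression
# mtar_plus mtar_minus ds{1}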
7. The file labeled SIM_TAR.XLS contains the 200 observations used to construct Figure 7.3. You can answer the
questions in the text using the following:
Notes for EViews Users
1. The workfile aets4_ch7_q7.wf1 contains the variables y and ordered. The series ordered contains the values of the y
series arranged from low to high. You can reproduce Figure 7.4 by plotting the ordered series. The TABLE parta
contains the desired regression equation. Select Estimate Equation from the View tab and enter:
y c y(-1).
To perform the RESET, open eq01 and from the View tab select Stability Diagnostics and
Ramsey's RESET Test. If you enter 3 for Number of Fitted Terms, you should obtain the same
output as in the TABLE reset. The point to note is that the RESET does not indicate any nonlinearity.
2. EViews cannot readily perform a repetitive set of estimations within a DO loop. It is nevertheless possible to estimate
TAR models using various threshold values and to select the best-fitting value (a RATS sketch of such a grid search
follows these notes). Hence, if you know that the threshold is τ = 0, you could use
the following two instructions to generate the indicator functions
plus = @recode(y(-1)>0,1,0)
minus = @recode(y(-1)<=0,1,0)
Next, use Quick Estimate Equation and enter:
y plus minus plus*y(-1) minus*y(-1)
The results are in the TABLE tar and in the EQUATION eq02. Note that this is different from the result
reported in the text since the estimation in the text uses τ = 0.4012.
3. If you want to use τ = 0.4012, generate plus and minus using:
plus = @recode(y(-1)>0.4012,1,0)
minus = @recode(y(-1)<=0.4012,1,0)
Then re-estimate the equation y plus minus plus*y(-1) minus*y(-1) to obtain the estimates of the TAR model reported in the text.
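For RATS users, a DO loop over candidate thresholds is straightforward. The sketch below (for the series y on SIM_TAR.XLS) searches a coarse, evenly spaced grid; the grid bounds are illustrative, and the Programming Manual, beginning on page 130, searches over the ordered values of y with 15 percent trimming, which is the preferred approach:
compute bestrss = 1.0e+30 , besttau = 0.0
do igrid = -100,100
   compute tau = 0.01*igrid                ;* candidate thresholds from -1.0 to 1.0
   set plus    = %if(y{1}>tau,1.0,0.0)     ;* indicator for the upper regime
   set minus   = 1.0-plus                  ;* indicator for the lower regime
   set plusy1  = plus*y{1}
   set minusy1 = minus*y{1}
   linreg(noprint) y
   # plus minus plusy1 minusy1
   if %rss<bestrss
      compute bestrss = %rss , besttau = tau
end do igrid
display 'threshold with the smallest sum of squared residuals =' besttau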
11. The file OIL.XLS contains the variable SPOT measuring the weekly values of the spot price of oil
over the May 15, 1987 to November 1, 2013 period. In Section 4 of Chapter 3, we formed the variable
p_t = 100[log(spot_t) - log(spot_{t-1})] and found that it is reasonable to model p_t as an MA(||1,3||) process.
However, another reasonable model is the autoregressive representation: p_t = 0.095 + 0.172p_{t-1} +
0.084p_{t-3}. The issue is to determine whether the {p_t} series contains breaks or nonlinearities.
Notes for EViews Users
1. The file aets4_ch_7_q11.wf1 contains the price of oil (spot) and the series p = 100*dlog(spot). The TABLE parta
and the EQUATION eq_parta contain the results from estimating the p series as an AR(||1,3||) process. To obtain the
result in the TABLE cusum, select View, Stability Diagnostics and Recursive Estimates.
From the Output box, select the CUSUM test.
2. To test for a single breakpoint, select View Stability Diagnostics and choose Quandt-Andrews
Breakpoint Test. If you use the default 15% trimming, you should obtain the results reported in the TABLE
partb.
3. To perform the Bai-Perron test select View Stability Diagnostics and choose Multiple
Breakpoints Tests. Select Global L breaks vs. none, use a maximum of 5 breaks and the
default trimming and significance level. The results in TABLE partc indicate that there are no breaks.
4. To estimate the model given that τ = 1.7, use GENERATE to form the following variables:
plus = @recode(p(-1)>1.7,1,0)
minus = @recode(p(-1)<=1.7,1,0)
p1_plus = p(-1)*plus
p3_plus = p(-3)*plus
p1_minus = p(-1)*minus
p3_minus = p(-3)*minus
From Quick Estimate Equation enter
p plus p1_plus p3_plus minus p1_minus p3_minus
The results are in eq_partd.
5. From Quick Estimate Equation enter
p p1_minus p3_minus
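RATS users can construct the oil return series and reproduce the autoregressive and threshold specifications with something like the sketch below (the data on OIL.XLS are assumed to have been read in, with the spot price named spot; %if again plays the role of @recode):
set p = 100.0*(log(spot)-log(spot{1}))    ;* weekly percentage change in the spot price
linreg p                                  ;* the AR(||1,3||) representation
# constant p{1} p{3}
set plus     = %if(p{1}>1.7,1.0,0.0)      ;* indicator terms for the threshold of 1.7
set minus    = 1.0-plus
set p1_plus  = plus*p{1}
set p3_plus  = plus*p{3}
set p1_minus = minus*p{1}
set p3_minus = minus*p{3}
linreg p                                  ;* the threshold specification of part 4
# plus p1_plus p3_plus minus p1_minus p3_minus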
SEMESTER PROJECT
The best way to learn econometrics is to estimate a model using actual data. At the beginning
of the semester (quarter), students should identify a simple economic model that implies a long-run
equilibrium relationship between a set of economic variables. Data collection should begin as early
as possible so that the econometric tests can be performed as they are covered in class. Some
students may be working on projects for which they have data. Others should be able to construct a
satisfactory data set using the internet. Some of the web sites that were used in writing the text are
1. www.fedstats.gov/ The gateway to statistics from over 100 U.S. Federal agencies such as the
Bureau of Labor Statistics, the Bureau of Economic Analysis, and the Bureau of the
Census.
2. www.research.stlouisfed.org/fred2/ The St. Louis Fed Database. With over 1000
downloadable economic variables, this is probably the best site for economic time-series
data.
3. www.nyse.com/marketinfo/marketinfo The New York Stock Exchange: The Data Library
contains daily volumes and closing prices for the major indices
4. www.oecd.org/statistics/ The statistics portal for the Organization for Economic Cooperation
and Development. Economic indicators, leading indicators, and labor force statistics.
Students who spend too much time searching for a project can simply update the file
MONEY_DEM.XLS, which is used in the Programming Manual. The file contains quarterly
values of seasonally adjusted U.S. nominal GDP, real GDP in 1996 dollars (RGDP), the money
supply as measured by M2 and M3, and the 3-month and 1-year treasury bill rates for the period
1959:1 to 2001:1. Both interest rates are expressed as annual rates and the other variables are in
billions of dollars. The data were obtained from the website of the Federal Reserve Bank of St. Louis
and saved in Excel format.
The semester project is designed to employ all of the material covered in the text. Each
student is required to submit a paper demonstrating competence in using the procedures. I require
my students to use the format below. The various sections are collected throughout the semester so
that student progress can be monitored. At the end of the semester, the individual sections are
compiled into the final course paper. Of course, you might want to adapt the outline to your specific
emphasis and to the statistical software package available to you. In Example 1, the student wants to
estimate a demand for money function. In Example 2, the student wants to estimate the term
structure of interest rates.
1. Introduction
Of course, it is important that students learn to generate their own research ideas. However,
in the short space of a semester (or quarter), it is necessary for students to quickly select a semester
project. After two weeks, I ask my students for a page or two containing:
1. A statement of the objective of the paper
2. A brief description of the relevant literature including the equation(s) to be estimated
3. The definition and source of each series to be used in the project. Some mention should be
made concerning the relationship between the variables in the theoretical model and the
actual data available.
Example 1: The student discusses why the demand for money can be represented by:
m_t = β0 + β1y_t + β2r_t + p_t + e_t
where: m_t = the money supply (= money demand); y_t = a measure of income or output; r_t = a vector
of interest rates; p_t = a price index; t is a time subscript; e_t is an error term; and all variables
are measured in logarithms. Note that these variables are included on the data set
MONEY_DEM.XLS.
Example 2: The student discusses why the term structure of interest rates implies a
relationship among short-term and long-term interest rates of the form:
TBILL_t = β0 + β1R3_t + β2R10_t + e_t
where: TBILL_t is the treasury bill rate, R3_t is a three-year rate and R10_t is a ten-year rate.
Note that these three variables are on the file INT_RATES.XLS.
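As a concrete illustration for Example 1, a minimal RATS sketch is given below. The series names m2 and tb3mo are assumptions about the contents of MONEY_DEM.XLS (rgdp and gdp are described above), and the price index is proxied by the implicit GDP deflator; adjust the names and the path to match your own data set:
cal 1959 1 4
all 2001:1
open data money_dem.xls
data(format=xls,org=obs)
set lm2 = log(m2)                ;* log of the money supply
set ly  = log(rgdp)              ;* log of real output
set lp  = log(gdp/rgdp)          ;* implicit GDP deflator (an assumption, not from the text)
linreg lm2 / resids              ;* candidate long-run money demand relationship
# constant ly tb3mo lp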
2. Difference equation models
This portion of the project is designed to familiarize students with the application of
difference equations to economic time-series data. Moreover, the initial data manipulation and
creation of simple time-series plots introduces the student to the software at an early stage in the
project. In this second portion of the paper, students should:
1. Plot the time path of each variable and describe its general characteristics. There should be
some mention of the tendencies for the variables to move together.
2. For each series, develop a simple difference equation model that mimics its essential
features.
Example 1: This portion of the paper contains time-series plots of the money supply,
output, interest rate(s), and price index. The marked tendency for money, output and prices
to steadily increase is noted. Periods of tranquility and volatility are indicated and the student
mentions that the periods are similar for all of the variables. The student indicates that a
difference equation with one or more characteristic roots lying outside the unit circle might
capture the time path of the money supply, price index, and level of output.
Example 2: This portion of the paper contains time-series plots of the various interest rate
series. The tendency for the rates to meander is noted. The student shows how difference
equations with a characteristic root that is unity can mimic the essential features of the
interest rate series. It is also shown that characteristic roots near unity will impart similar
time-paths to the series. The tendency for the rates to move together and any periods of
tranquility and volatility are mentioned.
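To make the point of Example 2 concrete, it may help to simulate a unit-root process and a near-unit-root process and plot them together. A minimal RATS sketch, with an illustrative sample of 200 observations and an autoregressive parameter of 0.95, is:
seed 2013
all 200
set eps = %ran(1.0)                        ;* i.i.d. N(0,1) innovations
set yunit = 0.0
set ynear = 0.0
set yunit 2 200 = yunit{1} + eps           ;* characteristic root equal to unity
set ynear 2 200 = 0.95*ynear{1} + eps      ;* characteristic root near unity
graph(header='Unit root versus near unit root') 2
# yunit
# ynear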
3. Univariate properties of the variables
This portion of the project introduces the student to the tools used in estimating the
univariate properties of stationary time-series. Chapter 2 provides the necessary background
material. The student should select two or three of the key variables and for each:
1. Estimate an ARIMA model using the Box-Jenkins technique.
2. Provide out-of-sample forecasts. There should be some mention of the forecasting
performance of the models.
Example 1: The Box-Jenkins method suggests several plausible models for the money
supply. Each is examined and compared in detail since the project focuses on money
demand.
Example 2: The three interest rate series are estimated using the Box-Jenkins methodology.
The focus is not on any single interest rate series. Instead, several reasonable models for
each series are found. Tests for the presence of GARCH and ARCH-M effects are presented.
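For the ARIMA estimation and the out-of-sample forecasts, a minimal RATS sketch is shown below for an illustrative ARMA(1,1) fit to the first difference of the logged money series lm2 (constructed in the earlier sketch), holding back the final eight quarters; the model, the series, and the dates are only examples:
boxjenk(constant,diffs=1,ar=1,ma=1,define=eq1) lm2 * 1999:1 resids
uforecast(equation=eq1) fore 1999:2 2001:1      ;* out-of-sample forecasts
print 1999:2 2001:1 fore lm2                    ;* compare the forecasts with the actual values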
4. Conditional Volatility
I allow students some flexibility in proceeding at this point. Some students will have a keen
interest in financial econometrics. These students should select two financial variables and test each
for the presence of GARCH effects. They should be able to present a well-reasoned GARCH model
for each series. I put particular importance on the justification of the most appropriate specification.
I also ask them to estimate the two series as a multivariate GARCH process. They should be able to
compare several different multivariate specifications. Students selecting this option need not spend
much time on Section 5 below. For most financial variables, simple differencing or constructing the
growth rate will result in a process that is stationary and not very persistent.
Students who select non-financial variables should test each for the presence of GARCH
and/or ARCH-M effects. I expect these students to place more emphasis on the topic in Section 5
below.
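For the GARCH portion, a minimal RATS sketch is shown below for an illustrative return series named ret (say, 100 times the log difference of a price); the AR(1) mean equation is only one of the specifications students might compare, and the GARCH instruction's MV option (for example, MV=CC or MV=BEKK) handles the multivariate specifications:
garch(p=1,q=1,regressors) / ret         ;* GARCH(1,1) with an AR(1) mean equation
# constant ret{1}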
5. Estimates of the trend
Most students will select one or more variables that exhibit evidence of non-stationary
behavior. In this portion of the project, students should:
1. Use the material in Chapter 4 to discuss plausible models for the trends. The goal is to
refine the difference equation model suggested in Step 2 of the project.
2. Decompose the variables into their temporary and permanent components.
3. Use the material in Chapter 4 to conduct formal tests for unit roots and/or deterministic
time trends. There can be a comparison of the effects of "detrending" versus differencing a
series showing evidence of a trend.
4. Potentially important seasonal effects and/or evidence of structural change should be
noted. If warranted, seasonal unit root tests and/or Perron tests for unit roots in the presence
of structural breaks should be conducted.
Example 1: The money supply is decomposed into its temporary and permanent
components using a Beveridge-Nelson decomposition. Dickey-Fuller tests for unit roots in
the money supply series are conducted. The various tests for the presence of drift and/or time
trends are also conducted. Given that the Federal Reserve has changed its operating
procedures, the money supply is tested for unit roots in the presence of a structural break.
Example 2: The long-term rate is decomposed into its temporary and permanent
components using a Beveridge-Nelson decomposition. Dickey-Fuller tests for unit roots in
all of the interest rate series are conducted. The presence of unit roots in the interest rate
series is mixed. It may be that the interest rates are near unit root processes. The student
compares the ARMA estimates of the long-term rate using levels, "detrended" values, and
first-differences of the data.
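A minimal RATS sketch of an augmented Dickey-Fuller test with a constant and a trend is given below for the logged money series lm2 from the earlier sketch; the four augmenting lags are purely illustrative, and the t-statistic on lm2{1} must be compared with the Dickey-Fuller critical values:
set trend = t                     ;* a deterministic time trend
dif lm2 / dlm2
linreg dlm2
# constant trend lm2{1} dlm2{1 to 4}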
6. Vector Autoregression Methods
This portion of the project introduces multiple time-series methods. Chapter 5 of the text
provides the background material necessary for the student to estimate a VAR. The student should
complete the following tasks:
1. Estimate the variables as a VAR. The relationship among the variables should be analyzed
using innovation accounting (impulse response functions and variance decomposition)
methods.
2. Compare the VAR forecasts to the univariate forecasts obtained in Step 3 of the project.
3. A structural VAR using the Sims-Bernanke or Blanchard and Quah techniques should be
attempted.
Example 1: The money supply, income level, interest rate and price level are estimated as
an autoregressive system. It is reported that the univariate forecasts from Step 3 are nearly
the same as those from the VAR. Granger causality tests are performed in order to pare down
the model. A Choleski decomposition with various orderings is used to decompose the
forecast error variances of the variables. Impulse response functions are used to examine the
effects of the various shocks on the demand for money. The student estimates a structural
VAR such that contemporaneous real income and interest rate shocks are unaffected by the
other variables in the system.
Example 2: The student estimates the three interest rates as a VAR. Since the issue of
stationarity is unclear, the student estimates the VAR in levels and in first-differences. A
Choleski decomposition with various orderings is used to decompose the forecast error
variances of the variables. Impulse response functions are used to show the effects of shocks
to 10-year and 3-year rates on short-term rates. The student uses a bivariate VAR to
decompose the 10-year interest rate into its temporary and permanent components.
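For Example 1, a minimal RATS sketch of the VAR and the innovation accounting is given below; the variable names are the ones constructed in the earlier sketches, and the lag length of 4 and the 24-step horizon are illustrative:
system(model=money_var)
variables lm2 ly tb3mo lp
lags 1 to 4
det constant
end(system)
estimate(outsigma=v)
errors(model=money_var,steps=24)        ;* forecast error variance decompositions
impulse(model=money_var,steps=24)       ;* impulse responses (Choleski ordering as listed)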
7. Cointegration
This portion of the project introduces the concept of cointegration. Chapter 6 of the text
provides the appropriate background. The student should:
1. Conduct Engle-Granger and Johansen tests for cointegration
2. Estimate the error-correction model. The error-correction model should be used to analyze
the variables using innovation accounting techniques.
Example 1: For some specifications, the Engle-Granger and Johansen tests for cointegration
will reveal a long-run equilibrium relationship among the variables. For other specifications
and other sample periods, there is no credible money demand function. If the variables are
not cointegrated, the error-correction model is not estimated. Instead, the student discusses
some of the credible reasons underlying the rejection of the theory as presented.
Example 2: The Engle-Granger and Johansen tests for cointegration reveal a long-run
equilibrium relationship among the interest rates. The error-correction model is estimated.
Innovation accounting is conducted; the results are compared to those reported in Step 5.
8. Nonlinearity
The last portion of the project uses nonlinear time-series models presented in Chapter 7. The
student should:
1. Discuss a possible reason why a nonlinear specification might be plausible.
2. Conduct a number of tests that are capable of detecting nonlinearity.
3. Compare the linear estimates to the estimates from a nonlinear model.
Example 1: One reason why Engle-Granger and Johansen tests may fail is that they
implicitly assume a linear adjustment mechanism. The student tests the variables for linear
versus nonlinear behavior.
Example 2: The text suggests that interest rate spreads are nonlinear. An inverted yield
curve is far less persistent than a situation in which short-term rates are below long-term rates.
A nonlinear model of the spread is estimated and compared to the linear model.