Week 1 Readings A
Contents

1 Introduction
2 Miniature Models
  2.1 Solution and Format
  2.2 Estimation
  2.3 The nature of the VARs from miniature models
3 Using Loose Theory in Macroeconometric Models
4 References
1 Introduction
Theoretical models have always been constructed to illuminate and foster an understanding of how the interactions between agents and institutions can account for observed phenomena. The art of designing a useful theoretical model is that it should be complex enough to provide a good description of the principal forces at work in producing a particular phenomenon, but not so complex that the explanation becomes clouded by allowing for too many factors. Macroeconomics has always had such miniature general equilibrium models, and they have figured prominently in textbooks. We will refer to them as Mini-T models, the T indicating theoretic. Examples would be the IS-LM description of the Keynesian model, the AD-AS extension of IS-LM, the Mundell-Fleming-Dornbusch (MFD) model and, more recently, the New Keynesian Policy Model (NKPM). Each of these was designed to account for a feature of the policy environment that had become increasingly important - rising prices in the case of AD-AS and the new world of de-regulated capital markets and flexible exchange rates for MFD. The NKPM reflects the resurgence of the Phillips curve and the emphasis on Taylor rules for the setting of interest rates - see Allsopp and Vines (2000) for a useful description of this model. These Mini-T models omit much, and much is assumed in their construction, but they have served their purpose of directing thought about the macro economy.

[Figure 1: the trade-off between theoretical and empirical coherence. Model classes arranged along the curve from the y-axis to the x-axis: Dynamic Stochastic General Equilibrium models, incomplete DSGE models, models with explicit long-run equilibrium, models with implicit long-run equilibrium, SVARs, VARs. Horizontal axis: degree of empirical coherence.]

The figure above will facilitate our discussion throughout this course. It shows the trade-off that always exists between building models that have a strong theoretical perspective and those that are strongly oriented towards fitting data sets. Models on the y-axis can be regarded as the Mini-T models we have just described. Models on the x-axis are generally miniature statistical models chosen so as to produce a close fit to the data. The first modification of Mini-T models in the direction of the data involved moving away from the old paradigm of deterministic models that had perfect foresight. Instead an emphasis was placed upon the importance of shocks and expectations for macroeconomic outcomes. Hence Mini-T
models were modified to describe agents making choices in the face of these shocks. Their core becomes a set of Euler equations linking the past, present and expected future values of the variable (or variables). A long-run growth path is generally implied by the models, and features have been introduced into them that might allow for departures from this path for extensive periods of time, in particular frictions in production and labour markets and inertia in expenditure decisions. These models are the class of Dynamic Stochastic General Equilibrium (DSGE) models studied in most graduate macroeconomic courses today. They provide reasonably flexible tools for theoretical analysis and increasingly incorporate many types of constraints that are viewed as being important to actual outcomes.

Now macroeconomics has the distinguishing feature that the major consumers of its output are policy makers and their advisers. This group generally recognizes the need for some theoretical model to guide their deliberations, although often it may have simply become part of their thought processes rather than being spelled out explicitly. Consequently, manipulation of miniature models is often a primary input into the development of an understanding of the broad outlines of the environment to which a policy response is to be made. But such models are rarely able to capture the complexities of any given situation, e.g. referring to aggregate demand, as in an IS curve, rather than its components is unlikely to produce a very convincing analysis for any policy discussion. Moreover, policy makers have increasingly been required to be precise about the arguments in support of a particular policy action (and sometimes the information that is an input into it, such as forecasts), and this points to the need to expand the size of the model while retaining the clarity that a strong theoretical perspective brings to analysis.

Models have emerged, and are emerging, that aim to do this. We will refer to these as base models. They are a heterogeneous group and have historically taken a number of forms depending on what has been assumed about the shocks and how precisely the long-run equilibrium paths are determined. Thus in the figure above the incomplete DSGE models and the hybrid models - which feature either an explicit or implicit long-run equilibrium path - are members of this class. Generally it is not intended that they provide a very close fit to the data, and extra adjustments are needed to turn the base model into an operational model that could be used in a policy environment driven by forecasts. Base models are the core of a policy system and exist to anchor the discussion of policy options in some consistent economic framework rather
than allowing it to be sidetracked into a debate about the idiosyncrasies of particular data sets. Base models are generally quite large, but sometimes miniature versions of them have become popular for discussion and analysis. The most popular of these has been the New Keynesian Policy Model (NKPM), which can be thought of as an extension of the IS-LM and AD-AS Mini-T models. It is sometimes used as a theoretical model but, increasingly, it involves using variants that aim to match data, so that the versions we will discuss in the lectures are down the curve rather than on the axis. Because of the close connection between base and miniature models we will spend a large section of these lectures looking at miniature models. Understanding the difficulties in specification, solution, estimation and inference that can arise with these is really central to understanding the problems that can arise in base models. At the end of these lectures we will return to the base models often used in practice and try to tie the discussion together.
2 Miniature Models

2.1 Solution and Format

Many Mini-T models can be written in the linear rational expectations form studied by Binder and Pesaran (1995),

B_0 y_t = B_1 y_{t-1} + A E_t y_{t+1} + F u_t,  (1)

where y_t collects the model's variables and u_t its shocks. The solution then has the format

y_t = P y_{t-1} + D ∑_{j=0}^∞ S^j E_t u_{t+j},  (2)

where

D = (B_0 − AP)^{−1} F

and P, S are functions of the coefficients B_j and A. Richard Dennis's lectures will be more specific about the range of methods that can be used to solve such models. For now it is important to note that the ultimate dynamic structure for y_t will come from two sources. One of these derives from the theoretical structure - that is, the Euler equations and constraints - and is represented by P. The second source of dynamics stems from the nature of the u_t. To analyse the latter, we adopt the assumption, common to many Mini-T models, that u_t follows a VAR(1)
u_t = ρ u_{t-1} + ε_t.  (3)

Then E_t u_{t+j} = ρ^j u_t, and (2) becomes

y_t = P y_{t-1} + D ∑_{j=0}^∞ S^j ρ^j u_t  (4)
    = P y_{t-1} + G u_t,  (5)

where G = D ∑_{j=0}^∞ S^j ρ^j. This solution method requires that one be able to find the E_t u_{t+j} under a variety of processes rather than just the VAR(1), and also to be able to estimate ρ.
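As a rough numerical illustration of how P and G in (5) might be found, the sketch below iterates on the quadratic matrix equation A P^2 − B_0 P + B_1 = 0 implied by (1), and then on the fixed point G = (B_0 − AP)^{−1}(F + AGρ) implied by (3) and (5). The function name, the brute-force iterations and the example matrices are illustrative assumptions only, not the methods covered in the Richard Dennis lectures.

```python
import numpy as np

def solve_mini_t(B0, B1, A, F, rho, tol=1e-12, max_iter=10000):
    """Sketch: solve B0 y_t = B1 y_{t-1} + A E_t[y_{t+1}] + F u_t with
    u_t = rho u_{t-1} + eps_t for the recursion y_t = P y_{t-1} + G u_t.
    P satisfies A P^2 - B0 P + B1 = 0 (iterate P <- (B0 - A P)^{-1} B1);
    G satisfies (B0 - A P) G = F + A G rho (iterated the same way).
    Determinacy issues are ignored; this picks out one stable solution."""
    n, q = F.shape
    P = np.zeros((n, n))
    for _ in range(max_iter):
        P_new = np.linalg.solve(B0 - A @ P, B1)
        done = np.max(np.abs(P_new - P)) < tol
        P = P_new
        if done:
            break
    G = np.zeros((n, q))
    for _ in range(max_iter):
        G_new = np.linalg.solve(B0 - A @ P, F + A @ G @ rho)
        done = np.max(np.abs(G_new - G)) < tol
        G = G_new
        if done:
            break
    return P, G

# Hypothetical two-variable example.
B0 = np.eye(2)
B1 = np.array([[0.5, 0.0], [0.1, 0.3]])
A = np.array([[0.2, 0.0], [0.0, 0.2]])
F = np.eye(2)
rho = np.diag([0.8, 0.0])
P, G = solve_mini_t(B0, B1, A, F, rho)
```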
For this reason the first step in the lectures will be to look at time series models for a single member of u_t. Specifically, we look at the class of covariance stationary processes and important members of this class such as the AR, MA and ARMA processes. The case when ρ = 1 is of particular interest, as it involves a unit root in the u_t process and brings in the concepts of integrated series and the permanent components of such series. A close look at such concepts and how they change our approach to estimation and inference will be needed.

Now, if the rank of G equals dim(u_t) (which seems likely), then G⁺ = (G′G)^{−1}G′ is the generalized inverse of G, so that u_t = G⁺(y_t − P y_{t-1}), and
this can be used to get

y_t = (P + GρG⁺) y_{t-1} − GρG⁺P y_{t-2} + G ε_t.  (6)
This expression makes clear that the intrinsic dynamics described by the theoretical model (represented by P) are augmented by extrinsic dynamics captured by ρ, with the consequence that the evolutionary process for y_t changes from a VAR(1) to a VAR(2). Now the system we described above is in terms of all of the variables y_t involved in the model. We can show that if we only looked at a single variable y_kt then the VAR system above would become an ARMA process.
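To see the augmentation from a VAR(1) to a VAR(2) concretely, the sketch below (with purely hypothetical P, G and ρ, and dim(u_t) = dim(y_t) so that G⁺ is just G^{−1}) forms the lag matrices implied by (6) and confirms that simulated data from y_t = P y_{t-1} + G u_t satisfy the VAR(2) representation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2
# Hypothetical stable P, G, rho with dim(u_t) = dim(y_t) = 2.
P = np.array([[0.5, 0.1], [0.0, 0.4]])
G = np.array([[1.0, 0.2], [0.3, 1.0]])
rho = np.diag([0.8, 0.5])

Gplus = np.linalg.inv(G)            # G is square here, so G+ = G^{-1}
Phi1 = P + G @ rho @ Gplus          # first lag matrix in (6)
Phi2 = -G @ rho @ Gplus @ P         # second lag matrix in (6)

# Simulate y_t = P y_{t-1} + G u_t with u_t = rho u_{t-1} + eps_t ...
T = 200
y = np.zeros((T, n))
u = np.zeros((T, n))
for t in range(1, T):
    u[t] = rho @ u[t - 1] + rng.standard_normal(n)
    y[t] = P @ y[t - 1] + G @ u[t]

# ... and check the VAR(2) form y_t = Phi1 y_{t-1} + Phi2 y_{t-2} + G eps_t.
resid = y[2:] - y[1:-1] @ Phi1.T - y[:-2] @ Phi2.T
eps = u[2:] - u[1:-1] @ rho.T
print(np.max(np.abs(resid - eps @ G.T)))   # ~ 0 up to rounding error
```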
Since we are often interested in building models in order to discuss items such as business cycles, which are movements in a single variable representing economic activity, we will need to ask how the characteristics of such a series map into a business cycle. This introduces a new topic, and one can make some broad points about the nature of the business cycle. As the lectures move on, however, we will need to ask how the nature of the system above maps into the ARMA process for output, since it is only by doing so that we can begin to answer questions like what drives the business cycle.
2.2 Estimation
In order to use a miniature model the parameters underlying its Euler equations need to be estimated. This might be done directly or indirectly. The main direct method has been GMM, and so we will need to review the theory of that estimation method. Generally in macroeconomics it comes down to doing instrumental variables estimation, and so the literature on weak instruments that has emerged over the past decade needs to be considered to see whether it is relevant in this context. The alternative, indirect approach involves working with the solved VAR and performing estimation with FIML, and so we need to look at the relative merits of the two approaches. Although the answers are likely to be context dependent, we can gain a lot of insight into the issues by working with a particular miniature model - a variant of the New Keynesian Policy Model.
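As a concrete illustration of the direct approach, the sketch below applies linear GMM (with the 2SLS weighting matrix) to an Euler-type equation, replacing the expectation with the realized lead and using lags as instruments. The equation, variable names and data-generating process are hypothetical stand-ins meant only to show the estimator's mechanics, not the NKPM variant examined in the lectures.

```python
import numpy as np

def linear_gmm_2sls(y, X, Z):
    """Sketch of linear IV/GMM: estimate beta in y = X beta + e under
    the moment conditions E[Z'e] = 0, using the 2SLS weight W = (Z'Z)^{-1}."""
    ZX = Z.T @ X
    Zy = Z.T @ y
    W = np.linalg.inv(Z.T @ Z)
    return np.linalg.solve(ZX.T @ W @ ZX, ZX.T @ W @ Zy)

# Hypothetical Euler-type equation pi_t = b E_t pi_{t+1} + g x_t + e_t,
# with pi_{t+1} replacing the expectation and lags as instruments.
rng = np.random.default_rng(1)
T = 500
x = rng.standard_normal(T)
pi = np.zeros(T)
for t in range(T - 1):
    pi[t + 1] = 0.6 * pi[t] + 0.3 * x[t] + 0.1 * rng.standard_normal()

y = pi[2:-1]                                        # pi_t
X = np.column_stack([pi[3:], x[2:-1]])              # pi_{t+1}, x_t
Z = np.column_stack([pi[1:-2], pi[:-3], x[1:-2]])   # pi_{t-1}, pi_{t-2}, x_{t-1}
print(linear_gmm_2sls(y, X, Z))
```

With weak instruments the matrix Z′X above is nearly rank-deficient and the estimator becomes badly behaved, which is why the weak-instrument literature matters here.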
2.3 The nature of the VARs from miniature models

In many miniature models there are fewer shocks than variables, i.e. dim(u_t) < dim(y_t). Such a static rank reduction affects the covariance matrix of the y_t. Nevertheless, in both cases the VAR can be written as depending upon some factors, as in Forni et al (2003). A special case of interest is when some of the eigenvalues of ρ are unity, i.e. some of the shocks are permanent. Regardless of the nature of the shocks it is always possible to find the moving average representation for y_t, i.e. the form:
y_t = C(L) ε_t,  (7)

where the C_j are the j-period impulse responses. In miniature models the C_j can generally be found analytically, while simulations of the base model provide numerical solutions. There will be different values for the C_j depending on whether the shocks are strictly permanent or simply pure impulses, i.e. ρ = I or ρ = 0. We will refer to these polar cases as the permanent and transitory shock cases. In most instances the shocks u_t are items like productivity, risk premia, money etc. and are defined relative to the model, but they are smaller in number than the y_t, i.e. dim(u_t) < dim(y_t). Which shocks should be used when attempting to model a given data set, and deciding on whether they should be permanent or transitory, is a difficult issue, but one that is necessary for any empirical work. When there are permanent shocks the value of C(1) will indicate the long-run responses. Any variable whose associated row in C(1) has non-zero elements will be an I(1) variable, and the rank of C(1) will indicate how many co-integrating vectors there are among the I(1) variables. These vectors are found as the β that set β′C(1) = 0. As we will discuss later, the β are not unique and some identifying assumptions need to be placed upon them.
Once the β have been found it is possible to re-write the VAR as an ECM. Combinations of permanent and transitory shocks can be handled in a similar way; some discussion of this is in Levtchenkova et al (1998).
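A minimal sketch of how the C_j and C(1) in (7) might be computed from a fitted VAR: the MA coefficients obey the recursion C_j = ∑_i Φ_i C_{j-i} with C_0 = I, and C(1) can be approximated by truncating the sum. The VAR matrices below are hypothetical, and the example is stationary; with unit roots one would work with the MA form of the differences instead, since the level sum does not converge.

```python
import numpy as np

def ma_coefficients(Phis, horizon):
    """C_j from y_t = Phi_1 y_{t-1} + ... + Phi_p y_{t-p} + eps_t,
    via the recursion C_j = sum_i Phi_i C_{j-i}, with C_0 = I."""
    n = Phis[0].shape[0]
    C = [np.eye(n)]
    for j in range(1, horizon + 1):
        Cj = np.zeros((n, n))
        for i, Phi in enumerate(Phis, start=1):
            if j - i >= 0:
                Cj += Phi @ C[j - i]
        C.append(Cj)
    return C

# Hypothetical stationary VAR(2).
Phi1 = np.array([[0.6, 0.1], [0.0, 0.5]])
Phi2 = np.array([[0.1, 0.0], [0.0, 0.1]])
C = ma_coefficients([Phi1, Phi2], horizon=500)
C1 = sum(C)          # truncated approximation to the long-run responses C(1)
print(C1)
```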
3 Using Loose Theory in Macroeconometric Models

VAR models are on the x-axis of the figure, but there has always been interest in trying to move up the curve by placing some restrictions upon the VAR equations that are loosely guided by the ideas from miniature theoretical models; the resulting models are called Structural VARs (SVARs). This is also true of models that feature permanent shocks; such models are structural vector ECMs (SVECMs). We therefore need to ask how successful these strategies are in incorporating theoretical ideas. As we will see, there are serious questions to be raised about the assumptions used in the transition from VARs to SVARs. These models can be useful, but one needs to be aware of their limitations and to check whether there are problems (where this is possible).
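One common route from a VAR to an SVAR is a recursive (Cholesky) scheme for the impact matrix; the sketch below, with purely illustrative numbers for a bivariate VAR(1), shows the mechanics. This is exactly the kind of identifying assumption whose credibility the lectures will question.

```python
import numpy as np

# Hypothetical reduced-form VAR(1) estimates: y_t = Phi y_{t-1} + e_t,
# with innovation covariance Omega = E[e_t e_t'].
Phi = np.array([[0.7, 0.2], [0.1, 0.5]])
Omega = np.array([[1.0, 0.3], [0.3, 0.8]])

# Recursive identification: e_t = B eps_t with B lower triangular and
# eps_t orthonormal structural shocks, so that B B' = Omega.
B = np.linalg.cholesky(Omega)

# Structural impulse responses C_j B (here C_j = Phi^j) for j = 0..h.
h = 8
C = np.eye(2)
for j in range(h + 1):
    print(j, (C @ B).round(3))
    C = Phi @ C
```

The ordering of the variables determines which shock is allowed to move both variables on impact, which is why recursive SVAR results can be sensitive to an essentially arbitrary choice.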
In many base models the shocks are taken to be either permanent or to last for only a single period. This amounts to endowing agents with perfect foresight, as either the new permanent value for the shock is known or it is known that the shock only lasts for one period. That means that the models are solved using perfect-foresight algorithms. Because the shock processes are only of two types, Pagan (2003) refers to these base models as incomplete DSGE (IDSGE) models. There are both advantages and disadvantages to working with IDSGE models. The disadvantage is that one gives up some generality. The main advantage arises in a forecasting context, where often paths for shocks are to be specified based on the priors of the policy makers, and these rarely fit a simple structure like (3). Moreover, not having to specify a process for the shocks simplifies computation of optimal solutions a good deal and produces a neat separation between what the theory can provide in the way of dynamics and what is being imposed as an auxiliary assumption. At the end of the lectures we will consider these differences, and how one might bridge some of the gaps, in much more detail.
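To make the perfect-foresight idea concrete, the sketch below solves the linear form (1) given a fully known shock path by stacking the equations over time into one linear system, with assumed initial and terminal conditions. This is only a toy linear version; operational IDSGE models are typically nonlinear and commonly use Newton-type stacked-time or Fair-Taylor algorithms built on the same idea.

```python
import numpy as np

def perfect_foresight_path(B0, B1, A, F, u_path, y0, yT1):
    """Sketch: solve B0 y_t = B1 y_{t-1} + A y_{t+1} + F u_t for t = 1..T
    given a known shock path u_path (perfect foresight), an initial y0
    and a terminal yT1, by stacking all T equations into one system."""
    T = len(u_path)
    n = B0.shape[0]
    M = np.zeros((T * n, T * n))
    b = np.zeros(T * n)
    for t in range(T):
        r = slice(t * n, (t + 1) * n)
        M[r, r] = B0
        b[r] = F @ u_path[t]
        if t > 0:
            M[r, (t - 1) * n: t * n] = -B1   # link to y_{t-1}
        else:
            b[r] += B1 @ y0                  # known initial condition
        if t < T - 1:
            M[r, (t + 1) * n:(t + 2) * n] = -A   # link to y_{t+1}
        else:
            b[r] += A @ yT1                  # known terminal condition
    return np.linalg.solve(M, b).reshape(T, n)

# Hypothetical one-period shock that then vanishes.
B0 = np.eye(2)
B1 = np.array([[0.5, 0.0], [0.1, 0.3]])
A = np.array([[0.2, 0.0], [0.0, 0.2]])
F = np.eye(2)
u_path = np.zeros((20, 2))
u_path[0] = [1.0, 0.0]
print(perfect_foresight_path(B0, B1, A, F, u_path,
                             y0=np.zeros(2), yT1=np.zeros(2)).round(3))
```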
4 References
Allsopp, C. and D. Vines (2000), "The Assessment: Macroeconomic Policy", Oxford Review of Economic Policy, 16, 1-32.

Binder, M. and M.H. Pesaran (1995), "Multivariate Rational Expectations Models and Macroeconometric Modelling: A Review and Some New Results", in M.H. Pesaran and M. Wickens (eds), Handbook of Applied Econometrics: Macroeconomics, Basil Blackwell, Oxford.

Black, R., V. Cassino, A. Drew, E. Hansen, B. Hunt, D. Rose and A. Scott (1997), "The Forecasting and Policy System: The Core Model", Reserve Bank of New Zealand Research Paper 43.

Coletti, D., B. Hunt, D. Rose and R. Tetlow (1996), "The Bank of Canada's New Quarterly Projection Model, Part 3: The Dynamic Model: QPM", Bank of Canada, Technical Report 75.

Forni, M., M. Lippi and L. Reichlin (2003), "Opening the Black Box: Structural Factor Models versus Structural VARs", ECARES, Universite Libre de Bruxelles, mimeo.

Giannone, D., L. Reichlin and L. Sala (2002), "VARs, Common Factors and the Empirical Validation of Equilibrium Business Cycle Models", Journal of Econometrics (forthcoming).

Levtchenkova, S., A.R. Pagan and J. Robertson (1998), "Shocking Stories", Journal of Economic Surveys, 12, 507-532.

Pagan, A.R. (2003), "Report on Modelling and Forecasting at the Bank of England", Bank of England Quarterly Bulletin, Spring, 1-29.

Smets, F. and R. Wouters (2002), "An Estimated Stochastic Dynamic General Equilibrium Model of the Euro Area", Working Paper #171, European Central Bank.

Vahid, F. and R.F. Engle (1993), "Common Trends and Common Cycles", Journal of Applied Econometrics, 8, 341-360.