
Heteroskedasticity and serial correlation.

Generalized
least squares estimator. Weighted least squares.
Robust and clustered standard errors.

Jakub Mućk
SGH Warsaw School of Economics

Jakub Mućk Advanced Applied Econometrics Heteroskedasticity and serial correlation 1 / 45


Least squares estimator

Multiple regression

Least squares estimator:

y = β0 + β1 x1 + β2 x2 + . . . + βK xK + ε (1)
where
I y is the (outcome) dependent variable;
I x1 , x2 , . . . , xK is the set of independent variables;
I ε is the error term.
The dependent variable is explained by components that vary with the
explanatory variables and by the error term.
β0 is the intercept.
β1 , β2 , . . . , βK are the coefficients (slopes) on x1 , x2 , . . . , xK .

β1 , β2 , . . . , βK measure the effect of a change in x1 , x2 , . . . , xK on the
expected value of y (ceteris paribus).

Assumptions of the least squares estimators I
Assumption #1: true DGP (data generating process):

y = Xβ + ε. (2)

Assumption #2: the expected value of the error term is zero:

E (ε) = 0, (3)

and this implies that E (y) = Xβ.


Assumption #3: spherical variance-covariance matrix of the error term:

var(ε) = E(εε′) = σ²I. (4)

In particular:
I the variance of the error term is constant and equals σ²:
var(εi ) = σ² = var(yi ). (5)
I the covariance between any pair εi and εj (for i ≠ j) is zero:
cov(εi , εj ) = 0. (6)
Assumption #4: Exogeneity. The independent variables are not random
and therefore they are not correlated with the error term:

E(X′ε) = 0. (7)
Assumptions of the least squares estimators II

Assumption #5: the full rank of the matrix of explanatory variables (there is
no perfect collinearity):

rank(X) = K + 1 ≤ N. (8)

Assumption #6 (optional): the normally distributed error term:

ε ∼ N(0, σ²I). (9)

Gauss-Markov Theorem

Assumptions of the least squares estimators


Under assumptions A#1-A#5 of the multiple linear regression model,
the least squares estimator β̂ OLS has the smallest variance among all linear
and unbiased estimators of β.

β̂ OLS is the Best Linear Unbiased Estimator (BLUE) of β.

The least squares estimator

The least squares estimator


β̂ OLS = (X′X)⁻¹X′y. (10)

The variance of the least squares estimator:

Var(β̂ OLS ) = σ²(X′X)⁻¹. (11)

If the (optional) assumption about the normal distribution of the error
term is satisfied, then

β̂ OLS ∼ N(β, Var(β̂ OLS )). (12)

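The formulas (10)-(11) can be computed directly. A minimal sketch in Python with simulated data; the data-generating process and all names below are illustrative assumptions, not part of the slides:

```python
import numpy as np

# Sketch of (10)-(11) on simulated data; the DGP is an illustrative assumption.
rng = np.random.default_rng(0)
N, K = 200, 2
X = np.column_stack([np.ones(N), rng.normal(size=(N, K))])  # constant + K regressors
beta = np.array([1.0, 0.5, -0.3])
y = X @ beta + rng.normal(size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y              # (X'X)^{-1} X'y, equation (10)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (N - K - 1)  # unbiased estimate of sigma^2
var_beta_hat = sigma2_hat * XtX_inv       # sigma^2 (X'X)^{-1}, equation (11)
se = np.sqrt(np.diag(var_beta_hat))       # conventional standard errors
```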
Consequences of non spherical errors

Consequences of non spherical errors
General variance of the least squares estimator (β̂ OLS ):

Var(β̂ OLS ) = E[(β̂ OLS − β)(β̂ OLS − β)′]. (13)

Let us rewrite the least squares estimator:

β̂ OLS = (X′X)⁻¹X′y = (X′X)⁻¹X′(Xβ + ε) = β + (X′X)⁻¹X′ε, (14)

then

Var(β̂ OLS ) = E[((X′X)⁻¹X′ε)((X′X)⁻¹X′ε)′]
            = E[(X′X)⁻¹X′εε′X(X′X)⁻¹]
            = (X′X)⁻¹X′E(εε′)X(X′X)⁻¹.

If assumption #3 about the spherical variance-covariance error matrix,
i.e. E(εε′) = σ²I, is not satisfied, the above expression cannot be simplified
and written as:

Var(β̂ OLS ) = σ²(X′X)⁻¹. (15)

Non spherical errors

Consequences
I The least squares estimator is still unbiased and consistent, but it is no
longer BLUE.
I Inconsistent variance estimates. The standard errors usually computed for the
least squares estimator are unreliable.
I Confidence intervals and hypothesis tests that use these standard errors may
be misleading.
Detection
I Visual inspection of residuals.
I Formal tests.
Dealing with non spherical errors
I (Feasible) Generalized Least Squares.
I Robust standard errors.
Special cases
I Heteroskedasticity of the error term.
I Serial correlation.

Heteroskedasticity

Heteroskedasticity

Homoskedasticity
I The simple linear model:

yi = β0 + β1 xi + εi ,   var(εi ) = σ², (16)

the variance of the least squares estimator for β1 :

var(β̂1LS ) = σ² / Σ_{i=1}^{N} (xi − x̄)². (17)

Heteroskedasticity
I The simple linear model (with heteroskedasticity):

yi = β0 + β1 xi + εi ,   var(εi ) = σi², (18)

the variance of the least squares estimator for β1 :

var(β̂1LS ) = Σ_{i=1}^{N} wi σi² = Σ_{i=1}^{N} (xi − x̄)² σi² / [Σ_{i=1}^{N} (xi − x̄)²]². (19)

Heteroskedasticity

Heteroskedasticity is often encountered when using cross-sectional data.


Cross-section data invariably involve observation units of varying sizes, e.g.,
households, firms, workers.
Intuition: as the size of the economic unit becomes larger, there is more
uncertainty associated with the outcomes.
Heteroskedasticity is sometimes present in time-series data.

Detecting Heteroskedasticity

Detecting Heteroskedasticity

Methods that can be used to detect heteroskedasticity


1. An informal way using residual charts, i.e., the squared residuals versus
explanatory variables.
2. A formal way using statistical tests:
2.1 The Breusch-Pagan test;
2.2 The White test;
2.3 The Goldfeld-Quandt test.

Example

[Figure: household food expenditure per week plotted against weekly household income.]
Example

[Figure: squared least squares residuals plotted against weekly household income.]
The Breusch-Pagan test I

The Breusch-Pagan Lagrange Multiplier test allows us to test whether
the variance of the error term depends on some explanatory variables z that
are possibly different from x.
A general form for the variance function

var(yi ) = σi2 = E(ε2i ) = h (α0 + α1 zi1 + . . . + αS ziS ) . (20)


Two possible functions for h():
I Exponential function:
h (α0 + α1 zi1 + . . . + αS ziS ) = exp (α0 + α1 zi1 + . . . + αS ziS ) . (21)
I Linear function:
h (α0 + α1 zi1 + . . . + αS ziS ) = α0 + α1 zi1 + . . . + αS ziS , (22)
it should be noted that in the linear function one must be careful to ensure
h() > 0.
The null and alternative hypotheses are:

H0 : α1 = α2 = . . . = αS = 0,
H1 : not all αj = 0.

The Breusch-Pagan test II
The null is about homoskedasticity while the alternative is about heteroskedas-
ticity.
Note that for linear function we have:

ε2i = E(ε2i ) + νi = α0 + α1 zi1 + . . . + αS ziS + νi , (23)

where νi is random.
The test statistic is based on the above regression (for the linear function),
obtained after substituting the least squares residuals ε̂i² for εi²:

ε̂i² = α0 + α1 zi1 + . . . + αS ziS + νi . (24)

Finally, the test statistic based on the R² from this regression has
a chi-square distribution with S degrees of freedom:

χ² = N R² ∼ χ²(S) . (25)

The Breusch-Pagan/Lagrange Multiplier test is a large-sample test.

In this test, the value of the statistic computed from the linear function is
valid for testing an alternative hypothesis of heteroskedasticity where the
variance function can be of any form given by h().

The White test I

In the White test the explanatory variables x, their squares and cross-
products are used instead of z.
Example. In the linear model

E(y) = β0 + β1 x1 + β2 x2 , (26)

the following variables will be used:

z1 = x1 , z2 = x2 , z3 = x1² , z4 = x2² , and z5 = x1 x2 .

The White test is performed as an F test or a χ² test (as previously).


The null is about homoskedasticity while the alternative is about heteroskedas-
ticity.

The Goldfeld-Quandt test I

The Goldfeld-Quandt test is designed for a form of heteroskedas-
ticity where the sample can be partitioned into two groups and we suspect
the variance could be different in the two groups.
The sample can be partitioned with:
I an indicator variable,
I a qualitative variable.
Example: wages for female and male workers:

ln wagei = β0 + β1 educi + β2 f emalei + εi , i = 1, 2, . . . , N. (27)

Splitting sample:

ln wageM i = βM 0 + βM 1 educM i + εM i , i = 1, 2, . . . , NM , (28)


ln wageF i = βF 0 + βF 1 educF i + εF i , i = 1, 2, . . . , NF . (29)

The null hypothesis:

H0 : σ²M = σ²F . (30)

The Goldfeld-Quandt test II

Test statistic:

F = (σ̂²M / σ²M) / (σ̂²F / σ²F) ∼ F(NM −KM , NF −KF ) , (31)

when the null is true:

F = σ̂²M / σ̂²F , (32)

where conventionally σ̂²M > σ̂²F (the larger variance goes in the numerator).
If F is higher than its critical value we can reject the null.

Heteroskedasticity-Consistent Standard Errors

Heteroskedasticity-Consistent Standard Errors I

In the presence of heteroskedasticity the least squares estimator, although
still unbiased, is no longer BLUE.
The typical least squares standard errors are incorrect.
Heteroskedasticity-Consistent Standard Errors is a way of correcting
the standard errors so that our interval estimates and hypothesis tests are
valid since they take into consideration heteroskedasticity.
White’s heteroskedasticity-consistent estimator for the simple linear
model:

var̂(β̂1LS ) = [N/(N − 2)] · Σ_{i=1}^{N} (xi − x̄)² ε̂i² / [Σ_{i=1}^{N} (xi − x̄)²]² . (33)

White’s estimator of the variance helps avoid computing incorrect in-
terval estimates or incorrect values of test statistics in the presence of het-
eroskedasticity, but it does not address the other implication of heteroskedas-
ticity (the loss of efficiency).
I But when the sample size is large, the variance of the least squares estimator may
still be sufficiently small to get precise estimates.
I The robust standard errors estimator does not require specifying a suitable variance
function h().

Clustered standard errors

Clustered standard errors

Clustered standard errors can be applied in the presence of heteroskedas-
ticity and when observations can be grouped/clustered.
Example: student’s result and classes.
Key assumption: independence (of the error term) between clusters and
dependence within clusters.
The variance-covariance of the error term:

E(εi εj ) = 0   if i and j belong to different clusters,
E(εi εj ) = σij if i and j belong to the same cluster. (34)

The general variance-covariance matrix of the error term will be block diag-
onal.
Denoting the groups by g = 1, 2, . . . , G, the variance can be estimated as:

Var(β̂ LS ) = (X′X)⁻¹ ( Σ_{g=1}^{G} X′g êg ê′g Xg ) (X′X)⁻¹. (35)

Key problem: we should know our data to appropriately apply clustering.

Generalized Least Squares

GLS: known form of variance I

Consider the simple linear regression:

yi = β0 + β1 xi + εi (36)

where the error term is heteroskedastic, i.e., var(εi ) = σi2 .


The generalized least squares estimator (GLS) depends on the un-
known variance of the error term σi2 .
However, one can assume some structure on σi2 . For instance,

var(εi ) = σi2 = σ 2 xi . (37)

Under the above assumption we can apply the GLS transformation to our
variables (dependent, explanatory, and the error term):

yi /√xi = β0 (1/√xi ) + β1 (xi /√xi ) + εi /√xi , (38)

or, more generally,

zi∗ = zi /√xi . (39)

GLS: known form of variance II

The variance of the transformed error term is therefore constant:

var(ε∗i ) = var(εi /√xi ) = (1/xi ) var(εi ) = (1/xi ) σ² xi = σ². (40)

Therefore the least squares estimator can be applied to the regression based
on the transformed variables:

yi∗ = β0 + β1 x∗i + ε∗i . (41)


The GLS transformation/estimator can be viewed as a weighted least
squares estimator:
I Minimizing the sum of the squared transformed errors ε∗i :

Σ_{i=1}^{N} ε∗i ² = Σ_{i=1}^{N} εi²/xi = Σ_{i=1}^{N} (εi /xi^{1/2})². (42)

I The errors are weighted by 1/xi^{1/2}.
I Intuition: an observation with a smaller error variance gets a larger weight (im-
portance).

GLS and grouped data I

Example: wages for female and male workers in the split samples.
Splitting sample:

ln wageM i = βM 0 + βM 1 educM i + εM i , i = 1, 2, . . . , NM , (43)


ln wageF i = βF 0 + βF 1 educF i + εF i , i = 1, 2, . . . , NF . (44)

The GLS estimator can be applied as follows:

ln wageM i /σM = βM 0 (1/σM ) + βM 1 (educM i /σM ) + εM i /σM , i = 1, 2, . . . , NM , (45)

ln wageF i /σF = βF 0 (1/σF ) + βF 1 (educF i /σF ) + εF i /σF , i = 1, 2, . . . , NF , (46)

where σM and σF are the standard deviations of the error term in the subsam-
ples of male and female workers, respectively.
How do we get estimates of σM and σF ?
We can use the Feasible Generalized Least Squares (FGLS) estimator.
The steps are as follows:
1. Obtain σM and σF estimates by applying least squares separately to
both subsamples (as in the Goldfeld-Quandt test).

GLS and grouped data II

2. Construct the general variance of the error term:

σ̂i = σ̂M if F EM ALEi = 0,
σ̂i = σ̂F if F EM ALEi = 1. (47)

3. Apply least squares to the transformed initial model:

ln wagei /σ̂i = β0 (1/σ̂i ) + β1 (educi /σ̂i ) + β2 (f emalei /σ̂i ) + εi /σ̂i . (48)

Unknown Form of Variance I

General steps in applying GLS when the form of the variance is
unknown
1. Estimate the equation by least squares and compute the least
squares residuals ε̂i .
2. Take the squared residuals (ε̂2i ) and apply least squares to the equation de-
scribing the variance. One possible form is:

ln ε̂2i = α1 + α2 z1 + . . . + αS zS + νi , (49)

where νi is random and zj is an explanatory variable or its transformation
(e.g. logarithmic).
3. Compute the estimated variance σ̂i2 . For the example from previous point:

σ̂i2 = exp (α̂1 + α̂2 z1 + . . . + α̂S zS ) . (50)

4. Transform variables (dependent and explanatory):

yi∗ = yi /σ̂i and x∗ji = xji /σ̂i . (51)

5. Apply the least squares to transformed variables.

Serial correlation

Nature of serial correlation of the error term

Serial correlation of the error term is usually present in time series.

In general, serial correlation is a measure of persistence/inertia. This is a
common feature of many economic variables.
Serial correlation of the error term suggests that the dynamic relationship
between variables is misspecified.

Detecting serial correlation of the error term

An informal way using residual charts:


I plotting residuals êt versus time,
I plotting residuals êt versus lagged residuals êt−1 ,
Formal ways using statistical tests:
I Testing autocorrelation of order one as well as of higher orders.
I The Lagrange multiplier test.
I The Durbin-Watson test.

Autocorrelation

The population correlation of x and y:

ρxy = cov(x, y) / √(var(x) var(y)) . (52)

The population autocorrelation of order one:

ρ1 = cov(yt , yt−1 ) / √(var(yt ) var(yt−1 )) . (53)

The sample autocorrelation (ACF) of order one:

r1 = Σ_{t=2}^{T} (yt − ȳ)(yt−1 − ȳ) / Σ_{t=1}^{T} (yt − ȳ)² . (54)

Autocorrelation

The k-th order sample autocorrelation (ACF):

rk = [ (1/(T − k)) Σ_{t=k+1}^{T} (yt − ȳ)(yt−k − ȳ) ] / [ (1/T ) Σ_{t=1}^{T} (yt − ȳ)² ] . (55)

Testing significance of autocorrelation.


I The null is about no serial correlation, i.e.,
H0 : ρk = 0. (56)
I The test statistic: √
Z= T rk ∼ N (0, 1). (57)
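The statistic (57) can be computed directly; a sketch on a simulated AR(1) series, where the persistence parameter 0.6 is an illustrative assumption:

```python
import numpy as np

# Simulated AR(1) series with persistence 0.6 (illustrative assumption).
rng = np.random.default_rng(8)
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.normal()

def sample_acf(y, k):
    """Sample autocorrelation of order k, as in (55)."""
    ybar = y.mean()
    num = ((y[k:] - ybar) * (y[:-k] - ybar)).sum() / (len(y) - k)
    den = ((y - ybar) ** 2).sum() / len(y)
    return num / den

r1 = sample_acf(y, 1)
Z = np.sqrt(T) * r1   # compare with N(0,1) critical values, e.g. 1.96
```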

The Lagrange multiplier test I
The Lagrange multiplier test allows us to test jointly correlations at more
than one lag.
The AR(1) model for error term:

et = ρ1 et−1 + νt (58)

where νt ∼ N (0, σν2 ).


Substituting into a simple regression we get:

yt = β0 + β1 xt + ρ1 et−1 + νt . (59)

Substituting the residuals we get:

yt = β0 + β1 xt + ρ1 êt−1 + νt , (60)

and using the fact that yt = β̂0 + β̂1 xt + êt :

β̂0 + β̂1 xt + êt = β0 + β1 xt + ρ1 êt−1 + νt , (61)

which after manipulation leads to the following auxiliary regression in


the Lagrange Multiplier (LM) test:

êt = γ0 + γ1 xt + ρêt−1 + νt . (62)


The Lagrange multiplier test II

The null is about no autocorrelation of order one, i.e.,

H0 : ρ1 = 0. (63)

The test statistic:


LM = T × R2 ∼ χ2(1) . (64)

The Lagrange multiplier test – testing higher order of autocorrelation

The AR(k) model for error term:

et = ρ1 et−1 + ρ2 et−2 + . . . + ρk et−k + νt . (65)

The auxiliary regression:

êt = γ0 + γ1 xt + ρ1 êt−1 + ρ2 êt−2 + . . . + ρk êt−k + νt . (66)

The null is about no autocorrelation up to k-th order:

H0 : ρ1 = ρ2 = . . . = ρk = 0. (67)

The test statistic:


LM = T × R2 ∼ χ2(k) . (68)

Estimation with Serially Correlated Errors

Dealing with serial correlation

Three strategies can be considered:

1. Least squares estimation with HAC (heteroskedasticity and autocorrelation
consistent) standard errors.
2. Generalized least squares estimation (the Cochrane-Orcutt estimator).
3. Dynamic models (to be covered in the next classes).

HAC standard errors

The variance of the least squares estimator (in the simple regression model, i.e.,
yt = β0 + β1 xt + et ):

var(β̂1 ) = Σt wt² var(et ) + Σt Σ_{s≠t} wt ws cov(et , es ), (69)

where
wt = (xt − x̄) / Σt (xt − x̄)² . (70)

If there is no serial correlation, then the variance

var(β̂1 ) = Σt wt² var(et ), (71)

is very similar to the heteroskedasticity-consistent (HC) variance estimator.


In practice, we estimate the Newey-West robust standard errors by limiting (trun-
cating) the number of lags (the second term of the HAC). The results (i.e. standard
errors) can be very sensitive to this choice.
A common practice is to use prewhitening of the explanatory variables.
I It allows us to eliminate the persistence of the explanatory variable that could be
essential in constructing the weights (wt ).

Properties of an AR(1) error model

AR(1) error model:

et = ρet−1 + νt , (72)

where |ρ| < 1, νt ∼ N (0, σν2 ) and cov(νt , νs ) = 0 for t ≠ s.
The mean and variance of the error term:

E (et ) = 0,   var(et ) = σe2 = σν2 / (1 − ρ²) . (73)

The covariance and autocorrelation (of order k) of the error term:

cov(et , et−k ) = ρᵏ σν2 / (1 − ρ²) ,   ρk = ρᵏ . (74)

Simple regression with AR(1) errors

When the error term follows an AR(1) process, the simple regression can be ex-
pressed as:

yt = β0 + β1 xt + ρet−1 + νt . (75)

For period t − 1 the error term can be expressed as:

et−1 = yt−1 − β0 − β1 xt−1 . (76)

Combining the above facts we get:

yt = β0 (1 − ρ) + β1 xt + ρyt−1 − ρβ1 xt−1 + νt . (77)

The Cochrane-Orcutt estimator

Alternatively, we can use the Cochrane-Orcutt estimator.


This is a special case of GLS transformation, i.e.,

zt∗ = zt − ρzt−1 , (78)

where z ∈ {yt , xt , et }.
This transformation is called quasi-differencing.
To get an estimate of ρ we can use the sample correlation of the residuals.
By construction the error term is not autocorrelated:

e∗t = et − ρet−1 = ρet−1 + νt − ρet−1 = νt . (79)

