
Proof of the Gauss-Markov Theorem

Copyright © 2012 Dan Nettleton (Iowa State University), Statistics 511
The Gauss-Markov Theorem

Under the Gauss-Markov Linear Model, the OLS estimator c′β̂ of an estimable linear function c′β is the unique Best Linear Unbiased Estimator (BLUE) in the sense that Var(c′β̂) is strictly less than the variance of any other linear unbiased estimator of c′β.

Unbiased Linear Estimators of c′β

If a is a fixed vector, then a′y is a linear function of y.

An estimator that is a linear function of y is said to be a linear estimator.

A linear estimator a′y is an unbiased estimator of c′β if and only if

E(a′y) = c′β ∀ β ∈ ℝᵖ ⇐⇒ a′E(y) = c′β ∀ β ∈ ℝᵖ
⇐⇒ a′Xβ = c′β ∀ β ∈ ℝᵖ
⇐⇒ a′X = c′.
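
As a numerical illustration of this criterion, here is a minimal sketch in Python with numpy; the design matrix X, the vector a, and β below are assumptions for illustration, not values from the slides. Any a satisfying a′X = c′ yields an estimator whose expectation equals c′β for every β.

import numpy as np

# Sketch: a linear estimator a'y is unbiased for c'beta exactly when a'X = c'.
rng = np.random.default_rng(0)
X = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])        # assumed design: n = 3 observations, p = 2 parameters
a = np.array([1., -1., 0.])     # fixed coefficient vector for the estimator a'y
c = X.T @ a                     # choose c so that c' = a'X holds by construction
beta = rng.normal(size=2)       # an arbitrary beta in R^p
# E(a'y) = a'E(y) = a'X beta, which matches c'beta for this (and every) beta:
print(np.isclose(a @ (X @ beta), c @ beta))   # True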

The OLS Estimator of c′β is a Linear Estimator

We have previously defined the Ordinary Least Squares (OLS) estimator of an estimable c′β by c′β̂, where β̂ is any solution to the normal equations X′Xb = X′y.

We have previously shown that c′β̂ is the same for any β̂ that is a solution to the normal equations.

We have previously shown that (X′X)⁻X′y is a solution to the normal equations for any generalized inverse of X′X, denoted by (X′X)⁻.

Thus, c′β̂ = c′(X′X)⁻X′y = ℓ′y (where ℓ′ = c′(X′X)⁻X′), so that c′β̂ is a linear estimator.
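
A short numerical sketch of this representation (the design X, response y, and target c are assumptions for illustration; numpy.linalg.pinv supplies one particular generalized inverse of X′X):

import numpy as np

rng = np.random.default_rng(1)
X = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])                  # assumed example design
y = rng.normal(size=3)                    # assumed response vector
c = np.array([0., 1.])                    # target linear function c'beta (the slope)
G = np.linalg.pinv(X.T @ X)               # one choice of (X'X)^-
beta_hat = G @ X.T @ y                    # a solution of the normal equations X'Xb = X'y
ell = X @ G.T @ c                         # ell = X[(X'X)^-]'c, i.e. ell' = c'(X'X)^- X'
print(np.isclose(c @ beta_hat, ell @ y))  # True: c'beta_hat = ell'y

Any other generalized inverse of X′X would give the same value of c′β̂, as noted above.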

c′β̂ is an Unbiased Estimator of an Estimable c′β

By definition, c′β is estimable if and only if there exists a linear unbiased estimator of c′β.

It follows from slide 3 that c′β is estimable if and only if c′ = a′X for some vector a.

If c′β is estimable, then

ℓ′X = c′(X′X)⁻X′X = a′X(X′X)⁻X′X = a′PₓX = a′X = c′,

where the last two steps use Pₓ = X(X′X)⁻X′ and PₓX = X.

Thus, by slide 3, c′β̂ = ℓ′y is an unbiased estimator of c′β whenever c′β is estimable.
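
The identity ℓ′X = c′ does not require X′X to be invertible. A sketch with an assumed rank-deficient design (its first column equals the sum of the other two, so X′X is singular):

import numpy as np

X = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [1., 0., 1.]])        # assumed design; column 1 = column 2 + column 3
a = np.array([0., 0., 1.])          # c' = a'X guarantees c'beta is estimable
c = X.T @ a
G = np.linalg.pinv(X.T @ X)         # a generalized inverse of the singular X'X
ell = X @ G.T @ c                   # ell' = c'(X'X)^- X'
print(np.allclose(ell @ X, c))      # True: ell'X = c', so E(ell'y) = ell'X beta = c'beta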

Proof of the Gauss-Markov Theorem

Suppose d′y is any linear unbiased estimator other than the OLS estimator c′β̂ = ℓ′y.

Then we know the following:

1. d ≠ ℓ ⇐⇒ ‖d − ℓ‖² = (d − ℓ)′(d − ℓ) > 0, and
2. d′X = ℓ′X = c′ ⟹ d′X − ℓ′X = 0′ ⟹ (d − ℓ)′X = 0′.

We need to show Var(d′y) > Var(c′β̂).

Proof of the Gauss-Markov Theorem

Var(d′y) = Var(d′y − c′β̂ + c′β̂)
= Var(d′y − c′β̂) + Var(c′β̂) + 2Cov(d′y − c′β̂, c′β̂).

Var(d′y − c′β̂) = Var(d′y − ℓ′y) = Var((d′ − ℓ′)y) = Var((d − ℓ)′y)
= (d − ℓ)′Var(y)(d − ℓ) = (d − ℓ)′(σ²I)(d − ℓ)
= σ²(d − ℓ)′I(d − ℓ) = σ²(d − ℓ)′(d − ℓ) > 0 by (1).

Cov(d′y − c′β̂, c′β̂) = Cov(d′y − ℓ′y, ℓ′y) = Cov((d − ℓ)′y, ℓ′y)
= (d − ℓ)′Var(y)ℓ = σ²(d − ℓ)′ℓ
= σ²(d − ℓ)′X[(X′X)⁻]′c = 0 by (2).
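
Both facts can be checked numerically. In this sketch (the design, target, and random seed are assumptions for illustration), a competing unbiased coefficient vector d is built by perturbing ℓ within the null space of X′, which preserves d′X = c′:

import numpy as np

rng = np.random.default_rng(2)
X = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])                        # assumed example design
c = np.array([0., 1.])                          # estimable target (the slope)
G = np.linalg.pinv(X.T @ X)
ell = X @ G.T @ c                               # OLS coefficient vector
P = X @ G @ X.T                                 # projection onto the column space of X
d = ell + (np.eye(3) - P) @ rng.normal(size=3)  # competing vector; (I - P)z is in the null space of X'
print(np.allclose(d @ X, c))                    # (2): d'X = c', so d'y is also unbiased
print(np.isclose((d - ell) @ ell, 0.0))         # the covariance term sigma^2 (d - ell)'ell vanishes

The covariance vanishes because ℓ lies in the column space of X while d − ℓ is orthogonal to it.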
Proof of the Gauss-Markov Theorem

It follows that

Var(d′y) = Var(d′y − c′β̂) + Var(c′β̂)
> Var(c′β̂). □
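
A Monte Carlo sketch of the conclusion (all specific numbers below are assumptions for illustration): under simulated Gauss-Markov samples, both estimators average to c′β, while the variance of d′y exceeds that of ℓ′y by σ²(d − ℓ)′(d − ℓ):

import numpy as np

rng = np.random.default_rng(3)
X = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])                      # assumed design
beta, sigma = np.array([2., 3.]), 1.5         # assumed true parameters
c = np.array([0., 1.])                        # target: the slope, so c'beta = 3
G = np.linalg.pinv(X.T @ X)
ell = X @ G.T @ c                             # OLS: c'beta_hat = ell'y
d = ell + np.array([1., -2., 1.])             # [1, -2, 1] is orthogonal to both columns of X, so d'X = c'
y = X @ beta + sigma * rng.normal(size=(200_000, 3))   # 200,000 simulated samples
ols, other = y @ ell, y @ d
print(ols.mean(), other.mean())               # both near c'beta = 3 (unbiased)
print(other.var() - ols.var())                # near the theoretical gap below
print(sigma**2 * (d - ell) @ (d - ell))       # sigma^2 (d - ell)'(d - ell) = 13.5

The simulated variance gap matches the term σ²(d − ℓ)′(d − ℓ) isolated in the proof, which is strictly positive whenever d ≠ ℓ.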

