
Properties of estimators

Unbiased estimators:
Let θ̂ be an estimator of a parameter θ. We say that θ̂ is an unbiased estimator of θ if
E(θ̂) = θ

Examples:
Let X1 , X2 , · · · , Xn be an i.i.d. sample from a population with mean µ and standard deviation σ. Show that
X̄ and S² are unbiased estimators of µ and σ², respectively.
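A short sketch of the first part (the result for S² follows from the identity E[Σ(Xi − X̄)²] = (n − 1)σ²):

E(X̄) = E[(X1 + X2 + · · · + Xn )/n] = (1/n)[E(X1 ) + E(X2 ) + · · · + E(Xn )] = (1/n)(nµ) = µ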

Minimum variance unbiased estimators (MVUE):
Cramér-Rao inequality:
Let X1 , X2 , · · · , Xn be an i.i.d. sample from a distribution that has pdf f (x) and let θ̂ be an estimator of a
parameter θ of this distribution. We will show that the variance of θ̂ is at least:

var(θ̂) ≥ 1 / [nE((∂lnf (x)/∂θ)²)]        or        var(θ̂) ≥ 1 / [−nE(∂²lnf (x)/∂θ²)]
Theorem:
If θ̂ is an unbiased estimator of θ and if

var(θ̂) = 1 / [nE((∂lnf (x)/∂θ)²)]

then θ̂ is a minimum variance unbiased estimator of θ. In other words, if the variance of θ̂ attains the lower
bound of the Cramér-Rao inequality, we say that θ̂ is a minimum variance unbiased estimator (MVUE) of θ.

Example:
Let X1 , X2 , · · · , Xn be an i.i.d. sample from a normal population with mean µ and standard deviation σ.
Show that X̄ is a minimum variance unbiased estimator of µ.
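A brief sketch of the calculation: for the normal pdf, lnf (x; µ) = −ln(σ√2π) − (x − µ)²/(2σ²), so ∂lnf (x; µ)/∂µ = (x − µ)/σ² and

I(µ) = E[((X − µ)/σ²)²] = var(X)/σ⁴ = 1/σ²

The Cramér-Rao bound is therefore 1/(nI(µ)) = σ²/n. Since X̄ is unbiased with var(X̄) = σ²/n, it attains the bound and is the MVUE of µ.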

Relative efficiency:
If θ̂1 and θ̂2 are both unbiased estimators of a parameter θ, we say that θ̂1 is relatively more efficient than θ̂2 if
var(θ̂1 ) < var(θ̂2 ). We use the ratio

var(θ̂1 ) / var(θ̂2 )

as a measure of the relative efficiency of θ̂2 w.r.t. θ̂1 .

Example:
Suppose X1 , X2 , · · · , Xn is an i.i.d. random sample from a Poisson distribution with parameter λ. Let
λ̂1 = X̄ and λ̂2 = (X1 + X2 )/2
be two unbiased estimators of λ. Find the relative efficiency of λ̂2 w.r.t. λ̂1 .
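A brief sketch of the calculation: since var(Xi ) = λ for a Poisson distribution,

var(λ̂1 ) = var(X̄) = λ/n        and        var(λ̂2 ) = [var(X1 ) + var(X2 )]/4 = λ/2

so the relative efficiency of λ̂2 w.r.t. λ̂1 is var(λ̂1 )/var(λ̂2 ) = (λ/n)/(λ/2) = 2/n, and λ̂1 is the more efficient estimator whenever n > 2.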

Consistent estimators:
Definition:
The estimator θ̂ of a parameter θ is said to be a consistent estimator if for any positive ε

lim_{n→∞} P (|θ̂ − θ| ≤ ε) = 1        or        lim_{n→∞} P (|θ̂ − θ| > ε) = 0

We say that θ̂ converges in probability to θ. Applied to X̄ this is the weak law of large numbers: the average
of many independent random variables is, with high probability, very close to the true mean µ.

Theorem:
An unbiased estimator θ̂ of a parameter θ is consistent if var(θ̂) → 0 as n → ∞.
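For example, X̄ is unbiased for µ with var(X̄) = σ²/n → 0 as n → ∞, so X̄ is a consistent estimator of µ. (The theorem follows from Chebyshev's inequality: P (|θ̂ − θ| > ε) ≤ var(θ̂)/ε² when θ̂ is unbiased.)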

Bias:
The bias B of an estimator θ̂ is given by
B = E(θ̂) − θ
In general, given two unbiased estimators we would choose the one with the smaller variance. However,
restricting attention to unbiased estimators is not always best: there may exist biased estimators with smaller
variance. We use the mean square error (MSE)

MSE = E[(θ̂ − θ)²]

as a measure of the goodness of an estimator. We can show that

MSE = var(θ̂) + B²
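A short sketch of this identity: writing θ̂ − θ = (θ̂ − E(θ̂)) + (E(θ̂) − θ) and expanding the square,

MSE = E[(θ̂ − E(θ̂))²] + 2(E(θ̂) − θ)E[θ̂ − E(θ̂)] + (E(θ̂) − θ)² = var(θ̂) + 0 + B²

since E[θ̂ − E(θ̂)] = 0.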

Example:
The reading on a voltage meter connected to a test circuit is uniformly distributed over the interval (θ, θ +1),
where θ is the true but unknown voltage of the circuit. Suppose that X1 , X2 , · · · , Xn denotes a random sample
of such readings.
a. Show that X̄ is a biased estimator of θ, and compute the bias.

b. Find a function of X̄ that is an unbiased estimator of θ.


c. Find the MSE when X̄ is used as an estimator of θ.
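One possible route to the answers (a sketch): for X uniform on (θ, θ + 1), E(X) = θ + 1/2 and var(X) = 1/12.
a. E(X̄) = θ + 1/2, so X̄ is biased with B = 1/2.
b. X̄ − 1/2 has expected value θ, so it is an unbiased estimator of θ.
c. MSE(X̄) = var(X̄) + B² = 1/(12n) + 1/4.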

Fisher information and Cramér-Rao inequality

Let X be a random variable with pdf f (x; θ). Then

∫ f (x; θ) dx = 1        take derivatives w.r.t. θ on both sides

∫ ∂f (x; θ)/∂θ dx = 0        this is the same as:

∫ [1/f (x; θ)] [∂f (x; θ)/∂θ] f (x; θ) dx = 0        or

∫ [∂lnf (x; θ)/∂θ] f (x; θ) dx = 0        differentiate again w.r.t. θ

∫ [ ∂²lnf (x; θ)/∂θ² · f (x; θ) + (∂lnf (x; θ)/∂θ)(∂f (x; θ)/∂θ) ] dx = 0        or

∫ [ ∂²lnf (x; θ)/∂θ² · f (x; θ) + (∂lnf (x; θ)/∂θ)(1/f (x; θ))(∂f (x; θ)/∂θ) f (x; θ) ] dx = 0        or

∫ [ ∂²lnf (x; θ)/∂θ² · f (x; θ) + (∂lnf (x; θ)/∂θ)² f (x; θ) ] dx = 0        or

∫ ∂²lnf (x; θ)/∂θ² · f (x; θ) dx + ∫ (∂lnf (x; θ)/∂θ)² f (x; θ) dx = 0        or

E[∂²lnf (X; θ)/∂θ²] + E[(∂lnf (X; θ)/∂θ)²] = 0        or

E[(∂lnf (X; θ)/∂θ)²] = −E[∂²lnf (X; θ)/∂θ²]

(all integrals are over −∞ < x < ∞). The expression

E[(∂lnf (X; θ)/∂θ)²] = I(θ)

is the so-called Fisher information for one observation.
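As an illustration (not from the derivation above), take the Poisson distribution with parameter λ: lnf (x; λ) = −λ + x lnλ − ln(x!), so ∂²lnf (x; λ)/∂λ² = −x/λ² and

I(λ) = −E[∂²lnf (X; λ)/∂λ²] = E(X)/λ² = 1/λ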

Let’s find the Fisher information in a sample: Let X1 , X2 , · · · , Xn be an i.i.d. random sample from a distri-
bution with pdf f (x; θ). The joint pdf of X1 , X2 , · · · , Xn is

L(θ) = f (x1 ; θ)f (x2 ; θ) · · · f (xn ; θ)


Take logarithms on both sides · · ·

lnL(θ) = lnf (x1 ; θ) + lnf (x2 ; θ) + · · · + lnf (xn ; θ)


Take derivatives w.r.t θ on both sides

∂lnL(θ)/∂θ = ∂lnf (x1 ; θ)/∂θ + ∂lnf (x2 ; θ)/∂θ + · · · + ∂lnf (xn ; θ)/∂θ

When one observation was involved (see previous page) the Fisher information was E[(∂lnf (X; θ)/∂θ)²]. Now we
are dealing with a random sample X1 , X2 , · · · , Xn and f (x; θ) is replaced by L(θ) (the joint pdf). Therefore,
the Fisher information in the sample will be E[(∂lnL(θ)/∂θ)²].

(∂lnL(θ)/∂θ)² = [∂lnf (x1 ; θ)/∂θ + ∂lnf (x2 ; θ)/∂θ + · · · + ∂lnf (xn ; θ)/∂θ]²        or

(∂lnL(θ)/∂θ)² = (∂lnf (x1 ; θ)/∂θ)² + (∂lnf (x2 ; θ)/∂θ)² + · · · + (∂lnf (xn ; θ)/∂θ)²
                + 2 (∂lnf (x1 ; θ)/∂θ)(∂lnf (x2 ; θ)/∂θ) + · · ·

Take expected values on both sides

E[(∂lnL(θ)/∂θ)²] = E[(∂lnf (X1 ; θ)/∂θ)²] + E[(∂lnf (X2 ; θ)/∂θ)²] + · · · + E[(∂lnf (Xn ; θ)/∂θ)²]
The expected value of the cross-product terms is equal to zero. Why? (Hint: the Xi are independent, and from the previous page E[∂lnf (Xi ; θ)/∂θ] = 0.)

We conclude that the Fisher information in the sample is:


E[(∂lnL(θ)/∂θ)²] = I(θ) + I(θ) + · · · + I(θ)

or

In (θ) = nI(θ)
The Fisher information in the sample is equal to n times the Fisher information for one observation.
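Continuing the Poisson illustration, In (λ) = nI(λ) = n/λ, so the Cramér-Rao bound is 1/In (λ) = λ/n. Since X̄ is unbiased for λ with var(X̄) = λ/n, it attains the bound and is the MVUE of λ.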

Cramér-Rao inequality:
var(θ̂) ≥ 1/(nI(θ))        or        var(θ̂) ≥ 1 / [nE((∂lnf (X; θ)/∂θ)²)]        or        var(θ̂) ≥ 1 / [−nE(∂²lnf (X; θ)/∂θ²)]

Let X1 , X2 , · · · , Xn be an i.i.d. random sample from a distribution with pdf f (x; θ), and let θ̂ = g(X1 , X2 , · · · , Xn )
be an unbiased estimator of the unknown parameter θ. Since θ̂ is unbiased, it is true that E(θ̂) = θ, or
∫ · · · ∫ g(x1 , x2 , · · · , xn ) f (x1 ; θ)f (x2 ; θ) · · · f (xn ; θ) dx1 dx2 · · · dxn = θ

(here and below, each integral is over −∞ to ∞)

Take derivatives w.r.t θ on both sides


∫ · · · ∫ g(x1 , x2 , · · · , xn ) [ Σ_{i=1}^{n} (1/f (xi ; θ)) ∂f (xi ; θ)/∂θ ] f (x1 ; θ)f (x2 ; θ) · · · f (xn ; θ) dx1 dx2 · · · dxn = 1

Since
(1/f (xi ; θ)) ∂f (xi ; θ)/∂θ = ∂lnf (xi ; θ)/∂θ
we can write the previous expression as
∫ · · · ∫ g(x1 , x2 , · · · , xn ) [ Σ_{i=1}^{n} ∂lnf (xi ; θ)/∂θ ] f (x1 ; θ)f (x2 ; θ) · · · f (xn ; θ) dx1 dx2 · · · dxn = 1

or
∫ · · · ∫ g(x1 , x2 , · · · , xn ) Q f (x1 ; θ)f (x2 ; θ) · · · f (xn ; θ) dx1 dx2 · · · dxn = 1

where
Q = Σ_{i=1}^{n} ∂lnf (xi ; θ)/∂θ

But also, θ̂ = g(X1 , X2 , · · · , Xn ).
