
8.2.5 Solved Problems
Problem 1

Let $X$ be the height of a randomly chosen individual from a population. In order to estimate the mean and variance of $X$, we observe a random sample $X_1, X_2, \cdots, X_7$. Thus, the $X_i$'s are i.i.d. and have the same distribution as $X$. We obtain the following values (in centimeters):

166.8, 171.4, 169.1, 178.5, 168.0, 157.9, 170.1

Find the values of the sample mean, the sample variance, and the sample standard deviation for the observed
sample.

Solution

$$\bar{X} = \frac{X_1 + X_2 + X_3 + X_4 + X_5 + X_6 + X_7}{7} = \frac{166.8 + 171.4 + 169.1 + 178.5 + 168.0 + 157.9 + 170.1}{7} \approx 168.8$$

The sample variance is given by

$$S^2 = \frac{1}{7-1} \sum_{k=1}^{7} (X_k - 168.8)^2 \approx 37.7$$

Finally, the sample standard deviation is given by


$$S = \sqrt{S^2} \approx 6.1$$

The following MATLAB code can be used to obtain these values:


x = [166.8, 171.4, 169.1, 178.5, 168.0, 157.9, 170.1];

m = mean(x);  % sample mean
v = var(x);   % sample variance (normalized by N-1 by default)
s = std(x);   % sample standard deviation
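For readers without MATLAB, the same values can be reproduced with Python's standard library (this snippet is an addition to the original solution; note that `statistics.variance` and `statistics.stdev` divide by $n-1$, matching the sample-variance formula above):

```python
import statistics

# Observed heights in centimeters
x = [166.8, 171.4, 169.1, 178.5, 168.0, 157.9, 170.1]

m = statistics.mean(x)      # sample mean
v = statistics.variance(x)  # sample variance (divides by n - 1)
s = statistics.stdev(x)     # sample standard deviation

print(round(m, 1), round(v, 1), round(s, 1))  # 168.8 37.7 6.1
```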

Problem 2

Prove the following:

a. If $\hat{\Theta}_1$ is an unbiased estimator for $\theta$, and $W$ is a zero-mean random variable, then
$$\hat{\Theta}_2 = \hat{\Theta}_1 + W$$
is also an unbiased estimator for $\theta$.

b. If $\hat{\Theta}_1$ is an estimator for $\theta$ such that $E[\hat{\Theta}_1] = a\theta + b$, where $a \neq 0$, show that
$$\hat{\Theta}_2 = \frac{\hat{\Theta}_1 - b}{a}$$
is an unbiased estimator for $\theta$.

Solution
a. We have
$$E[\hat{\Theta}_2] = E[\hat{\Theta}_1] + E[W] \quad \text{(by linearity of expectation)}$$
$$= \theta + 0 \quad \text{(since } \hat{\Theta}_1 \text{ is unbiased and } E[W] = 0\text{)}$$
$$= \theta.$$
Thus, $\hat{\Theta}_2$ is an unbiased estimator for $\theta$.
b. We have
$$E[\hat{\Theta}_2] = \frac{E[\hat{\Theta}_1] - b}{a} \quad \text{(by linearity of expectation)}$$
$$= \frac{a\theta + b - b}{a}$$
$$= \theta.$$
Thus, $\hat{\Theta}_2$ is an unbiased estimator for $\theta$.
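Part (b) can also be illustrated with a short Monte Carlo sketch (an addition to the original solution; the whole setup, including the values of $\theta$, $a$, and $b$, is hypothetical). We build a biased estimator with $E[\hat{\Theta}_1] = a\theta + b$ by distorting the sample mean of a $Uniform(0, 2\theta)$ sample, then apply the correction $(\hat{\Theta}_1 - b)/a$:

```python
import random

# Hypothetical setup: theta is the mean of Uniform(0, 2*theta), so the
# sample mean X_bar is an unbiased estimator of theta. We distort it into
# Theta1 = a*X_bar + b (hence E[Theta1] = a*theta + b) and then undo the
# distortion with Theta2 = (Theta1 - b)/a, as in part (b).
random.seed(0)
theta, a, b = 1.5, 2.0, 3.0
n, reps = 10, 20000

total = 0.0
for _ in range(reps):
    x_bar = sum(random.uniform(0, 2 * theta) for _ in range(n)) / n
    theta1 = a * x_bar + b       # biased: E[Theta1] = a*theta + b
    theta2 = (theta1 - b) / a    # corrected estimator from part (b)
    total += theta2

print(total / reps)  # close to theta = 1.5
```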

Problem 3

Let $X_1, X_2, X_3, \ldots, X_n$ be a random sample from a $Uniform(0, \theta)$ distribution, where $\theta$ is unknown. Define the estimator
$$\hat{\Theta}_n = \max\{X_1, X_2, \cdots, X_n\}.$$

a. Find the bias of $\hat{\Theta}_n$, $B(\hat{\Theta}_n)$.

b. Find the MSE of $\hat{\Theta}_n$, $MSE(\hat{\Theta}_n)$.

c. Is $\hat{\Theta}_n$ a consistent estimator of $\theta$?

Solution
If $X \sim Uniform(0, \theta)$, then the PDF and CDF of $X$ are given by
$$f_X(x) = \begin{cases} \frac{1}{\theta} & 0 \leq x \leq \theta \\ 0 & \text{otherwise} \end{cases}$$
and
$$F_X(x) = \begin{cases} 0 & x < 0 \\ \frac{x}{\theta} & 0 \leq x \leq \theta \\ 1 & x > \theta \end{cases}$$

By Theorem 8.1, the PDF of $\hat{\Theta}_n$ is given by
$$f_{\hat{\Theta}_n}(y) = n f_X(y) \left[F_X(y)\right]^{n-1} = \begin{cases} \frac{n y^{n-1}}{\theta^n} & 0 \leq y \leq \theta \\ 0 & \text{otherwise} \end{cases}$$

a. To find the bias of $\hat{\Theta}_n$, we have
$$E[\hat{\Theta}_n] = \int_0^{\theta} y \cdot \frac{n y^{n-1}}{\theta^n} \, dy = \frac{n}{n+1}\theta.$$
Thus, the bias is given by
$$B(\hat{\Theta}_n) = E[\hat{\Theta}_n] - \theta = \frac{n}{n+1}\theta - \theta = -\frac{\theta}{n+1}.$$

b. To find $MSE(\hat{\Theta}_n)$, we can write
$$MSE(\hat{\Theta}_n) = \mathrm{Var}(\hat{\Theta}_n) + B(\hat{\Theta}_n)^2 = \mathrm{Var}(\hat{\Theta}_n) + \frac{\theta^2}{(n+1)^2}.$$
Thus, we need to find $\mathrm{Var}(\hat{\Theta}_n)$. We have
$$E[\hat{\Theta}_n^2] = \int_0^{\theta} y^2 \cdot \frac{n y^{n-1}}{\theta^n} \, dy = \frac{n}{n+2}\theta^2.$$
Thus,
$$\mathrm{Var}(\hat{\Theta}_n) = E[\hat{\Theta}_n^2] - \left(E[\hat{\Theta}_n]\right)^2 = \frac{n}{n+2}\theta^2 - \left(\frac{n}{n+1}\theta\right)^2 = \frac{n}{(n+2)(n+1)^2}\theta^2.$$
Therefore,
$$MSE(\hat{\Theta}_n) = \frac{n\theta^2}{(n+2)(n+1)^2} + \frac{\theta^2}{(n+1)^2} = \frac{2\theta^2}{(n+2)(n+1)}.$$

c. Note that
$$\lim_{n \to \infty} MSE(\hat{\Theta}_n) = \lim_{n \to \infty} \frac{2\theta^2}{(n+2)(n+1)} = 0.$$
Thus, by Theorem 8.2, $\hat{\Theta}_n$ is a consistent estimator of $\theta$.
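The bias and MSE formulas can be checked numerically with a seeded Monte Carlo simulation (an illustrative addition to the original solution; the choices $\theta = 1$, $n = 10$ are arbitrary). Theory predicts bias $-\theta/(n+1) = -1/11 \approx -0.091$ and MSE $2\theta^2/((n+2)(n+1)) = 2/132 \approx 0.015$:

```python
import random

# Monte Carlo check of parts (a) and (b) for theta = 1, n = 10.
random.seed(1)
theta, n, reps = 1.0, 10, 200000

bias_sum, mse_sum = 0.0, 0.0
for _ in range(reps):
    est = max(random.uniform(0, theta) for _ in range(n))  # Theta_n = max(X_i)
    bias_sum += est - theta
    mse_sum += (est - theta) ** 2

print(bias_sum / reps)  # close to -theta/(n+1) = -1/11
print(mse_sum / reps)   # close to 2*theta^2/((n+2)*(n+1)) = 2/132
```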

Problem 4

Let $X_1, X_2, X_3, \ldots, X_n$ be a random sample from a $Geometric(\theta)$ distribution, where $\theta$ is unknown.


Find the maximum likelihood estimator (MLE) of θ based on this random sample.

Solution
If $X_i \sim Geometric(\theta)$, then
$$P_{X_i}(x; \theta) = (1-\theta)^{x-1}\theta.$$

Thus, the likelihood function is given by

$$L(x_1, x_2, \cdots, x_n; \theta) = P_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n; \theta) = P_{X_1}(x_1; \theta) P_{X_2}(x_2; \theta) \cdots P_{X_n}(x_n; \theta) = (1-\theta)^{\sum_{i=1}^{n} x_i - n} \, \theta^n.$$

Then, the log likelihood function is given by


$$\ln L(x_1, x_2, \cdots, x_n; \theta) = \left(\sum_{i=1}^{n} x_i - n\right) \ln(1-\theta) + n \ln \theta.$$

Thus,
$$\frac{d \ln L(x_1, x_2, \cdots, x_n; \theta)}{d\theta} = \left(\sum_{i=1}^{n} x_i - n\right) \cdot \frac{-1}{1-\theta} + \frac{n}{\theta}.$$
By setting the derivative to zero, we can check that the maximizing value of $\theta$ is given by
$$\hat{\theta}_{ML} = \frac{n}{\sum_{i=1}^{n} x_i}.$$

Thus, the MLE can be written as


$$\hat{\Theta}_{ML} = \frac{n}{\sum_{i=1}^{n} X_i}.$$
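A quick numerical sanity check (an addition to the original solution; the observed sample is hypothetical): since the log-likelihood is concave in $\theta$, a fine grid search over $(0, 1)$ should land on the same value as the closed form $n / \sum_i x_i$:

```python
import math

# Hypothetical observed geometric sample.
x = [3, 1, 2, 5, 1]
n, s = len(x), sum(x)

def log_likelihood(theta):
    # (sum(x_i) - n) * ln(1 - theta) + n * ln(theta)
    return (s - n) * math.log(1 - theta) + n * math.log(theta)

grid = [i / 10000 for i in range(1, 10000)]   # theta values in (0, 1)
theta_grid = max(grid, key=log_likelihood)    # grid maximizer
theta_closed = n / s                          # closed-form MLE: n / sum(x_i)

print(theta_closed, theta_grid)  # both close to 5/12 ≈ 0.4167
```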

Problem 5

Let $X_1, X_2, X_3, \ldots, X_n$ be a random sample from a $Uniform(0, \theta)$ distribution, where $\theta$ is unknown.


Find the maximum likelihood estimator (MLE) of θ based on this random sample.

Solution
If $X_i \sim Uniform(0, \theta)$, then
$$f_X(x) = \begin{cases} \frac{1}{\theta} & 0 \leq x \leq \theta \\ 0 & \text{otherwise} \end{cases}$$

The likelihood function is given by

$$L(x_1, x_2, \cdots, x_n; \theta) = f_{X_1 X_2 \cdots X_n}(x_1, x_2, \cdots, x_n; \theta) = f_{X_1}(x_1; \theta) f_{X_2}(x_2; \theta) \cdots f_{X_n}(x_n; \theta) = \begin{cases} \frac{1}{\theta^n} & 0 \leq x_1, x_2, \cdots, x_n \leq \theta \\ 0 & \text{otherwise} \end{cases}$$

Note that $\frac{1}{\theta^n}$ is a decreasing function of $\theta$. Thus, to maximize it, we need to choose the smallest possible value for $\theta$. For $i = 1, 2, \ldots, n$, we need to have $\theta \geq x_i$. Thus, the smallest possible value for $\theta$ is
$$\hat{\theta}_{ML} = \max(x_1, x_2, \cdots, x_n).$$

Therefore, the MLE can be written as

$$\hat{\Theta}_{ML} = \max(X_1, X_2, \cdots, X_n).$$

Note that this is one of those cases in which $\hat{\theta}_{ML}$ cannot be obtained by setting the derivative of the likelihood function to zero. Here, the maximum is achieved at an endpoint of the acceptable interval.
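The endpoint argument can be seen numerically (an illustrative addition; the sample values are hypothetical): the likelihood is zero for any $\theta$ below $\max(x_i)$ and strictly decreasing above it, so $\hat{\theta}_{ML} = \max(x_i)$ beats any neighboring value:

```python
# Hypothetical observed sample; theta_hat_ML = max(x_1, ..., x_n).
x = [0.9, 2.3, 1.7, 3.1, 0.4]
n = len(x)
mle = max(x)

def likelihood(theta):
    # L(theta) = 1/theta^n when all x_i <= theta, and 0 otherwise
    return theta ** (-n) if theta >= mle else 0.0

# The endpoint beats any nearby value of theta on either side.
assert likelihood(mle) > likelihood(mle - 0.01)  # below max(x): likelihood is 0
assert likelihood(mle) > likelihood(mle + 0.01)  # above max(x): 1/theta^n decreases
print(mle)  # 3.1
```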