Solved Problems
Problem 1
Let $X$ be the height of a randomly chosen individual from a population. In order to estimate the mean and variance of $X$, we observe a random sample $X_1, X_2, \cdots, X_7$. Thus, the $X_i$'s are i.i.d. and have the same distribution as $X$. We obtain the following values (in centimeters):
$$166.8, \quad 171.4, \quad 169.1, \quad 178.5, \quad 168.0, \quad 157.9, \quad 170.1$$
Find the values of the sample mean, the sample variance, and the sample standard deviation for the observed sample.
Solution
$$\overline{X} = \frac{X_1 + X_2 + X_3 + X_4 + X_5 + X_6 + X_7}{7} = \frac{166.8 + 171.4 + 169.1 + 178.5 + 168.0 + 157.9 + 170.1}{7} = 168.8$$
$$S^2 = \frac{1}{7-1} \sum_{k=1}^{7} (X_k - 168.8)^2 = 37.7$$
$$S = \sqrt{S^2} = \sqrt{37.7} \approx 6.1$$
In MATLAB, with the observed values stored in a vector x, these quantities are
x = [166.8 171.4 169.1 178.5 168.0 157.9 170.1];
m = mean(x);   % sample mean
v = var(x);    % sample variance (divides by n - 1)
s = std(x);    % sample standard deviation
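The same computation can be sketched in Python; this is only an illustrative translation of the MATLAB lines above, using the standard statistics module (which, like MATLAB's var and std, divides by $n-1$):

```python
import statistics

# Observed heights in centimeters, from the problem statement
x = [166.8, 171.4, 169.1, 178.5, 168.0, 157.9, 170.1]

m = statistics.mean(x)      # sample mean
v = statistics.variance(x)  # sample variance (divides by n - 1)
s = statistics.stdev(x)     # sample standard deviation

print(round(m, 1), round(v, 1), round(s, 1))  # → 168.8 37.7 6.1
```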
Problem 2
a. If $\hat{\Theta}_1$ is an unbiased estimator for $\theta$, and $W$ is a zero-mean random variable, show that
$$\hat{\Theta}_2 = \hat{\Theta}_1 + W$$
is also an unbiased estimator for $\theta$.
b. If $\hat{\Theta}_1$ is an estimator for $\theta$ such that $E[\hat{\Theta}_1] = a\theta + b$, where $a \neq 0$, show that
$$\hat{\Theta}_2 = \frac{\hat{\Theta}_1 - b}{a}$$
is an unbiased estimator for $\theta$.
Solution
a. We have
$$E[\hat{\Theta}_2] = E[\hat{\Theta}_1] + E[W] \quad \textrm{(by linearity of expectation)}$$
$$= \theta + 0 \quad \textrm{(since } \hat{\Theta}_1 \textrm{ is unbiased and } EW = 0\textrm{)}$$
$$= \theta.$$
Thus, $\hat{\Theta}_2$ is an unbiased estimator for $\theta$.
b. We have
$$E[\hat{\Theta}_2] = \frac{E[\hat{\Theta}_1] - b}{a} \quad \textrm{(by linearity of expectation)}$$
$$= \frac{a\theta + b - b}{a}$$
$$= \theta.$$
Thus, $\hat{\Theta}_2$ is an unbiased estimator for $\theta$.
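Both transformations can be verified numerically with a small Monte Carlo sketch; the values of $\theta$, $a$, $b$, and the noise distributions below are assumptions made purely for the illustration:

```python
import random

random.seed(0)
theta = 4.0          # true parameter (assumed for the illustration)
a, b = 2.0, 3.0      # constants for part (b) (assumed values)
n = 200_000

# Part (a): Theta1 is unbiased (theta plus zero-mean noise), W is zero-mean.
mean_a = sum((theta + random.gauss(0, 1)) + random.gauss(0, 1)
             for _ in range(n)) / n

# Part (b): E[Theta1] = a*theta + b, so Theta2 = (Theta1 - b) / a.
mean_b = sum(((a * theta + b + random.gauss(0, 1)) - b) / a
             for _ in range(n)) / n

# Both sample means of the corrected estimators should be close to theta.
print(round(mean_a, 2), round(mean_b, 2))
```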
Problem 3
Let $X_1, X_2, \cdots, X_n$ be a random sample from a $Uniform(0, \theta)$ distribution, where $\theta$ is unknown, and define
$$\hat{\Theta}_n = \max\{X_1, X_2, \cdots, X_n\}.$$
a. Find the bias of $\hat{\Theta}_n$, $B(\hat{\Theta}_n)$.
b. Find the MSE of $\hat{\Theta}_n$, $MSE(\hat{\Theta}_n)$.
c. Is $\hat{\Theta}_n$ a consistent estimator of $\theta$?
Solution
If $X \sim Uniform(0, \theta)$, then the PDF and CDF of $X$ are given by
$$f_X(x) = \begin{cases} \frac{1}{\theta} & 0 \leq x \leq \theta \\ 0 & \textrm{otherwise} \end{cases}$$
and
$$F_X(x) = \begin{cases} 0 & x < 0 \\ \frac{x}{\theta} & 0 \leq x \leq \theta \\ 1 & x > \theta \end{cases}$$
By Theorem 8.1, the PDF of $\hat{\Theta}_n$ is given by
$$f_{\hat{\Theta}_n}(y) = n f_X(y) \left[F_X(y)\right]^{n-1} = \begin{cases} \frac{n y^{n-1}}{\theta^n} & 0 \leq y \leq \theta \\ 0 & \textrm{otherwise} \end{cases}$$
a. To find the bias of $\hat{\Theta}_n$, we have
$$E[\hat{\Theta}_n] = \int_{0}^{\theta} y \cdot \frac{n y^{n-1}}{\theta^n} \, dy = \frac{n}{n+1} \theta.$$
Thus,
$$B(\hat{\Theta}_n) = E[\hat{\Theta}_n] - \theta = \frac{n}{n+1} \theta - \theta = -\frac{\theta}{n+1}.$$
b. To find $MSE(\hat{\Theta}_n)$, we can write
$$MSE(\hat{\Theta}_n) = \mathrm{Var}(\hat{\Theta}_n) + B(\hat{\Theta}_n)^2 = \mathrm{Var}(\hat{\Theta}_n) + \frac{\theta^2}{(n+1)^2}.$$
Thus, we need to find $\mathrm{Var}(\hat{\Theta}_n)$. We have
$$E[\hat{\Theta}_n^2] = \int_{0}^{\theta} y^2 \cdot \frac{n y^{n-1}}{\theta^n} \, dy = \frac{n}{n+2} \theta^2.$$
Thus,
$$\mathrm{Var}(\hat{\Theta}_n) = E[\hat{\Theta}_n^2] - \left(E[\hat{\Theta}_n]\right)^2 = \frac{n}{(n+2)(n+1)^2} \theta^2.$$
Therefore,
$$MSE(\hat{\Theta}_n) = \frac{n \theta^2}{(n+2)(n+1)^2} + \frac{\theta^2}{(n+1)^2} = \frac{2\theta^2}{(n+2)(n+1)}.$$
c. Note that
$$\lim_{n \to \infty} MSE(\hat{\Theta}_n) = \lim_{n \to \infty} \frac{2\theta^2}{(n+2)(n+1)} = 0.$$
Thus, by Theorem 8.2, $\hat{\Theta}_n$ is a consistent estimator of $\theta$.
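The bias and MSE formulas can also be checked by simulation. The sketch below uses assumed values $\theta = 2$ and $n = 10$ and compares the empirical results against $-\theta/(n+1)$ and $2\theta^2/((n+2)(n+1))$:

```python
import random

random.seed(1)
theta, n, trials = 2.0, 10, 100_000  # assumed values for the illustration

# Simulate Theta_n = max(X_1, ..., X_n) over many independent samples.
ests = [max(random.uniform(0, theta) for _ in range(n))
        for _ in range(trials)]

bias = sum(ests) / trials - theta
mse = sum((e - theta) ** 2 for e in ests) / trials

# Theory: bias = -theta/(n+1) ≈ -0.1818, MSE = 2*theta**2/((n+2)(n+1)) ≈ 0.0606
print(round(bias, 3), round(mse, 3))
```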
Problem 4
Let $X_1, X_2, \cdots, X_n$ be a random sample from a $Geometric(\theta)$ distribution, where $\theta$ is unknown. Find the maximum likelihood estimator (MLE) of $\theta$.
Solution
If $X_i \sim Geometric(\theta)$, then
$$P_{X_i}(x; \theta) = (1-\theta)^{x-1} \theta.$$
The likelihood function is therefore
$$L(x_1, x_2, \cdots, x_n; \theta) = \prod_{i=1}^{n} (1-\theta)^{x_i - 1} \theta = (1-\theta)^{\sum_{i=1}^{n} x_i - n} \, \theta^n,$$
so
$$\ln L(x_1, x_2, \cdots, x_n; \theta) = \left(\sum_{i=1}^{n} x_i - n\right) \ln(1-\theta) + n \ln \theta.$$
Thus,
$$\frac{d}{d\theta} \ln L(x_1, x_2, \cdots, x_n; \theta) = \left(\sum_{i=1}^{n} x_i - n\right) \cdot \frac{-1}{1-\theta} + \frac{n}{\theta}.$$
By setting the derivative to zero, we can check that the maximizing value of $\theta$ is given by
$$\hat{\theta}_{ML} = \frac{n}{\sum_{i=1}^{n} x_i}.$$
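To make the result concrete, the sketch below uses a small made-up sample (an assumption for the illustration only) and compares the closed-form MLE $n / \sum x_i$ against a brute-force grid search over the log-likelihood:

```python
import math

# A small hypothetical Geometric sample (assumed data for the illustration)
xs = [3, 1, 4, 2, 2, 6, 1, 3]
n = len(xs)

theta_ml = n / sum(xs)  # closed-form MLE: n / sum(x_i)

def log_lik(theta):
    # ln L = (sum(x_i) - n) ln(1 - theta) + n ln(theta)
    return (sum(xs) - n) * math.log(1 - theta) + n * math.log(theta)

# A grid search over (0, 1) lands on (essentially) the same maximizer.
theta_grid = max((k / 1000 for k in range(1, 1000)), key=log_lik)

print(round(theta_ml, 3), theta_grid)  # both close to 8/22 ≈ 0.364
```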
Problem 5
Let $X_1, X_2, \cdots, X_n$ be a random sample from a $Uniform(0, \theta)$ distribution, where $\theta$ is unknown. Find the maximum likelihood estimator (MLE) of $\theta$.
Solution
If $X_i \sim Uniform(0, \theta)$, then
$$f_X(x) = \begin{cases} \frac{1}{\theta} & 0 \leq x \leq \theta \\ 0 & \textrm{otherwise} \end{cases}$$
so the likelihood function is
$$L(x_1, x_2, \cdots, x_n; \theta) = \begin{cases} \frac{1}{\theta^n} & 0 \leq x_1, x_2, \cdots, x_n \leq \theta \\ 0 & \textrm{otherwise} \end{cases}$$
Note that $\frac{1}{\theta^n}$ is a decreasing function of $\theta$. Thus, to maximize it, we need to choose the smallest possible value for $\theta$. For $i = 1, 2, \ldots, n$, we need to have $\theta \geq x_i$. Thus, the smallest possible value for $\theta$ is
$$\hat{\theta}_{ML} = \max(x_1, x_2, \cdots, x_n).$$
Therefore, the ML estimator of $\theta$ is
$$\hat{\Theta}_{ML} = \max(X_1, X_2, \cdots, X_n).$$
Note that this is one of those cases where $\hat{\theta}_{ML}$ cannot be obtained by setting the derivative of the likelihood function to zero. Here, the maximum is achieved at an endpoint of the acceptable interval.
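A short numerical sketch (with an assumed $\theta = 5$ and simulated data) illustrates why the sample maximum is the maximizer: any candidate below $\max(x_i)$ has zero likelihood, and any candidate above it only shrinks $1/\theta^n$:

```python
import random

random.seed(2)
theta = 5.0  # true parameter (assumed for the illustration)
xs = [random.uniform(0, theta) for _ in range(50)]

theta_ml = max(xs)  # MLE: the sample maximum

def lik(t):
    # L = 1/t^n if t >= max(x_i), else 0
    return t ** (-len(xs)) if t >= max(xs) else 0.0

# Candidates below max(xs) are impossible; candidates above are worse.
print(lik(theta_ml - 0.1) == 0.0, lik(theta_ml) > lik(theta_ml + 0.1))
```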