Exam ET4386 Estimation and Detection: January 21st, 2016
1. Consider the observed data
\[
x[n] = A r^n + w[n], \qquad n = 0, \ldots, N-1,
\]
where $A$ is an unknown constant parameter, $r > 0$ is known, $n$ is the time index in samples, and $w[n]$ is zero-mean uncorrelated Gaussian noise with variance $\sigma^2$. The goal is to estimate $A$ from the data.
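For intuition, a minimal simulation sketch of this data model (the values of $N$, $A$, $r$, and $\sigma^2$ below are illustrative choices, not given in the exam):

import numpy as np

# Illustrative parameter values (not part of the exam statement).
N = 50          # number of samples
A = 2.0         # the unknown amplitude (known here only to simulate data)
r = 0.95        # known constant, r > 0
sigma2 = 0.1    # noise variance sigma^2

rng = np.random.default_rng(0)
n = np.arange(N)
w = rng.normal(0.0, np.sqrt(sigma2), size=N)   # zero-mean Gaussian noise
x = A * r**n + w                               # observed data x[n]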
(1 p) (a) Give the likelihood function $p(\mathbf{x}; A)$.
(2 p) (b) Derive the Cramér-Rao lower bound when observing $\mathbf{x}$.
(2 p) (c) Does the minimum variance unbiased (MVU) estimator exist? If so, determine the MVU estimator.
(1 p) (d) Give the best linear unbiased estimator (BLUE).
Instead of assuming that $A$ is an unknown constant (deterministic) parameter, we now assume that $A$ is the realization of a scalar Gaussian random variable (i.e., $A \sim \mathcal{N}(0, \sigma_A^2)$) that is independent of $w[n]$.
(3 p) (e) Calculate the (Bayesian) MMSE estimator, which is given by $E[A|\mathbf{x}]$.
(Hint: you can make use of the fact that the signal model is Gaussian and linear: $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \sigma^2 \mathbf{I})$ and $A \sim \mathcal{N}(0, \sigma_A^2)$.)
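For reference, if the model is written as $\mathbf{x} = \mathbf{h}A + \mathbf{w}$ with $\mathbf{h} = [1, r, \ldots, r^{N-1}]^T$ (an assumption consistent with the model above), the standard Gaussian linear-model result takes the form
\[
E[A \mid \mathbf{x}] = \sigma_A^2 \mathbf{h}^T \left( \sigma_A^2 \mathbf{h}\mathbf{h}^T + \sigma^2 \mathbf{I} \right)^{-1} \mathbf{x}
= \frac{\sigma^{-2}\, \mathbf{h}^T \mathbf{x}}{\sigma^{-2}\, \mathbf{h}^T \mathbf{h} + \sigma_A^{-2}}.
\]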
In question (d) we already calculated the BLUE for the case that A was
assumed to be deterministic. In the next question we again assume A is
deterministic. However, we change the assumed distribution of the noise
w[n] such that it is Rayleigh distributed, given by the pdf
\[
p(w[n]) = \begin{cases} \dfrac{w[n]}{\sigma^2} \exp\left( -\dfrac{w[n]^2}{2\sigma^2} \right), & w[n] > 0, \\[1ex] 0, & w[n] < 0. \end{cases}
\]
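A standard fact about this Rayleigh pdf, useful because the BLUE depends only on the first two moments of the noise:
\[
E[w[n]] = \sigma \sqrt{\frac{\pi}{2}}, \qquad \operatorname{var}(w[n]) = \left( 2 - \frac{\pi}{2} \right) \sigma^2.
\]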
(1 p) (f ) Give the best linear unbiased estimator with these new noise characteristics and explain the changes compared to the BLUE estimator
in question 1(d).
2. The Bayesian mean square error (MSE) is given by
\[
\mathrm{Bmse}(\hat{\theta}) = \iint (\theta - \hat{\theta})^2\, p(\mathbf{x}, \theta)\, d\mathbf{x}\, d\theta,
\]
and the classical MSE is given by
\[
\mathrm{mse}(\hat{\theta}) = \int (\theta - \hat{\theta})^2\, p(\mathbf{x}; \theta)\, d\mathbf{x}.
\]
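As an illustration of the difference between the two criteria, a small Monte-Carlo sketch for a toy model (the model $x[n] = \theta + w[n]$ with $w[n] \sim \mathcal{N}(0,1)$ and the sample-mean estimator are illustrative choices, not part of the exam):

import numpy as np

rng = np.random.default_rng(1)
N, trials = 10, 100_000

# Classical MSE: theta is a fixed, deterministic value; the average runs
# over the data only, for that particular theta.
theta_fixed = 2.0
x = theta_fixed + rng.normal(size=(trials, N))
mse_classical = np.mean((x.mean(axis=1) - theta_fixed) ** 2)

# Bayesian MSE: theta is random (here theta ~ N(0, 1)); the average runs
# over both theta and the data, i.e., over the joint pdf p(x, theta).
theta = rng.normal(size=trials)
x = theta[:, None] + rng.normal(size=(trials, N))
bmse = np.mean((x.mean(axis=1) - theta) ** 2)

print(mse_classical, bmse)   # both ~ 1/N for the sample-mean estimator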
(1 p) (a) Explain the difference between the Bayesian mean square error (MSE) and the classical mean square error.
(2 p) (b) Show by derivation that the MMSE estimator is given by the conditional expectation $E[\theta|\mathbf{x}]$.
Assume that $\theta$ is an unknown scalar parameter with prior probability density function (pdf) $p(\theta)$. To estimate $\theta$, we use $N$ noisy observations given by $x[n]$, with $n = 0, 1, \ldots, N-1$. Assume that the observations are conditionally independent and identically distributed (i.i.d.). The conditional pdfs are given by
\[
p(x[n] \mid \theta) = \begin{cases} \theta \exp(-\theta x[n]), & x[n] > 0, \\ 0, & x[n] < 0. \end{cases}
\]
The prior pdf is given by
\[
p(\theta) = \begin{cases} \exp(-\theta), & \theta > 0, \\ 0, & \theta < 0. \end{cases}
\]
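A grid-based numerical sketch of the resulting posterior, which is proportional to the likelihood times this prior (synthetic data; the true $\theta$, grid limits, and grid resolution are illustrative choices):

import numpy as np

rng = np.random.default_rng(2)
N, theta_true = 20, 1.5
# Conditionally i.i.d. exponential data: p(x[n]|theta) = theta*exp(-theta*x[n]).
x = rng.exponential(scale=1.0 / theta_true, size=N)

theta = np.linspace(1e-4, 10.0, 100_001)
dtheta = theta[1] - theta[0]
# log p(theta|x) up to a constant: log-likelihood plus log-prior, -theta.
log_post = N * np.log(theta) - theta * x.sum() - theta
post = np.exp(log_post - log_post.max())
post /= post.sum() * dtheta                     # normalize on the grid

theta_map = theta[np.argmax(post)]              # MAP: posterior mode
theta_mmse = (theta * post).sum() * dtheta      # MMSE: posterior mean
print(theta_map, theta_mmse)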
(2 p) (c) Two different Bayesian estimators are the MMSE and the MAP estimator. Choose the more convenient of the two. Explain your choice and calculate the estimate of $\theta$ using the observations $x[n]$ with $n = 0, 1, \ldots, N-1$.
(1 p) (d) Based on your solution to question (c), indicate: 1) how the final solution depends on the prior information and on the data, AND 2) how the prior information and the data are traded off against each other as a function of $N$.
We now consider a different statistical model, where the posterior pdf is given by
\[
p(\theta \mid x[0]) = \begin{cases} \exp[-(\theta - x[0])], & \theta > x[0], \\ 0, & \theta < x[0]. \end{cases}
\]
Note that this model is based on only one observation x[0], i.e., N = 1.
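A quick numerical sketch of this posterior on a grid (the observation value $x[0]$ and the grid are illustrative choices):

import numpy as np

x0 = 1.5                                        # illustrative observation x[0]
theta = np.linspace(x0, x0 + 50.0, 500_001)     # support: theta > x[0]
dtheta = theta[1] - theta[0]
post = np.exp(-(theta - x0))                    # p(theta|x[0]) on the grid

print((post * dtheta).sum())                    # ~1: the posterior is normalized
print((theta * post * dtheta).sum())            # posterior mean E[theta|x[0]]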
(2 p) (e) Determine the MMSE estimator.
(Hint: Remember integration by parts, i.e., $\int u\, dv = uv - \int v\, du$.)
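For example, with $u = \theta$ and $dv = e^{-\theta}\, d\theta$, integration by parts gives
\[
\int \theta e^{-\theta}\, d\theta = -\theta e^{-\theta} + \int e^{-\theta}\, d\theta = -(\theta + 1) e^{-\theta} + C.
\]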
(1 p) (f ) Determine the MAP estimator.
(1 p) (g) Which of the two estimators calculated in questions (e) and (f) will have the smallest (Bayesian) mean square error?