Example 5.5.10. Let X be uniformly distributed in [0, 2π] and Y = sin(X). Calculate the
p.d.f. fY of Y .
We use the formula

f_Y(y) = Σ_n f_X(x_n) / |g'(x_n)|.

For each y ∈ (−1, 1), there are two values of x_n in [0, 2π] such that g(x_n) = sin(x_n) = y. For each of them,

|g'(x_n)| = |cos(x_n)| = √(1 − sin²(x_n)) = √(1 − y²),

and

f_X(x_n) = 1/(2π).

Hence,

f_Y(y) = 2 · (1/√(1 − y²)) · (1/(2π)) = 1/(π √(1 − y²)).
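As a quick numerical sanity check (not part of the original solution), the histogram of sin(X) for X uniform on [0, 2π] can be compared with this density; the snippet below is a minimal NumPy sketch, where the seed and sample size are arbitrary choices.

import numpy as np

# Monte Carlo check of f_Y(y) = 1/(pi*sqrt(1 - y^2)) for Y = sin(X), X ~ Uniform[0, 2*pi]
rng = np.random.default_rng(0)
y = np.sin(rng.uniform(0.0, 2 * np.pi, size=1_000_000))

# Compare empirical probabilities of small intervals with the integral of the density,
# which is arcsin(b)/pi - arcsin(a)/pi.
for a, b in [(-0.5, -0.4), (0.0, 0.1), (0.8, 0.9)]:
    empirical = np.mean((y >= a) & (y < b))
    exact = (np.arcsin(b) - np.arcsin(a)) / np.pi
    print(f"P({a} <= Y < {b}): empirical {empirical:.4f}, exact {exact:.4f}")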
Example 5.5.11. Let {X, Y } be independent random variables with X exponentially dis-
tributed with mean 1 and Y uniformly distributed in [0, 1]. Calculate E(max{X, Y }).
Let Z = max{X, Y}. For z ≥ 0,

P(Z ≤ z) = P(X ≤ z, Y ≤ z) = P(X ≤ z)P(Y ≤ z)
         = z(1 − e^{−z}) for z ∈ [0, 1], and 1 − e^{−z} for z ≥ 1.

Hence,

f_Z(z) = 1 − e^{−z} + z e^{−z} for z ∈ [0, 1], and e^{−z} for z ≥ 1.
Accordingly,

E(Z) = ∫_0^∞ z f_Z(z) dz = ∫_0^1 z(1 − e^{−z} + z e^{−z}) dz + ∫_1^∞ z e^{−z} dz.

Now,

∫_0^1 z e^{−z} dz = −∫_0^1 z d(e^{−z}) = −[z e^{−z}]_0^1 + ∫_0^1 e^{−z} dz = −e^{−1} − [e^{−z}]_0^1 = 1 − 2e^{−1},

∫_0^1 z² e^{−z} dz = −∫_0^1 z² d(e^{−z}) = −[z² e^{−z}]_0^1 + 2∫_0^1 z e^{−z} dz = −e^{−1} + 2(1 − 2e^{−1}) = 2 − 5e^{−1},

∫_1^∞ z e^{−z} dz = 1 − ∫_0^1 z e^{−z} dz = 2e^{−1}.

Putting the pieces together,

E(Z) = 1/2 − (1 − 2e^{−1}) + (2 − 5e^{−1}) + 2e^{−1} = 3/2 − e^{−1} ≈ 1.13.
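A minimal Monte Carlo sketch (not from the original text; seed and sample size are arbitrary) that checks the closed-form value:

import numpy as np

# Check E[max{X, Y}] = 3/2 - 1/e for X ~ Exp(1) and Y ~ Uniform[0, 1], independent.
rng = np.random.default_rng(1)
x = rng.exponential(1.0, size=2_000_000)
y = rng.uniform(0.0, 1.0, size=2_000_000)
print("simulated  :", np.maximum(x, y).mean())
print("closed form:", 1.5 - np.exp(-1.0))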
Example 5.5.12. Let {Xn, n ≥ 1} be i.i.d. with E(Xn) = µ and var(Xn) = σ². Use Chebyshev's inequality to bound

α := P(|(X1 + ··· + Xn)/n − µ| ≥ ε).

Chebyshev's inequality gives

α ≤ (1/ε²) var((X1 + ··· + Xn)/n) = (1/ε²) · n var(X1)/n² = σ²/(n ε²).

This calculation shows that the sample mean gets closer and closer to the mean: the probability that it differs from µ by more than ε vanishes as n → ∞.
Example 5.5.13. Let X =D P(λ). You pick X white balls. You color the balls independently, each red with probability p and blue with probability 1 − p. Let Y be the number of red balls and Z the number of blue balls. Show that Y and Z are independent and that Y =D P(λp) and Z =D P(λ(1 − p)).

We find

P(Y = m, Z = n) = P(X = m + n) · C(m + n, m) p^m (1 − p)^n
                = (λ^{m+n}/(m + n)!) e^{−λ} · ((m + n)!/(m! n!)) p^m (1 − p)^n
                = [((λp)^m/m!) e^{−λp}] × [((λ(1 − p))^n/n!) e^{−λ(1−p)}],

which is the product of a P(λp) probability for Y and a P(λ(1 − p)) probability for Z, proving the claim.
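A small simulation sketch of the coloring argument (my own illustration, not part of the source; λ, p, the seed, and the trial count are arbitrary):

import numpy as np

# X ~ Poisson(lam); each ball is colored red with probability p.
# Y (red) and Z (blue) should be independent Poisson(lam*p) and Poisson(lam*(1-p)).
rng = np.random.default_rng(2)
lam, p, trials = 3.0, 0.3, 500_000
x = rng.poisson(lam, size=trials)
y = rng.binomial(x, p)          # red balls
z = x - y                       # blue balls
print("E[Y] =", y.mean(), "  lam*p =", lam * p)
print("E[Z] =", z.mean(), "  lam*(1-p) =", lam * (1 - p))
print("cov(Y, Z) =", np.cov(y, z)[0, 1], "  (should be near 0)")
print("P(Y=2, Z=1) =", np.mean((y == 2) & (z == 1)),
      " product of marginals =", np.mean(y == 2) * np.mean(z == 1))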
Example 6.7.1. Let (X, Y) be a point picked uniformly in the quarter circle {(x, y) | x ≥ 0, y ≥ 0, x² + y² ≤ 1}. Find E[X | Y].

Given Y = y, X is uniformly distributed in [0, √(1 − y²)]. Hence

E[X | Y] = (1/2) √(1 − Y²).
b. Find E[T].

c. Find Var[T].
Example 6.7.3. The random variables Xi are i.i.d. and such that E[Xi] = µ and var(Xi) = σ². The random variable N is independent of the Xi's and takes nonnegative integer values. Let S = X1 + X2 + . . . + XN.

a. Find E(S).

b. Find var(S).

Conditioning on N, E(S) = E(E[S | N]) = E(N µ) = µ E(N) and E(S²) = E(E[S² | N]) = E(N σ² + N² µ²). Then,

var(S) = E(S²) − (E(S))² = E(N)σ² + E(N²)µ² − µ²(E(N))² = E(N)σ² + var(N)µ².
Example 6.7.4. Let X, Y be independent and uniform in [0, 1]. Calculate E[X² | X + Y].

If z > 1, then given X + Y = z, X is uniformly distributed in [z − 1, 1], so

E[X² | X + Y = z] = (1/(2 − z)) ∫_{z−1}^{1} x² dx = (1/(2 − z)) [x³/3]_{z−1}^{1} = (1 − (z − 1)³)/(3(2 − z)).

Similarly, if z < 1, then X is uniformly distributed in [0, z] and

E[X² | X + Y = z] = (1/z) ∫_0^z x² dx = (1/z) [x³/3]_0^z = z²/3.
Example 6.7.5. Let (X, Y ) be the coordinates of a point chosen uniformly in [0, 1]2 . Cal-
culate E[X | XY ].
This is an example where we use the straightforward approach, based on the definition. The problem is interesting because it illustrates that approach in a tractable but nontrivial example. Let Z = XY.

E[X | Z = z] = ∫_0^1 x f_{X|Z}[x | z] dx.

Now,

f_{X|Z}[x | z] = f_{X,Z}(x, z) / f_Z(z).

Also, given X = x, Z = XY is uniformly distributed in [0, x], so that f_{Z|X}[z | x] = 1/x for z ∈ [0, x]. Hence,

f_{X,Z}(x, z) = 1/x, if x ∈ [0, 1] and z ∈ [0, x], and 0 otherwise.

Consequently,

f_Z(z) = ∫_0^1 f_{X,Z}(x, z) dx = ∫_z^1 (1/x) dx = −ln(z), 0 ≤ z ≤ 1.

Finally,

f_{X|Z}[x | z] = −1/(x ln(z)), for x ∈ [0, 1] and z ∈ [0, x],

and

E[X | Z = z] = ∫_z^1 x (−1/(x ln(z))) dx = (z − 1)/ln(z),

so that

E[X | XY] = (XY − 1)/ln(XY).
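A minimal numerical sketch (not from the source; the bin width, seed, and sample size are arbitrary) that checks the formula by averaging X over samples with XY in a narrow bin:

import numpy as np

# Check E[X | XY = z] = (z - 1)/ln(z) for X, Y i.i.d. Uniform[0, 1].
rng = np.random.default_rng(3)
x = rng.uniform(size=5_000_000)
yv = rng.uniform(size=5_000_000)
z = x * yv
for z0 in (0.1, 0.3, 0.6):
    mask = np.abs(z - z0) < 0.005          # narrow bin around z0
    print(f"z = {z0}: empirical {x[mask].mean():.4f}, "
          f"formula {(z0 - 1) / np.log(z0):.4f}")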
Example 6.7.6. Let X, Y be independent and exponentially distributed with mean 1. Find
E[cos(X + Y ) | X].
We have

E[cos(X + Y) | X = x] = ∫_0^∞ cos(x + y) e^{−y} dy = Re{ ∫_0^∞ e^{i(x+y)−y} dy }
                      = Re{ e^{ix}/(1 − i) } = (cos(x) − sin(x))/2.
E[X1 | Y].

Intuition suggests, and it is not too hard to justify, that if Y = y, then X1 = y with probability 1/n, and with probability (n − 1)/n the random variable X1 is uniformly distributed in [0, y]. Hence,

E[X1 | Y] = Y/n + ((n − 1)/n)(Y/2) = ((n + 1)/(2n)) Y.
Example 6.7.8. Let X, Y, Z be independent and uniform in [0, 1]. Calculate E[(X + 2Y + Z)² | X].
Example 6.7.9. Let X, Y, Z be three random variables defined on the same probability space. Writing X − X1 = (X − X2) + (X2 − X1) and using the orthogonality of the two terms,

E((X − X1)²) = E((X − X2 + X2 − X1)²) = E((X − X2)²) + E((X2 − X1)²) ≥ E((X − X2)²).
Example 6.7.10. Pick the point (X, Y ) uniformly in the triangle {(x, y) | 0 ≤ x ≤
1 and 0 ≤ y ≤ x}.
a. Calculate E[X | Y ].
Given Y = y, X is uniformly distributed in [y, 1], so

E[X | Y] = (1 + Y)/2.

b. Similarly, given X = x, Y is uniformly distributed in [0, x], so

E[Y | X] = X/2.

c. Given X = x, X − Y is uniformly distributed in [0, x], so

E[(X − Y)² | X] = X²/3.
Example 6.7.11. Assume that the two random variables X and Y are such that E[X | Y] = Y and E[Y | X] = X. Show that P(X = Y) = 1.

We show that E((X − Y)²) = 0. This will prove that X − Y = 0 with probability one. Note that

E((X − Y)²) = E(X²) − 2E(XY) + E(Y²).

Now,

E(XY) = E(E[XY | X]) = E(X E[Y | X]) = E(X²).

Similarly, one finds that E(XY) = E(Y²). Putting together the pieces, we get E((X − Y)²) = 0.
Example 6.7.12. Let X, Y be independent random variables uniformly distributed in [0, 1]. Calculate E[X | X < Y].

Drawing a unit square, we see that given {X < Y}, the pair (X, Y) is uniformly distributed in the triangle left of the diagonal from the upper left corner to the bottom right corner of that square. Accordingly, the p.d.f. f(x) of X is given by f(x) = 2(1 − x). Hence,

E[X | X < Y] = ∫_0^1 x · 2(1 − x) dx = 1/3.
7.4 Summary
If X, Y are jointly Gaussian, then E[X | Y] = E(X) + cov(X, Y) var(Y)^{−1} (Y − E(Y)).
Example 7.5.1. The noise voltage X in an electric circuit can be modelled as a Gaussian random variable with mean zero and variance 10^{−8}.

a. What is the probability that it exceeds 10^{−4}? What is the probability that it exceeds 2 × 10^{−4}? What is the probability that its value is between −2 × 10^{−4} and 10^{−4}?

b. Given that the noise value is positive, what is the probability that it exceeds 10^{−4}?

Let Z = 10^4 X; then Z =D N(0, 1) and we can reformulate the questions in terms of Z.

a. Using (7.1) we find P(Z > 1) = 0.159 and P(Z > 2) = 0.023. Also,

P(−2 < Z < 1) = P(Z < 1) − P(Z ≤ −2) = 1 − P(Z > 1) − P(Z > 2) = 1 − 0.159 − 0.023 = 0.818.

b. We have

P[Z > 1 | Z > 0] = P(Z > 1)/P(Z > 0) = 2 P(Z > 1) = 0.318.

Finally, since E(|Z|) = √(2/π),

E(|X|) = 10^{−4} √(2/π).
Example 7.5.2. Let {Un} be i.i.d. N(0, 1) random variables. A low-pass filter takes the sequence U and produces the output sequence X with Xn = (Un + Un+1)/2.

a. Find the joint pdf of Xn and Xn−1 and find the joint pdf of Xn and Xn+m for m > 1.

b. Find the joint pdf of Yn and Yn−1 and find the joint pdf of Yn and Yn+m for m > 1.

We start with some preliminary observations. First, since the Ui are independent, they are jointly Gaussian. Second, Xn and Yn are linear combinations of the Ui and thus are also jointly Gaussian. Third, the jpdf of a jointly Gaussian random vector Z with mean m and covariance matrix C = E[(Z − m)(Z − m)^T] is

f_Z(z) = (1/√((2π)^n det(C))) exp[−(1/2)(z − m)^T C^{−1}(z − m)].

Finally, we need some basic facts from algebra. If C = [a b; c d], then det(C) = ad − bc and

C^{−1} = (1/det(C)) [d −b; −c a].

We are now ready to answer the questions.

a. Express X in the form X = AU:

[Xn; Xn−1] = [0 1/2 1/2; 1/2 1/2 0] [Un−1; Un; Un+1].

The covariance matrix of (Xn, Xn−1) is therefore C = AA^T = [1/2 1/4; 1/4 1/2]. Then det(C) = 1/4 − 1/16 = 3/16 and

C^{−1} = (16/3) [1/2 −1/4; −1/4 1/2],

so that

f_{Xn,Xn−1}(xn, xn−1) = (2/(π√3)) exp[−(4/3)(xn² − xn xn−1 + xn−1²)].

For m > 1, Xn and Xn+m involve disjoint subsets of the Ui and are therefore independent, so that C = [1/2 0; 0 1/2]. Then det(C) = 1/4,

C^{−1} = [2 0; 0 2],

and

f_{Xn,Xn+m}(xn, xn+m) = (1/π) exp[−(xn² + xn+m²)].
Example 7.5.3. Let X, Y, Z, V be independent N(0, 1) random variables. Calculate E[X + 2Y | 3X + Z, 4Y + 2V].

We have

E[X + 2Y | 3X + Z, 4Y + 2V] = a Σ^{−1} [3X + Z; 4Y + 2V],

where

a = E((X + 2Y)[3X + Z, 4Y + 2V]) = [3, 8]

and

Σ = [var(3X + Z), E((3X + Z)(4Y + 2V)); E((3X + Z)(4Y + 2V)), var(4Y + 2V)] = [10 0; 0 20].

Hence,

E[X + 2Y | 3X + Z, 4Y + 2V] = [3, 8] [10^{−1} 0; 0 20^{−1}] [3X + Z; 4Y + 2V] = (3/10)(3X + Z) + (4/10)(4Y + 2V).
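A minimal sketch (my own check, not in the source; seed and sample size are arbitrary) verifying the coefficients by least-squares regression on simulated data, which recovers the linear MMSE estimator for zero-mean jointly Gaussian variables:

import numpy as np

# Check E[X + 2Y | 3X + Z, 4Y + 2V] = 0.3*(3X + Z) + 0.4*(4Y + 2V) for i.i.d. N(0,1) variables.
rng = np.random.default_rng(4)
X, Y, Z, V = rng.standard_normal((4, 200_000))
target = X + 2 * Y
obs = np.column_stack([3 * X + Z, 4 * Y + 2 * V])
coef, *_ = np.linalg.lstsq(obs, target, rcond=None)
print("regression coefficients:", coef)      # close to [0.3, 0.4]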
Example 7.5.4. Assume that {X, Yn, n ≥ 1} are mutually independent random variables with X = N(0, 1) and Yn = N(0, σ²), and let X̂n be the MMSE estimate of X from the observations X + Y1, . . . , X + Yn. How many observations are needed for P(|X − X̂n| > 0.1) to be at most 5%?

Thus we know that X − X̂n = N(0, σ²/(n + σ²)). Accordingly,

P(|X − X̂n| > 0.1) = P(|N(0, σ²/(n + σ²))| > 0.1) = P(|N(0, 1)| > 0.1/αn),

where αn = √(σ²/(n + σ²)). For this probability to be at most 5% we need

0.1/αn = 2, i.e., α²n = σ²/(n + σ²) = (0.1/2)² = 1/400,

so that

n = 399σ².

The result is intuitively pleasing: if the observations are more noisy (σ² large), we need more observations to estimate X with the same accuracy.
Example 7.5.5. Assume that X, Y are i.i.d. N(0, 1). Calculate E[(X + Y)^4 | X − Y].

Note that X + Y and X − Y are independent because they are jointly Gaussian and uncorrelated. Hence, since X + Y = N(0, 2),

E[(X + Y)^4 | X − Y] = E((X + Y)^4) = 3 (var(X + Y))² = 12.

Example 7.5.6. Let X, Y be i.i.d. N(0, 1). Show that X² + Y² =D Exp(1/2). That is, the sum of the squares of two i.i.d. zero-mean Gaussian random variables with unit variance is exponentially distributed with parameter 1/2.
Let W = X² + Y². Then

E(e^{iuW}) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{iu(x² + y²)} (1/(2π)) e^{−(x² + y²)/2} dx dy
           = ∫_0^{2π} ∫_0^{∞} e^{iur²} (1/(2π)) e^{−r²/2} r dr dθ
           = ∫_0^{∞} e^{iur²} e^{−r²/2} r dr
           = ∫_0^{∞} (1/(2iu − 1)) d[e^{iur² − r²/2}] = 1/(1 − 2iu),

which is the characteristic function of an Exp(1/2) random variable.
Example 7.5.7. Let {Xn, n ≥ 0} be i.i.d. Gaussian N(0, 1) random variables. Assume that Yn+1 = aYn + Xn for n ≥ 0, where |a| < 1 and Y0 is a Gaussian random variable with mean zero and variance σ², independent of the Xn's. Calculate var(Yn) and its limit as n → ∞.

a. We see that αn := var(Yn) satisfies

αn+1 = a²αn + 1 and α0 = σ².

Solving the recursion,

var(Yn) = αn = a^{2n} σ² + (1 − a^{2n})/(1 − a²), for n ≥ 0,

so that

var(Yn) → γ² := 1/(1 − a²) as n → ∞.
Example 7.5.8. Let the Xn's be i.i.d. N(0, 1).

a. Calculate

E[X1 + X2 + X3 | X1 + X2, X2 + X3, X3 + X4].

b. Calculate

E[X1 + X2 + X3 | X1 + X2 + X3 + X4 + X5].

a. The conditional expectation has the form Y = a(X1 + X2) + b(X2 + X3) + c(X3 + X4), where the coefficients a, b, c must be such that the estimation error is orthogonal to the observations:

E(((X1 + X2 + X3) − Y)(X1 + X2)) = E(((X1 + X2 + X3) − Y)(X2 + X3)) = E(((X1 + X2 + X3) − Y)(X3 + X4)) = 0.

That is,

2 − a − (a + b) = 2 − (a + b) − (b + c) = 1 − (b + c) − c = 0,

which gives a = 3/4, b = 1/2, c = 1/4, so that

E[X1 + X2 + X3 | X1 + X2, X2 + X3, X3 + X4] = (3/4)(X1 + X2) + (1/2)(X2 + X3) + (1/4)(X3 + X4).

b. Let

Yk = E[Xk | X1 + X2 + X3 + X4 + X5].

By symmetry the Yk are identical, and

Y1 + Y2 + Y3 + Y4 + Y5 = E[X1 + X2 + X3 + X4 + X5 | X1 + X2 + X3 + X4 + X5] = X1 + X2 + X3 + X4 + X5,

so each Yk = (1/5)(X1 + ··· + X5). Hence,

E[X1 + X2 + X3 | X1 + X2 + X3 + X4 + X5] = Y1 + Y2 + Y3 = (3/5)(X1 + X2 + X3 + X4 + X5).
Example 7.5.9. Let the Xn's be as in Example 7.5.7. Find the jpdf of (X1 + 2X2 + 3X3, 2X1 + 3X2 + X3, 3X1 + X2 + 2X3).

These random variables are jointly Gaussian, zero mean, and with covariance matrix Σ given by

Σ = [14 11 11; 11 14 11; 11 11 14].

Indeed, Σ is the matrix of covariances. For instance, its entry (2, 3) is given by

E((2X1 + 3X2 + X3)(3X1 + X2 + 2X3)) = 6 + 3 + 2 = 11.

Hence,

f_X(x) = (1/((2π)^{3/2} |Σ|^{1/2})) exp{−(1/2) x^T Σ^{−1} x}.
Example 7.5.10. Calculate E[X1 + 3X2 | Y] where

Y = [1 2 3; 3 2 1] [X1; X2; X3].

By now, this should be familiar. The solution is of the form Ŷ := a(X1 + 2X2 + 3X3) + b(3X1 + 2X2 + X3), where a and b are such that the estimation error is orthogonal to the observations:

E((X1 + 3X2 − Ŷ)(X1 + 2X2 + 3X3)) = 0 and E((X1 + 3X2 − Ŷ)(3X1 + 2X2 + X3)) = 0,

i.e., 14a + 10b = 7 and 10a + 14b = 9, which gives a = 1/12 and b = 7/12. Hence,

E[X1 + 3X2 | Y] = (1/12)(X1 + 2X2 + 3X3) + (7/12)(3X1 + 2X2 + X3).
Example 7.5.11. Find the jpdf of (2X1 + X2, X1 + 3X2) where X1 and X2 are independent N(0, 1) random variables.

These random variables are jointly Gaussian, zero-mean, with covariance Σ given by

Σ = [5 5; 5 10].

Hence,

f_X(x) = (1/(2π |Σ|^{1/2})) exp{−(1/2) x^T Σ^{−1} x} = (1/(10π)) exp{−(1/2) x^T Σ^{−1} x},

where

Σ^{−1} = (1/25) [10 −5; −5 5].
Example 7.5.12. The random variable X is N (µ, 1). Find an approximate value of µ so
that
Example 7.5.13. Let X be a N(0, 1) random variable. Calculate the mean and the variance of cos(X) and of sin(X).

We use the facts

E(e^{iuX}) = e^{−u²/2} and e^{iθ} = cos(θ) + i sin(θ).

Therefore,

E(cos(uX) + i sin(uX)) = e^{−u²/2},

so that

E(cos(uX)) = e^{−u²/2} and E(sin(uX)) = 0.

In particular, E(cos(X)) = e^{−1/2} and E(sin(X)) = 0. Also,

E(cos²(X)) = E((1/2)(1 + cos(2X))) = 1/2 + (1/2) E(cos(2X)),

where

E(cos(2X)) = e^{−2²/2} = e^{−2},

so that

var(cos(X)) = E(cos²(X)) − (E(cos(X)))² = 1/2 + (1/2) e^{−2} − (e^{−1/2})² = 1/2 + (1/2) e^{−2} − e^{−1}.

Similarly, we find

E(sin²(X)) = E(1 − cos²(X)) = 1/2 − (1/2) e^{−2} = var(sin(X)).
Let X, Y, Z be independent N(0, 1) random variables.

a. Calculate

E[3X + 5Y | 2X − Y, X + Z].

Writing V = [2X − Y; X + Z], we have

E[3X + 5Y | V] = a Σ_V^{−1} V,

where

a = E((3X + 5Y) V^T) = [1, 3]

and

Σ_V = [5 2; 2 2].

Hence,

E[3X + 5Y | V] = [1, 3] [5 2; 2 2]^{−1} V = [1, 3] (1/6) [2 −2; −2 5] V
               = (1/6) [−4, 13] V = −(2/3)(2X − Y) + (13/6)(X + Z).

b. Repeat the calculation assuming instead that X, Y, Z are independent N(1, 1) random variables. Now,

E[3X + 5Y | V] = E(3X + 5Y) + a Σ_V^{−1}(V − E(V)) = 8 + (1/6)[−4, 13](V − [1, 2]^T)
               = 26/6 − (2/3)(2X − Y) + (13/6)(X + Z).
Example 7.5.16. Let (X, Y) be jointly Gaussian. Show that X − E[X | Y] is Gaussian and calculate its mean and variance.

We know that

E[X | Y] = E(X) + (cov(X, Y)/var(Y)) (Y − E(Y)).

Consequently,

X − E[X | Y] = X − E(X) − (cov(X, Y)/var(Y)) (Y − E(Y)),

which is a linear combination of the jointly Gaussian pair (X, Y) and is certainly Gaussian. This difference is zero-mean. Its variance is

var(X) − 2 (cov(X, Y)/var(Y)) cov(X, Y) + (cov(X, Y)/var(Y))² var(Y) = var(X) − cov(X, Y)²/var(Y).
The idea is to specify the likelihood of various outcomes (elements of Ω). If one can
specify the probability of individual outcomes (e.g., when Ω is countable), then one can
choose F = 2Ω , so that all sets of outcomes are events. However, this is generally not
possible as the example of the uniform distribution on [0, 1] shows. (See Appendix C.)
In many problems, we use a method for counting the number of ordered groupings of identical objects. This method is called the stars and bars method. Suppose we are given N identical objects that we call stars. Any ordered grouping of these stars can be obtained by separating them by bars. For example, || ∗ ∗ ∗ |∗ separates four stars into four groups of sizes 0, 0, 3, and 1.

Suppose we want to separate N stars into M ordered groups. The number of such groupings is the number of ways of placing the N identical stars and M − 1 identical bars into N + M − 1 spaces, namely C(N + M − 1, M − 1).

Creating compound objects of stars and bars is useful when there are bounds on the sizes of the groups.
Example 2.7.1. Describe the probability space {Ω, F, P } that corresponds to the random
experiment “picking five cards without replacement from a perfectly shuffled 52-card deck.”
1. One can choose Ω to be all the permutations of A := {1, 2, . . . , 52}. The interpretation
of ω ∈ Ω is then the shuffled deck. Each permutation is equally likely, so that pω = 1/(52!)
for ω ∈ Ω. When we pick the five cards, these cards are (ω1 , ω2 , . . . , ω5 ), the top 5 cards of
the deck.
2. One can also choose Ω to be all the subsets of A with five elements. In this case, each subset is equally likely and, since there are N := C(52, 5) such subsets, one defines pω = 1/N for ω ∈ Ω.

3. One can also choose Ω to be the set of ordered lists of five distinct elements of A, i.e., Ω = {(ω1, . . . , ω5) | ωi ∈ A, ωi distinct, i ∈ {1, 2, . . . , 5}}. In this case, the outcome specifies the order in which we pick the cards. Since there are M := 52!/(47!) such ordered lists of five cards without replacement, we define pω = 1/M for ω ∈ Ω.
As this example shows, there are multiple ways of describing a random experiment.
What matters is that Ω is large enough to specify completely the outcome of the experiment.
Example 2.7.2. Pick three balls without replacement from an urn with fifteen balls that
are identical except that ten are red and five are blue. Specify the probability space.
One possibility is to specify the color of the three balls in the order they are picked.
Then

Ω = {R, B}³, F = 2^Ω, P({RRR}) = (10/15)(9/14)(8/13), . . . , P({BBB}) = (5/15)(4/14)(3/13).
Example 2.7.3. You flip a fair coin until you get three consecutive ‘heads’. Specify the
probability space.
One possible choice is Ω = {H, T}∗, the set of finite sequences of H and T. That is,

{H, T}∗ = ∪_{n=1}^{∞} {H, T}^n.

This is another example of a probability space that is bigger than necessary, but easier to describe.
Example 2.7.4. Let Ω = {0, 1, 2, . . .}. Let F be the collection of subsets of Ω that are either finite or have a finite complement. Is F a σ-field?

No, F is not closed under countable set operations. For instance, {2n} ∈ F for each n ≥ 0, but

A := ∪_{n=0}^{∞} {2n}

is the set of even numbers, which is neither finite nor has a finite complement, so A ∉ F.
Example 2.7.5. In a class with 24 students, what is the probability that no two students have the same birthday?

With n = 24 students and N = 365 days,

α := (N/N) × ((N − 1)/N) × ((N − 2)/N) × ··· × ((N − n + 1)/N).

With n = 24 and N = 365 we find that α ≈ 0.46.
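A minimal sketch (not part of the text; seed and number of trials are arbitrary) computing the product exactly and checking it with a quick simulation:

import numpy as np

# Probability that 24 people all have distinct birthdays (365 equally likely days).
n, N = 24, 365
exact = np.prod((N - np.arange(n)) / N)
rng = np.random.default_rng(5)
sims = rng.integers(0, N, size=(100_000, n))
mc = np.mean([len(set(row)) == n for row in sims])
print("exact:", exact)         # about 0.46
print("simulated:", mc)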
Example 2.7.6. Let A, B, C be three events. Assume that P (A) = 0.6, P (B) = 0.6, P (C) =
P (A ∩ B ∩ C).
so that
P (A ∩ B ∩ C) = 0.2.
Example 2.7.7. Let Ω = {1, 2, 3, 4} and let F = 2^Ω be the collection of all the subsets of Ω. Find a collection of subsets A of Ω and two probability measures P1 and P2 on F such that (ii) the σ-field generated by A is F (this means that F is the smallest σ-field of subsets of Ω that contains A).

Note that P1 and P2 are not the same, thus satisfying (iii), yet

P1({1, 2}) = P1({1}) + P1({2}) = 1/8 + 1/8 = 1/4,

P2({1, 2}) = P2({1}) + P2({2}) = 1/12 + 2/12 = 1/4.

To check (ii), we only need to check that for every k ∈ Ω, {k} can be formed by set operations on sets in A ∪ {∅} ∪ {Ω}. Then any other set in F can be formed by set operations on the singletons {k}.
Example 2.7.8. Choose a number randomly between 1 and 999999 inclusive, all choices
being equally likely. What is the probability that the digits sum up to 23? For example, the
number 7646 is between 1 and 999999 and its digits sum up to 23 (7+6+4+6=23).
Numbers between 1 and 999999 inclusive have 6 digits (allowing leading zeros), each with a value in {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}. We are interested in counting the solutions of x1 + x2 + x3 + x4 + x5 + x6 = 23 with 0 ≤ xi ≤ 9.

First consider all nonnegative xi, where each digit can range from 0 to 23; by stars and bars, the number of ways to distribute 23 amongst the xi's is C(28, 5).

But we need to restrict the digits to xi < 10, so we must subtract the arrangements in which some digit is 10 or more. The number of ways to arrange 23 amongst the xi when some fixed xk ≥ 10 is the same as the number of ways to arrange yi so that Σ_{i=1}^{6} yi = 23 − 10, which is C(18, 5) (substitute yk = xk − 10 and yi = xi otherwise). There are 6 possible choices of the digit xk ≥ 10, so there are a total of 6 C(18, 5) arrangements with some digit greater than or equal to 10.

However, the above counts some arrangements multiple times. For instance, the case x1, x2 ≥ 10 is counted both when x1 ≥ 10 and when x2 ≥ 10, so we must add these cases back in. If two digits are greater than or equal to 10, say xj ≥ 10 and xk ≥ 10, the number of ways to distribute 23 amongst the xi is equivalent to the number of ways to distribute yi with Σ_{i=1}^{6} yi = 23 − 10 − 10 = 3. There are C(8, 5) ways to distribute these yi and there are C(6, 2) ways to choose the two digits.

Since the xi sum to 23, at most two of them can be 10 or more, so the inclusion–exclusion stops here. The probability that a randomly chosen number has digits that sum up to 23 is therefore

[C(28, 5) − 6 C(18, 5) + C(6, 2) C(8, 5)] / 999999.
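A minimal sketch (my own check, not in the source) confirming the inclusion–exclusion count against a brute-force count:

from math import comb

# Count integers 1..999999 whose digits sum to 23, two ways.
brute = sum(1 for n in range(1, 1_000_000) if sum(int(d) for d in str(n)) == 23)
inclusion_exclusion = comb(28, 5) - 6 * comb(18, 5) + comb(6, 2) * comb(8, 5)
print(brute, inclusion_exclusion)            # both 47712
print("probability:", inclusion_exclusion / 999_999)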
Example 2.7.9. Let A1, A2, . . . , An, n ≥ 2 be events. Prove that

P(∪_{i=1}^{n} Ai) = Σ_i P(Ai) − Σ_{i<j} P(Ai ∩ Aj) + Σ_{i<j<k} P(Ai ∩ Aj ∩ Ak) − ··· + (−1)^{n+1} P(A1 ∩ A2 ∩ . . . ∩ An).

First consider the base case n = 2: P(A1 ∪ A2) = P(A1) + P(A2) − P(A1 ∩ A2). Assume the result holds for n; we prove it for n + 1. Write

P(∪_{i=1}^{n+1} Ai) = P(∪_{i=1}^{n} Ai) + P(An+1) − P((∪_{i=1}^{n} Ai) ∩ An+1),

and expand the first and last terms using the induction hypothesis.
Example 2.7.10. Let {An, n ≥ 1} be events such that Σ_n P(An) < ∞. Show that the probability that infinitely many of those events occur is zero. This result is known as the Borel–Cantelli Lemma.

To prove this result we must write the event "infinitely many of the events An occur" as ∩_{n≥1} ∪_{m≥n} Am.
1.4 Functions of a random variable
Recall that a random variable X on a probability space (Ω, F, P ) is a function mapping Ω to the
real line R , satisfying the condition {ω : X(ω) ≤ a} ∈ F for all a ∈ R. Suppose g is a function
mapping R to R that is not too bizarre. Specifically, suppose for any constant c that {x : g(x) ≤ c}
is a Borel subset of R. Let Y (ω) = g(X(ω)). Then Y maps Ω to R and Y is a random variable.
See Figure 1.6. We write Y = g(X).
Figure 1.6: A function of a random variable: ω is mapped to X(ω) by X and then to g(X(ω)) by g.
Often we’d like to compute the distribution of Y from knowledge of g and the distribution of
X. In case X is a continuous random variable with known distribution, the following three step
procedure works well:
(1) Examine the ranges of possible values of X and Y . Sketch the function g.
(2) Find the CDF of Y , using FY (c) = P {Y ≤ c} = P {g(X) ≤ c}. The idea is to express the
event {g(X) ≤ c} as {X ∈ A} for some set A depending on c.
(3) If FY has a piecewise continuous derivative, and if the pdf fY is desired, differentiate FY.
If instead X is a discrete random variable then step 1 should be followed. After that the pmf of Y
can be found from the pmf of X using
pY(y) = P{g(X) = y} = Σ_{x: g(x) = y} pX(x).
Example 1.4 Suppose X is a N (µ = 2, σ 2 = 3) random variable (see Section 1.6 for the definition)
and Y = X 2 . Let us describe the density of Y . Note that Y = g(X) where g(x) = x2 . The support
of the distribution of X is the whole real line, and the range of g over this support is R+ . Next we
find the CDF, FY . Since P {Y ≥ 0} = 1, FY (c) = 0 for c < 0. For c ≥ 0,
FY(c) = P{X² ≤ c} = P{−√c ≤ X ≤ √c}
      = P{(−√c − 2)/√3 ≤ (X − 2)/√3 ≤ (√c − 2)/√3}
      = Φ((√c − 2)/√3) − Φ((−√c − 2)/√3).

Differentiate with respect to c, using the chain rule and the fact Φ'(s) = (1/√(2π)) exp(−s²/2), to obtain

fY(c) = (1/√(24πc)) { exp(−[(√c − 2)/√6]²) + exp(−[(−√c − 2)/√6]²) } if c ≥ 0, and 0 if c < 0.   (1.7)
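A minimal numerical sketch (not from the notes; seed, sample size, and the chosen intervals are arbitrary) comparing (1.7) with simulated values of Y = X²:

import numpy as np

# Monte Carlo check of (1.7): density of Y = X^2 with X ~ N(mu=2, sigma^2=3).
def f_Y(c):
    c = np.asarray(c, dtype=float)
    return (np.exp(-((np.sqrt(c) - 2) / np.sqrt(6)) ** 2)
            + np.exp(-((-np.sqrt(c) - 2) / np.sqrt(6)) ** 2)) / np.sqrt(24 * np.pi * c)

rng = np.random.default_rng(6)
y = (2 + np.sqrt(3) * rng.standard_normal(2_000_000)) ** 2
for a, b in [(0.5, 1.0), (3.0, 4.0), (9.0, 10.0)]:
    grid = np.linspace(a, b, 2001)
    exact = np.trapz(f_Y(grid), grid)            # numerical integral of the density
    print(f"P({a} <= Y < {b}): empirical {np.mean((y >= a) & (y < b)):.4f}, "
          f"from (1.7) {exact:.4f}")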
Example 1.5 Suppose a vehicle is traveling in a straight line at speed a, and that a random
direction is selected, subtending an angle Θ from the direction of travel which is uniformly dis-
tributed over the interval [0, π]. See Figure 1.7. Then the effective speed of the vehicle in the random direction is B = a cos(Θ).

Figure 1.7: A vehicle traveling at speed a, with a random direction Θ relative to the direction of travel.

Figure 1.8: The pdf of the effective speed in a uniformly distributed direction.
Figure 1.9: A horizontal line, a fixed point at unit distance, and a line through the point with random direction.
Example 1.6 Suppose Y = tan(Θ), as illustrated in Figure 1.9, where Θ is uniformly distributed
over the interval (− π2 , π2 ) . Let us find the pdf of Y . The function tan(θ) increases from −∞ to ∞
as θ ranges over the interval (− π2 , π2 ). For any real c,
FY (c) = P {Y ≤ c}
= P {tan(Θ) ≤ c}
= P{Θ ≤ tan⁻¹(c)} = (tan⁻¹(c) + π/2)/π.

Differentiating the CDF with respect to c yields that Y has the Cauchy pdf:

fY(c) = 1/(π(1 + c²)),   −∞ < c < ∞.
Example 1.7 Given an angle θ expressed in radians, let (θ mod 2π) denote the equivalent angle
in the interval [0, 2π]. Thus, (θ mod 2π) is equal to θ + 2πn, where the integer n is such that
0 ≤ θ + 2πn < 2π.
Let Θ be uniformly distributed over [0, 2π], let h be a constant, and let
Θ̃ = (Θ + h mod 2π)
Example 1.8 Let X be an exponentially distributed random variable with parameter λ. Let
Y = bXc, which is the integer part of X, and let R = X − bXc, which is the remainder. We shall
describe the distributions of Y and R.
Proposition 1.10.1 Under the above assumptions, Y is a continuous type random vector and for y in the range of g:

fY(y) = fX(x) / |(∂y/∂x)(x)| = fX(x) |(∂x/∂y)(y)|,   where x = g⁻¹(y).
Example 1.10 Suppose U and V have the joint pdf fUV(u, v) = u + v for 0 ≤ u, v ≤ 1 (and zero elsewhere), and let X = U² and Y = U(1 + V). Let's find the pdf fXY. The vector (U, V) in the u–v plane is transformed into the vector (X, Y) in the x–y plane under a mapping g that maps u, v to x = u² and y = u(1 + v). The image in the x–y plane of the square [0, 1]² in the u–v plane is the set A given by

A = {(x, y) : 0 ≤ x ≤ 1 and √x ≤ y ≤ 2√x}.

See Figure 1.12. The mapping from the square is one to one, for if (x, y) ∈ A then (u, v) can be recovered by u = √x and v = (y/√x) − 1.

Figure 1.12: The image A of the unit square under the mapping g.

The absolute value of the Jacobian determinant is

|∂(x, y)/∂(u, v)| = |det [∂x/∂u ∂x/∂v; ∂y/∂u ∂y/∂v]| = |det [2u 0; 1 + v u]| = 2u².

Therefore, using the transformation formula and expressing u and v in terms of x and y yields

fXY(x, y) = (√x + (y/√x) − 1)/(2x) if (x, y) ∈ A, and 0 else.
Example 1.11 Let U and V be independent continuous type random variables. Let X = U + V
and Y = V . Let us find the joint density of X, Y and the marginal density of X. The mapping
g: (u, v) ↦ (x, y) = (u + v, v)

is invertible, with inverse given by u = x − y and v = y. The absolute value of the Jacobian determinant is given by

|det [∂x/∂u ∂x/∂v; ∂y/∂u ∂y/∂v]| = |det [1 1; 0 1]| = 1.

Therefore

fXY(x, y) = fUV(x − y, y) = fU(x − y) fV(y),

and the marginal density of X is

fX(x) = ∫_{−∞}^{∞} fU(x − y) fV(y) dy.

That is, fX = fU ∗ fV.
Example 1.12 Let X1 and X2 be independent N (0, σ 2 ) random variables, and let X = (X1 , X2 )T
denote the two-dimensional random vector with coordinates X1 and X2 . Any point of x ∈ R2 can
be represented in polar coordinates by the vector (r, θ)^T such that r = ||x|| = (x1² + x2²)^{1/2} and θ = tan⁻¹(x2/x1), with values r ≥ 0 and 0 ≤ θ < 2π. The inverse of this mapping is given by

x1 = r cos(θ)
x2 = r sin(θ)

We endeavor to find the pdf of the random vector (R, Θ)^T, the polar coordinates of X. The pdf of X is given by

fX(x) = fX1(x1) fX2(x2) = (1/(2πσ²)) e^{−r²/(2σ²)}.

The range of the mapping is the set r > 0 and 0 < θ ≤ 2π. On the range, the absolute value of the Jacobian determinant is

|∂x/∂(r, θ)| = |det [∂x1/∂r ∂x1/∂θ; ∂x2/∂r ∂x2/∂θ]| = |det [cos(θ) −r sin(θ); sin(θ) r cos(θ)]| = r,

so that

fR,Θ(r, θ) = fX(x) r = (r/(2πσ²)) e^{−r²/(2σ²)},   r > 0, 0 < θ ≤ 2π.

Of course fR,Θ(r, θ) = 0 off the range of the mapping. The joint density factors into a function of r and a function of θ, so R and Θ are independent. Moreover, R has the Rayleigh density with parameter σ², and Θ is uniformly distributed on [0, 2π].
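A minimal simulation sketch (not in the notes; σ, seed, and sample size are arbitrary) checking the Rayleigh radius and uniform angle:

import numpy as np

# For X1, X2 ~ i.i.d. N(0, sigma^2): R should be Rayleigh(sigma), Theta uniform on [0, 2*pi).
rng = np.random.default_rng(7)
sigma = 1.5
x1, x2 = sigma * rng.standard_normal((2, 1_000_000))
r = np.hypot(x1, x2)
theta = np.mod(np.arctan2(x2, x1), 2 * np.pi)
for r0 in (1.0, 2.0, 3.0):
    print(f"P(R <= {r0}): empirical {np.mean(r <= r0):.4f}, "
          f"Rayleigh CDF {1 - np.exp(-r0**2 / (2 * sigma**2)):.4f}")
print("P(Theta <= pi/2):", np.mean(theta <= np.pi / 2), " (expected 0.25)")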
ELEG–636 Homework #1, Spring 2003
5. Use the moment generating function to show that a linear transformation of a Gaussian random vector is also Gaussian.

Proof:

Let x be an n × 1 real Gaussian random vector with mean m_x and covariance C_x; its density function is

f_x(x) = (1/((2π)^{n/2} |C_x|^{1/2})) exp[−(1/2)(x − m_x)^T C_x^{−1} (x − m_x)].

Let s be an n × 1 real vector. The moment generating function of x is

Φ_x(s) = E[e^{s^T x}] = ∫ e^{s^T x} f_x(x) dx = exp[m_x^T s + (1/2) s^T C_x s].

Let y = Ax be a linear transform of x. The moment generating function of y is

Φ_y(s) = E[e^{s^T y}] = E[e^{s^T Ax}] = E[e^{(A^T s)^T x}].

Using the moment generating function of x, we have

Φ_y(s) = exp[m_x^T (A^T s) + (1/2)(A^T s)^T C_x (A^T s)] = exp[(A m_x)^T s + (1/2) s^T (A C_x A^T) s],

which has the same form as Φ_x, with mean A m_x and covariance A C_x A^T. So y is also Gaussian.
6. Let x1, x2, x3, x4 be four i.i.d. random variables with exponential distribution with λ = 1, and let

Xn = Σ_{i=1}^{n} xi,   n = 2, 3, 4.

(a) Determine and plot the pdf of X2.
(b) Determine and plot the pdf of X3.
(c) Determine and plot the pdf of X4.
(d) Compare the pdf of X4 with that of the Gaussian density.

Answer: Let

f(x) = e^{−x} U(x)

be the common pdf of the xi. The characteristic function of each xi is

Φ(ω) = 1/(1 − jω).

Since x1, . . . , xn are i.i.d. and

Xn = x1 + ··· + xn,

evaluating both sides by their characteristic functions gives

Φ_{Xn}(ω) = Π_{i=1}^{n} Φ(ω) = (1/(1 − jω))^n,

whose inverse Fourier transform yields the pdf of Xn:

f_{Xn}(x) = (x^{n−1}/(n − 1)!) e^{−x} U(x).

This expression holds for any positive integer n, including n = 2, 3, 4.
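The problem asks for plots; the following is a minimal matplotlib sketch (my own, not the original figures) of the three Erlang pdfs, with the n = 4 case compared against a Gaussian with the same mean and variance (both equal to 4):

import numpy as np
import matplotlib.pyplot as plt
from math import factorial

# Erlang pdfs f_{X_n}(x) = x^(n-1) e^{-x} / (n-1)! for n = 2, 3, 4, plus N(4, 4) for comparison.
x = np.linspace(0, 15, 600)
for n in (2, 3, 4):
    plt.plot(x, x**(n - 1) * np.exp(-x) / factorial(n - 1), label=f"n = {n}")
plt.plot(x, np.exp(-(x - 4)**2 / 8) / np.sqrt(8 * np.pi), "k--", label="N(4, 4)")
plt.xlabel("x"); plt.ylabel("pdf"); plt.legend(); plt.title("Sum of n i.i.d. Exp(1) variables")
plt.show()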
7. The mean and covariance of a Gaussian random vector are given by, respectively, m and C. Plot the 1σ, 2σ, and 3σ concentration ellipses representing the contours of the density function in the x1–x2 plane. Hint: the radius of an ellipse with semi-major axis a (along y1) and semi-minor axis b (along y2) is given by

r(θ) = ab / √(a² sin²(θ) + b² cos²(θ)),   0 ≤ θ < 2π.

Compute the 1σ ellipse specified by a = √λ1 and b = √λ2 and then rotate and translate each point using the transformation x(θ) = Q [y1(θ), y2(θ)]^T + m, where the columns of Q are the eigenvectors of C.

Answer: The eigenvectors of C define a rotation of the original axes, and the concentration ellipses are obtained by rotating and translating the ellipses with semi-axes √λ1 and √λ2, scaled by 1, 2, and 3. When the level of the quadratic form g(x1, x2) is chosen differently, the figure will be different, but the orientation of the ellipses is the same.

Figure: the concentration ellipses in the x1–x2 plane.
ELEG–636 Test #1, March 25, 1999 NAME:
1. (35 pts) Let y = min{|x1|, x2} where x1 and x2 are i.i.d. inputs with cdf and pdf Fx(·) and fx(·), respectively. For simplicity, assume fx(·) is symmetric about 0, i.e., fx(x) = fx(−x). Determine the cdf and pdf of y in terms of the distribution of the inputs. Plot the pdf of y for fx(·) uniform on [−1, 1].

Note that

F|x|(x) = Fx(x) − Fx(−x) for x ≥ 0, and 0 otherwise.

Also

Fmin{x1,x2}(x) = 1 − P{x1 ≥ x}P{x2 ≥ x} = 1 − (1 − Fx1(x))(1 − Fx2(x)).

Thus,

Fy(y) = 1 − (1 − F|x1|(y))(1 − Fx2(y)).
The observations are

yi = θ + xi,   for i = 1, 2, . . . , N.

We wish to estimate the location parameter θ using a maximum likelihood estimator operating on the observations y1, y2, . . . , yN. Consider two cases:

(10 pts) The xi terms are i.i.d. with distribution xi ~ N(0, σ²), for i = 1, 2, . . . , N.

(10 pts) The xi terms are independent with distribution xi ~ N(0, σi²), for i = 1, 2, . . . , N.

(15 pts) Are the estimates unbiased? What is the variance of the estimates? Are they consistent?

In the i.i.d. case the likelihood is

f_{y|θ}(y|θ) = Π_{i=1}^{N} (1/√(2πσ²)) exp(−(yi − θ)²/(2σ²)) = (1/(2πσ²))^{N/2} exp(−Σ_{i=1}^{N} (yi − θ)²/(2σ²)).

Thus,

θ_ML = arg max_θ [ −Σ_{i=1}^{N} (yi − θ)²/(2σ²) ],

and setting the derivative with respect to θ to zero gives θ_ML = (1/N) Σ_{i=1}^{N} yi, the sample mean. In the independent (non-identically distributed) case, the same argument with weights wi = 1/σi² gives

θ_ML = Σ_{i=1}^{N} wi yi / Σ_{i=1}^{N} wi.

Both estimates are unbiased, since E{θ_ML} = Σ wi E{yi}/Σ wi = θ. The variance of the weighted estimate is

var(θ_ML) = E{Σ_i Σ_j wi xi xj wj} / (Σ_i wi)² = Σ_i wi² σi² / (Σ_i wi)² = Σ_i wi / (Σ_i wi)² = 1 / Σ_i wi,

which reduces to σ²/N in the i.i.d. case. Since wi > 0, we have var(θ_ML)[N + 1] < var(θ_ML)[N]. This, combined with the fact that the estimator is unbiased, means the estimate is consistent.
ELEG{636 Test #1, March 23, 2000 NAME:
1. (30 pts) The random variables x and y are independent and uniformly distributed on the interval [0, 1]. Determine the conditional distribution f_{r|A}(r|A) where r = √(x² + y²) and A = {r ≤ 1}.

Answer:

Examine the joint density fx,y(x, y) in the xy plane. Since x and y are independent,

fx,y(x, y) = fx(x) fy(y) = 1 for 0 ≤ x, y ≤ 1.

This defines a uniform density over the region 0 ≤ x, y ≤ 1 in the first quadrant of the xy plane. Note that r = √(x² + y²) defines an arc in the first quadrant. Also, if 0 ≤ r ≤ 1 the area under the uniform density up to radius r is simply given by

Fr(r) = Pr[√(x² + y²) ≤ r] = ∫∫_{x²+y² ≤ r²} fx,y(x, y) dx dy = ∫∫_{x²+y² ≤ r²} 1 dx dy = πr²/4 for 0 ≤ r ≤ 1.

Then for A = {r ≤ 1},

F_{r|A}(r|A) = F_{r,A}(r, A)/Pr[A] = Fr(r)/Fr(1) = (πr²/4)/(π/4) = r² for 0 ≤ r ≤ 1.

Thus, f_{r|A}(r|A) = 2r for 0 ≤ r ≤ 1 and 0 elsewhere.
ELEG{636 Test #1, April 5, 2001 NAME:
(15 pts) Suppose now that y = a sin(x + θ), where θ and a > 0 are constants. Determine fy(y).

(10 pts) Suppose further that x is uniformly distributed over [−π, π]. Determine fy(y).

Thus

f(y | x ≥ 0) = fx(√y) / (2√y (1 − Fx(0))) · U(y).

Now for y = g(x) = a sin(x + θ) we have, assuming |y| ≤ a, infinitely many solutions xn of a sin(xn + θ) = y. Note that

g²(xn) + g'²(xn) = a² sin²(xn + θ) + a² cos²(xn + θ) = a².

Or,

|g'(xn)| = √(a² − g²(xn)) = √(a² − y²).

Thus

fy(y) = Σ_n fx(xn)/|g'(xn)| = (1/√(a² − y²)) Σ_n fx(xn),   |y| ≤ a.
ELEG–636 Test #1, April 14, 2003 NAME:
(15 pts) Let x and y be independent, zero mean, unit variance Gaussian random variables. Define

w = x² + y² and z = x².

Determine fw,z(w, z). Are w and z independent?
Answer: Note that

fx(x) = (1/4)x + (1/2)δ(x − 0.5) for 0 ≤ x < 2, and 0 otherwise.

Thus

Fx(x) = 0 for x < 0; (1/8)x² + (1/2)u(x − 0.5) for 0 ≤ x < 2; and 1 for x ≥ 2.

Since x = √y for 0 ≤ y ≤ 1,

Fy(y) = 0 for y < 0; Fx(√y) for 0 ≤ y < 1; and 1 for 1 ≤ y
      = 0 for y < 0; (1/8)y + (1/2)u(√y − 0.5) for 0 ≤ y < 1; and 1 for 1 ≤ y
      = 0 for y < 0; (1/8)y + (1/2)u(y − 0.25) for 0 ≤ y < 1; and 1 for 1 ≤ y.
The reverse transformation is easily seen to be x = ±√z and y = ±√(w − x²) = ±√(w − z), for w ≥ z ≥ 0. Thus,

fw,z(w, z) = fx,y(x, y)/(4|xy|) summed over the four roots x = ±√z, y = ±√(w − z).   (1)

Thus

fw,z(w, z) = (1/(2π√(z(w − z)))) e^{−w/2} u(w) u(z) u(w − z).
ELEG–636 Midterm, April 7, 2009 NAME:
(a) [15 pts] Prove the Bienaymé inequality, which is a generalization of the Tchebycheff inequality,

Pr{|X − a| ≥ ε} ≤ E{|X − a|^n}/ε^n,

for arbitrary a and distribution of X.

(b) [15 pts] Consider the uniform distribution over [−1, 1].

i. [10 pts] Determine the moment generating function for this distribution.

ii. [5 pts] Use the moment generating function to generate a simple expression for the k-th moment, m'_k.

Answer:

(a)

E{|x − a|^n} = ∫_{−∞}^{∞} |x − a|^n fx(x) dx ≥ ∫_{|x−a| ≥ ε} |x − a|^n fx(x) dx ≥ ε^n ∫_{|x−a| ≥ ε} fx(x) dx
             = ε^n Pr{|x − a| ≥ ε}  ⇒  Pr{|X − a| ≥ ε} ≤ E{|X − a|^n}/ε^n.

(b)

Φ(s) = (1/2) ∫_{−1}^{1} e^{sx} dx = (1/(2s))(e^s − e^{−s}) for s ≠ 0, and 1 for s = 0,

⇒ E{x^k} = d^k Φ(s)/ds^k |_{s=0}.

For example,

E{x} = dΦ(s)/ds |_{s=0} = lim_{s→0} [(1/(2s))(e^s + e^{−s}) − (1/(2s²))(e^s − e^{−s})] = 0.

Repeat the differentiation and limit (l'Hôpital's rule) process for the higher moments. The analytical solution is simpler:

E{x^k} = (1/2) ∫_{−1}^{1} x^k dx = (1 − (−1)^{k+1})/(2(k + 1)) = 0 for k = 1, 3, 5, . . . and 1/(k + 1) for k = 0, 2, 4, . . .
3. [35 pts] Let Z = X + N, where X and N are independent with distributions N ∼ N(0, σ_N²) and fX(x) = (1/2)δ(x − 2) + (1/2)δ(x + 2).
(a) [15 pts] Determine the MAP, MS, MAE, and ML estimates for X in terms of Z.
(b) [10 pts] Determine the bias of each estimate, i.e., determine whether or not each
estimate is biased.
(c) [10 pts] Determine the variances of the estimates.
Answer:
ELEG–636 Homework #1, Spring 2009
1. A token is placed at the origin on a piece of graph paper. A coin biased to heads is given, P (H) =
2/3. If the result of a toss is heads, the token is moved one unit to the right, and if it is a tail the
token is moved one unit to the left. Repeating this 1200 times, what is a probability that the token
is on a unit N , where 350 ≤ N ≤ 450? Simulate the system and plot the histogram using 10,000
realizations.
Solution:

Let x = # of heads. Then 350 ≤ x − (1200 − x) ≤ 450 ⇒ 775 ≤ x ≤ 825, and

Pr(775 ≤ x ≤ 825) = Σ_{i=775}^{825} C(1200, i) (2/3)^i (1/3)^{1200−i}.

Using the Gaussian approximation to the binomial (mean 800, standard deviation σ = √(1200 · (2/3) · (1/3)) ≈ 16.3), this is approximately Φ((825 − 800)/σ) − Φ((775 − 800)/σ),

where Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−t²/2} dt.
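The problem also asks for a simulation with 10,000 realizations; a minimal NumPy/matplotlib sketch (my own, not part of the original solution; the seed is arbitrary) is:

import numpy as np
import matplotlib.pyplot as plt

# 10,000 realizations of 1200 biased steps: heads (+1) with probability 2/3, tails (-1) otherwise.
rng = np.random.default_rng(8)
steps = np.where(rng.random((10_000, 1200)) < 2 / 3, 1, -1)
positions = steps.sum(axis=1)
print("P(350 <= N <= 450) ~", np.mean((positions >= 350) & (positions <= 450)))
plt.hist(positions, bins=60)
plt.xlabel("final position N"); plt.ylabel("count"); plt.show()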
2. Random variable X is characterized by cdf FX (x) = (1 − e−x )U (x) and event C is defined by
C = {0.5 < X ≤ 1}. Determine and plot FX (x|C) and fX (x|C).
Solution: Evaluating P r(X ≤ x, 0.5 < X ≤ 1) for the allowable three cases
3. Prove that the characteristic function for the univariate Gaussian distribution, N (η, σ 2 ), is
φ(ω) = exp(jωη − ω²σ²/2).
Next determine the moment generating function and determine the first four moments.
Solution:

φ(ω) = ∫_{−∞}^{∞} (1/(√(2π)σ)) exp(−(x − η)²/(2σ²)) e^{jωx} dx
     = ∫_{−∞}^{∞} (1/(√(2π)σ)) exp(−(x² − 2ηx + η² − 2jωxσ²)/(2σ²)) dx
     = exp((−η² + (η + jωσ²)²)/(2σ²)) ∫_{−∞}^{∞} (1/(√(2π)σ)) exp(−(x − (η + jωσ²))²/(2σ²)) dx
     = exp((−η² + (η + jωσ²)²)/(2σ²)),

which reduces to φ(ω) = exp(jωη − ω²σ²/2). The moment generating function is simply

Φ(s) = exp(sη + s²σ²/2),

and

m_k = d^k Φ(s)/ds^k |_{s=0},

which yields

m1 = η,   m2 = σ² + η²,   m3 = 3ησ² + η³,   m4 = 3σ⁴ + 6σ²η² + η⁴.
Determine (a) fx (x), (b) fY (y), (c) fY (y|x), and (d) E[Y |x].
Solution:
W = X² + Y² and Z = X².

Note we must have w, z ≥ 0 and w ≥ z. Thus the inverse system (roots) are

x = ±√z,   y = ±√(w − z).

Thus

fWZ(w, z) = fXY(x, y)/(4|xy|) summed over the four roots x = ±√z, y = ±√(w − z),   (∗)

where

fXY(x, y) = (1/(2π)) e^{−(x²+y²)/2}.   (∗∗)

Substituting (∗∗) into (∗) [which has four terms] and simplifying yields

fWZ(w, z) = (e^{−w/2}/(2π√(z(w − z)))) U(w − z) U(z).   (∗ ∗ ∗)
Note W and Z are not independent. Counter example proof: Suppose W and Z are independent.
Then fW (w)fZ (z) > 0 for all w, z > 0. But this violates (∗ ∗ ∗), as fW Z (w, z) > 0 only for
w ≥ z.
ELEG–636 Homework #2, Spring 2009
1. Let

R = [2 −2; −2 5].

Express R as R = QΩQ^H, where Ω is diagonal.

Solution:

det(R − λI) = det [2 − λ, −2; −2, 5 − λ] = λ² − 7λ + 6 = 0 ⇒ λ1 = 6, λ2 = 1.

Then solving Rqi = λi qi gives q1 = (1/√5)[1, −2]^T and q2 = (1/√5)[2, 1]^T. Thus R = QΩQ^H, where

Q = [q1, q2] and Ω = [6 0; 0 1].
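A minimal NumPy check (my own, not part of the homework solution) of the decomposition:

import numpy as np

# Eigendecomposition R = Q Omega Q^H for R = [[2, -2], [-2, 5]].
R = np.array([[2.0, -2.0], [-2.0, 5.0]])
eigvals, Q = np.linalg.eigh(R)                 # eigenvalues in ascending order: [1, 6]
print("eigenvalues:", eigvals)
print("reconstruction error:", np.max(np.abs(Q @ np.diag(eigvals) @ Q.T - R)))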
2. Find the eigenvalues of the covariance matrix

C = [σ1², ρσ1σ2; ρ∗σ1σ2, σ2²].

Solution:

(a)

det(C − λI) = det [σ1² − λ, ρσ1σ2; ρ∗σ1σ2, σ2² − λ] = λ² − (σ1² + σ2²)λ + (1 − |ρ|²)σ1²σ2² = 0

⇒ λ = [(σ1² + σ2²) ± √(σ1⁴ + σ2⁴ − 2σ1²σ2² + 4|ρ|²σ1²σ2²)]/2.

(b) For σ1² = σ2² = σ²,

λ = (2σ² ± √(4|ρ|²σ⁴))/2 = (1 ± |ρ|)σ².
3. Let
x[n] = Aejω0 n
where the complex amplitude A is a RV with random magnitude and phase
A = |A|ejφ .
Show that a sufficient condition for the random process to be stationary is that the amplitude and
phase are independent and that the phase is uniformly distributed over [−π, π].
Solution: First note E{x[n]} = E{A}ejω0 n and
E{A} = E{|A|}E{ejφ } = 0
by independence and uniform distribution of φ. Thus it has a fixed mean. Next note
4. Let Y be the sum of 20 i.i.d. random variables, each uniformly distributed on [0, 1]. Utilize Tchebycheff's inequality to determine a bound for Pr{8 < Y < 12}.

Solution: Note ηx = 1/2 and σx² = 1/12. Thus ηy = 10 and σy² = 20/12 = 5/3. Utilizing Tchebycheff's inequality,

Pr{|Y − ηy| ≥ 2} ≤ σy²/2² = 5/12 ⇒ Pr{8 < Y < 12} ≥ 1 − 5/12 = 7/12.
5. Let X ∼ N(0, 2σ²) and Y ∼ N(1, σ²) be independent RVs. Also, define Z = XY. Find the Bayes estimate of X from observation Z.
6. Let X and Y be independent RVs characterized by fX(x) = ae^{−ax}U(x) and fY(y) = ae^{−ay}U(y). Also, define Z = XY. Find the Bayes estimate of X from observation Z using the uniform cost function.
Solution:

Fz|x(z|x) = Pr(xy ≤ z | x) = Pr(y ≤ z/x) = Fy(z/x) ⇒ fz|x(z|x) = (1/x) fy(z/x).

The uniform cost function leads to the MAP estimate:

x̂ = arg max_x fz|x(z|x) fx(x) = arg max_x (1/x) fy(z/x) fx(x)
  = arg max_x (1/x) a e^{−az/x} a e^{−ax} U(x) U(z) = arg max_x a² x^{−1} e^{−a(z x^{−1} + x)} U(x) U(z).

Setting the derivative to zero,

0 = −a² x^{−2} e^{−a(zx^{−1}+x)} + (a² x^{−1} e^{−a(zx^{−1}+x)})(−a(1 − z x^{−2}))
0 = −x^{−1} − a(1 − z x^{−2}) ⇒ a x² + x − a z = 0,

so that

x̂ = (−1 + √(1 + 4a²z))/(2a)   (taking the nonnegative root).
where v1 [n] and v2 [n] are independent white noise processes, each with variance 0.5.
ELEG–636 Homework #1, Spring 2008
1. Let fx (t) be symmetric about 0. Prove that µ is the expected value of a sam-
ple distributed according to fx−µ (t).
Solution.
Since fx (t) is symmetric about 0, fx (t) is even.
E[(x − µ)] = ∫_{−∞}^{+∞} t fx−µ(t) dt = ∫_{−∞}^{+∞} t fx(t − µ) dt.

Let u = t − µ,

E[(x − µ)] = ∫_{−∞}^{+∞} (u + µ) fx(u) du
           = ∫_{−∞}^{+∞} u fx(u) du + µ ∫_{−∞}^{+∞} fx(u) du
           = 0 + µ ∫_{−∞}^{+∞} fx(u) du     (the first integrand is odd)
           = µ.
2. Show that for large x the Gaussian tail probability satisfies Qx(x) ≈ (1/(√(2π)x)) exp(−(1/2)x²).

Solution.

Recall integration by parts: ∫_a^b f(t) g'(t) dt = f(t)g(t)|_a^b − ∫_a^b f'(t) g(t) dt.

Let g'(t) = t exp(−(1/2)t²) and f(t) = 1/(√(2π)t). Then

Qx(x) = ∫_x^∞ (1/(√(2π)t)) · t exp(−(1/2)t²) dt
      = [−(1/(√(2π)t)) exp(−(1/2)t²)]_x^∞ − ∫_x^∞ (1/(√(2π)t²)) exp(−(1/2)t²) dt
      = (1/(√(2π)x)) exp(−(1/2)x²) − ∫_x^∞ (1/(√(2π)t²)) exp(−(1/2)t²) dt
      ≈ (1/(√(2π)x)) exp(−(1/2)x²).

Since ∫_x^∞ (1/(√(2π)t²)) exp(−(1/2)t²) dt goes to zero as x goes to infinity, the approximation improves as x increases.
3. The probability density function for a two dimensional random vector is de-
fined by
fx(x) = A x1² x2 for x1, x2 ≥ 0 and x1 + x2 ≤ 1, and 0 otherwise.
Solution.
(a)

Fx1,x2(∞, ∞) = ∫_0^1 ∫_0^{1−x1} A x1² x2 dx2 dx1
             = ∫_0^1 A x1² [x2²/2]_0^{1−x1} dx1
             = ∫_0^1 A x1² (1 − x1)²/2 dx1
             = (A/2) ∫_0^1 (x1⁴ − 2x1³ + x1²) dx1
             = A/60
             = 1,   (1)

so A = 60.
• x1, x2 ≥ 0 and x1 + x2 ≤ 1:

F(x1, x2) = ∫_0^{x1} ∫_0^{x2} 60 u² v dv du = 10 x1³ x2².

• 0 ≤ x1, x2 ≤ 1 and x1 + x2 ≥ 1:

F(x1, x2) = 10x2² − 20x2³ + 15x2⁴ − 4x2⁵ + 10x1³ − 15x1⁴ + 6x1⁵ − 1.

• 0 ≤ x1 ≤ 1 and x2 ≥ 1:

F(x1, x2) = 1 − ∫_{x1}^1 ∫_0^{1−u} 60 u² v dv du = 10x1³ − 15x1⁴ + 6x1⁵.

• 0 ≤ x2 ≤ 1 and x1 ≥ 1:

F(x1, x2) = 1 − ∫_0^{1−x2} ∫_{x2}^{1−u} 60 u² v dv du = 10x2² − 20x2³ + 15x2⁴ − 4x2⁵.

• x1, x2 ≥ 1: F(x1, x2) = 1.

So

F(x1, x2) = 0 for x1 < 0 or x2 < 0;
            10x1³x2² for x1, x2 ≥ 0, x1 + x2 ≤ 1;
            10x2² − 20x2³ + 15x2⁴ − 4x2⁵ + 10x1³ − 15x1⁴ + 6x1⁵ − 1 for 0 ≤ x1, x2 ≤ 1, x1 + x2 ≥ 1;
            10x1³ − 15x1⁴ + 6x1⁵ for 0 ≤ x1 ≤ 1, x2 ≥ 1;
            10x2² − 20x2³ + 15x2⁴ − 4x2⁵ for 0 ≤ x2 ≤ 1, x1 ≥ 1;
            1 for x1, x2 ≥ 1.
(b)

fx2(x2) = ∫_0^{1−x2} 60 x1² x2 dx1 = 20 x2 (1 − x2)³.
(c) Since

fx1(x1) = ∫_0^{1−x1} 60 x1² x2 dx2 = 30 x1² (1 − x1)²,

we have fx1,x2(x1, x2) ≠ fx1(x1) fx2(x2). Therefore, x1 and x2 are NOT independent.
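A minimal numerical sketch (my own check, not part of the homework) confirming that both marginals integrate to one and that the joint density differs from the product of the marginals at a sample point:

import numpy as np

# Marginals of f(x1, x2) = 60*x1^2*x2 on {x1, x2 >= 0, x1 + x2 <= 1}.
t = np.linspace(0, 1, 100_001)
print(np.trapz(30 * t**2 * (1 - t)**2, t))   # ~1.0
print(np.trapz(20 * t * (1 - t)**3, t))      # ~1.0
x1, x2 = 0.2, 0.3
print(60 * x1**2 * x2, (30 * x1**2 * (1 - x1)**2) * (20 * x2 * (1 - x2)**3))  # not equal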
4. The random vector X = (x1, x2) has the joint pdf fX(x) = 2x2 for 0 ≤ x1, x2 ≤ 1 (and 0 otherwise), and A = {x1 < x2}.

Solution.

(a) Since fX(x1, x2) = fx1(x1) fx2(x2) with fx1(x1) = 1 and fx2(x2) = 2x2, the two marginal distributions are independent.

(b)

Pr(A) = ∫_0^1 ∫_0^{x2} 2x2 dx1 dx2 = ∫_0^1 2x2² dx2 = [2x2³/3]_0^1 = 2/3.   (2)

(c)

fX|A(X|A) = fX(X)/Pr(A) = 3x2 for 0 ≤ x1 < x2 ≤ 1, and 0 otherwise.

fx1|A(x1|A) = ∫_{x1}^1 3x2 dx2 = [3x2²/2]_{x1}^1 = (3/2)(1 − x1²),   0 ≤ x1 ≤ 1,

fx2|A(x2|A) = ∫_0^{x2} 3x2 dx1 = 3x2²,   0 ≤ x2 ≤ 1.

Since fX|A(X|A) ≠ fx1|A(x1|A) fx2|A(x2|A), x1 and x2 are NOT conditionally independent given A.
5. The entropy H for a random vector is defined as −E{ln fx (x)}. Show that
for the complex Gaussian case
H = N (1 + ln π) + ln |Cx |.
For the complex Gaussian density fx(x) = (1/(π^N |Cx|)) exp[−(x − mx)^H Cx^{−1}(x − mx)], we have ln fx(x) = −N ln π − ln |Cx| − (x − mx)^H Cx^{−1}(x − mx). Then,

H = −E{ln fx(x)} = E[(x − mx)^H Cx^{−1}(x − mx)] + N ln π + ln |Cx|.
Note
E[(x − mx )H C−1 H −1
x (x − mx )] = E[trace((x − mx ) Cx (x − mx ))]
= trace(C−1 H
x E[(x − mx )(x − mx ) ])
= trace(C−1
x Cx )
= trace(I) = N
Therefore
H = N + N ln π + ln |Cx |
= N (1 + ln π) + ln |Cx |
x = 3u − 4v
y = 2u + v
where u and v are unit mean, unit variance, uncorrelated Gaussian random
variables.
(a) Determine the means and variances of x and y.
(b) Determine the joint density of x and y.
(c) Determine the conditional density of y given x.
Solution.
(a)
E(y) = E(2u + v)
= 2E(u) + E(v)
= 2+1
= 3
Thus

fy|x(y|x) = fx,y(x, y)/fx(x)
          = (√(2π) · 5/(22π)) exp{−(1/2)[((x + 4y)/11 − 1)² + ((−2x + 3y)/11 − 1)²] + (x + 1)²/(2 · 25)}
          = (5/(√(2π) · 11)) exp{−(1/2)[((x + 4y)/11 − 1)² + ((−2x + 3y)/11 − 1)² − (x + 1)²/25]}.
Note E{x21 } = σ12 , E{x22 } = σ22 , and E{x1 x2 } = ρσ1 σ2 . Determine the
angle θ such that y1 and y2 are uncorrelated.
Solution.
(
y1 = x1 cos θ + x2 sin θ
y2 = −x1 sin θ + x2 cos θ
8. The covariance matrix and mean vector for a real Gaussian density are

Cx = [1 0.5; 0.5 1]   and   mx = [1; 0].
Solution.
Hence, eigenvalues are 0.5 and 1.5. For λ = 0.5, the corresponding eigen-
vector is [1, −1]T . For λ = 1.5, the corresponding eigenvector is [1, 1]T .
(c) Eigenvalues are 0.5 and 1.5. For λ = 0.5, the corresponding eigenvector
is [1, 1]T . For λ = 1.5, the corresponding eigenvector is [1, −1]T .
Solution.
⇒ a = √3.

That is,

fxk(xk) = 1/(2√3) for xk ∈ [−√3, √3], and 0 otherwise.