Solution Exercises List 1 - Probability and Measure Theory
1. If #Ω < ∞ the answer is yes. If #Ω = ∞ the answer is no. Consider Ω = N. Then A = {even numbers} ∉ F, but A is the countable union of the sets A_n = {2n} ∈ F, n ≥ 1. Hence, F is not closed under countable unions.
2. Yes. One has to check the properties that define a σ-algebra. It is useful to treat separately the cases where an arbitrary set A ∈ F (or a sequence of sets, when checking closure under countable unions) is countable or uncountable.
3. We have that

(a) {0} = (∪_{n≥1} [1/(n+1), 1/n])^c.
(b) {1/n : n ≥ 2} = ∪_{n≥2} ([1/(n+1), 1/n] ∩ [1/n, 1/(n−1)]).
(c) (1/n, 1] = (∪_{1≤i≤n−1} [1/(i+1), 1/i]) ∩ ([1/(n+1), 1/n])^c.
(d) (0, 1/n] = ∪_{i≥n} [1/(i+1), 1/i].
4. It is a consequence of De Morgan's law (∩_{n≥1} A_n)^c = ∪_{n≥1} A_n^c.
5. It is a consequence of the fact that the preimage mapping X^{−1} preserves all set operations. That is, X^{−1}(∪_{n≥1} A_n) = ∪_{n≥1} X^{−1}(A_n), X^{−1}(A^c) = (X^{−1}(A))^c, etc.
6. Define X : A → Ω by X(ω) = ω, that is, X is the inclusion map of A into Ω. Then, A ∩ B = X^{−1}(B) for any B ∈ F. By exercise 5, we get that F_A = {X^{−1}(B) : B ∈ F} = {A ∩ B : B ∈ F} is a σ-algebra.
7. It follows easily from the definition of σ-algebra.
8. That lim sup_{n→∞} A_n ∈ F and lim inf_{n→∞} A_n ∈ F whenever {A_n}_{n≥1} ⊂ F follows from the fact that F is closed under countable unions (definition of σ-algebra) and under countable intersections (exercise 4). That

lim sup_{n→∞} A_n = {A_n occurs for infinitely many n}

and

lim inf_{n→∞} A_n = {A_n occurs for all but finitely many n}

is just expressing in words the iterated intersection and union in the definitions lim sup_{n→∞} A_n = ∩_{n≥1} ∪_{k≥n} A_k and lim inf_{n→∞} A_n = ∪_{n≥1} ∩_{k≥n} A_k. That lim inf_{n→∞} A_n ⊂ lim sup_{n→∞} A_n is then obvious: if A_n occurs for all but finitely many n, it occurs for infinitely many n.
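As an illustration (not part of the original solution; the two alternating sets and the horizon N are arbitrary choices), the following Python sketch approximates the iterated union/intersection over a finite horizon and recovers the "infinitely often" versus "all but finitely many" descriptions:

    # limsup/liminf of a sequence of sets over a finite horizon
    A_odd, A_even = {0, 1, 2}, {1, 2, 3}
    A = lambda n: A_odd if n % 2 else A_even   # A_n alternates between two sets
    N = 50                                     # finite horizon standing in for n -> infinity
    limsup = set.intersection(*[set.union(*[A(k) for k in range(n, N)])
                                for n in range(1, N - 1)])
    liminf = set.union(*[set.intersection(*[A(k) for k in range(n, N)])
                         for n in range(1, N - 1)])
    print(limsup)   # {0, 1, 2, 3}: points lying in A_n for infinitely many n
    print(liminf)   # {1, 2}: points lying in A_n for all but finitely many n
    assert liminf <= limsup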
one proceeds by induction. The base case is trivial. In order to prove the general case define B_n = A_n \ (∪_{i=1}^{n−1} A_i) = A_n ∩ (∪_{i=1}^{n−1} A_i)^c, 1 ≤ n ≤ N. The sets B_n are pairwise disjoint and satisfy ∪_{i=1}^{n} A_i = ∪_{i=1}^{n} B_i (a disjoint union). Then,

P(∪_{n=1}^{N} A_n) = P(∪_{n=1}^{N} B_n) = P(∪_{n=1}^{N−1} B_n) + P(B_N) ≤ P(∪_{n=1}^{N−1} A_n) + P(A_N) ≤ Σ_{n=1}^{N} P(A_n),

where in the first inequality we have used the monotonicity property with B_N ⊂ A_N and in the second inequality we have used the induction hypothesis.
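The union bound can also be checked by simulation; a minimal sketch (the sample space [0,1] with the uniform law and the overlapping intervals below are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(0)
    omega = rng.uniform(size=100_000)                  # draws from Omega = [0, 1]
    events = [(0.1, 0.3), (0.2, 0.4), (0.25, 0.45), (0.7, 0.9)]  # overlapping A_n
    in_union = np.zeros_like(omega, dtype=bool)
    sum_probs = 0.0
    for a, b in events:
        in_event = (omega >= a) & (omega < b)
        in_union |= in_event                           # membership in the union
        sum_probs += in_event.mean()                   # empirical P(A_n)
    print(in_union.mean(), "<=", sum_probs)            # subadditivity holds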
11. Let (Ω, F, P) be a probability space and {A_n}_{n≥1} ⊂ F a sequence of events. Note that

lim sup_{n→∞} A_n = ∩_{n=1}^{∞} ∪_{k=n}^{∞} A_k

and that B_n = ∪_{k=n}^{∞} A_k is a decreasing sequence of events, B_{n+1} ⊂ B_n. The result follows by using 11.(b) and 11.(c).
conclude that the elements of σ(X) are countable (or finite) unions of elements of P. That is, B ∈ σ(X) if and only if there exists J ⊂ N such that B = ∪_{i∈J} A_i. This is the general structure of the σ-algebra generated by a countable partition of Ω. A function Y is measurable with respect to σ(X) if and only if Y is constant on the elements of the partition P.
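For a finite partition this structure is easy to verify mechanically; a small Python sketch (the partition below is an arbitrary example) enumerates all unions of blocks and checks that they form a σ-algebra:

    from itertools import chain, combinations

    # an arbitrary partition of Omega = {0, ..., 5} into three blocks
    blocks = [frozenset({0, 1}), frozenset({2}), frozenset({3, 4, 5})]
    sigma = {frozenset(chain.from_iterable(J))
             for r in range(len(blocks) + 1)
             for J in combinations(blocks, r)}
    print(len(sigma))                   # 2**3 = 8 elements
    Omega = frozenset(range(6))
    assert all(Omega - B in sigma for B in sigma)                   # complements
    assert all((B1 | B2) in sigma for B1 in sigma for B2 in sigma)  # unions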
14. No. The proof of this fact and an example follow from exercise 13, because F is generated by a finite partition of Ω.
Hence, Var[Z] = E[(Z − E[Z])²] = E[Z²] = 1. Note that X = g(Z) = g ∘ Z where g(z) = σz + µ is a continuous (and hence measurable) function. Therefore, X is a random variable: it is the composition of measurable functions, since Z is a random variable. The distribution function of X is given by

F_X(x) = Q(X ≤ x) = Q(σZ + µ ≤ x) = Q(Z ≤ (x − µ)/σ) = ∫_{−∞}^{(x−µ)/σ} φ(z) dz.
By the linearity of the expectation one has that E[X] = µ + σE[Z] = µ and

E[X²] = E[(µ + σZ)²] = µ² + 2µσE[Z] + σ²E[Z²] = µ² + σ².

Moreover, if Y is an arbitrary random variable with E[Y²] < +∞ then, using again the linearity of the expectation, we get that

Var[Y] = E[(Y − E[Y])²] = E[Y²] − 2E[Y]E[Y] + E[Y]² = E[Y²] − E[Y]².

Hence, Var[X] = E[X²] − E[X]² = σ².
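A quick Monte Carlo check of these identities (the sample size and the values of µ and σ are arbitrary):

    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma = 1.5, 2.0
    Z = rng.standard_normal(1_000_000)   # Z ~ N(0, 1)
    X = mu + sigma * Z
    print(X.mean(), X.var())             # approximately mu and sigma**2
    print((X**2).mean())                 # approximately mu**2 + sigma**2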
20. To be handed in.
dP_Z/dλ (z) = (1/(z√(2πσ²))) exp(−(log(z) − µ)²/(2σ²)).
Note that E[Yⁿ] = E[exp(nX)] = ψ(n) and Var[Y] = E[Y²] − E[Y]² = ψ(2) − ψ(1)², where ψ(θ) is the function defined in exercise 20.
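Assuming, as for the moment generating function of a N(µ, σ²) law, that ψ(θ) = exp(µθ + σ²θ²/2) (exercise 20 is not reproduced here), these moment formulas can be checked by simulation:

    import numpy as np

    rng = np.random.default_rng(2)
    mu, sigma = 0.3, 0.5
    Y = np.exp(rng.normal(mu, sigma, 1_000_000))             # Y = exp(X)
    psi = lambda t: np.exp(mu * t + 0.5 * sigma**2 * t**2)   # assumed form of psi
    print(Y.mean(), psi(1))                                  # E[Y] vs psi(1)
    print(Y.var(), psi(2) - psi(1)**2)                       # Var[Y] vs psi(2) - psi(1)^2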
22. It follows from the monotonicity of the expectation (integral with respect to dP) and the inequality f(X) ≥ f(X)1_{X≥a} ≥ f(a)1_{X≥a}, P-a.s., which holds since f is nonnegative and nondecreasing.
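A numerical sanity check of this Markov-type bound (the exponential law, the threshold a and the choice f(x) = x² are arbitrary):

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.exponential(1.0, 1_000_000)   # a nonnegative random variable
    a, f = 2.0, lambda x: x**2            # f nonnegative, nondecreasing on [0, inf)
    print((X >= a).mean(), "<=", f(X).mean() / f(a))   # P(X >= a) <= E[f(X)]/f(a)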
23. Let {t_n}_{n≥1} ⊂ I \ {t_0} be an arbitrary sequence of numbers converging to t_0. Apply the dominated convergence theorem to (X_{t_n} − X_{t_0})/(t_n − t_0). To check the hypotheses of the dominated convergence theorem it is useful to recall the mean value theorem: if f is C¹(I) then f(t_1) − f(t_2) = f′(ξ)(t_1 − t_2) for some ξ ∈ I between t_1 and t_2.
25. As X and Y are i.i.d. with law N(µ, σ²) we have that (X, Y) is multivariate normal with density given by

f_{X,Y}(x, y) = f_X(x) f_Y(y) = (1/(2πσ²)) exp(−((x − µ)² + (y − µ)²)/(2σ²)).
Setting U = X + Y and V = X − Y, the change of variables formula (the inverse map has Jacobian of modulus 1/2) gives

f_{U,V}(u, v) = (1/(4πσ²)) exp(−(u²/2 + v²/2 + 2µ² − 2µu)/(2σ²)) 1_{R²}(u, v).
(U, V) are independent iff f_{U,V}(u, v) = f_U(u)f_V(v) for some densities f_U(u) and f_V(v). Since u²/2 + v²/2 + 2µ² − 2µu = (u − 2µ)²/2 + v²/2, the joint density factorizes for every µ, so U and V are independent, with U ∼ N(2µ, 2σ²) and V ∼ N(0, 2σ²). In particular, U and V are identically distributed, both normal with zero mean and variance 2σ², iff µ = 0.
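A simulation sketch supporting this factorization (it assumes, as reconstructed above, that U = X + Y and V = X − Y; all parameter values are arbitrary):

    import numpy as np

    rng = np.random.default_rng(4)
    mu, sigma, n = 1.0, 1.5, 1_000_000
    X = rng.normal(mu, sigma, n)
    Y = rng.normal(mu, sigma, n)
    U, V = X + Y, X - Y
    print(np.cov(U, V)[0, 1])    # ~ 0: U and V are uncorrelated for any mu
    print(U.mean(), U.var())     # ~ 2*mu and 2*sigma**2
    print(V.mean(), V.var())     # ~ 0 and 2*sigma**2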
26. As X and Y are i.i.d. with law N(0, σ²) we have that (X, Y) is multivariate normal with density given by

f_{X,Y}(x, y) = f_X(x) f_Y(y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)).
Next, use Theorem 18 in Lecture 3 with S_0 = R × {0}, S_1 = R × (0, +∞), S_2 = R × (−∞, 0) and g : S_i → g(S_i) = (0, +∞) × R, i = 1, 2, given by (u, v) = g(x, y) = (√(x² + y²), x/y). Note that g_i^{−1} : g(S_i) → S_i, i = 1, 2, are given by

g_1^{−1}(u, v) = (uv/√(1+v²), u/√(1+v²)),   g_2^{−1}(u, v) = (−uv/√(1+v²), −u/√(1+v²)),

and det J_{g_1^{−1}}(u, v) = det J_{g_2^{−1}}(u, v) = −u/(1+v²). Hence, summing over the two branches,
f_{U,V}(u, v) = (u/σ²) exp(−u²/(2σ²)) 1_{(0,+∞)}(u) · (1/(π(1 + v²))) 1_R(v),

and, as the joint density of (U, V) factorizes, we have that U and V are independent (in fact U is Rayleigh distributed and V is standard Cauchy distributed).
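A simulation sketch of this conclusion (σ and the evaluation point are arbitrary; independence is probed at one point of the joint distribution function):

    import numpy as np

    rng = np.random.default_rng(5)
    sigma, n = 1.0, 1_000_000
    X = rng.normal(0, sigma, n)
    Y = rng.normal(0, sigma, n)
    U = np.hypot(X, Y)           # sqrt(X^2 + Y^2), Rayleigh distributed
    V = X / Y                    # standard Cauchy distributed
    u0, v0 = 1.0, 0.5
    print(((U <= u0) & (V <= v0)).mean())        # empirical P(U <= u0, V <= v0)
    print((U <= u0).mean() * (V <= v0).mean())   # product of the marginals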
27. We can write det Q in terms of ρ. We have that det Q = (1 − ρ²)σ_X²σ_Y². As det Q ≥ 0 we must have ρ ∈ [−1, 1]. As σ_X²σ_Y² > 0, we have that |ρ| < 1 iff det Q > 0, which implies that P_{X,Y} ≪ λ² by Theorem 31 in Lecture 3. Moreover,

Q^{−1} = [σ_X², ρσ_Xσ_Y; ρσ_Xσ_Y, σ_Y²]^{−1} = (1/((1 − ρ²)σ_X²σ_Y²)) [σ_Y², −ρσ_Xσ_Y; −ρσ_Xσ_Y, σ_X²].

After a little bit of algebra in the terms of the exponential one gets the desired expression for f_{X,Y}(x, y). As σ_X²σ_Y² > 0, we have that |ρ| = 1 iff det Q = 0, which implies that the distribution is degenerate or singular, meaning that it is concentrated on a lower dimensional subspace of R². This yields that the distribution of (X, Y) is not absolutely continuous with respect to λ².
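The determinant identity and the closed form of Q^{−1} can be verified numerically (the values of σ_X, σ_Y, ρ are arbitrary):

    import numpy as np

    sx, sy, rho = 1.0, 2.0, 0.3
    Q = np.array([[sx**2, rho * sx * sy],
                  [rho * sx * sy, sy**2]])
    print(np.linalg.det(Q), (1 - rho**2) * sx**2 * sy**2)   # det Q, closed form
    Qinv_closed = np.array([[sy**2, -rho * sx * sy],
                            [-rho * sx * sy, sx**2]]) / ((1 - rho**2) * sx**2 * sy**2)
    print(np.linalg.inv(Q))
    print(Qinv_closed)                                      # the two inverses agree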
where we have used the change of variable −y = u and the fact that φ(−x) = φ(x). On the other hand, (Y, Z) is multivariate Gaussian iff any linear combination of Y and Z is a (one dimensional) Gaussian random variable. Consider W := Y + Z = 2Y 1_{|Y|≤a}. Clearly W is not Gaussian because P(W > b) = 0 for any b > 2a, while for any Gaussian random variable this probability is strictly positive. Hence, we can conclude that (Y, Z) is NOT multivariate Gaussian.
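A simulation sketch (it assumes the standard construction Z = Y on {|Y| ≤ a} and Z = −Y on {|Y| > a}, which is consistent with Y + Z = 2Y 1_{|Y|≤a}; the value of a is arbitrary):

    import numpy as np

    rng = np.random.default_rng(6)
    a = 1.0
    Y = rng.standard_normal(1_000_000)
    Z = np.where(np.abs(Y) <= a, Y, -Y)   # flip the sign of Y outside [-a, a]
    print(Z.mean(), Z.var())              # Z is again ~ N(0, 1): ~0 and ~1
    W = Y + Z                             # = 2Y on {|Y| <= a}, = 0 otherwise
    print(np.abs(W).max())                # bounded by 2a, so W cannot be Gaussian
    print((W > 2 * a).mean())             # = 0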
29. The idea is to check that E[(Y − X)²] = 0, which yields that Y = X, P-a.s. by exercise 16. When checking that E[(Y − X)²] = 0 one uses the hypothesis in the exercise, the conservation of the expectation property of the conditional expectation and the "what is measurable goes out" property of the conditional expectation.
30. We call the conditional expectation defining property (CEDP) the following: a G-measurable random variable Z satisfies

E[Z 1_B] = E[X 1_B], for all B ∈ G. (2)

In order to prove that E[X|G] is equal to some given random variable Z we always have to check two things: first, that the candidate Z is G-measurable and, second, that the candidate Z satisfies (2). Then, we can conclude that E[X|G] = Z, P-a.s.
(a) It follows from the CEDP taking B = Ω and the fact that a constant, in particular E[X],
is measurable with respect to any σ-algebra, in particular G.
which is an increasing sequence of positive, simple and G-measurable functions such that Y_n ↗ Y, P-a.s. Then, as XY ∈ L¹ the conditional expectation exists and

E[XY|G] = E[X lim_{n→∞} Y_n |G] = lim_{n→∞} E[XY_n|G] = lim_{n→∞} Y_n E[X|G] = Y E[X|G], P-a.s.,

where we have used the conditional monotone convergence theorem and that the result holds for simple functions. One could have used the CEDP with the usual monotone convergence theorem to prove the result. For a general G-measurable Y such that YX ∈ L¹, note that also Y⁺X ∈ L¹ and Y⁻X ∈ L¹ and then we can write

E[XY|G] = E[XY⁺|G] − E[XY⁻|G] = Y⁺E[X|G] − Y⁻E[X|G] = Y E[X|G].
31. By exercise 13, we know that the candidate random variable must be constant on the elements of the partition {A_n}_{n≥1}. This is the case for Z = Σ_{n≥1} (E[X 1_{A_n}]/P(A_n)) 1_{A_n}. Therefore, we only need to prove that it satisfies the CEDP. This follows easily from the particular structure of the elements of G, described in exercise 13. We have that B ∈ G iff B = ∪_{i∈J} A_i where J is a countable subset of N. Then, on the one hand,

E[X 1_B] = E[X Σ_{i∈J} 1_{A_i}] = Σ_{i∈J} E[X 1_{A_i}],
and, on the other hand, since the A_n are disjoint, 1_{A_n} 1_B = 1_{A_n} if n ∈ J and 0 otherwise, so E[Z 1_B] = Σ_{i∈J} (E[X 1_{A_i}]/P(A_i)) P(A_i) = Σ_{i∈J} E[X 1_{A_i}] = E[X 1_B], and the CEDP holds.
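The formula can be checked by simulation (the partition of Ω = [0, 1] and the variable X below are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(7)
    omega = rng.uniform(size=1_000_000)      # draws from Omega = [0, 1]
    X = omega**2                             # an integrable random variable
    edges = [0.0, 0.3, 0.8, 1.0]             # partition A_1, A_2, A_3 of [0, 1]
    Z = np.empty_like(X)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (omega >= lo) & (omega < hi)
        Z[mask] = X[mask].mean()             # E[X 1_A]/P(A), constant on each block
    B = (omega < 0.3) | (omega >= 0.8)       # B = A_1 U A_3 is an element of G
    print((Z * B).mean(), (X * B).mean())    # the CEDP holds approximately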
exp(µ + σz)φ(z) = exp(µ + σ²/2) φ(z − σ),

and hence

E[max(0, exp(X) − K)] = exp(µ + σ²/2) Φ((µ + σ² − log(K))/σ) − K Φ((µ − log(K))/σ).
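A Monte Carlo check of this closed-form expectation (the values of µ, σ and K are arbitrary; Φ is evaluated with scipy):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(8)
    mu, sigma, K = 0.1, 0.4, 1.2
    X = rng.normal(mu, sigma, 2_000_000)
    mc = np.maximum(0.0, np.exp(X) - K).mean()   # E[max(0, e^X - K)] by simulation
    closed = (np.exp(mu + sigma**2 / 2) * norm.cdf((mu + sigma**2 - np.log(K)) / sigma)
              - K * norm.cdf((mu - np.log(K)) / sigma))
    print(mc, closed)                            # agree up to Monte Carlo error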