Solution Exercises List 1 - Probability and Measure Theory


Introduction and Techniques Exercises
in Financial Mathematics, List 1

UiO-STK4510, Solutions and Hints
Autumn 2015
Teacher: S. Ortiz-Latorre
Last updated: November 25, 2015

Probability and Measure Theory

1. If #Ω < ∞ the answer is yes. If #Ω = ∞ the answer is no. Consider Ω = N. Then A = {even numbers} ∉ F, but A is a countable union of the sets A_n = {2n} ∈ F, n ≥ 1. Hence, F is not closed under countable unions.
2. Yes. You have to check the properties that define a σ-algebra. It is useful to separate the cases where an arbitrary set A ∈ F (or a sequence of sets, when checking closure under countable unions) is countable or uncountable.
3. We have that

(a) {0} = (∪_{n≥1} [1/(n+1), 1/n])^c.

(b) {1/n : n ≥ 2} = ∪_{n≥2} ([1/(n+1), 1/n] ∩ [1/n, 1/(n−1)]).

(c) (1/n, 1] = (∪_{1≤i≤n−1} [1/(i+1), 1/i]) ∩ ([1/(n+1), 1/n])^c.

(d) (0, 1/n] = ∪_{i≥n} [1/(i+1), 1/i].
4. It is a consequence of De Morgan's law (∩_{n≥1} A_n)^c = ∪_{n≥1} A_n^c.
5. It is a consequence of the fact that the preimage mapping X^{−1} preserves all set operations. That is, X^{−1}(∪_{n≥1} A_n) = ∪_{n≥1} X^{−1}(A_n), X^{−1}(A^c) = (X^{−1}(A))^c, etc.
6. Define X : A → Ω by X(ω) = ω, that is, X is the injection of the set A into Ω. Then,
A ∩ B = X −1 (B) for any B ∈ F. By exercise 5, we get that FA = {X −1 (B) : B ∈ F} is a
σ-algebra.
7. It follows easily from the definition of σ-algebra.
8. That {A_n}_{n≥1} ⊂ F implies lim sup_{n→∞} A_n ∈ F and lim inf_{n→∞} A_n ∈ F follows from the fact that F is closed under countable unions (definition of σ-algebra) and under countable intersections (exercise 4). That

lim sup_{n→∞} A_n = ∩_{n≥1} ∪_{k≥n} A_k = {A_n occurs for infinitely many n},

and

lim inf_{n→∞} A_n = ∪_{n≥1} ∩_{k≥n} A_k = {A_n occurs for all but finitely many n},

is just writing in words the iterated intersection and union in the definitions of lim sup_{n→∞} A_n and lim inf_{n→∞} A_n. That lim inf_{n→∞} A_n ⊂ lim sup_{n→∞} A_n is obvious.

9. To get P(A) ≥ 0 for all A ∈ F = P(N) we must impose α_i ≥ 0, i ∈ N. To get P(Ω) = 1 we must impose ∑_{i=1}^∞ α_i = 1. The convergence of the series implies that α_i → 0 as i → +∞, which rules out the possibility of having all α_i's equal. For any set A ∈ F define

P(A) = ∑_{i∈A} P({i}) = ∑_{i∈A} α_i.

To show the σ-additivity one uses that the series ∑_{i=1}^∞ α_i is absolutely convergent and, hence, that the terms of the series can be reordered without changing the value of its sum, which yields that ∑_{i∈∪_{k=1}^∞ A_k} α_i = ∑_{k=1}^∞ ∑_{i∈A_k} α_i.
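A quick numerical illustration (an addition, not part of the original solution): taking α_i = 2^{−i} gives α_i ≥ 0 and ∑ α_i = 1, and additivity can be checked exactly on disjoint sets. The truncation level N below is an arbitrary computational cutoff.

```python
from fractions import Fraction

# Assumed example weights: alpha_i = 2^(-i), truncated at N for computation.
N = 60
alpha = {i: Fraction(1, 2**i) for i in range(1, N + 1)}

def P(A):
    """P(A) = sum of alpha_i over i in A (subsets of {1, ..., N})."""
    return sum(alpha[i] for i in A if i in alpha)

evens = set(range(2, N + 1, 2))
odds = set(range(1, N + 1, 2))

print(float(P(evens | odds)))                 # ~1.0 (off by 2^(-N))
print(P(evens | odds) == P(evens) + P(odds))  # True: additivity on disjoint sets
```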



10. Let (Ω, F, P ) be a probability space.

(a) First prove that if B ⊂ A then P(B) ≤ P(A). This is done by considering the decomposition A = (A ∩ B) ⊎ (A ∩ B^c), using the additivity and the non-negativity of P, and that A ∩ B = B.
(b) To prove that

P(∪_{n=1}^N A_n) ≤ ∑_{n=1}^N P(A_n),

one proceeds by induction. The base case is trivial. In order to prove the general case define B_n = A_n \ (∪_{i=1}^{n−1} A_i) = A_n ∩ (∪_{i=1}^{n−1} A_i)^c, 1 ≤ n ≤ N. The sets B_n are pairwise disjoint and satisfy ∪_{i=1}^n A_i = ⊎_{i=1}^n B_i. Then,

P(∪_{n=1}^N A_n) = P(∪_{n=1}^N B_n) = P(∪_{n=1}^{N−1} B_n) + P(B_N)
               ≤ P(∪_{n=1}^{N−1} A_n) + P(A_N) ≤ ∑_{n=1}^N P(A_n),

where in the first inequality we have used the monotonicity property with B_N ⊂ A_N and in the second inequality we have used the induction hypothesis.
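A small simulation (an added illustration with hypothetical events A_n = {U < t_n} for a uniform draw U) makes the bound concrete: for overlapping events the union probability falls strictly below the sum.

```python
import random

random.seed(0)
TRIALS = 100_000
thresholds = [0.3, 0.4, 0.5]  # nested events, so they overlap heavily

hits_union = 0
hits_each = [0] * len(thresholds)
for _ in range(TRIALS):
    u = random.random()
    for n, t in enumerate(thresholds):
        if u < t:
            hits_each[n] += 1
    hits_union += u < max(thresholds)

p_union = hits_union / TRIALS               # ~0.5 = P(union of the A_n)
sum_p = sum(h / TRIALS for h in hits_each)  # ~1.2 = sum of the P(A_n)
print(p_union, "<=", sum_p)
```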

11. Let (Ω, F, P ) be a probability space and {An }n≥1 ⊂ F a sequence of events.

(a) Define B_1 = A_1 and B_k = A_k \ A_{k−1} = A_k ∩ A_{k−1}^c, k = 2, 3, .... The sets B_k are pairwise disjoint, ∪_{k=1}^∞ A_k = ⊎_{k=1}^∞ B_k and A_n = ⊎_{k=1}^n B_k. The result now follows easily from the σ-additivity of P and the definition of the sum of a series.
(b) Note that 1 = P(Ω) = P(B ⊎ B^c) = P(B) + P(B^c), which yields P(B^c) = 1 − P(B). Furthermore, the sequence of sets {A_k^c}_{k=1}^∞ is increasing, A_k^c ⊂ A_{k+1}^c. Using de Morgan's law (∩_{k=1}^∞ A_k)^c = ∪_{k=1}^∞ A_k^c and the conclusion in 11.(a), the result follows.
(c) Define B_n = ∪_{k=1}^n A_k. Then {B_n}_{n=1}^∞ is an increasing sequence of sets. The result follows by 11.(a), 10.(b) and the definition of the sum of a series.
(d) Follows from 11.(c).
(e) Take complements using de Morgan’s law and reduce the problem to the case 11.(d).
(f) Note that ∑_{n=1}^∞ P(A_n) < ∞ implies that lim_{k→∞} ∑_{n=k}^∞ P(A_n) = 0. Recall that

lim sup_{n→∞} A_n = ∩_{n=1}^∞ ∪_{k=n}^∞ A_k,

and that B_n = ∪_{k=n}^∞ A_k is a decreasing sequence, B_{n+1} ⊂ B_n. The result follows by using 11.(b) and 11.(c).

12. It follows from exercise 13.


13. X takes values in {x_i}_{i∈N} ⊂ R. Consider A_i = {ω : X(ω) = x_i} = X^{−1}({x_i}). The family of sets P = {A_i}_{i∈N} is a countable partition of Ω, i.e., A_i ∩ A_j = ∅ if i ≠ j and Ω = ⊎_{i∈N} A_i. By exercise 5, σ(X) = {X^{−1}(B) : B ∈ B(R)}, and X^{−1}(B) = ⊎_{{i : x_i ∈ B}} A_i. Hence, we can conclude that the elements of σ(X) are countable (or finite) unions of elements of P. That is, B ∈ σ(X) if and only if there exists J ⊂ N such that B = ⊎_{i∈J} A_i. This is the general structure of the σ-algebra generated by a countable partition of Ω. A function Y is measurable with respect to σ(X) if and only if Y is constant on the elements of the partition P.
14. No. The proof of this fact and an example follow from exercise 13, because F is generated by a finite partition of Ω.



15. Y = g ◦ X, where X : (Ω, F) → (R, B(R)) is measurable and g : (R, B(R)) → (R, B(R)) defined by g(x) = x² is measurable because it is continuous. Hence, Y is (F, B(R))-measurable because it is the composition of measurable functions. To check if X is σ(Y)-measurable we can use a corollary of the factorization theorem (Corollary 36 in Lecture 3). We need to find a Borel measurable function ϕ : R → R such that X = ϕ(Y). As Y = g(X), the desired ϕ must be equal to g^{−1}, and g^{−1} exists only if g is injective on X(Ω) ⊂ R, the image of X. Therefore, X is σ(Y)-measurable iff g is injective on X(Ω). For instance, if X(Ω) = R₊ then X is σ(Y)-measurable, but if X(Ω) = R then X is not σ(Y)-measurable.
16. Consider the measurable sets A = {ω : X(ω) > 0} and A_n = {ω : X(ω) > 1/n}, n ≥ 1. {A_n}_{n≥1} is an increasing sequence of sets converging to A. Note that X ≥ X 1_{A_n} ≥ (1/n) 1_{A_n}. Using the monotonicity of the Lebesgue integral (if f ≥ g, P-a.s., then ∫_Ω f dP ≥ ∫_Ω g dP) show that P(A_n) = 0, and using exercise 11.(a) conclude that P(A) = 0, which yields that X = 0, P-a.s.
17. Consider the measurable sets A₊ = {ω : X(ω) > 0} and A₋ = {ω : X(ω) < 0}. As {ω : X(ω) ≠ 0} = A₋ ∪ A₊, we can conclude if we show that P(A₋) = P(A₊) = 0. Note that X 1_{A₊} ≥ 0, P-a.s., and by hypothesis E[X 1_{A₊}] = 0. Then, by exercise 16, we have that X 1_{A₊} = 0, P-a.s. But as X(ω) > 0 for ω ∈ A₊, the only possibility to have X 1_{A₊} = 0, P-a.s. is that P(A₊) = 0. The reasoning for A₋ is similar.
18. First, note that Q ∼ P is equivalent to requiring that P(A) = 0 iff Q(A) = 0 for all A ∈ F. This equivalence follows from the definitions of Q ≪ P and P ≪ Q. Then, if we assume that Q ∼ P we have that Q ≪ P and by the Radon-Nikodym theorem we get that

Q({dQ/dP = 0}) = ∫_{{dQ/dP = 0}} (dQ/dP) dP = ∫_Ω 1_{{dQ/dP = 0}} (dQ/dP) dP = ∫_Ω 0 dP = 0,

where we have used that 1_{{dQ/dP = 0}} (dQ/dP) = 0, P-a.s., and that the value of the integral does not change when interchanging P-a.s. equal integrands. But Q({dQ/dP = 0}) = 0 implies P({dQ/dP = 0}) = 0, because we have assumed that Q ∼ P. Assume now that Q ≪ P and that P({dQ/dP = 0}) = 0. We must show that if Q(A) = 0 then P(A) = 0. First note that ∀B ∈ F we have that A ∩ B ∈ F and Q(A ∩ B) = 0, by the monotonicity of Q. By the Radon-Nikodym theorem,

0 = Q(A ∩ B) = ∫_{A∩B} (dQ/dP) dP = ∫_B 1_A (dQ/dP) dP,

which yields that 1_A (dQ/dP) = 0, P-a.s. This last P-a.s. equality combined with P({dQ/dP = 0}) = 0 implies that P(A) = 0. As P({dQ/dP = 0}) = 0 we can define, P-a.s., the random variable Z = (dQ/dP)^{−1}. Let us check that Z is P-a.s. (hence Q-a.s.) equal to dP/dQ. Using Proposition 55 in Lecture 2, we have that, for all B ∈ F, we can write

∫_Ω 1_B Z dQ = ∫_Ω 1_B Z (dQ/dP) dP = ∫_Ω 1_B (dQ/dP)^{−1} (dQ/dP) dP = ∫_Ω 1_B dP = P(B),

which shows that (dQ/dP)^{−1} = dP/dQ, P-a.s.
19. Define φ(z) = (1/√(2π)) exp(−z²/2). First we have to check that ∫_{−∞}^{+∞} φ(z) dz = 1. This is a classical computation that follows from computing the double integral ∫_{−∞}^{+∞} ∫_{−∞}^{+∞} φ(x)φ(y) dx dy using Fubini's theorem and a change of variables to polar coordinates, i.e., x = r cos θ, y = r sin θ, r ∈ (0, +∞) and θ ∈ [0, 2π]. Next, consider (Ω, F) = (R, B(R)) and Z : Ω → R defined by Z(ω) = ω, which is clearly (B(R), B(R))-measurable. By the definition of P_Z it is easy to check that P_Z = Q. Using the image measure theorem one gets that

E[Z] = ∫_{−∞}^{+∞} z φ(z) dz,   E[Z²] = ∫_{−∞}^{+∞} z² φ(z) dz.

As φ′(z) = −z φ(z) and lim_{z→−∞} z^n φ(z) = lim_{z→+∞} z^n φ(z) = 0, n ∈ N ∪ {0}, we get that

E[Z] = −∫_{−∞}^{+∞} φ′(z) dz = lim_{z→−∞} φ(z) − lim_{z→+∞} φ(z) = 0.

To compute E[Z²] one uses the integration by parts formula to get

∫_{−∞}^{+∞} z² φ(z) dz = [−z φ(z)]_{−∞}^{+∞} + ∫_{−∞}^{+∞} φ(z) dz = 0 + 1 = 1.

Hence, Var[Z] = E[(Z − E[Z])²] = E[Z²] = 1. Note that X = g(Z) = g ◦ Z where g(z) = σz + µ is a continuous (and hence measurable) function. Therefore, X is a random variable because it is the composition of measurable functions, Z being a random variable. The distribution function of X is given by F_X(x) = Q(X ≤ x) = Q(σZ + µ ≤ x) = Q(Z ≤ (x − µ)/σ) = ∫_{−∞}^{(x−µ)/σ} φ(z) dz, and after the change of variable z = (y − µ)/σ one gets that

F_X(x) = ∫_{−∞}^x (1/√(2πσ²)) exp(−(y − µ)²/(2σ²)) dy.

By the linearity of the expectation one has that E[X] = µ + σE[Z] = µ and

E[X²] = E[(µ + σZ)²] = µ² + 2µσE[Z] + σ²E[Z²] = µ² + σ².

Moreover, if Y is an arbitrary random variable with E[Y²] < +∞ then, using again the linearity of the expectation, we get that

Var[Y] = E[(Y − E[Y])²] = E[Y²] − 2E[Y]E[Y] + E[Y]² = E[Y²] − E[Y]².

Hence, Var[X] = σ².
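A Monte Carlo sanity check of these formulas (an added illustration): simulating X = µ + σZ should reproduce E[X] = µ and Var[X] = σ².

```python
import random
import statistics

random.seed(1)
mu, sigma = 1.5, 2.0
n = 200_000

# X = mu + sigma * Z with Z standard normal.
xs = [mu + sigma * random.gauss(0.0, 1.0) for _ in range(n)]

print(statistics.fmean(xs))      # ~ mu = 1.5
print(statistics.pvariance(xs))  # ~ sigma^2 = 4.0
```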
20. To be handed in.

21. X is measurable because it is a random variable. Y = exp(X) = g ◦ X where g(x) = exp(x) is continuous and hence measurable. Therefore, Y is a random variable because it is the composition of measurable functions. The law of Y is given by

P_Y((−∞, y]) = P(Y ≤ y) = P(exp(X) ≤ y) = 1_{{y>0}} P(X ≤ log(y))
= 1_{{y>0}} ∫_{−∞}^{log(y)} (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)) dx
= 1_{{y>0}} ∫_0^y (1/(z√(2πσ²))) exp(−(log(z) − µ)²/(2σ²)) dz.

This shows that P_Y ≪ λ and

(dP_Y/dλ)(z) = (1/(z√(2πσ²))) exp(−(log(z) − µ)²/(2σ²)).

Note that E[Y^n] = E[exp(nX)] = ψ(n) and Var[Y] = E[Y²] − E[Y]² = ψ(2) − ψ(1)², where ψ(θ) is the function defined in exercise 20.
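An added numerical check, assuming ψ is the normal moment generating function ψ(θ) = E[exp(θX)] = exp(θµ + θ²σ²/2) (presumably the function of exercise 20): the sample mean and variance of Y = exp(X) should match ψ(1) and ψ(2) − ψ(1)².

```python
import math
import random

random.seed(2)
mu, sigma = 0.3, 0.8
n = 300_000

def psi(theta):
    # Assumed form of the MGF from exercise 20: E[exp(theta*X)], X ~ N(mu, sigma^2).
    return math.exp(theta * mu + theta**2 * sigma**2 / 2)

ys = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]
mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n

print(mean_y, "~", psi(1))               # E[Y] = psi(1)
print(var_y, "~", psi(2) - psi(1) ** 2)  # Var[Y] = psi(2) - psi(1)^2
```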
22. It follows from the monotonicity of the expectation (integral with respect to dP ) and the
inequality f (X) ≥ f (X)1{X≥a} ≥ f (a)1{X≥a} , P -a.s..
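Rearranged, this gives P(X ≥ a) ≤ E[f(X)]/f(a) for non-negative non-decreasing f with f(a) > 0, a generalized Markov inequality. A quick numeric check (an added sketch with f(x) = x² and X exponential of rate 1):

```python
import random

random.seed(3)
n = 200_000
a = 2.0

# X ~ Exp(1); f(x) = x^2 is non-negative and non-decreasing on [0, +inf).
xs = [random.expovariate(1.0) for _ in range(n)]

lhs = sum(1 for x in xs if x >= a) / n     # P(X >= a) = e^(-2) ~ 0.135
rhs = (sum(x * x for x in xs) / n) / a**2  # E[X^2] / f(a) = 2 / 4 = 0.5
print(lhs, "<=", rhs)
```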

23. Let {t_n}_{n≥1} ⊂ I \ {t_0} be an arbitrary sequence of numbers converging to t_0. Apply the dominated convergence theorem to (X_{t_n} − X_{t_0})/(t_n − t_0). To check the hypothesis in the dominated convergence theorem it is useful to consider the mean value theorem: if f is C¹(I) then f(t_1) − f(t_2) = f′(ξ)(t_1 − t_2) for some ξ between t_1 and t_2, t_1, t_2 ∈ I.
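To make the statement concrete (an added sketch, assuming the exercise asserts differentiation under the expectation sign, d/dt E[X_t] = E[∂_t X_t], under a domination condition): with X_t = sin(tZ) and Z standard normal, a finite difference of t ↦ E[sin(tZ)] should match E[Z cos(tZ)].

```python
import math
import random

random.seed(4)
zs = [random.gauss(0.0, 1.0) for _ in range(200_000)]
t0, h = 0.7, 1e-4

def mean(f):
    return sum(f(z) for z in zs) / len(zs)

# Finite-difference derivative of t -> E[sin(t Z)] at t0 ...
lhs = (mean(lambda z: math.sin((t0 + h) * z))
       - mean(lambda z: math.sin((t0 - h) * z))) / (2 * h)
# ... versus E[Z cos(t0 Z)], the derivative moved inside the expectation.
rhs = mean(lambda z: z * math.cos(t0 * z))
print(lhs, "~", rhs)
```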



24. By the independence of X and Y, the joint density of (X, Y) is given by f_{X,Y}(x, y) = f_X(x) f_Y(y), where f_X and f_Y are the densities of X and Y, respectively, which exist by assumption. Let Z = X + Y. Compute P(Z ≤ z) by integrating f_{X,Y}(x, y) over the set {x + y ≤ z}. Rewrite the double integral, using Fubini's theorem and a change of variable, as a double integral where the outer integral goes from −∞ to z. Taking derivatives with respect to z you obtain the desired density, which is

f_Z(z) = ∫_{−∞}^{+∞} f_X(u) f_Y(z − u) du = ∫_{−∞}^{+∞} f_X(z − u) f_Y(u) du.
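An added check of the convolution formula: for X, Y independent Uniform(0, 1) it gives the triangular density f_Z(z) = z on [0, 1] and 2 − z on [1, 2], which a histogram reproduces.

```python
import random

random.seed(5)
n = 400_000
# Z = X + Y with X, Y independent Uniform(0, 1).
zs = [random.random() + random.random() for _ in range(n)]

eps = 0.01
for z0 in (0.5, 1.0, 1.5):
    emp = sum(1 for z in zs if abs(z - z0) < eps) / (n * 2 * eps)
    tri = z0 if z0 <= 1 else 2 - z0  # triangular density from the convolution
    print(f"f_Z({z0}): empirical {emp:.3f}, exact {tri:.3f}")
```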

25. As X and Y are i.i.d. with law N(µ, σ²), the vector (X, Y) is multivariate normal with density given by

f_{X,Y}(x, y) = f_X(x) f_Y(y) = (1/(2πσ²)) exp(−((x − µ)² + (y − µ)²)/(2σ²)).

Next, use Theorem 18 in Lecture 3 with S_0 = ∅, S_1 = R² and g : S_1 → g(S_1) = R² given by (u, v) = g(x, y) = (x + y, x − y). Note that g_1^{−1} : g(S_1) → S_1 is given by g_1^{−1}(u, v) = ((u + v)/2, (u − v)/2) = (x, y) and det J_{g_1^{−1}}(u, v) = −1/2. Hence,

f_{U,V}(u, v) = (1/(4πσ²)) exp(−(u²/2 + v²/2 + 2µ² − 2µu)/(2σ²)) 1_{R²}(u, v).

(U, V) are independent iff f_{U,V}(u, v) = f_U(u) f_V(v) for some densities f_U and f_V. Completing the square, u²/2 + 2µ² − 2µu = (u − 2µ)²/2, so the exponent splits into a function of u plus a function of v and the joint density factorizes for every µ. Hence U and V are independent, with U ∼ N(2µ, 2σ²) and V ∼ N(0, 2σ²); in particular, when µ = 0 both are normal with zero mean and variance 2σ².
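An added empirical check: even with µ ≠ 0, U = X + Y and V = X − Y come out uncorrelated (and, being jointly Gaussian, independent), with means 2µ and 0.

```python
import random

random.seed(6)
n = 200_000
mu, sigma = 1.0, 1.0  # deliberately nonzero mean

us, vs = [], []
for _ in range(n):
    x = random.gauss(mu, sigma)
    y = random.gauss(mu, sigma)
    us.append(x + y)
    vs.append(x - y)

mu_u = sum(us) / n
mu_v = sum(vs) / n
cov = sum((u - mu_u) * (v - mu_v) for u, v in zip(us, vs)) / n
print(mu_u, mu_v, cov)  # ~2*mu, ~0, ~0
```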
26. As X and Y are i.i.d. with law N(0, σ²), the vector (X, Y) is multivariate normal with density given by

f_{X,Y}(x, y) = f_X(x) f_Y(y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)).

Next, use Theorem 18 in Lecture 3 with S_0 = R × {0}, S_1 = R × (0, +∞), S_2 = R × (−∞, 0) and g : S_i → g(S_i) = (0, +∞) × R, i = 1, 2, given by (u, v) = g(x, y) = (√(x² + y²), x/y). Note that g_i^{−1} : g(S_i) → S_i, i = 1, 2, are given by g_1^{−1}(u, v) = (uv/√(1+v²), u/√(1+v²)) and g_2^{−1}(u, v) = (−uv/√(1+v²), −u/√(1+v²)), and det J_{g_1^{−1}}(u, v) = det J_{g_2^{−1}}(u, v) = −u/(1 + v²). Hence, adding the contributions of the two branches,

f_{U,V}(u, v) = ((u/σ²) exp(−u²/(2σ²)) 1_{(0,+∞)}(u)) · ((1/(π(1 + v²))) 1_R(v)),

and, as the joint density of (U, V) factorizes, we have that U and V are independent.
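An added illustration: the factor 1/(π(1 + v²)) above says V = X/Y is standard Cauchy whatever σ is, so its empirical quartiles should sit near −1 and +1 and its median near 0.

```python
import random

random.seed(7)
n = 200_000
sigma = 1.3

vs = []
for _ in range(n):
    x = random.gauss(0.0, sigma)
    y = random.gauss(0.0, sigma)
    if y != 0.0:
        vs.append(x / y)  # the ratio V = X/Y

vs.sort()
q = len(vs)
# Standard Cauchy quartiles/median: -1, 0, 1 (independent of sigma).
print(vs[q // 4], vs[q // 2], vs[3 * q // 4])
```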

27. We can write det Q in terms of ρ. We have that det Q = (1 − ρ²) σ_X² σ_Y². As det Q ≥ 0 we must have ρ ∈ [−1, 1]. As σ_X² σ_Y² > 0, we have that |ρ| < 1 iff det Q > 0, which implies that P_{X,Y} ≪ λ² by Theorem 31 in Lecture 3. Moreover,

Q^{−1} = (σ_X²  ρσ_Xσ_Y; ρσ_Xσ_Y  σ_Y²)^{−1} = (1/((1 − ρ²) σ_X² σ_Y²)) (σ_Y²  −ρσ_Xσ_Y; −ρσ_Xσ_Y  σ_X²).

Also by Theorem 31 in Lecture 3 we have that

f_{X,Y}(x, y) = (1/(2π√(det Q))) exp(−(1/2) (x − µ_X, y − µ_Y) Q^{−1} (x − µ_X, y − µ_Y)′).

After a little bit of algebra in the terms of the exponential one gets the desired expression for f_{X,Y}(x, y). As σ_X² σ_Y² > 0, we have that |ρ| = 1 iff det Q = 0, which implies that the distribution is degenerate or singular, meaning that it is concentrated on a lower dimensional subspace of R². This yields that the distribution of (X, Y) is not absolutely continuous with respect to λ². A more formal proof is as follows: define Z = Y − µ_Y − ρ(σ_Y/σ_X)(X − µ_X) and check that E[Z²] = (1 − ρ²)σ_Y², that is, E[Z²] = 0 iff |ρ| = 1. Thanks to exercise 16, we deduce that

Y = µ_Y + ρ(σ_Y/σ_X)(X − µ_X), P-a.s. (1)

iff |ρ| = 1. This shows precisely that the values of (X, Y) are concentrated on the line given by equation (1), which has two dimensional Lebesgue measure zero. This contradicts the very definition of P_{X,Y} being absolutely continuous with respect to λ². To compute the law of Y conditioned on X, assuming that |ρ| < 1, we use the formula in Example 45 in Lecture 3. We get, after a little bit of algebra, that

f_{Y|X}(y|x) = f_{X,Y}(x, y)/f_X(x) = (1/√(2πσ_Y²(1 − ρ²))) exp(−(y − µ_Y − ρ(σ_Y/σ_X)(x − µ_X))²/(2σ_Y²(1 − ρ²))),

which yields that Y | X = x is N(µ_Y + ρ(σ_Y/σ_X)(x − µ_X), σ_Y²(1 − ρ²)). The case |ρ| = 1 yields that Y | X = x is the constant µ_Y + ρ(σ_Y/σ_X)(x − µ_X), consistent with the degeneracy discussed above.
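An added simulation of the conditional law: drawing (X, Y) with correlation ρ and keeping the pairs with X near a point x₀ should reproduce the conditional mean µ_Y + ρ(σ_Y/σ_X)(x₀ − µ_X) and variance σ_Y²(1 − ρ²) derived above.

```python
import random

random.seed(8)
n = 400_000
mu_x, mu_y, s_x, s_y, rho = 1.0, -0.5, 1.0, 2.0, 0.6

pairs = []
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = random.gauss(0.0, 1.0)
    x = mu_x + s_x * z1
    y = mu_y + s_y * (rho * z1 + (1 - rho**2) ** 0.5 * z2)  # Corr(X, Y) = rho
    pairs.append((x, y))

x0, eps = 1.5, 0.05
ys = [y for x, y in pairs if abs(x - x0) < eps]  # condition on X ~ x0
m = sum(ys) / len(ys)
v = sum((y - m) ** 2 for y in ys) / len(ys)
print(m, "~", mu_y + rho * (s_y / s_x) * (x0 - mu_x))  # conditional mean
print(v, "~", s_y**2 * (1 - rho**2))                   # conditional variance
```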
28. The idea is to check that P(Z ≤ z) = P(Y ≤ z) for all z ∈ R. Set φ(x) = (1/√(2π)) exp(−x²/2). Then,

P(Z ≤ z) = P(Y 1_{{|Y|≤a}} − Y 1_{{|Y|>a}} ≤ z)
= E[1_{{Y 1_{{|Y|≤a}} − Y 1_{{|Y|>a}} ≤ z}}]
= ∫_{−∞}^{+∞} 1_{{y 1_{{|y|≤a}} − y 1_{{|y|>a}} ≤ z}} φ(y) dy
= ∫_{−∞}^{−a} 1_{{−y≤z}} φ(y) dy + ∫_{−a}^{a} 1_{{y≤z}} φ(y) dy + ∫_{a}^{+∞} 1_{{−y≤z}} φ(y) dy
= −∫_{+∞}^{a} 1_{{u≤z}} φ(−u) du + ∫_{−a}^{a} 1_{{y≤z}} φ(y) dy − ∫_{−a}^{−∞} 1_{{u≤z}} φ(−u) du
= ∫_{a}^{+∞} 1_{{u≤z}} φ(u) du + ∫_{−a}^{a} 1_{{y≤z}} φ(y) dy + ∫_{−∞}^{−a} 1_{{u≤z}} φ(u) du
= ∫_{−∞}^{+∞} 1_{{y≤z}} φ(y) dy = P(Y ≤ z),

where we have used the change of variable u = −y and the fact that φ(−x) = φ(x). On the other hand, (Y, Z) is multivariate Gaussian iff any linear combination of Y and Z is a (one dimensional) Gaussian random variable. Consider W := Y + Z = 2Y 1_{{|Y|≤a}}. Clearly W is not Gaussian because P(W > b) = 0 for any b > 2a, while for any non-constant Gaussian random variable this probability is strictly positive. Hence, we can conclude that (Y, Z) is NOT multivariate Gaussian.
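An added simulation of the construction: Z keeps the standard normal law, yet W = Y + Z never exceeds 2a, which rules out joint Gaussianity.

```python
import random

random.seed(9)
n = 300_000
a = 1.0

zs, ws = [], []
for _ in range(n):
    y = random.gauss(0.0, 1.0)
    z = y if abs(y) <= a else -y  # the reflection construction for Z
    zs.append(z)
    ws.append(y + z)              # W = 2y on {|y| <= a}, 0 otherwise

print(sum(zs) / n, sum(z * z for z in zs) / n)  # ~0 and ~1: Z is N(0,1)
print(max(ws))                                  # never above 2a = 2.0
```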
29. The idea is to check that E[(Y − X)²] = 0, which yields that Y = X, P-a.s. by exercise 16. When checking that E[(Y − X)²] = 0 one uses the hypothesis in the exercise, the conservation of expectation property of the conditional expectation and the "what is measurable goes out" property of the conditional expectation.
30. We call the conditional expectation defining property (CEDP) the following property:

∀B ∈ G E[X1B ] = E[Z1B ]. (2)

In order to prove that E[X|G] is equal to some given random variable Z we always have to
check two things. First, that the candidate Z is G-measurable and second that the candidate
Z satisfies (2) . Then, we can conclude that E[X|G] = Z, P -a.s.

(a) It follows from the CEDP taking B = Ω and the fact that a constant, in particular E[X],
is measurable with respect to any σ-algebra, in particular G.



(b) Set A− = {E[X|G] < 0}, which is G-measurable, and assume that P (A− ) > 0. Then,
using the monotonicity property for the ordinary expectation we get that E[X1A− ] ≥ 0
because by hypothesis X ≥ 0, P -a.s.. However, by the CEDP we get that
E[X1A− ] = E[E[X|G]1A− ] < 0,
which is a contradiction and we can conclude that P (A− ) = 0, which yields that E[X|G] ≥
0, P -a.s.
(c) That E[E[X|H]|G] = E[X|H] follows by exercise 30.(d) because E[X|H] is also G mea-
surable. In order to prove that E[E[X|G]|H] = E[X|H], for all B ∈ H ⊂ G we have
that
E[E[E[X|G]|H]1B ] = E[E[X|G]1B ] = E[X1B ] = E[E[X|H]1B ],
where we have used the CEDP with respect to the appropriate σ-algebras.
(d) This property is proved first for indicator functions, then for simple functions, then for positive functions and finally for arbitrary functions. Let Y = 1_A, A ∈ G. For B ∈ G we have that

E[E[XY|G] 1_B] = E[XY 1_B] = E[X 1_{A∩B}] = E[E[X|G] 1_{A∩B}] = E[1_A E[X|G] 1_B] = E[Y E[X|G] 1_B],

where we have used the CEDP, that A ∩ B ∈ G and the definition of Y. This proves the result for indicator functions. For simple functions (linear combinations of indicator functions of G-measurable sets) the result follows from the linearity of the conditional expectation. If Y is a positive function, consider

Y_n = ∑_{i=1}^{n2^n} ((i − 1)/2^n) 1_{{(i−1)/2^n ≤ Y < i/2^n}} + n 1_{{Y ≥ n}}, n ≥ 1,

which is an increasing sequence of positive, simple and G-measurable functions such that Y_n ↗ Y, P-a.s. Then, as XY ∈ L¹ the conditional expectation exists and

E[XY|G] = E[X lim_{n→∞} Y_n|G] = lim_{n→∞} E[XY_n|G] = lim_{n→∞} Y_n E[X|G] = Y E[X|G], P-a.s.,

where we have used the conditional monotone convergence theorem and that the result holds for simple functions. One could also have used the CEDP with the usual monotone convergence theorem to prove the result. For a general G-measurable Y such that YX ∈ L¹, note that also Y⁺X ∈ L¹ and Y⁻X ∈ L¹ and then we can write

E[XY|G] = E[XY⁺|G] − E[XY⁻|G] = Y⁺E[X|G] − Y⁻E[X|G] = Y E[X|G].
31. By exercise 13, we know that the candidate random variable must be constant on the elements of the partition {A_n}_{n≥1}. This is the case for ∑_{n≥1} (E[X 1_{A_n}]/P(A_n)) 1_{A_n}. Therefore, we only need to prove that it satisfies the CEDP. This follows easily from the particular structure of the elements of G, described in exercise 13. We have that B ∈ G iff B = ∪_{i∈J} A_i where J is a countable subset of N. Then, on the one hand,

E[X 1_B] = E[X ∑_{i∈J} 1_{A_i}] = ∑_{i∈J} E[X 1_{A_i}],

and, on the other hand,

E[(∑_{n≥1} (E[X 1_{A_n}]/P(A_n)) 1_{A_n}) 1_B] = E[(∑_{n≥1} (E[X 1_{A_n}]/P(A_n)) 1_{A_n}) ∑_{i∈J} 1_{A_i}]
= ∑_{i∈J} E[(E[X 1_{A_i}]/P(A_i)) 1_{A_i}]
= ∑_{i∈J} (E[X 1_{A_i}]/P(A_i)) E[1_{A_i}]
= ∑_{i∈J} E[X 1_{A_i}].

The only delicate point in the previous reasoning is the commutation between taking expectation and the sum of a series. But this can be justified using Fubini's theorem or dominated convergence (try to write the precise reasoning).
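An added numerical illustration of the formula on a concrete partition: with G generated by a finite partition of [0, 1) and X = U² for U uniform, the ratio E[X 1_{A_k}]/P(A_k) matches the exact average of X on each cell.

```python
import random

random.seed(10)
n = 300_000
cuts = [0.0, 0.2, 0.7, 1.0]  # partition cells A_k = [cuts[k], cuts[k+1])

sums = [0.0] * 3
counts = [0] * 3
for _ in range(n):
    u = random.random()
    k = next(i for i in range(3) if cuts[i] <= u < cuts[i + 1])
    sums[k] += u * u   # accumulate X 1_{A_k} for X = U^2
    counts[k] += 1

# E[X|G] on A_k is E[X 1_{A_k}] / P(A_k); exactly (b^3 - a^3) / (3(b - a)).
for k in range(3):
    a, b = cuts[k], cuts[k + 1]
    print(sums[k] / counts[k], "~", (b**3 - a**3) / (3 * (b - a)))
```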
32. To be handed in.
33. Denote by φ(z) = (1/√(2π)) exp(−z²/2) the density of a standard normal distribution and by Φ(z) = ∫_{−∞}^z φ(y) dy its cumulative distribution function. Using the image measure theorem and then the change of variable x = µ + σz, one gets

E[max(0, exp(X) − K)] = ∫_R max(0, exp(x) − K) dP_X
= ∫_{−∞}^{+∞} max(0, exp(x) − K) (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)) dx
= ∫_{−∞}^{+∞} max(0, exp(µ + σz) − K) φ(z) dz
= ∫_{−∞}^{+∞} (exp(µ + σz) − K) 1_{{exp(µ+σz)≥K}} φ(z) dz
= ∫_{(log(K)−µ)/σ}^{+∞} exp(µ + σz) φ(z) dz − K ∫_{(log(K)−µ)/σ}^{+∞} φ(z) dz
= A_1 + A_2.

Note that Φ(z) + Φ(−z) = 1 for all z ∈ R. Hence,

A_2 = −K ∫_{(log(K)−µ)/σ}^{+∞} φ(z) dz = −K (1 − Φ((log(K) − µ)/σ)) = −K Φ((µ − log(K))/σ).

For the term A_1, first note that

exp(µ + σz) φ(z) = exp(µ + σ²/2) φ(z − σ),

and, therefore, making the change of variable y = z − σ we get

A_1 = exp(µ + σ²/2) ∫_{(log(K)−µ)/σ}^{+∞} φ(z − σ) dz = exp(µ + σ²/2) ∫_{(log(K)−µ−σ²)/σ}^{+∞} φ(y) dy
= exp(µ + σ²/2) (1 − Φ((log(K) − µ − σ²)/σ)) = exp(µ + σ²/2) Φ((µ + σ² − log(K))/σ).

Therefore, we get that

E[max(0, exp(X) − K)] = exp(µ + σ²/2) Φ((µ + σ² − log(K))/σ) − K Φ((µ − log(K))/σ).
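An added Monte Carlo check of this closed form (a lognormal call-type expectation of Black-Scholes flavor):

```python
import math
import random
from statistics import NormalDist

random.seed(11)
mu, sigma, K = 0.1, 0.5, 1.2
n = 500_000

Phi = NormalDist().cdf  # standard normal CDF

# Closed form derived above.
closed = (math.exp(mu + sigma**2 / 2) * Phi((mu + sigma**2 - math.log(K)) / sigma)
          - K * Phi((mu - math.log(K)) / sigma))

# Monte Carlo estimate of E[max(0, exp(X) - K)] with X ~ N(mu, sigma^2).
mc = sum(max(0.0, math.exp(random.gauss(mu, sigma)) - K) for _ in range(n)) / n

print(mc, "~", closed)
```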
