Topic 3 Multivariate Models I (Week 2)
Notation and Definitions
▶ d-dimensional random vector: X = (X1 , . . . , Xd )⊺
▶ Joint distribution function is
FX (x1 , . . . , xd ) = P (X1 ≤ x1 , . . . , Xd ≤ xd ) ,
FX (x) = P (X ≤ x) .
▶ Joint survival function:
F̄X (x) = P (X > x) .
▶ Conditional density of Y given X = x:
fY|X (y|x) = f (x, y) / fX (x) .
▶ The two vectors X and Y are independent if and only if the joint
distribution function factorizes such that
FX,Y (x, y) = FX (x) FY (y) for all x, y.
Moments
▶ Mean vector: E (X) = (E (X1 ) , . . . , E (Xd ))⊺
▶ Covariance matrix: cov (X) = E [(X − E (X)) (X − E (X))⊺ ]
▶ For a constant matrix B and constant vector b:
E (BX + b) = BE (X) + b, cov (BX + b) = B cov (X) B⊺ .
Cholesky Factorization
By Cholesky factorization, a symmetric positive-definite matrix Σ
can be factorized into
Σ = LL⊺
for a lower triangular matrix L with positive diagonal elements,
denoted by L = Σ1/2 .
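As a minimal sketch of this factorization in NumPy (the matrix Σ below is an illustrative example, not one from the slides):

```python
import numpy as np

# Factorize a symmetric positive-definite Sigma as Sigma = L L^T,
# with L lower triangular and positive on the diagonal.
Sigma = np.array([[4.0, 2.0],
                  [2.0, 3.0]])
L = np.linalg.cholesky(Sigma)   # NumPy returns the lower-triangular factor

# L @ L.T reconstructs Sigma (up to floating point).
```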
Characteristic Function
▶ The characteristic function is
ϕX (t) = E (eit⊺X ), t ∈ Rd .
Estimation
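The slide leaves the estimators unstated; as a sketch, the usual sample mean vector and (unbiased) sample covariance matrix from n i.i.d. d-dimensional observations can be computed as follows (the data here are simulated purely for illustration):

```python
import numpy as np

# Simulated stand-in for n i.i.d. observations, one per row (n x d).
rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 2)) + np.array([1.0, 2.0])

n = data.shape[0]
mu_hat = data.mean(axis=0)                      # sample mean vector
centered = data - mu_hat
Sigma_hat = centered.T @ centered / (n - 1)     # unbiased sample covariance
```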
Multivariate Normal Distributions
▶ Definition: A random vector X = (X1 , . . . , Xd )⊺ has a
multivariate normal or Gaussian distribution if
X =d µ + AZ,
where µ ∈ Rd , A ∈ Rd×k is a matrix of constants, and Z = (Z1 , . . . , Zk )⊺
has iid N (0, 1) components (=d denotes equality in distribution).
▶ We have
E (X) = µ,
cov (X) = AA⊺ =: Σ,
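These follow directly from the definition X =d µ + AZ (a one-line check, using E (Z) = 0 and cov (Z) = Ik):

```latex
\begin{aligned}
E(X) &= \mu + A\,E(Z) = \mu,\\
\operatorname{cov}(X) &= \operatorname{cov}(AZ)
  = A\,\operatorname{cov}(Z)\,A^\top = A I_k A^\top = AA^\top.
\end{aligned}
```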
▶ Characteristic function of a standard univariate normal Z is
ϕZ (t) = E (eitZ ) = exp (−t²/2) .
▶ For the case rank (A) = d ≤ k, the covariance matrix has full
rank d and is therefore invertible (non-singular) and positive
definite
▶ X has an absolutely continuous distribution function with joint
density
f (x) = 1/((2π)d/2 |Σ|1/2 ) exp (−(x − µ)⊺ Σ−1 (x − µ)/2) ,
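As a sketch, the density formula translates directly into NumPy (µ and Σ below are illustrative choices; at x = µ the exponent vanishes, so the density equals the normalizing constant):

```python
import numpy as np

def mvn_density(x, mu, Sigma):
    """Multivariate normal density evaluated at x."""
    d = len(mu)
    diff = x - mu
    quad = diff @ np.linalg.solve(Sigma, diff)   # (x-mu)^T Sigma^{-1} (x-mu)
    norm_const = (2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * quad) / norm_const

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
val = mvn_density(mu, mu, Sigma)   # density at the mean
```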
Contour
Points with equal density lie on ellipsoids determined by equations
(x − µ)⊺ Σ−1 (x − µ) = c for constants c > 0.
Simulation of Multivariate Normal
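A minimal sketch of the standard simulation recipe, X = µ + AZ with A the Cholesky factor of Σ (µ, Σ, and the sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
A = np.linalg.cholesky(Sigma)        # Sigma = A A^T

n = 200_000
Z = rng.standard_normal((n, 2))      # iid N(0,1) components
X = mu + Z @ A.T                     # each row is one draw of X ~ N_2(mu, Sigma)

sample_mean = X.mean(axis=0)
sample_cov = np.cov(X, rowvar=False)
```

The sample mean and covariance should match µ and Σ up to Monte Carlo error.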
Linear Combinations of Multivariate Normal
If X ∼ Nd (µ, Σ), B ∈ Rk×d and b ∈ Rk , then
BX + b ∼ Nk (Bµ + b, BΣB⊺ ) .
Special case: For a ∈ Rd , we have
a⊺ X ∼ N (a⊺ µ, a⊺ Σa)
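This special case is easy to check by simulation (a sketch; µ, Σ, and a below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.5, -1.0, 2.0])
Sigma = np.array([[1.0, 0.3, 0.0],
                  [0.3, 2.0, 0.4],
                  [0.0, 0.4, 1.5]])
a = np.array([1.0, 2.0, -1.0])

A = np.linalg.cholesky(Sigma)
X = mu + rng.standard_normal((200_000, 3)) @ A.T
y = X @ a                          # scalar linear combination a^T X per draw

theo_mean = a @ mu                 # a^T mu
theo_var = a @ Sigma @ a           # a^T Sigma a
```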
Multivariate Normal - Quadratic Forms
If X ∼ Nd (µ, Σ) with Σ positive definite, then
(X − µ)⊺ Σ−1 (X − µ) ∼ χ²d ,
the chi-squared distribution with d degrees of freedom.
Normal Mixture Distributions
Normal Variance Mixture Distribution
▶ X has a normal variance mixture distribution if
X =d µ + √W AZ,
where Z ∼ Nk (0, Ik ), W ≥ 0 is a scalar random variable independent
of Z, µ ∈ Rd , and A ∈ Rd×k .
Moments
▶ Provided E (W) < ∞, E (X) = µ and
cov (X) = E ((√W AZ)(√W AZ)⊺ )
= E (W) AE (ZZ⊺ ) A⊺
= E (W) Σ.
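These moment formulas can be checked by simulation; as a sketch, take an exponential mixing variable W with mean 2 (an illustrative choice, not from the slides), so that cov (X) = E (W) Σ = 2Σ:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
A = np.linalg.cholesky(Sigma)

n = 400_000
W = rng.exponential(scale=2.0, size=n)       # mixing variable, E(W) = 2
Z = rng.standard_normal((n, 2))
X = mu + np.sqrt(W)[:, None] * (Z @ A.T)     # X = mu + sqrt(W) A Z

sample_cov = np.cov(X, rowvar=False)         # should approximate 2 * Sigma
```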
Density
The density of X is
f (x) = ∫0∞ fX|W (x|w) dH (w)
= ∫0∞ w−d/2 /((2π)d/2 |Σ|1/2 ) exp (−(x − µ)⊺ Σ−1 (x − µ)/(2w)) dH (w),
where H is the df of W.
Characteristic Function
Example: Show that the characteristic function of X is given by
E[eit⊺X ] = eit⊺µ Ĥ (t⊺Σt/2) ,
where Ĥ(θ) = ∫0∞ e−θw dH(w). We write X ∼ Md (µ, Σ, Ĥ).
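A sketch of the argument: given W = w, X ∼ Nd (µ, wΣ), so conditioning on W and using the normal characteristic function gives

```latex
\begin{aligned}
E\left[e^{it^\top X}\right]
  &= E\left[E\left(e^{it^\top X}\mid W\right)\right]
   = E\left[e^{it^\top\mu-\frac{1}{2}W\,t^\top\Sigma t}\right]\\
  &= e^{it^\top\mu}\int_0^\infty e^{-w\,t^\top\Sigma t/2}\,dH(w)
   = e^{it^\top\mu}\,\hat{H}\!\left(\frac{t^\top\Sigma t}{2}\right).
\end{aligned}
```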
Multivariate t Distribution as a Special Case
▶ Taking W ∼ Ig (ν/2, ν/2) (inverse gamma) gives the multivariate t
distribution, X ∼ td (ν, µ, Σ); for ν > 2, cov (X) = (ν/(ν − 2)) Σ.
Normal Mean-Variance Mixtures
▶ X has a normal mean-variance mixture distribution if
X =d m(W) + √W AZ,
where:
▶ Z ∼ Nk (0, Ik ),
▶ W ≥ 0 is a non-negative, scalar valued random variable
independent of Z,
▶ m : [0, ∞) → Rd is a measurable function,
▶ A ∈ Rd×k is a matrix of constants.
Discussion
Assume m(W) = µ + Wγ. Find the mean vector and covariance
matrix of X.
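A sketch of the answer, conditioning on W (so E (X|W) = µ + Wγ and cov (X|W) = WΣ) and using the conditional variance decomposition:

```latex
\begin{aligned}
E(X) &= E\left[E(X\mid W)\right] = \mu + E(W)\,\gamma,\\
\operatorname{cov}(X) &= E\left[\operatorname{cov}(X\mid W)\right]
   + \operatorname{cov}\left[E(X\mid W)\right]
 = E(W)\,\Sigma + \operatorname{var}(W)\,\gamma\gamma^\top.
\end{aligned}
```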