Simple Algebra and the Variance–Covariance Matrix
Expectation
Definition 2. The expectation $E\vec{X}$ of a random vector $\vec{X} = [X_1, X_2, \ldots, X_p]^T$ is given by
$$
E\vec{X} = \begin{bmatrix} EX_1 \\ EX_2 \\ \vdots \\ EX_p \end{bmatrix}.
$$
This is a definition, but it is chosen to mesh well with the linear properties of the expectation, so that, for example:
$$
E\vec{X} = E\begin{bmatrix} X_1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} + E\begin{bmatrix} 0 \\ X_2 \\ \vdots \\ 0 \end{bmatrix} + \cdots + E\begin{bmatrix} 0 \\ 0 \\ \vdots \\ X_p \end{bmatrix}
= \begin{bmatrix} EX_1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ EX_2 \\ \vdots \\ 0 \end{bmatrix} + \cdots + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ EX_p \end{bmatrix}
= \begin{bmatrix} EX_1 \\ EX_2 \\ \vdots \\ EX_p \end{bmatrix}.
$$
The linearity properties of the expectation can be expressed compactly by stating that for any $k \times p$ matrix $A$ and any $1 \times j$ matrix $B$,
$$
E(A\vec{X}) = A\,E\vec{X} \qquad \text{and} \qquad E(\vec{X}B) = (E\vec{X})B.
$$
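As a small worked instance of the first identity (an illustration, not from the notes): take $p = 2$ and $A = [1 \;\; 1]$, a $1 \times 2$ matrix. Then
$$
E(A\vec{X}) = E(X_1 + X_2) = EX_1 + EX_2 = [1 \;\; 1]\begin{bmatrix} EX_1 \\ EX_2 \end{bmatrix} = A\,E\vec{X}.
$$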
Proposition 4.
$$
\operatorname{Cov}(\vec{X}) = E[\vec{X}\vec{X}^T] - E\vec{X}\,(E\vec{X})^T.
$$
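Assuming the standard definition $\operatorname{Cov}(\vec{X}) = E[(\vec{X} - E\vec{X})(\vec{X} - E\vec{X})^T]$ (not restated in this excerpt), Proposition 4 follows by expanding the product and applying linearity:
$$
\begin{aligned}
\operatorname{Cov}(\vec{X}) &= E\big[\vec{X}\vec{X}^T - \vec{X}(E\vec{X})^T - (E\vec{X})\vec{X}^T + E\vec{X}(E\vec{X})^T\big] \\
&= E[\vec{X}\vec{X}^T] - E\vec{X}(E\vec{X})^T - (E\vec{X})(E\vec{X})^T + E\vec{X}(E\vec{X})^T \\
&= E[\vec{X}\vec{X}^T] - E\vec{X}\,(E\vec{X})^T.
\end{aligned}
$$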
Proposition 5.
$$
\operatorname{Cov}(\vec{X}) =
\begin{bmatrix}
\operatorname{Var}(X_1) & \operatorname{Cov}(X_1, X_2) & \cdots & \operatorname{Cov}(X_1, X_p) \\
\operatorname{Cov}(X_2, X_1) & \operatorname{Var}(X_2) & \cdots & \operatorname{Cov}(X_2, X_p) \\
\vdots & \vdots & \ddots & \vdots \\
\operatorname{Cov}(X_p, X_1) & \operatorname{Cov}(X_p, X_2) & \cdots & \operatorname{Var}(X_p)
\end{bmatrix}.
$$
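Proposition 5 can be read off entrywise from Proposition 4: the $(i,j)$ entry of $E[\vec{X}\vec{X}^T]$ is $E[X_i X_j]$, and the $(i,j)$ entry of $E\vec{X}(E\vec{X})^T$ is $EX_i\,EX_j$, so
$$
\big(\operatorname{Cov}(\vec{X})\big)_{ij} = E[X_i X_j] - EX_i\,EX_j = \operatorname{Cov}(X_i, X_j),
$$
with the diagonal entries $\operatorname{Cov}(X_i, X_i) = \operatorname{Var}(X_i)$.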
Consider a linear combination $L(\vec{X}) = \vec{a}^T\vec{X}$, where $\vec{a}^T = [a_1, \ldots, a_p]$. Then we get:
$$
E[L(\vec{X})] = E[\vec{a}^T\vec{X}] = \vec{a}^T E\vec{X},
$$
and
$$
\begin{aligned}
\operatorname{Var}[L(\vec{X})] &= E[\vec{a}^T\vec{X}\vec{X}^T\vec{a}] - E(\vec{a}^T\vec{X})\,[E(\vec{a}^T\vec{X})]^T \\
&= \vec{a}^T E[\vec{X}\vec{X}^T]\,\vec{a} - \vec{a}^T E\vec{X}\,(E\vec{X})^T\vec{a} \\
&= \vec{a}^T\big( E[\vec{X}\vec{X}^T] - E\vec{X}\,(E\vec{X})^T \big)\vec{a} \\
&= \vec{a}^T \operatorname{Cov}(\vec{X})\,\vec{a}.
\end{aligned}
$$
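Specializing to $p = 2$ and $\vec{a} = [1, 1]^T$ recovers the familiar variance-of-a-sum formula (a worked check, not from the notes):
$$
\operatorname{Var}(X_1 + X_2) = [1 \;\; 1]\begin{bmatrix} \operatorname{Var}(X_1) & \operatorname{Cov}(X_1, X_2) \\ \operatorname{Cov}(X_2, X_1) & \operatorname{Var}(X_2) \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
= \operatorname{Var}(X_1) + \operatorname{Var}(X_2) + 2\operatorname{Cov}(X_1, X_2).
$$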
The Multivariate Normal Distribution
A $p$-dimensional random vector $\vec{X}$ has the multivariate normal distribution if it has the density function
$$
f(\vec{x}) = (2\pi)^{-p/2}\,|\Sigma|^{-1/2}\exp\!\Big( -\frac{1}{2}(\vec{x} - \vec{\mu})^T \Sigma^{-1} (\vec{x} - \vec{\mu}) \Big),
$$
where $\vec{\mu} = E\vec{X}$ is the mean vector and $\Sigma = \operatorname{Cov}(\vec{X})$ is the variance–covariance matrix.
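As a sanity check (not part of the notes), for $p = 1$, with $\Sigma = \sigma^2$ and $\vec{\mu} = \mu$, the density reduces to the univariate normal density:
$$
f(x) = (2\pi)^{-1/2}(\sigma^2)^{-1/2}\exp\!\Big( -\frac{(x - \mu)^2}{2\sigma^2} \Big) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-\mu)^2/(2\sigma^2)}.
$$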
Since $\Sigma$ is symmetric, it has an eigenvalue decomposition
$$
\Sigma = P D P^T,
$$
where $D$ is diagonal. Construct $D^{1/2}$ by taking the square root of each diagonal entry, and define
$$
\Sigma^{1/2} = P D^{1/2} P^T.
$$
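The name $\Sigma^{1/2}$ is justified by a one-line check, using the orthogonality of the eigenvector matrix ($P^T P = I$):
$$
\Sigma^{1/2}\,\Sigma^{1/2} = P D^{1/2} P^T P D^{1/2} P^T = P D^{1/2} D^{1/2} P^T = P D P^T = \Sigma.
$$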
In R, you can find the eigenvalue decomposition of Σ using:
ed <- eigen(sigma)                    # eigenvalue decomposition: sigma = P D P^T
D  <- diag(ed$values)                 # diagonal matrix of eigenvalues
P  <- ed$vectors                      # columns are the corresponding eigenvectors
Sigma_half <- P %*% sqrt(D) %*% t(P)  # Sigma^(1/2) = P D^(1/2) P^T, as defined above