Stats 116 SU
Theory of Probability
Jointly Distributed Random Variables
Prathapasinghe Dharmawansa
Department of Statistics
Stanford University
Summer 2018
Agenda
1. Joint distribution functions
2. Joint cdf of two random variables
3
• Obtaining the distribution of X from the joint cdf of X and Y:
  F_X(a) = P{X ≤ a} = P{X ≤ a, Y < ∞}
         = P{ lim_{b→∞} {X ≤ a, Y ≤ b} }
         =(1) lim_{b→∞} P{X ≤ a, Y ≤ b} =(2) lim_{b→∞} F(a, b) = F(a, ∞).
  In (1) we have used the fact that probability is a continuous set (that is, event) function. In (2), we have used the definition on the previous page.
• Similarly, F_Y(b) = P{Y ≤ b} = lim_{a→∞} F(a, b) = F(∞, b).
• FX (·) and FY (·) are referred to as the marginal distributions of X and
Y , respectively.
4
It can be shown that, whenever a1 < a2 and b1 < b2,
  P{a1 < X ≤ a2, b1 < Y ≤ b2} = F(a2, b2) − F(a1, b2) − F(a2, b1) + F(a1, b1).
5
Joint probability mass function
Given two discrete random variables X and Y , the joint probability mass
function of X and Y is defined by
p(x, y) = P {X = x, Y = y}.
The marginal pmfs are obtained by summing over the other variable:
  p_X(x) = P{X = x} = Σ_{y: p(x,y)>0} p(x, y).
Similarly,
  p_Y(y) = P{Y = y} = Σ_{x: p(x,y)>0} p(x, y).
6
Joint probability mass function: Example
7
Writing C(n, k) for the binomial coefficient, two representative entries are
  p(2, 1) = C(3, 2) C(4, 1) / C(12, 3) = 12/220,   p(3, 0) = C(3, 3) / C(12, 3) = 1/220.
The full joint pmf P{X = i, Y = j} is tabulated below:

  p(i, j)               j = 0    j = 1    j = 2    j = 3    Row sum P{X = i}
  i = 0                 10/220   40/220   30/220    4/220    84/220
  i = 1                 30/220   60/220   18/220    0       108/220
  i = 2                 15/220   12/220    0        0        27/220
  i = 3                  1/220    0        0        0         1/220
  P{Y = j} (col. sum)   56/220  112/220   48/220    4/220
8
In the previous table, the probability mass function (pmf) of X is
obtained by computing the row sums, whereas the pmf of Y is obtained
by computing the column sums. Because the individual pmfs of X and Y
thus appear in the margin of such a table, they are often referred to as the
marginal pmfs of X and Y , respectively.
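As a quick sanity check, the sketch below (my own addition, not from the slides) recomputes the marginal pmfs of the table above by summing rows and columns; the entries are stored as counts out of 220.

```python
import numpy as np

# joint pmf p(i, j) from the table, stored as counts out of 220
counts = np.array([
    [10, 40, 30, 4],   # i = 0
    [30, 60, 18, 0],   # i = 1
    [15, 12,  0, 0],   # i = 2
    [ 1,  0,  0, 0],   # i = 3
])
joint = counts / 220.0
p_X = joint.sum(axis=1)   # row sums: marginal pmf of X -> 84/220, 108/220, 27/220, 1/220
p_Y = joint.sum(axis=0)   # column sums: marginal pmf of Y -> 56/220, 112/220, 48/220, 4/220
print(p_X, p_Y)
```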
9
Joint pdf of two random variables
10
Joint pdf of two random variables
• Since
  F(a, b) = P{X ∈ (−∞, a], Y ∈ (−∞, b]} = ∫_{−∞}^{b} ∫_{−∞}^{a} f(x, y) dx dy,
it follows that
  f(a, b) = ∂²F(a, b) / (∂a ∂b)
wherever the partial derivatives are defined.
11
Joint pdf of two random variables
• Since
  P{a < X < a + da, b < Y < b + db} = ∫_b^{b+db} ∫_a^{a+da} f(x, y) dx dy ≈ f(a, b) da db,
the joint density f(a, b) measures how likely it is that the random vector (X, Y) is near (a, b).
12
Joint pdf of two random variables
P{X ∈ A} = P{X ∈ A, Y ∈ (−∞, ∞)} = ∫_A ∫_{−∞}^{∞} f(x, y) dy dx = ∫_A f_X(x) dx,
where
  f_X(x) = ∫_{−∞}^{∞} f(x, y) dy
is thus the probability density function of X. Similarly, the probability density function of Y is given by
  f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx.
13
Joint pdf of two random variables
Compute:
14
Joint pdf of two random variables
Compute:
15
Joint pdf of two random variables
Example: Consider choosing a point which is uniformly distributed within a circle of radius R with its center at the origin. Define X and Y to be the coordinates of the chosen point. The joint density function of X and Y is given by (for some value of c)

  f(x, y) = c,  if x² + y² ≤ R²,
  f(x, y) = 0,  if x² + y² > R².

[Figure: a disc of radius R centered at (0, 0), with the chosen point labeled (X, Y).]
(a) Determine c.
(b) Find the marginal density functions
of X and Y .
(c) Let D denote the distance from the
origin of the selected point. Compute
the probability that D
is less than or equal to a.
(d) Find E[D].
16
Solution:
(a) Determine c. Since the density must integrate to 1 over the disc, c = 1/(πR²).
(b) Find the marginal density functions of X and Y.
  f_X(x) = 2√(R² − x²)/(πR²) for −R ≤ x ≤ R, and f_X(x) = 0 otherwise;
  by symmetry, f_Y(y) = 2√(R² − y²)/(πR²) for −R ≤ y ≤ R.
(c) Let D denote the distance from the origin of the selected point. Compute the probability that D is less than or equal to a.
  P{D ≤ a} = (area of the disc of radius a)/(area of the disc of radius R) = a²/R²,  0 ≤ a ≤ R.
(d) Find E[D]. Differentiating (c) gives f_D(a) = 2a/R² on (0, R), so E[D] = ∫_0^R a · (2a/R²) da = 2R/3.
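A small Monte Carlo sketch of parts (c) and (d) (my own check, not from the slides); the point is generated by rejection sampling from the enclosing square, and the values of R, a, and the sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
R, a, n = 2.0, 1.2, 1_000_000
# rejection sampling: uniform points in the square [-R, R]^2, keep those inside the disc
x = rng.uniform(-R, R, n)
y = rng.uniform(-R, R, n)
inside = x**2 + y**2 <= R**2
d = np.sqrt(x[inside]**2 + y[inside]**2)   # distance D from the origin
print(np.mean(d <= a), (a / R)**2)         # part (c): both close to 0.36
print(d.mean(), 2 * R / 3)                 # part (d): both close to 1.333
```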
17
Joint distributions of n random variables
18
In particular, for any n sets of real numbers A1, A2, . . . , An,
19
Example: The multinomial distribution
Let X_i denote the number out of the n experiments that result in the i-th outcome (i = 1, . . . , r). Then
  P{X_1 = n_1, X_2 = n_2, . . . , X_r = n_r} = n!/(n_1! n_2! · · · n_r!) · p_1^{n_1} p_2^{n_2} · · · p_r^{n_r},
where Σ_{i=1}^{r} n_i = n. The joint distribution whose joint pmf is shown above is called the multinomial distribution. When r = 2, the multinomial reduces to the binomial distribution.
20
Independent random variables
21
Independent random variables
In other words, X and Y are independent if, for all A and B, the
events EA = {X ∈ A} and FB = {Y ∈ B} are independent.
Using the three axioms of probability, it can be shown that Equation (⋆) holds if and only if, for all a and b,
P {X ≤ a, Y ≤ b} = P {X ≤ a}P {Y ≤ b}.
22
Independent random variables
23
Independent random variables
When X and Y are discrete random variables, the condition of
independence in the definition (Equation (⋆)) is equivalent to, for all
x, y,
p(x, y) = pX (x)pY (y). (⋆1)
Discussion of the equivalence:
Comments:
25
Example
26
Example
Suppose that the number of people who enter a post office on a given
day is a Poisson random variable with parameter λ. Show that if each
person who enters the post office is a male with probability p and a female
with probability 1 − p, then the number of males and females entering the post
office are independent Poisson random variables with respective parameters
λp and λ(1 − p).
27
Steps in the solution: Let X and Y denote, respectively, the number
of males and females that enter the post office. We shall show the
independence of X and Y by establishing Equation (⋆1).
• Consider P {X = i, Y = j}.
P{X = i, Y = j} = P{X = i, Y = j | X + Y = i + j} P{X + Y = i + j}
                + P{X = i, Y = j | X + Y ≠ i + j} P{X + Y ≠ i + j},
where the second term is 0 because P{X = i, Y = j | X + Y ≠ i + j} = 0.
28
Example
29
Solution: Let X and Y denote, respectively, the time past 12 that
Person 1 and Person 2 arrive. Clearly, X and Y are independent random
variables, each uniformly distributed over (0, 60).
30
Independent random variables
Proposition:
The continuous (discrete) random variables X and Y are independent if and only if their joint probability density (mass) function can be expressed as
  f_{X,Y}(x, y) = h(x) g(y),  −∞ < x < ∞, −∞ < y < ∞.
31
Proof: Consider the continuous case.
• (⇒) Independence implies that the joint density is the product of the marginal densities of X and Y, so the preceding factorization holds when the random variables are independent.
• (⇐) Now suppose that f_{X,Y}(x, y) = h(x) g(y). Then
  1 = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{X,Y}(x, y) dx dy = ∫_{−∞}^{∞} h(x) dx · ∫_{−∞}^{∞} g(y) dy = C_1 C_2,
where C_1 = ∫ h(x) dx and C_2 = ∫ g(y) dy. In addition, since by definition
  f_X(x) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dy = C_2 h(x)  and  f_Y(y) = ∫_{−∞}^{∞} f_{X,Y}(x, y) dx = C_1 g(y),
we have, using C_1 C_2 = 1,
  f_X(x) f_Y(y) = C_1 C_2 h(x) g(y) = h(x) g(y) = f_{X,Y}(x, y).
32
Independent random variables
P{X_1 ∈ A_1, X_2 ∈ A_2, . . . , X_n ∈ A_n} = ∏_{i=1}^{n} P{X_i ∈ A_i}

P{X_1 ≤ a_1, X_2 ≤ a_2, . . . , X_n ≤ a_n} = ∏_{i=1}^{n} P{X_i ≤ a_i},
33
Independent random variables
Example:
34
Independent random variables
Thus,
  P{X ≥ YZ} = ∫_0^1 [ ∫_0^1 ( ∫_{yz}^1 dx ) dz ] dy = ∫_0^1 ∫_0^1 (1 − yz) dz dy = ∫_0^1 (1 − y/2) dy = 3/4.
35
Remark on the independence
36
Remark on the independence
From the above, one can see that the independence of X1, . . . , Xn can be
established sequentially, i.e., by showing that
• X2 is independent of X1
• X3 is independent of X1, X2
• X4 is independent of X1, X2, X3
• ...
• Xn is independent of X1, . . . , Xn−1.
37
Sums of independent random variables
38
Sums of independent random variables
39
Sums of independent random variables
The probability density function f_{X+Y} is called the convolution of the pdfs f_X and f_Y (the pdfs of X and Y, respectively).¹
¹ Note that this part is different from the textbook.
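The convolution formula f_{X+Y}(a) = ∫ f_X(a − y) f_Y(y) dy can be checked numerically; the sketch below (my own, not from the slides) convolves two Uniform(0, 1) densities and compares the result with the triangular density on (0, 2).

```python
import numpy as np

def f_unif(t):
    # density of Uniform(0, 1)
    return ((t >= 0) & (t <= 1)).astype(float)

y = np.linspace(-0.5, 1.5, 4001)
dy = y[1] - y[0]
for a in (0.5, 1.0, 1.5):
    conv = np.trapz(f_unif(a - y) * f_unif(y), dx=dy)   # numerical convolution at a
    exact = a if a <= 1 else 2 - a                      # triangular density on (0, 2)
    print(a, round(conv, 3), exact)
```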
40
Sum of i.i.d. uniform random variables
41
Sum of i.i.d. uniform random variables
42
Sum of i.i.d. uniform random variables
Let X1, X2, . . . , Xn be i.i.d. uniform (0, 1) random variables, and let
43
Sum of i.i.d. uniform random variables
Let X1, X2, . . . , Xn be i.i.d. uniform (0, 1) random variables, and let
F_n(x) = P{X_1 + . . . + X_n ≤ x}. Show that F_n(x) = x^n/n! for 0 ≤ x ≤ 1.
  F_n(x) = ∫_0^1 F_{n−1}(x − z) f_{X_n}(z) dz = ∫_0^x (x − z)^{n−1}/(n − 1)! dz = x^n/n!,
where the second equality uses the induction hypothesis F_{n−1}(y) = y^{n−1}/(n − 1)! together with F_{n−1}(x − z) = 0 for z > x.
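A simulation sketch of this identity (my own check; the choices of n, x, and the number of trials are arbitrary).

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n, x, trials = 3, 0.8, 1_000_000
sums = rng.uniform(0, 1, size=(trials, n)).sum(axis=1)
print(np.mean(sums <= x), x**n / math.factorial(n))   # both near 0.0853
```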
44
Sum of i.i.d. uniform random variables
45
Sum of i.i.d. uniform random variables
Let N = min{n : X_1 + · · · + X_n > 1}. Since P{N > n} = P{X_1 + · · · + X_n ≤ 1} = F_n(1) = 1/n!, we have
  P{N = n} = P{N > n − 1} − P{N > n} = 1/(n − 1)! − 1/n! = (n − 1)/n!,  n ≥ 1.
Hence
  E[N] = Σ_{n=1}^{∞} n P{N = n} = Σ_{n=1}^{∞} n(n − 1)/n! = Σ_{n=0}^{∞} 1/n! = e.
That is, the average number of i.i.d. uniform (0, 1) random variables that must be summed for the sum to exceed 1 is equal to e.
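A short simulation of N (my own sketch, not from the slides); the sample size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 100_000
counts = np.empty(trials, dtype=int)
for t in range(trials):
    total, n = 0.0, 0
    while total <= 1.0:          # keep adding uniforms until the sum exceeds 1
        total += rng.uniform()
        n += 1
    counts[t] = n
print(counts.mean(), np.e)       # both close to 2.718
```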
46
Gamma random variables
A gamma random variable with parameters (t, λ) has density
  f(y) = λ e^{−λy} (λy)^{t−1} / Γ(t),  0 < y < ∞.
47
Sum of independent gamma random variables
Proposition
If X and Y are independent gamma random variables with
respective parameters (s, λ) and (t, λ), then X + Y is a gamma
random variable with parameters (s + t, λ).
48
Proof: From previous discussions,
  f_{X+Y}(a) = ∫_{−∞}^{∞} f_X(a − y) f_Y(y) dy
             = ∫_0^a [ λ e^{−λ(a−y)} (λ(a − y))^{s−1} / Γ(s) ] · [ λ e^{−λy} (λy)^{t−1} / Γ(t) ] dy
             = K e^{−λa} ∫_0^a (a − y)^{s−1} y^{t−1} dy
             = K a^{s+t−1} e^{−λa} ∫_0^1 (1 − x)^{s−1} x^{t−1} dx   (substituting x = y/a)
             = C a^{s+t−1} e^{−λa},
where K and C are constants that do not depend on a. Since the pdf must integrate to 1, the value of C is determined, and we have
  f_{X+Y}(a) = λ e^{−λa} (λa)^{s+t−1} / Γ(s + t).
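A quick moment check of the proposition (my own sketch; parameter values are arbitrary). Note that numpy parameterizes the gamma by shape and scale = 1/λ.

```python
import numpy as np

rng = np.random.default_rng(3)
s, t, lam, n = 2.0, 3.5, 1.7, 1_000_000
x = rng.gamma(shape=s, scale=1 / lam, size=n)   # Gamma(s, λ)
y = rng.gamma(shape=t, scale=1 / lam, size=n)   # Gamma(t, λ)
z = x + y
print(z.mean(), (s + t) / lam)     # mean of Gamma(s+t, λ)
print(z.var(), (s + t) / lam**2)   # variance of Gamma(s+t, λ)
```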
49
Sum of independent gamma random variables
50
Sum of independent gamma random variables
Example:
51
Gamma and chi-squared random variables
f_{Z_1²}(y) = (1/(2√y)) [ f_{Z_1}(√y) + f_{Z_1}(−√y) ]
            = (1/(2√y)) · (2/√(2π)) e^{−y/2}
            = (1/2) e^{−y/2} (y/2)^{1/2 − 1} / √π,
52
which is the gamma distribution with parameters (1/2, 1/2).
• A by-product of the above analysis is that Γ(1/2) = √π.
• But since each Z_i² is gamma (1/2, 1/2), using the previous proposition it follows that χ²_n is just the gamma distribution with parameters (n/2, 1/2), and its pdf is given by
  f_Y(y) = (1/2) e^{−y/2} (y/2)^{n/2 − 1} / Γ(n/2),  0 < y < ∞.
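A simulation sketch (my own check) that χ²_n, the sum of n squared standard normals, matches Gamma(n/2, 1/2), whose mean is n and variance is 2n; n and the number of trials are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials = 5, 1_000_000
chi2 = (rng.normal(size=(trials, n))**2).sum(axis=1)    # chi-squared with n degrees of freedom
gamma = rng.gamma(shape=n / 2, scale=2.0, size=trials)  # Gamma(n/2, 1/2): scale = 1/λ = 2
print(chi2.mean(), gamma.mean(), n)
print(chi2.var(), gamma.var(), 2 * n)
```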
53
Normal random variables
Proposition
If X_i, i = 1, . . . , n, are independent random variables that are normally distributed with respective parameters (µ_i, σ_i²), i = 1, . . . , n, then Σ_{i=1}^{n} X_i is normally distributed with parameters (Σ_{i=1}^{n} µ_i, Σ_{i=1}^{n} σ_i²).
54
Proof: Steps:
(1) To begin, let X and Y be independent normal rvs with X having mean
0 and variance σ 2 and Y having mean 0 and variance 1. It can be
shown that X + Y is normal with mean 0 and variance 1 + σ 2.
(2) Further, let X_1 and X_2 be independent normal rvs with X_i having mean µ_i and variance σ_i², i = 1, 2. Then write
  X_1 + X_2 = σ_2 [ (X_1 − µ_1)/σ_2 + (X_2 − µ_2)/σ_2 ] + µ_1 + µ_2.
Here (X_1 − µ_1)/σ_2 is N(0, σ_1²/σ_2²) and (X_2 − µ_2)/σ_2 is N(0, 1), so by Step (1) their sum is N(0, 1 + σ_1²/σ_2²). Scaling by σ_2 and shifting by µ_1 + µ_2 then shows that X_1 + X_2 is N(µ_1 + µ_2, σ_1² + σ_2²).
(3) Finally, using induction, we can obtain the general case as shown in the
proposition.
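A quick simulation of the proposition (my own sketch; the µ_i and σ_i values are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(5)
mus = np.array([1.0, -2.0, 0.5])
sigmas = np.array([0.5, 1.5, 2.0])
sums = rng.normal(loc=mus, scale=sigmas, size=(1_000_000, 3)).sum(axis=1)
print(sums.mean(), mus.sum())           # ≈ -0.5 = Σ µ_i
print(sums.var(), (sigmas**2).sum())    # ≈ 6.5 = Σ σ_i²
```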
55
Normal random variables: Example
56
Solution: Let XA and XB respectively denote the number of games the
team wins against class A and against class B teams. XA and XB are
independent binomial random variables and
57
Lognormal random variables
58
Lognormal random variables: Example
Starting at some fixed time, let S(n) denote the price of a certain security
at the end of n additional weeks, n ≥ 1. A popular model for the evolution
of these prices assumes that the price ratios S(n)/S(n − 1), n ≥ 1,
are independent and identically distributed lognormal random variables.
Assuming this model, with parameters µ = 0.0165, σ = 0.0730, what is the
probability that
(a) the price of the security increases over each of the next two weeks?
(b) the price at the end of two weeks is higher than it is today?
59
Solution: Let Z be a standard normal random variable.
(a) Note that x > 1 if and only if log(x) > log(1) = 0. As a result, we have
  P{S(1)/S(0) > 1} = P{ log(S(1)/S(0)) > 0 }
                   = P{ (log(S(1)/S(0)) − µ)/σ > (0 − µ)/σ } = P{Z > −µ/σ},
since the standardized quantity (log(S(1)/S(0)) − µ)/σ is a standard normal random variable Z.
60
(b) Similarly, note that
  P{S(2)/S(0) > 1} = P{ log( (S(1)/S(0)) · (S(2)/S(1)) ) > 0 }
                   = P{ log(S(1)/S(0)) + log(S(2)/S(1)) > 0 },
where Z_1 = log(S(1)/S(0)) and Z_2 = log(S(2)/S(1)) are independent normal random variables with parameters µ and σ², so that Z_1 + Z_2 is normal with mean 2µ and variance 2σ².
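The numerical answers can be evaluated with a standard normal cdf; the sketch below is my own evaluation of parts (a) and (b) with the stated parameters (the resulting values are not quoted from the slides).

```python
from scipy.stats import norm

mu, sigma = 0.0165, 0.0730
p_week = norm.cdf(mu / sigma)                 # P{log(S(1)/S(0)) > 0}
p_a = p_week**2                               # (a): two independent weekly increases
p_b = norm.cdf(2 * mu / (sigma * 2**0.5))     # (b): Z1 + Z2 ~ N(2µ, 2σ²)
print(p_a)   # ≈ 0.35
print(p_b)   # ≈ 0.63
```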
61
Sums of independent Poisson random variables
62
Sums of independent Poisson random variables
P{X + Y = n} = Σ_{k=0}^{n} P{X = k, Y = n − k}
             = Σ_{k=0}^{n} P{X = k} P{Y = n − k}
             = Σ_{k=0}^{n} e^{−λ_1} λ_1^k/k! · e^{−λ_2} λ_2^{n−k}/(n − k)!
             = (e^{−(λ_1+λ_2)}/n!) Σ_{k=0}^{n} n!/(k!(n − k)!) λ_1^k λ_2^{n−k}
             = e^{−(λ_1+λ_2)} (λ_1 + λ_2)^n / n!,
so X + Y is Poisson with parameter λ_1 + λ_2.
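A simulation sketch of the result (my own check; the λ values are arbitrary).

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)
lam1, lam2, trials = 1.3, 2.4, 1_000_000
z = rng.poisson(lam1, trials) + rng.poisson(lam2, trials)
for k in range(5):
    print(k, np.mean(z == k), poisson.pmf(k, lam1 + lam2))   # empirical vs Poisson(λ1+λ2) pmf
```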
63
Sums of independent binomial random variables
64
Sums of independent binomial random variables
Solution: Note that C(n + m, i) = Σ_{k=0}^{i} C(n, k) C(m, i − k). Then
  P{X + Y = i} = Σ_{k=0}^{i} P{X = k, Y = i − k}
              = Σ_{k=0}^{i} P{X = k} P{Y = i − k}
              = Σ_{k=0}^{i} C(n, k) p^k (1 − p)^{n−k} · C(m, i − k) p^{i−k} (1 − p)^{m−(i−k)}
              = Σ_{k=0}^{i} C(n, k) C(m, i − k) p^i (1 − p)^{n+m−i}
              = C(n + m, i) p^i (1 − p)^{n+m−i}  ⇒  X + Y is binomial(n + m, p).
65
Geometric random variables
Proposition
Let X_1, . . . , X_n be independent geometric random variables, with X_i having parameter p_i for i = 1, . . . , n. Define q_i = 1 − p_i, i = 1, . . . , n, and let S_n = X_1 + · · · + X_n. If all the p_i are distinct, then, for k ≥ n,
  P{S_n = k} = Σ_{i=1}^{n} p_i q_i^{k−1} ∏_{j≠i} p_j/(p_j − p_i).
66
Geometric random variables
P{S_2 = k} = P{X_1 + X_2 = k} = Σ_{j=0}^{k} P{X_1 = j} P{X_2 = k − j}
           = Σ_{j=1}^{k−1} P{X_1 = j} P{X_2 = k − j} = Σ_{j=1}^{k−1} q_1^{j−1} p_1 q_2^{k−j−1} p_2
           = p_1 p_2 q_2^{k−2} Σ_{j=1}^{k−1} (q_1/q_2)^{j−1}
           = p_1 p_2 q_2^{k−2} · (1 − (q_1/q_2)^{k−1}) / (1 − q_1/q_2)
           = p_2 q_2^{k−1} · p_1/(p_1 − p_2) + p_1 q_1^{k−1} · p_2/(p_2 − p_1)
67
Geometric random variables
P{S_3 = k} = P{X_1 + X_2 + X_3 = k} = Σ_{j=0}^{k} P{S_2 = j} P{X_3 = k − j}
           = Σ_{j=1}^{k−1} P{S_2 = j} P{X_3 = k − j}
           = p_1 q_1^{k−1} · p_2 p_3 / ((p_2 − p_1)(p_3 − p_1))
             + p_2 q_2^{k−1} · p_1 p_3 / ((p_1 − p_2)(p_3 − p_2))
             + p_3 q_3^{k−1} · p_1 p_2 / ((p_1 − p_3)(p_2 − p_3))
68
Geometric random variables
69
• If X is geometric with parameter p, then the conditional distribution of X given X > 1 is the same as the distribution of 1 (the first failed trial) plus a geometric with parameter p (the number of additional trials after the first until a success occurs). Therefore, conditioning on whether X_n = 1 or X_n > 1 gives
  P{S_n = k} = p_n P{S_{n−1} = k − 1} + q_n P{S_n = k − 1}.
70
• We obtain P{S_n = k} by applying the formula inductively to the two terms of the recursion above:
  P{S_n = k} = p_n Σ_{i=1}^{n−1} p_i q_i^{k−2} ∏_{j≠i, j≤n−1} p_j/(p_j − p_i)
             + q_n Σ_{i=1}^{n} p_i q_i^{k−2} ∏_{j≠i, j≤n} p_j/(p_j − p_i),
which simplifies to
  P{S_n = k} = Σ_{i=1}^{n} p_i q_i^{k−1} ∏_{j≠i} p_j/(p_j − p_i).
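The closed form can be checked against a brute-force convolution of the geometric pmfs P{X_i = j} = q_i^{j−1} p_i, j ≥ 1. The sketch below (my own, with arbitrary distinct p_i and n = 3) does this for small k.

```python
import numpy as np

p = np.array([0.2, 0.35, 0.5])   # distinct parameters
q = 1 - p
K = 12                           # truncate each pmf at support {1, ..., K}

def closed_form(k):
    total = 0.0
    for i in range(len(p)):
        prod = np.prod([p[j] / (p[j] - p[i]) for j in range(len(p)) if j != i])
        total += p[i] * q[i]**(k - 1) * prod
    return total

pmfs = [q[i]**(np.arange(1, K + 1) - 1) * p[i] for i in range(len(p))]
conv = pmfs[0]
for f in pmfs[1:]:
    conv = np.convolve(conv, f)          # pmf of the sum; index m corresponds to k = m + 3
for k in range(3, 10):
    print(k, closed_form(k), conv[k - 3])
```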
71
Conditional distributions: discrete case
72
Conditional distributions: Discrete case
• Recall that, for any two events E and F, the conditional probability of E given F is defined, provided that P(F) > 0, by
  P(E|F) = P(EF)/P(F).
• Accordingly, for discrete X and Y, the conditional pmf of X given that Y = y is defined, for all y such that p_Y(y) > 0, by
  p_{X|Y}(x|y) = P{X = x | Y = y} = P{X = x, Y = y}/P{Y = y} = p(x, y)/p_Y(y).
73
• Similarly, the conditional probability distribution function (cdf) of X given that Y = y is defined, for all y such that p_Y(y) > 0, by
  F_{X|Y}(x|y) = P{X ≤ x | Y = y} = Σ_{a ≤ x} p_{X|Y}(a|y).
• If X and Y are independent, the conditional pmf reduces to the unconditional one:
  p_{X|Y}(x|y) = P{X = x | Y = y} = P{X = x, Y = y}/P{Y = y} = P{X = x} P{Y = y}/P{Y = y} = P{X = x}.
74
Conditional distributions – Discrete case: Example 1
Suppose that p(x, y), the joint probability mass function of X and Y , is
given by
75
Conditional distributions – Discrete case: Example 1
Solution: First, p_Y(1) = p(0, 1) + p(1, 1) = 0.2 + 0.3 = 0.5. Thus,
  p_{X|Y}(0|1) = p(0, 1)/p_Y(1) = 0.2/0.5 = 0.4,
and
  p_{X|Y}(1|1) = p(1, 1)/p_Y(1) = 0.3/0.5 = 0.6.
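A tiny sketch of the arithmetic; the two joint entries and the column sum p_Y(1) are the ones quoted above.

```python
joint_y1 = {0: 0.2, 1: 0.3}                 # p(x, 1) for x = 0, 1
p_Y1 = sum(joint_y1.values())               # p_Y(1) = 0.5
cond = {x: p / p_Y1 for x, p in joint_y1.items()}
print(cond)                                 # {0: 0.4, 1: 0.6}
```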
76
Conditional distributions – Discrete case: Example 2
77
Conditional distributions – Discrete case: Example 2
P{X = k | X + Y = n} = P{X = k, Y = n − k | X + Y = n}
                     = P{X = k, Y = n − k, X + Y = n} / P{X + Y = n}
                     = P{X = k, Y = n − k} / P{X + Y = n}
                     = P{X = k} · P{Y = n − k} / P{X + Y = n}
78
Using the Poisson pmfs and the fact (shown earlier) that X + Y is Poisson with parameter λ_1 + λ_2, this equals
  [ e^{−λ_1} λ_1^k/k! · e^{−λ_2} λ_2^{n−k}/(n − k)! ] / [ e^{−(λ_1+λ_2)} (λ_1 + λ_2)^n / n! ]
  = C(n, k) (λ_1/(λ_1 + λ_2))^k (λ_2/(λ_1 + λ_2))^{n−k},
so the conditional distribution of X given X + Y = n is binomial with parameters n and λ_1/(λ_1 + λ_2).
79
Conditional distributions – Discrete case: Example 3
Consider the multinomial distribution with joint pmf
  P{X_i = n_i, i = 1, . . . , k} = n!/(n_1! · · · n_k!) · p_1^{n_1} · · · p_k^{n_k},  n_i ≥ 0, Σ_{i=1}^{k} n_i = n.
80
(Cont’d)
P{outcome i | outcome is not any of r + 1, . . . , k} = p_i / F_r,  i = 1, . . . , r,
where F_r = Σ_{i=1}^{r} p_i is the probability that a trial results in one of the outcomes 1, . . . , r.
81
Conditional distributions – Discrete case: Example 3
Proof: Let n_1, . . . , n_r be such that Σ_{i=1}^{r} n_i = n − m. The conditional probability here is given by
82
Note that Σ_{i=1}^{r} n_i = n − m. The above conditional probability can be further simplified as
83
Conditional distributions – Discrete case: Example 4
84
Conditional distributions – Discrete case: Example 4
P{A_k | T_k} = P{A_k, T_k} / P{T_k}
             = P{A_k} / P{T_k}
             = p^k (1 − p)^{n−k} / [ C(n, k) p^k (1 − p)^{n−k} ]
             = 1 / C(n, k)
85
Conditional distributions: continuous case
86
Conditional distributions: continuous case
• If X and Y have a joint probability density function f(x, y), then for all values of y such that f_Y(y) > 0, the conditional probability density function of X given that Y = y is defined by
  f_{X|Y}(x|y) = f(x, y)/f_Y(y).
Therefore, for small values of dx and dy, fX|Y (x|y)dx represents the
conditional probability that X is between x and x + dx given that Y is
between y and y + dy.
87
Conditional distributions: continuous case
88
Conditional distributions – continuous case: Example 1
89
Conditional distributions – continuous case: Example 1
Solution: Steps:
1. Compute the conditional density:
  f_{X|Y}(x|y) = f(x, y)/f_Y(y) = f(x, y) / ∫_0^∞ f(x, y) dx = (1/y) e^{−x/y}.
2. Hence,
  P{X > 1 | Y = y} = ∫_1^∞ (1/y) e^{−x/y} dx = e^{−1/y}.
90
Conditional distributions – continuous case: Example 2
The bivariate normal distribution
91
Conditional distributions – continuous case: Example 2
The bivariate normal distribution
We now determine the conditional density of X given that Y = y.
92
f_{X|Y}(x|y) = f(x, y)/f_Y(y) = C_1 f(x, y)
             = C_2 exp{ −[ ((x − µ_x)/σ_x)² − 2ρ x(y − µ_y)/(σ_x σ_y) ] / (2(1 − ρ²)) }
             = C_3 exp{ −[ x² − 2x( µ_x + ρ(σ_x/σ_y)(y − µ_y) ) ] / (2σ_x²(1 − ρ²)) }
             = C_4 exp{ −[ x − ( µ_x + ρ(σ_x/σ_y)(y − µ_y) ) ]² / (2σ_x²(1 − ρ²)) },
where C_1, . . . , C_4 do not depend on x. Hence the conditional distribution of X given that Y = y is normal with mean µ_x + ρ(σ_x/σ_y)(y − µ_y) and variance σ_x²(1 − ρ²).
93
Conditional distributions – continuous case: Example 2
The bivariate normal distribution
• From the above, we can find the marginal pdfs of X and Y , respectively.
In fact, one can show that X is normal with mean µx and variance σx2 .
Similarly, Y is normal with mean µy and variance σy2.
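A simulation sketch (my own, with arbitrary parameters) of the conditional-mean formula read off above: given Y ≈ y, the average of X should be close to µ_x + ρ(σ_x/σ_y)(y − µ_y).

```python
import numpy as np

rng = np.random.default_rng(7)
mu_x, mu_y, sig_x, sig_y, rho = 1.0, -2.0, 2.0, 0.5, 0.6
cov = [[sig_x**2, rho * sig_x * sig_y],
       [rho * sig_x * sig_y, sig_y**2]]
xy = rng.multivariate_normal([mu_x, mu_y], cov, size=2_000_000)
y0 = -1.5
band = np.abs(xy[:, 1] - y0) < 0.01          # condition on Y being near y0
print(xy[band, 0].mean(), mu_x + rho * (sig_x / sig_y) * (y0 - mu_y))   # both ≈ 2.2
```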
94
Conditional distributions – continuous case: Discussions
95
– For example, let X be a continuous rv with pdf f and let N be a
discrete rv. Consider the conditional probability density function of X
given that N = n.
96
Conditional distributions – continuous case: Example 3
97
Conditional distributions – continuous case: Example 3
98
Conditional distributions – continuous case: Example 3
Based on the example, if the original or prior (to the collection of data)
distribution of a trial success probability is uniformly distributed over (0, 1)
[or, equivalently, is beta with parameters (1, 1)], then the posterior (or
conditional) distribution given a total of n successes in n + m trials is beta
with parameters (1 + n, 1 + m).
99
Order statistics
100
Order statistics
The ordered values X(1) ≤ X(2) ≤ . . . ≤ X(n) are known as the order
statistics corresponding to the random variables X1, X2, . . . , Xn, i.e.,
X(1), . . . , X(n) are the ordered values of X1, X2, . . . , Xn.
101
Joint density function of the order statistics
The order statistics X_(1), . . . , X_(n) will take on the values x_1 ≤ x_2 ≤ . . . ≤ x_n if and only if, for some permutation (i_1, i_2, . . . , i_n) of (1, 2, . . . , n), X_1 = x_{i_1}, X_2 = x_{i_2}, . . . , X_n = x_{i_n}. Since, for any permutation (i_1, . . . , i_n) of (1, 2, . . . , n),
  P{ x_{i_1} − ϵ/2 < X_1 ≤ x_{i_1} + ϵ/2, . . . , x_{i_n} − ϵ/2 < X_n ≤ x_{i_n} + ϵ/2 }
  ≈ ϵ^n f_{X_1,...,X_n}(x_{i_1}, . . . , x_{i_n}) = ϵ^n f(x_{i_1}) · · · f(x_{i_n}) = ϵ^n f(x_1) · · · f(x_n),
summing over all n! permutations gives the joint density of the order statistics:
  f_{X_(1),...,X_(n)}(x_1, . . . , x_n) = n! f(x_1) · · · f(x_n),  x_1 ≤ x_2 ≤ . . . ≤ x_n.
102
Order statistics: Example 1
103
Order statistics: Example 1 – Solution
= (1 − 2d)³
104
Joint probability distribution of functions of random
variables
105
Joint pdf of functions of random variables
Let X_1 and X_2 be jointly continuous rvs with joint pdf f_{X_1,X_2}. Let Y_1 = g_1(X_1, X_2) and Y_2 = g_2(X_1, X_2) for some functions g_1 and g_2, where g_1 and g_2 satisfy the following:
106
Joint pdf of functions of random variables
Under the two conditions stated on the previous page, it can be shown that the random variables Y_1 and Y_2 are jointly continuous with joint density function given by
  f_{Y_1,Y_2}(y_1, y_2) = f_{X_1,X_2}(x_1, x_2) |J(x_1, x_2)|^{−1},
where (x_1, x_2) is the unique solution of y_1 = g_1(x_1, x_2), y_2 = g_2(x_1, x_2).
• Note that J(x_1, x_2) is a determinant, which can be positive or negative, while |J(x_1, x_2)| denotes its absolute value. We use the notation |A| for the determinant of a square matrix A and |a| for the absolute value of a real number a.
• The proof of the result will be left as an exercise.
107
Joint pdf of functions of rvs: Example 1
108
Joint pdf of functions of rvs: Example 1
109
Joint pdf of functions of rvs: Example 1 – Applications
110
Joint pdf of functions of rvs: Example 1 – Applications
111
Joint pdf of functions of rvs: Example 2
Let (X, Y ) denote a random point in the plane, and assume that the
rectangular coordinates X and Y are independent standard normal random
variables.
[Figure: the random point (X, Y), with R its distance from the origin and Θ the angle it makes with the x-axis.]
112
Joint pdf of functions of rvs: Example 2
Solution: Suppose first that X, Y are both positive. For x > 0, y > 0, let
  r = g_1(x, y) = √(x² + y²),  θ = g_2(x, y) = tan^{−1}(y/x).
Then
  ∂g_1/∂x = x/√(x² + y²),   ∂g_1/∂y = y/√(x² + y²),
  ∂g_2/∂x = −y/(x² + y²),   ∂g_2/∂y = x/(x² + y²),
so
  J(x, y) = ∂g_1/∂x · ∂g_2/∂y − ∂g_1/∂y · ∂g_2/∂x = 1/√(x² + y²) = 1/r.
113
Since the conditional joint pdf given that X, Y are both positive is
  f(x, y | X > 0, Y > 0) = f(x, y)/P(X > 0, Y > 0) = (2/π) e^{−(x²+y²)/2},  x > 0, y > 0,
we have the conditional joint pdf of R, Θ given that X, Y are both positive given by
  f(r, θ | X > 0, Y > 0) = (2r/π) e^{−r²/2},  r > 0, 0 < θ < π/2.
Similarly, one can find the following:
  f(r, θ | X < 0, Y > 0) = (2r/π) e^{−r²/2},  r > 0, π/2 < θ < π,
  f(r, θ | X < 0, Y < 0) = (2r/π) e^{−r²/2},  r > 0, π < θ < 3π/2,
  f(r, θ | X > 0, Y < 0) = (2r/π) e^{−r²/2},  r > 0, 3π/2 < θ < 2π.
114
Thus, the joint pdf of R, Θ is given by
  f(r, θ) = (r/(2π)) e^{−r²/2},  r > 0, 0 < θ < 2π.
From the above, we can find the marginal pdf for R, which is a Rayleigh distribution:
  f_R(r) = r e^{−r²/2},  r > 0.
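A simulation sketch (my own check): for independent standard normals X and Y, R = √(X² + Y²) should have the Rayleigh cdf 1 − e^{−r²/2}.

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)
r = np.sqrt(x**2 + y**2)
for r0 in (0.5, 1.0, 2.0):
    print(r0, np.mean(r <= r0), 1 - np.exp(-r0**2 / 2))   # empirical vs Rayleigh cdf
```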
115
Joint pdf of functions of n rvs
Let the joint pdf of the n random variables X1, X2, . . . , Xn be given.
Consider the joint pdf of Y_1, Y_2, . . . , Y_n, where Y_i = g_i(X_1, . . . , X_n) for some functions g_1, . . . , g_n. Assume that the Jacobian determinant of the transformation, J(x_1, . . . , x_n), is nonzero at all points (x_1, . . . , x_n).
116
• Furthermore, we suppose that the equations y_1 = g_1(x_1, . . . , x_n), y_2 = g_2(x_1, . . . , x_n), . . . , y_n = g_n(x_1, . . . , x_n) have a unique solution, denoted x_i = h_i(y_1, . . . , y_n), i = 1, . . . , n.
Under these assumptions, the joint pdf of the rvs Y_i is given by
  f_{Y_1,...,Y_n}(y_1, . . . , y_n) = f_{X_1,...,X_n}(x_1, . . . , x_n) |J(x_1, . . . , x_n)|^{−1},
evaluated at x_i = h_i(y_1, . . . , y_n).
117
Joint pdf of functions of n rvs: Example
118
Solution:
The Jacobian of the transformation is the determinant of a lower-triangular matrix with ones on and below the diagonal,

  J = det [ 1 0 0 ... 0
            1 1 0 ... 0
            1 1 1 ... 0
            . . . ... .
            1 1 1 ... 1 ] = 1.

Since
  f_{X_1,...,X_n}(x_1, . . . , x_n) = λ^n e^{−λ(Σ_{i=1}^{n} x_i)},  x_i > 0 for all i,
119
we obtain
  f_{Y_1,...,Y_n}(y_1, . . . , y_n) = λ^n e^{−λ(y_1 + Σ_{i=2}^{n} (y_i − y_{i−1}))} = λ^n e^{−λ y_n},
where 0 < y_1 < y_2 < . . . < y_{n−1} < y_n.
(b) To obtain the marginal pdf of Y_n, we can do the following:
  f_{Y_n}(y_n) = λ^n e^{−λ y_n} ∫_0^{y_n} [ · · · ∫_0^{y_4} ( ∫_0^{y_3} ( ∫_0^{y_2} dy_1 ) dy_2 ) dy_3 · · · ] dy_{n−1}
              = λ^n y_n^{n−1} e^{−λ y_n} / (n − 1)!,  y_n > 0.
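A simulation sketch of part (b) (my own check, with arbitrary n and λ), assuming, as the Jacobian and the exponent above suggest, that Y_n is the sum of n i.i.d. Exponential(λ) variables: it should match the Gamma(n, λ) density above, whose mean is n/λ and variance n/λ².

```python
import numpy as np

rng = np.random.default_rng(9)
n, lam, trials = 4, 2.0, 1_000_000
yn = rng.exponential(scale=1 / lam, size=(trials, n)).sum(axis=1)
print(yn.mean(), n / lam)      # mean of Gamma(n, λ)
print(yn.var(), n / lam**2)    # variance of Gamma(n, λ)
```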
120
Exchangeable random variables
121
Exchangeable random variables
The random variables X1, X2, . . . , Xn are said to be exchangeable if, for
every permutation i1, . . . , in of the integers 1, . . . , n,
P {Xi1 ≤ x1, Xi2 ≤ x2, . . . , Xin ≤ xn} = P {X1 ≤ x1, X2 ≤ x2, . . . , Xn ≤ xn}
That is, the n random variables are exchangeable if their joint distribution
is the same no matter in which order the variables are observed.
122
Exchangeable random variables
P {Xi1 = x1, Xi2 = x2, . . . , Xin = xn} = P {X1 = x1, X2 = x2, . . . , Xn = xn}
123
Exchangeable random variables: Example
Y1 =X(1),
Yi =X(i) − X(i−1), i = 2, . . . , n.
124
Solution: The transformations
y1 = x1, . . . , yi = xi − xi−1, i = 2, . . . , n
yield
xi = y1 + . . . + yi, i = 1, . . . , n.
The Jacobian of the preceding transformations is equal to 1. Thus,
fY1,...,Yn (y1, y2, . . . , yn) = n!, 0 < y1 < y1 + y2 < . . . < y1 + . . . + yn < 1,
or, equivalently,
  f_{Y_1,...,Y_n}(y_1, y_2, . . . , y_n) = n!,  y_i > 0, i = 1, . . . , n,  y_1 + . . . + y_n < 1.
125
Because the preceding joint density is a symmetric function of y1, . . . , yn,
we see that the random variables Y1, . . . , Yn are exchangeable.
126
Summary
127