
The distribution of a function of a random variable

Suppose a random variable X has density function fX (x) and
cdf FX (x). Now let Y = w(X), where w(·) is continuous and
either increasing or decreasing for a < x < b. Suppose also
that a < x < b if and only if α < y < β, and let x = w⁻¹(y)
be the inverse function for α < y < β. Then the cdf of Y is,
if w is an increasing function,

FY (y) = FX (w⁻¹(y)), α < y < β

and, if w is a decreasing function,

FY (y) = 1 − FX (w⁻¹(y)), α < y < β
The density function of Y is
fY (y) = fX (w⁻¹(y)) |dx/dy|, α < y < β

Examples
i. Let

f (x) = { 1, 0 < x < 1
        { 0, otherwise
Find the distribution of Y = − log X.

ii. Suppose we have a large collection of cubes whose side
lengths X are spread evenly (uniformly) from 98 cm to
102 cm. The density function of X is therefore

f (x) = { 1/4, 98 < x < 102
        { 0,   otherwise

Now suppose we are interested in the volume of the cube,
Y = X³. What is the distribution of Y?
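Both answers can be checked numerically. The following Python/NumPy sketch is my own illustration, not part of the original notes; it assumes the cdf-method and density-transformation results derived from the formulas above.

    import numpy as np

    rng = np.random.default_rng(0)

    # Example i: X ~ Uniform(0, 1) and Y = -log(X).
    # By the cdf method, F_Y(y) = Pr(-log X <= y) = Pr(X >= e^{-y}) = 1 - e^{-y},
    # so Y should follow an Exponential(1) distribution.
    x = rng.uniform(0.0, 1.0, size=100_000)
    y = -np.log(x)
    print(y.mean())                     # close to 1, the mean of Exponential(1)

    # Example ii: X ~ Uniform(98, 102) and Y = X^3.
    # w(x) = x^3 is increasing, so f_Y(y) = f_X(y^(1/3)) * (1/3) y^(-2/3)
    #                                     = 1 / (12 y^(2/3)) for 98^3 < y < 102^3.
    x = rng.uniform(98.0, 102.0, size=100_000)
    y = x ** 3
    print(y.min(), y.max())             # inside (98^3, 102^3) = (941192, 1061208)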

Sometimes we are interested not only in an individual variable
but in two or more variables.
To specify the relationship between two random variables, we
define the joint cumulative distribution function of X and Y
by
F (x, y) = Pr{X ≤ x, Y ≤ y}
The distribution of X can be obtained from the joint
distribution:

FX (x) = Pr{X ≤ x}
       = Pr{X ≤ x, Y < ∞}
       = Pr{ lim_{y→∞} {X ≤ x, Y ≤ y} }
       = lim_{y→∞} Pr{X ≤ x, Y ≤ y}
       = F (x, ∞)
Similarly, we can obtain the distribution of Y: FY (y) = F (∞, y).
All joint probability statements about X and Y can be
answered in terms of F (x, y).
Example: Pr{X > a, Y > b} = 1 − FX (a) − FY (b) + F (a, b)

If X and Y are both discrete, then we can define the joint
probability mass function by
f (x, y) = Pr{X = x, Y = y}

Now suppose X takes values x1, x2, . . . , xn and Y takes values
y1, y2, . . . , ym. The joint probability mass function can then
be expressed conveniently in tabular form.
Example: Consider the following joint distribution of X and
Y, where X represents income level and Y represents job
satisfaction.

X = { 1, Income < 10,000
    { 2, 10,000 < Income < 20,000
    { 3, 20,000 < Income < 30,000
    { 4, 30,000 < Income < 40,000
    { 5, Income > 40,000

Y = { 1, Very dissatisfied
    { 2, Little dissatisfied
    { 3, Moderately satisfied
    { 4, Very satisfied

X \ Y     1      2      3      4
  1     0.08   0.10   0.04   0.02
  2     0.05   0.06   0.05   0.03
  3     0.04   0.04   0.06   0.06
  4     0.02   0.03   0.07   0.06
  5     0.01   0.02   0.08   0.08
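As an aside (a sketch of mine, not from the notes), the marginal pmfs of X and Y can be read off this table by summing rows and columns; in Python/NumPy:

    import numpy as np

    # Joint pmf of (X, Y): rows index X = 1..5, columns index Y = 1..4.
    p = np.array([
        [0.08, 0.10, 0.04, 0.02],
        [0.05, 0.06, 0.05, 0.03],
        [0.04, 0.04, 0.06, 0.06],
        [0.02, 0.03, 0.07, 0.06],
        [0.01, 0.02, 0.08, 0.08],
    ])

    assert np.isclose(p.sum(), 1.0)      # a joint pmf must sum to 1
    p_x = p.sum(axis=1)                  # marginal pmf of X (sum over y)
    p_y = p.sum(axis=0)                  # marginal pmf of Y (sum over x)
    print(p_x)  # [0.24 0.19 0.20 0.18 0.19]
    print(p_y)  # [0.20 0.25 0.30 0.25]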

Similarly, if X and Y are both continuous, then the joint
probability density function f (x, y) is the function such that

Pr{X ∈ C, Y ∈ D} = ∫∫_{(x,y): x∈C, y∈D} f (x, y) dx dy

Since

F (a, b) = Pr{X ≤ a, Y ≤ b} = ∫_{−∞}^{a} ∫_{−∞}^{b} f (x, y) dy dx

therefore

f (a, b) = ∂²F (a, b) / ∂a∂b
whenever the derivative exists.
If X and Y are jointly continuous, they are individually
continuous, and the probability density function of X is

f (x) = ∫_{−∞}^{∞} f (x, y) dy

and the density function of Y is

f (y) = ∫_{−∞}^{∞} f (x, y) dx

Example: Let X and Y be random variables with joint
density function

f (x, y) = { 1/2, 0 ≤ x ≤ y, 0 ≤ y ≤ 2
           { 0,   otherwise
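The notes stop at the setup here; as a worked sketch (my addition, assuming the intended exercise is to find the marginals), integrating out the other variable gives

fX (x) = ∫_x^2 (1/2) dy = (2 − x)/2, 0 ≤ x ≤ 2

fY (y) = ∫_0^y (1/2) dx = y/2, 0 ≤ y ≤ 2

and each of these integrates to 1, as a density must.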

Conditional Distribution: Discrete Case:
Recall the definition of the conditional probability of E given F:

Pr(E|F ) = Pr(E ∩ F ) / Pr(F ), provided Pr(F ) > 0

If X and Y are discrete random variables, define the
conditional probability mass function of X given Y = y by

f (x|y) = Pr(X = x|Y = y) = Pr(X = x, Y = y) / Pr(Y = y) = f (x, y) / f (y)

for all values of y such that f (y) > 0.
Similarly, define the conditional cumulative distribution
function of X given Y = y by

FX|Y (x|y) = Pr(X ≤ x|Y = y) = Σ_{a≤x} f (a|y)

where f (y) > 0.

Example:
Suppose that f (x, y), the joint probability mass function of
X and Y , is given by
f (0, 0) = 0.45, f (0, 1) = 0.05, f (1, 0) = 0.05, f (1, 1) = 0.45
Find the marginal distribution of X and the conditional
distribution of X given Y = 0, 1.
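A quick numerical check of this example (my own sketch in Python, not part of the notes):

    # Joint pmf from the example above.
    p = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

    # Marginal pmf of X: sum the joint pmf over y.
    p_x = {x: p[(x, 0)] + p[(x, 1)] for x in (0, 1)}
    print(p_x)                                  # {0: 0.5, 1: 0.5}

    # Conditional pmf of X given Y = y: f(x|y) = f(x, y) / f(y).
    for y in (0, 1):
        f_y = p[(0, y)] + p[(1, y)]             # marginal f(y) = 0.5
        print(y, {x: p[(x, y)] / f_y for x in (0, 1)})
        # y = 0 -> {0: 0.9, 1: 0.1};  y = 1 -> {0: 0.1, 1: 0.9}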

Continuous Case:
If X and Y have the joint probability density function f (x, y),
define the conditional density function of X given Y = y by
f (x|y) = f (x, y) / f (y)
where f (y) > 0.
It is consistent with the discrete case.

f (x|y) dx = [f (x, y) dx dy] / [f (y) dy]
           = Pr(x ≤ X < x + dx, y ≤ Y < y + dy) / Pr(y ≤ Y < y + dy)
           = Pr(x ≤ X < x + dx | y ≤ Y < y + dy)
The conditional cumulative distribution function of X given
Y = y is
FX|Y (a|y) = Pr(X ≤ a|Y = y) = ∫_{−∞}^{a} f (x|y) dx

Two random variables X and Y are independent if

f (x, y) = f (x)f (y) for all x and y

Examples: Suppose X, Y have a joint density function
defined as

f (x, y) = { 1, 0 < x < 1, 0 < y < 1
           { 0, otherwise.
We can compute the marginal density function of X by
integrating y out:
fX (x) = ∫_0^1 f (x, y) dy
       = ∫_0^1 1 dy
       = y |_{y=0}^{1}
       = 1, 0 < x < 1

Similarly, the marginal density of Y is also


fY (y) = 1, 0 < y < 1

Since f (x, y) = fX (x)fY (y), X and Y are independent.

Now suppose we want to find Pr(X > Y ).
Note that X > Y represents {(x, y) : 0 < y < x < 1}.
Therefore,
Pr(X > Y ) = ∫∫_{(x,y): 0<y<x<1} f (x, y) dx dy
           = ∫_0^1 ∫_0^x 1 dy dx
           = ∫_0^1 y |_{y=0}^{x} dx
           = ∫_0^1 x dx
           = x²/2 |_0^1
           = 1/2
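A one-line simulation (my own check, not in the notes) confirms the answer:

    import numpy as np

    rng = np.random.default_rng(1)
    x, y = rng.uniform(size=(2, 200_000))   # independent Uniform(0,1) pairs
    print((x > y).mean())                   # close to Pr(X > Y) = 1/2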

Let X and Y have the joint pdf
fX,Y (x, y) = 2e^{−(x+y)}, 0 < x < y < ∞

Then the marginal density of X is

fX (x) = ∫_x^∞ fX,Y (x, y) dy
       = ∫_x^∞ 2e^{−(x+y)} dy
       = 2e^{−x} ∫_x^∞ e^{−y} dy
       = 2e^{−x} (−e^{−y} |_x^∞)
       = 2e^{−x} e^{−x}
       = 2e^{−2x}, 0 < x < ∞

and the marginal density of Y is

fY (y) = ∫_0^y fX,Y (x, y) dx
       = ∫_0^y 2e^{−(x+y)} dx
       = 2e^{−y} ∫_0^y e^{−x} dx
       = 2e^{−y} (−e^{−x} |_0^y)
       = 2e^{−y} (1 − e^{−y}), 0 < y < ∞

The conditional density of X given Y = y is

fX|Y (x|y) = fX,Y (x, y) / fY (y)
           = 2e^{−(x+y)} / [2e^{−y} (1 − e^{−y})]
           = e^{−x} / (1 − e^{−y}), 0 < x < y

Similarly, the conditional density of Y given X = x is

fY|X (y|x) = e^{−y} / e^{−x} = e^{−(y−x)}, x < y < ∞
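These integrals can also be verified symbolically; a minimal SymPy sketch (my own check, not part of the notes):

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    f = 2 * sp.exp(-(x + y))                 # joint pdf on 0 < x < y

    f_X = sp.integrate(f, (y, x, sp.oo))     # marginal of X: 2*exp(-2*x)
    f_Y = sp.integrate(f, (x, 0, y))         # marginal of Y: 2*exp(-y)*(1 - exp(-y))
    print(sp.simplify(f_X))
    print(sp.simplify(f_Y))
    print(sp.simplify(f / f_Y))              # equivalent to exp(-x)/(1 - exp(-y))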
(c) Expected Values and Variance
• If X is discrete, taking values x1, x2, . . ., then the
expectation (also called the expected value, or the mean) of
X is defined by

E(X) = Σ_i xi Pr{X = xi}
Examples:
If the probability mass function of X is given by
f (0) = 1/2 = f (1)

then

E(X) = 0 · (1/2) + 1 · (1/2) = 1/2
If I is the indicator variable for the event A, that is, if

I = { 1 if A occurs
    { 0 otherwise

then

E(I) = 1 · Pr(A) + 0 · (1 − Pr(A)) = Pr(A)

Therefore, the expectation of the indicator variable for the
event A is just the probability that A occurs.
• If X is continuous with the pdf f (x), then the expected
value of X is
E(X) = ∫_{−∞}^{∞} x f (x) dx
Example: If X has the pdf

f (x) = { 3x² if 0 < x < 1
        { 0   otherwise

then the expected value of X is

E(X) = ∫_0^1 x · 3x² dx = 3x⁴/4 |_0^1 = 3/4
• Sometimes we are interested not in the expectation of X
itself but in the expectation of a function g(X). We then
need the following results.
If X is discrete with pmf f (xi), then

E(g(X)) = Σ_i g(xi) f (xi)

and if X is continuous with pdf f (x), then

E(g(X)) = ∫_{−∞}^{∞} g(x) f (x) dx
If a and b are constants, then
E(aX + b) = aE(X) + b
If X1 and X2 are two random variables, then
E(X1 + X2) = E(X1) + E(X2)
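These rules are easy to check by simulation. A minimal sketch of mine (reusing the 3x² density from the example above, sampled by inversion since F (x) = x³):

    import numpy as np

    rng = np.random.default_rng(2)
    # X with pdf f(x) = 3x^2 on (0, 1): F(x) = x^3, so X = U^(1/3) by inversion.
    x = rng.uniform(size=200_000) ** (1.0 / 3.0)
    print(x.mean())                               # close to E(X) = 3/4

    a, b = 2.0, 5.0
    print((a * x + b).mean(), a * x.mean() + b)   # E(aX + b) = a E(X) + b

    x2 = rng.uniform(size=200_000)                # a second, independent variable
    print((x + x2).mean(), x.mean() + x2.mean())  # E(X1 + X2) = E(X1) + E(X2)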

• Variance
To measure the variation of values in the distribution, we
use the variance.
If X is a random variable with mean µ, then the variance
of X is defined by

Var(X) = E[(X − µ)²]

An alternative formula is

Var(X) = E(X²) − µ²

For any constants a and b,

Var(aX + b) = a²Var(X)
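Both identities can be verified numerically; a quick sketch of mine (any distribution would do here):

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(loc=1.0, scale=2.0, size=200_000)
    mu = x.mean()
    # The two variance formulas agree:
    print(np.mean((x - mu) ** 2), np.mean(x ** 2) - mu ** 2)

    a, b = 3.0, -1.0
    print(np.var(a * x + b), a ** 2 * np.var(x))  # Var(aX + b) = a^2 Var(X)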

• If we have two random variables X1 and X2 and want to
measure their dependence structure, we can use the covariance
Cov(X1, X2) = E[(X1 − µ1)(X2 − µ2)]
where µi = E(Xi), i = 1, 2.
Alternative formula:
Cov(X1, X2) = E(X1X2) − µ1µ2

Var(X1 + X2) = Var(X1) + Var(X2) + 2Cov(X1, X2)

If X1 and X2 are independent, then


Cov(X1, X2) = 0
Correlation Coefficient: a measure of the linear relationship
between two random variables:

ρ = Corr(X, Y ) = Cov(X, Y ) / √(Var(X) Var(Y ))

and −1 ≤ ρ ≤ 1.
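These identities are again easy to check by simulation. In the sketch below (mine, not from the notes), the construction X2 = X1 + noise is just an illustrative choice, giving Cov(X1, X2) = 1 and ρ = 1/√2:

    import numpy as np

    rng = np.random.default_rng(4)
    x1 = rng.normal(size=200_000)
    x2 = x1 + rng.normal(size=200_000)     # built to be positively correlated

    cov = np.mean(x1 * x2) - x1.mean() * x2.mean()   # Cov = E(X1 X2) - mu1 mu2
    rho = cov / (x1.std() * x2.std())
    print(cov, rho)                        # cov close to 1, rho close to 0.707

    # Var(X1 + X2) = Var(X1) + Var(X2) + 2 Cov(X1, X2)
    print(np.var(x1 + x2), np.var(x1) + np.var(x2) + 2 * cov)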

We can also compute the conditional mean of X given Y = y
(continuing the example with fX|Y (x|y) = e^{−x}/(1 − e^{−y})
from above):

E(X|Y = y) = ∫_0^y x fX|Y (x|y) dx
           = ∫_0^y x e^{−x} / (1 − e^{−y}) dx
           = [1 / (1 − e^{−y})] ∫_0^y x e^{−x} dx

Integrating by parts with u = x and dv = e^{−x} dx, so that
du = dx and v = −e^{−x},

∫_0^y x e^{−x} dx = −x e^{−x} |_0^y + ∫_0^y e^{−x} dx
                  = −y e^{−y} + (−e^{−x} |_0^y)
                  = 1 − e^{−y} − y e^{−y}

Therefore,

E(X|Y = y) = (1 − e^{−y} − y e^{−y}) / (1 − e^{−y})
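A symbolic check of this result (my own sketch, not in the notes):

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    f_cond = sp.exp(-x) / (1 - sp.exp(-y))       # f_{X|Y}(x|y) from above
    E_cond = sp.integrate(x * f_cond, (x, 0, y))
    print(sp.simplify(E_cond))
    # Equals (1 - exp(-y) - y*exp(-y)) / (1 - exp(-y)), up to rearrangement.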
