
Chapter 2

Introduction to Probability

2.1 Random Variable


A random variable maps the outcomes of an experiment to the set of real numbers.
Figure 2.1 shows a random variable X(s) that maps the outcomes of events from the
sample space S onto the real line. The points x_1, x_2, ..., x_k on the real line are
real numbers, each of which is the image of one or more outcomes from the sample
space S. The outcomes {s_1, s_2, ..., s_k} of the experiment are shown as dots in S.
The mapping, or functional dependency, X(s) is represented by a connecting curve
that joins a dot (or dots) to the real number x_i on the real line.

Figure 2.1: Mapping of outcomes to the set of real numbers.

Exercise
1. Consider an experiment of tossing a coin and identify S, X(s), {s_1, s_2, ..., s_k}
and {x_1, x_2, ..., x_k}.

2. Repeat part (1) when two coins are tossed simultaneously.

3. Consider an experiment of rolling a die and identify S, X(s), {s_1, s_2, ..., s_k}
and {x_1, x_2, ..., x_k}.


4. Consider an experiment of rolling two dice and identify S, X(s), {s_1, s_2, ..., s_k}
and {x_1, x_2, ..., x_k}; a short Octave sketch for this part follows.
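
As a concrete illustration of such a mapping for part 4, the short Octave sketch below enumerates all 36 outcomes of two dice and takes X(s) to be the sum of the dots; this particular choice of X(s) is only one of many valid mappings and is assumed here for illustration.

% Enumerate the sample space S of two dice and map each outcome s = (d1, d2)
% to the real number X(s) = d1 + d2 (one possible choice of mapping).
[d1, d2] = meshgrid(1:6, 1:6);   % all 36 outcomes in S
X = d1(:) + d2(:);               % X(s) for every outcome s
disp(unique(X)');                % the distinct values x_1, ..., x_k = 2, 3, ..., 12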

2.2 Probability Distribution


The probability, or chance, that a discrete random variable X takes a specific
value x_3 can be represented as P(X = x_3). Hence, P(X ≤ x_3) represents the
probability that X takes any of the values in the range −∞ < x ≤ x_3; for a
discrete random variable this range can be expressed as a set, i.e.,
x ∈ {−x_N, ..., x_1, x_2, x_3}. Similarly, for a continuous random variable,
the statement P(X ≤ x), where x = x_i, gives the probability of the events that
are represented by numerical values on the real line. It should be noted that for
a continuous random variable, P(X = x_i) turns out to be zero for every x_i
[proved in subsequent text], so the probability of a range, i.e.,
P(x_1 ≤ X ≤ x_2), is of interest instead. The probability function P(X ≤ x)
is known as the cumulative distribution function (CDF) and is written as

$$F_X(x) = P(X \le x) \tag{2.1}$$

Example
As a proof of concept, consider an example of a probability distribution for
a discrete random variable. Assume an experiment of “rolling two dice” and
define the following events:

1. The sum of dots on both dice is 2.

2. The sum of dots on both dice is 3.


...

11. The sum of dots on both dice is 12.

The sample space S comprises eleven events. Map the outcome of each event
to the real line and draw F_X(x). [Caution: also consider events with sum < 2
and sum > 12 when constructing the CDF.]
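
One way to carry out this construction in Octave is sketched below; the plotting range 1 to 13 and the stairs-style plot are choices made here for illustration, not part of the exercise.

% Exact CDF F_X(x) of the sum of two fair dice.
[d1, d2] = meshgrid(1:6, 1:6);
X = d1(:) + d2(:);                  % 36 equally likely outcomes
x = 2:12;
P = sum(X == x, 1) / numel(X);      % P(X = x) for x = 2, ..., 12
F = cumsum(P);                      % F_X(x) = P(X <= x)
stairs([1 x 13], [0 F 1]);          % F_X is 0 below x = 2 and 1 above x = 12
xlabel('x'); ylabel('F_X(x)');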

2.2.1 Properties of CDF


1. 0 ≤ F_X(x) ≤ 1

2. F_X(−∞) = 0

3. F_X(+∞) = 1

4. F_X(x_2) ≥ F_X(x_1) if x_2 ≥ x_1

Note that P(X > x) = 1 − F_X(x).



Exercise
Find the probability of the following events in terms of F_X(x):

1. P(a < X ≤ b)

2. P(a < X < b)

3. P(a ≤ X < b)

4. P(X > b)

2.2.2 Probability Density Function


The probability density function (PDF) of a continuous random variable X is the
derivative of its CDF, i.e.,

$$p(x) = \frac{dF_X(x)}{dx} \tag{2.2}$$

Note that p(x) ≥ 0 and $\int_{-\infty}^{+\infty} p(x)\,dx = 1$.
Find P(x_1 ≤ X ≤ x_2) in terms of p(x).
Compare P(x_1 ≤ X < x_2), P(x_1 < X ≤ x_2) and P(x_1 ≤ X ≤ x_2).
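
As a numerical check of (2.2) and the two properties above, the sketch below assumes a standard Gaussian density for p(x) (an example chosen here, not taken from the text) and approximates the integrals with trapz.

% Assumed example density: standard Gaussian p(x) = exp(-x^2/2)/sqrt(2*pi).
p  = @(x) exp(-x.^2 / 2) / sqrt(2*pi);
x  = linspace(-10, 10, 1e5);
printf('total area under p(x): %.4f\n', trapz(x, p(x)));    % approximately 1
x1 = -1;  x2 = 1;                                           % an arbitrary interval
xi = linspace(x1, x2, 1e4);
printf('P(x1 <= X <= x2):      %.4f\n', trapz(xi, p(xi)));  % approximately 0.6827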

2.2.3 Transformation of Random Variables


Consider Z = g(X), where X is a random variable with PDF p_X(x). Assuming
that g(X) is a monotonically increasing or decreasing function, the PDF of Z can
be obtained using the transformation

$$p_Z(z) = p_X(x)\left|\frac{dx}{dz}\right|.$$

Since the transformation is z = g(x) for any x, it follows that x = g^{-1}(z), and p_Z(z)
can be expressed as

$$p_Z(z) = p_X\!\left(g^{-1}(z)\right)\left|\frac{dg^{-1}(z)}{dz}\right|. \tag{2.3}$$

Example
Consider the linear transformation Z = aX + b, where a and b are constants.
It is evident that Z = g(X), hence

$$
\begin{aligned}
z &= g(x) \\
x &= g^{-1}(z) = \frac{z - b}{a} \\
\Rightarrow\ \frac{dg^{-1}(z)}{dz} &= \frac{1}{a} \\
\Rightarrow\ p_Z(z) &= \frac{1}{|a|}\, p_X\!\left(\frac{z - b}{a}\right)
\end{aligned}
$$

It is shown in the subsequent chapters that a noisy received signal is of the form
Z = aX + b.
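
This result can be checked numerically. The sketch below assumes a standard Gaussian p_X(x) and the example values a = 2, b = 1 (neither is specified in the text) and compares a histogram of Z = aX + b with (1/|a|) p_X((z − b)/a).

% Compare a histogram of Z = aX + b with the transformed density of (2.3).
a = 2;  b = 1;                              % assumed example constants
x = randn(1, 1e5);                          % samples of X (standard Gaussian)
z = a*x + b;                                % samples of Z
[n, c] = hist(z, 50);                       % histogram counts n at bin centres c
pz_est = n / (numel(z) * (c(2) - c(1)));    % normalise counts into a density estimate
pz_th  = exp(-((c - b)/a).^2 / 2) / (sqrt(2*pi) * abs(a));  % (1/|a|) p_X((z-b)/a)
plot(c, pz_est, 'o', c, pz_th, '-');
legend('histogram estimate', 'transformed PDF');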

Exercise
Use OCTAVE to solve the following parts.

1. Use ‘randn’ function to generate 10,000 random numbers.

2. Use ‘hist’ function to plot the histogram of generated random numbers.

3. Generate Z from the above random numbers by taking a = b = 1 (a minimal
sketch of all three parts follows this list).
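
One possible sketch of the three parts is given below; the bin count of 50 is an arbitrary choice.

x = randn(1, 10000);     % part 1: 10,000 Gaussian random numbers
hist(x, 50);             % part 2: histogram of the generated numbers
a = 1;  b = 1;           % part 3: Z = aX + b with a = b = 1
z = a*x + b;
figure;  hist(z, 50);    % the same bell shape, shifted to the right by 1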

2.3 Statistical Averages


It is convenient to describe a random variable by a few characteristic numbers,
such as the mean, moments, and expectation. These numbers are known as
statistical averages.

Mean
The mean of a random variable X, denoted by m_x, is the sum of the values of
X weighted by their probabilities.

Exercise
Use the ‘randint’ function to generate random integers and compute the following
sum

$$\sum_i n_i x_i = n_1 x_1 + n_2 x_2 + \cdots \tag{2.4}$$

where n_i is the number of times X = x_i occurs in an experiment [e.g., randint(1,
10, 3)]. To get the mean m_x, divide (2.4) by N [e.g., N = 10 for the preceding
function call]. The relative frequency n_i/N becomes P(X = x_i) as N → ∞.
Thus the statistical average E(X) = X̄ = m_x can be written as

$$m_x = \frac{1}{N}\sum_i n_i x_i = \frac{1}{N}\left(n_1 x_1 + n_2 x_2 + \cdots\right) = \sum_i x_i P_X(x_i) \tag{2.5}$$
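
The sketch below illustrates (2.4) and (2.5) in Octave. It uses randi instead of randint (an assumption made here, since randint is provided by the separate communications package), drawing N integers uniformly from {1, 2, 3}.

N  = 10000;
X  = randi(3, 1, N);          % N random integers from {1, 2, 3}
xi = unique(X);               % the distinct values x_i
ni = sum(X(:) == xi, 1);      % n_i: number of times X = x_i occurs
mx = sum(ni .* xi) / N;       % (1/N) * sum_i n_i x_i, the estimate of m_x
printf('estimated mean: %.3f\n', mx);   % approaches 2 as N grows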

For a continuous random variable, (2.5) can be rewritten as

$$m_x = E[X] = \int_{-\infty}^{\infty} x\, p_X(x)\, dx \tag{2.6}$$
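
As a short worked example (the uniform density is an assumed choice, not one used in the text), let X be uniformly distributed on [0, 1], so that p_X(x) = 1 for 0 ≤ x ≤ 1 and zero elsewhere. Then

$$m_x = E[X] = \int_{0}^{1} x\, dx = \frac{1}{2}.$$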

The expectation of g(X), i.e., E[g(X)], can be obtained from p_X(x) using the
relation

$$E[g(X)] = \int_{-\infty}^{\infty} g(x)\, p_X(x)\, dx \tag{2.7}$$

Moments
The n-th moment of a random variable X is defined as

$$E[X^n] = \int_{-\infty}^{\infty} x^n\, p_X(x)\, dx \tag{2.8}$$
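
Continuing the assumed uniform example above, the first moment (n = 1) recovers the mean m_x = 1/2, while the second moment is

$$E[X^2] = \int_{0}^{1} x^2\, dx = \frac{1}{3}.$$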
