Lecture 14
1 Random Processes
A useful extension of the idea of random variables is the random process. While the random
variable X is defined as a univariate function X(s) where s is the outcome of a random
experiment, the random process is a bivariate function X(s, t) where s is the outcome of a
random experiment and t is an index variable such as time. Examples of random processes
are the voltage in a circuit over time and the light intensity at different locations. The random process, for two outcomes s_1 and s_2, can be plotted as two distinct sample functions:

[Figure: sample functions X(s_1, t) and X(s_2, t) plotted against t]
Just as for random variables, usually the s is not explicitly written in X(s, t) so the random
process is denoted as X(t).
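As a minimal numerical sketch of this bivariate view (the random-amplitude cosine below is an illustrative choice of mine, not a process from these notes), each outcome s fixes one sample function of t, while each fixed t gives an ordinary random variable:

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0.0, 1.0, 200)   # the index variable (time)
n_outcomes = 3                   # outcomes s1, s2, s3 of the experiment

# Illustrative process: X(s, t) = A(s) * cos(2*pi*5*t), where the random
# experiment draws the amplitude A(s) once per outcome s.
A = rng.normal(size=n_outcomes)

# Each row is one sample function X(s_i, t): fixing s leaves an ordinary
# function of t; fixing t (a column) leaves an ordinary random variable.
samples = A[:, None] * np.cos(2 * np.pi * 5 * t)[None, :]

print(samples.shape)    # (3, 200): 3 outcomes by 200 time points
print(samples[:, 0])    # the random variable X(s, 0) across outcomes
```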
Random processes are classified by whether the time index and the values taken are continuous or discrete. For example:

Continuous Random Process: Voltage in a circuit over time, temperature at a given location over time, temperature at different positions in a room.

Discrete Random Sequence: Sampled and quantized voltage from a circuit over time.
A random process is called deterministic if future values of any sample function can be predicted from past values. For example, consider the random process

\[ X(t) = A \cos(\omega_0 t + \Theta) \]

where A and \omega_0 are known constants and \Theta is a random variable. Since from a few samples of X(t) taken at different known times it is possible to calculate \theta, and thus to determine the sample function of X(t) for all future values of t, this process is deterministic.
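To make the predictability concrete, here is a minimal numpy sketch (the parameter values and seed are my own illustrative choices): two samples taken a quarter period apart determine \theta exactly, and with it every future value of the sample function.

```python
import numpy as np

# Known constants (hypothetical values for illustration).
A, w0 = 2.0, 2 * np.pi * 10.0

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi)       # the one random quantity

def x(t):
    """One sample function of the process X(t) = A*cos(w0*t + Theta)."""
    return A * np.cos(w0 * t + theta)

# Two samples a quarter period apart pin theta down exactly:
#   x(0)          = A*cos(theta)
#   x(pi/(2*w0))  = A*cos(pi/2 + theta) = -A*sin(theta)
theta_hat = np.arctan2(-x(np.pi / (2 * w0)) / A, x(0) / A) % (2 * np.pi)

# Knowing theta_hat determines the sample function for all future t.
t_future = 123.456
print(np.isclose(A * np.cos(w0 * t_future + theta_hat), x(t_future)))  # True
```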
We can extend our idea of probability density functions (PDFs) from random variables to density functions for random processes. In terms of the first order distribution function F_X(x_1; t_1) = P\{X(t_1) \le x_1\}, the first order density function for the random process X(t) is

\[ f_X(x_1; t_1) = \frac{\partial F_X(x_1; t_1)}{\partial x_1} \tag{4} \]
The second order density function is then
\[ f_X(x_1, x_2; t_1, t_2) = \frac{\partial^2 F_X(x_1, x_2; t_1, t_2)}{\partial x_1 \, \partial x_2} \tag{5} \]
By extension, the N th order density function is then
\[ f_X(x_1, \ldots, x_N; t_1, \ldots, t_N) = \frac{\partial^N F_X(x_1, \ldots, x_N; t_1, \ldots, t_N)}{\partial x_1 \cdots \partial x_N} \tag{6} \]
2.3 Second Order Stationarity
A common error when first working with random processes is to take first order stationarity as implying general stationarity. It is easily shown that first order stationarity is not enough to ensure stationarity of all statistical properties.
Example: A random process is given by

\[ X(t) = N \]

where N is a zero-mean, unit-variance Gaussian random variable, so every sample function is constant over all t. The first order density function of X(t) is

\[ f_X(x_1; t_1) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{x_1^2}{2} \right) \]

and the second order density function is

\[ f_X(x_1, x_2; t_1, t_2) = f_X(x_2; t_2 \mid x_1; t_1)\, f_X(x_1; t_1) = \delta(x_2 - x_1)\, \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{x_1^2}{2} \right) \]

where the second equality is a result of X(t_1) = X(t_2) for all t_1 and t_2, which follows from the definition of the random process. One sample function of X(t) is shown below:

[Figure: one sample function of X(t), a constant level over all t]
Example: Now define a random process by

\[ Y(t) = N_i \quad \text{for } i \le t < i + 1 \]

where N_i for i = \ldots, -2, -1, 0, 1, 2, \ldots are independent and identically distributed Gaussian random variables with PDFs of

\[ f_{N_i}(n_i) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{n_i^2}{2} \right) \]

We get a random process which is constant over each interval of length 1 but changes value at every integer time index, with its value independent from one interval to the next. One sample function of Y(t) is shown below:

[Figure: one sample function of Y(t), piecewise constant with jumps at integer values of t]
The first order density function of Y(t) is given by
\[ f_Y(y_1; t_1) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{y_1^2}{2} \right) \]
The second order density function of Y(t), given in terms of conditional PDFs, is

\[ f_Y(y_1, y_2; t_1, t_2) = f_Y(y_2; t_2 \mid y_1; t_1)\, f_Y(y_1; t_1) \]
We note that Y(t_1) = Y(t_2) if t_1 and t_2 lie between the same two integers, i.e. i \le t_1, t_2 < i + 1 for some i. If this is not the case, then Y(t_1) and Y(t_2) are independent. The second order density function of Y(t) is then
\[ f_Y(y_1, y_2; t_1, t_2) = \begin{cases} \delta(y_2 - y_1)\, \dfrac{1}{\sqrt{2\pi}} \exp\left( -\dfrac{y_1^2}{2} \right) & \text{if } i \le t_1, t_2 < i + 1 \text{ for some } i \\[2ex] \dfrac{1}{2\pi} \exp\left( -\dfrac{y_1^2 + y_2^2}{2} \right) & \text{otherwise} \end{cases} \]
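A rough Monte Carlo check of these two examples (sample size and seed are arbitrary choices of mine), evaluated at two times t_1 = 0.3 and t_2 = 5.7 that lie in different unit intervals:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 100_000

# Process X: each sample function is one N(0,1) value held for all t.
# Process Y: an independent N(0,1) value on each interval [i, i+1).
X_t1 = rng.normal(size=n_trials)
X_t2 = X_t1                            # X(t1) = X(t2) for every outcome

Y_t1 = rng.normal(size=n_trials)       # floor(0.3) != floor(5.7), so
Y_t2 = rng.normal(size=n_trials)       # Y(t1), Y(t2) are independent draws

# First order statistics agree ...
print(X_t1.mean(), Y_t1.mean())        # both approximately 0
print(X_t1.std(), Y_t1.std())          # both approximately 1

# ... but second order statistics differ:
# E[X(t1) X(t2)] = 1 while E[Y(t1) Y(t2)] = 0.
print((X_t1 * X_t2).mean())            # approximately 1
print((Y_t1 * Y_t2).mean())            # approximately 0
```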
Note from the previous examples that two random processes can have the same first order density function but different second order density functions. This motivates the definition of different orders of stationarity.
A random process X(t) is called second order stationary, or stationary to order two, if

\[ f_X(x_1, x_2; t_1, t_2) = f_X(x_1, x_2; t_1 + \Delta, t_2 + \Delta) \]

for all possible selections of x_1, x_2, t_1, t_2, and \Delta. It is easily seen that second order stationarity implies first order stationarity, but the reverse is not true.
To study second order stationarity, some useful functions have been developed. The first of these is the autocorrelation function, defined for a random process X(t) as

\[ R_{XX}(t_1, t_2) = E[X(t_1)\, X(t_2)] \tag{14} \]

If X(t) is stationary to order two, then it can be seen that

\[ R_{XX}(t_1, t_2) = R_{XX}(t_2 - t_1) \tag{15} \]

that is, the autocorrelation depends only on the time difference t_2 - t_1.
2.4 Wide Sense Stationarity
It should be noted that all processes that are stationary to order two have the property that R_{XX}(t_1, t_2) = R_{XX}(t_2 - t_1), but the converse is not true. This property is useful enough that processes possessing it are given a special name: Wide Sense Stationary. A random process is called Wide Sense Stationary if
\[ E[X(t)] = \bar{X}, \text{ a constant over all } t, \text{ and} \tag{16} \]
\[ R_{XX}(t_1, t_2) = R_{XX}(\tau) \text{ where } \tau = t_2 - t_1 \tag{17} \]
Example: A random process X(t) is defined as
\[ X(t) = A \cos(\omega t + \phi) \]
where A and ω are constants and φ is a random variable that is uniformly distributed from
0 to 2π. The expected value of X(t) is
\[ E[X(t)] = \int_0^{2\pi} A \cos(\omega t + \phi)\, \frac{1}{2\pi}\, d\phi = 0 \]
The autocorrelation function is given by
\[
\begin{aligned}
R_{XX}(t, t+\tau) &= E[X(t)\, X(t+\tau)] \\
&= E\{A \cos(\omega t + \phi)\, A \cos[\omega(t + \tau) + \phi]\} \\
&= \frac{A^2}{2} E[\cos(2\omega t + \omega\tau + 2\phi) + \cos(-\omega\tau)] \\
&= \frac{A^2}{2} E[\cos(2\omega t + \omega\tau + 2\phi)] + \frac{A^2}{2} E[\cos(\omega\tau)] \\
&= \frac{A^2}{2}(0) + \frac{A^2}{2} \cos(\omega\tau) \\
&= \frac{A^2}{2} \cos(\omega\tau)
\end{aligned}
\]
The mean is a constant and the autocorrelation function is a function of \tau only, so this process is Wide Sense Stationary. It is easily seen that if \phi is instead uniformly distributed on (0, \pi/4), the process is not Wide Sense Stationary, as the autocorrelation in that case is a function of both t_1 and t_2.
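Both claims can be checked with a quick Monte Carlo sketch (numpy; the parameter values, sample size, and seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
A, w = 1.0, 2 * np.pi
tau, n_trials = 0.1, 200_000

def R_hat(t, phi):
    """Ensemble estimate of E[X(t) X(t + tau)] over the draws of phi."""
    return np.mean(A * np.cos(w * t + phi) * A * np.cos(w * (t + tau) + phi))

phi_full = rng.uniform(0, 2 * np.pi, n_trials)   # the WSS case
phi_part = rng.uniform(0, np.pi / 4, n_trials)   # the non-WSS case

# phi ~ U(0, 2*pi): estimates agree across t and match (A**2/2)*cos(w*tau).
print([float(np.round(R_hat(t, phi_full), 2)) for t in (0.0, 0.3, 0.8)])
print(round(A**2 / 2 * np.cos(w * tau), 2))

# phi ~ U(0, pi/4): estimates now vary with t, so the process is not WSS.
print([float(np.round(R_hat(t, phi_part), 2)) for t in (0.0, 0.3, 0.8)])
```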
Second order stationarity is a sufficient but not necessary condition for wide sense stationarity: every process that is stationary to order two is Wide Sense Stationary, but there exist Wide Sense Stationary processes that are not second order stationary.
Two random processes X(t) and Y(t) are called jointly Wide Sense Stationary if they are
individually Wide Sense Stationary and
\[ R_{XY}(t, t+\tau) = E[X(t)\, Y(t+\tau)] = R_{XY}(\tau) \]
A random process X(t) is called stationary to order N if

\[ f_X(x_1, \ldots, x_N; t_1, \ldots, t_N) = f_X(x_1, \ldots, x_N; t_1 + \Delta, \ldots, t_N + \Delta) \tag{18} \]

for all possible x_1, \ldots, x_N, t_1, \ldots, t_N, and \Delta. It is easy to see that if a random process is stationary to order N, it is also stationary to all orders less than N. If a random process is stationary to all orders 1, 2, \ldots up to infinity, it is called strictly stationary.
We denote a single sample function of random process X(t) as x(t). The time average of a
sample function x(t) is denoted as
\[ \bar{x} = A[x(t)] = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, dt \tag{19} \]
The time autocorrelation function of a sample function x(t) is defined similarly as

\[ \mathcal{R}_{XX}(\tau) = A[x(t)\, x(t+\tau)] = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, x(t+\tau)\, dt \tag{20} \]

It can be seen that, in general, \bar{x} and \mathcal{R}_{XX}(\tau) are random variables, since they take different values for different sample functions. It can be easily shown that

\[ E[\bar{x}] = \bar{X} \tag{21} \]
\[ E[\mathcal{R}_{XX}(\tau)] = R_{XX}(\tau) \tag{22} \]
Assume that some theorem or set of properties of X(t) makes \bar{x} and \mathcal{R}_{XX}(\tau) constants, the same for all sample functions x(t) of X(t), so that \bar{x} = \bar{X} and \mathcal{R}_{XX}(\tau) = R_{XX}(\tau). We call processes that have these properties ergodic. In natural language, ergodic processes have their time averages equal to their statistical averages. Ergodicity is a restrictive form of stationarity. It is very difficult to prove mathematically and impossible to prove experimentally. It is often, however, assumed to hold for a given observed process in order to make useful kinds of statistical manipulation possible. For example, any time you take multiple measurements of a single process at different times and average them together to estimate the mean of a signal, you are assuming that the process being observed is in some way ergodic.
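As a small numerical illustration of this idea (using the Wide Sense Stationary cosine process from the earlier example, with my own arbitrary parameter values and seed), the time average of one long record approximates the ensemble mean:

```python
import numpy as np

rng = np.random.default_rng(4)

# One long sample function of X(t) = cos(2*pi*t + phi), phi ~ U(0, 2*pi).
# For this process the ensemble mean is E[X(t)] = 0 for every t.
phi = rng.uniform(0, 2 * np.pi)
t = np.linspace(0.0, 1000.0, 200_000)
x = np.cos(2 * np.pi * t + phi)

# The time average of the single record approximates E[X(t)]. This is the
# working assumption (ergodicity in the mean) behind averaging repeated
# measurements of one process to estimate its mean.
print(x.mean())     # approximately 0
```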
We call two processes jointly ergodic if they are individually ergodic and if
\[ \mathcal{R}_{XY}(\tau) = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} x(t)\, y(t+\tau)\, dt = R_{XY}(\tau) \tag{23} \]
In what follows, we assume that the random process X(t) satisfies the following properties:

1. E[X(t)] = \bar{X} < \infty for all t.

2. X(t) is bounded, which means that for all sample functions x(t), |x(t)| < \infty for all t.

3. \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} E[|X(t)|]\, dt < \infty.

4. E[|X(t)|^2] = R_{XX}(t, t) = E[X(t)^2] < \infty. A random process that satisfies this is called a second order process.
The first three properties are required to allow us to exchange statistical average and time
average integrals for these random processes.
Define a random variable A_X from X(t) as

\[ A_X = \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} X(t)\, dt \tag{24} \]

Its expected value is

\[
\begin{aligned}
\bar{A}_X = E[A_X] &= E\left[ \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} X(t)\, dt \right] \\
&= \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} E[X(t)]\, dt \\
&= \lim_{T\to\infty} \frac{1}{2T} \int_{-T}^{T} \bar{X}\, dt = \bar{X}
\end{aligned}
\tag{25}
\]
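Even so, A_X generally remains a random variable. A small numerical sketch (arbitrary seed and sample size) using the constant process X(t) = N from the earlier example, for which every time average is just N itself:

```python
import numpy as np

rng = np.random.default_rng(5)
n_funcs = 100_000

# For the constant process X(t) = N, the time average
# (1/2T) * integral of X(t) over [-T, T] equals N for every sample
# function and every T, so A_X = N exactly.
A_X = rng.normal(size=n_funcs)

print(A_X.mean())   # approximately 0 = X-bar, matching (25)
print(A_X.var())    # approximately 1, not 0: A_X remains a random
                    # variable, motivating the variance analysis below
```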
The variance of A_X is

\[ \operatorname{Var}(A_X) = E\left[ (A_X - \bar{X})^2 \right] = \lim_{T\to\infty} \frac{1}{(2T)^2} \int_{-T}^{T} \int_{-T}^{T} C_{XX}(t, t_1)\, dt\, dt_1 \tag{27} \]

where C_{XX}(t, t_1) = E\{[X(t) - \bar{X}][X(t_1) - \bar{X}]\} is the autocovariance function of X(t). For the random process X(t) to be mean ergodic, we need the integral in (27) to be 0. If X(t) is a Wide Sense Stationary process, then C_{XX}(t, t_1) = R_{XX}(t, t_1) - \bar{X}^2 = R_{XX}(t_1 - t) - \bar{X}^2. We can then rewrite the integral from (27) as
\[
\begin{aligned}
\operatorname{Var}(A_X) &= \lim_{T\to\infty} \frac{1}{(2T)^2} \int_{-T}^{T} \int_{-T}^{T} C_{XX}(t_1 - t)\, dt\, dt_1 \\
&= \lim_{T\to\infty} \left( \frac{1}{2T} \right)^2 \int_{-T}^{T} \int_{-T-t}^{T-t} C_{XX}(\tau)\, d\tau\, dt \quad \text{using } \tau = t_1 - t
\end{aligned}
\tag{28}
\]
Consider the following conditions on the autocovariance function:

1. C_{XX}(0) < \infty.

2. \lim_{\tau\to\infty} C_{XX}(\tau) = 0.

3. \int_{-\infty}^{\infty} |C_{XX}(\tau)|\, d\tau < \infty.
These conditions force \operatorname{Var}(A_X) = 0 and are thus sufficient conditions for X(t) to be mean ergodic.
Example: A wide sense stationary random process X(t) has the autocorrelation function

\[ R_{XX}(\tau) = e^{-\alpha\tau^2} \]

and a mean of E[X(t)] = 0 for all t, so that C_{XX}(\tau) = R_{XX}(\tau). This process is also mean ergodic since

\[ C_{XX}(0) = R_{XX}(0) = 1 < \infty, \qquad \lim_{\tau\to\infty} C_{XX}(\tau) = 0, \qquad \int_{-\infty}^{\infty} |C_{XX}(\tau)|\, d\tau = \int_{-\infty}^{\infty} e^{-\alpha\tau^2}\, d\tau = \sqrt{\frac{\pi}{\alpha}} < \infty \]

using Equation (C-51) of the textbook.
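The closed form for the third condition can be sanity-checked numerically (a rough Riemann-sum sketch; the value \alpha = 2 is a hypothetical choice):

```python
import numpy as np

alpha = 2.0                                  # hypothetical value
tau = np.linspace(-50.0, 50.0, 2_000_001)    # wide enough that the tails are negligible
d_tau = tau[1] - tau[0]

# Numerically integrate |C_XX(tau)| = exp(-alpha * tau**2) and compare
# with the closed form sqrt(pi / alpha) quoted from Equation (C-51).
numeric = np.sum(np.exp(-alpha * tau**2)) * d_tau
print(numeric, np.sqrt(np.pi / alpha))       # both approximately 1.2533
```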
We proceed, as in the previous section, by calculating the variance of A_X, which for a random sequence X[n] is defined as

\[ A_X = \lim_{N\to\infty} \frac{1}{2N+1} \sum_{n=-N}^{N} X[n] \]

and finding conditions that make this variance 0:
( " N
#" N
#)
1 X 1 X
Var (AX ) = E lim X[m] − X X[n] − X
N →∞ 2N + 1 2N + 1
m=−N n=−N
( 2 XN N
)
1 X
= E lim X[m] − X X[n] − X
N →∞ 2N + 1 m=−N n=−N
2 X N N
1 X
= lim E X[m] − X X[n] − X
N →∞ 2N + 1 m=−N n=−N
2 X N N
1 X
= lim CXX [n − m]
N →∞ 2N + 1 m=−N n=−N
2 X N N −m
1 X
= lim CXX [k]
N →∞ 2N + 1 m=−N k=−N −m
2N
1 X |k|
= lim CXX [k] 1 −
N →∞ 2N + 1 2N + 1
k=−2N
(32)
If this variance sum goes to zero in the limit as N \to \infty, then the random sequence is mean ergodic.
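To see the sum in (32) behave this way, here is a quick numerical evaluation at finite N for a hypothetical absolutely summable autocovariance C_{XX}[k] = 0.9^{|k|} (my own illustrative choice):

```python
import numpy as np

def var_A_X(N, C):
    """The weighted sum in (32) truncated at a finite N:
    (1/(2N+1)) * sum_{k=-2N}^{2N} C[k] * (1 - |k|/(2N+1))."""
    k = np.arange(-2 * N, 2 * N + 1)
    return np.sum(C(k) * (1.0 - np.abs(k) / (2 * N + 1))) / (2 * N + 1)

# A hypothetical absolutely summable autocovariance, C_XX[k] = 0.9**|k|.
C = lambda k: 0.9 ** np.abs(k)

# The sum decays roughly like 1/N, so Var(A_X) -> 0 and a sequence with
# this autocovariance is mean ergodic.
for N in (10, 100, 1_000, 10_000):
    print(N, var_A_X(N, C))
```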