Modeling and Simulation Scribe2 October 23


E0240: MODELING AND SIMULATION

Lecture #X

Scribe: 17
Your Name
Department
Lecture: X Date

Contents

1 Introduction

2 Recap of Previous Lecture
  2.1 Inverse Transformation Method
  2.2 Composition Method

3 Convolution Method
  3.1 Theory and Algorithm
  3.2 Example: Generating a Triangular Distribution
    3.2.1 Step 1: Expressing the Distribution
    3.2.2 Step 2: Finding the Cumulative Distribution Function and PDF of S
    3.2.3 Step 3: Shifting the Distribution
    3.2.4 Step 4: Simplifying the Expression
    3.2.5 Advantages of the Convolution Method

4 Acceptance-Rejection Method
  4.1 Theory and Algorithm
  4.2 Rationale Behind the Method
  4.3 Example: Generating a Complex Distribution
    4.3.1 Step 1: Choosing the Envelope Function
    4.3.2 Step 2: Computing the Constant c
    4.3.3 Step 3: Defining the Proposal Distribution
    4.3.4 Step 4: Generating Candidate Random Variables
    4.3.5 Step 5: Acceptance Criterion
    4.3.6 Efficiency Considerations
    4.3.7 Choosing the Envelope Function
  4.4 Proof of Acceptance-Rejection Method

5 Summary and Comparative Analysis of Random Number Generation Techniques

6 Appendix
  6.1 Pseudo Random Number Generator
    6.1.1 Key Characteristics

7 Conclusion

1 Introduction
In this lecture, we explore various random number generation techniques, which are essential
components in computer simulation. We begin with the basic inverse transformation method
and progress to more sophisticated approaches such as the composition, convolution, and
acceptance-rejection methods. Throughout the lecture, illustrative examples are provided
to reinforce understanding. We emphasize the strengths and limitations of each method,
highlighting the importance of selecting an appropriate technique based on the specific
random variable being generated. Additionally, we touch upon the use of simulation as
a valuable tool when analytical models are either unavailable or imprecise, especially in
situations with high injection rates.

2 Recap of Previous Lecture


In the previous class, we delved into several random number generation techniques.

2.1 Inverse Transformation Method

Definition 2.1. The Inverse Transformation Method is a technique for generating random variables from a specified distribution by utilizing the inverse of its cumulative distribution function (CDF). Given a continuous random variable X with CDF F_X(x), the method involves generating a uniform random variable U ∼ Uniform(0, 1) and computing:

X = F_X^{-1}(U).

This method is straightforward and easy to implement when F_X^{-1} can be expressed in a closed form. However, it may not be applicable when the inverse CDF is difficult to obtain analytically.
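As a small illustration (not part of the lecture itself), the inverse transform for the exponential distribution has the well-known closed form F^{-1}(u) = -ln(1 - u)/λ. The sketch below, with an illustrative function name of our choosing, samples Exponential(λ) this way:

```python
import math
import random

def exp_inverse_transform(lam, rng=random.random):
    """Sample from Exponential(lam) via X = F^{-1}(U) = -ln(1 - U) / lam."""
    u = rng()
    return -math.log(1.0 - u) / lam

random.seed(1)
samples = [exp_inverse_transform(2.0) for _ in range(100_000)]
print(sum(samples) / len(samples))  # should be close to the mean 1/lam = 0.5
```

The sample mean converging to 1/λ is a quick sanity check that the transform produces the intended distribution.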

2.2 Composition Method

Definition 2.2. The Composition Method involves expressing a complex probability distribution as a mixture of simpler component distributions. If a random variable X has a PDF f_X(x) that can be written as:

f_X(x) = \sum_{i=1}^{n} p_i f_{X_i}(x),

where p_i ≥ 0 with \sum_{i=1}^{n} p_i = 1 and f_{X_i}(x) are PDFs of simpler distributions, then X can be generated by:

1. Selecting one of the component distributions f_{X_i}(x) with probability p_i.

2. Generating a random variable from the selected distribution.

This method is beneficial when dealing with distributions that can be easily decomposed
into simpler components.
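The two-step recipe above can be sketched in a few lines of Python. This is an illustrative example of our own (an equal-weight mixture of Uniform(0, 1) and Uniform(2, 3)); the function name is hypothetical:

```python
import random

def sample_mixture(samplers, probs, rng=random):
    """Composition method: pick component i with probability p_i, then draw from it."""
    i = rng.choices(range(len(samplers)), weights=probs)[0]
    return samplers[i]()

random.seed(0)
components = [lambda: random.uniform(0.0, 1.0),   # f_{X_1}: Uniform(0, 1)
              lambda: random.uniform(2.0, 3.0)]   # f_{X_2}: Uniform(2, 3)
draws = [sample_mixture(components, [0.5, 0.5]) for _ in range(20_000)]
print(sum(draws) / len(draws))  # mixture mean is 0.5(0.5) + 0.5(2.5) = 1.5
```

Any existing per-component generator can be dropped into the `samplers` list, which is exactly the flexibility the method offers.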

3 Convolution Method
Definition 3.1. The Convolution Method is a technique for generating random variables whose distribution is the sum of multiple independent and identically distributed (i.i.d.) random variables. If X can be expressed as:

X = \sum_{j=1}^{n} Y_j,

where each Y_j is an i.i.d. random variable, then X can be generated by summing generated samples of Y_j.

3.1 Theory and Algorithm


The convolution method is particularly useful when the desired distribution can be represented as the convolution of simpler distributions.

Algorithm: Convolution Method

1. Generate Samples: Generate n i.i.d. random variables Y_1, Y_2, . . . , Y_n from the distribution of Y.

2. Compute Sum: Calculate X = Y_1 + Y_2 + · · · + Y_n.

3. Output: The value X is a random variable following the desired distribution.
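As an aside not covered in the lecture, a standard use of this algorithm is generating an Erlang(n, λ) variable as the sum of n i.i.d. Exponential(λ) variables. A minimal sketch (function name ours):

```python
import math
import random

def erlang(n, lam, rng=random.random):
    """Convolution method: Erlang(n, lam) is the sum of n i.i.d. Exponential(lam) draws."""
    return sum(-math.log(1.0 - rng()) / lam for _ in range(n))

random.seed(2)
draws = [erlang(3, 1.0) for _ in range(50_000)]
print(sum(draws) / len(draws))  # should be close to the Erlang mean n/lam = 3
```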

3.2 Example: Generating a Triangular Distribution


Example 3.1. Consider the triangular distribution with the probability density function (PDF):

f(x) =
    x + 1,    if −1 ≤ x ≤ 0,
    −x + 1,   if 0 < x ≤ 1,
    0,        otherwise.

Our goal is to generate random variables that follow this triangular distribution using the convolution method.

3.2.1 Step 1: Expressing the Distribution

First, we observe that the triangular distribution can be obtained by convolving two uniform
distributions. Let U1 and U2 be independent uniform random variables on the interval [0, 1].
Define S = U1 + U2 , which ranges from 0 to 2.

Figure 1: Plot of f(x)

3.2.2 Step 2: Finding the Cumulative Distribution Function and PDF of S

The cumulative distribution function (CDF) of S is given by:

F_S(s) =
    0,                      if s < 0,
    (1/2) s^2,              if 0 ≤ s ≤ 1,
    1 − (1/2)(2 − s)^2,     if 1 < s ≤ 2,
    1,                      if s > 2.

Differentiating the CDF, we obtain the probability density function (PDF) of S:

f_S(s) =
    0,        if s < 0,
    s,        if 0 ≤ s ≤ 1,
    2 − s,    if 1 < s ≤ 2,
    0,        if s > 2.

Figure 2: Densities of U1 and U2 ((a) density of U1; (b) density of U2)

This PDF represents a triangular distribution on the interval [0, 2].

Figure 3: Joint density, where s1 ∈ [0, 1] and s2 ∈ [1, 2]

3.2.3 Step 3: Shifting the Distribution

To obtain the desired triangular distribution on [−1, 1], we shift S by subtracting 1:

X = S − 1.

Now, X ranges from −1 to 1, and its PDF matches the target triangular distribution.

3.2.4 Step 4: Simplifying the Expression

Alternatively, we can express X as:

X = (U1 − 0.5) + (U2 − 0.5) = Y1 + Y2 ,

where Y1 and Y2 are i.i.d. uniform random variables on [−0.5, 0.5]. This emphasizes that
X is the sum of two i.i.d. random variables.
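The whole construction boils down to two uniform draws and two subtractions. A minimal sketch of the generator (function name ours):

```python
import random

def triangular_conv(rng=random.random):
    """Convolution method: X = (U1 - 0.5) + (U2 - 0.5) is triangular on [-1, 1], peak at 0."""
    return (rng() - 0.5) + (rng() - 0.5)

random.seed(3)
draws = [triangular_conv() for _ in range(100_000)]
print(sum(draws) / len(draws))  # should be close to the mean, 0
```

No inverse CDF, comparison, or branch is needed, which is precisely the advantage claimed above.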

3.2.5 Advantages of the Convolution Method

Using the convolution method in this example has several advantages:

• It avoids complex mathematical operations such as finding inverse functions.

• The method requires generating uniform random variables and performing simple
arithmetic operations.

• It eliminates the need for comparisons or conditional statements in the algorithm.

4 Acceptance-Rejection Method

Definition 4.1. The Acceptance-Rejection Method is a technique for generating random variables with a specified distribution by using a proposal distribution and an acceptance criterion based on the ratio of the target and envelope functions.

4.1 Theory and Algorithm


The acceptance-rejection method is particularly useful when direct methods for generating
random variables are difficult or inefficient.

Algorithm: Acceptance-Rejection Method

1. Select an Envelope Function t(x): Choose t(x) ≥ f(x) for all x.

2. Compute Constant c: Calculate c = \int_{-∞}^{∞} t(x) dx.

3. Define Proposal Distribution r(x): Set r(x) = t(x)/c.

4. Generate Candidate Y: Generate Y from the distribution r(x).

5. Generate Uniform Variable U: Generate U ∼ Uniform(0, 1), independent of Y.

6. Acceptance Criterion: Accept Y as X if U ≤ f(Y)/t(Y); otherwise, reject Y and return to Step 4.

4.2 Rationale Behind the Method


The acceptance probability f(Y)/t(Y) ensures that the accepted random variables have the desired distribution f(x). The method works because the joint density of Y and U is uniform over the region where U ≤ f(Y)/t(Y), which corresponds to the area under f(x).

Figure 4: Acceptance-Rejection Example

4.3 Example: Generating a Complex Distribution
Example 4.1. Consider the target PDF:

f(x) =
    60x^3(1 − x)^2,   if 0 ≤ x ≤ 1,
    0,                otherwise.

Our goal is to generate random variables following this distribution using the acceptance-
rejection method.

4.3.1 Step 1: Choosing the Envelope Function

First, we observe that f(x) attains its maximum value at x = 0.6:

f(0.6) = 60 × (0.6)^3 × (1 − 0.6)^2 = 2.0736.

We choose:

t(x) = 2.0736, ∀x ∈ [0, 1],

ensuring t(x) ≥ f(x) for all x ∈ [0, 1].

4.3.2 Step 2: Computing the Constant c

Since t(x) is constant over [0, 1]:

c = \int_0^1 t(x) dx = 2.0736 × 1 = 2.0736.

4.3.3 Step 3: Defining the Proposal Distribution

The proposal distribution r(x) is:

r(x) = t(x)/c = 2.0736/2.0736 = 1, ∀x ∈ [0, 1].

Thus, r(x) is the uniform distribution on [0, 1].

4.3.4 Step 4: Generating Candidate Random Variables

We generate Y ∼ Uniform(0, 1) as our candidate random variable.

4.3.5 Step 5: Acceptance Criterion

For each Y, generate U ∼ Uniform(0, 1) independently. We accept Y if:

U ≤ f(Y)/t(Y) = 60Y^3(1 − Y)^2 / 2.0736.

If U satisfies the inequality, we set X = Y; otherwise, we reject Y and repeat the process.
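Steps 4 and 5 can be sketched directly in code. The function name is ours; for context, f(x) = 60x^3(1 − x)^2 is the Beta(4, 3) density, whose mean 4/7 ≈ 0.571 gives a convenient check:

```python
import random

def sample_target(rng=random):
    """Acceptance-rejection for f(x) = 60 x^3 (1-x)^2 on [0, 1], envelope t(x) = 2.0736."""
    while True:
        y = rng.random()                           # Step 4: candidate from r = Uniform(0, 1)
        u = rng.random()                           # Step 5: independent Uniform(0, 1)
        if u <= 60 * y**3 * (1 - y)**2 / 2.0736:   # Step 6: accept if U <= f(Y)/t(Y)
            return y

random.seed(4)
draws = [sample_target() for _ in range(20_000)]
print(sum(draws) / len(draws))  # should be close to 4/7 ~ 0.571
```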

Figure 5: Graph of the target PDF f(x) and the envelope function t(x) over [0, 1].

4.3.6 Efficiency Considerations

The acceptance probability equals 1/c, so it is inversely proportional to c:

P(accept) = 1/c = 1/2.0736 ≈ 0.482.

On average, we accept approximately 48.2% of the candidate values.
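The 1/c figure is easy to confirm empirically; the short simulation below (function name ours) counts how many candidates pass the acceptance test for this example:

```python
import random

def empirical_acceptance_rate(trials=100_000, seed=5):
    """Estimate P(accept) for f(x) = 60 x^3 (1-x)^2 with constant envelope t(x) = 2.0736."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        y, u = rng.random(), rng.random()
        if u <= 60 * y**3 * (1 - y)**2 / 2.0736:
            accepted += 1
    return accepted / trials

rate = empirical_acceptance_rate()
print(rate)  # should be close to 1/2.0736 ~ 0.482
```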

4.3.7 Choosing the Envelope Function

Selecting an appropriate t(x) is crucial. A tighter envelope function increases the acceptance
rate but may complicate the generation of Y .

4.4 Proof of Acceptance-Rejection Method


We aim to prove that the generated random variable X has the desired cumulative distribution function (CDF), i.e.,

P(generated X ≤ x) = P(Y ≤ x | acceptance).

From the definition of conditional probability, this equals:

P(generated X ≤ x) = P(acceptance, Y ≤ x) / P(acceptance).    (1)

Next, we compute P(acceptance | Y = y):

P(acceptance | Y = y) = P(U ≤ f(y)/t(y)) = f(y)/t(y),    (2)

since P(U ≤ k) = k by the definition of the uniform random variable U on [0, 1].

Numerator:

P(acceptance, Y ≤ x)
  = \int_{-∞}^{∞} P(acceptance, Y ≤ x | Y = y) r(y) dy
  = \int_{-∞}^{x} P(acceptance | Y = y) r(y) dy + \int_{x}^{∞} P(acceptance, Y ≤ x | Y = y) r(y) dy
  = \int_{-∞}^{x} P(acceptance | Y = y) r(y) dy + 0    (since y > x ⟹ P(Y ≤ x | Y = y) = 0)
  = \int_{-∞}^{x} (f(y)/t(y)) r(y) dy
  = \int_{-∞}^{x} (f(y)/t(y)) · (t(y)/c) dy    (since r(y) = t(y)/c)
  = (1/c) \int_{-∞}^{x} f(y) dy
  = F_X(x)/c.

Denominator:

P(acceptance)
  = \int_{-∞}^{∞} P(acceptance | Y = y) r(y) dy
  = \int_{-∞}^{∞} (f(y)/t(y)) r(y) dy
  = \int_{-∞}^{∞} (f(y)/t(y)) · (t(y)/c) dy
  = (1/c) \int_{-∞}^{∞} f(y) dy
  = 1/c    (since \int_{-∞}^{∞} f(y) dy = 1).

Therefore, the desired probability is:

P(generated X ≤ x) = (F_X(x)/c) / (1/c) = F_X(x).    (3)

Hence, we have proved that the generated X has the desired CDF F_X(x).

5 Summary and Comparative Analysis of Random Number Generation Techniques

Table 1: Summary of Random Number Generation Methods

Inverse Transformation Method
Introduction: Utilizes the inverse of the cumulative distribution function (CDF) to transform uniform random variables into the target distribution. By generating U ∼ Uniform(0, 1) and computing X = F^{-1}(U), we obtain X with the desired distribution.
Advantages: Simple and intuitive; directly relates uniform random variables to the target distribution.
Limitations: Requires the analytical form of the inverse CDF, which may not be available for complex distributions.
Recommendation: Best suited for distributions with known and easily invertible CDFs, such as exponential, uniform, or logistic distributions.

Composition Method
Introduction: Effective when the target distribution can be expressed as a mixture of simpler component distributions. It involves selecting one of the components based on predefined probabilities and generating a random variable from that distribution.
Advantages: Utilizes existing generators for component distributions; flexible in handling mixed distributions.
Limitations: Requires that the target distribution can be decomposed into a finite mixture of simpler distributions.
Recommendation: Appropriate when the target distribution naturally decomposes into a mixture model or when simulating systems with multiple modes.

Convolution Method
Introduction: Applicable when the desired distribution is the sum of independent random variables. By generating samples from the component distributions and summing them, we obtain a random variable following the target distribution.
Advantages: Effective for distributions that are sums of independent random variables; easy to implement.
Limitations: May be computationally intensive if a large number of summations is required.
Recommendation: Suitable when the target distribution is the result of a small number of additive random processes.

Acceptance-Rejection Method
Introduction: A versatile technique suitable for complex distributions where direct sampling is challenging. It involves generating candidate samples from an easily sampled proposal distribution and accepting or rejecting them based on a criterion that ensures the accepted samples follow the target distribution.
Advantages: Highly flexible; does not require the inverse CDF; can handle complex and multi-dimensional distributions.
Limitations: Efficiency heavily depends on the choice of the proposal distribution and the acceptance rate; may be inefficient if the acceptance probability is low.
Recommendation: Preferred when other methods are impractical; selecting a proposal distribution close to the target distribution improves efficiency.

6 Appendix
6.1 Pseudo Random Number Generator
Pseudo-random Number Generators (PRNGs) are a fundamental component in many aspects of cybersecurity, particularly in cryptographic applications, where they are used to generate cryptographic keys, nonces, and other elements that rely on unpredictability. The security of the PRNG is critical: if the sequence of numbers it generates can be predicted, it may compromise the security of the cryptographic system it is used to protect.
A strong pseudo-random number generator must pass various statistical tests for randomness and also be unpredictable to an attacker who has knowledge of some or even many preceding numbers in the sequence. However, PRNGs are not suitable for all applications due to their deterministic nature; where true randomness is required, True Random Number Generators (TRNGs) are needed.

6.1.1 Key Characteristics

• Deterministic: PRNGs are predictable if the initial seed value is known.

• Efficient: They can quickly generate long sequences of numbers.

• Repeatable: Given the same seed, a PRNG will always produce the same sequence
of numbers.

• Statistical Randomness: High-quality PRNGs produce sequences that pass tests which suggest randomness, despite being generated by deterministic processes.
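A classic concrete PRNG that exhibits all of these characteristics is the linear congruential generator (LCG), state_{k+1} = (a · state_k + c) mod m. The sketch below (not from the lecture) uses the well-known Numerical Recipes parameters; the seed 42 is arbitrary:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential generator: deterministic, fast, repeatable."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale the integer state into [0, 1)

# Repeatability: two generators with the same seed produce identical sequences.
gen1, gen2 = lcg(42), lcg(42)
seq1 = [next(gen1) for _ in range(5)]
seq2 = [next(gen2) for _ in range(5)]
print(seq1 == seq2)  # True
```

The identical sequences demonstrate the deterministic and repeatable properties at once, and also why such a generator must never be used where an attacker could learn the seed.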

7 Conclusion
Understanding the principles and practical considerations of each random number generation
method is crucial for selecting the most appropriate technique for a given problem. By
aligning the method with the characteristics of the target distribution and computational
constraints, we can achieve efficient and accurate simulation outcomes. Mastery of these
techniques enhances our ability to model complex stochastic systems and contributes to
more robust and reliable simulations. [1]

References
[1] Sencode.co.uk, "What is a pseudorandom number generator," 2021.
