Modeling and Simulation Scribe2 October 23
Lecture #X
Scribe: 17
Your Name
Department
Lecture: X Date
Contents

1 Introduction
3 Convolution Method
  3.1 Theory and Algorithm
  3.2 Example: Generating a Triangular Distribution
    3.2.1 Step 1: Expressing the Distribution
    3.2.2 Step 2: Finding the Cumulative Distribution Function and PDF of S
    3.2.3 Step 3: Shifting the Distribution
    3.2.4 Step 4: Simplifying the Expression
    3.2.5 Advantages of the Convolution Method
4 Acceptance-Rejection Method
  4.1 Theory and Algorithm
  4.2 Rationale Behind the Method
  4.3 Example: Generating a Complex Distribution
    4.3.1 Step 1: Choosing the Envelope Function
    4.3.2 Step 2: Computing the Constant c
    4.3.3 Step 3: Defining the Proposal Distribution
    4.3.4 Step 4: Generating Candidate Random Variables
    4.3.5 Step 5: Acceptance Criterion
    4.3.6 Efficiency Considerations
    4.3.7 Choosing the Envelope Function
  4.4 Proof of Acceptance-Rejection Method
5 Summary and Comparative Analysis of Random Number Generation Techniques
6 Appendix
  6.1 Pseudo Random Number Generator
    6.1.1 Key Characteristics
7 Conclusion
1 Introduction
In this lecture, we explore various random number generation techniques, which are essential
components in computer simulation. We begin with the basic inverse transformation method
and progress to more sophisticated approaches such as the composition, convolution, and
acceptance-rejection methods. Throughout the lecture, illustrative examples are provided
to reinforce understanding. We emphasize the strengths and limitations of each method,
highlighting the importance of selecting an appropriate technique based on the specific
random variable being generated. Additionally, we touch upon the use of simulation as
a valuable tool when analytical models are either unavailable or imprecise, especially in
situations with high injection rates.
The inverse transformation method sets
\[
X = F_X^{-1}(U),
\]
where $U$ is a uniform random variable on $[0, 1]$. This method is straightforward and easy to implement when $F_X^{-1}$ can be expressed in closed form. However, it may not be applicable when the inverse CDF is difficult to obtain analytically.
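As a concrete illustration, a minimal NumPy sketch of the inverse transform for the exponential distribution, whose inverse CDF has the closed form $F_X^{-1}(u) = -\ln(1-u)/\lambda$, might look like:
\begin{verbatim}
import numpy as np

def exponential_inverse_transform(lam, size, rng=None):
    """Draw Exp(lam) variates via the inverse transform X = F_X^{-1}(U)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(size)               # U ~ Uniform(0, 1)
    return -np.log(1.0 - u) / lam      # F_X^{-1}(u) = -ln(1 - u) / lam

# Usage: the sample mean should be close to 1/lam = 0.5.
x = exponential_inverse_transform(lam=2.0, size=100_000)
print(x.mean())
\end{verbatim}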
The composition method applies when the target density can be written as a mixture
\[
f_X(x) = \sum_{i=1}^{n} p_i f_{X_i}(x),
\]
where $p_i \ge 0$ with $\sum_{i=1}^{n} p_i = 1$ and the $f_{X_i}(x)$ are PDFs of simpler distributions; then X can be generated by first selecting an index $i$ with probability $p_i$ and then drawing X from $f_{X_i}(x)$. This method is beneficial when dealing with distributions that can be easily decomposed into simpler components.
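A minimal sketch of this two-step procedure follows; the particular mixture used in the usage example is an arbitrary choice for illustration.
\begin{verbatim}
import numpy as np

def composition_sample(probs, samplers, size, rng=None):
    """Composition method for f_X(x) = sum_i p_i * f_{X_i}(x).

    probs    -- mixture weights p_i (non-negative, summing to 1)
    samplers -- samplers[i](rng) draws one value from f_{X_i}
    """
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(size)
    for k in range(size):
        i = rng.choice(len(probs), p=probs)  # step 1: pick component i with probability p_i
        out[k] = samplers[i](rng)            # step 2: draw X from f_{X_i}
    return out

# Illustrative mixture: 0.3 * Uniform(0, 1) + 0.7 * Exponential(mean 2)
x = composition_sample([0.3, 0.7],
                       [lambda r: r.random(), lambda r: r.exponential(2.0)],
                       size=10_000)
\end{verbatim}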
3 Convolution Method
Definition 3.1. The Convolution Method is a technique for generating random
variables whose distribution is the sum of multiple independent and identically dis-
tributed (i.i.d.) random variables. If X can be expressed as:
\[
X = \sum_{j=1}^{n} Y_j,
\]
where the $Y_j$ are i.i.d. random variables that are easy to generate individually, then X can be generated by sampling each $Y_j$ and summing.
Consider the triangular distribution on $[-1, 1]$ with PDF $f_X(x) = 1 - |x|$. Our goal is to generate random variables that follow this triangular distribution using the convolution method.
First, we observe that the triangular distribution can be obtained by convolving two uniform
distributions. Let U1 and U2 be independent uniform random variables on the interval [0, 1].
Define S = U1 + U2 , which ranges from 0 to 2.
Figure: the target triangular density $f_X(x)$ on $[-1, 1]$.
Figure: (a) density of $U_1$; (b) density of $U_2$, each uniform on $[0, 1]$.
Figure: regions of the unit square $[0, 1]^2$ used to compute $P(S \le s)$. Carrying out this computation gives the density $f_S(s) = s$ for $0 \le s \le 1$ and $f_S(s) = 2 - s$ for $1 \le s \le 2$, a triangle peaked at $s = 1$.
To shift the support onto $[-1, 1]$, define
\[
X = S - 1.
\]
Now, X ranges from $-1$ to $1$, and its PDF matches the target triangular distribution.
Equivalently, $X = Y_1 + Y_2$ with $Y_j = U_j - 0.5$, where $Y_1$ and $Y_2$ are i.i.d. uniform random variables on $[-0.5, 0.5]$. This emphasizes that X is the sum of two i.i.d. random variables.
• The method requires generating uniform random variables and performing simple
arithmetic operations.
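A minimal NumPy sketch of this construction (sum two uniforms, then shift) might look like:
\begin{verbatim}
import numpy as np

def triangular_by_convolution(size, rng=None):
    """Triangular variates on [-1, 1] as X = U1 + U2 - 1 with U1, U2 ~ Uniform(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    s = rng.random(size) + rng.random(size)   # S = U1 + U2, triangular on [0, 2]
    return s - 1.0                            # X = S - 1, triangular on [-1, 1]

# Usage: the samples should lie in [-1, 1] with mean near 0.
x = triangular_by_convolution(100_000)
print(x.min(), x.max(), x.mean())
\end{verbatim}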
4 Acceptance-Rejection Method
Figure: a target PDF $f(x)$ together with an envelope function $t(x)$ that lies above it.
4.3 Example: Generating a Complex Distribution
Example 4.1. Consider the target PDF:
\[
f(x) =
\begin{cases}
60x^3(1 - x)^2, & \text{if } 0 \le x \le 1, \\
0, & \text{otherwise.}
\end{cases}
\]
Our goal is to generate random variables following this distribution using the acceptance-
rejection method.
We choose the constant envelope
\[
t(x) = 2.0736, \quad \forall x \in [0, 1],
\]
which is the maximum of $f(x)$ on $[0, 1]$ (attained at $x = 0.6$), so that $t(x) \ge f(x)$ everywhere. With $c = \int_0^1 t(x)\,dx = 2.0736$, the proposal density is
\[
r(x) = \frac{t(x)}{c} = \frac{2.0736}{2.0736} = 1, \quad \forall x \in [0, 1].
\]
Thus, $r(x)$ is the uniform distribution on $[0, 1]$.
We then generate a candidate $Y \sim r(x)$ and an independent $U \sim \text{Uniform}(0, 1)$, and accept when
\[
U \le \frac{f(Y)}{t(Y)} = \frac{60Y^3(1 - Y)^2}{2.0736}.
\]
If U satisfies the inequality, we set X = Y; otherwise, we reject Y and repeat the process.
Figure 5: Graph of the target PDF f (x) and the envelope function t(x) over [0, 1].
\[
P(\text{accept}) = \frac{1}{c} = \frac{1}{2.0736} \approx 0.482.
\]
On average, we accept approximately 48.2% of the candidate values.
Selecting an appropriate t(x) is crucial. A tighter envelope function increases the acceptance
rate but may complicate the generation of Y .
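A minimal sketch of this acceptance-rejection scheme for the example above, which also reports the empirical acceptance rate, might look like:
\begin{verbatim}
import numpy as np

def accept_reject_example(size, rng=None):
    """Acceptance-rejection for f(x) = 60 x^3 (1 - x)^2 on [0, 1] with t(x) = 2.0736."""
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(size)
    accepted, proposed = 0, 0
    while accepted < size:
        y = rng.random()      # candidate Y ~ r(x) = Uniform(0, 1)
        u = rng.random()      # independent U ~ Uniform(0, 1)
        proposed += 1
        if u <= 60.0 * y**3 * (1.0 - y)**2 / 2.0736:   # accept when U <= f(Y) / t(Y)
            out[accepted] = y
            accepted += 1
    print(f"empirical acceptance rate: {accepted / proposed:.3f}")   # about 0.482 = 1/c
    return out

x = accept_reject_example(50_000)
\end{verbatim}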
4.4 Proof of Acceptance-Rejection Method

The generated value X is the candidate Y conditioned on acceptance, so
\[
P(\text{generated } X \le x) = P(Y \le x \mid \text{acceptance}) = \frac{P(\text{acceptance},\, Y \le x)}{P(\text{acceptance})}. \tag{1}
\]
Next, we compute $P(\text{acceptance} \mid Y = y)$:
\[
P(\text{acceptance} \mid Y = y) = P\!\left(U \le \frac{f(y)}{t(y)}\right) = \frac{f(y)}{t(y)}, \tag{2}
\]
since $P(U \le k) = k$ by the definition of the uniform random variable U on $[0, 1]$.
Numerator:
\begin{align*}
P(\text{acceptance},\, Y \le x) &= \int_{-\infty}^{\infty} P(\text{acceptance},\, Y \le x \mid Y = y)\, r(y)\, dy \\
&= \int_{-\infty}^{x} P(\text{acceptance} \mid Y = y)\, r(y)\, dy + \int_{x}^{\infty} P(\text{acceptance},\, Y \le x \mid Y = y)\, r(y)\, dy \\
&= \int_{-\infty}^{x} P(\text{acceptance} \mid Y = y)\, r(y)\, dy + 0 \quad \left(\text{since } y > x \implies P(Y \le x \mid Y = y) = 0\right) \\
&= \int_{-\infty}^{x} \frac{f(y)}{t(y)}\, r(y)\, dy \\
&= \int_{-\infty}^{x} \frac{f(y)}{t(y)} \cdot \frac{t(y)}{c}\, dy \quad \left(\text{since } r(y) = \frac{t(y)}{c}\right) \\
&= \frac{1}{c} \int_{-\infty}^{x} f(y)\, dy \\
&= \frac{F_X(x)}{c}.
\end{align*}
Denominator:
\begin{align*}
P(\text{acceptance}) &= \int_{-\infty}^{\infty} P(\text{acceptance} \mid Y = y)\, r(y)\, dy \\
&= \int_{-\infty}^{\infty} \frac{f(y)}{t(y)}\, r(y)\, dy \\
&= \int_{-\infty}^{\infty} \frac{f(y)}{t(y)} \cdot \frac{t(y)}{c}\, dy \\
&= \frac{1}{c} \int_{-\infty}^{\infty} f(y)\, dy \\
&= \frac{1}{c} \quad \left(\text{since } \int_{-\infty}^{\infty} f(y)\, dy = 1\right).
\end{align*}
Combining the two,
\[
P(\text{generated } X \le x) = \frac{F_X(x)/c}{1/c} = F_X(x). \tag{3}
\]
Hence, we have proved that the generated X has the desired CDF FX (x).
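This result can also be checked numerically: integrating $f(y) = 60y^3(1-y)^2$ gives $F_X(x) = 15x^4 - 24x^5 + 10x^6$ on $[0, 1]$, and the empirical CDF of accepted samples should match it. A short, self-contained sketch of such a check:
\begin{verbatim}
import numpy as np

# Vectorized acceptance-rejection, then compare the empirical CDF of the
# accepted samples against F_X(x) = 15x^4 - 24x^5 + 10x^6.
rng = np.random.default_rng(0)
y = rng.random(1_000_000)                                  # candidates Y ~ Uniform(0, 1)
u = rng.random(1_000_000)                                  # independent U ~ Uniform(0, 1)
accepted = y[u <= 60.0 * y**3 * (1.0 - y)**2 / 2.0736]     # keep Y where U <= f(Y) / t(Y)

grid = np.linspace(0.0, 1.0, 11)
empirical = np.array([(accepted <= g).mean() for g in grid])
theoretical = 15 * grid**4 - 24 * grid**5 + 10 * grid**6
print(np.abs(empirical - theoretical).max())               # close to 0 for large samples
\end{verbatim}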
5 Summary and Comparative Analysis of Random Number Generation Techniques
6 Appendix
6.1 Pseudo Random Number Generator
Pseudo-random Number Generators (PRNGs) are a fundamental component in many aspects of cybersecurity, particularly in cryptographic applications, where they are used to generate cryptographic keys, nonces, and other elements that rely on unpredictability. The security of the PRNG is critical: if the sequence of numbers it generates can be predicted, the security of the cryptographic system it is used to protect may be compromised.
A strong Pseudo-random Number Generator must pass various statistical tests for ran-
domness and also be unpredictable to an attacker who has knowledge of some or even many
preceding numbers in the sequence. However, Pseudo-random Number Generators are not
suitable for all applications due to their deterministic nature; where true randomness is
required, True Random Number Generators (TRNGs) are needed.
6.1.1 Key Characteristics

• Repeatable: Given the same seed, a PRNG will always produce the same sequence of numbers, as illustrated below.
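A quick NumPy check of this repeatability property:
\begin{verbatim}
import numpy as np

# The same seed reproduces the same pseudo-random sequence.
a = np.random.default_rng(seed=42).random(5)
b = np.random.default_rng(seed=42).random(5)
print(np.array_equal(a, b))   # True: identical seed, identical sequence
\end{verbatim}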
7 Conclusion
Understanding the principles and practical considerations of each random number generation
method is crucial for selecting the most appropriate technique for a given problem. By
aligning the method with the characteristics of the target distribution and computational
constraints, we can achieve efficient and accurate simulation outcomes. Mastery of these
techniques enhances our ability to model complex stochastic systems and contributes to
more robust and reliable simulations. [1]
References
[1] Sencode.co.uk, "What is a pseudo-random number generator?", 2021.