
Chapter 14

Introduction to Simulation Using Python

E. Sink, A. Rakhshan, and H. Pishro-Nik

14.1 Analysis versus Computer Simulation


A computer simulation is a computer program that attempts to represent the real world based
on a model. The accuracy of the simulation depends on the precision of the model. Suppose that
the probability of heads in a coin toss experiment is unknown. We can perform the experiment
of tossing the coin n times repetitively to approximate the probability of heads:

P(H) ≈ (Number of times heads observed) / (Number of times the experiment executed)

However, for many practical problems it is not possible to determine the probabilities by
executing experiments a large number of times. With today's computers' processing capabilities,
we only need a high-level language, such as Python, which can generate random numbers, to
deal with these problems.
In this chapter, we present basic methods of generating random variables and simulating
probabilistic systems. The provided algorithms are general and can be implemented in any
computer language. However, to have concrete examples, we provide the actual code in Python.
If you are unfamiliar with Python, you should still be able to understand the algorithms.

14.2 Introduction: What is Python?


Python is a high-level language that assists developers and researchers in solving problems with
fewer lines of code than traditional programming languages, such as C/C++ or Java, by leveraging
its comprehensive standard library. Python can be utilized for a multitude of applications,
ranging from web development and data analysis to artificial intelligence and scientific
computations. Lists and dictionaries are fundamental data structures in Python, and a foundational
understanding of such basic programming concepts is advantageous for harnessing Python's full
capabilities. While this text presumes you are acquainted with basic Python commands, we will explore
not only how to generate probability distributions using modules like Numpy, but also how to
derive these distributions from the uniform distribution.


In this chapter, we will use the Numpy and Scipy libraries for mathematical functions and
Matplotlib’s Pyplot interface for plotting, though there are many good choices available. Many
Python distributions will include these modules by default. If yours does not, they can be
installed from your operating system command line (Command Prompt on Windows, Terminal
on MacOS, etc.) using pip. For example, to install Numpy, use

python -m pip install numpy

To import these modules, use the following Python code:

import numpy as np
import scipy as sp
import matplotlib.pyplot as plt

We will also need an instance of numpy’s Generator class:

rng = np.random.default_rng()

Make sure to include these commands at the beginning of your code.

14.3 Discrete and Continuous Random Number Generators


Most programming languages can deliver samples from the uniform distribution to us. (In
reality, the given values are pseudorandom instead of being completely random.) The rest
of this section shows how to convert uniform random variables to any other desired random
variable. The Python code for generating uniform random variables is:

rng.random()

which returns a pseudorandom value drawn from the standard uniform distribution on the
half-open interval [0, 1). Also,

rng.random(n)

returns an array of n independent pseudorandom values drawn from the standard uniform
distribution on the half-open interval [0, 1), while

rng.random([m, n])

returns an m × n matrix.

14.3.1 Generating Discrete Probability Distributions from Uniform Distribution

Let's see a few examples of generating certain simple distributions:

Example 1. (Bernoulli) Simulate tossing a coin with probability of heads p.


Solution: Let U be a Uniform(0, 1) random variable. We can write a Bernoulli random
variable X as:

X = 1 if U < p
    0 if U ≥ p

Thus,

P(H) = P(X = 1) = P(U < p) = p

Therefore, X has Bernoulli(p) distribution. The Python code for Bernoulli(p) is:

def bernoulli(p):
    U = rng.random()
    return (U < p)

Since the “random” command returns a number between 0 and 1, we divided the interval [0, 1]
into two parts, p and 1 − p in length. Then, the value of X is determined based on where the
number generated from the uniform distribution fell.

Example 2. (Coin Toss Simulation) Write code to simulate tossing a fair coin to see how the
law of large numbers works.
Solution: You can write:

n = 1000
U = rng.random(n)
toss = (U < 0.5)
avg = [sum(toss[:i])/i for i in range(1, n+1)]
plt.xlabel("Coin Toss Number")
plt.ylabel("Proportion of Heads")
plt.axis([0, n, 0, 1])
plt.plot(range(1, n+1), avg)
plt.show()

If you run the above code to compute the proportion of 1’s in the variable “toss,” the result will
look like Figure 14.1. You can also assume the coin is biased with probability of heads equal to
0.6 by replacing the third line of the previous code with:

toss = (U < 0.6)



Figure 14.1: Python coin toss simulation

Example 3. (Binomial) Generate a Binomial(50, 0.2) random variable.


Solution: To solve this problem, we can use the following lemma:

Lemma 1. If X1 , X2 , ..., Xn are independent Bernoulli(p) random variables, then the random
variable X defined by X = X1 + X2 + ... + Xn has a Binomial(n, p) distribution.

To generate a random variable X ∼ Binomial(n, p), we can toss a coin n times and count the
number of heads. Counting the number of heads is exactly the same as finding X1 +X2 +...+Xn ,
where each Xi is equal to 1 if the corresponding coin toss results in heads and 0 otherwise.
Since we know how to generate Bernoulli random variables, we can generate a Binomial(n, p)
by adding n independent Bernoulli(p) random variables.

def binomial(n, p):
    U = rng.random(n)
    return sum(U < p)

Generating Arbitrary Discrete Distributions


In general, we can generate any discrete random variable similar to the above examples using
the following algorithm. Suppose we would like to simulate the discrete random variable X with
range RX = {x1, x2, ..., xn} and P(X = xj) = pj, so Σj pj = 1.
To achieve this, first we generate a random number U (i.e., U ∼ Uniform(0, 1)). Next, we
divide the interval [0, 1] into subintervals such that the jth subinterval has length pj (Figure
14.2). Assume

X = x0   if U < p0
    x1   if p0 ≤ U < p0 + p1
    ...
    xj   if Σ_{k=0}^{j−1} pk ≤ U < Σ_{k=0}^{j} pk
    ...

In other words,

X = xj if F(xj−1) ≤ U < F(xj),

where F(x) is the desired CDF. We have

P(X = xj) = P(Σ_{k=0}^{j−1} pk ≤ U < Σ_{k=0}^{j} pk) = pj


Figure 14.2: Generating discrete random variables

Example 4. Give an algorithm to simulate the value of a random variable X such that

P (X = 1) = 0.35
P (X = 2) = 0.15
P (X = 3) = 0.4
P (X = 4) = 0.1

Solution: We divide the interval [0, 1] into subintervals as follows:

A0 = [0, 0.35)
A1 = [0.35, 0.5)
A2 = [0.5, 0.9)
A3 = [0.9, 1)
6 CHAPTER 14. INTRODUCTION TO SIMULATION USING PYTHON

Subinterval Ai has length pi . We obtain a uniform number U . If U belongs to Ai , then X = xi .

P(X = xi) = P(U ∈ Ai) = pi

def X():
    A = [0.35, 0.5, 0.9, 1]
    x = [1, 2, 3, 4]
    j = 0
    U = rng.random()
    while U >= A[j]:
        j = j + 1
    return x[j]

The same result can be achieved using the Numpy choice function:

rng.choice([1,2,3,4], p=[0.35,0.15,0.4,0.1])

14.3.2 Generating Continuous Probability Distributions from the Uniform Distribution: Inverse Transformation Method

At least in principle, there is a way to convert a uniform distribution to any other distribution.
Let's see how we can do this. Let U ∼ Uniform(0, 1) and F be a CDF. Also, assume F is
continuous and strictly increasing as a function.

Theorem 1. Let U ∼ Uniform(0, 1) and F be a CDF which is strictly increasing. Also,
consider a random variable X defined as

X = F⁻¹(U).

Then,

X ∼ F   (the CDF of X is F).

Proof:

P(X ≤ x) = P(F⁻¹(U) ≤ x)
         = P(U ≤ F(x))   (F is increasing)
         = F(x)

Now, let's see some examples. Note that to generate any continuous random variable X with
the continuous CDF F, F⁻¹(U) has to be computed.
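The theorem can be turned directly into a small generic helper. A minimal sketch (the name inverse_transform and the Exponential(2) example target are ours, not from the text):

```python
import numpy as np

rng = np.random.default_rng()

def inverse_transform(F_inv, size=None):
    # Draw U ~ Uniform(0,1) and push it through the inverse CDF.
    U = rng.random(size)
    return F_inv(U)

# Example target: Exponential(2), whose inverse CDF is -ln(1-u)/2.
samples = inverse_transform(lambda u: -np.log(1 - u) / 2, size=100_000)
```

The sample mean of `samples` should be close to 1/2, the mean of an Exponential(2) random variable.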

Example 5. (Exponential) Generate an Exponential(1) random variable.


Solution: To generate an Exponential random variable with parameter λ = 1, we proceed
as follows:

F(x) = 1 − e^{−x},   x > 0

U ∼ Uniform(0, 1)
X = F⁻¹(U) = − ln(1 − U)
X ∼ F.

This formula can be simplified since

(1 − U) ∼ Uniform(0, 1)

Figure 14.3: Symmetry of uniform distribution

Hence we can simulate X using

X = − ln(U).

def exponential():
    U = rng.random()
    return -np.log(U)

Example 6. (Gamma) Generate a Gamma(20,1) random variable.


Solution: For this example, F⁻¹ is even more complicated than the gamma CDF F
itself. Instead of inverting the CDF, we generate a gamma random variable as a sum of n
independent exponential random variables.
Theorem 2. Let X1 , X2 , · · · , Xn be independent random variables with Xi ∼ Exponential(λ).
Define
Y = X1 + X2 + · · · + Xn
By the moment generating function method, you can show that Y has a gamma distribution
with parameters n and λ, i.e., Y ∼ Gamma(n, λ).
Having this theorem in mind, we can write:

def gamma(n, lam):
    return (-1/lam)*sum(np.log(rng.random(n)))

gamma(20, 1)

Example 7. (Poisson) Generate a Poisson random variable. Hint: In this example, use the fact
that the number of events in the interval [0, t] has Poisson distribution when the elapsed times
between the events are Exponential.
Solution: We want to employ the definition of Poisson processes. Assume N represents the
number of events (arrivals) in [0, t]. If the interarrival times are distributed exponentially (with
parameter λ) and independently, then the number of arrivals in [0, t], N, has a Poisson
distribution with parameter λt (Figure 14.4). Therefore, to solve this problem, we can repeat
generating Exponential(λ) random variables while their sum is not larger than 1 (choosing
t = 1). More specifically, we generate Exponential(λ) random variables

Ti = (−1/λ) ln(Ui)
by first generating uniform random variables Ui's. Then we define

X = max{j : T1 + · · · + Tj ≤ 1}

The algorithm can be simplified:

X = max{j : (−1/λ) ln(U1 · · · Uj) ≤ 1}

def poisson(lam):
    i = 0
    U = rng.random()
    Y = -(1/lam)*np.log(U)
    total = Y
    while total <= 1:
        U = rng.random()
        Y = -(1/lam)*np.log(U)
        total = total + Y
        i = i + 1
    return i


Figure 14.4: Poisson Random Variable

To finish this section, let's see how to convert uniform numbers to normal random variables.
The normal distribution is extremely important in science because it occurs so commonly.
Theorem 3. (Box-Muller transformation) We can generate a pair of independent normal
variables (Z1, Z2) by transforming a pair of independent Uniform(0, 1) random variables
(U1, U2) [1].

Z1 = √(−2 ln U1) cos(2πU2)
Z2 = √(−2 ln U1) sin(2πU2)

Example 8. (Box-Muller) Generate 5000 pairs of normal random variables and plot both
histograms.
Solution:

U = rng.random([2, 5000])
Z1 = np.sqrt(-2*np.log(U[0]))*np.cos(2*np.pi*U[1])
Z2 = np.sqrt(-2*np.log(U[0]))*np.sin(2*np.pi*U[1])
plt.hist((Z1, Z2))
plt.show()

Figure 14.5: Histogram of a pair of normal random variables generated by Box-Muller transformation

14.4 Python Commands for Special Distributions


In the previous section, we saw how we can sample arbitrary distributions using a uniform
random number generator. However, for many standard distributions, libraries like Numpy have
done this work for us. In practice, it is preferable to use the efficient, extensively documented,
and well-tested library functions when they are available.
The following tables catalog some of the distributions available in Numpy. The full list can
be found in the Numpy documentation [2]. They are implemented as methods of the Generator
class. In addition to the listed arguments, they all take an optional size parameter, which can
be an integer or a list of integers. For example, to generate an a × b matrix drawn from a
Binomial(n, p) distribution, we use the following code:

rng.binomial(n, p, [a, b])

Even more functionality is available in Scipy’s stats module. In addition to random sampling
of various distributions, Scipy can also compute the corresponding PMFs/PDFs, CDFs, and
more. The above Numpy command is equivalent to the Scipy command

sp.stats.binom.rvs(n, p, size=[a, b])

The value of the corresponding PMF at k is

sp.stats.binom.pmf(k, n, p)

and similarly for the CDF. See the Scipy documentation for details [3].

Name             Numpy (rng.*)    Scipy (sp.stats.*)
Binomial(n, p)   binomial(n,p)    binom
Poisson(λ)       poisson(lam)     poisson
Geometric(p)     geometric(p)     geom

Table 14.1: Discrete distributions. The Scipy name can be followed by .rvs(...) for a random sample,
.pmf(k,...) for the PMF at k, or .cdf(k,...) for the CDF at k.

Name             Numpy (rng.*)        Scipy (sp.stats.*)
Normal(µ, σ)     normal(mu,sigma)     norm
Exponential(λ)   exponential(1/lam)   expon

Table 14.2: Continuous distributions. PMFs are replaced by PDFs, given by .pdf(x,...). Note
that in both libraries, the parameter to the exponential distribution is the scale 1/λ, not the
rate λ itself.
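To illustrate the table entries, and in particular the scale 1/λ convention for the exponential distribution, a short check (the rate value λ = 2 is our own example choice):

```python
import numpy as np
import scipy as sp
import scipy.stats  # ensure the stats submodule is loaded

rng = np.random.default_rng()

lam = 2.0  # rate; both libraries expect the scale 1/lam

# Numpy sampling: the sample mean should be close to 1/lam = 0.5.
x = rng.exponential(1/lam, 100_000)

# Scipy gives the PDF and CDF as well, with the same scale convention.
pdf0 = sp.stats.expon.pdf(0, scale=1/lam)      # f(0) = lam
cdf1 = sp.stats.expon.cdf(1/lam, scale=1/lam)  # F(1/lam) = 1 - e^{-1}
```

Passing the rate λ instead of the scale 1/λ is an easy mistake to make with these functions; checking the sample mean against 1/λ is a quick sanity test.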

14.5 Exercises

1. Write Python programs to generate Geometric(p) and Negative Binomial(i,p) random
variables from a uniform random sample (i.e., without using the library functions from
Section 14.4).
Solution: To generate a Geometric random variable, we run a loop of Bernoulli trials until
the first success occurs. K counts the number of failures plus one success, which is equal
to the total number of trials.

def geometric(p):
    K = 1
    while (rng.random() > p):
        K = K + 1
    return K

Now, we can generate a Geometric random variable i times to obtain a Negative Binomial(i, p)
variable as a sum of i independent Geometric(p) random variables.

def negativebinomial(i, p):
    return sum([geometric(p) for j in range(i)])

2. (Poisson) Use the algorithm for generating discrete random variables to obtain a Poisson
random variable with parameter λ = 2.
Solution: We know a Poisson random variable takes all nonnegative integer values with
probabilities
pi = P(X = xi) = e^{−λ} λ^i / i!,   for i = 0, 1, 2, · · ·
To generate a Poisson(λ), first we generate a random number U. Next, we divide the
interval [0, 1] into subintervals such that the jth subinterval has length pj (Figure 14.2).
Assume

X = x0   if U < p0
    x1   if p0 ≤ U < p0 + p1
    ...
    xj   if Σ_{k=0}^{j−1} pk ≤ U < Σ_{k=0}^{j} pk
    ...

Here xi = i, so

X = i if p0 + · · · + pi−1 ≤ U < p0 + · · · + pi−1 + pi,

i.e., F(i − 1) ≤ U < F(i), where F is the CDF.

from math import factorial

def poisson(lam):
    def pdf(i):
        return np.exp(-lam)*lam**i/factorial(i)
    def cdf(i):
        return sum([pdf(j) for j in range(i+1)])
    U = rng.random()
    i = 0
    while U > cdf(i):
        i = i + 1
    return i

3. Explain how to generate a random variable with the density

f(x) = 2.5 x√x   for 0 < x < 1

if your random number generator produces a Standard Uniform random variable U. Hint:
use the inverse transformation method.
Solution:

FX(X) = X^{5/2} = U   (0 < x < 1)
X = U^{2/5}

def X():
    U = rng.random()
    return U**(2/5)

We have the desired distribution.


4. Use the inverse transformation method to generate a random variable having distribution
function
F(x) = (x² + x)/2,   0 ≤ x ≤ 1
Solution:

(X² + X)/2 = U
(X + 1/2)² − 1/4 = 2U
X + 1/2 = √(2U + 1/4)
X = √(2U + 1/4) − 1/2   (X, U ∈ [0, 1])

By generating a random number, U , we have the desired distribution.

def X():
    U = rng.random()
    return np.sqrt(2*U + 1/4) - 1/2

5. Let X have a standard Cauchy distribution.


FX(x) = (1/π) arctan(x) + 1/2
Assuming you have U ∼ Uniform(0, 1), explain how to generate X. Then, use this result
to produce 1000 samples of X and compute the sample mean. Repeat the experiment 100
times. What do you observe and why?
Solution: Using the Inverse Transformation Method:

U − 1/2 = (1/π) arctan(X)
π(U − 1/2) = arctan(X)
X = tan(π(U − 1/2))

Next, here is the Python code:

U = rng.random([100, 1000])
X = np.tan(np.pi*(U - 0.5))
means = np.mean(X, axis=1)
plt.xlabel("Experiment")
plt.ylabel("Sample mean")
plt.plot(means)
plt.show()

The Cauchy distribution has no mean or higher moments defined, which is why the sample means fail to converge (Figure 14.6).

6. (The Rejection Method) When we use the Inverse Transformation Method, we need a
simple form of the cdf F (x) that allows direct computation of X = F −1 (U ). When
F (x) doesn’t have a simple form but the pdf f (x) is available, random variables with
density f (x) can be generated by the rejection method. Suppose you have a method
for generating a random variable having density function g(x). Now, assume you want to
generate a random variable having density function f (x). Let c be a constant such that

f(y)/g(y) ≤ c   (for all y)

Show that the following method generates a random variable with density function f (x).

Figure 14.6: Simulation of a Cauchy distribution shows that the mean is not defined.

- Generate Y having density g.

- Generate a random number U from Uniform(0, 1).

- If U ≤ f(Y)/(cg(Y)), set X = Y. Otherwise, return to step 1.

Solution: The number of times N that the first two steps of the algorithm need to be called
is itself a random variable and has a geometric distribution with “success” probability

 
p = P(U ≤ f(Y)/(cg(Y)))

Thus, E(N) = 1/p. Also, we can compute p:

P(U ≤ f(Y)/(cg(Y)) | Y = y) = f(y)/(cg(y))

p = ∫_{−∞}^{∞} [f(y)/(cg(y))] g(y) dy
  = (1/c) ∫_{−∞}^{∞} f(y) dy
  = 1/c

Therefore, E(N) = c.
Let F be the desired CDF (the CDF of X). Now, we must show that the conditional
distribution of Y given that U ≤ f(Y)/(cg(Y)) is indeed F, i.e.,
P(Y ≤ y | U ≤ f(Y)/(cg(Y))) = F(y). Assume M = {U ≤ f(Y)/(cg(Y))} and K = {Y ≤ y}.
We know P(M) = p = 1/c. Also, we can compute

P(U ≤ f(Y)/(cg(Y)) | Y ≤ y) = P(U ≤ f(Y)/(cg(Y)), Y ≤ y) / G(y)
  = ∫_{−∞}^{y} P(U ≤ f(v)/(cg(v)) | Y = v) g(v) dv / G(y)
  = (1/G(y)) ∫_{−∞}^{y} [f(v)/(cg(v))] g(v) dv
  = (1/(cG(y))) ∫_{−∞}^{y} f(v) dv
  = F(y)/(cG(y))

Thus,

P(K|M) = P(M|K) P(K) / P(M)
  = [F(y)/(cG(y))] × G(y) / (1/c)
  = F(y)
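The three steps can be wrapped in a generic helper. A minimal sketch (the function name rejection_sample and its interface are ours, not from the text); as a concrete target we use Beta(2, 4) with a Uniform(0, 1) proposal and c = 135/64:

```python
import numpy as np

rng = np.random.default_rng()

def rejection_sample(sample_g, f, g, c):
    # Step 1: draw Y from g. Step 2: draw U. Step 3: accept if U <= f(Y)/(c g(Y)).
    while True:
        Y = sample_g()
        U = rng.random()
        if U <= f(Y) / (c * g(Y)):
            return Y

# Target density Beta(2,4): f(x) = 20 x (1-x)^3 on (0,1), proposal g = Uniform(0,1).
f = lambda x: 20 * x * (1 - x)**3
g = lambda x: 1.0
xs = [rejection_sample(rng.random, f, g, 135/64) for _ in range(20_000)]
```

On average each accepted sample costs c ≈ 2.1 proposals here, consistent with E(N) = c derived above.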

7. Use the rejection method to generate a random variable having density function Beta(2, 4).
Hint: Assume g(x) = 1 for 0 < x < 1.
Solution:

f(x) = 20x(1 − x)³,   0 < x < 1
g(x) = 1,   0 < x < 1
f(x)/g(x) = 20x(1 − x)³

We need to find the smallest constant c such that f(x)/g(x) ≤ c. Differentiation of this
quantity yields

d[f(x)/g(x)]/dx = 0
Thus, x = 1/4
Therefore, f(x)/g(x) ≤ 135/64
Hence, f(x)/(cg(x)) = (256/27) x(1 − x)³

def X():
    Y = rng.random()
    U = rng.random()
    while U > 256/27*Y*(1 - Y)**3:
        Y = rng.random()
        U = rng.random()
    return Y

8. Use the rejection method to generate a random variable having the Gamma(5/2, 1) density
function. Hint: Assume g(x) is the pdf of an exponential random variable with mean 5/2
(λ = 2/5).


Solution:

f(x) = (4/(3√π)) x^{3/2} e^{−x},   x > 0
g(x) = (2/5) e^{−2x/5},   x > 0
f(x)/g(x) = (10/(3√π)) x^{3/2} e^{−3x/5}

d[f(x)/g(x)]/dx = 0
Hence, x = 5/2

c = (10/(3√π)) (5/2)^{3/2} e^{−3/2}

f(x)/(cg(x)) = x^{3/2} e^{−3x/5} / ((5/2)^{3/2} e^{−3/2})

We know how to generate an Exponential random variable.

- Generate a random number U1 and set Y = −(5/2) log U1.
- Generate a random number U2.
- If U2 < Y^{3/2} e^{−3Y/5} / ((5/2)^{3/2} e^{−3/2}), set X = Y. Otherwise, execute step 1 again.

9. Use the rejection method to generate a standard normal random variable. Hint: Assume
g(x) is the pdf of the exponential distribution with λ = 1.
Solution:

f(x) = (2/√(2π)) e^{−x²/2},   0 < x < ∞
g(x) = e^{−x},   0 < x < ∞   (exponential density function with mean 1)
Thus, f(x)/g(x) = √(2/π) e^{x − x²/2}
Thus, x = 1 maximizes f(x)/g(x)
Thus, c = √(2e/π)
f(x)/(cg(x)) = e^{−(x−1)²/2}

- Generate Y, an exponential random variable with mean 1.
- Generate a random number U.
- If U ≤ e^{−(Y−1)²/2}, set X = Y. Otherwise, return to step 1.
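A sketch of these steps in Python (function name ours). Note that the steps above produce a half-normal value on (0, ∞); attaching a random sign, a standard final step not written out in the text, yields a standard normal:

```python
import numpy as np

rng = np.random.default_rng()

def standard_normal():
    # Propose Y ~ Exponential(1); accept with probability e^{-(Y-1)^2/2}.
    while True:
        Y = -np.log(rng.random())
        if rng.random() <= np.exp(-(Y - 1)**2 / 2):
            # Y now has the half-normal density on (0, inf); a random sign
            # (an extra step, not part of the text's derivation) gives a standard normal.
            return Y if rng.random() < 0.5 else -Y

samples = np.array([standard_normal() for _ in range(20_000)])
```

The sample mean should be close to 0 and the sample standard deviation close to 1.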

10. Use the rejection method to generate a Gamma(2, 1) random variable conditional on its
value being greater than 5, that is

f(x) = xe^{−x} / ∫_5^∞ xe^{−x} dx = xe^{−x} / (6e^{−5}),   x ≥ 5

Hint: Assume g(x) is the density function of an exponential distribution.
Solution: Since a Gamma(2, 1) random variable has expected value 2, we use an exponential
distribution with mean 2 that is conditioned to be greater than 5.

f(x) = xe^{−x} / (6e^{−5}),   x ≥ 5
g(x) = (1/2) e^{−x/2} / e^{−5/2},   x ≥ 5
f(x)/g(x) = (x/3) e^{−(x−5)/2}

We obtain the maximum at x = 5 since f(x)/g(x) is decreasing. Therefore,

c = f(5)/g(5) = 5/3

- Generate a random number V.
- Set Y = 5 − 2 log(V).
- Generate a random number U.
- If U < (Y/5) e^{−(Y−5)/2}, set X = Y; otherwise return to step 1.
Bibliography

[1] http://projecteuclid.org/download/pdf_1/euclid.aoms/1177706645

[2] https://numpy.org/doc/stable/reference/random/generator.html#distributions

[3] https://docs.scipy.org/doc/scipy/reference/stats.html#probability-distributions

[4] Michael Baron, Probability and Statistics for Computer Scientists. CRC Press, 2006

[5] Sheldon M. Ross, Simulation. Academic Press, 2012
