17 Monte Carlo


monte carlo integration

1
integrals and averages
integral of a function over a domain: \int_{x \in D} f(x) \, dA_x

"size" of a domain: A_D = \int_{x \in D} dA_x

average of a function over a domain: \frac{\int_{x \in D} f(x) \, dA_x}{\int_{x \in D} dA_x} = \frac{\int_{x \in D} f(x) \, dA_x}{A_D}

2
integrals and averages examples
average "daily" snowfall in Hanover last year
domain: year - time interval (1D)
integration variable: "day" of the year
function: snowfall of "day"

\frac{\int_{day \in year} s(day) \, d\,length(day)}{length(year)}

3
integrals and averages examples
"today" average snowfall in New Hampshire
domain: New Hampshire - surface (2D)
integration variable: "location" in New Hampshire
function: snowfall of "location"

\frac{\int_{location \in NewHampshire} s(location) \, d\,area(location)}{area(NewHampshire)}

4
integrals and averages examples
average snowfall in New Hampshire per day this year
domain: New Hampshire × year - area × time (3D)
integration variables: "location" and "day" in New
Hampshire this year
function: snowfall of "location" and "day"

\frac{\int_{day \in year} \int_{loc \in N.H.} s(loc, day) \, d\,area(loc) \, d\,length(day)}{area(N.H.) \cdot length(year)}

5
discrete random variable
random variable: x
values: x_1, x_2, \ldots, x_n

probabilities: p_1, p_2, \ldots, p_n

\sum_{j=1}^{n} p_j = 1

example: rolling a die


values:
x_1 = 1, x_2 = 2, x_3 = 3, x_4 = 4, x_5 = 5, x_6 = 6

probabilities:
p_1 = p_2 = p_3 = p_4 = p_5 = p_6 = \frac{1}{6}

6
expected value and variance
expected value: E[x] = \sum_{j=1}^{n} x_j p_j

average value of the variable


variance: \sigma^2[x] = E[(x - E[x])^2]

how much the variable differs from the average

property: \sigma^2[x] = E[x^2] - E[x]^2

example: rolling a die


expected value:
E[x] = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5

variance: \sigma^2[x] = \ldots \approx 2.917
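the elided step, using the property \sigma^2[x] = E[x^2] - E[x]^2:

\sigma^2[x] = \frac{1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2}{6} - 3.5^2 = \frac{91}{6} - 12.25 = \frac{35}{12} \approx 2.917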

7
estimating expected values
to estimate the expected value of a variable
choose a set of random values based on the probability
average their results

E[x] \approx \frac{1}{N} \sum_{i=1}^{N} x_i

larger N gives a better estimate


example: rolling a die
roll 3 times: {3, 1, 6} → E[x] ≈ (3 + 1 + 6)/3 = 3.33

roll 9 times: {3, 1, 6, 2, 5, 3, 4, 6, 2} → E[x] ≈ 3.51
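a minimal Python sketch of this estimator (the function name is hypothetical, not from the slides): draw N fair die rolls and average them.

import random

def estimate_die_mean(n_samples):
    # average of n_samples fair rolls in {1, ..., 6}; converges to 3.5 as n_samples grows
    return sum(random.randint(1, 6) for _ in range(n_samples)) / n_samples

print(estimate_die_mean(10))       # noisy, e.g. ~3.2
print(estimate_die_mean(100000))   # close to the true expected value 3.5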


8
law of large numbers
by taking infinitely many samples, the error between the
estimate and the expected value is statistically zero
the estimate will converge to the right value

probability\left[\, E[x] = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} x_i \,\right] = 1

9
continuous random variable
random variable: x
values: x ∈ [a, b]

probability density function: x ∼ p


property: \int_a^b p(x) \, dx = 1

probability that the variable takes a value in [x, x + dx]: p(x) dx

10
uniformly distributed random variable
p is the same everywhere in the interval
p(x) = const and \int_a^b p(x) \, dx = 1 implies

p(x) = \frac{1}{b - a}

[Bala]

11
expected value and variance
expected value: E[x] = \int_a^b x \, p(x) \, dx

E[g(x)] = \int_a^b g(x) \, p(x) \, dx

variance: \sigma^2[x] = \int_a^b (x - E[x])^2 \, p(x) \, dx

\sigma^2[g(x)] = \int_a^b (g(x) - E[g(x)])^2 \, p(x) \, dx

estimating expected values: E[g(x)] \approx \frac{1}{N} \sum_{i=1}^{N} g(x_i)

12
multidimensional random variables
everything works fine in multiple dimensions
but it is often hard to precisely define the domain
except in simple cases

E[g(x)] = \int_{x \in D} g(x) \, p(x) \, dA_x

13
deterministic numerical integration
split the domain into a set of fixed segments
sum the function values times the segment sizes
I = \int_a^b f(x) \, dx \qquad I \approx \sum_j f(x_j) \, \Delta x

[Bala]
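a minimal Python sketch of this fixed-segment scheme (evaluating f at segment midpoints is an assumption; the slide only fixes the segments):

import math

def deterministic_integral(f, a, b, n_segments):
    # sum f(x_j) * delta_x over a fixed partition of [a, b], sampling each segment at its midpoint
    dx = (b - a) / n_segments
    return sum(f(a + (j + 0.5) * dx) * dx for j in range(n_segments))

print(deterministic_integral(math.sin, 0.0, math.pi, 1000))  # ~2.0 (exact value is 2)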

14
monte carlo numerical integration
need to evaluate: I = \int_a^b f(x) \, dx

by definition: E[g(x)] = \int_a^b g(x) \, p(x) \, dx

can be estimated as: E[g(x)] \approx \frac{1}{N} \sum_{i=1}^{N} g(x_i)

by substitution: g(x) = f(x)/p(x)

I = \int_a^b \frac{f(x)}{p(x)} \, p(x) \, dx \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}
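a minimal Python sketch of this estimator with a uniform pdf p(x) = 1/(b - a); the test integrand sin(x) on [0, π] is only an illustration:

import math
import random

def mc_integral(f, a, b, n_samples):
    # I ≈ (1/N) * sum f(x_i) / p(x_i), with x_i uniform on [a, b] and p(x) = 1 / (b - a)
    p = 1.0 / (b - a)
    total = 0.0
    for _ in range(n_samples):
        x = random.uniform(a, b)
        total += f(x) / p
    return total / n_samples

print(mc_integral(math.sin, 0.0, math.pi, 100000))  # ~2.0, within O(1/sqrt(N)) noise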

15
monte carlo numerical integration
intuition: compute the area randomly and average the
results
I = \int_a^b f(x) \, dx \qquad I \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}

[Bala]
16
monte carlo numerical integration
formally, we can prove that

\bar{I} = \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)} \;\Rightarrow\; E[\bar{I}] = E[g(x)]

meaning that if we were to try multiple times to evaluate the integral using our new procedure, we would get, on average, the same result

variance of the estimate: \sigma^2[\bar{I}] = \frac{1}{N} \sigma^2[g(x)]

17
example: integral of constant function
analytic integration

I = \int_a^b f(x) \, dx = \int_a^b k \, dx = k(b - a)

Monte Carlo integration (with uniform p(x) = 1/(b - a))

I = \int_a^b f(x) \, dx = \int_a^b k \, dx \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)} = \frac{1}{N} \sum_{i=1}^{N} k(b - a) = \frac{N}{N} k(b - a) = k(b - a)

18
example: computing π
take the square [0, 1]^2 with a quarter-circle in it

A_{qcircle} = \int_0^1 \int_0^1 f(x, y) \, dx \, dy

f(x, y) = \begin{cases} 1 & (x, y) \in qcircle \\ 0 & \text{otherwise} \end{cases}

[MIT OpenCourseWare]

19
example: computing π
estimating the area of the quarter-circle by tossing points in the plane and evaluating f

A_{qcircle} \approx \frac{1}{N} \sum_{i=1}^{N} f(x_i, y_i)

[MIT OpenCourseWare]

20
example: computing π
by definition: A_{qcircle} = \pi / 4

numerical estimation of π

\pi \approx \frac{4}{N} \sum_{i=1}^{N} f(x_i, y_i)
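a minimal Python sketch of this π estimator, a direct transcription of the formula above:

import random

def estimate_pi(n_samples):
    # count points of the unit square that fall inside the quarter-circle x^2 + y^2 <= 1
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples

print(estimate_pi(1000000))  # ~3.14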

21
monte carlo numerical integration
works in any dimension!
need to carefully pick the points
need to properly define the pdf
hard for complex domain shapes
e.g., how to uniformly sample a sphere?

I = \int_{x \in D} f(x) \, dA_x \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}

works for badly-behaved functions!

22
monte carlo numerical integration
expected value of the error is O(1/\sqrt{N})

convergence does not depend on dimensionality


deterministic integration is hard in high dimensions

[Bala]

23
importance sampling principle
how to minimize the noise?
pick samples in areas where the function is large

I \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}
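a minimal Python sketch of the idea; the integrand x^2 on [0, 1] and the pdf p(x) = 2x are illustrative choices, not from the slides. both estimators converge to 1/3, but the one whose pdf roughly follows the integrand has noticeably lower variance.

import math
import random

def uniform_estimate(n):
    # p(x) = 1 on [0, 1]
    return sum(random.random() ** 2 for _ in range(n)) / n

def importance_estimate(n):
    # p(x) = 2x on [0, 1], sampled by inverting its CDF: x = sqrt(u)
    total = 0.0
    for _ in range(n):
        x = math.sqrt(random.random())
        total += x / 2.0  # f(x) / p(x) = x^2 / (2x)
    return total / n

print(uniform_estimate(10000), importance_estimate(10000))  # both ~1/3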

24
1d interval - uniform sampling
call random function in [0, 1), rescale if necessary
in reality only pseudorandom
relies on good generator

r = rand() ⇒ x = r

pick a distribution similar to the function

p_{optimal} \propto f(x)

25
2d square - uniform sampling
pick two independent random values in [0, 1)

r = [rand() \;\; rand()]^T \;\Rightarrow\; x = r

[Shirley]

26
2d square - stratified sampling
divide the domain into smaller domains, then pick random points in each
better variance than plain random sampling

x = \left[ \frac{i + r_x}{n_i} \quad \frac{j + r_y}{n_j} \right]
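a minimal Python sketch of this jittered-grid scheme (the helper name is hypothetical):

import random

def stratified_samples(n_i, n_j):
    # one jittered point per cell of an n_i x n_j grid over the unit square
    return [((i + random.random()) / n_i, (j + random.random()) / n_j)
            for i in range(n_i) for j in range(n_j)]

print(len(stratified_samples(4, 4)))  # 16 points, one per stratum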

27
[Shirley]
2d circle - rejection sampling
pick random points in the uniform square and discard the
ones outside the domain

\begin{cases} r \in [0, 1]^2 \\ r \sim 1 \end{cases} \;\Rightarrow\; x = \begin{cases} 2r - 1, & \|2r - 1\| \le 1 \\ \text{discard}, & \text{otherwise} \end{cases}
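a minimal Python sketch of rejection sampling for the unit disk; on average about π/4 ≈ 79% of candidates are kept.

import random

def sample_disk_rejection():
    # draw points in [-1, 1]^2 until one lands inside the unit circle
    while True:
        x = 2.0 * random.random() - 1.0
        y = 2.0 * random.random() - 1.0
        if x * x + y * y <= 1.0:
            return (x, y)

print(sample_disk_rejection())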

[Shirley]

28
2d circle - remapping from square
pick random points in the uniform square and remap them
on the circle using polar coordinates

r \in [0, 1]^2, \quad r \sim 1

\varphi = 2\pi r_x, \quad r = r_y

x = (r_y \cos(2\pi r_x), \; r_y \sin(2\pi r_x))

x ∼ non-uniform

29
2d circle - remapping from square
pick random points in the uniform square and remap them
on the circle using polar coordinates

x ∼ non-uniform

[Shirley]

30
2d circle - remapping from square
make sampling uniform by computing proper pdf
Sh. 14.4.1

r \in [0, 1]^2, \quad r \sim 1

\varphi = 2\pi r_x, \quad r = \sqrt{r_y}

x = (\sqrt{r_y} \cos(2\pi r_x), \; \sqrt{r_y} \sin(2\pi r_x))

x ∼ 1/π
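a minimal Python sketch of this corrected remapping; the square root on the radius is what makes the density uniform (1/π) over the disk.

import math
import random

def sample_disk_polar():
    # phi = 2*pi*r_x, radius = sqrt(r_y) -> uniform density 1/pi over the unit disk
    r_x, r_y = random.random(), random.random()
    radius = math.sqrt(r_y)
    phi = 2.0 * math.pi * r_x
    return (radius * math.cos(phi), radius * math.sin(phi))

print(sample_disk_polar())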

31
2d circle - remapping from square
make sampling uniform by computing proper pdf
Sh. 14.4.1

x ∼ 1/π

[Shirley]

32
3d direction - uniform distribution
uniform distribution with respect to solid angle

r \in [0, 1]^2, \quad r \sim 1

\varphi = 2\pi r_x, \quad \theta = \arccos(r_y)

d = (\sqrt{1 - r_y^2} \cos(2\pi r_x), \; \sqrt{1 - r_y^2} \sin(2\pi r_x), \; r_y)

d \sim \frac{1}{2\pi}
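a minimal Python sketch of this mapping; it covers the hemisphere z >= 0 with constant density 1/(2π) in solid angle.

import math
import random

def sample_hemisphere_uniform():
    # phi = 2*pi*r_x, cos(theta) = r_y -> uniform over the hemisphere z >= 0
    r_x, r_y = random.random(), random.random()
    phi = 2.0 * math.pi * r_x
    cos_t = r_y
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

print(sample_hemisphere_uniform())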

33
3d direction - cosine distribution
cosine distribution wrt solid angle

r \in [0, 1]^2, \quad r \sim 1

\varphi = 2\pi r_x, \quad \theta = \arccos(\sqrt{r_y})

d = (\sqrt{1 - r_y} \cos(2\pi r_x), \; \sqrt{1 - r_y} \sin(2\pi r_x), \; \sqrt{r_y})

d \sim \frac{\cos(\theta)}{\pi}
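a minimal Python sketch of the cosine-weighted mapping above:

import math
import random

def sample_hemisphere_cosine():
    # pdf cos(theta) / pi with respect to solid angle
    r_x, r_y = random.random(), random.random()
    phi = 2.0 * math.pi * r_x
    cos_t = math.sqrt(r_y)
    sin_t = math.sqrt(1.0 - r_y)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

print(sample_hemisphere_cosine())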

34
3d direction - cosine power distribution
cosine power distribution wrt solid angle

r \in [0, 1]^2, \quad r \sim 1

\varphi = 2\pi r_x, \quad \theta = \arccos\left(r_y^{1/(n+1)}\right)

d = \left(\sqrt{1 - r_y^{2/(n+1)}} \cos(2\pi r_x), \; \sqrt{1 - r_y^{2/(n+1)}} \sin(2\pi r_x), \; r_y^{1/(n+1)}\right)

d \sim \frac{n + 1}{2\pi} \cos^n(\theta)
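a minimal Python sketch of the cosine-power mapping; n = 1 reduces to the cosine case above.

import math
import random

def sample_hemisphere_cosine_power(n):
    # pdf proportional to cos(theta)^n with respect to solid angle
    r_x, r_y = random.random(), random.random()
    phi = 2.0 * math.pi * r_x
    cos_t = r_y ** (1.0 / (n + 1.0))
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

print(sample_hemisphere_cosine_power(20))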

35
