The Metropolis Sampling Algorithm: 1.1 Computer Graphics
1.1.3 Radiosity
The form factor computation of radiosity seeks to derive the fraction of light leaving patch A that falls onto patch B, assuming a Lambertian reflectance model. Although an analytic expression has been derived for two arbitrary polygonal patches, this model is slow to evaluate and does not take into account effects of visibility. Thus, this discontinuous 4-dimensional integration problem is amenable to Monte Carlo techniques. Similarly, finite element-based techniques for radiance (rather than radiosity) need to evaluate coefficients that represent the fraction of light that leaves patch A, falls onto patch B, and then gets reflected onto patch C. This is a 6-dimensional integral.
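As a concrete illustration of the 4-dimensional form factor integral, the sketch below estimates, by plain Monte Carlo, the form factor between two parallel, directly opposed unit square patches with no occluders. The function name and parameters are illustrative, not from the notes; for parallel patches both cosine terms in the form factor kernel cos θ1 cos θ2 / (π r²) reduce to d/r, so the integrand becomes d²/(π r⁴).

```python
import math
import random

def form_factor_mc(d=1.0, n=200_000, seed=0):
    """Monte Carlo estimate of the form factor between two parallel,
    directly opposed unit square patches separated by distance d,
    assuming full visibility (no occluders).

    The 4-D integrand is cos(t1) cos(t2) / (pi r^2); for parallel
    patches both cosines equal d / r, so it reduces to d^2 / (pi r^4).
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Sample one point uniformly on each patch.  Both patch areas
        # are 1, so the estimate is just the mean of the integrand.
        x1, y1 = rng.random(), rng.random()
        x2, y2 = rng.random(), rng.random()
        r2 = (x1 - x2) ** 2 + (y1 - y2) ** 2 + d * d
        total += d * d / (math.pi * r2 * r2)
    return total / n

# The analytic value for two coaxial unit squares at d = 1 is about 0.1998.
estimate = form_factor_mc()
```

In a full radiosity system the integrand would also carry a visibility term, which is exactly the discontinuity that makes the problem hard for analytic and quadrature methods but harmless for Monte Carlo.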
1.2 Finance
Monte Carlo techniques are widely used in the financial world for valuing securities. One class of securities traded is mortgage-backed securities. The mortgages backing these securities have a fixed or variable interest rate and typically have a maturity period of 30 years. The value of a mortgage-backed security depends on a complicated mix of the prime interest rate, the default rate, the remortgaging rate, and a variety of other interdependent factors that vary over time. By modeling these factors over time, a basis for valuation is provided. Solutions to these models involve integrating functions of 20 or 30 or even 360 dimensions. Since these functions are typically smooth, quasi-Monte Carlo techniques work well.
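To make the quasi-Monte Carlo idea concrete, the sketch below integrates a smooth toy function over a modest-dimensional unit cube using a Halton low-discrepancy sequence. The integrand and dimension count are stand-ins, not an actual mortgage model; the point is that for smooth integrands, deterministic low-discrepancy points converge faster than random ones.

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base;
    the d-dimensional Halton sequence uses one prime base per dimension."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_mean_of_sum(dims=5, n=4096):
    """Quasi-Monte Carlo estimate of E[x_1 + ... + x_dims] for x uniform
    on [0,1]^dims, using Halton points.  The exact answer is dims / 2."""
    bases = [2, 3, 5, 7, 11][:dims]
    total = 0.0
    for i in range(1, n + 1):
        total += sum(radical_inverse(i, b) for b in bases)
    return total / n
```

With only a few thousand points the estimate is already very close to the exact value dims/2; a pseudorandom estimator with the same sample count would typically show noticeably larger error.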
In any case, suppose there is some function g(X) in which we are interested. We want to find the expected value of g(X), where X is distributed proportionally to f(X). In other words, the quantity of interest G is:

G = E[g(X)] = ∫ g(X) p(X) dX

such that X ∈ Ω. Furthermore, let f(X) be some scalar function. The goal of the M(RT)² algorithm is to provide a series of samples X_1, X_2, ..., X_N which have a probability density function proportional to f(X). In other words:
Given: A function f(X) where X ∈ Ω.
Goal: Generate X_t ∼ p(X) where p(X) ∝ f(X) and ∫_Ω p(X) dX = 1.
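The statement above can be sketched directly in code: given an unnormalized scalar function f, generate samples distributed proportionally to it. This is a minimal 1-D Metropolis sampler with a symmetric uniform proposal; the function name, step size, and test target (an unnormalized Gaussian) are illustrative choices, not part of the notes.

```python
import math
import random

def metropolis(f, x0=0.0, n=50_000, step=1.0, seed=1):
    """Generate n samples whose density is proportional to the
    (possibly unnormalized) nonnegative function f, using the M(RT)^2
    random walk with a symmetric uniform proposal."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)          # propose a tentative move
        if rng.random() < min(1.0, f(y) / f(x)):  # accept with prob min(1, f(y)/f(x))
            x = y
        samples.append(x)                         # rejected moves repeat x
    return samples

# Target: unnormalized standard normal.  The sample mean and variance
# should approach 0 and 1 even though f is never normalized.
xs = metropolis(lambda x: math.exp(-0.5 * x * x))
```

Note that only ratios f(y)/f(x) are ever evaluated, which is exactly why the normalization constant ∫ f(X) dX is never needed.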
3.1 Random Walks
M(RT)² generates a set of samples by taking a random walk across Ω. A random walk is simply a sequence of samples X_1, X_2, ..., X_N where each X_t is chosen with some probability distribution p_t(X). Furthermore, the random walk generated by M(RT)² is a Markov chain, meaning that the choice of X_t depends only on X_{t-1}. If the random walk is independent of the actual value of t, it is called a time-homogeneous Markov chain, which can be characterized by a single transition function for all t:

K(Y → X) = {density that X_t = X given that X_{t-1} = Y}

In other words, K(Y → X) is the probability density of going from state Y to state X. Since it is a pdf, it satisfies:

∫ K(Y → X) dX = 1    for all Y
Thus, for a random walk, we choose an initial state X_1 according to some initial distribution p_1(X). Then p_2(X | X_1) is set to K(X_1 → X), and X_2 is chosen according to that distribution. By repeating this process, p_t(X | X_{t-1}) is set to K(X_{t-1} → X) and X_t is chosen according to that distribution. Now since each X_t is a random variable, we get the following distributions for each step of the random walk:
p_1(X) = initial distribution

p_2(X) = ∫ p_1(Y) K(Y → X) dY

p_3(X) = ∫ p_2(Y) K(Y → X) dY

⋮

p_t(X) = ∫ p_{t-1}(Y) K(Y → X) dY

⋮
Now if our random walk, which is embodied by K(Y → X), is ergodic, then there exists a stationary distribution p(X) such that:

p(X) = ∫ p(Y) K(Y → X) dY    (1)

That means that for some p(X), the application of a step in our random walk will give back the exact same distribution. Furthermore, ergodicity guarantees that the random walk will converge to p(X) for any initial distribution p_1(X):

lim_{t→∞} p_t(X) = p(X)
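Both the fixed-point equation (1) and the convergence claim are easy to check numerically on a discrete state space, where the integral becomes a sum and the kernel K becomes a matrix with rows that sum to 1. The particular 3-state kernel below is an illustrative choice, not from the notes.

```python
def walk_step(p, K):
    """One step of the random walk on distributions:
    p_{t+1}(x) = sum_y p_t(y) K(y -> x)."""
    n = len(K)
    return [sum(p[y] * K[y][x] for y in range(n)) for x in range(n)]

# An ergodic 3-state transition kernel K[y][x].  Each row is a pdf in x,
# so each row sums to 1.
K = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]

# Start from an arbitrary (here, fully concentrated) initial distribution.
p = [1.0, 0.0, 0.0]
for _ in range(50):
    p = walk_step(p, K)
# p has now converged to the stationary distribution; for this symmetric,
# doubly stochastic K that is the uniform distribution [1/3, 1/3, 1/3].
```

Trying a different initial distribution gives the same limit, which is the content of the convergence statement above: ergodicity makes the stationary distribution independent of p_1.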
...
the state of the system is not static, the amount that goes in each direction is equal, and
thus the system is at equilibrium.
Let us examine what happens when we apply the notion of detailed balance to our
problem:
p(X) T(X → Y) A(X → Y) = p(Y) T(Y → X) A(Y → X)    (2)
This equation says that once a stationary distribution p(X) is reached, then the chance
of being in state X and then taking a proposed transition into state Y is equal to the
chance of being in state Y and then taking a proposed transition into state X. To see how this provides a solution for the stationary distribution, we rewrite Equation (1) in terms of T(Y → X) and A(Y → X):
p′(X) = ∫ p(Y) T(Y → X) A(Y → X) dY + p(X) [1 − ∫ T(X → Y) A(X → Y) dY]
The first term of the above equation expresses the chance of starting in state Y and taking a transition into X. The second term expresses the chance of starting in state X and rejecting a transition into another state Y. These are all the ways that one can end up in state X after a step of the random walk. Rearranging the terms and moving p(X) into the integral (since it is constant with respect to Y), we get:
p′(X) = p(X) + ∫ [p(Y) T(Y → X) A(Y → X) − p(X) T(X → Y) A(X → Y)] dY
Thus, in order to get a stationary distribution, the integral in the above equation must evaluate to zero. One easy way to do this is to enforce the detailed balance of Equation (2) in our choice of T(Y → X) and A(Y → X). Thus, we are left with:
p′(X) = p(X)
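The whole argument can be verified exactly on a discrete state space. The sketch below (an illustrative 3-state example, not from the notes) builds a transition matrix from a symmetric proposal T and the Metropolis acceptance A(Y → X) = min(1, f(X)/f(Y)), then checks both detailed balance and stationarity term by term.

```python
# Target distribution p proportional to an unnormalized f on 3 states.
f = [1.0, 2.0, 3.0]
Z = sum(f)
p = [v / Z for v in f]
n = len(f)

# Symmetric proposal: propose each state with equal probability.
T = [[1.0 / n] * n for _ in range(n)]

def A(y, x):
    """Metropolis acceptance probability for a proposed move y -> x."""
    return min(1.0, f[x] / f[y])

# Full transition kernel: off-diagonal entries are T * A; the diagonal
# absorbs the rejected proposals so each row still sums to 1.
K = [[0.0] * n for _ in range(n)]
for y in range(n):
    for x in range(n):
        if x != y:
            K[y][x] = T[y][x] * A(y, x)
    K[y][y] = 1.0 - sum(K[y])

# One step of the walk applied to p.  Detailed balance
# p(y) K(y->x) = p(x) K(x->y) forces p_next to equal p exactly.
p_next = [sum(p[y] * K[y][x] for y in range(n)) for x in range(n)]
```

Because f enters only through the ratio f(x)/f(y), the normalizing constant Z is never needed by the sampler itself; it appears here only so the check against the true p can be made explicit.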