Sukanta Resonance
Sukanta Deb
$$\frac{f(x)}{g(x)} \approx \text{constant}.$$

Therefore, we write

$$I = \int_a^b f(x)\,dx = \int_a^b \frac{f(x)}{g(x)}\,g(x)\,dx .$$
That is,

$$\left\langle \frac{f(x)}{g(x)} \right\rangle = \frac{1}{N}\sum_{i=1}^{N} \frac{f(x_i)}{g(x_i)},$$
where $N$ is the number of MC steps and the $x_i$'s are random numbers distributed as $g(x)$. Another way to deal with this integral is to define
$$dG(x) = g(x)\,dx,$$

where

$$G(x) = \int_a^x g(x')\,dx'$$
is the integral of $g(x)$. Now we make a change of variables using
r = G(x),
where $r$ is a sequence of random numbers uniformly distributed in $[0, 1]$, i.e., $0 \le r \le 1$. Therefore,
$$I = \int_a^b \frac{f(x)}{g(x)}\,dG(x) = \int_0^1 \frac{f(G^{-1}(r))}{g(G^{-1}(r))}\,dr \approx \frac{1}{N}\sum_{i=1}^{N} \frac{f(G^{-1}(r_i))}{g(G^{-1}(r_i))},$$
where ri are the random numbers uniformly distributed
in $[0, 1]$. It should be noted that the form of $g(x)$ should be chosen so as to minimize the variance of the integrand $f(x)/g(x)$. A proper choice of $g(x)$ makes $f(x)/g(x)$ nearly flat, and hence the variance is greatly reduced. The variance is calculated from
$$\sigma^2_{\text{imp}} = \frac{1}{N}\sum_{i=1}^{N} \tilde{f}(x_i)^2 - \left(\frac{1}{N}\sum_{i=1}^{N} \tilde{f}(x_i)\right)^2,$$

where $\tilde{f}(x_i) = f(x_i)/g(x_i)$, and the error of integration is given by

$$\sigma_{I,\text{imp}} = \sqrt{\frac{\sigma^2_{\text{imp}}}{N}} .$$
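The estimator, variance, and error formulas above can be sketched in code; the integrand $f(x) = x\,e^{-x}$ on $[0,\infty)$ and the choice $g(x) = \lambda e^{-\lambda x}$ with $\lambda = 1$ are illustrative assumptions, not taken from the text (the exact value of this test integral is 1).

```python
import math
import random

def importance_sample(f, g, g_inv, n, seed=0):
    """Estimate I = ∫ f dx by averaging f(x_i)/g(x_i), where
    x_i = G^{-1}(r_i) and r_i is uniform in [0, 1).
    Returns (estimate, variance sigma^2_imp, error sqrt(sigma^2/N))."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n):
        x = g_inv(rng.random())
        vals.append(f(x) / g(x))
    mean = sum(vals) / n
    var = sum(v * v for v in vals) / n - mean * mean   # sigma^2_imp
    return mean, var, math.sqrt(var / n)               # error of integration

# hypothetical test integral: ∫_0^∞ x e^{-x} dx = 1, sampled with g(x) = e^{-x}
lam = 1.0
I, var, err = importance_sample(
    f=lambda x: x * math.exp(-x),
    g=lambda x: lam * math.exp(-lam * x),
    g_inv=lambda r: -math.log(1.0 - r) / lam,
    n=100_000,
)
```

With this choice $f/g = x$, so the estimate is simply the sample mean of exponential variates, which converges to 1.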
Choosing $g(x) = A\exp(-\lambda x)$, normalization over the interval of integration $[0, \pi]$ yields

$$A = \frac{\lambda}{1 - \exp(-\pi\lambda)} .$$

Now, using the condition

$$G(x) = r, \quad 0 \le r \le 1,$$

we get

$$G^{-1}(r) = -\frac{1}{\lambda}\ln\left[1 - r\left(1 - \exp(-\pi\lambda)\right)\right] .$$
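A minimal sketch of drawing samples distributed as this $g(x)$, assuming it is normalized on $[0, \pi]$ so that $G^{-1}(r) = -(1/\lambda)\ln[1 - r(1 - e^{-\pi\lambda})]$:

```python
import math
import random

def sample_g(lam, n, seed=0):
    """Inverse-transform sampling of g(x) = A exp(-lam*x) on [0, pi]:
    x = G^{-1}(r) = -(1/lam) * ln(1 - r * (1 - exp(-pi*lam)))."""
    rng = random.Random(seed)
    c = 1.0 - math.exp(-math.pi * lam)
    return [-math.log(1.0 - rng.random() * c) / lam for _ in range(n)]

samples = sample_g(lam=1.0, n=10_000)
```

Every sample lies in $[0, \pi]$ by construction, since $r \to 1$ maps to $x \to \pi$.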
The integral is then estimated as $\frac{1}{N}\sum_{i=1}^{N} \frac{f(G^{-1}(r_i))}{g(G^{-1}(r_i))}$.
Alternatively, normalizing $g(x) = A\exp(-\lambda x)$ over $[0, \infty)$ yields

$$g(x) = \lambda\exp(-\lambda x) .$$

Now using the condition

$$G(x) = r, \quad 0 \le r \le 1,$$

we get

$$G^{-1}(r) = -\frac{1}{\lambda}\ln r ,$$

since $1 - r$ is distributed in the same way as $r$.
With $N = 10{,}000$, the value of $\frac{1}{N}\sum_{i=1}^{N} \frac{f(G^{-1}(r_i))}{g(G^{-1}(r_i))}$ can then be computed.
Therefore,

$$\int_0^1 f(x)\,dx = \int_0^1 \frac{f(x)}{g(x)}\,g(x)\,dx = \frac{1}{N}\sum_{i=1}^{N} \frac{f(x_i)}{g(x_i)},$$
$$\hat{H} = -\frac{\hbar^2}{2m}\sum_{i=1}^{N} \nabla_i^2 + \sum_{i=1}^{N} U_{\text{ext}}(r_i) + \sum_{i<j} V_{ij},$$

where $-\frac{\hbar^2}{2m}\nabla_i^2$ is the kinetic energy, $U_{\text{ext}}(r_i)$ is the external potential of the $i$th particle, $V_{ij}$ is the interaction potential between the $i$th and $j$th particles, and $\nabla_i^2$ is the Laplacian of the $i$th particle.
where

$$\rho(R) = \frac{\Psi_T^2}{\int \Psi_T^2\,dR}$$

and

$$E_L = \frac{\hat{H}\Psi_T}{\Psi_T} .$$
The local energy is a function of the positions of the particles and is a constant if $\Psi_T$ is an exact eigenfunction of the Hamiltonian. The more closely $\Psi_T$ approaches the exact eigenfunction, the less strongly $E_L$ will vary with $R$. This means that the variance should approach zero as our trial wave function approaches the exact ground state.
In the evaluation of the ground state energy, the variational wave function is generally chosen to be real and non-zero almost everywhere in the region of integration. We want to solve the integral in (2) by importance-sampling MC integration using the Metropolis algorithm.
The energy approximation as given in (2) becomes
$$E_T = \int \rho(R)\,E_L(R)\,dR \approx \frac{1}{M}\sum_{i=1}^{M} E_L(R_i), \qquad (3)$$
The harmonic oscillator Hamiltonian

$$\hat{H} = -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}kx^2$$

yields the well-known ground state energy $E_0 = \frac{1}{2} = 0.5$ (in units of $\hbar\omega$) in the one-dimensional case.
In terms of the units considered, the Hamiltonian becomes

$$\hat{H} = -\frac{1}{2}\frac{d^2}{dx^2} + \frac{1}{2}x^2 .$$
Let us consider the trial wave function to be of the form $\Psi_T = \exp(-\beta x^2)$, where $\beta$ is the variational parameter to be optimized. It satisfies the boundary condition $\Psi_T \to 0$ as $x \to \pm\infty$.
The local energy

$$E_L = \frac{\hat{H}\psi(x)}{\psi(x)}$$

becomes

$$E_L = \beta + (0.5 - 2.0\beta^2)x^2 ,$$
so that the ground state energy can be evaluated from
the integral
$$E_T = \int E_L\,\rho(x)\,dx .$$
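A minimal sketch of this evaluation using Metropolis sampling of $\rho(x) \propto \Psi_T^2$ for the trial function $\Psi_T = \exp(-\beta x^2)$; the step size, step count, and seed are illustrative choices:

```python
import math
import random

def vmc_energy(beta, n_steps=20_000, step=1.0, seed=1):
    """Metropolis sampling of rho(x) ∝ Psi_T(x)^2 = exp(-2*beta*x^2),
    averaging the local energy E_L = beta + (0.5 - 2*beta^2) * x^2."""
    rng = random.Random(seed)
    x = 0.0
    e_sum = 0.0
    for _ in range(n_steps):
        x_new = x + step * (2.0 * rng.random() - 1.0)
        # acceptance probability min(1, Psi_T(x_new)^2 / Psi_T(x)^2)
        if rng.random() < math.exp(-2.0 * beta * (x_new**2 - x**2)):
            x = x_new
        e_sum += beta + (0.5 - 2.0 * beta**2) * x**2
    return e_sum / n_steps

print(vmc_energy(0.5))  # -> 0.5
```

At $\beta = 0.5$ the local energy is constant, so the estimate is exactly 0.5 with zero variance; for other values of $\beta$ the estimate fluctuates about the variational energy, which lies above 0.5.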
$$\hat{H} = -\frac{1}{2}\frac{d^2}{dx^2} + V(x) .$$
The ground state energies and the variances obtained for these different cases are listed in Table 1. We can see that the variance $\sigma^2$ is 0 for $\beta = 0.500$ in the case $V(x) = \frac{1}{2}x^2$. This is because the trial wave function $\Psi_T(x) = \exp(-0.5x^2)$ is the exact wave function in this case. In the other cases, the variance $\sigma^2$ is not 0, but is still at its minimum; the trial wave functions are not exact, and hence $\sigma^2$ departs from zero.
$$\nabla^2 = \frac{d^2}{dr^2} + \frac{2}{r}\frac{d}{dr} .$$

$$\Psi_T(r) = \exp(-\beta r^2) .$$

$$E_L = \frac{\hat{H}\Psi_T(r)}{\Psi_T(r)} = 3\beta + (0.5 - 2\beta^2)r^2 .$$
The probability to obtain a value of r in the interval [r, r + dr] is g(r)dr. This must be equal to
the probability to obtain a value of x in the interval [x(r), x(r) + dx(r)], which is f(x)dx. This
means that the probability that $r$ is less than some value $r'$ is equal to the probability that $x$ is less than $x(r')$ [2], i.e.,

$$P(r \le r') = P(x \le x(r')) \;\Rightarrow\; G(r') = F(x(r')),$$
where F and G are the cumulative distribution functions corresponding to the PDFs f and g,
respectively.
We know that the cumulative distribution function for the uniform PDF is
G(r) = r, r ∈ [0, 1] .
Therefore,
$$F[x(r)] = \int_{-\infty}^{x(r)} f(x')\,dx' = \int_{-\infty}^{r} g(r')\,dr' = r .$$
This shows that the CDF F (x) treated as a random variable is uniformly distributed between
0 and 1. Solution for x(r) may be obtained from the above equation depending on the f(x)
given.
1.1 Uniform Distribution
The uniform distribution [2] for the continuous variable $x\ (-\infty < x < \infty)$ is defined by

$$f(x; a, b) = \begin{cases} \dfrac{1}{b-a}, & a \le x \le b, \\[4pt] 0, & \text{otherwise.} \end{cases}$$
That is, $x$ is equally likely to be found anywhere between $a$ and $b$. The CDF $F(x)$ is related to the PDF $f(x)$ by

$$F(x) = \int_{-\infty}^{x} f(x')\,dx' .$$
Suppose we want to generate a random variable according to the uniform PDF $f(x)$ defined above. The CDF $F(x)$ is given by

$$F(x) = \int_{-\infty}^{x} f(x'; a, b)\,dx' = \int_{a}^{x} \frac{1}{b-a}\,dx' = \frac{x-a}{b-a}, \quad a \le x \le b .$$
Now, to solve for $x(r)$, let us set

$$F(x) = r, \quad r \in [0, 1] .$$

Therefore,

$$\frac{x-a}{b-a} = r \;\Rightarrow\; x(r) = a + (b-a)\,r .$$
Hence, the variable x(r) is distributed according to the PDF f(x) given above. From this, we
see that the uniform random numbers are important as they can be used to generate arbitrary
PDFs by means of transformation from a uniform distribution [2].
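The transformation $x(r) = a + (b - a)r$ can be sketched as follows (the interval $[2, 5]$ and the sample count are arbitrary choices):

```python
import random

def uniform_via_inverse(a, b, n, seed=0):
    """Inverse-transform sampling: x(r) = a + (b - a) * r
    maps r ~ U[0, 1) to x ~ U[a, b)."""
    rng = random.Random(seed)
    return [a + (b - a) * rng.random() for _ in range(n)]

xs = uniform_via_inverse(2.0, 5.0, 10_000)
```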
1.2 Exponential Distribution
The exponential PDF [2] for the continuous variable $x\ (0 \le x < \infty)$ is defined by

$$f(x; \xi) = \frac{1}{\xi}\exp\left(-\frac{x}{\xi}\right) .$$
The PDF is characterized by a single parameter ξ. To generate the random variable x(r)
distributed according to the exponential PDF, let us set
F (x) = r ,
where $r \in [0, 1]$ and $F(x)$ is the CDF of the PDF $f(x)$, given by

$$F(x) = \int_{0}^{x} f(x'; \xi)\,dx' .$$
Therefore,
$$F(x) = r \;\Rightarrow\; x = -\xi\ln(1 - r) .$$

But $(1 - r)$ is distributed in the same way as $r$. So,

$$x = -\xi\ln r \;\Rightarrow\; x(r) = -\xi\ln r .$$
The variable x(r) is distributed according to the exponential PDF f(x; ξ) as given above. The
histogram plot for the distribution of x(r) with ξ = 2 is shown in Figure A1.
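A sketch of this inverse transform (using $1 - r$ in place of $r$ only to avoid $\ln 0$ with a half-open uniform generator, since both are distributed identically):

```python
import math
import random

def exponential_via_inverse(xi, n, seed=0):
    """x(r) = -xi * ln(r) with r ~ U(0, 1] gives samples
    distributed with PDF (1/xi) * exp(-x/xi), i.e. mean xi."""
    rng = random.Random(seed)
    # 1 - random() lies in (0, 1], so the log is always defined
    return [-xi * math.log(1.0 - rng.random()) for _ in range(n)]

xs = exponential_via_inverse(2.0, 100_000)
```

With $\xi = 2$, the sample mean converges to 2, matching the distribution shown in Figure A1.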
1.3 Gaussian or Normal Distribution
The Gaussian or normal PDF [2] of the continuous variable $x\ (-\infty < x < \infty)$ is defined by

$$f(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right] .$$

The PDF has two parameters, $\mu$ and $\sigma^2$, corresponding to the mean and variance of $x$, respectively. Using $\mu = 0$ and $\sigma = 1$, the standard Gaussian PDF is defined as

$$f(x; 0, 1) = \phi(x) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right) .$$
In order to construct pairs $(x, y)$ of normally distributed random numbers, the following procedure may be adopted: draw $u_{1,i}$ and $u_{2,i}$ uniformly from $(0, 1)$, set $r_i = \sqrt{-2\ln u_{1,i}}$ and $\theta_i = 2\pi u_{2,i}$, and compute

$$x_i = r_i\cos\theta_i, \qquad y_i = r_i\sin\theta_i .$$
The transformations given above are known as the Box–Muller transformations [3]. Figure
A2 shows histogram plots for the generation of the pair of random variables (x, y) having
the Gaussian PDF f using the Box–Muller transformations.
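The Box–Muller transformations can be sketched as follows (the pair count and seed are arbitrary choices):

```python
import math
import random

def box_muller(n, seed=0):
    """Generate n pairs (x, y) of independent standard normal variates
    from pairs of uniform random numbers."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        u1 = 1.0 - rng.random()          # in (0, 1], avoids log(0)
        u2 = rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        theta = 2.0 * math.pi * u2
        pairs.append((r * math.cos(theta), r * math.sin(theta)))
    return pairs

pairs = box_muller(100_000)
```

The sample mean and variance of either coordinate converge to 0 and 1, as in the histograms of Figure A2.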
2. Acceptance-Rejection Method
Suppose we want to generate a random variable from a distribution with PDF f. If it turns
out to be too difficult to simulate the value of the random variable using the transformation
method, the acceptance-rejection method is a useful alternative [4]. Let g(x) be another PDF
defined in the support of f(x) such that
f(x) ≤ cg(x), ∀x ,
where c > 0 is a known constant. Suppose there exists a method to generate a random variable
having PDF g, then according to the acceptance-rejection algorithm [2,3]:
1. Generate $y$ from $g$.
2. Generate $u$ from $U[0, 1]$.
3. If $u \le \frac{f(y)}{c\,g(y)}$, set $x = y$; else return to step 1.
It can be shown that $x$ is a random variable from the distribution with PDF $f(x)$. The function $g(x)$ is also called a majorizing function [4]. It can be shown that the expected number of trials for an acceptance is $c$. Hence, for this method to be efficient, the constant $c$ must be chosen such that the rejection rate is low. A method to choose an optimum $c$ is [3]

$$c = \max_x \frac{f(x)}{g(x)} .$$
Now, let us apply the acceptance-rejection method to generate a random variable having PDF $f(x) = 12x(1-x)^2$, $0 \le x \le 1$. Since the random variable is concentrated in the interval $[0, 1]$, let us consider the acceptance-rejection method with

$$g(x) = 1, \quad 0 < x < 1 .$$
Therefore,
$$\frac{f(x)}{g(x)} = 12x(1-x)^2 .$$
We need to determine the smallest constant $c$ such that

$$\frac{f(x)}{g(x)} \le c .$$

Now, we use calculus to determine the maximum value of $\frac{f(x)}{g(x)}$; it is found to be maximum at $x = \frac{1}{3}$. Therefore,

$$\frac{f(x)}{g(x)} = 12x(1-x)^2 \le 12\cdot\frac{1}{3}\left(1 - \frac{1}{3}\right)^2 = \frac{16}{9} = c .$$
Hence,

$$\frac{f(x)}{c\,g(x)} = \frac{9}{16}\cdot 12x(1-x)^2 = \frac{27}{4}x(1-x)^2 .$$
The rejection procedure is as follows:
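A minimal sketch of this rejection procedure for $f(x) = 12x(1-x)^2$ with $g(x) = 1$ and $c = 16/9$ (the function name and sample count are illustrative):

```python
import random

def sample_f(rng):
    """Acceptance-rejection sampling for f(x) = 12 x (1-x)^2 on [0, 1],
    with majorizing g(x) = 1 and c = 16/9, so f/(c g) = (27/4) x (1-x)^2."""
    while True:
        y = rng.random()                     # step 1: y from g = U[0, 1)
        u = rng.random()                     # step 2: u from U[0, 1)
        if u <= 6.75 * y * (1.0 - y) ** 2:   # step 3: accept with prob f/(c g)
            return y

rng = random.Random(0)
xs = [sample_f(rng) for _ in range(100_000)]
```

The target density is the Beta(2, 3) distribution, so the sample mean converges to $2/5 = 0.4$, and on average $c = 16/9 \approx 1.78$ trials are needed per accepted sample.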
Suggested Reading

[1] P. Bevington and D. Robinson, Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill Higher Education, McGraw-Hill, 2003.
[2] G. Cowan, Statistical Data Analysis, Oxford University Press, 1997.
[3] S. Ross, Simulation (Statistical Modeling and Decision Science Series), Academic Press, 2006.
[4] C. Lemieux, Monte Carlo and Quasi-Monte Carlo Sampling, Springer-Verlag, New York, 2009.
Using the trial wave function

$$\Psi_T(\alpha) = \exp(-\alpha r^2),$$

show that the ground state energy of the H-atom (in units of $e = \hbar = m = 1$) is given by

$$E_T^{(\text{VMC})} = E_{\min} = -0.5 .$$
Using the trial wave function

$$\Psi_T = x(x - L)\exp(\alpha x),$$

show that the ground state energy (in units of $e = \hbar = m = 1$) of a quantum particle of mass $m$ moving in a one-dimensional box with walls at $x = 0$ and $x = L$, where $L = 2$, is

$$E_T^{(\text{VMC})} = E_{\min} = 1.249 .$$