
Variational Monte Carlo Technique


Ground State Energies of Quantum Mechanical Systems

Sukanta Deb

In this article, I discuss the use of the Variational Monte Carlo technique in the determination of ground state energies of quantum harmonic oscillators in one and three dimensions. In order to provide a better understanding of this technique, a pre-requisite concept of Monte Carlo integration is also discussed along with numerical examples. The technique can be easily extended to study the ground state energies of the hydrogen atom, the particle in a box, the helium atom and other complex microscopic systems involving N particles in d dimensions.

Sukanta Deb is an Assistant Professor in the Department of Physics at Acharya Narendra Dev College (University of Delhi). His research interests include astronomical photometry and spectroscopy, light curve modeling in astronomy and astrophysics, Monte Carlo methods and simulations, and automated data analysis.

Keywords: Variational methods, Monte Carlo techniques, harmonic oscillators, quantum mechanical systems.

1. Introduction

There exist many problems in science and engineering whose exact solution either does not exist or is difficult
to find. For the solutions of those problems, one has to
resort to approximate methods. In the context of quantum mechanics, exact solutions of the Schrödinger equation exist only for a few idealized systems. There are varieties of systems for which the Schrödinger equation either cannot be solved exactly or involves very lengthy and cumbersome calculations [1]. For example,
solving the Schrödinger equation to study the proper-
ties of a system with hundreds or thousands of particles
often turns out to be impractical [2].
One of the most important tasks in quantum mechanics is the determination of the ground state energies of microscopic systems. In this article, I show how the Variational Monte Carlo (hereafter, VMC) method can be used to determine the ground state energy of a


quantum mechanical system. This method can be ex-


tended to study much more complex many-body quan-
tum mechanical systems involving higher dimensions.
The VMC technique [3] provides a simple, robust and efficient way to compute the ground state energy of a quantum many-particle system. Since the method is rela-
tively insensitive to the size of the system, it can be
applied to large systems where some other methods are
computationally not feasible [2].
The VMC method uses the Metropolis et al. [4] algo-
rithm combined with the Monte Carlo integration us-
ing importance sampling [5] and the variational princi-
ple in quantum mechanics [1] in order to determine the
ground state energy of the system. Determination of the ground state energy of a many-particle system in d dimensions involves a (d × N)-dimensional integral. Evaluation of such higher-dimensional integrals using analytical methods becomes very difficult and often practically impossible. This is where the VMC technique comes to the rescue and proves to be very powerful and efficient.
In the study of the quantum mechanical system of liq-
uid He4, the MC method was first utilized by McMillan
[6]. The system was studied using a variational wave
function and the MC method using the Metropolis al-
gorithm. Liquid He4 is a system of bosons with a sym-
metric wave function. It was not until 1977 that the
VMC method was used by Ceperley et al. [7] to study a
fermionic system whose wave functions are antisymmetric. Since then, VMC methods have been applied to a wide variety of problems. They have been used to study the ground state properties of many different systems – atoms, molecules and solids [3, 8, 9].

Before I discuss the technique and its far-reaching implications, I would like to briefly introduce the readers to MC methods – their history, origin and some of the important terms associated with the VMC technique


with detailed illustrations. In Section 2, the history and


the origin of the MC techniques are discussed. Section
3 deals with the evaluation of integrations using crude
and importance-sampling MC methods. In Section 4,
the generation of samples according to a given prob-
ability distribution is discussed using the Metropolis
algorithm. Section 5 deals with the description of the
VMC technique. In Section 6, applications of the VMC
in the determination of ground state energies of quan-
tum harmonic oscillators in one and three dimensions
are discussed. Finally, in Section 7, a summary and discussion are presented.
2. History and Origin of Monte Carlo Methods
The term ‘Monte Carlo’ has been named after the fa-
mous Mediterranean casino town in Monaco. The Monte
Carlo (MC) method refers to a class of statistical meth-
ods which use random numbers (probabilistic method)
similar to those in a casino game of chance, to solve a real
and physical (nonprobabilistic) problem [5]. Since the
method uses random numbers, it is therefore stochastic
in nature and has associated statistical properties. With
the advent of modern computers, the random sampling
required in most analyses can easily be obtained in a
more robust, faster and efficient way. Some of the meth-
ods of generating random numbers from a given prob-
ability distribution are discussed in Appendix A. MC
methods are widely used in many areas of science and
engineering and are particularly useful for solving com-
plicated higher-dimensional integrals and in the study
of N-body systems. In quantum mechanics, the MC
methods are used to simulate many-particle systems us-
ing random numbers.

The earliest documented use of random sampling to find the solution of an integral is that of the French naturalist Comte de Buffon (1777). He showed that the probability that a needle of length l, thrown randomly onto a grid of


parallel lines with distance d ≥ l apart intersects a line


is 2l/(πd). This is known as Buffon's needle problem [5].
In the 1930s, Enrico Fermi made use of the MC tech-
niques in studying the behaviour of the newly discovered
neutron [5]. Sampling experiments were carried out in
order to see how neutral particles are expected to inter-
act with the condensed matter. This led to substantial
insight and helped in the understanding of the analyti-
cal theory of neutron diffusion and transport. Major ad-
vances of the MC techniques were made during World
War II by scientists such as John Von Neumann, En-
rico Fermi, S M Ulam and Nicholas Metropolis working
on the development of nuclear weapons in Los Alamos
National Laboratory, USA [10, 11].
3. Monte Carlo Integration Using Importance
Sampling
Suppose we want to evaluate the integral

    I = ∫_a^b f(x) dx,

where f(x) is a smooth function defined on the interval [a, b]. In the ‘crude’-MC method, the integral is approximated as [5]

    I(crude) = ∫_a^b f(x) dx ≈ (b − a)/N Σ_{i=1}^{N} f(xi),

where the xi's are uniformly distributed random numbers between a and b. The variance in f is given by

    σ²(crude) = (b − a)² [ (1/N) Σ_{i=1}^{N} f²(xi) − ( (1/N) Σ_{i=1}^{N} f(xi) )² ].

The error in the integration is given by

    σI(crude) = √( σ²(crude) / N ) = σ(crude) / √N .
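As a concrete illustration of these formulas, a minimal Python sketch of the ‘crude’-MC estimator might look as follows (this code is an illustration added here, not part of the original presentation; the test integrand and the random seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)

def crude_mc(f, a, b, N=10_000):
    # 'Crude' MC: sample xi uniformly in [a, b], average f and scale by (b - a).
    x = a + (b - a) * rng.random(N)
    fx = f(x)
    estimate = (b - a) * fx.mean()
    variance = (b - a)**2 * (np.mean(fx**2) - np.mean(fx)**2)
    error = np.sqrt(variance / N)
    return estimate, variance, error

# Example: integral of exp(-x) over [0, 2]; the exact value is 1 - exp(-2).
print(crude_mc(lambda x: np.exp(-x), 0.0, 2.0))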


Hence, in the ‘crude’-MC method, the error estimate is proportional to the square root of the variance and inversely proportional to the square root of the number of trials. There are therefore only two ways of reducing the error: either by increasing the number of trials, or by reducing the variance. The latter choice is much more suitable as it requires less computational time [12].
The ‘crude’-MC integration samples the function homogeneously, i.e., with the uniform distribution. Therefore, if the significant contributions to the integral come from a small region within the integration volume, there will be only a few samples there, which can lead to large statistical errors even when the number of trials is increased. The result of the MC integration can be greatly improved using the idea of importance sampling. Here, sampling points are chosen from a distribution which concentrates the points where the function to be integrated happens to be large [12].
Let g(x) be a PDF defined on [a, b] that has nearly the same shape as that of f(x) in the sense that

    f(x)/g(x) ≈ constant,

with the property

    ∫_a^b g(x) dx = 1 and g(x) > 0, ∀x ∈ [a, b] .

Therefore, we write

    I = ∫_a^b f(x) dx = ∫_a^b [f(x)/g(x)] g(x) dx .

The integral can now be calculated with the random numbers generated from the distribution g(x) (also called the weight function) and evaluating f(xi)/g(xi) at these points.


That is,

    I ≈ ⟨f(x)/g(x)⟩ = (1/N) Σ_{i=1}^{N} f(xi)/g(xi),

where N is the number of MC steps and the xi's are random numbers distributed as g(x). Another way to deal with this integral is to define

    dG(x) = g(x) dx,

where

    G(x) = ∫_a^x g(x') dx'

is the integral of g(x). Now we make a change of variables using

    r = G(x),

where r is a sequence of random numbers uniformly distributed in [0, 1], i.e., 0 ≤ r ≤ 1. Therefore,

    I = ∫_a^b [f(x)/g(x)] dG(x) = ∫_0^1 [f(G⁻¹(r))/g(G⁻¹(r))] dr
      ≈ (1/N) Σ_{i=1}^{N} f(G⁻¹(ri))/g(G⁻¹(ri)),

where the ri are random numbers uniformly distributed in [0, 1]. It should be noted that the form of g(x) should be so chosen that it minimizes the variance of the integrand f(x)/g(x). A proper choice of g(x) would make the integrand f(x)/g(x) nearly flat and hence the variance will be reduced to a great extent. The variance is calculated from

    σ²(imp) = (1/N) Σ_{i=1}^{N} f̃²(xi) − ( (1/N) Σ_{i=1}^{N} f̃(xi) )²,

where f̃(xi) = f(xi)/g(xi), and the error of integration is given by

    σI(imp) = √( σ²(imp) / N ) .
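In code, the change-of-variables estimator can be written almost line by line from the formulas above. The helper below is an illustrative sketch (not from the original article); it assumes that the inverse CDF G⁻¹ of the weight function is available in closed form. The two worked examples that follow can be reproduced with a helper of this kind.

import numpy as np

rng = np.random.default_rng(0)

def importance_mc(f, g, G_inv, N=10_000):
    # Importance sampling: draw ri uniform in [0, 1], map them through G_inv so
    # that xi is distributed as g(x), then average f(xi)/g(xi).
    r = rng.random(N)
    x = G_inv(r)
    ftilde = f(x) / g(x)
    estimate = ftilde.mean()
    variance = np.mean(ftilde**2) - np.mean(ftilde)**2
    error = np.sqrt(variance / N)
    return estimate, variance, error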


Illustrations of evaluating MC integrations using the ‘crude’-MC and ‘importance sampling’-MC are given with the following examples. Let us evaluate the integral

    I = ∫_0^π dx / (x² + cos²(x)) .

Using the ‘crude’-MC technique,

    I(crude) = ∫_a^b f(x) dx ≈ (b − a)/N Σ_{i=1}^{N} f(xi),  xi = a + (b − a) ri,

where a = 0, b = π, ri ∈ [0, 1] ∀ i = 1, . . . , N and

    f(x) = 1 / (x² + cos²(x)) .

The value of the integral is found to be I(crude) = 1.5952 ± 0.0126 with a variance of σ²(crude) = 1.3511. Here, N is taken to be 10,000.
Let us now evaluate the above integral using the ‘importance sampling’-MC. Let us choose the importance function to be of the form

    g(x) = A exp(−λx),

where g(x) ≥ 0, A is the normalization constant and λ is the parameter to be determined for the variance to be minimized. The normalization condition

    ∫_0^π g(x) dx = 1

yields

    A = λ / (1 − exp(−πλ)) .

Now, using the condition

    G(x) = r,  0 ≤ r ≤ 1,

we get

    G⁻¹(r) = −(1/λ) ln[1 − r(1 − exp(−πλ))] .

With N = 10,000, the value of

    (1/N) Σ_{i=1}^{N} f(G⁻¹(ri)) / g(G⁻¹(ri))

is calculated for different values of λ ∈ [0.05, 1.60] in steps of 0.05, and the variance σ²(imp) is obtained. Figure 1 shows the plot of σ²(imp) versus λ. From the figure, it can be seen that σ²(imp) has a minimum value of 0.0729 at λ = 0.80. The value of the integral corresponding to this value of λ and σ²(imp) is found to be

    I(imp) = 1.5810 ± 0.0027.

Therefore, the variance is reduced by a factor of ∼20 using the ‘importance sampling’-MC technique as compared to the variance using the ‘crude’-MC technique.

Figure 1. Plot of σ²(imp) versus λ. The minimum value of σ²(imp) occurs at λ = 0.80. This value of λ is chosen in the importance function g(x) = A exp(−λx) for evaluating the integral.
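The λ-scan described above might be reproduced with a short Python script along the following lines (an illustrative sketch, not the author's original code; the numbers obtained will vary slightly from run to run):

import numpy as np

rng = np.random.default_rng(0)
N = 10_000
f = lambda x: 1.0 / (x**2 + np.cos(x)**2)

def importance_estimate(lam):
    # Weight function g(x) = A exp(-lam*x) on [0, pi], sampled by inverting its CDF.
    A = lam / (1.0 - np.exp(-np.pi * lam))
    r = rng.random(N)
    x = -np.log(1.0 - r * (1.0 - np.exp(-np.pi * lam))) / lam
    ftilde = f(x) / (A * np.exp(-lam * x))
    return ftilde.mean(), np.mean(ftilde**2) - ftilde.mean()**2

lambdas = np.arange(0.05, 1.60 + 1e-9, 0.05)
results = [(lam, *importance_estimate(lam)) for lam in lambdas]
lam_best, est_best, var_best = min(results, key=lambda t: t[2])  # minimum-variance lambda
print(lam_best, est_best, np.sqrt(var_best / N))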
Let us now consider the evaluation of the integral

    I = ∫_0^∞ [exp(−x) / (1 + (x − 1)²)] dx


using the MC method. Again, let us choose the importance function to be of the form

    g(x) = A exp(−λx),

where g(x) ≥ 0, A is the normalization constant and λ is the parameter to be determined for the variance to be minimized. The normalization condition

    ∫_0^∞ g(x) dx = 1

yields

    g(x) = λ exp(−λx) .

Now, using the condition

    G(x) = r,  0 ≤ r ≤ 1,

we get

    G⁻¹(r) = −(1/λ) ln r .
With N = 10,000, the value of

    (1/N) Σ_{i=1}^{N} f(G⁻¹(ri)) / g(G⁻¹(ri))

is calculated for different values of λ ∈ [0.05, 2.50] in steps of 0.05, and the variance σ²(imp) is obtained. Figure 2 shows the plot of σ²(imp) versus λ. From the figure, it can be seen that σ²(imp) has a minimum value of 0.0471 at λ = 1.15. The value of the integral corresponding to this value of λ and σ²(imp) is found to be

    I(imp) = 0.6967 ± 0.0022.

Figure 2. Plot of σ²(imp) versus λ. The minimum value of σ²(imp) occurs at λ = 1.15. This value of λ is chosen in the importance function g(x) = A exp(−λx) for evaluating the integral.

The evaluation of the integral using the ‘crude’-MC becomes cumbersome using uniform sampling when the range of integration (b − a) is ∞. So a cut-off such as (b − a) = L ≫ 1 should be used. One has to choose L

carefully so that it is neither very large nor very small. However, the determination of the optimal value of the cut-off L is a formidable task. There exists no clear-cut rule to calculate the value of this cut-off. In the case of the integral considered here, if we set L = 10, then I(crude) = 0.7017 ± 0.0047 with a variance of σ²(crude) = 0.2251 for N = 10,000. On the other hand, if L = 100 and N = 10,000, then I(crude) = 0.7191 ± 0.0049 with a variance of σ²(crude) = 0.2355. In both cases, we can see that the variance is larger than that determined using the ‘importance sampling’-MC technique. Also, there is no definite rule for determining the cut-off in the ‘crude’-MC when the range of integration becomes ∞. This limits the accuracy of the value of the integral determined using this method. In contrast, the ‘importance sampling’-MC does not require such cut-off values.
4. Metropolis Algorithm
Methods such as importance sampling can be efficient
in generating random numbers from weight functions
which are simple and are restricted to one dimension.
However, the generation of random numbers from a com-
plicated weight function in many dimensions turns out
to be difficult or sometimes impossible using the impor-
tance sampling, as the form of the weight function is


difficult to discern from the problems involved [13]. Apart


from that, the weight function g(x) should be integrable
and the integral of it (i.e., G(x)) should be invertible
which is sometimes difficult to obtain analytically, and
numerically it often turns out to be clumsy or inaccurate
[14].
The algorithm of Metropolis et al. [4] is one of the simplest, most efficient and robust ways of generating random numbers according to a specified probability distribution of arbitrary form. The advantage of this algorithm
lies in the fact that through the use of random numbers,
it provides an elegant and efficient way toward answers
to problems that are complicated to solve analytically.
The algorithm has been listed as one of the top 10 algo-
rithms of the 20th century [15,16].
Suppose we want to generate a set of points xi , i =
1, . . . , n, distributed according to a PDF f(x) in one
dimension. The Metropolis algorithm generates a se-
quence of points xi , i = 1, . . . , n, as those visited succes-
sively by a random walker moving through the configu-
ration space. The longer the walk, the closer it approx-
imates the desired distribution. This random number
sequence is generated as follows [13]:

• Choose a starting point x0.

• Choose a fixed maximum step-size h.

• Generate the next point x1.

• Suppose that the walker is at a point xi.

• Choose a new configuration xtrial randomly and uniformly in the interval [xi − h/2, xi + h/2].

• Calculate the ratio

      r = f(xtrial) / f(xi) .


• If r ≥ 1, the trial step is accepted, i.e., xi+1 = xtrial.

• If r < 1, the trial step is accepted with probability r. Choose a random number η ∈ [0, 1] and the next configuration is generated according to

      xi+1 = xtrial,  if η < r,
      xi+1 = xi,      if η ≥ r.

There are two important issues in applying the Metropolis algorithm. Firstly, where to start the random walk, i.e., what to choose for the initial point x0. Secondly,
how to choose the step-size h. Pertaining to the first issue of the choice of x0, an appropriate starting point is a probable one, where the distribution function f(x) is large. However, in the case of a multi-dimensional
problem if the maximum is not known, then the random
walker must be allowed to ‘thermalize’ before the actual
sampling begins. That is, to remove any dependence on
the starting point, the algorithm is run for some large
number of steps which are then discarded [12,13].
On the other hand, if the trial steps are to be taken within the neighbourhood of xi, the step-size h should be chosen carefully. Let us suppose that xi is at a maximum of f, the most likely place for it to be. If h is large, then f(xtrial) ≪ f(xi) and most trial steps will be rejected. On the other hand, if h is chosen to be very small, most trial steps will be accepted, but the random walker will never move very far, which also leads to a poor sampling of the distribution. A good rule of thumb is that the size of the trial step h be chosen so that [13]

    Acceptance ratio = (No. of steps accepted) / (Total number of trial steps) ∼ 0.5 .
Now, let us apply the Metropolis algorithm in sampling from the distribution f(x) = exp(−0.5x²). Figure 3 shows the histogram plot for the generation of the random variable having the PDF f(x) using the Metropolis algorithm, with the number of MC steps equal to 100,000 and the step-size set equal to 3, such that the acceptance ratio is equal to 0.4930 (∼ 0.5).

Figure 3. The upper panel shows the plot of the function f(x) = exp(−0.5x²) as a function of x. The lower panel shows the histogram plot for the generation of the random variable x having PDF f(x) using the Metropolis algorithm.
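A minimal Python implementation of the random walk described above might look as follows (illustrative code; the step-size, burn-in length and random seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)

def metropolis_sample(f, x0=0.0, h=3.0, n_steps=100_000, n_burn=1_000):
    # Metropolis random walk: propose uniformly in [x - h/2, x + h/2] and
    # accept with probability min(1, f(x_trial)/f(x)).
    x, samples, accepted = x0, np.empty(n_steps), 0
    for i in range(n_steps):
        x_trial = x + h * (rng.random() - 0.5)
        if rng.random() < f(x_trial) / f(x):
            x, accepted = x_trial, accepted + 1
        samples[i] = x
    return samples[n_burn:], accepted / n_steps  # discard the 'thermalization' steps

samples, acc = metropolis_sample(lambda x: np.exp(-0.5 * x**2))
print(acc)                            # roughly 0.5 for h around 3
print(samples.mean(), samples.var())  # roughly 0 and 1 for this f(x)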
We apply the Metropolis algorithm in the evaluation of the 1-d integral

    ∫_0^1 x √(1 − x²) dx .

Let us choose the importance function to be of the form g(x) = A [exp(x²) − 1.0]. The normalization condition

    ∫_0^1 g(x) dx = 1

gives A = 2.1615. Hence the normalized importance function is

    g(x) = 2.1615 [exp(x²) − 1] .

Therefore,

    ∫_0^1 f(x) dx = ∫_0^1 [f(x)/g(x)] g(x) dx ≈ (1/N) Σ_{i=1}^{N} f(xi)/g(xi),

where the xi's are sampled from the distribution g(x) using the Metropolis algorithm. For N = 10,000 and h = 0.42, the value of the integral is found to be I(mp) = 0.3334 ± 0.0002, with σ²(mp) = 0.0030. This value is comparable to the actual value of 0.3333.
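A possible Python sketch of this calculation is given below (illustrative code only). The Metropolis walker samples x from g(x); trial moves outside [0, 1], where g is taken to be zero, are always rejected.

import numpy as np

rng = np.random.default_rng(0)

f = lambda x: x * np.sqrt(1.0 - x**2)
A = 2.1615
g = lambda x: A * (np.exp(x**2) - 1.0)  # normalized importance function on (0, 1)

N, h = 10_000, 0.42
x, samples = 0.5, np.empty(N)
for i in range(N):
    x_trial = x + h * (rng.random() - 0.5)
    if 0.0 < x_trial < 1.0 and rng.random() < g(x_trial) / g(x):
        x = x_trial                     # Metropolis step on the distribution g(x)
    samples[i] = x

ftilde = f(samples) / g(samples)
print(ftilde.mean())                    # should be close to the exact value 1/3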
5. Variational Monte Carlo (VMC) Technique
In the VMC method, the variational principle of quan-
tum mechanics is used to approximate the ground state
energy of a microscopic system. The method employs a
set of adjustable parameters to yield a trial wave func-
tion ΨT , whose form is chosen following a prior analy-
sis of the physical system being investigated. The trial
wave function ΨT when best optimized, approximates
the exact wave function. If the trial wave function ΨT
is identical with the exact ground state wave function,
then the VMC method gives the exact estimate of the
ground state energy.
The behaviour of a quantum mechanical many-particle system is described by the Schrödinger equation [1, 5]:

    ĤΨ(R) = EΨ(R) .                                          (1)

Ĥ is the Hamiltonian of the system, given by

    Ĥ = Σ_{i=1}^{N} [ −(ℏ²/2m) ∇i² + Uext(ri) ] + Σ_{i<j} Vij ,

where −(ℏ²/2m)∇i² is the kinetic energy, Uext(ri) is the external potential of the ith particle, Vij is the interaction potential between the ith and jth particles, and ∇i² is the Laplacian of the ith particle.


The variational theorem of quantum mechanics states that the expectation value of the Hamiltonian Ĥ evaluated with any trial wave function ΨT is an upper bound on the ground state energy Emin [1]:

    ⟨Ĥ⟩ = ET = ∫ ΨT(R) Ĥ ΨT(R) dR / ∫ ΨT(R) ΨT(R) dR ≥ Emin .

In order to evaluate this integral with MC methods via the Metropolis algorithm using importance sampling, it is written in terms of a probability density function ρ(R) and a local energy EL [3]:

    ET = ∫ ρ(R) EL(R) dR ,                                   (2)

where

    ρ(R) = ΨT² / ∫ ΨT² dR

and

    EL = ĤΨT / ΨT .

The local energy is a function which depends on the positions of the particles and is a constant if ΨT is an exact eigenfunction of the Hamiltonian. The more closely ΨT approaches the exact eigenfunction, the less strongly EL will vary with R. This means that the variance should approach zero as our trial wave function approaches the exact ground state.

In the evaluation of the ground state energy, the variational wave function is generally chosen to be real and non-zero almost everywhere in the region of integration. We want to solve the integral in (2) with importance sampling MC integration using the Metropolis algorithm. The energy approximation as given in (2) becomes

    ET = ∫ ρ(R) EL(R) dR ≈ (1/M) Σ_{i=1}^{M} EL(Ri) ,        (3)


where Ri are sampled from the probability density ρ(R).


6. Application of VMC to Quantum Harmonic
Oscillators
In this section, the applications of the VMC technique
in the determination of ground state energies of quan-
tum harmonic oscillators in one and three dimensions
are discussed.
6.1 Harmonic Oscillator in One Dimension
We know that the Hamiltonian (Ĥ) of a one-dimensional harmonic oscillator is given by

    Ĥ = −(ℏ²/2m) d²/dx² + (1/2) k x² ,

where m is the mass of the particle and k = mω² is the spring constant, with ω denoting the angular frequency of oscillation. The first term represents the kinetic energy operator for a particle of mass m and the second term represents the potential energy operator for a particle in a potential well.

If the energy is measured in units of ℏω and distance in units of √(ℏ/(mω)), then the solution of the Schrödinger equation

    ĤΨ = EΨ

yields the well-known ground state energy E0 = 1/2 = 0.5 (in units of ℏω) in the case of one dimension. In terms of the units considered, the Hamiltonian becomes

    Ĥ = −(1/2) d²/dx² + (1/2) x² .

Let us consider the trial wave function to be of the form ΨT = exp(−βx²), where β is the parameter to be


determined from the VMC. The above form of the wave function is chosen following the physical requirement

    ΨT → 0 as x → ±∞ .

Therefore, the local energy

    EL = ĤΨT(x) / ΨT(x)

becomes

    EL = β + (0.5 − 2.0β²) x² ,

so that the ground state energy can be evaluated from the integral

    ET = ∫ EL ρ(x) dx .

To get the minimum value of this integral, β is varied in the range 0.1–2.0 in steps of 0.05. The values of ET and the variance calculated are plotted as a function of β in Figure 4. The plot of the harmonic oscillator wave function is shown in Figure 5. The solid line shows the actual wave function while the data points marked by the plus-sign are those obtained from the VMC.

Figure 4. The upper panel shows the plot of the energy for various values of the trial parameter β in the range 0.1–2.0 in steps of 0.05. The lower panel shows the plot of the variance for the same values of β.


Figure 5. Plot of the wave function Ψ(x) as a function of x. The solid line shows the actual wave function while the data points marked by the plus-sign are those obtained from the VMC.
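For readers who would like to reproduce this calculation, the following Python sketch puts the pieces together for the one-dimensional oscillator (illustrative code, not the author's original implementation): for each β, a Metropolis walker samples ρ(x) ∝ ΨT²(x) = exp(−2βx²), and the local energy EL = β + (0.5 − 2β²)x² is averaged over the walk.

import numpy as np

rng = np.random.default_rng(0)

def vmc_energy(beta, n_steps=20_000, n_burn=2_000, h=1.0):
    # Metropolis sampling of rho(x) ~ exp(-2*beta*x^2), accumulating the local energy.
    psi2 = lambda x: np.exp(-2.0 * beta * x**2)
    local_energy = lambda x: beta + (0.5 - 2.0 * beta**2) * x**2
    x, energies = 0.0, []
    for i in range(n_steps):
        x_trial = x + h * (rng.random() - 0.5)
        if rng.random() < psi2(x_trial) / psi2(x):
            x = x_trial
        if i >= n_burn:
            energies.append(local_energy(x))
    energies = np.array(energies)
    return energies.mean(), energies.var()

# Scan beta from 0.1 to 2.0 in steps of 0.05 and pick the minimum energy.
betas = np.arange(0.1, 2.0 + 1e-9, 0.05)
results = [(b, *vmc_energy(b)) for b in betas]
print(min(results, key=lambda t: t[1]))  # expected near beta = 0.5, ET = 0.5, variance ~ 0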

Now, I consider the various cases of adding a quartic (x⁴) and a sextic (x⁶) term to the Hamiltonian

    Ĥ = −(1/2) d²/dx² + V(x) .

The ground state energies and the variances obtained for these different cases are listed in Table 1. We can see that the variance σ² is 0 for β = 0.500 in the case V(x) = (1/2)x². This is because the trial wave function ΨT(x) = exp(−0.5x²) is the exact wave function in this case. However, in the other cases, we can easily see that the variance σ² is not 0, but is still a minimum. This is because the trial wave functions are not exact, and hence the variance σ² departs from zero in these cases.

Table 1. Energies and variances for different forms of harmonic oscillator potentials.


6.2 Harmonic Oscillator in Three Dimensions

The Hamiltonian in three dimensions is given by

    Ĥ = −(1/2) ∇² + (1/2) r² ,

where ∇² is the Laplacian operator. In spherical polar co-ordinates, it is given by (for a spherically symmetric wave function having no angular dependence)

    ∇² = d²/dr² + (2/r) d/dr .

Let the trial wave function be of the form

    ΨT(r) = exp(−βr²) .

The local energy EL is found to be

    EL = ĤΨT(r) / ΨT(r) = 3β + (0.5 − 2β²) r² .

The values of ET and the variances σ² are calculated for various values of β in the range [0.1, 2.0] in steps of 0.05. The minimum value Emin^(VMC) = 1.5 with variance σ²min = 0 is found for β = 0.5. In this case, the ground state energy obtained from the VMC method is exactly the same as that known to be obtained from the Schrödinger method. This implies that the form of the trial wave function corresponds to the exact ground state wave function for the system.
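The extension of the previous sketch to three dimensions is almost mechanical (again, an illustrative sketch): the walker becomes a three-component vector R, the sampled density is ρ(R) ∝ exp(−2βr²) with r² = x² + y² + z², and the local energy acquires the factor of 3.

import numpy as np

rng = np.random.default_rng(0)

def vmc_energy_3d(beta, n_steps=20_000, n_burn=2_000, h=1.0):
    R, energies = np.zeros(3), []
    for i in range(n_steps):
        R_trial = R + h * (rng.random(3) - 0.5)
        # Acceptance ratio rho(R_trial)/rho(R) = exp(-2*beta*(r_trial^2 - r^2)).
        if rng.random() < np.exp(-2.0 * beta * (R_trial @ R_trial - R @ R)):
            R = R_trial
        if i >= n_burn:
            energies.append(3.0 * beta + (0.5 - 2.0 * beta**2) * (R @ R))
    energies = np.array(energies)
    return energies.mean(), energies.var()

print(vmc_energy_3d(0.5))  # expected (1.5, 0.0), since EL is constant for beta = 0.5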
7. Summary and Discussion
In this article, I have discussed the practical implementation of the VMC technique in the determination of the ground state energies of quantum harmonic oscillators in one and three dimensions. An incisive description of the paraphernalia required for understanding the VMC technique is also presented. The results obtained using the VMC technique are found to be in excellent agreement with those known to be determined from the Schrödinger method. The advantage of this method lies in the fact that, irrespective of the analytical complexity of the problem involved, it works its way towards an approximation of the result of the problem in higher dimensions. The capability and robustness of the
VMC technique in dealing with the analytic complexity
and many-particle systems in higher dimensions is one
of its most important advantages. The accuracy of the
VMC technique depends heavily on the form of the trial
wave function chosen for the quantum mechanical sys-
tem under consideration. The chosen trial wave function
should satisfy all the necessary boundary conditions of
the physical system under study so as to get a precise
and accurate value of the ground state energy.
Acknowledgments
The author thanks Prof. T R Seshadri, Department of
Physics & Astrophysics, University of Delhi, for care-
fully reading the first draft of the manuscript and pro-
viding very valuable comments and suggestions. Thanks
are also due to Prof. H P Singh, Department of Physics
& Astrophysics, University of Delhi for motivating me to
take up this project and providing the computational fa-
cility at Observational Astronomy Laboratory, Depart-
ment of Physics & Astrophysics, University of Delhi,
India. Lastly, the author thanks the anonymous referee
for many useful comments and suggestions that signifi-
cantly improved the paper.

Suggested Reading

[1] N Zettili, Quantum Mechanics: Concepts and Applications, Wiley,


2001.
[2] A Gothandaraman, G D Peterson, G L Warren, R J Hinde and R J
Harrison, A hardware-accelerated quantum Monte Carlo framework
for n-body systems, Computer Physics Communications, Vol.180,
No.12, pp.2563–2573, 2009.


[3] W M Foulkes, L Mitas, R J Needs and G Rajagopal, Quantum Monte


Carlo simulations of solids, Reviews of Modern Physics, Vol.73, pp.33–
83, Jan. 2001.
[4] N Metropolis, A W Rosenbluth, M N Rosenbluth, A H Teller and E
Teller, Equation of State Calculations by Fast Computing Machines,
Journal of Chemical Physics, Vol.21, pp.1087–1092, Jun. 1953.
[5] M Kalos and P Whitlock, Monte Carlo Methods, John Wiley & Sons,
2009.
[6] W L McMillan, Ground State of Liquid He4, Physical Review, Vol.138,
pp.442–451, Apr. 1965.
[7] D Ceperley, G V Chester and M H Kalos, Monte Carlo simulation of
a many-fermion study, Physical Review Vol.B16, pp.3081–3099, Oct.
1977.
[8] J Kolorenc and L Mitas, Applications of quantum Monte Carlo
methods in condensed systems, Reports on Progress in Physics, Vol.74,
No.2, 026502, Feb. 2011.
[9] W A Lester, Jr., L Mitas and B Hammond, Quantum Monte Carlo for
atoms, molecules and solids, Chemical Physics Letters, Vol.478, pp.1–
10, Aug. 2009.
[10] N Metropolis, Monte Carlo: In the Beginning and Some Great Expectations, In: R Alcouffe, R Dautray, A Forster, G Ledonois and B Mercier (Eds.), Lecture Notes in Physics, Berlin Springer Verlag, Vol.240, p.62, 1985.
[11] N Metropolis and S Ulam, The Monte Carlo Method, Journal of the
American Statistical Association, Vol.44, No.247, pp.335–341, Sep.
1949.
[12] J Thijssen, Computational Physics, Cambridge University Press, 2007.
[13] S Koonin, Computational Physics: Fortran Version, Basic Books,
1998.
[14] F James, Monte Carlo theory and practice, Reports on Progress in
Physics, Vol.43, No.9, p.1145, 1980.
URL http://stacks.iop.org/0034-4885/43/i=9/a=002
[15] B A Cipra, The Best of the 20th Century: Editors Name Top 10
Algorithms, SIAM News, 33, 2000.
[16] J Dongarra and F Sullivan, Guest editors’ introduction: The top 10
algorithms, Computing in Science and Engg., Vol.2, No.1, pp.22–23,
Jan. 2000.
Address for Correspondence
Sukanta Deb
Department of Physics
Acharya Narendra Dev College
(University of Delhi),
Govindpuri, Kalkaji,
New Delhi 110019, India
Email: sukantodeb@gmail.com


Appendix A. Generation of Random Numbers


The kernel of the MC technique is the generation of random numbers. Random numbers are
characterised by the fact that the value of a number cannot be predicted from the previous
numbers. Therefore, if a sequence of random numbers is constructed, the probability density
function (PDF) for a new number is independent of all the numbers generated so far. Random
numbers for use in calculations are very hard to obtain, although they may occur in certain
experiments. For example, random numbers occur in experiments such as the radioactive
decay of a nucleus, cosmic ray hits in a detector or the generation of noise in electronic circuits.
However, the numbers generated from these experiments may not be useful as they may lack
necessary uniformity as required for MC calculations [1].
MC calculations are made more effective and robust using random numbers generated from
computer algorithms. Those random numbers are called pseudo-random numbers. One impor-
tant property of the pseudo-random numbers is that their distribution can be made uniform
within a prescribed range. In the generation of the pseudo-random numbers from computer
algorithms, the next numbers are generated from the previous ones by a mathematical formula.
Fortunately, these pseudo-random numbers have the required properties of randomness which
make them suitable for MC simulations [1]. Although the random numbers generated from the
computer are nearly random in nature, there exists always a correlation between the numbers
after a long cycle, i.e., there is a period before the same set of random numbers is generated.
The computer algorithms are usually based on a random seed that starts the sequence of num-
bers. That is, if we construct a sequence two times with the same initial seed, the same numbers
are produced [1].
1. The Transformation Method
Given a sequence r1 , r2 . . . , rn uniformly distributed in [0, 1], the next step is to determine a
sequence x1 , x2 . . . , xn distributed according to the desired PDF f(x). Therefore, the task is
to find a function x(r) that is distributed according to a specified PDF f(x), given that r is
distributed according to a uniform distribution between 0 and 1 [2].

The probability to obtain a value of r in the interval [r, r + dr] is g(r)dr. This must be equal to the probability to obtain a value of x in the interval [x(r), x(r) + dx(r)], which is f(x)dx. This means that the probability that r is less than some value r′ is equal to the probability that x is less than x(r′) [2], i.e.,

    P(r ≤ r′) = P(x ≤ x(r′))  ⇒  G(r′) = F(x(r′)),

where F and G are the cumulative distribution functions corresponding to the PDFs f and g, respectively.

We know that the cumulative distribution function for the uniform PDF is

    G(r) = r,  r ∈ [0, 1] .

Therefore,

    F[x(r)] = ∫_{−∞}^{x(r)} f(x′) dx′ = ∫_{−∞}^{r} g(r′) dr′ = r .


This shows that the CDF F (x) treated as a random variable is uniformly distributed between
0 and 1. Solution for x(r) may be obtained from the above equation depending on the f(x)
given.
1.1 Uniform Distribution
The uniform distribution [2] for the continuous variable x (−∞ < x < ∞) is defined by

    f(x; a, b) = 1/(b − a),  a ≤ x ≤ b,
               = 0,          otherwise.

That is, x is equally likely to be found anywhere between a and b. The CDF F(x) is related to the PDF f(x) by

    F(x) = ∫_{−∞}^{x} f(x′) dx′ .

Suppose we want to generate a random variable according to the uniform PDF f(x) defined above. The CDF F(x) is given by

    F(x) = ∫_{−∞}^{x} f(x′; a, b) dx′
         = ∫_{a}^{x} dx′/(b − a)
         = (x − a)/(b − a),  a ≤ x ≤ b.

Now, to solve for x(r), let us set

    F(x) = r,  r ∈ [0, 1] .

Therefore,

    (x − a)/(b − a) = r  ⇒  x(r) = a + (b − a) r .

Hence, the variable x(r) is distributed according to the PDF f(x) given above. From this, we see that uniform random numbers are important as they can be used to generate arbitrary PDFs by means of a transformation from the uniform distribution [2].
1.2 Exponential Distribution
The exponential PDF [2] for the continuous variable x (0 ≤ x < ∞) is defined by

    f(x; ξ) = (1/ξ) exp(−x/ξ) .

The PDF is characterized by a single parameter ξ. To generate the random variable x(r) distributed according to the exponential PDF, let us set

    F(x) = r ,

where r ∈ [0, 1] and F(x) is the CDF of the PDF f(x), given by

    F(x) = ∫_0^x f(x′; ξ) dx′ .

Therefore,

    F(x) = r  ⇒  x = −ξ ln(1 − r) .

But (1 − r) is distributed in the same way as r. So,

    x(r) = −ξ ln r .

The variable x(r) is distributed according to the exponential PDF f(x; ξ) as given above. The histogram plot for the distribution of x(r) with ξ = 2 is shown in Figure A1.

Figure A1. Histogram plot for the generation of the random variable x having exponential PDF f(x) with ξ = 2 using the transformation method.
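In Python, this transformation is essentially a one-liner (an illustrative sketch; the sample size is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
xi = 2.0
r = rng.random(100_000)  # r uniform in [0, 1]
x = -xi * np.log(r)      # x distributed as (1/xi) exp(-x/xi)
print(x.mean())          # should be close to xi = 2, the mean of the exponential PDF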
1.3 Gaussian or Normal Distribution
The Gaussian or normal PDF [2] of the continuous variable x (−∞ < x < ∞) is defined by

    f(x; μ, σ²) = (1/√(2πσ²)) exp[ −(x − μ)²/(2σ²) ] .

The PDF has two parameters, μ and σ². They correspond to the mean and variance of x, respectively. Using μ = 0 and σ = 1, the standard Gaussian PDF is defined as

    f(x; 0, 1) = φ(x) = (1/√(2π)) exp(−x²/2),

with the corresponding CDF

    Φ(x) = ∫_{−∞}^{x} φ(x′) dx′ .


Figure A2. The upper and the lower panels show the histogram plots for the generation of the pair of random variables (x, y), respectively, having the Gaussian PDF f using the Box–Muller transformations.

In order to construct pairs of normally distributed random numbers, the following procedure may be adopted:

1. Obtain a pair of uniformly distributed random numbers (ui, vi).

2. Calculate ri = √(−2 ln ui) and θi = 2πvi.

3. The normally distributed variables are

       xi = ri cos θi ,
       yi = ri sin θi .

The transformations given above are known as the Box–Muller transformations [3]. Figure A2 shows histogram plots for the generation of the pair of random variables (x, y) having the Gaussian PDF f using the Box–Muller transformations.
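A short Python illustration of the Box–Muller recipe (an illustrative sketch added here; the sample size is arbitrary):

import numpy as np

rng = np.random.default_rng(0)
u, v = rng.random(100_000), rng.random(100_000)  # two independent uniform sequences
r, theta = np.sqrt(-2.0 * np.log(u)), 2.0 * np.pi * v
x, y = r * np.cos(theta), r * np.sin(theta)      # pairs of independent standard normals
print(x.mean(), x.std(), y.mean(), y.std())      # roughly 0, 1, 0, 1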

2. Acceptance-Rejection Method
Suppose we want to generate a random variable from a distribution with PDF f. If it turns
out to be too difficult to simulate the value of the random variable using the transformation
method, the acceptance-rejection method is a useful alternative [4]. Let g(x) be another PDF
defined in the support of f(x) such that

f(x) ≤ cg(x), ∀x ,

where c > 0 is a known constant. Suppose there exists a method to generate a random variable
having PDF g, then according to the acceptance-rejection algorithm [2,3]:


1. Generate y from g.

2. Generate u from U[0, 1].

3. If u ≤ f(y)/(c g(y)), set x = y; else return to step 1.

It can be shown that x is a random variable from the distribution with PDF f(x). The function g(x) is also called the majorizing function [4]. It can be shown that the expected number of trials for an acceptance is c. Hence, for this method to be efficient, the constant c must be chosen such that the rejection rate is low. A method to choose an optimum c is [3]

    c = max_x [ f(x)/g(x) ] .

Now, let us apply the acceptance-rejection method to generate a random variable having PDF

    f(x) = 12x(1 − x)²,  0 < x < 1 .

Since the random variable is concentrated in the interval [0, 1], let us consider the acceptance-rejection method with

    g(x) = 1,  0 < x < 1.

Therefore,

    f(x)/g(x) = 12x(1 − x)² .

We need to determine the smallest constant c such that f(x)/g(x) ≤ c. Using calculus, the maximum value of f(x)/g(x) is found to occur at x = 1/3. Therefore,

    f(x)/g(x) = 12x(1 − x)² ≤ 12 · (1/3)(1 − 1/3)² = 16/9 = c .

Hence,

    f(x)/(c g(x)) = (9/16) · 12x(1 − x)² = (27/4) x(1 − x)² .
The rejection procedure is as follows:

1. Generate random numbers u1, u2 ∼ U[0, 1].

2. If u2 ≤ (27/4) u1(1 − u1)², set x = u1; else return to step 1.

The generation of samples corresponding to this PDF by the acceptance-rejection method is shown in Figure A3.
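The rejection procedure above translates directly into Python (an illustrative sketch):

import numpy as np

rng = np.random.default_rng(0)

def sample_f(n):
    # Acceptance-rejection sampling of f(x) = 12 x (1 - x)^2 on (0, 1) with
    # g(x) = 1 and c = 16/9, i.e. accept u1 whenever u2 <= f(u1)/(c*g(u1)).
    out = []
    while len(out) < n:
        u1, u2 = rng.random(), rng.random()
        if u2 <= 27.0 / 4.0 * u1 * (1.0 - u1)**2:
            out.append(u1)
    return np.array(out)

x = sample_f(100_000)
print(x.mean())  # the exact mean of this PDF is 2/5 = 0.4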


Figure A3. The upper panel shows the plot of the function f(x) = 12x(1 − x)² as a function of x in the interval [0, 1]. The lower panel shows the histogram plot for the generation of the random variable x having PDF f(x) using the acceptance-rejection algorithm.

[1] P Bevington and D Robinson, Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill Higher Education, 2003.
[2] G Cowan, Statistical data analysis, Oxford University Press, 1997.
[3] S Ross, Simulation: Statistical Modeling and Decision Science Series, Academic Press, 2006.
[4] C Lemieux, Monte Carlo and Quasi-Monte Carlo Sampling, Mathematics and Statistics, Springer-Verlag,
New York, 2009.

Appendix B. Exercises for the Reader

1. Using a trial function of the form

       ΨT(α) = exp(−αr²),

   show that the ground state energy of the H-atom (in units of e = ℏ = m = 1) is given by
   ET = Emin^(VMC) = −0.5.

2. Using a trial function of the form

       ΨT = x(x − L) exp(αx),

   show that the ground state energy (in units of e = ℏ = m = 1) of a quantum particle of mass m moving in a one-dimensional box with walls at x = 0 and x = L, where L = 2, is
   ET = Emin^(VMC) = 1.249.
