The function H({q, p}) is called the Hamiltonian function, or just the Hamil-
tonian. Note that we introduced a special letter to distinguish the Hamil-
tonian function from energy. When speaking of the Hamiltonian we always
mean a function of generalized momenta and coordinates rather than just
some quantity. Correspondingly, if the energy is expressed as a function of
some other variables, it is not the Hamiltonian.
Now let us evaluate the partial derivative of the function H with respect
to one of the momenta while keeping all the other variables constant:
\[
\frac{\partial H}{\partial p_{s_0}} \,=\, \dot q_{s_0} \,+\, \sum_s \left[\, p_s - \frac{\partial L(\{\dot q, q\})}{\partial \dot q_s} \,\right] \frac{\partial \dot q_s(\{q, p\})}{\partial p_{s_0}} \; .
\]
We see that with our definition of the generalized momentum the expression
in the brackets is identically equal to zero. Hence,
\[
\frac{\partial H}{\partial p_{s_0}} \,=\, \dot q_{s_0} \; . \qquad (7)
\]
Similarly, for the derivative with respect to one of the coordinates,
\[
\frac{\partial H}{\partial q_{s_0}} \,=\, \sum_s \left[\, p_s - \frac{\partial L(\{\dot q, q\})}{\partial \dot q_s} \,\right] \frac{\partial \dot q_s(\{q, p\})}{\partial q_{s_0}} \,-\, \frac{\partial L(\{\dot q, q\})}{\partial q_{s_0}} \; . \qquad (8)
\]
The expression in the brackets vanishes again, while the Lagrange equation gives
\(\partial L/\partial q_{s_0} = \dot p_{s_0}\), so Eq. (8) reduces to
\(\partial H/\partial q_{s_0} = -\dot p_{s_0}\). Together with Eq. (7) this yields
\[
\dot q_s = \frac{\partial H}{\partial p_s} \, , \qquad \dot p_s = -\frac{\partial H}{\partial q_s} \; . \qquad (11)
\]
Given the Hamiltonian and the initial conditions for all q's and p's, one uses Eqs. (11),
called the Hamiltonian equations, to find the evolution of the system.
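As a quick worked check of this recipe for a case where the relation between p and q̇ depends on the coordinate, consider a one-dimensional Lagrangian with a coordinate-dependent mass a(q) (an illustrative example, not from the text):
\[
L = \frac{a(q)\,\dot q^2}{2} - U(q) \, , \qquad
p = \frac{\partial L}{\partial \dot q} = a(q)\,\dot q \;\Longrightarrow\; \dot q = \frac{p}{a(q)} \, , \qquad
H = p\dot q - L = \frac{p^2}{2a(q)} + U(q) \, .
\]
Then \(\partial H/\partial p = p/a(q) = \dot q\), in agreement with Eq. (7), while
\[
\frac{\partial H}{\partial q} = -\frac{a'(q)\,p^2}{2a^2(q)} + U'(q)
= -\left.\frac{\partial L}{\partial q}\right|_{\dot q = p/a(q)} = -\dot p \, ,
\]
as required by Eqs. (11).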
Let us derive the Hamiltonian of a single particle in an external potential.
Starting from the Lagrangian
\[
L = \frac{m\dot{\mathbf r}^2}{2} - U(\mathbf r) \, , \qquad (12)
\]
we find the vector of the generalized momentum:
\[
\mathbf p = \frac{\partial L}{\partial \dot{\mathbf r}} = m\dot{\mathbf r} \; . \qquad (13)
\]
The generalized momentum coincides with what we previously introduced
as the linear momentum. The Hamiltonian of the system is then
\[
H = \frac{\mathbf p^2}{2m} + U(\mathbf r) \, , \qquad (14)
\]
and the Hamiltonian equations are
\[
\dot{\mathbf r} = \mathbf p/m \, , \qquad \dot{\mathbf p} = -\frac{\partial U}{\partial \mathbf r} = \mathbf F(\mathbf r) \; . \qquad (15)
\]
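To make Eqs. (15) concrete, here is a minimal numerical sketch that integrates them with the semi-implicit (symplectic) Euler scheme; the specific potential U(r) = k r^2/2 and all parameter values are illustrative assumptions, not part of the text.

import numpy as np

m, k = 1.0, 1.0                      # assumed mass and spring constant
dt, steps = 1e-3, 10_000

def force(r):
    # F(r) = -dU/dr for the assumed potential U(r) = k r^2 / 2
    return -k * r

def energy(r, p):
    # H = p^2/(2m) + k r^2/2, cf. Eq. (14) with the assumed U
    return p @ p / (2 * m) + k * (r @ r) / 2

r = np.array([1.0, 0.0])             # initial position (assumed)
p = np.array([0.0, 0.5])             # initial momentum (assumed)
E0 = energy(r, p)

for _ in range(steps):
    # discrete version of Eqs. (15): update p with F(r), then r with the new p
    p = p + dt * force(r)
    r = r + dt * p / m

print(E0, energy(r, p))              # the energy is (approximately) conserved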
The generalization for the system of N particles interacting through pair
potentials is as follows.
\[
H = \sum_{j=1}^{N} \frac{\mathbf p_j^2}{2m_j} + \sum_{i<j} U_{ij}(|\mathbf r_i - \mathbf r_j|) \, , \qquad (16)
\]
\[
\dot{\mathbf r}_i = \mathbf p_i/m_i \, , \qquad \dot{\mathbf p}_i = \sum_{j \neq i} \mathbf F_{ij} \, , \qquad (17)
\]
where
\[
\mathbf F_{ij} = -\frac{\partial U_{ij}}{\partial \mathbf r_i} \; . \qquad (18)
\]
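As an illustration of how the sums in Eqs. (17)-(18) are assembled, the following sketch accumulates the pairwise forces for a few particles; the specific pair potential U_ij(r) = k (r - a)^2/2 and all numbers are assumptions made only for this example.

import numpy as np

k, a = 1.0, 1.0                          # assumed spring constant and rest length

def pair_force(ri, rj):
    # F_ij = -dU_ij/dr_i for the assumed pair potential U_ij(r) = k (r - a)^2 / 2
    d = ri - rj
    r = np.linalg.norm(d)
    return -k * (r - a) * d / r

positions = np.array([[0.0, 0.0], [1.5, 0.0], [0.0, 2.0]])   # three particles (assumed)

forces = np.zeros_like(positions)
for i in range(len(positions)):
    for j in range(len(positions)):
        if j != i:
            forces[i] += pair_force(positions[i], positions[j])   # the sum over j != i in Eq. (17)

print(forces)
print(forces.sum(axis=0))                # ~0: since F_ij = -F_ji, the internal forces cancel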
For a counterintuitive example, note that in a magnetic field the generalized
momentum, P, does not coincide with the linear momentum p. It is easy
to verify that in a uniform (for simplicity) magnetic field B one has
\[
\mathbf P = \mathbf p + \frac{e}{2}\,\mathbf B \times \mathbf r \, , \qquad (19)
\]
\[
H(\mathbf P, \mathbf r) = \frac{1}{2m}\left(\mathbf P - \frac{e}{2}\,\mathbf B \times \mathbf r\right)^{2} . \qquad (20)
\]
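One quick consistency check (not part of the original text) that Eqs. (19) and (20) agree with each other: applying the Hamiltonian equation \(\dot{\mathbf r} = \partial H/\partial \mathbf P\) to (20) gives
\[
\dot{\mathbf r} = \frac{1}{m}\left(\mathbf P - \frac{e}{2}\,\mathbf B \times \mathbf r\right)
\quad\Longrightarrow\quad
\mathbf p \equiv m\dot{\mathbf r} = \mathbf P - \frac{e}{2}\,\mathbf B \times \mathbf r \, ,
\]
which is exactly Eq. (19).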
Poisson Brackets. Suppose we are interested in the time evolution of some
function A({q, p}) due to the evolution of the coordinates and momenta. There
is a useful relation
\[
\dot A = \{H, A\} \, , \qquad (21)
\]
where the symbol on the r.h.s. is a shorthand notation, called the Poisson
bracket, for the following expression:
\[
\{H, A\} = \sum_s \left( \frac{\partial H}{\partial p_s}\frac{\partial A}{\partial q_s} - \frac{\partial H}{\partial q_s}\frac{\partial A}{\partial p_s} \right) . \qquad (22)
\]
The proof is very simple: apply the chain rule to dA({q(t), p(t)})/dt and then
use Eqs. (11) for q̇_s and ṗ_s.
Hence, any quantity A({q, p}) is a constant of motion if, and only if, its
Poisson bracket with the Hamiltonian is zero. In particular, the Hamiltonian
itself is a constant of motion (if it does not explicitly depend on time),
since {H, H} = 0, and this is nothing but the conservation of energy.
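These statements are easy to verify symbolically. A minimal sketch with sympy, using the one-particle Hamiltonian H = p^2/2m + U(q) purely as an illustration:

import sympy as sp

q, p, m = sp.symbols('q p m', positive=True)
U = sp.Function('U')                      # an unspecified potential

def poisson_bracket(H, A):
    # one-degree-of-freedom version of Eq. (22): {H, A}
    return sp.diff(H, p) * sp.diff(A, q) - sp.diff(H, q) * sp.diff(A, p)

H = p**2 / (2 * m) + U(q)

print(poisson_bracket(H, H))              # 0: the Hamiltonian is a constant of motion
print(poisson_bracket(H, q))              # p/m, i.e. q_dot, cf. Eqs. (11)
print(poisson_bracket(H, p))              # -U'(q), i.e. p_dot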
The phase space is convenient for the statistical description of a mechanical
system. Suppose that the initial state of a system is known only with a
certain finite accuracy. This means that we actually know only the probability
density W0(X) of finding the point X somewhere in the phase space.
If the initial condition is specified in terms of a probability density, then the
subsequent evolution should also be described probabilistically; that is, we
have to work with the distribution W(X, t), which should be somehow related
to the initial condition W(X, 0) = W0(X). Our goal is to establish
this relation.
We introduce the notion of a statistical ensemble. Instead of dealing with
the probability density, we will work with a quantity that is proportional to
it and is more transparent. Namely, we simultaneously take some large
number Nens of identical and independent systems distributed in accordance
with W(X, t). We call this set of systems a statistical ensemble. The j-th
member of the ensemble is represented by its point Xj in the phase space.
The crucial observation is that the quantity Nens W(X, t) gives the number
density (=concentration) of the points {Xj}. Hence, to find the evolution
of W we just need to describe the evolution of the number density of the
points Xj, which is intuitively easier, since each Xj obeys the Hamiltonian
equations of motion.
A toy model. To get used to the ensemble description, and also to obtain
some important insights, consider the following dynamical model with just
one degree of freedom:
\[
H = \tfrac{1}{4}\,(p^2 + q^2)^2 \, . \qquad (26)
\]
The Hamiltonian equations (11) then read
\[
\dot q = (p^2 + q^2)\, p \, , \qquad (27)
\]
\[
\dot p = -(p^2 + q^2)\, q \, . \qquad (28)
\]
The quantity
\[
\omega = p^2 + q^2 \qquad (29)
\]
is a constant of motion, since, up to a numeric factor, it is the square root of
the energy. We thus have a linear system of equations,
\[
\dot q = \omega p \, , \qquad (30)
\]
\[
\dot p = -\omega q \, , \qquad (31)
\]
which is easily solved:
\[
q(t) = q(0)\cos\omega t + p(0)\sin\omega t \, , \qquad
p(t) = p(0)\cos\omega t - q(0)\sin\omega t \, .
\]
Each member of the ensemble thus rotates in the (q, p) plane with its own constant angular velocity ω.
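To see how an ensemble spreads under this dynamics (cf. Figure 1), here is a minimal numerical sketch that evolves a cloud of points with the exact solution above; the Gaussian initial distribution and all parameter values are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(0)
N_ens = 1000                              # ensemble size, as in Figure 1

# assumed initial distribution W_0: a small Gaussian cloud centred at (q, p) = (1, 0)
q0 = rng.normal(1.0, 0.1, size=N_ens)
p0 = rng.normal(0.0, 0.1, size=N_ens)

omega = p0**2 + q0**2                     # the constant of motion (29), one value per member

def ensemble_at(t):
    # exact solution of Eqs. (30)-(31) with constant omega
    q = q0 * np.cos(omega * t) + p0 * np.sin(omega * t)
    p = p0 * np.cos(omega * t) - q0 * np.sin(omega * t)
    return q, p

for t in (0.0, 5.0, 50.0):
    q, p = ensemble_at(t)
    # omega differs from member to member, so the cloud shears into a ring of radius ~1
    print(t, q.mean(), p.mean(), np.hypot(q, p).mean())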
The number NΓ0(t) of points within a given phase-space volume Γ0 at the time t is given by the integral
\[
N_{\Gamma_0}(t) = \int_{\Gamma_0} W(X, t)\, d\Gamma \, , \qquad (34)
\]
where dΓ = dq_1 ... dq_m dp_1 ... dp_m is the element of the phase-space volume;
the integration is over the volume Γ0. To characterize the rate of variation
of the number of points within the volume Γ0, we use the following time
derivative:
\[
\dot N_{\Gamma_0} = \int_{\Gamma_0} \frac{\partial}{\partial t} W(X, t)\, d\Gamma \, . \qquad (35)
\]
By the definition of the function W(X, t), its variable X does not depend
on time, so the time derivative acts only on the variable t.
There is an alternative way of calculating ṄΓ0. We may count the number
of points that cross the surface of the volume Γ0 per unit time:
\[
\dot N_{\Gamma_0} = -\int_{\text{surface of }\Gamma_0} \mathbf J \cdot d\mathbf S \, . \qquad (36)
\]
Here J is the flux of the points [the number of points per unit time per unit
surface perpendicular to the velocity]; dS = n dS, where n is the unit normal
vector at a surface point and dS is the surface element. We assume that n
is directed outwards and thus write the minus sign on the right-hand side of
(36).
In accordance with the divergence (Gauss) theorem, the surface integral
(36) can be converted into the bulk integral
\[
\int_{\text{surface of }\Gamma_0} \mathbf J \cdot d\mathbf S = \int_{\Gamma_0} \nabla \cdot \mathbf J \; d\Gamma \, . \qquad (37)
\]
Comparing Eq. (35) with Eqs. (36)-(37), and noting that the volume Γ0 is arbitrary, we conclude that
\[
\frac{\partial W}{\partial t} + \nabla \cdot \mathbf J = 0 \, .
\]
This is a quite general relation, known as the continuity equation. It arises
in theories describing flows of conserved quantities (say, the particles of fluids
and gases). The dimensionality of the problem does not matter.
Now we are going to independently relate the flux J to W(X, t) and thus
end up with a closed equation in terms of W(X, t). By the definition of J
we have
\[
\mathbf J = W(X, t)\, \dot X = W(X, t)\, \mathbf V(X) \, , \qquad (41)
\]
because the flux of particles is always equal to their concentration times
velocity. With Eq. (24) taken into account and some simple algebra leading
to the cancellation of the terms ∂²H/∂q_s∂p_s by the terms −∂²H/∂p_s∂q_s, we
ultimately arrive at an elegant formula (we take advantage of the previously
introduced Poisson bracket):
\[
\frac{\partial}{\partial t} W(X, t) = \{W, H\} \, . \qquad (42)
\]
This is the Liouville equation: the equation of motion for the distribution
function W(X, t). Since it is a first-order differential equation with respect
to time, it unambiguously defines the evolution of any given initial
distribution.
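The cancellation of the mixed derivatives mentioned above is equivalent to the statement that the phase-space velocity field V(X) is divergence-free. A small symbolic check (a sketch with sympy for one degree of freedom and an arbitrary Hamiltonian):

import sympy as sp

q, p = sp.symbols('q p', real=True)
H = sp.Function('H')(q, p)                # an arbitrary Hamiltonian of one degree of freedom

# phase-space velocity field V = (q_dot, p_dot) from the Hamiltonian equations (11)
q_dot = sp.diff(H, p)
p_dot = -sp.diff(H, q)

# its phase-space divergence: the mixed second derivatives cancel
div_V = sp.simplify(sp.diff(q_dot, q) + sp.diff(p_dot, p))
print(div_V)                              # 0: the flow in phase space is incompressible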
While the form of the Liouville equation definitely has something in
common with Eq. (21), the physical meanings of the two are radically different.
On the l.h.s. of Eq. (21) we are dealing with the full derivative with respect
to time, A ≡ A({q(t), p(t)}), while the variable X in Eq. (42) is essentially
time-independent; it just labels a fixed point in the phase space. Note also
the different sign: {W, H} = −{H, W}.
Nevertheless, the relation (21) becomes crucially important for
understanding the structure of equilibrium solutions of the Liouville equation.
Indeed, for any equilibrium (=time-independent) solution W(X) we have
{H, W} = 0. Thus, if we formally (the procedure has no direct physical
meaning!) plug X = X(t) into W(X), where X(t) is any trajectory satisfying
the equations of motion, then the result will be time-independent. That
is, any equilibrium W is formally equal to some constant of motion, and vice
versa! We have already seen an example of such a feature when playing with
our toy model. Now we see that this is a general theorem, called Liouville's
theorem.
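A small symbolic illustration of the theorem (a sketch with sympy, using the toy Hamiltonian (26) and an arbitrary function f as an example): any distribution that depends on q and p only through H is an equilibrium solution.

import sympy as sp

q, p = sp.symbols('q p', real=True)
f = sp.Function('f')                      # an arbitrary function of the energy

def poisson_bracket(H, A):
    # one-degree-of-freedom version of Eq. (22): {H, A}
    return sp.diff(H, p) * sp.diff(A, q) - sp.diff(H, q) * sp.diff(A, p)

H = (p**2 + q**2)**2 / 4                  # the toy Hamiltonian (26)
W = f(H)                                  # a distribution that is a function of H only

print(sp.simplify(poisson_bracket(H, W)))  # 0: W = f(H) is an equilibrium distribution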
The most important equilibrium solution of the Liouville equation for the
distribution function W({q, p}) is the so-called Gibbs distribution, describing
a system in thermal equilibrium with respect to some heat bath of
temperature T. Up to a global normalization factor, the Gibbs distribution reads
\[
W(\{q, p\}) \propto e^{-\beta H(\{q, p\})} \, , \qquad (43)
\]
where β is a certain (and the only) parameter characterizing the given heat
bath, and thus the same for all the systems that are in contact with it. The
temperature is then defined in terms of β as
\[
T = 1/\beta \, . \qquad (44)
\]
This definition implies that we use energy units for temperature (which is
most natural from the theoretical point of view). If the temperature is measured
in kelvins (for historical and practical reasons), then the relation between β
and T becomes
\[
k_B T = 1/\beta \, , \qquad (45)
\]
where the re-scaling coefficient k_B is called the Boltzmann constant.
The conclusion is that the Hamiltonian function plays the central part
in the equilibrium statistics of the system.
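As a small numerical illustration of (43)-(44), the sketch below samples the Gibbs distribution for a one-dimensional harmonic oscillator H = p^2/2m + k q^2/2 and checks equipartition; the oscillator itself, the parameter values, and the choice k_B = 1 (energy units for T) are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(1)
m, k, T = 1.0, 1.0, 2.0                   # assumed mass, spring constant, temperature (energy units)
N = 100_000

# For H = p^2/(2m) + k q^2/2 the Gibbs weight exp(-H/T) factorizes into two Gaussians:
#   p ~ N(0, m T)   and   q ~ N(0, T/k)
p = rng.normal(0.0, np.sqrt(m * T), size=N)
q = rng.normal(0.0, np.sqrt(T / k), size=N)

H = p**2 / (2 * m) + k * q**2 / 2
print(H.mean())                           # ~ T: each quadratic term contributes T/2 on average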
Figure 1: Evolution of the ensemble of 1000 systems described by the Hamil-
tonian (26).