
7.6 SUPERPOSITIONS OF STATIONARY STATES


I want to be sure you don’t overlook the very special, limited nature of stationary states.
First, not all systems have stationary states—separable solutions to the TDSE exist only for
conservative systems. Second, all systems—conservative or not—exist in states that aren’t
stationary.
In general, some states of an arbitrary system are bound and some are not. But in
either case, the non-stationary wave functions of such a system are not simple products like
𝜓(𝑥)𝜁(𝑡); rather, like the Gaussian (7.3), they are complicated beasties in which position
and time variables are inextricably intertwined.
Personally, I’m more comfortable with non-stationary states, for their physical
properties are more akin to what I expect of a particle in motion. For example, the
expectation value and uncertainty of position for a microscopic particle in a non-stationary
state change with time, as does the position of a macroscopic particle in motion.
So I want to close this chapter on stationary states with a preview of their non-
stationary siblings. For future reference, I’ve collected the properties of these two kinds of
states in Table 7.1. In this section, we’ll look briefly at how non-stationary states are
constructed and at the properties of a single observable—the energy—in such a state.

TABLE 7.1  STATIONARY VS. NON-STATIONARY STATES

                                Stationary                              Non-Stationary

State Function                  Ψ(x, t) = ψ_E(x) e^(−iEt/ℏ)             Ψ(x, t) not separable

Energy                          sharp: ΔE = 0                           not sharp: ΔE > 0

Average Energy                  ⟨E⟩ = E, independent of t               ⟨E⟩ independent of t

Position Probability Density    P(x) = |ψ_E(x)|²                        P(x, t) = |Ψ(x, t)|²

Observable Q                    ⟨Q⟩ independent of t                    ⟨Q⟩(t) may depend on t
The Energy in a Non-Stationary State
One way to construct a wave function for a non-stationary state is to superpose some
stationary-state wave functions, controlling the mixture of each contributing stationary state
in our non-stationary brew by assigning each a coefficient. We did just this in Chap. 4, where
we constructed a continuum wave function (a wave packet) for a free particle by superposing
pure momentum functions, using the amplitude function to control the mixture in the packet.
Because the pure momentum functions are functions of infinite extent, we had to superpose
an infinite number of them to produce a spatially localized wave packet.
Similarly, we can use superposition to construct non-stationary wave functions for
bound states. But unlike their continuum counterparts, bound-state wave packets are spatially
localized even if they include only a finite number of contributors, because each of these
contributors is itself spatially localized. I wrote down one such function for the infinite
square well in Example 6.4 [Eq. (6.63)]:
\Psi(x, t) = \frac{1}{\sqrt{53}}\left[ 2\psi_1(x)\, e^{-iE_1 t/\hbar} + 7\psi_2(x)\, e^{-iE_2 t/\hbar} \right].    (7.81)

Notice that two values of the energy appear in this function, E1 and E2. This duplicity of
energies suggests that the energy of the state represented by the superposition function
Ψ(x, t) is not sharp. Indeed, it isn’t. The energy uncertainty for this non-stationary state is
positive, as we’ll now demonstrate.
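Before working that out analytically, here is a minimal numerical sketch (my own illustration, not from the text) of why (7.81) is non-stationary. It uses the infinite-well spatial functions quoted below in Eq. (7.86), ψ₁(x) = √(2/L) cos(πx/L) and ψ₂(x) = √(2/L) sin(2πx/L), with Eₙ = n²π²ℏ²/(2mL²); the units ℏ = m = L = 1 are my own choice.

```python
import numpy as np

# Illustrative sketch (not from the text): the superposition (7.81) on the
# well -L/2 <= x <= L/2, in units where hbar = m = L = 1.
hbar = m = L = 1.0

def E(n):
    # Infinite-square-well energies E_n = n^2 pi^2 hbar^2 / (2 m L^2)
    return n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)

def psi1(x):  # ground state (even): sqrt(2/L) cos(pi x / L)
    return np.sqrt(2 / L) * np.cos(np.pi * x / L)

def psi2(x):  # first excited state (odd): sqrt(2/L) sin(2 pi x / L)
    return np.sqrt(2 / L) * np.sin(2 * np.pi * x / L)

def Psi(x, t):
    # Eq. (7.81): (1/sqrt(53)) [2 psi1 e^{-i E1 t/hbar} + 7 psi2 e^{-i E2 t/hbar}]
    return (2 * psi1(x) * np.exp(-1j * E(1) * t / hbar)
            + 7 * psi2(x) * np.exp(-1j * E(2) * t / hbar)) / np.sqrt(53)

x = np.linspace(-L / 2, L / 2, 2001)
dx = x[1] - x[0]
for t in (0.0, 0.1, 0.2):
    P = np.abs(Psi(x, t))**2                      # position probability density
    print(f"t = {t:.1f}:  <x>(t) = {np.sum(x * P) * dx:+.4f}")
# <x>(t) changes with t -- unlike a stationary state, whose |Psi|^2 never moves.
```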
Example 7.4. The Strange Case of the Uncertain Energy
The energy uncertainty ΔE(t) = √(⟨E²⟩ − ⟨E⟩²) depends on two expectation
values, ⟨E⟩(t) and ⟨E²⟩. The first of these is the mean value of the energy; for the
state represented by (7.81), this value is

\langle E\rangle(t) = \frac{1}{53} \int_{-L/2}^{L/2} \left[ 2\psi_1(x)\, e^{+iE_1 t/\hbar} + 7\psi_2(x)\, e^{+iE_2 t/\hbar} \right] \hat{\mathcal{H}} \left[ 2\psi_1(x)\, e^{-iE_1 t/\hbar} + 7\psi_2(x)\, e^{-iE_2 t/\hbar} \right] dx    (7.82)

The (real) spatial functions in (7.82) are given by Eqs. (7.55), and their energies by
(7.56). The worst path to take at this point is to substitute these functions, their
energies, and the explicit form of the Hamiltonian into this equation; this path leads
immediately into a thorny patch of messy algebra.
We can avoid the briar patch by leaving the spatial functions and energies
unspecified as long as possible. For example, from the TISE we know what the
Hamiltonian does to the spatial functions in the integrand:
\hat{\mathcal{H}} \left[ 2\psi_1(x)\, e^{-iE_1 t/\hbar} + 7\psi_2(x)\, e^{-iE_2 t/\hbar} \right] = 2E_1 \psi_1(x)\, e^{-iE_1 t/\hbar} + 7E_2 \psi_2(x)\, e^{-iE_2 t/\hbar}    (7.83)

So we can write (7.82) as four similar integrals,

\langle E\rangle(t) = \frac{1}{53} \Big\{ 4E_1 \int_{-L/2}^{+L/2} \psi_1(x)\psi_1(x)\, dx + 14E_1 \int_{-L/2}^{+L/2} \psi_2(x)\psi_1(x)\, dx\; e^{+i(E_2 - E_1)t/\hbar}
    + 14E_2 \int_{-L/2}^{+L/2} \psi_1(x)\psi_2(x)\, dx\; e^{-i(E_2 - E_1)t/\hbar} + 49E_2 \int_{-L/2}^{+L/2} \psi_2(x)\psi_2(x)\, dx \Big\}    (7.84)

These four integrals fall into two classes. Two are normalization integrals of the
form (7.27). These are easy to evaluate:

\int_{-L/2}^{+L/2} \psi_1(x)\psi_1(x)\, dx = 1, \qquad \int_{-L/2}^{+L/2} \psi_2(x)\psi_2(x)\, dx = 1    (7.85)

The other two are integrals of two different spatial functions. Since the spatial
functions (7.55) are real, these integrals are equal:
\int_{-L/2}^{+L/2} \psi_2(x)\psi_1(x)\, dx = \int_{-L/2}^{+L/2} \psi_1(x)\psi_2(x)\, dx = \frac{2}{L} \int_{-L/2}^{+L/2} \cos\!\left(\pi\frac{x}{L}\right) \sin\!\left(2\pi\frac{x}{L}\right) dx    (7.86)

I hope you recognize this as the orthogonality integral for the sine and cosine
functions—and remember that this integral equals zero. If not, please go refresh
your memory with a glance at Eq. (4.45). Inserting Eqs. (7.85) and (7.86) into the
expectation value (7.84), we get (with refreshingly little work) the ensemble average
of the energy in the non-stationary state (7.81)
\langle E\rangle = \frac{1}{53}\left(4E_1 + 49E_2\right) = \frac{200}{53}E_1 = 3.77\,E_1    (7.87)

Notice that this is not equal to any of the eigenvalues En of the Hamiltonian. This
exemplifies a general result for all non-stationary states: the average value of the
energy of a particle in a non-stationary state is not one of the energy eigenvalues.
Nevertheless, in an energy measurement each member of the ensemble exhibits one
and only one of these eigenvalues (Chap. 13). That is, only the eigenvalues En are
observed in an energy measurement, whatever the state of the system. The mean of
these values is not equal to one of the eigenvalues because, as we’re about to
discover, individual measurement results fluctuate around 〈𝐸〉.
Notice also that the time dependence has vanished from the expectation
value of the energy of a non-stationary state, (7.87), as it did (§ 7.5) from this
quantity for a stationary state. This time-independence is a consequence of the
structure of Eq. (7.84): each time-dependent factor in this equation is multiplied by
an orthogonality integral that is equal to zero. No such simplification occurs for an
arbitrary observable Q, because the integrals that multiply time-dependent factors in
〈𝑄〉(𝑡) contain factors other than the spatial functions. Thus, for this state, 〈𝑥〉(𝑡)
and 〈𝑝〉(𝑡) do depend on time.
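If you would like to check this result by brute force, here is a short symbolic computation (my own sketch, using the computer-algebra package SymPy, not part of the text) that evaluates the integrals (7.85) and (7.86) and assembles (7.84). The spatial functions are those of Eq. (7.55), and E₂ = 4E₁ for the infinite well.

```python
import sympy as sp

# A quick symbolic check (my own sketch) of Eqs. (7.84)-(7.87) for the state (7.81).
x, L, E1 = sp.symbols('x L E1', positive=True)
psi1 = sp.sqrt(2 / L) * sp.cos(sp.pi * x / L)        # n = 1 spatial function [Eq. (7.55)]
psi2 = sp.sqrt(2 / L) * sp.sin(2 * sp.pi * x / L)    # n = 2 spatial function
E2 = 4 * E1                                          # E_n = n^2 E_1 for the infinite well

norm1 = sp.integrate(psi1 * psi1, (x, -L/2, L/2))    # normalization integrals, Eq. (7.85)
norm2 = sp.integrate(psi2 * psi2, (x, -L/2, L/2))
cross = sp.integrate(psi1 * psi2, (x, -L/2, L/2))    # orthogonality integral, Eq. (7.86)
print(norm1, norm2, cross)                           # -> 1 1 0

# Eq. (7.84); the time-dependent phases are omitted because they multiply 'cross', which is zero.
E_avg = (4*E1*norm1 + 14*E1*cross + 14*E2*cross + 49*E2*norm2) / 53
print(sp.simplify(E_avg))                            # -> 200*E1/53, i.e. about 3.77 E1
```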
Now back to the energy uncertainty. The other expectation value we need is

\langle E^2\rangle = \frac{1}{53} \int_{-L/2}^{L/2} \left[ 2\psi_1(x)\, e^{+iE_1 t/\hbar} + 7\psi_2(x)\, e^{+iE_2 t/\hbar} \right] \hat{\mathcal{H}}^2 \left[ 2\psi_1(x)\, e^{-iE_1 t/\hbar} + 7\psi_2(x)\, e^{-iE_2 t/\hbar} \right] dx    (7.88)

The evaluation of ⟨E²⟩ proceeds just like that of ⟨E⟩. We work out the effect of ℋ̂²
on the wave function (7.81) using ℋ̂² = ℋ̂ℋ̂, as in Eq. (7.72). There result four
integrals very much like those of (7.84). As before, two are normalization integrals
and two are orthogonality integrals. And the latter are zero. Working through a
simple arithmetic calculation, we find
\langle E^2\rangle = \frac{1}{53}\left(4E_1^2 + 49E_2^2\right) = \frac{1}{53}\left[4E_1^2 + 49(4E_1)^2\right] = \frac{788}{53}E_1^2    (7.89)
Using this result and (7.87) for 〈𝐸〉, we obtain the uncertainty for the state (7.81),

\Delta E = \sqrt{\frac{788}{53}E_1^2 - \left(\frac{200}{53}\right)^2 E_1^2} = 0.79\,E_1    (7.90)

So the energy for this non-stationary state is not sharp. Individual results of an
ensemble energy measurement fluctuate about 〈𝐸〉 = 3.77𝐸1 with a standard
deviation of ΔE = 0.79E₁. This behavior differs strikingly from that of a stationary
state like the one in Example 7.3. For the latter, every member of the ensemble yields the
same value, so ⟨E⟩ₙ = Eₙ with no fluctuations and (ΔE)ₙ = 0.
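The same numbers follow almost immediately from the coefficients 2/√53 and 7/√53 in (7.81): anticipating the expansion-coefficient language introduced just below (and the measurement postulates of Chap. 13), the squared magnitude of each coefficient weights the corresponding eigenvalue, so ⟨E^k⟩ = Σₙ |cₙ|² Eₙ^k. A brief numerical sketch of my own, with energies in units of E₁:

```python
import numpy as np

# Sketch: <E>, <E^2>, and Delta E for the state (7.81) from its mixture
# coefficients, c1 = 2/sqrt(53) and c2 = 7/sqrt(53).  Energies are in units
# of E1, so E1 = 1 and E2 = 4 (infinite well: E_n = n^2 E_1).
c = np.array([2.0, 7.0]) / np.sqrt(53)
E_n = np.array([1.0, 4.0])

E_avg  = np.sum(np.abs(c)**2 * E_n)        # 200/53 ≈ 3.7736   [Eq. (7.87)]
E2_avg = np.sum(np.abs(c)**2 * E_n**2)     # 788/53 ≈ 14.868   [Eq. (7.89)]
dE     = np.sqrt(E2_avg - E_avg**2)        # 42/53  ≈ 0.79     [Eq. (7.90)]
print(E_avg, E2_avg, dE)
```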
General Non-Stationary States
I included in the non-stationary wave function (7.81) only two constituents, the wave
functions for the ground and first-excited states. The obvious generalization of this example
is an infinite superposition, the most general non-stationary state:

\Psi(x, t) = \sum_{n=1}^{\infty} c_n\, \psi_n(x)\, e^{-iE_n t/\hbar} \qquad \text{[general non-stationary state]}    (7.91)
For the infinite square well, this wave function looks like

\Psi(x, t) = \sum_{n=1,3,5,\dots}^{\infty} c_n \sqrt{\frac{2}{L}} \cos\!\left(n\pi\frac{x}{L}\right) e^{-iE_n t/\hbar} + \sum_{n=2,4,6,\dots}^{\infty} c_n \sqrt{\frac{2}{L}} \sin\!\left(n\pi\frac{x}{L}\right) e^{-iE_n t/\hbar}    (7.92)

The coefficients cₙ, which may be complex, control the mixture of stationary states.
For the state (7.81) of the infinite square well, these guys are

c_1 = \frac{2}{\sqrt{53}}, \qquad c_2 = \frac{7}{\sqrt{53}}, \qquad c_3 = c_4 = \cdots = 0

These numbers are called, quite reasonably, mixture coefficients or expansion coefficients.
Let’s look for a moment at the form of the wave function (7.92) at the initial time, t = 0:
\Psi(x, 0) = \sum_{n=1,3,5,\dots}^{\infty} c_n \sqrt{\frac{2}{L}} \cos\!\left(n\pi\frac{x}{L}\right) + \sum_{n=2,4,6,\dots}^{\infty} c_n \sqrt{\frac{2}{L}} \sin\!\left(n\pi\frac{x}{L}\right)    (7.93)

Structurally, Eq. (7.93) is very like the Fourier series expansion (4.41) for the initial wave
function Ψ(x, 0). We calculate the Fourier coefficients in a series expansion of a function
f(x) (Eq. 4.3) as integrals of sine or cosine functions times the function. Similarly, in
quantum mechanics, we calculate the expansion coefficients in a non-stationary-state wave
function of the form (7.91) as integrals over the initial state function, to wit:

c_n(0) = \int_{-\infty}^{\infty} \psi_n^*(x)\, \Psi(x, 0)\, dx    (7.94)

Because these coefficients are intimately related to the wave function at the initial time, I’ve
appended to them the argument (0). The parallel between (7.94) and the techniques of
Fourier analysis is striking and no accident (Chap. 12).
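To make (7.94) concrete, here is a small numerical sketch (my own, not from the text) that projects the t = 0 form of the state (7.81) onto the infinite-well eigenfunctions and recovers the coefficients 2/√53 ≈ 0.275 and 7/√53 ≈ 0.962. The well runs from −L/2 to L/2 with L = 1 (my choice), and the eigenfunctions are real, so no complex conjugation is needed.

```python
import numpy as np

# Sketch of Eq. (7.94) in practice: recover the expansion coefficients of the
# state (7.81) by projecting Psi(x, 0) onto the eigenfunctions of Eq. (7.92).
L = 1.0
x = np.linspace(-L / 2, L / 2, 20001)
dx = x[1] - x[0]

def psi_n(n, x):
    # Infinite-well eigenfunctions on [-L/2, L/2]: cosines for odd n, sines for even n.
    if n % 2 == 1:
        return np.sqrt(2 / L) * np.cos(n * np.pi * x / L)
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

Psi0 = (2 * psi_n(1, x) + 7 * psi_n(2, x)) / np.sqrt(53)   # Psi(x, 0) from Eq. (7.81)

for n in range(1, 5):
    c_n0 = np.sum(psi_n(n, x) * Psi0) * dx                 # Eq. (7.94); the integrand is real
    print(n, round(c_n0, 4))
# -> 0.2747, 0.9615, and ~0 for n = 3, 4  (i.e. 2/sqrt(53), 7/sqrt(53), zeros)
```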
Deducing the State Function
Equation (7.94) shows that from the initial wave function we can determine the expansion
coefficients {cₙ(0)} and thence the state function Ψ(x, t). But in the real world of laboratory
physics, we often cannot proceed in this fashion. Instead we determine these coefficients
experimentally—e.g., by measuring the energy of the system in its initial state. To show you
how this goes, I want to consider a final example.
Example 7.5. Creation of a State Function
You are in a laboratory. The reigning experimentalists are showing you the results
of an energy measurement on an ensemble of infinite-square-well systems. These
measurements unveil the state at t = 0 to be a mixture of the two stationary states
with quantum numbers n = 2 and n = 6. Further, the data reveal that the relative
mixture of these states is 1 : 5 for n = 2 : n = 6. Your mission: determine the wave
function for subsequent times, Ψ(x, t). You know the spatial functions ψₙ(x) and
energies Eₙ in the general expansion (7.92). All you need are the coefficients cₙ(0).
Translating the experimentalists’ information into quantum mechanics, we
first write down the form of the initial wave function,

\Psi(x, 0) = c_2(0)\, \psi_2(x) + c_6(0)\, \psi_6(x)    (7.95)

and the ratio of the expansion coefficients,

\frac{c_2(0)}{c_6(0)} = \frac{1}{5}    (7.96)

Since c₆(0) = 5c₂(0), we can write the initial function (7.95) as

\Psi(x, 0) = c_2(0)\, \psi_2(x) + 5c_2(0)\, \psi_6(x) = c_2(0)\left[\psi_2(x) + 5\psi_6(x)\right]    (7.97)
All that remains is to evaluate c2(0) in (7.97) and to write the resulting
function in the general form (7.92) for t > 0. But what quantum-mechanical
relationship can we use to calculate this coefficient? Well, there’s only one property
of the wave function Ψ(x, 0) that we haven’t already used. Can you think what it is?
Right: we haven’t normalized the initial function. The coefficient c₂(0)
provides the flexibility we need to enforce the condition

\int_{-\infty}^{\infty} \Psi^*(x, 0)\, \Psi(x, 0)\, dx = \int_{-L/2}^{L/2} \Psi^*(x, 0)\, \Psi(x, 0)\, dx = 1    (7.98)

A little algebra later, we find that this condition is satisfied by c₂(0) = 1/√26. So the
initial wave function is

\Psi(x, 0) = \frac{1}{\sqrt{26}}\left[\psi_2(x) + 5\psi_6(x)\right]    (7.99)

In trying to generalize an initial wave function such as (7.99) to times t > 0,
many newcomers to quantum mechanics come to grief. One of their most common
mistakes is to write the t > 0 wave function as (1/√26)[ψ₂(x) + 5ψ₆(x)] e^(−iEt/ℏ). I
hope it’s clear why this form is grossly in error. If not, please ponder and commit
to memory the following:
WARNING
You cannot extend the initial state function for a non-stationary
state to times t > 0 by multiplying by a single factor e^(−iEt/ℏ).
There is no single value of E that characterizes the state, because
the energy of the state is not sharp.
The right way to proceed is indicated by the general form (7.91): we
multiply each term in the expansion by the corresponding time factor. For the state
represented by (7.99) at t = 0, we get
\Psi(x, t) = \frac{1}{\sqrt{26}}\left[\psi_2(x)\, e^{-iE_2 t/\hbar} + 5\psi_6(x)\, e^{-iE_6 t/\hbar}\right]    (7.100)

where the energies are Eₙ = n²π²ℏ²/(2mL²) for n = 2 and n = 6.
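As a closing sketch (my own, in units ℏ = m = L = 1, which are not the text's), here is the Example 7.5 recipe in code: fix c₂(0) from the 1 : 5 mixture plus normalization, then attach a separate phase factor to each term, exactly as the warning demands.

```python
import numpy as np

# Sketch of the Example 7.5 recipe (units hbar = m = L = 1 are my choice).
hbar = m = L = 1.0

def E(n):
    return n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)   # infinite-well energies

def psi_n(n, x):
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)   # even-n eigenfunctions are sines

# Step 1: mixture 1 : 5 for n = 2 : n = 6, so c6 = 5*c2; normalization
# |c2|^2 + |c6|^2 = 1 then gives c2(0) = 1/sqrt(26), as in Eq. (7.99).
c2 = 1.0 / np.sqrt(26)
c6 = 5.0 * c2

def Psi(x, t):
    # Step 2 [Eq. (7.100)]: each term carries its *own* phase e^{-i E_n t/hbar};
    # a single overall factor e^{-i E t/hbar} would be wrong (see the warning above).
    return (c2 * psi_n(2, x) * np.exp(-1j * E(2) * t / hbar)
            + c6 * psi_n(6, x) * np.exp(-1j * E(6) * t / hbar))

x = np.linspace(-L / 2, L / 2, 4001)
dx = x[1] - x[0]
print(np.sum(np.abs(Psi(x, 0.0))**2) * dx)   # ≈ 1
print(np.sum(np.abs(Psi(x, 0.3))**2) * dx)   # still ≈ 1: the norm is preserved in time
```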


I want to leave you thinking about the similarity between the procedure we used in
Example 7.5 to determine the wave function for a bound non-stationary state of a
particle in a box and the one we used in § 4.4 to determine the wave packet for an
unbound non-stationary state of a free particle. Schematically, these procedures
look like

\Psi(x, 0) \;\Longrightarrow\; A(k) \;\Longrightarrow\; \Psi(x, t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} A(k)\, e^{i(kx - \omega t)}\, dk \qquad \text{[continuum state]}    (7.101)

\Psi(x, 0) \;\Longrightarrow\; \{c_n(0)\} \;\Longrightarrow\; \Psi(x, t) = \sum_n c_n(0)\, \psi_n(x)\, e^{-iE_n t/\hbar} \qquad \text{[bound state]}    (7.102)

The similarity between these procedures suggests that 𝐴(𝑘) and {𝑐𝑛 (0)} play
analogous roles in the quantum theory of continuum and bound states.
Germinating in this comparison are the seeds of a powerful generalization of these
procedures, as we’ll discover in Chap. 12.

7.7 FINAL THOUGHTS: DO STATIONARY STATES REALLY EXIST?


In this chapter, I’ve introduced the “other” Schrödinger Equation, the TISE

\hat{\mathcal{H}}\, \psi_E = E\, \psi_E    (7.103)

The first Schrödinger Equation you met (in Chap. 6) was the TDSE,

\hat{\mathcal{H}}\, \Psi = \hat{\mathcal{E}}\, \Psi    (7.104)
It’s vital that you never lose sight of the difference between these equations and their roles in
quantum theory.
The time-dependent Schrödinger Equation (7.104) is a second-order partial
differential equation in space and time variables. We do not see in it a value of the energy
because, in general, we cannot associate a single, well-defined value of this observable with a
quantum state. Instead, we find the energy operator ℇ̂. We use this equation to study the
evolution of any state, whether or not it is stationary.
The time-independent Schrödinger Equation (7.103) is an eigenvalue equation in
the position variable x (for a one-dimensional, single-particle system). The eigenvalue is E,
the value of the (sharp) energy. We use this equation to solve for the spatial dependence of a
stationary-state wave function Ψ = ψ_E e^(−iEt/ℏ). Like many elements of physical theory,
stationary states are an idealization. In the real world, one can prepare a system, via
ensemble measurements, to be in a stationary state at an initial time. But the system won’t
remain in such a state indefinitely. Only a truly isolated system would, in principle, remain in
a stationary state for all time, and in the real world a system cannot be isolated from all
external influences. Even if we take great pains to remove from the vicinity of a particle all
other objects and sources of fields, the particle interacts with the vacuum electromagnetic
field that permeates all space. Understanding the origin of this field requires knowledge of
relativistic quantum theory, which is beyond our current expertise. But the important point is
that this field causes mixing of other stationary states with the initial state. This mixing may
induce the system to eventually undergo a transition to another state.
The key word here is “eventually”—in many cases, a microscopic system remains in
a stationary state long enough for us to perform an experiment. For example, excited
stationary states of atoms are stable for periods ranging from 10⁻⁹ to 10⁻² sec, depending on
the type of transition the atom eventually undergoes. On the time scale of atomic processes,
such durations are fairly long. So, provided we’re interested in an event that occurs on an
atomic time scale, we can safely treat the system as being in a stationary state.
In spite of their slightly idealized nature, stationary states have become the heart and
soul of quantum mechanical calculations. Their usefulness—and the ease of solution of the
TISE compared to the TDSE—fully justifies the time we shall spend on them, beginning in
the next chapter.
