Entanglement and Interpretation of Quantum Mechanics
L. Motl
Entanglement at a distance is probably the most shocking feature of quantum mechanics. This bizarre feature was first pointed out by Einstein and articulated in 1935 by Einstein and his collaborators Boris Podolsky and Nathan Rosen (EPR). They hypothesized that quantum mechanics had to be wrong because in various contexts, it was predicting correlations between measurements that looked like voodoo to EPR. Meanwhile, the proponents of quantum mechanics insisted that the predictions of quantum mechanics were right.
The EPR arguments were improved and quantified by others who had similar preconceptions to Einstein's, especially by John Bell – who discovered Bell's inequalities. Who was right? In recent decades, the experiments have verified the strange predictions of quantum mechanics. Einstein was wrong and there cannot be any classical "deeper" explanation of the wavefunction in quantum mechanics, except for some very contrived non-local models. The wavefunction in quantum mechanics simply must be probabilistic and remarkably, we can prove this statement experimentally.
Entangled spins
Imagine that we have a source of spin-1/2 fermions. It emits pairs of particles to the left and to the
right. Draw a picture: a source in the middle and two detectors on the left and on the right. The
source can be such that the wavefunction of the two fermions is the singlet state:
$$|\psi\rangle = \frac{|\uparrow\rangle\,|\downarrow\rangle - |\downarrow\rangle\,|\uparrow\rangle}{\sqrt{2}}$$
It is an entangled state because it cannot be written in a factorized form, $|\psi_1\rangle|\psi_2\rangle$. In fact, it is "maximally entangled" because if you compute the reduced density matrix for the first (or the second) particle by tracing over the second particle's degrees of freedom, you obtain the density matrix of "complete ignorance", namely $(1/2)\times\mathbb{1}$, which is "maximally mixed" because it has the maximal entropy among all matrices of the same size.
The entanglement implies that if the first particle is spinning up, the other particle is spinning
down, and vice versa. The two experimentalists will always find the opposite spins. Moreover, quantum
mechanics guarantees that the same conclusion holds for the spin with respect to any axis. For example,
choose the x-axis. We know that
$$|\uparrow\rangle = \frac{|\rightarrow\rangle + |\leftarrow\rangle}{\sqrt{2}}, \qquad |\downarrow\rangle = \frac{|\leftarrow\rangle - |\rightarrow\rangle}{\sqrt{2}}$$
where $|\rightarrow\rangle$ and $|\leftarrow\rangle$ are eigenstates of $\sigma_x$ with eigenvalues $+1$ and $-1$, respectively. If you express $|\psi\rangle$ in terms of these states, you will get
$$|\psi\rangle = \frac{|\rightarrow\rangle\,|\leftarrow\rangle - |\leftarrow\rangle\,|\rightarrow\rangle}{\sqrt{2}}$$
Indeed, the spins of the two particles are always opposite whatever axis you choose. This $|\psi\rangle$ is the only antisymmetric wavefunction of the two spins (up to a normalization), and because this definition is axis-independent, all of its properties must be axis-independent, too.
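Both statements are easy to verify numerically. Here is a minimal NumPy sketch (an illustration, using the conventions above and comparing states up to an overall phase):

```python
# Minimal check: the singlet keeps its form in the sigma_x eigenbasis (up to a
# global phase) and its one-particle reduced density matrix is (1/2) x identity.
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])       # sigma_z eigenstates
right = (up + down) / np.sqrt(2)                            # sigma_x, eigenvalue +1
left = (up - down) / np.sqrt(2)                             # sigma_x, eigenvalue -1

psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)  # the singlet
psi_x = (np.kron(right, left) - np.kron(left, right)) / np.sqrt(2)
print(np.isclose(abs(np.vdot(psi_x, psi)), 1.0))            # True: the same state

rho = np.outer(psi, psi).reshape(2, 2, 2, 2)                # indices: i1 i2 j1 j2
rho1 = np.einsum('ikjk->ij', rho)                           # trace over particle 2
print(rho1)                                                 # [[0.5 0. ] [0.  0.5]]
```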
For Einstein, this correlation with respect to all axes was already hard to swallow. He was imagining that if the spins with respect to the z-axis are anticorrelated – if there is a 50 percent probability that it is "up-down" and 50 percent that the spins are "down-up" – then the particles must actually be in (with a solid, classical meaning of "be") one of these two configurations of the spins before we measure them. If we decide to measure the spin with respect to the x-axis, things change. Because there is a 50 percent probability to get "spin left" and 50 percent to get "spin right" and because the electrons are independent "pieces of objective reality", using Einstein's words, EPR believed that for the measurements of the spin with respect to the x-axis, all four combinations had to have the probability of 25 percent. Quantum mechanics gives us a different result. Only two configurations are possible and they always have 50 percent each.¹
Let us emphasize from the beginning that quantum mechanics was correct and Einstein was wrong. Experiments show that this is the case.
When the switches of the two detectors are set to the same value (1-1 or 2-2 or
3-3), the flashes have the same color (red-red or green-green) for both detectors.
Why is it so? It’s because if we choose the same values for the switches, we measure the spins of
the two fermions with respect to the same axis. Because $|\psi\rangle$ is the singlet state, we are guaranteed
to obtain the opposite values of the given component of the spin (an easier explanation: the total spin
must be zero), and because we have configured the colors of the detector flashes in the opposite way,
different eigenvalues from the two detectors translate to the same color of the flashes.
Everyone agrees. Quantum mechanics, Scully, Mulder, experiments – all of them agree that in this
particular experiment, the colors will be identical whenever the switches are set to the same value.
¹ A mathematically isomorphic discussion applies to experiments with photons that are right-handed or left-handed – the counterpart of the spin with respect to one axis – or x-linearly-polarized or y-linearly-polarized – the counterpart of the spin with respect to another axis. The maximally entangled state remains entangled in all bases you choose – circularly polarized bases or linearly polarized bases with respect to any pair of orthogonal axes.
Bell’s inequality
But what Einstein did not like was the idea that quantum mechanics could only predict the probabilities
but it did not actually say what the spins were prior to the measurements. He was convinced that
“God did not play dice” and he also asked:
What is the probability that the colors match when the switches are set to
random values?
Surprisingly, we will see that quantum mechanics gives answers that are incompatible with any theory assuming that a "local objective reality" exists before the measurement. Let us start with John Bell, who will try to defend Einstein's viewpoint.
Bell: If the colors are determined by a local objective reality (Pauli, sitting in the audience, vehemently turns his head to show his disagreement), there must exist a simple "program" before the measurements that generates the colors as a function of the values we set for the switches. Such a program assigns the colors $C_1, C_2(S_1, S_2)$ as functions of the switches $S_1, S_2 = 1, 2, 3$.
Because the detectors are spatially separated, the program must actually decide about the color $C_1$ according to $S_1$ only, and similarly for $C_2, S_2$. If it were not the case, physics near the first detector would be directly affected by the decisions about the switch of the other detector that can be many light years away – a decision that can be made instantly and just before the measurement. Such an action at a distance would violate basic intuition about locality as well as relativity.
This implies that the left detector immediately before the measurement must have a program in it,
like the following:
1 – red, 2 – green, 3 – red
The precise colors depend on the detailed properties of the fermions and the detectors and their
surroundings, including any hypothetical “hidden variables”. The program informs the detector which
color should be flashed for different choices of the switch. The other detector on the right has a similar
program. Because we know that the colors have to match when the switches are set to the same value,
the other detector must actually have the same program:
1 – red, 2 – green, 3 – red
For this particular program, the colors will agree in 5/9 of the cases.
They must agree if the switches are set to the same value (11 or 22 or 33), but they also agree in the
cases 13 and 31 because both 1 and 3 are programmed to flash the red light.
The same probability 5/9 of the same color holds for all programs that use one color for two values
of the switch and the other color for the last value. For the two remaining programs (red-red-red and
green-green-green) that generate the color independently of the switch, the probability is 9/9 that the
colors will agree for random configurations of the switches. At any rate, the overall probability will
be somewhere in between 5/9 and 9/9 for a general combination of “diverse programs” and “uniform
programs”:
$$\frac{5}{9} \le p_{\text{same color}} \le 1$$
Because I just proved this inequality to you, let us call this inequality and all analogous inequalities Bell's inequalities. These inequalities tell you that the probability of a correlation between two quantities (such as the color) is bounded unless we want to sacrifice common sense. Consequently, I predict
that the experiments will show that the colors will agree in more than 55 percent of the cases, and if
my quantum colleagues predict something else, they will be proved wrong, I think. (Pauli gets really
upset.)
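Bell's counting argument can be checked by brute force. A short Python sketch (an illustration) enumerates all eight deterministic programs and confirms the bound:

```python
# Enumerate every deterministic program mapping switches {1,2,3} to {red,green},
# assumed shared by both detectors, and compute the probability of equal colors
# when the two switches are chosen at random.
from itertools import product

probs = []
for program in product("RG", repeat=3):            # e.g. ('R', 'G', 'R')
    agree = sum(program[s1] == program[s2]
                for s1 in range(3) for s2 in range(3))
    probs.append(agree / 9)

print(min(probs), max(probs))                      # 0.555... (= 5/9) and 1.0
```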
Quantum mechanics, on the other hand, predicts that the overall probability of seeing the same colors for random switches is $(3\times 1 + 6\times\frac{1}{4})/9 = \frac{1}{2}$, which violates the inequality. Why is the probability equal to 1/4 in the case that we choose different values of the switches? Start
with the initial singlet state
$$|\psi\rangle = \frac{|\uparrow\rangle\,|\downarrow\rangle - |\downarrow\rangle\,|\uparrow\rangle}{\sqrt{2}}$$
and first make a measurement of the first particle (without a loss of generality). Also without a loss of generality, you obtain $|\uparrow\rangle$. This means that you may forget about all the contributions to the wavefunction that disagree with your current measurement. Only the contributions that start with $|\uparrow\rangle$ are to be kept. In other words, the wavefunction after the first measurement "collapses" to
$$|\psi'\rangle = |\uparrow\rangle\,|\downarrow\rangle.$$
You should not ask too many questions about the collapse itself. The only role of the wavefunction is to predict the probabilities of different outcomes of the measurement, and what you imagine
that the wavefunction is doing before the measurement or after the measurement is unphysical and
unmeasurable.
Now you also make the measurement of the second particle’s spin with respect to another axis that
differs from the z-axis by ϑ = 120◦ . You should remember that the probability that you measure “up”
with respect to this axis, if the spin of the second particle is "down" with respect to the z-axis, is
$$p = \left|d^{1/2}_{1/2,\,-1/2}(\vartheta)\right|^2 = \ldots = \sin^2(\vartheta/2)$$
For $\vartheta = 120°$, you obtain $p = \sin^2 60°$, i.e., $p = 3/4$. This is the probability that both particles will generate the same eigenvalue of the spin with respect to the two axes that differ by 120 degrees. Because of the opposite color conventions for the two detectors, the probability of getting the same colors is $1 - 3/4 = 1/4$, which completes the missing step in the calculation above.
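The full quantum prediction for the device can be reproduced numerically. In the following NumPy sketch (an illustration; the three switch positions are represented by measurement axes at 0°, 120°, and 240° in the x-z plane), the average probability of equal colors comes out as exactly 1/2:

```python
# Quantum prediction for the 3-switch device: switch k <-> measurement axis at
# angle 2*pi*k/3 in the x-z plane; "same color" <-> opposite spin eigenvalues
# (the two detectors use opposite color conventions).
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)    # the singlet

def proj(theta, s):                                   # projector on eigenvalue s = +-1
    return (np.eye(2) + s * (np.cos(theta) * sz + np.sin(theta) * sx)) / 2

angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
total = 0.0
for t1 in angles:
    for t2 in angles:
        p_same = sum(psi @ np.kron(proj(t1, s), proj(t2, -s)) @ psi
                     for s in (+1, -1))
        total += p_same / 9
print(total)    # 0.5 -- below Bell's bound of 5/9 ~ 0.556
```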
Experimental resolution
This looks like a serious disagreement between Prof. Bell on one side and quantum mechanics on the
other side. What do the experiments say? Well, the experiments answer a resounding “Yes” to the
quantum mechanical predictions including the 50 percent overall probability that the colors match for
random values of the switches. In other words, the experiments tell us that the hidden variables cannot exist, and even if we wanted to believe that they exist, they would have to behave in a very non-local way. Einstein, the author of relativity, would probably dislike such a non-locality.
Remaining questions?
The discussion in the lecture today should have convinced you that entanglement is a real and experimentally verified prediction of quantum mechanics. Moreover, this phenomenon is completely incompatible with any classical picture that describes the world in terms of "objectively existing" classical quantities that only interact locally. In some sense, quantum mechanics seems to act non-locally and this non-locality is experimentally proved. In terms of the "programs" we discussed previously, such a non-locality allows us to consider more general programs that determine the colors to be different in the cases 13 and/or 31 so that the overall probability that the colors agree may decrease below 55 percent.
The experiments are therefore incompatible with the existence of
an objective, uniquely determined "classical" state of the system prior to the
measurement.
A possible loophole is that there exists a classical, non-probabilistic description of the real world and the correlations in the experiments we discussed are caused by some superluminal, material, instant communication between the detectors. These would be real signals faster than light and they would contradict the special theory of relativity. They would indeed allow you to send actual information faster than light to the opposite side of our galaxy, at least in principle. From the viewpoint of other reference frames, such faster-than-light signals would go backward in time, and they would allow you to kill your grandfather before he met your grandmother – which would make the Universe inconsistent.
Such things should not happen and indeed, they do not happen in the real world. It is because in the real world, the wavefunction is probabilistic. No real signals ever propagate faster than light; just illusory signals based on wrong preconceptions about reality may seem to propagate superluminally but these can't be used to transmit any information. The predictions of quantum mechanics work and they are compatible with special relativity. In fact, quantum field theory (and its ramification, string theory) is a framework that incorporates both relativity as well as quantum mechanics.
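The last claim can be illustrated concretely: whatever axis the first experimenter measures, the unconditional density matrix of the distant particle remains maximally mixed, so the distant experimenter sees no change whatsoever. A minimal NumPy sketch of this no-signaling property:

```python
# No-signaling: average the collapsed states over the two outcomes of particle 1
# and trace particle 1 out; the result is I/2 for every measurement axis theta.
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)    # the singlet

for theta in np.linspace(0, np.pi, 5):                # axes chosen by experimenter 1
    rho2 = np.zeros((2, 2))
    for s in (+1, -1):
        P = (np.eye(2) + s * (np.cos(theta) * sz + np.sin(theta) * sx)) / 2
        phi = np.kron(P, np.eye(2)) @ psi             # unnormalized collapsed state
        R = np.outer(phi, phi).reshape(2, 2, 2, 2)
        rho2 += np.einsum('kikj->ij', R)              # trace over particle 1
    print(np.allclose(rho2, np.eye(2) / 2))           # True for every theta
```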
Bell’s original setup and Aspect’s experiments
I was cheating a bit concerning the history. The example with the detectors with 3 switches and 2 colors was not one that John Bell actually proposed in 1964. He originally proposed a slightly less entertaining arrangement but it also involved a spin-0 state made of two spin-1/2 fermions.
Let us say in advance that the French physicist Alain Aspect performed experiments very similar to Bell's original description – but with the electrons replaced by photons and the Stern-Gerlach apparatuses replaced by polarizers – in 1981 and 1982. His detectors were 13 meters apart and the polarizers were adjusted when the photons were already in flight. This guaranteed that no conventional information slower than light could have propagated between the detectors. In the last decade, Swiss and Austrian groups have extended the experiments using fiber optics and other gadgets to increase the distance between the detectors to around 10 kilometers. All these experiments confirmed the predictions of quantum mechanics.
Bell studied the correlation $P(\hat a, \hat b) = \langle\sigma_{\hat a}\,\sigma_{\hat b}\rangle$ of the two spins measured along the axes $\hat a$ and $\hat b$, where $\vartheta$ is the angle in between $\hat a$, $\hat b$. The expectation value in our state is easy to calculate:
$$\langle\sigma_{\hat a}\,\sigma_{\hat b}\rangle = \frac{\langle\uparrow|\langle\downarrow| - \langle\downarrow|\langle\uparrow|}{\sqrt{2}}\;\sigma_z^{(a)}\left(\cos\vartheta\,\sigma_z^{(b)} + \sin\vartheta\,\sigma_x^{(b)}\right)\frac{|\uparrow\rangle|\downarrow\rangle - |\downarrow\rangle|\uparrow\rangle}{\sqrt{2}} = -\frac{\cos\vartheta}{2} + 0 + 0 - \frac{\cos\vartheta}{2} = -\cos\vartheta = -\hat a\cdot\hat b$$
We have evaluated the expectation value of the operator in each of the four bra-ket combinations that follow from the distributive law. And we have also used the well-known matrix elements of the Pauli matrices with respect to $|\uparrow\rangle$ and $|\downarrow\rangle$: $\langle\uparrow|\sigma_z|\uparrow\rangle = 1 = -\langle\downarrow|\sigma_z|\downarrow\rangle$. The expectation value of the terms proportional to $\sin\vartheta$ never contributed.
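A quick numerical confirmation of the result (an illustrative sketch in the same conventions):

```python
# Check <sigma_a sigma_b> = -cos(theta) on the singlet.
import numpy as np

sz = np.array([[1.0, 0.0], [0.0, -1.0]])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

for theta in (0.0, np.pi / 3, np.pi / 2, 2 * np.pi / 3):
    corr = psi @ np.kron(sz, np.cos(theta) * sz + np.sin(theta) * sx) @ psi
    print(round(corr, 6), round(-np.cos(theta), 6))   # the two columns agree
```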
In a local hidden-variable theory, the result of the first measurement is some definite function $A(\hat a, \lambda) = \pm 1$ of the chosen axis $\hat a$ and the hidden variables $\lambda$, and similarly $B(\hat b, \lambda) = \pm 1$ for the second particle. Now we know that if we choose the axes identical, $\hat a = \hat b$, we must obtain the opposite spins by the conservation of angular momentum, which means that $B(\hat b, \lambda) = -A(\hat b, \lambda)$ and the correlation is
$$P(\hat a, \hat b) = -\int d\mu(\lambda)\, A(\hat a, \lambda)\, A(\hat b, \lambda)$$
where $d\mu(\lambda) = \rho(\lambda)\,d\lambda$ is an integration measure that determines the probability distribution for the hidden variables. We can also replace $\hat b$ by another axis $\hat c$ to write the same formula for $P(\hat a, \hat c)$ as well as the following difference:
$$P(\hat a, \hat b) - P(\hat a, \hat c) = \int d\mu(\lambda)\left[-A(\hat a,\lambda)A(\hat b,\lambda) + A(\hat a,\lambda)A(\hat c,\lambda)\right] = -\int d\mu(\lambda)\, A(\hat a,\lambda)A(\hat b,\lambda)\left[1 - A(\hat b,\lambda)A(\hat c,\lambda)\right]$$
In the last step, we used $A^2(\hat b, \lambda) = +1$. Next you should notice that
$$\left|A(\hat a,\lambda)A(\hat b,\lambda)\right| \le 1, \qquad \left|A(\hat b,\lambda)A(\hat c,\lambda)\right| \le 1 \quad\Rightarrow\quad 1 - A(\hat b,\lambda)A(\hat c,\lambda) \ge 0$$
By identifying the last term, we can finally write down "the" Bell's inequality
$$\left|P(\hat a, \hat b) - P(\hat a, \hat c)\right| \le 1 + P(\hat b, \hat c)$$
Is it satisfied by the result from quantum mechanics? That would mean that
$$\left|\cos\vartheta_{ab} - \cos\vartheta_{ac}\right| \le 1 - \cos\vartheta_{bc}.$$
However, this inequality (Bell's inequality) is easily violated in quantum mechanics (as well as by the experiments). For example, choose $\hat a, \hat b, \hat c$ in the same plane, with $\hat c$ between the other two axes, and $\hat a, \hat b$ orthogonal so that $\cos\vartheta_{ab} = 0$. Then the inequality says that
$$|\cos\vartheta_{ac}| \le 1 - \sin\vartheta_{ac}$$
which is actually violated for any $0 < \vartheta_{ac} < \pi/2$; for example, for $\vartheta_{ac} = 45°$ we obtain the wrong inequality $0.707 \le 0.293$. This means that the experimentally verified quantum mechanical prediction cannot be computed from a "classical" theory with local hidden variables – and most likely, other hidden variable theories fail, too.
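A small numerical scan (illustration only) shows how the inequality fails throughout this range:

```python
# The quantum result violates |cos(t)| <= 1 - sin(t) for all 0 < t < pi/2.
import numpy as np

for deg in (15, 30, 45, 60, 75):
    t = np.radians(deg)
    lhs, rhs = abs(np.cos(t)), 1 - np.sin(t)
    print(deg, round(lhs, 3), round(rhs, 3), lhs <= rhs)   # always False
```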
Second big part of the lecture: Decoherence and foundations
of QM
Quantum mechanics significantly differs from classical physics and its interpretation has been a controversial issue since the very beginning.
We will discuss the major interpretational frameworks today and the questions they want to address.
One of the most important recent insights from the 1980s that every up-to-date treatment of the
foundations of quantum mechanics must take into account is the so-called decoherence, and we will
dedicate special time to this phenomenon. Our discussion at the beginning of the lecture will be mostly
non-quantitative.
List of interpretations
The most important interpretations are the following ones:
• Shut up and calculate
• Copenhagen interpretation
• Interpretations based on a special role of consciousness
• Many-worlds interpretation
• de Broglie-Bohm (pilot wave) interpretation
• Transactional interpretation
• Consistent histories
The differences between them are sometimes subtle and philosophical in character. Among the
interpretations above, the Bohmian interpretation has become very awkward after Bell's inequalities were understood and experimentally shown to be violated: it seems to contradict the special theory of relativity, which prohibits the actual superluminal signals that would be necessary for the Bohmian picture to agree with observations of the entanglement. The interpretations with a special role of consciousness are obsolete
today because of our understanding of decoherence. The motivation for the transactional interpretation
remains obscure to nearly all physicists and we won’t discuss it at all.
Your instructor thinks that the Consistent Histories are the most satisfactory and modern inter-
pretation. In reality, they are just a small update of the old Copenhagen interpretation where some
of the subtle details are explained and decoherence is appreciated. The many-worlds interpretation is
popular with many active physicists, too, and Everett – who pioneered it – was also an early pioneer
of decoherence. The Copenhagen interpretation is still called the “orthodox” or “canonical” approach.
Let us look at all of these interpretations.
Shut up and calculate
Feynman's dictum "Shut up and calculate" captures the favorite approach of most active, technically oriented physicists. It is important to know how to use the mathematical tools, how to extract the predicted probabilities, and how to compare them with observations because these are the only physically meaningful and testable questions. A physicist should leave all other questions to philosophers, artists, and crackpots, Feynman argued.
Copenhagen interpretation
The Copenhagen interpretation – named after the city from which Niels Bohr influenced all of his
soulmates – is the classical interpretation. Max Born was the first one who figured out that the
wavefunctions are interpreted probabilistically. The Copenhagen interpretation states that “quantum
objects” (usually particles and microscopic systems) are described by quantum mechanics. “Classical
objects” (usually macroscopic ones) essentially follow the laws of classical physics. They can be used
to measure the properties of the “quantum objects”. The measurement is an interaction between a
“quantum object” and a “classical object” in which the wavefunction of the “quantum object” collapses
to one of the final states with a well-defined value of the measured quantity (such as the position of
the electron that reaches the screen). The probabilities of different outcomes are determined from
the wavefunction. The wavefunction can either be thought of as a “state of our knowledge” or an
actual “wave” in a configuration space that suddenly collapses, but asking about the physical origin
of the collapse is unphysical. Nevertheless, you may imagine that the evolution of the wavefunction
has two stages: the smooth evolution according to the Schrödinger equation; and the sudden collapse
induced by the act of measurement. This strange collapse is a topic that is generally studied as the
measurement problem.
The Copenhagen interpretation has been enough to explain and understand the results of all experiments that have ever been done (except for some experiments involving decoherence that we will mention at the end; decoherence has only been appreciated since the 1980s). However, it has the following
basic aesthetic flaws:
• no clear criterion for dividing the objects into "classical" and "quantum" ones is provided
• if someone provides a "cutoff" that divides the objects in this way, it is arbitrary and unnatural
• a related problem: even macroscopic objects should follow the laws of Nature, i.e. quantum mechanics, and it is not clear how a privileged basis is chosen (see the Schrödinger cat problem below)
• a very related problem: it is not really explained why classical physics is a limit of quantum physics even for the process of measurement
• if the wavefunction is thought to be real, the origin of the collapse is unexplained and no non-linearity that would be required for the collapse is found in the theory
Most of these problems are explained by decoherence. Let us look at the classic paradox.
Schrödinger cat
Schrödinger considered a cat under a system of hammers that were turned on whenever a decay of a neutron was detected. The fate of the cat depends on a random process described by quantum mechanics (a decaying particle). The wavefunction of the neutron is, after several minutes, described as a combination of $|\text{not decayed}\rangle$ and $|\text{decayed}\rangle$ and consequently, because these two states determine what will happen with the hammers, the cat is, according to the Schrödinger equation, found in the state
$$|\psi\rangle = \alpha\,|\text{dead}\rangle + \beta\,|\text{alive}\rangle.$$
In reality there are very many different states that describe the cat in detail but these two are enough.
When you look at the cat, you will either see it is alive or dead. You will never see that it is $0.6\,|\text{alive}\rangle + 0.8\,|\text{dead}\rangle$ even though it is a perfectly nice and normalized state. Does it mean that before you look, the cat was actually in the superposition, with a chance to be alive? When you see it is dead, did you actually kill the cat just by looking at it? Most importantly, what tells you that the states $|\text{dead}\rangle$ and $|\text{alive}\rangle$ may be results of a measurement while other combinations of these vectors can't? The last
problem is addressed by decoherence. The previous problems are probably left to your classes on moral
reasoning.
Transactional interpretation
This proposal by John Cramer tries to recycle the ideas of Wheeler and Feynman about particles going forward and backward in time and promote them into an interpretation of quantum mechanics, but no one
else understands how such a statement solves the measurement problem or any other problem discussed
above, and we won’t say anything else about the picture.
de Broglie-Bohm interpretation
In 1927, Prince Louis de Broglie proposed a non-probabilistic possible interpretation of quantum mechanics based on the "pilot wave". These ideas were re-invented and updated by David Bohm in 1952. Although the framework is trying to be deterministic and Einstein would have liked this feature, it is mathematically ugly (and does not offer any new predictions beyond orthodox quantum mechanics) so that even Einstein declared the Bohmian framework to be an "unnecessary superstructure".
According to Bohmian mechanics, there objectively exist BOTH a point-like particle as well as the wavefunction – interpreted as an actual wave. Because the particle is localized at a point, there
is no problem to see that it will be observed at one point. On the other hand, the wavefunction is a
“pilot wave” that adds new forces acting on the particle. The corresponding “quantum potential” can
be defined in such a way that the particle will be “repelled” from the interference minima, and the
probabilistic distribution of the particle at time $T$ will agree with $|\psi|^2$ if it agreed in the past. The precise guiding equation acts as follows:
$$\frac{d\vec x_{\rm clas}}{dt} = \frac{\hbar}{m}\,{\rm Im}\,\frac{\psi^*_{\rm clas}\,\nabla\psi_{\rm clas}}{\psi^*_{\rm clas}\,\psi_{\rm clas}}$$
where we emphasized that both the wave ψ as well as the position x are ordinary classical degrees
of freedom. You see that the right hand side is nothing else than the “velocity” computed from the
probability current you should remember from the discussion about the continuity equation. The initial
position of the particle must however be chosen to be random, according to the |ψ|2 distribution, and
no one has explained how this occurs.
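To see the guiding equation in action, here is a toy integration for a free one-dimensional Gaussian packet (a sketch assuming $\hbar = m = \sigma_0 = 1$, with the velocity field taken from the known analytic solution). The Bohmian trajectories simply scale with the spreading width $\sigma(t)$ of the packet:

```python
# Toy de Broglie-Bohm trajectories for a free Gaussian packet (hbar = m = 1,
# initial width sigma0 = 1). For psi ~ exp(-x^2 / (4*(sigma0^2 + i*hbar*t/(2m))))
# the guiding equation gives v(x,t) = x * hbar^2 * t / (4 m^2 sigma0^4 + hbar^2 t^2).
import numpy as np

hbar = m = sigma0 = 1.0

def velocity(x, t):
    return x * hbar**2 * t / (4 * m**2 * sigma0**4 + hbar**2 * t**2)

dt, steps = 1e-3, 5000
x0 = np.linspace(-2, 2, 9)     # initial positions (in reality |psi|^2-distributed)
xs = x0.copy()
for k in range(steps):
    xs += velocity(xs, k * dt) * dt               # first-order Euler step

t = steps * dt
sigma_t = np.sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)
print(np.allclose(xs, x0 * sigma_t, rtol=1e-2))   # True: x(t) = x0 * sigma(t)/sigma0
```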
It is not explained what happens with the wavefunction or the pilot wave when the particle is absorbed. The framework has some problems describing multi-particle quantum mechanics, even bigger problems describing the spin, and very serious problems with being compatible with special relativity (which essentially requires locality). These are the main reasons why the Bohmian interpretation is not
treated seriously by most physicists. On the other hand, it is the most popular interpretation among
many philosophers who have philosophical reasons to believe that the world can’t be probabilistic, and
no amount of experiments will convince these philosophers that their assumption is incorrect.
Consistent Histories
Gell-Mann, Hartle, Omnes, and others promoted the interpretation based on consistent histories. It states that all types of questions that are meaningful in physics may be phrased as questions about the probabilities of different alternative histories. A history is a sequence of projectors $P_i$ requiring that a quantity at time $t_i$ satisfied a certain condition. The history $H_A$, $A = 1 \ldots N$, is defined as
$$H_A = P_1(t_1)\,P_2(t_2)\cdots P_n(t_n), \qquad t_1 < t_2 < \ldots < t_n,$$
i.e. as a product of projection operators ($P^2 = P$) expressing that a condition ($P_1$) was satisfied at time $t_1$, another condition was satisfied at time $t_2$, and so forth. Its probability to occur, given the initial density matrix $\rho$, is computed as
$${\rm Prob}(A) = {\rm Tr}\left(H_A^\dagger\,\rho\,H_A\right)$$
where we have adopted the Heisenberg picture (the operators, including the projection operators, depend on time). Note that if you write $\rho = |\psi\rangle\langle\psi|$, you essentially get the usual expression $|\langle\psi|\phi\rangle|^2$ where $|\phi\rangle$ is a basis vector determined by the projectors. Two alternative histories must be consistent, which essentially means that they are orthogonal in the following sense:
$${\rm Tr}\left(H_B^\dagger\,\rho\,H_A\right) = 0 \qquad {\rm for}\ A \ne B$$
This condition is necessary for the probability of the combined history "A or B" to be additive, ${\rm Prob}(A\ {\rm or}\ B) = {\rm Prob}(A) + {\rm Prob}(B)$, and for similar identities to hold. Because of decoherence explained below, we are only allowed to define "meaningful" histories including the states "dead cat" or "alive cat" but not their combinations.
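A tiny qubit example (an illustration with trivial dynamics, so the Heisenberg-picture projectors are constant in time) shows both the probability formula and the role of the consistency condition: a z-then-z family of histories is consistent, while an x-then-z family built on the same initial state is not:

```python
# Toy consistent-histories check for one qubit with trivial dynamics.
# Prob(A) = Tr(H_A^dag rho H_A); consistency: Tr(H_B^dag rho H_A) = 0 for A != B.
import numpy as np

up = np.array([1.0, 0.0])
right = np.array([1.0, 1.0]) / np.sqrt(2)
Pz = {+1: np.outer(up, up), -1: np.eye(2) - np.outer(up, up)}
Px = {+1: np.outer(right, right), -1: np.eye(2) - np.outer(right, right)}
rho = np.outer(up, up)                                 # initial state |up><up|

def history(*projectors):                              # H_A = P_1(t_1) P_2(t_2) ...
    H = np.eye(2)
    for P in projectors:
        H = H @ P
    return H

# z-then-z histories: a consistent family
HA, HB = history(Pz[+1], Pz[+1]), history(Pz[-1], Pz[-1])
print(np.trace(HB.conj().T @ rho @ HA))                # 0.0 -> consistent
print(np.trace(HA.conj().T @ rho @ HA))                # 1.0 = Prob(up, then up)

# x-then-z histories: NOT consistent for this rho
HA, HB = history(Px[+1], Pz[+1]), history(Px[-1], Pz[+1])
print(np.trace(HB.conj().T @ rho @ HA))                # 0.25 != 0 -> interference
```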
Decoherence
The primary questions that decoherence answers are
• where is the boundary between the quantum world described by interfering wavefunctions and
the classical world that follows our intuition?
• how are the preferred basis vectors of macroscopic objects – such as the dead cat and the alive
cat – chosen?
Decoherence is somewhat analogous to friction and it introduces an “arrow of time” (an asymmetry
between the past and the future). It requires the system to be “open”. In other words, we must look
not only at the object itself with the Hamiltonian $H_c$ (collective), but also at the environment with $H_e$ and their interaction Hamiltonian $H_i$:
$$H = H_c + H_e + H_i$$
Decoherence is the loss of the information about the relative phases – or, equivalently, the vanishing of
off-diagonal elements of the density matrix ρc traced over the “uninteresting” environmental degrees
of freedom.
These environmental degrees of freedom are delicate because the spectrum is essentially continuous. For example, if you have $N$ atoms and each of them has $n$ states, then there are $n^N$ states in total, and if the energy difference between the lowest and the highest energy state is $NE_0$ and we assume that the distribution is essentially random, then the spacing is of order
$$\langle\Delta E\rangle = \frac{N E_0}{n^N}$$
which is usually very tiny.
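Plugging in toy numbers (an illustration: N = 100 atoms with n = 2 states each and an assumed atomic scale E₀ = 1 eV) shows just how tiny:

```python
# Level spacing <Delta E> = N*E0 / n**N for an assumed toy environment:
# N = 100 two-state atoms, atomic energy scale E0 = 1 eV.
from math import log10

N, n, E0 = 100, 2, 1.0
spacing_log10 = log10(N * E0) - N * log10(n)
print(spacing_log10)          # about -28.1, i.e. <Delta E> ~ 1e-28 eV
```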
Consider now a measured system in the superposition $c_1|\psi_1\rangle + c_2|\psi_2\rangle$ and the apparatus in a neutral state $|\phi_0\rangle$. Immediately after the measurement, the apparatus is affected by the result and the final state is
$$c_1\,|\psi_1\rangle|\phi_1\rangle + c_2\,|\psi_2\rangle|\phi_2\rangle.$$
Actually, our particle was probably absorbed and we should treat it as a part of the apparatus:
$$|\Psi\rangle = c_1\,|\phi_1\rangle + c_2\,|\phi_2\rangle.$$
So far we have still neglected the environmental degrees of freedom. They are not yet coupled to our apparatus and we may assume their state to be a universal $|\phi_e\rangle$ so that the total state is
$$\left(c_1\,|\phi_1\rangle + c_2\,|\phi_2\rangle\right)|\phi_e\rangle.$$
What's important is that there are interactions between the apparatus and the environment, governed by the term $H_i$ of the Hamiltonian. This will cause the environment to evolve depending on the state of the apparatus. The apparatus and the environment start to be entangled:
$$c_1\,|\phi_1\rangle|\phi_{e,1}\rangle + c_2\,|\phi_2\rangle|\phi_{e,2}\rangle.$$
The process is analogous to dissipative forces (friction). When and if the states of the apparatus $|\phi_1\rangle$ and $|\phi_2\rangle$ are sufficiently different – for example they contain the absorbed particle located at two different places that differ by several atomic radii – they will also affect the environment differently and the states $|\phi_{e,1}\rangle$ and $|\phi_{e,2}\rangle$ will actually become (almost) orthogonal; for example, they contain an infrared photon (that carries heat) in two different regions, which guarantees that
$$\langle\phi_{e,1}|\phi_{e,2}\rangle \approx 0.$$
Just as in classical statistical physics, we rightfully assume that we can't measure all environmental degrees of freedom. So we must partially trace over them. Thus
$$\rho_c = {\rm Tr}_e\,|\Psi\rangle\langle\Psi| \approx |c_1|^2\,|\phi_1\rangle\langle\phi_1| + |c_2|^2\,|\phi_2\rangle\langle\phi_2|.$$
What is this density matrix? Last time we understood that this density matrix is not pure because its eigenvalues differ from 0, 1. It does not describe a superposition of the quantum states $|\phi_1\rangle$ and $|\phi_2\rangle$. Instead, it describes a system that has the probability $p_1 = |c_1|^2$ to be in the state $|\phi_1\rangle$, and the probability $p_2 = |c_2|^2$ to be in the state $|\phi_2\rangle$. Here, $p_1, p_2$ are the eigenvalues of the density matrix. You see that for $p_1 \ne p_2$, there is a unique basis of eigenvectors (up to a phase/normalization of each of them, of course). For $p_1 = p_2$ you actually see that the density matrix is diagonal in all bases. In reality, it is difficult to guarantee that the density matrix will be exactly a multiple of the identity matrix. But you can try to design an experiment in which the identity of the preferred states will remain ambiguous by making all the probabilities exactly equal.
You see that by taking the interaction with the environment into account, we created a diagonal density matrix whose entries $p_1, p_2$ have a probabilistic interpretation – but a probabilistic interpretation similar to classical physics. Interference and coherence are gone. A privileged basis of vectors that are "accessible to consciousness" is picked. The privileged basis vectors are, in a sense, the states that are able to "imprint themselves" faithfully onto the environment: the states able to "self-reproduce". And you see that dynamics and the interactions with the environment determine how this occurs. You no longer need philosophy about consciousness or artificial bureaucratic rules that determine the boundary between quantum mechanics and the range of validity of classical intuition. The detailed properties of the actual Hamiltonian encode this information.
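The whole mechanism fits in a few lines. The following NumPy sketch (a toy model: a two-state apparatus coupled to an environment whose two branches overlap as $\exp(-t/t_d)$, in arbitrary units) shows the off-diagonal element of $\rho_c$ dying away while the diagonal probabilities $p_1, p_2$ stay put:

```python
# Toy decoherence: apparatus states |phi_1>, |phi_2> entangled with environment
# branches whose overlap decays as <e1|e2> = exp(-t/td). Tracing out the
# environment kills the off-diagonal element of rho_c.
import numpy as np

c1, c2 = 0.6, 0.8                                 # amplitudes: p1 = 0.36, p2 = 0.64
td = 1.0                                          # decoherence time, arbitrary units

for t in (0.0, 1.0, 3.0, 10.0):
    ov = np.exp(-t / td)                          # overlap <e1|e2>
    e1 = np.array([1.0, 0.0])
    e2 = np.array([ov, np.sqrt(1 - ov**2)])       # unit vector with <e1|e2> = ov
    Psi = c1 * np.kron([1.0, 0.0], e1) + c2 * np.kron([0.0, 1.0], e2)
    R = np.outer(Psi, Psi).reshape(2, 2, 2, 2)    # indices: a e a' e'
    rho_c = np.einsum('ikjk->ij', R)              # trace over the environment
    print(t, rho_c[0, 0].round(3), rho_c[0, 1].round(3))
# diagonal stays (0.36, 0.64); off-diagonal 0.48*exp(-t/td) -> 0
```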
Time needed for decoherence
So how much time does it take for realistic systems before the coherence is lost, the privileged states are chosen, and the classical interpretation of their probabilities becomes applicable? Assume that the inner product of the different environmental states goes like
$$\langle\phi_{e,1}|\phi_{e,2}\rangle \sim \exp\left(-\frac{t}{t_d}\right).$$
To be specific, consider a pendulum with momentum p that is affected by the force F as well as the
friction γp where 1/γ is the typical time of damping. The equation for the momentum says
$$\frac{dp}{dt} = F - \gamma p$$
With these definitions, it turns out that the time for decoherence is
$$t_d = \frac{\hbar^2}{\gamma\, m\, kT\, (\Delta x)^2}.$$
The more friction (the stronger interaction with the environment) you have, the faster the decoherence operates. The higher the temperature, the faster it becomes. The more separated in space the states become, the faster your system decoheres. The more massive a system you have, the faster the process goes. The formula above is rather universal; it does not really matter what kind of environment you consider. Incidentally, once the off-diagonal elements decrease to a small fraction of the original value, they continue to decrease at a fascinating rate
$$\sim \exp\left(-A\,e^{\,t/t'_d}\right)$$
where $A, t'_d$ are constants. It's because even one emitted photon or another element of the environment
would be enough to make the system decohere exponentially, but because the number of such “photons”
grows with time, the decrease is really exponentially exponential. It is superfast.
A few numbers
For a pendulum of mass $m = 10$ grams, the friction time $1/\gamma = 1$ minute, and $\Delta x = 1$ micron, the time of decoherence turns out to be
$$t_d \approx 1.6\times 10^{-26}\ {\rm sec}.$$
The case we considered – in which we measure the position and the states are separated spatially – is
the most typical one but not the only one possible. You may replace space by voltage, for example.
The analogy works as follows:
$$x \to q = CV, \qquad p \to j, \qquad m \to L, \qquad \gamma \to R/L \qquad\Longrightarrow\qquad t_d = \frac{\hbar^2}{R\,kT\,C^2V^2}$$
You could calculate the decoherence time for a piece of dust colliding with something as pathetic as the cosmic microwave background, and it would still need only about a nanosecond to decohere. The lesson is completely clear. Decoherence occurs almost instantly for all ordinary macroscopic (and even mesoscopic)
systems. There are three basic exceptions in which the coherence may be preserved for macroscopic
systems:
• superfluids
• superconductors
• photons
In the first two cases, the system is macroscopically described by a field that looks much like the wavefunction: we "see" quantum mechanics in the macroscopic world.
Quantum computers – to be discussed later – are meant to follow the laws of quantum mechanics, with all of their complex amplitudes and interference, for long periods of time. Decoherence is a killer. The technological and, indeed, also the physical challenge is to minimize the decoherence while preserving the necessary interactions between the pieces of the quantum computer.