Cornell 24may24 Quantum Computers
CHRIS FERRIE
This work is licensed under CC BY-NC-ND 4.0. To view a
copy of this license, visit creativecommons.org/licenses/by-
nc-nd/4.0/.
Contents
Foreword i
The Quantum Age 2
Whence Quantum Computing? 12
Myth 1: Nobody Understands This Quantum Stuff 29
Myth 2: Qubits Are Zero And One At The Same Time 47
Myth 3: Quantum Computers Try All Solutions At Once 63
Myth 4: Quantum Computers Communicate Instantaneously 78
Myth 5: Quantum Computers Will Replace Digital Computers 91
Myth 6: Quantum Computers Will Break The Internet 105
Myth 7: Quantum Computing Is Impossible 119
Foreword
Yet in all that time I never thought to write, much less did I
actually write, a pithy book called “What You Shouldn't
Know About Quantum Computers.” My colleague Chris
Ferrie did. He's the same guy who coauthored the surprise
bestseller “Quantum Computing for Babies.” Now he's back,
with something for those babies to read when they're
slightly older.
Scott Aaronson
Austin, Texas, 2024
The Quantum Age
The nineteenth century was known as
the machine age, the twentieth century will go
down in history as the information age. I
believe the twenty-first century will be the
quantum age.
—Paul Davies
of computer that are carrying out the steps of algorithms
right now.)
know, roughly speaking, the equations we would need to
solve to determine what would happen when a hypothetical
drug molecule interacted with a biological molecule. There
are even algorithms to solve these equations. Many of the
world’s supercomputers chug away, running inefficient
programs to solve such equations. But it’s just too hard—so
we “solve” the equations with genetically modified mice
instead.
Nowadays, many algorithms are phrased as steps that
change sequences of complex numbers rather than 1s and
0s. These so-called quantum algorithms could be performed
by hand or using a digital computer, but it would take a long
time. A quantum computer is a special-purpose device used
to carry out these steps directly and efficiently.
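The idea that a quantum algorithm is "steps that change sequences of complex numbers" can be made concrete with a short sketch in plain Python (my choice of language; the book names none). A qubit's state is just a pair of complex numbers, and one step of a quantum algorithm is a small matrix multiplication. The names `apply_gate`, `H`, and `zero` are illustrative, not from any real quantum library.

```python
# Illustrative sketch: a digital computer "running" one step of a
# quantum algorithm by updating a list of complex numbers.
import math

def apply_gate(gate, state):
    """Multiply a 2x2 matrix of complex numbers by a 2-entry state vector."""
    return [
        gate[0][0] * state[0] + gate[0][1] * state[1],
        gate[1][0] * state[0] + gate[1][1] * state[1],
    ]

# The Hadamard gate, a standard single-qubit operation.
s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]

zero = [1 + 0j, 0 + 0j]    # the state |0>
plus = apply_gate(H, zero) # H|0>: both amplitudes become ~0.707
print(plus)
```

For one qubit this is instant, but the lists double in length with each added qubit, which is why doing it by hand or on a digital computer quickly becomes slow; a quantum computer carries out the same steps natively.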
downloaded it on. The most interesting thing to imagine is
what those quantum programs might look like—after all,
when digital algorithms were being invented and the early
computing machines were built to implement them, no one
could have imagined that programs for future versions of
those machines would exist to write blogs, share videos,
perform bank transactions, and all the other things we take
for granted. We know quantum computers will be able to
solve some important problems for us, but what we will
eventually have them do for us is unimaginable today
because human foresight is rather poor.
out what time to catch the bus to arrive early for that
meeting, adding up the values in a spreadsheet, and so on.
Less obvious examples that are really just hidden math
problems are recognizing faces in a digital photo, formatting
words in a document, and seamlessly showing two people’s
faces to each other on other sides of the world in real time.
The QPU
Calculations involving the multiplication and addition of
complex numbers are very time-consuming for a CPU. As you
now know, these kinds of calculations are essential for
solving problems in quantum physics, including simulating
chemical reactions and other microscopic phenomena. It
would be convenient for these kinds of calculations if a
quantum processing unit (QPU) were available. These are
confusingly called quantum computers, even though they
are chips that are sent very specific calculations by a CPU. I predict
they will eventually be called just QPUs.
What will the future QPU in your computer do? First of all,
we could not have guessed even ten years ago what we’d be
doing today with the supercomputers we all carry around in
our pockets. (Mostly, we are applying digital filters to
pictures of ourselves, as it turns out.) So, we probably can’t
even conceive of what QPUs will be used for ten years from
now. However, we do have some clues as to industrial and
scientific applications.
At the Quantum Algorithm Zoo (quantumalgorithmzoo.org),
65 problems (and counting) are currently listed that a QPU
could solve more efficiently than a CPU alone. Admittedly,
those problems are abstract, but so are the detailed
calculations that any processor carries out. The trick is in
translating real-world problems into the math problems we
know a QPU could be useful for. Not much effort has been
put into this challenge simply because QPUs didn’t exist until
recently, so the incentive wasn’t there. However, as QPUs
start to come online, new applications will come swiftly.
the physics at the creation of the universe or the center of a
black hole, and who knows what we will find there.
The takeaway
Qubits, superposition, entanglement, parallelism, and other
quantum magic you’ve read about elsewhere are not useful
concepts to think about if you only want a five-minute
summary of quantum computers. The basic thing you need
to know about QPUs is the same thing you know about
GPUs — they are special-purpose calculators that are good at
solving a particular kind of mathematical problem.
If, at some point, you end up with a job title that has the
word quantum in it, it will probably be a software job (much
like there are 20 software engineers for every computer
hardware engineer today). The most challenging problem a
Quantum Solutions Engineer might face is in translating the
calculations their business currently performs into problems
that can be outsourced to a QPU. They may not even use or
need to understand concepts like superposition and
entanglement! So, you have a pass if you’d like to be spared
the details.
in the Quantum Dark Ages, where myth and superstition run
rampant.
Whence Quantum
Computing?
“History is not the past but a map of the past,
drawn from a particular point of view, to be
useful to the modern traveller.”
— Henry Glassie
System crucial for satellite navigation. Yet, even this
remarkable application is not standalone—it also requires
quantum technology for accurate functioning.
machinery to the digital, information-driven age. Now, it is
driving us into the quantum age.
As Planck was doing his calculations, he discovered that if
energy has some smallest unit, the formulas worked out
perfectly. This was a wild guess he made in desperation, not
something that could have been intuited or inferred. But it
deviated from classical physics at the most fundamental
level, seemingly rendering centuries of development useless,
so even Planck himself didn’t take it seriously at first. Indeed,
it took many years before others began to pay attention and
many more after that before the consequences were fully
appreciated. Perhaps not surprisingly, one of those who did
grasp the importance early on was Albert Einstein.
photon hits an electron, if its energy (determined by its
frequency) is sufficient, it can overcome the energy binding
the electron to the material and eject it. Quantization had
taken hold. This marked a profound transition in physics,
which is still the source of confusion and debate today.
Before photons, light was argued to be either a wave or a
particle. Now, it seems that it is both—or neither. This came
to be known as wave-particle duality.
calculate the energies of these spectral lines and brought the
atom into the realm of quantum physics. He presented the
model formally in 1913.
In contrast to matrix mechanics, Schrödinger's picture was
called wave mechanics. There was a brief debate about which
of the two alternatives was correct. However, Paul Dirac
showed that both are equivalent by axiomatizing the theory,
demonstrating that a simple set of principles can derive all
that was known about it. He also combined quantum
mechanics with special relativity, leading to the discovery of
antimatter and paving the way for further developments in
quantum field theory, which ultimately led to the so-called
Standard Model of Particle Physics that supported the
seemingly unending stream of particle discoveries in high-
energy particle accelerator experiments.
where the two theoretical fields of physics and computing
converged. But to appreciate it, we must tell the other half of
the quantum computing backstory.
physics terms. Indeed, anything your smartphone can do, a
large enough system of levers and pulleys can do. It’s just
that your computer can do it much faster and more reliably.
A quantum computer, on the other hand, will function
differently at both the device level and at the abstract level.
In 1945, John von Neumann expanded on Turing's work and
proposed the architecture that most computers follow today.
Known as the von Neumann architecture, this structure
divides the computer into a central processing unit (CPU),
memory, input devices, and output devices. Around the same
time, Claude Shannon was thinking about the transmission
of information. In work that birthed the field of information
theory, Shannon introduced the concept of the “bit,” short
for binary digit, as the fundamental unit of information. A bit
is a variable that can take on one of two values, often
represented as 0 and 1, which we will meet again and again
in this book.
Fast forward to the ‘70s, and the field of computer science
was in full swing. Researchers devised more nuanced notions
of computation, including the extension of what was
“computable” to what was computable efficiently. Efficiency
has a technical definition, which roughly means that the time
to solve the problem does not “blow up” as the problem size
increases. A keyword here is exponential. If the amount of
time required to solve a problem compounds like investment
interest or bacterial growth, we say it is inefficient. It would
be computable but highly impractical to solve such a
problem. The connection to physics emerged through what
was eventually called the Extended Church-Turing Thesis,
which states that a Turing machine can not merely simulate
any physical process but can do so efficiently.
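To see what "blow up" means here, a tiny sketch in plain Python (my choice of language; the function names are illustrative) comparing a linear step count with an exponential one:

```python
# Illustrative only: how step counts scale as the problem size n grows.

def linear_steps(n):
    # An efficient algorithm: work grows in step with the problem size.
    return n

def exponential_steps(n):
    # An inefficient algorithm: work compounds like interest or bacteria.
    return 2 ** n

for n in (10, 20, 30):
    print(n, linear_steps(n), exponential_steps(n))
# By n = 30 the exponential count already exceeds a billion steps,
# while the linear count is still just 30.
```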
A brief history of quantum computing
Quantum computing was first suggested by Paul Benioff in
1980. He and others were motivated by the aforementioned
interest in the physical limitations of computation. It became
clear that the mathematical models of computation did not
account for the laws of quantum physics. Once this was
understood, the obvious next step was to consider a fully
quantum mechanical model of a Turing Machine. Parallel to
this, Richard Feynman lamented that it was difficult to
simulate quantum physics on computers and mused that a
computer built on the principles of quantum physics might
fare better. At the same time, in Russia, Yuri Manin also
hinted at the possibility of quantum computers, both noting
their potential access to exponential spaces and the difficulty
of coordinating such. However, the idea remained somewhat
nebulous for a few years.
computer can solve in one run of the algorithm what might
take a conventional computer a number of runs that grows as
the problem size gets larger. It’s an admittedly contrived
problem, but it is quite illustrative, and the so-called
Deutsch-Jozsa algorithm will be discussed later.
exponentially, while the quantum algorithm requires, at
most, a linear number of steps.
and, not long after, entire families of codes to protect
quantum data from errors. While promising, these were still
toy examples that worked in very specific instances. It wasn't
clear whether any required computation could be protected.
correction is not perfect, even for digital electronics. (Blue
Screen Of Death, anyone?) Although quantum error
correction demonstrated that the most common errors
occurring during the execution of these instructions can be
corrected, eventually, rare errors will happen, and those will
spoil the computation. Imagine you have a leak in the roof
and a bucket that can catch 99% of the water. Sounds great,
but eventually, the 1% that’s missed by the bucket will flood
the house. What you need is a bucket and a mop. With
quantum computer errors, we had the bucket but not the
mop.
And that was it. Before the turn of the century, all the pieces
were in place, and we just needed to wait until someone built
the damn thing! But here we are, decades later, without a
quantum computer — what gives? Is quantum technology a
pipe dream? Is it too dangerous to build? Does it require
access to other dimensions? And, just how exactly is the
thing supposed to work? It’s time we answered the real
questions.
Myth 1: Nobody
Understands This
Quantum Stuff
“Nobody understands quantum physics.”
— Richard Feynman
“Those who are not shocked when they first come across
quantum mechanics cannot possibly have understood it.”
— Niels Bohr
“I do not like it, and I am sorry I ever had anything to do with
it.”
— Erwin Schrödinger
While our precision engineering and control at the
microscopic scale was a continuous process of improvement,
there is still a clear distinction between first-generation
quantum technologies, which you will learn about in this
chapter, and second-generation quantum technologies, which
include quantum computers. Once we understood the fine
structure of light and matter, many new paths in
understanding and engineering opened up. Yet, these did not
require the manipulation of individual atoms or photons to
discover, nor did they need access to the fundamental
constituents to exploit. Now, we can control the world down
to individual atoms. It’s still quantum physics, but it
provides us with more possibilities—most of which we
probably don’t even know about yet!
would allow Gordon Ramsay to require customization of
reality TV cakes to an unprecedented degree, with demands
on texture, taste, and appearance beyond recognition as
food. Controlling individual atoms could allow us to design
the world from scratch, producing things beyond our
imagination and unrecognizable to an intuition trained in
a classical world.
Transistors
Arguably, the most important technological consequence of
quantum physics is the mighty transistor. If you are sitting
on your mobile phone right now, you are currently sitting on
a billion of these now-tiny devices. But they weren’t always
tiny, and the story of this technology is at least as old as
quantum theory.
others. Classical physics could not explain this behavior, nor
could it aid in exploiting these properties. Quantum physics
provided the needed theoretical framework to understand
these materials. The discrete energy levels of Bohr’s atomic
theory generalize to the so-called band theory of solids.
Instead of the specific energy levels for electrons in a single
atom, bulk materials have “bands” and “gaps” in how
electron energy can be organized in materials. Band theory
suggests that conductors have many energy levels for
electrons to move into, which allows electric current to flow
easily. Insulators, on the other hand, have large “band gaps,”
preventing electron movement. Semiconductors have a small
band gap that electrons can cross under certain conditions.
The transistor was invented in 1947 by John Bardeen,
Walter Brattain, and William Shockley at Bell Labs. While
“big” compared to today’s transistors, it was still only the size
of a coin. Its purpose was to amplify electrical signals to
replace the bulky and much less efficient vacuum tube. By
using two closely spaced metal contacts on the surface of a
small chunk of semiconductor material, the transistor could
regulate electric current by exploiting the band structure
properties of the material. After the initial demonstration,
progress was rapid. In addition to replacing vacuum tubes
as a necessary component in electric circuits, transistors
also replaced vacuum tubes as computer switches. By 1954,
the first “all-transistor” computer was built, and the rest, as
they say, is history.
Lasers
A laser used to be a LASER (Light Amplification by
Stimulated Emission of Radiation), but it is now so
ubiquitous that it's both a noun and a verb. While you don’t
want to be arbitrarily lasered, whenever you scan a barcode
at the supermarket, for example, you are harnessing the
power of lasers.
levels. When the electron changes levels, it absorbs or emits
a photon. The energy of the photon is exactly the difference
in the energy levels. Spontaneous emission happens when
atoms in high-energy states randomly drop to low-energy
states. All the light you see, and all the light ever seen before
the 20th century, was due to atoms randomly changing
energy levels. Einstein suggested that an atom in an excited
state (with its electron at a high energy level) could be
stimulated to drop to a lower energy state with a photon
matching the energy level difference, thereby creating two
identical photons.
Just like the transistor, the laser has found many uses.
Today, lasers are used in a wide variety of fields, from
medicine and telecommunications to manufacturing and
entertainment. They are used to cut steel, perform delicate
eye surgeries, carry information over fiber-optic cables, read
barcodes, read and write music and video onto plastic discs,
and create mesmerizing light shows. Our modern world
could not exist without the laser, which will continue to pay
dividends when applied to second-generation quantum
technology.
Atomic clocks
If you've ever used GPS (Global Positioning System) to
navigate, which is nearly impossible to avoid unless you carry
no devices and drive yourself around in a twenty-year-old
vehicle, then you've indirectly used an atomic clock. Since it
has the word “atom” right in it, you know it has something
to do with quantum physics.
subtle but reveal what was called the fine and hyperfine
structure of atoms. In the 1930s, Isidor Rabi developed the
technique of magnetic resonance, enabling the precise
measurement of these features.
telecommunications networks like the internet to testing the
predictions of Einstein's theory of relativity. However,
perhaps their most well-known application is in the Global
Positioning System. Each of the 24 satellites in the GPS
constellation carries multiple atomic clocks on board. The
tiny differences in the time signals sent by these clocks,
caused by their different distances from the receiver, allow
the receiver's position to be triangulated within a few meters.
Measuring distances is hard, but since light has a constant
speed, distance can be inferred by how long it takes to
travel—provided you can accurately measure time.
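As a back-of-the-envelope sketch of that inference (plain Python, with names of my own choosing): since the speed of light is fixed, any timing error translates directly into a distance error, which is why GPS satellites carry atomic clocks.

```python
# Distance from light travel time: distance = speed * time.

C = 299_792_458  # speed of light in meters per second

def distance_m(travel_time_s):
    """Infer distance (in meters) from a signal's travel time (in seconds)."""
    return C * travel_time_s

print(distance_m(1e-6))  # ~300 m of position error per microsecond of clock drift
print(distance_m(1e-9))  # ~0.3 m per nanosecond
```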
One of the hallmarks of quantum physics was the discovery
of spin, a property internal to subatomic particles that forces
them to act like tiny magnets. Individually, they are very
weak and, when surrounded by others aligned in random
directions, are impossible to detect. However, they will align
themselves with a strong enough magnet, and that’s where
the giant superconducting coil magnets of an MRI machine
come in.
Without understanding the spin properties of atomic nuclei,
the development of MRI would not have been possible. Today,
MRI is used globally to diagnose a myriad of conditions, from
brain tumors to joint injuries, showcasing yet another
practical application of quantum physics in our everyday
lives.
directions. The PET scanner detects these gamma rays,
infers the source, and hence maps out the journey of the
radioactive material within the body.
we use to navigate our world to the medical technologies that
help diagnose and treat illnesses to the mathematical models
that power our financial systems—the principles of quantum
mechanics are an integral part of our intentionally
engineered modern world. And this doesn't even touch on the
second-generation quantum technologies that are on the
horizon. As our control of quantum systems continues to
improve, more applications will come, so this all raises the
question: what exactly don’t we understand about quantum
physics?
Quantum teleology
Quantum physics is almost always taught chronologically.
Indeed, I just did that in the previous chapter. You read
about a long list of 20th-century scientific heroes who
uncovered the wild and untamed world behind our fingertips.
The story had modest roots in Planck’s 1900 hypothesis that
energy is discrete. Though we didn't need to make it that far
for the purpose of introducing quantum computers, the
standard tale of quantum physics usually crescendos with
John Bell’s work on “spooky” entanglement in the 1960s.
Today, as the story goes, we are on the cusp of the yet-to-be-
written second quantum revolution.
graduate students to blindly turn the mathematical crank to
make predictions about newer and more extreme
experiments. It is often said that generations of physicists
would “shut up and calculate” to earn their degrees and
professorships to eventually repeat the program again with
the next cohort. The wild horse of quantum physics had
apparently been stabled but not tamed.
you come to “understand” how to ride it through feel and
experience.
Quantum physics is a branch of science that describes
highly isolated systems—things that don’t interact randomly
with other stuff around them. Traditionally, these are small,
like atoms, but now we can engineer artificial systems under
high isolation. Anything that is extremely isolated requires
quantum physics to be described accurately. The
information such things encode is quantum information. If
you attempt to use classical physics or classical information
to make predictions or statements about such things, you
may end up being very wrong.
inability to “understand” the point of TikTok does not stand
in the way of it being a successful app, our lack of
“understanding” quantum physics does not stand in the way
of building quantum computers.
Myth 2: Qubits Are
Zero And One At The
Same Time
“Well, some go this way, and some go
that way. But as for me, myself,
personally, I prefer the shortcut.”
— Lewis Carroll
same time. Remember that computers don’t actually hold 0s
and 1s in their memory. Those labels are just for our
convenience. Each bit of a digital computer is a physical
thing that exists in two easily distinguishable states. Since
we use them to perform logic, we could also have labeled
them “true” and “false.” Now it should be obvious—“true and
false at the same time” is just nonsense. (In formal logic, it is
technically a false statement.)
You see where the problem is, right? From a false premise,
any conclusion can be proven true. This little example was
humorously pointed out by the famous logician Bertrand
Russell, though he wasn’t talking about qubits. However, we
can clearly see that starting with a blatantly false statement
is going to get us nowhere.
Consider the following logic. First, if a qubit can be 0 and 1
at the same time, then two qubits can be 00 and 01 and 10
and 11 at the same time. Three qubits can be 000 and 001
and 010 and 011 and 100 and 101 and 110 and 111 at the
same time. And… well, you get the picture. Like mold on that
organic bread you bought, exponential growth!
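That exponential growth is easy to make concrete with a few lines of plain Python (an illustrative sketch of my own, not from the book): listing every classical bitstring that n qubits are being compared against.

```python
from itertools import product

def bitstrings(n):
    """All classical configurations of n bits, as strings."""
    return ["".join(bits) for bits in product("01", repeat=n)]

print(bitstrings(2))        # ['00', '01', '10', '11']
print(len(bitstrings(10)))  # 1024 -- the list doubles with every added qubit
```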
Escalating quickly
Let’s skip ahead to the end just for a brief moment. I’m going
to tell you what qubits actually are, if only so that I can say
I never held anything back. If you take a formal university
subject in quantum computing, you will learn that qubits are
vectors in a complex linear space. That sounds complicated,
but it’s just jargon. Vectors are lists of numbers, spaces are
collections of vectors that are linear because you can add
vectors together, and the word complex refers to numbers
that use the square root of -1, which is also called an
imaginary number.
any point, remember that qubits are lists of complex
numbers, and there is a very solid mathematical framework
for dealing with them.
“For our elementary coding system we choose the two-level
spin system, which we will call a ‘quantum bit’ or qubit. The
qubit will be our fundamental unit of quantum information,
and all signals will be encoded into sequences of qubits.”
“The term ‘qubit’ was coined in jest during one of the author’s
many intriguing and valuable conversations with W.K.
Wootters, and became the initial impetus for this work.”
Writing in qubits
Usually, you will see qubits “written” with a vertical bar, |,
and a right angle bracket, ⟩, which come together to form
something called a “ket.” There is always something “inside”
the ket, which is just a label. Just as variables in
mathematics are given letter symbols (“Let x be…” anyone?),
an unspecified qubit is typically given the symbol |𝜓⟩. The
notation, called Dirac notation, is not special among the
various ways people denote vectors, but physicists have
found it convenient. Since quantum computing was born out
of this field, it has adopted this notation.
The other important thing to note is the use of the word state,
which is confusingly overloaded in both physics and
computation. The object |𝜓⟩ is often called the state of a
qubit or that the qubit is in the state |𝜓⟩. Sometimes |𝜓⟩ is
taken to be equivalent to the physical device encoding the
information. So, you’ll hear about the state of physical qubits,
for example. This is more of a linguistic convenience than
anything else. While there is nothing wrong with using this
short-hand in principle, it is what leads to misconceptions,
such as things being in two places at once, so caution is
advised.
are qubits. It may then seem like the statement “the qubit is
in the state |𝜓⟩” implies that |𝜓⟩ is a physical quantity. In
reality, |𝜓⟩, which is quantum information, can only describe
the state of the physical device—whatever current
configuration it might be in. That configuration might be
natural, or it might have been arranged intentionally, which
is the process of encoding the information |𝜓⟩ into the
physical device.
Superposition
It’s been mentioned that a qubit is simply a pair of numbers.
There are infinitely many pairs of numbers, but also some
special ones. For example, the pair (1,0) and the pair (0,1)
are pretty special. In fact, they are so special they are given
their own symbols in Dirac notation: |0⟩ and |1⟩. Among the
myriad of options, this choice was made to keep the
connection to the bits 0 and 1 in mind.
The other pair of numbers that usually gets its own symbol
is (1,1). The symbol for this pair is |+⟩, and it’s often called
the “plus” state. Why plus? Ah, we’ve finally made it to
superposition and the origin of “0 and 1 at the same time.”
This is the only bit of math I’ll ask you to do. What is (1,0) +
(0,1)? That’s right, it’s (1,1). Writing this with our symbols,
|+⟩ = |0⟩ + |1⟩.
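That one piece of arithmetic can be checked in a couple of lines of plain Python (an illustrative sketch of my own):

```python
# The "plus" state is literally the entrywise sum of the pairs (1,0) and (0,1).

def add_pairs(a, b):
    return (a[0] + b[0], a[1] + b[1])

ket0 = (1, 0)  # |0>
ket1 = (0, 1)  # |1>
ket_plus = add_pairs(ket0, ket1)
print(ket_plus)  # (1, 1), i.e. |+> = |0> + |1>
```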
The qubit is not in the |0⟩ state, nor is it in the |1⟩ state.
Whatever we might do with this qubit seems to affect both
|0⟩ and |1⟩ at the same time. So, it certainly does look like it
is both 0 and 1. Of course, the reality is more subtle than
that.
How would one physically encode the state |+⟩? Naively, the
equation suggests first encoding 0, then encoding 1, and,
finally, adding them together. That seems reasonable, but it’s
not possible. There’s never any addition happening in the
physical encoding or processing of qubits. The only reason it
is written this way is out of convenience for scientists who
want to write the states of qubits on paper. Taking a state
|+⟩ and replacing it with |0⟩ + |1⟩ is an intermediate step
that students learn to perform to assist in calculations done
by hand. A quantum computer could not do this, nor would
it need to. A quantum computer holds in its memory |+⟩, full
stop.
There is a good reason for it, and it comes when we attempt
to read qubits.
seen an atom. What we see with our naked eyes is
information on the displays of large instruments. But that
information is classical, represented in the digital electronics
of the device as bits. In short, any attempt to gain
information from a quantum system results in bits, not
qubits. We cannot simply “read off” the state of a qubit.
Quantum measurement
In the previous example, an atom was imagined to encode a
qubit of information in its energy state. When read, one of
two outcomes will occur. If the atom is in the high energy
state, it will release that energy as a photon. But, now it has
no energy, so it must be in the low energy state. While there
are many clever ways to write and read qubits from physical
systems, none of them can avoid this situation. Some of the
verbs that have been associated with the outcome of a read
qubit are destroyed, deleted, collapsed, and other gruesome-
sounding terms. A more straightforward way to say it is
simply that the physical system no longer encodes the
quantum data.
changed the value of the very thing we were attempting to
measure.
You might intuit that the more you learn about a system, the
more you change it. Quantum physics is what you get when
you take that idea to the extreme. We can manipulate
quantum objects without disturbing them, but then we
would gain no information from them. We can eke out a small
amount of information at the cost of a little disturbance, but
extracting the most information possible necessitates
maximum disruption. Imagine a tire pressure gauge that lets
out all the air and only reports whether the tire was
previously full or not. You learn a single bit of information
and are always left with a flat tire. Though that’s a good
analogy, I promise that quantum computers are more useful
than it sounds.
A game of chance
So far, we have that qubits encode quantum data but can
only reveal a smaller amount of classical data. But there's
something that should be nagging at you—surely the
outcome has to depend somehow on the quantum
information |𝜓⟩. Otherwise, what's the point? Indeed.
However, it's not the outcome itself that depends on |𝜓⟩, but
the probability.
Quantum physics is not a deterministic theory. It gives us
very accurate predictions of probabilities for the possible
outcomes of experiments, but it does not tell us which
outcome will happen on each occasion. That is, when we read
a qubit, the classical bit we receive is random. Recall the plus
state from before, |+⟩ = |0⟩ + |1⟩. When read, it will produce
the bit 0 or the bit 1 with equal probability. You might call it
a quantum coin—a perfectly unbiased random event. Indeed,
this is the basis of commercially available QRN, or Quantum
Random Number, generators.
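The quantum coin can be simulated classically: each outcome's probability is its amplitude's squared magnitude, normalized. This plain-Python sketch is my own illustration; the function name `measure` is not from any library.

```python
import random

def measure(state):
    """Read a qubit: return bit 0 or 1 with probability given by the
    squared magnitude of each amplitude (normalized)."""
    p0 = abs(state[0]) ** 2
    p1 = abs(state[1]) ** 2
    return 0 if random.random() < p0 / (p0 + p1) else 1

plus = (1, 1)  # the |+> state, written without normalization
counts = [0, 0]
for _ in range(10_000):
    counts[measure(plus)] += 1
print(counts)  # close to evenly split between 0s and 1s: a fair quantum coin
```

Reading the states |0⟩ and |1⟩, by contrast, always yields the bits 0 and 1 respectively, which is what makes them the "classical" states.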
qubits. The task of an algorithm designer is to amplify the
probabilities associated with correct solutions and minimize
the probabilities of incorrect ones. So, even if individual qubit
measurements are uncertain, quantum algorithms as a
whole guide the computation toward a useful outcome. This
is the topic of the next myth, so there’s more to come on
algorithms.
Myth 3: Quantum
Computers Try All
Solutions At Once
“There are two things you should
remember when dealing with parallel
universes. One, they're not really
parallel, and two, they're not really
universes.”
— Douglas Adams
versions of ourselves live out different destinies is undeniably
intriguing. So, when quantum computing—a field riddled
with complexities and counterintuitive principles—emerged,
it's unsurprising that the allure of the parallel universe
concept became a go-to explanation for many.
Most popularizations of the MWI focus on the metaphors of
a universe that “branches” into “parallel” worlds. This leads
to all sorts of confusion. Not only can you waste your money
on a Universe Splitter iPhone app (which definitely doesn’t
split anything), but physicists even argue amongst
themselves at the level of these metaphors. Let’s call this
kind of stuff Metaphorical Many-Worlds and not discuss it
further. Is there a better way to think about the Many-Worlds
Interpretation than this? Yes—and the first thing we are
going to do is stop calling it that. Everett’s core idea was the
universal wave function. So what is that?
when to apply them is arbitrary and at the discretion of the
user of the theory. This bothers all physicists to some extent
but bothered Everett the most.
produces the bit 0. We might as well call it a “classical” state.
The same goes for |1⟩. On the other hand, the state |0⟩ + |1⟩
is not a classical state as it cannot be encoded into something
that only holds bits. If we expand on these descriptions to
include more possibilities, the amount of information grows.
Luckily, our notation remains succinct. Let’s say that |world
0⟩ is a definite classical state of the entire universe, as is
|world 1⟩, and they differ only by one bit (the outcome of
reading a single qubit).
Quantum Dad
David Deutsch is often referred to as the “father of quantum
computing.” As noted in the brief history presented in the
introductory chapter, Deutsch conceived of a model of
computation called a universal quantum computer in 1985.
Deutsch’s motivation was to find “proof” that MWI is correct.
He is clearly a proponent of the MWI, and he has speculated exactly the claim we are calling a myth here.
In his view, when a quantum computer is in a superposition
of states, each component state corresponds to a “world” in
which that particular computational path is being explored.
He dubbed this “quantum parallelism” and suggested that
the physical realization of a quantum computer would be
definitive experimental evidence of MWI.
many programs we can run on such a computer. The
program could do nothing and return the same bit it
received. Or, it could switch the value of the bit (0 becomes
1 and 1 becomes 0). It might also ignore the input bit entirely.
It could produce 0 no matter what the input was, or it could
produce 1 no matter what the input was. And that’s it. Those
four options are the only possible ways to manipulate a
single bit. We can split these four programs into two
categories: the pair whose outputs added together are odd,
and the pair whose outputs added together are even. Given
a program, Deutsch’s algorithm tells you which category it
belongs to.
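The four programs and their two categories fit in a few lines of Python. This is my own illustration, not Deutsch's notation:

```python
# The four possible one-bit programs, as plain Python functions.
PROGRAMS = {
    "identity": lambda b: b,      # return the bit unchanged
    "negation": lambda b: 1 - b,  # flip the bit
    "always_0": lambda b: 0,      # ignore the input, output 0
    "always_1": lambda b: 1,      # ignore the input, output 1
}

def category(program):
    """Classify a program by the parity of its two outputs."""
    return "odd" if (program(0) + program(1)) % 2 == 1 else "even"
```

Notice that classifying a program this way requires running it twice, once on 0 and once on 1. The punchline of Deutsch's algorithm is that a quantum computer gets the same answer with a single evaluation.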
Deutsch then asked: if all of that computation is not done in parallel universes, where could it possibly happen?
It happens here
The key to Deutsch’s claim is a mismatch in resources. It
doesn’t take that big of a problem before all possible
solutions outstrip the total number of things in the entire
(single) universe we could use to encode bits. Therefore, the
quantum computer must be using resources in other
universes.
Naive quantum parallelism
Now, even if you still want to believe the MWI to be the one
true interpretation of quantum physics, its implications for
quantum computing are just not useful. In fact, they appear
to be dangerous. Computer scientist Scott Aaronson,
probably the most famous popularizer (and tied for the most
curmudgeonly), bangs on this drum (out of necessity) in
every forum he’s invited to, and he appears to be as
sympathetic to MWI as you can get without officially
endorsing it.
The second issue with naive quantum parallelism is that
supposing a quantum computer could access alternative
universes, it seems to do so in the most useless way possible.
Rather than performing exponentially many computations in
parallel and combining the results, it simply returns a
random answer from one of the universes. Of course, actual
quantum algorithms don’t work that way either. Instead,
algorithms manipulate the coefficients of superpositions
(whether there is a “plus” or “minus” between |0⟩ and |1⟩) so
that “correct” answers are returned when the quantum data
is read. Crucially, quantum computers can only do this for
very specific problems, suggesting that the power of
quantum data is not access to parallel worlds but simply a
matching of a problem’s structure to the mathematics of
quantum physics.
gravity to predict what will happen to smaller balls placed on
the fabric. This is analogous to when we try to interpret
quantum computers through the lens of classical
computers—we are explaining the new idea in the context of
the older ones. However, we can also explain older ideas
through the lens of newer ones, which often have much more
explanatory power.
board, you tend to understand both Einsteinian and
Newtonian gravity better. Can we do the same for quantum
computers?
bits are four numbers representing the probability of 00, 01, 10, and 11. Just like with qubits, ten probabilistic bits have 2^10, or 1024, probabilities, one of which is associated with 0000000000 and another with 1001011001, and so on. The situation is nearly identical, except that the entries are always positive probabilities rather than complex numbers.
Now, suppose I have a probabilistic computer that simulates
the flipping of ten coins. It manipulates numbers for each of
the 1024 possible sequences of heads and tails just like a
quantum computer would. So, does it calculate those
probabilities in parallel universes? No, obviously not. But
clearly, there must be a difference between the two
computers.
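To make the comparison concrete, here is a minimal sketch of that probabilistic computer (my own example). It stores all 1024 numbers in one perfectly ordinary dictionary:

```python
from itertools import product

# Track one probability per possible outcome of ten fair coin flips.
n = 10
sequences = ["".join(s) for s in product("HT", repeat=n)]
probabilities = {seq: 0.5 ** n for seq in sequences}

# All 1024 numbers sit in one ordinary computer's memory --
# no parallel universes required to store or update them.
total = sum(probabilities.values())
```

The program manipulates every one of the 1024 numbers, yet nobody is tempted to say the coin flips happen in parallel worlds.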
a quantum computation restricted to positive numbers that
add up to one.
about the nature of reality from the actual, proven
capabilities of quantum computers. While Deutsch was
inspired by the MWI and sees quantum computers as
evidence for it, the actual operation and utility of quantum
computers don't require MWI to be true. In other words,
quantum computers work based on the principles of
quantum mechanics, and their functionality is independent
of the philosophical interpretation of those principles.
Myth 4: Quantum
Computers
Communicate
Instantaneously
“I cannot seriously believe in it because
the theory cannot be reconciled with the
idea that physics should represent a
reality in time and space, free from
spooky action at a distance.”
— Albert Einstein
While the quantum data within quantum computers is
indeed entangled, we can reframe our understanding such
that this should seem inevitable rather than miraculous.
for decades. War and the shift from science to engineering in
quantum physics produced the “shut up and calculate”
generation, which frowned upon what they saw as fruitless
philosophical matters. Of course, there are always a brave
few.
Explainer-level nonsense
The gist of any quantum entanglement story is that it arises
when particles interact and create a “link,” which we call
entanglement. Importantly, entanglement remains no matter
how far apart they might be. The state of each individual
particle is not well defined, but their joint (entangled) state
is. Thus, the two particles must be considered as a single
entity spread across a potentially vast distance. If you believe
that story, I agree it’s mystical, just as the internet told you.
Cutting the link
Quantum mechanics makes accurate predictions in the
context of entangled systems. It doesn’t actually contain or
suggest the model of entanglement as a physical connection
between two distant objects, though. That model is wrong.
To see why, consider that entanglement can be created
between two particles without ever having them interact.
They can be so far away from each other that not even light
signals can reach one another, meaning nothing physical
could have mediated the “link.”
Classical entanglement
Imagine if, instead of atoms, there were two distant boxes,
each with a ball in it. The ball might be removed and sent to
you from either box. At some point midway between them, you receive a
ball in the post. Immediately, the boxes become correlated—
one is empty, and the other is not—because you don’t know
which box the ball came from. While the boxes, still
separated by a great distance, instantly formed this
connection, it is simply your ignorance and future
expectations about what might be revealed that define the
correlation. There is certainly no mystical “link” that
physically manifested between the boxes the moment the
post arrived, and the same is true for atoms. The point here
is that, like correlated bits, entanglement is simply correlated
qubits.
could know the whole situation—which box was empty and
where each ball was. That’s just not possible with atoms and
photons.
Technobabble
The point of mathematics is to simplify things that would otherwise require long-winded and complicated sentences.
So, we replace the things we are talking about with symbols
and numbers. If classical bits are unknown, we write them
as a list of probabilities (p1, p2, p3, …). In the case of the two
boxes, our ignorance of the contents of each box is a
probabilistic bit, as introduced in the previous chapter. The
first box is associated with a pair of probabilities (p1, p2).
Again, this is just a much more succinct way of writing (the
probability that this box has a ball, the probability that this
box has no ball). Between the two boxes, there are four
possible situations, which would have a list of numbers like
(q1, q2, q3, q4).
Now, here’s the important point: if the list of four
probabilities for the pair of them can’t be equally described
as two separate lists of two numbers for each of them, then
the information they share must be correlated.
Mathematically, you can take this as the definition of
correlation. For example, (0, 0.5, 0.5, 0) represents the
situation when one ball is received at the central location.
There is zero chance both are empty and zero chance both
have a ball. We are certain one is empty, and one has a ball—
we just don’t know which is which. Since we don’t know what
box it came from, either each box is empty or contains a ball
with 0.5 probability. Each box alone has the same probability
pair (0.5, 0.5), but these individual lists don’t capture the
complete situation—a bigger list is always needed to capture
the correlations.
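This definition of correlation can be written out in a few lines of Python. The code is my own illustration; ordering the four probabilities as (00, 01, 10, 11) is an assumption of the sketch:

```python
def is_correlated(q, tol=1e-9):
    """Return True if the two-box distribution q = (q1, q2, q3, q4),
    ordered (00, 01, 10, 11), cannot be written as two separate
    one-box lists -- the definition of correlation used here."""
    q00, q01, q10, q11 = q
    # Each box's individual ("marginal") probability pair.
    first = (q00 + q01, q10 + q11)
    second = (q00 + q10, q01 + q11)
    # If the boxes were independent, the joint list would be the
    # product of the two pairs, entry by entry.
    independent = (first[0] * second[0], first[0] * second[1],
                   first[1] * second[0], first[1] * second[1])
    return any(abs(a - b) > tol for a, b in zip(q, independent))
```

The one-ball-in-the-post list (0, 0.5, 0.5, 0) fails the factoring test, while four equal probabilities of 0.25 pass it, because the latter really is just two fair coins side by side.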
represents. If one atom is described by a qubit (0.6, −0.8),
then we would find it excited with a probability of 0.36 and
decayed with a probability of 0.64.
For the pair of atoms, the two-qubit state has four numbers.
For example, (0, 0.6, −0.8, 0) tells us that only one atom will
be found in the excited state, but with unequal probabilities.
If the list representing the pair of atoms can’t be equally
represented by two smaller lists for each atom individually,
they are correlated. But since they are qubits instead of bits,
we give such a list a new name: entanglement. That’s it. In
quantum information, entanglement is correlated qubits.
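A small sketch (mine, reusing the chapter's numbers) shows how these lists translate into probabilities, and why the two-atom list can't be split:

```python
def born_probabilities(amplitudes):
    """Squaring each entry of a qubit list gives the outcome probabilities."""
    return [a * a for a in amplitudes]

# A single atom described by the qubit (0.6, -0.8):
single = born_probabilities([0.6, -0.8])          # ~[0.36, 0.64]

# The two-atom list (0, 0.6, -0.8, 0). It cannot be split into two
# one-atom lists (a, b) and (c, d): the product would be
# (a*c, a*d, b*c, b*d), which needs a*c = 0 while a*d = 0.6 and
# b*c = -0.8, an impossibility. Correlated qubits: entanglement.
pair = born_probabilities([0.0, 0.6, -0.8, 0.0])
```

The same "can't be written as two smaller lists" test applies to both cases; only the kind of number in the list changes.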
Interference
Lists of probabilities change by multiplying and adding up
the individual numbers to create new ones. As pointed out in
the last chapter, multiplying or adding positive numbers can
only produce more positive numbers. With qubits, by contrast, the list can change in drastically different ways because
adding negative numbers to positive numbers can lead to
cancellation. Borrowing terminology from wave mechanics,
this is often referred to as interference, where two waves
cancel when the crest of one meets the trough of another.
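Here is a minimal sketch of interference, my own example built from the standard Hadamard operation (a basic quantum operation not introduced in the text). Applied once it mixes a qubit; applied twice, the two routes to the second outcome carry opposite signs and cancel:

```python
import math

s = 1 / math.sqrt(2)

def mix(qubit):
    """A sign-mixing operation (the Hadamard) on a qubit list (a0, a1).
    Note the minus sign: entries can now cancel each other."""
    a0, a1 = qubit
    return (s * (a0 + a1), s * (a0 - a1))

step1 = mix((1.0, 0.0))   # equal parts |0> and |1>
step2 = mix(step1)        # the routes to |1> cancel: back to |0>
```

Lists of ordinary probabilities can never do this, since positive numbers only pile up.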
choreographing the cancellation of unwanted numbers in
qubits of information. While the computer doesn’t use
entanglement as some physical fuel, we can show that
without it, the computations it performs can be easily
simulated with classical digital computers. That is, a
quantum computer, made of atoms and photons, for
example, that never realizes entanglement is no less
“quantum” than anything else but also no more powerful
than a digital computer. In some sense, though, this is not
surprising. After all, a digital computer that never produces
correlated bits would be extremely useless—no more
powerful than flipping a bunch of fair coins.
Beam me up
Correlated qubits are necessary for quantum computation
and can be seen as a resource for primitive information-
processing tasks. The whimsical names of these tasks don’t
help our myth-busting endeavor, however.
sound like science fiction. However, quantum teleportation
is just shifting the location where information is stored in an
efficient way. If I were narrating the protocol, I might say the
following.
It’s not that you are meant to follow the logic there—the
protocol itself is not trivial. However, compare this to an
explanation from Popular Mechanics (A. Thompson, March
16, 2017).
“If we take two particles, entangle them, and send one to the
moon, then we can use that property of entanglement to
teleport something between them. If we have an object we
want to teleport, all we have to do is include that object in the
entanglement… After that, it's just a matter of making an
observation of the object you want to teleport, which sends
that information to the other entangled particle on the moon.
Just like that, your object is teleported, assuming you have
enough raw material on the other side.”
Thinking about quantum information as physically
corresponding to classical objects quickly descends into
magical thinking. Not only does it not help explain the
concepts, but it further mystifies quantum physics and gives
it the illusion that supernatural forces are at play.
Just correlations
Most of what you hear and read about quantum
entanglement is the shooting down of attempts to force it into
a classical worldview, but framed with headlines like
“quantum physicists just proved nature is spooky.”
Technically, we call the results no-go theorems because they
rule out theories that would restore classical objectivity to
quantum physics. Classical objectivity is comforting because
it provides a reliable and persistent model of the world. It
allows us to predict and control our environment with
remarkable ease as we cobble together rigid objects to act as
simple machines that extend our natural abilities and more
complicated ones that have enabled a mostly cooperative
global technological society.
But that’s a very narrow view influenced by the success of
classical physics and engineering. Quantum physics, with
things like entanglement, throws a wrench into this neat
classical picture of the world.
Myth 5: Quantum
Computers Will
Replace Digital
Computers
“Create the hype, but don’t ever believe it.”
— Simon Cowell
is a form of hype that misrepresents the potential of quantum
computing.
GPUs to QPUs
As mentioned in the introductory chapter, a quantum
computer is unlikely to be a “computer,” as we colloquially
understand it, but a special-purpose processing unit—the
QPU. The term QPU (quantum processing unit) is analogous
to the graphics processing unit (GPU).
The history of the GPU begins in the late 1990s, with Nvidia
introducing the GeForce 256 in 1999, often celebrated as the
first GPU. Originally, GPUs were designed to accelerate
graphics rendering for gaming and professional visualization,
offloading these tasks from the CPU (central processing unit)
to increase performance and efficiency.
like CUDA (Compute Unified Device Architecture), which
allowed developers to use GPUs for tasks beyond graphics
display, including scientific research, machine learning, and
something to do with cryptocurrencies.
evolve into complementary components of classical
computers optimized for specific tasks intractable for CPUs.
Much like GPUs, QPUs will eventually be utilized beyond the
applications we envision for them today, but they will never
replace CPUs or even GPUs.
Before we get into what QPUs might do, let’s outline what
they certainly won’t do. But before that, we need to talk about
what problems are hard and why.
Computational complexity
Theoretical computer scientists categorize computational
problems based on the amount of computational resources
(like time and space) required to solve them, essentially
dividing them into “easy” and “hard” problems.
Now, imagine each digit in the pin can be more than just 0 or 1. Let's say there are n options for each digit, and the pin is d digits long. The total number of combinations is n to the power of d (or n^d, which is n multiplied by itself d times). If the number of digits d stays fixed, but you increase the number of options n for each digit, then the total number of combinations grows polynomially because polynomials involve variables raised to fixed powers. For example, sticking with a four-digit pin but increasing the options for each digit from just 0 and 1 to, say, 0 through 9, the safe gets more secure because the total combinations jump from 16 to 10,000 (which is 10^4).
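The arithmetic can be checked in a couple of lines (my own sketch). The contrast to keep in mind is growing n with d fixed (polynomial) versus growing d with n fixed (exponential):

```python
def combinations(options_per_digit, num_digits):
    """Total number of possible pins: n raised to the power d."""
    return options_per_digit ** num_digits

binary_pin = combinations(2, 4)     # four binary digits: 16
decimal_pin = combinations(10, 4)   # digits 0-9, same length: 10,000
long_pin = combinations(2, 40)      # fixed alphabet, longer pin: 2^40
```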
computers won’t significantly speed them up. An algorithm
is called efficient if it runs in polynomial time. So, both
classical and quantum algorithms for this problem are
efficient.
In summary, quantum computers aren’t useful for cracking
pin numbers for two reasons. Either the problem is too
easy—in which case a classical computer suffices—or the
problem is too hard for any computer. By now, you must be
wondering what problems a QPU is actually useful for.
BQP
Ideally, we’d like to define a class of problems that can be solved with a quantum algorithm in polynomial time. In classical complexity theory, most problem classes are defined relative to a deterministic algorithm. But quantum algorithms end when quantum data is read, a random process. So, we need to add probability to the definition, which gives the class its name: BQP, for bounded-error quantum polynomial time.
NP? Another: is NP contained in BQP? We suspect the
answer is “no” to both of these questions, but there is
currently no mathematical proof. This is why many
statements about computational speed-up are couched in
dodgy language, and some aren’t. We don’t really know if
quantum computers are any different from classical
computers, but we highly suspect they are. In this book, for
brevity, I won’t add all the necessary caveats to statements
about complexity as if it were a graduate class in computer
science.
Quantum simulation
In both classical and quantum physics, scientists use
mathematical models called Hamiltonians to describe how
things behave—from the interactions of tiny particles to the
workings of complex materials. However, understanding and
predicting the behavior of these systems can be incredibly
complex, requiring massive amounts of computational
power, even for supercomputers. Today, many
approximations are used to make the problem tractable, yet
at the expense of accuracy.
temperature or ultra-strong and lightweight materials. These
could revolutionize fields like energy, transportation, and
electronics. By simulating complex molecules accurately,
quantum simulation could speed up drug discovery and even
allow scientists to probe exotic new phenomena in a virtual
environment free from terrestrial constraints, furthering our
understanding of the universe's fundamental workings.
Transform (FFT) and its profound impact on signal analysis,
which can serve as a bridge to understanding the potential
of quantum algorithms that don’t admit exponential speed-ups.
While the hype around quantum computing speed-ups often invokes the term “exponential,” we also suspect that QPUs can provide square-root speed-ups generically. Again, while this sounds modest, keep in mind the FFT and all that it has done for society before dismissing it!
Quantum-powered search. Instead of checking items one by
one, a quantum computer can use superposition to examine
all database entries at the same time. It's like looking at all
the pages of the phone book simultaneously. Okay, now you
know that’s not really how it works. So, what’s a better
explanation than simply “complex math?”
By repeating the above process, the number corresponding
to the marked item becomes larger and larger while every
other number approaches zero. Think of it as the right entry
becoming brighter while all others fade. The algorithm
cleverly manipulates the quantum information so that the
marked entry becomes more likely to be found when the data
is finally read.
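Here is a toy simulation of that brightening process, a bare-bones version of Grover's algorithm. This is my own sketch; the database size and the marked position are arbitrary choices:

```python
import math

N = 16        # database entries (a toy size)
marked = 11   # position of the sought entry -- an arbitrary choice

# Start with every entry equally weighted.
amps = [1 / math.sqrt(N)] * N

# One Grover round: flip the sign of the marked entry, then reflect
# every entry about the average. Thanks to the sign flip, the marked
# amplitude grows while all the others shrink.
for _ in range(round(math.pi / 4 * math.sqrt(N))):  # ~3 rounds for N = 16
    amps[marked] = -amps[marked]
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]

probability_of_marked = amps[marked] ** 2
```

After about the square root of N rounds, reading the data returns the marked entry with high probability, which is where the square-root speed-up over checking entries one by one comes from.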
Myth 6: Quantum
Computers Will Break
the Internet
“Amateurs hack systems, professionals hack people.”
— Bruce Schneier
A brief history of internet encryption
The internet's precursor, ARPANET, was developed as a
project by the Advanced Research Projects Agency (ARPA) of
the U.S. Department of Defense in the late 1960s. Encryption
at this time was primarily a concern for military
communications. Crudely speaking, there are only two types
of people in this context: us and them. Every physical device
can carry the same secret used to encrypt messages.
However, in a non-military context, every pair of people in
the network needs to share a separate secret. Every new
person added to the network would have to find some way of
transmitting a unique new secret to every person already in
the network. This was a seemingly infeasible proposition
until the mid-1970s.
In the 1990s, as the internet became more commercialized
and accessible to the public, the need for secure transactions
became apparent. The introduction of the Secure Sockets
Layer (SSL) protocol by Netscape in 1994 was a critical step
in enabling secure online communications and transactions.
By the 2000s, with the increasing prevalence of cyber
threats, encryption became indispensable. Technologies like
Virtual Private Networks (VPNs), end-to-end encryption in
messaging apps, and secure browsing protocols for websites
became standard practice. Today, it is simply assumed that
your interactions on the internet are private. But why and
how?
To crack a code
To understand the challenge of breaking internet encryption,
especially using RSA, let's simplify the layers and protocols
into a more digestible scenario. Imagine you're shopping
online at your favorite bookstore and decide to purchase a
new book from your favorite author (winky face emoji). At
checkout, you're prompted to enter your credit card details.
This is where RSA encryption steps in to protect your
information.
quick read, can be likened to locking your credit card details
in a box. The bookstore has the only key (the private key)
that can unlock the box. Even though the locked box travels
across the vast and insecure network of the internet, only the
bookstore can open it upon receipt, ensuring that your credit
card information remains confidential. However, by
inspecting the lock carefully enough, you could deduce what
the (private) key looks like, make one, and unlock the box.
The question is: how long would this “breaking” of the lock
take?
25195908475657893494027183240048398571429282126
20403202777713783604366202070759555626401852588
07844069182906412495150821892985591491761845028
08489120072844992687392807287776735971418347270
26189637501497182469116507761337985909570009733
04597488084284017974291006424586918171951187461
21515172654632282216869987549182422433637259085
14186546204357679842338718477444792073993423658
48238242811981638150106748104516603773060562016
19676256133844143603833904414952634432190114657
54445417842402092461651572335077870774981712577
24679629263863563732899121548314381678998850404
45364023527381951378636564391212010397122822120
720357.
2,000 years to solve. In short, if someone could solve this so-
called Integer Factorization Problem efficiently, they’d “break”
the internet and blow some minds because of how difficult it
appears to be. Shor's algorithm, which has been mentioned
several times, does exactly what internet security and
cryptography experts thought was impossible—it efficiently
solves the integer factorization problem.
have immediate and dramatic practical implications. This
was of obvious concern to cryptographic security experts who
were still primarily deployed in a military context.
a periodic function. While this is not conceptually hard—in
fact, it’s the same problem the DFT and FFT discussed above
are applied to—it is computationally hard for digital
computers.
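For concreteness, here is the period-finding core in brute-force classical form. The reduction from factoring to period finding is the standard one; the code itself is my own illustration:

```python
from math import gcd

def period(a, N):
    """Smallest r > 0 with a**r = 1 (mod N), found by brute force.
    This exhaustive search is the exponentially expensive step that
    Shor's algorithm replaces with a quantum Fourier transform."""
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

# Toy example: factor 15 using the period of 2^x mod 15.
r = period(2, 15)                      # the sequence 2, 4, 8, 1 repeats: r = 4
factor1 = gcd(2 ** (r // 2) - 1, 15)   # 3
factor2 = gcd(2 ** (r // 2) + 1, 15)   # 5
```

For a 15, the loop runs four times; for a 617-digit RSA modulus, the period is astronomically long, which is exactly why only the quantum route is a threat.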
quantum computer. This assumes those qubits work more
or less perfectly. However, even if they don’t, we can rely on
redundancy and error correction to get us there. Many poorly
performing components can come together to act as a better-
performing “virtual” component. Virtual qubits are also
called logical qubits, whereas the raw physical systems (like
atoms and photons) are called physical qubits.
While quantum computing is still in its early stages, there
are signs it may follow a similar exponential growth
trajectory—a quantum Moore’s Law, as it were. If we start
with roughly a hundred qubits today, and the number of
qubits doubles every two years, it suggests that in about 34
years (around 2058), quantum computers could become
powerful enough to crack even the most robust RSA
encryption standards. Researchers like Jaime Sevilla and
Jess Riedel support this timeline, publishing a report in late
2020 that claimed a 90% confidence of RSA-2048 being
factored before 2060.
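The arithmetic behind that 34-year figure can be replayed in a few lines. The starting count and doubling time are the chapter's assumptions; the ten-million-qubit target is my illustrative stand-in for "enough to break RSA-2048," not a published figure:

```python
# Back-of-the-envelope "quantum Moore's law": ~100 qubits in 2024,
# doubling every two years, until an assumed ten-million-qubit target.
qubits, year = 100, 2024
TARGET = 10_000_000

while qubits < TARGET:
    qubits *= 2   # doubling...
    year += 2     # ...every two years

years_elapsed = year - 2024   # lands around 2058
```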
Well, I don’t have 34-year secrets. But government agencies
likely do. Secrecy of data has a shelf life, and quantum computers make that shelf life feel far more immediate.
that NIST has been proactive in preparing for the transition
to post-quantum cryptography. It launched the Post-
Quantum Cryptography Standardization Project in 2016
with a call for algorithms that could withstand attacks from
quantum computers.
limits of our ability to know and control the natural world. In
the viral vortex of discussions surrounding quantum
computing's potential to disrupt current cryptographic
standards, a beacon of hope shines through: Quantum Key
Distribution (QKD).
it immune to the threats posed by quantum computing
advancements.
Myth 7: Quantum
Computing Is
Impossible
“You insist that there is something a
machine cannot do. If you tell me
precisely what it is a machine cannot do,
then I can always make a machine
which will do just that.”
The other argument against quantum computing is that
noise and errors will be so unavoidable that quantum
computations will break down before answers can be
reached. Naysayers need only point to existing quantum
computers to illustrate their claim, not to mention the hype,
empty promises, and failed predictions of “within the next
five years.”
But wait. It’s not that simple. I can indeed encode qubits
quite faithfully with the transistors in my smartphone. My
smartphone could quite comfortably emulate a perfect
quantum computer, provided it had fewer than 30 qubits or
so. When I hijack all the computing resources in my phone
to do this, it really is a 30-qubit quantum computer.
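A quick calculation (my own sketch) shows why roughly 30 qubits is the smartphone ceiling: the memory needed to store the quantum data doubles with every added qubit. The 8 bytes per number is my assumption, corresponding to single-precision complex values:

```python
# Memory needed to hold the full state of n qubits: 2**n complex
# numbers, at an assumed 8 bytes per number.
def state_memory_bytes(n_qubits, bytes_per_amplitude=8):
    return (2 ** n_qubits) * bytes_per_amplitude

gb_30 = state_memory_bytes(30) / 1e9   # roughly 8.6 GB: phone territory
gb_50 = state_memory_bytes(50) / 1e9   # millions of GB: no phone will do
```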
But you could argue that in that scenario, the qubits aren’t
physical. When I googled “what is the most powerful
quantum computer,” the top hit was a press release from a
company I won’t name claiming that their quantum
computer was the most powerful. The device encoded one
qubit of information onto each of six individual atoms
isolated in a specialized ion trap sitting on an integrated
chip—a marvel of modern engineering, no doubt. In such a
device, you can make a correspondence between the
information you want to encode and the information needed
to describe each atom. Thus, qubits are physical.
and certainly not process it. My 30-qubit smartphone,
however, does reliably encode quantum data. In fact, by
some accepted measures, it is the most powerful quantum
computer!
Maybe they shouldn’t call their devices quantum computers
today, though—at least not until they outperform my
smartphone. The question then is when will that happen?
The nice thing—if you really think this is a problem—is that
quantum computing is a divergent technology. Far enough
into the future, there will be no ambiguity about whether a
computer device is really a quantum computer because there
will not be enough atoms in the universe to construct
smartphone memory capable of encoding the quantum data
the new device can carry. So, one solution is to just hold off
on our bickering until that time.
Let’s consider an example.
level of the qubits, closing yet another gap in full integration.
And so on it will go.
The final coin analogy
Find a coin. Flip it. Did you get heads? Flip it again. Heads.
Again. Tails. Again, again, again…
HHTHHTTTHHTHHTHHTTHT. Is that what you got? No, of
course, you didn’t. That feels obvious. But why?
So, obviously, we can’t write them all down. What about if we
just tried to count them one by one, one each second? We
couldn’t do it alone, but what if everyone on Earth helped
us? Let’s round up and say there are 10 billion of us. That
wouldn’t do it. What if each of those 10 billion people had a
computer that could count 10 billion sequences per second
instead? Still no. OK, let’s say, for the sake of argument, that
there were 10 billion other planets like Earth in the Milky
Way, and we got all 10 billion people on each of the 10 billion
planets to count 10 billion sequences per second. What? Still
no? Alright, fine. What if there were 10 billion galaxies, each
with these 10 billion planets? Not yet? Oh, my.
Why am I telling you all this? The point I want to get across
is that humanity’s knack for pattern finding has given us the
false impression that life, nature, or the universe is simple.
It’s not. It’s actually really complicated. But like a drunk
looking for their keys under the lamp post, we only see the
simple things because that’s all we can process. The simple
things, however, are the exception, not the rule.
Life’s generally complicated, but not so if we stay on the
narrow paths of simplicity. Computers, deep down in their
guts, are making sequences that look like those of coin flips.
Computers work by flipping transistors on and off. But your
computer will never produce every possible sequence of bits.
It stays on the simple path or crashes. There is nothing
innately special about your computer that forces it to do this.
We never would have built computers that couldn’t solve problems quickly. Computers only appear effortless because we are at the steering wheel, steering them toward problems we already know can be solved.
We don’t have to track and keep under control all the details
of quantum things, just as your digital computer does not
need to track all its possible configurations. So next time
someone tells you that quantum computing is complicated
because there are so many possibilities involved, remind
them that all of nature is complicated—the success of
science is finding the patches of simplicity. In quantum
computing, we know which path to take. It’s still full of
debris, and we are smelling flowers and picking the
strawberries along the way, so it will take some time—but
we’ll get there.
About the author