Mind and Mechanism
Drew McDermott
A Bradford Book
The MIT Press
Cambridge, Massachusetts
London, England
© 2001 Massachusetts Institute of Technology
All rights reserved. No part of this book may be reproduced in any form by any
electronic or mechanical means (including photocopying, recording, or informa-
tion storage and retrieval) without permission in writing from the publisher.
McDermott, Drew V.
Mind and mechanism / Drew McDermott.
p. cm.
“A Bradford book.”
Includes bibliographical references and index.
ISBN 0-262-13392-X (hc. : alk. paper)
1. Mind and body. 2. Artificial intelligence. 3. Computational
neuroscience. I. Title.
For Judy
There are still harmless self-observers who believe that there are “immediate cer-
tainties”: for example, “I think,” or as the superstition of Schopenhauer put it,
“I will”; as though knowledge here got hold of its object purely and nakedly as
“the thing in itself,” without any falsification on the part of either the subject or the
object. . . . The philosopher must say to himself: When I analyze the process that
is expressed in the sentence, “I think,” I find a whole series of daring assertions
that would be difficult, perhaps impossible, to prove; for example, that it is I who
think, that there must necessarily be something that thinks, that thinking is an
activity and operation on the part of a being who is thought of as a cause, that there
is an “ego,” and, finally, that it is already determined what is to be designated
by thinking—that I know what thinking is. . . . In short, the assertion “I think”
assumes that I compare my state at the present moment with other states of myself
which I know, in order to determine what it is; on account of this retrospective
connection with further “knowledge,” it has, at any rate, no immediate certainty
for me. In place of the “immediate certainty” in which people may believe in the
case at hand, the philosopher thus finds a series of metaphysical questions pre-
sented to him, truly searching questions of the intellect, to wit: “From where do
I get the concept of thinking? Why do I believe in cause and effect? What gives
me the right to speak of an ego, and even of an ego as cause, and finally of an ego
as the cause of thought?” Whoever ventures to answer these questions at once by
an appeal to a sort of intuitive perception, like the person who says, “I think, and
know that this, at least, is true, actual, and certain”—will encounter a smile and
two question marks from a philosopher nowadays. “Sir,” the philosopher will
perhaps give him to understand, “it is improbable that you are not mistaken; but
why insist on the truth?”
—Nietzsche (1886), pp. 213–214
Contents
Preface
Acknowledgments
Notes
References
Index
Preface
There are many reasons to work in the field of artificial intelligence (AI).
My reason is a desire to solve the “mind-body” problem, to understand
how it is that a purely physical entity, the brain, can have experiences.
In spite of this long-range goal, my research has been concerned with
seemingly much tinier questions, such as, how might a robot know where
it is? How would a computer represent the sort of routine but arbitrary
fact or belief that people seem to keep track of effortlessly? (I’m thinking
of “facts” such as “If you go swimming too soon after eating, you might
get a cramp.”) It may seem misguided to pursue tactical objectives that
are so remote from the strategic objective, but scientists have learned that
in the end finding precise answers to precise questions is a more reliable
means to answering the big questions than simply speculating about what
the big answers might be. Indeed, scientists who venture to attempt such
speculation are often looked at askance, as if they had run out of useful
things to do.
Hence, by writing a book on the mind-body problem from a compu-
tational perspective, I am risking raised eyebrows from my colleagues. I
take that risk because I think the mind-body problem is important, not
just technically, but culturally. There is a large and growing literature on
novel approaches to the problem. Much of it is quite insightful, and some
is totally wrong (in my opinion, of course). Even authors I agree with
often fail to understand the role of computational ideas in explaining the
mind. Claims like these are often made with only the flimsiest arguments:
• An intelligent computer program would treat every reasoning problem
as a deduction.
bits out, to keep from driving away a large class of intelligent readers who
suffer from “mathematics anxiety” or “philosophy narcolepsy.” I decided
to leave chapter 2 in to counteract the general tendency in surveys of AI to
talk about what’s possible instead of what’s actually been accomplished.
The problem with the former approach is that people have an odd series
of reactions to the idea of artificial intelligence. Often their first reaction
is doubt that such a thing is possible; but then they swing to the opposite
extreme, and start believing that anything they can imagine doing can be
automated. A description of how it might be possible to program a com-
puter to carry on a conversation encourages this gullibility, by painting a
vivid picture of what such a program would be like, without explaining
that we are very far from having one. Hence, throughout the book, I try
to differentiate what we know how to build from what we can imagine
building.
I left chapter 5 in for a different reason. I think the most serious objec-
tions to a computational account of mind rest on the issue of the observer-
relativity of symbols and semantics, the question of whether symbols can
mean anything, or can even be symbols in the first place, unless human
beings impute meanings to them. This may not seem like the most serious
objection for many readers, and they can skip most of chapter 5. Readers
who appreciate the objection will want to know how I answer it.
With these caveats in mind, let me invite you to enjoy the book. The
puzzles that arise in connection with the mind-body problem are often
entertaining, once you’ve wrapped your mind around them. They are
also important. If people really can be explained as machines controlled
by computational brains, what impact does that have on ethics or religion?
Perhaps we can’t answer the question, but we should start asking it soon.
Acknowledgments
1
The Problem of Phenomenal Consciousness
Science has pushed man farther and farther from the center of the uni-
verse. We once thought our planet occupied that center; it doesn’t. We
once thought that our history was more or less the history of the world;
it isn’t. We once thought that we were created as the crown and guardian
of creation; we weren’t. As far as science is concerned, people are just a
strange kind of animal that arrived fairly late on the scene. When you
look at the details of how they work, you discover that, like other life
forms, people’s bodies are little chemical machines. Enzymes slide over
DNA molecules, proteins are produced, various chemical reactions are
catalyzed. Molecules on the surfaces of membranes react to substances
they come into contact with by fitting to them and changing shape, which
causes chemical signals to alter the usual flow of events, so that the ma-
chine’s behavior can change as circumstances change.
Traditionally there was one big gap in this picture: the human mind.
The mind was supposed to be a nonphysical entity, exempt from the laws
that govern the stars, the earth, and the molecules that compose us. What
if this gap closes? What if it turns out that we’re machines all the way
through?
This possibility may seem too implausible or repugnant to contemplate.
Nonetheless, it looms on the horizon. For some of us, it seems like the
most likely possibility. The purpose of this essay is to increase the plau-
sibility of the hypothesis that we are machines and to elaborate some of
its consequences. It may seem that a wiser or more moral strategy would
be to avoid thinking about such a weird and inhuman hypothesis. I can’t
agree. If we are indeed physical systems, then I get no comfort from the
fact that most people don’t know it and that I can occasionally forget it.
I will be arguing that people have minds because they, or their brains, are
biological computers. The biological variety of computer differs in many
ways from the kinds of computers engineers build, but the differences are
superficial. When evolution created animals that could benefit from per-
forming complex computations, it thereby increased the likelihood that
some way of performing them would be found. The way that emerged used
the materials at hand, the cells of the brain. But the same computations
could have been performed using different materials, including silicon. It
may sound odd to describe what brains do as computation, but, as we
shall see, when one looks at the behavior of neurons in detail, it is hard to
avoid the conclusion that their purpose is to compute things. Of course,
the fact that some neurons appear to compute things does not rule out
that those same neurons might do something else as well, maybe some-
thing more important; and there are many more neurons whose purpose
has not yet been fathomed.
Even if it turns out that the brain is a computer, pure and simple, an
explanation of mind will not follow as some kind of obvious corollary. We
see computers around us all the time, none of which has a mind. Brains
appear to make contact with a different dimension. Even very simple
animals seem to be conscious of their surroundings, at least to the extent
of feeling pleasure and pain, and when we look into the eyes of complex
animals such as our fellow mammals, we see depths of soul. In humans the
mind has reached its earthly apogee, where it can aspire to intelligence,
morality, and creativity.
So if minds are produced by computers, we will have to explain how.
Several different mechanisms have been proposed, not all of them plau-
sible. One is that they might “excrete” mind in some mysterious way, as
the brain is said to do. This is hardly an explanation, but it has the virtue
of putting brains and computers in the same unintelligible boat. A variant
of this idea is that mind is “emergent” from complex systems, in the way
that wetness is “emergent” from the properties of hydrogen and oxygen
atoms when mixed in great numbers to make water.
I think we can be more specific about the way in which computers
can have minds. Computers manipulate information, and some of this
fact, there are long intervals when everything we perceive involves us. In
social settings, much of what we observe is how other humans react to
what we are doing or saying. Even when one person is alone in a jungle,
she may still find herself explaining the appearance of things partly in
terms of her own observational stance. A person who did not have beliefs
about herself would appear to be autistic or insane. We can confidently
predict that if we meet an intelligent race on another planet they will
have to have complex models of themselves, too, although we can’t say
so easily what those models will look like.
I will make two claims about self-models that may seem unlikely at
first, but become obvious once understood:
1. Everything you think you know about yourself derives from your self-
model.
2. A self-model does not have to be true to be useful.
The first is almost a tautology, although it seems to contradict a traditional
intuition, going back to Descartes, that we know the contents of our
minds “immediately,” without having to infer them from “sense data” as
we do for other objects of perception. There really isn’t a contradiction,
but the idea of the self-model makes the tradition evaporate. When I say
that “I” know the contents of “my” mind, who am I talking about? An
entity about whom I have a large and somewhat coherent set of beliefs,
that is, the entity described by the self-model. So if you believe you have
free will, it’s because the self-model says that. If you believe you have
immediate and indubitable knowledge of all the sensory events your mind
undergoes, that’s owing to the conclusions of the self-model. If your beliefs
include “I am more than just my body,” and even “I don’t have a self-
model,” it’s because it says those things in your self-model. As Thomas
Metzinger (1995b) puts it, “since we are beings who almost constantly
fail to recognize our mental models as models, our phenomenal space is
characterized by an all-embracing naive realism, which we are incapable
of transcending in standard situations.”
You might suppose that a self-model would tend to be accurate, other
things being equal, for the same reason that each of our beliefs is likely
to be true: there’s not much point in having beliefs if they’re false. This
supposition makes sense up to a point, but in the case of the self-model we
run into a peculiar indeterminacy. For most objects of belief, the object
exists and has properties regardless of what anyone believes. We can pic-
ture the beliefs adjusting to fit the object, with the quality of the belief
depending on how good the fit is (Searle 1983). But in the case of the
self, this picture doesn’t necessarily apply. A person without a self-model
would not be a fully functioning person, or, stated otherwise, the self does
not exist prior to being modeled. Under these circumstances, the truth of
a belief about the self is not determined purely by how well it fits the
facts; some of the facts derive from what beliefs there are. Suppose that
members of one species have belief P about themselves, and that this
enables them to survive better than members of another species with be-
lief Q about themselves. Eventually everyone will believe P, regardless of
how true it is. However, beliefs of the self-fulfilling sort alluded to above
will actually become true because everyone believes them. As Nietzsche
observed, “The falseness of a judgment is . . . not necessarily an objection
to a judgment . . . . The question is to what extent it is life-promoting . . . ,
species-preserving . . . ” (Nietzsche 1886, pp. 202–203). For example, a
belief in free will is very close (as close as one can get) to actually having
free will, just as having a description of a window inside a computer is
(almost) all that is required to have a window on the computer’s screen.
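The window analogy can be made concrete in a few lines of code. The sketch below is mine, not the book’s, and every name in it is invented for the example; the point is only that the “window” the user sees is nothing over and above a description that a rendering loop honors:

```python
# A toy illustration (not from the book): the "window" on the screen is
# nothing over and above this description, once a rendering loop honors it.
# All field names here are invented for the example.
window_description = {
    "title": "Untitled",
    "x": 40, "y": 60,             # position on the screen
    "width": 320, "height": 200,
}

def render(desc):
    # A real window system would paint pixels; printing stands in for that.
    print(f"[{desc['title']}] at ({desc['x']}, {desc['y']}), "
          f"{desc['width']}x{desc['height']}")

render(window_description)
```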
I will need to flesh this picture out considerably to make it plausible. I
suspect that many people will find it absurd or even meaningless. For one
thing, it seems to overlook the huge differences between the brain and
a computer. It also requires us to believe that the abilities of the human
mind are ultimately based on the sort of mundane activity that computers
engage in. Drawing windows on a screen is trivial compared to writing
symphonies, or even to carrying on a conversation. It is not likely that
computers will be able to do either in the near future. I will have to argue
that eventually they will be able to do such things.
The issues surrounding the relation between computation and mind are
becoming relevant because of the complete failure of dualism as an ex-
planation of human consciousness. Dualism is the doctrine that people’s
minds are formed of nonphysical substances that are associated with their
bodies and guide their bodies, but that are not part of their bodies and are
not subject to the same physical laws as their bodies. This idea has been
widely accepted since the time of Descartes, and is often credited to him,
but only because he stated it so clearly; I think it is what anyone would
come to believe if they did a few experiments. Suppose I ring a bell in your
presence, and then play a recording of the 1812 Overture for you. You are
supposed to raise your hand when you hear the sound of that bell. How
do you know when you hear that sound? Introspectively, it seems that,
though you don’t actually hear a bell ringing, you can summon a “mental
image” of it that has the same tonal quality as the bell and compare it at
the crucial moment to the sounds of the church bells near the end of the
overture. (You can summon it earlier, too, if not as vividly, and note its
absence from the music.) Now the question is, where do mental sounds
(or visual images, or memories of smells) reside? No one supposes that
there are tiny bell sounds in your head when you remember the sound of
a bell. The sounds are only “in your mind.” Wherever this is, it doesn’t
seem to be in your brain.
Once you get this picture of the relation between mind and brain, it
seems to account for many things. I’ve focused on remembering the sound
of a bell, but it also seems to account for perceiving the sound as a bell
sound in the first place. The bell rings, but I also experience it ringing.
Either event could occur without the other. (The bell could ring when I’m
not present; I could hallucinate the ringing of a bell.) So the experience is
not the same as the ring. In fact, the experience of the ring is really closer
than the physical ringing to what I mean by the word or concept “ring.”
Physics teaches us all sorts of things about metal, air, and vibration, but the
experience of a ringing doesn’t ever seem to emerge from the physics. We
might once have thought that the ringing occurs when the bell is struck,
but we now know that it occurs in our minds after the vibrations from the
bell reach our minds. As philosophers say, vibration is a primary quality
whereas ringing is a secondary quality.
Philosophers use the word quale to describe the “ringyness” of the expe-
rience of a bell, the redness of the experience of red, the embarrassingness
of an experience of embarrassment, and so forth. Qualia are important
for two reasons. First, they seem to be crucially involved in all perceptual
events. We can tell red things from green things because one evokes a red
quale and the other a green one. Without that distinction we assume we
couldn’t tell them apart, and indeed color-blind people don’t distinguish
the quale of red from the quale of green. Second, qualia seem utterly un-
physical. Introspectively they seem to exist on a different plane from the
objects that evoke them, but they also seem to fill a functional role that
physical entities just could not fill. Suppose that perceiving or remem-
bering a bell sound did cause little rings in your head. Wouldn’t that be
pointless? Wouldn’t we still need a further perceptual process to classify
the miniature events in our heads as ringings of bells, scents of ripe apples,
or embarrassing scenes?
So far I have focused on perception, but we get equally strong intuitions
when we look at thought and action. It seems introspectively as if we
act after reasoning, deciding, and willing. These processes differ from
physical processes in crucial respects. Physical processes are governed by
causal laws, whereas minds have reasons for what they do. A causal law
enables one to infer, from the state of a system in one region of space-time,
the states at other regions, or at least a probability distribution over those
states. The “state” of a system is defined as the values of certain numerical
variables, such as position, velocity, mass, charge, heat, pressure, and so
forth—primary qualities. We often focus for philosophical purposes on
the case of knowing a complete description of a system at a particular
time and inferring the states at later times, but this is just one of many
possible inference patterns. All of them, however, involve the inference of
a description of the physical state of the system at one point in space and
time from a description of its state at other points. By contrast, the reason
for the action of a person might be to avoid a certain state. A soldier
might fall to the ground to avoid getting shot. People are not immune to
physical laws; a soldier who gets shot falls for the same reason a rock
does. But people seem to transcend them.
This idea of physical laws is relatively new, dating from the seventeenth
century. Before that, no one would have noticed a rigid distinction between
the way physical systems work and the way minds work because everyone
assumed that the physical world was permeated by mental phenomena.
But as the universe came to seem mechanical, the striking differences
between the way it works and the way our minds work became more
obvious. Descartes was the first to draw a line around the mind and
put all mental phenomena inside that boundary, all physical phenomena
outside it.
Nowhere is the contrast between cause and reason more obvious than
in the phenomenon of free will. When you have to make a decision about
what to do, you take it for granted that you have a real choice to make
among alternative actions. You base your choice on what you expect to
happen given each action. The choice can be difficult if you are not sure
what you want, or if there is a conflict between different choice criteria.
When the conflict is between principle and gain, it can be quite painful. But
you never feel in conflict in the same way with the principle of causality,
and that makes it hard to believe that it is a physical brain making the
decision. Surely if the decision-making process were just another link in a
chain of physical events it would feel different. In that case the outcome
would be entirely governed by physical laws, and it would simply happen.
It is hard to imagine what that would feel like, but two scenarios come
to mind: either you would not feel free at all, or occasionally you would
choose one course of action and then find yourself, coerced by physics,
carrying out a different one. Neither scenario obtains: we often feel free
to choose, and we do choose, and then go on from there.
Arguments like these make dualism look like a very safe bet, and for
hundreds of years it was taken for granted by almost everyone. Even those
who found it doubtful often doubted the materialist side of the inequality,
and conjectured that mind was actually more pervasive than it appears.
It is only in the last century (the twentieth) that evidence has swung the
other way. It now seems that mere matter is more potent than we thought
possible. There are two main strands of inquiry that have brought us to
this point. One is the burgeoning field of neuroscience, which has given us
greater and greater knowledge of what brains actually do. The other is the
field of computer science, which has taught us what machines can do. The
two converge in the field of cognitive science, which studies computational
models of brains and minds.
Neither of these new sciences has solved the problems it studies, or
even posed them in a way that everyone agrees with. Nonetheless, they
have progressed to the point of demonstrating that the dualist picture is
seriously flawed. Neuroscience shows that brains apparently don’t con-
nect with minds; computer science has shown that perception and choice
apparently don’t require minds. They also point to a new vision of how
brains work in which the brain is thought of as a kind of computer.
Let’s look at these trends in more detail, starting with the brain. The
brain contains a large number (10^11) of cells called neurons that
apparently do all its work.1 A neuron, like other cells, maintains different
concentrations of chemicals on each side of the membrane that surrounds
it. Because many of these chemicals are electrically charged ions, the result
is a voltage difference across the membrane. The voltage inside the cell
is about 60 millivolts below the voltage outside. When stimulated in the
right way, the membrane can become depolarized, that is, lose its voltage
difference by opening up pores in the membrane and allowing ions to
flow across. In fact, the voltage briefly swings in the opposite direction, so
that the inside voltage becomes 40 millivolts above the outside voltage.
When that happens, neighboring areas of the membrane depolarize as
well. This causes the next area to depolarize, and so forth, so that a wave
of depolarization passes along the membrane. Parts of the cell are elon-
gated (sometimes for many centimeters), and the wave can travel along
such an elongated branch until it reaches the end. Behind the wave, the cell
expends energy to pump ions back across the membrane and reestablish
the voltage difference.
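The cycle just described (a resting voltage difference, depolarization past a threshold, a brief overshoot, recovery by pumping) is easy to caricature in code. The sketch below is a minimal leaky integrate-and-fire model of my own, not anything from the book, and its parameters are illustrative rather than physiological:

```python
# A minimal leaky integrate-and-fire caricature of the cycle described
# above. My illustration; all parameters are made up, not physiological.
dt = 0.1           # time step, ms
v_rest = -60.0     # resting potential, mV (the ~60 mV difference in the text)
v_thresh = -50.0   # depolarization threshold, mV
tau = 10.0         # membrane time constant, ms

v = v_rest
spike_times = []
for step in range(10000):
    current = 1.5 if 2000 <= step < 8000 else 0.0   # injected input, arbitrary units
    # The leak pulls the voltage back toward rest; input pushes it up.
    v += dt * (-(v - v_rest) / tau + current)
    if v >= v_thresh:
        spike_times.append(step * dt)   # the brief overshoot would occur here
        v = v_rest                      # pumps reestablish the voltage difference

print(f"{len(spike_times)} spikes"
      + (f"; first at {spike_times[0]:.1f} ms" if spike_times else ""))
```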
When the depolarization wave reaches the end of a branch, it can cause
a new wave to be propagated to a neighboring cell. That’s because the
branches of neurons often end by touching the branches of neighboring
neurons. Actually, they don’t quite touch; there is a gap of about one
billionth of a meter (Churchland 1986). The point where two neurons
come into near contact is called a synapse. When a depolarization wave
hits a synapse, it causes chemicals called neurotransmitters to be emitted,
which cross the gap to the next neuron and stimulate its membrane. In
the simplest case one may visualize the gap as a relay station: the signal
jumps the gap and continues down the axon of the next neuron. When
a neuron starts a depolarization wave, it is said to fire. Many neurons
have one long branch called the axon that transmits signals, and several
shorter ones called dendrites that receive them. The axon of one neuron
will make contact at several points on the dendrites of the next neu-
ron. (A neuron may have more than one axon, and an axon may make
contact on the dendrites of more than one neuron.) A depolarization
[Figure 1.1: The naïve dualist picture, showing “The Mind” connected to the brain through relay stations.]
Lest figure 1.1 be thought of as a straw man, in figure 1.2 I have re-
produced a figure endorsed by Sir John Eccles, one of the few unabashed
dualists to be found among twentieth-century neurophysiologists (Eccles
1970, figure 36, detail). He divides the world into the material domain
(“World 1”), the mental domain (“World 2”), and the cultural domain
(“World 3”), which I have omitted from the figure. The brain is mostly
in World 1, but it makes contact with World 2 through a part called the
“liaison brain.” The liaison brain is where Eccles supposes the causality
gap lies.
[Figure 1.2: From Eccles 1970, p. 167, figure 36 (detail). The brain (World 1), with its memory stores (World 3b), makes contact with the mental domain (World 2) through the liaison brain; afferent and efferent pathways connect it to the body, also in World 1.]

Unfortunately for this dualist model, the behavior of neurons doesn’t fit it. For one thing, there are few places at which data are simply transmitted. Usually a neuron fires after a complex series of transmissions are received from the neurons whose axons connect to it. The signals coming out of a group of neurons are not copies of the signals coming in. They
are, however, a function of the signals coming in. That is, if there is a
nonphysical “extra” ingredient influencing the output of the neurons, its
effects must be very slight. As far as we can tell, any given input always
results in essentially the same output.
We have to be careful here about exactly how much to claim. Neurons’
behavior changes over time, as they must if they are to be able to learn.
[Figure 1.3: Graded action potentials along dendritic membrane. Voltage (roughly −70 mV to +40 mV) is plotted against position along the membrane, with two axons, a dendrite, and the cell body labeled.]
at the data represented by the inputs to that module under the proposal,
look at the data encoded by the outputs, and ask whether the output is
an interesting function of the input, in the sense that one could see why
an organism would want to compute that function. If the answer is yes,
then that is evidence that the code is real. Such codes are now found
routinely. For example, in the visual system of the brain there are arrays
of cells whose inputs represent brightness values at all points of the visual
field and whose outputs represent the degree to which there is a vertical
brightness edge at each point. Such an edge is defined as a brightness
profile that changes value sharply along a horizontal line, from light to
dark or vice versa. Other arrays of cells are sensitive to edges with other
orientations.
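To make “vertical brightness edge” concrete, here is a bare-bones detector of my own devising (not a model of any particular cell array) that computes, at each point, how sharply brightness changes along a horizontal line:

```python
import numpy as np

def vertical_edges(brightness):
    """Edge strength at each point: how sharply the 2-D brightness array
    changes along a horizontal line (light to dark or vice versa)."""
    out = np.zeros(brightness.shape, dtype=float)
    out[:, 1:-1] = brightness[:, 2:] - brightness[:, :-2]   # right minus left
    return np.abs(out)

image = np.array([[10, 10, 10, 90, 90, 90]] * 4)   # a light/dark boundary
print(vertical_edges(image)[0])         # peaks where the brightness jumps
# If "the sun goes behind a cloud," all brightnesses shrink, but the edge
# stays in the same place:
print(vertical_edges(image * 0.5)[0])   # weaker response, same location
```

Arrays tuned to other orientations would simply difference along other directions.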
Finding edge detectors like this is exciting because there are independent
theories of how information might be extracted from the visual field that
suggest that finding edges will often be useful. (For example, if the sun
goes behind a cloud, all the brightnesses become smaller, but many of
the edges stay in the same place.) But what I want to call attention to is
that the edges are being found before the signals “reach the mind.” Edges
are not something perceived qualitatively, or at least not exclusively. Here
we find edges being found in a computational sense that is essentially
independent of mind.
At this point we have been led to notice the importance of the second
major intellectual strand in the story, namely, the science of computa-
tion. We usually use the phrase “computer science” to refer to it, but
that doesn’t mean it’s about laptops and mainframes. It’s about physical
embodiments of computational processes, wherever we find them, and it
appears that one place we find them is in groups of neurons.
Let’s turn our attention from neurons for a second and think in terms of
artificial systems. Suppose we build an artificial ear. It takes sound waves,
analyzes them for different frequencies, and prints out the word “bell”
if the frequency analysis matches the profile for a bell, and “not a bell”
otherwise. I don’t mean to suggest that this would be easy to do; in fact,
it’s quite difficult to produce an artificial ear that could discriminate as
finely as a person’s. What I want to call attention to is that in performing
the discrimination the artificial ear would not actually experience ringing
or the absence of ringing; it would not experience anything. If you doubt
that, let’s suppose that I can open it up and show you exactly where the
wires go, and exactly how the software is written. Here a set of tuning
forks vibrate sympathetically to different frequencies; here an analog-to-
digital converter converts the amplitude of each vibration to a numerical
quantity; there a computer program matches the profile of amplitudes to
a stored set of profiles. There’s no experience anywhere, nor would we
expect any.
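Here is a sketch of such an artificial ear, written for illustration only: the tuning forks become a Fourier transform, and the sample rate and stored bell profile are invented. Inspecting it makes the point vivid, since every step is plain arithmetic, and there is nothing in it that experiences the ringing:

```python
import numpy as np

RATE = 8000   # samples per second (an assumption for the example)

def spectrum_profile(signal, n_bands=16):
    """Analyze a sound into coarse frequency bands, loudness normalized."""
    magnitudes = np.abs(np.fft.rfft(signal))
    bands = np.array_split(magnitudes, n_bands)
    profile = np.array([band.sum() for band in bands])
    return profile / profile.sum()

def classify(signal, bell_profile, threshold=0.1):
    # Match the profile of amplitudes against the stored bell profile.
    distance = np.abs(spectrum_profile(signal) - bell_profile).sum()
    return "bell" if distance < threshold else "not a bell"

t = np.arange(RATE) / RATE
bell = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t)
stored = spectrum_profile(bell)   # profile taken from a known bell sound
print(classify(bell, stored))                            # bell
print(classify(np.sin(2 * np.pi * 220 * t), stored))     # not a bell
```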
Hence it should give us pause if the structures in the brain work in
similar ways. And in fact they do. The ear contains a spiraling tube, the
cochlea, different parts of whose membrane vibrate in resonance with
different frequencies. These vibrations cause receptor cells to send trains
of spikes of different frequencies, the frequency of a spike train encoding
the amplitude of a particular sound frequency. The overuse of the word
“frequency” here is confusing, but also illuminating. One physical quan-
tity, the rate at which a neuron fires, is being used to encode a completely
different quantity, the magnitude of a certain frequency in the spectrum of
a sound, just as voltages are used in digital computers to represent entities
above, the gap might be much smaller than depicted in figures 1.1 and 1.2,
but it would have to be there, and the link across it would be nonphysical.
It is possible that we will encounter such a linkage in the brain, but
almost no one expects to. Eccles (1973, p. 216) proposes that his “liaison
brain” is located in the left hemisphere, because the speech center of most
people is located in the left hemisphere. Needless to say, despite intense
research activity, no such linkage has appeared. Of course, if it existed it
would be very hard to find. The network of neurons in the brain is an
intricate tangle, and there are large sections as yet unexplored. Almost
all experimentation on brains is done on nonhuman animals. Probing a
person’s brain is allowed only if the probing has no bad or permanent
effects. The failure to find a liaison brain in a nonhuman brain might
simply indicate that such brains are not conscious. On the other hand, if
an animal is conscious, it might be considered just as unethical to exper-
iment on its brain as on one of ours. So we may be eternally barred from
the decisive experiment. For all these reasons it will probably always be
an option to believe that dualistic gaps exist somewhere, but cannot be
observed. However, failing to find the causal chain from brain to mind
and back is not what the dualist must worry about. The real threat is
that neuroscientists will find another, physical chain of events between
the tasting of the wine and the uttering of the words.
Suppose we open up the brain of a wine taster and trace out exactly
all the neural pathways involved in recognizing and appreciating a sip
of wine. Let’s suppose that we have a complete neuroscientific, compu-
tational explanation of why she utters “magnifique!” instead of “sacre
bleu!” at a key moment. Even if there is an undetected dualist gap, there
is nothing for it to do. Nothing that occurs in the gap could affect what
she says, because we have (we are imagining) a complete explanation of
what she says.
One further option for the dualist is to say that experience is a non-
physical event-series that accompanies and mirrors the physical one. It
plays no causal role, it just happens. This position is called epiphenom-
enalism. This is a possibility, but an unappealing one. For one thing, it
seems like an odd way for the universe to be. Why is it that the sort
of arrangement of molecules that we find in the brain is accompanied
by this extra set of events? Why should the experiences mirror what the
brain does so closely? Keep in mind that the dualist holds that there is
no physical explanation of this link, so it is difficult to point to physical
properties of the molecules in the brain that would have anything to do
with it. The interior of the sun consists of molecules that move around
in complex ways. Are these motions accompanied by experiences? When
an ice cube melts, its molecules speed up in reaction to the heat of the
environment. Does the ice cube feel heat? It can’t tell us that it does, but
then you can’t tell us either. You may believe that your utterance of the
words “It’s hot in here” has something to do with your experience of
heat, but it actually depends on various neurophysiological events that
are no different in principle from what happens to the ice cube. The ex-
perience of heat is something else that happens, on the side. Difficulties
such as these make epiphenomenalism almost useless as a framework for
understanding consciousness, and I will have little to say about it.
Most scientists and philosophers find the problems with dualism in-
surmountable. The question is what to replace it with. A solution to the
problem of consciousness, or the mind-body problem, would be a purely
physical mechanism in the brain whose behavior we could identify with
having experience. This is a tall order. Suppose we filled it, by finding
that mechanism. As we peered at it, we could say with certainty that a
certain set of events was the experience of red, that another set was a kind
of pleasure, another an excruciating pain. And yet the events would not
be fundamentally different from the events we have already observed in
brains.
Furthermore, if the brain really is just an organic information-
processing system, then the fact that the events occur in neurons would
just be a detail. We would expect that if we replaced some or all of the neu-
rons with equivalent artificial systems, the experiences wouldn’t change.
This seems implausible at first. We think of living tissue as being intrin-
sically sensitive in ways that silicon and wire could never be. But that
intuition is entirely dualistic; we picture living tissue as “exuding” ex-
perience in some gaseous manner that we now see isn’t right. At a fine
enough resolution, the events going on in cells are perfectly mechanical.
The wetness disappears, as it were, and we see little molecular machines
pulling themselves along molecular ropes, powered by molecular springs.
At the level of neurons, these little machines process information, and
they could process the information just as well if they were built using
different molecules.
This idea seems so unpalatable to some that they refuse to speculate any
further. Consciousness is evidently so mysterious that we will never un-
derstand it. We can’t imagine any way to build a bridge between physics
and mental experience, so we might as well give up. (Colin McGinn is
the philosopher most closely associated with this position; see McGinn
1991.) This is an odd position to take when things are just starting
to get interesting. In addition, one can argue that it is irresponsible to
leave key questions about the human mind dangling when we might clear
them up.
Modern culture is in an awkward spot when it comes to the mind-body
problem. Scientists and artists are well aware that dualism has failed,
but they have no idea what to replace it with. Meanwhile, almost ev-
eryone else, including most political and religious leaders, takes dualism
for granted. The result is that the intellectual elite can take comfort in
their superiority over ordinary people and their beliefs, but not in much
else. Is this a state of affairs that can last indefinitely without harmful
consequences?
I realize that there is a cynical view that people have always accepted
delusions and always will. If most people believe in astrology, there can’t
be any additional harm in their having incoherent beliefs about what goes
on inside their heads. I suppose that if you view the main purpose of the
human race as the consumption of products advertised on TV, then their
delusions are not relevant. I prefer to think that, at the very least, humans
ought to have a chance at the dignity that comes from understanding and
accepting their world. Our civilization ought to be able to arrive at a
framework in which we appreciate human value without delusions.
The field of consciousness studies has been quite busy lately. There seem
to be two major camps on the mind-body problem: those who believe that
we already have the tools we need to explain the mind, and those who
believe that we don’t and perhaps never will. McGinn is in the pessimistic
camp, as are Nagel (1975) and others. I’m an optimist.
vehicle V_genuine is, according to the theory, conscious. The other, implemented
using vehicle V_bogus, is only apparently conscious. When it talks of
its experiences, it’s actually making meaningless sounds, which fool only
those unfamiliar with the theory. Unfortunately, no matter how elegant the
theory is, it won’t supply any actual evidence favoring one vehicle over
the other. By hypothesis, everything observable about the two systems
is explained by the computational process they both instantiate. For in-
stance, if you wonder why E1 is sometimes unconscious of its surround-
ings when deeply involved in composing a tune, whatever explanation you
arrive at will work just fine for E2, because they implement exactly the
same computational processes, except that in the case of E2 you’ll have
to say, “It’s ‘apparently conscious’ of its surroundings most of the time,
except when it’s working on a new tune, when it’s not even apparently
conscious.”
Vehicle theories are thus likely to be a dead end. This is not to say that
the explanation of consciousness may not require new mechanisms. The
point is, though, that if they are required they will still be mechanisms.
That is, they will explain observable events. Phenomenal consciousness is
not a secret mystery that is forever behind a veil. When I taste something
sour, I purse my lips and complain. A theory must explain why things
have tastes, but it must also explain why my lips move in those ways, and
the two explanations had better be linked.
Many critics of computational theories reject the idea that phenomenal
consciousness can be explained by explaining certain behaviors, such as
lip pursing, or utterances such as “Whew, that’s sour.” But even those who
believe that explaining such behavior is insufficient must surely grant that
it is necessary. We can call this the Principle of the Necessity of Behavioral
Explanation: No theory is satisfactory unless it can explain why someone
having certain experiences behaves in certain characteristic ways. Natu-
rally, process theories tend to explain behavior well and experience not
so well, whereas vehicle theories tend to have the opposite problem.
Process theories tend to fall into two groups, called first-order and
higher-order theories. The former are those in which in some contexts
the processing of sensory information is “experience-like” in a way that
allows us to say that in those contexts the processing is experience. Higher-
order theories, on the other hand, say that to be an experience a piece of
conducted in the pure philosophical style will nevertheless bear with me,
in spite of my neglect of some of the problems and issues that philosophers
focus on.
For example, philosophers spend a lot of time arguing about functional-
ism. This term has several meanings. Some people treat it as synonymous
with computationalism, the doctrine that the mind can be explained en-
tirely in terms of computation. Since I’m defending a version of computa-
tionalism, to that extent I’m defending functionalism, too. However, there
are also other meanings assigned to the term, which reduces its utility. One
version may be summarized thus: what mental terms and predicates refer
to is whatever fills certain causal roles in a functional description of the
organism involved. For example, a statement such as “Fred is in pain” is
supposed to mean, “Fred is in state X, where X is a state with the prop-
erties pain is supposed to have in a worked-out functional description of
Fred, e.g., the property of causing Fred to avoid what he believes causes
X.” Actually, stating the full meaning requires replacing “believe” with a
functionally described Y, and so forth.
The purpose of this version of functionalism is to show that, in principle,
mental terms can be defined so that they can be applied to systems without
making any assumptions about what those systems are made of. If pain
can be defined “functionally,” then we won’t be tempted to define it in
terms of particular physical, chemical, or neurological states. So when we
find an alien staggering from its crashed spaceship and hypothesize that
it is in pain, the claim won’t be refutable by observing that it is composed
of silicon instead of carbon.
I am obviously in sympathy with the motivation behind this project.
I agree with its proponents that the being staggering from the space-
ship might be in pain in spite of being totally unlike earthling animals.
The question is whether we gain anything by clarifying the definitions
of terms. We have plenty of clearcut mental states to study, and can
save the borderline cases for later. Suppose one had demanded of Van
Loewenhook and his contemporaries that they provide a similar sort of
definition for the concept of life and its subconcepts, such as respiration
and reproduction. It would have been a complete waste of time, because
what Van Loewenhook wanted to know, and what we are now figuring
out, is how life works. We know there are borderline cases, such as viruses,
but we don’t care exactly where the border lies, because our understand-
ing encompasses both sides. The only progress we have made in defining
“life” is to realize that it doesn’t need to be defined. Similarly, what we
want to know about minds is how they work. My guess is that we will
figure that out, and realize that mental terms are useful and meaningful,
but impossible to define precisely.
In practice people adopt what Dennett (1978a) calls the “intentional
stance” toward creatures that seem to think and feel. That is, they sim-
ply assume that cats, dogs, and babies have beliefs, desires, and feelings
roughly similar to theirs as long as the assumption accounts for their be-
havior better than any other hypothesis can. If there ever are intelligent
robots, people will no doubt adopt the intentional stance toward them,
too, regardless of what philosophers or computer scientists say about the
robots’ true mental states. Unlike Dennett, I don’t think the success of the
intentional stance settles the matter. If a system seems to act intentionally,
we have to explain why it seems that way using evidence besides the fact
that a majority of people agree that it does. People are right when they
suppose babies have mental states and are wrong when they suppose the
stars do.
So I apologize for not spending more time on issues such as the structure
of reductionism, the difference between epistemological and metaphysi-
cal necessity, and the varieties of supervenience. I am sure that much
of what I say could be said (and has been said) more elegantly using
those terms, but I lack the requisite skill and patience. My main use of
the philosophical literature is the various ingenious thought experiments
(“intuition pumps,” in Dennett’s phrase) that philosophers have used in
arguments. These thought experiments tend to have vivid, intuitively com-
pelling consequences; that’s their whole purpose. In addition, the appar-
ent consequences are often completely wrong. I believe in those cases it is
easy to show that they are wrong without appeal to subtle distinctions;
if those familiar with the philosophical intricacies are not satisfied, there
are plenty of other sources where they can find the arguments refuted
in the correct style. In particular, Daniel Dennett (1991), David Rosen-
thal (1986, 1993), Thomas Metzinger (1995a), and William Lycan (1987,
1996) defend positions close to mine in philosophers’ terms, though they
each disagree with me on several points.
2
Artificial Intelligence
Cognitive science is based on the idea that computation is the main thing
going on in the brain. Cognitive scientists create computational models
of the sorts of tasks that brains do, and then test them to see if they
work. This characterization is somewhat vague because there is a wide
range of models and testing strategies. Some cognitive scientists are inter-
ested in discovering exactly which computational mechanisms are used by
human brains. Others are interested primarily in what mechanisms could
carry out a particular task, and only secondarily in whether animal brains
actually use those mechanisms.
This discipline has been around for about half a century. Before that,
psychologists, linguists, neuroscientists, and philosophers asked questions
about the mind, but in different ways. It was the invention of the digi-
tal computer that first opened up the possibility of using computational
models to explain almost everything.
It is possible to approach cognitive science from various points of view,
starting from psychology, neuroscience, linguistics, or philosophy, as pre-
vious authors have done (Jackendoff 1987; Dennett 1991; Churchland
and Sejnowski 1992). My starting point is going to be computer science.
The application of computer science to cognitive science is called artificial
intelligence, or AI. AI has been used as a platform for a more general dis-
cussion before (Minsky 1986; Hofstadter and Dennett 1981), but rarely
in a way that takes philosophical questions seriously.1
One misapprehension is that artificial intelligence has to do with intelli-
gence. When the field started, it tended to focus on “intellectual” activities
such as playing chess or proving theorems. It was assumed that algorithms
for hard problems like these would automatically apply to other areas re-
quiring deep thought, while “lower level” activities like walking or seeing
were assumed to be relatively straightforward applications of control the-
ory. Both of these assumptions have been abandoned. There turns out to
be no “General Problem Solver”2 that can solve a wide range of intel-
lectual problems; and there turns out to be no precise boundary line be-
tween lower-level activities and high-level “thinking.” There are plenty of
boundaries between modules, but no difference in computational mech-
anism between peripheral and central modules, and no clear support for
the notion that as you penetrate deeper into the brain you finally reach
an area in which you find “pure intelligence.”
It would be better, therefore, if AI had a different name, such as cog-
nitive informatics. Since we can’t wave a magic wand and cause such a
terminological revolution, we will keep the old label for the field, but let
me repeat: AI will have a lot to say about what role computation plays in
thinking, but almost nothing to say about intelligence.
People tend to underestimate the difficulty of achieving true machine in-
telligence. Once they’ve seen a few examples, they start drawing schematic
diagrams of the mind, with boxes labeled “perception” and “central ex-
ecutive.” They then imagine how they would carry out these subtasks,
and their introspections tell them it wouldn’t be that hard. It doesn’t take
long for them to convince themselves that intelligence is a simple thing,
just one step beyond Windows 98. The truth is that boxes are much easier
to label than to fill in. After a while, you begin to suspect the boundaries
between the boxes were misdrawn. Fifty years into the AI project, we’ve
become much more humble. No one would presume any longer to draw a
schematic for the mind. On the other hand, we have more concrete ideas
for solving particular tasks.
Some of the topics in this chapter may be familiar to some readers. I
urge them not to skim the chapter too hastily, though, because when I
allude to computational mechanisms later in the book, I’m thinking of
mechanisms such as the ones I describe here. That’s not to say that there
won’t be plenty of new algorithms and data structures discovered in the
future. I’m just not assuming the need for, or predicting the discovery of,
any powerful new ideas that will revolutionize the way we think about
computation in the brain. There is nothing up my sleeve.3
Computer Chess
To get a feel for what artificial intelligence is trying to do, let’s look at
a particular case history, the development of computer programs to play
games such as chess. Playing chess was one of the first tasks to be con-
sidered by researchers, because the chessboard is easy to represent inside
a computer, but winning the game is difficult. The game has the reputa-
tion of requiring brains to play, and very few people ever get good at it.
There is no obvious algorithm for playing chess well, so it appears to be a
good domain for studying more general sorts of reasoning, or it appeared
that way at first. In the 1950s, Allen Newell, Clifford Shaw, and Herbert
Simon wrote some papers (Newell et al. 1958) about a program they de-
signed and partially implemented, which used general-purpose symbolic
structures for representing aspects of chess positions. However, over the
years chess programs have become more and more specialized, so that
now there is no pretense that what they do resembles human thinking, at
least not in any direct way.
Almost all chess programs work along the lines suggested in early
papers by Claude Shannon and Alan Turing (Shannon 1950a, 1950b;
Turing 1953), which build on work in game theory by von Neumann and
Morgenstern (1944). A key feature of chess is that both players know ev-
erything about the current state of play except each other’s plans. In card
games a player is usually ignorant of exactly which cards the opponent
has, which adds a dimension we don’t need to worry about in chess, where
both players can see every piece. We can use this feature to construct a
simple representation of every possible continuation from a chess posi-
tion. Imagine drawing a picture of the current position at the top margin
of an enormous blackboard. Assuming it is your turn to move, you can
then draw all the possible positions that could result from that move, and
join them to the original position by lines. Now below each such position
draw a picture of every position that can be reached by the opponent’s
next move. Continue with your move, and keep going until every possible
position reachable from the original position has been drawn. (This had
better be a really big blackboard.) The process will go on for a long time,
but not forever, because at any position there are only a finite number of
available moves and every chess game must come to an end. The resulting
[Figure 2.1: Game tree. Alternating Player and Opponent levels, with each position labeled 1, 0, or −1.]
include this possibility for completeness, because we’re going to see the
same pattern elsewhere in the tree: at any position where it’s your turn
to move, the position should be labeled 1 if any child is labeled 1; 0 if
no child is labeled 1 but some child is labeled 0; and −1 if every child
is labeled −1. In other words, at every position where it’s your turn to
move, you should label it with the maximum of the labels attached to its
children. At positions where it’s the opponent’s turn to move, you should,
by a similar argument, place a label equal to the minimum of the labels
attached to the children.
If we continue in this way we will eventually label every position. The
original position will be labeled 1 if you can force a win, −1 if your
opponent can, and 0 if neither of you can. This claim may not seem
obvious, so let’s consider in more detail the strategy you can follow if the
label is 1. There must be some child position labeled 1, so pick one and
make the move that gets you there. At this position it is the opponent’s turn
to move, so if it’s labeled 1 then every child position must be labeled 1. So
no matter what the opponent does, you will be in a situation like the one
you started with: you will have a position with at least one child position
labeled 1. You will be able to make the corresponding move, and then
wait for the opponent to pick among unappetizing choices. No matter
what happens, you’ll always have a position labeled 1 and can therefore
always force a situation where the opponent must move from a position
labeled 1. Sooner or later you’ll get to a position one step from a leaf,
where you have a move that wins.
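The labeling procedure can be written out in a few lines. The game below is a stand-in of my own choosing (a pile of stones; each player removes one or two; whoever takes the last stone wins) because, unlike chess, its tree is small enough to enumerate. The max/min rule is exactly the one just described:

```python
# The max/min labeling rule above, run on a toy game whose tree we can
# actually enumerate in full. (My stand-in example; chess applies the
# same rule to an astronomically larger tree.)

def label(stones, your_move):
    if stones == 0:
        # No moves remain: whoever just moved took the last stone and won.
        return -1 if your_move else 1
    children = [label(stones - take, not your_move)
                for take in (1, 2) if take <= stones]
    # Your move: take the maximum child label; opponent's move: the minimum.
    return max(children) if your_move else min(children)

for n in range(1, 8):
    print(n, label(n, your_move=True))   # -1 exactly when n is a multiple of 3
```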
This idea was first made explicit by von Neumann and Morgenstern.
It sounds like a surefire recipe for winning at chess (and any other board
game with complete information about the state of play and no random
element such as dice or roulette wheels). Of course, no person can play
the game this way, because the equipment (an enormous blackboard) is
too unwieldy, and it would take way too long to work out the whole game
tree. But computers can get around these problems, right? They can use
some kind of database to represent the tree of positions, and use their
awesome speed to construct the tree and label all the positions.
No, they can’t. There is no problem representing the board positions.
We can use a byte to represent the state of a square, and 64 bytes to
represent the board.4 It's not hard to compute the set of all possible moves
from a given position.
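To make the sizes concrete, here is one way the 64-byte board might look in
Python. The square numbering and piece codes are invented for illustration;
the text commits only to one byte per square and 64 bytes per board.

    # One byte per square, 64 bytes per board. The piece codes below
    # are hypothetical.
    EMPTY, W_PAWN, B_PAWN = 0, 1, 7      # ...and so on for the other pieces

    board = bytearray(64)                # 64 bytes, squares a1..h8 as 0..63
    board[8:16] = bytes([W_PAWN] * 8)    # white pawns on the second rank
    board[48:56] = bytes([B_PAWN] * 8)   # black pawns on the seventh rank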