Perception, Knowledge, and Belief
FRED DRETSKE
Stanford University
CAMBRIDGE UNIVERSITY PRESS
A catalog record for this book is available from the British Library.
Library of Congress Cataloging in Publication Data
Dretske, Fred I.
Perception, knowledge, and belief: selected essays / Fred
Dretske.
p. cm. (Cambridge studies in philosophy)
ISBN 0-521-77181-1. - ISBN 0-521-77742-9 (pbk.)
1. Knowledge, Theory of. 2. Philosophy of mind. I. Title.
II. Series.
BD161.D73 2000
121'.092 dc21
99-15973
CIP
ISBN 0 521 77181 1 hardback
ISBN 0 521 77742 9 paperback
Contents

Preface   ix

Part One  Knowledge
1  Conclusive Reasons
2  Epistemic Operators   30
3  The Pragmatic Dimension of Knowledge   48
4  The Epistemology of Belief   64
5  Two Conceptions of Knowledge: Rational vs. Reliable Belief   80

Part Two  [...]
6  Simple Seeing   97
7  Conscious Experience   113
8  Differences That Make No Difference   138
9  The Mind's Awareness of Itself   158
10  What Good Is Consciousness?   178

Part Three  [...]
11  [...]   195
12  [...]   208
13  [...]   227
14  [...]   242
15  [...]   259

Index   275
Preface
These fifteen essays span almost thirty years, from 1970 ("Epistemic Operators") to 1999 ("The Mind's Awareness of Itself"). They record a
growing preoccupation with the mind's material constitution. I think of
them as steps, sometimes very halting and uncertain steps, toward philosophical naturalism about the mind.
I began by worrying about very traditional epistemological questions.
What makes a belief that something is F into knowledge that it is F?
What makes an F-like experience a perception of an F? As the concept
of information began to dominate my thinking about epistemology
(circa 1980), I pursued naturalistic answers to these questions without
pausing to ask what makes a belief a belief or an experience an experience. Knowledge was analyzed as information-caused belief. Perception
was identified with (a particular kind of) information-carrying experience. In Knowledge and the Flow of Information (1981) I had argued that
semantic information - what information (news, message) a signal carries - can be understood as an objective, a completely natural, commodity.
It was as acceptable in a materialistic metaphysics as was the statistical
construct of information employed by communications engineers. I
therefore had, or so I thought, a naturalized epistemology. Knowledge
and perception were merely specialized informational states in certain
living organisms. I soon realized, however, that as long as I had no
acceptable (i.e., naturalistic) account of belief, I had no acceptable account of knowledge. Until I could say what an experience was, I had no
materialistic recipe for perception. If perception of an F is an information
(about F)-rich experience, what, exactly, is an experience? If knowledge
is information-caused belief, what, then, is a belief? Until I had answers
to these questions, I had no answers to questions I expected an adequate
[...]
signals that carry it) can do some real causal work in the world. This is,
I believe, a first step in understanding how the mind (the chief consumer
of information) can be causally efficacious in the world and how information can direct its activities. Only by understanding this, I feel, can
one appreciate the role that reasons play in the explanation of behavior.
I return to this theme again in Essay 15 by comparing the way beliefs
explain the behavior of animals to the way money explains the behavior
of vending machines. Essay 12 is a blunt - I'm sure some will find it crude - expression of my naturalistic philosophy of mind.
Part One
Knowledge
1
Conclusive Reasons
[...] (2b) one is conceding that James would, or might, have said what he
did without possessing a stamp collection, and in the light of this concession one cannot go on to insist that, nonetheless, S knows he has a
stamp collection on the basis, simply, of what James said.
Gilbert Harman contrasts two cases, the lottery case and the testimony
case.2 Although S, say, has only one among thousands of tickets in a
lottery and, hence, has an extremely slender chance of winning, we
naturally reject the idea that S could know that he was going to lose on
the basis of a correct probability estimate (well over 99.9%) of his
losing. Even if S correctly predicts that he is going to lose, we would
deny that he knew he was going to lose if the only basis he had for this
belief was the fact that his chances of winning were so slight.3 Harman
compares this case with the situation in which we often seem prepared
to say that S knows that P when he is told that P is the case by some
other person (testimony case). Although probability estimates are not
altogether appropriate here, we do know that people sometimes lie,
sometimes they are honestly mistaken, and so on. There always seems
to be a chance that what a person tells us is not the case, however
sincere or credible he may appear, and the order of magnitude of this
chance seems to be comparable to the chance that we might win in
some appropriate lottery situation. Why, then, are we prepared to say
that we know in the one case but not in the other? Harman has some
revealing remarks to make about these cases, but I mention them only
to bring out their relevance to the present discussion. For I think this
contrast strengthens the view that (2) is normally accepted as a necessary consequence of (1), that when we are unwilling to endorse the
corresponding instantiation of (2) we are unwilling to talk of anyone
knowing that P is the case on the basis of the evidence expressed by R.
In many testimony situations we are, I believe, willing to affirm (2): the
person would not have said it unless it was so. In the lottery case,
however, the connection between R and P expressed by this subjunctive conditional fails to be realized, and it fails no matter how great the
probabilities become. Adjusting the wording of (2) to suit the example
in question4 we have
(2c) If S were going to win the lottery, his chances of winning would not be
1/m (m being the number of tickets sold).
Whatever (finite) value we give to m, we know this is false, since someone whose chances of winning are 1/m will win; and since there is nothing special about S that would require him to have a better chance of winning than anyone else in order to win, we reject (2c) as false. Hence, we reject the idea that S can know he is going to lose on the basis of the fact that his chances of losing are (m - 1)/m.
Alvin Goldman, in developing a causal account of knowledge, constructs a situation in which S is said to know that a nearby mountain (I
will call it M) erupted many years ago. He knows this on the basis of
the presence of solidified lava throughout the countryside surrounding
the mountain.5 According to Goldman, a necessary condition for S's
knowing that M erupted many years ago on the basis of the present
existence and distribution of the lava is that there be a causal connection
between the eruption of the mountain and the present existence and
distribution of the lava. I do not wish to dispute this claim at the
moment since the view I am advancing is even stronger: viz. that a
necessary condition for S to know that M erupted on this basis is that

(2d) The lava would not be here, and distributed in this manner, unless M erupted

is true. (2d) is a stronger claim than that the eruption of M is causally connected with the present existence and distribution of the lava. (2d) requires, in addition, that M's eruption be necessary for the present state of affairs. To illustrate, consider the following embellishment on Goldman's example. Not far from M is another mountain, N. The geology of the area is such that at the point in time at which M erupted something, so to speak, was bound to give; if M had not erupted, N would have. Furthermore, the location of N is such that if it, rather than M, had erupted, the present distribution of lava would have been, in all respects relevant to S's taking it as a reason for believing M erupted, the same. In such circumstances Goldman's necessary condition is satisfied, but mine is not. (2d) is false; it is false that the lava would not be here, and distributed in this fashion, unless M had erupted. For if, contrary to the hypothesis, M had not erupted, N would have, leaving the very same (relevant) traces.

4 The wording of (2) will sometimes have to be adjusted to suit the particular instantiation in question. The chief factors determining this adjustment are the relative temporal locations of R, P, and the time of utterance, and also the causal connections, if any, that are believed to hold between R and P. The particular wording I have given (2) is most appropriate when P is some state of affairs antecedent to (or contemporaneous with) both R and the time of utterance. This, of course, is the result of the fact that (2) is most often used when P is some state of affairs causally responsible for the present condition R. When P is a future state we might express (2) as: R would not be the case unless P were going to happen. For example, he would not have registered unless he were going to vote. I do not wish to preclude the possibility of knowing that something will occur on the basis of present evidence by restricting the wording of (2). The difficulty, of course, is that when P is some future state, the subjunctive relating it to R generally becomes somewhat questionable. We prefer to say, in our more cautious moods, that if he were not planning to vote, he would not have registered (acknowledging, thereby, the fact that contingencies may interfere with the execution of his plans). But in the same cautious moods we prefer to say, not that we know he is going to vote (because he registered), but that we know he plans or intends to vote.

5 "A Causal Theory of Knowing," Journal of Philosophy, June 22, 1967, p. 361.
In such circumstances I do not think we could say that S knew that
M erupted on the basis of the present existence and distribution of lava.
For, by hypothesis, this state of affairs would have obtained whether M
erupted or not and, hence, there is nothing about this state of affairs that
favors one hypothesis (M erupted) over a competing hypothesis (N
erupted). S is still correct in supposing that M did erupt, still correct in
supposing that it was M's eruption that is causally responsible for the
present existence and distribution of lava, but he does not know it was M that erupted - not unless he has some additional grounds that preclude N. If he has such additional grounds, call them Q, then we can
say that he knows that M erupted and he knows this on the basis of R
and Q. In this case, however, the corresponding instantiation of (2) is
also satisfied: R and Q would not be the case unless M erupted. As
things stand, the most that S could know, on the basis simply of the
present existence and distribution of lava, is that either M or N erupted.
(2) permits us to say this much, and no more, about what can be known
on the basis of lava flow.
The case becomes even clearer if we exploit another of Harman's
examples.6 Harold has a ticket in a lottery. The odds against his winning
In "Knowledge, Inference, and Explanation," pp. 168-169. I have adapted the example
somewhat.
There is a way of reading (2e) that makes it sound true viz., if we illicitly smuggle in the
fact that Harold has lost the lottery. That is, (2e') "Harold, having lost the lottery, would
not have an X unless Rockaford had given him one" is true. But this version of (2) makes
R, the reason S has for believing Rockaford gave him an X, not only Harold's possession
of an X but also his having lost the lottery. This, by hypothesis, is not part of S's reason;
hence, not properly included in (2e). (2e) must be read in something like the following
fashion: Harold would not have an X, whatever the outcome of the lottery, unless
Rockaford had given him one. With this reading it is clearly false.
It is difficult to say whether this is a counterexample to Goldman's analysis. I think it
satisfies all the conditions he catalogs as sufficient for knowledge, but this depends on how
strictly Goldman intends the condition that S must be warranted in inferring that P is the
case from R.
vailed on this occasion, circumstances that include such things as card distribution, arrangement of players, and so on, an occurrence of the first
sort (neighbor remains in game) will invariably be followed by one of
the second sort (his receipt of a royal flush). One cannot falsify his claim
by showing that he would not have received a royal flush, despite his
neighbor's remaining in the game, if the card distribution in the deck
had been different from what it in fact was. For his claim was a claim
about the inevitable sequence of events with that distribution of cards.
Statements such as (2), then, even when R and P are expressions for
particular states of affairs, express a general uniformity, but this general
uniformity is not that whenever a state similar to R is the case, then a
state similar to P will also be (or have been) the case. The uniformity in
question concerns the relationship between states similar to R and P
under a fixed set of circumstances. Whenever (a state such as) R in circumstances C, then (a state such as) P, where the circumstances C are defined
in terms of those circumstances that actually prevail on the occasion of
R and P. But does C include all the circumstances that prevail on the
occasion in question or only some of these? Clearly not all the circumstances since this would trivialize every subjunctive conditional of this
sort. Even if we restrict C to only those circumstances logically independent of R and P we still obtain a trivialization. For, to use Goldman's
mountain example (as embellished), C would still include the fact that
N did not erupt (since this is logically independent of both R and P),
and this is obviously one of the circumstances not held fixed when we
say that the lava would not be here unless M erupted. For in asserting
his subjunctive we mean to be asserting something that would be false
in the situation described (N would have erupted if M had not), whereas
if we hold this circumstance (N did not erupt) fixed, the uniformity
between the presence of lava and M's eruption would hold.
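The contrast can be displayed schematically (my formalization; the text states it only in prose). The general implication of (2) is not the unrestricted uniformity on the left but the circumstance-relative one on the right:

\[
\text{not}\quad \forall s\,\big[R(s) \rightarrow P(s)\big]
\qquad\text{but}\qquad
\forall s\,\big[\big(R(s) \wedge C(s)\big) \rightarrow P(s)\big],
\]

where C collects circumstances actually prevailing on the occasion of R and P. As the next paragraph argues, C must be restricted to circumstances logically and causally independent of P, on pain of either trivializing the conditional or holding fixed (as in the two-mountain case) precisely what should be allowed to vary.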
I think that our examples, not to mention an extensive literature on
the subject,9 point the way to a proper interpretation of C. The circumstances that are assumed constant, that are tacitly held fixed, in conditionals such as (2), are those circumstances prevailing on the occasion in
question (the occasion on and between which the particular states R
and P obtain) that are logically and causally independent of the state of
affairs expressed by P.10 When we have a statement in the subjunctive
that (unlike (2)) is counterfactual (the antecedent gives expression to a
state of affairs that does or did not obtain), then C includes those
circumstances prevailing on the occasion that are logically and causally
independent of the state of affairs (or lack of such state) expressed by the
antecedent of the conditional. In our poker game, for example, we can
say that S's statement (I would have got a royal flush if my neighbor had
stayed in the game) fixes that set of circumstances that are logically and
causally independent of his neighbor's staying in the game (i.e., the
antecedent since the statement is counterfactual). Hence, if there is
another player in the game (whose presence or absence affects the cards
dealt to S) who would have dropped out if S's neighbor had not dropped
out, then this person's remaining in the game is not held fixed, not included in C, because it is causally connected to the state of affairs expressed by the antecedent in S's statement. Therefore, we can show S's statement to be false if we can show that such a circumstance prevailed, and it is along these lines that one would surely argue in attempting to show S that he was wrong, wrong in saying that he would have received a royal flush if his neighbor had stayed in the game.

9 I am not proposing a solution to the problem to which Nelson Goodman (Fact, Fiction and Forecast, Chapter I), Roderick Chisholm ("The Contrary-to-Fact Conditional," Mind, 55, 1946), and others addressed themselves in trying to specify the "relevant conditions" associated with counterfactuals. I shall use the notion of "causality" in my treatment, a device that both Goodman and Chisholm would regard as question-begging. I am not, however, attempting to offer an extensionalist analysis of the subjunctive conditional; I am merely trying to get clear in what way such conditionals are stronger than the statement of a causal relationship between R and P and yet (in one sense) weaker than a statement of the universal association between states similar to R and P.

10 This characterization of the circumstances "C" has interesting and, I believe, significant repercussions for subjunctives having the form of (2) in which R expresses some present (or past) state of affairs and P expresses some future state of affairs. Although I lack the space to discuss the point here, I believe an important asymmetry is generated by a shift in the relative temporal locations of R and P. I also believe, however, that this asymmetry is faithfully reflected in the difference between knowing what will happen on the basis of present data and knowing what did happen on the basis of present data. In other words, I feel that an asymmetry in (2), arising from a shift in the relative temporal locations of R and P, helps one to understand the difference we all feel between knowing, on the one hand, what did happen or is happening and, on the other hand, what will happen.
On the other hand, one cannot show that S's statement is false by
showing that, were the cards differently arranged in the remainder of
the deck, he would not have received his royal flush; for the arrangement of cards in the remainder of the deck (unlike the presence or
absence of our other player) is (presumably) independent of S's neighbor's departure from the game. Hence, it is one of the conditions held
fixed, included in C, by S's statement, and we are not allowed to
consider alterations in it in assessing the general implication of S's statement.
Or consider our original thermometer example. Recall, the statement
in question was: "The thermometer would not have read 98.6 unless
the child's temperature was normal." Suppose someone responds, "Oh,
it would have (or might have) if the thermometer was broken." It is
important to understand that one can grant the truth of this response
without abandoning the original assertion; for the original assertion had,
as its general implication, not a statement expressing a uniform relationship between states of affairs similar to the child's temperature (normal
body temperature) and states of affairs similar to the thermometer reading (a reading of 98.6), but, rather, a uniformity between such states
under a fixed set of circumstances. And, if I am right, this fixed set of
circumstances includes the actual state of the thermometer (defective or
accurate); it is one of those circumstances prevailing on the occasion in
question that is causally and logically independent of the child's temperature. Hence, this circumstance cannot be allowed to vary as it is in the
preceding response by the words "if the thermometer was broken." To
determine the truth of the original assertion, we must suppose that the
thermometer is accurate (or defective) whatever the actual condition is. If,
therefore, the thermometer was not broken or otherwise defective on
that occasion, then the suggestion that it would (or might) have read the
same despite a feverish child if it were broken or defective is, although
quite true, irrelevant to the truth of the statement: "It would not have
read 98.6 unless the child's temperature was normal."
One final important feature of (2). I have said that, generally speaking,
the placeholders "R" and "P" represent expressions designating specific
states of affairs or conditions. When this is so, (2) still has a general
implication, but the general implication, expressing a uniform relationship between states of affairs similar to R and P, has its scope restricted to
situations in which the circumstances C (as specified earlier) obtain.
Since we are talking about specific states of affairs in most instantiations
of (2), it becomes extremely important to observe the sorts of referring
expressions embodied within both "R" and "P." For example, when I
say, "John would not have said it was raining unless it was raining," I
am talking about John and about a particular utterance of his. Someone else
might have said this without its being true; John may have said something
else without its being true. Nonetheless, John would not have said it was
raining unless it was. An incurable liar about most things, John has a
pathological devotion to accuracy on matters concerning the weather.
In such a case, although John is, admittedly, a most unreliable informant
on most matters, we can say that he would not have said it was raining
unless it was so. This is only to say that the referring expressions to be
found in "R" and "P" help to define the scope of the implied generalization. Recall, the implied generalization was about states of affairs similar
to (the particular states) R and P. Similar in what respect? The sorts of
referring expressions to be found within "R" and "P" help us to answer
this question. In the case of John, the general implication involved not
a person's saying something (under circumstances C), not John's saying
something (under circumstances C), but John's saying something about
the weather (under circumstances C).
[...]
(4) R would be the case even though not-P were the case.
(4) is the contrary of (2), not its contradictory, since both (2) and (4) may
turn out false.11 For example, suppose S asserts,
(2g) I would have won the lottery if I had bought two tickets (instead of only
one).
We may deny the truth of this contention without committing ourselves
to the truth of
(4g) You would have lost even if you had bought two tickets.
All that is intended in denying (2g) is that the purchase of two tickets is
connected with winning in the alleged manner, that the purchase of two
tickets would have assured him of winning. Its failing to be connected
in the manner alleged is, however, quite consistent with his winning with
two tickets. What we commit ourselves to in denying (2g) is
(3g) You might have lost even with two tickets.
(3g) asserts what (2g) denies; viz. that even with two tickets it is still a
matter of chance; the possibility of losing is not eliminated by holding
two tickets instead of one.
As a matter of common practice, of course, we often employ something similar to (4) in denying (2). This is understandable enough since
the truth of (4) does entail the falsity of (2). The point I am driving at,
however, is that we need not affirm anything as strong as (4) in denying
(2); all we are required to affirm is that R and not-P might both be the
case or that, even though R is given, P might not be the case. That is to
say, the proper expression for the negation of (2) is (3); and if we
understand (3) as affirming "Given R, ◊~P" (alternatively, ◊(R . ~P)), then we are justified in representing (2) as "Given R, ~◊~P" (alternatively, ~◊(R . ~P)). If someone says, "James would not have
come without an invitation," we can deny this without supposing that
James would have come without an invitation. For suppose we know
that if James had not received an invitation, he would have flipped a
coin to decide whether to go or not. In such a case, it is not true that
he would not have come without an invitation, but neither is it true
that he would have come without an invitation. The fact is that he might
have come (depending on the outcome of the toss) without an invitation.
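In the possibility notation just introduced, the relations among (2), (3), and (4) can be tabulated as follows (a summary display of the text, not an additional claim):

\begin{align*}
(2)\quad & \neg\Diamond(R \,.\, \neg P) && \text{(R would not be the case unless P)}\\
(3)\quad & \Diamond(R \,.\, \neg P) && \text{(R and not-P might both be the case)}\\
(4)\quad & \text{R would be the case even though not-P} && \text{(entails (3), but is stronger)}
\end{align*}

(3) is thus the contradictory of (2) - denying one is affirming the other - while (4) is only its contrary: in the coin-flipping example, (2) and (4) are both false and (3) alone is true.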
Before proceeding there is an important ambiguity that must be
eliminated. There is a use of the word "might" (and related modal
terms) that gives expression to the speaker's epistemic state in relation to
some situation. It is a use of the word "might" that is naturally followed
by a parenthetical "for all I know." For example, in saying, "He might
win the election," the speaker is most naturally understood as expressing
the fact that he does not know whether he will win the election or not,
that he has no (or there are no) very convincing grounds for supposing
he will lose. This use of the word can properly be deployed even when
the state of affairs in question is physically or logically impossible. S, for
instance, may concede the premises of our valid argument but, nonetheless, ignorant of its validity, insist that the conclusion might (for all he
knows) be false even though the premises are true.
Contrasted with this epistemic use of the modal terms is what we
might call an objective sense of these terms, a sense of the term "could"
(for example) in which if R entails P then, independent of what S
knows about the logical relationship between R and P, it is false to say
that R and not-P could (or might) both be the case. Moreover, if we
accept the results of modern physics, then in this objective sense of the
term S's statement that there are objects that can travel faster than the
speed of light is false, and it is false even though, for all he knows, there
are objects that can. In this objective sense one is making a remark about
the various possibilities for the occurrence, or joint occurrence, of events
or the coexistence of states of affairs, and in this sense ignorance of what
is the case is no guarantee that one's statements about what might or
could be the case are true. The possibilities must actually be as one alleges.
When S (knowing that James had an invitation to come) asserts that
James, being the sort of fellow he is, might (even) have come without
an invitation, he is making a remark about James and what James is
inclined to do or capable of doing. He is obviously not registering some
fact about his ignorance of whether or not James possesses an invitation.
The modal term appearing in (3) is meant to be understood in the
objective sense. (3) is meant to be a statement about the possibilities for
the joint realization of two states of affairs (R and not-P) independent
of what the speaker happens to know about the actual realization of P
(R being given). Drawing from our discussion in the preceding section,
we can say that if (2) is true, if R would not be the case unless P were
the case, then in these circumstances (specified earlier) P is a state of affairs
[...]

12 Recall footnote 4 concerning the particular wording of (2); I intend those remarks to apply to this definition of "conclusive reasons."
13 It is this stronger connection that blocks the sort of counterexample that can be generated to justified-true-belief analyses of knowledge. Gettier's (and Lehrer's) examples, for instance, are directed at those analyses that construe knowledge in terms of a degree of justification that is compatible with being justified in believing something false (both Gettier and Lehrer mention this feature at the beginning of their discussion). The counterexamples are then constructed by allowing S to believe that P (which is false) with the appropriate degree of justification, letting P entail Q (which is true), and letting S believe that Q on the basis of its logical relationship to P. We have, then, a case where S truly believes that Q with the appropriate degree of justification (this degree of justification is allegedly preserved through the entailment between P and Q), but a case where S does not know that Q (since his means of arriving at it were so clearly defective). On the present analysis, of course, the required connection between S's evidence and P is strong enough to preclude P's being false. One cannot have conclusive reasons for believing something that is false. Hence, this sort of counterexample cannot be generated. Part of the motivation for the present analysis is the conviction (supported by Gettier-like examples) that knowledge, if it embodies an evidential relation at all, must embody a strong enough one to eliminate the possibility of mistake. See Edmund Gettier's "Is Justified True Belief Knowledge?" Analysis, 23.6, June 1963, and Keith Lehrer, "Knowledge, Truth and Evidence," Analysis, 25.5, April 1965. I should also mention here that these same sorts of considerations seemed to move Brian Skyrms toward a similar analysis; see especially pp. 385-386 in his "The Explication of 'X Knows That P'," The Journal of Philosophy, June 22, 1967.
[...]
to him that he finds difficult to describe. Still, if the way the thing looks
to S is such that it would not look that way unless it had the property
Q, then its looking that way to S is a conclusive reason for S's believing
that it has the property Q; and if S believes that it is Q on this basis, then
he has, in the way the thing looks to him, a conclusive reason for
believing it Q.
Also, there are a number of things that people commonly profess to
know (Sacramento is the capital of California, the earth is roughly
spherical) for which there is no definite piece of evidence, no single state
of affairs or easily specifiable set of such states, that even approximates a
conclusive reason. In such cases, although we can cite no single piece of
data that is clinching and, hence, are at a loss for conclusive reasons
when asked to give reasons (or when asked "How do you know?") we,
nonetheless, often enough have conclusive reasons in a vast spectrum of
experiences that are too diverse to admit of convenient citation. Countless experiences converge, so to speak, on the truth of a given proposition, and this variety of experience may be such that although one may
have had any one of these experiences without the proposition in question being true, one would not have had all of them unless what one
consequently believes was true. The fallibility of source A and the fallibility of source B do not automatically entail that, when A and B agree about P's being the case, P might nonetheless still be false.
For it may be that A and B would not both have indicated that P was the
case unless P was the case, although neither A nor B, taken by themselves, provide conclusive reasons for P. For example, although any
single newspaper account may be in error on a particular point, several
independent versions (wire services, of course, tend to eliminate this
independence) may be enough to say that we know that something is
so on the basis of the newspaper accounts. All of them would not have been
in such close agreement unless their account was substantially correct.14
Finally, I do not wish to suggest by my use of the word "reason" that when S has conclusive reasons for believing P, S has reasoned his way to the conclusion that P is the case from premises involving R, or that S has consciously used R as a reason in arriving at the belief that P. I am inclined to think (but I shall not now argue it) that when one knows that P, on whatever basis this might be, little or no reasoning is involved. I would prefer to describe it as follows: sometimes a person's conviction that P is the case can be traced to a state of affairs (or cluster of situations) that satisfies the three conditions defining the possession of conclusive reasons. When it can be so traced, then he knows; when it cannot be so traced, then we say he does not know, although he may be right about P's being the case. Of course, his belief may be traceable to such a source without our being able to trace it. In such a case we are mistaken in saying that he does not know.

14 The fact that all newspapers sometimes print things that are false does not mean that we cannot know that something is true on the basis of a single newspaper account. The relevant question to ask (as in the case of a person's testimony - see Section 1) is not whether newspapers sometimes print false stories, not even whether this newspaper sometimes prints false stories, but whether this newspaper would have printed this story if it were not true. The Midville Weekly Gazette's story about dope addiction on the campus may not correspond with the facts, but would The Times have printed this story about the president's visit to Moscow if it were not true?
Turning now to the question of whether having conclusive reasons
to believe, as defined by (A)-(C), constitutes a sufficient condition for
knowledge, I shall mention and briefly respond to what I consider to be
the most serious objections to this proposal.
There is, first, a tendency to conflate knowing that P with knowing
that one knows that P. If this is done then conditions (A)-(C) will
immediately appear insufficient since they do not describe S as knowing
or having any basis for believing that R, his basis for believing P,
constitutes an adequate basis, much less a conclusive basis, for believing
P. Even if one does not go this far, there is still a tendency to say that if
S knows that P, then S must at least believe that he knows that P is the
case. If one adopts this view then, once again, conditions (A)-(C) appear
inadequate since they do not describe (nor do they entail) that S believes
he knows that P is the case. I see no reason, however, to accept either
of these claims. We naturally expect of one who knows that P that he
believe that he knows, just as we expect of someone who is riding a
bicycle that he believe he is riding one, but in neither case is the belief
a necessary accompaniment. The confusion is partially fostered, I believe,
by a failure to distinguish between what is implied in knowing that P
and what is implied (in some sense) by someone's saying he knows that
P. Consider, however, cases in which we freely ascribe knowledge to
agents in which it seems quite implausible to assign the level of conceptual sophistication requisite to their believing something about knowledge, believing something about their epistemic relation to the state of
affairs in question. A dog may know that his master is in the room, and
I (at least) want to say that he can know this in a straightforward sense
[...]
is the case. I believe that this objection trades on the very confusion we
have just discussed; that is, it mistakenly supposes that if S does not
know that R is conclusive for P (has no legitimate basis for believing
this), then S does not know that P is the case (has no legitimate basis for
believing this). Or, what I think amounts to the very same thing, it
fallaciously concludes that S (given his basis for belief) might be wrong
about P from the fact that S (given his basis for belief) might be wrong
about R's being conclusive for P. Or, to put it in still another way, it
incorrectly supposes that if it is, either wholly or in part, accidental that
S is right about R's being conclusive for P, then it is also, either wholly
or in part, accidental that he is right about P's being the case. Such
inferences are fallacious and, I believe, they are fallacious in the same
way as the following two examples: (a) Concluding that it was sheer
luck (chance), a mere accident, that the declarer made his bid of seven
spades because it was sheer luck (chance), a mere accident, that he was
dealt thirteen spades; (b) Concluding that the window was broken
accidentally because the man who threw the brick through it came by
the belief that bricks break windows in an accidental (silly, unreasonable,
or what have you) way.
Sometimes the stage is set for a nonaccident in a purely accidental
way. In the preceding case it is accidental that S knows that P on the
basis of R, but this does not make it accidental that he is right about P
for he believes P on the basis of R, and R simply would not be the case
unless P were the case. Given R, it is not at all accidental that he is right
about P. What is accidental is that he was correct in believing that R
was a conclusive reason for P, but all this shows is that he does not
know that R is conclusive for P, does not know that he knows that P.
And with this much I am in full agreement.18
18 In speaking of "accidentality" in this connection I have in mind Peter Unger's analysis of knowledge in terms of its not being at all accidental that the person is right (see his "An Analysis of Factual Knowledge," Journal of Philosophy, March 21, 1968). That is, I want to claim that any S satisfying conditions (A)-(C) is a person of whom it is true to say that it is not at all accidental that he is right about P's being the case, although it may be accidental that this is no accident (it may be accidental that he has conclusive reasons, that he has reasons in virtue of which it is not at all accidental that he is right about P).

Skeptical arguments have traditionally relied on the fact that S, in purporting to know that P on the basis of R, was conspicuously unable to justify the quality of his reasons, was hopelessly incapable of providing satisfactory documentation for the truth of (A). The conclusion fallaciously drawn from this was that S did not know that P was true (simply) on the basis of R. Clearly, however, all that follows from the fact that S
has little or no grounds for thinking (A) true is that he lacks satisfactory
grounds for thinking he knows that P is true. It does not follow that he
does not know that P is true. Knowing that P is the case on the basis of
R involves knowing that R is the case (and believing P on that basis)
when (A) is true. It is the truth of (A), not the fact that one knows it true,
that makes R conclusive for P.
There is another respect in which traditional skeptical arguments have
been on the mark. One way of expressing my argument is to say that
the familiar and (to some) extremely annoying challenge, "Couldn't it
be an illusion (fake, imitation, etc.)?" or "Isn't it possible that you are
dreaming (hallucinating, etc.)?" is, in a certain important respect, quite
proper and appropriate even when there is no special reason to think you are
dreaming, hallucinating, confronting a fake and so on. For our knowledge
claims do entail that the evidence or grounds one has for believing would
not have been available if what one consequently believes (and claims to
know) were false; hence, they do entail that, given one's evidence or
grounds, it is false that one might be mistaken about what one purports to
know. (1) does entail the falsity of (3); hence, (1) can be shown to be
false not only by showing that S is dreaming or hallucinating or whatever, but also by showing that he might be, that his experience or
information on which he bases his belief that P can be had in such
circumstances as these without P being the case. It is not in the propriety
or relevance of these challenges that skepticism has gone awry. On the
contrary, the persistent and continuing appeal of skepticism lies in a
failure to come to grips with this challenge, in a refusal to acknowledge
its legitimacy and, hence, answer it. It simply will not do to insist that in
concerning himself with the possibility of mistake a skeptic is setting
artificially high standards for knowledge and, therefore, may be ignored
when considering ordinary knowledge claims. (1) does imply that (3) is
false, and it seems to me quite a legitimate line of argument for the
skeptic to insist that if (3) is true, if you might be dreaming or whatever,
then (1) is false - you do not know, on the basis of your present visual,
auditory, and other experiences, what you purport to know.
I think there are several confusions to be found in traditional skepticism, but one of them is not that of insisting that to know that P on the
basis of R, R must somehow preclude the possibility of not-P. The
confusions lie elsewhere. If one interprets the "might" (or "could") of
(3) too narrowly (as simply "logically possible") then, of course, (3) will,
in almost all interesting cases, turn out true and (therefore) (1) false.
[...]
(B') S believes that the solution is a base, and he believes this on the basis of the indicator's change in color.

(C) S knows that the indicator changed from yellow to blue (he saw it change - saw that it changed).
I have said that these three conditions were sufficient for knowledge.
Does S know that the solution is a base? Before answering this question
the reader should be informed that there is another chemical indicator,
Bromophenol Blue, that also turns from yellow to blue but only when
immersed in an acid. S, however, is quite unaware of the existence of
other such indicators. He merely assumes that a yellow indicator turning
blue is a positive test for a base. S's ignorance on this point does not
alter the fact that the preceding three conditions are satisfied. Yet,
despite the satisfaction of these conditions, I find it (in some cases) most
implausible to say he knows that the solution is a base. Whether he
knows or not depends in a crucial way on our understanding of condition (B'). The indicator's change in color, although it is a conclusive
reason, has its conclusiveness, so to speak, restricted in scope to the
range of cases in which Thymol Blue is the indicator (or, if this laboratory uses only Thymol Blue, in which the indicator is from this laboratory or of the sort used in this laboratory). What is a conclusive reason
for believing the solution to be a base is that a Thymol Blue indicator (or
an indicator from this laboratory) changed from yellow to blue when
immersed in it, not simply that an (unspecified) chemical indicator
changed from yellow to blue. The fact that this indicator happens to be
Thymol Blue is what accounts for the truth of (A'). Since, however, it
is a Thymol Blue indicator's (or some other appropriately specified
indicator's) color transformation that is conclusive, we must (see condition (B')) require that S's basis for believing the solution to be a base be
that a Thymol Blue indicator (or such-and-such indicator) changed from
yellow to blue. He need not, once again, know that a Thymol Blue's
color transformation (or such-and-such indicator's transformation) is conclusive, but he must be exploiting those things about the indicator's
transformation in virtue of which it is conclusive. In some cases an A's being
B is a conclusive reason for believing P only in virtue of the fact that it
is, in particular, an A (or, say, something that is Q) that is B; in such
cases we must understand condition (B) as requiring that S's basis for
believing P include not only the fact that this (something or other) is B,
but also that this something or other is, in particular, an A (or something
that is Q). And this requires of us in addition that we understand
condition (C) in such a way that S not only know that R is the case, know (let us say) that A is B, but also know that it is, in particular, an A that is B.20
One further example to illustrate a distinct, but closely related, difficulty. Suppose K is behaving in such a way that it is true to say that he
would not be behaving in that way unless he was nervous. Suppose S
purports to know that K is nervous and, when asked how he knows
this, replies by saying, "From the way he is behaving." Once again, our
three conditions are satisfied or can easily be assumed to be satisfied.
Yet, if we suppose that the distinctive thing about K's behavior is that
he is doing B1 while performing B2, then if S is relying on B1 (or B2)
alone, we should not say that he knows that K is nervous. It is quite
true that the basis for S's belief (that K is nervous) is K's behavior, and
in this (relatively unspecified) sense we might say that S is relying on the
significant aspects of the situation, but the fact is that the crucial aspects
(those aspects that make K's behavior conclusive) are more specific than
those on which S is relying in purporting to know. We must insist,
therefore, that S's basis for believing P be as specific in regard to the
relevant aspects of R as is necessary to capture the distinctive (i.e.,
conclusive, those figuring essentially in the satisfaction of (2)) features of
the situation.
I think both of the preceding qualifications can be summarized by
saying that when one has conclusive reasons, then this is sufficient for
knowing that P is the case when those reasons are properly specific, both
with regard to what it is that displays the particular features on which
one relies and on the particular features themselves. A complete statement of these restrictions is, however, far beyond the scope of this essay. Suffice it to say that the possession of conclusive reasons for believing is a necessary condition for knowledge and, properly qualified along the lines suggested here, also (I think) sufficient.

20 This type of restriction is required for the sort of example discussed in my article "Reasons and Consequences," Analysis, April 1968. In this article I argued that one could know that the A is B (e.g., the widow was limping) while having little, if any, justification for believing that it was, in particular, a widow who was limping (hence, one could know that the widow was limping without knowing that it was a widow who was limping). Since, however, the statement "The widow is limping" implies "There is (or it is) a widow who is limping," I took this as showing that S can know that P, know that P entails Q, and yet not know that Q. On the present analysis of conclusive reasons, of course, P is a conclusive reason for Q (since P entails Q) and anyone who believed Q on the basis of P should (on this analysis) know that Q. The restriction being discussed in the text blocks this result by requiring S to know those things in particular about P that make it conclusive for Q. In this example S must know that it is a widow who is limping, since it is this aspect of his conclusive reason (the widow is limping) in virtue of which it functions as a conclusive reason for believing that there is a widow who is limping. I am indebted to Bruce Freed for clarification on this point.
2

Epistemic Operators

[...]
This list begins with two epistemic operators, "reason to believe that"
and "know that." Since I shall be concerned with these later in the
essay, let me skip over them now and look at those appearing near the
end of the list. They will suffice to answer our opening question, and
their status is much less problematic than that of some of the other
operators.
"She lost" entails "Someone lost." Yet, it may be strange that she
lost, not at all strange that someone lost. "Bill and Susan married each
other" entails that Susan got married; yet, it may be quite odd that
(strange that, incredible that) Bill and Susan married each other but quite
unremarkable, not at all odd that, Susan got married. It may have been
a mistake that they married each other, not a mistake that Susan got
married. Or finally, "I hit the bull's-eye" entails that I either hit the
bull's-eye or the side of the barn; and although I admit that it was lucky
that (accidental that) I hit the bull's-eye, I will deny that it was lucky,
an accident, that I hit either the bull's-eye or the side of the barn.
Such examples show that not all operators are fully penetrating. Indeed, such operators as "it is strange that," "it is accidental that," and
"it is a mistake that" fail to penetrate to some of the most elementary
logical consequences of a proposition. Consider the entailment between
"P. Q" and " Q . " Clearly, it may be strange that P and Q, not at all
strange that P, and not at all strange that Q. A concatenation of factors,
no one of which is strange or accidental, may itself be strange or accidental. Taken by itself, there is nothing odd or suspicious about Frank's
holding a winning ticket in the first race. The same could be said about
any of the other races: there is nothing odd or suspicious about Frank's
holding a winning ticket in the nth race. Nonetheless, there is something
very odd, very suspicious, in Frank's having a winning ticket in n races.
Therefore, not only are these operators not fully penetrating, they lie,
as it were, on the other end of the spectrum. They fail to penetrate to
some of the most elementary consequences of a proposition. I shall refer
to this class of operators as nonpenetrating operators. I do not wish to
suggest by this label that such operators are totally impotent in this
respect (or that they are all uniform in their degree of penetration). I
mean it, rather, in a rough, comparative, sense: their degree of penetration
is less than that of any of the other operators I shall have occasion to
discuss.
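The two ends of the spectrum can be put schematically, for a sentential operator O (my notation; the essay proceeds informally):

\begin{align*}
\text{fully penetrating:}\quad & \text{if } P \vDash Q \text{, then } O(P) \vDash O(Q)\\
\text{nonpenetrating:}\quad & \text{even } O(P \,\&\, Q) \nvDash O(Q) \qquad (\text{e.g., ``it is strange that''})
\end{align*}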
We have, then, two ends of the spectrum with examples from both
ends. Anything that falls between these two extremes I shall call a
semipenetrating operator. And with this definition I am, finally, in a position to express my main point, the point I wish to defend in the rest of
this essay. It is, simply, that all epistemic operators are semipenetrating
operators. There is both a trivial and a significant side to this claim. Let
me first deal briefly with the trivial aspect.
The epistemic operators I mean to be speaking about when I say that
all epistemic operators are semipenetrating include the following:
(a) S knows that . . .
(b) S sees (or can see) that . . .
(c) S has reason (or a reason) to believe that . . .
(d) There is evidence to suggest that . . .
(e) S can prove that . . .
(f) S learned (discovered, found out) that . . .
(g) In relation to our evidence it is probable that . . .
Part of what needs to be established in showing that these are all semipenetrating operators is that they all possess a degree of penetration
greater than that of the nonpenetrating operators. This is the trivial side
of my thesis. I say it is trivial because it seems to me fairly obvious that
if someone knows that P and Q, has a reason to believe that P and Q,
or can prove that P and Q, he thereby knows that Q, has a reason to
believe that Q, or can prove (in the appropriate epistemic sense of this
term) that Q. Similarly, if S knows that Bill and Susan married each
other, he (must) know that Susan got married (married someone). If he
knows that P is the case, he knows that P or Q is the case (where the
"or" is understood in a sense that makes " P or Q" a necessary consequence of "P"). This is not a claim about what it would be appropriate
to say, what the person himself thinks he knows or would say he knows.
It is a question, simply, of what he knows. It may not be appropriate to
say to Jim's wife that you know it was either her husband, Jim, or
Harold who sent the neighbor lady an expensive gift when you know it
was Harold. For, although you do know this, it is misleading to say you
know it - especially to Jim's wife.
Let me accept, therefore, without further argument that the epistemic
operators are not, unlike "lucky that," "strange that," "a mistake that,"
and "accidental that," nonpenetrating operators. I would like to turn,
then, to the more significant side of my thesis. Before I do, however, I
must make one point clear lest it convert my entire thesis into something
as trivial as the first half of it. When we are dealing with the epistemic
operators, it becomes crucial to specify whether the agent in question
knows that P entails Q. That is to say, P may entail Q, and S may know
that P, but he may not know that Q because, and perhaps only because,
he fails to appreciate the fact that P entails Q. When Q is a simple
logical consequence of P we do not expect this to happen, but when
the propositions become very complex, or the relationship between
them very complex, this might easily occur. Let P be a set of axioms, Q
a theorem. S's knowing P does not entail S's knowing Q just because P
entails Q; for, of course, S may not know that P entails Q, may not
know that Q is a theorem. Hence, our epistemic operators will turn out
not to be penetrating because, and perhaps only because, the agents in
question are not fully cognizant of all the implications of what they
know to be the case, can see to be the case, have a reason to believe is
the case, and so on. Were we all ideally astute logicians, were we all
fully apprised of all the necessary consequences (supposing this to be a
well defined class) of every proposition, perhaps then the epistemic
operators would turn into fully penetrating operators. That is, assuming
that if P entails Q, we know that P entails Q, then every epistemic
operator is a penetrating operator: the epistemic operators penetrate to
all the known consequences of a proposition.
It is this latter, slightly modified, claim that I mean to reject.
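Stated as a principle, the rejected claim is a closure schema - for O = "S knows that," what the later literature calls closure of knowledge under known entailment (my formulation, not the essay's):

\[
\big[\,O(P) \;\wedge\; S \text{ knows that } P \vDash Q\,\big] \;\Rightarrow\; O(Q).
\]

The assumption announced in the sentence that follows - that the agent always knows the relevant entailments - lets the discussion take the second conjunct for granted and ask directly whether O penetrates to the known consequences of P.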
Therefore, I shall assume throughout the discussion that when Q is a
[...]
Suppose you have a reason to believe that the church is empty. Must
you have a reason to believe that it is a church? I am not asking whether
you generally have such a reason. I am asking whether one can have a
reason to believe the church empty without having a reason to believe
that it is a church that is empty. Certainly your reason for believing that
the church is empty is not itself a reason to believe it is a church; or it
need not be. Your reason for believing the church to be empty may be
that you just made a thorough inspection of it without finding anyone.
That is a good reason to believe the church empty. Just as clearly,
however, it is not a reason, much less a good reason, to believe that
what is empty is a church. The fact is, or so it seems to me, I do not
have to have any reason to believe it is a church. Of course, I would
never say the church was empty, or that I had a reason to believe that
the church was empty, unless I believed, and presumably had a reason
for so believing, that it was a church that was empty, but this is a
presumed condition of my saying something, not of my having a reason
to believe something. Suppose I had simply assumed (correctly as it turns
out) that the building was a church. Would this show that I had no
reason to believe that the church was empty?
Suppose I am describing to you the "adventures" of my brother
Harold. Harold is visiting New York for the first time, and he decides
to take a bus tour. He boards a crowded bus and immediately takes the
last remaining seat. The little old lady he shouldered aside in reaching
his seat stands over him glowering. Minutes pass. Finally, realizing that
my brother is not going to move, she sighs and moves resignedly to the
back of the bus. Not much of an adventure, but enough, I hope, to
make my point. I said that the little old lady realized that my brother
would not move. Does this imply that she realized that, or knew that, it
was my brother who refused to move? Clearly not. We can say that S
knows that X is Y without implying that S knows that it is X that is Y.
We do not have to describe our little old lady as knowing that the man
or the person would not move. We can say that she realized that, or
knew that, my brother would not move (minus, of course, this pattern of
emphasis), and we can say this because saying this does not entail that
the little old lady knew that, or realized that, it was my brother who
refused to move. She knew that my brother would not move, and she
knew this despite the fact that she did not know something that was
necessarily implied by what she did know - viz., that the person who
refused to move was my brother.
I have argued elsewhere that to see that A is B, that the roses are
wilted for example, is not to see, not even to be able to see, that they
are roses that are wilted.1 To see that the widow is limping is not to see
that it is a widow who is limping. I am now arguing that this same
feature holds for all epistemic operators. I can know that the roses are
wilting without knowing that they are roses, know that the water is
boiling without knowing that it is water, and prove that the square root
of 2 is smaller than the square root of 3 and, yet, be unable to prove what is entailed by this - viz., that the number 2 has a square root.
The general point may be put this way: there are certain presuppositions associated with a statement. These presuppositions, although their
truth is entailed by the truth of the statement, are not part of what is
operated on when we operate on the statement with one of our epistemic
operators. The epistemic operators do not penetrate to these presuppositions. For example, in saying that the coffee is boiling I assert that the
coffee is boiling, but in asserting this I do not assert that it is coffee that
is boiling. Rather, this is taken for granted, assumed, presupposed, or
what have you. Hence, when I say that I have a reason to believe that
the coffee is boiling, I am not saying that this reason applies to the fact
that it is coffee that is boiling. This is still presupposed. I may have such
a reason, of course, and chances are good that I do have such a reason
or I would not have referred to what I believe to be boiling as coffee, but
to have a reason to believe the coffee is boiling is not, thereby, to have
a reason to believe it is coffee that is boiling.
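(Put schematically, the contrast between fully penetrating and epistemic operators comes to this. The notation is my gloss, not the essay's own; O stands for an epistemic operator and Pre(P) for a presupposition of P.)

\[ \textit{Fully penetrating:}\quad O(P) \wedge (P \models Q)\ \Rightarrow\ O(Q)\quad\text{for every consequence } Q \]
\[ \textit{Epistemic operators:}\quad O(P) \wedge (P \models \mathrm{Pre}(P))\ \not\Rightarrow\ O(\mathrm{Pre}(P)) \]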
One would expect that if this is true of the semipenetrating operators,
then it should also be true of the nonpenetrating operators. They also
should fail to reach the presuppositions. This is exactly what we find. It
may be accidental that the two trucks collided, but not at all accidental
that it was two trucks that collided. Trucks were the only vehicles
allowed on the road that day, and so it was not at all accidental or a
matter of chance that the accident took place between two trucks. Still,
it was an accident that the two trucks collided. Or suppose Mrs. Murphy
mistakenly gives her cat some dog food. It need not be a mistake that
she gave the food to her cat, or some food to a cat. This was intentional.
What was a mistake was that it was dog food that she gave to her cat.
Hence, the first class of consequences that differentiates the epistemic
operators from the fully penetrating operators is the class of consequences associated with the presuppositions of a proposition. The fact
1 Seeing and Knowing (Chicago: University of Chicago Press, 1969), pp. 93–112, and also "Reasons and
Consequences," Analysis (April 1968).
2 Unlike our other operators, this one does not have a propositional operand. Despite the
rather obvious differences between this case and the others, I still think it useful to call
attention to its analogous features.
For example, he would not have bid seven no-trump unless he had all
four aces. I shall abbreviate this operator as "R → . . ."; hence, our
example could be written "he bid seven no-trump → he had all four
aces."
Each of these operators has features similar to those of our epistemic
operators. If one retraces the ground we have already covered, one will
find, I think, that these operators all penetrate deeper than the typical
nonpenetrating operator. If R explains why (or is the reason that) P and
Q are the case, then it explains why (is the reason that) Q is the case.3 If
I can explain why Bill and Harold are always invited to every party, I
can explain why Harold is always invited to every party. From the fact
that it was a mistake for me to quit my job it does not follow that it was
a mistake for me to do something, but if I had a reason to quit my job,
it does follow that I had a reason to do something. And if the grass
would not be green unless it had plenty of sunshine and water, it follows
that it would not be green unless it had water.
Furthermore, the similarities persist when one considers the presuppositional consequences. I argued that the epistemic operators fail to
penetrate to the presuppositions; the preceding three operators display
the same feature. In explaining why he takes his lunch to work, I do
not (or need not) explain why he goes to work or why he works at all.
The explanation may be obvious in some cases, of course, but the fact
is, I need not be able to explain why he works (he is so wealthy) to
explain why he takes his lunch to work (the cafeteria food is so bad).
The reason the elms on Main Street are dying is not the reason there are
elms on Main Street. I have a reason to feed my cat, no reason (not, at
least, the same reason) to have a cat. And although it is quite true that
he would not have known about our plans if the secretary had not told
him, it does not follow that he would not have known about our plans
if someone other than the secretary had told him. That is, (He knew about
our plans) → (The secretary told him) even though it is not true that
(He knew about our plans) → (It was the secretary who told him). Yet,
the fact that it was the secretary who told him is (I take it) a presuppositional consequence of the fact that the secretary told him. Similarly, if
George is out to set fire to the first empty building he finds, it may be
true to say that George would not have set fire to the church unless it
(the church) was empty, yet false to say that George would not have set
fire to the church unless it was a church.
3 One must be careful not to confuse sentential conjunction with similar-sounding expressions involving a relationship between two things. For example, to say Bill and Susan got
married (if it is intended to mean that they married each other), although it entails that Susan
got married, does not do so by simplification. "Reason why" penetrates through logical
simplification, not through the type of entailment represented by these two propositions.
That is, the reason they got married is that they loved each other; that they loved each
other is not the reason Susan got married.
I now wish to argue that these three operators do not penetrate to a
certain set of contrast consequences. To the extent that the epistemic
operators are similar to these operators, we may then infer, by analogy,
that they also fail to penetrate to certain contrast consequences. This is,
admittedly, a weak form of argument, depending as it does on the
grounds there are for thinking that the preceding three operators and
the epistemic operators share the same logic in this respect. Nonetheless,
the analogy is revealing. Some may even find it persuasive.4
(A) The pink walls in my living room clash with my old green
couch. Recognizing this, I proceed to paint the walls a compatible shade
of green. This is the reason I have, and give, for painting the walls green.
Now, in having this explanation for why I painted the walls green, I do
not think I have an explanation for two other things, both of which are
entailed by what I do have an explanation for. I have not explained why
I did not, instead of painting the walls green, buy a new couch or cover
the old one with a suitable slip cover. Nor have I explained why, instead
of painting the walls green, I did not paint them white and illuminate
them with green light. The same effect would have been achieved, the
same purpose would have been served, albeit at much greater expense.
I expect someone to object as follows: although the explanation given
for painting the walls green does not, by itself, explain why the couch
was not changed instead, it nonetheless succeeds as an explanation for
why the walls were painted green only insofar as there is an explanation
for why the couch was not changed instead. If there is no explanation
for why I did not change the couch instead, there has been no real, no
complete, explanation for why the walls were painted green.
I think this objection wrong. I may, of course, have an explanation
for why I did not buy a new couch: I love the old one or it has
4 I think that those who are inclined to give a causal account of knowledge should be
particularly interested in the operator "R → . . ." since, presumably, it will be involved in
many instances of knowledge ("many," not "all," since one might wish to except some
form of immediate knowledge - knowledge of one's own psychological state - from the
causal account). If this operator is only semipenetrating, then any account of knowledge
that relies on the relationship expressed by this operator (as I believe causal accounts must)
will be very close to giving a semipenetrating account of "knowing that."
sentimental value. But then again I may not. It just never occurred to
me to change the couch; or (if someone thinks that its not occurring to
me is an explanation of why I did not change the couch) I may have
thought of it but decided, for what reasons (if any) I cannot remember,
to keep the couch and paint the walls. That is to say, I cannot explain
why I did not change the couch. I thought of it but I did not do it. I
do not know why. Still, I can tell you why I painted the walls green.
They clashed with the couch.
(B) The fact that they are selling Xs so much more cheaply here than
elsewhere may be a reason to buy your Xs here, but it certainly need
not be a reason to do what is a necessary consequence of buying your Xs
here, viz., not stealing your Xs here.
(C) Let us suppose that S is operating in perfectly normal circumstances, a set of circumstances in which it is true to say that the wall he
sees would not (now) look green to him unless it was green (if it were
any other color it would look different to him). Although we can easily
imagine situations in which this is true, it does not follow that the wall
would not (now) look green to S if it were white cleverly illuminated
to look green. That is,
(i) The wall looks green (to S) → the wall is green.
(ii) The wall is green entails that the wall is not white cleverly illuminated to
look green (to S).
are both true; yet, it is not true that
(iii) The wall looks green (to S) → the wall is not white cleverly illuminated to
look green (to S).
There are dozens of examples that illustrate the relative impenetrability
of this operator. We can truly say that A and B would not have collided
if B had not swerved at the last moment and yet concede that they
would have collided without any swerve on the part of B if the direction
in which A was moving had been suitably altered in the beginning.5
5 The explanation for why the modal relationship between R and P (R → P) fails to carry
over (penetrate) to the logical consequences of P (i.e., R → Q, where Q is a logical
consequence of P) is to be found in the set of circumstances that are taken as given, or held
fixed, in subjunctive conditionals. There are certain logical consequences of P that, by
bringing in a reference to circumstances tacitly held fixed in the original subjunctive (R →
P), introduce a possible variation in these circumstances and, hence, lead to a different
framework of fixed conditions under which to assess the truth of R → Q. For instance, in
the last example in the text, when it is said that A and B would not have collided if B had
not swerved at the last moment, the truth of this conditional clearly takes it as given that A
and B possessed the prior trajectories they in fact had on the occasion in question. Given
certain facts, including the fact that they were traveling in the direction they were, they
would not have collided if B had not swerved. Some of the logical consequences of the
statement that B swerved do not, however, leave these conditions unaltered; e.g., B did
not move in a perfectly straight line in a direction 2° counterclockwise to the direction it
actually moved. This consequence "tinkers" with the circumstances originally taken as
given (held fixed), and a failure of penetration will usually arise when this occurs. It need not
be true that A and B would not have collided if B had moved in a perfectly straight line in
a direction 2° counterclockwise to the direction it actually moved.
The structure of these cases is virtually identical with the one that
appeared in the case of the epistemic operators, and I think that by
looking just a little more closely at this structure we can learn something
very fundamental about our class of epistemic operators and, in particular, about what it means to know something. If I may put it this way,
within the context of these operators no fact is an island. If we are
simply rehearsing the facts, then we can say that it is a fact that Brenda
did not take any dessert (although it was included in the meal). We
can say this without a thought about what sort of person Brenda is or
what she might have done had she ordered dessert. However, if we
put this fact into, say, an explanatory context, if we try to explain this
fact, it suddenly appears within a network of related facts, a network
of possible alternatives that serve to define what it is that is being explained.
What is being explained is a function of two things: not only the fact
(Brenda did not order any dessert), but also the range of relevant alternatives. A relevant alternative is an alternative that might have been
realized in the existing circumstances if the actual state of affairs had
not materialized.6 When I explain why Brenda did not order any dessert
by saying that she was full (was on a diet, did not like anything on the
dessert menu), I explain why she did not order any dessert rather than,
as opposed to, or instead of ordering some dessert and eating it. It is this
6 I am aware that this characterization of "a relevant alternative" is not, as it stands, very
illuminating. I am not sure I can make it more precise. What I am after can be expressed
this way: if Brenda had ordered dessert, she would not have thrown it at the waiter, stuffed
it in her shoes, or taken it home to a sick friend (she has no sick friend). These are not
alternatives that might have been realized in the existing circumstances if the actual state of
affairs had not materialized. Hence, they are not relevant alternatives. In other words, the
"might have been" in my characterization of a relevant alternative will have to be unpacked
in terms of counterfactuals.
Take the fact that Lefty killed Otto. By changing the emphasis pattern
we can invoke a different set of contrasts and, hence, alter what it is that
S is said to know when he is said to know that Lefty killed Otto. We
can say, for instance, that S knows that Lefty killed Otto. In this case
(and I think this is the way we usually hear the sentence when there is
no special emphasis) we are being told that S knows the identity of
Otto's killer, that it was Lefty who killed Otto. Hence, we expect S's
reasons for believing that Lefty killed Otto to consist in facts that single
out Lefty as the assailant rather than George, Mike, or someone else. On
the other hand, we can say that S knows that Lefty killed Otto. In this
case we are being told that S knows what Lefty did to Otto; he killed him
rather than merely injuring him, killed him rather than merely threatening
him, and so on. A good reason for believing that Lefty killed Otto (rather
than merely injuring him) is that Otto is dead, but this is not much of a
reason, if it is a reason at all, for believing that Lefty killed Otto.
Changing the set of contrasts (from "Lefty rather than George or Mike"
to "killed rather than injured or threatened") by shifting the emphasis
pattern changes what it is that one is alleged to know when one is said
to know that Lefty killed Otto.7 The same point can be made here as
we made in the case of explanation: the operator will penetrate only to
those contrast consequences that form part of the network of relevant
alternatives structuring the original context in which a knowledge claim
was advanced. Just as we have not explained why Brenda did not order
some dessert and throw it at the waiter when we explained why she did
not order some dessert (although what we have explained her not
ordering any dessert entails this), so also in knowing that Lefty killed
Otto (knowing that what Lefty did to Otto was kill him) we do not
necessarily (although we may) know that Lefty killed Otto (know that it
was Lefty who killed Otto). Recall the example of the little old lady who
knew that my brother would not move without knowing that it was
my brother who would not move.
The conclusions to be drawn are the same as those in the case of
explanation. Just as we can say that within the original setting, within
the original framework of alternatives that defined what we were trying
to explain, we did explain why Brenda did not order any dessert, so also
within the original setting, within the set of contrasts that defined what
it was we were claiming to know, we did know that the wall was red
and did know that it was a zebra in the pen.
To introduce a novel and enlarged set of alternatives, as the skeptic is
inclined to do with our epistemic claims, is to exhibit consequences of
what we know, or have reason to believe, that we may not know, may
not have a reason to believe; but it does not show that we did not
know, did not have a reason to believe, whatever it is that has these consequences.
7 The same example works nicely with the operator "R → . . . ." It may be true to say that
Otto would not be dead unless Lefty killed him (unless what Lefty did to him was kill him)
without its being true that Otto would not be dead unless Lefty killed him (unless it was
Lefty who killed him).
3
The Pragmatic Dimension of Knowledge
"Conclusive reasons," Australasian Journal of Philosophy (May 1971) and Seeing and Knowing
certain sort of thing: bumps in the case of flatness and objects in the case
of emptiness. The fact that there can be nothing of this sort present for
the concept to be satisfied is what makes it an absolute concept. It is
why if X is empty, Y cannot be emptier. Nonetheless, when it comes to
determining what counts as a thing of this sort (a bump or an object),
and hence what counts against a correct application of the concept, we
find the criteria or standards peculiarly spongy and relative. What counts
as a thing for assessing the emptiness of my pocket may not count as a
thing for assessing the emptiness of a park, a warehouse, or a football
stadium. Such concepts, we might say, are relationally absolute; absolute,
yes, but only relative to a certain standard. We might put the point this
way: to be empty is to be devoid of all relevant things, thereby exhibiting,
simultaneously, the absolute (in the word "all") and relative (in the
word "relevant") character of this concept.
If, as I have suggested, knowledge is an absolute concept, we should
expect it to exhibit this kind of relationally absolute character. This,
indeed, is the possibility I mean to explore in this essay. What I propose
to do is to use what I have called relationally absolute concepts as a
model for understanding knowledge. In accordance with this approach
(and in harmony with an earlier suggestion) I propose to think of
knowledge as an evidential state in which all relevant alternatives (to what
is known) are eliminated. This makes knowledge an absolute concept, but
the restriction to relevant alternatives makes it, like empty and flat, applicable to this epistemically bumpy world we live in.
Why do this? What are the advantages? A partial catalog of benefits
follows:
(1) A growing number of philosophers are able to find, or so they
claim, a pragmatic, social, or communal dimension to knowledge.4 A
4 I have in mind Harman's discussion in Thought (Princeton, 1973) of evidence one does
not possess, Goldman's barn example in "Discrimination and perceptual knowledge," The
Journal of Philosophy 73.20 (1976), the sorts of examples appearing in various Defeasibility
analyses of knowledge (see Keith Lehrer and Thomas Paxson, Jr., "Knowledge: Undefeated
justified true belief," Journal of Philosophy 66.8 [1969] and Peter Klein, "A proposed
definition of propositional knowledge," Journal of Philosophy 68.16 [1971]), Ernest Sosa's
recommendation (in "How do you know?", American Philosophical Quarterly 11.2 [1974])
that we must depart from the traditional conception of knowledge by putting in relief the
relativity of knowledge to an epistemic community (p. 117), and David Annis's "A contextualist theory of epistemic justification," American Philosophical Quarterly, 15.3 (1978), in
which the basic model of justification (and presumably of knowledge) revolves around a
person's being able to meet certain objections. The trend here, if this is a trend, seems to
be toward the kind of relativity espoused by Thomas Kuhn in his The Structure of Scientific
Revolutions (Chicago, 1962).
have been migrating to the Midwest from their home in Siberia, and he
and his research assistants are combing the Midwest in search of confirmation.
Once we embellish our simple story in this way, intuitions start to
diverge on whether our amateur bird-watcher does indeed know that
yonder bird is a Gadwall duck (we are assuming, of course, that it is a
Gadwall). Most people (I assume) would say that he did not know the
bird to be a Gadwall if there actually were Siberian grebes in the vicinity.
It certainly sounds strange to suppose that he could give assurances to
the ornithologist that the bird he saw was not a Siberian grebe (since he
knew it to be a Gadwall duck). But what if the ornithologist's suspicions
are unfounded? None of the grebes have migrated. Does the bird-watcher still not know what he takes himself to know? Is, then, the
simple presence of an ornithologist, with his false hypothesis, enough to
rob the bird-watcher of his knowledge that the bird on the pond is a
Gadwall duck? What if we suppose that the Siberian grebes, because of
certain geographical barriers, cannot migrate? Or suppose that there really
are no Siberian grebes, the existence of such a bird being a delusion of
a crackpot ornithologist. We may even suppose that, in addition to there
being no grebes, there is no ornithologist of the sort I described, but
that people in the area believe that there is. Or some people believe that
there is. Or the bird-watcher's wife believes that there is and, as a result,
expresses skepticism about his claim to know that what he saw was a
Gadwall duck. Or, finally, although no one believes any of this, some of
the locals are interested in whether or not our bird-watcher knows that
there are no look-alike migrant grebes in the area.
Somewhere in this progression philosophers, most of them anyway,
will dig in their heels and say that the bird-watcher really does know that
the bird he sees is a Gadwall, and that he knows this despite his inability
to justifiably rule out certain alternative possibilities. For example, if
there are no look-alike grebes and no ornithologist of the sort I described, but the bird-watcher's wife believes that there are (a rumor
she heard from her hairdresser), this does not rob him of his knowledge
that the bird he saw was a Gadwall. He needn't be able to rule out the
possibility that there are, somewhere in the world, look-alike grebes that
have migrated to the Midwest in order to know that the bird he saw
was a Gadwall duck. These other possibilities are (whether the bird-watcher realizes it or not) simply too remote.
Most philosophers will dig in their heels here because they realize
that if they don't, they are on the slippery slope to skepticism with
nothing left to hang on to. If false rumors about look-alike grebes and
ornithologists can rob an expert bird-watcher of his knowledge that a
bird seen in good light, and under ideal conditions, is a Gadwall duck,
then similarly false rumors, suspicions, or even conjectures about deceptive demons or possible tricks will rob everyone of almost everything
they know. One of the ways to prevent this slide into skepticism is to
acknowledge that although knowledge requires the evidential elimination of all relevant alternatives (to what is known), there is a shifting,
variable set of relevant alternatives. It may be that our bird-watcher does
know the bird is a Gadwall under normal conditions (because look-alike
grebes are not a relevant alternative) but does not know this if there is a
suspicion, however ill founded it may be, that there exist look-alike
grebes within migrating range. This will (or should) be no more unusual
than acknowledging the fact that a refrigerator could truly be described
as empty to a person looking for something to eat but not truly described
as empty to a person looking for spare refrigerator parts. In the first case
"empty" implies having no food in it; in the second it implies having
no shelves, brackets, and hardware in it.
These, then, are some of the advantages to be derived from this
approach to the analysis of knowledge. They are, however, advantages
that can be harvested only if certain questions can be given reasonable
answers: in particular (a) what makes a possibility relevant? (b) If, in
order to know, one must rule out all relevant alternatives, how is this
"elimination" to be understood? What does it take, evidentially, to "rule
out" an alternative? (c) Is it possible, as this type of analysis suggests, for
one to know something at one time and, later, not know it (due to the
introduction of another relevant alternative) without forgetting it? (d)
Can one make it easier to know things by remaining ignorant of what
are, for others, relevant possibilities?
These, and many more questions, need answers if this framework for
the analysis of knowledge is to be anything more than suggestive. Since
I cannot here (or anywhere else, for that matter) provide answers to all
these questions, I will try, in the time remaining, to fill in some of the
large gaps.
Call the Contrasting Set (CS) the class of situations that are necessarily
eliminated by what is known to be the case. That is, if S knows that P,
then Q is in the CS (of P) if and only if, given P, necessarily not-Q. In
our bird-watcher's example, the bird's being a Siberian grebe (or any
kind of grebe at all) is in the CS of our bird-watcher's knowledge, or
putative knowledge, that it is a Gadwall duck. So is its being an elephant,
[Figure 1]
The solid lines indicate an RS and the corresponding piece of evidence that would be required to know with this RS. With a different
RS (RS′), indicated by dotted lines, different evidence would be required. If Siberian grebes are in the RS, then additional, more elaborate,
evidence is required to know that yonder bird is a Gadwall than in the
normal situation. Since the bellies are of different color, one might, for
example, be able to tell that it was a Gadwall by watching it in flight.
The point, however, is that something more would be needed than was
available in the original, normal situation.
5 Although there are grebes, and some of them look like ducks, there are (to the best of my
knowledge) no Siberian grebes that look like Gadwall ducks. This part of my story was
pure invention.
In terms of this kind of diagram, a skeptic could be represented as
one who took RS = CS in all cases. One's evidence must be comprehensive enough to eliminate all contrasting possibilities, there being no
irrelevant alternatives.
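(Schematically, and again as my gloss rather than notation from the text, with \(\Box\) read as "necessarily":)

\[ Q \in \mathit{CS}(P) \iff \Box(P \rightarrow \neg Q) \]
\[ \mathit{RS}(P) \subseteq \mathit{CS}(P), \quad\text{and } S \text{ knows that } P \text{ only if } S\text{'s evidence eliminates every } Q \in \mathit{RS}(P) \]
\[ \textit{The skeptic:}\quad \mathit{RS}(P) = \mathit{CS}(P) \text{ for every } P \]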
Once the mistake is made of identifying RS with CS, the pressure
(on non-skeptics) for lowering the standards of justification (requisite for
knowing) becomes irresistible. For if in order to know that P one must
be justified in rejecting all members of the CS (not just all members of
the RS), then one can no longer expect very impressive levels of justification for what people know to be the case. If the evidence our bird-watcher has for believing the bird to be a Gadwall duck (wing markings,
etc.) is also supposed to justify the proposition that it is not a look-alike
grebe, then, obviously, the justification is nowhere near conclusive.
What some philosophers seem inclined to conclude from this is that
knowledge does not require conclusive evidence. The reasoning is simple: the bird-watcher knows it is a Gadwall; he doesn't have conclusive
reasons (he can't exclude the possibility that it is a look-alike grebe);
therefore knowledge does not require conclusive reasons. But this, I
submit, is a fallacy, a misunderstanding of what needs to be conclusively
excluded in order to know. Such reasoning is analogous to arguing that
to be empty an object can have a few hundred things in it, and to
conclude this on the basis of the undeniable fact that empty classrooms,
warehouses, and buildings generally have at least a hundred things in
them.
But what determines the membership of an RS? An RS, you will
recall, is a set of situations each member of which contrasts with what is
known to be the case and must be evidentially excluded if one is to
know. Are there criteria for membership in this set? I'm now going to
stick my neck out by saying what some of the considerations are that
determine the membership of these sets. I do not expect much agreement.
(1) The first point has to do with the way we use contrastive focusing
to indicate the range of relevant alternatives. I have discussed this phe-
on the skeptic (by drastically diminishing the RS), have suggested that
an alternative becomes relevant only when there are positive reasons for
thinking it is, or may be, realized. Doubt can also be irrational, and if
there are no reasons to doubt, mere possibilities are irrelevant to whether
what is believed is known.
This, obviously, is an overreaction. The Wisconsin lakes could be
loaded with migrant Siberian grebes without the bird-watcher's having
any reason to think that such look-alike birds actually existed. His lack
of any reason to doubt, his ignorance of the possibility that what he sees
is a grebe and not a Gadwall, is irrelevant. The mere possibility is in this
case enough to show he doesn't know.
This shows that having a reason (evidence) to think X is a genuine
possibility is not a necessary condition for X's being a relevant alternative. Perhaps, though, it is sufficient. Perhaps, that is, a reasonable
(justified) belief that yonder bird might be a look-alike grebe (whether
or not this belief is true) is enough to make its being a look-alike grebe
a relevant possibility.
But if a person really does believe that the bird could be a grebe,
aside from the question of whether or not this belief is reasonable, he
surely fails to have the kind of belief requisite to knowing it is a Gadwall.
He certainly doesn't think he knows it is a Gadwall. I do not know
exactly how to express the belief condition on knowledge, but it seems
to me that anyone who believes (reasonably or not) that he might be
wrong fails to meet it.8 And so the present suggestion is irrelevant to our
problem. It describes conditions in which the subject fails to know but
only by robbing him of the belief requisite to knowledge.
It may be thought that the mere presence of evidence that one might
be wrong, assuming this evidence does not affect one's willingness to
believe, is enough to make the respect in which one (according to this
evidence) might be wrong a relevant alternative. This has the unfortunate consequence that one can rob a person, indeed a whole community, of its knowledge by spreading a false rumor. I can, for example,
tell the bird-watcher that I just met an ornithologist looking for migrant
grebes. Once this message is relayed to the bird-watcher, even if he
rejects it as a silly fabrication, he no longer knows that the bird he saw
8 We needn't suppose that for S to know that P, S must believe that he can't be wrong. But
it does seem reasonable to insist that if S knows that P, he does not believe that he might
be wrong. In other words, if the bird-watcher really believes that the bird he sees might be
a grebe, then he does not know it is a Gadwall.
4
The Epistemology of Belief
tive predicament. I think, though, that this picture distorts the epistemological task by grossly underestimating the cognitive demands of
simple belief. Knowing is hard, but believing is no piece of cake either.
In fact, or so I wish to argue, believing something requires precisely the
same skills involved in knowing. Anyone who believes something thereby
exhibits the cognitive resources for knowing. There is, as we shall see, a
gap between belief and knowledge, but it is not one that provides any
comfort to the philosophical skeptic. If I may, for dramatic effect, overstate my case somewhat: if you can't know it, you can't believe it either.
1. REPRESENTATION AND MISREPRESENTATION
mistake (the instrument concludes), it is we who assigned it a representational capacity beyond its actual powers. One could as well print
"Gross National Product in Dollars" on its face and then complain that
it misrepresented the state of the national economy.
It seems more reasonable to say that it is the instrument's job to
register the pressure and that it is our job (the job of those who use the
instrument) to see to it that a change in pressure is reliably correlated
with altitude (or whatever other quantity we use the instrument to
measure), to see to it, in other words, that the instrument is used in
conditions in which alterations in pressure carry information about the
magnitudes we use the instrument to measure. If this is so, then the
instrument is discharging its representational responsibilities in a perfectly
satisfactory way. We aren't fooling the instrument; at most we are
fooling ourselves.
Is the speedometer on a car misrepresenting the vehicle's speed if we
jack up the car, engage the gears, and run the engine? The drive shaft
and wheels will turn, and the speedometer will, accordingly, register
(say) 30 mph. The car, of course, is stationary. Something is amiss, and
if we have to place blame, the speedometer is the likely culprit. It is
saying something false. Or is it? How do we decide what the speedometer is saying? Perhaps the speedometer is representing the only thing it
is capable of representing, saying the only thing it knows how to say,
namely, that the wheels are turning at a certain rate. The mistake, if a
mistake is being made here at all, occurs in us, in what we infer must be
true if what the speedometer says is true.
What, then, does a measuring instrument actually represent? Or, to
put it more suggestively, what does the instrument really believe? Does
the altimeter have altitude beliefs or merely pressure beliefs? Does the
speedometer have vehicle-speed beliefs or merely wheel-rotation beliefs?
Until we are in a position to answer these questions we cannot say how,
or even whether, it is possible to "fool" the instrument. We cannot say
whether, in situations like those described earlier, the instruments are
misrepresenting anything, whether it is even possible to make them
"believe" something false.
It is time to stop describing instruments in such inappropriate ways.
Although I think it sensible to speak of instruments representing the
quantities they are designed to measure, I do not, of course, think they
say or believe things. We cannot, literally, fool instruments. They don't
make mistakes. I allowed myself to speak this way in order to reveal my
overall strategy. So before moving on to a discussion of creatures to
which it does make sense to attribute genuine cognitive states (like belief
and knowledge), let me describe an intermediate case. Some may find it
more realistic, and hence more convincing, than examples involving
speedometers and altimeters.
A frog in its natural habitat will flick with its tongue at small, moving,
dark spots. The neural mechanisms responsible for this response have,
for fairly obvious reasons, been called "bug detectors." In the frog's
natural habitat, all (or most) small, moving, dark spots are bugs, a staple
item in the frog's diet. Psychologists (with presumably better intentions
than I had with the altimeter) have removed frogs from their natural
habitat, projected small, moving, dark shadows on a surface in front of
the frogs, and observed the creatures' response. Not unexpectedly, the
frogs "zap" the moving shadow.
What shall we say about this situation? Has the frog mistakenly
identified the shadow as a bug? Is the frog misrepresenting its surroundings? Does the frog have a false belief, a belief to the effect that this
(small, moving, dark spot) is a bug? Or shall we say that the frog
(assuming for the moment that it has beliefs) does not have "bug" beliefs
at all? Instead what it has are "small-moving-dark-spot" beliefs? Since
the frog usually operates in circumstances (swamps, ponds, etc.) where
small, moving, dark spots are bugs, natural selection has favored the
development of a zapping reflex to whatever the frog perceives as a
small, dark spot. If we take this latter view, then although psychologists
can starve a frog in this artificial environment, they can't fool it. The
frog never makes a mistake because it never represents, or takes, things
to be other than they are. It represents the shadow as a small, moving,
dark spot, and this representation is perfectly correct. The frog goes
hungry in this situation, not because it mistakenly sees dark spots as
edible bugs, but because what it correctly sees as moving spots are not,
in fact, edible bugs.
If we adopt this latter strategy in describing what the frog believes,
then it becomes very hard, if not impossible, to fool the animal. If the
frog has beliefs at all, it approaches infallibility in these beliefs. And this
infallibility is achieved in the same way it was (or could be) achieved
with the altimeter and speedometer: by tailoring the content of belief
(representation) to whatever properties of the stimulus trigger the relevant
response. If we are willing to be less ambitious in this way about what
we describe the frog as believing, we can be correspondingly more
ambitious in what we describe the frog as knowing.
But there is, surely, a truth of the matter. The frog either believes
Some philosophers, I know, would deny this. I make the assumption, nonetheless, because
(1) I believe it, and (2) it makes my argument that much more difficult and, therefore, that
much more significant if correct.
to the effect that X is plastic? Does the dolphin now have some crude
notion of plasticity?
Of course not. The reason we are prepared to credit the dolphin with
the concept of a cylinder (and hence, with beliefs to the effect that this
is a cylinder and that is not) is not just because it distinguishes cylinders
from other shaped objects (for it does, with equal success, distinguish
plastic from nonplastic objects) but because of our conviction that it was
the cylindricality of these objects to which the creature was responding
(and not their plasticity). The animal's sensitive sonar is capable (or so we
believe) of picking up information about the shape of distant objects,
and it was trained to respond in some distinctive way to this piece of
information. There is no reason to think it was picking up, or responding to, information about the chemical constitution of these objects, to
the fact that they were plastic. We could test this, of course. Merely
place a wooden cylinder in the pool and observe the animal's response.
A positive response would indicate that it was the cylindricality, not the
plasticity, to which the animal developed a sensitivity. It was, as I prefer
to put it (more about this later), information about the object's shape,
not information about its chemical structure, that guided the animal's
discriminatory behavior during learning. It is this fact that lies behind
our unwillingness to credit the dolphin with the concept plastic and our
willingness (or greater willingness) to credit it with the concept of
cylindricality, even though (given the restricted learning conditions) it
became as successful in distinguishing plastic from nonplastic objects as
it did in distinguishing cylinders from noncylinders. Even if (cosmic
coincidence) all and only cylinders were made of plastic, so that our
trained dolphins could infallibly detect plastic objects (or detect them as
infallibly as they detected cylinders), this would not have the slightest
tendency to make us say that they had acquired the concept of plastic or
could now have beliefs about the plasticity of objects. The level of
sophistication to which we are willing to rise in describing the belief
content of the dolphin is no higher than the kind of information about
objects to which we believe it sensitive. The dolphin can have "cylinder" beliefs but not "plastic" beliefs because, as far as we know anyway,
it has a sensory system that allows it to pick up information about the
shape, but not the chemical structure, of objects at a distance.
It is important to understand what is happening when we make these
judgments, what kinds of considerations shape our decisions about what
level of conceptual sophistication to assign an animal (whether it be a
frog, a dolphin, or a human child). The decision about what concept to
assign a creature, and hence the decision about what sorts of beliefs we
may attribute to it, are guided by our assessment of the sort of information the animal utilizes during learning to articulate, develop, and refine
its discriminatory and classificatory repertoire. If we are talking about an
instrument, something that doesn't learn, then its representational
power, what it represents things as being, is a function of the information
to which the instrument is sensitive. Since altimeters are not sensitive to
information about the gross national product, no matter what I happen
to write on the face of the instrument, an altimeter cannot represent or
misrepresent the gross national product. But since the instrument is
sensitive to information about pressure and, some would say, in some
situations at least, to information about altitude, it is capable of both
representing and misrepresenting these magnitudes.
This principle (the principle, namely, that the representational powers
of a system are limited by its informational pickup and processing capabilities) underlies many of our judgments about the conditions in which
someone can and cannot learn. Why can't you teach a normally endowed child her colors in the dark? Because information about the color
of the objects is not therein made available for shaping the child's
discriminatory and identificatory responses. Even if the child succeeds in
picking out all the blue objects (in virtue of the fact, say, that all and
only the blue ones are furry), she will not, by this procedure, learn the
concept blue. She will not believe of the next furry blue object she finds
that it is blue. The most she will believe is that it is furry. Even if we
taught her to say "blue" every time she encountered a blue object in
the dark, we would not, thereby, have given the child a color concept.
We would merely have given her an eccentric way of expressing her
concept of furryness.
The moral of the story is this: to learn what an X is, to acquire the
capacity to represent something as an X (believe it to be an X), it is not
enough to be shown Xs and non-Xs and to distinguish successfully
between them. Unless the information that the Xs are X is made available to the learner (or instrument), and it is this information that is used
to discriminate and classify, the system will not be representing anything
as an X. Even if some concept is acquired, and even if this concept
happens to be coextensive with that of X (thus allowing the subject to
distinguish successfully the Xs from the non-Xs), the concept acquired
will not be that of an X. The subject will not be able to believe of Xs
that they are X. For the concept acquired during learning is determined
by the kind of information to which the learner becomes sensitive, and
Although they can be interdefined in fairly trivial ways, they needn't be. In Knowledge and
the Flow of Information (Bradford/MIT, 1981), I give an independent (of knowledge) analysis
of information, thus making the concept available for the analysis of knowledge.
Obviously one can believe something is a dingbat when it is not a dingbat - hence, believe
things that cannot be known (because they are not the case). I hope it is clear from the
wording in the text that I am not denying this obvious fact. The thesis is, rather, that if
one has the concept of a dingbat (hence, is capable of holding beliefs to the effect that
something is a dingbat), then that something is a dingbat is the sort of thing one can know.
The temporal qualifier should always be understood. A creature may have acquired a
concept at a time when he possessed a fully functional sensory system, one capable of
picking up and processing information of a certain kind. Once the concept is acquired,
though, the creature may have lost the information-processing capacity (e.g., have gone
blind), hence losing the capacity to know what he can still believe.
I earlier (circa second draft) thought that what the child said when it said "This is circular"
was false. I thought this because it seemed to me that what the child was saying (with these
words) was that the object was circular. Jon Barwise convinced me that this was not so.
What the child is saying when it uses these words is that the object is blue. Hence, what
the child is saying is true.
"The Meaning of 'Meaning' " in Language, Mind and Knowledge, Minnesota Studies in the
Philosophy of Science, 7, Minneapolis: University of Minnesota Press (1975); reprinted in
Mind, Language and Reality - Philosophical Papers, Vol. 2, Cambridge, England (1975).
10 As I understand the Causal Theory (of natural kind terms), Tommy would (according to
this theory) have our concept of water since, by hypothesis, H2O figured causally (just as
it did for Earthlings) in his acquisition of this concept. This, I think, shows what is wrong
with a causal theory.
11 If the reader thinks it could not show this (viz., that we do not believe what we think we
believe), so much the worse for the view that we could entertain (simple) beliefs that we
could not know to be true.
5
Two Conceptions of Knowledge: Rational vs. Reliable Belief
There are two ways to think about knowledge. One way is to start, so
to speak, at the bottom. With animals. The idea is to isolate knowledge
in a pure form, where its essential nature is not obscured by irrelevant
details. Cats can see. Dogs know things. Fido remembers where he
buried his bone. That is why he is digging near the bush. Kitty knows
where the mouse ran. That is why she waits patiently in front of the
hole. If animals are not conceptually sophisticated, do not possess language, do not understand what it takes to know, then this merely shows
that such talents are not needed to know.
If, then, pursuing this strategy, you want to find out what knowledge
is, look at what Fido has when he sees where his food bowl is or
remembers where he buried his bone. Think about Kitty as she stalks a
bird or pursues a mouse. And this, whatever it is, is exactly what we've
got when we know that the universe is over ten billion years old and
that water is the ash of hydrogen. It is true that in its grander manifestations (in science, for example) knowledge may appear to be totally
beyond Fido's and Kitty's modest capacities, but this is simply a confusion of what is known (recondite and well beyond their grasp) with
knowledge itself, something they have (albeit about more humble
topics) in great profusion.
This, as I say, is one way of thinking about knowledge. Call it the
bottom-up strategy. It appeals to those philosophers who seek some naturalistic basis for epistemological theory, some way of integrating philosophical questions about knowledge, perception, and memory with
Reprinted from Grazer Philosophische Studien 40 (1991), 15–30, by permission of the publisher.
Island. But wherever it was he picked up the internalist virus, it has been
with him for at least thirty years. He now exhibits unmistakable symptoms of terminal top-downism. I don't expect treatment, especially
from a bottom-upper like myself, to help much now. He's probably
got the bug for life. Nonetheless, I would like to try poking around
with a sharp (or as sharp as I can make it) instrument. Even if I don't
succeed in curing the patient (and, realistically, I do not even expect to
penetrate the skin), maybe I can provoke some responsive groans.
To begin with, it won't do (on the part of either side) to say that
there are different, or perhaps many, senses of the verb "to know" and
that the bottom-uppers (or top-downers, take your pick) are preoccupied with an irrelevant, an unimportant, a not-epistemologically central, or a not very interesting (or not very interesting to us) notion of
what it means to know. This may well be true, but one can't just
declare it, pick up one's toys (the interesting-to-me sense of knowledge),
and go home to play by oneself. Lehrer, at times, threatens to do this.
In dismissing the kind of knowledge that animals, young children, and
brain-damaged adults have (exactly the sort of cases that bottom-uppers
think are of central importance for understanding knowledge), Lehrer,
for instance, says (repeatedly) that this is not the sort of knowledge that
concerns him. These senses of the word do not, he says, capture the
sense of knowledge that is characteristically human, that distinguishes
us from other beings. "I am concerned," he says, "with knowledge
that a being could in principle and with training articulate" (1988,
pp. 330-331).
The question, of course, is whether there is a sense of knowledge that
distinguishes us from inarticulate beings. Is there a sense of "hunger" or
"thirst" that distinguishes us from animals and young children, who
cannot say they are hungry or articulate what it is like to be thirsty? If we
are inclined to think (as I assume most of us do) that animals and infants,
although quite unable to articulate it, can be thirsty in pretty much the
same way we are thirsty, what is wrong with supposing that Fido's got
exactly what we've got, knowledge, but he can't express it the way
we can? Fido knows his food bowl is over there by the table and I know
his food bowl is over there by the table, and we both know this in
pretty much the same way: we both see that it's there. I, to be sure,
differ from Fido in many interesting ways. I can say not only what I
know (that the bowl is over there), but that I know. He can't. I have a
reasonable grasp of what it takes to know; he doesn't. I have read
Edmund Gettier; he can't. On demand, I might even be able to concoct
a pretty good argument (starting from premises about how things seem
to me) that the bowl is over there by the table, something that is quite
beyond Fido's feeble abilities. But why is any of this relevant to whether
Fido knows exactly what I know (that the bowl is by the table) and
knows it, furthermore, in exactly the way I know it: by seeing? If Lehrer
is, for whatever reasons, more interested in studying the cognitive exploits of articulate and reasonably intelligent human beings, more concerned with what constitutes "critical reasoning" (1990, p. 36), well and
good. Let him do so. If he wants to study the talents and capacities that
make possible our most cherished scientific achievements, the discovery
of the double helix, for instance, and our most worthy practical attainments, the development of a system of justice, for example (1990, p. 5),
he has every right to do so. But why suppose that what distinguishes us
from Fido, including the great many things we know that Fido cannot
know, is relevant to whether Fido knows anything at all and knows it,
furthermore, in the same sense in which we know things? Superior
intellectual accomplishments, our capacity to engage in critical discussion
and rational confrontation (1990, p. 88), are certainly not relevant to
whether Fido eats, sleeps, and defecates in exactly the same sense we do.
Why, then, suppose it relevant to whether Fido can see, know, and
remember in the same sense we do? One cannot simply assume that
these are different senses without begging the key issue separating internalists and externalists.
One can, if one likes, study the way astrophysicists differ from ordinary mortals in their knowledge of processes occurring in distant stars,
but it would be a great mistake to think that just because what they
know, and their ways of knowing it, are beyond the intellectual capacities of ten-year-old children, the feeble-minded, and chimpanzees, that,
therefore, astrophysicists know things in a sense of the word "knowledge" that is different from that applied to ten-year-old children, the
feeble-minded, and chimpanzees.
Even if we should suppose, with Lehrer, that there is some special
sense of knowledge that we have that Fido lacks, a sense that requires
not just having received the relevant information (which Fido receives
too), but an appreciation, understanding, or justification (that Fido presumably lacks) that one has received such information, an ability to
"defend" a claim that one knows (which Fido obviously lacks), we can
still ask whether this is an epistemologically interesting, useful, or common notion, one that we actually use in our everyday descriptions of
5 Although this may be disputed in certain circles (e.g., artificial intelligence and cognitive
science), I assume it here since it is common ground between top-downers and bottom-uppers: both externalists and internalists require, minimally, something like belief on the
part of the knower. Lehrer requires more than this minimum, but more of this in a
moment.
6 I leave aside the attribution of skills and talents (knowing how) since these, even for the
most primitive organisms, are (I assume) quite literal. Minnows know how to swim and
spiders know how to spin a web. We are here concerned with factual knowledge, knowledge in what Lehrer (1990, pp. 3–4) calls the informational sense (although, of course, I
reject his claim that knowledge in this sense requires one to recognize something as information
[p. 3]).
7 Lehrer, of course, has a more demanding condition on knowledge than simple belief. He
describes this as acceptance, and what he means by acceptance is pretty fancy, a mental state
that Fido and Kitty (however able they might be to have beliefs) cannot have (1990,
pp. 113ff.). He may be right in imposing this stronger condition, of course, but we cannot,
at this stage of the argument, accept this stronger condition. To do so would beg the
question against the bottom-uppers since animals, infants, and the feeble-minded (braindamaged) cannot (in Lehrer's sense) accept anything.
beliefs at all, it does not tell us anything about knowledge itself, about
whether they know, or can know, the things they do have beliefs about.
The question we are asking, or should be asking, is not whether Fido
believes anything at all,8 but whether, assuming there are things Fido
believes, which (if any) of these beliefs qualify as knowledge. And why?
If he can think his food bowl is over there by the table, why can't Fido
know it and know it, moreover, in exactly the same sense in which I
know it?
Lehrer has made it quite clear what it is he thinks animals (very young
children, brain-damaged adults) lack that disqualifies them as knowers.
He does not challenge them as believers. A two-year-old child, a dog,
and a mentally retarded adult can (for some suitable values of P) think P.
What they cannot do, for any value of P, however, is know P. The
reason they cannot is that although they can get the information needed
to know, and although this information may cause them to believe P,
they do not have the information that it is information (1990, pp. 162–164).9 They are ". . . unable to discern the difference between correct
information and misinformation or even understand the distinction between truth and deception" (1988, pp. 332–333). They lack the concept
of veracity (1990, pp. 8–9). They lack resources that we possess, the
capacity for knowing or telling that what they are receiving is genuine
information (1990, pp. 162–164). For all these reasons they fail to know
the things they believe.
If we accept these conditions on knowledge, it means that although
poor Fido can get (and use) information, information he clearly needs
to find his food bowl, to avoid obstacles as he moves about the house,
and to get the stick he is told to fetch, he has no way of knowing any
of this. Fido does not know where his food bowl is (although he always
finds it when he is hungry), never knows where the sticks are he is told
to fetch (although he always brings them back), and cannot know where
the doorway is (although he always manages to walk through it and not
8 I assume here that the disagreement between the top-downers and the bottom-uppers,
between internalists and externalists, is not over the question of whether animals can think.
That strikes me as an issue in the philosophy of mind that is seldom, if ever, broached (or
argued for) in the epistemological literature. Until we have convincing grounds for thinking
otherwise, then, I will assume that animals have beliefs (although what beliefs they have
may be difficult to say). If they do not know anything, it is for some other reason.
9 If the information (that P) is not good enough (without the information that it is information) to know P, one wonders what good the information that it is information is without
the added (3rd level) information that this (2nd level) information (about the 1st level
information) is information.
into the adjacent walls). Since he cannot know these things, it follows
that, despite having good eyesight, he cannot (ever!) see where his bowl
is, where the sticks (that he fetches) are, or where the doorway is.10
To my ear, as (I assume) to most ears, all of this sounds most implausible. In fact, it sounds downright false. Obviously Fido can see where
his food bowl is. Of course these children, even very young children,
can see where things are - their toys and dolls, for instance. Only a
philosopher in the grip of theory would think to deny it. If a theory of
knowledge has, as one of its consequences, that animals and children
cannot see where anything is, then it is time to get a new theory.
It is, furthermore, too early to drag in a distinction to rescue the
theory. Too early to insist that . . . well, yes, Fido can see (hence, know)
where his bowl is, or a child can see where her toys are, in some sense
of see (and know), but not in the sense of see (and know) that is
philosophically important, relevant, or interesting (they don't have advanced knowledge or metaknowledge of such things). As I indicated earlier,
if, at this stage of the proceedings, this is to be anything but a question-begging and ad hoc rescue of an internalist theory, we need an argument
- not just a bald claim - that such distinctions (between senses of
"know") actually exist. For the same reason it is also question-begging
(if not blatantly false) to classify such examples as "borderline" and,
hence, not to be used against internalist theories. There is nothing
borderline about these examples - not unless one has already made up
one's mind about what is to count as knowledge.11
Bottom-uppers like myself prefer examples like the ones just given,
examples involving sense perception, because it is here that justification,
inference, reasoning, and evidence (the sorts of things that top-downers
think important) seem least relevant - seem, in fact, totally irrelevant.
We know it, just as Fido knows it, because we can see it. We don't
have to reason about it, think about it, have a justification for believing
it. The justification - if, indeed, this kind of talk even makes sense in
this context - lies in the seeing. As J. L. Austin was fond of pointing
10 It follows on two assumptions: (1) If one sees where X is, then one sees that X is there for
some value of "there" (e.g., under the sofa, in the closet), and (2) if one sees that X is
there, one knows that X is there. Both assumptions strike me as obvious.
11 Lehrer comes close to suggesting that he can classify anything he likes as borderline (and,
hence, ineligible as a counterexample against him) because (1990, p. 32) the reasons for
his classification are "theoretical." Unless I miss something, this is a way of saying that his
theory will decide which examples are relevant to testing his theory. A nice arrangement.
out, when things are in front of your nose, you don't need evidence for
thinking they are there.
Top-downers like Lehrer prefer, of course, to talk about different
examples: Madame Curie discovering radium, Sherlock figuring out
who poisoned Colonel Mustard, and so on. Everyone, for perfectly
understandable reasons, wants to emphasize the cases that best fit their
theory. But although top-downers and bottom-uppers (for obvious theoretical reasons) choose different examples to motivate their analysis,
they should, when talking about the "same" example, at least agree
about whether or not it is a genuine instance of knowledge - whether
or not the person or animal sees, remembers, or knows in some intuitive,
preanalytic sense. If philosophers cannot even agree about the data, about
the classification of cases to be accounted for by a theory of knowledge,
then bickering about the right analysis of knowledge is as silly as it is
fruitless. They are not even talking about the same thing.
So I think it important to agree about cases before we start disagreeing
about the best way to understand them, about the best way to analyze
them. The question, then, is whether Fido can see where his food bowl
is and see this, moreover, in the same sense in which we (adult human
beings) can see where his food bowl is.12 The way we find out (come to
know) where bowls (and a great many other common objects) are
located is (typically) by seeing where they are. That, I should suppose, is
the way Fido finds out (comes to know) where they are. This, I submit,
is the judgment of common sense and common language. It is the way
everybody talks in their unguarded (unphilosophical) moments. In the
absence of (non-question-begging) reasons for rejecting such intuitive
judgments, then, what a theory of knowledge should supply is, among
other things, an account of what it is that Fido has when he knows (by
seeing) where his food bowl is. If Fido doesn't have the capacity to
distinguish truth from deception, if he doesn't know that what he is
getting from the optical input is correct information, then this shows as
clearly as anything could show that such higher-level capacities are not
needed to know.
Lehrer, obviously, disagrees. He will have a chance to tell us why he
disagrees. Since he will have that opportunity, I will attempt to focus
the discussion by considering an example that he (1988, p. 333) adapts
12 We are assuming, of course, that the bowl is in plain sight, that everyone (humans and
Fido) has excellent vision, and so on. No tricks.
one's sources is just what internalism (of the sort favored by Lehrer)
asserts.
Lehrer thinks (and says) that externalists like myself will classify this
case differently, that we will say that Faith knows the pressure is 190. If
this were so, then, of course, we would be disagreeing about cases once
again, and the implication would be that externalists, operating with a
strong theoretical bias, and contrary to commonsense intuitions, are
judging the case so as to fit their theory. But Lehrer is wrong about this.
At least he is wrong about me. The reason he thinks we disagree about
this case is because he has changed the example. He has converted it
into a case where Faith, the person whose knowledge (or lack of it) one
is asked to judge, is being described as quite unreasonable in trusting the
gauge. Why, for example, does she ignore the flashing warning light?
She knows (we are told) that it signals a malfunctioning pressure gauge,
and yet, she inexplicably ignores the light and trusts the gauge. What a
remarkably stupid thing to do. Lehrer is certainly right: it is hard to
credit her with knowledge. I join Keith (and, I assume, common sense)
in saying that in such circumstances Faith doesn't know.
But although we agree about how to classify the example, we disagree
about why Faith doesn't know. Lehrer says it is because Faith doesn't
know she is getting accurate information from the gauge. Externalists,
bottom-uppers like myself, would say that she lacks knowledge because
knowledge requires, among other things,13 getting information (being
properly connected to the facts) in something like normal circumstances,
and Faith's circumstances are not at all normal. It is one thing to see
(and thereby come to know) what time it is by glancing at a familiar
clock situated in a familiar setting (on the wall of one's home, say). It is
quite another to trust the same clock when it is sitting, partially dismantled,
13 Lehrer keeps saying that I think that if a person receives the information that P, then she
knows that P (1990, p. 33; 1988, p. 332). This, of course, is not true. Human beings (not
to mention animals, gauges, computers, etc.) receive enormous quantities of information
every minute. Most of this information does not result in knowledge. Knowledge requires
the receipt of information, yes, but it requires more. It requires that this information cause
belief, and cause it, moreover, in something like normal conditions (see discussion in text).
Lehrer also insists that externalists overlook the fact that knowledge requires not only
receiving information (satisfaction of the externalist condition) but also acceptance of the
fact that one is receiving information (acceptance of the fact that this externalist condition
is satisfied). I have never overlooked this fact. The fact that the information that P causes
a belief that P evinces an acceptance (not, of course, in Lehrer's sense, but in a relevant
ordinary sense) of the fact that one is getting such information. The information doesn't
cause belief in those who do not (in the required sense) accept it as information.
but still running accurately, on a workbench at the local repair shop.
There is no reason why externalists like myself have to treat these cases
in the same way. Although information is still being transmitted, it is no
longer clear that that, in fact, is what is causing belief.14
So externalists, bottom-uppers like myself, can agree with Lehrer that
in his version of the example, Faith does not know. We disagree (not
unexpectedly given our different theories) about why she doesn't
know, but at least we agree about the data - agree about the fact that,
measured against a commonsense yardstick, she doesn't know. What
remains to be settled, and this brings us back closer to Fido's situation,
is whether there is agreement about the example in its original version,
a version that more closely approximates (and was deliberately designed
to approximate) ordinary cases of perceptual knowledge.
In my original version of the example, the attendant, the one who
ignored the warning light and trusted the gauge, was not the same
person as the engineer who installed the safety device. Although the
engineer knows what the flashing light means (and would, were he
behaving reasonably, mistrust the pressure gauge), the attendant, the one
whose knowledge was being assessed, did not. She was, therefore, not
being irrational in trusting the gauge. She was merely trusting a source
of information that had, over the weeks, months, or years she had used
it, proven reliable. She was, in this respect, just like Fido, young children, or a brain-damaged adult. She was just like you and me during
most of our waking lives when we unthinkingly trust what our senses,
our friends, and our books and newspapers tell us. So the question that
it is important to get clear about, the question I want to put to Keith,
the question the example (in its original version) was meant to pose, is
this: assuming a reliable source of information (e.g., the pressure gauge)
and perfectly normal circumstances from the point of view of the potential knower (i.e., my version of the example), does one know when
caused to believe by the reliable source? Does Faith know in my version
of the example?
The existence of a malfunctioning safety device, and the (misleading)
counterevidence it supplies, is really an irrelevant detail. This was originally
14 The discussion about "relevant conditions" in the recent literature is an attempt to specify
the sorts of conditions in which reliable connections (that produce belief) yield knowledge. The counterfactual analyses of conclusive reasons (Dretske 1969) and information
(Dretske 1981) are attempts to use the "logic" of counterfactuals to determine relevant
circumstances.
included merely to make it hard for people like myself (externalists,
bottom-uppers). It makes the case harder for externalists because (I have
found) the presence of this counterevidence (misleading though it be) is
enough to persuade some people to say that Faith does not know (even
when she satisfies most externalist conditions for knowing). If we remove this small detail, though, we get something that is, in most people's opinion at least, a paradigm case of knowledge: Faith seeing (by
the gauge) what the pressure is. If this isn't knowledge, the sort of thing
a theory of knowledge is supposed to count as knowledge, I, for one,
fail to understand the project anymore. If Faith cannot see what the
pressure is (hence, know what the pressure is) by trusting a perfectly
operating gauge in completely routine circumstances, then there isn't
much left for a theory of knowledge to account for. For, as most people
will recognize, the gauge is merely a sensory prosthetic - an extension
of our biological information delivery (= perceptual) systems. It plays
no essential role in the dialectic. If we cannot rely on it as a trusted
provider of information, then it is hard to see why we should be able to
rely on our eyes, ears, and nose. Or, indeed, anything at all.
But if, without checking or conducting special tests, Faith can come
to know by trusting a reliable instrument, an instrument that is, in fact,
delivering the right information, why can't Fido know when he trusts
the deliverances of his equally reliable senses when they are delivering
the required information?
REFERENCES
Dretske, F. 1969. Seeing and Knowing. Chicago, Ill.: University of Chicago Press.
Dretske, F. 1981. Knowledge and the Flow of Information. Boston, Mass.: MIT Press/A Bradford Book.
Lehrer, K. 1988. "Metaknowledge: Undefeated Justification," Synthese 74, 329-347.
Lehrer, K. 1989. "Knowledge Reconsidered," in Knowledge and Skepticism, Marjorie Clay & Keith Lehrer (eds.). Boulder, Colo.: Westview Press.
Lehrer, K. 1990. Theory of Knowledge. Boulder, Colo.: Westview Press.
Part Two
Perception and Experience
6
Simple Seeing
I met Virgil Aldrich for the first time in the fall of 1969 when I arrived
in Chapel Hill to attend a philosophy conference. My book, Seeing and
Knowing,1 had just appeared a few months earlier. Virgil greeted me with
a copy of it under his arm, whisked me off to a quiet corner in a local
coffee shop, and proceeded to cross-examine me on its contents.
I confess to remembering very little about this conversation. I was, of
course, flattered by the attention, and delighted to see his copy of the
book full of underlining and marginalia. He had obviously been studying
it. This fact so overwhelmed me that I found it difficult to keep my
mind on the conversation. What could I have written that he found so
absorbing? Did he like it? Did he agree with me? It was hard to tell.
Since then I have discovered what provoked Virgil's interest. It seems
we disagree about what seeing amounts to - what it means, or what is
essential to, our seeing things. This, by itself, is not particularly noteworthy since (as I have also discovered) many, and sometimes it seems
most, philosophers disagree with me on this topic. The significance of
Virgil's and my disagreement about visual perception lies not in the fact
that we disagree, but in how we disagree. For it turns out that we are
more or less natural allies in this area. We are both trying to resist what
we view as a mistaken conflation of perception with conception, both
trying to preserve the distinction between sentience and sapience, both
trying to isolate and describe a way of seeing, simple seeing as it has
Reprinted from D. F. Gustafson and B. L. Tapscott, eds., Body, Mind, and Method, pp. 1-15, copyright 1979 by Kluwer Academic Publishers, with kind permission from Kluwer
Academic Publishers.
1 Chicago (1969).
2 I refer, in particular, to "Visual Noticing Without Believing," Mind, Vol. LXXXIII, No.
332, October 1974; "Sight and Light," American Philosophical Quarterly, Vol. 11, No. 4,
October 1974; "On Seeing What Is Not There," Rice University Studies, Vol. 58, No. 3,
Summer 1972; and his critical review of my book in The Journal of Philosophy, Vol. LXVII,
No. 23, December 10, 1970.
3 The requisite distinctions get messy, but I want to talk about the verb "to see" insofar as it
takes a direct object and, more specifically, a concrete noun phrase as its direct object. I
will not be concerned with seeing, say, the pattern, the answer, the problem or the trouble.
4 There are a variety of noun phrases that, when used as direct objects of the verb "to see,"
give the resulting statement epistemic implications. Aside from those mentioned in the
previous footnote, we have the color, the shape, the size, and so on. Psychologists' interest in
the properties or dimensions of things (rather than in the things having these properties or
dimensions) tends, I think, to mislead them into thinking that all statements about what we
see have cognitive implications, that all seeing is knowing (or believing). There is, however,
a significant difference between seeing the round X and seeing the roundness of X (its
shape).
then our question about simple seeing is a question not about visual
perception, but about whatever it is that we use the ordinary verb "to
see" (followed by a concrete noun phrase) to describe. What are we
doing when we see something? Or, if this is not something we do, what
conditions must obtain for us to see a robin, a sunset, or a star? In
particular, can one see a robin without perceiving it? This question may
sound a bit odd, but only because one is accustomed to conflating seeing
with visual perception. But if perception is (either by stipulation or common understanding) cognitively loaded, if some degree of recognition
or categorization is essential to our perception of things, it is by no
means obvious that one must perceive something in order to see it.
Quite the contrary. One learns to perceive (i.e., recognize, identify,
classify) those things that, even before learning takes place, one can see.
What else, one might ask, does one learn to identify?
A second way to muddle issues is by interpreting claims about simple
seeing, not as claims about our ordinary (mature) way of seeing robins,
trees, and people, but as claims about an underdeveloped stage of consciousness, a dim sort of visual awareness, that lower organisms (perhaps)
experience but that human beings (if they experience it at all) quickly
outgrow during infancy. This confusion, I suspect, is nourished by the
failure to distinguish between quite different theses. We have, on the
one hand, the claim that:
(1) Simply seeing X is compatible with no beliefs about X.
On the other hand we have such claims as:
(2a) Simply seeing X is incompatible with beliefs about X,
and
(2b) Simply seeing X occurs only if, as a matter of fact, the seer has no beliefs
about X.
It is (1) that gives expression to the relevant view about simple seeing.
At least it gives expression to the only view I have ever propounded and
thought worthy of defense despite persistent efforts to interpret me
otherwise. To say (as I did in Seeing and Knowing) that seeing a robin
(nonepistemically) is belief neutral is not to say that one cannot see a
robin in this way with a belief to the effect that it is a robin, a bird, or a
thing. It is to say that your seeing the robin is independent of, a relationship that can obtain without, such beliefs.
In my correspondence and discussions with Virgil I have often com-
That this was my intention is clear from the fact that my analysis of primary epistemic seeing (seeing that so-and-so is such-and-such) requires the subject to see (nonepistemically) so-and-so.
this does not make you an authority about what you see. In situations
of this sort, one sees X without believing one sees X while believing,
in fact, that one does not see X.
Although this is perfectly obvious on one level, the illusion persists
that if S does not believe she sees X, or (worse) believes she does not
see X, then there are special problems associated with the claim that she
does, nonetheless, see X. If, upon returning from the dresser drawer, a
man sincerely asserts that he did not see the cuff link, if this is what he
really believes, then he must not have seen it. This, though, is the
evidential sense of "must," the sense in which we might say, "Since he
does not believe he has ever been to Hawaii, he must not have been
there." S's beliefs on such matters are (at best) only evidence for what
she sees. If S does not believe she saw X, the quality of this belief as
evidence that she did not see X depends on whether S knows what Xs
look like, the conditions (normal or abnormal) in which she saw X, and
a host of other cognitive factors that are independent of whether or not
she saw X. What a person believes (about what she sees), and what she
is consequently prepared to assert or deny about what she sees, is conditioned by the conceptual and cognitive resources she has available for
picking out and identifying what she sees. If she does not know what a
marsupial is, she isn't likely to believe that she sees one. And if she
mistakenly believes that kangaroos are the only marsupials, she might
well believe she sees no marsupials when, in fact, she sees them (opossums) all over the yard.
There are strong philosophical motivations for denying any belief-neutral form of seeing (simple seeing). The inspiration for such denials
comes, I think, from positivistic and (more specifically) behavioristic
sources. If S's seeing X is only contingently related to S's beliefs about X,
if she could see X with no beliefs about X, then there is no secure basis
in S's behavior (linguistic or otherwise) for determining whether or not
she does see X. Seeing is deprived of its logical or conceptual links with
the observational data base.
This epistemological consequence is alarming enough to goad some
philosophers into interpreting good eyesight as a cognitive capacity so as
to secure the requisite links with behavior. If seeing X can be interpreted
as a form of believing (if not believing that it is X, at least believing that
it is Y where "Y" is a description that applies to X), then, since believing
has behavioral criteria, seeing does also. One of the currently fashionable
ways of achieving this linkage is by identifying psychological states with
functional states of the organism. Since a functional state is one that
10 This sort of functional analysis can be found in D. C. Dennett's Content and Consciousness,
London, 1969. See also Hilary Putnam's "The Nature of Mental States," in Materialism
and the Mind-Body Problem, edited by David Rosenthal, Englewood Cliffs, N.J., 1971,
and David Lewis's "An Argument for the Identity Theory" in the same volume, reprinted
from The Journal of Philosophy, Vol. LXIII, No. 1, January 6, 1966.
11 This example is taken from Fodor, Psychological Explanation, New York, 1968. For my
purposes it makes no difference whether we refer to the cam shaft as a cam shaft or as a
valve lifter (i.e., in explicitly functional terms) as long as it is clear that we are referring to
the cam shaft (and not some larger functional unit).
12 I have tried to do a little better in "The Role of the Percept in Visual Cognition," Minnesota Studies in the Philosophy of Science, Vol. IX, edited by Wade Savage, Minneapolis, 1978.
13 This example is an adaptation of one used by my colleague, Dennis Stampe, to illustrate a similar point.
copy is indistinguishable from the original. I place the copy (thoughtlessly confusing it with the original) neatly on top of the original (so that
only the copy is visible), gaze down at it (them?), and notice what I take
to be a smudge on the top sheet. As it turns out, I am mistaken (it was
only a shadow) but, as chance would have it, the original (which I
cannot see) is smudged in just the way I thought the top sheet was. I
obviously have a number of false beliefs: viz., that this sheet is smudged,
that this sheet is the original letter. What makes these beliefs false is the
fact that they are (contrary to what I believe) beliefs about the copy.
What, then, makes these beliefs beliefs about the copy and not about
the original?
It will come as no surprise to find that the answer to this question is
that my beliefs are about the first sheet (the copy), not the second (the
original), because I see the first sheet, not the second, and my belief is
about what I see. This answer isn't very illuminating. For we are now
trying to say what it is that constitutes one's seeing the first sheet, what
makes the first (not the second) sheet the perceptual object and, therefore,
the thing about which I have a belief. Since what I believe (that it is
smudged, that it is the original letter) about the perceptual object is true
of something (the second sheet) that is not the perceptual object, what I
believe about what I see does not itself determine what I see. What does
determine this?
It will not do to say that I am seeing the copy because I am looking at
the copy (not the original) since any sense of "looking at" that does not
beg the question (e.g., you can only look at things you see) is a sense in
which I am also looking at the original.14 If the copy was removed
(revealing, thereby, the original letter) my experience would not change
in any qualitative respect. Convergence, accommodation, and focus
would remain the same since the original is (for all practical purposes) in
the same place as the copy. Obviously, then, these factors, no more than
belief, determine what it is that I see. What does?
14 Sometimes Virgil seems to suggest that looking at X is sufficient for seeing X. For example,
he says that there is a weak sense of simply seeing something in which "to be looking at
something is to see it." "Sight and Light," American Philosophical Quarterly, October 1974,
note on p. 320. I doubt whether there is such a sense of the verb "to see." He goes on to
say, however, that he distinguishes seven senses of "simply see." Since it isn't clear to me
whether these are supposed to be distinct senses of the ordinary verb "to see" (or special
senses of a technical term "simply see"), and since I am not sure I understand Virgil's
sense of "looking at," I hesitate to pin this view on him.
It may be said that there is one belief I have that is true of, and only
true of, the thing I see: namely, that I see it (assuming I see only one
thing). This gets us nowhere. For what makes my belief (that I see it)
true of the copy, not the original? What makes the copy, not the original,
the referent of "it"? Answer: the fact that I see the copy (not the
original) and what I refer to with "it" is what I see. Since this is so, we
are back to where we started: what singles out the copy as the perceptual
object, as the thing I see?
I have no doubt exhausted the patience of causal theorists by this
time. Isn't it clear, they will tell us, that the perceptual object is determined by the causal antecedents of our visual experience? Light is being
reflected from the copy, not the original, and since it is this object that is
(causally) responsible for the experience I am having, it is this object that
I see. The fact that removal of the copy (exposing, thereby, the original
letter) would leave the experience qualitatively unchanged, would perhaps (if it was removed without my knowledge) leave all my beliefs
unchanged, is immaterial. It would change what I see because it would
change the causal origin of the resultant experience.
Whether a causal account is ultimately satisfactory or not (I shall have
more to say about this in a moment), it does succeed in driving a wedge
between perception and conception. It divorces questions about what
we see from questions about what, if anything, we know or believe
about what we see. It distinguishes questions about the etiology of our
experience from questions about the effects of that experience. Insofar as
it achieves this separation it succeeds in capturing the essence of simple
seeing.
The causal theory gets this part of the story correct, and it is for this
reason that it represents such an attractive candidate for the analysis of
simple seeing. It explains, among other things, how it is possible to have
all one's beliefs about X false while still being about X. What makes my
beliefs about the copy is the fact that I stand to the copy in the appropriate causal relation, the relation that, according to this view of things,
constitutes my seeing the copy. What makes all my beliefs false is the fact
that nothing I believe (e.g., that it is smudged, that it is the original
letter) is true of the thing to which I stand in this causal relation.
The difficulties in articulating a full-dress causal analysis are well
known, and I do not intend to rehearse them here. I mention two
problems only for the purpose of indicating why, despite its attractive
features, I do not personally subscribe to such a view. There is, first, the
Sensing and Knowing, edited by Robert J. Swartz, Garden City, N.Y., 1965.
16 See, for example, "The Quantum Theory of Light and the Psycho-Physiology of Vision"
in Psychology: The Study of a Science, edited by Sigmund Koch, New York, 1959.
17 One standard view of causality makes the cause part of some nomically sufficient condition
for the effect (the so-called Reliability Analysis). Since (if we take contemporary physics
seriously) there is no sufficient condition for a photon's absorption, nothing causes it to be
absorbed on this analysis of causality. This, then, constitutes a break in the causal chain
between subject and object.
smell, taste, or touch X.18 If we do not press too hard on the idea of
causality, we may say that this information is delivered by means of
causal mechanisms and processes. When we see X, X (or some event
associated with the presence of X) initiates a sequence of events that
culminates in a distinctive sort of experience, the sort we call a visual
experience. Typically, this experience embodies information about the
color, shape, size, position, and movement of X. The role or function
of the sensory systems in the total cognitive process is to get the message
in so that a properly equipped receiver can modulate her responses to
the things about which she is getting information. The sensory system is
the postal system in this total cognitive enterprise. It is responsible for
the delivery of information, and its responsibility ends there. What we do
with this information, once received, whether we are even capable of
interpreting the messages so received, are questions about the cognitive-conceptual resources of the perceiver. If you don't take the letters from
the mailbox, or if you can't understand them once you do, don't blame
the postal system. It has done its job. The trouble lies elsewhere.
This, in barest outline, is an information-theoretical account of simple
seeing. It differs from a causal account not by denying that causal processes are at work in the delivery of information, but by denying that
this is the essence of the matter. Seeing X is getting information (coded
in a certain way) about X, and if information about X can be delivered
by noncausal processes, so much the worse for causality. If the processes
by means of which we see very faint stars are infected with the uncertainty, the inherent randomness, of quantum phenomena, and if such
processes are not to be counted as causal in nature, then we see things
to which we do not stand in the appropriate causal relation. But we still
see them, and the reason we do is because the experience that is generated by the occurrence of these inherently random events embodies
information about the distant stars (e.g., where they are).
I should want to say the same thing about the factors that Virgil finds
so important to our seeing things: ocular focusing, binocular convergence, accommodation, illumination, and so on. Doubtless these things
18 The key to vision is not what information is delivered (information about the color, shape,
and size of things) but how this information is delivered. We can, of course, see that
something is hot, hard, or salty (by looking at the litmus paper). What makes these cases
of seeing is that this information is delivered in characteristically visual form (roughly: in
terms of the way things look rather than the way they feel or taste). Virgil ("Sight and
Light," p. 319) attributes to me the view that seeing is determined by what we can tell
(about objects) rather than (as I believe) how we tell it.
are important. Ask an ophthalmologist. But, again, they don't touch the
essence of simple seeing. They don't isolate what is essential to our
seeing things. As things stand, I cannot see much if my eyeballs point in
different directions. But a fish can. I cannot see the window frame very
clearly if I focus on the trees I see through the window. But I do,
sometimes, see the window frame and the trees. I may not be able to
identify, or tell you much about, the other children playing in the
schoolyard if I concentrate my attention on the little girl jumping rope.
But it seems to me obvious that I see a great many children besides the
one I watch. Look at a flag for a few moments. How many stars do you
see? If the flag has fifty stars, and none of them are obscured by folds in
the cloth or by other objects, it seems reasonable enough to say you saw
them all. All fifty. Which one or ones did you notice? It sounds odd (at
least to my ear) to say that you noticed every star on the flag, but not at
all odd to say you saw them all.19 What makes our visual experience the
rich and profuse thing we know it to be is that we see more than we can
ever notice or attend to. The sensory systems, and in particular the visual
system, deliver more information than we can ever (cognitively) digest.
The postal system deposits junk mail at a rate that exceeds our capacity
to read it.
In speaking of sensory systems in the way that I have, as systems
responsible for the delivery of information, it should be emphasized that
the term "information" is being used here in the way we speak of light
(from a star) as carrying information about the chemical constitution of
the star or the way the height of a mercury column (in a thermometer)
carries information about the temperature. These events or states of
affairs carry or embody information about something else but, of course,
no one may succeed in extracting that information. It is in this sense that
our visual (auditory, tactual, etc.) experience embodies information
about our surroundings. It can carry this information without the subject's (undergoing the experience) ever extracting that information for
19 If I understand him correctly, Virgil would like to deny that peripherally seen objects are
really seen or that we see the other stars on the flag if we notice, say, only the slightly off-color star on the upper right. "Acute full-fledged seeing, in the basic sense, requires
undivided visual attention, and this involves full ocular concentration" ("Visual Noticing
Without Believing," p. 521). Once again I am reluctant to attribute this view to him
since I don't think I understand what he means by noticing something. Since he also
distinguishes between "consummated" and "unconsummated" basic seeing, and takes
some seeing as more basic than others, the issues are clouded by this fragmentation in senses
of "seeing."
7
Conscious Experience1
There is a difference between hearing Clyde play the piano and seeing
him play the piano. The difference consists in a difference in the kind of
experience caused by Clyde's piano playing. Clyde's performance can
also cause a belief - the belief that he is playing the piano. A perceptual
belief that he is playing the piano must be distinguished from a perceptual experience of this same event. A person (or an animal, for that
matter) can hear or see a piano being played without knowing, believing, or judging that a piano is being played. Conversely, a person (I do
not know about animals) can come to believe that Clyde is playing the
piano without seeing or hearing him do it - without experiencing the
performance for herself.
This distinction between a perceptual experience of x and a perceptual belief about x is, I hope, obvious enough. I will spend some time
enlarging upon it, but only for the sake of sorting out relevant interconnections (or lack thereof). My primary interest is not in this distinction
but, rather, in what it reveals about the nature of conscious experience
and, thus, consciousness itself. For unless one understands the difference
between a consciousness of things (Clyde playing the piano) and a
consciousness of facts (that he is playing the piano), and the way this
difference depends, in turn, on a difference between a concept-free
mental state (e.g., an experience) and a concept-charged mental state
(e.g., a belief), one will fail to understand how one can have conscious
experiences without being aware that one is having them. One will fail
1 I am grateful to Berent Enç, Güven Güzeldere, Lydia Sanchez, Ken Norman, David Robb,
and Bill Lycan for critical feedback. I would also like to thank the Editor and anonymous
referees of Mind for a number of very helpful suggestions.
(1) S sees (hears, etc.) x (or that P) => S is conscious of x (that P).7
In this essay I shall be mainly concerned with perceptual forms of consciousness. So when I speak of S's being conscious (or aware) of something I will have in mind S's seeing, hearing, smelling, or in some way
sensing a thing (or fact).
Consciousness of facts implies a deployment of concepts. If S is aware
that x is F, then S has the concept F and uses (applies) it in his awareness
of x.8 If a person smells that the toast is burning, thus becoming aware
that the toast is burning, this person applies the concept burning (perhaps
also the concept toast) to what he smells. One cannot be conscious that
the toast is burning unless one understands what toast is and what it
means to burn - unless, that is, one has the concepts needed to classify
objects and events in this way. I will follow the practice of supposing
that our awareness of facts takes the form of a belief. Thus, to smell that
the toast is burning is to be aware that the toast is burning is to believe
that the toast is burning. It is conventional in epistemology to assume
that when perceptual verbs take factive nominals as complements, what
is being described is not just belief but knowledge. Seeing or smelling
that the toast is burning is a way of coming to know (or, at least, verifying
the knowledge) that the toast is burning. It will be enough for present
purposes if we operate with a weaker claim: that perceptual awareness
of facts is a mental state or attitude that involves the possession and use
of concepts, the sort of cognitive or intellectual capacity involved in
thought and belief. I will, for convenience, take belief (that P) as the
normal realization of an awareness that P.
Perceptual awareness of facts has a close tie with behavior - with, in
particular (for those who have language), an ability to say what one is
aware of. This is not so with a consciousness of things. One can smell
or see (hence, be conscious of) burning toast while having little or no
7 I will not try to distinguish direct from indirect forms of perception (and, thus, awareness).
We speak of seeing Michael Jordan on TV. If this counts as seeing Michael Jordan, then
(for purposes of this essay) it also counts as being aware or conscious of Michael Jordan (on
TV). Likewise, if one has philosophical scruples about saying one smells a rose or hears a
bell - thinking, perhaps, that it is really only scents and sounds (not the objects that give
off those scents or make those sounds) that one smells and hears - then when I speak of
being conscious of a flower (by smelling) or bell (by hearing), one can translate this as
being indirectly conscious of the flower via its scent and the bell via the sound it makes.
8 Generally speaking, the concepts necessary for awareness of facts are those corresponding
to terms occurring obliquely in the clause (the that-clause) describing the fact one is aware
of.
awareness of some (unspecified) fact. The abstract noun phrase or interrogative nominal stands in for some factive clause. Thus, to see (be
conscious of) the difference between A and B is to see (be conscious)
that they differ. If the problem is the clogged drain, then to be aware of
the problem is to be aware that the drain is clogged. To be aware of the
problem, it isn't enough to be aware of (e.g., to see) the thing that is
the problem (the clogged drain). One has to see (the fact) that it is
clogged. Until one becomes aware of this fact, one hasn't become aware
of the problem. Likewise, to see where the cat is hiding is to see that it
is hiding there, for some value of "there."
This can get tricky and is often the source of confusion in discussing
what can be observed. This is not the place for gory details, but I must
mention one instance of this problem since it will come up again when
we discuss which aspects of experience are conscious when we are
perceiving a complicated scene. To use a traditional philosophical example, suppose S sees a speckled hen on which there are (on the facing
side) twenty-seven speckles. Each speckle is clearly visible. Not troubling
to count, S does not realize that (hence, is not aware that) there are
twenty-seven speckles. Nonetheless, we assume that S looked long
enough, and carefully enough, to see each speckle. In such a case,
although S is aware of all twenty-seven speckles (things), he is not aware
of the number of speckles because being aware of the number of speckles
requires being aware that there is that number of speckles (a fact), and S
is not aware of this fact.9 For epistemological purposes, abstract objects
are disguised facts; you cannot be conscious of these objects without
being conscious of a fact.
(2) is a thesis about concrete objects. The values of x are things as this
was defined earlier. Abstract objects do not count as things for purposes
of (2). Hence, even though one cannot see the difference between A
and B without seeing that they differ, cannot be aware of the number of
speckles on the hen without being aware that there are twenty-seven,
and cannot be conscious of an object's irregular shape without being
conscious that it has an irregular shape, this is irrelevant to the truth of
(2).
As linguists (e.g., Lees, 1963, p. 14) observe, however, abstract nouns
may appear in copula sentences opposite both factive (that) clauses and
concrete nominals. We can say that the problem is that his tonsils are
inflamed (a fact); but we can also say that the problem is, simply, his
(inflamed) tonsils (a thing). This can give rise to an ambiguity when the
abstract noun is the object of a perceptual verb. Although it is, I think,
normal to interpret the abstract noun as referring to a fact in perceptual
contexts, there exists the possibility of interpreting it as referring to a
thing. Thus, suppose that Tom at time t1 differs (perceptibly) from Tom
at t2 only in having a moustache at t2. S sees Tom at both times but does
not notice the moustache - is not, therefore, aware that he has grown a
moustache. Since, however, S spends twenty minutes talking to Tom in
broad daylight, it is reasonable to say that although S did not notice the
moustache, he (must) nonetheless have seen it.10 If S did see Tom's
moustache without (as we say) registering it at the time, can we describe
S as seeing, and thus (in this sense) being aware of, a difference in Tom's
appearance between t1 and t2? In the factive sense of awareness (the
normal interpretation, I think), no; S was not aware that there was a
difference. S was not aware at t2 that Tom had a moustache. In the thing
sense of awareness, however, the answer is yes. S was aware of the
moustache at t2, something he was not aware of at t1, and the moustache
is a difference in Tom's appearance.
If, as in this example, "the difference between A and B" is taken to
refer not to the fact that A and B differ, but to a particular element or
condition of A and B that constitutes their difference, then seeing the
difference between A and B would be seeing this element or condition - a thing, not a fact. In this thing sense of "the difference" a person or
animal who had not yet learned to discriminate (in any behaviorally
relevant way) between (say) two forms might nonetheless be said to see
(and in this sense be aware of) the difference between them if it saw the
parts of one that distinguished it from the other. When two objects
differ in this perceptible way, one can be conscious of the thing (speckle,
line, star, stripe) that is the difference without being conscious of the
difference (= conscious that they differ). In order to avoid confusion
about this critical (for my purposes) point, I will, when speaking of our
awareness or consciousness of something designated by an abstract noun
or phrase (the color, the size, the difference, the number, etc.), always
specify whether I mean thing-awareness or fact-awareness. To be thing-
10 If it helps, the reader may suppose that later, at t3, S remembers having seen Tom's
moustache at t2 while being completely unaware at the time (i.e., at t2) that Tom had a
moustache. Such later memories are not essential (S may see the moustache and never
realize he saw it), but they may, at this point in the discussion, help calm verificationists'
anxieties about the example.
14 For purposes of illustrating distinctions I use a simple causal theory of knowledge (to
know that P is to be caused to believe that P by the fact that P) and perception (to
perceive x is to be caused to have an experience by x). Although sympathetic to certain
versions of these theories, I wish to remain neutral here.
[Figure 1: Alpha and Beta]
[Figure 2: Spot]
Alpha and Beta. If the figure is being held at arm's length, though, this
should not be necessary, although it may occur anyway via the frequent
involuntary saccades the eyes make. A second or two should suffice.
During this brief interval some readers may have noticed the difference between Alpha and Beta. For expository purposes, I will assume
no one did. The difference is indicated in Figure 2. Call the spot, the
one that occurs in Alpha but not Beta, Spot.
According to my assumptions, then, everyone (when looking at Figure 1) saw Spot. Hence, according to (1), everyone was aware of the
thing that constitutes the difference between Alpha and Beta. According
to (4), then, everyone consciously experienced (i.e., had a conscious
experience of) the thing that distinguishes Alpha from Beta. Everyone,
therefore, was thing-aware, but not fact-aware, of the difference between Alpha and Beta. Spot, if you like, is Alpha's moustache.
Let E(Alpha) and E(Beta) stand for one's experience of Alpha and
one's experience of Beta, respectively. Alpha and Beta differ; Alpha has
Spot as a part; Beta does not. E(Alpha) and E(Beta) must also differ.
E(Alpha) has an element corresponding to (caused by) Spot. E(Beta)
does not. E(Alpha) contains or embodies, as a part, an E(Spot), an
experience of Spot, while E(Beta) does not. If it did not, then one's
experience of Alpha would have been the same as one's experience of
Beta and, hence, contrary to (4), one would not have seen Spot when
looking at Alpha.16
One can, of course, be conscious of things that differ without one's
experience of them differing in any intrinsic way. Think of seeing
16 I do not think it necessary to speculate about how E(Spot) is realized or about its exact
relation to E(Alpha). I certainly do not think E(Spot) must literally be a spatial part of
E(Alpha) in the way Spot is a spatial part of Alpha. The argument is that there is an
intrinsic difference between E(Alpha) and E(Beta). E(Spot) is just a convenient way of
referring to this difference.
raised. Seeing the two fingers is not like seeing a flock of geese (from a
distance) where individual geese are "fused" into a whole and not seen.
In the case of the fingers, one sees both the finger on the left and the
finger on the right. Quite a different experience from seeing only the
finger on the left. When the numbers get larger, as they do with Alpha
and Beta, the experiences are no longer discernibly different to the
person having them. Given that each spot is seen, however, the experiences are, nonetheless, different. Large numbers merely make it harder
to achieve fact-awareness of the differences on the part of the person
experiencing the differences. E(Spot) is really no different than the
difference between experiencing one finger and two fingers in broad
daylight. The only difference is that in the case of Alpha and Beta there
is no fact-awareness of the thing that makes the difference.17
Since the point is critical to my argument, let me emphasize the last
point. In speaking of conscious differences in experience it is important
to remember that one need not be conscious of the difference (=
conscious that such a difference exists) in order for such differences to
exist. Readers who noticed a difference between Alpha and Beta were,
thereby, fact-aware of the difference between Alpha and Beta. Such
readers may also have become fact-aware (by inference?) of the difference between their experience of Alpha and their experience of Beta -
that is, the difference between E(Alpha) and E(Beta). But readers who
were only thing-aware of the difference between Alpha and Beta were
not fact-conscious of the difference between Alpha and Beta. They were
not, therefore, fact-conscious of any difference between E(Alpha) and
E(Beta) - their conscious experience of Alpha and Beta. These are
conscious differences of which no one is conscious.
In saying that the reader was conscious of Spot and, hence, in this
17 Speaking of large numbers, Elizabeth, a remarkable eidetiker (a person who can maintain
visual images for a long time) studied by Stromeyer and Psotka (1970), was tested with
computer-generated random-dot stereograms. She looked at a 10,000-dot pattern for one
minute with one eye. Then she looked at another 10,000-dot pattern with the other eye.
Some of the individual dots in the second pattern were systematically offset so that a
figure in depth would emerge (as in using a stereoscope) if the patterns from the two eyes
were fused. Elizabeth succeeded in superimposing the eidetic image that she retained from
the first pattern over the second pattern. She saw the figure that normal subjects can see
only by viewing the two patterns (one with each eye) simultaneously.
I note here that to fuse the two patterns, the individual dots seen with one eye must
somehow be paired with those retained by the brain (not the eye; this is not an afterimage) from the other eye.
sense, the difference between Alpha and Beta without being conscious
of the fact that they differed, we commit ourselves to the possibility of
differences in conscious experience that are not reflected in conscious
belief. Consciousness of Spot requires a conscious experience of Spot, a
conscious E(Spot); yet, there is nothing in one's conscious beliefs - either about Spot, about the difference between Alpha and Beta, or
about the difference between E(Alpha) and E(Beta) - that registers this
difference. What we have in such cases is internal state consciousness
with no corresponding (transitive) creature consciousness of the conscious
state.18 With no creature consciousness we lack any way of discovering,
even in our own case, that there exists this difference in conscious state. To
regard this as a contradiction is merely to confuse the way an internal
state like an experience can be conscious with the way the person who
is in that state can be, or fail to be, conscious of it.
It may be supposed that my conclusion rests on the special character
of my example. Alpha contains a numerically distinct element, Spot, and
our intuitions about what is required to see a (distinct) thing are,
perhaps, shaping our intuitions about the character of the experience
needed to see it. Let me, therefore, borrow an example from Irvin Rock
(1983). Once again, the reader is asked to view Figure 3 (after Rock
1983, p. 54) for a second and then say which, Alpha or Beta at the
bottom, is the same as the figure shown at the top.
As closer inspection reveals, the upper left part of Alpha contains a
few wiggles found in the original but not in Beta. Experimental subjects
asked to identify which form it was they had seen did no better than
chance. Many of them did not notice that there were wiggles on the
figure they were shown. At least they could not remember having seen
them. As Rock (1983, p. 55) observes:
Taken together, these results imply that when a given region of a figure is a
nonconsequential part of the whole, something is lacking in the perception of
it, with the result that no adequate memory of it seems to be established.
No adequate memory of it is established because, I submit, at the time
the figure is seen there is no fact-awareness of the wiggles. You cannot
remember that there are wiggles on the left if you were never aware that
18 I return, in the next section, to the question of whether we might not have thing-awareness of E(Spot) - that is, the same kind of awareness of the difference between
E(Alpha) and E(Beta) as we have of the difference between Alpha and Beta.
[Figure 3: Alpha and Beta (after Rock 1983, p. 54)]
there were wiggles on the left.19 Subjects were (or may well have been)
aware (thing-aware) of the wiggles (they saw them), but never
became aware that they were there. The wiggles are what Spot (or
Tom's moustache) is: a thing one is thing-aware of but never notices.
What is lacking in the subject's perception of the figure, then, is an
awareness of certain facts (that there are wiggles on the upper left), not
(at least not necessarily) an awareness of the things (the wiggles) on the
left.
In some minds the second example may suffer from the same defects
as the first: it exploits subtle (at least not easily noticeable) differences in
detail of the object being perceived. The differences are out there in the
objects, yes, but who can say whether these differences are registered in
here, in our experience of the objects? Perhaps our conviction (or my
conviction) that we do see (and, hence, consciously experience) these
points of detail, despite not noticing them, is simply a result of the fact
that we see figures (Alpha and Beta, for instance) between which there
are visible differences, differences that could be identified (noticed) by an
appropriate shift of attention. But just because the details are visible does
not mean that we see them or, if we do, that there must be some
intrinsic (conscious) difference in the experience of the figures that differ
in these points of detail.
19 Although there may be other ways of remembering the wiggles. To use an earlier
example, one might remember seeing Tom's moustache without (at the time) noticing it
(being fact-aware of it). Even if one cannot remember that Tom had a moustache (since
one never knew this), one can, I think, remember seeing Tom's moustache. This is the
kind of memory (episodic vs. declarative) involved in a well-known example: remembering how many windows there are in a familiar house (e.g., the house one grew up in) by
imagining oneself walking through the house and counting the windows. One does not,
in this case, remember that there were twenty-three windows, although one comes to
know that there were twenty-three windows by using one's memory.
This is a way of saying that conscious experiences, the sorts of experiences you have when looking around the room, cannot differ unless
one is consciously aware that they differ. Nothing mental is to count as
conscious (no state consciousness) unless one is conscious of it (no state consciousness without
creature consciousness). This objection smacks of verificationism, but
calling it names does nothing to blunt its appeal. So I offer one final
example. It will, of necessity, come at the same point in a more indirect
way. I turn to perceptually salient conditions, conditions it is hard to
believe are not consciously experienced. In order to break the connection between experience and belief, between thing-awareness and factawareness, then, I turn to creatures with a diminished capacity for factawareness.20
Eleanor Gibson (1969, p. 284), in reporting Klüver's studies with
monkeys, describes a case in which the animals are trained to the larger
of two rectangles. When the rectangles are altered in size, the monkeys
continue to respond to the larger of the two whatever their absolute
size happens to be. In Klüver's words, they "abstract" the LARGER
THAN relation. After they succeed in abstracting this relation, and
when responding appropriately to the larger (A) of two presented rectangles (A and B), we can say that they are aware of A, aware of B (thing-awareness), and aware that A is larger than B (fact-awareness). Some
philosophers may be a little uncomfortable about assigning beliefs to
monkeys in these situations, uncomfortable about saying that the monkey is aware that A is larger than B, but let that pass. The monkeys at
least exhibit a differential response, and that is enough. How shall we
describe the monkeys' perceptual situation before they learned to abstract
this relation? Did the rectangles look different to the monkeys? Was there
any difference in their experience of A and B before they became aware
that A was larger than B? We can imagine the difference in size to be as
great as we please. They were not fact-aware of the difference, not
aware that A is larger than B, to be sure. But that isn't the question. The
question is: were they conscious of the condition of A and B that, so to
speak, makes it true that A is larger than B?21 Does their experience of
objects change when, presented with two objects of the same size, one of these objects expands, making it much larger than the other? If not, how could these animals ever learn to do what they are being trained to do - distinguish between A's being larger than B and A's not being larger than B?
It seems reasonable to suppose that, prior to learning, the monkeys
were thing-aware of a difference that they became fact-aware of only
after learning was complete. Their experience of A and B was different,
consciously so, before they were capable of exhibiting this difference in
behavior. Learning of this sort is simply the development of fact-awareness from thing-awareness.
The situation becomes even more compelling if we present the monkeys with three rectangles and try to get them to abstract the INTERMEDIATE IN SIZE relation. This more difficult problem proves capable of solution by chimpanzees, but monkeys find it extremely difficult.
Suppose monkey M cannot solve it. What shall we say about M's
perceptual condition when he sees three rectangles, A, B, and C, of
descending size? If we use behavioral criteria for what kind of facts M is
conscious of and assume that M has already mastered the first abstraction
(the LARGER THAN relation), M is aware of the three rectangles, A,
B, and C. M is also aware that A is larger than B, that B is larger than
C, and that A is larger than C. M is not, however, aware that B is
INTERMEDIATE IN SIZE even though this is logically implied by
the facts he is aware of. Clearly, although M is not (and, apparently,
cannot be made) aware of the fact that B is intermediate in size, he is
nonetheless aware of the differences (A's being larger than B, B's being
larger than C) that logically constitute the fact that he is not aware of.
B's being intermediate in size is a condition the monkey is thing-aware
of but cannot be made fact-aware of. There are conscious features of the
animal's experiences that are not registered in the animal's fact-awareness
and, hence, not evinced in the animal's deliberate behavior.
4. WHAT, THEN, MAKES EXPERIENCES CONSCIOUS?
We have just concluded that there can be conscious differences in a
person's experience of the world and, in this sense, conscious features
of her experience of which that person is not conscious. If this is true,
then it cannot be a person's awareness of a mental state that makes that
state conscious. E(Spot) is conscious, and it constitutes a conscious difference between E(Alpha) and E(Beta) even though no one, including
[...]
mental state conscious, then the inner sense theory has no grounds for
saying that E(Spot) is not conscious. For a person might well be thing-aware of E(Spot) - thus making E(Spot) conscious - just as he is thing-aware of Spot, without ever being fact-aware of it. So on this version of
the spotlight theory, a failure to realize, a total unawareness of the fact
that there is a difference between E(Alpha) and E(Beta), is irrelevant to
whether there is a conscious difference between these two experiences.
This being so, the inner sense theory of what makes a mental state
conscious does nothing to improve one's epistemic access to one's own
conscious states. As far as one can tell, E(Spot) (just like Spot) may as well
not exist. What good is an inner spotlight, an introspective awareness of
mental events, if it doesn't give one epistemic access to the events on
which it shines? The inner sense theory does nothing to solve the
problem of what makes E(Spot) conscious. On the contrary, it multiplies
the problems by multiplying the facts of which we are not aware. We
started with E(Spot) and gave arguments in support of the view that
E(Spot) was conscious even though the person in whom it occurred was
not fact-aware of it. We are now being asked to explain this fact by
another fact of which we are not fact-aware: namely, the fact that we
are thing-aware of E(Spot). Neither E(Spot) nor the thing-awareness of
E(Spot) makes any discernible difference to the person in whom they
occur. This, surely, is a job for Occam's razor.
If we do not have to be conscious of a mental state (like an experience) for the mental state to be conscious, then, it seems, consciousness
of something cannot be what it is that makes a thing conscious. Creature
consciousness (of either the factive or thing form) is not necessary for
state consciousness.22 What, then, makes a mental state conscious? When
S smells, and thereby becomes aware of, the burning toast, what makes
his experience of the burning toast a conscious experience? When S
becomes aware that the light has turned green, what makes his belief
that the light has turned green a conscious belief?
This is the big question, of course, and I am not confronting it in this
essay. I am concerned only with a preliminary issue - a question about
the relationship (or lack thereof) between creature consciousness and
state consciousness. For it is the absence of this relation (in the right
form) that undermines the orthodox view that what makes certain mental
22 Neither is it sufficient. We are conscious of a great many internal states and activities that
are not themselves conscious (heartbeats, a loose tooth, hiccoughs of a fetus, a cinder in
the eye).
23 If fact-awareness was what made a belief conscious, it would be very hard for young
children (those under the age of three or four years, say) to have conscious beliefs. They
don't yet have a firm grasp of the concept of a belief and are, therefore, unaware of the
fact that they have beliefs. See Flavell (1988) and Wellman (1990).
[...]
The claim is not that we are unaware of our own conscious beliefs and
experiences (or unaware that we have them). It is, instead, that our
being aware of them, or that we have them, is not what makes them
conscious. What makes them conscious is the way they make us conscious of something else - the world we live in and (in proprioception)
the condition of our own bodies.
Saying just what the special status is that makes certain internal representations conscious while other internal states (lacking this status)
remain unconscious is, of course, the job for a fully developed theory of
consciousness. I haven't supplied that. All I have tried to do is to indicate
where not to look for it.
REFERENCES
[...]
Gibson, E. J. 1969: Principles of Perceptual Learning and Development. New York: Appleton-Century-Crofts.
Grice, P. 1989: Studies in the Way of Words. Cambridge, Massachusetts: Harvard University Press.
Gustafson, D. F., and B. L. Tapscott, eds. 1979: Body, Mind and Method: Essays in Honor of Virgil Aldrich. Dordrecht, Holland: D. Reidel Publishing Company.
Humphrey, N. 1992: A History of the Mind: Evolution and the Birth of Consciousness.
[...]
8
Differences That Make No Difference1
Never mind differences that make no difference. There are none. I want
to talk, instead, about differences that do not make a difference to anyone,
differences of which no one is aware. There are lots of these.
According to Daniel Dennett, though, there are fewer than you
might have thought. There are, to be sure, physical differences - even some that exist in you - of which you are not aware. There are also
conscious events in me, and differences among them, that you don't
know about. But there are no conscious events in you that escape your
attention. If you do not believe yourself to be having conscious experience φ, then φ is not a conscious experience at all. Dennett calls this
view, a view that denies the possibility in principle of consciousness in
the absence of a subject's belief in that consciousness, first-person operationalism.2 Dennett is a first-person operationalist. For him there are no
facts about one's own conscious life - no, as it were, conscious facts - of which one is not conscious.
Philosophers like to call each other names. I'm no exception. The
preferred term of abuse these days, especially among materialists, seems
to be "Cartesian." So I will use it. First-person operationalism sounds
like warmed-over Cartesianism to me. For Descartes, the mind is an
open book. Everything that happens there is known to be happening
there by the person in whom it is happening. No mental secrets. For
Dennett, too, there are no secrets, no facts about our conscious lives
Reprinted from Philosophical Topics 22 (1 and 2) (Spring and Fall 1994), 41-57, by permission of the publisher.
1 I am grateful to Güven Güzeldere for many helpful suggestions.
2 D. Dennett, Consciousness Explained (Boston: Little, Brown & Co., 1991), 132.
that are not available for external publication. Differences that make no
difference to the publicity department, to what a person knows or
believes and can thus exhibit in overt behavior, are not conscious differences.
The mind is like everything else: There is more to it than we are
aware. If making a difference to someone is understood, as Dennett
understands it, as a matter of making a difference to what that person
believes or judges, then conscious differences need make no difference
to anyone - not even to the person in whom they occur.
1. HIDE THE THIMBLE
From Content and Consciousness3 to Consciousness Explained, a span of
over twenty years, Dennett has been resolute in his conviction that
awareness of something - an apple or a thimble - requires some kind of
cognitive upshot. In the 1969 book (chapter six, "Awareness and Consciousness"), this is expressed as the idea that awareness of an apple on a
table is awareness that there is an apple on a table. Awareness that there
is an apple on a table, in turn, gets cashed out4 as a content-bearing
internal state like a judgment or a belief that controls behavior and (for
those who can speak) speech. In 1991, the same view is expressed by
saying that Betsy, who is looking for a thimble in the children's game
"Hide the Thimble," does not see the thimble until she "zeros in on"
it and identifies it as a thimble. Only when an appropriate judgment is
made "Aha, the thimble" will she see it. Only then will she become
aware of it. Only then will the thimble be "in" Betsy's conscious
experience.
As a historical note, the same year Content and Consciousness appeared,
I published Seeing and Knowing.5 Although I was concerned primarily
with epistemological issues, how seeing gives rise to knowing, I made a
great fuss about what I called nonepistemic perception, perception that
does not require (although in adult human beings it is normally accompanied by) belief or knowledge. In contrast to seeing facts (that they are
apples and thimbles), seeing objects (apples and thimbles) is a case of
nonepistemic perception. I made a fuss about nonepistemic perception
because so many bad arguments in epistemology (and, at that time, in
3 Dennett, Content and Consciousness (London: Routledge and Kegan Paul, 1969).
4 Ibid., 118.
5 F. Dretske, Seeing and Knowing (Chicago: University of Chicago Press, 1969).
Since I have just introduced the term "awareness" and will shortly be talking about
consciousness, I should perhaps take this opportunity to register a point about usage. I take
seeing, hearing, tasting, etc., an object or event, X, to be ways of being (perceptually)
aware of X. I assume the same with factive clauses: To see or smell that P - that the toast
is burning, for example - is to be (perceptually) aware that P. I also follow what I take to
be standard usage and take perceptual awareness of X (or that P) to be a form - in fact, a
paradigmatic form - of consciousness (of either X or that P). This is what T. Natsoulas
("Consciousness," American Psychologist 33 [1978]: 904914) calls "consciousness 3," and
he describes this as our most basic concept of consciousness. It should also be evident that
I use the verbs "aware" and "conscious" interchangeably. There are some subtle differences
between these verbs (see A. R. White's Attention [Oxford: Basil Blackwell, 1964]), but I
don't think any of these nuances bear on the disagreement between Dennett and me. So I
ignore them.
In calling this a referentially transparent context, I mean to restrict the values of "X" and
"Y" to noun phrases referring to specific objects and events (e.g., "the apple on the table,"
"the thimble on the mantle"). When interrogative nominals (what X is, who X is, where
X is), factive clauses (that it is X), and abstract nouns (the difference, the pattern, the
problem, the answer) follow the perceptual verb, the context is no longer transparent.
8 As certain forms of agnosia testify: "Associative agnosia is also often taken to be a more
specific syndrome, in which patients have a selective impairment in the recognition of
visually presented objects, despite apparently adequate visual perception of them" (M.J.
Farah, Visual Agnosia [Cambridge, Mass.: MIT Press, 1990], 57).
9 P. Grice, "Logic and Conversation," in P. Cole and J. Morgan, eds., Syntax and Semantics
(New York: Academic Press, 1975).
[...]
In a footnote in Consciousness Explained,10 Dennett asks whether identification of a thimble comes after or before becoming conscious of it.
He tells us that his Multiple Drafts model of consciousness "teaches us"
not to ask this question. One can understand why he wouldn't want to
ask this question and, therefore, why he would favor a theory that did
not let one ask it. He doesn't want to hear the answer. Are we really
being told that it makes no sense to ask whether one can see, thus be
aware of, thus be conscious of, objects before being told what they are?
Does it make no sense to ask, Macbeth style, "What is this I see before
me?"
That it does make sense seemed obvious to me in 1969. It still does.
Frankly, I thought when Dennett read my book it would seem obvious
to him. Apparently it didn't. Maybe he didn't read the book. Whatever
the explanation, he is still convinced that seeing is a form of knowing
(or believing or taking - see later), that being conscious of a φ is being conscious that it is a φ.11
I remain convinced that as long as these perceptual attitudes - seeing objects and seeing facts, being aware of apples and being aware that they are apples - are conflated, it is hard (to be honest, I think it is impossible)
to give a plausible theory of consciousness. One has already suppressed
one of the most distinctive elements of our conscious life - the difference
between experience and belief.
2. ANIMALS AND INFANTS
Cats and birds can see thimbles as well as (probably better than) little
girls. They have better eyesight. The department in which little girls
surpass birds and cats is the conceptual department: They know, while
cats and birds do not, what thimbles are. They know that, other things
being equal, things that look like that are thimbles. When they see things
that look like that, then they can judge them to be, identify them as,
take them to be, thimbles. They can, as a result, not only see thimbles,
but, when they are attentive and the thimbles are not too far away, see
that they are thimbles - something quite beyond the capacity of birds
and cats. This, though, is no reason to deny that animals can see thimbles. That would be to confuse ignorance with blindness.
In replying to criticisms by Lockwood and Fellows and O'Hear,
Dennett questions the "methodological assumption" that animals and
infants are conscious.12 Whether or not infants and animals are conscious, he declares, has no clear pretheoretical meaning. What Dennett
is doing here, of course, is recapitulating Descartes's answer to Arnauld.
Holding that all conscious phenomena are thoughtlike in character,
Descartes concluded that animals, lacking the power of thought, could
not be perceptually conscious of anything. If sheep seem to see the
wolves from whom they run, the appearances are deceptive. Such flight
is an unconscious reflex to retinal stimulation.
Dennett is no Cartesian, but he does, like Descartes, have a theory of
consciousness to which conceptually impoverished animals (and infants)
are an embarrassment.13 How can a bird who cannot take a thimble to
be a thimble, cannot judge, believe, think (let alone say) that something
is a thimble, see a thimble? How can sheep be aware of wolves if they
cannot judge them to be wolves? Descartes's bold way out of this
problem was to deny that animals were conscious of anything. Dennett's
way out - not quite so bold - is to insist that it isn't clear that animals
(not to mention infants) are conscious of anything. For dialectical purposes, though, the result is the same: Embarrassing counterexamples are
neutralized. One cannot use the fact - obvious to most of us - that
animals can see to argue that seeing is not believing.
For the sake of joining issues, I am willing to defer to Dennett's
judgments about what is, and what isn't, clear in this area, but I have my
suspicions about what is shaping his convenient intuitions on this matter.
It wasn't so long ago, after all, that this, or something very like this, was
[...]
does one tell that Betsy had a potential belief that some object was a
thimble (or whatever potential belief an experience of the thimble is
supposed to be)? How does Betsy tell she has one? In observing a crowd
of people or a shelf full of books, does one have a potential belief for
each (visible) person and book? The difference between having a potential belief and having no belief at all sounds like a difference that doesn't
make a difference. Potential beliefs about thimbles seem to be "cognitions" one can have without knowing one has them. Why trade experiences one can have without knowing it for cognitions one can have
without knowing it?
George Pitcher is another cognitivist who understands the problems
in accounting for sense experience.24 Realizing that X can look red to S
without S's consciously believing that X is red, Pitcher identifies X's
looking red with an unconscious belief state.25 In order to account for
the "richness" of perceptual consciousness seeing a red ball among a
cluster of other colored objects the belief state with which the "look"
of things is identified is said to be a large set of such unconscious
beliefs.26 Finally, for the person who mistakenly thinks he is experiencing
an illusion, a person who sees an oasis before him when he consciously
believes that there is no oasis before him and that nothing there in the
desert even looks like an oasis,27 Pitcher resorts28 to suppressed, or
"partially" or "mostly" suppressed, inclinations to believe. According to
this way of talking, Betsy's thimble-sightings turn out to be her thimble-caused-suppressed-inclinations-to-believe.
Once again, it is hard to see what is gained by these verbal maneuvers.
The difference between a visual experience and a belief about what you
experience seems reasonably clear pretheoretically. Why must the distinction be rendered in quasi-cognitive terms - especially when this
results in the awkward identification of conscious experience with unconscious beliefs and inclinations? After all the huffing and puffing, we
are left with a difference that doesn't make a difference to anyone. So
why bother?
Dennett, working within this tradition, has his own philosophically
"correct" way of talking about perceptual experiences. In "Time and
[...]
doorbells are "deciding" that someone is at the door. And a thermometer is "interpreting" the increased agitation of the molecules as a room
temperature of 78°. We can talk this way, yes,34 but one must be careful
not to conclude from this way of talking that anything significant is
being said about the nature of perceptual experience. One has certainly
not shown that seeing an object, being perceptually aware of a thimble,
consists in a judgment that it is a thimble (or anything else) in anything
like the ordinary sense of the word "judgment." One is certainly not
entitled to conclude that "there is no such phenomenon as really seeming over and above the phenomenon of judging that something is the
case." Once the bloated terminology is eliminated, all one can really
conclude is that perception is a complex causal process in which there
are, in the nervous system, different responses to different stimuli. Causal
theorists have been saying that sort of thing for years. No one took them
to be propounding a theory of consciousness. Perhaps they could have
improved their case by calling the products of such causal processes
"narrative fragments" or "microtakings." It sounds so much more . . .
uh . . . mental.
4. CONSCIOUS EXPERIENCE
[...]
sometimes illustrated this process with examples involving our perception of complex scenes: crowds of people, shelves full of books, a sky
full of stars, arrays of numbers, and so on. Since Dennett has used similar
examples to reach an opposite conclusion, let me sharpen our points of
disagreement by considering such an example.
Consider a two-year-old child - I will call her Sarah - who knows
what fingers are but has not yet learned to count, does not yet know
what it means to say there are five fingers on her hand, five cookies in
the jar, and so on. Sarah can, I claim, see all five fingers on her hand
not one at a time, but all five at once.40 This is, I know, an empirical
claim, but it is an empirical claim for which there is, for normal two-year-olds, an enormous amount of evidence. Whether or not Sarah sees
all five fingers depends, of course, on Sarah, the lighting, the angle at
which she sees the fingers, and so on. Let us suppose, though, that Sarah
is a child of average eyesight (intelligence has nothing to do with it),
that she is looking at the fingers in good light, and that each finger is in
plain view. Part of what it means to say that Sarah sees all five fingers is
that if you conceal one of the fingers, things will look different to Sarah.
There will then be only four fingers she sees. There will not only be
one less (visible) finger in the world, but one less finger in Sarah's
experience of the world. This difference in the world makes a difference
in Sarah's experience of the world, and it makes a difference even when
Sarah is unable to judge what difference it makes or even that it makes a
difference. I would like to say that the same is true of birds and cats,
but, out of deference to Dennett's unstable intuitions, I promised not to
mention animals again.
I have heard cognitivists insist that one can see five objects, even
without judging there to be five, by executing five judgments, one for
each object seen. Although Sarah cannot count to five - thus cannot
take there to be five objects - she can, simultaneously as it were, take
there to be a finger five different times. Cognitivists are a stubborn
bunch, but this strikes me as a fairly desperate move, not one that
Dennett would happily make. Cognitivists want to define what is seen
in terms of what one judges, the content of a judgment, not in terms of
40 There is a sense in which one can see n objects without seeing any of the n objects. One
might, for example, see a flock of eighty-four birds or a herd of thirty-six cows without
seeing any individual bird or cow. The flock or herd, seen from a great distance, might
look like a spot in the distance. This is not the sense in which I say Sarah sees five fingers.
Sarah sees each of the five fingers, not (just) a heap (flock, herd, pile) of five fingers.
[...]
44 This way of putting the case for phenomenal properties is, I think, quite close to Ned Block's insightful suggestions about the need to distinguish what he calls phenomenal consciousness from access consciousness. See Block's "Inverted Earth," in J. Tomberlin, ed., Philosophical Perspectives, 4: Action Theory and Philosophy of Mind (Atascadero, Calif.: Ridgeview Publishing Co., 1990); "Consciousness and Accessibility," The Behavioral and Brain Sciences 13 (1990): 596-598; "Evidence against Epiphenomenalism," The Behavioral and Brain Sciences 14 (1991): 670-672; and his review of Dennett's Consciousness Explained in The Journal of Philosophy 90 (1993): 181-192.
45 I argue this point in greater detail in "Conscious Experience," Mind 102 (1993): 263-283.
[...]
5. QUALIA
In "Quining Qualia" Dennett tells us that qualia are the way things look
or appear.46 As long as one understands the look to be what I just called
the phenomenal appearances (= the way things look that is logically
although surely not causally independent of what a person believes or
judges), this is a workable definition. It captures what most philosophers
mean to be arguing about when they argue about qualia. I'm willing to
work with it.
According to this definition, then, a person who sees a blue hexagon
in normal circumstances will have an experience that exhibits the qualia
blueness and hexagonality. These are among the person's visual qualia
whether or not that person is able to judge or say that there is, or appears
to be, a blue hexagon in front of her. Although I promised not to
mention animals again, I cannot forbear saying that they will also be the
qualia of normally sighted chimpanzees and a great variety of other
mammals. If there are genuine doubts about this, the evidence lies in
discrimination and matching tests plus a little neurophysiology.47
I said earlier that I agreed with much that Dennett has said about
qualia. If qualia are supposed to be ineffable, intrinsic, privileged, and so
on, then, I agree, there are no qualia. But there is no reason to throw a
clean baby out with dirty bathwater. We can, as Flanagan argues, keep
the qualia and renounce the philosophical accretions.48 I do not believe
in sense data, but I don't renounce sense perception because philosophers have said confused things about it.
Consider ineffability. If S's qualia are identified with the way things
look to S, then, since something can look φ to a person unable to judge that it is φ, a person's qualia may be quite ineffable by that person at the
time she has them. Sarah, at two years old, cannot express the fiveness
that she experiences. But we can. I did. Those of us who know what
properties objects have - and, thus, the ways that objects will appear in
normal conditions - can describe our own and other people's qualia. I
did this for Sarah and I can do it for chimps. If chimps and children can
see blue hexagons, and if they are not color-blind, then, whether or not
they know it, their visual qualia are hexagonality and blueness. In normal
[...]
49 Dennett and Kinsbourne, "Time and the Observer," 240. I may appear to be skating
rather cavalierly over the inverted-spectrum problem here. I admit the appearances but
deny the reality. I do not, however, have the time to justify this claim. So the appearances
will have to stand.
50 Dennett and Kinsbourne ("Time and the Observer") do an excellent job of exposing this
fallacious pattern of inference when it occurs in our thinking about representations, especially those having to do with temporal properties. The properties represented are
not, or need not be, properties of the representation.
[...]
The trouble with this answer, as I have been at pains to argue, is that
the microjudgments, the potential beliefs, the suppressed inclinations,
have to occur in persons and animals incapable of making the corresponding judgments or having the relevant beliefs. Why, then, call them
judgments or beliefs? If Sarah's visual system can "take" there to be five
fingers on her hand without Sarah's taking there to be five fingers on
her hand, what sorts of inventions are these microtakings, these narrative
fragments, these partial drafts? Until we know, we won't know what
conscious experience is.
9
The Mind's Awareness of Itself
The hard problem of consciousness, the place where the explanatory gap is widest - viz., the nature of phenomenal experience - is especially vexing for people who believe that:
(1) Conscious perceptual experiences exist inside a person (probably somewhere
in the brain)1
(2) Nothing existing inside a person has (or needs to have2) the properties one
is aware of in having these experiences.
Reprinted from Philosophical Studies (1999), 1-22, copyright 1999 by Kluwer Academic
Publishers, with kind permission from Kluwer Academic Publishers. My thanks to Bill
Lycan, Tom Nagel, and Ned Block for helpful criticisms.
1 Locating the mind (thoughts, experiences, etc.) inside the head is not a denial of externalism
about the mind. Externalism is the view that what makes a mental state the mental state it
is are factors existing outside the person. Externalism is consistent with (indeed, I think it
implies) the claim that mental states are inside the person. What is external (according to
externalism) are not the thoughts and experiences themselves, but (some of) the factors that
make them thoughts and experiences. Money is no less in my pocket by having the factors
that make it money existing outside my pocket.
2 The parenthetical qualification is necessary because, of course, there are exceptions. Sometimes there is something existing in the head of a person having an experience that has the
properties that person is aware of in having that experience. Think, for example, of seeing
your own teeth (in a mirror) or a human brain. In seeing someone else's brain, something
in your head (viz., your brain) has the properties (gray, brain-shaped, etc.) that you are
aware of. This, of course, is the exception. Typically, we do not see things that look like
things in our heads. I will here be concerned with visual experiences (e.g., that of seeing
or - in hallucination - seeming to see, a pumpkin) the phenomenal qualities of which
(color, shape, movement, texture, distance) are not properties of anything in the brain of
the experiencer. For this reason I will generally omit the qualification "or needs to have"
and simply assume that nothing in the head has the properties that one is aware of in
having the experience.
I later (§3) return to other exceptions to (2), proprioception - e.g., headaches, itches, cramps, and thirst, bodily sensations that (according to some) are internal and have the properties of which one is aware in having these sensations. For the present I mean to focus exclusively on perceptual modalities - hearing, seeing, smelling, tasting, and feeling - that are of (or purport to be of) external objects and conditions.
The experience I have when I see (dream of, hallucinate) a large orange
pumpkin has to be inside me. Why else would it cease to exist when I
close my eyes, awaken, or sober up? Yet, nothing inside me - certainly nothing in my brain - has the properties I am aware of when I have this
experience. There is nothing orange and pumpkin-shaped in my head.
How, then, can I be aware of what my perceptual experiences are like - presumably a matter of knowing what qualities they have - if none of
the properties I am aware of when I have these experiences are properties of the experience?
Surely, though, we are, in some sense, aware of our own conscious
experiences. We have, if not infallible, then privileged, access to their
phenomenal character. I may not know what it is like to be a bat, but I
certainly know what it is like to be me, and what it is like to be me is
primarily - some would say it is exclusively - a matter of the phenomenal qualities of my perceptual (including proprioceptive) experience. I
am aware - directly aware - of what it is like to see (dream of, hallucinate) orange pumpkins. If such awareness is incompatible with (1) and
(2), so much the worse for (1) and (2).
This is a problem that some philosophers have given up trying to
solve. Others spend time tinkering with (2). The problem is real enough,
but (2) is not the culprit. The solution lies in distinguishing between the
fundamentally different sorts of things we are aware of and, as a result,
the different forms that awareness (or consciousness3) of things can take.
Once these distinctions are in place, we can see why (1) and (2) are
compatible with privileged awareness of one's own experience. We can
have our cake and eat it too.
By way of previewing the argument for this conclusion, let o be an
object (or event, condition, state - i.e., a spatiotemporal particular), P a
property of o. We speak of being aware of o, of P, and of the fact that o
is P. These differences in the ontological kinds of which we are aware
are reflected in corresponding differences in the acts of awareness.
Awareness of P is a much different mental state from awareness of the o
that is P, and both differ from an awareness of the fact that o is P.
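Set out schematically (a compressed restatement of the distinctions just drawn, using the labels - o-awareness, p-awareness, f-awareness - that the discussion below relies on):

$$
\begin{aligned}
&o\text{-awareness:} && S \text{ is aware of } o && \text{(a particular object, event, or state)}\\
&p\text{-awareness:} && S \text{ is aware of } P && \text{(a property)}\\
&f\text{-awareness:} && S \text{ is aware that } o \text{ is } P && \text{(a fact)}
\end{aligned}
$$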
In thinking about the mind's awareness of itself, these differences are
3 I use these terms interchangeably when they are followed by a word or phrase specifying what we are aware (conscious) of.
Throughout this essay I will use o to designate an external physical object (e.g., a pumpkin); e a mental particular (e.g., a visual experience of o); P, M, and C properties of o; and 𝔓, 𝔐, and ℭ (Old English font) properties of e.
I assume that universals (and, a fortiori, the universals one is aware of) are neither inside
nor outside the head. Awareness of colors, shapes, and movements, when there is no
external object that has the property one is aware of, is not, therefore, a violation of (2). A
measuring instrument (a speedometer, for example) can (when malfunctioning) be "aware
of" (i.e., represent) a speed of 45 mph without any object's (inside or outside the instrument) having this magnitude.
Depending on the property in question, it will sometimes sound odd to say that one is
aware of o's P-ness without being aware of o. For example, can one become aware of the
second hand's movement without being aware of (seeing, feeling, or somehow sensing) the
second hand itself? Can one be aware of my movement (executing a dance step, say) by
observing another person execute the same movement?
We are sometimes, of course, interested not in universal properties but in particular
instancings of these universal properties - what philosophers call tropes. Although two
objects, a and b, are the same color or execute the same movement (i.e., instantiate the
same universal property P), a's color (trope) is not the same as b's color (trope), nor is the movement (trope) of a the same as the movement (trope) of b. As a fact about ordinary usage, I think we generally mean to refer to something like a trope when we speak of a's movement or the movement of a - a universal property, movement, as realized in a particular
individual. We perhaps come closer to the universal property in speaking of the movement
that an individual executes.
My claim that property-awareness is independent of object-awareness, then, is a claim
about our awareness of universal properties, the properties objects can share with other
objects, and not an object's particular "value" (trope) of this shareable property.
[...]
datum) in one's head that has the properties one is aware of in having
the experience. Since I am here exploring the possibility of understanding conscious experience given the truth of both (1) and (2), I assume,
to the contrary, that hallucinations are experiences in which one is aware
of properties (shapes, colors, movements, etc.) without being o-conscious of objects having these properties. To suppose that awareness
of property P must always be an awareness of an object having property
P is what Roderick Chisholm (1957) called the Sense-Datum Fallacy.
Following Chisholm, and in accordance with (2), I will take this to be a
genuine fallacy. Hallucinating about pumpkins is not to be understood
as an awareness of orange pumpkin-shaped objects. It is rather to be
understood as p-awareness of the kind of properties that o-awareness of
pumpkins is usually accompanied by.
Awareness (i.e., p-awareness) of properties without awareness (o-awareness) of objects having these properties may still strike some readers
as bizarre. Can we really be aware of (uninstantiated) universals? Yes,
we can, and, yes, we sometimes are. It is well documented that the brain
processes visual information in segregated cortical areas (see Hardcastle
1994 for references and discussion). One region computes the orientation of lines and edges, another responds to color, still another to
movement.8 As a result of this specialization it is possible, by suitable
manipulation, to experience one property without experiencing others
with which it normally co-occurs. In the aftereffect called the waterfall
phenomenon, for instance, one becomes aware of movement without the
movement's being of any thing. There is apparent motion without the
appearance of anything moving. To obtain this effect one stares for
several minutes at something (e.g., a waterfall) that moves steadily in
one direction. In transferring one's gaze to a stationary scene one then
experiences movement in the opposite direction. Remarkably, though,
this movement does not "attach" itself to objects. None of the objects
one sees appear to be moving. Yet, one experiences movement. As a
psychologist (Frisby 1980, p. 101) puts it, "although the after-effect gives
a very clear illusion of movement, the apparently moving features nevertheless seem to stay still!" One becomes, he says, "aware of features
8 This gives rise to what psychologists call the binding problem: how does the nervous system
"pull together," so to speak, all the information about different properties into a unified
experience of a single object an object that has all those properties? How does the brain
put this shape together with this color and that movement to get (the appearance of) an
orange pumpkin moving to the left?
"Virtual" because there are relations other than these forms of awareness that I must (given
my limited purposes here) ignore. For instance, it might be argued (plausibly, I think) that
o-awareness requires p-awareness of some properties - if not the properties the object
actually has, then the properties it appears to have. You can't perceive (thus be aware of)
an object unless it appears some way to you, and if appearing φ to S is a way of S's being p-aware of the property φ, then o-awareness requires p-awareness of φ for some property φ.
This, I think, is one possible (and for my money, the only plausible) way of construing the
doctrine that all seeing is seeing as.
I am willing to concede some degree of dependence. It will not be important for the
use to which I will put these distinctions. I will exploit only the degree of independence my
examples have already established.
[...]
our awareness not of pumpkins, their properties,10 and facts about them, but of our experience (e) of a pumpkin, its properties, and facts about it. Letting 𝔓 stand for a property of a pumpkin experience, a property that helps make this experience the kind of experience it is, how does one become aware that e is 𝔓? Is this achieved by an awareness of e and 𝔓, or is it, instead, indirect - mediated by an awareness of some other object and (or) property?
John Locke (1959) thought that the mind's awareness of itself was
quasi-perceptual and, thus, direct. We become aware that a visual experience is 𝔓 in the same way we can (if we trust common sense) become aware that a pumpkin is P - by means of o-awareness of the experience and p-awareness of 𝔓. According to some philosophers, all fact-awareness begins here.11 Thus, awareness of facts about a pumpkin, that it is P, is reached via inference from o-awareness of e and p-awareness of one or more of its properties. We become fact-aware of what is going on outside the mind in something like the way we become f-aware of what is happening outside a room in which we watch TV.
The only objects we are aware of are in the room (e.g., the television set); the only properties we are aware of are properties of those objects (patterns on the screen). Only f-awareness - awareness of what is happening on the playing field, concert hall, or broadcast studio - is capable of taking us outside the room.
I will not discuss such theories (basically sense-data theories). I set
them aside, without argument, because they all deny thesis (2), and my
10 I will typically use movement and shape as my examples of properties that we can (and
do) become aware of in visual perception. I could as well use (I sometimes do use) color,
but for obvious reasons (relating to the thesis I am proposing) I prefer to avoid the
"secondary" properties and concentrate on the "primary" ones since some people will
surely insist that it is not the pumpkin that is (phenomenally) orange, but our experience
(or some proper part of our experience) of the pumpkin. The only relevant property that
the pumpkin has is a disposition to produce orange es (pumpkin experiences) in properly
situated perceivers.
I do not share this view of color (and the other so-called secondary properties). I do
not see how any materialist can (see (1) and (2) and the following discussion). I take color,
the surface property of things that we experience in experiencing objects, to be an
objective property of the objects we experience. For this reason I sometimes use color,
smell, sounds, etc. in my examples. For those who find this realism objectionable (or
question-begging), please substitute an appropriate primary property - e.g., movement, shape, orientation, extension - whenever I use an objectionable secondary property. I
don't think anything important hangs on my choice of examples.
11 For skeptics it often ends here. If f-awareness is a form of knowledge, the only f-awareness is of mental affairs - i.e., facts of the form: that e is 𝔓.
[...]
job), Mary knows all about tomatoes - that they are red (P) - and she knows all about what goes on in other people's heads when they see red objects (there is something in their brain that has the property 𝔓), but she does not herself have internal states of this sort. If she did, she would, contrary to hypothesis, be p-aware of (she would actually experience) the color red. Once she walks outside the room, objects (es) in her head acquire 𝔓 - she becomes p-aware of red. She is now aware of things
(i.e., p-aware of colors) she was not previously aware of. Using our present distinctions to express Jackson's point, the question posed is not whether Mary is now aware of something she was not previously aware of (of course she is; she is now p-aware of colors), but whether Mary is now f-aware of things that she was not previously f-aware of. The
answer, on the present account of things, is No.14 Mary always knew that ripe tomatoes were red (P) and that ripe tomato experiences were 𝔓 - viz., awarenesses of red. There are no other relevant facts for her to become aware of (although there are now ways of expressing these facts that were not previously available to her15). Emerging from the color-free room gives her an awareness of properties (P) that figure in the facts
(that o is P) she was already aware of, but it doesn't give her an awareness
of any new facts.
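In the same notation (a schematic tabulation of the preceding paragraph, not a formulation found in Jackson):

$$
\begin{aligned}
\text{In the room:}\quad & f\text{-aware that } o \text{ is } P;\ f\text{-aware that } e \text{ is } \mathfrak{P};\ \text{not } p\text{-aware of } P.\\
\text{Outside the room:}\quad & \text{newly } p\text{-aware of } P;\ \text{no new facts, hence no new } f\text{-awareness.}
\end{aligned}
$$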
We have now taken the first step in this account of the mind's
awareness of itself. In a way that is consistent with both (1) and (2) and
in a way that preserves the essential features of the mind's awareness of
itself (the psychological immediacy and epistemically privileged character
of this awareness) we have an account - at least the broad outlines of one - of how we are aware of our own experiences of the world. What
remains to be done is to see whether this account can be generalized to
all mental states. My efforts at generalization (§3) will be feeble. I can, at this point, do little more than gesture in what I take to be the appropriate directions. I close (in §4) with a mildly interesting implication of this
account of self-awareness.
14 There is the fact that she now occupies states having 𝔓, but this doesn't count since this
fact wasn't a fact until she recovered her color vision. What Mary doesn't know is
supposed to be a fact about the world as it existed before she recovered her color vision.
15 The fact she was formerly aware of that she expressed as "Experience e is a p-awareness of red" can now (now that she is p-aware of red) be expressed as "Experience e is a p-awareness of this."
17 I take f-awareness (of the fact) that o is P to imply knowledge (of the fact) that o is P and
the latter to imply belief that o is P. Belief, in turn, requires possession of concepts
corresponding to the (obliquely) occurring expressions (i.e., "P") in the factive clause that
specifies what is believed. Hence, one cannot be f-aware that o is P without possessing
the (or a) concept corresponding to P.
[...]
ence) of pentagons, but an awareness of pentagon experiences. Awareness of experience awaits development of the understanding, an understanding of what property the experience is a p-awareness of. If you lack
this understanding, you can still become aware of pentagons (just open
your eyes in a pentagon-infested environment), but you cannot be aware
of your pentagon experiences. It is like awareness of neutrinos. Being
what they are (i.e., unobservable: we do not have a sense organ that
makes us o-aware of them), neutrinos are objects one cannot be aware
of until one learns physics. Unlike pentagons, you have to know what
they are to be aware of them.
The mind becomes aware of itself, of its own conscious experiences,
by a developmental process in which concepts needed for such awareness are acquired. You don't need the concepts of PENTAGON or
EXPERIENCE to experience (e.g., see or feel) pentagons, but you do
need these concepts to become aware of pentagon experiences. As
psychologists are learning (in the case of such concepts as EXPERIENCE), this doesn't happen with children until around the age of four
to five years. In most animals it never happens. The mind is the first - indeed, the only - thing we are aware with, but it is among the last
things we are aware of.
REFERENCES
[...]
Jackson, F. 1986. What Mary Didn't Know. The Journal of Philosophy, LXXXIII, pp. 291-295.
Locke, J. 1959. An Essay concerning Human Understanding. Ed. A. C. Fraser. New York: Dover.
10
What Good Is Consciousness?
[...]
of understanding human behavior, then, the fact that some humans are
uncles is epiphenomenal. If consciousness is like that - if it is like being an uncle - then, for the same reason, psychological theory need not be
concerned with it. It has no purpose, no function. No good comes from
being conscious.
Is this really a worry? Should it be a worry? The journals and books,
I know, are full of concern these days about the role of consciousness.3
Much of this concern is generated by startling results in neuropsychology
(more of this later). But is there a real problem here? Can there be a
serious question about the advantages, the benefits, the good of being
conscious? I don't think so. It seems to me that the flurry of interest in
the function of consciousness betrays a confusion about several quite
elementary distinctions. Once the distinctions are in place - and there is nothing especially arcane or tricky about them - the advantages (and, therefore, the good) of consciousness are obvious.
1. THE FIRST DISTINCTION: CONSCIOUS BEINGS VS.
CONSCIOUS STATES
Stones are not conscious, but we are.4 So are many animals. We are not
only conscious (full stop), we are conscious of things - of objects (the
bug in my soup), events (the commotion in the hall), properties (the
color of his tie), and facts (that someone is following me). Using Rosenthal's (1990) terminology, I will call all these creature consciousness. In
this sense the word is applied to beings who lose and regain consciousness and are conscious of things and that things are so.
Creature consciousness is to be distinguished from what Rosenthal
calls state consciousness - the sense in which certain mental states, processes, events, and activities (in or of conscious beings) are said to be
either conscious or unconscious. When we describe desires, fears, and
experiences as being conscious or unconscious we attribute or deny
consciousness not to a being, but to some state, condition, or process in
that being. States (processes, etc.), unlike the creatures in whom they
3 For recent expressions of interest, see Velmans (1991), Rey (1988), and Van Gulick (1989).
4 I here ignore dispositional senses of the relevant terms - the sense in which we say of someone or something that it is a conscious being even if, at the time we describe it this way, it is not (in any occurrent sense) conscious. So, for example, in the dispositional sense, I am a conscious being even during dreamless sleep.
I here ignore disputes about whether, in some strict sense, we are really aware of objects
or only (in smell) odors emanating from them or (in hearing) voices or noises they make. I
shall always take the perceptual object - what it is we see, hear, or smell (if there is such
an object) - to be some external physical object or condition. I will not be concerned with
just what object or condition this is.
[...]
it is. "What is that I smell?" is the question of a person who might well
be aware of (i.e., smell) burning toast without being aware that it is
burning toast he is aware of. Second, even if one knows what it is one
is aware of - knows that it is burning toast - one might not understand
what it means to be aware of it, might not, therefore, be aware that one
is aware of it. A small child or an animal - someone who lacks the
concept of awareness - can be conscious of (i.e., smell) burning toast
without ever being aware that they are aware of something. Even if she
happens to know that it (i.e., what she is aware of) is burning toast, she
may not, therefore, be aware that she is aware of it.
The language here is a bit tricky, so let me give another example.
One can be aware of (e.g., hear) a French horn without being aware
that that is what it is. One might think it is a trombone or (deeply
absorbed in one's work) not be paying much attention at all (but later
remember hearing it). If asked whether you hear a French horn, you
might well (falsely) deny it or (more cautiously) say that you don't
know. Not being aware that you are aware of a French horn does not
mean you are not aware of a French horn. Hearing a French horn is
being conscious of a French horn. It is not - not necessarily anyway - to
be aware that it is a French horn or aware that you are aware of it. Mice
who hear - and thus become auditorily aware of - French horns never
become aware that they are aware of French horns.6
So, once again, when I say that if you see, hear, or smell something
you must be conscious of it, the "it" refers to what you are aware of
(burning toast, a French horn), not what it is (i.e., that it is burning toast,
a French horn) you are aware of or that you are aware of it. Animals (not
to mention human infants) are presumably aware of a great many things
(they see, smell, and feel the things around them). Often, though, they
are not aware of what it is they are aware of and seldom (if ever) are
they aware that they are aware of it.
So much for terminological preliminaries. I have not yet (I hope) said
anything that is controversial. Nonetheless, with only these meager
6 In saying this I assume two things, both of which strike me as reasonably obvious: (1) to
be aware that you are aware of a French horn requires some understanding of what
awareness is (not to mention an understanding of what French horns are); and (2) mice
(even if we give them some understanding of French horns) do not understand what
awareness is. They do not, that is, have the concept of awareness.
This is not to say that consciousness is always advantageous. As Georges Rey reminds me,
some tasks - playing the piano, pronouncing language, and playing sports - are best
performed when the agent is largely unaware of the performatory details. Nonetheless,
even when one is unconscious of the means, consciousness of the end (e.g., the basket into
which one is trying to put the ball, the net into which one is trying to hit the puck, the
teammate to whom one is trying to throw the ball) is essential. You don't have to be aware
of just how you manage to backhand the shot to do it skillfully, but, if you are going to be
successful in backhanding the puck into the net, you have to be aware of the net (where it
is).
In the case of external objects (like lions) the experience is necessary, but not sufficient, for
awareness of (seeing) the lion. We also need a lion, of course, and whatever causal relations
between the lion and the experience are required to make the experience of the lion.
[...]
experiences. The purpose is to enable the gazelle to see, hear, and smell
the lions.
I do not expect many people to be impressed with this result. I expect
to be told that internal states are conscious not (as I have suggested) if
the animal is conscious with them, but, rather, if the animal (in whom
they occur) is conscious of them. A conscious state is conscious in virtue
of being an object, not an act, of creature awareness. A state becomes
conscious, according to this orthodox line of thinking, when it becomes
the object of some higher-order act, a thought or experience. Conscious
states are not states that make the creatures in whom they occur conscious; it is the other way around: creatures make the states that occur
in them conscious by becoming conscious of them.
Since, according to this account, the only way a state can become an
object of consciousness is if there are higher-order acts (i.e., thoughts or
experiences) that take it as their object, this account of what makes a
state conscious has come to be called an HO (for higher-order) theory
of consciousness. It has several distinct forms, but all versions agree that
an animal's experience (of lions, say) remains unconscious (or, perhaps,
nonconscious) until the animal becomes aware of it. A higher-order
awareness of one's lion-experience can take the form of a thought (an
HOT theory) - in which case one is aware that (i.e., one thinks that)
one is experiencing a lion - or the form of an experience (an HOE
theory) - in which case one is aware of the lion-experience in something
like the way one is aware of the lion: one experiences one's lion-experience (thus becoming aware of one's lion-experience) in the way
one is aware of (experiences) the lion.
I have elsewhere (Dretske 1993, 1995) criticized HO theories of
consciousness, and I will not repeat myself here. I am more concerned
with what HO theories have to say - if, indeed, they have anything to say - about the good of consciousness. If conscious states are states we
are, in some way, conscious of, why have conscious states? What do
conscious states do that unconscious states don't do? According to HO
theory, we (i.e., creatures) could be conscious of (i.e., see, hear, and
smell) most of the objects and events we are now conscious of (and this
includes whatever bodily conditions we are proprioceptively aware of)
without ever occupying a conscious state. To be in a conscious state is
to be conscious of the state, and since the gazelle, for example, can be
conscious of a lion without being conscious of the internal states that
make it conscious of the lion, it can be conscious of the lion - that is, see, smell, feel, and hear the lion - while occupying no conscious states
at all. This being so, what is the purpose, the biological point, of
conscious states? It is awareness of the lion, not awareness of lion-experiences, that is presumably useful in the struggle for survival. It is
the lions, not the lion-experiences, that are dangerous.
On an object conception of state consciousness, it is difficult to
imagine how conscious states could have a function. To suppose that
conscious states have a function would be like supposing that conscious
ball bearings - that is, ball bearings we are conscious of - have a function. If a conscious ball bearing is a ball bearing we are conscious of, then conscious ball bearings have exactly the same causal powers as
do the unconscious ones. The causal powers of a ball bearing (as opposed
to the causal powers of the observer of the ball bearing) are in no way
altered by being observed or thought about. The same is true of mental
states like thoughts and experiences. If what makes an experience or a
thought conscious is the fact that S (the person in whom it occurs) is,
somehow, aware of it, then it is clear that the causal powers of the
thought or experience (as opposed to the causal powers of the thinker
or experiencer) are unaffected by its being conscious. Mental states and
processes would be no less effective in doing their job - whatever, exactly, we take that job to be - if they were all unconscious. According to HO theories of consciousness, then, asking about the function of
conscious states in mental affairs would be like asking about the function
of conscious ball bearings in mechanical affairs.
David Rosenthal (a practicing HOT theorist) has pointed out to me
in correspondence that although experiences do not acquire causal powers by being conscious, there may nonetheless be a purpose served by
their being conscious. The purpose might be served not by the beneficial
effects of a conscious experience (conscious and unconscious experiences
have exactly the same effects, according to HO theories), but by the effects of the higher-order thoughts that make the experience conscious.
Although the conscious experiences don't do anything the unconscious
experiences don't do, the creatures in which conscious experiences occur are different as a result of having the higher-order thoughts that
make their (lower-order) experiences conscious. The animals having conscious experiences are therefore in a position to do things that animals
having unconscious experiences are not. They can, for instance, run
from the lion they (consciously) experience - something they might not
do by having an unconscious experience of the lion. They can do this
because they are (let us say) aware that they are aware of a lion - aware
I assume here that, according to HOT theories, the higher-order thought one has about a
lion-experience that makes that experience conscious is that it is a lion-experience (an
experience of a lion). This needn't be so (Rosenthal 1991 denies that it is so), but if it isn't
so, it is even harder to see what the good of conscious experiences might be. What good
would be a thought about a lion-experience that it was . . . what? . . . simply an experience?
What good is that?
and those of a human - are conscious because, I submit, they make the
creature in which they occur aware of things - whatever objects and
conditions are perceived (lions, for instance). Being aware that you are
having such experiences is as irrelevant to the nature of the experience
as it is to the nature of observed ball bearings.10
3. THE THIRD DISTINCTION: OBJECT VS. FACT AWARENESS
Once again, I expect to hear that this is all too quick. Even if one should
grant that conscious states are to be identified with acts, not objects, of
creature awareness, the question is not what the evolutionary advantage
of perceptual belief is, but what the advantage of perceptual (i.e., phenomenal) experience is. What is the point of having conscious experiences of lions (i.e., lion-qualia) as well as conscious beliefs about lions?
Why are we aware of objects (lions) as well as various facts about them
(that they are lions, that they are headed this way)? After all, in the
business of avoiding predators and finding mates, what is important is
not experiencing (e.g., seeing, hearing) objects, but knowing certain
facts about these objects. What is important is not seeing a hungry lion
but knowing (seeing) that it is a lion, hungry, or whatever (with all that
this entails about the appropriate response on the part of lion-edible
objects). Being aware of (i.e., seeing) hungry lions and being aware of
them, simply, as tawny objects or as large, shaggy cats (something a two-year-old child might do) isn't much use to someone on the lion's dinner
menu. It isn't the objects you are aware of, the objects you see - and,
therefore, the qualia you experience - that are important in the struggle
for survival; it is the facts you are aware of, what you know about what
10 I'm skipping over a difficulty that I should at least acknowledge here. There are a variety
of mental states - urges, desires, intentions, purposes, etc. - that we speak of as conscious
(and unconscious) whose consciousness cannot be analyzed in terms of their being acts
(instead of objects) of awareness since, unlike the sensory states associated with perceptual
awareness (seeing, hearing, and smelling), they are not, or do not seem to be, states of
awareness. If these states are conscious, they seem to be made so by being objects, not acts,
of consciousness (see, e.g., van Gulick 1985). I don't have the space to discuss this alleged
difference with the care it deserves. I nonetheless acknowledge its relevance to my present
thesis by restricting my claims about state consciousness to experiences - more particularly,
perceptual experiences. Whatever it is that makes a desire for an apple, or an intention to
eat one, conscious, visual (gustatory, tactile, etc.) experiences of apples are made conscious
not by the creature in whom they occur being conscious of them, but by making the
creature in whom they occur conscious (of apples).
Part Three
Thought and Intentionality
11
Putting Information to Work
objects like you and me, who understand, or think they understand, some
of what is happening around them. Talking to Michael has profound
effects, while talking to rocks and goldfish has little or no effect because
Michael, unlike rocks and goldfish, understands, or thinks he understands, what he is being told. As a consequence, he is - typically at least - brought to believe certain things by these acts of communication. The
rocks and the goldfish, on the other hand, are impervious to meaning.
Instead of inducing belief, all we succeed in doing by remonstrating with
such objects is jostling them a bit with the acoustic vibrations we produce.
So appearances may be deceptive. It may turn out that the difference
between Michael and a goldfish isn't that Michael, unlike a goldfish,
responds to information, but that Michael, unlike a goldfish, has beliefs,
beliefs about what sounds mean (or about what the people producing
these sounds mean), and which therefore (when he hears these sounds)
induce beliefs on which he acts. These beliefs are, to be sure, sometimes
aroused in him by sounds that actually carry information. Nevertheless,
if these beliefs in no way depend on the information these sounds carry,
then the information carried by the belief-eliciting stimulation is explanatorily irrelevant. After all, rocks and goldfish are also affected by information-carrying signals. When I talk to a rock, the rock is, as I said,
jostled by acoustic vibrations. But the point is that although my utterances, the ones that succeed in jostling the rock, carry information, the
information they carry is irrelevant to their effect on the rock. The
information in this stimulation doesn't play any explanatory role in
accounting for the rock's response to my communication. From the
rock's point of view, my utterance may as well not carry information.
Subtract the information (without changing the physical properties of
the signal carrying this information) and the effect is exactly the same.
And so it may be with Michael. To find out, as we did with the
rock, whether information is doing any real work, we merely apply
Mill's Method of Difference. Take away the information, leaving everything else as much the same as possible, and see if the effect on Michael
changes. As we all know, we needn't suppose that his wife, Sandra, is
actually having an affair with Charles to get a reaction - in fact, the very same reaction - from Michael. He will react in exactly the same way if
his alleged informant is lying or is simply mistaken about Sandra's affair.
As long as the act of communication is the same, as long as what the
speaker says and does means the same thing (to Michael), it will elicit the
same reaction from him. What is said doesn't have to be true to get this
effect. Michael just has to think it true. Nothing, in fact, need even be
said. As long as Michael thinks it was (truly) said, as long as he thinks
something with this meaning occurred, the result will be the same.
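The test being applied here can be put in a toy Python sketch (invented purely for illustration; the names and the trivial decision rule are assumptions, not anything argued for above). The hearer's reaction is computed from the perceived meaning of the utterance and from what the hearer already believes; the truth of what is said is recorded but never consulted, so subtracting the information while holding everything else fixed leaves the reaction unchanged.

# Toy illustration of Mill's Method of Difference applied to information.
# The hearer's reaction depends only on the signal's perceived meaning and
# his own beliefs; the "true" field (the information) is never consulted.

def reaction(hearer, utterance):
    if (utterance["perceived_meaning"] == "your wife is having an affair"
            and hearer["trusts_speaker"]):
        return "distress"
    return "no change"

michael = {"trusts_speaker": True}

true_report = {"perceived_meaning": "your wife is having an affair", "true": True}
false_report = {"perceived_meaning": "your wife is having an affair", "true": False}

# Take away the information (truth), leaving everything else the same:
assert reaction(michael, true_report) == reaction(michael, false_report)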
In saying that it is Michael's beliefs, not the meaning (if any) or
information (if any) in the stimuli giving rise to these beliefs, that causally
explains Michael's behavior, I am assuming that we can (and should) make appropriate distinctions between these ideas - between information, meaning, and belief. Some people, I know, use these notions
interchangeably. That is too bad. It confuses things that should be kept
distinct. According to this careless way of talking (especially prevalent, I
think, among computer scientists), information is meaning or (at least) a
species of meaning, and a belief (or a reasonable analog of a belief) just
is an internal state having the requisite kind of meaning. So, for instance,
anything that means that Michael's wife is having an affair carries this
piece of information. If I say his wife is having an affair, then my
utterance, whether or not it is true, carries this information. And if I enter
this "information" into a suitably programmed computer, then, whether
or not Sandra is unfaithful, the "fact" that she is unfaithful becomes part
of the machine's "data base," the "information" on which it relies to
reason, make inferences, answer questions, and solve problems. The
computer now "thinks" that Michael's wife is having an affair. Michael
doesn't even have to be married for the machine to be given the "data,"
the "information," that, as it were, his wife is having an affair. On this
usage, the facts, the information, the data, are what we say they are.
It is perfectly understandable why computer scientists (not to mention
a good many other people) prefer to talk this way. After all, it is natural
to suppose that a computer (or a human brain, for that matter) is
insensitive to the truth of the representations on which it operates. Put
the sentence " P " in a machine's (or a human brain's) data file, and it
will operate with those data in exactly the same way whether " P " is
true or false, whether it is information or misinformation. From the
machine's (brain's) point of view, anything in it that qualifies as a belief
qualifies as knowledge, anything in it that means that P is information that
P. The distinction between meaning and information, between belief
and knowledge, is a distinction that makes sense only from the outside.
But what makes sense only from the outside of the machine or person
whose behavior is being explained cannot (according to this way of
thinking) help to explain that machine's (or person's) behavior. It cannot
because the machine (or person) can't get outside of itself to make the
needed discriminations. So, for practical explanatory purposes, meaning is (or may as well be) information.
Whatever the practical exigencies may be, something that makes
sense only from the outside is, nonetheless, something that makes perfectly good sense. It certainly should make perfectly good sense to those
of us (on the outside) talking about such systems. Something (like a
statement) that means that Michael's wife is having an affair need not
carry this information. It needn't carry this information either because
Michael's wife is not having an affair or because, although she is, the
words or symbols used to make this statement are being used in a way
that is quite unrelated to her activities. I can say anything I like - that Mao Tse Tung liked chocolate ice cream, for instance - and the words I utter will mean something quite definite - in this case, that Mao Tse Tung liked chocolate ice cream. But these words, even if they are (by
some lucky accident) true, won't carry the information that Mao liked
chocolate ice cream. They won't because the sentence, used merely as
an example and in total ignorance of Mao's preferences in ice cream, in
no way depends on Mao's likes and dislikes. These words mean something, but they do not, not at least when coming from me, inform the
listener. I might succeed in getting you to believe that Mao liked
chocolate ice cream. I might, by telling you this, misinform you about
his taste in ice cream. But misinformation is not a species of information
any more than belief is a species of knowledge.
So, at least on an ordinary understanding of information and meaning,
something can mean that P without thereby carrying the information
that P. And someone can believe that P without ever having received
the information that P. Often enough, what makes people believe that
P is being told that P by someone they trust. Sometimes these communications carry information. Sometimes they do not. Their efficacy in
producing belief resides, however, not in the fact that the utterance
carries information, but in its meaning (or perceived meaning), who
uttered it, and how. No successful liar can seriously doubt this.
So if we distinguish between an event's meaning (= what it says,
whether truly or falsely, about another state of affairs) and the information
it carries (what it, among other things, truly says about another state of
affairs), a distinction that is roughly equivalent to Grice's (1957) distinction between nonnatural and natural meaning, the causal role of information becomes more problematic. What explains Michael's reaction to
the verbal communication is his believing that his wife was having an
affair with Charles. What explains his believing this is his being told it by a trusted confidant - that is, his hearing someone (he trusts) say this,
utter words with this meaning. At no point in the explanatory proceedings do we have to mention the truth of what is said, the truth of what
is believed, or the fact that information (as opposed to misinformation)
was communicated. If Michael acts on his belief, he may, sooner or later,
confront a situation that testifies to the truth of what he believes (and
was told). He will then, presumably, acquire new beliefs about his wife
and Charles, and these new beliefs will help determine what further
reactions he has, what he goes on to do. But still, at no point do we
have to speak of information or truth in our explanations of Michael's
behavior. All that is needed is what Michael thinks is true, what he thinks
is information, what he believes. Knowing whether these beliefs are true
or not may be helpful in predicting the results of his actions (whether,
for instance, he will actually find Sandra at home when he goes to look),
but it is not essential for explaining and predicting what he will actually do - whether, that is, he will go home to look.
Appearances, then, do seem to be misleading. Despite the way things
first looked, despite a variety of familiar examples in which information
seemed to make a causal difference, we still haven't found an honest job
for it, something information (as opposed to meaning or belief) does
that constitutes its special contribution to the causal story.
TRUTH AND SUPERVENIENCE
It isn't hard to see why there is trouble finding a decent job for information. The information a signal (structure, event, condition, state of
affairs) carries is a function of the way a signal is related to other conditions in the world. I have my own views (Dretske 1981) about what
these relations come down to, what relations constitute information. I
happen to think information requires, among other things, some relation
of dependency between the signal and the condition about which it
carries information. Signals don't carry information about conditions,
even conditions that happen to obtain, on which their occurrence does
not depend in some appropriate way. But it isn't necessary to argue
about these details. I'm not asking you to agree with me about exactly what information is in order to agree with me that, whatever it is, as long as it (unlike meaning) involves truth, there is a special problem about how it can be put to work in a scientifically respectable way - how it, or the fact that something carries it, can be made explanatorily relevant.
It is better to say that under such conditions they can be informationally different. Whether
they are different will depend not only on whether P exists in one case but not the other,
but on whether there is, in the case where P exists, the required information-carrying
dependency between P and the signal.
In "common cause" situations, cases where A, although neither the cause nor the effect of
B, is correlated with B because they have some common cause C, we may (depending on
the details of the case) be able to say that B would not have happened unless A happened
and, yet, deny that A in any way explains (causally or otherwise) B. An explanatory relation
between A and B, a relation that lets us say that B happened because A happened, requires
more than counterfactual-supporting generalizations between A and B.
I disagree, therefore, with Jerry Fodor (1987: 139-140) that (to put it crudely) an
adequate story can be told about mental causation without making intentional properties
(like meaning or information) determine causal roles. It isn't enough to have these intentional properties figure in counterfactual-supporting generalizations. That
(alone) won't show that people behave the way they do because of what they believe and
desire.
they cause what they cause. Distilled water will extinguish a fire - thereby causing exactly what undistilled water causes - but the fact that
the water is distilled does not figure in the explanation of why the fire is
extinguished. It won't because undistilled water has exactly the same
effects on flames. And if the information in a signal is like this, like the
fact that water is distilled, then the fact that a signal carries information
is as explanatorily relevant to its effects on a receiver as is the fact that
the water is distilled to its effects on a flame.3
Coupled with the idea that a symbol's meaning is a function of its
relations to other conditions (the conditions it, in some sense, signifies
or means), such arguments have led to the view that it is the form,
shape, or syntax not the meaning, content, or semantics of our
internal states that ultimately pulls the levers, turns the gears, and applies
the brakes in the behavior of thoughtful and purposeful agents. Semantics or meaning, the what-it-is-we-believe (and want) is causally (and,
therefore, explanatorily) irrelevant to the production of behavior (which
is not to say that it cannot be used to predict behavior). I have merely
extended these arguments, applying them to information rather than
meaning or content. Since information, unlike meaning, requires truth,
I think these arguments are even more persuasive when applied to
information. I have, in fact, assumed up to this point that there is no
particular difficulty about how meaning, either as embodied in what a
person says or as embodied in what a person believes, could figure in a
causal explanation. But this, too, has its difficulties. Since I am convinced
that belief and meaning are notions that ultimately derive from the
information-carrying properties of living systems, I think these problems
are, at a deeper level, connected. I think, in fact, that the central problem
in this area is the causal efficacy of information. If we can find a
respectable job for information, if it can be provided a causal job to do,
if it can be put to work, then the causal role of meaning and belief
(indeed, of all the other psychological attitudes), being derivative, will
fall into place. But these are issues that go beyond the scope of this essay
and I will not return to them here.4
3 Jerry Fodor, Psychosemantics, 33, nicely illustrates this with the property of being an H-particle. A particle has the property of being an H-particle (at time t) if and only if a dime Fodor flips (at time t) is heads. If the dime turns up tails, these particles are T-particles. H-particles are obviously causally efficacious, but no one supposes that their causal efficacy is to be explained by their being H-particles.
4 For a full discussion see Explaining Behavior: Reasons in a World of Causes (Cambridge, MA: MIT Press, 1988).
fact, change S's causal powers? Until we know how, we won't know
how information can make a difference in this world.
INDICATORS AND ARTIFACTS
but not why it is converted into a cause of M. It is, rather, S's relational
properties that explain why it was selected (by the engineer designing
the device) for this causal job. Anything, no matter what its intrinsic
properties (as long as they can be harnessed to do the job), would have
done as well. As long as the behavior of this element exhibits the
appropriate degree of correlation with P, it is a candidate for being made
into a switch for M, the behavior we want coordinated with P. If,
furthermore, an element is selected for its causal role (in the production
of M) because of its correlation with P, because it does not (normally)
occur without P, we have (almost) a case of an element's informational
properties explaining its causal properties: it does (or is made to do) this
because it carries information about (or co-occurs with) that. It isn't the
element's shape, form, or syntax that explains its conversion into a cause
of M; it is, instead, its information-carrying, its semantic, properties.
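A schematic sketch may fix the idea (hypothetical throughout: the sensor, the buzzer, and their names are invented, borrowing the seat belt mechanism mentioned below). An element S whose state co-varies with P is wired to control M; nothing about S's intrinsic nature matters, only the correlation for which the engineer selected it.

# Hypothetical sketch: an indicator of P recruited, by an engineer, as a
# switch for M. S is selected for this causal role because its state
# (normally) does not occur without P - not because of its intrinsic makeup.

class Sensor:
    """S: 'closed' (normally) only when the seat belt is buckled (P)."""
    def __init__(self):
        self.closed = False
    def sample(self, belt_buckled):
        self.closed = belt_buckled   # the correlation the engineer exploits

def buzzer(sensor):
    # M is wired to S's state because S indicates P.
    return "off" if sensor.closed else "on"

s = Sensor()
s.sample(False)
assert buzzer(s) == "on"    # S says P is absent; M is produced
s.sample(True)
assert buzzer(s) == "off"   # S's correlation with P now drives the output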
This is all a little too fast, of course. We smuggled into the proceedings an engineer, with purposes and intentions of his own, soldering
things here, wiring things there, because of what he knows (or thinks)
about the effects to be achieved thereby. I therefore expect to hear the
objection that deliberately designed artifacts do not demonstrate the
causal efficacy of information. All they illustrate, once again, is the causal
efficacy of belief (and purpose). In the case of artifacts, what explains the
conversion of an information-bearing element (an indicator of P) into a
cause of output (M) is the designer's knowledge (or belief) that it is a
reliable indicator and his or her desire to coordinate M with P. To make
information do some real work, it would be necessary to make the
causal powers of S depend on its carrying information, or on its co-occurrence with P, without the intercession of cognitive intermediaries
with purposes and intentions of their own. The information (correlation)
alone, not some agent's recognition of this fact, must carry the explanatory burden.
INDICATORS AND LEARNING
To see how this might be accomplished, simply remove the engineer.
Since artifacts do not spontaneously change the way they behave (at least
not normally in a desired way) without some help from the outside,
replace the seat belt mechanism with a system that is capable of such
unassisted reconfiguration. That is, replace the artifact with an animal - a rat, say. Put the rat into conditions - a suitably arranged Skinner box will do - in which a certain response is rewarded (with food, say) when,
12
If You Can't Make One, You
Don't Know How It Works
There are things I believe that I cannot say - at least not in such a way
that they come out true. The title of this essay is a case in point. I really
do believe that, in the relevant sense of all the relevant words, if you
can't make one, you don't know how it works. The trouble is I do not
know how to specify the relevant sense of all the relevant words.
I know, for instance, that you can understand how something works
and, for a variety of reasons, still not be able to build one. The raw
materials are not available. You cannot afford them. You are too clumsy
or not strong enough. The police will not let you.
I also know that you may be able to make one and still not know
how it works. You do not know how the parts work. I can solder a
snaggle to a radzak, and this is all it takes to make a gizmo, but if I do
not know what snaggles and radzaks are, or how they work, making
one is not going to tell me much about what a gizmo is. My son once
assembled a television set from a kit by carefully following the instruction manual. Understanding next to nothing about electricity, though,
assembling one gave him no idea of how television worked.
I am not, however, suggesting that being able to build one is sufficient for knowing how it works. Only necessary. And I do not much
care about whether you can actually put one together. It is enough if
Reprinted from Midwest Studies in Philosophy 19 (1994), 468-482, by permission of the
publisher.
I read an early version of this essay at the annual meeting of the Society for Philosophy
and Psychology, Montreal, 1992. I used an enlarged form of it at the NEH Summer
Institute on the Nature of Meaning, codirected by Jerry Fodor and Ernie LePore, at
Rutgers University in the summer of 1993. There were many people at these meetings
who gave me useful feedback and helpful suggestions. I am grateful to them.
you know how one is put together. But, as I said, I do not know how to
make all the right qualifications. So I will not try. All I mean to suggest
by my provocative title is something about the spirit of philosophical
naturalism. It is motivated by a constructivist's model of understanding.
It embodies something like an engineer's ideal, a designer's vision, of
what it takes to really know how something works. You need a blueprint, a recipe, an instruction manual, a program. This goes for the mind
as well as any other contraption. If you want to know what intelligence
is, or what it takes to have a thought, you need a recipe for creating
intelligence or assembling a thought (or a thinker of thoughts) out of
parts you already understand.
INFORMATION AND INTENTIONALITY
In speaking of parts one already understands, I mean, of course, parts that
do not already possess the capacity or feature one follows the recipe to
create. One cannot have a recipe for cake that lists a cake, not even a
small cake, as an ingredient. One can, I suppose, make a big cake out of
small cakes, but recipes of this sort will not help one understand what a
cake is (although it might help one understand what a big cake is). As a
boy, I once tried to make fudge by melting fudge in a frying pan. All I
succeeded in doing was ruining the pan. Don't ask me what I was trying to do - change the shape of the candy, I suppose. There are perfectly
respectable recipes for cookies that list candy (e.g., gumdrops) as an
ingredient, but one cannot have a recipe for candy that lists candy as an
ingredient. At least it will not be a recipe that tells you how to make
candy or helps you understand what candy is. The same is true of minds.
That is why a recipe for thought cannot have interpretive attitudes or
explanatory stances among the eligible ingredients - not even the attitudes and stances of others. That is like making candy out of candy - in this case, one person's candy out of another person's candy. You can do
it, but you still will not know how to make candy or what candy is.
In comparing a mind to candy and television sets I do not mean to
suggest that minds are the sort of thing that can be assembled in your
basement or in the kitchen. There are things, including things one fully
understands, things one knows how to make, that cannot be assembled
that way. Try making Rembrandts or $100 bills in your basement. What
you produce may look genuine, it may pass as authentic, but it will not
be the real thing. You have to be the right person, occupy the right
office, or possess the appropriate legal authority in order to make certain
209
things. There are recipes for making money and Rembrandts, and
knowing these recipes is part of understanding what money and Rembrandts are, but these are not recipes you and I can use. Some recipes
require a special cook.
This is one (but only one) of the reasons it is wrong to say, as I did
in the title, that if you cannot make one, you do not know how it
works. It would be better to say, as I did earlier, that if you do not
know how to make one, or know how one is made, you do not really
understand how it works.
Some objects are constituted, in part, by their relationships to other
objects. Rembrandts and $100 bills are like that. So are cousins and
mothers-in-law. That is why you could not have built my cousin in
your basement, while my aunt and uncle could. There is a recipe in this
case, just not one you can use. The mind, I think, is also like that, and I
will return to this important point in a moment.
It is customary to think of naturalistic recipes for the mind as starting
with extensional ingredients and, through some magical blending process, producing an intentional product: a thought, an experience, or a
purpose. The idea behind this proscription of intentional ingredients
seems to be that since what we are trying to build - a thought - is an intentional product, our recipe cannot use intentional ingredients.
This, it seems to me, is a mistake, a mistake that has led some
philosophers to despair of ever finding a naturalistic recipe for the mind.
It has given naturalism an undeserved bad name. The mistake is the
same as if we proscribed using, say, copper wire in our instruction
manual for building amplifiers because copper wire conducts electricity - exactly what the amplifiers we are trying to build do. This, though, is
silly. It is perfectly acceptable to use copper wire in one's recipe for
building amplifiers. Amplifier recipes are supposed to help you understand how something amplifies electricity, not how something conducts
electricity. So you get to use conductors of electricity, and in particular
copper wire, as a part in one's amplifier kit. Conductors are eligible
components in recipes for building amplifiers even if one does not know
how they manage to conduct. An eligible part, once again, is an ingredient, a part, a component, that does not already have the capacity or
power one uses the recipe to create. That is why one can know what
gumdrop cookies are, know how to make them, without knowing how
to make gumdrops or what, exactly, gumdrops are.
The same is true for mental recipes. As long as there is no mystery - not, at least, the same mystery about how the parts work as how the
It is worth emphasizing that this is not derived or in any way second-class intentionality. This is the genuine article - original intentionality, as some philosophers (including this one) like to say. The intentional states
a compass occupies do not depend on our explanatory purposes, attitudes, or stances. To say that the compass (in certain conditions C) indicates the direction of the Arctic Pole is to say that, in these conditions, the direction of the pointer depends in some lawlike way on the
whereabouts of the pole. This dependency exists whether or not we
know it exists, whether or not anyone ever exploits this fact to build
and use compasses. The intentionality of the device is not, like the
intentionality of words and maps, borrowed or derived from the intentionality (purposes, attitudes, knowledge) of its users. The power of this
instrument to indicate north to or for us may depend on our taking it to
be a reliable indicator (and, thus, on what we believe or know about it),
but its being a reliable indicator does not itself depend on us.
"Intentionality" is a much abused word, and it means a variety of
different things. But one thing it has been used to pick out are states,
conditions, and activities having a propositional content the verbal expression of which does not allow the substitution, salva veritate, of coreferring expressions. This is Chisholm's third mark of intentionality.1
Anything exhibiting this mark is about something else under an aspect.
It has, in this sense, an aspectual shape.2 Compass needles are about
geographical regions or directions under one aspect (as, say, the direction
of the pole) and not others (as, say, the habitat of polar bears). This is
the same way our thoughts are about a place under one aspect (as where
I was born) but not another (as where you were born). If having this
kind of profile is, indeed, one thing that is meant by speaking of a state,
condition, or activity as intentional, then it seems clear that there is no
need to naturalize intentionality. It is already a familiar part of our
physical world. It exists wherever you find clouds, smoke, tree rings,
shadows, tracks, light, sound, pressure, and countless other natural phenomena that carry information about how other parts of the world are
arranged and constituted.
Intentional systems, then, are not the problem. They can be picked
1 Roderick M. Chisholm, Perceiving: A Philosophical Study (Ithaca, N.Y., 1957), chap. 11.
2 This is John Searle's way of putting it; see his The Rediscovery of Mind (Cambridge, Mass.,
1992), 131, 156. I think Searle is wrong when he says (p. 161) that there are no aspectual
shapes at the level of neurons. Indicators in the brain, those in the sensory pathways, are as
much about the perceived world under an aspect as is the compass about the Arctic under
an aspect.
3 Despite even the necessary coextensionality of "F" and "G." A thought that x is F is different from a thought that x is G even if F-ness and G-ness are related in such a way that nothing can be F without being G. This, too, is an aspect of intentionality. In Knowledge and the Flow of Information (Cambridge, Mass., 1981), 173, I called this the second (for nomic necessity) and third (for logical necessity) orders of intentionality. Although measuring instruments exhibit first-order intentionality (they can indicate that x is F without indicating that x is G even when "F" and "G" happen to be coextensional), they do not exhibit higher levels of intentionality. If (in virtue of a natural law between F-ness and G-ness) Fs must be G, then anything carrying information that x is F will thereby carry the information that it is G. Unlike thoughts, compasses cannot distinguish between nomically equivalent properties.
My discussion has so far passed over this important dimension of intentionality. Although I will return to it briefly, the point raises too many complications to be addressed
here.
thermometer could not say something that was false. Take us away and
all you have is a tube full of mercury being caused to expand and
contract by changes in the temperature - a column of metal doing
exactly what paper clips, thumb tacks, and flag poles do. Once we
change our attitude, once we (as it were) stop investing informational
trust in the instrument, it loses its capacity to misrepresent. Its meaning
ceases to be detached. It becomes, like every other piece of metal, a
mere purveyor of information.
NATURAL FUNCTIONS
Although representational artifacts are thus not available as eligible ingredients in our recipe for the mind, their derived (from us) power to
misrepresent is suggestive. If an information-carrying element in a system
could somehow acquire the function of carrying information, and acquire this function in a way that did not depend on our intentions,
purposes, and attitudes, then it would thereby acquire (just as a thermometer or a compass acquires) the power to misrepresent the conditions it had the function of informing about. Such functions would
bring about a detachment of meaning from cause. Furthermore, since
the functions would not be derived from us, the meanings (unlike the
meaning of thermometers and compasses) would be original, underived
meanings. Instead of just being able to build an instrument that could,
because of the job we give it, fool us, the thing we build with these
functions could, quite literally, itself be fooled.
If, then, we could find naturalistically acceptable functions, we could
combine these with natural indicators (the sort used in the manufacture of compasses, thermometers, pressure gauges, and electric eyes) in
a naturalistic recipe for thought. If the word "thought" sounds too
exalted for the mechanical contraption I am assembling, we can describe the results in more modest terms. What we would have is a
naturalistic recipe for representation, a way of building something that
would have, quite apart from its creator's (or anyone else's) purposes
or thoughts, a propositional content that could be either true or false.
If that is not quite a recipe for mental béarnaise sauce, it is at least a recipe for a passable gravy. I will come back to the béarnaise sauce in
a moment.
What we need in the way of another ingredient, then, is some natural
process whereby elements can acquire, on their own, apart from us, an
9 For the purpose of this essay, I ignore skeptics about functions - those who think, for
example, that the heart only has the function of pumping blood because this is an effect
in which we have (for whatever reason) a special interest. See, for example, John Searle,
The Rediscovery of Mind, p. 238, and Dan Dennett's "Evolution, Error and Intentionality"
in The Intentional Stance (Cambridge, Mass., 1987).
Larry Wright, "Functions," Philosophical Review 82 (1973): 139-168, and Teleological Explanations (Berkeley, 1976).
11 E.g., Ruth Millikan, Language, Thought, and Other Biological Categories: New Foundations for Realism (Cambridge, Mass., 1984) and "Biosemantics," Journal of Philosophy 86, no. 6 (1989); David Papineau, Reality and Representation (New York, 1987) and "Representation and Explanation," Philosophy of Science 51, no. 4 (1984): 550-572; Mohan Matthen, "Biological Functions and Perceptual Content," Journal of Philosophy 85, no. 1 (1988): 5-27; and Peter Godfrey-Smith, "Misinformation," Canadian Journal of Philosophy 19, no. 4 (December 1989): 533-550 and "Signal, Decision, Action," Journal of Philosophy 88, no. 12 (December 1991): 709-722.
This may sound as though we are smuggling in the back door what we are not allowing
in the front: a tainted ingredient, the idea of a needful system, a system that, given its needs, has a use for information. I think not. All that is here meant by a need (for a system
of type S) is some condition or result without which the system could (or would) not
exist as a system of type S. Needs, in this minimal sense, are merely necessary conditions
for existence. Even plants have needs in this sense. Plants cannot exist (as plants) without
water and sunlight.
age their development. If the only natural functions are those provided
by evolutionary history and individual learning, then no one is going to
build thinkers of thoughts, much less a mind, in the laboratory. This
would be like building a heart, a real one, in your basement. If hearts
are essentially organs of the body having the biological function of
pumping blood, you cannot build them. You can wait for them to
develop, maybe even hurry things along a bit by timely assists, but you
cannot assemble them out of ready-made parts. These functions are the result of the right kind of history, and you cannot - not now - give a thing the right kind of history. It has to have it. Although there is a
recipe for building internal representations, structures having natural
indicator functions, it is not a recipe you or I, or anyone else, can use to
build one.
THE DISJUNCTION PROBLEM
There are, I know, doubts about whether a recipe consisting of information and natural teleology (derived from natural functions - either
phylogenetic or ontogenetic) is capable of yielding a mental product - something with an original power to misrepresent. The doubts exist
even with those who share the naturalistic impulse. Jerry Fodor, for
instance, does not think Darwin (or Skinner, for that matter) can rescue
Brentano's chestnuts from the fire.13 He does not think teleological
theories of intentionality will solve the disjunction problem. Given the
equivalence of the disjunction problem and the problem of misrepresentation, this is a denial, not just a doubt, that evolutionary or learning-theoretic accounts of functions are up to the task of detaching meaning from cause, of making something say COW when it can be caused by horses on a dark night.
I tend to agree with Fodor about the irrelevance of Darwin for
understanding mental representation. I agree, however, not (like Fodor)
out of the general skepticism about teleological accounts of meaning,
but because I think Darwin is the wrong place to look for the teleology, for the functions, underlying mental representations (beliefs,
thoughts, judgments, preferences, and their ilk). Mental representations
have their place in explaining deliberate pieces of behavior, intentional
acts for which the agent has reasons. This is exactly the sort of behavior
that evolutionary histories are unequipped to explain. We might rea-
How can an event have the content JERSEY COW rather than, say, COW when any event that carries the first piece of information also carries the second? To this problem functions provide an elegant answer. A token of type R can carry information that it does not have the function of carrying - that it does not, therefore, mean (in the sense of "mean" in which a thing can mean that P when P is false). Altimeters, for instance,
carry information about air pressure (that is how they tell the altitude),
but it is not their function to indicate air pressure. Their function is to
indicate altitude. That is why they represent (and can misrepresent)
altitude and not air pressure.
2. If tokens of type R can be caused by both A and B, how can tokens of this type mean that A (and not A or B)? If R is a type of structure tokens of which can be caused by both cows and, say, horses on a dark night, how can any particular token of R mean COW rather than COW OR HORSE ON A DARK NIGHT? For this problem I think
Fodor is right: teleology is of no help. What we need, instead, is a better
understanding of information, how tokens of a type R can carry information (that x is a cow, for instance) even though, in different circumstances and on other occasions, tokens of this same type fail to carry this
information (because x is not a cow; it is a horse on a dark night). The
solution to this problem requires understanding the way information is
relativized to circumstances, the way tokens of type R that occur in
broad daylight at 10 feet, say, can carry information that tokens of this
same type, in other circumstances, in the dark or at 1200 feet, fail to
carry.15
The problem of detaching meaning from causes - and thus solving the problem of misrepresentation - occurs at two distinct levels, at the
level of types and the level of tokens. At the token level the problem is:
how can tokens of a type all have the same meaning or content, F,
when they have different causes (hence, carry different information)?
Answer: each token, whatever information it happens to carry, whatever
its particular cause, has the same information-carrying function, a function it derives from the type of which it is a token. Since meaning is
identified with information-carrying function, each token, whatever its
cause, has the same meaning, the job of indicating F. Teleology plays a
crucial role here - at the level of tokens. The problem at the type level
is: how can a type of event have, or acquire, the function of carrying
15 In Knowledge and the Flow of Information I called these circumstances, the ones to which
the informational content of a signal was relative, "channel conditions."
information F when tokens of this type occur, or can occur (if misrepresentation is to be possible), without F? Answer: certain tokens, those
that occur in circumstances C, depend on F. They would not occur
unless F existed. These tokens carry the information that F. It is from
them that the type acquires its information-carrying function. At the
type level, then, teleology is of no help. Information carries the load.
Both are needed to detach meaning from causes.
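The two-level answer can be compressed into a small sketch (again invented for illustration, with made-up names): the type's function fixes what every token means; the token's actual cause fixes what information, if any, it carries; misrepresentation is the gap between the two.

# Sketch of the type/token answer. The type R has the function of indicating
# cows (teleology, fixed at the type level); each token inherits that function
# and hence means COW, whatever its particular cause (information, token level).

class RToken:
    TYPE_FUNCTION = "indicate COW"   # acquired from tokens that depended on cows

    def __init__(self, actual_cause):
        self.actual_cause = actual_cause

    def meaning(self):
        return "COW"                 # inherited from the type's function

    def information(self):
        # What the token actually carries depends on its particular cause.
        return "x is a cow" if self.actual_cause == "cow" else None

    def misrepresents(self):
        return self.information() is None

daylight = RToken("cow")
dark_night = RToken("horse on a dark night")
assert daylight.meaning() == dark_night.meaning()   # same meaning (type level)
assert dark_night.misrepresents()                   # no information (token level)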
There is a third problem, sometimes not clearly distinguished from
the preceding two problems, that has still a different solution (why
should different problems have the same solution?). How can R represent something as F without representing it as G when the properties F
and G are equivalent in some strong way (nomically, metaphysically, or
logically)? How, for instance, can R have the function (especially if this
is understood as a natural function) of indicating that something is water
without having the function of indicating that it is H2O? If it cannot, then, since we can obviously believe that something is water and not believe that it is H2O, a theory of representation that equates content
with what a structure has the natural function of indicating is too feeble
to qualify as a theory of belief. It does not cut the intentional pie into
thin enough slices.
I mention this problem here (I also alluded to it in footnote 3), not
for the purpose of suggesting an answer to it,16 but merely to set it apart
as requiring special treatment. The problem of distinguishing representational contents that are equivalent in some strong way is surely a
problem for naturalistic theories of content, but it is not a problem that
teleology (at least not a naturalistic teleology) can be expected to solve.
To discredit a teleological approach to representation because it fails to
solve this problem, then, is like criticizing it because it fails to solve
Zeno's Paradoxes.
THE RECIPE
when and where that output is appropriate, then, no matter what further
services may be required of R, part of R's job, its function, is to supply
this needed information. That is why it is there, directing traffic, in the
way that it is.
In achieving its representational status, then, R becomes a determinant of need-related behavior, behavior that satisfies needs when R
carries the information it is its function to carry. Since R represents the
conditions (F) in which the behavior it is called upon to cause is need-satisfying, R must, when it is doing its job, produce intelligent (i.e.,
need-satisfying) output. Even when it is not doing its job, even when it
misrepresents, the behavior it helps produce will be behavior that is
rationalized by the F-facts that R (mis)represents as existing. According
to this recipe for thought, then, something becomes the thought that F
by assisting in the production of an intelligent response to F.
Something not only becomes the thought that F by assisting in the
production of an intelligent response to F, it assists in the intelligent
response because it signifies what it does. When the capacity for thought
emerges in accordance with the preceding recipe, not only do thoughts
(together with needs and desires) conspire to produce intelligent behavior, they produce this behavior because they are the thoughts they are,
because they have that particular content. It is their content, the fact
that they are thoughts that F, not thoughts that G, that explains why
they were recruited to help in the production of those particular
responses to F. This, it seems to me, vindicates, in one fell swoop,
both the explanatory and rationalizing role of content. We do not need
"rationality constraints" in our theory of content. Rationality emerges
as a by-product from the process in which representational states are
created.
Our recipe yields a product having the following properties:
1. The product has a propositional content that represents the world in an
aspectual way (as, say, F rather than G even when Fs are always G).
2. This content can be either true or false.
3. The product is a "player" in the determination of system output (thus
helping to explain system behavior).
4. The propositional content of this product is the property that explains the
product's role in determining system output. The system not only does what
it does because it has this product, but what it is about this product that
explains why the system does what it does is its propositional content.
5. Although the system can behave stupidly, the normal role of this product
(the role it will play when it is doing the job for which it was created) will
be in the production of intelligent (need and desire satisfaction) behavior.
This, it seems to me, is about all one could ask of a naturalistic recipe
for thought.
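Read as pseudo-engineering, and with the caveat that the learning rule below is an invented simplification rather than anything argued for in the text, the recipe might be compressed as follows: an internal state that indicates F is recruited, through rewarded trials rather than by an engineer, as a cause of movement M; once recruited, indicating F is its job, and a tokening of it without F is thereafter a misrepresentation.

# A deliberately crude compression of the recipe; names and rule are invented.
import random

class Learner:
    def __init__(self):
        self.recruited = False            # has C been made a cause of M?
    def state_C(self, F_present):
        return F_present                  # C (normally) occurs only if F does
    def trial(self, F_present):
        c = self.state_C(F_present)
        if self.recruited:
            moved = c                     # C now controls M
        else:
            moved = random.random() < 0.5 # exploratory behavior
        if moved and F_present:           # M satisfies a need only given F
            self.recruited = True         # C recruited because it indicates F
        return moved

rat = Learner()
for _ in range(50):
    rat.trial(F_present=random.random() < 0.5)
print("C recruited as a cause of M:", rat.recruited)  # almost surely True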
13
The Nature of Thought
1 Tyler Burge (1990: 113) suggests that, if nothing else, language is social in that interaction
with other persons is psychologically necessary to learn language.
2 Those who argue for the social character of thought are not, of course, denying that
thought is private, something that goes on in the head of the thinker. I can make out my
Federal income tax return in the privacy of my home, but in order for this to be a correct
description of what is going on in my home, there has to be something (a federal government) outside my home.
As remarked earlier, it is also a good reason to resist the externalization of thought. I will
try to indicate later why I do not believe it is as good a reason.
the view that they have to be expressed that way to capture the psychological reality of the thought being described.
To illustrate this important point, suppose I describe Clyde as thinking that my wife is a good cook. What I say (not just my saying it)
implies that I have a wife. Clyde cannot think this thought unless I have
a wife, unless, therefore, there is someone else in the world besides
Clyde. He can, of course, think some other thought, a thought he might
express by the words "Your wife is a good cook," but if I do not have
a wife, this would not be the thought that my wife is a good cook. It
would be the thought that X is a good cook where "X" picks out
someone, or something, that Clyde mistakenly takes to be my wife and
thinks, mistakenly or not, is a good cook. This something needn't be a
human being; it could be a robot or a figment of Clyde's imagination.
So Clyde can have a thought that he would express this way without
my having a wife, but he cannot have a thought fitting this description,
the thought that my wife is a good cook, without my having a wife.
What this tells us is that we can describe what someone thinks in a
way that implies the existence of other people even though the thought
being described remains, so to speak, socially uncommitted. Our attributions of thought have, or may have, a social character while the
thoughts themselves lack it.8
So what must be shown if one is to show that thought has a social
character is not merely that our attributions of thought, our thought
ascriptions, have a social character, but that the thoughts themselves are
such that they cannot be thought, cannot possess the kind of psychological identity that makes them the thoughts they are, without the existence of other beings. It may turn out, as Brian Loar (1985) argues, that
the that-clauses we use to ascribe thoughts are shot through with social
presuppositions but that the thoughts themselves, the content that we
8 In 1979 (76) and 1982 (98) Burge is careful to point out that the individuation of thoughts
is bound up primarily with the obliquely occurring expressions in the content-clauses (the
that-clauses) we attribute to a person. Other differences (mainly involving personal pronouns, possessive adjectives, demonstrative pronouns and indexical adverbs and pronouns
like "here," "now," "there," etc.), although they make for differences in ascription, do not
constitute differences in psychological content.
Woodfield (1982a: vi) makes the same distinction and says that in considering the social
(or extrinsic) character of thought we are concerned with whether these thoughts are
intrinsically social - whether, that is, it is the thought itself that is existentially committed or
merely our ascriptions of thought. What we are after (Woodfield 1982b: 263) is "the whole
content and nothing but the content as it presented itself to S."
ordinarily use to individuate thoughts, and on which we rely to understand the behavior of the person in whom they occur, is free of such
social implications.
Since this is a moderately tricky point, let me dwell on it a moment
longer. It would be silly to say that action has a social character, that you
cannot do anything in isolation, because we can describe every action in
a way that implies the existence of other human beings. Clyde touched
my car. That is one way of describing what Clyde did, and describing
what he did in this way implies that someone else (me) exists. Maybe I
can always manage to describe what Clyde and everyone else does by
words that imply that I (or some other human being) exists. Susan
stepped on a rock that I once stepped on. Andrew bought a jacket the
same color as mine. And so on. The possibility of doing this proves
nothing. It shows nothing about action itself (as opposed to these actions) because there are other descriptions that are equally correct descriptions of actions Clyde performed that do not imply that there are
other human beings. Clyde not only touched my car, he touched a car.
That, too, is an action of his, and it does not imply that anyone else
exists. So action does not have a social character. If you want to show
that action has a social character, you have to show not that actions can
be described in a way so as to imply the existence of other human
beings, but that they have to be, that unless this implication is present the
thing being described is not really an action. This is what it takes to
show that thought has a social character. Not that thoughts can be,
perhaps always are, described in such a way as to imply the existence of
other beings, but that unless this implication is present, what is being
described is not really a thought.
This is a very tall order. I do not know how anyone can hope to
argue it without a fully developed theory of what it takes to have a
thought.
There are, I know, shortcuts. You don't need a complete theory of
the mind. To establish the social character of thought, all you need is a
necessary condition for thought that itself has a social character. Ever
since Descartes, one of the favorite candidates for this necessary condition has been language. You can't think anything if you don't have a
language since thought is, as it were, an internalization of the spoken
word (or, as with behaviorists, a linguistic disposition). If, then, language
is social, something I have already conceded, then so is thought. You
cannot develop the capacity to think, hence cannot think, unless there
are (or were - they may all have died) other people. You can have a
9 Norman Malcolm (1973: 460) accepts the Cartesian doctrine ("I agree, therefore, with
the Cartesians that thoughts cannot be attributed to animals that are without language")
but maintains that this is consistent with saying that animals think. Thinking, for Malcolm,
is not the same as having thoughts.
10 At times Burge seems to be suggesting that although thought may not have a social
character (it may not, for example, in nonlinguistic animals or infants), it does in creatures
that speak a language: "Crudely put, wherever the subject has attained a certain competence in large relevant parts of his language and has (implicitly) assumed a certain general
commitment or responsibility to the communal conventions governing the language's
symbols, the expressions the subject uses take on a certain inertia in determining attributions of mental content to him. In particular, the expressions the subject uses sometimes
provide the content of his mental states or events even though he only partially understands, or even misunderstands, some of them" (1979: 562).
The only way I know to argue that thought lacks a social character is
to provide an example of an internal state of an animal that in no way
depends on the existence of other animals and is, arguably at least, a
thought. In order to give this example, in order to make it convincing,
I have to say something about what a representation is and why it is
plausible to regard thought as a kind of internal representation.
Representations are produced by objects or (in the case of biological entities) organs by way of performing an indicator or information-providing function. Think about an ordinary instrument - the speedometer in your car, for instance. This device has the job of telling you
how fast the car is moving. It has the function of supplying this information. The way it performs its job is by means of a mobile pointer on
a calibrated scale. When things are working right, the position of the
pointer indicates, carries the information, that the car is going, say, 60
mph. When things are not working right, the gauge misrepresents the
speed of the car: it "says" the car is going 60 mph when the car is not
going this fast. The way the instrument "says" this, the way it performs
its job, is by adjusting the orientation of the pointer. Different positions
of the pointer are the instrument's representations, its (as it were)
"thoughts" about how fast the car is going. The instrument's power to
misrepresent the speed of the car, a power that any system must have in
order to qualify as an appropriate model for thought (since thoughts are
the sorts of things that can be false), is a power that derives from the
instrument's information-carrying function. A device that does not have
the function of indicating speed (even if it happens to indicate speed)
cannot misrepresent speed. That is why paper clips cannot, while thermometers can, misrepresent the temperature. Although both the mercury in the thermometer and the paper clips in my desk carry information about temperature, the thermometer has the function of providing
this information, while the paper clips do not. Thermometers can lie, at
least about the temperature; paper clips cannot.
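The point lends itself to a toy computational rendering. The following sketch is merely illustrative - the class, the numbers, and the malfunction offset are all invented for the example - but it shows how misrepresentation is relative to an assigned indicator function:

    # Toy illustration: both objects carry information about temperature
    # (their state varies with it), but only one has the FUNCTION of
    # indicating temperature, so only one can misrepresent it.
    class Indicator:
        def __init__(self, has_indicating_function: bool):
            self.has_indicating_function = has_indicating_function

        def state(self, actual_temp: float, malfunction: bool) -> float:
            # The physical state tracks temperature unless something goes wrong.
            return actual_temp + (5.0 if malfunction else 0.0)

        def misrepresents(self, actual_temp: float, malfunction: bool) -> bool:
            wrong = self.state(actual_temp, malfunction) != actual_temp
            # No indicating function, no misrepresentation: a wrong-sized
            # paper clip is not a "lying" one.
            return self.has_indicating_function and wrong

    thermometer = Indicator(has_indicating_function=True)
    paper_clip = Indicator(has_indicating_function=False)
    print(thermometer.misrepresents(20.0, malfunction=True))  # True: it can lie
    print(paper_clip.misrepresents(20.0, malfunction=True))   # False: it cannot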
If we think, then, of thoughts as something like internal representations, internal pointer readings, having an information-carrying function,
we can see how thoughts could be either true or false, depending on
whether they were doing their job right or not. This is grossly oversimplified, of course, because we are dealing with only the crudest
possible situations. But the basic idea, I hope, is clear.
In the case of artifacts (like instruments, diagrams, gauges, and language), the functions that convert an otherwise eligible event or condition (one that can carry the relevant information) into a representation
come from us. They are conventional in the same way that all symbols
are conventional. We determine what the function of these artifacts will
be and, hence, whether they will produce representations and, if so, of
what. Since thought is not conventional, at least not in this way, the
functions that convert otherwise eligible internal events and conditions
in the nervous system into representations (and, hence, thoughts) are
natural, not conventional. Just as they give the heart a certain blood-pumping function, evolutionary histories give various sense organs their information-providing function, thereby making our experience of the
world, the product these organs produce by way of performing their
function, a (sensory) representation of the world. Learning (or so I have
argued in Dretske 1988) also gives rise to elements that have an informational function. Such items are the concepts essential to thought.
Thoughts are the internal representations whose information-providing function has been acquired in learning, the kind of learning in which the concepts needed to have these thoughts are acquired.
This is a stripped-down version of a view about the nature of
thought. It happens to be my view, but that is irrelevant for the purposes
to which I want to put it. I haven't, I know, given you enough of the
view to convince you of its many and fulsome merits. But that, too,
isn't necessary. I have given enough of it, I hope, to provide the backdrop of the example that I now want to develop. For with this theory in the background, what I hope to give you is the example I promised: an example of something that is, arguably at least, a thought and that is
free of all social implications. What I will provide is a description of a
normal learning process in which an internal structure develops that has
representational content. It not only has an appropriate content, it helps
explain the actions of the animal in which it occurs. If you agree with
me that this is, indeed, what I have succeeded in describing, then I hope
you will also agree with me that what I have described is, arguably at
least, a thought. It will also be something that is devoid of all social
character. If you do not agree with me that what I have described is a
thought, I hope you will at least agree with me that it is thoughtlike in
character, the sort of thing that, if it got enough companions of the right
kind, a (so to speak) critical mass of such representational neighbors,
then it would be a thought. I'll settle for this.
Consider, then, the following story about a generic animal I will call
Buster. Buster lives in a place where there are furry worms. I will call
them (naturally enough) furms. Furms come in all sizes and colors, but
they all are furry (F) and they are all shaped like worms (W). Since they are the only things in Buster's habitat that are both furry and worm-shaped, the properties F and W suffice to identify an object as a furm in
this environment.
Furms and their salient features are observable by Buster. By this I
mean that, thanks to evolution, Buster comes into the world equipped
with sensory mechanisms that register such things as the shape and color,
the movement and orientation, the texture and size of middle-sized
objects like furms. Although he might have to learn where to look for
them, Buster doesn't have to learn to see them. Since they are all around
him, all he has to do to see them is open his eyes.
But although Buster is, from birth, equipped to see furms, he has no
instinctive, no genetically programmed, reaction to them. They are not,
in this respect, like bright lights, hot surfaces, loud noises, and fleas on
the rump, objects to which he has an instinctive, a genetically determined, response. Buster doesn't withdraw or hide from, snarl or stare at,
attack or bite furms. Or, if he does, he is not responding to a furm as a furm, but
as (say) an obstacle in his path, something blocking his line of sight, or
as something moving in the grass. Buster might, out of curiosity, sniff at
a furm, poke at it, step on it, or watch it if it does something unusual,
but a furm for Buster, initially at least, is much what a tree, a bush, or a
rock is to us: just one of the many uncategorized things in our environment that we cannot help seeing when our eyes are open but to which,
prior to learning, we pay little attention.
What I am asking you to imagine, of course, is an animal who can
and does experience objects of a particular kind (furms) but an animal
who as yet has developed no capacity to respond to them as objects of a
particular type. Buster does not yet have the concept FURM. Although
he can experience furms, and in this sense (if this is a sense) internally
represent them, he does not yet have beliefs of the sort: that is a furm.
There is a difference, as we all (I hope) know, between seeing a furm
and having a belief that it (what one sees) is a furm, a difference that
only learning is capable of bridging.11 If Buster is to develop the capacity
to believe of the furms he sees that they are furms, if he is to see them as
furms, he has to develop a new way of representing them. Unlike
Buster's experiences of furms, the sensory mechanisms for which are innate, nature did not equip Buster to have furm beliefs. He has to get that for himself.

11 If the reader doubts this, think about (say) trapezoids. You could see trapezoids (the ones the teacher drew on the board) long before you could see them as trapezoids, long before you were able to recognize or identify them as trapezoids (see that they were trapezoids).
One day, out of idle curiosity, Buster sniffs at a large red furm and
thereupon experiences a painful stinging sensation in his nose. Buster
thereafter avoids large red furms. The same thing happens again with a
small green furm. In a short time we find Buster behaving in a way we
would all agree is intelligent: he avoids furms. He no longer sniffs at
them. When they approach, he quickly retreats. Why? How is one to
explain Buster's behavior?
Let me put this explanatory question in a special context so as to
clarify just what we are seeking when we ask to have Buster's behavior
explained. Imagine that we put Buster in a contrived, an artificial,
situation in which there are fake furms (caterpillars or, if you like,
mechanical furms), objects that are both furry and wormlike but not
furms. Buster spots a fake furm approaching, and he quickly withdraws.
A newly arrived observer wants to know why. Why did Buster retreat?
He didn't do this before. What is the explanation of Buster's behavior?
First Try: Buster withdrew because he saw a furm coming toward him.
This cannot be the right answer since Buster did not see a furm
coming toward him. It was not a furm.
Second Try: Buster withdrew because he saw what looked like a furm
(something that was both F and W) coming toward him.
Once again, this explanation (like the first explanation) attempts to
explain Buster's behavior without invoking a belief that Buster has about
the object he sees. He retreats, we are told, because he sees something
fitting a certain description, just as one might pull one's hand away from
a surface fitting a certain description: "hot."
But this, too, cannot be the right explanation. At least it cannot be
the full explanation. Prior to his painful encounters with furms, Buster
saw what looked like furms coming toward him (in this case they
actually were furms), objects that were both F and W, and Buster did
not withdraw. He had exactly the same experience then, before learning, that he is having now, after learning - the sort of experience that constitutes his seeing an approaching furm. Now he withdraws; then he did
not. Now the experience triggers withdrawal movements; then it did
not. Why? What is the difference? That, surely, is the question we need
answered in order to understand why Buster is withdrawing, and that
237
238
239
240
241
14
Norms, History, and the
Constitution of the Mental
1. NORMATIVE CONCEPTS
The following discussion of norm-loaded concepts is, I think, independent of what view
one takes of the embedded norms. If one is a subjectivist about these things - thinking,
perhaps, that to say that something is wrong (bad, not the way it ought to be, etc.) is
merely to assert (or evince) personal feelings and attitudes toward it (e.g., that you don't
like or approve of it), then norm-laden concepts are concepts that S cannot correctly apply
to x unless S (not necessarily anyone else) has the right feelings or attitudes toward x. For
a recent defense of an "expressivist" analysis of norms see Gibbard (1990).
Although something else may be bad or wrong - e.g., the negligence (driving while drunk)
that led to the accident. Thus, the result may still be labeled murder.
formed they are all like that. Nothing can be any of these things unless it, or the processes leading up to it, are subject to norms. If something is marred or damaged, for instance, it departs in some degree from a standard that defines how it should be. Many of these standards come from us, the designers and makers of devices. It is our purposes, the way we want or intend things to be, that makes them - when they fail to be that way - damaged, spoiled, malformed, flawed, and so on. If a gadget is my creation, if I made it to do a certain task, then it is broken, flawed, or defective if it doesn't do what I want it to do. If I want the clock I build to lose time, if that was my purpose in building it this way, then it isn't working right, it is broken or defective, if it doesn't lose time.5 I am the origin of this norm.
Things are subject to multiple standards. Sometimes, for example, we intend things to be in a condition that, relative to some other standard, they are not supposed to be in. There is no inconsistency in saying that an object is supposed to be in a state that (relative to another standard) it is not supposed to be in.
If mental concepts are normatively loaded, then minds (or those parts of the mind that we pick out with norm-laden concepts) cannot exist without norms. This gives rise to a problem: where do these norms come from? They certainly cannot come from us - from our intentions, purposes, and desires - the way the norms governing our own creations (e.g., my slow-running clock) come from us, since the norms we now seek are ones on which intentions, purposes, and desires themselves depend for their existence. So the norms constituting these mental states have to come from somewhere else. Where?

5 If one thinks of clocks as functional devices - objects that are supposed to tell the right time - then the gadget I build, not having that function, is not a clock. Whatever it is, though, it is supposed to run slowly - i.e., more slowly than a proper clock.
This is a problem if mental concepts are norm-laden. But are they?
Some of them seem to be. Mistakes, after all, are (according to my
dictionary) beliefs, judgments, or takings that are in some way wrong,
bad, or improper. One cannot believe, perceive, or infer without risking
misrepresentation, illusion, and fallacy. That is part of the game. Maybe
cognition can occur (in an omniscient being?) without error, but the
possibility of mistake, the possibility of getting it wrong, of committing
a fallacy, is part of what we mean when we speak of someone as judging,
inferring, or concluding that so-and-so is true. If, then, the possibility of
mistake (if not mistakes themselves) is part of what we mean in describing an act or practice as cognitive, then cognition is norm-laden in the
sense that nothing merits this description unless there are, somewhere in
the background, norms relative to which states and activities can be
deemed wrong, bad, or incorrect. If you can't make a mistake, if what
you are doing isn't the sort of thing in which mistakes are possible, then
what is happening might be described as digestive or immunological,
but it isn't cognitive. It is not believing, judging, reasoning, perceiving,
or inferring.
At one time this line of reasoning seemed right to me, and I took it to show that there was a problem for naturalistic accounts of the mind. The problem - or what seemed to me a problem - was to determine the source of these cognitive norms. Where did they come from? How does one get a mental OUGHT, a SUPPOSED TO BE, from a biological IS? David Hume taught us that it is a fallacy to conclude that
something ought (morally) to be so from premises describing only what
is (as a matter of fact) so. Why isn't it also a fallacy to suppose (as
naturalistic theories of the mind do) that normatively loaded mental
states arise from, and are thus reducible to, the norm-free facts of physics
and biology?
One response to this problem (championed by Ruth Millikan, 1984, 1993) is that the facts of biology are not norm free. An organ's or a trait's proper function - roughly speaking, what it is selected to do - is, Millikan insists, itself a normative concept.6 If it is a thing's proper function to do F, then it is supposed to do F. Thus, norm-laden cognitive discourse (Millikan 1993, p. 72, speaks of false beliefs as defective and "true" and "false" as normative terms) can be grounded in biological norms. There is no fallacious derivation of a cognitive OUGHT from a biological IS, only a transformation of a biological into a cognitive OUGHT. She describes her purpose (1993, p. 10) as defending this biological solution to "the normativity problem," the problem of accounting for false beliefs, misperceptions, bad inferences, errors, and so on. This, indeed, is why she speaks (in the title of her first book) of language and thought as biological categories (Millikan 1984).

6 See also Neander (1995, p. 112), who says that biological norms underwrite semantic norms. Not everyone agrees, of course, that natural selection yields norms. Fodor (1996, p. 252), Bedau (1993), Matthen (1997), and others are skeptics.
The problem to which Millikan's biological solution is a solution no
longer seems like a problem to me. Beliefs and judgments must be either
true or false, yes, but there is nothing normative about truth and falsity.
What makes a judgment false (true) is the fact that it fails (or succeeds)
in corresponding to the facts, and failing (or succeeding) in corresponding to the facts is, as far as I can see, a straightforward factual matter.
Nothing normative about it. An arrow (on a sign, say) can point to
Chicago or away from Chicago. There is a difference here, yes, but the
difference is not normative. Aside from our purposes in putting the sign
there or in using the sign as a guide, there is nothing right or wrong,
nothing that is supposed-to-be or supposed-not-to-be, about an arrow
pointing to Chicago. The same holds for beliefs. Aside from our purposes in forming beliefs or in using beliefs as guides to action, there is
nothing they should or shouldn't be. Chris Peacocke (1992, p. 126)
claims that "correct" and "incorrect" are normative notions because
whether X is correct or not depends on the way the world is. But
whether X is pointing at Chicago also depends on the way the world is.
It depends on where Chicago is. That doesn't make "pointing at Chicago" a normative expression.
For understandable reasons we dislike false beliefs and do our best to avoid them. This dislike and avoidance leads us to describe false beliefs in ways that are heavily normative - as, for example, mistakes or misrepresentations - where the prefix "mis" signifies that the judgment or belief has gone amiss, that it is wrong, bad, or improper in some way. But the
practice of describing false beliefs in this normative way doesn't show
that there is anything essentially normative about false beliefs any more
than it shows that there is something essentially normative about the
weather (e.g., a blizzard) on the day of our picnic because we describe
it as awful. The fact that cognition requires the possibility of error, and that errors are bad, does not mean that cognition requires norms - not unless errors are necessarily bad. But why should we believe this? Bad,
yes, at least most of the time, but not necessarily bad. The only fault with fallacious reasoning, the only thing wrong or bad about mistaken judgments, is that, generally speaking, we don't like them. We do our best to avoid them. They do not - most of the time at least - serve our purposes. This, though, leaves the normativity of false belief and fallacious reasoning in the same place as the normativity of foul weather and bad table manners: in the attitudes, purposes, and beliefs of the people who make judgments about the weather and table behavior.
Some have argued that it isn't truth and falsity that are norm-laden,
but the concept of belief itself. Beliefs by their very nature, and unlike
wishes, hopes, desires, and doubts (not to mention bicycles and rocks),
are mental states that aspire to truth and, therefore, fail or are defective
in some way when they are not true. A belief can be false, yes, just as a
(defective) heart can fail to pump blood, but a belief, even when false,
is supposed to be true, just as the defective heart is supposed to pump
blood. Beliefs aspire to truth; that is their job, their purpose, their raison d'être. Anything lacking this purpose just isn't a belief any more than a
device that isn't supposed to tell the right time is a clock. So if, in the
natural world, there are no OUGHTS, neither are there any beliefs.
I know this view is aggressively promoted,7 but I do not find it
plausible. I agree that beliefs are necessarily true or false. If I didn't
understand what it was to be true or false, I could hardly understand
what it was to be a belief. But I do not see that I need go further than
this. This seems like enough to distinguish beliefs from other mental
states like wishes, desires, hopes, doubts, and pains - not to mention
bicycles and rocks.8 Why, in order to understand what a belief is, do I
also have to think of a belief as something that is supposed to be true? If I
deliberately deceive you, is the resulting belief supposed to be true?
Recalling our earlier remark (about multiple standards), it may be objected that although the
belief, given my intentions, is supposed to be false, it is, given (merely the fact) that it is a belief,
supposed to be true. Perhaps, but what reason is there to think this is so?
least we do not yet have an argument that it is. Mistakes are bad, yes, and, generally speaking, given our cognitive purposes, beliefs ought to be true and experiences veridical. But the norms implied by these evaluative judgments are the norms we bring to cognitive affairs. They have the same source - current attitudes and purposes - as do the norms applied to the weather on the day of a picnic and rude behavior at the dinner table. Take away these attitudes and purposes and you eliminate the norms that make false beliefs mistakes, but you do not eliminate false beliefs. A belief - just like the weather - is independent of the norms we apply to it. As we have seen, there are norms that are independent of our purposes and attitudes. We can still say that, given their evolutionary history, and independent of what we may happen to desire and intend, perceptual mechanisms are supposed to provide us with information about our environment and that, therefore, perceptual beliefs are supposed to be true. We can say this in the same way we say that the heart is supposed to pump blood and the liver is supposed to aid in digestion and excretion. But the norms implied in these judgments, although historically grounded, are not essential to the cognitive products of these historical processes. We do not need to suppose - at least we have not yet found a reason to suppose - that such an evolutionary history is essential to the perceptual processes and states that are its products.
There is, however, the possibility that history, although not required
for the normativity of cognition, is required for some other aspect of
our mental life.13 This is a possibility I mean to raise in the remainder of
this article. The upshot of this brief look will be that although we do
not need history to explain the normativity of mental affairs, we may
13 The fact that history may be needed to explain the intentionality, not the normativity, of
the mental is not always clearly distinguished. Neander (1995, p. 110), for example, speaks
of grounding intentionality in biological facts (functions) but then (p. 112) says that
functions are needed to underwrite the normative notion of mental (semantic) content.
Millikan moves back and forth between intentionality and normativity as the feature of
mental affairs that proper functions are supposed to ground. MacDonald (1989, especially p. 189) has a clear discussion of these matters.
Incidentally, in Dretske (1986) I described misrepresentation as a problem for a naturalistic approach to the mind, but I did not clearly specify whether I thought this problem
was a normative problem or just a problem about intentionality. I don't think I was clear
about it then. History, I would now say, is necessary for misrepresentation, not because
representation (or misrepresentation) is essentially normative, but because it is essentially
intentional.
REFERENCES
Dennett, D. 1996. Granny versus Mother Nature - No contest. Mind and Language,
11.3, pp. 263-269.
Dretske, F. 1986. Misrepresentation. In Belief: Form, Content, and Function, R. Bogdan, ed. Oxford: Clarendon Press.
Dretske, F. 1995. Naturalizing the Mind. Cambridge, MA: MIT Press.
Fodor, J. 1990. A Theory of Content and Other Essays. Cambridge, MA: MIT Press.
Fodor, J. 1996. Deconstructing Dennett's Darwin. Mind and Language, 11.3, pp. 246-262.
Gibbard, A. 1990. Wise Choices, Apt Feelings. Cambridge, MA: Harvard University Press.
Hanson, N. R. 1958. Patterns of Discovery. Cambridge: Cambridge University Press.
Kitcher, P. 1993. Function and design. Midwest Studies in Philosophy, XVIII. Notre Dame, IN: University of Notre Dame Press, pp. 379-397.
Kripke, S. 1982. Wittgenstein on Rules and Private Language. Oxford: Blackwell.
Lycan, W. 1997. Consciousness and Experience. Cambridge, MA: MIT Press.
Lycan, W. 1990a. Introduction (to Part II). Mind and Cognition: A Reader. Oxford: Blackwell, pp. 59-62.
Lycan, W. 1990b. Mind and Cognition: A Reader. W. Lycan, ed. Oxford: Blackwell.
MacDonald, G. 1989. Biology and representation. Mind and Language, 4.3, pp. 186-199.
Matthen, M. 1997. Teleology and the product analogy. Australasian Journal of Philosophy, 75.1, pp. 21-37.
Millikan, R. 1984. Language, Thought, and Other Biological Categories. Cambridge, MA: MIT Press.
15
Minds, Machines, and Money:
What Really Explains Behavior
Reprinted from Human Action, Deliberation and Causation, Philosophical Studies Series 77, ed.
Jan Bransen and Stefan Cuypers, pp. 157-173, copyright 1998 by Kluwer Academic
Publishers, with kind permission from Kluwer Academic Publishers.
1 After writing this essay I came across Allen (1995), in which a similar analogy is developed to reach a similar conclusion.
2 Corresponding to Kim's second formulation of weak supervenience (1984a). Although
citations are to individual articles, all page references to Kim are to Kim (1993b), in which
the individual essays are collected.
of S has the same value of V. This corresponds to what Kim calls weak
supervenience.3
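Since the page break swallows the definition here, it may help to set out one standard formalization of weak supervenience - a reconstruction on my part of the Kim (1984a) formulation footnote 2 alludes to, not a quotation. V (monetary value) weakly supervenes on S (an object's intrinsic properties) just in case:

    \forall w \,\forall x, y \in w \;\Big[ \forall P \in S\,(Px \leftrightarrow Py) \;\rightarrow\; \forall Q \in V\,(Qx \leftrightarrow Qy) \Big]

That is: within any one world (here, any one country or economic unit), objects indiscernible with respect to their S-properties are indiscernible with respect to V. Strong supervenience would drop the restriction to a single world, allowing x and y to be drawn from different worlds.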
As a result of this (normally) widespread supervenience and the correlation associated with it, we can (and regularly do) use the fact that
something is money to predict and "explain" (more about the scare
quotes in a moment) the effects money has in transactions of various
sorts. Why did the cashier give me $8 in change? Because lunch cost
$12 and I gave her $20. Why didn't the vending machine give me the
candy I selected? Because I deposited only $0.55 and the candy bars cost
$0.65.
Are these familiar explanations really correct? Is the fact that I gave
the cashier $20 really the (or part of the) explanation of why she gave
me $8 change? Is the monetary value of the paper I gave her a causally
relevant property? The coins I deposited in the vending machine are
worth only $0.55, but is this fact relevant to why the machine did not
give me a candy bar? Is the value, the legal worth, of these coins a
causally relevant fact about them? I know we talk this way. I
know that everyday explanations of such results are replete with references to monetary value, but is this extrinsic property the causally relevant property?
It is important to understand that these are questions about the causal
relevance of an object's properties (its being worth $20), not the causal
efficacy of the objects (the $20 bills) that have these properties. These
are, in other words, questions about what explains the result, not what
causes it. Giving the cashier an object with a monetary value of $20
caused her to give me $8 change. About that there is no argument. The
question we are asking, though, is not whether a $20 bill is a causally
effective object, but whether its being a $20 bill explains its effectiveness.
Is the value of the paper I give her a fact about the paper that explains
the result of giving her the paper? What if I, instead, give her a piece of
paper that looks and feels exactly like a real $20 bill? Would the result
be different if we suppose the bill was a perfect counterfeit? No, of
course not. If she can't tell the difference, how could it be? Well, if we
really believe this, as I assume we all do, then why say that the cashier
gave me $8 change because I gave her $20? Giving her $20 is the cause,
but that it was $20 is not the explanation of her giving me $8 change.
The correct explanation is that I gave her a piece of paper that looked
and felt (to her) like a $20 bill. The causally effective properties, those
that explain why the effect occurs, are the intrinsic, the observable,
properties of the paper on which its being $20 supervenes, the properties
you and I, cashiers and machines, use to tell whether it is $20.
3 At least it is a form of local weak supervenience - local to a given nation or economic unit. Although it would complicate monetary exchanges, there is no reason why two countries might not assign the same (type of) object different monetary values. If this happened, then, even without counterfeiting, there would be local (i.e., national), but not global (international), supervenience. In speaking of monetary value supervening on the intrinsic properties of an object, I should, therefore, be understood as referring to a given country or economic unit.
I am not, mind you, recommending that we change explanatory
practice. Although I am convinced that its being money is (in most
imaginable cases) totally irrelevant to the results obtained, I will go right
on explaining the results of monetary transactions in terms of the money
exchanged. Although we predict the behavior of vending machines by
mentioning the value of the money we put in them ("You have to
deposit $0.75 to get a Coke") we all know that it isn't the value of the
money that explains the result. It is the shape, size, weight, and (for
machines that take bills) visible marks of the objects we put in them that
explains why machines behave the way they do. An object with the
same S and a different V (a slug) would produce the same behavior.
Vending machines (not to mention store clerks) are equipped to detect
the shape, size, and density, but surely not the economic history, of the
objects they receive. We nonetheless pretend to explain machine behavior by mentioning the historical-social properties ($0.75) of the internal
objects (coins) that cause behavior. We ignore the intrinsic properties
that are causally relevant. We ignore them because, often enough, we
don't even know what they are. Nonetheless, given the facts of supervenience, we know that, normally, inserting $0.75 will get you a Coke
even if we don't know which properties of the $0.75 are responsible for
this effect (is density relevant?). V is, after all, multiply realizable in S.
We can use a variety of different coins, of different shapes and sizes, to
make $0.75. The machine will give us a Coke, it will behave in the
same way, if we insert quarters, dimes, and a nickel; or seven dimes and
a nickel; or fifteen nickels. As long as the coins add up to $0.75 we get
the same result. So it is simpler and much more convenient in our
explanations of machine behavior to mention the extrinsic V all the
different Ss have in common even though we know it is S, not V, that
explains the result. Convenience explains the explanatory pretense.
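The S/V contrast can be made vivid with a toy sketch; it is merely illustrative (the diameters, masses, and tolerances are invented, rough stand-ins for real coin specifications). The machine's acceptance test mentions only intrinsic S-properties, so a slug with the right S but no V vends a Coke, and any combination of coins realizing $0.75 works, the machine crediting value by exploiting the S-to-V correlation:

    # S-properties: intrinsic, detectable features. V: extrinsic monetary value.
    # The machine never inspects V; it classifies objects by S alone.
    COIN_S = {"nickel": (21.2, 5.0), "dime": (17.9, 2.27), "quarter": (24.3, 5.67)}
    COIN_V = {"nickel": 5, "dime": 10, "quarter": 25}  # cents

    def machine_accepts(diameter: float, mass: float) -> int:
        # Match the object's S-profile; credit the cents its designers
        # associated with that profile (relying on the S-to-V correlation).
        for name, (d, m) in COIN_S.items():
            if abs(diameter - d) < 0.3 and abs(mass - m) < 0.2:
                return COIN_V[name]
        return 0

    def vend(objects) -> bool:
        # objects: list of (diameter, mass) pairs; a Coke costs 75 cents.
        return sum(machine_accepts(d, m) for d, m in objects) >= 75

    two_quarters_two_dimes_nickel = (
        [COIN_S["quarter"]] * 2 + [COIN_S["dime"]] * 2 + [COIN_S["nickel"]])
    fifteen_nickels = [COIN_S["nickel"]] * 15
    slug = (24.3, 5.67)  # quarter-shaped, worthless: same S, no V
    print(vend(two_quarters_two_dimes_nickel))  # True: V multiply realized in S
    print(vend(fifteen_nickels))                # True: another realization
    print(vend([slug] * 3))                     # True: S, not V, does the work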
This, incidentally, is why I am suspicious of philosophical appeals to
our ordinary explanatory practice, or to the explanatory practices in the
special sciences, to support accounts of what causally explains what (see,
for example, Burge 1986, 1989, 1993, 1995; Baker 1995). Our explanatory practice is often governed by practical convenience and, sometimes, theoretical ignorance. I know, for example, that we commonsensically invoke beliefs and desires to explain human (and sometimes
animal) behavior. That, I am willing to concede, is the accepted practice.
Even in cognitive psychology and computer science (presumably special
sciences) there are a variety of intentional ideas (e.g., data structures,
information, representation) that regularly appear in causal explanations.
But saying that x's having P causally explains x's Q-ing, when P is a relational - or, even worse, an intentional - property of x, doesn't make it so. Even if everyone says it. If I trusted explanatory practice this blindly,
I would have to conclude that the monetary value of objects explains
their effect on vending machines. It will take more than our explanatory
practice to convince me of this.
It may be thought that I am constructing a false dichotomy, that the two explanations of a cashier's or a vending machine's behavior - one in terms of intrinsic S-properties, the other in terms of extrinsic V-properties - do not (as I have been assuming) really compete. They aren't mutually exclusive. They can both be correct. The explanation in terms of a coin's intrinsic properties is a proximal explanation of its effect
on the vending machine, while the explanation in terms of monetary
value is a more remote explanation of this same result. It is like explaining
a behavioral deficit (stuttering, say) by describing the brain damage that
produces the stutter (explanation by intrinsic properties of the stutterer)
or by mentioning the incident - being dropped on the head as an infant - that causally explains this brain damage (an explanation by extrinsic, i.e., historical, properties). The first is a proximal, the second a remote, explanation of the stuttering. Similarly, if we think of the fact that the paper I give the cashier has a monetary value of $20 - that it has the kind of history and use that makes it $20 - as the causal explanation of
its having the observable properties it now has, then social-historical V
properties causally explain intrinsic S-properties and, thus, explain (in a
more remote way) whatever the S-properties causally explain - why, for
example, the cashier gave me $8 change for my $20.
This objection, although it gets at something interesting about the
connection between extrinsic and intrinsic properties in explanations of
this sort, is not, as it stands, correct. The facts that give coins and bank
notes their value (the V-facts) do not causally explain why these objects
have the size, shape, and markings they have (the S-facts). The reason
why $20 bills have Andrew Jackson's picture on them while $5 bills
have Abe Lincoln's picture, the reason they have these particular observable features,4 is not that these bills have the value they have. It has to
do, rather, with the various decisions and policies of administrators in
the U.S. Treasury Department. The pictures on U.S. coins and bank
notes might well have been different. If everybody (including the government) agreed, we could, in fact, make $20 bills (the bills that are now
worth $20) into bills worth $5, and vice versa.
Nonetheless, although I think the objection mistaken, it raises an
interesting possibility, the possibility that the explanatory efficacy of an
object's extrinsic properties lies in the complex causal relations between
an object's extrinsic properties and its intrinsic nature. I will return to
this point later in order to explore this possibility. Pending deeper
investigation, though, I assume that the output of people and vending
machines in monetary exchanges is not to be explained, not even remotely, by the extrinsic value of the money that produces that output.
The causal efficacy of money is not explained by its being money.
When externally individuated properties (like V) supervene on intrinsic properties (S), and the supervenient property is multiply realized in S
(thereby making it practically convenient to express generalizations in
terms of V rather than S), talk of the supervenient properties begins to
dominate explanatory contexts and one finds little or no mention of S.5
Imagine trying to explain why Clyde got a Coke not by saying he
deposited the required $0.75, but by describing the S-properties that
were actually causally relevant. If we happen to be ignorant of exactly
which coins Clyde deposited in the machine, the explanation would, of
necessity, be radically disjunctive: fifteen coins of this sort; or two coins
of this sort and seven coins of that sort; and so on and so on. Nobody
gives those kinds of explanations. What does this show? Nothing. Or,
perhaps, only that we are lazy or ignorant.
Despite this undeniable tendency in explanatory practice to drift to
the most conveniently expressible generalizations, V-generalizations are
not the sort that will support explanations. Predictions, yes, but not
explanations. In more careful moments when, for instance, we are
2. THE ANALOGY
There is a prevalent view in the philosophy of mind that the propositional attitudes (including belief) are something like internal coins. What
you believe (intend, desire, conclude, regret, etc.) is an extrinsic property
of the internal belief (intention, etc.) in the same way that the value of
coins is extrinsic to the coins in a machine. For a materialist (who is not
an eliminativist) a belief (some brain state, say) has intrinsic (neurobiological) properties, but it also has a content or meaning (= what it is one
believes), and this is determined, in part at least, by the relations this
To machine behavior, not to machine output. This distinction between output and behavior is a distinction that figures importantly in my account of the way reasons explain
behavior in Explaining Behavior (1988). Here I merely note the distinction. I return to it
later.
The Standard Theory is commonly thought to have the kind of epiphenomenal implications we uncovered in examining monetary-machine interactions. Although the content of a belief - what one believes - is
routinely mentioned in explanations of behavior (just as the value of
coins is mentioned in explanations of machine behavior), this content is,
according to Standard Theory, as irrelevant to what we do as is the
value of coins to what a machine does. If you want to know what makes
vending machines dispense Cokes and candy bars, look to the intrinsic
properties of the internal causes - the shape, size, and weight of the
internal coins that trigger its responses. For the same reason, if you want
to know what makes people do the things they do, look not to the
relational properties of belief (those that constitute what we believe) but
to the intrinsic (i.e., neurobiological) properties of the belief. Look to the "shape" and "size" - that is, the syntax - of these internal "coins," not their semantics.

7 And a realist (i.e., not an eliminativist) about the mind.
This is a form of epiphenomenalism because although beliefs, on this
view, turn out to be causally active (just as the coins deposited in
vending machines are causally active), the properties of the internal cause
that make it mental, the extrinsic properties that give it content (and
thus make it into a belief), are not relevant to the causal efficacy of the
belief. Thus, the Standard View, while denying neither the reality nor
the causal efficacy of the mental, leaves little or no room for understanding the causal efficacy of the mental qua mental. Beliefs, qua beliefs,
have as much effect on the behavior of persons as do quarters, qua
quarters, on the behavior of vending machines.
4. SOLUTIONS
Standard theorists are aware of this problem, of course, and they have
adopted a variety of different strategies to neutralize its impact. Some
(e.g., Campbell 1970; Stich 1978, 1983) simply accept the implication
and try to live with it. Others (e.g., Burge 1986, 1989, 1993, 1995;
Baker 1995) insist that it should be actual explanatory practice, not a
priori metaphysical principles, that determines what is a causally relevant
property. So if, in ordinary causal explanations of behavior, we invoke
what is believed to explain what is done, then what is believed - content - is causally relevant to behavior, and that is an end to the matter - metaphysical principles to the contrary be hanged. Still others (e.g.,
Fodor 1987) concede the irrelevance of extrinsic or broad content and
look for a satisfactory substitute - an intrinsic content, narrow content.
Or, like Davidson (1980), one takes comfort in the fact that beliefs are
causes and refuses to worry about what it is about them that explains
their effects (on Davidson's theory it turns out to be the intrinsic physical
properties of the belief - the ones that figure in strict laws). It is hard to
see why some of these strategies (e.g., Fodor's and Davidson's) for
vindicating the explanatory role of belief are not so much ways of solving
the problem as (like the first) gritty ways of learning to live with (and
talking around) it.
In a series of insightful articles, Jaegwon Kim (1984a, 1984b, 1987,
1989, 1990, 1991, 1993a) has explored the idea that mental causation is
a form of supervenient causation (I denote supervenient causation by "causation_s"). One macroevent (increasing the temperature of a fixed
Not all philosophers think this. Some (including myself - see Dretske 1995) have a
representational view of sensations that identifies experienced qualities (qualia) with representational properties. Thus, just like beliefs, the mental properties of sensations turn out to
be extrinsic or relational properties of internal states: see Harman (1990), Lycan (1987,
1997), Tye (1994, 1995).
This is confirmed by his doubts a few pages later (p. 107) about whether the account of
supervenient causation will work for intentional states - states (like belief) that have a
propositional content.
what the coin's shape and size cause - for example, an elliptical shadow
in obliquely falling light. The value (being extrinsic) and the physical
appearance (intrinsic) remain distinct attributes of the coin with different
causal powers. To get supervenient causation we need strong supervenience, but what could it mean to suppose that the monetary value of a
piece of paper or the value of a quarter was necessarily tied up with its
having a particular shape, size, and set of marks? This, it seems, could be
the case only if the monetary value of the paper, its being a genuine $20
bill, was not in fact relational at all but, rather, reducible to the paper's
having just that set of intrinsic properties.13 This, though, is precisely
what Standard Theory denies.
I do not think, therefore, that supervenient causation is a viable
account of the causal powers of extrinsic mental states.14 If what I believe
is a genuine relational property of me, then it might, in some local
way,15 weakly supervene on my intrinsic physical properties, but I do
not see how it can display the kind of dependence on my intrinsic physical
properties that would tempt us to say that it explains whatever the
physical states on which it supervenes explain.
5. A BETTER SOLUTION
We have, however, neglected an important aspect of the causal relations
at work in both monetary-machine and mind-body cases. In the monetary-machine interaction, for instance, there is the fact that the machines on which coins have a causal impact were designed and manufactured to be sensitive to objects having those intrinsic properties (S) on
which monetary value supervenes, and they were made that way precisely because V supervenes on S. Business being what it is, machines that
dispense commodities like cigarettes, food, and drink would not be
designed to yield their contents to objects having S unless objects having
S had V. Remove the fact of supervenience (as a result of widespread
counterfeiting, say) and S objects will soon lose their causal power. They
will no longer produce the effects they now produce. They will lose
their causal power because machines will no longer be built to respond
to objects having S. The causal efficacy of intrinsic S (on machines
not to mention people) depends on the supervenience of extrinsic V on
S. Let V supervene on a different set of properties, T, and T-objects
will, quickly enough, assume the causal powers of S-objects.
13 Kim (1989) makes exactly this point - the point, namely, that strong supervenience - the kind necessary for supervenient causation - occurs only when there is a possibility of reduction of the macroproperties to the micro. That is the basis of his argument that nonreductive materialists should derive no comfort from supervenient causation as a way to give the mental some causal punch in the material world.
14 Despite his suggestion (1991) that supervenient causation be considered a "modified" version of my own theory (of belief and desire), I suspect Kim would agree with this.
15 Kim stresses the need to localize the supervenience (the supervenience base for your thoughts may not be the same as mine) in Kim (1991).
This additional dimension to the causal story does not show that a
vending machine's output is explained by the monetary value of the
coins deposited in it. No, the Cokes come rolling down the chute not
because an object with a certain value is deposited in the machine, but
because an object with a certain size and shape is. Nonetheless, if what
we want to explain is not why a Coke came sliding down the chute
(the shape and size of the coins deposited will explain that), but why
objects having the size and shape of nickels, dimes, and quarters cause
Cokes to come rolling down the chute, why objects of that sort have
effects of this sort, the answer lies, in part at least, in the fact that there
is a reliable (enough) correlation between objects having that size and
shape and their having a certain monetary value. It lies, in other words,
in the fact that there is a supervenience (weak supervenience) of V on S.
The value doesn't explain why the Cokes come out, but it does explain
why coins - objects of that size and shape - cause Cokes to come out.
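In the same toy terms (again merely illustrative, with invented profile names, not an analysis), the two explananda come apart: an object's S-profile triggers this output on this occasion, while the supervenience of V on S explains why machines are designed so that S-objects have that power - and why, under widespread counterfeiting, they would lose it:

    # Structuring vs. triggering: V doesn't cause the Coke to drop, but it
    # explains why a machine was built so that S-shaped objects cause it to.
    def design_acceptance_test(population):
        # population: observed (s_profile, value_cents) pairs in circulation.
        # Designers keep an S-profile only if objects with that profile
        # reliably have value - i.e., only while V supervenes on S.
        profiles = {s for s, _ in population}
        reliable = {s for s in profiles
                    if all(v > 0 for s2, v in population if s2 == s)}
        return lambda s_profile: s_profile in reliable

    honest = [("quarter-S", 25), ("dime-S", 10), ("nickel-S", 5)]
    flooded = honest + [("quarter-S", 0)] * 3  # widespread counterfeit slugs
    print(design_acceptance_test(honest)("quarter-S"))   # True: S-objects trigger
    print(design_acceptance_test(flooded)("quarter-S"))  # False: S loses its power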
When we turn to the mind-body case, this dimension of the causal story is suggestive. If we think of ourselves as "vending machines" whose internal causal structure is designed, shaped, and modified not, as
with vending machines, by engineers, but, in the first instance, by
evolution and, in the second, by learning, then we can say that although
it is the "size" and "shape" (the syntax, as it were) of the internal causes
that make the body move the way it does (just as it is the size and shape
of the coins that releases the Cokes) it is, or may be, the fact that a
certain extrinsic property supervenes on that neurological "size" and
"shape" that explains why internal events having these intrinsic properties have the effect on the body that they have. What explains why a
certain neurological event in the visual cortex of a chicken - an event caused by the shadow of an overhead hawk - causes the chicken to cower and hide is the fact that such neurological events have a significant (to chickens) extrinsic property - the property of normally being caused by predatory hawks. It is, or may be, possession of this extrinsic property - what the internal events indicate about external affairs - that explains why objects having those intrinsic properties cause what they do.
There is but a short step from here to the conclusion that it is the
extrinsic, not the intrinsic, properties of internal events that causally
explain behavior. All that is needed to execute this step is the premise
that behavior is not the bodily movements that internal events cause,
but the causing of these movements by internal events. All that is required, that is, is an appropriate distinction between the behavior that
beliefs explain and the bodily movements that (in part) constitute that
behavior. For if moving your arms and legs (behavior) is not the same
as the movements of the arms and legs, but it is, rather, some internal
event causing the arms and legs to move, then although the intrinsic
properties of our internal "coins" will explain (via activation of muscles)
the movements of our arms and legs, the extrinsic properties, properties
having to do with what external conditions these internal events are
correlated with, will explain why we move them.
This is not the place to amplify this account. I tried to do this in
Dretske (1988). The only point I want to make here is that the account
I gave there of how reasons explain behavior depends on a correlation
between the extrinsic (informational) and the intrinsic (biological) properties of reasons. It depends on weak supervenience of the extrinsic on
the intrinsic. Without that supervenience, reasons cannot get their hand
on the steering wheel. This is not because the extrinsic causally_s explains the movements of the body. No. That would require strong supervenience, and the relational properties underlying mental content do not strongly supervene on neurobiological properties any more than the value of coins strongly supervenes on their size and shape. It is rather because supervenience - weak supervenience - explains why the internal events that cause the body to move cause it to move the way it does. If I am right about behavior, that is exactly what we want beliefs to explain - viz., behavior, what a person does.
So to our series of opening questions, the answers are as follows: Yes,
beliefs stand to human behavior in something like the way money stands
to vending machine behavior. Does this show that what we believe is
causally irrelevant to what we do? No, it does not show this any more
than it shows that the fact that nickels, dimes, and quarters have monetary value is irrelevant to the behavior of vending machines. The fact
that these coins have monetary value, the fact that they are a widely
accepted medium of exchange, explains why the machines (are built to)
dispense their contents when objects of this sort are placed in them. In
this sense, the fact that these coins have monetary value explains why
machines behave the way they do when the coins are in them. The
same is true of belief: the extrinsic properties of these beliefs - what it is we believe - explain why we behave the way we do when these beliefs occur in us.
REFERENCES
Allen, C. 1995. It Isn't What You Think: A New Idea About Intentional Causation.
Nous, 29, pp. 115-126.
Baker, L. 1995. Explaining Attitudes: A Practical Approach to the Mind. Cambridge: Cambridge University Press.
Tomberlin, J. E., ed. 1990. Philosophical Perspectives, 4: Action Theory and Philosophy of Mind. Atascadero, CA: Ridgeview.
Index
contextual relativity, 50
contrast consequences, 38-40, 42-4, 46-7
knowledge (cont.)
information, 72; contrast with belief,
64-5, 72-3, 74, 75-8; perception
and, 140, 141; possibility of mistake
in, 19; pragmatic, social, or communal dimension to, 52-3, 48-63;
senses of, 83-4, 85, 88; two conceptions of, 80-93
Knowledge and the Flow of Information
(Dretske), ix
knowledge claims, 25, 46; how we
know in, 60; skeptical objections to,
38-9
Kripke, S., 248n7
language: and belief, 73-8; of entitlements, commitments, rules, and
norms, 251; social character of, 227-8, 232-3
learning, 68-73, 104, 224, 242, 271; in
animals, 204-5, 218; causal efficacy
of information and, 207; in cracking
sensory codes, 111; indicators and,
204-7; information in, 77; and informational functions, 235; to perceive,
100; right kind of, 223; and social
character of thought, 235-40
Lehrer, Keith, 18n13, 82-3, 84, 85, 86nn5,6,7, 87, 88n11, 89-90, 91-2
Levine, Joseph, 214
Loar, Brian, 231-2
Locke, John, 133, 166
logically conclusive reasons (LCR), 17
long-distance truck driver example, 123-4
lottery paradox, 6, 8-9, 14
Lycan, W., 133
MacDonald, G., 253n13
machines: knowledge in, x, 85-6;
money and, 259, 260-5, 270-1
Malcolm, Norman, 233n9
materialism, 112, 170; beliefs in, 265,
266
Matthen, M., 246n6
meaning(s): detached from cause, 214,
215, 216, 219, 221-2; independent
pains, 172-5
Peacocke, Chris, 247, 248n7
penetrating operator, 30-2, 33, 35, 38
perception, ix, x, xi, 80, 90-100, 140-1, 182, 206, 254; in animals, 144;
without belief, 146; causal analysis of,
108; in cognitivism, 144, 145; complex causal process, 149; of complex
scenes, 151; and conception, 97, 107;
everyday, 164; function of, 251-2;
nonepistemic, 139-40; of objects and
events/of facts, 140; veridical, 168
perceptual awareness, 150; of facts, 116-17
perceptual belief(s), 188; distinct from
perceptual experience, 113, 124, 131,
152-3; norm-laden, 251-2; perceptual experience assimilated to, 146;
truth of, 253
perceptual consciousness, 140, 147
perceptual experiences, 101, 102, 140,
147-9, 150-2, 165-71, 175, 188,
191, 251-2; assimilated to perceptual
belief, 146; awareness of, 166, 167-8,
170-1, 172; conscious, 158, 159, 172;
distinct from perceptual belief, 113,
124, 131, 152-3; phenomenology of,
164-5
perceptual object(s), 106, 107, 108
perceptual systems, 104, 251; biological
functions of, 252
phenomenal character: of bodily sensations, 174; of experiences, 175; of awareness of experience, 167
phenomenal consciousness, 153n44,
155
phenomenal experience, 153; function
of, 189, 190, 191; nature of, 158-9,
160; of or about things, 172
phenomenal properties, 153, 155
phenomenal states, xi, 268-9
phenomenology: of perceptual experience, 164-5
philosophy of mind, x, 64, 140, 145,
265; naturalized, xi-xii
phylogenetic source of natural functions, 217, 218, 219, 220, 224
verificationism, 131
visual experience, 111-12, 135, 158n2,
190-1; awareness of, 166; properties
of, 169
visual perception, 97, 100; problem-solving approaches to, 81n1
waterfall phenomenon, 163-4
White, A., 114-15, 180
windows in house example, 130n19
Woodfield, A., 231n8
Wright, Larry, 217