Minds and Machines (2005) 15: 207–228
DOI 10.1007/s11023-005-2004-2
Springer 2005
Computation and Intentionality: A Recipe for
Epistemic Impasse
I. SHANI
The University of Haifa, Mount Carmel 31905, Haifa, Israel; E-mail: ishani479@hotmail.com
Abstract. Searle’s celebrated Chinese room thought experiment was devised as an attempted
refutation of the view that appropriately programmed digital computers literally are the
possessors of genuine mental states. A standard reply to Searle, known as the ‘‘robot reply’’
(which, I argue, reflects the dominant approach to the problem of content in contemporary
philosophy of mind), consists of the claim that the problem he raises can be solved by supplementing the computational device with some ‘‘appropriate’’ environmental hookups. I
argue that not only does Searle himself cast doubt on the adequacy of this idea by applying to
it a slightly revised version of his original argument, but that the weakness of this encoding-based approach to the problem of intentionality can also be exposed from a somewhat
different angle. Capitalizing on the work of several authors and, in particular, on that of
psychologist Mark Bickhard, I argue that the existence of symbol-world correspondence is not
a property that the cognitive system itself can appreciate, from its own perspective, by
interacting with the symbol and therefore, not a property that can constitute intrinsic content.
The foundational crisis to which Searle alluded is, I conclude, very much alive.
Key words: Bickhard, computational theory of the mind, encoding, intrinsic intentionality,
Searle, the Chinese room, the robot reply
1. Introduction
John Searle’s Chinese room thought experiment (1980) is, without a doubt,
one of the most celebrated, and most pointed and divisive, criticisms of the
computational (symbol-manipulation, information processing) theory of the
mind. From the moment of its inception, Searle’s Gedankenexperiment provoked strong reactions. Many have dismissed the argument, giving it short
‘‘refutations’’ (see for example, Abelson, 1980; Block, 1980), yet its import
continues to reverberate in the milieu of cognitive science, artificial intelligence, and the philosophy of mind.1
As Searle himself made quite clear, the aim of the Chinese room argument
was to show that digital computers do not, and cannot, exhibit intrinsic
intentionality and hence that the intentionality of intelligent beings, humans
included, does not, and cannot, consist of computer-like symbol manipulation.
In what follows, I examine Searle’s argument in some detail. I proceed to
discuss one of the major rebuttals to Searle’s charge – the robot reply (RR).
I argue that RR reflects the most popular approach to the naturalization of
mental content, an approach inspired by the computational theory of the
mind (CTM). According to RR, mental content is constituted by symbol-world correspondence relations – in complete independence from, yet as a supplement
to, the system’s cognitive architecture. I argue that not only
does Searle himself cast doubt on the adequacy of this idea by applying to it a
slightly revised version of his original argument, but that the weakness of this
encoding-based approach to the problem of content can also be exposed
from a somewhat different angle.
Capitalizing on the work of several authors, and, in particular, on that of
psychologist Mark Bickhard, I argue that the existence of symbol-world
correspondence is not a property that the cognitive system itself can appreciate, from its own perspective, by interacting with the symbol and therefore
not a property that can constitute intrinsic content. Thus, attempts to deal
with the problem of content within the dominant information-processing
approach fail to live up to the expectation (shared by the majority of its own
practitioners) of articulating a notion of mental content that can make epistemic sense from the first-person perspective of a psychological agent. The
result, I submit, is a failure to naturalize content.
2. Part One: Intrinsic Intentionality and the Chinese Room
2.1. SEARLE’S ARGUMENT AS A QUA ARGUMENT
What is the Chinese room argument an argument for? The Chinese room
thought experiment is devised as an attempted refutation of the view that
appropriately programmed digital computers literally are the possessors of
genuine mental states. As such, the argument is a direct assault on the CTM,
often described as ‘‘cognitivism’’ (Harnad, 1990; Searle, 1992) or ‘‘the cognitive science view’’ (Fodor, 1975; Sayre, 1987). According to CTM, mental
processes are computational processes performed on internal symbol strings;
intelligent beings are, therefore, digital computers of the appropriate sort,
and the challenge, from the point of view of theoretical cognitive research, is
to identify, understand and (eventually) be able to construct such intelligent
formal automata. The Chinese room argument aims to show that computer-like information processing is not only insufficient for the having of genuine
mental states but that, in all likelihood, it plays no constitutive role in the
making of intentional agency (mental life).
If appropriately programmed computers really are minds, Searle reminds
us, then they ‘‘can be literally said to understand and have other cognitive
states’’ (1980, 282). Correspondingly, Searle constructs his argument as a
counterexample to the claim that (a) appropriately programmed digital
computers literally are the owners of intrinsic intentional states and (b) the
modus operandi of such computational machines explains human cognition.
To test the claim that all minds are computational machines in CTM’s
sense, Searle suggests, we need to ask ourselves what it would be like if our
own minds operated in a similar manner to such machines. If computer-like
symbol manipulation constitutes understanding and other intrinsically
intentional states, then we, too, ought to understand (and to have other
cognitive states) in virtue of performing such computations. The Chinese
room argument is devised as a modus tollens to this conditional. It is meant
to cast serious doubt on the idea that human cognizers understand (and
possess other intentional states) in virtue of performing computational procedures on symbol structures, and by doing so the aim is to undermine
CTM’s aspirations to provide the key to the understanding of intelligent
behavior.
In the scenario Searle imagines, he is situated in a room, manipulating
Chinese characters, which he does not understand, according to written
instructions in English, which he does understand. As it turns out, some of
the input in Chinese that Searle receives is interpreted by outsiders as questions, and the Chinese characters Searle issues as an output are interpreted as
answers to those questions. Occasionally, Searle is also asked questions in
English, and he issues answers as he sees fit.2 Now, Searle maintains that in
the Chinese case he himself operates like a computational machine: he
manipulates symbols according to rules, and the output he issues is interpreted by outsiders as intelligent answers to specific questions. Yet all the
while Searle understands nothing. In the English case, on the other hand,
Searle understands perfectly well, but nothing in the example suggests that
his understanding has anything to do with computer-like information processing.
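The scenario can be caricatured in a few lines of code. The sketch below is my own illustrative construction, not Searle’s formulation: the rule book and its entries are invented placeholders. Its only point is that applying such rules requires matching and copying symbol shapes, and nothing more.

```python
# A toy caricature of the Chinese room: the operator applies purely
# formal rules -- here, a lookup table keyed on the shapes of the input
# strings -- with no access to what any of the symbols mean.
RULE_BOOK = {
    # Hypothetical rules: "if you receive this string of shapes, emit that one."
    "你好吗": "我很好",
    "你叫什么名字": "我叫王小明",
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output the rule book dictates, or a default token.

    Nothing in this procedure consults, or depends on, the meanings of
    the symbols; it matches shapes and copies shapes.
    """
    return RULE_BOOK.get(input_symbols, "对不起")

print(chinese_room("你好吗"))  # prints 我很好
```

Outsiders reading the exchange may well credit the system with understanding Chinese; by construction, there is no understanding to be had.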
From his lack of understanding of the Chinese symbols he manipulates,
Searle concludes that a computing machine which operates along similar
lines of symbol manipulation does not understand either, and does not literally provide answers to the questions it is being asked. Since (leaving aside
practical considerations such as the amount of time required for accomplishing complex computations) Searle is capable of performing all the
computations the machine is capable of performing, if what the machine does
were to generate understanding, then he, too, would have understood the
questions he is being asked. Yet, as the argument shows, no such understanding ensues.
The intended upshot of the Chinese room example, then, is to argue that
digital computers lack understanding in any literal sense of the word. Since,
qua computational machine, Searle understands nothing, and since, qua
intentional agent, his understanding seems to have nothing to do with computerized symbol manipulation, the argument suggests that computer-like
information-processing adds nothing to the amount of intentional content in
the room and, by way of extrapolation, that it adds nothing to the sum total of
intrinsic intentionality in nature.
Moreover, a significant feature of the argument is that while Searle himself
does not understand Chinese, the Chinese characters he issues as output in
reaction to the questions given to him are treated by people outside the room
as meaningful verbal responses. This fact marks a distinction that is crucial
for Searle, the distinction between the first-person perspective and the third-person perspective. Genuine intentional content, he maintains, must be
manifested as a first-person property – it must be intrinsic to the system itself
and cannot be derived from the intentionality of observers, users, or programmers. Unfortunately for CTM, Searle argues, ‘‘such intentionality as
computers appear to have is solely in the minds of those who program them
and those who use them, those who send in the input and who interpret the
output’’ (Ibid., 301).
2.2. NATURALIZED CONTENT AND INTRINSIC INTENTIONALITY
Interestingly, while sympathizers with CTM are almost unanimous in their
disagreement with Searle’s conclusion, the majority amongst them agree with
him on two of his most basic assumptions: first, that the distinction between
intrinsic and derived intentionality is theoretically sound and important;
second, that the success of a theory of psychological content hinges on its
ability to explain intentionality as a system-intrinsic phenomenon (Block,
1990; Dretske, 1988; Fodor [in Dennett, 1987, 288]; Millikan, 1989; see,
however, Dennett, 1987).
Indeed, one of the attractive features of CTM was that it carried a promise
for the reconciliation of naturalism with realism about intentional phenomena. On the premise of intentional realism, however, the distinction between
intrinsic and derived intentionality is mandatory. Realists believe that
intentional states are not just in the eye of the beholder. If representation is
real, if there are positive facts of the matter regarding the possession of
intentional states, there must be some systems that possess such states, and
the possession of such states in such systems cannot be explained by reference
to the intentional states of still other systems, on pain of regress.3 It transpires, then, that a failure to respect the distinction between derived intentionality and intrinsic intentionality, and to explain the possibility of the
latter, should count heavily against a theory – to the point of constituting a
reductio of the theory (again, at least insofar as intentional realism is
accepted as an axiom...).
It is precisely because it challenges the appropriateness of CTM as a
framework on which to found an explanation of intentionality as a system-
intrinsic phenomenon that Searle’s argument has provoked such strong
reactions. If Searle is right, then ‘‘the basic idea of modern cognitive theory’’
(Fodor, 1981) suffers from a serious theoretical incoherence. Despite
being considered by many as the key to unraveling the mystery of cognitive
phenomena, it would seem that, on the premise that minds are digital computers, it becomes impossible to explain the real thing about aboutness, the
intrinsic intentional contents of the cognitive states of intelligent beings. Put
differently, if Searle is right, CTM fails to sustain a notion of mental content
that can make explanatory sense as a first-person property of psychological
agents, without presupposing the epistemic contribution of observers, programmers, users, interpreters, or designers.
2.3. THE CHINESE ROOM AGAIN: THE ROBOT REPLY
Many objections have been raised against the conclusions Searle draws from
his thought experiment, and Searle himself addresses some of them in his
original paper as well as in his replies to peer commentaries. In the context of
the present discussion, I would like to concentrate on one of these objections.
One of the major lines of criticism addressed against Searle is an argument
that Searle calls ‘‘the robot reply’’. The main point of RR is this: while Searle
may be right that a stationary computational machine of the sort targeted in
his argument lacks genuine understanding, the problem can be solved if we
allow the computer to be placed inside a robot in such a way that
(i) The computer receives information input from peripheral ‘‘sensory’’
processes within the robot, which in turn encode environmental
events in the form of transduced energy patterns.
(ii) The computer issues information output that controls the robot’s
‘‘motor’’ apparatus.
Such a computer, the idea goes, would be hooked up to the environment
and would be in charge of controlling the robot’s seemingly intelligent
behavior; this may be all that is required to sustain intrinsic intentionality.
Two points need to be made with regard to RR. First, as Searle himself
emphasizes, ‘‘it tacitly concedes that cognition is not solely a matter of formal
symbol manipulation’’ (1980, 293). Second, while conceding that cognition
and computation are not strictly identical, RR nevertheless persists in presupposing the adequacy of CTM as a theory of cognitive architecture. That is
to say, according to RR there is nothing wrong with the idea that mental
processes are formal, symbol manipulation, processes; it is just that such
processes must be supplemented with content-generating environmental
hookups (see below).
In reaction to RR, Searle claims that the reply does nothing to alleviate
the problem. The addition of such ‘‘sensory’’ and ‘‘motor’’ capacities, he
argues, ‘‘adds nothing by way of understanding, in particular, or intentionality, in general, to Schank’s original program’’ (Ibid.). In order to show that
this is, indeed, the case, Searle offers an updated version of his original
Chinese room argument. Suppose, he says, that, as in the original scenario, I
am situated in a room wherein I perform blind computations on Chinese
characters. Only this time, unbeknownst to me, the room is placed inside a
robot in such a way that some of the input I receive comes from a television
camera in the robot’s head that ‘‘perceives’’ external events, and the output I
issue contains instructions to the robot’s motor apparatus. Still, Searle
maintains, he himself – the robot’s cognitive processor as it were – has no
grasp whatsoever of the meaning of the ‘‘information’’ he receives and of the
motor ‘‘instructions’’ he issues, while the rest of the robot’s story consists of
nothing but non-intelligent receptors and effectors (mechanical and electric
processes of energy transduction, limb movement etc.). There is nothing here
to sustain intrinsic intentionality, let alone a sophisticated human-like
intelligence.
In reply to Searle’s reply, Fodor has argued that
Searle’s treatment of the robot reply is quite unconvincing. Given that
there are the right kinds of causal linkages between the symbols that the
device manipulates and things in the world – including the afferent and
efferent transducers of the device – it is quite unclear that intuition rejects
ascribing propositional attitudes to it. All that Searle’s example shows is
that the kind of causal linkage he imagines – one that is in effect mediated
by a man sitting in the head of a robot – is, unsurprisingly, not the right
kind (1980, 520; for a similar argument see Hayes et al., 1992).
From the point of view of our present concern, Fodor’s reply is of particular
interest since it summarizes in a nutshell the most popular approach to the
problem of content. Indeed, understood as it is understood within the
dominant cognitivist tradition, the entire project of naturalizing mental
content can be viewed as a variant of RR. It consists of an attempt to
supplement the basic cognitive architecture of CTM with causal or informational hookups, or with teleological evolutionary history, premised on the
assumption that such addition is necessary for a complete account of mental
content but that, nevertheless, CTM is an adequate theory of mental processing. Opinions diverge widely regarding the merits of specific proposals, but none of the leading theories of mental content currently in vogue
– information semantics, teleosemantics, or conceptual role semantics –
challenges the validity of CTM as a theory of cognitive processing. Thus, the
basic supposition is that the symbolic approach to cognition provides a more
or less adequate account of information (or ‘cognitive’) processing, and that
the project of naturalizing content consists in articulating a supplementary
story, couched in naturalistic terms, of the conditions under which the
symbols that partake in the processing acquire semantic values.
On this view, then, cognition simply is computation plus symbol-world
linkage, from which it follows that computation plus symbol-world linkage
is sufficient for intrinsic intentionality. As a result, it is believed that there
must be some symbol-world linkages that, when joined with the appropriate
computing device, simply are content-constitutive. This is, precisely, the issue
between Searle and the dominant cognitivist approach to the problem of
intentionality of which Fodor is such an avid representative.
In response to Fodor, Searle argues that to rule out conscious implementation as an illegitimate medium of causal mediation would be arbitrary
and unjustified. Precisely because symbol systems are formal, such implementation details need not matter (see Section 3.1 below); and, as Searle
reminds us, some of Turing’s own examples of Turing machines involved
conscious agents going through the steps of the program. Moreover, even if
we grant Fodor’s point that CTM need not imply that every instantiation of a
Turing machine is constitutive of intrinsic intentionality (see 1980, 525), we
would be hard-pressed defending the claim that the class of implementations
that do constitute such intentionality necessarily does not include conscious
implementations. In order to demonstrate the arbitrariness of such exclusion,
Searle introduces another thought experiment. Suppose, he says,
that we found that the quickest way and most efficient way to make
computers work was by utilizing tiny creatures imported from Mars who
were the size of a few molecules and who implemented the program by
consciously going through all the steps in the symbol manipulation. . . We
can suppose that they find it enormous fun to do this, so they enjoy the
work and are willing to work for no pay, and that they compute faster
than any known electronic circuits. They are better, cheaper, and easier
to program than silicon chips. (1980, 526).
‘‘Now’’, Searle concludes, ‘‘it would not be even remotely plausible to say
that in this eventuality they were not actually going through the steps of the
program.’’ (Ibid.).
Plausibility notwithstanding, the point of this little thought experiment is
to show that there is no well-motivated reason to exclude conscious rule
following from being a potential machine-table implementation of intelligent
agency (if such implementation is to be considered possible at all). Nothing in
the logic of the mind-as-computer hypothesis suggests a thesis so strong as
the thesis that the implementational infrastructure must, of necessity, be non-conscious, precisely because nothing in this logic hinges heavily on any
implementation details.
More importantly, Searle insists that the fact that he can iterate his original thought experiment under the new (robot) circumstances shows that no
amount of causal hookups can help generate intentional content as long as
the thinking-qua-computation hypothesis is retained. The robot reply consists of adding externalist causal connections to an otherwise solipsistic
process of symbol manipulation, yet Searle maintains that the fact that a
modified version of his argument is applicable to the new circumstances
shows that ‘‘no matter what outside causal impacts there are on the formal
tokens, these are not by themselves sufficient to give the tokens any intentional content.’’ (1980, 522–523). There is, of course, a causal story that
accounts for the having of intrinsically intentional states (or else, naturalism
is plain false), but Searle denies that such a story has anything in particular to
do with computer-like symbol manipulation (for more on that, see Section 3
below). In short, Searle believes, and he believes that his thought experiment
shows, that as long as the computational picture is presupposed, there is no
intrinsic intentionality, and that no supplementation in the form of causal
transductions can compensate for this basic fact.
3. Part Two: Cognitivism, Content and the Impasse
3.1. THE BASIC IDEA OF MODERN COGNITIVE THEORY: SEMANTIC ENGINES AND THE PROBLEM OF CONTENT
In the remaining parts of this paper, I would like to argue in support of
Searle’s skeptical conclusion, and against the adequacy of RR. A natural
thing to do is to go back to the basic notion underlying CTM, to what many
refer to as ‘‘the basic idea of cognitive science’’ (Haugeland, 1981, 31),
namely, the idea that intelligent beings are semantic engines (Dennett, 1978),
to wit, ‘‘automatic formal systems with interpretations under which they
consistently make sense’’ (Haugeland, 1981, 31). To understand this idea, we
need to consider briefly the notions of a formal system and of an automatic
formal system. Formal systems, whether axiomatic logico-deductive systems
or rule-based games such as chess, are systems in which physical tokens are
manipulated according to explicit rules. A crucial feature of formal systems is
the fact that they are digital, namely (as Haugeland puts it), self-contained,
perfectly definite, and finitely checkable.4
• Self-containment: The system is immune to external influence in the sense
that only its own tokens, positions, and rule-governed moves make any
difference to it, and that they do so only insofar as they exhibit those
aspects that are relevant for legal (formally defined) state-transition of the
system.
• Perfect definiteness: Barring outright mistakes or breakdowns, there are
no ambiguities, approximations, or ‘‘judgement calls’’ in determining
what the position is or whether a certain move is legal.
• Finite checkability (effective procedure): For each state and each
candidate state transition, only a finite number of steps have to be
checked to see whether this state-transition could be legal in that state.
One of the implications of the fact that formal systems are digital in the
above sense is that they are multiply realizable and intersubstitutable. Multiple realizability follows from the fact that what determines whether or not
two tokens are tokens of the same (formal) type is whether or not they are
treated equally by the rules for token (symbol) manipulation; in short, formally equivalent tokens are intersubstitutable, regardless of their extraneous,
non-formal, properties. Formal equivalence, however, applies not only to
tokens of formal systems, but to formal systems as wholes. Formal systems
that are isomorphic, and can thus be translated into one another, are equivalent and, therefore, intersubstitutable (i.e., barring practical considerations,
the substitution of a formal system A for an equivalent system B makes no
difference).
The intersubstitutability of formal systems is of the utmost importance to
CTM, since it is in virtue of this property that one formal system can model
(or ‘‘mimic’’) another and, ultimately, that a universal Turing machine can
model every formal system. This brings us to the notion of an automatic
formal system – a computational machine or digital computer.
An automatic formal system is a physical device that models a formal
system, that is, which automatically manipulates its own physical tokens in
structural isomorphism to that formal system.
Can automatic formal systems be endowed with intrinsic intentionality?
An obvious difficulty comes to mind. In standard formal systems (whether
automatic or not), the manipulation of tokens is purely syntactic; it is
based solely on the formal properties of symbol tokens and is insensitive
to their semantic properties. It transpires, then, that to the extent that
automatic formal systems, computational machines, are cognitive ‘‘engines’’ at all, they are syntactic, not semantic ‘‘engines’’ (cf. Fodor, 1981).
How, then, can such systems operate as semantic engines? The standard
reply is that even though the manipulation of tokens in formal systems is
purely syntactic, it is nevertheless such that it may respect and preserve
the semantic properties of these tokens qua interpreted symbols, and that
this is all that is needed to ensure the ‘‘two lives’’ – syntactic and semantic
– of such tokens.
As we know from axiomatic formal deductive systems, tokens with
semantic values can be the objects of purely formal manipulations that
nevertheless respect their semantic identity. In other words, if the formal
manipulations are validly pursued, the corresponding semantic relations
follow as a matter of course, or as Haugeland puts it: ‘‘if you take care of the
syntax, the semantics takes care of itself’’ (1981, 23). Semantic engines, then,
are syntactic engines plus interpretation – syntactically driven machines that
‘‘can handle meaning’’, as it were.
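The idea can be made concrete with a deliberately simple sketch of my own (not Haugeland’s or Fodor’s): a program that derives new formula-strings from old ones by pattern matching alone. The tokens ‘‘P’’, ‘‘Q’’, and the ‘‘>’’ connective are arbitrary placeholders; the procedure inspects only the shapes of the strings, yet, under the familiar reading of ‘‘>’’ as material implication, every string it derives from true premises is true.

```python
# A minimal "syntactic engine": a string-rewriting version of modus
# ponens that closes a set of formula-strings under one purely formal
# rule. The rule looks only at the shapes of the strings.
def derive(premises: set[str]) -> set[str]:
    """From 'X' and 'X>Y' (read: X implies Y), add 'Y'; repeat to closure."""
    theorems = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(theorems):
            if ">" in s:
                # Split on the first '>' into antecedent and consequent.
                antecedent, consequent = s.split(">", 1)
                if antecedent in theorems and consequent not in theorems:
                    theorems.add(consequent)
                    changed = True
    return theorems

# A purely syntactic derivation:
print(derive({"P", "P>Q", "Q>R"}))  # the closure contains "Q" and "R"
```

Nothing in `derive` mentions truth or meaning; that its outputs ‘‘make sense’’ under the standard reading of ‘‘>’’ is precisely the interpreter-supplied surplus at issue in what follows.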
Yet, at this juncture, we face another problem. CTM accounts for
semantic content by recourse to the notion of ‘interpretation’, and ‘interpretation’, at least in the canonical use of the term, is a user-relative construct – a paradigmatic example of derived intentionality. Interpretations,
whether of maps, texts, logical symbols or data structures, require interpreters. Since, ultimately, at the end of the line of this hermeneutic process
there must be interpreters already equipped with intelligence, the problem
of intrinsic (or original) intentionality is begged or regressed.5 In other
words, interpretation-based semantics creates an unsolvable explanatory
backlog for the simple fact that interpretation is a process of content
transmission, not of content generation. In standard home computers, for
example, it is we as interpreters, programmers, and users who transform the
systems’ syntactically driven operations into semantically meaningful
operations. The system itself does not really ‘‘handle meaning’’ by itself and
for itself.
Advocates of CTM are usually aware of this fact, yet the standard
response to the problem is that the problem of intrinsic intentionality may
be solved if something very much like an interpretation can be secured
naturally – without the mediation of intelligent agents. Roughly, the idea
is that the problem of intrinsic intentionality can be solved if it can be
shown that, by dint of natural processes alone, formal automata can be
constructed that ‘‘mimic’’ artificially constructed semantic engines in the
sense that
(a) They are syntactically driven computational machines.
(b) The symbol tokens on which the computations are performed are
mapped onto the environment (and perhaps also onto each other) in a
way that parallels the mapping relations that result from the
assignment of semantic values in standard interpretations (thereby
bestowing semantic values onto the symbol tokens). And,
(c) There is a correlation between the syntactically driven operations the
system undergoes and rational relations among the meanings of the
symbolic structures on which computations are performed (Fodor,
1981; Block, 1993).
In short, cognitivist theories of content deal with the problem of the
observer-dependency of interpretation by assuming that the problem can be
solved if we eliminate the interpreters from the picture and let nature take
care of the interpretation.
I would like to conclude by arguing that this strategy is ill fated, and that it
is ill equipped to deal with the problem of intrinsic intentionality. While the
import of the argument to be presented converges remarkably with Searle’s
argument, it illuminates the problem from a somewhat different angle,
thereby providing additional support to Searle’s provocative conclusion.
3.2. ON THE INCOHERENCE OF ENCODING-BASED REPRESENTATION
A crucial step in the ‘‘interpretation without interpreters’’ solution to the
problem of intrinsic intentionality is the assumption that (some) naturally
constructed mapping relations are content-constitutive (see clause (b) above).
Namely, the assumption is that it is in virtue of ‘‘linkages between symbols
that the device manipulates and things in the world’’ (as Fodor, (1980, 520)
puts it) that mental states become endowed with content. The underlying
idea, then, is that what accounts for the intrinsic intentionality of genuine
intelligent beings is a relation of correspondence, or encoding, between
internal symbol structures and external conditions.
Like interpretation, however, ‘encoding’ – at least in the canonical use of
the term – is an exemplar of derived intentionality. Paradigmatically (for
example, in communication theory), encodings are symbols the meaning of
which is made by convention to correspond to (stand for) other symbols
whose meaning is already fixed. Thus, Morse codes, Braille characters, or
binary computer digits are all systems of symbolization into which the previously established meanings of natural language (e.g., English) symbol
structures can be transposed – often with great practical yield. For example,
the Morse code ‘‘...’’ is a stand-in for the character ‘‘S’’, and it is useful
because, unlike characters and numerals, dots and dashes can be sent over
telegraph wires.6 But the dots and dashes are only meaningful due to the fact
that they borrow their meaning from the already fixed meanings of the
characters and numerals they encode.
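The point is easily illustrated. In the sketch below (a small, hand-picked fragment of International Morse Code), encoding and decoding are pure transpositions: the dot-dash patterns inherit whatever meaning the letters already have, and contribute none of their own.

```python
# Conventional encoding as content *transmission*, not generation: the
# table maps characters whose meanings are already fixed onto dot-dash
# patterns. Decoding just hands back the borrowed symbols.
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}
DECODE = {code: char for char, code in MORSE.items()}

def encode(text: str) -> str:
    # One dot-dash group per character, separated by spaces.
    return " ".join(MORSE[c] for c in text)

def decode(signal: str) -> str:
    return "".join(DECODE[code] for code in signal.split(" "))

print(encode("SOS"))          # ... --- ...
print(decode(encode("SOS")))  # SOS
```

A round trip through `encode` and `decode` creates no content and destroys none; the table merely presupposes symbols whose meanings are already fixed elsewhere.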
Such use of the term ‘encoding’ is non-problematic as far as it goes, but it
is also irrelevant for the purpose of explicating intrinsic content. Conventional encodings do not generate new representational contents: they are
merely transformations of already existing contents into new representational
forms (and, often, new physical media), and they presuppose interpreters. By
contrast, the processes that constitute the intentionality of intrinsic mental
states must be content generating. On pain of regress they cannot rely on
prior meanings, and cannot presuppose the intervention, or authority, of
intelligent interpreters.
Nevertheless, the accepted wisdom in cognitivist theories of content is that
something much akin to standard encodings is precisely what gives intrinsic
representational structures their meaning. Recall that what makes standard
encodings work is the fact that they are made to correspond, in a reliably
controlled manner, to meaning-laden symbols. In a strikingly similar fashion,
it is believed that the problem of explaining intrinsic intentionality can be
solved if it can be shown that basic-level (original) representational structures
correspond in a definite, unaided, natural way (the ‘‘right’’ way) to specific
external conditions. The implicit assumption, then, is that what really matters
is the existence of some sort of correspondence relations, whereby the
external correspondents are ‘‘specified’’. This, the specification, is what gives
representational structures their content. Since such specification does not
require that the specified objects are themselves meaning-laden symbols, and
since it can obtain by dint of natural processes alone, that is, without presupposing the intervention of interpreters (designers, users), it is, the idea
goes, naturalistically adequate. From this perspective, the disanalogies with
standard encoding, the fact that the ‘‘encoded’’ external conditions are not
themselves (at any rate, need not be) meaning-laden symbols, and the fact
that the interpreters are eliminated, may be seen as unproblematic, advantageous modifications. After all, these modifications are necessary in order to
meet naturalistic standards of explanation, and, in any case, they are inessential insofar as the epistemic function of specification, the ‘‘true essence’’ of
encoding, is concerned.
Thus, representation is reconstructed as a natural ‘‘encoding’’ process, a
naturally designed correspondence, as it were. Such correspondence retains
the structure of standard encodings in the sense that they both presuppose
covariance between the encoding and the encoded elements. At the same time,
the interpreters are eliminated from the scene, leaving only natural processes
to bear on the covariance. In short, the assumption is that nature can ‘‘take
care of the situation’’ by creating correspondence relations semi-analogous to
artificial encodings, and that the real challenge is to discover how this is
done, namely, to find the ‘‘right’’ kind of correspondence (recall the citation
from Fodor in Section 2.3).
As Mark Bickhard has argued extensively, however (e.g., 1980, 1993,
1996, 2000, 2003; Bickhard and Terveen, 1995; see also Edelman and Tononi,
2000; Shannon, 1993 for some converging arguments), the idea that such
relations of natural correspondence are constitutive of intrinsic intentionality
leads to an epistemic, or theoretical, impasse. The problem, Bickhard argues,
is that the information that such correspondence relations are taken to
generate is senseless once appeals to the epistemic authority of interpreters are
banned and the first-person perspective of the psychological agent itself is
considered on its own epistemic terms. Mental states, he says, do not wear
their denotational values on their sleeves, and the fact that they correspond
to external sources does not explain how knowledge of the correspondence,
knowledge of the encoded elements, is attained at the ‘‘destination’’ (i.e., by
the agent). Consequently, mental states cannot be intrinsically informative,
cannot serve their owners as representations, in virtue of the fact that they
‘‘encode’’ external conditions.7
Bickhard’s argument can be presented in two versions, each of which
illustrates the same problem, but with different emphases.
Epistemic vacuity (from the first-person perspective, symbol-object correspondences are epistemically vacuous): Conventional correspondences are
informative for those who are familiar with the convention since the latter
have access to both sides of the correspondence (and to the rules that connect
them together). By contrast, when a mental state R is said to represent an
external condition F in virtue of the fact that it corresponds to it in some
naturally constructed fashion, the system – call it S – that is supposed to
make use of R has no equally available access to both sides of the correspondence. Rather, all S has to work with are its own mental states. S can
only access F via R (or some other mental states); it cannot observe the
correspondence from both ends and use the fact that it obtains as an independent source of knowledge. In the absence of such knowledge, however,
the situation is analogous to having access to the symbol string ‘‘-.’’
without knowing that it is the Morse code correspondent of ‘‘N’’: no such
knowledge can be miraculously gained merely in virtue of the fact that the
correspondence obtains. To put it differently, if correspondence relations
were content-constitutive, then the mere fact that the correspondence obtains
ought to have been sufficient for making R a representation of F; but, as the
Morse code example suggests, from the first-person perspective, this fact,
considered in itself, is epistemically vacuous – it does nothing by way of
informing S what’s on the other end of the presumed correspondence.8
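The epistemic-vacuity point can be caricatured in code (a deliberately crude sketch of my own; the class and names are hypothetical): give a "system" access only to its symbol, and the fact that a correspondence table exists elsewhere does it no good, because the table lives outside the system.

```python
# Crude illustration of epistemic vacuity: the correspondence between
# "-." and "N" obtains (in the codebook), but a system that holds only
# the symbol cannot exploit it -- the codebook is external to the system.
CODEBOOK = {"-.": "N"}  # the correspondence, available only to an outside observer

class System:
    """Has access to its own internal states (symbols) and nothing else."""

    def __init__(self, symbol: str):
        self.symbol = symbol  # all S has to work with

    def what_do_i_stand_for(self):
        # S cannot consult CODEBOOK "from the inside"; the mere fact
        # that the correspondence obtains yields no knowledge here.
        return None  # epistemically vacuous

s = System("-.")
assert CODEBOOK[s.symbol] == "N"        # true from the observer's perspective
assert s.what_do_i_stand_for() is None  # but invisible from S's perspective
```

The contrast between the two assertions is the whole point: the correspondence is a fact about the observer's vantage, not a resource available to S.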
If the fact that R corresponds to F is epistemically vacuous, however, if it
yields no intrinsically available information to the effect that it is F that R
stands for, then it cannot be taken as constitutive of making R a representation of F. For, if R is, indeed, a representation of F, then, whatever else it
might be, it must be capable of being used (used by S itself, that is) as a
representation of that sort. But what Bickhard’s argument shows, I think, is
that the one thing that the postulation of a symbol-world correspondence
relation does not explain is how any symbol could function, intrinsically, as a
representation in virtue of the fact that it stands in such a correspondence
relation to some external item. In short, the fact that inner structures may
encode external conditions does not, in itself, explain how such structures can
come to serve their owners as representations and, consequently, it offers no
satisfactory solution to the problem of intrinsic intentionality.9
Extrinsicality (the property of ‘corresponding to...’ is an extrinsic property
of mental states and, as such, it is, again, epistemically vacuous): The fact
that R encodes F (that it correlates with, or is caused by, F) is an extrinsic
fact about R in the sense that it makes no difference to the internal causal
structure of the representation. That is, in principle, R could have been
exactly the same even if instead of F it corresponded to F′, or even to nothing
at all (for more detail, see Bickhard, 2003). If R’s content makes no difference
to its internal causal structure, however, then the fact that R possesses this
particular content has no unique impact on the manner in which it interacts
with other mental structures (R′, R′′, ...) in S’s cognitive economy. But then
the difference between R → F and, say, R → F′ (where ‘→’ stands for
‘corresponds to’) is not a difference that can be detected elsewhere in the
system, and hence not a difference that makes a difference to the ongoing
flow of the system’s cognitive activity.10 In other words, from the first-person
perspective of the cognitive agent itself, the encoded value of R is simply nontransparent. S cannot learn that R encodes F, nor can it use R as a representation of F, merely by virtue of the fact that the encoding relation obtains
(i.e., that R carries information – in the technical sense used in communication theory – about F, or that it is caused by F). In short, since encoding is
an extrinsic (or, external) relation, the fact that R encodes F does not explain
how, relying on nothing but its own cognitive resources, the system itself, S,
can use R as a representation of F.
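The extrinsicality point lends itself to a minimal sketch as well (again my own illustration, with hypothetical names): the system's processing operates on R's internal, formal properties alone, so swapping what R "corresponds to" leaves the flow of cognitive activity untouched.

```python
# Extrinsicality sketch: R's correspondence to F (vs. F') makes no
# difference to R's internal causal role, so the system's processing is
# identical under either assignment.
def cognitive_step(r: int, other_states: list[int]) -> list[int]:
    """R interacts with other states purely via its internal (formal)
    properties; what R 'corresponds to' never enters the computation."""
    return [r + s for s in other_states]

R = 7                              # one and the same internal state...
assignment_1 = {"R": "F"}          # ...taken (externally) to encode F
assignment_2 = {"R": "F-prime"}    # ...or taken to encode F'

# The downstream activity is the same either way: note that the
# assignments never appear as arguments to cognitive_step at all.
assert cognitive_step(R, [1, 2, 3]) == [8, 9, 10]
```

That the assignment dictionaries are dead code is the moral: a difference confined to them is not a difference that makes a difference inside the system.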
The upshot of both versions, then, is that mental structures cannot
function as representations, cannot be intrinsically informative, in virtue of
the fact that they encode whatever it is they encode. True, the elimination of
interpreters from the scene renders the encoding process naturalistically
legitimate, but, as Bickhard argues, this is being done at the price of epistemic
coherence. The only coherent sense in which encodings can be said to represent is observer-dependent, and eliminating the observer does not yield a
coherent account of intrinsic intentionality – rather, it yields the total elimination of all forms of content, derived or intrinsic.
4. Recapitulations, a Conclusion, and Some Afterthoughts
Bickhard’s argument against encoding-based representation converges on,
and provides additional support to, Searle’s contention that RR offers no
adequate response to the original charge of the Chinese room argument,
namely, to the claim that CTM yields a notion of mental content that is
inherently observer-dependent.
As mentioned before, the Chinese room argument was devised as a refutation of the view that appropriately programmed digital computers literally
are the possessors of genuine mental states. The robot reply concedes this
point but, at the same time, attempts to sustain a close relative of the position
targeted in Searle’s original argument. It consists of the idea that intrinsic
intentionality is, or at any rate can be, constituted by appropriately programmed digital computers whose internal symbol structures appropriately
encode external conditions.
As we have seen, Searle’s main objection to RR is that it offers no real
improvement over the problematic situation illustrated by the original Chinese room argument. He argues that the same old argument can be adapted,
with little modification and with equal force, to the new, robot, circumstances; and the moral, he reasons, is that computation plus symbol-world
linkages is no more intrinsically intentional than computation without such
linkages. All the reply does is to add blind (non-intelligent) causal linkages to
blind computations, and nothing in the addition of such linkages can compensate for the epistemic (cognitive, intentional) vacuity of the original situation.
But one person’s modus ponens is another’s modus tollens. Challenging
Searle’s intuition, Fodor takes the formula ‘‘cognition equals computation plus causal linkage’’ to remain intact and concludes, instead, that
there must be something wrong with Searle’s example. No machine table
instantiation constitutive of intrinsic intentionality, he argues, employs a little
man to bring about its state transitions.
Yet, as Searle points out (see Section 2.3 above), it is unclear just on what
ground conscious implementation ought to be universally excluded. In an
attempt to dissolve Searle’s argument, Fodor claims that the machine state-transitions at the instantiation level must consist of effective (immediate,
proximal) causes only (1980, 525), yet he offers no argument as to why this
must be the case.
While intuitions may diverge when it comes to the dispute between Searle
and Fodor and the implications it bears for the status of RR, Bickhard’s
argument against encoding-based representation illuminates Searle’s position
from a somewhat different angle and provides powerful reasons to side with
Searle.
What Bickhard’s argument shows, if correct, is that encoding is not, and
cannot be, constitutive of intrinsic content. In other words, it is not in virtue
of encoding that an otherwise non-intentional system can become intentional.
Since RR concedes that computation alone does not generate cognition
(intrinsic intentionality), and since it identifies the missing piece in the puzzle
with an ‘‘appropriate’’ relation of encoding (symbol-world linkage), Bickhard’s argument is, in effect, an indirect assault on the adequacy of the reply.
Recall that by applying his thought experiment to the circumstances
decreed by RR, and by illustrating his own total lack of comprehension of
the meaning of the ‘‘information’’ he gains under these new circumstances,
Searle attempts to show the futility of the reply. By arguing against the
possibility of content-generating encodings, Bickhard’s argument converges
onto the same conclusion.
The imports of the arguments are remarkably similar. In fact, in both cases,
the conclusion is identical: Symbol-world linkages do not generate content
and cannot fill the theoretical void in the explanation of intentionality as a
system-intrinsic phenomenon. Even the language is sometimes similar. Consider the following passage from Searle’s reply to Fodor.
[l]et the egg foo yung symbol be causally connected to egg foo yung in
any way you like, that connection by itself will never enable the agent to
interpret the symbol as meaning egg foo yung. To do that he would have
to have, for example, some awareness of the causal relation between the
symbol and the referent; but now we are no longer explaining intentionality in terms of symbols and causes but in terms of symbols, causes,
and intentionality... (1980b, 522).
Compare Bickhard:
The basic point here is simply that informational correspondence, even
should it exist, does not announce on its sleeve that it exists, or what it is
in correspondence with. Some state or event in a brain or machine that is
in informational correspondence with something in the world must in
addition have content about what that correspondence is with in order to
function as a representation for that system. (2000, 68)
Searle and Bickhard agree, then, that the kind of information that the
‘‘egg foo yung’’ symbol is said to carry is just not the kind of information
that can be appreciated from the first-person perspective of a psychological agent.11 Where they differ is in the way they arrive at their conclusions. Searle’s conclusion is based, primarily, on his conviction that the
revised version of the Chinese room argument shows that the allegedly
content-constitutive relations between the symbol ‘‘egg foo yung’’ and the
object egg foo yung fail to generate any understanding of the meaning of
the symbol. Bickhard arrives at the same conclusion by arguing that the
fact that the symbol encodes the object does nothing by way of explaining
how a psychological subject could use the former as a representation of
the latter. Where Searle demonstrates his point phenomenologically (by
imagining himself performing the computations and by observing that he
does not understand a thing), Bickhard bases his argument on functional
considerations (by showing that there is nothing in encoding per se to
guarantee the capacity of mental states to inform their owners of the
world around them); but these are simply alternative ways of making the
same point.
If symbol-world correspondences are incapable of generating content,
then, not only does RR fail, but, as was mentioned before, the entire project
of explaining intentionality in cognitivist (computational, information-processing) terms seems to have run into a blind alley.
The CTM is premised on a categorical dichotomy between symbol
manipulation and symbol significance (between form and content, syntax and
semantics, information processing and information pickup, computation and
interpretation, etc.). The logic of the dichotomy urges a piecemeal approach
in which the task of providing an account of mental content can be sharply
distinct from, yet at the same time be supplementary to, the task of
accounting for mental (cognitive) processing. CTM being the dominant
theory of cognitive ‘‘architecture’’ (that is, of mental processing), it was only
natural that attempts to explain intentionality, to naturalize content, were
equally dominated by this piecemeal, complementary approach. An encoding-based approach to the explanation of content fits perfectly well in this
picture, since it respects the dichotomy between content and process and, in
particular, is compatible with CTM. Yet, if Searle and Bickhard are right, the
process-content dualism inherent in ‘‘the basic idea of modern cognitive
theory’’, rather than being the solution to the problem of intentionality, is a
veritable source of trouble.
The assumption that mental processing is a purely formal procedure of
symbol-manipulation implies that mental processing, qua mental processing,
has nothing to do with content. Computation, in other words, is non-intentional, as the original Chinese room argument so vividly illustrates.
Similarly, the assumption that mental content is purely a matter of symbol-world linkage implies that, qua content, it has nothing to do with mental
processing. Now, both Searle’s criticism of the robot reply and Bickhard’s
argument against encoding-based representation demonstrate that the same
problem – lack of intentionality – arises again. The existence of symbol-world
linkage is not a property that the cognitive system itself can appreciate, from
its own perspective, by interacting with the symbol and therefore not a
property that can constitute intrinsic content (cf. Edelman and Tononi, 2000,
chapter 11). Indeed, given the assumption that mental content is extrinsic to
mental processing, we need not be surprised at this failure to explicate
intentionality (content) as a system-intrinsic phenomenon! Either way, then,
the system itself, the psychological agent, is left without intentionality.
The only sort of intentionality that CTM sustains, Bickhard and Searle
argue, is derived intentionality – the ascription of content made from the
third-person perspective of programmers, users, observers, and interpreters.12 As mentioned in Section 2.1, standard (artificially constructed) digital
computers are semantic processors only because they are given semantic
interpretation. The underlying idea of cognitivism, however, is that computational machines can come to possess intrinsic intentionality by virtue of
processes that mimic the assignment of content by way of an interpretation in
that they specify mapping relations that parallel the mapping relations that
result from interpretation. The problem with this strategy, as I think both
Searle’s and Bickhard’s arguments show, is that the structure of such
semantic allocation retains its original, observer-dependent character. The
naturally constituted mapping relations that the theory envisions do not
generate intrinsic content; they are merely such that they could make (epistemic) sense to an interpreter if only there were one around.
It transpires, then, that there is no coherent reconciliation between computation and intentionality (or between CTM and the problem of intrinsic
content) and that the foundational crisis pointed out by Searle almost a
quarter of a century ago is very much alive.
5. Afterthoughts
5.1. ON THE AD HOMINEM AND IGNORANCE FALLACIES OF THE ‘‘SO, WHAT DO YOU SUGGEST INSTEAD?’’ OBJECTION
One of the most puzzling features of the controversy that surrounds the
Chinese room argument is the fact that so many able people tend to behave as
if Searle’s failure to provide a convincing alternative to the computational
story somehow implies that his criticism need not be taken seriously. To be
sure, it would be nice to have a viable positive account of intrinsic intentionality. But the cogency of Searle’s critique of cognitivism in no way depends
on the availability of a ready-made alternative, and the shortcomings of his
own account of intentionality (developed in Searle, 1983, 1992) give no reason
to dismiss what he has to say. Clearly, the reasoning behind such a rhetorical
device involves an ad hominem fallacy, but it also involves an interesting
fallacy of ignorance: Since my opponent has not shown that his recipe for
intentionality is correct, or that he has one, my own recipe remains intact.
This being said, it may be mentioned that, unlike Searle, Bickhard has
developed, over the years, a powerful systematic alternative to CTM and
to encoding-based models of representation. Bickhard labels his alternative
to the computational approach ‘interactivism’. This is not the time and
place to explore interactivism in detail (the interested reader may consult
Bickhard, 1980, 1982, 1993, 1996, 2000, 2003; Bickhard and Richie, 1983;
Bickhard and Terveen, 1995; Christensen and Hooker, 2001). Suffice it to
mention here that
(a) The theory strives to explain representation on the basis of the notion
of self-organization in far-from-equilibrium thermodynamics and is a
pioneering, and one of the theoretically most developed, accounts
framed within the emerging dynamic framework in the cognitive
sciences.
(b) It explains mental content without presupposing either computational mental processing or an encoding-based account of meaning.
Thus, aligning Bickhard’s critique of encoding models of representation
with Searle’s critique of CTM may also yield the advantageous side product
of giving no pretext for a premature dismissal of Searle’s argument on the
grounds that there is no alternative to the formula ‘cognition equals
computation plus symbol-world linkage’.
5.2. ON THE FALLACY OF EQUATING FIRST-PERSON SIGNIFICANCE WITH CONSCIOUS APPREHENSION
Discussions of the problem of intrinsic intentionality are frequently burdened
with the entrenched false impression that first-person intentionality is necessarily (that is, in all its manifestations) conscious. Often, what nourishes this
commonly held assumption is the fact that the debate between Searle and his
critics is marked by Searle’s own emphasis on conscious processes versus the
tendency of his cognitive adversaries to reply by pointing to the prevalence of
non-conscious cognitive processes. On a self-organization-based account
such as Bickhard’s, however, the requirement for first-person significance is
not tantamount to a presupposition of conscious accessibility (for a similar,
highly detailed, view see also Damasio, 1994, 1999). Rather, it is a requirement that the information function as information for the agent, namely,
that it play an integral part in the agent’s self-organizing (or self-maintaining) action flow – consciously or not. On this view, consciousness is
nested in a larger embodied context that is already perspectival; to wit, the
functional self is broader than the conscious self. Thus, the first-person
perspective begins with the functional, homeostatic, ‘‘infrastructure’’ of the
embodied agent, which predates consciousness, and, consequently, the
reduction of this perspective to something that entails conscious experience is
fallacious.
When the possibility that intrinsic intentionality need not be conscious is
acknowledged, it becomes clear that the fact that many, perhaps most, of the
processes that CTM strives to explain are non-conscious (informationally
encapsulated, cognitively impenetrable, or however you prefer to put it)
cannot be advanced as a counterargument to the charge that the theory
suffers from a chronic inability to make good on intrinsic intentionality.
Simply put, if all cognitive processes, conscious or not, are perspectival, then
the fact that advocates of the computational approach are typically preoccupied with non-conscious cognitive processes does not imply that they need
not worry about first-person significance.
Acknowledgements
I would like to thank Ausonio Marras, Mark Bickhard, Robert Stainton,
Liam Dempsey, Marnina Norris, Daniel Vaillancourt and Zara McDoom,
John Kearns and an anonymous referee at Minds and Machines.
Notes
1. Consider, in this context, the following quotations from Harnad and from Hayes.
‘‘Searle’s celebrated Chinese Room Argument has shaken the foundations of Artificial
Intelligence. Many refutations have been attempted, but none seem convincing.’’ (Harnad,
1989, 1). ‘‘My own notion of what constitutes [the core of cognitive science] could be
summed up in the following way: it consists of a careful and detailed explanation of what’s
really silly about Searle’s Chinese room argument’’ (Hayes, p. 2 in Lucas and Hayes,
1982).
2. Searle’s original exposition of the Chinese room argument is modeled after a machine designed by Schank and his co-workers for the purpose of understanding a story (Schank and
Abelson, 1977).
3. It is worth mentioning that to insist on the intrinsicality of mental content is not tantamount
to espousing internalism (though this is a common mistake; see for example, Dennett, 1987,
chap. 8). Granted that representation is a relational phenomenon that involves a relation to (a
portion of) the environment, there is still an internal component to it that is unique to the
subject – my representational states are internal states had by me, and not by someone else. It
is my representations, which function as representations for me, whereby I become aware of
the world around me and that are causally efficacious in my thought processes and in my
actions; and it is in this sense that they are intrinsic to me, as opposed to being intrinsic to
someone else, or to being ascribed to me from someone else’s perspective (cf. Searle, 1992, 78–82).
4. There are problems with, and disagreements about, the correct technical characterization of
‘digital system’ and ‘digital representation’ (see for example, Demopolous, 1987; Pylyshyn,
1984, 200–202), but none of them bears heavily on the issues I discuss below.
5. As a referee of the paper has kindly pointed out, it could be the case that one formal system,
say a Lisp interpreter, interprets another formal system, say a Lisp program. It follows, then,
that not every interpreter has to be intelligent (i.e., to possess intentionality). However, a Lisp
interpreter is only an interpreter insofar as it is assigned this role by a human observer.
In other words, it is only an interpreter in a derived sense: though not intelligent in itself, it
presupposes an intelligent human interpreter capable of taking what it does to be an interpretation – hence the circularity problem.
6. The example is borrowed from Bickhard (1996).
7. The argument presented here against encoding is, in fact, only one among a host of arguments Bickhard has developed over the years. I chose to focus on this particular argument,
however, because it seems to be one of the more compelling ways of demonstrating the
problem and because it converges nicely with Searle’s rebuttal to the robot reply.
8. In this context, it is instructive to mention an old argument by Piaget against what he
called ‘‘copy’’ theories of knowledge. Compare: ‘‘[I] find myself opposed to the view of
knowledge as a copy, a passive copy, of reality. In point of fact, this notion is based on a
vicious circle: in order to make a copy we have to know the model that we are copying, but
according to this theory of knowledge the only way to know the model is by copying it, until
we are caught in a circle, unable ever to know whether our copy of the model is like the
model or not.’’ (1970, 15).
9. It is worth emphasizing that the point made here about the inability of encoding relations to
constitute intrinsically functioning representations has nothing to do with the question
whether or not the representation in question is being used consciously, or explicitly. Rather, if
the argument applies at all, it applies with equal force to non-conscious, or implicit, use of
mental representations (see also Section 5.2).
10. For an extended analysis of this problem of internal communication see Edelman and
Tononi (2000, chap. 11).
11. In this context, it is useful to distinguish between two notions of information that are often
conflated: one, information in the technical sense of communication theory, and another, which roughly parallels (or at any rate implies) propositional content and is therefore
necessarily semantic. Sayre (1987) suggests the convention info(t) and info(s) to distinguish the
technical from the semantic sense (see also Bickhard, 2000, Bunge, 1974, Searle, 1980). Using
this convention, we can express Searle’s and Bickhard’s criticism in the following way: symbol-world linkages may be sufficient for info(t), but (information semantics notwithstanding) they
do not constitute info(s).
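The info(t)/info(s) contrast can be underlined with a one-liner (my gloss, not Sayre's): info(t) is a purely quantitative measure attaching to probabilities, and is silent about meaning.

```python
import math

# info(t): the technical, communication-theoretic quantity -- surprisal
# measured in bits. It attaches to probabilities, not to meanings.
def surprisal_bits(p: float) -> float:
    return -math.log2(p)

# A signal occurring with probability 1/8 carries 3 bits of info(t)
# whether it "means" egg foo yung, "N", or nothing at all: the measure
# says nothing about info(s), i.e., about semantic content.
print(surprisal_bits(1 / 8))  # prints 3.0
```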
12. Notably, a very similar conclusion is also reached by Dennett (e.g., 1987), implied as it is by
his influential theory of the intentional stance. There is, however, an important difference too.
Unlike Searle and Bickhard, who are manifest intentional realists, Dennett is an instrumentalist who believes that there simply is no such thing as intrinsic intentionality. Consequently,
Dennett is not in the least disturbed by the conclusion that CTM entails the absence of intrinsic
intentionality. As I have argued in Section 2.2, however, this result should be worrisome to a
large squad of philosophers who are at once realists and computationalists (e.g., Block,
Dretske, Fodor and Millikan).
References
Abelson, R.P. (1980), Searle’s Argument is Just a Set of Chinese Symbols, Behavioral and
Brain Sciences 3, pp. 424–425.
Bickhard, M.H. (1980), Cognition, Convention and Communication, NY: Praeger.
Bickhard, M.H. (1982), Automata Theory, Artificial Intelligence, and Genetic Epistemology,
Revue Internationale de Philosophie 36, pp. 549–566.
Bickhard, M.H. (1993), Representational Content in Humans and Machines, Journal of
Experimental and Theoretical Artificial Intelligence 5, pp. 285–333.
Bickhard, M.H. (1996), Troubles with Computationalism, in W. O’Donohue and R.F.
Kitchener, eds., The Philosophy of Psychology, London: Sage, pp. 173–183.
Bickhard, M.H. (2000), Information and Representation in Autonomous Agents, Journal of
Cognitive System Research 1, pp. 65–75.
Bickhard, M.H. (2003), Process and Emergence: Normative Function and Representation, in
J. Seibt, ed., Process Theories: Crossdisciplinary Studies in Dynamic Categories, Dordrecht:
Kluwer Academic.
Bickhard, M.H. and Richie, D.M. (1983), On the Nature of Representation: A Case Study of
James Gibson’s Theory of Perception, NY: Praeger.
Bickhard, M.H. and Terveen, L. (1995), Foundational Issues in Artificial Intelligence and
Cognitive Science, Amsterdam: Elsevier.
Block, N. (1980), What Intuitions about Homunculi Don’t Show, Behavioral and Brain Sciences 3, pp. 425–426.
Block, N. (1990), Can the Mind Change the World?, in G. Boolos, ed., Meaning and Method:
Essays in Honor of Hilary Putnam, Cambridge: Cambridge University Press, pp. 137–170.
Block, N. (1993), The Computer Model of the Mind, in A.I. Goldman, ed., Readings in
Philosophy and Cognitive Science., Cambridge MA: MIT Press, pp. 818–831.
Bunge, M.A. (1974), Treatise on Basic Philosophy, Vol. 1: Sense and Reference, Dordrecht:
Reidel.
Christensen, W.D. and Hooker, C.A. (2001), ‘Self-Directed Agents’, in J. McIntosh, ed.,
Naturalism, Evolution and Intentionality. Canadian Journal of Philosophy, Special Supplementary Volume.
Damasio, A.R. (1994), Descartes’ Error: Emotion, Reason and the Human Brain, NY: Grosset/Putnam.
Damasio, A.R. (1999), The Feeling of What Happens: Body and Emotion in the Making of
Consciousness, NY: Harcourt Brace.
Demopolous, W. (1987), On Some Fundamental Distinctions of Computationalism, Synthese
70, pp. 79–96.
Dennett, D.C. (1978), Brainstorms: Philosophical Essays on Mind and Psychology, Montgomery VT: Bradford Books.
Dennett, D.C. (1987), The Intentional Stance, Cambridge MA: MIT.
Dretske, F. (1988), Explaining Behavior: Reasons in a World of Causes, Cambridge, MA: MIT/
Bradford.
Edelman, G.M. and Tononi, G. (2000), Consciousness: How Matter Becomes Imagination,
London: Penguin Books.
Fodor, J.A. (1975), The Language of Thought, Hassocks, Sussex/Scranton PA: Harvester Press/
Crowell.
Fodor, J.A. (1980), Searle on What Only Brains Can Do. Open Peer Commentary. Behavioral
and Brain Sciences, 3. Reprinted in D. Rosenthal (ed.).
Fodor, J.A. (1981), Representations, Montgomery: Bradford.
Harnad, S. (1989), Minds Machines and Searle, Journal of Theoretical and Experimental
Artificial Intelligence 1, pp. 5–25.
Harnad, S. (1990), The Symbol Grounding Problem, Physica D 42, pp. 335–346.
Haugeland, J., ed. (1981), Mind Design, Cambridge MA: MIT.
Hayes, P.J., Harnad, S., Perlis, D. and Block, N. (1992), Virtual Symposium on Virtual Mind,
Minds and Machines 2(3), pp. 217–238.
Lucas, M.M. and Hayes, P.J., eds. (1982), Proceedings of the Cognitive Curricula Conference,
Rochester NY: University of Rochester.
Millikan, R.G. (1989), Biosemantics, Journal of Philosophy 86(6), pp. 281–297.
Piaget, J. (1970), Genetic Epistemology, NY: Columbia.
Pylyshyn, Z. (1984), Computation and Cognition: Toward a Foundation for Cognitive Science,
Cambridge MA: Bradford/MIT.
Rosenthal, D. (1991), The Nature of Mind, Oxford: Oxford University Press.
Sayre, K. (1987), Cognitive Science and the Problem of Semantic Content, Synthese 70, pp.
247–269.
Searle, J.R. (1980a), Minds, Brains, and Programs, Behavioral and Brain Sciences 3, pp. 417–424.
Searle, J.R. (1980b), Author’s Response, Behavioral and Brain Sciences 3. Reprinted in D. Rosenthal (ed.).
Searle, J.R. (1983), Intentionality: An Essay in the Philosophy of Mind, Cambridge: Cambridge
University Press.
Searle, J.R. (1992), The Rediscovery of Mind, Cambridge MA: MIT/Bradford.
Schank, R.C. and Abelson, R.P. (1977), Scripts, Plans, Goals and Understanding, Hillsdale,
N.J.: Laurence Erlbaum Associates.
Shannon, B. (1993), The Representational and the Presentational, Hertfordshire: Harvester Wheatsheaf.