
Computationalism: The Very Idea

David Davenport (david@bilkent.edu.tr)
Computer Eng. & Info. Science Dept.,
Bilkent University, 06533 Ankara, Turkey.

Abstract

Computationalism is the view that computation, an abstract notion lacking semantics and real-world interaction, can offer an explanatory basis for cognition. This paper argues that not only is this view correct, but that computation, properly understood, would seem to be the only possible form of explanation! The argument is straightforward: To maximise their chances of success, cognitive agents need to make predictions about their environment. Models enable us to make predictions, so agents presumably incorporate a model of their environment as part of their architecture. Constructing a model requires instantiating a physical "device" having the necessary dynamics. A program and the computation it defines comprise an abstract specification for a causal system. An agent's model of its world (and presumably its entire mechanism) can thus be described in computational terms too, so computationalism must be correct. Given these interpretations, the paper refutes arguments that purport to show that everything implements every computation (arguments which, if correct, would render the notion of computation vacuous.) It then goes on to consider how physical systems can "understand" and interact intelligently with their environment, and also looks at dynamical systems and the symbolic vs. connectionist issue.

Introduction

Not surprisingly, early digital computers were frequently referred to as "electronic brains." Able to perform complex mathematical calculations, play games and even write simple poetry, these machines clearly displayed characteristics previously thought to be the exclusive realm of human beings. Moreover, they performed these feats with such rapidity that it was difficult not to imagine that they would one day exceed our own rather limited cognitive abilities, leading us either into utopia or into slavery.

Of course, this hasn't happened yet (at best, we seem to have a sort of utopic slavery!), but the idea that the mind might be some form of computer has certainly gained considerable currency. Indeed, most research programs in Cognitive Science and Artificial Intelligence adopt this view, known as computationalism: variously, the idea that cognition is computation or that minds can be described by programs. Initial success, however, has given way to a series of seemingly insurmountable practical and philosophical problems. These difficulties have prompted some to search for alternative formulations and to question whether computationalism actually has any explanatory value after all.

This paper argues that computationalism, properly understood, is an invaluable tool in our quest to understand cognition. Indeed, it will become obvious that all the alternative proposals are ultimately committed to the very same viewpoint. To appreciate this, however, it is necessary to first develop a sketch of computation and of what a cognitive agent is and how it may function. This perspective will then provide the foundation upon which to sensibly discuss the meanings and relative merits of the various ideas. This approach is somewhat unusual, perhaps, but appropriate given the interrelated nature of the concepts being investigated. Ultimately, these notions simply cannot be understood in isolation, but only as a package, as a coherent philosophy of mind (indeed, a philosophy of everything.[1])

[1] Davenport (1997), for example, shows how the notion of truth may be analysed using the ideas herein.

What is Computation?

Consider the relatively intuitive notion of a model. A model is something that stands in for, and "corresponds" in relevant ways to, the system being modelled. The model can thus be used to obtain information about the real system, without the necessity of actually interacting with it. This is clearly advantageous when, for instance, the system is too large to manipulate directly (the solar system, or the weather) or when it might be dangerous or expensive to actually do so (oil spillage from a supertanker, or the financial collapse of a major company.) Notice also that there may be many different ways to model the very same system. Depending on the purpose (the answers required), a model may be more or less accurate, may include or omit various details, and may be implemented in different materials. Models, then, allow for predictions about the way things will be, are, or were, in a given set of circumstances.

Models often bear a physical resemblance to the system they represent. For example, a model yacht is a scaled-down version of the real object. Besides being a great plaything, it can be used to answer questions, not just about what the real yacht looks (or would look) like, but about how it might behave in various sea and wind conditions, or how its performance might change if the mast were to be moved slightly forward or aft. Increasingly, however, abstract mathematical/computational models are replacing models that physically resemble the phenomena under investigation.
One reason for this is the difficulty, time and cost associated with crafting such physical models.

What is a computational model? In essence, it is simply a function[2]: a mapping from input states to output states (which may be thought of as a mapping from questions, including any initial data, to answers.) It is important to emphasize that nothing in this view of computation implies discreteness, i.e. it is perfectly compatible with continuous (real) values. Of course, rather than a single function, the mapping may be specified as the joint (sequential and/or parallel) application of possibly recursive functions. State variables may be used to retain intermediate results between function applications. Computation might also be usefully viewed as a formal axiomatic system (comprising a set of symbols and rules for manipulating them.) These alternative views offer different conceptualisations and allow for different space/time tradeoffs during implementation; however, they do not otherwise add or subtract anything.

[2] In fact, it may actually be a relation (which admits many-to-many mappings), which may be necessary to properly model concurrent systems (cf. non-deterministic Turing Machines.)
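To make the functional view concrete, the following minimal sketch (in Python; the examples and names are invented for illustration, not drawn from the paper) shows a computational model as nothing more than such a mapping, specified once as a discrete lookup table and once as a rule over continuous values:

```python
# A computational model as a bare mapping from input states (questions)
# to output states (answers). Both examples below are invented.

# 1. As an explicit, finite lookup table (a discrete function).
tide_table = {"06:00": "low", "12:00": "high", "18:00": "low"}

# 2. As a rule over continuous (real) values; nothing in this view of
#    computation demands discreteness.
def projectile_height(t, v0=20.0, g=9.81):
    """Answer the question 'how high is the projectile at time t?'"""
    return v0 * t - 0.5 * g * t * t

print(tide_table["12:00"])       # -> high
print(projectile_height(1.5))    # -> 18.96...
```

Either specification picks out the same sort of abstract object, a mapping from questions to answers; only the space/time characteristics of the two implementations differ.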
Of course, to actually obtain answers, abstract models must be instantiated somehow. This demands a physical implementation, either directly, using, for example, a FSA, or indirectly, by mentally executing the abstraction or using, for example, a digital computer. Implementing an abstract model involves mapping states of the abstraction to physical states, along with any relevant sequences of such states.[3] There are several approaches to achieving this. One might find (or attempt to find) an existing physical entity whose functioning happened to coincide with that desired. This may prove very difficult though, given the constraints of any reasonably complex model.[4] Alternatively, rather than use an existing system, one could construct such a system anew. This then becomes an engineering problem. The thing to note, however, is that in both cases we have to rely on (known, proven) causal[5] interaction to ensure that the desired behaviour is actually forthcoming. Of course, we now have at our disposal a tool, the digital computer, that provides a very quick and easy way to construct an implementation for almost any abstraction[6]. Computer programs are the means by which we specify the desired states and state transitions to our machine. A program (algorithm[7], computation) is thus an abstract specification of the causal behaviour we desire from the system which is to implement our computation (cf. Chalmers, 1995).

[3] Where states and "sequences" may be real valued or discrete. Note also that, strictly, there is probably no reason to suppose that the mapping might not be sequence to state and vice versa, much like a Fourier transform. Finally, we might accept to implement a functionally equivalent computation, in which case there need not be an obvious mapping!

[4] Note that the mapping must be repeatable, so that arguments to the effect that one might map such a sequence to anything are unlikely to be fruitful. This point is discussed in some detail below.

[5] Causal may mean nothing more than reliably predictable!

[6] Even Turing Machines are unable to perform some functions; see, for example, Turing (1936) and Copeland (1997).

[7] An algorithm is a sequence of (one or more) function applications. Algorithms comprising different sequences and/or primitive functions may result in the same overall computation. A program is an algorithm in a machine-understandable format.

Before moving on to look at cognition and how this view of computation is related to it, it is important to dispose of the argument, put forward by Putnam (1988), to the effect that computation is all pervasive. According to Putnam's proof (in the Appendix of his Representation and Reality), any open system, for example, a rock, can compute any function. If true, this would render void the computationalist's claim that cognition is simply a particular class of computation, since everything, even a rock, would be capable of cognition!

The essence of Putnam's argument is as follows: Every ordinary open system will be in different maximal states, say s1, s2, ... sn, at each of a sequence of times, t1, t2, ... tn. If the transition table for a given finite state automaton (FSA) calls for it to go through a particular sequence of formal states, then it is always possible to map this sequence onto the physical state sequence. For instance, if the FSA is to go through the sequence ABABA, then it is only necessary to map A onto the physical state s1 ∨ s3 ∨ s5, and B onto s2 ∨ s4. In this way any FSA can be implemented by any physical system.

Fortunately (for cognitive science), the argument is not as good as it may at first appear. Putnam's Principle of Non-Cyclical Behaviour hints at the difficulty. His proof relies on the fact that an open system is always in different maximal states at different times. In other words, it is possible to perform this mapping operation only once (and then probably only with the benefit of hindsight!) But this is of no use whatsoever; for computation, as we have seen, is about prediction. Not only is Putnam's "computer" unable to repeat the computation, ever, but also it can only actually make one "prediction" (answer one trivial question.) The problem is that the system is not really implementing the FSA in its entirety. A true implementation requires that the system reliably traverse different state sequences from different initial conditions in accordance with the FSA's transition table. In other words, whenever the physical system is placed in state si it should move into state sj, and whenever it is in sk it should move to sl, and so on for every single transition rule. Clearly, this places much stronger constraints on the implementation. Chrisley (1995), Copeland (1996) and Chalmers (1996) all argue this point in more detail. Chalmers also suggests replacing the FSA with a CSA (Combinatorial State Automaton), which is like a FSA except that its states are represented by vectors. This combinatorial structure is supposed to place extra constraints on the implementation conditions, making it even more difficult to find an appropriate mapping. While this is true, as Chalmers points out, for every CSA there is a FSA that can simulate it, and which could therefore offer a simpler implementation!
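The difference between Putnam's one-off labelling and a genuine implementation can be made vivid with a small sketch (Python; the automaton and the labels are invented for illustration):

```python
# A genuine implementation: the transition table must hold for EVERY state and
# EVERY input, so the device reliably traverses different state sequences from
# different initial conditions. This is what supports prediction.
TRANSITIONS = {("A", 0): "A", ("A", 1): "B",
               ("B", 0): "A", ("B", 1): "B"}

def run_fsa(state, inputs):
    for i in inputs:
        state = TRANSITIONS[(state, i)]
    return state

assert run_fsa("A", [1, 0, 1]) == "B"   # repeatable...
assert run_fsa("B", [0, 0, 0]) == "A"   # ...and counterfactual-supporting

# Putnam's "rock": label one observed run of physical states, after the fact,
# with whatever formal sequence the FSA was supposed to produce (e.g. ABABA).
putnam_mapping = dict(zip(["s1", "s2", "s3", "s4", "s5"], "ABABA"))
# This mapping fits exactly one run, already complete; it cannot be re-run
# from a different initial condition, so it answers no question in advance.
```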
One final point before moving on: Searle (1992) argues that computation is observer-relative, that syntax is not intrinsic to physics, and that his wall might thus be seen to implement the Wordstar program. We have seen that the latter claim is nonsense; however, there is an element of truth in the former ideas. While the relation between the physical implementation and the abstract computation is highly constrained, it is still up to the observer to decide what constitutes a system state and where to draw the boundaries of the system. Clearly, it is also the observer who interprets the system being modelled in terms of the physical states, and thus the corresponding abstract computational states. Computation, like beauty, is largely in the eye of the beholder!

What is Cognition?

Agents are usually considered to be small parts of the world in which they exist and are thus assumed to have limited abilities. Cognitive agents are agents that incorporate and use knowledge of the environment to improve their chances of success, even survival!

In order to cope with the vagaries of its world, an agent needs to select and execute the action most appropriate to its goals. This requires making predictions, both about the current state of the un-sensed parts of its world and about the effects of its own actions on the world. Prediction, then, is the principle around which cognition is organised, and an agent's knowledge thus constitutes a model of its environment. The model is a (presumably) physical system that implements a computation capable of providing the necessary answers. The relation between cognition and computation is thus clear.

An agent's model may be innate or it may be constructed (learnt) as a result of sensing and possibly interacting with the environment. It may be static or continuously refined, again as a result of interactions. Given such a model of the world, sensory input must somehow combine with it to determine actions relevant to the agent's present situation and goal. Any discrepancy between the model's predictions and the subsequent sensory input will indicate errors in the model and can thus provide the basis for updating it.

How an agent actually acquires, represents and subsequently utilises the knowledge inherent in its model of the world need not concern us here (see Davenport (1992, 1993) for further ideas in this direction.)
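Still, the overall organisation just described (predict, act, compare, update) can be summarised in a brief sketch (Python; WorldModel, sense, act and choose_action are hypothetical stand-ins for whatever machinery a real agent would employ, not a proposal about mechanism):

```python
class WorldModel:
    """Toy model: remembers, per situation, what sensory input followed it."""
    def __init__(self):
        self.expect = {}                    # situation -> predicted next input

    def predict(self, situation):
        return self.expect.get(situation)   # None = no prediction yet

    def update(self, situation, observed):
        self.expect[situation] = observed   # revise the model on discrepancy

def agent_loop(model, sense, act, choose_action, steps=100):
    situation = sense()
    for _ in range(steps):
        prediction = model.predict(situation)
        act(choose_action(situation, prediction))  # act on the model's guess
        observed = sense()
        if prediction != observed:                 # discrepancy -> model error
            model.update(situation, observed)
        situation = observed
```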
Computationalism

Given the interpretations of computation and cognition outlined above, is computationalism correct? There are at least three ways to interpret this question: (1) Can cognition be described (simulated) by computations? (2) Is cognition literally computation? and (3) Does the notion of computation offer a suitable basis for understanding and explaining cognition?

Based on our analysis, the answer to the first form of the question, "Can cognition be described by computations?" would seem to be "yes." Clearly, we can construct computational simulations of cognition at various levels; the question though, presumably, refers to description at the "lowest" physical level (if there is any sense to this notion.) Assuming that the mind/brain has a purely physical basis (i.e. no part of it, the soul, perhaps, would continue to exist were its material components to be destroyed), then, since a program/computation is simply a description of a causal system, answering the question in the affirmative requires another physical system having equivalent causal dynamics that we can utilise as the model. This is an empirical problem. It may be the case that the underlying physics of certain cognitive systems does not occur elsewhere. In such a case we might model a specific agent by employing another such agent, if one exists. We could not, however, model the class of such agents in this manner or, equivalently, use the only existing agent to model itself (which would be nonsense, having no predictive value!)

The second form of the question, "Is cognition literally computation?" cannot be answered quite so easily. Computation is certainly part of cognition (specifically, the agent's model of the environment.) But what of the other elements: the input and output pathways linking the model to the environment, the goals, the matching and decision-making mechanism, etc.; are they also computational? Again, if they are physical/causal systems then, presumably, they too can be interpreted computationally, in which case we should also accept that cognition is quite literally a matter of implementing the right form of computational system. Of course, this account does not directly explain the subjective aspects of mind (feelings, desires, etc.) but that is another story.[8]

[8] My intuition here is that there is nothing else! Feelings and the like are a natural "by-product" of certain sufficiently complex biological cognitive systems. In any case, we seem to have little choice but to proceed on the assumption that, as Searle (1992) put it, "the mental can be physical too." Note that, even if true, this is not to say that all talk of the mental is redundant. Quite clearly, explanations/models at this level allow for efficiency in our everyday activities. It would be quite impossible for us to function if all "processing" had to proceed at the molecular level.

The final interpretation, "Does the notion of computation fail to have explanatory value when it comes to understanding cognition?" is of more immediate concern to cognitive science and artificial intelligence researchers. Most work in the field tacitly assumes that computation is an appropriate basis, so if this turns out to be wrong there are likely to be massive upheavals! The case against computationalism has been growing stronger of late, with claims to the effect that computation lacks semantics, is disembodied, is insensitive to real-world timing constraints, is at the wrong level, and, most dramatically, that since every system can compute every function, it is just too pervasive to be meaningful.

Clearly, computation is important from a practical perspective and also, perhaps, from a historical one. It has already served as a vital step in the evolution of ideas that will ultimately lay bare the mysteries of cognition. In fact, the case against the computational view of mind is misguided. We have already seen that, while every system can indeed be viewed as implementing some computation, every system simply cannot implement every computation. Moreover, the fact that computation lacks certain elements of mind, such as semantics, is not important, since our objective must be to explain how these features arise. If computation did possess them, it certainly could not provide any basis for understanding them! Besides, the notion of a computational model is clearly central to the cognitive process and, at least in the case of semantics, it would appear that we can actually develop explanations in these terms, as the following section explains.
Meaning and Representation

What gives some symbols/states meaning? The answer seems obvious: minds do! But this is of no help when the objective is to understand how the mind works. How can mental representations have meaning? AI researchers first suggested that they gained their meaning from other representations. Searle's (1980) infamous Chinese Room Argument was the first nail in the coffin of this idea. Harnad (1993) provided a clearer demonstration of its futility, however, likening the situation to trying to understand the meaning of a word in a foreign-language dictionary. Each word is defined in terms of other words, such that, unless someone provides the meanings for a few primitive words, there is no hope of understanding anything!

Obviously, these primitive terms can only acquire meaning as a result of their relation to the world (environment). Attention thus turned to solving this so-called Symbol Grounding Problem. Connectionists saw ANNs (artificial neural networks) as the solution. Others, in particular Harnad, favoured a hybrid approach, whereby a neural network would sit between the environment and a symbol system, isolating appropriate symbols and providing the necessary grounding. Given the apparent limitations of ANNs (lack of compositional structure, etc., as pointed out by Fodor & Pylyshyn (1989), but refuted by later developments, e.g. recurrent neural networks), Harnad's proposal seemed reasonable. On the other hand, the fundamental problem remains. What exactly is the relation between the mental state and the world? Simply "connecting" it (providing a causal pathway) to the environment doesn't exactly resolve this question. Indeed, it probably isn't even a necessary condition. Many alternative explanations, such as co-variance, seem similarly flawed.

Actually, given the analysis of cognition in terms of models, the solution is basically straightforward. A representation (state) has meaning for the agent just in case it has predictive value. On relevant occasions the symbol might be activated via causal connections with the environment, indicating that the particular feature it represents is present. On other occasions it may become active as a consequence of the execution of the model and thus constitute a prediction. It may not even have a real-world counterpart, but simply be part of a theory (model), which provides answers in the absence of anything better (the ether or charmed quarks, for instance.) It is not, of course, necessary that the predictions always be correct in order for the state to be counted as a meaningful representation. Neither is it necessary that the agent ever display behaviour based on the representation. This is in contrast to the Interactivist theory of Bickhard and Terveen (1995), which, while similar in other respects, claims that interaction is necessary for real meaning. This seems patently wrong. Few people have "played" with the planets or with electrons, yet we would surely wish to say that they did understand these concepts. If not, education would seem to be a waste of time!

Identifying Symbols & Rules

An agent's model of its world might be viewed as a formal system comprising symbols and inference rules. A number of questions thus arise, first and foremost of which concerns the origin of these symbols and rules. Are they, perhaps, innate, or does the agent somehow select an appropriate set of symbols? Acquiring (and maintaining) a suitable set of base symbols for a given environment is likely to be one of the primary determinants of success or failure for an agent.

How, then, might an agent "discover" the symbols it needs? An outline answer might go something like this. Agents have a number of sensors and actuators. The problem for any agent is to decide which actuator (if any) to invoke at any particular moment. Its objective is to satisfy its needs (food, sex, comfort, etc.) In some cases evolution may have endowed it with automatic (innate) mechanisms that restore it to its "ideal" state. In other situations, however, it will need to instigate "deliberate" actions in the hope of achieving these goals. On the (necessary) assumption that there is some regularity in the environment (and lacking any other prior knowledge), the best an agent can do is to store past sensory input patterns and then match the current situation against these in the hope that they might repeat. The matching process will thus produce a set of expectations, and assuming that the agent has also stored information about its past actions and their effects, it should then be able to compute the "intersection" between these, its perceived situation and its goals, and hence select the most appropriate action to take.

Given the variation in input patterns, the initial problem is to identify sets of sensor inputs that regularly occur together. Having isolated these initial sets, the agent can further group them into less frequently occurring sets, and so on. Gradually, it should also be able to determine combinations of these sets that are mutually exclusive of each other (by observing that they share terms, for example.) All of these groupings form the agent's (internal) symbols. Another set of symbols (external ones) is formed when the agent acquires language. Meaning in these symbols involves an additional mapping from the word itself to the representation of the corresponding concept.
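A toy sketch of this grouping process might look as follows (Python; the episodes, features and thresholds are invented purely for illustration):

```python
from collections import Counter
from itertools import combinations

episodes = [                                  # stored past sensory input patterns
    {"fur", "four_legs", "barks"},
    {"fur", "four_legs", "barks"},
    {"fur", "four_legs", "meows"},
    {"fur", "four_legs", "meows"},
    {"feathers", "wings", "tweets"},
]

pair_counts = Counter()                       # how often each pair co-occurs
for ep in episodes:
    pair_counts.update(combinations(sorted(ep), 2))

# Inputs that regularly occur together form a candidate (internal) symbol.
symbols = [pair for pair, n in pair_counts.items() if n >= 3]
print(symbols)                                # [('four_legs', 'fur')]

# Inputs that each recur but never co-occur are candidates for mutual exclusion.
occurrences = Counter(f for ep in episodes for f in ep)
exclusive = [(a, b) for a, b in combinations(sorted(occurrences), 2)
             if pair_counts[(a, b)] == 0
             and occurrences[a] >= 2 and occurrences[b] >= 2]
print(exclusive)                              # [('barks', 'meows')]
```

The same grouping step, applied to the sets it produces, would yield the hierarchy of less frequently occurring composites mentioned above.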
As for the inference rules, they must be basically logical, since the agent must make the correct, rational, "logical" choices. We can thus expect logical rules to be part of an agent's makeup; i.e. in biological agents, evolution will have produced/favoured mechanisms which behave as if they were performing logical inferences. Classical Logic, being the result of abstraction from our spoken language, is evidence for this, although, of course, it does not account for all our observed reasoning. There have been many attempts to extend Logic (e.g. modal logics, temporal and non-monotonic logics, etc.); some, however, would argue that the very foundation upon which Logic is built is in need of revision! For some thoughts in this direction, see Kosko (1993) and Davenport (1999). Certainly, human beings frequently fail to reason perfectly (perhaps due to biological limitations, lack of time, incorrect or incomplete knowledge, etc.), but the fact remains that an agent's mechanism must be inherently logical.
Symbolic, Connectionist or Dynamicist?

Which is the correct view, the symbolic or the connectionist one? Or do we need a hybrid solution? Or perhaps another solution altogether, as, for example, the dynamicists would have us believe? It should be clear from the foregoing that, since cognition is described in functional terms, this is essentially an implementation (organisational) issue. Indeed, the same is true of computation. There may be several ways of instantiating the very same computation. They may differ in many respects (energy consumption, speed, reliability, even method[9], etc.), but these are all irrelevant to the computation itself.

[9] We may wish to distinguish between the function being computed and the method of computing it. For instance, there are several ways of actually multiplying two values (using repeated addition, shift/add, etc.)

In fact, it is far from clear what precisely defines or distinguishes the two archrivals, the symbolic and connectionist paradigms. The symbolic paradigm is usually considered to be closely associated with conventional digital computers, whereas the connectionist paradigm is essentially defined by artificial neural network architectures. The situation is not so clear-cut, however. For one thing, ultimately, both can be implemented using the very same technologies, for example, voltages on wires. Another confusion arises from the fact that ANNs can be implemented on (or at least approximated by) symbolic computers, and there is reason to believe that the converse is also true. McCulloch and Pitts (1943) purported to show just this, by demonstrating that it was possible to construct logic gates from ANN-like threshold elements. This, however, is not really sufficient, since the neural elements are not in practice organised (connected) in this fashion. Some of the limitations on the processing capabilities of connectionist systems, as pointed out by Fodor and Pylyshyn (1989), have been overcome; however, the analysis of this paper clearly points to one remaining shortcoming. Any reasonably sophisticated agent must be able to carry out predictions independent of its current sensor inputs. But most ANNs are simple feed-forward devices, and even recurrent ANNs only delay the input one "step." Meeting this predictability requirement would seem to be possible, in principle, although it may demand more complex ANNs, perhaps ones in which individual neurons retain state information.
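McCulloch and Pitts' demonstration, mentioned above, is easily illustrated (a Python sketch using the usual textbook weights and thresholds, not their original notation):

```python
def threshold_unit(weights, threshold):
    """Fires (1) iff the weighted sum of its binary inputs reaches threshold."""
    return lambda *xs: int(sum(w * x for w, x in zip(weights, xs)) >= threshold)

AND = threshold_unit([1, 1], 2)
OR  = threshold_unit([1, 1], 1)
NOT = threshold_unit([-1], 0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(1) == 0 and NOT(0) == 1

# NAND from a single unit; any Boolean circuit then follows by composition.
NAND = threshold_unit([-1, -1], -1)
assert NAND(1, 1) == 0 and NAND(1, 0) == 1
```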
So, assuming that they are both capable of supporting the necessary computational structures, the choice is an organisational one and cognitive agents could equally well employ either. Of course, there may be other reasons to prefer one form to the other. It may be that one is easier to implement in a particular technology: silicon, biology, or beer cans! Or that it requires less hardware or works more reliably.

Given a particular type of agent (e.g. ourselves), it might be useful to be able to determine which form it was employing. Whether this is possible depends on exactly what distinguishes the two paradigms. One point of difference would appear to lie in the mode of storage. In symbolic systems, if a token is to appear in several "expressions" then each will contain a copy of the token. In contrast, the connectionist approach[10] is to retain only one instance of a token, and then to create a link to this instance from each expression in which it participates. It may also be possible to distinguish the paradigms along the lines of how they "view" the world. Symbolic systems often take sets of mutually exclusive tokens (e.g. blue, red, green... or car, bike, truck...) as their starting point, whereas connectionist systems tend to start with (expect) conjunctions of terms. The latter attempts to identify sets of inputs that always occur together; the former, ones that never occur together. In reality both are impossible to guarantee, so that, as indicated earlier, the only realistic option is for an agent to store input patterns in the hope that they may happen to repeat.

[10] Here we assume the exemplar connectionist system to be a perceptron-like structure that does not employ distributed representations.
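The storage difference can be pictured with a trivial sketch (Python; the representation is, of course, invented for illustration):

```python
# Symbolic style: each expression carries its own copy of the token "red".
symbolic_exprs = [["red", "ball"], ["red", "car"]]

# Connectionist style: one stored instance, shared by reference from each
# expression in which the token participates.
red = {"name": "red"}
connectionist_exprs = [[red, {"name": "ball"}], [red, {"name": "car"}]]
assert connectionist_exprs[0][0] is connectionist_exprs[1][0]  # same object, no copy
```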
And what of the dynamicist view: is it a viable alternative? Van Gelder (1995) characterises cognitive processes as "state-space evolution within dynamical systems." The archetypical example of a dynamical system he suggests is Watt's centrifugal governor for controlling the speed of a steam engine. Simplifying a little, the basic argument is that the system is not computational since there are no representations or, at the very least, no symbolic representations. From the characterisation of computation given at the beginning of this paper, it will be clear that a system can be computational irrespective of whether or not it displays identifiable symbols. The system clearly has a representation of the current engine speed (via the gearing constant.) Furthermore, since it accurately predicts the amount of steam needed to maintain a steady speed, it must have some representation of the desired engine speed (actually encoded in the mechanical design: weights, arm lengths, angles, etc.) And, of course, while it can be described by complex differential equations, it can also be described by (symbolic) difference equations or even qualitative language/symbols, although in each case with correspondingly less precision.
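For instance, a deliberately crude difference-equation description of a governed engine might run as follows (Python; the constants and the linearised update rule are invented, intended only to show that a computational description is available):

```python
target_speed = 100.0    # desired speed, implicit in the mechanical design
speed = 80.0            # current engine speed
k_valve, k_load = 0.5, 0.1

for step in range(50):
    error = target_speed - speed        # flyball angle ~ speed discrepancy
    steam = max(0.0, k_valve * error)   # valve opening set by the linkage
    speed += steam - k_load * speed     # next state computed from current state

print(round(speed, 1))                  # settles near a steady value (~83.3)
```

Like a real proportional governor, this discretised description settles slightly below the target, and it answers the same predictive questions, just with less precision than the differential equations.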
Many physical systems do (like the governor) appear to exhibit properties that are not "naturally" separable, i.e. they appear continuous. Conceptually, perhaps, if the universe were (ultimately) comprised of particles and neat slots, it might be possible to agree upon an appropriate set of states (symbols) to employ in describing any system. On the other hand, if, as seems more likely, the universe is not made up of nice discrete entities, then what we decide to call a state is entirely up to us! Indeed, whatever the underlying physics, we as macro agents are continually faced with such situations. We are perhaps fortunate in that most physical entities happen to form relatively static "islands" widely separated from each other. For this reason we find little difficulty in identifying everyday objects, but we quickly begin to falter when faced with gaseous clouds or subatomic particles.

There would thus appear to be no principled way, or reason, to distinguish dynamical systems from any other forms. Indeed, both symbolic and connectionist systems are ultimately dynamical systems. Kentridge (1995) provides a good discussion of this, showing how rules and differential equations can be equally good descriptions of ANNs.

Timing

Another criticism frequently levelled against computationalism is its failure to say anything about the timing of events. This is because the very notion of computation has traditionally abstracted out time, concerning itself only with sequence. Thus, whether a step in an algorithm takes a second or a millennium doesn't matter: the end result will (must) be the same. The increasing use of GUIs and embedded systems, however, seems to make the traditional (Turing Machine inspired) view of computation, as a purely formal system in which each state is always followed by the very same next state, rather less appropriate. Today, computer systems must handle asynchronous input from the environment and respond in a timely manner. So, the question is whether it is necessary to complicate the classical picture of computation with actual time values or whether pure sequences can still suffice.

Obviously, a system has to be fast enough to cope with the demands made upon it, otherwise it will fail to "keep up." Every technology has its limits though, biological ones perhaps more so than others, so there will always be situations that an agent constructed with a given technology will be unable to handle. Engineers, asked to design/build a system guaranteed to cope with events having certain temporal characteristics, need to be concerned with the actual minimal response times of their system, and may, perhaps, choose a faster technology if necessary. On the other hand, if the technology is fixed, as for example is our own, then there is little more that can be done (after selecting an optimal algorithm.)

Another possible concern relates to situations in which the agent may respond too quickly and hence fail to achieve its goal. While this may be solved by the addition of timing information (e.g. explicit time delays), it might also be handled by finding conditions for invoking the actuator that are more specific. In reality, this might include taking inputs from naturally occurring "clock" signals (e.g. the human heart beat); however, this would not constitute timing information per se.

The final reason that timing might be important relates to stability concerns. Avoiding oscillation in control systems is clearly important, and the development of the mathematical tools necessary to guarantee this has been one of the major achievements of modern control theory. Unfortunately, even in the case of relatively simple linear systems, the analysis is very complex, and there would seem to be no natural way to extend it to non-linear systems many orders of magnitude more complex. The only hope would seem to be systems that could learn and adapt so as to automatically stem oscillation as much as possible.

In our quest to understand cognition, extending the notion of computation to include timing information thus seems unnecessary. On the other hand, techniques that would allow performance criteria to be evaluated would certainly be beneficial.

Summary & Concluding Remarks

Does computation, an abstract notion lacking semantics and real-world interaction, offer a suitable basis for explaining cognition? The answer would appear to be yes; indeed, it would seem to offer the only possible explanation!

The basic argument of this paper is as follows. Models enable us to make predictions. Constructing a model requires building a physical "device" whose states and dynamics map onto those of the target system. A convenient way to do this is to write a program that can be executed on a digital computer. The program, and the computation it defines, is thus an abstract specification of the desired causal system. To maximise their chances of success, cognitive agents need to make predictions about their environment. It therefore seems reasonable to assume that their architecture must include a model that can be used to make such predictions. This model can be described and interpreted in computational terms, so computationalism must offer an appropriate basis for explanation.

While connectionists and dynamicists claim to offer alternative models, it is clear that these relate to organisational concerns and thus do not deflect the essential computational explanation, for they too are computations! The argument put forward by roboticists, psychologists and social theorists, that intelligence/representation demands situated interaction, would appear to be essentially correct on the analysis presented here. A state is representational only on the basis of its predictive value to the agent. From the computational viewpoint this is perfectly natural and answers the question of semantics. Finally, the philosophical argument which claims to show that computation is a potentially vacuous concept was seen to be misleading. Mapping every computation to every system is simply not possible because the proper causal structure is lacking. Computation is about prediction, and while it is possible to map any specific computational sequence onto (almost) any physical system, there is little predictive value in so doing.

Using these ideas we might envisage a hierarchy of systems based on their cognitive abilities. At the bottom would be FSA-like machines that have no inputs or outputs. Above them, purely reactive systems with a few inputs and outputs but fixed causal pathways and no internal state. Next would come adaptive systems, slightly more sophisticated, being able to modulate their behaviour within the limits of their fixed physical structure. These may be followed by FSAs that do have inputs and outputs. Above these would come systems like the modern digital computer and the Turing Machine that have I/O, a significant number of states and a practically infinite ability to reconfigure their causal structure. Lastly, at the top of the hierarchy we might place systems that not only have the ability to rewire themselves, but also to expand the number of inputs, outputs and states available to them.

A major objective of this work was to establish a simple, coherent framework within which to understand the notions of computation and cognition, and the relation between them. By taking a broad view of computation and examining what it is to implement one, we have hopefully
made progress in a way that respects the mass of existing theoretical work and yet retains our intuitions.

References

Bickhard, M. H. & Terveen, L. (1995). Foundational Issues in Artificial Intelligence and Cognitive Science: Impasse and Solution. Advances in Psychology 109, North-Holland, Elsevier.

Chalmers, D. J. (1995). On Implementing a Computation. Minds and Machines 4, pp. 391-402.

Chalmers, D. J. (1996). Does a Rock Implement Every Finite-State Automaton? Synthese, Vol. 108, No. 3, pp. 309-333, Kluwer Academic Pub.

Chrisley, R. L. (1995). Why Everything Doesn't Realize Every Computation. Minds and Machines 4, pp. 403-420, Kluwer.

Copeland, B. J. (1996). What is Computation? Synthese, Vol. 108, No. 3, pp. 335-359, Kluwer Academic Pub.

Copeland, B. J. (1997). The Broad Conception of Computation. American Behavioral Scientist, Vol. 40, No. 6, pp. 690-716, Sage Pub.

Davenport, D. (1992). Intelligent Systems: the weakest link? In Kaynak, O., G. Honderd & E. Grant (Eds.) (1993), Intelligent Systems: Safety, Reliability and Maintainability Issues, Berlin: Springer-Verlag.

Davenport, D. (1993). Inscriptors: Knowledge Representation for Cognition. Proceedings of the Eighth Int. Symposium on Computers and Information Science, Istanbul.

Davenport, D. (1997). Towards a Computational Account of the Notion of Truth. Proceedings of the 6th Turkish Artificial Intelligence and Neural Network Symposium.

Davenport, D. (1999). The Reality of Logic. A talk to the Cognitive Science Group, Middle East Tech. University. (see http://www.cs.bilkent.edu.tr/~david/david.html)

Fodor, J. & Pylyshyn, Z. (1989). Connectionism and Cognitive Architectures: A Critical Analysis. In Pinker, S. & Mehler, J. (Eds.) (1990), Connections and Symbols (A Special Issue of the Journal Cognition), Bradford Books, MIT Press.

Harnad, S. (1993). Grounding Symbols in the Analog World with Neural Nets. Think (Special Issue on Machine Learning.)

Kentridge, R. W. (1995). Symbols, Neurons, Soap-Bubbles and the Neural Computation Underlying Cognition. Minds and Machines 4, pp. 439-449, Kluwer.

Kosko, B. (1993). Fuzzy Thinking: The New Science of Fuzzy Logic. New York: Hyperion.

McCulloch, W. S. & Pitts, W. (1943). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 5, pp. 115-133.

Putnam, H. (1988). Representation and Reality. Cambridge, MA: MIT Press.

Searle, J. (1980). Minds, Brains and Programs. Behavioral and Brain Sciences 3, pp. 417-424. Reprinted in Boden, M. (Ed.) (1990), The Philosophy of Artificial Intelligence, Oxford Univ. Press.

Searle, J. (1992). The Rediscovery of the Mind. Cambridge, MA: MIT Press.

Turing, A. M. (1936). On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42, pp. 230-265.

van Gelder, T. (1995). What Might Cognition Be, If Not Computation? The Journal of Philosophy, Vol. XCII, No. 7.
