On Levels of Cognitive Modeling
The article first addresses the importance of cognitive modeling, in terms of its value to
cognitive science (as well as other social and behavioral sciences). In particular, it
emphasizes the use of cognitive architectures in this undertaking. On this basis, the
article discusses, in detail, the idea of a multi-level framework that ranges from social
to neural levels. In the physical sciences, a rigorous set of theories forms a hierarchy of
descriptions/explanations, in which causal relationships among entities at a high level
can be reduced to causal relationships among simpler entities at a more detailed level.
We argue that a similar hierarchy makes possible an equally productive approach to
cognitive modeling. The levels of models that we conceive in relation to cognition
include, from the highest to the lowest: sociological/anthropological models of
collective human behavior; behavioral models of individual performance; cognitive
models involving detailed mechanisms, representations, and processes; and
biological/physiological models of neural circuits, brain regions, and other detailed
biological processes.
Keywords: Cognitive Modeling; Cognitive Architecture; Level; Causality
1. Introduction
In this article we will argue for the importance of cognitive modeling, in terms of its
value to cognitive science and other social and behavioral sciences, and, in turn,
propose a leveled, or hierarchical, framework for cognitive modeling.
Models in cognitive and social sciences may be roughly categorized into
computational, mathematical, or verbal models. Each model may be viewed as a
theory of whatever phenomena it purports to capture. Although each of these types
of models has its role to play, we will be mainly concerned with computational
modeling, and computational cognitive architectures in particular. The reason for
this emphasis is that, at least at present, computational modeling appears to be the
most promising in many ways, and offers the flexibility and expressive power that no
other approach currently matches.
The task, ultimately, is to model cognitive processes and mechanisms and
their interactions. Because of the difficulty and the complexity of this task, it is
important to find the best way possible to approach this task. Therefore, let us begin
by looking into some foundational issues.
First, what is the nature of ‘scientific understanding’ and ‘scientific explanation’,
which are what cognitive modeling purports to provide? Over the last 500 years of
the growth of modern science, the conception of these notions in philosophy of science has changed,
as physics in particular has become more and more abstract and mathematical.
For example, ‘explanation’ is commonly interpreted as identifying causes. This is
relatively straightforward when used in the context of an example such as the heat of
the sun explaining rain and snow: heat causes evaporation of surface water, which
results in precipitation when the water vapor rises to higher altitudes where the
temperature is lower. However, interest in a deeper definition of ‘explanation’
increased as many of the scientific theories of the 20th century came to posit and
depend upon constructed objects (e.g., atoms, fields of force, genes, etc.) that are
not directly observable by the human senses.
Consequently, one important strategic decision that one has to make with respect
to cognitive science is the level(s) of analysis/abstraction at which we model cognitive
agents. In this regard, we need to better understand the possibility of different levels
of modeling in cognitive science. Below, we will describe in some detail an abstract
framework regarding different levels of analyses involved in developing models in
cognitive science, which we inevitably encounter when we go from data analysis to
model specifications and then to their implementations.
We note that traditional theories of multi-level analysis hold that there are various
levels, each of which involves a different amount of computational detail (Marr,
1982). In Marr’s theory, first there is the computational theory level, in which we
determine the proper computation to be performed, its goals, and the logic of the
strategies by which the computation is carried out. Second, there is the representation
and algorithm level, in which we are concerned with carrying out the computational
theory determined at the first level and, in particular, the representation for the input
and the output and the algorithm for the transformation from the input to the
output. The third level is the hardware implementation level, in which we physically
realize the representation and algorithms determined at the second level. According
to Marr, these three levels are only loosely coupled, or relatively independent; there
are a wide array of choices at each level, independent of the other two. Some
phenomena may be explained at only one or two levels. Marr (1982) emphasized the
“critical” importance of explicit formulation at the level of computational theory—
i.e., the level at which the goals and purposes of a cognitive process are specified, and
internal and external constraints that make the process possible are worked out and
related to each other and to the goals of computation. His explanation was that the
nature of computation depends more on the computational problems to be solved
than on the way the solutions are implemented. In his own words, “an algorithm is
likely to be understood more readily by understanding the nature of the problem
being solved than by examining the mechanism (and the hardware) in which it is
embodied” (p. 24). Thus, he preferred a top-down approach, from a more abstract
level to a more detailed level (see Figure 1). We believe that such theories focus too
much on relatively minor differences in computational tools (e.g., algorithms,
programs, and implementations).
Another variant is Newell and Simon’s (1976) three-level theory. They proposed
the following three levels.
1. The knowledge level. Why cognitive agents do certain things is explained by
appealing to their goals and their knowledge, and by showing rational connections
between them.
2. The symbol level. The knowledge and goals are encoded by symbolic structures,
and the manipulation of these structures implements their connections.
3. The physical level. The symbol structures and their manipulations are realized in
some physical form(s).
Sometimes this three-level organization was referred to as “the classical cognitive
architecture” (Newell, 1990). The point being emphasized is very close to Marr’s
view: what is important is the analysis at the knowledge level and then at the
symbol level, i.e., identifying the task and designing symbol structures and symbol
manipulation procedures suitable for it. Once this analysis (at these two levels) is
worked out, the analysis can be implemented in any available physical means.
In contrast, in our view, the distinction borrowed from computer programming
concerning ‘computation’, ‘algorithms’, ‘programs’, and ‘hardware realizations’, and
their variations—as has been accentuated in Marr’s (1982) and Newell and Simon’s
(1976) level theories—is rather insignificant. This is because, first, the differences
among them are small and often fuzzy, compared with the differences among the
processes and systems to be modeled (i.e., the differences among the sociological
versus the psychological versus the intra-agent, etc.). Second, these different
computational constructs are in reality closely tangled: we cannot specify algorithms
without at least some considerations of possible implementations, and what is to be
considered ‘computation’ (i.e., what can be computed) relies on algorithms,
especially the notion of algorithmic complexity, and so on. Therefore, in actuality, we
often must somehow consider together computation, algorithms, and implementa-
tion. Third, in our view, the separation of these computational details has produced
little major insight, only theoretical baggage. A reorientation toward a
systematic examination of phenomena instead of the tools we use in modeling is,
we believe, a step in the right direction.
So, instead of these existing level theories, we would like to take a different
perspective on multiple levels of analysis, which leads in turn to different levels of
modeling.

Figure 1. Marr’s (1982) Hierarchy of Levels and a New Hierarchy of Four Levels.
In particular, in studying cognition, there is no fixed path from the highest
level to the lowest level or vice versa. Instead, multiple levels can, and should, be
pursued simultaneously and be used to constrain and guide each other to narrow
down the search for plausible cognitive architectures or interesting cognitive
phenomena. Cognitive processes are too complex to be tackled in isolation; an all-out
attack from multiple, converging fronts is necessary. This observation applies to levels
as well as to domains.
We noticed recently that this view of levels is somewhat related to Allen
Newell’s “bands”: the biological, cognitive, rational, and social bands (Newell,
1990). Newell was mainly concerned with different temporal durations of these
different ‘bands’, from tens of milliseconds of the biological band to many hours
of the social band. However, our view of levels has little to do with temporal
durations of these processes, and more to do with different scales of phenomena and,
consequently, different intellectual approaches towards studying them (as well as
different causal explanations and their correspondence across levels, as will be
discussed next).
In sum, we advocate that a new hierarchy of levels be adopted that focuses
attention on phenomena to be studied instead of variations in the tools that we can
use. Each of these newly specified levels provides a unique contribution in an
integrative framework for the study of cognition. The distinction among these
multiple levels is viable, and may even be necessary.
Consider macro chemistry, which deals with entities such as acids, alkalis, and
salts. A consistent causal relationship (or law) at that level is that acid plus alkali
generates salt plus water. The number of different possible chemicals in macro
chemistry is extremely large. On the other hand, atomic theory uses fewer than 100
different types of atoms and, by means of intermediate entities (like hydroxyl and
hydrogen ions), can generate the acid + alkali law. However, at the atomic level, a full
end-to-end description of a macro chemistry process would be long and convoluted,
because it would include a description of the behaviors of all of the approximately 10²⁴
atoms making up a typical chemical reaction. The information density of such a
description would make it too complex to be grasped within human cognitive
capacities. In practice, the physical sciences establish multiple levels of description
that bridge the gap between macro phenomena and fundamental theories like
quantum mechanics.
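To make the chemistry example concrete, the macro-level law and the ionic-level mechanism that generates it can be written side by side (standard textbook chemistry, shown here only for illustration):

```latex
% Macro level: acid + alkali -> salt + water
\mathrm{HCl} + \mathrm{NaOH} \;\rightarrow\; \mathrm{NaCl} + \mathrm{H_2O}
% Detailed (ionic) level: the same law generated by intermediate entities
\mathrm{HCl} \rightarrow \mathrm{H^+} + \mathrm{Cl^-}, \qquad
\mathrm{NaOH} \rightarrow \mathrm{Na^+} + \mathrm{OH^-}, \qquad
\mathrm{H^+} + \mathrm{OH^-} \rightarrow \mathrm{H_2O}
```

The macro-level law states a causal regularity over a vast space of possible chemicals; the ionic description generates that regularity from a handful of entity types.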
Likewise, computer scientists established multiple levels of description of
computation. At a higher level, a high-level programming language (e.g., an
‘imperative language’) is used, which has many different programming constructs for
the convenient description of computation. At a lower level, a machine language
(or an ‘assembly language’) is used, which normally has far fewer different constructs.
At an even lower level, all operations are carried out by transistors, which have only
two states (on/off). All high-level language constructs can be mapped to machine
language constructs in an automated way (through a ‘compiler’). All high-level
language constructs, as well as all machine language constructs, can also be
straightforwardly mapped to the operations of transistors. However, a description of
a reasonably complex computational task based on transistors would be extremely
difficult to understand. Even a description based on a machine language would be
long and hard to read, due to information density. Hence, higher-level descriptions
are necessary.
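To make the contrast concrete, consider the same trivial computation described at two levels; the machine-level style is rendered here in Python for readability, a sketch rather than actual assembly:

```python
# High-level description: a single construct, low information density.
total = sum(range(10))

# Machine-level style description of the same computation: an explicit
# accumulator and counter, with every step spelled out. The functionality
# is identical, but the description is longer and more detailed.
acc = 0            # clear the accumulator
i = 0              # initialize the counter
while i < 10:      # compare and conditionally branch
    acc = acc + i  # add the counter to the accumulator
    i = i + 1      # increment the counter

assert acc == total  # the two levels of description agree
```

A transistor-level description of the same computation would be longer still, by several orders of magnitude.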
In the physical sciences, bridging of levels is often achieved by creating descriptions
at a detailed (deep) level of small but typical processes within a high-level phenom-
enon, and achieving consistency with the high-level phenomenon as much as
possible. Consistency between levels implies that the causal relationships between
high-level entities in a process are generated by the causal relationships between
groups of detailed entities making up the high-level entities. A significant
inconsistency between levels would invalidate the theory. If a large and representative
sample of such consistency tests all give positive results, the detailed (deep) theory
is considered valid. In this situation, a correspondence exists between high and low
level descriptions. In a successful theory, causal relationships at the deeper level
may predict unexpected but experimentally confirmable causal relationships
at higher levels.
Note that the high-level processes that actually occur are only a tiny subset of all
the high-level processes that could conceivably occur given the detailed-level entities.
Although any conceivable high-level process could be described in terms of the
detailed entities, the subset that actually occurs is determined by the actual
configuration of entities that happens to exist (in physical science terms, the
boundary conditions). The identity of the subset may or may not be determined
a priori from the properties of the individual entities alone. In this sense, some
high-level processes may be “emergent”.
Earlier, we argued for the usefulness of computational models; we will not repeat
that argument here.
A theory of cognition that is analogous with a theory in the physical sciences
should ideally first establish a set of entities or conditions, {C1, . . . , Cn}, at a high level
with consistent causal relationships, e.g., the presence of C1 plus C2 causes C3. Then,
the theory must also establish a (smaller) set of entities or conditions at a lower level
so that different combinations of entities or conditions, {c1, . . . , cn}, correspond with
the high-level set, in such a way that if C1 plus C2 causes C3 at the high level of
description, then at the detailed level, the combination of c1, c2, etc. corresponding
with C1 plus C2 at the high level causes the combination of c3 corresponding with C3
at the high level. Descriptions at the high level contain less densely packed
information than descriptions at the detailed level. This often means that the number
of different types of entities needed at the high level would generally be larger than
the number at the detailed level, and the number of different types of causal
relationships at the high level would generally also be larger.
The lower informational density of description at a higher level means it is possible
that many states at a more detailed level could correspond with the same state at the
higher level. Instances of a high-level state are instances of detailed-level states (i.e.,
correspondence of categories across levels). However, normally, no two different
states at a high level can correspond with the same state at a detailed level (Braddon-
Mitchell & Jackson, 1996; Kim, 1993). Furthermore, the objects at a higher level are
defined in such a way that most of the interactions at deeper levels occur within
higher-level objects, and only a small residual amount of interaction occurs between
objects. The higher-level model can then explicitly include only these residual
interactions, keeping the volume of information within human capabilities.
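As a toy illustration of this kind of correspondence (all names and dynamics below are hypothetical, chosen only to make the structure explicit), one can test whether a high-level causal law commutes with the many-to-one abstraction mapping from detailed states to high-level states:

```python
# A toy sketch of cross-level consistency. Many detailed states map onto
# one high-level state; consistency means the high-level causal law and
# the detailed dynamics commute with the abstraction mapping.

def step_detailed(state):
    """Hypothetical detailed-level dynamics: every unit decays sharply."""
    return tuple(x * 0.1 for x in state)

def abstract(state):
    """Many-to-one abstraction: a detailed state maps to one high-level state."""
    return "ACTIVE" if sum(state) > 1.0 else "QUIET"

def step_high(high_state):
    """Hypothetical high-level causal law: activity subsides in one step."""
    return "QUIET"

def consistent(detailed_states, n_steps=3):
    """Check, on a sample of detailed states, that stepping then abstracting
    agrees with abstracting then stepping (correspondence across levels)."""
    for s in detailed_states:
        for _ in range(n_steps):
            if abstract(step_detailed(s)) != step_high(abstract(s)):
                return False  # a significant inconsistency invalidates the theory
            s = step_detailed(s)
    return True

print(consistent([(2.0, 1.5, 0.25), (0.1, 0.2, 0.3)]))  # True for this sample
```

If a large and representative sample of such tests gives positive results, the detailed theory would, on the view sketched above, be considered valid.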
In addition, note that the appropriate choice of objects and causal relationships, at
any one level, is determined by the phenomena to be studied. For example, within
small domains on the surface of the Earth, atoms and chemicals are useful objects. In
larger domains, mineral deposits are useful, and for yet larger domains, continental
plates. Similarly, in cognitive science, useful objects of analysis are identified through
the confluence of a variety of considerations. These considerations may include levels
of analysis, objectives of analysis, data/observations available, accuracy of data/
observations available, information from adjacent levels, etc. Specifically, at a high
level (the psychological level), we may identify objects of analysis such as accuracy (of
a particular type of action), response time, recall rate, etc. At a lower level (the
componential level), we may identify objects such as rules for action decision-
making, associative memory, inferences, etc. At a slightly lower level, we may instead
identify details of mechanisms such as rule encoding (for representing action rules),
associative strengths (between two memory items), backward versus forward
chaining (in making inferences), etc. At a very low level (the physiological level),
we may identify neural correlates of action rules, associative memory, etc., as well as
their details (e.g., their encodings and parameters).
Correspondences across levels are often worked out with a great deal of effort,
through trial-and-error empirical work. For example, mapping neural circuits to
cognitive entities (or vice versa) is a tremendously difficult and tedious process.
Consistency between levels implies that the causal relationships between high-level
entities in a process are generated by the causal relationships between groups of
detailed entities making up the high-level entities. If a representative sample of
consistency tests across levels leads to positive results, the deep theory may be
considered valid.
In relation to the notion of ‘hierarchies of models’, we also need to examine the
notion of ‘modules’. In cognitive science, many different types of module have
already been proposed: e.g., peripheral modules, domain-specific modules, and
anatomical modules. Proposed peripheral modules include early vision, face
recognition, and language production and comprehension. Such modules take
information from a particular modality and only perform a specific range of
functions. Proposed domain-specific modules include those for driving a car or flying
an airplane (Hirschfeld & Gelman, 1994; Karmiloff-Smith, 1992). In such modules,
highly specific knowledge and skill are well developed for a particular domain, but do
not translate easily into other domains. Anatomical modules are isolatable
anatomical regions of the brain that work in relative independence (Damasio,
1994). In an early discussion of cognitive modules, Fodor (1983) proposed that
modules have proprietary input domains and dedicated neural hardware, and
generate information in accordance with algorithms that are not shared with or
accessible to other modules.
All these module definitions can be interpreted as different instantiations of the
requirement for fewer external interactions and more internal interactions. Different
types of modules are different ways to achieve minimized information exchange
among different parts of a system. Minimization of information exchange is
equivalent to requiring that the difference between internal and external interaction
be as large as possible across all modules in a system. In particular, the concept of
information hiding introduced by Parnas (1972) requires that a module’s internal
information be hidden from other modules.[4]
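A minimal sketch of information hiding in this sense (the module and all names are hypothetical): the internal representation stays behind a narrow interface, so interactions across the module boundary remain small.

```python
# Sketch of Parnas-style information hiding: other modules interact with
# this one only through associate/recall; the internal representation is
# hidden and could change without affecting the rest of the system.

class AssociativeMemory:
    """Hypothetical cognitive module with a deliberately narrow interface."""

    def __init__(self):
        # Hidden internal representation; it could be replaced by, say,
        # a connectionist network without changing the interface.
        self._pairs = {}

    def associate(self, cue, target):
        self._pairs[cue] = target

    def recall(self, cue):
        return self._pairs.get(cue)

memory = AssociativeMemory()
memory.associate("salt", "water")
print(memory.recall("salt"))  # other modules never touch _pairs directly
```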
Notice one particularly significant difference between our view of levels and that
of Marr (1982). Marr argued that complete understanding of information
processing in cognition required understanding on three levels: the levels of
computational theory, representation and algorithm, and hardware implementa-
tion. However, Marr’s position was that the three levels are only loosely coupled,
since “the choice of algorithm is influenced, for example, by what it has to do
and by the hardware in which it must run. But there is a wide choice available
at each level, and the explication of each level involves issues that are rather
independent of the other two” (p. 24). Our notion of hierarchies is rather different:
the functionality described at different levels must be consistent across levels. The
difference lies mostly in the information density (the level of detail), and thus in the
length and complexity of descriptions, and in the (likely) smaller number of entity
types needed by a description at a more detailed level. However, causal
relationships at one level will have to correspond (in some way) with causal
relationships at other levels.
Cross-level analyses of this kind have had an impact on many fields of social
sciences research (Axtell, Axelrod, Epstein, & Cohen, 1996).
(More rigorous correspondence, i.e., integration, is of course also possible, as will be
discussed below.)
We note that much of the work of science takes place at higher levels. Scientists,
like all human beings, can only handle a limited volume of information at one time,
and must use a higher-level theory for thinking about a broader phenomenon. If
necessary, they can then focus in on a small aspect of that phenomenon to develop or
apply more detailed theories. For example, an engineer attempting to determine the
reasons for a problem in a chemical manufacturing plant will mainly think at the
macroscopic chemical level, but with occasional shifts to the atomic (or even
quantum mechanical) level. Likewise, a social scientist may mainly think at the
macroscopic social level, but occasionally more detailed probes into individual minds
at the psychological or the componential level may be necessary in order to
understand details of the cognitive basis of some social phenomena. Conversely, a
cognitive scientist may mainly think at the psychological or componential level, but
sometimes a broader perspective at a more macroscopic level may be necessary to
understand the sociocultural determinants of individual cognition.
Beyond such cross-level analysis, there may even be mixed-level analysis. This idea
may be illustrated by the research at the boundaries of quantum mechanics. In
deriving theories, physicists often start working in a purely classical language that
ignores quantum probabilities, wave functions, etc., and subsequently overlay
quantum concepts upon a classical framework (Greene, 1999). This approach is not
particularly surprising, since it directly mirrors our experience. At first blush, the
universe appears to be governed by laws rooted in classical concepts such as a particle
having a definite position and velocity at any given moment in time. It is only after
detailed microscopic scrutiny that we realize that we must modify such familiar
classical ideas. Thus, we view the differences and the separations amongst levels as
rather fluid, and, more importantly, our idealized view does not prevent us from
seeing alternative possibilities.
Another case of mixing levels is as follows. The objects and causal relationships at
higher levels may be defined as combinations of more detailed objects and more
detailed causal relationships. In the ideal case, the causal relationships between
objects at higher levels can be defined in simple terms with 100% accuracy without
reference to the internal structure of those objects as defined at more detailed levels
(e.g., the internal structure of atoms at the quantum mechanical level can largely be
ignored in macroscopic chemistry). In practice, however, this ideal is often not fully
achieved, and the simpler causal relationships at a higher level sometimes generate
predictions that are less consistent with observations than those generated at a more
detailed level. A model must therefore have rules that indicate the conditions under
which a more detailed model must supersede the higher-level model, or, in other
words, when the generally negligible effects of the internal structure of higher-level
objects must be taken into account. Therefore, again, it must be possible to integrate
models on adjacent levels. We would expect similar issues to arise in cognitive
modeling.
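One way to picture such integration (a sketch under simplifying assumptions; all models and thresholds below are hypothetical) is a predictor that uses the cheap high-level model by default and falls back to the detailed model where the higher-level idealization is known to break down:

```python
# Sketch of integrating models at adjacent levels: the high-level model is
# used where its causal relationships are adequate; a rule detects when the
# detailed model must supersede it.

def high_level_model(inputs):
    """Fast, abstract prediction that ignores internal structure."""
    return sum(inputs) / len(inputs)

def detailed_model(inputs):
    """Slower prediction that takes (toy) internal structure into account."""
    return sum(x * (1.0 + 0.01 * i) for i, x in enumerate(inputs)) / len(inputs)

def within_valid_regime(inputs, threshold=100.0):
    """Rule indicating where high-level causal relationships remain adequate."""
    return max(inputs) < threshold

def predict(inputs):
    if within_valid_regime(inputs):
        return high_level_model(inputs)
    return detailed_model(inputs)  # the detailed model supersedes the high-level one

print(predict([1.0, 2.0, 3.0]))    # within regime: high-level model suffices
print(predict([250.0, 2.0, 3.0]))  # outside regime: detailed model takes over
```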
A cognitive architecture, like any scientific theory, must be validated against
experimental data. Revision and refinement are inevitable when inconsistencies and
incorrect predictions are discovered, or when the model is incapable of predicting
something. However, given a sufficiently high degree of mismatch between the
data and the current architecture, i.e., when revision and refinement are no longer
able to accommodate problems that arise, a crisis may develop, which leads to a new
‘paradigm’, i.e., new architectures or even new approaches towards building cognitive
architectures.
However, even with this issue of validation addressed (bearing in mind that our
view is unlikely to be accepted by everyone), there may still be objections to our
multi-level approach. Objections to our hierarchical framework may include the
following: (1) strict correspondence is unlikely, (2) lower levels are irrelevant, and
(3) neurophysiology should go all the way.
We respond as follows. First, there are various kinds of causal correspondence
(e.g., from strictly deterministic to highly probabilistic, and from complete reduction
to looser mappings, etc.). We understand that the concept of ‘causal correspondence’
(the same as the concept of ‘causal explanation’ itself) has to be a flexible and
evolving one. As science progresses, these concepts are bound to change, as the
history of modern science amply demonstrates. Therefore, a dogmatic view of
them is not warranted, and it is certainly not part of our point here.
Second, many people in cognitive science and in AI believe that low levels
are simply implementational details, and as such, they have only minimal relevance
to understanding high-level cognition (Marr, 1982). We disagree. As stated earlier,
different levels are intimately tied together by correspondence of causal explanations
across levels, as well as by mixed-level causal explanations. Such tying together is
necessary for developing a deep theory in cognitive science, as opposed to a shallow,
or even superficial, theory. Anderson (2002) outlined another case (an empirical
case) against this view. Using examples from his tutoring systems, he showed that
low-level considerations can benefit understanding at a much higher level.
In direct contrast to the idea that low levels are irrelevant, there is also the view
that the only way to understand the human mind/brain is through neurophysiology,
which can lead directly to understanding high-level thinking as
well as sociocultural processes (Churchland, 1989; Damasio, 1994; LeDoux, 1992).
We disagree with this view as well. Consider what we said earlier regarding
descriptive complexity and information density: because of the greater amount of detail
at the lower levels, it would be hard to describe high-level phenomena in a clear,
convincing, and humanly comprehensible manner. That is where higher-level
theories come into play: they help to describe higher-level (larger scale) phenomena
with higher-level, more succinct concepts and higher-level, more succinct causal
explanations. Higher abstractness at higher levels may help to reduce the descriptive
complexity of theories and make them humanly comprehensible and practically
useful. In the case of neurophysiology, as compared with psychology, the difference
in terms of amount of detail is indeed huge.
We need to strive to avoid extreme positions at either end of the spectrum
of possibilities. Clearly, a proper approach would be somewhere between these
two extremes.
7. A Quick Survey
Applying this multi-level approach to computational cognitive modeling, a hierarchy
of models would be needed. A cognitive architecture should, ideally, likewise have a
hierarchy of descriptions, from the most abstract to the most detailed, with consistent
causal correspondence (Ohlsson & Jewett, 1997).
As mentioned briefly before, ACT-R has been successful in capturing a wide variety
of cognitive data (Anderson & Lebiere, 1998). Beyond capturing psychological and
behavioral data, work has been going on in capturing socio-cultural processes
through simulating relatively detailed cognitive processes (West & Lebiere, 2001).
Furthermore, attempts have been made to map model processes onto brain regions
and neurophysiological processes. Overall, the model has shown significant promise
in bridging several different levels (inter-agent, agent, intra-agent, and neural).
Also as mentioned earlier, CLARION has been capturing a wide range of data in
various areas (Sun, 2002). Serious attempts have been made to address sociocultural
processes through simulation using this architecture (Sun, 2004; Sun & Naveh, 2004).
Some promising results have been obtained. Some attempts have also been made to
map model structures, components, and processes onto those of the brain through
utilizing work on biological processes of reinforcement learning (Houk, Adams, &
Barto, 1995; Keele, Ivry, Hazeltine, Mayr, & Heuer, 1998; Posner, DiGirolamo, &
Fernandez-Duque, 1997; Sun, 2002). Although much more work is needed, this
cognitive architecture also shows promise in terms of leading up to a multi-level
hierarchical theory with clear cross-level causal correspondence.
Some other cognitive architectures are also making some effort in terms of
bridging across levels. For example, SOAR (Rosenbloom, Laird, & Newell, 1993) is
being extended to a higher level, the social level, with work on teams and other group
processes. On the other hand, some connectionist models, e.g., those described
by O’Reilly and Norman (2002), are making strong connections between neuro-
physiology and psychology. RA (Coward, 2001) is also making connections to the
neurophysiological level.
However, some other existing cognitive architectures may be less promising in terms
of linking across levels and establishing causal correspondence. For instance, COGNET
(Zachary, Le Mentec, & Ryder, 1996) has been purely at a high level of behavioral
description. Its mechanisms and processes do not translate into lower-level processes.
In terms of the details of its mechanisms and processes, it is not grounded in either
existing psychological theories or experimental results in a very convincing way.
Acknowledgment
This work was done while the first author was supported (in part) by Army Research
Institute contract DASW01-00-K-0012.
Notes
[1] In any experiment involving the human mind/brain, there are a very large number of
parameters that could influence results, and these parameters are either controlled or left to
chance. Given the large number of parameters, many have to be left to chance. The selection
of which parameters to control and which to leave to chance is a decision made by the
experimenter, on the basis of which parameters the experimenter thinks are important.
[2] We shall note that exploring the match between a model and human data is an important
means of understanding the mind/brain. Obtaining a good fit is not as trivial a result as
it may seem to those without hands-on experience in this area. Finding a good
fit often involves painstaking work. The result is a detailed understanding of what affects
performance in what ways. Modeling has a lot to contribute to the understanding of the
mind/brain through generating detailed, process-based matching with human data.
[3] See Sun (2001) for a more detailed argument of the relevance of sociocultural processes to
cognition and vice versa.
[4] One of the reasons for modular architectures is that such architectures make it easier to
modify functionality, diagnose and repair problems, and design modules relatively
independently (Bass, Clements, & Kazman, 1998; Kamel, 1987).
[5] He listed three aspects of causality, which he believed to be fundamental: (i) causal processes,
(ii) causal interactions, and (iii) conjunctive common causes. In the process, he developed the
notions of various causal forks (interactive, conjunctive, and perfect causal forks).
[6] The emphasis in the neurosciences on physiological experiments and the suspicion of
high-level theories indicate a residual influence of this approach.
References
Anderson, J. (2002). Spanning seven orders of magnitude. Cognitive Science, 26, 85–112.
Anderson, J., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Lawrence
Erlbaum Associates.
Axelrod, R. (1984). The evolution of cooperation. New York: Basic Books.
Axtell, R., Axelrod, R., Epstein, J. M., & Cohen, M. (1996). Aligning simulation models: A case study and results.
Computational and Mathematical Organization Theory, 1, 123–141.
Bass, L., Clements, P., & Kazman, R. (1998). Software architecture in practice. Reading, MA:
Addison-Wesley.
Bechtel, W., & Richardson, R. C. (1993). Discovering complexity: Decomposition and localization as
strategies in scientific research. Princeton, NJ: Princeton University Press.
Braddon-Mitchell, D., & Jackson, F. (1996). Philosophy of mind and cognition. Oxford, England:
Blackwell.
Castelfranchi, C. (2001). The theory of social functions: Challenges for computational social science
and multi-agent learning. Cognitive Systems Research, 2, 5–38.
Churchland, P. (1989). A neurocomputational perspective. Cambridge, MA: MIT Press.
Coward, L. A. (2001). The recommendation architecture: Lessons from the design of large scale
electronic systems for cognitive science. Cognitive Systems Research, 2, 111–156.
Damasio, A. (1994). Descartes’ error. New York: Grosset Putnam.
Durkheim, E. (1962). The rules of sociological method. Glencoe, IL: The Free Press. (Original
work published 1895)
Edelman, G. (1989). The remembered present: A biological theory of consciousness. New York: Basic
Books.
Fodor, J. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Freeman, W. (1995). Societies of brains. Hillsdale, NJ: Lawrence Erlbaum Associates.
Greene, B. (1999). The elegant universe. New York: W.W. Norton.
Hirschfeld, L. A., & Gelman, S. (1994). Mapping the mind: Domain specificity in cognition and
culture. New York: Cambridge University Press.
Houk, J., Adams, J., & Barto, A. (1995). A model of how the basal ganglia generate and
use neural signals that predict reinforcement. In J. Houk, J. Davis, & D. Beiser (Eds.),
Models of information processing in the basal ganglia (pp. 249–270). Cambridge, MA: MIT
Press.
Jackendoff, R. (1987). Consciousness and the computational mind. Cambridge, MA: MIT Press.
Kamel, R. (1987). Effect of modularity on system evolution. IEEE Software, 4, 48–54.
Karmiloff-Smith, A. (1992). Beyond modularity. Cambridge, MA: MIT Press.
Keele, S., Ivry, R., Hazeltine, E., Mayr, U., & Heuer, H. (1998). The cognitive and neural architecture
of sequence representation. (Technical Report no. 98-03). Eugene, OR: University of Oregon
Institute of Cognitive and Decision Sciences.
Kim, J. (1993). Supervenience and mind. New York: Cambridge University Press.
Kuhn, T. (1970). Structure of scientific revolutions. Chicago: University of Chicago Press.
Lakatos, I. (1970). Falsification and the methodology of scientific research programmes. In
I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of knowledge (pp. 91–196). Cambridge, England: Cambridge
University Press.
Lave, J. (1988). Cognition in practice. Cambridge, England: Cambridge University Press.
LeDoux, J. (1992). Brain mechanisms of emotion and emotional learning. Current Opinion in
Neurobiology, 2, 191–197.
Machamer, P., Darden, L., & Craver, C. (2000). Thinking about mechanisms. Philosophy of Science,
67, 1–25.
Marr, D. (1982). Vision. New York: W.H. Freeman.
McDowell, J. (1994). Mind and world. Cambridge, MA: Harvard University Press.
Milner, D., & Goodale, M. (1995). The visual brain in action. New York: Oxford University Press.
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
Newell, A., & Simon, H. (1976). Computer science as empirical inquiry: Symbols and search.
Communications of the ACM, 19, 113–126.
Ohlsson, S., & Jewett, J. (1997). Simulation models and the power law of learning. In Proceedings of
the 19th Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum
Associates.
O’Reilly, R. C., & Norman, K. A. (2002). Hippocampal and neocortical contributions to memory:
Advances in the complementary learning systems framework. Trends in Cognitive Sciences, 6,
505–510.
Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules.
Communications of the ACM, 15, 1053–1058.
Penrose, R. (1994). Shadows of the mind. Oxford, England: Oxford University Press.
Pew, R., & Mavor, A. S. (Eds.) (1998). Modeling human and organizational behavior: Application
to military simulations. Washington, DC: National Academy Press.
Popper, K. (1959). The logic of scientific discovery. London: Hutchinson.
Posner, M., DiGirolamo, G., & Fernandez-Duque, D. (1997). Brain mechanisms of cognitive skills.
Consciousness and Cognition, 6, 267–290.
Ritter, F. E., Shadbolt, N. R., Elliman, D., Young, R. M., Gobet, F., & Baxter, G. (2003). Techniques
for modeling human performance in synthetic environments: A supplementary review.
Wright-Patterson Air Force Base, OH: Human Systems Information Analysis Center.
Roberts, S., & Pashler, H. (2000). How persuasive is a good fit? A comment on theory testing.
Psychological Review, 107, 358–367.
Rosenbloom, P., Laird, J., & Newell, A. (1993). The SOAR papers: Research on integrated intelligence.
Cambridge, MA: MIT Press.
Salmon, W. (1984). Scientific explanation and the causal structure of the world. Princeton, NJ:
Princeton University Press.
Salmon, W. (1998). Causality and explanation. New York: Oxford University Press.
Schunn, C., & Wallach, D. (2001). In defense of goodness-of-fit in comparison of models to data.
Unpublished manuscript.
Smolin, L. (2001). Three roads to quantum gravity. New York: Basic Books.
Sun, R. (1994). Integrating rules and connectionism for robust commonsense reasoning. New York:
John Wiley & Sons.
Sun, R. (1999). Accounting for the computational basis of consciousness: A connectionist
approach. Consciousness and Cognition, 8, 529–565.
Sun, R. (2001). Cognitive science meets multi-agent systems: A prolegomenon. Philosophical
Psychology, 14, 5–28.
Sun, R. (2002). Duality of the mind. Mahwah, NJ: Lawrence Erlbaum Associates.
Sun, R. (2003). Desiderata for cognitive architectures. Philosophical Psychology, 17, 341–373.
Sun, R. (Ed.) (2004). Cognition and multi-agent interaction. New York: Cambridge University Press.
Sun, R., Merrill, E., & Peterson, T. (2001). From implicit skills to explicit knowledge: A bottom-up
model of skill learning. Cognitive Science, 25, 203–244.
Sun, R., & Naveh, I. (2004). Simulating organizational decision-making using a cognitively realistic
agent model. Journal of Artificial Societies and Social Simulation, 7. Retrieved June 30, 2005,
from http://jasss.soc.surrey.ac.uk/7/3/5.html
Vygotsky, L. S. (1962). Thought and language. Cambridge, MA: MIT Press.
West, R., & Lebiere, C. (2001). Simple games as dynamic coupled systems: Randomness and other
emergent properties. Cognitive Systems Research, 1, 221–239.
Zachary, W., Le Mentec, J., & Ryder, J. (1996). Interface agents in complex systems. In C. Nituen &
E. Park (Eds.), Human interaction with complex systems: Conceptual principles and design
practice (pp. 35–52). Needham, MA: Kluwer.