STUDIES IN LOGIC, GRAMMAR
AND RHETORIC 41 (54) 2015
DOI: 10.1515/slgr-2015-0021
Marcin Miłkowski
Institute of Philosophy and Sociology, Polish Academy of Sciences
THE HARD PROBLEM OF CONTENT:
SOLVED (LONG AGO)
Abstract. In this paper, I argue that even if the Hard Problem of Content,
as identified by Hutto and Myin, is important, it was already solved in naturalized semantics, and satisfactory solutions to the problem do not rely merely
on the notion of information as covariance. I point out that Hutto and Myin
have double standards for linguistic and mental representation, which leads to
a peculiar inconsistency. Were they to apply the same standards to basic and
linguistic minds, they would either have to embrace representationalism or turn
to semantic nihilism, which is, as I argue, an unstable and unattractive position.
Hence, I conclude, their book does not offer an alternative to representationalism. At the same time, it reminds us that representational talk in cognitive
science cannot be taken for granted and that information is different from mental representation. Although this claim is not new, Hutto and Myin defend it
forcefully and elegantly.
Keywords: representation, Hard Problem of Content, satisfaction conditions,
information
Is cognitive science viable without appealing to notions such as truth
or veridicality? The recent flurry of antirepresentational manifestos suggests that it is; antirepresentationalists claim that there can be cognitive
science without content that has satisfaction conditions, at least for a large
range of cognitive phenomena. One such manifesto can be found in Hutto
& Myin (2013). But dissatisfaction with contentful representations runs
deep. Even Fodor and Pylyshyn talk about minds without meanings, claiming that the only semantic property of representations is reference (Fodor
& Pylyshyn, 2015).
However, global antirepresentationalism is obviously untenable; it is
equivalent to semantic nihilism, which states that nothing ever is true or
false, or veridical. Of course, semantic nihilism cannot then be true, for if it is true, it simply has no truth-value. Some semantic nihilists seem to embrace naïve antirealism and claim that nothing ever is true or false.
Hutto and Myin are not that radical, though. The question is: How radical are they, really? My answer is that even if they seem to be radical,
their position is awkwardly inconsistent and that their main argument is
old news. Moreover, the main problem for representationalism, which they
call “The Hard Problem of Content”, has already been solved. In fact, it
was solved decades ago—the book is curiously out of date with
current naturalized semantics.
The structure of the paper is as follows. In the first section, I will argue that because we know that naïve semantic nihilism is untenable, we are justified in saying that there should be a viable notion of satisfaction conditions. Then I show that even if the Hard Problem of Content is important, Hutto and Myin are mistaken in thinking that all its solutions rely on the
notion of information as covariance. There is more to the notion of natural
information than covariance, and teleosemantic accounts of representation
do not rely merely on covariance. Indeed, the Hard Problem of Content was
solved decades ago. I conclude by pointing out that, for the
sake of consistency, Hutto and Myin should apply the same standards to
basic and linguistic minds. But if they do, they will either have to embrace
representationalism or turn to semantic nihilism. Hence, I conclude, their
book does not offer an alternative to representationalism. At the same time,
it reminds us that representational talk in cognitive science cannot be taken
for granted and that information is different from mental representation.
Although this claim is not new, Hutto and Myin defend it forcefully and
elegantly.
1. The Lessons of Semantic Nihilism
Naïve semantic nihilism is not a philosophical position that deserves serious debate, because it would imply that expressing any position, including
semantic nihilism, is pointless. Although there might still be defenders of
such a position, it undermines the very idea of a philosophical debate, as long
as the debate is supposed to be based on rational argumentation. In rational argumentation, one is forced to accept a sound argument, and soundness
implies the truth of the premises and the validity of the argument. Precisely because these are universal standards for any rational debate, undermining
the notion of truth can be detrimental; there would be no way of deciding
between opposing positions besides rhetoric. Hence, it is a minimal requirement of rational argumentation in philosophy that one assume that one’s
statements can be truth-bearers. If they cannot have any truth-value, then
it’s no longer philosophy.
Of course, one could argue that all debate is reducible to rhetorical
tricks, and appeal to a Nietzschean vision of philosophy, but that simply
is not the standard that most philosophers, and science in general, accept.
I don’t think the self-refutation of semantic nihilism would persuade a Nietzschean thinker at all, as some kinds of philosophy arguably avoid the
appeal to truth, and turn philosophy into a kind of literature. Yet if philosophy is just a kind of literature, as Richard Rorty (1989) claims, it makes
little sense to read most of it, as it’s stunningly boring, painful to understand, and dull. Homer is much more fun to read than Plato, and Stieg
Larsson is definitely more aesthetically rewarding than Heidegger. But still,
one may refuse to engage in philosophical debate and, say, play the flute
or write poems instead; but if arguments and reasons count, then it has to
be assumed that there are truth-bearers and satisfaction conditions. And
sophisticated global antirealism has yet to be invented, if it is supposed to
be plausible in a philosophical debate.
Closer to home, semantic nihilism is implicitly denied by all participants of the recent debate over representation. But if this is so, then global
antirepresentationalism (or naïve antirealism about everything) cannot be
a stable position, and appeals to Ockham’s razor are unsuccessful when semantic properties as such are concerned. Parsimony considerations would
speak for semantic nihilism; after all, an ontology that does not include
semantic properties is more parsimonious. But such an ontology is useless
for any serious philosophy. This pertains also to naturalistic philosophy. If
there is no naturalistic account of truth or truth-bearers in a naturalistic
philosophy, so much the worse for such philosophy, not for truth. In other
words, even if we have no account of truth, we need to assume that one is
possible.
The dialectic of the debate is such that antirepresentationalism has to
make an important concession: there cannot be a general argument against
all forms of representation. This is why successful antirepresentational arguments are not systematic (Kirchhoff, 2011): there is no general theory
of antirepresentationalism that would embrace naïve semantic nihilism. At
best, such an argument against representations can rest on a pessimistic
induction, but with an important limitation—arguing against meaningful
linguistic representations leads to an unstable position. This is precisely
why Hutto and Myin are not so radical as to argue against all representations. But this also means that their antirepresentational points are inapplicable to natural language. Indeed, this is what they explicitly claim.
However, their Hard Problem of Content is equally hard when it comes to
cognitive representations as when it comes to language. The account of satisfaction conditions is equally difficult for mental content as for linguistic
content, and there is no reason to give a special proviso for language in
their radical enactivism (Harvey, 2015). Yet, at the same time, I think one
cannot seriously embrace semantic nihilism, which is sometimes implied by
Harvey in his criticism of Hutto and Myin. Elsewhere I have already argued that there is a successful account of satisfaction conditions for mental
contents (Miłkowski, 2015b). Here, I will show that the Hard Problem of
Content can be easily solved when one does not assume that information-as-covariance constitutes content, and that it has indeed been solved. I will
also show that there are other notions of information, summarily ignored by
Hutto and Myin in their discussion—in particular control information and
information-as-structural-similarity.
2. The Hard Problem of Content and Information
Hutto and Myin never take care to define the Hard Problem of Content
explicitly in their book, but it seems that what they have in mind is the
difficulty of positing informational content compatible with explanatory naturalism (Hutto & Myin, 2013, p. xv). Informational content has satisfaction
conditions—it can be satisfied; i.e., be true or false, be veridical or not,
etc. However, as they claim, the only naturalistic notion of information is
the one based on covariance. And because covariance doesn’t constitute
content the problem is hard: There is no naturalistic notion of information that might fit the bill. Semantic information, called “information-ascontent” by Hutto and Myin, simply is not the same as “information-ascovariance”:
Naturalistic theories with explanatory ambitions cannot simply help themselves to the notion of information-as-content, since that would be to presuppose rather than explain the existence of semantic or contentful properties.
(Hutto & Myin, 2013, p. 67)
One thing that strikes a philosopher of science in this statement is the
generic notion of “explanatory ambitions”. Nowhere in the book are Hutto
and Myin clear about what they mean by “explanation”. The “naturalistic
theories” in the above passage cannot refer simply to scientific theories,
as all scientific theories are eo ipso naturalistic; so they probably mean
naturalistic theories in philosophy. But it’s slightly unclear what is meant
here. Do they simply mean explanatory naturalism, which presupposes that
all genuine explanations are compatible with methodological naturalism? Or
do they think that philosophy is poised to offer explanations for empirical
phenomena? This would be fairly surprising, as they do not report any
experiments or empirical evidence even in the broadest sense.
Do they embrace the argument model of Hempel and Oppenheim
(1948)? If that is so, they would have to presuppose that there are universal
laws in cognitive science that are used for explanation jointly with initial
conditions to infer the statements about observed (or predicted) states of
affairs. Maybe they only believe in invariant generalizations in cognitive
science (Woodward, 2001)? Or maybe they think that the notion of function is the core of cognitive explanations (Millikan, 1986)? Whatever they
assume, they fail to analyze any explanations that rely on the notion of
content with satisfaction conditions. One may charitably assume that they
probably mean that the notion of content cannot be presupposed in explanations of satisfaction conditions. This is undeniably true; the explanandum
should be elucidated in a fashion that is methodologically naturalistic for
the explanation to be explanatorily naturalistic. Yet this is not true for explanations that appeal to the notion of content in their explanantia. Let me
elaborate.
Explanations are not supposed to explain everything at one blow, one
possible exception being the alleged theory of everything in fundamental
physics (but even such an explanation would presuppose mathematics rather
than explaining it). Given the dialectic of the debate over content, as introduced in the previous section, one may presuppose that there are true
and false statements; and that means that one may also presuppose that
there is a naturalistic account of them, even if one is not ready to defend
one. Similarly, one need not explain what objective measurement is in
any empirical explanation that relates to experimental measurements. They
may be safely assumed, as they belong to basic notions in science, and Hutto
and Myin supposedly talk of scientific explanations when they mention “explanatory ambitions”. For this reason, they are wrong about the burden
of proof in the debate; representationalists need not prove that the notion
of representation is explanatorily naturalistic if they only use the notion to
explain other phenomena. This does not mean that we don’t need a naturalistic account of representation; it is just to say that explanatory ambitions
do not always commit cognitive scientists to defending a naturalized semantics, even if they use representational explanations. In other words,
they may safely assume that the Hard Problem of Content is solved (and,
indeed, it is).
Let me return to the main argument in the book. The argument assumes
that the only naturalistic notion of information is information-as-covariance;
and Hutto and Myin introduce it with a generic definition “s’s being F ‘carries information about’ t’s being H iff the occurrence of these states of affairs
covary lawfully, or reliably enough” (Hutto & Myin, 2013, p. 66).
I am not going to argue that covariance constitutes content. It does not.
Obviously, covariance is a relation that has completely different characteristics than the relation of representing. For example, covariance is necessarily
symmetrical and reflexive, while representing need not be. This much is
sufficient to show that one cannot reduce representing to covarying.
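The formal asymmetry can be made vivid with a small toy sketch (the states, names, and threshold below are my own illustrative assumptions, not Hutto and Myin’s):

```python
# Toy sketch (illustrative assumptions only): covariance as reliable
# co-occurrence is symmetric and reflexive, while "represents" is directed.

def covaries(history, a, b, threshold=0.9):
    """a and b covary iff their occurrences reliably agree across states."""
    agree = sum(1 for state in history if (a in state) == (b in state))
    return agree / len(history) >= threshold

history = [{"smoke", "fire"}, {"smoke", "fire"}, set(), {"smoke", "fire"}]

# Symmetry and reflexivity hold by construction:
assert covaries(history, "smoke", "fire") == covaries(history, "fire", "smoke")
assert covaries(history, "smoke", "smoke")

# Representing, by contrast, is a directed relation: smoke may be taken to
# represent fire without fire thereby representing smoke.
represents = {("smoke", "fire")}  # a stipulated, non-symmetric relation
assert ("smoke", "fire") in represents and ("fire", "smoke") not in represents
```

Nothing in the sketch turns covariance into representation; it merely displays the difference in the two relations’ formal profiles.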
But Hutto and Myin are wrong in thinking that most theorists of naturalized semantics believe that covariance constitutes content. They do not.
Of course, it is true that the talk of codes in cognitive science is prevalent,
and that coding does not constitute content. This has been argued by Mark
Bickhard for decades (Bickhard & Richie, 1983; Bickhard, 1993,
2008; Campbell, 2011): there are encodings, such as Morse Code, but they
cannot explicate the notion of content. But even if the code (or covariance)
assumption can be found in many foundational texts in cognitive science (for
example in the notion of semantic transduction used by Pylyshyn, 1984), it
does not mean that there has been no progress since then.
Moreover, non-semantic information comes in more flavors that are formally quite distinct from information-as-covariance. Before I show how the Hard Problem has already been solved by Dretske (and Millikan, and Fodor, and
Bickhard, by the way), it’s important to note that mere property covariance
does not suffice to describe certain informational relationships. One of these
is information-as-structural-similarity.
Let me clarify why this is important. It was Kenneth Craik who proposed, in his pioneering work from 1943, that the main function of thought
is to model reality in much the same way as the mechanical devices used
to aid in calculation (Craik, 1967, p. 57). To model reality, thought needs
to parallel reality mechanically in the brain. Since then, this idea has been
embraced, elaborated, and developed by representationalists who stress that
there is an important difference between describing or modeling reality and
simply indicating a single property. Many have claimed that there is a crucial difference between simple property covariance and structured property covariance; the latter is usually framed in terms of isomorphism or
homomorphism (Bartels, 2006), or, to be more exact, structural similarity. The latter, however, ever since Goodman (1951), has had bad philosophical press. It’s a well-known fact that a raven can be thought to
be similar to a writing desk (Carroll, 1900); anything can be shown to
be similar to anything else, unless the relata of the similarity relation
are fixed before checking whether the relation obtains. Yet when the relata are fixed, there are several satisfactory accounts of similarity that can
serve as the basis of information-as-structural-similarity (Decock & Douven, 2010).1
One of the formally correct and influential accounts of similarity is Tversky’s (1977) contrast account. This account is psychologically realistic but
can be applied to judge, for example, the similarity of scientific models to
reality (Weisberg, 2013). Note that the psychologically realistic notion of
similarity does not involve a symmetrical relationship. For example, human
subjects judge North Korea to be more similar to China than China to
North Korea. There are several formal models of psychological similarity,
but Tversky’s set-theoretic version remains one of the simplest to understand, so I will restrict myself to it. Moreover, Tversky’s contrast account
of similarity is not reducible to covariance, which will illustrate my point
that covariance is different from similarity, at least in its psychologically
credible flavor.
The important point is that, for two entities that are assessed as similar,
there will be features that they share and ones that they don’t share; and
the features they don’t share do not covary (as they are absent). One cannot easily account for the contrast between similar entities using covariance
between non-occurrent properties. Even if there is some covariance of properties of two objects, it is therefore not enough for psychologically plausible
similarity judgments: covariance is symmetric, while psychological similarity is asymmetric.
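Tversky’s contrast account can be written compactly as s(a, b) = θ·f(A ∩ B) − α·f(A − B) − β·f(B − A), where A and B are the feature sets of a and b, and f measures a set’s salience (here, simply its size); asymmetry appears as soon as α ≠ β. A minimal sketch (the feature sets and weights are my own illustrative choices, not Tversky’s data):

```python
# Tversky's (1977) contrast model:
#   s(a, b) = theta*f(A & B) - alpha*f(A - B) - beta*f(B - A)
# Feature sets and weights below are illustrative, not drawn from Tversky.

def tversky(a, b, theta=1.0, alpha=0.8, beta=0.2):
    common = len(a & b)          # shared features
    distinctive_a = len(a - b)   # features of a that b lacks
    distinctive_b = len(b - a)   # features of b that a lacks
    return theta * common - alpha * distinctive_a - beta * distinctive_b

north_korea = {"asian", "communist", "one-party"}
china = {"asian", "communist", "one-party", "large", "nuclear", "un-member"}

# With alpha > beta, the first argument (the "variant") is penalized more for
# its distinctive features than the referent is, so similarity is asymmetric:
assert tversky(north_korea, china) > tversky(china, north_korea)
```

This reproduces the pattern mentioned in the text: North Korea is judged more similar to China than China to North Korea, while symmetric covariance cannot capture such a difference.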
Why is this important? For one thing, William Ramsey has recently argued that mere covariance is not enough for representation; he also requires
that there be an additional relationship of similarity between structured
properties (Ramsey, 2007). For another, although structured similarity implies covariance between shared features of some (structured) entities, and
covariant features are trivially similar to one another, these are two formally different relationships. They also involve different logical depth (or the
amount of structural information, measured in logons, or numbers of degrees of freedom). And most importantly, the informational structures that are
neither trivially simple nor extremely complex have distinct cognitive roles,
for example in surrogative reasoning (Swoyer, 1991). Surrogative reasoning
allows one to reason about relatively complex properties in the world without any direct sensing of such properties. In other words, they are detachable from their referents and can be used in representation-hungry problems
(Clark & Toribio, 1994). For this reason, some view representations based
on structured similarity as paradigmatic cases of cognitive representation
(Cummins & Roth, 2012).
So while it is true that structured similarity implies covariance, and covariance implies structured similarity, it does not follow that vehicles that
are in a structured similarity relationship with their reference have the same
amount of structural information as vehicles that merely covary. The additional logical depth of contrast-based structured similarity stems from its
asymmetric nature. But most importantly, structured similarity is at the
core of the capacity to model the referents of structured representations.
This feature of representation has been acknowledged by virtually all proponents of naturalized semantics (except Fodor with his reliance on the
claim that there is no meaning), and it remains a blind spot in Hutto and
Myin’s analysis.
However, there is another blind spot, which is even more important. The
relationships of covariance, iso- or homomorphism, or similarity require one
to fix the candidate relata before assessing whether the relationship obtains
or not. This is one of the important problems for structuralism in any domain, and has been repeatedly discovered by many, at least since Newman’s
proof that Russell’s account of perception is trivial (Newman, 1928). One
of the ways to fix relata is to look at how information works in a system;
entities or properties that do not have any influence on the system that
is supposed to exploit the information should not be taken into consideration. Hence, one has first to look at how information impacts the activity
of a physical system. The mere structure of the vehicle of information, be
it simple or complex, is not enough to determine the meaningfulness of this
information; what is more, it is not enough to determine the mathematical
amount of information. This is because the structural information—the number of distinctions, or physical degrees of freedom—that can be exploited depends on the system in which a given vehicle is operated upon.
Hence, to see how information works, or how representation parallels
reality for a physical system, one needs to account also for the operation of
the users of models. And this is exactly what was proposed by another pioneer of cybernetics, Donald M. MacKay, who analyzed the way information
controls the behavior of machines and people (and belonged, with Kenneth
Craik, Alan Turing and W. Ross Ashby, to the Ratio Club). Control information (to use the term coined by Sloman, 2010) remains the core of
contemporary action-oriented accounts of representation. (MacKay himself
used the term semantic information to refer to it, but quite differently than
his followers. I will use Sloman’s term here.)
MacKay (1969) analyzes the meaning of a message in terms of its effects
on the receiver.2 Suppose I acquire some information which makes some
difference to me:
Fundamentally it implies that in some circumstance or other my expectations
will be different. I am now conditionally ready to react differently. The reactions
potentially affected may be internal or external. They may themselves take the
form of choices from among a number of possible later states of readiness to
react, choices which will now be different as a result of my gaining information.
It is the hierarchy of such readinesses—my total state of readiness for adaptive or goal-directed activity—which changes when I gain information. (MacKay,
1969, p. 60)
For this reason, the meaning of the message for the receiver cannot
be captured fully in propositional content (MacKay, 1969, p. 73). Now,
suppose a receiver has a vehicle with just one physical degree of freedom,
and it only affects a single choice of the receiver, and one that has no further
consequences. In such a case, this vehicle would be trivially simple. But the
same vehicle may be used in a way that operates on a whole ensemble of
other choices; and for this reason, the operational effect of the information
would be complex. For example, I might see a lamp that turns off to signal
that my stereo has been turned off. The lamp can take just two states;
it can be on or off. But because there are multiple other choices I can
make, including the choices in my further reasoning, the lamp has a rich
effect on me.
Contrast this with a modern computer display. It has thousands of
possible degrees of freedom. But it makes no difference to a blind person
(without a screen reader); it also makes little difference to a person with
impaired vision or to a hallucinator. Notably, when it displays a string of
letters, it makes little difference to an illiterate person. Hence, to understand
the causal influence of vehicles of information (analyzed by Dretske, 1988,
in terms of structured causation), one needs to understand the operational
structure of the receiver. To illustrate this point, one may imagine a single
point in a space. The presence or absence of the point corresponds to a single
physical degree of freedom. But when projected on a two-dimensional space
in a receiver, it may affect the operation of the receiver in two dimensions.
Depending on the dimensionality of the space the receiver uses, the point
will have more or less control information.
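The receiver-relativity of control information can be put in a toy sketch (my own illustration, not MacKay’s or Sloman’s formalism; the receivers and numbers are assumptions for exposition):

```python
# Toy sketch of receiver-relative control information: the very same one-bit
# vehicle modulates different numbers of downstream choices in different
# receivers. Receivers and numbers are purely illustrative assumptions.

def control_info_bits(signal_registered, affected_binary_choices):
    """Bits of downstream selection the signal can influence in a receiver:
    zero if the receiver cannot register it, or if it settles no choice;
    otherwise one bit per binary choice it settles."""
    if not signal_registered or affected_binary_choices == 0:
        return 0.0
    return float(affected_binary_choices)

# A lamp signaling "stereo off" has one physical degree of freedom, but for
# a receiver in which it settles a cascade of later choices (turn the stereo
# back on? check the fuse? ...) its operational effect is rich:
assert control_info_bits(signal_registered=True, affected_binary_choices=8) == 8.0

# A richly structured display makes no difference to a receiver that cannot
# register it (a blind person without a screen reader): no control information.
assert control_info_bits(signal_registered=False, affected_binary_choices=10**6) == 0.0
```

The point of the sketch is only that the quantity depends on the receiver’s operational structure, not on the vehicle’s own degrees of freedom.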
These caveats aside, supporters of modeling accounts of representation are mostly right in claiming that simple representations are quite different from complex models. Complex representations are poised to have much more complex effects in cognitive systems. Similarity-based representations
are usually created by reliable processes that are supposed to track some
features of targets and abstract away from other features; but they also
may, and usually do, have more control information, as they can influence the
receivers—in this case, simply, the cognitive systems that contain the mental
representations.
However, neither information-as-covariance, nor information-as-structured-similarity, nor control information constitutes content. Besides the
crudest version of the causal account of reference—Fodor (1992) half-jokingly attributes it to B. F. Skinner—no current account in naturalized
semantics actually claims that content is constituted merely by a tracking
(causation or covariation), similarity, or control relation. While Hutto and
Myin are not alone in this mistaken criticism (see, for example, Mendelovici, 2012), it is important to understand why content is not constituted
by tracking, similarity, or control. If a relation (in a strict logical sense) between the vehicle and the representation’s target constituted content, then
false content would be impossible. This has been known at least since
Brentano (1900). Relations obtain only when their relata exist, and in the
case of intentionality, the targets, or what the representation is about, might
not exist. Brentano, in his early theory, posited non-existent objects as relata but later vehemently criticized such a theory (Chrudzimski, 1999). Ever
since Brentano, there has been a major split in theories of intentionality.
The first group of accounts claimed that intentionality is not a relation, but
a pseudo-relation. The second group took intentionality to be a relation and
tried to explain misrepresentation and falsehood in terms of non-existent objects, purely intentional objects, abnormal conditions of representing, and
the like. Interestingly, it is the latter tradition that is closer to the solution offered by Hutto and Myin. Their teleosemiotics is exactly a new name for this old idea: dispense with content, and frame intentionality in terms of physical
contact (or covariance).
However, Dretske, Millikan, Fodor, Cummins, Bickhard, and other proponents of naturalized semantics do not treat intentionality as a relation
in the standard, logical sense. For this reason, in their accounts, intentionality is not reduced to tracking or similarity relationships. Intentionality
is not information. First of all, were intentionality reduced to information,
the well-known problem of the impossibility of falsehood would reappear. In addition, and this is also crucially important, not all tracking, similarity, or
control relationships constitute mental representations. They are necessary
but not sufficient for representation. For Dretske and Millikan, another crucial factor of content determination is teleological function; for Fodor, an
important role is assigned to counterfactual considerations.
Take visual hallucination as an example. Briefly, according to Dretske’s
account, a certain activation of neurons in the visual pathway has the function of indicating the properties of the perceived scene. In the case of biological dysfunction (as in people with visual impairment), the visual system
may still seem to indicate bizarre figures while there is nothing in the visual field that corresponds to them. But then, of course, there is no real indication; the system uses the visual pathway as though it were indicating
visual properties. The content is not determined by mere indication but by
a function of indication. A similar story can be spelled out in modal terms
(as Fodor would insist): the hallucination is asymmetrically dependent on
the lawful relationship occurring between the representation vehicle and
the target. The content is not determined by the relationship itself but by
a counterfactual regarding the relationship.
One fact that is frequently missed in polemics against teleofunctional theories of content is that indication is for Dretske a basic form
of predication (and Hutto and Myin, in their definition of information-as-covariance, follow Dretske). Let’s see how Dretske defines functional meaning (meaningf):
(Mf) d’s being G meansf that w is F = d’s function is to indicate the condition of w, and the way it performs this function is, in part, by indicating
that w is F by its (d’s) being G. (Dretske, 1986, p. 22)
Notice that indication in this sense is truth-functional; a property F is
ascribed to w, and this can be spelled out in basic logical terms as ascribing a predicate to a subject. Hence, indication has satisfaction conditions.
At the same time, indication cannot be false; it cannot fail to indicate that
w is F. To make falsehood of representation possible, Dretske makes it asymmetrically dependent on truth by introducing the notion of function. The
entity d has the function of indicating that w is F, but as soon as it malfunctions, the indication is false. Yet the content is not lost; if it were an
indicator, it would truly indicate that w is F. But it only has a function of
indication, and it fails to perform the function; hence it is not the case that
it indicates that w is F. Any user (or consumer) of d that would take it to be
an indicator would be in error by taking its property or state G to indicate
that w is F.
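Dretske’s move can be rendered as a small sketch: content is fixed by the function of indicating, not by actual indication, so a malfunctioning device still means that w is F, and is then false. (The class and its names are my own illustration, not Dretske’s formalism.)

```python
# Sketch of Dretske's (Mf): content is fixed by the *function* of indicating,
# so satisfaction conditions survive malfunction. Names are illustrative only.

class Indicator:
    def __init__(self, content):
        self.content = content        # what d has the function of indicating
        self.malfunctioning = False

    def tokens(self, world):
        """d goes into state G (here: fires) given the world; a broken d
        may fire even when its content condition does not obtain."""
        return self.content in world or self.malfunctioning

    def satisfied(self, world):
        """The indication is true iff the condition d has the function of
        indicating actually obtains -- regardless of how d was tokened."""
        return self.content in world

d = Indicator("w is F")
world = set()                  # a world in which w is not F

d.malfunctioning = True
assert d.tokens(world)         # d fires anyway...
assert not d.satisfied(world)  # ...and what it means is false: misrepresentation
assert d.content == "w is F"   # content is not lost in malfunction
```

The asymmetric dependence of falsehood on truth shows up in the sketch: falsity is only definable against the content the device would indicate were it functioning properly.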
There might be various problems with Dretske’s account of content—to
mention only that his early account of function as related to basic biological needs is imprecise, that he fails to link content with agency (and
control), and that indication is taken to be a strictly necessary relationship
(Bielecka, 2014)—but it solves the Hard Problem of Content at least in
principle. The satisfaction conditions are determined by the indication relation cum teleological function, and there is nothing non-naturalistic about
the account. While various accounts of naturalized semantics differ in many
regards, they usually recruit a similar solution. Interestingly, the solution does not treat truth and falsehood symmetrically; falsehood
is dependent on truth but not vice versa. And for Millikan and Bickhard,
information-as-covariance is not the starting point; it appears because the
cognitive system uses its representations systematically in its environment,
not vice versa (see Millikan, 2007, for a longer discussion of this important
point).
The move made by functional theories of content is quite similar to
the one recommended by MacKay; he insisted that meaning, framed in
his operational terms of modification of readiness to act, is coupled in
an information-processing system with its evaluation subsystems (MacKay,
1969, p. 67). The systems whose operation depends on the information
should evaluate it to make sure that it controls their operation in a proper
way. MacKay, however, did not notice that this requires an account of the normativity of content, which is implied by the “proper operation” of the content-driven system. The accounts of content based on the biological notion of
function solve this problem one way or another (Miłkowski, 2015a). This is
true also of some brands of representational enactivism.
For example, the interactivist model (Bickhard, 2008; Campbell, 2011),
which also assumes, just like Hutto and Myin, that encodings are not
sufficient for representation, does not exorcise information relationships—
such as the ones constituted by causation, correlation, similarity, or control. They are recruited for action, and they’re used to build indications
of possible actions (roughly affordances in Gibson’s, 1977, terminology; for
a defense of the claim that affordances are representational, see Bickhard
& Richie, 1983). These indications have implicit content that such-and-such
action would be successful in such-and-such circumstances.3 This means that
the most basic representations have satisfaction conditions because they are
related to the success of actions that they implicitly rely on.
Let me sum up. The Hard Problem of Content is not at all hard anymore. It has been solved repeatedly. Even if Hutto and Myin are right that
information-as-covariance does not constitute content, this does not mean
that there is no role for information in theories of representation. I insist
that the account of satisfaction conditions naturally relies on control information, which, in turn, is not reducible to information-as-covariance. Richly
structured vehicles of information that are produced by reliable processes
selected for tracking some properties may have both causal influence on the
operation of information-processing systems and satisfaction conditions, as
long as they function as control information and are appropriately evaluated.
3. A Dilemma for Radical Enactivism
Commentators have already noticed a peculiar inconsistency in radical
enactivism: it has double standards for mental and linguistic representation
(Harvey, 2015). Hutto and Myin claim that basic minds have no need for
content with satisfaction conditions. The notion of the basic mind remains
undefined; it is probably just a metaphorical term for a cognitive process that does not rely on any linguistic processing. (Otherwise, one could
suppose that propositional content controls my motor skills when I ride my
bike, as some analytic philosophers have suggested; cf. Stanley, 2011. I doubt
that Hutto and Myin would endorse Stanley’s analysis.)
But the account of satisfaction conditions for natural languages is in
no better position than the account of satisfaction conditions for mental
content. While there is a well-known formal definition of truth for formalized languages (Tarski, 1933), Tarski repeatedly stressed that it does
not work for natural languages at all, as these languages do not conform
to the methodological strictures of logic (White & Tarski, 1987). There have
been some attempts to use Tarski’s ideas directly in work on the semantics
of natural languages (Davidson, 1984; Field, 1972), but there is no workedout definition of truth for any natural language that would satisfy Tarski’s
standards. Of course, one need not despair, but prospects for a rapid development of such a formal, comprehensive account are rather dim.
At the same time, we know that for any simple example statement in
English, we can readily formulate appropriate Tarskian T-equivalences. In
this respect, the situation is exactly the same as with mental representations: we do not have a comprehensive account of mental representations in
any natural mind, but we can sketch accounts of truth—or veridicality—
conditions for most simple examples. As Tyler Burge (2010) argues, most
current work in models of early vision, which is probably the most advanced part of perceptual psychology, assumes veridicality as one of the
crucial properties of representations, and can give partial accounts of how
veridicality is established by some organisms.
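To illustrate, the T-equivalences in question follow the familiar Tarskian schema; the sketch below states it schematically, with Tarski’s stock example as an instance:

```latex
% Schema (T): for a sentence S of the object language,
% where p is the metalanguage translation of S:
\text{``}S\text{'' is true if and only if } p
% Instance: ``Snow is white'' is true if and only if snow is white.
```

Any simple declarative English sentence can be plugged into the schema in this way, even though no comprehensive Tarskian definition of truth exists for English as a whole.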
Notice also that, given the dialectic of the debate, one is free to
assume that true statements and, by the same token, truth-conditions do
exist. And since the semantics of natural languages is in no better shape than
the naturalized semantics of mental representation, we should be equally
free to assume that satisfaction conditions exist for mental representation.
Language should not be given any special treatment, so for consistency,
Hutto and Myin should insist on representationalism, unless they want to
embrace semantic nihilism.
So the dilemma for radical enactivism is this: either be consistent and drop the radicalism, or be consistent and truly radical by endorsing semantic nihilism. This dilemma suggests that radical enactivism is not a stable position,
since it makes a special exception for language, probably just to avoid semantic nihilism, which is not particularly interesting to serious thinkers such
as Hutto and Myin. Because unstable positions are not terribly attractive, it is no wonder that Hutto and Myin’s attempt to overthrow representationalism has met with considerable opposition. However, the debate sparked by
their book is immensely important, as is their insistence that information-as-covariance is not enough to establish the existence of mental representations.
Obviously, it is not sufficient; it is just a fallible heuristic, and researchers in
cognitive science should be aware of this.
To sum up, radical enactivism is an unstable position and not a viable
alternative to representationalism in cognitive science. While representationalism can be good or bad, and Hutto and Myin are right in arguing
that one should not posit representations too quickly, their enthusiasm for
dispensing with the notion of satisfaction conditions for mental contents is
simply premature.
NOTES
1 This means that there are several (or even several dozen) slightly different formal
definitions of information-as-structural-similarity. This is not important for my purposes
here, so I will not elaborate on the differences among them.
2 A similar account of the meaning of symbols in terms of constraining has been proposed
by Pattee (1972). The intuition behind these two proposals seems to be exactly the same.
3 For this reason, affordances imply a basic form of predication. Notice that they are
control information, so they are not reducible to their propositional content, but they still
implicitly have propositional content.
REFERENCES
Bartels, A. (2006). Defending the structural concept of representation. Theoria,
21(1), 7–19.
Bickhard, M. H. (1993). Representational content in humans and machines. Journal of Experimental & Theoretical Artificial Intelligence, 5(4), 285–333.
doi:10.1080/09528139308953775
Bickhard, M. H. (2008). The interactivist model. Synthese, 166(3), 547–591.
doi:10.1007/s11229-008-9375-x
Bickhard, M. H., & Richie, D. M. (1983). On the nature of representation: a case
study of James Gibson’s theory of perception. New York: Praeger.
Bielecka, K. (2014). Błędne reprezentacje a pojęcie funkcji w teleosemantyce. Analiza koncepcji Dretskego i Millikan [Misrepresentations and the concept of function in teleosemantics: An analysis of Dretske’s and Millikan’s accounts]. Filozofia Nauki, 1(85), 105–120.
Brentano, F. (1900). Psychologie vom empirischen Standpunkt. Hamburg: F. Meiner.
Burge, T. (2010). Origins of objectivity. Oxford: Oxford University Press.
Campbell, R. J. (2011). The concept of truth. Houndmills: Palgrave Macmillan.
Carroll, L. (1900). Alice’s adventures in wonderland. New York: Street & Smith.
Chrudzimski, A. (1999). Die Theorie der Intentionalität bei Franz Brentano. Grazer
Philosophische Studien, 57, 45–66. doi:10.5840/gps1999574
Clark, A., & Toribio, J. (1994). Doing without representing? Synthese, 101(3),
401–431. doi:10.1007/BF01063896
Craik, K. (1967). The nature of explanation. Cambridge: Cambridge University
Press.
Cummins, R., & Roth, M. (2012). Meaning and content in cognitive science. In
R. Schantz (Ed.), Prospects for meaning (pp. 365–382). Berlin: de Gruyter.
Davidson, D. (1984). Inquiries into truth and interpretation. Oxford: Clarendon
Press.
Decock, L., & Douven, I. (2010). Similarity after Goodman. Review of Philosophy
and Psychology, 2(1), 61–75. doi:10.1007/s13164-010-0035-y
Dretske, F. I. (1986). Misrepresentation. In R. Bogdan (Ed.), Belief: form, content,
and function (pp. 17–37). Oxford: Clarendon Press.
Dretske, F. I. (1988). Explaining behavior: Reasons in a world of causes. Cambridge, MA: The MIT Press.
Field, H. (1972). Tarski’s theory of truth. The Journal of Philosophy, 69(13), 347–
375.
Fodor, J. A. (1992). A theory of content and other essays. Cambridge, MA: The
MIT Press.
Fodor, J. A., & Pylyshyn, Z. W. (2015). Minds without meanings: An essay on the
content of concepts. Cambridge, MA: The MIT Press.
Gibson, J. J. (1977). The theory of affordances. In R. Shaw & J. Bransford (Eds.),
Perceiving, acting and knowing (pp. 67–82). Hillsdale, NJ: Erlbaum.
Goodman, N. (1951). The structure of appearance. Cambridge, MA: Harvard University Press.
Harvey, M. I. (2015). Content in languaging: why radical enactivism is incompatible
with representational theories of language. Language Sciences, 48, 90–129.
doi:10.1016/j.langsci.2014.12.004
Hempel, C., & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy
of Science, 15(2), 135–175.
Hutto, D. D., & Myin, E. (2013). Radicalizing enactivism: Basic minds without
content. Cambridge, MA: The MIT Press.
Kirchhoff, M. D. (2011). Anti-representationalism: Not a well-founded theory of
cognition. Res Cogitans, 2, 1–34.
MacKay, D. M. (1969). Information, mechanism and meaning. Cambridge, MA:
The MIT Press.
Mendelovici, A. (2012). Reliable misrepresentation and tracking theories of mental
representation. Philosophical Studies, 165(2), 421–443. doi:10.1007/s11098-012-9966-8
Millikan, R. G. (1986). Thoughts without laws; cognitive science with content. The
Philosophical Review, 95(1), 47–80.
Millikan, R. G. (2007). An input condition for teleosemantics? Reply to Shea (and
Godfrey-Smith). Philosophy and Phenomenological Research, 75(2), 436–
455. doi:10.1111/j.1933-1592.2007.00083.x
Miłkowski, M. (2015a). Function and causal relevance of content. New Ideas in
Psychology, 1–9. doi:10.1016/j.newideapsych.2014.12.003
Miłkowski, M. (2015b). Satisfaction conditions in anticipatory mechanisms. Biology
& Philosophy, (February). doi:10.1007/s10539-015-9481-3
Newman, M. H. A. (1928). Mr. Russell’s “Causal Theory of Perception”. Mind,
37(146), 137–148.
Pattee, H. H. (1972). Physical problems of decision-making constraints. International Journal of Neuroscience, 3(3), 99–105. doi:10.3109/00207457209147629
Pylyshyn, Z. W. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: The MIT Press.
Ramsey, W. M. (2007). Representation reconsidered. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511597954
Rorty, R. (1989). Contingency, irony, and solidarity. Cambridge: Cambridge University Press.
Sloman, A. (2010). What’s information, for an organism or intelligent machine?
How can a machine or organism mean? In G. Dodig-Crnkovic & M. Burgin (Eds.), Information and computation. Singapore: World Scientific Publishing.
Stanley, J. (2011). Know how. Oxford: Oxford University Press.
Swoyer, C. (1991). Structural representation and surrogative reasoning. Synthese,
87, 449–508.
Tarski, A. (1933). Pojęcie prawdy w językach nauk dedukcyjnych [The concept of truth in the languages of deductive sciences]. Prace Towarzystwa Naukowego Warszawskiego, Wydział III Nauk Matematyczno-Fizycznych, (34).
Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327–352.
Weisberg, M. (2013). Simulation and similarity: using models to understand the
world. New York: Oxford University Press.
White, M., & Tarski, A. (1987). A philosophical letter of Alfred Tarski. The Journal
of Philosophy, 84(1), 28–32.
Woodward, J. (2001). Law and explanation in biology: Invariance is the kind of
stability that matters. Philosophy of Science, 68(1), 1–20.