
CHAPTER 15

The Gödel Theorem and Human Nature

Hilary W. Putnam

In the Encyclopedia of Philosophy (Edwards, 1967, 348–49), the article by Jean van
Heijenoort titled “Gödel’s Theorem” begins with the following terse paragraphs:

By Gödel’s theorem the following statement is generally meant:


In any formal system adequate for number theory there exists an undecidable formula,
that is, a formula that is not provable and whose negation is not provable. (This statement
is occasionally referred to as Gödel’s first theorem.)
A corollary to the theorem is that the consistency of a formal system adequate for number
theory cannot be proved within the system. (Sometimes it is this corollary that is referred
to as Gödel’s theorem; it is also referred to as Gödel’s second theorem.)
These statements are somewhat vaguely formulated generalizations of results published
in 1931 by Kurt Gödel, then in Vienna.

Despite its forbidding technicality, the Gödel (1931) theorem has never stopped gener-
ating enormous interest. Much of that interest has been piqued because with the proof
of the Gödel theorem, the human mind has succeeded in proving that – at least in any
fixed, consistent system with a fixed, finite set of axioms that are at least minimally
adequate for number theory and in a system that has the usual logic as the instrument
with which deductions are to be made from those axioms – there has to be a mathe-
matical statement the human mind cannot prove. (In fact, the theorem is much stronger
than this: even if we allow the system in question to contain an infinite list of axioms
and to have an infinite list of additional rules of inference, the theorem still applies,
provided the two lists can be generated by a computer that is allowed to go on running
forever.)
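Stated schematically (the notation below is a standard modern one, not Gödel's own), the first theorem in this general form reads:

```latex
% First incompleteness theorem, schematic form. T ranges over consistent,
% recursively axiomatizable theories containing a minimal amount of arithmetic;
% "recursively axiomatizable" is exactly the condition that the lists of axioms
% and rules can be generated by a computer allowed to run forever.
\[
  T \nvdash G_T
  \qquad \text{and} \qquad
  T \nvdash \neg G_T ,
\]
% where G_T is the undecidable (Gödel) sentence constructed for T.
```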
Moreover, instead of speaking of formal systems adequate for (at least) number the-
ory, one can speak of computers. In such a version, the theorem says that if a computer
is allowed to write sentences of number theory forever, subject to the constraints that
(1) the list contains a certain subset of the axioms of Peano arithmetic, (2) the list is
not inconsistent (i.e., the list does not contain both a sentence and the negation of that
sentence), and (3) any sentence that can be deduced from finitely many sentences in
the list via first-order logic is also in the list, then there is a true sentence of number
theory – in fact, the sentence that says that the output of the machine is consistent is
an example – that is not included in the list of sentences listed by the computer. If we
speak of the sentences listed by the computer as “proved” by the computer, and of a
computer of the kind just mentioned as an “acceptable” computer, we can phrase what
I describe in the preceding succinctly as follows:
There exists a procedure by which, given any acceptable computer, a human mathemati-
cian can write down a sentence of number theory that the computer cannot prove or
disprove.
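In symbols (the set name and predicate letters here are mine, introduced only to fix ideas), write $W_M$ for the set of sentences that machine $M$ eventually lists; the three constraints and the Gödelian conclusion then read:

```latex
\[
\begin{aligned}
&(1)\ A \subseteq W_M && \text{($A$ a fixed subset of the Peano axioms)}\\
&(2)\ \neg\exists\varphi\;(\varphi \in W_M \wedge \neg\varphi \in W_M) && \text{(the list is consistent)}\\
&(3)\ W_M \vdash \varphi \ \Rightarrow\ \varphi \in W_M && \text{(closure under first-order deduction)}\\
&\text{Conclusion:}\quad \operatorname{Con}(W_M) \text{ is true, yet } \operatorname{Con}(W_M) \notin W_M .
\end{aligned}
\]
```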

Does this or does this not tell us something about the human mind? That is what I shall
discuss in this chapter.

15.1 Noam Chomsky, Scientific Competence, and Turing Machines

What provoked the research described in this chapter is a conversation I had with Noam
Chomsky more than twenty years ago. I asked Chomsky whether he thought our total
competence – not just the competence of his postulated “language organ” but also the
competence of the “scientific faculty” that he also postulates – can be represented by
a Turing machine (where the notion of competence is supposed to distinguish between
our true ability and our so-called performance errors). He said yes. I at once thought
that a Gödelian argument could be constructed to show that if that were correct, then
we could never know – not just never prove mathematically, which is obvious after
the Gödel incompleteness theorems, but never know even with the help of empirical
investigation – which Turing machine it is that simulates our scientific competence.
That Gödelian argument was given in an article I published more than twenty years
ago (Putnam, 1985), but I have always been dissatisfied with one of the assumptions
I used in that proof (I called it a “criterion of adequacy” for formalizations of the
notion of justification), namely, the assumption that no empirical evidence can justify
believing p if p is mathematically false. (Although many would regard this assumption
as self-evident, those who would allow quasi-empirical methods in mathematics have
good reason to reject it.) Today, I am ready to present a proof whose assumptions
seem to me unproblematic. (Or at least they would be unproblematic if the notion of
a formalizable scientific competence made sense. Why I don’t think it makes sense is
the subject of the final part of this chapter.)

15.2 Gödel’s Incompleteness Theorems

Gödel’s second incompleteness theorem shows that if the set of mathematical truths
that is within human competence to prove were the output of an acceptable computer
(as defined earlier), then the consistency of the computer’s output could not be one of
those mathematical truths. For reasons I will go into in the final part of this chapter,
this doesn’t show that the human mind (or the competence of a human mathematician)
can’t be perfectly simulated by a Turing machine. (This remark does not mean, by
the way, that I want to defend the view that it can be simulated by a Turing machine;
in the final part of this chapter, I argue that it is a mistake to think of us as having a mathematical competence so well defined that the question makes sense.) Postponing that point for the time being, though, the Gödel incompleteness
theorems don’t even prove that we can’t know which Turing machine simulates our
mathematical competence, assuming such a machine exists, for to know that would
require empirical, not mathematical, research.
To see why, note that the claim that we know we are consistent is ambiguous. It may mean, for example, that we know the English sentence “our mathematical competence is consistent” to be true; this is very different from knowing an arithmetized version of the claim that it is consistent. The latter requires knowing which Turing machine we are. Even if, in some sense, we know that we are consistent, to
go from that fact to the statement that “Turing machine Tk has a consistent output,”
where Tk is a machine that simulates our mathematical competence, would require the
empirical premise that Tk is such a machine; however, then “Tk is consistent” would
be a quasi-empirical mathematical statement, that is, a statement of mathematics that
we believe on the basis of an argument some of whose premises were empirical, and
nothing in the Gödel theorems rules out our knowing the truth of such a quasi-empirical
statement, in addition to knowing the mathematical theorems that we are able to prove
in the strict sense. However, I do want to say that reflecting on Gödel’s methods and
their implications can lead us to a deeper philosophical understanding of the issues
raised by Lucas, Penrose, and (on the other side) Chomsky.

15.3 Thinking about Chomsky’s Conjecture in a Rigorous Gödelian Way

To think about Chomsky’s conjecture in a Gödelian way, let COMPETENCE abbreviate the empirical hypothesis that a particular Turing machine Tk (the kth machine in the
standard Kleene enumeration) perfectly simulates the competence of our scientific
faculty, in Chomsky’s sense. To make this concrete, we may take this to assert that
for any proposition p, and for any evidence u expressible in the language that Tk uses
for expressing propositions and evidence (and if Chomsky is correct, that language is
innate), Tk sooner or later prints out Justifiedu (p) if and only if it is justified to accept p
when u is our total relevant evidence. (I identify sentences with their Gödel numbers in
what follows. Thus “Tk sooner or later prints out ‘Justifiedu (p)’” means that Tk sooner
or later prints out the word Justified followed by the symbols for the Gödel number of
the sentence u (subscripted), followed by a left parenthesis, followed by the symbols
for the Gödel number of the sentence p, followed by a right parenthesis.)
In addition, COMPETENCE asserts that the hypotheses and evidence we can describe in an arbitrary natural language can also be represented in Tk’s language, that is, that that language has the full expressive power of Chomskian “Mentalese.”
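As a schematic formula (Prints and J are my abbreviations: Prints(Tk, t, n) says that Tk has output the sentence with Gödel number n by step t, and J(p, u) says that it is justified to accept p when u is our total relevant evidence), the first clause of COMPETENCE is:

```latex
\[
\textsc{Competence}(k):\quad
\forall p\,\forall u\ \Bigl[\ \exists t\,\mathrm{Prints}\bigl(T_k,\, t,\, \ulcorner \mathrm{Justified}_u(p) \urcorner\bigr)
\;\leftrightarrow\; J(p, u)\ \Bigr],
\]
```

together with the expressibility clause just stated.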
I am going to sketch a proof (given in full in the appendix) that if this is true, then
COMPETENCE cannot be both true and justified. In other words, if some particular
Turing machine Tk correctly enumerates the sentences that our competence would
take to be justified on arbitrary evidence, then that statement is one that cannot be
empirically justified on any evidence.
The proof proceeds, of course, via reductio ad absurdum. The idea is to assume
that COMPETENCE is true and that some sentence e states evidence that justifies
accepting COMPETENCE. (Note that e and k are constants throughout the following
argument.) Using Gödel’s technique, we can construct a sentence GÖDEL in the
language of Peano arithmetic, and hence in the language of Tk (because that language
is capable of expressing all humanly accessible science), that says that Tk never prints
out Justifiede (GÖDEL). Therefore GÖDEL has the following property:

It is justified to believe GÖDEL on evidence e if and only if it is justified to believe (on that same evidence) that the machine Tk never says (even if it is allowed to run forever)
that it is justified to believe GÖDEL on evidence e.
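In the notation introduced earlier, the diagonal (self-reference) lemma supplies the sentence; its defining arithmetical equivalence, of which the displayed property is the epistemic reading, is:

```latex
\[
PA \vdash\ \textsc{Gödel}\ \leftrightarrow\
\neg\exists t\,\mathrm{Prints}\bigl(T_k,\, t,\, \ulcorner \mathrm{Justified}_e(\textsc{Gödel}) \urcorner\bigr).
\]
```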

We are now ready to describe the idea of the proof of the following theorem (the actual
proof is given in the appendix).
An anti-Chomskian incompleteness theorem is that

COMPETENCE can’t be both true and justified.

It is well known that Gödel’s proof mimics the liar’s paradox. The proof of the anti-
Chomskian incompleteness theorem given in the appendix similarly mimics the fol-
lowing paradox, which we might call the justification paradox:

By the Gödel technique for constructing self-referring sentences (or the technique of
Quine, 1966, 9), it is easy to construct a sentence that says of itself that belief in it is
not justified. For example (using Quine’s technique), (B) “When appended to its own
quotation yields a sentence such that believing it is not justified” when appended to its
own quotation yields a sentence such that believing it is not justified.
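The quotational mechanics here are purely syntactic, so they can be exhibited in a few lines of code. The following toy is my illustration only (the function name is invented for the purpose); it simply performs the appending that (B) describes:

```python
# Quine's self-reference device: appending a predicate phrase to its own
# quotation yields a sentence that is about itself.

def append_to_own_quotation(phrase: str) -> str:
    """Return the quotation of `phrase` followed by `phrase` itself."""
    return f'"{phrase}" {phrase}'

core = ("when appended to its own quotation yields a sentence "
        "such that believing it is not justified")

B = append_to_own_quotation(core)
print(B)
# The output is exactly the sentence (B) in the text: it describes the
# result of appending `core` to its own quotation -- which is (B) itself.
```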

Now reflect on the following: is believing (B) justified? Well, if belief in (B) is justified,
then I am justified in believing that belief in (B) is not justified because I see that that
is what (B) says. However, if belief in (B) is justified, then believing that belief in (B)
is justified is justified, so believing that belief in (B) is justified is both justified and not
justified, which is impossible. In this way, I now have justified believing that belief in
(B) is not justified, but what (B) says is precisely that belief in (B) is not justified, so I
have justified believing (B). Therefore believing (B) is justified, which, we just proved,
leads to a contradiction!
Just as Gödel’s proof turned the liar’s paradox from a contradiction into a proof that
the arithmetized version of the sentence “I am not provable” is not provable (unless
the system in question is inconsistent), so the proof we give in the appendix turns the
justification paradox into a proof that the arithmetized version of the sentence “believing
me is not justified” is not printed out by the machine Tk . However, if COMPETENCE
were true and also verified (by evidence e) to be true, we could know this proof and
so justify the sentence “believing me is not justified” – which would lead to a real
contradiction.

15.4 Lucas and Penrose

As mentioned earlier, the Oxford philosopher John Lucas (1961) claimed that the Gödel
theorem shows that noncomputational processes (processes that cannot in principle be
carried out by a digital computer, even if its memory space is unlimited) go on in
our minds. He concluded that our minds cannot be identical with our brains or with
any material system (because assuming standard physics, the latter cannot carry out
noncomputational processes) and hence that our minds are immaterial. More recently,
a thinker whose scientific work I very much admire, Roger Penrose, used a similar
but more elaborate argument to claim that we need to make fundamental changes in
the way we view both the mind and the physical world. He, too, argued (Penrose,
1994) that the Gödel theorem shows that noncomputational processes must go on in
the human mind, but instead of positing an immaterial soul in which they can go on,
he concluded that these noncomputational processes must be physical processes in the
brain and that our physics needs to change to account for them.
Although neither Lucas nor Penrose is a Chomskian, an important similarity exists
between their notion of our mathematical competence and Chomsky’s notion of com-
petence. Chomsky (1957) has insisted that our competence in his sense includes the
ability to generate arbitrarily long grammatical sentences, together with the proofs of
their grammaticality (which had the form of so-called trees in Syntactic Structures). It
is well known (see, e.g., Hopcroft and Ullman, 1979) that this ability (in the case of
unrestricted languages in the Chomsky-Schützenberger hierarchy) is equivalent to the
ability to simulate the computations of an arbitrary Turing machine. Thus our com-
petence is supposed to include the ability to verify all truths of the form “machine Ti
eventually prints out sentence p.” However, Lucas and Penrose must also be making
this assumption because they assume that our competence is strictly greater than the
ability to prove all the theorems of Peano arithmetic, and all truths of the form “machine
Ti eventually prints out sentence p” are (on suitable arithmetization) theorems of Peano
arithmetic.
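The last step rests on a standard fact, the Σ1-completeness of Peano arithmetic; schematically:

```latex
% Sigma_1-completeness: every true Sigma_1 sentence is provable in PA.
\[
\mathbb{N} \models \sigma \ \Rightarrow\ PA \vdash \sigma
\qquad (\sigma \text{ a } \Sigma_1 \text{ sentence}),
\]
% and "machine T_i eventually prints sentence p" has the Sigma_1 form
% \exists t\, Prints(T_i, t, \ulcorner p \urcorner).
```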
I reviewed Penrose’s (1994) argument, and like many others who studied it, I found
many gaps (Putnam, 1994). Relax, though: I am not going to ask you to think about
the details of Penrose’s argument, and I am not going to provide a listing of the places
at which I believe it goes wrong. It may be that Penrose himself recognized that his
argument has gaps (even though, at the beginning, he promised a “clear and simple
proof”) because he offered a number of arguments that are not mathematical at all but
rather philosophical. (In any case, see Penrose’s Chapter 16.)
The heart of his reasoning goes as follows: let us suppose that the brain of an ideally
competent mathematician can be represented by the program of an acceptable computer
(as defined earlier), a program that generates all and only correct and convincing proofs.
The statement that the program is consistent is not one that the program generates, by
Gödel’s second incompleteness theorem. Therefore, if an ideal human mathematician
could prove that statement, we would have a contradiction. This part of Penrose’s
reasoning is incontrovertibly correct.
It is possible, however, that the program is too long to be consciously comprehended
at all – as long as the London telephone book or even orders of magnitude longer. In
that case, Penrose asks, how could evolution have produced such a program? It would
have to have evolved in parts, and there would be no evolutionary advantage to the
parts.
As I shall explain shortly, I find some things wrong with the notion of “encapsulating
all humanly accessible methods of proof” and hence with the question of whether a
computer could do such a thing. However, even if we accept that notion, it is important
to recall that just this objection has been made to almost every evolutionary explanation
(how could wings have evolved in parts? how could eyes have evolved in parts?). In
case after case, an answer has been found (see Carroll, 2006), but the answer typically
involves generations of researchers, and we are hardly at a stage at which the aspects of the brain that subserve our mathematical abilities are well enough understood even to begin evolutionary study. Still, if you will permit me to offer what are obviously
speculations, following are two ways in which our mathematical abilities have parts
that could have utility apart from being combined in their present package:

1. As Wittgenstein suggested, the basic laws of arithmetic and geometry could, for example,
have first arisen as empirical generalizations that were later hardened into conceptual
truths (for an account of this, see Steiner, 2000). Most of this process would belong to the
history of our cultural rather than our biological evolution, although it would presuppose
the evolution of language and of the skills that underlie, for example, counting.
2. Reflection on their own practices (in any area) is historically an important way in which
humans arrive at new practices, but the biological capacity for such reflection could cer-
tainly have evolved independently of mathematics itself. Metamathematical reflection,
which is what fascinates Penrose, in fact came very late on the mathematical scene. For
several millennia, mathematics was exhausted by the two subjects of arithmetic (or num-
ber theory) and (Euclidean) geometry, and the explosion in what we today consider to be
mathematics, including set theory and its less well known (by nonmathematicians) rival,
category theory, was not driven by either Gödelian reflection or new a priori insights;
instead, new axioms, such as the axiom of choice and the axiom of replacement, were
added on strongly pragmatic grounds (again, a matter of cultural rather than biological
evolution). Only with Gödel’s incompleteness theorems does reflection on mathematical
practice become a way of proving number theoretic propositions that could not have
been proved previously. (The most elementary form of metamathematical reflection is
semantic reflection, illustrated by the argument that all the axioms of Peano arithmetic
are true, and the rules of inference preserve truth; therefore all the theorems of Peano
arithmetic are true, but 1 = 0 is not true and therefore is not a theorem of Peano arith-
metic, i.e., Peano arithmetic is consistent. Tarski’s work shows how this argument can
be formalized in a weak second-order extension of Peano arithmetic, as rendered schematically after this list, and the Gödel theorem shows that the result cannot be proved in Peano arithmetic itself.) Nothing
about this so far, though, suggests any problem for standard evolutionary theory.
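Here is the semantic reflection argument of item 2 in schematic form, assuming a Tarskian truth predicate Tr available in the weak second-order extension:

```latex
\[
\begin{aligned}
&(1)\ \forall\varphi\,\bigl(\mathrm{Ax}_{PA}(\varphi) \rightarrow \mathrm{Tr}(\varphi)\bigr)
&& \text{(all the axioms are true)}\\
&(2)\ \text{the rules of inference preserve } \mathrm{Tr}
&& \text{(truth preservation)}\\
&\therefore\ \forall\varphi\,\bigl(\mathrm{Prov}_{PA}(\varphi) \rightarrow \mathrm{Tr}(\varphi)\bigr),
\ \text{and since } \neg\mathrm{Tr}(\ulcorner 1=0 \urcorner),
&& \neg\mathrm{Prov}_{PA}(\ulcorner 1=0 \urcorner),\ \text{i.e., } \operatorname{Con}(PA).
\end{aligned}
\]
```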

What about iterated reflection? Let L0 be Peano arithmetic. Because we can see, by
the kind of metamathematical reflection just described, that the soundness of L0 (which
we accept) entails the truth of Con(L0 ), we can accept the soundness of L1 , where L1 is
the result of adding Con(L0 ) as an additional axiom to L0 . Continuing this procedure,
we generate a hierarchy of stronger and stronger intuitively acceptable systems, L0 ,
L1 , L2 , L3 , . . . , and so on, ad infinitum.
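In symbols, the finite stages of this hierarchy are simply:

```latex
\[
L_0 = PA, \qquad L_{n+1} = L_n + \operatorname{Con}(L_n) \qquad (n = 0, 1, 2, \ldots),
\]
% each stage is intuitively acceptable because the soundness of L_n
% entails the truth of Con(L_n).
```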
Turing (1939) proposed to extend this series into the transfinite, using notations for
computable ordinals. At limit ordinals λ, the associated Lλ is the union of all the earlier systems Lβ with β < λ. (The notations for the ordinals are indispensable in defining the sentence Con(Lλ) precisely.) What if our idealized mathematical competencies were represented
by the totality of such Lλ with λ < ϕ, for some computable ordinal ϕ? (ϕ would have
to be a computable ordinal such that no recursive well ordering of that order type could
be seen by a human mathematician to be a well ordering or proved to be a well ordering
from below.) Such a totality might well be longer than the London telephone book, but
it would be constructed from L0 , a system that is intuitively simple, using a method of
extension that is also intuitively simple. (As I just noted, the ability to reflect on our
own practices, which is what enabled us to arrive at that method of extension and also
enables us to see that various computable orderings are well orderings – normally, by
seeing that all their initial segments are well orderings – could well have developed
independently of L0 .)
However, what if the ordinal logics procedure Turing described didn’t break off at
a computable ordinal? Feferman (1962) shows that with stronger reflection principles,
one can obtain all true number-theoretical propositions on a suitable path through the
computable ordinal notations. In that case, Penrose would be right: our mathematical
competence would exceed the powers of any Turing machine. My point is that Penrose
cannot exclude the possibility that our powers do break off at some computable ordinal.
Ah, but Penrose will certainly ask, what about us determines the limits of metamath-
ematical reflection? What determines where we break off? Here I have to say that I
don’t think there is a precise limit to what human mathematicians can do via reflection.
The very notion of an ideal mathematician is too problematic for us to speak of a
determinate limit here – a point to which I shall return shortly. Different individuals
have different limits to what they can do via metamathematical reflection (and the
possibilities of metamathematical reflection have barely been discovered to date). I
would say that what the limits are in the case of any given individual is obviously an
empirical question, and not necessarily a very interesting or important one.
One thing I will grant Penrose, however, is that I do find it mysterious that we have
the mathematical abilities we do. The simple fact that we can intuitively understand
the notion that the natural number sequence has no end amazes me more than that we
can Gödelize any given consistent formal system. I also find it mysterious that we have
the natural-scientific abilities, the linguistic abilities, the aesthetic abilities, and so on,
that we do. If a change in our physical, biological, and neurological theories makes it
possible to explain them, then, of course, I would welcome it – we all would – but I
do not believe that Penrose has given compelling reasons to think that our brains must
have noncomputable powers and that we must seek a scientific theory to explain them.
To see what we can learn about the human mind, or at least how we should (or rather
shouldn’t) think about it, from the Gödel theorem, I want now to raise two objections
against the very question that Penrose asks – the question of whether the set of theorems
that an ideal mathematician can prove could be generated by a computer.
My first objection – and this is a point that I want to emphasize – is that the notion of
simulating the performance of a mathematician is highly unclear. Whether it is possible
to build a machine that behaves as a typical human mathematician is a meaningful,
empirical question, but a typical human mathematician makes mistakes. The output
of an actual mathematician contains inconsistencies (especially if we are to imagine that she goes on proving theorems forever, as the application of the Gödel theorem
requires), so the question of proving that the whole of this output is consistent does
not even arise. To this, Penrose’s reply seems to be that the mathematician may make
errors, but she corrects them on reflection.
As I pointed out before, even if (via this or some other philosophical argument) we
know that the English sentence “our mathematical competence is consistent” is true,
this is very different from knowing an arithmetized version of the claim that this is the
case. This latter would require knowing which Turing machine simulates our mathe-
matical competence. (That we cannot know a similar fact about our natural-scientific
competence by empirical investigation was what our anti-Chomskian incompleteness
theorem showed.)
My second objection is that to confuse these questions is to miss the normativity
of the notion of ideal mathematics. The notion of ideal mathematics is the notion of
a human mathematician who always behaves (in his mathematical work) as a mathe-
matician should (or ideally should). To the extent that we can succeed in describing
human beings as physical systems, physics can say (at best) how they will behave,
performance errors and all. However, physics is not in the business of telling us how
human beings should behave.

15.5 A Caution against a Widespread but Naive Argument for Algorithms in the Brain

It is not my aim merely to criticize those who make the mistake of thinking that the
Gödel theorem proves that the human mind or brain can carry out noncomputational
processes. That Lucas and Penrose have failed to prove their claims about what the
Gödel theorem “shows about the human mind” is widely recognized. At the same time,
however, a view that may seem to be at the opposite end of the philosophical spectrum
from the Lucas-Penrose view seems to me to be vulnerable to the two objections I
just made. I refer to the widespread use of the argument, in philosophy as well as in
cognitive science, that whenever human beings are able to recognize that a property
applies to a sentence in a “potentially infinite” (i.e., a fantastically large) set of cases,
an “algorithm” must be in their brains that accounts for this, and the task of cognitive
science must be to describe that algorithm. I think that the case of mathematics shows
that that cannot be a universal truth, and my criticism of Penrose may enable us to see
why we shouldn’t draw any mystical conclusions about either the mind or the brain
from the fact that it isn’t a universal truth.
First, though, a terribly important qualification is in order. I have no doubt that
human beings have the ability to employ recursions (another word for algorithms)
both consciously and unconsciously and that cognitive science should explain (and in
many cases does successfully explain) recognition abilities by appealing to the notion
of an algorithm. The work of Chomsky and his school, in particular, shows how our
ability to recognize the property of grammaticality might be explainable in this way.
Why, then, do I say that the case of mathematics shows that it cannot be a universal
truth that whenever human beings are able to recognize that a property applies to a
sentence in a potentially infinite (i.e., a fantastically large) set of cases, there must be
an algorithm in their brains that accounts for it?
Well, just as it is true that we can recognize grammaticality in a fantastically large
set of cases, it is true that we can recognize a statement’s being proved in a fantastically large
set of cases. As we saw (via the anti-Chomskian incompleteness theorem), Gödel’s
techniques can be used to show – via a very different argument from Penrose’s – that
if an algorithm accounts for this ability, we are not going to be able to verify this.
However, I think it would be wrong just to stop with this conclusion. It would be wrong
because talk about a potentially infinite set of cases makes sense only when we can
verify the presence of an algorithm. Why do I say this?
Think for a minute about the mathematician about whom Penrose imagines he is
talking. If the mathematician could really recognize a potential infinity of theorems,
then he or she would be able to recognize theorems of arbitrary finite length, for
example, theorems too long to be written down before all the stars cool or are sucked
into black holes. However, any adequate physics of the human brain will certainly
entail that the brain will disintegrate long before it gets through trying to prove any
such theorem. In short, in fact, the set of theorems any physically possible human
mathematician can prove is finite.
Moreover, it is not really a set because sets, by definition, are two valued: an item
is either in or out. However, the predicate prove is vague. We can, of course, make it
precise by specifying a fixed list of axioms and rules to be used, but then the Gödel
theorem does show that in a perfectly natural sense, we can prove a statement that
is not derivable from those axioms by those rules. This may drive us to say that “by
‘prove’ we mean prove by any means we can ‘see’ to be correct.” However, this uses
the essentially vague notion “see to be correct.” In short, there isn’t a potentially infinite
set of theorems a mathematician can prove concerning which we can ask, is it recursive
or nonrecursive? What we do have is a vague, finite collection of theorems that a
real (flesh-and-blood) mathematician can prove. Vague, finite collections are neither
recursive nor nonrecursive; those concepts apply only to well-defined sets.
It may be objected, though, that Chomsky taught us that we can – and should –
idealize language users by imagining that (like ideal computers) they can go on gen-
erating grammatical sentences forever, sentences of arbitrary length. Yes, he did, but
the idealization he showed us how to make is only precise because it corresponds to
a well-defined algorithm. If we wish the question “what can an ideal mathematician
prove?” to be a precise one, we must specify just how the mathematician’s all-too-finite
brain is to be idealized, just as Chomsky did. If we do this by specifying an algorithm,
then, of course, the result will be computational.
If this is right, then we should not allow ourselves to say that the set of statements
that a mathematician can prove is a potentially infinite set. We should say only that
there are fantastically many statements a mathematician – an actual mathematician –
can prove and let it go at that. It does not follow from this that there is an algorithm such
that everything an ideally competent mathematician would (should?) count as proving
a mathematical statement is an instance of that algorithm, any more than it follows from
the fact that there are fantastically many jokes that a person with a good sense of humor
can see to be funny that there is an algorithm such that everything a person with an
ideal sense of humor would see to be funny is an instance of that algorithm.

Appendix

As earlier, we assume that Tk is a machine that perfectly represents our scientific competence in the sense that for any proposition p, and for any evidence u (expressible
in the language that Tk uses for expressing propositions and expressing evidence), Tk
sooner or later prints out Justifiedu (p) if and only if it is justified to accept p when u
is our total relevant evidence. Let e be the Gödel number of a sentence that describes
evidence that justifies accepting COMPETENCE. Note that k and e are constants
in the following argument. As before, COMPETENCE also asserts that whatever
hypotheses and evidence we can describe in an arbitrary natural language can also be represented in Tk’s language, that is, that that language has the full expressive power of the Mentalese that Chomskians posit. Using Gödel’s technique, we can
construct a sentence GÖDEL in the language of Peano arithmetic, and hence in the
language of Tk (because that language is capable of expressing all humanly accessible
science), that says that Tk never prints out Justifiede (GÖDEL). Therefore GÖDEL has
the following property:
It is justified to believe GÖDEL on evidence e if and only if it is justified to believe (on
that same evidence) that the machine Tk never says (even if it is allowed to run forever)
that it is justified to believe GÖDEL on evidence e.

We assume two axioms concerning the notion of justification:


Axiom 1: It is never both the case that it is justified to believe p on evidence e
and justified to believe ¬p on evidence e.
Remark We know various statements of science that are, at a certain stage, justified
on all the available relevant evidence but are subsequently rejected on the basis of
new relevant evidence, possibly within the framework of new theories. Thus a Turing
machine that simulates our scientific competence as it proceeds chronologically may
list inconsistent statements because our evidence is not the same at different times.
However, Tk is not supposed to mimic the diachronic behavior of our brains but
is intended simply to represent a recursive enumeration of the sentences that it is
within the competence of the “scientific faculty” – which Chomsky claims to be
computational – to accept as justified on any evidence expressible in Mentalese. Thus
axiom 1 represents the requirement that the scientific faculty, when acting in accordance
with its competence, not regard both p and ¬p as justified on the same evidence. Even
this has been questioned by a friend, who writes:
I suppose that if an agent had purported justifications of a sentence and its negation, he
would realize that at least one of these justifications is bogus and should be withdrawn.
But the subject may not realize that he has justified a given sentence and its negation. To
give our agents the ability to tell whether their (justified) beliefs are consistent is to give
them a nonrecursive ability.

My response is that it is part of the notion of justification that justified beliefs are
not flatly contradictory (i.e., of the form p&¬p). I believe that this is something on
which all epistemologists would agree. If there is some other notion of justification
with which both p and ¬p can sometimes be justified on the very same evidence, I am
sure that is not the notion of justification either Chomsky or Penrose or Lucas or I have
in mind. (The ability to tell whether two beliefs are flatly contradictory is a primitive
recursive ability.)
Because COMPETENCE is the statement that Tk perfectly represents our compe-
tence, we also have the following:
Axiom 2: If Justifiede (the machine Tk eventually prints out Justifiede (p))
and Justifiede (COMPETENCE), then Justifiede (p). (“Justifiede (p)” is, of course,
simply the sentence in the language of Tk – which was assumed to be a formalized
version of Chomsky’s Mentalese – that expresses the statement that “it is justified
to believe the sentence p on evidence e.” Note that by well-known results of Gödel
and Turing, “the machine Tk eventually prints out Justifiede (p)” is expressible
by a sentence of Peano arithmetic.)
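Abbreviating “Justifiede(s)” as Je(s) and “the machine Tk eventually prints out s” as Printsk(s) (these shorthands are mine, introduced only for compactness), the two axioms read:

```latex
\[
\begin{aligned}
\text{Axiom 1:}\quad & \neg\bigl(J_e(p) \wedge J_e(\neg p)\bigr)\\
\text{Axiom 2:}\quad & J_e\bigl(\mathrm{Prints}_k(\ulcorner J_e(p) \urcorner)\bigr) \wedge J_e(\textsc{Competence})
\;\rightarrow\; J_e(p)
\end{aligned}
\]
```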
The anti-Chomskian incompleteness theorem is as follows:
“COMPETENCE” can’t be both true and justified.

To guide the reader, following is an outline of the proof I shall give (in both parts of the
proof, we will assume that COMPETENCE is both true and justified by the evidence e).
Outline of Part I We will prove (by assuming the opposite and deriving a con-
tradiction) that it is not the case that the machine Tk eventually prints out Justifiede
(GÖDEL) (i.e., the machine Tk , which we assumed to enumerate correctly the sen-
tences that our epistemic competence would tell us are justified on arbitrary evidence,
does not generate the Gödelian sentence on the evidence that justifies our acceptance
of COMPETENCE).
Outline of Part II The conclusion of part I was as follows:
(a) It is not the case that the machine Tk eventually prints out Justifiede (GÖDEL). Because
we can know this proof if we have empirically justified COMPETENCE, and a proof
is a justification, it follows immediately that
(b) Justifiede (the machine Tk never prints out Justifiede (GÖDEL)).
Then the rest of part II will derive a contradiction from (b), thus showing that the
assumption that COMPETENCE is both true and justified must be false. The proof in
full follows.
Part I of the Proof Assume: The machine Tk prints out Justifiede (GÖDEL)
(i) Justifiede (GÖDEL). Reason: For the reductio proof of our incompleteness theo-
rem, we are assuming COMPETENCE. By assumption, the machine Tk prints out
Justifiede (GÖDEL). So by COMPETENCE, Justifiede (GÖDEL).
(ii) It is justifiede to believe that “the machine Tk eventually prints out Justifiede (GÖDEL).”
Reason: As pointed out earlier, on Chomsky’s idealized notion of competence (and also on the notion of our mathematical competence assumed by Lucas and Penrose), it is within our competence to recognize all truths of the form “machine Ti eventually prints out sentence p,” and, by assumption, “the machine Tk prints out Justifiede (GÖDEL)” is such a truth, so it is one that our scientific faculty is able to prove. Because a proof is a justification, we have Justifiede (the machine Tk eventually prints out Justifiede (GÖDEL)); that is, we are justifiede in believing that the machine Tk eventually prints out Justifiede (GÖDEL).
(iii) We are justified in believing “the machine Tk does not eventually print out Justifiede (GÖDEL).” Reason: Recall the defining property of GÖDEL:

It is justified to believe GÖDEL on evidence e if and only if it is justified to believe (on that same evidence) that the machine Tk never says (even if it is allowed to run forever) that it is justified to believe GÖDEL on evidence e.

Because this equivalence is known to us, and, by (i), we are justifiede in believing the left side, we are justifiede in believing the right side; that is, we are justifiede in believing “the machine Tk does not eventually print out Justifiede (GÖDEL).”

However, (ii) and (iii) violate our consistency axiom (axiom 1). Thus (still assuming
that COMPETENCE is both true and justified on evidence e) we conclude that the
assumption of our subproof is false: it is not the case that the machine Tk prints out
Justifiede (GÖDEL). However, this is reasoning that we can easily go through if we have
discovered COMPETENCE to be true, so without any additional empirical evidence, we
have justifiede that “the machine Tk does not eventually print out Justifiede (GÖDEL).”
To complete the proof of our incompleteness theorem, we therefore now need a
proof that this, too, leads to a contradiction. That proof follows:
Part II of the Proof We have, in part I, proved the following:

(i) Tk does not eventually print Justifiede (GÖDEL). From this, by COMPETENCE (Tk prints Justifiede (GÖDEL) if and only if it is justifiede to believe GÖDEL), we have
(ii) It is not the case that Justifiede (GÖDEL). However, we have from before (B) Justifiede (GÖDEL) if and only if Justifiede (Tk does not eventually print Justifiede (GÖDEL)). Therefore
(iii) It is not the case that Justifiede (Tk does not eventually print Justifiede (GÖDEL)). However, from (i) and the obvious rule of inference that proving X allows writing Justifiede (X), we have
(iv) Justifiede (Tk does not eventually print Justifiede (GÖDEL)).

Now (iii) and (iv) are a contradiction. This completes the proof of our anti-Chomskian incompleteness theorem.
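Compressed into the same abbreviations (with G for GÖDEL and Prints for “Tk eventually prints out Justifiede(G)”), the whole reductio runs:

```latex
\[
\begin{aligned}
\text{Part I:}\quad & \mathrm{Prints} \ \Rightarrow\ J_e(\textsc{G}) \wedge J_e(\mathrm{Prints});
\quad J_e(\textsc{G}) \ \Rightarrow\ J_e(\neg\mathrm{Prints});
\quad \bot\ \text{by Axiom 1. Hence } \neg\mathrm{Prints}.\\
\text{Part II:}\quad & \neg\mathrm{Prints} \ \Rightarrow\ \neg J_e(\textsc{G}) \ \Rightarrow\ \neg J_e(\neg\mathrm{Prints});
\ \text{but proving } \neg\mathrm{Prints} \text{ gives } J_e(\neg\mathrm{Prints});\quad \bot .
\end{aligned}
\]
```

So COMPETENCE cannot be both true and justified on e, which is the theorem.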

References

Carroll, S. B. (2006). The Making of the Fittest: DNA and the Ultimate Forensic Record of Evolution.
New York: W. W. Norton.
Chomsky, N. (1957). Syntactic Structures. The Hague: Mouton. [2nd ed. Berlin: Mouton de Gruyter, 2002.]
Edwards, P., ed. (1967). Encyclopedia of Philosophy. Vol. 3. New York: Macmillan.
Feferman, S. (1962). Transfinite recursive progressions of axiomatic theories. Journal of Symbolic
Logic, 27, 259–316.
Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173–98. [English trans. in J. van Heijenoort, ed. (1967). From Frege to Gödel: A Source Book in Mathematical Logic. Cambridge, MA: Harvard University Press, pp. 596–616. Repr. with facing English trans. as “On formally undecidable propositions of Principia Mathematica and related systems I” in Collected Works, vol. 1 (1986), pp. 145–95.]
Hopcroft, J. E., and Ullman, J. D. (1979). Introduction to Automata Theory, Languages, and Computation. Reading, MA: Addison-Wesley.
Lucas, J. R. (1961). Minds, machines, and Gödel. Philosophy, 36, 112–27. [Repr. in A. R. Anderson, ed. (1964). Minds and Machines. Englewood Cliffs, NJ: Prentice Hall.]
Penrose, R. (1994). Shadows of the Mind: An Approach to the Missing Science of Consciousness.
Oxford: Oxford University Press.
Putnam, H. (1985). Reflexive reflections. Erkenntnis, 22(1), 143–54. [Collected in H. Putnam (1994).
Words and Life. Cambridge, MA: Harvard University Press, pp. 416–27.]
. (1994). Review of Shadows of the Mind. New York Times Book Review, November 20. [Repr.
H. Putnam (1995). Review of Shadows of the Mind. Bulletin of the American Mathematical Society,
32 (3), 370–73.]
Quine, W. V. O. (1966). The Ways of Paradox and Other Essays. New York: Random House.
Steiner, M. (2000). Mathematical intuition and physical intuition in Wittgenstein’s later philosophy.
Synthese, 125(3), 333–40.
Tarski, A. (1936). The concept of truth in formalized languages. [In Polish. The standard English translation is collected in A. Tarski (1983). Logic, Semantics, Metamathematics. 2nd ed. Ed. J. Corcoran. Indianapolis: Hackett. 1st ed. trans. J. H. Woodger, Oxford, 1956.]
Turing, A. (1939). Systems of logic based on ordinals. Proceedings of the London Mathematical Society, Series 2, 45, 161–228.
