Living Extremely Flat: The Life of an Automaton;
John von Neumann’s Conception of Error of (in)Animate Systems

Giora Hon
Department of Philosophy, University of Haifa, Israel
e-mail: hon@research.haifa.ac.il

Proofreading or “editing” has been suggested in DNA replication . . . but a detailed description of its chemical kinetic basis is lacking. The problem is thus to find a simple quantitative model containing the essential features of a proofreading scheme. . . . These circumstances allow the construction of a simple mechanism of “kinetic proofreading.”1

John J. Hopfield, 1974

Half a century ago, in January 1952, in a lecture delivered at the California Institute
of Technology, John von Neumann (1903–1957) envisaged the synthesis of reli-
able organisms from unreliable components. This was not a science-fiction talk,
calling for imaginative creations in the spirit of Ridley Scott’s Blade Runner. It
was a carefully argued scientific paper in which von Neumann sought to prove the
existence of a self-reproducing universal computer. The paper constitutes an impor-
tant contribution to the consolidation of the theory of automata. Von Neumann did
not conceive of cellular automata as mathematical objects for pure investigation;
rather, he considered the new algorithm a means for treating in detail the problem of machine self-reproduction.2 The realization that cellular automata can
demonstrate that “arbitrarily complicated mathematics could be performed within a
system whose basic organization is thoroughly rudimentary,”3 is a testimony to the
success of von Neumann’s idea. Indeed, his construction shows that “a small set of
local rules acting on a large repetitive array can result in a structure with very com-
plex behavior. The von Neumann construction thus immediately suggests how an
organ with behavior as complex as the brain’s can be specified from limited genetic
information.”4
To get the basic terms clear, cellular automata are “abstract dynamical systems
that play a role in discrete mathematics comparable to that played by partial differ-
ential equations in the mathematics of the continuum.”5 These dynamical systems
consist of arrays of computing elements characterized by discreteness in space, time, and state values; their architecture and design initially followed studies of the nervous systems of mammals, specifically the fact that “the cerebral cortex is composed of a large num-
ber of local neural assemblies that are iterated throughout its extent.”6 An essential
feature of this computational technique is that the functions computed are functions
of the internal states of the computing elements and the inputs from neighboring ele-
ments. Thus the role of a cellular operation is to transform an array of data displayed
in a discrete space at time t, into an array of data at time t + 1. At this time each ele-
ment of the array has a value which is determined not only by its initial state, but also
by the values of its nearest neighbors. Operations are assumed to occur in discrete
time with each step in time being a generation, iteration, or cycle. Further, changes
in all computing or processing elements are taken to occur simultaneously. The action,
in other words, of all the elements in a cellular array is synchronous. This action is
governed by a transition rule or transform that uses as its independent variables the
states of the particular computing element and its neighbors. Finally, the transition
rule is considered local (no action-at-a-distance), uniform (the same rule applies to
all sites at all times) and deterministic: any given configuration of the states of the
elements of the array has just one successor configuration for a given transform.
Note crucially that no centralized authority, so to speak, governs the evolution of
the system, which, on the contrary, evolves through local interactions between a single
cell and its neighboring cells. Thus, no general predictive procedure is possible,
that is, there is neither an analytical expression nor a short-cut in the computational
process—the system simply has to exhaust its runs. In other words, the evolution of
such systems effectively defines the most efficient simulation of their behavior.7
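
To make the scheme concrete, the following minimal sketch in Python (my illustration only; the one-dimensional binary lattice, the periodic boundary, and Wolfram’s rule 110 as transition table are arbitrary choices, not von Neumann’s construction) implements a synchronous, local, uniform, deterministic update:

    # A one-dimensional cellular automaton: discrete space, time, and states.
    # The rule is local (nearest neighbors only), uniform (the same table
    # applies to every site at every time), and deterministic (each
    # configuration has exactly one successor).

    def step(cells, rule):
        """Advance the whole array from time t to t + 1 in one synchronous sweep."""
        n = len(cells)
        return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    # Illustrative transition table: Wolfram's rule 110, written out
    # explicitly as a (left, self, right) -> next-state mapping.
    RULE_110 = {
        (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
        (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
    }

    cells = [0] * 30 + [1] + [0] * 30    # configuration at t = 0
    for t in range(20):                  # each pass is one generation
        print("".join(".#"[c] for c in cells))
        cells = step(cells, RULE_110)    # all cells change simultaneously

Note that the loop simply runs for a fixed number of generations: there is no halting state and, as remarked above, no short-cut; the evolution is obtained only by letting the system exhaust its runs.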
For further clarification one may draw a comparison between cellular automata
and the traditional model of computation, the Turing machine. The former scheme
of calculation is parallel while the latter is serial. Cellular automata have no “head”
to “read” a sign and put it in relation to a subsequent sign; rather, the computation
proceeds in parallel across the entire lattice—the multiple cells that comprise the
automaton. Moreover, cellular automata have no halting states and therefore it is
difficult to separate in such schemes the dynamics of the system from the compu-
tation.8 This means that the dynamics becomes an expression of the computation
and consequently the simulation that is expressed in the computation can be
seen—in some cases, directly—in the dynamics of the automaton.
Notice that in the Turing machine there is a clear separation between on the one
hand the structural part of the computer which is fixed, and on the other hand the
data which are variable and do not belong to the material structure of the computer.
In other words, the computer cannot operate on its own matter; it cannot extend or
modify itself, or build other computers. This is not the case with cellular automata
where, as we have seen, it is difficult to distinguish between the dynamics of the
system and the computation itself. This suggests the following characterization of
these two different schemes: while the Turing machine is predominantly spatial, the
cellular automata are essentially temporal.9
These features make cellular automata conducive to simulating complex dynam-
ics and especially behaviors of living systems, since little modeling is required.
The algorithm of cellular automata is therefore useful for simulating real complex
systems such as physical fluids, molecular dynamical systems, natural ecologies,
weather, neural networks, military command and control networks, economy and
many others.10 Though von Neumann was known as a leading physicist and math-
ematician, he was also involved—in his capacity as an advisor to many government
agencies—in many of these fields characterized by dynamical complexity. In this
context he directed his attention to a reductionistic explanation of certain aspects of
biology. Explicit physical considerations are lacking in his work on cellular autom-
ata. He recognized and indeed emphasized a central feature of cellular automata,
namely, that unlike in the rigid Turing machine, here, in cellular automata,
the distinction between the computing devices and data is blurred: construction and
computation are two possible modes of activity of this algorithm. In other words,
the plasticity of cellular automata—so characteristic of the living system—caught
the attention of von Neumann.
The operational success of von Neumann’s theory of cellular automata amounts
to a proof that the following possibility is viable, namely, the successful abstrac-
tion of “the set of primitive logical interactions necessary for the evolution of the
complex forms of organization essential for life.”11 Put differently, the physiologi-
cal fact that the cerebral cortex consists of a very large number of local neural
assemblies that are iterated throughout its extent, is successfully represented by an
array of computing elements and their rules of transition. The success of the theory
is in showing the possibility that such a structure of numerous simple elements
is capable of complex behavior, as the brain amply exhibits, “without the need to
invoke region-to-region variability, long range interactions, stochastic components,
or mysticism.”12
The success of this proof of possibility should not surprise us since we have
known the answer from the outset: living systems reproduce themselves, and they
consist of some basic discernable elements. The presupposition that these systems
are biochemical machines leads to the expectation that they should be describable
by some algorithm. The success is then in finding an algorithm that captures these
features which the theory of cellular automata can simulate.13
My interest in cellular automata does not lie, however, in the success of this
computing technique in simulating life phenomena. Rather, I am interested in the
approach that von Neumann took which is rarely elaborated in the literature. Von
Neumann commenced his paper by stating that its subject matter is

the role of error in logics, or in the physical implementation of logics—in automata-synthesis. Error is viewed, therefore, not as an extraneous and misdirected or misdirecting accident, but as an essential part of the process under consideration—its importance in the synthesis of automata being fully comparable to that of the factor which is normally considered, the intended and correct logical structure.14

This instructive statement is placed up-front in the introductory section of von Neu-
mann’s essay of 1956: “Probabilistic Logics and the Synthesis of Reliable Organ-
isms from Unreliable Components.” It is a significant opening remark. It puts on a
par the negative concept of error with positive elements of knowledge. To formulate

it differently and in practical terms, von Neumann noted that computing structures
require reliability and therefore the occurrence of error should be addressed head-on
and indeed at the outset of the project. The complexity of the brain, its dexterous
performance and robustness, served for him as the prime example which points not
only towards possible successful designs, but also to the treatment of failures.15 The
conception of failure of machines and living systems is at the center of this paper.
To anticipate my findings, structurally we may benefit enormously from the anal-
ogy between living systems and cellular automata, but the nature of error, or failure,
in computing systems transpires to be starkly different from failures in the living
systems. Put another way, the occurrence of error points to differences rather than
to similarities between living systems and cellular automata. A brief philosophical
analysis of the notion of error in general will facilitate a clear understanding of these
differences.
Von Neumann expressed dissatisfaction with the way error had been treated:
“unsatisfactory and ad hoc” are his words. He thought that,

error should be treated by thermodynamical methods, and be the subject of a thermodynamical theory, as information has been, by the work of L. Szilard and C. E. Shannon.16

He then admitted that his work fell short of this conception, but added that he
intended his discussion of error to contribute toward this approach.
I will not pursue this physical approach to error; rather, I will direct attention to
the core of the problem, to what I call the epistemic phenomenon of error. Against
this background I will examine the striking difference between error of inanimate
systems and that of the living. I shall conclude by suggesting that this difference
may have consequences for the conception of experimentation in the biological
domain.
I begin then with the epistemic phenomenon of error. According to David Hume
(1711–1776) there are seven different kinds of philosophical relations: “resem-
blance, identity, relations of time and place, proportion in quantity or number,
degrees in any quality, contrariety, and causation.” Hume divides these relations
into two classes. The first class comprises those relations that depend entirely on the
ideas which we compare, and the second those which may be changed without any change in the ideas. To the former belong the four relations: resemblance, contrariety, degrees in any quality, and proportions in quantity or number; and to the latter the
remaining three relations: identity, the situations in time and place, and causation.17
Having presented these relations and classified them in these two groups, depend-
ing on the nature of the underlying idea, Hume states:

All kinds of reasoning consist in nothing but a comparison, and a discovery of those rela-
tions, either constant or inconstant, which two or more objects bear to each other.18

Hume italicized “comparison” and placed this discussion of philosophical relations and their underlying notion of comparison in his analysis of knowledge as part of the first book of his treatise on human nature, that is, Of the Understanding.
We need not enter into an argument with Hume about the kinds of philosophical
relations and their classification; rather, at stake here is comparison—a central pro-
cedure for attaining understanding. Taking mathematics as the paramount example
for his claim, Hume observes that we can carry on in algebra and arithmetic “a chain
of reasoning to any degree of intricacy, and yet preserve a perfect exactness and
certainty.” This, he explains, is due to the fact that in this kind of reasoning we
possess

a precise standard, by which we can judge of the equality and proportion of numbers; and
according as they correspond or not to that standard, we determine their relations, without
any possibility of error.19

From this analysis we may surmise that for error to be identified as such, a context
must be established in which procedures of comparison could be developed and
indeed applied. Such procedures logically require that a standard must be available
to allow for the comparison to proceed so that an error could be determined. In other
words, a fundamental characteristic of error is the recognition of a discrepancy in a
comparative procedure. It is essential to underline “recognition” since otherwise an
error would not be acknowledged as such.
What do we claim to know when we identify an error? We discern a divergence
from a certain standard—a discrepancy. I have suggested elsewhere that the nature
of the discrepancy and its reason may shed light on the object under study.20 Fol-
lowing up this approach, my goal here is to draw consequences from the contrast
between discrepancies identified in inanimate systems that are designed to simulate
live organisms on the one hand, and claims of errors pertaining to living systems on
the other. Von Neumann’s pioneering papers on computing machines and cellular
automata present a rich case for such a study.
In his seminal paper of 1946, “On the principles of large scale computing
machines,” von Neumann, together with Herman H. Goldstine, addressed the broad
issue: “to what extent can human reasoning in the sciences be more efficiently
replaced by mechanisms?”21 Von Neumann and Goldstine observed that in highly
complex fields that are based on non-linear partial differential equations such as
fluid dynamics there had arisen a computational gap that generations of mathemati-
cians had not succeeded in bridging. According to the authors, most experiments
in these fields are “of a quite peculiar form”: they are designed not to verify pro-
posed theories but to replace a computation from an unquestioned theory by direct
measurements. Wind tunnels, for example, are used as computing devices of the
so-called analogy type to integrate the non-linear partial differential equations of
fluid dynamics. The construction of large scale computing machines was partially
motivated by this impasse. As the authors put it: “many branches of both pure and
applied mathematics are in great need of computing instruments to break the pres-
ent stalemate created by the failure of the purely analytical approach to non-linear
problems.”22
The machines which von Neumann and Goldstine considered belong to the digi-
tal, or counting, type. These machines treat real numbers as aggregates of digits and
they are distinct from the analogical, measurement type. In analogical machines a
real number is treated as a physical quantity, e.g., the intensity of an electrical cur-
rent or the voltage of an electrical potential. The machines of the analogical type
tend to be of a one-purpose character, specialized for a given task. This stands in
contrast to the digital machines which are essentially all-purpose.23
One aspect of the design of the digital machines which von Neumann and his
collaborator set out to address right at the outset was the question of stability; the issue
of error is at the center of this discussion.24 For my argument it is important to
note that von Neumann analyzes the issue of error in computing machines before
he discusses “the input-output organs”, “the memory organ” and “the coding of
problems”—the sections that in the paper follow the discussion on error. Thus, the
issue of error is presented before attention is given to the architecture and the under-
lying principles of these machines.
Von Neumann discerns two principal types of error. The first type pertains
to malfunctions: “the device functions differently from the way in which it was
designed and relied on to function.”25 Von Neumann adds that this type has its
counterpart in human mistakes, both in planning and in actual human comput-
ing. Malfunctions are quite unavoidable in machine computing and they require
checking. However vital this form of checking to the running of computing
machines, von Neumann chooses not to be concerned with it. Rather, he focuses
on the other type of error which arises even when the machine works perfectly
well according to plan. Under this heading von Neumann distinguished three
kinds of error.26
The first kind has to do with the fact that all data of empirical origin are approxi-
mate. Any uncertainty of the input, be it associated with the data or with the back-
ground theory, that is, approximate differential equations, will reflect itself as an
uncertainty of the results. Based on well-known mathematical analyses, it could be
shown that the size of the divergence due to this source depends on the size of the
input errors and the degree of continuity of the mathematics involved. Von Neu-
mann remarks that this kind of error pertains to any application of mathematics to
nature and therefore is not peculiar to the computational approach. He therefore did
not pursue it further.27
The second kind of error under the heading of functioning as planned, deals with
the specific nature of digital computing. All continuous mathematical procedures,
like integrations of differential equations, must be replaced in digital computing by
elementary mathematical operations, that is, they must be approximated by a suc-
cession of the basic arithmetical operations of addition, subtraction, multiplication
and division. The resulting deviation from the exact result is due therefore to trun-
cation errors that express the discrepancy between the original continuous problem
and its digital transform. However, von Neumann observes that errors of this kind
can be kept under control by familiar mathematical methods and are usually—so he
remarks—not the main source of trouble. He therefore “passes them up, too,” as he
comments, at least for the time being.28
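
The point can be illustrated with a small computation (my example, not von Neumann’s, with Euler’s method standing in for the succession of elementary arithmetical operations): the continuous problem dy/dt = y with y(0) = 1 is replaced by repeated additions and multiplications, and the truncation error, that is, the discrepancy from the exact value y(1) = e, shrinks as the discretization is refined.

    import math

    def euler(steps):
        # Integrate dy/dt = y from t = 0 to t = 1 using only elementary
        # arithmetic: the digital transform of the continuous problem.
        y, h = 1.0, 1.0 / steps
        for _ in range(steps):
            y += h * y
        return y

    for steps in (10, 100, 1000):
        approx = euler(steps)
        print(f"{steps:5d} steps: y(1) = {approx:.6f}, "
              f"truncation error = {math.e - approx:.6f}")

Refining the discretization by a factor of ten shrinks the discrepancy by roughly the same factor; this is the sense in which such errors can be kept under control by familiar mathematical methods.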
The third kind of error, the last one in von Neumann’s enumeration, is the most
crucial. It has to do with the fact that, irrespective of whether the input is accurate or
approximate, “no machine, no matter how it is constructed, is really carrying out the
operations of arithmetics in the rigorous mathematical sense.” And he continues,

there is no machine in which the operations that are supposed to produce the four elemen-
tary functions of arithmetic, will really all produce the correct result, i.e. the sum, differ-
ence, product or quotient which corresponds precisely to those values of the variables that
were actually used.29

In analogical machines this is the result of representing the variables by physi-
cal quantities and the arithmetical operations or any other operation by physical
processes. Such processes are invariably affected by uncontrollable uncertainties
and physical fluctuations inherent in any physical instrument. Von Neumann resorts
here to a term which he borrowed from communication engineering that has since
then gained currency. “These operations,” he writes, “are contaminated by the noise
of the machine.”30 Analogical machines always include in their performance of an
arithmetic operation an unknown quantity which represents the random noise of the
mechanism of the physical processes involved. It is paramount for the success of the
operation to minimize this quantity.31
In digital machines the reason for this kind of error is different. A digital machine
must work with a definite number, which may contain many digits, but ultimately it
must have a fixed, finite value. The capacity of the machine determines this value and
thus its limit. Arithmetical operations conducted on a given number will normally
result in more digit numbers than the machine would be able to represent with its
own finite structure. A new term is therefore introduced for this discrepancy, known as the round-off error. Although this error is not a random variable and can in fact be determined in every particular instance, its determination is so complicated and its variation throughout its instances in a given calculation so irregular that it can be considered, to a high degree of approximation, a random variable.32 Von Neumann therefore
refers to this third kind of error in both analogical and digital machines as noise,
and observes that, “there is ample evidence to confirm the view, that in complicated
calculations . . . this source of error is the critical, the primarily limiting factor.”33
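
A minimal sketch of this behavior (my illustration; the constant 0.1 is chosen only because it has no exact finite binary representation):

    from fractions import Fraction

    acc = 0.0
    for k in range(1, 9):
        acc += 0.1        # one machine operation, rounded to finite precision
        # exact accumulated round-off of the stored result
        drift = float(Fraction(acc) - Fraction(k, 10))
        print(f"after {k} additions: accumulated round-off = {drift:+.2e}")

    print(acc == 0.8)     # False: eight rounded additions miss the exact value

Each individual rounding is fully determined by the machine’s finite word length, yet the accumulated drift fluctuates in size and sign so irregularly from step to step that it is best treated, as von Neumann observed, as noise.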

Faults in large scale computing machines

    Malfunctions: mistake
    Functioning according to plan: error
        1. Uncertainty in the input: theory and data
        2. Uncertainty due to digital representation: truncation error
        3. Noise and round-off error

In 1948, two years after the presentation of his research on large scale comput-
ing machines, von Neumann delivered a paper on “The General and Logical Theory
of Automata.”34 It was clear to von Neumann that in spite of the fact that natural
organisms are, as a rule, much more complicated and subtle than artificial automata,
there is a fruitful reciprocal relation between these distinct systems. While some
regularity in living organisms could be instructive in the thinking and planning of
automata, the experience with automata could be to some extent projected on the
interpretation of natural organisms.35 The latter point is at the center of my interest.
Although von Neumann acknowledged the different conception of error in the two
systems, he thought they are in some sense related. I call this claim into question;
indeed, I attempt to refute it.
Since the living system is immensely complex, von Neumann suggests a reason-
able and indeed by now a common approach of two moves based on the following
presupposition. The organism may be viewed as made up of parts which are to a
certain extent independent, elementary units. The first move is then to identify the
structure and function of such elementary units individually. The second move con-
sists in seeking an “understanding how these elements are organized into a whole,
and how the functioning of the whole is expressed in terms of these elements.”36
In the first move von Neumann retains the traditional distinction of structure and
function as the underlying heuristics.37 He disposes of this first step by applying
what he calls the Axiomatic Procedure:

Axiomatizing the behavior of the elements means this: We assume that the elements have
certain well-defined, outside, functional characteristics; that is, they are to be treated as
“black boxes.” They are viewed as automatisms, the inner structure of which need not be
disclosed, but which are assumed to react to certain unambiguously defined stimuli, by
certain unambiguously defined responses.38

This procedure is a powerful heuristic device that underlies physiological studies.
Not surprisingly, von Neumann chooses to concentrate on the second move, where
issues of formalism—logical as well as mathematical—are at stake.
As in the other papers, here too the issues of precision and reliability receive
attention right at the outset. Von Neumann remarks that normally one would expect
of a machine that “the larger the number of operations required to produce a result,
the smaller will be the significant contribution of every individual operation.”39
Thus the occurrence of error in automata will matter only to the extent of the frac-
tion of the total number of steps which are required for the completion of the task.
This however does not hold for computing machines. In computing machines any
step—whatever the number of operations—is as important as the whole result. To
put it bluntly in von Neumann’s own words: “any error can vitiate the result in its
entirety.”40 Computing machines have to perform billions of steps in a short time
and no error is permitted in a considerable part of the procedure. In fact, the demand
is that no error should occur anywhere in the entire procedure. In this sense, a com-
puting machine is an exceptional artificial automaton, but it is this feature, accord-
ing to von Neumann, that makes this automaton most suitable for a comparison to
the functioning of a natural organism.
By comparing a cellular automaton with a living organism, von Neumann iden-
tifies processes of digital and analogical nature. While the nerve impulse seems
to function in a binary way and thus well suited to digital representation, other
functions of the living system are mediated in a continuous fashion in what von
Neumann calls “humoral media”.41 Specifically, he discerns both processes in the
central nervous system, that is, digital as well as analogical. The organism exhib-
its composite functional sequences which “go through a variety of steps from the
original stimulus to the ultimate effect—some of the steps being neural, that is,
digital, and others humoral, that is, analogy.” Furthermore,

These digital and analogy portions in such a chain may alternately multiply. In certain cases
of this type, the chain can actually feed back into itself, that is, its ultimate output may again
stimulate its original input.42

The complexity of the living organism is due partly to this intricate combination
of different kinds of process, in contrast to computing machines which in the pres-
ent state of the art are purely digital. And von Neumann remarks that in drawing an
analogy between the living organism and large scale computing machines he attends
only to the digital aspect of the living system—an oversimplification which is how-
ever heuristically productive, and especially so when the device—be it a neuron or a
vacuum tube (von Neumann, it should be noted, wrote this paper before the invention
of the transistor)—is considered a “black box” with a schematic description.43
The parallel function of the two key elements, that is, the nerve cell and the vac-
uum tube, has thus been drawn. It reflects the correspondence between the building
blocks of the nervous system and those of the automata with computing capability. Von
Neumann turns now to what he considers a crucial drawback, in fact the stumbling
block in the development of automata, namely, the rigidity of the formalism: the avail-
able mathematical-logical theories had been too rigid to be conducive to the operational
requirements of automata. In particular, the length of “chains of reasoning” had to be
considered as well as failures that are part and parcel of a working machine. Thus,

The operations of logic (syllogisms, conjunctions, disjunctions, negations, etc., that is, in
the terminology that is customary for automata, various forms of gating, coincidence, anti-
coincidence, blocking, etc., actions) will all have to be treated by procedures which allow
exceptions (malfunctions) with low but non-zero probabilities.44

Von Neumann imports his analysis of error from the large scale computing machines
to his studies of automata. He expected this theory to be less combinatorial and
more analytical, akin to the character of thermodynamics as Boltzmann treated it.
Von Neumann discerns here a theoretical limitation which is of much importance to
the point I am seeking to make. At stake is the error-checking procedure.
We have seen von Neumann analyzing possible kinds of error in large scale com-
puting machines. For him errors and their sources “need only be foreseen generically,
that is, by some decisive traits, and not specifically . . . in complete detail.”45 However,
a malfunction in artificial automata must be detected as soon as it occurs; otherwise these machines would be useless. Effort should be made to identify the error, by, say, mathematical means or automated checks, to isolate the faulty component that caused the error, and then to put it right or replace it altogether. This is why designers compart-
mentalize machines. As Walter Elsasser (1904–1991) explains:

If a system is sufficiently compartmentalized so that errors are prevented from spreading, their consequences may be limited to one compartment for a very long time. If this is not done the consequences of the error tend to spread over the whole system owing to the extensive interconnection of various processes by mutual feedback. Designers of electronic computers therefore have a pronounced tendency to compartmentalize their systems as much as possible, partly in order to prevent the spreading of errors and partly to be able to track them down more readily in case they occur.46

Notice that the diagnosis is effected from without and the faulty component is
replaced by agents external to the system. But over and above the corrective mea-
sures that may be taken, the error itself may be identified in the first place only
against a known standard or criterion. It is this identification which subsequently
allows for insulation and rectification. Therefore, as von Neumann puts it,

we are trying to arrange the automata in such a manner that errors will become as conspicu-
ous as possible, and intervention and correction follow immediately.47

Quick intervention is important to prevent further errors from setting in. It is a common experience that a machine which has begun to malfunction will rarely restore itself, and will more probably go from bad to worse.
This is not the case with the living system; in von Neumann’s words, “the organism
obviously has a way to detect . . . [malfunctions] and render them harmless.”48 Note
that von Neumann regards this observation as indisputable: he says “obviously”.
An organism, for example, the living cell, is presumed to have a way of detecting
on its own, that is, from within, malfunctions and treat them accordingly. Therefore
this system must

contain the necessary arrangements to diagnose errors as they occur, to readjust the organ-
ism so as to minimize the effects of the errors, and finally to correct or to block permanently
the faulty components.49

And in the case of the living system there is little evidence of compartmentaliza-
tion. Thus, according to von Neumann, the entire organism appears to make the
malfunctions as inconsequential as possible, and to apply corrective measures. In
other words, “organisms are constructed to make errors as inconspicuous, as harm-
less, as possible.”50 In sum, while the engineer seeks to make the error as conspicu-
ous and distinct as possible and react swiftly with external means to eliminate it
before further errors set in, the alien designer of the living system has equipped
the system with an internal faculty that can diagnose a malfunction and render it as
inconspicuous as possible over a relatively long time—so von Neumann’s argument
runs.
I have underlined the success of cellular automata in obtaining complexity that
evolves from rudimentary, elementary machinery in parallel to that of the living sys-
tem. But when it comes to disturbances and interferences there appear to be major
qualitative differences—the flexibility of cellular automata is not sufficient for cap-
turing the plasticity of the organism in handling faults. To use metaphoric language, automata live an extremely flat life. At stake are the very elements of the cellular automata: the number of states available to a given cell, the number of cell neighbors, and the sensitivity of the transition rule to the environment. The difficulties in
capturing the versatility of the living system may be characterized respectively as
robustness to perturbation, that is, stability, then variability, and finally sensitivity (or
rather insensitivity) to changes in the transition rule.51
Consider robustness:

Alteration of the state of a single unit of the von Neumann machine typically leads to
catastrophic failure; [by contrast] malfunction of a single neuron or neural assembly should
have no measurable effect.52

The successful operation of the von Neumann construction is due to choosing a dis-
crete substrate in space, time, and state variable. This success is obtained, however, at
a very high price since the automaton is much more vulnerable to disturbances than,
say, differential equations whose continuous substrate is conducive to the treatment
of perturbations. How many states are required in order to obtain robustness in cel-
lular automata? It may well be that increasing the number of states would not after
all result in robustness.
Then there is the issue of variability. It is the variability at the level of the indi-
vidual neuron which the von Neumann machine cannot accommodate, for it would fail catastrophically were the interacting neighboring cells of the automaton of too varied a nature. Again, the question of number arises: how many neighboring cells would it take to achieve variability, a feature which is natural, so to speak, in the living system?
Finally, it may at times be beneficial to the living system to be insensitive
to the environmental changes; by comparison, it is not at all clear how a cellular
automaton can ignore changes in the transition rule. These three elements: stabil-
ity, variability and sensitivity may constitute terminal problems for the designer
of cellular automata in the attempt to depict fundamental features of the living
system.
Such difficulties render the comparison of computing inanimate machines and
living systems problematic; but how does error fare in this comparison? I return to
the distinction which von Neumann draws between modes of checking and rectify-
ing errors in artificial automata and organisms. Recall that the engineer seeks to
make the error as conspicuous as possible in the shortest time possible, quite the
opposite to the common practice, as it were, of the living system. Now, how is error
made conspicuous, or for that matter, inconspicuous? Von Neumann’s analysis is
based on the presupposition that knowledge of what the machine is supposed to
do and how it is designed to accomplish it is given. As I have argued, a compari-
son procedure makes the discrepancy apparent. Thus, it is this given knowledge of
goals and means that makes the identification of error possible. This procedure of
comparison should work also for the living system. Von Neumann characterizes the
relevant background knowledge—the “operating conditions”—in the living system
as “normal”; that is, the operating conditions

represent the functionally normal state of affairs within the large organism. . . . Thus the
important fact is not whether an organ has necessarily and under all conditions the all-or-none
character—this is probably never the case—but rather whether in its proper context it func-
tions primarily, and appears to be intended to function primarily, as an all-or-none organ.

And von Neumann adds candidly,

I realize that this definition brings in rather undesirable criteria of “propriety” of context,
of “appearance” and “intention.” I do not see, however, how we can avoid using them, and
how we can forgo counting on the employment of common sense in their application.53

Indeed, it is impossible to see how such terms can be avoided—this is the kernel of
my claim. Von Neumann’s revealing remark harbors important consequences, but
he does not draw them. Knowledge of these “operating conditions” is in effect the
standard against which error may be discerned and if criteria such as “propriety”,
“appearance”, and “intention” are undesirable then on what grounds could a fault in
the living system be identified at all as such, namely, a fault?
The problem is compounded by the fact that the living system lacks accuracy.
Karl Lashley (1890–1958)—the American psychologist who brought into focus the
controversy between localization and holistic emphasis of brain function—posed
this problem to von Neumann in the discussion on the theory of automata.

In the computing machines, the one thing we demand is precision; on the other hand, when
we study the organism, one thing which we never find is accuracy or precision. In any
organic reaction there is a normal, or nearly normal, distribution of errors around a mean.
The mechanisms of reaction are statistical in character and their accuracy is only that of a
probability distribution in the activity of enormous numbers of elements. In this respect the
organism resembles the analogical rather than the digital machine. The invention of sym-
bols and the use of memorized number series convert the organism into a digital machine,
but the increase in accuracy is acquired at the sacrifice of speed. One can estimate the num-
ber of books on a shelf at a glance, with some error. To count them requires much greater
time. As a digital machine the organism is inefficient. That is why you build computing
machines.54

This statistical approach is usually associated with the belief in the existence of
overall laws of large scale nerve stimulation and composite action, but in living sys-
tems there are often single elements, a neuron, that may control a whole process.55
How could we then determine the governing law of this single cell? What will be
considered “appropriate” in its behavior or, for that matter, what is its “intention”?
Put concisely, we have to determine the “value” system of this neuron in order to
identify an error in its function.
This train of reasoning underpins Warren S. McCulloch’s graphic response to
von Neumann’s theory of automata. McCulloch, of the well known McCulloch-
Pitts model of the neuron (1943), is recorded rejoining:

I confess that there is nothing I envy Dr. von Neumann more than the fact that the machines
with which he has to cope are those for which he has, from the beginning, a blueprint of
what the machine is supposed to do and how it is supposed to do it. Unfortunately for us
in the biological sciences—or, at least, in psychiatry—we are presented with an alien, or
enemy’s, machine. We do not know exactly what the machine is supposed to do and cer-
tainly we have no blueprint of it. In attacking our problems, we only know, in psychiatry,
that the machine is producing wrong answers. We know that, because of the damage by the
machine to the machine itself and by its running amuck in the world. However, what sort of
difficulty exists in that machine is no easy matter to determine.56
Note that the standard of comparison to which McCulloch refers is coherence, that
is, what appears to McCulloch and his co-workers in psychiatry as self-preservation
and efficient adaptability to the world—be it either the physical or the social world,
or indeed both realms. But surely this is just one interpretation, one possible mode
of evaluating the objective that this system, namely, the human being, is supposed
to accomplish.
The claim that the living system lacks a known standard, which in turn undermines—so I have argued—the possibility of determining error in this context, may be formulated for clarity’s sake by using the notion of “teacher”, an agent knowledge-
able of the system so that it can supervise its performance. An artificial automaton
has to have a teacher, the designer who oversees the functioning of the machine.
The teacher, by definition, possesses knowledge of the standard that the automaton
has to maintain. In principle, the teacher could be decoded and the instructions be
taught automatically. The crucial point, however, is that the teaching comes from
without, externally to the system. Note that the teacher is not capable of doing
what the machine does; it only oversees the functioning of the machine. Indeed,
as Lashley pointed out, this is why we build such machines. Thus, we may ask,
how does the teacher know that the end result of millions of calculations is cor-
rect? The teacher can supervise the procedure but cannot check the result itself. Von
Neumann’s solution is degeneracy, namely, to apply another machine; he calls this procedure “multiplexing”.57

Connect . . . three . . . machines in such a manner that they always compare their results after
every single operation, and then proceed as follows. (a) If all three have the same result,
they continue unchecked. (b) If any two agree with each other, but not with the third, then
all three continue with the value agreed on by the majority. (c) If no two agree with each
other, then all three stop.58

This system that comprises three machines will obtain correct results unless two of
the three machines err simultaneously, for which the probability, according to von
Neumann’s calculation, is one in 33 million. Notice how von Neumann proceeds:
he applies a comparative procedure. Once again the key is comparison and in this
case each result is compared to the other in an attempt to achieve consensus, albeit
machine produced consensus.59
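
The voting rule quoted above is easily sketched in code (my illustration; the numbers in the sample run are invented):

    def vote(a, b, c):
        # (a) all three agree, or (b) a belongs to the majority pair
        if a == b or a == c:
            return a
        # (b) b and c outvote a
        if b == c:
            return b
        # (c) no two agree: the triplet stops
        raise RuntimeError("no two machines agree: halt")

    # Hypothetical results of the three machines after each operation;
    # the second machine suffers a transient fault in the second step.
    for step, (r1, r2, r3) in enumerate([(4, 4, 4), (7, 9, 7), (12, 12, 12)]):
        agreed = vote(r1, r2, r3)
        print(f"operation {step}: {(r1, r2, r3)} -> continue with {agreed}")

The arithmetic behind the quoted figure is the standard one for majority redundancy: if each machine errs independently with probability p in a given operation, the triplet fails only when at least two err together, that is, with probability of roughly 3p².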
Thus far machines and automata and their required instructor; but does the living
system have a “teacher”? If the answer is negative, or if we do not have access to
it, then in such systems the determination of malfunctions, and generally of
errors—a process to which von Neumann refers as “obvious”—would be logically
impossible. In this sense the foregoing discussion of error in living systems is in
fact unfounded.60
Granted, living systems possess organs that have identifiable functions whose
ultimate goals and standards may be determined as “normal”. This brings us, however, directly to the problematic function-structure distinction.61 But note that these
organs are mostly peripheral, located as they are at the interface between the living
system and its environment. Consider, however, the cell itself, or its constitutive
elements—the fundamental building blocks of life. The determination of function then ceases to be clear, and consequently knowledge of the standard, that is, the norm, may be missing altogether. I claim that in these cases it is not at all clear what it means to impute error to the system, and indeed to call a certain building block faulty.
This philosophical worry does not deter practitioners from further inquiring
into biology in the spirit that von Neumann inaugurated half a century ago. A good
example is the work of John J. Hopfield who in the 1970s developed an algorithmic
scheme which he called “kinetic proofreading”, and later on in the 1980s demon-
strated how physical systems could pick up features of neural networks and simu-
late the function of memory purely by computation. Hopfield speaks of “reading”
the genetic code with few mistakes. He considers the understanding of how small
error rates are achieved in the living systems as one of the fundamental general
problems of biosynthesis. Admittedly, he writes that he examines the issue “from a
phenomenological point of view.” Still, his proofreading procedure which is based
on energy levels presupposes the concept of error as a primitive that needs no expla-
nation, certainly not a technical one, and one remains perplexed with respect to the
definition of this basic concept, let alone imputing it to an organism.62
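
For the record, the quantitative core of Hopfield’s scheme can be glossed in one line (my paraphrase of the 1974 paper, with ΔG the free-energy difference favoring the correct substrate over the wrong one): a single discrimination performed at thermodynamic equilibrium cannot achieve an error fraction below

    f0 = exp(−ΔG / kBT),

whereas a second, energy-consuming verification step repeats essentially the same discrimination, pushing the attainable error fraction toward f0², far below the single-step bound. The concept of error itself, note, enters this calculation only as an unanalyzed primitive.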
The two related points, namely, lack of a teacher (or ignorance of it) and pro-
cesses that are in principle not accurate, constitute a categorical difference between
large scale computing machines and artificial automata on the one hand and living
systems on the other. To be sure, the comparison between the two systems is pro-
ductive as von Neumann amply showed. However, the comparison may be mis-
leading when it comes to the conception of error. In fact, given the argument I have
presented concerning the epistemic phenomenon of error, the attribution of error to
animate systems may be in itself erroneous.
The question now presents itself whether the application of the experimental
technique in biology—as we have come to know it, say, in biophysical experi-
ments—should take stock of this consequence. So far it appears that this has not
been the case and practitioners such as Hopfield have no hesitation in attributing error, e.g., misreading, to the living system, and indeed to its constitutive elements.
In conclusion, I suggest drawing the consequence so as to avoid the undesirable
criteria of “propriety” of context, of “appearance” and “intention”, as indeed von
Neumann described the problem. A new mode of experimenting is called for that
acknowledges this difficulty, but this I leave for another story.

Acknowledgment I thank Jutta Schickore for incisive comments and Andrea Loett-
gers for drawing my attention to the work of J. J. Hopfield.

Notes

1. Hopfield 1974, 4135.
2. Thatcher 1970. Cf. Kendall and Duff 1984, 1. For historical background, see Abraham 2000,
Ch. III: “From Neural Networks to Self-Reproduction: John von Neumann and Automata
Theory.”
3. McIntosh 1990, 105.
4. Victor 1990, 205.
5. Toffoli and Margolus 1990, 230.
6. Victor 1990, 205.
7. Kendall and Duff 1984, 11–12; Ilachinski 2001, 7. Cf. Wolfram 1986, 1. For historical back-
ground, see Toffoli and Margolus 1990, 231–232.
8. Culick et al. 1990, 357.
9. For further discussion, see Sutner 1990, 389–390.
10. Kendall and Duff 1984, 11–12.
11. Ilachinski 2001, 3.
12. Victor 1990, 205 (emphasis in the original).
13. Ilachinski 2001, 571.
14. Von Neumann 1956/1963, 329.
15. Kendall and Duff 1984, ix.
16. Von Neumann 1956/1963, 329.
17. Hume 1739–1740/1978, 69–73; Bk.1, pt.3, 1, 2.
18. Ibid., 73 (emphasis in the original).
19. Ibid., 71.
20. Hon 1998, 466.
21. Goldstine and von Neumann 1946/1963, 2; Bródy and Vámos 1995, 495.
22. Ibid., 4; ibid., 497.
23. Goldstine and von Neumann 1946/1963, 8–9.
24. Ibid., 13–14.
25. Ibid., 15.
26. Ibid.; Bródy and Vámos 1995, 508.
27. Goldstine and von Neumann 1946/1963, 16; Bródy and Vámos 1995, 508–509.
28. Ibid., 509.
29. Ibid.
30. Goldstine and von Neumann 1946/1963, 16 (emphasis in the original); Bródy and Vámos 1995,
509.
31. Bródy and Vámos 1995, 531; cf. von Neumann 1951/1963, 293–294.
32. Ibid., 533; cf. ibid., 294–295.
33. Goldstine and von Neumann 1946/1963, 17; Bródy and Vámos 1995, 510.
34. Von Neumann 1951/1963; Bródy and Vámos 1995, 526–566.
35. Ibid., 288–289; ibid., 526–527.
36. Ibid., 289; ibid., 527.
37. I have argued elsewhere (Hon 2000) that this is the source of much misconception of the
living system.
38. Von Neumann 1951/1963, 289.
39. Von Neumann 1951/1963, 292; Bródy and Vámos 1995, 530.
40. Ibid. Von Neumann qualified this remark, adding that the claim is not absolutely true; probably
only 30 per cent of all steps made are of this nature.
41. Von Neumann 1951/1963, 296.
42. Von Neumann 1951/1963, 296; Bródy and Vámos 1995, 534. Cf. Von Neumann 1956/1963,
368–369, 372.
43. Ibid., 296–298; ibid., 534–536. See also ibid., 368–369, 372, 375–376.
44. Ibid., 304; Bródy and Vámos 1995, 542.
45. Von Neumann 1951/1963, 324; Bródy and Vámos 1995, 562.
46. Elsasser 1966, 40.
47. Von Neumann 1951/1963, 305–306; cf. Bródy and Vámos 1995, 543–44.
48. Von Neumann 1951/1963, 305; Bródy and Vámos 1995, 543.
49. Ibid.
50. Ibid., 306; Bródy and Vámos 1995, 544.
51. Victor 1990.
52. Ibid., 206.
53. Von Neumann 1951/1963, 298.
54. See von Neumann 1951/1963, 324; Bródy and Vámos 1995, 565.
55. See von Neumann 1956/1963, 369.
56. See Von Neumann 1951/1963, 319; Bródy and Vámos 1995, 557.
57. Von Neumann 1956/1963, 347 and 353–368 (§§ 9, 10).
58. Von Neumann 1951/1963, 322. For a detailed technical analysis see von Neumann 1956/1963,
347–353.
59. Burks, Goldstine and von Neumann 1946/1963, 68–70. Note that this procedure does not
allow for diagnosis.
60. Canguilhem’s study of the normal and the pathological focuses on this difficulty from a dif-
ferent perspective (1978/1991).
61. See Hon 2000.
62. Hopfield 1974; 1980; 1982.

References

Abraham, T. H. (2000). “Microscopic cybernetics: mathematical logic, automata theory, and the
formalization of biological phenomena, 1936–1970.” Ph.D. thesis, Institute for the History and
Philosophy of Science and Technology, Toronto, Canada: University of Toronto.
Bródy, F. and T. Vámos eds. (1995). The Neumann Compendium. Singapore: World Scientific.
Burks, A. W., H. H. Goldstine and John von Neumann (1946/1963). “Preliminary Discussion of
the Logical Design of an Electronic Computing Instrument.” Report prepared for U.S. Army
Ordnance Department, 1946. Reprinted in von Neumann 1963, 34–79.
Canguilhem, G. (1978/1991). The Normal and the Pathological. New York: Zones.
Culick, K. II, L. P. Hurd and S. Yu (1990). “Computation Theoretic Aspects of Cellular Automata.”
In Gutowitz 1990, 357–378.
Elsasser, W. (1966). Atom and Organism: A New Approach to Theoretical Biology. New Jersey:
Princeton University Press.
Goldstine, H. H. and J. von Neumann (1946/1963). “On the Principles of Large Scale Computing
Machines.” Unpublished manuscript, printed in von Neumann 1963, 1–32. See also Bródy and
Vámos 1995, 494–525.
Gutowitz, H. ed. (1990). Cellular Automata: Theory and Experiment. Amsterdam: North-Holland. Published as Physica D 45(1–3).
Hon, G. (1998). “Exploiting Error.” Studies in History and Philosophy of Science 29: 465–479.
Hon, G. (2000). “The Limits of Experimental Method: Experimenting on an Entangled System—
The Case of Biophysics.” In M. Carrier, G. J. Massey and L. Reutsche eds., Science at Century’s
End: Philosophical Questions on the Progress and Limits of Science. Pittsburgh: University of
Pittsburgh Press, pp. 284–307.
Hopfield, J. J. (1974). “Kinetic Proofreading: A New Mechanism for Reducing Errors in Biosynthetic Processes Requiring High Specificity.” Proceedings of the National Academy of Sciences 71: 4135–4139.
Hopfield, J. J. (1980). “The energy relay: A proofreading scheme based on dynamic cooperativity and lacking all characteristic symptoms of kinetic proofreading in DNA replication and protein synthesis.” Proceedings of the National Academy of Sciences 77: 5248–5252.
Hopfield, J. J. (1982). “Neural networks and physical systems with emergent collective computational abilities.” Proceedings of the National Academy of Sciences 79: 2554–2558.
Hume, D. (1739–1740/1978). A Treatise of Human Nature. L. A. Selby-Bigge ed., 2nd edition.
P. H. Nidditch ed., Oxford: Clarendon Press.
Ilachinski, A. (2001). Cellular Automata: A Discrete Universe. Singapore: World Scientific.
Kendall, P., Jr. and M. J. B. Duff (1984). Modern Cellular Automata: Theory and Applications.
New York and London: Plenum Press.
McIntosh, H. V. (1990). “Wolfram’s Class IV Automata and a Good Life.” In Gutowitz 1990,
105–121.
Sutner, K. (1990). “Classifying Circular Cellular Automata.” In Gutowitz 1990, 386–395.
Thatcher, J. W. (1970). “Universality in the von Neumann cellular model.” In A. W. Burks ed.,
Essays on Cellular Automata. Urbana: University of Illinois Press.
Toffoli, T. and N. H. Margolus (1990). “Invertible Cellular Automata: A Review.” In Gutowitz
1990, 229–253.
Victor, J. D. (1990). “What Can Automaton Theory Tell Us about the Brain?” In Gutowitz 1990,
205–207.
Von Neumann, J. (1951/1963). “The General and Logical Theory of Automata.” In Cerebral
Mechanisms in Behaviour. The Hixon Symposium. New York: Wiley, 1951. Reprinted in von
Neumann 1963, 288–328.
Von Neumann, J. (1956/1963). “Probabilistic Logics and the Synthesis of Reliable Organisms
From Unreliable Components.” In C. E. Shannon and J. McCarthy eds., Automata Studies.
Annals of Mathematics Studies, No. 34. Princeton, N. J.: Princeton University Press, pp. 43–98.
Reprinted in von Neumann 1963, 329–378.
Von Neumann, J. (1963). Collected Works, A. H. Taub ed., vol. 5: Design of Computers, Theory of
Automata and Numerical Analysis. New York: Macmillan.
Wolfram, S. (1986). Theory and Applications of Cellular Automata. Singapore: World Scientific.
