Living Extremely Flat: The Life of an Automaton
Giora Hon
Department of Philosophy, University of Haifa, Israel
e-mail: hon@research.haifa.ac.il
Half a century ago, in January 1952, in a lecture delivered at the California Institute
of Technology, John von Neumann (1903–1957) envisaged the synthesis of reli-
able organisms from unreliable components. This was not a science-fiction talk,
calling for imaginative creations in the spirit of Ridley Scott’s Blade Runner. It
was a carefully argued scientific paper in which von Neumann sought to prove the
existence of a self-reproducing universal computer. The paper constitutes an impor-
tant contribution to the consolidation of the theory of automata. Von Neumann did
not conceive of cellular automata as mathematical objects for pure investigation;
rather, he considered the new algorithm a means for treating in detail the problem of how a machine can be made to reproduce itself.2 The realization that cellular automata can
demonstrate that “arbitrarily complicated mathematics could be performed within a
system whose basic organization is thoroughly rudimentary,”3 is a testimony to the
success of von Neumann’s idea. Indeed, his construction shows that “a small set of
local rules acting on a large repetitive array can result in a structure with very com-
plex behavior. The von Neumann construction thus immediately suggests how an
organ with behavior as complex as the brain’s can be specified from limited genetic
information.”4
To get the basic terms clear, cellular automata are “abstract dynamical systems
that play a role in discrete mathematics comparable to that played by partial differ-
ential equations in the mathematics of the continuum.”5 These dynamical systems
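To make the definition concrete, here is a minimal sketch in Python of a one-dimensional cellular automaton; the rule number (30), grid size, and run length are arbitrary illustrative choices of mine, and nothing here is specific to von Neumann’s construction.

```python
# Minimal one-dimensional cellular automaton: a row of cells, each 0 or 1,
# updated in discrete time steps by a local rule applied to each cell's
# three-cell neighborhood. Rule 30 is an arbitrary illustrative choice.

def step(cells, rule=30):
    """Apply an elementary CA rule once, with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((rule >> index) & 1)              # look up the rule bit
    return out

cells = [0] * 31
cells[15] = 1                      # a single live cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

A small set of local rules acting on a repetitive array, as above, is all the machinery the theory presupposes.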
This instructive statement is placed up-front in the introductory section of von Neu-
mann’s essay of 1956: “Probabilistic Logics and the Synthesis of Reliable Organ-
isms from Unreliable Components.” It is a significant opening remark. It puts on a
par the negative concept of error with positive elements of knowledge. To formulate
it differently and in practical terms, von Neumann noted that computing structures
require reliability and therefore the occurrence of error should be addressed head-on
and indeed at the outset of the project. The complexity of the brain, its dexterous
performance and robustness, served for him as the prime example which points not
only towards possible successful designs, but also to the treatment of failures.15 The
conception of failure of machines and living systems is at the center of this paper.
To anticipate my findings, structurally we may benefit enormously from the analogy between living systems and cellular automata, but the nature of error, or failure, in computing systems turns out to be starkly different from failure in living systems. Put another way, the occurrence of error points to differences rather than
to similarities between living systems and cellular automata. A brief philosophical
analysis of the notion of error in general will facilitate a clear understanding of these
differences.
Von Neumann expressed dissatisfaction with the way error had been treated: “unsatisfactory and ad hoc” are his words. He thought that,

error should be treated by thermodynamical methods, and be the subject of a thermodynamical theory, as information has been, by the work of L. Szilard and C. E. Shannon.16

He then admitted that his work fell short of this conception, but added that he
intended his discussion of error to contribute toward this approach.
I will not pursue this physical approach to error; rather, I will direct attention to
the core of the problem, to what I call the epistemic phenomenon of error. Against
this background I will examine the striking difference between error of inanimate
systems and that of the living. I shall conclude by suggesting that this difference
may have consequences for the conception of experimentation in the biological
domain.
I begin then with the epistemic phenomenon of error. According to David Hume
(1711–1776) there are seven different kinds of philosophical relations: “resem-
blance, identity, relations of time and place, proportion in quantity or number,
degrees in any quality, contrariety, and causation.” Hume divides these relations into two classes. The first class comprises those relations that depend entirely on the ideas which we compare; the second, those which may be changed without any change in the ideas. To the former belong four relations: resemblance, contrariety, degrees in quality, and proportions in quantity or number; to the latter belong the remaining three: identity, the situations in time and place, and causation.17
Having presented these relations and classified them in these two groups, depend-
ing on the nature of the underlying idea, Hume states:
All kinds of reasoning consist in nothing but a comparison, and a discovery of those rela-
tions, either constant or inconstant, which two or more objects bear to each other.18
Hume then singles out algebra and arithmetic as the sciences in which we are possessed of

a precise standard, by which we can judge of the equality and proportion of numbers; and according as they correspond or not to that standard, we determine their relations, without any possibility of error.19
From this analysis we may surmise that for error to be identified as such, a context must be established in which procedures of comparison can be developed and indeed applied. Such procedures logically require that a standard be available for the comparison to proceed, so that an error can be determined. In other
words, a fundamental characteristic of error is the recognition of a discrepancy in a
comparative procedure. It is essential to underline “recognition” since otherwise an
error would not be acknowledged as such.
What do we claim to know when we identify an error? We discern a divergence
from a certain standard—a discrepancy. I have suggested elsewhere that the nature
of the discrepancy and its reason may shed light on the object under study.20 Fol-
lowing up this approach, my goal here is to draw consequences from the contrast
between discrepancies identified in inanimate systems that are designed to simulate
live organisms on the one hand, and claims of errors pertaining to living systems on
the other. Von Neumann’s pioneering papers on computing machines and cellular
automata present a rich case for such a study.
In his seminal paper of 1946, “On the principles of large scale computing
machines,” von Neumann, together with Herman H. Goldstine, addressed the broad
issue: “to what extent can human reasoning in the sciences be more efficiently
replaced by mechanisms?”21 Von Neumann and Goldstine observed that in highly
complex fields that are based on non-linear partial differential equations such as
fluid dynamics there had arisen a computational gap that generations of mathemati-
cians had not succeeded in bridging. According to the authors, most experiments
in these fields are “of a quite peculiar form”: they are designed not to verify pro-
posed theories but to replace a computation from an unquestioned theory by direct
measurements. Wind tunnels, for example, are used as computing devices of the
so-called analogy type to integrate the non-linear partial differential equations of
fluid dynamics. The construction of large scale computing machines was partially
motivated by this impasse. As the authors put it: “many branches of both pure and
applied mathematics are in great need of computing instruments to break the pres-
ent stalemate created by the failure of the purely analytical approach to non-linear
problems.”22
The machines which von Neumann and Goldstine considered belong to the digital, or counting, type. These machines treat real numbers as aggregates of digits and
they are distinct from the analogical, measurement type. In analogical machines a
real number is treated as a physical quantity, e.g., the intensity of an electrical cur-
rent or the voltage of an electrical potential. The machines of the analogical type
tend to be of a one-purpose character, specialized for a given task. This stands in
contrast to the digital machines which are essentially all-purpose.23
One aspect of the design of the digital machines which von Neumann and his collaborator set out to address right at the outset was the question of stability; the issue
of error is at the center of this discussion.24 For my argument it is important to
note that von Neumann analyzes the issue of error in computing machines before
he discusses “the input-output organs”, “the memory organ” and “the coding of
problems”—the sections that in the paper follow the discussion on error. Thus, the
issue of error is presented before attention is given to the architecture and the under-
lying principles of these machines.
Von Neumann discerns two principal types of error. The first type pertains
to malfunctions: “the device functions differently from the way in which it was
designed and relied on to function.”25 Von Neumann adds that this type has its
counterpart in human mistakes, both in planning and in actual human comput-
ing. Malfunctions are quite unavoidable in machine computing, and they require checking. However vital this form of checking is to the running of computing machines, von Neumann chooses not to be concerned with it. Rather, he focuses on the other type of error, which arises even when the machine works perfectly well according to plan. Under this heading von Neumann distinguishes three kinds of error.26
The first kind has to do with the fact that all data of empirical origin is approxi-
mate. Any uncertainty of the input, be it associated with the data or with the back-
ground theory, that is, approximate differential equations, will reflect itself as an
uncertainty of the results. Based on well-known mathematical analyses, it could be
shown that the size of the divergence due to this source depends on the size of the
input errors and the degree of continuity of the mathematics involved. Von Neumann remarks that this kind of error pertains to any application of mathematics to nature and is therefore not peculiar to the computational approach; he does not pursue it further.27
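A toy computation of my own devising may illustrate this first kind of error: the same input uncertainty is amplified or damped depending on how continuously the mathematics depends on its input. The two functions below are arbitrary illustrative choices, not von Neumann’s.

```python
# Toy illustration (my own, not von Neumann's): the same input uncertainty
# produces very different output uncertainty depending on how sensitively
# the computation depends on its input.

def f_benign(x):
    return x + 1.0             # well-conditioned near x = 1

def f_sensitive(x):
    return 1.0 / (x - 1.0001)  # nearly singular near x = 1

x, dx = 1.0, 1e-6              # measured input and its uncertainty
for f in (f_benign, f_sensitive):
    spread = abs(f(x + dx) - f(x - dx))
    print(f.__name__, "output spread:", spread)
```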
The second kind of error, still under the heading of functioning as planned, concerns the specific nature of digital computing. All continuous mathematical procedures, like integrations of differential equations, must be replaced in digital computing by elementary mathematical operations; that is, they must be approximated by a succession of the basic arithmetical operations of addition, subtraction, multiplication and division. The resulting deviation from the exact result is therefore due to truncation errors that express the discrepancy between the original continuous problem and its digital transform. However, von Neumann observes that this kind of error can be kept under control by familiar mathematical methods and is usually, so he remarks, not the main source of trouble. He therefore “passes them up, too,” as he comments, at least for the time being.28
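By way of an elementary stand-in (my illustration, not von Neumann’s example), consider Euler’s method for dy/dt = y: the continuous integration is replaced by a succession of additions and multiplications, and the truncation error shrinks as the number of steps grows.

```python
# Truncation error in miniature: the continuous problem dy/dt = y, y(0) = 1,
# has the exact solution e at t = 1; Euler's method replaces the integration
# by repeated additions and multiplications, and the discrepancy between the
# continuous problem and its digital transform shrinks with the step count.
import math

def euler(steps):
    y, h = 1.0, 1.0 / steps
    for _ in range(steps):
        y += h * y                # one elementary arithmetical step
    return y

for steps in (10, 100, 1000):
    print(steps, "steps, truncation error:", abs(math.e - euler(steps)))
```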
The third kind of error, the last one in von Neumann’s enumeration, is the most crucial. It has to do with the fact that, irrespective of whether the input is accurate or approximate, “no machine, no matter how it is constructed, is really carrying out the operations of arithmetics in the rigorous mathematical sense.” And he continues,
there is no machine in which the operations that are supposed to produce the four elemen-
tary functions of arithmetic, will really all produce the correct result, i.e. the sum, differ-
ence, product or quotient which corresponds precisely to those values of the variables that
were actually used.29
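This third kind of error can be exhibited with nothing more than stock floating-point arithmetic; the snippet below is my illustration, not von Neumann’s.

```python
# The third kind of error: even with exact inputs, machine arithmetic does
# not coincide with arithmetic "in the rigorous mathematical sense".
# Standard IEEE-754 double precision, shown with stock Python floats.

print(0.1 + 0.2 == 0.3)          # False: neither 0.1 nor 0.2 is exact
print(0.1 + 0.2 - 0.3)           # a residue on the order of 1e-17

total = sum([0.1] * 10)
print(total == 1.0, total)       # repeated addition drifts from the ideal sum
```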
In 1948, two years after the presentation of his research on large scale comput-
ing machines, von Neumann delivered a paper on “The General and Logical Theory
of Automata.”34 It was clear to von Neumann that in spite of the fact that natural
organisms are, as a rule, much more complicated and subtle than artificial automata,
there is a fruitful reciprocal relation between these distinct systems. While some
regularity in living organisms could be instructive in the thinking and planning of
automata, the experience with automata could be to some extent projected onto the living organism.
Axiomatizing the behavior of the elements means this: We assume that the elements have
certain well-defined, outside, functional characteristics; that is, they are to be treated as
“black boxes.” They are viewed as automatisms, the inner structure of which need not be
disclosed, but which are assumed to react to certain unambiguously defined stimuli, by
certain unambiguously defined responses.38
These digital and analogy portions in such a chain may alternately multiply. In certain cases
of this type, the chain can actually feed back into itself, that is, its ultimate output may again
stimulate its original input.42
The complexity of the living organism is due partly to this intricate combination
of different kinds of process, in contrast to computing machines which in the pres-
ent state of the art are purely digital. And von Neumann remarks that in drawing an
analogy between the living organism and large scale computing machines he attends
only to the digital aspect of the living system—an oversimplification which is how-
ever heuristically productive, and especially so when the device—be it a neuron or a
vacuum tube (von Neumann, it should be noted, wrote this paper before the invention
of the transistor)—is considered a “black box” with a schematic description.43
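The “black box” abstraction can be made concrete with a threshold unit in the spirit of the McCulloch-Pitts neuron: only the stimulus-response mapping is specified, and whether the box is a nerve cell or a vacuum-tube circuit is irrelevant. The weights and threshold below are illustrative choices of mine.

```python
# A "black box" element in the spirit of the McCulloch-Pitts neuron: only
# the stimulus-response behavior is specified; the inner structure of the
# box need not be disclosed. Weights and threshold are illustrative.

def unit(inputs, weights, threshold):
    """Fire (1) iff the weighted stimuli reach the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# An AND-like element: responds only to the joint stimulus.
for stimulus in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(stimulus, "->", unit(stimulus, weights=(1, 1), threshold=2))
```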
The parallel function of the two key elements, that is, the nerve cell and the vac-
uum tube, has thus been drawn. It reflects the correspondence between the building
blocks of the nervous system and those of the automata with computing capability. Von
Neumann turns now to what he considers a crucial drawback, in fact the stumbling
block in the development of automata, namely, the rigidity of the formalism: the avail-
able mathematical-logical theories had been too rigid to be conducive to the operational
requirements of automata. In particular, the length of “chains of reasoning” had to be
considered as well as failures that are part and parcel of a working machine. Thus,
The operations of logic (syllogisms, conjunctions, disjunctions, negations, etc., that is, in
the terminology that is customary for automata, various forms of gating, coincidence, anti-
coincidence, blocking, etc., actions) will all have to be treated by procedures which allow
exceptions (malfunctions) with low but non-zero probabilities.44
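What such a procedure demands can be sketched in a few lines: a gate that computes its Boolean function correctly except with a small malfunction probability. The gate chosen (nand) and the value of the probability are illustrative assumptions of mine.

```python
# Probabilistic logic in miniature: a gate that computes its Boolean
# function correctly except with some small malfunction probability eps.
import random

def noisy_nand(a, b, eps=0.005):
    correct = 1 - (a & b)
    return correct ^ (random.random() < eps)   # flip the output w.p. eps

trials = 100_000
faults = sum(noisy_nand(1, 1) != 0 for _ in range(trials))
print("empirical malfunction rate:", faults / trials)
```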
Von Neumann imports his analysis of error from the large scale computing machines into his studies of automata. He expected the theory of automata to be less combinatorial and more analytical, akin in character to thermodynamics as Boltzmann treated it. Von Neumann discerns here a theoretical limitation which is of much importance to the point I am seeking to make. At stake is the error-checking procedure.
We have seen von Neumann analyzing possible kinds of error in large scale com-
puting machines. For him errors and their sources “need only be foreseen generically,
that is, by some decisive traits, and not specifically . . . in complete detail.”45 However,
a malfunction in artificial automata must be detected as soon as it occurs; otherwise these machines would be useless. Effort should be made to identify the error, by, say, mathematical means or automated checks, to isolate the faulty component that caused the error, and then to put it right or replace it altogether. This is why designers compartmentalize machines. As Walter Elsasser (1904–1991) explains:
Notice that the diagnosis is effected from without and the faulty component is
replaced by agents external to the system. But over and above the corrective mea-
sures that may be taken, the error itself may be identified in the first place only
against a known standard or criterion. It is this identification which subsequently
allows for insulation and rectification. Therefore, as von Neumann puts it,
we are trying to arrange the automata in such a manner that errors will become as conspicu-
ous as possible, and intervention and correction follow immediately.47
Quick intervention is important to prevent further errors from setting in. It is a common experience that a machine which has begun to malfunction will rarely restore itself; more probably, it will go from bad to worse.
This is not the case of the living system; in von Neumann’s words, “the organism
obviously has a way to detect . . . [malfunctions] and render them harmless.”48 Note
that von Neumann regards this observation as indisputable: he says “obviously”.
An organism, for example, the living cell, is presumed to have a way of detecting
on its own, that is, from within, malfunctions and treat them accordingly. Therefore
this system must
contain the necessary arrangements to diagnose errors as they occur, to readjust the organ-
ism so as to minimize the effects of the errors, and finally to correct or to block permanently
the faulty components.49
And in the case of the living system there is little evidence of compartmentaliza-
tion. Thus, according to von Neumann, the entire organism appears to make the
malfunctions as inconsequential as possible, and to apply corrective measures. In
other words, “organisms are constructed to make errors as inconspicuous, as harm-
less, as possible.”50 In sum, while the engineer seeks to make the error as conspicuous and distinct as possible and to react swiftly, with external means, to eliminate it before further errors set in, the alien designer of the living system has equipped the system with an internal faculty that can diagnose a malfunction and render it as inconspicuous as possible over a relatively long time; so von Neumann’s argument runs.
I have underlined the success of cellular automata in obtaining complexity that
evolves from rudimentary, elementary machinery in parallel to that of the living sys-
tem. But when it comes to disturbances and interferences there appear to be major qualitative differences: the flexibility of cellular automata is not sufficient for capturing the plasticity of the organism in handling faults. To use metaphoric language, automata live an extremely flat life. At stake are the very elements of the cellular automaton: the number of states available to a given cell, the number of cell neighbors, and the sensitivity of the transition rule to the environment. The difficulties in capturing the versatility of the living system may be characterized respectively as
robustness to perturbation, that is, stability, then variability, and finally sensitivity (or
rather insensitivity) to changes in the transition rule.51
Consider robustness:
Alteration of the state of a single unit of the von Neumann machine typically leads to
catastrophic failure; [by contrast] malfunction of a single neuron or neural assembly should
have no measurable effect.52
The successful operation of the von Neumann construction is due to the choice of a discrete substrate in space, time, and state variable. This success is obtained, however, at a very high price, since the automaton is much more vulnerable to disturbances than, say, differential equations, whose continuous substrate is conducive to the treatment of perturbations. How many states are required to obtain robustness in cellular automata? It may well be that increasing the number of states would not, after all, result in robustness.
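The fragility can be exhibited directly. In the following illustrative experiment (my own, reusing the step function sketched above), two copies of the same automaton that differ in a single cell are run side by side, and the disagreement typically spreads rather than heals.

```python
# Fragility made visible: run two copies of the same automaton that differ
# in a single cell (a single-cell "malfunction") and count how far the
# disagreement spreads. step() is the function sketched earlier.
import random

random.seed(1)
a = [random.randint(0, 1) for _ in range(101)]
b = list(a)
b[50] ^= 1                         # flip one cell

for t in range(30):
    a, b = step(a), step(b)
damage = sum(x != y for x, y in zip(a, b))
print("cells disagreeing after 30 steps:", damage)
```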
Then there is the issue of variability. It is the variability at the level of the individual neuron which the von Neumann machine cannot accommodate, for it would fail catastrophically were the interacting neighboring cells of the automaton of too varied a nature. Again, the question of number arises: how many neighboring cells would it take to achieve variability, a feature which is natural, so to speak, in the living system?
Finally, it may at times be beneficial to the living system to be insensitive to environmental changes; by comparison, it is not at all clear how a cellular automaton can ignore changes in the transition rule. These three elements, stability, variability, and sensitivity, may constitute terminal problems for the designer of cellular automata in the attempt to depict fundamental features of the living system.
Such difficulties render the comparison of computing inanimate machines and
living systems problematic; but how does error fare in this comparison? I return to
the distinction which von Neumann draws between modes of checking and rectify-
ing errors in artificial automata and organisms. Recall that the engineer seeks to
make the error as conspicuous as possible in the shortest time possible, quite the
opposite to the common practice, as it were, of the living system. Now, how is error
made conspicuous, or for that matter, inconspicuous? Von Neumann’s analysis is
based on the presupposition that knowledge of what the machine is supposed to
do and how it is designed to accomplish it is given. As I have argued, a compari-
son procedure makes the discrepancy apparent. Thus, it is this given knowledge of
goals and means that makes the identification of error possible. This procedure of
comparison should work also for the living system. Von Neumann characterizes the
relevant background knowledge—the “operating conditions”—in the living system
as “normal”; that is, the operating conditions
represent the functionally normal state of affairs within the large organism. . . . Thus the
important fact is not whether an organ has necessarily and under all conditions the all-or-none
character—this is probably never the case—but rather whether in its proper context it func-
tions primarily, and appears to be intended to function primarily, as an all-or-none organ.
I realize that this definition brings in rather undesirable criteria of “propriety” of context,
of “appearance” and “intention.” I do not see, however, how we can avoid using them, and
how we can forgo counting on the employment of common sense in their application.53
Indeed, it is impossible to see how such terms can be avoided; this is the kernel of my claim. Von Neumann’s revealing remark harbors important consequences, but
he does not draw them. Knowledge of these “operating conditions” is in effect the
standard against which error may be discerned and if criteria such as “propriety”,
“appearance”, and “intention” are undesirable then on what grounds could a fault in
the living system be identified at all as such, namely, a fault?
The problem is compounded by the fact that the living system lacks accuracy.
Karl Lashley (1890–1958)—the American psychologist who brought into focus the
controversy between localization and holistic emphasis of brain function—posed
this problem to von Neumann in the discussion on the theory of automata.
In the computing machines, the one thing we demand is precision; on the other hand, when
we study the organism, one thing which we never find is accuracy or precision. In any
organic reaction there is a normal, or nearly normal, distribution of errors around a mean.
The mechanisms of reaction are statistical in character and their accuracy is only that of a
probability distribution in the activity of enormous numbers of elements. In this respect the
organism resembles the analogical rather than the digital machine. The invention of sym-
bols and the use of memorized number series convert the organism into a digital machine,
but the increase in accuracy is acquired at the sacrifice of speed. One can estimate the num-
ber of books on a shelf at a glance, with some error. To count them requires much greater
time. As a digital machine the organism is inefficient. That is why you build computing
machines.54
This statistical approach is usually associated with the belief in the existence of
overall laws of large scale nerve stimulation and composite action, but in living sys-
tems there are often single elements, a neuron, that may control a whole process.55
How could we then determine the governing law of this single cell? What would count as “appropriate” behavior or, for that matter, what is its “intention”?
Put concisely, we have to determine the “value” system of this neuron in order to
identify an error in its function.
This train of reasoning underpins Warren S. McCulloch’s graphic response to von Neumann’s theory of automata. McCulloch, of the well-known McCulloch-Pitts model of the neuron (1943), is recorded as rejoining:
I confess that there is nothing I envy Dr. von Neumann more than the fact that the machines
with which he has to cope are those for which he has, from the beginning, a blueprint of
what the machine is supposed to do and how it is supposed to do it. Unfortunately for us
in the biological sciences—or, at least, in psychiatry—we are presented with an alien, or
enemy’s, machine. We do not know exactly what the machine is supposed to do and cer-
tainly we have no blueprint of it. In attacking our problems, we only know, in psychiatry,
that the machine is producing wrong answers. We know that, because of the damage by the
machine to the machine itself and by its running amuck in the world. However, what sort of
difficulty exists in that machine is no easy matter to determine.56
Note that the standard of comparison to which McCulloch refers is coherence, that
is, what appears to McCulloch and his co-workers in psychiatry as self-preservation
and efficient adaptability to the world—be it either the physical or the social world,
or indeed both realms. But surely this is just one interpretation, one possible mode
of evaluating the objective that this system, namely, the human being, is supposed
to accomplish.
The claim that the living system lacks a known standard, which in turn undermines, so I have argued, the possibility of determining error in this context, may be formulated for clarity’s sake by using the notion of a “teacher”: an agent knowledgeable of the system so that it can supervise its performance. An artificial automaton has to have a teacher, the designer who oversees the functioning of the machine. The teacher, by definition, possesses knowledge of the standard that the automaton has to maintain. In principle, the teacher could be encoded and the instructions taught automatically. The crucial point, however, is that the teaching comes from without, externally to the system. Note that the teacher is not capable of doing what the machine does; it only oversees the functioning of the machine. Indeed,
as Lashley pointed out, this is why we build such machines. Thus, we may ask,
how does the teacher know that the end result of millions of calculations is cor-
rect? The teacher can supervise the procedure but cannot check the result itself. Von Neumann’s solution is degeneracy, namely, to apply further machines; he calls this procedure “multiplexing”.57
Connect . . . three . . . machines in such a manner that they always compare their results after
every single operation, and then proceed as follows. (a) If all three have the same result,
they continue unchecked. (b) If any two agree with each other, but not with the third, then
all three continue with the value agreed on by the majority. (c) If no two agree with each
other, then all three stop.58
This system comprising three machines will obtain correct results unless two of the three machines err simultaneously, for which the probability, according to von Neumann’s calculation, is one in 33 million. Notice how von Neumann proceeds: he applies a comparative procedure. Once again the key is comparison, and in this case each result is compared to the others in an attempt to achieve consensus, albeit a machine-produced consensus.59
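Von Neumann’s prescription translates almost word for word into code. In the sketch below, faulty_add and the fault probability p are illustrative assumptions of mine; the majority comparison and the halting rule follow the quoted cases (a) to (c).

```python
# Von Neumann's three-machine comparison scheme: after every operation the
# three results are compared; unanimity or a majority lets the computation
# continue, total disagreement halts it. The fault probability p and the
# faulty_add operation are illustrative assumptions.
import random

def faulty_add(x, y, p=0.001):
    r = x + y
    return r + 1 if random.random() < p else r   # occasional wrong result

def voted_add(x, y):
    results = [faulty_add(x, y) for _ in range(3)]
    for r in results:
        if results.count(r) >= 2:                # cases (a) and (b): majority
            return r
    raise RuntimeError("no two agree: all three stop")  # case (c)

print(voted_add(2, 2))
# The chance that two machines err in the same step is about 3 * p**2,
# which for small p is vastly below the single-machine rate p.
```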
Thus far machines and automata and their required instructor; but does the living system have a “teacher”? If the answer is negative, or if we do not have access to it, then in such systems the determination of malfunctions, and generally of errors, a process whose detection von Neumann regards as “obvious”, would be logically impossible. In this sense the foregoing discussion of error in living systems is in fact unfounded.60
Granted, living systems possess organs that have identifiable functions whose
ultimate goals and standards may be determined as “normal”. This brings us, however, directly to the problematic distinction between function and structure.61 But note that these
organs are mostly peripheral, located as they are at the interface between the living
system and its environment. Consider, however, the cell itself, or its constitutive
elements—the fundamental building blocks of life. The determination of function
ceases then to be clear, and consequently knowledge of the standard, that is, the norm, may be missing altogether. I claim that in these cases it is not at all clear what it means to impute error to the system, and indeed to call a certain building block faulty.
This philosophical worry does not deter practitioners from inquiring further into biology in the spirit that von Neumann inaugurated half a century ago. A good example is the work of John J. Hopfield, who in the 1970s developed an algorithmic scheme which he called “kinetic proofreading”, and later, in the 1980s, demonstrated how physical systems could take on features of neural networks and simulate the function of memory purely by computation. Hopfield speaks of “reading” the genetic code with few mistakes. He considers the understanding of how small error rates are achieved in living systems to be one of the fundamental general problems of biosynthesis. Admittedly, he writes that he examines the issue “from a phenomenological point of view.” Still, his proofreading procedure, which is based on energy levels, presupposes the concept of error as a primitive that needs no explanation, certainly not a technical one, and one remains perplexed with respect to the definition of this basic concept, let alone its imputation to an organism.62
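The arithmetic of the scheme, as I read Hopfield, is this: if one recognition step discriminates wrong from right substrate by a factor f fixed by a free-energy difference, a second, energetically driven verification step discriminates by the same factor again, so the error fraction falls from roughly f to f squared. The numbers below are illustrative.

```python
# The arithmetic behind kinetic proofreading, on my reading of Hopfield:
# one recognition step discriminates wrong from right substrate by a factor
# f set by the free-energy difference; an irreversible verification step
# discriminates by the same factor again, so the error fraction falls from
# roughly f to roughly f**2. The energy value is an illustrative choice.
import math

dG_over_kT = 4.6                     # illustrative discrimination energy
f = math.exp(-dG_over_kT)            # error fraction of a single step
print("one-step error rate:  ", f)        # about 1e-2
print("proofread error rate: ", f ** 2)   # about 1e-4
```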
The two related points, namely, the lack of a teacher (or our ignorance of it) and processes that are in principle not accurate, constitute a categorical difference between large scale computing machines and artificial automata on the one hand and living
systems on the other. To be sure, the comparison between the two systems is pro-
ductive as von Neumann amply showed. However, the comparison may be mis-
leading when it comes to the conception of error. In fact, given the argument I have
presented concerning the epistemic phenomenon of error, the attribution of error to
animate systems may be in itself erroneous.
The question now presents itself whether the application of the experimental technique in biology, as we have come to know it, say, in biophysical experiments, should take stock of this consequence. So far it appears that this has not been the case, and practitioners such as Hopfield have no hesitation in attributing error, e.g., misreading, to living systems, and indeed to their constitutive elements. In conclusion, I suggest drawing the consequence, so as to avoid the undesirable criteria of “propriety” of context, of “appearance” and “intention”, as indeed von Neumann described the problem. A new mode of experimenting is called for that acknowledges this difficulty; but this I leave for another story.
Acknowledgment I thank Jutta Schickore for incisive comments and Andrea Loett-
gers for drawing my attention to the work of J. J. Hopfield.
Notes
References
Abraham, T. H. (2000). “Microscopic cybernetics: mathematical logic, automata theory, and the
formalization of biological phenomena, 1936–1970.” Ph.D. thesis, Institute for the History and
Philosophy of Science and Technology, Toronto, Canada: University of Toronto.
Bródy, F. and T. Vámos eds. (1995). The Neumann Compendium. Singapore: World Scientific.
Burks, A. W., H. H. Goldstine and John von Neumann (1946/1963). “Preliminary Discussion of
the Logical Design of an Electronic Computing Instrument.” Report prepared for U.S. Army
Ordnance Department, 1946. Reprinted in von Neumann 1963, 34–79.
Canguilhem, G. (1978/1991). The Normal and the Pathological. New York: Zone Books.
Culik II, K., L. P. Hurd and S. Yu (1990). “Computation Theoretic Aspects of Cellular Automata.” In Gutowitz 1990, 357–378.
Elsasser, W. (1966). Atom and Organism: A New Approach to Theoretical Biology. New Jersey:
Princeton University Press.
Goldstine, H. H. and J. von Neumann (1946/1963). “On the Principles of Large Scale Computing
Machines.” Unpublished manuscript, printed in von Neumann 1963, 1–32. See also Bródy and
Vámos 1995, 494–525.
Gutowitz, H. ed. (1990). Cellular Automata: Theory and Experiment. Amsterdam: North-Holland. Published as Physica D 45: 1–3.
Hon, G. (1998). “Exploiting Error.” Studies in History and Philosophy of Science 29: 465–479.
Hon, G. (2000). “The Limits of Experimental Method: Experimenting on an Entangled System—
The Case of Biophysics.” In M. Carrier, G. J. Massey and L. Reutsche eds., Science at Century’s
End: Philosophical Questions on the Progress and Limits of Science. Pittsburgh: University of
Pittsburgh Press, pp. 284–307.
Hopfield, J. J. (1974). “Kinetic Proofreading: A New Mechanism for Reducing Errors in Biosynthetic Processes Requiring High Specificity.” Proceedings of the National Academy of Sciences 71: 4135–4139.
Hopfield, J. J. (1980). “The energy relay: A proofreading scheme based on dynamic cooperativity and lacking all characteristic symptoms of kinetic proofreading in DNA replication and protein synthesis.” Proceedings of the National Academy of Sciences 77: 5248–5252.
Hopfield, J. J. (1982). “Neural networks and physical systems with emergent collective computational abilities.” Proceedings of the National Academy of Sciences 79: 2554–2558.
Hume, D. (1739–1740/1978). A Treatise of Human Nature. L. A. Selby-Bigge ed., 2nd edition.
P. H. Nidditch ed., Oxford: Clarendon Press.
Ilachinski, A. (2001). Cellular Automata: A Discrete Universe. Singapore: World Scientific.
Preston, K., Jr. and M. J. B. Duff (1984). Modern Cellular Automata: Theory and Applications. New York and London: Plenum Press.
McIntosh, H. V. (1990). “Wolfram’s Class IV Automata and a Good Life.” In Gutowitz 1990,
105–121.
Sutner, K. (1990). “Classifying Circular Cellular Automata.” In Gutowitz 1990, 386–395.
Thatcher, J. W. (1970). “Universality in the von Neumann cellular model.” In A. W. Burks ed.,
Essays on Cellular Automata. Urbana: University of Illinois Press.
Toffoli, T. and N. H. Margolus (1990). “Invertible Cellular Automata: A Review.” In Gutowitz
1990, 229–253.
Victor, J. D. (1990). “What Can Automaton Theory Tell Us about the Brain?” In Gutowitz 1990, 205–207.
Von Neumann, J. (1951/1963). “The General and Logical Theory of Automata.” In Cerebral
Mechanisms in Behaviour. The Hixon Symposium. New York: Wiley, 1951. Reprinted in von
Neumann 1963, 288–328.
Von Neumann, J. (1956/1963). “Probabilistic Logics and the Synthesis of Reliable Organisms
From Unreliable Components.” In C. E. Shannon and J. McCarthy eds., Automata Studies.
Annals of Mathematics Studies, No. 34. Princeton, N. J.: Princeton University Press, pp. 43–98.
Reprinted in von Neumann 1963, 329–378.
Von Neumann, J. (1963). Collected Works, A. H. Taub ed., vol. 5: Design of Computers, Theory of
Automata and Numerical Analysis. New York: Macmillan.
Wolfram, S. (1986). Theory and Applications of Cellular Automata. Singapore: World Scientific.