Three Paradigms of Computer Science
Amnon H. Eden
A. H. Eden
Department of Computer Science, University of Essex, Colchester, Essex, UK
Center for Inquiry, Amherst, NY, USA
1 Introduction
In his seminal work on scientific revolutions, Thomas Kuhn (1992) defines scientific
paradigms as ‘‘some accepted examples of actual scientific practice... [that] provide
models from which spring particular coherent traditions of scientific research’’. The
purpose of this paper is to investigate the paradigms of computer science and to
expose their philosophical origins.
Peter Wegner (1976) examines three definitions of computer science: as a branch
of mathematics (e.g., Knuth 1968), as an engineering (‘technological’) discipline,
and as a natural ('empirical') science. He concludes that the practices of computer
scientists are effectively committed not to one but to any one of three 'research
paradigms'.1 Taking a historical perspective, Wegner argues that each paradigm
dominated a different decade during the 20th century: the scientific paradigm
dominated the 1950s, the mathematical paradigm dominated the 1960s, and the
technocratic paradigm dominated the 1970s—the decade in which Wegner wrote his
paper2. We take Wegner’s historical account to hold and postulate (§5) that to this
day computer science is largely dominated by the tenets of the technocratic
paradigm. We shall also go beyond Wegner and explore the philosophical roots of
the dispute on the definition of the discipline.
Timothy Colburn (2000, p. 154) suggests that the different definitions of the
discipline merely emanate from complementary interpretations (or ‘views’) of the
activity of writing computer programs, and therefore they can be reconciled as such.
Jim Fetzer (1993) however argues that the dispute is not restricted to definitions,
methods, or reconcilable views of the same activities. Rather, Fetzer contends that
disagreements extend to philosophical positions concerning a broad range of issues
which go beyond the traditional confines of the discipline: ‘‘The ramifications of this
dispute extend beyond the boundaries of the discipline itself. The deeper question
that lies beneath this controversy concerns the paradigm most appropriate to
computer science’’. Not unlike Kuhn, Fetzer takes ‘paradigm’ to be that set of
coherent research practices that a community of computer scientists shares amongst
its members. By calling the disagreements 'paradigmatic', Fetzer claims that their roots
1 To which Wegner also refers as 'cultures' or 'disciplines' interchangeably.
2 The "Denning report" (Denning et al. 1989), authored by the task force which was commissioned to investigate "the core of computer science", also lists three "paradigms" of the discipline: theory/mathematics, abstraction/science, and design/engineering. According to this report, these paradigms are "cultural styles by which we approach our work". They conclude however that "in computing the three processes are so intricately intertwined that it is irrational to say that any one is fundamental".
(§2) The rationalist paradigm, which was common among theoretical computer
scientists, defines the discipline as a branch of mathematics (MET-RAT),
treats programs on a par with mathematical objects (ONT-RAT), and seeks
certain, a priori knowledge about their ‘correctness’ by means of deductive
reasoning (EPI-RAT).
(§3) The technocratic paradigm, promulgated mainly by software engineers,
defines computer science as an engineering discipline (MET-TEC), treats
programs as mere data (ONT-TEC), and seeks probable, a posteriori
knowledge about their reliability empirically using testing suites (EPI-TEC).
(§4) The scientific paradigm, prevalent in artificial intelligence, defines computer
science as a natural (empirical) science (MET-SCI), takes programs to be on
a par with mental processes (ONT-SCI), and seeks a priori and a posteriori
knowledge about them by combining formal deduction and scientific
experimentation (EPI-SCI).
Since arguments supporting the tenets of the rationalist and technocratic
epistemological positions have already been examined elsewhere (e.g., Colburn’s
(2000) detailed account of the ‘verification wars’), their treatment in §2 and §3 is
brief. Instead, we expand on the arguments of complexity, non-linearity, and self-
modifiability for the unpredictability of programs and conclude that knowledge
concerning certain properties of all but the most trivial programs can only be
established by conducting scientific experiments.
In §4 we proceed to examine several properties of program-processes (temporal,
non-physical, causal, metabolic, contingent upon a physical manifestation, and non-
linear) and conclude that program-processes are, in terms of category of existence,
on a par with mental processes. This discussion shall lead us to concur with Colburn
and conclude that the tenets of the scientific paradigm are the most appropriate for
computer science. Nonetheless, in §5 we demonstrate evidence for the dominance of
the technocratic paradigm which has prevailed since Wegner (1976) described the
1970s as the decade of the ‘technological paradigm’ and examine its consequences.
Our discussion will lead us to conclude that this domination has not benefited
software engineering, and that for the discipline to become as effective as its sister,
established engineering disciplines, it must abandon the technocratic paradigm.
What ontological category would computer [programs] belong to? Are they
supposed to be material objects? ... If so, what matter; and if not, what are they
made of? ... Events or processes? Platonic complexes of pure information? ...
If not, where are they? ... Are they located in space and time at all? ... Or are
the traditional ontological categories of the philosophers adequate to account
for this new phenomenon? (Olson 1997)
We take into consideration all sorts of entities that computer scientists
conventionally take to be ‘computer programs’, such as numerical analysis
programs, database and World Wide Web applications, operating systems,
compilers/interpreters, device drivers, computer viruses, genetic algorithms,
network routers, and Internet search engines. We shall thus restrict most of our
discussion to such conventional notions of computer programs, and generally
assume that each is encoded for and executed by silicon-based von Neumann
computers. We therefore refrain from extending our discussion to the kind of
programs that DNA computing and quantum computing are concerned with.
The ontological dispute in computer science may be recast in the terminology we
shall introduce below as follows:
ONT Are program-scripts mathematical expressions? Are programs mathemat-
ical objects? Alternatively, should program-scripts be taken to be just ‘a bunch of
data’ and the existence of program-processes dismissed? Or should program-
scripts be taken to be on a par with DNA sequences (such as the genomic
information representing a human), the interpretation of which is on a par with
mental processes?
Below we clarify some of the technical terms mentioned in ONT and in the
remainder of this paper.
Terminology
4 Also known as machine code or object code.
5 The program adds 3 to the product of two numbers, encoded in the 8086 microprocessor assembly language (adapted from Georick et al. 1997).
6 For example, consider the difficulty of spotting and correcting errors in the program in Table 1.
Table 3 Steps in a sample program-process generated from executing the program in Table 2:
(example 4 5)
(+ (* 4 5) 3)
(+ 20 3)
23
processes: The first is an inert sequence of symbols; the second is a causal and a
temporal entity. Any number of program-processes can potentially be generated
from each program-script. Furthermore, certain operating systems allow the
simultaneous generation of a large number of program-processes from a single
program-script, executed concurrently by a single microprocessor. For example, my
personal computer can generate and concurrently execute large numbers of
program-processes from the program-script in Table 2.
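A minimal Scheme sketch of such a program-script, consistent with the evaluation steps shown in Table 3 and with the description in footnote 7 (a procedure example that adds 3 to the product of its two arguments), would be:

(define (example x y)
  (+ (* x y) 3))

;; Executing (example 4 5) generates a program-process whose successive
;; states correspond to the steps traced in Table 3:
;; (example 4 5) => (+ (* 4 5) 3) => (+ 20 3) => 23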
Program specifications are statements that assert our expectations from a program.
If specifications are defined before the program-script is encoded they can be used to
articulate the objectives of the encoding enterprise and drive the software
development process, which is often complex and arduous. For example, a
specification asserting that the program-script in Table 2 indeed calculates the sum
of the product of two numbers and the number 3 can be formally specified as a
lambda expression:
λxy. x · y + 3    (1)
In more conventional notation, (1) can also be represented as a two-place function:
example(x, y) = x · y + 3    (2)
7 The program adds 3 to the product of two numbers, encoded here in the syntax of Scheme (Abelson and Sussman 1996), a dialect of Lisp.
(Table 2) can be defined by the extent to which it satisfies specification (2). If the
specification is articulated in a mathematical language, as in (2), it is referred to as a
formal specification, in which case the question of ‘correctness’ is well-defined.
Most specifications however are not quite as simple as (2). Specifications may
assert not only the outcome of executing a particular program-script (e.g., adding a
record to a database or moving a robotic arm) but also how efficient the
program-processes generated therefrom are (e.g., how long it takes to carry out a
particular calculation) and how reliable they are (e.g., do they terminate
unexpectedly?). For this reason, fully formulated specifications are not always
feasible, as demonstrated by the specifications in Table 4.
Indeed, although the incorrectness of a program can be a source of considerable
damage, or even a matter of life and death, correctness may be very difficult—or, as Fetzer
and Cohn claimed, altogether impossible—to establish formally. And while
executing a program-script in various circumstances (‘program testing’) can
discover certain errors, no number of tests can establish their absence8. For these
reasons, the problem of program correctness has become central to computer
science. If correctness cannot be formally specified and the problem of establishing
it is not even well-defined then is it at all meaningful to ask whether a program is
correct, and if so then what should ‘correctness’ be taken to mean and how can it be
established effectively? These questions are at the heart of the epistemological
dispute:
EPI Is warranted knowledge about programs a priori or a posteriori?9 In other
words, does knowledge about programs emanate from empirical evidence or from
pure reason? What does it mean for a program to be correct, and how can this
property be effectively established? Must we consider correctness to be a well-
defined property—should we insist on formal specifications under all circum-
stances and seek to prove it deductively—or should we adopt a probabilistic
notion of correctness (‘probably correct’) and seek to establish it a posteriori by
statistical means?
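A minimal sketch of the a posteriori, 'probably correct' alternative, reusing the example procedure sketched earlier (spec, trial, and agreement-rate are illustrative names, and random is implementation-specific rather than standard Scheme): repeatedly executing program-processes and comparing their outcomes with specification (2) yields statistical evidence, never a proof of the absence of errors.

(define (spec x y) (+ (* x y) 3))   ; specification (2), restated as a procedure

;; One trial: generate a program-process of example on random inputs and
;; compare its outcome with the specification.
(define (trial)
  (let ((x (random 1000)) (y (random 1000)))
    (if (= (example x y) (spec x y)) 1 0)))

;; Fraction of n trials in which the program agreed with its specification;
;; a high rate warrants only probable, a posteriori confidence.
(define (agreement-rate n)
  (let loop ((i 0) (ok 0))
    (if (= i n)
        (/ ok n)
        (loop (+ i 1) (+ ok (trial))))))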
8 A statement most widely attributed to Dijkstra.
9 We follow Colburn (2000) in taking a priori knowledge about a program to be knowledge that is prior to experience with it, namely knowledge emanating from analyzing the program-script, and a posteriori knowledge to be knowledge following from experience with observed phenomena, namely knowledge concerning a given set of specific program-processes generated from a given script.
During the 1940s the first electronic computers appeared, and with them emerged
the contemporary notions of computer programs (§1.2). A mathematical proof
demonstrating that programs encoded in machine programming languages are
computationally equivalent to the mathematical notions of mechanistic computation
on offer has established the relevance of deductive reasoning to modern computer
science. In particular, computational equivalence implied that any problem which
can be solved (or efficiently solved) by a Turing machine can be solved by executing
a program-script encoded in a machine programming language (§1.2), and vice
versa, namely, that any problem which cannot be (efficiently) solved by a Turing
machine also cannot be (effectively) solved by executing a program-script encoded
in a machine programming language. For this reason machine programming
languages are described as 'Turing-complete' languages. High-order programming
languages have thus appeared in a rich mathematical context, the design of which
was heavily influenced by the mathematical notions of mechanistic computation on
offer. For example, the striking resemblance between the Lisp program in Table 2
and the lambda expression specifying it (1) emanates directly from the commitment
of the designer of the Lisp programming language (McCarthy 1960) to lambda
calculus.
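To make the resemblance concrete (a sketch under the same assumption as before, namely that Table 2 defines the procedure example), the lambda expression (1) transliterates almost symbol for symbol into Scheme's lambda form:

(define example
  (lambda (x y)        ; λxy.
    (+ (* x y) 3)))    ; x · y + 3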
The fundamental theorems of the theories of computation have remained relevant
notwithstanding generations of exponential growth in computing power. Time has
thus secured the primacy of deductive methods of investigation as a source of
certain knowledge about programs and led many to concur with Hoare. For
example, Knuth justifies his definition of computer science as a branch of
mathematics (Knuth 1968) as follows:
Like mathematics, computer science will be somewhat different from other
sciences in that it deals with man-made laws which can be [deductively]
proved, instead of natural laws which are never known with certainty. (Knuth
1974)
The rationalist stance in the methodological dispute can thus be summarized as
follows:
MET-RAT Computer science is a branch of mathematics, writing programs is a
mathematical activity, and deductive reasoning is the only accepted method of
investigating programs.
MET-RAT is justified by the rationalist ontological and epistemological positions
examined below.
12 Dijkstra (1988) offered an explanation as to how this 'fact' escaped mathematicians and programmers alike: "Programs were so much longer formulae than [mathematics] was used to that [many] did not even recognize them as such."
The proof of correctness of the script in Table 2 shall proceed with the attempt
to prove (3) by employing the rules of inference of Hoare Logic. Once
established, such a mathematical proof shall thus secure the correctness of the
program-script in Table 2 with the certainty otherwise reserved for mathematical
theorems.
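Formula (3) is a correctness assertion of the kind used in Hoare Logic; purely as an illustrative sketch (not the formula as it appears above), it may be rendered as a Hoare triple stating that whenever execution of the script terminates, its result agrees with specification (2):

{ true }  (example x y)  { result = x · y + 3 }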
Other efforts in delivering formal semantics have followed Hoare’s example in
the attempt to prove program correctness using other axiomatic theories. In
particular, Scott's denotational semantics (Stoy 1977) harnessed the axioms of
Zermelo-Fraenkel set theory to prove program correctness.
13 Bill Rapaport (2007) notes that such a position has interesting consequences for the question of whether programs can be copyrighted or patented.
intuition about the nature of mathematical objects such as numbers, triangles, and
(set-theoretic) classes, e.g., by adding up apples or by drawing triangles on paper, such
evidence only offers anecdotal knowledge. If programs are taken to be mathematical
objects (ONT-RAT) and the methods of computer science are the methods of
mathematical disciplines, then knowledge about programs can only proceed
deductively. Indeed, a rationalist position towards knowledge in branches of pure
mathematics such as geometry, logic, arithmetic, topology, and set theory largely
dismisses a posteriori knowledge as unreliable, ineffective, and not sufficiently
general.
Objections to EPI-RAT are examined in the following sections.
The technocratic turn away from the methods of theoretical computer science,
indeed away from all scientific practices, was most explicitly articulated by John
Pierce:
I don’t really understand the title, Computer Science. I guess I don’t
understand science very well; I’m an engineer. ... Computers are worth
thinking about and talking about and doing about only because they are useful
devices, which do something for somebody. If you are just interested in
contemplating the abstract, I would strongly recommend the belly button.
(Pierce 1968)
Indeed the technocratic doctrine contends that there is no room for theory or for
science in computer science. During the 1970s this position, promoted primarily by
software engineers and programming practitioners, came to dominate the various
branches of software engineering. Today, the principles of scientific experimen-
tation are rarely employed in software engineering research. An analysis of all
5,453 papers published during 1993–2002 in nine major software engineering
journals and proceedings of three leading conferences revealed that less than 2% of
the papers (!) report the results of controlled experiments. Even when conducted,
the statistical power of such experiments falls substantially below accepted norms,
as well as below the levels found in related disciplines (Dybå et al. 2006).
Instead of conducting experiments, software engineers use testing suites, the
purpose of which is to establish statistically the reliability of specific products of the
process of manufacturing software. For example, to establish the reliability of a
program designed for operating a microwave oven, software engineering educators
speak of a regimented process of software design (a precise specification of which is
hardly ever offered), followed by an 'implementation' phase during
which the program-script is encoded (about which little can be said), concluding
with the construction of a testing suite and executing (say) 10,000 program-
processes generated from the given program-script. If executed in a range of actual
(rather than hypothetical) microwave ovens, such a comprehensive test suite
furnishes the programmer with statistical data which can be used to quantitatively
establish the reliability of the computing system in question, e.g., using metrics
such as probability of failure on demand and mean time to failure (Sommerville
2006).
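As a rough sketch of how the two metrics just mentioned are computed (the procedure names below are illustrative and not taken from Sommerville 2006), both can be estimated directly from the outcomes of such a testing suite:

;; Probability of failure on demand (POFOD): the fraction of demands
;; (executed program-processes) that ended in failure.
(define (pofod failures demands)
  (exact->inexact (/ failures demands)))

;; Mean time to failure (MTTF): the average of the observed intervals
;; between successive failures.
(define (mttf intervals)
  (/ (apply + intervals) (length intervals)))

;; E.g., 3 failures observed among 10,000 generated program-processes:
;; (pofod 3 10000)  =>  0.0003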
Evidence of the decline of scientific methods is found in textbooks on software
engineering (e.g., Sommerville 2006). Rarely dedicating any space to deductive
reasoning16 and never to the principles of scientific experimentation in empirical
sciences, such textbooks cover the subjects of software design, software evolution,
and software testing, focusing on manufacturing and testing methods borrowed from
traditional engineering trades. Much discussed topics include models of software
development lifecycles, methods of designing testing suites, reliability metrics, and
statistical modelling.
The position of the technocratic paradigm concerning the methodological dispute
can thus be recast as follows:
16 At most, lip-service is paid to the role of verification in 'safety-critical software systems'.
17 For example, the Debian GNU/Linux 3.1 version of the Linux operating system (Debian 2007) is the product of contributions made by thousands of individuals that are entirely unrelated except in their attempt to improve it.
18 One petabyte (1 PB) is 1,024 terabytes, or 2^50 bytes.
Fetzer (1993) and Avra Cohn (1989) offer what is essentially an ontological
argument for an even stronger epistemological position, to which we shall refer as
the argument of category mistake. According to this argument, a priori knowledge
about the behaviour of machines is impossible in principle:
A proof that one specification implements another—despite being completely
rigorous, expressed in an explicit and well understood logic, and even checked
by another system—should still be viewed in context of many extra-logical
factors which affect the correct functioning of hardware systems. (Cohn 1989)
The technocratic position concerning the nature of knowledge can be justified by
the argument of category mistake as follows:
EPI-TECOnt It is impossible to prove deductively the correctness of any physical
object. A priori, certain knowledge about the behaviour of actual programs is
unachievable. If at all meaningful, ‘correctness’ must be taken to mean tested and
proven ‘reliability’, a posteriori knowledge about which is measured in
probabilistic terms and established using extensive testing suites.
Peter Markie (2004) defines empiricism as that school of thought which holds
that sense experience is the ultimate source of all our concepts and knowledge.
Empiricism rejects pure reason as a source of knowledge, indeed any notion of
a priori, certain knowledge, claiming that warranted beliefs are gained from
experience. Thus, EPI-TEC and EPI-TECOnt are in line with the empiricist
philosophical position.
The argument of complexity won the hearts of many computer scientists. As a
result, the technocratic doctrine has come to dominate software engineering journals
(IEEE TSE) and conferences (ICSE), contributions to which are traditionally judged
by experience gained from actual implementations—‘‘concrete, practical applica-
tions’’—which must be employed to demonstrate any thesis put forth, may it be
theoretical or practical. Software engineering classics such as the 1969 NATO
report (Naur and Randell 1969) and the grand ‘‘Software Engineering Body of
Knowledge’’ project (Abran and Moore 2004) hold a posteriori knowledge to be
superior to all other knowledge about programs and dismiss or neglect the role of
formal deduction. The same position is widely embraced in all branches of software
design. For example, the merits of design patterns (Gamma et al. 1995) and
architectural styles (Perry and Wolf 1992) are measured almost exclusively in terms
of the number of successful applications thereof.
The records of the NATO conference on software engineering (Naur and Randell
1969) quote van der Poel as suggesting that program-scripts are themselves just
‘‘bunches of data’’:
A program [script] is a piece of information only when it is executed. Before
it’s really executed as a program in the machine it is handled, carried to the
machine in the form of a stack of punch cards, or it is transcribed, whatever is
the case, and in all these stages, it is handled not as a program but just as a
bunch of data. (Van der Poel, in Naur and Randell 1969)
If mere ‘‘bunches of data’’, representing a configuration of the electronic charge
of a particular printed circuit, program-scripts are on a par with (the manuscript of)
Shakespeare’s Hamlet and (the pixelized representation of) Botticelli’s The Birth of
Venus. Therefore ‘that which can be represented by data’ can be just about anything,
including non-existent entities such as Hamlet and Venus. The existence of those
putative abstract (intangible, non-physical) entities must therefore be rejected.
This objection can be attributed to a nominalist position in traditional
metaphysics. Nominalism (Loux 1998) seeks to show that discourse about abstract
entities is analysable in terms of discourse about familiar concrete particulars.
Motivated by an underlying concern for ontological parsimony, and in particular the
proliferation of universals in the platonist’s putative sphere of abstract existence, the
nominalist principle commonly referred to as Occam’s Razor (‘‘don’t multiply
entities beyond necessity’’) denies the existence of abstract entities. By this
ontological principle, nothing exists outside of concrete particulars, not even
entities that are 'that which is fully and precisely defined by the program script'
(ONT-RAT). The existence of a program is therefore unnecessary.
The technocratic ontology can thus be summarized as follows:
ONT-TEC ‘That which is fully and precisely represented by a script sp’ is a
putative abstract (intangible, non-physical) entity whose existence is not
supported by direct sensory evidence. The existence of such entities must be
rejected. Therefore, ‘programs’ do not exist.
Indeed, the recurring analogies to airplanes, power stations, chemical
analyzers, and other engineered artefacts for which no ontologically independent
notion of a program is meaningful seem to support ONT-TEC. But while ONT-
TEC is corroborated by a nominalist position, it is not committed thereto. In the
absence of an explicit commitment to any particular school of thought in
Allen Newell and Herbert Simon, prominent pioneers of AI, define computer
science as follows:
Computer science is the study of the phenomena surrounding computers ... it
is an empirical discipline ... an experimental science ... like astronomy,
economics, and geology. (Newell and Simon 1976)
Scientific experiments are traditionally concerned with ‘natural’ objects, such as
chemical compounds, DNA sequences, stellar bodies (e.g., Eddington’s 1919 solar
eclipse experiment), atomic particles, or human subjects (e.g., experiments
concerning cognitive phenomena.) It can be argued that the notion of scientific
experiment is only meaningful when applied to ‘natural’ entities but not to
‘artificial’ objects such as programs and computers; namely, that programs and
computers cannot be the subject of scientific experiments:
There is nothing natural about software or any science of software. Programs
exist only because we write them, we write them only because we have built
computers on which to run them, and the programs we write ultimately reflect
the structures of those computers. Computers are artifacts, programs are
artifacts, and models of the world created by programs are artifacts. Hence,
any science about any of these must be a science of a world of our own making
rather than of a world presented to us by nature. (Mahoney 2002)
As a reply, Newell and Simon contend that, even if they are indeed contingent
artefacts, programs are nonetheless appropriate subjects for scientific experiments,
albeit of a novel sort ("nonetheless, they are experiments"; Newell and Simon
1976). Their justification for this position is simple: If programs and computers are
taken to be some part of reality, in particular if the scientific ontology (ONT-SCI) is
implement deviate from the phenomena they seek to explain. In Popper’s (1963)
terms, the difference between programs and the (naturalistic view of) reality is at
most limited by the verisimilitude (or truthfulness) of our most advanced scientific
theory. The progress of science is manifest in the increase in this verisimilitude.
Since any distinction between the subject matter of computer science and natural
sciences is taken to be at most the product of the (diminishing) inaccuracy of
scientific theories, the methods of computer science are the methods of natural
sciences.
But the methods of the scientific paradigm are not limited to empirical validation,
as mandated by the technocratic paradigm. Notwithstanding the technocratic
arguments to the unpredictability of programs (as well as the additional arguments
we examine in §4.2), the deductive methods of theoretical computer science have
been effective in modelling, theorizing, reasoning about, constructing, and even in
predicting—albeit only to a limited extent—innumerable actual programs in
countless practical domains. For example, context-free languages have been
successfully used to build compilers (Aho et al. 1986); computable notions of
formal specifications (Turner 2005) offer deductive methods of reasoning on
program-scripts without requiring the complete representation of petabytes of
program and data; and classical logic can be used to distinguish effectively between
abstraction classes in software design statements (Eden et al. 2006). If computer
science is indeed a branch of natural sciences then its methods must also include
deductive and analytical methods of investigation.
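To illustrate the first of these examples with a toy case of our own (the grammar and procedures below are illustrative, not taken from Aho et al. 1986), a context-free grammar such as E ::= number | ( E + E ) maps directly onto a recursive-descent recognizer, each production becoming one clause of a procedure:

;; Consume one expression E from a token list; return the remaining tokens,
;; or #f if no well-formed E begins the list.
(define (parse-expr tokens)
  (cond ((null? tokens) #f)
        ((number? (car tokens)) (cdr tokens))          ; E ::= number
        ((eq? (car tokens) 'lparen)                    ; E ::= ( E + E )
         (let ((rest (parse-expr (cdr tokens))))
           (and rest (pair? rest) (eq? (car rest) '+)
                (let ((rest2 (parse-expr (cdr rest))))
                  (and rest2 (pair? rest2) (eq? (car rest2) 'rparen)
                       (cdr rest2))))))
        (else #f)))

;; A token list belongs to the language iff parsing consumes it entirely.
(define (expr? tokens) (equal? (parse-expr tokens) '()))
;; (expr? '(lparen 1 + 2 rparen)) => #t
;; (expr? '(lparen 1 + rparen))   => #f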
From this Wegner (1976) concludes that theoretical computer science stands to
computer science as theoretical physics stands to physical sciences: deductive
analysis therefore plays the same role in computer science as it plays in other
branches of natural sciences. Analytical investigation is used to formulate
hypotheses concerning the properties of specific programs, and even if this proves to
be a highly complex task (e.g., Table 4) it is nonetheless an indispensable step in any
scientific line of enquiry.
Tim Colburn concurs with this view and concludes that in reality the tenets of the
scientific paradigm offer the most complete description of the methods of computer
science:
Computer science ‘‘in the large’’ can be viewed as an experimental discipline
that holds plenty of room for mathematical methods, including formal
verification, within theoretical limits of the sort emphasized by Fetzer
(Colburn 2000, p. 154)
To summarize, the scientific position concerning the methodological question
(MET) can therefore be distinguished from the rationalist (MET-RAT) and the
technocratic (MET-TEC) positions as follows:
MET-SCI Computer science is a natural science on a par with astronomy,
geology, and economics; any distinction between their respective subject matters
is no greater than the limitations of scientific theories. Seeking to explain, model,
understand, and predict the behaviour of computer programs, the methods of
computer science include both deduction and empirical validation. Theoretical
(3) Given any two states s1 and s2, the futures of some states near s1 eventually
become near s2 (Devaney 1989).
For example, the future state of a program calculating the nth value of formula
(4) for some r > 3 satisfies the conditions of a deterministically chaotic phenomenon,
and therefore cannot be determined analytically.
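A minimal sketch, assuming that formula (4) is the logistic map xₙ₊₁ = r·xₙ(1 − xₙ), the textbook example of deterministic chaos for sufficiently large r: two program-processes started from nearly identical states soon bear no resemblance to one another, so the nth value cannot be predicted from any inexact measurement of the initial state.

;; One step of the logistic map, and its n-fold iteration from x0.
(define (logistic r x) (* r x (- 1 x)))

(define (iterate-map r x0 n)
  (if (= n 0)
      x0
      (iterate-map r (logistic r x0) (- n 1))))

;; Two runs whose initial states differ by one part in a billion diverge
;; completely within a few dozen iterations (try, e.g., r = 3.9):
;; (iterate-map 3.9 0.5 50)
;; (iterate-map 3.9 0.500000001 50)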
Already in 1946, before the principles of chaos theory had been developed and
evidence of its widespread applicability had been presented, von Neumann observed
that the outcome of programs computing non-linear mathematical functions cannot
be analytically determined:
Our present analytical methods seem unsuitable for the solution of the
important problems arising in connection with non-linear partial differential
equations and, in fact, with virtually all types of non-linear problems in pure
mathematics. (von Neumann, in Mahoney 2002)
In 1979, DeMillo et al. illustrated how 'chaotic' computer programs are by appealing to the
example of weather systems, in which an event as minute as the flap of a butterfly's
wings may potentially have a disproportionate effect, indeed a result as catastrophic
as causing a hurricane:
Every programmer knows that altering a line or sometimes even a bit can
utterly destroy a program or mutilate it in ways that we do not understand and
cannot predict. ... Until we know more about programming, we had better for
all practical purposes think of systems as composed, not of sturdy structures
like algorithms and smaller programs, but of butterflies’ wings. (DeMillo et al.
1979)
In other words, even if a program was not specifically encoded to calculate a non-
linear function, in effect its behaviour amounts to that of such a program, because
one part or another of it is non-linear. DeMillo et al. specifically mention operating
systems and compilers, which in effect take a large part in the behaviour of almost
any program. Therefore, it is very unlikely that knowledge about all but the
most trivial programs can be established without conducting experiments.
Knuth conceded the weight of the argument of non-linearity, in particular in
relation to the class of programs that are the concern of artificial life:
It is abundantly clear that a programmer can create something and be totally
aware of the laws that are obeyed by the program, and yet be almost totally
unaware of the consequences of those laws; [for example,] running a program
from a slightly different configuration often leads to really surprising new
behaviour. (Knuth Undated)
Berry et al. corroborate the argument of non-linearity by showing that the very
behaviour of microprocessors is chaotic when executing certain program-processes:
19 We ignore, for the moment, difficulties arising from concurrency and the possibility of suspending the execution of program-processes.
20 That is, the computational process by the central processing unit depends on the consumption of energy; if suspended, program-processes cease to exist.
21 Turing's forecast named the year 2000 as a target. During that year, Jim Moor conducted an experiment which refuted Turing's prediction, but he hastens to add: "Of course, eventually, 50 years from now or 500 years from now, an unrestricted Turing test might be passed routinely by some computers. If so, our jobs as philosophers would just be beginning" (Moor 2000).
5 Discussion
22 Hoare (2006) has recently conceded that "Because of its effective combination of pure knowledge and applied invention, Computer Science can reasonably be classified as a branch of Engineering Science."
contributed to the dominance of the technocratic doctrine in all but some branches
of AI.
As a result of the increasing influence that the technocratic paradigm has been
having on undergraduate curricula, ‘computer science’ academic programs are
seldom true to their name. Courses teaching computability, complexity, automata
theory, algorithmic theory, and even logic in undergraduate programs have been
dropped in favour of courses focusing on technological trends, teaching software
design methodologies, software modelling notations (e.g., the Unified Modelling
Language 2005)23, programming platforms, and component-based software engineering
technologies. As a result, a growing proportion of academic programs churn out
increasing numbers of graduates in 'computer science' with no background in the
theory of computing and no understanding of the theoretical foundations of the
discipline.
In 1988, Dijkstra scathingly attacked the decline of mathematical, conceptual,
and scientific principles, a trend which has turned computer science programmes
into semi-professional schools which train students in commercially driven, short-
lived technology:
So, if I look into my foggy crystal ball at the future of computing science
education, I overwhelmingly see the depressing picture of ‘‘Business as
usual’’. The universities will continue to lack the courage to teach hard
science, they will continue to misguide the students, and each next stage of
infantilization of the curriculum will be hailed as educational progress.
(Dijkstra 1988)
It is difficult to determine precisely the outcome of the domination of the
technocratic doctrine on computer science education, but the anti-scientific attitude
has evidently taken its toll on the software industry. Since it was declared at the
1968 NATO conference (Naur and Randell 1969), the never-ending state of
'software crisis' has been renamed 'software's chronic crisis' (Gibbs 1994) and in
2005 it was pronounced 'software hell' (Carr 2004). The majority of multimillion-
dollar software development projects, government and commercial alike,
continue to end with huge losses and no gains (Carr 2004). As standard practice,
software manufacturers sign their clients on End-User Licence Agreements
(EULAs) which offer less of a guarantee for their merchandise than that for any other
commodity, with the possible exception of casinos and used cars. Much of the
professional literature refers to software in a jargon borrowed from mathematics,
melodrama, and witchcraft in almost equal measures (e.g., Raymond 1996). Crimes
involving bypassing security bots guarding the most heavily protected electronically
stored secrets and spreading a wide spectrum of software malware have become part
of daily life. The correct operation of the majority of computing devices has become
largely dependent on daily—even hourly—updates of a host of defence mecha-
nisms: firewalls, anti-virus, anti-spyware, anti-trojans, anti-worms, anti-dialers, anti-
rootkits, etc. Even with the widespread use of these defence mechanisms, virtually
no computer is invulnerable to malicious programs that disable and overtake global
23 To which Bertrand Meyer's (1997) satirical critique offers valuable insights.
Epilogue
Acknowledgements Special thanks go to Ray Turner for reviewing draft arguments and for his
guidance and continuous support, without which this paper would not have been possible; to Jack
Copeland for his guidance on matters of traditional philosophy; and to Bill Rapaport for his detailed
comments. We also thank Tim Colburn (2000) and Bill Rapaport (2005) without whose extensive
contributions the nascent discipline of philosophy of computer science would not exist; Barry Smith for
his guidance; Susan Stuart for developing the contentions made in this paper; Naomi Draaijer for her
support; Yehuda Elkana, Saul Eden-Draaijer, and Mary J. Anna for their inspiration. This research was
supported in part by grants from UK’s Engineering and Physical Sciences Research Council and the
Royal Academy of Engineering.
References
Abran, A., & Moore, J. W. (Eds.) (2004). Guide to the Software Engineering Body of Knowledge—
SWEBOK (2004 ed.) Los Alamitos: IEEE Computer Society.
Abelson, H., Sussman, J.J. (1996). Structure and Interpretation of Computer Programs. (2nd ed.)
Cambridge: MIT Press.
Aho, A. V., Sethi, R., & Ullman, J. D. (1986). Compilers: Principles, techniques, and tools. Reading:
Addison Wesley.
Balaguer, M. (2004). Platonism in metaphysics. In: E. N. Zalta (Ed.), The Stanford Encyclopedia of
philosophy (Summer 2004 ed.) Available http://plato.stanford.edu/archives/sum2004/entries/platonism.
(Accessed March 2007.)
Bedau, M. A. (2004). Artificial life. In: L. Floridi (Ed.), The Blackwell guide to philosophy of computing
and information. Malden: Blackwell.
Berry, H., Pérez, D. G., & Temam, O. (2005). Chaos in computer performance. Nonlinear Sciences
arXiv:nlin.AO/0506030.
Brent, R., & Bruck, J. (2006). Can computers help to explain biology? Nature, 440, 416–417.
Bundy, A. (2005). What kind of field is AI? In: D. Partridge, & Y. Wilks (Eds.), The foundations of
artificial intelligence. Cambridge: Cambridge University Press.
Carr, N. G. (2004). Does IT matter? Information technology and the corrosion of competitive advantage.
Harvard Business School Press.
Cohn, A. (1989). The notion of proof in hardware verification. Journal of Automated Reasoning, 5(2),
127–139.
Colburn, T. R. (2000). Philosophy and computer science. Armonk, N.Y.: M.E. Sharpe.
Copeland, B.J. (2002). The Church-Turing thesis. In: Edward N. Zalta (Ed.) The Stanford Encyclopedia
of Philosophy (Fall 2002 ed.) Available http://plato.stanford.edu/archives/fall2002/entries/church-
turing/ (Accessed Mar. 2007).
Copeland, B.J. (2006). Are computer programs natural kinds? Personal correspondence.
Devaney, R. L. (1989). Introduction to chaotic dynamical systems (2nd ed.). Redwood: Benjamin-
Cummings Publishing.
Debian Project, The (2007). http://www.debian.org. Accessed March 2007.
DeMillo, R. A., Lipton, R. J., & Perlis, A. J. (1979). Social processes and proofs of theorems and
programs. Communications of the ACM, 22(5), 271–280.
Denning, P. J. (1989). A debate on teaching computing science. Communications of the ACM, 32(12),
1397–1414.
Denning, P. J., Comer, D. E., Gries, D., Mulder, M. C., Tucker, A., Turner, A. J., & Young, P. R. (1989).
Computing as a discipline. Communication of the ACM, 32(1), 9–23.
Dijkstra, E.W. (1988) On the cruelty of really teaching computing science. Unpublished manuscript EWD
1036.
Dybå, T., Kampenesa, V. B., & Sjøberg, D. I. K. (2006) A systematic review of statistical power in
software engineering experiments. Information and Software Technology, 48(8), 745–755.
Eden, A. H., Hirshfeld, Y., & Kazman, R. (2006) Abstraction classes in software design. IEE Software,
153(4), 163–182. London, UK: The Institution of Engineering and Technology.
Einstein, A. (1934). Mein Weltbild. Amsterdam: Querido Verlag.
Fasli, M. (2007). Agent technology for E-commerce. London: Wiley.
Fetzer, J. H. (1993). Program verification. In: J. Belzer, A. G. Holzman, A. Kent, & J. G. Williams (Eds.),
Encyclopedia of computer science and technology (Vol. 28, Supplement 13). New York: Marcel
Dekker Inc.
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. M. (1995). Design patterns: Elements of reusable
object-oriented software. Reading: Addison-Wesley.
Georick, W., Hoffmann, U., & Langmaack, H. (1997). Rigorous compiler implementation correctness:
How to prove the real thing correct. Proc. Intl. Workshop Current Trends in Applied Formal
Method. Lecture Notes in Computer Science, Vol. 1641, pp. 122–136. London, UK: Springer-
Verlag.
Putnam, H. (1975). Minds and machines. In: Philosophical papers, Vol. 2: Mind, Language, and reality.
pp. 362–385. Cambridge: Cambridge University Press.
Quine, W. V. O. (1969). Natural kinds. In: Ontological reality and other essays. Columbia University
Press.
Rapaport, W. J. (2007). Personal correspondence.
Rapaport, W. J. (2005). Philosophy of computer science: An introductory course. Teaching Philosophy,
28(4), 319–341.
Raymond, E. S. (1996). The New Hacker’s Dictionary (3rd ed.). Cambridge: MIT Press.
Simon, H. A. (1969). The sciences of the artificial (1st ed.) Boston: MIT Press.
Sommerville, I. (2006). Software engineering (8th ed.) Reading: Addison Wesley.
Stack, G. S. (1998). Materialism. Routledge Encyclopedia of Philosophy (electronic Ver. 1.0). London
and New York: Routledge.
Steinhart, E. (2003). Supermachines and superminds. Minds and Machines, 13(1), 155–186.
Stoy, J. E. (1977). Denotational semantics: The Scott-Strachey approach to programming language
theory. Cambridge: MIT Press.
Strachey, C. (1973). The varieties of programming language. Tech. Rep. PRG-10 Oxford University
Computing Laboratory.
Szyperski, C. A. (2002). Component software—Beyond object-oriented programming (2nd ed.). Reading:
Addison-Wesley.
Turing, A. M. (1936). On computable numbers, with an application to the entscheidungsproblem.
In Proc. London Math. Soc. Ser., 2, 43(2198). Reprinted in Turing & Copeland (2004).
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Turing, A. M., & Copeland, B. J. (Ed.) (2004). The essential Turing: Seminal writings in computing,
logic, philosophy, artificial intelligence, and artificial life plus the secrets of Enigma. Oxford, USA:
Oxford University Press.
Turner, R. (2005). The foundations of specification. Journal of Logic & Computation, 15(5), 623–663.
Turner, R. (2007). Personal correspondence.
Wegner, P. (1976). Research paradigms in computer science. In Proc. 2nd Int’l Conf. Software
engineering, San Francisco, CA, pp. 322–330.