Sci Eng Ethics (2008) 14:245–249
DOI 10.1007/s11948-008-9072-7
COMMENT
Critiquing a Critique
A Comment on ‘‘A Critique of Positive Responsibility
in Computing’’
Keith W. Miller
Received: 1 April 2008 / Accepted: 14 April 2008 / Published online: 7 May 2008
Springer Science+Business Media B.V. 2008
K. W. Miller, Department of Computer Science, University of Illinois at Springfield, 1 University Plaza, Springfield, IL 62703, USA; e-mail: kmill2@uis.edu
I am not a big fan of meta-critiques, but James Stieb’s article ‘‘A Critique of
Positive Responsibility in Computing’’ [1] includes claims that I judge to be
overstated and that should be challenged. Stieb’s article also discusses at least
three important questions that I think should be explored further.
Overstated Claims
Responsibility for a ‘‘Perhaps Unlimited’’ List
In his introduction, Stieb quotes Gotterbarn, and then writes, ‘‘Apparently, he holds
computer professionals responsible for an undisclosed, perhaps unlimited, list of
‘undesirable events,’ including what most people would call ‘bugs’ or ‘computer
errors’’’[1]. This is not apparent to this reader of Gotterbarn’s work. Although he
does encourage computer professionals to take responsibility for their work
(including its shortcomings), nowhere in Gotterbarn’s writings appears anything
close to the claim that computer professionals are responsible for a ‘‘perhaps
unlimited list of undesirable events.’’ Any serious discussion of the responsibilities
of computer professionals would have to analyze the relationship (or lack of same)
between the undesirable events and the professional’s actions.
The Butterfly Effect and its Significance
Stieb quotes Gotterbarn: ‘‘Everything that an engineer develops has a direct impact on
mankind’’ [1]. Gotterbarn’s sentence is reminiscent of the ‘‘Butterfly Effect’’ related
to Edward Lorenz’s work on the potential dramatic effects of small actions [2].
Gotterbarn’s sentence seems unremarkable and obvious, though perhaps stated a bit
grandly. Although the impact may be large or small, what one does as a professional
(and as a private citizen, for that matter) has an impact on other people. While
Gotterbarn’s sentence seems to me straightforward, Stieb’s subsequent ‘‘interpretation’’ of Gotterbarn’s sentence is anything but straightforward. Stieb writes, ‘‘Here
Gotterbarn holds every engineer morally ‘responsible’ for all bad effects to mankind’’
[1] (Stieb’s emphasis). Gotterbarn’s sentence (or Gotterbarn’s surrounding paragraph
or Gotterbarn’s 1995 paper [3]) does not contain claims that logically lead to Stieb’s
claim. Sentence A does not imply sentence B. It just doesn’t follow.
A does not Contradict X, therefore A Supports X
Stieb writes: ‘‘Yet, nothing in Gotterbarn’s work appears to contradict the notion of
holding programmers and others responsible for most any undesirable event
associated with the computing and design process’’ [1] (Stieb’s emphasis). Stieb
seems to imply that because Gotterbarn does not explicitly deny X, Gotterbarn
must agree with X. As far as I know, neither Gotterbarn nor Stieb has publicly
denied that the moon is made of Roquefort cheese; however, I do not infer that
either holds that position. Stieb’s extreme statement about responsibility for
computer professionals is not Gotterbarn’s.
X is an Awful Thing, so Y is Awful
Stieb writes: ‘‘Second, positive responsibility is pernicious in the sense that it is
open-ended, it makes professionals responsible for any ‘undesirable events’
regardless of who interprets them as undesirable and why’’ [1]. The definition
that Stieb ascribes to ‘‘positive responsibility’’ is not recognizably Gotterbarn’s.
Stieb distorts Gotterbarn’s position, and then ridicules the distortion.
X Leads to Y, and Y is Awful; Except that X doesn’t Really Lead to Y
Stieb writes: ‘‘On the other hand, taken strongly, a stakeholder theory is
preposterous overkill if it allows the mail clerk to overrule decisions made by
boards of directors when they disagree with the interests of mail clerks everywhere’’
[1]. Surely that would be preposterous, but stakeholder theory has never required
this absurd extreme of empowerment, and Stieb’s characterization of the example as
‘‘fanciful’’ seems to recognize that. If so, then what is Stieb’s point, exactly? If
stakeholder theory actually leads to absurd conclusions, then the theory needs
adjusting; but if it does not actually lead to absurd conclusions, then how does this
absurd scenario add to the discussion?
Three Questions Worth Exploring
Although they are somewhat hidden beneath problematic rhetoric, at least three
important questions (each worth exploring) can be found in Stieb’s article.
What should be the Ethical Goals of a Computing Profession?
Stieb quotes from an Ayn Rand novel: ‘‘I don’t intend to build in order to serve or
help anyone out…’’ [4]. Stieb then states: ‘‘One need not support all that Rand stood
for, nor battle her considerable opposition to point out that the primary goal of
professionalism is competent creation’’ [1]. Stieb states this as if it were a truism:
primary goal of professionalism = competent creation. Stieb’s statement is difficult
to defend if it is taken to be descriptive. Instead, it should be taken as normative:
Stieb asserts that the primary goal of professionalism should be competent creation.
That normative statement is part of an interesting issue that is worth exploring: what
should be the goals of computing professionals?
Competent creation is certainly a worthy goal, one that is explicit in
the codes of ethics Stieb criticizes. But should competent creation be the primary
goal, trumping other candidates such as service to the public, duties to self, and
loyalty to an employer? Traditionally (though not universally), codes of ethics for
organizations that identify themselves as ‘‘professional’’ tend to make obligations to
the public primary. However, that does not settle the question of whether those
obligations to the public should be primary. That is a question worth exploring.
This issue could also be framed as a question of whether computing should be
considered a profession (which traditionally does suggest an abstract ‘‘pact with the
public’’) or a craft (which does not usually carry that same connotation). Is a
programmer a professional in a fiduciary role with a customer, or is a programmer a
technician focusing more narrowly on the task at hand? These are two different
views (there are many others) of what a programmer should be, and both can be
reasonably defended, but the two views lead in quite different directions. A related
debate involves what Computer Science students should be taught. If computer
professionals have broader responsibilities, then the curriculum should be broader.
Computer technicians have narrower responsibilities, which would suggest a
narrower, deeper focus. Clearly there are necessary tradeoffs, and those tradeoffs
require careful judgments.
Stieb has made a case for dramatically narrowing the range of expectations for
computing professionals; others, including Gotterbarn, differ [5]. A great deal of work
remains to be done in trying to better understand what goals are appropriate for people
who work with computers, and what goals (if any) should have priority.
What is an Appropriate Ethical Stance towards Bugs?
Stieb writes: ‘‘Insisting that every ‘bug’ or ‘computer error’ is an ethical lapse, runs
the risk of confusing efficiency with ethics to the detriment of both’’ [1].
Gotterbarn’s position about bugs and ethics is not fairly reflected in Stieb’s paper.
However, the issue of bugs in software is a genuinely interesting question, both for
scholars and practitioners concerned with the ethics of computing.
Producing bug-free software is astronomically unlikely, even for moderately
complex projects. Given that reality, what is an appropriate ethical stance towards
delivering software that almost inevitably includes bugs? There are several
approaches to this problem discussed in the literature, including informed consent
(in which developers are transparent about known bugs and the amount of testing
that has been done) [6], reducing the number of features in software to reduce
complexity (and therefore increase reliability) [7], and allowing market forces to
determine the proper balance between increasing reliability and decreasing costs [8]
(similar to Stieb’s conclusion, I think). All three of these proposed approaches (and
many others not mentioned here) have advantages and disadvantages. Clear thinking
and precise writing about this issue would be a useful contribution to the field.
What is the Public Good with Respect to Computing?
Stieb writes: ‘‘Given this psychological egoism, and the problems with gauging
society’s good, attention to the public good (in a positive sense) should be given up
in favor of the promotion of individual good’’ [1]. Stieb suggests that because it is
difficult to gain consensus about an exact definition of ‘‘the public good,’’ computer
professionals should abandon any attention to the public good. Such abandonment
would be contrary to a long tradition of engineers (and other professionals).
Engineers striving to be competent must deal with similarly vague ideals such as
‘‘good design’’. ‘‘Good design’’ will never have a final definition, but it is often
used, for example, by software engineers to distinguish good software from bad
software. Indeed, a goal that Stieb endorses, efficiency, is itself a term that is
endlessly debated by professionals in software development. For example, the
tradeoffs between machine efficiency, user efficiency and programmer efficiency
make it notoriously difficult to decide whether or not an engineer is being optimally
‘‘efficient’’ when designing, implementing, and maintaining software.
All thinking people have to deal with terms that do not, and will not, have
precise meanings. The derogatory label ‘‘email spam’’ and the everyday designation
‘‘chair’’ are both difficult to define in an air-tight way, yet most people most of the
time communicate effectively about both spam and chairs. (I thank Dartmouth’s
James Moor for clarifying this point in a recent paper [9]). Similarly, ‘‘the public
good’’ can be a useful concept even though it is vague.
Even those who differ with Stieb on his dismissal of the public good as a useful
concept can agree that he has raised an interesting issue: what exactly is the public
good (especially with respect to computing professionals), and who gets to decide that
question? Because of the global reach of computing and the Internet, ‘‘the public’’ in
computing is a very large group indeed, and determining what is good for such a
diverse collection is a daunting task. However, unless computing professionals want
to join Stieb in abandoning the public good as a practical goal, then scholars and
practitioners should work on a better (though it can never be perfect) understanding of
what ‘‘the public good’’ means for computing professionals.
References
1. Stieb, J. A. (2008). A critique of positive responsibility in computing. Science and Engineering Ethics,
14(2).
2. Lorenz, E. (1996). The essence of chaos (pp. 14–15). University of Washington Press.
3. Gotterbarn, D. (1995). The moral responsibility of software developers: Three levels of professional
software engineering. The Journal of Information Ethics, 4(1), 54–64.
4. Rand, A. (1961). The virtue of selfishness. New York: Signet.
5. Bynum, T. W., & Rogerson, S. (Eds.). (2004). Computer ethics and professional responsibility.
Malden, MA: Blackwell.
6. Miller, K. (1998). Software informed consent: Docete emptorem, not caveat emptor. Science and
Engineering Ethics, 4(3), 357–362.
7. McGibbon, T., & Nicholls, D. (2002). Making the (business) case for software reliability. In Proceedings
of the Annual Reliability and Maintainability Symposium (pp. 285–292).
8. Flor, N. V., Lerch, F. J., & Hong, S. (1999). A market-driven approach to teaching software
components engineering. Annals of Software Engineering, 6(1–4), 223–251.
9. Miller, K., & Moor, J. (2008). The ethics of spam. In K. Himma & H. Tavani (Eds.), Handbook of
information and computer ethics. John Wiley and Sons, Inc.