Machine Morality and Human Responsibility - Rubin
Center for the Study of Technology and Society is collaborating with JSTOR to digitize,
preserve and extend access to The New Atlantis
Symposium IV
Charles T. Rubin, a New Atlantis contributing editor and an author of the Futurisms
blog on transhumanism at TheNewAtlantis.com, is an associate professor of political science
at Duquesne University.
of robots, we will be as ever before - or perhaps as never before - stuck
with morality.
It should be noted, of course, that the type of artificial intelligence
of interest to Čapek and today's writers - that is, truly sentient artificial
intelligence - remains a dream, and perhaps an impossible dream. But if it
is possible, the stakes of getting it right are serious enough that the issue
demands to be taken somewhat seriously, even at this hypothetical stage.
Though one might expect that nearly a century's time to contemplate
these questions would have yielded some store of wisdom, it turns out
that Čapek's work shows a much greater insight than the work of today's
authors - which in comparison exhibits a narrow definition of the threat
posed to human well-being by autonomous robots. Indeed, Čapek chal-
lenges the very aspiration to create robots to spare ourselves all work,
forcing us to ask the most obvious question overlooked by today's authors:
Can any good come from making robots more responsible so that we
can be less responsible?
We can argue as much as we want about the content - that is, about what
specific actions an AI should actually be obligated or forbidden to do - so
Summer 2011 ~ 61
But Yudkowsky casts some light on how this route to making machines
more moral than humans is not so easy after all. He complains about those,
like Čapek, who have written fiction about immoral machines. They imagine these machines to be motivated by the sorts of things that motivate
humans: revenge, say, or the desire to be free. That is absurd, he claims.
Such motivations are a result of our accidental evolutionary heritage:
overthrow the tribal chief - or rather, replace the tribal chief - if the
opportunity presented itself, and so on. Even if an AI tries to exterminate humanity, ve [sic, again] won't make self-justifying speeches about how humans had their time, but now, like the dinosaur, have become obsolete. Guaranteed. Only Evil Hollywood AIs do that.
and is now the lone human survivor, "You have to kill and rule if you want to be like people. Read history! Read people's books! You have to conquer and murder if you want to be people!"

As for Domin's goal, then, of creating a worldwide aristocracy in which the most worthy and powerful class of beings rules, one might say that indeed with the successful robot rebellion the best man has won. The
only thing that could prove to him that the robots were yet more human
would be for them to turn on themselves - for, as he says, "No one can
hate more than man hates man!" But he fails to see that his own nomi-
nally altruistic intentions could be an expression of this same hatred of
the merely human. Ultimately, Domin is motivated by the same belief of
the Rossums that the humans God created are not very impressive - God,
after all, had "no notion of modern technology."
As for notions of modern technology, there is another obvious but far
less noble purpose for friendly robots than the lofty ones their makers
typically proclaim: they could be quite useful for turning a profit. This
is the third definition of friendly robots implicitly offered by the Rossum
camp, through Busman, the firm's bookkeeper. He comes to understand
that he need pay no mind to what is being sold, nor to the consequences
of selling it, for the company is in the grip of an inexorable necessity - the
power of demand - and it is "naïve" to think otherwise. Busman admits
to having once had a "beautiful ideal" of "a new world economy"; but now,
as he sits and does the books while the crisis on the island builds and the
last humans are surrounded by a growing robot mob, he realizes that the
world is not made by such ideals, but rather by "the petty wants of all
respectable, moderately thievish and selfish people, i.e., of everyone." Next
to the force of these wants, his lofty ideals are "worthless."
Whether in the form of Busman's power of demand or of Domin's uto-
pianism, claims of necessity become convenient excuses. Busman's view
means that he is completely unwilling to acknowledge any responsibility
on his part, or on the part of his coworkers, for the unfolding disaster - an
absolution which all but Alquist are only too happy to accept. When Dr.
Gall tries to take responsibility for having created the new-model robots,
one of whom they know to be a leader in the rebellion, he is argued out
of it by the specious reasoning that the new model represents only a tiny
fraction of existing robots.
Čapek presents this flight from responsibility as having the most
profound implications. For it turns out that, had humanity not been killed
off by the robots quickly, it was doomed to a slower extinction in any
case - as women have lost the ability to bear children. Helena is terrified
about who is a robot and who is a human. In the Prologue, the robots are dressed just like the human beings, but in the remainder of the play, they are dressed in numbered, dehumanizing uniforms. On the other hand, Helena gets Dr. Gall to perform the experiments to modify robots to make them more human - which she believes would bring them to understand human beings better and therefore hate them less. (It is in response to this point that Domin claims no one can hate man more than man does, a proposition Helena rejects.) Dr. Gall changes the "temperament" of some robots - they are made more "irascible" than their fellows - along with "certain physical details," such that he can claim they are "people." Gall only changes "several hundred" robots, so that the ratio of unchanged to changed robots is a million to one; but we know that Damon, one of the new robots sold, is responsible for starting the robot rebellion. Helena, then, bears a very large measure of responsibility for the carnage that follows. But this outcome means that in some sense she got exactly what she had hoped for. In a moment of playful nostalgia before things on the island start to go bad, she admits to Domin that she came with "terrible intentions... to instigate a r-revolt among your abominable Robots."

Helena's mixed feelings about the objects of her philanthropy - or to be more precise, her philanthropoidy - help to explain her willingness to believe Alquist when he blames the rebellious robots for human infertility. And they presage the speed with which she eventually takes the decisive action of destroying the secret recipe for manufacturing robots - an eye for an eye, as it were. It is not entirely clear what the consequences of this act might be for humanity. For it is surely plausible that, as Busman thinks, the robots would have been willing to trade safe passage for the remaining humans for the secret of robot manufacturing. Perhaps, under the newly difficult human circumstances, Helena could have been the mother of a new race. But just as Busman intended to cheat the robots in this trade if he could, so too the robots might have similarly cheated human beings if they could. All we can say for sure is that if there were ever any possibility for the continuation of the human race after the robot rebellion, Helena's act unwittingly eliminates it by removing the last bargaining chip.

In Čapek's world, it turns out that mutual understanding is after all unable to moderate hatred, while Helena's quest for robot equality and Domin's quest for robot slavery combine to end very badly. It is hard to believe that Čapek finds these conclusions to be to humanity's credit.

The fact that Helena thinks a soul can be manufactured suggests that she has not really abandoned the materialism that Domin has announced as the premise for robot creation. It is significant, then, that the
love" that "life shall not perish." Perhaps, Alquist seems to imply, in the face of robot love, God will call forth the means of maintaining life - from a biblical point of view, it would indeed be no unusual thing for the hitherto barren to become parents. Even short of such a rebirth, Alquist finds comfort in his belief that he has seen the hand of God in the love between robot Helena and Primus:
"So God created man in his own image, in the image of God created
he him; male and female created he them. And God blessed them,
and God said unto them, Be fruitful, and multiply, and replenish the
earth.... And God saw every thing that he had made, and, behold, it
was very good." . . . Rossum, Fabry, Gall, great inventors, what did you
ever invent that was great when compared to that girl, to that boy, to
this first couple who have discovered love, tears, beloved laughter, the
love of husband and wife?
Someone without that faith will have a hard time seeing such a bright
future arising from the world that R.U.R. depicts; accordingly, it is not
clear that we should assume Alquist simply speaks for Čapek. What seems
closer to the truth for eyes of weaker faith is that humans, and the robots
created in their image, will have alike destroyed themselves by undercut-
ting the conditions necessary for their own existences. Nature and life
will remain, as per Alquist's encomium, but in a short time love will be
extinguished.
ing the split-second decisions that Wallach and Allen imagine their moral machines will have to make, why should we assume it will be autonomous? Yudkowsky's Friendly AI may avoid that problem with its alien style of moral reasoning - but it will still have to be active in the human world, and its human subjects, however wrongly, will still have to interpret its choices in human terms that, as we have seen, might make its advanced benevolence seem more like hostility.

In both cases, it appears that it will be difficult for human beings to have anything more than mere faith that these moral machines really have our best interests at heart (or in code, as it were). The conclusion that we must simply accept such a faith is more than passingly ironic, given that these "frightful materialists" have traditionally been so totally opposed to putting their trust in the benevolence of God, in the face of what they take to be the obvious moral imperfection of the world. The point applies equally, if not more so, to today's Friendly AI researchers.
But if moral machines will not heal the world, can we not at least expect them to make life easier for human beings? Domin's effort to make robot slaves to enhance radically the human condition is reflected in the desire of today's authors to turn over to AI all kinds of work that we think we would rather not or cannot do; and his confidence is reflected even more so, considering the immensely greater amount of power proposed for AIs. If it is indeed important that we accept responsibility for creating machines that we can be confident will act responsibly, that can only be because we increasingly expect to abdicate our responsibility to them. And the bar for what counts as work we would rather not do is more readily lowered than raised. In reality, or in our imaginations, we see, like Adam Smith's little boy operating a valve in a fire engine, one kind of work that we do not have to do any more, and that only makes it easier to imagine others as well, until it becomes harder and harder to see what machines could not do better than we, and what we in turn are for.
Like Domin, our contemporary authors do not seem very interested in asking the question of whether the cultivation of human irresponsibility - which they see, in effect, as liberation - is a good thing, or whether (as Alquist would have it) there is some vital connection between work and human decency. Čapek would likely connect this failure in Domin to his underlying misanthropy; Yudkowsky's transhumanism begins from a distinctly similar outlook. But it also means that whatever their apparently philanthropic intentions, Wallach, Allen, Yudkowsky, and their peers may be laying the groundwork for the same kind of dehumanizing results that Čapek made plain for us almost a century ago.