Article
Subject objects
Lucy Suchman
Lancaster University, UK
Feminist Theory
12(2) 119–145
© The Author(s) 2011
Reprints and permissions:
sagepub.co.uk/journalsPermissions.nav
DOI: 10.1177/1464700111404205
fty.sagepub.com
Abstract
The focus of my inquiry in this article is the figure of the Human that is enacted in the
design of the humanoid robot. The humanoid or anthropomorphic robot is a model
(in)organism, engineered in the roboticist’s laboratory in ways that both align with and
diverge from the model organisms of biology. Like other model organisms, the laboratory robot’s life is inextricably infused with its inherited materialities and with the
ongoing – or truncated – labours of its affiliated humans. But while animal models
are rendered progressively more standardised and replicable as tools for the biological
sciences, the humanoid robot is individuated and naturalised. Three stagings of human–
robot encounters (with the robots Mertz, Kismet and Robota respectively) demonstrate different possibilities for conceptualising these subject objects, for the claims
about humanness that they corporealise, and for the kinds of witnessing that they
presuppose.
Keywords
feminist technoscience, human–machine relations, nonhuman subjects, robot
‘Please look at my face’ entreats Mertz the Speaking Robot, installed in the atrium
lobby of the Frank Gehry Stata Center, home of the Computer Science and
Artificial Intelligence Laboratory at Massachusetts Institute of Technology
(MIT) (Figure 1). Designed to engage passing humans in communicative exchange,
Mertz is the corporealisation1 of what its designers describe as ‘a robotic creature
that lives around people on a regular basis and incrementally learns from its experience, instead of a research tool that interfaces only with its programmers’
(Aryananda, 2005a). A worldly robot, in short.
Like all truly worldly things, Mertz stars in a YouTube video, posted in
December 2006 under the title ‘The speaking robot @ MIT’.2 For humans nurtured in the Disney-saturated worlds of 20th century living rooms and movie theatres, the robot is recognisable as the cartoon of a child-like creature. As people
Corresponding author:
Lucy Suchman, Department of Sociology, Bowland North, Lancaster University, Lancaster, LA1 4YT, UK
Email: l.suchman@lancaster.ac.uk
Figure 1. Mertz the Speaking Robot with M.
Source: ‘The speaking robot @ MIT’. http://www.youtube.com/watch?v=HuqL74C6KI8.
pass by, Mertz hails them into an encounter typical of the perplexing mix of
enchantment and mystification that comprises the search for intelligibility in artificial life. Even without a close reading of the exchange, we can note the moment-to-moment, shifting choreography of its lively objects and obliging subjects. The
humans alternately shape themselves into appropriately cartooned subjects for
reciprocal engagement with Mertz (humorously exaggerating their direct
addresses), and reframe the robot as an object of their shared puzzlement and
pleasure (as their gaze turns to each other). Mertz, in turn, robotically translates
the fragments of sound and motion that these noisy objects emit into readable
signals and enacts its own in-built logic of more and less sensible replies.
Entrained by Mertz’s vitality, the human interlocutors are robotically subjectified; as they shift their orientation to each other’s queries and laughter, the robot is correspondingly restored to humanlike objectness.
As a second-order witness to the record of this encounter, I find the exchange between the robot Mertz and its humans at once familiar and new. The familiarity
derives from my own long-standing critical engagement with the project of making
machines more like humans, an endeavour that I have tracked since the mid 1980s
(see Suchman, 2007a, b; Castañeda and Suchman, forthcoming).3 I see repeated, in
the rationale for Mertz’s creation, long-standing assumptions about communication as information processing, and in the robot’s performance evidence for the
limits to the mechanical reproduction of interaction as we know it through computational processes. At the same time, the newness of this exhibit lies in just how
the human/machine interface is enacted here by this particular robot and these
specific humans, including the inevitable moments of alignment and breakdown
that move us as observers and to which our laughter – and theirs – is a response.
My interest in the project of creating humanlike machines has always been the
figure of the Human4 that inspires it, and the ways in which careful attention to
what happens at the interface of persons and machines can help us to reconceptualise human–machine relations and differences. Feminist theorising has provided
us with a rich body of conceptual tools with which to approach these questions,
exemplified by Donna Haraway’s articulation of nonhuman primates as focal subjects/objects for interrogations of the ‘almost Human’ (1989: 2), and her more
recent delineation of lines of connection to other forms of natureculture
(Haraway, 1991, 1997, 2008). A recurring theme in these writings is the historical
prevalence of mimesis or mirroring as a guiding trope for figuring human–
nonhuman encounters: a form of relation that privileges vision, and looks to
find in the Other a differently embodied reproduction of the Self. Feminist
theory has also generated compelling alternatives to mimesis as a frame for
subject object relations. Inspired most directly by Karen Barad’s extensive explication of entanglement (2007), this article explores the figuration of subject object
intra-actions in contemporary humanoid robotics, and how we might rethink questions of sameness and difference at the interface of humans and machines.
‘Entanglement’ is Barad’s heuristic for always thinking entities performatively, as
effects of rather than antecedents to relations (see also Law, 2004). The companion
neologism of ‘intra-action’ signals this commitment to the premise that subject
object difference is not given, but arises from the material-discursive practices
through which boundaries and associated entities are made. Humans are not the
sole arbiters of agency in these entity-making practices, as agency is ‘not an attribute, but the ongoing reconfigurings of the world’ (Barad, 2007: 141). Humans are,
however, always already implicated in and part of the world’s reconfiguring, as well
as of the capacities for being and action that arise.
I begin in what follows with a discussion of the premise that we might fruitfully
think of humanoid robots as model (in)organisms, taking direction from critical
reflections within the field of science and technology studies on the ‘model organism’ as a research object. I then turn to the demonstration, as a focal event in
technology research and development, and as my own research site in this article.
I use these as conceptual tools with which to think across three particular stagings
of human–robot encounters (with the robots Mertz, Kismet and Robota respectively). My task is to demonstrate in this text different possibilities for conceptualising these subject objects, the claims about humanness that they corporealise, and
the kinds of witnessing that they presuppose. I close with some reflections on the
lessons of humanoid robotics for feminist theorising regarding human/nonhuman
relations, with particular reference to the case of humans and machines.
The humanoid robot as model (in)organism
How then should we understand the nature of model (in)organisms like the robot
Mertz, in their constitution of quasi or proto-Humans? The humanoid robot is a
particular kind of model, engineered in the roboticist’s laboratory in ways that
both align with and diverge from the model organisms of biology. Science studies
researchers have documented the practices through which the animal model in the
laboratory sciences is (at least partially) transformed from a naturalistic, specific
and idiosyncratic individual into an analytic object and source of generalised
knowledge (see Lynch, 1988; Asdal, 2008; Davies, 2010). In many respects, the
humanoid robot evidences similar premises and aspirations. The robot can serve
as a model for the Human insofar as its existence is framed as elucidating universally applicable truths about how humans work. And as a scientific object (rather
than one created for instrumental purposes of service provision to humans,
although these aims are often conflated in roboticists’ own writings), the laboratory
robot’s ability to replicate the observable behaviours of persons is taken as evidence that those behaviours are, in turn, an effect of comparable mechanisms.5
Rodney Brooks, Director of the Computer Science and Artificial Intelligence
Laboratory at MIT and in that respect (by academic lineage) grandfather of the
robot Mertz, embraces this logic. He urges us to think beyond human exceptionalism by recognising that we are, in essence, machines. Brooks argues that an
attachment to the ‘specialness’ of humans is anachronistic, a premise that he mobilises to open space for the possibility – even inevitability – of humanoid machines.
He identifies the past 50 years as the period of the ‘third challenge’ to human
specialness (the first two being those posed by Darwin and Galileo), arising most
specifically from the combination of ‘syntax and technology’ that comprises computation (Brooks, 2002: 165). For Brooks, that humans are machines is not metaphor, in other words, but starting premise, and any denial of that premise can
only be irrational. In this cosmology the organic materiality of the human is a kind
of historical accident, and the robot is the model for how it could, and will, be
otherwise.6
On the face of it, the project of humanoid robotics might seem well aligned with
feminist theoretical arguments for the vitality of the inorganic, sharing the premise
that agencies traditionally associated exclusively with the human, or even more
broadly animal, need to be granted to nonhuman things.7 The divide between
human and machine, in this respect, is no more an a priori than that between
human and animal. But a commitment to rejecting essentialism, and to holding
entanglements of the organic and inorganic in view, leaves begging the question of
how else we might articulate differences that matter across category boundaries.
When we shift our attention from animals to machines, moreover, the questions
that arise regarding the stakes in respecifying the agencies of the nonhuman with
respect to the inorganic are different to those addressed so eloquently in recent
feminist theory. These questions are particularly salient in the case of the robot,
insofar as dominant discourses in robotics are quick to grant subjecthood to the
humanoid machine, and to embrace the erasure of human/machine difference.
Working from the premise that to be human or nonhuman is ‘a doing or becoming, not an essence’, and concerned to challenge tendencies toward human exceptionalism in feminist theory, Birke et al. (2004) take as their analytic subject object
the laboratory rat, one of the animal kinds transformed into a model organism for
contemporary science (see also Lynch, 1988; Davies, 2010; Holmberg, this volume).
The laboratory rat in their analysis is at once an object of scientific practice and an
active participant in knowledge making (Birke et al., 2004: 168). Historical figures
of the laboratory animal, they argue, operate to obscure resilient and suppressed
forms of nonhuman agency.8 The laboratory animal as model organism materialises a regime dedicated to the objectification and control of a nature taken to be
separate from culture, while the work of science and technology studies has been to
refigure the laboratory as a site where differences of subject and object, nature and
culture, are made rather than given. These studies direct our attention to the cultural historical genealogies that inform the laboratory, and to the erasures that
enable the seeming authorlessness of its products.
Like the animal configured as a model organism, the laboratory robot’s life is
inextricably infused with its inherited materialities and with the ongoing (or truncated) labours of its affiliated humans. But while animal models are rendered progressively more replicable and standardised as tools for the biological sciences, the
humanoid robot is individuated and naturalised.9 Unlike its cousins on the factory
floor, or on the shelves of Brookstone’s gadget outlets, the laboratory robot resists
commodification or economies of scale. As a consequence, laboratory robots
remain tied to the personae of their inventors. At the same time, Lars Risan
(1997), Stefan Helmreich (1998) and Sarah Kember (2003) have described the process through which material practices in the sciences of the artificial externalise
their own effects, as unpredicted results of the run of a computer program are
objectified into a form of technoscientific nature available for observation and
analysis. Risan (1997) recounts presentations of ‘artificial life’ in which ‘the emphasis is deflected from engineering to scientific discovery and the audience is invited to
identify with the researcher as distanced witnesses of significant findings’ (cited in
Kember, 2003: 58). As Shapin and Schaffer famously put it in their account of the
fashioning of 17th century scientific demonstrations, ‘[t]he matter of fact can serve
as the foundation of knowledge and secure assent insofar as it is not regarded as
man-made’ (1985: 77, my emphasis). The experimental apparatus required to verify
the absence of manufacture included as one of its constituents the notion of ‘seeing
for oneself’, and the figure of the ‘modest witness’ or disinterested observer, so
central to the origin myth of technoscientific objectivity and to the demonstration
as event (see also Latour, 1993; Haraway, 1997). It is to the demonstration that I
turn next.
On demonstrations
If humanoid robots are objects engineered to be models for bodies conceptualised
as identifiably Human subjects, the demonstration comprises what Latour has
named their primary ‘theatre of proof’ (Latour, 1988: 86). In ‘Theatre of Use’
(2009), Wally Smith considers the workings of the technology demonstration, in
which a particular assemblage of hardware and software is presented in action as
evidence for its worth. Smith helpfully proposes that we read the technology ‘demo’
as a distant cousin of the scientific demonstration, the common thread being the
premise, established by Robert Boyle in 1660, that rather than simply being
informed, spectators are witness to some natural or technical object directly.
To elaborate the mise en scène of the demonstration, Smith draws on the work
of Erving Goffman, and in particular his call for the analysis of what he terms the
‘frames’ through which layered propositions of material and imaginary, present
and projected future, can be folded together within a single event. The homely
example of the door-to-door vacuum cleaner salesman serves as illustration of
Goffman’s notion of framing, in which a situation serves as a model for another:
Here, an original activity of cleaning is transformed through a new frame of meaning,
that of the demonstration. Both the salesman and the householder would agree that,
in an important sense, he is not really cleaning the floor but just showing how somebody would do so with his machine. This re-framing brings new meaning and rules of
engagement, but the original activity is still relevant for understanding the new one.
Indeed, its surface form will be identical in some respects and so, in another sense, the
salesman’s cleaning is real: he is genuinely doing something that makes the floor
cleaner. (Smith, 2009: 453, original emphasis)
Smith uses Goffman’s ‘frame analysis’ to widen the frame of the demonstration,
seen as a kind of double-drama that includes the enactment of an encounter with a
technology that is itself embedded in an encounter between the demonstrator and
their audience. As he reminds us, ‘[t]hese two dramas reflect each other. The outer
constructs the inner, while the inner prescribes a future for the outer’ (Smith, 2009:
464). In the case of dramatic fiction, or even of the increasingly popular ‘docudrama’, actors and audience alike are aware that the performance is staged, either
as make-believe or as re-enactment. Even in the case of the documentary, increasingly sophisticated audiences are aware, and admiring, of the artifice involved. But
in every case the audience are positioned as ‘onlookers’ who consume the scene
enacted, and who are thereby in Smith’s words, ‘captured by two realities: a story
and its telling’ (2009: 464).
The canonical story form is the ‘live’ demonstration, where a technology is
animated by those invested in its efficacy for an audience of variously interested
observers. Increasingly, however, in order to maximise its distribution, the technology demo is turned into a video document that can be circulated, most prominently
now within the public theatre of the World Wide Web. Contemporary research on
humanoid robots, I would argue, is a promise sustained in significant measure
through the agencies of these demonstrations. While the commercial demonstration projects a utilitarian or instrumental function for an artefact, the robotics
demo is oriented to questions not simply of use, but of existence. But where the
scientific demonstration is a copy of an imagined encounter with Nature, the robotics demonstration is a copy of an encounter with Culture, in the form of an uncannily familiar Other in the making. Sitting between the documentary film and the
system ‘demo’, the recording becomes what Latour (1986) has named an ‘immutable mobile’; that is, a document that can be reliably reproduced, distributed, and
reviewed in a kind of eternal ethnographic present. These re-enactments imply that
the capacities that they document have an ongoing existence – that they are themselves robust and repeatable – and that, like any other living creature’s, the humanoid robot’s agencies are not only ongoing but also continuing to develop and
unfold. Staging itself as a kind of glimpse into the life course of the robot, the
demo projects the actions and interactions that the humanoid robot exhibits as
models for a future in which things that now seem marvellous and confined to the
laboratory will be ubiquitous and mundane (see also Bloomfield and Vurdubakis,
2002).
The demonstrations that follow share aspects of Smith’s characterisation, at the
same time that each evidences the particularities of its own context. The first, the
robot Mertz installed in the Stata Center Lobby, is distinguished by the robot’s
staging as ‘on its own’, outside the laboratory and apart from its creators, and by
its invitation to passing strangers to engage it in impromptu conversation. As we
will see, these aspects are central to the rationale for Mertz’s creation as an experimental object, and to its creators’ hopes for its effectiveness as a model interlocutor. The second of my exhibits, a series of video recordings of the robot Kismet
posted on the website of the Computer Science and Artificial Intelligence
Laboratory, exemplifies the canonical evidence for the robot as model humanoid,
whose actions are made intelligible and given eternal presence through encounters
carefully staged and documented by its creators. And finally, the story and images
of the robot Robota, drawn from a published paper reporting on the project to the
research community, narrate a different possibility for the dynamic re-distribution
of agencies across humans and machines, in a set-up hopefully designed to reconfigure the communicative connections between a child and his associated persons.
Demonstration 1: Mertz
So how might we undertake a closer reading of our first exhibit (see Appendix), the
meeting of the robot Mertz and its interlocutors, as I have transcribed it for you
from the video? We can begin by treating Mertz’s opening move (‘Hello, what is
your name?’) as a signal that this robot is an instance of the larger class ‘artificial
agents’, realised in various configurations of hardware and software, and designed
to be sociable:
R = robot, G = person 1, M = person 2; ‘[’ indicates overlap
R: Hello, what is your name?
G: (laughter) My name is G, my friend. What is yours?
[
R: Your name is (unintelligible).
G: Is what?
R: Yes or No?
G: No, G.
R: Say yes or no, please.
G: No.
R: Please look at my face.
M: (He) is speaking in French! Are you speaking French?
R: Say yes or no, please.
G: Nooo.
R: Hello, what is your name?
Taking the conventional form of a greeting, Mertz’s question implicates an
answer, which in turn invokes the frame of a conversational encounter. G’s laughter in response marks this not, however, as an ordinary encounter: it is still, we
might infer, somewhat remarkable (albeit irresistible) to be hailed into conversation
by a machine! At the same time, G’s addition of the endearment ‘my friend’ to his
response (perhaps articulating the irony of being hailed by a strange machine figured as friendly), and his reciprocal request for the robot’s name, suggest his willingness to be interpellated into this incipient human/nonhuman exchange.
In the place of the response that we might expect, however (Mertz’s provision of
its own name), the robot offers a repeat of G’s name that is, albeit difficult to
decipher, audibly not ‘G’. G’s request for clarification, in turn, elicits not a
repeat of the name offered, but instead a repeat of the robot’s question, in the
form of a binary choice of (‘Yes or No’) answers. G’s selection from among the two
options, which he further clarifies by another repetition of his own name (‘No, G’),
is treated by the robot as a failure to comply (‘Say yes or no, please’). G’s subsequent repetition of a simple ‘No’, rather than registering as a response, triggers
another routine from Mertz (‘Please look at my face’) – one that we can imagine
might be aimed at better calibrating the alignment between Mertz’s visual and
auditory systems and those of the robot’s interlocutor, but which at the
same time signals the continued unintelligibility, for Mertz, of G’s response. This
reading is confirmed several exchanges later, when G’s more emphatic repetition
(‘Nooo’) seems to cause Mertz to ‘reboot’, recycling again the robot’s opening
question.
Shifting to M, Mertz proceeds through a series of calls for the humans to align
themselves better with its capacities (‘I cannot see you’, ‘Please look at my face’,
‘Please face me directly’, ‘Do not move too much’, ‘You are too far away’, ‘Come
closer please’).10 G and M’s efforts to comply with the robot’s apparent desire for
greater intimacy (bringing their own faces ever closer to its) are met only with
continued requests and (increasingly plaintive) complaints from the robot. The
robot’s insistence is such that (as another member of the party, off camera to
this point, goes around the back of the robot to look for some evidence of its
mechanism), G surmises that this may in fact be what in the trade is referred to
as a ‘Wizard of Oz’ experiment. That is, rather than working autonomously, the entire exchange may be orchestrated by another human positioned offstage,
perhaps even with the aim of eliciting foolish behaviour on the part of G and his
friends. The outcome is a communicative encounter that, as its mutual intelligibility
progressively declines, becomes increasingly comical for the human participants.
Mertz’s inability to join in the fun further (even poignantly) confirms the robot’s
Otherness.
Demonstration 2: Kismet
As part of an experiment conducted in 2005, Mertz runs more or less continuously
for seven hours a day over five days, recording its own visual input and corresponding output every second, and engaging over 600 passers-by (Aryananda,
2005b). Analysed as graphs and pie charts of percentages of correct recognition
of human faces and robot-directed speech, the data are taken at least tentatively to
confirm the experimenters’ hopes for the possibility that robots might recruit
humans to scaffold their robotic development in a way modelled on, and implicitly
providing a model for, the socially situated learning of the human infant
(Aryananda, 2005a, b; 2006).
The iconic robot infant, parent of Mertz, is Kismet, the progeny of roboticist
Cynthia Breazeal and colleagues. Like other celebrity robots produced in MIT’s
studios, Kismet is represented through an extensive corpus of media renderings –
stories, photographs, and QuickTime videos available on the World Wide Web.
First among these is the Overview of Kismet narrated by Breazeal.11 There are no
dates associated with these demonstrations, but we can locate this one as likely
sometime in the late 1990s, at the height of Breazeal’s work on Kismet as part of
her Doctorate of Science degree in electrical engineering and computer science
(Breazeal, 2000).
In this introduction to Kismet, Breazeal figures her relations with the robot as
‘something like an infant-caretaker interaction, where I’m the caretaker essentially,
and the robot is like an infant’. The overview sets the human–robot relation within
a frame of learning, with Breazeal providing the scaffolding for Kismet’s development. It offers a demonstration of Kismet’s capabilities, narrated as emotive facial
expressions that communicate the robot’s ‘motivational state’:
Breazeal: This one is anger (laugh) extreme anger, disgust, excitement, fear, this is
happiness, this one is interest, this one is sadness, surprise, this one is tired, and this
one is sleep.
Each identification is timed (with associated editorial cuts) in relation to the
robot’s performance of the expression named. In some instances the emotion is
named first, then performed by Kismet as if in response: in others, the naming
follows the performance, in an intimation of recognition. While the narratives and
demonstrations refer to the robot’s specific materialities – its intensively motorised
face affording multiple degrees of freedom in eyebrows, eyes, lips and ears, its
multi-camera vision system, and the elaborate machinery that processes input stimuli and controls responsive output – the details of Kismet’s code and operations are
reported separately, in the form of technical papers aimed at audiences already
immersed in the specificities of robotic mechanism (e.g. Breazeal, 1998; Breazeal
and Velasquez, 1998; Breazeal and Scassellati, 1999). Some hint of the material
labours involved is provided, however, on an occasion when Breazeal, asked how
many person hours it took to develop Kismet, replies: ‘Oh God, I don’t even want
to think about it... There’s tons of infrastructure code that isn’t specifically for this
robot. Code to specifically run Kismet is probably two full-time people working for
2.5 years. The total size of all the software tools we have developed to support our
computation environment is huge’ (Menzel and D’Aluisio, 2000: 66). This extensive
accretion of material labours and technical resources, which together comprise the
rich infrastructure of the laboratory, is elided in the demonstration videos by a
closely drawn frame and a narrative of individual agency. And this is a narrative, in
turn, that was gestated in other laboratories – those of late 19th and early 20th
century developmental psychology.
The enumeration of Kismet’s emotions, displayed as facial expressions of
underlying states and made intelligible for us by Breazeal’s expert reading, connects this lab to these earlier laboratories, as British and European physiologists
joined with new visualisation technologies to isolate, replicate, standardise, and
quantify emotions in the language of curves and numeric tables (Dror, 2001:
360). As a kind of three-dimensional caricature, a comic exaggeration of an
interactive/empathetic organism, Kismet is a rich source of evidence for this
wider cultural and historical heritage. A preliminary inventory of the latter
foregrounds two things. First, a Human Science based on discourses of homoeostasis, regulation, drives and associated emotional expression. And second, a
story of becoming as development, and of normative socialisation as the
grounds for intelligibility. Historian of medicine Otniel Dror describes how, in
the early 20th century, the drive to produce clear, compelling representations of
emotional states led to the co-configuring of imaging technologies and subjects.
‘Good and reliable subjects’ were chosen for their ability to display clearly
recognisable emotions on demand, whereas those that failed to produce unambiguous and easily classifiable behaviours were left out of the experimental
protocol (Dror, 1999: 383). These technologies produced the now familiar catalogue of emotional types normalised across the circumstances of their occurrence (as anger, fear, excitement, and so forth), and treated as internally
homogeneous, if variable in their quantity or intensity. This is a machinery
generated through, but discursively separable from, specific bodies. And
like other marks on bodies, once materialised as a representation or trace,
emotions are extractable from their particular contexts of production. By the
1930s this science was well established, and what was characterised as the
‘objective’ inscription of emotions reached a representational consensus sufficient to support an associated industry of commercially manufactured
emotion-gauging machines. It is from this point, in turn, that emotions are
understood as processes in the general scheme of the body-as-machine (Dror,
2001: 362).
While contemporary robotics research carries on this tradition in constructing
affective encounters as moments of display and recognition of underlying emotional states, Breazeal further joins this history with another turn in 20th century
psychology, adopting the language of infant and child development. Purportedly
general claims about the child, Claudia Castañeda reminds us, need always to be
located in particular discursive contexts (2002: 5). Among other connotations, the
figure of the child inherited within colonial imaginaries carries with it a process of
becoming made up of inevitable stages and unfulfilled potentialities. The adoption
of this figure into robotics can be traced back to a father of artificial intelligence,
Alan Turing himself, who in his seminal 1950 article ‘Computing Machinery and
Intelligence’ proposes: ‘Instead of trying to produce a programme to simulate the
adult mind, why not rather try to produce one which simulates the child’s? If this
were then subjected to an appropriate course of education one would obtain the
adult brain’ (1950: 456).12
The premise that infants develop as other minds insofar as their caregivers act
toward them as such informs the attachment of MIT’s sociable roboticists to an
ethological psychology of infant–caregiver interactions. In a recent discussion of
Kismet and kin, Evelyn Fox Keller (2007) affirms that projects in so-called ‘robotic
psychology’, ‘epigenetic robotics’, or ‘developmental robotics’ are in turn aimed
at informing a science of the Human. Keller raises a concern with what she names
‘the apparently circular trajectory of this endeavor’ (2007: 341), insofar as it
materialises discourses in developmental psychology, then represents itself as an
independent test bed for assessing their adequacy. In the case of the ‘infant’ robot,
the developmental trope underwrites a kind of perpetual promise that simultaneously accounts for the incompleteness of the project, and motivates its
continuation.
Katherine Hayles provides another commentary on Kismet, which shares with
roboticists’ own accounts the conflation of the two premises identified by Keller:
first, that robots like Kismet matter insofar as they are humanlike, and second that
they are interesting insofar as they are evocative, regardless of their verisimilitude.
Like Breazeal, Hayles characterises Kismet alternately as a robot that can ‘engage
in social interactions’, and as a robot whose ‘design and programming have been
created to optimize interactions with humans’ (2005: 136). The difference between
these two statements is perhaps a subtle one, but it is one, I would argue, that
matters. Roboticists themselves frame this difference as a question of autonomy,
taken in turn as a touchstone for agency. Software agent designer Pattie Maes
characterises autonomy as ‘a system that tries to fulfill a set of goals in a complex,
dynamic environment’ (1997: 136, cited in Kember, 2003: 66). An agent is autonomous, according to Maes, to the degree that it ‘decides itself how to relate its
sensor data to motor commands in such a way that its goals are attended to successfully’ (1997: 136, see also Kember, 2003: 66–67). But others are uneasy with
an autonomy that remains within the implicit bounds of goals given in advance
by a designer, however independently those might be translated into action.
This concern, typically formulated by computer scientists as the problem of context, environment, or 'the World', has been further specified within artificial
intelligence and robotics as a matter of 'epistemic' autonomy; that is, the requirement that an artificial creature operate independently of perceptual stimuli or goals
stipulated by its creator.
The capacity of humanoid robots to entrain a human interactant leaves begging, then, the question of just how a robot like Kismet can in turn incorporate
a caregiver’s responses in order to become more humanlike. Keller argues that it
is at this point that roboticists engage in what she names a form of ‘fudging’,
backpedalling on questions of authenticity or verisimilitude in favour of a resort
to instrumental criteria, by demurring on claims for the achievement of ‘genuine’ emotion in favour of a functional equivalent at the robot/human interface
that will enhance the human’s experience. Accepting this latter premise, Keller
frames her concern with humanoid robotics projects not only on the grounds of
their shared participation in more widespread forms of tautological reasoning,
but in terms of the possible realisation of their promises. The latter posit a
future in which fully realised humanoid robots will be able to return the favour
not to human infants, but to ageing baby boomers in need of care. For me,
however, the fear is less that robotic visions will be realised (though real money
will be diverted from other investments), than that the discourses and imaginaries that inspire them will retrench received conceptions both of humanness
and of desirable robot potentialities, rather than challenge and hold open the
space of possibilities.
Demonstration 3: Robota
As a preliminary demonstration of what such possibilities might be, I offer as
my final exhibit Robota, a humanoid robot incorporated into the research of
the Adaptive Systems Research Group at the University of Hertfordshire, under
the direction of Dr Ben Robins and biologist/roboticist Kerstin Dautenhahn.
Trained in Dance Movement Therapy and Computer Science, Robins is
interested in the possibility of therapeutic robot engagements with children
diagnosed with autism (Robins and Dautenhahn, 2006; Robins et al., 2009).13
In one form or another, children diagnosed with autism seem to inhabit a kind
of self-enclosure that resists familiar forms of sociality. In as much as autism is
associated with a flight from the unintelligibility of the multi-layered worlds of
human interaction, the predictability and repetition of robotic actions seems
appositely geared to the child’s needs.
Autism is what is known as a ‘spectrum disorder’; that is, there is enormous
variation among individuals. While other human–robot interaction therapies
aim for statistical significance in large sample sizes, the Adaptive Systems
Group works with particular children to assess an intervention’s effects. More
specifically, Robins and his colleagues are interested in events in which a
robot doll effectively mediates interactions between children and the experimenter/
therapist (Robins and Dautenhahn, 2006). Importantly, the approach adopted
is one in which the experimenter ‘must include himself as part of the trial’,
being available and ready to respond to the children and able to seize the opportunity for any further interactions should the possibility arise (Robins and
Dautenhahn, 2006: 647). Far from the experimenter as invisible observer,
in other words, it is the incorporation of the experimenter into the child–robot
interaction that forms the criterion of success for their efforts. The experimenter is ‘another possible instrument for engaging social interactions’ (Robins
and Dautenhahn, 2006: 650). The researchers describe one indicative incident,
involving a child pseudonymed ‘Jack’ and the robot doll named Robota (see
Figure 2):
In one of the preliminary trials the child (Jack) engaged in an imitation game with the
robot where the robot mirrored the movements of Jack’s limbs. Unknown to Jack, the
experimenter was operating the robot and responding to Jack’s movements as accurately as he could. However, it just happened, on one occasion, that the experimenter
unintentionally moved the opposite arm of the robot. Jack giggled and mentioned (to
the robot) that this was wrong. After a few turns of correct imitation, the experimenter
then introduced, deliberately this time, another mistake in the robot’s imitation of
Jack’s movement – Jack giggled again talking to the robot with affection that this
is wrong. The experimenter then introduced more deliberate mistakes, and Jack’s
laughter and affection directed at the robot grew. Then an important point arrived
when Jack realized that the experimenter was operating the robot from his laptop and
that it was he who was making the mistakes, so it then became a game between the
experimenter and Jack. Whilst Jack still continued to play the imitation game with
the robot (Figure 2, image a), after each mistake that the robot made in mirroring
Jack’s movements (which were deliberately introduced by the experimenter), Jack
turned to the experimenter laughing saying ‘mistake’, ‘mistake’, this time diverting
his affection towards the experimenter (Figure 2, images b & c). It was very clear at
this stage that Jack was actually knowingly playing with the experimenter and sharing
his enjoyment with him, whilst standing in front of the robot, initiating movements for
the robot to mirror. Thus, Jack was using the robot as a mediator to indirectly interact
and play a game with the experimenter. (Robins and Dautenhahn, 2006: 649,
original emphasis)
We see here a different sense of imitation than that framed in Turing’s famous
‘imitation game’ (Turing, 1950). While Turing’s preoccupation was with imagining
a coherent sense for the question ‘Can machines think?’, the focus of the Adaptive
Systems Group shifts from machine intelligence to the generation of affective,
communicative relations. Recruited to ‘mirror’ the humans who engage with it in
play, the robot does so not as a model Human, but as part of an ‘oscillating
and affective assemblage’ of unfolding sociomaterial connections (Neumark,
2001: 166).
Figure 2. Jack engaged in an imitation game with the robot (image a) and turned to the
investigator with giggles each time the robot made a mistake (images b & c).
Source: Adaptive Systems Research Group, The University of Hertfordshire.
Becoming with
In the opening pages of When Species Meet (2008: 3), Donna Haraway poses the
question: 'How is "becoming with" a practice of becoming worldly?' The question
is movingly addressed, I believe, in the new configurations of becoming with
formed in the nexus of investigator, child and robot evidenced in the Adaptive
Systems Group’s projects. In a way reminiscent of the encounter engendered by
Mertz with which we began, if absent from the robot’s narration, becoming worldly
is manifest here as moments of bodily imitation and connection that travel as a
generative reiteration and a fleeting glance, all animated by affective dynamics that
escape their classification. These are what Haraway has identified as technoscience’s ‘immeasurable results’ (1997: xiii).
In her introduction to the collection Biographies of Scientific Objects, historian
of science Lorraine Daston cites Aristotle's interest in 'the perpetuity of coming-to-be' as the question to be addressed, and she writes: 'An ontology that is true to
objects that are at once real and historical has yet to come into being, but it is
already clear that it will be an ontology in motion’ (2000: 14). Moving ontologies
sit at the heart of recent feminist theorising, most fundamentally in relation to the
collective project of unfixing categorical delineations of identity and difference in
favour of attention to the times and places through which lines of differentiation
are enacted and come to matter (Ahmed, 1998; Barad, 2007; Currier, 2003; Cussins,
1998; Haraway, 1989, 1997, 2008). If objects, as Haraway reminds us, are ‘boundary projects’ (1991: 201), the figure of the humanoid robot sits provocatively on the
boundary of subjects and objects, threatening its breakdown at the same time that
it reiterates its founding identities and differences. Whether as a promise to subjectify the world of the Object, or a threat to objectify the sanctity of the Subject,
the robot’s potential is perpetually mobilised within both technical and popular
imaginaries. At the same time, the material assemblage of the robot is in complex
intra-action with its accompanying stories, never quite realising its promise but
always also exceeding the narratives that animate it.
In the company of Haraway (1997, 2008), Helmreich (1998), Kember (2003) and
others I would like to work against the digital naturalisation of conventional
visions of life, and for a greater sense of possibility in relations of ‘becoming
with’ for humans and machines. Having provided my demonstrations, I turn in
closing to the question: How should we understand the nature of a model (in)organism like the humanoid robot, in its constitution of the quasi or proto-Human?
What different kinds of movement do these various demonstrations effect? The
answer is tied in part to how each moves us as witnesses: for myself, this involves
some manner of delight at the encounter of humans and robot ‘in the wild’ contingencies of a public space (where risks are taken beyond the bounds of the script);
distress at witnessing the revival of still powerful Humanist modes of psychologising in the figure of the laboratory robot as model organism; and admiration and
hope in the case of a robotically enhanced therapy in the autistic child’s play room.
These configurations differently corporealise the bodies of persons and robots
through their embedding in particular spaces, stories and intra-active encounters.
I am interested in the revitalised histories, lively presents and imagined futures that
comprise the objectivity of these subjects.
In his discussion of demonstrations Smith poses the question: What other figures
of the audience to the demonstration might there be than either the innocent witness to the workings of nature, or the cynical constructivist who sees only artifice?
In her consideration of Shapin and Schaffer’s history of modern science’s modest
witness, Haraway offers a response to this question. She points out that the theatre
of the modest witness to nature stages a subject–object split that erases the presence
of knowers from what is known (1997: 23–24). While the practice of credible testimony is very much at stake in Haraway's writings, she is after, as she puts it, 'a
more corporeal, inflected, and optically dense... kind of modest witness' (1997: 24).
The latter is premised, among other things, on stories that position the audience
inside rather than outside of the action (1997: 36).
The refiguration of the observer from a location somewhere outside the world
to a position always already entangled within the phenomenon is the central
problematic in Barad’s reconstructions of the demonstration in quantum physics
(2007). Barad’s trope of entanglement suggests an approach to human–robot
encounters that takes each framing of the humanoid robot not as disinterested
reportage or as a response to an independently existing entity, but as an apparatus that includes particular objects of attention and concern and inseparable
knowing subjects (see also Haraway, 1997: 218). This is a method through
which we might restore non-innocence to robot demonstrations and to the
subject object relations that they enact. To do so requires thinking about the
narratives and materialisations of robotics in terms of accountabilities inseparable from truth claims, tracing out their genealogies and their associated
politics. The goal of such an exercise, as Haraway suggests, is to ‘ferret out
how relations and practices get mistaken for nontropic things-in-themselves
in ways that matter to the chances for liveliness of humans and nonhumans’
(1997: 141).
Doing this work requires slowing down the rhetorics of humanlike machines,
and attending closely to material practices. Re-specifying the roboticist’s
labours, and her and others’ intra-actions with her machines, might contribute
to demystification and re-enchantments restorative to the life of subjects and
objects alike. This requires, on the one hand, bringing the roboticist into the
frame and, on the other, tracing the genealogies to which her work is stickily
joined. Morana Alac shows a way toward the first of these through her observations of how roboticists mobilise their own bodies as conduits to specify the
movements of a ‘sociable robot’ (2009). The ‘android’ robots of the Japanese
laboratory studied by Alac are imagined as surrogates, animated by their
makers’ aspirations for a perfect simulacrum, a machine crafted in the precise
likeness of its human designer. This project is pursued through programming
techniques in which the roboticist’s actions form the model for computationally
specifying the movements of the model organism. This approach is based on the
explicit premise that the boundaries between the robot’s interiority or ‘self’ and
its environment (including humans) need to be redrawn, in the form of an
inclusive ‘robot unified with human system’ (Alac, 2009: 494). Alac’s close reading of the collaborative labours involved in the effort to replicate human movements (including the resistances met in getting the robot to move in those ways)
articulates the sense in which, in her words, ‘the body as a discrete and unified
entity disintegrates through practice’ and reveals the social body as always
already multiparty bodies-in-interaction (2009: 492). The roboticists of Alac’s
study are moved by their subject object and literally feel its body in their
own movements, as an inseparable prerequisite for their problem solving and
engineering practice (see also Myers, 2008). It is in this sense, Alac argues, that
the body of the ‘other’, or of the robot as ‘quasi-other’, functions not simply as
a mirror or replica but as part of a larger configuration within which embodied
agencies emerge ‘across subjects and objects as a dynamic and interactive phenomenon’ (2009: 496). Alac’s work highlights as well the ways in which human
and robot are models for each other, as roboticists shape their own movements
to accommodate the requirements of the machinic body. Roboticists and robots
move each other, in sum, through the ongoing intimacies of everyday sociomaterial labours.
With respect to robot genealogies, we have now no shortage of scholarship
tracing the movements through which information theoretic accounts are translated from analogy to ontology in biology, psychology and sociality in the mid 20th
century, and of their ancestry in commercial and military instrumentalism (see
Edwards, 1996; Haraway, 1997; Hayles, 1999; Kay, 2000; Noble, 1984; Orr,
2006). Insofar as communication acts as an integrating circuit for these translations, it is a crucial site for reconceptualisation. Communication, as Emanuel
Schegloff (1982) reminds us, is not the medium through which an exchange of
messages takes place, or the means by which intentionality and interpretation
operationalise themselves. Rather, interaction in the ethnomethodological sense
is a name for the ongoing, contingent coproduction of a mutually intelligible sociomaterial world. In attempting to understand the constitution of the human/
machine interface I have argued that subject/subject intra-actions involve forms
of mutual intelligibility intimately connected with, but also importantly different
from, the intelligibilities involved in relations of subject and objects (see Suchman,
2007a). The term ‘mutual’, with its implications of reciprocity, is crucial here,
and I would argue needs to be understood as a particular form of collaborative
world-making characteristic of those beings whom we identify as sentient
organisms.
But how then might we make sense of the mixture of objectifying and intimately
intersubjective engagements between roboticists and their companion robots?
Undoubtedly, as Haraway writes of primatologist Barbara Smuts in her encounter
with the baboons of the Rift Valley in Tanzania, Cynthia Breazeal’s practice of
‘becoming with’ Kismet has, to quote Haraway, ‘rewoven the fibers of the
scientist’s being’ (2008: 23). But what kind of ‘becoming with’ is happening in the
laboratories of roboticists informed by behavioural and developmental psychologies engineered over the last century? How do those narratives get woven into, or
superimposed upon, their subject objects, and what happens to both outside the
frame, in the ‘extremely prosaic, relentlessly mundane’ ways through which worlds
come into being (Haraway, 2008: 26)?
In their introduction to the collection Queering the Non/Human, Noreen Giffney
and Myra J. Hird (2008) cite the work of artist Karl Grimes in his exhibit Future
Nature, a show of photographs of animal embryos and foetuses in glass jars, originally used in scientific and medical experiments. Grimes explains that his photographs of these specimens attend to animals that are 'constantly on the verge of
becoming... yet frozen in time and death' (cited in Giffney and Hird, 2008: 2).
The sense of becoming here lies within a developmental imaginary, of something
arrested on its way to realisation. Grimes’ photographic portraiture, Giffney and
Hird observe, suggestively ‘revitalises what have been forgotten as mere scientific
remains, turning former objects into present subjects’ (2008: 2). Discussing Grimes’
photograph of a preserved Axolotl salamander, Giffney and Hird ask:
Is this simply putting the animal to use for the purposes of poring over the ins and
outs of the Human, thus reinscribing by default the Human at the centre of this very
meditation? Perhaps. Yet in its irreducible difference, Axolotl insists that we respond
to it on its own terms – partly ascribed by Grimes certainly – yet also set down by the
animal voluptuously appearing before us, resplendent in its cacophony of contradictions; a signifier of the differential relation between the Human and the nonhuman...
(Giffney and Hird, 2008: 2)
In this article I have explored another corporealisation of Human/nonhuman
relations, at once materially different from Grimes’ exhibitions, yet with a certain
kinship. Rather than an organic creature arrested in its epigenetic unfolding, the
humanoid robot is an electro-mechanical and computational artefact. Rather than
fixed by labours at the laboratory bench, robots are enlivened by those labours
(though so are specimens, I suppose, as participants in projects in science). Before
death, the specimen’s potential for liveliness exceeds the robot’s, while afterwards
the robot’s possibilities for further liveliness might be argued to exceed the specimen’s. The specimen’s becoming, while curtailed by its relations with the human,
was previously more independent of human activity; the robot’s is inseparably
reliant on ongoing human labours. Like the preserved embryo, the robot is ‘on
the verge of becoming’, within projects of ‘turning former objects into present
subjects', and, akin to the specimen's effects, the 'anthropomorphic allure' of the
robot is achieved ‘through details of gesture and expression’ (Grimes, 2006, cited
in Giffney and Hird, 2008: 2). Moreover, like the specimen, the robot can be
understood as attendant to what is at its core a Human-centred project, albeit as
an entity at least potentially available for engagement on its own terms (Castañeda,
2001).
Feminist theory alerts us that questions of difference cannot be addressed apart
from the more extended frames of reference in which entities are entangled. With
respect to nonhumans, Haraway calls for:
a materialist, antireductionist, nonfunctionalist, nonanthropomorphic, and semiotically complex sense of the dynamism of nonhumans in knowledge-making and world-building encounters... How to 'figure' actions and entities nonanthropomorphically
and nonreductively is a fundamental theoretical, moral and political problem.
Practices of figuration and narration are much more than literary decoration.
Kinds of membership and kinds of liveliness – kinship in short – are the issues for
all of us. (1997: fn 23/284)
How then might we refigure our kinship with robots – and more broadly
machines – in ways that go beyond narrow instrumentalism, while also resisting
restagings of the model Human? Avoiding the latter requires creative elaborations
of the particular dynamic capacities that computationally animated materialities
afford, and of the ways that through them humans and nonhumans together can
perform different intelligibilities. These are avenues that have just begun to be
explored in new media arts and, in rarer cases, in systems design and robotics.
Installations like those done as part of Bill Vorn’s ongoing project in ‘robography’
(see Figure 3) or Ken Rinaldo’s ‘emergent systems’, pair unapologetically electromechanical machines with requisitely instrumented humans, in ways that elaborate
and thicken the agencies of the resulting assemblage (Wei, 2002).14 Most notably
for the concerns of this article, these projects move away from humanoid
machines in favour of human–machine intra-actions in which corporeal difference
is translated into connections found in and through the encounter. While affect is
clearly present in these ‘e/motional’ assemblages (Neumark, 2001), the relation
of humans to machines is explicitly that of evocation and response between
different, non-mirroring, dynamically interconnected forms of being. Not only
do these experiments promise innovations in our thinking about machines,
they also reverberate in generative ways with ongoing refigurings of what it
means to be human.
Our best hope for avoiding the twin traps of categorical essentialism and the
erasure of differences that matter is to attend closely to just how human–nonhuman
relations are figured, including their genealogies, legacies, and the distributions
effected through particular cuts. As Birke et al. observe for the case of animals,
‘we might even say that the very use of non-human animals in laboratory science
enacts a radical discontinuity between non-human and human’ (2004: 178, original
emphasis). Similarly, we might argue that the project of the humanoid robot, with
its mimetic and representational commitments to the replication of a Humanist
figure of the subject, further inscribes a discontinuity of subjects and objects, persons and things. The alternative under construction in feminist theory and science
and technology studies, in contrast, aims for 'more-than-human... geographies and
philosophies' (Lorimer and Davies, 2010: 33), where a concern for forms of relation
and the consequences of differentiation is paramount.

Figure 3. Bill Vorn's Grace State Machine.
Source: http://billvorn.concordia.ca/menuall.html.

This requires extricating ourselves from a tradition in which our interest in nonhumans is for either their reflective or contrastive properties vis-à-vis (a certain figure of) our own, in favour of an
attention to ontologies that radically – but always contingently – reconfigure the
boundaries of where we stop, and the rest begins.
Acknowledgements
Earlier versions of this article benefited from discussions following its presentation at numerous venues, including the workshop ‘Experimental Objects’ at Lancaster University in 2010;
an invited seminar at the Institute for Science, Innovation and Society at Oxford
University in 2009; a joint seminar sponsored by the Programs in Science, Technology
and Society and Work, Technology and Organizations at Stanford University in 2008; as
part of the panel ‘Re-tooling Subjectivities: Exploring the possible through feminist science studies’ at the ‘Subjectivity’ conference at Cardiff University in 2008; at the conference
‘Reclaiming the World: the future of objectivity’ at the University of Toronto in 2008; and
at the Workshop on Animation and Automation, held jointly between the University of
Manchester and Lancaster University in 2008, which I co-organised with Jackie Stacey. I am
grateful to Celia Roberts and Myra J. Hird for encouraging me to develop these ideas for
inclusion in this special issue, as well as for the insightful comments of two anonymous
reviewers.
Notes
1. Corporealisation, as Donna Haraway explains it, is a process through which new bodies,
both human and nonhuman, are brought into being. She reminds us that such bodies
'are perfectly "real", and nothing about corporealization is "merely" fiction. But corporealization is tropic, and historically specific at every layer of its tissues' (1997: 142). I try
to hold together these relations of the material and the tropic in the discussion
that follows.
2. http://www.youtube.com/watch?v=HuqL74C6KI8; see Appendix for transcript. This
video is apparently an impromptu, non-professional recording shot by a member of a
party encountering the robot in the Stata Center atrium. As the document of a brief
encounter between humans and a robot in a public space, it stands apart from the
majority of demonstration videos, produced and distributed by those who create humanoid robots for their sponsors.
3. For related analyses of my own encounters with the MIT AI Laboratory’s celebrity
robots Cog and Kismet, as well as with performance artist Stelarc’s Prosthetic Head,
see Suchman (2007a: ch. 14).
4. In this article I follow the convention of using initial capitalisation to reference normative
form, i.e. the Human, and the uncapitalised term to refer to an open-ended horizon of
possible qualities of resemblance, i.e. humanlike, humanoid.
5. I return to the circularity of this logic below.
6. For a more extended discussion of this premise in the sciences of the artificial see Kember
(2003), Suchman (2007b).
7. For a review of recent writings on ‘material feminisms’ see Hird (2009a); see also Hird
(2009b, 2010); Alaimo and Hekman (2008); Haraway (2008); Barad (2007); Bennett
(2010).
8. Interestingly for my argument Birke et al. specifically cite, as an element of
the discourse that they are aiming to displace, the analogy of animals with
automata in terms of intrinsically determined behaviours, stimulus-response and related
tropes.
9. For an account of a laboratory in which organic (Drosophila, C. elegans) and inorganic
(robotic) models are combined in research on biological systems, see Fujimura (2005).
In reflecting on what she names the new symbiosis between biology and engineering,
Fujimura observes that this is not surprising, insofar as ‘human scientists have been
building what we know of both biological systems and engineered systems, and the
analogies between the two, since at least the 17th century’ (2005: 213). While our
approaches to biocomplexity, she argues, are a product of ‘movements back and forth
across the machine-living organism border’, her concern is the question ‘What is lost in
translation?' (2005: 213). Concerned specifically with developments in systems biology,
Fujimura urges that 'we have to understand which versions of machines and which
versions of nature move back and forth, and when, across the machine-nature
border', in order to understand how contemporary figures of each are constituted
(2005: 214).
10. We see a bit more of the robot's programming, and the aims of the experiment, in the
complaint 'Too many words', explicated somewhat by the accompanying text. G's
conclusion from the latter, however, is that contrary to what the invitation to conversation suggests, in the end Mertz 'cannot communicate'.
11. http://www.ai.mit.edu/projects/sociable/overview.html
12. This proposition of Turing's is cited approvingly by Elizabeth A. Wilson, along with the
turn to learning, the child and affect in contemporary AI, as corporealised in Kismet
(2002: 48).
13. The status of autism as a diagnostic category remains contested, including the question
of whether or how this 'condition' should be treated as a disability (see valentine, 2010).
Setting aside these important questions for the moment, my interest here is in the
Hertfordshire group's deliberate reworking of the standard experimental design in
ways that I describe below.
14. See http://billvorn.concordia.ca/menuall.html and http://kenrinaldo.com/; see also
Kelly Dobson's vocal intra-actions with a variety of machines (http://eyebeam.org/
people/kelly-dobson); and the robot theatre of Louis Philippe Demers (http://
www.hfg-karlsruhe.de/ldemers).
References
Ahmed S (1998) Differences that Matter: Feminist Theory and Postmodernism. Cambridge:
Cambridge University Press.
Alac M (2009) Moving android: On social robots and body-in-interactions. Social Studies of
Science 39: 491–528.
Alaimo S and Hekman S (2008) Material Feminisms. Bloomington: Indiana University
Press.
Aryananda L (2005a) Socially situated learning for a humanoid robotic creature. Available
at: http://publications.csail.mit.edu/abstracts/abstracts05/aryananda/aryananda.html#2.
Aryananda L (2005b) Out in the world: What did the robot hear and see? Epigenetic
Robotics. Available at: http://people.csail.mit.edu/lijin/publication.html.
Aryananda L (2006) Attending to learn and learning to attend for a social robot. Submitted
to Humanoid 2006: 6th IEEE-RAS International Conference on Humanoid Robots.
Available at: http://people.csail.mit.edu/lijin/hum.pdf.
Asdal K (2008) Subjected to parliament: The laboratory of experimental medicine and the
animal body. Social Studies of Science 38: 899–917.
Barad K (2007) Meeting the Universe Halfway: Quantum Physics and the Entanglement of
Matter and Meaning. Durham, NC: Duke University Press.
Bennett J (2010) Vibrant Matter: A Political Ecology of Things. Durham, NC: Duke
University Press.
Birke L, Bryld M and Lykke N (2004) Animal performances: An exploration of intersections
between feminist science studies and studies of human/animal relationships. Feminist
Theory 5: 167–183.
Bloomfield B and Vurdubakis T (2002) The vision thing: Constructing technology and the
future in management advice. In: Clark T and Fincham R (eds) Critical Consulting: New
Perspectives in the Management Advice Industry. Oxford: Blackwell, 115–129.
Breazeal C (1998) Early experiments using motivations to regulate human-robot interaction. Proceedings of 1998 AAAI Fall Symposium, Emotional and Intelligent: The Tangled Knot of Cognition, Orlando, FL. Menlo Park, CA: AAAI Press, 31–36.
Breazeal C (2000) Sociable machines: Expressive social exchange between humans and
robots. Unpublished Sc.D., MIT.
Breazeal C and Scassellati B (1999) A context-dependent attention system for a social robot.
In: Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence
(IJCAI99). Stockholm, Sweden, 1146–1151.
Breazeal C and Velasquez J (1998) Toward teaching a robot ‘infant’ using emotive communication acts. Proceedings of 1998 Simulation of Adaptive Behavior, Workshop on Socially
Situated Intelligence. Zurich, Switzerland, 25–40.
Brooks R (2002) Flesh and Machines: How Robots Will Change Us. New York: Pantheon
Books.
Castañeda C (2001) Robotic skin: The future of touch? In: Ahmed S and Stacey J (eds)
Thinking Through the Skin. London: Routledge, 223–236.
Castañeda C (2002) Figurations: Child, Bodies, Worlds. Durham, NC: Duke University
Press.
Castañeda C and Suchman L (forthcoming) Robot visions. In: Gamari-Tabrizi S (ed.)
NatureCultures: Thinking with Donna Haraway.
Currier D (2003) Feminist technological futures: Deleuze and body/technology assemblages.
Feminist Theory 4: 321–338.
Cussins C (1998) Ontological choreography: Agency for women patients in an infertility
clinic. In: Berg M and Mol A (eds) Differences in Medicine. Durham, NC: Duke
University Press, 166–201.
Daston L (2000) The coming into being of scientific objects. In: Daston L (ed.) Biographies
of Scientific Objects. Chicago: University of Chicago Press, 1–14.
Davies G (2010) Captivating behaviour: Mouse models, experimental genetics and reductionist returns in the neurosciences. The Sociological Review 58: 53–72.
Dror O (1999) The scientific image of emotion: Experience and technologies of inscription.
Configurations 7: 355–401.
Dror O (2001) Counting the affects: Discoursing in numbers. Social Research 68:
357–378.
Edwards P (1996) The Closed World: Computers and the Politics of Discourse in Cold War
America. Cambridge, MA: MIT Press.
Fujimura J (2005) Postgenomic futures: Translations across the machine-nature border in
systems biology. New Genetics and Society 24(2): 195–225.
Giffney N and Hird MJ (2008) Introduction. In: Giffney N and Hird MJ (eds) Queering the
Non/Human. Aldershot: Ashgate, 1–16.
Grimes K (2006) Future nature. Leonardo Electronic Almanac 14(7–8). Available at: http://
leoalmanac.org/gallery/digiwild/future.htm.
Haraway D (1989) Primate Visions: Gender, Race, and Nature in the World of Modern
Science. New York: Routledge.
Haraway D (1991) Simians, Cyborgs, and Women: The Reinvention of Nature. New York:
Routledge.
Haraway D (1997) Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™: Feminism and Technoscience. New York: Routledge.
Haraway D (2008) When Species Meet. Minneapolis: University of Minnesota Press.
Hayles NK (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature,
and Informatics. Chicago: University of Chicago Press.
Hayles NK (2005) Computing the human. Theory, Culture & Society 22: 131–151.
Helmreich S (1998) Silicon Second Nature: Culturing Artificial Life in a Digital World.
Berkeley: University of California Press.
Hird M (2009a) Feminist engagements with matter. Feminist Studies 35: 329–346.
Hird M (2009b) The Origins of Sociable Life: Evolution after Science Studies. Basingstoke:
Palgrave.
Hird M (2010) Meeting with the microcosmos. Environment and Planning D: Society and
Space 28: 36–39.
Kay L (2000) Who Wrote the Book of Life? A History of the Genetic Code. Stanford:
Stanford University Press.
Keller EF (2007) Booting up baby. In: Riskin J (ed.) Genesis Redux. Chicago: University of
Chicago Press, 334–345.
Kember S (2003) Cyberfeminism and Artificial Life. London: Routledge.
Latour B (1986) Visualization and cognition: Thinking with eyes and hands. Knowledge and
Society 6: 1–40.
Latour B (1988) The Pasteurization of France. Cambridge, MA: Harvard University
Press.
Latour B (1993) We Have Never Been Modern. Cambridge, MA: Harvard University Press.
Law J (2004) After Method: Mess in Social Science Research. London: Routledge.
Lorimer J and Davies G (2010) Interdisciplinary conversations on interspecies encounters.
Environment and Planning D: Society and Space 28(1): 32–33.
Lynch M (1988) Sacrifice and the transformation of the animal body into a scientific object:
Laboratory culture and ritual practices in the neurosciences. Social Studies of Science 18:
265–289.
Maes P (1997) Modelling adaptive autonomous agents. In: Langton C (ed.) Artificial Life:
An Overview. Cambridge, MA: MIT Press.
Menzel P and D’Aluisio F (2000) Robo Sapiens. Cambridge, MA: MIT Press.
Myers N (2008) Molecular embodiments and the body-work of modeling in protein crystallography. Social Studies of Science 38(2): 163–199.
Neumark N (2001) E/motional machines: Esprit de corps. In: Koivunen A and Passonen S
(eds) Affective Encounters: Rethinking Embodiment in Feminist Media Studies, Media
Studies, Series A, No. 49. Turku, Finland: University of Turku, School of Art,
Literature and Music, 162–170.
Noble D (1984) Forces of Production. New York and Oxford: Oxford University
Press.
Orr J (2006) Panic Diaries: A Genealogy of Panic Disorder. Durham, NC: Duke University
Press.
Risan L (1997) Artificial life: A technoscience leaving modernity? An anthropology of subjects and objects. Available at: http://anthrobase.com/Txt/R/Risan_L_05.htm.
Robins B and Dautenhahn K (2006) The role of the experimenter in HRI research: A case
study evaluation of children with autism interacting with a robotic toy. In: Proceedings of
the 15th IEEE International Symposium on Robot and Human Interactive Communication
(RO-MAN06), 6–8 September. University of Hertfordshire, Hatfield, UK: IEEE Press,
646–651.
Robins B, Dautenhahn K and Dickerson P (2009) From isolation to communication: A case
study evaluation of robot assisted play for children with autism with a minimally expressive humanoid robot. In: Proceedings of the Second International Conference on Advances
in Computer-Human Interactions, ACHI 09. Cancun, Mexico: IEEE Computer Society
Press, 205–211.
Schegloff E (1982) Discourse as an interactional achievement. In: Tannen D (ed.)
Georgetown University Roundtable on Language and Linguistics: Analyzing Discourse
and Talk. Washington, DC: Georgetown University Press, 71–93.
Shapin S and Schaffer S (1985) Leviathan and the Air-pump: Hobbes, Boyle, and the
Experimental Life. Princeton, NJ: Princeton University Press.
Smith W (2009) Theatre of use: A frame analysis of IT demonstrations. Social Studies of
Science 39(3): 449–480.
Suchman L (2007a) Human-Machine Reconfigurations: Plans and Situated Actions, 2nd edn.
New York: Cambridge University Press.
Suchman L (2007b) Feminist STS and the sciences of the artificial. In: Hackett E,
Amsterdamska O, Lynch M and Wajcman J (eds) The Handbook of Science and
Technology Studies, 3rd edn. Cambridge, MA: MIT Press, 139–163.
Turing AM (1950) Computing machinery and intelligence. Mind 59: 433–460.
valentine k (2010) A consideration of medicalisation: Choice, engagement and other responsibilities of parents of children with autism spectrum disorder. Social Science & Medicine
71: 950–957.
Wei SX (2002) Resistance is fertile: Gesture and agency in the field of responsive media.
Configurations 10: 439–472.
Wilson E (2002) Imaginable computers: Affects and intelligence in Alan Turing. In: Tofts D,
Jonson A and Cavallaro A (eds) Prefiguring Cyberculture: An Intellectual History.
Cambridge, MA: MIT Press, 38–51.
Appendix: Mertz transcript
Source: ‘The Speaking Robot @ MIT’. http://www.youtube.com/watch?v=HuqL74C6KI8
R = robot, G = person 1, M = person 2; ‘[’ indicates overlap
R: Hello, what is your name?
G: (laughter) My name is G, my friend. What is yours?
[
R: Your name is (unintelligible).
G: Is what?
R: Yes or No?
G: No, G.
R: Say yes or no, please.
G: No.
R: Please look at my face.
M: (He) is speaking in French! Are you speaking French?
R: Say yes or no, please.
G: Nooo.
R: Hello, what is your name?
G: G
[
M: M
G: Ohh! (much laughter)
R: (turning toward M) Your name is (unintelligible). Yes or No?
M: (laughing) No. It’s M.
R: I cannot see you.
M: I’m here.
R: Please look at my face.
M: What? What did you say?
R: Please face me directly.
M: (looking to G) What?
G: I didn’t get it.
R: Please face me directly.
G: It’s so ridiculous!
M: Ah, answer correctly.
R: Hello, what is your name?
M: M. And yours?
R: (frowns)
M: Ohh! (points)
R: Do not move too much.
G: You move too much.
R: I cannot see you.
(much laughter, exchanges of looks, among humans)
R: Please look at my face.
[
M: Can you see me now? Are you seeing me now?
R: Please face me directly.
G: What?
R: (plaintively) You are too far away.
M: (laughing, look to G) No I’m not far away, what do you want?
R: Please say some words. (unintelligible)
M: Let’s go and talk.
R: Too many words.
G: Oh, okay.
M: Oh, too many words.
G: (reading text). ‘I don’t understand any language. But I am trying to learn and
repeat simple words you say.’
R: (turns to G) Come closer please.
G: Oh, he cannot communicate. (Onlooking friend goes around to look at the back of
R, enters frame of camera.)
R: Come closer please.
G: Am I close enough?
R: You are too far away.
(much laughter)
R: Come closer please.
M: I think he wants to get kissed.
G: (kissing noises, waves, much laughter) In fact there is someone, there is somebody
in there () a lot of fun.
(more laughter)
M: Okay (waves) good bye. Bye bye.
G: (waves) Bye, bye, huh.