Lucy Suchman
Book by Lucy Suchman
Neural Networks proposes to reconstruct situated practices, social histories, mediating techniques, and ontological assumptions that inform the computational project of the same name. If so-called machine learning comprises a statistical approach to pattern extraction, then neural networks can be defined as a biologically inspired model that relies on probabilistically weighted neuron-like units to identify such patterns. Far from signaling the ultimate convergence of human and machine intelligence, however, neural networks highlight the technologization of neurophysiology that characterizes virtually all strands of neuroscientific and AI research of the past century. Taking this traffic as its starting point, this volume explores how cognition came to be constructed as essentially computational in nature, to the point of underwriting a technologized view of human biology, psychology, and sociability, and how countermovements provide resources for thinking otherwise.
Papers by Lucy Suchman
Starting with her early works on “Talk with Machines” (1986, republished in 2021) and her books Plans and Situated Actions: The Problem of Human-Machine Communication (1987) and Human-Machine Reconfigurations (2007a), Lucy Suchman not only opened up a new domain of scientific interest in humans and technology, but also showed how the scope of human-machine relations needs to be reconceptualized. With her most recent works (2023a, 2023b), she not only widens the perspective on the contexts of machine use, particularly by the military, but also offers insights into how to conceptualize AI in terms of its ontological status and its agency. Discussing the relevance of the concept of autonomy for relations between humans and machines, Lucy Suchman clearly positions herself in the debate and demonstrates how we need to reconfigure and address so-called machine autonomy.
This article aims to integrate two interrelated strands in critical security studies. The first is mounting evidence for the fallacy of claims for precision and accuracy in the United States ‘counterterrorism’ programme, particularly as it involves expanding aerial surveillance in support of operations of extrajudicial assassination. The second line of critical analysis concerns growing investment in the further automation of these operations, more specifically in the form of the US Department of Defense Algorithmic Warfare Cross-Functional Team, or Project Maven. Building upon generative intersections of critical security studies and science and technology studies (STS), I argue that the promotion of automated data analysis under the sign of artificial intelligence can only serve to exacerbate military operations that are at once discriminatory and indiscriminate in their targeting, while remaining politically and legally unaccountable.
Translated by Alisa Maximova
Edited by Andrei Korbut
With an introduction to Russian edition by Lucy Suchman
ISBN: 978-5-9500244-5-0
But AI and robotics are very different kinds of natureculture than global warming. True, dynamics are in place that will unfold if they are not actively interrupted and mitigated. But these dynamics are much more wholly human ones, less entangled with the more-than-human and more amenable to a political will to intervention. Moreover, while technological initiatives are progressing in some areas (processing power, data storage, the sophistication of algorithms, and networking), there is a notable lack of progress in efforts to achieve humanlike capacities. These differences are obscured, however, by the prevailing mystification of the state of the robotic arts and sciences. So what if the questions that we ask are rather these: In what ways, and to what extent, are machines becoming more humanlike, and in relation to what figure of the human? In whose interests are these projects, and who decides that they should go forward, in lieu of other projects of transformative future making?
We can begin to address these questions by looking more closely at the boundaries of robot agencies: that is, the ways in which they are currently designated, and how they might be drawn differently. This approach begins from the observation that the framing of so-called autonomous robots – in both their visual and narrative representation, and in the material practices of their demonstration – reiterates a commitment to the figure of a human subject characterized by bounded individuality, and to the reproduction of an order of hierarchical humanity deeply rooted in imperial/colonial histories.
The reading of humanoid robot mediations that follows is part of a broader critical engagement with projects to configure robots in the image of living creatures, and in particular humans and their companion species. Tracking and responding to media reports of these developments, I try to identify alternative resources from anthropology, science and technology studies, feminist and post/decolonial scholarship that can help us to question the assumptions that these stories repeat, at the same time that they purport to be telling us about things that are unprecedented and, most disturbingly, sure to happen. My aim is to destabilize the authority, the credibility, of these narratives of humanoid (and more broadly lifelike) robots, in order to hold open a space for critical analysis that might enable, in turn, very different technological projects.
As a contribution to the CCW's third informal meeting of experts on lethal autonomous weapon systems (LAWS), this briefing paper focuses on the implications of the requirement of situational awareness for autonomous action – whether by humans, machines or complex human-machine systems. For the purposes of this paper, 'autonomy' refers to self-directed action, and more specifically the action-according-to-rule that comprises military discipline. Unlike the algorithmic sense of a rule as that term is used in Artificial Intelligence (AI), military rules always require interpretation in relation to a specific situation, or situational awareness. Focusing on the principle of distinction, I argue that International Humanitarian Law (IHL) presupposes capacities of situational awareness that it does not, and cannot, fully specify. At the same time, autonomy or 'self-direction' in the case of machines requires the adequate specification (by human designers) of the conditions under which associated actions should be taken. This requirement for unambiguous specification of condition/action rules marks a crucial difference between autonomy as a legally accountable human capacity, and machine autonomy. The requirement for situational awareness in the context of combat, as a prerequisite for action that adheres to IHL, raises serious doubts regarding the feasibility of lawful autonomy in weapon systems. The questions surrounding lethal autonomous weapon systems (LAWS) are being addressed by the Convention on Certain Conventional Weapons (CCW) along multiple lines of analysis. This briefing paper is meant as a contribution to discussions regarding the concept of autonomy, on the basis of which I present an argument questioning the feasibility of LAWS that would comply with International Humanitarian Law (IHL). 
This argument is based not on principle, but rather on empirical evidence regarding the interpretive capacities that legal frameworks like IHL presuppose for their application in a specific situation. These capacities make up what in military terms is named situational awareness. Despite other areas of