beliefs using more technical vocabulary, both groups expressed similar humanness-related beliefs and ethical concerns.

In the rest of the paper, we review relevant prior work, describe our methodology, and then report findings related to the emergent themes of humanness and ethics. Finally, we discuss the implications of these themes for the development and deployment of AI systems.

Background

This section first discusses how perceptions of technology impact use before describing prior research on perceptions of AI systems.

The Role of Perceptions in Use of Technology

The field of human factors aims to understand the subjective and behavioral components of people’s interactions with technology in order to improve the design of technological systems (Helander 1997). Trust in an automated system, for instance, is a subjective variable that is influenced by user-, context-, and system-related factors (Lee and See 2004; Hoff and Bashir 2015). Perceptions of characteristics of an automated system, such as its reliability, can influence user behavior (Lee and See 2004). Understanding perceptions of technology can therefore provide critical insights into adoption and use. Human-centered design (ISO 2019) applies the study of perceptions of technology toward designing systems that are best suited for human use and interaction, an approach that has been embraced by the human-centered AI community (Shneiderman 2022). The current study contributes in-depth data on general public and expert perceptions of AI to advance human-centered AI.

Perceptions of Artificial Intelligence

Cave et al. (2018) note that the general public’s perceptions of new technology can directly impact technological progress and adoption, finding that current narratives about AI focus on embodied and humanlike technology, utopian or dystopian extremes, and homogenous representation. The proliferation of AI systems warrants research focused on how “artificial intelligence” is perceived by the public, as well as how beliefs may impact AI use and adoption.

Beliefs about AI do appear to impact the willingness to rely on algorithmic decision-making systems. Prior work has observed both algorithmic aversion (Dietvorst, Simmons, and Massey 2015) and algorithmic appreciation (Logg, Minson, and Moore 2019), where people adhere differently to advice depending on whether it came from an algorithm or a human. Adherence appears to vary based on factors such as observation of algorithmic errors, type of task, and self-confidence (Prahl and Van Swol 2017), being highly context-dependent and driven by heuristics with which humans evaluate machines (Jones-Jang and Park 2023). Jacovi et al. (2023) explicitly tie such perceptions to design, suggesting that AI explanation methods should take into account “folk concepts” of technological behavior. Design which accounts for human perceptions can reduce the potential for misunderstandings of AI behavior by users and thereby reduce negative impacts.

Surveys on perceptions of AI in general have demonstrated a mix of hope and concern among the public in the U.S. (Zhang and Dafoe 2019; Zhang and Dafoe 2020; Pew Research Center 2022; Bao et al. 2022) and around the world (Ipsos 2023). Kelley et al. (2021) found a consistent view across 8 countries that AI will have a large impact on society, with a mix of positive expectations and concerns. Research on perceptions of AI among healthcare staff corroborates the public’s ambivalence about AI’s impacts (Castagno and Khalifa 2020). Experts appear to share the same mix of concern and optimism as the general public, with concerns related to profit and power incentives, human rights, and misinformation (Pew Research Center 2023).

Some researchers have qualitatively explored the public’s perceptions of AI. Woodruff et al. (2018) found that participants in traditionally marginalized groups believed algorithmic fairness would impact their trust in a company or product. Das Swain et al. (2023) interviewed information workers and suggest that passive-sensing AI systems used to assess worker performance and well-being be implemented within norms established by reported expectations and concerns. The U.S. Public Assembly on High Risk AI brought members of the general public together with experts to learn about various AI topics and deliberate on AI’s harms and benefits to individuals, institutions, and society, with participants often commenting on the responsibility of developers and deployers (Atwood and Bozentko 2023). Most similarly to the current study, Alizadeh, Stevens, and Esau (2021) report findings from interviews on folk theories about AI in an undergraduate explainable AI course, concluding that concepts such as being automated, having agency, being humanlike, and learning shaped participants’ thoughts about AI’s impacts.

Our study makes the following novel contributions to this body of existing research on perceptions of AI: 1) we collected in-depth interview data to identify and describe specific beliefs about AI, rather than assessing solely the extent of concerns or hopes for AI; 2) we recruited general public and expert participants to observe the role of expertise and knowledge in shaping perceptions; and 3) we did not prescribe a single definition of AI nor prompt participants to focus on any particular application—our findings therefore reflect what participants consider to be important characteristics of “artificial intelligence” across various domains and applications.
Methodology

We conducted interviews in April through June of 2022. The interview protocol was refined via reviews by six AI researchers and two qualitative methodologists, as well as feedback from four pilot interviews with individuals representative of the target populations. The study was reviewed by our institution’s Research Protections Office. The interview protocol is available in our online Appendix⁵.

⁵ https://www.nist.gov/programs-projects/human-centered-ai

Participants & Recruitment

We recruited 25 individuals in the U.S. general public (GP) and 20 AI experts (E) to participate in interviews. GPs were recruited from a consumer research firm’s panel of study participants from the DC metro area, northeast, and southern U.S. We used a demographic screening questionnaire to recruit a sample that was representative of the U.S. in terms of sex and race. Es were selected from a list of roughly 700 attendees of a workshop on AI risk management. Researchers randomly selected individuals who worked with AI in industry in the U.S. and contacted them in batches until 20 interviews had been scheduled. Demographics for each sample are shown in our online Appendix⁵. GPs had various occupations, including project manager, physician, homemaker, student, and writer. Es spanned a variety of roles such as research analyst, data scientist, consultant, and product director, and worked on various topics such as privacy, cybersecurity, biometrics, standards, and policy.

Semi-Structured Interviews

Each interview was conducted virtually by the same member of the research team and lasted approximately one hour. Participants were encouraged to share their honest thoughts about AI and were informed that the interviewer, who has a human factors and human-computer interaction background, was not an “AI expert.” The interviewer provided neither a definition of AI nor examples of AI systems, allowing participants to articulate their own ideas of what constitutes AI and AI applications. Each interview began with the question, “When you hear artificial intelligence or AI, what do you think of?” The interviewer subsequently asked each of the questions in the interview protocol if it was not addressed spontaneously during the conversation. The interviewer was accompanied by a notetaker during each interview. The interviewer and notetaker used memoing throughout the interview period to reflect on emerging themes. Memos from the interview stage informed later analysis. Interviews were audio-recorded and professionally transcribed.

Data Analysis

After interviews were completed, a multidisciplinary team of five researchers coded and analyzed the transcripts. Analysis was conducted across two phases based on approaches to qualitative analysis described by Saldaña (2021). Analysis Phase 1 involved inductively developing a codebook and coding the 45 interview transcripts. Analysis Phase 2 involved theming the final codebooks and the coded data.

In Analysis Phase 1, coding for GP and E samples was conducted separately but followed the same process. Two researchers first independently open coded a set of five randomly selected transcripts. Those researchers engaged in a series of discussions merging similar codes and removing less frequently encountered codes until they agreed on an initial codebook. Sets of three randomly chosen transcripts were then used to test and refine the initial codebook until the researchers agreed that it adequately captured participants’ responses. Next, the full set of transcripts was recoded using the final codebook. For both GP and E samples, at least one researcher who was not involved in open coding participated in recoding. The research team first coded three randomly chosen transcripts to familiarize the new researchers with the final codebook. Each researcher was then assigned a subset of the transcripts to recode. Coders met throughout the coding process to discuss preliminary findings. Analytic memoing was conducted by coders to continuously enhance understanding of the data.

In Analysis Phase 2, research team members reviewed the final codebook, memos, and the data belonging to each code to look for patterns, collapsing codes into themes related to perceptions of AI. During theming discussions, it became apparent that views of humanness and ethics were raised consistently by participants to describe their perceptions of AI. The research team met repeatedly to discuss their observations using the concepts of humanness and ethics to guide conversations. Areas of the interview data where these themes were explicitly discussed by participants were explored in detail to refine the final themes in the Findings section. Theming discussions also focused on comparisons across general public and expert interviews.

Throughout the remainder of this paper, exemplar quotes are included to illustrate the resulting themes. Quotes are followed by a participant ID and timestamp in parentheses. For instance, “(GP01, 12:34)” would indicate that a general public participant made the statement 12 minutes and 34 seconds into the interview.

Findings
Interviews consisted of wide-ranging discussions of personal technology use, expected impacts of AI, and the role of new technology in life and society. Humanness and ethics were unifying concepts that drove perceptions of AI: participants described AI by comparing its characteristics to those of humans and frequently speculated on AI’s potential impacts on people and society. In the subsequent sections, we describe the nature of humanness-related beliefs and ethical concerns in turn.

Humanness

Humanness has been conceptualized in sociology as a set of traits which are used to distinguish humans from animals and machines (Haslam 2006). The emergence of the humanness theme in interviews suggests that anthropomorphism, the attribution of humanness to a non-human entity, may contribute to perceptions of AI systems (Kaplan et al. 2021). Anthropomorphism has been previously studied in perceptions of automation (de Visser et al. 2016; Jensen, Khan, and Albayram 2020).

Humanness was discussed by participants to both compare and contrast AI systems with humans. Early in interviews, GPs referred to AI having a “mind of its own” (GP10, 1:24) and to “virtual assistants… that are, you know, maybe you're interacting with online that… are just not a real person” (GP02, 1:30). Es referred to the “general automating of human tasks” (E04, 3:39) and “thinking machines” (E18, 3:05). When asked explicitly to define AI at the end of the interview, GP14 referred to it as “like partially an artificial brain… minus the consciousness and emotions part” (GP14, 55:48). E08 directly stated, “it's making that distinction between human and not” (E08, 54:42). In both groups, AI was defined as humanlike, but was still considered technology.

We organized humanness-related perceptions into three sub-themes shown in Figure 1: 1) Product of Developers, 2) Can’t Handle Nuance, and 3) Empathy. These perceptions appeared to impact participants’ willingness to rely on AI systems.

Figure 1: Humanness was discussed in terms of three sub-themes describing beliefs about AI’s characteristics.

Humanness Theme #1: Product of Developers

Throughout interviews, AI systems were deliberately categorized as machines via statements emphasizing that they are created by humans. This Product of Developers belief was often invoked to downplay the extent of AI systems’ humanness and emphasize their limitations: “I would probably say because of the way it's programmed, it probably wouldn't be able to answer, you know, like a multi-step question like that” (GP03, 27:08). Across both groups, participants remarked that, “AI, inherently, is just technology. It's just a machine. It's just a program. It's not running itself” (GP24, 31:37), diminishing its capability with statements like, “AI doesn't think— it learns from patterns, and it spits out an outcome” (E17, 12:46) and “a computer, it only does what you tell it to do” (GP15, 32:14).

Views of AI as programmed were juxtaposed against definitions which likened AI to humans. Participants used the word “tool” to reconcile AI’s simultaneous humanlike and machinelike qualities: “it's a tool that…is not going to solve all your problems, it has to be used in conjunction with the tools we've already got” (E19, 11:37). E02 expressed a similar desire to retain human authority: “you want it to be a tool rather than…you know, being better than you, right?” (E02, 36:56). The Product of Developers belief allowed participants to limit the extent of AI’s humanness and to maintain the conceptual separation between human and machine.

Participants also described AI’s mistakes as extensions of those made by humans: “if it does make a mistake I would say that's probably down to programming, which is still connected to a person” (GP14, 24:16). E05 summarized this view of AI systems as programmed machines: “at the end of the day, AI is a computer system. It's software… it's taught to look at X data and to weight it in certain ways, and based on that, to come to certain conclusions. And it can get them wrong, especially AI systems that are learning from people” (E05, 13:50). Despite being somewhat humanlike, AI was viewed as limited due to its lack of agency compared to humans.

Humanness Theme #2: Can’t Handle Nuance

Perceptions of AI’s strengths and weaknesses were consistently related to the notion that AI is ill-equipped to handle nuanced or complex situations, referred to here as the Can’t Handle Nuance belief. Participants characterized AI as rigid and inflexible relative to humans.

Human capability provided a baseline by which participants reasoned about AI’s characteristics. Weaknesses of humans were used to describe strengths of AI. For instance, GP17 noted the speed of AI, in that, “it can kind of move quicker than we're able to consciously” (GP17, 14:35), while E02 mentioned its superior ability to consume large
amounts of information: “as an individual, the amount of data which you can consume and learn from is limited” (E02, 36:56). Meanwhile, weaknesses of AI were often presented as strengths of humans. GP11 described how “humans read the room better” (GP11, 12:16). E14 articulated a weakness as well: “all it's gonna do is still be a processor of their information. It's not gonna be able to intelligently interpret and provide what a human would” (E14, 36:11). The tone of these comparisons was notably protective of human uniqueness, with participants expressing that AI would never be as capable as humans at some tasks: “I just don't trust it could get all…the little intricacies that a human would” (GP11, 44:48). E10 similarly discounted AI’s “thinking” capabilities: “That's not critical thinking, right? That's just doing a bunch of computations” (E10, 26:45). Despite acknowledging AI’s strengths, participants believed that AI is “not gonna be quite the same as a person next to you” (E17, 32:04).

The Can’t Handle Nuance belief that drove perceptions of AI’s strengths and weaknesses was straightforwardly stated by E08: “when it comes to creativity, people tend to be very good at that. When it comes to being relational, people are much better at that… machines are very good at repetitive or redundant things” (E08, 46:56). GP17 likewise expressed doubt that AI has the necessary “subtlety of understanding” (GP17, 20:53). Participants believed that AI systems may miss details that would not be missed by a human. The Can’t Handle Nuance belief was, however, also associated with consistency, precision, and efficiency. Viewing AI as able to “do it the same exact way every single time,” GP11 preferred AI over humans in matters of “efficiency” (GP11, 40:21). GP18 referred to AI as a “glorified record keeper” and “something that's more consistent” (GP18, 33:37) than humans. Es similarly praised AI’s strengths in “pattern recognition and identifying details in patterns” (E08, 45:35) and in “situations where you have a very clearly defined set of parameters, and it doesn't have to extrapolate a great deal” (E17, 18:49). According to participants, rigidity is not all bad—it’s just a matter of what you are aiming to do.

The Can’t Handle Nuance belief led to a lack of trust in AI systems in complicated situations, sometimes resulting in a desire for human oversight that was expected to compensate for AI’s weaknesses. Chatbots, for instance, were frequently described as being unable to handle certain situations: “if it's something very straightforward… something that uses…preexisting apps like to set a timer or to set an alarm, those typically work really well. But if I wanna say…something more complex like find the nearest pizza place… I think then it tends to get clunky to the point where it would be faster if I would just take the time to type it in myself” (GP06, 4:07). E07 similarly defined overly nuanced situations: “If I were to ask [it a] compound phrase… it often struggles versus a direct phrase” (E07, 21:08). To participants, being programmed meant that AI systems are not able to adapt.

Participants also explained discomfort with self-driving vehicles due to nuance. GP21 would consider using an automated vehicle on, “a stretch of a highway or something where you're not traveling too far” but was skeptical of its performance in a city: “there's unknown variables with cyclists, and scooters… and people just jaywalking. Maybe that…adds too many different variables” (GP21, 13:04). E08 had the same discomfort with self-driving in populated areas: “Just because the environment is busier. It's more complicated, it's more cluttered… and as a result, that brings on a more risk” (E08, 27:26). The Can’t Handle Nuance belief was associated with an evaluation of context. In the case of self-driving, the environment’s unpredictability affected participants’ views of the AI system’s capabilities and, as a result, their willingness to rely.

E19 aptly summarized the consensus among participants on nuance: “The more nebulous… your goal is or the means of reaching your goal… the more trouble the computer's gonna have doing it” (E19, 18:08). Participants viewed AI as proficient in processing large amounts of data and handling routine tasks efficiently. However, non-repetitive tasks that required human adaptability were seen as difficult for AI. Qualifying a task as “nuanced,” therefore, involved assessing whether human traits were required. Empathy was frequently mentioned as one such trait.

Humanness Theme #3: Empathy

Empathy and the ability to relate to people were central to beliefs about AI’s characteristics. GP06, who worked in healthcare, described how an AI system would be limited if doing their job: “I think people underestimate kind of what…I do in my job as a healthcare provider. Meaning like staying up to date in the literature, but also taking in patients whole… situation into the context of their potential medical decisions. Meaning not just like what their medical problems are, but kind of how, where are they in their life and what's their support system like? And, you know, sometimes you can kind of tell if someone's coming to you with like a medical problem or if they wanna be just like reassured” (GP06, 17:56). This account reflects how participants often teased apart AI’s strengths and weaknesses. Tasks have components deemed suitable for machines (e.g., “staying up to date in the literature,” “what their medical problems are”) and components that might require humanness (e.g., “where are they in their life and what's their support system like,” “they wanna be just like reassured”). A task was considered too “nuanced” for AI when it required the emotional skills considered unique to humans, also referred to as a “human touch” (GP15, 32:14; GP18, 31:04) or
the “human element” (GP09, 35:11; GP11, 18:53; E02, 43:17; E05, 31:30; E14, 34:40).

To many participants, empathy and flexibility were seen as unique to humans and out of reach for AI. This was not just a logical assessment. Participants reacted strongly to the idea of AI encroaching on areas of human empathy. For instance, after stating that, “when it comes to hotel and hospitality... you need a real person” (GP13, 14:23), GP13 firmly dehumanized AI systems: “It's not real, it's fake. It's controlled” (GP13, 16:36). GP09 was quite skeptical that AI systems could process emotion: “let's say I've just experienced loss… even if an AI system can…read my facial features and know that…I might be about to cry, I could…turn around and punch a wall because…grief is just completely…one of the purest forms of human emotion, and I think there's very little predictability that exists within it” (GP09, 43:43). Similar to discussions of nuance in chatbot and self-driving applications, human emotion was viewed as too complicated and unpredictable for AI. Es also expressed this view of AI’s emotional limitations, noting that “there's no relatability” (E08, 31:21) and that there are “social things that you can't program” (E06, 28:58). The cold and emotionless nature of AI was viewed as a strength for repetitive tasks but a weakness for those requiring compassion.

Participants often suggested that humans be in place to compensate for AI’s empathy-related weaknesses. Healthcare was one such setting where human oversight was desired. According to GP02, while AI could be appropriately used for “simple things,” the medical context was too serious to do without a human: “there are some issues that, like, when health or life or death, or… wellbeing of myself or another person is…really on the line, that… maybe AI is… a good kind of initial screener or… first step or something, but I'd still want… you know, some element of… a human interaction or supervision there as well” (GP02, 19:14). To participants, the human touch could not be fully replaced by AI systems: “a piece of AI might be able to say, oh, you have this gene and… this clinical trial, you know, has a chemical that affects this gene... But…your doctor knows that you've had a fever for the past five days …we're just not there yet with the code… we always need to have that human in the loop…for the foreseeable future, especially when it comes to something as high risk as… healthcare” (E16, 39:56). Perceptions of AI’s limitations in contextual understanding and empathy led to discomfort about its use in healthcare settings. To participants, human oversight would help ensure that anything missed by the AI system would be captured by the human.

Overall, the humanness theme emphasized the role of beliefs in the willingness to rely on AI systems, a finding also reflected in various concerns about AI’s ethical impacts.

Ethics

Although only one question in the interview protocol specifically referred to ethics, discussions of “right” and “wrong” and the potential impacts of AI on people and society were prevalent. Ethical concerns centered around the notion that AI is made and used by people: “I think the bigger thing that's scary about it is who are the people that are in charge of these…machines or technology, because they are…a reflection of their…creators” (GP01, 2:12). References to the people behind AI systems were consistently used to describe expectations for impacts: “I think it would be incorrect for me to say that I trust AI. I would say… it's more important to know the person, the company, the business…behind the AI, and how it was developed, why it was developed, more so than the actual AI itself” (E14, 30:46). We organized participants’ ethical concerns into four sub-themes shown in Figure 2: 1) Organizations Have Profit Motive and Power, 2) AI is Fallible and Impacts People, 3) AI is Inevitable, and 4) AI Lacks Transparency.

Figure 2: Ethics was discussed in terms of four sub-themes describing concerns about AI’s potential impacts.

Ethics Theme #1: Organizations Have Profit Motive and Power.

Profit motives of technology companies were described as a driving force for AI development: “people like it, so they can make money on it, so why are we not gonna do it” (GP01, 47:07); “it’s a major tech company and the number one goal is to make money” (E16, 2:34). Participants also described a potential misalignment between those creating AI technologies and the public: “the interests of…most people who are gonna be making that technology are typically not aligned with the interests of the larger population” (GP06, 30:28); “there's always gonna be that commercial pressure to push more on the commercialization side and development side and less on the side that is more in the
public interest” (E04, 16:25). Participants described AI systems as a product of people and, as such, driven by the same societal structures that motivate people.

Participants also expressed concerns about concentrated power among technology organizations: “I still see it being developed again by the most rich and powerful people...on the planet by government, big government, big corporations…to perpetuate their own…needs” (E13, 41:47). GP01 felt that technology is developed, “on a high level by people with money and power and like education, who…make those decisions and take us in the direction…that we're going in” (GP01, 53:31). E11 was “concerned that the private industry, like the big tech firms may take advantage of this…asymmetric information” (E11, 24:25), referring to the power imbalance between those developing and those impacted by technology. Reservations about data collection and use were a major part of this perceived asymmetry: “it leads to companies just having… unimaginable amounts of data… about my choices” (GP12, 25:42). Our participants’ commentary points to the importance of considering concentrated power, large-scale data collection, and profit motive when establishing ethical AI principles.

Ethics Theme #2: AI is Fallible and Impacts People.

Participants described AI systems as fallible and were therefore concerned about relying on them in various circumstances. GP12 expressed this viewpoint: “it's more of just the fact that these are just people or companies doing what they're doing, but like, all humans they aren't infallible, and these systems are flawed” (GP12, 19:02). The perception of AI as imperfect was connected to the view that humans themselves are imperfect. This apprehension centered around outcomes that impact people's lives at both individual and group levels: “The second you flip that switch to include…social data, economic data…of people, things that involve civil rights and individual rights, that is where it becomes incredibly scary and dangerous” (E03, 14:23).

Es explicitly connected AI’s fallibility to the role of data: “all of the biases that go into…our society…and whatever culture that is represented…in the data that you're using… you're just gonna reproduce those same things, and I think AI is really prone to amplifying those kinds of issues” (E06, 29:12). E14 similarly argued that systems, “can only process the data that is given or provided to the artificial intelligence to respond to a very narrow set of specific questions or specific asks…of the technology. It can't do more” (E14, 40:20). GPs made the same point, but less technically: “if the people putting in all the information were biased, then…that would kind of make the AI biased” (GP10, 31:01). GP11 expressed a similar thought: “I don't think that's a fault…on AI, per se. That's more on us as humans” (GP11, 25:51). Both groups viewed AI systems as a product of the people by which they were created and, therefore, prone to accelerating existing societal problems.

Views of AI’s fallibility led to a desire for human oversight, as in E20’s account of a frustrating experience with an automated system: “if a human had looked at it-they probably could have been like, ‘Oh, well, you needed that procedure to get this procedure done. Um, so of course, we'll cover both.’ So I think, yeah, those automated systems really drive me crazy sometimes” (E20, 42:48). Other participants noted that, because AI might not be fully reliable, they would be more comfortable “having somebody, a human come in afterwards, and actually…make the final determination, and only using it as a tool” (GP12, 14:08). Participants’ concerns about AI’s fallibility and desire for human oversight indicate the central view that humans are responsible for AI’s impacts.

Ethics Theme #3: AI is Inevitable.

Participants described the presence of AI systems in the future as a certainty, often acquiescing to the inevitable presence of technology in their lives: “Like either you're gonna like live up in a mountain and just have no technology and no contact with anybody ever, or [you’re] just gonna accept that everything in life comes with a risk…and just go with it” (GP20, 33:47). E13 used an analogy to previous technological advancement: “if you're a horse and buggy person and you knew that the CO2 from cars was gonna destroy the environment. Would you not get a car? It's a sacrifice that I think I would respect you for…But I mean, are you doing that much good or…really just like being a pain…to your family...it's just inevitable... We take the good with the bad on it” (E13, 46:09). Participants described their decisions to use various technological systems as a result of increased convenience and efficiency, without which life would be too difficult.

Despite acknowledging convenience, an underlying feeling of resignation characterized perceptions of inevitability, as when participants expressed a perceived lack of control over personal data: “Like, if you're trying to worry about protecting your data… there's so much that's just already out there, and you can't, you don't have any control over, that I seem to know of” (GP02, 37:03). GP20 explicitly tied the inevitability of technology to privacy concerns: “it's just like, you know, everything is automated at this point. So it's just kinda like you have no privacy, the illusion of privacy, like it's just something to make people able to sleep at night” (GP20, 32:02). Even Es were resigned to using technological systems despite acknowledging the potential risks: “in theory, I should delete my Facebook account. But there's also that piece…like Facebook groups are really useful to find people” (E20, 38:54). E12 acknowledged that their use of technology was not wholly in line with their privacy concerns: “I don't know if I do what I would tell someone else to do to mitigate those concerns… because…of the convenience... do I wanna put in a lot of work to like, think about this? Or do I just wanna do it because, like, I'm tired and
there's so many things we have to think about and do during the day?” (E12, 25:49). Although technology appeared to satisfy participants’ needs, use was frequently discussed as a result of a lack of alternative options or the significant inconvenience it would be to eschew modern technology. The perceived lack of choice was driven by a view that AI will only become more prevalent and be used more frequently in the future.

Ethics Theme #4: AI Lacks Transparency.

Lastly, participants described concerns about a lack of transparency in AI systems. Many GPs expressed a lack of awareness of AI’s presence as well as a lack of understanding about how AI systems work: “ultimately, I think we don't know... the thought is if we put something into a search bar, certain things will pop up. Right? We don't know who has control of what pops up” (GP11, 36:15). Es, despite having more technical understanding of how AI works, still had questions surrounding the use of their data and the presence of AI: “I don't like the idea of being manipulated by AI…give me the ability to…maintain my privacy and, you know, inform me of what data is being used” (E16, 8:17). Even Es, though informed on the types of applications which may use AI, were not entirely sure of when and how specific systems utilized AI.

Participants explicitly stated that users should be provided with more information regarding data collection and usage: “ethical AI means...the companies that are using it...they need to disclose that they're using AI, in what manner that they're using it, and disclose when things go wrong. Like if something goes wrong with the data that they're collecting” (GP08, 48:38). GPs linked their desire for information about AI systems to their trust in those systems: “those, like, 100-page disclosures…nobody has time to read through those...so if a company were to be transparent, from the start in a way that's understandable and not a hundred pages...to me they'll feel more trustworthy” (GP07, 48:40). Lack of insight into AI’s functioning was a primary concern of Es as well: “It's really about control and use, informing people about the ways in which their information is being…shared and disclosed, and in what context” (E12, 19:01). While the push for more information may be an uphill battle given the degree of resignation observed in our sample, participants suggested that efforts to improve transparency would be appreciated: “There'll have to be more standards” (GP05, 45:28); “I would definitely wanna see more regulations or improvements in that space” (E11, 29:23). Participants also consistently called for improved communication by organizations behind AI via more readable terms and conditions as well as disclosure of the use of AI in products or services. Overall, uncertainty about AI use and data collection was connected to a desire for more information.

Discussion

Our analysis revealed two overarching components of perceptions of AI: humanness and ethics. We first elaborate on the implications of these two themes before identifying points of overlap. Lastly, we comment on the overall alignment between general public and expert perceptions.

Design Implications of Humanness-Related Beliefs

In general, our participants expressed beliefs about AI via comparisons to humans’ corresponding characteristics. The Product of Developers belief aligns with findings in research on perceptions of algorithmic harm (Lima, Grgić-Hlača, and Cha 2023) and automation errors (Jensen et al. 2019), where attribution of blame to system developers has been observed to affect perceptions of technological systems. Moreover, Can’t Handle Nuance and Empathy beliefs observed in the current study corroborate previous research on the perceived rigidity of machines (Madhavan and Wiegmann 2004; Madhavan and Wiegmann 2007) and research finding emotionality and flexibility to be components of perceptions of intelligent personal assistants (Doyle et al. 2019). Designing to account for humanness-related beliefs can ensure that AI systems are human-centered (Jensen 2021), as suggested for improving the understandability of explanations of AI output (Toreini et al. 2020) and mitigating “anthropomorphic bias” that occurs when individuals attribute intent to AI systems (Jacovi et al. 2023).

Based on our findings, effective communication to users about AI system performance may consist of two components: 1) a definition of “nuance” for a particular task, and 2) an indicator of the AI system’s performance relative to a human’s with respect to that nuance. Consider, for instance, chatbots. Interview participants described frustrating instances when a chatbot could not handle their individualized request. In this case, “nuance” may be defined as the degree of specificity of a request, based on the space of input which the chatbot is equipped to handle. Chatbot designers might provide 1) an indication of the types of requests which are “more nuanced” (e.g., returning a purchased item) and those which are “less nuanced” (e.g., tracking a shipment), followed by 2) a comparison of the chatbot’s performance relative to a human customer service representative’s performance handling each type of request. A similar approach may be effective for self-driving cars by 1) indicating the types of environments which are “more nuanced” (e.g., town centers with many pedestrians) and “less nuanced” (e.g., highways) for the technological system, and 2) comparing the self-driving system’s performance to human performance in each situation. Defining nuance in a given scenario may require a complicated consideration of the roles of humans and technology in impactful decision-making processes, as well as assessment of users’ preconceptions
about tasks and the competencies they require. In this study, participants described nuanced tasks as requiring human qualities such as flexibility, emotion, and creativity, although such beliefs may not always reflect accurately on system capabilities.
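As a purely illustrative sketch of such a two-part notice (the request types, resolution rates, and message wording below are hypothetical and are not measurements from this study), a designer might encode each supported task with its nuance level and a comparison to human performance:

# Illustrative sketch only: task names, rates, and phrasing are hypothetical.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    name: str                  # e.g., a chatbot request type
    nuance: str                # "less nuanced" or "more nuanced" for this system
    ai_success_rate: float     # share of requests the AI resolves without escalation
    human_success_rate: float  # share resolved by a human representative

def performance_notice(task: TaskProfile) -> str:
    """Render the two components: the task's nuance level, then an AI-versus-human comparison."""
    gap = task.ai_success_rate - task.human_success_rate
    if abs(gap) < 0.02:
        comparison = "about as well as"
    elif gap > 0:
        comparison = "better than"
    else:
        comparison = "worse than"
    return (f"'{task.name}' is a {task.nuance} request for this assistant; "
            f"it handles such requests {comparison} a human representative "
            f"({task.ai_success_rate:.0%} vs. {task.human_success_rate:.0%} resolved).")

# Hypothetical chatbot examples mirroring the discussion above.
print(performance_notice(TaskProfile("Track a shipment", "less nuanced", 0.97, 0.95)))
print(performance_notice(TaskProfile("Return a purchased item", "more nuanced", 0.62, 0.91)))

The same structure could describe driving environments (e.g., highway versus town center) for a self-driving system, with the comparison drawn against a human driver.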
The Product of Developers belief has similar implications for the communication of AI system performance information to users. Interview participants invoked Product of Developers to emphasize AI systems’ lack of humanness. On one hand, this could lead a user to thoughtfully consider the humans who created a technological system with which they are interacting and adjust their performance expectations accordingly. On the other hand, Product of Developers could lead a user to view an AI system as a neutral entity whose behavior is fixed and predetermined when, in reality, it may be much more dynamic. Like Can’t Handle Nuance, the downplaying of technological capability associated with Product of Developers might foster misconceptions that lead to inappropriate reliance and negatively impact users. Effectively accommodating pre-existing beliefs and people’s tendency to compare AI systems to humans may lead to more efficient, effective, and favorable interactions between humans and AI systems.

Insights for Ethical AI Principles

Not only did GP and E participants express similar ethical concerns, but those concerns aligned with principles proposed in various existing ethical AI guidelines, such as the Organization for Economic Co-operation and Development (OECD) AI Principles (Yeung 2020), guidelines from the European Commission's High-Level Expert Group on AI (European Commission 2019), Japan’s Social Principles of Human-Centric AI (Cabinet Secretariat 2019), China’s Beijing AI Principles (International Research Center for AI Ethics and Governance 2019), and the U.S.’s Blueprint for an AI Bill of Rights (White House Office of Science and Technology Policy 2022). The principle of accountability (Yeung 2020; European Commission 2019) may address the role of organizations behind AI emphasized by participants. Robustness and safety (Yeung 2020; White House Office of Science and Technology Policy 2022) relate to concerns observed about AI’s fallibility. While the perceived inevitability of new technology is not explicit in current ethical guidelines, promoting principles such as sustainable development (European Commission 2019; Cabinet Secretariat 2019), respect for human autonomy (European Commission 2019) and human dignity (Cabinet Secretariat 2019), as well as privacy (White House Office of Science and Technology Policy 2022) may begin to address the resignation and perceived lack of control observed among our participants. Calls for transparency are also a consistent feature of such principles (European Commission 2019; Yeung 2020; White House Office of Science and Technology Policy 2022).

Researchers have suggested that the principles of respect for persons, beneficence, and justice from human subjects research protections may extend to AI (Greene et al. 2024). Ethical concerns in the current study point to approaches related to these principles which may improve AI ethics. Respect for persons means that 1) individuals should be treated as autonomous agents, and 2) persons with diminished autonomy are entitled to protection. Participants’ desire for organizations to disclose AI use, as well as a perceived lack of control over data collection and technology use, suggest that obtaining informed consent from users may contribute to respect for persons. Beneficence means that researchers 1) do no harm, and 2) maximize benefit and minimize risk. Participants expressed strong opinions on the drivers of AI development and a desire for public interest to be prioritized in risk-benefit assessments, believing that organizations should actively consider impacts of their AI systems on people. Our findings further suggest careful consideration of how human-in-the-loop solutions address impacts. Justice means that benefits and risks are shared among the population that may benefit from results of research. Participants expressed concerns regarding unequal distribution of risk related to AI’s fallibility and the idea that AI may exacerbate existing social inequalities. Views on concentrated power and asymmetry between organizations behind AI and the public also demonstrate concerns about unequal distribution of benefits. Though uncertainty about data usage described by both groups suggests that these concerns span levels of expertise, the general public may be prone to greater risk due to a slightly lesser awareness of where and how AI might be used. Given the observed view of AI systems as a product of people, a framework like the Belmont Report may be a productive initial step for organizations seeking to address the public’s ethical concerns, as well as external parties and regulators seeking to improve AI ethics.

Synthesizing Humanness and Ethics

We identified three points of overlap between the humanness and ethics themes which appear to represent broad features of perceptions of AI in the U.S.: 1) perceptions of people behind the technology, 2) views of AI’s fallibility, and 3) desire for human oversight. Each has implications for future research.

The Product of Developers belief was used to articulate AI’s nature as a programmed machine and limit its capability accordingly. Moreover, ethical concerns expressed by participants consistently referred to humans as the source of AI’s impacts. Collectively, these findings point to the central role of perceptions of people behind technology in perceptions of AI systems. Human perceptions of AI systems appear to be tied directly to their social context. This is not
to imply that technological characteristics are unimportant, but that beliefs about both public institutions and technology organizations may play a primary role in shaping beliefs about and use of systems. Future research on perceptions of AI systems should account for views of institutional context and people behind the technology.

AI was viewed by participants as fundamentally imperfect. Can’t Handle Nuance and Empathy beliefs referred to AI as limited in certain complex contexts and emotionally incapable relative to humans. The perceived weaknesses of AI, frequently tied to its programmed nature and human creators, contributed to concerns about potential negative impacts on people. Having observed perceptions of fallibility in both samples, it appears that fundamental human beliefs about machine rigidity drive perceptions of AI systems and may influence reliance and use across contexts. We recommend that future research explore the role of beliefs such as Can’t Handle Nuance and Empathy in the willingness to rely on AI systems, as well as the apparent moderating role of context- and task-related variables.

Lastly, due to perceptions of AI’s limitations and concerns for its impacts, participants expressed a desire for human oversight in various contexts. Humans were viewed as both the source of AI’s weaknesses and as responsible for its impacts. The desire for human oversight was most prominent in contexts viewed as requiring humanness. Ethical concerns were also consistently articulated via commentary on human accountability. These findings suggest that human participation in AI processes is strongly valued by the general public and AI experts. We recommend that future research consider not only how the expressed value in human oversight influences interactions with AI systems, but what its role can and should be in AI development and deployment.

Differences in articulation of beliefs and concerns reflected a broader difference across samples—Es were less likely than GPs to discuss their own use of AI. Es tended to describe the problems they saw in the AI field and often had to be asked explicitly to discuss their personal use and perceptions. For instance, the question, “Do you trust AI?” was frequently answered by Es describing what may lead people in general to trust AI, rather than their own trust. Still, there was a significant degree of similarity in the perceptions expressed across samples, suggesting that the perceptions discussed may represent those of the American public broadly, regardless of knowledge of or experience with AI.

Limitations

Interviews were conducted in April through June of 2022, prior to the popularization of generative AI (e.g., Roose 2022). While perceptions may have shifted as a result, our rich interview data provide a unique glimpse into perceptions of AI prior to this increase in public attention. Observed beliefs align with prior work on perceptions of automation (Madhavan and Wiegmann 2004; Madhavan and Wiegmann 2007), suggesting that human perceptions of machines will continue to affect perceptions of newer technological systems.

Because no definition of AI was provided, our findings describe beliefs related to participants’ ideas of “AI,” rather than technological systems necessarily containing AI. Participants were also told that the study was comparing the general public and experts and therefore were likely aware of the group to which they belonged. Nonetheless, all participants were informed that the interviewer was not an AI expert and were encouraged to share their honest thoughts. The similarities observed across groups highlight the fundamental nature of various perceptions that were shared.
References

Alizadeh, F.; Stevens, G.; and Esau, M. 2021. I Don’t Know, is AI also Used in Airbags? An Empirical Study of Folk Concepts and People’s Expectations of Current and Future Artificial Intelligence. I-Com 20(1): 3–17. doi.org/10.1515/icom-2021-0009.
Atwood, S., and Bozentko, K. 2023. U.S. Public Assembly on High Risk Artificial Intelligence 2023 Event Report. https://www.cndp.us/ai/. Accessed 2024-04-24.
Bao, L.; Krause, N. M.; Calice, M. N.; Scheufele, D. A.; Wirz, C. D.; Brossard, D.; Newman, T. P.; and Xenos, M. A. 2022. Whose AI? How Different Publics Think About AI and its Social Impacts. Computers in Human Behavior 130: 107182. doi.org/10.1016/j.chb.2022.107182.
Brynjolfsson, E., and McAfee, A. 2017. Artificial Intelligence, For Real. Harvard Business Review. https://hbr.org/2017/07/the-business-of-artificial-intelligence. Accessed 2024-04-24.
Cabinet Secretariat. 2019. Social Principles of Human-Centric AI. https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf. Accessed 2024-04-24.
Castagno, S., and Khalifa, M. 2020. Perceptions of Artificial Intelligence Among Healthcare Staff: A Qualitative Survey Study. Frontiers in Artificial Intelligence 3: 578983. doi.org/10.3389/frai.2020.578983.
Cave, S.; Craig, C.; Dihal, K.; Dillon, S.; Montgomery, J.; Singler, B.; and Taylor, L. 2018. Portrayals and Perceptions of AI and Why They Matter. The Royal Society. doi.org/10.17863/CAM.34502.
Das Swain, V.; Gao, L.; Wood, W. A.; Matli, S. C.; Abowd, G. D.; and De Choudhury, M. 2023. Algorithmic Power or Punishment: Information Worker Perspectives on Passive Sensing Enabled AI Phenotyping of Performance and Wellbeing. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Article 246, 1–17. New York: Association for Computing Machinery. doi.org/10.1145/3544548.3581376.
de Visser, E. J.; Monfort, S. S.; McKendrick, R.; Smith, M. A. B.; McKnight, P. E.; Krueger, F.; and Parasuraman, R. 2016. Almost Human: Anthropomorphism Increases Trust Resilience in Cognitive Agents. Journal of Experimental Psychology: Applied 22(3): 331–349. doi.org/10.1037/xap0000092.
Dietvorst, B. J.; Simmons, J. P.; and Massey, C. 2015. Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err. Journal of Experimental Psychology: General 144(1): 114–126. doi.org/10.1037/xge0000033.
Doyle, P. R.; Edwards, J.; Dumbleton, O.; Clark, L.; and Cowan, B. R. 2019. Mapping Perceptions of Humanness in Intelligent Personal Assistant Interaction. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services, Article 5, 1–12. New York: Association for Computing Machinery. doi.org/10.1145/3338286.3340116.
European Commission. 2019. High-Level Expert Group on AI (AI HLEG) Ethics Guidelines for Trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai. Accessed 2024-04-24.
Greene, K. K.; Theofanos, M. F.; Watson, C.; Andrews, A.; and Barron, E. 2024. Avoiding Past Mistakes in Unethical Human Subjects Research: Moving From Artificial Intelligence Principles to Practice. Computer 57(2): 53–63. doi.org/10.1109/MC.2023.3327653.
Helander, M. 1997. The Human Factors Profession. In Handbook of Human Factors and Ergonomics, edited by G. Salvendy, 3–15. New York: John Wiley & Sons.
Hoff, K. A., and Bashir, M. 2015. Trust in Automation: Integrating Empirical Evidence on Factors That Influence Trust. Human Factors 57(3): 407–434. doi.org/10.1177/0018720814547570.
International Research Center for AI Ethics and Governance. 2019. Beijing Artificial Intelligence Principles. https://ai-ethics-and-governance.institute/beijing-artificial-intelligence-principles/. Accessed 2024-04-24.
Ipsos. 2023. Global Views on A.I. 2023. https://www.ipsos.com/sites/default/files/ct/news/documents/2023-07/Ipsos%20Global%20AI%202023%20Report-WEB_0.pdf. Accessed 2024-04-24.
International Organization for Standardization (ISO). 2019. Ergonomics of Human-System Interaction—Human-Centred Design for Interactive Systems (ISO Standard No. 9241-210:2019). https://www.iso.org/standard/77520.html. Accessed 2024-04-24.
Jacovi, A.; Bastings, J.; Gehrmann, S.; Goldberg, Y.; and Filippova, K. 2023. Diagnosing AI Explanation Methods With Folk Concepts of Behavior. Journal of Artificial Intelligence Research 78: 459–489. doi.org/10.1613/jair.1.14053.
Jensen, T. 2021. Disentangling Trust and Anthropomorphism Toward the Design of Human-Centered AI Systems. In Artificial Intelligence in HCI: Lecture Notes in Computer Science, edited by Degen, H., and Ntoa, S., 41–58. Springer, Cham. doi.org/10.1007/978-3-030-77772-2_3.
Jensen, T.; Albayram, Y.; Khan, M. M. H.; Al Fahim, M. A.; Buck, R.; and Coman, E. 2019. The Apple Does Fall Far From the Tree: User Separation of a System From its Developers in Human-Automation Trust Repair. In Proceedings of the 2019 ACM Designing Interactive Systems Conference, 1071–1082. New York: Association for Computing Machinery. doi.org/10.1145/3322276.3322349.
Jensen, T.; Khan, M. M. H.; and Albayram, Y. 2020. The Role of Behavioral Anthropomorphism in Human-Automation Trust Calibration. In Artificial Intelligence in HCI, HCII 2020, Lecture Notes in Computer Science, edited by Degen, H., and Reinerman-Jones, L., 33–53. Springer, Cham. doi.org/10.1007/978-3-030-50334-5_3.
Kelley, P. G.; Yang, Y.; Heldreth, C.; Moessner, C.; Sedley, A.; Kramm, A.; Newman, D. T.; and Woodruff, A. 2021. Exciting, Useful, Worrying, Futuristic: Public Perception of Artificial Intelligence in 8 Countries. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 627–637. New York: Association for Computing Machinery. doi.org/10.1145/3461702.3462605.
Lee, J. D., and See, K. A. 2004. Trust in Automation: Designing for Appropriate Reliance. Human Factors 46(1): 50–80. doi.org/10.1518/hfes.46.1.50_30392.
Lima, G.; Grgić-Hlača, N.; and Cha, M. 2023. Blaming Humans and Machines: What Shapes People’s Reactions to Algorithmic Harm. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Article 372, 1–26. New York: Association for Computing Machinery. doi.org/10.1145/3544548.3580953.
Logg, J. M.; Minson, J. A.; and Moore, D. A. 2019. Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organizational Behavior and Human Decision Processes 151: 90–103. doi.org/10.1016/j.obhdp.2018.12.005.
Madhavan, P., and Wiegmann, D. A. 2004. A New Look at the Dynamics of Human-Automation Trust: Is Trust in Humans Comparable to Trust in Machines? Proceedings of the Human Factors and Ergonomics Society Annual Meeting 48(3): 581–585. doi.org/10.1177/154193120404800365.
Madhavan, P., and Wiegmann, D. A. 2007. Similarities and Differences Between Human–Human and Human–Automation Trust: An Integrative Review. Theoretical Issues in Ergonomics Science 8(4): 277–301. doi.org/10.1080/14639220500337708.
Pew Research Center. 2022. AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns. https://www.pewresearch.org/science/2022/03/17/ai-and-human-enhancement-americans-openness-is-tempered-by-a-range-of-concerns/. Accessed 2024-04-25.
Pew Research Center. 2023. As AI Spreads, Experts Predict the Best and Worst Changes in Digital Life by 2035. https://www.pewresearch.org/internet/2023/06/21/as-ai-spreads-experts-predict-the-best-and-worst-changes-in-digital-life-by-2035/. Accessed 2024-04-25.
Prahl, A., and Van Swol, L. 2017. Understanding Algorithm Aversion: When is Advice From Automation Discounted? Journal of Forecasting 36(6): 691–702. doi.org/10.1002/for.2464.
Roose, K. 2022. The Brilliance and Weirdness of ChatGPT. The New York Times. https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html. Accessed 2024-04-25.
Saldaña, J. 2021. The Coding Manual for Qualitative Researchers: Fourth Edition. Sage Publications.
Shneiderman, B. 2022. Human-Centered AI. Oxford University Press.
Stone, P.; Brooks, R.; Brynjolfsson, E.; Calo, R.; Etzioni, O.; Hager, G.; Hirschberg, J.; Kalyanakrishnan, S.; Kamar, E.; and Kraus, S. 2022. Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence. arXiv:2211.06318.
Toreini, E.; Aitken, M.; Coopamootoo, K.; Elliott, K.; Zelaya, C. G.; and Van Moorsel, A. 2020. The Relationship Between Trust in AI and Trustworthy Machine Learning Technologies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 272–283. New York: Association for Computing Machinery. doi.org/10.1145/3351095.3372834.
White House Office of Science and Technology Policy. 2022. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. https://www.whitehouse.gov/ostp/ai-bill-of-rights/. Accessed 2024-04-25.
Woodruff, A.; Fox, S. E.; Rousso-Schindler, S.; and Warshaw, J. 2018. A Qualitative Exploration of Perceptions of Algorithmic Fairness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Article 656, 1–14. New York: Association for Computing Machinery. doi.org/10.1145/3173574.3174230.
Yeung, K. 2020. Recommendation of the Council on Artificial Intelligence (OECD). International Legal Materials 59(1): 27–34. doi.org/10.1017/ilm.2020.5.
Zhang, B., and Dafoe, A. 2019. Artificial Intelligence: American Attitudes and Trends. Oxford, UK: Center for the Governance of AI, Future of Humanity Institute, University of Oxford. doi.org/10.2139/ssrn.3312874.
Zhang, B., and Dafoe, A. 2020. U.S. Public Opinion on the Governance of Artificial Intelligence. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 187–193. New York: Association for Computing Machinery. doi.org/10.1145/3375627.3375827.