Governance with Teeth
April 2019
First published by ARTICLE 19, 2019
ARTICLE 19
Free Word Centre
60 Farringdon Road
London EC1R 3GA
UK
www.article19.org
T: +44 20 7324 2500
E: info@article19.org
Tw: @article19org
Fb: facebook.com/article19org
ISBN: 978-1-910793-42-8
Text and analysis © ARTICLE 19, 2018 under Creative Commons Attribution-Non-Commercial-ShareAlike 2.5 licence. To access the full legal text of this licence, please visit: http://creativecommons.org/licenses/by-nc-sa/2.5/legalcode.
ARTICLE 19 works for a world where all people everywhere can freely express
themselves and actively engage in public life without fear of discrimination. We do this
by working on two interlocking freedoms, which set the foundation for all our work.
The Freedom to Speak concerns everyone’s right to express and disseminate opinions,
ideas and information through any means, as well as to disagree with, and question,
power-holders. The Freedom to Know concerns the right to demand and receive
information from power-holders, for transparency, good governance and sustainable
development. When either of these freedoms comes under threat through the failure of
power-holders to adequately protect them, ARTICLE 19 speaks with one voice, through
courts of law, through global and regional organisations, and through civil society
wherever we are present.
Contents
About us
Executive Summary
I. Introduction
2. The societal impact of AI: two approaches
3. Towards a human rights-based approach
4. Recommendations
Endnotes
About us
ARTICLE 19 is a global human rights organisation that protects and promotes the
right to freedom of expression and information around the world. Established in
1987 in London, ARTICLE 19 monitors threats to freedom of expression in different
regions of the world, and develops long-term strategies to address them.
We have also provided expert input on related topics and served on multiple
committees dedicated to AI and human rights. We have served as invited experts
to the UNESCO World Commission on the Ethics of Scientific Knowledge and
Technology (COMEST) and several other UN processes, including consultations
with various Special Rapporteurs. We have made a submission to the UK House
of Lords Select Committee on AI; offered expert input to the Council of Europe
committee MSI-AUT; and made several submissions to the AI and ethics initiative of
the Institute of Electrical and Electronics Engineers (IEEE), where we also
co-chair several working groups of the initiative. We hold membership in the
Partnership on AI and have given guidance on the development of AI for network
management at the Internet Engineering Task Force.
Executive Summary
As artificial intelligence (AI) is increasingly integrated into societies, its potential
impact on democracy and society has given rise to important debates about how AI
systems should be governed. Some stakeholders have put their focus on building
normative ethical principles, while others have gravitated towards a technical
discussion of how to build fair, accountable, and transparent AI systems. A third
approach has been to apply existing legal human rights frameworks to guide the
development of AI that is human rights-respecting through design, development and
deployment.
In this paper, ARTICLE 19 considers the ethical and technical approaches in the
field so far. We identify the contours and limitations of these parallel discussions,
and then propose a human rights-based approach to each of them. The intention
behind this paper is to explore how a human rights-based approach can
constructively inform and enhance these efforts and present key recommendations
to stakeholders:
7. As a continuation of the above, meaningfully engage with civil society and
academia at each stage of the process, to cultivate constructive criticism as
part of internal deliberation processes.
2. Ensure that AI systems in the public sector undergo adequate human rights
impact assessments, due diligence, and continuous auditing. These are not
systems that can simply be rolled out. They should instead be tailored to the
exact context and use for which they are intended.
5. Ensure that national and international efforts around AI are equally informed
by human rights concerns, constitutional standards, and the public interest
as they are by industry concerns.
2. Advocate for the promotion and protection of human rights in the context of
AI systems in a manner that is also informed by technical considerations
and limitations.
I. Introduction
As artificial intelligence (AI)1 has demonstrated its power to revolutionise
fundamental systems of communication, commerce, labour, and public services,
it has captured the attention of the technology industry, public officials, and civil
society. The potential of AI to perform tasks with speed and at scale beyond human
capability has fuelled great excitement. AI systems are already deeply embedded
in our everyday lives - from helping us navigate through morning traffic to offering
up the day’s news, to more nefarious uses of systems for surveillance,2 warfare,3
and oppressing democratic dissent.4 Yet many of the most powerful stakeholders
in the field have only just begun to consider the impact of AI systems on society,
democracy, rights, and justice.
2. The societal impact of AI: two approaches
2.1 The normative approach: Ethics initiatives
AI systems raise myriad questions for society and democracy, only some of which
are covered5 or addressed by existing laws.6 In order to fill these perceived gaps, a
vocal group of governments, industry players, academics, and civil society actors
have been promoting principles or frameworks for ethical AI.7 While there is an
abstract awareness of what “ethics” generally means, there is no precise or shared
understanding of the term. It has been subject to multiple interpretations by various
stakeholders, and is often defined by industry actors who use it on a case-by-case
basis.
At the time of writing this paper, at least 25 countries have published national AI
strategies9 and ethical task forces have cropped up around the world. The European
Commission’s High Level Expert Group on AI has laid down “Ethics Guidelines for
Trustworthy AI” focusing on respect for human autonomy, prevention of harm,
fairness, and explicability.10 Companies are constituting ethical boards, publishing
ethical principles for AI, and taking part in multi-stakeholder initiatives in this space
as well. Technical organisations like the Association for Computing Machinery
(ACM) and the Institute of Electrical and Electronics Engineers (IEEE) have published
ethical principles for autonomous systems. And academic11 and civil society actors
have engaged via government consultations12 and multi-stakeholder forums like the
Partnership on AI (PAI).13
This plethora of initiatives underlines the need for a framework to discuss the
desired impact of new technologies on society - it pushes for an articulation of what
ought to be done. The current debate around AI and ethics is rich, multi-disciplinary,
and takes various forms, including ethics boards, principles, and public statements.
ARTICLE 19 recognises the importance of these efforts, but also believes that there
is more reason to be critical than accepting of ethics initiatives, as we will discuss in
the next section.
2.1.3 Critical gaps in ethics initiatives
A primary reason to be critical of ethics initiatives in isolation is that they, more often
than not, are not actionable. They do not afford mechanisms that lead to tangible
change. The various principles developed by industry and states have, as yet,
not been accompanied by strong accountability mechanisms. They lack
concrete and narrowly defined language,14 independent oversight or enforcement
mechanisms, and clear transparency and reporting requirements. This means that
no matter how laudable the principles are, there is no way to hold governments
or companies to them. The general lack of transparency mechanisms
leaves no pathway for other stakeholders to know whether or not companies and
governments are complying with their own principles. And in cases where non-
compliance is revealed, there are inadequate mechanisms to hold companies and
governments accountable for their wrongdoing.
For instance, after Google received pushback from its own employees surrounding
Project Maven, a partnership with the US Department of Defense to improve drone
targeting using AI, the company published a set of AI principles that elucidated
its commitment to ethics, and made a public pledge to refrain from building
certain types of technology.15 But Google has not disclosed to what extent these
principles are embedded in concrete work at the company, and there has been no
demonstrable change in its internal decision-making
processes. This is particularly worrying because in the case of public-private
partnerships such as Project Maven, the accountability that governments otherwise
owe the public is potentially diluted by the use of technology built behind closed
doors and vague, non-binding commitments.
This lack of accountability can manifest even while AI systems are still being
developed. Recently, McNamara et al. conducted a study in which software engineers
were explicitly instructed to consider the ACM code of ethics as they were developing
new products. The study found that the code of ethics had no “observed effect” on
their work.16 This suggests that ethical codes of conduct cannot be a solution in and of
themselves, unless accompanied by mechanisms for compliance and accountability.
This approach also does not account for enforcement or redressal mechanisms,
nor does it contemplate the duty of companies to act in certain ways. Furthermore,
it fails to provide mechanisms by which consumers, public interest advocates, civil
society, and other affected individuals can have agency in the event that companies
fail to meet their own ethical standards.
For instance, following scrutiny over its algorithmic systems,18 Facebook took
several steps towards engaging with ethics initiatives. The company backed an
ethical AI institute at the University of Munich19 and became a founding member of
the Partnership on AI, a multi-stakeholder group that aims to “study and formulate
best practices on AI technologies, to advance the public’s understanding of AI,
and to serve as an open platform for discussion and engagement about AI and its
influences on people and society.”20 Facebook even signed resolutions in the
US Congress calling for the development of ethical principles.21 Yet at the same time,
recent research shows that Facebook discriminates in advertisement delivery on
the basis of gender and race,22 and has also been charged with housing-related
discrimination.23
As the outcomes of Project Maven show, these initiatives become even more
complex when they are deployed in public-private partnerships, where government
agencies procure privately developed AI systems. In these agreements, sole reliance
on ethical frameworks (as opposed to legally-binding constitutional or human rights
frameworks) dilutes state accountability and rights-based obligations.
As discussed above, there are various cases of governments working together with
industry to improve surveillance of dissidents, precision targeting in drones, or facial
recognition software for law enforcement purposes. These partnerships regularly
take shape in the absence of safeguards or meaningful oversight.
When civil society is invited to partake in deliberation around ethical AI, the division
of seats at the table is not equitable. Ethical initiatives within industry are more often
than not opaque to civil society, with most ethical boards and codes of conduct
being developed and deliberated exclusively in-house. What is more surprising is
that this trend continues even in ethical initiatives convened by governments. For
instance, during deliberations at the European High Level Expert Group on Artificial
Intelligence (EU-HLEG),26 industry was heavily represented, but academics and
civil society did not enjoy the same luxury. And while some non-negotiable ethical
principles were articulated in an earlier draft, these were omitted from the
final document due to industry pressure.27
When we look at efforts led by companies, certain stakeholders appear to use ethics
initiatives as an alternative or preamble to regulation. This approach can carry
dangerous consequences for human rights and the public interest. In proposing
ethical frameworks or principles to avoid regulation under the guise of encouraging
innovation, stakeholders seek to achieve precisely what is discussed above, a
practice Ben Wagner has termed “ethics washing.”30 They affect a veneer of “being
ethical,” yet they have no mechanisms of accountability with which to comply,31 and
thus they face no consequence for their actions. Some may advocate for ethics as a
preamble to regulation, arguing that it is too soon to prescribe regulation addressing
AI.32 But in multiple cases, this has proven to be a strategy of simply buying time to
profit from and experiment on societies and people, in dangerous and irreversible
ways.
For example, Google’s ethical principles laid out its aspirations to build AI systems
that are socially beneficial, and also to avoid creating or reinforcing unfair bias. A
few months later, the company constituted an ethics board, including individuals33
who demonstrably contradicted the basic assumptions behind Google’s ethical
principles. This made the principles and actual company practice fundamentally
incongruous, and Google dissolved the board just days later, following public and
internal pushback.34 Google’s AI principles, therefore, have no teeth - they do not
preclude the company from violating its own principles because they create no
obligation or duty in the first place. The link between ethical aspirations and industry
duty is weak at best, and non-existent at worst.
Another problem with trusting companies to do “the right thing” comes from their
lack of understanding of the societal impacts of technology and appropriate
ways to deal with them. For instance, in April 2018, in his testimony before the
United States Congress, Facebook CEO Mark Zuckerberg revealed the company’s
increasing reliance on AI tools to solve problems of hate speech, terrorist
propaganda, election manipulation and misinformation. But research and media
reports have shown that AI tools are ill-suited to do this work - they are not
technically equipped to understand societal nuances or context in speech, and often
make the problem worse.
Finally, on a more granular level, it is also important to note that ethical
principles are put forth as aspirational goals, and even when they are well-intended
and narrowly tailored, there has seldom been guidance on how to balance
conflicting principles against one another.
2.2 The technical approach: FAT
Widespread use of AI systems in society has brought to the fore concerns around
discrimination,35 injustice,36 and the exercise of rights,37 amongst others. A community of
researchers, academics, and scientists have been working to address these issues
by developing AI systems that are fair, accountable, and transparent (FAT). This
particular field of technical work has grown over decades, and pre-dates the current
resurgence in AI-focused ethics initiatives.
The notion of fairness in machine learning (the most popular subset of AI
techniques) is arguably the most prominent topic in the field today, with researchers
and practitioners attempting to articulate what fairness entails and, in turn, to
operationalise these learnings at the time of deployment.
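To make these competing formalisations concrete, the sketch below computes two of the most commonly discussed fairness metrics: demographic parity, which compares positive-decision rates across groups, and an equalised-odds-style check, which compares true-positive rates. It is illustrative only; the data, group attribute, and model outputs are synthetic stand-ins rather than drawn from any deployed system.

```python
import numpy as np

# Illustrative only: synthetic labels, predictions, and a binary
# protected-group attribute stand in for a real deployed system.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # 0/1 protected-group membership
y_true = rng.integers(0, 2, size=1000)  # ground-truth outcomes
y_pred = rng.integers(0, 2, size=1000)  # model decisions

def demographic_parity_gap(y_pred, group):
    # Difference in positive-decision rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    # Difference in true-positive rates between the two groups.
    tprs = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tprs[0] - tprs[1])

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equalised odds (TPR) gap:", equalized_odds_gap(y_true, y_pred, group))
```

Notably, as Kleinberg et al. have shown, several such definitions cannot in general be satisfied simultaneously, which is one reason articulating what fairness entails remains contested.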
The conversation around FAT has been led by academia for many years, and this is
still true today. The ACM Conference on Fairness, Accountability, and
Transparency (ACM FAT*)43 is perhaps the most popular venue for FAT
work in the field. Some recent papers from the event included work on improving
fairness of facial recognition algorithms,44 distinguishing between fairness and bias,45
and a study on bias in news and fact-checking.46
Industry has also engaged with the idea of fairness. In 2018, Accenture published
a fairness toolkit to help businesses work towards fair outcomes in the process of
deploying AI systems.47 Spotify and Microsoft researchers recently presented work
on the challenges they face when trying to implement fairness on a daily basis for
technical experts working on FAT issues.48 FAT work is also carried out in industry
consortiums like the PAI and IEEE, which often pave the way for wider stakeholder
engagement with these issues. Venues such as the PAI offer dedicated working
groups on FAT issues, and have strong civil society participation. Other venues like
ACM FAT* are still very much technical, leaning strongly towards academia.
Even as this field pushes the limits of our current understanding of fair, accountable
and transparent AI systems, ARTICLE 19 believes that some concerns with the
current approach remain.
Consider how a credit scoring model might be trained to operate. How could such
a model affect historically disadvantaged populations? It is possible that, when
studied in isolation, the discrimination of the past could easily translate
into future discrimination. Recognition of structural and systemic inequalities,
constitutional guarantees of affirmative action (where applicable), and the
responsibility to correct past practices would mean building a technology that
transcends this view. This has been recognised by the community itself and is
slowly being addressed. Some of the most recent literature in this field attempts to
learn and further develop these concepts by borrowing from political philosophy,50
social sciences,51 and even the fields of education and hiring.52
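A minimal synthetic sketch of this dynamic may help: the data, feature names, and bias parameters below are entirely hypothetical, but they show how a model trained on historically biased approval decisions can reproduce that discrimination even when it never sees group membership directly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
group = rng.integers(0, 2, size=n)                # 1 = historically disadvantaged group
income = rng.normal(50 - 15 * group, 10, size=n)  # proxy feature correlated with group
creditworthy = rng.random(n) < 0.6                # true repayment ability, equal across groups
# Historical labels: past lenders approved the disadvantaged group less often.
approved = creditworthy & (rng.random(n) > 0.3 * group)

# The model is trained only on the proxy feature, never on group membership...
model = LogisticRegression().fit(income.reshape(-1, 1), approved)
pred = model.predict(income.reshape(-1, 1))

# ...yet the historical disparity re-emerges in its decisions.
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

Correcting for this, as the paragraph above suggests, requires more than removing protected attributes from the data; it requires engaging with the structural context in which the data was produced.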
But significant questions remain: Which values are embedded? How are technical
experts positioned to understand them? What shared terms exist around these
values, and what are possible ways to codify them?
Second, the FAT approach does not structurally engage with responsibility, rights,
duties, or harms of technical systems in an actionable way. This it shares
with the ethics initiatives discussed in the previous section. The FAT
approach does not necessarily articulate responsibility, harm, or expectation of fair
treatment to ensure that these goals are met.
Also similar to the ethics approach, the FAT approach does not address the
tangible effect of technical systems on the exercise of rights, or on people’s lives.
For instance, there has been recent pressure on technology companies like IBM
and Amazon (among others) to ensure that facial recognition systems have equal
accuracy rates between vulnerable and dominant racial groups.53 Ensuring this could
satisfy some definitions of fairness, yet it does not engage with the broader question
in play. Are facial recognition systems a threat to the exercise of fundamental rights
such as privacy and free expression? Does a perfectly fair facial recognition system
-- one that is equally and similarly accurate across demographic groups -- have a
disproportionate impact on vulnerable groups that have historically been subject to
surveillance? Should the faces of individuals from these groups be used in training
datasets? What implications does this have on their autonomy and privacy?
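The kind of per-group error audit being demanded of vendors might look like the sketch below, which is purely illustrative: the groups, error rates, and predictions are simulated rather than taken from any real system. The point of the surrounding questions is that a system could pass such an audit and still threaten rights.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])  # demographic groups
match = rng.integers(0, 2, size=n).astype(bool)       # ground truth: same person?
# Simulated verification system that is less accurate for the minority group.
error_rate = np.where(group == "A", 0.05, 0.20)
prediction = np.where(rng.random(n) < error_rate, ~match, match)

for g in ("A", "B"):
    mask = group == g
    false_match = prediction[mask & ~match].mean()        # accepts an impostor
    false_non_match = (~prediction[mask & match]).mean()  # rejects a true match
    print(f"group {g}: false match {false_match:.2%}, "
          f"false non-match {false_non_match:.2%}")
```

Equalising these rates across groups would satisfy the auditors; it would not answer whether the system should be deployed at all.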
Relatedly, the FAT approach has been built to ensure that systems are fair,
transparent and accountable, but it does not empower individuals or surrounding
institutional mechanisms to challenge the decisions that these systems make. By
sidestepping the question of rights and duties, affected individuals and oversight
authorities cannot meaningfully challenge systems that fail them. Take for instance
New York’s Automated Decision Systems Task Force that examines the use of
automated decision-making systems by the city to prevent bias or other harms.
The task force has adopted principles of equity, fairness and accountability,54 and
has leading AI scientists and researchers among its members. Yet a full year after it
was constituted, the task force is unable to carry out its duties because there are no
accompanying institutional mechanisms, oversight powers, or rights to investigate
uses of automated decision-making systems.55
3. Towards a human rights-based approach
Alongside technical and ethical approaches to addressing the societal impact of AI
systems, there has been a third approach, focused on law and regulation. While a
handful of states have begun to explore legal frameworks for governing AI, various
stakeholders have turned to international human rights instruments as a way
forward.
There have been preliminary attempts to begin regulating AI at national and regional
levels. In the EU, for example, the General Data Protection Regulation (GDPR)56
articulates a few rights with respect to automated decision making. Under the GDPR,
data controllers (typically companies or state entities) are required to provide data
subjects with information about “the existence of automated decision making,
including profiling...and, at least in those cases, meaningful information about the
logic involved, as well as the significance and the envisaged consequences of such
processing for the data subject.”57 The GDPR also provides that individuals have
the “right not to be subject to a decision based solely on automated processing,
including profiling, which produces legal effects concerning him or her or similarly
significantly affects him or her”,58 and guarantees that data subjects can seek
human intervention and contest the decision.59 The extent and impact of these
provisions are the subject of ongoing debate, and are sometimes referred to as
the “right to explanation.”60 In the United States, an Algorithmic Accountability
Bill introduced in the Senate in April 2019 contemplates impact assessments for
automated decision systems and data protection.61
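What “meaningful information about the logic involved” amounts to in practice is contested. One minimal, purely illustrative reading, for a simple linear model with invented feature names and data, is to report each feature’s contribution to an individual decision, which a data subject could then contest:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical decision system: features, data, and weights are invented
# purely to illustrate one minimal reading of the GDPR provisions.
features = ["income", "debt", "years_employed"]
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=500)) > 0

model = LogisticRegression().fit(X, y)

# Per-feature contribution to one data subject's decision (coefficient * value).
x = X[0]
decision = model.predict(x.reshape(1, -1))[0]
contributions = model.coef_[0] * x
print("decision:", "granted" if decision else "refused")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f}")
```

For the complex, non-linear systems actually deployed, producing even this much is an open research problem, which is part of why the scope of the “right to explanation” remains debated.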
More recently, over 100 organisations signed a statement focused on civil liberties
concerns regarding the use of pretrial risk assessment tools.65 The United Nations
has also weighed in on the human rights debate: the UN Special Rapporteur on
the promotion and protection of the right to freedom of opinion and expression
presented a detailed report on the human rights impact of AI systems to the General
Assembly.66
Some national governments also have adopted a human rights-based approach. In
July 2018, the Australian Human Rights Commission launched a three-year project
to understand the human rights impact of AI.67 The governments of Canada and
France are steering an international study group aimed at human-centric artificial
intelligence, using human rights as one of the anchors of the study.68 There has also
been work highlighting the human rights obligations of businesses in the context of
AI.69
Having discussed the current deliberation around societal impacts of AI, we now
wish to propose a way forward. ARTICLE 19 believes that current deficiencies
discussed above -- the lack of enforcement, accountability, meaningful redressal,
and individual empowerment -- could be constructively addressed through a
human rights-based approach.
The human rights-based approach identifies rights holders (people who use or are
affected by technologies) and duty bearers (companies or governments deploying
said technologies). It is a universal set of principles, has binding effect, and is based
on the rule of law. It draws on an internationally recognised system of law that
defines both business and state responsibility and the specific standards they must
adhere to. This means that rights, reasonable restrictions, their status under law
and implementation in practice, are anchored in a system that is verifiable, specific,
and detailed. This international system has grounding in law, is based on commonly
understood language and affords procedures and institutions that can help ensure
that duty bearers meet their obligations, and that rights holders have recourse to
effective remedies.70
In this section, we will contemplate how a human rights-based approach can benefit
existing technical and ethical conversations discussed above, and in turn also
identify ways in which the converse is also true, i.e. what aspects of the FAT and
ethics conversations can inform existing human rights approaches to AI.
3.1 A human rights-based approach to Ethics
In our analysis, our primary concern with ethics initiatives was their common lack of
accountability and enforcement mechanisms. Some appeared to be little more than
efforts to skirt regulation, while others, though perhaps well-intended, did not have
proverbial teeth: They put forth admirable goals, but offered little if any mechanism
of accountability or enforcement. If a human rights-based approach were brought to
bear here, it could complement ethical principles with enforcement mechanisms,
drawing from the UN Guiding Principles on Business and Human Rights (UNGPs),
which offer comprehensive guidance on how to make industry practices more
concretely accountable, in addition to mechanisms for redressal, safeguarding
rights, and ensuring performance of corresponding duties.
Grounding ethical principles in human rights standards would also preclude the
worrying prospect of “ethics washing” and rubber stamping. Human rights law has a
legacy of constitutional and judicial interpretation, legal oversight, and enforcement
mechanisms that are subject to review.
We also observed that industry initiatives and corporate aims have come to
dominate and set the agenda within many ethics initiatives. A human rights-
based approach could re-balance the scales, as it would embrace the realities of
carrying out business by prescribing a specific, detailed account of what rights and
obligations are in play. It would also keep individuals and the rights owed to them as
the central focus, and the point of calibration.
Perhaps as a result of the agenda-setting by industry, the ethics framework does not
contemplate the duty of companies to act in certain ways. Instead, it tends to focus
on responsibility for outcomes (who holds it, and what it entails), as opposed to
addressing bigger questions around business models and the monopolistic power of
tech companies. A human rights-based approach would correct for this by targeting
every step of a company’s business model, thus bringing into focus not just outputs
from a particular business, but also processes and safeguards for rights holders and
duty bearers. This, too, could be drawn from the UNGPs, alongside international and
national law, if applicable.
3.2 A human rights-based approach to FAT
In the section discussing FAT above, we identified a few key issues where this work
could be further developed to incorporate the social, political, and legal context in
which AI systems are deployed.
In turn, this could allow the FAT field to more meaningfully address the tangible
effects of AI systems on people’s lives and fundamental rights. Because a human
rights-based approach is grounded in the relationship between rights holders
and duty bearers, it refocuses these questions in terms of individual and collective
rights, and the obligations owed to rights holders, including rights to seek redressal.
The FAT approach does not empower individuals, or provide particular agency in
this regard. A human rights-based approach offers enforcement mechanisms, and
prescribes institutional processes to work with in the event of rights violations,
in addition to existing external mechanisms. In cases of discrimination,
for instance, the FAT approach may lay out tests for fairness and methods of
accountability, and a rights-based framework augments these by also providing
redressal mechanisms for people affected.
The FAT approach focuses on systems, and mechanisms for accountability around
systems, but does not clearly articulate responsibility, harm, and expectation of fair
treatment. A human rights-based approach is particularly well positioned to fill this
gap as it is informed by international legal standards and clear articulation of roles
and responsibilities.
A human rights-based approach could also help to carry forward questions in the
FAT field about how to engage with and critically examine values. Amid efforts
to promote value-sensitive design72 of machine learning systems, and a growing
awareness of the need to explicitly deal with values that are encoded in systems,
human rights provide a universal set of values with legal grounding. ARTICLE 19
believes that invoking a human rights framework is especially important given these
learnings, as human rights are the most universal set of values that we have, with
shared language and decades of interpretation and implementation.
3.3 How might FAT and ethics initiatives help improve a human rights-based approach?
Advocacy for human rights protections should take into account technical
considerations: While human rights have grounding in law, and also universally
understood language and meaning, advocates for human rights could strengthen
their approach to AI systems by learning and taking into account technical
necessities, limitations, and terminologies. This would not only enable precision
across disciplines to emerge, it would also help translate between
technical and non-technical audiences.
4. Recommendations
We call on industry to:
2. Ensure that AI systems in the public sector undergo adequate human rights
impact assessments, due diligence, and continuous auditing. These are not
systems that can simply be rolled out. They should instead be tailored to the
exact context and use for which they are intended.
3. Root the design, development, and deployment of AI systems in
constitutional guarantees and human rights standards.
5. Ensure that national and international efforts around AI are equally informed
by human rights concerns, constitutional standards, and the public interest
as they are by industry concerns.
2. Advocate for the promotion and protection of human rights in the context of
AI systems in a manner that is also informed by technical considerations
and limitations.
Endnotes
1 In a previous report co-written with Privacy International, we attempted to capture this new technology in the following definition: “The term ‘AI’ is used to refer to a diverse range of applications and techniques, at different levels of complexity, autonomy and abstraction. This broad usage encompasses machine learning (which makes inferences, predictions and decisions about individuals), domain-specific AI algorithms, fully autonomous and connected objects and even the futuristic idea of an AI ‘singularity’.” We also outlined a number of key concepts related to AI systems. ARTICLE 19 and Privacy International. ‘Privacy and Freedom of Expression In the Age of Artificial Intelligence’, 2018. https://www.article19.org/wp-content/uploads/2018/04/Privacy-and-Freedom-of-Expression-In-the-Age-of-Artificial-Intelligence-1.pdf.
2 Vincent, James. ‘Artificial Intelligence Is Going to Supercharge Surveillance’. The Verge, January 2018. https://www.theverge.com/2018/1/23/16907238/artificial-intelligence-surveillance-cameras-security.
3 Coughlan, Sean. ‘Google “to End Pentagon AI Project”’. BBC News, June 2018, sec. Business. https://www.bbc.com/news/business-44341490.
4 Kania, Elsa B. ‘China’s AI Giants Can’t Say No to the Party’. Foreign Policy (blog), August 2018. https://foreignpolicy.com/2018/08/02/chinas-ai-giants-cant-say-no-to-the-party/.
5 Barocas, Solon, and Andrew D. Selbst. ‘Big Data’s Disparate Impact’. California Law Review 671 (2016). https://papers.ssrn.com/sol3/Papers.cfm?abstract_id=2477899.
6 Veale, Michael, Reuben Binns, and Lilian Edwards. ‘Algorithms That Remember: Model Inversion Attacks and Data Protection Law’. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2133 (November 2018). https://doi.org/10.1098/rsta.2018.0083.
7 Cowls, Josh, and Luciano Floridi. ‘Prolegomena to a White Paper on an Ethical Framework for a Good AI Society’, June 2018. https://papers.ssrn.com/abstract=3198732.
8 Floridi, et al. ‘An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’. Minds and Machines (December 2018). https://www.researchgate.net/publication/328699738_An_Ethical_Framework_for_a_Good_AI_Society_Opportunities_Risks_Principles_and_Recommendations.
9 Dutton, Tim. ‘An Overview of National AI Strategies’. Politics + AI (blog), June 2018. https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd.
10 ‘Ethics Guidelines for Trustworthy AI’. Accessed 15 April 2019. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
11 ‘Asilomar AI Principles’. Future of Life Institute. Accessed 11 April 2019. https://futureoflife.org/ai-principles/.
12 ARTICLE 19. ‘Submission of Evidence to the House of Lords Select Committee on Artificial Intelligence’, September 2017. https://www.article19.org/wp-content/uploads/2017/10/ARTICLE-19-Evidence-to-the-House-of-Lords-Select-Committee-AI-1.pdf. Also see ‘High-Level Expert Group on Artificial Intelligence | Digital Single Market’. Accessed 11 April 2019. https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence.
16 McNamara, Andrew, Justin Smith, and Emerson Murphy-Hill. ‘Does ACM’s Code of Ethics Change Ethical Decision Making in Software Development?’. Accessed April 2019. https://people.engr.ncsu.edu/ermurph3/papers/fse18nier.pdf.
17 Boyle, Alan. ‘AI Expert Says Microsoft Is Cutting off Some Sales Due to Ethics Concerns’. GeekWire, April 2018. https://www.geekwire.com/2018/microsoft-cutting-off-sales-ai-ethics-top-researcher-eric-horvitz-says/.
18 Madrigal, Alexis C. ‘What Facebook Did to American Democracy’. The Atlantic, October 2017. https://www.theatlantic.com/technology/archive/2017/10/what-facebook-did/542502/. Also see: Angwin, Julia, Madeleine Varner and Ariana Tobin. ‘Facebook Enabled Advertisers to Reach “Jew Haters”’. ProPublica, September 2017. https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters.
33 Smith, Reiss. ‘Google appoints transphobic conservative to AI ethics board’. Pink News, March 2019. https://www.pinknews.co.uk/2019/03/27/google-ai-artificial-intelligence-ethics-board-kay-coles-james/.
39 See, for instance, Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. ‘Inherent Trade-Offs in the Fair Determination of Risk Scores’. ArXiv:1609.05807 [Cs, Stat], September 2016. http://arxiv.org/abs/1609.05807.