
AD8013 CONCEPTS AND ISSUE UNIT-III

Accountability in Computer Systems, Transparency, Responsibility and AI. Race and Gender, AI as a moral right-holder.

1. Accountability in Computer Systems:


1.1 Definitions and the Unit of Analysis

To understand accountability in the context of AI systems, we must begin by examining the various ways the term
is used and the variety of concepts to which it is meant to refer.
Further, we must examine the unit of analysis, or the level of
abstraction at which we consider the term to apply. As with many
terms used in the discussion of AI, different stakeholders have
fundamentally different and even incompatible ideas of what
concept they are referring to, especially when they come from
different disciplinary backgrounds. This confusion leads to
disagreement and debate in which parties disagree not on
substance, but on the subject of debate itself. Here, we provide a
brief overview of concepts designated by the term
“accountability”, covering their relationships, commonalities, and
divergences in the service of bridging such divides.

1.2 Artifacts, Systems, and Structures:

Where does accountability lie? Accountability is generally conceptualized with respect to some entity – a
relationship that involves reporting of information to that entity
and in exchange receiving praise, disapproval, or consequences
when appropriate. Successfully demanding accountability around
an entity, person, system, or artifact requires establishing both
ends of this relationship: who or what answers to whom or to
what? Additionally, to understand a discussion of or call for
accountability in an AI system or application, it is critical to
determine what things the system must answer for, that is, the
information exchanged. There are many ways to ground a
demand for answerability and give it normative force, and commensurately many types of accountability: moral,
administrative, political, managerial, market, legal & judicial,
relative to constituent desires, and professional. AI systems
intersect with all eight types of accountability, each in different
ways and depending on the specifics of the application context.

Often, the unit of analysis referenced by someone discussing accountability relates to their disciplinary training and
orientation: those interested in technology development, design,
and analysis are more likely to conceptualize the system-as-
embodied, situating algorithms and the agency of AI systems
within machines themselves, or with their designers (i.e.,
technologists focus on the computers, the software, and the
interfaces, or the engineering process). Political, social, and legal
demands for accountability often focus around higher-order units
such as sociotechnical systems of artifacts interacting with
people or entire paradigms of social organization (i.e., policy
discussions are often focused on systemwide outcomes or the
fidelity of systems to democratically determined goals, looking at
the company or agency involved and operative policy rather than
the specific tools in use or their performance characteristics).
Often, all units of analysis inform appropriate interventions
supporting accountability, as the information necessary to
establish system-level accountability may depend on metrics and
measures established at the technical level. Thus, accountability
must be part of the design at every scale, in tandem. Related to
the unit of analysis question is the issue of causal and moral
responsibility. When operationalizing accountability, it is
important that the relationship of answerability corresponds
either to its subject causing the condition for which it is
answerable or to its being morally culpable for that condition. If
no such link exists, or if the information conveyed via the
accountability relationship does not establish the link, then it is
difficult to find the actor accountable. Operationalizing accountability in AI systems requires developing ways to make such links explicit and
communicable. For example, the scapegoating of a component or
portion of the problem can impair the agency of the involved actors in
establishing fault. Additionally, the problem of many hands can
serve as a barrier to accountability, as it did in the Ariane 5
Flight 501 failure. While many hands were responsible for that
failure, this need not be the case: alternative governance
structures for such multifaceted, cross-functional development
teams could explicitly make leaders responsible, providing an
incentive for them to ensure adequate performance and the
avoidance of failures across their organization, or use other
mechanisms to make domains of answerability clear at the level
of functions or organizations. For example, legal proceedings
often hold organizations (say, corporations) accountable at an
abstract level, leaving the determination of individual
accountability to happen inside the organization. But these as
well can be their own sort of scapegoating – accidents in
autonomous systems are often blamed on “human error” even
when the human has little meaningful control over what the
system is doing.

Accountability, Oversight, and Review:

If we conceptualize accountability as answerability of various kinds,
and we understand who must answer, for what, and to whom the
answers are intended, then we have redeveloped the concept of
oversight, a component of governance where a designated
authority holds special power to review evidence of activities and
to connect them to consequences. Oversight complements
regulatory methods in governance, allowing for checks and
controls on a process even when the correct behavior of that
process cannot be specified in advance as a rule. Rather, an
oversight entity can observe the actions and behaviors of the
process and separate the acceptable ones from the unacceptable
ones ex post. Further, when rules exist, an oversight entity can
verify that the process acted consistently within them. Even when
guidance is expressed in standards or principles, oversight can
apply those more abstract desiderata in a given case, weighing
considerations against each other given scenario-specific facts
and circumstances. In computer science, and in engineering generally, the twin modalities of guaranteeing compliance with a
formally stated policy ex ante and keeping records which provide
for auditing ex post have long been recognized as the major
approaches to understanding the fidelity of an artifact to goals
such as correctness, security, and privacy. However, the
dominant modality – whether building software and hardware
controllers; rockets and aircraft; or bridges and buildings – has
been to decide on a rule up front, to express this rule as a set of
requirements for the system, to implement the system so that it
is faithful to those requirements, and to verify that the
implementation comports with the requirements while also
validating that the requirements themselves reflect the intended goals.
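
To make these twin modalities concrete, the following minimal Python sketch (the rule, the threshold band, and the logged decisions are all hypothetical, not drawn from any real system) contrasts ex ante enforcement, where a stated rule is checked before a decision is released, with ex post auditing, where recorded decisions are reviewed against the same rule after the fact.

# Hypothetical rule: borderline scores must be referred to a human reviewer.
REVIEW_BAND = (0.45, 0.55)

def decide(score):
    """Ex ante enforcement: the rule is applied before any decision is released."""
    if REVIEW_BAND[0] <= score <= REVIEW_BAND[1]:
        return "refer_to_human"
    return "approve" if score > REVIEW_BAND[1] else "deny"

def audit(decision_log):
    """Ex post review: flag recorded decisions that violated the rule."""
    return [
        entry for entry in decision_log
        if REVIEW_BAND[0] <= entry["score"] <= REVIEW_BAND[1]
        and entry["decision"] != "refer_to_human"
    ]

# An overseer auditing a log after the fact would surface the first entry,
# which should have been referred to a human but was not.
log = [
    {"score": 0.50, "decision": "deny"},
    {"score": 0.70, "decision": "approve"},
]
print(audit(log))  # -> [{'score': 0.5, 'decision': 'deny'}]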

1.3 Accountability as Accounting, Recordkeeping, and Verifiability

The simplest definition of accountability is in terms of accounting, that is, keeping records of what a system
did so that those actions can be reviewed later. It is important
that such records be faithful recordings of actual behaviors, to
support the reproducibility of such behaviors and their analysis.
Additionally, such records must have their integrity maintained
from the time they are created until the time they must be
reviewed, so that the review process reliably examines (and can
be seen by others to examine) faithful records that describe what
they purport to describe. Finally, it is important that both the
fidelity and the integrity of the records be evident both to the
overseer and anyone who relies on the overseer’s judgements.
Oversight in which the entity being reviewed can falsely
demonstrate compliance is no oversight at all.
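
As a minimal illustration of such record fidelity and integrity (a sketch under simplifying assumptions, not a description of any particular deployed system), the Python fragment below chains every log entry to the hash of the previous one, so that any later alteration, deletion, or reordering of records becomes detectable when the log is reviewed.

import hashlib
import json
import time

def append_record(log, action, details):
    """Append a tamper-evident record that chains to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),
        "action": action,
        "details": details,
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify_log(log):
    """Recompute every hash; a broken chain means the records were altered."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

# Hypothetical usage: two recorded decisions, then a verification pass.
audit_log = []
append_record(audit_log, "loan_decision", {"applicant_id": "A-17", "outcome": "denied"})
append_record(audit_log, "loan_decision", {"applicant_id": "A-18", "outcome": "approved"})
print(verify_log(audit_log))  # True unless the records have been tampered with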

1.4 Accountability as Responsibility

Answerability includes not just the notion that answers exist, but that individuals or organizations can be made to
answer for outcomes of their behavior or of the behavior of tools
they make use of. Responsibility ties actions or outcomes to
consequences. Authors in this space have identified three major normative bases for this connection: causality, fault, and duty –
either the actions of the entity being held accountable caused the
outcome being considered, or the entity is somehow culpable for
the outcome irrespective of cause, or the entity is ascribed an
obligation to certain behaviors. All three types of responsibility,
and the relationship of any to accountability, are subtle and bear
unpacking. Operationalizing any one or all three to make
practical the necessary accountability mechanisms and regimes
is the subject of much work across several disciplines.

1.5 Accountability as Normative Fidelity

The most abstract way that the term “accountability” is used connects the answerability relationship
to broader norms, values, and fundamental rights. That is, when
a system should uphold a particular political, social, or legal
norm or be held to some moral standard, that requirement is
often couched in terms of accountability in the sense of moral
responsibility. For example, Bovens, Schillemans, and Goodin
observe that, in politics, “‘[a]ccountability’ is used as a synonym
for many loosely defined political desiderata, such as good
governance, transparency, equity, democracy, efficiency,
responsiveness, responsibility, and integrity.” Political
scientists have even questioned whether accountability retains its
meaning as a concept, given how difficult it is to operationalize in the
ever-growing number of places where it is claimed as desirable.

1.6 Accountability as a Governance Goal

This notion of accountability as normative fidelity demonstrates that accountability can serve as a governance
mechanism. Because accountability is straightforwardly
achievable and enables judgements about complex and contested
values, it is a useful and tractable goal for governance. Systems
can be designed to meet articulated requirements for
accountability, and this enables governance within companies,
around governmental oversight, and with respect to the public
trust. Interested parties can verify that systems meet these requirements. This verification operates along the same lines that
interested parties would use to confirm that any governance is
operating as intended. Establishing lines of accountability forces
a governance process to reckon with the values it must protect or
promote without needing a complete articulation and
operationalization of those values. This makes accountability a
primary value for which all governance structures should strive.

1.7 Mechanisms for Accountability in AI

Of course, transparency is a useful tool in the governance of computer systems, but mostly insofar as it serves
accountability. To the extent that targeted, partial transparency
helps oversight entities, subjects of a computer system’s outputs,
and the public at large understand and establish key properties
of that system, transparency provides value. But there are other
mechanisms available for building computer systems that
support accountability of their creators and operators. First, it is
key to understand what interests the desired accountability
serves and to establish the answerability relationships: what
agents are accountable to which other agents (“accountability of
what?” and “accountability to whom?”), for what outcomes, and
to what purpose? Once these are established, it is clearer which
records must be kept to support interrogation of this relationship
and to ensure that blame and punishment can be meted out to
the appropriate agents in the appropriate cases. These records
must be retained in a manner that guarantees that they relate to
the relevant behavior of the computer system, representing the
relationship between its inputs, its logic, and its outputs
faithfully.
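
A minimal sketch of what such a record might contain is shown below; the field names and the credit-scoring example are hypothetical illustrations, not a prescribed schema.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any, Dict
import json

@dataclass
class DecisionRecord:
    """One reviewable record tying together a system's inputs, its logic, and its output."""
    model_id: str            # which model/version produced the decision (the "logic")
    inputs: Dict[str, Any]   # the features the decision was based on
    output: Any              # the decision itself
    rationale: str           # an explanation an overseer can interrogate
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_decision(store, model_id, inputs, output, rationale):
    """Persist a decision record so that an oversight body can review it ex post."""
    record = DecisionRecord(model_id, inputs, output, rationale)
    store.append(json.dumps(asdict(record), sort_keys=True))
    return record

# Hypothetical usage: a credit-scoring service logging a single decision.
store = []
record_decision(
    store,
    model_id="credit-model-v3.2",
    inputs={"income": 41000, "debt_ratio": 0.38},
    output="refer_to_human_review",
    rationale="score 0.49 fell inside the mandatory review band [0.45, 0.55]",
)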

1.8 Whither Accountability in AI?

Where do these ideas lead us for accountability in AI systems? What ends does accountability serve and what are the
means to achieving them? Human values are political questions,
and reflecting them in AI systems is a political act with
consequences in the real world. We can (and must) connect these consequences to existing political decision-making systems
by viewing the gap between system behaviors and contextual
norms in terms of accountability. For example, if we want to
know that an AI system is performing “ethically”, we cannot
expect to “implement ethics in the system” as is often suggested.
Rather, we must design the system to be functional in context,
including contexts of oversight and review. Only then will we be
able to establish trust in AI systems, extending existing
infrastructures of trust among people and in institutions to new
technologies and tools. Thus, the prime focus of building ethical
AI systems must be building AI into human systems in a way
that supports effective accountability for the entire assemblage.
While the need for such practices is great, and while it is critical
to establish what engineered objects are supposed to do,
including what is necessary to satisfy articulated accountability
relationships, the actual reduction to practice of such tools in a
way that demonstrably supports accountability and other human
values remains an important open question for research. While
many tools and technologies exist, only now are we beginning to
understand how to compose them to serve accountability and
other values.

2. Transparency in Ethics and in AI: ‘What Plato Did’

Transparency in ethics has at least three aspects. One is visibility to others. If others can see what you are
doing, it makes it more likely you’ll behave well. Philosophers
have long known this. In Plato’s Republic, Glaucon considered
the Ring of Gyges, which magically renders its wearer invisible.
Possessed of this, Glaucon argued, one would of course commit
all manner of wrong-doing (Plato 1974). Conversely, much recent
research lends support to the view that even imagined scrutiny
by others helps us do the right thing (Zimbardo 2008). The
second is comprehensibility to others. Ethics demands a shared
system of justification. In the Republic, Plato infamously argued
that those in the top rung of society, the Philosopher Kings,
dubbed the ‘gold’, had a grasp of moral truths but that the lower
orders, or those dubbed the ‘silver’ and ‘bronze’ in society, were
incapable of full access to such knowledge. And a related aspect is
accountability to others. A corollary of Plato’s views on knowledge
and government is that, in governing those under them, the
‘noble lie’ could be justified to keep the hoi polloi in order. I take
it that such a view is abhorrent in any democratic society. It goes
without saying that you can’t claim to be adequately addressing
ethical questions, if you refuse to explain yourself to rightly
interested parties. Of course there will often then be a further
question about who such parties are and what claims they have
on you.

What this means in AI: Firstly, the very complexity of
much of AI means that there is often a particular question of
transparency. If even its creators don’t know precisely how an
algorithm produced by machine learning is operating, how do we
know if it’s operating ethically or not? The frequently posed fears
that without our knowledge we might be manipulated by
powerful machines or very powerful corporations armed to the
teeth with the opaque machinations of AI, give a modern take on
the Ring of Gyges myth. Only, now it’s not actually a myth.
Having specialist knowledge, as professionals in AI have, does not
entitle you to ‘lie’ to the people, nor to be in sole charge of
questions that concern them; quite the reverse. Such specialist
knowledge should mandate a duty to explain. However, the
question of how much transparency is legitimate in respect to
certain activities is an open question. Only a fool wants the
security services of their country to be fully transparent given the
existence of real enemies; nonetheless drawing the line may be
hard. Commercial companies also have reasons for secrecy.
Which brings us on to the next point: Secondly, there are many
powerful actors involved in AI whose activities may affect billions
of others; perhaps then, in some ways, a technological elite with
access to arcane knowledge—AI professionals—are the new
‘Philosopher Kings’. How they handle ethics, how they explain
themselves, and whether they manage any system of
accountability and dialogue, will be critical to any claim they
might make to be truly concerned with ethics.
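
Returning to the point about opacity above, one partial remedy practitioners use is to probe a trained model from the outside. The sketch below (which assumes scikit-learn is available and uses a bundled example dataset purely for illustration) estimates which input features most influence a classifier by shuffling each feature and measuring the resulting drop in accuracy; it is one modest transparency technique, not a full answer to the duty to explain.

# A minimal sketch of permutation feature importance, one common way to gain
# partial insight into an otherwise opaque learned model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much held-out accuracy falls;
# a large drop suggests the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")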

3. Race and Gender in AI:

From massive face-recognition-based surveillance and machine-learning-based decision systems predicting crime
recidivism rates, to the move towards automated health
diagnostic systems, artificial intelligence (AI) is being used in
scenarios that have serious consequences in people's lives.
However, this rapid permeation of AI into society has not been
accompanied by a thorough investigation of the sociopolitical
issues that cause certain groups of people to be harmed rather
than advantaged by it. For instance, recent studies have shown
that commercial face recognition systems have much higher error
rates for dark skinned women while having minimal errors on
light skinned men. A 2016 ProPublica investigation uncovered
that machine learning based tools that assess crime recidivism
rates in the US are biased against African Americans. Other
studies show that natural language processing tools trained on
newspapers exhibit societal biases (e.g. finishing the analogy
"Man is to computer programmer as woman is to X" by
homemaker). At the same time, books such as Weapons of Math
Destruction and Automated Inequality detail how people in lower
socioeconomic classes in the US are subjected to more automated
decision making tools than those who are in the upper class.
Thus, these tools are most often used on people towards whom
they exhibit the most bias. While many technical solutions have
been proposed to alleviate bias in machine learning systems, we
have to take a holistic and multifaceted approach. This includes
standardization bodies determining what types of systems can be
used in which scenarios, making sure that automated decision
tools are created by people from diverse backgrounds, and
understanding the historical and political factors that
disadvantage certain groups who are subjected to these tools.
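
Disparities of this kind only become visible when evaluation is disaggregated by subgroup. The short sketch below (with entirely made-up predictions and group labels) shows the basic computation of a per-group error rate that studies like those described above rely on.

# A minimal sketch, with made-up data, of disaggregated error-rate evaluation.
from collections import defaultdict

# Each record: (subgroup label, true label, predicted label). Hypothetical values.
predictions = [
    ("darker-skinned women", 1, 0),
    ("darker-skinned women", 1, 1),
    ("darker-skinned women", 0, 1),
    ("lighter-skinned men", 1, 1),
    ("lighter-skinned men", 0, 0),
    ("lighter-skinned men", 1, 1),
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for group, truth, pred in predictions:
    counts[group][0] += int(truth != pred)
    counts[group][1] += 1

for group, (wrong, total) in counts.items():
    print(f"{group}: error rate {wrong / total:.2f} ({wrong}/{total})")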

3.1 DATA-DRIVEN CLAIMS ABOUT RACE AND GENDER PERPETUATE THE NEGATIVE BIASES OF THE DAY

Science is often hailed as an objective discipline in pursuit of truth. Similarly, one may believe that technology is inherently
neutral, and that products that are built by those representing
only a slice of the world’s population can be used by anyone in
the world. However, an analysis of scientific thinking in the 19th
century, and major technological advances such as automobiles,
medical practices and other disciplines shows how the lack of
representation among those who have the power to build this
technology has resulted in a power imbalance in the world, and
in technology whose intended or unintended negative
consequences harm those who are not represented in its
production. Artificial intelligence is no different. While the
popular paradigm of the day continues to change, the dominance
of those who are the most powerful race/ethnicity in their
location (e.g. White in the US, ethnic Han in China, etc.),
combined with the concentration of power in a few locations
around the world, has resulted in a technology that can benefit
humanity but also has been shown to (intentionally or
unintentionally) systematically discriminate against those who
are already marginalized.

3.2 USING PAST DATA TO DETERMINE FUTURE OUTCOMES RESULTS IN RUNAWAY FEEDBACK LOOPS

An aptitude test designed by specific people is bound to inject their subjective biases of who is supposed to be
good for the job, and eliminate diverse groups of people who do
not fit the rigid, arbitrarily defined criteria that have been put in
place. Those for whom the tech industry is known to be hostile
will have difficulty succeeding, getting credit for their work, or
being promoted, which in turn can seem to corroborate the notion that
they are not good at their jobs in the first place. It is thus
unsurprising that in 2018, automated hiring tools used by
Amazon and others, which naively train models on past data in
order to determine future outcomes, were shown to create runaway
feedback loops exacerbating existing societal biases. A hiring
model attempting to predict the characteristics determining a
candidate’s likelihood of success at Amazon would invariably
learn that the undersampled majority (a term coined by Joy
Buolamwini) are unlikely to succeed because the environment is
known to be hostile towards people of African, Latinx, and Native
American descent, women, those with disabilities, members of
the LGBTQ+ community and any community that has been
marginalized in the tech industry and in the US. The person may
not be hired because of bias in the interview process, or may not
succeed because of an environment that does not set up people
from certain groups for success. Once a model is trained on this
type of data, it exacerbates existing societal issues driving further
marginalization.
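
The dynamic can be illustrated with a toy simulation (all groups, rates, and numbers below are invented for illustration): a "model" that simply mirrors past hire rates, combined with a biased selection process, drives the disadvantaged group's hire rate down with every retraining round.

# A toy simulation (made-up numbers) of a runaway feedback loop: the model is
# retrained each round on outcomes its own biased decisions produced.
import random

random.seed(0)

def simulate(rounds=5, bias_against_group_b=0.3):
    hire_rate = {"A": 0.50, "B": 0.35}        # already-skewed historical hire rates
    applicants = {"A": 500, "B": 500}
    for r in range(rounds):
        hires = {"A": 0, "B": 0}
        for group, n in applicants.items():
            for _ in range(n):
                score = hire_rate[group]       # the "model" mirrors past hire rates
                if group == "B":
                    score *= 1 - bias_against_group_b   # biased process on top
                if random.random() < score:
                    hires[group] += 1
        # Retrain: next round's model is just this round's observed hire rates.
        hire_rate = {g: hires[g] / applicants[g] for g in applicants}
        print(f"round {r + 1}: hire rate A={hire_rate['A']:.2f}, B={hire_rate['B']:.2f}")

simulate()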

3.3 UNREGULATED USAGE OF BIASED AUTOMATED FACIAL ANALYSIS TOOLS:

Predictive policing is only one of the data-driven algorithms employed by US law enforcement. The perpetual
lineup report by Clare Garvie, Alvaro Bedoya and Jonathan
Frankle discusses law enforcement’s unregulated use of face
recognition in the United States, stating that one in two American
adults are in a law enforcement database that can be searched
and used at any time. There is currently no regulation in place
auditing the accuracy of these systems, or specifying how and
when they can be used. The report further discusses the
potential for people to be sent to jail due to cases of mistaken
identity, and notes that operators are not well trained on using
any of these tools. The authors propose a model law guiding
government usage of automated facial analysis tools, describing a
process by which the public can debate its pros and cons before
it can be used by law enforcement.

3.4 AI-BASED TOOLS ARE PERPETUATING GENDER STEREOTYPES
While the previous section has discussed ways in which automated facial analysis tools with unequal performance across
different subgroups are being used by law enforcement, this
section shows that the existence of some tools in the first place,
no matter how “accurate” they are, can perpetuate harmful
gender stereotypes. There are many ways in which society’s views
of race and gender are encoded into the AI systems that are built.
Studies such as Hamidi et al.’s Gender Recognition or Gender
Reductionism discuss this in the context of automatic gender
recognition systems such as those studied by Buolamwini and
Gebru, and the harms they cause particularly to the transgender
community.

3.5 POWER IMBALANCE AND THE EXCLUSION OF MARGINALIZED VOICES IN AI:

The weaponization of technology against certain groups, as well as its usage to maintain the status quo while being touted as
a liberator of those without power, is not new to AI. In Model
Cards for Model Reporting, Mitchell et al. note parallels to other
industries where products were designed for a homogenous
group of people. From automobiles crash tested on dummies
with prototypical adult “male” characteristics resulting in
accidents that disproportionately killed women and children, to
clinical trials that excluded many groups of people resulting in
drugs that do not work or disproportionately negatively affect
women, products that are built and tested on a homogenous
group of people work best for that group.
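
A model card of the kind Mitchell et al. propose is, at its core, structured documentation that travels with a model. The rough sketch below paraphrases some of its categories as a simple data structure; the field names and all the values are illustrative assumptions, not the authors' exact schema.

# A rough sketch of a model card as a data structure; field names paraphrase the
# categories in "Model Cards for Model Reporting" and are not an exact schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    out_of_scope_uses: List[str]
    evaluation_data: str
    # Performance is reported per subgroup, not only in aggregate.
    disaggregated_metrics: Dict[str, Dict[str, float]]
    ethical_considerations: List[str] = field(default_factory=list)

# Hypothetical example with illustrative numbers only.
card = ModelCard(
    model_name="face-attribute-classifier-demo",
    intended_use="research benchmarking only",
    out_of_scope_uses=["law enforcement identification", "employment screening"],
    evaluation_data="benchmark annotated with skin type and gender",
    disaggregated_metrics={
        "lighter-skinned men": {"error_rate": 0.01},
        "darker-skinned women": {"error_rate": 0.21},
    },
    ethical_considerations=["unequal error rates across subgroups"],
)
print(card.disaggregated_metrics)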

3.6 THE DESIGN OF ETHICAL AI STARTS FROM WHO IS GIVEN A SEAT AT THE TABLE:

Ethical AI is not an abstract concept, but one that is in dire need of a holistic approach. It starts from who is at the table,
who is creating the technology, and who is framing the goals and
values of AI. As such, an approach that is solely crafted, led, and
evangelized by those in powerful positions around the world, is
bound to fail. Who creates the technology determines whose values are embedded in it. For instance, if the tech industry were
not dominated by cis gendered straight men, would we have
developed automatic gender recognition tools that have been
shown to harm transgender communities and encourage
stereotypical gender roles? If those communities were the ones overrepresented
in the development of artificial intelligence, what types of tools
would we have developed instead? If the most significant input
for developing AI used in the criminal justice system came from
those who were wrongfully accused of a crime and confronted
with high cash bail due to risk assessment scores, would we have
had the algorithms of today that disproportionately
disenfranchise Black and Brown communities in the US? If the
majority of AI research were funded by government agencies
working on healthcare rather than military entities such as the
Defense Advanced Research Projects Agency (DARPA), would we
be working towards drones that identify persons of interest?

3.7 EDUCATION IN SCIENCE AND ENGINEERING NEEDS TO MOVE AWAY FROM “THE VIEW FROM NOWHERE”:

If we are to work on technology that is beneficial to all of society, it has to start from the involvement of people from many
walks of life and geographic locations. The future of who
technology benefits will depend on who builds it and who utilizes
it. As we have seen, the gendered and racialized values of the
society in which this technology has been largely developed have
seeped into many aspects of its characteristics. To work on
steering AI in the right direction, scientists must understand that
their science cannot be divorced from the world’s geopolitical
landscape, and there are no such things as meritocracy and
objectivity. Feminists have long critiqued “the view from
nowhere”: the belief that science is about finding objective
“truths” without taking people’s lived experiences into account.
This and the myth of meritocracy are the dominant paradigms
followed by disciplines pertaining to science and technology that
continue to be dominated by men.

4. ARTIFICIAL INTELLIGENCE AND MORAL RIGHTS:

Whether copyrights should exist in content generated by an artificial intelligence is a frequently discussed
issue in the legal literature. Most of the discussion focuses on
economic rights, whereas the relationship of artificial intelligence
and moral rights remains relatively obscure. However, as moral
rights traditionally aim at protecting the author’s “personal
sphere”, the question whether the law should recognize such
protection in the content produced by machines is pressing; this
is especially true considering that artificial intelligence is
continuously further developed and increasingly hard to
comprehend for human beings. This paper first provides the
background on the protection of moral rights under existing
international, U.S. and European copyright laws. On this basis,
the paper then proceeds to highlight special issues in connection
with moral rights and content produced by artificial intelligence,
in particular whether an artificial intelligence itself, the creator or
users of an artificial intelligence should be considered as owners
of moral rights. Finally, the present research discusses possible
future solutions, in particular alternative forms of attribution
rights or the introduction of related rights.

Artificial Intelligence (AI) is often considered a disruptive technology. The implications of this technology are not
confined to the industrial sector, but extend to numerous
artistic fields, like the creation of music or works of art (Bridy
2016; Niebla Zatarain 2018; Schönberger 2018). As the
technology promises great advances in a wide variety of contexts
and areas of research, massive initiatives and investments are
being undertaken; this also applies to the political level
(European Commission 2018b). However, the use of AI can have
far reaching consequences and many problems are still not fully
explored. This relates, for instance, to the technology’s
philosophical and economic implications, but also to the legal
framework that governs its use. Widely discussed legal questions
include liability issues, especially in the context of autonomous
driving (Collingwood 2017), data protection (Kuner et al. 2018) as well as the protection of AI and its products (Abbott 2016;
Vertinsky and Rice 2002) under copyright law (Grimmelmann
2016a, b). This also relates to machine learning (Surden 2014).
The copyright-related literature has, as far as can be seen,
focused on the question whether economic rights exist or should
exist in AI-generated content. However, the relationship between
AI and moral rights has not been studied to a comparable extent
(Miernicki and Ng 2019; Yanisky-Ravid 2017). Against this
background, this research analyzes the relationship between AI
and moral copyrights. For the sake of completeness, we will also
allude to economic rights in AI-generated content, where
appropriate.

4.1 Legal Background

Moral rights acknowledge that authors have personal interests in their creations and the corresponding use that is
made of them. These interests are conceptually different from the
economic or commercial interests protected by the author’s
economic rights which are typically understood to enable the
author to derive financial gain from her creation (Rigamonti
2007). Moral rights thus aim to protect the non-economic
interests of the author; this is often justified with reference to a
“presumed intimate” bond of the creator with his or her creations
(Rigamonti 2006). In this light, moral rights protect the
personality of the author in the work (Biron 2014). Not
surprisingly, different jurisdictions have varied takes on this
school of thought and do not interpret this ideology to the same
extent. In this regard, it is generally said that common law
jurisdictions are more hesitant to grant moral rights than civil
law jurisdictions (Rigamonti 2007; Schére 2018). This might
explain why—even though moral rights can be found in various
jurisdictions throughout the world—the degree of international
harmonization with regard to moral rights is rather low
(Rigamonti 2006). The general principles are set forth by the
Berne Convention (WIPO 1979); its Article 6bis states that
“the author shall have the right to claim authorship of the work
and to object to any distortion, mutilation or other modification
of, or other derogatory action in relation to, the said work, which
would be prejudicial to his honor or reputation.” As can be seen,
the Berne Convention provides for two distinct moral rights: The
right of attribution and the right of integrity of the work
(Rigamonti 2006; cf. U.S. Court of Appeals 1st Circuit 2010). The
right of attribution generally includes the right to be recognized
as the author of a work, so that users of the work as well as the
public in general will associate the work with its creator (Ciolino
1995), which is also linked to a certain kind of social recognition
and appreciation; recognition for one’s work is sometimes deemed
a “basic human desire” (US Copyright Office 2019). The right to
integrity, in turn, refers to the author’s interest not to have his or
her work be altered drastically or used in a prejudicial way.
Whether there is an infringement of the right to integrity is very
dependent on the individual case as well as, importantly, the
context of the use. Under this rule, moral rights could be
infringed if, for instance, a song is rearranged or used for a
purpose completely different from the author’s intentions
(Ricketson and Ginsburg 2006). Moral rights came in special
focus in the U.S. legal regime with the accession of the United
States to the Berne Convention (Ciolino 1995). At that time,
legislative changes were not made because the moral rights
contained in the convention were already, according to
Congress’s opinion, sufficiently provided for under U.S. law
(U.S. Copyright Office 2019). Later, moral rights were explicitly
recognized in the Visual Artist Rights Act (17 U.S.C. § 106A);
however, the scope of the act is relatively narrow (U.S. Copyright
Office 2019; cf. U.S. Court of Appeals 1st Circuit 2010). In fact, the
transposition of the Berne Convention’s requirements into U.S.
law as regards moral rights has been a long source of controversy
(Ginsburg 2004; Rigamonti 2006). In any event, however, it is fair
to look not only at the U.S. Copyright Act, but also at other laws
on the federal level as well as common law claims that can arise
under state law, for instance (Rigamonti 2007; U.S. Copyright
Office 2019).

4.2 Protection of AI-generated content

The key question of much of the copyright-related debate on AI is whether or to what extent copyrightable works
require or should require human action. Under the Berne
Convention, there is no clear definition of the concept of
“authorship”. However, it is strongly suggested that the
convention only refers to human creators (Ginsburg 2018;
Ricketson 1991), thereby excluding AI-generated content from its
scope. This means that the minimum standard set forth by the
Berne Convention only applies to works made by humans. U.S.
copyright law affords protection to “original works of authorship”
(17 U.S.C. § 102(a)). This language is understood as referring to
creations of human beings only (Clifford 1997; U.S. Copyright
Office 2017); accordingly, the law denies copyright protection for
the “creations” of animals, the so-called “monkey selfie” case (U.S.
District Court Northern District of California 2016) being a
notable example of the application of these principles in the
courts. This equally applies to content produced by machines
(U.S. Copyright Office 2017) or, in the present context, AI-
generated content (Abbott 2016; Yanisky-Ravid 2017). In
consequence, such content is not copyrightable, unless a human
author is found to have contributed creative input. Since there is
no general EU copyright code but rather several directives with
their respective scope, it is not easy to distill the concept of
authorship under EU law. However, it is possible to infer some
guidance from the language of the different directives. On the one
hand, the “own intellectual creation” standard is set forth in
respect of databases, computer programs and photographs
(European Parliament and Council of the European Union 1996,
art. 3(1); European Parliament and Council of the European
Union 2006, art. 6; European Parliament and the Council 2009,
art. 1(3); see also European Parliament and the Council 2019,
art. 14). On the other hand, the “Copyright Directive” (European
Parliament and the Council 2001) refers to “authors” and
“works”. With regard to the first standard, the ECJ establishes the
connection to the author’s personality (European Court of Justice
2011a; European Court of Justice 2012), a line of argumentation that can also be found in connection with the “Copyright
Directive” (European Parliament and the Council 2001)
(European Court of Justice 2008; European Court of Justice
2011b). Accordingly, many commentators conclude that human
authorship is required under European copyright law (Handig
2009; Ihalainen 2018; Miernicki and Ng 2019; Niebla Zatarain
2018).

4.3 AI as the owner of moral rights

The concept of AI as a possible owner rests on the fact that AI is not simply an automatic system but could be an
autonomous system. While the development of an AI system,
including data mining, machine learning and training processes,
is normally supervised by humans, recent advancements in AI
technology have enabled AI to learn from other AIs, in a process
called “kickstarting”. The “black box” of AI (Knight 2017), where
developers of AI networks are unable to explain why their AI
programs have produced a certain result, further augments the
belief of autonomy of AI and its capability of having its own
persona and decision-making abilities. However, it should be
noted that any discussion centering on AI as copyright owners is
essentially a discussion de lege ferenda. This is because for
machines to be granted rights, some form of legal personality
would be required (Bridy 2012; Yu 2017). It is our impression
that this is not the case for many jurisdictions in Europe,
America and Asia. In turn, whether machines should be granted
some form of legal personality is in fact discussed on different
levels. This relates, of course, to academic literature (Günther
et al. 2012; Paulius et al. 2017; Solaiman 2017), but also to
legislative initiatives (Committee on Legal Affairs 2015).
However, oftentimes the discussion in this context centers on
liability issues, which are a different question from the grant of
moral copyrights (Miernicki and Ng 2019); the attribution of
liability is not the same as allocating exclusive rights, although
the former must certainly be considered when acknowledging AI
as legal persons. This is especially true if legislation granting AI the status of a legal person (similar to a company or a
partnership) would apply across other legislations unless
precluded by that specifc legislation otherwise.

4.4 Creators or users of an AI as the owners of moral rights

Should humans that interact with the AI—i.e., the creators or the users—have moral rights in the
content produced by the AI? Where software is a tool used by
humans, the general rules apply: Where the programmer
contributes creative effort, she is awarded a copyright, the extent
of which is determined by national law. However, the situation is
different where the AI produces the content without original
input contributed by a human being (Miernicki and Ng 2019). In
order to analyze this question, it is helpful to conceptualize the
different roles of the AI and the involved humans as follows: The
programmer creates the AI and holds, generally speaking, a
copyright in a literary work (cf. WTO 1994, Art 9 et seq.). Thus,
under ordinary circumstances, the AI constitutes copyrightable
subject-matter and is referred to hereinafter as “first generation
work” because it directly stems from the programmer’s creative
effort (cf. Yanisky-Ravid and Velez-Hernandez 2018). Going a step
further, if this work (software) generates new content, we could
refer to this content as a “second generation work” since—
provided that the programmer did not intervene at all—it only
indirectly stems from the programmer’s creative work. Now the
question arises whether the creator has a special non-economic
connection to the “second generation work” that could be
protected by moral rights. To answer this question, it is useful to
recall the fundamental rationales of moral rights: these rights are
granted because the work represents “an extension of the
author’s personhood” (Rigamonti 2006); the author’s personal
traits are, so to say, “embodied” in the work (cf. Rosenthal Kwall
2010). Conversely, in absence of this creative endeavor, there is a
lack of the intimate connection between the creator of the AI and
the produced content that moral rights have traditionally sought to protect.

4.5 A related rights approach?

In light of the foregoing, we believe that granting moral rights in AI-generated content is, in principle, not compatible
with the traditional rationale of these rights. Apart from
alternative models to the attribution right that should be
considered, it remains to discuss whether or to what extent there
should be economic rights and how such rights would fit in the
copyright system. This is in many respects a question of whether
such rights can produce beneficial incentives (either for investing
in the development of AI or publishing its results), a question
which has been discussed at length (Davies 2011; Glasser 2001;
Grimmelmann 2016a, b; McCutheon 2013; Perry and Margoni
2010; Samuelson 1986; Yu 2017); we have already made some
arguments above that would also apply to economic rights and do
not delve further into this debate in this research. For the
present purposes, and from the moral rights perspective, it would
be a possible regulatory option to grant—similar to the solution
found in the UK—certain economic rights but no or only limited
moral rights; this resembles the legal situation with regard to
related rights (Miernicki and Ng 2019). Since such a “middle-
ground solution” might be more easily compatible with the
different views that exist on moral rights, e.g., with regard to their
scope, transfer or ownership by legal persons (Miernicki and Ng
2019; cf. Denicola 2016; Ory and Sorge 2019), it might be more
likely to find a consensus for the international harmonization of
this matter. However, a related rights solution also runs the risk
of triggering a proliferation of protected content.
