Journal of Politics and Ethics in New Technologies and AI
Volume 2, Issue 1 (2023)
e-ISSN: 2944-9243
https://doi.org/10.12681/jpentai.33299
© 2023 Themis Tzimas. This work is licensed under a Creative Commons Attribution 4.0 License.
e-Publisher: EKT (https://epublishing.ekt.gr)
RESEARCH ARTICLE
A Public Sphere for AI
Themis Tzimas
Lawyer & Adjunct Lecturer, Department of Political Science, Democritus University of Thrace, Greece.
Abstract
The present article addresses key elements of the unique ontology of AI and argues that these require the expansion of the public sphere, in order to successfully manage the entry of new intelligent actors into legally regulated relationships which are based on the identification of causal connections. In this sense it attempts to link law and political science, given that the governance of any phenomenon or field includes law and, in particular, the detection of legally significant causal relationships. Regulating such relationships effectively offers legal certainty, which in turn is a fundamental element of effective governance. In our self-evidently human-centered world, whether we are talking about natural persons or about legal persons, it is self-evident that there is, in the end, a human hand behind the causal relations with which law is involved. Once other, non-human, intelligent actors gradually enter the forefront, these causal relations become further complicated. It is on these complications and their impact that we focus.
Keywords: Artificial Intelligence, Regulation, Intellectual Property, Governance
Introduction
The debate about AI and the different aspects of its governance, although it has begun, remains less developed than the significance of the phenomenon justifies. In this article we argue that key elements of the unique ontology of AI require the expansion of the public sphere, among other reasons in order to successfully manage the entry of new intelligent actors into legally regulated relationships which are based on the identification of causal connections.
It is on this aspect of AI governance, which links law and political science, that the present article focuses. We also start from the premise that the governance of any phenomenon or field includes law and, in particular, the detection of legally significant causal relationships. Regulating such relationships effectively offers legal certainty, which in turn is a fundamental element of effective governance.
Law, as an aspect of the governance of our relations, is based on the attribution of legal personality,
whether to natural persons or to artificial, legal persons. In our self-evidently human-centered world, whether we are talking about natural persons or about legal persons, it is self-evident that there is, in the end, a human hand behind the causal relations with which law is involved. Once other, non-human, intelligent actors gradually enter the forefront, these causal relations become further complicated.
Complications arise, on the one hand, from the difficulty of connecting the concepts of creativity, work and responsibility with non-human actors and, on the other hand, from the fact that material benefit, punishment and any other form of sanction are likewise adapted to human ontology and action.
The main argument that we present in this article is that the unique ontology of AI requires an expanded public sphere, to which, on the one hand, autonomous creations will belong and which, on the other hand, will be able to function as a compensation scheme for torts arising from the autonomous operation of AI.
We start by referring to certain elements of the ontology of AI; we then move on to examining the importance of causality in achieving legal certainty. We focus on the protection of IP norms and on tort law, viewed through the prism of AI, since they are two obvious areas of law where causality is fundamental. On the basis of the above, we support the need for an expanded public sphere.
1. The Unique Ontology of AI
In the world of humans, which is the world as we still understand it, we know reason, intelligence and self-awareness, almost always, when we see them. There is occasionally some degree of uncertainty but, in spite of all the ambiguity that emerges under extreme conditions, we easily agree that higher intelligence constitutes the realm of humanity. This is why a complete legal personality, and all that flows from it, is a human privilege.
Artificial Intelligence (AI) is getting closer to changing the aforementioned, seemingly self-evident fact. The various definitions of AI, despite their ambiguity, share some common elements: the replication of human thinking, the demonstration of rationality (Russell & Norving, 2010), consciousness, self-awareness, language use, the ability to learn (Scherer, 2016) or intelligence by computational agents (Poole & Mackworth, 2010) and, for some, the mere mimicking of aspects of human intelligence (Charniak & McDermott, 1985; Rich & Knight, 1991; Scherer, 2016) are all aspects of the ontology of AI.
The fundamental element of AI is its intellectual autonomy, which is expanding and provides it with the capacity to adapt to novel environments (Omohundro, 2008; Russell & Norving, 2010). This is what makes AI invaluable: the passage from automation to autonomy. Autonomy means that AI is not the mere outcome of software programming but that it imitates and reproduces the learning procedure followed by humans, through machine learning, as envisioned by Alan Turing (McCarthy, 2008; Lake et al., 2016).
Alan Turing’s approach was that computers could imitate children’s minds, methodology and
evolution:
“[I]nstead of trying to produce a programme to simulate the adult mind, why not rather try to
produce one which simulates the child’s? If this were then subjected to an appropriate course of
education one would obtain the adult brain… There is an obvious connection between this
process and evolution…. One may hope, however, that this process will be more expeditious
than evolution.” (Turing, 1950, p. 456)
There are different types of machine learning (Tito, 2017): "supervised learning", which "uses a set of examples with a label informing the algorithm of the expected output..."; "[r]einforcement learning", where "algorithms learn how to choose between a set of actions to accomplish a task that will maximize some reward..."; and "unsupervised learning", which "...refers to the problem of designing algorithms which can learn by themselves without any external goal (either a list of labelled examples or rewards) and which would be able to come up with their own goals" (Righetti, 2016). In spite of the challenges that the development of machine learning faces (Davies & Marcus, 2015), it produces impressive results (Müller & Bostrom, 2016; SAS, n.d.),1 which sustain the quest for enhanced AI autonomy (Karnow, 2016).
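The distinction between these paradigms lies in the feedback signal available to the algorithm. The following is a minimal, illustrative sketch, not drawn from the article or any cited source; the data, function names and reward values are invented for illustration:

```python
# Illustrative sketch: two of the machine-learning paradigms described above.
# Supervised learning receives labelled examples; reinforcement learning
# receives only a reward signal for the actions it tries.

def supervised_fit(examples):
    """Supervised learning: each example carries a label with the expected
    output; here we learn one centroid per label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    """Assign the label whose learned centroid is closest to x."""
    return min(model, key=lambda label: abs(model[label] - x))

def reinforcement_choose(actions, reward, episodes=100):
    """Reinforcement learning (greatly simplified): try actions repeatedly,
    keep a running average reward estimate per action, prefer the best one."""
    estimates = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        for a in actions:                      # round-robin exploration
            counts[a] += 1
            estimates[a] += (reward(a) - estimates[a]) / counts[a]
    return max(estimates, key=estimates.get)

model = supervised_fit([(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")])
print(predict(model, 7.5))                                        # -> "high"
print(reinforcement_choose(["a", "b"],
                           reward=lambda a: 1.0 if a == "b" else 0.2))  # -> "b"
```

Unsupervised learning, the third paradigm, would receive neither labels nor rewards and would have to group the inputs on its own.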
The goal of machine learning is to achieve, in terms of the intelligence of AI, natural-like, evolutionary patterns and therefore to come up with solutions to a wide range of problems that are not predetermined, without necessarily having humans in the loop (Bostrom, 2014). It is on such grounds that AI autonomy is built and evolves. Machine learning also explains why, as AI evolves, its nature becomes probabilistic, non-linear, complicated, opaque and therefore unpredictable.
There are two fundamental uncertainties that machine learning meets, due to the fact that it functions in non-deterministic environments: environment uncertainty and model uncertainty (Huszár, 2015). Therefore, what AI is expected to do is to learn and decide on the basis of the action with the highest expected utility, in light of the AI system's basic preferences and goals (Bostrom, 2014).
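The expected-utility calculation mentioned above can be made concrete with a short sketch. The actions, probabilities and utility values below are invented for illustration and are not taken from the article or its sources:

```python
# Illustrative sketch: choosing the action with the highest expected utility
# under environment uncertainty, as decision-theoretic accounts of AI describe.

def expected_utility(outcomes):
    """Sum of probability * utility over the possible outcomes of an action."""
    return sum(prob * utility for prob, utility in outcomes)

def choose(actions):
    """Pick the action whose expected utility is highest."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    # action: [(probability, utility), ...] - probabilities sum to 1 per action
    "cautious": [(1.0, 5.0)],                  # expected utility = 5.0
    "risky":    [(0.5, 12.0), (0.5, -4.0)],    # expected utility = 4.0
}
print(choose(actions))   # -> "cautious"
```

The point of the sketch is that the system's "preferences" are encoded entirely in the utility numbers: change them and a different action becomes rational, which is one source of the unpredictability discussed in the text.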
It is this procedure, and the goals that it serves, which push AI towards demonstrating and developing characteristics such as logic as a tool of analysis (Thomason, 2003), creativity, problem solving, pattern recognition, classification, learning, induction, deduction, analogy building, optimization, surviving in an environment and language processing (Hutter, 2010; Hallevy, 2018), as well as cognitive autonomy, intuition and strategic thinking (Camett & Heinz, 2006; Suchman & Weber, 2016; Yanisky-Ravid & Liu, 2018; Hallevy, 2018).

1 According to Müller and Bostrom (2016), "The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter."
The above-mentioned procedure of AI explains how, since we "trust" a procedure similar to the learning of children in order to train machines, we necessarily also "accept" the consequences of the "black box effect" in terms of AI behavior and decision-making (UK Government Office for Science, 2015; Castelvecchi, 2016).
While it remains almost impossible to completely predict the evolution of AI autonomy and of machine learning, we know that AI machine learning, largely based on artificial neural networks, is already surpassing "rules-based programming" (Pyle & San Jose, 2015). This means that AI already possesses the capacity to function autonomously from the human programmer, to surpass by far human intelligence -currently in narrow, pre-determined areas- and to evolve and even reprogram itself.
The above-mentioned "structure" is what sustains and evolves AI autonomy. On the one hand, as AI autonomy evolves, it gradually pushes humans out of the loop; on the other hand, it provides AI with an expanding variety of aspects of intelligence (Russell & Norving, 2010; Laton, 2017).
Of course, AI has not yet achieved general intelligence.2 It remains a question whether it will do so. An even more puzzling question is whether it will achieve consciousness and what that would mean from both an AI and a human intelligence perspective. Still, regardless of any potential answer to the above questions or setback in the evolution of AI, we are already witnessing -and we will do so even more in the future- AI entities emerging as at least partially intelligent and autonomous actors, playing a crucial role in causal relationships. After all, for AI it is fundamental to be able to adapt to change (Russell & Norving, 2010).

2 AI is roughly distinguished between weak AI, where "the computer is merely an instrument for investigating cognitive processes", and strong AI, where "[t]he processes in the computer are intellectual, self-learning processes." Weak AI is called Artificial Narrow Intelligence (ANI), while strong AI is distinguished between Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). ANI is already present, whereas AGI and ASI are anticipated or speculated. Although it is with AGI and ASI that a level equal or superior, respectively, to human intelligence will be reached, even now ANI is in the course of growing autonomy, in the sense of "...systems capable of operating in the real-world environment without any form of external control for extended periods of time" (Urban, 2015; Heath, 2018; Bekey, 2005). AGI is expected to be the "type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience" (Heath, 2018). In this sense, the less that AI is based on programming and the more it is based on experience, the closer it gets to AGI (Moravec, 1976). Superintelligence moves one step further and refers to the exceeding of human intelligence, in the sense of "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" (Bostrom, 2014).
Governance, and within its framework law, needs to take this development and potential of AI into account. It is in this sense that we must design norms regulating the legally significant causalities, taking into account the existence and the potential evolution of AI as a new type of intelligent actor.
A significant obstacle to the efficiency of any design for the governance of AI is that we cannot completely, or in some cases even approximately, predict the range of forms that AI entities may take as intelligent actors, due both to the technological evolution of AI per se and to our incapacity to understand human intelligence completely, which after all constitutes the basic criterion for the assessment of the evolution of AI.3
2. Causality in Law: IP and Tort Law
The identification of causal relationships constitutes one of the foundations of human reason. In fact, it is at the basis of our ability to draw conclusions both about already known environments and relationships and about unknown ones. The identification of causal relationships, and the reward or punishment on their basis, offers for different reasons a necessary aspect of security, crucial both for the effectiveness of the governance of our societies in general and for the consolidation of legal certainty.
As we saw before, the autonomy of AI marks the entry of new intelligent actors into the context of causal relationships. This condition means that, compared to what we know as of now, more and different stages intervene within the legally significant causal relationships until the final result occurs.
Governance and law should therefore plan how to answer a number of questions: what extent of human presence in the loop -if any- is obligatory in the framework of legally valid interactions and relations? Should we accept the prospect of legally significant relations and interactions with no human presence in the loop, for example in the legal area under examination in the present article, although not only in this one? Should we always identify a "human hand" that is entitled to the profit or should bear the responsibility? If so, who should that be? The designer of the software? Those harvesting big data and the "machine trainers"? The owner of the AI entity?4

3 In terms of the significance of AI's capacity to replicate human intelligence, as a defining paragon of AI, two contradictory "tests" have been proposed. The first is the so-called Turing Test, which is in fact an imitation game aiming at the identification of intelligent machines: a computer passes if it can fool a human judge into believing that the mysterious interlocutor on the other side is human. The Turing Test focuses upon the external manifestation of intelligence as an indicator of the emergence of higher intelligence. The Chinese Room Argument by the philosopher John Searle indicated the possible flaws of the Turing Test by trying to prove that AI can manifest capacities of intelligence without actually comprehending the semantics of higher intelligence of the human type, and therefore without developing the subjective, mental experience of human intelligence (Hern, 2014; Cole, 2020). Both of these tests, as well as the wider discussion about AI, are related to the fact that consciousness remains notoriously complicated or even elusive (Dennet, 1978; Minsky, 1985; Greely, 2018). As long as consciousness remains a privilege of humans, our way to identify it is more or less "we know it when we see it". Things will get even more complicated with AI: even if we see it, are we sure that it is actually there? Can we trust the external manifestation of consciousness, when demonstrated by machines which are created and trained to imitate? (McGinn, 1991; Chalmers, 1996, 2008; Johnson-Laird, 1983; Frye, 2018).
To date, the legal framework has not been able to keep pace with these evolving technologies and their consequences. Among other legal areas, the issues of profitability from AI and of the wider nexus of intellectual property norms, as well as the legal issues of liability, are still regulated predominantly under the present, completely human-centered norms of the era of automation. That creates the potential for legal gaps.
In order to examine how legally significant causalities should be governed, the starting point is to examine how they are regulated at present. We focus on IP and tort law, as they constitute two areas of law which are characteristic of the significance of legal causality.
Intellectual property, "very broadly, means the legal rights which result from intellectual activity in the industrial, scientific, literary and artistic fields" (WIPO, 2014). The main goal of intellectual property is the transformation of knowledge into economic value (Manderieux, 2010), through the practical application of the intellectual activity (WIPO, 2014).
The theoretical foundation of intellectual property norms is the causal link between the work of the creator and the reward for it, as a material motive for further innovative work (Khoury, 2016; Fisher, 2001). Therefore, IP norms' foundation includes both a material and an ethical motive, in order for innovativeness and its practical implications to evolve (United Nations, 1975). It is obvious that this causal relationship is important only as long as human labor and innovativeness are rewarded. For any other intelligent actor it is pointless.5 Therefore, in an environment of autonomous, intelligent, non-human beings, who will be entitled to such rewards and rights, if any (Sobel, 2017)?6
4 Even this last option can easily turn out to be problematic, considering the previous conversation concerning the possibility of self-replicating AI entities, which essentially may not be owned by anyone.
5 Parenthetically, there is also the contradictory approach that is built on a wider polemic against patents, mainly because of their lack of social utility (Krauss, 1989; Bethards, 2004; Cohen, 2006; Salzberger, 2006).
6 Regarding this difficult question, we should take into consideration the fact that, according to IP law, an invention must be specific in order to be protected, which very often contradicts the inexplicability and the unpredictability of AI (Hashiguchi, 2017). Further, it is problematic that inventions in relation to AI mostly refer to the methods and the devices that are designed to carry out mental steps.
When we examine AI applications, even at the level of ANI, which replicate human mental activities, we must examine whether these activities still belong to the area of automation,7 being therefore pre-determined by the software designer, or fall into the area of autonomy, constituting applications that are not pre-designed and are unpredictable by humans. There are already cases of creative works conducted by autonomous algorithms, for example in music (Shi, 2016; van den Oord et al., 2016), pictures (Mordvintsev et al., 2015) and writing (The Guardian, 2020), as well as cases of expanding autonomy of AI inventions in the drug industry, among other areas with concrete industrial applications. As Abbott (2016a) wrote, "Soon computers will be routinely inventing, and it may only be a matter of time until computers are responsible for most innovation."
While patent laws have been designed on the assumption that only humans can demonstrate inventiveness (Kim, 2018),8 creativity and inventiveness are gradually being demonstrated by non-human actors as well. Even today, machines are considered merely as mediators of human creativity (Burrow-Giles Lithographic Co. v. Sarony, 1884; WIPO, 1979; Apple Computer, Inc. v. Franklin Computer Corp., 1983, as cited in de Cock Buning, 1998). Whenever a machine is involved in the procedure of innovativeness, we search for a causal relationship that can be traced back to a human inventor.9 The actions of a machine are considered as pre-determined and automated by the humanly designed software (Tremblay, 2015; Palace, 2019), which means that the effort then is to attribute rights arising from innovativeness to a natural or an artificial legal person, often and gradually regardless of the potential creativity of AI.10
7 This is the prevalent approach as of now, based on the precedent of technologies such as photography, video games and computers (Burrow-Giles Lithographic Co. v. Sarony, 1884; U.S. Copyright Office, 1965; U.S. CONTU, 1978; Midway Mfg. Co. v. Artic Intern., Inc., 1983; Jaszi, 1992; Grimmelmann, 2016).
8 Characteristically, the US Copyright Office had determined back in 1956 that the author of any copyrightable work must be human, a position which was reiterated in relevant future cases, before both the courts and the Copyright Office. With a similar understanding, when the US patent law was adopted, it was stated in the US Congress that it involved "anything under the sun that is made by man" (Pearlman, 2018). In such a framework, the US Copyright Act characteristically states that copyright can be granted for an "original work of authorship fixed in any tangible medium of expression", while the Copyright Office has established the "Human Authorship Requirement", according to which "[t]o qualify as a work of 'authorship' a work must be created by a human being." (Pub. L. No. 94-553, 90 Stat. 2541, 1976; U.S. Copyright Office, 2017).
9 "Although [...] the human input as regards the creation of machine-generated programs may be relatively modest, and will be increasingly modest in the future [...] nevertheless, a human 'author' in the widest sense is always present, and must have the right to claim authorship in the program." Commission (1989, Article 1).
10 After all, this is what several court decisions have maintained. The Court of Justice of the EU (CJEU), in the "Infopaq case", concluded that, regarding an author's creation and its protection, the work must be attributed to the author in order to qualify for such protection. The important elements are the genuineness of the work -not constituting a replica of an earlier work- and the subjectivity of the author, i.e. of the creator of the work (Case C-5/08, 2009; Case C-393/09, 2010; Case C-403/08, 2011; Case C-145/10, 2011; Case C-604/10, 2012). See also Cases C-403/08 and C-429/08.
Discussions about the material reward for the programmer or the owner of an AI entity reaffirm the human-centered approach (Bridy, 2012).11 The attribution of reward to programmers assumes that the initial software design constitutes the main causal link between the innovativeness and its practical application. The problem with this approach is that, while the programmer initiates AI evolution, this contribution is not necessarily the main or the sole one with regard to AI's future evolution and actions.
As we already said, machine learning and AI autonomy produce unpredictable and unexpected outcomes (Tanz, 2016; Gilbert, 2017). The initial software design, especially as AI autonomy grows, sets in motion a sequence of events which do not always or necessarily constitute the linear outcome of that design and therefore cannot be attributed to the programmer's initial work, especially to the extent that patent eligibility would require. As for the attribution of economic reward to the owner of the AI system, while that seems more practical, it aligns neither with material reward for labor nor with reward for innovativeness.
While there are still many AI applications and acts which could partially align with our traditional, human-centered reward schemes, the truth is that they gradually retreat under the weight of AI's expanding autonomy. The problem, then, with the human-centered approaches is that they overlook the extent and the significance of AI autonomy and the demonstration of creativity, or of aspects of it, by AI.12
Creativity, until now, has been considered as self-evidently human (de Cock Buning, 2016). Contrary to that, we are already witnessing aspects of AI-oriented creativity. AI systems can write their own articles, compose music, design new medicines and play games -potentially the most fundamental demonstration of creative thinking- among different manifestations of creativity. The demonstration of aspects of creativity by AI is already underway and, in this sense, there is inconsistency with current laws (Pearlman, 2018).
The judiciary is in general negative towards recognizing patent eligibility for non-human creations, although there is at least one different case: that of John Koza's AI system invention -the genetic programming invention- for which, however, it was eventually, again, the human inventor of the AI system who received the patent (Keats, 2006).
11 "Perhaps the best reason to allocate ownership interests to someone, however, is that someone must be motivated, if not to create the work, then to bring it into public circulation." (Samuelson, 1986). "Contract arrangements between the copyright owner of a computer program and those who use the program to create new works can be relied upon to allocate rights in the works created." (Goldstein, 2014, § 2.2.2).
12 Going back to the US Copyright Office (2017, § 313.2), it is characteristic that it states it "will not register works produced by a machine, that operates randomly or automatically without any creative input or intervention from a human author." What if such creative input, however, can be externalized by a non-human actor? DeepMind, for example, indicates some extent of creativity (Mordvintsev et al., 2015).
On such grounds it is suggested that AI can already produce patentable material (Abbott, 2016a;
2016b). Of course, it is almost impossible to come up with a quantification of creativity, so that we
can determine what extent of creativity can lead to a creation that may be considered as patent eligible
(Feist Publications, Inc., v. Rural Telephone Service Co., 1991).
The crucial point here, however, consists in the fact that we can trace the origins of innovativeness, in terms of creativity, not only to humans but gradually to "machines" as well. If creativity can be a characteristic of AI too, then the most crucial causal connection leading to innovativeness and material profit can be non-human oriented (Ritchie, 2007). Therefore, patent-eligible creations can emerge from non-human intelligent actors, which makes us wonder whether protection under IP norms for such creations makes any sense (Abbott, 2016a). In fact, once we are talking about creations of non-human origin, the very sense of creativity changes (Sachs, 2016). As stated in the sixty-eighth annual report of the Copyright Office (1965):
"[t]he crucial question appears to be whether the “work” is basically one of human authorship,
with the computer merely being an assisting instrument, or whether the traditional elements of
authorship in the work (literary, artistic or musical expression or elements of selection,
arrangement, etc.) were actually conceived and executed not by a man but by a machine.”
As AI "grows" and automation is surpassed in favor of autonomy, the causal connection between the mental conception and the industrial application can be projected onto AI as well (Pearlman, 2018; Palace, 2019). What such causality would mean in terms of law is that protection of the final creation under IP norms becomes either irrelevant or unfair (Ralston, 2005).
Although there are experts who have argued in favor of extending the protection of IP norms to AI as well (Hristov, 2017), neither material reward nor material motivation is relevant to AI (Samuelson, 1986; Hattenbach & Glucoft, 2015). In fact, the potential speed of AI innovativeness will make us reconsider what innovativeness is and how non-obviousness is to be assessed (Samore, 2013). As stated by Plotkin (2009), "Supply every engineer with state-of-the-art artificial invention technology and train them in how to use that technology, and you have effectively boosted the level of ordinary inventive skill in the field."
In addition, the attribution of IP norms protection to a natural or an artificial legal person who is not linked to the initial, creative thinking would be unfair, since it would provide material advantage without any causal connection with the creative thinking that led to the industrial application (Patry, 2016).13
Rewarding humans for AI creativity is like rewarding parents for their children’s creations (Abbott,
2016a); or as the courts in the US concluded, employing someone to invent does not make you an
inventor (Abrams, 2009; Plotkin, 2009; Schuster, 2018; Palace, 2019). The problem with an unfair
practice is not only that it is unfair but that it builds a cumulative effect in favor of those in power, thus
reproducing and magnifying social, economic and political inequalities.
On the basis of the aforementioned developments, it is reasonable to argue in favor of an expanded public sphere, within which the creations of autonomous AI entities should be placed (Clifford, 1997). In this sense, AI may emerge as a pioneer of a new "wave" of universal access to science, technology and their applications.14 There is no reasonable ground why anyone among us -natural or artificial legal person- should exclude all others from the profit and the scientific progress that autonomous AI creates.
The other side of causality is the one related to liability, within which an obvious area is that of tort law, albeit not the only one (Bekey et al., 2011). Liability is raised "(i) when the product created deviates from its intended design (manufacturing defect), (ii) when the product should have been designed differently to avoid a foreseeable risk of harm (design defect), or (iii) when companies fail to provide instructions or warnings that could have avoided foreseeable risks of harm (failure to warn)" (Nersesian & Mancha, 2021, p. 66). Tort law applies in cases of dangerous or unreasonable conduct, which may be intentional or negligent (Nersesian & Mancha, 2021). Liability, both in general and in relation to tort law, is based on the causal relationship between a defect caused intentionally or by negligence, which constitutes a breach of the safety rules, on the one part, and damage on the other (de Bruin, 2016). The important element lies in the existence of a causal connection between the defect and the damage (Prosser et al., 1984; MacCoun, 1993).
Again, the evolving autonomy of AI transforms the foundations of the attribution of responsibility: "the development of more versatile AI systems combined with advances in machine learning make it all but certain that issues pertaining to unforeseeable AI behavior will crop up with increasing frequency and that the unexpectedness of AI behavior will rise significantly" (Scherer, 2016, pp. 359-360). Or, to put it in other terms, "...it would be hard to determine whether the precise cause was the operating system or the application (and, if the latter, which application). This analysis is all the more difficult where the software is open source (since no single author is responsible) and the hardware can be easily modified." (Calo, 2011).

13 The public has no inherent interest in who owns the copyright so long as works are placed into the marketplace. Under this instrumental approach to copyright, "author" is a construct denoting merely the initial owner of all rights. That initial owner may be the actual individual who created the work, but need not be.
14 The role is to "serve an essential purpose in democratic society by providing a common reservoir of information upon which an informed citizenry can make choices" (Erickson, 2010).
The emergence and evolution of AI makes it increasingly difficult to determine the exact person or entity that must be held liable, due to the different layers involved in AI development and its inherent unpredictability (Marchant & Lindor, 2012; Solow-Niederman, 2020). This unpredictability, on the one hand, constitutes an actual problem in tracing the one responsible, if any, for harmful AI actions and, on the other hand, could help those actually responsible evade their responsibilities (Vladeck, 2014). We find ourselves in a difficult (im)balance between being unable to accurately identify the causal relationships leading to the final event raising liability, and having to hold liable a non-human entity that is probably ontologically indifferent to any such penalties (Bathae, 2020).15
The difficulty emerges from the fact that once other, non-human, intelligent actors are introduced into the causal chain that leads to a specific act or omission, the identification of the responsible person becomes harder as humans are pushed away from the loop (Karnow, 1996). Who among the implicated individuals and entities is to be held liable? The programmer, the owner, anyone in between, or the AI entity per se? (Browne & Harrison-Spoerl, 2008; Knight, 2017; Seseri, 2018).
We need a set of different liability, risk, and autonomy factors, clarifying the liability of the human behind the AI across the wider chain of design, ownership and use of AI. At the narrow end, AI is closer to automation.16 It therefore constitutes more of an "innocent agent" (Solum, 1992). The software designer or the owner of the AI could rather easily be identified as those bearing responsibility (Hallevy, 2010): the former is responsible for conduct determined by software defects, the latter for conduct emerging from harmful use of the AI (Decker, 2014).
However, as we move away from automation towards autonomy, things become more complicated (Tanz, 2016). AI becomes a superseding cause; that is, "an intervening force ... sufficient to prevent liability for an actor whose tortious conduct was a factual cause of harm", of any harm that such systems cause (Scherer, 2016, p. 365). In many ways, by adhering to the evolving nature of AI, we endorse its potential unpredictability.
15 Notably, the EU Parliament is attempting to come up with new legal regulations dealing with responsibility arising from AI, as well as with the potential legal personality of the latter (European Parliament, 2017).
16 The same holds for the determination of responsibility once the malfunction is an outcome of hardware.
From the initial software which aims at creating an evolving algorithm, through the harvesting and use of big data, up to the use of such an algorithm in everyday life as well as in a variety of crucial sectors of public and private life, we accept the interference of an at least partially unpredictable actor. The above does not negate the fact that there may in any case be misuse of AI by humans. In such cases, and in spite of the difficulties that may emerge, liability can be traced back to a human or human-administered actor. The main point is that the gradual acceptance of AI's role necessarily leads to the acceptance of a society of wider risk, no longer only human-oriented or nature-oriented.
The acceptance of a society of wider risk will certainly have an impact on the rules about liability; however, it cannot lead to a liability vacuum. What we need is to adjust liability to the different levels of AI autonomy. We cannot have a "one size fits all" approach to AI systems acting in different areas of human conduct and with different degrees of autonomy.
A scale of risk factors should be adopted, so that liability is designed on the basis of a combination of the area of AI action and the level of AI autonomy. For example, AI driving cars, treating patients, participating in wars, and guessing our choices in music are not active in areas of identical significance. In the latter case, the presence of a human in the loop is not important; in the first three cases, it is on the contrary necessary to have some type of human oversight and monitoring. If the area of AI activity is one parameter, the other should be the level of AI autonomy.
On the basis of such a combination, a variety of measures can be adopted: some AI applications may be banned, for example, whereas others could be subjected to limitations regarding, for instance, the necessary human presence in the loop, the objective responsibility emerging from their use, or standards regarding the conduct of machine learning or the software design.
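Purely as an illustrative sketch (none of these category names, thresholds, or measures appear in the article; they are hypothetical placeholders), the proposed combination of area of activity and level of autonomy can be read as a small decision matrix:

```python
# Hypothetical sketch of the article's two-parameter idea: map the
# criticality of the area in which an AI system acts, together with its
# level of autonomy, to a regulatory response. All labels are invented
# for illustration only.
from enum import IntEnum

class Criticality(IntEnum):
    LOW = 1       # e.g. music recommendation
    HIGH = 2      # e.g. driving, medical treatment
    EXTREME = 3   # e.g. weapons systems

class Autonomy(IntEnum):
    AUTOMATION = 1   # fixed, predictable behaviour ("innocent agent")
    SUPERVISED = 2   # learning system with human oversight
    AUTONOMOUS = 3   # self-directed, partially unpredictable

def regulatory_measure(area: Criticality, autonomy: Autonomy) -> str:
    """Return a hypothetical measure for the (area, autonomy) pair."""
    if area == Criticality.LOW:
        # Low-stakes applications: a human in the loop is not important.
        return "ordinary fault-based liability"
    if area == Criticality.EXTREME and autonomy == Autonomy.AUTONOMOUS:
        # The most dangerous combination may simply be banned.
        return "ban"
    if autonomy == Autonomy.AUTONOMOUS:
        return "strict liability + mandatory human in the loop"
    if autonomy == Autonomy.SUPERVISED:
        return "compliance standards for design and machine learning"
    return "ordinary fault-based liability"  # plain automation

print(regulatory_measure(Criticality.LOW, Autonomy.AUTONOMOUS))
# -> ordinary fault-based liability
print(regulatory_measure(Criticality.EXTREME, Autonomy.AUTONOMOUS))
# -> ban
```

The point of the sketch is only the structure: the regulatory response is a function of two independent parameters rather than of either one alone.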
Therefore, AI liability could be governed and regulated in two ways: the first type would concern the specific, subjective use of AI contrary to the specific standards of AI use; the second would consist in the adoption of an objective threshold of liability on the basis of the combination of the area and the autonomy of AI. Such regulations of AI-oriented liability can provide the necessary legal certainty, since non-compliance with a system of pre-determined standards will establish clear, or at least clearer, causal relationships (Kowert, 2017).
Still, we will obviously have an expanding area of AI applications which will be ontologically unpredictable and, further, "predictably unpredictable", meaning that unpredictability will constitute part of the efficient implementation of AI's role. In such a framework, AI may have no owner and no human in the loop of its decisions, or the owner may be totally unaware of potentially harmful consequences (Matthias, 2004; Wallach, 2011).
In spite of all potential concerns, we will need to accept a degree of higher risk, something that should again lead to the creation of a wide public sphere, funded by AI-generated wealth and compensating for damage caused by AI (Karnow, 1996). Such a scheme can provide legal certainty where AI ontology creates uncertainty.17
Conclusions
AI remains predominantly unregulated, in spite of its present and future impact on our societies. Even further, the first attempts to govern its evolution are market-oriented, regardless of its wider social implications. Social and economic issues, among several others, will continue to emerge and in fact will magnify, due both to the innovativeness and wealth that AI will produce and to its harmful impact.
AI could wreak havoc on our legal and governance systems if left unchecked and unregulated. On the contrary, a well-designed regulatory system and an efficient public sphere, despite not being able to "cure" all of the relevant issues, have the potential to enhance AI's socially beneficial impact and provide greater legal certainty in an uncertain environment.
The creation of a public sphere to which AI-generated inventions and creations will belong will need to be international. This will provide coordination on an inherently universal issue, as the combination of AI and cyberspace transcends national borders.18 A system of universal reward and of access to compensation schemes can further reduce risk and therefore foster a more techno-friendly environment.
Eventually, serious decisions must be made, the main one being whether or not we will accept a non-human-centered world. In moving in such a direction, it is wise to come up with transparent governance and legal norms regulating our interactions with AI.
References
Abbott, R. (2016a). I Think, Therefore I Invent: Creative Computers and the Future of Patent Law. Boston
College Law Review, 57(4), 1079-1126
17 There is also the issue of criminal liability, which is analyzed below in the framework of potential AI legal personality.
18 The right to self-determination in all its manifestations, equality, the right to work, trade unions' participation, social security and insurance, and an adequate standard of living could be promoted more efficiently if such technological developments produce wealth for the wider public, instead of for a small number of private companies, in oligopolistic terms and economic framework.
Abbott, R. (2016b). Hal the Inventor: Big Data and Its Use by Artificial Intelligence. In Sugimoto, C. R., Ekbia,
H. R. & Mattioli, M. (eds). Big Data Is Not A Monolith. MIT Press.
Abrams, D. S. (2009). Did TRIPS Spur Innovation? An Empirical Analysis of Patent Duration and Incentives
to Innovate. Faculty Scholarship at Penn Law, 274.
Apple Computer, Inc. v. Franklin Computer Corp., 714 F.2d 1240 (3d Cir. 1983), as quoted in de Cock Buning (1998), p. 183.
Bathae, Y. (2020). Artificial Intelligence Opinion Liability. Berkeley Technology Law Journal, 35, 113.
Bekey G, Lin P., & Abney, K. (2011). Ethical Implications of Intelligent Robots. In Krichmar, J. L. and
Wagatsuma, H. (eds). Neuromorphic and Brain-Based Robots. Cambridge University Press.
Bekey, G. A. (2005). Autonomous Robots: From Biological Inspiration to Implementation and Control. MIT
Press.
Bethards, M. (2004). Condemning a Patent: Taking Intellectual Property by Eminent Domain. AIPLA Quarterly
Journal, 32(1), 81.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Bridy, A. (2012). Coding Creativity: Copyright and the Artificially Intelligent Author. Stanford Technology Law
Review, 5, 1-28.
Burrow-Giles Lithographic Co. v. Sarony. (1884). 111 U.S. 53.
Calo, M. R. (2011). Open Robotics. Maryland Law Review, 70(3), 571.
Camett, J. B. & Heinz, B. (2006). John Koza Built an Invention Machine. Popular Science.
www.popsci.com/scitech/article/2006-04/john-koza-has-built-invention-machine [Accessed 16/09/2018].
Case C-145/10, Eva-Maria Painer/Standard Verlags [2011] at para. 94;
Case C-393/09, Bezpecnostní softwarová asociace [2010] ECR 2010 I-13971, para. 45;
Case C-393/09, Bezpecnostní softwarová asociace [2010], para. 49
Case C-403/08 and C-429/08, FA Premier League/Karen Murphy [2011] ECR 2011 I-09083, at para. 97;
Case C-5/08, Danske Dagblades Forening [2009] ECR I-06569, at para. 35;
Case C-604/10, Football Dataco/Yahoo [2012] ECLI:EU:C:2012:115, at para. 38
Castelvecchi, D. (2016). Can we open the black box of AI?. Nature, 538. http://www.nature.com/news/can-we-open-the-black-box-of-ai-1.2073
Chalmers, D. (1996). The Conscious Mind, In Search of a Final Theory. Oxford University Press.
Chalmers, D. (2008). The Hard Problem of Consciousness. In Velmans M. & Schneider, S. (eds). The Blackwell
Companion to Consciousness. Wiley-Blackwell.
Tzimas (2023)
https://doi.org/10.12681/jpentai.33299
15
Charniak, E, and McDermott, D. (1985). Introduction to Artificial Intelligence. Addison-Wesley
Clifford, R. D. (1997). Intellectual Property in the Era of the Creative Computer Program: Will the True Creator
Please Stand Up?. Tulane Law Review, 71.
Cole, D. (2020). The Chinese Room Argument. In Zalta, E. N. (ed.). The Stanford Encyclopedia of Philosophy
(Spring 2020 Edition). https://plato.stanford.edu/archives/spr2020/entries/chinese-room/
Commission. (1989). Proposal for a Council Directive on the legal protection of computer programs. COM (88) 816 final, Article 1. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:51988PC0816&from=GA
Davis, E. & Marcus, G. (2015). Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence. Communications of the ACM, 58(9), 92-103. http://cacm.acm.org/magazines/2015/9/191169-commonsense-reasoning-and-commonsense-knowledge-in-artificial-intelligence/fulltext#
de Bruin, R. (2016). Autonomous Intelligent Cars on the European Intersection of Liability and Privacy:
Regulatory Challenges and the Road Ahead. European Journal of Risk Regulation, 7, 485.
de Cock Buning, M. (1998). Copyright Law and Information Technology: on the Limited Shelf-Life of
Technology-Specific Regulations. University of Amsterdam.
de Cock Buning, M. (2016). Autonomous Intelligent Systems as Creative Agents under the EU Framework for
Intellectual Property. European Journal of Risk Regulation, 7(2), 310-322.
Decker, M. (2014). Responsible Innovation for Adaptive Robots. In Battaglia, F., Mukerji, N., and Nida-Rümelin, J. (eds). Rethinking Responsibility in Science and Technology. Pisa University Press, 65-86.
Erickson, K. (2016). Defining the public domain in economic terms-- approaches and consequences for policy.
Nordic Journal Of Applied Ethics, 10(1), 114. http://dx.doi.org/10.5324/eip.v10i1.1951
European Parliament. (2017). Report with recommendations to the Commission on Civil Law Rules on Robotics. http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+REPORT+A8-2017-0005+0+DOC+XML+V0//EN
Feist Publications, Inc., v. Rural Telephone Service Co., 499 U.S. 340 (1991)
Frye, B. L. (2018). The Lion, The Bat & The Thermostat: Metaphors on Consciousness. Savannah Law Review,
5, 13.
Gilbert, E. (2017). Artificial Intelligence: Teaching Machines to Learn Like Humans. INTEL.
https://iq.intel.com/artificial-intelligence-teaching-machines-to-learn-like-humans/ [Accessed 27/03/2018].
Goldstein, P. (2014). Goldstein on Copyright (3rd ed.).
Grimmelmann, J. (2016). There's No Such Thing as a Computer-Authored Work - and It's a Good Thing, Too. Columbia Journal of Law & the Arts, 39(3), 403.
Hallevy, G. (2010). The Criminal Liability of Artificial Intelligence Entities - from Science Fiction to Legal Social Control. Akron Intellectual Property Journal, 4(2), Article 1. https://ideaexchange.uakron.edu/akronintellectualproperty/vol4/iss2/1
Hallevy, G. (2018). Dangerous Robots - Artificial Intelligence vs. Human Intelligence. http://dx.doi.org/10.2139/ssrn.3121905
Hashiguchi, M. (2017). The Global Artificial Intelligence Revolution Challenges Patent Eligibility Laws.
Journal of Business & Technology Law, 13(1). https://digitalcommons.law.umaryland.edu/jbtl/vol13/iss1/2
Hattenbach, B. & Glucoft, J. (2015). Patents in an Era of Infinite Monkeys and Artificial Intelligence. Stanford Technology Law Review, 19, 32.
Heath, N. (2018). What is AI? Everything you need to know about Artificial Intelligence.
https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence/
[Accessed 01/02/2019].
Hern, A. (2014). What is the Turing test? And are we all doomed now?, The Guardian,
https://www.theguardian.com/technology/2014/jun/09/what-is-the-alan-turing-test [Accessed 18/06/2020].
Hristov, K. (2017). Artificial Intelligence and the Copyright Dilemma. IDEA, 57, 431.
Huszár, F. (2015). The Two Kinds of Uncertainty an AI Agent Has to Represent. inFERENCe. https://www.inference.vc/the-two-kinds-of-uncertainties-in-reinforcement-learning-2/ [Accessed 06/07/2019].
Hutter, M. (2010). Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability.
Springer.
Jaszi, P. (1992). On the Author Effect: Contemporary Copyright and Collective Creativity. Cardozo Arts &
Entertainment Law Journal, 10(2), 293-320.
Johnson-Laird, P. N. (1983). Mental Models. Towards a Cognitive Science of Language, Inference and
Consciousness. Cambridge University Press.
Cohen, J. E. (2006). Copyright, Commodification, and Culture: Locating the Public Domain. In Guibault, L. & Hugenholtz, P. B. (eds). The future of the public domain: identifying the commons in information law. Berkeley Law.
Karnow, C. E. A. (1996). Liability for Distributed Artificial Intelligences. Berkeley Technology Law Journal,
11, 147.
Karnow, C. E. A. (2016). The application of traditional tort theory to embodied machine intelligence. In Calo,
Froomkin, and Kerr (eds). Robot Law. Edward Elgar.
Keats, J. (2006). John Koza Has Built an Invention Machine. Popular Science. http://www.popsci.com/scitech/article/2006-04/john-koza-has-built-invention-machine
Khoury, A. (2016). Intellectual property rights for hubots: On the legal implications of human -like robots as
innovators and creators. Cardozo Arts & Ent. LJ, 35, 635.
Kim, D. (2018). Intellectual Property in The Fourth Industrial Revolution Era. Les Nouvelles, 53(1), 20.
Knight, W. (2017). The Dark Secret at the Heart of AI. MIT Technology Review. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
Kowert, W. (2017). The Foreseeability Of Human-Artificial Intelligence Interactions. Texas Law Review, 96,
181.
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., and Gershman, S. J. (2016). Building Machines That Learn and Think Like People. Center for Brains, Minds and Machines, MEMO NO. 046. http://www.mit.edu/~tomeru/papers/machines_that_think.pdf
Laton, D. (2017). Manhattan_Project.Exe: A Nuclear Option for the Digital Age. Catholic University Journal
of Law & Technology, 25, 94.
Browne, M. N. & Harrison-Spoerl, R. R. (2008). Putting Expert Testimony in Its Epistemological Place: What Predictions of Dangerousness in Court Can Teach Us. Marquette Law Review, 91, 1119.
MacCoun, R. (1993). Is There a "Deep-Pocket" Bias in the Tort System?. RAND Institute for Civil Justice.
Manderieux, L. (2010). Secured Transactions as a Tool for Better Use of Intellectual Property Rights and of
Intellectual Property Licensing (including Patent Licensing). Uniform Law Review, 15(2), 447-457.
Marchant, G. E. & Lindor, R. A. (2012). The Coming Collision Between Autonomous Vehicles and the Liability
System. Santa Clara Law Review, 52(4), 1321.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics
and Information Technology, 6, 175 -183.
McCarthy, J. (2008). The Well-Designed Child. Artificial Intelligence, 172(18), 2003-2014. https://doi.org/10.1016/j.artint.2008.10.001
McGinn, C. (1991). The Problem of Consciousness: Essays Towards a Resolution. Basil Blackwell.
Krauss, M. (1989). Property, Monopoly, and Intellectual Rights. Non-Posnerian Law and Economics Symposium; Hamline Law Review, 12(2), 305.
Midway Mfg. Co. v. Artic Intern., Inc., 704 F.2d 1009, 1011 (7th Cir. 1983)
Moravec, H. (1976). The role of raw power in intelligence. www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html [Accessed 14/07/2019].
Mordvintsev, A., Olah, C., and Tyka, M. (2015). Inceptionism: Going Deeper into Neural Networks. Google Research. https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
Müller, V. C. and Bostrom, N. (2016). Future Progress In Artificial Intelligence: A Survey of Expert Opinion.
In Müller, V. C. (ed.). Fundamental Issues of Artificial Intelligence. Springer International Publishing.
Nersessian, D. & Mancha, R. (2021). From Automation to Autonomy: Legal and Ethical Responsibility Gaps
in Artificial Intelligence Innovation. Michigan Technology Law Review, 27, 55.
Omohundro, S. M. (2008). The Basic AI Drives. In Wang, P., Goertzel, B and Franklin, S. (eds), Artificial
General Intelligence 2008: Proceedings of the First AGI Conference. IOS Press
Palace, V. M. (2019). What if Artificial Intelligence Wrote This? Artificial Intelligence and Copyright Law.
Florida Law Review, 71(1), 5. https://scholarship.law.ufl.edu/flr/vol71/iss1/5
Patry, W. F. (2016). Patry on Copyright, § 3:19.
Pearlman, R. (2018). Recognizing Artificial Intelligence (AI) As Authors And Inventors Under U.S. Intellectual
Property Law. Richmond Journal of Law and Technology, 24, 2.
Plotkin, R. (2009). The Genie in the Machine: How Computer-Automated Inventing Is Revolutionizing Law and
Business. Stanford Law Books
Poole, D. L. and Mackworth, A. K. (2010). Artificial Intelligence: Foundations of Computational Agents.
Cambridge University Press.
Prosser, W. L., Keeton, W. P., Dobbs, D. B., Keeton, R. E., and Owen, D. G. (1984). Prosser and Keeton on
Torts, 5th Edition.
Pub. L. No. 94-553, 90 Stat. 2541 (1976) (codified as amended at 17 U.S.C. §§ 101-810 (2012)).
Pyle, D. & San Jose, C. (2015). An executive's guide to machine learning. McKinsey Quarterly. https://www.mckinsey.com/industries/high-tech/our-insights/an-executives-guide-to-machine-learning [Accessed 03/02/2019].
Ralston, W. T. (2005). Copyright in Computer-Composed Music: Hal Meets Handel. Journal of the Copyright
Society of the U.S.A, 52(3), 281-284.
Rich, E. and Knight, K. (1991). Artificial Intelligence. McGraw-Hill
Righetti, L. (2016). Emerging technology and future autonomous weapons. In ICRC, Autonomous Weapon
Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons. International
Committee of the Red Cross.
Ritchie, G. (2007). Some Empirical Criteria for Attributing Creativity to a Computer Program. Minds and
Machines, 17, 67–99.
Russell, S. and Norvig, P. (2010). Artificial Intelligence: A Modern Approach (Third Edition). Prentice Hall.
Sachs, R. (2016). The Mind as Computer Metaphor: Benson and the Mistaken Application of Mental Steps to
Software (Part 3), Bilskiblog.
Salzberger, E. (2006). Economic Analysis of the Public Domain. In Guibault, L. & Hugenholtz, P. B. (eds). The
future of the public domain: identifying the commons in information law. Berkeley Law.
Samore, W. (2013). Artificial Intelligence and the Patent System: Can a New Tool Render a Once Patentable
Idea Obvious?. Journal of Science and Technology Law, 29, 113.
Samuelson, P. (1986). Allocating Ownership Rights in Computer-Generated Works. University of Pittsburgh
Law Review, 47, 1185-1228.
SAS Institute. (n.d.). Machine Learning: What it is and Why it Matters. http://www.sas.com/en_us/insights/analytics/machine-learning.html
Scherer, M. U. (2016). Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and
Strategies. Harvard Journal of Law & Technology, 29(2), 354.
Schuster, W. M. (2018). Artificial Intelligence and Patent Ownership. Washington and Lee Law Review, 75,
1945.
Seseri, R. (2018). The Problem with "Explainable" AI. TechCrunch. https://techcrunch.com/2018/06/14/the-problem-with-explainable-ai/
Shi, K. (2016). Beats by AI. IBM Research. https://www.ibm.com/blogs/research/2016/07/beats-by-ai
Solow-Niederman, Α. (2020). Administering Artificial Intelligence. Southern California Law Review, 93, 633.
Solum, L. B. (1992). Legal Personhood for Artificial Intelligences. North Carolina Law Review, 70(4), 1231.
Suchman, L. and Weber, J. (2016). Human-Machine Autonomies. In Bhuta, N., Beck, S., Geib, R., Yan Liu, H.
and Kreb, C. (eds). AUTONOMOUS WEAPON SYSTEMS: LAW, ETHICS, POLICY. Cambridge University
Press.
Tanz, J. (2016). Soon We Won't Program Computers. We'll Train Them Like Dogs. WIRED. https://
www.wired.com/2016/05/the-end-of-code/ [Accessed 27/03/2018].
The Guardian. (2020, September 8). A robot wrote this entire article. Are you scared yet, human?.
https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
Thomason, R. (2003). Logic and Artificial Intelligence. Stanford Encyclopedia of Philosophy. plato.stanford.edu/entries/logic-ai/ [Accessed 29/08/2018].
Tito, J. (2017). Destination unknown: Exploring the impact of Artificial Intelligence on Government. Centre for
Public Impact.
Tremblay, M. (2015). Should Robots Have Legal Rights?. Robotoshop. http://www.robotshop.com/blog/en/
should-robots-have-legal-rights-17333
Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–60.
U.S. Copyright Office. (1965). Sixty-eighth annual report of the register of copyrights. Washington, DC.
https://www.copyright.gov/reports/annual/archive/ar-1965.pdf
U.S. Copyright Office. (2017). Compendium of U.S. Copyright Office Practices § 313.2 (3d ed.). https://www.copyright.gov/comp3/docs/compendium.pdf [https://perma.cc/RY7T-G6KE]
U.S. National Commission On New Technological Uses Of Copyrighted Works. (1978). Final Report of the National Commission on New Technological Uses of Copyrighted Works. https://babel.hathitrust.org/cgi/pt?id=mdp.39015026832934
UK Government Office for Science. (2015). Artificial intelligence: opportunities and implications for the future of decision making. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/566075/gs-16-19-artificial-intelligence-ai-report.pdf
United Nations. (1975). The Role of Patents in the Transfer of Technology to Developing Countries. E. 75. II.
D. 6.
Urban, T. (2015). The AI Revolution: The Road to Superintelligence. waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html [Accessed 28/06/2018].
van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A.,
Kavukcuoglu, K. et al. (2016). WaveNet: A Generative Model for Raw Audio.
https://arxiv.org/abs/1609.03499
Vladeck, D. C. (2014). Machines Without Principals: Liability Rules and Artificial Intelligence. Washington
Law Review, 89, 117.
Wallach, W. (2011). From Robots to Techno Sapiens: Ethics, Law and Public Policy in the Developments of
Robotics and Neurotechnologies. Law, Innovation and Technology, 3(2), 185.
Fisher, W. (2001). Theories of Intellectual Property. In Munzer, S. R. and Postema, G. (eds). New Essays in the Legal and Political Theory of Property. Cambridge University Press.
World Intellectual Property Organization. (1979). Berne Convention for the Protection of Literary and Artistic Works, as amended in 1979. WIPO, TRT/BERNE/00.1.
World Intellectual Property Organization. (2014). WIPO Intellectual Property Handbook.
Yanisky-Ravid, S., & Liu, X. J. (2017). When artificial intelligence systems produce inventions: the 3A era and
an alternative model for patent law. Cardozo Law Review, 39(6), 2215-2263.