Preprint Not Peer Reviewed. Electronic copy available at: https://ssrn.com/abstract=3598028
International journal of basic and applied research
www.pragatipublication.com
ISSN 2249-3352 (P) 2278-0505 (E)
Cosmos Impact Factor-5.86

Robotics in Health Care: Who is Liable?

Dr. Vikrant Yadav
School of Law, Ajeenkya D. Y. Patil University, Pune
Abstract
Ever since the evolution of robotics technology, there has been a prediction that robots will replace human workers in the near future. In the 21st century, robots are expected to play a vital role in the healthcare industry. With the advent of artificial intelligence and its increasing recognition and use as an alternative to human beings in various sectors, several unaddressed legal concerns have arisen.
This research paper is an attempt to analyse the laws that may be applied to robots and/or the humans controlling them, the legal issues arising from the use of robotics in the healthcare industry, and their legal repercussions, such as liability in case of negligence on the part of robots involved in healthcare.
For Edinburgh University's professor of robotics Sethu Vijayakumar, we are at the beginning of a process that will transform how we live and work over the next two decades.
"All of this confluence of robotics, AI, social network systems and knowledge sharing is driving a
huge, new revolution," he says.
Eight hospitals using surgical robotics are in western India, including Jaslok Hospital, Kokilaben Dhirubhai Ambani Hospital, Sir HN Reliance Foundation Hospital, and Tata Memorial Hospital. Seven are in southern India, including Apollo Hospital in Chennai and Hyderabad, the Krishna
Institute of Medical Sciences (KIMS), the Amrita Institute of Medical Sciences, and Aster Medicity,
Kochi. One surgical robotics user, Apollo Gleneagles Hospital, is in Kolkata (John Edwards 2016).
The number of startups entering the healthcare AI space has increased in recent years, with over 50 companies raising their first equity rounds since January 2015 (CBINSIGHTS, 2017).
"By 2025, AI systems could be involved in everything from population health management, to digital avatars capable of answering specific patient queries." — Harpreet Singh Buttar, analyst at Frost & Sullivan (CBINSIGHTS, 2017).
As per CB Insights, healthcare has seen the greatest deal flow of all the industries in which AI is involved. With market leaders such as Microsoft, Google and IBM focusing on the industry, immense growth is predicted in the sector: Microsoft has analysed effective cancer treatment options as well as chronic disease treatment and drug development (Jonathan Cohn); Google's DeepMind is used by the United Kingdom's National Health Service to detect health risks and analyse medical images (Sarah Bloch-Budzier, 2016); and IBM's Watson is currently involved in oncology treatment (Jonathan Cohn, 2013). Moreover, Cambridge Consultants has developed 'Axsis', a system designed to perform cataract surgeries with greater accuracy than a human (Sally Adee).
In India, the Ministry of Industry and Commerce constituted an 18-member task force in August 2017, titled "Task Force on AI for India's Economic Transformation" and chaired by V. Kamakoti, a professor at IIT Madras, to explore possibilities of leveraging AI for development across various fields. The task force comprised experts, academics, researchers and industry leaders, with the active participation of governmental bodies and ministries such as NITI Aayog, the Ministry of Electronics and Information Technology, the Department of Science & Technology, UIDAI and DRDO. Along with nine other domains of relevance, healthcare is an important domain examined by the task force. The report has identified the following major challenges in deploying AI systems on a large scale in India:
(i) creating electronic health data repositories with sufficient high-quality annotated data
Legal Issues
Robotics, Healthcare and Privacy
Calo identifies three core privacy risks of robotic technology: surveillance, access, and social
bonding. Two of these three forms are connected to physical privacy. Surveillance, in the context
of robots, often involves a physical component as exemplified by drone spying in military
contexts. Similarly, home healthcare robots could be abused for surveillance purposes by the
government, corporations or private citizens. One can imagine a scenario where security services
install tracking software into such an ostensibly harmless robot to monitor the whereabouts and
habits of suspicious subjects (C. Lutz and Tamò).
Robots "do not just impact on the physical environment of their users or provide limited, task-specific information, but control the informational environment for humans more comprehensively" (Felzmann, H. et al. 2015).
A study by Syrdal and colleagues (2007) showed that most participants felt uneasy when put in a scenario where robots had the power to disclose personal information, such as psychological or personality characteristics, to other individuals without their permission. They were also worried about the possibility of data about themselves being collected by others' robots, similar to concerns about drones (Felzmann, H. et al. 2015).
AI and Legal Liability: Three Models of Gabriel Hallevy
Gabriel Hallevy has proposed three models of the criminal liability of artificial intelligence entities: the Perpetration-via-Another (operator or user) Liability Model; the Natural-Probable-Consequence Liability Model; and the Direct Liability Model (2010a, 2010b, 2010c, 2012).
Under the first model, the AI entity is treated as an innocent agent, and the programmer or user who employs it to commit an offense is held criminally liable as a perpetrator-via-another.
Under the second model, the programmer or user is held accountable for the crime committed by the AI entity if the offense is a natural and probable consequence of the AI's conduct. Even though the programmer or user was not aware that the offense was being committed until it had already been committed, did not plan to commit any offense and did not take part in its commission, if there is evidence that they could and should have foreseen the potential commission of offenses, then they might be prosecuted for the offense.
Unlike the first model, the second model does not require any criminal intention on the part of the programmer or user. It proposes liability of the programmer or user on the basis of a lack of due diligence resulting in a foreseeable crime (i.e., negligent application of an AI entity resulting in crime).
The third model, the Direct Liability Model, rests on the idea that criminal liability requires solely the fulfilment of two requirements: actus reus (the external element) and mens rea (the internal element); if AI entities were able to fulfil them both, then criminal accountability would follow.
Application of the third model would require recognition of AI/robots as separate legal persons. This process has started very recently, with Saudi Arabia conferring legal personality on Sophia (a humanoid robot). This model would play a vital role in the future if states across the globe follow in the footsteps of Saudi Arabia and recognise humanoid robots as legal persons.
Civil Liability
With respect to the liability of AI systems for negligent actions, most jurisdictions across the world have applied the principle of "strict product liability." The Indian judiciary too has recognised this principle. In law, rights and duties are attributed to legal persons, both natural (such as humans) and artificial (such as corporations). As of today, robots or machines have not been recognised as natural or legal persons in India; hence they cannot be held liable in their own capacity (United States v. Athlone Indus., Inc., 746 F.2d 977, 979 (3rd Cir. 1984)).
Criminal Liability
Harm to human health is a possibility in cases where robots are engaged in healthcare. As robots are not natural persons, liability for crimes by robots will be determined in the same manner as for crimes in or by legal persons/corporations, i.e. by piercing the veil and identifying the natural person in charge of, or benefitting from, the acts of the robot. India and a majority of states have not yet recognised robots as legal persons. Since robots have the capacity to act on their own, it is suggested that, for determining liability in cases of crimes by robots, commercial corporations be specially created for the use of software agents/autonomous robots (Giovanni 2002). Liability for the acts of software agents would then fall upon those commercial corporations (Wettig 2003; Wettig and Zehendner 2004).
Isaac Asimov (1942), in his fictional story 'Runaround', laid down three fundamental laws of robotics, to which he later added a fourth, the "Zeroth Law" (Isaac Asimov):
(i) A robot may not injure a human being or, through inaction, allow a human being to
come to harm;
(ii) A robot must obey the orders given to it by human beings, except where such orders
would conflict with the First Law;
(iii) A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.
(iv) A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These laws remain of contemporary relevance in today's era of Artificial Intelligence, for safety and the avoidance of robotic crimes.
Suggestions
AI/robots are programmed to pursue assigned tasks and goals autonomously and are thus incapable of responding to unforeseen situations, which can lead to unwanted or illicit results. Hence, the developers, operators and/or users of AI may be held liable for negligence, imprudence or lack of skill. In the case of healthcare, wherein robots are used by hospitals and/or
corporations, the natural persons gaining monetary benefit from such use of robots are to be held liable for punishment, and penalties/sanctions may be imposed on the legal persons (hospitals, corporations, etc.).
In states recognising AI robots as legal persons, the Direct Liability Model of Gabriel Hallevy should be implemented. However, acceptance of this model should not result in complete relinquishment of the liability of the programmer or user, as this may result in the use of AI robots as a shield for committing crimes. In such cases caution should be exercised, and the AI entity should be held liable only for acts which involve the will, volition or control of the AI entity.
Conclusion:
Though the role of robots in today's health care sector is limited, current research and developments in robotics indicate their likely increased use in the near future. With rapid technological advancements, robot-administered healthcare (from diagnosis to recovery/rehabilitation) will soon be a reality. Robots engaged in healthcare can cause severe bodily harm, sometimes resulting in the death of a patient, in certain situations such as program malfunction. Unfortunately, modern-day legislation, particularly in countries like India, is not clear on the point of liability in such cases. Hence, in criminal cases, the courts have to apply the same logic that is applied in cases of corporate fraud, and in civil cases, that which is applied under the doctrines of strict product liability and vicarious liability. Application of Gabriel Hallevy's models, coupled with the above suggestions, may provide a guide in drafting future legislation on the liability of AI entities.
References:
1. A Research Roadmap for Medical and Healthcare Robotics, Stanford University (2008), archived at www.perma.cc
2. Calo, R. (2012). Robots and Privacy. In P. Lin, G. Bekey, & K. Abney (Eds.), Robot Ethics: The Ethical and Social Implications of Robotics (1st ed., pp. 187-202). Cambridge, MA: MIT Press.
3. CB Insights, From Virtual Nurses to Drug Discovery: 106 Artificial Intelligence Startups in Healthcare, www.cbinsights.com (February 3, 2017)
4. Christoph Lutz and Aurelia Tamò, Privacy and Healthcare Robots: An ANT Analysis, www.pdfs.semanticscholar.org
5. Felzmann, H., Beyan, T., Ryan, M., & Beyan, O. (2015). Implementing an ethical approach to big data analytics in assistive robotics for elderly with dementia. Proceedings of the ACM ETHICOMP Conference (pp. 280-286), Leicester, 7-9 September 2015, at p. 3.
6. Hallevy, Gabriel: The Matrix of Derivative Criminal Liability, p. 38. Springer, Heidelberg
7. Isaac Asimov, I, Robot: 'Runaround,' Street and Smith Publications (1942), www.ttu.ee
8. John Edwards, Surgical Robotics in India Grows but Could Use Help, www.roboticsbusinessreview.com (July 06, 2016)
9. John Markoff, New Research Center Aims to Develop Second Generation of Surgical Robots, The New York Times, October 23, 2014, www.nytimes.com
10. Jonathan Cohn, The Robot Will See You Now, The Atlantic, available at www.theatlantic.com (March 2013)
11. Kenneth Macdonald, A robotic revolution in healthcare, www.bbc.com (20 March 2017)
12. Robotic Nurse Assistant project at Georgia Institute of Technology, www.hsi.gatech.edu
13. Sally Adee, Robot surgeon can slice eyes finely enough to remove cataracts, New Scientist, available at www.newscientist.com
14. Sarah Bloch-Budzier, NHS using Google technology to treat patients, available at www.bbc.com (November 22, 2016)
15. Sartor, Giovanni: Agents in Cyberlaw. Proceedings of the workshop on the law of electronic agents, LEA 2002 (2002)
16. Syrdal et al., Global Opposition to U.S. Surveillance and Drones, but Limited Harm to America's Image. PEW Research Center, June 2014. Retrieved from www.pewglobal.org
17. Task Force on AI for India's Economic Transformation, www.dipp.nic.in
18. Wettig, Steffen & Zehendner, Eberhard: A legal analysis of human and electronic agents. Artificial Intelligence and Law (2004)
19. Wettig, Steffen & Zehendner, Eberhard: The electronic agent: a legal personality under German law? Proceedings of the 2nd workshop on law and electronic agents, LEA 2003, pp. 57-112 (2003)