ROBOTIC MEDICINE IN THE EU:
DIGITAL ETHICS AND EU COMMON VALUES
Conference ‘New Technologies In Health:
Medical, Legal & Ethical Issues’ | Thessaloniki
21 November 2019 | 13:10-13:30h (20 min presentation)
Jean Monnet Chair | now: EU values & ethics
EU Values & DIGitalization for our CommuNITY (DIGNITY)
Structure
• Setting the agenda
• EU & ethics
• EU values (general & specific ones)
• Ethical principles & principlism
• Specific requirements
• Concluding theses
Broad understanding as a starting point
“Robotics for Medicine and Healthcare is considered the domain of systems able to perform coordinated
mechatronic actions (force or movement exertions) on the basis of processing of information acquired
through sensor technology, with the aim to support the functioning of impaired individuals, medical
interventions, care and rehabilitation of patients and also to support individuals in prevention
programmes” (p. 4)
Butter, M., Rensma, A., van Boxsel, J., Kalisingh, S., Schoone, M., Leis, M., . . . Korhonen, I. (2008). Robotics for Healthcare: Final Report.
“At the most basic level, ‘healthcare robotics’ (medical robotics) is simply the application of robotics
technology to healthcare to diagnose and treat disease, or to correct, restore or modify a body function
or a body part.”
Robotics Business Review (2009, April 13). Healthcare Robotics: Current Market Trends and Future Opportunities. Retrieved from
https://www.roboticsbusinessreview.com/health-medical/healthcare-robotics-current-market-trends-and-future-opportunities/.
“Robo-ethics is of course not an ethics developed for robots, as in Asimov’s famous principles”; it is “rather
an ethics designed for humans to interact with robots.” (pp. 26-27)
Hilgendorf, E. (2017). Modern technology and legal compliance. In E. Hilgendorf & M. Kaiafa-Gbandi (Eds.), Compliance measures: and their role in German and Greek law
(21-35). Athēna: P.N. Sakkulas.
Possible applications (partly overlapping)
• Medical diagnosis (Dolic et al., 2019, pp. 7, 15 [Chatila]; see also: Tasioulas, 2019, p. 50)
• Prevention & treatment of diseases (Dolic et al., 2019, pp. 7, 15 [Chatila])
• Robotic surgery: more accurate, less invasive and remote interventions, based on the availability and
assessment of vast amounts of data (Dolic et al., 2019, pp. 7, 15 [Chatila]; see also: Tasioulas, 2019, p. 50; COMEST, 2017, pp. 5, 30)
• Care and socially assistive robots: ageing population (COMEST, 2017 pp. 5, 31) affected by multimorbidities (Dolic et al., 2019, p. 7; see also COMEST, 2017 p. 5); exoskeletons, as well as companion robots
(COMEST, 2017 pp. 5, 31; see also Dolic et al., 2019, p. 15 [Chatila])
• Therapy (COMEST, 2017 p. 5) for children with autism (Tasioulas, 2019, p. 50; see also COMEST, 2017 pp. 5, 30)
• Rehabilitation systems: support for recovery of patients and long-term treatment at home,
instead of a healthcare facility (Dolic et al., 2019, p. 7; see also COMEST, 2017 p. 5)
• Training for health and care workers: support for continuous training and life-long learning
initiatives (Dolic et al., 2019, p. 7)
Sources:
• Dolic, Zrinjka, Castro, R., & Moarcas, A. (April 2019). Robots in healthcare: A solution or a problem? In-depth analysis requested by the ENVI committee.
• World Commission on the Ethics of Scientific Knowledge and Technology (2017, September 14). Report of COMEST on robotics ethics: SHS/YES/COMEST-10/17/2 REV.
• Tasioulas, J. (2019). First Steps Towards an Ethics of Robots and Artificial Intelligence. Journal of Practical Ethics, 7(1), 49–83 (50).
See also: European Parliament resolution of 12 February 2019 on ‘A comprehensive European industrial policy on artificial intelligence and robotics’ | (2018/2088(INI)),
recitals AF and AH.
Possible applications | continued
Source: Cresswell, K., Cunningham-Burley, S., & Sheikh, A. (2018). Health Care Robotics: Qualitative Exploration of Key Challenges and Future Directions. Journal of Medical
Internet Research, 20(7), e10410 | p. 3.
Relationship of EU law, ethics & values
[Diagram (Frischhut, 2019, p. 3, modified): the relationship between values, law & human rights, ethics, morality, and science, medicine, etc.]
Relative, not an absolute import:
• Law & science: Wahlberg & Persson, 2017; Wahlberg, 2017, p. 63; Wahlberg, 2010, pp. 208, 213
• Law (Austria) & morality: OGH, 2012, pt. 4.6.1 (N.B. Austrian Supreme Court in civil and criminal law issues)
• EU law & ethics: Frischhut, 2019, p. 123
Ethics | normative theories | less about morality
[Diagram (Frischhut, 2019, p. 9): alongside theoretical philosophy, legal philosophy and political philosophy, practical philosophy comprises ethics; ethics covers metaethics, normative ethics (normative theories: deontology, consequentialism, virtue ethics) and applied ethics (e.g. robotic ethics), and is distinguished from morality]
“In its most familiar sense, the word morality […] refers to norms about right and wrong human
conduct that are so widely shared that they form a stable social compact. As a social institution,
morality encompasses many standards of conduct, including moral principles, rules, ideals, rights, and
virtues. We learn about morality as we grow up […]”
Source: Beauchamp & Childress, 2013, pp. 2-3
Values | EU general ones | Art. 2 TEU
Article 2
The Union is founded on the values of respect for human dignity, freedom, democracy,
equality, the rule of law and respect for human rights, including the rights of persons
belonging to minorities.
These values are common to the Member States in a society in which pluralism, non-discrimination, tolerance, justice, solidarity and equality between women and men prevail.
Picture source: The Economist, December 9th – 15th 2006 (Link)
Values | EU specific fields | overview (excerpt)
field | year | legal status | application or distinct values
• health | 2006 | soft-law (conclusions of health ministers) | (mainly) distinct values (and corresponding principles)
• non-financial reporting | 2014 | binding (amendment to EU directive) | (mainly) application
• sports | 2017 & 2018 | soft law: EP resolution (2017), Council conclusions (2018) | promotion of EU values, plus distinct values; (mainly) application
• digitalization | 2018 | soft-law (advisory opinion) | (mostly) distinct values
Source: Frischhut, 2019, p. 35
Values | EU specific fields | digitalization
Source: Ethics Advisory Group. (2018). Towards a digital ethics: Report by the Ethics Advisory Group established by the European Data Protection Supervisor, the EU’s
independent data protection authority. Retrieved from https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf
Values | also emphasized by other documents
˃ 16 February 2017 | European Parliament resolution on Civil Law Rules on Robotics | (2015/2103(INL)) | pt. 13
˃ World Commission on the Ethics of Scientific Knowledge and Technology (2017, September 14). Report of
COMEST [UNESCO] on robotics ethics: SHS/YES/COMEST-10/17/2 REV. | pp. 8, 48-55, et passim
˃ 9 March 2018 | European Group on Ethics in Science and New Technologies (EGE) “Statement on Artificial Intelligence,
Robotics and ‘Autonomous’ Systems” (Link) | (a) Human dignity: “inherent human state of being worthy of respect”; (b) Autonomy:
“refers to the capacity of human persons to legislate for themselves, to formulate, think and choose norms, rules and laws for themselves to
follow”; (c) Responsibility: “serve the global social and environmental good”; “risk awareness and a precautionary approach are crucial” (pp.
16-17); (d) Justice, equity, and solidarity: “Discriminatory biases in data sets used to train and run AI systems should be prevented or
detected, reported and neutralised at the earliest stage possible” (p. 17); (e) Democracy; (f) Rule of law and accountability: “fair and clear
allocation of responsibilities” (p. 18); (g) Security, safety, bodily and mental integrity; (h) Data protection and privacy; (i) Sustainability
˃ 12 February 2019 | European Parliament resolution ‘A comprehensive European industrial policy on artificial intelligence
and robotics’ | (2018/2088(INI)) | pt. 147
˃ Renda, A. (2019). Artificial Intelligence: Ethics, governance and policy challenges. Report of a
CEPS Task Force. Brussels| pp. 114, 116
˃ Dolic, Zrinjka, Castro, R., & Moarcas, A. (April 2019). Robots in healthcare: A solution or a problem?
In-depth analysis requested by the ENVI committee. | “These values include transparency, accountability,
explicability, auditability and traceability, and neutrality or fairness” (Dolic et al., 2019, p. 16 [Chatila])
˃ 22 May 2019 | OECD Principles on Artificial Intelligence | Link (N.B. similar approach as in the EU, i.e. values, human rights / human-centric, transparency, trust)
From values to ‘ethical principles’
Source: Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines.
Nature Machine Intelligence, 1(9), 389–399.
Ethical principles I
˃ European Economic and Social Committee, Opinion on ‘Artificial intelligence — The consequences of
artificial intelligence on the (digital) single market, production, consumption, employment and society’,
OJ 2017 C 288/1.
Asking for a “code of ethics for the development, application and use of AI so that throughout their entire operational process
AI systems remain compatible with the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity, as well as
with fundamental human rights” (pt. 1.7)
˃ World Commission on the Ethics of Scientific Knowledge and Technology (2017, September 14).
Report of COMEST on robotics ethics: SHS/YES/COMEST-10/17/2 REV.
Asking for a framework of ethical values and principles, sometimes with confusing terminology: (i) human dignity;
(ii) value of autonomy; (iii) value of privacy; (iv) ‘do not harm’ principle; (v) principle of responsibility; (vi) value of
beneficence; and (vii) value of justice. (p. 8; pp. 48-55)
˃ Renda, A. (2019). Artificial Intelligence: Ethics, governance and policy challenges. Report of a
CEPS Task Force. Brussels | pro hierarchy of principles (p. 116, pt. 12); some principles need to be further
clarified, such as “non-maleficence” (p. 116, pt. 12), fairness (p. 116, pt. 13), acceptable discrimination (p. 117,
pt. 16); enhanced requirements in sensitive fields, such as healthcare (p. 118, pt. 20; see also p. 119, pt. 23)
˃ Dolic, Zrinjka, Castro, R., & Moarcas, A. (April 2019). Robots in healthcare: A solution or a problem? In-depth analysis
requested by the ENVI committee. | “specific principles needed for guiding the use of AI and robotised systems”
(Dolic et al., 2019, p. 16 [Chatila])
Ethical principles II
˃ 16 February 2017 | European Parliament resolution of 16 February 2017 with recommendations to the Commission on
Civil Law Rules on Robotics (2015/2103(INL)) | “human safety, health and security; freedom, privacy, integrity and dignity; self-determination and non-discrimination, and personal data protection” (pt. 10); transparency (pt. 12)
˃ 8 April 2019 | AI HLEG publishes “Ethics guidelines for trustworthy AI”, including “Trustworthy AI Assessment List” (pp. 26-31) | Link | (i) Respect for human autonomy (p. 12); (ii) prevention of harm (p. 12); (iii) fairness (p. 13); (iv) explicability (p. 13)
˃ 8 April 2019 | EC communication ‘Building Trust in Human-Centric Artificial Intelligence’ COM(2019) 168 final | Seven key
requirements for trustworthy AI applications: human agency and oversight; technical robustness and safety; privacy and data governance;
transparency; diversity, non-discrimination and fairness; societal and environmental well-being; accountability (p. 4)
Principlism 4+1 | Floridi et al. 2018
• Beneficence (pt. 4.1): equivalent terms are ‘well-being’, ‘common good’, ‘humanity’, ‘human dignity’
• Non-maleficence (pt. 4.2): privacy, security and ‘capability caution’ | prevent intended and unintended harm
• Autonomy (pt. 4.3): “humans should always retain the power to decide which decisions to take”
• Justice (pt. 4.4): distributive justice; risk of bias in datasets
• Explicability (pt. 4.5): equivalent terms are ‘transparency’, ‘accountability’, etc.
Source: Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., . . . Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707.
Principlism
˃ 16 February 2017 | European Parliament resolution of 16 February 2017 with recommendations to the Commission on
Civil Law Rules on Robotics (2015/2103(INL)) | pt. 13, Annex: code of ethical conduct for robotics engineers
• “Beneficence – robots should act in the best interests of humans;
• Non-maleficence – the doctrine of ‘first, do no harm’, whereby robots should not harm a human;
• Autonomy – the capacity to make an informed, un-coerced decision about the terms of interaction with robots;
• Justice – fair distribution of the benefits associated with robotics and affordability of homecare and healthcare robots in
particular.” (Annex: code of ethical conduct for robotics engineers)
˃ 12 February 2019 | European Parliament resolution ‘A comprehensive European industrial policy on artificial intelligence
and robotics’ | (2018/2088(INI)) | pt. 147
˃ Dolic, Zrinjka, Castro, R., & Moarcas, A. (April 2019). Robots in healthcare: A solution or a problem? In-depth
analysis requested by the ENVI committee. | Principlism as a good starting point against the background of the
importance of ethics and values, but insufficient (Dolic et al., 2019, p. 15 [Chatila])
˃ Critical: Mittelstadt, B. (2019). AI Ethics – Too Principled to Fail? Retrieved from
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293 | Key arguments: no equivalent fiduciary
relationship for AI; lack of professional history and well-defined norms of ‘good’ behaviour; no empirically proven
methods to translate principles into practice; lack of legal and professional accountability mechanisms.
Source: 12 February 2019 | European Parliament resolution ‘A comprehensive European industrial policy on artificial intelligence and robotics’ | (2018/2088(INI))
Specific requirements | human centric | I
Humans first
• Human centric: respect for human autonomy -> “human-centric design principles and leave meaningful opportunity for human choice” (AI-HLEG, 2019, p. 12)
• Support, not replace: “Because of the reservations about AI, there is broad agreement among physicians and medical ethicists that algorithms should support, but not replace, the physician.” (Katzenmeier, 2019, p. 269; translation)
• Added value: “AI should relieve the physician of routine work and help him with the initial assessment. Ideally, the physician would expand his knowledge from the analogue world to the digital world and thus make a better diagnosis.” (Katzenmeier, 2019, p. 269; translation)
• Humans in command:
– “the principle of the supervised autonomy of robots, whereby the initial planning of treatment and the final decision regarding its execution will always remain with a human surgeon” (EP, 16.2.2017, pt. 33)
– In favour of a “human-in-command approach to AI” (EESC on AI, 2017, pt. 1.6)
– “Only humans make the final decision and take responsibility for it.” (EESC on digital rev., 2019, pt. 1.2)
– Responsibility: ‘man out of the loop’, e.g. in case of medical diagnoses based on large amounts of information; EESC in favour of a “human-in-command principle” (EESC on digital rev., 2019, pt. 4.3)
• Human dignity:
– Human dignity: robotics only “for care tasks requiring no emotional, intimate or personal involvement” (EESC on digital rev., 2019, pt. 4.5)
– “one of the most pressing questions to be addressed is how health and care would be transformed, and whether these technologies could lead to possible repercussions for human dignity”; human factor in care (Dolic et al., 2019, p. 7)
Source: European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).
Source: Dolic, Zrinjka, Castro, R., & Moarcas, A. (April 2019). Robots in healthcare: A solution or a problem? In-depth analysis requested by the ENVI committee.
Source: Opinion of the European Economic and Social Committee on ‘The digital revolution in view of citizens’ needs and rights’, OJ 2019 C 190/17.
Source: Katzenmeier, C. (2019). Big Data, E-Health, M-Health, KI und Robotik in der Medizin. Digitalisierung des Gesundheitswesens. Herausforderung des Rechts. Medizinrecht
(MedR), 37(4), 259–271.
Specific requirements | human centric | II
Robots lack empathy etc.
• No empathy:
– “Only the physician can include the social, psychological and personal framework conditions in the treatment, only he is capable of the empathy so important in dealing with the patient.” (Katzenmeier, 2019, p. 269; translation)
– “whereas there are strong ethical, psychological and legal concerns about the autonomy of robots, their obvious lack of human empathy and their impact on the doctor-patient relationship, which have not yet been properly addressed at EU level, in particular as regards the protection of patients’ personal data, liability, and the new economic and employment relationships that will be brought about; whereas ‘autonomy’ as such can only be fully attributed to human beings; whereas there is a need for a robust legal and ethical framework for artificial intelligence” (EP, 12.2.2019, recital AJ)
– “The use of robotics in the healthcare sector is anticipated. But robots are devices that are unable to replicate the empathic capacities and reciprocity of human care relationships. If not used under certain framework conditions, robots can undermine human dignity. Care robots, therefore, should only be used for care tasks requiring no emotional, intimate or personal involvement.” (EESC on digital rev., 2019, pt. 1.11)
– “I don’t know that deep learning or robots will ever be capable of reproducing the essence of human-to-human support.” (Topol, 2019, p. 164)
• Human contact:
– “Stresses that human contact is a crucial aspect of human care” (EP, 12.2.2019, pt. 70)
– Care robots: “human contact is one of the fundamental aspects of human care” (EP, 16.2.2017, pts. 31-32 [32])
Source: European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).
Source: 12 February 2019 | European Parliament resolution ‘A comprehensive European industrial policy on artificial intelligence and robotics’ | (2018/2088(INI))
Source: Opinion of the European Economic and Social Committee on ‘The digital revolution in view of citizens’ needs and rights’, OJ 2019 C 190/17.
Source: Katzenmeier, C. (2019). Big Data, E-Health, M-Health, KI und Robotik in der Medizin. Digitalisierung des Gesundheitswesens. Herausforderung des Rechts. Medizinrecht
(MedR), 37(4), 259–271.
Specific requirements | human centric | III
Also following a ‘humans first’ approach
• 8 April 2019 | EC launches pilot phase; inviting stakeholders to test the detailed assessment list | COM(2019) 168 final | Human agency and oversight
• 12 February 2019 | European Parliament resolution ‘A comprehensive European industrial policy on artificial intelligence and robotics’ | (2018/2088(INI)) | “Stresses that ethical rules must be in place to ensure human-centric AI development, the accountability and transparency of algorithmic decision-making systems, clear liability rules and fairness” (pt. 143)
Vulnerable groups
• Special attention to be paid to the possible development of an emotional connection between humans and robots ‒ particularly in vulnerable groups (children, the elderly and people with disabilities) (EP, 16.2.2017, pt. 3)
• Value of autonomy: individual and different relationships; especially for vulnerable groups; environmental impacts (COMEST, 2017, p. 50)
• Special attention to be paid to vulnerable persons (AI-HLEG, 2019, pp. 11, 12, 13, 20, et passim)
• Security, safety, bodily and mental integrity: special emphasis on vulnerable people (EGE, 2018, p. 19)
Global ethical and value-based benchmark
• 7 December 2018 | EC coordinated plan with Member States to foster development & use of AI in Europe | COM(2018) 795 final, IP/18/6689: “Europe can become a global leader in developing and using AI for good and promoting a human-centric approach and ethics-by-design principles.” (pt. 2.6)
• 12 February 2019 | European Parliament resolution ‘A comprehensive European industrial policy on artificial intelligence and robotics’ | (2018/2088(INI)) | Recommendation “that Europe should take the lead on the global stage” (pt. 142)
Specific requirements | trust
Trust
• Importance of enforcement: validation and certification requirements, to foster trust (Dolic et al., 2019, p. 16 [Chatila])
• “Trust builds on shared assumptions about material and immaterial values, about what is important and what is expendable. It stems from shared social practice, shared habits, ways of life, common norms, convictions and attitudes. Trust is based on shared experiences, on a shared past, shared traditions and shared memories.” (EAG, 2018, p. 21)
• Emphasizing the necessity to “gain the trust and acceptance of patients and healthcare providers” (Dolic et al., 2019, p. 7)
• “human relationships and by extension human–robot relationships need to be based on some level of trust” (Lichocki, Kahn, & Billard, 2011, p. 46)
• “But to me, those are the secondary gains of deep medicine [i.e. preventing wasteful use of medical resources]. It’s our chance, perhaps the ultimate one, to bring back real medicine: Presence. Empathy. Trust. Caring. Being Human.” (Topol, 2019, p. 309)
Also emphasizing trustworthy AI:
• 7 December 2018 | EC coordinated plan with Member States to foster development & use of AI in Europe | COM(2018) 795 final, IP/18/6689
• 12 February 2019 | European Parliament resolution ‘A comprehensive European industrial policy on artificial intelligence and robotics’ | (2018/2088(INI)) | “trust” occurs 17 times
• 8 April 2019 | AI HLEG publishes “Ethics guidelines for trustworthy AI”, including “Trustworthy AI Assessment List” (pp. 26-31) | Link
• 8 April 2019 | EC communication ‘Building Trust in Human-Centric Artificial Intelligence’ COM(2019) 168 final
Source: Ethics Advisory Group. (2018). Towards a digital ethics: Report by the Ethics Advisory Group established by the European Data Protection Supervisor, the EU’s independent
data protection authority. Retrieved from https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf
Source: Dolic, Zrinjka, Castro, R., & Moarcas, A. (April 2019). Robots in healthcare: A solution or a problem? In-depth analysis requested by the ENVI committee.
Specific requirements | reversibility, traceability …
Reversibility
• “Devices should be removable without causing lasting harm or the loss of initial functions of the human body.” (Grinbaum et al., 2017, p. 7)
Traceability
• “possibility to track the causes of all past actions (and omissions) of a robot” (COMEST, 2017, p. 6; see also p. 36)
• 8 April 2019 | EC launches pilot phase; inviting stakeholders to test the detailed assessment list | COM(2019) 168 final | transparency: traceability of AI systems should be ensured
Other specific recommendations
• Further development of codes of ethics, in a multidisciplinary way; also to be implemented in education (COMEST, 2017, p. 8)
• Ethics to be integrated in the design process of robotic technologies (COMEST, 2017, p. 8)
• “new robotic technologies be introduced carefully and transparently in small-scale, well-monitored settings, and the implications of these technologies on human practices, experiences, interpretational frameworks, and values be studied openly” (COMEST, 2017, p. 8)
• Necessity of public discussions (COMEST, 2017, pp. 8-9)
• “when AI is being used in implanted medical devices, the bearer should have the right to inspect and modify the source code used in the device” (EP, 12.2.2019, pt. 75)
• “regulatory practices should establish procedures that limit the use of machine learning systems to specific tasks for which their accuracy and reliability have been empirically validated.” (London, 2019, p. 20)
• Necessity to avoid the ‘rubbish in, rubbish out’ phenomenon, especially in the health sector, in order to avoid discrimination based on gender, ethnic background, income, etc. (see also Nordling, 2019, S104); a minimal illustrative check is sketched after the sources below
Source: European Parliament resolution of 12 February 2019 on ‘A comprehensive European industrial policy on artificial intelligence and robotics’ | (2018/2088(INI)).
Source: World Commission on the Ethics of Scientific Knowledge and Technology (2017, September 14). Report of COMEST on robotics ethics: SHS/YES/COMEST-10/17/2 REV.
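To make the ‘detection’ side of the ‘rubbish in, rubbish out’ recommendation above more tangible, the following minimal sketch (not drawn from any of the documents cited on this slide) shows one way a simple per-group comparison of label rates in a training dataset could flag a potential imbalance before a model is trained. The field names ("group", "received_treatment") and the 0.8 disparity threshold are illustrative assumptions only, not an established standard.

```python
# Illustrative sketch only: a minimal group-disparity check on training data.
# The field names and the 0.8 threshold are assumptions for demonstration,
# not a method taken from the documents cited above.
from collections import defaultdict

def positive_rates(records, group_key="group", label_key="received_treatment"):
    """Return the share of positive labels per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += 1 if row[label_key] else 0
    return {group: positives[group] / totals[group] for group in totals}

def flag_disparities(records, threshold=0.8):
    """List group pairs whose positive-label ratio falls below the threshold."""
    rates = positive_rates(records)
    flags = []
    for group_a, rate_a in rates.items():
        for group_b, rate_b in rates.items():
            if rate_b > 0 and rate_a / rate_b < threshold:
                flags.append((group_a, group_b, round(rate_a / rate_b, 2)))
    return rates, flags

if __name__ == "__main__":
    sample = [  # hypothetical toy records, not real patient data
        {"group": "A", "received_treatment": True},
        {"group": "A", "received_treatment": True},
        {"group": "A", "received_treatment": False},
        {"group": "B", "received_treatment": True},
        {"group": "B", "received_treatment": False},
        {"group": "B", "received_treatment": False},
    ]
    rates, flags = flag_disparities(sample)
    print("positive rates per group:", rates)
    print("flagged disparities (group, reference, ratio):", flags)
```

A check of this kind covers only the detection step; the prevention, reporting and neutralisation called for in the documents above remain governance tasks rather than purely technical ones.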
The future of work
Source: Topol, E. J. (2019). Deep medicine: How artificial intelligence can make healthcare human again (First edition). New York: Basic Books. p. 162.
See also: European Group on Ethics in Science and New Technologies. (2018). Future of Work, Future of Society: Opinion No 30. Brussels: Publications Office of the EU. Retrieved
from https://ec.europa.eu/info/publications/future-work-future-society_en; Prainsack, B., & Buyx, A. (2018). The value of work: Addressing the future of work through the lens of
solidarity. Bioethics, 32(9), 585–592. https://doi.org/10.1111/bioe.12507
Concluding theses
Status quo
• In the EU, ethics (see also Pirs & Frischhut, 2019, forthcoming) has to be seen against the background of the common values, especially human dignity (a ‘last resort’ for threat situations which are not even foreseeable today; Borowsky, 2019, p. 125), and the fundamental rights (Frischhut, 2019).
• Just as the values of the EU lead to mutual trust among the Member States, values and trust are, in relation to citizens, also important in the area of AI etc. (EAG, 2018, p. 20; COM[2019] 168 final; etc.). Hence, an ethical approach is important, both to gain and to maintain trust (Cresswell, Cunningham-Burley, & Sheikh, 2018, p. 8).
• Robot ethics has to be seen as an evolutionary (“values we use to evaluate technology […] change over time”; COMEST, 2017, p. 47), not as a revolutionary process (Dodig Crnkovic & Çürüklü, 2012, pp. 61, 69; cf. also Renda, 2019, p. 120), where we should also consider that “[t]echnologies can change human values and normative orientations” (COMEST, 2017, p. 47).
Future ‘direction of travel’
• Further research should focus on the relationship between the general and the specific values, and on the relationship between the specific ones themselves, in particular how they can all benefit from each other’s determination of content in the respective relationships.
• This approach should also embrace a possibly fruitful relationship between values and corresponding legal principles (cf. the 2006 EU health values).
• Hence, in addition to the general and specific values, principles (transparency, integrity, etc.) (see also Palazzani, 2019, pp. 141-142) and the ‘four principles model’ (Beauchamp & Childress, 2013; Floridi et al., 2018) should also play a role (critical: Mittelstadt, 2019; on possible challenges, see also Floridi, 2019). This goes in a similar direction as the Responsible Research and Innovation (RRI) approach (Schomberg, 2013; EC, 2013; Owen & Pansera, 2019).
• The status quo, which can be identified in different fields (health, digitalization, etc.), will eventually need to be balanced with other values and further developed, as there are still some steps to go (Frischhut, 2019, p. 145; cf. also the 1950 Schuman declaration).
• As argued by others, “robotics in healthcare should be held to higher standards than other more general applications” (Renda, 2019, p. 118; Dolic et al., 2019, p. 18 [Renda]).
Literature mentioned on slides | references
• Beauchamp, T. L., & Childress, J. F. (2013). Principles of biomedical ethics (7th ed.). New York: Oxford University Press.
• Borowsky, M. (2019). Artikel 1 Würde des Menschen. In J. Meyer & S. Hölscheidt (Eds.), Charta der Grundrechte der Europäischen Union (5th ed., pp. 101–147). Baden-Baden, Bern: Nomos; Stämpfli; Facultas.
• Butter, M., Rensma, A., van Boxsel, J., Kalisingh, S., Schoone, M., Leis, M., . . . Korhonen, I. (2008). Robotics for Healthcare: Final Report.
• Cresswell, K., Cunningham-Burley, S., & Sheikh, A. (2018). Health Care Robotics: Qualitative Exploration of Key Challenges and Future Directions. Journal of Medical Internet Research, 20(7), e10410.
• Dodig Crnkovic, G., & Çürüklü, B. (2012). Robots: ethical by design. Ethics and Information Technology, 14(1), 61–71.
• Dolic, Zrinjka, Castro, R., & Moarcas, A. (April 2019). Robots in healthcare: A solution or a problem? In-depth analysis requested by the ENVI committee.
• Ethics Advisory Group [EAG]. (2018). Towards a digital ethics: Report by the Ethics Advisory Group established by the European Data Protection Supervisor, the EU’s independent data protection authority. Retrieved from https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf.
• European Commission (2013). Options for Strengthening Responsible Research and Innovation: Report of the Expert Group on the State of Art in Europe on Responsible Research and Innovation.
• European Group on Ethics in Science and New Technologies. (2018). Future of Work, Future of Society: Opinion No 30. Brussels: Publications Office of the EU. Retrieved from https://ec.europa.eu/info/publications/future-work-future-society_en.
• Floridi, L. (2019). Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x.
• Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., . . . Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5.
• Frischhut, M. (2019). The Ethical Spirit of EU Law. Cham: Springer International Publishing. Open access: https://jeanmonnet.mci.edu.
• Grinbaum, A., Chatila, R., Devillers, L., Ganascia, J.-G., Tessier, C., & Dauchet, M. (2017). Ethics in Robotics Research: CERNA Mission and Context. IEEE Robotics & Automation Magazine, 24(3), 139–145. https://doi.org/10.1109/MRA.2016.2611586.
• Hilgendorf, E. (2017). Modern technology and legal compliance. In E. Hilgendorf & M. Kaiafa-Gbandi (Eds.), Compliance measures: and their role in German and Greek law (pp. 21–35). Athēna: P.N. Sakkulas.
• Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.
Literature mentioned on slides | references
• Katzenmeier, C. (2019). Big Data, E-Health, M-Health, KI und Robotik in der Medizin. Digitalisierung des Gesundheitswesens. Herausforderung des Rechts. Medizinrecht (MedR), 37(4), 259–271.
• London, A. J. (2019). Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. The Hastings Center Report, 49(1), 15–21. https://doi.org/10.1002/hast.973.
• Mittelstadt, B. (2019). AI Ethics – Too Principled to Fail? Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3391293.
• Nordling, L. (2019). A fairer way forward for AI in health care. Nature, 573(7775), S103–S105.
• Owen, R., & Pansera, M. (2019). Responsible Innovation and Responsible Research and Innovation. In D. Simon, S. Kuhlmann, J. Stamm, & C. Weert (Eds.), Handbook on Science and Public Policy (pp. 26–48). Cheltenham, UK, Northampton, MA: Edward Elgar Publishing.
• Palazzani, L. (2019). Innovation in Scientific Research and Emerging Technologies: A Challenge to Ethics and Law. Cham: Springer Nature.
• Pirs, M., & Frischhut, M. (2019, forthcoming). Ethical Integration in EU Law: The prevailing normative theories in EGE opinions. In N. Ryder & L. Pasculli (Eds.), Corruption, Integrity and the Law: Global Regulatory Challenges. Milton Park, Abingdon, Oxon, New York, NY: Routledge.
• Prainsack, B., & Buyx, A. (2018). The value of work: Addressing the future of work through the lens of solidarity. Bioethics, 32(9), 585–592. https://doi.org/10.1111/bioe.12507.
• Renda, A. (2019). Artificial Intelligence: Ethics, governance and policy challenges. Report of a CEPS Task Force. Brussels.
• Robotics Business Review (2009, April 13). Healthcare Robotics: Current Market Trends and Future Opportunities. Retrieved from https://www.roboticsbusinessreview.com/health-medical/healthcare-robotics-current-market-trends-and-future-opportunities/.
• Schomberg, R. v. (2013). A Vision of Responsible Research and Innovation. In R. Owen, J. R. Bessant, & M. Heintz (Eds.), Responsible innovation (Vol. 3, pp. 51–74). Chichester, West Sussex, United Kingdom: Wiley.
• Tasioulas, J. (2019). First Steps Towards an Ethics of Robots and Artificial Intelligence. Journal of Practical Ethics, 7(1), 49–83.
• Topol, E. J. (2019). Deep medicine: How artificial intelligence can make healthcare human again (First edition). New York: Basic Books.
• Wahlberg, L., & Persson, J. (2017). Importing Notions in Health Law: Science and Proven Experience. European Journal of Health Law, 1–26.
• Wahlberg, L. (2010). Legal Questions and Scientific Answers: Ontological Differences and Epistemic Gaps in the Assessment of Causal Relations. Lund: Media-Tryck.
• Wahlberg, L. (2017). Legal Ontology, Scientific Expertise and The Factual World. Journal of Social Ontology, 3(1), 49–65.
• World Commission on the Ethics of Scientific Knowledge and Technology (2017, September 14). Report of COMEST on robotics ethics: SHS/YES/COMEST-10/17/2 REV.
https://jeanmonnet.mci.edu
https://twitter.com/mafrischhut
https://www.instagram.com/markus_frischhut
Thank you for your attention!
MCI MANAGEMENT CENTER INNSBRUCK
THE ENTREPRENEURIAL SCHOOL®
Dr. Markus Frischhut, LL.M.
Jean Monnet Chair “EU Values & Digitalization”
Professor & Study Coordinator European Union Law
Department International Business & Law
Universitaetsstrasse 15, 6020 Innsbruck, Austria
Phone: +43 512 2070 -3632, Fax: -3699
mailto:markus.frischhut@mci.edu, www.mci.edu
Other approaches | ethics in digitalization (excerpt)
˃ January 2017 | Asilomar AI Principles: https://futureoflife.org/ai-principles/
• Research issues (No 1-5) | Ethics and values (No 6-18) | Long-term issues (No 19-23)
˃ November 2017 | “The Montreal Declaration for a Responsible Development of Artificial Intelligence: a participatory process”: https://www.montrealdeclaration-responsibleai.com/the-declaration
• Principles: well-being | autonomy | privacy | solidarity | democratic participation | equity | diversity | prudence | responsibility | sustainability
• N.B. No hierarchy | harmonious interpretation | dialogue | ethical principles can be translated into law
˃ December 2017 | Institute of Electrical and Electronics Engineers (IEEE): “Ethically aligned design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems”: https://ethicsinaction.ieee.org/
• General principles: 1. Human Rights | 2. Well-being | 3. Data Agency | 4. Effectiveness | 5. Transparency | 6. Accountability | 7. Awareness of Misuse | 8. Competence
˃ April 2018 | House of Lords Select Committee on Artificial Intelligence: “AI in the UK: ready, willing and able”: Link
• 5 overarching principles for an AI Code (para. 417):
• (1) Artificial intelligence should be developed for the common good and benefit of humanity.
• (2) Artificial intelligence should operate on principles of intelligibility and fairness.
• (3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
• (4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
• (5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
˃ October 2019 | Datenethikkommission. Gutachten der Datenethikkommission der Bundesregierung: Link
See also: EGE (2018). Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems. pp. 13-14