
Banerjee et al. BMC Medical Education (2021) 21:429
https://doi.org/10.1186/s12909-021-02870-x

RESEARCH Open Access

The impact of artificial intelligence on clinical education: perceptions of postgraduate trainee doctors in London (UK) and recommendations for trainers
Maya Banerjee1†, Daphne Chiew2†, Keval T. Patel3, Ieuan Johns4, Digby Chappell2, Nick Linton4, Graham D. Cole4,
Darrel P. Francis2, Jo Szram5, Jack Ross3 and Sameer Zaman2,3,4,6*

Abstract
Background: Artificial intelligence (AI) technologies are increasingly used in clinical practice. Although there is
robust evidence that AI innovations can improve patient care, reduce clinicians’ workload and increase efficiency,
their impact on medical training and education remains unclear.
Methods: A survey of trainee doctors’ perceived impact of AI technologies on clinical training and education was
conducted at UK NHS postgraduate centers in London between October and December 2020. Impact assessment
mirrored domains in training curricula such as ‘clinical judgement’, ‘practical skills’ and ‘research and quality
improvement skills’. Significance between Likert-type data was analysed using Fisher’s exact test. Response
variations between clinical specialities were analysed using k-modes clustering. Free-text responses were analysed
by thematic analysis.
Results: Two hundred ten doctors responded to the survey (response rate 72%). The majority (58%) perceived an
overall positive impact of AI technologies on their training and education. Respondents agreed that AI would
reduce clinical workload (62%) and improve research and audit training (68%). Trainees were skeptical that it would
improve clinical judgement (46% agree, p = 0.12) and practical skills training (32% agree, p < 0.01). The majority
reported insufficient AI training in their current curricula (92%), and supported having more formal AI training (81%).
Conclusions: Trainee doctors have an overall positive perception of AI technologies’ impact on clinical training.
There is optimism that it will improve ‘research and quality improvement’ skills and facilitate ‘curriculum mapping’.
There is skepticism that it may reduce educational opportunities to develop ‘clinical judgement’ and ‘practical skills’.
Medical educators should be mindful that these domains are protected as AI develops. We recommend that
‘Applied AI’ topics are formalized in curricula and digital technologies leveraged to deliver clinical education.
Keywords: Artificial intelligence, Machine learning, Medical education, Clinical training

* Correspondence: sameer.zaman10@imperial.ac.uk
† Maya Banerjee and Daphne Chiew contributed equally to this work.
2 Imperial College London, Exhibition Road, London SW7 2AZ, UK
3 Guy's & St. Thomas' NHS Foundation Trust, Westminster Bridge Road, London SE1 7EH, UK
Full list of author information is available at the end of the article

© The Author(s). 2021 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give
appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if
changes were made. The images or other third party material in this article are included in the article's Creative Commons
licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons
licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain
permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the
data made available in this article, unless otherwise stated in a credit line to the data.

Background

The healthcare sector is undergoing a digital revolution [1], with a wide range of artificial intelligence (AI) innovations increasingly encountered in routine clinical practice [2]. From designing and training algorithms to interpreting their output in clinical practice, these technologies require skillful human-machine interaction. The need for formal AI training has been recognized as a priority for government health policy [3, 4]. Although there is an abundance of evidence supporting the safety, accuracy, cost and workflow benefits of AI technologies in healthcare [5–7], there is limited research investigating the impact of AI on medical education [8, 9], the new skills required, and in particular the perceptions of trainee doctors, who will be frontline users of AI in healthcare.

Increasingly, AI technologies are being developed that are capable of automating many tasks typically performed by doctors as part of their clinical training. So far these advancements are mostly limited to research applications, but it is only a matter of time before commercial uptake results in their routine use in everyday clinical practice. For example, decision support systems can triage patients, suggest diagnoses and alert test results by analysing data faster and more accurately than clinicians [6, 10]. Natural language processing can automate clinical documentation by text summarization [11] and detect diagnoses from scan reports [12]. Computer vision algorithms can detect lesions on radiological scans, make measurements and pick up incidental findings, saving time and reducing error [13, 14]. Robots trained by human operators could perform procedures such as venepuncture and ultrasound [15, 16].

Although these technologies undoubtedly offer considerable benefits to patients, clinicians and healthcare systems, there are concerns that they could come at a cost, such as lack of trust in the algorithmic output due to the black-box effect [17, 18]. AI could also reduce training opportunities for doctors to develop clinical judgement, practical ability and communication skills. Current training programmes may fall short of equipping clinicians with the technical, statistical and analytical skills required to apply AI effectively for themselves and their patients.

The length of postgraduate medical training varies between countries, but the content of medical curricula is largely consistent globally [8]. Traditionally, medical education is centred on knowledge assimilation (by didactic teaching) and vocational training to develop practical, interpersonal and professional skills [19]. With AI algorithms capable of retaining and computing many orders of magnitude more data than humans, there may be less need for clinicians to memorize large amounts of medical information or to hone procedural skills by repetitive practice. Conversely, additional skills are required to interact safely and effectively with AI technologies, such as data science, statistics and AI ethics [8]. Tomorrow's clinicians must be skilled in data input, interpretation of algorithmic outputs and communication of AI-derived treatment plans to patients [20]. Although formal education in these skills is recommended in national healthcare policy [2], it does not currently feature in most clinical training curricula.

In this study we evaluate the impact of AI technologies on clinical education, as perceived by doctors in postgraduate training. Based on the results, we make practical recommendations for trainers to maximise the benefits of clinical AI whilst mitigating potential negative impacts, to deliver future-proof medical education.

Methods

Survey

A questionnaire was devised to investigate trainee doctors' perceptions of AI technologies, specifically the impact on their training and education. The survey was sent by email to doctors working in London (UK) at NHS postgraduate training centers at which research team members were based, to maximise the response rate. These centers, mainly headquartered at tertiary hospitals, administrate both hospital- and community-based postgraduate training in London, enabling inclusion of trainees from a wide range of specialities. The survey was conducted between October and December 2020. Respondents provided their informed consent by participating in the anonymous survey. The rationale, protocol and data collection tool were approved by the Guy's and St. Thomas' NHS Foundation Trust Institutional Audit and Quality Improvement Review Board (ref. 11865). All methods were carried out in accordance with relevant guidelines and regulations.

Demographic data such as age, sex, clinical specialism and geographical location were collected anonymously. Five-point Likert-type scales (ranging from 'strongly disagree' to 'strongly agree') were used to investigate participants' agreement with statements across a number of domains. The content validity of the Likert-type scale statements was assessed by an expert panel of six healthcare professionals with experience of AI in healthcare. Individual items were scored to reach a consensus on which should be included. The survey questions were designed to minimize bias and were further refined based on a pilot study of participants. The domains of impact assessment mirrored common themes in UK clinical training curricula: 'clinical judgement', 'practical skills', 'research and quality improvement skills' and 'curriculum mapping'. Non-mandatory free-text spaces gave respondents the opportunity to comment on the positive and negative impacts of AI on clinical training.

Quantitative analyses

Internal reliability of the survey questions was measured by calculating Cronbach's alpha. Likert-type data were analysed by (i) calculating the category of the median response for each statement, and (ii) comparing the proportion of agreement with the overall perception statement ('AI systems being used in healthcare will improve my training and education') against each other statement in a pairwise manner using Fisher's exact test. To display the range of responses across each question, waffle plots were generated using the matplotlib library in Python 3.8. K-modes clustering was used to identify key groups of responses to the Likert-type statements [21]. (Illustrative code sketches of these analyses are included after the corresponding results, tables and figures below.)

Qualitative analyses

Free-text responses were analysed using thematic analysis [22]. Text data were hand-coded by two independent researchers. Themes and sub-themes were subsequently generated and representative examples identified from the raw data.

Results

Quantitative analyses

Two hundred ten doctors responded to the online survey (response rate 72%; 47% female). Overall there was high internal reliability of the survey questions (Cronbach's alpha 0.82). 58% perceived an overall positive impact of AI technologies on clinical training (Fig. 1). Trainees agreed that AI would reduce clinical workload (62%) and improve training in research and audit skills (68%). Lower proportions agreed that it would improve training in clinical judgement (46%) and practical skills (32%) (Fig. 1). The majority reported insufficient AI training in their current curricula (92%) and supported more formal AI training (81%) (Fig. 2). Detailed responses to all Likert-type questions in the survey are available in Supplementary Appendix A (Figures 1 and 2). The median responses to the Likert-scale questions are shown in Table 1. There was agreement that AI would have an overall positive impact on training and education. This agreement was sustained for 'research and audit skills' and 'curriculum mapping'. Conversely, respondents tended towards disagreement for the domain of 'clinical judgement/decision making' (p = 0.12) and strong disagreement for the domain of 'clinical skills' (p < 0.01) (Table 2).

Cluster analyses

Clustering of the Likert-type statement responses revealed four key groups:

- AI Negative (8 participants): strongly disagreed with the majority of the statements.
- AI Indifferent (84 participants): no preference toward agree or disagree.
- AI Optimistic (105 participants): a preference toward agreeing with statements.
- AI Positive (15 participants): strongly agreed with the majority of the statements.

The cluster composition of clinical specialisms was calculated (Fig. 3): community-based and radiology specialisms contained a higher proportion of 'AI Optimistic' participants, whereas acute specialities and child and maternal health specialities contained a higher proportion of 'AI Indifferent' participants.
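A minimal sketch of the Cronbach's alpha calculation described in Methods is given below, using simulated placeholder data; the authors' raw responses are available only on request, so this will not reproduce the reported alpha of 0.82. The function itself is the standard formula.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = scores.shape[1]                         # number of survey items
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Placeholder data: 210 simulated respondents x 10 items on a 1-5 scale.
# Random, independent answers give a low alpha; real, correlated survey
# responses are what yield values such as the 0.82 reported in this study.
rng = np.random.default_rng(0)
likert = rng.integers(1, 6, size=(210, 10))
print(f"alpha = {cronbach_alpha(likert):.2f}")
```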

Fig. 1 Domain-based impact of clinical AI on training and education - waffle plots of responses to Likert-type questions in survey of 210 trainee doctors (each icon represents one respondent to the survey)

Fig. 2 Exposure to AI systems and attitudes towards AI training - waffle plots of responses to Likert-type questions in survey of 210 trainee doctors (each icon represents one respondent to the survey)
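Figures 1 and 2 themselves are not reproducible in this text version, but the construction the Methods describe (waffle plots generated with the matplotlib library in Python 3.8) can be sketched as follows. The per-category counts here are placeholders (the true distributions are in Supplementary Appendix A), and the grid layout and colours are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Placeholder counts for one Likert item; they sum to the 210 respondents.
counts = {"Strongly disagree": 10, "Disagree": 35, "Neutral": 43,
          "Agree": 92, "Strongly agree": 30}

# One cell per respondent, laid out as a 10 x 21 grid.
cells = np.repeat(np.arange(len(counts)), list(counts.values()))
grid = cells.reshape(10, 21)

cmap = ListedColormap(["#b2182b", "#ef8a62", "#d9d9d9", "#67a9cf", "#2166ac"])
fig, ax = plt.subplots(figsize=(6, 3))
ax.imshow(grid, cmap=cmap)
ax.set_xticks([])
ax.set_yticks([])
ax.set_title("Waffle plot: one cell per respondent (n = 210)")
plt.show()
```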

The strongly polarized groups of 'AI Positive' and 'AI Negative' were much smaller in all specialisms except Clinical Radiology, in which there was a high proportion of 'AI Positive' participants. This result should be interpreted in the context of the small proportion of respondents from this specialty.

Table 1 Median response of participants answering the Likert-type statements

I regularly encounter AI systems in my:
  Clinical practice: Disagree
  Training and education: Disagree
AI systems being used in healthcare will:
  Improve my training and education: Agree
  Reduce my clinical workload: Agree
  Improve my clinical judgement/decision-making: Neutral
  Improve my practical skills: Neutral
  Improve my research, audit and quality improvement skills: Agree
  Make it easier for me to map my training curriculum: Neutral
Training in AI:
  There is currently sufficient training in AI in my clinical curriculum: Disagree
  More training in AI should be made available: Agree

Qualitative analyses

Thematic analysis of free-text responses revealed two main themes ('positive perceptions' and 'negative perceptions') (Fig. 4). Positive sub-themes included more free time for training and educational activities, directly improving the quality of practical skills training by enabling high-fidelity simulation, enabling more time to engage in interpersonal aspects of clinical practice, and boosting training in research, audit and quality improvement.

Negative sub-themes included less development of clinical judgement, reduced opportunity for practical skills, workflow intrusion, increased administrative workload, and reduced development of clinical accountability and probity.

Discussion

This study is the first of its kind to evaluate the impact of AI technologies in healthcare on medical education and clinical training by analysing the perceptions of trainee doctors across a range of hospital and community-based specialities in London, UK. Overall, doctors perceive that clinical AI will have a positive impact on their training (58% agree). Domain-based analysis reveals more mixed perceptions. The overwhelming majority (92%) report insufficient training in AI topics, with strong support for formal training in these topics.

Table 2 Proportion of participants agreeing with each statement (agree and strongly agree), with p-values from pairwise Fisher's exact tests comparing % agreement against the overall statement

AI systems being used in healthcare will:
  Improve my training and education: 58.4% (overall statement, reference)
  Reduce my clinical workload: 61.8% (p = 0.67)
  Improve my clinical judgement/decision-making: 45.8% (p = 0.12)
  Improve my practical skills: 31.9% (p < 0.01)
  Improve my research, audit and quality improvement skills: 68.1% (p = 0.19)
  Make it easier for me to map my training curriculum: 49.2% (p = 0.26)
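The pairwise comparison behind Table 2 can be illustrated with SciPy's implementation of Fisher's exact test. The sketch below reconstructs approximate 2x2 counts from the rounded percentages in Table 2 and the 210 respondents; these are not the authors' raw counts, so the resulting p-value is only approximate.

```python
from scipy.stats import fisher_exact

N = 210  # survey respondents

def agree_count(pct: float) -> int:
    """Approximate head-count of agreement from a rounded percentage."""
    return round(N * pct / 100)

# Overall perception statement vs. 'improve my practical skills' (Table 2).
overall = agree_count(58.4)    # ~123 respondents agree
practical = agree_count(31.9)  # ~67 respondents agree

# 2x2 contingency table: rows = statements, columns = (agree, not agree).
table = [[overall, N - overall],
         [practical, N - practical]]
odds_ratio, p = fisher_exact(table)
print(f"p = {p:.2e}")  # well below 0.01, consistent with Table 2
```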

Domains of perceived positive impact

Doctors were most optimistic that clinical AI would improve training in research, audit and quality improvement. These are key education domains and can be challenging to fulfill without significant commitment of time and effort outside of work. AI systems can improve the efficiency of research and audit by rapidly and accurately analysing large volumes of data, which may explain trainees' positive perceptions in this regard [23]. An indirect but desirable effect is that clinical AI could free up doctors' time to spend on other educational activities, which was a common positive theme throughout. Developing skills in evidence-based medicine is a key training requirement, but keeping up with rapidly changing clinical guidelines can be burdensome for doctors. AI-based decision support systems are automatically updated with the latest literature. Trainees perceived this to be a positive impact, giving them greater exposure to the latest evidence to improve their knowledge and the quality of care they provide.

Domains with skeptical perceptions

Trainees perceived that clinical AI could reduce their training in practical skills, clinical judgement and decision-making. Developing these skills requires iterative practice, formation of heuristics, personal reflection, varied clinical experience and time. Participants reported that decision support systems, robotics and automatic image analysis could reduce training opportunities in these domains, leading to deskilling. Another skeptical perception was increased administrative workload leading to information overload. Clinical AI developers should ensure that these technologies do not impede workflow, to enable their adoption in clinical practice.

Fig. 3 Cluster composition of Likert-type statement responses across clinical specialities. (Acute specialities include Acute Medicine, Intensive Care Medicine, Anaesthetics and Emergency Medicine; child and maternal health includes Paediatrics, Obstetrics and Gynaecology; community specialities include General Practice and Psychiatry)
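The four clusters summarized in Fig. 3 were identified with k-modes clustering [21]. The paper does not name an implementation; as a hedged illustration, a sketch using the open-source kmodes Python package (which implements Huang's algorithm) on simulated responses would look like this. Cluster sizes from placeholder data will not match those reported.

```python
import numpy as np
from kmodes.kmodes import KModes  # third-party package: pip install kmodes

# Placeholder data: 210 respondents x 8 Likert items treated as categories 1-5.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(210, 8))

# Four clusters, mirroring the groups reported in the paper
# (AI Negative / AI Indifferent / AI Optimistic / AI Positive).
km = KModes(n_clusters=4, init="Huang", n_init=10, random_state=0)
labels = km.fit_predict(responses)

print(np.bincount(labels))    # cluster sizes
print(km.cluster_centroids_)  # modal response pattern of each cluster
```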

Fig. 4 Themes and sub-themes identified from thematic analysis of free-text response data, along with representative examples from the raw data

Medical educators should note that there are some areas in which training may be harmed by clinical AI. Involving clinicians in the development of these algorithms (such as in ground-truth labelling and procedural training) will help trainees continue to develop these skills, because the AI will depend on their training to mimic behavior. This will also increase trust in AI technologies and improve explainability to patients.

Impact on interpersonal and ethical development

The impact on interpersonal and ethical skills training featured in both positive and negative perception themes. Doctors envisage that AI will automate tasks, freeing up more time to develop communication skills. Trainees are optimistic that this will enhance their ability to provide patient-centred care. Conversely, doctors express concern that accountability is unclear when AI is part of clinical decisions, which could cause human deskilling in ownership and probity. Both points of view are valid; AI in healthcare has already created a new ethical landscape [24]. Governing bodies and medical educators should work collaboratively to produce ethical and legal frameworks that will protect and enable clinicians to develop these skills effectively in the age of clinical AI.

Perceptual variations by clinical specialism

Medical, surgical and community-based (General Practice, Psychiatry) specialities had a higher proportion of 'AI Optimistic' trainees (Fig. 3). This may be due to lower clinical acuity in these specialities and a higher administrative workload. Trainees in these specialities may perceive that AI will improve their workflow, freeing up time for training and educational activities such as communication skills development.

Acute specialities (Emergency Medicine, Acute Medicine, Anaesthesia, Intensive Care) and child and maternal health specialities (Paediatrics, Obstetrics and Gynaecology) had more 'AI Negative' and 'AI Indifferent' responses. This might be due to the higher clinical acuity in these specialities, including emergency procedures and diagnostic ambiguity, which may rely on experience or 'gut feeling'. These skills are notoriously hard to model for AI development [25]. This mirrors trainees' perceptions that training in practical procedures and clinical judgement might be reduced by clinical AI.

Clinical Radiology trainees, although a small proportion of respondents, were highly optimistic. This echoes positive attitudes reported previously [26]. Radiology has already experienced the most AI advances in clinical practice [27, 28], so Radiology trainees are the most likely to already have first-hand experience of AI's impact on their training. This may explain their positive perceptions compared with other specialities.

Overall, specialities with a higher capacity for automation were more optimistic. The implication is that, rather than being a panacea, delivery of medical education in the AI age will need to be tailored to these subtle variations between specialities.

Implications for clinical curricula and recommendations for trainers

As clinical practice changes, so must clinical education. This study brings the need for formal AI training in clinical curricula into sharp relief, and confirms a willing appetite for it among trainee doctors. The impact of AI on medical education can take different routes. Direct routes leverage AI technology to improve the delivery of training itself. Indirect routes benefit education by streamlining workflow to free up more time for education and training. Although the majority (72%) of respondents in our survey had yet to regularly encounter AI systems in their training and education, this is an area of active research, ranging from assisted radiology teaching [29] to virtual reality for surgical skills development [30] and automated assessment of procedural performance [31]. Medical curricula should be reviewed to leverage these technologies to directly improve the delivery of clinical education.

We propose that medical curriculum makers consider a new set of AI-specific skills. These include data input and management, mathematics and statistics, communicating AI outcomes to patients, and AI-specific ethics. Medical training curricula are already saturated, with limited room for new topics, so practical training in 'Applied AI' would be most feasible. Alongside an overview of common ML architectures, this should include balanced training in clinical AI interpretation, including data bias, overfitting and the potential for harm.

Navigating the current ML research landscape is challenging. Common pitfalls include over-optimistic conclusions from 'human versus clinician' studies that are usually retrospective and prone to bias [32], a lack of standardized benchmarks, and the absence of universally accepted AI evaluation metrics [33]. Training in 'Applied AI' must therefore additionally equip clinicians with skills in critical literature analysis.

Training in 'Applied AI' will need to be supported by e-learning, didactic teaching, assessment in examinations, supervised clinical learning events and personal reflection (Fig. 5). Although this has been considered a priority for the future health workforce [2, 3], 'Applied AI' topics remain widely absent from clinical training curricula [9, 19]. Ultimately this will negatively affect the quality of patient care by missing out on the myriad benefits of clinical AI systems.

Our survey was not prefaced by any educational material on AI, for two reasons. First, this avoided prolonging the time participants would need to commit, which would have reduced response rates. Second, it would have biased the responses towards the point of view implicitly expressed in the educational material. Even if we had striven for neutrality, it would have been difficult to achieve and even more difficult to document that we had achieved it.

The optimistic perceptions of AI reported in this study may lessen as doctors gain more first-hand experience of AI in their clinical practice and training (including its weaknesses, such as data bias and the black-box effect) [20]. Conversely, the opposite may occur; indeed, in this study the participants working in the speciality most likely to have already experienced AI technologies (Radiology) were actually the most 'AI Optimistic'. Curriculum development must provide a balanced view, recognizing AI limitations, weaknesses and potential for harm.

Limitations

This study provides a snapshot of the opinions of trainee doctors working in the UK NHS. The survey was hosted primarily by postgraduate training centers in London (UK), where research team members were based to maximise the response rate (72%). Although these centers administrate postgraduate training for hospital and community-based trainees (enabling representation of trainees in General Practice, Psychiatry etc.), the results of our survey could be biased in favor of trainees working in London, who may have specific experiences due to greater local uptake of AI technologies in urban compared with rural areas. Based on our results, we recommend a survey of all UK NHS postgraduate centers to gain a cross-sectional representation of trainee experience, and to elucidate any geographical variations in experience and opinion. We also recommend the inclusion of undergraduate medical students, since they are the clinical workforce of the future and the most likely to be directly affected by the impact of AI technologies on medical training.

Participants' level of AI knowledge was not collected in our survey. Participants' role at the time of responding was collected, with only two respondents currently involved in clinical AI research. The majority of respondents did not regularly encounter AI technologies in their clinical practice (68%) or their training (72%). Future work would benefit from participants self-rating their level of AI knowledge to contextualize the findings.

Fig. 5 Potential new AI-related clinical training domains in future curricula, and novel training and assessment methods to deliver AI-based training (ML = machine learning). (Created with BioRender.com)

The impact on communication and interpersonal skills was not assessed in the Likert-type part of our survey; however, thematic analysis of free-text responses revealed important trainee perceptions in these areas. Future evaluation of trainees' attitudes should further probe the perceived impact on domains such as communication, professionalism, leadership and probity, which are key elements of all clinical training curricula.

Conclusions

Clinical AI will affect medical education and clinical training. Trainee doctors have overall positive perceptions of this impact. Training in practical procedures and clinical judgement may be reduced by clinical AI, and educational opportunities in these skills should be protected. There is overwhelming support for formal education in AI-based skills and topics. Medical curricula should be reviewed to include 'Applied AI' topics, using digital technologies to deliver training.

Abbreviations
AI: Artificial intelligence; ML: Machine learning

Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s12909-021-02870-x.

Additional file 1: Appendix A. Responses to Likert-type questions in the survey of trainee doctors.

Acknowledgements
None.

Authors' contributions
MB and DaC contributed equally to this work. MB, DaC, JS, JR and SZ devised the research question and strategy. KP and IJ performed data collection. DiC and DF performed data analyses. MB, DaC, NL, GC, DF, JS, JR and SZ contributed to manuscript writing. All authors reviewed the manuscript. The author(s) read and approved the final manuscript.

Funding
Not applicable.

Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Declarations

Ethics approval and consent to participate
Informed consent was obtained from all participants. Respondents provided their consent by participating in the anonymous survey. The rationale, protocol and data collection tool were approved by the Guy's and St. Thomas' NHS Foundation Trust Institutional Audit and Quality Improvement Review Board (ref. 11865) and did not require separate ethics committee approval. All methods were carried out in accordance with relevant guidelines and regulations.

Consent for publication
Not applicable.

Competing interests
The authors declare that they have no competing interests.

Author details
1 University College London, Gower Street, London WC1E 6BT, UK. 2 Imperial College London, Exhibition Road, London SW7 2AZ, UK. 3 Guy's & St. Thomas' NHS Foundation Trust, Westminster Bridge Road, London SE1 7EH, UK. 4 Imperial College Healthcare NHS Trust, Du Cane Road, London W12 0HS, UK. 5 Royal College of Physicians, 11 St. Andrews Place, London NW1 4LE, UK. 6 Artificial Intelligence for Healthcare Centre for Doctoral Training, Imperial College London, South Kensington Campus, London SW7 2BX, UK.

Received: 17 March 2021 Accepted: 4 August 2021

References
1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56. https://doi.org/10.1038/s41591-018-0300-7.
2. Ross J, Webb C, Rahman F, AoMRC. Artificial Intelligence in Healthcare. Academy of Medical Royal Colleges; 2019 [cited 2020 Jan 12]. Available from: https://www.aomrc.org.uk/wp-content/uploads/2019/01/Artificial_intelligence_in_healthcare_0119.pdf
3. Topol E. The Topol Review: preparing the healthcare workforce to deliver the digital future. NHS; 2019 [cited 2021 Jan 12]. Available from: https://topol.hee.nhs.uk/wp-content/uploads/HEE-Topol-Review-2019.pdf
4. Joshi I, Morley J. Artificial Intelligence: how to get it right. Putting policy into practice for safe data-driven innovation in health and care. NHSX; 2019 [cited 2021 Jan 12]. Available from: https://www.nhsx.nhs.uk/media/documents/NHSX_AI_report.pdf
5. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–8.
6. Tomašev N, Glorot X, Rae JW, Zielinski M, Askham H, Saraiva A, et al. A clinically applicable approach to continuous prediction of future acute kidney injury. Nature. 2019;572(7767):116–9. https://doi.org/10.1038/s41586-019-1390-1.
7. Nelson A, Herron D, Rees G, Nachev P. Predicting scheduled hospital attendance with artificial intelligence. NPJ Digit Med. 2019;2(1):1–7.
8. Paranjape K, Schinkel M, Nannan Panday R, Car J, Nanayakkara P. Introducing artificial intelligence training in medical education. JMIR Med Educ. 2019;5(2) [cited 2021 Jan 10]. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6918207/.
9. Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Acad Med. 2018;93(8):1107–9. https://doi.org/10.1097/ACM.0000000000002044.
10. Fernandes M, Vieira SM, Leite F, Palos C, Finkelstein S, Sousa JMC. Clinical decision support systems for triage in the emergency department using intelligent systems: a review. Artif Intell Med. 2020;102:101762. https://doi.org/10.1016/j.artmed.2019.101762.
11. Goldstein A, Shahar Y. An automated knowledge-based textual summarization system for longitudinal, multivariate clinical data. J Biomed Inform. 2016;61:159–75. https://doi.org/10.1016/j.jbi.2016.03.022.
12. Bressem KK, Adams LC, Gaudin RA, Tröltzsch D, Hamm B, Makowski MR, et al. Highly accurate classification of chest radiographic reports using a deep learning natural language model pretrained on 3.8 million text reports. Bioinformatics. 2021;36(21):5255–61.
13. Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, et al. A guide to deep learning in healthcare. Nat Med. 2019;25(1):24–9. https://doi.org/10.1038/s41591-018-0316-z.
14. Howard JP, Zaman S, Ragavan A, Hall K, Leonard G, Sutanto S, et al. Automated analysis and detection of abnormalities in transaxial anatomical cardiovascular magnetic resonance images: a proof of concept study with potential to optimize image acquisition. Int J Cardiovasc Imaging. 2021;37(3):1033–42. https://doi.org/10.1007/s10554-020-02050-w.
15. Groetz S, Wilhelm K, Willinek W, Pieper C, Schild H, Thomas D. A new robotic assistance system for percutaneous CT-guided punctures: initial experience. Minim Invasive Ther Allied Technol. 2016;25(2):79–85. https://doi.org/10.3109/13645706.2015.1110825.
16. Blaivas M, Blaivas L, Philips G, Merchant R, Levy M, Abbasi A, et al. Development of a deep learning network to classify inferior vena cava collapse to predict fluid responsiveness. J Ultrasound Med. 2021;40(8):1495–504.
17. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res. 2020;22(6) [cited 2021 May 26]. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7334754/.
18. DeCamp M, Tilburt JC. Why we cannot trust artificial intelligence in medicine. Lancet Digit Health. 2019;1(8):e390. https://doi.org/10.1016/S2589-7500(19)30197-9.
19. Wartman SA, Combs CD. Reimagining medical education in the age of AI. AMA J Ethics. 2019;21(2):E146–52. https://doi.org/10.1001/amajethics.2019.146.
20. Regulating Black-Box Medicine. Michigan Law Review [Internet] [cited 2021 Jan 10]. Available from: https://michiganlawreview.org/regulating-black-box-medicine/
21. Huang Z. Extensions to the k-means algorithm for clustering large data sets with categorical values; 1998.
22. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. https://doi.org/10.1191/1478088706qp063oa.
23. Extance A. How AI technology can tame the scientific literature. Nature. 2018;561(7722):273–4. https://doi.org/10.1038/d41586-018-06617-5.
24. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Artificial Intelligence in Healthcare; 2020. p. 295–336. https://doi.org/10.1016/B978-0-12-818438-7.00012-5.
25. Hashimoto DA, Witkowski E, Gao L, Meireles O, Rosman G. Artificial intelligence in anesthesiology: current techniques, clinical applications, and limitations. Anesthesiology. 2020;132(2):379–94. https://doi.org/10.1097/ALN.0000000000002960.
26. Dumić-Čule I, Orešković T, Brkljačić B, Kujundžić Tiljak M, Orešković S. The importance of introducing artificial intelligence to the medical curriculum: assessing practitioners' perspectives. Croat Med J. 2020;61(5):457–64. https://doi.org/10.3325/cmj.2020.61.457.
27. Rajpurkar P, Irvin J, Ball RL, Zhu K, Yang B, Mehta H, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018;15(11):e1002686. https://doi.org/10.1371/journal.pmed.1002686.
28. Ardila D, Kiraly AP, Bharadwaj S, Choi B, Reicher JJ, Peng L, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med. 2019;25(6):954–61. https://doi.org/10.1038/s41591-019-0447-x.
29. Cheng C-T, Chen C-C, Fu C-Y, Chaou C-H, Wu Y-T, Hsu C-P, et al. Artificial intelligence-based education assists medical students' interpretation of hip fracture. Insights Imaging. 2020;11(1):119. https://doi.org/10.1186/s13244-020-00932-0.

30. Aeckersberg G, Gkremoutis A, Schmitz-Rixen T, Kaiser E. The relevance of low-fidelity virtual reality simulators compared with other learning methods in basic endovascular skills training. J Vasc Surg. 2019;69(1):227–35. https://doi.org/10.1016/j.jvs.2018.10.047.
31. Winkler-Schwartz A, Bissonnette V, Mirchi N, Ponnudurai N, Yilmaz R, Ledwos N, et al. Artificial intelligence in medical education: best practices using machine learning to assess surgical expertise in virtual reality simulation. J Surg Educ. 2019;76(6):1681–90. https://doi.org/10.1016/j.jsurg.2019.05.015.
32. Nagendran M, Chen Y, Lovejoy CA, Gordon AC, Komorowski M, Harvey H, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689.
33. Choudhury A, Asan O. Role of artificial intelligence in patient safety outcomes: systematic literature review. JMIR Med Inform. 2020;8(7):e18599. https://doi.org/10.2196/18599.

Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.
