Intensive Care Med (2021) 47:157–159
EDITORIAL
Five things every clinician should know
about AI ethics in intensive care
James A. Shaw1,2*, Nayha Sethi3 and Brian L. Block4
© 2020 Springer-Verlag GmbH Germany, part of Springer Nature
*Correspondence: [Link]@[Link]
1 Research Director of Artificial Intelligence, Ethics & Health, Joint Centre for Bioethics, University of Toronto, Toronto, Canada
Full author information is available at the end of the article

You have just admitted two patients to your intensive care unit (ICU) with coronavirus disease 2019 (COVID-19), both needing intubation. You only have the resources to offer mechanical ventilation to one of them. In your view, both are equally ill and warrant a trial of mechanical ventilation. Your hospital uses artificial intelligence (AI) to make recommendations for the allocation of scarce resources, to reduce subjectivity and remove treating clinicians from triage decisions. Without showing the data or reasoning behind its decision, the algorithm recommends offering mechanical ventilation to one of the patients, who is White, rather than the other, who is Black. You wonder why the algorithm made this recommendation and whether it is morally "right".

As applications of AI become a routine part of clinical practice, intensive care clinicians will need to develop an understanding of the ethics and responsibilities that come with healthcare AI. In this brief paper, we outline five things every clinician should know to inform the ethical use of AI technologies in intensive care (see Fig. 1 for a summary). We highlight issues that clinicians must understand to engage in ethical deliberation about the uses of AI more generally. Readers seeking additional information and a principlist approach to issues of AI in healthcare would do well to read other articles in this special series on AI, or consult other authoritative publications [1, 2].

Fig. 1 Five things every clinician should know about AI ethics in intensive care

First, clinicians should have a basic fluency with the technology underlying AI because they will ultimately remain ethically and legally responsible for treatment decisions. As a general-purpose technology, AI refers to computer algorithms that run complex computations on data using advanced statistical analyses [3]. These algorithms are generally trained on large datasets, which permit more accurate predictions than can be made with other methodologies. Healthcare applications of AI range from clinician-facing tools that predict clinical deterioration in the ICU to patient-facing applications such as automated chat functions (chatbots) that families can use to ask questions [3]. The purpose of becoming familiar with the technology underlying AI is not to become an expert in developing such technologies. Rather, practicing clinicians must understand what algorithms can and cannot do, promote the appropriate use of healthcare AI, and recognize when the technology is not performing as desired or expected.

Second, clinicians should understand that patients and the public will not necessarily trust or embrace healthcare AI. A 2019 survey of members of the Canadian public found that 58% of respondents believed it was very or somewhat likely that AI technologies would be delivering health services in the next 10 years [4]. Two-thirds of respondents believed that such advances in the role of AI in medicine would have a positive impact on their lives. And yet, experimental evidence from the United States suggests that people are less likely to use medical services when such services are known to use AI [5]. Trust, it turns out, still depends on having a clinician remain in charge of decision-making. Thus, clinicians planning to utilize healthcare AI must develop strategies for communicating clearly that the clinician will only employ AI in ways that are safe and effective [6].

Third, clinicians must understand the provenance of the training data used in healthcare AI. Algorithms derive their power and accuracy from the data they are trained on. In most cases, training data come from individual patients. While using patient-level data is not inherently objectionable, the use of such data without consent
is morally problematic [7]. In Denmark, for example, health authorities made population-wide health record data available to digital health innovators [8]. When a group of physicians discovered this was taking place, they raised public awareness and advocated for ending such data sharing. Similar events have taken place in the United Kingdom and the United States, where health systems shared large numbers of health records with large technology companies [7]. Although morally problematic, sharing de-identified data to generate AI is legal in most of the world. European countries tend to demand stronger justifications for such initiatives, but even in Europe, patients need not expressly grant permission for their health records to be shared. Just as with other technologies, frameworks to inform ethical data use lag behind the deployment of AI, which is occurring across sectors at a blistering pace [9].

Fourth, in addition to considering whether training data were acquired with patient consent, clinicians must understand that algorithms may perpetuate bias. Suresh and Guttag (2019) outline six ways in which bias can be incorporated into the process of AI development and deployment [10]. Although a detailed exploration of each form of bias is beyond the scope of this paper, we provide one example here. Obermeyer et al. (2019) identified a case in which a health system in the United States deployed a model to identify patients with complex needs that was allocating systematically fewer resources to Black patients than to White patients, based on past expenditures on those patients [11]. The algorithm was perpetuating historical inequities in access by providing less care to Black patients; it was not the case that such patients needed less care.

The fifth and final consideration we emphasize is that clinicians should understand whether algorithms are achieving appropriate and desired results. Just as an evidence base is required before utilizing a novel chemotherapy, AI algorithms must also be tested to ensure they are delivering intended results. Healthcare AI is not infallible. Clinicians must have access to information about the impact that AI has on patient outcomes to avoid causing patient harm [12]. Recent work has proposed a multi-phased approach to generating evidence on healthcare AI, and clinicians should become familiar with the interpretation of such evidence for this novel collection of technologies [13].

Returning to the patients mentioned at the outset, the clinician is apt to question the reasoning behind the algorithm. They must have access to information about how the algorithm was trained, understand how the training data relate to the patients in front of them, and review patient outcomes associated with the algorithm. In this
case, there is concern that the algorithm may perpetuate inequities by calculating survival probabilities on the basis of historical data showing that Black patients, who often receive worse medical care and face other social inequities, are less likely to survive.

Author details
1 Research Director of Artificial Intelligence, Ethics & Health, Joint Centre for Bioethics, University of Toronto, Toronto, Canada. 2 Institute for Health System Solutions and Virtual Care, Women's College Hospital, 76 Grenville Street, Toronto, ON M5S 1B2, Canada. 3 Centre for Biomedicine, Self and Society, Usher Institute, University of Edinburgh, Edinburgh, UK. 4 Department of Pulmonary, Allergy, Critical Care and Sleep Medicine, University of California, San Francisco, USA.

Compliance with ethical standards

Conflicts of interest
The authors declare no conflicts of interest.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Received: 24 August 2020  Accepted: 3 October 2020
Published online: 19 October 2020

References
1. Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1(9):389–399
2. Morley J et al (2020) The ethics of AI in health care: a mapping review. Soc Sci Med 113172
3. Shaw J, Rudzicz F, Jamieson T, Goldfarb A (2019) Artificial intelligence and the implementation challenge. J Med Internet Res 21(7):e13659
4. Canadian Medical Association (2019) The future of connected health care: reporting Canadians' perspectives on the health care system. Accessed 7 Jul 2020. https://[Link]/sites/default/files/pdf/Media-Releases/The-Future-of-Connected-Healthcare-[Link]
5. Longoni C, Bonezzi A, Morewedge CK (2019) Resistance to medical artificial intelligence. J Consum Res 46(4):629–650
6. Nundy S, Montgomery T, Wachter RM (2019) Promoting trust between patients and physicians in the era of artificial intelligence. JAMA 322(6):497–498
7. Wachter RM, Cassel CK (2020) Sharing health care data with digital giants: overcoming obstacles and reaping benefits while protecting patients. JAMA 323(6):507–508
8. Wadmann S, Hoeyer K (2018) Dangers of the digital fit: rethinking seamlessness and social sustainability in data-intensive healthcare. Big Data Soc 5(1):2053951717752964
9. Einav S, Ranzani OT (2020) Focus on better care and ethics: are medical ethics lagging behind the development of new medical technologies? Intensive Care Med 46(8):1611–1613. https://[Link]/10.1007/s00134-020-06112-4
10. Suresh H, Guttag JV (2019) A framework for understanding unintended consequences of machine learning. arXiv preprint arXiv:1901.10002
11. Obermeyer Z, Powers B, Vogeli C, Mullainathan S (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464):447–453
12. Wynants L et al (2020) Prediction models for diagnosis and prognosis of covid-19 infection: systematic review and critical appraisal. BMJ 369:m1328. https://[Link]/10.1136/bmj.m1328
13. McCradden MD, Stephenson EA, Anderson JA (2020) Clinical research underlies ethical integration of healthcare artificial intelligence. Nat Med 26(9). https://[Link]/10.1038/s41591-020-1035-9