Details
Original language | English |
---|---|
Article number | 105860 |
Number of pages | 8 |
Journal | Journal of medical ethics |
Volume | 47 |
Issue number | e3 |
Early online date | 3 Apr 2020 |
Publication status | Published - 29 Nov 2021 |
Abstract
Making good decisions in extremely complex and difficult processes and situations has always been both a key task and a challenge in the clinic, and has given rise to a wide range of clinical, legal and ethical routines, protocols and reflections intended to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time and economic constraints and, not least, further endeavours and achievements in medicine and healthcare continuously raise the need to evaluate and improve clinical decision-making. This article scrutinises whether and how clinical decision-making processes are challenged by the rise of so-called artificial intelligence-driven decision support systems (AI-DSS). In a first step, it analyses how the rise of AI-DSS will affect and transform the modes of interaction between different agents in the clinic. In a second step, we point out how these changing modes of interaction also imply shifts in the conditions of trustworthiness, epistemic challenges regarding transparency, the underlying normative concepts of agency and its embedding into concrete contexts of deployment and, finally, the consequences for (possible) ascriptions of responsibility. Third, we draw first conclusions for further steps regarding 'meaningful human control' of clinical AI-DSS.
ASJC Scopus subject areas
- Social Sciences (all)
- Health (social science)
- Nursing (all)
- Issues, ethics and legal aspects
- Arts and Humanities (all)
- Arts and Humanities (miscellaneous)
- Medicine (all)
- Health policy
Sustainable Development Goals
Cite this
In: Journal of medical ethics, Vol. 47, No. e3, 105860, 29.11.2021.
Publication: Contribution to journal › Article › Research › Peer-review
TY - JOUR
T1 - Primer on an ethics of AI-based decision support systems in the clinic
AU - Braun, Matthias
AU - Hummel, Patrik
AU - Beck, Susanne
AU - Dabrock, Peter
N1 - Funding Information: This work is part of the research project DABIGO (ZMV/1–2517 FSB 013), which has been funded by the German Ministry for Health, as well as the research project vALID (01GP1903A), which has been funded by the German Ministry of Education and Research.
PY - 2021/11/29
Y1 - 2021/11/29
AB - Making good decisions in extremely complex and difficult processes and situations has always been both a key task as well as a challenge in the clinic and has led to a large amount of clinical, legal and ethical routines, protocols and reflections in order to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time as well as economic constraints and not least further endeavours as well as achievements in medicine and healthcare continuously raise the need to evaluate and to improve clinical decision-making. This article scrutinises if and how clinical decision-making processes are challenged by the rise of so-called artificial intelligence-driven decision support systems (AI-DSS). In a first step, this article analyses how the rise of AI-DSS will affect and transform the modes of interaction between different agents in the clinic. In a second step, we point out how these changing modes of interaction also imply shifts in the conditions of trustworthiness, epistemic challenges regarding transparency, the underlying normative concepts of agency and its embedding into concrete contexts of deployment and, finally, the consequences for (possible) ascriptions of responsibility. Third, we draw first conclusions for further steps regarding a 'meaningful human control' of clinical AI-DSS.
KW - decision-making
KW - ethics
UR - http://www.scopus.com/inward/record.url?scp=85083239338&partnerID=8YFLogxK
U2 - 10.1136/medethics-2019-105860
DO - 10.1136/medethics-2019-105860
M3 - Article
AN - SCOPUS:85083239338
VL - 47
JO - Journal of medical ethics
JF - Journal of medical ethics
SN - 0306-6800
IS - e3
M1 - 105860
ER -