Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Susanne Beck
  • Michelle Faber
  • Simon Gerndt

Details

Translated title: Legal aspects of the use of artificial intelligence and robotics in medicine and care
Original language: German
Pages (from-to): 247-263
Number of pages: 17
Journal: Ethik in der Medizin
Volume: 35
Issue number: 2
Early online date: 17 Apr 2023
Publication status: Published - June 2023

Abstract

Definition of the problem: Rapid developments in the field of artificial intelligence (AI) and robotics pose new challenges not only to ethics but also to law, especially in the field of medicine and nursing. In principle, the use of AI has the potential to facilitate, if not improve, both curative treatment and adequate care. Administrative tasks, the monitoring of vital functions and their parameters, and the examination of tissue samples, for example, could run autonomously. In diagnostics and therapy, such systems can support the attending physician. Intelligent beds make it possible to mobilize patients early while reducing the need for personnel. Nevertheless, the use of these systems poses legal challenges. For example, there is a risk of injury to the people involved. Unlike conventional technologies, AI “suffers” from the “black box problem”: the results generated by the systems are no longer fully predictable and comprehensible. Its use therefore entails unknown and incalculable risks, with particular implications for civil liability and criminal responsibility.

Arguments: To whom the decisions of such systems are normatively attributable is a core question of the legal discourse. The obvious choice, for reasons of practicability, of attributing the behaviour of an AI system to the human being who makes the final decision is not convincing in all cases: it often degrades the human being to a symbolic “liability servant” and places the risks of the technology one-sidedly on that person. Furthermore, in the field of medicine and care, questions arise regarding the approval of AI systems, since the machines continue to learn during use and thus continuously change their structural design. Since the systems require large amounts of reliable data for training and later use, especially through further learning, adequate handling of personal data under data protection law is also necessary in the area of care and medicine.

Conclusions: It is therefore advisable to address this potential for conflict from an ethical and legal perspective at an early stage, in order to prevent a potential social fear of such systems and to create a practical framework for action. Orientation towards the guiding principle of “meaningful human control” offers the potential to resolve these challenges.
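
To make the guiding principle of “meaningful human control” (human in the loop) mentioned above more concrete, the following minimal Python sketch illustrates one possible reading of it: an AI system may only propose an action, and nothing is carried out unless a clinician explicitly confirms the proposal. All names and the interface shown here are illustrative assumptions, not part of the article or of any specific medical device.

from dataclasses import dataclass
from typing import Callable

@dataclass
class AiProposal:
    """A suggestion produced by a clinical decision-support system."""
    patient_id: str
    action: str          # e.g. a suggested diagnosis or therapy step
    confidence: float    # model confidence between 0.0 and 1.0
    rationale: str       # whatever explanation the system can offer

def apply_with_human_control(proposal: AiProposal,
                             clinician_confirms: Callable[[AiProposal], bool]) -> bool:
    """Gate every AI proposal behind an explicit human decision.

    The system never acts on its own output: a clinician reviews the proposal,
    including its rationale, and either confirms or rejects it. The function
    returns True only if the proposal was confirmed and may be carried out.
    """
    print(f"AI proposal for patient {proposal.patient_id}: {proposal.action} "
          f"(confidence {proposal.confidence:.0%})")
    print(f"Rationale: {proposal.rationale}")
    if clinician_confirms(proposal):   # human in the loop: explicit sign-off
        print("Confirmed by clinician; the proposal may be carried out.")
        return True
    print("Rejected by clinician; the proposal is discarded.")
    return False

if __name__ == "__main__":
    proposal = AiProposal(
        patient_id="P-0001",
        action="order a follow-up examination of the tissue sample",
        confidence=0.87,
        rationale="anomaly score above the configured threshold",
    )
    # In practice the confirmation would come from a clinician via the user
    # interface; the callback below is only a stand-in for that decision.
    apply_with_human_control(proposal, clinician_confirms=lambda p: True)

The point of the sketch is purely conceptual: responsibility stays with the confirming human, which is exactly where the abstract locates the legal problem, since that human risks becoming a merely symbolic “liability servant” if the confirmation step is not meaningful.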

Keywords

    AI, Care, Human in the loop, Liability servant, Meaningful human control, Medicine, Participation, Responsibility diffusion


Cite

Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege. / Beck, Susanne; Faber, Michelle; Gerndt, Simon.
In: Ethik in der Medizin, Vol. 35, No. 2, 06.2023, pp. 247-263.


Beck S, Faber M, Gerndt S. Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege. Ethik in der Medizin. 2023 Jun;35(2):247-263. Epub 2023 Apr 17. doi: 10.1007/s00481-023-00763-9
Beck, Susanne ; Faber, Michelle ; Gerndt, Simon. / Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege. In: Ethik in der Medizin. 2023 ; Vol. 35, No. 2, pp. 247-263.
BibTeX
@article{930571f6a2b74e4682ec2ea0b8e392ad,
title = "Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege",
abstract = "Definition of the problem: Rapid developments in the field of artificial intelligence (AI) and robotics pose new challenges not only to ethics but also to law, especially in the field of medicine and nursing. In principle, the use of AI has the potential to facilitate, if not improve, both curative treatments and adequate handling in the context of care. Administrative tasks, the monitoring of vital functions and their parameters, and the examination of tissue samples, for example, could run autonomously. In diagnostics and therapy, such systems can support the attending physician. Intelligent beds make it possible to mobilize patients early while at the same time reducing the need for personnel. Nevertheless, the use of these systems poses legal challenges. For example, there is a risk of injury to the people involved. Unlike conventional technologies, AI “suffers” from the “black box problem”: the results generated by the systems are no longer fully predictable and comprehensible. Its use entails unknown and incalculable risks, with particular implications for civil liability and criminal responsibility. Arguments: To whom the decisions of the systems are normatively attributable is a core question of legal discourse. The obvious choice, for reasons of practicability, of attributing the behaviour of an AI system to the human being who makes the final decisions is not convincing in all cases, but often degrades the human being to a symbolic “liability servant” and imposes the risks of the technologies on the human being in a one-sided manner. Furthermore, in the field of medicine and care, questions arise regarding the approval of AI systems, since the machines continue to learn during use and thus continuously change their structural design. Since the systems require any amount of reliable data for training and later use—especially through further learning—adequate handling of personal data is also necessary with regard to data protection law in the area of care and medicine. Conclusions: It is therefore advisable to address the potential for conflict from an ethical and legal perspective at an early stage in order to prevent a potential social fear of such systems and to create a practical framework for action. Orientation towards the guiding principle of “meaningful human control” offers the potential to solve these challenges.",
keywords = "AI, Care, Human in the loop, Liability servant, Meaningful human control, Medicine, Participation, Responsibility diffusion",
author = "Susanne Beck and Michelle Faber and Simon Gerndt",
note = "Funding Information: Das diesem Beitrag zugrunde liegende Vorhaben vALID wurde mit Mitteln des Bundesministerium f{\"u}r Bildung und Forschung unter dem F{\"o}rderkennzeichen 01GP1903B gef{\"o}rdert. Die Verantwortung f{\"u}r den Inhalt dieser Ver{\"o}ffentlichung liegt bei den Autor:innen.",
year = "2023",
month = jun,
doi = "10.1007/s00481-023-00763-9",
language = "Deutsch",
volume = "35",
pages = "247--263",
journal = "Ethik in der Medizin",
issn = "0935-7335",
publisher = "Springer Verlag",
number = "2",

}

RIS

TY - JOUR

T1 - Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege

AU - Beck, Susanne

AU - Faber, Michelle

AU - Gerndt, Simon

N1 - Funding Information: Das diesem Beitrag zugrunde liegende Vorhaben vALID wurde mit Mitteln des Bundesministerium für Bildung und Forschung unter dem Förderkennzeichen 01GP1903B gefördert. Die Verantwortung für den Inhalt dieser Veröffentlichung liegt bei den Autor:innen.

PY - 2023/6

Y1 - 2023/6

N2 - Definition of the problem: Rapid developments in the field of artificial intelligence (AI) and robotics pose new challenges not only to ethics but also to law, especially in the field of medicine and nursing. In principle, the use of AI has the potential to facilitate, if not improve, both curative treatments and adequate handling in the context of care. Administrative tasks, the monitoring of vital functions and their parameters, and the examination of tissue samples, for example, could run autonomously. In diagnostics and therapy, such systems can support the attending physician. Intelligent beds make it possible to mobilize patients early while at the same time reducing the need for personnel. Nevertheless, the use of these systems poses legal challenges. For example, there is a risk of injury to the people involved. Unlike conventional technologies, AI “suffers” from the “black box problem”: the results generated by the systems are no longer fully predictable and comprehensible. Its use entails unknown and incalculable risks, with particular implications for civil liability and criminal responsibility. Arguments: To whom the decisions of the systems are normatively attributable is a core question of legal discourse. The obvious choice, for reasons of practicability, of attributing the behaviour of an AI system to the human being who makes the final decisions is not convincing in all cases, but often degrades the human being to a symbolic “liability servant” and imposes the risks of the technologies on the human being in a one-sided manner. Furthermore, in the field of medicine and care, questions arise regarding the approval of AI systems, since the machines continue to learn during use and thus continuously change their structural design. Since the systems require any amount of reliable data for training and later use—especially through further learning—adequate handling of personal data is also necessary with regard to data protection law in the area of care and medicine. Conclusions: It is therefore advisable to address the potential for conflict from an ethical and legal perspective at an early stage in order to prevent a potential social fear of such systems and to create a practical framework for action. Orientation towards the guiding principle of “meaningful human control” offers the potential to solve these challenges.

AB - Definition of the problem: Rapid developments in the field of artificial intelligence (AI) and robotics pose new challenges not only to ethics but also to law, especially in the field of medicine and nursing. In principle, the use of AI has the potential to facilitate, if not improve, both curative treatments and adequate handling in the context of care. Administrative tasks, the monitoring of vital functions and their parameters, and the examination of tissue samples, for example, could run autonomously. In diagnostics and therapy, such systems can support the attending physician. Intelligent beds make it possible to mobilize patients early while at the same time reducing the need for personnel. Nevertheless, the use of these systems poses legal challenges. For example, there is a risk of injury to the people involved. Unlike conventional technologies, AI “suffers” from the “black box problem”: the results generated by the systems are no longer fully predictable and comprehensible. Its use entails unknown and incalculable risks, with particular implications for civil liability and criminal responsibility. Arguments: To whom the decisions of the systems are normatively attributable is a core question of legal discourse. The obvious choice, for reasons of practicability, of attributing the behaviour of an AI system to the human being who makes the final decisions is not convincing in all cases, but often degrades the human being to a symbolic “liability servant” and imposes the risks of the technologies on the human being in a one-sided manner. Furthermore, in the field of medicine and care, questions arise regarding the approval of AI systems, since the machines continue to learn during use and thus continuously change their structural design. Since the systems require any amount of reliable data for training and later use—especially through further learning—adequate handling of personal data is also necessary with regard to data protection law in the area of care and medicine. Conclusions: It is therefore advisable to address the potential for conflict from an ethical and legal perspective at an early stage in order to prevent a potential social fear of such systems and to create a practical framework for action. Orientation towards the guiding principle of “meaningful human control” offers the potential to solve these challenges.

KW - AI

KW - Care

KW - Human in the loop

KW - Liability servant

KW - Meaningful human control

KW - Medicine

KW - Participation

KW - Responsibility diffusion

U2 - 10.1007/s00481-023-00763-9

DO - 10.1007/s00481-023-00763-9

M3 - Artikel

VL - 35

SP - 247

EP - 263

JO - Ethik in der Medizin

JF - Ethik in der Medizin

SN - 0935-7335

IS - 2

ER -