Details
| Title in translation | Legal aspects of the use of artificial intelligence and robotics in medicine and care |
| --- | --- |
| Original language | German |
| Pages (from-to) | 247-263 |
| Number of pages | 17 |
| Journal | Ethik in der Medizin |
| Volume | 35 |
| Issue number | 2 |
| Early online date | 17 Apr 2023 |
| Publication status | Published - June 2023 |
Abstract
Definition of the problem: Rapid developments in the field of artificial intelligence (AI) and robotics pose new challenges not only to ethics but also to law, especially in medicine and nursing. In principle, the use of AI has the potential to facilitate, if not improve, both curative treatment and appropriate handling in the care context. Administrative tasks, the monitoring of vital functions and their parameters, and the examination of tissue samples, for example, could be carried out autonomously. In diagnostics and therapy, such systems can support the attending physician. Intelligent beds make it possible to mobilize patients early while at the same time reducing the need for personnel. Nevertheless, the use of these systems poses legal challenges. For example, there is a risk of injury to the people involved. Unlike conventional technologies, AI “suffers” from the “black box problem”: the results generated by the systems are no longer fully predictable and comprehensible. Their use therefore entails unknown and incalculable risks, with particular implications for civil liability and criminal responsibility.

Arguments: To whom the decisions of such systems are normatively attributable is a core question of the legal discourse. Attributing the behaviour of an AI system to the human being who makes the final decision is the obvious choice for reasons of practicability, but it is not convincing in all cases: it often degrades the human being to a symbolic “liability servant” and places the risks of the technology one-sidedly on that person. Furthermore, in medicine and care, questions arise regarding the regulatory approval of AI systems, since the machines continue to learn during use and thus continuously change their structural design. Because the systems require large amounts of reliable data for training and later use, in particular through continued learning, adequate handling of personal data under data protection law is also necessary in the area of care and medicine.

Conclusions: It is therefore advisable to address the potential for conflict from an ethical and legal perspective at an early stage, in order to prevent potential societal fears of such systems and to create a practical framework for action. Orientation towards the guiding principle of “meaningful human control” offers the potential to meet these challenges.
Keywords
- AI, Care, Human in the loop, Liability servant, Meaningful human control, Medicine, Participation, Responsibility diffusion
ASJC Scopus subject areas
- Nursing (all)
- Issues, ethics and legal aspects
- Social Sciences (all)
- Health (social science)
- Arts and Humanities (all)
- Philosophy
- Medicine (all)
- Health Policy
Cite
Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege. / Beck, Susanne; Faber, Michelle; Gerndt, Simon. In: Ethik in der Medizin, Vol. 35, No. 2, 06.2023, pp. 247-263.
Publication: Contribution to journal › Article › Research › Peer review
TY - JOUR
T1 - Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege
AU - Beck, Susanne
AU - Faber, Michelle
AU - Gerndt, Simon
N1 - Funding Information: The vALID project underlying this article was funded by the Bundesministerium für Bildung und Forschung (Federal Ministry of Education and Research) under grant number 01GP1903B. Responsibility for the content of this publication lies with the authors.
PY - 2023/6
Y1 - 2023/6
N2 - Definition of the problem: Rapid developments in the field of artificial intelligence (AI) and robotics pose new challenges not only to ethics but also to law, especially in medicine and nursing. In principle, the use of AI has the potential to facilitate, if not improve, both curative treatment and appropriate handling in the care context. Administrative tasks, the monitoring of vital functions and their parameters, and the examination of tissue samples, for example, could be carried out autonomously. In diagnostics and therapy, such systems can support the attending physician. Intelligent beds make it possible to mobilize patients early while at the same time reducing the need for personnel. Nevertheless, the use of these systems poses legal challenges. For example, there is a risk of injury to the people involved. Unlike conventional technologies, AI “suffers” from the “black box problem”: the results generated by the systems are no longer fully predictable and comprehensible. Their use therefore entails unknown and incalculable risks, with particular implications for civil liability and criminal responsibility. Arguments: To whom the decisions of such systems are normatively attributable is a core question of the legal discourse. Attributing the behaviour of an AI system to the human being who makes the final decision is the obvious choice for reasons of practicability, but it is not convincing in all cases: it often degrades the human being to a symbolic “liability servant” and places the risks of the technology one-sidedly on that person. Furthermore, in medicine and care, questions arise regarding the regulatory approval of AI systems, since the machines continue to learn during use and thus continuously change their structural design. Because the systems require large amounts of reliable data for training and later use, in particular through continued learning, adequate handling of personal data under data protection law is also necessary in the area of care and medicine. Conclusions: It is therefore advisable to address the potential for conflict from an ethical and legal perspective at an early stage, in order to prevent potential societal fears of such systems and to create a practical framework for action. Orientation towards the guiding principle of “meaningful human control” offers the potential to meet these challenges.
KW - AI
KW - Care
KW - Human in the loop
KW - Liability servant
KW - Meaningful human control
KW - Medicine
KW - Participation
KW - Responsibility diffusion
U2 - 10.1007/s00481-023-00763-9
DO - 10.1007/s00481-023-00763-9
M3 - Article
VL - 35
SP - 247
EP - 263
JO - Ethik in der Medizin
JF - Ethik in der Medizin
SN - 0935-7335
IS - 2
ER -