Owning Decisions: AI Decision-Support and the Attributability-Gap

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Jannik Zeiser

Details

Original language: English
Article number: 27
Number of pages: 19
Journal: Science and engineering ethics
Volume: 30
Issue number: 4
Early online date: 18 Jun 2024
Publication status: Published - Aug 2024

Abstract

Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine’s behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today’s AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call “decision ownership”: they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.

Keywords

    Accountability, Agency, Artificial intelligence, Attributability, Decision-support systems, Machine learning, Responsibility

Cite this

Owning Decisions: AI Decision-Support and the Attributability-Gap. / Zeiser, Jannik.
In: Science and engineering ethics, Vol. 30, No. 4, 27, 08.2024.

Zeiser J. Owning Decisions: AI Decision-Support and the Attributability-Gap. Science and engineering ethics. 2024 Aug;30(4):27. Epub 2024 Jun 18. doi: 10.1007/s11948-024-00485-1
@article{6f4e59befa084e30843853a85282e1f0,
title = "Owning Decisions: AI Decision-Support and the Attributability-Gap",
abstract = "Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine{\textquoteright}s behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today{\textquoteright}s AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call “decision ownership”: they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.",
keywords = "Accountability, Agency, Artificial intelligence, Attributability, Decision-support systems, Machine learning, Responsibility",
author = "Jannik Zeiser",
note = "Publisher Copyright: {\textcopyright} The Author(s) 2024.",
year = "2024",
month = aug,
doi = "10.1007/s11948-024-00485-1",
language = "English",
volume = "30",
journal = "Science and engineering ethics",
issn = "1353-3452",
publisher = "Springer Netherlands",
number = "4",

}

TY - JOUR
T1 - Owning Decisions
T2 - AI Decision-Support and the Attributability-Gap
AU - Zeiser, Jannik
N1 - Publisher Copyright: © The Author(s) 2024.
PY - 2024/8
Y1 - 2024/8
N2 - Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine’s behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today’s AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call “decision ownership”: they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.
AB - Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine’s behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today’s AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call “decision ownership”: they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.
KW - Accountability
KW - Agency
KW - Artificial intelligence
KW - Attributability
KW - Decision-support systems
KW - Machine learning
KW - Responsibility
UR - http://www.scopus.com/inward/record.url?scp=85196270364&partnerID=8YFLogxK
U2 - 10.1007/s11948-024-00485-1
DO - 10.1007/s11948-024-00485-1
M3 - Article
C2 - 38888795
AN - SCOPUS:85196270364
VL - 30
JO - Science and engineering ethics
JF - Science and engineering ethics
SN - 1353-3452
IS - 4
M1 - 27
ER -