Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed


Details

Original language: English
Title of host publication: Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering
Subtitle: Foundation for Software Quality (REFSQ 2024)
Volume: 3672
Publication status: Published - 2024
Event: 30th International Working Conference on Requirements Engineering: Foundation for Software Quality, REFSQ 2024 - Winterthur, Switzerland
Duration: 8 Apr 2024 – 11 Apr 2024

Publication series

Name: CEUR Workshop Proceedings
Volume: 3672

Abstract

With the rise of artificial intelligence (AI) in society, more people are coming into contact with complex and opaque software systems in their daily lives. These black-box systems are typically hard to understand and therefore not trustworthy for end-users. Research in eXplainable Artificial Intelligence (XAI) has shown that explanations have the potential to address this opacity, by making systems more transparent and understandable. However, the line between interpretability and explainability is blurry at best. While there are many definitions of explainability in XAI, most do not look beyond the justification of outputs, i.e., to provide interpretability. Meanwhile, contemporary research outside of XAI has adapted wider definitions of explainability, and examined system aspects other than algorithms and their outputs. In this position paper, we argue that requirements engineers for AI systems need to consider explainability requirements beyond interpretability. To this end, we present a hypothetical scenario in the medical sector, which demonstrates a variety of different explainability requirements that are typically not considered by XAI researchers. This contribution aims to start a discussion in the XAI community and motivate AI engineers to take a look outside the black-box when eliciting explainability requirements.

ASJC Scopus subject areas

Cite

Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability. / Droste, Jakob; Deters, Hannah; Fuchs, Ronja et al.
Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2024). Vol. 3672, 2024. (CEUR Workshop Proceedings; Vol. 3672).


Droste, J, Deters, H, Fuchs, R & Schneider, K 2024, Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability. in Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2024). Vol. 3672, CEUR Workshop Proceedings, Vol. 3672, 30th International Working Conference on Requirements Engineering: Foundation for Software Quality, REFSQ 2024, Winterthur, Switzerland, 8 Apr 2024. <https://ceur-ws.org/Vol-3672/RE4AI-paper2.pdf>
Droste, J., Deters, H., Fuchs, R., & Schneider, K. (2024). Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability. In Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2024) (Vol. 3672). (CEUR Workshop Proceedings; Vol. 3672). https://ceur-ws.org/Vol-3672/RE4AI-paper2.pdf
Droste J, Deters H, Fuchs R, Schneider K. Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability. In Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2024). Vol. 3672. 2024. (CEUR Workshop Proceedings).
Droste, Jakob ; Deters, Hannah ; Fuchs, Ronja et al. / Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability. Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2024). Vol. 3672, 2024. (CEUR Workshop Proceedings).
Download (BibTeX)
@inproceedings{cc20852811094830bae8c63ec6e9afaa,
title = "Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability",
abstract = "With the rise of artificial intelligence (AI) in society, more people are coming into contact with complex and opaque software systems in their daily lives. These black-box systems are typically hard to understand and therefore not trustworthy for end-users. Research in eXplainable Artificial Intelligence (XAI) has shown that explanations have the potential to address this opacity, by making systems more transparent and understandable. However, the line between interpretability and explainability is blurry at best. While there are many definitions of explainability in XAI, most do not look beyond the justification of outputs, i.e., to provide interpretability. Meanwhile, contemporary research outside of XAI has adapted wider definitions of explainability, and examined system aspects other than algorithms and their outputs. In this position paper, we argue that requirements engineers for AI systems need to consider explainability requirements beyond interpretability. To this end, we present a hypothetical scenario in the medical sector, which demonstrates a variety of different explainability requirements that are typically not considered by XAI researchers. This contribution aims to start a discussion in the XAI community and motivate AI engineers to take a look outside the black-box when eliciting explainability requirements.",
keywords = "Explainable Artificial Intelligence, Interpretability, Requirements Engineering",
author = "Jakob Droste and Hannah Deters and Ronja Fuchs and Kurt Schneider",
note = "Publisher Copyright: {\textcopyright} 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).; 2024 Joint International Conference on Requirements Engineering: Foundation for Software Quality Workshops, Doctoral Symposium, Posters and Tools Track, and Education and Training Track, REFSQ-JP 2024 ; Conference date: 08-04-2024 Through 11-04-2024",
year = "2024",
language = "English",
volume = "3672",
series = "CEUR Workshop Proceedings",
booktitle = "Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering",
url = "https://ceur-ws.org/Vol-3672/RE4AI-paper2.pdf",
}

Download (RIS)

TY - GEN
T1 - Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability
AU - Droste, Jakob
AU - Deters, Hannah
AU - Fuchs, Ronja
AU - Schneider, Kurt
N1 - Publisher Copyright: © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2024
Y1 - 2024
N2 - With the rise of artificial intelligence (AI) in society, more people are coming into contact with complex and opaque software systems in their daily lives. These black-box systems are typically hard to understand and therefore not trustworthy for end-users. Research in eXplainable Artificial Intelligence (XAI) has shown that explanations have the potential to address this opacity, by making systems more transparent and understandable. However, the line between interpretability and explainability is blurry at best. While there are many definitions of explainability in XAI, most do not look beyond the justification of outputs, i.e., to provide interpretability. Meanwhile, contemporary research outside of XAI has adapted wider definitions of explainability, and examined system aspects other than algorithms and their outputs. In this position paper, we argue that requirements engineers for AI systems need to consider explainability requirements beyond interpretability. To this end, we present a hypothetical scenario in the medical sector, which demonstrates a variety of different explainability requirements that are typically not considered by XAI researchers. This contribution aims to start a discussion in the XAI community and motivate AI engineers to take a look outside the black-box when eliciting explainability requirements.
AB - With the rise of artificial intelligence (AI) in society, more people are coming into contact with complex and opaque software systems in their daily lives. These black-box systems are typically hard to understand and therefore not trustworthy for end-users. Research in eXplainable Artificial Intelligence (XAI) has shown that explanations have the potential to address this opacity, by making systems more transparent and understandable. However, the line between interpretability and explainability is blurry at best. While there are many definitions of explainability in XAI, most do not look beyond the justification of outputs, i.e., to provide interpretability. Meanwhile, contemporary research outside of XAI has adapted wider definitions of explainability, and examined system aspects other than algorithms and their outputs. In this position paper, we argue that requirements engineers for AI systems need to consider explainability requirements beyond interpretability. To this end, we present a hypothetical scenario in the medical sector, which demonstrates a variety of different explainability requirements that are typically not considered by XAI researchers. This contribution aims to start a discussion in the XAI community and motivate AI engineers to take a look outside the black-box when eliciting explainability requirements.
KW - Explainable Artificial Intelligence
KW - Interpretability
KW - Requirements Engineering
UR - http://www.scopus.com/inward/record.url?scp=85193068547&partnerID=8YFLogxK
M3 - Conference contribution
VL - 3672
T3 - CEUR Workshop Proceedings
BT - Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering
T2 - 2024 Joint International Conference on Requirements Engineering: Foundation for Software Quality Workshops, Doctoral Symposium, Posters and Tools Track, and Education and Training Track, REFSQ-JP 2024
Y2 - 8 April 2024 through 11 April 2024
ER -
