Details
Original language | English |
---|---|
Title of host publication | Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering |
Subtitle of host publication | Foundation for Software Quality (REFSQ 2024) |
Volume | 3672 |
Publication status | Published - 2024 |
Event | 2024 Joint International Conference on Requirements Engineering: Foundation for Software Quality Workshops, Doctoral Symposium, Posters and Tools Track, and Education and Training Track (REFSQ-JP 2024), Winterthur, Switzerland, 8 Apr 2024 → 11 Apr 2024 |
Publication series
Name | CEUR Workshop Proceedings |
---|---|
Volume | 3672 |
Abstract
With the rise of artificial intelligence (AI) in society, more people are coming into contact with complex and opaque software systems in their daily lives. These black-box systems are typically hard to understand and therefore not trustworthy for end-users. Research in eXplainable Artificial Intelligence (XAI) has shown that explanations have the potential to address this opacity by making systems more transparent and understandable. However, the line between interpretability and explainability is blurry at best. While there are many definitions of explainability in XAI, most do not look beyond the justification of outputs, i.e., interpretability. Meanwhile, contemporary research outside of XAI has adopted wider definitions of explainability and examined system aspects other than algorithms and their outputs. In this position paper, we argue that requirements engineers for AI systems need to consider explainability requirements beyond interpretability. To this end, we present a hypothetical scenario in the medical sector, which demonstrates a variety of different explainability requirements that are typically not considered by XAI researchers. This contribution aims to start a discussion in the XAI community and motivate AI engineers to take a look outside the black-box when eliciting explainability requirements.
Keywords
- Explainable Artificial Intelligence
- Interpretability
- Requirements Engineering
ASJC Scopus subject areas
- Computer Science (all)
- General Computer Science
Cite this
Droste, J., Deters, H., Fuchs, R., & Schneider, K. (2024). Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability. In Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2024) (Vol. 3672). (CEUR Workshop Proceedings; Vol. 3672).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Peeking Outside the Black-Box: AI Explainability Requirements beyond Interpretability
AU - Droste, Jakob
AU - Deters, Hannah
AU - Fuchs, Ronja
AU - Schneider, Kurt
N1 - Publisher Copyright: © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
PY - 2024
Y1 - 2024
N2 - With the rise of artificial intelligence (AI) in society, more people are coming into contact with complex and opaque software systems in their daily lives. These black-box systems are typically hard to understand and therefore not trustworthy for end-users. Research in eXplainable Artificial Intelligence (XAI) has shown that explanations have the potential to address this opacity by making systems more transparent and understandable. However, the line between interpretability and explainability is blurry at best. While there are many definitions of explainability in XAI, most do not look beyond the justification of outputs, i.e., interpretability. Meanwhile, contemporary research outside of XAI has adopted wider definitions of explainability and examined system aspects other than algorithms and their outputs. In this position paper, we argue that requirements engineers for AI systems need to consider explainability requirements beyond interpretability. To this end, we present a hypothetical scenario in the medical sector, which demonstrates a variety of different explainability requirements that are typically not considered by XAI researchers. This contribution aims to start a discussion in the XAI community and motivate AI engineers to take a look outside the black-box when eliciting explainability requirements.
AB - With the rise of artificial intelligence (AI) in society, more people are coming into contact with complex and opaque software systems in their daily lives. These black-box systems are typically hard to understand and therefore not trustworthy for end-users. Research in eXplainable Artificial Intelligence (XAI) has shown that explanations have the potential to address this opacity by making systems more transparent and understandable. However, the line between interpretability and explainability is blurry at best. While there are many definitions of explainability in XAI, most do not look beyond the justification of outputs, i.e., interpretability. Meanwhile, contemporary research outside of XAI has adopted wider definitions of explainability and examined system aspects other than algorithms and their outputs. In this position paper, we argue that requirements engineers for AI systems need to consider explainability requirements beyond interpretability. To this end, we present a hypothetical scenario in the medical sector, which demonstrates a variety of different explainability requirements that are typically not considered by XAI researchers. This contribution aims to start a discussion in the XAI community and motivate AI engineers to take a look outside the black-box when eliciting explainability requirements.
KW - Explainable Artificial Intelligence
KW - Interpretability
KW - Requirements Engineering
UR - http://www.scopus.com/inward/record.url?scp=85193068547&partnerID=8YFLogxK
M3 - Conference contribution
VL - 3672
T3 - CEUR Workshop Proceedings
BT - Joint Proceedings of REFSQ-2024 Workshops, Doctoral Symposium, Posters & Tools Track, and Education and Training Track co-located with the 30th International Conference on Requirements Engineering
T2 - 2024 Joint International Conference on Requirements Engineering: Foundation for Software Quality Workshops, Doctoral Symposium, Posters and Tools Track, and Education and Training Track, REFSQ-JP 2024
Y2 - 8 April 2024 through 11 April 2024
ER -