Details
| Original language | English |
| --- | --- |
| Title of host publication | 2021 IEEE 29th International Requirements Engineering Conference (RE) |
| Editors | Ana Moreira, Kurt Schneider, Michael Vierhauser, Jane Cleland-Huang |
| Pages | 197-208 |
| Number of pages | 12 |
| ISBN (electronic) | 9781665428569 |
| Publication status | Published - 2021 |
Publication series
| Name | Proceedings of the IEEE International Conference on Requirements Engineering |
| --- | --- |
| ISSN (print) | 1090-705X |
| ISSN (electronic) | 2332-6441 |
Abstract
The growing complexity of software systems and the influence of software-supported decisions in our society have awakened the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. However, in order to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it impacts other quality aspects in a system. This allows for an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of conceptual models and knowledge catalogues that support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.
Keywords
- Explainability
- Explainable Artificial Intelligence
- Explanations
- Interpretability
- Non-Functional Requirements
- Quality Aspects
- Requirements Synergy
- Software Transparency
ASJC Scopus subject areas
- General Computer Science
- General Engineering
- General Business, Management and Accounting
- Strategy and Management
Cite this
2021 IEEE 29th International Requirements Engineering Conference (RE). ed. / Ana Moreira; Kurt Schneider; Michael Vierhauser; Jane Cleland-Huang. 2021. p. 197-208 (Proceedings of the IEEE International Conference on Requirements Engineering).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Exploring Explainability
T2 - A Definition, a Model, and a Knowledge Catalogue
AU - Chazette, Larissa
AU - Brunotte, Wasja
AU - Speith, Timo
N1 - Funding Information: ACKNOWLEDGMENTS This work was supported by the research initiative Mobilise between the Technical University of Braunschweig and Leibniz University Hannover, funded by the Ministry for Science and Culture of Lower Saxony and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453). Work on this paper was also funded by the Volkswagen Foundation grant AZ 98514 "Explainable Intelligent Systems" (EIS) and by the DFG grant 389792660 as part of TRR 248. We thank Martin Glinz for his feedback on our research design. Furthermore, we thank all workshop participants, the anonymous reviewers, and the colleagues who gave feedback on our manuscript.
PY - 2021
Y1 - 2021
N2 - The growing complexity of software systems and the influence of software-supported decisions in our society have awakened the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. However, in order to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it impacts other quality aspects in a system. This allows for an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of conceptual models and knowledge catalogues that support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.
AB - The growing complexity of software systems and the influence of software-supported decisions in our society have awakened the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. However, in order to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it impacts other quality aspects in a system. This allows for an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of conceptual models and knowledge catalogues that support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.
KW - Explainability
KW - Explainable Artificial Intelligence
KW - Explanations
KW - Interpretability
KW - Non-Functional Requirements
KW - Quality Aspects
KW - Requirements Synergy
KW - Software Transparency
UR - http://www.scopus.com/inward/record.url?scp=85118468569&partnerID=8YFLogxK
U2 - 10.1109/RE51729.2021.00025
DO - 10.1109/RE51729.2021.00025
M3 - Conference contribution
SN - 978-1-6654-2857-6
T3 - Proceedings of the IEEE International Conference on Requirements Engineering
SP - 197
EP - 208
BT - 2021 IEEE 29th International Requirements Engineering Conference (RE)
A2 - Moreira, Ana
A2 - Schneider, Kurt
A2 - Vierhauser, Michael
A2 - Cleland-Huang, Jane
ER -