Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Larissa Chazette
  • Wasja Brunotte
  • Timo Speith

Details

Original language: English
Title of host publication: 2021 IEEE 29th International Requirements Engineering Conference (RE)
Editors: Ana Moreira, Kurt Schneider, Michael Vierhauser, Jane Cleland-Huang
Pages: 197-208
Number of pages: 12
ISBN (electronic): 9781665428569
Publication status: Published - 2021

Publication series

Name: Proceedings of the IEEE International Conference on Requirements Engineering
ISSN (Print): 1090-705X
ISSN (electronic): 2332-6441

Abstract

The growing complexity of software systems and the influence of software-supported decisions in our society have awakened the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. However, in order to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it impacts other quality aspects in a system. This allows for an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of conceptual models and knowledge catalogues that support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.

Keywords

    Explainability, Explainable Artificial Intelligence, Explanations, Interpretability, Non-Functional Requirements, Quality Aspects, Requirements Synergy, Software Transparency


Cite this

Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue. / Chazette, Larissa; Brunotte, Wasja; Speith, Timo.
2021 IEEE 29th International Requirements Engineering Conference (RE). ed. / Ana Moreira; Kurt Schneider; Michael Vierhauser; Jane Cleland-Huang. 2021. p. 197-208 (Proceedings of the IEEE International Conference on Requirements Engineering).


Chazette, L, Brunotte, W & Speith, T 2021, Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue. in A Moreira, K Schneider, M Vierhauser & J Cleland-Huang (eds), 2021 IEEE 29th International Requirements Engineering Conference (RE). Proceedings of the IEEE International Conference on Requirements Engineering, pp. 197-208. https://doi.org/10.1109/RE51729.2021.00025
Chazette, L., Brunotte, W., & Speith, T. (2021). Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue. In A. Moreira, K. Schneider, M. Vierhauser, & J. Cleland-Huang (Eds.), 2021 IEEE 29th International Requirements Engineering Conference (RE) (pp. 197-208). (Proceedings of the IEEE International Conference on Requirements Engineering). https://doi.org/10.1109/RE51729.2021.00025
Chazette L, Brunotte W, Speith T. Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue. In Moreira A, Schneider K, Vierhauser M, Cleland-Huang J, editors, 2021 IEEE 29th International Requirements Engineering Conference (RE). 2021. p. 197-208. (Proceedings of the IEEE International Conference on Requirements Engineering). doi: 10.1109/RE51729.2021.00025
Chazette, Larissa; Brunotte, Wasja; Speith, Timo. / Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue. 2021 IEEE 29th International Requirements Engineering Conference (RE). ed. / Ana Moreira; Kurt Schneider; Michael Vierhauser; Jane Cleland-Huang. 2021. pp. 197-208 (Proceedings of the IEEE International Conference on Requirements Engineering).
BibTeX
@inproceedings{39dca1b0e7864e209f7a3c0dc390a9ec,
title = "Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue",
abstract = "The growing complexity of software systems and the influence of software-supported decisions in our society have awakened the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. However, in order to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it impacts other quality aspects in a system. This allows for an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of conceptual models and knowledge catalogues that support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.",
keywords = "Explainability, Explainable Artificial Intelligence, Explanations, Interpretability, Non-Functional Requirements, Quality Aspects, Requirements Synergy, Software Transparency",
author = "Larissa Chazette and Wasja Brunotte and Timo Speith",
note = "Funding Information: This work was supported by the research initiative Mobilise between the Technical University of Braunschweig and Leibniz University Hannover, funded by the Ministry for Science and Culture of Lower Saxony and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany{\textquoteright}s Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453). Work on this paper was also funded by the Volkswagen Foundation grant AZ 98514 “Explainable Intelligent Systems” (EIS) and by the DFG grant 389792660 as part of TRR 248. We thank Martin Glinz for his feedback on our research design. Furthermore, we thank all workshop participants, the anonymous reviewers, and the colleagues who gave feedback on our manuscript.",
year = "2021",
doi = "10.1109/RE51729.2021.00025",
language = "English",
isbn = "978-1-6654-2857-6",
series = "Proceedings of the IEEE International Conference on Requirements Engineering",
pages = "197--208",
editor = "Ana Moreira and Kurt Schneider and Michael Vierhauser and Jane Cleland-Huang",
booktitle = "2021 IEEE 29th International Requirements Engineering Conference (RE)",

}

RIS

TY - GEN

T1 - Exploring Explainability

T2 - A Definition, a Model, and a Knowledge Catalogue

AU - Chazette, Larissa

AU - Brunotte, Wasja

AU - Speith, Timo

N1 - Funding Information: This work was supported by the research initiative Mobilise between the Technical University of Braunschweig and Leibniz University Hannover, funded by the Ministry for Science and Culture of Lower Saxony and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453). Work on this paper was also funded by the Volkswagen Foundation grant AZ 98514 “Explainable Intelligent Systems” (EIS) and by the DFG grant 389792660 as part of TRR 248. We thank Martin Glinz for his feedback on our research design. Furthermore, we thank all workshop participants, the anonymous reviewers, and the colleagues who gave feedback on our manuscript.

PY - 2021

Y1 - 2021

N2 - The growing complexity of software systems and the influence of software-supported decisions in our society have awakened the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. However, in order to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it impacts other quality aspects in a system. This allows for an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of conceptual models and knowledge catalogues that support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.

AB - The growing complexity of software systems and the influence of software-supported decisions in our society have awakened the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) that has a significant impact on system quality. However, in order to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it impacts other quality aspects in a system. This allows for an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of conceptual models and knowledge catalogues that support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.

KW - Explainability

KW - Explainable Artificial Intelligence

KW - Explanations

KW - Interpretability

KW - Non-Functional Requirements

KW - Quality Aspects

KW - Requirements Synergy

KW - Software Transparency

UR - http://www.scopus.com/inward/record.url?scp=85118468569&partnerID=8YFLogxK

U2 - 10.1109/RE51729.2021.00025

DO - 10.1109/RE51729.2021.00025

M3 - Conference contribution

SN - 978-1-6654-2857-6

T3 - Proceedings of the IEEE International Conference on Requirements Engineering

SP - 197

EP - 208

BT - 2021 IEEE 29th International Requirements Engineering Conference (RE)

A2 - Moreira, Ana

A2 - Schneider, Kurt

A2 - Vierhauser, Michael

A2 - Cleland-Huang, Jane

ER -