
Exploring the means to measure explainability: Metrics, heuristics and questionnaires

Research output: Contribution to journal › Article › Research › peer review

Details

Original language: English
Article number: 107682
Journal: Information and Software Technology
Volume: 181
Early online date: 8 Feb 2025
Publication status: E-pub ahead of print - 8 Feb 2025

Abstract

Context: As the complexity of modern software is steadily growing, these systems become increasingly difficult to understand for their stakeholders. At the same time, opaque and artificially intelligent systems permeate a growing number of safety-critical areas, such as medicine and finance. As a result, explainability is becoming more important as a software quality aspect and non-functional requirement. Objective: Contemporary research has mainly focused on making artificial intelligence and its decision-making processes more understandable. However, explainability has also gained traction in recent requirements engineering research. This work aims to contribute to that body of research by providing a quality model for explainability as a software quality aspect. Quality models provide means and measures to specify and evaluate quality requirements. Method: In order to design a user-centered quality model for explainability, we conducted a literature review. Results: We identified ten fundamental aspects of explainability. Furthermore, we aggregated criteria and metrics to measure them as well as alternative means of evaluation in the form of heuristics and questionnaires. Conclusion: Our quality model and the related means of evaluation enable software engineers to develop and validate explainable systems in accordance with their explainability goals and intentions. This is achieved by offering a view from different angles at fundamental aspects of explainability and the related development goals. Thus, we provide a foundation that improves the management and verification of explainability requirements.

Keywords

    Explainability, Heuristics, Literature studies, Metrics, Quality models, Requirements engineering


Cite this

Exploring the means to measure explainability: Metrics, heuristics and questionnaires. / Deters, Hannah; Droste, Jakob; Obaidi, Martin et al.
In: Information and Software Technology, Vol. 181, 107682, 05.2025.

Research output: Contribution to journal › Article › Research › peer review

BibTeX
@article{ce8c1636da0c4f0a96b3ebea2f1b24ed,
title = "Exploring the means to measure explainability: Metrics, heuristics and questionnaires",
abstract = "Context: As the complexity of modern software is steadily growing, these systems become increasingly difficult to understand for their stakeholders. At the same time, opaque and artificially intelligent systems permeate a growing number of safety-critical areas, such as medicine and finance. As a result, explainability is becoming more important as a software quality aspect and non-functional requirement. Objective: Contemporary research has mainly focused on making artificial intelligence and its decision-making processes more understandable. However, explainability has also gained traction in recent requirements engineering research. This work aims to contribute to that body of research by providing a quality model for explainability as a software quality aspect. Quality models provide means and measures to specify and evaluate quality requirements. Method: In order to design a user-centered quality model for explainability, we conducted a literature review. Results: We identified ten fundamental aspects of explainability. Furthermore, we aggregated criteria and metrics to measure them as well as alternative means of evaluation in the form of heuristics and questionnaires. Conclusion: Our quality model and the related means of evaluation enable software engineers to develop and validate explainable systems in accordance with their explainability goals and intentions. This is achieved by offering a view from different angles at fundamental aspects of explainability and the related development goals. Thus, we provide a foundation that improves the management and verification of explainability requirements.",
keywords = "Explainability, Heuristics, Literature studies, Metrics, Quality models, Requirements engineering",
author = "Hannah Deters and Jakob Droste and Martin Obaidi and Kurt Schneider",
note = "Publisher Copyright: {\textcopyright} 2025",
year = "2025",
month = feb,
day = "8",
doi = "10.1016/j.infsof.2025.107682",
language = "English",
volume = "181",
journal = "Information and Software Technology",
issn = "0950-5849",
publisher = "Elsevier",

}

RIS

TY - JOUR
T1 - Exploring the means to measure explainability
T2 - Metrics, heuristics and questionnaires
AU - Deters, Hannah
AU - Droste, Jakob
AU - Obaidi, Martin
AU - Schneider, Kurt
N1 - Publisher Copyright: © 2025
PY - 2025/2/8
Y1 - 2025/2/8
N2 - Context: As the complexity of modern software is steadily growing, these systems become increasingly difficult to understand for their stakeholders. At the same time, opaque and artificially intelligent systems permeate a growing number of safety-critical areas, such as medicine and finance. As a result, explainability is becoming more important as a software quality aspect and non-functional requirement. Objective: Contemporary research has mainly focused on making artificial intelligence and its decision-making processes more understandable. However, explainability has also gained traction in recent requirements engineering research. This work aims to contribute to that body of research by providing a quality model for explainability as a software quality aspect. Quality models provide means and measures to specify and evaluate quality requirements. Method: In order to design a user-centered quality model for explainability, we conducted a literature review. Results: We identified ten fundamental aspects of explainability. Furthermore, we aggregated criteria and metrics to measure them as well as alternative means of evaluation in the form of heuristics and questionnaires. Conclusion: Our quality model and the related means of evaluation enable software engineers to develop and validate explainable systems in accordance with their explainability goals and intentions. This is achieved by offering a view from different angles at fundamental aspects of explainability and the related development goals. Thus, we provide a foundation that improves the management and verification of explainability requirements.
AB - Context: As the complexity of modern software is steadily growing, these systems become increasingly difficult to understand for their stakeholders. At the same time, opaque and artificially intelligent systems permeate a growing number of safety-critical areas, such as medicine and finance. As a result, explainability is becoming more important as a software quality aspect and non-functional requirement. Objective: Contemporary research has mainly focused on making artificial intelligence and its decision-making processes more understandable. However, explainability has also gained traction in recent requirements engineering research. This work aims to contribute to that body of research by providing a quality model for explainability as a software quality aspect. Quality models provide means and measures to specify and evaluate quality requirements. Method: In order to design a user-centered quality model for explainability, we conducted a literature review. Results: We identified ten fundamental aspects of explainability. Furthermore, we aggregated criteria and metrics to measure them as well as alternative means of evaluation in the form of heuristics and questionnaires. Conclusion: Our quality model and the related means of evaluation enable software engineers to develop and validate explainable systems in accordance with their explainability goals and intentions. This is achieved by offering a view from different angles at fundamental aspects of explainability and the related development goals. Thus, we provide a foundation that improves the management and verification of explainability requirements.
KW - Explainability
KW - Heuristics
KW - Literature studies
KW - Metrics
KW - Quality models
KW - Requirements engineering
UR - http://www.scopus.com/inward/record.url?scp=85217789947&partnerID=8YFLogxK
U2 - 10.1016/j.infsof.2025.107682
DO - 10.1016/j.infsof.2025.107682
M3 - Article
AN - SCOPUS:85217789947
VL - 181
JO - Information and Software Technology
JF - Information and Software Technology
SN - 0950-5849
M1 - 107682
ER -
