Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Katharina J. Rohlfing
  • Philipp Cimiano
  • Ingrid Scharlau
  • Tobias Matzner
  • Heike M. Buhl
  • Hendrik Buschmeier
  • Elena Esposito
  • Angela Grimminger
  • Barbara Hammer
  • Reinhold Häb-Umbach
  • Ilona Horwath
  • Eyke Hüllermeier
  • Friederike Kern
  • Stefan Kopp
  • Kirsten Thommes
  • Axel Cyrille Ngonga Ngomo
  • Carsten Schulte
  • Henning Wachsmuth
  • Petra Wagner
  • Britta Wrede

External Research Organisations

  • Paderborn University
  • Bielefeld University

Details

Original language: English
Article number: 9292993
Pages (from-to): 717-728
Number of pages: 12
Journal: IEEE Transactions on Cognitive and Developmental Systems
Volume: 13
Issue number: 3
Publication status: Published - 9 Sept 2021
Externally published: Yes

Abstract

The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.

Keywords

    Explainability, explainable artificial systems, process of explaining and understanding


Cite this

Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. / Rohlfing, Katharina J.; Cimiano, Philipp; Scharlau, Ingrid et al.
In: IEEE Transactions on Cognitive and Developmental Systems, Vol. 13, No. 3, 9292993, 09.09.2021, p. 717-728.


Rohlfing, KJ, Cimiano, P, Scharlau, I, Matzner, T, Buhl, HM, Buschmeier, H, Esposito, E, Grimminger, A, Hammer, B, Häb-Umbach, R, Horwath, I, Hüllermeier, E, Kern, F, Kopp, S, Thommes, K, Ngonga Ngomo, AC, Schulte, C, Wachsmuth, H, Wagner, P & Wrede, B 2021, 'Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems', IEEE Transactions on Cognitive and Developmental Systems, vol. 13, no. 3, 9292993, pp. 717-728. https://doi.org/10.1109/TCDS.2020.3044366
Rohlfing, K. J., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H. M., Buschmeier, H., Esposito, E., Grimminger, A., Hammer, B., Häb-Umbach, R., Horwath, I., Hüllermeier, E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A. C., Schulte, C., Wachsmuth, H., Wagner, P., & Wrede, B. (2021). Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. IEEE Transactions on Cognitive and Developmental Systems, 13(3), 717-728. Article 9292993. https://doi.org/10.1109/TCDS.2020.3044366
Rohlfing KJ, Cimiano P, Scharlau I, Matzner T, Buhl HM, Buschmeier H et al. Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. IEEE Transactions on Cognitive and Developmental Systems. 2021 Sept 9;13(3):717-728. 9292993. doi: 10.1109/TCDS.2020.3044366
Rohlfing, Katharina J. ; Cimiano, Philipp ; Scharlau, Ingrid et al. / Explanation as a Social Practice : Toward a Conceptual Framework for the Social Design of AI Systems. In: IEEE Transactions on Cognitive and Developmental Systems. 2021 ; Vol. 13, No. 3. pp. 717-728.
@article{3193f77bde8b4df0b9dbaa8b5f8aba14,
title = "Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems",
abstract = "The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.",
keywords = "Explainability, explainable artificial systems, process of explaining and understanding",
author = "Rohlfing, {Katharina J.} and Philipp Cimiano and Ingrid Scharlau and Tobias Matzner and Buhl, {Heike M.} and Hendrik Buschmeier and Elena Esposito and Angela Grimminger and Barbara Hammer and Reinhold H{\"a}b-Umbach and Ilona Horwath and Eyke H{\"u}llermeier and Friederike Kern and Stefan Kopp and Kirsten Thommes and {Ngonga Ngomo}, {Axel Cyrille} and Carsten Schulte and Henning Wachsmuth and Petra Wagner and Britta Wrede",
year = "2021",
month = sep,
day = "9",
doi = "10.1109/TCDS.2020.3044366",
language = "English",
volume = "13",
pages = "717--728",
journal = "IEEE Transactions on Cognitive and Developmental Systems",
issn = "2379-8920",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "3",

}


TY - JOUR

T1 - Explanation as a Social Practice

T2 - Toward a Conceptual Framework for the Social Design of AI Systems

AU - Rohlfing, Katharina J.

AU - Cimiano, Philipp

AU - Scharlau, Ingrid

AU - Matzner, Tobias

AU - Buhl, Heike M.

AU - Buschmeier, Hendrik

AU - Esposito, Elena

AU - Grimminger, Angela

AU - Hammer, Barbara

AU - Häb-Umbach, Reinhold

AU - Horwath, Ilona

AU - Hüllermeier, Eyke

AU - Kern, Friederike

AU - Kopp, Stefan

AU - Thommes, Kirsten

AU - Ngonga Ngomo, Axel Cyrille

AU - Schulte, Carsten

AU - Wachsmuth, Henning

AU - Wagner, Petra

AU - Wrede, Britta

PY - 2021/9/9

Y1 - 2021/9/9

N2 - The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.

AB - The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.

KW - Explainability

KW - explainable artificial systems

KW - process of explaining and understanding

UR - http://www.scopus.com/inward/record.url?scp=85098767199&partnerID=8YFLogxK

U2 - 10.1109/TCDS.2020.3044366

DO - 10.1109/TCDS.2020.3044366

M3 - Article

AN - SCOPUS:85098767199

VL - 13

SP - 717

EP - 728

JO - IEEE Transactions on Cognitive and Developmental Systems

JF - IEEE Transactions on Cognitive and Developmental Systems

SN - 2379-8920

IS - 3

M1 - 9292993

ER -
