Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Katharina J. Rohlfing
  • Philipp Cimiano
  • Ingrid Scharlau
  • Tobias Matzner
  • Heike M. Buhl
  • Hendrik Buschmeier
  • Elena Esposito
  • Angela Grimminger
  • Barbara Hammer
  • Reinhold Häb-Umbach
  • Ilona Horwath
  • Eyke Hüllermeier
  • Friederike Kern
  • Stefan Kopp
  • Kirsten Thommes
  • Axel-Cyrille Ngonga Ngomo
  • Carsten Schulte
  • Henning Wachsmuth
  • Petra Wagner
  • Britta Wrede

External organisations

  • Universität Paderborn
  • Universität Bielefeld

Details

Original language: English
Article number: 9292993
Pages (from - to): 717-728
Number of pages: 12
Journal: IEEE Transactions on Cognitive and Developmental Systems
Volume: 13
Issue number: 3
Publication status: Published - 9 Sep 2021
Published externally: Yes

Abstract

The recent surge of interest in explainability in artificial intelligence (XAI) is propelled not only by technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.
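The framework itself is conceptual and the article specifies no implementation. Purely as an illustration of the monitoring/scaffolding loop the abstract describes, the minimal Python sketch below models an explainer that adapts its explanation to cues from the explainee; every name, the discrete scaffolding levels, and the adaptation policy are hypothetical assumptions for illustration, not content of the paper.

# Minimal illustrative sketch (not from the paper): a monitoring/scaffolding
# loop in which the explainer adapts to the explainee's cues. All names,
# the level scale, and the policy are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class ExplaineeCue:
    """Feedback with which the explainee directs the explainer."""
    understood: bool    # did the last explanation land?
    wants_detail: bool  # is the explainee asking for more depth?

class Explainer:
    """Monitors cues and scaffolds: adapts explanations to the explainee."""
    def __init__(self, levels: list[str]) -> None:
        self.levels = levels  # explanations ordered from simple to detailed
        self.level = 0        # current scaffolding level

    def explain(self) -> str:
        return self.levels[self.level]

    def monitor(self, cue: ExplaineeCue) -> None:
        # Scaffolding policy: step down when understanding fails,
        # step up when the explainee signals readiness for more.
        if not cue.understood and self.level > 0:
            self.level -= 1
        elif cue.wants_detail and self.level < len(self.levels) - 1:
            self.level += 1

# Co-construction as a loop: explanation, cue, adaptation.
explainer = Explainer([
    "The system flags unusual inputs.",
    "It scores each input by how far it lies from the training data.",
    "It thresholds a distance measure computed on the input features.",
])
for cue in (ExplaineeCue(understood=True, wants_detail=True),
            ExplaineeCue(understood=False, wants_detail=False)):
    print(explainer.explain())
    explainer.monitor(cue)
print(explainer.explain())

The point of the sketch is the two-way structure the authors argue for: the explainee is an active participant whose cues steer the explainer, as opposed to a one-shot delivery of a fixed explanation.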


Cite

Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. / Rohlfing, Katharina J.; Cimiano, Philipp; Scharlau, Ingrid et al.
In: IEEE Transactions on Cognitive and Developmental Systems, Vol. 13, No. 3, 9292993, 09.09.2021, pp. 717-728.


Rohlfing, KJ, Cimiano, P, Scharlau, I, Matzner, T, Buhl, HM, Buschmeier, H, Esposito, E, Grimminger, A, Hammer, B, Häb-Umbach, R, Horwath, I, Hüllermeier, E, Kern, F, Kopp, S, Thommes, K, Ngonga Ngomo, AC, Schulte, C, Wachsmuth, H, Wagner, P & Wrede, B 2021, 'Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems', IEEE Transactions on Cognitive and Developmental Systems, vol. 13, no. 3, 9292993, pp. 717-728. https://doi.org/10.1109/TCDS.2020.3044366
Rohlfing, K. J., Cimiano, P., Scharlau, I., Matzner, T., Buhl, H. M., Buschmeier, H., Esposito, E., Grimminger, A., Hammer, B., Häb-Umbach, R., Horwath, I., Hüllermeier, E., Kern, F., Kopp, S., Thommes, K., Ngonga Ngomo, A. C., Schulte, C., Wachsmuth, H., Wagner, P., & Wrede, B. (2021). Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. IEEE Transactions on Cognitive and Developmental Systems, 13(3), 717-728. Article 9292993. https://doi.org/10.1109/TCDS.2020.3044366
Rohlfing KJ, Cimiano P, Scharlau I, Matzner T, Buhl HM, Buschmeier H et al. Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. IEEE Transactions on Cognitive and Developmental Systems. 2021 Sep 9;13(3):717-728. 9292993. doi: 10.1109/TCDS.2020.3044366
Rohlfing, Katharina J. ; Cimiano, Philipp ; Scharlau, Ingrid et al. / Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. In: IEEE Transactions on Cognitive and Developmental Systems. 2021 ; Vol. 13, No. 3. pp. 717-728.
BibTeX
@article{3193f77bde8b4df0b9dbaa8b5f8aba14,
title = "Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems",
abstract = "The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.",
keywords = "Explainability, explainable artificial systems, process of explaining and understanding",
author = "Rohlfing, {Katharina J.} and Philipp Cimiano and Ingrid Scharlau and Tobias Matzner and Buhl, {Heike M.} and Hendrik Buschmeier and Elena Esposito and Angela Grimminger and Barbara Hammer and Reinhold H{\"a}b-Umbach and Ilona Horwath and Eyke H{\"u}llermeier and Friederike Kern and Stefan Kopp and Kirsten Thommes and {Ngonga Ngomo}, {Axel-Cyrille} and Carsten Schulte and Henning Wachsmuth and Petra Wagner and Britta Wrede",
year = "2021",
month = sep,
day = "9",
doi = "10.1109/TCDS.2020.3044366",
language = "English",
volume = "13",
pages = "717--728",
journal = "IEEE Transactions on Cognitive and Developmental Systems",
issn = "2379-8920",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "3",

}

RIS

TY - JOUR
T1 - Explanation as a Social Practice
T2 - Toward a Conceptual Framework for the Social Design of AI Systems
AU - Rohlfing, Katharina J.
AU - Cimiano, Philipp
AU - Scharlau, Ingrid
AU - Matzner, Tobias
AU - Buhl, Heike M.
AU - Buschmeier, Hendrik
AU - Esposito, Elena
AU - Grimminger, Angela
AU - Hammer, Barbara
AU - Häb-Umbach, Reinhold
AU - Horwath, Ilona
AU - Hüllermeier, Eyke
AU - Kern, Friederike
AU - Kopp, Stefan
AU - Thommes, Kirsten
AU - Ngonga Ngomo, Axel-Cyrille
AU - Schulte, Carsten
AU - Wachsmuth, Henning
AU - Wagner, Petra
AU - Wrede, Britta
PY - 2021/9/9
Y1 - 2021/9/9
N2 - The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.
AB - The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.
KW - Explainability
KW - explainable artificial systems
KW - process of explaining and understanding
UR - http://www.scopus.com/inward/record.url?scp=85098767199&partnerID=8YFLogxK
U2 - 10.1109/TCDS.2020.3044366
DO - 10.1109/TCDS.2020.3044366
M3 - Article
AN - SCOPUS:85098767199
VL - 13
SP - 717
EP - 728
JO - IEEE Transactions on Cognitive and Developmental Systems
JF - IEEE Transactions on Cognitive and Developmental Systems
SN - 2379-8920
IS - 3
M1 - 9292993
ER -
