Details
| Original language | English |
| --- | --- |
| Article number | 9292993 |
| Pages (from-to) | 717-728 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Cognitive and Developmental Systems |
| Volume | 13 |
| Issue number | 3 |
| Publication status | Published - 9 Sept 2021 |
| Externally published | Yes |
Abstract
The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.
Keywords
- Explainability, explainable artificial systems, process of explaining and understanding
ASJC Scopus subject areas
- Computer Science(all)
- Software
- Artificial Intelligence
Cite this
Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems. In: IEEE Transactions on Cognitive and Developmental Systems, Vol. 13, No. 3, 9292993, 09.09.2021, p. 717-728.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Explanation as a Social Practice
T2 - Toward a Conceptual Framework for the Social Design of AI Systems
AU - Rohlfing, Katharina J.
AU - Cimiano, Philipp
AU - Scharlau, Ingrid
AU - Matzner, Tobias
AU - Buhl, Heike M.
AU - Buschmeier, Hendrik
AU - Esposito, Elena
AU - Grimminger, Angela
AU - Hammer, Barbara
AU - Haeb-Umbach, Reinhold
AU - Horwath, Ilona
AU - Hüllermeier, Eyke
AU - Kern, Friederike
AU - Kopp, Stefan
AU - Thommes, Kirsten
AU - Ngonga Ngomo, Axel-Cyrille
AU - Schulte, Carsten
AU - Wachsmuth, Henning
AU - Wagner, Petra
AU - Wrede, Britta
PY - 2021/9/9
Y1 - 2021/9/9
N2 - The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.
AB - The recent surge of interest in explainability in artificial intelligence (XAI) is propelled by not only technological advancements in machine learning but also by regulatory initiatives to foster transparency in algorithmic decision making. In this article, we revise the current concept of explainability and identify three limitations: passive explainee, narrow view on the social process, and undifferentiated assessment of explainee's understanding. In order to overcome these limitations, we present explanation as a social practice in which explainer and explainee co-construct understanding on the microlevel. We view the co-construction on a microlevel as embedded into a macrolevel, yielding expectations concerning, e.g., social roles or partner models: typically, the role of the explainer is to provide an explanation and to adapt it to the current level of explainee's understanding; the explainee, in turn, is expected to provide cues that direct the explainer. Building on explanations being a social practice, we present a conceptual framework that aims to guide future research in XAI. The framework relies on the key concepts of monitoring and scaffolding to capture the development of interaction. We relate our conceptual framework and our new perspective on explaining to transparency and autonomy as objectives considered for XAI.
KW - Explainability
KW - explainable artificial systems
KW - process of explaining and understanding
UR - http://www.scopus.com/inward/record.url?scp=85098767199&partnerID=8YFLogxK
U2 - 10.1109/TCDS.2020.3044366
DO - 10.1109/TCDS.2020.3044366
M3 - Article
AN - SCOPUS:85098767199
VL - 13
SP - 717
EP - 728
JO - IEEE Transactions on Cognitive and Developmental Systems
JF - IEEE Transactions on Cognitive and Developmental Systems
SN - 2379-8920
IS - 3
M1 - 9292993
ER -