Details
| Original language | English |
| --- | --- |
| Title of host publication | TrustNLP |
| Subtitle of host publication | First Workshop on Trustworthy Natural Language Processing, Proceedings of the Workshop |
| Editors | Yada Pruksachatkun, Anil Ramakrishna, Kai-Wei Chang, Satyapriya Krishna, Jwala Dhamala, Tanaya Guha, Xiang Ren |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 68-73 |
| Number of pages | 6 |
| ISBN (electronic) | 9781954085336 |
| Publication status | Published - 2021 |
| Event | 1st Workshop on Trustworthy Natural Language Processing, TrustNLP 2021 - Virtual, Online. Duration: 10 Jun 2021 → 5 Jul 2021 |
Abstract
Post-hoc explanation methods are an important class of approaches that help understand the rationale underlying a trained model's decision. But how useful are they for an end-user towards accomplishing a given task? In this vision paper, we argue the need for a benchmark to facilitate evaluations of the utility of post-hoc explanation methods. As a first step to this end, we enumerate desirable properties that such a benchmark should possess for the task of debugging text classifiers. Additionally, we highlight that such a benchmark facilitates not only assessing the effectiveness of explanations but also their efficiency.
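To make concrete what the abstract means by a post-hoc explanation for debugging a text classifier, here is a minimal sketch (not from the paper) using LIME to attribute a classifier's prediction to individual input words. The toy corpus, model, and label names are illustrative assumptions; per-word attributions of this kind are the sort of explanation whose debugging utility the paper proposes to benchmark.

```python
# Minimal sketch (illustrative only, not the paper's method): a post-hoc
# explanation of a black-box text classifier via LIME word attributions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Tiny toy corpus standing in for a real text-classification dataset.
texts = ["great movie, loved it", "terrible plot, awful acting",
         "wonderful and moving", "boring and bad"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (hypothetical labels)

# Train any black-box classifier; LIME only needs its predict_proba.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Post-hoc explanation: which words drove this particular prediction?
explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance("a wonderful but boring movie",
                                 model.predict_proba, num_features=4)
print(exp.as_list())  # [(word, weight), ...] per-word attributions
```

A debugging benchmark in the paper's sense would measure whether attributions like these actually help an end-user find and fix model bugs, and how quickly.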
ASJC Scopus subject areas
- Computer Science (all)
- Software
- Arts and Humanities (all)
- Language and Linguistics
- Computer Science (all)
- Computational Theory and Mathematics
- Social Sciences (all)
- Linguistics and Language
Cite this
TrustNLP: First Workshop on Trustworthy Natural Language Processing, Proceedings of the Workshop. ed. / Yada Pruksachatkun; Anil Ramakrishna; Kai-Wei Chang; Satyapriya Krishna; Jwala Dhamala; Tanaya Guha; Xiang Ren. Association for Computational Linguistics (ACL), 2021. p. 68-73.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Towards Benchmarking the Utility of Explanations for Model Debugging
AU - Idahl, Maximilian
AU - Lyu, Lijun
AU - Gadiraju, Ujwal
AU - Anand, Avishek
PY - 2021
Y1 - 2021
N2 - Post-hoc explanation methods are an important class of approaches that help understand the rationale underlying a trained model's decision. But how useful are they for an end-user towards accomplishing a given task? In this vision paper, we argue the need for a benchmark to facilitate evaluations of the utility of post-hoc explanation methods. As a first step to this end, we enumerate desirable properties that such a benchmark should possess for the task of debugging text classifiers. Additionally, we highlight that such a benchmark facilitates not only assessing the effectiveness of explanations but also their efficiency.
AB - Post-hoc explanation methods are an important class of approaches that help understand the rationale underlying a trained model's decision. But how useful are they for an end-user towards accomplishing a given task? In this vision paper, we argue the need for a benchmark to facilitate evaluations of the utility of post-hoc explanation methods. As a first step to this end, we enumerate desirable properties that such a benchmark should possess for the task of debugging text classifiers. Additionally, we highlight that such a benchmark facilitates not only assessing the effectiveness of explanations but also their efficiency.
UR - http://www.scopus.com/inward/record.url?scp=85119843243&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2105.04505
DO - 10.48550/arXiv.2105.04505
M3 - Conference contribution
AN - SCOPUS:85119843243
SP - 68
EP - 73
BT - TrustNLP
A2 - Pruksachatkun, Yada
A2 - Ramakrishna, Anil
A2 - Chang, Kai-Wei
A2 - Krishna, Satyapriya
A2 - Dhamala, Jwala
A2 - Guha, Tanaya
A2 - Ren, Xiang
PB - Association for Computational Linguistics (ACL)
T2 - 1st Workshop on Trustworthy Natural Language Processing, TrustNLP 2021
Y2 - 10 June 2021 through 5 July 2021
ER -