Details
Original language | English |
---|---|
Title of host publication | ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval |
Pages | 203-210 |
Number of pages | 8 |
ISBN (electronic) | 9781450386111 |
Publication status | Published - 31 Aug 2021 |
Event | 11th ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2021 - Virtual, Online, Canada. Duration: 11 Jul 2021 → 11 Jul 2021 |
Publication series
Name | ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval |
---|
Abstract
Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model-agnostic local explanation method that seeks to identify a small subset of input features as an explanation for the ranked output for a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model-agnostic explanation approaches across pointwise, pairwise, and listwise LTR models in validity while not compromising on completeness.
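The greedy selection sketched in the abstract can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: the `validity` proxy (fraction of rank positions preserved when the model sees only the selected features, others zero-masked), the toy linear model, and all names here are hypothetical stand-ins for the paper's actual definitions.

```python
import numpy as np

def rank(model, X):
    """Rank documents by descending model score; returns document indices."""
    return np.argsort(-model(X))

def validity(model, X, selected):
    """Toy validity proxy: fraction of rank positions preserved when the
    model sees only the selected features (all others zero-masked)."""
    mask = np.zeros(X.shape[1])
    mask[list(selected)] = 1.0
    return float(np.mean(rank(model, X) == rank(model, X * mask)))

def greedy_explanation(model, X, k):
    """Greedily add, one at a time, the feature whose inclusion most
    increases the validity proxy, until k features are selected."""
    selected, remaining = set(), set(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda f: validity(model, X, selected | {f}))
        selected.add(best)
        remaining.remove(best)
    return selected

# Toy "LTR model": linear scorer over 5 documents x 4 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
w = np.array([2.0, 0.1, 1.5, 0.05])

def model(X):
    return X @ w

print(greedy_explanation(model, X, k=2))
```

Each greedy step costs one masked re-ranking per remaining candidate feature, so the sketch trades optimality of the subset for tractability, which is the usual motivation for greedy solutions to subset-selection objectives of this kind.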
ASJC Scopus subject areas
- Computer Science (all)
- Computer Science (miscellaneous)
- Computer Science (all)
- Information Systems
Cite
Singh, J., Khosla, M., Zhenye, W., & Anand, A. (2021). Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models. In ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval (pp. 203-210). (ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval).
Publication: Chapter in book/report/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models
AU - Singh, Jaspreet
AU - Khosla, Megha
AU - Zhenye, Wang
AU - Anand, Avishek
N1 - Funding Information: This work was partially funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor (grant no. 01DD20003). The authors would also like to acknowledge the Deutsche Forschungsgemeinschaft (DFG) - Project number 440551765, titled IREM: Interpretability of Retrieval Models.
PY - 2021/8/31
Y1 - 2021/8/31
N2 - Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to the ranked output for a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.
AB - Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to the ranked output for a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.
KW - explainability
KW - interpretability
KW - learning-to-rank
KW - LTR
UR - http://www.scopus.com/inward/record.url?scp=85114491388&partnerID=8YFLogxK
U2 - 10.1145/3471158.3472241
DO - 10.1145/3471158.3472241
M3 - Conference contribution
AN - SCOPUS:85114491388
T3 - ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval
SP - 203
EP - 210
BT - ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval
T2 - 11th ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2021
Y2 - 11 July 2021 through 11 July 2021
ER -