Details
Original language | English |
---|---|
Title of host publication | ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval |
Pages | 203-210 |
Number of pages | 8 |
ISBN (electronic) | 9781450386111 |
Publication status | Published - 31 Aug 2021 |
Event | 11th ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2021 - Virtual, Online, Canada. Duration: 11 Jul 2021 → 11 Jul 2021 |
Publication series
Name | ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval |
---|---|
Abstract
Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains make their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model-agnostic local explanation method that seeks to identify a small subset of input features as an explanation for the ranked output of a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model-agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.
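The abstract sketches the overall recipe: greedily grow a small feature subset so that the ranking induced by that subset alone stays valid with respect to the blackbox model's ranking for the query. Purely as an illustration of that kind of greedy loop, the sketch below scores candidate subsets with Kendall's tau between the original and masked score vectors; the zero-masking strategy, the tau-based validity proxy, and the names `greedy_explanation` and `model_scores` are assumptions made for this example, not the paper's actual definitions or implementation (the paper defines validity and completeness specifically for rankings and optimizes validity directly).

```python
# Hypothetical sketch of greedy feature-subset selection for explaining one
# query's ranking from a blackbox LTR model. Zero-masking and Kendall's tau
# are stand-ins chosen for illustration, not the paper's validity measure.
import numpy as np
from scipy.stats import kendalltau


def greedy_explanation(model_scores, docs, k=5):
    """Greedily pick up to k feature indices whose masked ranking best
    agrees with the blackbox model's ranking for this query.

    model_scores: callable mapping an (n_docs, n_features) matrix to scores.
    docs: (n_docs, n_features) feature matrix of one query's candidate documents.
    """
    reference = model_scores(docs)                  # scores on the full feature set
    selected, remaining = [], list(range(docs.shape[1]))

    for _ in range(k):
        if not remaining:
            break
        best_feat, best_tau = None, -np.inf
        for f in remaining:
            masked = np.zeros_like(docs)
            keep = selected + [f]
            masked[:, keep] = docs[:, keep]         # expose only the candidate subset
            tau, _ = kendalltau(reference, model_scores(masked))
            if np.isnan(tau):                       # constant scores => undefined tau
                tau = -1.0
            if tau > best_tau:
                best_feat, best_tau = f, tau
        selected.append(best_feat)
        remaining.remove(best_feat)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 10))                   # 20 candidate docs, 10 features
    weights = rng.normal(size=10)
    linear_ranker = lambda feats: feats @ weights   # toy blackbox ranker
    print(greedy_explanation(linear_ranker, X, k=3))
```

The toy linear ranker above only demonstrates the interface; any pointwise, pairwise, or listwise scorer that maps a feature matrix to per-document scores could be plugged in the same way.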
Keywords
- explainability, interpretability, learning-to-rank, LTR
ASJC Scopus subject areas
- Computer Science (all)
- Computer Science (miscellaneous)
- Information Systems
Cite this
Singh J, Khosla M, Wang Z, Anand A. Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models. In ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval. 2021. p. 203-210 (ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval). https://doi.org/10.1145/3471158.3472241
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models
AU - Singh, Jaspreet
AU - Khosla, Megha
AU - Wang, Zhenye
AU - Anand, Avishek
N1 - Funding Information: This work was partially funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor (grant no. 01DD20003). The authors would also like to acknowledge the Deutsche Forschungsgemeinschaft (DFG) - Project number 440551765 titled IREM: Interpretability of Retrieval Models.
PY - 2021/8/31
Y1 - 2021/8/31
N2 - Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains make their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model-agnostic local explanation method that seeks to identify a small subset of input features as an explanation for the ranked output of a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model-agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.
AB - Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains make their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model-agnostic local explanation method that seeks to identify a small subset of input features as an explanation for the ranked output of a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model-agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.
KW - explainability
KW - interpretability
KW - learning-to-rank
KW - LTR
UR - http://www.scopus.com/inward/record.url?scp=85114491388&partnerID=8YFLogxK
U2 - 10.1145/3471158.3472241
DO - 10.1145/3471158.3472241
M3 - Conference contribution
AN - SCOPUS:85114491388
T3 - ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval
SP - 203
EP - 210
BT - ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval
T2 - 11th ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2021
Y2 - 11 July 2021 through 11 July 2021
ER -