Details
Original language | English |
---|---|
Publication status | Electronically published (E-pub) - 29 Apr 2020 |
Abstract
Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to a ranking decision. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.
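Since the full text is not part of this record, the following is only a minimal illustrative sketch, not the authors' implementation, of the kind of greedy, model-agnostic feature-subset selection the abstract describes: features are added one at a time to maximize a validity proxy, here assumed to be Kendall's tau between the model's original ranking and the ranking obtained when only the selected features are kept. All names (`greedy_valid_explanation`, `rank_fn`, the background-masking scheme) are hypothetical.

```python
# Illustrative sketch only: greedily select a small feature subset whose
# presence alone (other features masked to a background value) best
# reproduces the black-box ranker's original ranking. The validity proxy
# (Kendall's tau between original and masked rankings) is an assumption
# made for this sketch, not the paper's exact definition.
import numpy as np
from scipy.stats import kendalltau


def ranking(scores):
    """Return document ranks (0 = top) induced by a score vector."""
    order = np.argsort(-scores)              # indices in descending score order
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))
    return ranks


def greedy_valid_explanation(rank_fn, docs, k, background=0.0):
    """Pick k features so that keeping ONLY those features yields a
    ranking closest to the model's original ranking.

    rank_fn : callable mapping an (n_docs, n_features) matrix to scores
    docs    : (n_docs, n_features) feature matrix for one query
    k       : explanation size (number of features to select)
    """
    original = ranking(rank_fn(docs))
    selected, remaining = [], list(range(docs.shape[1]))
    best_tau = -np.inf

    for _ in range(k):
        best_f = None
        for f in remaining:
            keep = selected + [f]
            masked = np.full_like(docs, background)
            masked[:, keep] = docs[:, keep]   # retain only the chosen features
            tau, _ = kendalltau(original, ranking(rank_fn(masked)))
            if tau > best_tau:
                best_f, best_tau = f, tau
        if best_f is None:                    # no candidate improved validity
            break
        selected.append(best_f)
        remaining.remove(best_f)

    return selected, best_tau


if __name__ == "__main__":
    # Toy linear "LTR model" over 10 documents with 6 features.
    rng = np.random.default_rng(0)
    w = rng.normal(size=6)
    docs = rng.normal(size=(10, 6))
    model = lambda X: X @ w                   # stand-in for any black-box ranker
    feats, validity = greedy_valid_explanation(model, docs, k=2)
    print("selected features:", feats, "validity (tau):", round(validity, 3))
```

The greedy loop mirrors the abstract's high-level description (maximizing validity directly, one feature at a time); the paper's actual objective and its completeness measure are not reproduced here.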
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Publication: Working paper/Preprint › Preprint
TY - UNPB
T1 - Valid Explanations for Learning to Rank Models
AU - Singh, Jaspreet
AU - Khosla, Megha
AU - Anand, Avishek
PY - 2020/4/29
Y1 - 2020/4/29
N2 - Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to a ranking decision. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.
AB - Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to a ranking decision. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.
KW - cs.LG
KW - cs.IR
KW - stat.ML
M3 - Preprint
BT - Valid Explanations for Learning to Rank Models
ER -