Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models

Publication: Contribution to book/report/anthology/conference proceedings · Conference paper · Research · Peer-reviewed

Authors

  • Jaspreet Singh
  • Megha Khosla
  • Wang Zhenye
  • Avishek Anand

Organisational units

External organisations

  • Amazon.com, Inc.

Details

Original language: English
Title of host publication: ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval
Pages: 203-210
Number of pages: 8
ISBN (electronic): 9781450386111
Publication status: Published - 31 Aug 2021
Event: 11th ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2021 - Virtual, Online, Canada
Duration: 11 July 2021 - 11 July 2021

Publication series

Name: ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval

Abstract

Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to the ranked output for a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.
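
The abstract does not spell out the algorithm, but the idea of greedily selecting a small feature subset whose presence preserves a per-query ranking can be illustrated with a short sketch. The snippet below is a hypothetical illustration, not the authors' objective or code: it approximates "validity" as Kendall's tau between the black-box model's original scores and the scores obtained when only the selected features are kept (all other features zeroed out), and adds features greedily up to a budget k. The names validity, greedy_explanation, and score_fn, as well as the zero-masking strategy, are assumptions made for this sketch.

import numpy as np
from scipy.stats import kendalltau

def validity(score_fn, X, selected, original_scores):
    """Rank agreement when the model only sees the selected feature columns.
    NOTE: this is an assumed proxy for the paper's validity measure."""
    masked = np.zeros_like(X)
    masked[:, selected] = X[:, selected]   # keep selected features, zero the rest
    tau, _ = kendalltau(original_scores, score_fn(masked))
    return tau

def greedy_explanation(score_fn, X, k):
    """Greedily add the feature that most improves rank preservation, up to k features."""
    original_scores = score_fn(X)
    selected = []
    remaining = list(range(X.shape[1]))
    for _ in range(min(k, len(remaining))):
        best_f = max(remaining,
                     key=lambda f: validity(score_fn, X, selected + [f], original_scores))
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy usage: a linear scorer stands in for the black-box LTR model; X holds
# one row per candidate document for a single query.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 10))        # 20 candidate documents, 10 features
w = rng.normal(size=10)              # hidden weights of the toy "black-box" scorer
score_fn = lambda feats: feats @ w
print(greedy_explanation(score_fn, X, k=3))   # indices of 3 explanatory features

A real implementation would mask removed features in a way appropriate to the model (for example, substituting background or reference values rather than zeros) and would score candidate subsets with the validity and completeness measures defined in the paper.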

ASJC Scopus subject areas

Cite this

Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models. / Singh, Jaspreet; Khosla, Megha; Zhenye, Wang et al.
ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval. 2021. pp. 203-210 (ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval).


Singh, J, Khosla, M, Zhenye, W & Anand, A 2021, Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models. in ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval. ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, pp. 203-210, 11th ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2021, Virtual, Online, Canada, 11 July 2021. https://doi.org/10.1145/3471158.3472241
Singh, J., Khosla, M., Zhenye, W., & Anand, A. (2021). Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models. In ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval (pp. 203-210). (ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval). https://doi.org/10.1145/3471158.3472241
Singh J, Khosla M, Zhenye W, Anand A. Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models. In: ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval. 2021. p. 203-210. (ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval). doi: 10.1145/3471158.3472241
Singh, Jaspreet ; Khosla, Megha ; Zhenye, Wang et al. / Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models. ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval. 2021. pp. 203-210 (ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval).
BibTeX
@inproceedings{67ed182884bb4bf6be6875e3d1c3c10b,
title = "Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models",
abstract = "Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to the ranked output for a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.",
keywords = "explainability, interpretability, learning-to-rank, LTR",
author = "Jaspreet Singh and Megha Khosla and Wang Zhenye and Avishek Anand",
note = "Funding Information: This work was partially funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor (grant no. 01DD20003). The authors would also like to acknowledge the Deutsche Forschungsgemeinschaft (DFG) - Project number 440551765 titled IREM: Interpretability of Retrieval Models; 11th ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2021 ; Conference date: 11-07-2021 Through 11-07-2021",
year = "2021",
month = aug,
day = "31",
doi = "10.1145/3471158.3472241",
language = "English",
series = "ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval",
pages = "203--210",
booktitle = "ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval",

}

RIS

TY - GEN

T1 - Extracting per Query Valid Explanations for Blackbox Learning-to-Rank Models

AU - Singh, Jaspreet

AU - Khosla, Megha

AU - Zhenye, Wang

AU - Anand, Avishek

N1 - Funding Information: This work was partially funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor (grant no. 01DD20003). The authors would also like to acknowledge the Deutsche Forschungsgemeinschaft (DFG) - Project number 440551765 titled IREM: Interpretability of Retrieval Models

PY - 2021/8/31

Y1 - 2021/8/31

N2 - Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to the ranked output for a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.

AB - Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems dealing with a large number of features. The popularity and widespread application of LTR models in prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists that deals with interpreting the decisions of learning systems that output rankings. In this paper we propose a model agnostic local explanation method that seeks to identify a small subset of input features as explanation to the ranked output for a given query. We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features, as a way of measuring goodness. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model agnostic explanation approaches across pointwise, pairwise and listwise LTR models in validity while not compromising on completeness.

KW - explainability

KW - interpretability

KW - learning-to-rank

KW - LTR

UR - http://www.scopus.com/inward/record.url?scp=85114491388&partnerID=8YFLogxK

U2 - 10.1145/3471158.3472241

DO - 10.1145/3471158.3472241

M3 - Conference contribution

AN - SCOPUS:85114491388

T3 - ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval

SP - 203

EP - 210

BT - ICTIR 2021 - Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval

T2 - 11th ACM SIGIR International Conference on Theory of Information Retrieval, ICTIR 2021

Y2 - 11 July 2021 through 11 July 2021

ER -