Listwise Explanations for Ranking Models Using Multiple Explainers

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Lijun Lyu
  • Avishek Anand

External Research Organisations

  • Delft University of Technology

Details

Original language: English
Title of host publication: Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Proceedings
Editors: Jaap Kamps, Lorraine Goeuriot, Fabio Crestani, Maria Maistro, Hideo Joho, Brian Davis, Cathal Gurrin, Annalina Caputo, Udo Kruschwitz
Place of Publication: Cham
Pages: 653-668
Number of pages: 16
ISBN (electronic): 978-3-031-28244-7
Publication status: Published - 2023
Event: 45th European Conference on Information Retrieval, ECIR 2023 - Dublin, Ireland
Duration: 2 Apr 2023 – 6 Apr 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13980 LNCS
ISSN (Print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

This paper proposes a novel approach towards better interpretability of a trained text-based ranking model in a post-hoc manner. A popular approach for post-hoc interpretability of text ranking models is based on locally approximating the model behavior using a simple ranker. Since rankings have multiple relevance factors and are aggregations of predictions, existing approaches that use a single ranker might not be sufficient to approximate a complex model, resulting in low fidelity. In this paper, we overcome this problem by considering multiple simple rankers to better approximate the entire ranking list from a black-box ranking model. We pose the problem of local approximation as a Generalized Preference Coverage (GPC) problem that incorporates multiple simple rankers towards the listwise explanation of ranking models. Our method Multiplex uses a linear programming approach to judiciously extract the explanation terms, so as to explain the entire ranking list. We conduct extensive experiments on a variety of ranking models and report fidelity improvements of 37%–54% over existing competitors. We finally compare explanations in terms of multiple relevance factors and topic aspects to better understand the logic of ranking decisions, showcasing our explainers’ practical utility.
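
To make the coverage formulation in the abstract more concrete, the following is a minimal sketch of how an LP-relaxed term-selection step could look, assuming a precomputed mapping from each black-box pairwise preference to the candidate terms that reproduce it under some simple ranker. It is an illustrative reconstruction in Python, not the authors' Multiplex implementation; the names select_terms, covers and budget are hypothetical.

# Illustrative sketch only (not the authors' code): select a small set of
# explanation terms that "cover" as many black-box pairwise preferences as
# possible, via an LP relaxation of a coverage problem solved with SciPy.
import numpy as np
from scipy.optimize import linprog

def select_terms(terms, covers, budget):
    """Pick at most `budget` terms covering as many preferences as possible.

    covers[p] lists the indices of candidate terms whose simple-ranker scores
    reproduce the black-box preference p (a hypothetical precomputed input).
    """
    n_t, n_p = len(terms), len(covers)
    # Decision vector: [x_0..x_{n_t-1} term indicators | y_0..y_{n_p-1} coverage].
    c = np.concatenate([np.zeros(n_t), -np.ones(n_p)])  # maximise covered prefs
    A_ub, b_ub = [], []
    for p, term_ids in enumerate(covers):
        row = np.zeros(n_t + n_p)
        row[n_t + p] = 1.0                   # y_p ...
        for t in term_ids:
            row[t] -= 1.0                    # ... minus the x_t that cover p
        A_ub.append(row)
        b_ub.append(0.0)                     # y_p <= sum of covering x_t
    A_ub.append(np.concatenate([np.ones(n_t), np.zeros(n_p)]))
    b_ub.append(float(budget))               # at most `budget` terms selected
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * (n_t + n_p), method="highs")
    x = res.x[:n_t]
    # Simple rounding heuristic: keep the highest-weight terms.
    return [terms[i] for i in np.argsort(-x)[:budget]]

# Toy usage: four candidate terms, three pairwise preferences, pick two terms.
print(select_terms(["tax", "reform", "policy", "vote"],
                   covers=[[0, 1], [1, 2], [3]], budget=2))

On this toy input the LP selects the two terms that together cover all three preferences; the paper's actual GPC formulation is richer, combining preferences induced by multiple simple rankers over the entire ranked list.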

Keywords

    Explanation, List-wise, Neural, Post-hoc, Ranking

Cite this

Listwise Explanations for Ranking Models Using Multiple Explainers. / Lyu, Lijun; Anand, Avishek.
Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Proceedings. ed. / Jaap Kamps; Lorraine Goeuriot; Fabio Crestani; Maria Maistro; Hideo Joho; Brian Davis; Cathal Gurrin; Annalina Caputo; Udo Kruschwitz. Cham, 2023. p. 653-668 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13980 LNCS).

Lyu, L & Anand, A 2023, Listwise Explanations for Ranking Models Using Multiple Explainers. in J Kamps, L Goeuriot, F Crestani, M Maistro, H Joho, B Davis, C Gurrin, A Caputo & U Kruschwitz (eds), Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13980 LNCS, Cham, pp. 653-668, 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, 2 Apr 2023. https://doi.org/10.1007/978-3-031-28244-7_41
Lyu, L., & Anand, A. (2023). Listwise Explanations for Ranking Models Using Multiple Explainers. In J. Kamps, L. Goeuriot, F. Crestani, M. Maistro, H. Joho, B. Davis, C. Gurrin, A. Caputo, & U. Kruschwitz (Eds.), Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Proceedings (pp. 653-668). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13980 LNCS). https://doi.org/10.1007/978-3-031-28244-7_41
Lyu L, Anand A. Listwise Explanations for Ranking Models Using Multiple Explainers. In Kamps J, Goeuriot L, Crestani F, Maistro M, Joho H, Davis B, Gurrin C, Caputo A, Kruschwitz U, editors, Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Proceedings. Cham. 2023. p. 653-668. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). Epub 2023 Mar 17. doi: 10.1007/978-3-031-28244-7_41
Lyu, Lijun ; Anand, Avishek. / Listwise Explanations for Ranking Models Using Multiple Explainers. Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Proceedings. editor / Jaap Kamps ; Lorraine Goeuriot ; Fabio Crestani ; Maria Maistro ; Hideo Joho ; Brian Davis ; Cathal Gurrin ; Annalina Caputo ; Udo Kruschwitz. Cham, 2023. pp. 653-668 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{2e3858b1a7fa4e938011d3a6930db06d,
title = "Listwise Explanations for Ranking Models Using Multiple Explainers",
abstract = "This paper proposes a novel approach towards better interpretability of a trained text-based ranking model in a post-hoc manner. A popular approach for post-hoc interpretability of text ranking models is based on locally approximating the model behavior using a simple ranker. Since rankings have multiple relevance factors and are aggregations of predictions, existing approaches that use a single ranker might not be sufficient to approximate a complex model, resulting in low fidelity. In this paper, we overcome this problem by considering multiple simple rankers to better approximate the entire ranking list from a black-box ranking model. We pose the problem of local approximation as a Generalized Preference Coverage (GPC) problem that incorporates multiple simple rankers towards the listwise explanation of ranking models. Our method Multiplex uses a linear programming approach to judiciously extract the explanation terms, so as to explain the entire ranking list. We conduct extensive experiments on a variety of ranking models and report fidelity improvements of 37%–54% over existing competitors. We finally compare explanations in terms of multiple relevance factors and topic aspects to better understand the logic of ranking decisions, showcasing our explainers{\textquoteright} practical utility.",
keywords = "Explanation, List-wise, Neural, Post-hoc, Ranking",
author = "Lijun Lyu and Avishek Anand",
note = "Funding Information: Acknowledgements. This work is partially supported by the German Research Foundation (DFG), under the Project IREM with grant No. AN 996/1-1.; 45th European Conference on Information Retrieval, ECIR 2023 ; Conference date: 02-04-2023 Through 06-04-2023",
year = "2023",
doi = "10.1007/978-3-031-28244-7_41",
language = "English",
isbn = "9783031282430",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "653--668",
editor = "Jaap Kamps and Lorraine Goeuriot and Fabio Crestani and Maria Maistro and Hideo Joho and Brian Davis and Cathal Gurrin and Annalina Caputo and Udo Kruschwitz",
booktitle = "Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Proceedings",

}

RIS

TY - GEN

T1 - Listwise Explanations for Ranking Models Using Multiple Explainers

AU - Lyu, Lijun

AU - Anand, Avishek

N1 - Funding Information: Acknowledgements. This work is partially supported by the German Research Foundation (DFG), under the Project IREM with grant No. AN 996/1-1.

PY - 2023

Y1 - 2023

N2 - This paper proposes a novel approach towards better interpretability of a trained text-based ranking model in a post-hoc manner. A popular approach for post-hoc interpretability of text ranking models is based on locally approximating the model behavior using a simple ranker. Since rankings have multiple relevance factors and are aggregations of predictions, existing approaches that use a single ranker might not be sufficient to approximate a complex model, resulting in low fidelity. In this paper, we overcome this problem by considering multiple simple rankers to better approximate the entire ranking list from a black-box ranking model. We pose the problem of local approximation as a Generalized Preference Coverage (GPC) problem that incorporates multiple simple rankers towards the listwise explanation of ranking models. Our method Multiplex uses a linear programming approach to judiciously extract the explanation terms, so as to explain the entire ranking list. We conduct extensive experiments on a variety of ranking models and report fidelity improvements of 37%–54% over existing competitors. We finally compare explanations in terms of multiple relevance factors and topic aspects to better understand the logic of ranking decisions, showcasing our explainers’ practical utility.

AB - This paper proposes a novel approach towards better interpretability of a trained text-based ranking model in a post-hoc manner. A popular approach for post-hoc interpretability of text ranking models is based on locally approximating the model behavior using a simple ranker. Since rankings have multiple relevance factors and are aggregations of predictions, existing approaches that use a single ranker might not be sufficient to approximate a complex model, resulting in low fidelity. In this paper, we overcome this problem by considering multiple simple rankers to better approximate the entire ranking list from a black-box ranking model. We pose the problem of local approximation as a Generalized Preference Coverage (GPC) problem that incorporates multiple simple rankers towards the listwise explanation of ranking models. Our method Multiplex uses a linear programming approach to judiciously extract the explanation terms, so as to explain the entire ranking list. We conduct extensive experiments on a variety of ranking models and report fidelity improvements of 37%–54% over existing competitors. We finally compare explanations in terms of multiple relevance factors and topic aspects to better understand the logic of ranking decisions, showcasing our explainers’ practical utility.

KW - Explanation

KW - List-wise

KW - Neural

KW - Post-hoc

KW - Ranking

UR - http://www.scopus.com/inward/record.url?scp=85151134683&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-28244-7_41

DO - 10.1007/978-3-031-28244-7_41

M3 - Conference contribution

AN - SCOPUS:85151134683

SN - 9783031282430

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 653

EP - 668

BT - Advances in Information Retrieval - 45th European Conference on Information Retrieval, ECIR 2023, Proceedings

A2 - Kamps, Jaap

A2 - Goeuriot, Lorraine

A2 - Crestani, Fabio

A2 - Maistro, Maria

A2 - Joho, Hideo

A2 - Davis, Brian

A2 - Gurrin, Cathal

A2 - Caputo, Annalina

A2 - Kruschwitz, Udo

CY - Cham

T2 - 45th European Conference on Information Retrieval, ECIR 2023

Y2 - 2 April 2023 through 6 April 2023

ER -