
A study on the Interpretability of Neural Retrieval Models using DeepSHAP

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authorship

  • Zeon Trevor Fernando
  • Jaspreet Singh
  • Avishek Anand

Organisational units

Details

Original language: English
Title of host publication: SIGIR 2019
Subtitle: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval
Publisher: Association for Computing Machinery (ACM)
Pages: 1005-1008
Number of pages: 4
ISBN (electronic): 9781450361729
Publication status: Published - 18 July 2019
Event: 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019 - Paris, France
Duration: 21 July 2019 - 25 July 2019

Abstract

A recent trend in IR has been the use of neural networks to learn retrieval models for text-based ad hoc search. While various approaches and architectures have yielded significantly better performance than traditional retrieval models such as BM25, it is still difficult to understand exactly why a document is relevant to a query. In the ML community, several approaches for explaining decisions made by deep neural networks have been proposed, including DeepSHAP, which modifies the DeepLIFT algorithm to estimate the relative importance (Shapley values) of input features for a given decision by comparing the activations in the network for a given input against the activations caused by a reference input. In image classification, the reference input tends to be a plain black image. While DeepSHAP has been well studied for image classification tasks, it remains to be seen how we can adapt it to explain the output of Neural Retrieval Models (NRMs). In particular, what is a good “black” image in the context of IR? In this paper we explore various reference input document construction techniques. Additionally, we compare the explanations generated by DeepSHAP to those of LIME (a model-agnostic approach) and find that the explanations differ considerably. Our study raises concerns regarding the robustness and accuracy of explanations produced for NRMs. With this paper we aim to shed light on interesting problems surrounding interpretability in NRMs and highlight areas of future work.
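To make the question of a suitable reference input concrete, below is a minimal, hypothetical sketch of how DeepSHAP attributions can be computed under two candidate references using the public shap package. The toy Keras scorer, the vocabulary size, and the two reference constructions (an all-zeros vector and a random placeholder standing in for a corpus-average document) are illustrative assumptions, not the models or reference-document techniques studied in the paper.

# Illustrative sketch only: the paper's actual neural retrieval models and its
# reference-document construction strategies are not reproduced here. This toy
# example only shows how DeepSHAP attributions depend on the chosen reference
# ("background") input, using the public shap package with a small Keras scorer
# over bag-of-words features. All model details below are hypothetical.
import numpy as np
import shap
import tensorflow as tf

VOCAB_SIZE = 1000  # hypothetical vocabulary size

# Stand-in "neural retrieval model": scores a query-document feature vector.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(VOCAB_SIZE,)),
    tf.keras.layers.Dense(1),  # relevance score
])

# Query-document pair to explain (placeholder features).
x = np.random.rand(1, VOCAB_SIZE).astype("float32")

# Candidate reference inputs: the IR analogue of the "plain black image".
references = {
    "all_zeros": np.zeros((1, VOCAB_SIZE), dtype="float32"),
    # Random placeholder standing in for, e.g., a corpus-average document.
    "collection_mean": np.random.rand(1, VOCAB_SIZE).astype("float32"),
}

for name, ref in references.items():
    explainer = shap.DeepExplainer(model, ref)  # DeepSHAP with this reference
    shap_values = explainer.shap_values(x)      # per-feature attributions
    top_terms = np.argsort(-np.abs(np.asarray(shap_values)).ravel())[:10]
    print(name, top_terms)                      # highest-attribution term indices

Inspecting which term indices receive the largest attributions under each reference is the kind of comparison the paper's reference-construction question calls for; the sketch makes no claim about which choice the authors found preferable.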

ASJC Scopus subject areas

Cite this

A study on the Interpretability of Neural Retrieval Models using DeepSHAP. / Fernando, Zeon Trevor; Singh, Jaspreet; Anand, Avishek.
SIGIR 2019: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery (ACM), 2019. pp. 1005-1008.


Fernando, ZT, Singh, J & Anand, A 2019, A study on the Interpretability of Neural Retrieval Models using DeepSHAP. in SIGIR 2019: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery (ACM), pp. 1005-1008, 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019, Paris, France, 21 July 2019. https://doi.org/10.48550/arXiv.1907.06484, https://doi.org/10.1145/3331184.3331312
Fernando, Z. T., Singh, J., & Anand, A. (2019). A study on the Interpretability of Neural Retrieval Models using DeepSHAP. In SIGIR 2019: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 1005-1008). Association for Computing Machinery (ACM). https://doi.org/10.48550/arXiv.1907.06484, https://doi.org/10.1145/3331184.3331312
Fernando ZT, Singh J, Anand A. A study on the Interpretability of Neural Retrieval Models using DeepSHAP. In SIGIR 2019: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery (ACM). 2019. p. 1005-1008. doi: 10.48550/arXiv.1907.06484, 10.1145/3331184.3331312
Fernando, Zeon Trevor ; Singh, Jaspreet ; Anand, Avishek. / A study on the Interpretability of Neural Retrieval Models using DeepSHAP. SIGIR 2019: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. Association for Computing Machinery (ACM), 2019. pp. 1005-1008
@inproceedings{3b1076eae7874d0db566033a31087229,
title = "A study on the Interpretability of Neural Retrieval Models using DeepSHAP",
abstract = "A recent trend in IR has been the usage of neural networks to learn retrieval models for text based adhoc search. While various approaches and architectures have yielded significantly better performance than traditional retrieval models such as BM25, it is still difficult to understand exactly why a document is relevant to a query. In the ML community several approaches for explaining decisions made by deep neural networks have been proposed - including DeepSHAP which modifies the DeepLift algorithm to estimate the relative importance (shapley values) of input features for a given decision by comparing the activations in the network for a given image against the activations caused by a reference input. In image classification, the reference input tends to be a plain black image. While DeepSHAP has been well studied for image classification tasks, it remains to be seen how we can adapt it to explain the output of Neural Retrieval Models (NRMs). In particular, what is a good “black” image in the context of IR? In this paper we explored various reference input document construction techniques. Additionally, we compared the explanations generated by DeepSHAP to LIME (a model agnostic approach) and found that the explanations differ considerably. Our study raises concerns regarding the robustness and accuracy of explanations produced for NRMs. With this paper we aim to shed light on interesting problems surrounding interpretability in NRMs and highlight areas of future work.",
author = "Fernando, {Zeon Trevor} and Jaspreet Singh and Avishek Anand",
note = "Funding Information: This work was supported by the Amazon research award on {\textquoteleft}Interpretability of Neural Rankers{\textquoteright}.; 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019 ; Conference date: 21-07-2019 Through 25-07-2019",
year = "2019",
month = jul,
day = "18",
doi = "10.48550/arXiv.1907.06484",
language = "English",
pages = "1005--1008",
booktitle = "SIGIR 2019",
publisher = "Association for Computing Machinery (ACM)",
address = "United States",

}


TY - GEN

T1 - A study on the Interpretability of Neural Retrieval Models using DeepSHAP

AU - Fernando, Zeon Trevor

AU - Singh, Jaspreet

AU - Anand, Avishek

N1 - Funding Information: This work was supported by the Amazon research award on ‘Interpretability of Neural Rankers’.

PY - 2019/7/18

Y1 - 2019/7/18

N2 - A recent trend in IR has been the usage of neural networks to learn retrieval models for text based adhoc search. While various approaches and architectures have yielded significantly better performance than traditional retrieval models such as BM25, it is still difficult to understand exactly why a document is relevant to a query. In the ML community several approaches for explaining decisions made by deep neural networks have been proposed - including DeepSHAP which modifies the DeepLift algorithm to estimate the relative importance (shapley values) of input features for a given decision by comparing the activations in the network for a given image against the activations caused by a reference input. In image classification, the reference input tends to be a plain black image. While DeepSHAP has been well studied for image classification tasks, it remains to be seen how we can adapt it to explain the output of Neural Retrieval Models (NRMs). In particular, what is a good “black” image in the context of IR? In this paper we explored various reference input document construction techniques. Additionally, we compared the explanations generated by DeepSHAP to LIME (a model agnostic approach) and found that the explanations differ considerably. Our study raises concerns regarding the robustness and accuracy of explanations produced for NRMs. With this paper we aim to shed light on interesting problems surrounding interpretability in NRMs and highlight areas of future work.

AB - A recent trend in IR has been the usage of neural networks to learn retrieval models for text based adhoc search. While various approaches and architectures have yielded significantly better performance than traditional retrieval models such as BM25, it is still difficult to understand exactly why a document is relevant to a query. In the ML community several approaches for explaining decisions made by deep neural networks have been proposed - including DeepSHAP which modifies the DeepLift algorithm to estimate the relative importance (shapley values) of input features for a given decision by comparing the activations in the network for a given image against the activations caused by a reference input. In image classification, the reference input tends to be a plain black image. While DeepSHAP has been well studied for image classification tasks, it remains to be seen how we can adapt it to explain the output of Neural Retrieval Models (NRMs). In particular, what is a good “black” image in the context of IR? In this paper we explored various reference input document construction techniques. Additionally, we compared the explanations generated by DeepSHAP to LIME (a model agnostic approach) and found that the explanations differ considerably. Our study raises concerns regarding the robustness and accuracy of explanations produced for NRMs. With this paper we aim to shed light on interesting problems surrounding interpretability in NRMs and highlight areas of future work.

UR - http://www.scopus.com/inward/record.url?scp=85073786690&partnerID=8YFLogxK

U2 - 10.48550/arXiv.1907.06484

DO - 10.48550/arXiv.1907.06484

M3 - Conference contribution

AN - SCOPUS:85073786690

SP - 1005

EP - 1008

BT - SIGIR 2019

PB - Association for Computing Machinery (ACM)

T2 - 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2019

Y2 - 21 July 2019 through 25 July 2019

ER -