Causal Probing for Dual Encoders

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Jonas Wallat
  • Hauke Hinrichs
  • Avishek Anand

External Research Organisations

  • Delft University of Technology

Details

Original language: English
Title of host publication: CIKM 2024
Subtitle of host publication: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management
Pages: 2292-2303
Number of pages: 12
ISBN (electronic): 9798400704369
Publication status: Published - 21 Oct 2024
Event: 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 - Boise, United States
Duration: 21 Oct 2024 - 25 Oct 2024

Abstract

Dual encoders are highly effective and widely deployed in the retrieval phase for passage and document ranking, question answering, or retrieval-augmented generation (RAG) setups. Most dual-encoder models use transformer models like BERT to map input queries and output targets to a common vector space encoding the semantic similarity. Despite their prevalence and impressive performance, little is known about the inner workings of dense encoders for retrieval. We investigate neural retrievers using the probing paradigm to identify well-understood IR properties that causally result in ranking performance. Unlike existing works that have probed cross-encoders to show query-document interactions, we provide a principled approach to probe dual-encoders. Importantly, we employ causal probing to avoid correlation effects that might be artefacts of vanilla probing. We conduct extensive experiments on one such dual encoder (TCT-ColBERT) to check for the existence and relevance of six properties: term importance, lexical matching (BM25), semantic matching, question classification, and the two linguistic properties of named entity recognition and coreference resolution. Our layer-wise analysis shows important differences between re-rankers and dual encoders, establishing which tasks are not only understood by the model but also used for inference.
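To make the probing paradigm described above concrete, the following is a minimal illustrative sketch of the general idea behind causal probing, not the paper's actual implementation or data: a linear probe is first fit on frozen embeddings to test whether a property is encoded, and the probed direction is then iteratively projected out (an INLP-style removal, here on synthetic embeddings) so that the downstream effect of removing the property can be measured.

```python
# Illustrative sketch of causal (amnesic/INLP-style) probing on synthetic data:
# 1) fit a linear probe for a property on frozen embeddings,
# 2) project the probed direction out of the embeddings,
# 3) re-probe: if accuracy collapses, the property was linearly encoded.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 1000, 16
labels = rng.integers(0, 2, n)            # a synthetic binary "property" label
X = rng.normal(size=(n, d))
X[:, 0] += 2.5 * (2 * labels - 1)         # encode the property along one direction

def probe(X, y):
    """Fit a linear probe; return its accuracy and weight vector."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf.score(X, y), clf.coef_[0]

acc_before, _ = probe(X, labels)

# Iteratively remove every direction a linear probe can exploit (INLP-style).
X_removed = X.copy()
for _ in range(5):
    _, w = probe(X_removed, labels)
    w = w / np.linalg.norm(w)
    X_removed = X_removed - np.outer(X_removed @ w, w)  # null out probed direction

acc_after, _ = probe(X_removed, labels)
print(f"probe accuracy before removal: {acc_before:.2f}, after: {acc_after:.2f}")
```

In a causal-probing setup, the counterfactual embeddings (here `X_removed`) would then be fed back into the retrieval pipeline to measure how ranking performance changes once the property is no longer linearly accessible.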

Keywords

    information retrieval, interpretability, language models, probing


Cite this

Causal Probing for Dual Encoders. / Wallat, Jonas; Hinrichs, Hauke; Anand, Avishek.
CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024. p. 2292-2303.

Wallat, J, Hinrichs, H & Anand, A 2024, Causal Probing for Dual Encoders. in CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. pp. 2292-2303, 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024, Boise, United States, 21 Oct 2024. https://doi.org/10.1145/3627673.3679556
Wallat, J., Hinrichs, H., & Anand, A. (2024). Causal Probing for Dual Encoders. In CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management (pp. 2292-2303). https://doi.org/10.1145/3627673.3679556
Wallat J, Hinrichs H, Anand A. Causal Probing for Dual Encoders. In: CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024. p. 2292-2303. doi: 10.1145/3627673.3679556
Wallat, Jonas ; Hinrichs, Hauke ; Anand, Avishek. / Causal Probing for Dual Encoders. CIKM 2024: Proceedings of the 33rd ACM International Conference on Information and Knowledge Management. 2024. pp. 2292-2303.
@inproceedings{03acd2ad182f4287acad9d6c28897a58,
title = "Causal Probing for Dual Encoders",
abstract = "Dual encoders are highly effective and widely deployed in the retrieval phase for passage and document ranking, question answering, or retrieval-augmented generation (RAG) setups. Most dual-encoder models use transformer models like BERT to map input queries and output targets to a common vector space encoding the semantic similarity. Despite their prevalence and impressive performance, little is known about the inner workings of dense encoders for retrieval. We investigate neural retrievers using the probing paradigm to identify well-understood IR properties that causally result in ranking performance. Unlike existing works that have probed cross-encoders to show query-document interactions, we provide a principled approach to probe dual-encoders. Importantly, we employ causal probing to avoid correlation effects that might be artefacts of vanilla probing. We conduct extensive experiments on one such dual encoder (TCT-ColBERT) to check for the existence and relevance of six properties: term importance, lexical matching (BM25), semantic matching, question classification, and the two linguistic properties of named entity recognition and coreference resolution. Our layer-wise analysis shows important differences between re-rankers and dual encoders, establishing which tasks are not only understood by the model but also used for inference.",
keywords = "information retrieval, interpretability, language models, probing",
author = "Jonas Wallat and Hauke Hinrichs and Avishek Anand",
note = "Publisher Copyright: {\textcopyright} 2024 ACM.; 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024 ; Conference date: 21-10-2024 Through 25-10-2024",
year = "2024",
month = oct,
day = "21",
doi = "10.1145/3627673.3679556",
language = "English",
pages = "2292--2303",
booktitle = "CIKM 2024",
}


TY - GEN

T1 - Causal Probing for Dual Encoders

AU - Wallat, Jonas

AU - Hinrichs, Hauke

AU - Anand, Avishek

N1 - Publisher Copyright: © 2024 ACM.

PY - 2024/10/21

Y1 - 2024/10/21

N2 - Dual encoders are highly effective and widely deployed in the retrieval phase for passage and document ranking, question answering, or retrieval-augmented generation (RAG) setups. Most dual-encoder models use transformer models like BERT to map input queries and output targets to a common vector space encoding the semantic similarity. Despite their prevalence and impressive performance, little is known about the inner workings of dense encoders for retrieval. We investigate neural retrievers using the probing paradigm to identify well-understood IR properties that causally result in ranking performance. Unlike existing works that have probed cross-encoders to show query-document interactions, we provide a principled approach to probe dual-encoders. Importantly, we employ causal probing to avoid correlation effects that might be artefacts of vanilla probing. We conduct extensive experiments on one such dual encoder (TCT-ColBERT) to check for the existence and relevance of six properties: term importance, lexical matching (BM25), semantic matching, question classification, and the two linguistic properties of named entity recognition and coreference resolution. Our layer-wise analysis shows important differences between re-rankers and dual encoders, establishing which tasks are not only understood by the model but also used for inference.

AB - Dual encoders are highly effective and widely deployed in the retrieval phase for passage and document ranking, question answering, or retrieval-augmented generation (RAG) setups. Most dual-encoder models use transformer models like BERT to map input queries and output targets to a common vector space encoding the semantic similarity. Despite their prevalence and impressive performance, little is known about the inner workings of dense encoders for retrieval. We investigate neural retrievers using the probing paradigm to identify well-understood IR properties that causally result in ranking performance. Unlike existing works that have probed cross-encoders to show query-document interactions, we provide a principled approach to probe dual-encoders. Importantly, we employ causal probing to avoid correlation effects that might be artefacts of vanilla probing. We conduct extensive experiments on one such dual encoder (TCT-ColBERT) to check for the existence and relevance of six properties: term importance, lexical matching (BM25), semantic matching, question classification, and the two linguistic properties of named entity recognition and coreference resolution. Our layer-wise analysis shows important differences between re-rankers and dual encoders, establishing which tasks are not only understood by the model but also used for inference.

KW - information retrieval

KW - interpretability

KW - language models

KW - probing

UR - http://www.scopus.com/inward/record.url?scp=85209995253&partnerID=8YFLogxK

U2 - 10.1145/3627673.3679556

DO - 10.1145/3627673.3679556

M3 - Conference contribution

AN - SCOPUS:85209995253

SP - 2292

EP - 2303

BT - CIKM 2024

T2 - 33rd ACM International Conference on Information and Knowledge Management, CIKM 2024

Y2 - 21 October 2024 through 25 October 2024

ER -
