Probing BERT for Ranking Abilities

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Jonas Wallat
  • Fabian Beringer
  • Abhijit Anand
  • Avishek Anand

External Research Organisations

  • Delft University of Technology

Details

Original language: English
Title of host publication: Advances in Information Retrieval
Subtitle of host publication: 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II
Editors: Jaap Kamps, Lorraine Goeuriot, Fabio Crestani, Maria Maistro, Hideo Joho, Brian Davis, Cathal Gurrin, Annalina Caputo, Udo Kruschwitz
Place of publication: Cham
Pages: 255–273
Number of pages: 19
ISBN (electronic): 978-3-031-28238-6
Publication status: Published - 17 Mar 2023
Event: 45th European Conference on Information Retrieval, ECIR 2023 - Dublin, Ireland
Duration: 2 Apr 2023 – 6 Apr 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13981 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear whether contextual models understand the well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach for analyzing language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focused on the linguistic and knowledge-aware capabilities of models, or on axiomatic analysis of ranking models. We fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching and semantic similarity, as well as linguistic properties such as coreference resolution and named entity recognition. Our experiments reveal an interesting trend: BERT-based rankers encode ranking abilities best at intermediate layers. Based on these observations, we train a ranking model by augmenting the ranking data with the probe data, showing initial yet consistent performance improvements. The code is available at https://github.com/yolomeus/probing-search/.
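The layer-wise probing setup the abstract describes can be sketched as follows: representations from each layer of a frozen encoder are fed to a small diagnostic classifier ("probe") trained per layer, and per-layer probe accuracy indicates where a property is encoded. This is a minimal illustrative sketch, not the paper's implementation — the encoder outputs here are synthetic stand-ins (in the actual study they would come from a frozen BERT-based ranker), and all names are hypothetical.

```python
import numpy as np

# Synthetic stand-in for frozen per-layer encoder outputs:
# each "layer" is an (n_examples, hidden_dim) matrix. The amount of
# label signal injected per layer mimics a property being encoded
# more strongly at some layers than others.
rng = np.random.default_rng(0)
n, d = 200, 16
labels = rng.integers(0, 2, size=n)

def make_layer(signal):
    """Random representations with `signal` units of class separation."""
    reps = rng.normal(size=(n, d))
    reps[:, 0] += signal * (2 * labels - 1)  # inject label information
    return reps

# Four fake "layers"; the third carries the strongest signal.
layers = [make_layer(s) for s in (0.1, 1.0, 2.0, 0.5)]

def train_probe(X, y, lr=0.1, epochs=200):
    """Logistic-regression probe: the simplest diagnostic classifier."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
        b -= lr * np.mean(p - y)                 # gradient step on bias
    return w, b

def probe_accuracy(X, y):
    """Train a probe on layer representations X and report its accuracy."""
    w, b = train_probe(X, y)
    return np.mean(((X @ w + b) > 0) == y)

accs = [probe_accuracy(X, labels) for X in layers]
for i, a in enumerate(accs):
    print(f"layer {i}: probe accuracy = {a:.2f}")
```

A real probing study would additionally control for probe capacity and compare against baselines (e.g. probes on random representations); here the per-layer accuracy curve simply peaks at the layer where the injected signal is strongest, mirroring the abstract's observation that ranking abilities peak at intermediate layers.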

Cite this

Probing BERT for Ranking Abilities. / Wallat, Jonas; Beringer, Fabian; Anand, Abhijit et al.
Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. ed. / Jaap Kamps; Lorraine Goeuriot; Fabio Crestani; Maria Maistro; Hideo Joho; Brian Davis; Cathal Gurrin; Annalina Caputo; Udo Kruschwitz. Cham, 2023. p. 255-273 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13981 LNCS).


Wallat, J, Beringer, F, Anand, A & Anand, A 2023, Probing BERT for Ranking Abilities. in J Kamps, L Goeuriot, F Crestani, M Maistro, H Joho, B Davis, C Gurrin, A Caputo & U Kruschwitz (eds), Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13981 LNCS, Cham, pp. 255-273, 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, 2 Apr 2023. https://doi.org/10.1007/978-3-031-28238-6_17
Wallat, J., Beringer, F., Anand, A., & Anand, A. (2023). Probing BERT for Ranking Abilities. In J. Kamps, L. Goeuriot, F. Crestani, M. Maistro, H. Joho, B. Davis, C. Gurrin, A. Caputo, & U. Kruschwitz (Eds.), Advances in Information Retrieval: 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II (pp. 255-273). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13981 LNCS). https://doi.org/10.1007/978-3-031-28238-6_17
Wallat J, Beringer F, Anand A, Anand A. Probing BERT for Ranking Abilities. In Kamps J, Goeuriot L, Crestani F, Maistro M, Joho H, Davis B, Gurrin C, Caputo A, Kruschwitz U, editors, Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. Cham. 2023. p. 255-273. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-031-28238-6_17
Wallat, Jonas ; Beringer, Fabian ; Anand, Abhijit et al. / Probing BERT for Ranking Abilities. Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. editor / Jaap Kamps ; Lorraine Goeuriot ; Fabio Crestani ; Maria Maistro ; Hideo Joho ; Brian Davis ; Cathal Gurrin ; Annalina Caputo ; Udo Kruschwitz. Cham, 2023. pp. 255-273 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{320363a182464e93a5b50745cbfd3298,
title = "Probing BERT for Ranking Abilities",
abstract = "Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focussed on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity as well as linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (The code is available at https://github.com/yolomeus/probing-search/ ).",
author = "Jonas Wallat and Fabian Beringer and Abhijit Anand and Avishek Anand",
note = "Funding Information: Acknowledgements. This research was (partially) funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor with grant No. 01DD20003.; 45th European Conference on Information Retrieval, ECIR 2023 ; Conference date: 02-04-2023 Through 06-04-2023",
year = "2023",
month = mar,
day = "17",
doi = "10.1007/978-3-031-28238-6_17",
language = "English",
isbn = "9783031282379",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "255--273",
editor = "Jaap Kamps and Lorraine Goeuriot and Fabio Crestani and Maria Maistro and Hideo Joho and Brian Davis and Cathal Gurrin and Annalina Caputo and Udo Kruschwitz",
booktitle = "Advances in Information Retrieval",

}

RIS

TY - GEN

T1 - Probing BERT for Ranking Abilities

AU - Wallat, Jonas

AU - Beringer, Fabian

AU - Anand, Abhijit

AU - Anand, Avishek

N1 - Funding Information: Acknowledgements. This research was (partially) funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor with grant No. 01DD20003.

PY - 2023/3/17

Y1 - 2023/3/17

N2 - Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focussed on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity as well as linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (The code is available at https://github.com/yolomeus/probing-search/ ).

AB - Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focussed on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity as well as linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (The code is available at https://github.com/yolomeus/probing-search/ ).

UR - http://www.scopus.com/inward/record.url?scp=85150969553&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-28238-6_17

DO - 10.1007/978-3-031-28238-6_17

M3 - Conference contribution

AN - SCOPUS:85150969553

SN - 9783031282379

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 255

EP - 273

BT - Advances in Information Retrieval

A2 - Kamps, Jaap

A2 - Goeuriot, Lorraine

A2 - Crestani, Fabio

A2 - Maistro, Maria

A2 - Joho, Hideo

A2 - Davis, Brian

A2 - Gurrin, Cathal

A2 - Caputo, Annalina

A2 - Kruschwitz, Udo

CY - Cham

T2 - 45th European Conference on Information Retrieval, ECIR 2023

Y2 - 2 April 2023 through 6 April 2023

ER -