Probing BERT for Ranking Abilities

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Jonas Wallat
  • Fabian Beringer
  • Abhijit Anand
  • Avishek Anand

Organisational units

External organisations

  • Delft University of Technology

Details

Original language: English
Title of host publication: Advances in Information Retrieval
Subtitle of host publication: 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II
Editors: Jaap Kamps, Lorraine Goeuriot, Fabio Crestani, Maria Maistro, Hideo Joho, Brian Davis, Cathal Gurrin, Annalina Caputo, Udo Kruschwitz
Place of publication: Cham
Pages: 255-273
Number of pages: 19
ISBN (electronic): 978-3-031-28238-6
Publication status: Published - 17 March 2023
Event: 45th European Conference on Information Retrieval, ECIR 2023 - Dublin, Ireland
Duration: 2 Apr 2023 – 6 Apr 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13981 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focussed on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity as well as linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (The code is available at https://github.com/yolomeus/probing-search/ ).
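
For context on the method described above, here is a minimal, hypothetical sketch of layer-wise probing: a frozen BERT encoder produces per-layer [CLS] representations for query/passage pairs, and a separate linear probe is trained on each layer's representations to predict a probe label. This is not the authors' implementation (see https://github.com/yolomeus/probing-search/ for that); the model name, the probe task, and the toy data below are assumptions chosen only for illustration.

# Hypothetical layer-wise probing sketch (not the paper's code; see the repository above).
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from transformers import AutoModel, AutoTokenizer

# Assumption: a plain pre-trained BERT; the paper probes fine-tuned BERT-based rankers.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def layer_cls_vectors(query, passage):
    """Return the [CLS] vector from every encoder layer for a query/passage pair."""
    enc = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        out = model(**enc)
    # out.hidden_states = (embedding layer, layer 1, ..., layer 12); each is [1, seq_len, dim]
    return [h[0, 0].numpy() for h in out.hidden_states]

# Toy probe data: (query, passage, label), e.g. a lexical-matching probe
# whose label says whether the passage contains a query term.
probe_data = [
    ("capital of france", "Paris is the capital of France.", 1),
    ("capital of france", "Berlin hosts the federal parliament.", 0),
    # ... in practice, thousands of labelled pairs per probing task
]

features = [layer_cls_vectors(q, p) for q, p, _ in probe_data]
labels = [y for _, _, y in probe_data]

# One linear probe per layer; comparing scores across layers indicates where in the
# network the probed ability is most easily decodable.
for layer in range(len(features[0])):
    X = np.stack([f[layer] for f in features])
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer:2d}: train accuracy {accuracy_score(labels, probe.predict(X)):.2f}")

On real data the probe would of course be evaluated on a held-out split; the in-sample accuracy printed here is only to keep the sketch self-contained.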

ASJC Scopus subject areas

Cite

Probing BERT for Ranking Abilities. / Wallat, Jonas; Beringer, Fabian; Anand, Abhijit et al.
Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. Ed. / Jaap Kamps; Lorraine Goeuriot; Fabio Crestani; Maria Maistro; Hideo Joho; Brian Davis; Cathal Gurrin; Annalina Caputo; Udo Kruschwitz. Cham, 2023. pp. 255-273 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13981 LNCS).


Wallat, J, Beringer, F, Anand, A & Anand, A 2023, Probing BERT for Ranking Abilities. in J Kamps, L Goeuriot, F Crestani, M Maistro, H Joho, B Davis, C Gurrin, A Caputo & U Kruschwitz (eds), Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13981 LNCS, Cham, pp. 255-273, 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, 2 Apr. 2023. https://doi.org/10.1007/978-3-031-28238-6_17
Wallat, J., Beringer, F., Anand, A., & Anand, A. (2023). Probing BERT for Ranking Abilities. In J. Kamps, L. Goeuriot, F. Crestani, M. Maistro, H. Joho, B. Davis, C. Gurrin, A. Caputo, & U. Kruschwitz (Eds.), Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II (pp. 255-273). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13981 LNCS). https://doi.org/10.1007/978-3-031-28238-6_17
Wallat J, Beringer F, Anand A, Anand A. Probing BERT for Ranking Abilities. In Kamps J, Goeuriot L, Crestani F, Maistro M, Joho H, Davis B, Gurrin C, Caputo A, Kruschwitz U, editors, Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. Cham. 2023. p. 255-273. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). doi: 10.1007/978-3-031-28238-6_17
Wallat, Jonas ; Beringer, Fabian ; Anand, Abhijit et al. / Probing BERT for Ranking Abilities. Advances in Information Retrieval : 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. Ed. / Jaap Kamps ; Lorraine Goeuriot ; Fabio Crestani ; Maria Maistro ; Hideo Joho ; Brian Davis ; Cathal Gurrin ; Annalina Caputo ; Udo Kruschwitz. Cham, 2023. pp. 255-273 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inproceedings{320363a182464e93a5b50745cbfd3298,
title = "Probing BERT for Ranking Abilities",
abstract = "Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focussed on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity as well as linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (The code is available at https://github.com/yolomeus/probing-search/ ).",
author = "Jonas Wallat and Fabian Beringer and Abhijit Anand and Avishek Anand",
note = "Funding Information: Acknowledgements. This research was (partially) funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor with grant No. 01DD20003.; 45th European Conference on Information Retrieval, ECIR 2023 ; Conference date: 02-04-2023 Through 06-04-2023",
year = "2023",
month = mar,
day = "17",
doi = "10.1007/978-3-031-28238-6_17",
language = "English",
isbn = "9783031282379",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "255--273",
editor = "Jaap Kamps and Lorraine Goeuriot and Fabio Crestani and Maria Maistro and Hideo Joho and Brian Davis and Cathal Gurrin and Annalina Caputo and Udo Kruschwitz",
booktitle = "Advances in Information Retrieval",

}

RIS

TY - GEN

T1 - Probing BERT for Ranking Abilities

AU - Wallat, Jonas

AU - Beringer, Fabian

AU - Anand, Abhijit

AU - Anand, Avishek

N1 - Funding Information: Acknowledgements. This research was (partially) funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor with grant No. 01DD20003.

PY - 2023/3/17

Y1 - 2023/3/17

N2 - Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focussed on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity as well as linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (The code is available at https://github.com/yolomeus/probing-search/ ).

AB - Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focussed on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity as well as linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (The code is available at https://github.com/yolomeus/probing-search/ ).

UR - http://www.scopus.com/inward/record.url?scp=85150969553&partnerID=8YFLogxK

U2 - 10.1007/978-3-031-28238-6_17

DO - 10.1007/978-3-031-28238-6_17

M3 - Conference contribution

AN - SCOPUS:85150969553

SN - 9783031282379

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 255

EP - 273

BT - Advances in Information Retrieval

A2 - Kamps, Jaap

A2 - Goeuriot, Lorraine

A2 - Crestani, Fabio

A2 - Maistro, Maria

A2 - Joho, Hideo

A2 - Davis, Brian

A2 - Gurrin, Cathal

A2 - Caputo, Annalina

A2 - Kruschwitz, Udo

CY - Cham

T2 - 45th European Conference on Information Retrieval, ECIR 2023

Y2 - 2 April 2023 through 6 April 2023

ER -