Details
| Original language | English |
| --- | --- |
| Title of host publication | Advances in Information Retrieval |
| Subtitle of host publication | 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II |
| Editors | Jaap Kamps, Lorraine Goeuriot, Fabio Crestani, Maria Maistro, Hideo Joho, Brian Davis, Cathal Gurrin, Annalina Caputo, Udo Kruschwitz |
| Place of Publication | Cham |
| Pages | 255–273 |
| Number of pages | 19 |
| ISBN (electronic) | 978-3-031-28238-6 |
| Publication status | Published - 17 Mar 2023 |
| Event | 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland; duration: 2 Apr 2023 → 6 Apr 2023 |
Publication series
| Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| --- | --- |
| Volume | 13981 LNCS |
| ISSN (Print) | 0302-9743 |
| ISSN (electronic) | 1611-3349 |
Abstract
Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach for analyzing language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focused on linguistic and knowledge-aware capabilities of models or on axiomatic analysis of ranking models. We fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity, and linguistic properties such as coreference resolution and named entity recognition. Our experiments show an interesting trend: BERT rankers better encode ranking abilities at intermediate layers. Based on these observations, we train a ranking model by augmenting the ranking data with the probe data, showing initial yet consistent performance improvements (the code is available at https://github.com/yolomeus/probing-search/).
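The layer-wise probing setup the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' released code (see the GitHub link above): synthetic activations stand in for per-layer [CLS] hidden states of a BERT ranker, the probe-task labels are hypothetical, and a simple logistic-regression probe is fit per layer to compare how well each layer encodes the property.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-layer [CLS] activations of a 12-layer BERT ranker:
# shape (n_layers, n_examples, hidden_dim). In practice these would come from
# running the model with output_hidden_states=True on query-document pairs.
n_layers, n_examples, hidden_dim = 12, 400, 64
labels = rng.integers(0, 2, n_examples)  # hypothetical probe labels (e.g. lexical match?)
hidden_states = rng.normal(size=(n_layers, n_examples, hidden_dim))

# Inject a weak label signal that peaks at the middle layers, mimicking the
# paper's observed trend that intermediate layers encode ranking abilities best.
for layer in range(n_layers):
    strength = 1.0 - abs(layer - n_layers // 2) / n_layers
    hidden_states[layer, :, 0] += strength * (2 * labels - 1)

# Layer-wise probing: fit one linear classifier per layer, record test accuracy.
accuracies = []
for layer in range(n_layers):
    X_tr, X_te, y_tr, y_te = train_test_split(
        hidden_states[layer], labels, test_size=0.25, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accuracies.append(probe.score(X_te, y_te))

best_layer = int(np.argmax(accuracies))
print(f"best probing accuracy at layer {best_layer}: {accuracies[best_layer]:.2f}")
```

Plotting `accuracies` against the layer index would give the layer-wise profile the paper analyzes; with real BERT activations, a hump at intermediate layers would match the reported trend.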
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Advances in Information Retrieval: 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. ed. / Jaap Kamps; Lorraine Goeuriot; Fabio Crestani; Maria Maistro; Hideo Joho; Brian Davis; Cathal Gurrin; Annalina Caputo; Udo Kruschwitz. Cham, 2023. p. 255-273 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13981 LNCS).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Probing BERT for Ranking Abilities
AU - Wallat, Jonas
AU - Beringer, Fabian
AU - Anand, Abhijit
AU - Anand, Avishek
N1 - Funding Information: Acknowledgements. This research was (partially) funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor with grant No. 01DD20003.
PY - 2023/3/17
Y1 - 2023/3/17
N2 - Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach for analyzing language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focused on linguistic and knowledge-aware capabilities of models or on axiomatic analysis of ranking models. We fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity, and linguistic properties such as coreference resolution and named entity recognition. Our experiments show an interesting trend: BERT rankers better encode ranking abilities at intermediate layers. Based on these observations, we train a ranking model by augmenting the ranking data with the probe data, showing initial yet consistent performance improvements (the code is available at https://github.com/yolomeus/probing-search/).
AB - Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach for analyzing language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focused on linguistic and knowledge-aware capabilities of models or on axiomatic analysis of ranking models. We fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity, and linguistic properties such as coreference resolution and named entity recognition. Our experiments show an interesting trend: BERT rankers better encode ranking abilities at intermediate layers. Based on these observations, we train a ranking model by augmenting the ranking data with the probe data, showing initial yet consistent performance improvements (the code is available at https://github.com/yolomeus/probing-search/).
UR - http://www.scopus.com/inward/record.url?scp=85150969553&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-28238-6_17
DO - 10.1007/978-3-031-28238-6_17
M3 - Conference contribution
AN - SCOPUS:85150969553
SN - 9783031282379
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 255
EP - 273
BT - Advances in Information Retrieval
A2 - Kamps, Jaap
A2 - Goeuriot, Lorraine
A2 - Crestani, Fabio
A2 - Maistro, Maria
A2 - Joho, Hideo
A2 - Davis, Brian
A2 - Gurrin, Cathal
A2 - Caputo, Annalina
A2 - Kruschwitz, Udo
CY - Cham
T2 - 45th European Conference on Information Retrieval, ECIR 2023
Y2 - 2 April 2023 through 6 April 2023
ER -