Details
Original language | English |
---|---|
Title of host publication | Advances in Information Retrieval |
Subtitle | 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II |
Editors | Jaap Kamps, Lorraine Goeuriot, Fabio Crestani, Maria Maistro, Hideo Joho, Brian Davis, Cathal Gurrin, Annalina Caputo, Udo Kruschwitz |
Place of publication | Cham |
Pages | 255-273 |
Number of pages | 19 |
ISBN (electronic) | 978-3-031-28238-6 |
Publication status | Published - 17 March 2023 |
Event | 45th European Conference on Information Retrieval, ECIR 2023 - Dublin, Ireland Duration: 2 Apr 2023 → 6 Apr 2023 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 13981 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (electronic) | 1611-3349 |
Abstract
Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focused on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity, and linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (the code is available at https://github.com/yolomeus/probing-search/).
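The layer-wise probing the abstract describes amounts to training a small classifier on frozen per-layer representations and comparing probe accuracy across layers. The sketch below illustrates that procedure only in outline: the representations are synthetic stand-ins (not the paper's models or datasets), the 12-layer count mirrors BERT-base, and the mid-stack signal shape is an assumption made to mimic the reported trend.

```python
import numpy as np

rng = np.random.default_rng(0)


def train_linear_probe(X, y, epochs=200, lr=0.1):
    """Fit a logistic-regression probe on frozen representations X with labels y."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * (p - y).mean()                # gradient step on bias
    return w, b


def probe_accuracy(X, y, w, b):
    return float((((X @ w + b) > 0).astype(int) == y).mean())


# Synthetic stand-in for per-layer [CLS] representations of a 12-layer encoder.
# The signal term peaks at the middle layers, mimicking the paper's observation
# that ranking-related properties are best encoded at intermediate layers.
n_samples, dim, n_layers = 200, 16, 12
y = rng.integers(0, 2, size=n_samples)

accs = []
for layer in range(n_layers):
    signal = 0.3 + 1.5 * np.exp(-((layer - 6) ** 2) / 8.0)
    X = rng.normal(size=(n_samples, dim))
    X[:, 0] += signal * (2 * y - 1)  # inject label-correlated direction
    w, b = train_linear_probe(X, y)
    accs.append(probe_accuracy(X, y, w, b))

best_layer = int(np.argmax(accs))
print(f"probe accuracy per layer: {[round(a, 2) for a in accs]}")
print(f"best layer: {best_layer}")
```

In the actual study, the synthetic `X` would be replaced by hidden states extracted from a BERT-based ranker for each of the four probing tasks (lexical matching, semantic similarity, coreference, NER); only the probe is trained while the ranker stays frozen.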
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite this
- Standard
- Harvard
- Apa
- Vancouver
- BibTeX
- RIS
Advances in Information Retrieval: 45th European Conference on Information Retrieval, ECIR 2023, Dublin, Ireland, April 2–6, 2023, Proceedings, Part II. Ed. / Jaap Kamps; Lorraine Goeuriot; Fabio Crestani; Maria Maistro; Hideo Joho; Brian Davis; Cathal Gurrin; Annalina Caputo; Udo Kruschwitz. Cham, 2023. pp. 255-273 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 13981 LNCS).
Publication: Contribution to book/report/anthology/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Probing BERT for Ranking Abilities
AU - Wallat, Jonas
AU - Beringer, Fabian
AU - Anand, Abhijit
AU - Anand, Avishek
N1 - Funding Information: Acknowledgements. This research was (partially) funded by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor with grant No. 01DD20003.
PY - 2023/3/17
Y1 - 2023/3/17
N2 - Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focussed on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity as well as linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (The code is available at https://github.com/yolomeus/probing-search/ ).
AB - Contextual models like BERT are highly effective in numerous text-ranking tasks. However, it is still unclear as to whether contextual models understand well-established notions of relevance that are central to IR. In this paper, we use probing, a recent approach used to analyze language models, to investigate the ranking abilities of BERT-based rankers. Most of the probing literature has focussed on linguistic and knowledge-aware capabilities of models or axiomatic analysis of ranking models. In this paper, we fill an important gap in the information retrieval literature by conducting a layer-wise probing analysis using four probes based on lexical matching, semantic similarity as well as linguistic properties like coreference resolution and named entity recognition. Our experiments show an interesting trend that BERT-rankers better encode ranking abilities at intermediate layers. Based on our observations, we train a ranking model by augmenting the ranking data with the probe data to show initial yet consistent performance improvements (The code is available at https://github.com/yolomeus/probing-search/ ).
UR - http://www.scopus.com/inward/record.url?scp=85150969553&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-28238-6_17
DO - 10.1007/978-3-031-28238-6_17
M3 - Conference contribution
AN - SCOPUS:85150969553
SN - 9783031282379
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 255
EP - 273
BT - Advances in Information Retrieval
A2 - Kamps, Jaap
A2 - Goeuriot, Lorraine
A2 - Crestani, Fabio
A2 - Maistro, Maria
A2 - Joho, Hideo
A2 - Davis, Brian
A2 - Gurrin, Cathal
A2 - Caputo, Annalina
A2 - Kruschwitz, Udo
CY - Cham
T2 - 45th European Conference on Information Retrieval, ECIR 2023
Y2 - 2 April 2023 through 6 April 2023
ER -