Details
| Original language | English |
| --- | --- |
| Number of pages | 6 |
| Publication status | Published - 2021 |
| Event | 30th Text REtrieval Conference, TREC 2021 - Virtual, Online, United States. Duration: 15 Nov 2021 → 19 Nov 2021 |
Conference
| Conference | 30th Text REtrieval Conference, TREC 2021 |
| --- | --- |
| Country/Territory | United States |
| City | Virtual, Online |
| Period | 15 Nov 2021 → 19 Nov 2021 |
Abstract
In this paper we describe the approach we used for the passage and document ranking task in the TREC 2021 deep learning track. Our approach aims for efficient retrieval and re-ranking by making use of fast look-up-based forward indexes for dense dual-encoder models. The score of a query-document pair is computed as a linear interpolation of the corresponding lexical (BM25) and semantic (re-ranking) scores. This is akin to performing the re-ranking step “implicitly” together with the retrieval step. We improve efficiency by avoiding forward passes of expensive re-ranking models without compromising performance.
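The scoring scheme sketched in the abstract can be illustrated as follows: BM25 retrieves a candidate set, document embeddings are looked up in a pre-computed forward index (so no document-side forward pass is needed at query time), and the final score is a linear interpolation of the lexical and semantic scores. The snippet below is a minimal sketch of this idea; all names (`forward_index`, `bm25_scores`, `alpha`, etc.) are illustrative assumptions, not the authors' actual implementation or API.

```python
import numpy as np

def interpolated_rerank(query_emb, candidates, bm25_scores, forward_index, alpha=0.5):
    """Hypothetical sketch of interpolation-based "implicit" re-ranking.

    query_emb:     dense query embedding from a dual-encoder query encoder
    candidates:    document IDs returned by the lexical (BM25) retriever
    bm25_scores:   dict mapping doc ID -> BM25 score
    forward_index: dict mapping doc ID -> pre-computed document embedding,
                   so the semantic score is a cheap look-up plus dot product
    alpha:         interpolation weight between lexical and semantic scores
    """
    scored = []
    for doc_id in candidates:
        semantic = float(np.dot(query_emb, forward_index[doc_id]))      # look-up, no model forward pass
        score = alpha * bm25_scores[doc_id] + (1.0 - alpha) * semantic  # linear interpolation
        scored.append((doc_id, score))
    return sorted(scored, key=lambda item: item[1], reverse=True)
```

In this sketch, setting alpha to 1 reduces to plain BM25 ranking, while alpha equal to 0 ranks purely by the dense dual-encoder scores.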
Keywords
- dense
- hybrid
- interpolation
- ranking
- retrieval
- sparse
ASJC Scopus subject areas
- Arts and Humanities (all)
- Language and Linguistics
- Computer Science (all)
- Computer Science Applications
- Social Sciences (all)
- Linguistics and Language
Cite this
Leonhardt, J., Rudra, K., & Anand, A. (2021). L3S at the TREC 2021 Deep Learning Track. Paper presented at 30th Text REtrieval Conference, TREC 2021, Virtual, Online, United States.
Research output: Contribution to conference › Paper › Research › peer review
TY - CONF
T1 - L3S at the TREC 2021 Deep Learning Track
AU - Leonhardt, Jurek
AU - Rudra, Koustav
AU - Anand, Avishek
N1 - Funding Information: Funding for this work was in part provided by EU Horizon 2020 grants no. 871042 (SoBigData++) and 832921 (MIRROR).
PY - 2021
Y1 - 2021
N2 - In this paper we describe the approach we used for the passage and document ranking task in the TREC 2021 deep learning track. Our approach aims for efficient retrieval and re-ranking by making use of fast look-up-based forward indexes for dense dual-encoder models. The score of a query-document pair is computed as a linear interpolation of the corresponding lexical (BM25) and semantic (re-ranking) scores. This is akin to performing the re-ranking step “implicitly” together with the retrieval step. We improve efficiency by avoiding forward passes of expensive re-ranking models without compromising performance.
AB - In this paper we describe the approach we used for the passage and document ranking task in the TREC 2021 deep learning track. Our approach aims for efficient retrieval and re-ranking by making use of fast look-up-based forward indexes for dense dual-encoder models. The score of a query-document pair is computed as a linear interpolation of the corresponding lexical (BM25) and semantic (re-ranking) scores. This is akin to performing the re-ranking step “implicitly” together with the retrieval step. We improve efficiency by avoiding forward passes of expensive re-ranking models without compromising performance.
KW - dense
KW - hybrid
KW - interpolation
KW - ranking
KW - retrieval
KW - sparse
UR - http://www.scopus.com/inward/record.url?scp=85180003935&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85180003935
T2 - 30th Text REtrieval Conference, TREC 2021
Y2 - 15 November 2021 through 19 November 2021
ER -