Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders

Publication: Contribution to journal › Article › Research › Peer reviewed

Authors

  • Jurek Leonhardt
  • Henrik Müller
  • Koustav Rudra
  • Megha Khosla
  • Abhijit Anand
  • Avishek Anand

Organizational units

External organizations

  • Indian Institute of Technology Kharagpur (IITKGP)
  • Delft University of Technology

Details

Original language: English
Article number: 117
Number of pages: 34
Journal: ACM Transactions on Information Systems
Volume: 42
Issue number: 5
Early online date: 8 Nov 2023
Publication status: Published - 29 Apr 2024

Abstract

Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes, vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, to mitigate the limitations of dual-encoders, we tackle two main challenges: First, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computation altogether, or by reducing the complexity of the encoders; this considerably improves ranking efficiency and latency. Second, we optimize the memory footprint and maintenance cost of indexes: we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, index maintenance efficiency can be improved substantially. Our evaluation demonstrates the effectiveness and efficiency of Fast-Forward indexes: the method has low latency and achieves competitive results without the need for hardware acceleration such as GPUs.
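The score-interpolation re-ranking described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the index contents, the embedding dimensionality, the `alpha` weight, and all function names are hypothetical. The key idea it shows is that document vectors are pre-computed and merely looked up at query time, so re-ranking needs no document encoding.

```python
import numpy as np

# Illustrative forward index: pre-computed document embeddings keyed by
# document ID. A Fast-Forward-style index stores such vectors so that
# re-ranking only requires lookups and dot products at query time.
forward_index = {
    "d1": np.array([0.9, 0.1]),
    "d2": np.array([0.2, 0.8]),
    "d3": np.array([0.5, 0.5]),
}

def interpolation_rerank(query_vec, lexical_scores, alpha=0.5):
    """Re-rank first-stage results by interpolating lexical scores
    (e.g. from BM25) with semantic dot-product scores looked up in the
    forward index: score = alpha * lexical + (1 - alpha) * semantic."""
    scored = []
    for doc_id, lex in lexical_scores.items():
        sem = float(np.dot(query_vec, forward_index[doc_id]))
        scored.append((doc_id, alpha * lex + (1 - alpha) * sem))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Query embedding and lexical scores from a hypothetical first stage.
ranking = interpolation_rerank(np.array([1.0, 0.0]),
                               {"d1": 0.1, "d2": 0.9, "d3": 0.4})
# → [("d2", 0.55), ("d1", 0.5), ("d3", 0.45)]
```

Because the semantic scores come from a plain lookup plus a dot product, this step runs comfortably on a CPU, which is the efficiency argument the abstract makes.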

Cite

Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders. / Leonhardt, Jurek; Müller, Henrik; Rudra, Koustav et al.
In: ACM Transactions on Information Systems, Vol. 42, No. 5, 117, 29.04.2024.


Leonhardt J, Müller H, Rudra K, Khosla M, Anand A, Anand A. Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders. ACM Transactions on Information Systems. 2024 Apr 29;42(5):117. Epub 2023 Nov 8. doi: 10.48550/arXiv.2311.01263, 10.1145/3631939
Leonhardt, Jurek ; Müller, Henrik ; Rudra, Koustav et al. / Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders. In: ACM Transactions on Information Systems. 2024 ; Vol. 42, No. 5.
Download (BibTeX)
@article{1994ad98d42543fcbbb2cae6beff931b,
title = "Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders",
abstract = "Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes - vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, in order to mitigate the limitations of dual-encoders, we tackle two main challenges: Firstly, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computations altogether, or reducing the complexity of encoders. This allows us to considerably improve ranking efficiency and latency. Secondly, we optimize the memory footprint and maintenance cost of indexes; we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, the index maintenance efficiency can be improved substantially. We perform an evaluation to show the effectiveness and efficiency of Fast-Forward indexes - our method has low latency and achieves competitive results without the need for hardware acceleration, such as GPUs.",
keywords = "dual-encoders, efficiency, Information retrieval, IR, latency, ranking",
author = "Jurek Leonhardt and Henrik M{\"u}ller and Koustav Rudra and Megha Khosla and Abhijit Anand and Avishek Anand",
note = "Publisher Copyright: {\textcopyright} 2024 Copyright held by the owner/author(s).",
year = "2024",
month = apr,
day = "29",
doi = "10.48550/arXiv.2311.01263",
language = "English",
volume = "42",
journal = "ACM Transactions on Information Systems",
issn = "1046-8188",
publisher = "Association for Computing Machinery (ACM)",
number = "5",

}

Download (RIS)

TY - JOUR

T1 - Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders

AU - Leonhardt, Jurek

AU - Müller, Henrik

AU - Rudra, Koustav

AU - Khosla, Megha

AU - Anand, Abhijit

AU - Anand, Avishek

N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s).

PY - 2024/4/29

Y1 - 2024/4/29

N2 - Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes - vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, in order to mitigate the limitations of dual-encoders, we tackle two main challenges: Firstly, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computations altogether, or reducing the complexity of encoders. This allows us to considerably improve ranking efficiency and latency. Secondly, we optimize the memory footprint and maintenance cost of indexes; we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, the index maintenance efficiency can be improved substantially. We perform an evaluation to show the effectiveness and efficiency of Fast-Forward indexes - our method has low latency and achieves competitive results without the need for hardware acceleration, such as GPUs.

AB - Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes - vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, in order to mitigate the limitations of dual-encoders, we tackle two main challenges: Firstly, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computations altogether, or reducing the complexity of encoders. This allows us to considerably improve ranking efficiency and latency. Secondly, we optimize the memory footprint and maintenance cost of indexes; we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, the index maintenance efficiency can be improved substantially. We perform an evaluation to show the effectiveness and efficiency of Fast-Forward indexes - our method has low latency and achieves competitive results without the need for hardware acceleration, such as GPUs.

KW - dual-encoders

KW - efficiency

KW - Information retrieval

KW - IR

KW - latency

KW - ranking

UR - http://www.scopus.com/inward/record.url?scp=85178950416&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2311.01263

DO - 10.48550/arXiv.2311.01263

M3 - Article

AN - SCOPUS:85178950416

VL - 42

JO - ACM Transactions on Information Systems

JF - ACM Transactions on Information Systems

SN - 1046-8188

IS - 5

M1 - 117

ER -