Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders

Research output: Contribution to journal › Article › Research › Peer reviewed

Authors

  • Jurek Leonhardt
  • Henrik Müller
  • Koustav Rudra
  • Megha Khosla
  • Abhijit Anand
  • Avishek Anand

Research Organisations

External Research Organisations

  • Indian Institute of Technology Kharagpur (IITKGP)
  • Delft University of Technology

Details

Original language: English
Article number: 117
Number of pages: 34
Journal: ACM Transactions on Information Systems
Volume: 42
Issue number: 5
Early online date: 8 Nov 2023
Publication status: Published - 29 Apr 2024

Abstract

Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes - vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, in order to mitigate the limitations of dual-encoders, we tackle two main challenges: firstly, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computations altogether, or reducing the complexity of encoders. This allows us to considerably improve ranking efficiency and latency. Secondly, we optimize the memory footprint and maintenance cost of indexes; we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, the index maintenance efficiency can be improved substantially. We perform an evaluation to show the effectiveness and efficiency of Fast-Forward indexes - our method has low latency and achieves competitive results without the need for hardware acceleration, such as GPUs.
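The interpolation-based re-ranking the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy index, the example scores, and the weight `alpha` are all assumptions made for the sketch. The key idea it shows is that document vectors are looked up from a pre-computed forward index rather than produced by an encoder at query time, and the final score interpolates a first-stage lexical score with the semantic dot-product score.

```python
import numpy as np

# Toy forward index: pre-computed document vectors keyed by doc ID.
# In a Fast-Forward index these would come from the document encoder,
# computed once at indexing time.
forward_index = {
    "d1": np.array([0.1, 0.9, 0.2]),
    "d2": np.array([0.8, 0.1, 0.4]),
    "d3": np.array([0.3, 0.3, 0.9]),
}

def interpolate_rerank(query_vec, lexical_scores, alpha=0.5):
    """Re-rank lexical candidates by interpolating lexical (e.g. BM25)
    and semantic scores: alpha * lexical + (1 - alpha) * semantic."""
    reranked = []
    for doc_id, lex_score in lexical_scores.items():
        # Index lookup instead of an encoder forward pass at query time.
        sem_score = float(np.dot(query_vec, forward_index[doc_id]))
        reranked.append((doc_id, alpha * lex_score + (1 - alpha) * sem_score))
    return sorted(reranked, key=lambda pair: pair[1], reverse=True)

# Made-up first-stage (sparse) retrieval scores for three candidates.
lexical = {"d1": 2.1, "d2": 1.8, "d3": 0.5}
query = np.array([0.2, 0.8, 0.1])
print(interpolate_rerank(query, lexical, alpha=0.5))
```

Only the query is encoded online; the per-document cost of re-ranking is a vector lookup and a dot product, which is what allows re-ranking at high retrieval depths without GPU acceleration.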

Keywords

    dual-encoders, efficiency, Information retrieval, IR, latency, ranking

Cite this

Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders. / Leonhardt, Jurek; Müller, Henrik; Rudra, Koustav et al.
In: ACM Transactions on Information Systems, Vol. 42, No. 5, 117, 29.04.2024.

Research output: Contribution to journalArticleResearchpeer review

Leonhardt J, Müller H, Rudra K, Khosla M, Anand A, Anand A. Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders. ACM Transactions on Information Systems. 2024 Apr 29;42(5):117. Epub 2023 Nov 8. doi: 10.48550/arXiv.2311.01263, 10.1145/3631939
Leonhardt, Jurek ; Müller, Henrik ; Rudra, Koustav et al. / Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders. In: ACM Transactions on Information Systems. 2024 ; Vol. 42, No. 5.
@article{1994ad98d42543fcbbb2cae6beff931b,
title = "Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders",
abstract = "Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes - vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, in order to mitigate the limitations of dual-encoders, we tackle two main challenges: firstly, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computations altogether, or reducing the complexity of encoders. This allows us to considerably improve ranking efficiency and latency. Secondly, we optimize the memory footprint and maintenance cost of indexes; we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, the index maintenance efficiency can be improved substantially. We perform an evaluation to show the effectiveness and efficiency of Fast-Forward indexes - our method has low latency and achieves competitive results without the need for hardware acceleration, such as GPUs.",
keywords = "dual-encoders, efficiency, Information retrieval, IR, latency, ranking",
author = "Jurek Leonhardt and Henrik M{\"u}ller and Koustav Rudra and Megha Khosla and Abhijit Anand and Avishek Anand",
note = "Publisher Copyright: {\textcopyright} 2024 Copyright held by the owner/author(s).",
year = "2024",
month = apr,
day = "29",
doi = "10.1145/3631939",
language = "English",
volume = "42",
journal = "ACM Transactions on Information Systems",
issn = "1046-8188",
publisher = "Association for Computing Machinery (ACM)",
number = "5",

}


TY - JOUR

T1 - Efficient Neural Ranking Using Forward Indexes and Lightweight Encoders

AU - Leonhardt, Jurek

AU - Müller, Henrik

AU - Rudra, Koustav

AU - Khosla, Megha

AU - Anand, Abhijit

AU - Anand, Avishek

N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s).

PY - 2024/4/29

Y1 - 2024/4/29

N2 - Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes - vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, in order to mitigate the limitations of dual-encoders, we tackle two main challenges: firstly, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computations altogether, or reducing the complexity of encoders. This allows us to considerably improve ranking efficiency and latency. Secondly, we optimize the memory footprint and maintenance cost of indexes; we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, the index maintenance efficiency can be improved substantially. We perform an evaluation to show the effectiveness and efficiency of Fast-Forward indexes - our method has low latency and achieves competitive results without the need for hardware acceleration, such as GPUs.

AB - Dual-encoder-based dense retrieval models have become the standard in IR. They employ large Transformer-based language models, which are notoriously inefficient in terms of resources and latency. We propose Fast-Forward indexes - vector forward indexes which exploit the semantic matching capabilities of dual-encoder models for efficient and effective re-ranking. Our framework enables re-ranking at very high retrieval depths and combines the merits of both lexical and semantic matching via score interpolation. Furthermore, in order to mitigate the limitations of dual-encoders, we tackle two main challenges: firstly, we improve computational efficiency by either pre-computing representations, avoiding unnecessary computations altogether, or reducing the complexity of encoders. This allows us to considerably improve ranking efficiency and latency. Secondly, we optimize the memory footprint and maintenance cost of indexes; we propose two complementary techniques to reduce the index size and show that, by dynamically dropping irrelevant document tokens, the index maintenance efficiency can be improved substantially. We perform an evaluation to show the effectiveness and efficiency of Fast-Forward indexes - our method has low latency and achieves competitive results without the need for hardware acceleration, such as GPUs.

KW - dual-encoders

KW - efficiency

KW - Information retrieval

KW - IR

KW - latency

KW - ranking

UR - http://www.scopus.com/inward/record.url?scp=85178950416&partnerID=8YFLogxK

U2 - 10.1145/3631939

DO - 10.1145/3631939

M3 - Article

AN - SCOPUS:85178950416

VL - 42

JO - ACM Transactions on Information Systems

JF - ACM Transactions on Information Systems

SN - 1046-8188

IS - 5

M1 - 117

ER -