Data Augmentation for Sample Efficient and Robust Document Ranking

Publication: Contribution to journal › Article › Research › Peer reviewed

Authors

  • Abhijit Anand
  • Jurek Leonhardt
  • Jaspreet Singh
  • Koustav Rudra
  • Avishek Anand

Organisational units

External organisations

  • Delft University of Technology
  • Indian Institute of Technology Kharagpur (IITKGP)

Details

Original language: English
Article number: 119
Number of pages: 29
Journal: ACM Transactions on Information Systems
Volume: 42
Issue number: 5
Early online date: 29 Nov 2023
Publication status: Published - 29 Apr 2024

Abstract

Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. In this article, we propose data-augmentation methods for effective and robust ranking performance. One of the key benefits of using data augmentation is in achieving sample efficiency or learning effectively when we have only a small amount of training data. We propose supervised and unsupervised data augmentation schemes by creating training data using parts of the relevant documents in the query-document pairs. We then adapt a family of contrastive losses for the document ranking task that can exploit the augmented data to learn an effective ranking model. Our extensive experiments on subsets of the MS MARCO and TREC-DL test sets show that data augmentation, along with the ranking-adapted contrastive losses, results in performance improvements under most dataset sizes. Apart from sample efficiency, we conclusively show that data augmentation results in robust models when transferred to out-of-domain benchmarks. Our performance improvements in in-domain and more prominently in out-of-domain benchmarks show that augmentation regularizes the ranking model and improves its robustness and generalization capability.
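The "ranking-adapted contrastive losses" mentioned in the abstract build on the InfoNCE family, which pulls a query's representation toward a relevant (or augmented) document and pushes it away from non-relevant ones. As an illustrative sketch only, not the paper's exact formulation, a minimal InfoNCE-style loss over precomputed query-document similarity scores might look like:

```python
import math

def info_nce_loss(pos_score, neg_scores, temperature=0.1):
    """InfoNCE-style contrastive loss for a single query.

    pos_score: similarity between the query and a relevant document
    (or an augmented view of it); neg_scores: similarities to
    non-relevant documents. The loss is the negative log-probability
    of the positive under a temperature-scaled softmax, so it shrinks
    as the positive is ranked further above the negatives.
    """
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    # Numerically stable log-sum-exp over all candidates.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_sum)

# A well-separated positive incurs a much smaller loss than a
# positive that is barely distinguishable from the negatives.
easy = info_nce_loss(0.9, [0.1, 0.2, 0.0])
hard = info_nce_loss(0.3, [0.25, 0.28, 0.29])
assert easy < hard
```

In practice the scores would come from a contextual ranking model over query-document pairs; the augmentation schemes in the article supply additional positives and negatives for this kind of objective.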

Cite

Data Augmentation for Sample Efficient and Robust Document Ranking. / Anand, Abhijit; Leonhardt, Jurek; Singh, Jaspreet et al.
In: ACM Transactions on Information Systems, Vol. 42, No. 5, 119, 29.04.2024.


Anand A, Leonhardt J, Singh J, Rudra K, Anand A. Data Augmentation for Sample Efficient and Robust Document Ranking. ACM Transactions on Information Systems. 2024 Apr 29;42(5):119. Epub 2023 Nov 29. doi: 10.48550/arXiv.2311.15426, 10.1145/3634911
Anand, Abhijit; Leonhardt, Jurek; Singh, Jaspreet et al. / Data Augmentation for Sample Efficient and Robust Document Ranking. In: ACM Transactions on Information Systems. 2024; Vol. 42, No. 5.
BibTeX
@article{1b737d65f5ee4c00bb185f3743eb81a4,
title = "Data Augmentation for Sample Efficient and Robust Document Ranking",
abstract = "Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. In this article, we propose data-augmentation methods for effective and robust ranking performance. One of the key benefits of using data augmentation is in achieving sample efficiency or learning effectively when we have only a small amount of training data. We propose supervised and unsupervised data augmentation schemes by creating training data using parts of the relevant documents in the query-document pairs. We then adapt a family of contrastive losses for the document ranking task that can exploit the augmented data to learn an effective ranking model. Our extensive experiments on subsets of the MS MARCO and TREC-DL test sets show that data augmentation, along with the ranking-adapted contrastive losses, results in performance improvements under most dataset sizes. Apart from sample efficiency, we conclusively show that data augmentation results in robust models when transferred to out-of-domain benchmarks. Our performance improvements in in-domain and more prominently in out-of-domain benchmarks show that augmentation regularizes the ranking model and improves its robustness and generalization capability.",
keywords = "contrastive loss, data augmentation, document ranking, Information retrieval, interpolation, IR, ranking, ranking performance",
author = "Abhijit Anand and Jurek Leonhardt and Jaspreet Singh and Koustav Rudra and Avishek Anand",
note = "Publisher Copyright: {\textcopyright} 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.",
year = "2024",
month = apr,
day = "29",
doi = "10.48550/arXiv.2311.15426",
language = "English",
volume = "42",
journal = "ACM Transactions on Information Systems",
issn = "1046-8188",
publisher = "Association for Computing Machinery (ACM)",
number = "5",

}

RIS

TY - JOUR

T1 - Data Augmentation for Sample Efficient and Robust Document Ranking

AU - Anand, Abhijit

AU - Leonhardt, Jurek

AU - Singh, Jaspreet

AU - Rudra, Koustav

AU - Anand, Avishek

N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.

PY - 2024/4/29

Y1 - 2024/4/29

N2 - Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. In this article, we propose data-augmentation methods for effective and robust ranking performance. One of the key benefits of using data augmentation is in achieving sample efficiency or learning effectively when we have only a small amount of training data. We propose supervised and unsupervised data augmentation schemes by creating training data using parts of the relevant documents in the query-document pairs. We then adapt a family of contrastive losses for the document ranking task that can exploit the augmented data to learn an effective ranking model. Our extensive experiments on subsets of the MS MARCO and TREC-DL test sets show that data augmentation, along with the ranking-adapted contrastive losses, results in performance improvements under most dataset sizes. Apart from sample efficiency, we conclusively show that data augmentation results in robust models when transferred to out-of-domain benchmarks. Our performance improvements in in-domain and more prominently in out-of-domain benchmarks show that augmentation regularizes the ranking model and improves its robustness and generalization capability.

AB - Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. In this article, we propose data-augmentation methods for effective and robust ranking performance. One of the key benefits of using data augmentation is in achieving sample efficiency or learning effectively when we have only a small amount of training data. We propose supervised and unsupervised data augmentation schemes by creating training data using parts of the relevant documents in the query-document pairs. We then adapt a family of contrastive losses for the document ranking task that can exploit the augmented data to learn an effective ranking model. Our extensive experiments on subsets of the MS MARCO and TREC-DL test sets show that data augmentation, along with the ranking-adapted contrastive losses, results in performance improvements under most dataset sizes. Apart from sample efficiency, we conclusively show that data augmentation results in robust models when transferred to out-of-domain benchmarks. Our performance improvements in in-domain and more prominently in out-of-domain benchmarks show that augmentation regularizes the ranking model and improves its robustness and generalization capability.

KW - contrastive loss

KW - data augmentation

KW - document ranking

KW - Information retrieval

KW - interpolation

KW - IR

KW - ranking

KW - ranking performance

UR - http://www.scopus.com/inward/record.url?scp=85195026365&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2311.15426

DO - 10.48550/arXiv.2311.15426

M3 - Article

AN - SCOPUS:85195026365

VL - 42

JO - ACM Transactions on Information Systems

JF - ACM Transactions on Information Systems

SN - 1046-8188

IS - 5

M1 - 119

ER -