Details
| Field | Value |
|---|---|
| Original language | English |
| Article number | 119 |
| Number of pages | 29 |
| Journal | ACM Transactions on Information Systems |
| Volume | 42 |
| Issue number | 5 |
| Early online date | 29 Nov 2023 |
| Publication status | Published - 29 Apr 2024 |
Abstract
Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. In this article, we propose data-augmentation methods for effective and robust ranking performance. A key benefit of data augmentation is sample efficiency, i.e., learning effectively when only a small amount of training data is available. We propose supervised and unsupervised data augmentation schemes that create additional training data from parts of the relevant documents in query-document pairs. We then adapt a family of contrastive losses to the document ranking task so that they can exploit the augmented data to learn an effective ranking model. Our extensive experiments on subsets of the MS MARCO and TREC-DL test sets show that data augmentation, along with the ranking-adapted contrastive losses, results in performance improvements across most dataset sizes. Beyond sample efficiency, we conclusively show that data augmentation results in robust models when transferred to out-of-domain benchmarks. Our performance improvements on in-domain and, more prominently, out-of-domain benchmarks show that augmentation regularizes the ranking model and improves its robustness and generalization capability.
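The abstract only names the two ingredients. Below is a minimal, hypothetical sketch of how they could fit together, assuming a bi-encoder setup: augmented "views" of a relevant document are created by sampling contiguous spans, and an InfoNCE-style contrastive loss treats those views as positives for the query and other documents as negatives. Function names, span length, and the temperature value are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: span-based document augmentation plus an
# InfoNCE-style contrastive loss adapted to ranking. Not the authors'
# implementation; all names and hyperparameters are assumptions.
import random
import torch
import torch.nn.functional as F

def augment_spans(doc: str, num_views: int = 2, span_len: int = 64) -> list:
    """Unsupervised augmentation: build 'views' of a relevant document
    by sampling contiguous word spans from it."""
    words = doc.split()
    views = []
    for _ in range(num_views):
        start = random.randint(0, max(0, len(words) - span_len))
        views.append(" ".join(words[start:start + span_len]))
    return views

def contrastive_ranking_loss(q_emb: torch.Tensor,
                             pos_embs: torch.Tensor,
                             neg_embs: torch.Tensor,
                             tau: float = 0.05) -> torch.Tensor:
    """InfoNCE adapted to ranking: augmented views of the relevant document
    serve as positives for the query; other documents serve as negatives.

    Shapes: q_emb (d,), pos_embs (P, d), neg_embs (N, d).
    """
    pos_sim = F.cosine_similarity(q_emb.unsqueeze(0), pos_embs) / tau  # (P,)
    neg_sim = F.cosine_similarity(q_emb.unsqueeze(0), neg_embs) / tau  # (N,)
    # Contrast each positive view against all negatives; the positive
    # always sits at column 0 of the logit matrix.
    logits = torch.cat(
        [pos_sim.unsqueeze(1), neg_sim.expand(pos_sim.size(0), -1)], dim=1
    )
    targets = torch.zeros(pos_sim.size(0), dtype=torch.long)
    return F.cross_entropy(logits, targets)
```

Under this reading, each labeled query-document pair yields several positive views, which is where the claimed sample efficiency would come from: more contrastive training signal per annotation.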
Keywords
- contrastive loss, data augmentation, document ranking, Information retrieval, interpolation, IR, ranking, ranking performance
ASJC Scopus subject areas
- Computer Science (all)
- Information Systems
- Business, Management and Accounting (all)
- Computer Science Applications
Cite this
Data Augmentation for Sample Efficient and Robust Document Ranking. / Anand, Abhijit; Leonhardt, Jurek; Singh, Jaspreet; Rudra, Koustav; Anand, Avishek. In: ACM Transactions on Information Systems, Vol. 42, No. 5, 119, 29.04.2024.
Research output: Contribution to journal › Article › Research › peer-review
TY - JOUR
T1 - Data Augmentation for Sample Efficient and Robust Document Ranking
AU - Anand, Abhijit
AU - Leonhardt, Jurek
AU - Singh, Jaspreet
AU - Rudra, Koustav
AU - Anand, Avishek
N1 - Publisher Copyright: © 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/4/29
Y1 - 2024/4/29
N2 - Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. In this article, we propose data-augmentation methods for effective and robust ranking performance. A key benefit of data augmentation is sample efficiency, i.e., learning effectively when only a small amount of training data is available. We propose supervised and unsupervised data augmentation schemes that create additional training data from parts of the relevant documents in query-document pairs. We then adapt a family of contrastive losses to the document ranking task so that they can exploit the augmented data to learn an effective ranking model. Our extensive experiments on subsets of the MS MARCO and TREC-DL test sets show that data augmentation, along with the ranking-adapted contrastive losses, results in performance improvements across most dataset sizes. Beyond sample efficiency, we conclusively show that data augmentation results in robust models when transferred to out-of-domain benchmarks. Our performance improvements on in-domain and, more prominently, out-of-domain benchmarks show that augmentation regularizes the ranking model and improves its robustness and generalization capability.
AB - Contextual ranking models have delivered impressive performance improvements over classical models in the document ranking task. However, these highly over-parameterized models tend to be data-hungry and require large amounts of data even for fine-tuning. In this article, we propose data-augmentation methods for effective and robust ranking performance. A key benefit of data augmentation is sample efficiency, i.e., learning effectively when only a small amount of training data is available. We propose supervised and unsupervised data augmentation schemes that create additional training data from parts of the relevant documents in query-document pairs. We then adapt a family of contrastive losses to the document ranking task so that they can exploit the augmented data to learn an effective ranking model. Our extensive experiments on subsets of the MS MARCO and TREC-DL test sets show that data augmentation, along with the ranking-adapted contrastive losses, results in performance improvements across most dataset sizes. Beyond sample efficiency, we conclusively show that data augmentation results in robust models when transferred to out-of-domain benchmarks. Our performance improvements on in-domain and, more prominently, out-of-domain benchmarks show that augmentation regularizes the ranking model and improves its robustness and generalization capability.
KW - contrastive loss
KW - data augmentation
KW - document ranking
KW - Information retrieval
KW - interpolation
KW - IR
KW - ranking
KW - ranking performance
UR - http://www.scopus.com/inward/record.url?scp=85195026365&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2311.15426
DO - 10.48550/arXiv.2311.15426
M3 - Article
AN - SCOPUS:85195026365
VL - 42
JO - ACM Transactions on Information Systems
JF - ACM Transactions on Information Systems
SN - 1046-8188
IS - 5
M1 - 119
ER -