Details
Original language | English |
---|---|
Title of host publication | 2023 ACM/IEEE Joint Conference on Digital Libraries |
Subtitle | JCDL |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 237-241 |
Number of pages | 5 |
ISBN (Electronic) | 979-8-3503-9931-8 |
ISBN (Print) | 979-8-3503-9932-5 |
Publication status | Published - 2023 |
Event | 2023 ACM/IEEE Joint Conference on Digital Libraries, JCDL 2023 - Santa Fe, United States. Duration: 26 June 2023 → 30 June 2023 |
Publication series
Name | Proceedings of the ACM/IEEE Joint Conference on Digital Libraries |
---|---|
Volume | 2023-June |
ISSN (Print) | 1552-5996 |
Abstract
We present a large-scale empirical investigation of the zero-shot learning phenomenon in a specific recognizing textual entailment (RTE) task category, i.e., the automated mining of LEADERBOARDS for Empirical AI Research. The previously reported state-of-the-art models for LEADERBOARD extraction, formulated as an RTE task in a non-zero-shot setting, are promising, with reported performances above 90%. However, a central research question remains unexamined: did the models actually learn entailment? For the experiments in this paper, two previously reported state-of-the-art models are therefore tested out of the box for their ability to generalize, i.e., their capacity for entailment, given LEADERBOARD labels that were unseen during training. We hypothesize that if the models learned entailment, their zero-shot performances can be expected to be moderately high as well; perhaps, concretely, better than chance. As a result of this work, a zero-shot labeled dataset for the LEADERBOARD extraction RTE task is created via distant labeling.
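To make the RTE framing concrete, the sketch below scores candidate LEADERBOARD labels, i.e., (Task, Dataset, Metric) triples phrased as hypotheses, against a paper snippet used as the premise. This is a minimal illustration rather than the authors' setup: the off-the-shelf `facebook/bart-large-mnli` model, the example snippet, the candidate labels, and the hypothesis template are all placeholder assumptions.

```python
# Minimal sketch of leaderboard extraction framed as textual entailment:
# premise = text from a paper, hypothesis = a candidate (Task, Dataset, Metric)
# leaderboard label. A generic off-the-shelf MNLI model stands in for the
# models evaluated in the paper; all names below are illustrative.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

premise = (
    "We evaluate our approach on SQuAD and report an F1 score of 93.2, "
    "improving over previous question answering systems."
)
# Candidate leaderboard labels; in a zero-shot evaluation these would be
# labels the model never saw during fine-tuning.
candidate_labels = [
    "question answering, SQuAD, F1",
    "machine translation, WMT14, BLEU",
    "image classification, ImageNet, top-1 accuracy",
]

result = classifier(
    premise,
    candidate_labels,
    hypothesis_template="This paper reports results for the leaderboard {}.",
)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{score:.3f}  {label}")
```

The zero-shot condition studied in the paper can likewise be sketched as a dataset split in which a subset of leaderboard labels is held out of training entirely, so that better-than-chance test performance cannot come from label memorization. The toy records and field names below are assumptions for illustration.

```python
# Hedged sketch: build a zero-shot split by holding out leaderboard labels.
# Each record pairs paper text with a distantly assigned (Task, Dataset,
# Metric) label; the records and field names are illustrative, not the
# paper's actual distant-labeled data.
import random

records = [
    {"paper_text": "...", "label": ("question answering", "SQuAD", "F1")},
    {"paper_text": "...", "label": ("machine translation", "WMT14", "BLEU")},
    {"paper_text": "...", "label": ("summarization", "CNN/DailyMail", "ROUGE-L")},
]

random.seed(13)
labels = sorted({r["label"] for r in records})
random.shuffle(labels)
held_out = set(labels[: max(1, len(labels) // 3)])  # labels unseen in training

train = [r for r in records if r["label"] not in held_out]
zero_shot_test = [r for r in records if r["label"] in held_out]
# Disjoint label sets: correct test predictions cannot be memorized labels.
assert not {r["label"] for r in train} & {r["label"] for r in zero_shot_test}
```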
ASJC Scopus subject areas
- Engineering (all)
- General Engineering
Cite this
Kabongo, S., D'Souza, J., & Auer, S. (2023). Zero-shot Entailment of Leaderboards for Empirical AI Research. In 2023 ACM/IEEE Joint Conference on Digital Libraries: JCDL (pp. 237-241). Institute of Electrical and Electronics Engineers Inc. (Proceedings of the ACM/IEEE Joint Conference on Digital Libraries; Vol. 2023-June).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Zero-shot Entailment of Leaderboards for Empirical AI Research
AU - Kabongo, Salomon
AU - D'Souza, Jennifer
AU - Auer, Sören
N1 - Funding Information: Acknowledgments. This work was co-funded by the Federal Ministry of Education and Research (BMBF) of Germany for the project LeibnizKILabor (grant no. 01DD20003), the BMBF project SCINEXT (GA ID: 01IS22070), NFDI4DataScience (grant no. 460234259), and by the European Research Council for the project ScienceGRAPH (grant agreement ID: 819536).
PY - 2023
Y1 - 2023
N2 - We present a large-scale empirical investigation of the zero-shot learning phenomenon in a specific recognizing textual entailment (RTE) task category, i.e., the automated mining of LEADERBOARDS for Empirical AI Research. The previously reported state-of-the-art models for LEADERBOARD extraction, formulated as an RTE task in a non-zero-shot setting, are promising, with reported performances above 90%. However, a central research question remains unexamined: did the models actually learn entailment? For the experiments in this paper, two previously reported state-of-the-art models are therefore tested out of the box for their ability to generalize, i.e., their capacity for entailment, given LEADERBOARD labels that were unseen during training. We hypothesize that if the models learned entailment, their zero-shot performances can be expected to be moderately high as well; perhaps, concretely, better than chance. As a result of this work, a zero-shot labeled dataset for the LEADERBOARD extraction RTE task is created via distant labeling.
AB - We present a large-scale empirical investigation of the zero-shot learning phenomenon in a specific recognizing textual entailment (RTE) task category, i.e., the automated mining of LEADERBOARDS for Empirical AI Research. The previously reported state-of-the-art models for LEADERBOARD extraction, formulated as an RTE task in a non-zero-shot setting, are promising, with reported performances above 90%. However, a central research question remains unexamined: did the models actually learn entailment? For the experiments in this paper, two previously reported state-of-the-art models are therefore tested out of the box for their ability to generalize, i.e., their capacity for entailment, given LEADERBOARD labels that were unseen during training. We hypothesize that if the models learned entailment, their zero-shot performances can be expected to be moderately high as well; perhaps, concretely, better than chance. As a result of this work, a zero-shot labeled dataset for the LEADERBOARD extraction RTE task is created via distant labeling.
KW - Entailment
KW - Information-Extraction
KW - Leaderboard
KW - Natural-Language-Inference
UR - http://www.scopus.com/inward/record.url?scp=85174576379&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2303.16835
DO - 10.48550/arXiv.2303.16835
M3 - Conference contribution
AN - SCOPUS:85174576379
SN - 979-8-3503-9932-5
T3 - Proceedings of the ACM/IEEE Joint Conference on Digital Libraries
SP - 237
EP - 241
BT - 2023 ACM/IEEE Joint Conference on Digital Libraries
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 ACM/IEEE Joint Conference on Digital Libraries, JCDL 2023
Y2 - 26 June 2023 through 30 June 2023
ER -