Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph

Publication: Contribution to book/report/anthology/conference proceedings › Contribution to book/anthology › Research › Peer-reviewed

Authors

Jennifer D’Souza, Moussab Hrou, Sören Auer

Organisational units

External organisations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek

Details

Original language: English
Title of host publication: Database and Expert Systems Applications
Editors: Christine Strauss, Toshiyuki Amagasa, Gabriele Kotsis, Ismail Khalil, A Min Tjoa
Pages: 508-515
Number of pages: 8
ISBN (electronic): 978-3-031-39847-6
Publication status: Published - 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14146 LNCS
ISSN (print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Recent investigations have explored prompt-based training of transformer language models for new text genres in low-resource settings. This approach has proven effective in transferring pre-trained or fine-tuned models to resource-scarce environments. This work presents the first results on applying prompt-based training to transformers for scholarly knowledge graph object prediction. Methodologically, it stands out in two main ways: 1) it deviates from previous studies that propose entity and relation extraction pipelines, and 2) it tests the method in a significantly different domain, scholarly knowledge, evaluating linguistic, probabilistic, and factual generalizability of large-scale transformer models. Our findings demonstrate that: i) out-of-the-box transformer models underperform on the new scholarly domain, ii) prompt-based training improves performance by up to 40% in relaxed evaluation, and iii) tests of the models in a distinct domain reveal a gap in capturing domain knowledge, highlighting the need for increased attention and resources in the scholarly domain for transformer models.
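
The abstract frames scholarly knowledge graph object prediction as prompt-based question answering: given a subject (e.g., a paper) and a predicate (e.g., a property such as "research problem"), a transformer is prompted to produce the missing object. The short Python sketch below illustrates that framing with an off-the-shelf extractive QA model from the Hugging Face transformers library; it is not the paper's exact pipeline, and the model checkpoint, prompt template, and example triple are assumptions chosen purely for illustration.

from transformers import pipeline

# Illustrative only: an out-of-the-box extractive QA transformer, of the kind
# the abstract reports to underperform on scholarly text before prompt-based training.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def predict_object(subject: str, predicate: str, context: str) -> dict:
    # Turn the (subject, predicate) pair into a natural-language prompt and let
    # the QA model extract a candidate object span from the context text.
    question = f"What is the {predicate} of {subject}?"
    return qa(question=question, context=context)

# Hypothetical triple completion: (this paper, research problem, ?), using the
# paper's own abstract as context.
abstract_text = (
    "Recent investigations have explored prompt-based training of transformer "
    "language models for scholarly knowledge graph object prediction."
)
result = predict_object("this paper", "research problem", abstract_text)
print(result["answer"], result["score"])

Under the prompt-based training the abstract describes, such a model would additionally be fine-tuned on question prompts derived from ORKG triples rather than applied zero-shot as in this sketch.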

ASJC Scopus subject areas

Cite this

Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. / D’Souza, Jennifer; Hrou, Moussab; Auer, Sören.
Database and Expert Systems Applications. Ed. / Christine Strauss; Toshiyuki Amagasa; Gabriele Kotsis; Ismail Khalil; A Min Tjoa. 2023. pp. 508-515 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14146 LNCS).


D’Souza, J, Hrou, M & Auer, S 2023, Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. in C Strauss, T Amagasa, G Kotsis, I Khalil & AM Tjoa (eds), Database and Expert Systems Applications. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 14146 LNCS, pp. 508-515. https://doi.org/10.48550/arXiv.2305.12900, https://doi.org/10.1007/978-3-031-39847-6_40
D’Souza, J., Hrou, M., & Auer, S. (2023). Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. In C. Strauss, T. Amagasa, G. Kotsis, I. Khalil, & A. M. Tjoa (Eds.), Database and Expert Systems Applications (pp. 508-515). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14146 LNCS). https://doi.org/10.48550/arXiv.2305.12900, https://doi.org/10.1007/978-3-031-39847-6_40
D’Souza J, Hrou M, Auer S. Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. In Strauss C, Amagasa T, Kotsis G, Khalil I, Tjoa AM, editors, Database and Expert Systems Applications. 2023. p. 508-515. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). Epub 2023 Aug 18. doi: 10.48550/arXiv.2305.12900, 10.1007/978-3-031-39847-6_40
D’Souza, Jennifer ; Hrou, Moussab ; Auer, Sören. / Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. Database and Expert Systems Applications. Ed. / Christine Strauss ; Toshiyuki Amagasa ; Gabriele Kotsis ; Ismail Khalil ; A Min Tjoa. 2023. pp. 508-515 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inbook{1d012503238b4e4584598274eaa758ae,
title = "Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph",
abstract = "Recent investigations have explored prompt-based training of transformer language models for new text genres in low-resource settings. This approach has proven effective in transferring pre-trained or fine-tuned models to resource-scarce environments. This work presents the first results on applying prompt-based training to transformers for scholarly knowledge graph object prediction. Methodologically, it stands out in two main ways: 1) it deviates from previous studies that propose entity and relation extraction pipelines, and 2) it tests the method in a significantly different domain, scholarly knowledge, evaluating linguistic, probabilistic, and factual generalizability of large-scale transformer models. Our findings demonstrate that: i) out-of-the-box transformer models underperform on the new scholarly domain, ii) prompt-based training improves performance by up to 40% in relaxed evaluation, and iii) tests of the models in a distinct domain reveals a gap in capturing domain knowledge, highlighting the need for increased attention and resources in the scholarly domain for transformer models.",
keywords = "Knowledge Graph Completion, Natural Language Processing, Open Research Knowledge Graph, Prompt-based Question Answering, Question Answering",
author = "Jennifer D{\textquoteright}Souza and Moussab Hrou and S{\"o}ren Auer",
note = "Publisher Copyright: {\textcopyright} The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.",
year = "2023",
doi = "10.48550/arXiv.2305.12900",
language = "English",
isbn = "978-3-031-39846-9",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "508--515",
editor = "Christine Strauss and Toshiyuki Amagasa and Gabriele Kotsis and Ismail Khalil and Tjoa, {A Min}",
booktitle = "Database and Expert Systems Applications",

}

RIS

TY - CHAP

T1 - Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph

AU - D’Souza, Jennifer

AU - Hrou, Moussab

AU - Auer, Sören

N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.

PY - 2023

Y1 - 2023

N2 - Recent investigations have explored prompt-based training of transformer language models for new text genres in low-resource settings. This approach has proven effective in transferring pre-trained or fine-tuned models to resource-scarce environments. This work presents the first results on applying prompt-based training to transformers for scholarly knowledge graph object prediction. Methodologically, it stands out in two main ways: 1) it deviates from previous studies that propose entity and relation extraction pipelines, and 2) it tests the method in a significantly different domain, scholarly knowledge, evaluating linguistic, probabilistic, and factual generalizability of large-scale transformer models. Our findings demonstrate that: i) out-of-the-box transformer models underperform on the new scholarly domain, ii) prompt-based training improves performance by up to 40% in relaxed evaluation, and iii) tests of the models in a distinct domain reveal a gap in capturing domain knowledge, highlighting the need for increased attention and resources in the scholarly domain for transformer models.

AB - Recent investigations have explored prompt-based training of transformer language models for new text genres in low-resource settings. This approach has proven effective in transferring pre-trained or fine-tuned models to resource-scarce environments. This work presents the first results on applying prompt-based training to transformers for scholarly knowledge graph object prediction. Methodologically, it stands out in two main ways: 1) it deviates from previous studies that propose entity and relation extraction pipelines, and 2) it tests the method in a significantly different domain, scholarly knowledge, evaluating linguistic, probabilistic, and factual generalizability of large-scale transformer models. Our findings demonstrate that: i) out-of-the-box transformer models underperform on the new scholarly domain, ii) prompt-based training improves performance by up to 40% in relaxed evaluation, and iii) tests of the models in a distinct domain reveal a gap in capturing domain knowledge, highlighting the need for increased attention and resources in the scholarly domain for transformer models.

KW - Knowledge Graph Completion

KW - Natural Language Processing

KW - Open Research Knowledge Graph

KW - Prompt-based Question Answering

KW - Question Answering

UR - http://www.scopus.com/inward/record.url?scp=85174710939&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2305.12900

DO - 10.48550/arXiv.2305.12900

M3 - Contribution to book/anthology

SN - 978-3-031-39846-9

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 508

EP - 515

BT - Database and Expert Systems Applications

A2 - Strauss, Christine

A2 - Amagasa, Toshiyuki

A2 - Kotsis, Gabriele

A2 - Khalil, Ismail

A2 - Tjoa, A Min

ER -
