Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph

Research output: Chapter in book/report/conference proceeding › Contribution to book/anthology › Research › peer review

Authors

  • Jennifer D’Souza
  • Moussab Hrou
  • Sören Auer

Research Organisations

External Research Organisations

  • German National Library of Science and Technology (TIB)

Details

Original language: English
Title of host publication: Database and Expert Systems Applications
Editors: Christine Strauss, Toshiyuki Amagasa, Gabriele Kotsis, Ismail Khalil, A Min Tjoa
Pages: 508-515
Number of pages: 8
ISBN (electronic): 978-3-031-39847-6
Publication status: Published - 2023

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 14146 LNCS
ISSN (Print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

Recent investigations have explored prompt-based training of transformer language models for new text genres in low-resource settings. This approach has proven effective in transferring pre-trained or fine-tuned models to resource-scarce environments. This work presents the first results on applying prompt-based training to transformers for scholarly knowledge graph object prediction. Methodologically, it stands out in two main ways: 1) it deviates from previous studies that propose entity and relation extraction pipelines, and 2) it tests the method in a significantly different domain, scholarly knowledge, evaluating linguistic, probabilistic, and factual generalizability of large-scale transformer models. Our findings demonstrate that: i) out-of-the-box transformer models underperform in the new scholarly domain, ii) prompt-based training improves performance by up to 40% in relaxed evaluation, and iii) tests of the models in a distinct domain reveal a gap in capturing domain knowledge, highlighting the need for increased attention and resources in the scholarly domain for transformer models.
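As a rough illustration of the prompt-based question answering setup the abstract describes (not the paper's actual pipeline), the sketch below phrases an ORKG-style (subject, predicate) pair as a natural-language question and lets a pretrained extractive QA transformer predict the object from a paper's text. The prompt template, the example triple, and the choice of the publicly available deepset/roberta-base-squad2 model are illustrative assumptions only.

# Minimal sketch of prompt-based object prediction for a scholarly knowledge graph.
# Assumptions: the prompt template, the example (subject, predicate) pair, and the
# model "deepset/roberta-base-squad2" are for illustration, not the paper's setup.
from transformers import pipeline

# General-domain extractive QA model fine-tuned on SQuAD 2.0.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def predict_object(subject: str, predicate: str, context: str) -> str:
    """Phrase the (subject, predicate) pair as a question and extract the object."""
    question = f"What is the {predicate} of the paper '{subject}'?"
    result = qa(question=question, context=context)
    return result["answer"]

# Hypothetical example: predict the missing object for a "research problem" property.
abstract_text = (
    "We fine-tune transformer language models with prompts to predict missing "
    "objects for (subject, predicate) pairs in a scholarly knowledge graph."
)
print(predict_object("Example Paper", "research problem", abstract_text))

An out-of-the-box model like this one reflects the paper's baseline condition; the reported gains come from additionally training such models on prompt-formatted scholarly triples.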

Keywords

    Knowledge Graph Completion, Natural Language Processing, Open Research Knowledge Graph, Prompt-based Question Answering, Question Answering


Cite this

Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. / D’Souza, Jennifer; Hrou, Moussab; Auer, Sören.
Database and Expert Systems Applications. ed. / Christine Strauss; Toshiyuki Amagasa; Gabriele Kotsis; Ismail Khalil; A Min Tjoa. 2023. p. 508-515 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14146 LNCS).


D’Souza, J, Hrou, M & Auer, S 2023, Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. in C Strauss, T Amagasa, G Kotsis, I Khalil & AM Tjoa (eds), Database and Expert Systems Applications. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 14146 LNCS, pp. 508-515. https://doi.org/10.48550/arXiv.2305.12900, https://doi.org/10.1007/978-3-031-39847-6_40
D’Souza, J., Hrou, M., & Auer, S. (2023). Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. In C. Strauss, T. Amagasa, G. Kotsis, I. Khalil, & A. M. Tjoa (Eds.), Database and Expert Systems Applications (pp. 508-515). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 14146 LNCS). https://doi.org/10.48550/arXiv.2305.12900, https://doi.org/10.1007/978-3-031-39847-6_40
D’Souza J, Hrou M, Auer S. Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. In Strauss C, Amagasa T, Kotsis G, Khalil I, Tjoa AM, editors, Database and Expert Systems Applications. 2023. p. 508-515. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). Epub 2023 Aug 18. doi: 10.48550/arXiv.2305.12900, 10.1007/978-3-031-39847-6_40
D’Souza, Jennifer ; Hrou, Moussab ; Auer, Sören. / Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph. Database and Expert Systems Applications. editor / Christine Strauss ; Toshiyuki Amagasa ; Gabriele Kotsis ; Ismail Khalil ; A Min Tjoa. 2023. pp. 508-515 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
BibTeX
@inbook{1d012503238b4e4584598274eaa758ae,
title = "Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph",
abstract = "Recent investigations have explored prompt-based training of transformer language models for new text genres in low-resource settings. This approach has proven effective in transferring pre-trained or fine-tuned models to resource-scarce environments. This work presents the first results on applying prompt-based training to transformers for scholarly knowledge graph object prediction. Methodologically, it stands out in two main ways: 1) it deviates from previous studies that propose entity and relation extraction pipelines, and 2) it tests the method in a significantly different domain, scholarly knowledge, evaluating linguistic, probabilistic, and factual generalizability of large-scale transformer models. Our findings demonstrate that: i) out-of-the-box transformer models underperform on the new scholarly domain, ii) prompt-based training improves performance by up to 40% in relaxed evaluation, and iii) tests of the models in a distinct domain reveals a gap in capturing domain knowledge, highlighting the need for increased attention and resources in the scholarly domain for transformer models.",
keywords = "Knowledge Graph Completion, Natural Language Processing, Open Research Knowledge Graph, Prompt-based Question Answering, Question Answering",
author = "Jennifer D{\textquoteright}Souza and Moussab Hrou and S{\"o}ren Auer",
note = "Publisher Copyright: {\textcopyright} The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.",
year = "2023",
doi = "10.48550/arXiv.2305.12900",
language = "English",
isbn = "978-3-031-39846-9",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "508--515",
editor = "Christine Strauss and Toshiyuki Amagasa and Gabriele Kotsis and Ismail Khalil and Tjoa, {A Min}",
booktitle = "Database and Expert Systems Applications",

}

RIS

TY - CHAP

T1 - Evaluating Prompt-Based Question Answering for Object Prediction in the Open Research Knowledge Graph

AU - D’Souza, Jennifer

AU - Hrou, Moussab

AU - Auer, Sören

N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023.

PY - 2023

Y1 - 2023

N2 - Recent investigations have explored prompt-based training of transformer language models for new text genres in low-resource settings. This approach has proven effective in transferring pre-trained or fine-tuned models to resource-scarce environments. This work presents the first results on applying prompt-based training to transformers for scholarly knowledge graph object prediction. Methodologically, it stands out in two main ways: 1) it deviates from previous studies that propose entity and relation extraction pipelines, and 2) it tests the method in a significantly different domain, scholarly knowledge, evaluating linguistic, probabilistic, and factual generalizability of large-scale transformer models. Our findings demonstrate that: i) out-of-the-box transformer models underperform on the new scholarly domain, ii) prompt-based training improves performance by up to 40% in relaxed evaluation, and iii) tests of the models in a distinct domain reveals a gap in capturing domain knowledge, highlighting the need for increased attention and resources in the scholarly domain for transformer models.

AB - Recent investigations have explored prompt-based training of transformer language models for new text genres in low-resource settings. This approach has proven effective in transferring pre-trained or fine-tuned models to resource-scarce environments. This work presents the first results on applying prompt-based training to transformers for scholarly knowledge graph object prediction. Methodologically, it stands out in two main ways: 1) it deviates from previous studies that propose entity and relation extraction pipelines, and 2) it tests the method in a significantly different domain, scholarly knowledge, evaluating linguistic, probabilistic, and factual generalizability of large-scale transformer models. Our findings demonstrate that: i) out-of-the-box transformer models underperform on the new scholarly domain, ii) prompt-based training improves performance by up to 40% in relaxed evaluation, and iii) tests of the models in a distinct domain reveals a gap in capturing domain knowledge, highlighting the need for increased attention and resources in the scholarly domain for transformer models.

KW - Knowledge Graph Completion

KW - Natural Language Processing

KW - Open Research Knowledge Graph

KW - Prompt-based Question Answering

KW - Question Answering

UR - http://www.scopus.com/inward/record.url?scp=85174710939&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2305.12900

DO - 10.48550/arXiv.2305.12900

M3 - Contribution to book/anthology

SN - 978-3-031-39846-9

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 508

EP - 515

BT - Database and Expert Systems Applications

A2 - Strauss, Christine

A2 - Amagasa, Toshiyuki

A2 - Kotsis, Gabriele

A2 - Khalil, Ismail

A2 - Tjoa, A Min

ER -
