Evaluating Large Language Models for Structured Science Summarization in the Open Research Knowledge Graph

Publication: Contribution to journal › Article › Research › Peer reviewed

Authors

  • Vladyslav Nechakhin
  • Jennifer D’Souza
  • Steffen Eger

Organisational units

External organisations

  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek
  • Universität Mannheim

Details

Original language: English
Article number: 328
Number of pages: 19
Journal: Information (Switzerland)
Volume: 15
Issue number: 6
Publication status: Published - 5 June 2024

Abstract

Structured science summaries or research contributions using properties or dimensions beyond traditional keywords enhance science findability. Current methods, such as those used by the Open Research Knowledge Graph (ORKG), involve manually curating properties to describe research papers’ contributions in a structured manner, but this is labor-intensive and inconsistent among human domain-expert curators. We propose using Large Language Models (LLMs) to automatically suggest these properties. However, it is essential to assess the readiness of LLMs like GPT-3.5, Llama 2, and Mistral for this task before their application. Our study performs a comprehensive comparative analysis between the ORKG’s manually curated properties and those generated by the aforementioned state-of-the-art LLMs. We evaluate LLM performance from four unique perspectives: semantic alignment with and deviation from ORKG properties, fine-grained property mapping accuracy, SciNCL embedding-based cosine similarity, and expert surveys comparing manual annotations with LLM outputs. These evaluations occur within a multidisciplinary science setting. Overall, LLMs show potential as recommendation systems for structuring science, but further fine-tuning is recommended to improve their alignment with scientific tasks and mimicry of human expertise.
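One of the four evaluation perspectives scores LLM-suggested properties against the ORKG's manually curated ones using SciNCL embeddings and cosine similarity. The minimal sketch below illustrates how such a comparison could be computed; the Hugging Face model id "malteos/scincl", the mean pooling, and the example property labels are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): comparing manually curated ORKG
# properties with LLM-suggested properties via SciNCL embeddings and
# cosine similarity. Model id and pooling strategy are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("malteos/scincl")
model = AutoModel.from_pretrained("malteos/scincl")

def embed(labels):
    """Mean-pooled SciNCL embeddings for a list of short property labels."""
    batch = tokenizer(labels, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state      # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)        # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)

# Hypothetical property labels for one paper.
orkg_props = ["research problem", "method", "evaluation metric"]
llm_props = ["problem statement", "approach", "metrics used"]

a = torch.nn.functional.normalize(embed(orkg_props), dim=-1)
b = torch.nn.functional.normalize(embed(llm_props), dim=-1)
similarity = a @ b.T   # pairwise cosine similarities, shape (3, 3)
print(similarity)
```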

ASJC Scopus subject areas

Cite

Evaluating Large Language Models for Structured Science Summarization in the Open Research Knowledge Graph. / Nechakhin, Vladyslav; D’Souza, Jennifer; Eger, Steffen.
In: Information (Switzerland), Vol. 15, No. 6, 328, 05.06.2024.


Nechakhin V, D’Souza J, Eger S. Evaluating Large Language Models for Structured Science Summarization in the Open Research Knowledge Graph. Information (Switzerland). 2024 Jun 5;15(6):328. doi: 10.48550/arXiv.2405.02105, 10.3390/info15060328
Nechakhin, Vladyslav ; D’Souza, Jennifer ; Eger, Steffen. / Evaluating Large Language Models for Structured Science Summarization in the Open Research Knowledge Graph. In: Information (Switzerland). 2024 ; Vol. 15, No. 6.
Download (BibTeX)
@article{fd9933b1ee0143a09164d475b9a88414,
title = "Evaluating Large Language Models for Structured Science Summarization in the Open Research Knowledge Graph",
abstract = "Structured science summaries or research contributions using properties or dimensions beyond traditional keywords enhance science findability. Current methods, such as those used by the Open Research Knowledge Graph (ORKG), involve manually curating properties to describe research papers{\textquoteright} contributions in a structured manner, but this is labor-intensive and inconsistent among human domain-expert curators. We propose using Large Language Models (LLMs) to automatically suggest these properties. However, it is essential to assess the readiness of LLMs like GPT-3.5, Llama 2, and Mistral for this task before their application. Our study performs a comprehensive comparative analysis between the ORKG{\textquoteright}s manually curated properties and those generated by the aforementioned state-of-the-art LLMs. We evaluate LLM performance from four unique perspectives: semantic alignment with and deviation from ORKG properties, fine-grained property mapping accuracy, SciNCL embedding-based cosine similarity, and expert surveys comparing manual annotations with LLM outputs. These evaluations occur within a multidisciplinary science setting. Overall, LLMs show potential as recommendation systems for structuring science, but further fine-tuning is recommended to improve their alignment with scientific tasks and mimicry of human expertise.",
keywords = "large language models, Open Research Knowledge Graph, structured summarization",
author = "Vladyslav Nechakhin and Jennifer D{\textquoteright}Souza and Steffen Eger",
note = "Publisher Copyright: {\textcopyright} 2024 by the authors.",
year = "2024",
month = jun,
day = "5",
doi = "10.48550/arXiv.2405.02105",
language = "English",
volume = "15",
number = "6",

}

Download (RIS)

TY - JOUR

T1 - Evaluating Large Language Models for Structured Science Summarization in the Open Research Knowledge Graph

AU - Nechakhin, Vladyslav

AU - D’Souza, Jennifer

AU - Eger, Steffen

N1 - Publisher Copyright: © 2024 by the authors.

PY - 2024/6/5

Y1 - 2024/6/5

N2 - Structured science summaries or research contributions using properties or dimensions beyond traditional keywords enhance science findability. Current methods, such as those used by the Open Research Knowledge Graph (ORKG), involve manually curating properties to describe research papers’ contributions in a structured manner, but this is labor-intensive and inconsistent among human domain-expert curators. We propose using Large Language Models (LLMs) to automatically suggest these properties. However, it is essential to assess the readiness of LLMs like GPT-3.5, Llama 2, and Mistral for this task before their application. Our study performs a comprehensive comparative analysis between the ORKG’s manually curated properties and those generated by the aforementioned state-of-the-art LLMs. We evaluate LLM performance from four unique perspectives: semantic alignment with and deviation from ORKG properties, fine-grained property mapping accuracy, SciNCL embedding-based cosine similarity, and expert surveys comparing manual annotations with LLM outputs. These evaluations occur within a multidisciplinary science setting. Overall, LLMs show potential as recommendation systems for structuring science, but further fine-tuning is recommended to improve their alignment with scientific tasks and mimicry of human expertise.

AB - Structured science summaries or research contributions using properties or dimensions beyond traditional keywords enhance science findability. Current methods, such as those used by the Open Research Knowledge Graph (ORKG), involve manually curating properties to describe research papers’ contributions in a structured manner, but this is labor-intensive and inconsistent among human domain-expert curators. We propose using Large Language Models (LLMs) to automatically suggest these properties. However, it is essential to assess the readiness of LLMs like GPT-3.5, Llama 2, and Mistral for this task before their application. Our study performs a comprehensive comparative analysis between the ORKG’s manually curated properties and those generated by the aforementioned state-of-the-art LLMs. We evaluate LLM performance from four unique perspectives: semantic alignment with and deviation from ORKG properties, fine-grained property mapping accuracy, SciNCL embedding-based cosine similarity, and expert surveys comparing manual annotations with LLM outputs. These evaluations occur within a multidisciplinary science setting. Overall, LLMs show potential as recommendation systems for structuring science, but further fine-tuning is recommended to improve their alignment with scientific tasks and mimicry of human expertise.

KW - large language models

KW - Open Research Knowledge Graph

KW - structured summarization

UR - http://www.scopus.com/inward/record.url?scp=85197313646&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2405.02105

DO - 10.48550/arXiv.2405.02105

M3 - Article

AN - SCOPUS:85197313646

VL - 15

JO - Information (Switzerland)

JF - Information (Switzerland)

IS - 6

M1 - 328

ER -