Details
Original language | English |
---|---|
Pages (from - to) | 273-285 |
Number of pages | 13 |
Journal | International Journal on Digital Libraries |
Volume | 25 |
Issue number | 2 |
Publication status | Published - 5 Apr 2023 |
Externally published | Yes |
Abstract
Due to the growing number of scholarly publications, finding relevant articles becomes increasingly difficult. Scholarly knowledge graphs can be used to organize the scholarly knowledge presented within those publications and represent it in machine-readable formats. Natural language processing (NLP) provides scalable methods to automatically extract knowledge from articles and populate scholarly knowledge graphs. However, NLP extraction is generally not sufficiently accurate and, thus, fails to generate high-granularity, high-quality data. In this work, we present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using microtasks performed with crowdsourcing. TinyGenius is employed to populate a paper-centric knowledge graph, using five distinct NLP methods. We extend our previous work on the TinyGenius methodology in several ways. Specifically, we discuss the NLP tasks in more detail and include an explanation of the data model. Moreover, we present a user evaluation where participants validate the generated NLP statements. The results indicate that employing microtasks for statement validation is a promising approach despite the varying participant agreement for different microtasks.
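To illustrate the general idea summarized in the abstract, the sketch below models an NLP-extracted statement together with crowdsourced microtask votes and a simple majority-based validation rule. This is a minimal, hypothetical illustration only; the class and field names (ExtractedStatement, add_vote, is_validated, etc.) are assumptions for this sketch and are not taken from the paper's actual data model or implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch: a statement extracted by one of several NLP methods,
# stored as a subject-predicate-object triple with provenance information.
@dataclass
class ExtractedStatement:
    subject: str
    predicate: str
    obj: str
    nlp_method: str          # e.g. "entity-linking", "topic-tagging" (assumed labels)
    confidence: float        # score reported by the NLP method
    votes: List[bool] = field(default_factory=list)  # True = confirmed by a participant

    def add_vote(self, is_correct: bool) -> None:
        """Record the outcome of one crowdsourcing microtask."""
        self.votes.append(is_correct)

    def is_validated(self, min_votes: int = 3) -> Optional[bool]:
        """Majority decision once enough microtask votes are collected.

        Returns None while the statement is still awaiting votes.
        """
        if len(self.votes) < min_votes:
            return None
        return sum(self.votes) > len(self.votes) / 2


# Usage: a statement extracted from a paper, validated by three participants.
stmt = ExtractedStatement("Paper:123", "mentions", "knowledge graph",
                          nlp_method="entity-linking", confidence=0.72)
for vote in (True, True, False):
    stmt.add_vote(vote)
print(stmt.is_validated())  # True: two of three participants confirmed the statement
```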
ASJC Scopus subject areas
- Social Sciences (all)
- Library and Information Sciences
Cite
In: International Journal on Digital Libraries, Vol. 25, No. 2, 05.04.2023, pp. 273-285.
Publication: Contribution to journal › Article › Research › Peer review
TY - JOUR
T1 - Creating and validating a scholarly knowledge graph using natural language processing and microtask crowdsourcing
AU - Oelen, Allard
AU - Stocker, Markus
AU - Auer, Sören
N1 - Funding Information: This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and the TIB Leibniz Information Centre for Science and Technology. We would like to thank Mohamad Yaser Jaradeh and Jennifer D’Souza for their contributions to this work. Open Access funding enabled and organized by Projekt DEAL.
PY - 2023/4/5
Y1 - 2023/4/5
N2 - Due to the growing number of scholarly publications, finding relevant articles becomes increasingly difficult. Scholarly knowledge graphs can be used to organize the scholarly knowledge presented within those publications and represent it in machine-readable formats. Natural language processing (NLP) provides scalable methods to automatically extract knowledge from articles and populate scholarly knowledge graphs. However, NLP extraction is generally not sufficiently accurate and, thus, fails to generate high-granularity, high-quality data. In this work, we present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using microtasks performed with crowdsourcing. TinyGenius is employed to populate a paper-centric knowledge graph, using five distinct NLP methods. We extend our previous work on the TinyGenius methodology in several ways. Specifically, we discuss the NLP tasks in more detail and include an explanation of the data model. Moreover, we present a user evaluation where participants validate the generated NLP statements. The results indicate that employing microtasks for statement validation is a promising approach despite the varying participant agreement for different microtasks.
AB - Due to the growing number of scholarly publications, finding relevant articles becomes increasingly difficult. Scholarly knowledge graphs can be used to organize the scholarly knowledge presented within those publications and represent it in machine-readable formats. Natural language processing (NLP) provides scalable methods to automatically extract knowledge from articles and populate scholarly knowledge graphs. However, NLP extraction is generally not sufficiently accurate and, thus, fails to generate high-granularity, high-quality data. In this work, we present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using microtasks performed with crowdsourcing. TinyGenius is employed to populate a paper-centric knowledge graph, using five distinct NLP methods. We extend our previous work on the TinyGenius methodology in several ways. Specifically, we discuss the NLP tasks in more detail and include an explanation of the data model. Moreover, we present a user evaluation where participants validate the generated NLP statements. The results indicate that employing microtasks for statement validation is a promising approach despite the varying participant agreement for different microtasks.
KW - Crowdsourcing microtasks
KW - Knowledge graph validation
KW - Scholarly knowledge graphs
KW - User interface evaluation
UR - http://www.scopus.com/inward/record.url?scp=85151531559&partnerID=8YFLogxK
U2 - 10.1007/s00799-023-00360-7
DO - 10.1007/s00799-023-00360-7
M3 - Article
AN - SCOPUS:85151531559
VL - 25
SP - 273
EP - 285
JO - International Journal on Digital Libraries
JF - International Journal on Digital Libraries
SN - 1432-5012
IS - 2
ER -