Details
| Original language | English |
| --- | --- |
| Pages (from-to) | 273-285 |
| Number of pages | 13 |
| Journal | International Journal on Digital Libraries |
| Volume | 25 |
| Issue number | 2 |
| Publication status | Published - 5 Apr 2023 |
| Externally published | Yes |
Abstract
Due to the growing number of scholarly publications, finding relevant articles becomes increasingly difficult. Scholarly knowledge graphs can be used to organize the scholarly knowledge presented within those publications and represent it in machine-readable formats. Natural language processing (NLP) provides scalable methods to automatically extract knowledge from articles and populate scholarly knowledge graphs. However, NLP extraction is generally not sufficiently accurate and, thus, fails to generate data of high quality and granularity. In this work, we present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using microtasks performed with crowdsourcing. TinyGenius is employed to populate a paper-centric knowledge graph, using five distinct NLP methods. We extend our previous work on the TinyGenius methodology in various ways. Specifically, we discuss the NLP tasks in more detail and include an explanation of the data model. Moreover, we present a user evaluation where participants validate the generated NLP statements. The results indicate that employing microtasks for statement validation is a promising approach despite the varying participant agreement for different microtasks.
Keywords
- Crowdsourcing microtasks, Knowledge graph validation, Scholarly knowledge graphs, User interface evaluation
ASJC Scopus subject areas
- Social Sciences (all)
- Library and Information Sciences
Cite this
In: International Journal on Digital Libraries, Vol. 25, No. 2, 05.04.2023, p. 273-285.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Creating and validating a scholarly knowledge graph using natural language processing and microtask crowdsourcing
AU - Oelen, Allard
AU - Stocker, Markus
AU - Auer, Sören
N1 - Funding Information: This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and the TIB Leibniz Information Centre for Science and Technology. We would like to thank Mohamad Yaser Jaradeh and Jennifer D’Souza for their contributions to this work. Open Access funding enabled and organized by Projekt DEAL.
PY - 2023/4/5
Y1 - 2023/4/5
N2 - Due to the growing number of scholarly publications, finding relevant articles becomes increasingly difficult. Scholarly knowledge graphs can be used to organize the scholarly knowledge presented within those publications and represent them in machine-readable formats. Natural language processing (NLP) provides scalable methods to automatically extract knowledge from articles and populate scholarly knowledge graphs. However, NLP extraction is generally not sufficiently accurate and, thus, fails to generate high granularity quality data. In this work, we present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using microtasks performed with crowdsourcing. TinyGenius is employed to populate a paper-centric knowledge graph, using five distinct NLP methods. We extend our previous work of the TinyGenius methodology in various ways. Specifically, we discuss the NLP tasks in more detail and include an explanation of the data model. Moreover, we present a user evaluation where participants validate the generated NLP statements. The results indicate that employing microtasks for statement validation is a promising approach despite the varying participant agreement for different microtasks.
AB - Due to the growing number of scholarly publications, finding relevant articles becomes increasingly difficult. Scholarly knowledge graphs can be used to organize the scholarly knowledge presented within those publications and represent them in machine-readable formats. Natural language processing (NLP) provides scalable methods to automatically extract knowledge from articles and populate scholarly knowledge graphs. However, NLP extraction is generally not sufficiently accurate and, thus, fails to generate high granularity quality data. In this work, we present TinyGenius, a methodology to validate NLP-extracted scholarly knowledge statements using microtasks performed with crowdsourcing. TinyGenius is employed to populate a paper-centric knowledge graph, using five distinct NLP methods. We extend our previous work of the TinyGenius methodology in various ways. Specifically, we discuss the NLP tasks in more detail and include an explanation of the data model. Moreover, we present a user evaluation where participants validate the generated NLP statements. The results indicate that employing microtasks for statement validation is a promising approach despite the varying participant agreement for different microtasks.
KW - Crowdsourcing microtasks
KW - Knowledge graph validation
KW - Scholarly knowledge graphs
KW - User interface evaluation
UR - http://www.scopus.com/inward/record.url?scp=85151531559&partnerID=8YFLogxK
U2 - 10.1007/s00799-023-00360-7
DO - 10.1007/s00799-023-00360-7
M3 - Article
AN - SCOPUS:85151531559
VL - 25
SP - 273
EP - 285
JO - International Journal on Digital Libraries
JF - International Journal on Digital Libraries
SN - 1432-5012
IS - 2
ER -