Details
| Original language | English |
| --- | --- |
| Pages (from-to) | 197-215 |
| Number of pages | 19 |
| Journal | International Journal on Digital Libraries |
| Volume | 23 |
| Issue number | 2 |
| Early online date | 2 Nov 2021 |
| Publication status | Published - Jun 2022 |
Abstract
The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To cater to these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been popularly explored for automatic relation classification. Despite significant progress, most of them were evaluated in different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, which ignores the digitization context of early scholarly publications, which were machine-scanned and processed with optical character recognition (OCR). In such cases, the texts may contain OCR noise, in turn creating uncertainty about existing classifiers’ performance. To address these limitations, we started by creating OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models by focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant to identify scientific relations. The strategy of predicting a single relation each time generally outperforms the one that simultaneously identifies multiple relations. The optimal classifier’s performance can decline by around 10% to 20% in F-score on the noisy corpora. Insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.
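For readers who want a concrete picture of the kind of classifier the abstract refers to, here is a minimal sketch of BERT-based relation classification using the HuggingFace `transformers` library. The SciBERT checkpoint name, the toy relation label set, and the input encoding are illustrative assumptions, not the authors' exact models or corpora; the sketch shows the "single relation at a time" strategy, while a multi-label variant would score each label independently with a sigmoid.

```python
# Illustrative sketch only: a minimal single-relation classifier in the style
# the abstract describes. The checkpoint, label set, and input encoding are
# assumptions for demonstration, not the paper's actual pipeline.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical relation labels; the paper's corpora define their own schemas.
LABELS = ["USAGE", "RESULT", "COMPARE", "PART_WHOLE", "NONE"]

# A domain-specific BERT variant (SciBERT); the classification head added here
# is randomly initialized, so fine-tuning on labeled data is required before
# the predictions are meaningful.
MODEL_NAME = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

def classify_relation(sentence: str, head: str, tail: str) -> str:
    """Predict one relation for a single concept pair (softmax + argmax).
    A multi-relation strategy would instead apply a per-label sigmoid and
    keep every label above a threshold."""
    # One common encoding: the sentence paired with the two concept mentions.
    inputs = tokenizer(sentence, f"{head} [SEP] {tail}",
                       return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify_relation(
    "We use a convolutional network for sentence classification.",
    "convolutional network", "sentence classification"))
```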
Keywords
- Digital library
- Information extraction
- Knowledge graphs
- Neural machine learning
- Scholarly text mining
- Semantic relation classification
ASJC Scopus subject areas
- Social Sciences (all)
- Library and Information Sciences
Cite this
In: International Journal on Digital Libraries, Vol. 23, No. 2, 06.2022, p. 197-215.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections
AU - Jiang, Ming
AU - D’Souza, Jennifer
AU - Auer, Sören
AU - Downie, J. Stephen
N1 - Funding Information: This material is based upon work supported by the National Science Foundation under Grant No. OAC 1939929 and by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536)
PY - 2022/6
Y1 - 2022/6
N2 - The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To cater to these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been popularly explored for automatic relation classification. Despite significant progress, most of them were evaluated in different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, which ignores the digitization context of early scholarly publications, which were machine-scanned and processed with optical character recognition (OCR). In such cases, the texts may contain OCR noise, in turn creating uncertainty about existing classifiers’ performance. To address these limitations, we started by creating OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models by focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant to identify scientific relations. The strategy of predicting a single relation each time generally outperforms the one that simultaneously identifies multiple relations. The optimal classifier’s performance can decline by around 10% to 20% in F-score on the noisy corpora. Insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.
AB - The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To cater to these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been popularly explored for automatic relation classification. Despite significant progress, most of them were evaluated in different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, which ignores the digitization context of early scholarly publications, which were machine-scanned and processed with optical character recognition (OCR). In such cases, the texts may contain OCR noise, in turn creating uncertainty about existing classifiers’ performance. To address these limitations, we started by creating OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models by focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant to identify scientific relations. The strategy of predicting a single relation each time generally outperforms the one that simultaneously identifies multiple relations. The optimal classifier’s performance can decline by around 10% to 20% in F-score on the noisy corpora. Insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.
KW - Digital library
KW - Information extraction
KW - Knowledge graphs
KW - Neural machine learning
KW - Scholarly text mining
KW - Semantic relation classification
UR - http://www.scopus.com/inward/record.url?scp=85118454711&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2305.02291
DO - 10.48550/arXiv.2305.02291
M3 - Article
AN - SCOPUS:85118454711
VL - 23
SP - 197
EP - 215
JO - International Journal on Digital Libraries
JF - International Journal on Digital Libraries
SN - 1432-5012
IS - 2
ER -