Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Ming Jiang
  • Jennifer D’Souza
  • Sören Auer
  • J. Stephen Downie

External Research Organisations

  • German National Library of Science and Technology (TIB)
  • University of Illinois at Urbana-Champaign

Details

Original language: English
Pages (from-to): 197-215
Number of pages: 19
Journal: International Journal on Digital Libraries
Volume: 23
Issue number: 2
Early online date: 2 Nov 2021
Publication status: Published - Jun 2022

Abstract

The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To cater to these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been popularly explored for automatic relation classification. Despite significant progress, most of them were evaluated in different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, which ignores the digitization context of early scholarly publications in terms of machine scanning and optical character recognition (OCR). In such cases, the texts may contain OCR noise, in turn creating uncertainty about existing classifiers’ performance. To address these limitations, we started by creating OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models by focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant to identify scientific relations. The strategy of predicting a single relation each time outperforms the one simultaneously identifying multiple relations in general. The optimal classifier’s performance can decline by around 10% to 20% in F-score on the noisy corpora. Insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.
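The OCR-noisy parallel corpora the abstract describes can be approximated with a character-level confusion model. The sketch below is purely illustrative: the paper's actual noise-generation procedure, confusion table, and noise rates are not specified in this record, so `OCR_CONFUSIONS`, `add_ocr_noise`, and `noise_rate` are hypothetical names for a generic simulation of OCR substitution errors.

```python
import random

# Hypothetical table of common OCR character confusions (illustrative only;
# not taken from the paper).
OCR_CONFUSIONS = {"l": "1", "o": "0", "e": "c", "m": "rn", "i": "l"}

def add_ocr_noise(text: str, noise_rate: float = 0.1, seed: int = 0) -> str:
    """Return a copy of `text` where a fraction of confusable characters
    is replaced by a typical OCR misrecognition."""
    rng = random.Random(seed)  # fixed seed keeps the corpus reproducible
    out = []
    for ch in text:
        sub = OCR_CONFUSIONS.get(ch.lower())
        if sub is not None and rng.random() < noise_rate:
            # preserve the original casing of the corrupted character
            out.append(sub.upper() if ch.isupper() else sub)
        else:
            out.append(ch)
    return "".join(out)

clean = "semantic relation classification"
noisy = add_ocr_noise(clean, noise_rate=0.3)
```

Applying such a function to each sentence of a clean corpus yields an aligned noisy counterpart, so a classifier can be trained and tested on both sides of the parallel data.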

Keywords

    Digital library, Information extraction, Knowledge graphs, Neural machine learning, Scholarly text mining, Semantic relation classification

Cite this

Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections. / Jiang, Ming; D’Souza, Jennifer; Auer, Sören et al.
In: International Journal on Digital Libraries, Vol. 23, No. 2, 06.2022, p. 197-215.

Jiang M, D’Souza J, Auer S, Downie JS. Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections. International Journal on Digital Libraries. 2022 Jun;23(2):197-215. Epub 2021 Nov 2. doi: 10.48550/arXiv.2305.02291, 10.1007/s00799-021-00313-y
Jiang, Ming ; D’Souza, Jennifer ; Auer, Sören et al. / Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections. In: International Journal on Digital Libraries. 2022 ; Vol. 23, No. 2. pp. 197-215.
BibTeX
@article{1d2b034695c143098a7bedb4e22de255,
title = "Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections",
abstract = "The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To cater to these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been popularly explored for automatic relation classification. Despite significant progress, most of them were evaluated in different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, which ignores the digitization context of early scholarly publications in terms of machine scanning and optical character recognition (OCR). In such cases, the texts may contain OCR noise, in turn creating uncertainty about existing classifiers{\textquoteright} performance. To address these limitations, we started by creating OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models by focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant to identify scientific relations. The strategy of predicting a single relation each time outperforms the one simultaneously identifying multiple relations in general. The optimal classifier{\textquoteright}s performance can decline by around 10% to 20% in F-score on the noisy corpora. Insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.",
keywords = "Digital library, Information extraction, Knowledge graphs, Neural machine learning, Scholarly text mining, Semantic relation classification",
author = "Jiang, Ming and D{\textquoteright}Souza, Jennifer and Auer, S{\"o}ren and Downie, {J. Stephen}",
note = "Funding Information: This material is based upon work supported by the National Science Foundation under Grant No. OAC 1939929 and by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536).",
year = "2022",
month = jun,
doi = "10.48550/arXiv.2305.02291",
language = "English",
journal = "International Journal on Digital Libraries",
volume = "23",
pages = "197--215",
number = "2",

}

RIS

TY - JOUR

T1 - Evaluating BERT-based scientific relation classifiers for scholarly knowledge graph construction on digital library collections

AU - Jiang, Ming

AU - D’Souza, Jennifer

AU - Auer, Sören

AU - Downie, J. Stephen

N1 - Funding Information: This material is based upon work supported by the National Science Foundation under Grant No. OAC 1939929 and by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536)

PY - 2022/6

Y1 - 2022/6

N2 - The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To cater to these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been popularly explored for automatic relation classification. Despite significant progress, most of them were evaluated in different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, which ignores the digitization context of early scholarly publications in terms of machine scanning and optical character recognition (OCR). In such cases, the texts may contain OCR noise, in turn creating uncertainty about existing classifiers’ performance. To address these limitations, we started by creating OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models by focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant to identify scientific relations. The strategy of predicting a single relation each time outperforms the one simultaneously identifying multiple relations in general. The optimal classifier’s performance can decline by around 10% to 20% in F-score on the noisy corpora. Insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.

AB - The rapid growth of research publications has placed great demands on digital libraries (DL) for advanced information management technologies. To cater to these demands, techniques relying on knowledge-graph structures are being advocated. In such graph-based pipelines, inferring semantic relations between related scientific concepts is a crucial step. Recently, BERT-based pre-trained models have been popularly explored for automatic relation classification. Despite significant progress, most of them were evaluated in different scenarios, which limits their comparability. Furthermore, existing methods are primarily evaluated on clean texts, which ignores the digitization context of early scholarly publications in terms of machine scanning and optical character recognition (OCR). In such cases, the texts may contain OCR noise, in turn creating uncertainty about existing classifiers’ performance. To address these limitations, we started by creating OCR-noisy texts based on three clean corpora. Given these parallel corpora, we conducted a thorough empirical evaluation of eight BERT-based classification models by focusing on three factors: (1) BERT variants; (2) classification strategies; and (3) OCR noise impacts. Experiments on clean data show that the domain-specific pre-trained BERT is the best variant to identify scientific relations. The strategy of predicting a single relation each time outperforms the one simultaneously identifying multiple relations in general. The optimal classifier’s performance can decline by around 10% to 20% in F-score on the noisy corpora. Insights discussed in this study can help DL stakeholders select techniques for building optimal knowledge-graph-based systems.

KW - Digital library

KW - Information extraction

KW - Knowledge graphs

KW - Neural machine learning

KW - Scholarly text mining

KW - Semantic relation classification

UR - http://www.scopus.com/inward/record.url?scp=85118454711&partnerID=8YFLogxK

U2 - 10.48550/arXiv.2305.02291

DO - 10.48550/arXiv.2305.02291

M3 - Article

AN - SCOPUS:85118454711

VL - 23

SP - 197

EP - 215

JO - International Journal on Digital Libraries

JF - International Journal on Digital Libraries

SN - 1432-5012

IS - 2

ER -