Improving Scholarly Knowledge Representation: Evaluating BERT-Based Models for Scientific Relation Classification

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Ming Jiang
  • Jennifer D’Souza
  • Sören Auer
  • J. Stephen Downie


External Research Organisations

  • University of Illinois at Urbana-Champaign
  • German National Library of Science and Technology (TIB)

Details

Original language: English
Title of host publication: Digital Libraries at Times of Massive Societal Transition
Subtitle of host publication: 22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020, Proceedings
Editors: Emi Ishita, Natalie Lee Pang, Lihong Zhou
Place of Publication: Cham
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 3-19
Number of pages: 17
ISBN (electronic): 978-3-030-64452-9
ISBN (print): 9783030644512
Publication status: Published - 2020
Event: 22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020 - Kyoto, Japan
Duration: 30 Nov 2020 – 1 Dec 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12504 LNCS
ISSN (Print): 0302-9743
ISSN (electronic): 1611-3349

Abstract

With the rapid growth of research publications, a vast amount of scholarly knowledge needs to be organized in digital libraries. To address this challenge, techniques relying on knowledge-graph structures are being advocated. Within such graph-based pipelines, inferring relation types between related scientific concepts is a crucial step. Recently, advanced techniques relying on language models pre-trained on large corpora have been widely explored for automatic relation classification. Despite the remarkable contributions that have been made, many of these methods were evaluated under different scenarios, which limits their comparability. To address this shortcoming, we present a thorough empirical evaluation of eight BERT-based classification models, focusing on two key factors: 1) BERT model variants, and 2) classification strategies. Experiments on three corpora show that a domain-specific pre-training corpus helps a BERT-based model identify the types of scientific relations. Although the strategy of predicting a single relation at a time generally achieves higher classification accuracy than the strategy of identifying multiple relation types simultaneously, the latter demonstrates more consistent performance across corpora with either large or small numbers of annotations. Our study aims to offer recommendations to the stakeholders of digital libraries for selecting the appropriate technique to build knowledge-graph-based systems for enhanced scholarly information organization.
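The two classification strategies contrasted in the abstract can be illustrated with a minimal sketch. This is a toy example under stated assumptions, not the paper's actual models: the label set `RELATIONS`, the feature vector `h` (a stand-in for a pooled BERT embedding of a sentence with two marked concepts), and the random head weights `W`, `b` are all hypothetical. A single-relation head applies a softmax and picks one label per concept pair; a multi-relation head applies independent sigmoids and keeps every label above a threshold.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative relation label set (hypothetical, not the paper's exact inventory).
RELATIONS = ["USAGE", "RESULT", "COMPARE", "PART_WHOLE"]

rng = np.random.default_rng(0)
h = rng.normal(size=8)                      # stand-in for a pooled BERT embedding
W = rng.normal(size=(len(RELATIONS), 8))    # hypothetical head weights (learned in practice)
b = np.zeros(len(RELATIONS))
logits = W @ h + b

# Strategy 1: predict a single relation per concept pair (softmax + argmax).
single_pred = RELATIONS[int(np.argmax(softmax(logits)))]

# Strategy 2: predict multiple relation types simultaneously (sigmoid + threshold).
multi_pred = [r for r, p in zip(RELATIONS, sigmoid(logits)) if p > 0.5]

print(single_pred, multi_pred)
```

In a real pipeline the feature vector would come from a fine-tuned BERT variant (e.g. a domain-specific one, as the paper evaluates), and only the output head and loss would differ between the two strategies.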

Keywords

    Digital library, Information extraction, Knowledge graphs, Neural machine learning, Scholarly text mining, Semantic relation classification


Cite this

Improving Scholarly Knowledge Representation: Evaluating BERT-Based Models for Scientific Relation Classification. / Jiang, Ming; D’Souza, Jennifer; Auer, Sören et al.
Digital Libraries at Times of Massive Societal Transition : 22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020, Proceedings. ed. / Emi Ishita; Natalie Lee Pang; Lihong Zhou. Cham: Springer Science and Business Media Deutschland GmbH, 2020. p. 3-19 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12504 LNCS).


Jiang, M, D’Souza, J, Auer, S & Downie, JS 2020, Improving Scholarly Knowledge Representation: Evaluating BERT-Based Models for Scientific Relation Classification. in E Ishita, NL Pang & L Zhou (eds), Digital Libraries at Times of Massive Societal Transition : 22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020, Proceedings. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12504 LNCS, Springer Science and Business Media Deutschland GmbH, Cham, pp. 3-19, 22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020, Kyoto, Japan, 30 Nov 2020. https://doi.org/10.1007/978-3-030-64452-9_1
Jiang, M., D’Souza, J., Auer, S., & Downie, J. S. (2020). Improving Scholarly Knowledge Representation: Evaluating BERT-Based Models for Scientific Relation Classification. In E. Ishita, N. L. Pang, & L. Zhou (Eds.), Digital Libraries at Times of Massive Societal Transition : 22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020, Proceedings (pp. 3-19). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12504 LNCS). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-64452-9_1
Jiang M, D’Souza J, Auer S, Downie JS. Improving Scholarly Knowledge Representation: Evaluating BERT-Based Models for Scientific Relation Classification. In Ishita E, Pang NL, Zhou L, editors, Digital Libraries at Times of Massive Societal Transition : 22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020, Proceedings. Cham: Springer Science and Business Media Deutschland GmbH. 2020. p. 3-19. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). Epub 2020 Nov 26. doi: 10.1007/978-3-030-64452-9_1
Jiang, Ming ; D’Souza, Jennifer ; Auer, Sören et al. / Improving Scholarly Knowledge Representation : Evaluating BERT-Based Models for Scientific Relation Classification. Digital Libraries at Times of Massive Societal Transition : 22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020, Proceedings. editor / Emi Ishita ; Natalie Lee Pang ; Lihong Zhou. Cham : Springer Science and Business Media Deutschland GmbH, 2020. pp. 3-19 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
@inproceedings{6c085a08c0e34d97a5bad7c12b281716,
title = "Improving Scholarly Knowledge Representation: Evaluating BERT-Based Models for Scientific Relation Classification",
abstract = "With the rapid growth of research publications, a vast amount of scholarly knowledge needs to be organized in digital libraries. To address this challenge, techniques relying on knowledge-graph structures are being advocated. Within such graph-based pipelines, inferring relation types between related scientific concepts is a crucial step. Recently, advanced techniques relying on language models pre-trained on large corpora have been widely explored for automatic relation classification. Despite the remarkable contributions that have been made, many of these methods were evaluated under different scenarios, which limits their comparability. To address this shortcoming, we present a thorough empirical evaluation of eight BERT-based classification models, focusing on two key factors: 1) BERT model variants, and 2) classification strategies. Experiments on three corpora show that a domain-specific pre-training corpus helps a BERT-based model identify the types of scientific relations. Although the strategy of predicting a single relation at a time generally achieves higher classification accuracy than the strategy of identifying multiple relation types simultaneously, the latter demonstrates more consistent performance across corpora with either large or small numbers of annotations. Our study aims to offer recommendations to the stakeholders of digital libraries for selecting the appropriate technique to build knowledge-graph-based systems for enhanced scholarly information organization.",
keywords = "Digital library, Information extraction, Knowledge graphs, Neural machine learning, Scholarly text mining, Semantic relation classification",
author = "Ming Jiang and Jennifer D{\textquoteright}Souza and S{\"o}ren Auer and Downie, {J. Stephen}",
year = "2020",
doi = "10.1007/978-3-030-64452-9_1",
language = "English",
isbn = "9783030644512",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Science and Business Media Deutschland GmbH",
pages = "3--19",
editor = "Emi Ishita and Pang, {Natalie Lee} and Lihong Zhou",
booktitle = "Digital Libraries at Times of Massive Societal Transition",
address = "Germany",
note = "22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020 ; Conference date: 30-11-2020 Through 01-12-2020",

}


TY - GEN

T1 - Improving Scholarly Knowledge Representation

T2 - 22nd International Conference on Asia-Pacific Digital Libraries, ICADL 2020

AU - Jiang, Ming

AU - D’Souza, Jennifer

AU - Auer, Sören

AU - Downie, J. Stephen

PY - 2020

Y1 - 2020

N2 - With the rapid growth of research publications, a vast amount of scholarly knowledge needs to be organized in digital libraries. To address this challenge, techniques relying on knowledge-graph structures are being advocated. Within such graph-based pipelines, inferring relation types between related scientific concepts is a crucial step. Recently, advanced techniques relying on language models pre-trained on large corpora have been widely explored for automatic relation classification. Despite the remarkable contributions that have been made, many of these methods were evaluated under different scenarios, which limits their comparability. To address this shortcoming, we present a thorough empirical evaluation of eight BERT-based classification models, focusing on two key factors: 1) BERT model variants, and 2) classification strategies. Experiments on three corpora show that a domain-specific pre-training corpus helps a BERT-based model identify the types of scientific relations. Although the strategy of predicting a single relation at a time generally achieves higher classification accuracy than the strategy of identifying multiple relation types simultaneously, the latter demonstrates more consistent performance across corpora with either large or small numbers of annotations. Our study aims to offer recommendations to the stakeholders of digital libraries for selecting the appropriate technique to build knowledge-graph-based systems for enhanced scholarly information organization.

AB - With the rapid growth of research publications, a vast amount of scholarly knowledge needs to be organized in digital libraries. To address this challenge, techniques relying on knowledge-graph structures are being advocated. Within such graph-based pipelines, inferring relation types between related scientific concepts is a crucial step. Recently, advanced techniques relying on language models pre-trained on large corpora have been widely explored for automatic relation classification. Despite the remarkable contributions that have been made, many of these methods were evaluated under different scenarios, which limits their comparability. To address this shortcoming, we present a thorough empirical evaluation of eight BERT-based classification models, focusing on two key factors: 1) BERT model variants, and 2) classification strategies. Experiments on three corpora show that a domain-specific pre-training corpus helps a BERT-based model identify the types of scientific relations. Although the strategy of predicting a single relation at a time generally achieves higher classification accuracy than the strategy of identifying multiple relation types simultaneously, the latter demonstrates more consistent performance across corpora with either large or small numbers of annotations. Our study aims to offer recommendations to the stakeholders of digital libraries for selecting the appropriate technique to build knowledge-graph-based systems for enhanced scholarly information organization.

KW - Digital library

KW - Information extraction

KW - Knowledge graphs

KW - Neural machine learning

KW - Scholarly text mining

KW - Semantic relation classification

UR - http://www.scopus.com/inward/record.url?scp=85097538751&partnerID=8YFLogxK

U2 - 10.1007/978-3-030-64452-9_1

DO - 10.1007/978-3-030-64452-9_1

M3 - Conference contribution

AN - SCOPUS:85097538751

SN - 9783030644512

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 3

EP - 19

BT - Digital Libraries at Times of Massive Societal Transition

A2 - Ishita, Emi

A2 - Pang, Natalie Lee

A2 - Zhou, Lihong

PB - Springer Science and Business Media Deutschland GmbH

CY - Cham

Y2 - 30 November 2020 through 1 December 2020

ER -
