No one is perfect: Analysing the performance of question answering components over the DBpedia knowledge graph

Publication: Contribution to journal › Article › Research › Peer reviewed

Authors

  • Kuldeep Singh
  • Ioanna Lytra
  • Arun Sethupat Radhakrishna
  • Saeedeh Shekarpour
  • Maria Esther Vidal
  • Jens Lehmann

Organizational units

External organizations

  • Rheinische Friedrich-Wilhelms-Universität Bonn
  • University of Minnesota
  • University of Dayton
  • Technische Informationsbibliothek (TIB) Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek

Details

Original language: English
Article number: 100594
Journal: Journal of Web Semantics
Volume: 65
Early online date: 5 Aug 2020
Publication status: Published - Dec 2020

Abstract

Question answering (QA) over knowledge graphs has gained significant momentum over the past five years due to the increasing availability of large knowledge graphs and the rising importance of Question Answering for user interaction. Existing QA systems have been extensively evaluated as black boxes and their performance has been characterised in terms of average results over all the questions of benchmarking datasets (i.e. macro evaluation). Albeit informative, macro evaluation studies do not provide evidence about QA components’ strengths and concrete weaknesses. Therefore, the objective of this article is to analyse and micro evaluate available QA components in order to comprehend which question characteristics impact on their performance. For this, we measure at question level and with respect to different question features the accuracy of 29 components reused in QA frameworks for the DBpedia knowledge graph using state-of-the-art benchmarks. As a result, we provide a perspective on collective failure cases, study the similarities and synergies among QA components for different component types and suggest their characteristics preventing them from effectively solving the corresponding QA tasks. Finally, based on these extensive results, we present conclusive insights for future challenges and research directions in the field of Question Answering over knowledge graphs.

ASJC Scopus subject areas

Cite this

No one is perfect: Analysing the performance of question answering components over the DBpedia knowledge graph. / Singh, Kuldeep; Lytra, Ioanna; Radhakrishna, Arun Sethupat et al.
In: Journal of Web Semantics, Vol. 65, 100594, 12.2020.


Singh K, Lytra I, Radhakrishna AS, Shekarpour S, Vidal ME, Lehmann J. No one is perfect: Analysing the performance of question answering components over the DBpedia knowledge graph. Journal of Web Semantics. 2020 Dec;65:100594. Epub 2020 Aug 5. doi: 10.48550/arXiv.1809.10044, 10.1016/j.websem.2020.100594
Singh, Kuldeep; Lytra, Ioanna; Radhakrishna, Arun Sethupat et al. / No one is perfect: Analysing the performance of question answering components over the DBpedia knowledge graph. In: Journal of Web Semantics. 2020; Vol. 65.
@article{4df41d75ddf1465088692d3919276e83,
title = "No one is perfect: Analysing the performance of question answering components over the DBpedia knowledge graph",
abstract = "Question answering (QA) over knowledge graphs has gained significant momentum over the past five years due to the increasing availability of large knowledge graphs and the rising importance of Question Answering for user interaction. Existing QA systems have been extensively evaluated as black boxes and their performance has been characterised in terms of average results over all the questions of benchmarking datasets (i.e. macro evaluation). Albeit informative, macro evaluation studies do not provide evidence about QA components{\textquoteright} strengths and concrete weaknesses. Therefore, the objective of this article is to analyse and micro evaluate available QA components in order to comprehend which question characteristics impact on their performance. For this, we measure at question level and with respect to different question features the accuracy of 29 components reused in QA frameworks for the DBpedia knowledge graph using state-of-the-art benchmarks. As a result, we provide a perspective on collective failure cases, study the similarities and synergies among QA components for different component types and suggest their characteristics preventing them from effectively solving the corresponding QA tasks. Finally, based on these extensive results, we present conclusive insights for future challenges and research directions in the field of Question Answering over knowledge graphs.",
keywords = "Entity linking, Experiment and analysis, Knowledge graph, Question answering, Relation extraction, Relation linking",
author = "Kuldeep Singh and Ioanna Lytra and Radhakrishna, {Arun Sethupat} and Saeedeh Shekarpour and Vidal, {Maria Esther} and Jens Lehmann",
note = "Funding Information: This work has received funding from the EU H2020 R&I programme for the Marie Sk{\l}odowska-Curie action WDAqua (GA No 642795).",
year = "2020",
month = dec,
doi = "10.48550/arXiv.1809.10044",
language = "English",
volume = "65",
journal = "Journal of Web Semantics",
issn = "1570-8268",
publisher = "Elsevier",

}


TY - JOUR

T1 - No one is perfect

T2 - Analysing the performance of question answering components over the DBpedia knowledge graph

AU - Singh, Kuldeep

AU - Lytra, Ioanna

AU - Radhakrishna, Arun Sethupat

AU - Shekarpour, Saeedeh

AU - Vidal, Maria Esther

AU - Lehmann, Jens

N1 - Funding Information: This work has received funding from the EU H2020 R&I programme for the Marie Skłodowska-Curie action WDAqua (GA No 642795).

PY - 2020/12

Y1 - 2020/12

N2 - Question answering (QA) over knowledge graphs has gained significant momentum over the past five years due to the increasing availability of large knowledge graphs and the rising importance of Question Answering for user interaction. Existing QA systems have been extensively evaluated as black boxes and their performance has been characterised in terms of average results over all the questions of benchmarking datasets (i.e. macro evaluation). Albeit informative, macro evaluation studies do not provide evidence about QA components’ strengths and concrete weaknesses. Therefore, the objective of this article is to analyse and micro evaluate available QA components in order to comprehend which question characteristics impact on their performance. For this, we measure at question level and with respect to different question features the accuracy of 29 components reused in QA frameworks for the DBpedia knowledge graph using state-of-the-art benchmarks. As a result, we provide a perspective on collective failure cases, study the similarities and synergies among QA components for different component types and suggest their characteristics preventing them from effectively solving the corresponding QA tasks. Finally, based on these extensive results, we present conclusive insights for future challenges and research directions in the field of Question Answering over knowledge graphs.

AB - Question answering (QA) over knowledge graphs has gained significant momentum over the past five years due to the increasing availability of large knowledge graphs and the rising importance of Question Answering for user interaction. Existing QA systems have been extensively evaluated as black boxes and their performance has been characterised in terms of average results over all the questions of benchmarking datasets (i.e. macro evaluation). Albeit informative, macro evaluation studies do not provide evidence about QA components’ strengths and concrete weaknesses. Therefore, the objective of this article is to analyse and micro evaluate available QA components in order to comprehend which question characteristics impact on their performance. For this, we measure at question level and with respect to different question features the accuracy of 29 components reused in QA frameworks for the DBpedia knowledge graph using state-of-the-art benchmarks. As a result, we provide a perspective on collective failure cases, study the similarities and synergies among QA components for different component types and suggest their characteristics preventing them from effectively solving the corresponding QA tasks. Finally, based on these extensive results, we present conclusive insights for future challenges and research directions in the field of Question Answering over knowledge graphs.

KW - Entity linking

KW - Experiment and analysis

KW - Knowledge graph

KW - Question answering

KW - Relation extraction

KW - Relation linking

UR - http://www.scopus.com/inward/record.url?scp=85089276334&partnerID=8YFLogxK

U2 - 10.48550/arXiv.1809.10044

DO - 10.48550/arXiv.1809.10044

M3 - Article

AN - SCOPUS:85089276334

VL - 65

JO - Journal of Web Semantics

JF - Journal of Web Semantics

SN - 1570-8268

M1 - 100594

ER -