A comprehensive quality assessment framework for scientific events

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Sahar Vahdati
  • Said Fathalla
  • Christoph Lange
  • Andreas Behrend
  • Aysegul Say
  • Zeynep Say
  • Sören Auer

Research Organisations

External Research Organisations

  • University of Oxford
  • Institute for Applied Informatics (InfAI) e.V.
  • University of Bonn
  • Alexandria University
  • RWTH Aachen University
  • Fraunhofer Institute for Applied Information Technology (FIT)
  • TH Köln - University of Applied Sciences
  • German National Library of Science and Technology (TIB)

Details

Original language: English
Pages (from-to): 641-682
Number of pages: 42
Journal: SCIENTOMETRICS
Volume: 126
Issue number: 1
Early online date: 3 Nov 2020
Publication status: Published - Jan 2021

Abstract

Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index) have been developed by different research communities to make such assessments effective. However, most of the metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop respective metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and involved stakeholders. The resulting quality metrics are determined with respect to three general categories—events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data for determining quality metrics. We show that the metrics’ values coincide with the intuitive agreement of the community on its “top conferences”. Our results demonstrate that highly ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.

Keywords

    Bibliometrics, Metadata analysis, Quality assessment, Recommendation, Scientific events


Cite this

A comprehensive quality assessment framework for scientific events. / Vahdati, Sahar; Fathalla, Said; Lange, Christoph et al.
In: SCIENTOMETRICS, Vol. 126, No. 1, 01.2021, p. 641-682.


Vahdati, S, Fathalla, S, Lange, C, Behrend, A, Say, A, Say, Z & Auer, S 2021, 'A comprehensive quality assessment framework for scientific events', SCIENTOMETRICS, vol. 126, no. 1, pp. 641-682. https://doi.org/10.1007/s11192-020-03758-1
Vahdati, S., Fathalla, S., Lange, C., Behrend, A., Say, A., Say, Z., & Auer, S. (2021). A comprehensive quality assessment framework for scientific events. SCIENTOMETRICS, 126(1), 641-682. https://doi.org/10.1007/s11192-020-03758-1
Vahdati S, Fathalla S, Lange C, Behrend A, Say A, Say Z et al. A comprehensive quality assessment framework for scientific events. SCIENTOMETRICS. 2021 Jan;126(1):641-682. Epub 2020 Nov 3. doi: 10.1007/s11192-020-03758-1
Vahdati, Sahar ; Fathalla, Said ; Lange, Christoph et al. / A comprehensive quality assessment framework for scientific events. In: SCIENTOMETRICS. 2021 ; Vol. 126, No. 1. pp. 641-682.
BibTeX
@article{106596c22d08477b8c1ab72778694bef,
title = "A comprehensive quality assessment framework for scientific events",
abstract = "Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index) have been developed by different research communities to make such assessments effective. However, most of the metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop respective metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and involved stakeholders. The resulting quality metrics are determined with respect to three general categories—events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data for determining quality metrics. We show that the metrics{\textquoteright} values coincide with the intuitive agreement of the community on its “top conferences”. Our results demonstrate that highly ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.",
keywords = "Bibliometrics, Metadata analysis, Quality assessment, Recommendation, Scientific events",
author = "Sahar Vahdati and Said Fathalla and Christoph Lange and Andreas Behrend and Aysegul Say and Zeynep Say and S{\"o}ren Auer",
note = "Funding Information: Open Access funding enabled and organized by Projekt DEAL. This work is part of the doctoral dissertation of the first author at the University of Bonn, and has been partially presented in Chapter 4 of the dissertation documentation (Vahdati ). The work has been partially funded by DFG under grant agreement LA 3745/4-1 (ConfIDent) and ERC project ScienceGRAPH No. 819536. The authors would like to thank Prof. Maria-Esther Vidal for her valuable comments during the development of this work. ",
year = "2021",
month = jan,
doi = "10.1007/s11192-020-03758-1",
language = "English",
volume = "126",
pages = "641--682",
journal = "SCIENTOMETRICS",
issn = "0138-9130",
publisher = "Springer Netherlands",
number = "1",

}

RIS

TY - JOUR

T1 - A comprehensive quality assessment framework for scientific events

AU - Vahdati, Sahar

AU - Fathalla, Said

AU - Lange, Christoph

AU - Behrend, Andreas

AU - Say, Aysegul

AU - Say, Zeynep

AU - Auer, Sören

N1 - Funding Information: Open Access funding enabled and organized by Projekt DEAL. This work is part of the doctoral dissertation of the first author at the University of Bonn, and has been partially presented in Chapter 4 of the dissertation documentation (Vahdati ). The work has been partially funded by DFG under grant agreement LA 3745/4-1 (ConfIDent) and ERC project ScienceGRAPH No. 819536. The authors would like to thank Prof. Maria-Esther Vidal for her valuable comments during the development of this work.

PY - 2021/1

Y1 - 2021/1

N2 - Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index) have been developed by different research communities to make such assessments effective. However, most of the metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop respective metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and involved stakeholders. The resulting quality metrics are determined with respect to three general categories—events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data for determining quality metrics. We show that the metrics’ values coincide with the intuitive agreement of the community on its “top conferences”. Our results demonstrate that highly ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.

AB - Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index) have been developed by different research communities to make such assessments effective. However, most of the metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop respective metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and involved stakeholders. The resulting quality metrics are determined with respect to three general categories—events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data for determining quality metrics. We show that the metrics’ values coincide with the intuitive agreement of the community on its “top conferences”. Our results demonstrate that highly ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.

KW - Bibliometrics

KW - Metadata analysis

KW - Quality assessment

KW - Recommendation

KW - Scientific events

UR - http://www.scopus.com/inward/record.url?scp=85094972350&partnerID=8YFLogxK

U2 - 10.1007/s11192-020-03758-1

DO - 10.1007/s11192-020-03758-1

M3 - Article

AN - SCOPUS:85094972350

VL - 126

SP - 641

EP - 682

JO - SCIENTOMETRICS

JF - SCIENTOMETRICS

SN - 0138-9130

IS - 1

ER -
