Details
| Original language | English |
|---|---|
| Pages (from-to) | 641-682 |
| Number of pages | 42 |
| Journal | SCIENTOMETRICS |
| Volume | 126 |
| Issue number | 1 |
| Early online date | 3 Nov 2020 |
| Publication status | Published - Jan 2021 |
Abstract
Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index) has been developed by different research communities to make such assessments effective. However, most metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop such metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and the stakeholders involved. The resulting quality metrics are determined with respect to three general categories: events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data to determine quality metrics. We show that the metrics’ values coincide with the community’s intuitive agreement on its “top conferences”. Our results demonstrate that highly ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.
Keywords
- Bibliometrics, Metadata analysis, Quality assessment, Recommendation, Scientific events
ASJC Scopus subject areas
- Social Sciences (all)
- Computer Science (all)
- Computer Science Applications
- Library and Information Sciences
Cite this
In: SCIENTOMETRICS, Vol. 126, No. 1, 01.2021, p. 641-682.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - A comprehensive quality assessment framework for scientific events
AU - Vahdati, Sahar
AU - Fathalla, Said
AU - Lange, Christoph
AU - Behrend, Andreas
AU - Say, Aysegul
AU - Say, Zeynep
AU - Auer, Sören
N1 - Funding Information: Open Access funding enabled and organized by Projekt DEAL. This work is part of the doctoral dissertation of the first author at the University of Bonn, and has been partially presented in Chapter 4 of the dissertation documentation (Vahdati ). The work has been partially funded by DFG under grant agreement LA 3745/4-1 (ConfIDent) and ERC project ScienceGRAPH No. 819536. The authors would like to thank Prof. Maria-Esther Vidal for her valuable comments during the development of this work.
PY - 2021/1
Y1 - 2021/1
N2 - Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index) has been developed by different research communities to make such assessments effective. However, most metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop such metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and the stakeholders involved. The resulting quality metrics are determined with respect to three general categories: events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data to determine quality metrics. We show that the metrics’ values coincide with the community’s intuitive agreement on its “top conferences”. Our results demonstrate that highly ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.
AB - Systematic assessment of scientific events has become increasingly important for research communities. A range of metrics (e.g., citations, h-index) has been developed by different research communities to make such assessments effective. However, most metrics for assessing the quality of less formal publication venues and events have not yet been deeply investigated. It is also rather challenging to develop such metrics because each research community has its own formal and informal rules of communication and quality standards. In this article, we develop a comprehensive framework of assessment metrics for evaluating scientific events and the stakeholders involved. The resulting quality metrics are determined with respect to three general categories: events, persons, and bibliometrics. Our assessment methodology is empirically applied to several series of computer science events, such as conferences and workshops, using publicly available data to determine quality metrics. We show that the metrics’ values coincide with the community’s intuitive agreement on its “top conferences”. Our results demonstrate that highly ranked events share similar profiles, including the provision of outstanding reviews, visiting diverse locations, having reputed people involved, and renowned sponsors.
KW - Bibliometrics
KW - Metadata analysis
KW - Quality assessment
KW - Recommendation
KW - Scientific events
UR - http://www.scopus.com/inward/record.url?scp=85094972350&partnerID=8YFLogxK
U2 - 10.1007/s11192-020-03758-1
DO - 10.1007/s11192-020-03758-1
M3 - Article
AN - SCOPUS:85094972350
VL - 126
SP - 641
EP - 682
JO - SCIENTOMETRICS
JF - SCIENTOMETRICS
SN - 0138-9130
IS - 1
ER -