Details
| Original language | English |
|---|---|
| Pages (from-to) | 51-60 |
| Number of pages | 10 |
| Journal | Operating Systems Review (ACM) |
| Volume | 49 |
| Issue number | 1 |
| Publication status | Published - 20 Jan 2015 |
| Externally published | Yes |
| Event | 8th Workshop on Large-Scale Distributed Systems and Middleware, LADIS 2014, Cambridge, United Kingdom (UK). Duration: 23 Oct 2014 → 24 Oct 2014 |
Abstract
Compared to more traditional disciplines, such as the natural sciences, computer science is said to have a somewhat sloppy relationship with the external repeatability of published results. However, from our experience the problem starts even earlier: In many cases, authors are not even able to replicate their own results a year later, or to explain how exactly that number on page three of the paper was computed. Because of constant time pressure and strict submission deadlines, the successful researcher has to favor timely results over experiment documentation and data traceability. We consider internal repeatability to be one of the most important prerequisites for external replicability and the scientific process. We describe our approach to foster internal repeatability in our own research projects with the help of dedicated tools for the automation of traceable experimental setups and for data presentation in scientific papers. By employing these tools, measures for ensuring internal repeatability no longer waste valuable working time and pay off quickly: They save time by eliminating recurring, and therefore error-prone, manual work steps, and at the same time increase confidence in experimental results.
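The abstract's core idea — that every number in a paper should be traceable back to the exact experiment setup that produced it — can be illustrated with a minimal sketch. This is hypothetical code, not the versuchung/dataref tools the paper describes: it merely shows one way to bind a result to a fingerprint of its parameters and to export it for use in a paper instead of hand-copying the value.

```python
# Illustrative sketch of "internal repeatability" (hypothetical; NOT the
# versuchung/dataref API from the paper): store every result together with
# the exact parameters that produced it, and export values for the paper
# from data rather than copying them by hand.
import hashlib
import json

def run_experiment(params, experiment_fn):
    """Run experiment_fn(params) and return a traceable record."""
    result = experiment_fn(params)
    # Fingerprint the parameter set so the result can always be matched
    # to the setup that produced it.
    param_blob = json.dumps(params, sort_keys=True).encode()
    return {
        "params": params,
        "params_hash": hashlib.sha256(param_blob).hexdigest(),
        "result": result,
    }

def export_value(record, macro_name):
    """Emit a LaTeX \\newcommand so the paper pulls the value from the
    recorded data instead of a manually transcribed number."""
    return "\\newcommand{\\%s}{%s}" % (macro_name, record["result"])

if __name__ == "__main__":
    # Toy "experiment": the measurement is just a function of its inputs.
    record = run_experiment({"n": 1000, "seed": 42},
                            lambda p: p["n"] * 2)
    print(export_value(record, "throughput"))
```

The payoff matches the abstract's claim: once the export step is automated, regenerating "that number on page three" is a re-run, not an archaeology exercise.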
ASJC Scopus subject areas
- Computer Science(all)
- Information Systems
- Hardware and Architecture
- Computer Networks and Communications
Cite this
Dietrich, C., & Lohmann, D.: The dataref versuchung. In: Operating Systems Review (ACM), Vol. 49, No. 1, 20.01.2015, p. 51-60.
Research output: Contribution to journal › Conference article › Research › peer review
TY - JOUR
T1 - The dataref versuchung∗
T2 - 8th Workshop on Large-Scale Distributed Systems and Middleware, LADIS 2014
AU - Dietrich, Christian
AU - Lohmann, Daniel
PY - 2015/1/20
Y1 - 2015/1/20
N2 - Compared to more traditional disciplines, such as the natural sciences, computer science is said to have a somewhat sloppy relationship with the external repeatability of published results. However, from our experience the problem starts even earlier: In many cases, authors are not even able to replicate their own results a year later, or to explain how exactly that number on page three of the paper was computed. Because of constant time pressure and strict submission deadlines, the successful researcher has to favor timely results over experiment documentation and data traceability. We consider internal repeatability to be one of the most important prerequisites for external replicability and the scientific process. We describe our approach to foster internal repeatability in our own research projects with the help of dedicated tools for the automation of traceable experimental setups and for data presentation in scientific papers. By employing these tools, measures for ensuring internal repeatability no longer waste valuable working time and pay off quickly: They save time by eliminating recurring, and therefore error-prone, manual work steps, and at the same time increase confidence in experimental results.
UR - http://www.scopus.com/inward/record.url?scp=84955300775&partnerID=8YFLogxK
U2 - 10.1145/2723872.2723880
DO - 10.1145/2723872.2723880
M3 - Conference article
AN - SCOPUS:84955300775
VL - 49
SP - 51
EP - 60
JO - Operating Systems Review (ACM)
JF - Operating Systems Review (ACM)
SN - 0163-5980
IS - 1
Y2 - 23 October 2014 through 24 October 2014
ER -