The dataref versuchung: Saving Time through Better Internal Repeatability

Publication: Conference article in journal · Research · Peer-reviewed

Authors

  • Christian Dietrich
  • Daniel Lohmann

External organisations

  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU Erlangen-Nürnberg)

Details

Original language: English
Pages (from - to): 51-60
Number of pages: 10
Journal: Operating Systems Review (ACM)
Volume: 49
Issue number: 1
Publication status: Published - 20 Jan 2015
Published externally: Yes
Event: 8th Workshop on Large-Scale Distributed Systems and Middleware, LADIS 2014 - Cambridge, United Kingdom
Duration: 23 Oct 2014 - 24 Oct 2014

Abstract

Compared to more traditional disciplines, such as the natural sciences, computer science is said to have a somewhat sloppy relationship with the external repeatability of published results. However, from our experience the problem starts even earlier: In many cases, authors are not even able to replicate their own results a year later, or to explain how exactly that number on page three of the paper was computed. Because of constant time pressure and strict submission deadlines, the successful researcher has to favor timely results over experiment documentation and data traceability. We consider internal repeatability to be one of the most important prerequisites for external replicability and the scientific process. We describe our approach to foster internal repeatability in our own research projects with the help of dedicated tools for the automation of traceable experimental setups and for data presentation in scientific papers. By employing these tools, measures for ensuring internal repeatability no longer waste valuable working time and pay off quickly: They save time by eliminating recurring, and therefore error-prone, manual work steps, and at the same time increase confidence in experimental results.
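The abstract's core idea, that an experiment should record its own inputs and provenance so the numbers in the paper stay traceable, can be sketched in a few lines of plain Python. This is an illustrative sketch only, not the actual versuchung or dataref API; the names `traceable_run`, `config_hash`, and `experiment.json` are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def run_experiment(params):
    # Placeholder workload: stands in for the real measurement.
    return {"mean_runtime_ms": sum(params["sizes"]) / len(params["sizes"])}

def traceable_run(params, out_path="experiment.json"):
    """Run an experiment and persist inputs, outputs, and provenance together."""
    result = run_experiment(params)
    record = {
        "parameters": params,
        "result": result,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # A hash over the sorted parameters identifies this exact
        # configuration when the result is revisited a year later.
        "config_hash": hashlib.sha256(
            json.dumps(params, sort_keys=True).encode()
        ).hexdigest(),
    }
    with open(out_path, "w") as fh:
        json.dump(record, fh, indent=2)
    return record

record = traceable_run({"sizes": [10, 20, 30]})
# The paper would then reference record["result"]["mean_runtime_ms"] by key,
# rather than copy-pasting the number by hand.
print(record["result"]["mean_runtime_ms"])
```

The point of the sketch is that every figure in the paper can be regenerated from `experiment.json` instead of from memory, which is what eliminates the recurring manual steps the abstract describes.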

ASJC Scopus subject areas

Cite

The dataref versuchung: Saving Time through Better Internal Repeatability. / Dietrich, Christian; Lohmann, Daniel.
In: Operating Systems Review (ACM), Vol. 49, No. 1, 20.01.2015, p. 51-60.


Dietrich, C & Lohmann, D 2015, 'The dataref versuchung: Saving Time through Better Internal Repeatability', Operating Systems Review (ACM), vol. 49, no. 1, pp. 51-60. https://doi.org/10.1145/2723872.2723880
Dietrich, C., & Lohmann, D. (2015). The dataref versuchung: Saving Time through Better Internal Repeatability. Operating Systems Review (ACM), 49(1), 51-60. https://doi.org/10.1145/2723872.2723880
Dietrich C, Lohmann D. The dataref versuchung: Saving Time through Better Internal Repeatability. Operating Systems Review (ACM). 2015 Jan 20;49(1):51-60. doi: 10.1145/2723872.2723880
Dietrich, Christian ; Lohmann, Daniel. / The dataref versuchung : Saving Time through Better Internal Repeatability. In: Operating Systems Review (ACM). 2015 ; Vol. 49, No. 1. pp. 51-60.
@article{ed632c8080a54117b32d9bbe667b5e90,
title = "The dataref versuchung∗: Saving Time through Better Internal Repeatability",
abstract = "Compared to more traditional disciplines, such as the natural sciences, computer science is said to have a somewhat sloppy relationship with the external repeatability of published results. However, from our experience the problem starts even earlier: In many cases, authors are not even able to replicate their own results a year later, or to explain how exactly that number on page three of the paper was computed. Because of constant time pressure and strict submission deadlines, the successful researcher has to favor timely results over experiment documentation and data traceability. We consider internal repeatability to be one of the most important prerequisites for external replicability and the scientific process. We describe our approach to foster internal repeatability in our own research projects with the help of dedicated tools for the automation of traceable experimental setups and for data presentation in scientific papers. By employing these tools, measures for ensuring internal repeatability no longer waste valuable working time and pay off quickly: They save time by eliminating recurring, and therefore error-prone, manual work steps, and at the same time increase confidence in experimental results.",
author = "Christian Dietrich and Daniel Lohmann",
year = "2015",
month = jan,
day = "20",
doi = "10.1145/2723872.2723880",
language = "English",
volume = "49",
journal = "Operating Systems Review (ACM)",
pages = "51--60",
number = "1",
note = "8th Workshop on Large-Scale Distributed Systems and Middleware, LADIS 2014 ; Conference date: 23-10-2014 Through 24-10-2014",

}


TY - JOUR

T1 - The dataref versuchung∗: Saving Time through Better Internal Repeatability

T2 - 8th Workshop on Large-Scale Distributed Systems and Middleware, LADIS 2014

AU - Dietrich, Christian

AU - Lohmann, Daniel

PY - 2015/1/20

Y1 - 2015/1/20

N2 - Compared to more traditional disciplines, such as the natural sciences, computer science is said to have a somewhat sloppy relationship with the external repeatability of published results. However, from our experience the problem starts even earlier: In many cases, authors are not even able to replicate their own results a year later, or to explain how exactly that number on page three of the paper was computed. Because of constant time pressure and strict submission deadlines, the successful researcher has to favor timely results over experiment documentation and data traceability. We consider internal repeatability to be one of the most important prerequisites for external replicability and the scientific process. We describe our approach to foster internal repeatability in our own research projects with the help of dedicated tools for the automation of traceable experimental setups and for data presentation in scientific papers. By employing these tools, measures for ensuring internal repeatability no longer waste valuable working time and pay off quickly: They save time by eliminating recurring, and therefore error-prone, manual work steps, and at the same time increase confidence in experimental results.

AB - Compared to more traditional disciplines, such as the natural sciences, computer science is said to have a somewhat sloppy relationship with the external repeatability of published results. However, from our experience the problem starts even earlier: In many cases, authors are not even able to replicate their own results a year later, or to explain how exactly that number on page three of the paper was computed. Because of constant time pressure and strict submission deadlines, the successful researcher has to favor timely results over experiment documentation and data traceability. We consider internal repeatability to be one of the most important prerequisites for external replicability and the scientific process. We describe our approach to foster internal repeatability in our own research projects with the help of dedicated tools for the automation of traceable experimental setups and for data presentation in scientific papers. By employing these tools, measures for ensuring internal repeatability no longer waste valuable working time and pay off quickly: They save time by eliminating recurring, and therefore error-prone, manual work steps, and at the same time increase confidence in experimental results.

UR - http://www.scopus.com/inward/record.url?scp=84955300775&partnerID=8YFLogxK

U2 - 10.1145/2723872.2723880

DO - 10.1145/2723872.2723880

M3 - Conference article

AN - SCOPUS:84955300775

VL - 49

SP - 51

EP - 60

JO - Operating Systems Review (ACM)

JF - Operating Systems Review (ACM)

SN - 0163-5980

IS - 1

Y2 - 23 October 2014 through 24 October 2014

ER -