ASlib: A benchmark library for algorithm selection

Publication: Contribution to journal › Article › Research › Peer reviewed

Authors

  • Bernd Bischl
  • Pascal Kerschke
  • Lars Kotthoff
  • Marius Lindauer
  • Yuri Malitsky
  • Alexandre Fréchette
  • Holger Hoos
  • Frank Hutter
  • Kevin Leyton-Brown
  • Kevin Tierney
  • Joaquin Vanschoren

External organizations

  • Ludwig-Maximilians-Universität München (LMU)
  • Westfälische Wilhelms-Universität Münster (WWU)
  • University of British Columbia
  • Albert-Ludwigs-Universität Freiburg
  • IBM Research
  • Universität Paderborn
  • Eindhoven University of Technology (TU/e)

Details

Original language: English
Pages (from-to): 41-58
Number of pages: 18
Journal: Artificial intelligence
Volume: 237
Early online date: 8 Apr 2016
Publication status: Published - Apr 2016
Published externally: Yes

Abstract

The task of algorithm selection involves choosing an algorithm from a set of algorithms on a per-instance basis in order to exploit the varying performance of algorithms over a set of instances. The algorithm selection problem is attracting increasing attention from researchers and practitioners in AI. Years of fruitful applications in a number of domains have resulted in a large amount of data, but the community lacks a standard format or repository for this data. This situation makes it difficult to share and compare different approaches effectively, as is done in other, more established fields. It also unnecessarily hinders new researchers who want to work in this area. To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature. Our format has been designed to be able to express a wide variety of different scenarios. To demonstrate the breadth and power of our platform, we describe a study that builds and evaluates algorithm selection models through a common interface. The results display the potential of algorithm selection to achieve significant performance improvements across a broad range of problems and algorithms.
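The gains described in the abstract are conventionally measured against two baselines from the algorithm selection literature: the single best solver (SBS), which commits to one algorithm for all instances, and the virtual best solver (VBS), an oracle that picks the best algorithm per instance. The sketch below illustrates this with a small hypothetical runtime matrix (toy data, not taken from the ASlib repository):

```python
# Toy illustration of per-instance algorithm selection (hypothetical data).
# Rows: problem instances; columns: algorithms; entries: runtimes in seconds.
runtimes = {
    "inst1": {"A": 10.0, "B": 2.0, "C": 8.0},
    "inst2": {"A": 1.0, "B": 30.0, "C": 5.0},
    "inst3": {"A": 4.0, "B": 6.0, "C": 3.0},
}
algorithms = ["A", "B", "C"]

# Single best solver (SBS): the one algorithm with the lowest total runtime.
sbs = min(algorithms, key=lambda a: sum(r[a] for r in runtimes.values()))
sbs_total = sum(r[sbs] for r in runtimes.values())

# Virtual best solver (VBS): an oracle choosing the best algorithm per instance.
vbs_total = sum(min(r.values()) for r in runtimes.values())

print(sbs, sbs_total, vbs_total)  # → A 15.0 6.0
```

A per-instance selector that closes even part of the gap between the SBS total (15 s) and the VBS total (6 s) outperforms every individual algorithm, which is the effect the study in this paper quantifies across ASlib scenarios.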


Cite

ASlib: A benchmark library for algorithm selection. / Bischl, Bernd; Kerschke, Pascal; Kotthoff, Lars et al.
In: Artificial intelligence, Vol. 237, 04.2016, pp. 41-58.


Bischl, B, Kerschke, P, Kotthoff, L, Lindauer, M, Malitsky, Y, Fréchette, A, Hoos, H, Hutter, F, Leyton-Brown, K, Tierney, K & Vanschoren, J 2016, 'ASlib: A benchmark library for algorithm selection', Artificial intelligence, Vol. 237, pp. 41-58. https://doi.org/10.1016/j.artint.2016.04.003
Bischl, B., Kerschke, P., Kotthoff, L., Lindauer, M., Malitsky, Y., Fréchette, A., Hoos, H., Hutter, F., Leyton-Brown, K., Tierney, K., & Vanschoren, J. (2016). ASlib: A benchmark library for algorithm selection. Artificial intelligence, 237, 41-58. https://doi.org/10.1016/j.artint.2016.04.003
Bischl B, Kerschke P, Kotthoff L, Lindauer M, Malitsky Y, Fréchette A et al. ASlib: A benchmark library for algorithm selection. Artificial intelligence. 2016 Apr;237:41-58. Epub 2016 Apr 8. doi: 10.1016/j.artint.2016.04.003
Bischl, Bernd ; Kerschke, Pascal ; Kotthoff, Lars et al. / ASlib: A benchmark library for algorithm selection. In: Artificial intelligence. 2016 ; Vol. 237. pp. 41-58.
BibTeX
@article{f20f0144d58d4525941a04d5e977e9cb,
title = "ASlib: A benchmark library for algorithm selection",
abstract = "The task of algorithm selection involves choosing an algorithm from a set of algorithms on a per-instance basis in order to exploit the varying performance of algorithms over a set of instances. The algorithm selection problem is attracting increasing attention from researchers and practitioners in AI. Years of fruitful applications in a number of domains have resulted in a large amount of data, but the community lacks a standard format or repository for this data. This situation makes it difficult to share and compare different approaches effectively, as is done in other, more established fields. It also unnecessarily hinders new researchers who want to work in this area. To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature. Our format has been designed to be able to express a wide variety of different scenarios. To demonstrate the breadth and power of our platform, we describe a study that builds and evaluates algorithm selection models through a common interface. The results display the potential of algorithm selection to achieve significant performance improvements across a broad range of problems and algorithms.",
keywords = "Algorithm selection, Empirical performance estimation, Machine learning",
author = "Bernd Bischl and Pascal Kerschke and Lars Kotthoff and Marius Lindauer and Yuri Malitsky and Alexandre Fr{\'e}chette and Holger Hoos and Frank Hutter and Kevin Leyton-Brown and Kevin Tierney and Joaquin Vanschoren",
note = "Funding information: FH and ML are supported by the DFG (German Research Foundation) under Emmy Noether grant HU 1900/2-1. KLB, AF and LK were supported by an NSERC E.W.R. Steacie Fellowship; in addition, all of these, along with HH, were supported under the NSERC Discovery Grant Program. Part of this research was supported by a Microsoft Azure for Research grant.",
year = "2016",
month = apr,
doi = "10.1016/j.artint.2016.04.003",
language = "English",
volume = "237",
pages = "41--58",
journal = "Artificial intelligence",
issn = "0004-3702",
publisher = "Elsevier",

}

RIS

TY - JOUR

T1 - ASlib: A benchmark library for algorithm selection

AU - Bischl, Bernd

AU - Kerschke, Pascal

AU - Kotthoff, Lars

AU - Lindauer, Marius

AU - Malitsky, Yuri

AU - Fréchette, Alexandre

AU - Hoos, Holger

AU - Hutter, Frank

AU - Leyton-Brown, Kevin

AU - Tierney, Kevin

AU - Vanschoren, Joaquin

N1 - Funding information: FH and ML are supported by the DFG (German Research Foundation) under Emmy Noether grant HU 1900/2-1. KLB, AF and LK were supported by an NSERC E.W.R. Steacie Fellowship; in addition, all of these, along with HH, were supported under the NSERC Discovery Grant Program. Part of this research was supported by a Microsoft Azure for Research grant.

PY - 2016/4

Y1 - 2016/4

N2 - The task of algorithm selection involves choosing an algorithm from a set of algorithms on a per-instance basis in order to exploit the varying performance of algorithms over a set of instances. The algorithm selection problem is attracting increasing attention from researchers and practitioners in AI. Years of fruitful applications in a number of domains have resulted in a large amount of data, but the community lacks a standard format or repository for this data. This situation makes it difficult to share and compare different approaches effectively, as is done in other, more established fields. It also unnecessarily hinders new researchers who want to work in this area. To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature. Our format has been designed to be able to express a wide variety of different scenarios. To demonstrate the breadth and power of our platform, we describe a study that builds and evaluates algorithm selection models through a common interface. The results display the potential of algorithm selection to achieve significant performance improvements across a broad range of problems and algorithms.

AB - The task of algorithm selection involves choosing an algorithm from a set of algorithms on a per-instance basis in order to exploit the varying performance of algorithms over a set of instances. The algorithm selection problem is attracting increasing attention from researchers and practitioners in AI. Years of fruitful applications in a number of domains have resulted in a large amount of data, but the community lacks a standard format or repository for this data. This situation makes it difficult to share and compare different approaches effectively, as is done in other, more established fields. It also unnecessarily hinders new researchers who want to work in this area. To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature. Our format has been designed to be able to express a wide variety of different scenarios. To demonstrate the breadth and power of our platform, we describe a study that builds and evaluates algorithm selection models through a common interface. The results display the potential of algorithm selection to achieve significant performance improvements across a broad range of problems and algorithms.

KW - Algorithm selection

KW - Empirical performance estimation

KW - Machine learning

UR - http://www.scopus.com/inward/record.url?scp=84962888054&partnerID=8YFLogxK

U2 - 10.1016/j.artint.2016.04.003

DO - 10.1016/j.artint.2016.04.003

M3 - Article

AN - SCOPUS:84962888054

VL - 237

SP - 41

EP - 58

JO - Artificial intelligence

JF - Artificial intelligence

SN - 0004-3702

ER -
