The Configurable SAT Solver Challenge (CSSC)

Research output: Contribution to journal › Article › Research › peer review

Authors

  • Frank Hutter
  • Marius Lindauer
  • Adrian Balint
  • Sam Bayless
  • Holger Hoos
  • Kevin Leyton-Brown

External Research Organisations

  • University of Freiburg
  • Ulm University
  • University of British Columbia

Details

Original language: English
Pages (from-to): 1-25
Number of pages: 25
Journal: Artificial intelligence
Volume: 243
Early online date: 20 Oct 2016
Publication status: Published - Feb 2017
Externally published: Yes

Abstract

It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for this application. The new Configurable SAT Solver Competition (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail, and reports the results obtained in its two instantiations so far, CSSC 2013 and 2014.

Keywords

    Algorithm configuration, Competition, Empirical evaluation, Propositional satisfiability

Cite this

The Configurable SAT Solver Challenge (CSSC). / Hutter, Frank; Lindauer, Marius; Balint, Adrian et al.
In: Artificial intelligence, Vol. 243, 02.2017, p. 1-25.

Hutter, F, Lindauer, M, Balint, A, Bayless, S, Hoos, H & Leyton-Brown, K 2017, 'The Configurable SAT Solver Challenge (CSSC)', Artificial intelligence, vol. 243, pp. 1-25. https://doi.org/10.1016/j.artint.2016.09.006
Hutter, F., Lindauer, M., Balint, A., Bayless, S., Hoos, H., & Leyton-Brown, K. (2017). The Configurable SAT Solver Challenge (CSSC). Artificial intelligence, 243, 1-25. https://doi.org/10.1016/j.artint.2016.09.006
Hutter F, Lindauer M, Balint A, Bayless S, Hoos H, Leyton-Brown K. The Configurable SAT Solver Challenge (CSSC). Artificial intelligence. 2017 Feb;243:1-25. Epub 2016 Oct 20. doi: 10.1016/j.artint.2016.09.006
Hutter, Frank; Lindauer, Marius; Balint, Adrian, et al. / The Configurable SAT Solver Challenge (CSSC). In: Artificial intelligence. 2017; Vol. 243. pp. 1-25.
@article{916140a3bac542949b7fa0f8ef52ec87,
title = "The Configurable SAT Solver Challenge (CSSC)",
abstract = "It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for this application. The new Configurable SAT Solver Competition (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail, and reports the results obtained in its two instantiations so far, CSSC 2013 and 2014.",
keywords = "Algorithm configuration, Competition, Empirical evaluation, Propositional satisfiability",
author = "Frank Hutter and Marius Lindauer and Adrian Balint and Sam Bayless and Holger Hoos and Kevin Leyton-Brown",
note = "Funding information: Many thanks go to Kevin Tierney for his generous help with running GGA, including his addition of new features, his suggestion of parameter settings and his conversion script to read the pcs format. We also thank the solver developers for proofreading the description of their solvers and their parameters. For computational resources to run the competition, we thank Compute Canada (CSSC 2013) and the German Research Foundation (DFG; CSSC 2014). F. Hutter and M. Lindauer thank the DFG for funding this research under Emmy Noether grant HU 1900/2-1. H. Hoos acknowledges funding through an NSERC Discovery Grant.",
year = "2017",
month = feb,
doi = "10.1016/j.artint.2016.09.006",
language = "English",
volume = "243",
pages = "1--25",
journal = "Artificial intelligence",
issn = "0004-3702",
publisher = "Elsevier",
}


TY  - JOUR
T1  - The Configurable SAT Solver Challenge (CSSC)
AU  - Hutter, Frank
AU  - Lindauer, Marius
AU  - Balint, Adrian
AU  - Bayless, Sam
AU  - Hoos, Holger
AU  - Leyton-Brown, Kevin
N1  - Funding information: Many thanks go to Kevin Tierney for his generous help with running GGA, including his addition of new features, his suggestion of parameter settings and his conversion script to read the pcs format. We also thank the solver developers for proofreading the description of their solvers and their parameters. For computational resources to run the competition, we thank Compute Canada (CSSC 2013) and the German Research Foundation (DFG; CSSC 2014). F. Hutter and M. Lindauer thank the DFG for funding this research under Emmy Noether grant HU 1900/2-1. H. Hoos acknowledges funding through an NSERC Discovery Grant.
PY  - 2017/2
Y1  - 2017/2
N2  - It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for this application. The new Configurable SAT Solver Competition (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail, and reports the results obtained in its two instantiations so far, CSSC 2013 and 2014.
AB  - It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for this application. The new Configurable SAT Solver Competition (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail, and reports the results obtained in its two instantiations so far, CSSC 2013 and 2014.
KW  - Algorithm configuration
KW  - Competition
KW  - Empirical evaluation
KW  - Propositional satisfiability
UR  - http://www.scopus.com/inward/record.url?scp=84995745553&partnerID=8YFLogxK
U2  - 10.1016/j.artint.2016.09.006
DO  - 10.1016/j.artint.2016.09.006
M3  - Article
AN  - SCOPUS:84995745553
VL  - 243
SP  - 1
EP  - 25
JO  - Artificial intelligence
JF  - Artificial intelligence
SN  - 0004-3702
ER  -
