Details
| Original language | English |
|---|---|
| Pages (from-to) | 1-25 |
| Number of pages | 25 |
| Journal | Artificial intelligence |
| Volume | 243 |
| Early online date | 20 Oct 2016 |
| Publication status | Published - Feb 2017 |
| Externally published | Yes |
Abstract
It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for this application. The new Configurable SAT Solver Competition (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail, and reports the results obtained in its two instantiations so far, CSSC 2013 and 2014.
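The "fully automated configuration step" the abstract refers to can be sketched as a simple random-search configurator: sample configurations from the solver's parameter space, evaluate each on a set of training instances, and keep the best. The parameter names, the toy runtime model, and the budget below are illustrative assumptions only, not the actual CSSC protocol or any real solver's interface:

```python
import random

# Hypothetical pcs-style parameter space: each parameter name maps to
# its set of allowed values.  Names are illustrative, not a real solver's.
PCS = {
    "restart_strategy": ["luby", "geometric", "fixed"],
    "var_decay": [0.80, 0.90, 0.95, 0.99],
    "phase_saving": [0, 1, 2],
}

def simulated_runtime(config, instance_hardness):
    """Toy stand-in for one solver run: runtime is the instance's base
    hardness scaled by a penalty that depends on the configuration."""
    penalty = 1.0
    if config["restart_strategy"] != "luby":
        penalty += 0.5
    penalty += 0.99 - config["var_decay"]
    penalty += 0.1 * config["phase_saving"]
    return instance_hardness * penalty

def random_config(rng):
    """Sample one configuration uniformly from the parameter space."""
    return {name: rng.choice(values) for name, values in PCS.items()}

def configure(train_instances, budget, seed=0):
    """Random-search configuration: try `budget` sampled configurations
    and return the one with the lowest total runtime on the training set."""
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = random_config(rng)
        cost = sum(simulated_runtime(cfg, h) for h in train_instances)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg

# Configure on training instances, then score on held-out test instances,
# mirroring the CSSC's train/test split at a toy scale.
train = [1.0, 2.0, 0.5]   # per-instance hardness of the training set
test = [1.5, 0.8]
tuned = configure(train, budget=50)
score = sum(simulated_runtime(tuned, h) for h in test)
```

In the actual competition, this naive random search is replaced by dedicated configuration procedures (the acknowledgements mention GGA, for example), and the evaluation uses real solver runs under per-run cutoffs rather than a closed-form runtime model.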
Keywords
- Algorithm configuration, Competition, Empirical evaluation, Propositional satisfiability
ASJC Scopus subject areas
- Arts and Humanities (all)
- Language and Linguistics
- Social Sciences (all)
- Linguistics and Language
- Computer Science (all)
- Artificial Intelligence
Cite this
In: Artificial intelligence, Vol. 243, 02.2017, p. 1-25.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - The Configurable SAT Solver Challenge (CSSC)
AU - Hutter, Frank
AU - Lindauer, Marius
AU - Balint, Adrian
AU - Bayless, Sam
AU - Hoos, Holger
AU - Leyton-Brown, Kevin
N1 - Funding information: Many thanks go to Kevin Tierney for his generous help with running GGA, including his addition of new features, his suggestion of parameter settings and his conversion script to read the pcs format. We also thank the solver developers for proofreading the description of their solvers and their parameters. For computational resources to run the competition, we thank Compute Canada (CSSC 2013) and the German Research Foundation (DFG; CSSC 2014). F. Hutter and M. Lindauer thank the DFG for funding this research under Emmy Noether grant HU 1900/2-1 . H. Hoos acknowledges funding through an NSERC Discovery Grant.
PY - 2017/2
Y1 - 2017/2
N2 - It is well known that different solution strategies work well for different types of instances of hard combinatorial problems. As a consequence, most solvers for the propositional satisfiability problem (SAT) expose parameters that allow them to be customized to a particular family of instances. In the international SAT competition series, these parameters are ignored: solvers are run using a single default parameter setting (supplied by the authors) for all benchmark instances in a given track. While this competition format rewards solvers with robust default settings, it does not reflect the situation faced by a practitioner who only cares about performance on one particular application and can invest some time into tuning solver parameters for this application. The new Configurable SAT Solver Competition (CSSC) compares solvers in this latter setting, scoring each solver by the performance it achieved after a fully automated configuration step. This article describes the CSSC in more detail, and reports the results obtained in its two instantiations so far, CSSC 2013 and 2014.
KW - Algorithm configuration
KW - Competition
KW - Empirical evaluation
KW - Propositional satisfiability
UR - http://www.scopus.com/inward/record.url?scp=84995745553&partnerID=8YFLogxK
U2 - 10.1016/j.artint.2016.09.006
DO - 10.1016/j.artint.2016.09.006
M3 - Article
AN - SCOPUS:84995745553
VL - 243
SP - 1
EP - 25
JO - Artificial intelligence
JF - Artificial intelligence
SN - 0004-3702
ER -