Details
Original language | English |
---|---|
Title of host publication | 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 |
Pages | 1355-1362 |
Number of pages | 8 |
ISBN (electronic) | 9781577358008 |
Publication status | Published - 2018 |
Published externally | Yes |
Event | 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 - New Orleans, United States. Duration: 2 Feb 2018 → 7 Feb 2018 |
Publication series
Name | Proceedings of the AAAI Conference on Artificial Intelligence |
---|---|
ISSN (Print) | 2374-3468 |
Abstract
The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal, the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem. While all existing AC methods start the configuration process of an algorithm A from scratch for each new type of benchmark instances, here we propose to exploit information about A's performance on previous benchmarks in order to warmstart its configuration on new types of benchmarks. We introduce two complementary ways in which we can exploit this information to warmstart AC methods based on a predictive model. Experiments for optimizing a flexible modern SAT solver on twelve different instance sets show that our methods often yield substantial speedups over existing AC methods (up to 165-fold) and can also find substantially better configurations given the same compute budget.
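For readers unfamiliar with model-based algorithm configuration, the following Python sketch illustrates the general warmstarting idea summarized in the abstract: an empirical performance model trained on (configuration, performance) data collected on previous benchmarks is reused to propose promising starting configurations on a new benchmark. This is only an illustrative assumption of how such warmstarting could look, not the paper's two actual methods; all data, parameter encodings, and the simple "pick the top predicted configurations" scheme below are made up for the example.

```python
# Minimal, illustrative sketch of warmstarting a model-based configurator:
# a surrogate model is fitted on runhistory data from PREVIOUS benchmarks and
# used to rank candidate configurations for a NEW benchmark before any new
# target-algorithm runs are performed. Hypothetical data and scheme only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical runhistory from previous benchmarks: rows are (numerically
# encoded) solver configurations, targets are observed runtimes in seconds.
prev_configs = rng.uniform(0.0, 1.0, size=(200, 5))
prev_runtimes = prev_configs @ np.array([3.0, 1.0, 0.5, 2.0, 0.1]) \
    + rng.normal(0.0, 0.1, size=200)

# Fit the empirical performance model on the old data (the "warmstart").
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(prev_configs, prev_runtimes)

# On the new benchmark, start from configurations the old model predicts to
# be fast, instead of from the solver default or random configurations.
candidates = rng.uniform(0.0, 1.0, size=(1000, 5))
predicted_runtimes = model.predict(candidates)
initial_design = candidates[np.argsort(predicted_runtimes)[:10]]
print(initial_design)  # 10 most promising configurations to evaluate first
```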
ASJC Scopus subject areas
- Computer Science (general)
- Artificial Intelligence
Cite this
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Lindauer, M., & Hutter, F. (2018). Warmstarting of Model-Based Algorithm Configuration. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 1355-1362). (Proceedings of the AAAI Conference on Artificial Intelligence).
Publication: Contribution to book/report/conference proceedings › Conference paper › Research › Peer review
TY - GEN
T1 - Warmstarting of Model-Based Algorithm Configuration
AU - Lindauer, Marius
AU - Hutter, Frank
N1 - Funding information: The authors acknowledge funding by the DFG (German Research Foundation) under Emmy Noether grant HU 1900/2-1 and support by the state of Baden-Württemberg through bwHPC and the DFG through grant no INST 39/963-1 FUGG.
PY - 2018
Y1 - 2018
N2 - The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem. While all existing AC methods start the configuration process of an algorithm A from scratch for each new type of benchmark instances, here we propose to exploit information about A's performance on previous benchmarks in order to warmstart its configuration on new types of benchmarks. We introduce two complementary ways in which we can exploit this information to warmstart AC methods based on a predictive model. Experiments for optimizing a flexible modern SAT solver on twelve different instance sets show that our methods often yield substantial speedups over existing AC methods (up to 165-fold) and can also find substantially better configurations given the same compute budget.
AB - The performance of many hard combinatorial problem solvers depends strongly on their parameter settings, and since manual parameter tuning is both tedious and suboptimal the AI community has recently developed several algorithm configuration (AC) methods to automatically address this problem. While all existing AC methods start the configuration process of an algorithm A from scratch for each new type of benchmark instances, here we propose to exploit information about A's performance on previous benchmarks in order to warmstart its configuration on new types of benchmarks. We introduce two complementary ways in which we can exploit this information to warmstart AC methods based on a predictive model. Experiments for optimizing a flexible modern SAT solver on twelve different instance sets show that our methods often yield substantial speedups over existing AC methods (up to 165-fold) and can also find substantially better configurations given the same compute budget.
UR - http://www.scopus.com/inward/record.url?scp=85059965402&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85059965402
T3 - Proceedings of the AAAI Conference on Artificial Intelligence
SP - 1355
EP - 1362
BT - 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
T2 - 32nd AAAI Conference on Artificial Intelligence, AAAI 2018
Y2 - 2 February 2018 through 7 February 2018
ER -