Details
| Original language | English |
|---|---|
| Pages (from–to) | 1633-1699 |
| Number of pages | 67 |
| Journal | Journal of Artificial Intelligence Research |
| Volume | 75 |
| Publication status | Published - Dec. 2022 |
Abstract

The performance of an algorithm often critically depends on its parameter configuration. While a variety of automated algorithm configuration methods have been proposed to relieve users from the tedious and error-prone task of manually tuning parameters, there is still a lot of untapped potential as the learned configuration is static, i.e., parameter settings remain fixed throughout the run. However, it has been shown that some algorithm parameters are best adjusted dynamically during execution. Thus far, this is most commonly achieved through hand-crafted heuristics. A promising recent alternative is to automatically learn such dynamic parameter adaptation policies from data. In this article, we give the first comprehensive account of this new field of automated dynamic algorithm configuration (DAC), present a series of recent advances, and provide a solid foundation for future research in this field. Specifically, we (i) situate DAC in the broader historical context of AI research; (ii) formalize DAC as a computational problem; (iii) identify the methods used in prior art to tackle this problem; and (iv) conduct empirical case studies for using DAC in evolutionary optimization, AI planning, and machine learning.
ASJC Scopus subject areas
- Computer Science (all)
- Artificial Intelligence
Cite
Adriaensen, Steven; Biedenkapp, André; Shala, Gresa; Awad, Noor; Eimer, Theresa; Lindauer, Marius; Hutter, Frank. Automated Dynamic Algorithm Configuration. In: Journal of Artificial Intelligence Research, Vol. 75, 12.2022, p. 1633-1699.
Publication: Contribution to journal › Article › Research › Peer review
TY - JOUR
T1 - Automated Dynamic Algorithm Configuration
AU - Adriaensen, Steven
AU - Biedenkapp, André
AU - Shala, Gresa
AU - Awad, Noor
AU - Eimer, Theresa
AU - Lindauer, Marius
AU - Hutter, Frank
N1 - Funding Information: All authors acknowledge funding by the Robert Bosch GmbH. Theresa Eimer and Marius Lindauer acknowledge funding by the German Research Foundation (DFG) under LI 2801/4-1. We thank Maximilian Reimer, Rishan Senanayake, Göktuğ Karakaşlı, Nguyen Dang, Diederick Vermetten, Jacob de Nobel and Carolin Benjamins for their contributions to DACBench, and Carola Doerr for the many discussions on related work and problem formulation.
PY - 2022/12
Y1 - 2022/12
N2 - The performance of an algorithm often critically depends on its parameter configuration. While a variety of automated algorithm configuration methods have been proposed to relieve users from the tedious and error-prone task of manually tuning parameters, there is still a lot of untapped potential as the learned configuration is static, i.e., parameter settings remain fixed throughout the run. However, it has been shown that some algorithm parameters are best adjusted dynamically during execution. Thus far, this is most commonly achieved through hand-crafted heuristics. A promising recent alternative is to automatically learn such dynamic parameter adaptation policies from data. In this article, we give the first comprehensive account of this new field of automated dynamic algorithm configuration (DAC), present a series of recent advances, and provide a solid foundation for future research in this field. Specifically, we (i) situate DAC in the broader historical context of AI research; (ii) formalize DAC as a computational problem; (iii) identify the methods used in prior art to tackle this problem; and (iv) conduct empirical case studies for using DAC in evolutionary optimization, AI planning, and machine learning.
AB - The performance of an algorithm often critically depends on its parameter configuration. While a variety of automated algorithm configuration methods have been proposed to relieve users from the tedious and error-prone task of manually tuning parameters, there is still a lot of untapped potential as the learned configuration is static, i.e., parameter settings remain fixed throughout the run. However, it has been shown that some algorithm parameters are best adjusted dynamically during execution. Thus far, this is most commonly achieved through hand-crafted heuristics. A promising recent alternative is to automatically learn such dynamic parameter adaptation policies from data. In this article, we give the first comprehensive account of this new field of automated dynamic algorithm configuration (DAC), present a series of recent advances, and provide a solid foundation for future research in this field. Specifically, we (i) situate DAC in the broader historical context of AI research; (ii) formalize DAC as a computational problem; (iii) identify the methods used in prior art to tackle this problem; and (iv) conduct empirical case studies for using DAC in evolutionary optimization, AI planning, and machine learning.
KW - cs.AI
KW - cs.LG
KW - cs.NE
UR - http://www.scopus.com/inward/record.url?scp=85148436940&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2205.13881
DO - 10.48550/arXiv.2205.13881
M3 - Article
VL - 75
SP - 1633
EP - 1699
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
SN - 1076-9757
ER -