Details
Original language | English |
---|---|
Title of host publication | Parallel Problem Solving from Nature – PPSN XVI |
Subtitle | 16th International Conference, PPSN 2020, Leiden, The Netherlands, September 5-9, 2020, Proceedings, Part I |
Editors | Thomas Bäck, Mike Preuss, André Deutz, Michael Emmerich, Hao Wang, Carola Doerr, Heike Trautmann |
Place of publication | Cham |
Pages | 691-706 |
Number of pages | 16 |
ISBN (electronic) | 978-3-030-58112-1 |
Publication status | Published - 2020 |
Event | 16th International Conference on Parallel Problem Solving from Nature, PPSN 2020 - Leiden, Netherlands. Duration: 5 Sep 2020 → 9 Sep 2020 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 12269 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Abstract
An algorithm’s parameter setting often affects its ability to solve a given problem, e.g., population-size, mutation-rate or crossover-rate of an evolutionary algorithm. Furthermore, some parameters have to be adjusted dynamically, such as lowering the mutation-strength over time. While hand-crafted heuristics offer a way to fine-tune and dynamically configure these parameters, their design is tedious, time-consuming and typically involves analyzing the algorithm’s behavior on simple problems that may not be representative for those that arise in practice. In this paper, we show that formulating dynamic algorithm configuration as a reinforcement learning problem allows us to automatically learn policies that can dynamically configure the mutation step-size parameter of Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We evaluate our approach on a wide range of black-box optimization problems, and show that (i) learning step-size policies has the potential to improve the performance of CMA-ES; (ii) learned step-size policies can outperform the default Cumulative Step-Size Adaptation of CMA-ES; and transferring the policies to (iii) different function classes and to (iv) higher dimensions is also possible.
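The abstract describes step-size control as a policy that maps search-state features to the next step-size, replacing CMA-ES's hand-crafted Cumulative Step-Size Adaptation (CSA). Purely to illustrate that interface, the sketch below runs a simplified (μ/μ, λ)-ES whose step-size is set each generation by an external policy function; the `step_size_policy` rule, the state features, and all hyperparameters are hypothetical stand-ins, not the authors' learned policy or the full CMA-ES update.

```python
import numpy as np

def sphere(x):
    """Toy black-box objective (sphere function, to be minimized)."""
    return float(np.dot(x, x))

def step_size_policy(state):
    """Placeholder for a learned step-size policy (hypothetical).
    In the paper, the mapping from search-state features to the next
    step-size is learned with reinforcement learning; here a crude
    success-based rule stands in so the loop is runnable."""
    sigma, success_ratio = state
    # Grow sigma after generations with many improving offspring, shrink otherwise.
    return sigma * (1.5 if success_ratio > 0.2 else 0.8)

def es_with_external_step_size(f, dim=10, sigma0=0.5, lam=20, iters=100, seed=0):
    """(mu/mu, lambda)-ES in which the step-size is chosen each generation
    by an external policy instead of CSA."""
    rng = np.random.default_rng(seed)
    mu = lam // 2
    mean = rng.standard_normal(dim)
    sigma = sigma0
    best = f(mean)
    for _ in range(iters):
        # Sample offspring around the current mean with the current step-size.
        offspring = mean + sigma * rng.standard_normal((lam, dim))
        fitness = np.array([f(x) for x in offspring])
        order = np.argsort(fitness)
        mean = offspring[order[:mu]].mean(axis=0)        # recombination of the best mu
        success_ratio = float(np.mean(fitness < best))   # simple state feature
        best = min(best, float(fitness.min()))
        sigma = step_size_policy((sigma, success_ratio)) # policy sets the next sigma
    return best

if __name__ == "__main__":
    print("best f-value:", es_with_external_step_size(sphere))
```

In the paper's setting, the hand-written rule above would be replaced by a policy trained with reinforcement learning and queried with richer state features of the CMA-ES run.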
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite
Parallel Problem Solving from Nature – PPSN XVI: 16th International Conference, PPSN 2020, Leiden, The Netherlands, September 5-9, 2020, Proceedings, Part I. Ed. / Thomas Bäck; Mike Preuss; André Deutz; Michael Emmerich; Hao Wang; Carola Doerr; Heike Trautmann. Cham, 2020. pp. 691-706 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12269).
Publication: Contribution to book/report/anthology/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Learning Step-Size Adaptation in CMA-ES
AU - Shala, Gresa
AU - Biedenkapp, André
AU - Awad, Noor
AU - Adriaensen, Steven
AU - Lindauer, Marius
AU - Hutter, Frank
N1 - Funding Information: The authors acknowledge funding by the Robert Bosch GmbH.
PY - 2020
Y1 - 2020
N2 - An algorithm’s parameter setting often affects its ability to solve a given problem, e.g., population-size, mutation-rate or crossover-rate of an evolutionary algorithm. Furthermore, some parameters have to be adjusted dynamically, such as lowering the mutation-strength over time. While hand-crafted heuristics offer a way to fine-tune and dynamically configure these parameters, their design is tedious, time-consuming and typically involves analyzing the algorithm’s behavior on simple problems that may not be representative for those that arise in practice. In this paper, we show that formulating dynamic algorithm configuration as a reinforcement learning problem allows us to automatically learn policies that can dynamically configure the mutation step-size parameter of Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We evaluate our approach on a wide range of black-box optimization problems, and show that (i) learning step-size policies has the potential to improve the performance of CMA-ES; (ii) learned step-size policies can outperform the default Cumulative Step-Size Adaptation of CMA-ES; and transferring the policies to (iii) different function classes and to (iv) higher dimensions is also possible.
AB - An algorithm’s parameter setting often affects its ability to solve a given problem, e.g., population-size, mutation-rate or crossover-rate of an evolutionary algorithm. Furthermore, some parameters have to be adjusted dynamically, such as lowering the mutation-strength over time. While hand-crafted heuristics offer a way to fine-tune and dynamically configure these parameters, their design is tedious, time-consuming and typically involves analyzing the algorithm’s behavior on simple problems that may not be representative for those that arise in practice. In this paper, we show that formulating dynamic algorithm configuration as a reinforcement learning problem allows us to automatically learn policies that can dynamically configure the mutation step-size parameter of Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We evaluate our approach on a wide range of black-box optimization problems, and show that (i) learning step-size policies has the potential to improve the performance of CMA-ES; (ii) learned step-size policies can outperform the default Cumulative Step-Size Adaptation of CMA-ES; and transferring the policies to (iii) different function classes and to (iv) higher dimensions is also possible.
KW - Algorithm configuration
KW - Evolutionary algorithms
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85091295368&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-58112-1_48
DO - 10.1007/978-3-030-58112-1_48
M3 - Conference contribution
AN - SCOPUS:85091295368
SN - 9783030581114
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 691
EP - 706
BT - Parallel Problem Solving from Nature – PPSN XVI
A2 - Bäck, Thomas
A2 - Preuss, Mike
A2 - Deutz, André
A2 - Emmerich, Michael
A2 - Wang, Hao
A2 - Doerr, Carola
A2 - Trautmann, Heike
CY - Cham
T2 - 16th International Conference on Parallel Problem Solving from Nature, PPSN 2020
Y2 - 5 September 2020 through 9 September 2020
ER -