Details
Original language | English |
---|---|
Title of host publication | Parallel Problem Solving from Nature – PPSN XVI |
Subtitle of host publication | 16th International Conference, PPSN 2020, Leiden, The Netherlands, September 5-9, 2020, Proceedings, Part I |
Editors | Thomas Bäck, Mike Preuss, André Deutz, Michael Emmerich, Hao Wang, Carola Doerr, Heike Trautmann |
Place of Publication | Cham |
Pages | 691-706 |
Number of pages | 16 |
ISBN (electronic) | 978-3-030-58112-1 |
Publication status | Published - 2020 |
Event | 16th International Conference on Parallel Problem Solving from Nature, PPSN 2020, Leiden, Netherlands, 5-9 September 2020 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 12269 |
ISSN (Print) | 0302-9743 |
ISSN (electronic) | 1611-3349 |
Abstract
An algorithm’s parameter setting often affects its ability to solve a given problem, e.g., population-size, mutation-rate or crossover-rate of an evolutionary algorithm. Furthermore, some parameters have to be adjusted dynamically, such as lowering the mutation-strength over time. While hand-crafted heuristics offer a way to fine-tune and dynamically configure these parameters, their design is tedious, time-consuming and typically involves analyzing the algorithm’s behavior on simple problems that may not be representative for those that arise in practice. In this paper, we show that formulating dynamic algorithm configuration as a reinforcement learning problem allows us to automatically learn policies that can dynamically configure the mutation step-size parameter of Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We evaluate our approach on a wide range of black-box optimization problems, and show that (i) learning step-size policies has the potential to improve the performance of CMA-ES; (ii) learned step-size policies can outperform the default Cumulative Step-Size Adaptation of CMA-ES; and transferring the policies to (iii) different function classes and to (iv) higher dimensions is also possible.
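To make the idea concrete, here is a minimal sketch of step-size control by a policy. It is not the paper's method: the paper learns a policy (via reinforcement learning) that controls the step size of CMA-ES, whereas this sketch uses a plain (1+1)-ES on a toy sphere function and a hand-written success-rate rule standing in for the learned policy. All names (`sphere`, `step_size_policy`, `es_with_policy`) are illustrative inventions, not code from the paper.

```python
import random

def sphere(x):
    """Toy black-box objective: sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def step_size_policy(success_rate):
    # Hand-written stand-in for a learned policy: it maps the recent
    # success rate to a multiplicative step-size update (grow sigma when
    # many offspring improve, shrink it otherwise). In the paper this
    # mapping is learned from richer state features of the CMA-ES run.
    return 1.5 if success_rate > 0.2 else 0.8

def es_with_policy(dim=5, sigma=1.0, generations=200, seed=0):
    """(1+1)-ES whose step size sigma is reset each window by the policy."""
    rng = random.Random(seed)
    x = [rng.uniform(-3, 3) for _ in range(dim)]
    fx = sphere(x)
    successes, window = 0, 10
    for g in range(1, generations + 1):
        # Sample one offspring with the current step size.
        y = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fy = sphere(y)
        if fy < fx:          # elitist selection: keep only improvements
            x, fx = y, fy
            successes += 1
        if g % window == 0:  # policy adjusts sigma every `window` steps
            sigma *= step_size_policy(successes / window)
            successes = 0
    return fx
```

Replacing `step_size_policy` with a trained network queried on run-time state is, schematically, what turns this fixed heuristic into dynamic algorithm configuration.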
Keywords
- Algorithm configuration, Evolutionary algorithms, Reinforcement learning
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite this
Parallel Problem Solving from Nature – PPSN XVI: 16th International Conference, PPSN 2020, Leiden, The Netherlands, September 5-9, 2020, Proceedings, Part I. ed. / Thomas Bäck; Mike Preuss; André Deutz; Michael Emmerich; Hao Wang; Carola Doerr; Heike Trautmann. Cham, 2020. p. 691-706 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 12269).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Learning Step-Size Adaptation in CMA-ES
AU - Shala, Gresa
AU - Biedenkapp, André
AU - Awad, Noor
AU - Adriaensen, Steven
AU - Lindauer, Marius
AU - Hutter, Frank
N1 - Funding Information: The authors acknowledge funding by the Robert Bosch GmbH.
PY - 2020
Y1 - 2020
N2 - An algorithm’s parameter setting often affects its ability to solve a given problem, e.g., population-size, mutation-rate or crossover-rate of an evolutionary algorithm. Furthermore, some parameters have to be adjusted dynamically, such as lowering the mutation-strength over time. While hand-crafted heuristics offer a way to fine-tune and dynamically configure these parameters, their design is tedious, time-consuming and typically involves analyzing the algorithm’s behavior on simple problems that may not be representative for those that arise in practice. In this paper, we show that formulating dynamic algorithm configuration as a reinforcement learning problem allows us to automatically learn policies that can dynamically configure the mutation step-size parameter of Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We evaluate our approach on a wide range of black-box optimization problems, and show that (i) learning step-size policies has the potential to improve the performance of CMA-ES; (ii) learned step-size policies can outperform the default Cumulative Step-Size Adaptation of CMA-ES; and transferring the policies to (iii) different function classes and to (iv) higher dimensions is also possible.
AB - An algorithm’s parameter setting often affects its ability to solve a given problem, e.g., population-size, mutation-rate or crossover-rate of an evolutionary algorithm. Furthermore, some parameters have to be adjusted dynamically, such as lowering the mutation-strength over time. While hand-crafted heuristics offer a way to fine-tune and dynamically configure these parameters, their design is tedious, time-consuming and typically involves analyzing the algorithm’s behavior on simple problems that may not be representative for those that arise in practice. In this paper, we show that formulating dynamic algorithm configuration as a reinforcement learning problem allows us to automatically learn policies that can dynamically configure the mutation step-size parameter of Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We evaluate our approach on a wide range of black-box optimization problems, and show that (i) learning step-size policies has the potential to improve the performance of CMA-ES; (ii) learned step-size policies can outperform the default Cumulative Step-Size Adaptation of CMA-ES; and transferring the policies to (iii) different function classes and to (iv) higher dimensions is also possible.
KW - Algorithm configuration
KW - Evolutionary algorithms
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85091295368&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-58112-1_48
DO - 10.1007/978-3-030-58112-1_48
M3 - Conference contribution
AN - SCOPUS:85091295368
SN - 9783030581114
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 691
EP - 706
BT - Parallel Problem Solving from Nature – PPSN XVI
A2 - Bäck, Thomas
A2 - Preuss, Mike
A2 - Deutz, André
A2 - Emmerich, Michael
A2 - Wang, Hao
A2 - Doerr, Carola
A2 - Trautmann, Heike
CY - Cham
T2 - 16th International Conference on Parallel Problem Solving from Nature, PPSN 2020
Y2 - 5 September 2020 through 9 September 2020
ER -