Details
| Field | Value |
| --- | --- |
| Original language | English |
| Title of host publication | Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence |
| Editors | Jerome Lang |
| Pages | 1442-1448 |
| Number of pages | 7 |
| ISBN (electronic) | 9780999241127 |
| Publication status | Published - 2018 |
| Externally published | Yes |
| Event | 27th International Joint Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, 13 Jul 2018 → 19 Jul 2018 |
Abstract
Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance. Knowledge about the resulting runtime distributions (RTDs) of algorithms on given problem instances can be exploited in various meta-algorithmic procedures, such as algorithm selection, portfolios, and randomized restarts. Previous work has shown that machine learning can be used to individually predict the mean, median, and variance of RTDs. To establish a new state of the art in predicting RTDs, we demonstrate that the parameters of an RTD should be learned jointly and that neural networks can do this well by directly optimizing the likelihood of an RTD given runtime observations. In an empirical study involving five algorithms for SAT solving and AI planning, we show that neural networks predict the true RTDs of unseen instances better than previous methods, and can even do so when only a few runtime observations are available per training instance.
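The core modeling idea in the abstract — fitting an RTD's parameters jointly by directly minimizing the negative log-likelihood of observed runtimes — can be sketched as follows. This is a minimal, hand-rolled illustration: it assumes a lognormal RTD, uses synthetic runtimes, and optimizes the two parameters directly with plain gradient descent. The paper instead trains neural networks that map instance features to these distribution parameters; the data, names, and learning rate below are invented for illustration.

```python
import math
import random

# Synthetic runtime observations drawn from a lognormal RTD
# (true parameters: mu = 1.0, sigma = 0.5).
random.seed(0)
obs = [math.exp(random.gauss(1.0, 0.5)) for _ in range(2000)]
logs = [math.log(t) for t in obs]
n = len(obs)

def nll(mu, log_sigma):
    """Mean negative log-likelihood of a lognormal RTD over the runtimes."""
    s2 = math.exp(2 * log_sigma)
    return sum(x + log_sigma + 0.5 * math.log(2 * math.pi)
               + (x - mu) ** 2 / (2 * s2) for x in logs) / n

# Jointly fit (mu, log_sigma) by gradient descent on the NLL.
# Parameterizing with log_sigma keeps sigma positive without constraints.
mu, log_sigma, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    s2 = math.exp(2 * log_sigma)
    g_mu = sum(-(x - mu) for x in logs) / (n * s2)          # d NLL / d mu
    g_ls = sum(1.0 - (x - mu) ** 2 / s2 for x in logs) / n  # d NLL / d log_sigma
    mu -= lr * g_mu
    log_sigma -= lr * g_ls

print(f"fitted mu={mu:.3f}, sigma={math.exp(log_sigma):.3f}")
```

Because both parameters share one likelihood objective, an error in the scale estimate immediately reshapes the gradient for the location estimate and vice versa; this coupling is what "learning the parameters jointly" buys over predicting mean and variance with separate models.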
ASJC Scopus subject areas
- Computer Science(all)
- Artificial Intelligence
Cite this
Eggensperger, K., Lindauer, M., & Hutter, F. (2018). Neural Networks for Predicting Algorithm Runtime Distributions. In J. Lang (Ed.), Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (pp. 1442-1448).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Neural Networks for Predicting Algorithm Runtime Distributions
AU - Eggensperger, Katharina
AU - Lindauer, Marius
AU - Hutter, Frank
N1 - Funding information: The authors acknowledge funding by the German Research Foundation (DFG) under Emmy Noether grant HU 1900/2-1 and support by the state of Baden-Württemberg through bwHPC and through grant no INST 39/963-1 FUGG. K. Eggensperger additionally acknowledges funding by the State Graduate Funding Program of Baden-Württemberg.
PY - 2018
Y1 - 2018
N2 - Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance. Knowledge about the resulting runtime distributions (RTDs) of algorithms on given problem instances can be exploited in various meta-algorithmic procedures, such as algorithm selection, portfolios, and randomized restarts. Previous work has shown that machine learning can be used to individually predict the mean, median, and variance of RTDs. To establish a new state of the art in predicting RTDs, we demonstrate that the parameters of an RTD should be learned jointly and that neural networks can do this well by directly optimizing the likelihood of an RTD given runtime observations. In an empirical study involving five algorithms for SAT solving and AI planning, we show that neural networks predict the true RTDs of unseen instances better than previous methods, and can even do so when only a few runtime observations are available per training instance.
AB - Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance. Knowledge about the resulting runtime distributions (RTDs) of algorithms on given problem instances can be exploited in various meta-algorithmic procedures, such as algorithm selection, portfolios, and randomized restarts. Previous work has shown that machine learning can be used to individually predict the mean, median, and variance of RTDs. To establish a new state of the art in predicting RTDs, we demonstrate that the parameters of an RTD should be learned jointly and that neural networks can do this well by directly optimizing the likelihood of an RTD given runtime observations. In an empirical study involving five algorithms for SAT solving and AI planning, we show that neural networks predict the true RTDs of unseen instances better than previous methods, and can even do so when only a few runtime observations are available per training instance.
UR - http://www.scopus.com/inward/record.url?scp=85055674327&partnerID=8YFLogxK
U2 - 10.24963/ijcai.2018/200
DO - 10.24963/ijcai.2018/200
M3 - Conference contribution
AN - SCOPUS:85055674327
SP - 1442
EP - 1448
BT - Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
A2 - Lang, Jerome
T2 - 27th International Joint Conference on Artificial Intelligence, IJCAI 2018
Y2 - 13 July 2018 through 19 July 2018
ER -