Details
| Original language | English |
| --- | --- |
| Title of host publication | Learning and Intelligent Optimization |
| Editors | Clarisse Dhaenens, Laetitia Jourdan, Marie-Eleonore Marmion |
| Publisher | Springer Verlag |
| Pages | 1-16 |
| Number of pages | 16 |
| ISBN (print) | 9783319190839 |
| Publication status | Published - 29 May 2015 |
| Externally published | Yes |
| Event | 9th International Conference on Learning and Intelligent Optimization, LION 2015, Lille, France, 12 Jan 2015 → 15 Jan 2015 |
Publication series
| Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| --- | --- |
| Volume | 8994 |
| ISSN (print) | 0302-9743 |
| ISSN (electronic) | 1611-3349 |
Abstract
In view of the increasing importance of hardware parallelism, a natural extension of per-instance algorithm selection is to select a set of algorithms to be run in parallel on a given problem instance, based on features of that instance. Here, we explore how existing algorithm selection techniques can be effectively parallelized. To this end, we leverage the machine learning models used by existing sequential algorithm selectors, such as 3S, ISAC, SATzilla and ME-ASP, and modify their selection procedures to produce a ranking of the given candidate algorithms; we then select the top n algorithms under this ranking to be run in parallel on n processing units. Furthermore, we adapt the pre-solving schedules obtained by aspeed to be effective in a parallel setting with different time budgets for each processing unit. Our empirical results demonstrate that, using 4 processing units, the best of our methods achieves a 12-fold average speedup over the best single solver on a broad set of challenging scenarios from the algorithm selection library.
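The core idea described in the abstract — reuse a sequential selector's learned model to rank the candidate algorithms for a given instance, keep the top n, and run them side by side on n processing units, taking the result of whichever finishes first — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; the `score_model` and `solver_funcs` callables and the use of Python threads are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): rank candidate solvers with a
# per-instance scoring model, pick the top n, and run them in parallel,
# keeping the result of whichever solver finishes first.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def select_parallel_portfolio(instance_features, solvers, score_model, n=4):
    """Rank solvers by predicted performance on this instance and keep the top n."""
    # score_model maps (solver_name, features) -> predicted cost, e.g. from a
    # regression model (as in SATzilla) or cluster-based scores (as in ISAC).
    ranking = sorted(solvers, key=lambda s: score_model(s, instance_features))
    return ranking[:n]  # one solver per available processing unit

def run_portfolio(portfolio, instance, solver_funcs):
    """Launch the selected solvers concurrently; the portfolio finishes with the fastest one."""
    with ThreadPoolExecutor(max_workers=len(portfolio)) as pool:
        futures = {pool.submit(solver_funcs[s], instance): s for s in portfolio}
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:  # abandon the slower runs once one solver has succeeded
            f.cancel()
        first = next(iter(done))
        return futures[first], first.result()  # (winning solver, its result)
```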
Keywords
- Algorithm selection
- Answer set programming
- Constraint solving
- Parallel portfolios
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
Cite this
Learning and Intelligent Optimization. ed. / Clarisse Dhaenens; Laetitia Jourdan; Marie-Eleonore Marmion. Springer Verlag, 2015. p. 1-16 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 8994).
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - From Sequential Algorithm Selection to Parallel Portfolio Selection
AU - Lindauer, M.
AU - Hoos, Holger H.
AU - Hutter, F.
PY - 2015/5/29
Y1 - 2015/5/29
N2 - In view of the increasing importance of hardware parallelism, a natural extension of per-instance algorithm selection is to select a set of algorithms to be run in parallel on a given problem instance, based on features of that instance. Here, we explore how existing algorithm selection techniques can be effectively parallelized. To this end, we leverage the machine learning models used by existing sequential algorithm selectors, such as 3S, ISAC, SATzilla and ME-ASP, and modify their selection procedures to produce a ranking of the given candidate algorithms; we then select the top n algorithms under this ranking to be run in parallel on n processing units. Furthermore, we adapt the pre-solving schedules obtained by aspeed to be effective in a parallel setting with different time budgets for each processing unit. Our empirical results demonstrate that, using 4 processing units, the best of our methods achieves a 12-fold average speedup over the best single solver on a broad set of challenging scenarios from the algorithm selection library.
AB - In view of the increasing importance of hardware parallelism, a natural extension of per-instance algorithm selection is to select a set of algorithms to be run in parallel on a given problem instance, based on features of that instance. Here, we explore how existing algorithm selection techniques can be effectively parallelized. To this end, we leverage the machine learning models used by existing sequential algorithm selectors, such as 3S, ISAC, SATzilla and ME-ASP, and modify their selection procedures to produce a ranking of the given candidate algorithms; we then select the top n algorithms under this ranking to be run in parallel on n processing units. Furthermore, we adapt the pre-solving schedules obtained by aspeed to be effective in a parallel setting with different time budgets for each processing unit. Our empirical results demonstrate that, using 4 processing units, the best of our methods achieves a 12-fold average speedup over the best single solver on a broad set of challenging scenarios from the algorithm selection library.
KW - Algorithm selection
KW - Answer set programming
KW - Constraint solving
KW - Parallel portfolios
UR - http://www.scopus.com/inward/record.url?scp=84942419507&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-19084-6_1
DO - 10.1007/978-3-319-19084-6_1
M3 - Conference contribution
AN - SCOPUS:84942419507
SN - 9783319190839
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 1
EP - 16
BT - Learning and Intelligent Optimization
A2 - Dhaenens, Clarisse
A2 - Jourdan, Laetitia
A2 - Marmion, Marie-Eleonore
PB - Springer Verlag
T2 - 9th International Conference on Learning and Intelligent Optimization, LION 2015
Y2 - 12 January 2015 through 15 January 2015
ER -