Details
| Original language | English |
|---|---|
| Title of host publication | ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML) |
| Publication status | E-pub ahead of print - 7 Jun 2022 |
| Event | ICML 2022 Workshop on Adaptive Experimental Design and Active Learning in the Real World, Baltimore, United States, 22 Jul 2022 → 23 Jul 2022 |
Abstract

Automatically selecting the best performing algorithm for a given dataset or ranking multiple algorithms by their expected performance supports users in developing new machine learning applications. Most approaches for this problem rely on pre-computed dataset meta-features and landmarking performances to capture the salient topology of the datasets and those topologies that the algorithms attend to. Landmarking usually exploits cheap algorithms not necessarily in the pool of candidate algorithms to get inexpensive approximations of the topology. While somewhat indicative, hand-crafted dataset meta-features and landmarks are likely insufficient descriptors, strongly depending on the alignment of the topologies that the landmarks and the candidate algorithms search for. We propose IMFAS, a method to exploit multi-fidelity landmarking information directly from the candidate algorithms in the form of non-parametrically non-myopic meta-learned learning curves via LSTMs in a few-shot setting during testing. Using this mechanism, IMFAS jointly learns the topology of the datasets and the inductive biases of the candidate algorithms, without the need to expensively train them to convergence. Our approach produces informative landmarks, easily enriched by arbitrary meta-features at a low computational cost, capable of producing the desired ranking using cheaper fidelities. We additionally show that IMFAS is able to beat Successive Halving with at most 50% of the fidelity sequence during test time.

Keywords

- cs.LG
Cite this
Mohan A, Ruhkopf T, Lindauer M. Towards Meta-learned Algorithm Selection using Implicit Fidelity Information. In: ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML). 2022.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Towards Meta-learned Algorithm Selection using Implicit Fidelity Information
AU - Mohan, Aditya
AU - Ruhkopf, Tim
AU - Lindauer, Marius
N1 - Camera-ready version
PY - 2022/6/7
Y1 - 2022/6/7
AB - Automatically selecting the best performing algorithm for a given dataset or ranking multiple algorithms by their expected performance supports users in developing new machine learning applications. Most approaches for this problem rely on pre-computed dataset meta-features and landmarking performances to capture the salient topology of the datasets and those topologies that the algorithms attend to. Landmarking usually exploits cheap algorithms not necessarily in the pool of candidate algorithms to get inexpensive approximations of the topology. While somewhat indicative, hand-crafted dataset meta-features and landmarks are likely insufficient descriptors, strongly depending on the alignment of the topologies that the landmarks and the candidate algorithms search for. We propose IMFAS, a method to exploit multi-fidelity landmarking information directly from the candidate algorithms in the form of non-parametrically non-myopic meta-learned learning curves via LSTMs in a few-shot setting during testing. Using this mechanism, IMFAS jointly learns the topology of the datasets and the inductive biases of the candidate algorithms, without the need to expensively train them to convergence. Our approach produces informative landmarks, easily enriched by arbitrary meta-features at a low computational cost, capable of producing the desired ranking using cheaper fidelities. We additionally show that IMFAS is able to beat Successive Halving with at most 50% of the fidelity sequence during test time.
KW - cs.LG
M3 - Conference contribution
BT - ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML)
T2 - ICML 2022 Workshop on Adaptive Experimental Design and Active Learning in the Real World
Y2 - 22 July 2022 through 23 July 2022
ER -
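The abstract compares IMFAS against Successive Halving, a standard multi-fidelity baseline that evaluates all candidate algorithms at a cheap fidelity and repeatedly discards the worst-performing half before re-evaluating the survivors at higher fidelities. A minimal sketch of that baseline (not the paper's IMFAS method; the function and parameter names here are illustrative assumptions):

```python
def successive_halving(candidates, evaluate, budgets, keep=0.5):
    """Rank candidates by evaluating survivors at increasing fidelities.

    candidates: list of algorithm identifiers.
    evaluate(c, b): returns a score (higher is better) for candidate c
                    at fidelity/budget b.
    budgets: increasing sequence of fidelities to evaluate at.
    keep: fraction of candidates retained after each round.
    """
    survivors = list(candidates)
    for b in budgets:
        # Score every surviving candidate at the current fidelity.
        scores = {c: evaluate(c, b) for c in survivors}
        # Keep only the top `keep` fraction (at least one candidate).
        survivors.sort(key=lambda c: scores[c], reverse=True)
        survivors = survivors[:max(1, int(len(survivors) * keep))]
    return survivors

# Toy usage: with a score function that simply returns the candidate id,
# candidate 4 survives both halving rounds.
winner = successive_halving([1, 2, 3, 4], lambda c, b: c, budgets=[1, 2])
```

IMFAS's claimed advantage is that its meta-learned LSTM over partial learning curves can reach the same ranking with at most half of this fidelity sequence.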