Towards Meta-learned Algorithm Selection using Implicit Fidelity Information

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed


Details

Original language: English
Title of host publication: ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML)
Publication status: Published electronically (e-pub) - 7 June 2022
Event: ICML 2022 Workshop on Adaptive Experimental Design and Active Learning in the Real World - Baltimore, United States
Duration: 22 July 2022 - 23 July 2022

Abstract

Automatically selecting the best performing algorithm for a given dataset or ranking multiple algorithms by their expected performance supports users in developing new machine learning applications. Most approaches for this problem rely on pre-computed dataset meta-features and landmarking performances to capture the salient topology of the datasets and those topologies that the algorithms attend to. Landmarking usually exploits cheap algorithms not necessarily in the pool of candidate algorithms to get inexpensive approximations of the topology. While somewhat indicative, hand-crafted dataset meta-features and landmarks are likely insufficient descriptors, strongly depending on the alignment of the topologies that the landmarks and the candidate algorithms search for. We propose IMFAS, a method to exploit multi-fidelity landmarking information directly from the candidate algorithms in the form of non-parametrically non-myopic meta-learned learning curves via LSTMs in a few-shot setting during testing. Using this mechanism, IMFAS jointly learns the topology of the datasets and the inductive biases of the candidate algorithms, without the need to expensively train them to convergence. Our approach produces informative landmarks, easily enriched by arbitrary meta-features at a low computational cost, capable of producing the desired ranking using cheaper fidelities. We additionally show that IMFAS is able to beat Successive Halving with at most 50% of the fidelity sequence during test time.
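The abstract describes the mechanism only in prose: an LSTM, meta-learned across many datasets, consumes the partial (cheap-fidelity) learning curves of the candidate algorithms and scores them so that a ranking can be produced without training any candidate to convergence. Purely as an illustrative sketch of that idea, and not the authors' implementation, the following PyTorch snippet shows one way such a curve encoder and scorer could be wired together; the module names, dimensions, the concatenation of dataset meta-features, and the toy inputs are all assumptions.

# Illustrative sketch only (assumed architecture, not the authors' code): an LSTM
# reads the partial multi-fidelity learning curve of each candidate algorithm,
# optionally fused with dataset meta-features, and emits one score per candidate
# so the candidates can be ranked for the dataset at hand.
import torch
import torch.nn as nn


class LearningCurveRanker(nn.Module):
    def __init__(self, n_meta_features: int, hidden: int = 64):
        super().__init__()
        # Shared LSTM over the fidelity axis (e.g. validation accuracy after
        # 1, 2, ... budget slices) of every candidate algorithm.
        self.curve_encoder = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        # Dataset meta-features (hand-crafted or landmark-based) can be mixed in cheaply.
        self.meta_encoder = nn.Sequential(nn.Linear(n_meta_features, hidden), nn.ReLU())
        # Small head that turns the joint representation into a score per algorithm.
        self.scorer = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, curves: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        # curves: (batch, n_algorithms, n_fidelities) partial learning curves
        # meta:   (batch, n_meta_features) dataset descriptors
        b, a, f = curves.shape
        _, (h, _) = self.curve_encoder(curves.reshape(b * a, f, 1))
        curve_repr = h[-1].reshape(b, a, -1)                      # (b, a, hidden)
        meta_repr = self.meta_encoder(meta).unsqueeze(1).expand(-1, a, -1)
        scores = self.scorer(torch.cat([curve_repr, meta_repr], dim=-1))
        return scores.squeeze(-1)                                 # (b, n_algorithms)


if __name__ == "__main__":
    model = LearningCurveRanker(n_meta_features=8)
    curves = torch.rand(4, 10, 5)   # 4 datasets, 10 candidates, 5 observed fidelity steps
    meta = torch.rand(4, 8)
    # Higher score = expected to perform better; argsort yields the predicted ranking.
    print(model(curves, meta).argsort(dim=-1, descending=True))

In the setting the abstract describes, such a scorer would be meta-trained with a ranking objective over a collection of datasets and then applied few-shot to a new dataset after observing only the first fidelity steps, which is the regime in which the paper compares against Successive Halving.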

Cite

Towards Meta-learned Algorithm Selection using Implicit Fidelity Information. / Mohan, Aditya; Ruhkopf, Tim; Lindauer, Marius.
ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML). 2022.


Mohan, A, Ruhkopf, T & Lindauer, M 2022, Towards Meta-learned Algorithm Selection using Implicit Fidelity Information. in ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML). ICML 2022 Workshop on Adaptive Experimental Design and Active Learning in the Real World, Baltimore, Maryland, United States, 22 July 2022. <https://arxiv.org/abs/2206.03130>
Mohan, A., Ruhkopf, T., & Lindauer, M. (2022). Towards Meta-learned Algorithm Selection using Implicit Fidelity Information. In ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML). Advance online publication. https://arxiv.org/abs/2206.03130
Mohan A, Ruhkopf T, Lindauer M. Towards Meta-learned Algorithm Selection using Implicit Fidelity Information. In: ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML). 2022. Epub 2022 Jun 7.
Mohan, Aditya ; Ruhkopf, Tim ; Lindauer, Marius. / Towards Meta-learned Algorithm Selection using Implicit Fidelity Information. ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML). 2022.
Download (BibTeX)
@inproceedings{b615fe7ed821409aa5e00cb1e5bd2aa6,
title = "Towards Meta-learned Algorithm Selection using Implicit Fidelity Information",
abstract = "Automatically selecting the best performing algorithm for a given dataset or ranking multiple algorithms by their expected performance supports users in developing new machine learning applications. Most approaches for this problem rely on pre-computed dataset meta-features and landmarking performances to capture the salient topology of the datasets and those topologies that the algorithms attend to. Landmarking usually exploits cheap algorithms not necessarily in the pool of candidate algorithms to get inexpensive approximations of the topology. While somewhat indicative, hand-crafted dataset meta-features and landmarks are likely insufficient descriptors, strongly depending on the alignment of the topologies that the landmarks and the candidate algorithms search for. We propose IMFAS, a method to exploit multi-fidelity landmarking information directly from the candidate algorithms in the form of non-parametrically non-myopic meta-learned learning curves via LSTMs in a few-shot setting during testing. Using this mechanism, IMFAS jointly learns the topology of the datasets and the inductive biases of the candidate algorithms, without the need to expensively train them to convergence. Our approach produces informative landmarks, easily enriched by arbitrary meta-features at a low computational cost, capable of producing the desired ranking using cheaper fidelities. We additionally show that IMFAS is able to beat Successive Halving with at most 50% of the fidelity sequence during test time. ",
keywords = "cs.LG",
author = "Aditya Mohan and Tim Ruhkopf and Marius Lindauer",
note = "Camera-ready version; ICML 2022 Workshop on Adaptive Experimental Design and Active Learning in the Real World; Conference date: 22-07-2022 through 23-07-2022",
year = "2022",
month = jun,
day = "7",
language = "English",
booktitle = "ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML)",
}

Download (RIS)

TY - GEN
T1 - Towards Meta-learned Algorithm Selection using Implicit Fidelity Information
AU - Mohan, Aditya
AU - Ruhkopf, Tim
AU - Lindauer, Marius
N1 - Camera-ready version
PY - 2022/6/7
Y1 - 2022/6/7
N2 - Automatically selecting the best performing algorithm for a given dataset or ranking multiple algorithms by their expected performance supports users in developing new machine learning applications. Most approaches for this problem rely on pre-computed dataset meta-features and landmarking performances to capture the salient topology of the datasets and those topologies that the algorithms attend to. Landmarking usually exploits cheap algorithms not necessarily in the pool of candidate algorithms to get inexpensive approximations of the topology. While somewhat indicative, hand-crafted dataset meta-features and landmarks are likely insufficient descriptors, strongly depending on the alignment of the topologies that the landmarks and the candidate algorithms search for. We propose IMFAS, a method to exploit multi-fidelity landmarking information directly from the candidate algorithms in the form of non-parametrically non-myopic meta-learned learning curves via LSTMs in a few-shot setting during testing. Using this mechanism, IMFAS jointly learns the topology of the datasets and the inductive biases of the candidate algorithms, without the need to expensively train them to convergence. Our approach produces informative landmarks, easily enriched by arbitrary meta-features at a low computational cost, capable of producing the desired ranking using cheaper fidelities. We additionally show that IMFAS is able to beat Successive Halving with at most 50% of the fidelity sequence during test time.
AB - Automatically selecting the best performing algorithm for a given dataset or ranking multiple algorithms by their expected performance supports users in developing new machine learning applications. Most approaches for this problem rely on pre-computed dataset meta-features and landmarking performances to capture the salient topology of the datasets and those topologies that the algorithms attend to. Landmarking usually exploits cheap algorithms not necessarily in the pool of candidate algorithms to get inexpensive approximations of the topology. While somewhat indicative, hand-crafted dataset meta-features and landmarks are likely insufficient descriptors, strongly depending on the alignment of the topologies that the landmarks and the candidate algorithms search for. We propose IMFAS, a method to exploit multi-fidelity landmarking information directly from the candidate algorithms in the form of non-parametrically non-myopic meta-learned learning curves via LSTMs in a few-shot setting during testing. Using this mechanism, IMFAS jointly learns the topology of the datasets and the inductive biases of the candidate algorithms, without the need to expensively train them to convergence. Our approach produces informative landmarks, easily enriched by arbitrary meta-features at a low computational cost, capable of producing the desired ranking using cheaper fidelities. We additionally show that IMFAS is able to beat Successive Halving with at most 50% of the fidelity sequence during test time.
KW - cs.LG
M3 - Conference contribution
BT - ICML Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML)
T2 - ICML 2022 Workshop on Adaptive Experimental Design and Active Learning in the Real World
Y2 - 22 July 2022 through 23 July 2022
ER -
