MASIF: Meta-learned Algorithm Selection using Implicit Fidelity Information

Publication: Journal article · Research · Peer-reviewed

Authorship

External organizations

  • Albert-Ludwigs-Universität Freiburg

Details

Original language: English
Journal: Transactions on Machine Learning Research
Early online date: 18 Apr. 2023
Publication status: Electronically published (E-Pub) - 18 Apr. 2023

Abstract

Selecting a well-performing algorithm for a given task or dataset can be time-consuming and tedious, but is crucial for the successful day-to-day business of developing new AI & ML applications. Algorithm Selection (AS) mitigates this through a meta-model leveraging meta-information about previous tasks. However, most of the available AS methods are error-prone because they characterize a task by either cheap-to-compute properties of the dataset or evaluations of cheap proxy algorithms, called landmarks. In this work, we extend the classical AS data setup to include multi-fidelity information and empirically demonstrate how meta-learning on algorithms' learning behaviour allows us to exploit cheap test-time evidence effectively and combat myopia significantly. We further postulate a budget-regret trade-off w.r.t. the selection process. Our new selector MASIF is able to jointly interpret online evidence on a task in the form of varying-length learning curves without any parametric assumption by leveraging a transformer-based encoder. This opens up new possibilities for guided rapid prototyping in data science on cheaply observed partial learning curves.
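To illustrate the core idea — jointly interpreting varying-length learning curves with an attention-based encoder rather than a parametric curve model — the following is a minimal, hypothetical sketch. It is not the authors' implementation: all function names are invented, the single-head attention and the (value, relative-fidelity) token embedding are simplifying assumptions, and the final ranking rule is a stand-in for a learned selection head.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attend(queries, keys, values, mask):
    """Single-head scaled dot-product self-attention with a padding mask."""
    d = len(queries[0])
    out = []
    for q in queries:
        scores = []
        for k, observed in zip(keys, mask):
            s = sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
            # Padding positions are masked out so only observed fidelity
            # steps contribute attention weight.
            scores.append(s if observed else float("-inf"))
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values)) for j in range(d)])
    return out

def encode_curve(curve, max_len):
    """Embed a partial learning curve as (value, relative fidelity) tokens,
    padded to max_len with a boolean mask marking observed positions."""
    toks = [[v, (t + 1) / max_len] for t, v in enumerate(curve)]
    mask = [True] * len(curve) + [False] * (max_len - len(curve))
    toks += [[0.0, 0.0]] * (max_len - len(curve))
    return toks, mask

# Two candidate algorithms observed for different numbers of epochs
# (varying-length partial learning curves on the new task).
curves = [[0.55, 0.62, 0.70], [0.60, 0.61]]
max_len = max(len(c) for c in curves)

pooled = []
for c in curves:
    toks, mask = encode_curve(c, max_len)
    h = attend(toks, toks, toks, mask)          # self-attention over the curve
    n = sum(mask)                               # pool only observed positions
    pooled.append([sum(h[i][j] for i in range(n)) / n for j in range(2)])

# Stand-in for a learned selection head: rank algorithms by the pooled
# performance feature and pick the most promising one.
best = max(range(len(pooled)), key=lambda i: pooled[i][0])
```

Because attention weights form a convex combination of the observed curve values, each pooled feature stays within the range of that algorithm's observations; a trained head on top of such representations can then trade off selection budget against regret as discussed in the abstract.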

Cite

MASIF: Meta-learned Algorithm Selection using Implicit Fidelity Information. / Ruhkopf, Tim; Mohan, Aditya; Deng, Difan et al.
in: Transactions on Machine Learning Research, 18.04.2023.


Ruhkopf, T., Mohan, A., Deng, D., Tornede, A., Hutter, F., & Lindauer, M. (2023). MASIF: Meta-learned Algorithm Selection using Implicit Fidelity Information. Transactions on Machine Learning Research. Advance online publication. https://openreview.net/forum?id=5aYGXxByI6
Ruhkopf T, Mohan A, Deng D, Tornede A, Hutter F, Lindauer M. MASIF: Meta-learned Algorithm Selection using Implicit Fidelity Information. Transactions on Machine Learning Research. 2023 Apr 18. Epub 2023 Apr 18.
Download (BibTeX)
@article{91a29f974fce4959967ea6759e1075f4,
title = "MASIF: Meta-learned Algorithm Selection using Implicit Fidelity Information",
abstract = "Selecting a well-performing algorithm for a given task or dataset can be time-consuming and tedious, but is crucial for the successful day-to-day business of developing new AI {\&} ML applications. Algorithm Selection (AS) mitigates this through a meta-model leveraging meta-information about previous tasks. However, most of the available AS methods are error-prone because they characterize a task by either cheap-to-compute properties of the dataset or evaluations of cheap proxy algorithms, called landmarks. In this work, we extend the classical AS data setup to include multi-fidelity information and empirically demonstrate how meta-learning on algorithms{\textquoteright} learning behaviour allows us to exploit cheap test-time evidence effectively and combat myopia significantly. We further postulate a budget-regret trade-off w.r.t. the selection process. Our new selector MASIF is able to jointly interpret online evidence on a task in form of varying-length learning curves without any parametric assumption by leveraging a transformer-based encoder. This opens up new possibilities for guided rapid prototyping in data science on cheaply observed partial learning curves.",
keywords = "Algorithm Selection, Meta-Learning, Multi-Fidelity Optimization",
author = "Tim Ruhkopf and Aditya Mohan and Difan Deng and Alexander Tornede and Frank Hutter and Marius Lindauer",
year = "2023",
month = apr,
day = "18",
language = "English",
journal = "Transactions on Machine Learning Research",
issn = "2835-8856",
url = "https://openreview.net/forum?id=5aYGXxByI6",
}

Download (RIS)

TY - JOUR

T1 - MASIF: Meta-learned Algorithm Selection using Implicit Fidelity Information

AU - Ruhkopf, Tim

AU - Mohan, Aditya

AU - Deng, Difan

AU - Tornede, Alexander

AU - Hutter, Frank

AU - Lindauer, Marius

PY - 2023/4/18

Y1 - 2023/4/18

N2 - Selecting a well-performing algorithm for a given task or dataset can be time-consuming and tedious, but is crucial for the successful day-to-day business of developing new AI & ML applications. Algorithm Selection (AS) mitigates this through a meta-model leveraging meta-information about previous tasks. However, most of the available AS methods are error-prone because they characterize a task by either cheap-to-compute properties of the dataset or evaluations of cheap proxy algorithms, called landmarks. In this work, we extend the classical AS data setup to include multi-fidelity information and empirically demonstrate how meta-learning on algorithms’ learning behaviour allows us to exploit cheap test-time evidence effectively and combat myopia significantly. We further postulate a budget-regret trade-off w.r.t. the selection process. Our new selector MASIF is able to jointly interpret online evidence on a task in form of varying-length learning curves without any parametric assumption by leveraging a transformer-based encoder. This opens up new possibilities for guided rapid prototyping in data science on cheaply observed partial learning curves.

AB - Selecting a well-performing algorithm for a given task or dataset can be time-consuming and tedious, but is crucial for the successful day-to-day business of developing new AI & ML applications. Algorithm Selection (AS) mitigates this through a meta-model leveraging meta-information about previous tasks. However, most of the available AS methods are error-prone because they characterize a task by either cheap-to-compute properties of the dataset or evaluations of cheap proxy algorithms, called landmarks. In this work, we extend the classical AS data setup to include multi-fidelity information and empirically demonstrate how meta-learning on algorithms’ learning behaviour allows us to exploit cheap test-time evidence effectively and combat myopia significantly. We further postulate a budget-regret trade-off w.r.t. the selection process. Our new selector MASIF is able to jointly interpret online evidence on a task in form of varying-length learning curves without any parametric assumption by leveraging a transformer-based encoder. This opens up new possibilities for guided rapid prototyping in data science on cheaply observed partial learning curves.

KW - Algorithm Selection

KW - Meta-Learning

KW - Multi-Fidelity Optimization

M3 - Article

JO - Transactions on Machine Learning Research

JF - Transactions on Machine Learning Research

SN - 2835-8856

ER -
