HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Lukas Fehring
  • Jonas Hanselle
  • Alexander Tornede

External Research Organisations

  • Paderborn University

Details

Original language: English
Title of host publication: NeurIPS Workshop on Meta Learning (MetaLearn 2022)
Publication status: Published - 2022
Event: Workshop on Meta-Learning (MetaLearn 2022) - online
Duration: 2 Dec 2022 → …

Abstract

It is well known that different algorithms perform differently on instances of an algorithmic problem, motivating algorithm selection (AS): given an instance of an algorithmic problem, which algorithm is the most suitable to solve it? The AS problem has therefore received considerable attention, resulting in various approaches, many of which solve either a regression or a ranking problem under the hood. Although both formulations are natural ways to tackle AS, each has considerable weaknesses. On the one hand, correctly predicting the performance of an algorithm on an instance is a sufficient, but not a necessary, condition for producing a correct ranking over algorithms, and in particular for ranking the best algorithm first. On the other hand, classical ranking approaches often do not account for the concrete performance values available in the training data, but only leverage rankings derived from such data. We propose HARRIS (Hybrid rAnking and RegRessIon foreSts), a new algorithm selector leveraging special forests that combine the strengths of both approaches while alleviating their weaknesses. HARRIS' decisions are based on a forest model whose trees are grown from splits optimized on a hybrid ranking and regression loss function. As our preliminary experimental study on ASLib shows, HARRIS improves over standard algorithm selection approaches in some scenarios, indicating that combining ranking and regression in trees is indeed promising for AS.
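The hybrid split criterion described in the abstract could be sketched as follows. The concrete loss definitions below (a per-algorithm squared-error term plus a rank-disagreement term, mixed by a weight `lam`) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def regression_loss(perf):
    # squared deviation of each performance value from the node's
    # per-algorithm mean (an assumed regression term)
    return np.mean((perf - perf.mean(axis=0)) ** 2)

def ranking_loss(perf):
    # disagreement between each instance's algorithm ranking and the
    # ranking induced by the node's mean performances (an assumed ranking term)
    node_rank = np.argsort(np.argsort(perf.mean(axis=0)))
    inst_ranks = np.argsort(np.argsort(perf, axis=1), axis=1)
    return np.mean(np.abs(inst_ranks - node_rank))

def hybrid_split_loss(perf_left, perf_right, lam=0.5):
    # size-weighted hybrid loss of the two child nodes of a candidate split;
    # perf_* are (instances x algorithms) performance matrices
    def node_loss(p):
        return lam * regression_loss(p) + (1 - lam) * ranking_loss(p)
    n_l, n_r = len(perf_left), len(perf_right)
    return (n_l * node_loss(perf_left) + n_r * node_loss(perf_right)) / (n_l + n_r)
```

A tree would evaluate this loss for every candidate feature split and keep the minimizer; `lam` trades off the regression term against the ranking term.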

Cite this

HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection. / Fehring, Lukas; Hanselle, Jonas; Tornede, Alexander.
NeurIPS Workshop on Meta Learning (MetaLearn 2022). 2022.

Research output: Chapter in book/report/conference proceedingConference contributionResearchpeer review

Fehring, L, Hanselle, J & Tornede, A 2022, HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection. in NeurIPS Workshop on Meta Learning (MetaLearn 2022). Workshop on Meta-Learning (MetaLearn 2022), 2 Dec 2022. https://doi.org/10.48550/arXiv.2210.17341
Fehring, L., Hanselle, J., & Tornede, A. (2022). HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection. In NeurIPS Workshop on Meta Learning (MetaLearn 2022) https://doi.org/10.48550/arXiv.2210.17341
Fehring L, Hanselle J, Tornede A. HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection. In NeurIPS Workshop on Meta Learning (MetaLearn 2022). 2022. Epub 2022 Oct 31. doi: 10.48550/arXiv.2210.17341
Fehring, Lukas; Hanselle, Jonas; Tornede, Alexander. / HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection. NeurIPS Workshop on Meta Learning (MetaLearn 2022). 2022.
BibTeX
@inproceedings{ac6176b1d3fa4aea9c3a3e6521490242,
title = "HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection",
abstract = "It is well known that different algorithms perform differently well on an instance of an algorithmic problem, motivating algorithm selection (AS): Given an instance of an algorithmic problem, which is the most suitable algorithm to solve it? As such, the AS problem has received considerable attention resulting in various approaches - many of which either solve a regression or ranking problem under the hood. Although both of these formulations yield very natural ways to tackle AS, they have considerable weaknesses. On the one hand, correctly predicting the performance of an algorithm on an instance is a sufficient, but not a necessary condition to produce a correct ranking over algorithms and in particular ranking the best algorithm first. On the other hand, classical ranking approaches often do not account for concrete performance values available in the training data, but only leverage rankings composed from such data. We propose HARRIS- Hybrid rAnking and RegRessIon foreSts - a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses. HARRIS' decisions are based on a forest model, whose trees are created based on splits optimized on a hybrid ranking and regression loss function. As our preliminary experimental study on ASLib shows, HARRIS improves over standard algorithm selection approaches on some scenarios showing that combining ranking and regression in trees is indeed promising for AS. ",
author = "Lukas Fehring and Jonas Hanselle and Alexander Tornede",
year = "2022",
doi = "10.48550/arXiv.2210.17341",
language = "English",
booktitle = "NeurIPS Workshop on Meta Learning (MetaLearn 2022)",
note = "Workshop on Meta-Learning (MetaLearn 2022) ; Conference date: 02-12-2022",

}

RIS

TY - GEN

T1 - HARRIS: Hybrid Ranking and Regression Forests for Algorithm Selection

T2 - Workshop on Meta-Learning (MetaLearn 2022)

AU - Fehring, Lukas

AU - Hanselle, Jonas

AU - Tornede, Alexander

PY - 2022

Y1 - 2022

N2 - It is well known that different algorithms perform differently well on an instance of an algorithmic problem, motivating algorithm selection (AS): Given an instance of an algorithmic problem, which is the most suitable algorithm to solve it? As such, the AS problem has received considerable attention resulting in various approaches - many of which either solve a regression or ranking problem under the hood. Although both of these formulations yield very natural ways to tackle AS, they have considerable weaknesses. On the one hand, correctly predicting the performance of an algorithm on an instance is a sufficient, but not a necessary condition to produce a correct ranking over algorithms and in particular ranking the best algorithm first. On the other hand, classical ranking approaches often do not account for concrete performance values available in the training data, but only leverage rankings composed from such data. We propose HARRIS- Hybrid rAnking and RegRessIon foreSts - a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses. HARRIS' decisions are based on a forest model, whose trees are created based on splits optimized on a hybrid ranking and regression loss function. As our preliminary experimental study on ASLib shows, HARRIS improves over standard algorithm selection approaches on some scenarios showing that combining ranking and regression in trees is indeed promising for AS.

AB - It is well known that different algorithms perform differently well on an instance of an algorithmic problem, motivating algorithm selection (AS): Given an instance of an algorithmic problem, which is the most suitable algorithm to solve it? As such, the AS problem has received considerable attention resulting in various approaches - many of which either solve a regression or ranking problem under the hood. Although both of these formulations yield very natural ways to tackle AS, they have considerable weaknesses. On the one hand, correctly predicting the performance of an algorithm on an instance is a sufficient, but not a necessary condition to produce a correct ranking over algorithms and in particular ranking the best algorithm first. On the other hand, classical ranking approaches often do not account for concrete performance values available in the training data, but only leverage rankings composed from such data. We propose HARRIS- Hybrid rAnking and RegRessIon foreSts - a new algorithm selector leveraging special forests, combining the strengths of both approaches while alleviating their weaknesses. HARRIS' decisions are based on a forest model, whose trees are created based on splits optimized on a hybrid ranking and regression loss function. As our preliminary experimental study on ASLib shows, HARRIS improves over standard algorithm selection approaches on some scenarios showing that combining ranking and regression in trees is indeed promising for AS.

U2 - 10.48550/arXiv.2210.17341

DO - 10.48550/arXiv.2210.17341

M3 - Conference contribution

BT - NeurIPS Workshop on Meta Learning (MetaLearn 2022)

Y2 - 2 December 2022

ER -
