A flexible class of dependence-aware multi-label loss functions

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Eyke Hüllermeier
  • Marcel Wever
  • Eneldo Loza Mencia
  • Johannes Fürnkranz
  • Michael Rapp

External organisations

  • Ludwig-Maximilians-Universität München (LMU)
  • Universität Paderborn
  • Technische Universität Darmstadt
  • Johannes Kepler Universität Linz (JKU)

Details

Original language: English
Pages (from-to): 713-737
Number of pages: 25
Journal: Machine learning
Volume: 111
Issue number: 2
Publication status: Published - Feb. 2022
Published externally: Yes

Abstract

The idea to exploit label dependencies for better prediction is at the core of methods for multi-label classification (MLC), and performance improvements are normally explained in this way. Surprisingly, however, there is no established methodology that allows one to analyze the dependence-awareness of MLC algorithms. With that goal in mind, we introduce a class of loss functions that are able to capture the important aspect of label dependence. To this end, we leverage the mathematical framework of non-additive measures and integrals. Roughly speaking, a non-additive measure allows for modeling the importance of correct predictions of label subsets (instead of single labels), and thereby their impact on the overall evaluation, in a flexible way. The well-known Hamming and subset 0/1 losses are rather extreme special cases of this function class, which give full importance to single label sets or the entire label set, respectively. We present concrete instantiations of this class, which appear to be especially appealing from a modeling perspective. The assessment of multi-label classifiers in terms of these losses is illustrated in an empirical study, clearly showing their aptness at capturing label dependencies. Finally, while not being the main goal of this study, we also show some preliminary results on the minimization of this parametrized family of losses.
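The two extreme cases named in the abstract are standard MLC losses with well-known definitions. A minimal sketch of just these two endpoints (not the paper's non-additive-measure family, which is not reproduced here) could look like:

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    # Standard Hamming loss: fraction of individual labels predicted
    # incorrectly. Each label counts on its own, so label dependence
    # plays no role in the evaluation.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

def subset_zero_one_loss(y_true, y_pred):
    # Standard subset 0/1 loss: 0 only if the entire label vector is
    # predicted exactly right, 1 otherwise. Only the full label set matters.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(not np.array_equal(y_true, y_pred))

# One wrong label out of four: mild under Hamming, maximal under subset 0/1.
y_true = [1, 0, 1, 1]
y_pred = [1, 0, 0, 1]
print(hamming_loss(y_true, y_pred))          # 0.25
print(subset_zero_one_loss(y_true, y_pred))  # 1.0
```

The paper's loss family interpolates between these two behaviors by assigning importance to intermediate label subsets via a non-additive measure; the sketch above only shows the two classical boundary cases.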

ASJC Scopus subject areas

Cite

A flexible class of dependence-aware multi-label loss functions. / Hüllermeier, Eyke; Wever, Marcel; Loza Mencia, Eneldo et al.
In: Machine learning, Vol. 111, No. 2, 02.2022, pp. 713-737.


Hüllermeier, E, Wever, M, Loza Mencia, E, Fürnkranz, J & Rapp, M 2022, 'A flexible class of dependence-aware multi-label loss functions', Machine learning, vol. 111, no. 2, pp. 713-737. https://doi.org/10.1007/s10994-021-06107-2
Hüllermeier, E., Wever, M., Loza Mencia, E., Fürnkranz, J., & Rapp, M. (2022). A flexible class of dependence-aware multi-label loss functions. Machine learning, 111(2), 713-737. https://doi.org/10.1007/s10994-021-06107-2
Hüllermeier E, Wever M, Loza Mencia E, Fürnkranz J, Rapp M. A flexible class of dependence-aware multi-label loss functions. Machine learning. 2022 Feb;111(2):713-737. doi: 10.1007/s10994-021-06107-2
Hüllermeier, Eyke ; Wever, Marcel ; Loza Mencia, Eneldo et al. / A flexible class of dependence-aware multi-label loss functions. In: Machine learning. 2022 ; Vol. 111, No. 2. pp. 713-737.
@article{1ffc80d7ee3243e0bf554778c51fb083,
title = "A flexible class of dependence-aware multi-label loss functions",
abstract = "The idea to exploit label dependencies for better prediction is at the core of methods for multi-label classification (MLC), and performance improvements are normally explained in this way. Surprisingly, however, there is no established methodology that allows one to analyze the dependence-awareness of MLC algorithms. With that goal in mind, we introduce a class of loss functions that are able to capture the important aspect of label dependence. To this end, we leverage the mathematical framework of non-additive measures and integrals. Roughly speaking, a non-additive measure allows for modeling the importance of correct predictions of label subsets (instead of single labels), and thereby their impact on the overall evaluation, in a flexible way. The well-known Hamming and subset 0/1 losses are rather extreme special cases of this function class, which give full importance to single label sets or the entire label set, respectively. We present concrete instantiations of this class, which appear to be especially appealing from a modeling perspective. The assessment of multi-label classifiers in terms of these losses is illustrated in an empirical study, clearly showing their aptness at capturing label dependencies. Finally, while not being the main goal of this study, we also show some preliminary results on the minimization of this parametrized family of losses.",
keywords = "Analysis, Label dependence, Loss function, Multi-label classification, Non-additive measures",
author = "Eyke H{\"u}llermeier and Marcel Wever and {Loza Mencia}, Eneldo and Johannes F{\"u}rnkranz and Michael Rapp",
note = "Publisher Copyright: {\textcopyright} 2022, The Author(s).",
year = "2022",
month = feb,
doi = "10.1007/s10994-021-06107-2",
language = "English",
volume = "111",
pages = "713--737",
journal = "Machine learning",
issn = "0885-6125",
publisher = "Springer Netherlands",
number = "2",

}


TY - JOUR

T1 - A flexible class of dependence-aware multi-label loss functions

AU - Hüllermeier, Eyke

AU - Wever, Marcel

AU - Loza Mencia, Eneldo

AU - Fürnkranz, Johannes

AU - Rapp, Michael

N1 - Publisher Copyright: © 2022, The Author(s).

PY - 2022/2

Y1 - 2022/2

N2 - The idea to exploit label dependencies for better prediction is at the core of methods for multi-label classification (MLC), and performance improvements are normally explained in this way. Surprisingly, however, there is no established methodology that allows one to analyze the dependence-awareness of MLC algorithms. With that goal in mind, we introduce a class of loss functions that are able to capture the important aspect of label dependence. To this end, we leverage the mathematical framework of non-additive measures and integrals. Roughly speaking, a non-additive measure allows for modeling the importance of correct predictions of label subsets (instead of single labels), and thereby their impact on the overall evaluation, in a flexible way. The well-known Hamming and subset 0/1 losses are rather extreme special cases of this function class, which give full importance to single label sets or the entire label set, respectively. We present concrete instantiations of this class, which appear to be especially appealing from a modeling perspective. The assessment of multi-label classifiers in terms of these losses is illustrated in an empirical study, clearly showing their aptness at capturing label dependencies. Finally, while not being the main goal of this study, we also show some preliminary results on the minimization of this parametrized family of losses.

AB - The idea to exploit label dependencies for better prediction is at the core of methods for multi-label classification (MLC), and performance improvements are normally explained in this way. Surprisingly, however, there is no established methodology that allows one to analyze the dependence-awareness of MLC algorithms. With that goal in mind, we introduce a class of loss functions that are able to capture the important aspect of label dependence. To this end, we leverage the mathematical framework of non-additive measures and integrals. Roughly speaking, a non-additive measure allows for modeling the importance of correct predictions of label subsets (instead of single labels), and thereby their impact on the overall evaluation, in a flexible way. The well-known Hamming and subset 0/1 losses are rather extreme special cases of this function class, which give full importance to single label sets or the entire label set, respectively. We present concrete instantiations of this class, which appear to be especially appealing from a modeling perspective. The assessment of multi-label classifiers in terms of these losses is illustrated in an empirical study, clearly showing their aptness at capturing label dependencies. Finally, while not being the main goal of this study, we also show some preliminary results on the minimization of this parametrized family of losses.

KW - Analysis

KW - Label dependence

KW - Loss function

KW - Multi-label classification

KW - Non-additive measures

UR - http://www.scopus.com/inward/record.url?scp=85123113719&partnerID=8YFLogxK

U2 - 10.1007/s10994-021-06107-2

DO - 10.1007/s10994-021-06107-2

M3 - Article

AN - SCOPUS:85123113719

VL - 111

SP - 713

EP - 737

JO - Machine learning

JF - Machine learning

SN - 0885-6125

IS - 2

ER -
