Well-tuned Simple Nets Excel on Tabular Datasets

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

  • Arlind Kadra
  • Marius Lindauer
  • Frank Hutter
  • Josif Grabocka

External organisations

  • Albert-Ludwigs-Universität Freiburg
  • Bosch Center for Artificial Intelligence (BCAI)

Details

Original language: English
Title of host publication: Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021)
Number of pages: 23
Publication status: Electronically published (e-pub) - 2021
Event: 35th Conference on Neural Information Processing Systems: Track on Datasets and Benchmarks - Virtual-only
Duration: 6 Dec 2021 – 14 Dec 2021

Abstract

Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.
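The approach described above amounts to a joint search over (i) a binary on/off decision for each of the 13 regularization techniques and (ii) the conditional hyperparameters of every technique that is switched on. The following is a minimal, hypothetical Python sketch of such a conditional search space explored with plain random search; the regularizer names, value ranges, and the evaluate() stub are illustrative assumptions and do not reproduce the paper's exact search space or its actual hyperparameter optimizer.

import random

# Hypothetical subset of the regularizers; names and ranges are illustrative,
# not the paper's exact search space.
SEARCH_SPACE = {
    "weight_decay":     {"active": [True, False], "lambda": (1e-5, 1e-1)},
    "dropout":          {"active": [True, False], "rate": (0.0, 0.8)},
    "batch_norm":       {"active": [True, False]},
    "mixup":            {"active": [True, False], "alpha": (0.1, 1.0)},
    "cutmix":           {"active": [True, False], "prob": (0.1, 1.0)},
    "label_smoothing":  {"active": [True, False], "eps": (0.0, 0.2)},
    "stochastic_depth": {"active": [True, False], "drop_prob": (0.0, 0.5)},
}

def sample_cocktail(space, rng):
    """Jointly sample the on/off decision for each regularizer and, only if it
    is switched on, its conditional hyperparameters."""
    config = {}
    for name, params in space.items():
        active = rng.choice(params["active"])
        config[name] = {"active": active}
        if active:
            for hp, (low, high) in params.items():
                if hp == "active":
                    continue
                config[name][hp] = rng.uniform(low, high)
    return config

def evaluate(config):
    """Placeholder: train a plain MLP with the sampled cocktail on one tabular
    dataset and return validation performance (a real setup would use CV)."""
    return random.random()  # stand-in score, for illustration only

def random_search(space, n_trials=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_cocktail(space, rng)
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

if __name__ == "__main__":
    cfg, score = random_search(SEARCH_SPACE)
    print(score, cfg)

In the paper's setting, evaluate() would correspond to training a fixed plain MLP with the sampled regularization cocktail and reporting validation performance, and the search is carried out separately for each dataset; random search is used here only as a simple stand-in for a dedicated hyperparameter optimizer.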

Cite

Well-tuned Simple Nets Excel on Tabular Datasets. / Kadra, Arlind; Lindauer, Marius; Hutter, Frank et al.
Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021). 2021.


Kadra, A, Lindauer, M, Hutter, F & Grabocka, J 2021, Well-tuned Simple Nets Excel on Tabular Datasets. in Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021). 35th Conference on Neural Information Processing Systems, 6 Dec. 2021. <https://arxiv.org/abs/2106.11189>
Kadra, A., Lindauer, M., Hutter, F., & Grabocka, J. (2021). Well-tuned Simple Nets Excel on Tabular Datasets. In Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021). Advance online publication. https://arxiv.org/abs/2106.11189
Kadra A, Lindauer M, Hutter F, Grabocka J. Well-tuned Simple Nets Excel on Tabular Datasets. in Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021). 2021 Epub 2021.
Kadra, Arlind ; Lindauer, Marius ; Hutter, Frank et al. / Well-tuned Simple Nets Excel on Tabular Datasets. Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021). 2021.
BibTeX
@inproceedings{33b0d2d471094782ad33952e85d3b8ce,
title = "Well-tuned Simple Nets Excel on Tabular Datasets",
abstract = " Tabular datasets are the last {"}unconquered castle{"} for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost. ",
keywords = "cs.LG",
author = "Arlind Kadra and Marius Lindauer and Frank Hutter and Josif Grabocka",
year = "2021",
language = "English",
booktitle = "Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021)",
note = "35th Conference on Neural Information Processing Systems : Track on Datasets and Benchmarks, NeurIPS 2021 ; Conference date: 06-12-2021 Through 14-12-2021",

}

RIS

TY - GEN

T1 - Well-tuned Simple Nets Excel on Tabular Datasets

AU - Kadra, Arlind

AU - Lindauer, Marius

AU - Hutter, Frank

AU - Grabocka, Josif

PY - 2021

Y1 - 2021

N2 - Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.

AB - Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.

KW - cs.LG

M3 - Conference contribution

BT - Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021)

T2 - 35th Conference on Neural Information Processing Systems

Y2 - 6 December 2021 through 14 December 2021

ER -
