Well-tuned Simple Nets Excel on Tabular Datasets

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Arlind Kadra
  • Marius Lindauer
  • Frank Hutter
  • Josif Grabocka

External Research Organisations

  • University of Freiburg
  • Bosch Center for Artificial Intelligence (BCAI)

Details

Original language: English
Title of host publication: Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021)
Number of pages: 23
Publication status: E-pub ahead of print - 2021
Event: 35th Conference on Neural Information Processing Systems: Track on Datasets and Benchmarks - Virtual-only
Duration: 6 Dec 2021 - 14 Dec 2021

Abstract

Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.
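
To make the core idea concrete, the sketch below illustrates such a per-dataset search: every regularizer gets a categorical on/off decision plus conditional (subsidiary) hyperparameters, and a single tuner optimizes all of them jointly against validation performance. This is not the authors' implementation; Optuna, the plain PyTorch MLP, the three example regularizers (dropout, weight decay, Mix-Up), the toy scikit-learn dataset, and all budgets are illustrative assumptions standing in for the paper's full 13-regularizer search space.

# Minimal, illustrative sketch of a per-dataset "regularization cocktail" search.
# Each regularizer gets an on/off decision plus conditional hyperparameters, and a
# tuner optimizes all of them jointly. Optuna, the example regularizers, the toy
# dataset, and the budgets are assumptions for illustration, not the paper's setup.
import optuna
import torch
import torch.nn as nn
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr = torch.tensor(X_tr, dtype=torch.float32)
X_va = torch.tensor(X_va, dtype=torch.float32)
y_tr = torch.tensor(y_tr)
y_va = torch.tensor(y_va)

def objective(trial):
    # On/off decision per regularizer (the paper jointly searches over 13 of them).
    use_dropout = trial.suggest_categorical("use_dropout", [True, False])
    use_weight_decay = trial.suggest_categorical("use_weight_decay", [True, False])
    use_mixup = trial.suggest_categorical("use_mixup", [True, False])

    # Subsidiary hyperparameters, only sampled when the regularizer is switched on.
    p_drop = trial.suggest_float("p_drop", 0.0, 0.8) if use_dropout else 0.0
    wd = trial.suggest_float("weight_decay", 1e-6, 1e-1, log=True) if use_weight_decay else 0.0
    alpha = trial.suggest_float("mixup_alpha", 0.1, 1.0) if use_mixup else None

    # Plain MLP backbone; the architecture itself stays simple and fixed.
    model = nn.Sequential(
        nn.Linear(X_tr.shape[1], 128), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(128, 128), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(128, 2),
    )
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=wd)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for _ in range(200):  # tiny fixed budget, full-batch updates for brevity
        opt.zero_grad()
        if use_mixup:  # Mix-Up: train on convex combinations of inputs and targets
            lam = torch.distributions.Beta(alpha, alpha).sample()
            perm = torch.randperm(X_tr.size(0))
            x = lam * X_tr + (1 - lam) * X_tr[perm]
            pred = model(x)
            loss = lam * loss_fn(pred, y_tr) + (1 - lam) * loss_fn(pred, y_tr[perm])
        else:
            loss = loss_fn(model(X_tr), y_tr)
        loss.backward()
        opt.step()

    model.eval()
    with torch.no_grad():
        acc = (model(X_va).argmax(dim=1) == y_va).float().mean().item()
    return acc  # validation accuracy; the tuner maximizes this per dataset

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("best cocktail:", study.best_params)

In the paper, this joint on/off-plus-hyperparameter structure is applied to a much larger space of 13 regularization techniques and tuned separately for each dataset; the sketch only shows the shape of that optimization problem.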

Keywords

    cs.LG

Cite this

Well-tuned Simple Nets Excel on Tabular Datasets. / Kadra, Arlind; Lindauer, Marius; Hutter, Frank et al.
Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021). 2021.

Kadra, A, Lindauer, M, Hutter, F & Grabocka, J 2021, Well-tuned Simple Nets Excel on Tabular Datasets. in Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021). 35th Conference on Neural Information Processing Systems, 6 Dec 2021. <https://arxiv.org/abs/2106.11189>
Kadra, A., Lindauer, M., Hutter, F., & Grabocka, J. (2021). Well-tuned Simple Nets Excel on Tabular Datasets. In Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021) Advance online publication. https://arxiv.org/abs/2106.11189
Kadra A, Lindauer M, Hutter F, Grabocka J. Well-tuned Simple Nets Excel on Tabular Datasets. In Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021). 2021 Epub 2021.
Kadra, Arlind ; Lindauer, Marius ; Hutter, Frank et al. / Well-tuned Simple Nets Excel on Tabular Datasets. Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021). 2021.
BibTeX
@inproceedings{33b0d2d471094782ad33952e85d3b8ce,
title = "Well-tuned Simple Nets Excel on Tabular Datasets",
abstract = " Tabular datasets are the last {"}unconquered castle{"} for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost. ",
keywords = "cs.LG",
author = "Arlind Kadra and Marius Lindauer and Frank Hutter and Josif Grabocka",
year = "2021",
language = "English",
booktitle = "Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021)",
note = "35th Conference on Neural Information Processing Systems : Track on Datasets and Benchmarks, NeurIPS 2021 ; Conference date: 06-12-2021 Through 14-12-2021",

}

RIS

TY - GEN

T1 - Well-tuned Simple Nets Excel on Tabular Datasets

AU - Kadra, Arlind

AU - Lindauer, Marius

AU - Hutter, Frank

AU - Grabocka, Josif

PY - 2021

Y1 - 2021

N2 - Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.

AB - Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.

KW - cs.LG

M3 - Conference contribution

BT - Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021)

T2 - 35th Conference on Neural Information Processing Systems

Y2 - 6 December 2021 through 14 December 2021

ER -
