Details
Original language | English
---|---
Title of host publication | Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021)
Number of pages | 23
Publication status | Electronically published (E-pub) - 2021
Event | 35th Conference on Neural Information Processing Systems: Track on Datasets and Benchmarks - Virtual-only. Duration: 6 Dec 2021 → 14 Dec 2021
Abstract

Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.
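As a rough illustration of the "regularization cocktail" idea described in the abstract, the following is a minimal, hypothetical Python/PyTorch sketch, not the authors' implementation: it jointly samples which regularizers to switch on for a plain MLP together with their subsidiary hyperparameters, and keeps the configuration with the best validation accuracy. The paper itself searches over 13 regularization techniques on 40 real tabular datasets with a proper HPO method; the three regularizers, the synthetic data, and the tiny random-search budget here are assumptions made purely to keep the sketch self-contained.

```python
# Minimal sketch of a "regularization cocktail" search for a plain MLP:
# each cocktail is a set of on/off decisions plus the hyperparameters of
# the regularizers that are switched on, evaluated by a short training run.
# Illustration only; not the paper's implementation.
import random

import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split


def sample_cocktail():
    """Sample one cocktail: on/off flags plus subsidiary hyperparameters."""
    cfg = {
        "use_dropout": random.random() < 0.5,
        "use_batchnorm": random.random() < 0.5,
        "use_weight_decay": random.random() < 0.5,
    }
    if cfg["use_dropout"]:
        cfg["dropout_rate"] = random.uniform(0.0, 0.8)
    if cfg["use_weight_decay"]:
        cfg["weight_decay"] = 10 ** random.uniform(-5, -1)
    return cfg


def build_mlp(in_dim, n_classes, cfg, width=128, depth=3):
    """Plain MLP whose regularizers are toggled by the sampled cocktail."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers.append(nn.Linear(d, width))
        if cfg["use_batchnorm"]:
            layers.append(nn.BatchNorm1d(width))
        layers.append(nn.ReLU())
        if cfg["use_dropout"]:
            layers.append(nn.Dropout(cfg["dropout_rate"]))
        d = width
    layers.append(nn.Linear(d, n_classes))
    return nn.Sequential(*layers)


def evaluate(cfg, X_tr, y_tr, X_va, y_va, epochs=20):
    """Short full-batch training run; returns validation accuracy."""
    model = build_mlp(X_tr.shape[1], int(y_tr.max()) + 1, cfg)
    wd = cfg.get("weight_decay", 0.0)  # 0.0 means the regularizer is off
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=wd)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        opt.zero_grad()
        loss_fn(model(X_tr), y_tr).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return (model(X_va).argmax(1) == y_va).float().mean().item()


if __name__ == "__main__":
    # Synthetic tabular data, standing in for the paper's 40 real datasets.
    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=10, n_classes=3, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    X_tr = torch.tensor(X_tr, dtype=torch.float32)
    X_va = torch.tensor(X_va, dtype=torch.float32)
    y_tr = torch.tensor(y_tr, dtype=torch.long)
    y_va = torch.tensor(y_va, dtype=torch.long)

    best_cfg, best_acc = None, -1.0
    for _ in range(10):  # tiny random-search budget, for illustration only
        cfg = sample_cocktail()
        acc = evaluate(cfg, X_tr, y_tr, X_va, y_va)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    print(f"best cocktail: {best_cfg} (val acc {best_acc:.3f})")
```

Passing the sampled weight decay explicitly to the optimizer (0.0 when its flag is off) is what makes both the on/off decision and its hyperparameter part of a single joint search space, which is the core of the cocktail idea.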
Cite
- Standard
- Harvard
- APA
- Vancouver
- BibTeX
- RIS
Kadra, A., Lindauer, M., Hutter, F., & Grabocka, J. (2021). Well-tuned Simple Nets Excel on Tabular Datasets. In Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021).
Publication: Contribution to book/report/anthology/conference proceedings › Article in conference proceedings › Research › Peer-reviewed
TY - GEN
T1 - Well-tuned Simple Nets Excel on Tabular Datasets
AU - Kadra, Arlind
AU - Lindauer, Marius
AU - Hutter, Frank
AU - Grabocka, Josif
PY - 2021
Y1 - 2021
N2 - Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.
AB - Tabular datasets are the last "unconquered castle" for deep learning, with traditional ML methods like Gradient-Boosted Decision Trees still performing strongly even against recent specialized neural architectures. In this paper, we hypothesize that the key to boosting the performance of neural networks lies in rethinking the joint and simultaneous application of a large set of modern regularization techniques. As a result, we propose regularizing plain Multilayer Perceptron (MLP) networks by searching for the optimal combination/cocktail of 13 regularization techniques for each dataset using a joint optimization over the decision on which regularizers to apply and their subsidiary hyperparameters. We empirically assess the impact of these regularization cocktails for MLPs in a large-scale empirical study comprising 40 tabular datasets and demonstrate that (i) well-regularized plain MLPs significantly outperform recent state-of-the-art specialized neural network architectures, and (ii) they even outperform strong traditional ML methods, such as XGBoost.
KW - cs.LG
M3 - Conference contribution
BT - Proceedings of the international conference on Advances in Neural Information Processing Systems (NeurIPS 2021)
T2 - 35th Conference on Neural Information Processing Systems
Y2 - 6 December 2021 through 14 December 2021
ER -