Learning Activation Functions for Sparse Neural Networks

Publication: Contribution to book/report/anthology/conference proceedings › Conference paper › Research › Peer-reviewed

Authors

External Organizations

  • Mälardalen University (MDH)
  • Tarbiat Modarres University

Details

Original language: English
Title of host publication: Second International Conference on Automated Machine Learning
Publication status: Accepted/In press - 16 May 2023

Abstract

Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses have shown that the accuracy drop can additionally be attributed to (i) Using ReLU as the default choice for activation functions unanimously, and (ii) Fine-tuning SNNs with the same hyperparameters as dense counterparts. Thus, we focus on learning novel activation functions for sparse networks and combining these with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on CIFAR-10 and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search, short: SAFS, results in up to 8.88% and 6.33% absolute improvement in the accuracy for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.
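The pipeline the abstract describes — prune a dense network, then search for a per-network activation function instead of defaulting to ReLU — can be illustrated with a minimal, self-contained sketch. The magnitude-pruning routine, the two-parameter activation family `alpha * relu(beta * x)`, and the grid search below are illustrative stand-ins chosen for brevity, not the paper's actual SAFS implementation or search space.

```python
def prune_by_magnitude(weights, ratio):
    """Zero out the smallest-magnitude fraction `ratio` of the weights
    (unstructured magnitude pruning, one common way to obtain an SNN)."""
    k = int(len(weights) * ratio)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def parametric_activation(x, alpha, beta):
    """Hypothetical two-parameter family alpha * relu(beta * x); the
    tunable (alpha, beta) stand in for a richer learned activation."""
    return alpha * max(0.0, beta * x)

def search_activation(xs, targets, candidates):
    """Pick the (alpha, beta) pair minimising squared error on a toy
    fitting task -- a stand-in for searching activations per network."""
    def loss(a, b):
        return sum((parametric_activation(x, a, b) - t) ** 2
                   for x, t in zip(xs, targets))
    return min(candidates, key=lambda ab: loss(*ab))
```

For example, `prune_by_magnitude([0.1, -2.0, 0.05, 3.0, -0.5], 0.4)` zeroes the two smallest-magnitude entries, after which `search_activation` would be run on the pruned network's task rather than on this toy regression.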

Cite

Learning Activation Functions for Sparse Neural Networks. / Loni, Mohammed; Mohan, Aditya; Asadi, Mehdi et al.
Second International Conference on Automated Machine Learning. 2023.


Loni, M, Mohan, A, Asadi, M & Lindauer, M 2023, Learning Activation Functions for Sparse Neural Networks. in Second International Conference on Automated Machine Learning. <https://arxiv.org/abs/2305.10964>
Loni, M., Mohan, A., Asadi, M., & Lindauer, M. (Accepted/In press). Learning Activation Functions for Sparse Neural Networks. In Second International Conference on Automated Machine Learning https://arxiv.org/abs/2305.10964
Loni M, Mohan A, Asadi M, Lindauer M. Learning Activation Functions for Sparse Neural Networks. in Second International Conference on Automated Machine Learning. 2023
Loni, Mohammed ; Mohan, Aditya ; Asadi, Mehdi et al. / Learning Activation Functions for Sparse Neural Networks. Second International Conference on Automated Machine Learning. 2023.
BibTeX
@inproceedings{c14d584e7dc944fe8fe30f1eb5d63307,
title = "Learning Activation Functions for Sparse Neural Networks",
abstract = "Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses have shown that the accuracy drop can additionally be attributed to (i) Using ReLU as the default choice for activation functions unanimously, and (ii) Fine-tuning SNNs with the same hyperparameters as dense counterparts. Thus, we focus on learning novel activation functions for sparse networks and combining these with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on CIFAR-10 and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search, short: SAFS, results in up to 8.88% and 6.33% absolute improvement in the accuracy for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.",
keywords = "Sparse Neural Networks, automated machine learning",
author = "Mohammed Loni and Aditya Mohan and Mehdi Asadi and Marius Lindauer",
year = "2023",
month = may,
day = "16",
language = "English",
booktitle = "Second International Conference on Automated Machine Learning",

}

RIS

TY - GEN
T1 - Learning Activation Functions for Sparse Neural Networks
AU - Loni, Mohammed
AU - Mohan, Aditya
AU - Asadi, Mehdi
AU - Lindauer, Marius
PY - 2023/5/16
Y1 - 2023/5/16
N2 - Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses have shown that the accuracy drop can additionally be attributed to (i) Using ReLU as the default choice for activation functions unanimously, and (ii) Fine-tuning SNNs with the same hyperparameters as dense counterparts. Thus, we focus on learning novel activation functions for sparse networks and combining these with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on CIFAR-10 and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search, short: SAFS, results in up to 8.88% and 6.33% absolute improvement in the accuracy for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.
AB - Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses have shown that the accuracy drop can additionally be attributed to (i) Using ReLU as the default choice for activation functions unanimously, and (ii) Fine-tuning SNNs with the same hyperparameters as dense counterparts. Thus, we focus on learning novel activation functions for sparse networks and combining these with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on CIFAR-10 and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search, short: SAFS, results in up to 8.88% and 6.33% absolute improvement in the accuracy for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.
KW - Sparse Neural Networks
KW - automated machine learning
M3 - Conference contribution
BT - Second International Conference on Automated Machine Learning
ER -
