Learning Activation Functions for Sparse Neural Networks

Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review

Authors

  • Mohammed Loni
  • Aditya Mohan
  • Mehdi Asadi
  • Marius Lindauer

External Research Organisations

  • Mälardalen University (MDH)
  • Tarbiat Modarres University

Details

Original language: English
Title of host publication: Second International Conference on Automated Machine Learning
Publication status: Accepted/In press - 16 May 2023

Abstract

Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses show that the accuracy drop can additionally be attributed to (i) the unanimous use of ReLU as the default activation function, and (ii) fine-tuning SNNs with the same hyperparameters as their dense counterparts. We therefore focus on learning novel activation functions for sparse networks and combining them with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on the CIFAR-10 and ImageNet-16 datasets, we show that the combination of these two approaches, dubbed Sparse Activation Function Search (SAFS), yields absolute accuracy improvements of up to 8.88% for VGG-16 and 6.33% for ResNet-18 over the default training protocols, especially at high pruning ratios.
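
As a rough illustration of the idea described in the abstract (prune a network, try candidate activation functions, and tune sparse-specific hyperparameters), here is a minimal sketch in PyTorch. It is not the authors' SAFS implementation: the tiny CNN, the candidate activation set, and the grid search over learning rates are placeholder assumptions, whereas the paper trains larger models and uses a more elaborate search and optimization procedure.

```python
# Minimal sketch, assuming PyTorch; model, data, and search space are illustrative only.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def make_model(activation: nn.Module) -> nn.Sequential:
    # Tiny CNN standing in for the VGG-16 / ResNet-18 backbones used in the paper.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), activation,
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 10),
    )


def prune_model(model: nn.Module, amount: float = 0.9) -> nn.Module:
    # Unstructured L1 (magnitude) pruning at a high sparsity ratio.
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(m, name="weight", amount=amount)
    return model


def toy_batches(n_batches=8, batch_size=32):
    # Random CIFAR-10-shaped batches; replace with a real data loader.
    return [(torch.randn(batch_size, 3, 32, 32), torch.randint(0, 10, (batch_size,)))
            for _ in range(n_batches)]


@torch.no_grad()
def accuracy(model, batches):
    model.eval()
    correct = total = 0
    for x, y in batches:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)


def finetune(model, batches, lr, epochs=2):
    # Short fine-tuning of the pruned model with a sparse-specific learning rate.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in batches:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model


# Joint search over activation functions and one hyperparameter (learning rate):
# prune, swap in the candidate activation, fine-tune, and keep the best combination.
candidates = {"relu": nn.ReLU(), "swish": nn.SiLU(), "elu": nn.ELU()}
learning_rates = [0.1, 0.01, 0.001]

train_batches, val_batches = toy_batches(), toy_batches()
best_cfg, best_acc = None, -1.0
for name, act in candidates.items():
    for lr in learning_rates:
        model = prune_model(make_model(act), amount=0.9)
        finetune(model, train_batches, lr)
        acc = accuracy(model, val_batches)
        if acc > best_acc:
            best_cfg, best_acc = (name, lr), acc
print(f"best activation/lr: {best_cfg}, validation accuracy: {best_acc:.3f}")
```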

Keywords

    Sparse Neural Networks, automated machine learning

Cite this

Learning Activation Functions for Sparse Neural Networks. / Loni, Mohammed; Mohan, Aditya; Asadi, Mehdi et al.
Second International Conference on Automated Machine Learning. 2023.


Loni, M, Mohan, A, Asadi, M & Lindauer, M 2023, Learning Activation Functions for Sparse Neural Networks. in Second International Conference on Automated Machine Learning. <https://arxiv.org/abs/2305.10964>
Loni, M., Mohan, A., Asadi, M., & Lindauer, M. (Accepted/in press). Learning Activation Functions for Sparse Neural Networks. In Second International Conference on Automated Machine Learning https://arxiv.org/abs/2305.10964
Loni M, Mohan A, Asadi M, Lindauer M. Learning Activation Functions for Sparse Neural Networks. In Second International Conference on Automated Machine Learning. 2023
Loni, Mohammed ; Mohan, Aditya ; Asadi, Mehdi et al. / Learning Activation Functions for Sparse Neural Networks. Second International Conference on Automated Machine Learning. 2023.
BibTeX
@inproceedings{c14d584e7dc944fe8fe30f1eb5d63307,
title = "Learning Activation Functions for Sparse Neural Networks",
abstract = "Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses have shown that the accuracy drop can additionally be attributed to (i) Using ReLU as the default choice for activation functions unanimously, and (ii) Fine-tuning SNNs with the same hyperparameters as dense counterparts. Thus, we focus on learning novel activation functions for sparse networks and combining these with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on CIFAR-10 and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search, short: SAFS, results in up to 8.88% and 6.33% absolute improvement in the accuracy for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.",
keywords = "Sparse Neural Networks, automated machine learning",
author = "Mohammed Loni and Aditya Mohan and Mehdi Asadi and Marius Lindauer",
year = "2023",
month = may,
day = "16",
language = "English",
booktitle = "Second International Conference on Automated Machine Learning",

}

RIS

TY - GEN

T1 - Learning Activation Functions for Sparse Neural Networks

AU - Loni, Mohammed

AU - Mohan, Aditya

AU - Asadi, Mehdi

AU - Lindauer, Marius

PY - 2023/5/16

Y1 - 2023/5/16

N2 - Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses have shown that the accuracy drop can additionally be attributed to (i) Using ReLU as the default choice for activation functions unanimously, and (ii) Fine-tuning SNNs with the same hyperparameters as dense counterparts. Thus, we focus on learning novel activation functions for sparse networks and combining these with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on CIFAR-10 and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search, short: SAFS, results in up to 8.88% and 6.33% absolute improvement in the accuracy for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.

AB - Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses have shown that the accuracy drop can additionally be attributed to (i) Using ReLU as the default choice for activation functions unanimously, and (ii) Fine-tuning SNNs with the same hyperparameters as dense counterparts. Thus, we focus on learning novel activation functions for sparse networks and combining these with a separate hyperparameter optimization (HPO) regime for sparse networks. By conducting experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on CIFAR-10 and ImageNet-16 datasets, we show that the novel combination of these two approaches, dubbed Sparse Activation Function Search, short: SAFS, results in up to 8.88% and 6.33% absolute improvement in the accuracy for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.

KW - Sparse Neural Networks

KW - automated machine learning

M3 - Conference contribution

BT - Second International Conference on Automated Machine Learning

ER -
