Details
- Original language: English
- Title of host publication: Second International Conference on Automated Machine Learning
- Publication status: Accepted/In press, 16 May 2023
Abstract
Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses show that the accuracy drop can additionally be attributed to (i) using ReLU as the universal default activation function, and (ii) fine-tuning SNNs with the same hyperparameters as their dense counterparts. We therefore focus on learning novel activation functions for sparse networks and combine these with a separate hyperparameter optimization (HPO) regime for sparse networks. Through experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on the CIFAR-10 and ImageNet-16 datasets, we show that the combination of these two approaches, dubbed Sparse Activation Function Search (SAFS), yields up to 8.88% and 6.33% absolute accuracy improvements for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.
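For readers who want to experiment with the idea, the sketch below illustrates the two ingredients the abstract describes: pruning a network to high sparsity, then fine-tuning it with a learnable activation function and hyperparameters chosen separately from the dense recipe. This is a minimal PyTorch illustration, not the paper's SAFS implementation; the 90% sparsity level, the choice of PReLU as the learnable activation, and the learning rates are placeholder assumptions.

```python
# Minimal sketch (not the paper's SAFS code): prune VGG-16 to high
# sparsity, swap ReLU for a learnable activation, and fine-tune with
# hyperparameters set separately from the dense training recipe.
# The sparsity level, activation choice (PReLU), and learning rates
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import vgg16

model = vgg16(num_classes=10)  # CIFAR-10-sized head (assumption)

# (1) Magnitude-prune conv/linear weights to 90% sparsity.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.9)

# (2) Replace every ReLU with a learnable activation instead of
# keeping ReLU as the universal default.
def swap_relu(parent: nn.Module) -> None:
    for name, child in parent.named_children():
        if isinstance(child, nn.ReLU):
            setattr(parent, name, nn.PReLU())
        else:
            swap_relu(child)

swap_relu(model)

# (3) Fine-tune with hyperparameters tuned for the *sparse* network,
# e.g. a separate learning rate for the activation parameters.
act_params = [p for m in model.modules() if isinstance(m, nn.PReLU)
              for p in m.parameters()]
act_ids = {id(p) for p in act_params}
net_params = [p for p in model.parameters() if id(p) not in act_ids]
optimizer = torch.optim.SGD([
    {"params": net_params, "lr": 1e-2},  # placeholder HPO result
    {"params": act_params, "lr": 1e-3},  # placeholder HPO result
], momentum=0.9)
```

The training loop and the actual HPO search over these learning rates are omitted; the point is only that the activation function and the fine-tuning hyperparameters are treated as searchable choices rather than inherited from the dense model.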
Keywords
- Sparse Neural Networks
- automated machine learning
Cite this
Loni M, Mohan A, Asadi M, Lindauer M. Learning Activation Functions for Sparse Neural Networks. In Second International Conference on Automated Machine Learning. 2023.
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Learning Activation Functions for Sparse Neural Networks
AU - Loni, Mohammad
AU - Mohan, Aditya
AU - Asadi, Mehdi
AU - Lindauer, Marius
PY - 2023/5/16
Y1 - 2023/5/16
N2 - Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses show that the accuracy drop can additionally be attributed to (i) using ReLU as the universal default activation function, and (ii) fine-tuning SNNs with the same hyperparameters as their dense counterparts. We therefore focus on learning novel activation functions for sparse networks and combine these with a separate hyperparameter optimization (HPO) regime for sparse networks. Through experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on the CIFAR-10 and ImageNet-16 datasets, we show that the combination of these two approaches, dubbed Sparse Activation Function Search (SAFS), yields up to 8.88% and 6.33% absolute accuracy improvements for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.
AB - Sparse Neural Networks (SNNs) can potentially demonstrate similar performance to their dense counterparts while saving significant energy and memory at inference. However, the accuracy drop incurred by SNNs, especially at high pruning ratios, can be an issue in critical deployment conditions. While recent works mitigate this issue through sophisticated pruning techniques, we shift our focus to an overlooked factor: hyperparameters and activation functions. Our analyses show that the accuracy drop can additionally be attributed to (i) using ReLU as the universal default activation function, and (ii) fine-tuning SNNs with the same hyperparameters as their dense counterparts. We therefore focus on learning novel activation functions for sparse networks and combine these with a separate hyperparameter optimization (HPO) regime for sparse networks. Through experiments on popular DNN models (VGG-16, ResNet-18, and EfficientNet-B0) trained on the CIFAR-10 and ImageNet-16 datasets, we show that the combination of these two approaches, dubbed Sparse Activation Function Search (SAFS), yields up to 8.88% and 6.33% absolute accuracy improvements for VGG-16 and ResNet-18 over the default training protocols, especially at high pruning ratios.
KW - Sparse Neural Networks
KW - automated machine learning
M3 - Conference contribution
BT - Second International Conference on Automated Machine Learning
ER -