Details
- Original language: English
- Article number: 1850008
- Journal: International Journal of Computational Intelligence and Applications
- Volume: 17
- Issue number: 2
- Publication status: Published - 1 Jun 2018
- Externally published: Yes
Abstract
Most learning algorithms require the practitioner to manually set the values of many hyperparameters before the learning process can begin. However, with modern algorithms, the evaluation of a given hyperparameter setting can take a considerable amount of time and the search space is often very high-dimensional. We suggest using a lower-dimensional representation of the original data to quickly identify promising areas in the hyperparameter space. This information can then be used to initialize the optimization algorithm for the original, higher-dimensional data. We compare this approach with the standard procedure of optimizing the hyperparameters only on the original input. We perform experiments with various state-of-the-art hyperparameter optimization algorithms such as random search, the Tree of Parzen Estimators (TPE), sequential model-based algorithm configuration (SMAC), and a genetic algorithm (GA). Our experiments indicate that it is possible to speed up the optimization process by using lower-dimensional data representations at the beginning, while increasing the dimensionality of the input later in the optimization process. This is independent of the underlying optimization procedure, making the approach promising for many existing hyperparameter optimization algorithms.
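The two-stage idea described above can be sketched in a few lines. Note that the objective function, hyperparameter names, and "resolution" values below are hypothetical stand-ins for an expensive training run, not the paper's actual experimental setup:

```python
import random

# Hypothetical stand-in for an expensive training run: returns a validation
# score for a hyperparameter setting. Cost and fidelity grow with the
# "resolution" (dimensionality) of the data representation.
def evaluate(lr, width, resolution):
    # Toy response surface with an optimum near lr=0.01, width=64;
    # lower resolutions give a noisier but much cheaper estimate.
    score = -((lr - 0.01) ** 2 * 1e4 + ((width - 64) / 64) ** 2)
    noise = random.gauss(0, 1.0 / resolution)
    return score + noise

def random_config():
    return {"lr": 10 ** random.uniform(-4, -1),
            "width": random.choice([16, 32, 64, 128, 256])}

random.seed(0)

# Stage 1: cheap random search on a low-dimensional data representation
# to identify promising areas of the hyperparameter space.
coarse = [(evaluate(c["lr"], c["width"], resolution=8), c)
          for c in (random_config() for _ in range(30))]
coarse.sort(key=lambda t: t[0], reverse=True)
seeds = [c for _, c in coarse[:5]]  # promising configurations found cheaply

# Stage 2: use those configurations to initialize the (here: exhaustive)
# evaluation on the full-resolution, higher-dimensional data.
best_score, best = max(((evaluate(c["lr"], c["width"], resolution=64), c)
                        for c in seeds), key=lambda t: t[0])
print(best)
```

The same seeding scheme would apply to any of the optimizers named in the abstract (random search, TPE, SMAC, GA), since stage 1 only supplies initial candidates.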
Keywords
- Bayesian optimization, convolutional neural networks, genetic algorithm, hyperparameter importance, hyperparameter optimization
ASJC Scopus subject areas
- Computer Science (all)
- Software
- Mathematics (all)
- Theoretical Computer Science
- Computer Science Applications
Cite this
In: International Journal of Computational Intelligence and Applications, Vol. 17, No. 2, 1850008, 01.06.2018.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks
AU - Hinz, Tobias
AU - Navarro-Guerrero, Nicolás
AU - Magg, Sven
AU - Wermter, Stefan
N1 - Publisher Copyright: © 2018 World Scientific Publishing Europe Ltd.
PY - 2018/6/1
Y1 - 2018/6/1
AB - Most learning algorithms require the practitioner to manually set the values of many hyperparameters before the learning process can begin. However, with modern algorithms, the evaluation of a given hyperparameter setting can take a considerable amount of time and the search space is often very high-dimensional. We suggest using a lower-dimensional representation of the original data to quickly identify promising areas in the hyperparameter space. This information can then be used to initialize the optimization algorithm for the original, higher-dimensional data. We compare this approach with the standard procedure of optimizing the hyperparameters only on the original input. We perform experiments with various state-of-the-art hyperparameter optimization algorithms such as random search, the Tree of Parzen Estimators (TPE), sequential model-based algorithm configuration (SMAC), and a genetic algorithm (GA). Our experiments indicate that it is possible to speed up the optimization process by using lower-dimensional data representations at the beginning, while increasing the dimensionality of the input later in the optimization process. This is independent of the underlying optimization procedure, making the approach promising for many existing hyperparameter optimization algorithms.
KW - Bayesian optimization
KW - convolutional neural networks
KW - genetic algorithm
KW - hyperparameter importance
KW - hyperparameter optimization
UR - http://www.scopus.com/inward/record.url?scp=85048655419&partnerID=8YFLogxK
U2 - 10.1142/S1469026818500086
DO - 10.1142/S1469026818500086
M3 - Article
AN - SCOPUS:85048655419
VL - 17
JO - International Journal of Computational Intelligence and Applications
JF - International Journal of Computational Intelligence and Applications
SN - 1469-0268
IS - 2
M1 - 1850008
ER -