Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks

Publication: Contribution to journal › Article › Research › Peer review

Authorship

External organisations

  • Universität Hamburg

Details

Original language: English
Article number: 1850008
Journal: International Journal of Computational Intelligence and Applications
Volume: 17
Issue number: 2
Publication status: Published - 1 June 2018
Published externally: Yes

Abstract

Most learning algorithms require the practitioner to manually set the values of many hyperparameters before the learning process can begin. However, with modern algorithms, the evaluation of a given hyperparameter setting can take a considerable amount of time and the search space is often very high-dimensional. We suggest using a lower-dimensional representation of the original data to quickly identify promising areas in the hyperparameter space. This information can then be used to initialize the optimization algorithm for the original, higher-dimensional data. We compare this approach with the standard procedure of optimizing the hyperparameters only on the original input. We perform experiments with various state-of-the-art hyperparameter optimization algorithms such as random search, the Tree of Parzen Estimators (TPE), sequential model-based algorithm configuration (SMAC), and a genetic algorithm (GA). Our experiments indicate that it is possible to speed up the optimization process by using lower-dimensional data representations at the beginning, while increasing the dimensionality of the input later in the optimization process. This is independent of the underlying optimization procedure, making the approach promising for many existing hyperparameter optimization algorithms.
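The staged strategy the abstract describes can be sketched in a few lines: run many cheap evaluations on a low-dimensional representation of the data, then use the most promising configurations to initialize the search on the full-resolution input. The sketch below is a minimal illustration only; the toy `evaluate` objective, the `staged_random_search` helper, and all parameter ranges are hypothetical stand-ins for real CNN training runs, not taken from the paper.

```python
import random

def evaluate(lr, dropout, resolution):
    # Toy stand-in for training a CNN and returning its validation loss.
    # Low-resolution evaluations are cheaper but noisier proxies of the
    # full-resolution objective.
    noise = (1.0 / resolution) * random.random()
    return (lr - 0.01) ** 2 + (dropout - 0.5) ** 2 + noise

def sample_config():
    # Draw one hyperparameter configuration uniformly at random.
    return {"lr": random.uniform(1e-4, 1e-1),
            "dropout": random.uniform(0.0, 0.9)}

def staged_random_search(cheap_budget=30, top_k=5, seed=0):
    random.seed(seed)
    # Stage 1: many cheap evaluations on a low-dimensional representation.
    cheap = [(evaluate(c["lr"], c["dropout"], resolution=8), c)
             for c in (sample_config() for _ in range(cheap_budget))]
    cheap.sort(key=lambda t: t[0])
    # Stage 2: re-evaluate only the most promising configurations on the
    # original, higher-dimensional input.
    full = [(evaluate(c["lr"], c["dropout"], resolution=224), c)
            for _, c in cheap[:top_k]]
    return min(full, key=lambda t: t[0])

best_loss, best_cfg = staged_random_search()
print(best_cfg)
```

The same two-stage initialization can wrap any of the optimizers compared in the paper (random search, TPE, SMAC, GA); random search is used here only because it keeps the sketch self-contained.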

ASJC Scopus subject areas

Cite

Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks. / Hinz, Tobias; Navarro-Guerrero, Nicolás; Magg, Sven et al.
In: International Journal of Computational Intelligence and Applications, Vol. 17, No. 2, 1850008, 01.06.2018.


Hinz, T, Navarro-Guerrero, N, Magg, S & Wermter, S 2018, 'Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks', International Journal of Computational Intelligence and Applications, vol. 17, no. 2, 1850008. https://doi.org/10.1142/S1469026818500086
Hinz, T., Navarro-Guerrero, N., Magg, S., & Wermter, S. (2018). Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks. International Journal of Computational Intelligence and Applications, 17(2), Article 1850008. https://doi.org/10.1142/S1469026818500086
Hinz T, Navarro-Guerrero N, Magg S, Wermter S. Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks. International Journal of Computational Intelligence and Applications. 2018 Jun 1;17(2):1850008. doi: 10.1142/S1469026818500086
Hinz, Tobias ; Navarro-Guerrero, Nicolás ; Magg, Sven et al. / Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks. In: International Journal of Computational Intelligence and Applications. 2018 ; Vol. 17, No. 2.
@article{ee3af4572cb54327ab98838e6993b9cf,
title = "Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks",
abstract = "Most learning algorithms require the practitioner to manually set the values of many hyperparameters before the learning process can begin. However, with modern algorithms, the evaluation of a given hyperparameter setting can take a considerable amount of time and the search space is often very high-dimensional. We suggest using a lower-dimensional representation of the original data to quickly identify promising areas in the hyperparameter space. This information can then be used to initialize the optimization algorithm for the original, higher-dimensional data. We compare this approach with the standard procedure of optimizing the hyperparameters only on the original input. We perform experiments with various state-of-the-art hyperparameter optimization algorithms such as random search, the tree of parzen estimators (TPEs), sequential model-based algorithm configuration (SMAC), and a genetic algorithm (GA). Our experiments indicate that it is possible to speed up the optimization process by using lower-dimensional data representations at the beginning, while increasing the dimensionality of the input later in the optimization process. This is independent of the underlying optimization procedure, making the approach promising for many existing hyperparameter optimization algorithms.",
keywords = "Bayesian optimization, convolutional neural networks, genetic algorithm, hyperparameter importance, Hyperparameter optimization",
author = "Tobias Hinz and Nicol{\'a}s Navarro-Guerrero and Sven Magg and Stefan Wermter",
note = "Publisher Copyright: {\textcopyright} 2018 World Scientific Publishing Europe Ltd.",
year = "2018",
month = jun,
day = "1",
doi = "10.1142/S1469026818500086",
language = "English",
volume = "17",
number = "2",
journal = "International Journal of Computational Intelligence and Applications",

}


TY - JOUR

T1 - Speeding up the Hyperparameter Optimization of Deep Convolutional Neural Networks

AU - Hinz, Tobias

AU - Navarro-Guerrero, Nicolás

AU - Magg, Sven

AU - Wermter, Stefan

N1 - Publisher Copyright: © 2018 World Scientific Publishing Europe Ltd.

PY - 2018/6/1

Y1 - 2018/6/1

N2 - Most learning algorithms require the practitioner to manually set the values of many hyperparameters before the learning process can begin. However, with modern algorithms, the evaluation of a given hyperparameter setting can take a considerable amount of time and the search space is often very high-dimensional. We suggest using a lower-dimensional representation of the original data to quickly identify promising areas in the hyperparameter space. This information can then be used to initialize the optimization algorithm for the original, higher-dimensional data. We compare this approach with the standard procedure of optimizing the hyperparameters only on the original input. We perform experiments with various state-of-the-art hyperparameter optimization algorithms such as random search, the tree of parzen estimators (TPEs), sequential model-based algorithm configuration (SMAC), and a genetic algorithm (GA). Our experiments indicate that it is possible to speed up the optimization process by using lower-dimensional data representations at the beginning, while increasing the dimensionality of the input later in the optimization process. This is independent of the underlying optimization procedure, making the approach promising for many existing hyperparameter optimization algorithms.

AB - Most learning algorithms require the practitioner to manually set the values of many hyperparameters before the learning process can begin. However, with modern algorithms, the evaluation of a given hyperparameter setting can take a considerable amount of time and the search space is often very high-dimensional. We suggest using a lower-dimensional representation of the original data to quickly identify promising areas in the hyperparameter space. This information can then be used to initialize the optimization algorithm for the original, higher-dimensional data. We compare this approach with the standard procedure of optimizing the hyperparameters only on the original input. We perform experiments with various state-of-the-art hyperparameter optimization algorithms such as random search, the tree of parzen estimators (TPEs), sequential model-based algorithm configuration (SMAC), and a genetic algorithm (GA). Our experiments indicate that it is possible to speed up the optimization process by using lower-dimensional data representations at the beginning, while increasing the dimensionality of the input later in the optimization process. This is independent of the underlying optimization procedure, making the approach promising for many existing hyperparameter optimization algorithms.

KW - Bayesian optimization

KW - convolutional neural networks

KW - genetic algorithm

KW - hyperparameter importance

KW - Hyperparameter optimization

UR - http://www.scopus.com/inward/record.url?scp=85048655419&partnerID=8YFLogxK

U2 - 10.1142/S1469026818500086

DO - 10.1142/S1469026818500086

M3 - Article

AN - SCOPUS:85048655419

VL - 17

JO - International Journal of Computational Intelligence and Applications

JF - International Journal of Computational Intelligence and Applications

SN - 1469-0268

IS - 2

M1 - 1850008

ER -
