Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Effat Jalaeian Zaferani
  • Mohammad Teshnehlab
  • Amirreza Khodadadian
  • Clemens Heitzinger
  • Mansour Vali
  • Nima Noii
  • Thomas Wick

External organisations

  • K.N. Toosi University of Technology
  • Technische Universität Wien (TUW)

Details

Original language: English
Article number: 6206
Journal: Sensors
Volume: 22
Issue number: 16
Publication status: Published - 18 Aug 2022

Abstract

In this work, a method for automatic hyper-parameter tuning of the stacked asymmetric auto-encoder is proposed. In previous work, the ability of deep learning to extract personality perception from speech was shown, but the hyper-parameters were tuned by trial and error, which is time-consuming and requires machine-learning knowledge. Obtaining suitable hyper-parameter values is therefore challenging and limits the use of deep learning. To address this challenge, researchers have applied optimization methods. Although these methods have had successes, the large number of deep-learning hyper-parameters makes the search space very large, which increases the probability of getting stuck in local optima. Researchers have therefore also focused on improving global optimization methods. In this regard, we propose a novel global optimization method based on the cultural algorithm, a multi-island model, and parallelism to search this large space efficiently. We first evaluated the method on three well-known optimization benchmarks and compared the results with recently published papers. The results indicate that the proposed method converges faster, owing to its ability to escape local optima, and that the precision of the results improves dramatically. We then applied the method to optimize five hyper-parameters of an asymmetric auto-encoder for automatic personality perception. Since inappropriate hyper-parameters lead the network to over-fit or under-fit, we used a novel cost function that penalizes both. The unweighted average recall (accuracy) improved by 6.52% (9.54%) compared to our previous work, and the results compare favourably with other published personality-perception works.
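
The abstract describes two technical ingredients: a multi-island cultural algorithm that searches the hyper-parameter space in parallel with migration between islands, and a cost function that discourages both over-fitting and under-fitting. The Python sketch below illustrates the general idea only; it is not the authors' implementation. The hyper-parameter names and ranges, the surrogate evaluate_autoencoder function, the belief-space update, and the gap-penalty weight are all illustrative assumptions.

"""Minimal sketch of a multi-island cultural algorithm for hyper-parameter
tuning. NOT the authors' implementation: ranges, the surrogate evaluation
function and the penalty weight are illustrative assumptions only."""
import random

# Assumed search ranges for five auto-encoder hyper-parameters
# (continuous relaxation; integer-valued ones would be rounded in practice).
BOUNDS = {
    "hidden_units_1": (32, 512),
    "hidden_units_2": (16, 256),
    "learning_rate": (1e-4, 1e-1),
    "dropout_rate": (0.0, 0.6),
    "batch_size": (8, 128),
}

def evaluate_autoencoder(params):
    """Placeholder: returns synthetic (train_error, validation_error).
    A real run would train the stacked asymmetric auto-encoder here."""
    train_err = abs(params["learning_rate"] - 0.01) + 0.1 * params["dropout_rate"]
    val_err = train_err + abs(params["hidden_units_1"] - 128) / 1000.0
    return train_err, val_err

def cost(params, gap_weight=2.0):
    """Validation error plus a penalty on the train/validation gap, so the
    search avoids both over-fitting (large gap) and under-fitting (high error)."""
    train_err, val_err = evaluate_autoencoder(params)
    return val_err + gap_weight * abs(val_err - train_err)

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def mutate(params, belief, step=0.1):
    """Pull each gene toward the island's belief (best-so-far knowledge)
    and add Gaussian noise, clipped to the allowed range."""
    child = {}
    for k, (lo, hi) in BOUNDS.items():
        noise = random.gauss(0.0, step * (hi - lo))
        child[k] = min(hi, max(lo, params[k] + 0.5 * (belief[k] - params[k]) + noise))
    return child

def evolve(population, belief, generations=10):
    """Evolve one island; the belief space here is simply the best individual."""
    for _ in range(generations):
        offspring = [mutate(p, belief) for p in population]
        population = sorted(population + offspring, key=cost)[:len(population)]
        belief = population[0]
    return population, belief

def multi_island_search(n_islands=4, pop_size=10, epochs=5):
    islands = [[random_individual() for _ in range(pop_size)] for _ in range(n_islands)]
    beliefs = [min(isl, key=cost) for isl in islands]
    for _ in range(epochs):
        # Islands evolve independently (these calls could run in parallel processes).
        for i in range(n_islands):
            islands[i], beliefs[i] = evolve(islands[i], beliefs[i])
        # Migration: each island's worst member is replaced by its neighbour's
        # best individual, which helps the search escape local optima.
        bests = [isl[0] for isl in islands]
        for i in range(n_islands):
            islands[i][-1] = dict(bests[(i - 1) % n_islands])
    return min((isl[0] for isl in islands), key=cost)

if __name__ == "__main__":
    best = multi_island_search()
    print("best hyper-parameters:", best)
    print("cost:", cost(best))

In a real experiment, evaluate_autoencoder would train the stacked asymmetric auto-encoder and return the measured training and validation errors, and the island loops could run in separate processes to exploit the parallelism mentioned in the abstract.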


Cite

Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception. / Jalaeian Zaferani, Effat; Teshnehlab, Mohammad; Khodadadian, Amirreza et al.
In: Sensors, Vol. 22, No. 16, 6206, 18.08.2022.


Jalaeian Zaferani, E, Teshnehlab, M, Khodadadian, A, Heitzinger, C, Vali, M, Noii, N & Wick, T 2022, 'Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception', Sensors, Vol. 22, No. 16, 6206. https://doi.org/10.3390/s22166206
Jalaeian Zaferani, E., Teshnehlab, M., Khodadadian, A., Heitzinger, C., Vali, M., Noii, N., & Wick, T. (2022). Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception. Sensors, 22(16), Article 6206. https://doi.org/10.3390/s22166206
Jalaeian Zaferani E, Teshnehlab M, Khodadadian A, Heitzinger C, Vali M, Noii N et al. Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception. Sensors. 2022 Aug 18;22(16):6206. doi: 10.3390/s22166206
Jalaeian Zaferani, Effat ; Teshnehlab, Mohammad ; Khodadadian, Amirreza et al. / Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception. In: Sensors. 2022; Vol. 22, No. 16.
@article{19afe1a67fcf42909bfc0af40df6f06e,
title = "Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception",
abstract = "In this work, a method for automatic hyper-parameter tuning of the stacked asymmetric auto-encoder is proposed. In previous work, the deep learning ability to extract personality perception from speech was shown, but hyper-parameter tuning was attained by trial-and-error, which is time-consuming and requires machine learning knowledge. Therefore, obtaining hyper-parameter values is challenging and places limits on deep learning usage. To address this challenge, researchers have applied optimization methods. Although there were successes, the search space is very large due to the large number of deep learning hyper-parameters, which increases the probability of getting stuck in local optima. Researchers have also focused on improving global optimization methods. In this regard, we suggest a novel global optimization method based on the cultural algorithm, multi-island and the concept of parallelism to search this large space smartly. At first, we evaluated our method on three well-known optimization benchmarks and compared the results with recently published papers. Results indicate that the convergence of the proposed method speeds up due to the ability to escape from local optima, and the precision of the results improves dramatically. Afterward, we applied our method to optimize five hyper-parameters of an asymmetric auto-encoder for automatic personality perception. Since inappropriate hyper-parameters lead the network to over-fitting and under-fitting, we used a novel cost function to prevent over-fitting and under-fitting. As observed, the unweighted average recall (accuracy) was improved by 6.52% (9.54%) compared to our previous work and had remarkable outcomes compared to other published personality perception works.",
keywords = "big five personality traits, cultural algorithm, deep learning, hyper-parameter optimization, personality perception",
author = "{Jalaeian Zaferani}, Effat and Mohammad Teshnehlab and Amirreza Khodadadian and Clemens Heitzinger and Mansour Vali and Nima Noii and Thomas Wick",
year = "2022",
month = aug,
day = "18",
doi = "10.3390/s22166206",
language = "English",
volume = "22",
journal = "Sensors",
issn = "1424-8220",
publisher = "Multidisciplinary Digital Publishing Institute",
number = "16",

}


TY - JOUR

T1 - Hyper-Parameter Optimization of Stacked Asymmetric Auto-Encoders for Automatic Personality Traits Perception

AU - Jalaeian Zaferani, Effat

AU - Teshnehlab, Mohammad

AU - Khodadadian, Amirreza

AU - Heitzinger, Clemens

AU - Vali, Mansour

AU - Noii, Nima

AU - Wick, Thomas

PY - 2022/8/18

Y1 - 2022/8/18

N2 - In this work, a method for automatic hyper-parameter tuning of the stacked asymmetric auto-encoder is proposed. In previous work, the deep learning ability to extract personality perception from speech was shown, but hyper-parameter tuning was attained by trial-and-error, which is time-consuming and requires machine learning knowledge. Therefore, obtaining hyper-parameter values is challenging and places limits on deep learning usage. To address this challenge, researchers have applied optimization methods. Although there were successes, the search space is very large due to the large number of deep learning hyper-parameters, which increases the probability of getting stuck in local optima. Researchers have also focused on improving global optimization methods. In this regard, we suggest a novel global optimization method based on the cultural algorithm, multi-island and the concept of parallelism to search this large space smartly. At first, we evaluated our method on three well-known optimization benchmarks and compared the results with recently published papers. Results indicate that the convergence of the proposed method speeds up due to the ability to escape from local optima, and the precision of the results improves dramatically. Afterward, we applied our method to optimize five hyper-parameters of an asymmetric auto-encoder for automatic personality perception. Since inappropriate hyper-parameters lead the network to over-fitting and under-fitting, we used a novel cost function to prevent over-fitting and under-fitting. As observed, the unweighted average recall (accuracy) was improved by 6.52% (9.54%) compared to our previous work and had remarkable outcomes compared to other published personality perception works.

AB - In this work, a method for automatic hyper-parameter tuning of the stacked asymmetric auto-encoder is proposed. In previous work, the deep learning ability to extract personality perception from speech was shown, but hyper-parameter tuning was attained by trial-and-error, which is time-consuming and requires machine learning knowledge. Therefore, obtaining hyper-parameter values is challenging and places limits on deep learning usage. To address this challenge, researchers have applied optimization methods. Although there were successes, the search space is very large due to the large number of deep learning hyper-parameters, which increases the probability of getting stuck in local optima. Researchers have also focused on improving global optimization methods. In this regard, we suggest a novel global optimization method based on the cultural algorithm, multi-island and the concept of parallelism to search this large space smartly. At first, we evaluated our method on three well-known optimization benchmarks and compared the results with recently published papers. Results indicate that the convergence of the proposed method speeds up due to the ability to escape from local optima, and the precision of the results improves dramatically. Afterward, we applied our method to optimize five hyper-parameters of an asymmetric auto-encoder for automatic personality perception. Since inappropriate hyper-parameters lead the network to over-fitting and under-fitting, we used a novel cost function to prevent over-fitting and under-fitting. As observed, the unweighted average recall (accuracy) was improved by 6.52% (9.54%) compared to our previous work and had remarkable outcomes compared to other published personality perception works.

KW - big five personality traits

KW - cultural algorithm

KW - deep learning

KW - hyper-parameter optimization

KW - personality perception

UR - http://www.scopus.com/inward/record.url?scp=85136698309&partnerID=8YFLogxK

U2 - 10.3390/s22166206

DO - 10.3390/s22166206

M3 - Article

C2 - 36015967

AN - SCOPUS:85136698309

VL - 22

JO - Sensors

JF - Sensors

SN - 1424-8220

IS - 16

M1 - 6206

ER -
