Details
Original language | English |
---|---|
Pages (from - to) | 1923-1933 |
Number of pages | 11 |
Journal | Neural Computing and Applications |
Volume | 33 |
Issue number | 6 |
Early online date | 20 June 2020 |
Publication status | Published - March 2021 |
Abstract
Machine learning (ML) methods have shown powerful performance in different applications. Nonetheless, designing ML models remains a challenge and requires further research, as most procedures adopt a trial-and-error strategy. In this study, we present a methodology to optimize the architecture and the feature configurations of ML models considering a supervised learning process. The proposed approach employs genetic algorithm (GA)-based integer-valued optimization for two ML models, namely deep neural networks (DNN) and the adaptive neuro-fuzzy inference system (ANFIS). The selected variables in the DNN optimization problem are the number of hidden layers, their number of neurons, and their activation function, while the type and the number of membership functions are the design variables in the ANFIS optimization problem. The mean squared error (MSE) between the predictions and the target outputs is minimized as the optimization fitness function. The proposed scheme is validated through a case study of computational material design. We apply the method to predict the fracture energy of polymer/nanoparticle composites (PNCs) with a database gathered from the literature. The optimized DNN model shows superior prediction accuracy compared to the classical one-hidden-layer network. It also outperforms ANFIS with a significantly lower number of GA generations. The proposed method can be easily extended to optimize similar architecture properties of ML models in various complex systems.
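To make the described workflow concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of GA-based integer-valued architecture search for a DNN: a chromosome encodes the number of hidden layers, the neurons per layer, and the activation function, and the validation MSE serves as the fitness to be minimized. Synthetic data stand in for the PNC fracture-energy database, scikit-learn's MLPRegressor stands in for the DNN, and the population size, variable bounds, and generation count are illustrative assumptions.

```python
# Minimal GA sketch for integer-valued DNN architecture search (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
ACTIVATIONS = ["relu", "tanh", "logistic"]           # candidate activation functions
MAX_LAYERS, MIN_NEURONS, MAX_NEURONS = 3, 4, 64      # integer design-variable bounds (assumed)

# Synthetic stand-in for the fracture-energy database (assumption).
X = rng.uniform(size=(200, 4))
y = X @ np.array([1.5, -2.0, 0.7, 3.0]) + 0.1 * rng.standard_normal(200)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

def random_individual():
    # Chromosome: [n_layers, neurons_1 .. neurons_MAX_LAYERS, activation_index]
    return np.array([rng.integers(1, MAX_LAYERS + 1),
                     *rng.integers(MIN_NEURONS, MAX_NEURONS + 1, MAX_LAYERS),
                     rng.integers(len(ACTIVATIONS))])

def fitness(ind):
    # Train a network defined by the chromosome and return the validation MSE.
    n_layers, act = ind[0], ACTIVATIONS[ind[-1]]
    layers = tuple(int(n) for n in ind[1:1 + n_layers])
    model = MLPRegressor(hidden_layer_sizes=layers, activation=act,
                         max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    return mean_squared_error(y_va, model.predict(X_va))

def mutate(ind, rate=0.2):
    # Resample individual genes within their integer bounds.
    child = ind.copy()
    for gene in range(len(child)):
        if rng.random() < rate:
            child[gene] = random_individual()[gene]
    return child

def crossover(a, b):
    # Single-point crossover on the integer chromosome.
    cut = rng.integers(1, len(a))
    return np.concatenate([a[:cut], b[cut:]])

# Plain generational GA with elitism.
pop = [random_individual() for _ in range(10)]
for gen in range(5):
    scores = [fitness(ind) for ind in pop]
    order = np.argsort(scores)
    elite = [pop[i] for i in order[:4]]               # keep the best individuals
    children = [mutate(crossover(elite[rng.integers(4)], elite[rng.integers(4)]))
                for _ in range(len(pop) - len(elite))]
    pop = elite + children
    print(f"generation {gen}: best MSE = {scores[order[0]]:.4f}")

best = min(pop, key=fitness)
print("best hidden layers:", best[1:1 + best[0]], "activation:", ACTIVATIONS[best[-1]])
```

Swapping the fitness function for one that builds an ANFIS from the type and number of membership functions would, under the same assumptions, mirror the second optimization problem described in the abstract.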
ASJC Scopus subject areas
- Computer Science (general)
  - Software
  - Artificial intelligence
Cite
In: Neural Computing and Applications, Vol. 33, No. 6, 03.2021, pp. 1923-1933.
Publication: Contribution to journal › Article › Research › Peer review
TY - JOUR
T1 - An efficient optimization approach for designing machine learning models based on genetic algorithm
AU - Hamdia, Khader M.
AU - Zhuang, Xiaoying
AU - Rabczuk, Timon
N1 - Funding Information: Open Access funding provided by Projekt DEAL. This study was funded by Alexander von Humboldt-Stiftung (Grant number Sofja Kovalevskaja 2015).
PY - 2021/3
Y1 - 2021/3
AB - Machine learning (ML) methods have shown powerful performance in different applications. Nonetheless, designing ML models remains a challenge and requires further research, as most procedures adopt a trial-and-error strategy. In this study, we present a methodology to optimize the architecture and the feature configurations of ML models considering a supervised learning process. The proposed approach employs genetic algorithm (GA)-based integer-valued optimization for two ML models, namely deep neural networks (DNN) and the adaptive neuro-fuzzy inference system (ANFIS). The selected variables in the DNN optimization problem are the number of hidden layers, their number of neurons, and their activation function, while the type and the number of membership functions are the design variables in the ANFIS optimization problem. The mean squared error (MSE) between the predictions and the target outputs is minimized as the optimization fitness function. The proposed scheme is validated through a case study of computational material design. We apply the method to predict the fracture energy of polymer/nanoparticle composites (PNCs) with a database gathered from the literature. The optimized DNN model shows superior prediction accuracy compared to the classical one-hidden-layer network. It also outperforms ANFIS with a significantly lower number of GA generations. The proposed method can be easily extended to optimize similar architecture properties of ML models in various complex systems.
KW - Deep neural networks
KW - Fracture energy
KW - Genetic algorithm
KW - Machine learning
KW - Optimization
KW - Polymer nanocomposites
UR - http://www.scopus.com/inward/record.url?scp=85087085394&partnerID=8YFLogxK
U2 - 10.1007/s00521-020-05035-x
DO - 10.1007/s00521-020-05035-x
M3 - Article
AN - SCOPUS:85087085394
VL - 33
SP - 1923
EP - 1933
JO - Neural Computing and Applications
JF - Neural Computing and Applications
SN - 0941-0643
IS - 6
ER -