Dispatch of decentralized energy systems using artificial neural networks: A comparative analysis with emphasis on training methods

Publication: Contribution to journal › Article › Research › Peer-reviewed

Authors

  • Lukas Koenemann
  • Astrid Bensmann
  • Johannes Gerster
  • Richard Hanke-Rauschenbach

External organisations

  • WETEC Systems GmbH

Details

Original language: English
Article number: 100730
Journal: Energy Conversion and Management: X
Volume: 24
Publication status: Published - Oct 2024
DOI: 10.1016/j.ecmx.2024.100730
Keywords: Artificial neural network, Decentralized energy systems, Smart dispatch strategies

Abstract

Due to their flexibility, Decentralized Energy Systems (DES) play a central role in integrating renewable energies. To utilize renewable energy efficiently, dispatchable components must be operated so as to bridge the time gap between inflexible supply and energy demand. Because of the large number of energy converters, energy storage systems, and flexible consumers, there are many ways to achieve this. Conventional rule-based dispatch strategies often reach their limits here, and optimized dispatch strategies (e.g., model predictive control or optimal dispatch) usually rely on very accurate forecasts. Reinforcement learning, particularly with Artificial Neural Networks (ANN), offers the possibility of learning such complex decision-making processes. Since this requires long training times, an efficient training framework is needed. The present paper proposes different training methods for learning an ANN-based dispatch strategy. In Method I, the ANN attempts to learn the solution of the corresponding optimal dispatch problem. In Method II, the energy system model is simulated during training to compute the observation state and the operating costs resulting from the ANN-based dispatch. Method III uses the quickly computed Method I solution as a warm-up for the computationally expensive training with Method II. A model-based analysis compares the different ANN-based dispatch strategies with rule-based dispatch strategies, dispatch based on model predictive control (MPC), and optimal dispatch with respect to their computational efficiency and the resulting operating costs. The dispatch strategies are compared in three case studies with different system topologies, to each of which separate training and test data sets are applied. Training Method I proved to be non-competitive. Training Methods II and III, however, significantly outperformed rule-based dispatch strategies across all case studies on both the training and the test data sets. Notably, Methods II and III also surpassed MPC-based dispatch under high- and medium-uncertainty forecasts on the training data in the first two case studies. In contrast, MPC-based dispatch was superior in the third case study, likely due to the system's higher complexity, and on the test data sets due to the methodological advantage of being optimized for each specific data set. The effectiveness of training Method III depends on the performance of the warm-up training with Method I: the warm-up is beneficial only if it already yields a promising dispatch (as seen in case study two); otherwise, training Method II proves more effective, as observed in case studies one and three.
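To make the three training methods concrete, a minimal sketch is given below. It is an illustration only, not the paper's implementation: the `DispatchPolicy` network architecture, the tensors of precomputed optimal-dispatch actions, and the `simulate_episode` hook standing in for the energy-system simulation are all assumptions, and Method II is reduced here to a generic REINFORCE-style policy-gradient update as one possible reinforcement-learning choice.

```python
import torch
import torch.nn as nn


class DispatchPolicy(nn.Module):
    """Hypothetical ANN dispatch policy: maps an observation state
    (e.g., storage levels, demand, renewable feed-in, time features)
    to set-points for the dispatchable components."""

    def __init__(self, n_obs: int, n_act: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_obs, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_act),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def train_method_i(policy, obs, opt_actions, epochs=200):
    """Method I (sketch): supervised imitation of the precomputed
    optimal-dispatch solution, i.e., regression on (state, action) pairs."""
    optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(policy(obs), opt_actions)
        optim.zero_grad()
        loss.backward()
        optim.step()


def train_method_ii(policy, simulate_episode, episodes=1000):
    """Method II (sketch): the energy-system model is simulated during
    training; each rollout returns the operating costs caused by the
    ANN-based dispatch together with the log-probabilities of the sampled
    actions, used here in a REINFORCE-style policy-gradient update."""
    optim = torch.optim.Adam(policy.parameters(), lr=1e-4)
    for _ in range(episodes):
        log_probs, cost = simulate_episode(policy)  # hypothetical simulator hook
        loss = cost * log_probs.sum()               # score-function estimator, minimizes cost
        optim.zero_grad()
        loss.backward()
        optim.step()


def train_method_iii(policy, obs, opt_actions, simulate_episode):
    """Method III (sketch): warm-up with the cheap Method I, then refine
    with the computationally expensive, simulation-based Method II."""
    train_method_i(policy, obs, opt_actions, epochs=100)
    train_method_ii(policy, simulate_episode, episodes=500)
```

The point of Method III in this sketch is simply that `train_method_ii` starts from the weights left behind by the warm-up rather than from a random initialization, which matches the abstract's observation that the warm-up only pays off if Method I already yields a promising dispatch.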

Cite

Dispatch of decentralized energy systems using artificial neural networks: A comparative analysis with emphasis on training methods. / Koenemann, Lukas; Bensmann, Astrid; Gerster, Johannes et al.
In: Energy Conversion and Management: X, Vol. 24, 100730, 10.2024.


Koenemann L, Bensmann A, Gerster J, Hanke-Rauschenbach R. Dispatch of decentralized energy systems using artificial neural networks: A comparative analysis with emphasis on training methods. Energy Conversion and Management: X. 2024 Oct;24:100730. doi: 10.1016/j.ecmx.2024.100730
Koenemann, Lukas; Bensmann, Astrid; Gerster, Johannes et al. / Dispatch of decentralized energy systems using artificial neural networks: A comparative analysis with emphasis on training methods. In: Energy Conversion and Management: X. 2024; Vol. 24.

