Details
Original language | English |
---|---|
Article number | 100730 |
Journal | Energy Conversion and Management: X |
Volume | 24 |
Publication status | Published - Oct. 2024 |
Abstract
Due to the flexibility they provide, Decentralized Energy Systems (DES) play a central role in integrating renewable energies. To utilize renewable energy efficiently, dispatchable components must be operated so as to bridge the time gap between inflexible supply and energy demand. Because of the large number of energy converters, energy storage systems, and flexible consumers, there are many ways to achieve this. Conventional rule-based dispatch strategies often reach their limits here, and optimized dispatch strategies (e.g., model predictive control or optimal dispatch) usually rely on very good forecasts. Reinforcement learning, particularly with Artificial Neural Networks (ANN), offers the possibility of learning such complex decision-making processes. Since this requires long training times, an efficient training framework is needed. The present paper proposes different training methods to learn an ANN-based dispatch strategy. In Method I, the ANN attempts to learn the solution of the corresponding optimal dispatch problem. In Method II, the energy system model is simulated during training to compute the observation state and operating costs resulting from the ANN-based dispatch. Method III uses the quickly computable Method I solution as a warm-up for the computationally expensive training with Method II. A model-based analysis compares the different ANN-based dispatch strategies with rule-based dispatch strategies, model-predictive-control-based (MPC) dispatch, and optimal dispatch with regard to their computational efficiency and the resulting operating costs. The dispatch strategies are compared in three case studies with different system topologies, each evaluated on separate training and test data sets. Training Method I proved to be non-competitive. However, training Methods II and III significantly outperformed rule-based dispatch strategies across all case studies for both training and test data sets. Notably, Methods II and III also surpassed MPC-based dispatch under high- and medium-uncertainty forecasts for the training data in the first two case studies. In contrast, MPC-based dispatch was superior in the third case study, likely due to the system's higher complexity, and on the test data set due to the methodological advantage of being optimized for each specific data set. The effectiveness of training Method III depends on the performance of the warm-up training with Method I: the warm-up is beneficial only if it already yields a promising dispatch (as seen in case study two). Otherwise, training Method II proves more effective, as observed in case studies one and three.
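To make the three training methods more concrete, the sketch below shows their structure on a toy one-battery dispatch problem. It is purely illustrative and is not the authors' implementation: the battery model, the linear stand-in for the ANN policy, the synthetic residual load, the fabricated "optimal" targets for Method I, and the finite-difference update standing in for the (unnamed) reinforcement learning algorithm in Method II are all assumptions introduced for illustration.

```python
# Illustrative sketch (not the paper's code) of the three training methods.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic residual load (demand minus renewable generation) for T time steps.
T = 500
residual_load = np.sin(np.linspace(0, 20 * np.pi, T)) + 0.3 * rng.standard_normal(T)

def policy(params, obs):
    """Tiny linear stand-in for the ANN: observation -> battery power set-point."""
    return np.tanh(obs @ params)  # in [-1, 1]; >0 discharges, <0 charges

def simulate(params, load):
    """Roll the policy through a crude battery model and return average operating cost."""
    soc, cost = 0.5, 0.0
    for t in range(len(load)):
        p = float(policy(params, np.array([load[t], soc])))
        soc = np.clip(soc - 0.1 * p, 0.0, 1.0)   # simplistic storage dynamics
        grid_import = load[t] - p                # remaining demand covered by the grid
        cost += max(grid_import, 0.0) ** 2       # penalize grid imports
    return cost / len(load)

# Method I: supervised imitation of a pre-computed "optimal" dispatch.
# The targets are faked here; in the paper they come from solving the optimal
# dispatch problem offline.
def train_method_1(targets, steps=100, lr=0.05):
    params = 0.1 * rng.standard_normal(2)
    for _ in range(steps):
        grad = np.zeros_like(params)
        for t in range(len(targets)):
            obs = np.array([residual_load[t], 0.5])  # fixed SOC feature for brevity
            z = obs @ params
            err = np.tanh(z) - targets[t]            # squared-error imitation loss
            grad += 2.0 * err * (1.0 - np.tanh(z) ** 2) * obs
        params -= lr * grad / len(targets)
    return params

# Method II: cost-based training against the simulated system model.
# Finite differences stand in for the RL algorithm, which the abstract does not name.
def train_method_2(params, steps=100, lr=0.05, eps=1e-3):
    params = params.copy()
    for _ in range(steps):
        grad = np.zeros_like(params)
        for i in range(params.size):
            e = np.zeros_like(params)
            e[i] = eps
            grad[i] = (simulate(params + e, residual_load)
                       - simulate(params - e, residual_load)) / (2.0 * eps)
        params -= lr * grad
    return params

# Method III: warm-start the expensive Method II training with the Method I result.
fake_optimal_targets = np.clip(residual_load, -1.0, 1.0)  # placeholder targets
warm_params = train_method_1(fake_optimal_targets)        # cheap warm-up (Method I)
final_params = train_method_2(warm_params)                # fine-tuning (Method II)
print("operating cost after warm-started training:", simulate(final_params, residual_load))
```

The essential distinction carried over from the abstract is that Method I never evaluates the system model during training (it only fits pre-computed targets), whereas Method II evaluates the simulated system at every update, and Method III simply chains the two.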
ASJC Scopus subject areas
- Energy (all): Renewable Energy, Sustainability and the Environment
- Energy (all): Nuclear Energy and Engineering
- Energy (all): Fuel Technology
- Energy (all): Energy Engineering and Power Technology
Sustainable Development Goals
Cite
Koenemann, L., Bensmann, A., Gerster, J., & Hanke-Rauschenbach, R. (2024). Dispatch of decentralized energy systems using artificial neural networks: A comparative analysis with emphasis on training methods. In: Energy Conversion and Management: X, Volume 24, 100730, 10.2024.
Publication: Contribution to journal › Article › Research › Peer review
TY - JOUR
T1 - Dispatch of decentralized energy systems using artificial neural networks
T2 - A comparative analysis with emphasis on training methods
AU - Koenemann, Lukas
AU - Bensmann, Astrid
AU - Gerster, Johannes
AU - Hanke-Rauschenbach, Richard
N1 - Publisher Copyright: © 2024 The Author(s)
PY - 2024/10
Y1 - 2024/10
KW - Artificial neural network
KW - Decentralized energy systems
KW - Smart dispatch strategies
UR - http://www.scopus.com/inward/record.url?scp=85206272896&partnerID=8YFLogxK
U2 - 10.1016/j.ecmx.2024.100730
DO - 10.1016/j.ecmx.2024.100730
M3 - Article
AN - SCOPUS:85206272896
VL - 24
JO - Energy Conversion and Management: X
JF - Energy Conversion and Management: X
M1 - 100730
ER -