Deep reinforcement learning control architectures for industrial multi-energy systems: from single-agent to hierarchical multi-agents / Franzoso, Andrea; Fambri, Gabriele; Badami, Marco. - In: ENERGY CONVERSION AND MANAGEMENT. - ISSN 0196-8904. - 350:(2026). [10.1016/j.enconman.2025.120963]
Deep reinforcement learning control architectures for industrial multi-energy systems: from single-agent to hierarchical multi-agents
Andrea Franzoso; Gabriele Fambri; Marco Badami
2026
Abstract
The increasing share of renewable energy sources in industrial multi-energy systems has introduced significant challenges for the optimal coordination of multiple energy carriers. This work focuses on improving the cost-effective and robust operation of an industrial plant that simultaneously produces electricity, steam, hot water, and chilled water. It explores whether data-driven control strategies based on deep reinforcement learning can enhance the economic and energy performance of the plant compared to traditional methods, and whether multi-agent structures can further improve robustness. Three control architectures were designed and evaluated – centralized, decentralized, and hierarchical – using a year of actual operational data from the industrial plant. The results show that all of the proposed strategies reduced total operating costs by more than 6% compared to the existing rule-based control. The hierarchical configuration achieved the best performance and demonstrated superior robustness to variations in energy prices. These findings highlight the potential of learning-based hierarchical coordination as a practical and resilient framework for managing complex industrial energy systems.

| File | Type | License | Size | Format |
|---|---|---|---|---|
| 1-s2.0-S0196890425014876-main.pdf (open access) | 2a Post-print editorial version / Version of Record | Creative Commons | 7.51 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/3006460
