Multi-Agent Deep Reinforcement Learning for Optimized Operation of Industrial Energy Systems / Franzoso, Andrea; Fambri, Gabriele; Badami, Marco. - (2025). (Paper presented at the 11th International Conference on Smart Energy Systems (SESAAU2025), held in Copenhagen (DNK), 16-17 September 2025).

Multi-Agent Deep Reinforcement Learning for Optimized Operation of Industrial Energy Systems

Franzoso, Andrea; Fambri, Gabriele; Badami, Marco
2025

Abstract

Industrial energy systems often supply diverse energy vectors: electricity, steam, hot water, and chilled water. To meet these demands, they integrate various technologies, including cogeneration units (e.g., internal combustion engines or microturbines), steam generators, chillers, and renewable sources such as photovoltaic arrays. These components differ in efficiency, flexibility, and operational constraints, creating a tightly coupled and complex optimization problem. Traditional rule-based control strategies, common in practice, often fail to handle the variability of renewables and the complexity of multi-energy infrastructures. In this context, Deep Reinforcement Learning (DRL) has emerged as a compelling alternative. In DRL, an agent learns optimal policies through trial-and-error interaction with an environment; it is a flexible control framework that can be implemented under different structural paradigms, depending on how decision-making is distributed. Typically, only a subset of technologies, those with greater flexibility and cost impact, is directly controlled, while the others respond indirectly to upstream decisions. This modularity suits decentralized or hierarchical control, where decision-making is distributed across multiple DRL agents to improve scalability and responsiveness. This work presents different DRL structures (centralized and decentralized) for the optimization of an industrial multi-energy system. All DRL models were benchmarked against a typical rule-based controller and a MILP-based optimization. Results show that DRL outperforms the rule-based strategy, which cannot account for renewable variability. Performance varied across the DRL configurations, highlighting the importance of the control architecture. Notably, the best-performing DRL setup used a hierarchical configuration, achieving results close to the MILP optimum and demonstrating the potential of hierarchical DRL agents for efficient, scalable energy management.
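To make the hierarchical idea concrete, the following minimal Python sketch shows a two-level control loop of the kind the abstract describes: a top-level agent dispatches the most flexible, cost-driving unit (here a CHP engine), and a lower-level agent dispatches the remaining units given that decision. All component models, demand profiles, prices, and names below are illustrative assumptions, not the authors' plant data; the random stubs stand in for trained DRL policies.

# Minimal sketch (hypothetical values) of a hierarchical two-level control loop
# for a multi-energy system. The policy functions are stubs that pick random
# feasible actions; in a real setup they would be trained DRL policies.

import random

class MultiEnergyEnv:
    """Toy multi-energy plant: a CHP unit, a gas boiler, and an electric chiller."""
    def __init__(self):
        self.t = 0
        self.horizon = 24

    def demands(self, t):
        # Hypothetical hourly demand profiles (kW): electricity, heat, cooling.
        return 800 + 200 * (t % 12), 500 + 100 * (t % 8), 300 + 50 * (t % 6)

    def step(self, chp_load, boiler_load, chiller_load):
        e_dem, h_dem, c_dem = self.demands(self.t)
        chp_el, chp_th = 1000 * chp_load, 1200 * chp_load   # CHP outputs (kW, assumed sizes)
        heat = chp_th + 900 * boiler_load                   # boiler tops up the heat demand
        cool = 3.0 * 200 * chiller_load                     # chiller with assumed COP = 3
        grid_import = max(0.0, e_dem + 200 * chiller_load - chp_el)
        # Operating cost: fuel for CHP and boiler plus grid electricity (assumed prices).
        cost = 0.04 * (chp_load * 2500 + boiler_load * 1000) + 0.25 * grid_import
        # Penalize unmet heat/cooling demand so the agents learn to cover the loads.
        penalty = 0.5 * max(0.0, h_dem - heat) + 0.5 * max(0.0, c_dem - cool)
        self.t += 1
        done = self.t >= self.horizon
        state = (self.t, e_dem, h_dem, c_dem)
        return state, -(cost + penalty), done

def coordinator_policy(state):
    # Top-level agent: sets the load of the most flexible, cost-driving unit (CHP).
    return random.uniform(0.3, 1.0)   # stub for a trained DRL policy

def local_policy(state, chp_load):
    # Lower-level agent: dispatches boiler and chiller given the coordinator's decision.
    return random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)  # stub

env = MultiEnergyEnv()
state, done, total_reward = (0, 0, 0, 0), False, 0.0
while not done:
    chp = coordinator_policy(state)
    boiler, chiller = local_policy(state, chp)
    state, reward, done = env.step(chp, boiler, chiller)
    total_reward += reward
print(f"Episode return (negative cost): {total_reward:.1f}")

A centralized variant would instead have a single agent output all three load levels at once; the sketch only illustrates, under the stated assumptions, how a hierarchical split of the action space can be wired.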
Files in this record:
No files are associated with this record.
Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3004795