Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control / Silvestri, Alberto; Coraci, Davide; Brandi, Silvio; Capozzoli, Alfonso; Borkowski, Esther; Köhler, Johannes; Wu, Duan; Zeilinger, Melanie N.; Schlueter, Arno. - In: APPLIED ENERGY. - ISSN 0306-2619. - 368:(2024). [10.1016/j.apenergy.2024.123447]

Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control

Davide Coraci; Silvio Brandi; Alfonso Capozzoli
2024

Abstract

Deep Reinforcement Learning (DRL) has emerged as a promising approach to address the trade-off between energy efficiency and indoor comfort in buildings, potentially outperforming conventional Rule-Based Controllers (RBCs). This paper explores the real-world application of a Soft Actor-Critic (SAC) DRL controller in a building's Thermally Activated Building System (TABS), focusing on optimising energy consumption while maintaining comfortable indoor temperatures. Our approach involves pre-training the DRL agent on a simplified Resistance-Capacitance (RC) model calibrated with real building data. The study first benchmarks the DRL controller against three RBCs, two Proportional-Integral (PI) controllers, and a Model Predictive Controller (MPC) in a simulated environment. In the simulation study, DRL reduces energy consumption by 15% to 50% and decreases temperature violations by 25% compared to the RBCs and, compared to the PI controllers, reduces energy consumption and temperature violations by 23% and 5%, respectively. Moreover, DRL achieves comparable temperature control while consuming 29% more energy than an ideal MPC. When implemented in a real building during a two-month cooling season, the DRL controller was compared with the best-performing RBC, improving indoor temperature control by 68% without increasing energy consumption. This research demonstrates an effective strategy for training and deploying DRL controllers in real building energy systems, highlighting the potential of DRL in practical energy management applications.
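The pre-training strategy described in the abstract relies on a simplified Resistance-Capacitance surrogate of the building's thermal dynamics. A minimal sketch of such a surrogate is a single-zone 1R1C model stepped with forward Euler; all parameter values below are illustrative placeholders, not the paper's calibrated model.

```python
# Minimal single-zone 1R1C thermal model (illustrative sketch only;
# parameter values are hypothetical, not the paper's calibrated ones).

def simulate_1r1c(t_init, t_outdoor, q_hvac, r=0.005, c=2.0e7, dt=900.0):
    """Simulate indoor temperature with a forward-Euler 1R1C model.

    t_init     initial indoor temperature [degC]
    t_outdoor  outdoor temperature per step [degC]
    q_hvac     HVAC thermal power per step [W] (negative = cooling)
    r          envelope thermal resistance [K/W]
    c          zone thermal capacitance [J/K]
    dt         time step [s]
    """
    t = t_init
    trajectory = [t]
    for t_out, q in zip(t_outdoor, q_hvac):
        # Energy balance: C * dT/dt = (T_out - T)/R + Q_hvac
        dt_dt = ((t_out - t) / r + q) / c
        t += dt_dt * dt
        trajectory.append(t)
    return trajectory

# Example: 24 h of a hot day (15-min steps) with constant 1.5 kW cooling.
steps = 96
traj = simulate_1r1c(
    t_init=24.0,
    t_outdoor=[30.0] * steps,
    q_hvac=[-1500.0] * steps,
)
```

In a pre-training setup, a model like this would serve as the simulated environment in which the SAC agent explores control actions cheaply before deployment; the paper's actual RC model and its calibration procedure are not reproduced here.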
File in this record:
Real building implementation of a deep reinforcement learning controller to enhance energy efficiency and indoor temperature control.pdf
Open access — Type: 2a Post-print / Version of Record — Licence: Creative Commons — Format: Adobe PDF — Size: 2.29 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2988906