
Effective pre-training of a deep reinforcement learning agent by means of long short-term memory models for thermal energy management in buildings / Coraci, D.; Brandi, S.; Capozzoli, A.. - In: ENERGY CONVERSION AND MANAGEMENT. - ISSN 0196-8904. - PRINT. - 291:(2023). [10.1016/j.enconman.2023.117303]

Effective pre-training of a deep reinforcement learning agent by means of long short-term memory models for thermal energy management in buildings

Coraci D.;Brandi S.;Capozzoli A.
2023

Abstract

Recently, deep reinforcement learning has emerged as a popular approach for enhancing thermal energy management in buildings due to its flexibility and model-free nature. However, the slow convergence of deep reinforcement learning poses a challenge. To address this, offline pre-training of deep reinforcement learning controllers using physics-based simulation environments has been commonly employed; however, developing these models requires significant effort and expertise. Alternatively, data-driven models offer a promising way to emulate building dynamics, but they struggle to predict previously unseen patterns. Therefore, this paper introduces a strategy to effectively train and deploy a deep reinforcement learning controller by means of long short-term memory neural networks. The experiments were carried out using an EnergyPlus simulation environment as a proxy for a real building. An automatic and recursive procedure is designed to determine the minimum amount of historical data required to train a robust data-driven model that mimics building dynamics. The trained deep reinforcement learning agent meets safety requirements in the simulation environment after two and a half months of training. Additionally, it reduces indoor temperature violations by 80% while consuming the same amount of energy as a baseline rule-based controller.
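The "automatic and recursive procedure" mentioned above can be illustrated with a minimal sketch: grow the historical training window step by step until the surrogate model's validation error drops below an accuracy threshold. All names here (`validation_error`, `minimum_training_weeks`, the weekly step, the toy error curve) are illustrative assumptions, not the paper's actual implementation, which trains long short-term memory models on real building data.

```python
def validation_error(n_weeks: int) -> float:
    """Stand-in for training a surrogate model on n_weeks of
    historical data and returning its validation error.
    A toy decreasing curve is used here for illustration."""
    return 1.0 / n_weeks


def minimum_training_weeks(threshold: float, max_weeks: int = 52) -> int:
    """Recursively enlarge the training window one week at a time
    until the validation error falls below the threshold (or the
    available history is exhausted)."""
    def search(n: int) -> int:
        if validation_error(n) <= threshold or n >= max_weeks:
            return n
        return search(n + 1)
    return search(1)


print(minimum_training_weeks(0.1))  # toy error 1/n first meets 0.1 at n = 10
```

In practice the error curve is not monotone, so a real implementation would evaluate the surrogate on a held-out period of building operation data at each step rather than rely on a closed-form curve.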
Files in this record:
File  Size  Format
Capozzoli-Effective.pdf

Open access

Type: 2a Post-print editorial version / Version of Record
License: Creative Commons
Size: 2.76 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2980758