Recently, interest has grown in HVAC control systems based on Artificial Intelligence, which aim to improve comfort conditions while avoiding unnecessary energy consumption. In this work, a model-free algorithm belonging to the Deep Reinforcement Learning (DRL) class, Soft Actor-Critic (SAC), was implemented to control the supply water temperature to the radiant terminal units of a heating system serving an office building. The controller was trained online, and a preliminary sensitivity analysis on hyperparameters was performed to assess their influence on agent performance. The best-performing DRL agent was compared over a three-month heating season to a rule-based controller assumed as the baseline. The DRL controller outperformed the baseline after two weeks of deployment, with an overall performance improvement related to the control of indoor temperature conditions. Moreover, the adaptability of the DRL agent was tested across various control scenarios, simulating changes in external weather conditions, indoor temperature setpoint, building envelope features and occupancy patterns. Despite a slight increase in energy consumption, the dynamically deployed agent improved indoor temperature control, reducing the cumulative sum of temperature violations by 75% and 48% on average across all scenarios compared to the baseline and the statically deployed agent, respectively.
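The abstract describes an agent trained to trade off energy consumption against indoor temperature violations. A minimal sketch of how such a reward signal is often formulated in DRL-based HVAC control is given below; the comfort band, the weighting coefficient `beta`, and the function names are illustrative assumptions, not details taken from the paper.

```python
def temperature_violation(t_in: float, t_low: float, t_high: float) -> float:
    """Degrees (positive) by which indoor temperature t_in falls outside
    the comfort band [t_low, t_high]; zero when inside the band."""
    return max(t_low - t_in, 0.0) + max(t_in - t_high, 0.0)


def reward(energy_kwh: float, t_in: float,
           t_low: float = 20.0, t_high: float = 22.0,
           beta: float = 1.0) -> float:
    """Illustrative reward: negative weighted sum of energy use and
    comfort violation. beta balances comfort against energy (assumed)."""
    return -(energy_kwh + beta * temperature_violation(t_in, t_low, t_high))


# Example: 21 °C sits inside the assumed band, so only energy is penalised.
print(reward(2.0, 21.0))   # only the 2 kWh term contributes
print(reward(2.0, 19.0))   # 1 °C below the band adds a comfort penalty
```

Summing such violations over all control timesteps yields the "cumulative sum of temperature violations" metric that the abstract uses to compare controllers.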
Online implementation of a soft actor-critic agent to enhance indoor temperature control and energy efficiency in buildings / Coraci, D.; Brandi, S.; Piscitelli, M. S.; Capozzoli, A.. - In: ENERGIES. - ISSN 1996-1073. - 14:4(2021), p. 997. [10.3390/en14040997]
|Title:||Online implementation of a soft actor-critic agent to enhance indoor temperature control and energy efficiency in buildings|
|Publication date:||2021|
|Digital Object Identifier (DOI):||http://dx.doi.org/10.3390/en14040997|
|Appears in collections:||1.1 Journal article|