
Amico, E.; Cafiero, G.; Iuso, G. Deep reinforcement learning for active control of a three-dimensional bluff body wake. Physics of Fluids (ISSN 1070-6631), 34(10), 105126 (2022). DOI: 10.1063/5.0108387

Deep reinforcement learning for active control of a three-dimensional bluff body wake

Amico, E.; Cafiero, G.; Iuso, G.
2022

Abstract

The application of deep reinforcement learning (DRL) to train an agent capable of learning control laws for pulsed jets to manipulate the wake of a bluff body is presented and discussed. The work was performed experimentally at a Reynolds number Re ≈ 10^5, adopting a single-step approach for the training of the agent. Two main aspects are targeted: first, the dimension of the state, which allows us to draw conclusions on its effect on the training of the neural network; second, the capability of the agent to learn optimal strategies aimed at maximizing more complex objectives encoded in the reward. The agent is trained to learn strategies that either minimize drag only or minimize drag while also accounting for the power budget of the fluidic system. The results show that, independently of the definition of the reward, the DRL agent learns effective forcing conditions: the drag reduction is as large as 10% when the reward is based on drag minimization only. On the other hand, when the power budget is also accounted for, the agent learns forcing configurations that yield a lower drag reduction (5%) but are characterized by large values of the efficiency. A comparison between the natural and forced conditions is carried out in terms of the pressure distribution across the model's base. The different wake structures obtained depending on the training of the agent suggest that the possible forcing configurations yielding similar values of the reward are local minima for the problem. This represents, to the authors' knowledge, the first application of single-step DRL in an experimental framework at large values of the Reynolds number to control the wake of a three-dimensional bluff body. Published under an exclusive license by AIP Publishing.
Files in this record:

2022_ACI_PoF.pdf (open access)
Type: 2a Post-print editorial version / Version of Record
License: Public - All rights reserved
Size: 4.25 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2976848