A probabilistic validation approach for penalty function design in stochastic model predictive control / Mammarella, M.; Alamo, T.; Lucia, S.; Dabbene, F. - Electronic. - 53 (2020), pp. 11271-11276. (Paper presented at the 21st IFAC World Congress 2020, held in Germany in 2020) [10.1016/j.ifacol.2020.12.362].
A probabilistic validation approach for penalty function design in stochastic model predictive control
Mammarella M.; Dabbene F.
2020
Abstract
In this paper, we consider a stochastic Model Predictive Control (MPC) scheme able to account for the effects of an additive stochastic disturbance with unbounded support, requiring no restrictive assumptions on either independence or Gaussianity. We revisit the rather classical approach based on penalty functions, with the aim of designing a control scheme that meets given probabilistic specifications. The main difference with respect to previous approaches is that we do not resort to the notion of probabilistic recursive feasibility, and hence we do not treat the infeasible case separately. In particular, two probabilistic design problems are envisioned. The first randomization problem aims at designing the constraint-set tightening offline, following an approach inherited from tube-based MPC. In the second, a specific probabilistic validation approach is exploited to tune the penalty parameter, which is selected offline from a finite family of candidate values. The simple algorithm proposed here yields a single controller that always guarantees feasibility of the online optimization problem. The proposed method is shown to be more computationally tractable than previous schemes, because the sample complexity of both probabilistic design problems depends only logarithmically on the prediction horizon, unlike scenario-based approaches, which exhibit a linear dependence. The efficacy of the proposed approach is demonstrated with a numerical example.
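To illustrate the kind of offline, sample-based selection the abstract refers to, the sketch below shows how a penalty parameter might be validated over a finite family of candidates using sampled disturbance sequences. This is a minimal, hypothetical illustration and not the paper's algorithm: the function names, the toy "closed-loop" check, and the chosen numbers of samples and allowed violations are all assumptions made for the example (in an actual design they would follow from the sample-complexity bounds discussed in the paper).

```python
# Minimal, hypothetical sketch of offline validation of a penalty parameter
# over a finite candidate family, using sampled disturbance sequences.
# NOT the paper's algorithm; a purely illustrative stand-in.
import numpy as np

rng = np.random.default_rng(seed=1)


def violates_spec(lam, w):
    """Toy stand-in for a closed-loop MPC simulation: returns True if the
    disturbance sequence w leads to a constraint violation when the penalty
    parameter is lam. A real implementation would solve the penalty-based
    MPC problem at each step and check the state/input constraints."""
    peak = np.max(np.abs(np.cumsum(w))) / (1.0 + lam)
    return peak > 1.0  # illustrative constraint bound


def validate_penalty(candidates, horizon, n_samples, max_violations):
    """Return the smallest candidate whose empirical violation count, over
    n_samples independently sampled disturbance sequences, does not exceed
    max_violations; n_samples and max_violations are placeholders for
    values dictated by a sample-complexity bound."""
    for lam in sorted(candidates):
        violations = sum(
            violates_spec(lam, rng.standard_normal(horizon))
            for _ in range(n_samples)
        )
        if violations <= max_violations:
            return lam, violations
    return None, None


lam_star, count = validate_penalty(
    candidates=[0.5, 5.0, 50.0], horizon=20, n_samples=1000, max_violations=50
)
print(f"selected penalty: {lam_star}, empirical violations: {count}")
```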
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| IFAC2020.pdf | Open access | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved | 421.54 kB | Adobe PDF |
| 1-s2.0-S2405896320306467-main.pdf (main article) | Open access | 2a. Post-print, published version / Version of Record | Creative Commons | 421.54 kB | Adobe PDF |
https://hdl.handle.net/11583/2907192