Resource Aware Active Learning for Multifidelity Optimization / Manganini, Giorgio; Grassi, Francesco; Garraffa, Michele; Mainini, Laura. - Electronic. - (2021). (Contribution presented at the SIAM Conference on Computational Science and Engineering, held as a virtual event, 1-5 March 2021).

Resource Aware Active Learning for Multifidelity Optimization

Grassi, Francesco; Garraffa, Michele; Mainini, Laura
2021

Abstract

In traditional methods for black-box optimization, a considerable number of objective function evaluations are required, which can be time-consuming and often infeasible for many engineering applications with expensive models to evaluate. Bayesian Optimization methods can improve the efficiency of the optimization procedure by actively learning a surrogate model of the objective function that exploits and synthesizes the information available along the search path, thereby reducing the number of expensive function evaluations required. Efficiency can be further improved in a multifidelity setting, where cheaper but potentially biased approximations of the objective function can be used to assist the search for optimal points. In this talk we investigate the further computational benefits offered by the availability of parallel/distributed computing architectures, whose optimal usage is an open opportunity within the context of active learning. We introduce the Resource Aware Active Learning (RAAL) algorithm, a multifidelity Bayesian scheme that, at each optimization step, exploits the current surrogate model together with the parallel/distributed computational budget and computes the set of best sample locations and associated fidelity sources to evaluate in order to maximize the information gain on the objective function. The scheme is demonstrated on a variety of single and multifidelity benchmark problems, and the results show a major speedup of the optimization task.
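
The abstract only outlines the RAAL scheme, so the sketch below should be read as an illustrative, generic multifidelity Bayesian optimization loop with per-step batch selection under a parallel evaluation budget, not as the authors' algorithm. The surrogate (one Gaussian process per fidelity), the uncertainty-per-cost acquisition, the budget value, and the toy Forrester-style test functions (f_hi, f_lo) are all assumptions made here for brevity; a real deployment would dispatch the selected batch to parallel workers and use the authors' information-gain criterion.

# Illustrative sketch (NOT the RAAL algorithm itself): at each optimization step,
# greedily select a batch of (location, fidelity) pairs within a fixed parallel
# evaluation budget, then evaluate them and update the surrogates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def f_hi(x):          # expensive, exact objective (Forrester function)
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def f_lo(x):          # cheap but biased low-fidelity approximation
    return 0.5 * f_hi(x) + 10 * (x - 0.5) - 5

fidelities = [(f_lo, 1.0), (f_hi, 5.0)]      # (model, cost per evaluation)
budget_per_step = 6.0                        # parallel/distributed budget per step
candidates = np.linspace(0.0, 1.0, 201)

def fit_gp(X, y):
    # One independent GP per fidelity; a simplification of a true multifidelity surrogate.
    gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1e-4),
                                  normalize_y=True)
    return gp.fit(X.reshape(-1, 1), y)

# Small initial design for each fidelity level.
data = []
for f, _ in fidelities:
    X0 = rng.uniform(0.0, 1.0, 4)
    data.append((X0, f(X0)))

for step in range(5):
    gps = [fit_gp(X, y) for X, y in data]
    stds = [gp.predict(candidates.reshape(-1, 1), return_std=True)[1].copy()
            for gp in gps]
    spent, batch = 0.0, []
    # Greedy batch selection: repeatedly pick the (location, fidelity) pair with
    # the best uncertainty-per-cost score that still fits in the remaining budget.
    while True:
        best = None
        for k, (_, cost) in enumerate(fidelities):
            if spent + cost > budget_per_step:
                continue
            j = int(np.argmax(stds[k]))
            score = stds[k][j] / cost
            if best is None or score > best[0]:
                best = (score, k, j)
        if best is None:
            break
        _, k, j = best
        batch.append((k, candidates[j]))
        stds[k][j] = 0.0          # crude guard against re-selecting the same point
        spent += fidelities[k][1]
    # Evaluate the selected batch (in parallel on a real system) and augment the data.
    for k, x in batch:
        X, y = data[k]
        data[k] = (np.append(X, x), np.append(y, fidelities[k][0](x)))

X_hi, y_hi = data[-1]             # highest-fidelity observations
print("best high-fidelity sample:", X_hi[int(np.argmin(y_hi))], float(np.min(y_hi)))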
Files in this item:
File: ManganiniGrassiGarraffaMainini_Resource Aware Active Learning for Multifidelity Optimization.pdf
Access: Open access
Type: Abstract
License: Public - All rights reserved
Size: 100.15 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11583/2924129