Belluz, Jany; Gaudesi, Marco; Squillero, Giovanni; Tonda, Alberto Paolo (2015). Operator Selection using Improved Dynamic Multi-Armed Bandit. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Madrid, pp. 1311-1317. [10.1145/2739480.2754712]

Operator Selection using Improved Dynamic Multi-Armed Bandit

Gaudesi, Marco; Squillero, Giovanni
2015

Abstract

Evolutionary algorithms greatly benefit from an optimal application of the different genetic operators during the optimization process: it is thus not surprising that several research lines in the literature deal with self-adapting the activation probabilities of operators. The current state of the art revolves around the Multi-Armed Bandit (MAB) and Dynamic Multi-Armed Bandit (D-MAB) paradigms, which modify the selection mechanism based on the rewards obtained by the different operators. Such methodologies, however, update the probabilities after each operator application, creating possible issues with positive feedback and impairing parallel evaluation, one of the strongest advantages of evolutionary computation from an industrial perspective. Moreover, D-MAB techniques often rely upon measurements of population diversity, which might not be applicable to all real-world scenarios. In this paper, we propose a generalization of the D-MAB approach, paired with a simple mechanism for operator management, that aims at removing several limitations of other D-MAB strategies while allowing for parallel evaluations and self-adaptive parameter tuning. Experimental results show that the approach is particularly effective in frameworks containing many different operators, even when some of them are ill-suited for the problem at hand or fail sporadically, as commonly happens in the real world.
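To make the batching idea concrete, here is a minimal Python sketch, not the authors' algorithm: a plain UCB1 bandit that plans a whole generation of operator applications before any reward is credited, so that all offspring can be evaluated in parallel. The operator names, the reward definition, the exploration constant, and the zero-reward placeholder for pending pulls are all illustrative assumptions.

    import math
    import random

    class BatchedOperatorSelector:
        """UCB1-style bandit over genetic operators, credited once per
        batch (e.g., once per generation) instead of after every single
        application, so a generation of offspring can be evaluated in
        parallel. A sketch of the general idea, not the paper's method."""

        def __init__(self, operators, c=math.sqrt(2)):
            self.operators = list(operators)
            self.c = c  # exploration constant (assumed value)
            self.pulls = {op: 0 for op in self.operators}
            self.reward_sum = {op: 0.0 for op in self.operators}

        def plan_batch(self, size):
            """Select `size` operators up front, before any reward is
            known. Pending selections are counted as pulls with zero
            reward (a pessimistic placeholder), so the batch spreads
            across operators instead of collapsing onto a single one."""
            provisional = dict(self.pulls)
            planned = []
            for _ in range(size):
                total = sum(provisional.values()) or 1

                def ucb(op):
                    if provisional[op] == 0:
                        return float("inf")  # try every operator at least once
                    mean = self.reward_sum[op] / provisional[op]
                    return mean + self.c * math.sqrt(math.log(total) / provisional[op])

                choice = max(self.operators, key=ucb)
                provisional[choice] += 1
                planned.append(choice)
            return planned

        def update_batch(self, applications):
            """Credit all (operator, reward) pairs of the finished batch at once."""
            for op, reward in applications:
                self.pulls[op] += 1
                self.reward_sum[op] += reward

    # Hypothetical usage: operators are plain names here; in a real framework
    # they would be mutation/crossover functions, and the reward could be,
    # e.g., the fitness improvement of the offspring over its parent.
    selector = BatchedOperatorSelector(["one_point_xover", "bit_flip", "swap"])
    batch = selector.plan_batch(8)                     # plan one generation
    results = [(op, random.random()) for op in batch]  # stand-in for parallel evaluation
    selector.update_batch(results)

Deferring credit assignment to the end of the batch is what removes the per-application update that the abstract identifies as the obstacle to parallel evaluation.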
9781450334723
Files in this record:
No files are associated with this record.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2617406