Scheduling Latency-Sensitive Tasks in the Cloud Continuum with Hierarchical Reinforcement Learning / Monaco, Doriana; Sacco, Alessio; Casetti, Claudio; Marchetto, Guido. - (2025). (Paper presented at NOMS 2025 - 2025 IEEE Network Operations and Management Symposium, held in Honolulu (USA), 12–16 May 2025) [10.1109/NOMS57970.2025.11073729].

Scheduling Latency-Sensitive Tasks in the Cloud Continuum with Hierarchical Reinforcement Learning

Doriana Monaco; Alessio Sacco; Claudio Casetti; Guido Marchetto
2025

Abstract

Service orchestrators such as Kubernetes are widely employed to automate the handling and scheduling of workloads, which involves determining the most suitable physical node on which to start a new task. The expanding application of Machine Learning (ML) algorithms, and in particular Reinforcement Learning (RL), opens up new opportunities to make runtime decisions that account for multiple metrics and varying network conditions. However, current RL-based solutions cannot match the growing complexity of distributed applications and infrastructure, characterized by an increasingly heterogeneous resource continuum and the growing need to minimize energy consumption while satisfying tasks' requirements. To fill this gap, we propose RL-ICE, an innovative scheduler that operates across such a cloud continuum by leveraging multi-cluster, hierarchical RL to satisfy user Quality of Experience (QoE) requirements while containing tenants' costs. We test RL-ICE in a simulated large-scale environment and in a real-world Kubernetes setup. In both scenarios, our solution effectively balances user-perceived latency, energy consumption, and deployment costs. Additionally, RL-ICE dynamically responds to network failures by migrating microservices to maintain efficient resource management.
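
The record contains no code, but the abstract's core idea (a hierarchical RL scheduler that weighs latency, energy, and cost when placing tasks) can be illustrated with a minimal sketch. The Python toy below is not taken from the paper: the two-level tabular Q-learning scheme, the reward weights, and the CLUSTERS environment model are all hypothetical stand-ins for RL-ICE's actual design.

```python
import random
from collections import defaultdict

class QAgent:
    """Tabular epsilon-greedy Q-learning agent (hypothetical, not RL-ICE's)."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def reward(latency_ms, energy_j, cost, w=(1.0, 0.5, 0.3)):
    """Negative weighted sum of the three objectives the abstract mentions;
    the weights are illustrative placeholders."""
    return -(w[0] * latency_ms + w[1] * energy_j + w[2] * cost)

# Toy continuum: a low-latency but energy- and cost-hungry edge cluster
# versus a cheap, distant cloud cluster.
CLUSTERS = {
    "edge":  {"nodes": ["e0", "e1"], "latency": 5.0,  "energy": 2.0, "cost": 3.0},
    "cloud": {"nodes": ["c0", "c1"], "latency": 40.0, "energy": 1.0, "cost": 1.0},
}

top = QAgent(actions=list(CLUSTERS))                               # picks a cluster
low = {c: QAgent(actions=CLUSTERS[c]["nodes"]) for c in CLUSTERS}  # picks a node

state = "idle"
for step in range(1000):
    cluster = top.act(state)
    node = low[cluster].act(cluster)
    p = CLUSTERS[cluster]
    # Simulate noisy metrics observed after placing the task. In this toy the
    # node choice does not affect them; a real scheduler would see per-node state.
    r = reward(p["latency"] * random.uniform(0.8, 1.2),
               p["energy"] * random.uniform(0.8, 1.2),
               p["cost"])
    top.update(state, cluster, r, state)
    low[cluster].update(cluster, node, r, cluster)

print("Learned cluster preference:",
      max(CLUSTERS, key=lambda c: top.q[(state, c)]))
```

The two-level split mirrors the hierarchical idea in the abstract: the top agent narrows the decision to a cluster, so each per-cluster agent only has to learn over its own nodes rather than over the whole continuum.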
ISBN: 979-8-3315-3163-8
Files in this record:

RL_scheduler___NOMS_2025-4.pdf
Access: open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 507.67 kB, Adobe PDF

Scheduling_Latency-Sensitive_Tasks_in_the_Cloud_Continuum_with_Hierarchical_Reinforcement_Learning.pdf
Access: restricted
Type: 2a. Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 578.31 kB, Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3001635