
XAI4C: An XAI-powered Conflict Detection Framework in O-RAN / Varshney, Nancy; Mungari, Federico; Puligheddu, Corrado; Badawy, Ahmed; Chiasserini, Carla Fabiana. - ELECTRONIC. - (2025). (Paper presented at The 22nd IEEE International Conference on Mobile Ad-Hoc and Smart Systems (MASS 2025), held in Chicago (USA), October 6-8, 2025).

XAI4C: An XAI-powered Conflict Detection Framework in O-RAN

Varshney, Nancy; Mungari, Federico; Puligheddu, Corrado; Badawy, Ahmed; Chiasserini, Carla Fabiana
2025

Abstract

The Open Radio Access Network (O-RAN) architecture is key to enabling AI-driven dynamic network management. However, the complexity of this architecture introduces challenges, especially in managing conflicts between different AI-driven applications that operate concurrently within the network. Left unchecked, these conflicts can degrade network performance and disrupt services. To address this issue, we propose XAI4C (Explainable AI for Conflict Detection), a framework that leverages the SHAP (SHapley Additive exPlanations) explainable AI technique. XAI4C enhances transparency and interpretability in AI decision-making by helping network operators understand the factors driving AI decisions across different network components, thereby allowing for early detection of conflicts between applications. In this paper, we first present the architecture and operation of the XAI4C framework. We then demonstrate its effectiveness in conflict detection through two case studies related to network slicing. Our results show that XAI4C outperforms the state-of-the-art PACIFISTA framework, providing an increase in detection accuracy of up to 30% while reducing the number of samples required for conflict detection by 41.17%.
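The abstract builds on SHAP, i.e., Shapley-value attributions that quantify how much each input feature drives a model's output. The paper's actual detection logic is not described on this page; the following is only a minimal, self-contained sketch of the underlying idea, in which opposite-signed attributions from two toy "xApp" policy models on the same feature are flagged as a potential conflict. All model and variable names here are hypothetical illustrations, not the authors' implementation.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution of f's output to each feature of x.
    Features outside a coalition are replaced by their baseline value;
    each feature's value is its weighted average marginal contribution
    over all coalitions of the other features."""
    n = len(x)

    def eval_coalition(S):
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (eval_coalition(set(S) | {i}) - eval_coalition(set(S)))
    return phi

# Hypothetical xApp policy models: each maps network KPIs
# (here: [load, latency]) to a resource-allocation score.
slicing_xapp = lambda z: 2.0 * z[0] - 1.0 * z[1]   # rewards high load
power_xapp   = lambda z: -1.5 * z[0] + 0.5 * z[1]  # penalizes high load

x, base = [0.8, 0.3], [0.5, 0.5]
phi_a = shapley_values(slicing_xapp, x, base)
phi_b = shapley_values(power_xapp, x, base)

# Opposite-signed attributions on the same KPI suggest the two
# applications are pulling the network in opposing directions.
conflicts = [i for i in range(len(x)) if phi_a[i] * phi_b[i] < 0]
print(conflicts)
```

For these linear toy models the exact Shapley value of feature i reduces to weight times deviation from baseline, so both KPIs receive opposite-signed attributions from the two models and are flagged. Real xApps would use learned, nonlinear models, for which sampling-based SHAP estimators replace this exponential-cost exact enumeration.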
Files in this record:
XAI4Conflicts.pdf

Open access

Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 417.32 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3002330