
Natural Language Interpretability for ML-Based QoT Estimation via Large Language Models / Ayoub, Omran; Natalino, Carlos; Troia, Sebastian; Rottondi, Cristina; Andreoletti, Davide; Lelli, Francesco; Giordano, Silvia; Monti, Paolo. - (2025), pp. 1-4. (25th Anniversary International Conference on Transparent Optical Networks, ICTON 2025, Barcelona, Spain, 06-10 July 2025) [10.1109/icton67126.2025.11125132].

Natural Language Interpretability for ML-Based QoT Estimation via Large Language Models

Rottondi, Cristina
2025

Abstract

As Machine Learning (ML) systems become integral to network management, the need for transparent decision-making grows. While post-hoc explainability methods provide insights into model behavior, their technical nature often limits accessibility. We explore Large Language Models (LLMs) for translating complex ML model explanations, extracted using explainable artificial intelligence frameworks, into natural language to improve user understanding and interpretability. Using direct prompting and self-reflection-based prompting, we generate explanations for a lightpath Quality of Transmission (QoT) estimation model. Empirical evaluations confirm the correctness and usefulness of LLM-generated interpretations in about 65% of the cases, highlighting the benefits of self-reflection in enhancing explanation quality. The study also highlights the need for further enhancements to improve the results achieved so far.
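The self-reflection-based prompting mentioned in the abstract can be sketched as a draft-critique-revise loop. This is a minimal illustration, not the paper's actual implementation; `call_llm` is a hypothetical stand-in (stubbed here) for a real LLM API.

```python
# Hypothetical sketch of self-reflection prompting: the model drafts a
# natural-language interpretation of an XAI summary, then critiques and
# revises its own draft. call_llm is a stub standing in for a real LLM API.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would query an LLM service here.
    if "Critique" in prompt:
        return "OK"  # pretend the draft passed the self-check
    return f"Interpretation of: {prompt[:60]}"

def explain_with_self_reflection(xai_summary: str, max_rounds: int = 3) -> str:
    """Translate an XAI feature-importance summary into plain language,
    then let the model critique and revise its own draft."""
    draft = call_llm(
        f"Explain in plain language for a network operator: {xai_summary}"
    )
    for _ in range(max_rounds):
        critique = call_llm(f"Critique this explanation for correctness: {draft}")
        if critique.strip() == "OK":
            break  # the self-check raised no issues; stop refining
        draft = call_llm(f"Revise the explanation given this critique: {critique}")
    return draft
```

With a real LLM backend, the critique step is what distinguishes this from direct prompting: the second call gives the model a chance to catch errors in its own first-pass interpretation before it reaches the user.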
979-8-3315-9777-1
Files in this item:

Natural_Language_Interpretability_for_ML-Based_QoT_Estimation_via_Large_Language_Models.pdf
Restricted access
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 7.93 MB
Format: Adobe PDF
ICTON2025_Adaptive_Explainability.pdf
Open access
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 775.13 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3006478