Explaining Concept Drift via Neuro-Symbolic Rules / Basci, Pietro; Greco, Salvatore; Manigrasso, Francesco; Cerquitelli, Tania; Morra, Lia. - ELECTRONIC. - (In press). (Paper presented at the European Workshop on Trustworthy AI (TRUST-AI), held in Bologna, 25-26 October 2025).

Explaining Concept Drift via Neuro-Symbolic Rules

Pietro Basci; Salvatore Greco; Francesco Manigrasso; Tania Cerquitelli; Lia Morra
In press

Abstract

Concept drift in machine learning refers to changes in the underlying data distribution over time, which can degrade the performance of predictive models. Although many methods have been proposed to detect and adapt to concept drift, effective methods to explain it in a human-understandable manner are still lacking. To address this, we propose the use of neuro-symbolic rules to explain the reasons for drift. We applied recent rule extraction methods to convolutional neural networks (CNNs) to shed light on the model's internal behavior and promote interpretability of its outputs, while also proposing two novel automated approaches for semantic kernel labeling. We conducted preliminary experiments to assess the applicability and effectiveness of these rules in explaining concept drift, and the efficacy of the kernel labeling strategies. Under the optimality assumption, our method extracted rules that facilitate the identification of the causes of drift, through either rule inspection or analysis of antecedent activation frequencies. Moreover, the proposed kernel labeling strategies offer more reliable and scalable alternatives to state-of-the-art solutions.
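
The analysis of antecedent activation frequencies mentioned above can be made concrete with a small sketch. The Python fragment below is a hypothetical illustration only, not the authors' implementation: it assumes rule objects exposing an `antecedents` collection of kernel labels and a user-supplied `fires(antecedent, sample)` predicate, and ranks antecedents by how much their firing rate shifts between a reference window and a drifted window.

    from collections import Counter

    def antecedent_shift(rules, ref_samples, drift_samples, fires):
        """Rank rule antecedents by the change in their activation
        frequency between a reference window and a drifted window.

        `fires(ant, x)` is a hypothetical predicate returning True
        when the kernel-label antecedent `ant` is active on input `x`.
        """
        def freqs(samples):
            counts = Counter()
            for rule in rules:
                for ant in rule.antecedents:
                    counts[ant] += sum(fires(ant, x) for x in samples)
            total = max(sum(counts.values()), 1)
            return {a: c / total for a, c in counts.items()}

        ref, drift = freqs(ref_samples), freqs(drift_samples)
        # Antecedents with the largest absolute frequency shift are the
        # first candidates to inspect when diagnosing the cause of drift.
        deltas = {a: drift.get(a, 0.0) - ref.get(a, 0.0)
                  for a in set(ref) | set(drift)}
        return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)

Under this reading, an antecedent whose firing rate rises or falls sharply after the drift point localizes which learned concept the distribution change has affected.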
Files in this record:

File: TRUST_AI_Workshop___ECAI25.pdf (restricted access)
Type: 2. Post-print / Author's Accepted Manuscript
License: Non-public - Private/restricted access
Size: 3.09 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3003603