
Explaining Concept Drift via Neuro-Symbolic Rules / Basci, Pietro; Greco, Salvatore; Manigrasso, Francesco; Cerquitelli, Tania; Morra, Lia. - Electronic. - Vol. 4132 (2025), pp. 61-72. (European Workshop on Trustworthy AI (TRUST-AI), Bologna, Italy, 25-26 October 2025)

Explaining Concept Drift via Neuro-Symbolic Rules

Pietro Basci; Salvatore Greco; Francesco Manigrasso; Tania Cerquitelli; Lia Morra
2025

Abstract

Concept drift in machine learning refers to changes in the underlying data distribution over time, which can degrade the performance of predictive models. Although many methods have been proposed to detect and adapt to concept drift, effective methods to explain it in a human-understandable manner are still lacking. To address this, we propose the use of neuro-symbolic rules to explain the reasons for drift. We applied recent rule extraction methods to convolutional neural networks (CNNs) to shed light on the model's internal behavior and promote interpretability of its outputs, and we propose two novel automated approaches for semantic kernel labeling. We conducted preliminary experiments to assess the applicability and effectiveness of these rules in explaining concept drift, as well as the efficacy of the kernel labeling strategies. Under the optimality assumption, our method extracted rules that facilitate the identification of the causes of drift, through either rule inspection or analysis of antecedent activation frequencies. Moreover, the proposed kernel labeling strategies offer a more reliable and scalable alternative to state-of-the-art solutions.
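To make the abstract's mention of antecedent activation frequency analysis concrete, here is a minimal illustrative sketch in Python: it counts how often each rule antecedent (e.g., a semantically labeled CNN kernel) fires on pre-drift and post-drift batches, then ranks antecedents by the magnitude of the frequency shift. The function names (antecedent_frequencies, drift_candidates) and the toy kernel labels are hypothetical and are not taken from the paper; this is only a sketch of the general idea, not the authors' implementation.

from collections import Counter

def antecedent_frequencies(rule_activations):
    # rule_activations: one set of fired antecedents per sample.
    # Returns the fraction of samples on which each antecedent fired.
    counts = Counter()
    for active in rule_activations:
        counts.update(active)
    total = len(rule_activations)
    return {a: c / total for a, c in counts.items()}

def drift_candidates(pre_drift, post_drift, top_k=5):
    # Rank antecedents by the absolute shift in activation frequency
    # between the pre-drift and post-drift windows; large shifts
    # point at the concepts most implicated in the drift.
    before = antecedent_frequencies(pre_drift)
    after = antecedent_frequencies(post_drift)
    shifts = {
        a: after.get(a, 0.0) - before.get(a, 0.0)
        for a in set(before) | set(after)
    }
    return sorted(shifts.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]

# Toy example with hypothetical kernel labels: each sample is the
# set of semantic labels whose antecedents fired for that sample.
pre = [{"stripes", "fur"}, {"stripes"}, {"fur", "ears"}]
post = [{"ears"}, {"ears", "fur"}, {"ears"}]
print(drift_candidates(pre, post))

In this toy run, the "ears" antecedent shows the largest frequency increase after the drift point, so it would surface first as a candidate explanation for the drift.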
Files in this record:
short54.pdf (Adobe PDF, 3.44 MB), open access
Type: 2a Post-print editorial version / Version of Record
License: Creative Commons

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3003603