
Ensuring Safe Social Navigation via Explainable Probabilistic and Conformal Safety Regions / Narteni, Sara; Carlevaro, Alberto; Guzzi, Jérôme; Mongelli, Maurizio. - 2156:(2024), pp. 396-417. (Paper presented at The 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024), held in La Valletta (Malta), 17-19 July 2024) [10.1007/978-3-031-63803-9_22].

Ensuring Safe Social Navigation via Explainable Probabilistic and Conformal Safety Regions

Sara Narteni; Alberto Carlevaro; Jérôme Guzzi; Maurizio Mongelli
2024

Abstract

Recent advancements in Artificial Intelligence (AI) have generated considerable interest in the robotics community. Indeed, AI can find application in a wide variety of problems. Among these, social navigation of mobile robots is a major challenge, where ensuring non-harmful behaviors of the robotic system is fundamental. In this paper, we consider a simulated navigation problem that involves a fleet of mobile agents moving in a cross scenario, governed by a human-like behavior. With the purpose of avoiding collisions among them, we show how safe and explainable AI (XAI) methods can constitute useful tools to tailor the parameters of the behavior towards safe, collision-free navigation. We first explore how global native rule-based classification provides interpretable characterizations of the agents' behavior. Afterwards, we derive safety regions, $\mathcal{S}_{\varepsilon}$, denoting the zones in the parameter space where collisions are avoided, with a maximum error given by $\varepsilon$. The design of the regions is based on scalable classifiers, a technique to tune the decision function of a machine learning (ML) classifier so as to bound its error on a desired class to a predefined level, combined with either probabilistic scaling (probabilistic safety regions, PSR), or with conformal prediction theory (conformal safety regions, CSR). Finally, we investigate how explainability can be provided to these regions by extracting local rules from their boundaries.
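The conformal safety region idea summarized in the abstract can be sketched with split conformal prediction: calibrate a threshold on the classifier's score so that at most an $\varepsilon$ fraction of truly unsafe points falls inside the region. The following is a minimal illustrative example; the synthetic data, the `LogisticRegression` stand-in classifier, and all names are our assumptions, not the authors' implementation.

```python
# Minimal sketch of a conformal safety region (CSR) via split conformal
# prediction. Data, model, and names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary problem: label 1 = "collision" (unsafe), 0 = "safe".
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=2000) > 0).astype(int)

X_tr, y_tr = X[:1000], y[:1000]    # proper training set
X_cal, y_cal = X[1000:], y[1000:]  # calibration set

clf = LogisticRegression().fit(X_tr, y_tr)

eps = 0.05  # tolerated error on the unsafe class

# Nonconformity score: estimated probability of collision.
s_cal = clf.predict_proba(X_cal)[:, 1]
s_unsafe = np.sort(s_cal[y_cal == 1])

# Threshold tau = floor(eps * (n + 1))-th smallest unsafe score: under
# exchangeability, at most an eps fraction of true collisions scores
# below tau, i.e. lands inside the safety region.
n = len(s_unsafe)
k = int(np.floor(eps * (n + 1)))
tau = s_unsafe[k - 1] if k >= 1 else -np.inf

def in_safety_region(x):
    """True where x lies in S_eps, the conformal safety region."""
    return clf.predict_proba(np.atleast_2d(x))[:, 1] < tau
```

In this sketch the classifier's predicted collision probability plays the role of the scalable classifier's decision function, and the conformal calibration step shifts its threshold to meet the $\varepsilon$ bound on the unsafe class.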
ISBN: 978-3-031-63802-2; 978-3-031-63803-9
Files in this item:
File  Size  Format
Ensuring Safe Social Navigation via Explainable Probabilistic and Conformal Safety Regions.pdf

Embargoed until 10/07/2025

Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 1.9 MB
Format: Adobe PDF
xAI2024_published_nostro.pdf

Not available

Type: 2a. Post-print, publisher's version / Version of Record
License: Non-public - Private/restricted access
Size: 1.09 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2990594