
Spiking motion direction through object motion sensitivity / Pignari, Riccardo; Fra, Vittorio; Urgese, Gianvito; Knight, James C.; D'Angelo, Giulia. - ELECTRONIC. - (2025). (Presented at the 2025 IEEE International Conference on Development and Learning (ICDL), held in Prague (CZ) on 16/09/2025) [10.5281/zenodo.15831646].

Spiking motion direction through object motion sensitivity

Riccardo Pignari; Vittorio Fra; Gianvito Urgese
2025

Abstract

Unlike conventional frame-based imaging, neuromorphic sensing mimics the brain's asynchronous processing. Event-driven cameras capture only changes in light intensity, providing microsecond-scale temporal resolution, which is crucial for tracking satellites amidst celestial motion and atmospheric interference. This sparse encoding significantly reduces data volume while preserving a high dynamic range, making it well suited to challenging lighting conditions and enabling motion-blur-free imaging. The combination of event-driven input and spiking neural networks (SNNs), deployed on non-von Neumann neuromorphic hardware, further supports real-time, adaptive processing with ultra-low power consumption, often in the milliwatt range. To distinguish moving objects while the event-based camera itself is in motion, the agent requires a robust object motion segmentation (OMS) mechanism. Building on the work of D'Angelo et al., this study proposes an SNN architecture for OMS in dynamic scenes, coupled with an additional SNN module for motion direction detection.
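The core idea behind object motion sensitivity is that ego-motion of the camera drives events nearly everywhere in the visual field, while an independently moving object drives a locally stronger burst of events. A center-surround comparison can therefore suppress the globally coherent activity and respond only where local activity exceeds the background. The sketch below is a minimal, illustrative toy (not the authors' implementation): it accumulates events into a 2D count map and thresholds the difference between a small center average and a wider surround average; all function names, filter sizes, and thresholds are assumptions chosen for clarity.

```python
import numpy as np

def oms_response(event_frame, center_size=3, surround_size=7,
                 w_inh=1.0, threshold=0.5):
    """Toy object-motion-sensitivity map.

    A pixel 'spikes' (outputs 1) when local event activity (center mean)
    exceeds the wider surround mean, which approximates the globally
    coherent activity caused by ego-motion. All parameters are illustrative.
    """
    def box_mean(x, k):
        # k x k mean filter via a summed-area table, with edge padding
        p = k // 2
        xp = np.pad(x, p, mode='edge')
        c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
        c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col for window subtraction
        s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
        return s / (k * k)

    center = box_mean(event_frame, center_size)
    surround = box_mean(event_frame, surround_size)
    drive = center - w_inh * surround      # surround inhibition cancels global flow
    return (drive > threshold).astype(np.uint8)  # binary "spike" map
```

Under this scheme a uniform field of events (pure ego-motion) yields no output spikes, because center and surround activity cancel, while a compact cluster of events (an independently moving object) survives the inhibition and produces spikes at its location. A spiking implementation replaces the threshold with leaky integrate-and-fire neurons receiving center-excitatory and surround-inhibitory input.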
Files in this item:
File: IEEE_ICDL_2025.pdf
Access: open access
Type: Abstract
License: Creative Commons
Size: 634.93 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3002924