Spiker: an FPGA-optimized Hardware accelerator for Spiking Neural Networks / Carpegna, Alessio; Savino, Alessandro; Di Carlo, Stefano. - ELECTRONIC. - (2022), pp. 14-19. (Paper presented at the 2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), held in Pafos, Cyprus, 04-06 July 2022) [10.1109/ISVLSI54635.2022.00016].
Spiker: an FPGA-optimized Hardware accelerator for Spiking Neural Networks
Carpegna, Alessio; Savino, Alessandro; Di Carlo, Stefano
2022
Abstract
Spiking Neural Networks (SNN) are an emerging type of biologically plausible and efficient Artificial Neural Network (ANN). This work presents the development of a hardware accelerator for an SNN for high-performance inference, targeting a Xilinx Artix-7 Field Programmable Gate Array (FPGA). The model used inside the neuron is the Leaky Integrate and Fire (LIF). The execution is clock-driven, meaning that the internal state of the neuron is updated at every clock cycle, even in the absence of spikes. The inference capabilities of the accelerator are evaluated on the MNIST dataset. The training is performed offline on a full-precision model. The results show a clear performance improvement over state-of-the-art accelerators, requiring 215 μs per image. The energy consumption is slightly higher than that of the most optimized designs, with an average value of 13 mJ per image. The test design consists of a single layer of four hundred neurons and uses around 40% of the available resources on the FPGA. This makes it suitable for time-constrained applications at the edge, leaving room for other acceleration tasks on the FPGA.
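To make the clock-driven LIF behavior described in the abstract concrete, the following is a minimal software sketch of one update step. The parameter names and values (decay factor, threshold, reset potential, layer sizes) are illustrative assumptions, not the accelerator's actual fixed-point hardware implementation.

```python
import numpy as np

def lif_step(v, in_spikes, weights, decay=0.9, v_thresh=1.0, v_reset=0.0):
    """One clock cycle: leak, integrate weighted input spikes, fire, reset."""
    # The membrane potential is updated every cycle, even when no spikes arrive.
    v = decay * v + weights @ in_spikes
    out_spikes = (v >= v_thresh).astype(np.uint8)   # neurons above threshold fire
    v = np.where(out_spikes == 1, v_reset, v)       # reset neurons that fired
    return v, out_spikes

# Illustrative usage: a single layer of 400 neurons driven by 784 inputs
# (e.g. MNIST pixels encoded as spike trains), stepped for a fixed window
# of clock cycles.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=(400, 784))
v = np.zeros(400)
for _ in range(100):                                # simulation window in cycles
    in_spikes = (rng.random(784) < 0.1).astype(np.uint8)
    v, out_spikes = lif_step(v, in_spikes, weights)
```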
| File | Type | License | Size | Format |
|---|---|---|---|---|
| main.pdf (open access) | 2. Post-print / Author's Accepted Manuscript | Public - All rights reserved | 362.26 kB | Adobe PDF |
| Spiker_an_FPGA-optimized_Hardware_accelerator_for_Spiking_Neural_Networks.pdf (restricted access) | 2a Post-print, publisher's version / Version of Record | Non-public - Private/restricted access | 676.4 kB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2971596