Optimizing DNN Inference on Multi-Accelerator SoCs at Training-time / Risso, M.; Burrello, A.; Jahier Pagliari, D. - In: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS. - ISSN 0278-0070. - Electronic. - 44:9 (2025), pp. 3532-3545. [DOI: 10.1109/TCAD.2025.3543715]
Optimizing DNN Inference on Multi-Accelerator SoCs at Training-time
M. Risso; A. Burrello; D. Jahier Pagliari
2025
Abstract
The demand for executing Deep Neural Networks (DNNs) with low latency and minimal power consumption at the edge has led to the development of advanced heterogeneous Systems-on-Chips (SoCs) that incorporate multiple specialized computing units (CUs), such as accelerators. Offloading DNN computations to a specific CU from the available set often exposes accuracy vs. efficiency trade-offs, due to differences in their supported operations (e.g., standard vs. depthwise convolution) or data representations (e.g., more/less aggressively quantized). A challenging yet unresolved issue is how to map a DNN onto these multi-CU systems to maximally exploit the parallelization possibilities while taking accuracy into account. To address this problem, we present ODiMO, a hardware-aware tool that efficiently explores fine-grained mappings of DNNs among the various on-chip CUs during the training phase. ODiMO strategically splits individual layers of the neural network and executes them in parallel on the multiple available CUs, aiming to balance the total inference energy consumption or latency with the resulting accuracy, which is impacted by the unique features of the different hardware units. We test our approach on CIFAR-10, CIFAR-100, and ImageNet, targeting two open-source heterogeneous SoCs, i.e., DIANA and Darkside. We obtain a rich collection of Pareto-optimal networks in the accuracy vs. energy or latency space. We show that ODiMO reduces the latency of a DNN executed on the Darkside SoC by up to 8x at iso-accuracy, compared to manual heuristic mappings. When targeting energy, on the same SoC, ODiMO produces mappings that are up to 50.8x more efficient, with a minimal accuracy drop (<0.3%).
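To make the training-time layer-splitting idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a differentiable channel-to-CU assignment combined with a cost-aware loss term. It is not ODiMO's actual implementation: the `SplitConv` and `FakeQuant` names, the two assumed CUs (8-bit vs. 2-bit weights, with made-up relative per-channel costs), and the straight-through quantizer are illustrative assumptions based only on the abstract's description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuant(torch.autograd.Function):
    """Uniform fake-quantization with a straight-through gradient."""

    @staticmethod
    def forward(ctx, x, bits):
        scale = x.abs().max().clamp(min=1e-8) / (2 ** (bits - 1) - 1)
        return torch.round(x / scale) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # pass gradients through; none for `bits`


class SplitConv(nn.Module):
    """Toy convolution whose output channels are softly assigned to one of
    two hypothetical CUs that differ in precision and per-channel cost."""

    def __init__(self, in_ch, out_ch, bits=(8, 2), cu_cost=(1.0, 0.25)):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bits = bits        # assumed per-CU weight precisions
        self.cu_cost = cu_cost  # assumed relative latency/energy per channel
        # Trainable logits: one (CU0, CU1) pair per output channel.
        self.assign = nn.Parameter(torch.zeros(out_ch, 2))

    def forward(self, x):
        w = self.conv.weight
        # Quantize the full weight tensor once per CU precision.
        w_q = [FakeQuant.apply(w, b) for b in self.bits]
        p = F.softmax(self.assign, dim=-1)  # (out_ch, 2) soft assignment
        # Each output channel is a convex mix of its per-CU variants:
        # a continuous relaxation of a hard channel-to-CU mapping.
        w_mix = sum(p[:, i, None, None, None] * w_q[i] for i in range(2))
        return F.conv2d(x, w_mix, self.conv.bias, padding=1)

    def expected_cost(self):
        """Differentiable proxy for inference cost: the expected number of
        channels mapped to each CU, weighted by that CU's per-channel cost."""
        p = F.softmax(self.assign, dim=-1)
        return sum(c * p[:, i].sum() for i, c in enumerate(self.cu_cost))


if __name__ == "__main__":
    layer = SplitConv(16, 32)
    x = torch.randn(4, 16, 8, 8)
    # Dummy task loss plus a cost regularizer; the 1e-3 strength is a
    # made-up knob trading accuracy against the latency/energy proxy.
    loss = layer(x).pow(2).mean() + 1e-3 * layer.expected_cost()
    loss.backward()
    print("expected cost:", layer.expected_cost().item())
```

After training such a relaxation, a hard mapping could be recovered by taking the argmax of each channel's assignment logits and dispatching the resulting channel groups to their respective CUs.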
| File | Type | Access | License | Size | Format |
|---|---|---|---|---|---|
| Optimizing_DNN_Inference_on_Multi-Accelerator_SoCs_at_Training-time.pdf | 2. Post-print / Author's Accepted Manuscript | Restricted (View/Open: request a copy) | Non-public - Private/restricted access | 6.13 MB | Adobe PDF |
| 2409.18566v2.pdf | 1. Preprint / Submitted version (pre-review) | Open access (View/Open) | Public - All rights reserved | 6.13 MB | Adobe PDF |
| Optimizing_DNN_Inference_on_Multi-accelerator_SoCs_at_Training-Time.pdf | 2a. Post-print, editorial version / Version of Record | Restricted (View/Open: request a copy) | Non-public - Private/restricted access | 6.16 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2997772