
Accelerating Depthwise Separable Convolutions on Ultra-Low-Power Devices / Daghero, Francesco; Burrello, Alessio; Poncino, Massimo; Macii, Enrico; Jahier Pagliari, Daniele. - PRINT. - 15226:(2025), pp. 46-58. (Paper presented at the 24th International Conference, SAMOS 2024, held in Samos (GRC), June 29 – July 4, 2024) [10.1007/978-3-031-78377-7_4].

Accelerating Depthwise Separable Convolutions on Ultra-Low-Power Devices

Daghero, Francesco; Burrello, Alessio; Poncino, Massimo; Macii, Enrico; Jahier Pagliari, Daniele
2025

Abstract

Depthwise separable convolutions are a fundamental component of efficient Deep Neural Networks, as they reduce the number of parameters and operations compared to traditional convolutions while maintaining comparable accuracy. However, their limited data reuse opportunities make them notoriously difficult to deploy efficiently. In this work, we perform an extensive exploration of alternatives for fusing the depthwise and pointwise kernels that constitute the separable convolutional block. Our approach aims to minimize time-consuming memory transfers by combining different data layouts. When targeting a commercial ultra-low-power device with a three-level memory hierarchy, the GreenWaves GAP8 SoC, we reduce the latency of end-to-end network execution by up to 11.40%. Furthermore, our kernels reduce activation data movements between L2 and L1 memories by up to 52.97%.
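The parameter reduction mentioned in the abstract follows directly from how a separable block factors a standard convolution into a per-channel (depthwise) filter plus a 1x1 (pointwise) channel mixer. The sketch below illustrates the arithmetic with hypothetical layer sizes chosen for illustration only (they are not taken from the paper):

```python
# Parameter counts for a standard vs. a depthwise separable convolution.
# Example values (3x3 kernel, 64 -> 128 channels) are illustrative, not from the paper.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    # One k x k filter per (input channel, output channel) pair.
    return k * k * c_in * c_out

def separable_conv_params(k: int, c_in: int, c_out: int) -> int:
    depthwise = k * k * c_in  # one k x k filter per input channel
    pointwise = c_in * c_out  # 1x1 convolution mixing channels
    return depthwise + pointwise

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)   # 73728
sep = separable_conv_params(k, c_in, c_out)  # 8768
print(std, sep, round(std / sep, 1))         # roughly an 8.4x parameter reduction
```

The same factorization reduces multiply-accumulate operations by a similar ratio, but it also shrinks the amount of weight data reused per activation, which is the data-reuse problem the paper's kernel-fusion strategies target.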
ISBN: 9783031783760; 9783031783777
Files in this item:

post-print.pdf
Embargoed until 28/01/2026
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 688.63 kB
Format: Adobe PDF

978-3-031-78377-7_4.pdf
Restricted access
Type: 2a. Post-print, editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 790.67 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11583/2999019