Optimizing TCN Inference: A Hardware-Software Co-Design Approach with CGRA Acceleration / Varaldi, Alessandro; Naclerio, Alessio; Riente, Fabrizio; Zamboni, Maurizio; Graziano, Mariagrazia; Vacca, Marco. - ELECTRONIC. - (2025), pp. 1-6. (Paper presented at the 2025 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), held in Kalamata, Greece, 6-9 July 2025) [10.1109/isvlsi65124.2025.11130330].
Optimizing TCN Inference: A Hardware-Software Co-Design Approach with CGRA Acceleration
Varaldi, Alessandro; Naclerio, Alessio; Riente, Fabrizio; Zamboni, Maurizio; Graziano, Mariagrazia; Vacca, Marco
2025
Abstract
This paper presents a hardware-software co-design approach for accelerating Temporal Convolutional Network (TCN) inference on resource-constrained edge devices. TCNs are powerful models for sequential data analysis, but their computational complexity poses challenges for deployment in low-power IoT applications. To address this, we integrate the CGRA accelerator "Mage" into the RISC-V-based X-HEEP platform, enabling efficient execution of dilated 1D convolutions through tailored memory access optimizations, tiling strategies, and DMA-based data transfers. Our methodology includes a PyTorch-based training pipeline, a custom C-based inference engine, and hardware acceleration via Mage, which supports dynamic reconfiguration for different TCN layers. Experimental evaluation on a Pynq-Z2 FPGA demonstrates significant speedups, achieving up to 69.2x and 82.6x for int16 and int8 configurations, respectively, across the entire network. These results highlight the effectiveness of our approach in enabling real-time TCN inference for edge analytics, paving the way for scalable and efficient deployment of deep learning models in IoT scenarios. While we validate our approach using an EMG dataset, the proposed solution is general and applicable to various time-series analytics tasks.
File: Optimizing_TCN_Inference_A_Hardware-Software_Co-Design_Approach_with_CGRA_Acceleration.pdf
Access: restricted
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 632.23 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/3002589