Deep Recommender Models Inference: Automatic Asymmetric Data Flow Optimization / Ruggeri, Giuseppe; Andri, Renzo; Pagliari, Daniele Jahier; Cavigelli, Lukas. - ELECTRONIC. - (2024), pp. 517-520. (Paper presented at the 2024 IEEE 42nd International Conference on Computer Design (ICCD), held in Milan, Italy, 18-20 November 2024) [10.1109/iccd63220.2024.00085].

Deep Recommender Models Inference: Automatic Asymmetric Data Flow Optimization

Ruggeri, Giuseppe; Pagliari, Daniele Jahier
2024

Abstract

Deep Recommender Model (DLRM) inference is a fundamental AI workload, accounting for more than 79% of the total AI workload in Meta's data centers. The performance bottleneck of DLRMs lies in the embedding layers, which perform many random memory accesses to retrieve small embedding vectors from tables of various sizes. We propose the design of tailored data flows to speed up embedding lookups. Specifically, we propose four strategies to look up an embedding table effectively on one core, and a framework to automatically map the tables asymmetrically to the multiple cores of a SoC. We assess the effectiveness of our method on Huawei Ascend AI accelerators, comparing it with the default Ascend compiler, and we perform high-level comparisons with the Nvidia A100. Results show a speed-up ranging from 1.5x to 6.5x for real workload distributions, and more than 20x for extremely unbalanced distributions. Furthermore, the method proves to be far less sensitive to the query distribution than the baseline.
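To make the memory-access pattern and the mapping idea concrete, below is a minimal, hypothetical Python/NumPy sketch. It is not the paper's implementation: the table sizes, embedding dimension, batch size, sum-pooling, the size-based cost proxy, and the helper names (embedding_lookup, map_tables_to_cores) are all assumptions, and the greedy least-loaded heuristic merely stands in for the paper's automatic asymmetric mapping framework.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding tables of various sizes (rows x embedding dim),
# mimicking the heterogeneous tables described in the abstract.
tables = [rng.standard_normal((rows, 32), dtype=np.float32)
          for rows in (1_000, 100_000, 1_000_000)]

def embedding_lookup(table, indices):
    # Gather small embedding vectors at random row indices and sum-pool them.
    # Each access is a small, essentially random read, which is why this
    # stage is memory-bound and dominates DLRM inference latency.
    return table[indices].sum(axis=0)

# One query: 32 random indices per table (the batch size is illustrative).
pooled = [embedding_lookup(t, rng.integers(0, len(t), size=32)) for t in tables]
print([p.shape for p in pooled])  # -> [(32,), (32,), (32,)]

# Asymmetric table-to-core mapping, illustrated with a plain greedy
# least-loaded (longest-processing-time) heuristic over a per-table cost
# estimate. The paper's automatic framework is more elaborate; this only
# shows the idea of assigning whole tables unevenly across cores instead
# of splitting every table symmetrically.
def map_tables_to_cores(costs, n_cores):
    assignment = {core: [] for core in range(n_cores)}
    load = [0.0] * n_cores
    for t in sorted(range(len(costs)), key=lambda i: -costs[i]):
        core = min(range(n_cores), key=load.__getitem__)  # least-loaded core
        assignment[core].append(t)
        load[core] += costs[t]
    return assignment

costs = [float(len(t)) for t in tables]  # crude proxy: table size as cost
print(map_tables_to_cores(costs, n_cores=2))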
ISBN: 979-8-3503-8040-8; 979-8-3503-8041-5
Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2999072