Analyzing the Impact of Scheduling Policies on the Reliability of GPUs Running CNN Operations / Limas Sierra, Robert; Guerrero-Balaguera, Juan-David; Pessia, Francesco; Rodriguez Condia, Josie E.; Reorda, Matteo Sonza. - (2024). (Paper presented at the 2024 IEEE 42nd VLSI Test Symposium (VTS), held in Tempe, AZ, USA, 22-24 April 2024) [10.1109/vts60656.2024.10538940].
Analyzing the Impact of Scheduling Policies on the Reliability of GPUs Running CNN Operations
Limas Sierra, Robert; Guerrero-Balaguera, Juan-David; Rodriguez Condia, Josie E.; Reorda, Matteo Sonza
2024
Abstract
The programming flexibility and parallelism of Graphics Processing Units (GPUs) contribute to their effective adoption in complex and data-intensive fields like Machine Learning, especially in the deployment of Convolutional Neural Networks (CNNs). CNNs are also used in some safety-critical applications with severe reliability constraints, such as autonomous driving and robotics. Modern GPUs efficiently combine hardware schedulers and in-chip accelerators (e.g., Tensor Core Units, or TCUs) to enhance CNN performance. Interestingly, fine-grain reliability analyses combining the operation of task scheduling policies in GPUs and TCUs have remained unexplored. This work analyses the reliability impact of scheduling policies on GPUs when permanent faults affect TCUs during the execution of CNN operations. We developed a configurable architectural GPU model (in terms of clusters and parallel cores) that implements five selectable scheduling policies and supports the instruction-accurate execution of TCUs. Our results indicate that the GPU's architecture and the scheduling policy play a crucial role in the application's corruption from faulty TCUs. From the experiments, we found that some policies can reduce the corruption effects by up to 22% for large GPUs. In addition, we evaluated the dynamic variability of the scheduling policies and the complexity of identifying deterministic effects on the application's outputs.
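The abstract's distinction between deterministic and dynamically variable scheduling effects can be illustrated with a toy dispatcher. This is a minimal sketch under assumed, hypothetical policies (a static round-robin and a randomized assignment), not the paper's actual GPU model or its five policies: it only shows why a deterministic policy maps the same tasks onto a permanently faulty TCU on every run, while a dynamic policy spreads the corruption unpredictably across runs.

```python
import random

def dispatch(policy, num_tcus=8, faulty_tcus=frozenset({3}),
             num_tasks=1000, seed=None):
    """Assign tile-level tasks to TCUs under a toy scheduling policy and
    return the fraction of tasks executed on a permanently faulty unit
    (a crude proxy for output corruption)."""
    rng = random.Random(seed)
    hits = 0
    for task_id in range(num_tasks):
        if policy == "round_robin":
            # Static mapping: task -> TCU is identical on every run.
            tcu = task_id % num_tcus
        elif policy == "random":
            # Dynamic mapping: task -> TCU changes with the run (seed).
            tcu = rng.randrange(num_tcus)
        else:
            raise ValueError(f"unknown policy: {policy}")
        hits += tcu in faulty_tcus
    return hits / num_tasks
```

Under round-robin, exactly 1/8 of the tasks (and always the same ones) land on the faulty unit, so the corrupted outputs are reproducible run to run; under the random policy, the affected task set changes with the seed, mirroring the dynamic variability and the difficulty of identifying deterministic effects that the abstract mentions.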
File: Analyzing_the_Impact_of_Scheduling_Policies_on_the_Reliability_of_GPUs_Running_CNN_Operations.pdf
Access: restricted
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 672.53 kB
Format: Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2989157