Appello, D.; Bernardi, P.; Calabrese, A.; Pollaccia, G.; Quer, S.; Tancorre, V.; Ugioli, R., "Parallel Multithread Analysis of Extremely Large Simulation Traces," IEEE Access, vol. 10, pp. 56440-56457, 2022, ISSN 2169-3536, DOI: 10.1109/ACCESS.2022.3177613.
Parallel Multithread Analysis of Extremely Large Simulation Traces
Paolo Bernardi; Andrea Calabrese; Stefano Quer
2022
Abstract
With the explosion in the size of off-the-shelf integrated circuits and the advent of novel techniques related to failure modes, commercial Automatic Test Pattern Generator and fault simulation engines are often insufficient to measure the coverage of particular metrics. Consequently, a common working framework consists of storing simulation traces during the analysis phase and collecting test statistics through post-processing. Unfortunately, typical simulation traces can be hundreds of gigabytes long, and their analysis can require several days, even on large and powerful computational servers. In this paper, we propose a set of strategies to reduce the evaluation time and the memory needed to analyze huge dump files stored in the standard Value Change Dump format. We concentrate on burn-in-related metrics that current commercial fault simulators and Automatic Test Pattern Generators cannot evaluate. We show how to divide the analysis process into several concurrent pipeline stages. We revise the logic of each stage and all principal intermediate data structures to adopt smart parallelization with very low contention and extremely low overhead. We exploit several low-level optimizations from modern programming techniques to reduce computation time and balance the different pipeline phases. We analyze simulation traces of up to almost 250 GBytes, computing different testing metrics. Overall, we keep memory usage under control, and we show time improvements of over two orders of magnitude compared to previously adopted state-of-the-art tools.
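The abstract describes streaming a Value Change Dump (VCD) trace through several concurrent pipeline stages connected with low-contention data structures. Below is a minimal, hedged sketch of such a producer/consumer pipeline in C++; the two-stage layout, the bounded-queue design, the toggle-count metric, the batch size, and the file name trace.vcd are illustrative assumptions for exposition, not the paper's actual implementation.

```cpp
// Minimal sketch (not the authors' tool): a two-stage concurrent pipeline that
// streams a VCD file and counts value changes per signal identifier.
#include <condition_variable>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

// Bounded blocking queue handing line batches from the reader stage to the parser stage.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(size_t cap) : cap_(cap) {}
    void push(T item) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [&] { return q_.size() < cap_; });
        q_.push(std::move(item));
        notEmpty_.notify_one();
    }
    std::optional<T> pop() {            // empty optional == producer finished
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [&] { return !q_.empty() || done_; });
        if (q_.empty()) return std::nullopt;
        T item = std::move(q_.front());
        q_.pop();
        notFull_.notify_one();
        return item;
    }
    void close() {                      // called once by the producer when the file ends
        std::lock_guard<std::mutex> lk(m_);
        done_ = true;
        notEmpty_.notify_all();
    }
private:
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
    std::queue<T> q_;
    size_t cap_;
    bool done_ = false;
};

int main() {
    using Batch = std::vector<std::string>;
    BoundedQueue<Batch> queue(64);                      // back-pressure keeps memory bounded
    std::unordered_map<std::string, uint64_t> toggles;  // toggle count per VCD identifier

    // Stage 1: reader thread streams the dump file in fixed-size line batches.
    std::thread reader([&] {
        std::ifstream vcd("trace.vcd");                 // hypothetical input trace
        Batch batch;
        for (std::string line; std::getline(vcd, line); ) {
            batch.push_back(std::move(line));
            if (batch.size() == 4096) { queue.push(std::move(batch)); batch = Batch{}; }
        }
        if (!batch.empty()) queue.push(std::move(batch));
        queue.close();
    });

    // Stage 2: parser thread interprets scalar value changes ("0<id>", "1<id>", ...).
    std::thread parser([&] {
        while (auto batch = queue.pop()) {
            for (const std::string& line : *batch) {
                if (line.empty() || line[0] == '$' || line[0] == '#') continue;  // skip directives, timestamps
                char v = line[0];
                if (v == '0' || v == '1' || v == 'x' || v == 'z')
                    ++toggles[line.substr(1)];          // identifier code follows the value
            }
        }
    });

    reader.join();
    parser.join();
    for (const auto& [id, n] : toggles)
        std::cout << id << " changed " << n << " times\n";
}
```

In a sketch like this, the bounded queue is what keeps memory usage under control: the reader blocks when the parser lags, so the whole trace is never resident at once. The paper's pipeline uses more stages and lower-overhead synchronization than this illustrative mutex-based queue.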
File | Description | Access | Version | License | Size | Format
---|---|---|---|---|---|---
ieeeAccess.pdf | Main article | Open access | Post-print editorial version / Version of Record | Creative Commons | 5.83 MB | Adobe PDF
https://hdl.handle.net/11583/2965802