Hardware Model Checking Competition 2014: An Analysis and Comparison of Model Checkers and Benchmarks / Cabodi, Gianpiero; Loiacono, Carmelo; Palena, Marco; Pasini, Paolo; Quer, Stefano; Patti, Denis; Vendraminetto, Danilo; Biere, Armin; Heljanko, Keijo. In: Journal on Satisfiability, Boolean Modeling and Computation, ISSN 1574-0617 (electronic), vol. 9 (2016), pp. 135-172.
Hardware Model Checking Competition 2014: An Analysis and Comparison of Model Checkers and Benchmarks
Cabodi, Gianpiero; Loiacono, Carmelo; Palena, Marco; Pasini, Paolo; Quer, Stefano; Patti, Denis; Vendraminetto, Danilo
2016
Abstract
Model checkers and sequential equivalence checkers have become essential tools for the semiconductor industry in recent years. The Hardware Model Checking Competition (HWMCC) was founded in 2006 with the purpose of intensifying research interest in these technologies and establishing more of a science behind them. For example, the competition provided a standardized benchmark format, a challenging and diverse set of industrially relevant public benchmarks, and, as a consequence, a significant motivation for additional research to advance the state of the art in model checkers for these verification problems. This paper provides a historical perspective and an analysis of the tools and benchmarks submitted to the competition. It also presents a detailed analysis of the results collected in the 2014 edition of the contest, showing relations among tools, and between tools and benchmarks. Finally, it proposes a list of considerations, lessons learned, and hints for both future organizers and competitors.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/2629229