
Cabodi, Gianpiero; Nocco, Sergio; Quer, Stefano. Benchmarking a Model Checker for Algorithmic Improvements and Tuning for Performance. In: Formal Methods in System Design, ISSN 0925-9856, 39:2 (2011), pp. 205-227. DOI: 10.1007/s10703-011-0123-3

Benchmarking a Model Checker for Algorithmic Improvements and Tuning for Performance

Cabodi, Gianpiero; Nocco, Sergio; Quer, Stefano
2011

Abstract

This paper describes a portfolio-based approach to model checking, i.e., an approach in which several model checking engines are orchestrated to reach the best possible performance on a broad set of real designs. Model checking algorithms are evaluated through experiments, and the experimental data inspire package tuning, as well as new algorithmic features and methodologies. This approach, albeit similar to several industrial and academic experiences, and already applied in other domains, is relatively new to the model checking field. Its contributions lie in describing how we: 1) characterize and classify benchmarks dynamically, across experimental runs, 2) relate model checking problems to algorithms and engines, 3) introduce dynamic tuning of sub-engines, exploiting an on-the-fly performance analysis, and 4) record the results of different approaches and derive heuristics targeting different classes of problems. We provide a detailed description of the experiments performed in preparation for the Model Checking Competition 2010, where PdTRAV, our academic verification tool, won the UNSAT division and ranked second in the OVERALL category.
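
To make the portfolio idea concrete, the sketch below shows a minimal driver that launches several engines in parallel on one benchmark and reports the first conclusive verdict. It is only an illustration under assumed conventions: the engine names, command lines, and output format ("sat"/"unsat") are hypothetical, and this is not the actual PdTRAV orchestration or tuning logic described in the paper.

```python
# Minimal portfolio-style sketch (hypothetical engines, not PdTRAV's code).
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical sub-engine command lines; real engines and flags will differ.
ENGINES = {
    "bmc": ["bmc_engine", "--depth", "100"],
    "itp": ["itp_engine"],
    "bdd": ["bdd_engine", "--reorder", "sift"],
}

def run_engine(name, cmd, benchmark, timeout):
    """Run one engine on the benchmark; return (name, verdict) or (name, None)."""
    try:
        proc = subprocess.run(cmd + [benchmark], capture_output=True,
                              text=True, timeout=timeout)
        out = proc.stdout.strip().lower()
        if "unsat" in out or "sat" in out:   # conclusive answer
            return name, out
    except (subprocess.TimeoutExpired, FileNotFoundError):
        pass                                 # inconclusive within the budget
    return name, None

def portfolio(benchmark, timeout=900):
    """Launch all engines concurrently; report the first conclusive verdict.
    In this simplified sketch, the remaining engines keep running until
    they finish or hit their own timeout."""
    with ThreadPoolExecutor(max_workers=len(ENGINES)) as pool:
        futures = [pool.submit(run_engine, name, cmd, benchmark, timeout)
                   for name, cmd in ENGINES.items()]
        for fut in as_completed(futures):
            name, verdict = fut.result()
            if verdict is not None:
                return name, verdict
    return None, "unknown"

if __name__ == "__main__":
    print(portfolio("benchmark.aig"))
```

A full portfolio tool would go further, e.g., by biasing the engine selection and time budgets using benchmark classification and on-the-fly performance data, as outlined in the abstract.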


Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2481585