Model-Based Verification: Analysis Guidelines
Journal contribution posted on 01.12.2001 by Grace A. Lewis, Santiago Cornella-Dordo, David Gluch, John J. Hudak, Charles B. Weinstock
This technical note provides guidance for the analysis activity that occurs during the interpretation of results produced by model-checking tools. In model-checking analysis, the central question is, "Does the system behave correctly?" To answer it, a model and a set of expected properties are supplied as input to a model checker, which either confirms or refutes each property. In most cases, when the model checker refutes a property, it produces a counterexample: an execution of the model showing the sequence of steps that violates the property. Sometimes the state space that must be explored to find such a counterexample is so large that it cannot be covered completely; this is the state explosion problem. Models must be tuned to reduce the state space, a manual and largely intuitive task.

Interpreting the model checker's output can also be difficult. The significance of the output must be assessed; its interpretation may point to an error in the claims, an error in the model, or a defect in the actual system. This document presents the problems involved in interpreting results and provides strategies for overcoming state explosion, analyzing results, and giving feedback to system designers and developers.
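The workflow the abstract describes — feed a model and an expected property to a checker and get back either a confirmation or a counterexample trace — can be illustrated with a minimal sketch. This is not the tooling the note discusses; it is a toy explicit-state reachability checker, and all names (`check_invariant`, the counter model) are hypothetical, chosen here only for illustration.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Explore the reachable state space breadth-first.

    Returns (True, None) if `invariant` holds in every reachable
    state, otherwise (False, path) where `path` is the sequence of
    steps from the initial state to a violating state -- i.e., a
    counterexample in the sense used above.
    """
    parent = {initial: None}       # also serves as the visited set
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            # Walk parent links back to reconstruct the refuting trace.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return False, list(reversed(path))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return True, None              # state space exhausted; property holds

# Toy model: a counter that wraps modulo 5. The claimed property
# "counter < 4" is false, so the checker returns a counterexample.
ok, trace = check_invariant(
    0,
    lambda s: [(s + 1) % 5],
    lambda s: s < 4,
)
```

Even this toy makes the state explosion problem visible: the visited set (`parent`) grows with every distinct reachable state, which is why real checkers rely on state-space reduction techniques that this note's guidance addresses.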