1996, IFIP Advances in Information and Communication Technology
The bitstate hashing, or supertrace, technique was introduced in 1987 as a method to increase the quality of verification by reachability analyses for applications that defeat analysis by traditional means because of their size. Since then, the technique has been included in many research verification tools, and was adopted in tools that are marketed commercially. It is therefore important that we understand well how and why the method works, what its limitations are, and how it compares with alternative methods over a broad range of problem sizes. The original motivation for the bitstate hashing technique was based on empirical evidence of its effectiveness. In this paper we provide an analytical argument. We compare the technique with two alternatives that have been proposed in the recent literature. We also describe a sequential bitstate hashing technique that can be of value when confronted with very large problem sizes.
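To make the mechanism concrete, the following minimal sketch in C (an illustration under assumed choices, not SPIN's actual implementation) shows the core of bitstate hashing: the set of visited states is a large bit array, a state is recorded by setting the bit addressed by a hash of its state vector, and a collision makes a genuinely new state look already visited, which is why the search is probabilistic rather than exhaustive. The FNV-1a hash, the array size, and the raw byte-vector view of a state are assumptions made for the example.

#include <stdint.h>
#include <stddef.h>

#define BITSTATE_BITS (1UL << 27)            /* 2^27 bits of state memory = 16 MB */

static uint8_t bitstate[BITSTATE_BITS / 8];  /* one bit per possible hash value   */

/* FNV-1a hash of the raw state vector, reduced to a bit address (illustrative). */
static uint64_t hash_state(const uint8_t *s, size_t len)
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= s[i];
        h *= 1099511628211ULL;
    }
    return h % BITSTATE_BITS;
}

/* Mark a state as visited; return 1 if its bit was already set, i.e. the state
 * was seen before or it collides with a different state (silent pruning). */
static int seen_before(const uint8_t *s, size_t len)
{
    uint64_t b = hash_state(s, len);
    int hit = (bitstate[b >> 3] >> (b & 7)) & 1;
    bitstate[b >> 3] |= (uint8_t)(1 << (b & 7));
    return hit;
}

A depth-first search that calls seen_before on every generated successor and expands only the states that report 0 visits each hash-distinct state once; omissions occur exactly when two reachable states hash to the same bit.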
Protocol Specification, Testing and Verification, 1995
The bitstate hashing, or supertrace, technique was introduced in 1987 as a method to increase the quality of verification by reachability analyses for applications that defeat analysis by traditional means because of their size. Since then, the technique has been included in many research verification tools, and was adopted in tools that are marketed commercially. It is therefore important that we understand well how and why the method works, what its limitations are, and how it compares with alternative methods over a broad range of problem sizes.
Formal Techniques for Networked and Distributed Systems, 1998
Partial search methods, like bitstate hashing or supertrace, allow formal verification techniques to be applied to problems which normally could not be solved by exhaustive verification. A high coverage (defined as the percentage of the reachable states actually explored by the verifier) is important, since the higher the coverage the lower the probability that a protocol error goes undetected. In the literature, sequential hashing has been proposed to improve the coverage of supertrace (i.e. running supertrace repeatedly with different hash functions). Since supertrace is included in many commercial and noncommercial verification tools, it is important to know what its limitations are and where there is potential for improvement. We present both theoretical and experimental results to measure the quality of sequential hashing with supertrace. Tests made with the SPIN validator with different classes of hash functions show that the additional number of states reached in successive runs decreases rapidly; a coverage close to 1 is practically unattainable. While all the hash functions have the same asymptotic behaviour, universal hashing seems to perform best. We present an analysis for the expected coverage of sequential hashing which accurately explains the experimental results. Our new mathematical model shows that the overlap of successive runs is much higher than predicted by earlier approaches. Additionally, it predicts that simple randomization strategies, which in successive runs randomly change both the hash function and the traversal order of the states, can increase the coverage significantly. Experimental results show that our new algorithm, universal supertrace, which is based on such randomization techniques, outperforms the original supertrace and allows the coverage to be increased by a factor of up to 8-10.
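The sequential-hashing loop, together with the randomization that distinguishes universal supertrace, can be sketched as follows. The hook functions (clear_bitstate, set_hash_seed, shuffle_successor_order, dfs_from_initial_state) are hypothetical placeholders for the verifier's internals, not SPIN interfaces; only the control structure is the point of the example.

#include <stdlib.h>

/* Hypothetical hooks into the search engine, assumed for illustration only. */
extern void clear_bitstate(void);                   /* empty the bit array              */
extern void set_hash_seed(unsigned seed);           /* pick a member of the hash family */
extern void shuffle_successor_order(unsigned seed); /* randomize the traversal order    */
extern void dfs_from_initial_state(void);           /* one complete bitstate run        */

/* Repeat the partial search: states lost to hash collisions in one run may be
 * reached in a later run that uses a different hash function and, in the
 * randomized variant, a different traversal order. */
void sequential_hashing(int runs, unsigned base_seed)
{
    srand(base_seed);
    for (int r = 0; r < runs; r++) {
        clear_bitstate();
        set_hash_seed((unsigned)rand());
        shuffle_successor_order((unsigned)rand());  /* the universal-supertrace twist */
        dfs_from_initial_state();
    }
}

Without the traversal-order randomization the runs overlap heavily, which is why changing only the hash function gives the rapidly diminishing returns reported above.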
2000
Our experience with semi-exhaustive verification shows a severe degradation in usability for corner-case bugs, where the tuning effort becomes much higher and recovery from dead-ends becomes increasingly difficult. Moreover, when there are no bugs at all, shifting from semi-exhaustive to exhaustive traversal is very expensive, if not impossible. This makes the output of semi-exhaustive verification on non-buggy designs very ambiguous.
Protocol Specification, Testing and Verification, XII, 1992
We study the effect of three new reduction strategies for conventional reachability analysis, as used in automated protocol validation algorithms. The first two strategies are implementations of partial order semantics rules that attempt to minimize the number of execution sequences that need to be explored for a full state space exploration. The third strategy is the implementation of a state compression scheme that attempts to minimize the amount of memory used to build a state space. The three strategies are shown to have the potential to substantially improve the performance of a conventional search. The paper discusses the optimal choices for reducing either run time or memory requirements by a factor of four to six. The strategies can readily be combined with each other and with alternative state space reduction techniques such as supertrace or state space caching methods.
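As an illustration of the state-compression idea, the sketch below interns each component's local state in a per-component table so that a global state shrinks to a short vector of table indices; the layout, sizes, and linear lookup are assumptions made for the example, not the paper's exact scheme.

#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NCOMP      4        /* number of processes/components (illustrative)  */
#define LOCAL_SIZE 64       /* bytes per local state (illustrative)           */
#define MAX_LOCAL  4096     /* distinct local states kept per component       */

static uint8_t local_tab[NCOMP][MAX_LOCAL][LOCAL_SIZE];
static int     local_cnt[NCOMP];

typedef struct { uint16_t idx[NCOMP]; } compressed_state;  /* 8 bytes per global state */

/* Intern one component's local state and return its table index. */
static uint16_t intern_local(int comp, const uint8_t *ls)
{
    for (int i = 0; i < local_cnt[comp]; i++)
        if (memcmp(local_tab[comp][i], ls, LOCAL_SIZE) == 0)
            return (uint16_t)i;
    assert(local_cnt[comp] < MAX_LOCAL);        /* overflow handling omitted in sketch */
    memcpy(local_tab[comp][local_cnt[comp]], ls, LOCAL_SIZE);
    return (uint16_t)local_cnt[comp]++;
}

/* Compress a full global state (NCOMP local states laid out back to back). */
static compressed_state compress(const uint8_t *global)
{
    compressed_state cs;
    for (int c = 0; c < NCOMP; c++)
        cs.idx[c] = intern_local(c, global + c * LOCAL_SIZE);
    return cs;
}

Only the 8-byte compressed form needs to be stored in the visited-state table; the per-component tables grow with the number of distinct local states, which is typically far smaller than the number of global states.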
This paper describes and analyzes a probabilistic technique to reduce the memory requirement of the table of reached states maintained in verification by explicit state enumeration. The memory savings of the new scheme come at the price of a certain probability that the search becomes incomplete. However, this probability can be made negligibly small by using typically 40 bits of memory per state.
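A minimal sketch of this kind of scheme, under assumed choices (an open-addressed table, linear probing, and a signature taken from the low bits of the state hash): the visited-state table keeps only a short signature per state instead of the full state vector, and the search can miss states only when two distinct reachable states produce the same signature along the same probe path.

#include <stdint.h>

#define TABLE_SLOTS (1u << 22)           /* ~4M slots, open addressing          */
static uint64_t table[TABLE_SLOTS];      /* 0 = empty, otherwise a signature    */

/* Record a state by a 40-bit signature derived from its hash value. Returns 1
 * if the signature was already present (seen before, or an undetected
 * collision) and 0 if it was newly inserted. */
static int remember(uint64_t full_hash)
{
    uint64_t sig  = (full_hash & ((1ULL << 40) - 1)) | 1;  /* low bit forced so 0 can mean "empty" */
    uint32_t slot = (uint32_t)(full_hash % TABLE_SLOTS);

    for (uint32_t probes = 0; probes < TABLE_SLOTS; probes++) {
        if (table[slot] == 0) { table[slot] = sig; return 0; }
        if (table[slot] == sig) return 1;
        slot = (slot + 1) % TABLE_SLOTS;                    /* linear probing    */
    }
    return 1;  /* table full: treated as seen, so the search becomes incomplete */
}

Each entry carries about 40 significant bits (stored in a 64-bit word here purely for simplicity), matching the roughly 40 bits of memory per state mentioned in the abstract; a real implementation would pack the signatures tightly.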
Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering - ASE '05, 2005
Partial order (p.o.) reduction techniques are a popular and effective approach for tackling state space explosion in the verification of concurrent systems. These techniques generate a reduced search space that could be exponentially smaller than the complete state space. Their major drawback is that the amount of reduction achieved is highly sensitive to the properties being verified. For the same program, different properties could result in very different amounts of reduction achieved. We present a new approach which combines the benefits of p.o. reduction with the added advantage that the size of the constructed state space is completely independent of the properties being verified. As in p.o. reduction, we use the notion of persistent sets to construct a representative interleaving for each maximal trace of the program. However, we retain concurrency information by assigning vector timestamps to the events in each interleaving. Our approach hinges upon the use of efficient algorithms that parse the encoded concurrency information in the representative interleaving to determine whether a safety violation exists in any interleaving of the corresponding trace. We show that, for some types of predicates, reachability detection can be performed in time that is polynomial in the length of the interleaving. Typically, these predicates exhibit certain characteristics that can be exploited by the detection algorithm. We implemented our algorithms in the popular model checker SPIN, and present experimental results that demonstrate the effectiveness of our techniques. For example, we verified a distributed dining philosophers protocol in 0.03 seconds, using 1.253 MB of memory. SPIN, using traditional p.o. reduction techniques, took 759.71 seconds and 439.116 MB of memory.
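The role of the vector timestamps can be illustrated with a small sketch (an assumed representation, not the paper's or SPIN's data structures): each event in the representative interleaving carries a vector clock, and two events are concurrent exactly when neither clock dominates the other, so a global state containing both is reachable in some interleaving of the same trace.

#include <stdbool.h>

#define NPROC 8                          /* number of processes (illustrative) */

typedef struct { int c[NPROC]; } vclock; /* one counter per process            */

/* a happens-before b iff a.c <= b.c componentwise and the clocks differ */
static bool happens_before(const vclock *a, const vclock *b)
{
    bool strictly_less = false;
    for (int i = 0; i < NPROC; i++) {
        if (a->c[i] > b->c[i]) return false;
        if (a->c[i] < b->c[i]) strictly_less = true;
    }
    return strictly_less;
}

/* Two events can appear together in a consistent global state iff they are
 * causally unordered, i.e. neither happens before the other. */
static bool concurrent(const vclock *a, const vclock *b)
{
    return !happens_before(a, b) && !happens_before(b, a);
}

A consistent global state of the trace corresponds to choosing pairwise concurrent points, one per process, which is the kind of check that detection algorithms over the representative interleaving can build on.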
Software: Practice and Experience, 1988
An automated analysis of all reachable states in a distributed system can be used to trace obscure logical errors that would be very hard to find manually. This type of validation is traditionally performed by the symbolic execution of a finite state machine (FSM) model of the system studied.
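For contrast with the partial-search methods above, a conventional exhaustive reachability analysis stores every visited state in full, as in this simplified sketch (the state layout, the user-supplied successor functions, and the linear-scan visited set are assumptions for illustration; real tools use hash tables):

#include <stdint.h>
#include <string.h>

typedef struct { uint32_t word[4]; } state_t;  /* fixed-size state vector (assumed) */

/* User-supplied model: number of successors of s, and its i-th successor. */
extern int     num_successors(const state_t *s);
extern state_t successor(const state_t *s, int i);

#define MAX_STATES 100000
static state_t visited[MAX_STATES];  /* every reached state, stored in full */
static state_t queue[MAX_STATES];
static int nvisited, head, tail;

static int seen(const state_t *s)
{
    for (int i = 0; i < nvisited; i++)
        if (memcmp(&visited[i], s, sizeof *s) == 0)
            return 1;
    return 0;
}

/* Breadth-first exploration of all states reachable from init. */
void explore(state_t init)
{
    visited[nvisited++] = init;
    queue[tail++] = init;
    while (head < tail) {
        state_t s = queue[head++];
        for (int i = 0; i < num_successors(&s); i++) {
            state_t t = successor(&s, i);
            if (!seen(&t) && nvisited < MAX_STATES) {
                visited[nvisited++] = t;
                queue[tail++] = t;
            }
        }
    }
}

The memory spent on the full visited table is exactly what bitstate hashing and the compressed-signature schemes above trade away in exchange for a small probability of missing states.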
2004
Reachability checking and pre-image computation are fundamental problems in ATPG and formal verification. Traditional sequential search techniques based on ATPG/SAT or on OBDDs have diverging strengths and weaknesses. In this paper, we describe how structural analysis and conflict-based learning are combined in order to improve the efficiency of sequential search. We use conflict-based learning and illegal state learning across time-frames. We also address issues in efficiently bounding the search space in a single time-frame and across time-frames. We analyze each of these techniques experimentally and demonstrate the advantages of each technique. We compare performance against a commercial sequential ATPG engine and VIS [13] on a set of standard benchmarks.
Lecture Notes in Computer Science, 1997
prod is a reachability analyzer for Predicate/Transition Nets.