2005, Proceedings of the ASP-DAC 2005. Asia and South Pacific Design Automation Conference, 2005.
Coverage metrics, which evaluate the ability of a test sequence to detect design faults, are essential to the validation process. A key source of difficulty in determining fault detection is that the control-flow path traversed in the presence of a fault cannot be determined. Fault detection can only be accurately determined by exploring the set of all control-flow paths that may be traversed as a result of a fault. We present a coverage metric that determines the propagation of fault effects along all possible faulty control-flow paths. The complexity of exploring multiple control-flow paths is greatly alleviated by heuristically pruning infeasible control-flow paths using the algorithm that we present. The proposed coverage metric provides high accuracy in designs that contain complex control flow, and the results obtained are promising.
Journal of Computer Science, 2009
Problem statement: Dynamic verification, the use of simulation to determine design correctness, is widely used because it remains tractable for large hardware designs. A serious limitation of dynamic techniques is the difficulty of determining whether a test sequence is sufficient to detect all likely design errors. Coverage metrics address this problem by providing a set of goals to be achieved during the simulation process; if all coverage goals are satisfied, the test sequence is assumed to be complete. Coverage metrics thus evaluate the ability of a test sequence to detect design errors and are essential to the verification process. A key source of difficulty in determining error detection is that the control-flow path traversed in the presence of an error cannot be determined. The problem becomes particularly difficult in typical industrial designs, where control-flow paths of concurrent processes interact. Error detection can only be accurately determined by exploring the set of all control-flow paths that may be traversed as a result of an error. Moreover, there is no established technique for correlating coverage metrics with hardware design quality. Approach: We present a coverage metric that determines the propagation of error effects along all possible erroneous control-flow paths across processes. The complexity of exploring multiple control-flow paths is greatly alleviated by heuristically pruning infeasible control-flow paths using the algorithm that we present. We also present a technique for evaluating a coverage metric by examining its ability to ensure the detection of real design errors: we injected errors into the design and correlated their detection with the coverage computed by our metric. Results: Although our coverage metric analyzes all control-flow paths, it prunes the infeasible ones and eliminates them from coverage consideration, reducing the complexity of generating tests meant to execute them. The metric also correlates better with the detection of design errors than several well-studied metrics. Conclusion: The proposed coverage metric provides accurate coverage measurement for designs that contain complex control flow with concurrent processes, and it is superior at detecting design errors compared with the metrics it was evaluated against.
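The idea of pruning infeasible control-flow paths before computing coverage can be illustrated with a small sketch. The toy CFG, the textual condition syntax and the bound-based feasibility heuristic below are assumptions made purely for illustration; they are not the paper's algorithm, which also accounts for the data path and concurrent processes.

```python
# Minimal sketch of heuristic path pruning (not the paper's algorithm): walk a
# toy control-flow graph, accumulate the branch conditions along each path, and
# discard paths whose accumulated constraints on the variable are contradictory.
import re

# Each edge carries the condition that must hold for that branch to be taken.
CFG = {
    "entry": [("then1", "x > 0"), ("else1", "x <= 0")],
    "then1": [("then2", "x < -5"), ("else2", "x >= -5")],  # "x < -5" clashes with "x > 0"
    "else1": [("exit", None)],
    "then2": [("exit", None)],
    "else2": [("exit", None)],
}

def infeasible(conds):
    """Crude heuristic: derive integer lower/upper bounds on x and reject empty ranges."""
    lo, hi = float("-inf"), float("inf")
    for c in conds:
        op, k = re.match(r"x\s*(<=|<|>=|>)\s*(-?\d+)", c).groups()
        k = int(k)
        if op == "<":
            hi = min(hi, k - 1)
        elif op == "<=":
            hi = min(hi, k)
        elif op == ">":
            lo = max(lo, k + 1)
        else:
            lo = max(lo, k)
    return lo > hi

def paths(node="entry", conds=()):
    """Enumerate exit-reaching paths, pruning infeasible ones early."""
    if node == "exit":
        yield conds
        return
    for succ, cond in CFG[node]:
        new = conds + ((cond,) if cond else ())
        if cond and infeasible(new):
            continue                      # prune the infeasible path
        yield from paths(succ, new)

for p in paths():
    print(p)
```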
Proc. 27th International Computer Software and Applications Conference (COMPSAC 2003), 2003
The effectiveness of a testing criterion is its ability to detect failures in a software program. We consider not only the effectiveness of a testing criterion in itself but also the variance in effectiveness among different test sets that satisfy the same criterion. We call this property the 'tolerance' of a testing criterion and show that, for practical use of a criterion, high tolerance is as important as high effectiveness. The results of an empirical evaluation of tolerance for different criteria, fault types and decisions are presented. In addition to simple and well-known control-flow criteria, we study more complex criteria: Full Predicate Coverage, Modified Condition/Decision Coverage and Reinforced Condition/Decision Coverage.
1998
Abstract: We present a fast, dynamic fault coverage estimation technique for sequential circuits that achieves high accuracy by significantly reducing the number of injected faults and faulty-event evaluations. Specifically, we dynamically reduce the injection of two types of faults: (1) hyperactive faults that never get detected, and (2) faults whose effects never propagate to a flip-flop or primary output. The cost of fault simulation is greatly reduced because the injection of most of these faults is prevented.
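The fault-dropping idea can be shown with a toy example (not the paper's estimator): faults whose effects fail to reach a flip-flop or primary output for several consecutive vectors, and undetected faults whose faulty-event counts explode, stop being injected. The thresholds and the trace format below are invented for the sketch.

```python
# Toy illustration of dynamic fault dropping during sequential fault simulation.
STALL_LIMIT = 3        # consecutive vectors with no propagation before dropping
EVENT_LIMIT = 10_000   # cumulative faulty events before a fault is "hyperactive"

def prune_faults(fault_ids, trace):
    """trace[v][f] = (propagated_to_ff_or_po, detected_at_po, n_faulty_events)."""
    active, detected = set(fault_ids), set()
    stall = {f: 0 for f in fault_ids}
    events = {f: 0 for f in fault_ids}
    for per_vector in trace:
        for f in list(active):
            prop, det, n = per_vector.get(f, (False, False, 0))
            events[f] += n
            if det:
                detected.add(f)
                active.discard(f)
                continue
            stall[f] = 0 if prop else stall[f] + 1
            if stall[f] >= STALL_LIMIT or events[f] >= EVENT_LIMIT:
                active.discard(f)          # stop injecting this fault
    return detected, active

# Example: fault "b" never propagates and is dropped after three vectors.
trace = [
    {"a": (True, False, 5), "b": (False, False, 0)},
    {"a": (True, True, 4),  "b": (False, False, 0)},
    {"a": (False, False, 0), "b": (False, False, 0)},
]
print(prune_faults(["a", "b"], trace))
```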
2006
Abstract: Functional validation of modern microprocessors is an important and complex problem. One of the challenges in functional validation is the generation of test cases that have a high potential to find bugs in the design. Coverage-directed test generation techniques create test suites that satisfy a coverage metric and are used extensively in validating software and hardware. In this paper, we address the question "Which coverage metric is better at finding the design bugs commonly seen during microprocessor validation?".
Software Testing, Verification & Reliability, 2004
Fault-detection effectiveness of coverage criteria has remained one of the controversial issues in recent years. In order to detect a fault, a test set must execute the faulty statement, cause infection of the data state and then propagate the faulty data state to bring about a failure. This paper sheds some light on the earlier contradictory results by investigating the infection aspect of coverage criteria. For a given test criterion, the number of test sets satisfying the criterion may be very large, with varying fault-detection effectiveness. In a recent work the measure of variation in effectiveness of a test criterion was defined as 'tolerance'. This paper presents an experimental evaluation of tolerance for control-flow test criteria by exhaustive test set generation, wherever possible. The approach used here is complementary to earlier empirical studies that adopted analysis of some test sets using random selection techniques. Four industrially used control-flow testing criteria, Condition Coverage (CC), Decision Condition Coverage (DCC), Full Predicate Coverage (FPC) and Modified Condition Decision Coverage (MCDC), have been analysed against four types of faults. A new test criterion, Reinforced Condition Decision Coverage (RCDC), is also analysed and compared. Copyright © 2004 John Wiley & Sons, Ltd.
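For readers unfamiliar with MC/DC, the sketch below checks a candidate test set against the decision (A and B) or C; the decision and the four test vectors are invented for illustration and do not come from the paper. MC/DC requires, for every condition, a pair of tests that differ only in that condition and change the decision outcome.

```python
# Worked illustration: verify an MC/DC test set for the decision (A and B) or C.
from itertools import product

def decision(A, B, C):
    return (A and B) or C

conditions = ["A", "B", "C"]
tests = [  # candidate MC/DC set: n + 1 = 4 vectors for 3 conditions
    dict(A=True,  B=True,  C=False),
    dict(A=False, B=True,  C=False),
    dict(A=True,  B=False, C=False),
    dict(A=True,  B=False, C=True),
]

def shows_independence(cond, t1, t2):
    """True if t1/t2 differ only in `cond` and the decision outcome changes."""
    others_equal = all(t1[c] == t2[c] for c in conditions if c != cond)
    return others_equal and t1[cond] != t2[cond] and decision(**t1) != decision(**t2)

for cond in conditions:
    ok = any(shows_independence(cond, t1, t2)
             for t1, t2 in product(tests, repeat=2))
    print(f"condition {cond}: independence pair found = {ok}")
```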
Proceedings European Design and Test Conference. ED & TC 97
This paper presents a testability analysis and improvement technique for the controller of an RT-level design. It detects hard-to-reach states by analyzing both the data path and the controller of a design. The controller is modified using register initialization, branch control, and loop termination methods to enhance its state reachability. This technique complements the data path scan method and can be used to avoid scanning registers involved in the critical paths. Experimental results show improved fault coverage with a very low area overhead.
Journal of Electronic Testing, 1998
A BIST-based test synthesis methodology for control-flow intensive behaviors is proposed. This methodology targets the control statements in a behavioral description, such as if-then-else and loop statements, because such statements can introduce testability problems in the resulting circuit. How well the operations in each branch of a control statement can be tested depends on the probability of taking each branch and the quality of the test patterns used in each branch. Behavioral modifications are presented that can resolve these testability issues. The proposed methodology systematically identifies poor testability areas within a behavior and applies the behavioral modifications to improve the testability. Experimental results from six practical examples show that this technique is effective.
2007 Design, Automation & Test in Europe Conference & Exhibition, 2007
Functional validation of modern microprocessors is an important and complex problem. One of the problems in functional validation is the generation of test cases that have a high potential to find faults in the design. We propose a model-based test generation framework that generates tests for design fault classes inspired by software validation. There are two main contributions in this paper. First, we propose a microprocessor modeling and test generation framework that generates test suites to satisfy Modified Condition Decision Coverage (MCDC), a structural coverage metric that detects most of the classified design faults, and to target the remaining faults not covered by MCDC. Second, we show that there is good correlation between the types of design faults proposed by software validation and the errors/bugs reported in case studies on microprocessor validation. We demonstrate the framework by modeling and generating tests for the microarchitecture of VESPA, a 32-bit microprocessor. In the results section, we show that the tests generated using our framework's coverage-directed approach detect the fault classes with 100% coverage, in contrast to model-random test generation.
2014
Mechatronic systems operating in industrial environments are subject to a variety of threats because of harsh conditions. Industrial systems usually use commercial off-the-shelf (COTS) equipment, which is not robust or safe against hostile conditions and therefore requires fault-tolerance considerations. This paper presents a novel and efficient method for online detection of control flow errors, called software-based control flow checking (SCFC). It is implemented purely in software and does not alter the hardware architecture of the system. Redundant instructions and signatures are embedded into the program at compile time and are used for control flow checking at run time. The signatures of the basic blocks are derived from the program graph. The paper shows that the SCFC method can increase the single-detection capability by 14.7% and the fault coverage by 6.12% on average compared with other methods, without any increase in memory or performance overheads. Besides experimental evaluations, analytical evaluations based on probability principles are also carried out, and the detection ability of each method is computed. These computations verify the experimental results and show that SCFC can detect more errors than other methods suggested in the literature. Considering the memory limitations in some applications (such as space applications) and the trend towards faster execution of programs, we suggest a novel metric called the fitness parameter. It is a better measure than previously proposed ones since it simultaneously considers the fault coverage, the memory overhead and the execution time (performance overhead) of each method, as well as the detection capability.
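A generic signature-monitoring sketch (not the SCFC encoding itself) may help illustrate the mechanism: each basic block receives a compile-time signature derived from the program graph, a runtime signature register is updated on every block entry, and a mismatch or an illegal transfer flags a control-flow error. The block names, signature values and XOR update rule below are assumptions.

```python
# Generic signature monitoring for control-flow checking. Real techniques
# (e.g. CFCSS-style schemes) encode transfer legality purely in the signature
# arithmetic; this toy keeps an explicit successor table for readability.
SIG  = {"B0": 0b0001, "B1": 0b0010, "B2": 0b0100, "B3": 0b1000}
SUCC = {"B0": {"B1", "B2"}, "B1": {"B3"}, "B2": {"B3"}, "B3": set()}

class CFError(Exception):
    pass

class Monitor:
    def __init__(self, entry):
        self.current = entry
        self.reg = SIG[entry]                         # runtime signature register

    def enter(self, block):
        if block not in SUCC[self.current]:
            raise CFError(f"illegal transfer {self.current} -> {block}")
        self.reg ^= SIG[self.current] ^ SIG[block]    # XOR update at block entry
        if self.reg != SIG[block]:
            raise CFError(f"signature mismatch in {block}")
        self.current = block

m = Monitor("B0")
m.enter("B1")
m.enter("B3")                    # legal path B0 -> B1 -> B3 passes silently
try:
    Monitor("B0").enter("B3")    # skipped block: detected as a control-flow error
except CFError as e:
    print("detected:", e)
```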
2012
Abstract—FALCON (FAst fauLt COverage estimatioN) is a scalable method for fault grading which uses local fault simulations to estimate the fault coverage of a large system. The generality of this method makes it applicable to any modular design. Our analysis shows that the run time of our algorithm is related to the number of gates and the number of IOs in a module, while fault simulation run time is related to the total number of gates in the system. We have measured fault coverage for the OR1200 and IVM processors and compared the results with fault simulation performed by a commercial tool. We have also compared our results with fault sampling. Our results show that for large designs FALCON runs an order of magnitude faster than fault simulation. It also has a smaller error rate than fault sampling as the size of the design under test grows.
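One plausible way to combine local results into a system-level figure (not necessarily FALCON's estimator) is to weight each module's locally estimated coverage by its fault count, as in the sketch below; the module names and numbers are invented.

```python
# Back-of-the-envelope combination of per-module fault coverages.
modules = {
    # name: (number of collapsed faults in the module, locally estimated coverage)
    "fetch":  (12_400, 0.91),
    "decode": ( 8_900, 0.87),
    "alu":    (15_600, 0.95),
    "lsu":    (10_200, 0.83),
}

total_faults = sum(n for n, _ in modules.values())
estimated_coverage = sum(n * c for n, c in modules.values()) / total_faults
print(f"estimated system fault coverage ≈ {estimated_coverage:.2%}")
```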
IEEE Design & Test of Computers, 2001
Formal Aspects of Computing, 2006
This paper describes an approach to the formalization of existing criteria used in computer systems software testing and proposes a new Reinforced Condition/Decision Coverage (RC/DC) criterion. This new criterion has been developed from the well-known Modified Condition/Decision Coverage (MC/DC) criterion and is more suitable for the testing of safety-critical software where MC/DC may not provide adequate assurance. As a formal language for describing the criteria, the Z notation has been selected. Formal definitions in the Z notation for RC/DC, as well as MC/DC and other criteria, are presented. Specific examples of using these criteria for specification-based testing are considered and some features are formally proved. This characterization is helpful in the understanding of different types of testing and also the correct application of a desired testing regime.
Advances on P2P, Parallel, Grid, Cloud and Internet Computing, 2019
Many software-implemented control flow error detection techniques have been proposed over the years. In an effort to reduce their overhead, recent research has focused on selective approaches. However, correctly applying these approaches can be difficult. This paper aims to address this concern and proposes a new approach. Our new approach is easier to implement and is applicable to any existing control flow error detection technique. To prove its validity, we apply the new approach to the Random Additive Control Flow Error Detection technique and perform fault injection experiments. The results show that the selective implementation achieves approximately the same error detection ratio with a lower execution time overhead.
2006 IEEE International Test Conference, 2006
Statistical stuck-at fault coverage estimation assumes that signals at primary inputs and at other internal gates of the circuit are statistically independent. While valid for random and pseudo-random inputs, this causes substantial errors in coverage estimation for input sequences that are functional and not random, as shown by experimental data presented in this paper. At internal gates, signal correlation due to fanout reconvergence, even for random input sequences, contributes to errors. A significantly improved coverage estimation algorithm is presented in this paper. First, during logic simulation we identify faults that are guaranteed to stay undetected by the applied vectors. Then, after logic simulation, we estimate the detection probabilities of the remaining faults. Compared to STAFAN, the statistics gathered during logic simulation are modified in order to eliminate the non-random biasing of the input sequence. Besides the improved detection probabilities, a newly defined effective length (N_eff) of the vector sequence corrects for the temporally correlated signals. Experimental results for ISCAS combinational benchmarks demonstrate the validity of this approach.
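For context, a textbook STAFAN-style estimate (not this paper's improved algorithm) derives a per-vector detection probability from controllability and observability statistics and compounds it over the sequence; the sketch below uses the effective length N_eff in place of the raw vector count, with invented numbers.

```python
# Sketch of a STAFAN-style fault coverage estimate.
def detection_probability(c, o, n_eff):
    """c: probability the line carries the value the fault flips,
    o: probability a change on the line is observed at an output,
    n_eff: effective number of independent vectors in the sequence."""
    per_vector = c * o
    return 1.0 - (1.0 - per_vector) ** n_eff

faults = [  # (controllability of the opposite value, observability) per fault
    (0.40, 0.30),
    (0.05, 0.60),
    (0.25, 0.10),
]
n_eff = 120.0   # effective length assumed for a 200-vector functional sequence
coverage = sum(detection_probability(c, o, n_eff) for c, o in faults) / len(faults)
print(f"estimated fault coverage ≈ {coverage:.2%}")
```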
IEEE Transactions on Computers, 2016
Detecting the effects of transient faults is a key point in many processor-based safety-critical applications. This paper proposes to adopt the debug interface module that exists today in several processors/controllers available on the market. In this way, we can achieve a good detection capability and small latency with respect to control flow errors, while the cost of adopting the proposed technique is rather limited and does not involve any change in either the processor hardware or the application software. The method works even if the processor uses caches, and we experimentally evaluated its characteristics on two pipelined processors, demonstrating its advantages and showing its limitations. Experimental results obtained by fault injection with different software applications demonstrate that the method is able to achieve high fault coverage (more than 95% in nearly all the considered cases) with a limited cost in terms of area and performance degradation.
2007
This work presents a methodology for automatic test vector generation for SystemC combinational designs based on code coverage analysis, which is complementary to functional testing. The method uses coverage information to generate test vectors capable of covering the portions of code not exercised by black-box testing. Vectors are generated by running an instrumented version of the code under a numerical optimization method. This approach does not suffer from restrictions related to symbolic execution, such as defining array reference values and loop boundaries, because the code is actually executed during the optimization. We expect this combined methodology to achieve total code coverage of the design and to reduce the fault-of-omission problem, which is undetectable by structural testing alone.
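A minimal sketch of the optimization idea, under the assumption that the instrumented code exposes a branch-distance value that is zero exactly when an uncovered branch is taken; the stand-in predicate and the random-restart hill climber below are illustrative, not the paper's SystemC flow.

```python
# Coverage-directed vector search by numerical optimization of a branch distance.
import random

def branch_distance(a, b):
    """Stand-in for an instrumented condition `if (a * 3 - b == 7)`:
    the distance is 0 exactly when the uncovered branch would be taken."""
    return abs(a * 3 - b - 7)

def hill_climb(max_restarts=50, max_steps=200, lo=-100, hi=100):
    for _ in range(max_restarts):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        for _ in range(max_steps):
            if branch_distance(a, b) == 0:
                return a, b                        # covering vector found
            neighbours = [(a + da, b + db)
                          for da in (-1, 0, 1) for db in (-1, 0, 1)]
            a, b = min(neighbours, key=lambda p: branch_distance(*p))
    return None

print("covering input:", hill_climb())
```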
2009 10th Latin American Test Workshop, 2009
The paper proposes structural coverage metrics based on the High-Level Decision Diagram (HLDD) model that are applicable to both verification and high-level test. Previous works have shown that HLDDs are an efficient model for simulation and test generation. However, the coverage properties of HLDDs relative to Hardware Description Languages (HDL) have not been studied in detail before. In this paper we show that the proposed methodology allows more stringent structural coverage analysis than traditional VHDL code coverage. Furthermore, the main new contribution of the paper is a hierarchical approach to condition coverage metric analysis that is based on HLDDs with expansion graphs for conditional nodes. Experiments on ITC99 benchmarks show that up to a 14% increase in coverage accuracy can be achieved by the proposed methodology.
2010
This paper proposes two efficient software techniques, Control-flow and Data Errors Correction using Data-flow Graph Consideration (CDCC) and Miniaturized Check-Pointing (MCP), to detect and correct control-flow errors. These techniques are implemented by adding redundant code to a given program. Their novelty for online detection and correction of control-flow errors is the use of the data-flow graph alongside the control-flow graph. The techniques first detect most of the control-flow errors in a program and then correct them automatically. Therefore, both control-flow errors and the data errors they cause can be corrected efficiently. In order to evaluate the proposed techniques, a post-compiler is used, so that the techniques can be applied transparently to any 80x86 binary. Three benchmarks, quicksort, matrix multiplication and linked list, are used, and a total of 5000 transient faults are injected at several executable points in each program. The experimental results demonstrate that at least 93% and 89% of the control-flow errors can be detected and corrected without generating any data errors by CDCC and MCP, respectively. Moreover, the strength of these techniques is a significant reduction in performance and memory overheads compared with traditional methods, while providing remarkable correction ability.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2013
This paper introduces a new metric to characterize test sets in terms of their diagnostic power. Our method uses much less space than existing ones and is quite accurate. The metric can be utilized to increase the diagnosability of incompletely specified test sets via don't-care filling. The X-filling approach can be integrated with test pattern generation tools to aid better diagnostic pattern set generation. Index Terms: fault clustering, fault diagnosis, fault dictionary, indistinguishable fault pairs, X-filling.
I. Introduction. Fault diagnosis plays a major role in fast yield ramp-up. With the increasing complexity of integrated circuit (IC) logic design and the increasing difficulty of physical inspection in today's multilayer deep sub-micron devices, observation has become exceedingly expensive and time consuming. The primary responsibility of any diagnosis algorithm is to accurately narrow down the list of suspected candidates. Almost all diagnosis algorithms proposed in the literature [1]-[3] use the failure information produced by the tester. Some diagnosis algorithms [4], [5] also use the passing patterns to narrow down the list further. Overall, the backbone of any diagnosis algorithm is the test set in use. Consider two faults f_1 and f_2 having similar responses for all patterns in the test set in use. If either of these two faults occurs, all diagnosis algorithms (which use this test set) will report both faults with the same rank. This leads us to the problem of assessing test sets in terms of their diagnostic capability. The exact brute-force method requires building a fault dictionary, which consists of all the faulty responses for each test vector. However, for a test set, the memory required to store the fault dictionary is O(F * T * O), where F is the number of faults, T is the number of patterns in the test set, and O is the number of outputs of the circuit. In [6], the authors proposed a single structure to store all information related to all F faults in the circuit with respect to a pattern k. They used an F * F distinguishability matrix D_k. The generic element d_k(i, j) of the matrix is one if and only if pattern k can distinguish between faults f_i and f_j, and zero otherwise.
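The brute-force baseline described above can be sketched directly: build the per-pattern distinguishability matrix D_k from stored faulty responses and count the fault pairs that no pattern distinguishes. The fault responses below are invented; the paper's contribution is precisely to avoid storing this full dictionary.

```python
# Brute-force diagnostic assessment of a test set via distinguishability matrices.
from itertools import combinations

# responses[f][k] = output response of the circuit with fault f to pattern k
responses = {
    "f1": ["00", "01", "11"],
    "f2": ["00", "01", "11"],   # never distinguished from f1 by this test set
    "f3": ["10", "01", "11"],
}

def distinguishability_matrix(k):
    """d_k(i, j) = 1 iff pattern k produces different responses for faults i and j."""
    faults = sorted(responses)
    return {(fi, fj): int(responses[fi][k] != responses[fj][k])
            for fi, fj in combinations(faults, 2)}

n_patterns = len(next(iter(responses.values())))
undistinguished = {
    pair for pair in combinations(sorted(responses), 2)
    if all(distinguishability_matrix(k)[pair] == 0 for k in range(n_patterns))
}
print("indistinguishable fault pairs:", undistinguished)
```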