1999, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
In this paper, we present methods for constructing optimal tests to detect structural faults in analog integrated circuits in the presence of process variation. The analog test determination problem is formulated as selecting an optimal subset from an initially large set of tests, with optimality criteria defined in terms of fault coverage and fault separation on a given fault set. The process variation may be represented either deterministically by box constraints or statistically as random variables. Each of these representations requires different methods for computing the detectabilities. In the deterministic case, the detectability measures are computed by a combination of analytical and numerical optimization techniques. Such an approach reduces the number of simulations by up to a factor of three over traditional Monte Carlo methods, and it produces more compact test sets than a linear sensitivity analysis while being closer in accuracy to the Monte Carlo method. In the statistical case, the detectability measures are computed as separation distances between the good and faulty distributions. These distributions, represented nonparametrically, are generated by traditional Monte Carlo techniques. Once the deterministic or statistical detectabilities are computed for the entire test set, a test compaction step, which is a covering problem, is performed. Solving this covering problem yields a test set with optimal fault coverage and fault separation.
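The test compaction step described above is a covering problem. As a minimal sketch, the standard greedy set-cover heuristic is one common way to attack such problems (not necessarily the paper's exact solver; the test and fault names below are illustrative):

```python
# Greedy set-cover heuristic for test compaction (illustrative sketch).
# `detects` maps each candidate test to the set of faults it detects.

def compact_tests(detects):
    """Greedily pick tests until every detectable fault is covered."""
    uncovered = set().union(*detects.values())
    chosen = []
    while uncovered:
        # Pick the test that covers the most still-uncovered faults.
        best = max(detects, key=lambda t: len(detects[t] & uncovered))
        chosen.append(best)
        uncovered -= detects[best]
    return chosen

tests = {"t1": {"f1", "f2"}, "t2": {"f2", "f3"}, "t3": {"f3"}}
print(compact_tests(tests))  # → ['t1', 't2']
```

The greedy heuristic gives a logarithmic-factor approximation of the minimum cover, which is often adequate for sizing a production test set.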
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1998
New methods for analog fault detection and for the selection of measurements for analog testing (wafer probe or final testing) are presented. Using Bayes' rule, the information contained in the measurement data and the information of the a priori probabilities of a circuit's being fault free or faulty are converted into a posteriori probabilities and used for fault detection in analog integrated circuits, with a decision criterion that considers the statistical tolerances and mismatches of the circuit parameters. An adaptive formulation of the a priori probabilities is given that updates their values according to the results of the testing and fault detection. In addition, a systematic method is proposed for the optimal selection of the measurement components so as to minimize the probability of an erroneous test decision. Examples of DC wafer-probe testing as well as production testing using the power-supply current spectrum are given that demonstrate the effectiveness of the algorithms.
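The core Bayes'-rule step can be sketched as follows; the Gaussian good/faulty measurement densities, their parameters, and the prior are purely illustrative assumptions, not the paper's models:

```python
from math import exp, pi, sqrt

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution (used as a measurement likelihood)."""
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2.0 * pi))

def posterior_faulty(measurement, p_faulty, good=(1.0, 0.05), faulty=(0.8, 0.05)):
    """P(faulty | measurement) via Bayes' rule for two Gaussian populations."""
    num = gaussian_pdf(measurement, *faulty) * p_faulty
    den = num + gaussian_pdf(measurement, *good) * (1.0 - p_faulty)
    return num / den

# Minimum-error decision rule: declare 'faulty' when the posterior > 0.5.
print(posterior_faulty(0.82, p_faulty=0.1) > 0.5)   # near the faulty mean → True
print(posterior_faulty(1.00, p_faulty=0.1) > 0.5)   # near the good mean  → False
```

The adaptive formulation mentioned in the abstract would simply feed each test decision back into `p_faulty` before the next circuit is tested.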
ICONIP '02. Proceedings of the 9th International Conference on Neural Information Processing. Computational Intelligence for the E-Age (IEEE Cat. No.02EX575), 2002
A new era began in microelectronics with the advent of integrated circuit (IC) technology. With the dramatic improvement of integration technology, the complexity of IC testing has increased and become much more acute. A fault simulation technique for maximizing fault detection in IC testing is presented in this paper.
1999
Analog integrated circuit testing and diagnosis is a very challenging problem. The inaccuracy of measurements, the infinite domain of possible values, and the parameter deviations are among the major difficulties. During the process of optimizing production tests, Monte Carlo simulation is often needed because of parameter variations, but its high computational cost makes it the bottleneck of such a process. This paper describes a new technique to reduce the number of simulations required during analog fault simulation. This leads to the optimization of production tests subject to parameter variations. Section I presents a review of the state of the art; Section II introduces the algorithm and describes the methodology of our approach. The results on a CMOS two-stage op-amp and the conclusions are given in Sections III and IV.
1999
Efficient methods to evaluate the quality of a test set in terms of its coverage of arbitrary defects in a circuit are presented. Our techniques rapidly estimate arbitrary defect coverage because they are independent of specific, physical fault models. We overcome the potentially explosive computational requirements associated with considering all possible defects by implicitly evaluating multiple faults (of all types) simultaneously and by exploiting the local nature of defects. Our experiments show that a strong correlation exists between stuck-at fault coverage and defects whose behavior is independent of the input vectors. Our techniques are capable of identifying regions in the circuit where defects may escape the test set. We also demonstrate how the chances of detection of an arbitrary defect by a test set vary when a single stuck-at fault within the vicinity of that defect is detected multiple times by the test set.
IEEE Transactions on …, 1990
European Test Workshop 1999 (Cat. No.PR00390), 1999
The purpose of this paper is to analyze an optimization method to improve the testability of structural and parametric faults in analog circuits. The approach consists of finding an optimum subset of tests which maximizes the fault coverage with minimum cost. The method is based on covering a discrete set of intervals by taking advantage of strategies used effectively in digital synthesis. A simple application example illustrates the proposal by studying the fault coverage obtained using different test sets on the ITC97 benchmark op-amp.
1999
A procedure for the determination of an optimum set of testable components in the fault diagnosis of analog linear circuits is presented. The proposed method has its theoretical foundation in the testability concept and in the canonical ambiguity group concept. New considerations relevant to the existence of a unique solution in the k-fault diagnosis problem of analog linear circuits are presented, and examples of application of the developed procedure are considered by exploiting a software package based on symbolic analysis techniques.
IET Computers & Digital Techniques, 2007
Test sets that detect each target fault n times (n-detection test sets) are typically generated for restricted values of n due to the increase in test set size with n. We perform both a worst-case analysis and an average-case analysis to check the effect of restricting n on the unmodeled fault coverage of an (arbitrary) n-detection test set. Our analysis is independent of any particular test set or test generation approach. It is based on a specific set of target faults and a specific set of untargeted faults. It shows that, depending on the circuit, very large values of n may be needed to guarantee the detection of all the untargeted faults. We discuss the implications of these results.
Proceedings of International Conference on Computer Aided Design
Test design of analog circuits based on statistical methods for decision making is a topic of growing interest. The major problem of such statistical approaches with respect to industrial applicability concerns the confidence with which the determined test criteria can be applied in production testing. This mainly refers to the consideration of measurement noise, to the selected measurements, as well as to the required training and validation samples. These crucial topics are addressed in this paper. On exploiting experience from the statistical design of analog circuits and from pattern recognition methods, efficient solutions to these problems are provided. A very robust test design is achieved by systematically considering measurement noise, by selecting most significant measurements, and by using most meaningful samples. Moreover, parametric as well as catastrophic faults are covered on application of digital testing methods.
IFIP Advances in Information and Communication Technology, 2015
This work presents new approaches to minimize the number of test frequencies for linear analog circuits. The cases of single and multiple fault detection regions for multiple test measures are considered. We first address the case when the injected faults have a single detection region in the frequency band. We show that the problem can be formulated as a set covering problem with a matrix having the consecutive-ones property for which the network simplex algorithm turns out to be very efficient. A second approach consists in modeling the problem by means of an interval graph, leading to its solution with a specific polynomial-time algorithm. A case-study of a biquadratic filter is presented for illustration purposes. Numerical simulations demonstrate that the two different approaches solve the optimization problem very fast. Finally, the optimization problems arising from multiple detection regions are modeled and solution approaches are discussed.
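For the single-detection-region case, a covering matrix with the consecutive-ones property corresponds to interval stabbing: choose the fewest frequencies so that every fault's detection interval contains at least one of them. A minimal greedy sketch (the interval values are illustrative, not from the biquad case study):

```python
# Interval stabbing: pick a minimum set of test frequencies so that each
# fault's detection interval [lo, hi] contains at least one of them.

def min_test_frequencies(intervals):
    """Greedy by right endpoints; optimal for interval stabbing."""
    points = []
    last = None
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if last is None or lo > last:
            last = hi            # place a test frequency at the right end
            points.append(hi)
    return points

faults = [(1.0, 3.0), (2.0, 5.0), (6.0, 8.0)]
print(min_test_frequencies(faults))  # → [3.0, 8.0]
```

Sorting by right endpoints and placing a point only when an interval is still unstabbed is a classical exchange-argument-optimal strategy, consistent with the polynomial-time interval-graph view of the problem.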
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 1999
This paper presents a new approach to the test design of analog circuits, called characteristic observation inference (COI). The COI method considers parametric as well as catastrophic faults. A strict distinction between the operational environment, defined by the specifications of the circuit, and the test environment, defined by the test configuration and the test equipment, is introduced. A parametric fault model is developed that combines circuit specifications, statistical parameters reflecting parametric faults, and measurements of the circuit under test. These measurements are called characteristic observations. For each specification, a test inference criterion is computed using feature extraction and logistic discrimination analysis. From a set of such criteria the satisfaction or violation of the specifications can be inferred from characteristic observations. Based on these results, additional test criteria for catastrophic faults are determined using test set compaction. Moreover, measurement noise and parasitic effects, which crucially influence the test design, are systematically considered, and a physically interpretable sampling strategy is presented. The COI method applied to two different test designs yields very good results with respect to parametric faults as well as to catastrophic faults.
Journal of Electronic Testing, 1993
Analog circuit testing is considered to be a very difficult task. This difficulty is mainly due to the lack of fault models and accessibility to internal nodes. To overcome this problem, an approach is presented for analog circuit modeling and testing. The circuit modeling is based on first-order sensitivity computation. The testability of the circuit is analyzed by the multiple-fault model and by functional testing. Component deviations are deduced by measuring a number of output parameters, and through sensitivity analysis and tolerance computation. Using this approach, adequate tests are identified for testing catastrophic and soft faults. Some experimental results are presented for simple models as well as multiple-fault models.
IEEE Transactions on Computer-aided Design of Integrated Circuits and Systems, 1997
The high cost of capital equipment for production testing, coupled with the time that an analog circuit spends on a tester, has made it imperative to minimize average chip testing time during production. Testing time can be reduced by decreasing the number of tests that need to be performed on a circuit and by optimizing the order of the tests. This can be done by studying performance data for a sample of chips or using simulation data. The advantage of simulation data is that a large number of circuits do not have to be nonoptimally tested in order to generate the data set. Generating simulation data for very large scale integration (VLSI) analog circuits has been considered very difficult because of the computational cost of simulating a large system, the very large number of random variables needed to model the manufacturing process, and the need to model and simulate not only random parameter variations but also spot defects. These problems are overcome by decomposing a circuit into several blocks, which are linked together by a behavioral model of the system. In order for this to work, the blocks must not depend on a large set of significant random variables, and the blocks should not be tightly coupled. In this paper we demonstrate our statistical simulation methodology for a VLSI analog circuit. The proposed approach to defect simulation is applied to a parallel filter bank which has a total of 32 channels, is composed of 67 blocks, and has over 4040 transistors and 750 capacitors. Simulation results are used to optimize the test set for this circuit, showing that the frequency response tests at many frequencies are redundant.
Microprocessors and Microsystems, 2014
The dependability of current and future nanoscale technologies depends highly on the ability of the testing process to detect emerging defects that cannot be modeled traditionally. Generating test sets that detect each fault more than once has been shown to increase the effectiveness of a test set at detecting non-modeled faults, either static or dynamic. Traditional n-detect test sets guarantee the detection of a modeled fault with a minimum of n different tests. Recent techniques examine how to quantify and maximize the difference between the various tests for a fault. The proposed methodology introduces a new systematic test generation algorithm for multiple-detect (including n-detect) test sets that increases the diversity of the fault propagation paths excited by the various tests per fault. A novel algorithm tries to identify different propagation paths (if such paths exist) for each of the multiple (n) detections of the same fault. The proposed method can be applied, for multiple fault detections, to any static or dynamic fault model that is linear in the circuit size, such as the stuck-at or transition delay fault models, and avoids any path or path-segment enumeration. Experimental results show the effectiveness of the approach in increasing the number of fault propagation paths when compared to traditional n-detect test sets.
ISCAS 2001. The 2001 IEEE International Symposium on Circuits and Systems (Cat. No.01CH37196), 2001
In analog integrated circuits, process variations result in physical parameter variations. Simulated performance values must then be considered together with their tolerance intervals. Consequently, contrary to digital circuits, where the outputs are either '0' or '1' so that we can decide without ambiguity whether a fault is detectable, in analog circuits fault detectability remains a vague notion: a fault can be completely detectable, partially detectable, or completely undetectable, which makes it very difficult to reach a decision. To solve this decision problem, we have introduced the fault detection probability (FDP) function, which formalizes the problem of analog fault detection subject to parameter variations.
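A fault detection probability of this kind can be estimated by Monte Carlo as the fraction of process samples for which the faulty circuit's performance leaves the good-circuit tolerance band. The linear performance model, tolerance band, and spread below are illustrative assumptions, not the paper's circuit:

```python
# Monte Carlo estimate of a fault detection probability (FDP): the
# fraction of process samples whose faulty performance falls outside
# the tolerance band of the good circuit. Toy linear performance model.
import random

def fdp_estimate(fault_shift, tol=(0.95, 1.05), sigma=0.02, n=10_000, seed=1):
    rng = random.Random(seed)
    detected = 0
    for _ in range(n):
        perf = 1.0 + fault_shift + rng.gauss(0.0, sigma)  # faulty performance
        if not (tol[0] <= perf <= tol[1]):
            detected += 1
    return detected / n

# A large parameter shift is almost always detected; a small shift only
# partially, which is exactly the "partially detectable" case above.
print(fdp_estimate(0.2), fdp_estimate(0.03))
```

The three regimes of the FDP (near 1, strictly between 0 and 1, near 0) map directly onto completely detectable, partially detectable, and completely undetectable faults.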
TURKISH JOURNAL OF ELECTRICAL ENGINEERING & COMPUTER SCIENCES, 2018
Determination of the most appropriate test set is critical for high fault coverage in the testing of digital integrated circuits. Among black-box approaches, random testing is popular due to its simplicity and cost effectiveness. An extension to random testing is antirandom testing, which improves fault detection by maximizing the distance of every subsequent test pattern from the set of previously applied test patterns. Antirandom testing uses total Hamming distance and total Cartesian distance as distance metrics to maximize diversity in the testing sequence. However, the algorithm for antirandom test set generation has two major issues. First, no selection criterion is defined when more than one candidate test pattern has the same maximum total Hamming distance and total Cartesian distance. Second, determination of total Hamming distance and total Cartesian distance is computationally intensive, as it is a summation of individual Hamming and Cartesian distances over all the previously selected test patterns. In this paper, two-dimensional Hamming distance is proposed to address the first issue. A novel concept of horizontal Hamming distance is introduced, which acts as a third criterion for test pattern selection. Fault simulations on ISCAS'85 and ISCAS'89 benchmark circuits have shown that employing horizontal Hamming distance improves the effectiveness of pure antirandom testing in terms of fault coverage. Additionally, an alternative method for total Hamming distance calculation is proposed to reduce the computational cost. The proposed method avoids the summation of individual Hamming distances by keeping track of the number of 0s and 1s applied at each input. As a result, the computations are reduced by up to 90%.
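The counting idea in the last sentences can be sketched directly: keep, for each input, how many of the applied patterns had a 1 there, so a candidate's total Hamming distance needs one pass over its bits instead of a comparison with every earlier pattern (the pattern values below are illustrative):

```python
# Total Hamming distance via per-input 1-counts instead of pairwise sums.

def total_hamming_distance(candidate, ones_count, num_applied):
    """THD of `candidate` (tuple of 0/1 bits) to all applied patterns."""
    return sum(
        ones_count[i] if bit == 0 else num_applied - ones_count[i]
        for i, bit in enumerate(candidate)
    )

applied = [(0, 0, 1), (1, 0, 1)]
ones = [sum(p[i] for p in applied) for i in range(3)]   # per-input 1-counts

# Cross-check against the naive pairwise computation.
naive = lambda c: sum(sum(a != b for a, b in zip(c, p)) for p in applied)
cand = (1, 1, 0)
print(total_hamming_distance(cand, ones, len(applied)), naive(cand))  # → 5 5
```

The counting form costs O(width) per candidate instead of O(width × patterns), which is where the claimed computation savings come from.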
Further, application of these bounds to scalable RC ladder networks reveals a number of interesting characteristics. The approach adopted here is general and can be extended to find bounds on the DL and FC of other parametric test methods for linear and nonlinear circuits.
ACM Transactions on Design Automation of Electronic Systems, 2020
Safety-critical and mission-critical systems, such as airplanes and (semi-)autonomous cars, rely on an ever-increasing number of embedded integrated circuits. Consequently, there is a need for complete defect coverage during the testing of these circuits to guarantee their functionality in the field. In this context, reducing the escape rate of defects during production testing is crucial, and significant progress has been made to this end. However, production testing using automatic test equipment is subject to various parasitic measurement variations, which may have a negative impact on the testing procedure and therefore limit the final defect coverage. To tackle this issue, this article proposes an improved test flow targeting increased analog defect coverage, both at the system and block levels, by analyzing and improving the coverage of typical functional and structural tests under these measurement variations. To illustrate the flow, the technique of inserting a pseudo-...
1992
The problem of test set generation for a VLSI circuit is known to be NP-complete. The detection probability distribution of a given circuit aids in the test generation process. In this paper, a new method of estimating the testability and fault coverage distributions is presented, notable for the simplicity of computing the parameters necessary to predict these distributions. Testability is modeled as a mixture of two linear continuous functions and a series of discrete impulse functions. The statistical package Minitab is used in estimating the testability parameters. From the estimated parameters, a relationship between testability and coverage is employed to estimate the coverage distribution. The significance of this work is in removing the need to employ fault simulation. Experimental results on three of the large ISCAS-89 circuits reflect the accuracy of this work.
2008
A method is presented for deterministic test pattern generation using a uniform functional fault model for combinational circuits. The fault model makes it possible to represent physical defects in components and defects in the communication network between components by the same technique. Physical defects are modeled as parameters in generic Boolean differential equations. Solutions of these equations give the conditions under which defects are locally activated. The defect activation conditions are used as functional fault models for logic-level test generation. A method is proposed for finding the types of faults that may occur in a real circuit and for determining their probabilities. A defect-oriented deterministic test generation tool was developed, and experimental data obtained by the tool for the ISCAS'85 benchmarks are presented. It was shown that in the majority of cases, 100% stuck-at fault tests do not cover 100% of the physical defects. The main feature of the new tool is that it can reach 100% coverage for a given set of defects or prove the redundancy of the undetected defects. Shorts are the dominant cause of faults in modern CMOS processes; in the current approach, the wired-AND fault model was considered.
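The "conditions under which defects are locally activated" can be illustrated with the Boolean difference ∂f/∂x = f|x=0 ⊕ f|x=1, which yields the input assignments that make a defect on input x observable at output f. The 2-input AND example below is a toy illustration, not the paper's tool:

```python
# Boolean difference: the assignments of the remaining inputs for which
# flipping input `var_index` changes the output, i.e. the activation
# conditions that sensitize a defect on that input.
from itertools import product

def boolean_difference(f, var_index, num_vars):
    """Return assignments (with var_index = 0) that sensitize `var_index`."""
    sensitizing = []
    for bits in product((0, 1), repeat=num_vars):
        if bits[var_index] == 0:      # evaluate each cofactor pair once
            flipped = list(bits)
            flipped[var_index] = 1
            if f(bits) != f(tuple(flipped)):
                sensitizing.append(bits)
    return sensitizing

f_and = lambda b: b[0] & b[1]
print(boolean_difference(f_and, 0, 2))  # → [(0, 1)]: AND needs the other input at 1
```

For AND, a defect on one input is activated only when the other input is 1, which is exactly the condition a test generator must justify.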