2003, Proceedings of the 40th conference on Design automation - DAC '03
We consider the relationship between test data compression and the ability to perform comprehensive testing of a circuit under an n-detection test set. The size of an n-detection test set grows approximately linearly with n. Therefore, one may expect a decompressor that can decompress a compressed n-detection test set to be larger than the decompressor required for a compact conventional test set. The results presented in this work demonstrate that it is possible to use a decompressor designed based on a compact one-detection test set in order to apply an n-detection test set. Thus, the design of the decompressor does not have to be changed as n is increased. We describe a procedure that generates an n-detection test set to achieve this result.
2014
The two major areas of concern in the testing of VLSI circuits are test data volume and excessive test power. Among the many compression coding schemes proposed to date, the CCSDS (Consultative Committee for Space Data Systems) lossless data compression scheme is one of the best. This paper discusses a test data compression scheme based on the lossless Rice algorithm, as recommended by the CCSDS, for reducing the amount of test data that must be stored on the tester and transferred during manufacturing testing to each core in a system-on-a-chip (SOC). In the proposed scheme, the test vectors for the SOC are compressed using the Rice algorithm together with various binary encoding techniques. Experimental results show that the test data compression ratio for the larger ISCAS 89 benchmark circuits is significantly improved in comparison with existing methods.
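The Rice code at the heart of the CCSDS scheme admits a compact sketch. The following Python fragment is an illustrative reconstruction, not the paper's implementation; the parameter k (the power-of-two divisor exponent) and the bit-string representation are assumptions made for clarity:

```python
def rice_encode(value: int, k: int) -> str:
    """Rice-encode a non-negative integer with divisor 2**k.
    The quotient is written in unary (a run of 1s ended by a 0),
    the remainder in k plain binary bits."""
    q, r = divmod(value, 1 << k)
    bits = "1" * q + "0"
    if k:
        bits += format(r, f"0{k}b")
    return bits

def rice_decode(bits: str, k: int) -> int:
    """Invert rice_encode for a single codeword."""
    q = bits.index("0")                 # unary quotient ends at the first 0
    r = int(bits[q + 1:], 2) if k else 0
    return q * (1 << k) + r

# Values close to the divisor get short codewords:
print(rice_encode(9, 2))   # "11001": quotient 2 in unary, remainder 01
```

Rice coding works well when most values are small, which is why the paper pairs it with binary pre-encoding of the test vectors.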
IEEE Transactions on Computers, 2004
A new class of static compaction procedures is described that generate test sets with reduced test application times for scan circuits. The proposed class of procedures combines the advantages of two earlier static compaction procedures, one that tends to generate large numbers of tests with a short primary input sequence included in every test and one that tends to generate small numbers of tests with a long primary input sequence included in one of the tests. A procedure of the proposed class starts from an initial test set that has a large number of tests and long primary input sequences and it selects a subset of the tests and subsequences of their primary input sequences. It thus has the flexibility of finding an appropriate balance between the number of tests and the lengths of the primary input sequences in order to minimize the test application time. Several ways of computing the primary input sequences for the initial test set are considered. The most compact test sets are obtained when a test sequence for the nonscan circuit is available and this sequence is used as part of every test in the initial test set. However, it is shown that high levels of compaction can also be achieved without the overhead of test generation for the nonscan circuit. Specifically, we show that the industry practice of holding a primary input vector constant between scan operations can be accommodated. We estimate the ability of the procedure to achieve optimum test sets by computing a lower bound on the number of tests and demonstrating that the procedure achieves or approaches this lower bound.
The objective of manufacturing test is to separate the faulty circuits from the good circuits after they have been manufactured. Three problems encompassed by this task are addressed here. First, the reduction of the power consumed during test. The behavior of the circuit during test is modified due to scan insertion and other testing techniques. As a result, the power consumed during test can be abnormally large, up to several times the power consumed during functional mode. This can cause a good circuit to fail the test or to be damaged due to heating. Second, how to modify the design so that it is easily testable. Since not every possible digital circuit can be tested properly, it is necessary to modify the design to alter its behavior during test. This modification should not alter the functional behavior of the circuit. An example of this is test point insertion, a technique aimed at reducing test time and decreasing the number of faulty circuits that pass the test. Third, the creation of a test set for a given design that will both properly accomplish the task and require the least amount of time possible to be applied. The precision in separating faulty circuits from good circuits depends on the application for which the circuit is intended and, if possible, must be maximized. The test application time should be as low as possible to reduce test cost. This dissertation contributes to the discipline of manufacturing test and encompasses advances in the aforementioned areas. First, a method to reduce the power consumed during test is proposed. Second, in the design modification area, a new algorithm to compute test points is proposed. Third, in the test set creation area, a new algorithm to reduce test set application time is introduced. The three algorithms are scalable to current industrial design sizes.
2006 IEEE International Symposium on Circuits and Systems
Test data compression is an effective methodology for reducing test data volume and testing time. In this paper, we present a new test data compression technique based on block merging. The technique capitalizes on the fact that many consecutive blocks of the test data can be merged together. Compression is achieved by storing the merged block and the number of blocks merged. It also takes advantage of cases where the merged block can be filled by all 0's or all 1's. Test data decompression is performed on chip using a simple circuitry that repeats the merged block the required number of times. The decompression circuitry has the advantage of being test data independent. Experimental results on benchmark circuits demonstrate the effectiveness of the proposed technique compared to other coding-based compression techniques. I. INTRODUCTION With recent advances in process technology, it is predicted that the density of integrated circuits will soon reach several billion transistors per chip [1]. The increasing density of integrated circuits has resulted in tremendous increase in test data volumes. Large test data volumes not only increase the testing time but may also exceed the capacity of tester memory. The cost of automatic test equipment (ATE) increases significantly with the increase in their speed, channel capacity, and memory. Having limited tester memory implies multiple time-consuming ATE loads ranging from minutes to hours [2]. An effective way to reduce test data volume is by test compression. The objective of test data compression is to reduce the number of bits needed to represent the test data. The compressed test set is stored in the tester memory and a decompression circuitry on chip is used to decompress the test data and apply it to the circuit under test. Test data compression results in reducing the required tester memory to store the test data and the test time. 
Test data compression techniques can be broadly classified into three categories [3]: code-based schemes, linear-decompression-based schemes and broadcast-scan-based schemes. Code-based compression schemes are based on encoding test cubes using test compression codes. Techniques in this category include run-length-based codes, statistical codes, dictionary-based codes and constructive codes.
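The block-merging idea described above can be sketched in a few lines: split the test data into fixed-size blocks and merge consecutive blocks whenever their specified bits never conflict, storing the merged block and a repeat count. This is an illustrative reconstruction, not the authors' code; the block size, the 'x' don't-care notation, and the (block, count) output format are assumptions:

```python
def blocks_compatible(a: str, b: str) -> bool:
    """Two blocks merge if, bit by bit, they never specify conflicting values."""
    return all(x == y or x == "x" or y == "x" for x, y in zip(a, b))

def merge(a: str, b: str) -> str:
    """Combine two compatible blocks, resolving don't-cares."""
    return "".join(y if x == "x" else x for x, y in zip(a, b))

def compress(test_data: str, block_size: int):
    """Greedily merge runs of consecutive compatible blocks."""
    blocks = [test_data[i:i + block_size]
              for i in range(0, len(test_data), block_size)]
    out = []
    current, count = blocks[0], 1
    for blk in blocks[1:]:
        if blocks_compatible(current, blk):
            current, count = merge(current, blk), count + 1
        else:
            out.append((current, count))
            current, count = blk, 1
    out.append((current, count))
    return out
```

The on-chip decompressor then only needs to repeat each stored block `count` times, which is why it can be test-data independent.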
2003
This paper presents a compression/decompression scheme based on selective Huffman coding for reducing the amount of test data that must be stored on a tester and transferred to each core in a system-on-a-chip (SOC) during manufacturing test. The test data bandwidth between the tester and the SOC is a bottleneck that can result in long test times when testing complex SOCs that contain many cores. In the proposed scheme, the test vectors for the SOC are stored in compressed form in the tester memory and transferred to the chip where they are decompressed and applied to the cores. A small amount of on-chip circuitry is used to decompress the test vectors. Given the set of test vectors for a core, a modified Huffman code is carefully selected so that it satisfies certain properties. These properties guarantee that the codewords can be decoded by a simple pipelined decoder (placed at the serial input of the core's scan chain) that requires very small area. Results indicate that the proposed scheme can provide test data compression nearly equal to that of an optimum Huffman code with much less area overhead for the decoder. Index Terms-Automatic test equipment, compression, decompression architecture, embedded core testing, testing time, test set encoding. I. INTRODUCTION One of the key concerns in any design project is to meet time-to-market constraints. In order to accomplish this goal, chip designers often use predesigned and preverified cores to develop systems-on-a-chip (SOC) devices. With time, these devices have become extremely complex. This high level of integration has allowed vendors to drive down the effective manufacturing costs. However, it has also rapidly increased the complexity of testing these chips. One of the increasingly difficult challenges in testing SOCs is dealing with the large amount of test data that must be transferred between the tester and the chip [9], [34]. 
Each core in an SOC has a given set of test vectors that must be applied to it (usually through a test wrapper that is provided around a core). The test vectors must be stored on the tester and then transferred to the inputs of the core during modular testing. As more and more cores (each with its own test set) are placed on a single chip, the amount of total test data for the chip increases rapidly. This poses a serious problem because of the cost and limitations of automated test equipment (ATE). Testers have limited speed, channel capacity, and memory. In general, the amount of time required to test a chip depends on how much test data needs to be transferred to the chip.
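A minimal sketch of selective Huffman coding follows: only the most frequent fixed-size blocks receive Huffman codewords, and every other block is transmitted raw behind a one-bit flag. This is an illustrative reconstruction; the flag convention and block size are assumptions, and the paper's additional codeword properties (which enable the simple pipelined decoder) are not modeled here:

```python
import heapq
from collections import Counter

def build_huffman(freqs: dict) -> dict:
    """Map symbol -> prefix-free codeword for {symbol: frequency}."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)                     # tiebreaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def selective_encode(blocks: list, num_coded: int) -> str:
    """Huffman-code only the num_coded most frequent blocks;
    prefix coded blocks with '1' and raw (uncoded) blocks with '0'."""
    freq = Counter(blocks)
    codes = build_huffman(dict(freq.most_common(num_coded)))
    return "".join("1" + codes[b] if b in codes else "0" + b for b in blocks)
```

Restricting the code to a few frequent blocks is what keeps the decoder small: it trades a little compression for a much simpler on-chip finite-state machine.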
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2009
The degree of achievable test-data compression depends on not only the compression scheme but also the structure of the applied test data. Therefore, it is possible to improve the compression rate of a given test set by carefully organizing the way that test data are present in the scan structure. The relationship between signal probability and test-data entropy is explored in this paper, and the results show that the theoretical maximum compression can be increased through a partition of scan flip-flops such that the test data present in each partition have a skewed signal distribution. In essence, this approach simply puts similar scan flip-flops in an adjacent part of a scan chain, which also helps to reduce shift power in the scan test process. Furthermore, it is shown that the intrapartition scan-chain order has little impact on the compressibility of a test set; thus, it is easy to achieve higher test compression with low routing overhead. Experimental results show that the proposed partition method can raise the compression rates of various compression schemes by more than 17%, and the average reduction in shift power is about 50%. In contrast, the increase in routing length is limited. Index Terms-Entropy theory, routing, scan-based design, test power, test-data compression. I. INTRODUCTION The large amount of test data in modern very large scale integration (VLSI) circuits becomes a great challenge, as it requires not only large automatic-test-equipment (ATE) memory but also significant test application time [1]. In order to minimize test cost, compression techniques can be used to reduce test-data volume. Existing test-data compression methods can be classified into three types [2]: code-based [3]-[19],
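The entropy argument can be made concrete with the standard binary entropy function. The sketch below is illustrative and uses no data from the paper; it only shows why a skewed signal distribution raises the theoretical maximum compression:

```python
import math

def entropy(p: float) -> float:
    """Per-bit entropy of a source that emits a 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A balanced bit stream is incompressible; a skewed one is not.
print(entropy(0.5))   # 1.0 bit per bit -> no compression possible
print(entropy(0.1))   # ~0.469 -> theoretical maximum compression above 50%
```

Partitioning scan flip-flops so each partition's data has p far from 0.5 therefore lowers the entropy bound that any lossless scheme must respect.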
2013 IEEE 31st VLSI Test Symposium (VTS), 2013
A highly efficient SOC test compression scheme which uses sequential linear decompressors local to each core is proposed. Test data is stored on the tester in compressed form and brought over the TAM to the core before being decompressed. Very high encoding efficiency is achieved by providing the ability to share free variables across test cubes being compressed at the same time as well as in subsequent time steps. The idea of retaining unused nonpivot free variables when decompressing one test cube to help in encoding subsequent test cubes, introduced in [Muthyala 12], is applied here in the context of SOC testing. It is shown that in this application, a first-in first-out (FIFO) buffer is not required. The ability to retain excess free variables rather than wasting them when the decompressor is reset avoids the need for high precision in matching the number of free variables used for encoding with the number of care bits. This allows greater flexibility in test scheduling to reduce test time, tester storage, and control complexity as indicated by the experimental results.
1998
A novel test vector compression/decompression technique is proposed for reducing the amount of test data that must be stored on a tester and transferred to each core when testing a core-based design. A small amount of on-chip circuitry is used to reduce both the test storage and test time required for testing a core-based design. The fully specified test vectors provided by the core vendor are stored in compressed form in the tester memory and transferred to the chip where they are decompressed and applied to the core (the compression is lossless). Instead of having to transfer each entire test vector from the tester to the core, a smaller amount of compressed data is transferred instead. This reduces the amount of test data that must be stored on the tester and hence reduces the total amount of test time required for transferring the data with a given test data bandwidth.
2019
This work presents a novel hybrid compression architecture that seamlessly combines the advantages of an embedded test compression technique with a lightweight codeword-based compression scheme. Embedded test compression has proven to be beneficial and is widely used in industrial circuit designs. However, particularly in test applications within low-pin-count environments, a certain number of test patterns is incompressible and will, therefore, be rejected. This leads to a test coverage decrease which, in turn, jeopardizes the zero-defect policy of safety-critical applications like automotive microcontrollers. Therefore, the rejected test patterns are typically transferred in an uncompressed way, bypassing the embedded compression, which is extremely costly. The proposed hybrid architecture mitigates the adverse impact of rejected test patterns on the compression ratio as well as on the test application time of state-of-the-art techniques. The experimental evaluation of industrial-siz...
IET Computers & Digital Techniques, 2009
An effective reconfigurable broadcast scan compression scheme that employs test set partitioning and relaxation-based test vector decomposition is proposed. Given a constraint on the number of tester channels, the technique classifies the test set into acceptable and bottleneck vectors. The bottleneck vectors are then decomposed into a set of vectors that meet the given constraint. The acceptable and decomposed test vectors are partitioned into the smallest number of partitions while satisfying the tester channels constraint to reduce the decompressor area. Thus, the technique by construction satisfies a given tester channels constraint at the expense of increased test vector count and number of partitions, offering a tradeoff between test compression, test application time and test decompression circuitry area. Experimental results demonstrate that the proposed technique achieves better compression ratios compared to other test compression techniques.
Power consumption of very large scale circuits may increase significantly during testing. This extra power consumption may give rise to several problems: it can be responsible for cost, performance-verification, and technology-related problems, and can reduce battery life when on-line testing is considered. Because of increased design complexity and advanced fabrication technologies, the number of tests and the corresponding data volume increase rapidly. As the large test data volume is becoming one of the major problems in testing a system-on-a-chip (SoC), several compression coding schemes have been proposed in the literature. Test data compression is an effective methodology for reducing test data volume and testing time. This paper surveys low-power testing based on compression techniques that can be used to test VLSI circuits by combining the strengths of these methods.
Current design methodologies and methodologies for reducing test data volume and test application time for full-scan circuits allow testing of multiple circuits (or subcircuits of the same circuit) simultaneously using the same test data. We describe a static compaction procedure that accepts test sets generated independently for multiple full-scan circuits, and produces a compact test set that detects all the faults detected by the individual test sets. The resulting test set can be used for testing the circuits simultaneously using the same test data. This procedure provides an alternative to test generation procedures that perform test generation for complex circuits made up of multiple circuits. Such procedures also reduce the amount of test data and test application time required for testing all the circuits by testing them simultaneously using the same test data. However, they require consideration of a more complex circuit.
Lecture Notes in Computer Science, 2005
This paper presents a software tool for test pattern compaction combined with compression of the test patterns to further reduce test data volume and test time. Usually, test set compaction is performed independently of test compression. We have implemented a test compaction and compression scheme that reorders test patterns previously generated by an ATPG in such a way that they are well suited for decompression. The compressed test sequence is decompressed in a scan chain. No design changes are required in the functional part of the circuit. The tool is called COMPAS and it finds a sequence of overlapping patterns; each pattern detects a maximum number of circuit faults. Each pattern differs from the adjacent one in the first bit only; the remaining pattern bits are shifted by one position towards the last bit. The patterns' first bits are stored in an external tester memory. The volume of stored data is substantially lower than in other comparable test pattern compression methods. The algorithm can be used for test data reduction in system-on-chip testing using the IEEE P1500 standard extended by the RESPIN diagnostic access. Using this architecture, the compressed test data are transmitted through a narrow test access mechanism from a tester to the tested SoC, and the high-volume decompressed test patterns are shifted through the high-speed scan chains between the System on Chip (SoC) cores.
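The overlapping-pattern idea can be sketched directly: once a seed pattern fills the scan chain, every subsequent pattern is obtained by shifting one new stored bit in at the head and dropping the last bit, so the tester stores only one bit per additional pattern. A minimal illustration follows; the seed/first-bit representation is an assumption, not COMPAS's actual data format:

```python
def decompress_scan(seed: str, first_bits: str) -> list:
    """Reconstruct the full pattern sequence from a seed pattern and the
    stored per-pattern first bits: each shift pushes a new bit into the
    head of the scan chain and drops the last bit."""
    patterns = [seed]
    cur = seed
    for b in first_bits:
        cur = b + cur[:-1]      # one scan shift: new bit in, last bit out
        patterns.append(cur)
    return patterns

# Three patterns reconstructed from a 4-bit seed plus just 2 stored bits:
print(decompress_scan("0101", "11"))
```

The compression ratio thus approaches one stored bit per pattern, which is why the ordering step (choosing patterns that overlap well) does most of the work.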
Proceedings Twelfth International Conference on VLSI Design. (Cat. No.PR00013), 1999
Generalized Modified Positional Syndrome (GMPS) of order p, a new compaction scheme for test output data, is presented. The order p determines the aliasing probability and the amount of hardware overhead required to implement the scheme. GMPS of order two gives an aliasing probability about an order of magnitude lower than the best scheme reported in the literature, with minimal extra hardware. A hardware realization scheme for GMPS is presented. The scheme uses adders with feedback.
2012
While defect-oriented testing of digital circuits is a hard process, detecting a modeled fault more than once has been shown to result in high defect coverage. Previous work shows that such test sets, known as multiple-detect or n-detect test sets, are of increased quality for a number of common defects in deep sub-micrometer technologies. Methods for multiple-detect test generation usually produce fully specified test patterns.
IET Computers & Digital Techniques, 2008
Test data compression is an effective methodology for reducing test data volume and testing time. In this paper, we present a new test data compression technique based on block merging. The technique capitalizes on the fact that many consecutive blocks of the test data can be merged together. Compression is achieved by storing the merged block and the number of blocks merged. It also takes advantage of cases where the merged block can be filled by all 0's or all 1's. Test data decompression is performed on chip using a simple circuitry that repeats the merged block the required number of times. The decompression circuitry has the advantage of being test data independent. Experimental results on benchmark circuits demonstrate the effectiveness of the proposed technique compared to other coding-based compression techniques.
Journal of Systems Architecture, 2004
Conversion of the flip-flops of the circuit into scan cells helps ease the test challenge; yet test application time is increased as serial shift operations are employed. Furthermore, the transitions that occur in the scan chains during these shifts reflect into significant levels of unnecessary circuit switching, increasing the power dissipated. Judicious encoding of the correlation among the test vectors and construction of a test vector through predecessor updates helps reduce not only test application time but also scan chain transitions. Such an encoding scheme, which additionally reduces test data volume, can be further enhanced through appropriate ordering and padding of the given test cubes. The experimental results confirm the significant reductions in test application time, test data volume and test power achieved by the proposed compression methodology. * The work of the first author is supported through an IBM graduate fellowship.
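The predecessor-update idea, building each test vector from the previous one by flipping only the differing bit positions, can be sketched as follows. This is an illustrative reconstruction; the paper's actual encoding of the update positions is not modeled:

```python
def encode_updates(vectors: list) -> list:
    """Store the first vector in full; encode every later vector as the
    list of bit positions where it differs from its predecessor."""
    encoded = [vectors[0]]
    for prev, cur in zip(vectors, vectors[1:]):
        encoded.append([i for i, (a, b) in enumerate(zip(prev, cur)) if a != b])
    return encoded

def decode_updates(encoded: list) -> list:
    """Rebuild each vector by flipping the listed positions in its predecessor."""
    vectors = [encoded[0]]
    for updates in encoded[1:]:
        cur = list(vectors[-1])
        for i in updates:
            cur[i] = "1" if cur[i] == "0" else "0"
        vectors.append("".join(cur))
    return vectors
```

Ordering the test cubes so consecutive vectors are similar shrinks the update lists, which simultaneously cuts stored data volume and the number of scan-chain transitions, hence the joint volume/power benefit the abstract claims.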
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2003
Test sets for path delay faults in circuits with large numbers of paths are typically generated for path delay faults associated with the longest circuit paths. We show that such test sets may not detect faults associated with the next-to-longest paths. This may lead to undetected failures since shorter paths may fail without any of the longest paths failing. In addition, paths that appear to be shorter may actually be longer than the longest paths if the procedure used for estimating path length is inaccurate. We propose a test enrichment procedure that increases significantly the number of faults associated with the next-to-longest paths that are detected by a (compact) test set. This is achieved by allowing the underlying test generation procedure the flexibility of detecting or not detecting the faults associated with the next-to-longest paths. Faults associated with next-to-longest paths are detected without increasing the number of tests beyond that required to detect the faults associated with the longest paths. The proposed procedure thus improves the quality of the test set without increasing its size.
IET Computers & Digital Techniques, 2007
Test sets that detect each target fault n times (n-detection test sets) are typically generated for restricted values of n due to the increase in test set size with n. We perform both a worst-case analysis and an average-case analysis to check the effect of restricting n on the unmodeled fault coverage of an (arbitrary) n-detection test set. Our analysis is independent of any particular test set or test generation approach. It is based on a specific set of target faults and a specific set of untargeted faults. It shows that, depending on the circuit, very large values of n may be needed to guarantee the detection of all the untargeted faults. We discuss the implications of these results.
ACM Transactions on Design Automation of Electronic Systems, 2003
Testing system-on-chips involves applying huge amounts of test data, which is stored in the tester memory and then transferred to the chip under test during test application. Therefore, practical techniques, such as test compression and compaction, are required to reduce the amount of test data in order to reduce both the total testing time and memory requirements for the tester. In this paper, a new approach to static compaction for combinational circuits, referred to as test vector decomposition (TVD), is proposed. In addition, two new TVD based static compaction algorithms are presented. Experimental results for benchmark circuits demonstrate the effectiveness of the two new static compaction algorithms.