2000, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
The entropy of a set of data is a measure of the amount of information contained in it. Entropy calculations for fully specified data have been used to get a theoretical bound on how much that data can be compressed. This paper extends the concept of entropy for incompletely specified test data (i.e., that has unspecified or don't care bits) and explores the use of entropy to show how bounds on the maximum amount of compression for a particular symbol partitioning can be calculated. The impact of different ways of partitioning the test data into symbols on entropy is studied. For a class of partitions that use fixed-length symbols, a greedy algorithm for specifying the don't cares to reduce entropy is described. It is shown to be equivalent to the minimum entropy set cover problem and thus is within an additive constant error with respect to the minimum entropy possible among all ways of specifying the don't cares. A polynomial time algorithm that can be used to approximate the calculation of entropy is described. Different test data compression techniques proposed in the literature are analyzed with respect to the entropy bounds. The limitations and advantages of certain types of test data encoding strategies are studied using entropy theory.
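The entropy bound described above can be illustrated with a short sketch (Python; the helper name is hypothetical, not from the paper): for a fully specified bitstring partitioned into fixed-length symbols, the number of symbols times the Shannon entropy per symbol lower-bounds the compressed size in bits for that partitioning.

```python
from collections import Counter
from math import log2

def entropy_bound(bitstring: str, symbol_len: int) -> float:
    """Shannon entropy (bits per symbol) of a fully specified bitstring
    under a fixed-length symbol partition; len(symbols) * entropy is a
    lower bound on the compressed size for that partitioning."""
    symbols = [bitstring[i:i + symbol_len]
               for i in range(0, len(bitstring), symbol_len)]
    counts = Counter(symbols)
    total = len(symbols)
    # sum of p * log2(1/p), written with log2(total/c) to stay non-negative
    return sum((c / total) * log2(total / c) for c in counts.values())

print(entropy_bound("00000000", 2))  # 0.0 -- one symbol dominates, no information
print(entropy_bound("00011011", 2))  # 2.0 -- all four 2-bit symbols equally likely
```

Specifying don't cares so that a few symbols dominate the histogram drives this entropy, and hence the bound, down; that is exactly the objective of the greedy algorithm described in the abstract.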
Proceedings. Ninth IEEE European Test Symposium, 2004. ETS 2004., 2004
The entropy of a set of data is related to the amount of information that it contains and provides a theoretical bound on the amount of compression that can be achieved. While calculating entropy is well understood for fully specified data, this paper explores the use of entropy for incompletely specified test data and shows how theoretical bounds on the maximum amount of test data compression can be calculated. An algorithm for specifying don't cares to minimize entropy for fixed length symbols is presented, and it is proven to provide the lowest entropy among all ways of specifying the don't cares. The impact of different ways of partitioning the test data into symbols on entropy is studied. Different test data compression techniques are analyzed with respect to their entropy bounds. Entropy theory is used to show the limitations and advantages of certain types of test data encoding strategies.
VLSI Design, 2010
Run-length-based coding schemes have been very effective for test data compression in current-generation SoCs with a large number of IP cores. The first part of this paper presents a survey of run-length-based codes. The compression achievable for partially specified test data depends on how the unspecified bits are filled with 1s and 0s. In the second part of the paper, five different approaches to "don't care" bit filling based on the nature of runs are proposed to predict the maximum compression based on entropy, and the various run-length-based schemes are compared with the maximum data compression limit given by the entropy bounds; the actual compression figures claimed by the authors are also compared. For various ISCAS circuits, it is shown that X-filling that considers runs of zeros followed by a one as well as runs of ones followed by a zero (i.e., Extended FDR) provides the maximum data compression. In the third part, it has been shown that the average...
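The dependence on don't-care filling can be made concrete with a minimal sketch, assuming the simplest 0-fill strategy for codes that compress runs of 0s (the function names are illustrative, not the paper's):

```python
def zero_fill(cube: str) -> str:
    """Fill every unspecified ('X') bit with 0 -- the simplest filling
    for run-length codes that encode runs of 0s terminated by a 1."""
    return cube.replace("X", "0")

def runs_of_zeros(vector: str):
    """Split a fully specified vector into runs of 0s, each terminated
    by a 1 (the unit a Golomb/FDR-style encoder compresses)."""
    runs, count = [], 0
    for bit in vector:
        if bit == "0":
            count += 1
        else:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)  # trailing run with no terminating 1
    return runs

filled = zero_fill("X0X1XX01")
print(filled)                  # 00010001
print(runs_of_zeros(filled))   # [3, 3] -- longer runs => cheaper codewords
```

The five filling approaches surveyed in the paper differ precisely in which runs (of 0s, of 1s, or both) they try to lengthen before encoding.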
International Test Conference, 2003. Proceedings. ITC 2003.
Store-and-generate techniques encode a given test set and regenerate the original test set during the test with the help of a decoder. Previous research has shown that run-length coding, particularly alternating run-length coding, can provide high compression ratios for test data. However, experimental data show that longer run-lengths are distributed sparsely in the code space and often occur only once, which implies an inefficient encoding. In this study a hybrid encoding strategy is presented which overcomes this problem by combining the advantages of both run-length and dictionary-based encoding. The compression ratios strongly depend on the strategy for mapping don't cares in the original test set to zeros or ones. To find the best assignment, an algorithm is proposed which minimizes the total size of the test data, consisting of the encoded test set and the dictionary. Experimental results show that the proposed approach works particularly well for larger examples, yielding a significant reduction in total test data storage compared to pure alternating run-length coding. Recent coding strategies for test data compression are based on classical techniques such as statistical coding, run-length coding or dictionary-based coding [4-9, 11,
IEEE VLSI Test Symposium, 2002
Run-length codes and their variants have recently been shown to be very effective for compressing system-on-a-chip (SOC) test data. In this paper, we analyze the Golomb code, the conventional run-length code and the FDR code for a binary memoryless data source, and compare the compression obtained in each case to fundamental entropy bounds. We show analytically that the FDR code outperforms both the conventional run-length code and the Golomb code for test resource partitioning (TRP) based on data compression. We also present a modified compression/decompression architecture for obtaining even higher compression. We demonstrate the effectiveness of these compression codes using the larger ISCAS-89 benchmark circuits and two representative circuits from industry. Finally, we show that the FDR code is almost as effective as Unix utilities gzip and compress, even though it uses a much simpler decompression algorithm.
Journal of Signal Processing Systems, 2012
The emergence of nanometer-scale integration technology has made it possible for system-on-a-chip (SoC) designs to contain many reusable cores from multiple sources, making SoC testing more complex than conventional VLSI testing. To address this increase in design complexity in terms of data volume and test time, several compression methods have been developed and proposed in the literature. In this paper, we present a new efficient test vector compression scheme based on block entropy, in conjunction with our improved row-column reduction routine, to reduce test data significantly. Our results show that the proposed method produces a much higher compression ratio than all previously published methods: on average, our scheme scores nearly 13% higher than the best reported results, and it outperforms all reported results for each of the tested circuits. The proposed scheme is very fast and has low complexity.
Test power and test time are major issues in current VLSI testing, and the hidden structure of IP cores in SoCs further exacerbates these problems. Test data compression is a well-known method for reducing test time, and don't-care bit filling and test vector reordering can be used for effective test data compression as well as reduction in scan power. In this paper, a mixed-approach adaptive algorithm for don't-care bit filling is first proposed, developed to improve both power reduction and compression ratio. After bit filling, the vectors are reordered using an artificial intelligence approach; the quality parameter used for reordering is the Adaptive Weighted Transition Matrix (AWTM), which considers both scan-in and scan-out vectors. Modified selective Huffman coding is then applied to the reordered vector set to give the optimum compression. Experimental results on ISCAS benchmark circuits show that the proposed method gives better compression as well as better power reduction.
Proceedings Twelfth International Conference on VLSI Design. (Cat. No.PR00013), 1999
Generalized Modified Positional Syndrome (GMPS) of order p, a new compaction scheme for test output data, is presented. The order p determines the aliasing probability and the amount of hardware overhead required to implement the scheme. GMPS of order two gives an aliasing probability about an order of magnitude lower than the best scheme reported in the literature, with minimal extra hardware. A hardware realization of GMPS, using adders with feedback, is also presented.
ACM Transactions on Design Automation of Electronic Systems, 2003
We present a dictionary-based test data compression approach for reducing test data volume in SOCs. The proposed method is based on the use of a small number of ATE channels to deliver compressed test patterns from the tester to the chip and to drive a large number of internal scan chains in the circuit under test. Therefore, it is especially suitable for a reduced pin-count and low-cost DFT test environment, where a narrow interface between the tester and the SOC is desirable. The dictionary-based approach not only reduces test data volume but also eliminates the need for additional synchronization and handshaking between the SOC and the ATE. The dictionary entries are determined during the compression procedure by solving a variant of the well-known clique partitioning problem from graph theory. Experimental results for the ISCAS-89 benchmarks and representative test data from IBM show that the proposed method outperforms a number of recently proposed test data compression techniques. Compared to the previously proposed test data compression approach based on selective Huffman coding with variable-length indices, the proposed approach generally provides higher compression for the same amount of hardware overhead.
International Journal of Engineering & Technology, 2018
This paper presents a new X-filling algorithm for test power reduction and a novel encoding technique for test data compression in scan-based VLSI testing. The proposed encoding technique replaces redundant runs of the equal-run-length vector with a shorter codeword. The effectiveness of this compression method depends on the number of repeated runs occurring in the fully specified test set. To maximize the repeated runs of equal run length, the unspecified bits in the test cubes are filled with the proposed technique, called alternating equal-run-length (AERL) filling. The resulting test data are compressed using the proposed alternating equal-run-length coding to reduce test data volume. An efficient decompression architecture is also presented to decode the original data with low area overhead and power. Experimental results obtained from the larger ISCAS'89 benchmark circuits show the efficiency of the proposed work. AERL achieves up to 82.05 % of compres...
Because of increased design complexity and advanced fabrication technologies, the number of tests and the corresponding test data volume increase rapidly, and the large test data volume is becoming one of the major problems in testing a system-on-a-chip (SOC). Test data volume reduction is therefore an important issue for SOC designs. Several compression coding schemes have been proposed in the past; run-length coding is one of the most familiar methodologies for test data compression, and Golomb coding has been used on the compression side in existing work. The compression ratio of the Golomb code was found to be lower than that of the combined Alternative Variable Run-length (AVR) and nine-coded compression (9C) methods. The proposed combined AVR and 9C codes are used to reduce test data volume. Experiments are conducted on the ISCAS'89 benchmark circuits, and the results show that the proposed method is highly efficient compared with existing methods.
© IDOSI Publications, 2014
Higher circuit densities in system-on-chip (SOC) designs and increases in design complexity have led to a drastic increase in test data volume, resulting in long test application times and high tester memory requirements. Test data compression/decompression addresses this problem by reducing the test data volume without affecting overall system performance. This paper proposes a test data compression scheme that combines the advantages of compatible block coding followed by simple run-length coding to address the large test data volume of automatic test equipment. This technique significantly reduces memory requirements. The algorithm is applied to various benchmark circuits and the results are compared with existing test compression/decompression techniques. The experimental evaluation reveals that the proposed method achieves, on average, a compression ratio of 70%; it improves compression efficiency without introducing any additional decompression penalty, producing up to 30% better compression than existing methods.
Proceedings of the 40th conference on Design automation - DAC '03, 2003
We consider the relationship between test data compression and the ability to perform comprehensive testing of a circuit under an n-detection test set. The size of an n-detection test set grows approximately linearly with n. Therefore, one may expect a decompressor that can decompress a compressed n-detection test set to be larger than a decompressor required for a compact conventional test set. The results presented in this work demonstrate that it is possible to use a decompressor designed based on a compact one-detection test set in order to apply an n-detection test set. Thus, the design of the decompressor does not have to be changed as n is increased. We describe a procedure that generates an n-detection test set to achieve this result.
IET Computers & Digital Techniques, 2008
Test data compression is an effective methodology for reducing test data volume and testing time. In this paper, we present a new test data compression technique based on block merging. The technique capitalizes on the fact that many consecutive blocks of the test data can be merged together. Compression is achieved by storing the merged block and the number of blocks merged. It also takes advantage of cases where the merged block can be filled by all 0's or all 1's. Test data decompression is performed on chip using a simple circuitry that repeats the merged block the required number of times. The decompression circuitry has the advantage of being test data independent. Experimental results on benchmark circuits demonstrate the effectiveness of the proposed technique compared to other coding-based compression techniques.
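The merged-block idea can be sketched roughly as follows: a greedy left-to-right pass that merges consecutive compatible blocks, treating 'X' as compatible with anything, and stores each merged block with the number of original blocks it covers (function names are illustrative, not the authors'):

```python
def compatible(a: str, b: str) -> bool:
    """Two blocks can be merged when no position has conflicting
    specified bits ('X' matches anything)."""
    return all(x == y or "X" in (x, y) for x, y in zip(a, b))

def merge(a: str, b: str) -> str:
    """Positionwise merge: a specified bit overrides 'X'."""
    return "".join(y if x == "X" else x for x, y in zip(a, b))

def block_merge_compress(blocks):
    """Greedy merging of consecutive compatible blocks; each output
    entry is (merged_block, number_of_original_blocks_it_covers)."""
    out = []
    for blk in blocks:
        if out and compatible(out[-1][0], blk):
            merged, n = out[-1]
            out[-1] = (merge(merged, blk), n + 1)
        else:
            out.append((blk, 1))
    return out

print(block_merge_compress(["0X01", "0101", "01X1", "1100"]))
# [('0101', 3), ('1100', 1)]
```

On-chip decompression then only needs to repeat each stored block the stored number of times, which is why the decompressor can be test data independent.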
2006 IEEE International Symposium on Circuits and Systems
I. INTRODUCTION With recent advances in process technology, it is predicted that the density of integrated circuits will soon reach several billion transistors per chip [1]. The increasing density of integrated circuits has resulted in tremendous increase in test data volumes. Large test data volumes not only increase the testing time but may also exceed the capacity of tester memory. The cost of automatic test equipment (ATE) increases significantly with the increase in their speed, channel capacity, and memory. Having limited tester memory implies multiple time-consuming ATE loads ranging from minutes to hours [2]. An effective way to reduce test data volume is by test compression. The objective of test data compression is to reduce the number of bits needed to represent the test data. The compressed test set is stored in the tester memory and a decompression circuitry on chip is used to decompress the test data and apply it to the circuit under test. Test data compression results in reducing the required tester memory to store the test data and the test time.
Test data compression techniques can be broadly classified into three categories [3]: code-based schemes, linear-decompression-based schemes and broadcast-scan-based schemes. Code-based compression schemes are based on encoding test cubes using test compression codes. Techniques in this category include run-length-based codes, statistical codes, dictionary-based codes and constructive codes. Run
Integration, the VLSI Journal, 2012
A new scheme of test data compression based on run-length, namely equal-run-length coding (ERLC) is presented. It is based on both types of runs of 0's and 1's and explores the relationship between two consecutive runs. It uses a shorter codeword to represent the whole second run of two equal length consecutive runs. A scheme for filling the don't-care bits is proposed to maximize the number of consecutive equal-length runs. Compared with other already known schemes, the proposed scheme achieves higher compression ratio with low area overhead. The merits of the proposed algorithm are experimentally verified on the larger examples of the ISCAS89 benchmark circuits.
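A toy sketch of the equal-run idea: list the runs of 0s and 1s and mark any run whose length equals that of the immediately preceding run, since ERLC assigns the second of two equal-length consecutive runs a shorter codeword (the marker 'R' here stands in for that codeword; the actual codeword assignment is not reproduced):

```python
from itertools import groupby

def equal_run_marker_encode(vector: str):
    """Split a fully specified vector into runs (bit, length) and replace
    a run whose length equals the preceding run's length with the single
    marker 'R' -- a stand-in for ERLC's short 'equal run' codeword."""
    runs = [(bit, len(list(grp))) for bit, grp in groupby(vector)]
    out, prev_len = [], None
    for bit, length in runs:
        out.append("R" if length == prev_len else (bit, length))
        prev_len = length
    return out

print(equal_run_marker_encode("0001110011"))
# [('0', 3), 'R', ('0', 2), 'R']
```

The don't-care-filling scheme in the paper aims to create as many such equal-length consecutive runs as possible, so that the short codeword fires often.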
2009 Annual IEEE India Conference, 2009
Test data compression is a basic necessity for today's test methodology with respect to test cost and test time. This paper presents a compression/decompression scheme based on Frequency Dependent Bit Appending (FDBA) of test vectors used with statistical codes. The proposed scheme aims not only at data compression but at data compression with a small silicon area overhead for the on-chip decoder. We have observed that when the number of bits per test vector is a prime number or a multiple of a prime number (particularly one multiplied by 2 or 3), statistical codes give a large area overhead. The proposed FDBA scheme shows that, in such cases, appending a few bits at the end of each test vector before compression improves the compression percentage with very little area overhead. For ISCAS benchmark circuits, it is shown that when the proposed scheme is applied with a statistical coding method, it not only improves the compression percentage but also greatly reduces the area overhead compared to the base statistical method.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2000
Increasing embedded systems functionality causes a steep increase in code size; for instance, more than 60 MB of software is installed in current state-of-the-art cars. It is often challenging to host such a vast amount of software efficiently within a given hardware resource budget of an embedded system. This may be done using code compression techniques, which compress the program code off-line (i.e., at design time) and decompress it on-line (i.e., at run time). In a traditional test compression framework, the original test data is compressed and stored in the memory, significantly reducing the memory size; an on-chip decoder decodes the compressed test data from the memory and delivers the original uncompressed set of test vectors to the design-under-test (DUT). The major contributions of this paper are as follows: 1) it develops an efficient bitmask selection technique for test data in order to create maximum matching patterns; 2) it develops an efficient dictionary selection method which takes the bitmask-based compression into account; and 3) it proposes a test compression technique using efficient dictionary and bitmask selection to significantly reduce testing time and memory requirements.
2014
The need to test large amounts of data in large ICs has increased time and memory requirements manyfold. Several test data compression schemes have been proposed for reducing test data volume. In this paper, we propose a novel, lossless, time- and memory-minimizing test data compression scheme based on fixed-to-variable-length coding with a limited number of codewords. In our scheme, we divide the test vectors into fixed-length blocks and then compress them into variable-length codes. We use this extended variable-length coding algorithm to modify the test vectors and increase the compression ratio. Experimental results for the ISCAS-89 benchmark circuits and the compressed test data show that our scheme yields a good compression ratio. In comparison with previous test data compression techniques based on variable-length codes, our method has a reasonable effect on compression.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2003
Test sets for path delay faults in circuits with large numbers of paths are typically generated for path delay faults associated with the longest circuit paths. We show that such test sets may not detect faults associated with the next-to-longest paths. This may lead to undetected failures since shorter paths may fail without any of the longest paths failing. In addition, paths that appear to be shorter may actually be longer than the longest paths if the procedure used for estimating path length is inaccurate. We propose a test enrichment procedure that increases significantly the number of faults associated with the next-to-longest paths that are detected by a (compact) test set. This is achieved by allowing the underlying test generation procedure the flexibility of detecting or not detecting the faults associated with the next-to-longest paths. Faults associated with next-to-longest paths are detected without increasing the number of tests beyond that required to detect the faults associated with the longest paths. The proposed procedure thus improves the quality of the test set without increasing its size.