2008, IET Computers & Digital Techniques
Test data compression is an effective methodology for reducing test data volume and testing time. In this paper, we present a new test data compression technique based on block merging. The technique capitalizes on the fact that many consecutive blocks of the test data can be merged together. Compression is achieved by storing the merged block and the number of blocks merged, and the technique also takes advantage of cases where the merged block can be filled entirely with 0's or 1's. Test data decompression is performed on chip using simple circuitry that repeats the merged block the required number of times; this decompression circuitry has the advantage of being test data independent. Experimental results on benchmark circuits demonstrate the effectiveness of the proposed technique compared to other coding-based compression techniques.
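The merge-and-repeat idea described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the block length, the compatibility rule for don't-care bits (`'x'`), and the fill value are assumptions for the sketch.

```python
def compatible(a, b):
    """Two blocks merge if, in every position, at least one bit is 'x'
    or both specified bits agree."""
    return all(p == 'x' or q == 'x' or p == q for p, q in zip(a, b))

def merge(a, b):
    """Combine two compatible blocks, keeping the specified bits."""
    return ''.join(q if p == 'x' else p for p, q in zip(a, b))

def compress(test_data, block_len):
    """Encode the test set as (merged_block, repeat_count) pairs."""
    blocks = [test_data[i:i + block_len]
              for i in range(0, len(test_data), block_len)]
    encoded = []
    current, count = blocks[0], 1
    for blk in blocks[1:]:
        if compatible(current, blk):
            current = merge(current, blk)
            count += 1
        else:
            encoded.append((current, count))
            current, count = blk, 1
    encoded.append((current, count))
    return encoded

def decompress(encoded, fill='0'):
    """Repeat each merged block, mapping leftover 'x' bits to a fill value."""
    return ''.join(blk.replace('x', fill) * n for blk, n in encoded)
```

For example, `compress("00x00x101x10", 4)` merges the first two compatible blocks into `"0010"` stored once with a repeat count of 2.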
IEEE Transactions on Computer-aided Design of Integrated Circuits and Systems, 2003
This paper presents a compression/decompression scheme based on selective Huffman coding for reducing the amount of test data that must be stored on a tester and transferred to each core in a system-on-a-chip (SOC) during manufacturing test. The test data bandwidth between the tester and the SOC is a bottleneck that can result in long test times when testing complex SOCs that contain many cores. In the proposed scheme, the test vectors for the SOC are stored in compressed form in the tester memory and transferred to the chip where they are decompressed and applied to the cores. A small amount of on-chip circuitry is used to decompress the test vectors. Given the set of test vectors for a core, a modified Huffman code is carefully selected so that it satisfies certain properties. These properties guarantee that the codewords can be decoded by a simple pipelined decoder (placed at the serial input of the core's scan chain) that requires very small area. Results indicate that the proposed scheme can provide test data compression nearly equal to that of an optimum Huffman code with much less area overhead for the decoder.
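A toy sketch of the selective-coding idea above: build a Huffman code over only the few most frequent fixed-length block patterns, and send every other block raw behind a one-bit escape. The block length, the escape convention, and the pattern count are illustrative assumptions, not the paper's exact code construction.

```python
import heapq
from collections import Counter

def huffman_codes(freqs):
    """Plain Huffman over (symbol, frequency) pairs."""
    heap = [(f, [s]) for s, f in freqs]  # (subtree weight, symbols in subtree)
    codes = {s: '' for s, _ in freqs}
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, s1 = heapq.heappop(heap)
        f2, s2 = heapq.heappop(heap)
        for s in s1:
            codes[s] = '0' + codes[s]   # left branch
        for s in s2:
            codes[s] = '1' + codes[s]   # right branch
        heapq.heappush(heap, (f1 + f2, s1 + s2))
    return codes

def selective_encode(blocks, n_coded=4):
    """Huffman-code only the n_coded most frequent patterns; all other
    blocks go raw behind a '0' escape bit, coded ones behind a '1'."""
    top = Counter(blocks).most_common(n_coded)
    codes = huffman_codes(top)
    out = [('1' + codes[b]) if b in codes else ('0' + b) for b in blocks]
    return ''.join(out), codes
```

With skewed block statistics (the situation the paper exploits), the coded stream is much shorter than the raw one: nine 4-bit blocks dominated by two patterns compress from 36 bits to 21 here.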
© IDOSI Publications, 2014
Higher circuit densities in system-on-chip (SOC) designs and increasing design complexity have led to a drastic increase in test data volume, which results in long test application times and high tester memory requirements. Test data compression/decompression addresses this problem by reducing the test data volume without affecting overall system performance. This paper proposes a test data compression scheme that combines the advantages of compatible block coding with simple run-length coding to address the large test data volume handled by automatic test equipment. The technique significantly reduces memory requirements. The algorithm is applied to various benchmark circuits and the results are compared with existing test compression/decompression techniques. The experimental evaluation revealed that the proposed method achieves, on average, a compression ratio of 70%. The proposed approach improves compression efficiency without introducing any additional decompression penalty, producing up to 30% better compression than existing methods.
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2000
Because of increased design complexity and advanced fabrication technologies, the number of tests and the corresponding test data volume are growing rapidly, and the large test data volume has become one of the major problems in testing a System-on-a-Chip (SOC). Test data volume reduction is therefore an important issue for SOC designs. Several compression coding schemes have been proposed in the past; run-length coding is one of the most familiar methodologies for test data compression, and Golomb coding has been used on the compression side. The compression ratio of the Golomb code was found to be lower than that of the combined Alternative Variable Run-length (AVR) and nine-coded compression (9C) methods. The proposed combined AVR and 9C codes are used for reducing the test data volume. Experiments were conducted on the ISCAS'89 benchmark circuits, and the results show that the proposed method is highly efficient compared with existing methods.
Proceedings Twelfth International Conference on VLSI Design. (Cat. No.PR00013), 1999
Generalized Modified Positional Syndrome (GMPS) of order p, a new compaction scheme for test output data, is presented. The order p determines the aliasing probability and the amount of hardware overhead required to implement the scheme. GMPS of order two gives an aliasing probability about an order of magnitude lower than the best scheme reported in the literature, with minimal extra hardware. A hardware realization of GMPS, built from adders with feedback, is also presented.
2006 IEEE International Symposium on Circuits and Systems, 2006
In this paper, we present a new test data compression technique based on block merging. The technique capitalizes on the fact that many consecutive blocks of the test data can be merged together. Compression is achieved by storing the merged block and the number of blocks merged, and the technique also takes advantage of cases where the merged block can be filled entirely with 0's or 1's. Test data decompression is performed on chip using simple circuitry that repeats the merged block the required number of times; this decompression circuitry has the advantage of being test data independent. Experimental results on benchmark circuits demonstrate the effectiveness of the proposed technique compared to previous approaches.
International Journal of Engineering & Technology, 2018
This paper presents a new X-filling algorithm for test power reduction and a novel encoding technique for test data compression in scan-based VLSI testing. The proposed encoding technique replaces redundant runs of the equal-run-length vector with a shorter codeword. The effectiveness of this compression method depends on the number of repeated runs occurring in the fully specified test set. To maximize the number of repeated runs of equal length, the unspecified bits in the test cubes are filled with the proposed alternating equal-run-length (AERL) filling technique. The resulting test data are compressed using the proposed alternating equal-run-length coding to reduce the test data volume. An efficient decompression architecture is also presented to decode the original data with low area overhead and power. Experimental results obtained from the larger ISCAS'89 benchmark circuits show the efficiency of the proposed work. The AERL achieves up to 82.05 % of compres...
2013 IEEE 31st VLSI Test Symposium (VTS), 2013
A highly efficient SOC test compression scheme which uses sequential linear decompressors local to each core is proposed. Test data is stored on the tester in compressed form and brought over the TAM to the core before being decompressed. Very high encoding efficiency is achieved by allowing free variables to be shared across test cubes compressed at the same time as well as in subsequent time steps. The idea of retaining unused non-pivot free variables when decompressing one test cube to help encode subsequent test cubes, introduced in [Muthyala 12], is applied here in the context of SOC testing. It is shown that in this application a first-in first-out (FIFO) buffer is not required. Retaining excess free variables rather than wasting them when the decompressor is reset avoids the need for high precision in matching the number of free variables used for encoding with the number of care bits. This allows greater flexibility in test scheduling to reduce test time, tester storage, and control complexity, as indicated by the experimental results.
2014
The need to test large amounts of data in large ICs has increased time and memory requirements many fold. Several test data compression schemes have been proposed for reducing the test data volume. In this paper, we propose a novel lossless, time- and memory-minimizing test data compression scheme based on fixed-to-variable-length coding with a limited number of codewords. In our scheme, we divide the test vectors into fixed-length blocks and then compress them into variable-length codes. We use this extended variable-length coding algorithm to modify the test vectors and increase the compression ratio. Experimental results for the ISCAS'89 benchmark circuits and the compressed test data show that our scheme achieves a good compression ratio. Compared with previous test data compression techniques based on variable-length codes, our method has a reasonable effect on compression.
Test data compression is an efficient method for reducing test application cost. The problem of reducing test data has been addressed by researchers in three different ways: test data compression, built-in self-test (BIST), and test set compaction. The latter two methods can enhance fault coverage, at the cost of hardware overhead. The drawback of conventional methods is that while they reduce test storage and test power, no additional compression is applied when the test data contain redundant run lengths. This paper presents a modified Run-Length Coding (RLC) technique combined with Multilevel Selective Huffman Coding (MLSHC) to reduce test data volume, test pattern delivery time, and power dissipation in scan test applications; where redundant run lengths are encountered, the preceding run symbol is replaced with a tiny codeword. Experimental results show that the presented method not only improves test data compression but also reduces the overall test data volume compared to recent schemes. Experiments on the six largest ISCAS'89 benchmarks show that our method outperforms most known techniques. Keywords: modified run-length coding, multilevel selective Huffman coding, built-in self-test, modified selective Huffman coding, automatic test equipment.
2014
The two major areas of concern in the testing of VLSI circuits are test data volume and excessive test power. Among the many compression coding schemes proposed to date, the CCSDS (Consultative Committee for Space Data Systems) lossless data compression scheme is one of the best. This paper discusses a test data compression scheme based on the lossless Rice algorithm recommended by the CCSDS for reducing the amount of test data that must be stored on the tester and transferred during manufacturing test to each core in a system-on-a-chip (SOC). In the proposed scheme, the test vectors for the SOC are compressed using the Rice algorithm together with various binary encoding techniques. Experimental results show that the test data compression ratio for the larger ISCAS'89 benchmark circuits is significantly improved in comparison with existing methods.
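For reference, the core of the Rice code underlying the CCSDS recommendation is simple: split each non-negative value by a parameter k into a unary-coded quotient and a k-bit binary remainder. The sketch below shows only that core; how test data is mapped to integers and how k is chosen are outside this illustration.

```python
def rice_encode(n, k):
    """Rice code with parameter k: quotient n >> k in unary
    (that many '1's then a '0'), remainder in k binary bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b')

def rice_decode(bits, k):
    """Inverse of rice_encode for a concatenated stream of codewords."""
    out, i = [], 0
    while i < len(bits):
        q = 0
        while bits[i] == '1':   # count the unary quotient
            q, i = q + 1, i + 1
        i += 1                  # skip the terminating '0'
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out
```

For example, `rice_encode(9, 2)` yields `'11001'`: quotient 2 as `'110'`, remainder 1 as `'01'`.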
International Test Conference, 2003. Proceedings. ITC 2003.
Store-and-generate techniques encode a given test set and regenerate the original test set during the test with the help of a decoder. Previous research has shown that run-length coding, particularly alternating run-length coding, can provide high compression ratios for test data. However, experimental data show that longer run-lengths are distributed sparsely in the code space and often occur only once, which implies inefficient encoding. In this study a hybrid encoding strategy is presented which overcomes this problem by combining the advantages of run-length and dictionary-based encoding. The compression ratios strongly depend on the strategy for mapping don't-cares in the original test set to zeros or ones. To find the best assignment, an algorithm is proposed which minimizes the total size of the test data, consisting of the encoded test set and the dictionary. Experimental results show that the proposed approach works particularly well for larger examples, yielding a significant reduction of the total test data storage compared to pure alternating run-length coding. Recent coding strategies for test data compression are based on classical techniques such as statistical coding, run-length coding, or dictionary-based coding.
2001
The increasing complexity of systems-on-a-chip, with the accompanying increase in their test data size, has made the need for test data reduction imperative. In this work, we introduce a novel lossless compression technique for testing systems-on-a-chip based on geometric shapes. The technique exploits reordering of test vectors to minimize the number of shapes needed to encode the test data. After sorting the test vectors, the test set is partitioned into blocks and each block is encoded separately. For testing a chip, the compressed test data is transferred from the automatic test equipment to the chip, where it is decompressed. Test data decompression is performed on chip, either in hardware using a decoding circuit or in software using an embedded processor; in both cases, test decompression requires memory to store the decoded block. In this work, we have demonstrated both cases. The effectiveness of the technique in achieving a high compression ratio is demonstrated on the largest ISCAS'85 and full-scanned ISCAS'89 benchmark circuits, where it achieved a significantly higher compression ratio than other test compression techniques. Frequency-directed run-length (FDR) code is a variable-to-variable code based on encoding runs of 0's. In this work, we demonstrate that higher test data compression can be achieved by encoding both runs of 0's and runs of 1's. We propose an extension to the FDR code (EFDR) and demonstrate by experimental results its effectiveness in achieving a higher compression ratio. In the geometric-primitives-based compression technique, some blocks are encoded by storing the real test data because the encoded block would be larger than the actual data block; reducing the number of such blocks could yield higher test data compression.
In this work, we propose hybrid test data compression techniques that exploit the use of either FDR or EFDR codes to reduce the number of blocks that are encoded by storing the real test data. Based on experimental results, we demonstrate the effectiveness of the proposed hybrid compression techniques in increasing the test data compression ratios over those obtained by the Geometric-Primitives-Based compression technique.
Computers & Electrical Engineering, 2010
A new test data compression/decompression scheme, namely coding with even-bit marking and selective output inversion, is presented. It first uses a special kind of codeword in which the odd bits represent the length of runs and the even bits indicate whether the codeword has finished; this characteristic of the codewords keeps the structure of the decompressor simple. The scheme then introduces a selective output inversion structure to increase the probability of 0's. This scheme obtains a better compression ratio than several known schemes while requiring very low hardware overhead. Its performance is experimentally confirmed on the larger examples of the ISCAS'89 benchmark circuits.
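One plausible reading of an "even bits mark the end" codeword is a bit-interleaved format in which payload bits alternate with continuation flags, so the decoder finds codeword boundaries without a length table. The sketch below is an interpretation of that idea, not the paper's exact code.

```python
def encode_marked(value):
    """Interleave the binary payload with marker bits: after each payload
    bit, emit 1 if more payload follows, 0 if the codeword ends here."""
    payload = format(value, 'b')
    return ''.join(bit + ('1' if i < len(payload) - 1 else '0')
                   for i, bit in enumerate(payload))

def decode_marked_stream(bits):
    """Split a concatenated stream back into values using the markers."""
    values, current, i = [], '', 0
    while i < len(bits):
        current += bits[i]          # payload bit
        if bits[i + 1] == '0':      # marker: codeword ends
            values.append(int(current, 2))
            current = ''
        i += 2
    return values
```

Because every payload bit carries its own end marker, codewords of any length can be concatenated and still be decoded unambiguously, which is what keeps such a decompressor simple.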
10th IEEE International Conference on Electronics, Circuits and Systems, 2003. ICECS 2003. Proceedings of the 2003, 2003
One of the major challenges in testing a System-on-a-Chip (SOC) is dealing with the large test data size. To reduce the volume of test data, several efficient test data compression techniques have recently been proposed. In this paper, we propose hybrid test compression techniques that combine the geometric-primitives-based compression technique with the frequency-directed run-length (FDR) and extended frequency-directed run-length (EFDR) coding techniques. Based on experimental results, we demonstrate the effectiveness of the proposed hybrid compression techniques in increasing test data compression ratios over those obtained by the geometric-primitives-based compression technique alone.
2000
Test data compression has been an emerging need of the VLSI field, and hence a hot research topic, for the last decade, yet there remains considerable scope for further reduction in test data volume. This reduction may be lossy for output-side test data but must be lossless for input-side test data. This paper summarizes the different methods applied for lossless compression of input-side test data, from simple code-based methods to combined/hybrid methods. The goal is to survey current test data compression methodologies and prepare a platform for further development in this area.
9th International Conference on Electronics, Circuits and Systems, 2002
One of the major challenges in testing a System-on-a-Chip (SOC) is dealing with the large test data size. To reduce the volume of test data, several test data compression techniques have been proposed. Frequency-directed run-length (FDR) code is a variable-to-variable run-length code based on encoding runs of 0's. In this work, we demonstrate that higher test data compression can be achieved by encoding both runs of 0's and runs of 1's. We propose an extension to the FDR code and demonstrate by experimental results its effectiveness in achieving a higher compression ratio.
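The FDR codeword structure and the both-run-types preprocessing can be sketched as follows. The FDR groups follow the standard construction (group k covers lengths 2^k-2 through 2^(k+1)-3); the exact EFDR codeword format, including how the run type is signalled, is simplified in this sketch.

```python
def fdr_codeword(run_len):
    """FDR codeword: group k covers lengths 2**k - 2 .. 2**(k+1) - 3;
    the prefix is (k-1) ones and a 0, the tail is k bits of the offset."""
    k = 1
    while run_len > 2 ** (k + 1) - 3:
        k += 1
    prefix = '1' * (k - 1) + '0'
    tail = format(run_len - (2 ** k - 2), f'0{k}b')
    return prefix + tail

def efdr_runs(bits):
    """EFDR-style preprocessing: cut the stream into runs of identical
    bits, each terminated by one opposite bit; return (bit, length)
    pairs, one per run, ready to be FDR-coded with a run-type flag."""
    runs, i = [], 0
    while i < len(bits):
        b, j = bits[i], i
        while j < len(bits) and bits[j] == b:
            j += 1
        runs.append((b, j - i))
        i = j + 1 if j < len(bits) else j   # consume the terminating bit
    return runs
```

For example, run length 3 falls in group 2, giving prefix `'10'` and tail `'01'`, i.e. codeword `'1001'`.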
Integration, the VLSI Journal, 2012
A new run-length-based test data compression scheme, namely equal-run-length coding (ERLC), is presented. It considers both runs of 0's and runs of 1's and exploits the relationship between two consecutive runs: a shorter codeword is used to represent the entire second run of two consecutive equal-length runs. A scheme for filling the don't-care bits is proposed to maximize the number of consecutive equal-length runs. Compared with other known schemes, the proposed scheme achieves a higher compression ratio with low area overhead. The merits of the proposed algorithm are experimentally verified on the larger examples of the ISCAS'89 benchmark circuits.
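The core ERLC trick, emitting one short "repeat" codeword when a run has the same length as its predecessor, can be sketched as below. The actual length codewords in such schemes are variable-length (FDR-style); the fixed 8-bit form and the specific prefixes here are placeholders for illustration.

```python
REPEAT = '00'  # short codeword: "same length as the previous run"

def erlc_encode(run_lengths):
    """Replace the second of two equal-length consecutive runs with the
    short REPEAT codeword; otherwise emit an explicit length codeword
    (a '01' prefix plus a fixed 8-bit length in this sketch)."""
    out, prev = [], None
    for length in run_lengths:
        if length == prev:
            out.append(REPEAT)
        else:
            out.append('01' + format(length, '08b'))
        prev = length
    return out
```

A sequence of run lengths like `[3, 3, 3, 5]` thus costs one full codeword for the first 3, two 2-bit repeats, and one full codeword for the 5, which is where the compression gain over plain run-length coding comes from.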