2015, International Journal of Computer and Communication Technology
With the increase in silicon densities, it has become feasible for compression systems to be implemented on-chip. A system with a distributed memory architecture is based on having data compression and decompression engines working independently on different data at the same time, with the data stored in memory distributed to each processor. The objective of the project is to design a lossless data compression system that operates at high speed to achieve a high compression rate. By using an array of compressors, the data compression rates are significantly improved, and the architecture is inherently scalable. The main parts of the system are the data compressors and the control blocks, which provide control signals for the data compressors and allow appropriate routing of data into and out of the system. Each data compressor can process four bytes of data from a block of data in every clock cycle. The data entering the system needs to be clocked in ...
IEEE Journal of Solid-State Circuits, 1993
A lossless data compression and decompression (LDCD) algorithm based on the notion of textual substitution has been implemented in silicon using a linear systolic array architecture. This algorithm employs a model where the encoder and decoder each have a finite amount of memory which is referred to as the dictionary. Compression is achieved by finding matches between the dictionary and the input data stream, each of which is replaced in the data stream by an index referencing the corresponding dictionary entry. The LDCD system is built using 30 application-specific integrated circuits (ASICs), each containing 126 identical processing elements (PEs) which perform both the encoding and decoding functions at clock rates up to 20 MHz.
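The textual-substitution idea this abstract describes (replace a stretch of input with an index into a dictionary of recently seen data) can be sketched in software. Below is a minimal, illustrative Python model using a sliding-window dictionary; the function names, window size, and token format are assumptions for the sketch, not details of the LDCD chip set.

```python
def lz_substitute(data: bytes, window: int = 32):
    """Encode data as (offset, length) references into a sliding
    dictionary window, falling back to literal bytes."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        # Search the dictionary (the last `window` bytes) for the
        # longest match against the upcoming input.
        for j in range(max(0, i - window), i):
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= 3:                 # substitute only worthwhile matches
            out.append(("ref", best_off, best_len))
            i += best_len
        else:
            out.append(("lit", data[i]))
            i += 1
    return out

def lz_restore(tokens) -> bytes:
    """Decoder: rebuild the stream, resolving each index back into
    previously decoded (dictionary) bytes."""
    out = bytearray()
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            _, off, length = t
            for _ in range(length):
                out.append(out[-off])     # byte-by-byte allows overlap
    return bytes(out)
```

The hardware achieves speed by performing all dictionary comparisons in parallel across the systolic array; the nested search loop here is the sequential equivalent.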
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2000
In this paper, we propose a new two-stage hardware architecture that combines the features of both the parallel dictionary LZW (PDLZW) and an approximated adaptive Huffman (AH) algorithm. In this architecture, an ordered list instead of the tree-based structure is used in the AH algorithm to speed up the compression data rate. The resulting architecture not only outperforms the AH algorithm while using only one-fourth of the hardware resources but is also competitive with the performance of the LZW algorithm (compress). In addition, both the compression and decompression rates of the proposed architecture are greater than those of the AH algorithm, even when the latter is realized in software.
This is to certify that the thesis entitled "Lossless Data Compression And Decompression Algorithm And Its Hardware Architecture", submitted by Sri V.V.V. SAGAR in partial fulfillment of the requirements for the award of the Master of Technology Degree in Electronics and Communication Engineering with specialization in "VLSI Design and Embedded System" at the National Institute of Technology, Rourkela (Deemed University), is an authentic work carried out by him under my supervision and guidance.
International Journal of Electronics Signals and Systems, 2011
The paper presents a novel VLSI architecture for high-speed data compressor designs implementing the X-Match algorithm. The design involves important trade-offs that affect compression performance, latency, and throughput. The most promising approach is implemented in FPGA hardware. The device's typical compression ratio halves the original uncompressed data. It is specifically targeted at Gbit/s data networks and storage applications, where it can double the performance of the original systems. Achieving a high compression rate or a high communication data rate does not require a parallel data compression architecture. The main drawbacks of the existing method are (1) variation in compression, (2) throughput, (3) latency, (4) high area, and (5) high power. The proposed method reduces the variation in compression and the latency, and increases throughput. This novel VLSI architecture has a power consumptio...
2006 IEEE International Multitopic Conference, 2006
This paper presents a novel architecture for hardware implementation of the LZJ lossless data compression algorithm. The architecture is scalable depending upon the requirements of parallel comparisons. Several instances of the design are synthesized on an FPGA for 1 Gbit/sec and higher data rates. With the increase in network traffic, large-scale digital data storage/retrieval, and the requirement for preserving communication channel bandwidth, data compression is receiving enormous attention. For real-time applications, hardware implementation of data compression algorithms seems imperative and the only viable solution.
International Journal of Computer Applications, 2010
The advent of the modern electronic world has opened up various fronts in multimedia interaction. Multimedia is used in many fields for education, entertainment, research, and more, which has led to regular storage and retrieval of multimedia content. But due to the limitations of current technology, disk space and transmission bandwidth fall behind in the race with the requirements of multimedia content. This imposes a need to compress multimedia content so that it can be stored in less space and easily transferred from one point to another. An online dictionary-based compression technique can be applied to reduce the data packet size. When the repetition rate of the same symbols within the data is high, such compression techniques work very well. During encoding and decoding, building the online dictionary in primary memory ensures a single pass over the data, and the dictionary need not be transmitted over the network. Our proposed Improved Dictionary technique scans the data byte-wise, so that the chances of repetition of individual symbols are higher for text messages. Fixed-length coding transmits fixed-length codes for all dictionary entries. For bigger messages, better optimization in terms of size reduction can be achieved through variable-length coding with the L-Z technique, where the transmitted code length corresponding to individual dictionary entries varies dynamically according to the requirement.
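The single-pass, online-dictionary behavior described above (the dictionary is built during encoding and never transmitted, because the decoder can rebuild it identically) is the hallmark of LZ78-style schemes. A minimal Python sketch under that assumption; the names and token format are illustrative, not taken from the proposed Improved Dictionary technique:

```python
def lz78_encode(data: bytes):
    """Single pass: grow the longest known phrase, then emit
    (dictionary index of phrase, next byte) and add the new phrase.
    The dictionary itself is never transmitted."""
    dictionary = {b"": 0}
    out, phrase = [], b""
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate
        else:
            out.append((dictionary[phrase], byte))
            dictionary[candidate] = len(dictionary)
            phrase = b""
    if phrase:                             # flush a trailing match
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

def lz78_decode(tokens) -> bytes:
    """The decoder rebuilds the identical dictionary as it goes,
    which is why no dictionary ever crosses the network."""
    dictionary = {0: b""}
    out = bytearray()
    for idx, byte in tokens:
        entry = dictionary[idx] + bytes([byte])
        out += entry
        dictionary[len(dictionary)] = entry
    return bytes(out)
```

The byte-wise scan mirrors the abstract's point: scanning at byte granularity raises the repetition rate of individual symbols, so the dictionary fills with useful phrases quickly on text data.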
Computación y Sistemas, 2006
Nowadays, the use of digital communication systems has increased to the point that network bandwidth is affected. This problem can be addressed by implementing data compression algorithms in communication devices to reduce the amount of data to be transmitted. However, the design of large hardware data compression models requires efficient use of the silicon area. This work proposes the conjunction of two different hardware lossless data compression approaches which share common hardware elements. The project also involves the design of a hardware/software architecture to exploit parallelism, increasing execution speed while keeping flexibility. A custom coprocessor unit executes the compute-intensive tasks of the Burrows-Wheeler Transform and the Lempel-Ziv lossless data compression schemes. This coprocessor unit is controlled by a SPARC V8-compatible general-purpose microprocessor called LEON2.
2011
Reconfigurable computing is emerging as a new area for satisfying the simultaneous demand for application performance and flexibility. The ability to customize the architecture to match the computation and the data flow of the application has demonstrated significant performance benefits compared to general-purpose architectures. Signal processing, multimedia, and high-speed communication are the major application domains with significant heterogeneity in their computation and communication structure. The reconfigurability of the hardware permits adapting it to the specific computations in each application to achieve higher performance than software. Complex functions can be mapped onto the architecture, achieving higher silicon utilization and reducing the instruction fetch-and-execute bottleneck. In this paper we propose and implement a high-speed CODEC (for lossless compression) which compresses real-time images for high-speed c...
IEEE Transactions on Circuits and Systems for Video Technology, 1992
This paper reports a VLSI implementation of a lossless data compression algorithm. This is the first implementation of an encoder/decoder chip set that employs the Rice algorithm, and the paper provides an introduction to the algorithm as well as a description of the high-performance hardware. The algorithm is adaptive over a wide entropy range. Its performance on several 8-b test images has been shown to exceed other techniques employing differential pulse code modulation (DPCM) followed by arithmetic coding, adaptive Huffman coding, and a Lempel-Ziv-Welch (LZW) algorithm. A major feature of the algorithm is that it requires no look-up tables or external RAM. Only 71,000 transistors are required to implement the encoder and decoder. Each chip was fabricated in a 1.0-μm CMOS process, and both are only 5 mm on a side. A comparison is made with other hardware realizations. Under laboratory conditions, the encoder compresses at a rate in excess of 50 Msamples/s and the decoder operates at 25 Msamples/s. The current implementation processes quantized data from 4 to 14 b/sample. The chip set could be used in video systems as a component in a lossy compression scheme to provide real-time, efficient, variable-length coding and decoding of DCT transform coefficients and the low-frequency band found in subband coders.
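The chip implements the full adaptive Rice algorithm; the core entropy code it adapts over is the Golomb-Rice code, which for a fixed parameter k emits the quotient n >> k in unary followed by the k-bit remainder. A minimal Python sketch of that core (fixed k only, no adaptation; names are illustrative):

```python
def rice_encode(values, k):
    """Golomb-Rice code: quotient n >> k in unary (q ones then a
    terminating zero), remainder in k plain binary bits."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits += [1] * q + [0]
        bits += [(r >> b) & 1 for b in range(k - 1, -1, -1)]
    return bits

def rice_decode(bits, count, k):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:                  # unary quotient
            q, i = q + 1, i + 1
        i += 1                               # skip the terminating zero
        r = 0
        for _ in range(k):                   # k-bit remainder
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out
```

Because the code needs no tables (just shifts and counts), it maps naturally onto the table-free, RAM-free hardware the abstract highlights; the adaptive part of the Rice algorithm chooses k per block of samples.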
2011 International Green Computing Conference and Workshops, 2011
We have implemented and evaluated a novel dictionary code compression mechanism where frequently executed individual instructions and/or sequences are replaced in memory with short code words. The result is a dramatically reduced instruction memory access frequency leading to a performance improvement for small instruction cache sizes and to significantly reduced energy consumption in the instruction fetch path.
World Applied Sciences …, 2013
In a distributed environment, large data files remain a major bottleneck. Compression is an important component of the solutions available for creating file sizes of manageable and transmittable dimensions. When high-speed media or channels are used, high-speed data compression is desired, and software implementations are often not fast enough. In this paper, we present a very high speed hardware description language (VHDL) modeling environment for the Lempel-Ziv-Welch (LZW) algorithm for binary data compression, to ease description, verification, simulation, and hardware realization. The VHDL model defines a main block which describes the LZW algorithm for binary data compression through a behavioral and structural description. The LZW algorithm for binary data compression comprises two modules: a compressor and a decompressor. The compressor's input is a 1-bit stream read in on each clock cycle. The output is an 8-bit integer stream fed into the decompressor; each integer is an index representing the memory location of the bit string stored in the dictionary. The output of the decompressor is a 1-bit stream. After settling the particular approaches for the input, output, main block, and the different modules, the VHDL descriptions are run through a VHDL simulator, followed by timing analysis, to validate the functionality and performance of the designated model and support its effectiveness for the application.
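The 1-bit-in, index-out behavior described above matches textbook LZW seeded with the two single-bit strings. A minimal Python sketch under that reading, using strings of '0'/'1' characters for clarity; this is illustrative only, not the thesis's VHDL model:

```python
def lzw_bits_encode(bits: str):
    """LZW over a binary stream: the dictionary is seeded with the
    two single-bit strings; each output index names the longest
    dictionary match found in the input."""
    dictionary = {"0": 0, "1": 1}
    out, phrase = [], ""
    for b in bits:
        if phrase + b in dictionary:
            phrase += b
        else:
            out.append(dictionary[phrase])
            dictionary[phrase + b] = len(dictionary)
            phrase = b
    if phrase:
        out.append(dictionary[phrase])
    return out

def lzw_bits_decode(codes):
    """Rebuilds the same dictionary from the indices alone; the one
    index the encoder may emit before the decoder has created it is
    the classic KwKwK special case."""
    dictionary = {0: "0", 1: "1"}
    prev = dictionary[codes[0]]
    out = [prev]
    for c in codes[1:]:
        entry = dictionary[c] if c in dictionary else prev + prev[0]
        out.append(entry)
        dictionary[len(dictionary)] = prev + entry[0]
        prev = entry
    return "".join(out)
```

In the described hardware the indices are 8-bit integers, so this sketch would correspond to a dictionary capped at 256 entries; the cap and any reset policy are not modeled here.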
2009
In this paper we propose a lossless data compressor with high throughput using reprogrammable FPGA technology. Real-time data compression is expected to play a crucial role in high-rate data communication applications. Most available approaches have largely overlooked the impact of mixed pixels and subpixel targets, which can be accurately modeled and uncovered by resorting to the wealth of spectral information provided by hyperspectral image data. We propose an FPGA-based data compressor built on the concept of content-addressable memory (CAM) and a dictionary-based compression technique. It has been implemented on a Xilinx Spartan3-II FPGA comprising several million gates, whose high computational power and compact size make this reconfigurable device very appealing for onboard, real-time data processing.
As processors speed ahead of the other components in terms of clock speed, the pressure is on components like main memory to keep up. RAMs are expected to handle more and more information and to improve in capacity. Although memory sizes are being increased to meet these needs, another solution to the problem is to compress the data written to memory and then decompress it during the fetch cycle. Thereby, more information can be stored in RAM without physically increasing its size.
2021
With the increasing demands of the digital era for information storage, processing, and transfer, the demand for smaller and faster data storage memories has also increased exponentially. To avoid the bottleneck between large data storages that require large bandwidths, and to avoid loss of information, we have to choose the correct data compression techniques, which reduce redundant data storage and in turn reduce the hardware required for data storage and processing. There are two types of data compression techniques/algorithms: (1) lossy data compression algorithms and (2) lossless data compression algorithms. While lossy compression algorithms are faster, they involve loss of data/information to a certain level during decompression or reconstruction. Lossless compression algorithms, by contrast, are relatively slower, but they can perfectly reconstruct the complete information from the compressed data. We already have several data compression so...
2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT), 2017
The increasing memory requirements of big data applications have been driving the precipitous growth of memory capacity in server systems. To maximize the efficiency of external memory, HW-based memory compression techniques have been proposed to increase effective memory capacity. Although such memory compression techniques can improve the memory efficiency significantly, a critical trade-off exists in the HW-based compression techniques. As the memory blocks need to be decompressed as quickly as possible to serve cache misses, latency-optimized techniques apply compression at the cacheline granularity, achieving a decompression latency of less than a few cycles. However, such latency-optimized techniques can lose the potential high compression ratios of capacity-optimized techniques, which compress larger memory blocks with longer-latency algorithms. Considering the fundamental trade-off in memory compression, this paper proposes a transparent dual memory compression (DMC) architecture, which selectively uses two compression algorithms with distinct latency and compression characteristics. Exploiting the locality of memory accesses, the proposed architecture compresses less frequently accessed blocks with a capacity-optimized compression algorithm, while keeping recently accessed blocks compressed with a latency-optimized one. Furthermore, instead of relying on support from the virtual memory system to locate compressed memory blocks, the study advocates a HW-based translation between the uncompressed address space and the compressed physical space. This OS-transparent approach eliminates conflicts between compression efficiency and the large page support adopted to reduce TLB misses. The proposed compression architecture is applied to the Hybrid Memory Cube (HMC) with a logic layer under the stacked DRAMs.
The experimental results show that the proposed compression architecture provides 54% higher compression ratio than the state-of-the-art latency-optimized technique, with no performance degradation over the baseline system without compression.
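The hot/cold placement policy the paper describes (recently accessed blocks stay latency-optimized; less frequently accessed blocks are recompressed capacity-optimized) can be modeled as an LRU-style directory. A toy Python sketch; the class and method names, the fixed hot-set capacity, and the eviction-triggered recompression are assumptions for illustration, not the DMC hardware's actual mechanism:

```python
from collections import OrderedDict

class DualCompressionDirectory:
    """Toy model of a DMC-style policy: recently touched blocks keep
    the latency-optimized format; when a block ages out of the 'hot'
    set it is recompressed with the capacity-optimized algorithm."""

    def __init__(self, hot_capacity: int):
        self.hot = OrderedDict()     # block_id -> "latency" (LRU order)
        self.cold = {}               # block_id -> "capacity"
        self.hot_capacity = hot_capacity

    def access(self, block_id):
        # A touched block becomes (or stays) latency-optimized.
        self.cold.pop(block_id, None)
        self.hot.pop(block_id, None)
        self.hot[block_id] = "latency"
        if len(self.hot) > self.hot_capacity:
            # Evict the least recently used hot block and model its
            # background recompression into the dense format.
            evicted, _ = self.hot.popitem(last=False)
            self.cold[evicted] = "capacity"

    def format_of(self, block_id):
        return self.hot.get(block_id) or self.cold.get(block_id)
```

The point of the sketch is only the selection logic: serving hits from the latency-optimized set keeps decompression off the critical path, while the cold majority of blocks earns the higher compression ratio.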
ccc.inaoep.mx
This paper presents a hardware design and implementation of Lempel-Ziv data compression. The implementation is based on the systolic array approach, employing two simple processing elements (PEs). Prior to implementing the design, we selected the buffer size based on software simulations. By selecting a specific buffer size, we can estimate how much area will be required, what compression ratio the compressor will achieve, and what throughput it can reach. Based on such simulations, a prototype of the compressor was implemented in a Xilinx XC2V1000 FPGA device employing a 512-byte search buffer and a 15-byte coding buffer. The architecture achieves a throughput of 11 Mbps while occupying 90% of the FPGA resources. An immediate application of this compressor is to work jointly with a public-key cryptographic module.
The aim of this research is to build a high-performance lossless data compression engine which achieves a high compression ratio by combining two lossless compression methods (Huffman and LZSS). The engine is built in two ways: the first compresses the file with Huffman and then compresses the resulting file with LZSS; the second compresses the file with LZSS and then compresses the resulting file with Huffman. Two common file extensions were selected, and for each extension ten files of different sizes were chosen as samples to compress using four compression methods (Huffman, LZSS, Huffman+LZSS, LZSS+Huffman); the response of each extension to the four methods is discussed. Results are analyzed to determine which method achieves the highest compression ratio, and the effect of file size on the compression ratio is studied.
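The order-dependence of cascading two compressors can be demonstrated with off-the-shelf codecs. In this sketch, zlib (LZ77-based) loosely stands in for LZSS and bz2 for the second stage; these are stand-ins chosen for availability, not the research's own Huffman and LZSS implementations:

```python
import bz2
import zlib

data = b"the quick brown fox jumps over the lazy dog. " * 200

# Cascade order 1: LZ-style stage first, second codec after.
lz_then_second = bz2.compress(zlib.compress(data))
# Cascade order 2: the reverse cascade.
second_then_lz = zlib.compress(bz2.compress(data))

# Compression ratio: compressed size over original size (lower is better).
ratio_1 = len(lz_then_second) / len(data)
ratio_2 = len(second_then_lz) / len(data)
```

Because a first compression pass leaves nearly incompressible, high-entropy output, the second stage typically gains little, and the two orders can respond differently per file type; that order sensitivity is exactly what the research measures across its file samples.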
Microprocessors and Microsystems, 2009
The size of the program code has become a critical design constraint in embedded systems, especially in handheld devices. Large program codes require large memories, which increase the size and cost of the chip. In addition, the power consumption is increased due to higher memory I/O bandwidth. Program compression is one of the most often used methods to reduce the size of the program code. In this paper, dictionary-based program compression is evaluated on a customizable processor architecture with parallel resources. In addition to code density, the effectiveness of the method is evaluated in terms of area and power consumption. Furthermore, a mechanism is proposed to maintain the programmability after compression. Up to 77% reduction in area and 73% reduction in power consumption of the program memory and the associated control logic were obtained.
International Journal of Engineering Research and Technology (IJERT), 2013
https://www.ijert.org/fpga-implementation-of-vlsi-architecture-for-data-compression https://www.ijert.org/research/fpga-implementation-of-vlsi-architecture-for-data-compression-IJERTV2IS110671.pdf This project aims at increasing security and compression. An FPGA implementation of a VLSI architecture for secure arithmetic coding improves the compression. Arithmetic coding as traditionally implemented has no security. Arithmetic coding is a method for lossless data compression: a variable-length entropy encoding that converts a string into another representation in which frequently used characters take fewer bits and infrequently used characters take more bits, with the goal of fewer bits in total. As opposed to other entropy encoding techniques that separate the input message into its component symbols and replace each symbol with a code word, arithmetic coding encodes the entire message into a single number. This project presents a modified scheme that offers both security and compression. The system utilizes an arithmetic coder in which the overall interval length within the range [0, 1) allocated to each symbol is preserved, and permutations are applied at the encoder input for security. The overall system provides simultaneous security and compression.
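The interval-narrowing mechanism of a baseline (non-secure) arithmetic coder can be sketched with floating-point intervals. The symbol probabilities below are illustrative, the float representation is for clarity rather than a bit-exact coder, and the secure variant's input permutations are not modeled:

```python
def arith_encode(message, probs):
    """Narrow [low, high) by each symbol's probability slice; any
    number inside the final interval identifies the whole message."""
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        cum = 0.0
        for s, p in probs.items():       # fixed symbol order
            if s == sym:
                high = low + span * (cum + p)
                low = low + span * cum
                break
            cum += p
    return (low + high) / 2              # one number encodes the message

def arith_decode(code, length, probs):
    """Replay the same narrowing, picking whichever probability
    slice the code number falls into at each step."""
    out, low, high = [], 0.0, 1.0
    for _ in range(length):
        span = high - low
        cum = 0.0
        for s, p in probs.items():
            if low + span * cum <= code < low + span * (cum + p):
                out.append(s)
                high = low + span * (cum + p)
                low = low + span * cum
                break
            cum += p
    return "".join(out)
```

Note how each symbol's slice width is proportional to its probability, which is the interval-length property the secure scheme preserves while permuting where the slices sit.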