19th Asia and South Pacific Design Automation Conference (ASP-DAC), 2014
Nowadays, most software and hardware applications are committed to reducing the footprint and resource usage of data. In this general context, lossless data compression is a beneficial technique that encodes information using fewer bits (or at most an equal number) compared to the original representation. A traditional compression flow consists of two phases: data decorrelation and entropy encoding. Data decorrelation, also called entropy reduction, aims at reducing the autocorrelation of the input data stream to be compressed in order to enhance the efficiency of entropy encoding. Entropy encoding reduces the size of the previously decorrelated data using techniques such as Huffman coding, arithmetic coding, and others. When data decorrelation is optimal, entropy encoding produces the strongest lossless compression possible. While efficient solutions for entropy encoding exist, data decorrelation remains a challenging problem that limits ultimate lossless compression opportunities. In this paper, we use logic synthesis to remove redundancy in binary data, aiming to unlock the full potential of lossless compression. Embedded in a complete lossless compression flow, our logic-synthesis-based methodology is capable of identifying the underlying function correlating a data set. Experimental results on data sets derived from different causal processes show that the proposed approach achieves the highest compression ratio compared to state-of-the-art compression tools such as ZIP, bzip2, and 7zip.
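To make the traditional two-phase flow concrete, here is a minimal sketch in Python, assuming a simple delta transform as the decorrelation step and zlib as a stand-in entropy encoder; the paper's own logic-synthesis-based decorrelation is not reproduced here.

```python
import zlib

def delta_decorrelate(data: bytes) -> bytes:
    """Decorrelation step: replace each byte by its difference from the previous byte (mod 256)."""
    out, prev = bytearray(), 0
    for b in data:
        out.append((b - prev) & 0xFF)
        prev = b
    return bytes(out)

def two_phase_compress(data: bytes) -> bytes:
    """Phase 1: decorrelate; phase 2: entropy-encode (zlib is used here only as a stand-in encoder)."""
    return zlib.compress(delta_decorrelate(data), level=9)

# A slowly rising ramp: after the delta transform the stream is almost all zeros,
# which the second phase typically encodes far more compactly than the raw bytes.
raw = bytes((i // 37) % 256 for i in range(100_000))
print(len(zlib.compress(raw, 9)), "bytes without decorrelation vs",
      len(two_phase_compress(raw)), "bytes with it")
```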
The sum of minterms is a canonical form for representing logic functions. Classical methods such as the Karnaugh map or Quine-McCluskey tabulation exist for minimizing a sum of products. This minimization reduces the minterms to smaller products called implicants. If the minterms are represented by bit strings, the bit strings shrink through the minimization process. This can be considered a kind of data compression, provided there is a way to retrieve the original bit strings from the compressed strings.
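As a toy illustration of this idea (not the paper's actual method), the following Python sketch takes the minterms of a 3-variable function given as a truth-table bit string and performs one Quine-McCluskey-style merging pass, combining minterms that differ in a single bit into shorter implicants.

```python
# Minterms of f(a,b,c) read off the bit string "11101000" (bit i is the value of f on minterm i).
bits = "11101000"
minterms = [i for i, b in enumerate(bits) if b == "1"]    # [0, 1, 2, 4]

def merge_once(terms, width=3):
    """Combine pairs of minterms differing in exactly one bit into implicants ('-' marks a don't-care)."""
    implicants = set()
    for i in terms:
        for j in terms:
            diff = i ^ j
            if i < j and diff & (diff - 1) == 0:          # exactly one bit differs
                pattern = list(format(i, f"0{width}b"))
                pattern[width - diff.bit_length()] = "-"  # blank out the differing position
                implicants.add("".join(pattern))
    return implicants

print(sorted(merge_once(minterms)))   # ['-00', '0-0', '00-']
```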
A binary string of length 2^k induces the Boolean function of k variables whose Shannon expansion is the given binary string. This Boolean function is then representable via a unique reduced ordered binary decision diagram (ROBDD), and the given binary string is fully recoverable from this ROBDD. We exhibit a lossless data compression algorithm in which a binary string whose length is a power of two is compressed via compression of the ROBDD associated to it as described above. We show that when binary strings whose length n is a power of two are compressed via this algorithm, the maximal pointwise redundancy per sample with respect to any s-state binary information source has the upper bound (4 log_2 s + 16 + o(1)) / log_2 n. To establish this result, we exploit a result of Liaw and Lin stating that the ROBDD representation of a Boolean function of k variables contains a number of vertices on the order of (2 + o(1)) 2^k / k.
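A minimal sketch of the forward direction, assuming a direct recursive construction (the paper's exact procedure and redundancy analysis are not reproduced): build a reduced ordered BDD from a truth-table bit string of length 2^k by Shannon expansion with node sharing, so that repeated subfunctions are stored only once.

```python
def build_robdd(bits: str):
    """Build an ROBDD for the Boolean function whose truth table is `bits` (length must be 2**k)."""
    unique = {}                      # (var, low, high) -> node id, enforces sharing of isomorphic nodes
    nodes = {0: "FALSE", 1: "TRUE"}  # terminal nodes
    next_id = [2]

    def mk(var, low, high):
        if low == high:              # redundant test: skip the node (reduction rule)
            return low
        key = (var, low, high)
        if key not in unique:
            unique[key] = next_id[0]
            nodes[next_id[0]] = key
            next_id[0] += 1
        return unique[key]

    def build(segment, var):
        if len(segment) == 1:
            return int(segment)      # terminal 0 or 1
        half = len(segment) // 2
        return mk(var, build(segment[:half], var + 1), build(segment[half:], var + 1))

    return build(bits, 0), nodes

root, nodes = build_robdd("01010011")                              # 8 = 2**3 bits -> a function of 3 variables
print(len(nodes) - 2, "internal nodes for an 8-bit truth table")   # sharing gives 3 nodes here
```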
The aim of this research is to build a high-performance lossless data compression engine that achieves a high compression ratio by combining two lossless compression methods (Huffman and LZSS). The engine is built in two configurations: the first compresses the file with Huffman and then compresses the result with LZSS, while the second compresses the file with LZSS and then compresses the result with Huffman. Two common file extensions were selected, and for each extension ten files of different sizes were chosen as samples to be compressed using four methods (Huffman, LZSS, Huffman+LZSS, LZSS+Huffman). The response of each extension to the four methods is discussed, the results are analyzed to determine which method achieves the highest compression ratio, and the effect of file size on the compression ratio is studied.
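The experiment can be sketched as the harness below; this is an assumption-heavy illustration in which a small static Huffman coder (code table not stored, so sizes are indicative only) is paired with zlib as a stand-in for LZSS, since the paper's own implementations are not available here.

```python
import heapq, zlib
from collections import Counter
from itertools import count

def huffman_encode(data: bytes) -> bytes:
    """Static Huffman coding of a byte stream (the code table is omitted; output size is illustrative)."""
    tie = count()                                    # tie-breaker so the heap never compares dicts
    heap = [(f, next(tie), {sym: ""}) for sym, f in Counter(data).items()]
    heapq.heapify(heap)
    if len(heap) == 1:                               # degenerate single-symbol input
        codes = {s: "0" for s in heap[0][2]}
    else:
        while len(heap) > 1:
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (f1 + f2, next(tie), merged))
        codes = heap[0][2]
    bits = "".join(codes[b] for b in data)
    bits += "0" * (-len(bits) % 8)                   # pad to a whole number of bytes
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

data = b"abracadabra " * 500                         # sample input in place of the paper's test files
lz = zlib.compress                                   # assumption: zlib stands in for LZSS
for name, out in [("Huffman", huffman_encode(data)),
                  ("LZ", lz(data)),
                  ("Huffman->LZ", lz(huffman_encode(data))),
                  ("LZ->Huffman", huffman_encode(lz(data)))]:
    print(name, "compression ratio:", round(len(data) / len(out), 2))
```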
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2017
As the density of FPGAs has greatly improved over the past few years, the size of configuration bitstreams grows accordingly. Compression techniques can reduce memory size and save external memory bandwidth. To accelerate the configuration process and reduce software start-up time, four open-source lossless compression decoders developed using high-level synthesis techniques are presented. Moreover, in order to balance the objectives of compression ratio, decompression throughput, and hardware resource overhead, various improvements and optimizations are proposed. Full bitstreams and software binaries have been collected as a benchmark, and 33 partial bitstreams have also been developed and integrated into the benchmark. Evaluations of the synthesizable compression decoders are demonstrated on a Xilinx ZC706 board, showing higher decompression throughput than that of existing lossless compression decoders on our benchmark. The proposed decoders reduce software start-up time by up to 31.23% in embedded systems and reconfiguration time by up to 69.83% in partially reconfigurable systems.
Information Processing & Management, 1976
This paper describes a formalism for constructing certain kinds of algorithms useful for representing a structure over a set of data. It proves that, if the cost of the algorithm is not taken into account, the memory can be partially replaced by an algorithm, and that the remaining memory part is independent of the construction process. It then evaluates the effect of the algorithm's representation cost and gives the resulting memory gain obtained in two particular examples.
Secure transmission and storage of digital images/information is needed, and digital images are compressed before storage or transmission to save bandwidth. In this process, digital signals or patterns are embedded into an object without affecting the quality of the original image in any way; this process is called watermarking. We have studied various papers related to lossless image compression and compared them.
2016
This article discusses the theory, model, implementation, and performance of a combinatorial fuzzy-binary and-or (FBAR) algorithm for lossless data compression (LDC) and decompression (LDD) on 8-bit characters. Combinatorial pairwise flags are utilized as new zero/nonzero and impure/pure bit-pair operators, whose combination forms a 4D hypercube to compress a sequence of bytes. The compressed sequence is stored in a grid file of constant size. Decompression uses a fixed-size translation table (TT) to access the grid file during I/O data conversions. Compared to other LDC algorithms, double-efficient (DE) entropies denoting 50% compression with reasonable bitrates were observed. Double-extending the usage of the TT component in code exhibits a universal predictability via its negative growth of entropy for LDCs above 87.5% compression, which is quite significant for scaling databases and network communications. The algorithm is novel in its encryption, binary, fuzzy, and information-theoretic methods such as probability; therefore, information theorists, computer scientists, and engineers may find the algorithm useful for its logic and applications.
2015
This thesis makes several contributions to the field of data compression. Lossless data compression algorithms shorten the description of input objects, such as sequences of text, in a way that allows perfect recovery of the original object. Such algorithms exploit the fact that input objects are not uniformly distributed: by allocating shorter descriptions to more probable objects and longer descriptions to less probable objects, the expected length of the compressed output can be made shorter than the object’s original description. Compression algorithms can be designed to match almost any given probability distribution over input objects. This thesis employs probabilistic modelling, Bayesian inference, and arithmetic coding to derive compression algorithms for a variety of applications, making the underlying probability distributions explicit throughout. A general compression toolbox is described, consisting of practical algorithms for compressing data distributed by various fund...
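As a small, generic illustration of making the probability model explicit (not code from the thesis), the ideal code length an arithmetic coder would achieve under an adaptive byte model with Laplace smoothing can be computed directly; more probable symbols contribute fewer bits.

```python
import math
from collections import Counter

def ideal_code_length_bits(seq: bytes) -> float:
    """Bits an arithmetic coder would need (up to ~2 bits of overhead) under an adaptive model
    with Laplace smoothing: P(x | history) = (count(x) + 1) / (len(history) + 256)."""
    counts, total_bits = Counter(), 0.0
    for seen, x in enumerate(seq):
        p = (counts[x] + 1) / (seen + 256)
        total_bits += -math.log2(p)      # ideal code length of symbol x under the model
        counts[x] += 1
    return total_bits

text = b"to be or not to be, that is the question " * 50
print(round(ideal_code_length_bits(text) / 8), "ideal bytes vs", len(text), "raw bytes")
```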
2016
Data compression aims to reduce the size of data so that it requires less storage space and less communication channel bandwidth. Many compression techniques (such as LZ77 and its variants) suffer from a problem that we call the redundancy caused by the multiplicity of encodings. The Multiplicity of Encodings (ME) means that the source data may be encoded in more than one way. In its simplest case, it occurs when a compression technique with ME has the opportunity at certain steps, during the encoding process, to encode the same symbol in different ways. The Bit Recycling compression technique has been introduced by D. Dubé and V. Beaudoin to minimize the redundancy caused by ME. Variants of bit recycling have been applied to LZ77, and the experimental results showed that bit recycling achieved better compression (a reduction of about 9% in the size of files that have been compressed by Gzip) by exploiting ME. Dubé and Beaudoin have pointed out that their technique could not minimize...
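A toy example (assumed, not Dubé and Beaudoin's construction) of the multiplicity of encodings in an LZ77-style coder: the same phrase can be referenced at several different offsets, and the freedom to choose among these equally valid back-references is the redundancy that bit recycling turns back into useful bits.

```python
# The next phrase to encode is b"abc"; every earlier occurrence in the window yields a valid match.
window = b"xxabcyyabczzabc"
phrase = b"abc"
offsets = [len(window) - i for i in range(len(window)) if window.startswith(phrase, i)]
print(offsets)   # [13, 8, 3] -- three distinct (offset, length) pairs encode the same phrase
```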
International Journal of Engineering Research and Technology (IJERT), 2015
The main goal of data compression is to decrease redundancy in stored or communicated data, thus increasing effective data density. It is a common requirement for most applications. Data compression is particularly relevant to file storage and distributed systems, because in a distributed system data must be sent to and from all systems. There are two forms of data compression, lossy and lossless, but this paper focuses only on lossless data compression techniques. In lossless data compression, the integrity of the data is preserved. Data compression is a technique that decreases the data size by removing excess information, and it encompasses many techniques that reduce redundancy. The methods discussed are Run-Length Encoding, Shannon-Fano, Huffman, Arithmetic, Adaptive Huffman, LZ77, LZ78, and LZW, along with their performance.
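As a quick illustration of the simplest of these techniques (a generic sketch, not taken from the paper), Run-Length Encoding replaces each maximal run of a repeated symbol with a count followed by the symbol.

```python
from itertools import groupby

def rle_encode(data: str) -> str:
    """Run-Length Encoding: each maximal run of a repeated symbol becomes count + symbol."""
    return "".join(f"{len(list(g))}{ch}" for ch, g in groupby(data))

print(rle_encode("AAAABBBCCD"))   # -> 4A3B2C1D
```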