1976, Information Processing & Management
This paper describes a formalism for constructing certain kinds of algorithms that represent a structure over a set of data. It proves that, if the cost of an algorithm is disregarded, memory can be partially replaced by an algorithm. It also proves that the remaining memory portion is independent of the construction process. It then evaluates the effect of algorithm representation cost and gives the resulting memory gain in two particular examples.
Data transmission and storage cost money. The more information being dealt with, the more it costs. In spite of this, most digital data are not stored in the most compact form. Rather, they are stored in whatever way makes them easiest to use, such as ASCII text from word processors, binary code that can be executed on a computer, or individual samples from a data acquisition system. Typically, these easy-to-use encoding methods require data files about twice as large as actually needed to represent the information. Data compression is the general term for the various algorithms and programs developed to address this problem. A compression program is used to convert data from an easy-to-use format to one optimized for compactness. Likewise, a decompression program returns the information to its original form. We examine five techniques for data compression in this chapter. The first three are simple encoding techniques: run-length, Huffman, and delta encoding. The last two are elaborate procedures that have established themselves as industry standards: LZW and JPEG.
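As a minimal sketch of the first of those techniques, the following Python round-trips a byte string through run-length encoding; the function names are illustrative, not taken from the chapter.

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Encode a byte string as (run_length, value) pairs."""
    runs = []
    i = 0
    while i < len(data):
        j = i
        # Advance j to the end of the current run of identical bytes.
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((j - i, data[i]))
        i = j
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Reverse the encoding by expanding each run."""
    return b"".join(bytes([value]) * length for length, value in runs)

# Round-trip check: long runs shrink, short runs may expand,
# which is why run-length encoding suits data with repetition.
assert rle_decode(rle_encode(b"aaaabbbcca")) == b"aaaabbbcca"
```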
Data compression has important applications in the fields of file storage and distributed systems. It helps reduce redundancy in stored or communicated data. This paper studies various compression techniques and analyzes the approaches used in data compression. Furthermore, information theory concepts that relate to the aims and evaluation of data compression methods are briefly discussed. A framework for the evaluation and comparison of compression algorithms is constructed and applied to the algorithms presented here. This paper reports on the theoretical and practical nature of compression algorithms. Moreover, it also discusses future possibilities for research in the field of data compression.
2011
Efficient access to large data collections is nowadays an interesting problem for many research areas and applications. A recent conception of the time-space relationship suggests a strong relation between data compression and algorithms in the comparison model. In this sense, efficient algorithms could be used to induce compressed representations of the data they process. Examples of this relationship include unbounded search algorithms and integer encodings, adaptive sorting algorithms and compressed representation of permutations, or union algorithms and encoding for bit vectors. In this thesis, we propose to study the time-space relationship on different data types. We aim to define new compression schemes and compressed data structures based on adaptive algorithms that work over these data types, and to evaluate their practicality in data compression applications.
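To make the algorithm/encoding correspondence concrete, the sketch below shows the Elias gamma code, an integer encoding classically paired with unbounded (galloping) search; this is a generic illustration of that pairing, not code from the thesis.

```python
def elias_gamma(n: int) -> str:
    """Elias gamma code: a unary length prefix followed by the
    binary representation of n (defined for n >= 1)."""
    assert n >= 1
    binary = bin(n)[2:]                      # binary digits of n
    return "0" * (len(binary) - 1) + binary  # length-1 zeros, then n

# Small integers get short codewords, mirroring how an unbounded
# search finds nearby elements with few comparisons.
assert elias_gamma(1) == "1"
assert elias_gamma(2) == "010"
assert elias_gamma(5) == "00101"
```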
With the ever-increasing growth of computing, processing large files has become possible. Network technologies, bound by the constraints of the physical transfer medium, have difficulty transferring such large files. This research is aimed at exploring various methods of data compression and implementing those methods. These techniques also provide a certain level of security for the compressed file. There are five image-based approaches and one statistical approach. These approaches usually convert a file into a binary string, perform the compression operation on the binary string, and then convert the binary string back into a (compressed) file. The reverse procedure is followed on the decompression side.
2014 19th Asia and South Pacific Design Automation Conference (ASP-DAC), 2014
Nowadays, most software and hardware applications are committed to reducing the footprint and resource usage of data. In this general context, lossless data compression is a beneficial technique that encodes information using fewer (or at most an equal number of) bits compared to the original representation. A traditional compression flow consists of two phases: data decorrelation and entropy encoding. Data decorrelation, also called entropy reduction, aims at reducing the autocorrelation of the input data stream to be compressed in order to enhance the efficiency of entropy encoding. Entropy encoding reduces the size of the previously decorrelated data by using techniques such as Huffman coding, arithmetic coding, and others. When data decorrelation is optimal, entropy encoding produces the strongest lossless compression possible. While efficient solutions for entropy encoding exist, data decorrelation is still a challenging problem limiting ultimate lossless compression opportunities. In this paper, we use logic synthesis to remove redundancy in binary data, aiming to unlock the full potential of lossless compression. Embedded in a complete lossless compression flow, our logic-synthesis-based methodology is capable of identifying the underlying function correlating a data set. Experimental results on data sets deriving from different causal processes show that the proposed approach achieves the highest compression ratio compared to state-of-the-art compression tools such as ZIP, bzip2 and 7zip.
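For reference, here is a minimal sketch of the entropy-encoding phase using Huffman coding; the paper's contribution is the decorrelation phase, so everything below, including the function names, is a generic illustration rather than the authors' code.

```python
import heapq
from collections import Counter

def huffman_codes(data: str) -> dict[str, str]:
    """Build a Huffman code table: frequent symbols get short codewords."""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    counter = len(heap)  # unique tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
# 'a' occurs five times, 'c' once, so 'a' gets the shorter codeword.
assert len(codes["a"]) <= len(codes["c"])
```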
International Series in Operations Research & Management Science, 2000
Hierarchical lossless data compression is a compression technique that has been shown to effectively compress data in the face of uncertainty concerning a proper probabilistic model for the data. In this technique, one represents a data sequence x using one of three kinds of structures: (1) a tree called a pointer tree, which generates x via a procedure called "subtree copying"; (2) a data flow graph which generates x via a flow of data sequences along its edges; or (3) a context free grammar which generates x via parallel substitutions accomplished with the production rules of the grammar. The data sequence is then compressed indirectly via compression of the structure which represents it. This article is a survey of recent advances in the rapidly growing field of hierarchical lossless data compression. In the article, we illustrate how the three distinct structures for representing a data sequence are equivalent, outline a simple method for designing compact structures for representing a data sequence, and indicate the level of compression performance that can be obtained by compression of the structure representing a data sequence.
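To illustrate how a context-free grammar can represent a sequence, here is a hypothetical greedy pair-replacement sketch in the spirit of Re-Pair; the survey's own method for designing compact structures may differ.

```python
from collections import Counter

def repair_grammar(seq: list[str]):
    """Repeatedly replace the most frequent adjacent pair with a fresh
    nonterminal, yielding a start string plus grammar rules."""
    rules = {}
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:  # no pair repeats; grammar can't shrink further
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):  # left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

start, rules = repair_grammar(list("abababab"))
# The 8-symbol input collapses to start = ['R1', 'R1'] with rules
# {'R0': ('a', 'b'), 'R1': ('R0', 'R0')}; compressing the grammar
# instead of the raw sequence is the indirect compression described above.
```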
2014
Compression is useful because it reduces resource usage, such as data storage space or transmission capacity. Data compression is the technique of representing information in a compact form. Its aim is to reduce redundancy in stored or communicated data and to effectively increase data density. Data compression is an important tool in the areas of file storage and distributed systems. Storage space on disks is expensive, so a file that occupies less disk space is cheaper to keep than an uncompressed one. The main purpose of data compression is asymptotically optimal data storage across all resources. The field can be divided into lossless data compression and lossy data compression, as well as by storage area. There are many compression methods available. In this paper, reviews of different basic lossless and lossy compression ...
Insufficient available memory in an application-specific embedded system is a critical problem affecting the reliability and performance of the device. A novel solution for dealing with this issue is to compress blocks of memory that are infrequently accessed when the system runs out of memory, and to decompress that memory when it is needed again, thus freeing memory that can be reallocated. In order to determine an appropriate compression technique for this purpose, a variety of compression algorithms were studied, several of which were then implemented and evaluated based on both speed and compression ability on actual program data.
Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the more time saved. In communication, we always want to transmit data efficiently and without noise. This paper provides some compression techniques for lossless compression of text data, along with comparative results for multiple versus single compression, which will help identify better compression output and develop compression algorithms.
2015 7th International Conference of Soft Computing and Pattern Recognition (SoCPaR), 2015
Succinct data structures are introduced to efficiently solve a given problem while representing the data using as little space as possible. However, the full potential of succinct data structures has not been realized in software-based implementations due to the large storage size and the memory access bottleneck. This paper proposes a hardware-oriented data compression method to reduce the storage space without increasing the processing time. We use a parallel processing architecture to reduce the decompression overhead. According to the evaluation, we can compress the data by 37.5% and still have fast data access with small decompression overhead.
Cybernetics and Systems Analysis, 2015
The methodology of mathematical-algorithmic constructivism is presented. Relations between constructive-synthesizing structures modeling interrelated constructions and construction processes are determined. A model for the adaptation of compression algorithms is developed in the form of the following constructive-synthesizing structures: a dual constructor of compression and decompression algorithms, a converter of a compression algorithm to a constructive compression process, and an adapter.
Mathematics and Computers in Simulation, 2008
Layered image transmission is successfully adopted for different system applications. In this technique, an image is generally divided into a number of layers for easy and fast transmission, following the concept of multiple-description representation. This configuration makes the overall system adaptive with respect to different bandwidths. The proposed layered data compression architecture works not only with JPEG but also with MPEG image formats. It includes all necessary steps and precautions to handle possible disturbances in image compression by analyzing the different parameters affecting image quality and transmission time. In this paper, a behavioral model of the complete image compression scheme is discussed, starting from image extraction from the source, layering of the image matrix, and generation of separate pixel layers for concurrent compression, followed by efficient coder operation. The paper also addresses parameters that improve image quality and the constraints and limitations encountered during design verification. The proposed scheme comprises a pre-coder and an efficient transformation scheme, followed by adaptive Huffman encoding/decoding in conjunction with quantization of the image matrix, which makes the overall compression system lossy in nature. Due to its flexibility of operation, the scheme is applicable in different areas such as robot vision and real-time video compression. The results obtained from mathematical analysis and simulation prove the efficiency of the layered compression scheme.
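As one hypothetical way to generate separate pixel layers for concurrent compression, the sketch below splits an 8-bit image matrix into bit planes; the paper's actual layering scheme is not specified here, so this is purely illustrative.

```python
import numpy as np

def split_bit_planes(image: np.ndarray) -> list[np.ndarray]:
    """Split an 8-bit image into 8 binary layers (one per bit plane),
    each of which could be handed to a separate compressor."""
    return [(image >> b) & 1 for b in range(8)]

def merge_bit_planes(planes: list[np.ndarray]) -> np.ndarray:
    """Reassemble the original image from its bit planes."""
    return sum(plane << b for b, plane in enumerate(planes))

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
# The decomposition is exact, so layering alone loses nothing;
# lossiness in the paper's scheme comes from quantization instead.
assert np.array_equal(merge_bit_planes(split_bit_planes(img)), img)
```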
International Journal of Computer Applications, 2019
This document discusses data compression and some common data compression techniques. Data compression is a technique for reducing the amount of space data occupies, to ease the processes of storage and communication. This involves, but is not limited to, interpreting and eliminating redundancy in data. The fundamental process of compression involves using a well-drafted technique to convert the actual data into compressed data of smaller size. Depending upon how well a compression technique works and how much of the data can be regenerated from the compressed data it produces, the technique is classified as either a lossy or a lossless data compression technique.
2005
Compression is an economical and efficient way of handling data, not only for communication but also for storage purposes. We aimed to implement a compression application based on the frequent use of English letters, digraphs, trigraphs and tetragraphs without sacrificing memory or other resources. Despite its conceptual simplicity, the approach achieves promising results. The system was tested on several data sets and the results were compared with LZW, Huffman coding, and arithmetic coding. The system can be applied to either small or large files, and the results remain stable.
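A minimal sketch of the digraph-substitution idea follows, using a small hypothetical frequency table; the paper's actual tables for letters, trigraphs and tetragraphs would extend the same mechanism.

```python
# Hypothetical table of frequent English digraphs, each mapped to an
# unused high byte; the paper's real tables are not reproduced here.
DIGRAPHS = ["th", "he", "in", "er", "an", "re", "on", "at"]
CODES = {d: bytes([0x80 + i]) for i, d in enumerate(DIGRAPHS)}
INV = {c[0]: d for d, c in CODES.items()}

def compress(text: str) -> bytes:
    """Replace each table digraph with a single byte (ASCII input assumed)."""
    out, i = bytearray(), 0
    while i < len(text):
        pair = text[i:i + 2]
        if pair in CODES:
            out += CODES[pair]
            i += 2
        else:
            out += text[i].encode("ascii")
            i += 1
    return bytes(out)

def decompress(data: bytes) -> str:
    """Expand code bytes back to digraphs; other bytes pass through."""
    return "".join(INV.get(b, chr(b)) for b in data)

assert decompress(compress("the man in the moon")) == "the man in the moon"
```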
International Journal of Computer Applications, 2017
Today, both communication and generic file compression technologies make massive use of different kinds of efficient data compression methods. This paper surveys a variety of data compression methods. The aim of data compression is to reduce redundancy in stored or communicated data. Data compression has important applications in the areas of file storage and distributed systems. This paper provides an overview of several compression methods and formulates new algorithms that may improve the compression ratio and abate error in the reconstructed data. In this work the data compression techniques Huffman, Run-Length, LZW, Shannon-Fano, Repeated-Huffman, Run-Length-Huffman, and Huffman-Run-Length are tested against different types of multimedia formats such as images and text, showing how the various methods differ on image and text files.
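For context, here is a compact sketch of one of the tested techniques, LZW, whose dictionary growth is what such comparisons exercise on image and text data; the function name is illustrative.

```python
def lzw_encode(text: str) -> list[int]:
    """LZW: grow a dictionary of seen substrings and emit their indices."""
    table = {chr(i): i for i in range(256)}  # start with single bytes
    w, out = "", []
    for ch in text:
        if w + ch in table:
            w += ch                     # extend the current match
        else:
            out.append(table[w])        # emit the longest known prefix
            table[w + ch] = len(table)  # learn the new substring
            w = ch
    if w:
        out.append(table[w])
    return out

codes = lzw_encode("TOBEORNOTTOBEORTOBEORNOT")
# Repeated phrases like 'TOBEOR' collapse to single dictionary indices,
# so the output has fewer codes than the input has characters.
assert len(codes) < len("TOBEORNOTTOBEORTOBEORNOT")
```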
International Journal of Innovative Technology and Exploring Engineering, 2020
Data compression is a promising scheme for increasing memory system capacity, performance, and energy advantages. Compression performance can affect overall network performance when a compression scheme is deployed in a communication setting. Many data compression schemes have been introduced, but most prior studies choose very limited parameters to analyze the performance of a selected scheme. This paper classifies the major data compression schemes according to nine different perspectives: homogeneity, purpose, accuracy, structuring of the data, repetition distance, structure sharing, number of passes, sampling frequency, and sample size ratio. Various data compression schemes are examined and classified according to the parameters mentioned above. The classification will provide researchers with in-depth insight into the potential role of compression schemes in memory components and the network performance of future extreme-scale systems.