2002, IEEE International Conference on Acoustics Speech and Signal Processing
We present our new low-complexity compression algorithm for lossless coding of video sequences. This new coder produces better compression ratios than lossless compression of individual images by exploiting temporal as well as spatial and spectral redundancy. Key features of the coder are a pixel-neighborhood backward-adaptive temporal predictor, an intra-frame spatial predictor and a differential coding scheme of the spectral components. The residual error is entropy coded by a context-based arithmetic encoder. This new lossless video encoder outperforms state-of-the-art lossless image compression techniques, enabling more efficient video storage and communications.
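The backward-adaptive idea above can be made concrete: both encoder and decoder track, over already-coded pixels, how well a temporal predictor (co-located pixel in the previous frame) and a spatial predictor (left neighbour) have performed, and switch accordingly, so no side information is transmitted. A minimal sketch, not the paper's exact predictor or neighbourhood:

```python
def predict_frame(cur, prev):
    """Backward-adaptive prediction: per pixel, pick the predictor
    (temporal = co-located pixel in the previous frame, or spatial =
    left neighbour in the current frame) whose accumulated absolute
    error so far is smaller. The decoder can maintain the same running
    errors, so the choice needs no side information. Illustrative
    sketch only; the paper's neighbourhood and switching rule differ."""
    h, w = len(cur), len(cur[0])
    err_t = err_s = 0              # running |error| of each predictor
    residuals = []
    for y in range(h):
        for x in range(w):
            temporal = prev[y][x]
            spatial = cur[y][x - 1] if x > 0 else 0
            pred = temporal if err_t <= err_s else spatial
            residuals.append(cur[y][x] - pred)
            err_t += abs(cur[y][x] - temporal)
            err_s += abs(cur[y][x] - spatial)
    return residuals
```

The residual stream is what a context-based arithmetic coder would then entropy-code.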
IEEE Transactions on Communications, 1996
In this paper we investigate lossless compression schemes for video sequences. A simple adaptive prediction scheme is presented that exploits temporal or spectral correlations in addition to spatial correlations. It is seen that even with motion compensation, schemes that utilize only temporal correlations do not perform significantly better than schemes that utilize only spectral correlations. Hence we look at hybrid schemes that make use of both spectral and temporal correlations. The hybrid schemes give significant improvement in performance over other techniques. Besides prediction schemes, we also look at some simple error modeling techniques that take into account prediction errors made in spectrally and/or temporally adjacent pixels in order to efficiently encode the prediction residual. Implementation results on standard test sequences indicate that significant improvements can be obtained by the proposed techniques.
IEEE Transactions on Image Processing, 2000
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
Signal Processing: Image Communication, 2012
This paper presents a lossless video compression system based on a novel Backward Adaptive pixel-based fast Predictive Motion Estimation (BAPME). Unlike the widely used block-matching motion estimation techniques, this scheme predicts the motion on a pixel-by-pixel basis by comparing a group of past observed pixels between two adjacent frames, eliminating the need to transmit side information. Combined with prediction and a fast search technique, the proposed algorithm achieves better entropy results and a significant reduction in computation compared with pixel-based full search for a set of standard test sequences. Experimental results also show that BAPME outperforms block-based full search in terms of speed and entropy. We also provide the sub-pixel version of BAPME, and integrate BAPME into a complete lossless video compression system. The experimental results are superior to selected state-of-the-art schemes.
Journal of Information Science and Engineering - JISE, 2008
This paper proposes a lossless image compression scheme integrating well-known predictors with the Minimum Rate Predictor (MRP). MRP is considered one of the most successful methods, in terms of coding rates, for lossless grayscale image compression so far. In the proposed method, the linear predictor is designed as a combination of causal neighbors together with well-known predictors (GAP, MED, and MMSE) to improve coding rates. To further reduce the residual entropy, we also redesign the calculation of context quantization and the disposition of neighboring pixels. The modifications made in our proposed method are crucial in enhancing the compression ratios. Experimental results demonstrate that the coding rates of the proposed method are lower than those of MRP and other state-of-the-art lossless coders on most of the test images. In addition, the residual entropy of the proposed scheme in the first iteration is lower than that of MRP and is closer to the final residual entropy than in MRP. This allows our proposed scheme to be terminated in fewer iterations while maintaining relatively good compression performance.
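Of the classical predictors named above, MED (the Median Edge Detector used by JPEG-LS) is the simplest to state: from the left (a), above (b), and upper-left (c) neighbours it picks min(a, b) or max(a, b) when an edge is detected, and the planar estimate a + b - c otherwise:

```python
def med_predict(a, b, c):
    """MED (Median Edge Detector) predictor as used in JPEG-LS.
    a = left, b = above, c = upper-left causal neighbour."""
    if c >= max(a, b):       # horizontal/vertical edge: take the smaller
        return min(a, b)
    if c <= min(a, b):       # edge the other way: take the larger
        return max(a, b)
    return a + b - c         # smooth region: planar prediction
```

GAP and MMSE predictors follow the same causal-neighbourhood pattern but with gradient-adjusted and least-squares-trained weights, respectively.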
2013 21st Iranian Conference on Electrical Engineering (ICEE), 2013
The widespread use of the internet, the limited bandwidth of networks, and the variety of media across the net have driven rapid growth in compression methods of differing capabilities and quality. Video is now a ubiquitous medium, and several research areas require recording events at high frame rates. Because of the constraints of high-frame-rate video, complex methods are unsuitable for real-time coding and increase system cost. Lossless, lossy, and near-lossless methods exist for compressing video sequences, but existing lossy methods cannot limit the subjective or objective loss to a fixed upper bound. There has been work on lossless compression of these sequences; however, it offers only modest compression ratios, which in some cases is insufficient given their large size. In this paper we propose a near-lossless method that is comparable with successful existing video compression methods yet simple enough for real-time applications. It includes the major conventional stages for this goal: prediction, quantization and entropy coding. A simple rate control is embedded through different approaches to quantization. The experimental results demonstrate good compression ratios while maintaining reliability through control of the maximum pixel error.
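The near-lossless guarantee described above is conventionally obtained by uniformly quantizing the prediction residual: with bound delta, mapping a residual e to an index q and reconstructing q*(2*delta+1) keeps the per-pixel error at most delta, and delta = 0 degenerates to lossless coding. A standard textbook sketch, not necessarily the paper's exact quantizer:

```python
def quantize(e, delta):
    """Near-lossless uniform quantization of a prediction residual e.
    Guarantees |e - dequantize(quantize(e, delta), delta)| <= delta."""
    if e >= 0:
        return (e + delta) // (2 * delta + 1)
    return -((-e + delta) // (2 * delta + 1))

def dequantize(q, delta):
    """Reconstruct the residual from its quantization index."""
    return q * (2 * delta + 1)
```

Rate control then amounts to choosing delta per frame or per region against a bit budget.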
Applied Mathematics & Information Sciences, 2014
Predictive methods of image compression traditionally visit the pixels to be compressed in raster scan order, making a prediction for each pixel and storing the difference between the pixel and its prediction. We introduce a new predictive lossless compression method in which the order in which the pixels are visited is determined using a predictor based on previously known pixel values. This makes it possible to reconstruct the image without storing this path. In our tests on standard benchmark images, we show that our approach gives a significant improvement over row-wise use of one- or two-dimensional predictors and gives results similar to or better than standard compression algorithms like median compression and JPEG 2000.
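The key property claimed above is that the traversal order is a function of already-decoded values only, so the decoder can re-derive the path instead of receiving it. One hypothetical way to realize this is a priority queue over the frontier of undecoded pixels, visiting smoothest-context-first; this is an illustrative sketch of the idea, not the paper's actual priority rule:

```python
import heapq

def encode_order(img):
    """Prediction-driven traversal: starting from a seed pixel, repeatedly
    visit the frontier pixel whose known neighbours are most uniform
    (smallest spread), predicting it as their mean. Priorities depend only
    on decoded values, so a decoder reproduces the same order for free.
    Illustrative sketch; the paper's priority function differs."""
    h, w = len(img), len(img[0])
    known = {(0, 0)}
    heap, order = [], [(0, 0)]
    residuals = [img[0][0]]                 # seed pixel stored verbatim

    def neighbours(y, x):
        for j, i in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= j < h and 0 <= i < w:
                yield j, i

    def push_frontier(y, x):
        for j, i in neighbours(y, x):
            if (j, i) not in known:
                vals = [img[b][a] for b, a in neighbours(j, i)
                        if (b, a) in known]
                spread = max(vals) - min(vals)
                heapq.heappush(heap, (spread, j, i, sum(vals) // len(vals)))

    push_frontier(0, 0)
    while len(known) < h * w:
        _, y, x, pred = heapq.heappop(heap)
        if (y, x) in known:                 # stale entry: skip
            continue
        known.add((y, x))
        order.append((y, x))
        residuals.append(img[y][x] - pred)
        push_frontier(y, x)
    return order, residuals
```

On smooth images this ordering keeps predictions close to the true values, shrinking the residual entropy.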
State-of-the-art lossless image compression schemes, such as JPEG-LS and CALIC, have been proposed in the context-adaptive predictive coding framework. These schemes involve a prediction step followed by context-adaptive entropy coding of the residuals. However, the models for context determination proposed in the literature have been designed using ad-hoc techniques. In this paper, we take an alternative approach where we fix a simpler context model and then rely on a systematic technique to efficiently exploit spatial correlation to achieve efficient compression. The essential idea is to decompose the image into binary bitmaps such that the spatial correlation that exists among non-binary symbols is captured as the correlation among a few bit positions. The proposed scheme then encodes the bitmaps in a particular order based on the simple context model. However, instead of encoding a bitmap as a whole, we partition it into rectangular blocks, induced by a binary tree, and then independently encode the blocks. The motivation for partitioning is to explicitly identify the blocks within which the statistical correlation remains the same. On a set of standard test images, the proposed scheme, using the same predictor as JPEG-LS, achieved an overall bit-rate saving of 1.56% against JPEG-LS.
Proceedings of the 2nd International Conference on Vision, Image and Signal Processing, 2018
We present a video compression framework that has several components. First, we aim at achieving perceptually lossless compression. Several well-known video codecs in the literature have been evaluated and the performance was assessed using several well-known performance metrics. Second, we investigated the impact of error concealment algorithms for handling corrupted pixels due to transmission errors in communication channels. Extensive experiments using actual videos have been performed to demonstrate the proposed framework.
2014 15th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering, 2014
Lossy video coding can be achieved using H.264 Advanced Video Coding. Several methods have been proposed to turn this coder into a lossless one; the hierarchical lossless video coding structure with a new intra prediction algorithm was proposed to reach this goal. In this paper we propose enhancements of this method to improve coding efficiency. The simulation results show that the proposed enhancements reduce both the total bit count of the coded sequence and the execution time.
Still-Image Compression II, 1996
This paper describes a new method for lossless image compression where relative pixel values of localized neighborhoods in test images are stored in a codebook. In order to achieve decorrelation of an image's pixels, each neighborhood of a pixel is assigned to a neighborhood in the codebook, and the difference between the actual pixel value and the predicted value from the codebook is coded using an entropy coder. Using the same codebook, one can achieve perfect reconstruction of the image. The method is tested on several standard images and compared with previously published methods. These experiments demonstrate that the new method is an attractive alternative to existing lossless image compression techniques.
Rundbrief Der Gi-fachgruppe 5.10 Informationssystem-architekturen, 1991
Lossless data compression systems allow an exact replica of the original data to be reproduced at the receiver. Lossless compression has found a wide range of applications in such diverse fields as: compression of computer data, still images (e.g., medical or graphical images) and video (usually, in the form of entropy coding of the output of intra/inter-frame lossy schemes). It has been studied for over forty years and new compression algorithms are still continuously developed. This paper is a survey of current lossless techniques with results quoted for both sequential data files and still images.
Journal of Electronic Imaging, 1993
Lossy plus lossless techniques for image compression split an image into a low-bit-rate lossy representation and a residual that represents the difference between this low-rate lossy image and the original. Conventional schemes encode the lossy image and its lossless residual in an independent manner. We show that making use of the lossy image to encode the residual can lead to significant savings in bit rate. Further, the complexity increase to attain these savings is minimal. The savings are achieved by capturing the inherent structure of the image in the form of a noncausal prediction model that we call a prediction tree. This prediction model is then used to transmit the lossless residual. Simulation results show that a reduction of 0.5 to 1.0 bit/pixel can be achieved in bit rates compared to the conventional approach of independently encoding the residual.
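The lossy-plus-residual split described above is easy to state in code: a coarse lossy layer plus an integer residual reconstructs the original exactly. The sketch below uses plain uniform quantization for the lossy layer purely for illustration; the paper's contribution is the prediction-tree model used to code the residual, which is omitted here:

```python
def lossy_plus_lossless(img, step=8):
    """Split each pixel into a coarse lossy layer (uniform quantization
    with the given step, a stand-in for any lossy codec) and a residual.
    Adding the two layers back together is exactly lossless."""
    lossy = [[(p // step) * step for p in row] for row in img]
    resid = [[p - q for p, q in zip(row, lrow)]
             for row, lrow in zip(img, lossy)]
    return lossy, resid
```

The paper's point is that the residual is cheaper to code when its statistics are conditioned on the lossy layer rather than coded independently.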
IEEE Signal Processing Letters, 2000
A novel image compression technique is presented that incorporates progressive transmission and near-lossless compression in a single framework. Experimental performance of the proposed coder proves to be competitive with the state-of-the-art compression schemes.
2012 VIII Southern Conference on Programmable Logic, 2012
This paper presents the Reference Frame Context Adaptive Variable-Length Compressor (RFCAVLC) for video coding systems. RFCAVLC aims to reduce the external memory bandwidth required to carry out this process. Six experiments were performed, all based on adaptations of the Huffman algorithm, and the best experiment achieved an average compression rate of more than 24% without any loss in quality for all targeted resolutions. This result is similar to the best solutions proposed in the literature, but it is the only one without losses in this process. The presented RFCAVLC splits the reference frames in 4x4 blocks and compresses these blocks using one of four static code tables in a context-adaptive way. An architecture that implements the encoder of the RFCAVLC solution was described in VHDL and synthesized to an Altera Stratix IV FPGA. The synthesis results achieved by the designed architecture indicate that this solution can be easily coupled to a complete video encoder system with negligible hardware overhead and without compromising throughput.
Visual Communications and Image Processing 2004, 2004
for their helpful comments and support. I am also grateful to Haoping Yu, who is part of the Corporate Research Group at Thomson in Indianapolis. Haoping and I worked together on a summer internship in 2002 where he gave me many inspirational comments about my research direction. I would also like to thank colleagues in the VIPER (Video and Image Processing) Lab for sharing time in discussions, especially Jinwha Yang who gave many helpful comments concerning research techniques. My family has fully supported me throughout my years at Purdue. Jong-Hun (Bori) Park, Min-Seo (Sori) Park and especially my wife Jin-Hee Cho are my most valuable assets and I am eternally grateful for their emotional steadfastness. Last, but not least, I would like to thank the support of my brother and sisters in Korea.
We present a new method for lossless image compression that gives compression comparable to JPEG lossless mode with about five times the speed. Our method, called FELICS, is based on a novel use of two neighboring pixels for both prediction and error modeling. For coding we use single bits, adjusted binary codes, and Golomb-Rice codes. For the latter we present and analyze a provably good method for estimating the single coding parameter.

The efficient, lossless image compression system (ELICS) algorithm, which consists of a simplified adjusted binary code and a Golomb-Rice code with storage-less k parameter selection, is proposed to provide lossless compression for high-throughput applications. The simplified adjusted binary code reduces the number of arithmetic operations and improves processing speed. According to theoretical analysis, the storage-less k parameter selection applies a fixed value in the Golomb-Rice code to remove data dependency and the extra storage for a cumulation table.
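The Golomb-Rice code mentioned in both abstracts is simple enough to show in full: a non-negative integer n is split by the parameter k into a unary-coded quotient and a k-bit binary remainder, and signed residuals are first folded to non-negative integers. A minimal sketch:

```python
def zigzag(e):
    """Fold a signed prediction residual to a non-negative integer:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(n, k):
    """Golomb-Rice code of non-negative n with parameter k:
    quotient n >> k in unary (q ones, then a terminating zero),
    followed by the k low-order bits of n in binary."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = '1' * q + '0'                 # unary part
    if k:
        bits += format(r, '0%db' % k)    # k-bit binary remainder
    return bits
```

Choosing k well (adaptively in FELICS, as a fixed storage-less value in ELICS) is what keeps the unary part short for typical residual magnitudes.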
Lossless image compression is a class of image compression algorithms that allows the original image to be perfectly reconstructed from the compressed image. This work presents a new lossless color image compression algorithm based on pixel prediction and arithmetic coding. A Red, Green and Blue (RGB) image is first decorrelated using the Reversible Color Transform (RCT). The resulting Y component is then compressed with a conventional lossless grayscale compression method. The chrominance components are encoded using pixel prediction followed by arithmetic coding: with the RCT applied, the prediction error is computed and arithmetic coding is applied to the error signal. The compressed luminance and encoded chrominance are combined to form a losslessly compressed RGB image. The scheme is shown to reduce bit rates compared with JPEG 2000 and JPEG-XR. To further reduce the bit rate, the compression stages and the pixel prediction strategy can be tuned for better performance.
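The RCT referred to above is the integer reversible color transform of JPEG 2000: it decorrelates RGB into a luma component and two chroma differences using only integer shifts, so the inverse is exact. Shown here as a per-pixel sketch:

```python
def rct_forward(r, g, b):
    """JPEG 2000 reversible color transform (integer, exactly invertible).
    Python's >> floors toward negative infinity, matching the spec's floor."""
    y = (r + 2 * g + b) >> 2    # luma approximation
    u = b - g                   # chroma difference
    v = r - g                   # chroma difference
    return y, u, v

def rct_inverse(y, u, v):
    """Exact inverse of rct_forward."""
    g = y - ((u + v) >> 2)
    return v + g, g, u + g      # (r, g, b)
```

Because the transform is exactly invertible in integers, applying it costs nothing in the lossless setting while concentrating energy in the Y component.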
Due to hardware size limitations and complexity in transmission applications, multimedia systems and computer communications, compression techniques are essential. Multimedia systems compress data for several reasons: large storage would otherwise be required, storage devices are relatively slow and constrain real-time playback of multimedia data, and network bandwidth limits real-time data transmission. This paper presents an enhanced approach to run-length coding. First the DCT is applied and quantization is performed on the image to be compressed; then a modified run-length coding technique is used to compress the image losslessly. This scheme represents each run of repeated zeros by RUN and each non-zero coefficient by LEVEL. It omits the RUN value for sequences of non-zero coefficients, where it is zero most of the time, and replaces a single zero between non-zero coefficients by '0', which results in greater compression than emitting the (RUN, LEVEL) = (1, 0) pair.
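The conventional (RUN, LEVEL) scheme that this paper modifies can be sketched directly: RUN counts the zeros preceding each non-zero quantized coefficient, and a terminating pair marks end-of-block. A textbook sketch of the baseline, not the paper's modified variant:

```python
def run_level_encode(coeffs):
    """Classic run-length coding of quantized transform coefficients:
    each non-zero LEVEL is paired with the RUN of zeros before it;
    a final (0, 0) pair marks end-of-block."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    pairs.append((0, 0))     # end-of-block marker
    return pairs
```

The paper's enhancement targets the common case where RUN is 0 or 1 between consecutive non-zero coefficients, shortening exactly those pairs.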
Sixth Multidimensional Signal Processing Workshop
Arithmetic coding is applied to provide lossless and loss-inducing compression of optical, infrared, and synthetic aperture radar imagery of natural scenes. Arithmetic coding algorithms successfully exploit the dependence structure of images through the adaptive estimation of probability distributions conditioned on pixel contexts. Several different contexts are considered, including both predictive and non-predictive variations, with both image-dependent and image-independent variations. In lossless coding experiments, arithmetic coding algorithms are shown to outperform comparable variants of both Huffman and Lempel-Ziv-Welch coding algorithms by approximately 0.5 bits per pixel. For image-dependent contexts constructed from high-order autoregressive predictors, arithmetic coding algorithms provide compression ratios as high as 4. Contexts constructed from lower-order autoregressive predictors provide compression ratios nearly as great as those of the higher-order predictors with favorable computational trades. Compression performance variations are shown to reflect the inherent sensor-dependent differences in the stochastic structure of the imagery. Arithmetic coding is also demonstrated to be a valuable addition to loss-inducing compression techniques. Code sequences derived from a lapped orthogonal transform-based vector quantization scheme are shown to be losslessly compressible using the arithmetic coding scheme. For imagery compressed to 0.5 bits per pixel, the addition of an arithmetic coder with Markov-dependent context results in additional compression ratio gains as high as 2 with no additional loss in fidelity. Rome Air Development Center sponsored this work under contract number F30602-87-(3-0225.
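The "adaptive estimation of probability distributions conditioned on pixel contexts" described above boils down to maintaining symbol counts per context and updating them as pixels are coded. A minimal sketch of such a context model (the arithmetic coder that would consume these probabilities is omitted for brevity; Laplace smoothing of the counts is an assumption of this sketch):

```python
from collections import defaultdict

class ContextModel:
    """Adaptive per-context symbol statistics of the kind that drives a
    context-conditioned arithmetic coder: conditional probabilities are
    estimated on the fly from counts, initialized to 1 (Laplace smoothing)
    so no symbol ever has zero probability."""

    def __init__(self, alphabet_size):
        self.counts = defaultdict(lambda: [1] * alphabet_size)

    def prob(self, context, symbol):
        """Current estimate of P(symbol | context)."""
        c = self.counts[context]
        return c[symbol] / sum(c)

    def update(self, context, symbol):
        """Record an observed (context, symbol) pair after coding it."""
        self.counts[context][symbol] += 1
```

Predictive contexts would form the context key from quantized prediction errors of neighbours; non-predictive ones from the neighbouring pixel values directly.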
2007 IEEE International SOC Conference, 2007
In this paper we present a novel hardware architecture for context-based statistical lossless image compression, as part of a dynamically reconfigurable architecture for universal lossless compression. A gradient-adjusted prediction and context modeling algorithm is adapted to a pipelined scheme for low complexity and high throughput. Our proposed system improves image compression ratio while keeping low hardware complexity. This system is designed for a Xilinx Virtex4 FPGA core and optimized to achieve a 123 MHz clock frequency for real-time processing.