Goddard Space Flight Center actively participated in the mid-1990s in an effort to standardize a lossless data compression algorithm for space applications. As the standards effort progressed, an Application-Specific Integrated Circuit (ASIC) implementation was initiated for high-throughput applications. Eventually, a radiation-hardened circuit was fabricated that operates at over 20 Msamples/sec. Implementation of new technologies into space missions has always met resistance. The mentality of "if it works, why change it?" prevails in the aerospace community, and the "not-invented-here" (NIH) syndrome further hampers progress. The first real mission application of the lossless standard at GSFC was for a small explorer, the Submillimeter Wave Astronomy Satellite (SWAS-1999), which needed to overcome insufficient onboard storage capacity. Subsequent mid-class explorers for space science missions, the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE-00) and the Microwave Anisotropy Probe (MAP-01), followed. These implementations are all software-based.
1994
Space data compression has been used on deep space missions since the late 1960s. Significant flight history on both lossless and lossy methods exists. NASA proposed a standard in May 1994 that addresses most payload requirements for lossless compression. The Laboratory has also been involved in payloads that employ data compression and in leading the American Institute of Aeronautics and Astronautics standards activities for space data compression. This article details the methods and flight history of both NASA and international space missions that use data compression.
Remote Sensing, 2021
The increasing use of high-resolution imaging sensors on board satellites motivates on-board image compression, mainly due to restrictions in terms of both hardware (computational and storage resources) and downlink bandwidth to the ground. This work presents a compression solution based on the CCSDS 123.0-B-2 near-lossless compression standard for multi- and hyperspectral images, which deals with the high amount of data acquired by these next-generation sensors. The proposed approach has been developed following an HLS design methodology, accelerating design time and obtaining good system performance. The compressor comprises two main stages, a predictor and a hybrid encoder, designed in Band-Interleaved by Line (BIL) order and optimized to achieve a trade-off between throughput and logic resource utilization. This solution has been mapped onto a Xilinx Kintex UltraScale XCKU040 FPGA and, targeting AVIRIS images, reaches a throughput of 12.5 MSamples/s and c...
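To make the two-stage structure concrete, here is a minimal Python sketch of a predictor followed by an entropy coder visiting a cube in BIL order. It is only an illustration of the pipeline shape: the trivial previous-band predictor and fixed-parameter Rice coder below are stand-ins for the CCSDS 123.0-B-2 adaptive predictor and hybrid encoder, and the cube dimensions are made up.

```python
# Illustrative sketch only: a toy "predictor + encoder" pipeline that visits a
# hyperspectral cube in Band-Interleaved-by-Line (BIL) order. The real CCSDS
# 123.0-B-2 predictor and hybrid encoder are far more elaborate.
import numpy as np

def map_residual(delta):
    """Map a signed prediction error to a non-negative integer (zig-zag)."""
    return 2 * delta if delta >= 0 else -2 * delta - 1

def rice_encode(value, k):
    """Golomb-Rice code with parameter 2**k: unary quotient + k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def compress_bil(cube, k=4):
    """cube[z, y, x]: bands, lines, columns. Returns a bit string."""
    bands, lines, cols = cube.shape
    bits = []
    for y in range(lines):           # outer loop over lines ...
        for z in range(bands):       # ... then bands: BIL traversal order
            for x in range(cols):
                s = int(cube[z, y, x])
                # Toy predictor: same position in the previous band, falling
                # back to the previous sample in the line (or 0).
                if z > 0:
                    pred = int(cube[z - 1, y, x])
                elif x > 0:
                    pred = int(cube[z, y, x - 1])
                else:
                    pred = 0
                bits.append(rice_encode(map_residual(s - pred), k))
    return "".join(bits)

# Example with a small random 12-bit cube (dimensions are arbitrary).
cube = np.random.randint(0, 2**12, size=(8, 4, 32))
stream = compress_bil(cube)
print(len(stream) / cube.size, "bits/sample")
```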
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2018
Hyperspectral images taken by satellites pose a challenge for data transmission. Communication with Earth's antennas is usually time restricted and bandwidth is very limited. The CCSDS 1.2.3 algorithm mitigates this issue by defining a lossless compression standard for this kind of data, allowing more efficient usage of the transmission link. Reconfigurable field-programmable gate arrays (FPGAs) are promising platforms that provide powerful on-board computing capabilities and flexibility at the same time. In this paper, we present an FPGA implementation for the CCSDS 1.2.3 algorithm. The proposed method has been implemented on the Virtex-4 XC2VFX60 FPGA (the commercial equivalent of the space-qualified Virtex-4QV XQR4VF60 FPGA) and on the Virtex-7 XC7VX690T, and tested using real hyperspectral data collected by NASA's airborne visible infra-red imaging spectrometer (AVIRIS) and two procedurally generated synthetic images. Our design, occupying a mere third of the Virtex-4 XC2VFX60 FPGA, has a very low power consumption and achieves real-time compression for hyperspectral imaging devices such as NASA's NG-AVIRIS. For this, we use the board's memory as a cache for input data, which allows us to process images as streams of data, completely eliminating storage needs. All these factors make it a great option for on-satellite compression.
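The streaming idea described above (using on-board memory only as a small cache so the image never has to be stored) can be sketched as follows. The file layout, sensor geometry and the encode_line callback are assumptions for illustration; the point is that memory use stays bounded by one line of all bands, independent of image height.

```python
# Illustrative sketch only: streaming compression of a BIL-ordered raw file,
# keeping just the previous line of every band as a cache so memory use does
# not grow with image height. File layout and sizes are hypothetical.
import numpy as np

BANDS, COLS, DTYPE = 224, 512, np.uint16   # assumed sensor geometry

def stream_lines(path):
    """Yield one (BANDS, COLS) line-of-all-bands block at a time."""
    nbytes = BANDS * COLS * DTYPE().itemsize
    with open(path, "rb") as f:
        while True:
            raw = f.read(nbytes)
            if len(raw) < nbytes:
                return
            yield np.frombuffer(raw, dtype=DTYPE).reshape(BANDS, COLS)

def compress_stream(path, encode_line):
    """encode_line is a placeholder for the actual compressor of one line."""
    prev_line = None                       # the only state kept between lines
    for line in stream_lines(path):
        encode_line(line, prev_line)       # predictor may consult the cached line
        prev_line = line
```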
IGARSS 2000. IEEE 2000 International Geoscience and Remote Sensing Symposium. Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment. Proceedings (Cat. No.00CH37120)
A high-performance lossy data compression technique is currently being developed for space science applications under the requirement of high-speed push-broom scanning. The technique is also error-resilient, in that error propagation is contained within a few scan lines. The algorithm is based on a block transform combined with bit-plane encoding; this combination results in an embedded bit string with exactly the desired compression rate. The lossy coder is described. The compression scheme performs well on a suite of test images typical of images from spacecraft instruments. Hardware implementations are in development; a functional chip set is expected by the end of 2000.
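The embedded property mentioned above is worth illustrating: if magnitude bits of the transform coefficients are emitted from the most significant bit plane downward, the stream can be truncated at any point to hit an exact rate. The toy Python sketch below shows only that idea; it is not the flight coder, and real embedded coders interleave significance information far more cleverly.

```python
# Illustrative sketch only: bit-plane coding of quantized transform
# coefficients. Truncating the stream drops low-order planes, giving a coarser
# reconstruction, which is how an exact target rate can be met.
import numpy as np

def bitplane_encode(coeffs, num_planes=12):
    """coeffs: integer transform coefficients of one block (any shape)."""
    flat = coeffs.ravel()
    signs = (flat < 0).astype(np.uint8)
    mags = np.abs(flat)
    stream = list(signs)                          # toy header: one sign bit each
    for plane in range(num_planes - 1, -1, -1):   # MSB plane first
        stream.extend((mags >> plane) & 1)
    return stream

def bitplane_decode(stream, shape, num_planes=12):
    n = int(np.prod(shape))
    signs = np.array(stream[:n], dtype=np.int64)
    mags = np.zeros(n, dtype=np.int64)
    pos = n
    for plane in range(num_planes - 1, -1, -1):
        if pos + n > len(stream):                 # truncated stream: stop early,
            break                                 # missing planes stay zero
        mags |= np.array(stream[pos:pos + n], dtype=np.int64) << plane
        pos += n
    return ((1 - 2 * signs) * mags).reshape(shape)

block = np.random.randint(-2048, 2048, size=(8, 8))
full = bitplane_encode(block)
approx = bitplane_decode(full[:len(full) // 2], block.shape)  # half-rate cut
```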
2003
Offer a royalty-free license to all CCSDS space agencies if any patent is included in the algorithm; process both frame and non-frame (push-broom) data; offer adjustable coded data rate or image quality (up to a lossless
International Journal of Image, Graphics and Signal Processing, 2015
HyperSpectral Imagers (HySI) are used on spacecraft or aircraft to capture minute characteristics of a target by imaging it in a large number of narrow, contiguous bands. HySI data, represented as a data cube with two dimensions giving the spatial distribution and a third providing band information, is huge in volume and challenging to handle. Hence, onboard compression becomes a necessity for optimal usage of onboard storage and downlink bandwidth. The CCSDS-recommended 123.0-B-1 standard [2] has been released with an onboard compression scheme for hyperspectral data. The scheme is based on the Fast Lossless algorithm and consists of two main functional blocks, a Predictor and an Encoder. The predictor algorithm can be implemented in two modes, 'Full Neighborhood Oriented' and 'Reduced Column Oriented'. The encoder algorithm also defines two options, 'sample-adaptive' and 'block-adaptive'. We have developed a MATLAB-based model implementing the compression scheme with all options defined by the standard. A decompression model has also been developed for recovering the original data and for end-to-end verification. Four sets of HySI data (AVIRIS, Hyperion, Chandrayaan-1 and FTIS) have been applied as input for evaluation of the model. The compression ratio achieved is between 2 and 3, and lossless compression is ensured for each data set, as the Mean Square Error (MSE) is zero for all hyperspectral images; visual reconstruction of the decompressed data also matches the originals. In this paper we discuss the algorithm implementation methodology and results.
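The end-to-end verification described above (compress, decompress, confirm MSE is exactly zero) can be sketched as below. The trivial delta coder is only a placeholder for the standard's predictor and sample-/block-adaptive encoders; what the check demonstrates is that decompression must reproduce the cube bit-for-bit.

```python
# Illustrative sketch only: round-trip losslessness check for a compression
# model. A trivial delta coder stands in for the CCSDS 123.0-B-1 chain; the
# criterion is MSE == 0 between the original and reconstructed cubes.
import numpy as np

def compress(cube):
    first = cube.flat[0]
    residuals = np.diff(cube.ravel().astype(np.int64))
    return first, residuals                      # stand-in for the coded stream

def decompress(first, residuals, shape):
    return (first + np.concatenate(([0], np.cumsum(residuals)))).reshape(shape)

def verify_lossless(cube):
    first, residuals = compress(cube)
    rec = decompress(first, residuals, cube.shape)
    mse = float(np.mean((cube.astype(np.int64) - rec) ** 2))
    return mse, rec

# Synthetic 12-bit cube standing in for a HySI scene.
cube = np.random.randint(0, 2**12, size=(64, 100, 100), dtype=np.uint16)
mse, rec = verify_lossless(cube)
assert mse == 0.0 and np.array_equal(cube, rec)
```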
Remote Sensing
Hyperspectral imaging is a technology which, by sensing hundreds of wavelengths per pixel, enables fine studies of the captured objects. This produces great amounts of data that require equally large storage, and compression with algorithms such as the Consultative Committee for Space Data Systems (CCSDS) 1.2.3 standard is a must. However, the speed of this lossless compression algorithm is not enough in some real-time scenarios if we use a single-core processor. This is where architectures such as Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs) can shine best. In this paper, we present both FPGA and OpenCL implementations of the CCSDS 1.2.3 algorithm. The proposed parallelization method has been implemented on the Virtex-7 XC7VX690T, Virtex-5 XQR5VFX130 and Virtex-4 XC2VFX60 FPGAs, and on the GT440 and GT610 GPUs, and tested using hyperspectral data from NASA's Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS). Both approaches fulfill our real-time requirements. This paper attempts to shed some light on the comparison between the two approaches, including other works from the existing literature, explaining the trade-offs of each one.
Proceedings of Spie the International Society For Optical Engineering, 2009
To efficiently use the limited bandwidth available on the downlink from satellite to ground station, imager data is usually compressed before transmission. Transmission introduces unavoidable errors, which are only partially removed by forward error correction and packetization. In the case of the commonly used CCSDS Rice-based compression, the result is a contiguous sequence of dummy values along scan lines in a band of the imager data. We have developed a method capable of using the image statistics to provide a principled estimate of the missing data. Our method outperforms interpolation yet can be performed fast enough to provide uninterrupted data flow. The estimation of the lost data provides significant value to end users who may use only part of the data, may not have statistical tools, or lack the expertise to mitigate the impact of the lost data. Since the locations of the lost data will be clearly marked as metadata in the HDF or NetCDF header, experts who prefer to handle error mitigation themselves will be free to use or ignore our estimates as they see fit.
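The abstract does not specify the estimator, so the sketch below only contrasts the interpolation baseline it mentions with one plausible statistics-based alternative: regressing the damaged band on a correlated neighbouring band over the pixels that survived. This is not the paper's method; the gap positions and the choice of reference band are made up for illustration.

```python
# Illustrative sketch only: filling a contiguous run of dummy values in one
# band of a scan line. Linear interpolation along the line is the baseline;
# a simple regression against an intact, correlated band is one plausible
# statistics-based alternative (NOT the paper's estimator).
import numpy as np

def interpolate_gap(line, lo, hi):
    """Baseline: linearly interpolate samples lo..hi-1 of one band.
    Assumes the gap does not touch the ends of the line."""
    out = line.astype(float).copy()
    out[lo:hi] = np.interp(np.arange(lo, hi),
                           [lo - 1, hi], [line[lo - 1], line[hi]])
    return out

def regress_gap(line, ref_line, lo, hi):
    """Estimate the gap from a correlated reference band via least squares."""
    good = np.ones(line.size, dtype=bool)
    good[lo:hi] = False                       # exclude the dummy values
    a, b = np.polyfit(ref_line[good].astype(float),
                      line[good].astype(float), 1)
    out = line.astype(float).copy()
    out[lo:hi] = a * ref_line[lo:hi] + b
    return out
```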