2011, Image Fusion and Its Applications
Image fusion techniques are essential for enhancing the utility of remote sensing data captured by satellites. By integrating high spatial resolution panchromatic images with multispectral images, the resulting hybrid products optimize visual information for distinct applications, such as urban and environmental studies. This paper discusses the methodologies for effective image fusion, addresses common challenges in maintaining spectral information while boosting spatial resolution, and presents findings through quantitative classification accuracy assessments, demonstrating improvement in target detection in complex urban environments.
Photogrammetric Engineering and Remote …, 2008
A pixel-level data fusion approach based on correspondence analysis (CA) is introduced for high spatial and spectral resolution satellite data. Principal component analysis (PCA) is a well-known multivariate data analysis and fusion technique in the remote sensing community. Correspondence analysis, a more recent multivariate technique related to PCA, is applied to fuse panchromatic data with multispectral data in order to improve the quality of the final fused image. In the CA-based fusion approach, fusion takes place in the last component, as opposed to the first component in the PCA-based approach. This new approach is then quantitatively compared to the PCA fusion approach using Landsat ETM+, QuickBird, and two Ikonos (with and without dynamic range adjustment) test images. The new approach provided excellent spectral accuracy when synthesizing images from multispectral and high spatial resolution panchromatic imagery.
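For context, the PCA baseline that the CA approach is compared against substitutes the first principal component of the MS bands with the histogram-matched Pan image and inverts the transform. A minimal NumPy sketch of that baseline (illustrative only; `pca_fuse` and its interface are not from the paper):

```python
import numpy as np

def pca_fuse(ms, pan):
    """PCA pansharpening sketch: replace the first principal component
    of the MS bands with the Pan image (matched in mean/std), then
    invert the transform. ms: (H, W, B) MS cube resampled to the Pan
    grid; pan: (H, W) panchromatic image."""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    xc = x - mean
    # Eigen-decomposition of the band covariance matrix.
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]      # sort by descending variance
    pcs = xc @ vecs                             # principal components
    # Match Pan to the first component's statistics before substitution.
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    return (pcs @ vecs.T + mean).reshape(h, w, b)
```

The CA-based method of the paper instead fuses in the last component; CA itself, a chi-square-weighted decomposition, is not shown here.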
Lecture Notes in Computer Science, 2008
In remote sensing, image fusion techniques are used to fuse high spatial resolution panchromatic and lower spatial resolution multispectral images that are simultaneously recorded by one sensor. This is done to create high resolution multispectral image datasets (pansharpening). In most cases, these techniques provide very good results, i.e. they retain the high spatial resolution of the panchromatic image and the spectral information from the multispectral image. When applied to multitemporal and/or multisensoral image data, these techniques still create spatially enhanced datasets but usually at the expense of the spectral characteristics. In this study, eight multitemporal remote sensing images are fused with one panchromatic image to test eight different fusion techniques. The fused images are visually and quantitatively analyzed for spectral characteristics preservation and spatial enhancement. Of the employed methods, only the newly developed Ehlers fusion guarantees excellent color preservation and spatial improvement for all dates and sensors.
2006
The main topic of this paper is high-resolution image fusion. The techniques used to merge high spatial resolution panchromatic images with high spectral resolution multispectral images are described. The most commonly used image fusion methods that work on the principle of component substitution (the intensity-hue-saturation method (IHS), the Brovey transform (BT), and the multiplicative method (MULTI)) have been applied to Ikonos, QuickBird, Landsat, and aerial orthophoto images. Visual comparison, histogram analyses, correlation coefficients, and difference images were used to analyze the spectral and spatial quality of the fused images. It was found that preserving spectral characteristics requires a high level of similarity between the panchromatic image and the respective multispectral intensity. To achieve this, a spectral sensitivity analysis of the multispectral and panchromatic data was performed, and digital values in individual bands were modified before fusion. It was also determined that spatial resolution is best preserved when the input panchromatic image is left unchanged.
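The component-substitution principle shared by these methods can be made concrete with the "fast IHS" formulation, in which fusion reduces to adding the Pan-minus-intensity difference to each resampled band. An illustrative sketch, not the paper's implementation (`ihs_fuse` is a hypothetical name):

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast-IHS component substitution sketch: fused bands are the
    resampled MS bands plus the difference between the (matched) Pan
    image and the MS intensity. ms: (H, W, 3); pan: (H, W)."""
    ms = ms.astype(float)
    intensity = ms.mean(axis=2)                 # I component of IHS
    # Histogram-match Pan to the intensity to limit spectral distortion.
    p = (pan - pan.mean()) / (pan.std() + 1e-12) * intensity.std() \
        + intensity.mean()
    return ms + (p - intensity)[:, :, None]     # inject the difference
```

When Pan is already identical to the intensity, the injected difference vanishes and the MS bands pass through unchanged, which is exactly the similarity requirement the study identifies.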
This work presents a multi-resolution framework for merging a multi-spectral image having an arbitrary number of bands with a higher-resolution panchromatic observation. The fusion method relies on the generalised Laplacian pyramid (GLP), which is a multi-scale oversampled structure. The goal is to selectively inject spatial frequencies from one image into another under the constraint of thoroughly retaining the spectral information of the coarser data. The novel idea is that a model of the modulation transfer functions (MTF) of the multi-spectral scanner is exploited to design the GLP reduction filter. Thus, the inter-band structure model (IBSM), which is calculated at the coarser scale, where both MS and Pan data are available, can be extended to the finer scale without the poor enhancement that occurs when the MTFs are assumed to be ideal filters. Experiments carried out on QuickBird data demonstrate that superior spatial enhancement, besides the spectral quality typical of injection methods, is achieved by means of the MTF-adjusted fusion.

1 INTRODUCTION

Image fusion techniques, originally devised to allow integration of different information sources, may take advantage of the complementary spatial/spectral resolution characteristics typical of remote-sensing imagery. When exactly three multi-spectral (MS) bands are concerned, the most straightforward fusion method is to resort to an intensity-hue-saturation (IHS) transformation. This procedure is equivalent to injecting, i.e. adding, the difference between the sharp panchromatic (Pan) image and the smooth intensity into the resampled MS bands (Tu et al. 2001). Since the Pan image, histogram-matched to the intensity component, does not generally have the same local radiometry as the latter, large spectral distortion (colour changes) may be noticed when the fusion result is displayed in colour composition.
When more than three spectral bands are available, IHS fusion may be applied to three spectral components at a time, or, better, the IHS transformation may be replaced with principal component analysis (PCA). Generally speaking, if the spectral responses of the MS bands do not perfectly overlap the Pan bandwidth, as happens with Ikonos-2 and QuickBird, IHS- and PCA-based methods may yield very poor results in terms of spectral fidelity. To overcome this inconvenience, methods based on injecting only spatial details, taken from the Pan image without resorting to an IHS transformation, have been introduced and have demonstrated superior performance. Multi-resolution analysis (MRA) provides effective tools, such as wavelets and pyramids (Aiazzi et al. 2002), to carry out image merging tasks. However, in the case of high-pass detail injection, spatial distortions, typically ringing or aliasing effects causing shifts or blurring of contours and textures, may occur. These drawbacks, which may be as much
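The detail-injection family referred to above adds to each resampled MS band the high-frequency residue of Pan, i.e. Pan minus a low-pass version of itself. A minimal sketch, using a simple box filter as a stand-in for the MTF-shaped or wavelet/pyramid filters the literature actually uses (function names hypothetical):

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box filter; a crude stand-in for the MTF-shaped
    reduction filters used in GLP/wavelet injection schemes."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, out)

def hp_inject(ms, pan, k=5):
    """High-pass detail injection: add the high-frequency residue of
    Pan (Pan minus its low-pass version) to every resampled MS band."""
    detail = pan.astype(float) - box_blur(pan.astype(float), k)
    return ms.astype(float) + detail[:, :, None]
```

The ringing and aliasing drawbacks mentioned above arise precisely from the choice of this low-pass filter and of the decimation scheme.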
Photogrammetric Engineering & Remote Sensing, 2008
This paper introduces a novel approach for evaluating the quality of pansharpened multispectral (MS) imagery without resorting to reference originals. Hence, evaluations are feasible at the highest spatial resolution of the panchromatic (PAN) sensor. Wang and Bovik's image quality index (QI) provides a statistical similarity measurement between two monochrome images. The QI values between any pair of MS bands are calculated before and after fusion and used to define a measurement of spectral distortion. Analogously, QI values between each MS band and the PAN image are calculated before and after fusion to yield a measurement of spatial distortion. The rationale is that such QI values should be unchanged after fusion, i.e., when the spectral information is translated from the coarse scale of the MS data to the fine scale of the PAN image. Experimental results, carried out on very high-resolution Ikonos data and simulated Pléiades data, demonstrate that the results provided by the proposed approach are consistent and in trend with analyses performed on spatially degraded data. However, the proposed method requires no reference originals and is therefore usable in all practical cases.
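The building block of this protocol is Wang and Bovik's quality index, which in its global form combines correlation, luminance, and contrast agreement in a single number. A sketch of that global form (the paper applies it over local sliding windows and band pairs, which this illustration omits):

```python
import numpy as np

def q_index(x, y):
    """Wang-Bovik universal image quality index, global form:
    Q = 4*cov(x, y)*mean(x)*mean(y)
        / ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2)).
    Q equals 1 if and only if y is identical to x."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()          # covariance of the two images
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

Tracking how inter-band Q values change through fusion gives the spectral-distortion measure; Q between each band and PAN gives the spatial one.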
EURASIP Journal on Advances in Signal Processing, 2012
This article presents a novel method for enhancing the spatial quality of hyperspectral (HS) images through the use of a high resolution panchromatic (PAN) image. Due to the high number of bands, applying a pan-sharpening technique to HS images may increase the computational load and complexity. Thus a dimensionality reduction preprocess, compressing the original number of measurements into a lower dimensional space, becomes mandatory. To solve this problem, we propose a pan-sharpening technique combining both dimensionality reduction and fusion, making use of non-linear principal component analysis (NLPCA) and Indusion, respectively, to enhance the spatial resolution of an HS image. We have tested the proposed algorithm on HS images obtained from the CHRIS-Proba sensor and a PAN image obtained from WorldView-2, and demonstrated that a reduction using NLPCA does not result in any significant degradation of the pan-sharpening results.
Remote sensing image fusion is an effective way to use the large volume of data from multisensor images. Most earth observation satellites, such as SPOT, Landsat 7, IKONOS, and QuickBird, provide both panchromatic (Pan) images at a higher spatial resolution and multispectral (MS) images at a lower spatial resolution, and many remote sensing applications, especially GIS-based applications, require both high spatial and high spectral resolution. An effective image fusion technique can produce such remotely sensed images. Image fusion is the combination of two or more different images into a new image by means of a certain algorithm, in order to obtain more and better information about an object or study area than either source provides alone. Fusion is performed at three different processing levels, pixel level, feature level, and decision level, according to the stage at which the fusion takes place. Many image fusion methods can be used to produce high-resolution multispectral images from a high-resolution Pan image and low-resolution multispectral images. This paper explores the major remote sensing data fusion techniques at pixel level and reviews the concept, principles, limitations, and advantages of each technique, focusing on traditional techniques such as intensity-hue-saturation (IHS), Brovey, principal component analysis (PCA), and wavelet methods.
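Of the traditional techniques listed, the Brovey transform has the simplest closed form: each band is scaled by the ratio of Pan to the MS intensity, so the relative band ratios (the "colour") are kept while brightness follows Pan. An illustrative sketch (hypothetical function name):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-12):
    """Brovey transform sketch: fused_i = ms_i * pan / sum_j(ms_j).
    ms: (H, W, B) MS bands resampled to the Pan grid; pan: (H, W).
    eps guards against division by zero in dark pixels."""
    ms = ms.astype(float)
    intensity = ms.sum(axis=2)                  # per-pixel band sum
    return ms * (pan / (intensity + eps))[:, :, None]
```

The per-pixel ratio makes the method sensitive to radiometric mismatch between Pan and the band sum, which is one source of the spectral distortion the review discusses.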
2018
A large number of imaging sensors in the field of remote sensing capture images with different spatial and spectral resolutions. Given a low spatial resolution multispectral (MS) image and a high spatial resolution panchromatic (PAN) image, the proposed algorithm aims to enhance the spatial resolution of the MS image while preserving its high spectral information. To this end, a method for fusing MS and PAN images based on Fourier filtering in HSV color space is proposed. Experimental results show that the proposed method produces better fusion results.
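The abstract does not spell out the filtering scheme, but frequency-domain fusion of this kind is commonly sketched as keeping the low spatial frequencies of the MS-derived value channel and the high frequencies of Pan. The toy version below, with an ideal circular mask and the RGB-to-HSV conversion omitted, is an assumed illustration, not the paper's method:

```python
import numpy as np

def fourier_fuse(value, pan, cutoff=0.1):
    """Assumed sketch of frequency-domain fusion: take low frequencies
    from the MS-derived V channel and high frequencies from Pan, merged
    with an ideal circular low-pass mask in the 2-D FFT domain."""
    h, w = value.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    low = np.sqrt(fy ** 2 + fx ** 2) <= cutoff  # ideal low-pass mask
    fv = np.fft.fft2(value)
    fp = np.fft.fft2(pan)
    fused = np.where(low, fv, fp)               # low freqs from MS, high from Pan
    return np.real(np.fft.ifft2(fused))
```

The fused V channel would then be recombined with the original H and S channels before converting back to RGB.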
IEEE Geoscience and Remote Sensing Letters, 2014
Unlike multispectral (MSI) and panchromatic (PAN) images, the spatial resolution of hyperspectral images (HSI) is generally limited due to sensor limitations. In many applications, HSI with high spectral as well as spatial resolution are required. In this paper, a new method for spatial resolution enhancement of an HSI using spectral unmixing and sparse coding (SUSC) is introduced. The proposed method fuses high spectral resolution features from the HSI with high spatial resolution features from an MSI of the same scene. Endmembers are extracted from the HSI by spectral unmixing, and the exact location of the endmembers is obtained from the MSI. This fusion process by means of spectral unmixing is formulated as an ill-posed inverse problem, which requires a regularization term in order to convert it into a well-posed inverse problem. As a regularizer, we employ sparse coding (SC), for which a dictionary is constructed using high spatial resolution MSI or PAN images from unrelated scenes. The proposed algorithm is applied to real Hyperion and ROSIS datasets. Compared with other state-of-the-art algorithms based on pansharpening, spectral unmixing, and SC methods, the proposed method is shown to significantly increase the spatial resolution while preserving the spectral content of the HSI.

Index Terms: fusion, hyperspectral images (HSI), multispectral images (MSI), sparse coding (SC), spectral unmixing.

I. INTRODUCTION

Remote sensing images have been widely used in practical applications such as earth surface monitoring, agriculture, forest monitoring, environmental studies, and military applications [1]. The main types of remote sensing images are panchromatic (PAN), multispectral (MSI), and hyperspectral images (HSI). PAN images have a high spatial resolution and well-defined spatial structures, but they are limited to a single gray-scale band. MSI have lower spatial resolution than PAN images and contain a limited number of spectral bands.
HSI usually have lower spatial resolution than MSI and PAN images but have a high spectral resolution [2],
International Journal of Remote Sensing, 2011
Geo-spatial Information Science, 2005
Four data fusion methods, principal component transform (PCT), Brovey transform (BT), smoothing filter-based intensity modulation (SFIM), and hue-saturation-intensity (HSI), are used to merge Landsat-7 ETM+ multispectral bands with the ETM+ panchromatic band. Each of them improves the spatial resolution effectively but distorts the original spectral signatures to some extent. The SFIM model can produce optimal fusion data with respect to preservation of spectral integrity. However, it produces the most blurred and noisy image if the co-registration between the multispectral and Pan images is not accurate enough. The spectral integrity for all methods is preserved better if the original multispectral images are within the spectral range of the ETM+ Pan image. KEYWORDS: data fusion; ETM+; multispectral; PC transform; SFIM. CLC NUMBER: TP751
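SFIM has a particularly compact formulation: each MS band is modulated by the ratio of Pan to a smoothed Pan, so only the high-frequency spatial modulation is injected. A sketch, with a box filter as an assumed smoothing kernel (the sensitivity to co-registration noted above stems from this per-pixel ratio):

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box filter used here as the SFIM smoothing kernel
    (an assumed choice; the kernel size should match the resolution
    ratio between the MS and Pan images)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='same'), 0, out)

def sfim_fuse(ms, pan, k=5, eps=1e-12):
    """SFIM sketch: fused = MS * Pan / smooth(Pan). The smoothed Pan
    approximates the radiance seen at MS resolution, so the ratio
    cancels low frequencies and injects only spatial detail."""
    pan = pan.astype(float)
    return ms.astype(float) * (pan / (box_blur(pan, k) + eps))[:, :, None]
```

Because the injected signal is a ratio of co-located Pan samples, even a small mis-registration shifts the numerator against the denominator, which is why SFIM blurs when alignment is poor.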
The amount and variety of remote sensing imagery of varying spatial resolution is continuously increasing, and techniques for merging images of different spatial and spectral resolution have become widely accepted in practice. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher resolution panchromatic image taken of the same geographic area. This study examines fused images and their ability to preserve the spectral and spatial integrity of the original image. The mathematical formulation of ten data fusion techniques is worked out in this paper. Included are colour transformations, wavelet techniques, gradient- and Laplacian-based techniques, contrast and morphological techniques, feature selection, and simple averaging procedures. Most of these techniques employ hierarchical image decomposition for fusion. IRS-1C and ASTER images are used for the experimental investigations. The panchromatic IRS-1C image has around 5 m pixel size; the multispectral ASTER images are at a 15 m resolution level. For the fusion experiments, the three nadir-looking ASTER bands in the visible and near infrared are chosen. The concept for evaluating the fusion methods is based on the idea of using a reduced resolution of the IRS-1C image data at 15 m resolution and of the ASTER images at 45 m resolution. This maintains the resolution ratio between IRS and ASTER and allows comparing the image fusion result at the 15 m resolution level with the original ASTER images. This statistical comparison reveals differences between all considered fusion concepts.
International journal of engineering research and technology, 2021
Hyperspectral images (HSI) offer high spectral resolution but often suffer from low spatial resolution imposed by sensor limitations. Image fusion is a convenient and practical way to enhance the spatial resolution of HSI: it combines an HSI with a multispectral image (MSI) of the same scene acquired at higher spatial resolution. Over the years, various HSI-MSI fusion algorithms have been proposed to obtain high spatial resolution HSI, but the recently proposed fusion methods have not yet been surveyed comprehensively. They can be divided into four categories: pan-sharpening-based methods, matrix factorization, tensor representation, and methods based on deep convolutional neural networks.
IEEE Transactions on Geoscience and Remote Sensing, 2000
Our framework is the synthesis of multispectral (MS) images at a higher spatial resolution, which should be as close as possible to those that would have been acquired by the corresponding sensors if they had this high resolution. This synthesis is performed with the help of an image with high spatial but low spectral resolution: the panchromatic (Pan) image. The fusion of the Pan and MS images is classically referred to as pan-sharpening. A fused product reaches good quality only if the characteristics of, and differences between, the input images are taken into account. Dissimilarities between these two data sets originate from two causes: different acquisition times and different spectral bands. Remote sensing physics should be carefully considered while designing the fusion process. Because of the complexity of the physics and the large number of unknowns, authors are led to make assumptions to drive their developments. The weaknesses and strengths of each reported method are raised and confronted with these physical constraints. The conclusion of this critical survey of the literature is that the choice of assumptions in developing a method is crucial, at the risk of drastically weakening fusion performance. It is also shown that the Amélioration de la Résolution Spatiale par Injection de Structures (ARSIS) concept prevents the introduction of spectral distortion into fused products and offers a reliable framework for further developments.
Analytical Chemistry, 2000
2003
This work presents a general and formal solution to the problem of fusion of multispectral data with high-resolution panchromatic images. The method relies on the generalised Laplacian pyramid, which is an oversampled structure obtained by subtracting from an image its low-pass version, and selectively performs spatial-frequency spectrum substitution from one image to another. The novelty of the present work is that a decision based on thresholding the local CC is utilized to check the physical congruence of fusion, while the ratio of local RMSs between the two images provides a space-varying gain factor by which the injected high-pass contribution is equalized. Since the pyramid decomposition is not critically subsampled, possible impairments in the fused images, due to missing cancellation of aliasing terms, are avoided. Quantitative results are presented and discussed on simulated SPOT 5 data of an urban area obtained from the MIVIS airborne imaging spectrometer.
2006 9th International Conference on Information Fusion, 2006
International Journal of Remote Sensing, 2007
Index Terms: image fusion, data fusion, remote sensing, image processing, signal processing, visual sensor, DCT.