Many algorithms and software tools have been developed for fusing panchromatic and multispectral datasets in remote sensing. A number of methods have also been proposed and developed for the comparative evaluation of fusion results. To date, however, no papers have been published that analyze the effectiveness and quality of these evaluation techniques. In our study, methods that evaluate fusion quality are tested on different images and test sites. This analysis shows that in most cases the tested methods perform well, but are sometimes inconsistent with the results of visual analysis.
2012
Image fusion is a formal framework for combining and utilizing data originating from different sources. It aims at producing high-resolution multispectral images from a high-resolution panchromatic (PAN) image and a low-resolution multispectral (MS) image. The fused image should contain more interpretable information than can be gained from either original image alone. Ideally, the fused image should not distort the spectral characteristics of the multispectral data and should retain the basic colour content of the original data. Many data fusion techniques can be used, including Principal Component Analysis (PCA), Brovey Transform (BT), Multiplicative Transform (MT) and Discrete Wavelet Transform (DWT). One of the major problems associated with a data fusion technique is how to assess the quality of the fused (spatially enhanced) MS image. This paper presents a comprehensive analysis and evaluation of the most commonly used data fusion techniques. The performance of...
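Of the techniques listed above, the Brovey Transform is the simplest to state: each MS band is scaled by the ratio of the PAN image to an intensity image derived from the MS bands. Below is a minimal numpy sketch; the equal-weight intensity estimate, the array shapes and the stand-in data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-6):
    """Brovey transform: scale each (already upsampled) MS band by the
    ratio of the PAN image to the mean of the MS bands."""
    ms = ms.astype(float)
    pan = pan.astype(float)
    intensity = ms.mean(axis=0)          # equal-weight intensity estimate
    ratio = pan / (intensity + eps)      # spatial detail as a multiplicative gain
    return ms * ratio[np.newaxis, :, :]

# Hypothetical usage with stand-in data (3 bands, 256x256 pixels):
ms = np.random.rand(3, 256, 256)
pan = np.random.rand(256, 256)
fused = brovey_fuse(ms, pan)
```

Because the scaling is multiplicative, Brovey tends to sharpen well but alter radiometry, which is exactly the kind of spectral distortion the quality assessment methods discussed here try to quantify.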
The amount and variety of remote sensing imagery of varying spatial resolution is continuously increasing, and techniques for merging images of different spatial and spectral resolution have become widely accepted in practice. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher-resolution panchromatic image taken of the same geographic area. This study examines fused images and their ability to preserve the spectral and spatial integrity of the original image. The mathematical formulation of ten data fusion techniques is worked out in this paper. Included are colour transformations, wavelet techniques, gradient- and Laplacian-based techniques, contrast and morphological techniques, feature selection and simple averaging procedures. Most of these techniques employ hierarchical image decomposition for fusion. IRS-1C and ASTER images are used for the experimental investigations. The panchromatic IRS-1C image has a pixel size of around 5 m; the multispectral ASTER images are at a 15 m resolution level. For the fusion experiments, the three nadir-looking ASTER bands in the visible and near infrared are chosen. The concept for evaluating the fusion methods is based on the idea of using the IRS-1C image data at a reduced resolution of 15 m and the ASTER images at 45 m. This maintains the resolution ratio between IRS and ASTER and allows the image fusion result at the 15 m resolution level to be compared with the original ASTER images. This statistical comparison reveals differences between all considered fusion concepts.
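The evaluation concept described here (fuse at reduced resolution, then compare against the original MS image) can be sketched in a few lines of numpy. Block averaging stands in for whatever degradation filter the paper actually used, and the array sizes are invented for illustration.

```python
import numpy as np

def degrade(img, factor):
    """Reduce resolution by block averaging; a stand-in for the actual
    degradation filter used in the paper."""
    r, c = img.shape
    img = img[:r - r % factor, :c - c % factor].astype(float)
    return img.reshape(r // factor, factor, c // factor, factor).mean(axis=(1, 3))

# Degrade PAN 5 m -> 15 m and MS 15 m -> 45 m (factor 3 keeps the
# IRS/ASTER resolution ratio), fuse at the reduced scale, then compare
# the fused product with the original 15 m MS band it should reproduce.
pan_5m = np.random.rand(900, 900)     # stand-in data
ms_15m = np.random.rand(300, 300)
pan_15m = degrade(pan_5m, 3)
ms_45m = degrade(ms_15m, 3)
# fused_15m = some_fusion(ms_45m, pan_15m)   # any method under test
# rmse = np.sqrt(np.mean((fused_15m - ms_15m) ** 2))
```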
Remote sensing image fusion is an effective way to use the large volume of data from multisensor images. Most earth observation satellites, such as SPOT, Landsat 7, IKONOS and QuickBird, provide panchromatic (Pan) images at a higher spatial resolution and multispectral (MS) images at a lower spatial resolution, while many remote sensing applications require both high spatial and high spectral resolution, especially GIS-based applications. An effective image fusion technique can produce such remotely sensed images. Image fusion is the combination of two or more different images to form a new image by using a certain algorithm, in order to obtain more and better information about an object or a study area than can be derived from a single image alone. Image fusion is performed at three different processing levels, namely pixel level, feature level and decision level, according to the stage at which the fusion takes place. There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution Pan image and low-resolution multispectral images. This paper explores the major remote sensing data fusion techniques at pixel level and reviews the concept, principles, limitations and advantages of each technique. The paper focuses on traditional techniques such as intensity-hue-saturation (IHS), Brovey, principal component analysis (PCA) and wavelet methods.
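As a concrete instance of the pixel-level techniques this review covers, the generalized (additive) IHS substitution can be written in one step: inject the difference between the PAN image and an MS intensity into every band. The sketch below is a simplified variant for illustration, not the paper's own formulation; histogram matching of PAN to the intensity is assumed to have been done beforehand.

```python
import numpy as np

def gihs_fuse(ms, pan):
    """Generalized (additive) IHS fusion: add the difference between the
    PAN image and the MS intensity to every band.

    ms  : (bands, rows, cols) array, upsampled to the PAN grid
    pan : (rows, cols) array, assumed histogram-matched to the intensity
    """
    ms = ms.astype(float)
    intensity = ms.mean(axis=0)
    return ms + (pan - intensity)[np.newaxis, :, :]
```

Unlike the classical three-band IHS colour-space substitution, this additive form extends naturally to any number of bands, which is one reason it appears so often as a baseline.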
International Journal of Engineering Sciences & Research Technology, 2014
Image fusion techniques have attracted interest within the remote sensing community. The reason for this is that in most cases the new generation of remote sensors with very high spatial resolution acquires image datasets in two separate modes: the highest spatial resolution is obtained for panchromatic images (PAN), whereas multispectral information (MS) is associated with lower spatial resolution. In the literature, the term 'fusion' appears alongside several near-synonyms, such as merging, combination, synergy and integration, that express more or less the same concept. Image fusion techniques can be classified into three categories depending on the stage at which fusion takes place, namely the pixel, feature and decision levels of representation. This paper describes the concept of image fusion and its relevant methods.
Transferring the spatial details of a high-resolution image into a low-resolution one is called image fusion, and several different fusion methods have been introduced. Due to the nature of the fusion process, these methods may damage the spectral quality of the low-resolution multispectral image to a certain extent. In the literature, there are various metrics used to evaluate the quality of fused images. Depending on their mathematical algorithms, these quality metrics may give misleading results regarding the spectral quality of the fused images. If the fusion process is successful, the classification result of the fused image should not be worse than the result acquired from the raw multispectral image. In this study, Worldview-2, Landsat ETM+ and Ikonos multispectral images are fused with their own panchromatic bands, and another Ikonos image is fused with a Quickbird pan-sharpened image, using IHS, CN, HPF, PCA, Multiplicative, Ehlers, Brovey, Wavelet, Gram-Schmidt and Criteria Based fusion...
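The paper's validation criterion (classification of the fused image should be no worse than classification of the raw MS image) can be sketched as below. The classifier choice, the split and the array layout are all illustrative assumptions; the abstract does not say which classifier the authors used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def classification_accuracy(image, labels):
    """Per-pixel classification accuracy; image is (bands, rows, cols),
    labels is (rows, cols) of integer class ids."""
    X = image.reshape(image.shape[0], -1).T
    y = labels.ravel()
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, random_state=0, stratify=y)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# The paper's criterion: accuracy on the fused image should not fall
# below accuracy on the raw MS image, e.g.
# classification_accuracy(fused, labels) >= classification_accuracy(ms, labels)
```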
Remote sensing delivers multi-modal and multi-temporal data from the Earth's surface. In order to cope with these multi-dimensional data sources and to make the most of them, image fusion is a valuable tool. It has developed over the past few decades into a usable image processing technique for extracting information of higher quality and reliability. As more sensors and advanced image fusion techniques have become available, researchers have conducted a vast number of successful studies using image fusion. However, the definition of an appropriate workflow prior to processing the imagery requires knowledge in all related fields, i.e. remote sensing, image fusion and the desired image exploitation processing. From the results, it can be seen that the choice of the appropriate technique, as well as the fine tuning of the individual parameters of this technique, is crucial. There is still a lack of strategic guidelines due to the complexity and variability of data selection, processing techniques and applications. This paper describes the results of a project that forms part of a larger initiative to streamline data selection, application requirements and the choice of a suitable image fusion technique. It aims at collecting successful image fusion cases that are relevant to other users and other areas of interest around the world. From these cases, common guidelines which are valuable contributions to further applications and developments have been derived. The availability of these guidelines will help to identify bottlenecks, further develop image fusion techniques, make the best use of existing multimodal images and provide new insights into the Earth's processes. The outcome is a remote sensing image fusion atlas (book) in which successful image fusion cases are displayed and described, embedded in common findings and generally valid statements in the field of image fusion.
InTech Education and …, 2011
There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic (PAN) image and low-resolution multispectral (MS) remotely sensed images. This paper attempts to undertake the study of image...
Image Fusion and Its Applications, 2011
In remote sensing applications, lower spatial resolution multispectral images are fused with higher spatial resolution panchromatic ones. The objective of this fusion process is to enhance the spatial resolution of the multispectral images to make important features more apparent for human or machine perception. This enhancement is performed by injecting the high-frequency component of the panchromatic image into the lower-resolution images without deteriorating the spectral component in the fused product. In this work, we propose a novel pixel-based image fusion technique which exploits the statistical properties of the input images to compose the output image. Criteria for an optimal image fusion are proposed. The fused image is essentially constructed by using the statistical properties of the panchromatic and multispectral images within a window to determine the weighting factors of the input images. This paper describes the principles of the proposed approach, assesses its properties and compares it with other popular fusion techniques. The study is carried out using Ikonos, QuickBird and SPOT images over areas with both urban and rural features. Analytical derivation, numerical analysis and graphic results are presented to support our discussions.
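The abstract does not give the exact estimator, so the sketch below shows only one plausible reading of window-based statistical fusion: a local regression gain computed from the per-window covariance between the MS band and the PAN image. The function name, window size and covariance-based weighting are all assumptions for illustration, not the paper's method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_fuse(ms_band, pan, win=7, eps=1e-6):
    """One plausible reading of window-based statistical fusion: scale the
    injected PAN detail by a local regression gain estimated per window.
    (The paper's actual estimator may differ.)"""
    ms_band = ms_band.astype(float)
    pan = pan.astype(float)
    mu_ms = uniform_filter(ms_band, win)
    mu_pan = uniform_filter(pan, win)
    var_pan = uniform_filter(pan * pan, win) - mu_pan ** 2
    cov = uniform_filter(ms_band * pan, win) - mu_ms * mu_pan
    gain = cov / (var_pan + eps)              # local weighting factor
    return ms_band + gain * (pan - mu_pan)    # inject weighted detail
```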
2012
In the literature, several methods are available for combining a low-spatial-resolution multispectral image with a high-spatial- but low-spectral-resolution panchromatic image to obtain a high-resolution multispectral image. One of the most common problems encountered in these methods is the spectral distortion introduced during the merging process. At the same time, the spectral quality of the image is the most important factor affecting the accuracy of the results in many applications such as object recognition, object extraction and image analysis. In this study, the most commonly used methods, including GIHS, GIHSF, PCA and Wavelet, are analyzed using image quality metrics such as SSIM, ERGAS and SAM. According to the obtained results, Wavelet is the best method for obtaining a fused image with the least spectral distortion. The image quality of the GIHS, GIHSF and PCA methods is close to each other, while the spatial quality of the image fused with the wavelet method is lower than that of the others.
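Two of the metrics named here have compact closed forms and are easy to sketch. ERGAS aggregates per-band relative RMSE and scales it by the resolution ratio; SAM measures the mean angle between per-pixel spectral vectors. The numpy versions below follow the commonly used definitions; the exact constants and conventions in the paper may differ.

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS (relative dimensionless global error). `ratio` is the
    PAN-to-MS pixel size ratio h/l, e.g. 0.25 for 4x sharpening."""
    terms = []
    for f, r in zip(fused.astype(float), reference.astype(float)):
        rmse = np.sqrt(np.mean((f - r) ** 2))
        terms.append((rmse / r.mean()) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))

def sam(fused, reference, eps=1e-12):
    """Mean spectral angle in degrees between per-pixel spectra."""
    f = fused.reshape(fused.shape[0], -1).astype(float)
    r = reference.reshape(reference.shape[0], -1).astype(float)
    cos = (f * r).sum(axis=0) / (
        np.linalg.norm(f, axis=0) * np.linalg.norm(r, axis=0) + eps)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)).mean())
```

Lower values are better for both: ERGAS near zero means small radiometric error, and SAM near zero means the fused spectra point in the same direction as the reference spectra.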
Image fusion in remote sensing has emerged as a sought-after protocol because it has proven beneficial in many areas, especially in studies of agriculture, the environment and related fields. Simply put, image fusion involves gathering all pivotal data from many images and then merging it into fewer images, ideally into a single image. This one fused image packs all the pertinent information and is more accurate than any image extracted from a single source. Additional advantages are that it lessens the amount of data and creates images that are appropriate for, and can be understood by, both humans and machines. This paper reviews the three image fusion processing levels: feature level, decision level and pixel level. It also examines image fusion methods that fall into four classes, namely multiresolution analysis (MRA), component substitution (CS), model-based solutions and hybrid approaches, and shows how each class has distinct advantages as well as drawbacks.
Photogrammetric Engineering & Remote Sensing, 2008
This paper introduces a novel approach for evaluating the quality of pansharpened multispectral (MS) imagery without resorting to reference originals. Hence, evaluations are feasible at the highest spatial resolution of the panchromatic (PAN) sensor. Wang and Bovik's image quality index (QI) provides a statistical similarity measurement between two monochrome images. The QI values between any pair of MS bands are calculated before and after fusion and used to define a measurement of spectral distortion. Analogously, QI values between each MS band and the PAN image are calculated before and after fusion to yield a measurement of spatial distortion. The rationale is that such QI values should be unchanged after fusion, i.e., when the spectral information is translated from the coarse scale of the MS data to the fine scale of the PAN image. Experimental results, carried out on very high-resolution Ikonos data and simulated Pléiades data, demonstrate that the results provided by the proposed approach are consistent and in trend with analysis performed on spatially degraded data. However, the proposed method requires no reference originals and is therefore usable in all practical cases.
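The building block of this approach, Wang and Bovik's quality index between two monochrome images x and y, is Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2)). A global numpy version is sketched below; note that the paper evaluates the index over sliding blocks rather than whole images, a windowing step omitted here for brevity.

```python
import numpy as np

def q_index(x, y):
    """Wang-Bovik universal image quality index, computed globally here;
    the paper evaluates it over sliding blocks."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

Spectral distortion is then read off from how much inter-band Q values change after fusion, and spatial distortion from how much band-vs-PAN Q values change.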
2011
Image fusion is an important algorithm in many remote sensing applications such as visualization, object boundary identification, object segmentation and object analysis. The most widely used IHS method preserves the spatial information but distorts the spectral information during the fusion process; it is also limited to three bands. In this paper, the difference of each MS band with respect to its mean image is used to reconstruct the fused image; in this reconstruction, the mean of the MS band is replaced by the mean of the PAN image. In this way, both spatial and spectral information are taken into account in the fusion process. We have used an actual four-band IKONOS dataset for our experiment. It has been observed from the simulation results that the proposed algorithm preserves both spatial and spectral information better than the standard algorithms compared against, and it also significantly improves the universal quality index, which measures the visual quality of the fused image.
Index Terms: image fusion, data fusion, remote sensing, image processing, signal processing, visual sensor, DCT.
Remote Sensing for Environmental Monitoring, GIS Applications, and Geology VIII, 2008
Generally, image fusion methods are classified into three levels: pixel level (iconic), feature level (symbolic) and knowledge or decision level. In this paper we focus on iconic techniques for image fusion. There exist a number of established fusion techniques that can be used to merge high spatial resolution panchromatic and lower spatial resolution multispectral images that are simultaneously recorded by one sensor. This is done to create high-resolution multispectral image datasets (pansharpening). In most cases, these techniques provide very good results, i.e. they retain the high spatial resolution of the panchromatic image and the spectral information from the multispectral image. When applied to multitemporal and/or multisensoral image data, however, these techniques still create spatially enhanced datasets but usually at the expense of spectral consistency. In this study, a series of nine multitemporal multispectral remote sensing images (seven SPOT scenes and one FORMOSAT scene) is fused with one panchromatic Ikonos image. A number of techniques are employed to analyze the quality of the fusion process. The images are visually and quantitatively evaluated for spectral characteristics preservation and for spatial resolution improvement. Overall, the Ehlers fusion, which was developed to preserve spectral characteristics in multi-date and multi-sensor fusion, showed the best results. Not only could it be proven that the Ehlers fusion is superior to all other tested algorithms, it was also the only one that guaranteed excellent color preservation for all dates and sensors.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2013
It is of great value to fuse a high-resolution panchromatic image with low-resolution multispectral images for object recognition. In this paper, two frames of remotely sensed imagery, from ZY03 and SPOT05, are selected as the source data. Four fusion methods, namely Brovey, PCA, Pansharp and SFIM, are used to fuse the multispectral bands with the panchromatic band. Three quantitative indicators are calculated and analyzed: gradient, correlation coefficient and deviation. According to a comprehensive evaluation and comparison of the images fused by the four methods, the SFIM transformation gives the best result.
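The three indicators used here are simple to compute. The sketch below uses common definitions (average gradient as mean gradient magnitude, Pearson correlation against a reference band, and mean absolute deviation); the paper's exact formulas may differ slightly.

```python
import numpy as np

def avg_gradient(img):
    """Average gradient: a sharpness / spatial-detail indicator."""
    gy, gx = np.gradient(img.astype(float))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def correlation(fused_band, reference_band):
    """Pearson correlation coefficient against a reference band."""
    return np.corrcoef(fused_band.ravel(), reference_band.ravel())[0, 1]

def deviation(fused_band, reference_band):
    """Mean absolute deviation of the fused band from the original."""
    return np.mean(np.abs(fused_band.astype(float) -
                          reference_band.astype(float)))
```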
Image fusion is the combination of two or more different images to form a new image by using a certain algorithm. The aim of image fusion is to integrate complementary data in order to obtain more and better information about an object or a study area than can be derived from single sensor data alone. Image fusion can be performed at three different processing levels, namely pixel level, feature level and decision level, according to the stage at which the fusion takes place. This paper explores the major remote sensing data fusion techniques at the feature and decision levels as found in the literature. It compares and analyses the process model and characteristics of each technique, including advantages, limitations and applicability, and also introduces some practical applications. It concludes with a summary and recommendations for the selection of suitable methods.
International Journal of …, 2012
International Journal of Applied Earth Observation and Geoinformation, 2012
Image fusion is a useful tool for integrating a high-resolution panchromatic image (PI) with a low-resolution multispectral image (MIs) to produce a high-resolution multispectral image for better understanding of the observed earth surface. Various methods proposed for pan-sharpening satellite images are examined from the viewpoint of the accuracy with which the color information and spatial context of the original image are reproduced in the fused product. In this study, the Gram-Schmidt (GS), Ehler, modified intensity-hue-saturation (M-IHS), high-pass filter (HPF) and wavelet-principal component analysis (W-PCA) methods are compared. The quality assessment of the products of these different methods is implemented by means of noise-based metrics: in order to test the robustness of the image quality, Poisson noise, motion blur or Gaussian blur is intentionally added to the fused image, and the signal-to-noise ratio and related statistical parameters are evaluated and compared among the fusion methods. To assess the resulting classification accuracy, we apply a support vector machine (SVM) based on a radial basis function kernel. Testing the five methods with WorldView2 data, it is found that the Ehler method shows better results for spatial detail and color reproduction than GS, M-IHS, HPF and W-PCA. For QuickBird data, all fusion methods reproduce both color and spatial information close to the original image. Concerning robustness against noise, the Ehler method shows good performance, whereas the W-PCA approach occasionally leads to similar or slightly better results. Comparing the performance of the various fusion methods, the Ehler method yields the best accuracy, followed by W-PCA; the producer's and user's accuracies of the Ehler method are 89.94% and 90.34%, respectively, followed by 88.14% and 88.26% for the W-PCA method.
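A minimal sketch of the RBF-kernel SVM classification step, using scikit-learn; the feature layout, hyperparameters and stand-in data below are assumptions, since the abstract does not specify the training setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: n_samples x n_bands pixel spectra and class labels
# (8 bands loosely mirroring WorldView-2; purely illustrative).
pixels = np.random.rand(1000, 8)
labels = np.random.randint(0, 4, size=1000)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(pixels, labels)
predictions = clf.predict(pixels)
```

Producer's and user's accuracies, as reported in the paper, would then be derived per class from the confusion matrix of such predictions against reference labels.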