Transferring the spatial detail of a high-resolution image into a low-resolution one is called image fusion. Several different fusion methods have been introduced. Due to the nature of the fusion process, these methods may damage the spectral quality of the low-resolution multispectral image to a certain extent. In the literature, there are several metrics used to evaluate the quality of fused images. Depending on their mathematical algorithms, these quality metrics may yield misleading results about the spectral quality of the fused images. If the fusion process is successful, the classification result of the fused image should be no worse than the result obtained from the raw multispectral image. In this study, Worldview-2, Landsat ETM+ and Ikonos multispectral images are fused with their own panchromatic bands, and another Ikonos image is fused with a Quickbird pan-sharpened image, using the IHS, CN, HPF, PCA, Multiplicative, Ehlers, Brovey, Wavelet, Gram-Schmidt and Criteria Based fusion...
2012
In the literature, several methods are available for combining a low-spatial-resolution multispectral image with a low-spectral-resolution panchromatic image to obtain a high-resolution multispectral image. One of the most common problems encountered with these methods is the spectral distortion introduced during the merging process. Spectral quality is the most important factor affecting the accuracy of results in many applications, such as object recognition, object extraction and image analysis. In this study, the most commonly used methods, including GIHS, GIHSF, PCA and Wavelet, are analyzed using image quality metrics such as SSIM, ERGAS and SAM. According to the obtained results, Wavelet is the best method for producing a fused image with the least spectral distortion. The image quality of the GIHS, GIHSF and PCA methods is close, but the spatial quality of the image fused with the wavelet method is lower than that of the others.
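The ERGAS and SAM metrics mentioned in this abstract can be computed directly from the fused and reference images. The following is a minimal NumPy sketch, not taken from the cited paper; the array layout (bands first) and the use of a reference image at the fused resolution are assumptions for illustration:

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS (relative dimensionless global error in synthesis).
    fused, reference: (bands, H, W) arrays; ratio = pan_res / ms_res (e.g. 0.25)."""
    bands = reference.shape[0]
    total = 0.0
    for b in range(bands):
        rmse = np.sqrt(np.mean((fused[b] - reference[b]) ** 2))
        total += (rmse / reference[b].mean()) ** 2   # band RMSE relative to band mean
    return 100.0 * ratio * np.sqrt(total / bands)

def sam(fused, reference, eps=1e-12):
    """Mean spectral angle mapper (radians) over all pixels."""
    f = fused.reshape(fused.shape[0], -1)
    r = reference.reshape(reference.shape[0], -1)
    cos = (f * r).sum(axis=0) / (
        np.linalg.norm(f, axis=0) * np.linalg.norm(r, axis=0) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

# A perfect fusion result scores zero on both metrics.
ref = np.arange(1.0, 25.0).reshape(3, 2, 4)
```

Lower is better for both: ERGAS aggregates per-band radiometric error, while SAM measures spectral (color) distortion independently of overall brightness.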
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2013
It is of great value to fuse a high-resolution panchromatic image with low-resolution multispectral images for object recognition. In this paper, two frames of remotely sensed imagery, from ZY03 and SPOT05, are selected as the source data. Four fusion methods, Brovey, PCA, Pansharp and SFIM, are used to fuse the multispectral bands with the panchromatic band. Three quantitative indicators are calculated and analyzed: gradient, correlation coefficient and deviation. According to a comprehensive evaluation and comparison of the images fused by the four methods, the SFIM transformation gives the best result.
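For reference, the SFIM method that performs best here modulates each multispectral band by the ratio of the PAN image to a low-pass version of itself, which cancels the PAN spectral response while keeping its spatial detail. A minimal NumPy sketch, not taken from the cited paper; the low-pass PAN is passed in as a precomputed input (an assumption for brevity):

```python
import numpy as np

def sfim_fuse(ms, pan, pan_low, eps=1e-12):
    """Smoothing Filter-based Intensity Modulation (SFIM).

    ms      : (bands, H, W) multispectral image, upsampled to the PAN grid
    pan     : (H, W) panchromatic band
    pan_low : (H, W) low-pass filtered PAN (e.g. a mean filter whose kernel
              size matches the resolution ratio)
    """
    # The ratio pan / pan_low carries only PAN spatial detail and
    # multiplicatively modulates each MS band.
    return ms * pan / (pan_low + eps)

# With no high-frequency detail in PAN (pan == pan_low), SFIM is a no-op.
ms = np.arange(1.0, 13.0).reshape(3, 2, 2)
pan = np.full((2, 2), 4.0)
fused = sfim_fuse(ms, pan, pan)
```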
2012
Image fusion is a formal framework for combining and utilizing data originating from different sources. It aims at producing high-resolution multispectral images from a high-resolution panchromatic (PAN) image and a low-resolution multispectral (MS) image. The fused image must contain more interpretable information than can be gained from the original images. Ideally, the fused image should not distort the spectral characteristics of the multispectral data and should retain the basic colour content of the original data. Among the many data fusion techniques that can be used are Principal Component Analysis (PCA), the Brovey Transform (BT), the Multiplicative Transform (MT) and the Discrete Wavelet Transform (DWT). One of the major problems associated with a data fusion technique is how to assess the quality of the fused (spatially enhanced) MS image. This paper presents a comprehensive analysis and evaluation of the most commonly used data fusion techniques. The performance of...
The amount and variety of remote sensing imagery of varying spatial resolution is continuously increasing, and techniques for merging images of different spatial and spectral resolution have become widely accepted in practice. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher-resolution panchromatic image taken of the same geographic area. This study examines fused images and their ability to preserve the spectral and spatial integrity of the original image. The mathematical formulation of ten data fusion techniques is worked out in this paper. Included are colour transformations, wavelet techniques, gradient- and Laplacian-based techniques, contrast and morphological techniques, feature selection and simple averaging procedures. Most of these techniques employ hierarchical image decomposition for fusion. IRS-1C and ASTER images are used for the experimental investigations. The panchromatic IRS-1C image has a pixel size of around 5 m; the multispectral ASTER images are at a 15 m resolution level. For the fusion experiments, the three nadir-looking ASTER bands in the visible and near infrared are chosen. The concept for evaluating the fusion methods is based on the idea of using the IRS-1C image data at a reduced resolution of 15 m and the ASTER images at 45 m resolution. This maintains the resolution ratio between IRS and ASTER and allows comparing the image fusion results at the 15 m resolution level with the original ASTER images. This statistical comparison reveals differences between all the considered fusion concepts.
2011
Image fusion is an important algorithm in many remote sensing applications such as visualization, object boundary identification, object segmentation and object analysis. The widely used IHS method preserves spatial information but distorts spectral information during the fusion process, and it is also limited to three bands. In this paper, the difference of each MS band with respect to its mean image is used to reconstruct the fused image, with the mean of the MS band replaced by the mean of the PAN image. In this way, both spatial and spectral information are taken into account in the fusion process. An actual IKONOS four-band image is used for the experiment. The simulation results show that the proposed algorithm preserves both spatial and spectral information better than the compared standard algorithms, and it also significantly improves the universal quality index, which measures the visual quality of the fused image.
Remote sensing image fusion is an effective way to use the large volume of data from multisensor images. Most earth observation satellites, such as SPOT, Landsat 7, IKONOS and QuickBird, provide panchromatic (Pan) images at a higher spatial resolution and multispectral (MS) images at a lower spatial resolution, while many remote sensing applications, especially GIS-based ones, require both high spatial and high spectral resolution. An effective image fusion technique can produce such remotely sensed images. Image fusion is the combination of two or more different images into a new image, using a certain algorithm, to obtain more and better information about an object or a study area than any single image provides. Fusion is performed at three different processing levels, pixel level, feature level and decision level, according to the stage at which it takes place. There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution Pan image and low-resolution multispectral images. This paper explores the major remote sensing data fusion techniques at the pixel level and reviews the concept, principles, limitations and advantages of each technique, focusing on traditional techniques such as intensity-hue-saturation (IHS), Brovey, principal component analysis (PCA) and wavelet methods.
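Of the traditional pixel-level techniques reviewed here, the Brovey transform is the simplest to state: each MS band is rescaled by the ratio of the PAN band to the MS intensity. A minimal NumPy sketch, not from the cited paper; using the plain mean of the bands as the intensity is a simplifying assumption (operational implementations often use sensor-specific band weights):

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-12):
    """Brovey transform: scale each MS band by PAN / intensity.

    ms  : (bands, H, W) multispectral image, upsampled to the PAN grid
    pan : (H, W) panchromatic band
    """
    intensity = ms.mean(axis=0)        # simple per-pixel intensity
    ratio = pan / (intensity + eps)    # spatial-detail injection ratio
    return ms * ratio                  # broadcasts over the band axis

# Toy example: a 3-band MS patch with band values 2, 4, 6 and PAN = 8.
ms = np.ones((3, 2, 2)) * np.array([2.0, 4.0, 6.0])[:, None, None]
pan = np.full((2, 2), 8.0)
fused = brovey_fuse(ms, pan)
```

Because every band is multiplied by the same ratio, band ratios (and thus hue) are preserved, but overall radiometry follows PAN, which is one source of the spectral distortion discussed in the surveys above.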
In remote sensing, image fusion is a useful tool for fusing high-spatial-resolution panchromatic (PAN) images with lower-spatial-resolution multispectral (MS) images to create a high-spatial-resolution multispectral fused image (F) while preserving the spectral information of the MS image. Many pan-sharpening, or pixel-based, image fusion techniques have been developed to enhance spatial resolution while preserving the spectral properties of the MS data. This paper studies three groups of pixel-based image fusion techniques: arithmetic combinations, frequency-filtering methods and statistical techniques. The first group includes the Brovey Transform (BT), Color Normalized Transformation (CNT) and the Multiplicative Method (MLT); the second includes the High-Pass Filter Additive Method (HPFA); and the third includes Local Mean Matching (LMM) and Regression Variable Substitution (RVS). The paper also concentrates on analytical techniques for evaluating the quality of the fused image (F), in particular the quality of fine details and edges, using edge detection and contrast criteria, and on estimating the homogeneity of different image regions by computing the interdependence of blocks while shifting each block by ten pixels. The evaluation is therefore effective, as it takes both homogeneity and edge quality measurements into consideration.
International Journal of Applied Earth Observation and Geoinformation, 2012
Image fusion is a useful tool for integrating a high-resolution panchromatic image (PAN) with a low-resolution multispectral image (MS) to produce a high-resolution multispectral image for better understanding of the observed earth surface. Various methods proposed for pan-sharpening satellite images are examined from the viewpoint of how accurately the color information and spatial context of the original image are reproduced in the fused product. In this study, the Gram-Schmidt (GS), Ehlers, modified intensity-hue-saturation (M-IHS), high-pass filter (HPF) and wavelet-principal component analysis (W-PCA) methods are compared. The quality of the products of these methods is assessed by means of noise-based metrics: to test robustness, Poisson noise, motion blur or Gaussian blur is intentionally added to the fused image, and the signal-to-noise ratio and related statistical parameters are evaluated and compared among the fusion methods. To assess classification accuracy, a support vector machine (SVM) with a radial basis function kernel is applied. Testing the five methods on WorldView-2 data shows that the Ehlers method reproduces spatial detail and color better than GS, M-IHS, HPF and W-PCA. For QuickBird data, all fusion methods reproduce both color and spatial information close to the original image. Concerning robustness against noise, the Ehlers method performs well, whereas the W-PCA approach occasionally yields similar or slightly better results. Comparing the classification performance of the fusion methods, the Ehlers method yields the best accuracy, followed by W-PCA: the producer's and user's accuracies of the Ehlers method are 89.94% and 90.34%, respectively, followed by 88.14% and 88.26% for W-PCA.
Applied Geomatics, 2014
2006
The main topic of this paper is high-resolution image fusion. The techniques used to merge high-spatial-resolution panchromatic images with high-spectral-resolution multispectral images are described. The most commonly used image fusion methods that work on the principle of component substitution (the intensity-hue-saturation method (IHS), Brovey transform (BT) and multiplicative method (MULTI)) have been applied to Ikonos, QuickBird, Landsat and aerial orthophoto images. Visual comparison, histogram analyses, correlation coefficients and difference images were used to analyze the spectral and spatial quality of the fused images. It was found that preserving spectral characteristics requires a high level of similarity between the panchromatic image and the respective multispectral intensity. In order to achieve this, an analysis of the spectral sensitivity of the multispectral and panchromatic data was performed, and the digital values in individual bands were modified before fusion. It was also determined that spatial resolution is best preserved when the input panchromatic image is unchanged.
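The finding that spectral preservation requires close similarity between the panchromatic image and the multispectral intensity can be illustrated with a generalized IHS (additive component substitution) sketch in which the PAN statistics are matched to the intensity before detail injection. This is an illustrative NumPy sketch, not the authors' implementation; the simple mean/std matching stands in for the band-value modification the abstract describes:

```python
import numpy as np

def gihs_fuse(ms, pan):
    """Generalized (fast) IHS component substitution: add the difference
    between a statistics-matched PAN and the MS intensity to every band,
    so only zero-mean spatial detail is injected."""
    intensity = ms.mean(axis=0)
    # Match PAN mean/std to the intensity; the closer PAN is to the
    # intensity, the smaller the spectral distortion.
    pan_matched = (pan - pan.mean()) * (
        intensity.std() / (pan.std() + 1e-12)) + intensity.mean()
    return ms + (pan_matched - intensity)

# Band means are preserved, since the injected detail term is zero-mean.
rng = np.random.default_rng(0)
ms = rng.uniform(0.0, 1.0, size=(4, 8, 8))
pan = rng.uniform(0.0, 1.0, size=(8, 8))
fused = gihs_fuse(ms, pan)
```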
2011
Various methods proposed for fusing satellite images are examined from the viewpoint of how accurately the color information and spatial context of the original image are reproduced in the fused product. Image fusion is a useful tool for integrating a high-resolution panchromatic image (PAN) with a low-resolution multispectral image (MS) to produce a high-resolution multispectral image and a better understanding of the observed earth surface. In this study, five typical fusion methods, Gram-Schmidt (GS), Ehlers, modified intensity-hue-saturation, high-pass filter and wavelet-principal component analysis (PCA), are compared. The spectral quality of the products of these methods is assessed using image quality metrics, and classification accuracy is assessed by means of a support vector machine with a radial basis function kernel. Our analysis indicates that, as a whole, the Ehlers and wavelet-PCA methods perform well, followed by GS. Examination of the confusion matrices also shows that both Ehlers and wavelet-PCA yield better classification accuracies.
Remote Sensing of Land
Image fusion involves integrating the geometric detail of a high-resolution panchromatic (PAN) image and the spectral information of a low-resolution multispectral (XS) image, which is useful for regional planning and large-scale urban mapping. The present study compares the effectiveness of three image fusion techniques, Principal Component Analysis (PCA), the Wavelet Transform (WT) and Intensity-Hue-Saturation (IHS), for merging the XS information and PAN data of QuickBird satellite imagery. Comparison between the fused images obtained from the three techniques is carried out on the basis of qualitative and quantitative evaluations: visual interpretation, inter-band correlation, correlation coefficient, standard deviation and mean. Results indicate that all three fusion techniques improve spatial resolution as well as spectral detail; however, the IHS technique provides the best spectral fidelity by preserving the XS integrity across all the bands under consideration.
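The correlation coefficient used in quantitative evaluations like this one is typically computed per band between the fused image and the original multispectral image. A small NumPy sketch, illustrative only:

```python
import numpy as np

def band_correlations(fused, original):
    """Pearson correlation coefficient between each fused band and the
    corresponding original MS band; values near 1 indicate good
    spectral preservation."""
    return np.array([np.corrcoef(f.ravel(), o.ravel())[0, 1]
                     for f, o in zip(fused, original)])

# Comparing an image with itself gives perfect per-band correlation.
ms = np.arange(48.0).reshape(3, 4, 4)
cc = band_correlations(ms, ms)
```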
Along with the launch of a number of very high-resolution satellites in the last decade, efforts have been made to increase the spatial resolution of the multispectral bands using the panchromatic information. Quality evaluation of pixel-fusion techniques is a fundamental issue in benchmarking and optimizing different algorithms. In this letter, we present a thorough analysis of the spatial and spectral distortions produced by eight pan-sharpening techniques. The study was conducted using real data from different types of land cover and also a synthetic image with different colors and spatial structures for comparison purposes. Several spectral and spatial quality indexes, together with visual information, were considered in the analysis. Experimental results show that fusion methods cannot incorporate the maximum spatial detail without degrading the spectral information. The Atrous_IHS, Atrous_PCA, IHS and eFIHS algorithms provide the best spatial-spectral tradeoff among the wavelet-based and the algebraic or component substitution methods. Finally, inconsistencies between some quality indicators were detected and analyzed.
Index Terms: image fusion, data fusion, remote sensing, image processing, signal processing, visual sensor, DCT.
In the remote sensing community, different image sensors provide data with different spectral and spatial resolutions. Multispectral imaging sensors collect multispectral data of poor spatial resolution, while panchromatic imaging sensors provide panchromatic data of adequate spatial resolution. Image fusion is the process of combining two or more images into a single image that retains the important features of each. Ideally, the method used to merge data sets with high spatial and high spectral resolution should not distort the spectral characteristics of the high-spectral-resolution data. In many applications, the quality of the fused images is of fundamental importance and is usually assessed by visual analysis, which is subjective to the interpreter. This paper proposes to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Here, the IHS transform is used to combine a high resolution panchromatic image with a low resolution mul...
Remote Sensing for Environmental Monitoring, GIS Applications, and Geology VIII, 2008
Generally, image fusion methods are classified into three levels: pixel level (iconic), feature level (symbolic) and knowledge or decision level. In this paper, we focus on iconic techniques for image fusion. A number of established fusion techniques can be used to merge high-spatial-resolution panchromatic and lower-spatial-resolution multispectral images that are simultaneously recorded by one sensor, in order to create high-resolution multispectral image datasets (pansharpening). In most cases, these techniques provide very good results, i.e. they retain the high spatial resolution of the panchromatic image and the spectral information of the multispectral image. When applied to multitemporal and/or multisensoral image data, however, these techniques still create spatially enhanced datasets, but usually at the expense of spectral consistency. In this study, a series of nine multitemporal multispectral remote sensing images (seven SPOT scenes and one FORMOSAT scene) is fused with one panchromatic Ikonos image. A number of techniques are employed to analyze the quality of the fusion process: the images are evaluated visually and quantitatively for preservation of spectral characteristics and improvement of spatial resolution. Overall, the Ehlers fusion, which was developed to preserve spectral characteristics in multi-date and multi-sensor fusion, showed the best results. It proved not only superior to all the other tested algorithms but also the only one that guarantees excellent color preservation for all dates and sensors.
International Journal of Engineering Sciences & Research Technology, 2014
Image fusion techniques have attracted interest within the remote sensing community because, in most cases, the new generation of remote sensors with very high spatial resolution acquires image datasets in two separate modes: the highest spatial resolution is obtained for panchromatic images (PAN), whereas multispectral information (MS) is associated with lower spatial resolution. In the literature, the term 'fusion' appears alongside several near-synonyms, such as merging, combination, synergy and integration, that express more or less the same concept. Image fusion techniques can be classified into three categories depending on the stage at which fusion takes place: the pixel, feature and decision levels of representation. This paper describes the concept of image fusion and its relevant methods.
Image fusion is a technique for obtaining images with high spatial and spectral resolution from low-spatial-resolution multispectral and high-spatial-resolution panchromatic images. There is often an inverse relationship between the spectral and spatial resolution of an image. No single sensor package has been proposed that meets all application requirements, whereas the combined image from multiple sensors provides more comprehensive information by collecting a wide diversity of sensed wavelengths and spatial resolutions. Owing to the demand for higher classification accuracy and the need for enhanced positioning precision, there is always a need to improve the spectral and spatial resolution of remotely sensed imagery. These requirements can be fulfilled through image processing techniques at significantly lower expense. The goal is to combine image data to form a new image that contains more interpretable information than can be gained from the original data. Ideally, the fused data should not distort the spectral characteristics of the multispectral data and should retain its basic colour content. When the fused images are used for classification, the commonly used merging methods are Principal Component Analysis (PCA), the intensity-hue-saturation (IHS) method, the Brovey transformation, the multiplicative technique (MT), the High-Pass Filter (HPF), Smoothing Filter-based Intensity Modulation (SFIM) and the Wavelet Transform.
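As an illustration of the High-Pass Filter (HPF) approach listed above, the sketch below extracts PAN detail with a simple box-filter high-pass and adds the same detail to every band. This is a minimal NumPy sketch; the 3x3 kernel and edge padding are assumptions for brevity (real HPF implementations size the kernel according to the resolution ratio):

```python
import numpy as np

def box_blur(img, k=3):
    """k x k mean filter with edge padding (a stand-in low-pass filter)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpf_fuse(ms, pan, k=3):
    """HPF additive fusion: extract PAN high frequencies and add the
    same detail layer to every MS band."""
    detail = pan - box_blur(pan, k)   # high-pass component of PAN
    return ms + detail[None, :, :]    # broadcast over the band axis

# A featureless PAN contributes no detail, so the MS image is unchanged.
ms = np.arange(48.0).reshape(3, 4, 4)
flat_pan = np.full((4, 4), 7.0)
fused = hpf_fuse(ms, flat_pan)
```

Because the injected detail is additive and identical across bands, HPF tends to preserve band ratios better than purely multiplicative schemes, at the cost of some sensitivity to the filter size.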
Many algorithms and software tools have been developed for fusing panchromatic and multispectral datasets in remote sensing, and a number of methods have been proposed for the comparative evaluation of fusion results. To date, however, no papers have been published that analyze the effectiveness and quality of the evaluation techniques themselves. In our study, methods that evaluate fusion quality are tested on different images and test sites. This analysis shows that in most cases the tested methods perform well but are sometimes inconsistent with visual analysis results.
2007 National Radio Science Conference, 2007
To better identify objects in remote sensing images, multispectral images with high spectral but low spatial resolution need to be fused with panchromatic images with high spatial but low spectral resolution. Many fusion techniques have been discussed in recent years for obtaining images with both high spectral and high spatial resolution. In this paper, an image fusion technique based on integrating the Intensity-Hue-Saturation (IHS) transform and the Discrete Wavelet Frame Transform (DWFT) is proposed for boosting the quality of remote sensing images. A panchromatic and a multispectral image from the Landsat-7 (ETM+) satellite have been fused using this new approach. Experimental results show that the proposed technique improves the spectral and spatial quality of the fused images; moreover, when applied to noisy and de-noised remote sensing images, it preserves the quality of the fused images. Comparative analyses between different fusion techniques are also presented and show that the proposed technique outperforms the others.