2011
Image fusion is an important algorithm in many remote sensing applications such as visualization, object boundary identification, object segmentation and object analysis. The widely used IHS method preserves spatial information but distorts spectral information during the fusion process, and it is also limited to three bands. In this paper, the difference of each MS band with respect to its mean image is used to reconstruct the fused image, with the mean of the MS band replaced by the mean of the PAN image. In this way both spatial and spectral information are taken into account in the fusion process. We use an actual four-band IKONOS dataset for our experiments. The simulation results show that the proposed algorithm preserves both spatial and spectral information better than the standard algorithms it is compared with, and it also significantly improves the universal image quality index, which measures the visual quality of the fused image.
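The mean-replacement idea described in this abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the function name and the toy arrays are ours, and a real pipeline would first resample the MS band to the PAN grid.

```python
import numpy as np

def mean_substitution_fuse(ms_band, pan):
    """Sketch: express the MS band as its difference from its own mean,
    then rebuild that difference around the mean of the PAN image."""
    ms_band = ms_band.astype(float)
    pan = pan.astype(float)
    # MS band expressed as a difference with respect to its mean image ...
    diff = ms_band - ms_band.mean()
    # ... reconstructed around the PAN mean instead of the MS mean.
    return diff + pan.mean()

# Toy 2x2 example (assumes MS already resampled to the PAN grid)
ms = np.array([[10.0, 20.0], [30.0, 40.0]])       # mean = 25
pan = np.array([[100.0, 110.0], [120.0, 130.0]])  # mean = 115
fused = mean_substitution_fuse(ms, pan)
```

By construction the fused band keeps the spatial variation of the MS band but inherits the PAN image's mean level.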
The amount and variety of remote sensing imagery of varying spatial resolution is continuously increasing, and techniques for merging images of different spatial and spectral resolution have become widely accepted in practice. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher-resolution panchromatic image taken of the same geographic area. This study examines fused images and their ability to preserve the spectral and spatial integrity of the original image. The mathematical formulation of ten data fusion techniques is worked out in this paper. Included are colour transformations, wavelet techniques, gradient- and Laplacian-based techniques, contrast and morphological techniques, feature selection and simple averaging procedures. Most of these techniques employ hierarchical image decomposition for fusion. IRS-1C and ASTER images are used for the experimental investigations. The panchromatic IRS-1C image has a pixel size of around 5 m; the multispectral ASTER images are at a 15 m resolution level. For the fusion experiments, the three nadir-looking ASTER bands in the visible and near infrared are chosen. The evaluation concept is based on the idea of using the IRS-1C image data at a reduced resolution of 15 m and the ASTER images at 45 m. This maintains the resolution ratio between IRS and ASTER and allows the fusion result at the 15 m level to be compared with the original ASTER images. This statistical comparison reveals differences between all considered fusion concepts.
2012
In the literature, several methods are available for combining low-spatial-resolution multispectral and low-spectral-resolution panchromatic images to obtain a high-resolution multispectral image. One of the most common problems encountered in these methods is the spectral distortion introduced during the merging process. The spectral quality of the image is also the most important factor affecting the accuracy of results in applications such as object recognition, object extraction and image analysis. In this study, the most commonly used methods, including GIHS, GIHSF, PCA and Wavelet, are analyzed using image quality metrics such as SSIM, ERGAS and SAM. According to the obtained results, Wavelet is the best method for producing a fused image with the least spectral distortion. The image quality of the GIHS, GIHSF and PCA methods is close to each other, but the spatial quality of the image fused with the wavelet method is lower than that of the others.
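Of the metrics named above, ERGAS is the most fusion-specific. A minimal numpy sketch of its standard definition follows; the function name, array layout and toy data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ergas(reference, fused, ratio):
    """ERGAS (erreur relative globale adimensionnelle de synthese).
    reference, fused: arrays of shape (bands, H, W).
    ratio: pixel-size ratio PAN/MS (e.g. 1/4 for 1 m PAN vs 4 m MS).
    Lower is better; 0 means a perfect match."""
    reference = reference.astype(float)
    fused = fused.astype(float)
    terms = []
    for ref_b, fus_b in zip(reference, fused):
        rmse2 = np.mean((ref_b - fus_b) ** 2)       # per-band squared RMSE
        terms.append(rmse2 / ref_b.mean() ** 2)     # normalized by band mean
    return 100.0 * ratio * np.sqrt(np.mean(terms))

# Sanity check: comparing an image with itself gives ERGAS = 0
ref = np.random.default_rng(0).uniform(1, 255, (4, 8, 8))
score = ergas(ref, ref, ratio=1 / 4)
```

Because each band's error is normalized by that band's mean, ERGAS is dimensionless and comparable across sensors.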
Remote sensing image fusion is an effective way to use the large volume of data from multisensor images. Most Earth observation satellites, such as SPOT, Landsat 7, IKONOS and QuickBird, provide panchromatic (PAN) images at a higher spatial resolution and multispectral (MS) images at a lower spatial resolution, while many remote sensing applications, especially GIS-based ones, require both high spatial and high spectral resolution. An effective image fusion technique can produce such remotely sensed images. Image fusion is the combination of two or more different images into a new image, using a certain algorithm, to obtain more and better information about an object or a study area. Fusion is performed at three different processing levels, pixel level, feature level and decision level, according to the stage at which it takes place. There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution PAN image and low-resolution multispectral images. This paper explores the major remote sensing data fusion techniques at pixel level and reviews the concept, principles, limitations and advantages of each technique, focusing on traditional techniques such as intensity-hue-saturation (IHS), Brovey, principal component analysis (PCA) and Wavelet.
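Among the traditional pixel-level techniques listed, the Brovey transform has the simplest closed form: each MS band is rescaled by the ratio of the PAN value to the per-pixel sum (intensity) of the MS bands. A minimal numpy sketch, with illustrative names and toy data of our own:

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey transform pan-sharpening sketch.
    ms:  (bands, H, W) multispectral image, resampled to the PAN grid.
    pan: (H, W) panchromatic image.
    eps guards against division by zero in dark pixels."""
    ms = ms.astype(float)
    pan = pan.astype(float)
    intensity = ms.sum(axis=0) + eps
    # Each band keeps its share of the intensity, rescaled to the PAN value.
    return ms * (pan / intensity)

# Toy example: three equal bands summing to 3, PAN value 6
ms = np.ones((3, 2, 2))
pan = np.full((2, 2), 6.0)
fused = brovey(ms, pan)
```

The fused bands sum to the PAN value at every pixel, which is exactly why Brovey sharpens well but can distort spectral ratios when PAN and MS intensities differ systematically.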
2013
Multispectral image fusion seeks to combine information from different images to obtain more relevant information than can be derived from a single one. A wide variety of approaches addressing fusion at pixel level has been developed, but they suffer from several disadvantages: (1) the number of bands that can be merged is limited, (2) color distortion, and (3) the spectral content of small objects is often lost in the fused images. This paper presents a new approach to image fusion based on a weighted merge of the multispectral bands. Each band is modeled by two or three Gaussian distributions, whose mixture parameters (weights, mean vectors and covariance matrices) are estimated by the Expectation-Maximization (EM) algorithm, which maximizes the log-likelihood criterion. The weighting coefficient of each band is derived from its degree of similarity to the other bands, computed by a cost function based on the distance between the parameters of the Gaussian distributions of each band. In ...
International Journal of Advance Engineering and Research Development, 2015
Image fusion is the process of combining relevant information from two or more images into a single image, which is more informative than any of the inputs. The fusion of a multispectral image with a panchromatic image is called pan-sharpening. Pan-sharpening combines a low-resolution color multispectral image with a high-resolution grayscale panchromatic image to create a high-resolution fused color image. "Pan-sharpening" is shorthand for "panchromatic (PAN) sharpening": using a panchromatic (single-band) image to "sharpen" a multispectral image, that is, to increase its spatial resolution. A multispectral image has higher spectral resolution than a panchromatic image, while a panchromatic image often has higher spatial resolution than a multispectral image. The Intensity-Hue-Saturation (IHS) method is a popular pan-sharpening method used for its efficiency and high spatial resolution, but the resulting image suffers from spectral (color) distortion. In this paper, we introduce IHS methods with modifications that improve the spectral quality of the image, covering the IHS, GIHS, AIHS and EAIHS fusion techniques. Fusion quality is assessed with the parameters CC, RMSE, RASE, ERGAS, SAM, UIQI and SC.
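The fast (generalized) IHS variant mentioned above avoids an explicit color-space round trip: it injects the difference between the PAN image and the intensity component into every band. A minimal numpy sketch under our own naming, using the simple band-average intensity rather than any paper-specific weighting:

```python
import numpy as np

def gihs_pansharpen(ms, pan):
    """Fast IHS (GIHS) pan-sharpening sketch.
    ms:  (3, H, W) multispectral image, resampled to the PAN grid.
    pan: (H, W) panchromatic image.
    Adds (PAN - I) to each band, where I is the mean of the bands."""
    ms = ms.astype(float)
    pan = pan.astype(float)
    intensity = ms.mean(axis=0)          # I component of the IHS model
    return ms + (pan - intensity)        # same injection for every band

# Toy example: constant bands 1, 2, 3 (intensity 2) and PAN value 5
ms = np.stack([np.full((2, 2), 1.0),
               np.full((2, 2), 2.0),
               np.full((2, 2), 3.0)])
pan = np.full((2, 2), 5.0)
fused = gihs_pansharpen(ms, pan)
```

A useful property to verify: the intensity of the fused image equals the PAN image exactly, while the band-to-band differences (hue/saturation information) are unchanged, which is the source of both the sharpening and the spectral distortion the abstract discusses.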
In remote sensing, image fusion is a useful tool for fusing high-spatial-resolution panchromatic images (PAN) with lower-spatial-resolution multispectral images (MS) to create a high-spatial-resolution multispectral fused image (F) while preserving the spectral information of the multispectral image. Many pan-sharpening, or pixel-based, image fusion techniques have been developed to enhance the spatial resolution while preserving the spectral properties of the MS image. This paper undertakes a study of image fusion using pixel-based techniques of three types: arithmetic combination, frequency filtering and statistical methods. The first type includes the Brovey Transform (BT), Color Normalized Transformation (CNT) and the Multiplicative Method (MLT); the second includes the High-Pass Filter Additive method (HPFA); and the third includes Local Mean Matching (LMM) and Regression Variable Substitution (RVS). The paper also concentrates on analytical techniques for evaluating the quality of the fused image (F), in particular the quality of fine details and edges, using two criteria: edge detection followed by contrast, and an estimate of homogeneity in different image regions computed from the interdependence of blocks as each block is shifted by ten pixels. The evaluation thus takes both homogeneity and edge quality measurements into consideration.
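Of the three families above, the High-Pass Filter Additive (HPFA) method is the easiest to sketch: low-pass filter the PAN image, and add the residual high frequencies to the MS band. The box-filter choice, function names and toy data below are our own illustrative assumptions:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter with edge padding (the low-pass step)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpfa_fuse(ms_band, pan, k=3):
    """HPFA sketch: inject the high-frequency residue of PAN
    (PAN minus its low-pass version) into the resampled MS band."""
    pan = pan.astype(float)
    high = pan - box_blur(pan, k)
    return ms_band.astype(float) + high

# Toy example: a flat PAN image carries no high frequencies,
# so the MS band passes through unchanged.
ms = np.array([[1.0, 2.0], [3.0, 4.0]])
pan = np.full((2, 2), 7.0)
fused = hpfa_fuse(ms, pan)
```

Because only the high-frequency component of PAN is injected, HPFA tends to preserve the MS spectral statistics better than ratio-based methods like Brovey.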
2014
Most Earth observation satellites are not able to acquire high spatial and spectral resolution data simultaneously because of design or observational constraints. To overcome such limitations, image fusion techniques are used. Image fusion is the process of combining different satellite images on a pixel-by-pixel basis to produce fused images of higher value, where the added value is meant in terms of information extraction capability, reliability and increased accuracy. The objective of this paper is to describe the basics of image fusion and various pixel-level image fusion techniques, and to evaluate and assess the performance of these fusion algorithms. Keywords: Image Fusion, Pixel Level, Multi-sensor, IHS, PCA, Multiplicative, Brovey, DCT, DWT. INTRODUCTION: Image fusion is the process of combining two different images acquired by different sensors or by a single sensor. The output image contains more information than the input images and is more suitable for human visual perception or for machine perception. ...
In remote sensing image processing, the traditional fusion algorithm is based on the Intensity-Hue-Saturation (IHS) transformation. This method does not adequately take into account the texture and spectral information, spatial resolution and statistical information of the images, which leads to spectral distortion in the fused image. Although traditional solutions combine manifold methods, the resulting fusion procedure is rather complicated and not suitable for practical operation. In this paper, an improved IHS transformation fusion algorithm based on a local variance weighting scheme is proposed for remote sensing images. First, the local variance of the SPOT image (SPOT comes from the French "Système Probatoire d'Observation de la Terre", an Earth observation system) is calculated using sliding windows of different sizes. The optimal window size is then selected, and the images are normalized with the local variance of the optimal window. Second, a power exponent is chosen as the mapping function, and the local variance is used to obtain the weight of the I component and to match the SPOT images. We then obtain the I' component from the weight, the I component and the matched SPOT images. Finally, the fused image is obtained by the inverse Intensity-Hue-Saturation transformation of the I', H and S components. The proposed algorithm has been tested and compared with other well-known image fusion methods from the literature. Simulation results indicate that the proposed algorithm obtains a superior fused image according to quantitative fusion evaluation indices.
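The building block of the weighting scheme above is a local variance map computed over a sliding window. A minimal numpy sketch of that step alone (not the full weighting or the inverse IHS step), with our own function name and window handling:

```python
import numpy as np

def local_variance(img, k=3):
    """Local variance of img over a k x k sliding window (edge-padded).
    Uses the identity var = E[x^2] - E[x]^2 accumulated over the window."""
    img = img.astype(float)
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    s = np.zeros(img.shape, dtype=float)   # running sum of values
    s2 = np.zeros(img.shape, dtype=float)  # running sum of squares
    for dy in range(k):
        for dx in range(k):
            win = padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            s += win
            s2 += win ** 2
    mean = s / (k * k)
    return s2 / (k * k) - mean ** 2

# Flat regions have zero local variance; edges and texture do not.
flat = local_variance(np.full((4, 4), 5.0))
edge = local_variance(np.eye(4))
```

In the scheme described in the abstract, such a map (after normalization and a power-exponent mapping) would weight how strongly the SPOT detail is blended into the I component.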
2012
Image fusion is a formal framework for combining and utilizing data originating from different sources. It aims at producing high-resolution multispectral images from a high-resolution panchromatic (PAN) image and a low-resolution multispectral (MS) image. The fused image must contain more interpretable information than can be gained from the original images. Ideally, the fused image should not distort the spectral characteristics of the multispectral data, and it should retain the basic colour content of the original data. There are many data fusion techniques that can be used, including Principal Component Analysis (PCA), the Brovey Transform (BT), the Multiplicative Transform (MT) and the Discrete Wavelet Transform (DWT). One of the major problems associated with a data fusion technique is how to assess the quality of the fused (spatially enhanced) MS image. This paper presents a comprehensive analysis and evaluation of the most commonly used data fusion techniques. The performance of...
IEEE Transactions on Geoscience and Remote Sensing, 2000
Many image fusion techniques have been developed. However, most existing fusion processes produce color distortion in 1-m fused IKONOS images due to the nonsymmetrical spectral responses of IKONOS imagery. Here, we propose a fusion process that minimizes this spectral distortion in IKONOS 1-m color images. The 1-m fused image is produced from a 4-m multispectral (MS) and a 1-m panchromatic (PAN) image, maintaining the relations of spectral responses between PAN and each band of the MS image. To obtain this relation, four spectral weighting parameters are added to the pixel value of each band of the original MS image. Each pixel value is then updated using a steepest descent method to reflect the maximum spectral response in the fused image.
KSCE Journal of Civil Engineering, 2003
In this paper, efforts were made to merge the IKONOS panchromatic image with multispectral images using the wavelet transform, Intensity-Hue-Saturation (IHS), multiplicative and Principal Component Analysis (PCA) methods. Numerical comparisons were made to evaluate the effect of the different fusion methods on the distortion of spectral characteristics. Likewise, the different image fusion results were analyzed with respect to land surface materials. Finally, the results of building outline extraction from the fused images and the panchromatic image were compared. They show that extraction of building boundaries from the fused image is better than from the panchromatic image in terms of boundary connection.
In remote sensing applications, lower-spatial-resolution multispectral images are fused with higher-spatial-resolution panchromatic ones. The objective of this fusion process is to enhance the spatial resolution of the multispectral images to make important features more apparent for human or machine perception. This enhancement is performed by injecting the high-frequency component of the panchromatic image into the lower-resolution images without deteriorating the spectral component of the fused product. In this work, we propose a novel pixel-based image fusion technique which exploits the statistical properties of the input images to compose the output image. Criteria for an optimal image fusion are proposed. The fused image is essentially constructed by using the statistical properties of the panchromatic and multispectral images within a window to determine the weighting factors of the input images. This paper describes the principles of the proposed approach, assesses its properties and compares it with other popular fusion techniques. The study is carried out using Ikonos, QuickBird and SPOT images over areas with both urban and rural features. Analytical derivation, numerical analysis and graphic results are presented to support our discussions.
2006
The main topic of this paper is high-resolution image fusion. The techniques used to merge high-spatial-resolution panchromatic images with high-spectral-resolution multispectral images are described. The most commonly used image fusion methods that work on the principle of component substitution (the intensity-hue-saturation method (IHS), the Brovey transform (BT) and the multiplicative method (MULTI)) have been applied to Ikonos, QuickBird, Landsat and aerial orthophoto images. Visual comparison, histogram analyses, correlation coefficients and difference images were used to analyze the spectral and spatial qualities of the fused images. It was found that preserving spectral characteristics requires a high level of similarity between the panchromatic image and the corresponding multispectral intensity. To achieve this, an analysis of the spectral sensitivity of the multispectral and panchromatic data was performed, and the digital values in individual bands were modified before fusion. It was also determined that spatial resolution is best preserved when the input panchromatic image is left unchanged.
Transferring the spatial details of a high-resolution image into a low-resolution one is called image fusion, and several fusion methods have been introduced. Due to the nature of the fusion process, these methods may damage the spectral quality of the low-resolution multispectral image to a certain extent. In the literature, there are metrics used to evaluate the quality of fused images; depending on their mathematical algorithms, these quality metrics may give misleading results in terms of the spectral quality of the fused images. If the fusion process is successful, the classification result of the fused image should not be worse than the result obtained from the raw multispectral image. In this study, Worldview-2, Landsat ETM+ and Ikonos multispectral images are fused with their own panchromatic bands, and another Ikonos image is fused with a Quickbird pan-sharpened image, using the IHS, CN, HPF, PCA, Multiplicative, Ehlers, Brovey, Wavelet, Gram-Schmidt and Criteria Based fusion methods. The fused images are then classified with the Minimum Distance, Binary Encoding, Support Vector Machines, Random Forest, Maximum Likelihood and Artificial Neural Network classification methods, using exactly the same signatures. 450 points are used to calculate the post-classification accuracies of the classified images. Commonly used metrics (spectral and spatial ERGAS, spectral and spatial RMSE, SID, SAM, RASE) are also calculated for all fused images. To determine the best fusion algorithm, the consistency between the classification results and the fusion metric results is evaluated. The HPF fusion method is found to be the most successful fusion algorithm in terms of preserving spectral quality, and Support Vector Machines is found to be the best classification algorithm for these data sets.
International Journal of Engineering Sciences & Research Technology, 2014
Image fusion techniques have attracted interest within the remote sensing community. The reason is that in most cases the new generation of remote sensors with very high spatial resolution acquires image datasets in two separate modes: the highest spatial resolution is obtained for panchromatic images (PAN), whereas multispectral information (MS) is associated with lower spatial resolution. In the literature, several terms appear alongside 'fusion', such as merging, combination, synergy and integration, all expressing more or less the same concept. Image fusion techniques can be classified into three categories depending on the stage at which fusion takes place, namely the pixel, feature and decision levels of representation. This paper describes the concept of image fusion and its relevant methods.
Science in China Series F: Information Sciences, 2010
Conventional image fusion algorithms, such as IHS, SVR and PCS, may show defects in inheriting the spectral information embedded in the original lower-spatial-resolution MS image. A fusion method based on spectral mixture analysis (FSMA) was proposed in a previous study and has potential for solving this problem. However, published results are limited to well-behaved simulated data where the endmembers are known a priori, and the FSMA method does not work well when applied to real remotely sensed images, because the panchromatic-band reflectance estimated from the MS bands cannot be treated as the real panchromatic values. In this paper, an improved image fusion method based on spectral mixture analysis (IFSMA) is proposed, which extends the original FSMA method to real remotely sensed images by modifying the objective function of the constrained nonlinear optimization. It was compared with the original FSMA, Zhang's SVR, PCS and IHS methods, and the results indicate that the IFSMA method is superior to the others in preserving spectral and spatial information.
Remote Sensing for Environmental Monitoring, GIS Applications, and Geology VIII, 2008
Generally, image fusion methods are classified into three levels: pixel level (iconic), feature level (symbolic) and knowledge or decision level. In this paper we focus on iconic techniques for image fusion. A number of established fusion techniques exist that can merge high-spatial-resolution panchromatic and lower-spatial-resolution multispectral images recorded simultaneously by one sensor, creating high-resolution multispectral image datasets (pan-sharpening). In most cases, these techniques provide very good results, i.e. they retain the high spatial resolution of the panchromatic image and the spectral information of the multispectral image. When applied to multitemporal and/or multisensoral image data, however, these techniques still create spatially enhanced datasets, but usually at the expense of spectral consistency. In this study, a series of nine multitemporal multispectral remote sensing images (seven SPOT scenes and one FORMOSAT scene) is fused with one panchromatic Ikonos image. A number of techniques are employed to analyze the quality of the fusion process. The images are evaluated visually and quantitatively for preservation of spectral characteristics and improvement of spatial resolution. Overall, the Ehlers fusion, which was developed to preserve spectral characteristics in multi-date and multi-sensor fusion, showed the best results. It was proven not only that the Ehlers fusion is superior to all other tested algorithms, but also that it is the only one that guarantees excellent color preservation for all dates and sensors.
Image Fusion and Its Applications, 2011
International Journal of Advanced Research in Electrical, Electronics and Instrumentation Energy, 2015
Image fusion has found many applications in computer vision, remote sensing, intelligent robots and military settings, and different image fusion algorithms yield resultant images of different quality. In many remote sensing applications, the quantity of image data from satellite sensors has been increasing because of advances in sensor technology. To overcome the limitations of single-sensor images, multisensor image fusion provides data suitable for further applications by compensating for missing information. In this paper, a literature review is made of the different techniques available for combining multispectral images, including the IHS transform, high-pass filtering, PCA, ANN, the wavelet transform and the DCT. One effective technique for obtaining a good-quality image is the Fuzzylet Fusion Algorithm, which combines the advantages of the Stationary Wavelet Transform and fuzzy logic.
Indonesian Journal of Electrical Engineering and Computer Science, 2022
Image fusion provides users with detailed information about the urban and rural environment, which is useful for applications such as urban planning and management when higher-spatial-resolution images are not available. There are different image fusion methods. This paper implements, evaluates and compares six satellite image fusion methods, namely the wavelet 2D-M transform, Gram-Schmidt, high-frequency modulation, the high-pass filter (HPF) transform, the simple mean value and PCA. An Ikonos image (panchromatic, PAN, and multispectral, MULTI) showing the northwest of Bogotá (Colombia) is used to generate six fused images: MULTIWavelet 2D-M, MULTIG-S, MULTIMHF, MULTIHPF, MULTISMV and MULTIPCA. To assess the efficiency of the six image fusion methods, the resulting images were evaluated in terms of both spatial and spectral quality, using four metrics: the correlation index, the erreur relative globale adimensionnelle de synthèse (ERGAS), the relative average spectral error (RASE) and the Q index. The best results were obtained for the MULTISMV image, which exhibited spectral correlation higher than 0.85, a Q index of 0.84, and the highest scores in the spectral assessment according to ERGAS and RASE, 4.36% and 17.39% respectively.
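The Q index reported above is the universal image quality index (UIQI), which combines correlation, luminance and contrast comparison in a single score. A minimal numpy sketch of its standard single-window form, with an illustrative toy input of our own (a full evaluation would average it over sliding windows):

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index (Q index): 1.0 means identical images.
    Equals 4*cov*mx*my / ((vx+vy)*(mx^2+my^2)) over the whole array."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

# An image compared with itself scores exactly 1
img = np.arange(16, dtype=float).reshape(4, 4)
q = uiqi(img, img)
```

Note the denominator vanishes for two constant images, so implementations typically add a small stabilizer or skip flat windows; the sketch omits that for clarity.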