Remotely sensed images are invaluable to acquire geospatial information about earth surface for the assessment of land resources and environment monitoring. In most cases, the information provided by a single sensor is not complete or sufficient. Therefore, images collected by different sensors are combined to obtain complementary information.
The amount and variety of remote sensing imagery of varying spatial resolution is continuously increasing, and techniques for merging images of different spatial and spectral resolution have become widely accepted in practice. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher-resolution panchromatic image taken of the same geographic area. This study examines fused images and their ability to preserve the spectral and spatial integrity of the original image. The mathematical formulation of ten data fusion techniques is worked out in this paper. Included are colour transformations, wavelet techniques, gradient- and Laplacian-based techniques, contrast and morphological techniques, feature selection and simple averaging procedures. Most of these techniques employ hierarchical image decomposition for fusion. IRS-1C and ASTER images are used for the experimental investigations. The panchromatic IRS-1C image has a pixel size of around 5 m; the multispectral ASTER images are at a 15 m resolution level. For the fusion experiments the three nadir-looking ASTER bands in the visible and near infrared are chosen. The concept for evaluating the fusion methods is based on the idea of using a reduced resolution of the IRS-1C image data at 15 m and of the ASTER images at 45 m. This maintains the resolution ratio between IRS and ASTER and allows comparing the image fusion result at the 15 m resolution level with the original ASTER images. This statistical comparison reveals differences between all considered fusion concepts.
Remote sensing image fusion is an effective way to use the large volume of data from multisensor images. Most Earth observation satellites, such as SPOT, Landsat 7, IKONOS and QuickBird, provide both panchromatic (PAN) images at a higher spatial resolution and multispectral (MS) images at a lower spatial resolution, and many remote sensing applications require both high spatial and high spectral resolution, especially GIS-based applications. An effective image fusion technique can produce such remotely sensed images. Image fusion is the combination of two or more different images to form a new image by using a certain algorithm, in order to obtain more and better information about an object or a study area than any single input image provides. Image fusion is performed at three different processing levels, namely pixel level, feature level and decision level, according to the stage at which the fusion takes place. There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution PAN image and low-resolution multispectral images. This paper explores the major remote sensing data fusion techniques at pixel level and reviews the concept, principles, limitations and advantages of each technique. The paper focuses on traditional techniques such as intensity-hue-saturation (IHS), Brovey, principal component analysis (PCA) and wavelet fusion.
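As a concrete illustration of one of the traditional techniques named above, the Brovey transform scales each resampled MS band by the ratio of the PAN value to the per-pixel MS intensity. The sketch below is a minimal NumPy version; the function name and the equal-weight intensity are illustrative assumptions, not from the paper.

```python
import numpy as np

def brovey_fuse(ms, pan, eps=1e-9):
    """Brovey transform sketch: scale each upsampled MS band by the ratio of
    the PAN value to the mean MS intensity at every pixel.

    ms  : (H, W, 3) float array, MS bands resampled to the PAN grid
    pan : (H, W)    float array, panchromatic band
    """
    intensity = ms.mean(axis=2)        # per-pixel MS intensity (equal weights assumed)
    ratio = pan / (intensity + eps)    # per-pixel sharpening ratio
    return ms * ratio[..., None]       # inject PAN detail into each band
```

A useful property for checking an implementation: the per-pixel mean of the fused bands reproduces the PAN band (up to `eps`), which is exactly why the Brovey transform sharpens well but can distort band ratios' absolute radiometry.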
Image fusion in remote sensing has emerged as a sought-after procedure because it has proven beneficial in many areas, especially in studies of agriculture, the environment, and related fields. Simply put, image fusion involves gathering all pivotal data from many images and merging it into fewer images, ideally into a single image. This one fused image packs all the pertinent information and is more accurate than any picture extracted from a single source, while still including all the data that is required. Additional advantages are that it lessens the amount of data and creates images that are appropriate for, and can be understood by, both humans and machines. This paper reviews the three image fusion processing levels: feature level, decision level, and pixel level. It also examines image fusion methods that fall under four classes, MRA, CS, model-based solutions and hybrid, and shows how each class has distinct advantages as well as drawbacks.
2012
Image fusion is a formal framework for combining and utilizing data originating from different sources. It aims at producing high-resolution multispectral images from a high-resolution panchromatic (PAN) image and a low-resolution multispectral (MS) image. The fused image must contain more interpretable information than can be gained from the original images. Ideally, the fused image should not distort the spectral characteristics of the multispectral data and should retain the basic colour content of the original data. There are many data fusion techniques that can be used, including Principal Component Analysis (PCA), Brovey Transform (BT), Multiplicative Transform (MT) and Discrete Wavelet Transform (DWT). One of the major problems associated with a data fusion technique is how to assess the quality of the fused (spatially enhanced) MS image. This paper presents a comprehensive analysis and evaluation of the most commonly used data fusion techniques. The performance of...
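Of the techniques listed above, PCA-based fusion substitutes the first principal component of the MS bands with a statistics-matched PAN band. The sketch below is one common formulation in NumPy; the exact variant used in the paper is not specified, so the function name and the mean/standard-deviation matching step are assumptions.

```python
import numpy as np

def pca_fuse(ms, pan):
    """PCA pansharpening sketch: replace the first principal component of the
    MS bands with the mean/std-matched PAN band, then invert the transform.

    ms  : (H, W, B) float array, MS bands resampled to the PAN grid
    pan : (H, W)    float array
    """
    h, w, b = ms.shape
    x = ms.reshape(-1, b)
    mean = x.mean(axis=0)
    xc = x - mean
    # Eigendecomposition of the band covariance matrix
    vals, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]   # components ordered by variance
    pcs = xc @ vecs                          # project onto principal components
    # Match PAN statistics to the first component before substitution
    p = pan.ravel()
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    fused = pcs @ vecs.T + mean              # inverse PCA transform
    return fused.reshape(h, w, b)
```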
Index Terms: image fusion, data fusion, remote sensing, image processing, signal processing, visual sensor, DCT.
There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic (PAN) image and low-resolution multispectral (MS) remote sensing images. This paper attempts to undertake the study of image
International Journal of …, 2012
This paper addresses the problem of fusing optical (visible and thermal domain) data with radar data for the purpose of visualization. These types of images typically contain a lot of complementary information, and their joint visualization can be more useful and convenient for a human user than a set of individual images. To solve the image fusion problem we propose a novel algorithm that exploits some peculiarities of human colour perception and is based on grey-scale structural visualization. The benefits of the presented algorithm are demonstrated on satellite imagery.
2012
In the literature, several methods are available for combining low-spatial-resolution multispectral images with low-spectral-resolution panchromatic images to obtain a high-resolution multispectral image. One of the most common problems encountered in these methods is the spectral distortion introduced during the merging process. The spectral quality of the image is the most important factor affecting the accuracy of the results in many applications, such as object recognition, object extraction and image analysis. In this study, the most commonly used methods, including GIHS, GIHSF, PCA and Wavelet, are analyzed using image quality metrics such as SSIM, ERGAS and SAM. According to the obtained results, Wavelet is the best method for obtaining a fused image with the least spectral distortion. The image quality of the GIHS, GIHSF and PCA methods is close to each other, but the spatial quality of the image fused with the wavelet method is lower than the others.
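Two of the spectral-quality metrics used above, SAM and ERGAS, have compact standard definitions and can be sketched directly in NumPy. The function names and the `ratio` convention (PAN pixel size divided by MS pixel size, e.g. 0.25 for a 1:4 fusion) follow common usage, not necessarily this paper's implementation.

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    """Mean Spectral Angle Mapper (radians) between reference and fused MS
    images; 0 means identical spectral directions. ref, fused: (H, W, B)."""
    a = ref.reshape(-1, ref.shape[-1])
    b = fused.reshape(-1, fused.shape[-1])
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    return np.arccos(np.clip(num / den, -1.0, 1.0)).mean()

def ergas(ref, fused, ratio):
    """ERGAS (relative dimensionless global error in synthesis); lower is
    better, 0 for identical images. ratio = PAN pixel size / MS pixel size."""
    rmse = np.sqrt(((ref - fused) ** 2).mean(axis=(0, 1)))  # per-band RMSE
    mu = ref.mean(axis=(0, 1))                              # per-band mean
    return 100.0 * ratio * np.sqrt(((rmse / mu) ** 2).mean())
```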
In remote sensing, image fusion is a useful tool for fusing high-spatial-resolution panchromatic images (PAN) with lower-spatial-resolution multispectral images (MS) to create a high-spatial-resolution multispectral fused image (F) while preserving the spectral information of the multispectral image (MS). Many pan-sharpening, or pixel-based image fusion, techniques have been developed to enhance the spatial resolution while preserving the spectral properties of the MS image. This paper undertakes the study of image fusion using pixel-based techniques: arithmetic combination methods, frequency filtering methods and statistical methods. The first type includes the Brovey Transform (BT), Colour Normalized Transformation (CNT) and the Multiplicative Method (MLT); the second type includes the High-Pass Filter Additive method (HPFA); the third type includes Local Mean Matching (LMM) and Regression Variable Substitution (RVS). The paper also concentrates on analytical techniques for evaluating the quality of the fused image (F). The study focuses on assessing the quality of image detail, especially fine detail and edges, using two criteria, edge detection and contrast, together with an estimate of homogeneity in different image regions obtained from the interdependence of blocks as a block is shifted by up to ten pixels. The evaluation is therefore effective because it takes both homogeneity and edge quality measurements into account.
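The frequency-filtering family mentioned above, High-Pass Filter Additive (HPFA), extracts the high-frequency detail of the PAN band with a low-pass filter and adds it to each MS band. The sketch below uses a plain mean filter for the low-pass step; the kernel size and function names are illustrative assumptions.

```python
import numpy as np

def box_blur(img, k=5):
    """k x k mean filter built from shifted partial sums (edges reflected)."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hpfa_fuse(ms, pan, k=5):
    """HPFA sketch: add the high-pass component of PAN to every MS band.
    ms : (H, W, B) MS resampled to the PAN grid, pan : (H, W)."""
    detail = pan - box_blur(pan, k)   # high-frequency part of PAN
    return ms + detail[..., None]     # same detail injected into each band
```

Because the injected detail is zero-mean over smooth regions, HPFA tends to preserve MS spectral content better than ratio-based methods, at the cost of a filter-size parameter to tune.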
Image fusion refers to the acquisition, processing and synergistic combination of information provided by various sensors or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first study case considers the problem of synthetic aperture radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, by using neural networks; the third presents a processor to fuse multifrequency, multipolarization and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter (MKF). Each study case also presents the results achieved by the proposed techniques applied to real data.
In remote sensing applications, lower spatial resolution multispectral images are fused with higher spatial resolution panchromatic ones. The objective of this fusion process is to enhance the spatial resolution of the multispectral images to make important features more apparent for human or machine perception. This enhancement is performed by injecting the high frequency component of the panchromatic image into the lower resolution ones without deteriorating the spectral component in the fused product. In this work, we propose a novel pixel based image fusion technique which exploits the statistical properties of the input images to compose the outcome images. Criteria for an optimal image fusion are proposed. The fused image is essentially constructed by using the statistical properties of panchromatic and multispectral images within a window to determine the weighting factors of the input images. This paper describes the principles of the proposed approach, assesses its properties and compares it with other popular fusion techniques. This study is carried out using Ikonos, QuickBird and SPOT images over areas with both urban and rural features. Analytical derivation, numerical analysis and graphic results are presented to support our discussions.
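The abstract above does not give the exact weighting formula, but a representative window-based scheme in the same spirit is local mean-variance matching: map the PAN band onto the local statistics of each MS band inside a sliding window. The sketch below is such a stand-in, not the paper's actual method; the window size and function names are assumptions.

```python
import numpy as np

def local_stats(img, k=7):
    """Windowed mean and standard deviation via shifted partial sums."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    s = np.zeros_like(img, dtype=float)
    s2 = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            w = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            s += w
            s2 += w * w
    n = k * k
    mean = s / n
    var = np.maximum(s2 / n - mean ** 2, 0.0)  # guard tiny negatives
    return mean, np.sqrt(var)

def window_fuse(ms_band, pan, k=7, eps=1e-9):
    """Local mean-variance matching: rescale PAN to the MS band's local
    statistics inside a k x k window, preserving spectra on average."""
    pm, ps = local_stats(pan, k)
    mm, msd = local_stats(ms_band, k)
    return (pan - pm) * (msd / (ps + eps)) + mm
```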
Boletim de Ciências Geodésicas, 2012
Image fusion techniques of remote sensing data are formal frameworks for merging and using images originating from different sources. This research investigates the quality assessment of Synthetic Aperture Radar (SAR) data fusion with optical imagery. Two SAR datasets from different sensors, namely RADARSAT-1 and PALSAR, were fused with SPOT-2 data. Both SAR datasets have the same resolutions and polarisations; however, the images were gathered at different frequencies, C band and L band respectively. This paper contributes a comparative evaluation of the fused data for understanding the performance of the implemented image fusion algorithms, namely the Ehlers, IHS (Intensity-Hue-Saturation), HPF (High-Pass Filtering), two-dimensional DWT (Discrete Wavelet Transform) and PCA (Principal Component Analysis) techniques. Quality assessments of the fused images were performed both qualitatively and quantitatively. For the statistical analysis, bias, correlation coefficient (CC), difference in variance (DIV), standard deviation difference (SDD) and universal image quality index (UIQI) methods were applied to the fused images. The evaluations were performed by categorizing the test area into two classes, "urban" and "agricultural". It has been observed that some of the methods have enhanced the spatial quality or preserved the spectral quality of the original SPOT XS image to various degrees, while some approaches have introduced distortions. In general we noted that Ehlers' spectral quality is far better than those of the other methods. HPF performs almost best in agricultural areas for both SAR images.
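One of the statistical measures listed above, the universal image quality index (UIQI) of Wang and Bovik, combines correlation, luminance and contrast into a single score in [-1, 1]. The sketch below computes the global (whole-image) form; the original index is usually averaged over sliding windows, so treat this as a simplified illustration.

```python
import numpy as np

def uiqi(x, y, eps=1e-12):
    """Universal Image Quality Index, global form: 1.0 when x and y match in
    correlation, mean luminance and contrast; lower (or negative) otherwise."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + eps)
```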
2011
Image fusion is an important algorithm in many remote sensing applications, such as visualization, identification of object boundaries, object segmentation and object analysis. The widely used IHS method preserves the spatial information but distorts the spectral information during the fusion process; it is also limited to three bands. In this paper, the difference of each MS band with respect to its mean image is used to reconstruct the fused image, with the mean of the MS band replaced by the mean of the PAN image. In this way both spatial and spectral information are taken into account in the fusion process. We have used an actual four-band IKONOS dataset for our experiment. The simulation results show that the proposed algorithm preserves both spatial and spectral information better than the compared standard algorithms, and it also significantly improves the universal quality index, which measures the visual quality of the fused image.
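Read literally, the substitution described above keeps each MS band's deviation from its own mean but replaces that mean with the PAN mean. The sketch below implements only that literal reading with global band means; the paper's actual reconstruction may use a different mean image, so this is a hypothetical illustration, not the authors' algorithm.

```python
import numpy as np

def mean_substitution_fuse(ms, pan):
    """Literal sketch of the mean-substitution idea: retain each MS band's
    difference from its own (global) mean, substituting the PAN mean.
    ms : (H, W, B) float array, pan : (H, W) float array."""
    ms = ms.astype(float)
    shifted = ms - ms.mean(axis=(0, 1))  # per-band difference from the mean
    return shifted + pan.mean()          # substitute the PAN mean
```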
Transferring the spatial details of a high-resolution image into a low-resolution one is called image fusion. Several different fusion methods have been introduced. Due to the nature of the fusion process, these methods may damage the spectral quality of the low-resolution multispectral image to a certain extent. In the literature, there are several metrics used to evaluate the quality of fused images. Depending on their mathematical algorithms, these quality metrics may give misleading results in terms of the spectral quality of the fused images. If the fusion process is successful, the classification result of the fused image should not be worse than the result obtained from the raw multispectral image. In this study, Worldview-2, Landsat ETM+ and Ikonos multispectral images are fused with their own panchromatic bands, and another Ikonos image is fused with a Quickbird pan-sharpened image, using IHS, CN, HPF, PCA, Multiplicative, Ehlers, Brovey, Wavelet, Gram-Schmidt and Criteria Based fusion...
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2013
The task of enhancing the perception of a scene by combining information captured from different image sensors is usually known as multisensor image fusion. This paper presents an area-based image fusion algorithm to merge SAR (Synthetic Aperture Radar) and optical images. The co-registration of the two images is first conducted using the proposed registration method prior to image fusion. Segmentation into active and inactive areas is then performed on the SAR texture image for selective injection of the SAR image into the panchromatic (PAN) image. An integrated image based on these two images is generated by the novel area-based fusion scheme, which imposes different fusion rules for each segmented area. Finally, this image is fused into a multispectral (MS) image through the hybrid pansharpening method proposed in previous research. Experimental results demonstrate that the proposed method shows better performance than other fusion algorithms and has the potential to be applied to the multisensor fusion of SAR and optical images.
2014
Most Earth observation satellites are not able to acquire high spatial and high spectral resolution data simultaneously because of design or observational constraints. To overcome such limitations, image fusion techniques are used. Image fusion is the process of combining different satellite images on a pixel-by-pixel basis to produce fused images of higher value. The added value is meant in terms of information extraction capability, reliability and increased accuracy. The objective of this paper is to describe the basics of image fusion and various pixel-level image fusion techniques, and to evaluate and assess the performance of these fusion algorithms. Keywords: Image Fusion, Pixel Level, Multi-sensor, IHS, PCA, Multiplicative, Brovey, DCT, DWT. INTRODUCTION: Image fusion is the process of combining two different images acquired by different sensors or by a single sensor. The output image contains more information than the input images and is more suitable for human visual perception or for machine perception. ...
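Among the pixel-level techniques listed in the keywords above, the fast (additive) form of IHS fusion avoids the explicit colour-space transform: substituting PAN for the intensity component is algebraically equivalent to adding the PAN-minus-intensity difference to every band. The sketch below shows that additive form; the function name and the equal-weight intensity are illustrative assumptions.

```python
import numpy as np

def ihs_fuse(ms, pan):
    """Fast IHS (component substitution) sketch: take the band average as the
    intensity I and add (PAN - I) to every band, which equals replacing I
    with PAN in the IHS transform. ms : (H, W, 3) RGB on the PAN grid."""
    intensity = ms.mean(axis=2)
    delta = pan - intensity           # spatial detail injected by substitution
    return ms + delta[..., None]
```

A quick sanity check of the equivalence: the band average of the fused result is exactly the PAN band, confirming the intensity component was fully replaced.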
The usefulness of remote sensing data, in particular of images from Earth observation satellites, largely depends on their spectral, spatial and temporal resolution. Each system has its specific characteristics providing different types of parameters on the observed objects. Focussing on operational and most commonly used commercial remote sensing satellite sensors, this paper describes how image fusion techniques can help increase the usefulness of the data acquired. There are plenty of possibilities for combining images from different satellite sensors. This paper concentrates on the existing techniques that preserve spectral characteristics while increasing the spatial resolution. A very common example is the fusion of SPOT XS with PAN data to produce multispectral (3-band) imagery with 10 m ground resolution. These techniques are also referred to as image sharpening techniques. A distinction has to be made between pure visual enhancement (superimposition) and real interpolation of data to achieve higher resolution (e.g. wavelets). In total, the paper describes a number of fusion techniques, such as RGB colour composites, Intensity Hue Saturation (IHS) transformation, arithmetic combinations (e.g. Brovey transform), Principal Component Analysis, wavelets (e.g. the ARSIS method) and Regression Variable Substitution (RVS), in terms of concepts, algorithms, processing, achievements and applications. It is shown how the results of the various techniques are influenced by different pre-processing steps as well as modifications of the involved parameters. All techniques are discussed and illustrated using examples of applications in the various fields that are part of ITC's educational programme and consulting projects.
International Journal of Engineering Sciences & Research Technology, 2014
Image fusion techniques have attracted interest within the remote sensing community. The reason for this is that in most cases the new generation of remote sensors with very high spatial resolution acquires image datasets in two separate modes: the highest spatial resolution is obtained for panchromatic images (PAN), whereas multispectral information (MS) is associated with lower spatial resolution. Several terms commonly appear alongside 'fusion', such as merging, combination, synergy and integration, all expressing more or less the same concept since it appeared in the literature. Image fusion techniques can be classified into three categories depending on the stage at which fusion takes place, namely the pixel level, feature level and decision level of representation. This paper describes the concept of image fusion and its relevant methods.