Remote sensing image fusion is an effective way to use the large volume of data from multisensor images. Most Earth observation satellites, such as SPOT, Landsat 7, IKONOS and QuickBird, provide both panchromatic (Pan) images at a higher spatial resolution and multispectral (MS) images at a lower spatial resolution, and many remote sensing applications, especially GIS-based ones, require both high spatial and high spectral resolution. An effective image fusion technique can produce such remotely sensed images. Image fusion is the combination of two or more different images to form a new image by using a certain algorithm, in order to obtain more and better information about an object or a study area than can be derived from any single image alone. Image fusion is performed at three different processing levels, namely pixel level, feature level and decision level, according to the stage at which the fusion takes place. There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution Pan image and low-resolution multispectral images. This paper explores the major remote sensing data fusion techniques at the pixel level and reviews the concept, principles, limitations and advantages of each technique, focusing on traditional techniques such as intensity-hue-saturation (IHS), Brovey, principal component analysis (PCA) and the wavelet transform.
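As a concrete illustration of the component-substitution idea behind the IHS family of techniques reviewed above, here is a minimal pan-sharpening sketch in NumPy. It is only a sketch under stated assumptions: the MS image is assumed already co-registered and resampled to the Pan grid, intensity is taken as the simple band mean, and the histogram matching is a plain mean/std adjustment.

```python
import numpy as np

def ihs_fusion(ms, pan):
    """IHS-style pan-sharpening sketch.

    ms  : (H, W, 3) float array in [0, 1], upsampled to the pan grid (assumption)
    pan : (H, W) float array in [0, 1]
    """
    # Intensity component as the mean of the multispectral bands
    intensity = ms.mean(axis=2)
    # Match pan statistics to the intensity (simple mean/std histogram match)
    pan_matched = ((pan - pan.mean()) / (pan.std() + 1e-12)
                   * intensity.std() + intensity.mean())
    # Inject the spatial detail (pan minus intensity) into every band
    detail = pan_matched - intensity
    fused = ms + detail[..., np.newaxis]
    return np.clip(fused, 0.0, 1.0)
```

Note the sanity property: when the pan image already equals the MS intensity, no detail is injected and the MS bands pass through unchanged.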
The amount and variety of remote sensing imagery of varying spatial resolution is continuously increasing, and techniques for merging images of different spatial and spectral resolution have become widely accepted in practice. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher-resolution panchromatic image of the same geographic area. This study examines fused images and their ability to preserve the spectral and spatial integrity of the original image. The mathematical formulation of ten data fusion techniques is worked out in this paper. Included are colour transformations, wavelet techniques, gradient- and Laplacian-based techniques, contrast and morphological techniques, feature selection and simple averaging procedures. Most of these techniques employ hierarchical image decomposition for fusion. IRS-1C and ASTER images are used for the experimental investigations. The panchromatic IRS-1C image has a pixel size of about 5 m; the multispectral ASTER images are at the 15 m resolution level. For the fusion experiments the three nadir-looking ASTER bands in the visible and near infrared are chosen. The concept for evaluating the fusion methods is based on using the IRS-1C image data at a reduced resolution of 15 m and the ASTER images at 45 m. This maintains the resolution ratio between IRS and ASTER and allows the fusion result at the 15 m resolution level to be compared with the original ASTER images. This statistical comparison reveals differences between all considered fusion concepts.
2012
Image fusion is a formal framework for combining and utilizing data originating from different sources. It aims at producing high-resolution multispectral images from a high-resolution panchromatic (PAN) image and a low-resolution multispectral (MS) image. The fused image must contain more interpretable information than can be gained from the original images. Ideally, the fused image should neither distort the spectral characteristics of the multispectral data nor lose the basic colour content of the original data. There are many data fusion techniques that can be used, including Principal Component Analysis (PCA), the Brovey Transform (BT), the Multiplicative Transform (MT) and the Discrete Wavelet Transform (DWT). One of the major problems associated with a data fusion technique is how to assess the quality of the fused (spatially enhanced) MS image. This paper presents a comprehensive analysis and evaluation of the most commonly used data fusion techniques. The performance of...
Image fusion in remote sensing has emerged as a sought-after procedure because it has proven beneficial in many areas, especially in studies of agriculture, the environment, and related fields. Simply put, image fusion gathers all pivotal data from many images and merges it into fewer images, ideally into a single image. This one fused image packs all the pertinent information and is more accurate than any image obtained from a single source, while including all the data that is required. Additional advantages are that it lessens the amount of data and creates images that are appropriate for, and can be understood by, both humans and machines. This paper reviews the three image fusion processing levels: feature level, decision level, and pixel level. It also covers image fusion methods that fall into four classes, multiresolution analysis (MRA), component substitution (CS), model-based solutions, and hybrid methods, and shows how each class has distinct advantages as well as drawbacks.
Index Terms — image fusion, data fusion, remote sensing, image processing, signal processing, visual sensor, DCT.
2020
Remote sensing is the process of gathering information about a particular area or object from a distance, usually from aircraft or satellites. Two types of sensors are popular among remote sensing (RS) applications: one provides images with good spatial resolution, known as panchromatic (PAN) images, and the other provides good spectral resolution, known as multispectral (MS) images. Even though the spatial information of the PAN image is relatively high, its spectral information is limited. Similarly, the MS image carries rich spectral information while lacking spatial detail. This accounts for the need for a technique known as image fusion. Combining the information about an object or area obtained from different sensors to produce a single fused image is the main concept behind image fusion. This paper is written to study the different image fusion techniques based on Intensity Hue Saturation (IHS) and to have a comparative study of ...
Image fusion is the combination of two or more different images to form a new image by using a certain algorithm. The aim of image fusion is to integrate complementary data in order to obtain more and better information about an object or a study area than can be derived from single sensor data alone. Image fusion can be performed at three different processing levels, which are pixel level, feature level and decision level, according to the stage at which the fusion takes place. This paper explores the major remote sensing data fusion techniques at the feature and decision levels as implemented in the literature. It compares and analyses the process model and characteristics, including advantages, limitations and applicability, of each technique, and also introduces some practical applications. It concludes with a summary and recommendations for the selection of suitable methods.
Int J Remote Sens, 2006
The preservation of spectral characteristics and the retention of high spatial resolution are two key issues in image fusion. Different applications may require different balances between the two. However, most commonly used fusion methods yield only one possible result; users have no control over how much spatial detail or spectral information is retained. To address this, an adjustable fusion method is proposed. Both quantitative and qualitative criteria, including mean, standard deviation, entropy, average gradient, spectral distortion, correlation coefficient and visual assessment, were used to evaluate the performance of the proposed fuser. A SPOT (Satellite Pour l'Observation de la Terre) panchromatic image and Landsat Thematic Mapper (TM) images were employed in the evaluation. The results show that the proposed method is slightly better than, or comparable to, commonly used image fusion methods, including intensity-hue-saturation (IHS), principal component analysis (PCA), and two other wavelet-based fusion methods with replacement and with selection strategies, in terms of spectral characteristic preservation and high spatial resolution retention. Moreover, the proposed method can achieve a wide range of balances between spectral characteristic preservation and high spatial resolution retention, and is thus applicable in different applications.
Remote Sensing of Land
Image fusion involves the integration of the geometric details of a high-resolution panchromatic (PAN) image and the spectral information of a low-resolution multispectral (XS) image, which is useful for regional planning and large-scale urban mapping. The present study compares the effectiveness of three image fusion techniques, namely Principal Component Analysis (PCA), the Wavelet Transform (WT) and Intensity Hue Saturation (IHS), in merging the XS information and PAN data of QuickBird satellite imagery. Comparison between the fused images obtained from the three fusion techniques is carried out on the basis of qualitative and quantitative evaluations, involving visual interpretation, inter-band correlation, correlation coefficient, standard deviation and mean. Results indicate that all three fusion techniques improve spatial resolution as well as spectral detail; however, the IHS technique provides the best spectral fidelity by preserving the XS integrity across all the bands under consideration.
Image fusion is a technique for obtaining images with high spatial and spectral resolution from low-spatial-resolution multispectral and high-spatial-resolution panchromatic images. There is often an inverse relationship between the spectral and spatial resolution of an image. It has not been possible to design a single sensor package that meets all application requirements, while the combined image from multiple sensors provides more comprehensive information by collecting a wide diversity of sensed wavelengths and spatial resolutions. Owing to the demand for higher classification accuracy and the need for enhanced positioning precision, there is always a need to improve the spectral and spatial resolution of remotely sensed imagery. These requirements can be fulfilled by image processing techniques at a significantly lower expense. The goal is to combine image data to form a new image that contains more interpretable information than can be gained from the original data. Ideally, the fused data should neither distort the spectral characteristics of the multispectral data nor lose the basic colour content of the original data. If the fused images are used for classification, the commonly used merging methods are Principal Component Analysis (PCA), Intensity Hue Saturation (IHS), the Brovey transformation, the multiplicative technique (MT), High-Pass Filter (HPF) fusion, Smoothing Filter-based Intensity Modulation (SFIM) and the Wavelet Transform.
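Of the merging methods listed above, high-pass filter (HPF) fusion is perhaps the simplest to state: subtract a low-pass version of the pan image from itself and add the resulting high frequencies to each multispectral band. A minimal NumPy sketch follows; the separable box filter and edge padding are illustrative choices, not prescribed by any particular paper.

```python
import numpy as np

def box_mean(img, k):
    """k x k box low-pass via separable 1-D convolutions (k odd, edge padding)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    kernel = np.ones(k) / k
    p = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='valid'), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='valid'), 0, p)
    return p

def hpf_fusion(ms, pan, k=5):
    """HPF fusion sketch: inject the pan image's high frequencies into each band.

    ms  : (H, W, B) float array, resampled to the pan grid (assumption)
    pan : (H, W) float array
    """
    high = pan - box_mean(pan, k)       # high-pass component of pan
    return ms + high[..., np.newaxis]   # add the same detail to every band
```

A useful sanity check on the design: a perfectly flat pan image contains no high frequencies, so the multispectral bands come back untouched.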
The usefulness of remote sensing data, in particular of images from Earth observation satellites, largely depends on their spectral, spatial and temporal resolution. Each system has its specific characteristics, providing different types of parameters on the observed objects. Focussing on operational and most commonly used commercial remote sensing satellite sensors, this paper describes how image fusion techniques can help increase the usefulness of the data acquired. There are many possibilities for combining images from different satellite sensors. This paper concentrates on the existing techniques that preserve spectral characteristics while increasing the spatial resolution. A very common example is the fusion of SPOT XS with PAN data to produce multispectral (3-band) imagery with 10 m ground resolution. These techniques are also referred to as image sharpening techniques. A distinction has to be made between pure visual enhancement (superimposition) and real interpolation of data to achieve higher resolution (e.g. wavelets). In total, the paper describes a number of fusion techniques, such as RGB colour composites, Intensity Hue Saturation (IHS) transformation, arithmetic combinations (e.g. the Brovey transform), Principal Component Analysis, wavelets (e.g. the ARSIS method) and Regression Variable Substitution (RVS), in terms of concepts, algorithms, processing, achievements and applications. It is shown how the results of the various techniques are influenced by different pre-processing steps as well as modifications of the involved parameters. All techniques are discussed and illustrated using examples of applications in the various fields that are part of ITC's educational programme and consulting projects.
International journal of remote sensing, 1998
With the availability of multisensor, multitemporal, multiresolution and multifrequency image data from operational Earth observation satellites, the fusion of digital image data has become a valuable tool in remote sensing image evaluation. Digital image fusion is a relatively new research field at the leading edge of available technology. It forms a rapidly developing area of research in remote sensing. This review paper describes and explains mainly pixel-based image fusion of Earth observation satellite data as a contribution to multisensor-integration-oriented data processing.
ISPRS Journal of Photogrammetry and Remote Sensing, 1991
University of Maine, Orono, ME 04469, USA. Commission VII.
Current and future remote sensing programs such as Landsat, SPOT, MOS, ERS, JERS, and the space platform's Earth Observing System (Eos) are based on a variety of imaging sensors that will provide timely and repetitive multisensor Earth observation data on a global scale. Visible, infrared and microwave images of high spatial and spectral resolution will eventually be available for all parts of the Earth. It is essential that efficient processing techniques be developed to cope with the large multisensor data volumes. This paper discusses data fusion techniques that have proved successful for the synergistic merging of SPOT HRV, Landsat TM and SIR-B
International Journal of Engineering Sciences & Research Technology, 2014
Image fusion techniques have attracted interest within the remote sensing community. The reason is that in most cases the new generation of remote sensors with very high spatial resolution acquires image datasets in two separate modes: the highest spatial resolution is obtained for panchromatic (PAN) images, whereas multispectral (MS) information is associated with lower spatial resolution. The term 'fusion' has appeared in the literature alongside several words expressing more or less the same meaning, such as merging, combination, synergy and integration. Image fusion techniques can be classified into three categories depending on the stage at which fusion takes place: pixel level, feature level and decision level. This paper describes the concept of image fusion and its relevant methods.
2007 National Radio Science Conference, 2007
To better identify the objects in remote sensing images, the multispectral images with high spectral resolution and low spatial resolution and the panchromatic images with high spatial resolution and low spectral resolution need to be fused. Many fusion techniques have been discussed in recent years to obtain images with both high spectral and high spatial resolution. In this paper an image fusion technique, based on integrating both the Intensity-Hue-Saturation (IHS) and the Discrete Wavelet Frame Transform (DWFT), is proposed for boosting the quality of remote sensing images. Panchromatic and multispectral images from the Landsat-7 (ETM+) satellite have been fused using this new approach. Experimental results show that the proposed technique improves the spectral and spatial qualities of the fused images. Moreover, when this technique is applied to noisy and de-noised remote sensing images it preserves the quality of the fused images. Comparative analyses between different fusion techniques are also presented and show that the proposed technique outperforms the others.
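The wavelet side of such hybrid schemes rests on a simple idea: decompose both images, keep the multispectral approximation coefficients, and take the detail coefficients from the pan image. The sketch below uses a single-level 2-D Haar transform for concreteness; the actual paper uses the Discrete Wavelet Frame Transform, so this is an assumption-laden stand-in, not the proposed method.

```python
import numpy as np

def haar_decompose(img):
    """One-level 2-D Haar transform (image dimensions must be even)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4   # approximation
    lh = (a + b - c - d) / 4   # horizontal detail
    hl = (a - b + c - d) / 4   # vertical detail
    hh = (a - b - c + d) / 4   # diagonal detail
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    """Exact inverse of haar_decompose."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def wavelet_fusion(ms, pan):
    """Keep each band's approximation, take detail coefficients from pan."""
    fused = np.empty_like(ms)
    for i in range(ms.shape[2]):
        ll, _, _, _ = haar_decompose(ms[:, :, i])
        _, lh, hl, hh = haar_decompose(pan)
        fused[:, :, i] = haar_reconstruct(ll, lh, hl, hh)
    return fused
```

Because the transform is perfectly invertible, a band that already equals the pan image is returned unchanged, which is a convenient correctness check.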
2014
Most Earth observation satellites are not able to acquire high-spatial- and high-spectral-resolution data simultaneously because of design or observational constraints. To overcome such limitations, image fusion techniques are used. Image fusion is the process of combining different satellite images on a pixel-by-pixel basis to produce fused images of higher value. The added value is meant in terms of information extraction capability, reliability and increased accuracy. The objective of this paper is to describe the basics of image fusion and various pixel-level image fusion techniques, and to evaluate and assess the performance of these fusion algorithms. Keywords — image fusion, pixel level, multi-sensor, IHS, PCA, multiplicative, Brovey, DCT, DWT. INTRODUCTION: Image fusion is the process of combining two different images acquired by different sensors or by a single sensor. The output image contains more information than the input images and is more suitable for human visual perception or for machine perception. ...
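Among the pixel-level techniques listed in the keywords above, PCA fusion is the least obvious to implement: the multispectral bands are decorrelated, the first principal component (which carries most of the spatial structure) is replaced by the histogram-matched pan image, and the transform is inverted. A minimal NumPy sketch, with illustrative names and a simple mean/std match:

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA fusion sketch: swap the first principal component for the pan image.

    ms  : (H, W, B) float array, resampled to the pan grid (assumption)
    pan : (H, W) float array
    """
    h, w, b = ms.shape
    X = ms.reshape(-1, b)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Eigendecomposition of the band covariance matrix (ascending eigenvalues)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, np.argsort(vals)[::-1]]   # sort by decreasing variance
    pcs = Xc @ vecs
    # Match pan statistics to the first principal component, then substitute
    p = pan.reshape(-1)
    pc1 = pcs[:, 0]
    pcs[:, 0] = ((p - p.mean()) / (p.std() + 1e-12)
                 * pc1.std() + pc1.mean())
    # Inverse transform (vecs is orthonormal, so the inverse is its transpose)
    fused = pcs @ vecs.T + mean
    return fused.reshape(h, w, b)
```

Because only the first component is replaced, and with matched statistics, the per-band means of the fused result stay essentially equal to those of the input, which is one reason PCA fusion tends to preserve overall radiometry.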
2006
The main topic of this paper is high-resolution image fusion. The techniques used to merge high-spatial-resolution panchromatic images with high-spectral-resolution multispectral images are described. The most commonly used image fusion methods that work on the principle of component substitution (the intensity-hue-saturation method (IHS), the Brovey transform (BT), and the multiplicative method (MULTI)) have been applied to Ikonos, QuickBird, Landsat, and aerial orthophoto images. Visual comparison, histogram analyses, correlation coefficients, and difference images were used in order to analyse the spectral and spatial qualities of the fused images. It was found that preserving spectral characteristics requires a high level of similarity between the panchromatic image and the respective multispectral intensity. In order to achieve this, a spectral sensitivity analysis of the multispectral and panchromatic data was performed, and digital values in individual bands were modified before fusion. It was also determined that spatial resolution is best preserved when the input panchromatic image is unchanged.
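The two arithmetic substitution methods compared alongside IHS above, the Brovey transform and the multiplicative method, reduce to one-line band operations. The following sketch shows both under the usual assumptions (inputs co-registered, MS resampled to the pan grid, values in a common radiometric range):

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey transform: scale each band by pan divided by the band sum."""
    total = ms.sum(axis=2, keepdims=True)
    return ms * pan[..., np.newaxis] / (total + eps)

def multiplicative(ms, pan):
    """Multiplicative method: per-band product with pan (typically followed
    by a rescale back to the original radiometric range, omitted here)."""
    return ms * pan[..., np.newaxis]
```

Note that the Brovey transform preserves the band ratios at every pixel, which keeps hue stable but can distort absolute radiometry; this is the spectral-distortion trade-off the paper's histogram analyses examine.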
The paper studies the effect of registration on the fusion of remotely sensed SPOT images. The SPOT-PAN and SPOT-XS images are registered using a wavelet-based multiresolution registration algorithm, which is found to give sub-pixel registration accuracy. The registered images are subjected to a pixel-level multispectral image fusion process using a wavelet transform approach. Spectral quality assessment shows that, compared to other conventional fusion techniques, the wavelet transform keeps much of the spectral information of the original multispectral image in the merged image. Finally, segmentation performed on the fused images gives better segmentation accuracy for the wavelet-based method.
Applied Geomatics, 2014