2011, Image Fusion and Its Applications
Remote sensing image fusion has come a long way from research experiments to an operational image processing technology. Having established a framework for image fusion at the end of the 1990s, we now provide an overview of the advances in image fusion during the past 15 years. By assembling information about new remote sensing image fusion techniques, recent technical developments and their influence on image fusion, international societies and working groups, and new journals and publications, we provide insight into new trends. It becomes clear that image fusion facilitates remote sensing image exploitation. It aims at achieving better and more reliable information in order to better understand complex Earth systems. The numerous publications during the last decade show that remote sensing image fusion is a well-established research field. The experience gained fosters other technological developments in terms of sensor configuration and data exploitation. Multi-modal data usage enables the implementation of the concept of Digital Earth. To advance in this respect, updated guidelines and a set of commonly accepted quality assessment criteria are needed in image fusion.
Remote sensing delivers multi-modal and multi-temporal data. Image fusion is a valuable tool for optimizing multisensor image exploitation. It has developed into a usable image processing technique for extracting information of higher quality and reliability. Owing to the availability of many different sensors and operational image fusion techniques, researchers have conducted a vast number of successful experiments. However, defining an appropriate workflow prior to processing the imagery requires knowledge in all related fields, i.e. remote sensing, image fusion and the desired image exploitation processing. The results show that the choice of the appropriate technique, as well as the fine-tuning of its individual parameters, is crucial. There is still a lack of strategic guidelines due to this complexity and variability. This paper reports on the findings of an initiative to streamline data selection, application requirements and the choice of a suitable image fusion technique. All of this forms the first step towards the development of a Fusion Approach Selection Tool (FAST). The project aims at collecting successful image fusion cases that are relevant to other users and other areas of interest. From these cases, standards will be derived that constitute valuable contributions to further applications and developments. The availability of these standards will help to further develop image fusion techniques, make the best use of existing multimodal images and provide new insights into the processes of the Earth.
Image fusion in remote sensing has emerged as a widely used procedure because it has proven beneficial in many areas, especially in studies of agriculture, the environment and related fields. Simply put, image fusion extracts the important information from many images and merges it into fewer images, ideally into a single image. This one fused image contains all the pertinent information and is more accurate and complete than any image drawn from a single source. Additional advantages are that it reduces the volume of data and creates images that can be interpreted by both humans and machines. This paper reviews the three image fusion processing levels: pixel level, feature level and decision level. It also examines image fusion methods that fall into four classes, multiresolution analysis (MRA), component substitution (CS), model-based solutions and hybrid approaches, and shows how each class has distinct advantages as well as drawbacks.
With the availability of multisensor, multitemporal, multiresolution and multifrequency image data from operational Earth observation satellites, the fusion of digital image data has become a valuable tool in remote sensing image evaluation. Digital image fusion is a relatively new research field at the leading edge of available technology. It forms a rapidly developing area of research in remote sensing. This review paper describes and explains mainly pixel-based image fusion of Earth observation satellite data as a contribution to multisensor-integration-oriented data processing.
Remote sensing image fusion is an effective way to use the large volume of data from multisensor images. Most Earth observation satellites, such as SPOT, Landsat 7, IKONOS and QuickBird, provide both panchromatic (Pan) images at a higher spatial resolution and multispectral (MS) images at a lower spatial resolution, and many remote sensing applications require both high spatial and high spectral resolution, especially GIS-based applications. An effective image fusion technique can produce such remotely sensed images. Image fusion is the combination of two or more different images to form a new image by using a certain algorithm, in order to obtain more and better information about an object or a study area than can be derived from a single image alone. Image fusion is performed at three different processing levels, pixel level, feature level and decision level, according to the stage at which the fusion takes place. There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution Pan image and low-resolution multispectral images. This paper explores the major remote sensing data fusion techniques at pixel level and reviews the concept, principles, limitations and advantages of each technique, focusing on traditional techniques such as intensity-hue-saturation (IHS), Brovey, principal component analysis (PCA) and wavelet transforms.
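To make the pixel-level techniques above concrete, here is a minimal sketch of the Brovey transform, one of the methods the paper reviews. The function name and the eps safeguard are illustrative choices, not taken from the paper; the MS bands are assumed to have been resampled to the Pan grid beforehand.

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey transform pansharpening (sketch).

    ms  : (bands, H, W) multispectral image, already resampled to the Pan grid
    pan : (H, W) panchromatic image

    Each fused band is the MS band scaled by the ratio of the Pan intensity
    to the summed MS intensity, which injects the Pan spatial detail."""
    intensity = ms.sum(axis=0) + eps   # per-pixel MS intensity; eps avoids /0
    return ms * (pan / intensity)      # broadcast the ratio over all bands
```

Because the same ratio rescales every band, the Brovey transform sharpens well but is known to distort spectra where the Pan and summed MS intensities diverge.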
1998
The need for a definition of the concept of data fusion is established. Previously published definitions are discussed. A new definition of data fusion is proposed, which makes it possible to set up a conceptual approach to the fusion of Earth observation data by putting an emphasis on the framework and on the fundamentals of remote sensing underlying data fusion.
Remote sensing delivers multi-modal and multi-temporal data from the Earth's surface. In order to cope with these multi-dimensional data sources and to make the most of them, image fusion is a valuable tool. It has developed over the past few decades into a usable image processing technique for extracting information of higher quality and reliability. As more sensors and advanced image fusion techniques have become available, researchers have conducted a vast number of successful studies using image fusion. However, the definition of an appropriate workflow prior to processing the imagery requires knowledge in all related fields, i.e. remote sensing, image fusion and the desired image exploitation processing. From the results, it can be seen that the choice of the appropriate technique, as well as the fine-tuning of the individual parameters of this technique, is crucial. There is still a lack of strategic guidelines due to the complexity and variability of data selection, processing techniques and applications. This paper describes the results of a project that forms part of a larger initiative to streamline data selection, application requirements and the choice of a suitable image fusion technique. It aims at collecting successful image fusion cases that are relevant to other users and other areas of interest around the world. From these cases, common guidelines have been derived that are valuable contributions to further applications and developments. The availability of these guidelines will help to identify bottlenecks, further develop image fusion techniques, make the best use of existing multimodal images and provide new insights into the Earth's processes. The outcome is a remote sensing image fusion atlas (book) in which successful image fusion cases are displayed and described, embedded in common findings and generally valid statements in the field of image fusion.
Image Fusion, 2011
Introduction

1.1 Context

With the development of new satellite systems and the accessibility of data to the public through web services such as Google Earth, remote sensing imagery is experiencing significant growth, which has advanced, and still advances, research in this area in different respects. In cartography especially, many studies have been conducted on multi-source satellite image classification. These studies aim to develop automatic tools in order to facilitate interpretation and provide a semantic land cover classification. Classical tools based on satellite images deal essentially with one category of satellite image, which allows only a partial interpretation. Multi-sensor or multi-source image fusion has been applied in the field of remote sensing for 20 years and continues today to provide efficient solutions to problems related to detection and classification. The work presented in this chapter is part of multi-source fusion research efforts towards reliable and automatic satellite image interpretation. We propose to apply new fusion concepts and theories to multi-source satellite images. Our main motivation is to measure the real contribution of multi-source image fusion compared with the exploitation of satellite images separately. Recent studies suggest that the combination of imagery from satellites with different spectral, spatial and temporal characteristics may improve land cover classification performance. The use of multi-source satellite images takes full account of the complementary and supplementary information provided by different data sources and considerably optimizes the classification of cartographic objects. In particular, the combination of optical and radar remote sensing data may improve classification results because of the complementarity of these two sources. Spectral features extracted from optical data may remove some difficulties faced when using only radar images, while radar images offer the massive advantage of penetrating clouds. Thus, data fusion techniques are applied to combine these two kinds of information.

1.2 Proposed approach

In the literature, there is a huge variety of fusion theories, mainly probabilistic and Bayesian theory [Mitchell, 2007], fuzzy and possibility theory [Milisavljević & Bloch, 2009], Dempster-Shafer theory, etc. [Milisavljević & Bloch, 2008]. Most of them, however, proceed in four steps: modeling, estimation, combination and decision.
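As a concrete illustration of the combination and decision steps named above, here is a minimal sketch using a simple Bayesian product rule over per-source class posteriors. The function names and the two-source setup (optical and radar) are assumptions for illustration; the chapter itself considers a broader family of fusion theories.

```python
import numpy as np

def bayes_combine(p_optical, p_radar):
    """Combination step: Bayesian product rule over per-pixel class
    posteriors of shape (classes, H, W), followed by renormalisation."""
    joint = p_optical * p_radar
    return joint / joint.sum(axis=0, keepdims=True)

def decide(joint):
    """Decision step: maximum a posteriori class label per pixel."""
    return joint.argmax(axis=0)
```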
ISPRS Journal of Photogrammetry and Remote Sensing, 1991
Current and future remote sensing programs such as Landsat, SPOT, MOS, ERS, JERS and the space platform's Earth Observing System (Eos) are based on a variety of imaging sensors that will provide timely and repetitive multisensor Earth observation data on a global scale. Visible, infrared and microwave images of high spatial and spectral resolution will eventually be available for all parts of the Earth. It is essential that efficient processing techniques be developed to cope with the large multisensor data volumes. This paper discusses data fusion techniques that have proved successful for the synergistic merging of SPOT HRV, Landsat TM and SIR-B images.
Index Terms: image fusion, data fusion, remote sensing, image processing, signal processing, visual sensor, DCT.
Image fusion is the combination of two or more different images to form a new image by using a certain algorithm. The aim of image fusion is to integrate complementary data in order to obtain more and better information about an object or a study area than can be derived from single sensor data alone. Image fusion can be performed at three different processing levels, pixel level, feature level and decision level, according to the stage at which the fusion takes place. This paper explores the major remote sensing data fusion techniques at feature and decision levels as reported in the literature. It compares and analyses the process model and characteristics, including the advantages, limitations and applicability, of each technique, and also introduces some practical applications. It concludes with a summary and recommendations for the selection of suitable methods.
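As one concrete example of the decision-level fusion the paper surveys, the sketch below implements per-pixel majority voting over the label maps produced by several independent classifiers; the function name and interface are illustrative assumptions, not the paper's.

```python
import numpy as np

def majority_vote(label_maps, n_classes):
    """Decision-level fusion: per-pixel majority vote.

    label_maps : sequence of (H, W) integer label maps, one per classifier
    n_classes  : number of land cover classes"""
    stack = np.asarray(label_maps)                  # (n_maps, H, W)
    votes = np.stack([(stack == c).sum(axis=0)      # vote count per class
                      for c in range(n_classes)])   # (n_classes, H, W)
    return votes.argmax(axis=0)                     # fused (H, W) label map
```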
The amount and variety of remote sensing imagery of varying spatial resolution is continuously increasing, and techniques for merging images of different spatial and spectral resolution have become widely accepted in practice. This practice, known as data fusion, is designed to enhance the spatial resolution of multispectral images by merging a relatively coarse-resolution image with a higher-resolution panchromatic image taken of the same geographic area. This study examines fused images and their ability to preserve the spectral and spatial integrity of the original image. The mathematical formulation of ten data fusion techniques is worked out in this paper. Included are colour transformations, wavelet techniques, gradient- and Laplacian-based techniques, contrast and morphological techniques, feature selection and simple averaging procedures. Most of these techniques employ hierarchical image decomposition for fusion. IRS-1C and ASTER images are used for the experimental investigations. The panchromatic IRS-1C image has a pixel size of around 5 m; the multispectral ASTER images are at a 15 m resolution level. For the fusion experiments the three nadir-looking ASTER bands in the visible and near infrared are chosen. The concept for evaluating the fusion methods is based on the idea of using a reduced resolution of the IRS-1C image data at 15 m resolution and of the ASTER images at 45 m resolution. This maintains the resolution ratio between IRS and ASTER and allows the image fusion result at the 15 m resolution level to be compared with the original ASTER images. This statistical comparison reveals differences between all the fusion concepts considered.
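The evaluation concept described above (degrade both inputs, fuse, then compare at the original MS resolution) can be sketched as follows; the block-averaging degradation and the per-band correlation check are simplified stand-ins for the paper's actual statistical comparison.

```python
import numpy as np

def degrade(img, factor):
    """Reduce resolution by block averaging, e.g. factor=3 for 5 m -> 15 m;
    trailing rows/columns that do not fill a whole block are cropped."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor].astype(float)
    return img.reshape(img.shape[0] // factor, factor,
                       img.shape[1] // factor, factor).mean(axis=(1, 3))

def band_correlation(fused_band, reference_band):
    """Pearson correlation between a fused band and the original band at
    the same resolution, one common spectral-fidelity check."""
    return np.corrcoef(fused_band.ravel().astype(float),
                       reference_band.ravel().astype(float))[0, 1]
```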
International Journal of …, 2012
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2017
The inconsistency between freely available remote sensing datasets and crowd-sourced data from the resolution perspective poses a big challenge in the context of data fusion. In classical classification problems, crowd-sourced data are represented as points that may or may not be located within the same pixel. This discrepancy can result in mixed pixels that may be wrongly classified. Moreover, it leads to a failure to retain a sufficient level of detail from data inferences. In this paper we propose a method that can preserve detailed inferences from remote sensing datasets accompanied by crowd-sourced data. We show that advanced machine learning techniques can be utilized towards this objective. The proposed method relies on two steps: first, we enhance the spatial resolution of the satellite image using convolutional neural networks (CNNs), and second, we fuse the crowd-sourced data with the upscaled version of the satellite image. The scope of this paper, however, covers only the first step. Results show that a CNN can enhance the resolution of Landsat 8 scenes both visually and quantitatively.
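The first step could, for instance, follow the SRCNN line of work; the sketch below is a minimal three-layer network of that kind, assuming PyTorch. The layer widths and kernel sizes follow the classic SRCNN recipe and are not taken from this paper.

```python
import torch.nn as nn

class SRCNNLike(nn.Module):
    """Three-layer CNN in the spirit of SRCNN. The input is the bicubically
    upscaled Landsat band(s); the network learns the mapping to a sharper
    image of the same spatial size."""
    def __init__(self, bands=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, bands, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):                                    # x: (N, bands, H, W)
        return self.net(x)
```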
International Journal of Remote Sensing, 1998
2012
Image fusion is a formal framework for combining and utilizing data originating from different sources. It aims at producing high-resolution multispectral images from a high-resolution panchromatic (PAN) image and a low-resolution multispectral (MS) image. The fused image must contain more interpretable information than can be gained by using the original images. Ideally, the fused image should not distort the spectral characteristics of the multispectral data and should retain the basic colour content of the original data. There are many data fusion techniques that can be used, including Principal Component Analysis (PCA), the Brovey Transform (BT), the Multiplicative Transform (MT) and the Discrete Wavelet Transform (DWT). One of the major problems associated with a data fusion technique is how to assess the quality of the fused (spatially enhanced) MS image. This paper presents a comprehensive analysis and evaluation of the most commonly used data fusion techniques. The performance of...
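One widely used global index for the quality-assessment problem raised here is ERGAS; a minimal sketch is given below. The reference would be the original MS image under a degrade-fuse-compare protocol; the function signature is an illustrative assumption.

```python
import numpy as np

def ergas(fused, reference, ratio):
    """ERGAS spectral quality index (lower is better).

    fused, reference : (bands, H, W) arrays at the same resolution
    ratio            : Pan/MS resolution ratio, e.g. 0.25 for a 1:4 case"""
    terms = []
    for f, r in zip(fused, reference):
        rmse = np.sqrt(np.mean((f.astype(float) - r.astype(float)) ** 2))
        terms.append((rmse / r.mean()) ** 2)    # normalise by the band mean
    return 100.0 * ratio * np.sqrt(np.mean(terms))
```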
Image fusion refers to the acquisition, processing and synergistic combination of information provided by various sensors or by the same sensor in many measuring contexts. The aim of this survey paper is to describe three typical applications of data fusion in remote sensing. The first study case considers the problem of synthetic aperture radar (SAR) interferometry, where a pair of antennas is used to obtain an elevation map of the observed scene; the second refers to the fusion of multisensor and multitemporal (Landsat Thematic Mapper and SAR) images of the same site acquired at different times, using neural networks; the third presents a processor to fuse multifrequency, multipolarization and multiresolution SAR images, based on the wavelet transform and a multiscale Kalman filter (MKF). Each study case also presents the results achieved by the proposed techniques applied to real data.
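A drastically simplified version of wavelet-based fusion, the family to which the third processor belongs, can be sketched with PyWavelets: keep the approximation coefficients of one image (spectral content) and substitute the detail coefficients of the other (spatial content). This is a generic single-level sketch, not the paper's MKF processor.

```python
import pywt  # PyWavelets

def wavelet_fuse(ms_band, pan, wavelet="db2"):
    """Single-level wavelet fusion sketch: keep the MS approximation and
    inject the Pan detail coefficients. Both inputs must be co-registered
    arrays of the same shape."""
    ms_approx, _ms_details = pywt.dwt2(ms_band, wavelet)
    _pan_approx, pan_details = pywt.dwt2(pan, wavelet)
    return pywt.idwt2((ms_approx, pan_details), wavelet)
```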
IEEE Transactions on Geoscience and Remote Sensing, 2000
A data fusion method for land cover (LC) classification is proposed that combines remote sensing data at a fine and a coarse spatial resolution. It is a two-step approach, based on the assumption that some of the LC classes can be merged into a more generalized LC class.
2006
In this paper, a model-based approach to multiresolution fusion of remotely sensed images is presented. Given a high spatial resolution panchromatic (Pan) image and a low spatial resolution multispectral (MS) image acquired over the same geographical area, the presented method aims to enhance the spatial resolution of the MS image to the resolution of the Pan observation. The proposed fusion technique utilizes the spatial correlation of each of the high-resolution MS channels by using an autoregressive (AR) model, whose parameters are learnt from an analysis of the Pan data. Under the assumption that the parameters of the AR model for the Pan image are the same as those that represent the MS images, due to spectral correlation, the proposed technique exploits the learnt parameter values in the context of a proper regularization technique to estimate the high spatial resolution fields for the MS bands. This results in a combination of the spectral characteristics of the low-resolution MS data with the high spatial resolution of the Pan image. The main advantages of the proposed technique are: 1) unlike standard methods proposed in the literature, it requires no registration between the Pan and the MS images; 2) it effectively models the texture of the scene during the fusion process; 3) it shows very small spectral distortion (as it is less affected, compared to standard methods, by the specific digital numbers of the pixels in the Pan image, since it exploits the parameters learnt from the Pan image rather than the actual Pan digital numbers for fusion); and 4) it can be used in critical situations in which the Pan and the MS images are acquired (possibly by different sensors) over slightly different areas. Quantitative experimental results obtained using Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and QuickBird images demonstrate the effectiveness of the proposed method.
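The parameter-learning step could be sketched roughly as below: a least-squares fit of a 4-neighbour AR model on the Pan image, whose coefficients would then be reused to regularize the MS estimation. This is a loose stand-in for the paper's actual model, with an assumed neighbourhood structure and interface.

```python
import numpy as np

def learn_ar_params(pan):
    """Least-squares fit of a 4-neighbour autoregressive model on the Pan
    image: x[i, j] ~ a1*up + a2*down + a3*left + a4*right."""
    x = pan.astype(float)
    centre = x[1:-1, 1:-1].ravel()              # interior pixels as targets
    neigh = np.stack([x[:-2, 1:-1].ravel(),     # up
                      x[2:, 1:-1].ravel(),      # down
                      x[1:-1, :-2].ravel(),     # left
                      x[1:-1, 2:].ravel()],     # right
                     axis=1)
    coeffs, *_ = np.linalg.lstsq(neigh, centre, rcond=None)
    return coeffs                               # (4,) AR coefficients
```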