2014
I wish to thank my advisor Evgeny Strekalovskiy for answering my numerous doubts during the elaboration of this thesis. I also wish to thank Prof. Dr. Cremers and the Chair of Computer Vision for allowing me to do a Master's Thesis with them at the Technische Universität München. I would also like to express my special gratitude to my brother, without whom I would not have discovered the pleasures of informatics, as well as to Raúl, who served as an inspiration for how to manage a Master's Thesis.
ijcset.com
I. INTRODUCTION Super-resolution image restoration refers to the image processing algorithms which produce high-quality, super-resolution (SR) images from a set of low-quality, low-resolution (LR) images. It is generally regarded as consisting of three steps: image registration, ...
2010
The subject of extracting particular high-resolution data from low-resolution images is one of the most important digital image processing applications of recent years and has attracted much research. This paper shows how to improve the resolution of real images when the given image is in a degraded form. In the super-resolution restoration problem, an improved-resolution image is restored from several geometrically warped, blurred, noisy, and downsampled measured images. To obtain this result we use an iterative nonlinear blind-deconvolution maximum-likelihood restoration algorithm, imposing the complete low-frequency data of the original low-resolution image and the high-resolution data present in only a fraction of the image, which suppresses noise amplification and avoids ringing in the deblurred image. Our results show that a high-resolution real image derived from super-resolution methods enhances spatial resolution and provides substantially more image detail.
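For readers who want a concrete picture of an iterative maximum-likelihood restoration step, the sketch below shows a Richardson-Lucy style update in Python with NumPy/SciPy. It is only an illustration of the general idea, not the authors' exact blind-deconvolution algorithm: the PSF is assumed known here (a blind variant would alternately re-estimate it), and all names are placeholders.

    import numpy as np
    from scipy.signal import fftconvolve

    def ml_deconvolve(observed, psf, iterations=30):
        """Richardson-Lucy style maximum-likelihood deblurring (illustrative sketch)."""
        observed = observed.astype(np.float64)
        estimate = np.full_like(observed, observed.mean())   # flat initial guess
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode='same')
            ratio = observed / (blurred + 1e-12)             # avoid division by zero
            estimate *= fftconvolve(ratio, psf_mirror, mode='same')
        return estimate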
2009
Abstract. In this paper we present a new method for super-resolution of depth video sequences using high-resolution color video. Here we assume that the depth sequence does not contain the outlier points which can be present in depth images. Our method is based on multiresolution decomposition and uses multiple frames to search for the most similar depth segments to improve the resolution of the current frame. The first step is the wavelet decomposition of both the color and the depth images.
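As a rough illustration of that first step, a single-level 2D wavelet decomposition of a grayscale color frame and its depth frame could be computed with PyWavelets as sketched below; the variable names and the choice of the Haar wavelet are assumptions, not taken from the paper.

    import pywt  # PyWavelets

    def decompose(color_gray, depth, wavelet='haar'):
        """One level of 2D DWT for a grayscale color frame and its depth frame."""
        c_approx, (c_h, c_v, c_d) = pywt.dwt2(color_gray, wavelet)
        d_approx, (d_h, d_v, d_d) = pywt.dwt2(depth, wavelet)
        # The approximation and detail bands can then be compared across frames
        # to find the most similar depth segments.
        return (c_approx, c_h, c_v, c_d), (d_approx, d_h, d_v, d_d)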
2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008
Time-of-flight (TOF) cameras robustly provide depth data of real world scenes at video frame rates. Unfortunately, currently available camera models provide rather low X-Y resolution. Also, their depth measurements are starkly influenced by random and systematic errors which renders them inappropriate for high-quality 3D scanning. In this paper we show that ideas from traditional color image superresolution can be applied to TOF cameras in order to obtain 3D data of higher X-Y resolution and less noise. We will also show that our approach, which works using depth images only, bears many advantages over alternative depth upsampling methods that combine information from separate high-resolution color and low-resolution depth data.
IEEE Transactions on Image Processing, 1996
Abstract. Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We ...
Proceedings - International Conference on Image Processing, ICIP, 2006
In this paper, we apply supershapes and R-functions to surface recovery from 3D data sets. Individual supershapes are separately recovered from a segmented mesh. R-functions are used to perform Boolean operations between the reconstructed parts to obtain a single implicit equation of the reconstructed object that is used to define a global error reconstruction function. We present surface recovery results ranging from single synthetic data to real complex objects involving the composition of several supershapes and holes.
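For context, R-functions realize Boolean operations directly on implicit functions (here taken positive inside a shape and negative outside); the generic R0-system formulas below are a textbook sketch, not the paper's specific implementation.

    import numpy as np

    def r_union(f1, f2):
        """R-function union of two implicit fields (positive inside each shape)."""
        return f1 + f2 + np.sqrt(f1**2 + f2**2)

    def r_intersection(f1, f2):
        """R-function intersection of two implicit fields."""
        return f1 + f2 - np.sqrt(f1**2 + f2**2)

    def r_difference(f1, f2):
        """R-function difference: keep shape 1, remove shape 2."""
        return r_intersection(f1, -f2)

Composing the recovered supershape fields with such operations yields a single implicit equation for the whole object, on which a global reconstruction error can then be defined.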
In this paper, we propose a convex optimization framework for the simultaneous estimation of a super-resolved depth map and images from a single moving camera. The pixel measurement error in 3D reconstruction is directly related to the resolution of the images at hand. In turn, even a small measurement error can cause significant errors in reconstructing the 3D scene structure or camera pose. Therefore, enhancing image resolution can be an effective way to secure the accuracy as well as the resolution of 3D reconstruction. In the proposed method, depth map estimation and image super-resolution are formulated in a single energy minimization framework with a convex function and solved efficiently by a first-order primal-dual algorithm. Explicit inter-frame pixel correspondences are not required for our super-resolution procedure; thus we avoid a huge computation time and obtain a depth map improved in both accuracy and resolution, as well as high-resolution images, in reasonable time. The superiority of our algorithm is demonstrated by presenting improved depth map accuracy, image super-resolution results, and camera pose estimation.
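To make the term "first-order primal-dual algorithm" concrete, the sketch below applies the Chambolle-Pock scheme to the much simpler TV-L2 (ROF) denoising model. The paper's joint depth/super-resolution energy is considerably more involved, so this is only an illustration of the solver family; the step sizes are chosen to satisfy the usual tau*sigma*L^2 <= 1 condition with L^2 <= 8 for the discrete gradient.

    import numpy as np

    def grad(u):
        """Forward-difference gradient with Neumann boundary conditions."""
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:, :-1] = u[:, 1:] - u[:, :-1]
        gy[:-1, :] = u[1:, :] - u[:-1, :]
        return gx, gy

    def div(px, py):
        """Discrete divergence, the negative adjoint of grad."""
        dx = np.zeros_like(px); dy = np.zeros_like(py)
        dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
        dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
        return dx + dy

    def rof_primal_dual(f, lam=10.0, tau=0.25, sigma=0.5, iters=200):
        """Chambolle-Pock iterations for min_u TV(u) + lam/2 * ||u - f||^2."""
        u = f.copy(); u_bar = f.copy()
        px = np.zeros_like(f); py = np.zeros_like(f)
        for _ in range(iters):
            gx, gy = grad(u_bar)
            px += sigma * gx; py += sigma * gy
            norm = np.maximum(1.0, np.sqrt(px**2 + py**2))   # project dual onto |p| <= 1
            px /= norm; py /= norm
            u_old = u
            u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
            u_bar = 2.0 * u - u_old
        return u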
Superresolution (SR) is a method of enhancing image resolution by combining information from multiple images. The two main processes in superresolution are registration and image reconstruction, and both greatly affect the image quality of the superresolution result. Accurate registration is required to obtain a high-quality high-resolution image. This research proposes a collaboration between Phase-Based Image Matching (PBIM) registration and reconstruction using the Structure-Adaptive Normalized Convolution (SANC) algorithm and the Projection Onto Convex Sets (POCS) algorithm. PBIM was used to estimate the shift in the translational registration stage, with function fitting around the peak point to obtain a sub-pixel accurate shift. The results of this registration were then used for reconstruction. Three registration methods and two reconstruction algorithms were tested to find the most appropriate collaboration by measuring the Peak Signal-to-Noise Ratio (PSNR). The results showed that, of the collaborations of PBIM with the two reconstruction algorithms, SR with PBIM and POCS had an average PSNR of 32.12205, while the average PSNR of SR with the SANC algorithm was 32.07325. For every collaborative algorithm tested, PBIM registration with function fitting had a higher average PSNR than the Keren and Marcel registrations.
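Two of the building blocks mentioned above, phase-based translational registration and PSNR evaluation, can be sketched as follows; the sub-pixel function fitting around the correlation peak is omitted, and this is an assumed implementation, not the authors' code.

    import numpy as np

    def phase_correlation_shift(ref, img):
        """Integer translation (dy, dx) by which img is shifted relative to ref."""
        cross = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
        cross /= np.abs(cross) + 1e-12                 # keep phase only
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]  # wrap to signed shifts
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return dy, dx

    def psnr(reference, reconstructed, peak=255.0):
        """Peak Signal-to-Noise Ratio in dB."""
        mse = np.mean((reference.astype(np.float64) - reconstructed) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)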
2009
Superresolution algorithms typically transform the images into 1D vectors and perform operations on these vectors to obtain a high-resolution image. Transforming the images into vectors results in computations with large matrices. In this paper, we first propose a 2D model for superresolution that treats the images as matrices and hence reduces the computational complexity. In the second part of the paper, we apply this model to the superresolution of face images. Zhang et al. ...
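The computational saving of such a 2D model rests on the identity vec(A X B^T) = (B kron A) vec(X): a separable degradation can be applied as two small matrix products instead of one very large Kronecker-structured matrix acting on the vectorized image. A tiny, purely illustrative numerical check:

    import numpy as np

    m, n = 6, 8
    X = np.random.rand(m, n)     # image treated as a matrix
    A = np.random.rand(m, m)     # row operator (e.g. vertical blur/decimation)
    B = np.random.rand(n, n)     # column operator (e.g. horizontal blur/decimation)

    Y_2d = A @ X @ B.T                                        # two small products

    vecX = X.flatten(order='F')                               # column-major vectorization
    Y_1d = (np.kron(B, A) @ vecX).reshape((m, n), order='F')  # one huge matrix

    print(np.allclose(Y_2d, Y_1d))                            # True: same result, far less work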
Lecture Notes in Computer Science, 2013
We use multi-frame super-resolution, specifically, Shift & Add, to increase the resolution of depth data. In order to be able to deploy such a framework in practice, without requiring a very high number of observed low resolution frames, we improve the initial estimation of the high resolution frame. To that end, we propose a new data model that leads to a median estimation from densely upsampled low resolution frames. We show that this new formulation solves the problem of undefined pixels and further allows us to improve the performance of pyramidal motion estimation in the context of super-resolution without additional computational cost. As a consequence, it increases the motion diversity within a small number of observed frames, making the enhancement of depth data more practical. Quantitative experiments run on the Middlebury dataset show that our method outperforms state-of-the-art techniques in terms of accuracy and robustness to the number of frames and to the noise level.
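A bare-bones sketch of median-based Shift & Add, restricted to integer shifts on the high-resolution grid and an integer magnification factor r, is given below; real implementations, including the one described above, handle sub-pixel motion and undefined pixels, so this is only a simplified illustration with placeholder names.

    import numpy as np

    def shift_and_add_median(lr_frames, shifts, r):
        """Fuse registered LR frames on an HR grid and take the pixel-wise median.

        lr_frames : list of (h, w) arrays
        shifts    : list of (dy, dx) integer motions in HR-grid pixels
        r         : magnification factor
        """
        h, w = lr_frames[0].shape
        stack = np.empty((len(lr_frames), h * r, w * r))
        for k, (frame, (dy, dx)) in enumerate(zip(lr_frames, shifts)):
            up = np.kron(frame, np.ones((r, r)))            # dense nearest-neighbour upsampling
            stack[k] = np.roll(np.roll(up, dy, axis=0), dx, axis=1)
        return np.median(stack, axis=0)                     # robust pixel-wise median estimate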
IEEE transactions on image processing : a publication of the IEEE Signal Processing Society, 2017
Accurate and high-quality depth maps are required in many 3D applications, such as multi-view rendering, 3D reconstruction and 3DTV. However, the resolution of a captured depth image is much lower than that of its corresponding color image, which affects its application performance. In this paper, we propose a novel depth map super-resolution (SR) method that takes view synthesis quality into account. The proposed approach includes two main technical contributions. First, since the captured low-resolution (LR) depth map may be corrupted by noise and occlusion, we propose a credibility-based multi-view depth map fusion strategy, which considers the view synthesis quality and inter-view correlation, to refine the LR depth map. Second, we propose a view-synthesis-quality-based trilateral depth-map up-sampling method, which considers depth smoothness, texture similarity and view synthesis quality in the up-sampling filter. Experimental results demonstrate that the proposed method outp...
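As background for the up-sampling step, a plain joint bilateral upsampling filter (spatial and guidance-similarity weights only) is sketched below as a slow reference loop; the paper's trilateral filter additionally weights by view synthesis quality, which is omitted here, and the parameter values are assumptions.

    import numpy as np

    def joint_bilateral_upsample(depth_lr, gray_hr, r, sigma_s=1.0, sigma_c=10.0, win=2):
        """Upsample a low-res depth map guided by a high-res grayscale image."""
        H, W = gray_hr.shape
        h, w = depth_lr.shape
        out = np.zeros((H, W))
        for y in range(H):
            for x in range(W):
                yl, xl = y / r, x / r                      # position on the LR grid
                num = den = 0.0
                for qy in range(max(0, int(yl) - win), min(h, int(yl) + win + 1)):
                    for qx in range(max(0, int(xl) - win), min(w, int(xl) + win + 1)):
                        w_s = np.exp(-((yl - qy) ** 2 + (xl - qx) ** 2) / (2 * sigma_s ** 2))
                        gy, gx = min(H - 1, qy * r), min(W - 1, qx * r)
                        w_c = np.exp(-(float(gray_hr[y, x]) - float(gray_hr[gy, gx])) ** 2
                                     / (2 * sigma_c ** 2))
                        num += w_s * w_c * depth_lr[qy, qx]
                        den += w_s * w_c
                out[y, x] = num / den
        return out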
Proceedings of SPIE, 2006
Theoretical and practical limitations usually constrain the achievable resolution of any imaging device. Super-Resolution (SR) methods have been developed over the years to go beyond this limit by acquiring and fusing several low-resolution (LR) images of the same scene, producing a high-resolution (HR) image. The early works on SR, although occasionally mathematically optimal for particular models of data and noise, produced poor results when applied to real images. In this paper, we discuss two of the main issues ...
2013 IEEE International Conference on Image Processing, 2013
This work proposes to improve a traditional 3D reconstruction pipeline by combining it with two techniques: a real-time 3D modelling system that gives visual feedback during the acquisition stage, and a super-resolution spatio-temporal filter that improves the depth data. We use noisy RGBD images from an inaccurate real-time depth sensor throughout the process and later replace the color with data from a digital camera to generate realistic texture. Recently, several reconstruction approaches were presented acknowledging the potential of such real-time devices. However, those systems alone fail to retrieve small geometric characteristics of the object. The results of our experiments show a model with a considerable level of detail, unexpected from low-quality RGBD images. We aim to use our pipeline to reconstruct objects of cultural value and add them to a virtual museum's database for visualization purposes.
Applied Optics, 1992
The Lukosz technique of superresolution by spatial and temporal frequency interaction is extended. The effects of various misalignments and other errors are considered. An implementation of the technique is presented. Experimental results are given.
SPIE Newsroom, 2007
A new unifying approach to problems common to multiple degraded low-resolution images makes it possible to correct imperfections and to discover unseen details.
Applied optics, 2010
In optical imaging, the resolution of the imaging system is not only limited by the aperture and imperfection of the lens, but also by the nonzero CCD pixel size and the separation between two consecutive pixels. We deal only with geometric superresolution and assume that ...
IEEE Transactions on Image Processing, 2001
Superresolution reconstruction produces a high-resolution image from a set of low-resolution images. Previous iterative methods for superresolution [9], [11], [18], [27], [30] had not adequately addressed the computational and numerical issues for this ill-conditioned and typically underdetermined large scale problem.
2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009
Depth maps captured with time-of-flight cameras have very low data quality: the image resolution is rather limited and the level of random noise contained in the depth maps is very high. Therefore, such flash lidars cannot be used out of the box for high-quality 3D object scanning. To solve this problem, we present LidarBoost, a 3D depth superresolution method that combines several low resolution noisy depth images of a static scene from slightly displaced viewpoints, and merges them into a high-resolution depth image. We have developed an optimization framework that uses a data fidelity term and a geometry prior term that is tailored to the specific characteristics of flash lidars. We demonstrate both visually and quantitatively that LidarBoost produces better results than previous methods from the literature.
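To give a flavour of the data-fidelity-plus-prior formulation, the toy sketch below runs gradient descent on a quadratic data term with a simple smoothness prior. It is deliberately simplified and is not the LidarBoost energy, which uses aligned viewpoints, confidence weights and a feature-preserving prior; all operators and parameters here are assumptions for illustration.

    import numpy as np

    def downsample(D, r):
        """Decimation operator S: keep every r-th pixel."""
        return D[::r, ::r]

    def upsample_adjoint(Y, shape, r):
        """Adjoint S^T: place low-res values back onto the high-res grid."""
        U = np.zeros(shape)
        U[::r, ::r] = Y
        return U

    def laplacian(D):
        """Discrete Laplacian with replicated borders (smoothness prior)."""
        P = np.pad(D, 1, mode='edge')
        return P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:] - 4 * D

    def depth_sr(depth_frames, r, lam=0.1, step=0.1, iters=200):
        """Gradient descent on sum_k 1/2*||S D - Y_k||^2 + lam/2*||grad D||^2."""
        D = np.kron(depth_frames[0].astype(np.float64), np.ones((r, r)))  # initial guess
        for _ in range(iters):
            g = np.zeros_like(D)
            for Y in depth_frames:
                g += upsample_adjoint(downsample(D, r) - Y, D.shape, r)   # data term
            g -= lam * laplacian(D)                                       # prior term
            D -= step * g
        return D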
2017
Super-resolution is the process of recovering a high-resolution image from multiple low-resolution images of the same scene. The key objective of super-resolution (SR) imaging is to reconstruct a higher-resolution image based on a set of images, acquired from the same scene and denoted as ‘low-resolution’ images, to overcome the limitations and/or ill-posed conditions of the image acquisition process and to facilitate better content visualization and scene recognition. In this paper, we provide a comprehensive review of existing super-resolution techniques and highlight the future research challenges. This includes the formulation of an observation model and coverage of the dominant algorithm, Iterative Back Projection. We critique these methods and identify areas which promise performance improvements. Future directions for super-resolution algorithms are also discussed. Finally, results of the available methods are given.
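Since Iterative Back Projection is singled out as the dominant algorithm, a simplified version for already-registered low-resolution frames is sketched below; the Gaussian PSF, the step size and the choice of back-projection kernel are assumptions made for illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def iterative_back_projection(lr_images, r, sigma=1.0, iters=20, step=1.0):
        """Simplified IBP: blur + decimate as the forward model, residual back-projection."""
        hr = zoom(lr_images[0].astype(np.float64), r, order=1)    # bilinear initial guess
        for _ in range(iters):
            correction = np.zeros_like(hr)
            for y in lr_images:
                simulated = gaussian_filter(hr, sigma)[::r, ::r]  # apply forward model
                residual = y - simulated                          # per-frame error
                up = zoom(residual, r, order=0)                   # spread residual to HR grid
                correction += gaussian_filter(up, sigma)          # back-projection kernel
            hr += step * correction / len(lr_images)
        return hr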