2007
This report addresses the problem of calibrating a single camera using several 3D ground control points and corresponding image points. Analytical and numerical approaches to approximate the desired camera model parameters are discussed.
2000
A new camera calibration method based on the DLT model is presented in this paper. A full set of camera parameters can be obtained from multiple views of a coplanar calibration object whose control-point coordinates are measured in 2D. The method consists of four steps that are iterated until convergence. The proposed approach is numerically stable and robust compared with calibration techniques based on nonlinear optimization over all camera parameters. A practical implementation of the proposed method has been evaluated on both synthetic and real data. A MATLAB toolbox has been developed and is available on the Web.
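The linear core of DLT-based calibration can be sketched in a few lines of numpy: each 3D-to-2D correspondence contributes two homogeneous linear equations in the twelve entries of the projection matrix, solved via SVD. This is a minimal illustration of the DLT step only, not the paper's four-step iterative procedure.

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate the 3x4 camera projection matrix P from n >= 6
    correspondences between 3D points X (n,3) and image points x (n,2).
    Each correspondence yields two homogeneous equations in the 12
    entries of P; the stacked system is solved by SVD."""
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        Xh = [Xw, Yw, Zw, 1.0]
        rows.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
        rows.append([0.0] * 4 + Xh + [-v * c for c in Xh])
    A = np.asarray(rows)
    # Solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Project 3D points X (n,3) with P and dehomogenize to pixels."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = Xh @ P.T
    return xh[:, :2] / xh[:, 2:3]
```

With noise-free, non-coplanar points the estimated matrix reproduces the ground-truth projection up to scale; with real data the DLT result is typically used to initialize a nonlinear refinement.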
1997
In this paper an effective camera calibration technique based on a 3D point grid is presented. The properties of structured space points, defined as a 3D point grid, and their relations under perspective transformation are analyzed. From the correspondence between these 3D points and their image points, a calibration is performed that not only computes the camera model parameters by a simple linear method from the independent points of the grid, but also simultaneously verifies, modifies, and regulates the extrinsic orientation and position parameters using the structural constraints implied in these points. On the basis of this verification, modification, and regulation, the intrinsic camera parameters can be computed with more objective criteria. Experimental results show the necessity and advantage of verifying and modifying the orientation and position parameters, and demonstrate that our calibration technique greatly improves the accuracy of both the extrinsic and intrinsic parameters.
Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition
In geometrical camera calibration the objective is to determine a set of camera parameters that describe the mapping between 3-D reference coordinates and 2-D image coordinates. Various methods for camera calibration can be found in the literature. However, surprisingly little attention has been paid to the whole calibration procedure, i.e., control point extraction from images, model fitting, image correction, and the errors originating in these stages. The main interest has been in model fitting, although the other stages are also important. In this paper we present a four-step calibration procedure that extends the two-step method: an additional step compensates for the distortion caused by circular features, and another corrects the distorted image coordinates. The image correction is performed with an empirical inverse model that accurately compensates for radial and tangential distortions. Finally, a linear method for solving the parameters of the inverse model is presented.
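The radial-plus-tangential distortion model referred to here is the widely used Brown-Conrady form. The paper fits a linear empirical inverse model; a common, simpler alternative sketched below inverts the forward model by fixed-point iteration (coefficient names k1, k2, p1, p2 follow the usual convention; this is not the paper's linear inverse).

```python
import numpy as np

def distort(xn, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized image coordinates xn (n,2), Brown-Conrady style."""
    x, y = xn[:, 0], xn[:, 1]
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.stack([xd, yd], axis=1)

def undistort(xd, k1, k2, p1, p2, iters=20):
    """Invert distort() by fixed-point iteration: solve x = xd - D(x),
    where D(x) = distort(x) - x is the distortion offset."""
    xn = xd.copy()
    for _ in range(iters):
        delta = distort(xn, k1, k2, p1, p2) - xn
        xn = xd - delta
    return xn
```

For moderate distortion the iteration converges quickly; strongly distorted lenses may need more iterations or a Newton-style solver.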
2005
This paper presents four alternative ways of initializing camera parameters using essentially the same calibration tools (orthogonal wands) as today's popular 3D kinematic systems. The key idea presented here is to sweep the volume with an orthogonal pair or triad of wands instead of a single one. The proposed methods exploit the orthogonality of the wands and set up familiar linear constraints on certain entities of projective geometry. The extracted initial camera parameter values are closer to the refined ones, which should generally ensure faster and safer convergence during the refinement procedure. Even without refinement, which is sometimes unnecessary, reconstruction results using our initial sets are better than those obtained with commonly used initial values. Moreover, the entire calibration procedure is shortened, since the usual two calibration phases become one.
Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)
We present a general algorithm for plane-based calibration that can deal with arbitrary numbers of views and calibration planes. The algorithm can simultaneously calibrate different views from a camera with variable intrinsic parameters, and it is easy to incorporate known values of intrinsic parameters. For some minimal cases, we describe all singularities, naming the parameters that cannot be estimated. Experimental results exhibit these singularities while showing good performance in non-singular conditions. Several applications of plane-based 3D geometry inference are discussed as well.
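The linear step common to plane-based calibration methods constrains the image of the absolute conic via the plane-to-image homographies. The sketch below follows the classical Zhang-style formulation for the special case of fixed intrinsics and at least three views, with the homographies assumed given (this paper's algorithm handles more general variable-intrinsics cases not covered here).

```python
import numpy as np

def _v(H, i, j):
    """Constraint row built from columns i and j of homography H, such
    that h_i^T B h_j = v_ij . b with b = (B11,B12,B22,B13,B23,B33)."""
    h = H.T  # h[i] is column i of H
    return np.array([
        h[i][0] * h[j][0],
        h[i][0] * h[j][1] + h[i][1] * h[j][0],
        h[i][1] * h[j][1],
        h[i][2] * h[j][0] + h[i][0] * h[j][2],
        h[i][2] * h[j][1] + h[i][1] * h[j][2],
        h[i][2] * h[j][2],
    ])

def intrinsics_from_homographies(Hs):
    """Recover the intrinsic matrix K from >= 3 plane-to-image
    homographies, assuming fixed intrinsics. Each homography gives two
    linear constraints on B ~ K^-T K^-1; K is then read off in closed
    form."""
    V = []
    for H in Hs:
        V.append(_v(H, 0, 1))              # orthogonality constraint
        V.append(_v(H, 0, 0) - _v(H, 1, 1))  # equal-norm constraint
    _, _, Vt = np.linalg.svd(np.asarray(V))
    B11, B12, B22, B13, B23, B33 = Vt[-1]
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
    gamma = -B12 * alpha ** 2 * beta / lam
    u0 = gamma * v0 / beta - B13 * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0], [0.0, beta, v0], [0.0, 0.0, 1.0]])
```

Degenerate view configurations (e.g. planes related by pure translation) are exactly the singularities the paper enumerates; generic rotations between views avoid them.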
ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2016
Besides the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the desire to reduce the number of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures.

This paper aims to show the workflow of the various calibration steps and will present examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors relative to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables better post-calibration in order to detect variations in the individual camera calibrations and other mechanical effects.

The sensor shown (Oblique Imager) is based on 5 Phase One cameras, which were...
Plane-based calibration algorithms have been widely adopted for camera calibration. These algorithms offer robustness compared to self-calibration and flexibility compared to traditional algorithms that require a 3D calibration pattern. While the common assumption is that the intrinsic parameters are fixed during the calibration process, limited consideration has been given to the general case in which some intrinsic parameters may be varying while others may be fixed or known. We first discuss these general cases for camera calibration. Using a counting argument, we enumerate all cases where plane-based calibration may be utilized and list the number of images required for each of these cases.
Camera calibration is an essential task in 3D computer vision and is needed for various kinds of augmented or virtual reality applications, where the distance between a real-world point and the camera needs to be known. A robust calibration technique using contours of Surfaces of Revolution is presented in this paper. Relevant contributions are shown and discussed. Additionally, the algorithm is compared to a selection of three standard camera calibration implementations including the camera calibration toolbox for Matlab, multi-camera self-calibration, and geometric and photometric calibration for augmented reality. The evaluation is performed using low cost cameras and is based on the stability of the calculation of intrinsic parameters (focal length and principal point). The results of the evaluation are shown and further improvements are discussed.
1994
Abstract This paper addresses the problem of calibrating a camera mounted on a robot arm. The objective is to estimate the camera's intrinsic and extrinsic parameters. These include the relative position and orientation of the camera with respect to the robot base, as well as the relative position and orientation of the camera with respect to a pre-defined world frame. A calibration object with a known 3D shape is used together with two known movements of the robot.
2002
This paper addresses the problem of calibrating a pinhole camera from images of a surface of revolution.
2010
The classic perspective projection is mostly used when calibrating a camera. Although this approach is well developed and often suitable, it is not necessarily adequate to model every camera system, such as fish-eye or catadioptric cameras. The perspective projection is not applicable when fields of view reach 180° and beyond. In this case an appropriate model for the particular non-perspective camera has to be used. For an unknown camera system, a generic camera model is required. This paper discusses a variety of parametric and generic camera models. These models are subsequently validated using different camera systems. A unified approach to deriving initial parameter guesses for subsequent parameter optimisation is presented. Experimental results show that generic camera models perform as accurately as a particular parametric model would. Furthermore, no prior knowledge of the camera system is needed.
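The limitation described above is easy to see by comparing projection functions: the pinhole model maps a ray at angle theta from the optical axis to radius f*tan(theta), which diverges at 90°, while a fisheye model such as the equidistant projection maps it to f*theta, which stays finite. A minimal numpy sketch (parameter names f, cx, cy are generic, not from any particular paper):

```python
import numpy as np

def project_perspective(X, f, cx, cy):
    """Pinhole projection of 3D points X (n,3): radius grows as
    f * tan(theta), so it diverges as theta approaches 90 degrees."""
    x = f * X[:, 0] / X[:, 2] + cx
    y = f * X[:, 1] / X[:, 2] + cy
    return np.stack([x, y], axis=1)

def project_equidistant(X, f, cx, cy):
    """Equidistant fisheye projection: radius is f * theta, which
    remains finite for theta up to (and beyond) 90 degrees."""
    theta = np.arctan2(np.hypot(X[:, 0], X[:, 1]), X[:, 2])  # off-axis angle
    phi = np.arctan2(X[:, 1], X[:, 0])                        # azimuth
    r = f * theta
    return np.stack([cx + r * np.cos(phi), cy + r * np.sin(phi)], axis=1)
```

Near the optical axis the two models agree (tan(theta) ≈ theta), which is why a perspective model can seem adequate for narrow fields of view and only breaks down for wide-angle lenses.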
2013
Marker-based optical motion capture systems use multiple cameras to determine the 3D positions of markers. Precise knowledge of the position and orientation of the cameras plays a crucial role in accurate marker recognition and position calculation. Three camera calibration methods are presented in this paper, including a new projector-based method. The three calibration methods achieve different precision. Measurement results are presented and compared.
1998
Perspective camera calibration has been a research subject for a large group of researchers over the last decades, and as a result several camera calibration methodologies can be found in the literature. However, only a small number of those methods base their approaches on the use of monoplane calibration points. We developed one such methodology, using monoplane calibration points to perform an explicit three-dimensional (3-D) camera calibration. This methodology is based on an iterative approach. To avoid the singularities that arise in the calibration equations when monoplane calibration points are used, the method computes the calibration parameters in a multistep procedure and requires a first-guess solution for the intrinsic parameters. These intrinsic parameters are updated and their accuracy increased during the iterative procedure. All required computations are linear, and in addition to the extrinsic parameters (rotation and translation) the proposed method also computes the first coefficient of the radial distortion (k1) and the skew angle. A first-guess value for the focal length of the lens is required, but its value is iteratively updated using the Gauss lens model. This methodology also includes the uncertain horizontal image scale factor (S x ) in the set of calibration parameters to be computed, which makes the approach independent of the accuracy of the horizontal scale factor. The proposed methodology has the advantage that it can be used with monoplane calibration data with no restrictions on the pose geometry of the camera.
Journal of Global Research …, 2011
This paper deals with calibrating a camera to determine the intrinsic and extrinsic camera parameters, which are necessary to recover depth estimates of an object in a stereovision system.
Wiley Encyclopedia of Computer Science and Engineering, 2007
Geometric camera calibration is a prerequisite for making accurate geometric measurements from image data, and it is hence a fundamental task in computer vision. This article discusses the camera models and calibration methods used in the field. The emphasis is on conventional calibration methods, where the parameters of the camera model are determined using images of...
Computer Vision, 2008
2002
... Camera calibration is the first step towards computational computer vision. ... light emitter) permits crossing both optical rays to get the metric position of the 3D points [4, 5] ... If these measurements are stored, a temporal analysis allows the handler to determine the trajectory of the ...
2005
A calibration procedure for accurately determining the pose and internal parameters of several cameras is described. Multiple simultaneously-captured sets of images of a calibration object in different poses are used by the calibration procedure. Coded target patterns, which serve as control points, are distributed over the surface of the calibration object. The observed positions of these targets within the images can be automatically determined by means of code band patterns. The positions of the targets across the multiple images are then used to infer the camera parameters, as well as the 3D geometrical structure of the targets on the calibration object (thus avoiding the expense of a calibration object with accurately known 3D structure). Results for a three-camera system show RMS (root-mean-square) deviations of less than five microns of the inferred positions of 54 control points, distributed on the surface of a 50 mm cube, from their expected positions on a flat surface. The...
2002
Camera calibration is an important and sensitive step in 3D environment reconstruction by stereovision. Small errors in the initial estimation of the camera parameters (especially in the estimation of the principal point) can give rise to large errors in the 3D measurements, which increase with the working distance. There are many general-purpose methods for calibrating individual cameras. We propose a method for refining the results of such methods by inferring stereo information from a controlled 3D scene. The parameters of the two cameras composing a stereovision system are recalibrated together by minimizing the depth reconstruction error of the control points from the reference scene.
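The depth-reconstruction cost minimized by such a refinement is built on point triangulation: given both cameras' projection matrices and a matched image pair, recover the 3D point and compare it to the known control point. A minimal numpy sketch of the linear (DLT) triangulation building block follows; the paper's actual joint recalibration loop around it is not shown.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed at pixel x1 in
    camera P1 and pixel x2 in camera P2 (each P is 3x4). Returns the 3D
    point minimizing the algebraic error of the stacked constraints."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

A refinement in the spirit of the paper would perturb the two cameras' parameters and minimize the sum of squared distances between triangulated and known control points, e.g. with a nonlinear least-squares solver.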
In computer vision, camera calibration is the procedure that determines how a camera projects a 3D object onto the image plane. This process is necessary in applications where metric information about the environment must be derived from images. Many methods have been developed in recent years to calibrate cameras, but very few works (e.g. Tsai [10], Salvi and Armangué [8], Lai [7] or Isern [5]) have compared such methods or provided the user with hints on the suitability of certain algorithms under particular circumstances. This work presents a comparative analysis of eight calibration methods for static cameras using a pattern as reference: the Faugeras [4], Tsai [9] (classic and optimized versions), Linear, Ahmed [1] and Heikkilä [6] methods, which use a single view of a non-planar pattern; Batista's method [3], which uses a single view of a planar pattern; and Zhang's method [11], which uses multiple views of a planar pattern.