In computer vision, camera calibration is the procedure of determining how a camera projects 3D points of the scene onto the image plane. This process is necessary in applications where metric information about the environment must be derived from images. Many methods have been developed over the years to calibrate cameras, but few works (e.g. Tsai [10], Salvi and Armangué [8], Lai [7] or Isern [5]) have compared such methods or provided the user with hints on the suitability of particular algorithms under particular circumstances. This work presents a comparative analysis of eight calibration methods for static cameras using a pattern as reference: the Faugeras [4], Tsai [9] (classic and optimized versions), Linear, Ahmed [1] and Heikkilä [6] methods, which use a single view of a non-planar pattern; Batista's method [3], which uses a single view of a planar pattern; and Zhang's method [11], which uses multiple views of a planar pattern.
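All eight methods estimate some variant of the pinhole projection model; a minimal, illustrative sketch of that model follows (the intrinsic and extrinsic values below are placeholders, not figures from any of the cited papers).

```python
import numpy as np

# Illustrative pinhole model: a 3D point X (world frame) maps to pixel
# coordinates via x ~ K [R | t] X, using homogeneous coordinates.
K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera aligned with world axes (placeholder)
t = np.array([[0.0], [0.0], [5.0]])    # pattern 5 units in front of the camera

X_world = np.array([[0.1], [0.2], [0.0], [1.0]])   # homogeneous 3D point
x = K @ np.hstack([R, t]) @ X_world                # 3x1 homogeneous image point
u, v = (x[:2] / x[2]).ravel()
print(u, v)                                        # pixel coordinates of the projection
```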
2008
Estimation of camera geometry represents an essential task in photogrammetry and computer vision. Various algorithms for recovering camera parameters have been reported and reviewed in the literature, relying on different camera models, algorithms and a priori object information. Simple 2D chessboard patterns, serving as test-fields for camera calibration, allow developing interesting automated approaches based on feature extraction tools. Several such 'calibration toolboxes' are available on the Internet, requiring varying degrees of human interaction. The present contribution extends our fully automatic algorithm implemented exclusively for camera calibration. The approach relies on image sets depicting chessboard patterns, on the sole assumption that these consist of alternating light and dark squares. Among points extracted via a sub-pixel Harris operator, the valid chessboard corner points are automatically identified and sorted into chessboard rows and columns by exploiting differences in brightness on either side of a valid line segment. All sorted nodes on each image are related to object nodes in systems possibly differing in rotation and translation (this is irrelevant for camera calibration). Using initial values for all unknown parameters estimated from the vanishing points of the two main chessboard directions, an iterative bundle adjustment recovers all camera geometry parameters (including image aspect ratio and skewness as well as lens distortions). Only points belonging to intersecting image lines are initially accepted as valid nodes; yet, after a first bundle solution, back-projection allows all detected nodes to be identified and introduced into the adjustment. Results for datasets from different cameras available on the Web and comparison with other accessible algorithms indicate that this fully automatic approach performs very well, at least with images typically acquired for calibration purposes (substantial image portions occupied by the chessboard pattern, no excessive irrelevant image detail).
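For the corner-detection step, a common off-the-shelf equivalent is OpenCV's chessboard finder with sub-pixel refinement; this is a minimal sketch using a hypothetical image file and an assumed 9x6 pattern, not the sub-pixel Harris pipeline described above.

```python
import cv2

# Hypothetical file name; any image of a chessboard calibration pattern works.
img = cv2.imread("chessboard.png", cv2.IMREAD_GRAYSCALE)

# Number of *inner* corners per row/column of the pattern (assumed here: 9x6).
pattern_size = (9, 6)
found, corners = cv2.findChessboardCorners(img, pattern_size)

if found:
    # Refine to sub-pixel accuracy, analogous in spirit to the sub-pixel
    # Harris detection described above (different operator, same goal).
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
```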
2005
This paper presents four alternative ways of initializing camera parameters using essentially the same calibration tools (orthogonal wands) as today's popular 3D kinematic systems. The key idea presented here is to sweep the volume with an orthogonal pair or triad of wands instead of a single one. The proposed methods exploit the orthogonality of the wands and set up familiar linear constraints on certain entities of projective geometry. The extracted initial camera parameter values are closer to the refined ones, which should generally ensure faster and safer convergence during the refinement procedure. Even without refinement, which is sometimes unnecessary, reconstruction results using our initial sets are better than those obtained with commonly used initial values. Moreover, the entire calibration procedure is shortened, since the usual two calibration phases become one.
Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)
We present a general algorithm for plane-based calibration that can deal with arbitrary numbers of views and calibration planes. The algorithm can simultaneously calibrate different views from a camera with variable intrinsic parameters, and it is easy to incorporate known values of intrinsic parameters. For some minimal cases, we describe all singularities, naming the parameters that cannot be estimated. Experimental results of our method exhibit these singularities while revealing good performance in non-singular conditions. Several applications of plane-based 3D geometry inference are discussed as well.
Many methods have been developed to calibrate cameras, but few works have compared such methods or provided the user with hints on the suitability of certain algorithms under particular circumstances. This work presents a comparative analysis of eight methods for calibrating cameras using a pattern as reference. The paper concentrates on the stability and accuracy of these methods when the pattern is relocated or the camera configuration varies. The study was carried out with real and synthetic images, using, whenever possible, the code made available by the methods' authors on the WWW. The experiments demonstrate that most of these methods are not stable, in the sense that the intrinsic parameters returned by a calibration method suffered important variations under small displacements of the camera relative to the calibration pattern. Similar results were obtained for the extrinsic parameters when the camera only changed its internal configuration (i.e. when it zoomed in or out) but kept its position relative to the calibration pattern constant. In addition, this study shows that image disparity is not an indicator of the reliability of the methods: in spite of the fact that the majority of the methods showed similar levels of global error, the calibrated values obtained for the intrinsic and extrinsic parameters varied substantially among these methods for the same set of calibration images.
Computer Vision and Image Understanding, 1996
This paper presents an original approach to the problem of camera calibration using a calibration pattern. It consists of directly searching for the camera parameters that best project the three-dimensional points of a calibration pattern onto intensity edges in an image of this pattern, without explicitly extracting the edges. In classical approaches, features are first extracted from the image by means of standard image analysis techniques; these features are generally points or lines, but conics can also be used. They are then given as input to an optimization process which searches for the projection parameters P that best project the three-dimensional model onto them. We will not describe in detail the different methods that have been developed; detailed reviews of the main existing approaches can be found in [15, 16]. We just remark that the approaches can be classified into several categories, with respect to: the camera model (most existing calibration methods assume that the camera follows the pinhole model, while some of them, mostly in photogrammetry, consider additional parameters that model image distortions; good studies of the different geometrical distortion models exist), and the optimization process (linear optimization processes are often used in computer vision because they are faster). In contrast, considering edges as the maxima of the intensity gradient or zero-crossings of the Laplacian, we express the whole calibration process as a one-stage optimization problem and solve it with a classical iterative optimization technique. Since our approach does not require explicit feature extraction before estimating the camera parameters, it is easier to implement and to use, less dependent on the type of calibration pattern that is used, and more robust. First, we describe the details of the approach. Then, we show some experiments in which two implementations of our approach and two classical two-stage approaches are compared. Tests on real and synthetic data allow us to characterize our approach in terms of convergence, sensitivity to the initial conditions, reliability, and accuracy.
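A toy sketch of the kind of one-stage criterion described above: the camera parameters are scored directly by the image-gradient magnitude at the projected positions of the model points, with no explicit edge extraction. The parameterization, optimizer, file name and model points below are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np
from scipy.optimize import minimize

img = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
grad_mag = cv2.magnitude(gx, gy)          # gradient magnitude image

# A few 3D points assumed to lie on edges of the calibration pattern (metres).
model_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                      [0.0, 0.1, 0.0], [0.1, 0.1, 0.0]], dtype=np.float32)

def cost(params):
    # params = [f, cx, cy, rx, ry, rz, tx, ty, tz]; a single focal length,
    # no distortion -- a richer model would add further terms.
    f, cx, cy = params[0:3]
    rvec, tvec = params[3:6], params[6:9]
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)
    proj, _ = cv2.projectPoints(model_pts, rvec, tvec, K, None)
    proj = proj.reshape(-1, 2)
    # Sample gradient magnitude at the projected positions (nearest pixel);
    # maximizing it pulls the projections onto intensity edges.
    u = np.clip(np.round(proj[:, 0]).astype(int), 0, img.shape[1] - 1)
    v = np.clip(np.round(proj[:, 1]).astype(int), 0, img.shape[0] - 1)
    return -float(grad_mag[v, u].sum())

x0 = np.array([800.0, 320.0, 240.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0])
result = minimize(cost, x0, method="Nelder-Mead")
```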
ArXiv, 2019
Camera calibration is a crucial prerequisite in many applications of computer vision. In this paper, a new, geometry-based camera calibration technique is proposed which resolves two main issues associated with the widely used Zhang's method: (i) the lack of guidelines to avoid outliers in the computation and (ii) the assumption of fixed camera focal length. The proposed approach is based on the closed-form solution of principal lines (PLs), with their intersection being the principal point, while each PL concisely represents the relative orientation/position (up to one degree of freedom for both) between a special pair of coordinate systems of the image plane and the calibration pattern. With such analytically tractable image features, the computations associated with the calibration are greatly simplified, while the guidelines in (i) can be established intuitively. Experimental results for synthetic and real data show that the proposed approach compares favorably with Zhang's method.
2006 IEEE International Conference on Video and Signal Based Surveillance, 2006
Camera calibration estimates the intrinsic and extrinsic parameters of a camera. Most object-based calibration methods use a 3D or 2D pattern. A novel and more flexible 1D object-based calibration was introduced only a couple of years ago, but merely for the estimation of intrinsic parameters. The estimation of extrinsic parameters is essential when multiple cameras simultaneously take images from different view angles and the relative locations of the cameras must be known. Though it is relatively simple with a 2D or 3D calibration pattern, the estimation of extrinsic parameters is not obvious with a 1D pattern. In this paper, we perform 1D camera calibration involving both intrinsic and extrinsic parameters.
Proceedings of the Thirteenth Annual South …, 2002
A calibration procedure for accurately determining the pose and internal parameters of several cameras is described. Multiple simultaneously captured sets of images of a calibration object in different poses are used by the calibration procedure. Coded target patterns, which serve as control points, are distributed over the surface of the calibration object. The observed positions of these targets within the images can be determined automatically by means of code band patterns. The positions of the targets across the multiple images are then used to infer the camera parameters, as well as the 3D geometrical structure of the targets on the calibration object (thus avoiding the expense of a calibration object with accurately known 3D structure). Results for a three-camera system show RMS (root-mean-square) deviations of less than five microns between the inferred positions of 54 control points, distributed on the surface of a 50 mm cube, and their expected positions on a flat surface. The RMS difference between the positions of 1423 observed control points and the positions predicted by a 330-parameter model of the camera system and calibration object was 0.09 pixels. An implementation of Tsai's calibration algorithm (see cs.cmu.edu/afs/cs.cmu.edu/user/rgw/www/TsaiCode.html) and Jean-Yves Bouguet's MATLAB implementation based on Zhang's algorithms [11] (see http://www.vision.caltech.edu/bouguetj/calib_doc/index.html) are both popular. Bouguet's implementation has also been ported to C and incorporated into Intel's OpenCV library (see http://sourceforge.net/projects/opencvlibrary/).
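The 0.09-pixel figure quoted above is an RMS reprojection error; a minimal sketch of how such a figure can be computed with OpenCV follows (the helper function and its argument layout are an illustrative choice, not part of the paper).

```python
import cv2
import numpy as np

def rms_reprojection_error(object_pts, image_pts, K, dist, rvec, tvec):
    """RMS distance (pixels) between observed image points and the positions
    predicted by projecting the 3D object points with the calibrated model."""
    projected, _ = cv2.projectPoints(object_pts, rvec, tvec, K, dist)
    diff = projected.reshape(-1, 2) - image_pts.reshape(-1, 2)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```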
2010
The classic perspective projection is most often used when calibrating a camera. Although this approach is well developed and often suitable, it is not adequate for modelling every camera system, such as fish-eye or catadioptric cameras. The perspective projection is not applicable when the field of view reaches 180° and beyond. In this case an appropriate model for the particular non-perspective camera has to be used; for an unknown camera system, a generic camera model is required. This paper discusses a variety of parametric and generic camera models. These models are subsequently validated using different camera systems. A unified approach for deriving initial parameter guesses for the subsequent parameter optimisation is presented. Experimental results show that generic camera models perform as accurately as a particular parametric model would, and no prior knowledge about the camera system is needed.
Journal of the European Optical Society: Rapid Publications, 2014
Generic camera calibration is a method to characterize vision sensors by describing a line of sight for every single pixel. This procedure frees the calibration process from the restriction to pinhole-like optics that arises in the common photogrammetric camera models. Generic camera calibration also enables the calibration of high-frequency distortions, which is beneficial for high-precision measurement systems. The calibration process is as follows: to collect sufficient data for calculating a line of sight for each pixel, active grids are used as calibration reference rather than static markers such as corners of chessboard patterns. A common implementation of active grids is sinusoidal fringes presented on a flat TFT display. So far, the displays have always been treated as ideally flat. In this work we propose new and more sophisticated models to account for additional properties of the active grid display: the refraction of light in the glass cover is taken into account as well as a possible deviation of the top surface from absolute flatness. To examine the effectiveness of the new models, an example fringe projection measurement system is characterized with the resulting calibration methods and with the original generic camera calibration. Evaluating measurements using the different calibration methods shows that the extended display model substantially improves the uncertainty of the measurement system.
Machine Vision and Applications, 2007
Determining camera calibration parameters is a time-consuming task despite the availability of calibration algorithms and software. A set of correspondences between points on the calibration target and the camera image(s) must be found, usually through a manual or manually guided process. Most calibration tools assume that the correspondences are already found. We present a system which allows a camera to be calibrated merely by passing it in front of a panel of self-identifying patterns. This calibration scheme uses an array of fiducial markers which are detected with a high degree of confidence; each detected marker provides one or four correspondence points. Experiments were performed calibrating several cameras in a short period of time with no manual intervention. This marker-based calibration system was compared to one using the OpenCV chessboard grid finder, which also finds correspondences automatically. We show how our new marker-based system finds the calibration pattern more robustly and how it provides more accurate intrinsic camera parameters.
Anuário do Instituto de Geociências - UFRJ, 2020
Calibration of a non-metric digital camera is a procedure that aims to model the systematic errors caused by lens distortion arising from the manufacturing and assembly process. This procedure should be carried out to improve the accuracy of a project. Moreover, in photogrammetry it is essential to know the interior orientation parameters in order to model the distortions and generate reliable cartographic products. Camera calibration is necessary for non-metric cameras because of their low geometric stability. In the case of a non-metric digital camera, the interior orientation parameters are sensitive to external exposure and other factors; these characteristics create the need to calibrate the sensor before any data acquisition. Calibration methods differ: some approaches require more time, more elaborate data and sophisticated algorithms, such as calibration using ground control points; on the other hand, there are faster and more automated approaches that apply computer vision algorithms to reduce the errors introduced by the operator. In this article, the positional quality of two different camera calibration methods was investigated. The first method, called "GCP-based", is based on control points obtained with a total station and processed with the Pix4D and Agisoft PhotoScan software. The second method, called "Chessboard-based", is based on computer vision algorithms that estimate the parameters using a chessboard with black-and-white patterns and known dimensions. As a result, the planimetric RMSE was compared against the reference coordinates obtained with the total station; the best accuracy was achieved with the Agisoft PhotoScan software, with an RMSE of 1.4 cm.
Journal of Global Research …, 2011
This paper deals with calibrating a camera to find the intrinsic and extrinsic camera parameters, which are necessary to recover depth estimates of an object in a stereovision system.
A new procedure for calibrating a camera through the observation of a flat pattern from different points of view is proposed. The effects of lens distortion on the estimation of a homography between the model plane and its image are considered, yielding a simultaneous estimate of this homography and the distortion coefficients. As the distortion is mainly radial, the coordinates of the principal point must be estimated beforehand so that they can be used in the estimation of the distortion coefficients. A comparison of this method with current non-linear optimization methods is presented in the paper.
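A minimal sketch of the plane-to-image homography estimation that the abstract builds on, using OpenCV; the point correspondences below are placeholders, and the simultaneous distortion estimation described above is not shown.

```python
import cv2
import numpy as np

# Placeholder correspondences: planar model points (Z = 0, metres) and their
# observed pixel positions in one image of the pattern.
model_xy = np.array([[0.0, 0.0], [0.1, 0.0], [0.1, 0.1], [0.0, 0.1]],
                    dtype=np.float32)
image_uv = np.array([[102.0, 98.0], [310.0, 101.0], [305.0, 295.0],
                     [99.0, 290.0]], dtype=np.float32)

# Homography mapping the model plane to the image; the abstract's method
# additionally refines H together with the distortion coefficients.
H, mask = cv2.findHomography(model_xy, image_uv, method=0)
```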
2002
This paper addresses the problem of calibrating a pinhole camera from images of a surface of revolution.
2007
This report addresses the problem of calibrating a single camera using several 3D ground control points and corresponding image points. Analytical and numerical approaches to approximate the desired camera model parameters are discussed.
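One classical analytical approach to this resectioning problem is the Direct Linear Transform (DLT); the sketch below is a generic textbook formulation, not necessarily the exact algorithm of the report.

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Direct Linear Transform: estimate the 3x4 projection matrix P from
    n >= 6 correspondences between 3D points X (n x 3) and pixels x (n x 2)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    A = np.asarray(A, dtype=float)
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

# P can then be decomposed (e.g. by RQ factorization of its left 3x3 block)
# into the intrinsic matrix K and the pose [R | t].
```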
Camera calibration is an essential task in 3D computer vision and is needed for various kinds of augmented or virtual reality applications, where the distance between a real-world point and the camera needs to be known. A robust calibration technique using contours of surfaces of revolution is presented in this paper. Relevant contributions are shown and discussed. Additionally, the algorithm is compared to a selection of three standard camera calibration implementations, including the camera calibration toolbox for Matlab, multi-camera self-calibration, and geometric and photometric calibration for augmented reality. The evaluation is performed using low-cost cameras and is based on the stability of the calculation of intrinsic parameters (focal length and principal point). The results of the evaluation are shown and further improvements are discussed.
Computer Vision, 1999. The Proceedings of the …, 1999
We propose a flexible new technique to easily calibrate a camera. It only requires the camera to observe a planar pattern shown at a few (at least two) different orientations. Either the camera or the planar pattern can be freely moved. The motion need not be known. Radial lens distortion is modeled. The proposed procedure consists of a closed-form solution, followed by a nonlinear refinement based on the maximum likelihood criterion. Both computer simulation and real data have been used to test the proposed technique, and very good results have been obtained. Compared with classical techniques which use expensive equipment such as two or three orthogonal planes, the proposed technique is easy to use and flexible. It advances 3D computer vision one step from laboratory environments to real world use. The corresponding software is available from the author's Web page.
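OpenCV's calibrateCamera is widely regarded as an implementation of this technique (see the note on Bouguet's toolbox earlier in this list); a minimal sketch of applying it to several chessboard views follows, where the pattern size, square size and file names are assumptions.

```python
import glob
import cv2
import numpy as np

pattern_size = (9, 6)          # inner corners per row/column (assumed)
square = 0.025                 # square side in metres (assumed)

# 3D coordinates of the pattern corners on the plane Z = 0.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square

object_points, image_points = [], []
for path in glob.glob("calib_*.png"):          # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        object_points.append(objp)
        image_points.append(corners)

# Closed-form initialization followed by nonlinear refinement, including
# radial distortion, in the spirit of the technique outlined above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, gray.shape[::-1], None, None)
```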
… REMOTE SENSING AND …, 2002
Architectural documentation is carried out mostly with (analogue or digital) non-metric cameras or video cameras. Unknown internal geometry is a main problem in this context, particularly for wide-angle lenses with their considerable amount of distortion. Self-calibrating bundle adjustment or 3D test-field calibration may provide straightforward answers to this problem, yet on occasion such steps can prove too complicated or costly for ordinary users. Hence, this paper discusses the use of simple pre-calibration approaches. Practical examples with wide-angle lenses are first given to illustrate that, in general, use of "nominal" values for the camera constant and the principal point causes no significant problem in most cases of low or moderate accuracy requirements. These same examples, however, reveal the marked effects of radial distortion. This is a main problem which needs to be controlled, even in rectification, the simplest and most popular method among users (for which knowledge of the primary interior orientation parameters is irrelevant). Simple approaches for determining radial distortion are presented and assessed, ranging from the use of linear features on man-made objects to the rectification of regular grids and its various alternatives. On the other hand, the easiest approach for full camera calibration is probably the common adjustment of images of targeted 2D objects taken under different viewing angles. In the results given, the described methods for determining radial distortion have also been evaluated. In conclusion, both the merits and limitations of the discussed simple calibration techniques are outlined, and a broad distinction between analogue and digital cameras is made.
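The radial distortion the abstract singles out is usually modelled with a low-order polynomial in the radial distance; a minimal sketch of that common model follows (the coefficient values are placeholders).

```python
def apply_radial_distortion(x, y, k1, k2):
    """Map ideal (undistorted) normalized image coordinates to distorted ones
    using the common polynomial model x_d = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = x**2 + y**2
    factor = 1.0 + k1 * r2 + k2 * r2**2
    return x * factor, y * factor

# Example: a point near the image corner with mild barrel distortion (k1 < 0).
xd, yd = apply_radial_distortion(0.8, 0.6, k1=-0.12, k2=0.01)
```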
2013
Marker-based optical motion capture systems use multiple cameras to determine the 3D positions of markers. Precise knowledge of the position and orientation of the cameras plays a crucial role in precise marker recognition and position calculation. Three camera calibration methods are presented in this paper, including a new projector-based method. The three calibration methods achieve different precision. Measurement results are presented and compared.