Papers by Cédric Demonceaux

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2020
Indoor mapping attracts increasing attention with the development of 2D and 3D cameras and Lidar sensors. Lidar systems can provide a very high-resolution and accurate point cloud. When aiming to reconstruct the static part of the scene, moving objects should be detected and removed, which can prove challenging. This paper proposes a generic method to merge meshes produced from Lidar data that tackles moving-object removal and static scene reconstruction at once. The method is adapted to a platform collecting point clouds from two Lidar sensors with different scan directions, which results in data of different quality. Firstly, a mesh is efficiently produced from each sensor by exploiting its natural scan topology. Secondly, a visibility analysis is performed to handle occlusions (due to varying viewpoints) and remove moving objects. Then, a boolean optimization selects which triangles should be removed from each mesh. Finally, a stitching method connects the selected mesh pieces. Our method is demonstrated on a Navvis M3 (2D laser ranging system) dataset and compared with Poisson- and Delaunay-based reconstruction methods.
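The visibility analysis above relies on classic free-space reasoning: a Lidar return at point p certifies that the segment from the sensor to p is empty, so any triangle crossed by that segment (e.g. one produced by a moving object in another scan) can be flagged for removal. A minimal sketch of that test, using the standard Möller–Trumbore ray–triangle intersection (an illustrative stand-in, not the paper's implementation; names and thresholds are assumptions):

```python
import numpy as np

def ray_triangle_t(orig, direc, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore: return the ray parameter t at the intersection
    with triangle (v0, v1, v2), or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direc, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                     # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = orig - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direc, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

def violates_free_space(sensor, point, tri):
    """A triangle is a removal candidate if the segment from the sensor
    to an observed point passes through it strictly before the point."""
    d = point - sensor
    t = ray_triangle_t(sensor, d, *tri)
    return t is not None and t < 1.0 - 1e-6

# A triangle at x = 1 blocking a ray to a point observed at x = 2:
tri = (np.array([1.0, -1.0, -1.0]),
       np.array([1.0,  1.0, -1.0]),
       np.array([1.0,  0.0,  1.0]))
print(violates_free_space(np.zeros(3), np.array([2.0, 0.0, 0.0]), tri))
```

In a real pipeline this test would be accumulated over many rays per triangle before the boolean optimization decides which triangles to drop.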
IFAC Proceedings Volumes, 2007
Developing Unmanned Aerial Vehicles (UAVs) with self-stabilization capabilities is an intensive research field nowadays. This paper shows that combining our previous horizon extraction algorithm with homography methods makes it possible to estimate the homography, and thus the UAV attitude, more robustly. We show that imposing the horizon constraint removes clearly inconvenient non-planar points. Moreover, we explain that computing the horizon on the sphere yields the normal vector of the ground plane, and thus allows the correct motion to be selected among the four possible solutions of the planar homography decomposition, which is usually a non-trivial problem.
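The horizon constraint can be illustrated with a small sketch: correspondences on the wrong side of the horizon line cannot lie on the ground plane, so they are discarded before a standard DLT homography fit. This is a schematic reconstruction of the idea only; the horizon-line sign convention and the function names are assumptions, not the paper's code:

```python
import numpy as np

def below_horizon(pts, line):
    """Keep points on the ground side of the horizon line l = (a, b, c),
    using the (assumed) convention l . (x, y, 1) > 0 for ground points."""
    h = np.hstack([pts, np.ones((len(pts), 1))])
    return h @ line > 0

def dlt_homography(src, dst):
    """Standard DLT: homography from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# horizon at image row 100; only points below it are ground candidates
mask = below_horizon(np.array([[10.0, 150.0], [10.0, 50.0]]),
                     np.array([0.0, 1.0, -100.0]))
# a pure-translation homography recovered from four ground points
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([2.0, 3.0])
H = dlt_homography(src, dst)
print(mask, np.round(H, 6))
```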
Lecture Notes in Computer Science, 2015
In this paper we present an alternative formulation of the minimal solution to the 3pt-plus-a-common-direction relative pose problem. Instead of the commonly used epipolar constraint, we use the homography constraint to derive a novel formulation of the 3pt problem. This formulation yields the normal vector of the plane defined by the three input points at no extra cost beyond the standard motion parameters of the camera. We demonstrate the method on synthetic and real data sets and compare it to the standard 3pt method and the 5pt method for relative pose estimation. In addition, we analyze the degenerate conditions of the proposed method.

2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012
The projections of parallel world lines in an image intersect at a single point called the vanishing point (VP). VPs are a key ingredient for various vision tasks, including rotation estimation and 3D reconstruction. Urban environments generally exhibit some dominant orthogonal VPs. Given a set of lines extracted from a calibrated image, this paper aims to (1) determine the line clustering, i.e. find which line belongs to which VP, and (2) estimate the associated orthogonal VPs. None of the existing methods is fully satisfactory because of the inherent difficulties of the problem, such as local minima and its chicken-and-egg aspect. In this paper, we present a new algorithm that solves the problem in a mathematically guaranteed globally optimal manner and can inherently enforce VP orthogonality. Specifically, we formulate the task as a consensus set maximization problem over the rotation search space, and solve it efficiently by a branch-and-bound procedure based on Interval Analysis. Our algorithm has been validated successfully on sets of challenging real images as well as synthetic data sets.
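The consensus set being maximized can be written down compactly. On the calibrated unit sphere each detected line is a great circle with unit normal n, the three orthogonal VPs are the columns of a rotation R, and a line is assigned to a VP direction v when n · v ≈ 0. A hedged sketch of the objective evaluated at a single rotation hypothesis (the branch-and-bound search over rotation space, which is the paper's actual contribution, is not shown):

```python
import numpy as np

def consensus_size(normals, R, thresh=np.sin(np.radians(2.0))):
    """Count lines consistent with one of the three orthogonal vanishing
    directions (the columns of R): a great circle with unit normal n
    passes through direction v iff n . v = 0."""
    dots = np.abs(np.asarray(normals) @ R)   # |n_i . v_j|, shape (L, 3)
    return int(np.sum(dots.min(axis=1) < thresh))

# four line normals compatible with the identity VP triplet, one outlier
normals = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0],
                    [0.0, 0.6, 0.8],
                    [1.0, 1.0, 1.0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(consensus_size(normals, np.eye(3)))    # 4 inliers
```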

2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008
A key requirement for Unmanned Aerial Vehicle (UAV) applications is the attitude stabilization of the aircraft, which requires knowledge of its orientation. It is now well established that traditional navigation equipment, like GPS or INS, suffers from several disadvantages. That is why some works have suggested a vision-based approach to the problem. In particular, catadioptric vision is used more and more, since it gathers much more information from the environment than traditional perspective cameras and therefore improves the robustness of the UAV attitude estimation. Rotation estimation from conventional and catadioptric images has been extensively studied. While interesting results can be obtained, the existing methods have non-negligible limitations, such as difficult feature matching (e.g. repeated texture, blurring or illumination changes) or a high computational cost (e.g. vanishing point extraction or analysis in the frequency domain). To overcome these limitations, this paper presents a top-down approach for estimating the rotation and extracting the vanishing points in catadioptric images. This new framework is accurate and can run in real time. To obtain ground truth data, we also calibrate our catadioptric camera with a gyroscope. Finally, experimental results on a real video sequence are presented and compared to the ground truth obtained from the gyroscope.
2010 IEEE International Conference on Imaging Systems and Techniques, 2010
This article presents a new method for estimating the pose of para-catadioptric vision systems. It is based on the estimation of vanishing points associated with vertical edges of the environment. However, unlike classical approaches, no feature (line, circle) extraction and/or identification is needed. A sampled domain of possible vanishing points is tested, and histograms are built to characterize the soundness of these points. A specificity index selects the most relevant histogram and hence the pose of the sensor. This method has been tested on simulated and real images, giving very promising results (maximum angular error of 0.18 degrees).
Omnidirectional Egomotion Estimation from Adapted Motion Field
2009 Fifth International Conference on Signal Image Technology and Internet Based Systems, 2009
Egomotion estimation is a basic task in most vision-based mobile robot applications. Recent research has shown that the use of omnidirectional systems with a large field of view facilitates the computation of the observer's motion. All previous work takes a motion field, computed in the image, as input to the egomotion estimation process. This motion field is calculated using…

2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010
Although some methods for plane extraction have been proposed, the problem remains open due to the complexity of the task. This paper focuses on the extraction of points lying on a plane (such as the ground or building walls) in sequences acquired by a central omnidirectional camera. Our approach is based on the epipolar constraint for planar scenes (i.e. the homography) on a pair of omnidirectional images to detect interest points belonging to a plane. Our main contribution is a new method, called the "2-point algorithm for homography", that imposes constraints on the homography using vanishing point (VP) information. Compared to the widely used DLT (4-point) algorithm, experiments on real data demonstrate that the proposed 2-point algorithm is more robust to noise and false matches, even when the plane to extract is not dominant in the image. Finally, we show that our system provides key clues for ground segmentation by GrabCut.
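A plausible way to see why VP information reduces the number of required correspondences (a degrees-of-freedom sketch under standard assumptions, not the paper's exact derivation): for calibrated views of a plane with unit normal $n$ at distance $d$, the Euclidean homography decomposes as

```latex
H \simeq R + \frac{1}{d}\, t\, n^{\top}
```

with 8 degrees of freedom (3 for the rotation $R$, 3 for the scaled translation $t/d$, 2 for the unit normal $n$). Fixing $n$ from vanishing point information removes unknowns, so fewer point correspondences than the four required by the DLT suffice, each correspondence contributing two constraints.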

International Journal of Computer Vision and Image Processing, 2011
Egomotion estimation is based principally on the estimation of the optical flow in the image. Recent research has shown that omnidirectional systems with large fields of view overcome the limitations of planar-projection imagery for motion analysis. For omnidirectional images, however, the 2D motion is often estimated using methods developed for perspective images. This paper computes the motion field with an adapted method that takes into account the distortions present in the omnidirectional image. This 2D motion field is then used as input to the egomotion estimation process, using a spherical representation of the motion equations. Experimental results and comparisons of error measures confirm that accurate estimation of the camera motion is obtained when an adapted method is used to estimate the optical flow.
Automatic calibration of catadioptric cameras in urban environment
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008
Camera calibration is an important step for vision-based stabilization of unmanned aerial vehicles (UAVs). The goal of this paper is to develop a method for automatic calibration of a catadioptric camera, so that it can easily be run before mounting the camera on the UAV, or even during flight to deal with vibrations or shocks. Whereas existing works can…

Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006.
Unmanned Aerial Vehicles (UAVs) are the subject of increasing interest in many applications. Autonomy is one of the major advantages of these vehicles, so it is necessary to develop particular sensors in order to provide efficient navigation functions. In this paper, we propose a method for attitude computation from catadioptric images. We first demonstrate the advantages of the catadioptric vision sensor for this application: the geometric properties of the sensor make it easy to compute the roll and pitch angles. The method consists in separating the sky from the earth in order to detect the horizon. We propose an adaptation of Markov Random Fields to catadioptric images for this segmentation. The second step estimates the parameters of the horizon line with a robust estimation algorithm. We also present the angle estimation algorithm and, finally, show experimental results on synthetic and real images captured from an airplane.
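The second step, robust horizon line fitting, can be sketched with a generic RANSAC line estimator followed by a least-squares refit on the inliers; the roll angle then follows from the tilt of the fitted line. This is an illustrative stand-in rather than the paper's estimator; the thresholds and synthetic data are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_roll(pts, iters=200, tol=2.0):
    """RANSAC fit of a 2D line to noisy horizon points, then a
    least-squares refit; returns the line tilt (~ roll angle, degrees)."""
    best_inl = None
    for _ in range(iters):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        d = q - p
        nrm = np.linalg.norm(d)
        if nrm < 1e-9:
            continue
        n = np.array([-d[1], d[0]]) / nrm          # unit normal of the pair
        inl = np.abs((pts - p) @ n) < tol
        if best_inl is None or inl.sum() > best_inl.sum():
            best_inl = inl
    P = pts[best_inl]                               # refit on the consensus set
    _, _, vt = np.linalg.svd(P - P.mean(axis=0))
    dx, dy = vt[0]                                  # dominant direction
    roll = np.degrees(np.arctan2(dy, dx))
    return (roll + 90.0) % 180.0 - 90.0             # fold into (-90, 90]

# horizon points tilted by 5 degrees, plus gross outliers
t = np.linspace(0, 100, 60)
pts = np.stack([t, np.tan(np.radians(5.0)) * t], axis=1)
pts += rng.normal(0, 0.3, pts.shape)
pts = np.vstack([pts, rng.uniform(0, 100, (15, 2))])
roll = ransac_roll(pts)
print(round(roll, 1))
```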

2009 IEEE International Conference on Robotics and Automation, 2009
Unmanned Aerial Vehicles (UAVs) are the subject of increasing interest in many applications, and a key requirement for autonomous navigation is the attitude/position stabilization of the vehicle. Previous works have suggested using catadioptric vision instead of traditional perspective cameras in order to gather much more information from the environment and therefore improve the robustness of the UAV attitude/position estimation. This paper belongs to a series of recent publications of our research group concerning catadioptric vision for UAVs. Here, we focus on the extraction of the skyline in catadioptric images, since it provides important information about the attitude/position of the UAV. For example, DEM-based methods can match the extracted skyline against a Digital Elevation Map (DEM) through registration, which estimates the attitude and position of the camera. Like any standard camera, catadioptric systems cannot work in low-luminosity conditions because they rely on visible light. To overcome this important limitation, we propose using a catadioptric infrared camera and extending one of our skyline detection methods to catadioptric infrared images. The task of extracting the best skyline in an image is usually converted into an energy minimization problem that can be solved by dynamic programming. The major contribution of this paper is the extension of dynamic programming to catadioptric images using an adapted neighborhood and an appropriate scanning direction. Finally, we present experimental results demonstrating the validity of our approach.
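The energy minimization by dynamic programming can be sketched on a rectangular cost image: one skyline row index per column, with transitions limited to neighboring rows for smoothness. The catadioptric adaptation (scanning angular sectors around the image centre with an adapted neighborhood) is the paper's contribution and is not reproduced in this sketch:

```python
import numpy as np

def skyline_dp(cost):
    """Minimal-cost skyline: one row index per column, consecutive
    indices differing by at most one row (smoothness constraint)."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()           # accumulated cost table
    back = np.zeros((rows, cols), dtype=int)  # backtracking pointers
    for j in range(1, cols):
        for i in range(rows):
            lo, hi = max(i - 1, 0), min(i + 1, rows - 1)
            k = lo + int(np.argmin(acc[lo:hi + 1, j - 1]))
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for j in range(cols - 1, 0, -1):          # recover the optimal path
        path[j - 1] = back[path[j], j]
    return path

# a toy cost image whose cheapest row is row 2 everywhere
cost = np.ones((5, 6))
cost[2, :] = 0.0
print(skyline_dp(cost))
```

For a catadioptric image, the columns of `cost` would correspond to angular sectors around the image centre and the rows to radial distances.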

2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), 2011
Visual tracking in video sequences is a widely developed topic in computer vision applications. However, the emergence of panoramic vision using catadioptric sensors has created the need for new approaches to track an object in this type of image. Indeed, the non-linear resolution and the geometric distortions due to the insertion of the mirror make tracking in catadioptric images a very challenging task. This paper describes a particle filter for tracking a moving object over time using a catadioptric sensor. Different problems due to the specific geometry of catadioptric systems are considered. The obtained results demonstrate an important improvement in tracking accuracy with our adapted method and better robustness to cluttered backgrounds and light changes.
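A minimal bootstrap particle filter of the kind adapted in the paper, here in plain Euclidean image coordinates; the catadioptric-specific likelihood and geometry handling are the paper's contribution and are not reproduced in this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

class ParticleFilter2D:
    """Minimal bootstrap particle filter for a 2D position."""
    def __init__(self, n=500, init=(0.0, 0.0), spread=5.0):
        self.p = rng.normal(init, spread, (n, 2))   # particle states
        self.n = n

    def step(self, z, motion_std=1.0, meas_std=2.0):
        # predict: random-walk motion model
        self.p += rng.normal(0.0, motion_std, self.p.shape)
        # update: weight by a Gaussian likelihood of the measurement z
        d2 = np.sum((self.p - z) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / meas_std ** 2)
        w /= w.sum()
        # resample (systematic resampling would have lower variance)
        self.p = self.p[rng.choice(self.n, self.n, p=w)]
        return self.p.mean(axis=0)                  # state estimate

pf = ParticleFilter2D()
est = None
for t in range(30):
    truth = np.array([t * 0.5, t * 0.3])            # drifting target
    z = truth + rng.normal(0, 1.0, 2)               # noisy observation
    est = pf.step(z)
print(np.linalg.norm(est - truth))                  # small tracking error
```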
The 4th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, 2014
This paper introduces a local path planning algorithm for a self-driving car in a complex environment. The proposed algorithm is composed of three parts: a novel path representation, collision detection, and path modification using Voronoi cells. The novel path representation simplifies collision checking and path modification, and provides a continuous control input for the steering wheel rather than waypoint navigation. The proposed algorithm was applied to the self-driving car EureCar (KAIST), and its applicability and feasibility for real-time use were validated.
2007 IEEE 11th International Conference on Computer Vision, 2007
Nowadays, robotic systems are more and more often equipped with catadioptric cameras. However, several problems associated with catadioptric vision have received little study. In particular, algorithms for detecting rectangles in catadioptric images have not yet been developed, although they are required in diverse applications such as building extraction in aerial images. We show that working on the equivalent sphere provides an appropriate framework to detect lines, parallelism, orthogonality and therefore rectangles. Finally, we present experimental results on synthetic and real data.

Improvement of feature matching in catadioptric images using gyroscope data
2008 19th International Conference on Pattern Recognition, 2008
Most vision-based algorithms for motion and localization estimation require matching interest points in a pair of images. Once feature correspondences are built, it is possible to estimate the camera motion/localization using epipolar geometry. However, feature matching remains a challenging problem, for example because of time constraints or image variability. In several robotic applications, the camera rotation may be known thanks to a gyroscope or another orientation sensor. In this paper, we therefore aim to answer the following question: can the knowledge of rotation from a gyroscope be used to improve feature matching? To analyze this approach to camera and gyroscope data fusion, we proceed in two steps. First, we rotationally align the images using the rotation information from the gyroscope. Second, we compare the quality of feature matching in the original and rotationally aligned images. Experimental results on a real catadioptric sequence show that gyroscope data noticeably improves the number of inliers consistent with epipolar geometry.
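For a perspective camera the rotational alignment step has a closed form: a pure rotation induces the infinite homography K R K⁻¹, so gyroscope data suffices to warp one image onto the orientation of the other before matching. A toy sketch of this idea (the paper works with catadioptric images, where the alignment is done on the sphere instead; the intrinsics and rotation below are made up):

```python
import numpy as np

def derotation_homography(K, R):
    """Infinite homography mapping pixels of the rotated view back to
    the reference view, with R the rotation from the rotated camera
    frame to the reference camera frame: x_ref ~ K R K^-1 x_rot."""
    return K @ R @ np.linalg.inv(K)

def pix(x):
    return x[:2] / x[2]                          # dehomogenize

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
yaw = np.radians(10.0)                           # gyro-measured rotation
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])

d = np.array([0.1, -0.05, 1.0])                  # a far-away 3D direction
x_rot = K @ d                                    # pixel in the rotated view
x_ref = K @ (R @ d)                              # same landmark, reference view
H = derotation_homography(K, R)
print(np.allclose(pix(H @ x_rot), pix(x_ref)))   # derotation aligns the pixels
```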

Lecture Notes in Computer Science
Lines are particularly important features for computer vision tasks such as calibration, structure from motion and 3D reconstruction. However, line detection in catadioptric images is not trivial, because the projection of a 3D line is a conic, possibly degenerate. If the sensor is calibrated, it has already been demonstrated that each such conic can be described by two parameters. Accordingly, methods based on the adaptation of conventional line detection techniques have been proposed. However, most of these methods suffer from the same disadvantages as in the perspective case (computing time, accuracy, robustness, ...). In this paper, we therefore propose a new method for line detection in central catadioptric images, comparable to the polygonal approximation approach. With this method, only two points of a chain are needed to extract a catadioptric line with very high accuracy. Moreover, the algorithm is particularly fast and applicable in real time. We also present experimental results with quantitative and qualitative evaluations in order to show the quality of the results and the perspectives of this method.
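The two-point property has a simple geometric reading on the equivalent sphere: the lifted image points of a 3D line lie on a great circle, whose unit normal is fixed by any two of them, and membership of further chain points reduces to a single dot product. A hedged sketch of that test (the lifting from pixels to the sphere via the calibration is assumed and omitted):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def line_normal(p1, p2):
    """Normal of the great circle through two lifted (unit-sphere)
    points: two chain points fully determine the catadioptric line."""
    return normalize(np.cross(p1, p2))

def on_line(p, n, thresh=np.sin(np.radians(0.5))):
    """A lifted point belongs to the line iff it lies on the great circle."""
    return abs(np.dot(normalize(p), n)) < thresh

p1 = normalize(np.array([1.0, 0.0, 1.0]))
p2 = normalize(np.array([0.0, 1.0, 1.0]))
n = line_normal(p1, p2)
print(on_line(np.array([0.5, 0.5, 1.0]), n),      # coplanar: on the line
      on_line(np.array([1.0, 1.0, 1.0]), n))      # off-plane: rejected
```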
Procedings of the British Machine Vision Conference 2009, 2009
Caméras omnidirectionnelles : principes et modélisations (Omnidirectional cameras: principles and models)
International audience. No abstract available.