2012, Lecture Notes in Computer Science
We present a system for automatically detecting obstacles from a moving vehicle using a monocular wide-angle camera. Our system was developed in the context of detecting obstacles, and particularly children, when backing up. The camera viewpoint is transformed to a virtual bird's-eye view. We developed a novel image registration algorithm to obtain ego-motion which, in combination with variational dense optical flow, outputs a residual motion map with respect to the ground. The residual motion map is used to identify and segment 3D and moving objects. Our main contribution is the feature-based image registration algorithm, which is able to separate out the ground layer and estimate its ego-motion accurately even when the ground covers only 20% of the image, outperforming RANSAC.
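The residual-motion idea above can be sketched in a few lines: subtract the flow predicted by the ground-plane ego-motion from the measured dense flow, then threshold the remainder. The function name `residual_motion_map`, the dense flow arrays, and the threshold `tau` are illustrative assumptions, not the authors' implementation, which additionally estimates the ground ego-motion by feature-based image registration:

```python
import numpy as np

def residual_motion_map(flow, ground_flow, tau=1.0):
    """Residual motion with respect to the ground.

    flow        -- measured dense optical flow, shape (H, W, 2)
    ground_flow -- flow predicted from ground-plane ego-motion, shape (H, W, 2)
    Returns the residual magnitude map and a boolean mask of pixels
    that move inconsistently with the ground (3D or moving objects).
    """
    residual = flow - ground_flow
    magnitude = np.linalg.norm(residual, axis=-1)
    return magnitude, magnitude > tau
```

In the full system, `ground_flow` would be predicted from the registered ground-plane motion; here it is simply an input.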
2020
In this research we present a novel algorithm for background subtraction using a moving camera. The algorithm relies purely on visual information from a camera mounted on an electric bus operating in downtown Reno, and automatically detects moving objects of interest with the aim of enabling a fully autonomous vehicle. In our approach we exploit the optical flow vectors generated by the motion of the camera while keeping parameter assumptions to a minimum. First, we estimate the Focus of Expansion, which is used to model and simulate 3D points given the intrinsic parameters of the camera; we then perform multiple linear regression to estimate the regression equation parameters and apply the model to the real data of every frame to identify moving objects. We validated our algorithm using data taken from a common bus route.
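The Focus of Expansion step lends itself to a small linear least-squares sketch: under pure forward translation every flow vector points away from the FOE, so each measurement yields one linear constraint on the FOE coordinates. The function `estimate_foe` and its input layout are assumptions for illustration, not the paper's code:

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares Focus of Expansion from translational flow.

    Each flow vector (u, v) at pixel (x, y) should be parallel to
    (x - x0, y - y0), giving the linear constraint
        v*x0 - u*y0 = v*x - u*y
    which is solved for the FOE (x0, y0) over all measurements.
    """
    u, v = flows[:, 0], flows[:, 1]
    x, y = points[:, 0], points[:, 1]
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

Points whose flow violates these constraints (large residual) are candidates for independently moving objects.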
For Micro Aerial Vehicles (MAVs), robust obstacle avoidance during flight is a challenging problem because only lightweight sensors, such as monocular cameras, can be mounted as payload. With monocular cameras, depth cannot be estimated from a single view, so it has to be recovered by combining images from multiple viewpoints. Here we present a method that focuses only on regions classified as foreground and follows features in the foreground to estimate the threat of potential objects being obstacles in the flight path, using depth segmentation and scale estimation. We evaluate our approach in an outdoor environment with a small quadrotor drone and analyze the effect of the different stages in the image processing pipeline. We are able to determine evident obstacles in front of the drone with high confidence. Robust obstacle-avoidance algorithms such as the one presented in this article will allow Micro Aerial Vehicles to navigate semi-autonomously in outdoor scenarios.
2006
This paper proposes a method for estimating the ego-motion of the vehicle and for detecting moving objects on roads using a vehicle-mounted monocular camera. There are two problems in ego-motion estimation. First, a typical road scene contains moving objects such as other vehicles. Second, roads display fewer feature points than background structures. In our approach, ego-motion is estimated from the correspondences of feature points extracted from regions other than those in which objects are moving. After estimating the ego-motion, the three-dimensional structure of the scene is reconstructed and any moving objects are detected. Our experiments show that the proposed method is able to detect moving objects such as vehicles and pedestrians.
2006 IEEE Intelligent Vehicles Symposium, 2006
In this paper, we propose a real-time method to detect obstacles using theoretical models of optical flow fields. The idea of our approach is to segment the image in two layers: the pixels which match our optical flow model and those that do not (i.e. the obstacles). In this paper, we focus our approach on a model of the motion of the ground plane. Regions of the visual field that violate this model indicate potential obstacles.
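A minimal version of "flag pixels that violate the ground-plane flow model" can be sketched with an affine flow model, a first-order approximation of the planar motion field. Note the difference in spirit: the paper derives its model theoretically from the ground-plane geometry, whereas this sketch simply fits the model to the measured flow; the function names and threshold are illustrative assumptions:

```python
import numpy as np

def fit_affine_flow(x, y, u, v):
    """Fit u = a0 + a1*x + a2*y and v = a3 + a4*x + a5*y by least squares.

    An affine field is a first-order approximation of the motion field
    induced by a planar ground surface.
    """
    A = np.stack([np.ones_like(x), x, y], axis=1)
    pu, *_ = np.linalg.lstsq(A, u, rcond=None)
    pv, *_ = np.linalg.lstsq(A, v, rcond=None)
    return pu, pv

def obstacle_mask(x, y, u, v, pu, pv, tau=1.0):
    """Pixels whose measured flow deviates from the planar model
    are potential obstacles."""
    A = np.stack([np.ones_like(x), x, y], axis=1)
    residual = np.hypot(u - A @ pu, v - A @ pv)
    return residual > tau
```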
International Journal on Smart Sensing and Intelligent Systems
This article demonstrates a novel framework for estimating dense scene flow using depth camera data. Using the estimated flow vectors to identify obstacles improves the path planning module of an autonomous vehicle's (AV) intelligence. Path planning in cluttered environments has long been considered the primary difficulty in the development of AVs: these vehicles must possess the intelligence to recognize their surroundings and successfully navigate around obstacles. The AV needs a thorough understanding of its surroundings to detect and avoid obstacles in a cluttered environment, so when determining the course it is preferable to be aware of the kinematic behaviour (position and direction) of the obstacles. Accordingly, by comparing depth images between different time frames, the position and direction of the obstacles are calculated using a 3D vision sensor. The current study focuses on the extraction of the flow vectors in 3D coordinates from the diffe...
This paper considers a specific problem of visual perception of motion, namely the problem of visual detection of independent 3D motion. Most of the existing techniques for solving this problem rely on restrictive assumptions about the environment, the observer's motion, or both. Moreover, they are based on the computation of optical flow, which amounts to solving the ill-posed correspondence problem. In this work, independent motion detection is formulated as robust parameter estimation applied to the visual input acquired by a binocular, rigidly moving observer. Depth and motion measurements are combined in a linear model. The parameters of this model are related to the parameters of self-motion (egomotion) and the parameters of the stereoscopic configuration of the observer. The robust estimation of this model leads to a segmentation of the scene based on 3D motion. The method avoids the correspondence problem by employing only normal flow fields. Experimental results demonstrate the effectiveness of this method in detecting independent motion in scenes with large depth variations, without any constraints imposed on observer motion.
Lecture Notes in Computer Science, 2009
We present an approach for identifying and segmenting independently moving objects from dense scene flow information, using a moving stereo camera system. The detection and segmentation is challenging due to camera movement and non-rigid object motion. The disparity, change in disparity, and the optical flow are estimated in the image domain and the three-dimensional motion is inferred from the binocular triangulation of the translation vector. Using error propagation and scene flow reliability measures, we assign dense motion likelihoods to every pixel of a reference frame. These likelihoods are then used for the segmentation of independently moving objects in the reference image. In our results we systematically demonstrate the improvement using reliability measures for the scene flow variables. Furthermore, we compare the binocular segmentation of independently moving objects with a monocular version, using solely the optical flow component of the scene flow.
Proceedings of Conference on Intelligent Vehicles
In this work an approach to obstacle detection and steering control for safe car driving is presented. It is based on the evaluation of 3D motion and structure parameters from optical flow, through the analysis of image sequences acquired by a TV camera mounted on a vehicle moving along typical city roads, country roads, or motorways. The work is founded on the observation that if a camera is appropriately mounted on a vehicle moving on a planar surface, the problem of motion and structure recovery from optical flow becomes linear and local, and depends on only two parameters: the angular velocity around an axis orthogonal to the planar surface, and the ratio between the viewed depth and the translational speed, i.e. the generalized time-to-collision. Examples of the detection of moving surrounding cars and of the trajectory recovery algorithm on image sequences of traffic scenes from urban roads are provided.
Computer Standards & Interfaces, 1999
We describe an optical flow based obstacle detection system for detecting vehicles approaching the blind spot of a car on highways and city streets. The system runs at near frame rate (8-15 frames/second) on PC hardware. We discuss the prediction of a camera image given an implicit optical flow field and its comparison with the actual camera image; the advantage of this approach is that we never explicitly calculate optical flow. We also present results on digitized highway images and on video taken from Navlab 5 while driving on a Pittsburgh highway.
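The predict-and-compare idea, warping the previous frame by the expected ego-motion and flagging disagreements, can be sketched for the special case of a constant integer image shift. The paper's implicit flow field is more general than a global shift, and `detect_by_prediction` and its threshold are assumptions for illustration:

```python
import numpy as np

def detect_by_prediction(prev_img, cur_img, expected_shift, tau=20.0):
    """Shift prev_img by the expected ego-motion displacement and flag
    pixels where the prediction disagrees with the observed frame.

    expected_shift -- (rows, cols) integer displacement predicted from
                      the vehicle's motion (a global shift stands in for
                      the per-pixel flow field here).
    """
    dy, dx = expected_shift
    predicted = np.roll(prev_img, shift=(dy, dx), axis=(0, 1))
    diff = np.abs(cur_img.astype(float) - predicted.astype(float))
    return diff > tau
```

No explicit optical flow is computed: the motion model is applied directly to the image and only the prediction error is examined.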
IEEE Transactions on Robotics and Automation, 1998
This paper describes procedures for obtaining a reliable and dense optical flow from image sequences taken by a television (TV) camera mounted on a car moving in usual outdoor scenarios. The optical flow can be computed from these image sequences using several techniques. Differential techniques do not provide adequate results, because of poor texture in the images and the shocks and vibrations experienced by the TV camera during image acquisition. By using correlation-based techniques and by correcting the optical flows for shocks and vibrations, useful sequences of optical flows can be obtained. When the car is moving along a flat road and the optical axis of the TV camera is parallel to the ground, the motion field is expected to be almost quadratic and to have a specific structure. As a consequence, the egomotion can be estimated from this optical flow, and information on the speed and the angular velocity of the moving vehicle is obtained. By analyzing the optical flow it is also possible to recover a coarse segmentation of the flow, in which objects moving with a different speed are identified. By combining information from intensity edges, a better localization of motion boundaries is obtained. These results suggest that the optical flow can be successfully used by a vision system for assisting a driver in a vehicle moving in usual streets and motorways.
Lecture Notes in Computer Science, 2006
This paper deals with the detection of arbitrary static objects in traffic scenes from monocular video using structure from motion. A camera in a moving vehicle observes the road course ahead, and the camera translation in depth is known. Many structure-from-motion algorithms have been proposed for detecting moving or nearby objects. However, detecting stationary distant obstacles near the focus of expansion remains quite challenging due to the very small subpixel motion between frames. In this work the scene depth is estimated from the scaling of supervised image regions. We generate obstacle hypotheses from these depth estimates in image space. A second step then tests these hypotheses by comparing them with the counter-hypothesis of a free driveway. The approach can detect obstacles at distances of 50 m and more with a standard focal length. This early detection allows driver warning and safety precautions in good time.
IET Image Processing, 2020
Segmentation of moving objects in video with a moving background is a challenging problem, and it becomes more difficult with varying illumination. The authors propose a dense optical flow-based background subtraction technique for object segmentation. The proposed technique is fast and reliable for segmenting moving objects in realistic unconstrained videos. In the proposed work, they stabilise the camera motion by computing a homography matrix, then perform statistical background modelling using a single-Gaussian background modelling approach. Moving pixels are identified using dense optical flow in the background-modelled scene. Dense optical flow provides motion information for each pixel between consecutive frames; moving pixels are therefore identified by computing the motion flow vector of each pixel between consecutive frames, labelling each pixel, and thresholding the magnitude of the motion flow vector to distinguish foreground from background. The effectiveness of the proposed algorithm has been evaluated both qualitatively and quantitatively on several realistic videos with different complex conditions. To assess performance, the authors compared their algorithm with other state-of-the-art methods and found that it outperforms them.
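The single-Gaussian background model at the core of such a pipeline can be sketched as a per-pixel running mean and variance with a k-sigma foreground test. The class name, learning rate `alpha`, and initial variance are illustrative choices, and the homography stabilisation and optical-flow stages are omitted:

```python
import numpy as np

class SingleGaussianBG:
    """Per-pixel single-Gaussian background model (running mean/variance)."""

    def __init__(self, first_frame, alpha=0.05):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 25.0)  # assumed initial variance
        self.alpha = alpha

    def apply(self, frame, k=2.5):
        """Return a boolean foreground mask and update background pixels."""
        d = frame - self.mean
        fg = d ** 2 > (k ** 2) * self.var
        # Update the model only where the pixel still looks like background.
        self.mean = np.where(fg, self.mean, self.mean + self.alpha * d)
        self.var = np.where(fg, self.var,
                            (1 - self.alpha) * self.var + self.alpha * d ** 2)
        return fg
```

In the full method the mask would then be refined by thresholding the dense optical flow magnitude of each pixel.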
Systems and Computers in Japan, 2002
A scheme for detecting a moving object in a three-dimensional environment from observed dynamic images by optical flow, based on the state of the motion of the observing system, is proposed in this paper. The usual optical flow constraint equations defined in an image coordinate system do not sufficiently satisfy the assumptions made in deriving them when the observing system is in motion. In this paper, optical flow constraint equations considering the motion of the observing system are first derived. In order to do this, a mapping converting the motion of a stationary environment image to linear trajectory signals is derived. The uniform velocity property of motion and the isotropic property of motion within a proximal area, which are basic assumptions of the block gradient method, can be satisfied by these. Next, a method of expressing the optical flow constraint equations after mapping by the gradient in the time dimension before mapping is presented. Finally, the residuals of the optical flow constraint equations are proposed as the evaluation quantity for the extraction of a moving object and their efficacy is shown. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(6): 83–92, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.1135
IEEE Intelligent Vehicles Symposium, 2004
This paper deals with the problem of obstacle detection from a single camera mounted on a vehicle. We define an obstacle as any object that obstructs the vehicle's driving path. The perception of the environment is performed through fast processing of the image sequence. The approach is based on motion analysis. Generally, optical flow techniques are computationally expensive and sensitive to vehicle motion. To overcome these problems, we detect obstacles in two steps. The road motion is first computed through a fast and robust wavelet analysis. Then we detect the areas with a different motion using Bayesian modelling. Results shown in the paper demonstrate that the proposed method permits the detection of any obstacle on a road.
Lecture Notes in Computer Science, 2009
This paper discusses the detection of moving objects (being a crucial part of driver assistance systems) using monocular or stereoscopic computer vision. In both cases, object detection is based on motion analysis of individually tracked image points (optical flow), providing a motion metric which corresponds to the likelihood that the tracked point is moving. Based on this metric, points are segmented into objects by employing a globally optimal graph-cut algorithm. Both approaches are comparatively evaluated using real-world vehicle image sequences.
2013
Differential methods of optical flow estimation are based on partial spatial and temporal derivatives of the image signal. In this paper, background modelling and Lucas-Kanade optical flow are compared for object detection. Background subtraction methods need a background model built from hundreds of images, whereas the Lucas-Kanade optical flow estimation method is a differential algorithm that needs only two frames in order to work. The Lucas-Kanade method divides the image into patches and computes a single optical flow vector on each of them. Keywords— Background Modeling, Motion Vector, Optical Flow, Object Detection
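The per-patch Lucas-Kanade estimate described above reduces to solving a 2x2 linear system built from image gradients. This numpy sketch handles a single patch across two frames; the function name and the use of `np.gradient` are illustrative choices, not the paper's implementation:

```python
import numpy as np

def lk_patch_flow(I0, I1):
    """Single Lucas-Kanade flow estimate (u, v) for one patch, two frames.

    Solves the normal equations of the brightness-constancy constraint
    Ix*u + Iy*v + It = 0 summed over the patch.
    """
    Iy, Ix = np.gradient(I0.astype(float))       # spatial derivatives
    It = I1.astype(float) - I0.astype(float)     # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)                 # (u, v)
```

A dense estimator would tile the image into patches and call this on each; patches with near-singular `A` (flat texture) should be skipped.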
2003
Dynamic scene perception is currently limited to detection of moving objects from a static platform. Very few inroads have been made into the problem of dynamic scene perception from moving robotic vehicles. We discuss novel methods to segment moving objects, in real time, in the motion field formed by a moving camera/robotic platform. Our solution does not require any egomotion knowledge, making it applicable to a large class of mobile outdoor robot problems where no IMU information is available. We address two of the toughest problems in dynamic scene perception on the move: first, using only 2D monocular grayscale images, and second, where 3D range information from stereo is also available. Our solution involves optical flow computation, followed by optical flow field preprocessing to highlight moving object boundaries. In the case where range data from stereo is computed, a 3D optical flow field is estimated by combining range information with 2D optical flow estimates, followed by a similar 3D flow field preprocessing step. A segmentation of the flow field using fast flood filling then identifies every moving object in the scene with a unique label. This algorithm is expected to be the critical first step in robust recognition of moving vehicles and people from mobile outdoor robots, and therefore offers a general solution for the field of dynamic scene perception. It is envisioned that our algorithm will benefit robot scene perception in urban environments for scientific, commercial and defense applications. Results of our real-time algorithm on a mobile robot in a scene with a single moving vehicle are presented.
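The final flood-fill step can be sketched as connected-component labelling of the preprocessed motion mask, assigning each moving object a unique label. `label_moving_regions` and the 4-connectivity are illustrative assumptions, not the paper's code:

```python
from collections import deque
import numpy as np

def label_moving_regions(mask):
    """Label connected regions of a boolean motion mask via BFS flood fill.

    Returns an integer label image (0 = background) and the region count.
    Uses 4-connectivity as an assumed neighbourhood.
    """
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count
```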
This paper develops two background/foreground segmentation approaches based on foreground subtraction from a background model that uses scene colour and motion information. In the first approach, the background is modelled by a Spatially Global Gaussian Mixture Model (SGGMM) based on scene Red, Green, Blue (RGB) colours. This model is then used to estimate motion-based optical flow, which helps indirectly in the scene segmentation decision. In an alternative approach, motion-based optical flow information is combined with colours in an augmented feature vector to model the background. For both approaches, we introduce a method for estimating the optical flow uncertainty statistics in order to use them in background modelling. Evaluation results for both approaches on indoor and outdoor image sequences show that the estimated background model describes the optical flow uncertainties well and that the resulting segmentation is better than colour-only segmentation.

1 - INTRODUCTION

In many applications, such as video surveillance, one of the most important challenges is how to automatically and correctly classify parts of the scene as foreground or background. For this task, a particularly popular approach is based on background subtraction. The idea behind background subtraction is to compare the current image with a reference image of the background, and thus decide what is, and what is not, part of the background. However, one problem with background/foreground subtraction is obtaining the reference image when, for example, large parts of the background are occluded by moving foreground objects or parts of the background are not seen for a long time.
Other issues could be: illumination changes which can easily be misinterpreted as foreground objects, the model of the background becoming obsolete, foreground objects casting shadows where the shadow might be interpreted as foreground, objects believed to be background moving, and moving foreground objects stopping for a long time. To deal with all these problems, one proposed solution is to statistically model the background and update the model over time to account for progressive scene changes [1-6]. Optical flow is an important feature that provides information about the displacements of brightness patterns in the scene and has been widely used for object segmentation in vision systems. Optical flow calculation aims to estimate 2D apparent motion information from subsequent images. To deal with motion uncertainties, many research works have presented motion estimation approaches where optical flow distributions are able to represent any kind of motion uncertainty. Most of these approaches are inaccurate in some parts of the image even if they are robust globally over the whole image. This is due to the fact that they do not take background modelling into account, and thus their parameters are not changed according to changes in background appearance [6]. This might cause problems for optical flow based approaches if applied on their own in some image segmentation tasks, even if the background initialization process is handled carefully. In this work, we introduce an innovative background subtraction technique where the background is modelled by a Spatially Global Gaussian Mixture Model (SGGMM) [3], based on RGB colours and motion-based optical flow features. Two techniques to include the optical flow information in the background model are presented: in the first version, the SGGMM is estimated based on the RGB colours only and then used to estimate the optical flow uncertainty model based on the spatiotemporal gradient.
The estimated optical flow uncertainty model will be integrated with the RGB colours model to obtain a SGGMM based on feature vector combining both colour and optical flow, which is then used to generate the background subtraction support map. In the second version, the SGGMM background model estimation is based on a feature vector combining both RGB colours and optical flow information. To deal with motion induced noise, we introduced an estimation method of the optical flow uncertainty statistics which are used in the background
Lecture Notes in Computer Science, 1994
This paper describes the analysis of image sequences taken by a TV camera mounted on a car moving in usual outdoor scenarios. Because of the presence of shocks and vibrations during image acquisition, the numerical computation of temporal derivatives is very noisy, and therefore differential techniques to compute the optical flow do not provide adequate results. By using correlation-based techniques and by correcting the optical flows for shocks and vibrations, it is possible to obtain useful sequences of optical flows. From these optical flows it is possible to estimate the egomotion and to obtain information on the absolute velocity, angular velocity and radius of curvature of the moving vehicle. These results suggest that the optical flow can be successfully used by a vision system for assisting a driver in a vehicle moving in usual outdoor streets and motorways.
A method is proposed for real-time detection of objects that maneuver in the visual field of a monocular observer. Such cases are common in natural environments where the 3D motion parameters of certain objects (e.g. animals) change considerably over time. The approach taken conforms with the theory of purposive vision, according to which vision algorithms should solve many specific problems under loose assumptions. The method can effectively answer two important questions: (a) whether the observer has changed his 3D motion parameters, and (b) in the case that the observer has constant 3D motion, whether there are any maneuvering objects (objects with non-constant 3D motion parameters) in his visual field. The approach is direct in the sense that the structure-from-motion problem, which can only be solved under restrictive assumptions, is avoided. Essentially, the method relies on a pointwise comparison of two normal flow fields which can be robustly computed from three successive frames. Thus, it bypasses the ill-posed problem of optical flow computation. Experimental results demonstrate the effectiveness and robustness of the proposed scheme. Moreover, the computational requirements of the method are extremely low, making it a likely candidate for real-time implementation.
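Normal flow, the component of image motion along the intensity gradient, can be computed directly from spatio-temporal derivatives without solving the full correspondence problem. This sketch derives one normal flow field from a frame pair and compares two such fields pointwise; the paper's scheme works on three successive frames, and the function names and threshold here are assumptions:

```python
import numpy as np

def normal_flow(I_prev, I_next):
    """Normal flow magnitude -It/|grad I| where the gradient is reliable.

    Only the flow component along the intensity gradient is recoverable
    pointwise, which is exactly what the direct method needs.
    """
    Iy, Ix = np.gradient(I_prev.astype(float))
    It = I_next.astype(float) - I_prev.astype(float)
    grad = np.hypot(Ix, Iy)
    return np.where(grad > 1e-3, -It / np.maximum(grad, 1e-3), 0.0)

def inconsistency(nf_a, nf_b, tau=0.5):
    """Pointwise disagreement between two normal flow fields; large
    disagreement indicates non-constant 3D motion (a maneuvering object)."""
    return np.abs(nf_a - nf_b) > tau
```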