2006 IEEE Intelligent Vehicles Symposium, 2006
In this paper, we propose a real-time method to detect obstacles using theoretical models of optical flow fields. The idea of our approach is to segment the image into two layers: the pixels which match our optical flow model and those that do not (i.e., the obstacles). In this paper, we focus on a model of the motion of the ground plane. Regions of the visual field that violate this model indicate potential obstacles.
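The ground-plane test described above can be sketched as follows. This is an illustrative example, not the authors' implementation: it assumes a dense flow field is already available and simplifies the planar-motion model to a 6-parameter affine one, fitted by least squares and used to flag pixels with large residuals.

```python
import numpy as np

def fit_affine_flow(xs, ys, u, v):
    """Least-squares fit of a 6-parameter affine flow model
    u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y
    (a simplification of the full planar-motion model)."""
    A = np.stack([np.ones_like(xs), xs, ys], axis=1)
    au, *_ = np.linalg.lstsq(A, u, rcond=None)
    av, *_ = np.linalg.lstsq(A, v, rcond=None)
    return au, av

def obstacle_mask(xs, ys, u, v, au, av, thresh=1.0):
    """Pixels whose flow deviates from the ground-plane model
    by more than `thresh` are labelled as potential obstacles."""
    u_pred = au[0] + au[1] * xs + au[2] * ys
    v_pred = av[0] + av[1] * xs + av[2] * ys
    residual = np.hypot(u - u_pred, v - v_pred)
    return residual > thresh
```

In practice the model would be fitted robustly (e.g. RANSAC) so that obstacle pixels do not bias the ground-plane estimate.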
In this paper, an analysis of collision detection via optical flow is presented. The main objective of the work is to detect and issue a warning signal when a stationary or moving object (such as a pedestrian) obstructs the path in which a vehicle is moving. The warning signal is given only if there is a risk of collision. To gather the input video streams for the simulation, a camera with a resolution of 1280×720 pixels was used. Since processing time increases with resolution, the actual processing was carried out at a lower resolution of 160×120 pixels. Colour segmentation was carried out in RGB colour space. Simulink was used as the platform for developing and testing the simulation. Using Simulink blocks, video pre-processing, image segmentation, optical flow calculation and thresholding were carried out. The stability of the video was one of the main concerns in this research: for videos taken while the vehicle was moving at high speed, stability became an issue. The developed system was tested successfully at speeds below 10 km h⁻¹, detecting stationary objects and pedestrians crossing the road.
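The downscale-then-threshold pipeline above can be illustrated with a minimal NumPy sketch. This is an assumption-laden stand-in for the Simulink blocks, not the authors' model: the downscale factors (6 and 8, taking 720×1280 to 120×160), the central "path" band, and both thresholds are hypothetical parameters.

```python
import numpy as np

def downscale(frame, fy, fx):
    """Block-average a grayscale frame by integer factors
    (e.g. 720x1280 -> 120x160 with fy=6, fx=8)."""
    h, w = frame.shape
    return frame[:h - h % fy, :w - w % fx].reshape(
        h // fy, fy, w // fx, fx).mean(axis=(1, 3))

def collision_warning(flow_u, flow_v, mag_thresh=2.0, area_thresh=0.05):
    """Warn only when enough of the central path band moves fast:
    the flow magnitude is thresholded, then the fraction of
    above-threshold pixels is compared against an area threshold."""
    h, w = flow_u.shape
    band = slice(w // 3, 2 * w // 3)   # strip covering the vehicle's path
    mag = np.hypot(flow_u[:, band], flow_v[:, band])
    moving = mag > mag_thresh
    return moving.mean() > area_thresh
```

Gating the warning on the area of moving pixels, rather than any single pixel, is one simple way to suppress the spurious triggers that camera shake would otherwise cause at higher speeds.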
Proceedings, International Conference on Image Processing, 1995
This paper proposes a method for detecting obstacles on a runway from monocular image sequences. The surface of the runway is modeled as a plane, and the model flow field corresponding to the runway is described by 8 coefficients. The set of 8 coefficients describing the initial model flow field is given by data from an Inertial Navigation System (INS). The uncertainties of this initial model flow field are estimated and used to obtain an accurate model flow field. The residual flow field, after warping images by the accurate model flow field, is computed by solving overdetermined gradient-based optical flow equations using a singular value decomposition (SVD). This SVD gives new insight into the uncertainties inherent in the optical flow computation. Pixels with large residual flow vectors are considered obstacles. Experimental results for two real image sequences are presented.
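The SVD solution of the overdetermined gradient constraints can be sketched for a single window. This is a generic illustration of the technique, not the paper's code: it stacks the brightness-constancy equations Ix·u + Iy·v = −It over a window and solves them via SVD, whose singular values expose the aperture-problem uncertainty the abstract refers to.

```python
import numpy as np

def flow_from_gradients(Ix, Iy, It):
    """Solve the overdetermined system Ix*u + Iy*v = -It for one
    window via SVD. A small second singular value means the local
    gradients are nearly parallel (aperture problem), so the flow
    estimate is uncertain along one direction."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    uv = Vt.T @ ((U.T @ b) / s)   # least-squares flow vector (u, v)
    return uv, s
```

Thresholding the ratio of singular values is one common way to reject windows where the residual-flow estimate would be unreliable.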
Proceedings of the 28th National IT Conference (2010)
A virtual simulation was carried out as a pilot project to investigate the strength of optical flow for obstacle avoidance in robot navigation. The virtual environment allowed the user to easily change parameters such as vehicle speed, camera focal length and lighting conditions. VRML97 was selected as the modelling language for the virtual world and virtual vehicle. Simulink was used for image pre-processing, calculating the optical flow and computing the navigation commands. At its present level, the system can effectively be used to navigate a robot through an obstacle-filled 3D world. The drawback of the present system arises when the world is symmetric and the robot is heading exactly perpendicular to a face of a symmetric obstacle. Further work is being carried out in this direction to improve performance.
2013
Differential methods of optical flow estimation are based on partial spatial and temporal derivatives of the image signal. In this paper, a comparison between a background modelling technique and Lucas-Kanade optical flow is made for object detection. Background subtraction methods need a background model built from hundreds of images, whereas Lucas-Kanade optical flow estimation is a differential two-frame algorithm, since it needs only two frames in order to work. The Lucas-Kanade method used here divides the image into patches and computes a single optical flow vector for each of them. Keywords: Background Modelling, Motion Vector, Optical Flow, Object Detection
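The patch-wise Lucas-Kanade scheme described above can be sketched in a few lines. This is a generic textbook formulation, not the paper's implementation; the patch size of 8 pixels is an arbitrary choice for illustration.

```python
import numpy as np

def lucas_kanade_patch(f0, f1):
    """One flow vector for one patch from two frames.
    Builds the 2x2 normal equations G [u v]^T = b from the
    spatial gradients of f0 and the temporal difference f1 - f0."""
    Ix = np.gradient(f0, axis=1)
    Iy = np.gradient(f0, axis=0)
    It = f1 - f0
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(G, b)

def flow_per_patch(f0, f1, p=8):
    """Divide the image into p x p patches, one flow vector each."""
    h, w = f0.shape
    return np.array([[lucas_kanade_patch(f0[i:i + p, j:j + p],
                                         f1[i:i + p, j:j + p])
                      for j in range(0, w - w % p, p)]
                     for i in range(0, h - h % p, p)])
```

A real implementation would also check that G is well conditioned before solving, since patches with one-dimensional texture make the system nearly singular.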
Proceedings of Conference on Intelligent Vehicles
In this work an approach to obstacle detection and steering control for safe car driving is presented. It is based on the evaluation of 3D motion and structure parameters from optical flow through the analysis of image sequences acquired by a TV camera mounted on a vehicle moving along typical city roads, country roads or motorways. This work is founded on the observation that if a camera is appropriately mounted on a vehicle which moves on a planar surface, the problem of motion and structure recovery from optical flow becomes linear and local, and depends on only two parameters: the angular velocity around an axis orthogonal to the planar surface, and the ratio between the viewed depth and the translational speed, i.e. the generalized time-to-collision. Examples of the detection of moving surrounding cars and of the trajectory recovery algorithm on image sequences of traffic scenes from urban roads are provided.
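The generalized time-to-collision mentioned above can be illustrated with a standard relation rather than this paper's full recovery algorithm: under pure translation toward a fronto-parallel surface, the flow field is u = x/τ, v = y/τ, so its divergence equals 2/τ. A minimal sketch under that assumption:

```python
import numpy as np

def time_to_collision(u, v, step=1.0):
    """Estimate time-to-collision tau from the divergence of a flow
    field, using div(flow) = du/dx + dv/dy = 2/tau, which holds for
    pure approach toward a fronto-parallel surface."""
    du_dx = np.gradient(u, step, axis=1)
    dv_dy = np.gradient(v, step, axis=0)
    div = np.mean(du_dx + dv_dy)
    return 2.0 / div
```

Because τ is the ratio of depth to translational speed, it can be read off the flow field without ever recovering metric depth or speed separately, which is what makes the formulation in the abstract local and linear.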
Systems and Computers in Japan, 2002
A scheme for detecting a moving object in a three-dimensional environment from observed dynamic images by optical flow, based on the state of motion of the observing system, is proposed in this paper. The usual optical flow constraint equations defined in an image coordinate system do not sufficiently satisfy the assumptions made in deriving them when the observing system is in motion. In this paper, optical flow constraint equations considering the motion of the observing system are first derived. To do this, a mapping converting the motion of a stationary-environment image to linear trajectory signals is derived. The uniform velocity property of motion and the isotropic property of motion within a proximal area, which are basic assumptions of the block gradient method, are satisfied by this mapping. Next, a method of expressing the optical flow constraint equations after mapping by the gradient in the time dimension before mapping is presented. Finally, the residuals of the optical flow constraint equations are proposed as the evaluation quantity for the extraction of a moving object, and their efficacy is shown. © 2002 Wiley Periodicals, Inc. Syst Comp Jpn, 33(6): 83–92, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.1135
International Journal of Advanced Robotic …, 2007
In this paper we develop an algorithm for visual obstacle avoidance by an autonomous mobile robot. The input to the algorithm is an image sequence grabbed by a camera embedded on a B21r robot in motion. Optical flow information is then extracted from the image sequence for use in the navigation algorithm. The optical flow provides very important information about the robot's environment, such as the disposition of obstacles, the robot heading, the time to collision and the depth. The strategy consists in balancing the amount of left- and right-side flow to avoid obstacles; this technique allows the robot to navigate without colliding with obstacles. The robustness of the algorithm is demonstrated by several examples.
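The flow-balancing strategy above can be sketched very compactly. This is an illustrative sketch, not the paper's controller: nearby obstacles generate larger flow, so the field of view is split into halves and the robot steers away from the side with more total flow. The sign convention (positive command = turn right, away from a left-side obstacle) and the gain are hypothetical.

```python
import numpy as np

def balance_steering(u, v, gain=1.0):
    """Steering command from the left/right flow imbalance.
    Positive output = turn right (more flow, hence nearer
    obstacles, on the left half of the image)."""
    mag = np.hypot(u, v)
    w = mag.shape[1]
    left = mag[:, : w // 2].sum()
    right = mag[:, w // 2 :].sum()
    return gain * (left - right) / (left + right + 1e-9)
```

Normalising by the total flow keeps the command bounded in [-gain, gain] regardless of the robot's speed.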
For Micro Aerial Vehicles (MAVs), robust obstacle avoidance during flight is a challenging problem because only lightweight sensors such as monocular cameras can be mounted as payload. With a monocular camera, depth cannot be perceived from a single view, which means that it has to be estimated by combining images from multiple viewpoints. Here we present a method which focuses only on regions classified as foreground and follows features in the foreground to estimate, via depth segmentation and scale estimation, the threat of potential objects being obstacles in the flight path. We evaluate results from our approach in an outdoor environment with a small quadrotor drone and analyse the effect of the different stages in the image processing pipeline. We are able to determine evident obstacles in front of the drone with high confidence. Robust obstacle-avoidance algorithms such as the one presented in this article will allow Micro Aerial Vehicles to navigate semi-autonomously in outdoor scenarios.
2009
Optic flow is the apparent visual motion that one experiences during motion. A fundamental problem in machine vision and the processing of image sequences is the estimation of optic flow, which is the projection of 3D surface-point motion onto a 2D sensor plane. This paper deals with a phase-based algorithm for optic flow computation. The phase-based optic flow technique is robust with respect to smooth shading and lighting variations and is amplitude invariant. Therefore, in this method phase contours are tracked over time. The image sequence is spatially filtered using a bank of Gabor filter quadrature pairs and its temporal phase gradient is computed. Velocity estimates with directions orthogonal to the filter pairs' orientations are combined at each spatial location. A component velocity is rejected if the corresponding filter pair's phase information is not linear over time. A detailed study was carried out on the optic flow algorithms with a specific interest for its...
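The phase-based principle can be shown in one dimension. This is a simplified sketch, not the paper's algorithm: a single complex Gabor filter (the quadrature pair packed into one complex kernel, with hypothetical wavelength and bandwidth) is applied to two frames, and velocity follows from the ratio of the temporal and spatial phase gradients, v = -φ_t / φ_x.

```python
import numpy as np

def phase_velocity(sig0, sig1, wavelength=16.0, sigma=6.0):
    """1D phase-based velocity from two frames.
    A complex Gabor kernel (cosine + i*sine quadrature pair) is
    convolved with both frames; the temporal phase difference over
    the spatial phase gradient gives the displacement per frame."""
    n = int(4 * sigma)
    x = np.arange(-n, n + 1)
    gabor = np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * x / wavelength)
    r0 = np.convolve(sig0, gabor, mode='valid')
    r1 = np.convolve(sig1, gabor, mode='valid')
    dphi_dt = np.angle(r1 * np.conj(r0))            # temporal phase difference
    dphi_dx = np.gradient(np.unwrap(np.angle(r0)))  # spatial phase gradient
    mid = len(r0) // 2
    return -dphi_dt[mid] / dphi_dx[mid]
```

Taking the phase difference as `angle(r1 * conj(r0))` keeps it in (-π, π], so the estimate is unambiguous for displacements below half the filter wavelength; the amplitude of the responses never enters, which is the amplitude invariance the abstract highlights.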
Optik, 2017
Motion detection is one of the key issues in intelligent video surveillance, traffic monitoring and video-based human-computer interaction. In this paper, we efficiently detect moving objects by computing the optical flow between three consecutive frames. The proposed method first filters out noise in the individual frames using a Gaussian filter. Next, it computes the optical flow between (a) the current frame and the previous frame and (b) the current frame and the next frame separately. Subsequently, it combines both optical flow components to compute the gross optical flow. An adaptive thresholding post-processing step is executed to remove spurious foreground objects. Moving objects are then detected using a morphological operation on the equalized output. The method has been conceived, implemented and tested on a set of real video data sets. The experimental results exhibit satisfactory performance when compared with other existing methods.
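The combine-and-threshold stage can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code: the two flow fields are averaged into a gross flow, and the adaptive threshold is modelled as mean + k·std of the flow magnitude, a hypothetical choice the paper does not specify.

```python
import numpy as np

def gross_flow(u1, v1, u2, v2):
    """Average the flow computed against the previous frame (u1, v1)
    and against the next frame (u2, v2): steadily moving objects
    produce consistent motion in both, so averaging reinforces them."""
    return 0.5 * (u1 + u2), 0.5 * (v1 + v2)

def adaptive_motion_mask(u, v, k=2.0):
    """Adaptive threshold on flow magnitude: keep pixels above
    mean + k * std, suppressing spurious low-magnitude foreground."""
    mag = np.hypot(u, v)
    return mag > mag.mean() + k * mag.std()
```

The mask would then be cleaned with a morphological opening, as the abstract describes, before extracting connected components as moving objects.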
International Journal of Intelligent Systems Technologies and Applications, 2018
Lecture Notes in Computer Science, 2009
Lecture Notes in Computer Science, 2012
2020
2011 IEEE International Conference on Industrial Technology, 2011
Springer Tracts in Advanced Robotics, 2008
2006 IEEE Intelligent Transportation Systems Conference, 2006
International Journal of Computer Applications, 2013
IEEE Transactions on Robotics and Automation, 1998
International Journal on Smart Sensing and Intelligent Systems
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
Journal of Visual Communication and Image Representation, 2017