1999, Computer Standards & Interfaces
We describe an optical flow based obstacle detection system for detecting vehicles approaching the blind spot of a car on highways and city streets. The system runs at near frame rate (8-15 frames/second) on PC hardware. We discuss the prediction of a camera image given an implicit optical flow field and its comparison with the actual camera image; the advantage of this approach is that we never explicitly calculate optical flow. We also present results on digitized highway images and on video taken from Navlab 5 while driving on a Pittsburgh highway.
A fundamental goal of an overtaking monitor system is the segmentation of the overtaking vehicle. This application can be addressed through an optical flow driven scheme. We focus on the rear-mirror visual field using a camera mounted on top of the mirror. When driving a car, the ego-motion optical flow pattern is more or less unidirectional, i.e. all the static objects and landmarks move backwards. An overtaking car, on the other hand, generates an optical flow pattern in the opposite direction, i.e. moving forward towards our car. This makes motion processing schemes especially appropriate for an overtaking monitor application. We have implemented a highly parallel bio-inspired optical flow algorithm and tested it with real overtaking sequences in different weather conditions. We have also developed a post-processing optical flow step that allows us to estimate the car position, and tested it using a bank of overtaking car sequences. The overtaking vehicle position can be used to send useful alerts.
2007
This paper describes a vision-based system for blind spot detection in intelligent vehicle applications. A camera is mounted in the lateral mirror of a car with the intention of visually detecting cars that cannot be perceived by the vehicle driver because they are located in the so-called blind spot. The detection of cars in the blind spot is carried out using computer vision techniques based on optical flow and data clustering, as described below.
Lecture Notes in Computer Science, 2007
In this paper a solution to detect wrong-way drivers on highways is presented. The proposed solution is based on three main stages: Learning, Detection and Validation. First, the orientation pattern of vehicle motion flow is learned and modelled by a mixture of Gaussians. The second stage (Detection and Temporal Validation) applies the learned orientation model to detect objects moving in the lane's opposite direction. The third and final stage uses an appearance-based approach to confirm the detection of a vehicle before triggering an alarm. This methodology has proven to be quite robust to different weather conditions, illumination and image quality. Experiments carried out with several videos from traffic surveillance cameras on highways show the robustness of the proposed solution.
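The learning and detection stages above lend themselves to a compact sketch: fit a one-dimensional mixture of Gaussians to the flow orientations observed in a lane, then flag vectors far from every learned mode. This is a minimal illustration under assumptions, not the paper's implementation: `fit_gmm_1d`, `is_wrong_way`, the plain EM loop, and the 3-sigma test are all invented for the example (a real system would also handle angle wraparound).

```python
import numpy as np

def fit_gmm_1d(x, k=2, iters=100, seed=0):
    """Fit a 1-D Gaussian mixture to flow orientations with plain EM."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k, replace=False)          # initialize means from data
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        d = x[:, None] - mu[None, :]              # E-step: responsibilities
        p = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        nk = r.sum(axis=0)                        # M-step: update parameters
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, var, pi

def is_wrong_way(angle, mu, var, thresh=3.0):
    """Flag an orientation farther than `thresh` sigmas from every learned mode."""
    return bool(np.all(np.abs(angle - mu) / np.sqrt(var) > thresh))
```

An orientation near a learned mode (normal traffic direction) is accepted; one far from all modes (a wrong-way vehicle) is flagged for temporal and appearance validation.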
IEEE Transactions on Robotics and Automation, 1998
This paper describes procedures for obtaining a reliable and dense optical flow from image sequences taken by a television (TV) camera mounted on a car moving in usual outdoor scenarios. The optical flow can be computed from these image sequences by using several techniques. Differential techniques do not provide adequate results, because of the poor texture in the images and the shocks and vibrations experienced by the TV camera during image acquisition. By using correlation-based techniques and by correcting the optical flows for shocks and vibrations, useful sequences of optical flows can be obtained. When the car is moving along a flat road and the optical axis of the TV camera is parallel to the ground, the motion field is expected to be almost quadratic and to have a specific structure. As a consequence, the egomotion can be estimated from this optical flow, and information on the speed and the angular velocity of the moving vehicle is obtained. By analyzing the optical flow it is also possible to recover a coarse segmentation of the flow, in which objects moving with a different speed are identified. By combining information from intensity edges, a better localization of motion boundaries is obtained. These results suggest that the optical flow can be successfully used by a vision system for assisting a driver in a vehicle moving on usual streets and motorways.
2018
This document describes an advanced system and methodology for Cross Traffic Alert (CTA), able to detect vehicles that move into the vehicle driving path from the left or right side. The camera may be mounted not only on a stationary vehicle, e.g. at a traffic light or at an intersection, but also on a vehicle moving slowly, e.g. in a car park. In all of the aforementioned conditions, a driver's short loss of concentration or distraction can easily lead to a serious accident; the proposed system represents a valid support to avoid these kinds of car crashes. It is an extension of our previous work, related to a clustering system, which only works with fixed cameras. Only a vanishing point calculation and simple optical flow filtering, to eliminate motion vectors due to the car's own movement, are performed to let the system achieve high performance with different scenarios, cameras and resolutions. The proposed system uses as input only the optical flow, which is hardware implemented in the p...
2006 IEEE Intelligent Vehicles Symposium, 2006
In this paper, we propose a real-time method to detect obstacles using theoretical models of optical flow fields. The idea of our approach is to segment the image into two layers: the pixels which match our optical flow model and those that do not (i.e. the obstacles). In this paper, we focus on a model of the motion of the ground plane; regions of the visual field that violate this model indicate potential obstacles.
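As a toy illustration of the two-layer idea, assume (a deliberate simplification, not the paper's actual model) that for pure forward translation over a flat road the ground-plane flow magnitude grows roughly linearly with image row; pixels whose observed flow deviates from that prediction are marked as potential obstacles. All names, the tolerance, and the linear model itself are assumptions for the sketch:

```python
import numpy as np

def ground_flow_model(rows, v):
    """Assumed simplified ground-plane model: flow magnitude grows
    linearly with image row; `v` is a scale tied to vehicle speed."""
    return v * rows

def detect_obstacles(flow_mag, v, tol=0.5):
    """Label pixels whose measured flow magnitude deviates from the
    ground-plane prediction by more than `tol` as potential obstacles."""
    h, w = flow_mag.shape
    rows = np.arange(h, dtype=float)[:, None]      # one predicted value per row
    expected = ground_flow_model(rows, v) * np.ones((1, w))
    return np.abs(flow_mag - expected) > tol
```

Pixels on the road surface match the model and fall in the "ground" layer; anything standing above the road produces flow inconsistent with the model and lands in the "obstacle" layer.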
2020
In this research we present a novel algorithm for background subtraction using a moving camera. Our algorithm is based purely on visual information obtained from a camera mounted on an electric bus operating in downtown Reno, and automatically detects moving objects of interest with a view to providing a fully autonomous vehicle. In our approach we exploit the optical flow vectors generated by the motion of the camera while keeping parameter assumptions to a minimum. First, we estimate the Focus of Expansion, which is used to model and simulate 3D points given the intrinsic parameters of the camera; we then perform multiple linear regression to estimate the regression equation parameters and apply them to the real data of every frame to identify moving objects. We validated our algorithm using data taken from a common bus route.
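The Focus of Expansion estimate can be sketched as a linear least-squares problem: under pure camera translation, each flow vector is collinear with the line from the FoE to its pixel, giving one linear constraint per vector. This is the generic textbook formulation, not necessarily the paper's exact procedure; the function name and setup are assumptions:

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares Focus of Expansion: collinearity of each flow vector
    (u, v) at pixel (x, y) with the ray from the FoE gives
    v*foe_x - u*foe_y = v*x - u*y, one equation per tracked point."""
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.stack([v, -u], axis=1)
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe
```

With the FoE in hand, flow vectors that do not radiate away from it are candidates for independently moving objects.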
2014 IEEE Intelligent Vehicles Symposium Proceedings, 2014
The dynamic appearance of vehicles as they enter and exit a scene makes vehicle detection a difficult and complicated problem. Appearance-based detectors generally provide good results when vehicles are in clear view, but have trouble at the scene's edges due to changes in the vehicle's aspect ratio and partial occlusions. To compensate for some of these deficiencies, we propose incorporating motion cues from the scene. In this work, we focus on overtaking vehicle detection in a freeway setting with front- and rear-facing monocular cameras. Motion cues are extracted from the scene and, leveraging the epipolar geometry of the monocular setup, motion compensation is performed. Spectral clustering is used to group similar motion vectors together, and after post-processing, vehicle detection candidates are produced. Finally, these candidates are combined with an appearance detector to remove any false positives, outputting the detections as a vehicle travels through the scene.
International Journal of Machine Learning and Computing, 2014
When driving a vehicle on a road, a driver who wants to change lanes must glance at the rear and side mirrors and turn his head to scan for approaching vehicles in the side lanes. However, the view scope of this behavior is limited; there remains an invisible blind spot area. To avoid possible traffic accidents during lane changes, we propose a lane change assistance system. Two cameras are mounted under the side mirrors of the host vehicle to capture rear-side-view images for detecting approaching vehicles. The proposed system consists of four stages: estimation of weather-adaptive threshold values, optical flow detection, static feature detection, and detection decision. The proposed system can detect side vehicles at various approaching speeds; moreover, it can also adapt to various weather conditions and environment situations. In experiments with 14 videos covering eight different environments and weather conditions, the results reveal a 96% detection rate with few false alarms. Index Terms—Advanced driver assistance system, blind spot detection, optical flow, underneath shadow features.
International Journal of Computer Applications, 2013
In this paper, we present a semi-real-time vehicle tracking algorithm to determine the speed of vehicles in traffic from traffic-camera video. The results of this work can be used for traffic control, security and safety by both government agencies and commercial organizations. A method is described for tracking moving objects from a sequence of video frames, implemented using optical flow (Horn-Schunck and Lucas-Kanade) in MATLAB and Simulink. It has a variety of uses, among them: human-computer interaction, security and surveillance, video communication and compression, augmented reality, traffic control, medical imaging and video editing. Segmentation is performed to detect the object after reducing the noise in the scene. The object is tracked by plotting a rectangular bounding box around it in each frame. The velocity of the object is determined by calculating the distance that the object moved in a sequence of frames with respect to the frame rate at which the video was recorded. Comparison and performance analysis of the algorithms based on PSNR and average angular error is done.
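The velocity computation described above (displacement across frames times the frame rate) can be sketched in a few lines; the function name and the pixel-to-meter calibration constant are assumptions for illustration:

```python
import numpy as np

def speed_from_track(centroids, fps, meters_per_pixel):
    """Estimate object speed from per-frame bounding-box centroids:
    mean displacement between consecutive frames (pixels/frame),
    converted to meters/second with an assumed calibration constant."""
    c = np.asarray(centroids, dtype=float)
    step = np.linalg.norm(np.diff(c, axis=0), axis=1)  # pixels per frame
    return step.mean() * fps * meters_per_pixel        # meters per second
```

For example, a centroid moving 2 pixels per frame at 30 frames/second with a scale of 0.5 m/pixel corresponds to 30 m/s.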
Lecture Notes in Computer Science, 2004
We describe an optical flow processing system that works as a virtual motion sensor. It is based on an FPGA device, which enables easy changes of configuration parameters to adapt the sensor to different motion speeds, light conditions and other environmental factors. We call it a virtual sensor because it consists of a conventional camera as front-end and a processing FPGA device which embeds the frame grabber, the optical flow algorithm implementation, the output module and some configuration and storage circuitry. To the best of our knowledge, this paper represents the first description of a fully working optical flow processing system that includes accuracy and processing speed measurements to evaluate the platform performance.
Proceedings of Conference on Intelligent Vehicles
In this work an approach to obstacle detection and steering control for safe car driving is presented. It is based on the evaluation of 3D motion and structure parameters from optical flow through the analysis of image sequences acquired by a TV camera mounted on a vehicle moving along usual city roads, country roads or motorways. This work is founded on the observation that if a camera is appropriately mounted on a vehicle which moves on a planar surface, the problem of motion and structure recovery from optical flow becomes linear and local, and depends on only two parameters: the angular velocity around an axis orthogonal to the planar surface, and the ratio between the viewed depth and the translational speed, i.e. the generalized time-to-collision. Examples of the detection of moving surrounding cars and of the trajectory recovery algorithm on image sequences of traffic scenes from urban roads are provided.
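The generalized time-to-collision mentioned above has a simple image-plane reading: under pure forward translation, the TTC at a pixel equals its distance from the Focus of Expansion divided by its flow magnitude (in frames). A minimal sketch under that assumption, with invented names and a frames-to-seconds conversion:

```python
import numpy as np

def time_to_collision(point, foe, flow, fps):
    """Image-plane time-to-collision: radial distance from the Focus of
    Expansion divided by flow magnitude gives TTC in frames, then
    divided by the frame rate to give seconds."""
    r = np.linalg.norm(np.asarray(point, float) - np.asarray(foe, float))
    speed = np.linalg.norm(np.asarray(flow, float))   # pixels per frame
    return (r / speed) / fps
```

A pixel 10 px from the FoE moving outward at 1 px/frame at 30 fps is one third of a second from collision, independent of the actual metric depth.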
Lecture Notes in Computer Science, 2012
We present a system for automatically detecting obstacles from a moving vehicle using a monocular wide-angle camera. Our system was developed in the context of finding obstacles, particularly children, when backing up. The camera viewpoint is transformed to a virtual bird's-eye view. We developed a novel image registration algorithm to obtain ego-motion that, in combination with variational dense optical flow, outputs a residual motion map with respect to the ground. The residual motion map is used to identify and segment 3D and moving objects. Our main contribution is the feature-based image registration algorithm, which is able to separate and obtain the ground-layer ego-motion accurately even when the ground covers only 20% of the image, outperforming RANSAC.
2015
Computer vision obstacle detection on either road lanes or sidewalks is very important for traffic participants. This paper presents an approach for obstacle detection from sequences of consecutive monocular color image frames. Keypoints are uniformly distributed in a grid structure on each input image. A Lucas-Kanade optical flow algorithm is run between each pair of consecutive frames, on the considered keypoints, to compute the relative motion vectors. Background movement is estimated across frames using a RANSAC procedure, and optical flow vectors belonging to the background are filtered out. The remaining vectors are considered to lie on obstacles and are grouped into separate obstacles by a hierarchical clustering algorithm that analyzes their locations, angles and magnitudes. Spurious clusters with a low number of motion vectors are filtered out. Finally, an imminent collision warning is issued both visually and acoustically when an obstacle is detected ...
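The background-filtering step can be sketched with a robust estimate of the dominant motion. Here the component-wise median flow stands in for the paper's RANSAC procedure (an assumed simplification, not the authors' method); vectors that deviate from it survive as obstacle candidates for clustering:

```python
import numpy as np

def filter_background_flow(flows, tol=1.0):
    """Take the median flow as the dominant background motion and keep
    only vectors deviating from it by more than `tol` pixels; those are
    candidate obstacle motions. A robust stand-in for RANSAC."""
    bg = np.median(flows, axis=0)                    # dominant (background) motion
    residual = np.linalg.norm(flows - bg, axis=1)
    return flows[residual > tol], bg
```

The median is robust as long as background vectors dominate; with mostly-obstacle frames a RANSAC model fit, as in the paper, is the safer choice.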
IEEE Transactions on Vehicular Technology, 2008
Overtaking and lane changing are very dangerous driving maneuvers due to possible driver distraction and blind spots. We propose an aid system based on image processing to help the driver in these situations. The main purpose of an overtaking monitoring system is to segment the rear view and track the overtaking vehicle. We address this task with an optic-flow-driven scheme, focusing on the visual field in the side mirror by placing a camera on top of it. When driving a car, the ego-motion optic-flow pattern is very regular, i.e., all the static objects (such as trees, buildings on the roadside, or landmarks) move backwards. An overtaking vehicle, on the other hand, generates an optic-flow pattern in the opposite direction, i.e., moving forward toward the vehicle. This well-structured motion scenario facilitates the segmentation of regular motion patterns that correspond to the overtaking vehicle. Our approach is based on two main processing stages: First, the computation of optical flow in real time uses a customized digital signal processor (DSP) particularly designed for this task and, second, the tracking stage itself, based on motion pattern analysis, which we address using a standard processor. We present a validation benchmark scheme to evaluate the viability and robustness of the system using a set of overtaking vehicle sequences to determine a reliable vehicle-detection distance.
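The core segmentation cue in the two overtaking-monitor abstracts above — backward ego-motion flow versus forward flow from the overtaking car — can be sketched in a few lines. The sign convention (static scene gives u < 0, an overtaker gives u > 0), the thresholds, and the function name are all assumptions for illustration:

```python
import numpy as np

def segment_overtaker(u, min_u=0.5, min_pixels=20):
    """Given the horizontal flow component `u` per pixel, return a bounding
    box (x0, y0, x1, y1) around forward-moving pixels (the overtaking
    vehicle under the assumed sign convention), or None if too few exist."""
    mask = u > min_u
    if mask.sum() < min_pixels:       # noise floor, arbitrary threshold
        return None
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()
```

On a purely ego-motion frame the function returns None; once a coherent forward-moving patch appears, its box can be tracked over time to estimate vehicle position and trigger an alert.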
In this paper, a method for early detection of vehicles is presented. By contrast with the common frame-based analysis, this method focuses on tracking motion rather than vehicle objects. With the detection of motion, the actual shape and size of the objects become less of a concern, thereby allowing detection of smaller objects at an earlier stage. One notable advantage that early detection offers is the ability to place cameras at higher vantage points or in oblique views that provide images in which vehicles in the near parts of the image can be detected by their shape or features while vehicles in the far view cannot. The ability to detect vehicles earlier and to cover longer road sections may be useful in providing longer vehicle trajectories for the development of traffic models (e.g., 4–6) and for improved traffic control algorithms.
Detection and tracking of vehicles from video and other image sequences are valuable in many application areas, such as road safety, automatic enforcement, surveillance, and the acquisition of vehicle trajectories for traffic modeling. An indication of the importance of vehicle tracking is the growing number of related projects reported in recent literature. Among these are research at the University of Arizona that used video cameras, both digital and analogue, mounted on helicopter skids for the acquisition of video sequences for traffic management purposes (1); a project at the Delft University of Technology in the Netherlands, where traffic monitoring from airborne platforms was applied using digital cameras (2); and work at the Berkeley Highway Laboratory in California (3), where several cameras were deployed in a single location to form panoramic coverage of a road section. A common thread in all these projects is the application of methods that consider images with high resolution and large scale, which readily provide information on salient vehicle features (e.g., color and shape). Indeed, most of the research in this area has focused on detection of vehicles that are sufficiently large in the image that they can be detected on the basis of various features. Under these assumptions, detection is useful only in the near field of view when images are taken from an oblique view, or at low altitudes for horizontal images. As a result, the acquisition is feasible on limited sections of roads and may ignore significant parts of the available image; these parts may be valuable for early detection of vehicles that enter the image frames.
RELATED WORK. Various vehicle detection methods have been reported in the literature. A general subdivision shows three main categories: (a) optical flow, (b) background subtraction, and (c) object-based detection. Optical flow is an approximation of the image motion on the basis of local derivatives in a given sequence of images; that is, it specifies how much each image pixel moves between adjacent images. Bohrer et al. (7) used a simplified optical flow algorithm for obstacle detection. De Micheli and Verri (8) applied the method to estimate the angular speed of vehicles relative to the direction of the road as part of a system to detect obstacles. Batavia et al. (9) described another system for obstacle detection, which is based on the optical flow method although it does not explicitly calculate the flow field; this system aims to detect approaching vehicles that are in the blind spot of a car. Wang et al. (10) pointed out that, for vehicle detection, optical flow suffers from several difficulties, such as lack of texture in the road regions and small gray-level variations that introduce significant instabilities in the computation of the spatial derivatives. Although most optical flow–based techniques require high computational effort, object recognition via background subtraction usually requires significantly lower computational effort. Cheung and Kamath (11) identified four major steps in background subtraction algorithms: preprocessing, background modeling, foreground detection, and data validation. They also identified two main categories of background subtraction methods: recursive and nonrecursive. Nonrecursive techniques usually estimate a single background image; for example, frame differencing uses the previous frame as the background model for the current frame (12). Alternatively, the background is estimated by the median value of each pixel over all the frames in the video sequence (13).
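The frame-differencing baseline cited as reference (12) — the simplest nonrecursive background subtraction, where the previous frame is the background model — can be sketched as follows; the function name and threshold value are assumptions for illustration:

```python
import numpy as np

def frame_difference(prev, curr, thresh=25):
    """Nonrecursive background subtraction in its simplest form: the
    previous frame is the background model, and pixels that changed by
    more than `thresh` gray levels are foreground."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh
```

The int16 cast avoids uint8 wraparound in the subtraction. The median-background variant from reference (13) replaces `prev` with the per-pixel median over the whole sequence.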
Computación y Sistemas
In this article, an effective solution is presented to assist a driver in making overtaking decisions under adverse, dark night-time conditions on a two-lane single-carriageway road. We consider an awkward road situation in which one vehicle is just in front of the test vehicle, travelling in the same direction, while another vehicle approaches from the opposite direction. As the environment is very dark, only the headlights and taillights of any vehicle are visible. Estimating distance and speed with high accuracy, especially at night when vehicles themselves are not visible, is a challenging task. The proposed assistance system estimates the actual and relative speed and the distance of the slow vehicle in front of the test vehicle and of the oncoming vehicle by observing taillights and headlights, respectively. Subsequently, the required gap, road condition level, speed and acceleration for safe overtaking are estimated. Finally, the overtaking decision is made in such a way that there should not be any collision between vehicles. Several real-time experiments reveal that the estimation achieves high accuracy under safe conditions compared with state-of-the-art techniques, using a low-cost 2D camera.
2016 IEEE International Carnahan Conference on Security Technology (ICCST), 2016
This paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main tracking methods discussed and implemented are blob analysis, optical flow and foreground detection. A further analysis is also presented, testing two of the techniques on a number of video sequences with different levels of difficulty.
Lecture Notes in Computer Science, 2011
The goal of this work is to propose a solution to improve a driver's safety while changing lanes on the highway: if the driver is not aware of the presence of a vehicle in his blind spot, a crash can occur. In this article we propose a method to monitor the blind spot zone using video feeds and warn the driver of any dangerous situation. In order to fit into a real-time embedded car safety system, we avoid complex techniques such as classification and learning. The blind spot monitoring algorithm we expose here is based on a feature tracking approach using optical flow calculation. The features to track are chosen essentially according to their motion patterns, which must match those of a moving vehicle, and are filtered to overcome the presence of noise. We can then decide on a car's presence in the blind spot given the density of tracked features. To illustrate our approach we present some results using video feeds captured on the highway.