2007
This paper describes a vision-based system for blind spot detection in intelligent vehicle applications. A camera is mounted in the lateral mirror of a car with the intention of visually detecting cars that cannot be perceived by the vehicle driver because they are located in the so-called blind spot. The detection of cars in the blind spot is carried out using computer vision techniques, based on optical flow and data clustering, as described below.
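The optical-flow-plus-clustering idea in this abstract can be illustrated with a toy example. The sketch below (synthetic numbers, not the paper's implementation) clusters hypothetical flow vectors from a mirror-mounted camera by the sign of their horizontal component: ego-motion makes static scenery stream backward, while a car in the blind spot moves forward.

```python
import numpy as np

# Hypothetical flow vectors (dx, dy) from a mirror-mounted camera: static
# scenery streams backward (negative dx) under ego-motion, while a car in
# the blind spot moves forward (positive dx).
rng = np.random.default_rng(0)
background = np.column_stack([rng.normal(-4.0, 0.5, 60), rng.normal(0.0, 0.3, 60)])
overtaker = np.column_stack([rng.normal(3.0, 0.5, 15), rng.normal(0.0, 0.3, 15)])
flow = np.vstack([background, overtaker])

def cluster_by_direction(vectors, thresh=0.0):
    """Split flow vectors into a backward (ego-motion) cluster and a
    forward (candidate vehicle) cluster by their horizontal component."""
    forward = vectors[vectors[:, 0] > thresh]
    backward = vectors[vectors[:, 0] <= thresh]
    return forward, backward

forward, backward = cluster_by_direction(flow)
alarm = len(forward) >= 10    # enough coherent forward vectors: car in blind spot
```

In a real system the vectors would come from an optical flow estimator over the mirror image, and the vote threshold would be tuned against noise.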
International Journal of Machine Learning and Computing, 2014
When driving a vehicle on a road, a driver who wants to change lanes must glance at the rear and side mirrors and turn his or her head to scan for approaching vehicles in the adjacent lanes. However, the view obtained by this behavior is limited; an invisible blind spot area remains. To avoid possible traffic accidents during lane changes, we here propose a lane change assistance system. Two cameras are mounted under the side mirrors of the host vehicle to capture rear-side-view images for detecting approaching vehicles. The proposed system consists of four stages: estimation of weather-adaptive threshold values, optical flow detection, static feature detection, and detection decision. The proposed system can detect side vehicles approaching at various speeds; moreover, it can adapt to varying weather conditions and environment situations. In experiments with 14 videos covering eight different environments and weather conditions, the results reveal a 96% detection rate with few false alarms. Index Terms: Advanced driver assistance system, blind spot detection, optical flow, underneath shadow features.
A fundamental goal of an overtaking monitor system is the segmentation of the overtaking vehicle. This application can be addressed through an optical flow driven scheme. We focus on the rear mirror visual field using a camera mounted on top of it. When we drive a car, the ego-motion optical flow pattern is more or less unidirectional, i.e. all the static objects and landmarks move backwards. On the other hand, an overtaking car generates an optical flow pattern in the opposite direction, i.e. moving forward towards our car. This makes motion processing schemes especially appropriate for an overtaking monitor application. We have implemented a highly parallel bio-inspired optical flow algorithm and tested it with real overtaking sequences in different weather conditions. We have developed a post-processing optical flow step that allows us to estimate the car position. We have tested it using a bank of overtaking car sequences. The overtaking vehicle position can be used to send useful aler...
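The position-estimation post-processing step could, under a simple median-voting assumption, look like the following sketch. This is an illustration only, not the authors' bio-inspired pipeline; all coordinates and flow values are synthetic.

```python
import numpy as np

# Hypothetical sparse flow field: each row is (x, y, dx, dy). Static
# scenery flows backward (dx < 0); an overtaking car planted around
# image position (480, 300) produces forward vectors (dx > 0).
rng = np.random.default_rng(1)
background = np.column_stack([rng.uniform(0, 640, 150), rng.uniform(0, 480, 150),
                              np.full(150, -3.0), np.zeros(150)])
car = np.column_stack([rng.normal(480, 15, 20), rng.normal(300, 10, 20),
                       np.full(20, 4.0), np.zeros(20)])
points = np.vstack([background, car])

def estimate_car_position(points, min_votes=5):
    """Estimate the overtaking car's image position as the median of all
    forward-flowing points; return None if there are too few votes."""
    fwd = points[points[:, 2] > 0]
    if len(fwd) < min_votes:
        return None
    return float(np.median(fwd[:, 0])), float(np.median(fwd[:, 1]))

pos = estimate_car_position(points)
```

The median makes the estimate robust to a few spurious forward vectors, which is the role of the noise filtering mentioned in the abstract.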
SAE International Journal of Passenger Cars - Electronic and Electrical Systems, 2012
This paper describes a vision-based vehicle detection system for a blind spot warning function. This detection system has been designed to provide ample performance as a driving safety support system, while streamlining the image processing algorithm so that it can be processed using the computational power of an existing ECU. The procedure used by the system to detect a vehicle in a blind spot is as follows. The system consists of four functional components: obstacle detection, velocity estimation, vertical edge detection, and final classification. In obstacle detection, a predicted image is generated under the assumption that the road surface is a perfectly flat plane, and then an object is detected based on a histogram that is created by comparing the predicted image and an actually observed image. The velocity of the object is estimated by tracking the histogram over time, assuming that both the object and the host vehicle are traveling in the same direction. Vertical edge detection is employed so as to avoid misdetection due to a vehicle shadow projected onto the road surface. In final classification, a vehicle is detected on the basis of these results. The effectiveness of the system was verified by conducting road tests on highways and narrow streets with two-way traffic.
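The flat-plane prediction idea can be illustrated in one dimension: if the road is a perfectly flat plane and the ego-motion is known, the next frame's ground texture is predictable, and pixels that violate the prediction betray an object above the road surface. A minimal synthetic sketch (the described system builds a histogram from such comparisons; all values here are hypothetical):

```python
import numpy as np

# Toy 1-D scanline model: under the flat-road assumption, forward ego-motion
# shifts the ground texture by a predictable number of pixels; an object
# standing above the road plane violates that prediction.
rng = np.random.default_rng(2)
road = rng.normal(0.0, 1.0, 200)              # textured road scanline, frame t
shift = 3                                      # predicted ground shift (pixels)
observed = np.roll(road, shift) + rng.normal(0.0, 0.5, 200)
observed[120:140] += 8.0                       # obstacle: does not obey the plane

predicted = np.roll(road, shift)               # what a perfectly flat road implies
residual = np.abs(observed - predicted)
obstacle_cols = np.where(residual > 4.0)[0]    # detection from the residual
```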
2015
Changing lanes while driving can be very hazardous on a busy highway. There is a region called the “blind spot” that is a problem for every car driver, since it is not covered by the driver’s mirrors. Relying solely on the mirrors while changing lanes can lead to a collision with another vehicle. This paper focuses on this situation by ensuring that the blind spots of the vehicle are clear before the driver attempts to change lanes. The computer simulation incorporates detection and warning of objects present within the blind spot on either side of the vehicle, along with measurement of the object's distance relative to the vehicle, in case the driver decides to change lanes. The simulation is constructed using the theory of embedded systems and alerts the driver if there is another car in the blind spot.
Journal of Transportation Technologies, 2012
Video based vehicle detection technology is an integral part of Intelligent Transportation Systems (ITS), due to its non-intrusiveness and comprehensive vehicle behavior data collection capabilities. This paper proposes an efficient video based vehicle detection system based on the Harris-Stephens corner detector algorithm. The algorithm was used to develop a standalone vehicle detection and tracking system that determines vehicle counts and speeds on arterial roadways and freeways. The proposed video based vehicle detection system was developed to eliminate the need for complex calibration, to be robust to contrast variations, and to perform well on low-resolution videos. The algorithm's accuracy in vehicle counts and speeds was evaluated. The performance of the proposed system is equivalent to or better than that of a commercial vehicle detection system. Using the developed vehicle detection and tracking system, an advance warning intelligent transportation system was designed and implemented to alert commuters in advance of speed reductions and congestion at work zones and special events. The effectiveness of the advance warning system was evaluated and its impact discussed.
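The Harris-Stephens detector at the core of this system scores each pixel with R = det(M) - k·trace(M)², where M is the structure tensor summed over a local window. A minimal numpy sketch of that response (not the paper's optimized implementation):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def harris_response(img, k=0.04, win=3):
    """Harris-Stephens corner response R = det(M) - k * trace(M)^2, where
    M is the structure tensor summed over a win x win window."""
    Iy, Ix = np.gradient(img.astype(float))
    def box(a):                       # box filter: windowed sum
        pad = win // 2
        return sliding_window_view(np.pad(a, pad), (win, win)).sum(axis=(2, 3))
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A bright square on a dark background: corners score high and positive,
# straight edges score negative, flat regions score zero.
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0
R = harris_response(img)
```

Thresholding R and applying non-maximum suppression yields the corner features that are then tracked between frames for counts and speeds.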
2018
This document describes an advanced system and methodology for Cross Traffic Alert (CTA), able to detect vehicles that move into the vehicle's driving path from the left or right side. The camera may be mounted not only on a stationary vehicle, e.g. at a traffic light or at an intersection, but also on one moving slowly, e.g. in a car park. In all of the aforementioned conditions, a driver’s short loss of concentration or distraction can easily lead to a serious accident. The proposed system provides valid support for avoiding these kinds of car crashes. It is an extension of our previous work, related to a clustering system that only works with fixed cameras. Only a vanishing point calculation and simple optical flow filtering, to eliminate motion vectors due to the car's own movement, are performed, letting the system achieve high performance across different scenarios, cameras and resolutions. The proposed system uses as input only the optical flow, which is hardware implemented in the p...
In this paper, a method for early detection of vehicles is presented. By contrast with the common frame-based analysis, this method focuses on tracking motion rather than vehicle objects. With the detection of motion, the actual shape and size of the objects become less of a concern, thereby allowing detection of smaller objects at an earlier stage. One notable advantage that early detection offers is the ability to place cameras at higher vantage points or in oblique views that provide images in which vehicles in the near parts of the image can be detected by their shape or features while vehicles in the far view cannot. The ability to detect vehicles earlier and to cover longer road sections may be useful in providing longer vehicle trajectories for the purpose of traffic model development (e.g., 4–6) and for improved traffic control algorithms.
RELATED WORK
Various vehicle detection methods have been reported in the literature. A general subdivision of them shows three main categories: (a) optical flow, (b) background subtraction, and (c) object-based detection. Optical flow is an approximation of the image motion on the basis of local derivatives in a given sequence of images; that is, it specifies how much each image pixel moves between adjacent images. Bohrer et al. (7) used a simplified optical flow algorithm for obstacle detection. De Micheli and Verri (8) applied the method to estimate the angular speed of vehicles relative to the direction of the road as part of a system to detect obstacles. Batavia et al. (9) described another system for obstacle detection, which is based on the optical flow method, although it does not explicitly calculate the flow field. This system aims to detect approaching vehicles that are in the blind spot of a car. Wang et al.
(10) pointed out that for vehicle detection, optical flow suffers from several difficulties, such as lack of texture in the road regions and small gray-level variations that introduce significant instabilities in the computation of the spatial derivatives. Although most optical flow–based techniques require high computational effort, object recognition via background subtraction techniques usually requires significantly lower computational effort. Cheung and Kamath (11) identified the following four major steps in background subtraction algorithms: preprocessing, background modeling, foreground detection, and data validation. They also identified two main categories of background subtraction methods: recursive and nonrecursive. Nonrecursive techniques usually estimate a single background image. For example, frame differencing uses the previous frame as the background model for the current frame (12). Alternatively, the background is estimated by the median value of each pixel in all the frames in the video sequence (13).
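The two nonrecursive techniques just cited, frame differencing and the per-pixel median background, can be sketched on a synthetic sequence (hypothetical values; real systems add the preprocessing and data validation steps):

```python
import numpy as np

# Synthetic grayscale sequence: a static textured background plus a small
# bright "vehicle" that advances three columns per frame (hypothetical values).
rng = np.random.default_rng(3)
H, W, T = 20, 30, 9
background = rng.integers(40, 60, (H, W)).astype(float)
frames = np.repeat(background[None], T, axis=0)
for t in range(T):
    frames[t, 8:12, 2 + 3 * t : 5 + 3 * t] = 200.0

# Nonrecursive technique 1: frame differencing (previous frame as model).
diff_mask = np.abs(frames[-1] - frames[-2]) > 25

# Nonrecursive technique 2: per-pixel median over the whole sequence.
median_bg = np.median(frames, axis=0)
median_mask = np.abs(frames[-1] - median_bg) > 25
```

Note the trade-off the example exposes: frame differencing also flags a "ghost" at the object's previous position, which the median background model avoids at the cost of buffering the sequence.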
Detection and tracking of vehicles from video and other image sequences are valuable to many application areas such as road safety, automatic enforcement, surveillance, and acquisition of vehicle trajectories for traffic modeling. An indication of the importance of vehicle tracking is the growing number of related projects reported in recent literature. Among these are research at the University of Arizona that used video cameras, both digital and analogue, mounted on helicopter skids for the acquisition of video sequences for traffic management purposes (1); a project at the Delft University of Technology in the Netherlands, where traffic monitoring from airborne platforms was applied using digital cameras (2); and work at the Berkeley Highway Laboratory in California (3), where several cameras were deployed in a single location to form panoramic coverage of a road section. A common thread in all these projects is the application of methods that consider images with high resolution and large scale, which readily provide information on salient vehicle features (e.g., color and shape). Under these assumptions, the detection is useful in the near field of view when images are taken from an oblique view or, for horizontal images, to data from low altitudes. As a result, the acquisition is feasible on limited sections of roads and may ignore significant parts of the available image. These parts of the image may be valuable for early detection of vehicles that enter image frames.
International Journal of Intelligent Transportation Systems Research, 2012
In this paper, we propose a multipurpose panoramic vision system for eliminating the blind spot and informing the driver of approaching vehicles using three cameras and a laser sensor. A wide-angle camera is attached to the car trunk and two cameras are attached under each side-view mirror to eliminate the blind spot of the vehicle. A laser sensor is attached on the rear-left of the vehicle to gather information from around the vehicle. The proposed system performs pre-processing to estimate several system parameters. We compute a warping map to compensate for the wide-angle lens distortion of the rear camera. We estimate the focus of contraction (FOC) in the rear image, and interactively compute the homography between the rear image and the laser sensor data. Homographies between each side-view image and the rear image are also computed in the pre-processing step. Using the system parameters obtained in the pre-processing step, the proposed system generates a panoramic mosaic view to eliminate the blind spot. First, we obtain the undistorted rear image using the warping map. Then, we find road boundaries and classify approaching vehicles in the laser sensor data, and overlay the laser sensor data on the rear image for further visualization. Next, the system performs the image registration process after segmentation of road and background regions based on the road boundaries. Finally, it generates various views, such as a cylindrical panorama view, a top view, a side view and an information panoramic mosaic view, for displaying varied safety information.
Lecture Notes in Computer Science, 2007
In this paper a solution to detect wrong-way drivers on highways is presented. The proposed solution is based on three main stages: Learning, Detection and Validation. First, the orientation pattern of vehicle motion flow is learned and modelled by a mixture of Gaussians. The second stage (Detection and Temporal Validation) applies the learned orientation model in order to detect objects moving in the lane's opposite direction. The third and final stage uses an appearance-based approach to ensure the detection of a vehicle before triggering an alarm. This methodology has proven to be quite robust under different weather conditions, illumination and image quality. Experiments carried out with several videos from traffic surveillance cameras on highways show the robustness of the proposed solution.
Lecture Notes in Computer Science, 2011
The goal of this work is to propose a solution to improve a driver's safety while changing lanes on the highway. In fact, if the driver is not aware of the presence of a vehicle in his blind spot, a crash can occur. In this article we propose a method to monitor the blind spot zone using video feeds and warn the driver of any dangerous situation. In order to fit in a real-time embedded car safety system, we avoid using complex techniques such as classification and learning. The blind spot monitoring algorithm we present here is based on a feature-tracking approach by optical flow calculation. The features to track are chosen essentially by their motion patterns, which must match those of a moving vehicle, and are filtered to overcome the presence of noise. We can then decide whether a car is present in the blind spot based on the density of the tracked features. To illustrate our approach we present some results using video feeds captured on the highway.
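Feature tracking by optical flow calculation typically rests on a Lucas-Kanade-style least-squares step. As an illustration only (the article does not specify this exact estimator), here is a single-window Lucas-Kanade solve recovering a known sub-pixel translation of a synthetic image:

```python
import numpy as np

# A smooth synthetic image and a copy translated by a known sub-pixel
# amount; one Lucas-Kanade least-squares step should recover the shift.
x, y = np.meshgrid(np.arange(64, dtype=float), np.arange(64, dtype=float))
def scene(u, v):
    return np.sin(0.3 * (x - u)) + np.cos(0.2 * (y - v))

true_u, true_v = 0.5, -0.3
I1 = scene(0.0, 0.0)
I2 = scene(true_u, true_v)        # every pixel displaced by (true_u, true_v)

def lucas_kanade_step(I1, I2):
    """Solve [Ix Iy] @ [u, v] = -It in the least-squares sense over the
    whole patch (the classical single-window Lucas-Kanade estimate)."""
    Iy, Ix = np.gradient(I1)      # np.gradient returns d/drow, d/dcol
    It = I2 - I1
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v

u, v = lucas_kanade_step(I1, I2)
```

In a tracker this solve is applied per feature window per frame pair, and features whose recovered motion does not match a vehicle's expected pattern are discarded.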
Computer Standards & Interfaces, 1999
We describe an optical flow based obstacle detection system for use in detecting vehicles approaching the blind spot of a car on highways and city streets. The system runs at near frame rate (8-15 frames/second) on PC hardware. We will discuss the prediction of a camera image given an implicit optical flow field and comparison with the actual camera image. The advantage to this approach is that we never explicitly calculate optical flow. We will also present results on digitized highway images, and video taken from Navlab 5 while driving on a Pittsburgh highway.
2005
Advanced Driver Assistance Systems for intelligent vehicles have to deal with the detection and tracking of other vehicles, with many applications: platooning, stop-and-go, blind angle perception, and manoeuvre supervision. In this paper a system based on computer vision is presented. A geometric model of the vehicle is defined, whose energy function includes information on the shape and symmetry of the vehicle and the shadow it produces. A genetic algorithm finds the optimum parameter values. Examples on real images are shown to validate the algorithm.
IEEE Transactions on Robotics and Automation, 1998
This paper describes procedures for obtaining a reliable and dense optical flow from image sequences taken by a television (TV) camera mounted on a car moving in usual outdoor scenarios. The optical flow can be computed from these image sequences by using several techniques. Differential techniques to compute the optical flow do not provide adequate results, because of the poor texture in images and the presence of shocks and vibrations experienced by the TV camera during image acquisition. By using correlation-based techniques and by correcting the optical flows for shocks and vibrations, useful sequences of optical flows can be obtained. When the car is moving along a flat road and the optical axis of the TV camera is parallel to the ground, the motion field is expected to be almost quadratic and to have a specific structure. As a consequence, the egomotion can be estimated from this optical flow, and information on the speed and the angular velocity of the moving vehicle is obtained. By analyzing the optical flow it is also possible to recover a coarse segmentation of the flow, in which objects moving with a different speed are identified. By combining information from intensity edges, a better localization of motion boundaries is obtained. These results suggest that the optical flow can be successfully used by a vision system for assisting a driver in a vehicle moving in usual streets and motorways.
2008
This report provides a brief and informal introduction to stereo and motion analysis for driver assistance. Stereo and motion analysis play a central role in computer vision. Many algorithms in this field have been proposed and carefully studied; see, for example, the website vision.middlebury.edu for stereo and optic flow algorithms.
Journal of Informatics Electrical and Electronics Engineering (JIEEE), 2021
This paper details different object detection (OD) techniques and the relationship of object detection with video analysis and image understanding, a topic that has attracted much research attention in recent years. Traditional object detection methods are built on hand-crafted features and shallow trainable models. This survey paper presents one such method, the Optical Flow Method (OFM). This method is found to be more robust and effective for moving object detection, as shown by the investigation in this review paper. Applying optical flow to an image gives flow vectors of the points corresponding to the moving objects. The next part, marking the required moving object of interest, falls to the post-processing; the post-processing is the real contribution of this review paper for moving object detection problems. Their performance easily deteriorates by constructing complex ensembles which combine numerous low-level pictur...
IEEE Transactions on Intelligent Transportation Systems, 2016
This paper proposes a novel approach for detecting and tracking vehicles to the rear and in the blind zone of a vehicle, using a single rear-mounted fisheye camera and multiple detection algorithms. A maneuver that is a significant cause of accidents involves a target vehicle approaching the host vehicle from the rear and overtaking into the adjacent lane. As the overtaking vehicle moves toward the edge of the image and into the blind zone, the view of the vehicle gradually changes from a front view to a side view. Furthermore, the effects of fisheye distortion are at their most pronounced toward the extremities of the image, rendering detection of a target vehicle entering the blind zone even more difficult. The proposed system employs an AdaBoost classifier at distances of 10-40 m between the host and target vehicles. For detection at short distances where the view of a target vehicle has changed to a side view and the AdaBoost classifier is less effective, identification of vehicle wheels is proposed. Two methods of wheel detection are employed: at distances between 5 and 15 m, a novel algorithm entitled wheel arch contour detection (WACD) is presented, and for distances less than 5 m, Hough circle detection provides reliable wheel detection. A testing framework is also presented, which categorizes detection performance as a function of distance between host and target vehicles. Experimental results indicate that the proposed method results in a detection rate of greater than 93% in the critical range (blind zone) of the host.
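Hough circle detection, used here for wheel detection at very short range, accumulates votes for circle centres from edge points. A minimal single-radius voting sketch on a synthetic wheel rim (the paper's WACD algorithm and distortion handling are not reproduced):

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape, n_angles=90):
    """Classical Hough voting for circle centres at one known radius:
    each edge point votes for every centre lying `radius` away from it."""
    acc = np.zeros(shape, dtype=int)
    phi = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for px, py in edge_points:
        cx = np.rint(px - radius * np.cos(phi)).astype(int)
        cy = np.rint(py - radius * np.sin(phi)).astype(int)
        ok = (cx >= 0) & (cx < shape[1]) & (cy >= 0) & (cy < shape[0])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic wheel rim: edge points on a circle of radius 12 centred at (40, 25).
theta = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
pts = np.column_stack([40 + 12 * np.cos(theta), 25 + 12 * np.sin(theta)])
acc = hough_circle_centers(pts, radius=12, shape=(50, 80))
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
```

A full detector sweeps a range of radii and thresholds the accumulator peak before declaring a wheel.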
Communications on Applied Electronics, 2015
In recent years, automotive manufacturers have equipped their vehicles with innovative Advanced Driver Assistance Systems (ADAS) to ease driving and avoid dangerous situations, such as unintended lane departures or collisions with other road users, like vehicles and pedestrians. To this end, ADAS at the cutting edge are equipped with cameras to sense the vehicle's surroundings. This research work investigates techniques for monocular vision based vehicle detection, aiming at a system that can robustly detect and track vehicles in images. The system consists of three major modules: shape analysis based on the histogram of oriented gradients (HOG), used as the main feature descriptor; a machine learning part based on a support vector machine (SVM) for vehicle verification; and lastly a texture analysis technique applying the concept of the gray-level co-occurrence matrix (GLCM). More specifically, we are interested in the detection of cars from different camera viewpoints and in diverse lighting conditions, mainly images in sunlight, night, rain, normal daylight and low light, and further in handling occlusion. The images have been pre-processed in the first step to obtain optimum results in all conditions. Experiments have been conducted on large numbers of car images with different angles. For car images the classifier contains four classes of images with a combination of positive and negative images, and the test and train segments. Due to the length of the feature vector, we have reduced it using different cell sizes for greater accuracy and efficiency. Results will be presented and future work will be discussed.
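The HOG building block of such a pipeline is a gradient-orientation histogram per cell. A minimal sketch of that block only (not the full HOG+SVM+GLCM system; bin count and patch values are illustrative):

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """HOG-style cell descriptor: a magnitude-weighted histogram of
    unsigned gradient orientations (0-180 degrees), L2-normalised."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist / (np.linalg.norm(hist) + 1e-9)

# A patch containing a vertical edge: all gradients point horizontally,
# so the histogram mass lands in the 0-degree bin.
patch = np.zeros((16, 16))
patch[:, 8:] = 1.0
h = orientation_histogram(patch)
```

Concatenating such histograms over a grid of cells yields the long feature vector the abstract mentions, whose length is controlled by the chosen cell size.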
In this paper, an enhanced optical flow analysis based moving vehicle detection and tracking system has been developed. A novel multidirectional brightness-intensity gradient constraint (MBIGC) estimation and fusion based optical flow analysis (MDOFA) technique has been proposed that performs simultaneous pixel intensity and velocity estimation in a moving frame for detecting and tracking the moving vehicle. The conventional Lucas-Kanade and Horn-Schunck optical flow analysis algorithms have been enhanced by incorporating multidirectional BIGC estimation, further enriched with non-linear adaptive median filter based denoising. These novelties have significantly enhanced the video segmentation and detection. A vector magnitude threshold based MDOFA algorithm has been developed for motion vector retrieval that eventually enables swift and precise segmentation of the moving vehicle from the background frame. Heuristic filtering based blob analysis has been applied for vehicle tracking. The MATLAB based simulation reveals that MDOFA-HS outperforms LK in terms of execution time and detection accuracy. In addition, the accurate traffic density estimation affirms the robustness of the proposed system for use in intelligent transport systems.