2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Proceedings, 2003
This paper presents a new approach for continuous tracking of moving objects observed by multiple, heterogeneous cameras. Our approach simultaneously processes video streams from stationary and Pan-Tilt-Zoom cameras. The detection of moving objects from moving camera streams is performed by defining an adaptive background model that takes into account the camera motion approximated by an affine transformation. We address the tracking problem by separately modeling motion and appearance of the moving objects using two probabilistic models. For the appearance model, multiple color distribution components are proposed to ensure a more detailed description of the object being tracked. The motion model is obtained using a Kalman Filter (KF) process, which predicts the position of the moving object. The tracking is performed by the maximization of a joint probability model. The novelty of our approach consists in modeling the multiple trajectories observed by the moving and stationary cameras in the same KF framework. This allows deriving a more accurate motion measurement for objects simultaneously viewed by the two cameras, and automatic handling of occlusions, detection errors, and camera handoff. We demonstrate the performance of the system on several video surveillance sequences.
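The joint motion-appearance score that several of these abstracts describe can be illustrated with a minimal sketch (not any paper's actual implementation): a Gaussian likelihood of the detection under the Kalman prediction is multiplied by a colour-histogram similarity. All function and parameter names below are illustrative.

```python
import numpy as np

def motion_likelihood(predicted_pos, measured_pos, cov):
    """Gaussian likelihood of a measured 2D position under the KF prediction."""
    d = np.asarray(measured_pos, dtype=float) - np.asarray(predicted_pos, dtype=float)
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
                 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov))))

def appearance_similarity(hist_model, hist_obs):
    """Histogram intersection between two normalised colour histograms."""
    return float(np.minimum(hist_model, hist_obs).sum())

def joint_score(predicted_pos, measured_pos, cov, hist_model, hist_obs):
    # Assuming the two cues are independent, the joint probability is a product.
    return motion_likelihood(predicted_pos, measured_pos, cov) * \
           appearance_similarity(hist_model, hist_obs)
```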
First ACM SIGMM international workshop on Video surveillance - IWVS '03, 2003
This paper presents novel approaches for continuous detection and tracking of moving objects observed by multiple, stationary or moving cameras. Stationary video streams are registered using a ground plane homography, and the trajectories derived by the Tensor Voting formalism are integrated across cameras by a spatio-temporal homography. The Tensor Voting-based tracking approach provides smooth and continuous trajectories and bounding boxes, ensuring minimum registration error. In the more general case of moving cameras, we present an approach for integrating object trajectories across cameras by simultaneous processing of video streams. The detection of moving objects from a moving camera is performed by defining an adaptive background model that uses an affine-based camera motion approximation. Relative motion between cameras is approximated by a combination of affine and perspective transforms, while objects' dynamics are modeled by a Kalman Filter. Shape and appearance of moving objects are also taken into account using a probabilistic framework. The maximization of the joint probability model allows tracking moving objects across the cameras. We demonstrate the performance of the proposed approaches on several video surveillance sequences.
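Registering a stationary view to a common ground plane amounts to applying a 3x3 homography to image points. The snippet below is only a sketch of that mapping step, assuming the homography H has already been estimated (e.g. from point correspondences); it is not the Tensor Voting pipeline itself.

```python
import numpy as np

def warp_point(H, x, y):
    """Map an image point (x, y) to ground-plane coordinates with homography H."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Placeholder homography for illustration only; a real H comes from calibration
# or from matched point correspondences between the views.
H = np.eye(3)
print(warp_point(H, 320.0, 240.0))
```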
2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION'05) - Volume 1, 2005
We present an approach for persistent tracking of moving objects observed by non-overlapping and moving cameras. Our approach robustly recovers the geometry of non-overlapping views using a moving camera that pans across the scene. We address the tracking problem by modeling the appearance and motion of the moving regions. The appearance of the detected blobs is described by multiple spatial distribution models of the blobs' colors and edges. This representation is invariant to 2D rigid and scale transformations. It provides a rich description of the detected regions, and produces an efficient blob similarity measure for tracking. The motion model is obtained using a Kalman Filter (KF) process, which predicts the position of the moving objects while taking into account the camera motion. Tracking is performed by the maximization of a joint probability model combining objects' appearance and motion. The novelty of our approach consists in defining a spatio-temporal Joint Probability Data Association Filter (JPDAF) for integrating multiple cues. The proposed method tracks a large number of moving people with partial and total occlusions and provides automatic handoff of tracked objects. We demonstrate the performance of the system on several real video surveillance sequences.
International Journal of Engineering Research and Technology, 2018
Real-time object detection and tracking is an important task in various surveillance applications. Nowadays surveillance systems are very common in offices, ATM centers, shopping malls, etc. In this paper, an automated video surveillance system is presented. The system aims at tracking an object in motion and identifying it across multiple webcams, which increases the area of tracking. The system employs a novel combination of a Gaussian Mixture Model based adaptive background modeling algorithm and an RGB color model for identifying an object across multiple webcams.
EURASIP Journal on Advances in Signal Processing, 2007
This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed.
In this paper, a multiple-object tracking method for visual surveillance applications is presented. Moving objects are detected by adaptive background subtraction and tracked using a multi-hypothesis testing approach. Object matching between frames is done based on proximity and appearance similarity. A new confidence measure is assigned to each possible match. This information is arranged into a graph structure where vertices represent blobs in consecutive frames and edges represent match confidence values. This graph is later used to prune and refine trajectories to obtain the salient object trajectories. Occlusions are handled through position prediction using a Kalman filter and robust color similarity measures. The proposed framework is able to handle imperfections in moving object detection such as spurious objects, fragmentation, shadow, clutter and occlusions.
Advances in Pattern Recognition - Proceedings of the Sixth International Conference, 2007
Continuously Adaptive Mean shift (CAMSHIFT) is a popular algorithm for visual tracking, providing speed and robustness with minimal training and computational cost. While it performs well with a fixed camera and static background scene, it can fail rapidly when the camera moves or the background changes since it relies on static models of both the background and the tracked object. Furthermore it is unable to track objects passing in front of backgrounds with which they share significant colours. We describe a new algorithm, the Adaptive Background CAMSHIFT (ABCshift), which addresses both of these problems by using a background model which can be continuously relearned for every frame with minimal additional computational expense. Further, we show how adaptive background relearning can occasionally lead to a particular mode of instability which we resolve by comparing background and tracked object distributions using a metric based on the Bhattacharyya coefficient.
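As a rough illustration of the distribution comparison mentioned above, the Bhattacharyya coefficient between two normalised histograms can be computed as follows; this is a generic sketch, not the ABCshift code, and the histogram names are placeholders.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Similarity in [0, 1] between two histograms; 1 means identical distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(np.sqrt(p * q)))

# A coefficient close to 1 between the background and object distributions
# signals the kind of instability that ABCshift detects before relearning.
```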
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009), 2009
Transferring responsibility for object tracking in a video scene to computer vision rather than human operators has the appeal that the computer will remain vigilant under all circumstances while operator attention can wane. However, when operating at their peak performance, human operators often outperform computer vision because of their ability to adapt to changes in the scene. While many tracking algorithms are available, background subtraction, where a background image is subtracted from the current frame to isolate the foreground objects in a scene, remains a well-proven and popular technique. Under some circumstances, a background image can be obtained manually when no foreground objects are present. In the case of persistent surveillance outdoors, the background has a time evolution due to diurnal changes, weather, and seasonal changes. Such changes render a fixed background scene inadequate. We present a method for estimating the background of a scene utilizing a Kalman filter approach. Our method applies a one-dimensional Kalman filter to each pixel of the camera array to track the pixel intensity. We designed the algorithm to track the background intensity of a scene assuming that the camera view is relatively stationary and that the time evolution of the background occurs much more slowly than the time evolution of relevant foreground events. This allows the background subtraction algorithm to adapt automatically to changes in the scene. The algorithm is a two-step process of mean intensity update and standard deviation update. These updates are derived from standard Kalman filter equations. Our algorithm also allows objects to transition between the background and foreground as appropriate by modeling the input standard deviation. For example, a car entering a parking-lot surveillance camera's field of view would initially be included in the foreground. However, once parked, it will eventually transition to the background. We present results validating our algorithm's ability to estimate backgrounds in a variety of scenes. We demonstrate the application of our method to track objects using simple frame detection with no temporal coherency.
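A minimal per-pixel sketch of the idea described above: each pixel's mean intensity and standard deviation are updated with a small, Kalman-like gain, and a pixel is flagged as foreground when it deviates from the mean by more than k standard deviations. The gain, threshold, and class name are illustrative assumptions, not the paper's values.

```python
import numpy as np

class PixelBackground:
    """Per-pixel background estimate: a scalar mean and standard deviation per pixel."""

    def __init__(self, shape, gain=0.02, k=3.0, init_std=20.0):
        self.mean = np.zeros(shape, dtype=np.float32)   # usually seeded from the first frame
        self.std = np.full(shape, init_std, dtype=np.float32)
        self.gain = gain   # small Kalman-like gain: the background evolves slowly
        self.k = k         # foreground test threshold in standard deviations

    def update(self, frame):
        frame = frame.astype(np.float32)
        residual = frame - self.mean
        foreground = np.abs(residual) > self.k * self.std
        # Mean intensity update (a static state model plus a fixed gain reduces
        # the predict/correct cycle to exponential blending with the residual).
        self.mean += self.gain * residual
        # Standard deviation update toward the observed residual magnitude.
        self.std += self.gain * (np.abs(residual) - self.std)
        self.std = np.maximum(self.std, 1.0)
        return foreground
```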
Lecture Notes in Computer Science, 2007
Robust tracking of objects in video is a key challenge in computer vision with applications in automated surveillance, video indexing, human-computer interaction, gesture recognition, traffic monitoring, etc. Many algorithms have been developed for tracking an object in controlled environments. However, they are susceptible to failure when the challenge is to track multiple objects that undergo appearance change due to factors such as variation in illumination and object pose. In this paper we present a tracker based on Bayesian estimation, which is relatively robust to object appearance change, and can track multiple targets simultaneously in real time. The object model for computing the likelihood function is incrementally updated and uses background-foreground segmentation information to ameliorate the problem of drift associated with object model update schemes. We demonstrate the efficacy of the proposed method by tracking objects in image sequences from the CAVIAR dataset.
2006
frames. The system was tested on a large ground-truthed data set containing hundreds of color and FLIR image sequences. A performance evaluation of the system was carried out, and the average evaluation results are reported in this paper.
Multiple object tracking in image sequences is an emerging topic of research in the field of image processing and computer vision. This paper presents a robust tracking algorithm for detecting multiple moving objects and tracking them in dynamic scenes. Our algorithm consists of two important steps. Firstly, adaptive GMM background modeling with noise reduction is used for foreground object segmentation in noisy scenes. A background updating method is also applied to correct the detection of foreground objects that quickly blend into the background. Secondly, our object tracking framework uses an Extended Kalman Filter to model the nonlinear dynamics and measurement models. A new approach for solving the data association problem is also introduced to determine the appropriate association between objects and Kalman Filters, which is necessary in the multiple-object scenario. The adaptive GMM with noise reduction greatly reduces false detections in the scenes. Experiments with our proposed algorithm show good results for multiple object motion tracking against complex dynamic backgrounds.
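One common way to realise the association step mentioned above (not necessarily the authors' scheme) is to solve a Hungarian assignment between Kalman-predicted centroids and detected centroids, with a distance gate. The sketch below assumes SciPy is available and all names and the gate value are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(predicted, detected, gate=50.0):
    """predicted: (N, 2) Kalman-predicted centroids; detected: (M, 2) detections.
    Returns (track_index, detection_index) pairs within the distance gate."""
    cost = np.linalg.norm(predicted[:, None, :] - detected[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= gate]
```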
Journal of Visual Communication and Image Representation, 2015
This paper presents an effective method for the detection and tracking of multiple moving objects from a video sequence captured by a moving camera without additional sensors. Moving object detection is relatively difficult for video captured by a moving camera, since camera motion and object motion are mixed. In the proposed method, the feature points in the frames are found and then classified as belonging to foreground or background features. Next, moving object regions are obtained using an integration scheme based on foreground feature points and foreground regions, which are obtained using an image difference scheme. Then, a compensation scheme based on the motion history of the continuous motion contours obtained from three consecutive frames is applied to increase the regions of moving objects. Moving objects are detected using a refinement scheme and a minimum bounding box. Finally, moving object tracking is achieved using a Kalman filter based on the center of gravity of a moving object region in the minimum bounding box. Experimental results show that the proposed method has good performance.
2004
We present a novel approach for continuous detection and tracking of moving objects observed by multiple stationary cameras. We address the tracking problem by simultaneously modeling motion and appearance of the moving objects. The object's appearance is represented using a color distribution model invariant to 2D rigid and scale transformations. It provides an efficient blob similarity measure for tracking. The motion models are obtained using a Kalman Filter process, which predicts the position of the moving object in both 2D and 3D. The tracking is performed by the maximization of a joint probability model reflecting objects' motion and appearance. The novelty of our approach consists in integrating multiple cues and multiple views in a Joint Probability Data Association Filter for tracking a large number of moving people with partial and total occlusions. We demonstrate the performance of the proposed method on a soccer game captured by two stationary cameras.
Computer Vision is the part of Artificial Intelligence concerned with the theory behind artificial systems that extract information from images. Within this field, Video Surveillance refers to monitoring behavior of any kind through video. It requires a person to monitor the CCTV feed and a huge volume of memory to record it. One of the major challenges involved is the huge volume of video storage and its retrieval on demand. To avoid depleting human resources and to detect suspicious behaviors that threaten safety and security, an Intelligent Video Surveillance (IVS) system is required. The proposed work focuses on an effective and efficient video surveillance system with added intelligence to avoid human intervention in identifying security threats. In IVS, Extended Kalman filters and Gaussian mixture models are used to detect the moving objects. A tracking algorithm is proposed for tracking the moving objects; it determines the position of each group, recognizes the same group across frames, and handles newly appearing and disappearing groups. The proposed IVS thus promises robustness against environmental influences and the speed suitable for real-time surveillance in detecting and tracking moving objects.
Procedia Computer Science, 2015
In video analytics based systems, an efficient method for segmenting foreground objects from video frames is essential. Typically, foreground segmentation is performed by modelling the background of the scene with statistical estimates and comparing them with the current scene. Such methods are not applicable for modelling the background of a moving scene, since the background changes from frame to frame. The scope of this paper is solving the problem of background modelling for applications involving a moving camera. The proposed method is a non-panoramic background modelling technique that models each pixel with a single spatio-temporal Gaussian. Experiments on various videos show that the proposed method can detect foreground objects in frames from a moving camera with negligible false alarms.
2010 6th International Conference on Emerging Technologies (ICET), 2010
The aim of this paper is to present an algorithm for multiple object tracking and video summarization in a scene filmed by one or several cameras. We propose a computationally efficient real-time human tracking algorithm, which can 1) track objects inside the field of view (FOV) of a camera even in case of occlusions; 2) recognize objects that leave and then return to a camera's FOV; 3) recognize objects passing through different cameras' FOVs. We propose a simple 1-D appearance model, called the vertical feature (VF), which is view and size invariant and is stored in a database to help object recognition. We combine it with other motion features such as position and velocity for real-time tracking. We find the k closest matches of the current object and select the one whose predicted position is closest to the current object position. Our algorithm shows good tracking capabilities even with changes in the object's view angle and with partial changes of shape. We compare our algorithm with appearance-based and motion-based algorithms and show the advantage of a combined approach.
Indonesian Journal of Electrical Engineering and Computer Science
Detection and tracking of moving objects is one of the most widely studied problems and is used extensively in home, business, and industrial applications to identify and track entities in a meaningful way. This paper illustrates how to detect multiple objects using background subtraction methods, extract each object's features with the Speeded-Up Robust Features (SURF) algorithm, and track those features with k-Nearest Neighbor matching across different surveillance videos processed sequentially. For object detection in each frame, the pixel difference with respect to a reference background frame is calculated; this is suitable only for ideal static conditions, taking environmental lighting into account. Thus, this method detects the complete object, and the extracted features are used to track the object across the multiple videos, one video at a time. It is expected that the proposed method can effectively reduce the impact of changing lighting.
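A sketch of the feature extraction and matching stage named above, using OpenCV. SURF is patented and only available in OpenCV builds with the non-free contrib modules, and the Hessian threshold and ratio-test value are illustrative choices rather than values from the paper.

```python
import cv2

def match_object_features(object_patch, frame, ratio=0.75):
    """Return the good SURF matches between a tracked object patch and a frame."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_obj, des_obj = surf.detectAndCompute(object_patch, None)
    kp_frm, des_frm = surf.detectAndCompute(frame, None)
    if des_obj is None or des_frm is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # k-nearest-neighbour matching (k=2) followed by Lowe's ratio test.
    knn = matcher.knnMatch(des_obj, des_frm, k=2)
    return [m for m, n in knn if m.distance < ratio * n.distance]
```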
BT Technology Journal, 2004
This paper aims to address two of the key research issues in computer vision - the detection and tracking of multiple objects in cluttered dynamic scenes - that underpin the intelligence aspects of advanced visual surveillance systems aimed at automated visual event detection and behaviour analysis. We discuss two major contributions in resolving these problems within a systematic framework. Firstly, for accurate object detection, an efficient and effective scheme is proposed to remove cast shadows/highlights with error corrections based on a conditional morphological reconstruction. Secondly, for effective tracking, a temporal-template-based tracking scheme is introduced, using multiple descriptive cues (velocity, shape, colour, etc.) of the 2-D object appearance together with their respective variances over time. A scaled Euclidean distance is used as the matching metric, and the template is updated using Kalman filters when a match is found or by linear mean prediction in the case of occlusion. Extensive experiments are carried out on video sequences from various real-world scenarios. The results show very promising tracking performance.
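The scaled Euclidean matching metric can be read as a variance-normalised distance over the template's cues. A minimal sketch follows, in which the cue names and array layout are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def scaled_euclidean_distance(template_cues, template_vars, candidate_cues):
    """Distance between a temporal template and a candidate, with each cue
    (e.g. velocity, shape, colour statistics) scaled by its tracked variance."""
    diff = np.asarray(candidate_cues, dtype=float) - np.asarray(template_cues, dtype=float)
    return float(np.sqrt(np.sum(diff ** 2 / np.asarray(template_vars, dtype=float))))
```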
In this paper, we propose a real-time video-surveillance system for image sequences acquired by a moving camera. The system is able to compensate for the background motion and to detect mobile objects in the scene. Background compensation is obtained by assuming a simple translation of the whole background from the previous to the current frame. The dominant translation is computed on the basis of the tracker proposed by Shi-Tomasi and Tomasi-Kanade. Features to be tracked are selected according to a new intrinsic optimality criterion. Badly tracked features are rejected on the basis of a statistical test. The current frame and the related background, after compensation, are processed by a change detection method in order to obtain a binary image of moving points. Results are presented in the context of a vision-based system for outdoor environments.
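A rough sketch of estimating the dominant background translation with Shi-Tomasi features and pyramidal Lucas-Kanade tracking in OpenCV; the feature selection and rejection here use OpenCV's standard criteria and a simple median, not the paper's custom optimality criterion and statistical test.

```python
import cv2
import numpy as np

def dominant_translation(prev_gray, cur_gray):
    """Estimate the dominant (x, y) background shift between two grayscale frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    flow = (nxt[ok] - pts[ok]).reshape(-1, 2)
    # The median displacement is robust to features on independently moving objects.
    return np.median(flow, axis=0)
```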
1999
A common method for real-time segmentation of moving regions in image sequences involves "background subtraction," or thresholding the error between an estimate of the image without moving objects and the current image. The numerous approaches to this problem differ in the type of background model used and the procedure used to update the model. This paper discusses modeling each pixel as a mixture of Gaussians and using an on-line approximation to update the model. The Gaussian distributions of the adaptive mixture model are then evaluated to determine which are most likely to result from a background process. Each pixel is classified based on whether the Gaussian distribution which represents it most effectively is considered part of the background model.
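To make the per-pixel mixture update concrete, below is a single-pixel sketch in the spirit of this adaptive mixture model; the learning rate, matching threshold, and background weight threshold are typical illustrative values, and the class name is made up.

```python
import numpy as np

class PixelMoG:
    """Single-pixel adaptive mixture of Gaussians (grayscale intensity)."""

    def __init__(self, k=3, alpha=0.01, match_sigmas=2.5, bg_weight=0.7):
        self.w = np.full(k, 1.0 / k)              # component weights
        self.mu = np.linspace(0.0, 255.0, k)      # component means
        self.var = np.full(k, 400.0)              # component variances
        self.alpha, self.match_sigmas, self.bg_weight = alpha, match_sigmas, bg_weight

    def update(self, x):
        """Update the mixture with intensity x; return True if x is foreground."""
        matched = np.abs(x - self.mu) < self.match_sigmas * np.sqrt(self.var)
        if matched.any():
            i = int(np.argmax(matched))
            self.mu[i] += self.alpha * (x - self.mu[i])
            self.var[i] += self.alpha * ((x - self.mu[i]) ** 2 - self.var[i])
            self.w = (1.0 - self.alpha) * self.w
            self.w[i] += self.alpha
        else:
            # No component explains x: replace the least probable one with a
            # wide, low-weight Gaussian centred on the new observation.
            i = int(np.argmin(self.w))
            self.mu[i], self.var[i], self.w[i] = x, 900.0, 0.05
        self.w /= self.w.sum()
        # Components with the largest weight/sigma ratio are labelled background.
        order = np.argsort(-self.w / np.sqrt(self.var))
        n_bg = int(np.searchsorted(np.cumsum(self.w[order]), self.bg_weight)) + 1
        return not bool(matched[order[:n_bg]].any())
```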