2002, Proceedings. International Conference on Image Processing
A system is described for acquiring multi-view video of a person moving through the environment. A real-time tracking algorithm adjusts the pan, tilt, zoom and focus parameters of multiple active cameras to keep the moving person centered in each view. The output of the system is a set of synchronized, time-stamped video streams, showing the person simultaneously from ...
This work describes a method for real-time motion detection using an active camera mounted on a pan/tilt platform. Image mapping is used to align images of different viewpoints so that static-camera motion detection can be applied. In the presence of camera position noise, the image mapping is inexact and compensation techniques fail. The use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation. Two motion detection techniques are examined, and experiments to verify the methods are presented. The system successfully extracts moving edges from dynamic images even when the pan/tilt angles between successive frames are as large as 3°.
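The morphological filtering idea above can be illustrated with a minimal sketch (NumPy, a 3×3 structuring element, and illustrative function names — not the paper's implementation): a binary opening removes the isolated false detections that inexact background compensation leaves behind, while larger moving regions survive.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole neighborhood is set."""
    p = np.pad(mask, 1, mode="constant", constant_values=False)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any pixel in its neighborhood is set."""
    p = np.pad(mask, 1, mode="constant", constant_values=False)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def open_mask(mask):
    """Opening (erosion then dilation) suppresses small, noisy motion blobs."""
    return dilate(erode(mask))
```

Applied to a motion mask, an isolated single-pixel detection disappears while a solid 4×4 moving region is returned unchanged.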
1999
In this paper we present a robust real-time method for tracking multiple people from multiple cameras. Our method uses both static and Pan-Tilt-Zoom (PTZ) cameras. The static cameras are used to locate people in the scene, while the PTZ cameras "lock on" to individuals and provide visual attention. The system maintains tracking consistency between PTZ cameras and works reliably even when people occlude each other.
2010 Conference on Visual Media Production, 2010
This contribution describes a distributed multi-camera capture and processing system for real-time media production applications. Its main design purpose is to allow prototyping of distributed processing algorithms for free-viewpoint applications, but the concept can be adapted to other (multicamera) applications.
2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009
We present here a real time active vision system on a PTZ network camera to track an object of interest. We address two critical issues in this paper. One is the control of the camera through network communication to follow a selected object. The other is to track an arbitrary type of object in real time under conditions of pose, viewpoint and illumination changes. We analyze the difficulties in the control through the network and propose a practical solution for tracking using a PTZ network camera. Moreover, we propose a robust real time tracking approach, which enhances the effectiveness by using complementary features under a two-stage particle filtering framework and a multi-scale mechanism. To improve time performance, the tracking algorithm is implemented as a multi-threaded process in OpenMP. Comparative experiments with state-of-the-art methods demonstrate the efficiency and robustness of our system in various applications such as pedestrian tracking, face tracking, and vehicle tracking.
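As a rough illustration of the particle-filtering machinery such trackers build on, here is a minimal one-dimensional bootstrap filter (NumPy; the random-walk motion model, noise levels, and function names are illustrative assumptions, not the paper's two-stage, complementary-feature method):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, motion_std=0.5, obs_std=1.0):
    """One predict/update/resample cycle of a bootstrap particle filter."""
    # Predict: propagate each particle with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Update: reweight particles by the Gaussian likelihood of observation z.
    weights = weights * np.exp(-0.5 * ((particles - z) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Systematic resampling keeps the particle set focused on likely states.
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)
```

Starting from particles spread uniformly over [0, 20], a few observations near the true state pull the posterior mean toward it; a real tracker would run this over image-feature likelihoods in parallel threads, as the paper does with OpenMP.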
This work is linked with the context of human activity monitoring and concerns the development of a multi-camera tracking system. Our sensor-combination strategy integrates the contextual suitability of each sensor with respect to the task. The suitability of a sensor, represented as a belief indicator, combines two main criteria: first, the notion of spatial "isolation" of the tracked object with respect to other objects, and second, the notion of "visibility" with respect to the sensor. A centralized filter combines the results of the local tracking estimations (sensor level) and then performs track management. The main objective of the proposed architecture is to deal with the limitations of each local sensor with respect to the problem of visual occlusion.
Motion and Video Computing, 2002
This paper presents a set of methods for multi view image tracking using a set of calibrated cameras. We demonstrate how effective the approach is for resolving occlusions and tracking objects between overlapping and non-overlapping camera views. Moving objects are ...
2003
Robust tracking of persons in real-world environments and in real-time is a common goal in many video applications. In this paper a computational system for the real-time tracking of multiple persons in natural environments is presented. The system integrates state-of-the-art methodologies for the analysis of movement and color, as well as for the detection of faces. Face detection is complemented by a face tracking module based on heuristics developed by the authors. Exemplary results of the integrated system working in real-world video sequences are shown.
First ACM SIGMM international workshop on Video surveillance - IWVS '03, 2003
This paper presents novel approaches for continuous detection and tracking of moving objects observed by multiple stationary or moving cameras. Stationary video streams are registered using a ground-plane homography, and the trajectories derived by the Tensor Voting formalism are integrated across cameras by a spatio-temporal homography. The Tensor Voting-based tracking approach provides smooth, continuous trajectories and bounding boxes, ensuring minimum registration error. In the more general case of moving cameras, we present an approach for integrating object trajectories across cameras by simultaneous processing of the video streams. The detection of moving objects from a moving camera is performed by defining an adaptive background model that uses an affine camera-motion approximation. Relative motion between cameras is approximated by a combination of affine and perspective transforms, while object dynamics are modeled by a Kalman filter. Shape and appearance of moving objects are also taken into account using a probabilistic framework. Maximizing the joint probability model allows tracking moving objects across the cameras. We demonstrate the performance of the proposed approaches on several video surveillance sequences.
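The ground-plane registration step common to approaches like this can be sketched in a few lines (NumPy; the homography `H` and sample points are made-up values, not data from the paper): a 3×3 homography maps a pixel on one camera's ground plane into another camera's view via homogeneous coordinates.

```python
import numpy as np

def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography using homogeneous coordinates."""
    x, y = pt
    v = H @ np.array([x, y, 1.0])
    return (v[0] / v[2], v[1] / v[2])  # divide out the projective scale

# Illustrative homography: scale the ground plane by 2 and translate by (1, 2).
H = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 2.0],
              [0.0, 0.0, 1.0]])
```

Mapping every detection into a common ground-plane frame this way is what lets trajectories from different cameras be compared and fused.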
1998
In this paper we present a robust real-time method for tracking and recognizing multiple people with multiple cameras. Our method uses both static and Pan-Tilt-Zoom (PTZ) cameras to provide visual attention. The PTZ camera system uses face recognition to register people in the scene and "lock on" to those individuals. The static camera system provides a global view of the environment and is used to re-adjust the tracking of the system when the PTZ cameras lose their targets.
1998
In this paper we discuss the use of a Kalman filter for three dimensional object tracking in MPI-Video. The general problem to be solved is the reconstruction of the three dimensional position and trajectory of an object from multiple images taken from different positions in a given environment. The use of the Kalman filter allows us to take advantage of the dynamic nature of the video stream: objects do not just stand still, but move around in partially predictable ways. We will show how the use of a dynamic estimator like the Kalman filter is useful not only to obtain more reliable trajectories in the presence of uncertain data, but also to solve problems deriving from the presence of multiple objects, namely the assignment problem.
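A constant-velocity Kalman filter of the kind described can be sketched as follows (NumPy; the state layout, noise covariances, and function names are illustrative assumptions): the predict step propagates position with the estimated velocity, and the update step corrects the state with a new position measurement. The prediction is also what makes the assignment problem tractable, since each new detection can be matched to the track whose predicted position it is closest to.

```python
import numpy as np

def make_cv_kalman(dt=1.0):
    """Constant-velocity model: state is (x, y, vx, vy), only (x, y) is observed."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition
    Hm = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0]], dtype=float)  # measurement matrix
    return F, Hm

def predict(x, P, F, Q):
    """Propagate state and covariance one step forward."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, Hm, R):
    """Correct the prediction with a position measurement z."""
    y = z - Hm @ x                              # innovation
    S = Hm @ P @ Hm.T + R                       # innovation covariance
    K = P @ Hm.T @ np.linalg.inv(S)             # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ Hm) @ P
```

Fed a few measurements of an object moving at constant velocity, the filter's next-step prediction lands close to the object's true future position, which is exactly what the assignment of detections to tracks relies on.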
2005
The importance of video surveillance techniques has increased considerably since the latest terrorist incidents. Safety and security have become critical in many public areas, and there is a specific need to enable human operators to remotely monitor activity across large environments such as: a) transport systems (railway transportation, airports, urban and motorway road networks, and maritime transportation), b) banks, shopping malls, car parks, and public buildings, c) industrial environments, and d) government establishments (military bases, prisons, strategic infrastructures, radar centers, and hospitals).
In this paper, we propose a new multiple-camera people-tracking system with the following capabilities: (1) it handles long-term occlusions, complete occlusions, and unpredictable motions; (2) it detects foreground objects of arbitrary size; (3) it detects objects at much higher speed. The main contribution of our method is twofold: 1) an M-to-one relationship with only point-homography matching for occlusion detection achieves efficiency; 2) a view-hopping technique based on object motion probability (OMP) is proposed to automatically select an appropriate observation view for tracking a human subject.
2006
In this paper, we present a coordinated video surveillance system that can minimize spatial limitations and precisely extract the 3D position of objects. To do this, our system employs an agent-based architecture and tracks the normalized object using an active wide-baseline stereo method.
2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings., 2003
This paper presents a new approach for continuous tracking of moving objects observed by multiple, heterogeneous cameras. Our approach simultaneously processes video streams from stationary and Pan-Tilt-Zoom cameras. The detection of moving objects from moving camera streams is performed by defining an adaptive background model that takes into account the camera motion approximated by an affine transformation. We address the tracking problem by separately modeling motion and appearance of the moving objects using two probabilistic models. For the appearance model, multiple color distribution components are proposed for ensuring a more detailed description of the object being tracked. The motion model is obtained using a Kalman Filter (KF) process, which predicts the position of the moving object. The tracking is performed by the maximization of a joint probability model. The novelty of our approach consists in modeling the multiple trajectories observed by the moving and stationary cameras in the same KF framework. It allows deriving a more accurate motion measurement for objects simultaneously viewed by the two cameras and an automatic handling of occlusions, errors in the detection and camera handoff. We demonstrate the performances of the system on several video surveillance sequences.
ITSC 2001. 2001 IEEE Intelligent Transportation Systems. Proceedings (Cat. No.01TH8585), 2001
Recent innovations in real-time machine vision, distributed computing, software architectures, and high-speed communication are expanding the available technology for intelligent system development. These technologies allow the realization of intelligent systems that provide the capabilities for a user to experience events from remote locations in an interactive way. In this paper we describe research aimed at the realization of a powerful televiewing system applied to the traffic incident detection and monitoring needs of today's highways. Sensor clusters utilizing both rectilinear and omni-directional cameras will provide an interactive, real-time, multi-resolution televiewing interface to emergency response crews. Ultimately, this system will have a direct impact on reducing incident related highway congestion by improving the quality of information to which emergency personnel have access.
Real-Time Imaging, 1998
Real-Time Tracking of Moving Objects with an Active Camera
This article is concerned with the design and implementation of a system for real-time monocular tracking of a moving object using the two degrees of freedom of a camera platform. Figure-ground segregation is based on motion without making any a priori assumptions about the object form. Using only the first spatiotemporal image derivatives, subtraction of the normal optical flow induced by camera motion yields the object image motion. Closed-loop control is achieved by combining a stationary Kalman estimator with an optimal Linear Quadratic Regulator. The implementation on a pipeline architecture enables a servo rate of 25 Hz. We study the effects of time-recursive filtering and fixed-point arithmetic in image processing and we test the performance of the control algorithm on controlled motion of objects.
Ambient Intelligence
We present the IBM Smart Surveillance System that uses a distributed architecture to manage a heterogeneous network of active cameras. This system consists of a distributed network of cameras, each with local processing that interprets video to detect and track moving objects. The system performs multi-camera tracking as objects pass through the fields of view of different cameras, and acquires rich, high-resolution data by actively tracking objects of interest with Pan-Tilt-Zoom cameras. The multi-resolution data is stored in a shared index that can be browsed and searched live or post hoc from a remote location, visualizing very low-bandwidth video or activity metadata.
In this paper, we propose a real-time video surveillance system for image sequences acquired by a moving camera. The system is able to compensate for the background motion and to detect mobile objects in the scene. Background compensation is obtained by assuming a simple translation of the whole background from the previous to the current frame. The dominant translation is computed using the tracker proposed by Shi-Tomasi and Tomasi-Kanade. Features to be tracked are selected according to a new intrinsic optimality criterion. Badly tracked features are rejected on the basis of a statistical test. The current frame and the related background, after compensation, are processed by a change detection method in order to obtain a binary image of moving points. Results are presented in the context of a vision-based system for outdoor environments.
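The dominant-translation estimate and the rejection of badly tracked features might look like this in outline (NumPy; the median/MAD test is a stand-in for the paper's actual statistical test, and the threshold `k` is an assumption): given the displacement of each tracked feature between two frames, a robust estimate of the background translation is taken over the features that agree with the majority.

```python
import numpy as np

def dominant_translation(displacements, k=3.0):
    """Robustly estimate the background translation from feature displacements.

    Outlier features (badly tracked, or lying on moving objects) are rejected
    when their residual from the median exceeds k times the median residual.
    """
    d = np.asarray(displacements, dtype=float)
    med = np.median(d, axis=0)                    # robust initial estimate
    resid = np.linalg.norm(d - med, axis=1)
    mad = np.median(resid) + 1e-9                 # median absolute deviation
    inliers = resid <= k * mad
    return d[inliers].mean(axis=0), inliers
```

With most features shifting by roughly (2, 0) and one grossly mistracked feature, the outlier is rejected and the estimated translation stays near (2, 0); subtracting it aligns the background so change detection can run as if the camera were static.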
SMPTE Motion Imaging Journal, 2007
In order to insert a virtual object into a TV image, the graphics system needs to know precisely how the camera is moving, so that the virtual object can be rendered in the correct place in every frame. Nowadays this can be achieved relatively easily in postproduction, or in a studio equipped with a special tracking system. However, for live shooting on location, or in a studio that is not specially equipped, installing such a system can be difficult or uneconomic. To overcome these limitations, the MATRIS project is developing a real-time system for measuring the movement of a camera. The system uses image analysis to track naturally occurring features in the scene, and data from an inertial sensor. No additional sensors, special markers, or camera mounts are required. This paper gives an overview of the system and presents some results.
2001
Multiple cameras are needed to cover large environments for monitoring activity. To track people successfully in multiple perspective imagery, one needs to establish correspondence between objects captured in multiple cameras. We present a system for tracking people in multiple uncalibrated cameras. The system is able to discover spatial relationships between the camera fields of view and use this information to correspond between different perspective views of the same person. We employ the novel approach of finding the limits of field of view (FOV) of a camera as visible in the other cameras. Using this information, when a person is seen in one camera, we are able to predict all the other cameras in which this person will be visible. Moreover, we apply the FOV constraint to disambiguate between possible candidates of correspondence. We present results on sequences of up to three cameras with multiple people. The proposed approach is very fast compared to camera calibration based approaches.
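The FOV constraint reduces to a simple half-plane test once the field-of-view limits are expressed as lines (pure Python; the line coefficients below are made-up, and this sketch ignores how the lines are recovered from observations): a person is predicted visible in another camera only if they fall on the visible side of all of that camera's FOV lines.

```python
def visible_in_other(point, fov_lines):
    """Predict visibility in another camera from its FOV limit lines.

    fov_lines: (a, b, c) triples with a*x + b*y + c >= 0 on the visible side.
    """
    x, y = point
    return all(a * x + b * y + c >= 0 for (a, b, c) in fov_lines)

# Illustrative limits: the other camera sees the region x >= 2 and y >= 0.
lines = [(1.0, 0.0, -2.0), (0.0, 1.0, 0.0)]
```

Because the test is a handful of dot products per camera, predicting which views a person will appear in is far cheaper than full calibration-based reasoning, which is the speed advantage the abstract notes.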