2003, Robotics and Automation, …
The problem of tracking the position and orientation of a moving object using a stereo camera system is considered in this paper. A robust algorithm based on the extended Kalman filter is adopted, combined with an efficient selection technique of the object image features, based on Binary Space Partitioning tree geometric models. An experimental study is carried out using a vision system of two fixed cameras.
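As an illustration of the extended-Kalman-filter pose tracking described above, the following is a minimal sketch of one predict/correct cycle, assuming a user-supplied dynamic model f_dyn and a measurement model h_meas that projects the selected feature points into the camera images. The numerical Jacobians, function names and noise matrices are illustrative assumptions, not details taken from the paper.

import numpy as np

def ekf_step(x, P, z, f_dyn, h_meas, Q, R, eps=1e-6):
    """One EKF cycle: predict with the dynamic model f_dyn, then correct with
    the stacked image measurements z of the selected feature points via the
    projection model h_meas. Jacobians are approximated numerically."""
    def num_jac(func, v):
        y0 = func(v)
        J = np.zeros((y0.size, v.size))
        for i in range(v.size):
            dv = np.zeros_like(v)
            dv[i] = eps
            J[:, i] = (func(v + dv) - y0) / eps
        return J

    # Prediction
    F = num_jac(f_dyn, x)
    x_pred = f_dyn(x)
    P_pred = F @ P @ F.T + Q

    # Correction with the visual measurements
    H = num_jac(h_meas, x_pred)
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h_meas(x_pred))
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new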
Robotics and Automation, …, 2002
In this paper a computationally efficient algorithm for real time estimation of the position and orientation of moving objects from visual measurements of a system of fixed cameras is proposed. The algorithm is based on extended Kalman filtering of the measurements of the position of suitable feature points selected on the target objects. The effectiveness of the algorithm is improved by using a pre-selection method of the feature points which takes advantage of the Kalman filter prediction capability combined with a BSP tree modeling technique of the objects' geometry. Computer simulations are presented to test the performance of the estimation process in the presence of noise, different types of lens geometric distortion, quantization and calibration errors.
2001
The problem of the real-time estimation of the position and orientation of moving objects for position-based visual servoing control of robotic systems is considered in this paper. A computationally efficient algorithm is proposed based on Kalman filtering of the visual measurements of the position of suitable feature points selected on the target objects. The efficiency of the algorithm is improved by adopting a pre-selection technique of the feature points, based on Binary Space Partitioning (BSP) tree geometric models of the target objects, which takes advantage of the Kalman filter prediction capability. Computer simulations are presented to test the performance of the estimation algorithm in the presence of noise, different types of lens geometric distortion, quantization and calibration errors.
2003
The use of visual sensors may have high impact in robotic applications where it is required to measure the pose (position and orientation) and the visual features of objects moving in unstructured environments. In this paper, the problem of real-time estimation of the position and orientation of multiple objects is considered. Special emphasis is devoted to the case when two or more objects overlap with respect to the visual system, causing occlusion. The algorithm is based on Kalman filtering and Binary Space Partition (BSP) tree representations of the objects' geometry. The real-time implementation of the algorithm is experimentally tested for the case of visual tracking of two objects using two cameras.
2009
Target tracking has become an area of interest in recent years. Target tracking widens the perspective and view of a static camera, provides basic artificial intelligence features for robots, and serves as a platform for surveillance purposes. A tracking system limited to monocular vision loses much of the information that is available with binocular or stereo vision; among the advantages of stereo vision are 3D modeling of the scene and depth estimation. In this work, an initial development study and implementation of a stereo vision tracking system was carried out. In developing the stereo vision tracking system, a tracking algorithm was developed together with a depth estimation method to take advantage of the images obtained from the two cameras. The depth estimation was found to be 80% accurate for objects at a distance of 15 cm to 65 cm from the cameras.
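The abstract reports depth accuracy but not the estimation method; a common choice for a rectified stereo pair is the standard triangulation relation Z = f * B / d, sketched below with OpenCV block matching. The focal length, baseline and matcher parameters are placeholders, not values from the paper.

import cv2
import numpy as np

# Illustrative parameters for a rectified stereo pair (not from the paper)
FOCAL_PX = 700.0      # focal length in pixels
BASELINE_M = 0.06     # distance between the two cameras in metres

def depth_map(left_gray, right_gray):
    """Estimate per-pixel depth from a rectified grayscale stereo pair."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan          # invalid matches
    return FOCAL_PX * BASELINE_M / disparity    # Z = f * B / d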
The use of visual sensors may have high impact in applications where it is required to measure the pose (position and orientation) and the visual features of objects moving in unstructured environments. In robotics, the measurements provided by video cameras can be directly used to perform closed-loop control of the robot end-effector pose. In this chapter the problem of real-time estimation of the position and orientation of a moving object using a fixed stereo camera system is considered. An approach based on the use of the Extended Kalman Filter (EKF) combined with a 3D representation of the object's geometry based on Binary Space Partition (BSP) trees is illustrated. The performance of the proposed visual tracking algorithm is experimentally tested in the case of an object moving in the visible space of a fixed stereo camera system.
In this paper we propose a novel method for detecting and tracking objects in real-time video in the presence of cluttered backgrounds (such as the movement of tree leaves), various types of occlusion (object-to-object and object-to-scene), and changes in object scale (small or large objects). Object detection and tracking are the two main stages of any tracking system. In our approach, we first apply filters to remove noise and to suppress minute changes in the scene, then use the frame differencing method to detect and segment the moving object. A contour tracking approach is applied to track the object of interest in all consecutive video frames. To evaluate the method, we test it on different types of datasets: the KTH dataset and our own dataset.
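A minimal sketch of such a frame-differencing and contour-tracking pipeline is shown below, using OpenCV; the thresholds, blur kernel and minimum contour area are illustrative assumptions, not values from the paper.

import cv2

def track_by_frame_difference(video_path, min_area=500):
    """Detect a moving object by differencing consecutive frames and follow
    it via the largest foreground contour in each frame."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("could not read the first frame")
    prev = cv2.GaussianBlur(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), (5, 5), 0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
        diff = cv2.absdiff(prev, gray)                     # frame differencing
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        big = [c for c in contours if cv2.contourArea(c) > min_area]
        if big:
            x, y, w, h = cv2.boundingRect(max(big, key=cv2.contourArea))
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking", frame)
        prev = gray
        if cv2.waitKey(1) == 27:   # Esc to quit
            break
    cap.release()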
IEEE Transactions on Robotics, 2000
This paper presents a general approach for the simultaneous tracking of multiple moving targets using a generic active stereo setup. The problem is formulated on the plane, where cameras are modeled as "line scan cameras," and targets are described as points with unconstrained motion. We propose to control the active system parameters in such a manner that the images of the targets in the two views are related by a homography. This homography is specified during the design stage and, thus, can be used to implicitly encode the desired tracking behavior. Such formulation leads to an elegant geometric framework that enables a systematic and thorough analysis of the problem at hand. The benefits of the approach are illustrated by applying the framework to two distinct stereo configurations. In the first case, we assume two pan-tilt-zoom cameras, with rotation and zoom control, which are arbitrarily placed in the working environment. It is proved that such a stereo setup can track up to N = 3 free-moving targets, while assuring that the image location of each target is the same for both views. The second example considers a robot head with neck pan motion and independent eye rotation. For this case, it is shown that it is not possible to track more than N = 2 targets because of the lack of zoom. The theoretical framework is used to derive the control equations, and the implementation of the tracking behavior is described in detail. The correctness of the results is confirmed through simulations and real tracking experiments.
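In the planar "line scan camera" formulation, each target appears as a single image coordinate per view and the designed homography acts as a 1-D projective map. The sketch below only illustrates how such a map relates the two views; the 2x2 matrix used here is an arbitrary example, not a value from the paper.

import numpy as np

def apply_line_homography(H, u):
    """Map a 1-D image coordinate through a 2x2 homography in homogeneous
    coordinates: u' = (h11*u + h12) / (h21*u + h22)."""
    p = H @ np.array([u, 1.0])
    return p[0] / p[1]

# Example: the designed homography relating the target image in view 1 to view 2
H = np.array([[1.2, 0.1],
              [0.0, 1.0]])
u_view1 = 0.35
u_view2 = apply_line_homography(H, u_view1)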
2007
Research in adaptive cruise control (ACC) is currently one of the most important topics in the field of intelligent transportation systems. The main challenge is to perceive the environment, especially at low speed. In this paper, we present a novel approach to track the 3-D trajectory and speed of obstacles and surrounding vehicles through a stereo-vision system. This tracking method extends the classical patch-based Lucas-Kanade algorithm [9], [1] by integrating the geometric constraints of the stereo system into the motion model: the epipolar constraint, which enforces the tracked patches to remain on the epipolar lines, and the magnification constraint, which links the disparity of the tracked patches to the apparent size of these patches. We report experimental results on simulated and real data showing the improvement in accuracy and robustness of our algorithm compared to the classical Lucas-Kanade tracker.
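The constrained tracker itself is not reproduced here; the sketch below only illustrates the epipolar idea in a simplified way, by running a standard pyramidal Lucas-Kanade step in each rectified view and then forcing the two estimates onto a common image row. The paper instead builds the epipolar and magnification constraints directly into the Lucas-Kanade motion model.

import cv2
import numpy as np

def track_stereo_patch(prev_l, next_l, prev_r, next_r, pt_l, pt_r):
    """Track one patch in both rectified views with pyramidal Lucas-Kanade,
    then enforce the epipolar constraint by snapping the left and right
    estimates onto a shared image row. pt_l and pt_r are (1, 1, 2) float32
    arrays of pixel coordinates."""
    new_l, st_l, _ = cv2.calcOpticalFlowPyrLK(prev_l, next_l, pt_l, None,
                                              winSize=(21, 21), maxLevel=3)
    new_r, st_r, _ = cv2.calcOpticalFlowPyrLK(prev_r, next_r, pt_r, None,
                                              winSize=(21, 21), maxLevel=3)
    row = 0.5 * (new_l[0, 0, 1] + new_r[0, 0, 1])   # common epipolar row
    new_l[0, 0, 1] = new_r[0, 0, 1] = row
    disparity = float(new_l[0, 0, 0] - new_r[0, 0, 0])
    return new_l, new_r, disparity, bool(st_l[0, 0] and st_r[0, 0])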
2011 IEEE/SICE International Symposium on System Integration (SII), 2011
This paper presents a vision-tracking system for mobile robots, which tracks a moving target based on robot motion and stereo vision information. The proposed system controls pan and tilt actuators attached to a stereo camera, using the data from a gyroscope, robot wheel encoders, pan and tilt actuator encoders, and the stereo camera. Using this proposed system, the stereo camera always faces the moving target. The developed system calculates the angles of the pan and tilt actuators by estimating the relative position of the target with respect to the position of the robot. The developed system estimates the target position using the robot motion information and the stereo vision information. The movement of the robot is modeled as a frame transformation consisting of a rotation and a translation. The developed system calculates the rotation using 3-axis gyroscope data and the translation using robot wheel encoder data. The proposed system measures the position of the target relative to the robot by combining the encoder data of the pan and tilt actuators and the disparity map of the stereo vision. The inevitable mismatch of the data, which arises from the asynchrony of the multiple sensors, is prevented by the proposed system, which compensates for the communication latency and the computation time. The experimental results show that the developed system achieves excellent tracking performance in several motion scenarios, including combinations of straights and curves and the climbing of slopes.
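A minimal sketch of the pan/tilt angle computation such a system performs once the target position relative to the robot has been estimated. The frame convention (x forward, y left, z up) and the transform name are assumptions, not details from the paper.

import numpy as np

def pan_tilt_angles(target_in_robot, T_robot_to_pantilt_base):
    """Compute the pan and tilt angles that point the camera at a target,
    given the target position in the robot frame and the fixed 4x4 homogeneous
    transform from the robot frame to the pan-tilt base frame."""
    p = T_robot_to_pantilt_base @ np.append(target_in_robot, 1.0)
    x, y, z = p[:3]
    pan = np.arctan2(y, x)                   # rotation about the vertical axis
    tilt = np.arctan2(z, np.hypot(x, y))     # elevation above the horizontal plane
    return pan, tilt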
2008
Vision-based tracking of an object using the ideas of perspective projection inherently involves nonlinearly modelled measurements, although the underlying dynamic system that encompasses the object and the vision sensors can be linear. Based on a necessary stereo vision setting, we introduce an appropriate measurement conversion technique which subsequently facilitates using a linear filter. The linear filter together with the aforementioned measurement conversion approach forms a robust linear filter based on set-valued state estimation ideas, a particularly rich area in the robust control literature. We provide a rigorous theoretical analysis to ensure bounded state estimation errors, formulated in terms of an ellipsoidal set in which the actual state is guaranteed to be included with arbitrarily high probability. Using computer simulations as well as a practical implementation consisting of a robotic manipulator, we demonstrate that our robust linear filter significantly outperforms the traditionally used extended Kalman filter in this stereo vision scenario.
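The set-valued robust filter itself is not reproduced here; the sketch below only illustrates the measurement-conversion idea that makes a plain linear filter applicable, by triangulating a rectified stereo pixel pair into a 3D point and feeding it to a constant-velocity linear Kalman update. The parameter names and noise values are illustrative assumptions.

import numpy as np

def convert_and_update(x, P, uv_left, uv_right, f, b, cx, cy, R3, dt=0.04):
    """Convert a pair of pixel measurements from a rectified stereo rig into a
    3D point, then run one step of a linear Kalman filter with a
    constant-velocity model (state = [position, velocity], R3 = converted
    measurement covariance)."""
    # Measurement conversion: triangulation for a rectified pair
    d = uv_left[0] - uv_right[0]
    Z = f * b / d
    X = (uv_left[0] - cx) * Z / f
    Y = (uv_left[1] - cy) * Z / f
    z = np.array([X, Y, Z])

    # Linear constant-velocity model
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)
    H = np.hstack([np.eye(3), np.zeros((3, 3))])
    Q = 1e-3 * np.eye(6)

    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R3
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new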
Real-Time Imaging, 1998
Real-Time Tracking of Moving Objects with an Active Camera. This article is concerned with the design and implementation of a system for real-time monocular tracking of a moving object using the two degrees of freedom of a camera platform. Figure-ground segregation is based on motion without making any a priori assumptions about the object form. Using only the first spatiotemporal image derivatives, subtraction of the normal optical flow induced by camera motion yields the object image motion. Closed-loop control is achieved by combining a stationary Kalman estimator with an optimal Linear Quadratic Regulator. The implementation on a pipeline architecture enables a servo rate of 25 Hz. We study the effects of time-recursive filtering and fixed-point arithmetic in image processing and we test the performance of the control algorithm on controlled motion of objects.
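A minimal sketch of the figure-ground step described above, assuming grayscale frames and a dense estimate camera_flow of the image motion induced by the known camera rotation (array of shape H x W x 2). The normal flow is computed from first derivatives only, as in the article, but the details here are illustrative assumptions.

import cv2
import numpy as np

def object_normal_flow(prev, curr, camera_flow):
    """Estimate normal optical flow from first spatio-temporal derivatives and
    subtract the flow component induced by the known camera motion, leaving
    the residual motion attributable to the object."""
    Ix = cv2.Sobel(curr, cv2.CV_32F, 1, 0, ksize=3)
    Iy = cv2.Sobel(curr, cv2.CV_32F, 0, 1, ksize=3)
    It = curr.astype(np.float32) - prev.astype(np.float32)
    grad_mag = np.sqrt(Ix**2 + Iy**2) + 1e-6
    normal_flow = -It / grad_mag                         # flow along the gradient
    cam_normal = (camera_flow[..., 0] * Ix + camera_flow[..., 1] * Iy) / grad_mag
    return normal_flow - cam_normal                      # object-induced residual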
A multi-camera monitoring system for online recognition of moving objects is considered. The system consists of several autonomous vision subsystems. Each of them is able to monitor an area of interest with the aim of revealing and recognizing characteristic patterns and tracking the motion of the selected configuration. Each subsystem recognizes the existence of the predefined objects in order to report expected motion while automatically tracking the selected object. Simultaneous tracking by two or more cameras is used to measure the instantaneous distance of the tracked object. A modular conception enables simple extension with several static and mobile cameras mechanically oriented in space by pan and tilt heads. The open architecture of the system allows the integration of additional subsystems and the extension of the day and night image processing algorithms.
VIPSI-2006 VENICE, 2006
In this paper we address the problem of tracking features efficiently and robustly along image sequences. To estimate the underlying movement we use an approach based on Kalman filtering. The measured data is incorporated by optimizing the global correspondence set based on an efficient approximation of the Mahalanobis Distance (MD). Along the image sequence, a new management model is considered to deal with incoming and previously existing features, so that each occluded feature may either be kept in the tracking process or be excluded, depending on its historical behavior. This approach adequately handles occlusion, disappearance and (re)appearance of features while efficiently tracking movement in the image scene. It also allows feature tracking in long sequences at low computational cost. Some experimental results are presented.
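A minimal sketch of a Mahalanobis-distance test for gating candidate correspondences, assuming a 2-D image measurement, its Kalman-predicted position z_pred and innovation covariance S. The chi-square gate is a common choice (99% quantile for 2 degrees of freedom), not a value from the paper.

import numpy as np

def gate_candidates(z_pred, S, candidates, chi2_thresh=9.21):
    """Keep only the measured features whose squared Mahalanobis distance to a
    track's predicted position falls below the chi-square gate; returns the
    surviving candidates sorted by distance (best first)."""
    S_inv = np.linalg.inv(S)
    kept = []
    for idx, z in enumerate(candidates):
        innov = np.asarray(z) - z_pred
        d2 = innov @ S_inv @ innov      # squared Mahalanobis distance
        if d2 < chi2_thresh:
            kept.append((d2, idx))
    return sorted(kept)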
It is important to maintain the identity of multiple targets while tracking them in applications such as behavior understanding. However, unsatisfying tracking results may be produced under various real-time conditions. These conditions include inter-object occlusion, occlusion of the objects by background obstacles, and splits and merges, which are observed when objects are tracked in real time. In this paper, a feature-based algorithm using a Kalman filter motion model to handle multiple-object tracking is proposed. The system is fully automatic and requires no manual input of any kind for initialization of tracking. A Kalman filter motion model is established using the centroid and area features of the moving objects in a single fixed-camera monitoring scene; the information obtained by detection is used to judge whether a merge or split has occurred, and the calculation of a cost function is used to solve the correspondence problem after a split. The proposed algorithm is validated on human and vehicle image sequences and achieves efficient tracking of multiple moving objects under confusing situations.
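The paper's exact cost function is not given in the abstract; the sketch below shows one common way to realize the correspondence step, building a cost matrix from centroid distance and area difference between Kalman-predicted tracks and current detections and solving it with the Hungarian algorithm. The weights and dictionary fields are assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, w_pos=1.0, w_area=0.01):
    """Build a cost matrix from centroid distance and area difference between
    predicted tracks and current detections, then solve the assignment with
    the Hungarian algorithm; returns the matched (track, detection) pairs."""
    cost = np.zeros((len(tracks), len(detections)))
    for i, t in enumerate(tracks):
        for j, d in enumerate(detections):
            cost[i, j] = (w_pos * np.linalg.norm(t["centroid"] - d["centroid"])
                          + w_area * abs(t["area"] - d["area"]))
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost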
Robotica, 2005
In this paper an algorithm for real-time estimation of the position and orientation of a moving object using a video camera is presented. The algorithm is based on the extended Kalman filter which iteratively computes the object pose from the position measured in the image plane of a set of feature points of the object. A new technique is proposed for the selection of the optimal feature points based on the representation of the object geometry by means of a Binary Space Partitioning (BSP) tree. At each sample time, a visit algorithm of the tree allows pre-selecting all the feature points of the object that are visible from the camera in the pose predicted by the Kalman filter. A further selection is performed to find the optimal set of visible points to be used for image feature extraction. Experimental results are presented which confirm the feasibility and effectiveness of the proposed technique.
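A minimal sketch of a BSP-tree visit that pre-selects candidate faces (and hence feature points) given the camera position predicted by the Kalman filter. The node layout is an assumption, and the sketch only discards back-facing faces; it does not reproduce the full occlusion reasoning of the paper's visit algorithm.

import numpy as np

class BSPNode:
    """Node of a BSP tree of the object's faces: a partition plane
    (normal n and offset d with n.p + d = 0), the faces lying on that plane,
    and the front/back subtrees."""
    def __init__(self, normal, d, faces, front=None, back=None):
        self.normal = np.asarray(normal, dtype=float)
        self.d = float(d)
        self.faces, self.front, self.back = faces, front, back

def visible_faces(node, camera_pos):
    """Visit the tree with respect to the camera position and collect the
    faces whose partition plane faces the camera; the feature points lying on
    those faces are the candidates passed on to image feature extraction."""
    if node is None:
        return []
    side = node.normal @ camera_pos + node.d
    if side > 0:   # camera in front of the plane: these faces may be visible
        return (visible_faces(node.back, camera_pos) + node.faces
                + visible_faces(node.front, camera_pos))
    else:          # camera behind the plane: its faces are back-facing
        return (visible_faces(node.front, camera_pos)
                + visible_faces(node.back, camera_pos))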
Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005
In this paper, two real-time pose tracking algorithms for rigid objects are compared. Both methods are 3D-model based and are capable of calculating the pose between the camera and an object with a monocular vision system. Here, special consideration has been put into defining and evaluating different performance criteria such as computational efficiency, accuracy and robustness. Both methods are described and a unifying framework is derived. The main advantage of both algorithms lies in their real-time capabilities (on standard hardware) whilst being robust to mis-tracking, occlusion and changes in illumination.
The International Conference on Electrical Engineering
A robotic vision system has been designed and analyzed for real time tracking of maneuvering objects. Passive detection using live TV images provides the tracking signals derived from the video data. The calibration and orientation of the two cameras is done by a bundle adjustment technique. The target location algorithm determines the centroid coordinates of the target in the image plane and relates them to the aim point in the object plane. The stereoscopic images provide the information from which the range r of the object can be determined. The azimuth and elevation of the target with respect to a certain origin are determined by correlating the x-y displacements of the centroid in the image plane with the angular displacement of the target in the object plane. The servo drive signals for both the robot motion and the angular positioning of the cameras are derived from the image processing algorithm, which keeps the centroid of the target image in the center of the frame and the target in line with the axis of the optical system. Hence, the spherical coordinates of the target are defined and updated with every TV frame. The time development of the centroid in successive TV frames represents the real time trajectory of the target path. A non-linear prediction technique keeps the target within the aim zone of the tracking system. In order to keep the image processing time within the demand of real time operation (one TV frame time), an image segmentation process is used to subtract nearly all redundant background details.
Infrared Physics & Technology, 2007
In this paper, we deal with the problem of real-time detection, recognition and tracking of moving objects in open and unknown environments using an infrared (IR) and visible vision system. A thermo-camera and two synchronized visible stereo cameras are used to acquire multi-source information: three-dimensional data about the target geometry and its thermal information are combined to improve the robustness of the tracking procedure. Firstly, target detection is performed by extracting its characteristic features from the images and then by storing the computed parameters in a specific database; secondly, the tracking task is carried out using two different computational approaches. A Hierarchical Artificial Neural Network (HANN) is used during active tracking for the recognition of the actual target, while, when partial occlusions or masking occur, a database retrieval method is used to support the search for the correct target being followed. A prototype has been tested on case studies regarding the identification and tracking of animals moving at night in an open environment, and the surveillance of known scenes for unauthorized access control.
Multiple object tracking in image sequences is an emerging topic of research in the field of image processing and computer vision. This paper presents a robust tracking algorithm for detecting multiple moving objects and tracking them in dynamic scenes. Our algorithm consists of two important steps. Firstly, adaptive GMM background modeling with noise reduction is used for foreground object segmentation in noisy scenes; a background updating method is also applied to correct the detection of foreground objects that blend into the background. Secondly, our object tracking framework uses an Extended Kalman Filter to model the nonlinear dynamics and measurement models. A new approach for solving the data association problem is also introduced to determine the appropriate association between objects and Kalman filters, which is necessary in the multiple-object scenario. The adaptive GMM with noise reduction greatly reduces false detections in the scenes. The experiments with our proposed algorithm show good results for multiple object motion tracking against complex dynamic backgrounds.
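A minimal sketch of the foreground segmentation stage (adaptive GMM plus simple morphological noise reduction) using OpenCV's MOG2 subtractor; the parameter values and area threshold are illustrative assumptions, and the EKF tracking stage is omitted.

import cv2

def segment_foreground(frame, subtractor, kernel):
    """Adaptive GMM background subtraction followed by morphological noise
    reduction; returns the mask and the centroids of surviving blobs."""
    mask = subtractor.apply(frame)                          # per-pixel GMM model
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) > 300:
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return mask, centroids

# Typical setup (parameter values are illustrative)
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))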