1998, Real-Time Imaging
Real-Time Tracking of Moving Objects with an Active Camera
This article is concerned with the design and implementation of a system for real-time monocular tracking of a moving object using the two degrees of freedom of a camera platform. Figure-ground segregation is based on motion without making any a priori assumptions about the object form. Using only the first spatiotemporal image derivatives, subtraction of the normal optical flow induced by camera motion yields the object image motion. Closed-loop control is achieved by combining a stationary Kalman estimator with an optimal Linear Quadratic Regulator. The implementation on a pipeline architecture enables a servo rate of 25 Hz. We study the effects of time-recursive filtering and fixed-point arithmetic in image processing, and we test the performance of the control algorithm on controlled motion of objects.
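As a rough illustration of the estimator/controller pairing this abstract describes, the sketch below combines a stationary (steady-state) Kalman filter on a constant-velocity target model with an LQR state-feedback gain commanding the pan axis at the paper's 25 Hz servo rate. The state-space model, noise covariances, and cost weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

dt = 0.04                                   # 25 Hz servo rate, as in the abstract
A = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity target model (assumed)
B = np.array([[0.0], [dt]])                 # coupling of the pan-velocity command
C = np.array([[1.0, 0.0]])                  # only image position is measured
Q = np.diag([1e-4, 1e-2])                   # assumed process noise covariance
R = np.array([[1e-2]])                      # assumed measurement noise covariance

def dare(F, G, Qw, Rw, iters=500):
    """Iterate a discrete algebraic Riccati equation to steady state."""
    P = Qw.copy()
    for _ in range(iters):
        P = F.T @ P @ F - F.T @ P @ G @ np.linalg.inv(Rw + G.T @ P @ G) @ G.T @ P @ F + Qw
    return P

# Stationary Kalman gain (the filter Riccati equation is the dual problem).
Pf = dare(A.T, C.T, Q, R)
K_kf = Pf @ C.T @ np.linalg.inv(C @ Pf @ C.T + R)

# LQR gain, with assumed state and control cost weights.
Qc, Rc = np.diag([1.0, 0.1]), np.array([[0.01]])
Pc = dare(A, B, Qc, Rc)
K_lqr = np.linalg.inv(Rc + B.T @ Pc @ B) @ B.T @ Pc @ A

x = np.zeros((2, 1))                        # estimated [position, velocity]

def servo_step(measured_offset, u_prev):
    """One 25 Hz cycle: predict, correct, and return the next pan command."""
    global x
    x = A @ x + B * u_prev                  # predict with the last command
    x = x + K_kf @ (np.array([[measured_offset]]) - C @ x)  # measurement update
    return (-K_lqr @ x).item()              # state feedback: new pan velocity
```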
2008
In this paper we propose the implementation of a viable algorithm for real-time tracking of objects in a video sequence on a Digital Signal Processor (DSP). Three different tracking algorithms are simulated and, on the basis of the simulation results, the best algorithm is proposed for hardware implementation. The selected algorithm tracks objects by minimizing the error iteratively. A modification of the selected algorithm is suggested that suits the hardware implementation. The algorithm is tested on different video sequences, both synthetic and real, which demonstrates its performance.
The International Conference on Electrical Engineering
A robotic vision system has been designed and analyzed for real-time tracking of maneuvering objects. Passive detection using live TV images provides the tracking signals derived from the video data. The calibration and orientation of two cameras is done by a bundle adjustment technique. The target location algorithm determines the centroid coordinates of the target in the image plane and relates them to the aim point in the object plane. The stereoscopic images provide the information from which the range of the object can be determined. The azimuth and elevation of the target with respect to a certain origin are determined by correlating the x-y displacements of the centroid in the image plane with the angular displacement of the target in the object plane. The servo drive signals for both the robot motion and the angular positioning of the cameras are derived from the image processing algorithm, which keeps the centroid of the target image in the center of the frame and the target in line with the axis of the optical system. Hence, the spherical coordinates of the target are defined and updated with every TV frame. The time development of the centroid in successive TV frames represents the real-time trajectory of the target path. A non-linear prediction technique keeps the target within the aim zone of the tracking system. In order to keep the image processing time within the demand of real-time operation (one TV frame time), an image segmentation process is used to subtract nearly all redundant background details.
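A minimal sketch of the centroid/aim-point step this abstract describes: segment the target from the background, take the centroid of the binary mask, and map the pixel offset from the frame centre to azimuth and elevation errors. Grayscale inputs, the field of view, and the threshold are illustrative assumptions.

```python
import cv2

def centroid_to_angles(frame_gray, background_gray, fov_deg=(40.0, 30.0), thresh=30):
    """Return (azimuth_err, elevation_err) in degrees, or None if no target."""
    diff = cv2.absdiff(frame_gray, background_gray)    # strip redundant background
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                    # nothing segmented this frame
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid in the image plane
    h, w = mask.shape
    # Small-angle mapping: pixel offset from the frame centre, scaled by
    # degrees per pixel, approximates the target's angular displacement.
    azimuth = (cx - w / 2) * fov_deg[0] / w
    elevation = (cy - h / 2) * fov_deg[1] / h
    return azimuth, elevation
```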
This work describes a method for real-time motion detection using an active camera mounted on a pan-tilt platform. Image mapping is used to align images of different viewpoints so that static-camera motion detection can be applied. In the presence of camera position noise, the image mapping is inexact and compensation techniques fail. The use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation. Two motion detection techniques are examined, and experiments to verify the methods are presented. The system successfully extracts moving edges from dynamic images even when the pan-tilt angles between successive frames are as large as 3°.
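A minimal sketch of the compensate-then-filter idea, assuming the inter-frame homography H is available from the pan-tilt readings: warp the previous frame to align it with the current view, difference the two, and apply a morphological opening so thin residue from an inexact mapping does not survive as spurious motion. The threshold and kernel size are assumptions.

```python
import cv2
import numpy as np

def motion_mask(prev_gray, curr_gray, H, thresh=25, kernel_size=3):
    h, w = curr_gray.shape
    stabilized = cv2.warpPerspective(prev_gray, H, (w, h))  # align viewpoints
    diff = cv2.absdiff(curr_gray, stabilized)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Opening (erode then dilate) removes thin edge residue caused by
    # camera position noise, keeping only coherent moving regions.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```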
International Journal of Innovative Technology and Exploring Engineering, 2019
Object tracking is a challenging task and an important topic in the computer vision and image processing community. Some of its applications are security surveillance, traffic monitoring on roads, crime detection and medical imaging. In this paper a new technique for tracking moving objects is presented. Optical flow information allows us to determine the displacement and speed of objects present in a scene. Applying optical flow to the image gives flow vectors at the points, which distinguish the moving regions. Optical flow is computed with the Lucas-Kanade algorithm, which compares favourably with other algorithms. The results show that the proposed algorithm is an efficient and accurate object tracking method. This paper describes a smoothing algorithm to track both single and multiple moving objects in real time. The main issue of high computational time is greatly reduced in the proposed work.
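A minimal sketch of this pipeline using OpenCV's pyramidal Lucas-Kanade implementation: pick corner features, track them into the next frame, and treat points with large displacement as belonging to the moving object. The feature count and motion threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def lk_moving_points(prev_gray, curr_gray, motion_thresh=1.0):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    flow = (nxt - pts).reshape(-1, 2)[ok]        # per-point displacement vectors
    moved = np.linalg.norm(flow, axis=1) > motion_thresh
    return nxt.reshape(-1, 2)[ok][moved]         # tracked points that moved
```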
A multi-camera monitoring system for online recognition of moving objects is considered. The system consists of several autonomous vision subsystems. Each of them is able to monitor an area of interest with the aim of detecting and recognizing characteristic patterns and tracking the motion of the selected configuration. Each subsystem recognizes the existence of the predefined objects in order to report expected motion while automatically tracking the selected object. Simultaneous tracking by two or more cameras is used to measure the instant distance of the tracked object. A modular conception enables simple extension by several static and mobile cameras mechanically oriented in space by pan and tilt heads. The open architecture of the system allows the integration of additional subsystems and the extension of the day and night image processing algorithms.
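For the two-camera distance measurement mentioned above, a minimal sketch under the assumption of a rectified, parallel stereo configuration: the disparity between the object's horizontal image positions gives depth directly. The focal length and baseline are illustrative values.

```python
def stereo_distance(x_left, x_right, focal_px=800.0, baseline_m=0.12):
    """Depth Z = f * B / d for a rectified stereo pair (d = disparity, pixels)."""
    disparity = x_left - x_right        # horizontal offset between the two views
    if disparity <= 0:
        return None                     # point at infinity or a mismatch
    return focal_px * baseline_m / disparity
```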
Mechatronics, 1995
Recent advances in manufacturing automation have motivated vision-based control of autonomous vehicles operated in unattended factories, material handling processes, warehouse operations, and hazardous environment explorations. Existing vision-based tracking control systems of autonomous vehicles, however, have been limited in real-time applications due to slow and/or expensive visual feedback and complicated dynamics and control with nonholonomic constraints. This paper presents a practical real-time vision-based tracking control system of an unmanned vehicle, ViTra. Unlike the conventional RS170 video-based machine vision systems, ViTra uses a DSP-based flexible integrated vision system (FIVS) which is characterized by low cost, computational efficiency, control flexibility, and a friendly user interface. In particular, this paper focuses on developing a framework for vision tracking systems, designing generic fiducial patterns, and applying real-time vision systems to tracking control of autonomous vehicles. A laboratory prototype vision-based tracking system developed at Georgia Institute of Technology permits the uniquely designed fiducial landmarks to be evaluated experimentally, the control strategy and the path planning algorithm derived in the paper to be validated in real-time, and the issues of simplifying nonlinear dynamics and dealing with nonholonomic constraints to be addressed in practice. Experimental results reveal interesting insights into the design, manufacture, modeling, and control of vision-based tracking control systems of autonomous vehicles.
2014
Object tracking in video sequences has long been a challenging area in the field of computer vision and has many real-world applications, such as surveillance systems, robotics, missile defense systems, public security systems and visual information processing. A lot of research is ongoing, ranging from applications to novel algorithms. However, most works are focused on a specific application, such as tracking humans, cars, or pre-learned objects. The work described in this paper is focused on tracking a randomly moving object chosen by a user, using a Kalman filter. For simplicity, a video sequence with just one object within has been selected. The Kalman filter predicts the most probable location of a detected object in the subsequent video frame and tracks an object by assuming the initial state and noise covariance. After sufficient information about the objects is accumulated, we can exploit the learning to successfully track objects. Extended the same concept for tracking the vir...
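A minimal sketch of the predict/correct loop this abstract describes, using OpenCV's linear Kalman filter with a constant-velocity state [x, y, vx, vy]. The noise covariances are assumed, mirroring the abstract's point that the filter starts from an assumed initial state and covariance.

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                        # 4 state vars, 2 measurements
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3      # assumed
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # assumed

def track_step(detection_xy):
    """Predict the next location; correct if the detector found the object."""
    predicted = kf.predict()[:2].ravel()           # most probable next location
    if detection_xy is not None:
        kf.correct(np.float32(detection_xy).reshape(2, 1))
    return predicted
```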
The use of visual sensors may have high impact in applications where it is required to measure the pose (position and orientation) and the visual features of objects moving in unstructured environments. In robotics, the measurements provided by video cameras can be directly used to perform closed-loop control of the robot end-effector pose. In this chapter the problem of real-time estimation of the position and orientation of a moving object using a fixed stereo camera system is considered. An approach based on the use of the Extended Kalman Filter (EKF), combined with a 3D representation of the object's geometry based on Binary Space Partition (BSP) trees, is illustrated. The performance of the proposed visual tracking algorithm is experimentally tested in the case of an object moving in the visible space of a fixed stereo camera system.
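A minimal sketch of the EKF structure underlying this approach: the state is the object pose under some motion model, and the measurement function projects model points into the two cameras. The motion and projection models and their Jacobians are left abstract here, and the BSP-tree feature selection is not reproduced.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One EKF cycle. f/F: motion model and its Jacobian;
    h/H: stereo projection model and its Jacobian; Q/R: noise covariances."""
    # Predict the pose forward with the motion model.
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    # Correct with the measured image features from both cameras.
    Hk = H(x_pred)
    S = Hk @ P_pred @ Hk.T + R                 # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```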
Proceedings of 1st International Conference on Image Processing, 1994
In this paper we describe a two-stage active vision system for tracking a moving object: the object is detected in an overview image of the scene; a close-up view is then taken by changing the frame grabber's parameters and by a positional change of the camera mounted on a robot's hand. With a combination of several simple and fast working vision modules, a robust system for object tracking is constructed. The main principle is the use of two stages for object tracking: one for the detection of motion and one for the tracking itself. Errors in both stages can be detected in real time; the system then switches back from the tracking to the motion detection stage. Standard UNIX interprocess communication mechanisms are used for the communication between control and vision modules. Object-oriented programming hides hardware details.
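A minimal sketch of the two-stage control flow with fallback, where detect() and track() stand in for the paper's vision modules (both are placeholders, not the paper's routines):

```python
def run(frames, detect, track):
    state, target = "DETECT", None
    for frame in frames:
        if state == "DETECT":
            target = detect(frame)         # look for motion in the overview image
            if target is not None:
                state = "TRACK"            # switch to close-up tracking
        else:
            target = track(frame, target)  # follow the detected object
            if target is None:             # tracking error detected in real time
                state = "DETECT"           # fall back to motion detection
    return target
```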
Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005
In this paper, two real-time pose tracking algorithms for rigid objects are compared. Both methods are 3D-model based and are capable of calculating the pose between the camera and an object with a monocular vision system. Here, special consideration has been put into defining and evaluating different performance criteria such as computational efficiency, accuracy and robustness. Both methods are described and a unifying framework is derived. The main advantage of both algorithms lies in their real-time capabilities (on standard hardware) whilst being robust to mis-tracking, occlusion and changes in illumination.
Robotics and Automation, …, 2003
The problem of tracking the position and orientation of a moving object using a stereo camera system is considered in this paper. A robust algorithm based on the extended Kalman filter is adopted, combined with an efficient selection technique of the object image features, based on Binary Space Partitioning tree geometric models. An experimental study is carried out using a vision system of two fixed cameras.
2009 IEEE 12th International Conference on Computer Vision, 2009
We present a novel keyframe selection and recognition method for robust markerless real-time camera tracking. Our system contains an offline module to select features from a group of reference images and an online module to match them to the input live video in order to quickly estimate the camera pose. The main contribution lies in constructing an optimal set of keyframes from the input reference images, which are required to approximately cover the entire space and at the same time minimize the content redundancy amongst the selected frames. This strategy not only greatly saves the computation, but also helps significantly reduce the number of repeated features so as to improve the camera tracking quality. Our system also employs a parallel-computing scheme with multi-CPU hardware architecture. Experimental results show that our method dramatically enhances the computation efficiency and eliminates the jittering artifacts.
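The paper constructs an optimal keyframe set; purely to convey the coverage-versus-redundancy objective, the sketch below uses a simple greedy maximum-coverage heuristic over an assumed set-of-feature-ids representation. This is not the paper's construction.

```python
def select_keyframes(frame_features, k):
    """frame_features: list of sets of feature ids visible in each reference image.
    Greedily pick frames adding the most not-yet-covered features."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(frame_features)),
                   key=lambda i: len(frame_features[i] - covered))
        if not frame_features[best] - covered:
            break                      # nothing new left to cover: stop early
        chosen.append(best)            # keep this keyframe
        covered |= frame_features[best]
    return chosen
```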
In this paper, we propose a high-performance object tracking system for obtaining high-quality images of a high-speed moving object at video rate by controlling a pair of active cameras that consists of two cameras with zoom lenses mounted on two pan-tilt units. In this paper, "high-quality image" implies that the object image is in focus and not blurred, the size of the object in the image remains unchanged, and the object is located at the image center. To achieve our goal, we use the K-means tracker algorithm for tracking objects in an image sequence captured by the active cameras. We use the results of the K-means tracker to control the angular position and speed of each pan-tilt-zoom unit by employing the PID control scheme. By using two cameras, the binocular stereo vision algorithm can be used to obtain the 3D position and velocity of the object. These results are used in order to adjust the focus and zoom. Moreover, our system allows the two cameras to gaze at a single point in 3D space. However, this system may become unstable when the time response deteriorates because of excessive interference in the mutual control loop or strict restriction of the camera action. In order to solve these problems, we introduce the concept of reliability into the K-means tracker, and propose a method for controlling the active cameras by using relative reliability. We have developed a prototype system and confirmed through extensive experiments that we can obtain focused and motion-blur-free images of a high-speed moving object at video rate.
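A minimal sketch of the PID scheme mentioned above: each pan/tilt axis is driven by a PID controller on the pixel error between the tracked object's position and the image centre. The gains and frame rate are illustrative assumptions, not the paper's values.

```python
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err):
        self.integral += err * self.dt                # accumulate error
        deriv = (err - self.prev_err) / self.dt       # rate of change of error
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pan_pid = PID(kp=0.02, ki=0.001, kd=0.005, dt=1 / 30)  # assumed gains, 30 fps

def pan_command(object_x, image_width):
    # Positive error (object right of centre) yields a positive pan rate.
    return pan_pid.step(object_x - image_width / 2)
```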
20th Iranian Conference on Electrical Engineering (ICEE2012), 2012
This paper presents a vision-based navigation strategy for a pan and tilt platform with a mounted video camera as a visual sensor. For detection of objects, a suitable image processing algorithm is used. Moreover, estimation of the object position is performed using the Kalman filter as an estimator. The proposed method is implemented experimentally on a laboratory-size pan and tilt platform. Experimental results show good target tracking by the proposed method in real time.
2016
Tracking objects in video sequences is a central issue in many areas, such as surveillance, smart vehicles, human-computer interaction, augmented reality applications, and interactive TV. It is a process that always involves two steps: detection and tracking. A common approach is to detect the object in the first frame and then track it through the rest of the video. Video tracking is the process of locating a moving object over time using a camera module. The objective of video tracking is to associate target objects in consecutive video frames. The association can be especially difficult when the objects are moving fast relative to the frame rate. Another situation that increases the complexity of the problem is when the tracked object changes orientation over time. For these situations video tracking systems usually employ a motion model which describes how the image of the target might change for different possible motions of the object. In this project an algorithm is propo...
2002
To achieve high-performance visual tracking, a well-balanced integration of vision and control is particularly important. This paper describes an approach for the improvement of tracking performance by a careful design and integration of visual processing routines and control architecture and algorithms. A layered architecture is described. In addition, a new approach for the characterization of the performance of active vision systems is described. This approach combines the online generation of test images with the real-time response of the mechanical system. Crucial for the performance improvement was the consideration that the number of mechanical degrees of freedom used to perform the tracking was smaller than the number of degrees of freedom of rigid motion (6). The improvements in the global performance are experimentally demonstrated.
Researchers and robotic development groups have recently started paying special attention to autonomous mobile robot navigation in indoor environments using vision sensors. A camera is used as a sensor to provide the data required for robot navigation and object detection. The aim of the project is to construct a mobile robot with an integrated vision system, using a webcam to locate, track and follow a moving object. To achieve this task, multiple image processing algorithms are implemented and processed in real time. A mini-laptop was used for collecting the necessary data and sending it to a PIC microcontroller, which converts the processed data into commands for the robot's proper orientation. A vision system can be utilized in object recognition for robot control applications. The results demonstrate that the proposed mobile robot can successfully use the webcam to detect the object and distinguish a tennis ball based on its color and shape.
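A minimal sketch of a colour-and-shape test of the kind this abstract describes: threshold in HSV around the tennis ball's yellow-green hue, then accept the largest blob only if it is sufficiently circular. The HSV bounds and circularity threshold are illustrative assumptions, not the project's calibrated values.

```python
import cv2
import numpy as np

def find_ball(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (25, 80, 80), (45, 255, 255))  # assumed yellow-green band
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)                 # largest colour blob
    area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
    # Circularity = 4*pi*area/perimeter^2 is 1.0 for a perfect circle.
    if perim == 0 or 4 * np.pi * area / perim ** 2 < 0.7:
        return None                                        # shape test failed
    (x, y), r = cv2.minEnclosingCircle(c)
    return (x, y, r)                                       # ball centre and radius
```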
SMPTE Motion Imaging Journal, 2007
In order to insert a virtual object into a TV image, the graphics system needs to know precisely how the camera is moving, so that the virtual object can be rendered in the correct place in every frame. Nowadays this can be achieved relatively easily in postproduction, or in a studio equipped with a special tracking system. However, for live shooting on location, or in a studio that is not specially equipped, installing such a system can be difficult or uneconomic. To overcome these limitations, the MATRIS project is developing a real-time system for measuring the movement of a camera. The system uses image analysis to track naturally occurring features in the scene, and data from an inertial sensor. No additional sensors, special markers, or camera mounts are required. This paper gives an overview of the system and presents some results.