1999, Lecture Notes in Computer Science
Tracking in 3D with an active vision system depends on the performance of both motor control and vision algorithms. Tracking is performed based on different visual behaviors, namely smooth pursuit and vergence control. A major issue in a tracking system is its robustness both to partial occlusion of the target and to sudden changes in the target trajectory. Another important issue is the reconstruction of the 3D trajectory of the target. These issues can only be dealt with if the performance of the algorithms is evaluated, since such evaluation enables the identification of the limits and weaknesses in the system's behavior. In this paper we describe the results of the analysis of a binocular tracking system. To perform the evaluation, a control framework was used both for the vision algorithms and for the servo-mechanical system. Due to the geometry changes in an active vision system, the problem of defining and generating system reference inputs has specific features. We analyze this problem, proposing and justifying a methodology for the definition and generation of such reference inputs. As a result, several algorithms were improved and the global performance of the system was enhanced. This paper proposes a methodology for such an analysis (and the resulting enhancements) based on techniques from control theory.
Proceedings. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovations in Theory, Practice and Applications (Cat. No.98CH36190), 1998
The performance of a binocular active vision system depends mainly on two aspects: vision/image processing and control. In this paper we characterize the monocular performance of smooth pursuit. This system is used to track targets binocularly in a surveillance environment. One aspect of this characterization was the inclusion of the vision processing. To characterize the performance from the control point of view, four standard types of inputs were used: step, ramp, parabola and sinusoid. The responses can be used to identify which subsystems can be optimized. We show that prediction and a velocity estimate are essential for good tracking performance.
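The four standard test inputs named in the abstract can be sketched as follows; the sampling range, frequency, and scalings are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def reference_inputs(t, f=0.5):
    """Generate the four standard control-characterization inputs:
    step, ramp, parabola and sinusoid.  Amplitudes and the sinusoid
    frequency f (Hz) are illustrative, not the paper's values."""
    step = np.where(t >= 0.0, 1.0, 0.0)        # unit step
    ramp = np.maximum(t, 0.0)                  # unit ramp
    parabola = 0.5 * np.maximum(t, 0.0) ** 2   # unit parabola (t^2 / 2)
    sinusoid = np.sin(2.0 * np.pi * f * t)     # sinusoid at f Hz
    return step, ramp, parabola, sinusoid

t = np.linspace(0.0, 2.0, 201)
step, ramp, parabola, sinusoid = reference_inputs(t)
```

Feeding each of these as a position reference and recording the tracking error reveals which error type (steady-state offset, velocity lag, acceleration lag, phase lag) dominates each subsystem.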
6th International Workshop on Advanced Motion Control Proceedings, 2000
This paper deals with active tracking of 3D moving targets. Tracking is based on different visual behaviors, namely smooth pursuit and vergence control. The performance and robustness in visual control of motion depend both on the vision algorithms and on the control structure. In this work we evaluate these two aspects, characterize the delays, and discuss ways to cope with latency while improving system performance. Kalman filtering is used to achieve smooth behaviors and increase visual processing robustness. A specific Kalman filter structure is proposed and its tuning and initialization are discussed. Delays and system latencies substantially affect the performance of visually guided systems. Interpolation is used to cope with visual processing delays. Model predictive control strategies are proposed to compensate for the mechanical latency in visual control of motion.
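A minimal sketch of the kind of Kalman filter such a tracker uses: a 1-D constant-velocity model whose prediction step also provides the look-ahead needed to cope with latency. The state model, noise variances `q` and `r`, and initialization below are illustrative assumptions; the paper proposes its own filter structure and discusses its tuning in detail.

```python
import numpy as np

class ConstantVelocityKF:
    """1-D constant-velocity Kalman filter sketch for target tracking.
    State x = [position, velocity]; only position is measured.
    q and r are illustrative tuning values, not the paper's."""

    def __init__(self, dt, q=1e-2, r=1e-1):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.H = np.array([[1.0, 0.0]])             # position measurement
        self.Q = q * np.eye(2)                      # process noise
        self.R = np.array([[r]])                    # measurement noise
        self.x = np.zeros(2)                        # state estimate
        self.P = np.eye(2)                          # estimate covariance

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        y = z - self.H @ self.x                     # innovation, shape (1,)
        S = self.H @ self.P @ self.H.T + self.R     # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain, shape (2, 1)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x
```

Running `predict()` one or more extra steps without `update()` yields the extrapolated target position used to compensate for a known processing delay.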
Journal of Applied Research and Technology, 2009
This paper presents a binocular eye-to-hand visual servoing system that is able to track and grasp a moving object in real time. Linear predictors are employed to estimate the object trajectory in three dimensions and are capable of predicting future positions even if the object is temporarily occluded. For its development we have used a CRS T475 manipulator robot with six degrees of freedom and two fixed cameras in a stereo pair configuration. The system has a client-server architecture and is composed of two main parts: the vision system and the control system. The vision system uses color detection to extract the object from the background and a tracking technique based on search windows and object moments. The control system uses the RobWork library to generate the movement instructions and to send them to a C550 controller by means of the serial port. Experimental results are presented to verify the validity and the efficacy of the proposed visual servoing system.
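The idea of a linear predictor that keeps estimating the trajectory through a temporary occlusion can be sketched with a least-squares line fit over recent 3-D samples; this is a generic linear predictor for illustration, and the paper's exact predictor structure may differ.

```python
import numpy as np

def predict_position(times, positions, t_future):
    """Linear-predictor sketch: fit position = a + b*t by least squares
    to recent trajectory samples and extrapolate to t_future, e.g.
    across an occlusion.  positions is (N, 3) for 3-D points; all
    names here are illustrative, not from the paper."""
    A = np.column_stack([np.ones_like(times), times])       # design matrix [1, t]
    coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)  # coeffs is (2, 3)
    return coeffs[0] + coeffs[1] * t_future                 # a + b * t_future
```

While the object is visible, the fit window slides forward; when detection fails, the last fit is simply evaluated at the current time to keep the search window and the arm moving.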
2002
To achieve high-performance visual tracking, a well-balanced integration of vision and control is particularly important. This paper describes an approach for the improvement of tracking performance through careful design and integration of visual processing routines and of the control architecture and algorithms. A layered architecture is described. In addition, a new approach for the characterization of the performance of active vision systems is described. This approach combines the online generation of test images with the real-time response of the mechanical system. Crucial for the performance improvement was the consideration that the number of mechanical degrees of freedom used to perform the tracking was smaller than the number of degrees of freedom of rigid motion (6). The improvements in the global performance are experimentally demonstrated.
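The online generation of test images mentioned above can be sketched as rendering a synthetic target at a commanded image position, so the full vision-plus-servo chain is exercised against a known ground-truth trajectory. The frame size, target shape, and intensity below are illustrative assumptions.

```python
import numpy as np

def synthetic_target_frame(cx, cy, shape=(120, 160), radius=5):
    """Sketch of online test-image generation: render a bright disc
    target centered at image position (cx, cy) on a dark background.
    All parameters are illustrative, not from the paper."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    disc = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    return disc.astype(np.uint8) * 255   # 8-bit grayscale frame
```

Because the commanded target position is known exactly at every frame, the measured image position and the mechanical response can both be compared against ground truth.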
… CONFERENCE ON PATTERN …, 1996
An active vision system has to enable the implementation of reactive visual processes and of elementary visual behaviors in real time. Therefore the control architecture is extremely important. In this paper we discuss a number of issues related to the implementation of a real-time control architecture and describe the architecture we are using with camera heads. Another important issue in the operation of active vision binocular heads is their integration into more complex robotic systems. The design of the control architecture has to be suited to the integration of the system into other robotic systems. We claim that higher levels of autonomy and integration can be obtained by designing the system architecture based on the concept of purposive behavior. At the lower levels we consider vision as a sensor and integrate it in control systems (both feed-forward and servo loops), and several visual processes are implemented in parallel, computing relevant measures for the control process. At higher levels the architecture is modeled as a state transition system. Finally we show how this architecture can be used to implement a pursuit behavior using optical flow. Simultaneously, vergence control can also be performed using the same visual processes.
Applications of visual control of motion require that the relationships between motion in the scene and image motion be established. In the case of active tracking of moving targets these relationships become more complex due to camera motion. This work derives the position and velocity equations that relate image motion, camera motion and target 3D motion. Perspective projection is assumed. Both monocular and binocular tracking systems are analyzed. The expressions obtained are simplified to decouple the control of the different mechanical degrees of freedom. The simplification errors are quantified and characterized. This study contributes to the understanding of the tracking process, to establishing the control laws of the different camera motions, to deriving test signals to evaluate the system performance, and to developing egomotion compensation algorithms.
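The core perspective-projection relations can be illustrated for the simplest case of a static camera observing a moving point: x = fX/Z, and differentiating gives the image velocity. The camera-motion terms that the paper derives are omitted here for brevity, and the symbol names are illustrative.

```python
def image_motion(X, Y, Z, Xd, Yd, Zd, f=1.0):
    """Perspective-projection position and velocity equations for a
    static camera observing a point at (X, Y, Z) with 3-D velocity
    (Xd, Yd, Zd):  x = f*X/Z  and, by the quotient rule,
    xdot = f*(Xd*Z - X*Zd)/Z**2  (likewise for y).  Camera-motion
    terms from the paper's full derivation are omitted."""
    x = f * X / Z
    y = f * Y / Z
    xd = f * (Xd * Z - X * Zd) / Z ** 2
    yd = f * (Yd * Z - Y * Zd) / Z ** 2
    return x, y, xd, yd
```

Note that a pure translation along the optical axis (Zd only) still induces image motion proportional to x and y, which is one reason the full expressions couple the degrees of freedom before simplification.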
IEEE INTERNATIONAL CONFERENCE …, 1997
An active vision system has to enable the implementation of reactive visual processes and of elementary visual behaviors in real time. Therefore the control architecture is extremely important. In this paper we discuss a number of issues related to the implementation of a real-time control architecture and describe the architecture we are using with our camera heads. Even though in most applications a fully calibrated system is not required, we also describe a methodology for calibrating the camera head, taking advantage of its degrees of freedom. These calibration parameters are used to evaluate the performance of the system. Another important issue in the operation of active vision binocular heads is their integration into more complex robotic systems. We claim that higher levels of autonomy and integration can be obtained by designing the system architecture based on the concept of purposive behavior. At the lower levels we consider vision as a sensor and integrate it in control systems (both feed-forward and servo loops), and several visual processes are implemented in parallel, computing relevant measures for the control process. At higher levels the architecture is modeled as a state transition system. Finally we show how this architecture can be used to implement a pursuit behavior using optical flow. Simultaneously, vergence control can also be performed using the same visual processes.
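A pursuit behavior driven by optical flow can be sketched as follows: the mean flow over the target window gives an image-velocity error, which a proportional law converts into pan/tilt rate commands. The pure proportional law and the gain are illustrative assumptions; the papers embed this inside a fuller control structure.

```python
import numpy as np

def pursuit_command(flow, gain=0.5):
    """Sketch of an optical-flow-driven pursuit behavior: average the
    (u, v) flow field over the target window and map it to pan/tilt
    rate commands with a proportional gain.  The gain and the pure
    P-law are illustrative, not from the papers."""
    mean_flow = flow.reshape(-1, 2).mean(axis=0)  # mean (u, v) over window
    pan_rate = gain * mean_flow[0]    # horizontal image flow -> pan rate
    tilt_rate = gain * mean_flow[1]   # vertical image flow -> tilt rate
    return pan_rate, tilt_rate
```

Because vergence needs only the horizontal disparity of the same flow measurements between the two cameras, the identical visual process can serve both behaviors, as the abstract notes.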
This paper presents the use of 3D visual features in the so-called "Visual Servoing Approach". After having briefly recalled how the task function approach is used in visual servoing, we present the notion of 3D logical vision sensors, which permit us to extract visual information. In particular, we are interested in those composed of the estimation of both a 3D point and a 3D attitude. We give the control law expression with regard to the two kinds of visual features, performed at video rate with our robotic platform. We present some of the experimental results and show the good convergence of the control laws.
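The control-law expression in the task-function approach commonly takes the textbook form v = -λ L⁺ e, where L is the interaction matrix of the chosen visual features and e the feature error; this generic form is sketched below as an illustration, not as the paper's specific derivation for its 3-D features.

```python
import numpy as np

def servo_velocity(L, e, lam=0.5):
    """Classic visual-servoing control law from the task-function
    approach: v = -lambda * pinv(L) @ e.  L is the interaction
    (feature Jacobian) matrix, e the visual feature error, and
    lam a convergence gain.  Generic textbook form, shown for
    illustration."""
    return -lam * np.linalg.pinv(L) @ e
```

With a well-conditioned interaction matrix this law drives the feature error exponentially to zero, which is the "good convergence" behavior the abstract reports experimentally.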
Proceedings of 1994 IEEE Workshop on Applications of Computer Vision
Springer eBooks, 2007
IEEE Transactions on Control Systems Technology, 2000
The International Conference on Electrical Engineering
Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005