1999, IEEE Transactions on Pattern Analysis and Machine Intelligence
This paper deals with the 3D structure estimation and exploration of static scenes using active vision. Our method is based on the structure-from-controlled-motion approach, which constrains camera motions to obtain an optimal estimation of the 3D structure of a geometrical primitive. Since this approach requires gazing at the considered primitive, we have developed perceptual strategies able to perform a succession of robust estimations. This leads to a gaze-planning strategy that mainly uses a representation of known and unknown areas as a basis for selecting viewpoints. This approach ensures a reconstruction of the scene that is as complete as possible.
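The gaze-planning idea above can be sketched as a next-best-view selection over a map of known and unknown areas. The grid representation, square field of view, and scoring rule below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical sketch: score candidate viewpoints by how many still-unknown
# cells of an occupancy map fall inside their (square) field of view, and
# gaze next at the viewpoint that covers the most unknown area.
# Cell states: 0 = unknown, 1 = known.

def score_viewpoint(occupancy, center, half_width):
    """Count unknown cells visible from a candidate viewpoint."""
    r, c = center
    view = occupancy[max(0, r - half_width):r + half_width + 1,
                     max(0, c - half_width):c + half_width + 1]
    return int(np.sum(view == 0))

def next_best_view(occupancy, candidates, half_width=2):
    """Pick the candidate viewpoint covering the most unknown area."""
    return max(candidates, key=lambda v: score_viewpoint(occupancy, v, half_width))

occupancy = np.zeros((10, 10), dtype=int)
occupancy[:, :5] = 1                        # left half already reconstructed
best = next_best_view(occupancy, [(5, 2), (5, 7)])
print(best)                                 # (5, 7): more unknown cells in view
```

Repeating this selection after each robust estimation yields the succession of viewpoints the abstract describes.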
1992
Abstract: An active approach to the integration of shape-from-X modules (here, shape from shading and shape from texture) is proposed. The question of what constitutes a good motion for the active observer is addressed. Generally, the role of the visual system is to ...
Proceedings of the IEEE, 1988
Active Perception (Active Vision specifically) is defined as a study of modeling and control strategies for perception. By modeling we mean models of sensors, processing modules, and their interaction. We distinguish local models from global models by their extent of application in space and time. The local models represent procedures and parameters such as optical distortions of the lens, focal length, spatial resolution, band-pass filtering, etc. The global models, on the other hand, characterize the overall performance and make predictions on how the individual modules interact. The control strategies are formulated as a search for a sequence of steps that would minimize a loss function while seeking the most information. Examples are shown as an existence proof of the proposed theory on obtaining range from focus and stereo vergence, on 2-D segmentation of an image, and on 3-D shape parametrization.
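The range-from-focus example cited above can be illustrated by a focus sweep: measure image sharpness at each lens setting and take the setting that maximizes it as in focus. The focus measure here (variance of a discrete Laplacian) and the synthetic image stack are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Sketch of range from focus: sweep through focus settings, score each
# image's sharpness, and pick the sharpest frame.

def laplacian_variance(img):
    """Sharpness score: variance of the 4-neighbour discrete Laplacian."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def best_focus(stack):
    """Index of the sharpest image in a focus stack."""
    return int(np.argmax([laplacian_variance(img) for img in stack]))

# Synthetic stack: the middle frame has the strongest high-frequency content;
# the out-of-focus frames are modelled crudely as attenuated copies.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
stack = [0.2 * base, base, 0.2 * base]
print(best_focus(stack))                  # 1
```

The selected index would then be mapped through the lens calibration (a local model, in the paper's terms) to a depth estimate.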
Proceedings of IEEE International Conference on Robotics and Automation, 1996
In this paper, we present an agent architecture/active vision research tool called the Active Vision Shell (AV-shell). The AV-shell can be viewed as a programming framework for expressing perception and action routines in the context of situated robotics. The AV-shell is a powerful interactive C-shell-style interface providing many capabilities important in an agent architecture, such as the ability to combine the perceptive capabilities of active vision with capabilities provided by other robotic devices, the ability to interact with a wide variety of active vision devices, a set of image routines, and the ability to compose the routines into continuously running perception-action processes. At the end of the paper, we present an example of AV-shell usage.
Autonomous Robots, 2017
Despite the recent successes in robotics, artificial intelligence and computer vision, a complete artificial agent necessarily must include active perception. A multitude of ideas and methods for how to accomplish this have already appeared in the past, their broader utility perhaps impeded by insufficient computational power or costly hardware. The history of these ideas, perhaps selective due to our perspectives, is presented with the goal of organizing the past literature and highlighting the seminal contributions. We argue that those contributions are as relevant today as they were decades ago and, with the state of modern computational tools, are poised to find new life in the robotic perception systems of the next decade. Keywords: Sensing, Perception, Attention, Control. This is one of several papers published in Autonomous Robots comprising the Special Issue on Active Perception.
1993
An active vision system capable of understanding and learning about a dynamic scene is presented. The system is active since it makes purposive use of a monocular sensor to satisfy a set of visual tasks. The message conveyed by the paper follows the thesis that learning is indispensable to the vision process to detect expected and unexpected situations, especially when the monitoring of scene dynamics is employed. In such cases fast and efficient learning strategies are required to counterbalance the unsatisfactory performance of standard vision techniques. The paper presents two distinct ways of learning. The system can learn about the geometry and dynamics of the scene in the usual active vision sense, by purposively constructing and continuously updating a model of the scene. This results in an incremental improvement of the performance of the vision process. The system can also learn about new objects, by constructing their models on the basis of recognised object features, and use the models to predict unwanted situations. We suggest that the vision process benefits from the use of techniques for extracting scene characteristics and creating object models. As a consequence of the large variety of existing object classes, a precompiled modelling of complex-shaped objects is unrealistic. Moreover, it is difficult to predict the presence and dynamics of all objects which may appear in the scene. Even if the creation of precompiled models for complex objects were feasible, the required recognition mechanisms would be slow and presumably inefficient.
Storage and Retrieval for Image and Video Databases, 1997
Earlier, a biologically plausible active vision model for Multiresolutional Attentional Representation and Recognition (MARR) was developed. The model is based on the scanpath theory of Noton and Stark [17] and provides invariant recognition of gray-level images. In the present paper, the algorithm of automatic image-viewing trajectory formation in the MARR model, the results of psychophysical experiments, and possible applications of the model are considered. The algorithm of automatic image-viewing trajectory formation is based on imitation of the scanpath formed by an operator. Several propositions about possible mechanisms for the consecutive selection of fixation points in human visual perception, inspired by computer simulation results and known psychophysical data, have been tested and confirmed in our psychophysical experiments. In particular, we have found that a gaze switch may be directed (i) to a peripheral part of the visual field which contains an edge oriented orthogonally to the edge at the point of fixation, and (ii) to a peripheral part of the visual field containing crossing edges. Our experimental results have been used to optimize the automatic image-viewing algorithm in the MARR model. The modified model demonstrates an ability to recognize complex real-world images invariantly with respect to scale, shift, rotation, illumination conditions and, in part, point of view, and can be used to solve some robot vision tasks.
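The orthogonal-edge finding (i) suggests a simple scoring rule for the next fixation: among peripheral candidates, prefer the one whose edge orientation is most nearly perpendicular to the edge at the current fixation. The scoring function below is a hypothetical reading of that finding, not the MARR model's actual mechanism:

```python
import numpy as np

# Illustrative fixation-selection heuristic: orientations in radians,
# orthogonality 1.0 for perpendicular edges, 0.0 for parallel edges.

def orthogonality(theta_a, theta_b):
    """Degree to which two edge orientations are perpendicular."""
    return abs(np.sin(theta_a - theta_b))

def next_fixation(current_theta, candidates):
    """candidates: list of (point, edge_orientation); pick most orthogonal."""
    return max(candidates, key=lambda pc: orthogonality(current_theta, pc[1]))[0]

# Current fixation sits on a horizontal edge (theta = 0).
candidates = [((10, 4), 0.1), ((3, 12), np.pi / 2)]
print(next_fixation(0.0, candidates))     # (3, 12): the vertical edge wins
```

Finding (ii) could be folded in by boosting the score of candidates where several edge orientations cross.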
Proceedings of 1994 IEEE Workshop on Applications of Computer Vision
Flexible operation of a robotic agent in an uncalibrated environment requires the ability to recover unknown or partially known parameters of the workspace through sensing. Of the sensors available to a robotic agent, visual sensors provide information that is richer and more complete than other sensors. In this paper we present robust techniques for the derivation of depth from feature points on a target's surface and for the accurate and high-speed tracking of moving targets. We use these techniques in a system that operates with little or no a priori knowledge of the object-related parameters present in the environment. The system is designed under the Controlled Active Vision framework [16] and robustly determines parameters such as velocity for tracking moving objects and depth maps of objects with unknown depths and surface structure. Such determination of intrinsic environmental parameters is essential for performing higher level tasks such as inspection, exploration, tracking, grasping, and collision-free motion planning. For both applications, we use the Minnesota Robotic Visual Tracker (a single visual sensor mounted on the end-effector of a robotic manipulator combined with a real-time vision system) to automatically select feature points on surfaces, to derive an estimate of the environmental parameter in question, and to supply a control vector based upon these estimates to guide the manipulator. The paper concludes with applications of these techniques to transportation problems such as vehicle tracking.
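The core of depth derivation from tracked feature points can be illustrated with a single eye-in-hand camera: a known lateral camera translation acts as a baseline, and a feature's image displacement gives its depth by triangulation. This is a hedged sketch of the geometry only; the names and the simple pinhole assumption are not the paper's notation:

```python
# Depth from one tracked feature under known camera translation:
# with focal length f (pixels), lateral motion b (metres, the baseline),
# and image displacement d (pixels, the disparity), depth is Z = f * b / d.

def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Triangulated depth (metres) of a tracked feature point."""
    if disparity_pixels <= 0:
        raise ValueError("feature must move in the image for depth recovery")
    return f_pixels * baseline_m / disparity_pixels

# f = 800 px, camera moved 0.1 m sideways, feature shifted 20 px.
print(depth_from_disparity(800.0, 0.1, 20.0))   # 4.0 metres
```

Repeating this over many automatically selected feature points yields the kind of sparse depth map the abstract mentions; nearer points move more in the image and so are estimated more reliably.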
1994
Flexible operation of a robot in an uncalibrated environment requires the ability to recover unknown or partially known parameters of the workspace through sensing. Of the sensors available to a robotic agent, visual sensors provide information that is richer and more complete than other sensors. In this paper we present robust techniques for the derivation of depth from feature points on a target's surface and for the accurate and high-speed tracking of moving targets. We use these techniques in a system that operates with little or no a priori knowledge of the object-related parameters present in the environment. The system is designed under the Controlled Active Vision framework [16] and robustly determines parameters such as velocity for tracking moving objects and depth maps of objects with unknown depths and surface structure. Such determination of extrinsic environmental parameters is essential for performing higher level tasks such as inspection, exploration, tracking, ...
Robots and Biological Systems: Towards a New Bionics?, 1993
Most past and present work in machine perception has involved extensive static analysis of passively sampled data. However, it should be axiomatic that perception is not passive, but active. Furthermore, most past and current robotics research uses rather rigid assumptions and models about the world, objects, and their relationships. It is not difficult to see that in realistic situations these assumptions usually do not hold, and hence the robots do not perform to the designer's expectations.
Image and Vision Computing, 1995
This paper discusses specific aspects of the control of an active computer vision system. A prototype control mechanism able to estimate and modify the extrinsic parameters of a single camera has been designed and implemented. The system is capable of continuous and active adjustments of the camera in response to a given visual goal. The implementation of the mechanism takes into account errors in the position of objects stored in the memory of the vision system, and errors due to image segmentation. The advantages and drawbacks of the methods used are discussed, and its performance is demonstrated and quantified in experiments using synthetic and real image data.
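The continuous, goal-driven camera adjustment described above can be sketched as a simple closed loop: a proportional controller pans the camera to drive a target's image offset toward the optical centre. The gain, pixel-to-degree scale, and convergence threshold below are illustrative assumptions, not the paper's control mechanism:

```python
# Minimal proportional pan control: each iteration measures the target's
# horizontal offset from the image centre and applies a fraction of the
# corresponding pan correction, until the target is (nearly) centred.

def pan_to_center(image_offset_px, pixels_per_degree=20.0, gain=0.5,
                  tol_px=1.0, max_steps=50):
    """Iteratively reduce the target's image offset; return steps taken."""
    pan_deg = 0.0
    for step in range(max_steps):
        if abs(image_offset_px) <= tol_px:
            return step
        correction = gain * image_offset_px / pixels_per_degree  # degrees
        pan_deg += correction
        image_offset_px -= correction * pixels_per_degree        # new offset
    return max_steps

print(pan_to_center(100.0))   # converges in a handful of steps
```

In a real system the "new offset" line would be replaced by re-segmenting the image after the camera moves, which is where the segmentation and stored-position errors the abstract discusses enter the loop.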