2019, IEEE Robotics and Automation Letters
…
This paper considers the problem of collision-free navigation of omnidirectional mobile robots in environments with obstacles. Information from a monocular camera, encoders, and an inertial measurement unit is used to achieve the task. Three different visual servoing control schemes, compatible with the class of considered robot kinematics and sensor equipment, are analysed, and their robustness properties with respect to actuation inaccuracies are discussed. Then, a controller is proposed with a formal guarantee of convergence to the bisector of a corridor. The main controller components are a visual servoing control scheme and a velocity estimation algorithm integrating visual, kinematic and inertial information. The behaviour of all the considered algorithms is analysed and illustrated through simulations for both a wheeled and a humanoid robot. The solution proposed as the most efficient and robust with respect to actuation inaccuracies is also validated experimentally on a real NAO humanoid.
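A corridor-bisector servoing law of the kind described above can be sketched as a simple proportional controller on an image-space error. This is a hypothetical illustration, not the paper's actual controller: the vanishing-point abscissa `x_vp`, the image centre `x_mid`, and the gains are all assumed names and values.

```python
# Hypothetical sketch of a corridor-bisector visual servoing law. Assumes the
# vanishing point abscissa x_vp (pixels) is extracted from the corridor edges
# by some image-processing front end; gains and names are illustrative.

def bisector_control(x_vp, x_mid, k_omega=0.8, v_forward=0.2):
    """Return (v, omega): constant forward speed and a heading rate that
    steers the vanishing point toward the image centre x_mid."""
    error = x_vp - x_mid          # lateral image error w.r.t. the bisector
    omega = -k_omega * error      # proportional correction of heading
    return v_forward, omega

v, omega = bisector_control(x_vp=10.0, x_mid=0.0)
```

When the robot is aligned with the bisector the vanishing point sits at the image centre, the error vanishes, and the robot proceeds straight ahead.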
To achieve a fully autonomous robot, several individual sensor-based tasks can be composed to accomplish a complete autonomous mission. Such tasks include distance detection with ultrasonic sensors to avoid collisions, or stereo vision acquisition and processing for environment mapping. In this context, the sensor-based task that is the subject of this paper is real-time following of a moving target, with an omnidirectional vision system supplying data to a visual-servo-controlled mobile robot. This paper first gives background on omnidirectional vision systems, object tracking and visual servo control. Then the implemented solution for this specific task is presented, and the experimental results are shown and discussed.
Advances in Robot Navigation, 2011
HAL (Le Centre pour la Communication Scientifique Directe), 2018
Navigation tasks are often subject to several constraints that can be related to the sensors (visibility) or come from the environment (obstacles). In this paper, we propose a framework for autonomous omnidirectional wheeled robots that takes into account both collision and occlusion risk during sensor-based navigation. The task consists in driving the robot towards a visual target in the presence of static and moving obstacles. The target is acquired by fixed, limited-field-of-view on-board sensors, while the surrounding obstacles are detected by lidar scanners. To perform the task, the robot has not only to keep the target in view while avoiding the obstacles, but also to predict its location in the case of occlusion. The effectiveness of our approach is validated through several experiments.
2001
This paper considers the problem of vision-based control of a nonholonomic mobile robot. We describe the design and implementation of real-time estimation and control algorithms on a car-like robot platform using a single omnidirectional camera as a sensor, without explicit use of odometry. We provide experimental results for each of these vision-based control objectives. The algorithms are packaged as control modes and can be combined hierarchically to perform higher-level tasks involving multiple robots.
This paper presents a new image-based visual servoing (IBVS) control scheme for omnidirectional wheeled mobile robots with four Swedish wheels. The contribution is a scheme that considers the overall dynamics of the system, combining its mechanical and electrical dynamics. The actuators are direct current (DC) motors, which implies that the system input signals are the armature voltages applied to the DC motors. In our control scheme, a PD control law and an eye-to-hand camera configuration are used to compute the armature voltages and to measure the system states, respectively. The stability proof is carried out via the Lyapunov direct method and LaSalle's invariance principle. Simulations and experiments were performed to validate the theoretical proposal and to show the good behaviour of the posture errors. Keywords: IBVS, posture control, omnidirectional wheeled mobile robot, actuator dynamics, Lyapunov direct method.
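The PD law at the core of such a scheme maps a posture error (measured by the eye-to-hand camera) to one command per actuator. The sketch below is a minimal illustration under stated assumptions: the gain matrices, the state ordering `(x, y, theta)`, and the direct error-to-command mapping are illustrative choices, not the authors' actual controller or voltage model.

```python
import numpy as np

# Minimal PD posture-control sketch: u = Kp*e + Kd*de/dt, with the posture
# error e measured by an external (eye-to-hand) camera. Gains and the mapping
# from u to armature voltages are illustrative assumptions.

def pd_control(error, error_prev, dt, Kp, Kd):
    """Return one command per actuator from the current and previous error."""
    de = (error - error_prev) / dt   # finite-difference error derivative
    return Kp @ error + Kd @ de

Kp = np.diag([2.0, 2.0, 1.0])        # gains on the (x, y, theta) posture error
Kd = np.diag([0.5, 0.5, 0.2])
e  = np.array([0.1, -0.05, 0.02])    # current posture error
u  = pd_control(e, np.zeros(3), dt=0.01, Kp=Kp, Kd=Kd)
```

In the dynamic setting of the abstract, `u` would then be mapped through the wheel Jacobian and motor model to the DC-motor armature voltages.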
IEEE Transactions on Robotics and Automation, 1999
Visual servoing, i.e., the use of the vision sensor in feedback control, has recently gained increased attention from researchers in both the vision and control communities. A fair amount of work has been done on applications in autonomous driving, manipulation, mobile robot navigation and surveillance. However, theoretical and analytical aspects of the problem have not received much attention. Furthermore, the problem of estimation from the vision measurements has been considered separately from the design of the control strategies. Instead of addressing the pose estimation and control problems separately, we attempt to characterize the types of control tasks that can be achieved using only quantities directly measurable in the image, bypassing the pose estimation phase. We consider the task of navigation for a nonholonomic ground mobile base tracking an arbitrarily shaped continuous ground curve. This tracking problem is formulated as one of controlling the shape of the curve in the image plane. We study the controllability of the system characterizing the dynamics of the image curve, and show that the shape of the image curve is controllable only up to its "linear" curvature parameters. We present stabilizing control laws for tracking piecewise analytic curves, and propose to track arbitrary curves by approximating them with piecewise "linear" curvature curves. Simulation results are given for these control schemes. Observability of the curve dynamics, using direct measurements from vision sensors as the outputs, is studied, and an Extended Kalman Filter is proposed to dynamically estimate from the actual noisy images the image quantities needed for feedback control.
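The filtering idea mentioned at the end can be sketched with a single Kalman predict/update cycle. The paper proposes an Extended Kalman Filter on the nonlinear image-curve dynamics; the linear constant-velocity model, measurement model, and noise covariances below are purely illustrative assumptions used to show the structure of one step.

```python
import numpy as np

# One linear Kalman predict/update step for estimating an image quantity and
# its rate from noisy position measurements. The paper's filter is an EKF on
# the image-curve dynamics; all models here are illustrative assumptions.

def kf_step(x, P, z, F, H, Q, R):
    """One cycle: state x, covariance P, measurement z; models F, H, Q, R."""
    x_pred = F @ x                       # predict the state forward
    P_pred = F @ P @ F.T + Q             # propagate the uncertainty
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Track a scalar image feature and its rate; only the position is measured.
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # constant-velocity model, dt = 0.1
H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, np.array([1.0]), F, H, 0.01 * np.eye(2), np.array([[0.1]]))
```

An EKF replaces `F` and `H` with the Jacobians of the nonlinear curve dynamics and measurement map, evaluated at the current estimate, but the predict/update structure is the same.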
2000
We describe a method for vision-based robot navigation with a single omnidirectional (catadioptric) camera. We show how omnidirectional images can be used to generate the representations needed for two main navigation modalities: Topological Navigation and Visual Path Following.
Robotica, 2019
SUMMARY: Navigation tasks are often subject to several constraints that can be related to the sensors (visibility) or come from the environment (obstacles). In this paper, we propose a framework for autonomous omnidirectional wheeled robots that takes into account both collision and occlusion risk, during sensor-based navigation. The task consists in driving the robot towards a visual target in the presence of static and moving obstacles. The target is acquired by fixed, limited-field-of-view on-board cameras, while the surrounding obstacles are detected by lidar scanners. To perform the task, the robot has not only to keep the target in view while avoiding the obstacles, but also to predict its location in the case of occlusion. The effectiveness of our approach is validated through several experiments.
2004
A new method to control the navigation of a mobile robot, based on visual servoing techniques, is presented. The contribution of this paper can be divided into two aspects: the first is the solution of the problem that arises in the control law when features appear or disappear from the image plane during navigation; the second is the way the reference path that must be followed by the mobile robot is provided to the control system. The visual servoing techniques used to carry out the navigation are the image-based and intrinsic-free approaches. Both are independent of calibration errors, which is very useful since it is difficult to obtain a good calibration in this kind of system. Moreover, the second technique allows us to control the camera despite variations of its intrinsic parameters. So it is possible to modify the zoom of the camera, for instance to get more details, and drive the camera to its reference position at the same time. A...