An algorithm for the kinematic calibration of a robot arm is presented. The trajectory, described in the workspace, of a given point of the last link of a robot is recorded by means of two television cameras. From the data obtained from a number of pairs of frames, if the joint position of the robot arm for each pair of frames is measured from the feedback control system, it is possible to compute the Denavit and Hartenberg parameters. A pair of television cameras is employed to obtain stereoscopic vision. The method has to be linked with a camera calibration technique so that the calibration of the robot arm and of the vision system can be obtained at the same time. The tuning of the technique is still in progress; at present it allows a precision of a little less than 1%. I INTRODUCTION Among the characteristics that define the performance of a robot, the most important can be considered to be repeatability and accuracy. Generally, both these characteristics depend on factors...
Early experimental results on an algorithm for the kinematic calibration of a robot arm are presented. The technique is based on the recordings of a pair of television cameras. A target is located on the last link of a robot arm and its trajectory is recorded by the cameras; the joint positions, obtained from the encoders and related to each pair of frames, are also recorded. By analysing a number of pairs of frames, it is possible to compute the Denavit and Hartenberg parameters. The method requires a previous camera calibration. Early experimental results on the camera model are also described.
An algorithm for the kinematic calibration of a robot arm is presented. The technique uses the images of a pair of television cameras to obtain stereoscopic vision. From a number of images taken by the cameras, and by measuring the joint parameters for each frame, it is possible to compute the other Denavit-Hartenberg parameters. It was found that, by means of a suitable number of pictures, it is possible to obtain both the robot arm calibration and the camera system calibration.
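As a minimal, hypothetical sketch of the kind of fit these abstracts describe (in Python with NumPy/SciPy, not the authors' code): with the joint angles known from the encoders for each pair of frames, the remaining Denavit-Hartenberg parameters can be estimated by least squares against the end-effector positions triangulated from the two cameras. All names, data and the link layout are placeholders.

# Minimal sketch (not the authors' code): fitting Denavit-Hartenberg parameters
# from observed end-effector positions, assuming the joint angles are known
# from the encoders for each stereo frame.
import numpy as np
from scipy.optimize import least_squares

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one Denavit-Hartenberg link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_params):
    """Position of the target point for one joint configuration.
    dh_params is a flat array (d, a, alpha) per link; theta comes from the encoders."""
    T = np.eye(4)
    for i, theta in enumerate(joint_angles):
        d, a, alpha = dh_params[3 * i: 3 * i + 3]
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

def residuals(dh_params, joint_data, observed_points):
    """Stacked position errors over all recorded stereo frames."""
    return np.concatenate([forward_kinematics(q, dh_params) - p
                           for q, p in zip(joint_data, observed_points)])

# joint_data: encoder readings per frame; observed_points: 3D points triangulated
# from the two cameras (both are placeholders here).
# initial_guess = nominal (d, a, alpha) values from the robot data sheet
# result = least_squares(residuals, initial_guess, args=(joint_data, observed_points))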
1994
Abstract This paper addresses the problem of calibrating a camera mounted on a robot arm. The objective is to estimate the camera's intrinsic and extrinsic parameters. These include the relative position and orientation of the camera with respect to the robot base, as well as the relative position and orientation of the camera with respect to a pre-defined world frame. A calibration object with a known 3D shape is used together with two known movements of the robot.
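As a hedged illustration of one standard building block for estimating camera parameters from a known 3D calibration object (not necessarily the method of the paper above), the 3x4 projection matrix can be recovered by a direct linear transform from point correspondences; intrinsics and extrinsics can then be separated, for example by an RQ decomposition of its left 3x3 block. Function and variable names below are placeholders.

# Hedged sketch: direct linear transform (DLT) estimate of the 3x4 camera
# projection matrix from known 3D calibration points and their image projections.
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Least-squares estimate of P such that x ~ P @ X for each correspondence."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)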
1999
This article is concerned with calibrating an anthropomorphic two-armed robot equipped with a stereo-camera vision system, that is, estimating the different geometric relationships involved in the model of the robot. The calibration procedure that is presented is fully vision-based: the relationships between each camera and the neck and between each arm and the neck are determined using visual measurements.
Camera calibration has always been an essential component of robot vision systems, with self-calibration nowadays being an integral and routinely applied operation within photogrammetric triangulation, especially in high-accuracy close-range measurement. With the very rapid growth in adoption of off-the-shelf digital cameras for a host of new 3D measurement applications, however, there are many situations where the geometry of the image network will not support robust recovery of camera parameters via on-the-job calibration. For this reason, stand-alone camera calibration has again emerged as an important issue, and it also remains a topic of research interest in computer vision. This paper overviews the current approaches adopted for camera calibration in computer vision, and the author's camera calibration results are summarized.
2011 15th International Conference on Advanced Robotics (ICAR), 2011
The main purpose of robot calibration is the correction of the possible errors in the robot parameters. This paper presents a method for the kinematic calibration of a parallel robot that is equipped with one camera in hand. In order to preserve the mechanical configuration of the robot, the camera is utilized to acquire incremental positions of the end effector from a spherical object that is fixed in the world reference frame. The positions of the end effector are related to incremental positions of the resolvers of the motors of the robot, and a kinematic model of the robot is used to find a new group of parameters which minimizes errors in the kinematic equations. Additionally, properties of the spherical object and intrinsic camera parameters are utilized to model the projection of the object in the image and to improve spatial measurements. Finally, the robotic system is designed to carry out tracking tasks, and the calibration of the robot is validated by means of integrating the errors of the visual controller.
Robotics and Computer-Integrated Manufacturing, 2001
One of the problems that slows the development of off-line programming is the low static and dynamic positioning accuracy of robots. Robot calibration improves the positioning accuracy and can also be used as a diagnostic tool in robot production and maintenance. This work presents techniques for modeling and performing robot calibration processes with off-line programming using a 3D vision-based measurement system. The measurement system is portable, accurate and low cost, consisting of a single CCD camera mounted on the robot tool flange to measure the robot end-effector pose relative to a world coordinate system. Radial lens distortion is included in the photogrammetric model. Scale factors and image centers are obtained with innovative techniques, making use of a multiview approach. Results show that the achieved average accuracy using a common off-the-shelf CCD camera varies from 0.2 to 0.4 mm, at distances from 600 to 1000 mm from the target, respectively, with different camera orientations. Experimentation is performed on two industrial robots to test their position accuracy improvement using the proposed calibration system: an ABB IRB-2400 and a PUMA-500. The robots were calibrated at different regions and volumes within their workspace, achieving accuracy from three to six times better when comparing errors before and after calibration, if measured locally. The proposed off-line robot calibration system is fast, accurate and easy to set up.
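A short, hedged sketch of the kind of photogrammetric camera model mentioned above (assumed polynomial radial distortion with scale factors and image center; not the paper's exact formulation, and all symbols are placeholders):

# Illustrative sketch: polynomial radial distortion applied to normalized image
# coordinates, followed by conversion to pixels with scale factors and image center.
import numpy as np

def apply_radial_distortion(x, y, k1, k2):
    """Distort normalized coordinates (x, y) with two radial coefficients."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def to_pixels(xd, yd, fx, fy, cx, cy):
    """Map distorted normalized coordinates to pixel coordinates using
    scale factors (fx, fy) and the image center (cx, cy)."""
    return fx * xd + cx, fy * yd + cy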
Sensors, 2013
This paper presents a novel method for the calibration of a parallel robot, which allows a more accurate configuration than one based on nominal parameters. As the main sensor, a single camera installed in the robot hand is used to determine the relative position of the robot with respect to a spherical object fixed in the working area of the robot. The positions of the end effector are related to the incremental positions of the resolvers of the robot motors. A kinematic model of the robot is used to find a new group of parameters which minimizes errors in the kinematic equations. Additionally, properties of the spherical object and intrinsic camera parameters are utilized to model the projection of the object in the image and thereby improve spatial measurements. Finally, several working tests, both static and tracking, are executed in order to verify how the behaviour of the robotic system improves when calibrated parameters are used instead of nominal parameters. It should be emphasized that the proposed method uses neither external nor expensive sensors, which makes it useful in teaching and research activities.
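As a simplified, hedged sketch of the idea of ranging a fixed sphere with a calibrated camera in hand (a small-angle approximation for a sphere roughly centered in the view, not the exact projection model of the two papers above; all symbols are placeholders):

# Approximate sphere centre in the camera frame from its image blob:
# the apparent radius in pixels gives the depth, the blob centre gives the direction.
import numpy as np

def sphere_position_from_image(u, v, r_px, sphere_radius, fx, fy, cx, cy):
    """Approximate 3D position of a sphere of known radius from its image."""
    depth = fx * sphere_radius / r_px          # Z from the apparent size
    x = (u - cx) / fx * depth                  # back-project the blob centre
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])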
The problem of camera system modelling is studied and an algorithm for the calibration of the vision system is presented. By means of suitable matrices a stereoscopic vision system is obtained. This algorithm is suitable for applying the vision model to robotic applications.
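A minimal sketch of how a stereoscopic system can be described "by suitable matrices" (a generic pinhole formulation, not necessarily the one used in the paper above): each camera is a 3x4 projection matrix P = K [R | t] built from intrinsics K and extrinsics (R, t). All numbers below are hypothetical.

# Compose the two projection matrices of a stereo rig from intrinsics and pose.
import numpy as np

def projection_matrix(K, R, t):
    """3x4 camera matrix from intrinsics K and pose (R, t)."""
    return K @ np.hstack([R, t.reshape(3, 1)])

# Example: two cameras sharing the same intrinsics, the right camera offset
# along the x axis by a baseline b (placeholder values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
b = 0.12
P_left  = projection_matrix(K, np.eye(3), np.zeros(3))
P_right = projection_matrix(K, np.eye(3), np.array([-b, 0.0, 0.0]))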
2013
Conventional calibration of industrial robots is carried out with the use of special measurement equipment, which is expensive and requires skilled personnel to operate. This paper presents a methodology which utilises a camera to perform measurements of the position accuracy of a robot. The experiments were conducted with a KUKA robot and a coupled measuring arm, as well as an indoor GPS (iGPS), in order to compare the results. The accuracy obtained with the camera is in agreement with the robot accuracy, and is better than the error measurement results of the measuring arm and the iGPS. Hence, it is envisaged that this methodology, together with numerical optimisation, can be used for robot calibration.
—When a robot is required to perform specific tasks defined in the world frame, there is a need for finding the coordinate transformation between the kinematic base frame of the robot and the world frame. The kinematic base frame used by the robot controller to define and evaluate the kinematics may deviate from the mechanical base frame constructed based on structural features. Besides, by using kinematic modeling rules such as the product of exponentials (POE) formula, the base frame can be arbitrarily located and does not have to be related to any feature of the mechanical structure. As a result, the kinematic base frame cannot be measured directly. This paper proposes to find the kinematic base frame by solving a hand-eye calibration problem using 3D position measurements only, which avoids the inconvenience and inaccuracy of measuring orientations and thus significantly facilitates practical operations. A closed-form solution and an iterative solution are explicitly formulated and proved effective by simulations. Comprehensive analyses of the impact of key parameters on the accuracy of the solution are also carried out, providing four guidelines to better conduct practical operations. Finally, experiments on a 7-DOF industrial robot are performed with an optical tracking system to demonstrate the superiority of the proposed method using position data only over the method using full pose data. Note to Practitioners—Robot-world calibration plays an important role in practical robotic applications where offline programming is adopted. By finding the precise transformation between the base frame of the robot and the world frame, tasks that are usually defined in the world frame can be accurately transformed into the base frame of the robot, enabling successive motion planning and programming. The base frame calibration is also useful in multirobot cooperation, where coordination of the robot bases is essential for cooperative manipulations. This paper presents a two-stage method to find the base frame of a robot in the world frame. A closed-form method serves as an initial value finder, and an iterative method refines the calibration accuracy. Comprehensive simulations and experiments are conducted to validate the effectiveness of the proposed method. Theoretical analyses show that the accuracy of the solution will improve when: 1) the movement range of the robot is enlarged; 2) the size of the robot is expanded; 3) the distance between the base frame of the robot and the measurement/world frame is reduced; and 4) the distance between the marker and the origin of the hand frame is decreased. These conclusions provide useful guidance for the practical operations.
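As a hedged illustration of the closed-form idea behind position-only calibration (the generic SVD-based Kabsch/Procrustes fit between two point sets, not necessarily the paper's exact derivation): marker positions predicted in the robot base frame from the kinematic model and measured in the world frame are related by a single rigid transform, which can be estimated in the least-squares sense.

# Rigid transform R, t minimizing || R @ p_base + t - p_world ||^2 over all points.
import numpy as np

def fit_rigid_transform(points_base, points_world):
    """SVD-based least-squares fit of a rotation and translation between point sets."""
    cb = points_base.mean(axis=0)
    cw = points_world.mean(axis=0)
    H = (points_base - cb).T @ (points_world - cw)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # enforce a proper rotation (no reflection)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cw - R @ cb
    return R, t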
Robotics and Computer-integrated Manufacturing, 2011
The paper discusses some of the results of designing and testing a calibration procedure with important implications in contemporary robotics. This procedure is also referred to in the literature as the "procedure for calibrating by matching the coordinate systems of a robot and a stationary video camera". The procedure is tested on a robotic training system, which consists of an anthropomorphic…
New Technologies - Trends, Innovations and Research, 2012
Robotics and Computer-Integrated Manufacturing, 2022
We present a robot kinematic calibration method that combines complementary calibration approaches: self-contact, planar constraints, and self-observation. We analyze the estimation of the end effector parameters, joint offsets of the manipulators, and calibration of the complete kinematic chain (DH parameters). The results are compared with ground truth measurements provided by a laser tracker. Our main findings are: (1) When applying the complementary calibration approaches in isolation, the self-contact approach yields the best and most stable results. (2) All combinations of more than one approach were always superior to using any single approach in terms of calibration errors and the observability of the estimated parameters. Combining more approaches delivers robot parameters that better generalize to the workspace parts not used for the calibration. (3) Sequential calibration, i.e. calibrating cameras first and then robot kinematics, is more effective than simultaneous calibration of all parameters. In real experiments, we employ two industrial manipulators mounted on a common base. The manipulators are equipped with force/torque sensors at their wrists, with two cameras attached to the robot base, and with special end effectors with fiducial markers. We collect a new comprehensive dataset for robot kinematic calibration and make it publicly available. The dataset and its analysis provide quantitative and qualitative insights that go beyond the specific manipulators used in this work and apply to self-contained robot kinematic calibration in general.
… and Automation, IEEE …, 2001
This paper presents an efficient, noncontact measurement technique for the automatic identification of the real kinematic parameters of an industrial robot. The technique is based on least-squares analysis and on the Hayati and Mirmirani kinematic modeling convention for closed kinematic chains. The measurement system consists of a single camera mounted on the robot's wrist. The camera measures position and orientation of a passive target in six degrees of freedom. Target position is evaluated by applying least-squares analysis on an overdetermined system of equations based on the quaternion representation of the finite rotation formula. To enhance the accuracy of the measurement, a variety of image processing functions including subpixel interpolation are applied.
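A hedged sketch of a quaternion-based least-squares absolute orientation fit (Horn's closed-form method), given as an example of the kind of overdetermined quaternion formulation described above; it is not claimed to be the paper's exact system of equations.

# Unit quaternion (w, x, y, z) of the rotation taking centred points_a onto
# centred points_b in the least-squares sense (Horn, 1987 style formulation).
import numpy as np

def quaternion_fit(points_a, points_b):
    a = points_a - points_a.mean(axis=0)
    b = points_b - points_b.mean(axis=0)
    S = a.T @ b
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    eigvals, eigvecs = np.linalg.eigh(N)
    return eigvecs[:, np.argmax(eigvals)]   # eigenvector of the largest eigenvalue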
Robotics
Robotic arms are widely used in sectors such as automotive or assembly logistics due to their flexibility and cost. Other manufacturing sectors would like to take advantage of this technology; however, higher accuracy is required for their purposes. This paper integrates a multi-camera system to achieve the requirements for milling and drilling tasks in aeronautic parts. A closed-loop framework allows the position of the robot’s end-effector to be corrected with respect to a static reference. The multi-camera system tracks the position of both elements by means of the passive targets on their surfaces. The challenge is to find an auxiliary system to measure these targets with an uncertainty that allows the desired accuracy to be achieved in high volumes (>3 m3). Firstly, in a reduced scenario, a coordinate measuring machine (CMM), a laser tracker (LT), and portable photogrammetry (PP) have been compared following the guidelines from VDI/VDE 2634-part 1. The conclusions...
In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm.
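A minimal sketch of the last step described above: after epipolar rectification, a matched pixel pair differs only in the column coordinate (the disparity), and the 3D point follows from similar triangles. Symbols (focal length f in pixels, baseline b, principal point (cx, cy)) are generic placeholders, not the paper's calibrated values.

# Triangulate a rectified stereo match into the left camera frame.
import numpy as np

def triangulate_rectified(u_left, u_right, v, f, b, cx, cy):
    """3D point from a rectified stereo correspondence."""
    disparity = u_left - u_right
    Z = f * b / disparity
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])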
2014
Robot kinematic calibration is the process of enhancing the positioning accuracy of a given manipulator and must be performed after robot manufacture and assembly or during periodical maintenance. This dissertation presents new computationally efficient and robust kinematic calibration algorithms for industrial robots that make use of partial measurements. These include a calibration method that requires the supply of Cartesian coordinates of the calibration points (3DCAL) and another calibration technique that only requires the radial measurements from the calibration points to some reference (1DCAL). Neither method requires orientation measurements nor the explicit knowledge of the whereabouts of a reference frame. Contrary to most other similar works, both methods make use of a simplified version of the original Denavit-Hartenberg (DH) kinematic model. The simplified DH model has not only proven to be robust and effective in calibrating industrial manipulators, but it is also favored from a computational efficiency viewpoint since it consists of comparatively fewer error parameters. We present a conceptual approach…
Proceedings of the 2005 Ieee International Conference on Robotics and Automation, 2005
For precise control of robots along paths which are sensed online, it is of fundamental importance to have a calibrated system. In addition to the identification of the sensor parameters (in our case the camera calibration), we focus on the adaptation of parameters that characterize the integration of the sensor into the control system or the application. The most important of such parameters are identified best when evaluating an application task, after a short pre-calibration phase. The method is demonstrated in experiments in which a robot arm follows a curved line at high speed.