ArXiv, 2018
Current robotic manipulation requires reliable methods to predict, prior to execution, whether a given grasp on an object will succeed. Different methods and metrics have been developed for this purpose, but a robust solution is still lacking. In this article we combine different metrics to evaluate real grasp executions. We use several machine learning algorithms to train a classifier that predicts the success of candidate grasps. Our experiments are performed with two different robotic grippers and a variety of objects, and grasp candidates are evaluated both in simulation and in the real world. We label grasp executions with three categories: robust, fragile, and futile. Our results show that the proposed prediction model achieves a success rate of 76%.
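A minimal sketch of the kind of pipeline this abstract describes: a multi-class classifier trained on pre-computed grasp quality metrics to predict the robust/fragile/futile label. The synthetic data, the six-metric feature count, and the random-forest choice are illustrative assumptions, not the authors' exact setup.

```python
# Predict a grasp execution label (0=futile, 1=fragile, 2=robust) from a
# vector of grasp quality metrics. Data and labels here are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((500, 6))        # 6 hypothetical quality metrics per grasp
y = rng.integers(0, 3, 500)     # placeholder outcome labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```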
Robotics and Autonomous Systems, 2019
Grasp quality metrics aim to quantify different aspects of a grasp configuration between a specific robot hand and an object. They produce a numerical value that allows grasp configurations to be ranked and optimized. Grasp quality metrics are a key part of most analytical grasp-planning approaches, and they are often used to generate ground-truth labels for the synthetically generated grasp exemplars required by learning-based approaches. Recent studies have highlighted the limitations of grasp quality metrics when used to predict the outcome of a grasp execution on a real robot. In this paper, we systematically study how well seven commonly used grasp quality metrics perform in the real world. To this end, we generated two datasets of grasp candidates in simulation, one for each of two robotic systems, and quantified the quality of these synthetic candidates with the aforementioned metrics. For validation, we developed an experimental procedure to accurately replicate the grasp candidates on the two real robotic systems and to evaluate the performance of each grasp. Given the resulting datasets, we trained different classifiers to predict grasp success using only grasp quality metrics as input. Our results show that combinations of quality metrics can achieve up to 85% classification accuracy on real grasps.
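A sketch of the classifier comparison described above, assuming the seven metric values per grasp are already collected into a feature matrix; the synthetic labels and the particular classifiers are placeholders, not the study's actual models.

```python
# Compare classifiers that predict binary grasp success from seven quality
# metrics, using 5-fold cross-validated accuracy. Data is synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.random((800, 7))                                        # seven metrics
y = (X.mean(axis=1) + 0.1 * rng.standard_normal(800) > 0.5).astype(int)

for name, clf in [("logreg", LogisticRegression(max_iter=1000)),
                  ("svm", SVC()),
                  ("gboost", GradientBoostingClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```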
Tehnicki Vjesnik-technical Gazette, 2022
Predicting the quality of an end-effector grasp during industrial robot manipulation can be an extremely complex task. As is often the case with such complex tasks, artificial intelligence methods may be applied to build a model, provided sufficient data exists. The presented research uses a publicly available dataset consisting of 992,632 measurements of position, torque, and velocity for each of the three joints of the three fingers of a simulated end-effector. The dataset is first analyzed and pre-processed to prepare it for model training; duplicate values and statistical outliers are removed. Then, a multilayer perceptron (MLP) is trained on 80% of the dataset, using a grid search to determine the best combination of MLP hyperparameters. Since the dataset contains separate measurements per joint and per finger, tests are performed to see whether a subset of the inputs suffices to regress the robustness of a given grip. Normalization of the dataset is also applied, and its effect on regression quality is tested. The results, evaluated with the coefficient of determination, show that while the best model uses all available inputs, a satisfactory result can be obtained using only velocity and torque. The results also show that normalization improves regression quality in all observed cases.
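A minimal sketch of the described setup: an MLP regressor grid-searched on an 80/20 split, with input normalization, scored by the coefficient of determination. The synthetic data, the 27-input layout, and the parameter grid are assumptions standing in for the public dataset.

```python
# Grid-search an MLP regressor for grasp robustness on an 80/20 split,
# with normalization inside the pipeline; report R^2 on held-out data.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.random((2000, 27))    # e.g. position/velocity/torque per joint/finger
y = X @ rng.random(27) + 0.05 * rng.standard_normal(2000)  # placeholder score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pipe = make_pipeline(StandardScaler(), MLPRegressor(max_iter=500, random_state=0))
grid = GridSearchCV(pipe, {"mlpregressor__hidden_layer_sizes": [(32,), (64, 32)],
                           "mlpregressor__activation": ["relu", "tanh"]}, cv=3)
grid.fit(X_tr, y_tr)
print("R^2 on held-out 20%:", grid.score(X_te, y_te))
```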
International Journal of Humanoid Robotics, 2004
Manipulation skills are a key issue for a humanoid robot. Here, we are interested in a vision-based grasping system able to deal with previously unknown objects in real time and in an intelligent manner. Starting from a number of feasible candidate grasps, we focus on the problem of predicting their reliability using knowledge acquired in previous grasping experiences. A set of visual features is defined that takes into account physical properties affecting the stability and reliability of a grasp. A humanoid robot gathers its grasping experience by repeating a large number of grasping actions on different objects. An experimental protocol is established to classify grasps according to their reliability. Two prediction/classification strategies are defined that allow the robot to predict the outcome of a grasp by analyzing its visual features alone. The results indicate that these strategies are adequate for predicting the reliability of a grasp and generalize to different objects.
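The experimental protocol above amounts to labeling each candidate grasp from repeated executions. A tiny sketch of such a labeling rule; the thresholds and class names are purely illustrative, not the protocol's actual definitions.

```python
# Derive a reliability label for a grasp from repeated execution outcomes.
def reliability_label(outcomes):
    """outcomes: list of booleans, one per repeated execution of a grasp."""
    rate = sum(outcomes) / len(outcomes)
    if rate >= 0.8:
        return "reliable"
    if rate >= 0.4:
        return "uncertain"
    return "unreliable"

print(reliability_label([True, True, True, False, True]))   # -> "reliable"
```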
Journal of Intelligent and Robotic Systems, 2017
Robot grasp quality metrics are used to evaluate, compare, and select robotic grasp configurations. Many have been proposed, based on a diversity of underlying principles and assessing different aspects of a grasp configuration. As a consequence, some of them provide similar information while others give completely different assessments. Combinations of metrics have been proposed to provide global indexes, but these attempts have shown the difficulty of merging metrics with different numerical ranges and even different physical units. These studies have raised the need for deeper knowledge in order to determine independent grasp quality metrics that enable a global assessment of a grasp, and a way to combine them. This paper presents an exhaustive study providing numerical evidence on these issues. Ten quality metrics are used to evaluate a set of grasps planned by a simulator for 7 different robot hands over a set of 126 object models. Three statistical analyses, namely variability, correlation, and sensitivity, are performed over this extensive database. The results and graphs presented make it possible to set practical thresholds for each quality metric, select independent metrics, and determine the robustness of each metric, providing a reliability indicator under pose uncertainty. The results from this paper are
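The correlation part of such an analysis can be illustrated with a short sketch: given a table of metric values over many grasps, pairwise rank correlations reveal redundant metrics. The metric names and the synthetic data are placeholders, not the paper's ten metrics.

```python
# Spot redundant grasp quality metrics via pairwise Spearman correlation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
base = rng.random(1000)
df = pd.DataFrame({
    "epsilon":  base + 0.05 * rng.standard_normal(1000),  # largest-ball metric
    "volume":   base + 0.05 * rng.standard_normal(1000),  # strongly correlated
    "isotropy": rng.random(1000),                         # independent metric
})
print(df.corr(method="spearman").round(2))
```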
This paper describes a practical approach to the robot grasping problem, composed of two parts. The first is a vision-based grasp synthesis system, implemented on a humanoid robot, able to compute a set of feasible grasps and to execute any of them. This grasping system takes gripper kinematic constraints into account and requires little computational effort.
Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), 2003
This paper deals with visually guided grasping of unmodeled objects by robots that exhibit adaptive behavior based on previous experience. Nine features are proposed to characterize three-finger grasps; they are computed from the object image and the kinematics of the hand. Real experiments on a humanoid robot with a Barrett hand provide the experimental data. This data is used by a classification strategy, based on the k-nearest-neighbour estimation rule, to predict the reliability of a grasp configuration in terms of five performance classes. Prediction results suggest the methodology is adequate.
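A minimal sketch of the described strategy, assuming the nine features are already extracted per grasp; the data and the five class labels are synthetic placeholders.

```python
# k-nearest-neighbour prediction of one of five performance classes from
# nine grasp features computed from the image and hand kinematics.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.random((600, 9))        # nine visual/kinematic features (placeholder)
y = rng.integers(0, 5, 600)     # five performance classes (placeholder)

knn = KNeighborsClassifier(n_neighbors=5)
print("5-fold accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```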
2007
In critical human/robot interactions such as teleoperation by a disabled master, teleoperation with insufficient bandwidth, or intelligent prostheses, it is highly desirable to have semi-autonomous robotic artifacts interact with the human. Semi-autonomous teleoperation, for instance, consists in having a smart slave able to guess the master's intentions and, when appropriate, take over control to perform the desired actions in a more skillful and timely way than plain point-to-point teleoperation. In this paper we investigate the possibility of building such an intelligent robotic artifact by training a machine learning system on data gathered from several human subjects while they tried to grasp objects in a teleoperation setup. The idea is to have the slave "guess" when the master wants to grasp an object with the maximum possible accuracy; at the same time, the system must be light enough to be usable in an online scenario and flexible enough to adapt to different masters, e.g., elderly and/or slow ones. The outcome of the experiment is that such a system, based on Support Vector Machines, meets all the requirements, being (a) highly accurate, (b) compact and fast, and (c) largely unaffected by subject diversity. Moreover, the system is trained on roughly 3.5 minutes of human data in the worst case.
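A sketch of such an intent detector under assumed inputs: fixed-length windows of master-side signals fed to an SVM that flags an imminent grasp. The window length, channel count, and labels are illustrative, not the study's actual features.

```python
# SVM that predicts, from a short window of master-arm/hand signals,
# whether the operator intends to grasp. Data is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
windows = rng.random((1000, 20 * 6))   # 20 time steps x 6 signal channels
labels = rng.integers(0, 2, 1000)      # 1 = "grasp intended" (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(windows, labels,
                                          test_size=0.25, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", svm.score(X_te, y_te))
```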
2018
In this work we present an empirical approach to the grasp synthesis problem for anthropomorphic robots equipped with vacuum grippers. Our approach exploits self-supervised, data-driven learning to estimate a suitable grasp for known and unknown objects. We employ a Convolutional Neural Network (CNN) that directly infers grasping points and approach angles from RGB-D images as a regression problem. In particular, we split the image into a cell grid where, for each cell, the CNN provides a grasp estimate along with a confidence score. We collected a training dataset of 4000 grasping attempts by means of an automatic trial-and-error procedure and trained the CNN end-to-end on both grasping successes and failures. We report a set of preliminary experiments performed with known (i.e., included in the training dataset) and unknown objects, showing that our system is able to effectively learn good grasping configurations.
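A minimal sketch of a cell-grid grasp regressor of the kind described, assuming a 4-channel RGB-D input and a per-cell output of grasp point, angle, and confidence; the layer sizes and grid resolution are illustrative, not the authors' network.

```python
# Small CNN mapping an RGB-D image to a grid of per-cell grasp estimates.
import torch
import torch.nn as nn

class GridGraspCNN(nn.Module):
    def __init__(self, grid=7):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),  # RGB-D input
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),
        )
        self.head = nn.Conv2d(32, 4, 1)   # per cell: (x, y, angle, confidence)

    def forward(self, x):
        return self.head(self.backbone(x))   # -> (B, 4, grid, grid)

out = GridGraspCNN()(torch.randn(1, 4, 224, 224))
print(out.shape)   # torch.Size([1, 4, 7, 7])
```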
2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013
Handling objects with a single hand without dropping them is challenging for a robot. A possible way to aid motion planning is to predict the sensory results of different motions: sequences of movements can be simulated offline, and the predicted sensory results can be used to evaluate whether the desired goal is achieved. In particular, the task in this paper is to roll a sphere between the fingertips of the dexterous hand of the humanoid robot TWENDY-ONE. First, a forward model is developed to predict the touch state resulting from the in-hand manipulation. As such a model is difficult to create analytically, it is obtained through machine learning; to collect real-world training data, a dataglove is used to control the robot in a master-slave fashion. The learned model was able to accurately predict the course of the touch state during both successful and unsuccessful in-hand manipulations. In a second step, it is shown that this simulated sequence of sensor states can be used as input to a stability assessment model, which accurately predicts whether a grasp is stable or results in dropping the object. In a final step, a more powerful grasp stability evaluator is introduced, which works for our task regardless of the sphere diameter. After grasping an object, regrasping is often necessary before the actual task can be performed, for example when grasping a pen for writing. Regrasping with one hand and without additional support is challenging: to achieve robust in-hand manipulation, the current touch state has to be taken into account, but modeling the contact state is difficult. In particular, it is challenging to design an analytical
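The two-stage idea (a learned forward model rolled out offline, then a stability model scoring the predicted sensor sequence) can be sketched as follows. The state and action dimensions, the placeholder dynamics, and the stability labels are all assumptions for illustration.

```python
# Learned forward model predicting the next touch state; offline rollout;
# then a classifier judging grasp stability from the predicted sequence.
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier

rng = np.random.default_rng(6)
state, action = rng.random((5000, 12)), rng.random((5000, 4))
next_state = state + 0.1 * action @ rng.random((4, 12))   # placeholder dynamics

forward = MLPRegressor(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
forward.fit(np.hstack([state, action]), next_state)

# Offline rollout: feed predictions back as the next input state.
s, traj = state[0], []
for a in rng.random((10, 4)):
    s = forward.predict(np.hstack([s, a]).reshape(1, -1))[0]
    traj.append(s)

stable = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
stable.fit(rng.random((200, 10 * 12)), rng.integers(0, 2, 200))  # placeholder labels
print("stable grasp?", bool(stable.predict(np.ravel(traj).reshape(1, -1))[0]))
```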
Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference, 2020
Robotic grasping of unknown objects in cluttered scenes is already well established, mainly thanks to advances in deep learning. A major drawback is the need for a large amount of real-world training data. Furthermore, these networks are not interpretable, in the sense that it is not clear why certain grasp attempts fail. To make robotic grasping traceable and to simplify the overall model, we propose dividing the complex task of finding stable grasp points into three simpler tasks. The first is to find all grasp points where the gripper can be lowered onto the table without colliding with the object. The second is to determine, for the grasp points and gripper parameters from the first step, how the object moves while the gripper is closed. The third is to predict, for all grasp points from the second step, whether the object slips out of the gripper during lifting. This decomposition makes it possible to understand, for each grasp point, why it is stable and, just as important, why others are unstable or infeasible. In this study we focus on the second task: predicting the physical interaction between gripper and object while the gripper closes. We investigate different Convolutional Neural Network (CNN) architectures and identify those that best predict the physical interactions in image space. Training data is generated in the robot and physics simulator V-REP.
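One plausible shape for the second-task predictor is an encoder-decoder CNN operating in image space: it takes the scene before the gripper closes plus a rasterized gripper pose and predicts where the object ends up afterwards. The channel layout, resolution, and mask-style output are assumptions, not the architectures actually evaluated in the study.

```python
# Encoder-decoder CNN predicting, in image space, the object's position
# after the gripper closes, from a pre-closing depth image and gripper pose.
import torch
import torch.nn as nn

class ClosePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # input: 2 channels (depth image + rasterized gripper pose)
        self.enc = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # output: 1 channel (predicted object mask after closing)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

pred = ClosePredictor()(torch.randn(1, 2, 128, 128))
print(pred.shape)   # torch.Size([1, 1, 128, 128])
```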