Papers by Andrea Cherubini

For successful physical human-robot interaction, the capability of a robot to understand its environment is imperative. More importantly, the robot should extract as much information as possible from the human operator. A reliable 3D skeleton extraction is essential for a robot to predict the intentions of the operator while he/she moves toward the robot or performs a gesture with a specific meaning. For this purpose, we have integrated a time-of-flight depth camera with a state-of-the-art 2D skeleton extraction library, namely OpenPose, to obtain 3D skeletal joint coordinates reliably. We have also developed a robust and rotation-invariant (in the coronal plane) hand gesture detector using a convolutional neural network. At run time (after having been trained), the detector does not require any pre-processing of the hand images. A complete pipeline for skeleton extraction and hand gesture recognition is developed and employed for real-time physical human-robot interaction, demonstrating the promising capability of the designed framework. This work establishes a firm basis that will be extended for the development of intelligent human intention detection in physical human-robot interaction scenarios, to efficiently recognize a variety of static as well as dynamic gestures.
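A minimal sketch of the 3D lifting step described above: back-projecting each 2D keypoint through a depth map registered to the camera image, using pinhole intrinsics. The function name and the intrinsics fx, fy, cx, cy are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project 2D skeleton keypoints (pixel coordinates) to 3D camera
    coordinates, assuming a depth map registered to the color image.
    In practice, a median over a small window around each keypoint would
    make the depth lookup more robust to sensor noise."""
    joints_3d = []
    for (u, v) in keypoints_2d:
        z = depth_map[int(v), int(u)]   # depth in metres at the keypoint
        x = (u - cx) * z / fx           # standard pinhole back-projection
        y = (v - cy) * z / fy
        joints_3d.append((x, y, z))
    return np.array(joints_3d)
```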
Deforming a cable to a desired (reachable) shape is a trivial task for a human, even without knowing the internal dynamics of the cable. This paper proposes a framework for cable shape manipulation with multiple robot manipulators. The shape is parameterized by a Fourier series. A local deformation model of the cable is estimated online from the shape parameters. Using the deformation model, a velocity control law is applied to the robot to deform the cable into the desired shape. Experiments on a dual-arm manipulator are conducted to validate the framework.
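A hedged sketch of the two ingredients named above: a truncated Fourier parameterization of the sampled cable shape, and an online rank-one (Broyden-style) update of the local deformation Jacobian feeding a pseudo-inverse velocity law. The paper's exact estimator and gains may differ; all names here are illustrative.

```python
import numpy as np

def fourier_shape_params(points, n_harmonics=4):
    """Parameterize a sampled 2D cable shape (N x 2 array of points along
    the cable) by truncated Fourier coefficients of its x(s), y(s) profiles."""
    coeffs = []
    for dim in range(2):
        c = np.fft.rfft(points[:, dim])
        coeffs.extend([c[:n_harmonics + 1].real, c[:n_harmonics + 1].imag])
    return np.concatenate(coeffs)

class LocalDeformationModel:
    """Online estimate of the Jacobian J mapping end-effector velocities to
    changes in the shape parameters, via a Broyden-style rank-one update."""
    def __init__(self, n_params, n_ctrl, alpha=0.1):
        self.J = np.zeros((n_params, n_ctrl))
        self.alpha = alpha  # update gain

    def update(self, d_shape, d_ctrl):
        denom = float(d_ctrl @ d_ctrl) + 1e-9
        self.J += self.alpha * np.outer(d_shape - self.J @ d_ctrl,
                                        d_ctrl) / denom

    def control(self, shape_err, gain=1.0):
        # Pseudo-inverse velocity law driving the shape error to zero
        return -gain * np.linalg.pinv(self.J) @ shape_err
```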

This paper represents a step towards vision-based manipulation of plastic materials. Manipulating deformable objects is made challenging by: 1) the absence of a model for the object deformation, 2) the inherent difficulty of visual tracking of deformable objects, 3) the difficulty in defining a visual error, and 4) the difficulty in generating control inputs to minimise the visual error. We propose a novel representation of the task of manipulating deformable objects. In this preliminary case study, the shaping of kinetic sand, we assume a finite set of actions: pushing, tapping and incising. We consider that these action types affect only a subset of the state, i.e., their effect is local rather than global (specialized actions). We report the results of a user study to validate these hypotheses and release the recorded dataset. The actions (pushing, tapping and incising) are clearly adopted during the task, although 1) participants also use mixed actions and 2) the effects of actions can marginally affect the entire state, requiring a relaxation of our specialized-actions hypothesis. Moreover, we compute task errors and corresponding control inputs (in the image space) using image processing. Finally, we show how machine learning can be applied, on the data extracted from the user study, to infer the mapping from error to action.
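A minimal sketch of the final step, assuming the task error is a difference between current and desired top-down images of the sand. The features and the simple classifier below only stand in for whatever descriptors and learner the study actually used; everything here is an illustrative assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

ACTIONS = ["push", "tap", "incise"]

def error_features(current_img, desired_img):
    """Summarize the image-space task error with coarse statistics
    (illustrative features, not the paper's exact descriptors)."""
    diff = desired_img.astype(float) - current_img.astype(float)
    return np.array([diff.mean(), diff.std(),
                     np.abs(diff).max(), (np.abs(diff) > 20).mean()])

# X: error features extracted from the user-study frames
# y: the action label (0, 1 or 2) the participant chose in each frame
clf = KNeighborsClassifier(n_neighbors=5)
# clf.fit(X, y)  # train on the recorded dataset
# action = ACTIONS[clf.predict([error_features(cur, des)])[0]]
```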
To make production lines more flexible, dual-arm robots are good candidates for deployment in autonomous assembly units. In this paper, we propose a sparse kinematic control strategy that minimizes the number of joints actuated for a coordinated task between two arms. The control strategy is based on a hierarchical sparse QP architecture. We present experimental results that highlight the capability of this architecture to produce sparser motions (for an assembly task) than those obtained with standard controllers.
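The paper's controller is a hierarchical sparse QP; the single-level sketch below only illustrates the core idea of sparsity-inducing optimization: minimizing the L1 norm of the joint velocities subject to the task constraint favors solutions that actuate few joints. The split qdot = u − v with u, v ≥ 0 is the standard LP reformulation; all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_joint_velocities(J, x_dot, qdot_max):
    """Solve  min ||qdot||_1  s.t.  J qdot = x_dot,  |qdot| <= qdot_max.
    The L1 objective promotes joint velocity vectors with few nonzero
    entries, i.e. motions that actuate few joints."""
    n = J.shape[1]
    c = np.ones(2 * n)             # sum(u) + sum(v) = ||qdot||_1
    A_eq = np.hstack([J, -J])      # J (u - v) = x_dot
    bounds = [(0, qdot_max)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=x_dot, bounds=bounds)
    u, v = res.x[:n], res.x[n:]
    return u - v
```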

• We provide a haptic zero-shot learning algorithm for object recognition.
• It enables recognition of objects experienced/touched for the first time.
• We develop and test our algorithm on an open-source haptic database.
• We implement it for haptic recognition of daily-life objects by our robot hand.
• Our algorithm enabled our robot to recognize eight out of ten novel objects.

Abstract: Object recognition is essential to enable robots to interact with their environment. Robots should be capable, on the one hand, of recognizing previously experienced objects, and on the other, of using the experienced objects to learn novel objects, i.e. objects for which training data are not available. Recognition of such novel objects can be achieved with Zero-Shot Learning (ZSL). In this work, we show the potential of ZSL for haptic recognition. First, we design a zero-shot haptic recognition algorithm and, using the extensive PHAC-2 database (Chu et al., 2015) as well as our own, we adapt, analyze and optimize the ZSL for the challenges and constraints characteristic of haptic recognition. Finally, we apply the optimized algorithm for haptic recognition of daily-life objects using an anthropomorphic robot hand. Our algorithm enables the robot to recognize eight of the ten novel objects handed to it.
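A sketch of one classic zero-shot formulation, direct attribute prediction: train one binary classifier per semantic attribute on haptic features of known objects, then label a novel object by matching its predicted attribute signature against attribute vectors given a priori for the novel classes. The paper's algorithm may differ; the attribute names and structure below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class HapticZeroShot:
    """Direct-attribute-prediction ZSL sketch for haptic recognition."""
    def __init__(self, n_attributes):
        # one binary classifier per attribute, e.g. 'hard', 'rough', 'hollow'
        self.clfs = [LogisticRegression(max_iter=1000)
                     for _ in range(n_attributes)]

    def fit(self, X, A):
        # X: haptic features of training objects; A: binary attribute matrix
        for k, clf in enumerate(self.clfs):
            clf.fit(X, A[:, k])

    def predict(self, x, novel_attr, novel_names):
        # predicted probability of each attribute for the touched object
        p = np.array([clf.predict_proba([x])[0, 1] for clf in self.clfs])
        # log-likelihood of each novel class's known attribute vector
        scores = novel_attr @ np.log(p + 1e-9) \
               + (1 - novel_attr) @ np.log(1 - p + 1e-9)
        return novel_names[int(np.argmax(scores))]
```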
This paper presents a tentacle-based obstacle avoidance scheme for omnidirectional mobile robots that must satisfy visibility constraints during navigation. The navigation task consists of driving the robot towards a visual target in the presence of (static or moving) environment obstacles. The target is acquired by an on-board camera, while the obstacles surrounding the robot are sensed by laser range scanners. To perform such a task, the robot must avoid the obstacles while maintaining the target in its field of view. The approach is validated in both simulated and real experiments.
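A sketch of the tentacle idea: candidate arcs are scored against the laser points, arcs that pass too close to an obstacle or would push the target out of the field of view are rejected, and the best remaining arc is followed. Parameter names and the scoring are illustrative, not the paper's exact scheme.

```python
import numpy as np

def choose_tentacle(obstacles, target_bearing, curvatures,
                    horizon=2.0, clearance=0.3, fov=np.pi / 3):
    """obstacles: (M, 2) laser points in the robot frame;
    target_bearing: angle to the visual target; curvatures: candidate arcs."""
    best_k, best_cost = None, np.inf
    s = np.linspace(0.05, horizon, 40)              # arc-length samples
    for k in curvatures:
        if abs(k) < 1e-6:
            xs, ys = s, np.zeros_like(s)            # straight tentacle
        else:
            xs, ys = np.sin(k * s) / k, (1 - np.cos(k * s)) / k
        pts = np.stack([xs, ys], axis=1)
        # reject tentacles passing too close to any sensed obstacle
        d = np.linalg.norm(pts[:, None, :] - obstacles[None, :, :], axis=2)
        if d.min() < clearance:
            continue
        end_heading = k * horizon                   # heading at the arc end
        if abs(target_bearing - end_heading) > fov:
            continue                                # target would leave the FOV
        cost = abs(target_bearing - end_heading)    # prefer aiming at target
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k                                   # None if all arcs blocked
```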

Enabling a humanoid robot to drive a car requires the development of a set of basic primitive actions. These include: walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal and steering), and moving with the whole body to ingress/egress the car. In this paper, we present a sensor-based reactive framework for realizing the central part of the complete task: driving the car along unknown roads. The proposed framework provides three driving strategies, by which a human supervisor can teleoperate the car, ask for assistive driving, or give the robot full control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel, to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements to estimate the car linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal references are sent to the robot controller to achieve the driving task with the humanoid. We present results from a driving experiment with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.
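A minimal sketch of the velocity estimator described above: a scalar Kalman filter in which the accelerometer drives the prediction of the car's linear velocity and the optical-flow-derived speed is the correcting measurement. Noise values and class name are illustrative assumptions.

```python
class CarVelocityKF:
    """1-D Kalman filter fusing accelerometer and optical-flow speed."""
    def __init__(self, q=0.05, r=0.5):
        self.v = 0.0    # velocity estimate [m/s]
        self.p = 1.0    # estimate variance
        self.q = q      # process noise (accelerometer integration drift)
        self.r = r      # measurement noise (optical-flow speed)

    def predict(self, accel, dt):
        self.v += accel * dt          # integrate measured acceleration
        self.p += self.q * dt

    def update(self, flow_speed):
        k = self.p / (self.p + self.r)        # Kalman gain
        self.v += k * (flow_speed - self.v)   # correct with flow measurement
        self.p *= (1.0 - k)
        return self.v
```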

In this review, we give a brief outline of robot-mediated gait training for stroke patients, as an important emerging field in rehabilitation. Technological innovations are allowing rehabilitation to move toward more integrated processes, with improved efficiency and fewer long-term impairments. In particular, robot-mediated neurorehabilitation is a rapidly advancing field, which uses robotic systems to define new methods for treating neurological injuries, especially stroke. The use of robots in gait training can enhance rehabilitation, but they need to be used according to well-defined neuroscientific principles. The field of robot-mediated neurorehabilitation brings challenges to both bioengineering and clinical practice. This article reviews the state of the art (including commercially available systems) and the perspectives of robotics in post-stroke rehabilitation for walking recovery. A critical revision, including the problems at stake regarding clinical use of robots, is also presented.

Robots operating in household environments need to recognize a variety of objects. Several touch-based object recognition systems have been proposed in the last few years [2]. They map haptic data to object classes using machine learning techniques, and then use the learned mapping to recognize one of the previously encountered objects. The accuracy of these methods depends on the number of training samples available for each object class. On the other hand, haptic data collection is often system (robot) specific and labour intensive. One way to cope with this problem is to use a knowledge-transfer-based system that can exploit object relationships to share learned models between objects. However, while knowledge-based systems, such as zero-shot learning [6], have regularly been proposed for visual object recognition, a similar system is not available for haptic recognition.

This paper presents the FLEXBOT project, a joint LIRMM-QUT effort to develop (in the near future) novel methodologies for robotic manipulation of flexible and deformable objects. To tackle this problem, and based on our past experiences, we propose to merge vision and force for manipulation control, and to rely on Model Predictive Control (MPC) and constrained optimization to program the object's future shape.

Index Terms—Control for object manipulation, learning from human demonstration, sensor fusion based on tactile, force and vision feedback.

I. CONTEXT

This abstract does not present experimental results, but aims at giving some preliminary hints on how flexible robot manipulation should be realized in the near future, particularly in the context of the FLEXBOT project, jointly submitted to the PHC FASIC Program by LIRMM and QUT researchers. The objective of FLEXBOT is to solve one of the most challenging open problems in robotics: developing novel methodologies enabling robotic manipulation of flexible and deformable objects. The motivation comes from numerous applications, including the domestic, industrial, and medical examples shown in Fig. 1.

Many difficulties emerge when dealing with flexible manipulation. In the first place, the object deformation model (involving elasticity or plasticity) must be known, to derive the robot control inputs required for reconfiguring its shape. Ideally, this model should be derived online, while manipulating, with a simultaneous estimation and control approach, as commonly done in active perception and visual servoing. Hence perception, particularly from vision and force, will be indispensable. This leads to a second major difficulty: deformable object visual tracking. In fact, most current visual object tracking algorithms rely on rigidity, an assumption that is not valid here. A third challenge will consist in generating control inputs that comply with the shape the object is expected to have in the near future. In the next section, we provide a brief survey of the state of the art on flexible object manipulation. We then conclude by proposing some novel methodologies for addressing the problem.
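A toy sketch of how MPC and constrained optimization could program an object's future shape, assuming a locally linear deformation model s_{t+1} = s_t + J u_t relating control inputs to shape parameters. All symbols and the solver choice are illustrative assumptions, not the project's design.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_shape_control(J, s0, s_ref, horizon=5, u_max=0.02, lam=1e-2):
    """Receding-horizon sketch: find the input sequence minimizing the
    predicted shape error over the horizon, under input bounds, and
    apply only the first input."""
    n_u = J.shape[1]

    def cost(u_flat):
        u = u_flat.reshape(horizon, n_u)
        s, c = s0.copy(), 0.0
        for t in range(horizon):
            s = s + J @ u[t]                       # predicted shape rollout
            c += np.sum((s - s_ref) ** 2) + lam * np.sum(u[t] ** 2)
        return c

    bounds = [(-u_max, u_max)] * (horizon * n_u)
    res = minimize(cost, np.zeros(horizon * n_u), bounds=bounds)
    return res.x.reshape(horizon, n_u)[0]          # first input only
```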
This paper summarizes recent (2011-2016) research carried out within the LIRMM IDH group to address the development of collaborative robots for industrial applications. The presented works have been carried out within the framework of various projects, involving major European industrial actors such as PSA Peugeot Citroën, Airbus, and the Tecnalia Foundation.

In this paper, we address the problem of humanoid navigation in a priori unknown environments cluttered by obstacles. The robot's task is to move within the environment without colliding with obstacles, using only ordinary on-board sensors, like monocular cameras and encoders. The proposed approach relies on: (i) optical flow information, to construct a local representation of the environment's obstacles and free space; (ii) visual servoing techniques, to achieve safe motion within the environment while regulating appropriate visual features and the robot's internal configuration. In the case of navigation in a straight corridor, it can be formally proved that the robot converges to the corridor bisector. With respect to previous works, the algorithm proposed here does not make use of any prior information about the environment, and exploits the humanoid's omnidirectional walking capability to achieve safe navigation in narrow passages. The approach is validated through simulations and experiments with NAO.
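The corridor-bisector behaviour can be conveyed with the classic flow-balance strategy: steer away from the image half with larger optical flow, since larger flow indicates closer surfaces. The paper's visual servoing scheme is more elaborate; the function below is only a hedged illustration.

```python
import numpy as np

def flow_balance_steering(flow_left, flow_right, gain=0.5):
    """flow_left / flow_right: (..., 2) optical-flow fields of the left and
    right image halves. Returns a normalized steering command: positive
    turns left (right wall closer), negative turns right. With symmetric
    walls this drives the robot toward the corridor bisector."""
    f_l = np.mean(np.linalg.norm(flow_left, axis=-1))
    f_r = np.mean(np.linalg.norm(flow_right, axis=-1))
    return gain * (f_r - f_l) / (f_r + f_l + 1e-9)
```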

Steerable wheeled mobile robots are able to perform arbitrary three-dimensional (3-D) planar trajectories, but only after initializing the steer joint vector to the proper values. These robots employ fully steerable conventional wheels. Hence, they have a higher load-carrying capacity than their holonomic counterparts, and as such are preferable for industrial applications. Industrial setups nowadays are being prepared for the emerging field of human-robot collaboration/cooperation. This field is highly dynamic, due to fast-moving human workers sharing the operation space. This imposes the need for human-safe trajectory generators, which can lead to frequent halts in motion, replanning, and sudden, discontinuous changes in the position of the robot's instantaneous center of rotation (ICR). Indeed, this requires steer joint reconfiguration to the newly computed trajectory. This issue is almost ignored in the literature and motivates this work. The authors propose a new ICR-based kinematic controller that is capable of handling discontinuity in commanded velocity while respecting the maximum joint performance limits. This is done by formulating a quadratic optimization problem with linear constraints in the 2-D ICR space. The controller is also robust against representation and kinematic singularities. It has been tested successfully on the Neobotix-MPO700 industrial mobile robot.

Index Terms—Kinematic control, nonholonomic omnidirectional wheeled mobile robots, pseudo-omni mobile robots, steerable wheeled mobile robots, SWMR.
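The paper formulates a QP with linear constraints directly in the 2-D ICR space; the sketch below uses a generic constrained solver merely to convey the idea: move the ICR toward the commanded one while bounding how fast each steer angle (the direction from wheel to ICR) may change. Geometry, limits and names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def icr_step(icr_now, icr_des, wheel_pos, beta_dot_max, dt):
    """One control step: icr_now/icr_des are 2-D ICR positions,
    wheel_pos is (W, 2) steer-axle positions in the robot frame."""
    def steer_angles(icr):
        d = icr[None, :] - wheel_pos          # wheel-to-ICR vectors
        return np.arctan2(d[:, 1], d[:, 0])   # steer joint targets

    b0 = steer_angles(icr_now)

    def cost(icr):
        return np.sum((icr - icr_des) ** 2)   # track the commanded ICR

    def wrapped_diff(a, b):
        # angle difference wrapped to (-pi, pi]
        return np.angle(np.exp(1j * (a - b)))

    # each steer joint may rotate at most beta_dot_max * dt this step
    cons = [{"type": "ineq",
             "fun": lambda icr, i=i: beta_dot_max * dt
                    - abs(wrapped_diff(steer_angles(icr)[i], b0[i]))}
            for i in range(len(wheel_pos))]
    res = minimize(cost, icr_now, constraints=cons)
    return res.x   # ICR to command at the next step
```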
This article introduces a new human-machine interface for individuals with tetraplegia. We investigated the feasibility of piloting an assistive device by processing supra-lesional muscle responses online. The ability to voluntarily contract a set of selected muscles was assessed in five spinal-cord-injured subjects through electromyographic (EMG) analysis. Two subjects were also asked to use the EMG interface to control palmar and lateral grasping of a robot hand. The use of different muscles and control modalities was also assessed. These preliminary results open the way to new interface solutions for high-level spinal-cord-injured patients.
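A toy sketch of one plausible decoding scheme for such an interface: rectify-and-smooth EMG envelopes, with each supra-lesional muscle mapped to one grasp type through a calibrated threshold. The channel-to-grasp mapping, thresholds and function names are illustrative assumptions, not the article's protocol.

```python
import numpy as np

def emg_envelope(raw, fs, win_s=0.2):
    """Rectified, moving-average-smoothed EMG envelope (window win_s)."""
    win = max(1, int(win_s * fs))
    return np.convolve(np.abs(raw), np.ones(win) / win, mode="same")

def grasp_command(env_a, env_b, thr_a, thr_b):
    """Two muscle channels, each triggering one grasp type of the robot
    hand when its envelope crosses a per-subject calibrated threshold."""
    if env_a[-1] > thr_a:
        return "palmar_grasp"
    if env_b[-1] > thr_b:
        return "lateral_grasp"
    return "idle"
```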

Most studies and reviews on robots for neurorehabilitation focus on their effectiveness. These studies often report inconsistent results. This, among many other reasons, limits the credit given to these robots by therapists and patients. Further, neurorehabilitation is often still based on therapists' expertise, with competition among different schools of thought, generating substantial uncertainty about what exactly a neurorehabilitation robot should do. Little attention has been given to ethics. This review adopts a new approach, inspired by Asimov's three laws of robotics and based on the most recent studies in neurorobotics, to propose new guidelines for designing and using robots for neurorehabilitation. We propose three laws of neurorobotics, based on the ethical need for safe and effective robots, the redefinition of their role as therapists' helpers, and the need for clear and transparent human-machine interfaces. These laws may allow engineers and clinicians to work closely together on a new generation of neurorobots.