2008, Journal of Intelligent and Robotic Systems
Robots that are capable of learning new tasks from humans need the ability to transform gathered abstract task knowledge into their own representation and dimensionality. New task knowledge that has been collected, e.g., with Programming by Demonstration approaches by observing a human does not a priori contain any robot-specific knowledge or actions, and is defined in the workspace of the human demonstrator. This article presents a new approach for mapping abstract human-centered task knowledge to a robot execution system based on the target system's properties. To this end, the required background knowledge about the target system is examined and defined explicitly.
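A minimal sketch of the mapping idea described in this abstract, under the assumption that target-system knowledge can be written down as an explicit capability table; all names (AbstractStep, CAPABILITIES, map_step) are illustrative and not the authors' implementation:

    from dataclasses import dataclass

    @dataclass
    class AbstractStep:
        action: str   # e.g. "grasp", "transport", "release"
        target: str   # object the human acted on
        params: dict  # workspace-level parameters from the demonstration

    # Explicit background knowledge about the target robot: which abstract
    # actions it can realize and with which executable primitive.
    CAPABILITIES = {
        "grasp":     "parallel_gripper_close",
        "transport": "cartesian_move",
        "release":   "parallel_gripper_open",
    }

    def map_step(step: AbstractStep) -> dict:
        """Translate one abstract step into a robot-executable command,
        or flag it if the target system lacks the required capability."""
        primitive = CAPABILITIES.get(step.action)
        if primitive is None:
            return {"status": "unsupported", "step": step.action}
        return {"status": "ok", "primitive": primitive,
                "target": step.target, "params": step.params}

    demo = [AbstractStep("grasp", "cup", {}),
            AbstractStep("transport", "cup", {"goal": "table"}),
            AbstractStep("release", "cup", {})]
    print([map_step(s) for s in demo])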
2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2007
Robots with the capability of learning new tasks from humans need the ability to transform gathered abstract task knowledge into their own representation and dimensionality. New task knowledge that has been acquired, e.g., with Programming by Demonstration approaches by observing a human does not a priori contain any robot-specific knowledge or actions, and is defined in the workspace and action space of the human demonstrator.
Annual Reviews in Control, 1996
Programming by Demonstration (PbD) is an intuitive method to program a robot. The user, acting as a teacher or programmer, shows how a particular task should be carried out. The demonstration is monitored using an interface device that allows the measurement and recording of both the applied commands and the data simultaneously perceived by the robot's sensors. This paper identifies the kind of knowledge that the robot can actually acquire from the human user through demonstrations and the requirements that must be met in order to be able to interpret what has been demonstrated. Finally, it presents and experimentally evaluates an approach to integrated acquisition, evaluation, tuning, and execution of elementary skills and task-level programs for robots based on human demonstrations.
Robot Programming by Demonstration (RbD), also referred to as Learning by Imitation, explores user-friendly means of teaching a robot new skills. Recent advances in RbD have identified a number of key issues for ensuring a generic approach to the transfer of skills across various agents and contexts. This thesis focuses on the two generic questions of what-to-imitate and how-to-imitate, which are respectively concerned with the problem of extracting the essential features of a task and determining a way to reproduce these essential features in different situations. The perspective adopted in this work is that a skill can be described efficiently at a trajectory level and that the robot may infer the important characteristics of the skill by observing multiple demonstrations of it, assuming that the important characteristics are invariant across the demonstrations.
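A minimal sketch of the "what-to-imitate" idea sketched above, under the simplifying assumption that demonstrations are already time-aligned: dimensions with low variance across demonstrations are treated as the invariant (important) features and weighted accordingly when scoring a reproduction. The function names and the inverse-variance weighting are illustrative, not the thesis' actual model:

    import numpy as np

    def importance_weights(demos: np.ndarray, eps: float = 1e-6) -> np.ndarray:
        """demos: array of shape (n_demos, n_timesteps, n_dims).
        Returns per-timestep, per-dimension weights ~ 1 / variance."""
        var = demos.var(axis=0)  # variance across demonstrations
        return 1.0 / (var + eps)

    def reproduction_cost(candidate: np.ndarray, demos: np.ndarray) -> float:
        """Weighted deviation of a candidate trajectory from the mean demo."""
        mean_traj = demos.mean(axis=0)
        w = importance_weights(demos)
        return float(np.sum(w * (candidate - mean_traj) ** 2))

    # Three noisy demonstrations of a 1-D reaching motion
    t = np.linspace(0, 1, 50)
    demos = np.stack([np.sin(t) + 0.01 * np.random.randn(50)
                      for _ in range(3)])[..., None]
    print(reproduction_cost(demos.mean(axis=0), demos))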
2013 IEEE Third Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL), 2013
This paper presents an algorithm to bootstrap shared understanding in a human-robot interaction scenario where the user teaches a robot a new task using teaching instructions yet unknown to it. In such cases, the robot needs to estimate simultaneously what the task is and the associated meaning of instructions received from the user. For this work, we consider a scenario where a human teacher uses initially unknown spoken words, whose associated unknown meaning is either feedback (good/bad) or guidance (go left, go right, ...). We present computational results, within an inverse reinforcement learning framework, showing that a) it is possible to learn the meaning of unknown and noisy teaching instructions, as well as a new task, at the same time, b) it is possible to reuse the acquired knowledge about instructions for learning new tasks, and c) even if the robot initially knows some of the instructions' meanings, the use of extra unknown teaching instructions improves learning efficiency.
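A toy sketch of the joint-estimation idea in this abstract (not the paper's algorithm): each hypothesis pairs a candidate goal with a word-to-meaning assignment and is scored by how consistently the observed words would have been uttered by a teacher judging the robot's moves against that goal. The data and names below are illustrative assumptions:

    from itertools import product

    candidate_goals = [0, 4, 9]    # possible target cells on a line
    meanings = ["good", "bad"]     # possible meanings of unknown words
    # (position before move, position after move, word heard from the teacher)
    log = [(5, 4, "blip"), (4, 3, "blip"), (3, 4, "blop"), (4, 3, "blip")]
    words = sorted({w for _, _, w in log})

    def consistent(goal, before, after, meaning):
        """A 'good' word is consistent when the robot moved closer to the goal."""
        moved_closer = abs(after - goal) < abs(before - goal)
        return moved_closer if meaning == "good" else not moved_closer

    best = max(
        ((goal, dict(zip(words, assign)))
         for goal in candidate_goals
         for assign in product(meanings, repeat=len(words))),
        key=lambda h: sum(consistent(h[0], b, a, h[1][w]) for b, a, w in log),
    )
    print("estimated goal:", best[0], "| estimated meanings:", best[1])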
2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, 2012
Humanoid robots operating in the real world must exhibit very complex behaviors, such as object manipulation or interaction with people. Such capabilities pose the problem of being able to reason about a huge number of different objects, places, and actions to carry out, each one relevant for achieving the robot's goals. This article proposes a functional representation of objects, places, and actions described in terms of affordances and capabilities. Everyday problems can be dealt with efficiently by decomposing the reasoning process into two phases, namely problem awareness (which is the focus of this article) and action selection.
The 9th International …, 2006
Robots that are designed to act in human-centered environments call for a flexible and adaptive representation of task knowledge. This results, on the one hand, from the continuously changing and hardly predictable state of an environment populated with humans and robots. On the other hand, the task knowledge description of a robot that cooperates with humans has to be adaptable and extendable. This paper presents a task knowledge representation for service robots called Flexible Programs (FPs) and the environment for executing FPs. Flexible Programs can be created manually or by using the results of machine learning approaches such as Programming by Demonstration. It is possible to change, adapt, and extend this task knowledge at runtime.
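A minimal sketch of a runtime-extensible, tree-structured task representation in the spirit of the Flexible Programs described above; the class and method names are hypothetical, not the authors' API:

    from typing import Callable, List, Optional

    class TaskNode:
        def __init__(self, name: str, action: Optional[Callable[[], bool]] = None):
            self.name = name
            self.action = action  # leaf behaviour, if any
            self.children: List["TaskNode"] = []

        def add(self, child: "TaskNode") -> "TaskNode":
            """Extend the task knowledge at runtime."""
            self.children.append(child)
            return child

        def execute(self) -> bool:
            if self.action is not None and not self.action():
                return False
            return all(c.execute() for c in self.children)

    serve = TaskNode("serve_drink")
    serve.add(TaskNode("grasp_cup", lambda: True))
    serve.add(TaskNode("pour", lambda: True))
    serve.add(TaskNode("hand_over", lambda: True))  # added later, e.g. from PbD
    print("task succeeded:", serve.execute())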
In this study, we present a framework that infers human activities from observations using semantic representations. The proposed framework can be utilized to address the difficult and challenging problem of transferring tasks and skills to humanoid robots. We propose a method that allows robots to obtain and determine a higher-level understanding of a demonstrator's behavior via semantic representations. This abstraction from observations captures the "essence" of the activity, thereby indicating which aspect of the demonstrator's actions should be executed in order to accomplish the required activity. Thus, a meaningful semantic description is obtained in terms of human motions and object properties. In addition, we validated the semantic rules obtained in different conditions, i.e., three different and complex kitchen activities: 1) making a pancake; 2) making a sandwich; and 3) setting the table. We present quantitative and qualitative results, which demonstrate that without any further training, our system can deal with time restrictions, different execution styles of the same task by several participants, and different labeling strategies. This means that the rules obtained from one scenario are still valid even for new situations, which demonstrates that the inferred representations do not depend on the task performed. The results show that our system correctly recognized human behaviors in real time in around 87.44% of cases, which was even better than a random participant recognizing the behaviors of another human (about 76.68%). In particular, the semantic rules acquired can be used to effectively improve the dynamic growth of the ontology-based knowledge representation. Hence, this method can be used flexibly across different demonstrations and constraints to infer and achieve a similar goal to that observed. Furthermore, the inference capability introduced in this study was integrated into a joint space control loop for a humanoid robot, an iCub, for achieving similar goals to the human demonstrator online.
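An illustrative sketch (not the paper's rule set) of inferring an activity label from low-level motion and object properties via semantic rules of the form "IF motion AND object relation THEN activity"; the predicates and labels below are assumptions chosen for the example:

    from typing import Optional

    def infer_activity(hand_moving: bool, object_in_hand: bool,
                       object_acted_on: Optional[str]) -> str:
        if not hand_moving and not object_in_hand:
            return "idle"
        if hand_moving and not object_in_hand:
            return "reach"
        if hand_moving and object_in_hand and object_acted_on is None:
            return "take/transport"
        if object_in_hand and object_acted_on is not None:
            return "put/use on " + object_acted_on
        return "unknown"

    observations = [
        dict(hand_moving=True,  object_in_hand=False, object_acted_on=None),
        dict(hand_moving=True,  object_in_hand=True,  object_acted_on=None),
        dict(hand_moving=True,  object_in_hand=True,  object_acted_on="pan"),
    ]
    print([infer_activity(**o) for o in observations])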
Learning from Demonstration (LfD) is addressed in this work in order to establish a novel framework for Human-Robot Collaborative (HRC) task execution. In this context, a robotic system is trained to perform various actions by observing a human demonstrator. We formulate a latent representation of observed behaviors and associate this representation with the corresponding one for target robotic behaviors. Effectively, a mapping of observed to performed actions is defined that abstracts action variations and differences between the human and robotic manipulators and facilitates execution of newly observed actions. The learned action behaviors are then employed to accomplish task execution in an HRC scenario. The experimental results concern the successful training of a robotic arm with various action behaviors and its subsequent deployment in HRC task accomplishment, and demonstrate the validity and efficacy of the proposed approach in human-robot collaborative setups.
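A minimal sketch of the latent-mapping idea, assuming PCA as a stand-in latent encoder (the abstract does not specify the actual model): observed human motions are embedded in a latent space, each embedding is associated with a known robot behaviour, and a newly observed motion is mapped to the behaviour whose latent representation is closest. All names and data are illustrative:

    import numpy as np

    class LatentActionMap:
        def __init__(self, n_components: int = 2):
            self.n_components = n_components

        def fit(self, motions: np.ndarray, behaviours: list):
            self.mean = motions.mean(axis=0)
            X = motions - self.mean
            # principal axes via SVD (a simple latent encoder)
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            self.proj = vt[: self.n_components].T
            self.latent = X @ self.proj
            self.behaviours = behaviours
            return self

        def map(self, motion: np.ndarray) -> str:
            z = (motion - self.mean) @ self.proj
            idx = int(np.argmin(np.linalg.norm(self.latent - z, axis=1)))
            return self.behaviours[idx]

    rng = np.random.default_rng(0)
    motions = rng.normal(size=(4, 30))              # flattened observed trajectories
    labels = ["pick", "place", "push", "handover"]  # corresponding robot behaviours
    mapper = LatentActionMap().fit(motions, labels)
    print(mapper.map(motions[2] + 0.01 * rng.normal(size=30)))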
Learning and deliberation are required to endow a robot with the capabilities to acquire knowledge, perform a variety of tasks and interactions, and adapt to open-ended environments. This paper explores the notion of experience-based planning domains (EBPDs) for task-level learning and planning in robotics. EBPDs rely on methods for a robot to: (i) obtain robot activity experiences from the robot's performance; (ii) conceptualize each experience into a task model called an activity schema; and (iii) exploit the learned activity schemata to make plans in similar situations. Experiences are episodic descriptions of plan-based robot activities, including environment perception, sequences of applied actions, and achieved tasks. The conceptualization approach integrates different techniques, including deductive generalization, abstraction, and feature extraction, to learn activity schemata. A high-level task planner was developed to find a solution for a similar task by following an activity schema. In this paper, we extend our previous approach by integrating goal inference capabilities. The proposed approach is illustrated in a restaurant environment where a service robot learns how to carry out complex tasks.
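A minimal sketch of conceptualizing a concrete experience into an abstract activity schema and re-instantiating it for a similar task, in the spirit of the EBPD idea above; the generalization here is plain constant-to-variable abstraction, whereas the actual approach is richer:

    def conceptualize(experience, objects):
        """Replace concrete object names with variables ?o1, ?o2, ..."""
        variables = {obj: f"?o{i+1}" for i, obj in enumerate(objects)}
        schema = [tuple(variables.get(a, a) for a in step) for step in experience]
        return schema, list(variables.values())

    def instantiate(schema, bindings):
        """Bind schema variables to new objects to obtain a concrete plan."""
        return [tuple(bindings.get(a, a) for a in step) for step in schema]

    experience = [("pick", "cup1", "counter"),
                  ("move", "table2"),
                  ("place", "cup1", "table2")]
    schema, variables = conceptualize(experience, ["cup1", "counter", "table2"])
    plan = instantiate(schema, {"?o1": "mug3", "?o2": "shelf", "?o3": "table5"})
    print(schema)
    print(plan)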
2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015
This paper presents a novel approach to robot instruction for assembly tasks. We consider that robot programming can be made more efficient, precise, and intuitive if we leverage the advantages of complementary approaches such as learning from demonstration, learning from feedback, and knowledge transfer. Starting from low-level demonstrations of assembly tasks, the system is able to extract a high-level relational plan of the task. A graphical user interface (GUI) then allows the user to iteratively correct the acquired knowledge by refining the high-level plans and the low-level geometrical knowledge of the task. This combination leads to a programming phase that is faster, more precise than demonstrations alone, and more intuitive than a GUI alone. A final process allows high-level task knowledge to be reused for similar tasks in a transfer-learning fashion. Finally, we present a user study illustrating the advantages of this approach.
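An illustrative sketch (names and thresholds are assumptions, not the paper's system) of extracting a high-level relational description from low-level object poses recorded during a demonstration, by detecting which "on(A, B)" relations appear between consecutive key frames:

    def on_relations(poses, xy_tol=0.05):
        """poses: {name: (x, y, z)}. A is 'on' B if A is above B and aligned."""
        rels = set()
        for a, (ax, ay, az) in poses.items():
            for b, (bx, by, bz) in poses.items():
                if a != b and az > bz and abs(ax - bx) < xy_tol and abs(ay - by) < xy_tol:
                    rels.add((a, b))
        return rels

    before = {"peg": (0.30, 0.10, 0.02), "base": (0.00, 0.00, 0.00)}
    after  = {"peg": (0.00, 0.01, 0.04), "base": (0.00, 0.00, 0.00)}
    new_facts = on_relations(after) - on_relations(before)
    print("inferred plan step adds:", [f"on({a}, {b})" for a, b in new_facts])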