ROBOTIC SYSTEMS APPLICATIONS, CONTROL AND PROGRAMMING

Edited by Ashish Dutta


Published by InTech, Janeza Trdine 9, 51000 Rijeka, Croatia

Copyright © 2012 InTech
All chapters are Open Access distributed under the Creative Commons Attribution 3.0 license, which allows users to download, copy and build upon published articles even for commercial purposes, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications. After this work has been published by InTech, authors have the right to republish it, in whole or part, in any publication of which they are the author, and to make other personal use of the work. Any republication, referencing or personal use of the work must explicitly identify the original source. As for readers, this license allows users to download, copy and build upon published chapters even for commercial purposes, as long as the author and publisher are properly credited, which ensures maximum dissemination and a wider impact of our publications.

Notice
Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published chapters. The publisher assumes no responsibility for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained in the book.

Publishing Process Manager Anja Filipovic
Technical Editor Teodora Smiljanic
Cover Designer InTech Design Team
Image Copyright sarah5, 2011. DepositPhotos

First published January, 2012
Printed in Croatia

A free online edition of this book is available at [Link]
Additional hard copies can be obtained from orders@[Link]

Robotic Systems Applications, Control and Programming, Edited by Ashish Dutta
p. cm.
ISBN 978-953-307-941-7

Contents

Preface IX

Part 1 Applications 1

Chapter 1 Modular Robotic Approach in Surgical Applications – Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery 3
Kanako Harada, Ekawahyu Susilo, Takao Watanabe, Kazuya Kawamura, Masakatsu G. Fujie, Arianna Menciassi and Paolo Dario

Chapter 2 Target Point Manipulation Inside a Deformable Object 19
Jadav Das and Nilanjan Sarkar

Chapter 3 Novel Assistive Robot for Self-Feeding 43
Won-Kyung Song and Jongbae Kim

Chapter 4 Robot Handling Fabrics Towards Sewing Using Computational Intelligence Methods 61
Zacharia Paraskevi

Chapter 5 Robotic Systems for Radiation Therapy 85
Ivan Buzurovic, Tarun K. Podder and Yan Yu

Chapter 6 Robotic Urological Surgery: State of the Art and Future Perspectives 107
Rachid Yakoubi, Shahab Hillyer and Georges-Pascal Haber

Chapter 7 Reconfigurable Automation of Carton Packaging with Robotic Technology 125
Wei Yao and Jian S. Dai

Chapter 8 Autonomous Anthropomorphic Robotic System with Low-Cost Colour Sensors to Monitor Plant Growth in a Laboratory 139
Gourab Sen Gupta, Mark Seelye, John Seelye and Donald Bailey

Part 2 Control and Modeling 159

Chapter 9 CPG Implementations for Robot Locomotion: Analysis and Design 161
Jose Hugo Barron-Zambrano and Cesar Torres-Huitzil

Chapter 10 Tracking Control in an Upper Arm Exoskeleton with Differential Flatness 183
E. Y. Veslin, M. Dutra, J. Slama, O. Lengerke and M. J. M. Tavera

Chapter 11 Real-Time Control in Robotic Systems 209
Alex Simpkins

Chapter 12 Robot Learning from Demonstration Using Predictive Sequence Learning 235
Erik Billing, Thomas Hellström and Lars-Erik Janlert

Chapter 13 Biarticular Actuation of Robotic Systems 251
Jan Babič

Chapter 14 Optimization and Synthesis of a Robot Fish Motion 271
Janis Viba, Semjons Cifanskis and Vladimirs Jakushevich

Chapter 15 Modeling of Elastic Robot Joints with Nonlinear Damping and Hysteresis 293
Michael Ruderman

Chapter 16 Gravity-Independent Locomotion: Dynamics and Position-Based Control of Robots on Asteroid Surfaces 313
Marco Chacin and Edward Tunstel

Chapter 17 Kinematic and Inverse Dynamic Analysis of a C5 Joint Parallel Robot 339
Georges Fried, Karim Djouani and Amir Fijany

Chapter 18 Utilizing the Functional Work Space Evaluation Tool for Assessing a System Design and Reconfiguration Alternatives 361
A. Djuric and R. J. Urbanic

Chapter 19 Requirement Oriented Reconfiguration of Parallel Robotic Systems 387
Jan Schmitt, David Inkermann, Carsten Stechert, Annika Raatz and Thomas Vietor

Part 3 Vision and Sensors 411

Chapter 20 Real-Time Robotic Hand Control Using Hand Gestures 413
Jagdish Lal Raheja, Radhey Shyam, G. Arun Rajsekhar and P. Bhanu Prasad

Chapter 21 Robotics Arm Visual Servo: Estimation of Arm-Space Kinematics Relations with Epipolar Geometry 429
Ebrahim Mattar

Chapter 22 Design and Construction of an Ultrasonic Sensor for the Material Identification in Robotic Agents 455
Juan José González España, Jovani Alberto Jiménez Builes and Jaime Alberto Guzmán Luna

Part 4 Programming and Algorithms 471

Chapter 23 Robotic Software Systems: From Code-Driven to Model-Driven Software Development 473
Christian Schlegel, Andreas Steck and Alex Lotz

Chapter 24 Using Ontologies for Configuring Architectures of Industrial Robotics in Logistic Processes 503
Matthias Burwinkel and Bernd Scholz-Reiter

Chapter 25 Programming of Intelligent Service Robots with the Process Model FRIEND::Process and Configurable Task-Knowledge 529
Oliver Prenzel, Uwe Lange, Henning Kampe, Christian Martens and Axel Gräser

Chapter 26 Performance Evaluation of Fault-Tolerant Controllers in Robotic Manipulators 553
Claudio Urrea, John Kern and Holman Ortiz

Chapter 27 An Approach to Distributed Component-Based Software for Robotics 571
A. C. Domínguez-Brito, J. Cabrera-Gómez, J. D. Hernández-Sosa, J. Isern-González and E. Fernández-Perdomo

Chapter 28 Sequential and Simultaneous Algorithms to Solve the Collision-Free Trajectory Planning Problem for Industrial Robots – Impact of Interpolation Functions and the Characteristics of the Actuators on Robot Performance 591
Francisco J. Rubio, Francisco J. Valero, Antonio J. Besa and Ana M. Pedrosa

Chapter 29 Methodology for System Adaptation Based on Characteristic Patterns 611
Eva Volná, Michal Janošek, Václav Kocian, Martin Kotyrba and Zuzana Oplatková

Preface
Over the last few decades the focus of robotics research has moved beyond the traditional area of industrial applications to newer areas, including healthcare and domestic applications. These newer applications have given a strong impetus to the development of advanced sensors, control strategies and algorithms. The first section of this book contains advanced applications of robotics in surgery, rehabilitation and modular robotics, among others. This is followed by sections on control and sensors, while the fourth section is devoted to robot algorithms. I would like to thank all the authors for entrusting us with their work, and especially the editorial members of InTech publishing.

Dr. Ashish Dutta
Department of Mechanical Engineering
Indian Institute of Technology, Kanpur
India

Part 1
Applications

1
Modular Robotic Approach in Surgical Applications – Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery
Kanako Harada, Ekawahyu Susilo, Takao Watanabe, Kazuya Kawamura, Masakatsu G. Fujie, Arianna Menciassi and Paolo Dario
1The University of Tokyo, 2Scuola Superiore Sant'Anna, 3Italian Institute of Technology, 4Waseda University
1,4Japan, 2,3Italy

1. Introduction
The trend in surgical robots is moving from traditional master-slave robots to miniaturized devices for screening and simple surgical operations (Cuschieri, A. 2005). For example, capsule endoscopy (Moglia, A. 2007) has been conducted worldwide over the last five years with successful outcomes. To enhance the dexterity of commercial endoscopic capsules, capsule locomotion has been investigated using legged capsules (Quirini, M. 2008) and capsules driven by external magnetic fields (Sendoh, M. 2003; Ciuti, G. 2010; Carpi, F. 2009). Endoscopic capsules with miniaturized arms have also been studied to determine their potential for use in biopsy (Park, S.-K. 2008). Furthermore, new surgical procedures known as natural orifice transluminal endoscopic surgery (NOTES) and Single Port Access surgery are accelerating the development of innovative endoscopic devices (Giday, S. 2006; Bardaro, S.J. 2006). These advanced surgical devices show potential for the future development of minimally invasive and endoluminal surgery. However, the implementable functions in such devices are generally limited owing to space constraints. Moreover, advanced capsules or endoscopes with miniaturized arms have rather poor dexterity because the diameter of such arms must be small (i.e. a few millimeters), which results in a small force being generated at the tip. A modular surgical robotic system known as the ARES (Assembling Reconfigurable Endoluminal Surgical system) system has been proposed based on the aforementioned motivations (Harada, K. 2009; Harada, K. 2010; Menciassi, A. 2010). The ARES system is designed for screening and interventions in the gastrointestinal (GI) tracts to overcome the intrinsic limitations of single-capsules or endoscopic devices. In the proposed system,
miniaturized robotic modules are ingested and assembled in the stomach cavity. The assembled robot can then change its configuration according to the target location and task. Modular surgical robots are interesting owing to their potential for application as self-reconfigurable modular robots and innovative surgical robots. Many self-reconfigurable modular robots have been investigated worldwide (Yim, M. 2007; Murata, S. 2007) with the goal of developing systems that are robust and adaptive to the working environment. Most of these robots have been designed for autonomous exploration or surveillance tasks in unstructured environments; therefore, there are no strict constraints regarding the number of modules, modular size or working space. Because the ARES has specific applications and is used in the GI tract environment, it raises many issues that have not been discussed in depth in the modular robotic field. Modular miniaturization down to the ingestible size is one of the most challenging goals. In addition, a new interface must be developed so that surgeons can intuitively maneuver the modular surgical robot. The purpose of this paper is to clarify the advantages of the modular approach in surgical applications, as well as to present proof of concept of the modular robotic surgical system. The current paper is organized as follows: Section 2 describes the design of the ARES system. Section 3 details the design and prototyping of robotic modules, including the experimental results. Section 4 describes a reconfigurable master device designed for the robotic modules, and its preliminary evaluation is reported.

2. Design of the modular surgical system


2.1 Clinical indications and proposed procedures
The clinical target of the ARES system is the entire GI tract, i.e., the esophagus, stomach, small intestine, and colon. Among GI tract pathologies that can benefit from modular robotic features, biopsy for detection of early cancer in the upper side of the stomach (the fundus and the cardia) was selected as the surgical task to be focused on as a first step. Stomach cancer is the second leading cause of cancer-related deaths worldwide (World Health Organization 2006), and stomach cancer occurring in the upper side of the stomach has the worst outcome in terms of the 5-year survival ratio (Pesic, M. 2004). Thus, early diagnosis of cancer utilizing an advanced endoluminal device may lead to better prognosis. The stomach has a large volume (about 1400 ml) when distended, which provides working space to assemble the ingested robotic modules and change the topology of the assembled robot inside (i.e. reconfiguration). Each robotic module should be small enough to be swallowed and pass through the whole GI tract. Because the size of commercial endoscopic capsules (11 mm in diameter and 26 mm in length (Moglia, A. 2007)) has already been shown to be acceptable for the majority of patients as an ingestible device, each module needs to be miniaturized to this size before being applied to clinical cases.
The surgical procedures proposed for the ARES system (Harada, K. 2010) are shown in Fig. 1. Prior to the surgical procedure, the patient drinks a liquid to distend the stomach to a volume of about 1400 ml. Next, the patient ingests 10-15 robotic modules that complete the assembly process before the liquid naturally drains away from the stomach in 10-20 minutes. The number of modules swallowed depends on the target tasks and is determined in advance based on the pre-diagnosis. Magnetic self-assembly in the liquid using permanent magnets was selected for this study since its feasibility has already been demonstrated (Nagy, Z. 2007). Soon after the assembly, the robot configures its topology
according to preoperative planning by repeated docking and undocking of the modules (the undocking mechanism and electrical contacts between modules are necessary for reconfiguration, but they have not been implemented in the presented design). The robotic modules are controlled via wireless bidirectional communication with a master device operated by the surgeon, while the progress of the procedure is observed using intraoperative imaging devices such as fluoroscopy and cameras mounted on the modules. After the surgical tasks are completed, the robot reconfigures itself to a snake-like shape to pass through the pyloric sphincter and travel to examine the small intestine and the colon, or it completely disassembles itself into individual modules so that it can be brought out without external aid. One of the modules can bring a biopsy tissue sample out of the body for detailed examination after the procedure is complete.

Fig. 1. Proposed procedures for the ARES system

2.2 Advantages of the modular approach in surgical applications
The modular approach has great potential to provide many advantages to surgical applications. These advantages are summarized below using the ARES system as shown in Fig. 2. The numbering of the items in Fig. 2 corresponds to the following numbering.
i. The topology of the modular surgical robot can be customized for each patient according to the location of the disease and the size of the body cavity in which the modular robot is deployed. A set of functional modules such as cameras, needles and forceps can be selected for each patient based on the necessary diagnosis and surgical operation.
ii. The modular approach facilitates delivery of more components inside a body cavity that has small entrance/exit hole(s). As there are many cavities in the human body, the modular approach would benefit treatment in such difficult-to-reach places. Because several functional modules can be used simultaneously, the modular robot may perform rather complicated tasks that a single endoscopic capsule or an endoscopic device is not capable of conducting. For example, if more than two camera modules are employed, the surgeon can conduct tasks while observing the site from different directions.
iii. Surgical tools of relatively large diameter can be brought into the body cavity. Conventionally, small surgical forceps that can pass through an endoscopic channel of a few millimeters have been used for endoluminal surgery. Conversely, surgical devices that have the same diameter as an endoscope can be used in the modular surgical system. Consequently, the force generated at the tip of the devices would be rather large, and the performance of the functional devices would be high.
iv. The surgical system is more adaptive to the given environment and robust to failures. Accordingly, it is not necessary for the surgical robot to carry all modules that might be necessary in the body, because the surgeons can decide whether to add modules with different functionalities, even during the surgical operation. After use, the modules can be detached and discarded if they are not necessary in the following procedures. Similarly, a module can be easily replaced with a new one in case of malfunction.
As these advantages suggest, a modular surgical robot would be capable of achieving rather complicated tasks that have not been performed using existing endoluminal surgical devices. These advantages are valid for modular robots that work in any body cavity with a small entrance and exit. Moreover, this approach may be introduced to NOTES or Single Port Access surgery, in which surgical devices must reach the abdominal cavity through a small incision. In Section 3, several robotic modules are proposed, and the performance of these modules is reported to show the feasibility of the proposed surgical system.

Fig. 2. Advantages of the modular approach in surgical applications

Modular Robotic Approach in Surgical Applications Wireless Robotic Modules and a Reconfigurable Master Device for Endoluminal Surgery

3. Robotic modules
3.1 Design and prototyping of the robotic modules
Figure 3 shows the design and prototypes of the Structural Module and the Biopsy Module (Harada, K. 2009; Harada, K. 2010). The Structural Module has two degrees of freedom (90° of bending and 360° of rotation). The Structural Module contains a Li-Po battery (20 mAh, LP2-FR, Plantraco Ltd., Canada), two brushless DC geared motors that are 4 mm in diameter and 17.4 mm in length (SBL04-0829PG337, Namiki Precision Jewel Co. Ltd., Japan) and a custom-made motor control board capable of wireless control (Susilo, E. 2009). The stall torque of the selected geared motor is 10.6 mNm and the speed is 112 rpm when controlled by the developed controller. The bending mechanism is composed of a worm and a spur gear (9:1 gear reduction), whereas the rotation mechanism is composed of two spur gears (no gear reduction). All gears (DIDEL SA, Switzerland) were made of nylon, and they were machined to fit in the small space of the capsule. Two permanent magnets (Q-05-1.5-01-N, Webcraft GmbH, Switzerland) were attached at each end of the module to help with self-alignment and modular docking. The module is 15.4 mm in diameter and 36.5 mm in length; it requires further miniaturization before clinical application. The casing of the prototype was made of acrylic plastic and fabricated by 3D rapid prototyping (InVision XT 3-D Modeler, 3D Systems, Inc., USA). The total weight is 5.6 g. Assuming that the module would weigh 10 g with the metal chassis and gears, the maximum torque required for lifting two connected modules is 5.4 mNm for both the bending DOF and the rotation DOF. Assuming that the gear transmission efficiency for the bending mechanism is 30%, the stall torque for the bending DOF is 28.6 mNm. On the other hand, the stall torque for the rotation DOF is 8.5 mNm when the transmission efficiency for the rotation mechanism is 80%. The torque was designed to provide sufficient force for surgical operation, but the transmission efficiency of the miniaturized plastic gears was much smaller than the theoretical value, as explained in the next subsection.

Controller
The aforementioned brushless DC motor came with a dedicated motor driving board (SSD04, Namiki Precision Jewel Co., Ltd., 19.6 mm × 34.4 mm × 3 mm). This board only allows driving of one motor; hence, two boards are required for a robotic module with 2 DOFs. Because there was not sufficient space for the boards in the robotic module, a custom-made high-density control board was designed and developed in-house. This control board consisted of one CC2430 microcontroller (Texas Instruments, USA) as the main wireless controller and three sets of A3901 dual bridge motor drivers (Allegro MicroSystems, Inc., USA). The fabricated board is 9.6 mm in diameter, 2.5 mm in thickness and 0.37 g in weight, which is compatible with swallowing. The A3901 motor driver chip was originally intended for a brushed DC motor, but a software commutation algorithm was implemented to control a brushless DC motor as well. An IEEE 802.15.4 wireless personal area network (WPAN) was introduced as an embedded feature (radio peripheral) of the microcontroller. The implemented algorithm enables control of the selected brushless DC motor in Back ElectroMotive Force (BEMF) feedback mode or slow-speed stepping mode. When the stepping mode is selected, the motor can be driven with a resolution of 0.178°.
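As a numerical cross-check of the drive-train figures quoted in this subsection, the short Python sketch below recomputes the joint stall torques from the motor stall torque, the gear ratios and the stated transmission-efficiency assumptions (30% bending, 80% rotation); the code itself is illustrative, not part of the original design.

    # Cross-check of the Structural Module joint stall torques quoted above.
    MOTOR_STALL_TORQUE_MNM = 10.6  # SBL04-0829PG337 geared motor, stall torque (mNm)

    def joint_stall_torque(gear_ratio, efficiency):
        """Stall torque at the joint after the external gear stage (mNm)."""
        return MOTOR_STALL_TORQUE_MNM * gear_ratio * efficiency

    bending = joint_stall_torque(gear_ratio=9.0, efficiency=0.30)   # worm + spur, 9:1
    rotation = joint_stall_torque(gear_ratio=1.0, efficiency=0.80)  # spur pair, 1:1
    print(f"bending DOF:  {bending:.1f} mNm")    # 28.6 mNm, matching the text
    print(f"rotation DOF: {rotation:.1f} mNm")   # 8.5 mNm, matching the text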
For the modular approach, each control board should be equipped with a wired locating system for intra-modular communication in addition to the wireless communication. Aside from wireless networking, the wired locating system, which is not implemented in the presented design, would be useful for identifying the sequence of the docked modules in real time. The wired locating system is composed of three lines: one for serial multi-drop communication, one for a peripheral locator and one as a ground reference. When the modules are firmly connected, the intra-modular communication can be switched from wireless to wired to save power while maintaining the predefined network addresses. When one module is detached, intentionally or by mistake, it switches back to wireless mode; a toy sketch of this hand-over logic is shown below.
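In the sketch, the class, state names and events are hypothetical, since the locating system was not implemented in the presented design; only the hand-over behaviour it models is taken from the text.

    from enum import Enum

    class CommMode(Enum):
        WIRELESS = 0  # IEEE 802.15.4 WPAN (default)
        WIRED = 1     # serial multi-drop bus, used while docked to save power

    class ModuleComm:
        """Toy model of one module's communication-mode hand-over."""
        def __init__(self, address):
            self.address = address        # network address kept across hand-overs
            self.mode = CommMode.WIRELESS

        def on_docked(self, locator_ok):
            # Switch to the wired bus only once the peripheral locator confirms contact
            if locator_ok:
                self.mode = CommMode.WIRED

        def on_detached(self):
            # Intentional or accidental undocking falls back to the radio link
            self.mode = CommMode.WIRELESS

    m = ModuleComm(address=0x12)
    m.on_docked(locator_ok=True)   # docked: wired mode, address preserved
    m.on_detached()                # detached: back to wireless mode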
Battery
The battery capacity carried by each module may differ from one to another (e.g. from 10 mAh to 50 mAh) depending on the available space inside the module. For the current design, a 20 mAh Li-Po battery was selected. Continuous driving of the selected motor at its maximum speed using a 20 mAh Li-Po battery was found to last up to 17 minutes. A module does not draw power continuously because the actuation mechanisms can maintain their position when there is no current to the motor owing to the high gear reduction (337:1). A module consumes power during actuation, but its power use is very low in stand-by mode.

Biopsy Module
The Biopsy Module is a Functional Module that can be used to conduct diagnosis. The grasping mechanism has a worm and two spur gears, which allows wide opening of the grasping parts. The grasping parts can be hidden in the casing at the maximum opening to prevent tissue damage during ingestion. The motor and other components used for the Biopsy Module are the same as for the Structural Module. The brushless DC geared motor (SBL04-0829PG337, Namiki Precision Jewel Co. Ltd., Japan), the control board, a worm gear and two spur gears (9:1 gear reduction) were implemented in the Biopsy Module. A permanent magnet (Q-05-1.5-01-N, Webcraft GmbH, Switzerland) was placed at one side to allow connection to a Structural Module.

Fig. 3. Design and prototypes of the structural module (left) and the biopsy module (right)


The Biopsy Module can generate a force of 7.1 N at its tip, and can also open the grasping parts to a width of 19 mm with an opening angle of 90 degrees. These values are much larger than those of conventional endoscopic forceps, which are 2-4 mm in diameter. As a demonstration, Figure 3 shows the Biopsy Module holding a coin weighing 7.5 g. In conventional endoscopy, forceps are inserted through endoscopic channels that are parallel to the direction of the endoscopic view, which often results in the forceps hiding the target. Conversely, the Biopsy Module can be positioned at any angle relative to the endoscopic view owing to the modular approach, thereby allowing an adequate approach to the target.

3.2 Performance of the Structural Module
The mechanical performance of the bending and rotation DOFs of the Structural Module was measured in preliminary tests (Menciassi, A. 2010), and the results are summarized in Fig. 4. The bending angle was varied by up to 90° in steps of 10° three times in succession. The measured range of the bending angle was -86.0° to +76.3°, and the maximum error was 15.8°. The rotation angle was increased from 0° to 180° in steps of 45° three times in succession, and the measured range of the rotational angle was between 0° and 166.7°, with a maximum error of 13.3°. The difference between the driven angle and the measured angle was due to backlash of the gears and the lack of precision and stiffness of the casing made by 3D rapid prototyping. Regardless of the errors and the hysteresis, the repeatability was sufficient for the intended application for both DOFs. These results indicate that the precision of each motion can be improved by changing the materials of the gears and the casing. Since the motor can be controlled with a resolution of 0.178°, very precise surgical tasks could be achieved using different manufacturing processes.
Fig. 4. Bending angle measurement (left), rotation angle measurement (middle), and torque measurement (right) (Menciassi, A. 2010)

In addition to the angle measurements, both the bending and rotation torque were measured. The torque was measured by connecting cylindrical parts with permanent magnets at both ends until the bending/rotational motion stopped. The length and weight of each cylinder was designed in advance, and several types of cylinders were prepared. The measured bending torque was 6.5 mNm and the rotation torque was 2.2 mNm. The figure also shows one module lifting up two modules attached to its bending mechanism as a demonstration. The performance in terms of precision and generated torque, which are very important for reconfiguration and surgical tasks, was sufficient; however, the precision was limited owing to the aforementioned fabrication problems. The thin walls of the casing made of acrylic plastic were easily deformed, which caused friction between the parts. A casing made of metal or PEEK and tailor-made metal gears with high precision would improve the mechanism's rigidity and performance, thus providing optimal stability.

3.3 Possible designs of robotic modules
Figure 5 shows various designs of robotic modules that can be implemented in the modular surgical robot. The modules can be categorized into three types: structural modules, functional modules, and other modules. Structural modules are used to configure a robotic topology. Functional modules are used for diagnosis or intervention, while other modules can be added to enhance the performance and robustness of the robotic system. Obviously, an assembled robot made of different types of modules (i.e. a robot with high heterogeneity) may provide high dexterity, but the self-assembly in the stomach and control of the modules would become more difficult. To optimize the level of heterogeneity, self-assembly of the robotic modules must be developed so that the reconfiguration of the robotic topology following the self-assembly can be planned in advance. Employing pre-assembled modular arms or tethered modules can be another option to facilitate assembly in a body cavity; however, this would require further anesthesia, and it would hinder the promotion of massive screening.

Fig. 5. Various designs of the robotic modules



4. Reconfigurable master device


4.1 Design and prototyping of the reconfigurable master device
One main advantage of using a modular approach in surgical applications is adaptivity to the given environment, as mentioned in Section 2.2. Wherever the robotic platform is deployed in the GI tract, the robotic topology can be changed based on preoperative plans or the in-situ situation to fit any particular environment. This dynamic changing and reshaping of the robotic topology should be reflected in the user interface. Since it is possible for a robotic topology to have redundant DOFs, the master device for the modular surgical system needs to be able to handle the redundancy that is inherent to modular robots. Based on these considerations, we propose a reconfigurable master device that resembles the robotic platform (Fig. 6). When the assembled robot changes its topology, the master device follows the same configuration. The robotic module shown in Fig. 6 has a diameter of 15.4 mm, while a module of the reconfigurable master device has a diameter of 30 mm. The master modules can be easily assembled or disassembled using set screws, and it takes only a few seconds to connect one module to another. Each robotic module is equipped with two motors as described in the previous section; thus, each master module is equipped with two potentiometers (TW1103KA, Tyco Electronics) that are used as angular position sensors. Calculating the angular position of each joint of the reconfigurable master device is quite straightforward. A common reference voltage is sent from a data acquisition card to all potentiometers, after which the angular position can be calculated from the feedback readings. Owing to the identical configuration, the angle of each joint of the robotic modules can be easily determined, even if the topology has redundancy.
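A minimal sketch of this potentiometer-to-angle conversion is given below; the linear mapping and the numeric calibration values are assumptions made for illustration, not figures taken from the prototype.

    def pot_to_angle(v_out, v_ref, angle_min_deg, angle_max_deg):
        """Map a potentiometer reading to a joint angle (degrees)."""
        ratio = max(0.0, min(1.0, v_out / v_ref))  # normalize by the common reference voltage
        return angle_min_deg + ratio * (angle_max_deg - angle_min_deg)

    # One bending and one rotation reading per master module (values illustrative):
    bend = pot_to_angle(v_out=2.50, v_ref=5.0, angle_min_deg=-90.0, angle_max_deg=90.0)   # 0 deg
    twist = pot_to_angle(v_out=3.75, v_ref=5.0, angle_min_deg=-90.0, angle_max_deg=90.0)  # 45 deg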

Fig. 6. Robotic modules (top line) and the reconfigurable master device (bottom line): one module (left), assembled modules (middle) and prototypes (right)



The advantages of the proposed master device include intuitive manipulation. For example, the rotational movement of a structural module used to twist the arm is limited to 180°, and the master module also has this limitation. This helps surgeons intuitively understand the range of motion and the reachable working space of the modules. Using a conventional master manipulator or an external console, it is possible that the slave manipulator cannot move owing to its mechanical constraints while the master manipulator can still move. However, using the proposed master device, the surgeon can intuitively understand the mechanical constraints by manipulating the master device during practice/training. Furthermore, the position of the master arm can indicate where the robotic modules are, even if they are outside of the camera module's view. These characteristics increase the safety of the operation. This feature is important because the entire robotic system is placed inside the body. In other surgical robotic systems, the position or shape of the robotic arms is not important, as they are placed outside of the body and can be seen during the operation. Unlike other master devices, it is also possible for two or more surgeons to move the reconfigurable master device together at the same time using multiple arms with redundant DOFs.

4.2 Evaluation
A simulation-based evaluation setup was selected to simplify the preliminary evaluation of the feasibility of the reconfigurable master device. The authors previously developed the Slave Simulator to evaluate workloads for a master-slave surgical robot (Kawamura, K. 2006). The Slave Simulator can show the motion of the slave robot in CG (Computer Graphics) while the master input device is controlled by an operator. Because the simulator can virtually change the parameters of the slave robot or its control, it is easy to evaluate the parameters as well as the operability of the master device. This Slave Simulator was appropriately modified for the ARES system. The modified Slave Simulator presents the CG models of the robotic modules to the operator. The dimensions and DOFs of each module in CG were determined based on the design of the robotic modules. The angle of each joint is given by the signal from the potentiometers of the reconfigurable master device, and the slave modules in CG move in real time to reproduce the configuration of the master device. The Slave Simulator is capable of altering joint positions and the number of joints of the slave arms in CG so that the workspace of the reconfigurable master device can be reproduced in a virtual environment for several types of topologies. The simulator is composed of a 3D viewer that uses OpenGL and a physical calculation function. This function was implemented to detect a collision between the CG modules and an object placed in the workspace.
To simplify the experiments evaluating the feasibility of the proposed master device and the usefulness of the developed simulator, only one arm of the reconfigurable master device was used. Three topologies consisting of one Biopsy Module and one or two Structural Module(s) were selected, as illustrated in Fig. 7. Topology I consists of a Structural Module and a Biopsy Module, and the base is fixed so that the arm appears at an angle of 45 degrees. One Structural Module is added to Topology I to configure Topology II, and Topology III is identical to Topology II but placed at 0 degrees. Both Topology II and Topology III have redundant DOFs.
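The sketch below illustrates, under simplifying assumptions (a planar chain of equal-length links and a vertical bar as the target), the kind of forward-kinematics and collision test the Slave Simulator performs; it is not the simulator's actual code, and the touch tolerance is invented.

    import math

    MODULE_LEN = 36.5e-3  # m, structural-module length from Section 3.1

    def tip_position(joint_angles_deg, base_angle_deg=0.0):
        """Tip (x, y) of a planar serial chain of equal-length links."""
        x = y = 0.0
        heading = math.radians(base_angle_deg)
        for a in joint_angles_deg:
            heading += math.radians(a)
            x += MODULE_LEN * math.cos(heading)
            y += MODULE_LEN * math.sin(heading)
        return x, y

    def touches_bar(tip, bar_x, tol=2e-3):
        """Beep condition: distal end within tol of a vertical bar at x = bar_x."""
        return abs(tip[0] - bar_x) < tol

    tip = tip_position([30.0, -20.0], base_angle_deg=45.0)  # two-joint arm, 45-degree base
    print(tip, touches_bar(tip, bar_x=0.05))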
The projection of the workspace of each arm and the shared workspace are depicted in Fig.8. A target object on which the arm works in the experiments must be placed in this shared area, which makes it easy to compare topologies.



A bar was selected as the target object instead of a sphere because the height of the collision point is different for each topology when the object appears in the same position in the 2D plane. The experiment was designed so that a bar appears at random in the shared workspace. The bar represents a target area at which the Biopsy Module needs to collect tissue samples, and this experiment is a simple example to select one topology among three choices given that the arm can reach the target. We assumed that this choice may vary depending on the user, and this experiment was designed to determine if the reconfigurability of the master device, i.e. customization of the robot, provides advantages and improves performance.

Fig. 7. Three topologies used in the experiments

During the experiment, the operator of the reconfigurable master device could hear a beeping sound when the distal end of the arm (i.e. the grasping part of the Biopsy Module) touched the bar. The task designed for the experiments was to move the arm of the reconfigurable master device as quickly as possible, touch the bar in CG, and then maintain its position for three seconds. The plane in which the bar stands is shown in grids (Fig. 9), and the operator obtains 3D perception by observing these grids. The plane with the grids is the same for all topologies. The angle of the view was set so that the view is similar to that from the camera module in Fig. 6. Five subjects (a-e) participated in the experiments, none of whom were surgeons. Each subject was asked to freely move the master device to learn how to operate it; this practice was limited to one minute before starting the experiments. Each subject started from Topology I, then tested Topology II and finally Topology III. The time needed to touch the bar and maintain contact for three seconds was measured. This procedure was repeated ten times for each topology with a randomized position of the bar. During the procedure, the bar appeared at random; however, it always appeared in the shared workspace to ensure
that the arm could reach it. After finishing the experiment, the subjects were asked to fill in a questionnaire (described below) for each topology. The subjects were also asked which topology they preferred.

Fig. 8. Workspace of each topology and the shared workspace

Fig. 9. Simulator and the master device during one test

A NASA TLX questionnaire (NASA TLX (website)) was used to objectively and quantitatively evaluate the workload that the subjects felt during the experiments. This method has versatile uses, and we selected it also because it was used to evaluate the workload in a tele-surgery environment (Kawamura, K. 2006). This method evaluates Mental Demand, Physical Demand, Temporal Demand, Performance, Effort and Frustration, and gives a score that represents the overall workload that the subject felt during the task.
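For reference, the sketch below shows how an overall NASA-TLX score is typically formed from the six subscale ratings via pairwise-comparison weights; the sample numbers are invented, and the chapter reports only the resulting scores.

    SCALES = ["Mental Demand", "Physical Demand", "Temporal Demand",
              "Performance", "Effort", "Frustration"]

    def tlx_score(ratings, weights):
        """Weighted NASA-TLX score: ratings 0-100, weights from 15 pairwise comparisons."""
        assert sum(weights.values()) == 15  # each of the 15 comparisons awards one point
        return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

    ratings = {"Mental Demand": 55, "Physical Demand": 40, "Temporal Demand": 35,
               "Performance": 30, "Effort": 50, "Frustration": 25}
    weights = {"Mental Demand": 4, "Physical Demand": 2, "Temporal Demand": 2,
               "Performance": 3, "Effort": 3, "Frustration": 1}
    print(round(tlx_score(ratings, weights), 1))  # 42.3: overall workload score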
4.3 Results
The time spent conducting the given task, the workload score evaluated using the NASA TLX questionnaire and the preference of the topology determined by the subjects are
summarized in Table 1. For each item, a smaller value indicates a more favorable evaluation by the subject. Considering the results of the time and workload score, Topology II was the most difficult. The difference between Topology I and III was interesting. Two of the subjects ((b) and (c)) preferred Topology I, which did not have a redundant DOF. Conversely, three of the subjects ((a), (d) and (e)) preferred Topology III because they could select the path to reach the target owing to the redundant DOF. The average scores of the NASA TLX parameters shown in Fig. 10 suggest that the Physical Demand workload was high for Topology I, while the Effort workload was high for Topology III. The two subjects who preferred Topology I rather than Topology III claimed that it was not easy to determine where the bar was located when Topology III was used, owing to the lack of 3D perception. In addition, they reported that the bar seemed to be placed far from the base. However, the bar appeared randomly but in the same area; therefore, the bar that appeared in the experiment that employed Topology III was not placed farther from the base when compared to the experiments that used Topology I or Topology II. Accordingly, these two subjects may have had difficulty obtaining 3D perception from the gridded plane. In Topology III, the arm was partially out of view in the initial position; thus, the operator needed to obtain 3D perception by seeing the grids. It is often said that most surgeons can obtain 3D perception even if they use a 2D camera, and our preliminary experimental results imply that this ability might differ by individual. Some people appear to obtain 3D perception primarily by seeing the relative positions between the target and the tool they move. Redundant DOFs may also be preferred by operators with better 3D perception capability. Although the experiments were preliminary, there must be other factors that accounted for the preference of the user. Indeed, it is likely that the preferable topology varies depending on the user, and the developed simulator would be useful to evaluate these variations. The proposed reconfigurable master device will enable individual surgeons to customize the robot and interface as they prefer.

                                    Subject
                         a      b      c      d      e    Average
    Time (s)       I    5.7    4.1    4.0    5.8    5.0     4.9
                   II   7.6    6.2    4.8    5.5    6.7     6.1
                   III  4.9    4.3    5.6    4.4    4.7     4.8
    Workload:      I   30.7   11.3   28.3   32.0   73.3    35.1
    NASA-TLX       II  47.6   26.7   28.0   53.0   68.3    44.7
    Score          III 37.0    5.0   24.3   14.3   61.3    28.4
    Preference     I    3      1      1      3      3       2.2
                   II   2      3      2      2      2       2.2
                   III  1      2      3      1      1       1.6

Table 1. Experimental results


Fig. 10. NASA TLX parameters for three topologies

5. Conclusion
A modular robot was proposed for endoluminal surgery. The design, prototyping and evaluation of the modules were reported. Although some fabrication issues remain, the results of the performance tests show the feasibility of the modular surgical system. A reconfigurable master device has also been proposed, and its feasibility was evaluated by simulation-based experiments. The preliminary results showed that the preferred topology may vary depending on the user. Moreover, the reconfigurable master device would enable each surgeon to customize the surgical system according to his/her own preferences. Development of the robotic modules and the reconfigurable master device provided proof of concept of the modular robotic system for endoluminal surgery, suggesting that the modular approach has great potential for surgical applications.

6. Acknowledgments
This study was supported in part by the European Commission in the framework of the ARES Project (Assembling Reconfigurable Endoluminal Surgical System, NEST-2003-1-ADVENTURE/15653), by the European Union Institute in Japan at Waseda University (EUIJ Waseda, [Link]) within the framework of its Research Scholarship Programme, and by the Global COE Program "Global Robot Academia" from the Ministry of Education, Culture, Sports, Science and Technology of Japan. The authors are grateful to Mr. Nicodemo Funaro for manufacturing the prototypes and Ms. Sara Condino for her invaluable technical support.

7. References
Bardaro, S. J. & Swanström, L. (2006). Development of advanced endoscopes for Natural Orifice Transluminal Endoscopic Surgery (NOTES). In: Minim. Invasive Ther. Allied Technol., 15(6), pp. 378–383.


Carpi, F. & Pappone, C. (2009). Magnetic maneuvering of endoscopic capsules by means of a robotic navigation system. In: IEEE Trans Biomed Eng, 56(5), pp. 1482–1490.
Ciuti, G.; Valdastri, P., Menciassi, A. & Dario, P. (2010). Robotic magnetic steering and locomotion of capsule endoscope for diagnostic and surgical endoluminal procedures. In: Robotica, 28, pp. 199–207.
Cuschieri, A. (2005). Laparoscopic surgery: current status, issues and future developments. In: Surgeon, 3(3), pp. 125–138.
Giday, S.; Kantsevoy, S. & Kalloo, A. (2006). Principle and history of natural orifice translumenal endoscopic surgery (NOTES). In: Minim. Invasive Ther. Allied Technol., 15(6), pp. 373–377.
Harada, K.; Susilo, E., Menciassi, A. & Dario, P. (2009). Wireless reconfigurable modules for robotic endoluminal surgery. In: Proc. IEEE Int. Conf. on Robotics and Automation, pp. 2699–2704.
Harada, K.; Oetomo, D., Susilo, E., Menciassi, A., Daney, D., Merlet, J.-P. & Dario, P. (2010). A Reconfigurable Modular Robotic Endoluminal Surgical System: Vision and preliminary results. In: Robotica, 28, pp. 171–183.
Kawamura, K.; Kobayashi, Y. & Fujie, M. G. (2006). Development of Real-Time Simulation for Workload Quantization in Robotic Tele-surgery. In: Proc. IEEE Int. Conf. on Robotics and Biomimetics, pp. 1420–1425.
Menciassi, A.; Valdastri, P., Harada, K. & Dario, P. (2010). Single and Multiple Robotic Capsules for Endoluminal Diagnosis and Surgery. In: Surgical Robotics – System Applications and Visions, Rosen, J., Hannaford, B. & Satava, M. (Eds.), pp. 313–354, Springer-Verlag, ISBN 978-1441911254.
Moglia, A.; Menciassi, A., Schurr, M. & Dario, P. (2007). Wireless capsule endoscopy: From diagnostic devices to multipurpose robotic systems. In: Biomed Microdevices, 9, pp. 235–243.
Murata, S. & Kurokawa, H. (2007). Self-reconfigurable robots. In: IEEE Rob. Autom. Mag., 14(1), pp. 71–78.
Nagy, Z.; Abbott, J. J. & Nelson, B. J. (2007). The magnetic self-aligning hermaphroditic connector: A scalable approach for modular microrobotics. In: Proc. IEEE/ASME Int. Conf. Advanced Intelligent Mechatronics, pp. 1–6.
NASA TLX. Available from: [Link]
Park, S. K.; Koo, K. I., Bang, S. M., Park, J. Y., Song, S. Y. & Cho, D. G. (2008). A novel microactuator for microbiopsy in capsular endoscopes. In: J. Micromech. Microeng., 18(2), 025032.
Pesic, M.; Karanikolic, A., Djordjevic, N., Katic, V., Rancic, Z., Radojkovic, M., Ignjatovic, N. & Pesic, I. (2004). The importance of primary gastric cancer location in 5-year survival rate. In: Arch Oncol, 12, pp. 51–53.
Quirini, M.; Menciassi, A., Scapellato, S., Stefanini, C. & Dario, P. (2008). Design and fabrication of a motor legged capsule for the active exploration of the gastrointestinal tract. In: IEEE/ASME Trans. Mechatronics, 13, pp. 169–179.
Sendoh, M.; Ishiyama, K. & Arai, K. (2003). Fabrication of magnetic actuator for use in a capsule endoscope. In: IEEE Trans. Magnetics, 39(5), pp. 3232–3234.
Susilo, E.; Valdastri, P., Menciassi, A. & Dario, P. (2009). A miniaturized wireless control platform for robotic capsular endoscopy using advanced pseudokernel approach. In: Sensors and Actuators A, 156(1), pp. 49–58.


World Health Organisation (2006). Fact sheet n.297. Available from: [Link] mediacenter/factsheets/fs297.
Yim, M.; Shen, W., Salemi, B., Rus, D., Moll, M., Lipson, H., Klavins, E. & Chirikjian, G. (2007). Modular self-reconfigurable robot systems [Grand Challenges of Robotics]. In: IEEE Rob. Autom. Mag., 14(1), pp. 865–872.

2
Target Point Manipulation Inside a Deformable Object
Jadav Das and Nilanjan Sarkar
Vanderbilt University, Nashville, TN, USA

1. Introduction
Target point manipulation inside a deformable object by a robotic system is necessary in many medical and industrial applications such as breast biopsy, drug injection, suturing, precise machining of deformable objects, etc. However, this is a challenging problem because of the difficulty of imposing the motion of the internal target point by a finite number of actuation points located at the boundary of the deformable object. In addition, there exist several other important manipulative operations that deal with deformable objects, such as whole body manipulation [1], shape changing [2], biomanipulation [3] and tumor manipulation [4], that have practical applications. The main focus of this chapter is target point manipulation inside a deformable object. For instance, a positioning operation called linking in the manufacturing of seamless garments [5] requires manipulation of internal points of deformable objects. Mating of a flexible part in the electric industry also results in the positioning of mated points on the object. In many cases these points cannot be manipulated directly, since the points of interest in a mating part are inaccessible because of contact with the mated part. Additionally, in the medical field, many diagnostic and therapeutic procedures require accurate needle targeting. In the case of needle breast biopsy [4] and prostate cancer brachytherapy [6], needles are used to access a designated area to remove a small amount of tissue or to implant radioactive seeds at the targeted area. The deformation causes the target to move away from its original location. To clarify the situation, we present a schematic of needle insertion for a breast biopsy procedure, as shown in Figure 1. When the tip of the needle reaches the interface between two different types of tissue, its further insertion will push the tissue, instead of piercing it, causing unwanted deformations. These deformations move the target away from its original location, as shown in Figure 1(b). In this case, we cannot manipulate the targeted area directly because it is internal to the organ. It must be manipulated by controlling some other points where forces can be applied, as shown in Figure 1(c). Therefore, in some cases one would need to move the positioned points to the desired locations of these deformable objects (e.g., mating two deformable parts for sewing seamlessly), while in other cases one may need to preserve the original target location (e.g., guiding the tumor to fall into the path of needle insertion). In either of these situations, the ability of a robotic system to control the target of the deformable object becomes important, which is the focus of this chapter. Controlling the position of the internal target point inside a deformable object requires appropriate contact locations on the surface of the object. Therefore, we address the issue of
determining the optimal contact locations for manipulating a deformable object such that the internal target point can be positioned to the desired location by three robotic fingers using minimum applied forces. A position-based PI controller is developed to control the motion of the robotic fingers such that the fingers apply minimum force on the surface of the object to position the internal target point at the desired location. However, the controller for target position control is non-collocated, since the internal target point is not directly actuated by the robotic fingers. It is known in the literature that non-collocated control of a deformable object is not passive, which may lead to instability [7]. In order to protect the object and the robotic fingers from physical damage, and in order to diminish the deterioration of performance caused by unwanted oscillation, it is indispensable to build a stable interaction between the robotic fingers and the object. Here we consider that the plant (i.e., the deformable object) is passive and does not generate any energy. So, in order to have stable interaction, it is essential that the controller for the robotic fingers be stable. Thus, we present a new passivity-based non-collocated controller for the robotic fingers to ensure safe and accurate position control of the internal target point. Passivity theory states that a system is passive if the energy flowing in exceeds the energy flowing out. Creating a passive interface adds the required damping force to make the output energy lower than the input energy. To this end we develop a passivity observer (PO) and a passivity controller (PC) based on [8] for each robotic finger, where the PO monitors the net energy flow out of the system and the PC supplies the necessary damping force to keep the PO positive. Our approach extends the concept of the PO and PC in [8] to multi-point contacts with the deformable object.
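A minimal per-finger sketch of such a PO/PC pair is shown below, assuming a sampled one-port interface with measured contact force and velocity; the exact formulation in [8] differs in detail, and the damping injection shown is only illustrative.

    class PassivityObserverController:
        """Time-domain PO/PC sketch for one finger-object contact port."""
        def __init__(self, dt):
            self.dt = dt
            self.energy = 0.0  # PO: net energy delivered into the port so far

        def step(self, force, velocity):
            """Return the extra damping force the PC injects to keep the port passive."""
            self.energy += force * velocity * self.dt  # PO update: E += f*v*dt
            damping_force = 0.0
            if self.energy < 0.0 and abs(velocity) > 1e-9:
                # PC: choose a damping gain that dissipates exactly the energy deficit
                alpha = -self.energy / (velocity * velocity * self.dt)
                damping_force = alpha * velocity
                self.energy = 0.0
            return damping_force

    # One observer/controller pair per finger; the net energy over all three
    # contact ports can be monitored the same way for the multi-point extension.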


Fig. 1. Schematics of the needle breast biopsy procedure: (a) needle insertion, (b) target movement, and (c) target manipulation

The remainder of this chapter is organized as follows: we discuss various issues and prior research in Section 2. The problem description is stated in Section 3. Section 4 outlines the mathematical modelling of the deformable object. A framework for optimal contact locations is presented in Section 5. The control methods are discussed in Section 6. The effectiveness of the derived control law is demonstrated by simulation in Section 7. Finally, the contributions of this work and the future directions are discussed in Section 8.

2. Issues and prior research


A considerable amount of work on multiple robotic systems has been performed during the last few decades [9-11, 12-15]. Mostly, the position and/or force control of multiple manipulators handling a rigid object were studied in [9-11]. However, there were some works on handling deformable object by multiple manipulators presented in [12-15]. Saha
and Isto [12] presented a motion planner for manipulating deformable linear objects using two cooperating robotic arms to tie self-knots and knots around simple static objects. Zhang et al. [13] presented a microrobotic system that is capable of pick-up and release operations on microobjects. Sun et al. [14] presented a cooperation task of controlling the reference motion and the deformation when handling a deformable object with two manipulators. In [15], Tavasoli et al. presented a two-time-scale control design for trajectory tracking of two cooperating planar rigid robots moving a flexible beam. However, to the best of our knowledge, works on manipulating an internal target point inside a deformable object are rare [4, 5]. Mallapragada et al. [4] developed an external robotic system to position the tumor in image-guided breast biopsy procedures. In their work, three linear actuators manipulate the tissue phantom externally to position an embedded target in line with the needle during insertion. In [5], Hirai et al. developed a robust control law for manipulation of 2D deformable parts using tactile and vision feedback to control the motion of the deformable object with respect to the position of selected reference points. These works are very relevant to our present application, but they did not address the optimal locations of the contact points for effecting the desired motion. A wide variety of modeling approaches have been presented in the literature dealing with computer simulation of deformable objects [16]. These are mainly derived from physically-based models to produce physically valid behaviors. Mass-spring models are one of the most common forms of deformable objects. A general mass-spring model consists of a set of point masses connected to their neighbors by massless springs. Mass-spring models have been used extensively in facial animation [17], cloth motion [18] and surgical simulation [19]. Howard and Bekey [20] developed a generalized method to model an elastic object with connections of springs and dampers. Finite element models have been used in computer simulation to model facial tissue and predict surgical outcomes [21, 22]. However, works on controlling an internal point in a deformable object have not been attempted. In order to manipulate the target point to the desired location, we must know the appropriate contact locations for effecting the desired motion. There can be an infinite number of possible ways of choosing the contact location based on the object shape and the task to be performed. Appropriate selection of the contact points is an important issue for performing certain tasks. The determination of optimal contact points for rigid objects was extensively studied by many researchers with various stability criteria. Salisbury [23] and Kerr [24] discussed that a stable grasp is achieved if and only if the grasp matrix is full row rank. Abel et al. [25] modelled the contact interaction by point contact with Coulomb friction and stated that the optimal grasp has minimum dependency on frictional forces. After examining a variety of human grasps, Cutkosky [26] observed that the choice of grasp is influenced less by the size and shape of the object than by the task to be performed. Ferrari et al. [27] defined grasp quality to minimize either the maximum value or the sum of the finger forces as optimality criteria. Garg and Dutta [28] showed that the internal forces required for grasping deformable objects vary with the size of the object and the finger contact angle.
In [29], Watanabe and Yoshikawa investigated optimal contact points on an arbitrary shaped object in 3D using the concept of required external force set. Ding et al. proposed an algorithm for computing form closure grasp on a 3D polyhedral object using local search strategy in [30]. In [31, 32], various concepts and methodologies of robot grasping of rigid objects were reviewed. Cornella et al. [33] presented a mathematical approach to obtain optimal solution of contact points using the dual theorem of nonlinear programming. Saut et al. [34]
presented a method for solving the grasping force optimization problem of a multi-fingered dexterous hand by minimizing a cost function. All these works are based on grasping of rigid objects. There are also a few works based on deformable object grasping. For example, Gopalakrishnan and Goldberg [35] proposed a framework for grasping deformable parts in assembly lines based on the form closure properties used for grasping rigid parts. Foresti and Pellegrino [36] described an automatic way of handling deformable objects using vision techniques. The vision system worked along with a hierarchical self-organizing neural network to select proper grasping points in 2D. Wakamatsu et al. [37] analyzed grasping of deformable objects and introduced bounded force closure. However, position control of an internal target point in a deformable object by a multi-fingered gripper has not been attempted. In our work, we address the issue of determining the optimal contact locations for manipulating a deformable object such that the internal target point can be positioned to the desired location by three robotic fingers using minimum applied forces. The idea of passivity can be used to guarantee stable interaction without exact knowledge of model information. Anderson and Spong [38] published the first solid result by passivation of the system using scattering theory. A passivity-based impedance control strategy for robotic grasping and manipulation was presented by Stramigioli et al. [39]. Recently, Hannaford and Ryu [40] proposed a time-domain passivity control based on the energy consumption principle. The proposed algorithm did not require any knowledge about the dynamics of the system. They presented a PO and a PC to ensure stability under a wide variety of operating conditions. The PO can measure energy flow in and out of one or more subsystems in real time by confining the analysis to systems with very fast sampling rates. Meanwhile the PC, which is an adaptive dissipation element, absorbs exactly the net energy output measured by the PO at each time sample. In [41], a model-independent passivity-based approach to guarantee stability of a flexible manipulator with a non-collocated sensor-actuator pair is presented. This technique uses an active damping element to dissipate energy when the system becomes active. In our work we use a similar concept of the PO and PC to ensure stable interaction between the robotic fingers and the deformable object. Our work also extends the concept of the PO and PC to multi-point contact with the deformable object.

3. Problem description
Consider a case in which multiple robotic fingers manipulate a deformable object in a 2D plane to move an internal target point to a desired location. Before we discuss the design of the control law, we present a result from [42] to determine the number of actuation points required to position the target at an arbitrary location in a 2D plane. The following definitions are given according to the convention in [42].

Manipulation points: the points that can be manipulated directly by robotic fingers. In our case, the manipulation points are the points where the external robotic fingers apply forces on the deformable object.

Positioned points: the points that should be positioned indirectly by controlling the manipulation points appropriately. In our case, the target is the positioned point.

The control law to be designed is non-collocated, since the internal target point is not directly actuated by the robotic fingers. The following result is useful in determining the number of actuation points required to accurately position the target at the desired location.


Result [42]: The number of manipulated points must be greater than or equal to the number of positioned points in order to realize an arbitrary displacement. In our present case, we assume that the number of positioned points is one, since we are trying to control the position of the target. Hence, ideally, the number of contact points would also be one. In practice, however, we impose two constraints: (1) we do not want to apply shear force on the deformable object, to avoid damaging its surface, and (2) we can only apply control forces directed into the deformable object. We cannot pull the surface, since the robotic fingers are not attached to it. Thus we need to control the position of the target by applying only unidirectional compressive forces. In this context, there exists a theorem on force direction closure in mechanics that helps us determine the equivalent number of compressive forces that can replace one unconstrained force in a 2D plane.

Theorem [43]: A set of wrenches w can generate a force in any direction if and only if there exists a three-tuple of wrenches {w_1, w_2, w_3} whose respective force directions f_1, f_2, f_3 satisfy:
i. two of the three directions f_1, f_2, f_3 are independent, and
ii. a strictly positive combination of the three directions is zero,

\alpha f_1 + \beta f_2 + \gamma f_3 = 0 \qquad (1)

where α, β, and γ are strictly positive constants. The ramification of this theorem for our problem is that we need three control forces distributed around the object such that the end points of their direction vectors form a non-degenerate triangle that contains their common origin. With such an arrangement we can realize any arbitrary displacement of the target point. Thus the problem can be stated as follows.

Problem statement: Given the number of actuation points, the initial target location, and its desired location, find appropriate contact locations and a control action such that the target point is positioned at its desired location by controlling the boundary points of the object with minimum force.
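As a quick numerical illustration of this closure condition (a minimal sketch; the example angles are ours, not values from the chapter), three unit force directions can generate a net force in any direction exactly when some strictly positive combination of them vanishes:

```python
import numpy as np

def force_direction_closure(angles_deg):
    """Check the force direction closure condition for three directions:
    some strictly positive combination of the three unit force directions
    must sum to zero (with two of them independent)."""
    f = np.array([[np.cos(np.radians(a)), np.sin(np.radians(a))]
                  for a in angles_deg])          # three unit directions
    A = f[:2].T                                  # 2x2 matrix [f1 f2]
    if abs(np.linalg.det(A)) < 1e-9:             # f1, f2 must be independent
        return False
    # Solve alpha1*f1 + alpha2*f2 = -f3 (i.e. alpha3 = 1) and check signs.
    a12 = np.linalg.solve(A, -f[2])
    return bool(a12[0] > 0 and a12[1] > 0)

print(force_direction_closure([0, 120, 240]))    # True: directions span the plane
print(force_direction_closure([0, 30, 60]))      # False: all in one half-plane
```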

4. Deformable object modelling


Consider the schematic in Figure 2, where three robotic fingers position an internal target (point A) in a deformable object to the desired location (point B). We assume that all the end-effectors of the robotic fingers are in contact with the deformable object such that they can only push on the object, as needed. The coordinate systems are defined as follows: frame w is the task coordinate system; frame o is the object coordinate system, fixed on the object; and frame i is the i-th robotic finger coordinate system, fixed on the i-th end-effector located at the grasping point. In order to formulate the optimal contact locations, we model the deformable object using mass-spring-damper systems. The point masses are located at the nodal points, and a Voigt element [20] is inserted between them. Figure 3 shows a single layer of the deformable object. Each element is labeled E_j for j = 1, 2, ..., N_E, where N_E is the total number of elements in a single layer. The position vector of the i-th mesh point is defined as p_i = [x_i  y_i]^T, i = 1, 2, 3, ..., N, where N is the total number of point masses. k and c are the spring stiffness and the damping coefficient, respectively. Assume that no moment is exerted on any mesh point. Then the resultant force exerted on mesh point p_i can be calculated as

w_i = \partial U / \partial p_i \qquad (2)

where U denotes the total potential energy of the object.

Fig. 2. Schematic of the robotic fingers manipulating a deformable object

Fig. 3. Model of a deformable object with interconnected mass-spring-damper elements
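For concreteness, the spring part of the resultant nodal force in Equation (2) can be evaluated directly from the mesh connectivity. The following is a minimal sketch under the chapter's mass-spring-damper model; the tiny triangular mesh and rest lengths are illustrative placeholders, and the Voigt damper contribution (proportional to the relative velocity of neighboring nodes) is omitted for brevity:

```python
import numpy as np

def nodal_forces(p, springs, k):
    """Elastic force on every mesh point of a mass-spring network.
    p       : (N, 2) array of nodal positions
    springs : list of (i, j, rest_length) element connections
    k       : spring stiffness (N/m)
    Returns the (N, 2) array of resultant spring forces on the nodes
    (the potential-energy gradient of Equation (2), up to sign convention)."""
    w = np.zeros_like(p)
    for i, j, L0 in springs:
        d = p[j] - p[i]
        L = np.linalg.norm(d)
        f = k * (L - L0) * d / L      # Hooke's law along the element
        w[i] += f                      # equal and opposite nodal loads
        w[j] -= f
    return w

# Tiny illustrative mesh: three nodes in a triangle, unit rest lengths.
p = np.array([[0.0, 0.0], [1.2, 0.0], [0.5, 0.9]])
springs = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
print(nodal_forces(p, springs, k=10.0))
```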

5. Framework for optimal contact locations


We develop an optimization technique that satisfies the force closure condition for a three-finger planar grasp. The resultant wrench for the contacts of the three robotic fingers is given by
w = \sum_{i=1}^{3} f_i \, n_i(r_i), \quad w \in \mathbb{R}^2, \quad f_i \ge 0, \; 1 \le i \le 3 \qquad (3)

where n_i(r_i) is the unit inner normal at the i-th contact and f_i denotes the i-th finger force. We assume that the contact forces must lie within the friction cone so that the object can be manipulated without slip of the fingertips. Now we need to find three distinct points, r_1(θ_1), r_2(θ_2), and r_3(θ_3), on the boundary of the object such that Equation (3) is satisfied. Here, θ_1, θ_2, and θ_3 are the three contact point locations measured anti-clockwise with respect to the x axis, as shown in Figure 4. In addition, we assume that the normal forces have to be non-negative to avoid separation and slippage at the contact points, i.e.,
f_i \ge 0, \quad i = 1, 2, 3 \qquad (4)

Fig. 4. Three-finger grasp of a planar object

A physically realizable grasping configuration can be achieved if the surface normals at the three contact points positively span the plane so that they do not all lie in the same half-plane [44]. Therefore, a realizable grasp can be achieved if the pair-wise angles satisfy the following constraints:

\theta_{min} \le |\theta_i - \theta_j| \le \theta_{max}, \quad \theta_{low} \le \theta_i \le \theta_{high}, \quad i, j = 1, 2, 3, \; i \ne j \qquad (5)

A unique realizable grasp may not always exist. Therefore, we develop an optimization technique that minimizes the total force applied to the object in order to obtain a particular solution. The optimal locations of the contact points are the solution of the following optimization problem:

\min \; f^T f
subject to: w = \sum_{i=1}^{3} f_i \, n_i(r_i)
\theta_{min} \le |\theta_i - \theta_j| \le \theta_{max}, \quad i, j = 1, 2, 3, \; i \ne j
f_i \ge 0, \quad i = 1, 2, 3
0^\circ \le \theta_i \le 360^\circ, \quad i = 1, 2, 3 \qquad (6)

Once we get the optimal contact locations, all three robotic fingers can be located at their respective positions to effect the desired motion at those contact points.
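As a sketch of how the program in Equation (6) might be solved numerically, the following uses scipy.optimize.minimize over the decision variables (θ_1, θ_2, θ_3, f_1, f_2, f_3) for a circular object, where the inner normal at angle θ points toward the center. The desired wrench, the separation bound, and the omission of the upper angular bound are simplifying assumptions of ours, not the chapter's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def solve_contacts(w_des, th_min_deg=30.0):
    """Minimize f'f subject to sum_i f_i n_i(theta_i) = w_des, f_i >= 0,
    and a minimum pairwise angular separation (circular object assumed)."""
    def normals(th):                       # inner normals of a circle
        return -np.column_stack([np.cos(th), np.sin(th)])

    def cost(x):                           # x = [th1, th2, th3, f1, f2, f3]
        return np.sum(x[3:] ** 2)

    cons = [{"type": "eq",                 # wrench balance, Equation (3)
             "fun": lambda x: normals(x[:3]).T @ x[3:] - w_des}]
    th_min = np.radians(th_min_deg)
    for a, b in [(0, 1), (0, 2), (1, 2)]:  # pairwise separation constraints
        cons.append({"type": "ineq",
                     "fun": lambda x, a=a, b=b: abs(x[a] - x[b]) - th_min})

    x0 = np.array([0.5, 2.5, 4.5, 1.0, 1.0, 1.0])
    bounds = [(0.0, 2 * np.pi)] * 3 + [(0.0, None)] * 3
    res = minimize(cost, x0, bounds=bounds, constraints=cons)
    return np.degrees(res.x[:3]), res.x[3:]

angles, forces = solve_contacts(np.array([0.2, 0.1]))
print("contact angles (deg):", angles, "forces (N):", forces)
```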

6. Design of the controller


In this section, a control law for the robotic fingers is developed to guide a target from any point A to an arbitrary point B within the deformable object as shown in Figure 2.
6.1 Target position control

At any given time-step, point A is the actual location of the target and point B is the desired location of the target. n_1, n_2, and n_3 are unit vectors that determine the directions of force application of the actuation points with respect to the global reference frame w. Let us assume that p_d is the position vector of point B and p is the position vector of point A. Referring to Figure 2, the position vector of point A is given by

p = [x \;\; y]^T \qquad (7)

where x and y are the position coordinates of point A in the global reference frame w. The desired target position is represented by point B, whose position vector is given by

p_d = [x_d \;\; y_d]^T \qquad (8)

where x_d and y_d are the desired target position coordinates. The target position error, e, is given by
e = p_d - p \qquad (9)

Once the optimal contact locations are determined from Equation (6), the planner generates the desired reference motions for these contact points by projecting the error vector between the desired and the actual target locations onto the directions of the applied forces:

e_i^* = e \cdot n_i \qquad (10)

where

n_i = [n_{xi} \;\; n_{yi}]^T \qquad (11)

All robotic fingers are controlled by their individual controllers using the following proportional-integral (PI) control law

f_i = K_{Pi} \, e_i^* + K_{Ii} \int e_i^* \, dt, \quad i = 1, 2, 3 \qquad (12)

where K_{Pi} and K_{Ii} are the proportional and integral gains, respectively. Note that the control law (12) does not require the mechanical properties of the deformable object. The forces applied by the fingers on the surface of the deformable object are calculated by projecting the error vector onto the directions of the applied forces. However, Equation (12) does not guarantee that the system will be stable. Thus a passivity-based control approach based on energy monitoring is developed to guarantee the stability of the system.
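A minimal per-finger sketch of the control law (10)-(12), with the target error projected onto the finger's actuation direction and the integral term accumulated discretely; the gains match the values K_{Pi} = K_{Ii} = 1000 used later in the simulations, while the class structure and sampling period are ours:

```python
import numpy as np

class FingerPI:
    """PI force controller of Equation (12) for one robotic finger."""
    def __init__(self, n_i, Kp=1000.0, Ki=1000.0, T=0.001):
        self.n = np.asarray(n_i) / np.linalg.norm(n_i)  # actuation direction
        self.Kp, self.Ki, self.T = Kp, Ki, T
        self.integral = 0.0

    def force(self, p_des, p):
        e_i = (p_des - p) @ self.n        # projected error, Equation (10)
        self.integral += e_i * self.T     # discrete integral term
        return self.Kp * e_i + self.Ki * self.integral

finger = FingerPI(n_i=[-1.0, 0.0])
print(finger.force(p_des=np.array([0.005, 0.005]), p=np.zeros(2)))
```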


6.2 Passivity-based control

A passivity-based control approach based on energy monitoring is developed for deformable object manipulation to guarantee passivity (and consequently stability) of the system. The main reason to use passivity-based control is to ensure stability without an accurate model of the deformable object. It is not possible to develop a precise dynamic model of a deformable object, due to its complex nonlinear internal dynamics as well as variations in geometry and mechanical properties. Passivity-based control is thus an ideal candidate, since it is a model-independent technique. The basic idea is to use a PO to monitor the energy generated by the controller and to dissipate the excess energy using a PC when the controller becomes active [41], without the need to model the dynamics of the plant (the deformable object).

Passivity Observer (PO)

We develop a network model with PO and PC similar to [41], as shown in Figure 5. The PO monitors the net energy flow of each finger's controller. When the energy becomes negative, the PC dissipates the excess energy from that controller. As in [41], energy is defined as the integral of the inner product between conjugate input and output, which may or may not correspond to physical energy. The definition of passivity [41] states that the energy applied to a passive network must be positive for all time. Figure 5 shows a network representation of the energetic behavior of this control system. The block diagram in Figure 5 is partitioned into three elements: the trajectory generator, the controller, and the plant. Each controller corresponds to one finger. Since three robotic fingers are used for planar manipulation, three individual controllers transfer energy to the plant. The connection between the controller and the plant is a physical interface at which conjugate variables (f_i, v_i, where f_i is the force applied by the i-th finger and v_i is the velocity of the i-th finger) define physical energy flow between controller and plant. The forces and velocities are given by

f = [f_1 \;\; f_2 \;\; f_3]^T \qquad (13)

v = [v_1 \;\; v_2 \;\; v_3]^T \qquad (14)

The desired target velocity is obtained by differentiating (8) with respect to time and is given by
\dot{p}_d = [\dot{x}_d \;\; \dot{y}_d]^T \qquad (15)

where \dot{x}_d and \dot{y}_d are the desired target velocities, respectively. The desired target velocity along the direction of actuation of the i-th robotic finger is given by


v_{di} = \dot{p}_d \cdot n_i \qquad (16)

The trajectory generator essentially computes the desired target velocity along the direction of actuation of the robotic fingers. If the directions of actuation of the robotic fingers, n_i, and the desired target velocity, \dot{p}_d, are known with respect to a global reference frame, then the trajectory generator computes the desired target velocity along the direction of actuation of each finger using Equation (16).

The connections between the trajectory generator and the controller, which traditionally consist of a one-way command information flow, are modified by the addition of a virtual feedback of the conjugate variable [41]. For the system shown in Figure 5, the output of the trajectory generator is the desired target velocity, v_{di}, along the direction of the i-th finger, and the output of the controller is calculated from Equation (12).

Fig. 5. Network representation of the control system. α_{1i} and α_{2i} are the adjustable damping elements at each port, i = 1, 2, 3

For both connections, the virtual feedback is the force applied by the robotic fingers. The integral of the inner product between the trajectory generator output (v_{di}) and its conjugate variable (f_i) defines the virtual input energy. The virtual input energy is generated to give a command to the controller, which transmits the input energy to the plant through the controller in the form of real output energy. Real output energy is the physical energy that enters the plant (the deformable object) at the point where the robotic finger is in contact with the object. Therefore, the plant is a three-port system, since three fingers manipulate the object. The conjugate pair that represents the power flow is (f_i, v_i), the force and the velocity of the i-th finger, respectively. The reason for defining virtual input energy is to transfer the source of energy from the controllers to the trajectory generator. Thus the controllers can be represented as two-ports that characterize the energy exchange between the trajectory generator and the plant. Note that the conjugate variables that define power flow are discrete-time values, so the analysis is confined to systems having a sampling rate substantially faster than the system dynamics. For regulating the target position during manipulation, v_{di} = 0. Hence the trajectory generator is passive, since it does not generate energy. However, for target tracking, v_{di} ≠ 0
and f_i ≠ 0. Therefore the trajectory generator is not passive, because it has a velocity source as a power source. It has been shown that even if the system has an active term, stability is guaranteed as long as the active term does not depend on the system states [45]. Therefore, passivity of the plant and controllers is sufficient to ensure system stability. Here, we consider the plant to be passive. Now we design a PO for a sufficiently small time-step T as:
E_i(t_k) = T \sum_{j=0}^{k} \left( f_i(t_j) v_{di}(t_j) - f_i(t_j) v_i(t_j) \right) \qquad (17)

where T is the sampling period and t_j = jT. In normal passive operation, E_i(t_j) should always be positive. When E_i(t_j) < 0, the PO indicates that the i-th controller is generating energy and is going active. The sufficient condition to make the whole system passive can be written as
T \sum_{j=0}^{k} f_i(t_j) v_{di}(t_j) \ge T \sum_{j=0}^{k} f_i(t_j) v_i(t_j), \quad \forall t_k \ge 0, \; i = 1, 2, 3 \qquad (18)

where k denotes the k-th sampling step. The active and passive ports can be recognized by monitoring the conjugate signal pair of each port in real time. A port is active if fv < 0, which means energy is flowing out of the network system, and it is passive if fv ≥ 0, which means energy is flowing into the network system. The input and output energy can be computed as [46]
E^T_{1i}(k) = \begin{cases} E^T_{1i}(k-1) + f_i(k) v_{di}(k) & \text{if } f_i(k) v_{di}(k) > 0 \\ E^T_{1i}(k-1) & \text{if } f_i(k) v_{di}(k) \le 0 \end{cases} \qquad (19)

E^T_{2i}(k) = \begin{cases} E^T_{2i}(k-1) - f_i(k) v_{di}(k) & \text{if } f_i(k) v_{di}(k) < 0 \\ E^T_{2i}(k-1) & \text{if } f_i(k) v_{di}(k) \ge 0 \end{cases} \qquad (20)

E^P_{1i}(k) = \begin{cases} E^P_{1i}(k-1) - f_i(k) v_i(k) & \text{if } f_i(k) v_i(k) < 0 \\ E^P_{1i}(k-1) & \text{if } f_i(k) v_i(k) \ge 0 \end{cases} \qquad (21)

E^P_{2i}(k) = \begin{cases} E^P_{2i}(k-1) + f_i(k) v_i(k) & \text{if } f_i(k) v_i(k) > 0 \\ E^P_{2i}(k-1) & \text{if } f_i(k) v_i(k) \le 0 \end{cases} \qquad (22)

where E^T_{1i}(k) and E^T_{2i}(k) are the energy flowing in and out at the trajectory side of the controller port, respectively, whereas E^P_{1i}(k) and E^P_{2i}(k) are the energy flowing in and out at the plant side of the controller port, respectively. So the time-domain passivity condition is given by

E^T_{1i}(k) + E^P_{1i}(k) \ge E^T_{2i}(k) + E^P_{2i}(k), \quad \forall k \ge 0 \qquad (23)

Net energy output of an individual controller is given by


E_i(k) = E^T_{1i}(k) - E^T_{2i}(k) + E^P_{1i}(k) - E^P_{2i}(k) + \alpha_{1i}(k-1) v_{di}(k-1)^2 + \alpha_{2i}(k-1) v_i(k-1)^2 \qquad (24)

where the last two terms are the energy dissipated at the previous time step; α_{1i}(k-1) and α_{2i}(k-1) are the damping coefficients calculated based on the PO, as discussed below.
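To make the bookkeeping concrete, here is a minimal sketch of the PO of Equations (19)-(24) for a single controller port. The class and variable names are ours, and the sampling period T is folded into each power sample as in Equation (17):

```python
class PassivityObserver:
    """Energy bookkeeping for one controller port, cf. Equations (19)-(24)."""
    def __init__(self, T=0.001):
        self.T = T                                  # sampling period (s)
        self.E1T = self.E2T = 0.0                   # trajectory-side in/out energy
        self.E1P = self.E2P = 0.0                   # plant-side in/out energy
        self.dissipated = 0.0                       # energy removed by the PCs

    def update(self, f, v_d, v, a1=0.0, a2=0.0):
        """One sample: f is the finger force, v_d and v the desired and actual
        velocities, a1/a2 the damping values set at the previous step."""
        pT = self.T * f * v_d                       # trajectory-side power
        pP = self.T * f * v                         # plant-side power
        if pT > 0: self.E1T += pT                   # energy in,  Equation (19)
        else:      self.E2T -= pT                   # energy out, Equation (20)
        if pP < 0: self.E1P -= pP                   # energy in,  Equation (21)
        else:      self.E2P += pP                   # energy out, Equation (22)
        self.dissipated += a1 * v_d**2 + a2 * v**2  # PC dissipation terms
        # Net energy output, Equation (24); a negative value flags an active port.
        return self.E1T - self.E2T + self.E1P - self.E2P + self.dissipated
```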

Passivity Controller (PC)


In order to dissipate the excess energy of the controlled system, a damping force should be applied to its moving parts, depending on the causality of the port. As is well known, such a force is a function of the system's velocities, giving a physical damping action on the system. Mathematically, the damping force is given by
f_d = \alpha v \qquad (25)

where α is the adjustable damping factor and v is the velocity. From this simple observation, it seems necessary to measure and use the velocities of the robotic fingers in the control algorithm in order to enhance performance by controlling the damping forces acting on the system. On the other hand, velocity measurements are not always available, in which case position measurements can be used to estimate velocities and thereby inject damping. When the observed energy becomes negative, the damping coefficient is computed using the following relations (which obey the constitutive Equation (25)). The algorithm used for a 2-port network with impedance causality (i.e., velocity input, force output) at each port is given by the following steps:
1. Two series PCs are designed for the following cases:
Case 1: If E_i(k) ≥ 0, i.e., if the output energy is less than the input energy, there is no need to activate any PC.
Case 2: If E_i(k) < 0, i.e., if the output energy is more than the input energy such that E^P_{2i}(k) > E^T_{1i}(k), then we need to activate only the plant-side PC:

\alpha_{1i}(k) = 0, \quad \alpha_{2i}(k) = -E_i(k) / v_i(k)^2 \qquad (26)

Case 3: Similarly, if E_i(k) < 0 and E^T_{2i}(k) > E^P_{1i}(k), then we need to activate only the trajectory-side PC:

\alpha_{1i}(k) = -E_i(k) / v_{di}(k)^2, \quad \alpha_{2i}(k) = 0 \qquad (27)

2. The contributions of the PCs are converted into power variables as

f_i^t(k) = \alpha_{1i}(k) v_{di}(k), \quad f_i^p(k) = \alpha_{2i}(k) v_i(k) \qquad (28)

3. The modified outputs are

f_i^T(k) = f_i(k) + f_i^t(k), \quad f_i^P(k) = f_i(k) + f_i^p(k) \qquad (29)

where f_i^t(k) and f_i^p(k) are the PC outputs at the trajectory and plant sides of the controller ports, respectively, and f_i^T(k) and f_i^P(k) are the modified outputs at the trajectory and plant sides of the controller ports, respectively.
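Combining steps 1-3, the per-port PC logic reduces to a few lines. This sketch uses the PassivityObserver above; the side-selection tests follow Cases 2 and 3, and the small tolerance eps is our addition to avoid division by zero:

```python
def passivity_control(po, E, f, v_d, v, eps=1e-9):
    """One PC step for one port, cf. Equations (26)-(29). Returns the
    modified trajectory- and plant-side forces and the damping values."""
    a1 = a2 = 0.0
    if E < 0.0:                                     # controller went active
        if po.E2P > po.E1T and abs(v) > eps:        # Case 2: plant-side PC
            a2 = -E / v**2                          # Equation (26)
        elif po.E2T > po.E1P and abs(v_d) > eps:    # Case 3: trajectory-side PC
            a1 = -E / v_d**2                        # Equation (27)
    f_t = a1 * v_d                                  # PC power variables, (28)
    f_p = a2 * v
    return f + f_t, f + f_p, a1, a2                 # modified outputs, (29)
```

At each sample, the returned α_{1i} and α_{2i} are fed back into the observer's next update so that the dissipated energy is credited in Equation (24).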

7. Simulation and discussion


We perform extensive simulations of positioning an internal target point at a desired location in a deformable object using external robotic fingers to demonstrate the feasibility of the concept. We discretize the deformable object with mass-spring-damper elements. We choose m = 0.006 kg for each point mass, k = 10 N/m for the spring constant, and c = 5 Ns/m for the damping coefficient. With this parameter setup, we present four different simulation tasks.

Task 1:
In Task 1, we present the optimal contact locations of various objects using three robotic fingers such that an internal target point is positioned to the desired location with minimum force. The optimal contact locations are computed using Equation (6) as shown in Figures 6 to 8. In these figures, the base of the arrow represents the initial target location and the arrow head denotes the desired location of the target point. The contact locations are depicted by the bold red dots on the periphery of the deformable object. Note that in determining the optimal contact locations, we introduced minimum angle constraints between any two robotic fingers to achieve a physically realizable grasping configuration.


Fig. 6. Optimal contact locations (θ_1, θ_2, θ_3): (a) 59.98°, 204.9°, 244.9°; (b) 14.96°, 159.9°, 199.9°; (c) 7.54°, 182.54°, 327.54°; and (d) 48.59°, 88.59°, 234.39°


Fig. 7. Optimal contact locations (θ_1, θ_2, θ_3): (a) 0°, 170°, 253.8°; (b) 29.07°, 116.93°, 233.86°; (c) 0°, 175°, 320°; and (d) 76.93°, 116.93°, 261.94°


Fig. 8. Optimal contact locations (θ_1, θ_2, θ_3): (a) 25.18°, 199.48°, 262.22°; (b) 0°, 175°, 262.62°; (c) 141.05°, 303.66°, 343.66°; and (d) 96.37°, 169.35°, 288.29°


Task 2:
In Task 2, we present a target positioning operation when the robotic fingers are not located at their optimal contact locations. For instance, we place the robotic fingers at 0, 120, and 240 degrees with respect to the x-axis, as shown in Figure 9. We assume that the initial position of the target is at the center of the section of the deformable object, i.e., (0, 0) mm. The goal is to position the target at the desired location (5, 5) mm along a smooth straight-line trajectory.

Fig. 9. Deformable object with contact points located at 0, 120 and 240 degrees with respect to x-axis

Fig. 10. The desired (red dashed) and the actual (blue solid) straight lines when robotic fingers are located at 0, 120, and 240 degrees with respect to x-axis

In this simulation, we choose K_{Pi} = 1000 and K_{Ii} = 1000, i = 1, 2, 3. Figure 10 shows the actual and desired position trajectories of the target point. Some error is present in the tracking of the desired trajectory. The finger forces generated by the PI controller are presented in Figure 11, and the POs fall negative, as shown in Figure 12. Negative values of the POs signify that the interaction between the robotic fingers and the deformable object is not stable.
Fig. 11. Controller forces when robotic fingers are located at 0, 120, and 240 degrees with respect to x-axis

Fig. 12. POs when robotic fingers are located at 0, 120, and 240 degrees with respect to x-axis


Task 3:
In Task 3, we consider the same task as in Task 2, but the robotic fingers are positioned at their optimal contact locations (Figure 8(a)) and the target follows the desired straight-line trajectory. In this case, the PCs are not turned on while performing the task. A simple position-based PI controller generates the control command based on the error between the desired and the actual locations of the target. Figure 13 shows that the target tracked the desired position trajectory.

Fig. 13. The desired (red dashed) and the actual (blue solid) straight lines when PCs are not turned on

Fig. 14. Controller forces when PCs are not turned on



Fig. 15. (a) POs for the three robotic fingers when PCs are not turned on; (b) a magnified version of (a) for the first few seconds


The finger forces generated by the PI controller are presented in Figure 14. The force values in Figure 14 are considerably smaller than those in Figure 11 because of the optimal contact locations of the robotic fingers. However, the POs for robotic fingers 2 and 3 become negative, as shown in Figure 15. Negative values of the POs signify that the output energy of the 2-port network is greater than the input energy. Since the plant is considered to be passive, the only possible source of the extra energy is the controller, which makes the whole system unstable. We must therefore engage the passivity controllers to modify the controller outputs by dissipating the excess energy.

Task 4:
In Task 4, the PCs are turned on and the robotic fingers are commanded to effect the desired motion of the target. The PCs are activated when the POs cross zero from a positive value. The required damping forces are generated to dissipate only the excess energy generated by the controller. In this case, the target tracks the desired straight-line trajectory well, with the POs remaining positive. Figure 16 shows the actual and the desired trajectories of the target when the PCs are turned on. In this case, only the PCs on the plant side are activated, whereas the PCs on the trajectory side remain idle. Figure 17 shows the PC forces generated at the plant side when the POs cross zero. The POs remain positive during the interaction between the robotic fingers and the object, as shown in Figure 18. Hence, the stability of the overall system is guaranteed. The PC forces on the trajectory side, shown in Figure 19, are all zero. The modified controller outputs that move the target point are shown in Figure 20.


Fig. 16. The desired (red dashed) and the actual (blue solid) straight lines when PCs are turned on



Fig. 17. Required forces supplied by PCs at the plant side when PCs are turned on


Fig. 18. POs for three robotic fingers when PCs are turned on



Fig. 19. PC forces at the trajectory side when PCs are turned on

Fig. 20. Modified controller forces when PCs are turned on


8. Conclusion and future work


In this chapter, an optimal contact formulation and a control scheme were presented in which a deformable object is manipulated externally by three robotic fingers such that an internal target point is positioned at a desired location. First, we formulated an optimization technique to determine the contact locations around the periphery of the object so that the target can be manipulated with minimum force applied to the object. The optimization technique requires a model of the deformable object. However, it is difficult in general to build an exact model of a deformable object, due to nonlinear elasticity, friction, parameter variations, and other uncertainties. Therefore, we used a coarse model of the deformable object to determine the optimal contact locations, which is more practical. A time-domain passivity control scheme with adjustable dissipative elements was developed to guarantee the stability of the whole system. Extensive simulation results validate the optimal contact formulation and the stable interaction between the robotic fingers and the object.

9. References
[1] D. Sun and Y. H. Liu, Modeling and impedance control of a two-manipulator system handling a flexible beam, ASME Journal of Dynamic Systems, Measurement, and Control, vol. 119, pp. 736-742, 1997.
[2] P. Dang, F. L. Lewis, K. Subbarao and H. Stephanou, Shape control of flexible structure using potential field method, 17th IEEE International Conference on Control Applications, Texas, USA, pp. 540-546, 2008.
[3] X. Liu, K. Kim, Y. Zhang and Y. Sun, Nanonewton force sensing and control in microrobotic cell manipulation, The International Journal of Robotics Research, vol. 28, issue 8, pp. 1065-1076, 2009.
[4] V. G. Mallapragada, N. Sarkar and T. K. Podder, Robot-assisted real-time tumor manipulation for breast biopsy, IEEE Transactions on Robotics, vol. 25, issue 2, pp. 316-324, 2009.
[5] S. Hirai and T. Wada, Indirect simultaneous positioning of deformable objects with multi pinching fingers based on uncertain model, Robotica, Millennium Issue on Grasping and Manipulation, vol. 18, pp. 3-11, 2000.
[6] S. Nath, Z. Chen, N. Yue, S. Trumpore and R. Peschel, Dosimetric effects of needle divergence in prostate seed implant using 125I and 103Pd radioactive seeds, Medical Physics, vol. 27, pp. 1058-1066, 2000.
[7] A. Albu-Schaffer, C. Ott and G. Hirzinger, Constructive energy shaping based impedance control for a class of underactuated Euler-Lagrange systems, IEEE International Conference on Robotics and Automation, pp. 1387-1393, 2005.
[8] B. Hannaford and J.-H. Ryu, Time-domain passivity control of haptic interfaces, IEEE Transactions on Robotics and Automation, vol. 18, pp. 1-10, 2002.
[9] S. Ali A. Moosavian and R. Rastegari, Multiple-arm space free-flying robots for manipulating objects with force tracking restrictions, Robotics and Autonomous Systems, vol. 54, issue 10, pp. 779-788, 2006.
[10] Z. Li, S. S. Ge and Z. Wang, Robust adaptive control of coordinated multiple mobile manipulators, Mechatronics, vol. 18, issues 5-6, pp. 239-250, 2008.


[11] M. Namvar and F. Aghili, Adaptive force-motion control of coordinated robots interacting with geometrically unknown environments, IEEE Transactions on Robotics, vol. 21, issue 4, pp. 678-694, 2005.
[12] M. Saha and P. Isto, Manipulation planning for deformable linear objects, IEEE Transactions on Robotics, vol. 23, issue 6, pp. 1141-1150, 2007.
[13] Y. Zhang, B. K. Chen, X. Liu and Y. Sun, Autonomous robotic pick-and-place of microobjects, IEEE Transactions on Robotics, vol. 26, issue 1, pp. 200-207, 2010.
[14] D. Sun and Y. H. Liu, Modeling and impedance control of a two-manipulator system handling a flexible beam, ASME Journal of Dynamic Systems, Measurement, and Control, vol. 119, no. 4, pp. 736-742, 1997.
[15] A. Tavasoli, M. Eghtesad and H. Jafarian, Two-time scale control and observer design for trajectory tracking of two cooperating robot manipulators moving a flexible beam, Robotics and Autonomous Systems, vol. 57, issue 2, pp. 212-221, 2009.
[16] S. F. F. Gibson and B. Mirtich, A survey of deformable modeling in computer graphics, MERL Technical Report, TR97-19, 1997.
[17] Y. Zhang, E. C. Prakash and E. Sung, A new physical model with multilayer architecture for facial expression animation using dynamics adaptive mesh, IEEE Transactions on Visualization and Computer Graphics, vol. 10, issue 3, pp. 339-352, 2004.
[18] M. Meyer, G. Debunne, M. Desbrun and A. H. Barr, Interactive animation of cloth-like objects in virtual reality, The Journal of Visualization and Computer Animation, vol. 12, no. 1, pp. 1-12, 2001.
[19] A. Joukhadar and C. Laugier, Dynamic simulation: model, basic algorithms, and optimization, in J.-P. Laumond and M. Overmars, editors, Algorithms for Robotic Motion and Manipulation, pp. 419-434, A. K. Peters Ltd., 1997.
[20] A. M. Howard and G. A. Bekey, Recursive learning for deformable object manipulation, Proc. of International Conference on Advanced Robotics, pp. 939-944, 1997.
[21] R. M. Koch, M. H. Gross, F. R. Carls, D. F. Von Buren, G. Fankhauser and Y. I. H. Parish, Simulating facial surgery using finite element models, in ACM SIGGRAPH 96 Conference Proceedings, pp. 421-428, 1996.
[22] S. D. Pieper, D. R. Laub, Jr. and J. M. Rosen, A finite element facial model for simulating plastic surgery, Plastic and Reconstructive Surgery, 96(5), pp. 1100-1105, Oct. 1995.
[23] J. K. Salisbury, Kinematic and force analysis of articulated hands, PhD Thesis, Department of Mechanical Engineering, Stanford University, Stanford, CA, 1982.
[24] J. Kerr, Analysis of multifingered hands, PhD Thesis, Department of Mechanical Engineering, Stanford University, Stanford, CA, 1984.
[25] J. M. Abel, W. Holzmann and J. M. McCarthy, On grasping planar objects with two articulated fingers, IEEE Journal of Robotics and Automation, vol. 1, pp. 211-214, 1985.
[26] M. R. Cutkosky, Grasping and fine manipulation for automated manufacturing, PhD Thesis, Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, 1985.
[27] C. Ferrari and J. Canny, Planning optimal grasps, IEEE International Conference on Robotics and Automation, pp. 2290-2295, 1992.
[28] S. Garg and A. Dutta, Grasping and manipulation of deformable objects based on internal force requirements, International Journal of Advanced Robotic Systems, vol. 3, no. 2, pp. 107-114, 2006.


[29] T. Watanabe and T. Yoshikawa, Grasping optimization using a required external force set, IEEE Transactions on Automation Science and Engineering, vol. 4, pp. 52-66, 2007.
[30] D. Ding, Y. H. Liu and S. Wang, Computation of 3D form-closure grasps, IEEE Transactions on Robotics and Automation, vol. 17, pp. 515-522, 2001.
[31] A. Bicchi and V. Kumar, Robotic grasping and contact: a review, Proc. IEEE International Conference on Robotics and Automation, pp. 348-353, 2000.
[32] M. T. Mason, Mechanics of robotic manipulation, The MIT Press, 2001.
[33] J. Cornella, R. Suarez, R. Carloni and C. Melchiorri, Dual programming based approach for optimal grasping force distribution, Mechatronics, vol. 18, pp. 348-356, 2008.
[34] P. Saut, C. Remond, V. Perdereau and M. Drouin, Online computation of grasping force in multi-fingered hands, Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1223-1228, 2005.
[35] K. Gopalakrishnan and K. Goldberg, D-space and deform closure grasps of deformable parts, International Journal of Robotics Research, vol. 24, pp. 889-910, 2005.
[36] G. L. Foresti and F. A. Pellegrino, Automatic visual recognition of deformable objects for grasping and manipulation, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 34, pp. 325-333, 2004.
[37] H. Wakamatsu, S. Hirai and K. Iwata, Static analysis of deformable object grasping based on bounded force closure, IEEE International Conference on Robotics and Automation, vol. 4, pp. 3324-3329, 1996.
[38] R. J. Anderson and M. W. Spong, Asymptotic stability for force reflecting teleoperators with time delays, IEEE International Conference on Robotics and Automation, vol. 3, pp. 1618-1625, 1989.
[39] S. Stramigioli, C. Melchiorri and S. Andreotti, A passivity-based control scheme for robotic grasping and manipulation, The 38th Conference on Decision and Control, Phoenix, Arizona, USA, 1999.
[40] B. Hannaford and J.-H. Ryu, Time domain passivity control of haptic interfaces, IEEE International Conference on Robotics and Automation, vol. 2, pp. 1863-1869, 2001.
[41] J.-H. Ryu, D.-S. Kwon and B. Hannaford, Control of a flexible manipulator with noncollocated feedback: time-domain passivity approach, IEEE Transactions on Robotics, vol. 20, pp. 776-780, 2004.
[42] T. Wada, S. Hirai, S. Kawamura and N. Kamiji, Robust manipulation of deformable objects by a simple PID feedback, IEEE International Conference on Robotics and Automation, vol. 1, pp. 85-90, 2001.
[43] V. D. Nguyen, Constructing force-closure grasps, IEEE International Conference on Robotics and Automation, pp. 1368-1373, 1986.
[44] J. Ponce and B. Faverjon, On computing three-finger force-closure grasps of polygonal objects, Fifth International Conference on Advanced Robotics, vol. 2, pp. 1018-1023, 1991.
[45] J. E. Colgate and N. Hogan, Robust control of dynamically interacting systems, International Journal of Control, vol. 48, pp. 65-88, 1988.
[46] J.-H. Ryu and C. Preusche, Stable bilateral control of teleoperators under time-varying communication delay: time domain passivity approach, IEEE International Conference on Robotics and Automation, pp. 3508-3513, 2007.

3
Novel Assistive Robot for Self-Feeding
Won-Kyung Song and Jongbae Kim
Korea National Rehabilitation Research Institute, Korea National Rehabilitation Center, Korea

1. Introduction
Assistive robots, with which users can interact directly, have attracted worldwide attention. They can assist people with disabilities and older persons in the activities of daily living, and could be employed to improve quality of life as populations change demographically. There are several crucial issues to be considered with regard to these robots, such as customizing them according to the specific culture of the users as well as ensuring cost-effectiveness (Mann, 2005).

In Korea, the official number of registered people with disabilities due to illnesses, injuries, and the natural aging process has already exceeded two million (Employment Development Institute, 2009). More than one-third of these disabled people are elderly. Moreover, due to longer life spans and a declining birthrate, the elderly make up over 10% of the population in Korea. As a result, effective caregiving with restricted resources is an urgent problem.

In order to achieve efficient caregiving for people with disabilities and elderly persons, caregivers must physically interact with them. For example, caregivers have to assist people in performing the routine activities of their daily lives, such as eating, changing clothes, changing their posture, moving from one location to another, and bathing. Among these activities, eating meals is one of the most essential. In this regard, caregivers must interact with people frequently to assist with food selection, feeding interval, etc. Existing robotic technologies can be utilized to take over these functions of the caregivers. Thus, assistive robots represent one solution by which disabled or elderly people can receive support for performing the activities of daily life.

The design of assistive robots to help with self-feeding depends strongly on the specific culture of the user. Korean food consists chiefly of boiled rice, soup, and side dishes such as Kimchi. The procedure of having a meal is as follows: the user eats the boiled rice first and then the side dishes, and these steps are performed repetitively. In comparison with foreign boiled rice, Korean boiled rice sticks together very well after cooking, and handling this sticky boiled rice can be problematic. In addition, Korean soup includes meat, noodles, and various vegetables; thus, existing feeding robots find it difficult to handle Korean foods.

Various assistive robots have been developed since the late 1980s, as shown in Fig. 1. Handy1 (Topping & Smith, 1999) is an assistive robot for daily activities such as eating, drinking, washing, shaving, teeth cleaning, and applying make-up. Handy1 consists of a five-DOF (degree-of-freedom) robot, a gripper, and a tray unit. The major function of Handy1 is to help with eating. Handy1 allows a user to select food from any part of the tray. A cup is attached to enable users to drink water with their meal. The walled columns of a
food dish serve an important purpose: the food can be scooped onto the dish without mixing of the food items.

Fig. 1. Feeding systems: (a) Handy1, (b) Winsford feeder, (c) Neater Eater, (d) My Spoon, (e) Meal Buddy, (f) Mealtime Partner Dining System

The Winsford feeder (Sammons Preston; Hermann et al., 1999) is a mechanical self-feeding system. It uses a mechanical pusher to fill a spoon and a pivoting arm to raise the spoon to the user's mouth, which is at a preset position. The plate is rotated to place more food in front of the pusher. The user can choose from two input devices: a chin switch and a rocker switch. Neater Eater (Neater Solutions) has two versions: a manual-operation-type and an automatic-operation-type system. Neater Eater consists of a two-DOF arm and one dish. Two types of food can be present on the one dish. The manual type can be used to suppress the tremors of a user's upper limbs while he or she eats.

My Spoon (Soyama et al., 2003) is suitable for Japanese food. It consists of a five-DOF manipulator, a gripper, and a meal tray. The meal tray has four rectangular cells. My Spoon combines several pre-programmed motions: automatic operation, semiautomatic operation, and manual operation. The semiautomatic operation allows a user to select food. The manual operation can change the position in which the food is held. The input device can be selected from among the following: a chin joystick, a reinforcement joystick, and a switch. The end-effector of the robotic arm has one spoon and one fork, which move together to realize the grasping motion. During the grasping process, the gap between the spoon and the fork changes, and thus the end-effector grasps the food. Then the robot moves to a predefined position in front of the user's mouth, and the fork moves backward to enable the user to eat the food off the spoon.

Meal Buddy (Sammons Preston) has a three-DOF robotic arm and three bowls that can be mounted on a board using magnets. After the system scoops the food, the robotic arm scrapes the surplus food off the spoon with the rod on the bowls.


The Mealtime Partner Dining System (Mealtime Partners) is positioned in front of a user's mouth. Three bowls can rotate in front of the mouth. The spoon picks up the food and then moves a short distance toward the preset location of the mouth. This system reduces the chance of the spoon slipping on wet food, because the underside of the spoon is wiped off after scooping. Because of the way the system is positioned, the user does not need to lean toward the feeder. In some systems, a beverage straw is located beside the spoon (Pourmohammadali, 2007). Other systems are designed for multiple users (Guglielmelli, 2009).

Most feeding systems scoop the food with a spoon. Those systems are not suitable for boiled rice, which is a staple food in Korea. In addition, some systems have a single dish, and thus different types of food might be mixed during scooping. My Spoon uses a grasping function to pick up food, but it has difficulty serving Korean rice due to the fixed grasping strength and opening width of its gripper. As a result, My Spoon's gripper sometimes gets a lot of rice stuck to its surface. The previously mentioned self-feeding robotic systems thus have difficulty handling this staple Korean food.

Feeding robots enable users to enjoy food independently during mealtimes. After a caregiver prepares the food, users can choose when they want to eat the desired food. We developed an assistive robot for self-feeding by taking into consideration the feedback of user candidates and clinical experts, and we evaluated the self-feeding robot in a series of user tests. The overall process, i.e., formulating a concept, design, and evaluation, involves feedback from users and clinical experts. The development process follows the philosophy of participatory action design (Ding et al., 2007).

In this paper, we introduce a newly designed self-feeding robot that is suitable for Korean food, including sticky rice, and we report the results of tests performed with several disabled people. In Section 2, we present an overview of the new self-feeding robot for Korean food. Basic operational procedures of the self-feeding robot are presented in Section 3. Section 4 contains the results and discussion of tests performed with users with spinal cord injuries. Finally, we present the conclusion in Section 5.

2. Self-feeding robot system


In this section, we present an overview of the users, requirements, and system configuration of the self-feeding robot system.

2.1 Users

The primary users of self-feeding robots are people with physical disabilities who have difficulty moving their upper limbs, including those with high-level spinal cord injuries, cerebral palsy, and muscular diseases. For example, people with cervical level-4 spinal cord injuries have difficulty moving their upper limbs and retain full movement only above their necks. Some people with cerebral palsy cannot move their arms and hands, and they often have difficulty moving their necks. When the spoon of a self-feeding robot approaches such a user's mouth, that person has a hard time taking the food into his or her mouth. People with muscular diseases have weak muscle movements; even though they can move their hands, they have limited motor function in their elbows and shoulder joints. We can also include senior citizens who have difficulties with the motor
functions of their upper limbs, e.g., the frail elderly, among the abovementioned target users. It is clear that the overall number of target users of self-feeding robots will grow in the near future.

2.2 Requirements of a self-feeding robot

We surveyed a group of people with disabilities as well as clinical experts to learn about the requirements of a feeding robot (Song et al., 2010a, 2010b). The focus group consisted of a person with a spinal cord injury and a person with cerebral palsy. The clinical experts included occupational therapists and medical doctors of physical medicine and rehabilitation. The major findings of the survey are as follows.

Firstly, a user should be able to control the feeding interval for the desired food. In caregiving, one of the common problems is the difficulty of controlling the feeding interval. People with spinal cord injury are able to talk quickly and can therefore manage a short feeding interval. However, people with cerebral palsy have difficulty expressing their intentions quickly when the feeding interval is too short.

Fig. 2. Korean food on a standard food container. (From the lower left-hand side going counterclockwise: rice, soup, and side dishes.)

Secondly, the specialists and the user candidates believe that existing feeding systems are designed more for western-style food. Those systems are not suitable for Korean food, which includes boiled rice, soup, and side dishes. A user eats one of the side dishes and then the boiled rice, in turn, and these steps are performed repetitively during mealtime. In comparison with foreign boiled rice, Korean boiled rice sticks together very well after cooking, and one of the problems for self-feeding systems is handling this sticky boiled rice. In addition, Korean soup includes meat, noodles, and various vegetables. Therefore, existing feeding robots have difficulty handling Korean foods (Fig. 2).

Thirdly, a feeding robot should be suitable for use in private homes and in care facilities. From an economic point of view, a feeding robot is effective in facilities that house many persons with upper-limb disabilities. Such facilities do not have enough caregivers to help with feeding, due to the heavily time-consuming nature of this task; a robot thus reduces the burden of caregiving for feeding. A feeding robot can also be used in an ordinary home to improve the quality of life of the users and their families. Members of a family can face each other and freely enjoy talking, and other members of the family can go out for a few hours because they are freed from the burden of having to help with feeding.

The location of the bowls or tray is another important factor. According to Korean culture, the location of the bowls or tray is strongly related to the dignity of the person. A simple feeding
system can be made with the bowls located in front of a user's mouth. However, some senior user candidates hate having the bowls right in front of their mouths; they prefer to eat the food like ordinary people. Thus, we focus mainly on using a tabletop tray.

Other comments of the user candidates are as follows: plain machines that can serve simple dishes with water are required; when a caregiver goes out for a while, a user needs to be able to eat cereal with milk with the help of the self-feeding robot; the water supply tool should be located next to the user's body; the meal tray must have a cover to protect the food from dust contamination; the cost should be reasonable, e.g., between US$1800 and $2700 (obviously, a low-cost system is preferred); the feeding robot should be able to deal with noodles; the feeding robot should accommodate the posture of the user; and finally, the feeding robot should be lightweight.

We concentrate on rice handling and do not take the handling of soup into account in this development; the requirements of feeding Korean soup will be addressed in a future version's design. Instead of handling Korean soup via the self-feeding robot, a user drinks soup stored in a cup. Generally, we assume that a user drinks soup or water through a straw.

Technically, we considered four types of feeding robots in order to ensure that the robot can grip and release boiled rice effectively, as shown in Fig. 3. In the first concept, a number of bowls are located in front of a user's mouth, and the food is presented by the spoon with a short traveling distance. For example, if there are three bowls, one bowl has rice and two bowls have side dishes. However, two side dishes are not enough to constitute a satisfying meal; in general, Korean people eat three or four side dishes with boiled rice at a time. Therefore, we need four or five bowls.

Fig. 3. Design concepts of a feeding robot: (a) first concept, (b) second concept, (c) third concept, (d) fourth concept. (The third concept is chosen.)

In the second concept, the bowls are located above and in front of a user's mouth, and the food drops from the bottom of a bowl. The food is placed on the spoon by a dropping motion, and then the spoon approaches the user's mouth. This method requires a mechanism for dropping food onto the spoon, and could be suitable for a bite-sized rice cake.


In the third concept, the system with a food tray is located on a table. The robotic arm picks up food and then moves it to a user's mouth. These tasks are divided into two steps: picking up the food and moving the food to the user's mouth. Two arms can be used to perform these two tasks, respectively. One of the user candidates pointed out the importance of easy installation of the feeding robot, especially for a dual-arm manipulator. This is significant because some caregivers might be elderly people who are not familiar with brand-new machines. Finally, in the fourth concept, one bowl is located in front of the user's mouth, and mixed food with rice is loaded in that bowl. Some users do not like mixed food, even though they do prefer a simple feeding system. We decided on the third concept, which is located on a table, based on the opinions of the specialists and user candidates.

2.3 Design of the feeding robot

We have developed a simple robotic system with a dual-arm manipulator that can handle Korean food, such as boiled rice, in an ordinary food container, as shown in Fig. 4. We divide the self-feeding task into two subtasks: picking up/releasing food and transferring food to the user's mouth. The first robotic arm (the spoon-arm, Arm #1) uses a spoon to transfer the food from a container on a table to the user's mouth. The second robotic arm (the grab-arm, Arm #2) picks food up from a container and then puts it on the spoon of the spoon-arm.


Fig. 4. Assistive robot for self-feeding. The spoon-arm (Arm #1) uses a spoon to transfer the food from a container to a user's mouth. The grab-arm (Arm #2) picks up the food from a container and then loads it onto the spoon of Arm #1

The two arms have different functions, so the designs of their end-effectors can be chosen accordingly. To pick up or release the food stably, the grab-arm can use an odd-shaped gripper, as shown in the bottom left-hand side of Fig. 4, because that gripper does not need to approach a user's mouth. However, the end-effector of the spoon-arm has an
intact round-shaped spoon to serve food to a user's mouth safely. If an end-effector had an unusual shape, it might pose a danger to the user as it approaches his or her face. The two proposed arms with their different end-effectors mimic Korean eating behavior. Specifically, Koreans use a spoon and steel chopsticks during mealtime, as shown in Fig. 5 (a) and (b). Some people use a spoon and chopsticks simultaneously [Fig. 5 (c)]. In the designed system, the gripper of the grab-arm and the spoon of the spoon-arm take on the roles of the chopsticks and the spoon, respectively. Many Korean caregivers pick up food with chopsticks and then put it on a spoon in order to serve food to users such as children and patients. In that sense, the proposed two-armed system stems from Korean eating tools.


Fig. 5. (a) A spoon for scooping food. (b) Chopsticks for picking up food. (c) A person using a spoon and chopsticks

The spoon-arm has two degrees of freedom (DOF) in order to transfer food on the spoon without changing the orientation of the spoon. The grab-arm includes a three-DOF SCARA joint for planar motion, a one-DOF prismatic joint for up-and-down motion, and a gripper. The overall number of DOFs of the dual arm, excluding the gripper, is six, as shown in Fig. 6. The feeding robot can use an ordinary cafeteria tray.

Fig. 6. The joint configuration of the novel feeding robot for Korean foods. P1 (Prismatic Joint #1) is optionally applied. R = revolute, P = prismatic

The feeding robot uses a microcontroller unit to control the spoon-arm and the grab-arm, as shown in Fig. 7. We add a small-sized PC with a touch screen to enable the user to enjoy
entertainment and to test various kinds of user interfaces. During mealtime, a user may want to enjoy multimedia such as movies or music. In addition, the small-sized PC has a Windows operating system, so we can readily add assistive devices for human-computer interaction, e.g., switches, a joystick, and biosignal interface devices.

Fig. 7. Block diagram of the feeding robot

The microcontroller unit allows a user or a caregiver to select the following: the operation mode (automatic/semiautomatic/manual), the shape and size of a container, the location of the mouth, the robot's speed, the time to stay in front of the mouth, and so on. Depending on the type of food, a user also selects the divided grasping region in each container and the grasping strength of the gripper of the grab-arm. Our system will be capable of selecting all of the above parameters, and a user can save the parameters for various kinds of food. We expect that, in a community, different members will be able to use their robots more effectively by exchanging their individual parameters via the Internet. The grasping regions of boiled rice in a bowl can be defined in 3D space, because the bowl should be over 50 mm in height. The grasping volumes of the dishes can be defined as shown in Fig. 8. Our team is building a prototype of the proposed feeding robot; Fig. 9 shows its designed appearance.
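The user-selectable parameters listed above naturally form a per-food profile that can be saved and exchanged, as the chapter envisions. A minimal sketch (the field names are illustrative assumptions of ours, not the actual controller interface):

```python
from dataclasses import dataclass

@dataclass
class FoodProfile:
    """Per-food feeding parameters a user could save and exchange."""
    name: str
    container_shape: str        # e.g. "round bowl", "rectangular cell"
    grasp_region: int           # index of the divided grasping region
    grip_strength: float        # gripper closing force, N
    arm_speed: float            # fraction of maximum speed, 0..1
    dwell_time_s: float         # time the spoon stays in front of the mouth

rice = FoodProfile("boiled rice", "round bowl", 3, 2.0, 0.6, 4.0)
```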

Fig. 8. The definition of grasping volume in containers In order to use a conventional food container, we decided that the length of links of the grab-arm should cover the whole area of a food container. A grab-arm is located behind a container or on the left-hand side of a container. A grab-arm can chiefly be located behind a food container on a standard table, as shown in Fig. 10. The lap board of a bed does not

Novel Assistive Robot for Self-Feeding

51

provide enough space behind a food container; therefore, the grab-arm should be located on the left-hand side of a food container.

Fig. 9. The design of a novel feeding robot for Korean food. A spoon-arm (lower left-hand-side figure) for transferring food and a grab-arm (lower right-hand-side figure) for picking up and releasing food

The spoon-arm has two additional variables: the motorized prismatic motion toward a user's mouth, and the manual change of the link length between the first axis and the second axis of the grab-arm. Fig. 11 shows the overall workspace of the spoon of the spoon-arm. In accordance with the position of a user's mouth, the predefined location in front of the mouth is adjusted when the system is installed. The height of the spoon on the spoon-arm is 250-381 mm with respect to the surface of a table. The height of the spoon-arm depends on the table height. We assume that the height of a table is 730-750 mm, so the spoon can be located at a height of 980-1131 mm with respect to the ground. According to statistics on the Korean disabled, the height of a user's mouth is about 1018 mm (female) and 1087 mm (male), as shown in Table 1. Thus, the height of the spoon corresponds with the height of the user's mouth.

Item                                     Male    Female
Sitting height on a wheelchair           1261    1187
Distance from the crown to the mouth      174     169
Mouth height on a wheelchair             1087    1018

Table 1. Statistics of wheelchair users (unit: mm)


Fig. 10. The configuration of a grab-arm

Fig. 11. The work space of a spoon-arm

3. Basic operation of the self-feeding robot


We built two arm configurations of the developed self-feeding robot: a dual-arm configuration and a single-arm configuration. A dual-arm configuration follows an original design concept using both a spoon-arm and a grab-arm. A single-arm configuration uses


only the spoon-arm, and a caregiver takes the role of the grab-arm. We explain the two arm configurations of the feeding robot in the following sections.

3.1 Dual-arm configuration
The dual-arm robotic system is applied in accordance with the original design concept. Once a caregiver prepares the food, users can eat it according to their own intentions. The grab-arm picks up the desired food from a food container, and the arm releases the food on the spoon of the spoon-arm. The spoon-arm moves the spoon to the user's mouth. Then the user can eat the food on the spoon.

Fig. 12. The sequential motions of the self-feeding robot in dual-arm configuration. From the top left-hand side, the robot puts the gripper into a bowl of water and then grasps rice

The self-feeding robot has three operation modes: an automatic mode, a semiautomatic mode, and a manual mode. The automatic mode has a fixed serving sequence of dishes; users only push a start button when they want to eat the next food on the spoon. In the semiautomatic mode, users can choose the dishes on the basis of their intention; in this mode, they can have the dishes that they want to eat. In the manual mode, the user can choose food and control the posture of the robot. In all three modes, the user can select the feeding timing, i.e., when they want to eat.
In experiments on handling boiled rice, we observed that releasing rice is as important as picking it up. The stickiness of boiled rice changes depending on its temperature: slightly cool rice is difficult to release from the gripper. In order to solve this problem, the feeding robot automatically puts the gripper of the grab-arm into water before grasping the food. The water is located in a bowl next to the rice. When this is done, the gripper can release the rice on the spoon because the stickiness of the rice has decreased. Fig. 12 shows the whole operation of the self-feeding robot.
The amount of rice picked up is adjusted on the basis of actual experiments on rice grasping. The gripper's mechanism is a simple opening/closing of the gripper fingers via a linear actuator.


The weight of rice picked up in one grasping motion increases with the open/close width of the gripper fingers at the start of grasping, as shown in Fig. 13. The default open/close width of the gripper is 32 mm in order to grasp an average of 10 g of rice. The closing width of the gripper determines the grasping force applied to the food. Thus, we can grasp various foods by adjusting the open/close width of the gripper.
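As a rough illustration of how the width-to-mass relation of Fig. 13 might be used, the sketch below interpolates between calibration points to choose an open/close width for a target grasp mass. Except for the reported default (32 mm for about 10 g of rice), the calibration numbers are invented for illustration.

# Hypothetical calibration of grasp mass (g) vs. gripper open/close width (mm).
# Only the 32 mm -> ~10 g point is reported in the text; the others are invented.
CALIBRATION = [(24.0, 6.0), (28.0, 8.0), (32.0, 10.0), (36.0, 12.5)]

def width_for_mass(target_g):
    """Linearly interpolate the open/close width for a target grasp mass."""
    pts = sorted(CALIBRATION, key=lambda p: p[1])
    if target_g <= pts[0][1]:
        return pts[0][0]
    for (w0, m0), (w1, m1) in zip(pts, pts[1:]):
        if target_g <= m1:
            return w0 + (w1 - w0) * (target_g - m0) / (m1 - m0)
    return pts[-1][0]

print(width_for_mass(10.0))  # -> 32.0, the reported default width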


Fig. 13. Amount of rice in a single grasp as a function of the open/close width of the gripper fingers

3.2 Single-arm configuration
The spoon-arm can be used independently, without the grab-arm, as shown in Fig. 14. The caregiver manually picks up the food on a tray and puts the food on the spoon of the spoon-arm.

Fig. 14. The motions of the self-feeding robot in single-arm configuration. The caregiver picks food up on the spoon of the self-feeding robot, and then the user presses the button to receive the food


The next step is similar to that of the dual-arm configuration. A caregiver only provides the user with food when the spoon is empty in the home position. From the perspective of the caregiver, he or she can reduce the amount of time needed to check or wait while the user is chewing. From the perspective of the user, the food can be eaten when he or she wants to eat. Although a user may have difficulty choosing food in an automatic manner, he or she can chew the food sufficiently without worrying about the next spoon serving from a caregiver. From an economical point of view, the single-arm configuration has a lower cost in comparison with the dual-arm configuration. Applying a spoon-arm has advantages in facilities such as hospitals or nursing homes, where one caregiver supports more than one consumer: one caregiver can serve food on the spoons of multiple users' spoon-arms in turn. This configuration is especially useful if the labor cost of a caregiver is not high, as in developing countries.

4. User test
At first, we designed the self-feeding robot on the basis of users' opinions in order to develop a practical system. After constructing the prototype of the self-feeding robot, we performed a user test in which seven people with disabilities used the developed robot. The participants used the self-feeding robot to eat Korean food, and we collected their feedback.

4.1 Users' overall feedback on the self-feeding robot
In the users' opinions, a self-feeding robot could be useful when a user stays at home alone. The self-feeding robot can be used in two situations: solitary eating and eating together. Its most important role is to support self-feeding without a caregiver when people with disabilities stay in their homes alone and caregivers have prepared the food in advance.
Some users prefer using a large spoon, for example, a Chinese-style spoon: if a spoon is large, it can be used to feed wet food. However, a female user may prefer a small-sized spoon. The size of the spoon should therefore be customized to user preferences, and we will consider several spoons of various sizes as well as various depths.
Users sometimes request quick motion of the robotic arm. A user wants to be able to adjust the motion speed of the self-feeding robot, and the speed adjustment should be customized to the user.
The spoon should tilt toward the user's mouth in order to unload the food easily when the spoon-arm is positioned at the user's mouth. Technically, the spoon should tilt for a user with cerebral palsy, because such a user can move his or her head only to a limited extent. If the self-feeding robot does not have a tilting function, the user will struggle to eat the food on the spoon. Specifically, if the robot had a range detection sensor around the spoon, it could move more intelligently in front of the user's mouth; that is, the spoon would automatically tilt in front of the user's mouth. If the user's mouth moves, a preprogrammed motion is not suitable, and if the spoon tilts in the wrong position, food could drop onto the floor. Some people with cerebral palsy or muscular disease have trouble moving their neck, so the tilting function of the spoon should be an option.
Users and experts want to eat food comfortably using a spoon. For instance, the proposed system moves the spoon in front of the user's face. At that time, the spoon moves to the user's mouth along the sagittal plane, as shown in Figs. 15 (a) and (b). Some users complain


about a threatening feeling when the spoon moves in the sagittal plane. If the spoon approaches the user's mouth from the side, the user can feel safer; those motions are similar to most people's motions when they eat food. In addition, a user may want to touch the side surface of the spoon. As a remedy, we will consider how the spoon approaches, as shown in Figs. 15 (c), (d), and (e). However, the sideways approach may require a large installation area for the self-feeding robot.

Fig. 15. Spoon approaching modes. (a) The sagittal plane on which a spoon moves. (b), (c), (d), and (e) represent the top view of a spoon when a robot approaches a user's mouth. The red object represents the spoon. (c), (d) and (e) are more comfortable than (b)

A spoon should have a safety function because the spoon frequently comes in contact with a user's mouth. The spoon can be fixed at the tip of the arm with a spring as a component to guarantee safety. It can also be attached with magnets: when a large force acts on the spoon, the magnetic connection detaches for the user's safety. A user who has spasticity needs the compliance of a spoon.
Users request a smoother motion of the spoon when it contains food; in addition, the vibration of the spoon should be reduced at that time.
Users want a small-sized self-feeding robot for easy installation on a desk. In addition, a user wants to be able to adjust the spoon level of the self-feeding robot. For example, users of power wheelchairs sit at various heights relative to a table, so when users first use the self-feeding robot, the level of the spoon on the robotic arm should be adjustable.
As a basic function, food should rarely drop onto the table even when a user fails to eat round-shaped food. When food is too stiff, grasping failure occurs. Therefore, we consider the motion, i.e., the speed, of the spoon as well as the shape of the spoon.
Some users want to eat rice with a side dish simultaneously. In general, a disabled person who lives alone and receives a caregiving service usually eats rice and a side dish on one spoon at the same time, and some people eat rice mixed with side dishes. The self-feeding robot should mimic that task: it optionally puts two kinds of food, i.e., rice and a side dish, on a large spoon simultaneously.
The spoon should be returned to the home position after a predefined time interval. The robot has two ways to return the spoon to the home position: one is triggered by an input signal, and the other by a time interval. Sometimes the robot should regrasp food in order to avoid grabbing too much of a side dish. When the robot tries to grab some curry rice, the gripper shape should be changed. The


gripper should be cleaned frequently. The amount of grabbed food should be adjustable, and a stop or pause function is required. Other comments are as follows: the robot should be less noisy, have a small-sized input device, serve water, and enable users to eat Korean soup. Users who have no experience eating food by themselves have a hard time using a self-feeding robot for the first time; such a person needs to be trained in how to use it. Some users want to eat noodles, such as spaghetti. Most users wish for a low-cost feeding robot. The filtering of a user's involuntary input signals should be considered; for instance, in order to reduce malfunctions, a user's input could be ignored immediately after a previous input.
One user who can push a button by himself prefers buttons to joysticks. Another user prefers a small-sized joystick. Two joysticks are better than one joystick with buttons. The length of a joystick should be adjustable, buttons should be large, and operation without a PC is required. A user who has limited head motion likes to use a long joystick because that user cannot make fine motions. The unification of the input devices of a wheelchair and a self-feeding robot should be considered, and wireless input devices are preferred.

4.2 Discussion of a single-arm configuration
In the single-arm configuration, the caregiver picks the food up instead of the grab-arm and then loads it on the spoon. The user makes an input signal to move the spoon to the user's mouth. After the user eats the food on the spoon, it returns to the home position upon receiving a command or after some time interval, as with the dual-arm configuration. The single-arm configuration is useful to a caregiver as well as a user. From the perspective of the user, the feeding timing can be changed freely on the basis of the user's intention, and the user can chew food sufficiently; when a user watches television, he or she can eat food leisurely. Some users complain about the single-arm configuration because a caregiver must stay with the user even though the single arm is used. The single-arm configuration is useful for a caregiver when a user has a meal with his family, because the caregiver does not need to move food to the user's mouth; thus, caregivers like to use the single-arm configuration. Users and caregivers are satisfied even though picking up food must be performed manually. Food frequently drops on the floor when a caregiver serves food to the user's mouth. However, the instances of dropped food can be reduced in the single-arm configuration because the spoon is fixed on the spoon-arm and thus the user can estimate the spoon posture correctly.

4.3 Input devices
A system should be easy to control for people with disabilities and the elderly. Many users have only head motion, and thus the importance of input devices was emphasized. In this study, we analyzed the self-feeding robot on the basis of two input devices, namely buttons and joysticks, as shown in Fig. 16. Most input devices are developed for hand manipulation. If a user has hand function, then the easiest way to use the system is with his or her hands; that means the simplest input device is a set of buttons. However, when a user can only use neck motion to make a control input, he or she has difficulty handling input devices with dexterity. Table 2 shows the properties of the input devices.



Fig. 16. Input devices of a self-feeding robot. (a) Buttons. (b) Joysticks

Input Device                    Buttons   Dual Shock Type Joypad
# of Used Buttons               7         N/A
# of Used Joysticks             N/A       2
Length of Joystick (mm)         N/A       53
Gap between Joysticks (mm)      N/A       44
Angles of Joystick (degrees)    N/A       +/-25
Connection                      Wire      USB

Table 2. Input devices for the self-feeding robot

4.3.1 Buttons
The self-feeding robot has a basic input device consisting of a set of buttons. We usually use six buttons that correspond to start, return, and four directions. The buttons were originally developed to check the basic operations of the self-feeding robot. However, quadriplegics who can only move their neck and head have difficulty pushing a button with their chin. Because the buttons are out of the field of view, the user has a hard time knowing where the buttons are and whether or not they are pushed. Additionally, pushing these buttons requires excessive force and can result in muscle fatigue in a user's neck. Thus, a user who relies on neck motion would have difficulty pushing the buttons. Users prefer joysticks to buttons; on the basis of users' opinions, we tested input devices that have joysticks.

4.3.2 Joysticks
Two joysticks are employed. Originally, it was determined that users want to use two joysticks rather than one joystick and buttons. The length of the joystick was modified from 10 to 53 mm according to users' opinions. Because of the gap between the two joysticks, a user can easily manipulate one of the joysticks without any resulting malfunction of the other joystick. The preferred length of the joystick depends on the user: some users prefer a long joystick while others like a short one. Most users prefer a wide gap between the two joysticks because a short gap can result in the malfunction of the unused joystick. The moving angles of a joystick are +/-25 degrees. Users express satisfaction with the flexibility of the joystick and its silicone cover, which comes in contact with the user's skin.


4.4 Satisfaction evaluation of the self-feeding robot
We carried out user tests with user candidates, including people with spinal cord injuries. After they actually ate food using the developed self-feeding robot, we collected their feedback to determine their satisfaction score for each input device. The users ate food using the self-feeding robot with each of the input devices, and their feedback pertained to the self-feeding robot as well as the input devices. The users rated their satisfaction with the input device activity on a scale of 1 to 10, as with the Canadian Occupational Performance Measure (Pendleton, 2001); a score of 10 indicates the highest level of satisfaction. Most users were satisfied with the self-feeding system with the dual joystick, as shown in Table 3. This indicates that the joysticks are more comfortable to use than the buttons. In addition, the self-feeding system operates well. Based on the analysis of users' feedback, the key factors affecting the handling of joysticks with a user's neck motion are the distance between the joysticks, the moving angle of the joysticks, and the length of the joysticks. We will perform a comparison study between commercialized feeding systems and the developed system.

Input Device   SCI #1   SCI #2   SCI #3   SCI #4   SCI #5   SCI #6   SCI #7   Average Score
Buttons        6        8        1        1        1        1        1        2.7
Joysticks      8        8        8        8        6        7        8        7.6

Table 3. Satisfaction score of input devices when users eat food via a self-feeding robot (max = 10)

5. Concluding remarks
We have developed a novel assistive robot for self-feeding that is capable of handling Korean food, including sticky rice. This paper has presented the overall operation of the self-feeding robot. The proposed robot has three distinguishing points: handling sticky rice, using an ordinary meal tray, and a modular design that can be divided into two arms. The target users are people with physical disabilities who have limited arm function. During the development of the robot, we considered the feedback provided by several users and experts. In addition, user candidates tested the actual self-feeding robot. It was determined that the input device plays the most important role; many users prefer a dual joystick for self-feeding. Most of the users who participated in the experiments gave us positive feedback, and some were impressed that they were able to eat their desired food when they wanted to eat it. In future work, we will add several functions to the robot, improve the reliability of its basic operations, and add safety features. We will also simplify the system components and perform user evaluations.


6. Acknowledgment
This research was supported by a grant (code #10-A-01, #11-A-04) from the Korea National Rehabilitation Research Institute, Korea National Rehabilitation Center, Korea. The authors acknowledge the input of consumer candidates, including Mr. Hongki Kim and Mr. Kwangsup Lee. In addition, the authors gratefully acknowledge the work of Mr. Won-Jin Song and the comments of clinical specialists, including Dr. Bum-Suk Lee, Dr. Sung-Il Hwang, and Ms. Mi-Ok Son at the Korea National Rehabilitation Center.

7. References
Ding, D.; Cooper, R. & Pearlman, J. (2007). Incorporating Participatory Action Design into Research and Education, International Conference on Engineering Education (ICEE) 2007, Coimbra, Portugal.
Employment Development Institute (2009). 2009 Disability Statistics (in Korean), Survey & Statistics Team, Korea, ISBN 978-89-5813-737-5.
Guglielmelli, E.; Lauro, G.; Chiarugi, F.; Giachetti, G.; Perrella, Y.; Pisetta, A. & Scoglio, A. (2009). Self-feeding Apparatus, US Patent 2009/0104004.
Hermann, R.; Phalangas, A.; Mahoney, R. & Alexander, M. (1999). Powered Feeding Devices: An Evaluation of Three Models, Arch Phys Med Rehabil, Vol. 80, pp. 1237-1242.
Mann, W. (2005). Smart Technology for Aging, Disability, and Independence: The State of the Science, Wiley, Hoboken, New Jersey, ISBN 978-0-471-69694-0.
Mealtime Partners, [Link]
Neater Solutions, [Link]
Pendleton, H. & Schultz-Krohn, W. (2001). Pedretti's Occupational Therapy, Elsevier.
Pourmohammadali, H.; Kofman, J. & Khajepour, A. (2007). Design of a Multiple-user Intelligent Feeding Robot for Elderly and Disabled People, 30th Canadian Medical and Biological Engineering Conference (CMBEC30), Toronto, Ontario, Canada.
Sammons Preston, [Link]
Song, W.-K.; Kim, J.; An, K.-O.; Lee, I.-H.; Song, W.-J.; Lee, B.-S.; Hwang, S.-I.; Son, M.-O. & Lee, E.-C. (2010a). Design of Novel Feeding Robot for Korean Food, ICOST 2010, LNCS 6159, Springer, pp. 152-159.
Song, W.-K.; Kim, J.; An, K.-O.; Lee, I.-H.; Song, W.-J. & Lee, B.-S. (2010b). New Dual-Arm Assistive Robot for Self-Feeding, Second International Symposium on Quality of Life Technology, Las Vegas, USA.
Soyama, R.; Ishii, S. & Fukase, A. (2003). The Development of Meal-assistance Robot "My Spoon", International Conference on Rehabilitation Robotics, pp. 88-91, Daejeon, Korea.
Topping, M. & Smith, J. (1999). The Development of HANDY 1, a Robotic System to Assist the Severely Disabled, International Conference on Rehabilitation Robotics.

4
Robot Handling Fabrics Towards Sewing Using Computational Intelligence Methods
Zacharia Paraskevi
University of Patras, Department of Mechanical Engineering & Aeronautics, Greece

1. Introduction
In the cloth making industry, the need for high flexibility in automation is imperative due to the extensive style and material variation of the products and the fast fashion changes. The automated handling of fabric materials is one of the most challenging problems in the field of robotics, since it contains a vast potential for high productivity and cost reduction. The main difficulty in making full use of this potential is the fact that it is rather impossible to predict the behaviour of fabrics during handling due to their low bending rigidity. Moreover, fabrics are difficult to model due to their unpredictable, non-linear and complex mechanical behaviour. Sewing is the most important joining technology in garment and textile manufacturing. The fact that sewing fabrics is a sensitive operation due to the erratic behaviour of fabrics poses barriers to the development of automated sewing systems. A solution to manipulation tasks that are afflicted with uncertainty, subjectivity, ambiguity and vagueness is an intelligent control approach. The development of intelligent control systems with vision feedback enables robots to perform skilful tasks in more realistic environments and makes the research efforts for flexibility and automation really promising. Thus, industrial robots supported by machine vision and sensors can contribute towards advanced automation in apparel manufacturing.
On the other hand, sewing fabrics with the exclusive use of robots, without human intervention, is a complicated task. The stitching process is special in the sense that an error cannot be corrected after a part of the cloth has been stitched, implying that the stitching process is not reversible. This limitation implies that clothes that do not conform to the specifications are defective and should be withdrawn from production, which imposes a great demand on precision during the sewing process. In practice, there is a predefined tolerance considered to be acceptable, as errors resulting from robot accuracy and camera resolution cannot be eliminated.
The main focus of this work lies on the design of an innovative intelligent robot controller based on visual servoing that aims to enable robots to handle flexible fabric sheets towards sewing. A great challenge is the design of a robot controller showing robustness against deformations that are likely to appear on the fabric during robot handling. Special emphasis is given to robot handling of fabrics with curved edges of arbitrary curvature. This task is even harder because up-to-date industrial robots present limitations in their movement, owing to the fact that they can only be programmed to make straight or circular motions.



2. Literature review on automated sewing systems


To the best of my knowledge, the published research work on robot handling of fabrics with curved edges towards sewing is limited to a few papers.
A first approach to automating fabric manipulation was introduced by (Torgerson & Paul, 1988), including robotic motion control of various fabric shapes. The determination of robot motion paths is based on visual information defining the location of the fabric edges in world coordinates. However, no visual feedback was employed during the robot motion, making the method less accurate. The vision system detects the workpiece edge, extracts the boundary points, determines the parameters defining the geometry of the interior points, and computes the coordinates of the robot path points along subsequent linear and circular path segments. The error deviations between the desired seam line and the actual path line ranged between 3 and 5 mm. However, the flexibility of the method is limited, owing to the fact that the path determination algorithms require a geometrical relationship between the robot end-effector and the workpiece to extract the edge parameters that are necessary for determining the robot motion.
The FIGARO system (Gershon & Porat, 1986, 1988) performed handling and assembling operations using two superimposed servo control systems: the first maintains a small tension while moving the robot forward with the cloth, and the second rotates the cloth about the sewing needle to produce a seam parallel to the edges. The system performed best with shirting and worsted woven fabrics, which have a reasonable resistance to buckling. Robotic manipulation strategies have been investigated (Gershon, 1993) for handling limp materials, proposing a parallel process decomposition of the robot sewing task (Gershon, 1990). A robot arm manipulates the fabric, modeled as a mass-spring-damper system, to modify its orientation and control the fabric tension during the sewing process, which was decomposed into four concurrent processes within a superposition parallel architecture.
After the FIGARO system, an automated sewing system including two robots handling the fabric on the table was developed (Kudo et al., 2000). The nominal trajectories for both robots are defined through the definition of points and are necessary for the programming of the robots. Visual information was used to improve tracking accuracy. Sewing along a curved line is achieved by translating the fabric in the sewing direction using the internal command for straight-line motion and simultaneously rotating about the needle according to the visual feedback information. The rotation angle is computed making use of the tangent of the cloth panel, based on the method in reference (Gershon & Porat, 1988). The trajectory error was within the range of 0.5 mm.
A position-based visual servoing system for edge trimming of fabric embroideries by laser was proposed by (Amin-Nejad et al.). A tracking controller, decomposed into a feedforward controller in the tangential direction of the seam and a feedback controller in the normal direction, was implemented. The aim of the controller is to move the cutting beam along the seam with constant velocity and with a constant offset from it as a cutting tolerance. In the experimental results, three types of seam pattern, straight-line, sinusoidal, and circular, were chosen for edge trimming. The accuracy achieved with this method was within 0.5 mm.

3. Sewing fabrics with straight edges


The present work is part of a project for the development of a robotic workcell for sewing fabrics. The project includes handling tasks (ply separation, translation, placement, folding, feeding and orientation), tension control and quality control of fabrics and stitches


(Koustoumpardis & Aspragathos, 2003; Koustoumpardis et al., 2006; Moulianitis et al., 1999; Zoumponos & Aspragathos, 2008). After the fabric has been placed at a random location on the working table, a number of sub-tasks should be performed before the sewing process starts. These preliminary sub-tasks are: the recognition of the fabric's shape, the extraction of the seam line, the detection of the edges targeted for sewing, the planning of the stitching process and the location of the end-effector on the fabric (Koustoumpardis et al., 2007). After the pre-process planning, the robot sewing process is considered and divided into three sub-tasks: the manipulation of the fabric towards the needle, the sewing of the seam segment and the rotation around the needle.
Concerning visual servoing, the developed vision system is a combination of an image-based and a position-based control system. The image-based analysis is used for the identification of the fabric's shape. After the image acquisition of the fabric, the features (vertices of the edges), the needle-point and the sewing line's orientation are derived from the image analysis. Besides, the position of the needle is also known in the robot coordinate system. The end-effector's position is unknown in the image coordinate system; however, the robot system gives feedback to the control system on the current end-effector's position in the robot base coordinate system. Moreover, the relation between the robot and the image coordinate systems is known from the calibration of the camera. This approach makes the system more flexible and limits the calibration errors.

3.1 The sewing process
Initially, the fabric lies free of wrinkles at a random location on the work table. The camera captures the image of the fabric without the gripper on it in order to obtain the shape of the fabric and use it as the reference shape during handling. The stitching process is performed on the seam line situated parallel to the specified fabric edges. The distance between the outer edge and the seam line, called the seam allowance, depends on the cloth part and is defined by the apparel manufacturer; in practice, it usually ranges between 1/4 inch and 5/8 inch. After the seam edges of the fabric (dashed line in Fig. 1) have been determined, the sewing process is ready to start. The sewing line is determined by the feeding mechanism. The sewing process can be decomposed into three separate tasks: transfer, stitching and rotation.
The movement towards the needle. After the robot touches the fabric at a proper location, the features, namely the vertices of the fabric, are extracted from the image taken by the camera. After the starting seam edge is identified, the vision system captures the distance (r) between the starting seam vertex and the sewing needle and the orientation angle (θ) formed by the starting seam segment and the sewing line (Fig. 1 (a)). The linear (u) and angular velocity (ω) of the robot end-effector are derived through the designed fuzzy decision system, described in Section 3.2. Given the time step dt and the angle φ, i.e. the orientation of r in the image coordinate system, the new position and orientation of the end-effector is computed by:

x′ = x + u cos(φ) dt
y′ = y + u sin(φ) dt        (1)
θ′ = θ − ω dt

Therefore, the robot end-effector moves along the direction of r (defined by φ) and simultaneously rotates around the end-effector's z-axis, which is vertical to the table, until the angle θ becomes zero (Zacharia et al., 2005). The fabric is transferred to a new position with


a new orientation as a result of the movement and rotation of the end-effector, which is stuck on the fabric to avoid slipping between the gripper and the fabric. This motion stops when the seam segment reaches the needle with the desired orientation, within an acceptable tolerance.
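A minimal sketch of the transfer motion governed by Eq. (1) is given below; the fuzzy decision system that would supply u and ω is not shown, and the numerical values are arbitrary example inputs.

import math

def transfer_step(x, y, theta, u, omega, phi, dt):
    """One step of the transfer motion of Eq. (1): translate along the
    direction phi of the needle vector r while rotating about the
    end-effector's z-axis to drive the orientation angle theta to zero."""
    x_new = x + u * math.cos(phi) * dt
    y_new = y + u * math.sin(phi) * dt
    theta_new = theta - omega * dt
    return x_new, y_new, theta_new

# Example values: 50 mm/s towards a needle direction of 30 degrees,
# rotating at 0.1 rad/s, with a 40 ms time step.
pose = (0.0, 0.0, math.radians(20.0))
pose = transfer_step(*pose, u=50.0, omega=0.1, phi=math.radians(30.0), dt=0.04)
print(pose)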


Fig. 1. Scene of the fabric lying on the work table (a) without deformations (b) with deformations
The stitching process. During sewing, the fabric is guided along the sewing line with a constant velocity, which should be the same as the velocity of the sewing machine, so that good seam quality is ensured. When the end-effector's velocity is higher than the sewing velocity, puckering will appear, whereas when it is lower, low seam quality will be produced due to the increase in tension. At each time step, the current image of the fabric is captured so that the orientation error can be determined. The orientation error is fed back to be corrected by rotating the fabric around the needle, while the robot simultaneously moves the fabric in the direction of the sewing line. To circumvent the problem of uncertainty due to distortions of the fabric's shape, fuzzy logic control is necessary. The inputs are the orientation error and its rate, and the output is the rotation angle around the needle.
The rotation around the needle. When the seam segment coincides with the sewing line, the robot rotates the fabric around the needle until the next seam segment is aligned with the sewing line. It is worth noting that the end-effector is forced to make a circular motion around the needle, since the needle has penetrated the fabric. The orientation error (eθ) of the next seam segment in relation to the sewing line and its time rate are the inputs to the fuzzy system that controls the rotation of the fabric around the needle, whereas the output is the angular velocity (ω) of the end-effector's motion around an axis perpendicular to the table at the needle-point.

3.2 The fuzzy robot control system
Since modeling the mechanical behavior of fabrics for real-time applications is rather difficult due to their low bending rigidity, an approach based on a fuzzy logic controller (FLC) is developed (Zacharia et al., 2009), considering the robustness and fast response requirements. The block diagram of the control system is shown in Fig. 2, where the symbols in parentheses stand for the fuzzy system that controls the orientation of the end-effector.


The controller inputs are the position error (er) and the orientation error (eθ), which are computed on the image, and their time rates (ėr) and (ėθ). The linear velocity (u) and the angular velocity (ω) of the end-effector around the z-axis are the outputs.
Fig. 2. The block diagram of the fuzzy logic control

The membership functions for the two inputs (Fig. 3 (a) and (b)) and the output of the system that controls the translation of the fabric towards the needle (Fig. 3 (c)) are experimentally tuned with various fabrics. Expert knowledge, often afflicted with uncertainty, is summarized in the proposition: the larger the distance/angle and the smaller its change rate, the faster the fabric should move/rotate. The fuzzy associative memory (FAM) of the system that controls the translation of the fabric (Table 1) is derived after studying the behavior of human workers during sewing. For the rule evaluation, the min operator is used, and for the aggregation mechanism, the max operation is used. The centroid defuzzification method is used to determine the linear and angular velocity. Owing to space limitations, only the study for the system that controls the translation is presented.
position error \ position error rate    SMALL         MEDIUM        LARGE
SMALL                                   Very Small    Very Small    Very Small
MEDIUM                                  Medium        Medium        Small
LARGE                                   Very Large    Very Large    Large

Table 1. Fuzzy Associative Memory (FAM) of the sewing system

An analytical model of the robot and the fabric is not necessary, since no geometrical characteristics are taken into account and there is no need for special mathematical computations for different fabrics. The proposed technique is feature-based, since only the features, r and θ, are necessary for handling the fabric, and it can cope with the uncertainties that arise due to deformations. The proposed controller can handle possible deformations without changes in its structure, achieving the desired accuracy on condition that the seam segment targeted for sewing is undistorted. Fig. 1 (b) shows the fabric of Fig. 1 (a), which has been deformed everywhere except for the seam segment targeted for sewing. It is clear from Fig. 1 (b) that the presence of deformations does not affect the control system, despite the fact that the shape of the fabric has significantly changed.
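For illustration, the min/max/centroid machinery described above can be sketched as follows. The triangular membership functions and universe bounds are assumptions standing in for the experimentally tuned ones of Fig. 3; only the FAM of Table 1 is taken from the text.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Assumed partitions for the inputs (pixels, pixels/s) and the output
# (mm/s); the real membership functions of Fig. 3 are tuned experimentally.
ERR = {"SMALL": (-150, 0, 150), "MEDIUM": (50, 150, 250), "LARGE": (150, 300, 450)}
RATE = {"SMALL": (-20, 0, 20), "MEDIUM": (10, 20, 30), "LARGE": (20, 40, 60)}
VEL = {"Very Small": (-20, 0, 20), "Small": (10, 25, 40), "Medium": (30, 50, 70),
       "Large": (60, 75, 90), "Very Large": (80, 100, 120)}

# FAM of Table 1: keys are (position error label, position error rate label).
FAM = {("SMALL", r): "Very Small" for r in ("SMALL", "MEDIUM", "LARGE")}
FAM.update({("MEDIUM", "SMALL"): "Medium", ("MEDIUM", "MEDIUM"): "Medium",
            ("MEDIUM", "LARGE"): "Small", ("LARGE", "SMALL"): "Very Large",
            ("LARGE", "MEDIUM"): "Very Large", ("LARGE", "LARGE"): "Large"})

def linear_velocity(err, rate):
    u = np.linspace(0.0, 100.0, 501)               # output universe of discourse
    agg = np.zeros_like(u)
    for (e_lbl, r_lbl), out_lbl in FAM.items():
        w = min(tri(err, *ERR[e_lbl]), tri(rate, *RATE[r_lbl]))      # min rule evaluation
        agg = np.maximum(agg, np.minimum(w, tri(u, *VEL[out_lbl])))  # max aggregation
    return float((agg * u).sum() / agg.sum())      # discrete centroid defuzzification

print(linear_velocity(err=200.0, rate=5.0))        # large error, small rate -> fast

With the assumed partitions, a large position error combined with a small error rate yields a high linear velocity, in line with the expert proposition quoted above.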


Fig. 3. The membership functions of (a) the position error, (b) the position error rate and (c) the linear velocity, after manual tuning

4. Sewing fabrics with curved edges through a Genetic-based approach


Real fabric pieces used for cloth making consist of edges with arbitrary curvature. This fact gave an impetus to tackle the problem of robot handling of fabrics with curved edges. Robot-sewing fabrics of different shapes and materials is a rather difficult task that requires system flexibility and good quality of the final product. To compensate for the ensuing high machine-hour rates, a high throughput is vital if product costs are to be kept low. To enhance the system's flexibility and its efficiency in handling various fabrics, fuzzy logic and visual servoing control have been employed. Curved edges pose additional difficulties in robot handling compared to straight edges, owing to the fact that up-to-date robots present limitations in their movements: current industrial robots can only be programmed to perform straight or circular motions using internal commands. Sewing a fabric with arbitrary curvatures is a complicated task and can only be performed by approximating the curve with movements along straight-line segments (Zacharia et al., 2006). The robot makes a straight-line motion along the sewing line and a rotation around the needle-point simultaneously. This motion results in a smooth stitch that resembles the stitch produced by human operators.


4.1 Related work
In the field of pattern recognition, considerable research work has been done on the polygonal approximation of digital contours. Two main families of techniques concerning the polygonal approximation of digital curves have been published: those that try to obtain an error-bounded polygonal approximation, and those that search for a set of dominant points as vertices of the approximating polygon.
In the first family of methods, the main idea is to approximate a given curve with a minimal number of line segments such that the approximation error between the original curve and the corresponding line segments is less than a pre-specified tolerance. Several methods have been used for polygonal approximation, but most of them have the disadvantage that the results depend on the selection of the initial points and the arbitrary initial solution. To circumvent these drawbacks, nature-inspired algorithms have been proposed for polygonal approximation. In the genetic algorithm presented in reference (Zhang & Guo, 2001), each chromosome is a binary string of fixed length representing a subset of the curve points, where the i-th bit, denoting the i-th original curve point, is 1 if the corresponding curve point is taken as a vertex of the approximation polygon and 0 otherwise. The algorithm is improved based on the Pareto optimal solution, and the results show that it is efficient and achieves less processing time compared to dynamic programming algorithms. In reference (Yin, 2003), an ant colony search (ACS) algorithm for optimal polygonal approximation is developed. To apply the ACS, the underlying problem should be represented in terms of a graph; for the polygonal approximation problem, each point on the curve is represented as a node of the graph. A number of artificial ants are distributed on the graph and communicate with one another through the pheromone trails, which are a form of long-term memory guiding the future exploration of the graph. The authors compared the performance of the proposed method to those of genetic-based and tabu-search-based methods, and concluded that the numerical results are very encouraging. A polygonal approximation approach for digital curves based on the particle swarm optimization (PSO) algorithm was presented by (Yin, 2004). In PSO, which belongs to the class of natural algorithms, each particle is represented as a binary vector corresponding to a candidate solution to the polygonal approximation problem. A swarm of particles is initiated and flies through the solution space targeting the optimal solution. The experimental results show that the devised PSO algorithm is comparable to the genetic algorithm (GA), and it also outperforms other PSO versions in the literature for polygonal approximation.
In the second family of methods, the approximation does not depend on the starting point and is insensitive to changes of orientation and scale. A representative technique is the dominant point approach (Teh & Chin, 1989). This method focused on the determination of the region of support of every point of the curve, which is the region used in the calculation of each point's curvature. Thus, the method overcomes the problem of various scales in the curve's features. However, the Teh-Chin approach is not robust in the presence of noise. For noise compensation, Wu proposed a modification of the method based on a dynamic determination of the region of support (Wu, 2002).
4.2 Sewing using polygonal approximation
The apparel manufacturers' requirements focus on minimizing the time for the accomplishment of the sewing process and on a satisfactory seam quality, which is mainly based on stitch accuracy and smoothness. To fulfil these requirements, an algorithm based


on the polygonal approximation of the curved edges of the fabric is developed for robot handling of fabrics towards the sewing process. During the stitching process, the robot's end-effector is commanded to make two motions simultaneously: a straight motion with constant speed (equal to the sewing machine's speed, to ensure good seam quality) in the direction of the sewing line, and a rotation around the needle-point. Therefore, the final stitch is not a straight line, but a line yielded as a result of the combined end-effector motion. After all the straight-line segments have been stitched, the deviation error is computed. Then, a new image is taken and the process is iterated. The stitching process stops when the whole curve has been sewn, i.e. when the last point reaches the needle.
The problem is formulated as follows: the curve section of the fabric is defined by N data points acquired by the vision system in the form of pixel coordinates in clockwise order, and can be described by a sequence of these points: G = {p1, p2, ..., pN}. Given the N data points, find the optimal polygon that approximates the curve satisfying two criteria: the criterion of sewing time minimization and the criterion of acceptable accuracy. In this case, time minimization translates into the requirement for minimal length of the approximating polygon. Initially, a local search technique is developed to extract the dominant points, and then a global search technique is applied taking the dominant points as inputs (Zacharia et al., 2008). The idea behind this is to filter out the initial curve points in an attempt to speed up the convergence of the algorithm. The proposed approach combines two algorithms to benefit from the advantages of each one.
The contribution of the work addressed in this Section is twofold. Firstly, a strategy for handling fabrics with curved edges towards the sewing process using a visual servoing controller is developed based on polygonal approximation. Secondly, the algorithms based on polygonal approximation that exhibited superiority in many other applications are further extended, and a new algorithm is proposed based on the dominant point detection approach and the genetic algorithm.
4.3 The proposed genetic-based approach
Genetic algorithms (GAs) (Michalewitz, 1996) have attracted attention for solving combinatorial optimization problems of high complexity because of their intrinsic parallelism and their effectiveness. The contributions of the proposed GA are a variable-length chromosome encoding scheme to reduce the computational time and memory requirements, and the use of micro-GAs (Krishnakumar, 1989) as an approach for further shortening the computational time. Traditional fixed-length chromosomes could also be used to solve this problem, but they would lead to an increase in computational time, as the length of the chromosome would be too large to incorporate all the points comprising the curve. On the other hand, micro-GAs are ideally suited to optimization problems where the emphasis is on quickly identifying a near-optimal region. In contrast to the traditional GA, the population size is small, yielding a much quicker convergence. Moreover, the micro-GA uses elitism and convergence checking with re-initialization to obtain the near-optimal solution. As a consequence, the proposed micro-GA with variable-length chromosomes can be used in a real-time application.
The evaluation mechanism: Given a maximum acceptable error ε0, find the polygonal approximation with the minimal number of vertices (corresponding to the minimum total length of the polygon), such that the polygon is distant from the curve by no more than ε0. It is worth noting that the algorithm searches in a set of Ns points that are the dominant points


defined by the dominant point detection approach. Consequently, the input for the micro-GA is the output of the dominant point detection approach. Let G** = {P1, P2, ..., Pm} be a subset of G* = {P1, P2, ..., PNs}, G** ⊆ G*, where m (m < Ns) is the number of vertices Pi = (xi, yi) of the polygon that approximates the curve section captured in the current image. The function expressing the total length of a polygon with m vertices can be written as:

Ltotal = Σ_{i=1}^{m−1} Li        (2)

and Li is the length of the i-th edge of the polygon, given by:

Li = ‖Pi+1 − Pi‖        (3)

The constraint that should be satisfied is expressed as the maximum deviation between the curve section and the approximating polygon section. For two points Pi = (xi, yi) and Pi+1 = (xi+1, yi+1) defining an edge of the polygon and a data point pj = (xj, yj) between Pi and Pi+1, the perpendicular distance of the arc point pj to the chord PiPi+1 is computed by

δj = ‖(Pi+1 − Pi) × (Pi − pj)‖ / ‖Pi+1 − Pi‖        (4)

Consequently, the maximum deviation is found by:

ε = max{δ1, δ2, ..., δj, ..., δq}        (5)

where q is the number of data points between Pi and Pi+1. The fitness function is given by:

fitness = 1/Ltotal, if ε ≤ ε0
fitness = 0, otherwise        (6)

where Ltotal is given by Eq. (2).
The representation mechanism: In this algorithm, variable-length chromosomes are defined to represent a polygon with m vertices approximating the curve. The maximum number of polygon vertices, which is defined by the user, is the maximum possible length of the chromosome. The encoding mechanism maps each approximating polygon to a string composed of integers in the following form:
1  10  24  49  67  92  104  ...  i  ...  Ns        (m numbers)

where 1 ≤ i ≤ Ns, Ns is the number of integer-coordinate data points of the curve section captured by the camera, and m is the length of the chromosome, representing the number of vertices of the approximating polygon. The integers i represent the rows of the matrix consisting of all the points Pi = (xi, yi) in ascending order. Consequently, the genes of


the chromosome represent the curve data points used to construct the polygon, i.e. the vertices of the polygon. It should also be mentioned that the first (1) and the last (m) rows of the matrix are fixed for each chromosome.
Crossover: Crossover is a recombination operator that follows reproduction. The individuals are randomly selected according to a predefined probability (the crossover rate). The one-point crossover is modified so that the produced offspring are feasible chromosomes. The cut-point lies between the first and the last gene of the parent chromosome with the minimum length. Next, the numbers in each string are reordered to appear in ascending order, so that the polygonal section is created by joining the successive points.
Population size: The population size depends on the nature and the complexity of the current problem. In this work, the proposed algorithm was tested for various small population sizes, so that both quick convergence and a near-optimum solution are achieved. Finally, the selected population size is equal to 20.
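A direct transcription of the evaluation mechanism of Eqs. (2)-(6) is sketched below; the example curve and the tolerance ε0 are placeholders, and the chromosome is given directly as a 0-based list of point indices.

import numpy as np

def polygon_fitness(curve, chromosome, eps0):
    """Fitness of Eq. (6): 1/Ltotal if the maximum deviation (Eq. (5)) of the
    curve from the approximating polygon is at most eps0, and 0 otherwise.
    The chromosome is a 0-based list of indices into the curve points."""
    P = curve[chromosome]                                         # polygon vertices
    L_total = np.sum(np.linalg.norm(np.diff(P, axis=0), axis=1))  # Eqs. (2)-(3)
    eps = 0.0
    for i0, i1 in zip(chromosome, chromosome[1:]):
        a, b = curve[i0], curve[i1]
        chord = b - a
        for p in curve[i0 + 1:i1]:                 # arc points between the vertices
            v = a - p                              # Eq. (4): point-to-chord distance
            eps = max(eps, abs(chord[0] * v[1] - chord[1] * v[0])
                      / np.linalg.norm(chord))
    return 1.0 / L_total if eps <= eps0 else 0.0   # Eq. (6)

# Example: a quarter circle of radius 100 approximated by a 4-gene chromosome.
t = np.linspace(0.0, np.pi / 2, 50)
curve = np.column_stack([100 * np.cos(t), 100 * np.sin(t)])
print(polygon_fitness(curve, [0, 16, 33, 49], eps0=5.0))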

5. Sewing fabrics with curved edges through ANFIS


One of the difficulties when applying the fuzzy approach of Section 4 is to obtain the fuzzy rules and the membership functions. In this Section, a neuro-fuzzy approach is introduced with the purpose of extracting fuzzy rules and the membership functions automatically. The proposed neuro-fuzzy approach combines the main components of soft computing (fuzzy logic, neural networks and genetic algorithms) that have shown great ability in solving complex control problems to benefit from the advantages of each one.
5.1 ANFIS framework
Fuzzy systems have the capability of translating expert knowledge into linguistic rules inside a robust mathematical framework in order to draw conclusions and generate responses. However, a fuzzy model consisting of a large number of if-then rules to map inputs to outputs is not desired due to the phenomenon of overfitting. Thus, the Takagi-Sugeno-Kang type fuzzy models (Sugeno & Kang, 1988; Takagi & Sugeno, 1985), known as TSK models, are widely used for control and modeling because of their high accuracy and relatively small models (Delgado et al., 2001; Männle, 2001). When the system complexity is high, fuzzy modeling from input/output data is a useful way of creating FIS models. Because it is a more compact and computationally efficient representation than a Mamdani system, a Sugeno system using cluster information lends itself to the use of adaptive techniques for constructing fuzzy models with a minimum number of rules. Thus, data clustering algorithms were elaborated in order to construct FIS models from data, since they partition a collection of data into clusters based on similarity between data points.
An adaptive neuro-fuzzy inference system, ANFIS, has been proposed in (Jang, 1993) to effectively deal with multivariable complex systems. ANFIS uses techniques like least squares or back-propagation algorithms to determine the membership functions for a given input/output data set. These adaptive techniques can be used to customize the membership functions so that the fuzzy system best models the data. The effectiveness of using ANFIS for control and modeling has been pointed out in (Jang et al., 1997; Rizzi et al., 1999).
This Section proposes an innovative approach for robot handling of fabric pieces with curved edges towards sewing, which is based on learning. The fact that adaptive neuro-


fuzzy approaches have been successfully used for other applications in robotics (Marichal et al., 2001; Rusu et al., 2003; Hui et al., 2006) gave rise to the idea of using them for robot sewing of fabrics with curved edges. The main purpose is to overcome difficulties arising from the fact that the fabric pieces have curved edges with arbitrary curvatures; thus, a learning technique is used. The proposed ANFIS based on genetic-oriented clustering combines the concepts of fuzzy logic and neural networks to form a hybrid intelligent system with an enhanced ability to automatically learn and adapt. The system learns from the information obtained from the fabrics used for training and is able to respond to new fabrics that were not included in the training process.
5.2 Sewing tracking approach
The main scope of this Section is to provide a control system capable of dealing with different curvatures and thus flexible to changes in the fabric's shape. To enhance the system's flexibility and its efficiency in handling fabric edges with various curvatures, Artificial Intelligence techniques and visual servoing control are employed. Since there is no standardization for curved edges, a method is necessary for estimating the curve. However, searching for the equation that approximates the curve is a time-consuming procedure; in addition, such an approach would require curve estimation for each curved edge of each different fabric. The proposed system does not need any geometrical computations for estimating the curves, which would lead to a computational burden and would, therefore, be prohibitive for use in an on-line process. The developed control scheme is able to learn from existing knowledge and respond to new curved edges.
Now, assume a piece of fabric with a curved edge of arbitrary curvature, such as the one depicted in Fig. 4, which is guided for sewing. The camera captures an image that covers a small area in the vicinity of the sewing needle. The curved section captured in the image is approximated by an edge (AB) that joins the two endings of the curved section. Then, the distance d and the angle θ are extracted from the image. The feature d is the minimum distance between the needle-point and the straight edge that approximates the curved section. If α denotes the seam allowance, then the distance D, where D = d − α, is the minimum distance between the seam line and the needle. The angle θ is defined by the straight edge that approximates the curve section and the sewing line (Zacharia, 2010).

Fig. 4. Features extracted from the camera image


The position error is defined as eD = D − D0, where the desired distance is D0 = 0, and the orientation error is defined as eθ = θ − θ0, where the desired angle is θ0 = 0. To compensate for the seam errors (eD and eθ), the robot is assigned to rotate around the needle with an angular velocity (ω) around the z-axis. The angular velocity (ω) of the robot's end-effector is derived through a Sugeno-type fuzzy decision system, which takes as inputs the position error eD and the orientation error eθ. The end-effector guides the fabric in the sewing direction and the camera monitors the fabric to compensate for the seam error. The fuzzy controller, which is tuned through ANFIS, inputs the position and orientation errors (eD and eθ) of the new fabric piece and outputs the predicted angular velocity (ω) of the robot's end-effector. Next, the robot is commanded to make a circular motion around the sewing needle by an angle Δθ and a translational motion in the sewing direction. The angle is derived by the relationship Δθ = ω·Δt, where Δt is the sampling period.
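The correction cycle described above can be summarised in the following sketch; the camera, robot and Sugeno controller interfaces are hypothetical stand-ins, since the actual system uses the ANFIS-tuned FIS and the robot's internal motion commands.

class StubCamera:
    """Placeholder returning fixed seam features (D in pixels, theta in rad)."""
    def measure_features(self):
        return 8.0, 0.05

class StubRobot:
    """Placeholder logging the commanded motions."""
    def rotate_about_needle(self, dtheta):
        print(f"rotate about needle by {dtheta:.4f} rad")
    def translate_along_sewing_line(self, dt):
        print(f"translate along sewing line for {dt:.3f} s")

def sugeno_fis(e_D, e_theta):
    """Stand-in for the ANFIS-tuned Sugeno controller: a single linear consequent."""
    return -(0.01 * e_D + 0.8 * e_theta)

def seam_tracking_step(camera, robot, dt):
    D, theta = camera.measure_features()
    e_D, e_theta = D - 0.0, theta - 0.0     # desired values D0 = 0, theta0 = 0
    omega = sugeno_fis(e_D, e_theta)        # predicted angular velocity
    robot.rotate_about_needle(omega * dt)   # circular motion: dtheta = omega * dt
    robot.translate_along_sewing_line(dt)   # keep feeding in the sewing direction

seam_tracking_step(StubCamera(), StubRobot(), dt=0.04)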
5.3 Training using the modified ANFIS
The basic idea behind using neuro-adaptive learning techniques is that they are very simple and allow the implementation of multi-input-single-output first-order Sugeno-type FIS. This technique provides a method for the fuzzy modeling procedure to learn information about a data set, in order to compute the membership function parameters that best allow the associated fuzzy inference system to track the given input/output data. This learning takes place prior to the operation of the control system. The ANFIS methodology can be decomposed into four steps:
Data acquisition (Step 1): A number of different fabric pieces with various arbitrary curvatures are selected for experimental tests. During the sewing process, the position and orientation errors in the image, as well as the end-effector's angular velocity, are measured. As a result, a number of data sets, each consisting of 3 attributes (eD, eθ and ω), are obtained from the robot sewing process. These data sets are divided into training and checking data sets. It is worth noting that the position error (eD), which is derived from the image, is computed in pixels and not in millimetres.
Genetic-oriented clustering (Step 2): Clustering is used to generate a Sugeno-type fuzzy inference system that best models the data behavior using a minimum number of rules. The rules partition themselves according to the fuzzy qualities associated with each of the data clusters. The initial Sugeno-type FIS models are built by means of a genetic-oriented clustering approach using the training data set. In this approach, a Genetic Algorithm using variable-length chromosomes is applied to find the optimum number of cluster centers as well as the partitioning of the data. A comprehensive description of this algorithm can be found in (Zacharia & Aspragathos, 2008). This technique has the advantage over other clustering algorithms that the selection of the cluster centers is not limited to the data points and that it is able to escape from local optima, which is an inherent ability of Genetic Algorithms (GAs). Another advantage is the ability to automatically evolve the number of clusters due to the use of variable-length chromosomes in the GA's structure. After applying the genetic-oriented clustering approach, the training data is partitioned into groups, called clusters, and as a result, simpler optimized FIS models with the minimum number of rules are obtained.
ANFIS architecture (Step 3): The purpose of this step is to optimize the initial FIS created using the genetic-oriented clustering technique. A network-type structure similar to that of a neural network, which maps inputs through input membership functions and associated


parameters, and then through output membership functions and associated parameters to outputs, can be used to interpret the input-output map. Considering a first-order Sugeno fuzzy model with two inputs x and y and one output f, a typical rule set with two fuzzy if-then rules can be expressed as

Rule 1: If (x is A1) and (y is B1) then (f1 = p1x + q1y + r1)
Rule 2: If (x is A2) and (y is B2) then (f2 = p2x + q2y + r2)

where x, y are inputs, Ai, Bi are membership functions, pi, qi, ri are consequent parameters and i is the rule number. The entire system architecture consists of five layers, namely, the fuzzy layer (Layer 1), product layer (Layer 2), normalized layer (Layer 3), consequence layer (Layer 4) and total output layer (Layer 5); a minimal numerical sketch of this computation for the two rules above is given at the end of this subsection.

Model validation (Step 4): The validation process is used to evaluate the model's generalization capability. One problem with model validation is selecting a data set that is both representative of the data the trained model is intended to emulate, yet sufficiently distinct from the training data set so as not to render the validation process trivial. Especially with noisy measurements, it is possible that the training data set does not include all the representative features needed for the model. The idea behind using a checking data set for model validation is that after a certain point in the training, the model begins overfitting the training data set. Overfitting can be detected when the checking error starts increasing while the training error is still decreasing. In practice, the experimental data is divided into training data and checking data. The training data is used in both the genetic-oriented clustering process and ANFIS training. The checking data is only used in ANFIS training to prevent the model from overfitting.
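For the two rules above, the five-layer computation reduces to a firing-strength-weighted average of the linear rule consequents. The sketch below, in Python, shows this inference; the Gaussian membership parameters and the coefficients p_i, q_i, r_i are illustrative assumptions, not values from the trained system.

import math

def gauss(x, mean, sigma):
    # Gaussian membership function (fuzzy layer).
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def sugeno(x, y):
    # Layers 1-2: firing strengths w_i = mu_Ai(x) * mu_Bi(y)
    w1 = gauss(x, 0.0, 1.0) * gauss(y, 0.0, 1.0)   # rule 1 (A1, B1)
    w2 = gauss(x, 2.0, 1.0) * gauss(y, 2.0, 1.0)   # rule 2 (A2, B2)
    # Layer 4: rule outputs f_i = p_i*x + q_i*y + r_i
    f1 = 0.5 * x + 0.1 * y + 0.0
    f2 = 1.0 * x + 0.4 * y + 0.2
    # Layers 3 and 5: normalized weighted sum of the rule outputs
    return (w1 * f1 + w2 * f2) / (w1 + w2)

print(sugeno(1.0, 0.5))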

6. Experimental results
The experiments were carried out using a robotic manipulator with 6 rotational degrees of freedom (RV4A) controlled by a PC. The robot is programmed in the Melfa-Basic language in the Cosirop environment, while the analysis of the visual information is performed with Matlab 7.1. The vision system consists of a Pulnix analog RGB video camera at 768×576 pixels resolution and an analog camera of the same resolution, which are fixed above the working table in a vertical position (Fig. 5). The vision is transferred to the second camera when the position error becomes less than 10 pixels (≈12.3 mm) and the orientation error is less than 5°. Using the second camera, the accepted position and orientation errors are set equal to 10 pixels (≈1.389 mm) and 1°, respectively. A simple gripper has been designed so that the arm of the robot stays distant from the fabric, as shown in Fig. 5. The fabric is stuck on the gripper to avoid slipping; however, the placement of the gripper onto the fabric is beyond the scope of this work. In this work, buckling during robot handling is tolerable on the condition that it does not appear in the segment to be sewed. However, it still remains a problem that should be avoided. Thus, the position where the gripper touches the fabric is estimated so as to reduce the possibility of deformation, taking into account the properties of the fabric (Koustoumpardis & Aspragathos, 2003). It is worth noting that this work deals with buckling in the context of achieving successful seam tracking and not with correction strategies against folding or wrinkling problems. The experimental tests also showed that deformations are likely to appear close to the gripper position on the fabric, and not on the edge.


Fig. 5. The experimental stage

Since there is no standardization for deformations, they are difficult to quantify. However, the greatest deformation observed in buckling is about 30 mm, which is the height between the highest and the lowest point of the fabric. In some cases, fabrics were partially folded. In other cases, the wrinkles induced the fabric to fold under its own weight, forming a loop at one side of the fabric. In some experiments, the fabric presented both wrinkles and folds simultaneously.

Fig. 6. (a) Fabric A with curved edge of small curvature (b) Fabric B with curved edge of large curvature

In the experiments, a considerable number of fabric pieces of different materials, shapes and colours have been used. Both simulation and experimental tests were conducted to verify the efficiency of the proposed approach. A fabric piece is indicatively presented to show the results for both simulation and experimental tests. The shape of this fabric consists of two


straight-line segments and an arbitrary curve; its colour is red and it has medium bending rigidity (Fig. 6 (a)). Initially, simulation tests were carried out in order to validate the effectiveness of the proposed polygonal approximation based on a micro-Genetic Algorithm (micro-GA). A number of points along a section of the curved edge are captured by the camera that focuses on the area of the needle-point. The array that contains the curve data-points has dimensions 185×2 and is the input to the micro-GA. The maximum length of the chromosomes is set to 6 and the maximum acceptable deviation is arbitrarily set to 8 pixels (1 mm). Fig. 7 (a) shows the optimum polygon section resulting from the micro-GA, which approximates the given data-points of the curved edge. The curve section is approximated by a polygon section consisting of three sides, which deviate from the corresponding arc sections by 6.87, 5.55 and 7.07 pixels, respectively. Increasing the maximum acceptable deviation from 8 pixels to 12 pixels, the arc section is approximated by a polygon section with three sides, as shown in Fig. 7 (b), where the maximum deviation errors between the curve and each polygon side are 4.48, 5.88 and 10.37 pixels, respectively. Decreasing the maximum acceptable deviation to 4 pixels, the arc is approximated by a polygon section with six sides, as shown in Fig. 7 (c), where the maximum deviation errors are 3.37, 2.52, 2.32, 0.88, 3.15 and 3.08 pixels. Fig. 7 (d) zooms in on the first two sides of the polygon shown in Fig. 7 (c).
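The figure of merit in these comparisons is the maximum deviation between the curve data-points and each polygon side. A minimal sketch of that check, which a micro-GA fitness function could call when scoring a candidate set of polygon vertices, is given below; the vertex selection itself is performed by the variable-length-chromosome GA described earlier and is not reproduced here.

import math

def point_segment_dist(p, a, b):
    # Distance (in pixels) from point p to the segment a-b.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def max_deviation(points, vertex_idx):
    # Largest deviation of the curve points from the polygon whose
    # vertices are the points at the given indices.
    worst = 0.0
    for i, j in zip(vertex_idx, vertex_idx[1:]):
        for p in points[i:j + 1]:
            worst = max(worst, point_segment_dist(p, points[i], points[j]))
    return worst

# Example: the middle of three points deviates 1.0 pixel from the chord.
pts = [(0.0, 0.0), (5.0, 1.0), (10.0, 0.0)]
print(max_deviation(pts, [0, 2]))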

Fig. 7. Polygonal approximation with maximum acceptable deviation of (a) 8 pixels (b) 12 pixels (c) 4 pixels (d) detail of the approximation with 4 pixels


These simulation tests demonstrate that the polygonal approximation of the curve serves as a trade-off between rapidity and smoothness, governed by the tolerance for imprecision. The results show that the approximation leads to a satisfactory seam approximation, while simultaneously the time for the entire process is minimized. In practice, the sewing process is repeated many times and the deviation errors are collected and processed. The maximum acceptable deviation error for all tested cases is set to 8 pixels (1 mm). Fig. 8 (a) shows the deviation errors for Fabric A, a fabric piece with small curvature (Fig. 6 (a)), and Fig. 8 (b) shows the deviation errors for Fabric B, a fabric piece with large curvature (Fig. 6 (b)).
Fig. 8. Deviation (in pixels) between real and desired path versus the number of straight-line segments, using the micro-GA, for (a) fabric A (b) fabric B
Fabric A: The proposed micro-GA is experimentally tested using the dominant points as input. The maximum deviation (in pixels) between the needle-point and the polygon approximation is computed from the captured image. The sewing process for the curved edge is accomplished after 15 steps and the results are shown in Fig. 8 (a). The maximum deviation is 6.63 pixels (0.83 mm), which is lower than the maximum acceptable limit of 8 pixels. The average deviation is 2.75 pixels (0.34 mm), which is very satisfactory. Steps 6-10 correspond to the part of the curve with high curvature.

Fabric B: Using the dominant points as input, the micro-GA terminates after 9 steps and the average deviation is 4.14 pixels (0.52 mm). The maximum deviation is depicted in Fig. 8 (b). The deviation error for this fabric piece is greater compared to the previous one, which is reasonable, since the curvature is higher.

More fabric pieces were used to test the proposed approach, but detailed results are not presented due to space limits. Table 2 indicatively shows the total steps, as well as the average and maximum deviations, for some of the tested fabrics. The task is accomplished in fewer steps (corresponding to lower total time) when the micro-Genetic-based approach that uses the dominant points as input is applied, satisfying the predefined boundary of 1 mm. In the apparel industry, the maximum allowable deviation, which is empirically evaluated, lies in the range of 0.5-5 mm. For all tested cases, the proposed algorithm proved to be quite robust and efficient, since the achieved deviation is below the maximum predefined deviation of 1 mm set by the user. Bearing in mind that the robot accuracy is within 0.03 mm, the deviations resulting from the vision and robot position errors are very satisfactory.


Fabric type/color    Steps    Maximum deviation (mm)    Average deviation (mm)
fleece/red             15              0.83                      0.34
cotton/green            9              1.00                      0.52
jeans/blue             10              0.99                      0.65
baize/green            10              0.79                      0.34
tweed                  10              0.96                      0.58
Table 2. Experimental results for the tested fabrics

Fig. 9. (a) Robot-camera-sewing machine system (b) close-up of the camera

Fig. 10. Identification of a curved section

To test the ANFIS system, the robot-camera-sewing machine system was restructured to achieve even better accuracy (Fig. 9). Now, the vision system consists of a web camera at 480×640 pixels resolution (RGB), which is mounted on the sewing machine so that the area in the vicinity of the sewing needle is in the field of view of the web camera. The area captured by the web camera is about 40×55 mm, but the seam errors (e_D and e_θ) are computed in a smaller area


of 40×25 mm for a better approximation. Fig. 10 indicatively presents a curved section of a fabric, which is identified and approximated by a straight edge. The proposed seam control strategy is tested using real fabric pieces of different materials, shapes and colours. Fig. 11 shows four fabric pieces with curved edges that were used for training and Fig. 12 shows three fabric pieces that were used for testing. Several experimental tests were conducted to verify the efficiency and effectiveness of the proposed adaptive neuro-fuzzy approach for robot sewing of fabrics with curved edges.

Fig. 11. (a) jacket's sleeve (b) shirt's sleeve (c) trouser's leg (d) short sleeve

Fig. 12. (a) skirt's piece (b) pocket (c) hood

Using the fabric pieces shown in Fig. 11, a total of 350 data sets are obtained. Of these, 300 data sets are selected for the purpose of training in ANFIS and the remaining 50 data sets are used for testing after the training is completed, in order to verify the accuracy of the predicted values. Next, the genetic-oriented clustering method is applied to the training data set. Fig. 13 shows the training data sets and the resulting cluster centers obtained after applying the genetic-oriented clustering method. The cluster centers determine the number of the fuzzy sets and the parameters (mean values) of the Gaussian membership functions of the antecedent part, as well as the number of fuzzy rules of the Sugeno-type FIS. The number of resulting clusters for r_a = 0.4 is seven. As a result, each input variable is characterized by seven fuzzy sets with the linguistic values {Extremely Small (ES), Very Small (VS), Small (S), Medium (M), Large (L), Very Large (VL), Extremely Large (EL)}. The consequent parameters of each rule of the Sugeno-type FIS are determined using the linear least-squares algorithm. The membership functions for the two inputs resulting from these cluster centers


are shown in Fig. 14 (a). The rule base obtained through the genetic-oriented clustering approach consists of 7 rules, shown in Table 3.

1. If (e_D is Small) and (e_θ is Extremely Small) then (ω is Extremely Small) (1)
2. If (e_D is Very Small) and (e_θ is Small) then (ω is Very Small) (1)
3. If (e_D is Extremely Small) and (e_θ is Extremely Large) then (ω is Small) (1)
4. If (e_D is Large) and (e_θ is Very Small) then (ω is Medium) (1)
5. If (e_D is Medium) and (e_θ is Large) then (ω is Large) (1)
6. If (e_D is Very Large) and (e_θ is Medium) then (ω is Very Large) (1)
7. If (e_D is Extremely Large) and (e_θ is Very Large) then (ω is Extremely Large) (1)

Table 3. Fuzzy rule base
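To illustrate how such a clustered rule base can be evaluated, the sketch below builds one Gaussian membership function per cluster center and computes the normalized weighted Sugeno output. The cluster centers, spreads and consequent coefficients are invented for illustration; the real values come from the genetic-oriented clustering and linear least-squares steps described above.

import math

# Hypothetical cluster centers (e_D in pixels, e_theta in degrees), one
# per linguistic value from ES to EL, and per-rule linear consequents.
CENTERS = [(-12, -8), (-8, -4), (-4, -1), (0, 0), (4, 1), (8, 4), (12, 8)]
SIGMA_D, SIGMA_TH = 4.0, 3.0
CONSEQUENTS = [(0.05, 0.10, b) for b in (-0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6)]

def predict_omega(e_d, e_th):
    # First-order Sugeno evaluation over the 7 clustered rules.
    num = den = 0.0
    for (cd, cth), (p, q, r) in zip(CENTERS, CONSEQUENTS):
        w = (math.exp(-0.5 * ((e_d - cd) / SIGMA_D) ** 2)
             * math.exp(-0.5 * ((e_th - cth) / SIGMA_TH) ** 2))
        num += w * (p * e_d + q * e_th + r)
        den += w
    return num / den

print(predict_omega(5.0, 2.0))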

Fig. 13. Cluster centers resulting from the genetic-oriented clustering method


Fig. 14. The membership functions for inputs using (a) genetic-based clustering (b) ANFIS


The next step is the training process, which aims at tuning the fuzzy inference system. Fig. 14 (b) shows the final Gaussian membership functions derived after training the system. In contrast to the first input, there is a considerable change in the final membership functions of the second input, since the supports of all its fuzzy sets are broadened. The root mean squared errors of the output over 100 training epochs, obtained using the 300 training data sets and 50 checking data sets, are plotted in Fig. 15. Both the training and the checking error gradually decrease with the number of epochs.
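The overfitting criterion described earlier amounts to monitoring the checking error alongside the training error and keeping the parameters from the epoch where the checking error is smallest. A minimal sketch, with made-up error sequences:

def stop_epoch(train_rmse, check_rmse):
    # Epoch to stop at: checking error minimal, while the training error
    # keeps decreasing afterwards (the overfitting signature).
    best = min(range(len(check_rmse)), key=check_rmse.__getitem__)
    still_improving = train_rmse[-1] < train_rmse[best]
    return best, still_improving

train = [0.90, 0.50, 0.30, 0.20, 0.15, 0.12, 0.10]
check = [1.00, 0.60, 0.40, 0.30, 0.28, 0.31, 0.35]
print(stop_epoch(train, check))  # (4, True): stop at epoch 4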

Fig. 15. ANFIS training process

Fig. 16. Comparisons between measured and predicted angular velocity

After the 50 checking data sets are entered in ANFIS, an output value is obtained from the calculation results. This output value is the predicted value for the angular velocity of the end-effector. Fig. 16 shows the measured and predicted angular velocity for the checking data, as well as the prediction error expressed as the absolute value of the percentage error between the measured and predicted angular velocity. These two diagrams demonstrate that the predicted values are close to the experimentally measured values, as many of the


data points fall very close to the predicted points, indicating good correlation. The average error of the prediction of the angular velocity is around 2.57%, which means that the accuracy is as high as 97.43%. An interesting and important conclusion established from the results is that the proposed approach is capable of estimating data points outside the training space well, exploiting the advantages of fuzzy logic, neural networks and genetic algorithms. This conclusion reflects the model's ability to predict the output based on the input data used for training. In practice, this implies that the method is effective for robot handling of fabrics with curved edges which have not been used in the training process.

The strategy for robot sewing of fabrics with curved edges described above has the advantage of performing well regardless of the fabric's deformations. This advantage is of major importance, as fabrics are limp materials that tend to distort and change their shape when handled on a work table. The fact that this approach is based on the information taken from the image that captures the fabric in the neighbourhood of the needle amplifies the system's robustness against deformations. Therefore, deformations that may appear on the fabric during robot handling do not affect the effectiveness of the stitching process.

7. Conclusion
The main focus of this work lies on the design of an innovative visual servoing manipulator controller based on fuzzy logic that aims to enable robots to handle flexible fabric sheets lying on a work table. The use of visual servoing and fuzzy logic in robot motion control increases the intelligence of robots and the flexibility of the system. The designed visual servoing control system can deal with a variety of fabrics that are likely to deform and can cope with possible deformations due to buckling (wrinkling, folding) during handling without degrading its performance, on the condition that the seam segment to be sewed is undistorted. The desired accuracy is achieved in approaching the needle even in cases where deformations appear.

This work focuses on the difficult case where fabrics have edges with arbitrary curvatures. The need for approximating the curved edges through straight lines arises from the fact that current industrial robots can only be programmed to make straight or circular motions. From the standpoint of apparel industry manufacturing, it is important to assure that the fabric is sewn in the minimum time, while simultaneously satisfactory accuracy and smoothness are achieved. In our approach, the curve is approximated by small straight segments applying a micro-GA approach that uses the dominant point detection method as input. The proposed approach aims at the minimization of the execution time while satisfying a predefined maximum tolerance. Based on the results, some consideration is given to the trade-offs between running times and the quality of the final approximation.

To alleviate the computational burden of geometrical computations, an innovative method for robot handling of fabrics with curved edges towards sewing has been proposed, which is based on a novel genetic-oriented clustering method and an adaptive neuro-fuzzy inference system. This work presents the design and tuning of an ANFIS network with the minimum number of fuzzy rules for modelling the complex process of robot sewing of fabrics. The proposed adaptive neuro-fuzzy inference system benefits from the advantages of fuzzy logic, neural networks and genetic algorithms. This feature makes this approach a powerful


tool to deal with the uncertainty embedded in the curved edges of real cloth parts and to cope with new fabric pieces that have not been used in the training process. The experimental results show that the proposed control scheme is effective and efficient in guiding the fabric towards the sewing needle, sewing it and rotating it around the needle. After extensive experimentation, it has proved to be rather simple, flexible and robust, owing to its capability to respond to any position and orientation error for a variety of fabrics that are likely to present deformations. The experimental data were obtained using the robot-camera-sewing machine system and real parts of cloths. The proposed method presented good results when applied to fabrics with curved edges of unknown curvatures, which manifests the validity of the proposed approach. This method is applicable to any piece of fabric with edges of arbitrary curvature, since it has proved to be efficient in estimating the appropriate angular velocity for fabrics that were not included in the training process. Although no direct comparison with human stitching is possible, as the equipment deployed differs, the achieved accuracy is really promising for future use in industrial applications.

8. Acknowledgment
The author would like to acknowledge the Robotics Group of the Mechanical Engineering & Aeronautics Dept., University of Patras ([Link]) for help and cooperation.

9. References
Amin-Nejad, S.; Smith, J.-S. & Lucas, J. (2003). A visual servoing system for edge trimming of fabric embroideries by laser, Mechatronics, Vol.13, No.6, pp.533-551.
Delgado, M.R.; Von Zuben, F. & Gomide, F. (2001). Hierarchical genetic fuzzy systems, Information Sciences, Vol.136, No.1, pp.29-52.
Gershon, D. & Porat, I. (1986). Robotic sewing using multi-sensory feedback, Proceedings of the 16th International Symposium of Industrial Robots, Brussels, pp.823-834.
Gershon, D. & Porat, I. (1988). Visual servo control of a robotic sewing system, Proceedings of the IEEE International Conference on Robotics and Automation, pp.1830-1835, Philadelphia, PA, USA.
Gershon, D. (1990). Parallel process decomposition of a dynamic manipulation task: robotic sewing, IEEE Transactions on Robotics and Automation, Vol.6, No.3, pp.357-366.
Gershon, D. (1993). Strategies for robotic handling of flexible sheet material, Mechatronics, Vol.3, No.5, pp.611-623.
Hui, N.B.; Mahendar, V. & Pratihar, D.K. (2006). Time-optimal, collision-free navigation of a car-like mobile robot using neuro-fuzzy approaches, Fuzzy Sets and Systems, Vol.157, No.16, pp.2171-2204.
Jang, J.-S.R. (1993). ANFIS: Adaptive-network-based fuzzy inference system, IEEE Transactions on Systems, Man and Cybernetics, Vol.23, No.3, pp.665-685.
Jang, J.-S.R.; Sun, C.-T. & Mizutani, E. (1997). Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, Upper Saddle River.


Koustoumpardis, P.; Zacharia, P. & Aspragathos, N. (2007). Intelligent robotic handling of fabrics for sewing, In: Industrial Robotics: Programming, Simulation and Applications, Pro Literatur Verlag Robert Mayer-Scholz, Chapter 28, pp.559-582, February.
Koustoumpardis, P. & Aspragathos, N. (2003). Fuzzy logic decision mechanism combined with a neuro-controller for fabric tension in robotized sewing process, Journal of Intelligent and Robotic Systems, Vol.36, No.1, pp.65-88.
Koustoumpardis, P.; Zoumponos, G.; Zacharia, P.; Mariolis, I.; Chatzis, I.; Evangelidis, G. & Zampetakis, A. (2006). Handling of non-rigid materials (XROMA), application in robotic sewing, 37th International Symposium on Novelties in Textiles, pp.528-533, Ljubljana, Slovenia, June 15-17, 2006.
Krishnakumar, K. (1989). Micro-genetic algorithms for stationary and non-stationary function optimization, SPIE Intelligent Control and Adaptive Systems, Vol.1196, pp.289-296.
Kudo, M.; Nasu, Y.; Mitobe, K. & Borovac, B. (2000). Multi-arm robot control system for manipulation of flexible materials in sewing operation, Mechatronics, Vol.10, No.8, pp.371-402.
Männle, M. (2001). FTSM - Fast Takagi-Sugeno Fuzzy Modeling, Institute for Computer Design and Fault Tolerance, University of Karlsruhe, Karlsruhe.
Marichal, G.N.; Acosta, L.; Moreno, L.; Méndez, J.A.; Rodrigo, J.J. & Sigut, M. (2001). Obstacle avoidance for a mobile robot: a neuro-fuzzy approach, Fuzzy Sets and Systems, Vol.124, No.2, pp.171-179.
Michalewicz, Z. (1996). Genetic Algorithms + Data Structures = Evolution Programs, 3rd edition, Springer-Verlag.
Moulianitis, V.; Dentsoras, A. & Aspragathos, N. (1999). A knowledge-based system for the conceptual design of grippers for handling robots, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Vol.13, No.1, pp.3-25.
Rizzi, A.; Mascioli, F.M.F. & Martinelli, G. (1999). Automatic training of ANFIS networks, IEEE International Fuzzy Systems Conference, Seoul, Korea, Vol.3, pp.1655-1660, ISBN: 0-7803-5406-0.
Rusu, P.; Petriu, E.M.; Whalen, T.E.; Cornell, A. & Spoelder, H.J.W. (2003). Behaviour-based neuro-fuzzy controller for mobile robot navigation, IEEE Transactions on Instrumentation and Measurement, Vol.52, No.4, pp.1335-1340.
Sugeno, M. & Kang, G.T. (1988). Structure identification of fuzzy model, Fuzzy Sets and Systems, Vol.28, No.1, pp.15-33.
Takagi, T. & Sugeno, M. (1985). Fuzzy identification of systems and its applications to modeling and control, IEEE Transactions on Systems, Man and Cybernetics, Vol.15, No.1, pp.116-132.
Teh, C.H. & Chin, R.T. (1989). On the detection of dominant points in digital curves, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.11, No.8, pp.859-872.
Torgerson, E. & Paul, F.-W. (1988). Vision-guided robotic fabric manipulation for apparel manufacturing, IEEE Control Systems Magazine, Vol.8, No.1, pp.14-20.
Wu, W.-Y. (2002). A dynamic method for dominant point detection, Graphical Models, Vol.64, No.5, pp.304-315.
Yin, P.-Y. (2003). Ant colony search algorithms for optimal polygonal approximation of plane curves, Pattern Recognition, Vol.36, No.8, pp.1783-1797.


Yin, P.-Y. (2004). A discrete particle swarm algorithm for optimal polygonal approximation, Journal of Visual Communication and Image Representation, Vol.15, No.2, pp.241-260.
Zacharia, P.Th. (2010). An adaptive neuro-fuzzy inference system for robot handling fabrics with curved edges towards sewing, Journal of Intelligent and Robotic Systems, Vol.58, No.3, pp.193-209.
Zacharia, P.; Mariolis, I.; Aspragathos, N. & Dermatas, E. (2005). Visual servoing of a robotic manipulator based on fuzzy logic control for handling fabric lying on a table, I*PROMS Virtual Conference on Innovative Production Machines and Systems, pp.411-416, July 4-15, 2005.
Zacharia, P.; Mariolis, I.; Aspragathos, N. & Dermatas, E. (2006). Visual servoing controller for robot handling fabrics of curved edges, I*PROMS Virtual Conference on Innovative Production Machines and Systems, pp.301-306, July 3-14, 2006.
Zacharia, P.Th. & Aspragathos, N.A. (2008). Genetically oriented clustering using variable length chromosomes, I*PROMS Virtual International Conference on Intelligent Production Machines and Systems, pp.204-209, July 1-14, 2008.
Zacharia, P.Th.; Mariolis, I.G.; Aspragathos, N.A. & Dermatas, E.S. (2008). Robot handling fabrics with curved edges based on visual servoing and polygonal approximation, Special Issue of Proceedings of the Institution of Mechanical Engineers, Part B, Journal of Engineering Manufacture, Vol.222, No.10, pp.1263-1274.
Zacharia, P.Th.; Mariolis, I.G.; Aspragathos, N.A. & Dermatas, E.S. (2009). A robotic system based on fuzzy visual servoing for handling flexible sheets lying on a table, Industrial Robot, Vol.36, No.5, pp.489-496.
Zhang, H. & Guo, J. (2001). Optimal polygonal approximation of digital planar curves using meta heuristics, Pattern Recognition, Vol.34, No.7, pp.1429-1436.
Zoumponos, G. & Aspragathos, N. (2008). Fuzzy logic path planning for the robotic placement of fabrics on a work table, Robotics & Computer-Integrated Manufacturing, Vol.24, No.2, pp.174-186.

5
Robotic Systems for Radiation Therapy
Ivan Buzurovic1, Tarun K. Podder2 and Yan Yu1
1Thomas Jefferson University, Philadelphia, Pennsylvania
2East Carolina University, Greenville, North Carolina
USA

1. Introduction
Medical robotics is an exciting and relatively new field, and robotics plays an important role in medical engineering. Medical robots were initially used in the 1980s in the field of urology, where robotic arms were developed and used for prostate resection. Robots can also be highly specialized and assist in diagnosing and treating patients. While there is still much work to be done, using robots can enhance medical treatments in terms of both the quality and accessibility of care. Robots can help reduce human error and bring highly specialized information to remote areas without requiring physicians' direct intervention.

In radiation therapy, high-energy radiation from x-rays, gamma rays, neutrons, and other sources is used to kill cancer cells and shrink tumors. Radiation may come from a machine outside the body (external-beam radiation therapy), or it may come from radioactive materials placed in the body near cancer cells (internal radiation therapy, implant radiation, or brachytherapy). The use of robotic systems to improve cancer treatment outcomes is a new field which overlaps with electronics, computer science, artificial intelligence, mechatronics, nanotechnology, and bioengineering. For this purpose, robots can be used in medical facilities to perform different tasks such as delivering radiation sources, real-time tumor tracking during radiation delivery, or external beam delivery.

The only product on the market for robotic radiotherapy is the CyberKnife Robotic Radiosurgery System. This robotic system has provision for so-called real-time tracking during beam delivery. The device itself is a 6 MV linear accelerator mounted on a six degree-of-freedom (DOF) Keller und Knappich Augsburg (KUKA) industrial robot. The system has real-time image-guided control; consequently, there is a significantly long time delay (about 200 ms) between the acquisition of tumor coordinates and repositioning of the linear accelerator. The CyberKnife-KUKA robot with linear accelerator end-effector is suited for radiation therapy to any body site. Its field size is restricted to the limited geometry of 12 discrete circular fields ranging from 5 mm to 60 mm in diameter. Therefore, the workspace is confined, and the radiation therapy community has not yet fully embraced the idea of using an industrial articulated robotic manipulator. The details of the CyberKnife robotic system are not included in this chapter. Instead, the basic idea is to present novel research results in the field of robotic radiation therapy and its applications.


2. Brachytherapy robotics
Brachytherapy is a method of treatment in which sealed radioactive sources are used to deliver radiation at a short distance by interstitial, intracavitary, or surface application. With this mode of therapy, a high radiation dose can be delivered locally to the tumor with rapid dose falloff in the surrounding normal tissue. Recently, several research groups have reported the investigation and development of robotic systems for prostate seed implants, (Stoianovici et al., 2003), (Wei et al., 2004), (Fichtinger et al., 2006), (Yu et al., 2007), (Meltsner, 2007), (Meltsner et al., 2007), (Stoianovici et al., 2007), (Podder et al., 2007), (Salcudean et al., 2008), (Lin et al., 2008), (Moerland et al., 2008). The potential advantages of robotic seed implants include improved accuracy of needle placement and seed delivery, as well as improved consistency of the seed implant. In medical applications of robots, especially in prostate brachytherapy, precise end-effector position, steady-state behavior and positioning accuracy are required. In such applications, even small positioning errors at the manipulator end-effector can have dangerous and costly consequences, (Mavrodis et al., 1997). To achieve the enhancements of robotic seed delivery, the robots need to be calibrated. Properly calibrated robotic systems have higher absolute positioning accuracy, and the deposited seed positions correspond better to the ones calculated in the planning systems. Brachytherapy robots are usually ultrasound (US) or magnetic resonance imaging (MRI) guided. Generally, several methods have been presented in the literature to improve needle placement and seed deposition in the brachytherapy procedure, such as parameter optimization, different needle rotation techniques, robotic insertion, force modeling, and needle steering techniques.

Robot-assisted therapeutic delivery systems are attractive for several reasons. The potential advantages are increased accuracy, reduced human variability, reduction of clinicians' fatigue and reduction of operation time. There are two methods for robotic needle insertion and seed deposition: the single-channel approach and the multi-channel approach. In the single-channel approach, one needle is inserted at a time and typically 2-5 seeds along the needle track are deposited in the prostate according to the dosimetry plan. On the other hand, a multi-channel system is capable of placing several needles at a time, so seed delivery can be faster. Since the prostate is not rigidly mounted, it can move and rotate as well as unpredictably deform. When several needles are inserted concurrently, the prostate is uniformly pushed back symmetrically to a more stable position, and the deformation of the tissue can be better estimated for precise delivery. In the following sections, two brachytherapy robotic systems are presented. Both the single-channel and the multi-channel robotic systems have been designed, developed and manufactured in our research laboratories.

2.1 Single-channel brachytherapy robotic system

We have designed and developed a robotic system named Endo-Uro Computer Lattice for Intratumoral Delivery, Implantation, and Ablation with Nanosensing (EUCLIDIAN). The system consists of a surgical module, a positioning module and an electronics housing, as in Fig. 1.a. Kinematic Denavit-Hartenberg (DH) parameters of the system are presented in Fig. 1.b. The platform connects the surgical module to the cart and has provision for both translational and rotational motion.
The vertical lift of the surgery module is motorized for ease of operation against gravitational effects. The surgical module consists of two robotic manipulators, i.e., two open


kinematic chains, called the needling mechanism and the ultrasound probe driver, with five and two DOF, respectively.

Fig. 1. a) EUCLIDIAN image-guided brachytherapy robotic system; b) Kinematic DH schema with reference frames

2.1.1 System description

The three main subsystems of the surgical module, Fig. 2, are: 1) a two-DOF transrectal ultrasound (TRUS) probe driver, 2) a three-DOF robotic gantry, and 3) a two-DOF needle inserter (denoted as the needling mechanism in Fig. 2) with seed holder and seed pusher. A brief description of these subsystems is provided in the following sections.

The TRUS probe, which contains two transducers for the transverse and sagittal planes, can be translated and rotated separately by two motors fitted with high-resolution optical encoders. This enables imaging the prostate in transverse as well as in sagittal planes with variable slice thicknesses or intervals, as thin as 0.1 mm. Working ranges of motion of the TRUS probe are 0-185 mm in translation and -91° to +91° in rotation. The TRUS probe can also be operated manually using the knobs; during this mode, the motors are disengaged automatically by electric clutches. There is a provision for a template holder at the end of the TRUS probe driver, enabling manual takeover if required. A pair of prostate stabilization needles can be placed with angulation in both sagittal and coronal planes to prevent the obstruction of robotic needle insertion. This approach has also been shown to produce significant improvement in prostate immobilization.

The gantry subsystem, which features two translational motions (x and y directions) and one rotational motion (pitch, i.e., the rotation upward or downward about the x-axis), connects the needle driving module to the positioning platform. The motions are achieved by motors with optical encoders fitted to the motors, Figs. 1 and 2. The ranges of motion are 62 mm and 67 mm in the x- and y-directions. The rotational range for angulating the needle is -5° to +5°, to avoid pubic arch interference and to reach the target region close to the TRUS probe. The TRUS probe motion and the rest of the surgery module (gantry and needle driver) are decoupled, as they are two separate open kinematic chains, which allows independent motions of the TRUS probe and the needle.


Fig. 2. Surgical module of EUCLIDIAN

Fig. 3. Needle driver and seed depositor of the EUCLIDIAN robot

The needle driver is the most complex subsystem of the EUCLIDIAN, Fig. 3, as the degree of complexity is increased by the introduction of three force-torque sensors (stylet sensor, cannula sensor, and whole-needle sensor), cannula rotation and an autonomous seed delivery system. The main components of this subsystem are: 1) stylet driver, 2) cannula driver, 3) seed-cartridge holder, and 4) force-torque sensors. The cannula and the stylet are driven separately by two motors. The travel ranges of both the cannula and the stylet are 0-312 mm. During the surgical procedure, the motions of the cannula as well as of the stylet are autonomous in accordance with the treatment plan; however, the clinician must approve the movements at critical points. Our custom-designed seed cartridge can hold 35 radioactive seeds. Accommodation of additional seeds posed some challenges for smooth functioning of the seed cartridge due to spring nonlinearity; however, after careful investigation and adjustments, the seed cartridge is working satisfactorily. The seed pusher, a flat-ended stylet, is deployed to expel the seed out of the cartridge and to deposit it at the planned location. Every motion during the sequence of seed delivery is fully automatic; however, the clinician is able to interrupt and/or manipulate the movements at any time using a hand pendant. By monitoring the force data from the sensor installed at the proximal end of the stylet, Fig. 4, the seed removal from the cartridge to the cannula can be verified. A ten-button hand pendant provides the clinician with the authority and freedom to assert control of the surgery module at any desired time. That is, the clinician can command each of the motors for manipulating the motion of various components of the


surgical module, such as needle insertion, needle rotation, seed deposition, x-y movement of the gantry, and system abort. The three-DOF cart (two translations and a rotation about the vertical axis) provides gross movement of the robotic system to align with the patient, while the six-DOF platform enables finer movement for desired positioning and orientation of the robot in three-dimensional (3D) space. The computer, system electronics, and cable junctions are housed in the electronics housing, Fig. 1.

The EUCLIDIAN's surgery module is fully autonomous; all the motors are fitted with high-resolution optical encoders and precision gear boxes. The robot is controlled by an industrial computer, which has proven to be robust and reliable for working in harsh industrial environments and military applications. It has a special metallic casing for minimizing electromagnetic interference. Two Galil control cards (Model DMC-1842; Galil Motion Control, Inc., Rocklin, CA) are used: one card to control the TRUS probe driver and gantry motions and the other card to control the needle driver and the seed pusher. A robust and stable proportional, integral and derivative (PID) controller has been developed for controlling the motorized surgical module. We have tuned the PID gains in such a manner that the system's stability is maintained when the needle changes its state from pure position control in the air to combined position and force control in the patient's body. The needle can be driven at a maximum velocity of approximately 100 mm/s; however, a lower velocity setting of 60 mm/s is used as the default. A frame grabber (FlashBus, Integrated Technologies, Indianapolis, IN) is used for TRUS image capturing. Three force-torque sensors (Nano17, ATI Industrial Automation, Apex, NC and M13, Honeywell, Morristown, NJ) are used for needle insertion force monitoring and robot control feedback. Each of the motors is fitted with an optical encoder (MicroMo Electronics, Inc., Faulhaber Group, Clearwater, FL), which can provide final motion resolutions, considering gear ratios and screw leads, of 0.0007 mm for gantry x-y translations, 0.004 mm for stylet and cannula motions, and 0.005 mm and 0.06° for TRUS probe translation and rotation, respectively.

2.1.2 Advanced control techniques

In previous studies, it was concluded that the proper selection of the translational and rotational velocities may reduce tissue deformation and target movements by reducing the insertion force, (Buzurovic et al., 2008.a). Therefore, it can be concluded that the insertion force has a dominant influence on needle insertion, seed deposition precision, and dose distribution in the brachytherapy procedure. In our initial work, (Buzurovic et al., 2008.b), we investigated the tracking problem for the same robotic system using a neural network approach, and a force prediction model was introduced as well. Here we introduce novel methods to control the brachytherapy robot with the lowest tissue deformation and the highest seed delivery precision as the ultimate goals. In order to achieve the desired dynamic behavior of the robotic system, the control strategies implemented in the robotic system include a force prediction module. For the first strategy we used an artificial neural network (ANN) and neural network predictive control (NNPC), (Buzurovic et al., 2010.a). In the following, the use of a feedforward model predictive control (MPC) is described.
The purpose of this approach is to control the reactive force which is responsible for tissue displacement. The second control task is to predict and compensate for the impact of the measured and unmeasured disturbances, rather than waiting until the effect


appears at the output of the system. When the reactive force is minimized, tissue displacement decreases. The force prediction control was also used to minimize the effect of the system time-delay. Because this procedure requires an accurate robot system model, it was necessary to obtain a more precise model; that is the reason for adopting a robotic system model in the form of a singular system of differential equations. Fig. 4 represents the contact surface where the reactive force appears during surgery. A mathematical equation of the contact surface is calculated using the EUCLIDIAN robotic system software.


Fig. 4. a) 3D representation of the contact surface (prostate gland) during insertion. b) 5-DOF surgical module: absolute and local coordinate systems together with generalized coordinates q_i

Assume that the contact surface is described by a scalar function \Phi: R^m \to R^1, Fig. 4a,
\Phi(p) = 0 \qquad (1)

The contact force vector is

f = D^T(p)\,\lambda \qquad (2)

where \lambda is a scalar multiplier for the constraint function and D(p) is the gradient of the constraint function, described as in (3),

D(p) = \frac{\partial \Phi(p)}{\partial p} \qquad (3)

In the previous equation, p \in R^3 is a position vector from the fixed reference frame to the constrained surface. An additional constraint is p = \psi(q) at the contact point, where \psi denotes the forward kinematics map. The matrix function J(q) is defined as the Jacobian matrix function,

J(q) = \frac{\partial \psi(q)}{\partial q} \qquad (4)

The influence of the contact force on the system can be described as

M(q)\ddot{q} + G(q, \dot{q}) = \tau + J^T(q)\,f \qquad (5)


where M(q) denotes the inertia matrix function and G is the vector function which describes Coriolis, centrifugal and gravitational effects. q is the vector of generalized coordinates and \tau is the torque vector. The influence of the external contact forces is described by the factor f. Combining equations (1)-(5), the mathematical model of the system is

M(q)\ddot{q} + G(q,\dot{q}) = \tau + J^T(q) D^T(p)\,\lambda(q,\dot{q},\tau) \qquad (6)

After an appropriate transformation, equation (6) is transformed to its matrix form

\begin{bmatrix} I & 0 & 0 \\ 0 & M(q) & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \dot{q} \\ \ddot{q} \\ \dot{\lambda} \end{bmatrix} = \begin{bmatrix} 0 & I & 0 \\ 0 & 0 & J^T D^T \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} q \\ \dot{q} \\ \lambda \end{bmatrix} + \begin{bmatrix} 0 \\ I \\ 0 \end{bmatrix} \tau + \begin{bmatrix} 0 \\ -G(q,\dot{q}) \\ \Phi(p) \end{bmatrix} \qquad (7)

System (7) is linearized around the point where the needle is inserted into the patient. The linearized mathematical model of the system is obtained as follows:

E\dot{x}(t) = A x(t) + B u(t) + d \qquad (8)

with singular matrix E \in R^{n \times n}, and vector d which represents the unmeasured disturbances such as needle deflection, tissue deformation and displacement; A \in R^{n \times n}, B \in R^{n \times r}. x and u represent the system state-space vector and control vector, respectively, x \in R^n, u \in R^r. The system defined by (8) is known as a singular system, descriptor system, semi-state system, system of differential-algebraic equations or generalized state-space system. Such systems arise naturally in many physical applications, such as electrical networks, aircraft and robotic systems, neutral delay and large-scale systems, economics, optimization problems and constrained mechanics. The matrices of the linearized system (8) are defined as follows
E = \begin{bmatrix} I & 0 & 0 \\ 0 & M(q_0) & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad A = \begin{bmatrix} 0 & I & 0 \\ -\partial(G - J^T D^T \lambda)/\partial q\,\big|_0 & 0 & J^T D^T\big|_0 \\ DJ\big|_0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ I \\ 0 \end{bmatrix}, \quad u = \tau \qquad (9)

Now it is possible to find subspaces S and F, as in (Buzurovic et al., 2010.a), having in mind that the state space X of system (8) can be represented as the direct sum of S and F, X = S \oplus F. As a result of the matrix transformation M applied to system (8), the slow and fast subsystems can be described by the mathematical formulation
\dot{x}_s = L_s x_s + B_s u + d_s, \qquad L_f \dot{x}_f = x_f + B_f u + d_f \qquad (10)

with L_s = MA|_S, L_f = MA|_F, B_s = PMB, B_f = QMB, d_s = PMd and d_f = QMd for some linear transformations represented by matrices P and Q, respectively. For this case, the


initial conditions are chosen to satisfy x_{0s} = P x_0 and x_{0f} = Q x_0. The solution of system (10) is x = x_s + x_f, where
x_s = e^{L_s t} x_{0s} + \int_0^t e^{L_s (t-\tau)} B_s u(\tau)\,d\tau + \int_0^t e^{L_s (t-\tau)} d_s(\tau)\,d\tau, \qquad x_f = -\sum_{i=0}^{\nu-1} L_f^{\,i} B_f u^{(i)} \qquad (11)

Now it is possible to apply the MPC control, Fig. 5, and to investigate the dynamical behavior of the system. The disturbance d is given to the MPC and its effect is predicted and compensated before it appears at the system output.

Fig. 5. Insertion force control scheme for the slow subsystem

In (Mills, 1988) it is shown that the impulsive behavior of the system can be avoided using appropriate initial conditions, defined by x_{0s} = P x_0 and x_{0f} = Q x_0. Using the described approach, the fast subsystem will not induce impulsive behavior. Moreover, it can be concluded, as stated previously and from equation (11), that there is little need for fast feedback to eliminate the impulsive behavior. The necessary task was to find an appropriate feedback for the slow subsystem. The control scheme for the slow subsystem is represented in Fig. 5, as suggested in (Cobb, 1983). Furthermore, when the MPC controller is applied, the main objective is to hold the insertion force at the predefined acceptable reference value by adjusting the control signal from the actuators, in order to minimize the prostate displacement, i.e. the reactive force which acts upon the tissue. Using this approach it is possible to decrease the insertion force along the insertion trajectory. The needle displacement is an unmeasured disturbance and the controller provides feedback compensation for such disturbances. For the insertion force, the controller provides feedforward compensation. Various noise effects can corrupt the measurements: the noise could vary randomly with a zero mean, or could exhibit a nonzero, drifting bias. The MPC uses a filtering method for removing the estimated noise component. At the beginning of each sampling instant, the controller estimates the current system state. Accurate knowledge of the state improves prediction accuracy, which improves controller performance. During the insertion, passive force control becomes active and keeps the passive insertion force close to the predefined minimal value. When the MPC control approach was implemented, it was possible to decrease the insertion force, as shown in Fig. 6. Also, peaks during the


insertion are minimized. These conclusions give us reason to believe that tissue deformation can be reduced more effectively than with the traditional PID control.

Fig. 6. Controlled force with MPC (red) and insertion force with PID controller (blue)
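The receding-horizon idea behind such a force controller can be sketched for a scalar first-order force model; the model coefficients, horizon length and weighting below are assumptions for illustration and do not come from the identified needle-tissue dynamics.

import numpy as np

def mpc_step(f0, f_ref, a=0.9, b=0.05, horizon=10, reg=0.01):
    # One receding-horizon step for the scalar insertion-force model
    # f[k+1] = a*f[k] + b*u[k]: stack the horizon predictions
    # f = Phi*f0 + Gamma*u and solve the regularized least-squares
    # problem min ||f - f_ref||^2 + reg*||u||^2 for the input sequence.
    Phi = np.array([a ** (k + 1) for k in range(horizon)])
    Gamma = np.zeros((horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            Gamma[i, j] = a ** (i - j) * b
    H = Gamma.T @ Gamma + reg * np.eye(horizon)
    g = Gamma.T @ (f_ref - Phi * f0)
    u = np.linalg.solve(H, g)
    return u[0]   # apply only the first move, then re-measure

print(mpc_step(f0=8.0, f_ref=5.0))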
2.2 Multi-channel brachytherapy robotic system

A multi-channel system is capable of placing several needles, or even all needles, at a time and can thereby deliver the seeds required for the treatment faster. A multi-channel delivery system can effectively avoid the problem of gradual prostate swelling (i.e., edema) and deformation, which occurs while depositing the seeds with a single needle. Since the prostate is not rigidly mounted, it can move, rotate and deform quite unpredictably every time a needle is inserted. But when several needles are inserted concurrently, the prostate is uniformly pushed back symmetrically to a more stable position, and the deformation can be better estimated for precise delivery of seeds. Thus, the multi-channel system can overcome some of the drawbacks that may be encountered by single-channel robotic systems. In this part, we present our Multichannel Image-guided Robotic Assistant for Brachytherapy (MIRAB), which was designed and fabricated for prostate seed implantation. Currently, MIRAB can simultaneously rotate and insert 16 needles; it is capable of inserting more needles concurrently if needle rotation is excluded.

2.2.1 System description

The MIRAB system shown in Fig. 7 consists of five modules: (1) Rotary Needle Adapter, (2) Surgical x-y Carrier, (3) Mounting and Driving Mechanism, (4) Seed Applicator, and (5) Transrectal Ultrasound (TRUS) Driver, (Podder et al., 2010). The Rotary Needle Adapter can hold 16 needles with rotational capability; additional needles can be installed without provision for rotation. Two direct current (DC) motors rotate the needles using a spur-gear train. It is known that needle rotation reduces the insertion force as well as organ (or target) deflection and deformation. For guiding the extended needles (brachytherapy needles are about 200 mm in length and 1.27 mm or 1.47 mm in diameter), a regular brachytherapy template is installed at the distal end.


Fig. 7. MIRAB robotic system

The needles can be simultaneously inserted using one of the DC motors on the Mounting and Driving Mechanism. The Surgical x-y Carrier is a 3-DOF module that carries the Seed Applicator, which delivers seeds and withdraws the needle. Two DC servo motors on the x-y carrier provide motion to the Seed Applicator in the x- and y-directions, while another DC servo motor on the Mounting and Driving Mechanism provides motion in the z-direction, i.e. the depth. The Mounting and Driving Mechanism is driven by two DC servo motors to impart translational motion (along the z-direction) to the Needle Adapter and the Surgical x-y Carrier. The Seed Applicator is a module attached to the Surgical x-y Carrier; a DC servo motor is employed to expel the seed from the seed cartridge. The Transrectal Ultrasound (TRUS) Driver is a module that was originally developed for the EUCLIDIAN system (Fig. 2). However, the MIRAB is designed in such a way that it can be installed on the EUCLIDIAN by taking the needling module away. This interchangeability of MIRAB and EUCLIDIAN makes it very convenient to switch from a single-channel system to a multi-channel system. The TRUS can image the prostate and the relevant anatomy in transverse as well as sagittal planes. All the DC servo motors are fitted with high-resolution (up to about 0.0007 mm) optical encoders, MicroMo Electronics, Inc., Faulhaber Group, Clearwater, FL. An industrial computer, Plans, San Diego, CA, is used for controlling the whole system. Two Galil control cards, Model DMC-1842; Galil Motion Control, Inc., Rocklin, CA, are used. All the desired motions are achieved by deploying a proportional, integral and derivative (PID) controller.
2.2.2 Experimental results

The purpose of the following experiments is to evaluate the performance of the multichannel robotic brachytherapy system designed for prostate seed implantation. The developed multichannel robotic system is capable of inserting a large number of needles concurrently and depositing seeds automatically.


Fig. 8. MIRAB robotic system, experimental setup

The experimental procedure, Fig. 8, is described in the following part. For the cannula, we used an 18 ga × 20 cm stainless steel BARD needle and for the stylet a Rev. A4, 304 stainless steel needle, Vita Needle Company. The phantom was prepared from polyvinylchloride (PVC) plastic and PVC hardener in the ratio 80% to 20%, respectively, MF Manufacturing, TX. Force measurements were performed using a single-axis force sensor, Model 13, Honeywell Sensotech, Columbus, OH, installed on the proximal end of the cannula. Deposited seeds were rounded stainless steel dummy seeds, BARD. The position of the tip of the needle, and consequently the depth of the deposition into the phantom, was measured using optical encoders, FAULHABER MicroMo HEDM5500J series 5500, attached to the template stage and seed applicator motors.

The MIRAB was evaluated for insertion speeds of 5 mm/s, 10 mm/s, 20 mm/s, 40 mm/s, 60 mm/s and 80 mm/s and stylet speeds in the range of 20-60 mm/s. For each insertion speed we recorded the force for 1, 2, 4, 8 and 16 needles installed together on the template stage. The absolute seed placement error was 0.10 mm (SD = 0.11 mm) in the X and Y directions and 0.15 mm (SD = 0.12 mm) in the Z direction for a plan with 16 needles and 64 seeds. The relative position error between seeds was 0.07 mm (SD = 0.05 mm).

It can be seen in Fig. 9.a that the maximum insertion force for an insertion speed of 40 mm/s occurred when 16 needles were inserted together; in that case, the maximum force value was 43 N. Fig. 9.b represents the insertion force for an insertion speed of 80 mm/s, where the maximum insertion force for 16 needles was 52 N. It can be concluded that the more needles inserted in one insertion, the higher the force. However, when 8 needles were inserted at the same time, the maximum insertion force did not change with speed: for insertion speeds of 40 mm/s and 80 mm/s the insertion force was around 35 N. Figs. 9.c and 9.d represent the insertion force over the whole range of insertion speeds; in the former case 8 needles were inserted together, while in the latter 16 needles were inserted. The insertion force in the latter case was about 7 N higher due to the larger number of inserted needles. However, it can be noticed that the force does not change significantly in the


range of insertion speeds higher than 40 mm/s (60 mm/s or 80 mm/s). The conclusion based on this fact is that the insertion speed range can be divided into two regions with different insertion parameters. The average maximum insertion force was less than 45 N for the moderate speed range (insertion speed 40 mm/s and stylet speed 30 mm/s, 16 needles in the stage) and 52 N for the high speed range (insertion speed greater than 40 mm/s and stylet speed 50 mm/s, 16 needles in the stage). The insertion time per seed was 5-8 seconds. The plan delivery time was 8 min for the high speed range and 12 min for the moderate speed range.

In summary, it was observed that the more needles inserted together, the higher the force; however, when 8 and 16 needles were inserted at the same time, the maximum insertion force did not change. Furthermore, the force did not change significantly for insertion speeds higher than 40 mm/s. Consequently, the insertion speed range can be divided into two regions with different insertion parameters. Preliminary results reveal that MIRAB is efficacious for delivering seeds accurately and can potentially be used for image-guided robotic brachytherapy.

Fig. 9. a) Insertion force for different numbers of needles installed on the template stage, for an insertion speed of 40 mm/s. b) Insertion force for different numbers of needles installed on the template stage, for an insertion speed of 80 mm/s. c) Insertion force for different insertion speeds; 8 needles inserted at the same time. d) Insertion force for different insertion speeds; 16 needles inserted at the same time


3. Robotic systems for real-time tumor tracking


Respiratory and cardiac motions induce displacement and deformation of the tumor volumes in various internal organs. To accommodate this undesired movement and other errors, physicians incorporate a large margin around the tumor to delineate the Planning Target Volume (PTV), so that the Clinical Target Volume (CTV) receives the prescribed radiation dose under any scenario. Consequently, a large volume of healthy tissue is irradiated and it is sometimes difficult to spare critical organs adjacent to the tumor, (Buzurovic et al., 2010.b,c). In this section we describe a novel approach to 4D Active Tracking and Dynamic Delivery (ATDD) incorporating a tumor motion prediction technique. The proposed algorithm can predict the tumor position, and the robotic systems are able to continuously track the tumor during radiation dose delivery. Therefore, a precise dose is delivered to a moving target while the dose to nearby critical organs is reduced, improving patient treatment outcome. The efficacy of the proposed method has been investigated by extensive computer simulation, (Buzurovic et al., 2010.d, 2011.a).

Recently, several research groups have been investigating various aspects of tumor tracking and developing tools to deliver a precise dose to moving target volumes, (Ozhasoglu & Murphy, 2001), (Keall et al., 2006.a), (Benchetrit, 2000), (Vedam et al., 2004), (Sharp et al., 2004), (Schweikard et al., 2000), (Kamino et al., 2006), (D'Souza & McAvoy, 2006), (Keall et al., 2006.b), (D'Souza et al., 2005), (Chung et al., 2006), (Podder et al., 2007, 2008), (Buzurovic et al., 2010.d, 2011.a). Generally, commonly practiced methods for compensating target/tumor motion can be structured as: (i) breath-hold techniques, (ii) gating techniques, and (iii) ATDD. The ATDD is the most effective technique, but it is also the most challenging one. The ATDD can be accomplished in three different ways: (1) using the multi-leaf collimator (MLC), (2) using the treatment couch, and (3) using the MLC and the couch simultaneously. However, each of these has its own unique limitations. For instance, the MLC gating technique using internal fiducials requires kilovoltage x-rays, which deliver an unwanted radiation dose to the patient; additionally, gating suffers from severely low duty-cycle (only 30%-50%) and intensity modulated radiation therapy (IMRT) efficiency (only 20%-50%); all these lead to a 4- to 15-fold increase in delivery time over conventional treatment, (Keall et al., 2006.a). Therefore, it would be of tremendous clinical interest if the radiation beam could be delivered with a higher duty-cycle (or almost continuously) while compensating for the target movement without exposing the patient to kilovoltage x-rays. Robotic systems can help a great deal in solving this problem.

In the following part we present the dynamic equations of motion of the HexaPOD robotic couch. We applied a similar approach to a standard 3-DOF robotic couch for radiation treatment (Buzurovic et al., 2010.d). In addition, we present the control approach and a prediction module which is capable of compensating the time delay of the mechanical system. For the system dynamics, we used the energy-based Lagrangian formulation. The Lagrangian function of a dynamic system can be expressed as

L = Kinetic energy (K) - Potential energy (P)     (12)

Thus, the general form of the dynamic equations is


$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}-\frac{\partial L}{\partial q}=\tau \qquad (13)$$
where $q \in \mathbb{R}^n$ is the vector of generalized coordinates and $\tau$ is the generalized force (or torque) applied to the system through the actuators. The dynamical behavior of a 6-DOF robotic couch has been investigated; the proposed approach is explained below.
3.1 Parallel robotic platform

The HexaPOD is a special type of Stewart platform, i.e., a parallel robotic manipulator. The parallel robotic mechanism consists of a rigid-body top plate, or mobile plate, connected to a fixed base plate, and is defined by at least three stationary points on the grounded base connected to six independent kinematic legs. Typically, the six legs are connected to both the base plate and the top plate by universal joints in parallel located at both ends of each leg. The legs are designed with an upper body and a lower body that can be adjusted, allowing each leg to be varied in length, as in Fig. 10.

Fig. 10. HexaPOD robotic couch: a) external isometric view with moving coordinate system; b) external schematic view with fixed coordinate system; c) schematic of the leg

In this study, we have used the following notations for modeling the HexaPOD, i.e. the Stewart platform. Referring to Fig. 10, we have assigned an inertial frame (X,Y,Z) at the center O of the lower platform, i.e. the BASE, and another moving coordinate system (x,y,z) at the center of the top platform, i.e. the TOP. The BASE frame is fixed, while the TOP frame moves with the platform. To specify the configuration of the 6-DOF Stewart platform, six independent position-orientation variables are needed. Denote the position of the origin of the TOP frame with respect to the BASE frame by $[p_X\;p_Y\;p_Z]^T$. The orientation is not defined by the standard Euler angles, but by rotating the TOP frame first about the fixed X-axis by $\alpha$, then about the fixed Y-axis by $\beta$, and finally about the Z-axis by $\gamma$. We denote by $R_X(\alpha)$, $R_Y(\beta)$ and $R_Z(\gamma)$ the three matrices that represent these basic rotations:
$$R_X(\alpha)=\begin{bmatrix}1&0&0\\0&C\alpha&-S\alpha\\0&S\alpha&C\alpha\end{bmatrix},\quad
R_Y(\beta)=\begin{bmatrix}C\beta&0&S\beta\\0&1&0\\-S\beta&0&C\beta\end{bmatrix},\quad
R_Z(\gamma)=\begin{bmatrix}C\gamma&-S\gamma&0\\S\gamma&C\gamma&0\\0&0&1\end{bmatrix}\qquad(14)$$

where $C(\cdot)=\cos(\cdot)$ and $S(\cdot)=\sin(\cdot)$.
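As a quick check of equation (14), the following minimal NumPy sketch builds the three basic rotations and the resulting TOP orientation. It is illustrative only: the function names are ours, and the left-multiplication order $R_Z R_Y R_X$ reflects the fixed-axis rotation sequence described above.

```python
import numpy as np

def rot_x(a):
    # Basic rotation about the X axis, first matrix of equation (14)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def rot_y(b):
    # Basic rotation about the Y axis, second matrix of equation (14)
    c, s = np.cos(b), np.sin(b)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def rot_z(g):
    # Basic rotation about the Z axis, third matrix of equation (14)
    c, s = np.cos(g), np.sin(g)
    return np.array([[  c,  -s, 0.0],
                     [  s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def top_orientation(alpha, beta, gamma):
    # Rotations about *fixed* axes compose by left-multiplication:
    # first X by alpha, then Y by beta, finally Z by gamma.
    return rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)
```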


This definition of the orientation not only provides a clear physical meaning but also avoids violating the one-to-one relationship between the system configuration and the values of $X_{p\text{-}o}$, which could cause the Jacobian matrix to lose its rank even if the system is not in a singular configuration. Thus, the position and orientation of the upper platform are specified by the Cartesian coordinates $X_{p\text{-}o}=[p_X, p_Y, p_Z, \alpha, \beta, \gamma]^T$. Now, the dynamic equations in Cartesian space (or task space) are expressed as:
$$M(X_{p\text{-}o})\ddot{X}_{p\text{-}o}+V_m(X_{p\text{-}o},\dot{X}_{p\text{-}o})\dot{X}_{p\text{-}o}+G(X_{p\text{-}o})=J^{T}(X_{p\text{-}o})\,F_{td}\qquad(15)$$

where $F_{td}=(f_1, f_2, f_3, f_4, f_5, f_6)^T$ is the vector of forces applied by the actuators of the legs and $J$ is the full system Jacobian matrix. For convenience, we divide the HexaPOD robotic couch into two subsystems: the upper platform (TOP) and the six legs, Fig. 10. We compute the kinetic energy and potential energy of these subsystems and then derive the global dynamic equations.
$$K_{up}=K_{up(trans)}+K_{up(rot)}=\tfrac{1}{2}m_u(\dot{p}_X^2+\dot{p}_Y^2+\dot{p}_Z^2)+\tfrac{1}{2}\,\omega_{up(mf)}^{T}I_{up(mf)}\,\omega_{up(mf)}\qquad(16)$$

is the expression for the kinetic energy of the upper platform, where $m_u$ is its mass and $I_{up(mf)}$ and $\omega_{up(mf)}$ are its inertia matrix and angular velocity expressed in the moving frame. The potential energy of the upper platform is

$$P_{up}=m_u\,g\,p_Z\qquad(17)$$

The kinetic energy of leg $i$ is

$$K_{Li}=\tfrac{1}{2}(m_1+m_2)\left[h_i V_{Tj}^{T}V_{Tj}-k_i V_{Tj}^{T}u_i u_i^{T}V_{Tj}\right]\qquad(18)$$

where $L_i$ is the length of the leg at the current configuration, $(m_1+m_2)$ is the mass of the leg (lower and upper segments, with centre-of-mass offsets $l_1$ and $l_2$), $V_{Tj}$ is the velocity of the leg's top attachment point, $u_i$ is the unit vector along the leg axis, and

$$h_i=\frac{m_1 l_1^{2}+m_2 l_2^{2}}{(m_1+m_2)L_i^{2}},\qquad k_i=h_i-\frac{m_2^{2}\,l^{2}}{(m_1+m_2)^{2}L_i^{2}},\qquad l=\frac{m_2 l_2}{m_1+m_2}\qquad(19)$$

The potential energy of the legs is

$$P_{Legs}=(m_1+m_2)\,g\sum_{i=1}^{6}\left[\frac{l}{L_i}+\frac{m_2}{m_1+m_2}\right](p_Z+Z_{Ti})\qquad(20)$$

where $Z_{Tj}=[0\;0\;1]\,[R_Z(\gamma)^{T}R_X(\alpha)^{T}R_Y(\beta)^{T}G_{Tj(mf)}]$, and $G_{Tj(mf)}$ is obtained from the geometry of the platform. Substituting the expressions from equations (16)-(20) into equations (12) and (13), we can obtain the dynamic equations for the HexaPOD (legs plus top platform). Recalling equation (15), more precisely it can be written

$$M(X_{p\text{-}o})=M_{up}+M_{Legs},\qquad V_m(X_{p\text{-}o},\dot{X}_{p\text{-}o})=V_{m\,up}+V_{m\,Legs},\qquad G(X_{p\text{-}o})=G_{up}+G_{Legs}\qquad(21)$$


The dynamic equations of motion for the HexaPOD robotic couch have been discussed above. These equations are essential in developing our proposed dynamics-based coordinated control system. As seen in the following part, the speeds required for tracking are slow. However, it was necessary to use a dynamic-model approach to satisfy the strong requirements on tracking accuracy. Since tracking is paired with real-time radiation delivery, it was of significant importance to have an accurate model and to reduce the possibility of inaccurate radiation delivery during tracking.
3.2 Control and prediction

To track the tumor trajectory for optimal dose delivery to the CTV while sparing normal tissue, we propose PID control for the HexaPOD parallel robotic platform. The basic goal of the controller is to make the couch follow the desired trajectory for all tumor positions. A block diagram of the decentralized, coordinated, dynamics-based closed-loop control strategy for the HexaPOD robotic couch, consisting of the prediction module PM, the controller C and the robotic couch T, is presented in Fig. 11.


Fig. 11. Control schema for the parallel robotic platform

By taking this approach it is possible to compare the dynamical behavior of one of the most suitable systems for 4D tracking. In the proposed approach, the trajectories of the tumor motion in the x, y and z directions were obtained from 4D Computed Tomography (CT) data of real patients. These task-space trajectories were used to generate joint-space motion trajectories for the robotic systems. Referring to Fig. 11, we have denoted the following signals: Xmot is the raw tumor motion data and is the input to the prediction module (PM). The output of the PM is the predicted value of the tumor motion, Xd. The dynamics-based controller is denoted by C and the robotic couch by T. The inputs to the T module are the desired force values, and its output is the motion of the robotic table, denoted by Xs, which compensates for the tumor motion.
3.2.1 Dynamics-based controller

To implement the proposed algorithms, we have used a PID control scheme. Dropping the subscripts, we can rewrite equation (15), with the terms grouped as in equation (21), as

$$M(X)\ddot{X}+N(X,\dot{X})=\tau\qquad(22)$$

where $N(X,\dot{X})=V(X,\dot{X})\dot{X}+G(X)$ and $\tau=J^{T}(X)F_{td}$. Now, the computed torque can be written as follows:


$$\tau=\hat{M}\left[\ddot{X}_d+K_V(\dot{X}_d-\dot{X})+K_P(X_d-X)\right]+\hat{N}\qquad(23)$$

where $\hat{M}$ and $\hat{N}$ are the estimates of the system's model parameters, $K_V$ is the derivative gain matrix and $K_P$ is the proportional gain matrix. For the HexaPOD, $K_V=\mathrm{diag}(k_{v1}, k_{v2}, k_{v3}, k_{v4}, k_{v5}, k_{v6})$ and $K_P=\mathrm{diag}(k_{p1}, k_{p2}, k_{p3}, k_{p4}, k_{p5}, k_{p6})$. If the estimates of the model parameters are a close approximation of the actual system parameters, from equations (22) and (23) it can be written

$$\ddot{X}=\ddot{X}_d+K_V(\dot{X}_d-\dot{X})+K_P(X_d-X)\qquad(24)$$

Now, denoting the position, velocity and acceleration errors as $e=X_d-X$, $\dot{e}=\dot{X}_d-\dot{X}$ and $\ddot{e}=\ddot{X}_d-\ddot{X}$, we can rewrite equation (24) as

$$\ddot{e}+K_V\dot{e}+K_P e=0\qquad(25)$$

Equation (25) guarantees asymptotic reduction of the tumor tracking errors, or at least keeps the error within acceptable margins. However, to reduce any steady-state errors, an integral control term was also incorporated. Thus the final control equation becomes
$$\ddot{e}+K_V\dot{e}+K_P e+K_I\int_{0}^{t}e\,dt=0\qquad(26)$$

where $K_I$ is the integral gain. Thus, equation (26) ensures asymptotic decay of the transient errors as well as reduction of steady-state errors.
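To make the control law concrete, the following minimal sketch shows one step of the computed-torque PID law of equations (23) and (26). It is illustrative only: the function and variable names are ours, the gains and model matrices are placeholders, and a real implementation would obtain $\hat{M}$, $\hat{N}$ and $J$ from the platform model of Section 3.1.

```python
import numpy as np

def computed_torque_step(M_hat, N_hat, J, K_p, K_v, K_i,
                         x, x_dot, x_d, xd_dot, xd_ddot, e_int, dt):
    """One control step of the computed-torque PID law, eqs. (23) and (26).

    M_hat, N_hat : estimated inertia matrix (6x6) and nonlinear terms (6,)
    J            : full system Jacobian (6x6)
    x, x_dot     : measured couch pose X and velocity
    x_d, xd_dot, xd_ddot : desired pose, velocity, acceleration from the PM
    e_int        : running integral of the pose error
    """
    e = x_d - x
    e_dot = xd_dot - x_dot
    e_int = e_int + e * dt                       # integral term of eq. (26)
    a_ref = xd_ddot + K_v @ e_dot + K_p @ e + K_i @ e_int
    tau = M_hat @ a_ref + N_hat                  # eq. (23) plus integral action
    F = np.linalg.solve(J.T, tau)                # leg forces from tau = J^T F_td
    return F, e_int
```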
3.2.2 Prediction module

The prediction module PM is developed to predict the tumor motion and to compensate for errors due to delay in the system response. The prediction algorithm uses Xmot to calculate the predicted value Xd. The system output Xs is used for accuracy checking and fine-tuning purposes. A normalized Least Mean Square (nLMS) algorithm is adapted to the dynamic system to predict the position in three dimensions. The system delay td is a parameter in the developed algorithm. The nLMS algorithm belongs to the family of Least Mean Square (LMS) algorithms. The LMS algorithm is an adaptive algorithm presented by Widrow and Hoff, as in (Widrow & Walach, 1994) and (Chen, 1993). The LMS algorithm uses an iterative procedure to make successive corrections to the weight vector in the negative direction of the gradient vector, so that the mean square error is minimized. The LMS algorithm can be summarized as follows. The input vector is X(n) = [x(n), x(n-1), ..., x(n-t+1)]T, where n is the current algorithm iteration and t is the tap number. The desired vector is D(n) = [d(n), d(n-1), ..., d(n-t+1)]T. Consequently, the predicted vector is Y(n) = [y(n), y(n-1), ..., y(n-t+1)]T and the error vector is E(n) = [e(n), e(n-1), ..., e(n-t+1)]T. The filter vector W(n) is used to calculate the predicted value y(n); it is of the form W(n) = [w(n), w(n-1), ..., w(n-t+1)]T. The predicted value is


$$y(n)=W(n-1)^{H}X(n)\qquad(27)$$

where $W(n-1)^{H}$ is the Hermitian transpose of $W(n-1)$. Thus, the algorithm error is

$$e_1(n)=d(n)-y(n)\qquad(28)$$

Now, the filter vector becomes $W(n)=W(n-1)+\mu X(n)e(n)$, where $\mu$ is the learning rate. In order to ensure the stability of the algorithm it is necessary to adjust the LMS to its normalized form, nLMS. For the nLMS algorithm the filter vector is

$$W(n)=W(n-1)+\mu\,\frac{X(n)e(n)}{X(n)^{H}X(n)}\qquad(29)$$

Furthermore, to ensure accuracy of the prediction process, the algorithm checks the difference between the predicted value $y(n)$ and the system output $X_s$. We define

$$e_2(n)=d(n)-X_s(n)\qquad(30)$$

Finally, the prediction module calculates the next position of the tumor using an algorithm which incorporates the nLMS error $e_1(n)$ and the physical error $e_2(n)$. Therefore, the resultant error is the sum of the prediction and tracking errors.
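As an illustration of equations (27)-(29), the following minimal sketch runs the nLMS predictor on a synthetic breathing-like trace. It is a sketch under our own assumptions: real-valued signals (so the Hermitian transpose reduces to an ordinary dot product), an illustrative learning rate, and a small regularization constant eps that we add to avoid division by zero; it omits the e2(n) correction of equation (30).

```python
import numpy as np

def nlms_step(w, X, d, mu=0.5, eps=1e-8):
    """One nLMS iteration, eqs. (27)-(29), for real-valued signals.

    w : filter vector W(n-1);  X : tap-delay input vector X(n)
    d : desired value d(n), here the next tumor position sample
    """
    y = w @ X                                  # prediction, eq. (27)
    e1 = d - y                                 # algorithm error, eq. (28)
    w = w + mu * X * e1 / (X @ X + eps)        # normalized update, eq. (29)
    return y, w

# Toy usage: predict a ~4 s sinusoidal respiratory trace one step ahead.
t = np.arange(0.0, 30.0, 0.1)
x = 10.0 * np.sin(2 * np.pi * t / 4.0)         # tumor position in mm (synthetic)
taps = 8
w = np.zeros(taps)
for n in range(taps, len(x) - 1):
    X = x[n - taps + 1:n + 1][::-1]            # [x(n), x(n-1), ..., x(n-t+1)]
    y, w = nlms_step(w, X, x[n + 1])           # y approximates x(n+1) over time
```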
3.3 Simulation

The computer simulation results for the HexaPOD robotic couch are presented in Figs. 12 and 13 below. Fig. 12 shows the position of each HexaPOD leg during the tracking task. The combined motion of the robotic legs results in tracking the desired trajectory: the robotic couch moves its legs according to the desired trajectory obtained from 4D-CT. The combined motion of the HexaPOD legs produces patient motion opposite in direction to the tumor motion. Consequently, the tumor appears steady, and the beam can irradiate a smaller PTV that need not include all possible tumor positions during the respiratory cycle.
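The joint-space leg trajectories of Fig. 12 follow from the task-space compensation trajectory through the standard inverse kinematics of the Stewart platform. The sketch below shows this computation under our own assumptions: the joint-location arrays base_pts and top_pts are placeholder geometry, and R is the orientation matrix built from equation (14).

```python
import numpy as np

def leg_lengths(p, R, base_pts, top_pts):
    """Leg lengths L_i of the platform for a given TOP pose.

    p        : position of the TOP origin in the BASE frame, shape (3,)
    R        : orientation of the TOP frame, e.g. R_Z @ R_Y @ R_X from eq. (14)
    base_pts : BASE-frame joint locations, shape (3, 6)
    top_pts  : TOP-frame joint locations, shape (3, 6)
    """
    legs = p.reshape(3, 1) + R @ top_pts - base_pts   # leg vectors in BASE frame
    return np.linalg.norm(legs, axis=0)               # L_1 ... L_6
```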

Fig. 12. HexaPOD leg positions during tracking


Fig. 13. Overall system errors: a) errors in the X, Y and Z directions for the HexaPOD couch; b) the error amplitudes for steady states

The average system error after the transient time was $\Delta X_H=\Delta Y_H=\Delta Z_H=0.2$ mm, and 0.4 mm for an irregular motion pattern, Fig. 13. The tracking error cannot be zero, but it can be kept within a tolerable limit. From the simulation results it appeared that the proposed methods can yield superior prediction and tracking of the tumor motion induced by respiration. Analyzing the simulation results, it can be concluded that the robotic systems show the same maximum tracking error, which was 1 mm. Based on dosimetric studies (Buzurovic et al., 2010.b,c), it was noticed that implementation of real-time tracking techniques can minimize irradiation of healthy tissues and improve sparing of critical organs. It was shown in those studies that the dosimetric effect on PTV and CTV coverage caused by the prediction error in the tumor tracking procedure is insignificant. Consequently, the tracking error of the described method will not compromise patient treatment outcome. Implementation of the proposed technique can potentially improve real-time tracking of the tumor volume to deliver a highly conformal, precise radiation dose at almost 100% duty cycle while minimizing irradiation of healthy tissues and sparing critical organs. This, in turn, will potentially improve the quality of patient treatment by lowering toxicity and increasing survival. In this study, we have deployed a closed-loop PID control. Adaptive control can be a good choice because of the variability in the payload on the system, i.e., the weight of the patient. Results of the adaptive control applied to the proposed system can be found in (Buzurovic et al., 2011.b).

4. Conclusion
In this chapter the use of robotic systems in radiation therapy has been presented. It was shown that robotic systems can greatly influence current methods of radiation therapy, both in delivering radioactive material inside cancerous tissue and in compensating for tumor motion during external beam radiation therapy. The presented systems are novel, and their clinical applications will be part of modern cancer treatment in the near future. It has been shown many times that modern robotics can improve many aspects of human work; the presented robotic systems are an example of this.


5. References
Benchetrit, G. (2000). Breathing pattern in humans: diversity and individuality, Respir. Physiol., 22(2-3), pp. 123-129
Buzurovic, I.; Podder, T.; Yan, K.; et al. (2008). Parameter optimization for brachytherapy robotic needle insertion and seed deposition, Medical Physics, Vol. 35, No. 6, p. 2865
Buzurovic, I.; Podder, T.K.; and Yu, Y. (November 2008). Force prediction and tracking for image-guided robotic system using neural network approach, Proceedings of IEEE Biomedical Circuits and Systems Conference (BIOCAS 08), Baltimore, MD, USA, pp. 41-44
Buzurovic, I.; Podder, T.K.; and Yu, Y. (2010). Prediction Control for Brachytherapy Robotic System, Journal of Robotics, Vol. 2010, Article ID 581840, doi:10.1155/2010/581840
Buzurovic, I.; Werner-Wasik, M.; Biswas, T.; Galvin, J.; Dicker, A.P.; Yu, Y.; Podder, T. (2010). Dosimetric Advantages of Active Tracking and Dynamic Delivery, Med. Phys., 37, p. 3191
Buzurovic, I.; Huang, K.; Werner-Wasik, M.; Biswas, T.; Galvin, J.; Dicker, A.P.; Yu, Y.; Podder, T. (2010). Dosimetric Evaluation of Tumor Tracking in 4D Radiotherapy, Int. J. Radiat. Oncol. Biol. Phys., 78(3) Suppl. 1, p. S689
Buzurovic, I.; Huang, K.; Yu, Y.; Podder, T. (2011). Robotic Approach to 4D Real-time Tumor Tracking for Radiotherapy, Phys. Med. Biol., 56(5), pp. 1299-1320
Buzurovic, I.; Huang, K.; Yu, Y.; Podder, T. (May-June 2010). Tumor Motion Prediction and Tracking in Adaptive Radiotherapy, Proc. of 2010 IEEE Int. Conf. on Bioinformatics and Bioengineering (BIBE), pp. 273-278
Buzurovic, I.; Yu, Y.; Podder, T.K. (September 2011). Active Tracking and Dynamic Dose Delivery for Robotic Couch in Radiation Therapy, Proc. of 2011 IEEE Int. Conf. on Engineering in Medicine and Biology (EMBC), Boston, MA, USA
Chen, W.K. (1993). Linear Networks and Systems: Algorithms and Computer-Aided Implementations, Wadsworth, Belmont, pp. 123-135
Chung, H.; et al. (2006). Mechanical Accuracy of a Robotic Couch, Medical Physics, 33(6), p. 2041
Cobb, D. (May 1983). Descriptor Variable Systems and State Regulation, IEEE Transactions on Automatic Control, Vol. AC-28, No. 5
D'Souza, W.; McAvoy, T.J. (2006). Analysis of the treatment couch and control system dynamics for respiration-induced motion compensation, Medical Physics, 33(12), pp. 4701-4709
D'Souza, W.; Naqvi, S.A.; Yu, C.X. (2005). Real-time intra-fraction-motion tracking using the treatment couch: a feasibility study, Phys. Med. Biol., 50, pp. 4021-4033
Fichtinger, G.; Burdette, E.C.; Tanacs, A.; et al. (2006). Robotically assisted prostate brachytherapy with transrectal ultrasound guidance - Phantom experiments, Brachytherapy, 5, pp. 14-26
Kamino, Y.; Takayama, K.; Kokubo, M.; et al. (2006). Development of a four-dimensional image-guided radiotherapy system with a gimbaled x-ray head, Int. J. Radiation Oncology Biol. Phys., 66(1), pp. 271-278
Keall, P.J.; Mageras, G.S.; Balter, J.M.; et al. (2006). The management of respiratory motion in radiation oncology: report of AAPM Task Group 76, Medical Physics, 33(10), pp. 3874-3900


Keall, P.J.; Cattell, H.; Pokhrel, D.; et al. (2006). Geometric accuracy of a real-time target tracking system with dynamic multileaf collimator tracking system, Int. J. Radiation Oncology Biol. Phys., 65(5), pp. 1579-1584
Lin, A.; Trejos, A.L.; Patel, R.V.; and Malthaner, R.A. (2008). Robot-Assisted Minimally Invasive Brachytherapy for Lung Cancer, in Teletherapy (book chapter), ISBN: 978-3-540-72998-3, Springer, Berlin, Germany, pp. 33-52
Mavroidis, C.; Dubowsky, S.; et al. (April 19-23, 1997). A Systematic Error Analysis of Robotic Manipulators: Application to a High Performance Medical Robot, IEEE Int. Conf. Robotics and Automation (ICRA), Albuquerque, NM, pp. 2975-2981
Meltsner, M.A. (2007). Design and optimization of a brachytherapy robot, PhD Thesis, Department of Medical Physics, University of Wisconsin-Madison
Meltsner, M.A.; Ferrier, N.J.; and Thomadsen, B.R. (2007). Observations on rotating needle insertions using a brachytherapy robot, Phys. Med. Biol., 52, pp. 6027-6037
Mills, J.K. (1988). Constrained Manipulator Dynamic Models and Hybrid Control, Proc. of the IEEE International Symposium on Intelligent Control, Vol. I, Arlington, VA, USA, pp. 424-429
Moerland, M.; Van den Bosch, M.; Lagerburg, V.; et al. (2008). An MRI scanner compatible implant robot for prostate brachytherapy, Brachytherapy, 7, p. 100
Ozhasoglu, C.; Murphy, M.J. (2002). Issues in respiratory motion compensation during external-beam radiotherapy, Int. J. Radiat. Oncol. Biol. Phys., 52(5), pp. 1389-1399
Podder, T.K.; Ng, W.S.; and Yu, Y. (August 23-26, 2007). Multi-channel robotic system for prostate brachytherapy, Int. Conf. of the IEEE Engineering in Medicine and Biology (EMBS), Lyon, France, pp. 1233-1236
Podder, T.K.; Buzurovic, I.; and Yu, Y. (May-June 2010). Multichannel Robot for Image-guided Brachytherapy, Proc. of 10th IEEE International Conference on Bioinformatics & Bioengineering (BIBE), Philadelphia, PA, USA, pp. 209-213
Podder, T.K.; Buzurovic, I.; Hu, Y.; Galvin, J.M.; Yu, Y. (2007). Partial transmission high-speed continuous tracking multi-leaf collimator for 4D adaptive radiation therapy, IEEE Int. Conf. on Bioinformatics and Bioengineering (BIBE), Boston, MA, USA, pp. 1108-1112
Podder, T.K.; Buzurovic, I.; Hu, Y.; Galvin, J.M.; Yu, Y. (2008). Dynamics-based decentralized control of robotic couch and multi-leaf collimators for tracking tumor motion, IEEE Int. Conf. on Robotics and Automation (ICRA), Pasadena, CA, USA, pp. 2496-2502
Salcudean, S.E.; Prananta, T.D.; et al. (May 19-23, 2008). A robotic needle guide for prostate brachytherapy, IEEE Int. Conf. Robotics and Automation (ICRA), Anaheim, CA, pp. 2975-2981
Schweikard, A.; Glosser, G.; Bodduluri, M.; et al. (2000). Robotic motion compensation for respiratory movement during radiosurgery, Comput. Aided Surg., 5(4), pp. 263-277
Sharp, G.C.; Jiang, S.B.; Shimizu, S.; Shirato, H. (2004). Prediction of respiratory tumour motion for real-time image-guided radiotherapy, Phys. Med. Biol., 49(3), pp. 425-440
Stoianovici, D.; Cleary, K.; Patriciu, A.; et al. (2003). AcuBot: A robot for radiological percutaneous interventions, IEEE Trans. on Robotics and Automation, Vol. 19, pp. 927-930


Stoianovici, D.; Song, S.; Petrisor, D.; et al. (2007). MRI Stealth robot for prostate interventions, Minimally Invasive Therapy, 16, pp. 241-248
Vedam, S.S.; Keall, P.J.; Docef, A.; et al. (2004). Predicting respiratory motion for four-dimensional radiotherapy, Medical Physics, 31(8), pp. 2274-2283
Wei, Z.; Wan, G.; Gardi, L.; Fenster, A.; et al. (2004). Robot-assisted 3D-TRUS guided prostate brachytherapy: system integration and validation, Med. Phys., 31, pp. 539-548
Widrow, B.; Walach, E. (1994). Adaptive Inverse Control, Prentice-Hall, NJ, USA
Yu, Y.; Podder, T.K.; Zhang, Y.D.; Ng, W.S.; et al. (2007). Robotic system for prostate brachytherapy, J. Comp. Aid. Surg., 12, pp. 366-370

6

Robotic Urological Surgery: State of the Art and Future Perspectives

Rachid Yakoubi, Shahab Hillyer and Georges-Pascal Haber
Center for Laparoscopic and Robotic Surgery, Glickman Urological and Kidney Institute, Cleveland Clinic, Cleveland, Ohio, USA

1. Introduction

Minimally invasive surgery has gained popularity over the last decade by offering shorter convalescence, improved peri-operative outcomes and enhanced cosmesis. Community urologists have typically performed standard laparoscopic nephrectomies, given the short learning curve, low complication rate, and limited requirement for sophisticated laparoscopic skills. However, advanced minimally invasive operations such as partial nephrectomies, pyeloplasties and prostatectomies necessitate advanced laparoscopic adeptness. The emergence of robotics, with 3D vision and articulated instruments, has allowed wider application of minimally invasive techniques to more complex urological procedures. Though robotics has overcome some shortcomings of laparoscopic surgery, its acceptance as a standard amongst the urological community remains limited. The lack of tactile feedback, displacement of the surgeon from the bedside, the fixed-port system, longer operating room times and cost remain barriers to widespread acceptance of robotics. In addition, the deficiencies within the robotic platform have propagated an evolution in the field with micro- and nano-robotics. This chapter highlights the history of robotic surgery along with current and future applications.

2. The Da Vinci surgical system and urology


The da Vinci robot (Intuitive Surgical, Sunnyvale, CA, USA) (Figure 1) remains the only commercially available robotic surgical system since the fusion of Computer Motion and Intuitive Surgical (Table 1). It is a master-slave system in which the surgeon operates the robot from a remote console. The articulating laparoscopic instruments of the da Vinci robot offer six degrees of freedom, simulating human wrist movement during open surgery. This facilitates intra-corporeal suturing, especially in reconstructive surgery (Yohannes et al., 2002). Robotic-assisted laparoscopic radical prostatectomy was first reported in 2000 using the da Vinci robot system. Since then, the robotic platform has been applied to a variety of minimally invasive procedures, such as partial nephrectomy, pyeloplasty, cystectomy and adrenalectomy.


Fig. 1. Da Vinci robot

1985  First surgical robot utilization (neurosurgery)
1989  First urologic robot (Probot)
1993  First commercially available robot approved by the FDA (AESOP)
1998  Zeus system commercially available
2000  First robotic radical prostatectomy
2001  FDA clearance for Da Vinci system
2003  The Zeus system and Intuitive Surgical fusion

Table 1. Robotic surgery timeline

2.1 Prostatectomy

Robotic-assisted laparoscopic radical prostatectomy (RARP) was first reported in 2000 (Abbou et al., 2000). Since then the number of patients undergoing RARP for prostate cancer has steadily increased. Early comparisons between radical retropubic prostatectomy (RRP) and RARP show encouraging results. Menon et al., in a prospective nonrandomized study, compared the results of 30 consecutive patients undergoing RRP with the initial 30 patients undergoing RARP. Estimated blood loss (EBL), blood transfusions, pain score, hospital stay, and mean duration of postoperative catheterization were improved in the RARP group. However, the mean operating time increased for RARP (Menon et al., 2002). In a recent review, Ficarra et al. found that the mean OR time for RARP ranged between 127 and 288 minutes, corroborating a longer operative time with robotics versus open surgery. Nevertheless, transfusion rates and hospital stay were lower with RARP than with open RRP. In addition, there were comparable complication rates between the RARP and open RRP patients (Ficarra et al., 2009).


Positive surgical margins are a surrogate for the oncological outcomes reported for RARP. A significant advantage in positive margin rates was demonstrated for RARP over RRP. The same difference between the robotic and open groups does not exist when comparing laparoscopic radical prostatectomy (LRP). The positive surgical margin rates ranged from 11% to 37% after RRP, from 11% to 30% after LRP, and from 9.6% to 26% after RARP (Ficarra et al., 2009).

Recently, survival outcomes have been reported after RARP. Barocas et al. compared 491 open RRP with 1,413 patients undergoing RARP, over a median follow-up of 10 months (Barocas et al., 2010). The robotic group had a lower pathological stage (80.5% pT2 vs 69.6% pT2, p <0.01). The 3-year biochemical recurrence-free survival rate was similar between the 2 groups, even after adjusting for pathological stage, grade and margin status. More recently, 522 consecutive RARP were matched to patients who underwent LRP and RRP to evaluate oncological outcomes (Magheli et al., 2011). Positive surgical margin rates were higher in the robotic group (19%) compared to LRP (13%) and RRP (14%). This difference was not significant for pT2 disease. The mean follow-up was 2.5, 1.4 and 1.3 years for RRP, LRP and RARP respectively, with no statistically significant difference in biochemical-free survival between groups.

Erectile function and continence are major outcomes evaluated after prostatectomy. In a matched cohort comparing 294 RARP for clinically localized prostate cancer with 588 RRP, there was no difference in continence between the 2 approaches at the 1-year follow-up (Krambeck et al., 2009). Furthermore, Rocco et al. showed 1-year continence rates of 97% vs 88% after RARP and RRP, respectively (P = 0.014). The 1-year overall potency recovery rate was 61% vs 41% after RARP and RRP, respectively (P = 0.003). Overall, RRP seems to be a faster procedure. However, EBL, hospitalization time, and functional outcomes were superior with RARP. Early oncological outcomes seemed to be equivalent in the two groups (Rocco et al., 2009).

The drawback of RARP is the cost related to purchasing and maintaining the instruments of the robotic system. Bolenz et al. compared the cost of 262 RARP, 220 LRP, and 161 RRP performed at the same institution. The direct cost was higher for the robotic approach than for the laparoscopic or open approaches: the median direct cost was US$ 6752, US$ 5687, and US$ 4437 for RARP, LRP, and RRP respectively. The most important difference was due to surgical supply and operating room cost (Bolenz et al., 2010). However, surgical volume may reduce this difference. Scales et al. showed that the cost of RARP is volume dependent, and cost equivalence with RRP is achievable at a surgical volume of 10 cases weekly (Scales et al., 2005). Even though the cost of robotic surgery is a difficult question to assess, the cost-effectiveness of robotics will probably continue to improve with time (Wilson & Torrey, 2011).

The available data demonstrate improvements in blood loss, hospital stay, and pain control with RARP. However, the lack of long-term cancer-specific mortality data limits robust oncological comparisons of RARP to RRP.

2.2 Radical nephrectomy

Robotic radical and simple nephrectomy is a feasible and safe procedure. A comparison of 46 laparoscopic nephrectomies, 20 hand-assisted laparoscopic nephrectomies, and 13 robotic nephrectomies showed no significant advantage of robotics over traditional laparoscopic or hand-assisted approaches.
However, cost analysis showed a far higher cost for robotics among the three groups (Boger et al., 2010).


Settings where robotics may be applied to radical nephrectomy include complex tumors with caval thrombosis. Abaza reported five patients with a renal tumor and an inferior vena cava (IVC) thrombus who underwent robotic nephrectomy. The robotic system allowed for venotomy into the vena cava with suture of the defect and no complications (Abaza, 2011). Robotic-assisted nephrectomy has also been applied for live kidney donors; a series of the first 35 cases was reported with a mean warm ischemia time of 5.9 minutes (Louis et al., 2009). However, the cost of the robotic approach may limit its applicability for radical nephrectomy except in complex renal cases with caval thrombosis.

2.3 Partial nephrectomy

The da Vinci Surgical System provides advantages during robotic partial nephrectomy (RPN) such as 3-D vision, articulating instruments, scaling of movement, tremor filtration, fourth robotic arm assistance, and the TilePro software (Intuitive Surgical, Sunnyvale, CA), a live intra-operative ultrasound platform. All these tools are helpful during partial nephrectomy and may overcome the technical challenges of laparoscopic partial nephrectomy (LPN). Since the first report by Gettman et al. showing the feasibility of robotic-assisted partial nephrectomy (Gettman et al., 2004), a steadily increasing number of series have been reported.

Forty consecutive RPN and 62 LPN were retrospectively compared in a study by Wang and colleagues, showing no significant difference in EBL, collecting system repair (56% for each group), or positive margin rate (1 case in each group). Furthermore, the mean operative time, warm ischemia time (19 vs 25 minutes for RPN and LPN respectively), and length of stay decreased significantly in the robotic group (Wang & Bhayani, 2009). Haber et al. compared the results of 150 consecutive patients who underwent RPN (n = 75) or LPN (n = 75) by a single surgeon. There was no significant difference between the 2 groups in mean operative time (200 vs 197 minutes, P = .75), warm ischemia time (18.2 vs 20.3 minutes, P = .27), length of hospital stay (P = .84), change in renal function, or adverse events in the RPN and LPN groups respectively. The mean EBL was higher in the RPN group (323 vs 222 mL); nevertheless, few patients in either group required a blood transfusion. The higher EBL with RPN may be explained by the surgeon's learning curve for RPN compared with LPN. Overall these findings demonstrated outcomes comparable to LPN, regardless of the vast experience the primary laparoscopic surgeon possessed (Haber et al., 2010a).

The safety and effectiveness of RPN regarding functional and oncologic outcomes were evaluated in a multi-institutional review of 183 patients who underwent RPN (Benway et al., 2010). The mean tumor size was 2.87 cm, total operative time 210 min, warm ischemia time 23.9 min, and EBL 131.5 ml. Sixty-nine percent of excised tumors were malignant, of which 2.7% had positive surgical margins. The incidence of major complications was 8.2%. At up to 26 months of follow-up, there had been no recurrences and no significant change in renal function. Additionally, the indications for RPN have significantly expanded to include RPN for complex renal tumors (Rogers et al., 2008a; White et al., 2011). White et al. reviewed 67 patients who underwent RPN for a moderately or highly complex renal mass according to the R.E.N.A.L. nephrometry score (7).
The median tumor size was 3.7 cm, the median operative time was 180 minutes, the median EBL was 200 mL, and the warm ischemia time was 19.0 minutes (range 15-26).


After a mean follow-up of 10 months, no recurrences had occurred, indicating that RPN is a safe and feasible option for highly complex renal masses (White et al., 2011). RPN appears to be an effective and safe alternative to LPN. The surgical technique for RPN has improved, and the indications have expanded to more challenging tumors. The currently available comparative studies are retrospective and have limited follow-up. Future trials are expected to confirm the encouraging findings of the early reported series.

2.4 Adrenalectomy

Robotic-assisted adrenalectomy (RAA) was first reported in 1999 and 2001 using the AESOP (Hubens et al., 1999) and da Vinci systems, respectively (Horgan & Vanuno, 2001). Since then, the few studies published have concerned RAA using the da Vinci system. Brunaud et al. prospectively evaluated 100 consecutive patients who underwent RAA. The mean operative time was 95 minutes with a conversion rate of 5%. Complication and mortality rates were 10% and 0%, respectively. The mean operative time decreased by 1 minute every 10 cases. Operative time improved more for junior surgeons than for senior surgeons after the first 50 cases. The surgeon's experience, the first assistant's level and tumor size were independent predictors of operative time. The robotic procedure was 2.3 times more costly than laparoscopic adrenalectomy (Brunaud et al., 2008a). The same authors prospectively compared the perioperative data of 50 patients who underwent RAA with 59 patients who underwent laparoscopic adrenalectomy (LA). RAA was associated with lower blood loss but longer operative times. However, the difference in operative time was not significant after the learning curve of 20 cases. Operative time increased only in the LA group for obese patients (body mass index >30 kg/m2) and patients with large tumors (>55 mm). Length of hospital stay, complication and conversion rates were equivalent in the two groups (Brunaud et al., 2008b). Recently, Giulianotti et al. examined 42 patients who underwent RAA by a single surgeon. Median hospital stay was 4 days, with a postoperative complication rate of 2.4% and a mortality rate of 2.4% (Giulianotti et al., 2011). Suggestions have been made that robot assistance may be beneficial for obese patients with large tumors (Brunaud et al., 2008a; Giulianotti et al., 2011), as well as for surgeons with limited laparoscopic experience. Feasibility aside, prospective studies must focus on the potential improvement in the learning curve with robotic utilization, along with a cost analysis evaluating RAA against current standards (Brunaud et al., 2008a).

2.5 Pyeloplasty

The first clinical experience with robot-assisted pyeloplasty (RAP) was reported in 2002 (Gettman et al., 2002a). Since then, numerous studies have evaluated the efficiency of RAP. Gupta et al. prospectively evaluated the results of 85 consecutive patients who had transperitoneal RAP using four or five ports. Based on anatomic considerations, different types of pyeloplasty were completed. The mean operative time was 121 min, 47 min of which was for the anastomosis. Mean EBL was 45 mL, with a hospital stay of 2.5 days. Three patients had stent migration and delayed drainage. Mean follow-up was 13.6 months, with an overall success rate of 97% based on imaging assessment (Gupta et al., 2010). In a comparative non-randomized study, 98 RAP were compared with 74 laparoscopic pyeloplasties (LP), with a mean operative time of 189.3 and 186.6 minutes for RAP and LP respectively. The complication rate was similar, 5.1% and 2.7% for RAP and LP respectively.
The suturing time was shorter


for RAP, but without statistical significance (48.3 vs 60 minutes, P = 0.30, for RAP and LP respectively). RAP had a success rate of 93.4% versus 95% for LP based on renal scintigraphy (Bird et al., 2011). Hemal and colleagues illustrated the successful application of robotics to pyeloplasty surgery in a nonrandomized study comparing the results of 30 RAP with 30 LP, performed via a transperitoneal approach by a single surgeon. The mean total operating times were 98 and 145 minutes, the mean EBL was 40 and 101 mL, and the mean hospital stay was 2 and 3.5 days, for RAP and LP respectively. At follow-up, one patient in the LP group had an obstruction managed by balloon dilation (Hemal et al., 2010).

Insufficient evidence exists for the retroperitoneal approach to RAP. Kaouk et al. reported the results of 10 patients who underwent retroperitoneal RAP. Four ports were placed for the robot and a successful operation was accomplished. Operative time was 175 minutes with minimal complications. The advantage of the retroperitoneal approach is direct access to the UPJ; however, retroperitoneal surgery has a limited working space with anatomy unfamiliar to most urologists (Kaouk et al., 2008). Cestari et al. compared 36 patients who underwent retroperitoneal RAP and 19 transperitoneal RAP for UPJO. Median operative time and hospital stay were similar, and complication rates were comparable (Cestari et al., 2010). Studies of RAP demonstrate feasibility, efficacy and safety. However, the cost of robotic surgery continues to limit the widespread application of this platform.

2.6 Cystectomy

Radical cystectomy with pelvic lymphadenectomy (PLND) remains the gold-standard treatment for patients with muscle-invasive bladder cancer. However, the morbidity related to open radical cystectomy (ORC) presents a real challenge. Thus, robotic-assisted radical cystectomy (RRC) represents a potential alternative to the open approach. Similar to other robotic procedures, the potential advantages of RRC over ORC are decreases in blood loss, pain, and hospital stay. On the other hand, the oncologic outcomes of RRC remain largely unknown.

In a series of 100 consecutive patients who underwent RRC for clinically localized bladder cancer, the mean operative time was 4.6 hours with an EBL of 271 ml. Mean hospital stay was 4.9 days, with 36 patients experiencing postoperative complications, 8% of which were major. Urinary diversion included ileal conduits 61% of the time (Pruthi et al., 2010). Kauffman et al. presented the results of 85 consecutive patients treated with RRC for bladder cancer. The median age was 73.5 years, with a high proportion of patients having comorbidities (46% of ASA class 3). Extended pelvic lymphadenectomy was performed in almost all patients (98%). Extravesical disease was found in 36.5% of cases and positive surgical margins were present in 6% of patients. At a mean follow-up of 18 months, 20 (24%) patients had presented recurrence; three of them (4%) had only a local recurrence. The overall survival and disease-specific survival rates for the cohort at 2 years were 79% and 85%, respectively. Extravesical disease, positive lymph nodes, and lymphovascular invasion were associated with worse prognosis (Kauffman et al., 2011). Even with the encouraging results of this study, comparative analyses and long-term outcomes are still needed.
Josephson et al., in their report of 58 RRC, found different results, with an overall survival rate of 54% and a disease-specific survival rate of 76% at 2 years (Josephson et al., 2010). However, stratification of survival outcomes by pathological stage was not reported, making comparisons with other studies futile.


In another series of 100 patients with a mean follow-up of 21 months, 15 patients had disease recurrence and 6 died of bladder cancer (Pruthi et al., 2010). Comparisons between open and robotic cystectomy are lacking because of the heterogeneity in disease burden and patient selection. In a multi-institutional study of 527 patients who underwent RRC, 437 (82.9%) had a lymphadenectomy; the surgeon's experience on the robot influenced whether a lymphadenectomy was performed (Hellenthal, 2010a). In the same database, the positive surgical margin rate was 6.5% (Hellenthal, 2010b). In a study of 100 consecutive patients with RRC, no positive margin was found (Pruthi et al., 2010); however, in this series only 13% of cases were pT3/T4 at the final pathology, which may explain this result.

RRC offers the potential for improving peri-operative outcomes with the advantages of minimally invasive surgery. It may offer a shorter hospital stay, decreased blood loss and pain, and a smaller incision, with complication rates and oncological outcomes equal to the current gold standard, ORC. Future studies with longer follow-up are essential to obtain robust data on the comparability of RRC to ORC.

3. Robotic laparoendoscopic single-site surgery and natural orifice translumenal endoscopic surgery: Current status
Laparoendoscopic single-site surgery (LESS) (Figure 2) and natural orifice translumenal endoscopic surgery (NOTES) exploit the use of a single incision or natural entry points into the body cavity. A trend towards scarless surgery with the advantages of laparoscopy has fueled the development of these novel surgical techniques. The first laparoscopic transvaginal nephrectomy (NOTES) in a porcine model was described in 2002, introducing another achievable surgical approach (Gettman et al., 2002b). Moreover, the continued dynamism for workable new approaches spurred the first reported hybrid NOTES nephrectomy in a porcine model utilizing the da Vinci surgical system (Box et al., 2008). Robotic technologies offer a potential improvement over the current flexible instruments and endoscopes. Other limitations include the indirect transmission of forces and space constraints for the operating surgeon (Table 2).

In 2008, 30 urologic robotic NOTES procedures were performed on 10 porcine models using the current da Vinci robotic system (Haber et al., 2008a). A 12-mm and an 8-mm port were placed through a single transumbilical incision to introduce the robotic camera along with a robotic arm using a single port (Uni-X (TM), P-navel Systems, Morganville, NJ). A flexible 12-mm cannula 20 cm long (US Endoscopy, Mentor, Ohio, USA) served as a transvaginal port, through which the second robotic arm was docked. All interventions were conducted without complications or conversion. However, problems encountered included the inadequate length of the transvaginal instruments, which prolonged surgical time. For widespread use of newer approaches, development of appropriate instruments should occur simultaneously.

The da Vinci S robotic system using a LESS approach has also been applied to prostate surgery. A transvesical robotic radical prostatectomy was performed in a cadaver model (Desai et al., 2008a) (Figure 3). Articulated instruments and 3D vision facilitated the dissection of the prostate through the single port. Moreover, architectural advances in the robotic system will make LESS and NOTES approaches accessible to a wider range of interested surgeons. An example of robotic modifications helping the growth of LESS came


with the advent of the novel robotic platform (VeSPA, Intuitive Surgical, California, USA) (Figures 4 & 5). These curved cannulae and semi-rigid instruments have been designed to compensate for the limitations encountered with conventional LESS surgery. The VeSPA curved cannulae and semi-rigid instrument design allow the cannulae and instruments to be inserted in close proximity while allowing approximate triangulation intra-abdominally. The VeSPA platform was successfully applied in a porcine model without conversions or addition of ports (Haber et al., 2010b). However, it is recognized that robotic LESS for the kidney is technically more challenging in humans than in the animal model. Therefore, further clinical research is required to confirm these early experimental results.

Feature                     | Standard LESS                                             | Da Vinci robotic LESS
Scope                       | Two-dimensional vision, high resolution, unstable vision | Three-dimensional vision, high definition, stable vision
Instruments                 | Straight and articulating/flexible instruments           | Articulating instruments
Triangulation               | Lacking (crossed hand surgery)                           | Lacking (chopstick surgery)
Instrument collision        | Significant                                              | Significant (due to bulky robotic arms)
Range of motion             | Limited                                                  | Enhanced surgeon dexterity because of the Endowrist technology
Tissue dissection           | Challenging                                              | Limited at steep angles
Suturing                    | Extremely challenging                                    | Accurate
Ergonomics                  | Reduced                                                  | Enhanced (surgeon sitting at console)
Main assistant's role       | Camera manoeuvring                                       | Managing collisions
Steepness of learning curve | Very high                                                | Medium high
Table 2. From laparoendoscopic single-site surgery to robotic laparoendoscopic single-site surgery: technical advances with da Vinci system


Fig. 2. The robot scope and two arms are inserted through a 2.5-cm incision in the umbilicus

Fig. 3. Single port and robot instruments through the bladder

The initial clinical experience with robotic single-port transumbilical surgery was reported in 2009 (Kaouk et al., 2009a). A multichannel single port (R-port; Advanced Surgical Concepts, Dublin, Ireland) was inserted through a 2-cm umbilical incision into the abdomen. Three procedures were performed: radical prostatectomy, dismembered pyeloplasty, and right-sided radical nephrectomy. All procedures were completed without intraoperative complications. The radical prostatectomy was completed in 5 h, with 45 min required for the anastomosis; the pathology reported negative margins. The pyeloplasty was completed in 4.5 h, and the radical nephrectomy in 2.5 h. R-LESS procedures were accomplished without additional ports or significant differences in peri-operative outcomes compared to standard robotic approaches.


Fig. 4. Da Vinci robot with VeSPA instruments

Fig. 5. VeSPA instruments and accessories. (A) Curved cannulae; (B) multichannel single port, 8.5-mm robotic scope, flexible instruments passed through the cannulae

We recently reported our initial experience with single-port robotic partial nephrectomy in two patients, without conversions or complications (Kaouk et al., 2009b). A multichannel port (TriPort; Advanced Surgical Concepts, Bray, Co Wicklow, Ireland) was utilized, and pediatric 5-mm robotic instruments were used. A 30° robotic lens placed in the upward configuration minimized clashing between the scope and instruments. A 2.8-cm left lower pole tumor and a 1.1-cm right lower pole tumor were excised without renal hilar clamping using the


harmonic scalpel. EBL was 100 ml, operative time was 170 min, length of stay was 3.5 days, and the visual analog pain scale at discharge was 1.0/10.

Stein et al. reported robotic LESS using a gel port (Applied Medical, Rancho Santa Margarita, California, USA) as the access platform (Stein et al., 2010). Four clinical procedures were performed: two pyeloplasties, one radical nephrectomy, and one partial nephrectomy. The gel port system was used to allow for a larger working platform. The partial nephrectomy was completed in 180 min; the mass was excised without hilar clamping, using the harmonic scalpel and Hem-o-lok clips (Weck Closure Systems, Research Triangle Park, North Carolina, USA). A radical nephrectomy was performed for a 5-cm left-sided lower pole mass. The renal vein and artery were secured with an endoscopic stapler, and the remainder of the dissection was completed with hook cautery. The specimen was retrieved with an entrapment sac and removed via the umbilical incision extended to 4 cm. EBL was 250 ml, operative time was 200 min, and the hospital stay was 2 days.

The da Vinci system has several advantages over conventional laparoscopy, allowing an increasing number of urologists to adopt this method. Preliminary results of these minimally invasive LESS and NOTES approaches are encouraging. Further refinement in instrumentation, improved triangulation and the development of robotic systems specific to LESS and NOTES may define new horizons in single-site surgery.

4. Flexible robots
The development of robotic technologies has also flourished in endoscopic surgery. A flexible robotic catheter manipulator (Sensei, Hansen Medical, Mountain View, CA, USA), thought to allow enhanced ergonomics and improved efficiency over standard flexible ureteroscopy, has been developed. Initial experience in the porcine model showed promising outcomes.

Fig. 6. Pictorial depiction of components of flexible robotic catheter control system. Surgeon console (workstation) showing three LCD screens, one touch screen, and MID


Fig. 7. Robotic arm and steerable catheter

The novel robotic catheter system comprises the following components (Figures 6 & 7): (a) a surgeon console, including the LCD display and master input device; (b) a steerable catheter system; (c) a remote catheter manipulator; and (d) an electronic rack. The master input device (MID) is a three-dimensional joystick the surgeon uses to remotely maneuver the catheter tip. The real-time ureteroscopic image, fluoroscopy and a spatial representation of the catheter position are projected simultaneously on the screens; images from a CT scan or an ultrasound probe can also be displayed. The steerable catheter system contains an outer catheter sheath (14F/12F) and an inner catheter guide (12F/10F). Movement of the MID intuitively controls the tip of the catheter guide. The robotic arm with the steerable catheter is attached to the edge of the table. The catheter can be controlled under fluoroscopy, under direct vision from the ureteroscope, or by a combination of both. In addition, a 3D reconstruction of the extremity of the catheter is displayed on the screen, assisting in identifying its location. Moreover, manoeuvrability is not diminished by the use of laser fibers of 200 to 365 microns.

The technical feasibility of the robotic catheter system was evaluated in retrograde renoscopy in the porcine model. The authors concluded that the robotic catheter system could easily manoeuvre the ureteroscope into 83 (98%) of the 85 calices tested, in a reproducible manner. The ease and reproducibility of intra-renal navigation was rated at a mean of 10 of 10 on the VAS (Desai et al., 2008b). Clinical investigation of the flexible robotic system in ureteroscopy was performed in 18 patients who had renal calculi. Mean stone size was 11.9 mm. All procedures were done unilaterally and all patients had ureteral stents placed for 2 weeks. The robotic catheter system was introduced into the renal collecting system manually, using fluoroscopic control along a guide wire. All intra-renal manoeuvres were performed completely by the surgeon from the robotic console. All procedures were technically successful, and all calculi were fragmented to the surgeon's satisfaction, without conversion to standard ureteroscopy. Mean operative time was 91 minutes, including a robot docking time of 7 minutes and a stone localization time of 9 minutes. The mean visual analog scale rating (from 1, worst, to 10, best) was 8.3 for ease of stone localization, 8.5 for ease of maneuvering, and 9.2 for ease of fragmentation. Complete fragment clearance was achieved in 56% of patients at 2 months and in 89% of patients at 3 months. One patient required secondary ureteroscopy for a residual stone (Desai et al., 2008c).

Improvements in flexible instrumentation have brought a revolution in retrograde intrarenal surgery. Advancements in flexible ureteroscopes, laser lithotripsy, and ureteroscope accessories are contributors to this metamorphosis with robotics in endo-urology.


Miniaturization has been the key advancement for the progression of robotics in endoscopy. Further studies are needed to corroborate these early clinical results.

5. Robotic prostate biopsy


Incorporation of real-time radiographic imaging for biopsy or ablative treatment of urological cancer is currently in development. A robotic arm, ViKY (Endo-Control, Grenoble, France), and an ultrasound probe (B-K Medical, Denmark) were utilized to perform transrectal prostate biopsies. The software was modified to save up to 24 spatial coordinates and to answer the constraints of transrectal ultrasound. Tests on phantom and human cadaver prostates demonstrated the feasibility of robotic transrectal ultrasound, with the precision of biopsies on a target tumor ranging from 0.1 to 0.9 mm on the prostate and from 0.2 to 0.9 mm on the phantom (Figure 8). Initial results are encouraging, with clinical trials forthcoming (Haber et al., 2008b). Moreover, a robotic positioning system with a biplane ultrasound probe on a mobile horizontal platform was used to perform prostate biopsies in a phantom model. The integrated software acquires ultrasound images for three-dimensional modeling, coordinates target planning and directs the robotic positioning system. A repeatable accuracy of <1 mm was obtained (Ho et al., 2009). This robotic prostate biopsy system can theoretically biopsy targets from MRI images after merging them with the ultrasound images, allowing treatment of positive areas found on biopsies.

Presently, a fully automated robot system, the MrBot, has been developed for transperineal prostate access (Patriciu et al., 2007). It is mounted alongside the patient in the MR imager and is operated from the control room under image feedback. The robot presents 6 degrees of freedom: 5 for positioning and orienting the injector and 1 for setting the depth of needle insertion. The first driver was developed for fully automated low-dose (seed) brachytherapy. Compared with classic templates for needle guides, the robot enables additional freedom of motion for better targeting; multiple needle insertions can be performed through the same skin entry point. In addition, angulation reduces pubic arch interference, allowing better prostate access for sampling. The MrBot robot was constructed to be compatible with medical imaging equipment such as ultrasound and MRI. An accuracy of 0.72 ± 0.36 mm was described for seed placement. Furthermore, multiple clinical interventions are possible with the MrBot system, such as biopsy, therapy injections, and radiofrequency ablation. Preclinical testing in cadaver and animal models has shown favorable results (Mozer et al., 2009).

6. Future trends in the design and application of surgical robots


Advances in robotic systems, accompanied by trends toward miniature devices, microrobots and eventually nanorobots, are realistic near-future developments. Prototypes of microrobot cameras (15 mm in diameter, 3 inches in length) were placed inside the abdomen of a canine model during laparoscopic prostatectomy and nephrectomy, allowing additional (360-degree) views of the surgical field. The microrobot was mobile and remotely controlled to desired locations, further aiding the laparoscopic procedures (Joseph et al., 2008). More recently, Lehman et al. showed the feasibility of LESS cholecystectomy in a porcine model using a miniature robot platform (Lehman et al., 2011). The robot platform incorporated a dexterous in-vivo robot and a remote surgeon interface console. In addition,


multiple robots could be inserted through a single incision rather than the traditional multiple port sites. Capabilities such as tissue retraction, single-incision surgery, supplementary visualization, or lighting can be delivered by these micro-robots. A well-known limitation of the current robotic platform is the lack of tactile feedback; however, the improved visualization and accuracy of robotic instrumentation may overcome this limitation. Furthermore, newer technologies with multiple compact robots, tracking tools, and tactile feedback apparatus may further expand the application of robotic surgery.

Fig. 8. Robot installation for transrectal prostate biopsies

Fig. 9. The combination of multiple compact robots


7. Conclusion
Robotic applications in urology are effective and safe technologies. Surgical techniques have evolved, and indications have been expanded to more challenging scenarios. New technologies, especially robotics, are constantly reshaping the urological sphere. Further refinements in robotic systems, along with a reduction in cost, are key components for rapid assimilation amongst the urological community.

8. References
Abaza R. (2011). Initial series of robotic radical nephrectomy with vena caval tumor thrombectomy. Eur Urol. 2011 Apr;59(4):652-6.
Abbou CC, Hoznek A, Salomon L, Lobontiu A, Saint F, Cicco A, Antiphon P, & Chopin D. (2000). [Remote laparoscopic radical prostatectomy carried out with a robot: report of a case]. Prog Urol. 2000 Sep;10(4):520-3.
Barocas DA, Salem S, Kordan Y, Herrell SD, Chang SS, Clark PE, Davis R, Baumgartner R, Phillips S, Cookson MS, & Smith JA Jr. (2010). Robotic assisted laparoscopic prostatectomy versus radical retropubic prostatectomy for clinically localized prostate cancer: comparison of short-term biochemical recurrence-free survival. J Urol. 2010 Mar;183(3):990-6.
Bird VG, Leveillee RJ, Eldefrawy A, Bracho J, & Aziz MS. (2011). Comparison of robot-assisted versus conventional laparoscopic transperitoneal pyeloplasty for patients with ureteropelvic junction obstruction: a single-center study. Urology. 2011 Mar;77(3):730-4.
Benway BM, Bhayani SB, Rogers CG, Porter JR, Buffi NM, Figenshau RS, & Mottrie A. (2010). Robot-assisted partial nephrectomy: an international experience. Eur Urol. 2010 May;57(5):815-20.
Boger M, Lucas SM, Popp SC, Gardner TA, & Sundaram CP. (2010). Comparison of robot-assisted nephrectomy with laparoscopic and hand-assisted laparoscopic nephrectomy. JSLS. 2010 Jul-Sep;14(3):374-80.
Bolenz C, Gupta A, Hotze T, Ho R, Cadeddu JA, Roehrborn CG, & Lotan Y. (2010). Cost comparison of robotic, laparoscopic, and open radical prostatectomy for prostate cancer. Eur Urol. 2010 Mar;57(3):453-8.
Box GN, Lee HJ, Santos RJ, Abraham JB, Louie MK, Gamboa AJ, Alipanah R, Deane LA, McDougall EM, & Clayman RV. (2008). Rapid communication: robot-assisted NOTES nephrectomy: initial report. J Endourol. 2008 Mar;22(3):503-6.
Brunaud L, Ayav A, Zarnegar R, Rouers A, Klein M, Boissel P, & Bresler L. (2008a). Prospective evaluation of 100 robotic-assisted unilateral adrenalectomies. Surgery. 2008 Dec;144(6):995-1001.
Brunaud L, Bresler L, Ayav A, Zarnegar R, Raphoz AL, Levan T, Weryha G, & Boissel P. (2008b). Robotic-assisted adrenalectomy: what advantages compared to lateral transperitoneal laparoscopic adrenalectomy? Am J Surg. 2008 Apr;195(4):433-8.
Cestari A, Buffi NM, Lista G, Sangalli M, Scapaticci E, Fabbri F, Lazzeri M, Rigatti P, & Guazzoni G. (2010). Retroperitoneal and transperitoneal robot-assisted pyeloplasty in adults: techniques and results. Eur Urol. 2010 Nov;58(5):711-8.

122

Robotic Systems Applications, Control and Programming

Desai MM, Aron M, Berger A, Canes D, Stein R, Haber GP, Kamoi K, Crouzet S, Sotelo R, Gill IS. (2008a) Transvesical robotic radical prostatectomy. BJU Int. 2008a Dec;102(11):1666-9. Desai MM, Aron M, Gill IS, Haber GP, Ukimura O, Kaouk JH, Stahler G, Barbagli F, Carlson C, & Moll F. (2008b) Flexible robotic retrograde renoscopy: description of novel robotic device and preliminary laboratory experience. Urology. 2008b Jul;72(1):42-6. Desai MM, Grover R, Aron M, Haber GP, Ganpule A, Kaouk I, Desai M, & Gill IS. (2008c) Remote robotic ureteroscopic laser lithotripsy for renal calculi: initial clinical experience with a novel flexible robotic system. J Urol 2008c; 179(4S):435 [Abstract # 1268]. Ficarra V, Novara G, Artibani W, Cestari A, Galfano A, Graefen M, Guazzoni G, Guillonneau B, Menon M, Montorsi F, Patel V, Rassweiler J, & Van Poppel H. (2009) Retropubic, laparoscopic, and robot-assisted radical prostatectomy: a systematic review and cumulative analysis of comparative studies. Eur Urol. 2009 May;55(5):1037-63. Gettman MT, Neururer R, Bartsch G, & Peschel R. (2002a) AndersonHynes dismembered pyeloplasty performed using the da Vinci robotic system. Urology (2002a) 60(3), 509513 Gettman MT, Lotan Y, Napper CA, & Cadeddu JA. (2002b) Transvaginal laparoscopic nephrectomy: development and feasibility in the porcine model. Urology 2002b; 59:446450. Gettman MT, Blute ML, Chow GK, Neururer R, Bartsch G, & Peschel R. (2004) Roboticassisted laparoscopic partial nephrectomy: technique and initial clinical experience with DaVinci robotic system. Urology. 2004 Nov;64(5):914-8. Giulianotti PC, Buchs NC, Addeo P, Bianco FM, Ayloo SM, Caravaglios G, & Coratti A. (2011) Robot-assisted adrenalectomy: a technical option for the surgeon? Int J Med Robot. 2011 Mar;7(1):27-32. Gupta NP, Nayyar R, Hemal AK, Mukherjee S, Kumar R, & Dogra PN. (2010) Outcome analysis of robotic pyeloplasty: a large single-centre experience. BJU Int. 2010 Apr;105(7):980-3. Haber GP, Crouzet S, Kamoi K, Berger A, Aron M, Goel R, Canes D, Desai M, Gill IS, & Kaouk J. (2008a) Robotic NOTES (Natural Orifice Translumenal Endoscopic Surgery) in reconstructive urology: initial laboratory experience. Urology. 2008a Jun;71(6):996-1000. Haber GP, Kamoi K, Vidal C, Koenig P, Crouzet S, Berger A, Aron M, Kaouk J, Desai M, & Gill IS. (2008b) Development and Evaluation of a Transrectal Ultrasound Robot for Targeted Biopsies and Focal Therapy of Prostate Cancer. EUS 2008 Annual Meeting; 2008b. p. 85. Haber GP, White WM, Crouzet S, White MA, Forest S, Autorino R, & Kaouk JH. (2010a) Robotic versus laparoscopic partial nephrectomy: single-surgeon matched cohort study of 150 patients. Urology. 2010a Sep;76(3):754-8. Haber GP, White MA, Autorino R, Escobar PF, Kroh MD, Chalikonda S, Khanna R, Forest S, Yang B, Altunrende F, Stein RJ, & Kaouk J. (2010b) Novel robotic da Vinci instruments for laparoendoscopic single-site surgery. Urology. 2010b Dec;76(6):1279-82.

Robotic Urological Surgery: State of the Art and Future Perspectives

123

Hellenthal NJ, Hussain A, Andrews PE, Carpentier P, Castle E, Dasgupta P, Kaouk J, Khan S, Kibel A, Kim H, Manoharan M, Menon M, Mottrie A, Ornstein D, Palou J, Peabody J, Pruthi R, Richstone L, Schanne F, Stricker H, Thomas R, Wiklund P, Wilding G, & Guru KA. (2011) Lymphadenectomy at the time of robot-assisted radical cystectomy: results from the International Robotic Cystectomy Consortium. BJU Int. 2011 Feb;107(4):642-6. Hellenthal NJ, Hussain A, Andrews PE, Carpentier P, Castle E, Dasgupta P, Kaouk J, Khan S, Kibel A, Kim H, Manoharan M, Menon M, Mottrie A, Ornstein D, Palou J, Peabody J, Pruthi R, Richstone L, Schanne F, Stricker H, Thomas R, Wiklund P, Wilding G, & Guru KA. (2010) Surgical margin status after robot assisted radical cystectomy: results from the International Robotic Cystectomy Consortium. J Urol. 2010 Jul;184(1):87-91. Hemal AK, Mukherjee S, & Singh K. (2010) Laparoscopic pyeloplasty versus robotic pyeloplasty for ureteropelvic junction obstruction: a series of 60 cases performed by a single surgeon. Can J Urol. 2010 Feb;17(1):5012-6. Ho HS, Mohan P, Lim ED, Li DL, Yuen JS, Ng WS, Lau WK, & Cheng CW. (2009) Robotic ultrasound-guided prostate intervention device: system description and results from phantom studies. Int J Med Robot. 2009 Mar;5(1):51-8 Horgan S, Vanuno D. (2001) Robots in laparoscopic surgery. J Laparoendosc Adv Surg Tech 2001;11:415-9. 15. Hubens G, Ysebaert D, Vaneerdeweg W, Chapelle T, & Eyskens E. (1999) Laparoscopic adrenalectomy with the aid of the AESOP 2000. Acta Chir Belg 1999;99:125-9. Joseph JV, Oleynikov D, Rentschler M, Dumpert J, & Patel HR. (2008) Microrobot assisted laparoscopic urological surgery in a canine model. J Urol. 2008 Nov;180(5):2202-5. Josephson DY, Chen JA, Chan KG, Lau CS, Nelson RA, & Wilson TG. (2010) Roboticassisted laparoscopic radical cystoprostatectomy and extracorporeal continent urinary diversion: highlight of surgical techniques and outcomes. Int J Med Robot. 2010 Sep;6(3):315-23. Kaouk JH, Hafron J, Parekattil S, Moinzadeh A, Stein R, Gill IS, & Hegarty N. (2008) Is retroperitoneal approach feasible for robotic dismembered pyeloplasty: initial experience and long-term results. J Endourol. 2008 Sep;22(9):2153-9. Kaouk JH, Goel RK, Haber GP, Crouzet S, Stein RJ. (2009a) Robotic single-port transumbilical surgery in humans: initial report. BJU Int. 2009a Feb;103(3):366-9. Kaouk JH, Goel RK. (2009b) Single-port laparoscopic and robotic partial nephrectomy. Eur Urol 2009b; 55:11631169. Kauffman EC, Ng CK, Lee MM, Otto BJ, Wang GJ, & Scherr DS. (2011) Early oncological outcomes for bladder urothelial carcinoma patients treated with robotic-assisted radical cystectomy. BJU Int. 2011 Feb;107(4):628-35. Krambeck AE, DiMarco DS, Rangel LJ, Bergstralh EJ, Myers RP, Blute ML, & Gettman MT. (2009) Radical prostatectomy for prostatic adenocarcinoma: a matched comparison of open retropubic and robot-assisted techniques. BJU Int. 2009 Feb;103(4):448-53. Lehman AC, Wood NA, Farritor S, Goede MR, & Oleynikov D. (2011) Dexterous miniature robot for advanced minimally invasive surgery. Surg Endosc. 2011 Jan;25(1):119-23. Louis G, Hubert J, Ladriere M, Frimat L, & Kessler M. (2009) [Robotic-assisted laparoscopic donor nephrectomy for kidney transplantation. An evaluation of 35 procedures]. Nephrol Ther. 2009 Dec;5(7):623-30

124

Robotic Systems Applications, Control and Programming

Magheli A, Gonzalgo ML, Su LM, Guzzo TJ, Netto G, Humphreys EB, Han M, Partin AW, & Pavlovich CP. (2011) Impact of surgical technique (open vs laparoscopic vs roboticassisted) on pathological and biochemical outcomes following radical prostatectomy: an analysis using propensity score matching. BJU Int. 2011 Jun;107(12):1956-62. Menon M, Tewari A, Baize B, Guillonneau B, & Vallancien G. (2002) Prospective comparison of radical retropubic prostatectomy and robot-assisted anatomic prostatectomy: the Vattikuti Urology Institute experience. Urology. 2002 Nov;60(5):864-8. Mozer PC, Partin AW, & Stoianovici D. (2009) Robotic image-guided needle interventions of the prostate. Rev Urol. 2009 Winter;11(1):7-15 Patriciu A, Petrisor D, Muntener M, Mazilu D, Schr M, & Stoianovici D. (2007) Automatic brachytherapy seed placement under MRI guidance. IEEE Trans Biomed Eng. 2007 Aug;54(8):1499-506 Pruthi RS, Nielsen ME, Nix J, Smith A, Schultz H, & Wallen EM. (2010) Robotic radical cystectomy for bladder cancer: surgical and pathological outcomes in 100 consecutive cases. J Urol. 2010 Feb;183(2):510-4. Rocco B, Matei DV, Melegari S, Ospina JC, Mazzoleni F, Errico G, Mastropasqua M, Santoro L, Detti S, & de Cobelli O. (2009) Robotic vs open prostatectomy in a laparoscopically naive centre: a matched-pair analysis. BJU Int. 2009 Oct;104(7):9915. Rogers CG, Singh A, Blatt AM, Linehan WM, & Pinto PA. (2008a) Robotic partial nephrectomy for complex renal tumors: surgical technique. Eur Urol. 2008a Mar;53(3):514-21 Scales CD Jr, Jones PJ, Eisenstein EL, Preminger GM, & Albala DM. (2005) Local cost structures and the economics of robot assisted radical prostatectomy. J Urol. 2005 Dec;174(6):2323-9. Stein RJ, White WM, Goel RK, Irwin BH, Haber GP, & Kaouk JH. (2010) Robotic laparoendoscopic single-site surgery using GelPort as the access platform. Eur Urol. 2010 Jan;57(1):132-6 Wang AJ, Bhayani SB. (2009) Robotic partial nephrectomy versus laparoscopic partial nephrectomy for renal cell carcinoma: single-surgeon analysis of >100 consecutive procedures. Urology. 2009 Feb;73(2):306-10. White MA, Haber GP, Autorino R, Khanna R, Hernandez AV, Forest S, Yang B, Altunrende F, Stein RJ, & Kaouk JH. (2011) Outcomes of robotic partial nephrectomy for renal masses with nephrometry score of 7. Urology. 2011 Apr;77(4):809-13. Wilson T, Torrey R. (2011) Open versus robotic-assisted radical prostatectomy: which is better? Curr Opin Urol. 2011 May;21(3):200-5. Yohannes P, Rotariu P, Pinto P, Smith AD, & Lee BR. (2002) Comparison of robotic versus laparoscopic skills: is there a difference in the learning curve? Urology. 2002. Jul;60(1):39-45

7
Reconfigurable Automation of Carton Packaging with Robotic Technology
Wei Yao and Jian S. Dai
King's College London, United Kingdom

1. Introduction
In the highly competitive food market, a wide variety of cartons is used for packaging, with attractive forms, eye-catching shapes and various structures adopted for both logistical and marketing reasons. These frequent changes demand innovative packaging. For complex styles such as origami cartons, and for cartons for small-batch products, most confectionery companies have to resort to manual effort, a more expensive process that adds cost to the final products, particularly when an expensive seasonal labour supply is required. Such a manual packing line requires an expensive induction and training programme to teach employees how to erect a carton. Current machines are only usable for one general style, or are specifically designed for a fixed type of carton, and every new carton requires the development and manufacture of new tooling for this machinery. New tooling is also required for each different pack size and format. The development and manufacture of this tooling can be very expensive and increases the manufacturer's lead time for introducing new products to market, thereby reducing the manufacturer's ability to match changes in consumer demand. Tooling changeovers, when switching production from one packaging format to another, add further cost. It is uneconomical to produce a dedicated machine for small-batch production. Hence, in this highly seasonal and competitive market, manufacturers need a dexterous, reconfigurable machine to handle the variety and complexity of cartons used to attract customers. Such machines are expected to reconfigure themselves to adapt to different types of cartons and to follow different instructions for various closing methods.

Rapid expansion of robotics and automation technology in recent years has led to the development of robots in the packaging industry. Though pick-and-place robots are extensively used, complex tasks such as erecting and closing origami-type cartons still resort to manual operation. This presents a challenge to robotics. A related problem, creating 3-D sheet metal parts from flat blanks by bending, has been explored by several researchers and provides a starting point for this field; these works discuss folding sequences based on an objective function, using a high-level planner that determines the sequence of bends and a number of low-level planners for individual actions. de Vin (de Vin et al., 1994) described a computer-aided process planning system to determine bending sequences for sheet-metal manufacturing. Shpitalni and Saddan (Shpitalni & Saddan, 1994) described a system to


automatically generate bending sequences, in which domain-specific costs, for example the number of tool changes and part reorientations, were used to guide an A* search. Radin et al. (Radin et al., 1997) presented a two-stage algorithm that generates a bending sequence using collision-avoidance heuristics and then searches for lower-cost solutions without violating time constraints. Inui et al. (Inui et al., 1987) developed a method to plan sheet metal parts and provide a bending simulation. Gupta et al. (Gupta et al., 1998) described a fully automated process planning system based on a state-space search. Wang and Bourne (Wang & Bourne, 1997) explored the use of shape symmetry to reduce planning complexity for sheet metal layout planning, bend planning, part stacking and assembly; the geometric constraints in carton folding parallel those in assembly planning. Wang (Wang, 1997) developed methods to unfold 3-D products into 2-D patterns and identified unfolding bend sequences that avoid collisions with tools. In computational biology, protein folding, which folds a one-dimensional amino acid chain into a three-dimensional protein structure, is one of the most important outstanding problems. Song and Amato (Song & Amato, 2001a, 2001b) introduced a new method that models such foldable objects as tree-like multi-link objects, applying a motion planning technique to the folding problem. This motion planning approach is based on the successful probabilistic roadmap (PRM) method (Kavraki et al., 1996), which has been used to study the related problem of ligand binding (Singh et al., 1999; Bayazit et al., 2001). Advantages of the PRM approach are that it efficiently covers a large portion of the planning space and that it provides an effective way to incorporate and study various initial conformations, which has been of interest in drug design. Molecular dynamics simulations are another approach (Levitt, 1983; Duan & Kollman, 1998), attempting to simulate the true dynamics of the folding process using the classic Newtonian equations of motion; they provide a means to study the underlying folding mechanism, to investigate folding pathways and to expose intermediate folding states.

Automation of the carton folding process involves several subtasks, including automatic folding sequence generation, motion planning of carton folding, and the development of robotic manipulators for reconfigurability. Investigating origami naturally leads to the study of folding machinery, and this evolves into folding a carton. Lu and Akella (Lu & Akella, 1999, 2000) explored a fixture technique to describe carton folding, based on a conventional cuboidal carton manipulated by a SCARA-type robot and on the similarity between carton motion sequences and robot operation sequences. This approach uses the motion planner to aid a human designer in the design of minimal-complexity hardware. However, the technique focused only on rectangular cartons, not on origami-type cartons, which include spherical closed chains at their vertexes.
Balkcom and Mason (Balkcom & Mason, 2004; Balkcom et al., 2004) investigated closed-chain folding of origami and developed a machine that allows a 4-DOF Adept SCARA robot arm to make simple folds. However, for complex cartons containing multiple closed chains, the flexibility and reconfigurability of this method are limited. What is required is a quantitative description of a carton, its folding motion and its operations. Dai (Dai & Rees Jones, 2001, 2002, 2005; Liu & Dai, 2002, 2003; Dubey & Dai, 2006) and Yao (Yao & Dai, 2008; Yao et al., 2010, 2011) at King's College London developed a multi-fingered robotic system for carton folding.

Reconfigurable Automation of Carton Packaging with Robotic Technology

127

2. Equivalent mechanism of an origami carton


A carton is formed from crease lines and panels of various geometric shapes. Folding and manipulating a carton is a process of changing the positions of the panels and the relationships between them. Taking the creases as revolute joints and the cardboard panels as links, a carton can be modeled as a mechanism, and the kinematic model of this equivalent mechanism provides the motion of carton folding. Carton folding starts from a flat cardboard blank consisting of a set of panels with creases connecting adjacent panels. All the required motions involve carton panels folding about a crease line through an angle. There are many ways to fold the carton into its final state, based on the location and orientation of each fold. For simple cartons, manipulation can be assumed to act at one joint between adjacent boards, with each joint moving to its goal configuration; the motion path and sequence can be easily calculated from the joint angles of the carton. A folding sequence can be checked by the following steps: checking the reduced graph, i.e. the equivalent mechanism structure of the carton, to see whether it is symmetric or has any other geometric characteristics; identifying independent branches, which result in independent operations; searching the folding layers in turn; and manipulating from the layer closest to the base, from the symmetric panels, and from the node in the shortest branch, as sketched in the code below.
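A minimal Python sketch of how these checking steps might be encoded is given below. The carton model and panel names are hypothetical examples, not taken from this chapter; the sketch simply encodes the heuristic of folding layer by layer outwards from the base, preferring the shortest branch within each layer.

```python
# A minimal sketch of the folding-sequence heuristic described above.
# The carton is modeled as a tree of panels rooted at the fixed base panel;
# each edge is a crease (revolute joint). Panel names are hypothetical.

carton = {                      # adjacency: panel -> panels hinged to it
    "base": ["side1", "side2", "gusset1"],
    "side1": ["flap1"],
    "side2": ["flap2"],
    "gusset1": [],
    "flap1": [],
    "flap2": [],
}

def folding_sequence(tree, root="base"):
    """Order panels layer by layer from the base outwards,
    visiting shorter branches first within each layer."""
    def branch_length(node):
        children = tree[node]
        return 1 + max((branch_length(c) for c in children), default=0)

    sequence, layer = [], tree[root]
    while layer:
        # fold the current layer, shortest branch first
        layer = sorted(layer, key=branch_length)
        sequence.extend(layer)
        layer = [c for n in layer for c in tree[n]]
    return sequence

print(folding_sequence(carton))
# e.g. ['gusset1', 'side1', 'side2', 'flap1', 'flap2']
```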

Fig. 1. A carton folding sequence for an origami carton

During the folding of the origami carton, the guiding joints can be identified. Four closed loops with four folding sub-mechanisms are found. The sub-mechanism containing a guiding joint can be described as a five-bar linkage in which the panels are the links and the creases are the revolute joints. This means that the reaction forces at


the joints of the cardboard do not produce unexpected deformation. Five revolute joint axes that intersect at the point where the creases meet define the spherical linkage, and their movement is thereby determined. Figure 2 shows a half-erected origami carton, which consists of four sections at vertexes located at the four corners of the carton, labeled O1, O2, O3 and O4. Folding manipulation of the carton is dominated by these sections. The rectangular base of the carton is fixed on the ground. Four creases along the rectangular base are given by four vectors b1, b2, b3 and b4. Four centrally movable creases are given as s, s′, s″ and s‴, each of which is active in a gusset corner. The carton is symmetric along the line A-A′.


Fig. 2. A half-erected origami carton

Thus an equivalent mechanism is modeled in figure 3. The centres of the five-bar mechanisms are located at the four corners of the carton base, O1, O2, O3 and O4. From the equivalent mechanism and the folding process, the five-bar spherical mechanism determines the configuration of the carton.


Fig. 3. The equivalent mechanism of the base of a double wall tray

Reconfigurable Automation of Carton Packaging with Robotic Technology

129

The corner located at O1 forms one spherical mechanism. Its folding joint s moves, controlling the configuration of the mechanism and driving joints t and p. When these two joints are engaged, the mobility of the mechanism reduces from 2 to 1 and the mechanism reaches a singularity. The joint s is active and is called the control joint. When this origami carton is folded manually, human fingers engage the control joints and drive them to their goal positions.

3. Kinematic analysis of the mechanism


The geometric arrangement is defined in figure 4.


Fig. 4. Geometry of the spherical five-bar mechanism

Among the joint axes t, s and p and the fixed axes b1 and b2, the configuration control vector s provides the guiding joint axis that controls the configuration of the carton section. The spherical links between the joint axes are labeled AB, BC, CD, DE and EA. Since the angles αi between adjacent revolute joints are constant throughout the movement of the linkage, the coordinates of the joint axes t, s and p can be obtained by matrix operations from the original joint axis b1, which is a unit vector. The relationship of the axes can be given in terms of geometric constraints as

$$\alpha_1 = \arccos(\mathbf{b}_1\cdot\mathbf{t}),\quad \alpha_2 = \arccos(\mathbf{t}\cdot\mathbf{s}),\quad \alpha_3 = \arccos(\mathbf{s}\cdot\mathbf{p}),\quad \alpha_4 = \arccos(\mathbf{p}\cdot\mathbf{b}_2),\quad \alpha_5 = \arccos(\mathbf{b}_2\cdot\mathbf{b}_1) \qquad (1)$$

Further, the configuration control vector s, parameterized by the two transmission angles α and β, can be determined as

$$\mathbf{s} = \mathbf{R}(y,\,90^\circ{+}\alpha)\,\mathbf{R}(x,\,\beta)\,\mathbf{b}_1 = \begin{bmatrix} c\alpha\, c\beta \\ s\beta \\ -s\alpha\, c\beta \end{bmatrix} \qquad (2)$$

The side elevation vector t, parameterized by the side elevation angle θ, is given as

$$\mathbf{t} = \mathbf{R}(z,\,\theta)\,\mathbf{R}(y,\,\alpha_1)\,\mathbf{b}_1 = \begin{bmatrix} c\theta\, s\alpha_1 \\ s\theta\, s\alpha_1 \\ c\alpha_1 \end{bmatrix} \qquad (3)$$

where c and s abbreviate cosine and sine, and b1 = (0, 0, 1)ᵀ.

Further, the geometric constraint gives

$$\mathbf{t}\cdot\mathbf{s} = c\alpha_2 \qquad (4)$$

Substituting t from equation (3) and s from equation (2) into the above yields

$$c\alpha_2 = c\theta\, c\alpha\, c\beta\, s\alpha_1 + s\theta\, s\beta\, s\alpha_1 - s\alpha\, c\alpha_1\, c\beta \qquad (5)$$

Collecting the terms in cθ and sθ, the above equation can be rewritten as

$$A(\alpha,\beta)\,c\theta + B(\alpha,\beta)\,s\theta - C(\alpha,\beta) = 0 \qquad (6)$$

where

$$A(\alpha,\beta) = s\alpha_1\, c\alpha\, c\beta,\qquad B(\alpha,\beta) = s\beta\, s\alpha_1,\qquad C(\alpha,\beta) = s\alpha\, c\alpha_1\, c\beta + c\alpha_2 \qquad (7)$$

The side elevation angle θ, varying with the transmission angles α and β, can hence be given as

$$\theta(\alpha,\beta) = \tan^{-1}\!\left(\frac{B}{A}\right) - \cos^{-1}\!\left(\frac{C}{\sqrt{A^2+B^2}}\right) \qquad (8)$$

Further, the main elevation vector p, parameterized by the main elevation angle ψ, can be obtained as

$$\mathbf{p} = \mathbf{R}(y,\,\alpha_5)\,\mathbf{R}(z,\,\psi)\,\mathbf{R}(y,\,\alpha_4)\,\mathbf{b}_1 \qquad (9)$$

Hence,

$$\mathbf{p} = \begin{bmatrix} c\alpha_5 & 0 & s\alpha_5 \\ 0 & 1 & 0 \\ -s\alpha_5 & 0 & c\alpha_5 \end{bmatrix}\begin{bmatrix} c\psi & -s\psi & 0 \\ s\psi & c\psi & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} c\alpha_4 & 0 & s\alpha_4 \\ 0 & 1 & 0 \\ -s\alpha_4 & 0 & c\alpha_4 \end{bmatrix}\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \qquad (10)$$

Since α4 = α5 = π/2, the above becomes

$$\mathbf{p} = \begin{bmatrix} 0 \\ s\psi \\ -c\psi \end{bmatrix} \qquad (11)$$

Substituting p from equation (11) and s from equation (2) into the constraint α3 = arccos(s · p) of equation (1) gives

$$c\alpha_3 = c\beta\, s\alpha\, c\psi + s\beta\, s\psi \qquad (12)$$

Assembling the terms in cψ and sψ, the above can be rearranged as

$$D(\alpha,\beta)\,c\psi + E(\alpha,\beta)\,s\psi - F = 0 \qquad (13)$$

where

$$D(\alpha,\beta) = s\alpha\, c\beta,\qquad E(\alpha,\beta) = s\beta,\qquad F = c\alpha_3 \qquad (14)$$

The main elevation angle ψ, varying with the transmission angles α and β, can hence be obtained as

$$\psi(\alpha,\beta) = \tan^{-1}\!\left(\frac{E}{D}\right) - \cos^{-1}\!\left(\frac{F}{\sqrt{D^2+E^2}}\right) \qquad (15)$$

The control vector s can also be obtained from the transformation through the other branch of the loop:

$$\mathbf{s} = \mathbf{R}(z,\,\theta)\,\mathbf{R}(y,\,\alpha_1)\,\mathbf{R}(z,\,\gamma)\,\mathbf{R}(y,\,\alpha_2)\,\mathbf{b}_1 \qquad (16)$$

where γ is the joint angle about the axis t. The vector can be expressed as

$$\mathbf{s} = \begin{bmatrix} c\theta\,(c\alpha_1\, c\gamma\, s\alpha_2 + s\alpha_1\, c\alpha_2) - s\theta\, s\gamma\, s\alpha_2 \\ s\theta\,(c\alpha_1\, c\gamma\, s\alpha_2 + s\alpha_1\, c\alpha_2) + c\theta\, s\gamma\, s\alpha_2 \\ c\alpha_1\, c\alpha_2 - s\alpha_1\, s\alpha_2\, c\gamma \end{bmatrix} \qquad (17)$$

Equating the third components of equations (2) and (17) gives

$$c\gamma = \frac{c\alpha_1\, c\alpha_2 + s\alpha\, c\beta}{s\alpha_1\, s\alpha_2} \qquad (18)$$

$$\gamma(\alpha,\beta) = \arccos\!\left(\frac{c\alpha_1\, c\alpha_2 + s\alpha\, c\beta}{s\alpha_1\, s\alpha_2}\right) \qquad (19)$$


Hence, the elevation angles are related to the transmission angles that control the configuration vector s, and both sets of angles characterize the carton configuration. Figure 5 shows the end-point trajectories of the four configuration control vectors.

Fig. 5. End-point trajectories of four configuration control vectors
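As a numerical illustration, the following Python sketch evaluates the side elevation angle θ, the main elevation angle ψ and the joint angle γ for given transmission angles α and β, assuming the reconstructed forms of equations (6)-(19) above. The link angles chosen here are arbitrary example values, and the branch of the inverse trigonometric solution is the one written in equations (8) and (15); a real carton may require the other branch.

```python
# A minimal numerical sketch of equations (6)-(19); link angles are
# arbitrary example values, not taken from a specific carton design.
from math import sin, cos, acos, atan2, sqrt, radians, degrees

a1, a2, a3 = radians(90), radians(45), radians(60)   # link angles alpha_1..alpha_3
alpha, beta = radians(30), radians(20)               # transmission angles of s

# Equations (6)-(8): side elevation angle theta
A = sin(a1) * cos(alpha) * cos(beta)
B = sin(beta) * sin(a1)
C = sin(alpha) * cos(a1) * cos(beta) + cos(a2)
theta = atan2(B, A) - acos(C / sqrt(A**2 + B**2))

# Equations (13)-(15): main elevation angle psi
D = sin(alpha) * cos(beta)
E = sin(beta)
F = cos(a3)
psi = atan2(E, D) - acos(F / sqrt(D**2 + E**2))

# Equations (18)-(19): joint angle gamma about the axis t
gamma = acos((cos(a1) * cos(a2) + sin(alpha) * cos(beta)) / (sin(a1) * sin(a2)))

print(degrees(theta), degrees(psi), degrees(gamma))
```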

4. Velocity-control formulation of a single vertex of the origami carton


The kinematic representation of the spherical mechanism in Cartesian space is shown in figure 6.

Fig. 6. Screw representation of the spherical five-bar mechanism

Since the system is spherical, the fixed point O can be selected as the reference point, and the system has only three rotational components. In Plücker coordinates the infinitesimal screws are given by

$$\$_{51} = (0,\,0,\,1;\,\mathbf{0}),\quad \$_{12} = (c\theta\, s\alpha_1,\; s\theta\, s\alpha_1,\; c\alpha_1;\,\mathbf{0}),\quad \$_{45} = (1,\,0,\,0;\,\mathbf{0}),\quad \$_{34} = (0,\; s\psi,\; -c\psi;\,\mathbf{0}),\quad \$_{23} = (\mathbf{s};\,\mathbf{0})$$


where s is the control vector obtained in the previous section. Assume the velocity of the control vector, v, is

$$\mathbf{v} = \omega_{45}\,\$_{45} + \omega_{34}\,\$_{34} = \omega_{15}\,\$_{15} + \omega_{21}\,\$_{21} + \omega_{32}\,\$_{32} \qquad (20)$$

Thus, after cancelling the dual parts of the infinitesimal screws, the joint speeds can be computed from

$$\left[\,\$_{34}\;\;\$_{21}\;\;\$_{32}\,\right]\begin{bmatrix} \omega_{34} \\ \omega_{21} \\ \omega_{32} \end{bmatrix} = \omega_{45}\,\$_{45} + \omega_{15}\,\$_{15} \qquad (21)$$

With the application of Cramer's rule, the joint rates in equation (21) are obtained as ratios of scalar triple products of the joint axes, all sharing the common denominator t · (p × s). For the rate of the control joint itself,

$$\omega_{32} = \frac{(\omega_{45}\,\mathbf{b}_2 + \omega_{15}\,\mathbf{b}_1)\cdot(\mathbf{t}\times\mathbf{p})}{\mathbf{t}\cdot(\mathbf{p}\times\mathbf{s})} \qquad (22)$$

Expanding the remaining determinants in the same way gives the coefficients

$$G_1 = \omega_{34} = \frac{(\omega_{45}\,\mathbf{b}_2 + \omega_{15}\,\mathbf{b}_1)\cdot(\mathbf{t}\times\mathbf{s})}{\mathbf{t}\cdot(\mathbf{p}\times\mathbf{s})} \qquad (23)$$

and

$$G_2 = \omega_{21} = \frac{(\omega_{45}\,\mathbf{b}_2 + \omega_{15}\,\mathbf{b}_1)\cdot(\mathbf{p}\times\mathbf{s})}{\mathbf{t}\cdot(\mathbf{p}\times\mathbf{s})} \qquad (24)$$

whose numerators and denominator expand into trigonometric functions of the angles θ and ψ and the link angles αi. So the velocity of the control joint s can be obtained from either branch of the loop as

$$\mathbf{v} = \omega_{45}\,\mathbf{b}_2 + G_1\,\mathbf{p} = \omega_{15}\,\mathbf{b}_1 + G_2\,\mathbf{t} + \omega_{32}\,\mathbf{s} \qquad (25)$$

Collecting the axis vectors b2 = (1, 0, 0)ᵀ, b1 = (0, 0, 1)ᵀ and p = (0, sψ, −cψ)ᵀ as columns, the driving rates map through

$$\mathbf{v} = \left[\,\mathbf{b}_2\;\;\mathbf{b}_1\;\;\mathbf{p}\,\right]\begin{bmatrix} \omega_{45} \\ \omega_{15} \\ \omega_{34} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & s\psi \\ 0 & 1 & -c\psi \end{bmatrix}\begin{bmatrix} \omega_{45} \\ \omega_{15} \\ \omega_{34} \end{bmatrix} \qquad (26)$$

where the matrix

$$\mathbf{J} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & s\psi \\ 0 & 1 & -c\psi \end{bmatrix}$$

is the Jacobian matrix for the control of the system.
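A compact numerical check of equations (20)-(26) can be written with numpy: given the axis vectors and the driving rates ω45 and ω15, the three unknown rates follow from solving the linear system of equation (21) directly, which is equivalent to the Cramer's-rule expressions above. The angle and rate values below are arbitrary illustrations under the reconstructed axis forms.

```python
# A minimal numerical sketch of the velocity solution (20)-(26).
# Axis vectors follow the reconstructed forms above; rates are examples.
import numpy as np

theta, psi, a1 = np.radians([35.0, 50.0, 90.0])
alpha, beta = np.radians([30.0, 20.0])

b1 = np.array([0.0, 0.0, 1.0])                       # $51 axis
b2 = np.array([1.0, 0.0, 0.0])                       # $45 axis
t = np.array([np.cos(theta) * np.sin(a1),
              np.sin(theta) * np.sin(a1), np.cos(a1)])       # $12 axis
p = np.array([0.0, np.sin(psi), -np.cos(psi)])       # $34 axis
s = np.array([np.cos(alpha) * np.cos(beta),
              np.sin(beta), -np.sin(alpha) * np.cos(beta)])  # $23 axis

w45, w15 = 0.4, 0.1                                  # driving rates (rad/s)

# Equation (21): [p t s] [w34, w21, w32]^T = w45*b2 + w15*b1
A = np.column_stack([p, t, s])
w34, w21, w32 = np.linalg.solve(A, w45 * b2 + w15 * b1)

v = w45 * b2 + w34 * p                               # equation (25), branch 4-3
print(w34, w21, w32, v)
```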


5. Simulation of automatically folding an origami carton using a multi-fingered robotic system


The robot has been designed based on an analysis of the common motions and manipulations. It has four fingers, shown in figure 7: two with three degrees of freedom each, denoted finger 1 and finger 2, and two with two degrees of freedom each, denoted finger 3 and finger 4. The finger designs are able to provide the required folding trajectories simply by changing the control programs of the finger motors. Two horizontal jaws are arranged with their pushing direction parallel to the fingers' horizontal tracks. The joints of the fingers are actuated directly by motors, and the whole system requires 14 controlled axes. This design was chosen specifically to provide a high degree of reconfigurability with a minimum number of motors to be controlled.


Fig. 7. A multi-fingered robot for origami carton folding

To determine the control settings and the speeds of the actuated joints of the multi-fingered robotic system when the fingertips are at specified locations and twists, the Jacobian [J] needs to be identified, as

$$[\$_s] = [\mathbf{J}]\,[\dot{\theta}_i] \qquad (27)$$

For corner 1, located at O1 in configuration space, the main part of the screw is given by equation (25). Extending this to the four-fingered robotic system, each finger is related to one corner manipulation, and its velocity is obtained from the twist of the corresponding five-bar control joint, with the four control joint rates collected as

$$[\,\omega_1\;\;\omega_2\;\;\omega_3\;\;\omega_4\,] \qquad (28)$$

Then the Jacobian matrix of the robotic system is determined from the screws $ij, where i denotes the finger number and j the motor number, so that

$$[\dot{\theta}_{ij}] = [\mathbf{J}]\,[\omega_i] \qquad (29)$$

Then the speed and control of every motor can be set. The whole folding process can be simulated by the multi-fingered robotic system, as shown in figure 8. This methodology of multi-robot manipulation using configuration space transformation in the interactive configuration space is the theoretical basis for the further development of multi-fingered reconfigurable systems, and the test rig has provided experimental knowledge and information for further reconfigurable systems in the packaging industry. In this robotic system, all the motors of the robotic fingers and the two horizontal jaws are controlled by a computer. For a different type of carton, only the programs need to be changed, based on the carton's folding trajectories, without readjusting the hardware of the system.


Fig. 8. Simulation of a multi-fingered manipulation on an origami carton

Fig. 9. A multi-fingered robotic system


6. Conclusion
This chapter has presented new approaches to carton modelling and carton folding analysis. By investigating carton examples with an analysis of the equivalent mechanism, the gusset vertexes of the cartons - a common structure of cartons - were extracted and analyzed based on their equivalent spherical linkages, and were identified as the guiding linkages that determine folding. A reconfigurable robotic system was designed based on the algorithms for multi-fingered manipulation, and the principle of reconfigurability was demonstrated. The simulation results for the folding of an origami carton were given to validate the multi-fingered manipulation strategy and the concept of reconfigurable robotic systems. Test rigs were developed to demonstrate the principles of the reconfigurable packaging technology.

7. References
Balkcom, D.J., Demaine, E.D. and Demaine, M.L. (2004) Folding paper shopping bags, In Proceedings of the 14th Annual Fall Workshop on Computational Geometry, Cambridge, Mass., Nov. 19-20, 2004.
Balkcom, D.J. and Mason, M.T. (2004) Introducing robotic origami folding, Proc. 2004 IEEE International Conference on Robotics and Automation, 2004.
Bayazit, O.B., Song, G. and Amato, N.M. (2001) Ligand binding with OBPRM and haptic user input: enhancing automatic motion planning with virtual touch, In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2001.
Dai, J.S. and Rees Jones, J. (2001) Interrelationship between screw systems and corresponding reciprocal systems and applications, Mechanism and Machine Theory, 36(3), pp. 633-651, 2001.
Dai, J.S. and Rees Jones, J. (2002) Kinematics and mobility analysis of carton folds in packing manipulation based on the mechanism equivalent, Journal of Mechanical Engineering Science, Proc. IMechE, Part C, 216(10): 959-970, 2002.
Dai, J.S. and Rees Jones, J. (2005) Matrix representation of topological configuration transformation of metamorphic mechanisms, Transactions of the ASME: Journal of Mechanical Design, 127(4): 837-840, 2005.
de Vin, L.J., de Vries, J., Streppel, A.H., Klaassen, E.J.W. and Kals, H.J.J. (1994) The generation of bending sequences in a CAPP system for sheet metal components, J. Mater. Processing Technol., vol. 41, pp. 331-339, 1994.
Duan, Y. and Kollman, P.A. (1998) Pathways to a protein folding intermediate observed in a 1-microsecond simulation in aqueous solution, Science, 282:740-744, 1998.
Dubey, V.N. and Dai, J.S. (2006) A packaging robot for complex cartons, Industrial Robot: An International Journal, 33(2): 82-87, 2006.
Gupta, S.K., Bourne, D.A., Kim, K.H. and Krishnan, S.S. (1998) Automated process planning for sheet metal bending operations, J. Manufact. Syst., vol. 17, no. 5, pp. 338-360, 1998.
Inui, M., Kinosada, A., Suzuki, H., Kimura, F. and Sata, T. (1987) Automatic process planning for sheet metal parts with bending simulation, in ASME Winter Annu. Meeting, Symp. on Intelligent and Integrated Manufacturing Analysis and Synthesis, 1987, pp. 245-258.
Kavraki, L. (1996) Geometry and the discovery of new ligands, In Proc. Int. Workshop on Algorithmic Foundations of Robotics (WAFR), pages 435-448, 1996.
Kavraki, L., Svestka, P., Latombe, J.C. and Overmars, M. (1996) Probabilistic roadmaps for path planning in high-dimensional configuration spaces, IEEE Trans. Robot. Automat., 12(4):566-580, August 1996.
Levitt, M. (1983) Protein folding by restrained energy minimization and molecular dynamics, J. Mol. Bio., 170:723-764, 1983.
Liu, H. and Dai, J.S. (2002) Carton manipulation analysis using configuration transformation, Journal of Mechanical Engineering Science, Proc. IMechE, 216(C5), pp. 543-555, 2002.
Liu, H. and Dai, J.S. (2003) An approach to carton-folding trajectory planning using dual robotic fingers, Robotics and Autonomous Systems, 42(1), pp. 47-63, 2003.
Lu, L. and Akella, S. (1999) Folding cartons with fixtures: a motion planning approach, Proc. of the IEEE International Conference on Robotics and Automation, May 1999.
Lu, L. and Akella, S. (2000) Folding cartons with fixtures: a motion planning approach, IEEE Trans. on Robotics and Automation, vol. 16, no. 4, pp. 346-356, August 2000.
Radin, B., Shpitalni, M. and Hartman, I. (1997) Two-stage algorithm for determination of the bending sequence in sheet metal products, ASME J. Mechan. Design, vol. 119, pp. 259-266, 1997.
Shpitalni, M. and Saddan, D. (1994) Automatic determination of bending sequence in sheet metal products, Ann. CIRP, vol. 43, no. 1, pp. 23-26, 1994.
Singh, A.P., Latombe, J.C. and Brutlag, D.L. (1999) A motion planning approach to flexible ligand binding, In 7th Int. Conf. on Intelligent Systems for Molecular Biology (ISMB), pages 252-261, 1999.
Song, G. and Amato, N.M. (2001a) A motion planning approach to folding: from paper craft to protein folding, In Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pages 948-953, 2001.
Song, G. and Amato, N.M. (2001b) Using motion planning to study protein folding pathways, In Proc. Int. Conf. Comput. Molecular Biology (RECOMB), pages 287-296, 2001.
Wang, C.-H. and Bourne, D.A. (1997) Using symmetry to reduce planning complexity: a case study in sheet-metal production, in Proc. 1997 ASME Design Engineering Technical Conf., Sacramento, CA, Sept. 1997, DETC97/DFM4327.
Wang, C.-H. (1997) Manufacturability-Driven Decomposition of Sheet Metal Products, Ph.D. dissertation, The Robotics Inst., Carnegie Mellon Univ., Pittsburgh, PA, Robotics Inst. Tech. Rep. CMU-RI-TR-97-35, Sept. 1997.
Yao, W. and Dai, J.S. (2008) Dexterous manipulation of origami cartons with robot fingers in the interactive configuration space, Transactions of the ASME: Journal of Mechanical Design, 130(2), pp. 022303_1-8, 2008.
Yao, W., Cannella, F. and Dai, J.S. (2011) Automatic folding of cartons using a reconfigurable robotic system, Robotics and Computer-Integrated Manufacturing, 27(3), pp. 604-613, 2011.
Yao, W., Dai, J.S., Medland, T. and Mullineux, G. (2010) A reconfigurable robotic folding system for confectionery industry, Industrial Robot: An International Journal, 37(6), pp. 542-551, 2010.

8
Autonomous Anthropomorphic Robotic System with Low-Cost Colour Sensors to Monitor Plant Growth in a Laboratory
Gourab Sen Gupta1, Mark Seelye1, John Seelye2 and Donald Bailey1
1School of Engineering and Advanced Technology (SEAT), Massey University, Palmerston North
2The New Zealand Institute for Plant & Food Research Limited, Palmerston North
New Zealand

1. Introduction
An autonomous anthropomorphic robotic arm has been designed and fabricated to automatically monitor plant tissue growth in a modified clonal micro-propagation system which is being developed for the New Zealand Institute for Plant & Food Research Limited. The custom-fabricated robotic arm uses a vertical linear ball shaft and high speed stepper motors to provide the movements of the various joints, with the arm able to swivel 180 degrees horizontally. Sensors located at the end of the arm monitor plant growth and the ambient growing environment. These include a compact colour zoom camera mounted on a pan and tilt mechanism to capture high resolution images, RGB (red, green and blue) colour sensors to monitor leaf colour, as well as sensors to measure ambient atmospheric temperature, relative humidity and carbon dioxide. The robotic arm can reach anywhere over multiple trays (600mm x 600mm) of plantlets. Captured plant tissue images are processed using innovative algorithms to determine tissue, or whole plant, growth rates over specified time periods. Leaf colour sensors provide information on the health status of the tissue by comparing their output with predetermined optimum values. Custom software has been developed to fully automate the operation of the robotic arm and the capture of data, allowing the arm to return to specified sites (i.e. individual plantlets) at set time intervals to identify subtle changes in growth rates and leaf colour. This will allow plant nutrient levels and the immediate environment to be routinely adjusted in response to this continuous sensing, resulting in optimised rapid growth of the plant with minimal human input. These low-cost colour sensors can be incorporated into a continuous automated system for monitoring the leaf colour of growing plants. Subtle colour changes can be an early indication of stress from less-than-optimum nutrient concentrations. In this chapter we also detail the calibration technique for an RGB sensor and compare it with a high-end spectrophotometer.


2. Robotic systems in agriculture


Robotic and automated systems are becoming increasingly common in all economic sectors. In the past decade there has been a push towards more automation in the horticulture


industry, and it is only now, as robots become more sophisticated and reliable, that we are beginning to see them used to undertake routine, often repetitive tasks which are expensive to do using a highly paid labour force. With rapid strides in technological advancement, more and more applications have become possible. These include the development of a robotic system for weed control (Slaughter, et al., 2008) and systems for the automatic harvesting of numerous agri-science products, such as cutting flowers grown in greenhouses (Kawollek & Rath, 2008) and automating cucumber harvesting in greenhouses (van Henten, et al., 2002). Advances in electronics have empowered engineers to build robots that are capable of operating in unstructured environments (Garcia, et al., 2009). Camera-in-hand robotic systems, wherein a camera is mounted on the robot, usually at the hand, to provide an image of the objects located in the robot's workspace, are becoming increasingly popular (Kelly, et al., 2000). Increasingly, robots are being used to sort, grade, pack and even pick fruits. Fruits can be identified and classified on a continuously moving conveyer belt (Reyes & Chiang, 2009). An autonomous wheeled robot has been developed to pick kiwifruit from orchard vines (Scarfe, et al., 2009). Robotic techniques for the production of seedlings have been developed, identifying a need to add a machine vision system to detect irregularities in seed trays and to provide supplementary sowing using a 5-arm robot (HonYong, et al., 1999). Automation of micro-propagation for the rapid multiplication of plants has been described for a grass species, replacing a costly and tedious manual process (Otte, et al., 1996). A system has also been developed that combines plant recognition and chemical micro-dosing using autonomous robotics (Sogaard & Lund, 2007). Colour as a means of assessing quality is also gaining popularity amongst researchers. Applications include evaluating bakery products using colour-based machine vision (Abdullah, et al., 2000), monitoring tea during fermentation (Borah & Bhuyan, 2005), grading specific fruits and vegetables (Kang & Sabarez, 2009; Miranda, et al., 2007; Omar & MatJafri, 2009) and, in the health sector, determining blood glucose concentrations (Raja & Sankaranarayanan, 2006). Near infrared (NIR) sensors are also gaining popularity as a non-destructive means of assessing fruit and plant material, including the measurement of plant nutrient status (Menesatti, et al., 2010) and the testing of fruit quality (Hu, et al., 2005; Nicola, et al., 2007; Paz, et al., 2009). Investigation into non-destructive methods to measure the health status of plants is a relatively new area of research. Subtle leaf colour changes can be used as a measure of plant health. Although limited work has been carried out in real time, a recent micro-propagation based system used potato tissue images, captured via a digital camera, to identify the colour of selected pixels (Yadav, et al., 2010). Spectral reflectance, using a range of spectral bands, has been used as a non-destructive measure of leaf chlorophyll content in a range of species (Gitelson, et al., 2003). Alternative methods make use of spectroscopic systems with a fixed light source to record the colour reflectance of multiple samples (Yam & Papadakis, 2004). The introduction of machine vision as a means of investigating the environment allows for very complex systems to be developed.
Over the years the conventional robotic design types have remained more or less the same; however, modified designs are increasingly being employed for specific tasks. Designs of robotic arms have made huge progress in recent years, as motor controllers, sensors and computers have become more sophisticated. It is envisaged that as more sensors, such as NIR (near infra-red) and colour sensors, become readily available, they will be integrated into the robotic arm. One such integrated system, which is unique and different from off-the-shelf robots, is detailed in this chapter.


3. Overview of the robotic system


The reported robotic system has been developed to work in a specific environment using specific sensors: it is meant to monitor the growth of plant tissues in a laboratory. The plantlets grow in multiple trays (600mm x 600mm) which are arranged contiguously on a shelf, and there are multiple shelves one above the other. The robot should thus be able to extend its reach vertically to monitor the plants on each shelf, and be capable of reaching each tray. The robotic system designed is based on a SCARA (Selective Compliant Assembly/Articulated Robot Arm) robot. However, SCARA robots are rigid in the Z-axis and pliable in the XY-axes only. This rigidity in the Z-axis is a serious limitation of a SCARA robot for the given scenario. In the proposed system the entire arm is able to move in the Z-axis, as opposed to just the end-effector. Another point of difference between the proposed system and a conventional off-the-shelf industrial robot is the mechanism that houses the sensors. To monitor the growth of plants, a colour camera and RGB sensors are used. To enable the robot to position itself at a desired distance from the plant surface, proximity sensors are also required. The sensors need to be housed in an enclosure at the end of the robotic arm, and in order to capture images and take RGB sensor readings from any angle it should be possible to pan and tilt the sensor housing structure. Such a mechanism is not a standard part of a conventional off-the-shelf industrial robot. In general, the costs of industrial robotic systems are far greater than that of the proposed system; they are often a lot more bulky, and it is hard to integrate additional components (i.e. sensors) into them. Two prototypes of the camera-in-hand robotic system were designed and built. The initial prototype made use of servo motors and was designed as a simple experiment to test the viability of the system and its control mechanism. The colour camera was incorporated in this prototype and its control was implemented. The captured images were stored in a database for subsequent retrieval and processing. The prototype also helped in experimenting with the wireless remote control of the robotic arm and the remote setting of camera features such as zoom, gain and exposure. Having established the proof-of-concept, the second prototype was designed and built to industrial specifications. This version made use of high-torque stepper motors and greatly improved the performance of the robotic arm. Additionally, this prototype incorporated low-cost RGB colour sensors for monitoring plant health, together with a proximity sensor. Sensors to monitor the ambient atmosphere were also incorporated, to measure temperature, humidity and CO2 levels. Section 3.1 gives a concise overview of the first prototype and section 3.2 details the second prototype in greater depth.

3.1 Robotic arm using servo motors

The basic concept of this robotic system, from human input to the control of the arm and camera, is outlined in the functional block diagram shown in figure 1. A system engineering approach was employed to take the robotic arm from concept to reality, making use of standard components and integrating them together to make the final product. The robotic arm was designed using 5 servo motors and implements a pan and tilt mechanism for the camera. The operator uses a joystick to control the movement of the robotic arm. This joystick connects to the PC via a USB interface.
Movements of the joystick, made by the operator, vary the slide bars on the Graphical User Interface (GUI) running on the PC and at the same time control the movement of the joints of the arm. Serial data is then sent via the USB to the wireless transmitter (Zigbee Pro) module which transmits the data to the wireless receiver


module. The received data is then sent to the camera and the servo controller board. The user interface is shown in figure 2 and the completed robotic system, with plant trays, is shown in figure 3.

Fig. 1. Functional block diagram of the camera-in-hand robotic system



Fig. 2. The Graphical User Interface (GUI) of the robot control system


Fig. 3. The completed robotic arm with camera, joystick and plant trays

3.1.1 Control of the servo motor based robotic arm

Using a modular approach, the system was built and tested in parts. The sub-systems that had to be programmed, such as the servo controller, joystick and camera, were tested separately. The testing setup and procedures are explained in this section. The servo controller board can take two types of serial mode signals: USB and 5V TTL UART. The controller board, together with the joystick, was tested using the connection diagram shown in figure 4.

Fig. 4. Block diagram showing the connection between the PC, the servo controller board and the joystick

In the first instance a simple program was written in Visual Basic, allowing each servo motor to be controlled separately by clicking buttons. Motor parameters such as the stepping rate and acceleration interval could be entered through the program's user interface; the user interface of the test program is shown in figure 5. This program sent the corresponding commands to the servo motor controller board. The next step was to control a servo motor through a slide bar (figure 2), which allowed the operator to slide the bar to increment or decrement the position value, producing simple movements based on the position byte. On successfully implementing control of one servo motor, multiple servo motors could then be controlled in the same manner.
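The same position-byte scheme can be exercised from a few lines of Python instead of Visual Basic. In the sketch below the serial port name and the command framing (a sync byte, a channel number and a position byte) are hypothetical stand-ins, since the chapter does not reproduce the controller's exact protocol; only the pattern of opening a port and writing a small command frame per slider change follows the description above.

```python
# A minimal sketch of driving one servo channel over a serial link.
# The framing bytes below are hypothetical, not the board's real protocol.
import serial  # pyserial

PORT = "/dev/ttyUSB0"      # hypothetical port name
SYNC = 0xFF                # hypothetical frame header

def set_servo(link, channel, position):
    """Send one position command; position is a single byte (0-255)."""
    if not 0 <= position <= 255:
        raise ValueError("position must fit in one byte")
    link.write(bytes([SYNC, channel, position]))

with serial.Serial(PORT, baudrate=9600, timeout=1) as link:
    set_servo(link, channel=0, position=128)   # move servo 0 to mid-range
```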


Fig. 5. User interface of the test program to control individual motors

3.2 Robotic arm using stepper motors

The second design of the robot was extensively influenced by the knowledge and insight gained from the servo motor based design of the robotic arm. The robotic arm was designed to move the end-effector over trays of plant material located on different levels of shelving units, to capture images and the colour of the plant material, and to use this information as a non-destructive measurement of plant health. To achieve this, a compact colour camera ("Sony Professional," 2010), a proximity sensor and a colour sensor ("Parallax Home," 2010) are housed in the end-effector. Each tray measures approximately 600mm x 600mm, with each shelf located approximately 300mm above the previous shelf and the top shelf approximately 1000mm high. The system is designed to move the arm into the trays, capture the required information, then move up to the next shelf and repeat the process on the next tray.

3.2.1 Mechanical design of the robotic arm

To allow the robotic arm to move vertically, a ball screw and shaft assembly is incorporated, converting rotational motion into vertical movement. The conceptual design is shown in figure 6. The arm contains a pan and tilt system at its distal end, which houses the camera, colour and proximity sensors. The operation of the arm is completely automated, continually gathering information from the sensors and capturing images for assessment and analysis. The design is based on a modified SCARA robotic arm. Designed in the 3D CAD package SolidWorks, all components were machined in-house using a CNC machine. The arm itself went through a number of design iterations to overcome unforeseen problems and to improve the efficiency of operation, with each design an improvement over the previous iteration.


Fig. 6. SolidWorks rendered design of the robotic arm

Fig. 7. The system in various stages of development and integration

Fig. 8. (a) Completed robotic arm using stepper motors (b) Camera housing and RGB colour sensors


In the initial concept it was intended to make the links of the robotic arm extendable, so that the robot could be flexible and adapted to operate in various workspaces. To ensure the motor torque ratings were not exceeded, a gearing system was investigated, which made use of spur gears to increase the torque ratios. However, once constructed, a number of issues arose, including excessive weight (with the links extended) and slippage between gears. To overcome these issues, a rethink of the arm design removed the ability to extend the link lengths, and a belt and pulley system was integrated to eliminate slippage within the gears. Each design iteration nevertheless maintained the overall original design concept. Figure 7 shows an initial version of the robotic arm in various stages of assembly, and figure 8 shows the final design, with the various aluminium parts anodized. The completed robotic arm is shown in figure 8(a), and a close-up view of the camera housing mechanism and the RGB colour sensors is shown in figure 8(b).

3.2.2 Actuation and control of the robotic arm joints using stepper motors

To allow the various joints to move, the arm uses bipolar, high-torque stepper motors, with the amount of torque varying depending on the joint. The robotic arm uses five stepper motors that are controlled through a motor controller (KTA-190) and a micro-step driver (M542) ("OceanControls," 2010). All five motors have a step angle of 1.8 degrees and make use of a micro-step driver that allows the user to select an even finer resolution (i.e. an increased number of steps per revolution). Both a pulse signal and a direction signal are required to connect a 4-wire stepper motor to the driver, with speed and torque depending on the winding inductance. The KTA-190 motor controllers provide an interface between the computer and up to 4 stepper motors, as well as the ability to control each motor independently or collectively. Utilizing an RS-232 9600, 8N1 ASCII serial communication protocol, up to 4 controller boards can be linked, giving control of up to 16 stepper motors (figure 9). A motor is controlled by a simple address followed by the appropriate ASCII command. The controller has an interface that allows limit switches to be used to prevent the motors from travelling out of range. With a total of 17 commands it is possible to tailor the operation and move the motors; commands include setting the position of the motors, returning the current positions of the motors, moving the motors by a relative or absolute amount, and acceleration settings. A status command returns a 12-bit binary representation of the status of the controller board at any given time, providing information on the movement and direction of each motor and on the status of the limit switches. A sketch of this command pattern is given below.
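The following Python sketch illustrates the address-plus-ASCII-command pattern described above over a 9600, 8N1 serial link. The command mnemonics and reply format used here are hypothetical placeholders, not the KTA-190's documented command set, and the port name is an assumption; only the framing style (an address, a command, an optional argument, a carriage return) follows the description in the text.

```python
# A minimal sketch of the address + ASCII command pattern (9600, 8N1).
# Command names, reply format and port are hypothetical placeholders.
import serial  # pyserial

def send_command(link, address, command, argument=None):
    """Frame one ASCII command for the controller at the given address."""
    parts = [f"@{address:02d}", command]
    if argument is not None:
        parts.append(str(argument))
    link.write((" ".join(parts) + "\r").encode("ascii"))
    return link.readline().decode("ascii").strip()   # controller reply

with serial.Serial("/dev/ttyUSB0", baudrate=9600,
                   bytesize=8, parity="N", stopbits=1, timeout=1) as link:
    send_command(link, 1, "MOVE_REL", 400)   # hypothetical relative move
    status = send_command(link, 1, "STATUS") # hypothetical status query
    # the board reports status as a 12-bit field (movement, direction,
    # limit switches); show it as a binary string for inspection
    print(f"{int(status):012b}" if status.isdigit() else status)
```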

Fig. 9. Block diagram showing the control of the 5 stepper motors

The angle each motor is required to move through is calculated via an inverse kinematics algorithm. The user simply enters the desired tray to be monitored, along with the number (and frequency) of readings within the tray. The software then calculates the required motor positions to enable the camera and sensors to capture the required


information. A proximity sensor has been integrated into the system to ensure that the colour readings are taken at a fixed height of 20mm above the plant material.
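As an illustration of such an inverse kinematics step, the sketch below computes the two horizontal joint angles of a SCARA-like arm for a target point in the tray plane; the vertical axis is handled separately by the ball screw. The link lengths are hypothetical example values, not the dimensions of the arm described here.

```python
# A minimal 2-link planar inverse kinematics sketch for the horizontal
# joints of a SCARA-like arm; link lengths are hypothetical examples.
from math import acos, atan2, cos, sin

L1, L2 = 0.35, 0.30        # link lengths in metres (example values)

def ik_planar(x, y, elbow_up=True):
    """Return shoulder and elbow angles (radians) reaching point (x, y)."""
    r2 = x * x + y * y
    c2 = (r2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = acos(c2) if elbow_up else -acos(c2)
    q1 = atan2(y, x) - atan2(L2 * sin(q2), L1 + L2 * cos(q2))
    return q1, q2

q1, q2 = ik_planar(0.40, 0.25)
# forward check: should reproduce the target point
x = L1 * cos(q1) + L2 * cos(q1 + q2)
y = L1 * sin(q1) + L2 * sin(q1 + q2)
print(q1, q2, x, y)
```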

4. Sensors
Sensors provide information on the surroundings, varying with their intended applications. A number of sensors have been integrated into the system to provide information on plant growth in a non-destructive manner, with the intention of using low-cost sensors which are easily amenable to integration into the system. The camera provides information on how fast the plant is growing, by identifying the quantity of plant material from a 2D view. Colour sensors provide information on the colour of an object; when they are used to monitor plant leaves, subtle changes can be detected before the human eye can identify them, allowing media and nutrient levels to be adjusted accordingly. A proximity sensor ensures colour readings are taken at a fixed height, while temperature, humidity and CO2 sensors provide information on the ambient environment. We detail the camera control and testing in section 4.1. Colour sensor selection, along with calibration techniques and results, is presented in detail in section 4.2.

4.1 Camera

A Sony colour camera (model: FCB-IX11AP) was used. It features a CCD (charge coupled device) image sensor using the PAL (Phase Alternating Line) encoding system. The camera has a 40x zoom ratio (10x optical, 4x digital) that is controllable from a PC via Sony's VISCA (Video System Control Architecture) command protocol. Over 38,000 different command combinations are possible for controlling the camera's features. The camera's macro capability allows it to capture images as close as 10mm from the subject, and it can operate in light conditions as low as 1.5 Lux. The electronic shutter speed is controllable from 1/10 to 1/10000 of a second, allowing for clarity in photographs. In order for the camera's functions to be controlled, hexadecimal commands (as serial data) are sent to the camera. These serial commands require 8 data bits, 1 start bit, 1 (or 2) stop bits and no parity, and have a communication speed of 9.6, 19.2 or 38.4 kbps. The camera can be programmed and controlled using a TTL or RS232C signal level serial interface. To test the camera features, it was directly wired to the PC using the RS232C interface via a USB-to-RS232 converter, as shown in figure 10. The video output signal from the camera was fed to a frame grabber/digitizer which is interfaced to the PC using USB. The captured image is displayed on the application's GUI (figure 12).

Fig. 10. Block diagram showing the wired connection of the PC to the camera's RS232 inputs


To familiarize with the VISCA command structure and to test the various camera functions, especially the programming commands for controlling the zoom, a standard communication program (Terminal v1.9b) was used to send commands to the camera. To test the TTL interface, the system shown in figure 11 was employed; an ST232 IC was used to convert the RS232 level signals to 5V TTL.

Fig. 11. Block diagram showing the wired connection of the PC to the camera's TTL inputs

In the final design of the robotic system the camera was connected using the original RS-232 interface, with custom software created to control the features of the camera. Figure 12 shows the user interface, which allows the camera zoom, iris, gain, exposure settings and shutter features to be manually controlled; it also displays the image captured by the camera. Figure 13 shows the test program depicting a magnified image of the object under observation, with the camera set to 8x zoom. Each of the controllable camera settings is adjusted by sending arrays of hexadecimal commands to the camera, making use of Sony's VISCA protocol. The custom software allows the user to adjust a number of settings to configure the camera as desired.

Fig. 12. User interface of the camera control program
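As a sketch of the hexadecimal command arrays mentioned above, the Python snippet below frames a VISCA "zoom direct" packet, in which the zoom position is spread over the low nibbles of four bytes and the packet is terminated by 0xFF. The packet layout follows the commonly documented VISCA zoom command; the port name is an assumption, and the exact zoom-value range depends on the camera model.

```python
# A sketch of a VISCA "zoom direct" packet: 81 01 04 47 0p 0q 0r 0s FF,
# where pqrs are the four nibbles of the zoom position (port is assumed).
import serial  # pyserial

def visca_zoom_direct(position):
    """Build the zoom-direct packet for camera address 1."""
    nibbles = [(position >> shift) & 0x0F for shift in (12, 8, 4, 0)]
    return bytes([0x81, 0x01, 0x04, 0x47, *nibbles, 0xFF])

with serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1) as link:
    link.write(visca_zoom_direct(0x1000))   # mid-range zoom value (example)
    reply = link.read(16)                    # ACK/completion from the camera
    print(reply.hex())
```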


Fig. 13. Graphical User Interface (GUI) of the remote control application (camera zoom 8x)

4.2 RGB colour sensors

Currently there are a number of colour sensors on the market, with prices ranging from low-cost light-to-frequency chips to sophisticated and very expensive spectrophotometers. Parallax (Parallax Inc, CA, USA) has two colour sensors that integrate seamlessly with their Basic Stamp microcontroller. Both the ColorPAL and the TCS3200 colour sensor are supplied with some source code, making them amenable to integration with our customised system.


Fig. 14. Parallax ColorPAL colour sensor, showing the red, green and blue LEDs, the TSL13T light sensor, the snorkel and the connection pins (+5V, GND, signal)

The ColorPAL sensor (figure 14) illuminates a sample using in-built red, green and blue LED light sources (one colour at a time) and records the quantity of light reflected back from


the object. The ColorPAL makes use of a TAOS (Texas Advanced Optoelectronic Solutions) light-to-voltage chip. When light is reflected, the voltage, which is proportional to the light reflected, is used to determine the sample's R, G and B colour contents. The ColorPAL requires the sample to be illuminated by each of the red, green and blue LEDs in turn, with a snorkel shielding against interference from external light sources. This requires the ColorPAL to be in direct contact with the object to obtain an optimum reading without interference.

Fig. 15. Parallax TCS3200 colour sensor (labelled parts: lens; connection pins on the bottom of the board; white LEDs)

The TCS3200 colour sensor (figure 15) makes use of a TAOS TCS3200 RGB light-to-frequency chip. It operates by illuminating the object with two white LEDs, while an array of photodetectors (each with a red, green, blue or clear filter) interprets the colour being reflected by means of a square wave output whose frequency is proportional to the light reflected. The TCS3200 colour sensor has a 5.6 mm lens, which is positioned to allow an area of 3.5 mm2 to be viewed. A USB4000 spectrometer (Ocean Optics Inc., FL, USA) was used to find the height at which the greatest intensity of light occurred when the RGB sensor was placed above a sample. As the two white LEDs are directed downwards at an angle, there is a point where the light intensity is greatest; this position was 20 mm above the surface of the sample, as shown in figure 16.

Fig. 16. Light absorbed from TCS3200 across the white LED light spectrum when the sensor is positioned at 6 different heights
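Since the chip reports colour as a frequency, the host must convert per-filter pulse counts into channel values. The sketch below shows one plausible way to do this on the host side; the pulse counts and white-reference values are hypothetical placeholders, not measurements from the system described here, and on the real system the counts would come from the Basic Stamp gating the sensor output with each filter selected in turn.

```python
# Turn TCS3200-style per-filter pulse counts into 0-255 channel values by
# scaling against a white reference, so that white maps to (255, 255, 255).
def counts_to_rgb(counts, white_counts):
    """Scale per-filter pulse counts so the white reference maps to 255."""
    return tuple(min(255, round(255 * c / w))
                 for c, w in zip(counts, white_counts))

white = (4100, 3900, 3600)      # pulse counts over the gate time on white
sample = (1600, 2300, 2050)     # pulse counts on the sample under test
print(counts_to_rgb(sample, white))   # e.g. (100, 150, 145)
```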


Since the TCS3200 is mounted 20 mm above the sample, and is therefore not in direct contact with it, it was better suited to our application than the full contact required by the ColorPAL sensor. A Konica Minolta CM-700D spectrophotometer (Konica Minolta Sensing Americas, Inc., NJ, USA) was used to validate and calibrate the RGB sensors. For accurate measurements, the CM-700D was calibrated by taking both white and black readings from a supplied white and black object, respectively. The CM-700D gives colour in the XYZ colour space, as well as L*a*b*, L*C*h, Hunter Lab, Yxy and Munsell. A linear transformation matrix was required to transform data from the XYZ colour space to the RGB colour space to enable comparisons and calibration against the Parallax sensor. The linear transformation equation to be solved (Juckett, 2010) is:

[X, Y, Z]^T = M [R, G, B]^T                                      (1)

x = X / (X + Y + Z)                                              (2)
y = Y / (X + Y + Z)                                              (3)
z = Z / (X + Y + Z)                                              (4)

Equations (2)-(4), combined with the standard 1931 xy chromaticity diagram, provided the foundation for the linear transformation (Eq. 1). This transformation converted the XYZ data initially to sRGB colour. The chromaticity values of x, y and z are shown in Table 1 (Lindbloom, 2010).

Colour   x      y      z
Red      0.64   0.33   0.212656
Green    0.30   0.60   0.715158
Blue     0.15   0.06   0.072186

Table 1. x, y and z chromaticity values of red, green and blue for converting XYZ to sRGB

From the x, y and z chromaticity values (Table 1), the transformation matrix M is calculated (Eq. 5):

M = [ 0.721144   0.063298   0.166008
      0.303556   0.643443   0.052999
      0.000076   0.064689   1.024294 ]                           (5)

To calculate the R, G and B values the inverse is taken (Eqs. 6-7):

[R, G, B]^T = M^(-1) [X, Y, Z]^T                                 (6)

M^(-1) = [  1.436603  -0.118534  -0.226698
           -0.681279   1.618476   0.026671
            0.042919  -0.102206   0.974614 ]                     (7)

Colour   x       y       z
Red      0.7350  0.2650  0.176204
Green    0.2740  0.7170  0.812985
Blue     0.1670  0.0090  0.010811

Table 2. x, y and z chromaticity values of red, green and blue for converting XYZ to CIE RGB

Testing showed that the TCS3200 produced light in the CIE RGB colour space, and although later results would show that the colour sensor could be calibrated to the sRGB colour space, the results of that calibration would be incorrect. A colour transformation to the CIE RGB colour space was therefore more appropriate than the sRGB colour space, and consequently a new linear transformation was required. The x, y and z chromaticity values shown in Table 2 are again used to solve the transformation matrix M (Eq. 8):

M = [ 0.4887180   0.3106803   0.2006017
      0.1762044   0.8129847   0.0108109
      0.0000000   0.0102048   0.9897952 ]                        (8)

Again, to calculate the R, G and B values the inverse is taken (as in Eq. 6):

M^(-1) = [  2.3706743  -0.9000405  -0.4706338
           -0.5138850   1.4253036   0.0885814
            0.0052982  -0.0146949   1.0093968 ]                  (9)
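To make the use of these matrices concrete, the short numpy sketch below maps an XYZ reading to CIE RGB with the matrices of Eqs. 8 and 9; the sample XYZ triplet is illustrative only, not a measurement from this study.

```python
# Check of the transformations in Eqs. 6, 8 and 9: an XYZ reading from the
# spectrophotometer is mapped to CIE RGB via the inverse matrix.
import numpy as np

M = np.array([[0.4887180, 0.3106803, 0.2006017],
              [0.1762044, 0.8129847, 0.0108109],
              [0.0000000, 0.0102048, 0.9897952]])   # Eq. 8, RGB -> XYZ

M_inv = np.linalg.inv(M)                            # Eq. 9, XYZ -> RGB

xyz = np.array([0.31, 0.33, 0.29])                  # illustrative reading
rgb = M_inv @ xyz                                   # Eq. 6
print(np.round(rgb, 4))
```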

4.2.1 Colour sensor calibration and results

In order to validate the TCS3200 colour sensor, it was necessary to calibrate and test it against the CM-700D spectrophotometer. This involved taking 200 RGB readings with the TCS3200 using fifteen different coloured samples from the Royal Horticultural Society (RHS) colour charts and averaging them. The same samples were each measured 20 times with the CM-700D and again averaged. These tests were all completed in a constant-temperature dark room. As the CM-700D uses the XYZ colour space, the linear transformation matrix was used to convert the XYZ values to the CIE RGB colour space (Eq. 9). The TCS3200 was first calibrated in software by modifying the integration time so that the white object (used to calibrate the CM-700D) gave an RGB value as close as possible to 255,255,255, followed by scaling each of the RGB values to ensure the white reading matched that of the CM-700D. In order to calculate a calibration factor the following equation was used:

R'_N = R_N^γ                                                     (10)


where:

R'_N = CM-700D reading (desired RGB value)
R_N = TCS3200 reading (un-calibrated sensor data)
γ = gamma (required calibration factor)

First the TCS3200 sensor data were scaled so that the white reading matches that of the CM-700D for each of the R, G and B readings (Eq. 11):

R_N = R (R'_N / R_max),  G_N = G (G'_N / G_max),  B_N = B (B'_N / B_max)     (11)

where R_max, G_max and B_max represent the maximum R, G and B values of a white object from the TCS3200.

TCS3200 ID | Gain adjusted (R, G, B) | White adjusted (R, G, B) | CM-700D CIE RGB (R, G, B) | Output calibrated (R, G, B)
123A       | 99, 148, 167            | 88, 152, 144             | 62, 155, 152              | 85, 158, 143
127C       | 38, 79, 75              | 34, 81, 65               | 17, 89, 69                | 31, 89, 64
129C       | 99, 152, 137            | 88, 156, 118             | 71, 166, 123              | 85, 162, 117
131C       | 25, 41, 35              | 22, 42, 30               | 10, 43, 27                | 20, 49, 29
133C       | 62, 88, 85              | 55, 90, 73               | 47, 93, 80                | 52, 98, 72
135C       | 42, 51, 35              | 37, 52, 30               | 36, 78, 39                | 35, 60, 29
137C       | 42, 51, 35              | 37, 52, 30               | 40, 54, 30                | 35, 60, 29
139C       | 68, 82, 58              | 61, 84, 50               | 63, 88, 48                | 57, 92, 49
141C       | 57, 80, 45              | 51, 82, 39               | 55, 87, 35                | 48, 90, 38
143C       | 71, 88, 48              | 63, 90, 41               | 72, 91, 32                | 60, 98, 41
145C       | 171, 168, 122           | 152, 172, 105            | 169, 185, 101             | 149, 178, 104
147C       | 84, 86, 62              | 75, 88, 53               | 84, 91, 51                | 71, 96, 53
149C       | 174, 183, 114           | 155, 187, 98             | 170, 206, 86              | 152, 192, 97
155D       | 255, 249, 258           | 227, 255, 222            | 227, 255, 222             | 226, 255, 222
202A       | 17, 17, 20              | 15, 17, 17               | 10, 13, 13                | 14, 22, 17

Table 3. Results obtained comparing the TCS3200 colour sensor (calibrated and un-calibrated) with the CM-700D spectrophotometer over a range of 15 RHS colours

Table 3 shows the data recorded from the colour sensor along with the equivalent results from the CM-700D (using the CIE RGB transformation matrix) and the calibrated TCS3200 results. Table 4 shows the errors associated with Table 3. The calibration factors (γ) for each colour were calculated using normalized data (Eq. 12):

γ_R = log(R'_N / 255) / log(R_N / 255),  γ_G = log(G'_N / 255) / log(G_N / 255),  γ_B = log(B'_N / 255) / log(B_N / 255)     (12)

For each colour sample measured, the calibration factor was calculated and averaged using a geometric mean (as opposed to the more common arithmetic mean (Fleming & Wallace, 1986)), thus providing the factor for R, G and B individually. The (desired) calibrated values were then obtained using equation 13:

R'_N(calibrated) = (R_N / 255)^γ · 255                           (13)

For a range of fifteen colours, measurements were taken using the TCS3200 RGB sensor and the CM-700D spectrophotometer (Table 3). The gamma calibration factors calculated were:

γ_R (red) = 1.05,  γ_G (green) = 0.92,  γ_B (blue) = 1.00        (14)

           TCS3200 (un-calibrated)      TCS3200 (calibrated)
           R       G       B            R       G       B
Error      9.691   6.806   5.107        10.161  6.162   4.966
Error %    3.8     2.669   2.003        3.985   2.416   1.947
S.D.       7.423   7.298   3.485        6.631   4.757   3.699

Table 4. Average error (0-255), percentage error and standard deviation for red, green and blue measurements of the TCS3200 colour sensor, calibrated and un-calibrated, compared with CM-700D spectrophotometer results across a range of colours
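The sketch below reproduces this gamma-fitting procedure in miniature: a per-channel γ is taken as the geometric mean of the per-sample exponents of Eq. 12 and then applied through Eq. 13. The two red-channel pairs are taken from Table 3; everything else about the snippet is an illustrative assumption.

```python
# Per-channel gamma fit (Eq. 12, geometric mean) and application (Eq. 13).
import math

def gamma_factor(sensor, reference):
    """Geometric mean of the per-sample gammas of Eq. 12."""
    logs = [math.log(math.log(r / 255) / math.log(s / 255))
            for s, r in zip(sensor, reference)]
    return math.exp(sum(logs) / len(logs))

def calibrate(value, gamma):
    """Apply Eq. 13 to one channel value."""
    return (value / 255) ** gamma * 255

sensor_R = [88, 34]        # white-adjusted TCS3200 red readings (Table 3)
reference_R = [62, 17]     # matching CM-700D red equivalents (Table 3)
g = gamma_factor(sensor_R, reference_R)
print(round(g, 2), round(calibrate(88, g)))
```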


Fig. 17. TCS3200 sensor RGB readings, calibrated and un-calibrated, compared with the CM-700D readings of: Red (A); Green (B); Blue (C) using a CIE RGB colour transformation. (Colour samples are as given in Table 3)

An example of a green colour interpreted by the CM-700D and the TCS3200 colour sensor before and after calibration is shown in figure 18.

Fig. 18. Graphical representation of the colour differences between the calibrated and un-calibrated TCS3200 colour sensor (un-calibrated TCS3200: RGB = 63,90,41; calibrated TCS3200: RGB = 60,98,41; CM-700D spectrophotometer: RGB = 72,91,32)

Results showed that when calibrating the CM-700D XYZ values to CIE RGB instead of sRGB, the calibration results improved, with a much smaller error for R, G and B, as shown in Table 5. A colour representation can be seen in figure 19.

           CIE RGB                      sRGB
           R       G       B            R       G       B
Error      10.289  6.117   5.683        14.777  7.055   9.564
Error %    4.035   2.399   2.229        5.795   2.767   3.751
S.D.       6.562   4.739   3.357        12.314  7.54    5.772

Table 5. Comparison between the CIE RGB and sRGB transformation matrices, showing the CIE RGB results to be more accurate than the sRGB


Fig. 19. An example of a green colour interpreted by the TCS3200 colour sensor with no calibration (raw reading: RGB = 71,88,48) compared with the CM-700D with both the sRGB (RGB = 149,166,81) and CIE RGB (RGB = 72,91,31) transformations

5. Conclusion
An anthropomorphic robotic arm has been designed, fabricated and fully tested to meet the requirements set out by The New Zealand Institute for Plant & Food Research Limited. The robotic system is able to reach and monitor plantlets growing in trays on a multi-level shelving unit. Custom software has been developed to fully automate the control of the robotic arm. The position of the robotic arm can be controlled with great precision using the microstepper controller to allow micro-motion of the stepper motors, and the robot can be programmed to autonomously position itself and take readings at regular intervals. Several sensors have been integrated in the robotic system, namely a high-end colour camera for capturing high-resolution images of plantlets; a proximity sensor to position the arm at a predetermined distance from the plant surface for taking measurements; temperature, humidity and CO2 sensors to monitor the ambient atmospheric conditions; and a low-cost RGB sensor to measure the colour of plant leaves.

Two different RGB sensors have been evaluated. Experimental results show that the Parallax TCS3200 RGB sensor is a useful low-cost colour sensor which, when calibrated against an industry-standard spectrophotometer, can provide accurate RGB readings. It is therefore a useful component for integration into an automated system such as a robotic arm, with various other sensors, for monitoring plants growing in a modified plant micro-propagation system. The system has the potential not only to monitor plant material in a laboratory environment but also to serve other applications where non-destructive measurements of colour are required.

6. Acknowledgment
The hardware cost of this research has been funded by The New Zealand Institute for Plant & Food Research. The authors greatly appreciate the financial and technical workshop support given by the School of Engineering and Advanced Technology, Massey University and The New Zealand Institute for Plant & Food Research Limited.


7. References
Abdullah, M. Z., Aziz, S. A., & Mohamed, A. M. D. (2000). Quality inspection of bakery products using a color-based machine vision system. Journal of Food Quality, 23(1), 39-50.
Borah, S., & Bhuyan, M. (2005). A computer based system for matching colours during the monitoring of tea fermentation. International Journal of Food Science and Technology, 40(6), 675-682.
Fleming, P. J., & Wallace, J. J. (1986). How not to lie with statistics: The correct way to summarize the benchmark results. Communications of the ACM, 29(3), 218-221.
Garcia, G. J., Pomares, J., & Torres, F. (2009). Automatic robotic tasks in unstructured environments using an image path tracker. Control Engineering Practice, 17(5), 597-608.
Gitelson, A. A., Gritz, Y., & Merzlyak, M. N. (2003). Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. Journal of Plant Physiology, 160(3), 271-282.
HonYong, W., QiXin, C., Masateru, N., & JianYue, B. (1999). Image processing and robotic techniques in plug seedling production. Transactions of the Chinese Society of Agricultural Machinery, 30, 57-62.
Hu, X., He, Y., Pereira, A. G., & Gómez, A. H. (2005). Nondestructive determination method of fruit quantity detection based on Vis/NIR spectroscopy technique. 27th Annual International Conference of the Engineering in Medicine and Biology Society (IEEE-EMBS 2005), 1956-1959.
Juckett, R. (2010). RGB color space conversion - Linear transformation of color. Retrieved 14/8/2010, from [Link]
Kang, S. P., & Sabarez, H. T. (2009). Simple colour image segmentation of bicolour food products for quality measurement. Journal of Food Engineering, 94, 21-25.
Kawollek, M., & Rath, T. (2007). Robotic harvest of cut flowers based on image processing by using Gerbera jamesonii as model plant. In S. DePascale, G. S. Mugnozza, A. Maggio & E. Schettini (Eds.), Proceedings of the International Symposium on High Technology for Greenhouse System Management (Greensys 2007), 557-563.
Kelly, R., Carelli, R., Nasisi, O., Kuchen, B., & Reyes, F. (2000). Stable visual servoing of camera-in-hand robotic systems. IEEE/ASME Transactions on Mechatronics, 5(1), 39-48.
Lindbloom, B. (2010). Useful colour equations. Retrieved 18/8/2010, from [Link]
Menesatti, P., Antonucci, F., Pallottino, F., Roccuzzo, G., Allegra, M., Stagno, F., et al. (2010). Estimation of plant nutritional status by Vis-NIR spectrophotometric analysis on orange leaves. Biosystems Engineering, 105(4), 448-454.
Miranda, C., Girard, T., & Lauri, P. E. (2007). Random sample estimates of tree mean for fruit size and colour in apple. Scientia Horticulturae, 112(1), 33-41.
Nicolaï, B. M., Beullens, K., Bobelyn, E., Peirs, A., Saeys, W., Theron, K. I., et al. (2007). Nondestructive measurement of fruit and vegetable quality by means of NIR spectroscopy: A review. Postharvest Biology and Technology, 46(2), 99-118.
OceanControls. (2010). Retrieved 12/04/2010, from [Link]
Omar, A. F. B., & MatJafri, M. Z. B. (2009). Optical sensor in the measurement of fruits quality: A review on an innovative approach. International Journal of Computer and Electrical Engineering, 1(5), 557-561.
Otte, C., Schwanke, J., & Jensch, P. (1996). Automatic micropropagation of plants. Optics in Agriculture, Forestry, and Biological Processing II, Proc. SPIE 2907, 80-87.
Parallax Home. (2010). Retrieved 05/07/2010, from [Link]
Paz, P., Sánchez, M.-T., Pérez-Marín, D., Guerrero, J.-E., & Garrido-Varo, A. (2009). Evaluating NIR instruments for quantitative and qualitative assessment of intact apple quality. Journal of the Science of Food and Agriculture, 89(5), 781-790.
Raja, A. S., & Sankaranarayanan, K. (2006). Use of RGB color sensor in colorimeter for better clinical measurements of blood glucose. BIME Journal, 6(1), 23-28.
Reyes, J., & Chiang, L. (2009). Location and classification of moving fruits in real time with a single colour camera. Chilean Journal of Agricultural Research, 69, 179-187.
Scarfe, A. J., Flemmer, R. C., Bakker, H. H., & Flemmer, C. L. (2009). Development of an autonomous kiwifruit picking robot. 4th International Conference on Autonomous Robots and Agents, 380-384.
Slaughter, D. C., Giles, D. K., & Downey, D. (2008). Autonomous robotic weed control systems: A review. Computers and Electronics in Agriculture, 61(1), 63-78.
Søgaard, H. T., & Lund, I. (2007). Application accuracy of a machine vision-controlled robotic micro-dosing system. Biosystems Engineering, 96(3), 315-322.
Sony Professional. (2010). Retrieved 05/06/2009, from [Link]
van Henten, E. J., Hemming, J., van Tuijl, B. A. J., Kornet, J. G., Meuleman, J., van Os, E. A., et al. (2002). An autonomous robot for harvesting cucumbers in greenhouses. Autonomous Robots, 13(3), 241-258.
Yadav, S. P., Ibaraki, Y., & Gupta, S. D. (2010). Estimation of the chlorophyll content of micropropagated potato plants using RGB based image analysis. Plant Cell Tissue and Organ Culture, 100(2), 183-188.
Yam, K. L., & Papadakis, S. E. (2004). A simple digital imaging method for measuring and analyzing color of food surfaces. Journal of Food Engineering, 61, 137-142.

Part 2
Control and Modeling

9
CPG Implementations for Robot Locomotion: Analysis and Design
Jose Hugo Barron-Zambrano and Cesar Torres-Huitzil
Information Technology Laboratory, CINVESTAV-IPN Mexico
1. Introduction
The ability to efficiently move in a complex environment is a key property of animals. It is central to their survival, i.e. to avoid predators, to look for food, and to find mates for reproduction (Ijspeert, 2008). Nature has found different solutions for the problem of legged locomotion. For example, vertebrate animals have a spinal column and one or two pairs of limbs that are used for walking. Arthropod animals are characterized by a segmented body covered by a jointed external skeleton (exoskeleton), with paired jointed limbs on each segment, and they can have a high number of limbs (Carbone & Ceccarelli, 2005). The biological mechanisms underlying locomotion have therefore been extensively studied by neurobiologists, and in recent years there has been an increase in the use of computer simulations for testing and investigating models of locomotor circuits based on neurobiological observations (Ijspeert, 2001). However, the mechanisms generating the complex motion patterns performed by animals are still not well understood (Manoonpong, 2007).

Animal locomotion, for instance, requires multi-dimensional coordinated rhythmic patterns that need to be correctly tuned so as to satisfy multiple constraints: the capacity to generate forward motion, with low energy, without falling over, while adapting to possibly complex terrain (uneven ground, obstacles), and while allowing the modulation of speed and direction (Ijspeert & Crespi, 2007). In vertebrate animals, an essential building block of the locomotion controller is the Central Pattern Generator (CPG) located in the spinal cord. The CPG is a neural circuit capable of producing coordinated patterns of rhythmic activity in open loop, i.e. without any rhythmic inputs from sensory feedback or from higher control centers (Delcomyn, 1980; Grillner, 1985). Interestingly, very simple input signals are sufficient to modulate the produced patterns. Furthermore, CPGs can adapt to various environments by changing their periodic rhythmic patterns. For instance, cats and horses are able to change their locomotor patterns depending on the situation.

This relevance of locomotion both for biology and for robotics has led to multiple interesting interactions between the two fields. The interactions have mainly been in one direction, with robotics taking inspiration from biology in terms of morphologies, modes of locomotion, and/or control mechanisms. In particular, many robot structures are directly inspired by animal morphologies, from snake robots, quadruped robots, to humanoid robots. Increasingly, robotics is now providing something back to biology, with robots being used as scientific tools to test biological hypotheses (Ijspeert, 2008). Researchers have studied CPGs for decades, and some principles have been derived to model their functionality and structure. CPGs have been proposed as a mechanism to generate an efficient control strategy for legged robots based on biological locomotion principles. Locomotion in legged robots is much more complex than in wheeled robots, since the former have between twelve and twenty degrees of freedom, which must be rhythmically coordinated to produce specific gaits. The design of locomotion control systems for legged robots is a challenge that has been only partially solved. To develop bio-inspired solutions, detailed analyses of candidate biological neural systems are essential, as in the case of legged locomotion.

Models of CPGs have been used to control a variety of different types of robots and different modes of locomotion. For instance, CPG models have been used with hexapod and octopod robots inspired by insect locomotion, quadruped robots inspired by vertebrates such as the horse, biped robots inspired by humans, and other kinds of robots inspired by reptiles, such as snakes, as will be discussed in section 2. CPGs can be modeled with different levels of abstraction; these aspects will be presented in section 3. Additionally, CPGs present several interesting properties, including distributed control, the ability to deal with redundancies, fast control loops, and the modulation of locomotion by simple control signals. These properties, when transferred to mathematical models, make CPGs interesting building blocks for locomotion controllers in robots (Fujii et al., 2002; Ijspeert, 2008), as shown in section 4.

In spite of its importance, the CPG approach presents several disadvantages. One of the main drawbacks of CPGs is related to the learning methodologies used to generate the rhythmic signals (Zielinska, 1996). For that purpose, a method to tune a CPG using genetic algorithms (GA) is proposed, whose details are provided in section 5. Moreover, a methodology for designing CPGs to solve a particular locomotor problem is still missing. Another problem is that animals rarely perform steady-state locomotion for a long time, and tend to superpose, and switch between, multiple motor behaviors. A remaining open challenge is therefore to design control architectures capable of exhibiting such rich motor skills.

Engineering CPG-based control systems has been difficult, since the simulation of even rather simple neural network models requires a significant computational power that exceeds the capacity of general embedded microprocessors. As a result, CPG dedicated hardware implementations, both analog and digital, have received more attention (Nakada, 2003). On one hand, analog circuits have already been proposed, being computation and power efficient, but they usually lack flexibility and dynamics and they involve large design cycles. On the other hand, a Field-Programmable Gate Array (FPGA) substrate might provide real-time response, low power consumption and concurrent processing solutions for low-level locomotion issues (Barron-Zambrano et al., 2010b). So, a brief analysis of CPG implementations is presented in section 6. Also, the integration of environment information might allow behaviors to be produced by means of selection and adaptation of lower substrates. Under this scenario, legged robots may reconfigure their locomotion control strategies on-line according to sensory information so as to effectively handle dynamic changes in the real world. Here, the FPGA-based approach fits well, since it could efficiently handle higher perception functions for CPG parameter adaptation, and provide the neural processing and coordination of a custom and adaptive CPG hardware module. For those reasons, section 7 presents a hardware implementation for quadruped locomotion control based on an FPGA. The experimental results of the hardware implementation and the future work are shown in sections 8 and 9, respectively. Finally, the conclusions of this work are presented in section 10.


2. Locomotion solution methods


In the literature, there are different approaches to the design of locomotion control systems, such as trajectory based methods, heuristic based methods and biologically inspired methods.
2.1 Trajectory based methods

In trajectory based methods, to move a leg along a desired trajectory, the joint angles between limbs are calculated in advance, using a mathematical model that incorporates both robot and environment parameters, to produce a sequence of specifically scheduled actions. These methods do not provide a methodology to design a controller; they are only a means to prove the stability of the motion. Therefore, the trajectories have to be designed by trial and error or from animal locomotion recording data (Jalal et al., 2009). The most popular stabilization criteria are the Center of Mass (CoM), Center of Pressure (CoP) and Zero Moment Point (ZMP). The gait is stable when the chosen criterion point remains inside the support polygon (the convex hull formed by the contact points of the feet with the ground). These methods can generate either static or dynamic stability. The first prevents the robot from falling down by keeping the CoM inside the support polygon, adjusting the body posture very slowly, thus minimizing the dynamic effects and allowing the robot to pause at any moment of the gait without falling down. Using this criterion will generally lead to more power consumption, since the robot has to adjust its posture so that the CoM is always inside the support polygon. The second relies on keeping the ZMP or CoP inside the support polygon, which is a necessary and sufficient condition to achieve stability. Dynamic balance is particularly relevant during the single support phase, when the robot is standing on only one foot. This generally leads to faster and more reliable walking gaits (Picado et al., 2008).
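As a concrete illustration of these criteria, the sketch below tests whether a candidate stability point lies inside the support polygon. It is a generic geometric check, not code from any of the cited systems, and the foot positions are made-up values.

```python
# Stability check: the gait is treated as stable while the criterion point
# (CoM projection, CoP or ZMP) lies inside the support polygon, taken here
# as a convex polygon with vertices given in counter-clockwise order.
def inside_support_polygon(point, polygon):
    """Return True if `point` is inside the convex `polygon` (CCW vertices)."""
    px, py = point
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        # Cross product sign tells on which side of edge (x1,y1)->(x2,y2) we are.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

feet = [(0.0, 0.0), (0.3, 0.0), (0.3, 0.5), (0.0, 0.5)]  # four contact points
print(inside_support_polygon((0.15, 0.25), feet))  # True: statically stable
print(inside_support_polygon((0.45, 0.25), feet))  # False: would tip over
```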
2.2 Heuristic based methods

The major problem with the ZMP or CoP methods is the complexity of the equations used to compute the robot dynamics. With such a level of complexity directly exposed to the robot programmer, it becomes very difficult to have a global view of the robot behavior. This is why heuristic methods were developed: to hide the complexity of the problem and make it more accessible. Heuristic methods have strong similarities with the trajectory based approach. Joint trajectories are also pre-computed from an external optimization process; only the generation strategy differs. This time, heuristic or evolutionary techniques (e.g. genetic algorithms) are used to design the desired trajectories (Lathion, 2006). The most successful approach of this kind is called Virtual Model Control (VMC) (Picado et al., 2008). It is a motion control framework that uses simulations of virtual components to generate desired joint torques. These joint torques create the same effect that the virtual components would have created, thereby creating the illusion that the simulated components are connected to the real robot (Pratt et al., 2001). Such components can include simple springs, dampers, dashpots, masses, latches, bearings, non-linear potential and dissipative fields, or any other imaginable component. This heuristic makes the design of the controller much easier: first, it is necessary to place some virtual components to maintain an upright posture and ensure stability. Virtual model control has been used to control dynamic walking bipedal robots (Pratt et al., 2001) and an agile 3D hexapod in simulation (Torres, 1996).
2.3 Biologically inspired methods

This approach uses CPGs, which are believed to play an important role in animal locomotion. By coupling the neural oscillator signals when they are stimulated through some input, the oscillators are able to synchronize their frequencies. In the field of artificial intelligence and robotics, it is possible to build structures that are similar to the neural oscillators found in animals through the definition of a mathematical model (Picado et al., 2008). The term central indicates that sensory feedback (from the peripheral nervous system) is not needed for generating the rhythms. In other words, the CPG has the ability to produce a periodic output from a non-periodic input signal. The CPG is a proven structure that exists in animals, and its task is to generate rhythmic movements almost independently of the central nervous system. Many CPG methods and different robotic applications have been proposed by different authors (Lee et al., 2007; Loeb, 2001; Nakada, 2003). However, most of the time these CPGs are designed for a specific application, there are very few methodologies to modulate the shape of the rhythmic signals, in particular for on-line trajectory generation, and the CPG internal parameters are usually tuned by trial and error.
2.4 Approaches analysis

Many different solutions have been tried to achieve stable robot locomotion, from trajectory based methods to biologically inspired approaches. Each method presents different advantages as well as disadvantages. Next, a summary of the features of each approach is presented under different requirements: robot knowledge, perturbation handling, design methodology, physical implementation and control scheme.

Robot knowledge: the major drawback of the trajectory based and heuristic based methods is that they both require an exact knowledge of the robot model. The CPG approach does not require an exact knowledge of the robot model.

Perturbation handling: the CPG recovers smoothly and quickly after a perturbation, the ZMP needs external on-line control to deal with perturbations, and the VMC is robust against small perturbations.

Design methodology: the ZMP and VMC have a well-defined methodology to prove stability and intuitive ways of describing the controller. In the CPG approach, the number of parameters needed to describe the CPG is usually large, and stability is not guaranteed as it is in trajectory and heuristic based methods.

Physical implementation: the ZMP is easy to implement in real robots. In the VMC, the control is fast and can be efficiently done on-line. Meanwhile, the CPG is able to generate new trajectories for various speeds without needing to be trained again.

Control scheme: in the ZMP and VMC, the control is centralized in a black box connected to the robot, but the CPG is distributed over the whole robot body, as a virtual spinal cord; this property allows the robot to continue with the gait generation even if some part of the control is missing.

The review shows that the CPG approach presents advantages over the ZMP and VMC approaches in the requirements of robot knowledge, perturbation handling, physical implementation and control scheme. For these reasons, CPG-based locomotion controllers for legged robots are increasingly used.


3. Locomotion modeling based on CPG


The activity of the locomotor organs (legs, wings, etc.) generates a propulsive force that leads to locomotion. Each part of a locomotor organ is rhythmically controlled by a CPG generating the rhythm. These neural networks, each controlling an individual organ, interact with the others so as to coordinate, in a distributed manner, the locomotor action of the animal as a whole. Moreover, they also adapt their activity to the surrounding environment through sensory feedback and higher-level signals from the central nervous system. CPGs are often modeled as networks of mutually coupled excitatory and inhibitory neurons, following regular interconnection structures. CPGs automatically generate complex control signals for the coordination of muscles during rhythmic movements, such as walking, running, swimming and flying (Delcomyn, 1980; Hooper, 2000; Kimura, Shimoyama & Miura, 2003). The locomotion patterns can usually be modulated by some parameters, which offers the possibility to smoothly modify the gait (e.g. increase frequency and/or amplitude) or even to induce gait transitions.

In CPG design, there are some common assumptions: the nonlinear oscillators are often assumed to be identical, the stepping movements of each limb are controlled by a single oscillator, and interlimb coordination is provided by the connections between the oscillators (Arena et al., 2004). Moreover, sensory inputs from the lower-level central nervous system and signals from the higher-level central nervous system can modulate the activity of CPGs. The lower-level signals are used for slow movements (low frequency response); higher-level signals are used to produce intelligent behavior and faster movements (higher frequency response). A comprehensive review of the use of CPGs in robotics can be found in (Ijspeert, 2008), and an interesting overview of the different oscillators utilized for robotics purposes is given in (Buchli et al., 2006).

The vertebrate locomotor system is organized hierarchically, where the CPGs are responsible for producing the basic rhythmic patterns and higher-level centers (the motor cortex, cerebellum, and basal ganglia) are responsible for modulating these patterns according to environmental conditions. Such a distributed organization presents several interesting features: (i) it reduces time delays in the motor control loop (rhythms are coordinated with mechanical movements using short feedback loops through the spinal cord); (ii) it dramatically reduces the dimensionality of the descending control signals, since the control signals in general do not need to specify muscle activity but only modulate CPG activity; and (iii) it therefore significantly reduces the necessary bandwidth between the higher-level centers and the spinal cord (Ijspeert, 2008; Loeb, 2001).

CPGs have been modeled with several levels of abstraction, for example biophysical models, connectionist models and abstract systems of coupled oscillators. In some cases, the CPG models have been coupled to a biomechanical simulation of a body, in which case they are called neuromechanical models. Detailed biophysical models are constructed based on the Hodgkin-Huxley type of neuron model, that is, neuron models that compute how ion pumps and ion channels influence membrane potentials and the generation of action potentials. In particular, the cells excite themselves via fast feedback signals while they inhibit themselves and other populations via slower feedback signals.
In some cases, the pacemaker properties of single neurons are investigated. While most models concentrate on the detailed dynamics of small circuits, some models address the dynamics of larger populations of neurons, for instance the generation of traveling waves in the complete lamprey swimming CPG (Arena, 2001). Connectionist models use simplified neuron models such as leaky-integrator neurons or integrate-and-fire neurons. These connectionist models have shown that connectivity alone is sufficient to produce an oscillatory output. Their focus is on how rhythmic activity is generated by network properties and how different oscillatory neural circuits get synchronized via interneuron connections (Ijspeert, 2008). Other extensively used oscillators include phase oscillators (Buchli & Ijspeert, 2004; Matsuoka, 1987). Most of these oscillators have a fixed waveform for a given frequency. In some cases, closed-form solutions or specific regimes, for example phase-locked regimes, can be analytically derived, but most systems are solved using numerical integration.

Several neuromechanical models have been developed. The addition of a biomechanical model of the body and its interaction with the environment offers the possibility to study the effect of sensory feedback on the CPG activity. Finally, oscillator models are based on mathematical models of coupled nonlinear oscillators to study population dynamics. In this case, an oscillator represents the activity of a complete oscillatory center (instead of a single neuron or a small circuit). The purpose of these models is not to explain rhythmogenesis (oscillatory mechanisms are assumed to exist) but to study how inter-oscillator couplings and differences of intrinsic frequencies affect the synchronization and the phase lags within a population of oscillatory centers. The motivation for this type of modeling comes from the fact that the dynamics of populations of oscillatory centers depend mainly on the type and topology of the couplings rather than on the local mechanisms of rhythm generation, something that is well established in dynamical systems theory (Ijspeert & Crespi, 2007).
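The following toy script illustrates this abstract coupled-oscillator view: four phase oscillators on a ring, coupled Kuramoto-style, lock to a phase lag set purely by the coupling term. It is a generic textbook construction under assumed parameter values, not one of the specific models cited above.

```python
# Four Kuramoto-style phase oscillators with identical intrinsic frequency
# settle into phase lags fixed by the coupling, independently of how the
# rhythm itself is generated. All parameter values are illustrative.
import math

n, dt, freq, k = 4, 0.01, 1.0, 2.0
desired_lag = 2 * math.pi / n                 # target lag between neighbours
theta = [0.1 * i for i in range(n)]           # arbitrary initial phases

for _ in range(20000):
    new = []
    for i in range(n):
        j = (i + 1) % n                       # ring coupling to next oscillator
        coupling = k * math.sin(theta[j] - theta[i] - desired_lag)
        new.append(theta[i] + dt * (2 * math.pi * freq + coupling))
    theta = new

lags = [(theta[(i + 1) % n] - theta[i]) % (2 * math.pi) for i in range(n)]
print([round(l, 2) for l in lags])            # each lag -> 2*pi/4 = 1.57
```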

4. CPG-based locomotion design for quadruped robots


The locomotion control research field has been quite active and has produced different approaches for legged robots, but the ability of robots to walk in an unknown environment is still much reduced at present. Experiments and research work have addressed the feasibility of designing locomotion control systems for legged robots by taking inspiration from locomotion mechanisms in humans and animals. Legged locomotion is performed in a rhythmic, synchronized manner in which a large number of degrees of freedom is involved so as to produce well-coordinated movements.

As described previously, one of the main drawbacks of CPGs is that there are few learning methodologies to generate the rhythmic signals, and their parameters are usually tuned by trial and error. Learning and optimization algorithms can be used in different ways. The approaches can be divided into two categories: supervised learning and unsupervised learning. Supervised learning techniques can be applied when the desired rhythmic pattern that the CPG should produce is known. The desired pattern can then be used to define an explicit error function to be minimized. Such techniques can sometimes be used for designing CPGs, but they are restricted to situations where suitable patterns are available (e.g. when they are obtained from kinematic measurements of animals). Some examples of these techniques are learning of vector fields (Okada et al., 2002), gradient descent learning algorithms (Pribe et al., n.d.) and statistical learning algorithms (Nakanishi et al., 2004). Unsupervised learning techniques are used when the desired behavior of the CPG is not defined by a specific desired pattern (as in supervised learning), but by a high-level performance criterion, for instance moving as fast as possible. Among unsupervised learning techniques, stochastic population-based optimization algorithms such as evolutionary algorithms have been used extensively to design CPG-like models (Ijspeert, 2008). The parameters that are optimized are usually the synaptic weights in fixed neural network architectures and the coupling weights in systems of coupled oscillators.


Fig. 1. Configurations of typical gait patterns in quadruped locomotion and their relative phases among the limbs.
4.1 Van Der Pol oscillator model

There are several models of neural oscillators that can serve as the basic CPG cell to control a limb, such as the Amari-Hopfield model (Amari, 1988), the Matsuoka model (Billard & Ijspeert, 2000) and the Van Der Pol model (Van Der Pol B, 1928). For this work, the basic cell is modeled by a Van Der Pol (VDP) oscillator, which is a relaxation oscillator governed by a second-order differential equation (equation 1):

ẍ − α(p² − x²)ẋ + ω²x = 0                                        (1)

where x is the output signal from the oscillator and α, p and ω are the parameters that tune the properties of the oscillator. In general, α affects the shape of the waveform, while the amplitude of x depends largely on the parameter p. When the amplitude parameter p is fixed, the output frequency is highly dependent on the parameter ω. However, a variation of parameter p can slightly change the frequency of the signal, and α can also influence the output frequency.
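A quick way to get a feel for these parameters is to integrate Eq. 1 numerically. The sketch below uses a plain Euler scheme with illustrative parameter values (they are not the tuned values used later in the chapter); the steady-state amplitude comes out near 2p, consistent with p acting as the amplitude parameter.

```python
# Euler-integrated Van Der Pol oscillator, Eq. 1, with illustrative values.
import math

alpha, p, omega = 1.0, 1.0, 2 * math.pi   # shape, amplitude, frequency knobs
x, y = 0.5, 0.0                           # y stands for dx/dt
dt = 1.0 / 64                             # same integration factor as Sec. 7

samples = []
for _ in range(int(20 / dt)):             # simulate 20 seconds
    dy = alpha * (p**2 - x**2) * y - omega**2 * x   # rearranged Eq. 1
    y += dt * dy
    x += dt * y
    samples.append(x)

print(round(max(samples[-128:]), 2))      # steady-state amplitude, close to 2*p
```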
4.2 Quadruped gaits

A simplified mathematical model of CPG-based locomotion consists of using one cell per limb and replacing each cell by a nonlinear oscillator. Thus, quadruped gaits are modeled by coupling four nonlinear oscillators, and by changing the connections among them it is possible to reproduce rhythmic locomotion patterns. In the rhythmic movements of animals, transitions between these movements are often observed. As a typical example, horses choose different locomotion patterns in accordance with their needs, locomotion speeds or the rate of energy consumption. In addition, each gait pattern is characterized by the relative phases among the limbs (Dagg, 1973). Figure 1 shows the typical horse gait pattern configurations and their relative phases between the limbs. Here, LF, LH, RF and RH stand for left forelimb, left hindlimb, right forelimb and right hindlimb. In the rest of this work, LF, RF, RH and LH will be referred to as X1, X2, X3 and X4, respectively.

The mutual interaction among the VDP oscillators in the network produces a gait. By changing the coupling weights, it is possible to generate different gaits. The dynamics of the ith coupled oscillator in the network is given by:

ẍᵢ − α(p² − x²ₐᵢ)ẋᵢ + ω²xₐᵢ = 0                                   (2)

for i = 1, 2, 3, 4, where xᵢ is the output signal from oscillator i and xₐᵢ denotes the coupling contribution of its neighbors, given by equation 3:

xₐᵢ = Σⱼ wᵢⱼ xⱼ                                                   (3)


where wᵢⱼ is the coupling weight that represents the strength of the ith oscillator over the jth oscillator. The generation of the respective gaits depends on the values of the oscillator parameters and the connection weights among the oscillators. The analysis of the behavior reported in (Barron-Zambrano & Torres-Huitzil, 2011) shows that the sign of the coupling weight determines the phase difference. For inhibitory connections, the phase lag is around 360/n degrees for particular values of wᵢⱼ, where n is the number of coupled oscillators. For excitatory connections, the phase difference is equal to zero. Finally, the matrix built from the connection weights, wᵢⱼ, should be symmetric and regular.
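The sketch below integrates this four-oscillator network with Euler steps. The weight matrix is only an illustrative inhibitory sign pattern, and each oscillator's own state is added to the coupling sum so the units keep oscillating; both choices are assumptions of this sketch, since the chapter obtains the actual weights and initial conditions with the GA of section 5.

```python
# Four coupled Van Der Pol oscillators, Eqs. 2-3, integrated with Euler.
import math

alpha, p, omega, dt = 1.0, 1.0, 2 * math.pi, 1.0 / 64
w = 0.3
W = [[0, -w, 0, -w],          # illustrative inhibitory coupling weights w_ij
     [-w, 0, -w, 0],
     [0, -w, 0, -w],
     [-w, 0, -w, 0]]
x = [0.3, -0.1, 0.2, -0.2]    # distinct initial conditions break symmetry
y = [0.0] * 4                 # y[i] stands for dx_i/dt

for _ in range(int(60 / dt)):
    # Coupling term of Eq. 3; adding the oscillator's own state x[i] is an
    # assumption of this sketch so each unit keeps oscillating.
    xa = [x[i] + sum(W[i][j] * x[j] for j in range(4)) for i in range(4)]
    for i in range(4):
        dy = alpha * (p**2 - xa[i]**2) * y[i] - omega**2 * xa[i]   # Eq. 2
        y[i] += dt * dy
        x[i] += dt * y[i]

print([round(v, 2) for v in x])   # the four limb signals after 60 s
```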

5. GA-based approach for gait learning


To efficiently search for CPG parameters, a genetic algorithm for oscillator parameter estimation was used. A genetic algorithm is a powerful optimization approach that can work in large-dimensional, nonlinear and often largely unknown parameter spaces. In the simplest form of gait evolution, functional forward locomotion is the only goal and no sensor inputs are used. Gait learning is a form of locomotion learning, but it can be considered a somewhat more difficult problem in legged robots. The search space is represented by the genome defining the controller representation (or controller and morphology representation). Each evolvable parameter of the genome defines a dimension of the search space, and the fitness landscape is then given by the manifold defined by the fitness of each point in the search space.

The tuning method is divided into two stages, and the robot controller is represented by a string of real numbers that describe the oscillators and their connectivity. The first stage estimates the oscillator parameters and the second calculates the values of wᵢⱼ that produce the locomotion pattern. The two stages work sequentially. The first stage estimates the parameters [α, p, ω] that generate a wave with a specific frequency and amplitude. Here, the oscillator parameters correspond to the gene and represent the organization of an individual n in a population N. The individual n is represented by the concatenation of parameters in the order shown in equation 4, where each parameter is represented by a 32-bit real number:

n = [α p ω]                                                      (4)

Each individual n is evaluated according to the following fitness function, which is minimized by the GA:

fitnessₚ = abs((A_d − Aₚ)(f_d − DFT(S_o)))                        (5)

where A_d is the desired amplitude of the signal, f_d is the desired frequency, Aₚ is the amplitude of the generated pattern, S_o is the pattern generated by the oscillator with the individual n, and DFT is the Discrete Fourier Transform. The simulation of S_o is run for around 50 seconds for each individual n, and only the last 5 seconds are considered for the DFT computation. This ensures a stable signal, in frequency and amplitude, and reduces the amount of data to be processed. Figure 2 shows a block diagram of this stage.

The GA ranks the individuals according to the fitness function. As a result of the ranking, the best individuals remain in the next generation without modification. For this stage, 2 individuals remain without modification and go to the next generation. Using the crossover operation, other individuals in the next generation are created by combining two individuals. Furthermore, the remaining individuals are randomly modified by mutation. The probabilities of crossover and mutation are set to 0.8 and 0.2, respectively.

Fig. 2. Block diagram of the first stage. It estimates the oscillator parameter values, [α, p, ω], to produce a signal with a specific frequency and amplitude.

The second stage performs the synchronization among the oscillator outputs. The generation of different gaits needs patterns with specific phases among the output signals, see figure 1. This stage computes the value of wᵢⱼ. To produce a desired phase it is not enough to set only the connection weights; it is necessary to calculate the initial conditions of each oscillator, x₁..₄, too. For that purpose, in the second stage each individual n has five values in order to generate a specific phase in the network. The values in the gene are organized from right to left: the first four values are the initial values for each oscillator, x₁..₄, and the fifth value codes the connection weight, w. Equation 6 shows the representation of n in this stage:

n = [w x₄ x₃ x₂ x₁]                                              (6)

where the initial values and the connection weight are represented by real numbers. The GA only needs to estimate one value of w for one gait, and by changing its algebraic sign it is possible to generate the rest of the gaits. The fitness function is based on the phase among the patterns. To estimate the phase among the signals, correlation was used. The oscillator labeled LF (X1) is the reference signal, with phase equal to zero, and the other signals are shifted with respect to this signal. The fitness function used to evaluate the individual is given by equation 7; here, the GA minimizes the value of the fitness function:

fitness = abs(90 − phase(x₁, x₃)) + abs(90 − phase(x₃, x₂)) + abs(90 − phase(x₂, x₄))     (7)

where phase(xᵢ, xⱼ) is the phase between the ith and jth oscillators. The simulation of x₁..₄ is run for around 50 seconds for each n, and only the last 10 seconds are considered for the phase estimation. Figure 3 shows a block diagram of the second stage.

Fig. 3. Block diagram of the second stage. It finds the values of the coupling weights wᵢⱼ and the initial values of x₁..₄.
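A minimal version of the phase measurement used in Eq. 7 is sketched below: the lag maximizing the cross-correlation between two signals is converted to degrees. Pure sine waves stand in for the oscillator outputs, so the numbers are illustrative rather than taken from the GA runs.

```python
# Phase between two periodic signals via cross-correlation (as in Eq. 7).
import numpy as np

fs, f = 200.0, 1.0                        # sample rate (Hz), signal frequency
t = np.arange(0, 10, 1 / fs)
x1 = np.sin(2 * np.pi * f * t)            # reference limb, phase 0
x3 = np.sin(2 * np.pi * f * t - np.pi/2)  # limb lagging by 90 degrees

def phase_deg(a, b):
    """Phase of b relative to a, in degrees, via cross-correlation."""
    corr = np.correlate(b, a, mode="full")
    lag = corr.argmax() - (len(a) - 1)    # samples by which b trails a
    return (lag / fs) * f * 360 % 360

print(round(phase_deg(x1, x3)))           # ~90, so abs(90 - phase) ~ 0 in Eq. 7
```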

6. Related work of hardware implementations


Nowadays, many works have aimed to develop robust legged locomotion controllers (Fujii et al., 2002; Fukuoka et al., 2003; Susanne & Tilden, 1998). The design of CPG-based control systems has been difficult, given that the simulation of even rather simple neural network models requires a computational power that exceeds the capacity of general processors. For that reason, CPG dedicated hardware implementations, both analog and digital, have received more attention (Nakada, 2003). On one hand, CPGs have been implemented using microprocessors, providing high accuracy and flexibility, but those systems consume high power and occupy a large area, restricting their utility in embedded applications. On the other hand, analog circuits have already been proposed, being computation and power efficient, but they usually lack flexibility and dynamics and they involve large design cycles.

Relatively few works have focused on adapting the hardware technology to fully practical embedded implementations. Examples of this adaptation using analog circuits are presented by (Kier et al., 2006; Lee et al., 2007; Lewis et al., 2001; Nakada, 2003). In the first one, Nakada et al. present a neuromorphic analog CMOS controller for interlimb coordination in quadruped locomotion. The authors propose a CPG controller with analog CMOS circuits, capable of producing various rhythmic patterns and changing these patterns promptly. Lewis et al. constructed an adaptive CPG in an analog VLSI (Very Large Scale Integration) chip, and have used the chip to control a running robot leg. They show that adaptation based on sensory feedback permits a stable gait even in an under-actuated condition: the leg can be driven using a hip actuator alone while the knee is purely passive. Their chip has short-term on-board memory devices that allow the continuous, real-time adaptation of both center-of-stride and stride amplitude. In addition, they make use of integrate-and-fire neurons for the output motor neurons. Finally, the authors report that their abstraction is at a higher level than other reported work, which lends itself to easier implementation of on-chip learning. Kier et al. present a new implementation of an artificial CPG that can be switched between multiple output patterns via brief transient inputs. The CPG is based on continuous-time recurrent neural networks (CTRNNs) that contain multiple embedded limit cycles. The authors designed and tested a four-neuron CPG chip in AMI's 1.5 μm CMOS process, where each neuron on the chip implements the CTRNN model and is fully programmable. The authors report that the measured results from the chip agree well with simulation results, making it possible to develop multi-pattern CPGs using off-line simulations without being concerned with implementation details. Lee et al. report a feasibility study of a central pattern generator-based analog controller for an autonomous robot. The operation of a neuronal circuit formed of electronic neurons based on Hindmarsh-Rose neuron dynamics and first-order chemical synapses is modeled. The controller is based on a standard CMOS process with a 2 V supply voltage. Simulated results show that the CPG circuit with coordinate controller and command neuron is viable for building adaptive analog controllers for autonomous biomimetic underwater robots. Finally, a biologically inspired swimming machine is presented by Arena in (Arena, 2001). The system used to generate locomotion is an analog array of nonlinear systems known as Cellular Neural Networks (CNN). The CNN is defined as a two-dimensional array of M × N identical cells arranged in a rectangular grid, where each cell is defined as a nonlinear first-order circuit. The author reports that the application to a service robot further underlines that CNNs can be used as powerful devices to implement biologically inspired locomotion and swimming.

Another solution is to implement the CPG using FPGAs as the hardware platform. The CPG is programmed in either VHDL or Verilog and downloaded onto the FPGA. This approach provides hardware efficiency with software flexibility. In (Torres-Huitzil, 2008), Torres and Girau present an implementation of a CPG module based on the Amari-Hopfield structure in pure FPGA hardware. In their work, they have chosen to attach the CPG implementation to a Microblaze soft-core processor running uClinux. By doing this, they achieve computation speed by having the CPG run its mathematical operations in the FPGA hardware, making it possible to achieve some degree of parallelization in the calculations, while the soft-core CPU is left free to handle other high-level control algorithms that can change the control parameters of the CPG implementation. Barron et al. in (Barron-Zambrano et al., 2010b) present an FPGA implementation of a CPG-based controller to generate adaptive gait patterns for quadruped robots. The proposed implementation is based on a specific digital module for CPGs attached to a soft-core processor so as to provide an integrated and flexible embedded system. Experimental results show that the proposed implementation is able to generate suitable gait patterns, such as walking, trotting and galloping.

Other CPG implementations use microprocessors. This gives the designer more freedom, because the design is not constrained by the difficulty of designing, e.g., differential analog circuits. These implementations use a scheme where the control is distributed through the robot body. A first example is presented by (Inagaki et al., 2006). In that work, Inagaki et al. proposed a method for gait generation and walking speed control for an autonomous decentralized multi-legged robot using CPGs. The robot locomotion control is composed of subsystems. Each subsystem controls a leg and has a microcomputer. The subsystems are connected mechanically and communicate with their neighbors. Each microcomputer calculates the dynamics of two oscillators, and sends and receives the dynamic results from neighboring subsystems. The gait generation and the walking speed control are achieved by controlling the virtual energy of the oscillators (Hamiltonian). A real robot experiment showed the relationship between the Hamiltonian, the actual energy consumption and the walking speed, and the effectiveness of the proposed method was verified. The proposed controller can be generalized as a wave pattern controller, especially for robots that have homogeneous components. In (Crespi & Ijspeert, 2006), Crespi and Ijspeert proposed an amphibious snake robot designed for both serpentine locomotion (crawling) and swimming. It is controlled by an on-board CPG inspired by those found in vertebrates. The AmphiBot II robot is designed to be modular: it is constructed out of several identical segments, named elements. Each element contains three printed circuits (a power board, a proportional-derivative motor controller and a small water detector) connected with a flat cable, a DC motor with an integrated incremental encoder, and a set of gears. The elements are connected (both mechanically and electrically) using a compliant connection piece fixed to the output axis. The main contribution of this article is a detailed characterization of how the CPG parameters (i.e., amplitude, frequency and wavelength) influence the locomotion speed of the robot.


The related works present control schemes for robot locomotion based on biological systems on different implementation platforms. The schemes compute the walking pattern in a parallel way, through either specialized systems or coupled subsystems. CPG control can be implemented with different approaches (analog circuits, reconfigurable hardware and digital processors). Analog circuit implementations present a good trade-off between performance and energy consumption, but they have a large design cycle; for that reason, these approaches are useful in robots with either limited energy storage or data processing with real-time constraints. With the second approach it is possible to implement CPG control under real-time constraints while sacrificing some energy performance; these implementations have a smaller design cycle than analog circuit implementations, making the approach well suited to the early robot design stages. The last approach presents high flexibility because it is not constrained by the difficulty of hardware design. Its design cycle is the shortest, but it has the worst energy performance; furthermore, such systems occupy a large area, restricting their utility in embedded applications.

The analyzed implementations show several common features. First, they are reconfigurable. Second, they are capable of adapting to different environments. Most of the implementations present a scheme where the control is distributed through the robot body, or they are implemented through the interaction of basic elements. Similar features are seen in animal locomotion.

7. CPG hardware implementation for a quadruped robot


In this section, we describe the architecture of the CPG controller for interlimb coordination in quadruped locomotion. First, the design considerations for the implementation are presented. Next, the basic Van Der Pol oscillator that constitutes a part of the CPG network is given. Finally, the architecture of the complete system is described.
7.1 Design considerations

The Van Der Pol oscillator is suitable for CPG implementation as a digital circuit, but two main factors should be taken into account for an efficient and flexible FPGA-based implementation: (a) arithmetic representation: CPG computations, when implemented in general microprocessor-based systems, use floating-point arithmetic. An approach for embedded implementations is the use of 2's complement fixed-point representation with a dedicated wordlength that better matches the FPGA computational resources and saves silicon area at the cost of precision; and (b) efficiency and flexibility: embedded hard processor cores or configurable soft processors developed by FPGA vendors add the software programmability of optimized processors to the fine-grain parallelism of custom logic on a single chip (Torres-Huitzil, 2008). In the field of neural processing, several applications mix real-time or low-power constraints with a need for flexibility, so FPGAs appear as a well-fitted implementation solution. Most of the previous hardware implementations of CPGs are capable of generating sustained oscillations similar to the biological CPGs; however, few have addressed the problem of embedding several gaits and performing transitions between them. One important design consideration in this work is that the FPGA-based implementation should be a platform well suited to explore reconfigurable behavior and dynamics, i.e., a platform that can be switched between multiple output patterns through the application of external inputs.
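To make the representation trade-off concrete, the following Matlab sketch emulates a 2's complement fixed-point quantizer in software. It is an illustrative model only, not the chapter's VHDL; the function name and the wordlength parameters WL and FL are ours.

% Illustrative sketch: software model of a two's complement fixed-point
% number with WL total bits and FL fractional bits, used to preview the
% precision/silicon-area trade-off discussed above.
function xq = to_fixed(x, WL, FL)
    scale = 2^FL;
    xmax  =  (2^(WL-1) - 1) / scale;   % largest representable value
    xmin  = -(2^(WL-1))     / scale;   % most negative value
    xq = max(min(round(x*scale)/scale, xmax), xmin);  % quantize and saturate
end

For instance, to_fixed(pi, 18, 7) rounds pi to the nearest multiple of 1/128, the fractional resolution adopted later in Section 7.2.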


Fig. 4. (a) Digital hardware architecture for the Van Der Pol oscillator and (b) average error as a function of the bit precision used in the basic blocks.
7.2 Module of Van Der Pol oscillator

From the analysis of the Van Der Pol equation, Eq. 2, three basic operations were identified: addition, subtraction and multiplication. Thus, one block for each operation was implemented with 2's complement fixed-point arithmetic representation. Figure 4(a) shows a block diagram of the hardware implementation of the discretized VDP equation. In the first stage, the value of x_ai (equation 3) is calculated; this value depends on the xi-neighbors and the coupling weight values. This stage uses four multipliers and one adder. The square values of p, x_ai and ω are calculated in the second stage, which uses three multipliers. In the third stage, the values of μ·yi and (p² − x_ai²) are calculated; one multiplier and a subtracter are used. The fourth stage computes the values of μ·yi·(p² − x_ai²) and ω²·x_ai. This stage uses two multipliers. For the integration stage, the numerical method of Euler was implemented by using two shift registers and two adders. The integration factor is implemented by a shift register, which shifts the values of ẏi and ẋi six positions to provide an integration factor of 1/64. The block labeled Reg stands for the accumulators that hold the internal state of the VDP oscillators. Finally, the values yi and xi are obtained. The word size for each block was an 18-bit fixed-point representation with 11 bits for the integer part and 7 bits for the fractional part. Figure 4(b) shows the amplitude average error using different precisions for the fractional part. The errors were obtained from the hardware implementation. The plot shows that the average error decreases as the resolution of the input variables is increased. This reduction is not linear, and the graphic shows a point beyond which the reduction is no longer significant. Seven bits were chosen as a good compromise between the fractional part representation and its average error.
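As a cross-check of the pipeline just described, the following Matlab sketch reproduces one discretized update in software. It assumes the coupled Van Der Pol form ẍi = μ(p² − x_ai²)ẋi − ω²·x_ai, with x_ai the weighted combination of neighbor states (Eq. 3); that reading of the stages of Fig. 4(a), and the function and variable names, are ours rather than the chapter's.

% Software model of one Euler step of the discretized VDP pipeline.
% x, y are column vectors of oscillator states, W is the coupling matrix,
% h = 1/64 is the integration factor implemented in hardware as a 6-bit shift.
function [x, y] = vdp_step(x, y, W, mu, p, w, h)
    xa   = W * x;                                 % stage 1: coupled states x_ai
    dy   = mu .* (p^2 - xa.^2) .* y - w^2 .* xa;  % stages 2-4 of the pipeline
    xnew = x + h * y;                             % integration stage (Euler)
    y    = y + h * dy;
    x    = xnew;
end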
7.3 Quadruped gait network architecture

In the CPG model for quadruped locomotion all basic VDP oscillators are interconnected through the connection weights (wij). In order to overcome the partial lack of flexibility of the CPG digital architecture, it has been attached as a specialized coprocessor to a microblaze processor, following an embedded system design approach, so as to provide a high-level interface layer for application development. A bank of registers is used to provide communication channels to the embedded processor. The bank has twenty-three registers (the three oscillator parameters, the sixteen weights of the 4×4 coupling matrix and the four initial conditions), and it receives the input parameters from the microblaze: μ, p², ω², wij and the initial values of each oscillator. The architecture sends output data to specific FPGA pins.


Figure 5 shows a simplified block diagram of the VDP network interfacing scheme to the bank of registers and the microblaze processor.

Fig. 5. Complete architecture for the embedded implementation of a CPG-based quadruped locomotion controller.
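A minimal software counterpart of the Fig. 5 system is sketched below, using the vdp_step model from Section 7.2: the quantities written to the register bank (μ, p², ω², the weights wij and the initial conditions) drive the network, and the resulting xi signals play the role of the joint set-points sent to the FPGA pins. The numeric values here are placeholders, not the tuned parameters of Section 8.

% Hypothetical driver for the VDP network model (all values are placeholders).
W  = eye(4);                  % 4x4 coupling matrix, self-connections = 1.0
mu = 1.0;  p = 1.0;  w = 4.0; % oscillator parameters (assumed)
x  = [0.1; 0.2; -0.1; -0.2];  % initial conditions of x_1..4 (assumed)
y  = zeros(4, 1);
h  = 1/64;  n = round(10/h);  % 10 s of simulated locomotion
X  = zeros(4, n);
for k = 1:n
    [x, y] = vdp_step(x, y, W, mu, p, w, h);
    X(:, k) = x;              % oscillator outputs = joint set-points
end
plot((1:n)*h, X');  xlabel('Time (s)');  ylabel('Amplitude');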

8. Experimental results
8.1 GA-based method results

The proposed GA-based method was implemented in Matlab and its results were tested through the hardware implementation reported in (Barron-Zambrano et al., 2010b). For the implementation of the GA, the specialized Matlab toolbox was used. The parameters used to estimate the variables of the oscillator were: a population of 30 individuals and 200 generations. The probabilities of crossover and mutation were set to 0.8 and 0.2, respectively. The method has an average error of 0.03 Hz with respect to the desired frequency and of 0.15 with respect to the desired amplitude. Simulations show the state of the parameters at different generations: the first plot shows the simulation at the 2nd generation, where the generated signal is already periodic but has low values of frequency and amplitude with respect to the desired ones. The second plot shows a signal with periodic behavior (7th generation) but with a frequency different from the desired frequency. Finally, the last plot shows the pattern with the desired frequency, 1 Hz, and amplitude, 2.5. The final parameters [μ, p, ω] estimated by the method are [0.705, 0.956, 4.531]. Figure 6 shows the behavior of the fitness values at each generation for the oscillator parameters, plot 6(a), and for the network weights and initial conditions of x1..4, plot 6(b). In both cases, the fitness function values decrease suddenly in the early stages of tuning. Also, the plots show that the method needs a reduced number of generations to tune the oscillator and the network. In a second step, the estimation of wij and the initial values of x1..4 is computed. The parameters used for this step were: a population of 40 individuals and 300 generations. The probabilities of crossover and mutation were set to 0.5 and 0.2, respectively. In the test, the desired pattern is a walk gait. This pattern has a phase of 90 degrees among the signals in a fixed order, X1 X3 X2 X4. Only the walk matrix was estimated by the method. It is possible to generate the trot and gallop gaits with the walk matrix by only changing the initial conditions of x1..4, but it is impossible to switch from one gait to another once the gait is stable.
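For illustration, a self-contained GA of the kind described above can be written in a few lines of Matlab. This is a plain re-implementation with our own operators (elitist selection, blend crossover, Gaussian mutation) and assumed search ranges, not the toolbox configuration used by the authors; only the population size, generation count and the pc = 0.8, pm = 0.2 rates come from the text.

% Minimal GA sketch for the first tuning stage: find [mu, p, w] so that a
% single oscillator matches a desired frequency f_d and amplitude A_d.
f_d = 1.0;  A_d = 2.5;                       % targets from the text
P = rand(30, 3) .* [2 2 6];                  % population in assumed ranges
for gen = 1:200
    cost = zeros(30, 1);
    for i = 1:30, cost(i) = gait_error(P(i,:), f_d, A_d); end
    [~, idx] = sort(cost);  P = P(idx, :);   % rank: best individual first
    for i = 11:30                            % keep 10 elites, refill the rest
        a = P(randi(10), :);  b = P(randi(10), :);  c = a;
        if rand < 0.8, r = rand; c = r*a + (1-r)*b; end  % crossover, pc = 0.8
        if rand < 0.2, c = c + 0.1*randn(1, 3); end      % mutation,  pm = 0.2
        P(i, :) = c;
    end
end
best = P(1, :);                              % tuned [mu, p, w]

function e = gait_error(th, f_d, A_d)
    % Simulate one uncoupled oscillator and score frequency/amplitude error.
    mu = th(1);  p = th(2);  w = th(3);  h = 1/64;
    x = 0.5;  y = 0;  n = round(20/h);  xs = zeros(1, n);
    for k = 1:n
        xn = x + h*y;
        y  = y + h*(mu*(p^2 - x^2)*y - w^2*x);
        x  = xn;  xs(k) = x;
    end
    s = xs(end-round(10/h)+1:end);           % drop the transient
    A = (max(s) - min(s)) / 2;               % amplitude estimate
    f = sum(diff(sign(s)) > 0) / 10;         % rising zero crossings per second
    e = abs(f - f_d)/f_d + abs(A - A_d)/A_d;
end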


Fig. 6. (a) Fitness convergence plots for the first stage (oscillator parameters) and (b) for the second stage (network weights and initial conditions of x1..4).
Table 1. Weight matrices to configure the CPG network for the three basic gaits (walk, trot and gallop). Each 4×4 matrix has 1.0 on the diagonal and couplings of magnitude 0.26 off-diagonal; the gaits differ in the algebraic signs of the off-diagonal entries, as described below.

Then, the remaining matrices, trot and gallop, were calculated using the behavior analysis presented in (Barron-Zambrano & Torres-Huitzil, 2011). The analysis shows that the combination of inhibitory and excitatory connections is able to modify the phase among the oscillators; thus, it is possible to generate different gaits from the walk matrix. To change the connections from inhibitory to excitatory type, and vice versa, it is only necessary to change the algebraic sign of the values in the gait matrix. In the trot gait, the front limbs are in reverse phase and the limbs on a diagonal line move synchronously. Hence, to generate the trot gait it is necessary to change the algebraic sign in the weight matrix for the limbs on a diagonal line. The gallop has similarities with the walk gait: the phase is also equal to 90 degrees, but the order is different, X1 X2 X3 X4; therefore, the design of a new matrix for this gait is necessary. The gallop matrix is obtained by changing the algebraic sign of selected off-diagonal connections, which produces a phase equal to 90 degrees among the limbs. Table 1 shows the weight matrices for the three basic gaits, walk, trot and gallop, with parameters μ, p and ω equal to [1.12, 1.05, 4.59] and initial conditions x1..4 = [2.44, 4.28, 1.84, 2.71]. Figure 7 shows the evolution of the network weights and the initial conditions of x. Plot 7(a) shows the pattern generated when the second stage starts: a synchronized pattern where all signals have the same phase. The evolution of the population shows how the signals start to take different phases, figure 7(b). Figure 7(c) shows that the method found the correct weights and initial values of x1..4 to generate the walk gait pattern. Finally, figures 7(d) and 7(e) show the patterns generated by the trot and gallop gaits with matrices estimated from the walk gait matrix. A small helper for these sign manipulations is sketched below.
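The sign manipulation described above is easy to express in code. The Matlab helper below negates the couplings of chosen limb pairs in a copy of the walk matrix; which pairs to flip for each gait follows the rules in the text, and the example pair list is our reading of them, not a value from the chapter.

% Derive a new gait matrix by flipping the inhibitory/excitatory signs of
% the couplings between the listed limb pairs (limbs numbered X1..X4).
function W2 = flip_pairs(W, pairs)
    W2 = W;
    for k = 1:size(pairs, 1)
        i = pairs(k, 1);  j = pairs(k, 2);
        W2(i, j) = -W2(i, j);              % inhibitory <-> excitatory
        W2(j, i) = -W2(j, i);
    end
end
% e.g. W_trot = flip_pairs(W_walk, [1 4; 2 3]);  % hypothetical diagonal pairs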

Fig. 7. (a)-(c) Evolution of the pattern, at different generations, generated for the walk gait. (d)-(e) Patterns generated by the trot and gallop gaits with matrices estimated from the walk gait matrix.
8.2 Physical implementation

The CPG digital architecture has been modeled using the Very High Speed Integrated Circuits Hardware Description Language (VHDL), and synthesized using the ISE Foundation and EDK tools from Xilinx, targeted to a Virtex-4 FPGA device. In the hardware implementation test, a C-based application was developed on the microblaze embedded processor to set the values of the parameters in the CPG module. The implementation was validated in two ways. First, the results were sent to the host computer through a serial connection to visualize the waveforms generated by the module; the hardware waveforms were then compared with the software waveforms. Second, the results were sent to a digital-analog converter (DAC) and the output signal from the DAC was visualized on an oscilloscope. Figure 8 shows the periodic rhythmic patterns corresponding to the gaits (walk, trot, gallop) generated by the hardware implementation. The values of the weight matrices used to configure the CPG network are shown in Table 1. The initial values x1..4, the parameter values [μ, p, ω] and the weight values wij were calculated automatically with the GA implementation. The last CPG-hardware test was done through the Webots robot simulator software. In this simulation a network with 8 VDP modules was used. Figure 9 shows the walking locomotion pattern, in the bioloid robot simulator, and the periodic oscillations for the different joints, limbs and knees, produced by the hardware implementation. Figures 9(a)-9(h) show the specific bioloid joint positions corresponding to the time labels shown in the walk time graph. Figures 9(i)-9(k) show the phase relationship between the knee and limb joints in the right forelimb and hindlimb during walking. Finally, figure 9(l) presents a time graph showing the transition between walking and trotting. Locomotion patterns for trot and gallop, and the transitions between them, were also tested.

Fig. 8. Three basic gaits for quadruped locomotion: (a) walk, (b) trot, (c) gallop.

9. Future work
In general, the discussion of future work focuses on three goals: (a) to scale up the present approach to legged robots with several degrees of freedom in order to generate complex rhythmic movements and behaviors, (b) to integrate visual perception information to adapt the locomotion control to unknown environments, and (c) to incorporate feedback from the robot body to improve the generation of patterns.
9.1 Network scalability

Nowadays, in robotics, decentralization of control is an approach increasingly used at the design stage. In this approach, the robot control consists of a group of modules where each module processes information locally; the information of each module is then sent to neighboring modules to produce a global result. One of the remarkable features of modular robot control is its scalability. This ability is important to achieve the best configuration required for a given task; it is also useful in self-repair, by separating faulty modules and replacing them with other modules (Murata & Kurokawa, 2007). Self-organization and distributed intelligence, characterized by distribution of processing, decentralization of control and sensing tasks, and modularization of components, have become essential control paradigms (Mutambara & Durrant-Whyte, 1994). In the hardware architecture context, scalability can be defined as the ability to increase the architecture throughput, without a complete re-design, by using additional hardware, in order to meet a user need. In other words, the architecture must be able to adapt to robots with more degrees of freedom only by adding the necessary modules. A first example of hardware scalability is presented in (Barron-Zambrano et al., 2010a). In that work, a CPG controlling a quadruped robot with 1 DOF per limb was scaled to meet the necessity of controlling a quadruped robot with two degrees of freedom per limb; in this case, the locomotion control system scales to a network with eight VDP oscillators, as suggested in most works reported in the literature (Fujii et al., 2002; Kimura, Fukuoka, Hada & Takase, 2003).
9.2 Visual perception information

Locomotion and perception have been treated as separate problems in the field of robotics. Under this paradigm, one first solves the vision problem of recovering the three-dimensional geometry of the scene. This information is then passed to a planning system that has access to an explicit model of the robot (Lewis & Simo, 1999). This solution is computationally intense and too slow for real-time control using moderate-power CPUs.


Fig. 9. (a)-(h) Walking locomotion pattern snapshots for the bioloid robot simulator. (i) Walk time graph for the different joints. (j)-(k) Oscillation patterns for the right forelimb, hindlimb and knees. (l) Waveforms during the transition between walking and trotting.


Furthermore, this approach does not exploit the fact that the walking machine will be presented with a similar situation again and again. Recent approaches consider eliminating the intermediate explicit model and creating a direct coupling of perception to action, with the mapping being adaptive and based on experience. Also, continuous visual input is not necessary for accurate stepping. Not all visual samples have the same potential to control limb movements: samples taken when the foot to be controlled is in stance are by far more effective in modulating gait. It has been suggested that during stepping visual information is used during the stance phase in a feed-forward manner to plan and initiate changes in the swing limb trajectory (Hollands & Marple-Horvat, 1996; Patla et al., 1996). Taken together, this may indicate that gait is modulated at discrete intervals, and that this modulation may be a program that depends on a brief sampling of the visual environment to instantiate it (cf. Patla et al., 1991). This hypothesis is intriguing because it implies that after a brief sample it is not necessary to store an internal representation of the world that needs to be shifted and updated during movement. This shifting and updating is problematic for both neural and traditional robotics models (Lewis & Simo, 1999).
9.3 Feedback from the robot body

Studies of mechanisms of adaptive behavior generally focus on neurons and circuits. But adaptive behavior also depends on interactions among the nervous system, body and environment: sensory pre-processing and motor post-processing filter inputs to and outputs from the nervous system; co-evolution and co-development of nervous system and periphery create matching and complementarity between them; body structure creates constraints and opportunities for neural control; and continuous feedback between nervous system, body and environment is essential for normal behavior. This broader view of adaptive behavior has been a major underpinning of ecological psychology and has influenced behavior-based robotics (Chiel & Beer, 1997). Recent work in the field of autonomous robotics has emphasized that intelligent behavior is an emergent property of an agent embedded in an environment with which it must continuously interact. The goal is to integrate information about the robot status with the CPG. The robot will be equipped with different sensors, such as bumpers and accelerometers; these sensor signals could be added as coupling terms in the differential equations of the oscillator model, as sketched below. This could simplify the control, allowing robots to traverse irregular terrain and making them robust to failures.
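A minimal Matlab sketch of this idea, under the assumption that a scalar sensor signal s enters each oscillator additively with a gain k_fb (both the signal model and the gain are ours, for illustration only):

% Variant of the VDP update of Section 7.2 with a body-feedback term k_fb*s.
function [x, y] = vdp_step_fb(x, y, W, mu, p, w, h, s, k_fb)
    xa = W * x;
    y  = y + h * (mu .* (p^2 - xa.^2) .* y - w^2 .* xa + k_fb .* s);
    x  = x + h * y;
end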

10. Conclusions
The quadruped locomotion principles were studied and analyzed. It is feasible to generate locomotion patterns, similar to those produced by the neural oscillators found in animals, by coupling the signals of neural oscillators defined through a mathematical model. The GA-based approach takes advantage of the fact that the fitness function works directly with the oscillator and the network; this makes it possible to consider other restrictions, such as power consumption, and it does not need knowledge of the robot dynamics to generate the gait pattern. The GA-based approach uses a small population (a few tens of individuals) and a limited number of generations (a few hundred), which is ideal for processing on computers with reduced resources. The application only estimates the matrix for the walk gait; furthermore, it is possible to build the matrices for the gallop and trot gaits from the walk gait matrix. This reduces the number of operations necessary to estimate the matrix for each gait.


The hardware implementation exploits the distributed processing that can be carried out in FPGA devices. The presented examples show that the measured waveforms from the FPGA-based implementation agree with the numerical simulations. The architecture of the elemental Van Der Pol oscillator was designed and attached as a co-processor to the microblaze processor. The implementation provides the flexibility to generate different rhythmic patterns at runtime, suitable for adaptable locomotion, and it is scalable to larger networks. The microblaze makes it possible to propose a strategy for both generation and control of the gaits, and it is suitable for exploring designs with dynamic reconfiguration in the FPGA. The coordination of the joints of a legged robot can be accomplished by a simple CPG-based network, extremely small and suitable for special-purpose digital implementation. Also, the implementation takes advantage of its scalability and reconfigurability, and it can be used in robots with different numbers of limbs.

11. References
Amari, S.-I. (1988). Characteristics of random nets of analog neuron-like elements, pp. 55-69.
Arena, P. (2001). A mechatronic lamprey controlled by analog circuits, MED'01 9th Mediterranean Conference on Control and Automation, IEEE.
Arena, P., Fortuna, L., Frasca, M. & Sicurella, G. (2004). An adaptive, self-organizing dynamical system for hierarchical control of bio-inspired locomotion, IEEE Transactions on Systems, Man, and Cybernetics, Part B 34(4): 1823-1837.
Barron-Zambrano, J. H. & Torres-Huitzil, C. (2011). Two-phase GA parameter tunning method of CPGs for quadruped gaits, International Joint Conference on Neural Networks, pp. 1767-1774.
Barron-Zambrano, J. H., Torres-Huitzil, C. & Girau, B. (2010a). FPGA-based circuit for central pattern generator in quadruped locomotion, Australian Journal of Intelligent Information Processing Systems 12(2).
Barron-Zambrano, J. H., Torres-Huitzil, C. & Girau, B. (2010b). Hardware implementation of a CPG-based locomotion control for quadruped robots, International Conference on Artificial Neural Networks, pp. 276-285.
Billard, A. & Ijspeert, A. J. (2000). Biologically inspired neural controllers for motor control in a quadruped robot, IEEE-INNS-ENNS International Joint Conference on Neural Networks, Volume VI, IEEE Computer Society, pp. 637-641.
Buchli, J. & Ijspeert, A. J. (2004). Distributed central pattern generator model for robotics application based on phase sensitivity analysis, Proceedings BioADIT 2004, Springer Verlag, pp. 333-349.
Buchli, J., Righetti, L. & Ijspeert, A. J. (2006). Engineering entrainment and adaptation in limit cycle systems: From biological inspiration to applications in robotics, Biol. Cybern. 95: 645-664.
Carbone, G. & Ceccarelli, M. (2005). Legged robotic systems, in V. Kordic, A. Lazinica & M. Merdan (eds), Cutting Edge Robotics.
Chiel, H. J. & Beer, R. D. (1997). The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment, Trends in Neurosciences 20(12): 553-557.
Crespi, A. & Ijspeert, A. J. (2006). AmphiBot II: An amphibious snake robot that crawls and swims using a central pattern generator, Proceedings of the 9th International Conference on Climbing and Walking Robots (CLAWAR 2006), pp. 19-27.
Dagg, A. (1973). Gaits in mammals, Mammal Review 3(4): 135-154.


Delcomyn, F. (1980). Neural basis of rhythmic behavior in animals, Science 210: 492-498.
Fujii, A., Saito, N., Nakahira, K., Ishiguro, A. & Eggenberger, P. (2002). Generation of an adaptive controller CPG for a quadruped robot with neuromodulation mechanism, IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2619-2624.
Fukuoka, Y., Kimura, H., Hada, Y. & Takase, K. (2003). Adaptive dynamic walking of a quadruped robot 'Tekken' on irregular terrain using a neural system model, ICRA, pp. 2037-2042.
Grillner, S. (1985). Neural control of vertebrate locomotion - central mechanisms and reflex interaction with special reference to the cat, Feedback and Motor Control in Invertebrates and Vertebrates, pp. 35-56.
Hollands, M. A. & Marple-Horvat, D. E. (1996). Visually guided stepping under conditions of step cycle-related denial of visual information, Experimental Brain Research 109: 343-356.
Hooper, S. L. (2000). Central pattern generator, Current Biology 10(2): 176-177.
Ijspeert, A. (2001). A connectionist central pattern generator for the aquatic and terrestrial gaits of a simulated salamander, Biological Cybernetics 84(5): 331-348.
Ijspeert, A. (2008). Central pattern generators for locomotion control in animals and robots: A review, Neural Networks 21(4): 642-653.
Ijspeert, A. J. & Crespi, A. (2007). Online trajectory generation in an amphibious snake robot using a lamprey-like central pattern generator model, Proceedings of the 2007 IEEE International Conference on Robotics and Automation (ICRA 2007), pp. 262-268.
Inagaki, S., Yuasa, H., Suzuki, T. & Arai, T. (2006). Wave CPG model for autonomous decentralized multi-legged robot: Gait generation and walking speed control, Robotics and Autonomous Systems 54(2): 118-126.
Jalal, A., Behzad, M. & Fariba, B. (2009). Modeling gait using CPG (central pattern generator) and neural network, Proceedings of the 2009 Joint COST 2101 and 2102 International Conference on Biometric ID Management and Multimodal Communication, Springer-Verlag, Berlin, Heidelberg, pp. 130-137.
Kier, R. J., Ames, J. C., Beer, R. D. & Harrison, R. R. (2006). Design and implementation of multipattern generators in analog VLSI.
Kimura, H., Fukuoka, Y., Hada, Y. & Takase, K. (2003). Adaptive dynamic walking of a quadruped robot on irregular terrain using a neural system model, in R. Jarvis & A. Zelinsky (eds), Robotics Research, Vol. 6 of Springer Tracts in Advanced Robotics, Springer Berlin/Heidelberg, pp. 147-160.
Kimura, H., Shimoyama, I. & Miura, H. (2003). Dynamics in the dynamic walk of a quadruped robot on irregular terrain based on biological concepts, International Journal of Robotics Research 22(3-4): 187-202.
Lathion, C. (2006). Computer science master project: biped locomotion on the HOAP2 robot, Biologically Inspired Robotics Group.
Lee, Y. J., Lee, J., Kim, K. K., Kim, Y.-B. & Ayers, J. (2007). Low power CMOS electronic central pattern generator design for a biomimetic underwater robot, Neurocomputing 71: 284-296.
Lewis, M. A., Hartmann, M. J., Etienne-Cummings, R. & Cohen, A. H. (2001). Control of a robot leg with an adaptive VLSI CPG chip, Neurocomputing 38-40: 1409-1421.
Lewis, M. A. & Simo, L. S. (1999). Elegant stepping: A model of visually triggered gait adaptation.
Loeb, G. (2001). Learning from the spinal cord, The Journal of Physiology 533(1): 111-117.


Manoonpong, P. (2007). Neural Preprocessing and Control of Reactive Walking Machines: Towards Versatile Artificial Perception-Action Systems (Cognitive Technologies), Springer-Verlag New York, Inc., Secaucus, NJ, USA.
Matsuoka, K. (1987). Mechanisms of frequency and pattern control in the neural rhythm generators, Biological Cybernetics 56(5): 345-353.
Murata, S. & Kurokawa, H. (2007). Toward a scalable modular robotic system, IEEE Robotics & Automation Magazine, pp. 56-63.
Mutambara, A. G. O. & Durrant-Whyte, H. F. (1994). Modular scalable robot control, Proceedings of the 1994 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI '94), pp. 121-127.
Nakada, K. (2003). An analog CMOS central pattern generator for interlimb coordination in quadruped locomotion, IEEE Trans. on Neural Networks 14(5): 1356-1365.
Nakanishi, J., Morimoto, J., Endo, G., Cheng, G., Schaal, S. & Kawato, M. (2004). Learning from demonstration and adaptation of biped locomotion, Robotics and Autonomous Systems 47: 79-91.
Okada, M., Tatani, K. & Nakamura, Y. (2002). Polynomial design of the nonlinear dynamics for the brain-like information processing of whole body motion, Vol. 2, pp. 1410-1415.
Patla, A. E., Adkin, A., Martin, C., Holden, R. & Prentice, S. (1996). Characteristics of voluntary visual sampling of the environment for safe locomotion over different terrains, Experimental Brain Research 112: 513-522.
Patla, A. E., Prentice, S. D., Robinson, C. & Neufeld, J. (1991). Visual control of locomotion: Strategies for changing direction and for going over obstacles, Journal of Experimental Psychology: Human Perception and Performance 17: 603-634.
Picado, H., Lau, N., Reis, L. P. & Gestal, M. (2008). Biped locomotion methodologies applied to humanoid robotics.
Pratt, J. E., Chew, C.-M., Torres, A., Dilworth, P. & Pratt, G. A. (2001). Virtual model control: An intuitive approach for bipedal locomotion, I. J. Robotic Res. 20(2): 129-143.
Pribe, C., Grossberg, S. & Cohen, M. A. (n.d.).
Still, S. & Tilden, M. W. (1998). Controller for a four legged walking machine, Neuromorphic Systems Engineering: Silicon from Neurobiology, World Scientific, Singapore, pp. 138-148.
Torres, A. L. (1996). Virtual model control of a hexapod walking robot, Technical report.
Torres-Huitzil, C. & Girau, B. (2008). Implementation of central pattern generator in an FPGA-based embedded system, ICANN (2): 179-187.
van der Pol, B. & van der Mark, J. (1928). The heartbeat considered as a relaxation oscillation, and an electrical model of the heart, 6: 763-775.
Zielinska, T. (1996). Coupled oscillators utilised as gait rhythm generators of a two-legged walking machine, Biological Cybernetics 74(3): 263-273.

10
Tracking Control in an Upper Arm Exoskeleton with Differential Flatness
E. Y. Veslin1, M. Dutra2, J. Slama2, O. Lengerke2,3 and M. J. M. Tavera2
1Universidad de Boyacá, Colombia
2Universidade Federal do Rio de Janeiro, Brazil
3Universidad Autónoma de Bucaramanga, Colombia

1. Introduction
The exoskeleton is classified within a wider technology known as Wearable Robots, a group that describes a robotic field studying the interaction between the human body and robotics. In those systems, a mechatronic structure is attached to different members of the human body; while the wearer commands the mechanical system using physical signals, like the electrical stimulation produced by the orders of the brain, or the movements of the body generated by the muscles, the mechanical system does the hard work, like carrying heavier objects or assisting the movement of handicapped members of the body. Since its conception in the sixties (Croshaw, 1969), the literature speaks of two models of wearable robots: prosthetic and orthotic systems (Fig. 1). The first group replaces lost body members, while the second one assists the body movements or, in other cases, extends the body capabilities.

Fig. 1. (Left) Orthotic robot for the upper limbs (Perry et al., 2007). (Right) Prosthetic robot for the upper limbs (Rehabilitation Institute of Chicago, 2006)

The exoskeleton is part of the orthotic wearable robots field. It is a robotic structure (in fact a mechatronic structure, due to the integration of different electronic devices such as sensors, actuators and an intelligent controller into the mechanical system) fixed to the human body that follows the movements of the wearer. At the present time several models of exoskeletons exist: those attached to the legs are called lower-part exoskeletons (Kazerooni, 2006), and there also exist those attached to the arms, denominated upper-part exoskeletons (Perry et al., 2007). For both structures, the applications go from the medical field to military uses, with functions like helping handicapped patients by assisting the recovery of body-member movement (Gopura & Kiguchi, 2007) or helping soldiers to carry heavier loads (Kazerooni, 2005). There also exist prototypes that integrate both systems (Sankai, 2006); those mechanisms, like Kazerooni's structure, are used to help the wearer extend his body capabilities in order to carry weights that are beyond his normal possibilities, or to help patients make the normal movements that a healthy body could develop. Our work is focused on the study of the upper arm exoskeleton and an application of differential flatness in order to control the system so that it moves along a determined path (in this case a path generated by a polynomial). This document describes its development, starting with a morphological analysis that leads to a mathematical model and later to a kinematic simulation. This model is useful for obtaining the dynamic model; this study was based on the mechanical structure and on the arm movements that were selected. The dynamic model is then applied to the problem of trajectory tracking, in which the arm is controlled using differential flatness; this part is considered our contribution, and therefore we will discuss its applications, benefits and the problems found. All the analysis was supported by code written in Matlab, which provided the necessary simulations for this work; in this way, graphics of the model behaviour were characterized and captured in order to detail the different parts of this work.

2. Forward kinematics
The first problem to solve is the characterization of a human arm model that could be implemented for an exoskeleton design. In this part, forward kinematics plays an important role, offering a mathematical model that allows the prediction of the system's behaviour and giving the necessary preliminary information about the structure and movement capabilities. Whether modelling the human arm for the study of human motion (Maurel & Thalman, 1999; Klopčar & Lenarčič, 2005), implementing a rehabilitation system for the arm (Culmer et al., 2005), or designing a master arm for a man-machine interface (Lee et al., 1999), all of these cases need a forward kinematic analysis in order to initiate the system modelling. The forward kinematic study starts with a morphological analysis of the part of the body that will be attached to the mechanical structure. As said, the exoskeleton is a system that accompanies the arm in its movements; therefore the design of the structure must not interfere with them. To this end, the morphological analysis gives the information about movements, lengths of the members, number of segments and their respective masses. Another justification is that biological inspiration permits the creation of mechanical systems that are more compact, reliable and energy-efficient (Kazerooni, 2006). In nature, the biological designs of living organisms allow adaptation to the environment; evolutionary mechanisms have transformed the body as circumstances rule, hence the diversity of organic designs and also the extinction of those species that could not adapt to their environment.

In conclusion, it is possible to say that optimization mechanisms exist that transform our structure in function of one goal: survival. The human arm is a result of this optimization. Following the bones that form it (Fig. 2), it is seen that the limb is divided into three segments, the arm, the forearm and the hand, united by three joints: the shoulder, which unites the upper limb to the rest of the body; the elbow, which unites the arm with the forearm; and finally the wrist, which unites the forearm with the hand. The mechanical model of the exoskeleton responds to this configuration, conserving the system's general characteristics; this process of copying nature's design is called bioimitation (Pons, 2008).

Fig. 2. Human arm structure (Pons, 2008)

Initially, for this study, the hand and finger movements will not be considered, so the hand can be reduced to a single segment (Pons, 2008); in this way the kinematics is constructed with three segments: the arm, the forearm and the hand. There are also other considerations that help in the study of the system (Rocon & Pons, 2005):
1. The mechanical behaviour of the arm is independent of the rest of the body.
2. All the components of each segment, including the bones and the soft parts (muscles, skin, etc.), take part of the same rigid body.
3. The deformations of the soft parts do not significantly affect the mechanical properties of the entire segment.
The lengths of the segments are considered constant and depend on the height of the individual (H); this numerical relationship is detailed in Table 1, which also relates other parameters of the human arm, like the position of the centre of mass of each segment.

Body Segment   Length, L   Centre of Mass (% of L)
                           Proximal    Distal
Upper Arm      0.186 H     0.436       0.564
Forearm        0.146 H     0.43        0.57
Hand           0.108 H     0.506       0.494

Table 1. Human arm anthropometric data (Pons, 2008)
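As a quick worked example of Table 1 (with an assumed subject height of H = 1.70 m):

% Segment lengths and proximal centre-of-mass offsets from Table 1.
H   = 1.70;                        % subject height in metres (assumed)
L   = [0.186; 0.146; 0.108] * H;   % upper arm, forearm, hand: ~[0.32 0.25 0.18] m
com = [0.436; 0.430; 0.506] .* L;  % centre of mass measured from the proximal end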

Each joint has its proper movements, which depending on the joint can be uni-, bi- or multiaxial (Gowitzke & Milner, 2000); this means that the joints allow movements in one, two or three degrees of freedom, with planes generated by the rotation of the segments. For this study all the joints can be considered ideal (Rocon & Pons, 2005). The movements of the arm are shown in Fig. 3. The first joint, the shoulder, is compared with a spherical joint that allows movements in three degrees of freedom (multiaxial); its movements are defined as flexion-extension, abduction-adduction, and rotation. The second joint, the elbow, is a uniaxial joint whose movement is compared with a hinge; nevertheless, the bones of the forearm generate a movement of rotation around its own axis, one that does not affect the position in the three spatial coordinates, but does affect the orientation. This rotation can be located at the elbow joint or at the wrist joint; in early studies this rotation was considered at the elbow, but later, in order to simplify the numerical operations, it was transported to the wrist (Veslin, 2010). Some research studies the movement of the arm in the plane (Gerald et al., 1997); other articles analyse the movement of the forearm for tremor reduction applications (Rocon & Pons, 2007). The kinematic analysis in this work will be focused on the movement of the three joints of the arm, and specifically on the posterior control of the hand position using a mechanical structure supported by the arm; for this reason it is necessary to know its capabilities based on an analysis of the workspace. Knowing the structure of the arm, the lengths of the segments and the degrees of freedom of its joints, the mathematical model can be developed. In this part the morphology concepts previously studied are combined with classical robotics concepts, in this case using the homogeneous transformation matrix model (Siciliano et al., 2009). Equations (1-3) describe the rotations R around the three axes X, Y and Z, while (4) describes the translation T along an arbitrary axis.
R_z(\phi) = \begin{bmatrix} \cos\phi & -\sin\phi & 0 & 0 \\ \sin\phi & \cos\phi & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (1)

R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (2)

R_x(\psi) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\psi & -\sin\psi & 0 \\ 0 & \sin\psi & \cos\psi & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (3)

T(d) = \begin{bmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{bmatrix}   (4)
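As a quick check of Eqs. (1)-(4), the following Matlab sketch builds the four elementary transforms and composes a hypothetical shoulder-to-elbow chain; the angles and the segment length are illustrative values only, not results from this chapter.

% Elementary homogeneous transforms of Eqs. (1)-(4).
Rz = @(f) [cos(f) -sin(f) 0 0; sin(f) cos(f) 0 0; 0 0 1 0; 0 0 0 1];
Ry = @(t) [cos(t) 0 sin(t) 0; 0 1 0 0; -sin(t) 0 cos(t) 0; 0 0 0 1];
Rx = @(s) [1 0 0 0; 0 cos(s) -sin(s) 0; 0 sin(s) cos(s) 0; 0 0 0 1];
T  = @(d) [eye(3) d(:); 0 0 0 1];
L1 = 0.316;                              % upper arm length for H = 1.70 m (Table 1)
A  = Rz(pi/6) * Ry(pi/8) * Rx(pi/12) ... % shoulder rotations (illustrative angles)
   * T([0; 0; -L1]);                     % translation along the arm to the elbow
p_elbow = A * [0; 0; 0; 1];              % elbow position in the shoulder frame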

In (4) it is necessary to define along which axis the translation movement is effectuated. The kinematic model obeys an open kinematic chain, because there exists only one sequence of joints, connecting the two