Proceedings of the 18th World Congress

The International Federation of Automatic Control


Milano (Italy) August 28 - September 2, 2011

Vision Based Collision Avoidance of Industrial Robots

Alexander Winkler ∗ Jozef Suchý ∗∗

∗ Department of Robotic Systems, Chemnitz University of Technology,
09107 Chemnitz, Germany (e-mail: [email protected])
∗∗ Department of Robotic Systems, Chemnitz University of Technology,
09107 Chemnitz, Germany (e-mail: [email protected])

Abstract: This article presents an approach to collision avoidance of industrial robots. It is
based on the method of artificial potential or force fields. The field is generated by virtual
charges which are placed on obstacles. The virtual force acts on the robot and results in a
modification of the manipulator path which avoids collisions. The robot path modification is
performed by means of impedance control. The positions of the obstacles are determined
continuously by image processing using a simple USB camera; therefore the collision avoidance
is also able to deal with moving obstacles, e.g. other manipulators. All algorithms are
implemented on a real robot system and experimental results are presented.

Keywords: Robot control, robot vision, impedance control, collision avoidance, visual
servoing, potential field.

1. INTRODUCTION

Collision avoidance of industrial robots is a very important issue, especially when the
environment in the robot's work envelope changes. Further efforts to make robot work cells
more and more flexible will reinforce this aspect, especially when the robot work cell is free of
safety fences. Some advanced approaches to path planning and collision avoidance for robots
working in dynamic environments have been developed and published, (Khatib [1986], Warren
[1989], Quinlan and Khatib [1993], Brock and Khatib [2002], Seraji and Bon [1999], Bosscher
and Hedman [2009]).

In this paper we investigate an approach to dynamic collision avoidance which combines the
ideas of artificial potential fields, visual servoing and impedance control. The intention of
dynamic collision avoidance is the modification of the robot path to avoid obstacles in
situations when it is necessary. The algorithm should work with stationary and moving
obstacles. The basic approach is that the obstacles emit virtual forces which influence the
robot path by generating position offsets. One way to set the dynamic relationship between
virtual force and position offset is by a target impedance.

The generation of artificial forces will be performed using the idea of virtual charges. Every
charge can be regarded as an electric charge with an electrostatic field in its neighborhood.
Between two or more charges electrostatic forces act. In this article the virtual charges will be
placed on the end-effector of the robot and on the surfaces of obstacles. Dealing with moving
obstacles, the positions of the charges have to be continuously updated in real time. In the
approach presented here, the charge positions will be determined from camera information via
image processing. This integration of the camera signal into the robot position control loops
can also be understood as a kind of robot visual servoing, (Corke [1996], Hashimoto [2003]).

This paper is organized as follows: In the next section the setup of an adequate robot system
will be described. Besides, a possible test scenario will be defined which is used later for
experiments. In Section 3 some basics of image processing for obstacle detection will be
presented along with an example. Section 4 deals with possibilities of collision avoidance of
industrial robots based on virtual force fields together with impedance control. After this, in
Section 5 the implementation of the test scenario and some experimental results will be
presented. Finally, in the last section a short conclusion will be given and some directions for
further work will be outlined.

2. EXPERIMENTAL SETUP

The robot system used for the experiments consists of two KUKA KR6/2 manipulators. They
are six-axis articulated robot arms with a nominal payload of 6 kg. Each robot is controlled by
its own KUKA Robot Controller KRC2 based on an industrial PC. On the PC the real time
operating system VxWorks runs together with Windows. VxWorks is used for real time tasks
and Windows for program design and visualization. User robot applications can be
programmed using the KUKA Robot Language (KRL) which has all features of common robot
programming languages.

For a practical demonstration of vision based collision avoidance, Robot I executes a handling
task. It transports a workpiece in an endless loop from start position A to target position B
and back, see Fig. 1. During the linear motion along the y-axis of the world frame Robot I is
disturbed

978-3-902661-93-7/11/$20.00 © 2011 IFAC 9452 10.3182/20110828-6-IT-1002.00472



by Robot II which comes up to Robot I and simulates an obstacle.

Robot II will be observed continuously by a simple USB web cam of type Logitech Webcam
Pro 9000. It is connected to the Windows PC. The program which runs on the PC calculates
the Cartesian end-effector position of Robot II. For this purpose it is equipped with a colored
marker which can also be seen in Fig. 1. The position is then sent to the controller of Robot I
via a serial connection. If the end-effectors are regarded as artificial charges, the interaction
force between them can be calculated. This is performed by robot controller I. As a result of
the artificial force the path of Robot I will be modified using the principle of impedance
control.

Fig. 1. Experimental setup (Robot I, Robot II with marker, camera, positions A and B, world
frame axes x, y, z)

3. OBSTACLE DETECTION

3.1 Basic Algorithms

The detection of obstacles within the robot workspace is performed by image processing. For
this purpose the robot work cell is supervised by a camera. In the experiment presented in
this paper it is sufficient to use a simple USB web cam. Its resolution for image capturing is
640 × 480 pixels. The moving obstacle will be represented by the end-effector of Robot II. It is
equipped with a circular, blue colored marker. The camera is connected to the Windows PC
which fulfills the role of image processing. To this end a software module was developed in
Delphi. It includes the following features: After capturing the picture of the robot work cell it
is processed by a color filter. In the next step edge detection is performed. Thereafter, the
marker will be found by the Hough transform. Finally, the position of the obstacle will be
calculated. These algorithms will be described in somewhat more detail below.

Color filter. In the first step the captured picture is transformed into a binary image with
respect to the color of the obstacle marker. This is achieved by the color filter. The filter
works in the HSV color space. For this purpose the values for red, green and blue (RGB) of
every pixel of the original image are converted to values for hue, saturation and value (HSV).
If the values of an individual pixel are within a predefined range, the corresponding pixel in
the binary image will be set to 1, otherwise it will be set to 0.

Edge detection. Edge detection is performed by the Laplacian. The Laplace operator Δ for two
Cartesian dimensions applied to the image function f is defined as follows, (Siegwart and
Nourbakhsh [2004]):

    Δf(x, y) = ∂²f/∂x² + ∂²f/∂y²    (1)

For the discrete space of image pixels the discrete Laplace operator can be expressed with the
convolution operator D:

        ⎡ 0   1   0 ⎤
    D = ⎢ 1  −4   1 ⎥    (2)
        ⎣ 0   1   0 ⎦

It is applied to the color filtered binary image C. The resulting picture E represents the edges:

    E_xy = 1  if  4·C_x,y − C_x−1,y − C_x+1,y − C_x,y−1 − C_x,y+1 > 0
    E_xy = 0  otherwise    (3)

Hough transform. For the detection of the marker mounted on the robot end-effector the
circular Hough transform is used, (Sonka et al. [1999]). It yields the position and the diameter
of the circular marker. To save computation time, the minimum and the maximum diameter
of the searched circle can be preset in pixels.

Obstacle position. When the diameter of the marker is known, the coordinates of the obstacle
can easily be calculated with respect to the camera coordinate frame, P_C = [x_C y_C z_C]^T.
Using the homogeneous transformation matrix T_CW (McKerrow [1995]), which describes the
relative pose between the camera and Robot I, the Cartesian coordinates of the obstacle in
the world frame, P = [x y z]^T, can be determined:

    P = T_CW · P_C    (4)

3.2 Example of Image Recognition

The previously described steps to calculate the obstacle position from the web cam image can
be seen in the example in Fig. 2. It starts with the original snapshot of the robot workspace
(Fig. 2a). In Fig. 2b the result of the color filter is shown, which extracts the relevant regions
with respect to their color in the HSV color space. Afterwards edge detection is performed by
the Laplacian (Fig. 2c). The Hough transform provides a maximum which leads to the marker
found in Fig. 2d.

4. COLLISION AVOIDANCE OF ROBOTS

4.1 Artificial Force Field

The approach of artificial potential fields is a well known method for path planning of mobile
or stationary robots, (Khatib [1986]). It may as well be used for collision avoidance between
multiple robots or between a robot and its dynamic environment. The artificial force emitted
by the moving obstacle influences the robot motion to bring the manipulator arm away from
it. In this case the main


Fig. 2. Example of image processing (a: original snapshot, b: color filter, c: edge detection,
d: detected marker)

problem is the on-line generation of the virtual potential or force field of the moving obstacle
(robot). For this purpose we propose the approach based on artificial charges. Its idea comes
from electric charges which generate an electrostatic field in their neighborhood. The
electrostatic force F_12 between two charges Q1 and Q2 arises according to:

    F_12 = − (1/(4πε)) · (Q1 Q2 / ||r||²) · (r / ||r||)    (5)

In (5) ε represents the electric permittivity and r is the position vector between both charges.
The absolute force between the charges is reciprocally proportional to the square of their
distance. However, for the realization of virtual force fields in robotics this particular form of
dependence is not obligatory and (5) can be generalized by introducing the so called force
function F:

    F_12 = F(||r||) · r / ||r||    (6)

Hence, the function F describes the relationship between distance and virtual force.

In the scenario presented in this paper the virtual force acts only between the end-effectors of
both manipulators. This means that Robot II, which has the role of an obstacle, emits the
virtual force and Robot I responds to it. The Cartesian position of Robot I, p_I, with respect
to the world frame is available in its robot controller and the position of the obstacle, p_II, is
detected by the image processing already described. So, (6) can be written as:

    F_V = [Fx Fy Fz]^T = F(||p_I − p_II||) · (p_I − p_II) / ||p_I − p_II||    (7)

To generate virtual force fields surrounding complex obstacles many charges are necessary. In
this case the principle of superposition is valid and can be used to calculate the resulting
force, (Winkler and Suchý [2009]). However, in this paper only one virtual charge is used.

4.2 Robot Motion Control

To implement collision avoidance by manipulator path correction it is necessary to influence
the robot motion. It is the advantage of the chosen robot system that the robot motion can be
influenced in real time on the level of the position control loops. For this purpose KUKA's
Robot Sensor Interface (RSI) is utilized, (KUKA Roboter GmbH [2007]). RSI allows the
realization of sensor guided robot motions, (Winkler and Suchý [2006a]). It is an additional
module which realizes real time signal processing and the access to the position control loops.
The Robot Sensor Interface provides special KRL expressions to generate RSI objects which
are defined in the object library. There are three kinds of them. Objects with outputs only
provide measured values or signals, e.g., digital/analogue input signals, current joint angles,
the end-effector position or measured values from a force/torque sensor. Objects with inputs
only receive values which influence the robot motion or write to the output periphery of the
robot controller. Objects with inputs and outputs are signal processing objects like a
proportional controller, an integrator or a summator. Connections between RSI objects may
be generated by using special KRL expressions. In this way it is possible to create relatively
complex controller structures. After the whole RSI structure has been created, it is able to run
in real time with the interpolation cycle. This means that it is executed periodically every
12 ms in parallel with the standard KRL program. Changes in the RSI program structure are
possible during program execution. The functional scheme of RSI can be seen in Fig. 3.

Fig. 3. Functional scheme of RSI (non real time part: KRL user programs and motion
commands; real time part: sensor drivers, object library with signal processing and motion
control, position controller)

4.3 Path Modification via Impedance Control

The robot system used for the experiments in this work allows corrections of the end-effector
path by the Robot Sensor Interface (RSI) already mentioned in the previous section. Path
corrections can be performed on the level of the position control loops. For this purpose we
propose a kind of impedance control. It is one of the main approaches to robot control
(Hogan [1985]), (Zeng and Hemami [1997]) or to manual guidance of manipulator arms
(Winkler and Suchý [2006b]).

In the context presented here impedance control will be used to determine the position
correction offset depending on the vector of the virtual force F_V. The target impedance for
the x direction, e.g., is then given as follows:

    F_Vx(s) / ΔX(s) = Mx s² + Dx s + Cx    (8)

In (8) F_Vx is the value of the virtual repulsive force in the x direction, ΔX is the resulting
path correction along the x direction, and Mx, Dx and Cx represent mass, damping and
spring constants, respectively.
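Stepping back to the image processing of Section 3.1: the color filter and the Laplacian edge
detection of eqs. (2) and (3) amount to a few array operations each. The sketch below is an
illustrative reimplementation in Python/NumPy (the original module was written in Delphi);
the HSV threshold range in the test is an arbitrary choice for a blue marker, not a value from
the paper:

```python
import numpy as np

def color_filter(hsv, lo, hi):
    """Binarize an HSV image: a pixel becomes 1 if all three channel
    values lie inside the predefined range [lo, hi], else 0 (Sec. 3.1)."""
    return np.all((hsv >= lo) & (hsv <= hi), axis=-1).astype(np.uint8)

def edge_map(C):
    """Discrete Laplacian edge detection, eqs. (2)-(3): an edge pixel of E
    is 1 where 4*C[x,y] minus the 4-neighbour sum is positive."""
    Ci = C.astype(np.int32)               # avoid uint8 wrap-around
    lap = (4 * Ci[1:-1, 1:-1]
           - Ci[:-2, 1:-1] - Ci[2:, 1:-1]   # upper / lower neighbours
           - Ci[1:-1, :-2] - Ci[1:-1, 2:])  # left / right neighbours
    E = np.zeros_like(C)
    E[1:-1, 1:-1] = (lap > 0).astype(C.dtype)
    return E
```

Applied to a filled shape, `edge_map` keeps only the boundary ring, which is exactly the input
the subsequent circular Hough transform expects.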

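Equations (6) and (7), together with the superposition principle of Section 4.1, can be
sketched in a few lines. This is an illustrative Python version (the paper implements the force
calculation on the PLC of robot controller I); it already uses the concrete inverse-distance
force function, 300 mm action radius and 15 N limit given later in Section 5.4:

```python
import numpy as np

def force_function(d):
    """Force law F(||r||) from eq. (10): 750/d N with d in mm,
    limited to 15 N (Sec. 5.4)."""
    return min(750.0 / d, 15.0)

def virtual_force(p_robot, charge_positions, radius=300.0):
    """Eqs. (6)-(7) with superposition: sum the repulsive contribution of
    every virtual charge acting on the end-effector at p_robot."""
    F = np.zeros(3)
    for p_c in charge_positions:
        r = p_robot - p_c                  # points away from the charge
        d = float(np.linalg.norm(r))
        if 0.0 < d <= radius:              # bounded sphere of action
            F += force_function(d) * r / d
    return F
```

With a single charge, as in the experiments of this paper, `virtual_force` reduces exactly to
eq. (7).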

It may be convenient to reduce the target impedance to a spring-damper system. This results
in the dynamic behavior of a first order system between virtual force and position offset.
Hence, the transfer characteristic for the x direction, e.g., can be expressed as follows:

    ΔX(s) / Fx(s) = 1 / (Dx s + Kx)    (9)

It can be easily realized using the RSI object ST_PT1. The complete RSI structure to
influence the current robot path in all translational Cartesian degrees of freedom can be seen
in Fig. 4. The virtual force values (Fx, Fy, Fz) calculated in a background process are received
by ST_SENPREA objects. The desired position offsets (Δx, Δy, Δz) are finally attached to
the ST_PATHCORR object which performs the inverse kinematics and the connection to the
robot joint position control loops.

Fig. 4. Signal flow diagram of the complete RSI structure for robot path modification (three
ST_SENPREA objects receive the virtual force components Fx, Fy, Fz; three ST_PT1 first
order systems compute the offsets Δx, Δy, Δz; ST_PATHCORR performs the motion control
in Cartesian space)

5. IMPLEMENTATION AND EXPERIMENTAL RESULTS

5.1 Robot System

The structure of the complete robot system used for the experiments is shown in Fig. 5. The
web cam observes the end-effector of Robot II. The image processing runs on the external
Windows PC. When the computation of the current robot/obstacle position is finished, its
values are sent to the controller of Robot I via the serial connection. Afterwards, the virtual
force is calculated by the internal PLC of robot controller I. The path modification is realized
in the robot program.

Fig. 5. Structure of the robot system (robot controller I with Robot I; robot controller II with
Robot II acting as obstacle; external PC with web cam)

5.2 Implementation

For all parts of the robot system already described the corresponding software modules have
been developed.

Robot I. For Robot I the program of a handling task was developed. During the linear
motions along the y axis of the world frame from position A to B and from B to A (Fig. 1)
the RSI structure is active. It includes the reception of the virtual force values from the
internal PLC and the path modification according to the target impedance.

Additionally, the controller of Robot I includes an internal PLC. It is freely programmable
and independent from the main robot program. The PLC receives the Cartesian obstacle
position which is represented by the end-effector of Robot II. Taking the end-effector position
of Robot I into account, the PLC calculates the virtual force vector. It is transferred to the
main robot program via special RSI variables for data exchange.

Robot II. Robot II represents the moving obstacle. As one possibility, it can be controlled by
a human operator using the teach pendant. The operator moves the gripper equipped with
the colored marker into the current workspace of Robot I to disturb its motion. Another way
is the development of a simple program for the controller of Robot II to perform the
disturbance task in an automatic mode.

PC Program. The program which runs on the Windows PC performs the image processing. It
was developed using Delphi. After the position of the marker is calculated its values are sent
to Robot I via the serial connection.

5.3 Results of Vision Based Obstacle Detection

Initially, the operation of the vision based obstacle detection will be verified. For this purpose
the obstacle, which is simulated by the gripper of Robot II, is moved parallel to the coordinate
axes of the world frame, see also Fig. 1. Its origin is located in the base of Robot I. The
Cartesian position of the tool center point is continuously logged by the robot program.
Additionally, the workspace is supervised by the camera and the gripper position is
determined and logged by the PC program. The plots of the Cartesian coordinates can be
compared in Fig. 6.

It can be seen that the coordinates calculated from the web cam match the measured values
quite well. Some differences can be found especially in the y coordinate, which is the depth
axis with respect to the camera. However, the measurement accuracy of the camera is quite
sufficient for the application presented here.

The computation time of the whole image processing cycle (image acquisition, color filter,
edge detection, Hough transform) depends on some adjustments, primarily on the settings of
the Hough transformation. Values between 200 ms and 500 ms are possible. The computation
time could be reduced easily using multi core programming. However, the reached values are
quite adequate for the scenario presented in this work.
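The spring-damper target impedance of eq. (9) and the ST_PT1 objects of Fig. 4 correspond
to one first order lag per axis. The following Python sketch illustrates this; the backward-Euler
discretization is an assumption (KUKA's internal realization of ST_PT1 may differ), and the
parameter values are back-computed from Section 5.4 (time constant Dx/Kx ≈ 1 s, static gain
1/Kx = 10 mm/N):

```python
import numpy as np

TS = 0.012                                  # RSI interpolation cycle, 12 ms

class FirstOrderLag:
    """Discrete realization of eq. (9), dX/Fx = 1/(D s + K), via backward
    Euler: dx[k] = a*dx[k-1] + b*F[k]."""
    def __init__(self, D, K):
        self.a = D / (D + K * TS)           # discrete pole
        self.b = TS / (D + K * TS)          # input gain; b/(1-a) = 1/K
        self.dx = 0.0

    def step(self, F):
        self.dx = self.a * self.dx + self.b * F
        return self.dx

# One lag per translational axis, as in Fig. 4. D = 0.1 Ns/mm and
# K = 0.1 N/mm (assumed values) give D/K = 1 s and 1/K = 10 mm/N:
filters = [FirstOrderLag(D=0.1, K=0.1) for _ in range(3)]

def path_offsets(F_v):
    """Per-cycle update: virtual force vector -> offsets (dx, dy, dz)."""
    return np.array([f.step(F) for f, F in zip(filters, F_v)])
```

For a constant 1 N force the offset converges to 10 mm with a time constant of about one
second, matching the behavior derived in Section 5.4.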


Fig. 6. Comparison between real and vision estimated obstacle position (x, y and z
coordinates in mm over time; obstacle position from robot vs. obstacle position estimated by
web cam)

5.4 Results of Collision Avoidance

The estimated obstacle position is ultimately used to generate the virtual force vector F_V
which acts on the end-effector of Robot I. The force function representing the relationship
between the absolute value of the repulsive force and the distance between the gripper
(TCP Robot I) and the obstacle (TCP Robot II) is chosen as follows:

    F(||p_I − p_II||) = 750 ||p_I − p_II||⁻¹ N·mm    (10)

Its sphere of action is bounded by a distance of 300 mm and the absolute force value is
limited to 15 N.

The artificial repulsive force leads to a modified robot path with respect to the originally
planned path when executing the work task. The aim is to move around the obstacle and
avoid collisions. The path modification is realized by the approach of impedance control. The
desired target impedance for every translational degree of freedom is realized in discrete
time. Thus, e.g., for the x direction equation (9) has been transformed from the Laplace
domain to the time discrete domain:

    ΔX(s) / Fx(s) = Kx⁻¹ / (1 + Dx Kx⁻¹ s) = kFx / (1 + Tx s)
    ΔX(z) / Fx(z) = kFx · b₁ z⁻¹ / (1 − a₁ z⁻¹)    (11)

Taking into account the interpolation time of the robot controller of 12 ms, which is identical
to the sampling time of RSI, a time constant Tx of approximately 1 s is reached with the
following transfer function:

    ΔX(z) / Fx(z) = kFx · 0.01 z⁻¹ / (1 − 0.99 z⁻¹)    (12)

The proportional gain kFx is set to 10 mm N⁻¹. The parameters of the target impedances for
the y and z directions are chosen in a similar way. It might be possible to merge the force
function and the desired target impedance. However, they are realized separately to
accentuate the basic approach of impedance control.

Fig. 7. Disturbance of Robot I by obstacle (obstacle position from camera, virtual force in N
and x, y, z position of Robot I in mm over time)

The functioning of the vision based collision avoidance is shown in Fig. 7. Robot I performs
the handling task. The values were logged only during the linear motions parallel to the y axis
of the world frame. The motions for grasping and putting down the workpiece were not
considered. In this context two cycles of the handling task can be found in Fig. 7. In the first
cycle the obstacle (Robot II) stays away from the path of Robot I. The values of the virtual
force components are small when the y coordinates of Robot I and Robot II become close at
times 8 s/35 s. For that reason the evasive movement of Robot I is relatively small, which can
be seen clearly in the x coordinate.

After 42 s, within the second cycle, Robot II moves closer to the path of Robot I. The
changing obstacle position is detected by the camera and sent to the PLC of robot
controller I, which calculates the virtual force vector. The higher force results in a larger
distance of the evasive motion of Robot I (see times 58 s/85 s).

The robot work cell with respect to the camera view is shown in Fig. 8a for the first cycle
and in Fig. 8b for the second cycle of the handling task, respectively.

6. CONCLUSION

This article presents an approach to collision avoidance of industrial robots. It is based on
artificial forces emitted by virtual charges along with image processing for charge placement.
When the robot comes close to a detected obstacle its path is modified by means of
impedance control. The whole approach may be classified as visual servoing.
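The rounded numbers in eq. (12) can be checked directly: with a₁ = 0.99, b₁ = 0.01 and the
12 ms sampling time, the discrete filter has unit DC gain (so the full static gain is
kFx = 10 mm/N) and a time constant close to the stated 1 s. A short verification sketch:

```python
import math

TS = 0.012                  # RSI sampling time, s
a1, b1 = 0.99, 0.01         # coefficients of eq. (12)
k_Fx = 10.0                 # proportional gain, mm/N

# Static gain of eq. (12): k_Fx * b1 / (1 - a1) = 10 mm/N
static_gain = k_Fx * b1 / (1.0 - a1)

# Time constant implied by the discrete pole a1: Tx = -TS / ln(a1) ~ 1.19 s
Tx = -TS / math.log(a1)

def step(dx_prev, F_prev):
    """Difference equation of eq. (12):
    dx[k] = a1*dx[k-1] + k_Fx*b1*F[k-1]."""
    return a1 * dx_prev + k_Fx * b1 * F_prev
```

So the coefficients of eq. (12) reproduce the intended target impedance almost exactly: a
constant 1 N force drives the offset to 10 mm with a time constant of roughly one second.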


Fig. 8. Camera view of the robot work cell

For the implementation a robot system consisting of two articulated manipulators with six
axes was chosen. Robot I performs a handling task, while Robot II, equipped with a marker,
represents the moving obstacle and enters the workspace of Robot I. The workspace is
supervised by a USB web cam and the obstacle position (end-effector of Robot II) is
calculated by image processing. As a result of this disturbance the path of Robot I was
modified by the virtual force. For this purpose the well known approach of impedance control
has been used.

The practical realization was successfully accomplished and some results were presented in
this paper. More results can be shown in a video.

Based on the approaches and results presented here some perspective developments seem to
be possible for further research.

For simplification of the image processing a colored marker was attached to the obstacle. It
would be preferable to do without it. This feature would require more sophisticated
algorithms for image processing.

Up to now the generation of the virtual force field was simplified in the sense that it acts only
between the end-effectors, which means that one charge is placed at the center of the marker
and another one on the gripper of Robot I. It would be a challenging task to determine the
posture of the moving Robot II (obstacle) via image processing and cover its whole body with
virtual charges.

An interesting scenario for further research would also be the situation when a human enters
the robot workspace. The 3D camera information can again be used. The human is then
covered with virtual charges in real time and he/she emits the repulsive force field. In this
case additional safety aspects should be taken into consideration, (Som [2006]).

REFERENCES

P. Bosscher and D. Hedman. Real-time collision avoidance algorithm for robotic
manipulators. In Proceedings of the IEEE International Conference on Technologies for
Practical Robot Applications, pages 113–121, 2009.
O. Brock and O. Khatib. Elastic strips: A framework for motion generation in human
environments. The International Journal of Robotics Research, 21(12):1031–1052, 2002.
P. I. Corke. Visual Control of Robots. Research Studies Press Ltd., 1996.
K. Hashimoto. A review on vision-based control of robot manipulators. Advanced Robotics,
17(10):969–991, 2003.
N. Hogan. Impedance control: An approach to manipulation: Parts I–III. ASME Journal of
Dynamic Systems, Measurement and Control, 107:1–24, 1985.
O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. The
International Journal of Robotics Research, 5(1):90–98, 1986.
KUKA Roboter GmbH. KUKA Robot Sensor Interface (RSI) 2.1, 2007.
P. J. McKerrow. Introduction to Robotics. Addison Wesley, 1995.
S. Quinlan and O. Khatib. Elastic bands: Connecting path planning and control. In
Proceedings of the IEEE International Conference on Robotics and Automation, pages
802–807, 1993.
H. Seraji and B. Bon. Real-time collision avoidance for position-controlled manipulators.
IEEE Transactions on Robotics and Automation, 15(4):670–676, 1999.
R. Siegwart and I. R. Nourbakhsh. Introduction to Autonomous Mobile Robots. MIT Press,
2004.
F. Som. Innovative robot control offers more operator ergonomics and personnel safety. In
Proceedings of the Joint Conference on Robotics: 37th International Symposium on Robotics
and 4th German Conference on Robotics, 2006.
M. Sonka, V. Hlavac, and R. Boyle. Image Processing, Analysis, and Machine Vision.
Brooks/Cole Publishing Company, 1999.
C. W. Warren. Global path planning using artificial potential fields. In Proceedings of the
IEEE International Conference on Robotics and Automation, pages 316–321, 1989.
A. Winkler and J. Suchý. An approach to compliant motion of an industrial manipulator. In
Proceedings of the 8th International IFAC Symposium on Robot Control, 2006a.
A. Winkler and J. Suchý. Force-guided motions of a 6-d.o.f. industrial robot with a joint
space approach. Advanced Robotics, 20(9):1067–1084, 2006b.
A. Winkler and J. Suchý. Intuitive collision avoidance of robots using charge generated
virtual force fields. In T. Kröger and F. M. Wahl, editors, Advances in Robotics Research,
pages 77–87. Springer, 2009.
G. Zeng and A. Hemami. An overview of robot force control. Robotica, 15(5):473–482, 1997.
