Papers by Ruediger Dillmann
PubMed, 2001
Tracking of a see-through head-mounted display is a necessary precondition for proper overlay of virtual data and real scenes within the display. In this contribution, the intention and technique for Intraoperative Presentation are presented, with a focus on the tracking of the display device. We illustrate and compare three different optical tracking approaches and the results achieved by using them.

Elsevier eBooks, 1997
Publisher Summary: This chapter presents an approach to fusing sensor information from complementary sensors. The mobile robot PRIAMOS is used as an experimental testbed. A multisensor system supplies the vehicle with odometric, sonar, visual, and laser scanner information. Sensor fusion is performed by matching the local perception of a laser scanner and a camera system against a global model that is built up incrementally. The Mahalanobis distance is used as the matching criterion and a Kalman filter is used to fuse matching features. The goal of the chapter is to develop and apply sensor fusion techniques to improve the system's performance in mobile robot positioning and in the exploration of unknown environments. The approach can deal with static as well as dynamic environments. On different levels of processing, geometric, topological and semantic models are generated (exploration) or used as a priori information (positioning). The system's performance is demonstrated for the task of building a geometric model of an unknown environment.
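The match-then-fuse step described in this abstract can be sketched for a single scalar feature (e.g. the distance to a wall). This is a minimal illustration of Mahalanobis gating followed by a Kalman update, not the PRIAMOS implementation; all names, the 1-D setting, and the 95% gate are our assumptions.

```python
CHI2_GATE_1D = 3.84  # 95% chi-square threshold, 1 degree of freedom

def mahalanobis2(x, P, z, R):
    """Squared Mahalanobis distance between a model feature (mean x,
    variance P) and an observation (value z, variance R)."""
    return (z - x) ** 2 / (P + R)

def fuse(x, P, z, R):
    """Kalman update: fuse a matching observation into the model feature."""
    K = P / (P + R)  # Kalman gain: how much to trust the observation
    return x + K * (z - x), (1.0 - K) * P

# The global model holds a wall at 2.00 m (variance 0.04);
# the laser scanner observes it at 2.10 m (variance 0.01).
x, P = 2.00, 0.04
z, R = 2.10, 0.01
if mahalanobis2(x, P, z, R) < CHI2_GATE_1D:  # match accepted by the gate
    x, P = fuse(x, P, z, R)
print(x, P)  # fused estimate is pulled toward the observation; variance shrinks
```

An observation whose Mahalanobis distance exceeds the gate is treated as a non-match and would instead spawn a new feature in the incrementally built model.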

In this paper we present a new approach to 3D shape acquisition. This implementation enables the user to manually move the scan unit over the object without the scan unit being tracked. Registration from scan to scan is done on-the-fly to allow user interaction with the system. Since the user can watch the scene assembling gradually during the scan process, he can immediately respond to shadings occurring due to shadows and undercuts. The chosen structured light approach uses an initially unknown white noise pattern that can easily be projected with any fixed-pattern projection system. Object points are acquired by finding correspondences to the pattern in the camera image of the current scene. This is achieved by a combination of an SSD and a least squares correlation algorithm. In the current test setup we use a standard video beamer and a standard digital camera. Our approach reduces hardware complexity to a minimum, using only one camera and a fixed-pattern projector, both of which can be miniaturized. We present the whole system, including the calibration procedure, with a focus on the correlation method.
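The SSD part of the correspondence search can be sketched as a 1-D window search along a row (standing in for an epipolar line). This is a toy illustration only; the paper's method additionally refines matches with least squares correlation, and all names and data here are ours.

```python
def ssd(a, b):
    """Sum of squared differences between two equally sized patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_match(pattern_row, image_row, patch_start, patch_len):
    """Exhaustively search image_row for the offset whose window best
    matches a patch of the projected pattern (lowest SSD score)."""
    patch = pattern_row[patch_start:patch_start + patch_len]
    scores = [ssd(patch, image_row[o:o + patch_len])
              for o in range(len(image_row) - patch_len + 1)]
    return min(range(len(scores)), key=scores.__getitem__)

# Toy example: the camera sees the pattern shifted right by 3 pixels.
pattern = [10, 30, 20, 50, 40, 60, 15, 25, 35, 45]
image = [0, 0, 0] + pattern[:7]  # same signal, shifted by 3
disparity = best_match(pattern, image, 0, 4) - 0
print(disparity)  # 3
```

The recovered offset is the disparity from which, given the calibrated camera-projector geometry, a 3D object point can be triangulated.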

The economic importance of additive manufacturing with Fused Deposition Modeling (FDM) 3D printers has been on the rise since key patents on crucial parts of the technology expired in the early 2000s. Although there have been major improvements in the materials and print quality of the printers used, the process is still prone to various errors. At the same time, almost none of the available printers use built-in sensors to detect errors and react to their occurrence. This work outlines a monitoring system for FDM 3D printers that is able to detect a multitude of severe and common errors using optical consumer sensors. The system detects layer shifts and stopped extrusion with high accuracy. Furthermore, additional sensors and error detection methods can easily be integrated thanks to the modular structure of the presented system. To handle multiple printers without a matching number of sensors, the sensor was mounted at the tool center point (TCP) of a robot.
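A layer-shift check of the kind this abstract describes can be sketched as a per-layer comparison of the expected and optically observed part position. This is a deliberately simplified stand-in, not the paper's detector; the centroid representation, tolerance, and all names are our assumptions.

```python
def detect_layer_shift(expected_xy, observed_xy, tol_mm=0.5):
    """Flag a layer shift when the observed part outline deviates from
    the expected position by more than tol_mm in X or Y."""
    dx = abs(observed_xy[0] - expected_xy[0])
    dy = abs(observed_xy[1] - expected_xy[1])
    return dx > tol_mm or dy > tol_mm

# Per-layer outline centroids in mm, e.g. extracted from the optical
# sensor carried at the robot's TCP (synthetic data).
expected = [(100.0, 100.0), (100.0, 100.0), (100.0, 100.0)]
observed = [(100.1, 99.9), (100.2, 100.1), (103.5, 100.0)]  # shift in layer 3
alerts = [detect_layer_shift(e, o) for e, o in zip(expected, observed)]
print(alerts)  # [False, False, True]
```

A stopped-extrusion check would plug into the same per-layer loop, which is the kind of modularity the abstract attributes to the system.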
PubMed, 2003
In this paper we present fundamental results of the first evaluation of INPRES in a laboratory environment. While the system itself, an HMD-based approach to intraoperative augmented reality in head and neck surgery, has been described elsewhere several times, this paper focuses on the methods and outcome of recently completed test procedures.
By not knowing their limits, field and service robots either act overly carefully, limiting their potential, or are oblivious to the risks involved in their mission, leading to possibly dangerous behaviors. We present our Health-Tree approach, which enables robots and their operators to intuitively estimate the robot's current well-being. In combination with our Skill-Tree, the consequences of using robot capabilities are made visible, enabling risk-aware decisions. The system was tested with two different robot types, LAURON V and a Bebop UAV.
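The idea of aggregating component health up a tree can be sketched as follows. The worst-case (minimum) propagation rule, the tree layout, and all names are our assumptions for illustration; the paper's actual aggregation may differ.

```python
def health(node):
    """Aggregate health of a (name, payload) tree where payload is either
    a health value in [0, 1] or a list of child nodes. Inner nodes take
    the minimum of their children (worst-case propagation, our assumption)."""
    name, payload = node
    if isinstance(payload, list):
        return min(health(child) for child in payload)
    return payload

# Hypothetical health tree for a walking robot.
robot = ("robot", [
    ("locomotion", [("leg1_motor", 1.0), ("leg2_motor", 0.4)]),
    ("sensors", [("camera", 0.9), ("imu", 1.0)]),
])
print(health(robot))  # a weak leg motor caps the overall well-being at 0.4
```

Pairing such a tree with a skill tree then lets each capability report the health of exactly the components it depends on, which is what makes the consequences of invoking a skill visible to the operator.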

In this paper we introduce a new approach to 3D scanning of dynamic scenes. This implementation enables the user not only to manually move a scan head over the object to scan but also to capture moving objects. Registration from scan to scan is done in real time. The user interacts with the system, watches the scene assembling, and can immediately respond to shadings that occur due to undercuts in the scene. The chosen structured light method uses an initially unknown speckle image that can easily be etched on a chrome-on-glass slide and projected using a strobe light. In the current large-scale implementation we use a standard video beamer and a standard digital camera. Miniaturization and adaptation for special purposes (e.g. medical applications) are scheduled for next year. The focus of this paper is on calibration issues regarding camera and projector.

Rough, unstructured and hazardous areas are typical application scenarios for autonomous mobile robots. In the case of an error or fault, these robots cannot rely on a human to recover or repair them. Therefore, intelligent fault detection systems have to be developed that can autonomously detect faults and derive a system status corresponding to the operability of the robot. After a fault has been detected, it might be possible to adapt the robot and still continue with its primary task. This paper presents a fault diagnosis and status monitoring system for a six-legged walking robot. Our system is based on expert knowledge and was implemented on the six-legged robot LAURON. It is able to detect seven different types of faults and errors, ranging from mechanical coupling problems to the total loss of leg controller units. The status monitoring component gives the operator a detailed but still understandable status report on the most important components and their functionality.
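An expert-knowledge fault diagnosis of the kind this abstract describes can be sketched as symptom-to-fault rules. The rule set, symptom names, and matching scheme below are hypothetical illustrations, not LAURON's actual rule base.

```python
# Hypothetical expert rules: each maps a set of observed symptoms
# to a diagnosis (modeled on the fault classes named in the abstract).
RULES = [
    ({"leg_current_high", "leg_position_stuck"}, "mechanical coupling problem"),
    ({"leg_controller_silent"}, "loss of leg controller unit"),
    ({"foot_contact_missing"}, "foot sensor fault"),
]

def diagnose(symptoms):
    """Return every diagnosis whose trigger symptoms are all present."""
    return [fault for trigger, fault in RULES if trigger <= symptoms]

status = diagnose({"leg_current_high", "leg_position_stuck", "battery_ok"})
print(status)  # ['mechanical coupling problem']
```

The status monitoring layer would then map each active diagnosis to an operability level per component, producing the detailed but understandable report mentioned above.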

In this paper an approach to real-time position correction and environmental modeling based on odometry, ultrasonic sensing, structured light sensing and active stereo vision (bin- and trinocular) is presented. Odometry provides the robot with a position estimate, and with the help of a model of the environment, sensor perceptions can be matched against predictions. Ultrasonic sensing supports collision avoidance and obstacle detection and thus enables navigation in simply structured environments. Model-based image processing allows natural landmarks in the stereo images to be detected and classified uniquely. With only one observation, the robot's position and orientation relative to the observed landmark are found precisely. This sensing strategy is used when high precision is necessary for the performance of the navigation task. Finally, techniques are described that allow automatic mapping of an unknown or only partially known environment.

Springer eBooks, Sep 9, 2017
What do we actually mean when we talk about “autonomous mobile robots”? This chapter gives an introduction to the technical side of autonomy in mobile robots at the current state of the art and provides the relevant technical background for discussions about autonomy in mobile robot systems. A short history of robotic development explains the origins of robotics and its development from industrial machines into autonomous agents. The term “autonomous robot” is examined in more detail by presenting the decision-making of different categories of robots and by introducing the general model of a rational agent for decision making. Additionally, a short outlook on the process of understanding the environment is given, with an overview of the individual steps from sensing up to the interpretation of a scene. Selected examples of modern robots illustrate the current state of the art and its limitations. Overall, this introduction provides the technical insight required for non-robotics experts to understand the term autonomous mobile robots and the implications for regulations concerning it.
This article describes a cooperative localization method for reconfigurable robots and multi-robot teams. In our approach, we use radio-based communication modules to determine the distances between multiple robots. With the help of cheap and simple sensors and suitable models of sensor measurements and robot motion, a particle filter can be applied to estimate the relative robot poses. In addition to tracking relative robot poses, our approach is also capable of solving the kidnapped robot problem for multiple cooperative robots.
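The core of a range-based particle filter as described in this abstract can be sketched as reweight-and-resample over candidate relative positions. This is a simplified position-only illustration (no orientation, no motion model); the noise level, particle count, and all names are our assumptions.

```python
import math
import random

def update(particles, measured_range, sigma=0.1):
    """Reweight relative-position particles by the Gaussian likelihood of
    the radio range measurement, then resample with replacement."""
    weights = [math.exp(-0.5 * ((math.hypot(x, y) - measured_range) / sigma) ** 2)
               for x, y in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
# Particles for robot B's position relative to robot A, initially uniform
# (this uninformed start is essentially the kidnapped robot situation).
particles = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(500)]
for _ in range(3):
    particles = update(particles, measured_range=2.0)
mean_range = sum(math.hypot(x, y) for x, y in particles) / len(particles)
print(mean_range)  # concentrates near the measured range of 2.0
```

A single range only constrains the particles to a circle around the measuring robot; ranges to further teammates, plus the motion model, are what collapse this ambiguity into a unique relative pose.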

Robots are no longer working isolated behind safety fences, and Human-Robot Collaboration (HRC) is becoming one of the most promising research topics for improving efficiency in many application scenarios. Sharing the same workspace, both human and robot should clearly understand each other's intentions and motions in order to enable an efficient and effective interaction. In this work we propose an AR-based system to show the robot's planned motion and target to the worker. We focused on representing this information in a way that is intuitive for inexperienced users. We introduced multi-modal communication feedback to enable the user to agree with or change the robot's plan using gestures and speech. The effectiveness of the system has been evaluated with test cases performed by a group of testers with no robotics experience. The results showed that the system helped the users to better understand the robot's intentions and planned motion, improving the ergonomics of and trust in the interaction. Furthermore, the evaluation included a rating of the different input modalities provided, in order to compare the proposed ways of communication.

Nowadays robots are able to work safely close to humans. They are lightweight, intrinsically safe and capable of avoiding obstacles as well as understanding and predicting human motions. In this collaborative scenario, communication between humans and robots is a fundamental aspect of achieving good efficiency and ergonomics in task execution. A lot of research has addressed robot understanding and prediction of human behavior, allowing the robot to replan its motion trajectories. This work focuses on communicating the robot's intentions to the human, making its goals and planned trajectories easily understandable. Visual and acoustic information has been added to give the human intuitive feedback for immediately understanding the robot's plan. This allows a better interaction and makes humans feel more comfortable, without the anxiety related to the unpredictability of the robot's motion. Experiments have been conducted in a collaborative assembly scenario. The results of these tests were collected in questionnaires, in which the participants reported the differences and improvements they experienced using the feedback communication system.

Modern robots are mainly controlled by monolithic black-box controllers provided by the individual manufacturers. In research institutions, the first version of the Robot Operating System (ROS1) is widely used for different applications. However, ROS1 lacks real-time capable communication. The ongoing development of ROS2 promises to break this paradigm. By employing the Data Distribution Service (DDS) as a middleware, the modular architecture aims at providing real-time capabilities. This study assesses the current prospects and limitations of ROS2. It gains novel insights towards improved and, in particular, reliable results regarding latencies and jitter. To this end, the allocation and transmission of ROS2 messages is evaluated in an example application for robotic control. An oscilloscope is used for external validation of the measurements in such a time-synchronized distributed network. The complete application is set up from non-real-time object detection through to real-time control via ROS2 and EtherCAT. An in-depth evaluation of the ROS2 communication stack on a single host and in distributed setups is included. With real-time safe memory allocation and highly privileged ROS2 processes, real-time capabilities are ensured.
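The latency and jitter figures such an evaluation produces can be sketched as plain statistics over send/receive timestamp pairs. This is generic measurement post-processing with synthetic numbers, not the paper's ROS2/DDS tooling; all names and values are ours.

```python
import statistics

def latency_stats(sent_ns, received_ns):
    """Per-message latencies plus jitter (population std. dev.), the kind
    of summary one computes when benchmarking a pub/sub transport.
    Timestamps are in nanoseconds; results in microseconds."""
    lat_us = sorted((r - s) / 1000.0 for s, r in zip(sent_ns, received_ns))
    return {
        "min_us": lat_us[0],
        "max_us": lat_us[-1],
        "mean_us": statistics.fmean(lat_us),
        "jitter_us": statistics.pstdev(lat_us),
    }

# Synthetic timestamps for four messages sent 1 ms apart.
sent = [0, 1_000_000, 2_000_000, 3_000_000]
received = [150_000, 1_120_000, 2_180_000, 3_150_000]
stats = latency_stats(sent, received)
print(stats["mean_us"], stats["jitter_us"])
```

In a distributed setup the catch is that the two clocks must be synchronized, which is exactly why the study cross-checks its measurements with an oscilloscope.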

The exploration of unknown environments is an important task for the new generation of mobile service robots. These robots are supposed to operate in dynamic and changing environments together with human beings and other static or moving objects. Sensors capable of providing the quality of information required for the described scenario are optical sensors such as digital cameras and laser scanners. In this paper, sensor integration and fusion for such sensors is described. Complementary sensor information is transformed into a common representation in order to achieve a cooperating sensor system. Sensor fusion is performed by matching the local perception of a laser scanner and a camera system against a global model that is built up incrementally. The Mahalanobis distance is used as the matching criterion and a Kalman filter is used to fuse matching features. A common representation including uncertainty and confidence is used for all scene features. The system's performance is demonstrated for the task of exploring an unknown environment and incrementally building up a geometric model of it.
... Dynamic Gestures as an Input Device for Directing a Mobile Platform ... In the following, the image-processing and experimenting system at our institute and our approach based on hidden Markov ... The sensor employed for hand tracking is not yet installed on a mobile platform. ...
A good map of its environment is essential for the efficient task execution of a mobile robot. Real-time map updating, especially in dynamic scenes, is a difficult problem due to noisy sensor data and limited observation time. This paper describes a mapping procedure which identifies new obstacles in a scene and constructs a 3D surface model of it. This description
Springer Tracts in Advanced Robotics, 2004
Commanding mobile robot assistants still requires classical user interfaces, which in general require an adaptation of the user to the man-machine interface. A more intuitive way of interacting with robot systems is achieved by verbal or gesture commands, which are more human-like. This article presents new approaches and enhancements to established methods that are in use in our laboratory for