1998
Issued by Sandia National Laboratories, operated for the United States Department of Energy by Sandia Corporation. NOTICE: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, nor any of their contractors, subcontractors, or their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government, any agency thereof, or any of their contractors or subcontractors. The views and opinions expressed herein do not necessarily state or reflect those of the United States Government, any agency thereof, or any of their contractors.
Proceedings of the 3rd British Conference on …, 2001
This paper presents the application of genetic programming (GP) to the task of evolving robot behaviours. The domain used here is the well-known wall-following problem. A set of programs was evolved that can successfully perform wall-following behaviours. Experiments involving different wall shapes were designed and implemented to investigate whether the solutions offered by GP are scalable. Experimental results show that GP is able to automatically produce algorithms for wall-following tasks. In addition, more complex wall shapes were introduced to verify that they do not affect our GP implementation detrimentally.
Working Notes for the AAAI Symposium on …, 1995
We have evaluated the use of Genetic Programming to directly control a miniature robot. The goal of the GP system was to evolve real-time obstacle-avoiding behaviour from sensorial data. The evolved programs are used in a sense-think-act context. We employed a novel technique to enable real-time learning with a real robot. The technique uses a probabilistic sampling of the environment, where each individual is tested on a new real-time fitness case in a tournament selection procedure. The fitness has a pain and a pleasure part. The negative part of the fitness, the pain, is simply the sum of the proximity sensor values. In order to keep the robot from standing still or gyrating, it has a pleasure component to its fitness: it gets pleasure from going straight and fast. The evolved algorithm shows robust performance even if the robot is lifted and placed in a completely different environment or if obstacles are moved around.
1995
One of the most general forms of representing and specifying behavior is by using a computer language. We have evaluated the use of the evolutionary technique of Genetic Programming (GP) to directly control a miniature robot. The goal of the GP system was to evolve real-time obstacle-avoiding behavior from sensorial data. The evolved programs are used in a sense-think-act context. We employed a novel technique to enable real-time learning with a real robot using genetic programming. To our knowledge, this is the first use of GP with a real robot. The method uses a probabilistic sampling of the environment, where each individual is tested on a new real-time fitness case in a tournament selection procedure. The robot's behavior is evolved without any knowledge of the task except for the feedback from a fitness function. The fitness has a pain and a pleasure part. The negative part of the fitness, the pain, is simply the sum of the proximity sensor values. In order to keep the robot from standing still or gyrating, it has a pleasure component to its fitness: it gets pleasure from going straight and fast. The evolved algorithm shows robust performance even if the robot is lifted and placed in a completely different environment or if obstacles are moved around.
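The pain/pleasure fitness described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' exact formulation: the sensor count, the use of wheel speeds, and the relative scaling of the terms are all assumptions.

```python
def fitness(proximity, left_speed, right_speed):
    """Illustrative pain/pleasure fitness for obstacle avoidance.

    proximity: proximity sensor readings (higher = closer obstacle)
    left_speed, right_speed: wheel speeds of the robot
    Lower values are better in this sketch.
    """
    # Pain: simply the sum of the proximity sensor values.
    pain = sum(proximity)
    # Pleasure: reward for going fast and straight; a large difference
    # between wheel speeds (gyrating) cancels the speed reward.
    straightness_penalty = abs(left_speed - right_speed)
    pleasure = (left_speed + right_speed) - straightness_penalty
    return pain - pleasure
```

A robot driving straight and fast in open space scores well (low), while one parked next to an obstacle, standing still, or spinning in place scores poorly.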
IEEE TRANSACTIONS ON SYSTEMS MAN AND …, 1993
Intelligent robots should be able to use sensor information to learn how to behave in a changing environment. As environmental complexity grows, the learning task becomes more and more difficult. We face this problem using an architecture based on learning classifier systems and on the structural properties of animal behavioural organization, as proposed by ethologists. After a description of the learning technique used and of the organizational structure proposed, we present experiments that show how behaviour acquisition can be achieved. Our simulated robot learns to follow a light and to avoid hot dangerous objects. While these two simple behavioural patterns are independently learnt, coordination is attained by means of a learning coordination mechanism.
1995
A very general form of representing and specifying an autonomous agent's behavior is by using a computer language. The task of planning feasible actions could then simply be reduced to an instance of automatic programming. We have evaluated the use of an evolutionary technique for automatic programming called Genetic Programming (GP) to directly control a miniature robot. To our knowledge, this is the first attempt to control a real robot with a GP-based learning method. Two schemes are presented. The objective of the GP system in our first approach is to evolve real-time obstacle-avoiding behavior from sensorial data. This technique enables real-time learning with a real robot using genetic programming; it has, however, the drawback that the learning time is limited by the response dynamics of the environment. To overcome this problem we have devised a second method that learns from past experiences stored in memory. This new system speeds up the algorithm by a factor of more than 2000. The emergence of the obstacle-avoiding behavior is also sped up by a factor of 40, enabling learning of this task in 1.5 minutes. This learning time is several orders of magnitude faster than comparable experiments with other control architectures. The algorithm is furthermore very compact and can be fitted into the micro-controller of the autonomous mobile miniature robot.
2000
A method for evolving behavior-based robot controllers using genetic programming is presented. Due to their hierarchical nature, genetic programs are useful for representing high-level knowledge in robot controllers. One drawback is the difficulty of incorporating sensory inputs. To bridge the gap between symbolic representation and direct sensor values, the elements of the function set in genetic programming are implemented as single-layer perceptrons. Each perceptron is composed of sensory input nodes and a decision output node. The robot learns proper behavior rules based on local, limited sensory information without using an internal map. First, it learns how to discriminate the target using single-layer perceptrons. Then, the learned perceptrons are applied to the function nodes of the genetic program tree which represents a robot controller. Experiments have been performed using Khepera robots. The presented method successfully evolved high-level genetic programs that control the robot to find the light source from sensory inputs.
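The idea of using a single-layer perceptron as a GP function node can be sketched as below. This is an assumed structure for illustration only; the actual weight learning, node arities, and tree representation in the paper are not reproduced here.

```python
def perceptron_node(weights, bias):
    """Build a GP function node: a single-layer perceptron mapping raw
    sensor readings to a binary decision (illustrative sketch; the
    weights and zero threshold are assumptions)."""
    def node(sensors):
        activation = sum(w * s for w, s in zip(weights, sensors)) + bias
        return 1 if activation > 0 else 0
    return node

def if_node(perceptron, then_branch, else_branch):
    """A GP tree can branch on a learned perceptron's decision,
    evaluating one of two subtrees against the same sensor vector."""
    def node(sensors):
        return then_branch(sensors) if perceptron(sensors) else else_branch(sensors)
    return node
```

In this sketch the perceptrons are trained first (e.g. to discriminate the light source), then frozen and composed by the GP search into a controller tree.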
Springer eBooks, 2007
Una-May O'Reilly is a research member of the Living Machines and Humanoid Robotics group in the Computer Science and …
Computación y …, 2010
In this paper we present the development of a method that combines the evolutionary robotics approach with action selection. A collection task is set in an arena where a Khepera robot has to collect cylinders that simulate food. Furthermore, two basic motivations, labeled as 'fear' and 'hunger', both affect the selection of the behavioral repertoire. In this paper we propose an initial evolutionary stage where behavioral modules are designed as separate selectable modules. Next, we use evolution for optimizing the motivated selection network employed for behavioral switching. Finally, we compare evolved selection with hand-coded selection, which offers some interesting results that support the use of a hybrid approach in the development of behavior-based robotics.
As far back as I can remember, I have always been fascinated by space. When I saw a documentary about the Russian moon programme many years ago, I realized that robots are great tools for space exploration: they can be operated from the earth without any risk to humans, or travel through space autonomously. It was then that I became interested in robotics, and my curiosity about the subject has grown over the years. The android Data from the television sci-fi series Star Trek, an article about the robot Genghis that taught itself to walk, a documentary about Luc Steels' robot experiments, and especially the deployment of the rover Sojourner on Mars all contributed to my interest in robotics.

An obligatory part of the computer science Master's program at the Delft University of Technology (DUT) is the research project, usually done in the fourth or fifth year. The main goal of this project is to gain experience in research. Since there is no course on intelligent robotics at the DUT, this was the perfect opportunity for me to learn more about robotics and artificial intelligence.

To get acquainted with the subject, I decided to start by reading some books and articles that gave a general overview of the field; the results can be found in the first part of this report. The first books and articles I read mentioned Rodney Brooks and his subsumption robot architecture quite often. When I did further research on this topic, I discovered the interesting subfield of behavior-based robotics, described in the second part of this report. The third part deals with a fairly new subject called evolutionary robotics, which allowed me to combine robotics with another interest of mine, genetic algorithms. During my search I found that many papers on robots are available on the World Wide Web; the sites and pages I used in my research are included in the appendix at the end of the report.
I would like to thank everyone who helped me with my research, especially Leon Rothkrantz for his supervision and guidance during this project.
Adaptive Behavior, 1997
We present a novel evolutionary approach to robotic control of a real robot based on genetic programming (GP). Our approach uses genetic programming techniques that manipulate machine code to evolve control programs for robots. This variant of GP has several advantages over a conventional GP system, such as higher speed, lower memory requirements and better real-time properties. Previous attempts to apply GP in robotics use simulations to evaluate control programs and have difficulties with learning tasks involving a real robot. We present an on-line control method that is evaluated in two different physical environments and applied to two tasks using the Khepera robot platform: obstacle avoidance and object following. The results show fast learning and good generalization.
This paper discusses the use of evolutionary computation to evolve behaviors that exhibit emergent intelligence. Genetic algorithms are used to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the actual robot. Some of the emergent behavior is described in detail.
Robotics and Autonomous Systems, 1998
We have used an automatic programming method called genetic programming (GP) for control of a miniature robot. Our earlier work on real-time learning suffered from the drawback of the learning time being limited by the response dynamics of the robot's environment. In order to overcome this problem we have devised a new technique which allows learning from past experiences that are stored in memory. The advantage of the new method shows in experiments, where perfect behavior emerges quickly and reliably. It is tested on two control tasks, obstacle-avoiding and wall-following behavior, both in simulation and on the real robot platform Khepera.
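Learning from stored past experiences, as described here, amounts to scoring each candidate program against a memory of recorded events instead of waiting for new real-time interaction. The sketch below is an assumed minimal structure for illustration; the class name, event fields, and scoring rule are not taken from the paper.

```python
from collections import deque

class ExperienceMemory:
    """Store past (sensors, action, cost) events and score candidate
    control programs against them (illustrative sketch)."""

    def __init__(self, capacity=50):
        # Bounded memory: old experiences are discarded automatically.
        self.events = deque(maxlen=capacity)

    def record(self, sensors, action, cost):
        self.events.append((sensors, action, cost))

    def score(self, program):
        # Penalize a program whenever it would repeat an action that
        # led to a high cost (e.g. high proximity readings) in the past.
        total = 0.0
        for sensors, action, cost in self.events:
            if program(sensors) == action:
                total += cost
        return total
```

Because scoring replays memory instead of driving the physical robot, thousands of individuals can be evaluated per second, which is the source of the large speed-ups reported in the related abstracts above.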
2003
This paper presents a Genetic Algorithm (GA) approach to evolving robot behaviours. We use fuzzy logic controllers (FLCs) to design robot behaviours. The antecedents of the FLCs are pre-designed, while their consequences are learned using a GA. The Sony quadruped robots are used to evaluate the proposed approach in the robotic football domain. Two behaviours, ball-chasing and position-reaching, are studied and implemented. An embodied evolution scheme is adopted, by which the robot autonomously evolves its behaviours based on a layered control architecture. The results show that robot behaviours can be automatically acquired through GA-based learning of FLCs.
2002
Genetic Programming was used to create the vision subsystem of a reactive obstacle avoidance system for an autonomous mobile robot. The representation of algorithms was specifically chosen to capture the spirit of existing, hand written vision algorithms. Traditional computer vision operators such as Sobel gradient magnitude, median filters and the Moravec interest operator were combined arbitrarily. Images from an office hallway were used as training data. The evolved programs took a black and white camera image as input and estimated the location of the lowest non-ground pixel in a given column. The computed estimates were then given to a handwritten obstacle avoidance algorithm and used to control the robot in real time. Evolved programs successfully navigated in unstructured hallways, performing on par with hand-crafted systems.
Entropy, 2020
RealTimeBattle is an environment in which robots controlled by programs fight each other. Programs control the simulated robots using low-level messages (e.g., turn radar, accelerate). Unlike other tools like Robocode, each of these robots can be developed using different programming languages. Our purpose is to generate, without human programming or other intervention, a robot that is highly competitive in RealTimeBattle. To that end, we implemented an Evolutionary Computation technique: Genetic Programming. The robot controllers created in the course of the experiments exhibit several different and effective combat strategies such as avoidance, sniping, encircling and shooting. To further improve their performance, we propose a function-set that includes short-term memory mechanisms, which allowed us to evolve a robot that is superior to all of the rivals used for its training. The robot was also tested in a bout with the winner of the previous “RealTimeBattle Championship”, which...
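A short-term memory mechanism in a GP function set typically means indexed read/write primitives over a small fixed store, so evolved programs can carry state (e.g. a last-seen enemy bearing) between control ticks. The sketch below shows one minimal way to do this; the slot count and wrap-around indexing are assumptions, not details from the paper.

```python
class ShortTermMemory:
    """Illustrative read/write primitives for a GP function set."""

    def __init__(self, slots=4):
        self.slots = [0.0] * slots

    def write(self, index, value):
        # Wrap the index so any evolved integer expression is a valid slot.
        self.slots[index % len(self.slots)] = value
        return value  # returning the value lets WRITE nest inside expressions

    def read(self, index):
        return self.slots[index % len(self.slots)]
```

Each evolved robot would own one such store, with READ and WRITE exposed as function-set nodes alongside the low-level sensor and actuator primitives.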
… Programming, 2002
We present the system SIGEL that combines the simulation and visualization of robots with a Genetic Programming system for the automated evolution of walking. It is designed to automatically generate control programs for arbitrary robots without depending on detailed analytical information of the robots' kinematic structure. Different fitness functions as well as a variety of parameters allow the easy and interactive configuration and adaptation of the evolution process and the simulations.
16th RoboCup International Symposium (RCS 2012), Lecture Notes in Computer Science, 2013
The development of high-level behavior for autonomous robots is a time-consuming task even for experts. This paper presents a Computer-Aided Software Engineering (CASE) tool, named Kouretes Statechart Editor (KSE), which enables the developer to easily specify a desired robot behavior as a statechart model utilizing a variety of base robot functionalities (vision, localization, locomotion, motion skills, communication). A statechart is a compact platform-independent formal model used widely in software engineering for designing software systems. KSE adopts the Agent Systems Engineering Methodology (ASEME) model-driven approach. Thus, KSE guides the developer through a series of design steps within a graphical environment that leads to automatic source code generation. We use KSE for developing the behavior of the Nao humanoid robots of our team Kouretes competing in the Standard Platform League of the RoboCup competition.