2003
This paper presents a Genetic Algorithm (GA) approach to evolving robot behaviours. We use fuzzy logic controllers (FLCs) to design robot behaviours. The antecedents of the FLCs are pre-designed, while their consequents are learned using a GA. The Sony quadruped robots are used to evaluate the proposed approaches in the robotic football domain. Two behaviours, ball-chasing and position-reaching, are studied and implemented. An embodied evolution scheme is adopted, by which the robot autonomously evolves its behaviours based on a layered control architecture. The results show that the robot behaviours can be automatically acquired through the GA-based learning of FLCs.
2002
This paper presents a method that can automatically acquire fuzzy rules for Sony legged robot behaviours. We use fuzzy logic controllers (FLCs) to design robot behaviours. The antecedents of the FLCs are pre-designed, while their consequents are left for automatic acquisition. Once the consequents are optimally chosen, the robot behaviours are well designed. Genetic algorithms (GAs) are employed to search for the best FLC consequents. The Sony quadruped robots are used to evaluate the proposed approaches in the robotic football domain. Two behaviours, ball-chasing and position-reaching, are studied and implemented in both simulation and on real robots. The results show that the behaviours can be automatically acquired by evolving the FLCs.
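The consequent search described in this abstract can be sketched as a simple genetic algorithm over a table of rule consequents. All names, sizes, and the toy fitness below are illustrative assumptions, not taken from the paper; in the actual work, fitness would come from robot trials rather than agreement with a known target table.

```python
import random

# Hypothetical sketch: each FLC rule's antecedent is fixed; the chromosome
# holds one consequent-label index per rule.

N_RULES = 9          # e.g. a 3x3 grid of antecedent combinations (assumed)
N_LABELS = 5         # candidate consequent labels per rule (assumed)

TARGET = [i % N_LABELS for i in range(N_RULES)]  # stand-in for "good" consequents

def fitness(chrom):
    # In the paper, fitness comes from robot behaviour trials; here we score
    # agreement with a target table so the sketch is runnable.
    return sum(c == t for c, t in zip(chrom, TARGET))

def evolve(pop_size=30, gens=60, p_mut=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_LABELS) for _ in range(N_RULES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_RULES)
            child = a[:cut] + b[cut:]          # one-point crossover
            for i in range(N_RULES):           # per-gene mutation
                if rng.random() < p_mut:
                    child[i] = rng.randrange(N_LABELS)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the antecedents are fixed, the chromosome length equals the number of rules, which keeps the search space small enough for on-robot evaluation.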
Lecture Notes in Computer Science, 2002
This paper presents an evolutionary approach to learning a fuzzy logic controller (FLC) employed for reactive behaviour control of Sony legged robots. The learning scheme is divided into two stages. The first stage is structure learning, in which the rule base of the FLC is generated by a backup-updating learning scheme. The second stage is parameter learning, in which the parameters of the membership functions of the fuzzy sets are learned by a genetic algorithm (GA). Simulation results are provided to show the effectiveness of the proposed learning scheme.
Proc. of the 1st International …, 2001
Robotic soccer belongs to the class of multi-agent systems and involves many challenging sub-problems. Teams of robotic players have to cooperate in order to put the ball in the opposing goal while at the same time defending their own goal. The paper is concerned with the problem of learning and implementing reactive behaviors for robotic agents playing soccer. It briefly presents the whole control system designed for the Cerberus'01 Sony legged robot team that participated in the RoboCup 2001 competitions in Seattle, USA, and then introduces the developed reactive behavior for intercepting a moving ball while avoiding collisions with other robotic players and the play-field walls. For the implementation of this behavior, a fuzzy-neural trajectory generator (FNTG) has been developed and trained. A Genetic Algorithm (GA)-based approach has been employed to perform the learning process of the proposed FNTG.
Proceedings 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), 2003
Fuzzy logic plays an important role in the design of reactive robot behaviours. This paper presents a learning approach to the development of a fuzzy logic controller, based on delayed rewards from the real world. The delayed rewards are apportioned to the individual fuzzy rules by using reinforcement Q-learning. Efficient exploration of the solution space is one of the key issues in reinforcement learning. A specific genetic algorithm is developed in this paper to trade off the exploration of learning spaces against the exploitation of learned experience. The proposed approach is evaluated on reactive behaviours of football-playing robots.
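The credit-assignment idea in this abstract — apportioning a delayed reward to the individual fuzzy rules — can be sketched as a minimal fuzzy Q-learning update, where each rule keeps its own action values and receives a share of the TD error proportional to its firing strength. The class name, signatures, and constants below are hypothetical, not the paper's API.

```python
class FuzzyQ:
    """Per-rule Q-values; the TD error is shared among rules in proportion
    to each rule's firing strength (a hedged sketch, not the paper's code)."""

    def __init__(self, n_rules, n_actions, alpha=0.5, gamma=0.9):
        self.q = [[0.0] * n_actions for _ in range(n_rules)]
        self.alpha, self.gamma = alpha, gamma

    def value(self, strengths, actions):
        # Global Q-value: firing-strength-weighted mean of each rule's
        # chosen action value.
        s = sum(strengths)
        return sum(w * self.q[r][a]
                   for r, (w, a) in enumerate(zip(strengths, actions))) / s

    def update(self, strengths, actions, reward, next_best=0.0):
        # One TD step; credit for the error flows to each rule
        # proportionally to its normalised firing strength.
        td = reward + self.gamma * next_best - self.value(strengths, actions)
        s = sum(strengths)
        for r, (w, a) in enumerate(zip(strengths, actions)):
            self.q[r][a] += self.alpha * (w / s) * td

# Toy usage: two rules, the first firing more strongly, both choosing action 1.
fq = FuzzyQ(n_rules=2, n_actions=3)
for _ in range(50):
    fq.update(strengths=[0.8, 0.2], actions=[1, 1], reward=1.0)
```

After repeated rewards, the strongly firing rule accumulates proportionally more value for the rewarded action, which is exactly the apportioning the abstract describes.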
2008
This paper describes our approach to evolving a robot controller that generates a set of goalkeeper behaviours in the RoboCup domain, based on the genetic programming (GP) technique. The goalkeeper agent is a software construct that operates in a simulated environment provided by the Soccer Server. Our research aims to determine the conditions that need to be met for an evolved goalkeeper agent to perform adequately in a defensive situation. A framework is developed to test the agent's performance in the simulator. The impact of the sensor information that serves as the input to our GP implementation is addressed. The framework is evaluated by incorporating it within two established soccer agents, and the offline performance of each agent is then tested. The experimental results show that our approach is able to produce a robust robot controller and that the framework is flexible enough to be readily incorporated into other existing soccer agent frameworks.
2000
Abstract: We describe two architectures that autonomously acquire fuzzy control rules to provide reactive behavioural competencies in a simulated mobile robotics application. One architecture is a "Pittsburgh"-style Fuzzy Classifier System (Pitt1). The other architecture is a "Michigan"-style Fuzzy Classifier System (Mich1). We tested the architectures on their ability to acquire an "investigative" obstacle avoidance competency. We found that Mich1 implemented a more local …
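The Pittsburgh/Michigan distinction mentioned in this abstract can be stated in data-structure terms: in a Pittsburgh-style system a GA individual is a complete rule base and fitness measures the whole controller, while in a Michigan-style system each individual is a single rule, the population as a whole is the rule base, and credit (strength) is assigned per rule. A hedged sketch, with all names illustrative:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FuzzyRule:
    antecedent: Tuple[str, ...]   # e.g. ("obstacle_near", "goal_left")
    consequent: str               # e.g. "turn_right"

# Pittsburgh-style: one GA individual = a complete rule base;
# the GA population is a list of these.
PittIndividual = List[FuzzyRule]

@dataclass
class MichRule(FuzzyRule):
    # Michigan-style: one individual = one rule; the population *is* the
    # rule base, and strength is the per-rule credit the classifier
    # system updates.
    strength: float = 0.0
```

The trade-off the paper studies follows from this encoding: Pittsburgh fitness is global and slow to evaluate, while Michigan credit is local and fine-grained.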
Proceedings of the Second International Conference on Informatics in Control, Automation and Robotics, 2005
In this work we present the creation of a platform, along with an algorithm to evolve the learning of FLCs, aimed especially at the development of fuzzy controllers for mobile robot navigation. The structure has been proven on a Khepera robot. The conceptual aspects that underpin the work include topics such as Artificial Intelligence (AI), advanced control techniques, sensorial systems and mechatronics. Topics related to the control and automatic navigation of robotic systems, especially learning, are approached based on Fuzzy Logic theory and evolutionary computing. Our structure corresponds basically to a Classifier System, with appropriate modifications for the objective of generating controllers for mobile robot trajectories. On the one hand, more stress is placed on the genetic profile than on the characteristics of the individuals; on the other, the strategy for distributing the reinforcement is emphasized. These are the fundamental aspects to which the work seeks to contribute.
This paper discusses the use of evolutionary computation to evolve behaviors that exhibit emergent intelligent behavior. Genetic algorithms are used to learn navigation and collision avoidance behaviors for robots. The learning is performed under simulation, and the resulting behaviors are then used to control the actual robot. Some of the emergent behavior is described in detail.
Lecture Notes in Computer Science
We study behavioural patterns learned by a robotic agent by means of two different control and adaptive approaches: a radial basis function neural network trained by an evolutionary algorithm, and a traditional reinforcement Q-learning algorithm. In both cases, a set of rules controlling the agent is derived from the learned controllers, and these sets are compared. It is shown that both procedures lead to reasonable and compact, albeit rather different, rule sets.
Computational Intelligence in Control
In this chapter an evolutionary algorithm is developed to learn a fuzzy knowledge base for the control of a soccer micro-robot, driving it from any configuration belonging to a grid of initial configurations to hit the ball along the ball-to-goal line of sight. A relative coordinate system is used. The forward and reverse modes of the robot and its physical dimensions are incorporated, as well as special considerations for cases when, in its initial configuration, the robot is touching the ball.
1996
The implementation of behaviors for embodied autonomous agents by means of Fuzzy Logic Controllers (FLCs) has natural and engineering motivations. Fuzzy logic is recognized as a powerful means to represent the approximation intrinsic to human (and animal) reasoning and reacting. On the other hand, fuzzy logic shows flexibility and robustness, which are important in the implementation of artificial devices. Two aspects of the development of autonomous agents may be addressed by learning FLCs: the adaptation of the agent to the environment, and the reduction of design time and effort. In this paper, we present issues related to learning behaviors implemented as FLCs, and we propose our approach implemented in ELF (Evolutionary Learning for Fuzzy rules). We are using ELF to support the development of different types of agents. Finally, we present the results that we have obtained in both simulated and real environments.
International Journal of Intelligent Systems, 2005
This article presents a hybrid learning architecture for fuzzy control of quadruped walking robots in the RoboCup domain. It combines reactive behaviors with deliberative reasoning to achieve complex goals in uncertain and dynamic environments. To achieve real-time and robust control performance, fuzzy logic controllers (FLCs) are used to encode the behaviors, and a two-stage learning scheme is adopted to make these FLCs adaptive to complex situations. The first stage is called structure learning, in which the rule base of an FLC is generated by a Q-learning scheme. The second stage is called parameter learning, in which the parameters of the membership functions in the input fuzzy sets are learned using a real-valued genetic algorithm. Experimental results are provided to show the suitability of the architecture and the effectiveness of the proposed learning scheme.
Robotics and Autonomous Systems, 1998
This paper concerns the learning of basic behaviors in an autonomous robot. It presents a method to adapt basic reactive behaviors using a genetic algorithm. Behaviors are implemented as fuzzy controllers, and the genetic algorithm is used to evolve their rules. These rules are formulated in a fuzzy way using prefixed linguistic labels. In order to test the rules obtained in each generation of the genetic evolution process, a real robot has been used. Numerical results on the evolution rate of the different experiments, as well as an example of the fuzzy rules obtained, are presented and discussed.
International Journal of Approximate …, 1997
Fuzzy Logic Controllers constitute knowledge-based systems that include fuzzy rules and fuzzy membership functions to incorporate human knowledge into their knowledge base. The definition of fuzzy rules and fuzzy membership functions is one of the key questions when designing Fuzzy Logic Controllers, and is generally affected by subjective decisions. Some efforts have been made to improve system performance by incorporating learning mechanisms to modify the rules and/or membership functions of the FLC. Genetic Algorithms are probabilistic search and optimization procedures based on natural genetics. This paper proposes a way to apply Genetic Algorithms, with a learning purpose, to Fuzzy Logic Controllers, and presents an application designed to control the synthesis of the walk of a simulated 2-D biped robot.
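A minimal sketch of the kind of parameterisation such a GA would tune: a real-valued chromosome holding the peaks of overlapping triangular membership functions, combined by weighted-average defuzzification. The function names and shapes below are illustrative assumptions, not from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def defuzzify(x, centers, outputs):
    # `centers` plays the role of the chromosome: the peaks of overlapping
    # triangular sets, each foot lying on the neighbouring peak. The crisp
    # output is the firing-strength-weighted mean of the per-rule outputs.
    ws = []
    for i, b in enumerate(centers):
        a = centers[i - 1] if i > 0 else b - 1.0
        c = centers[i + 1] if i < len(centers) - 1 else b + 1.0
        ws.append(tri(x, a, b, c))
    s = sum(ws)
    return sum(w * o for w, o in zip(ws, outputs)) / s if s else 0.0
```

A GA individual would be one `centers` vector; moving a peak reshapes two adjacent sets at once, which is what makes the encoding compact enough for evolutionary search.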
2012
This paper is concerned with the learning of basic behaviors in autonomous robots. We present a method for the adaptation of basic reactive behaviors implemented as fuzzy controllers, applying a genetic algorithm to the evolution of the fuzzy rule system. We show our experiments in the evolution of control rules based on symbolic concepts represented as linguistic labels. The rules are formulated in a fuzzy way, and in order to test the rules obtained in each generation of the genetic algorithm a real robot has been used. The individual with the best performance is chosen to generate a new population (the elite strategy). All the new individuals were tested in the same real environment. In conclusion, the individuals of the last generation offer a set of rules that provides better performance than the ones designed by a non-expert designer.
Proceedings of the 15th IFAC World Congress, 2002, 2002
This paper presents a fuzzy logic controller (FLC) for the implementation of some behaviours of Sony legged robots. Adaptive Heuristic Critic (AHC) reinforcement learning is employed to refine the FLC. The actor part of the AHC is a conventional FLC in which the parameters of the input membership functions are learned by an immediate internal reinforcement signal. This internal reinforcement signal comes from a prediction of the evaluation value of a policy and the external reinforcement signal. The evaluation value of a policy is learned by temporal difference (TD) learning in the critic part, which is also represented by an FLC. A genetic algorithm (GA) is employed for learning the internal reinforcement of the actor part because it is more efficient in searching than other trial-and-error search approaches.
IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews), 2000
This paper presents the design and implementation of a coordination architecture for quadruped walking robots to learn and execute soccer-playing behaviors. A typical hybrid architecture combining reactive behaviors with deliberative reasoning is developed. The reactive behaviors directly map spatial information extracted from sensors into actions. The deliberative reasoning represents temporal constraints of a robot's strategy in terms of finite state machines. In order to achieve real-time and robust control performance in reactive behaviors, fuzzy logic controllers (FLCs) are used to encode the behaviors, and a two-stage learning scheme is adopted to make these FLCs adaptive to complex situations. Experimental results are provided to show the suitability of the architecture and the effectiveness of the proposed learning scheme.
Proceedings of the Genetic and Evolutionary Computation Conference, 2017
In this contribution we propose a hybrid genetic programming approach for evolving a decision-making system in the domain of RoboCup Soccer (Simulation League). Genetic programming has rarely been used in this domain in the past, due to the difficulties and restrictions of the soccer simulation. The real-time requirements of robot soccer and the lengthy evaluation time, even for simulated games, provide a formidable obstacle to the application of evolutionary approaches. Our new method uses two evolutionary phases, each of which compensates for restrictions and limitations of the other. The first phase produces some evolved GP individuals using an off-game evaluation system which can be trained on snapshots of game situations as they actually happened in earlier games, with the corresponding decisions tagged as correct or wrong. The second phase uses the best individuals of the first phase as input to run another GP system to evolve players in a real game environment, where the quality of decisions is evaluated through winning or losing during real-time runs of the simulator. We benchmark the new system against a baseline system used by most Simulation League teams, as well as against winning systems of the 2016 tournament. CCS CONCEPTS: • Computer systems organization → Evolutionary robotics; • Software and its engineering → Genetic programming.
Robotics and Autonomous Systems, 2007
In a multi-robotic system, robots interact with each other in a dynamically changing environment. The robots need to be intelligent both at the individual and group levels. In this paper, the evolution of a fuzzy behavior-based architecture is discussed. The behavior-based architecture decomposes the complicated interactions of multiple robots into modular behaviors at different complexity levels. The fuzzy logic approach brings human-like reasoning to behavior construction, selection and coordination. Various behaviors in the fuzzy behavior-based architecture are evolved by a genetic algorithm (GA). At the lowest level of the architecture hierarchy, the evolved fuzzy controllers enhance the smoothness and accuracy of the primitive robot actions. At a higher level, the individual robot behaviors become more skillful after the evolution. At the topmost level, the evolved group behaviors result in an aggressive competition strategy. The simulation and real-world experiments on a robot-soccer system demonstrate the effectiveness of the approach.