2002
Artificially intelligent opponents in commercial computer games are almost exclusively controlled by manually designed scripts. With increasing game complexity, the scripts tend to become quite complex too. As a consequence they often contain "holes" that can be exploited by the human player. The research question addressed in this paper reads: How can evolutionary learning techniques be applied to improve the quality of opponent intelligence in commercial computer games? We study the off-line application of evolutionary learning to generate neural-network controlled opponents for a complex strategy game called PICOVERSE. The results show that the evolved opponents outperform a manually-scripted opponent. In addition, it is shown that evolved opponents are capable of identifying and exploiting holes in a scripted opponent. We conclude that evolutionary learning is potentially an effective tool to improve the quality of opponent intelligence in commercial computer games.
2010
The aim of this paper is to use a simple but powerful evolutionary algorithm called Evolution Strategies (ES) to evolve the connection weights and biases of feed-forward artificial neural networks (ANN), and to examine its learning ability through computational experiments in a non-deterministic and dynamic environment: the well-known arcade game Ms. Pac-Man. The resulting algorithm is referred to as an Evolution Strategies Neural Network, or ESNet. This study is an attempt to create an autonomous intelligent controller to play the game. The comparison of ESNet with two random systems, Random Direction (RandDir) and Random Neural Network (RandNet), yields promising results.
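The abstract gives no implementation details, but the core idea of an Evolution Strategies loop over the weights and biases of a feed-forward network can be sketched roughly as follows. This is a minimal illustration in Python; the layer sizes, mutation strength, and the `play_ms_pacman` fitness stand-in are assumptions, not taken from the paper.

```python
import numpy as np

def init_network(sizes, rng):
    """Random weights and biases for a feed-forward net with given layer sizes."""
    return [(rng.standard_normal((m, n)) * 0.5, rng.standard_normal(n) * 0.5)
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(net, x):
    """Propagate inputs through the network (tanh activations)."""
    for w, b in net:
        x = np.tanh(x @ w + b)
    return x

def mutate(net, sigma, rng):
    """Gaussian perturbation of every weight and bias (the ES mutation step)."""
    return [(w + sigma * rng.standard_normal(w.shape),
             b + sigma * rng.standard_normal(b.shape)) for w, b in net]

def play_ms_pacman(net):
    """Placeholder fitness: in the paper this would be the score obtained by
    letting the network choose movement directions in Ms. Pac-Man."""
    probe = np.linspace(-1.0, 1.0, 8)          # stand-in for game-state features
    return float(forward(net, probe).sum())    # stand-in for the game score

def evolve(mu=5, lam=20, generations=50, sigma=0.1, seed=0):
    """A (mu + lambda) Evolution Strategy over network parameters."""
    rng = np.random.default_rng(seed)
    population = [init_network([8, 16, 4], rng) for _ in range(mu)]
    for _ in range(generations):
        offspring = [mutate(population[rng.integers(mu)], sigma, rng)
                     for _ in range(lam)]
        candidates = population + offspring
        candidates.sort(key=play_ms_pacman, reverse=True)
        population = candidates[:mu]           # keep the best mu controllers
    return population[0]

best_controller = evolve()
```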
2006
Video games provide an opportunity and a challenge for soft computational intelligence methods, much as symbolic games did for "good old-fashioned artificial intelligence." This article reviews the achievements and future prospects of one particular approach, that of evolving neural networks, or neuroevolution. This approach can be used to construct adaptive characters in existing video games, and it can serve as a foundation for a new genre of games based on machine learning. Evolution can be guided by human knowledge, allowing the designer to control the kinds of solutions that emerge and encouraging behaviors that appear visibly intelligent to the human player. Such techniques may allow building video games that are more engaging and entertaining than current games, and that can serve as training environments for people. Techniques developed in these games may also be widely applicable in other fields, such as robotics, resource optimization, and intelligent assistants.
IEEE Computational Intelligence Magazine, 2006
Board games usually succumb to brute-force methods of search (mini-max search, alpha-beta pruning, parallel architectures, etc.) to produce the very best players. Go is an exception, and has so far resisted machine attack. The best Go computer players now play at the level of a good novice (see [3], [4] for review papers and [5]-[8] for some recent research). Go strategy seems to rely as much on pattern recognition as it does on logical analysis, and the large branching factor severely restricts the look-ahead that can be used within a game-tree search. Games also provide interesting abstractions of real-world situations, a classic example being Axelrod's Prisoner's Dilemma [9]. Of particular interest to the computational intelligence community is the iterated version of this game (IPD), where players can devise strategies that depend upon previous behavior. An updated competition [10], celebrating the 20th anniversary of Axelrod's competition, was held at the 2004 IEEE Congress on Evolutionary Computation (Portland, Oregon, June 2004) and at the IEEE Symposium on Computational Intelligence and Games (Essex, UK, April 2005), and this remains an extremely active area of research in fields as diverse as biology, economics and bargaining, as well as EC. In recent years, researchers have been applying EC methods to evolve all kinds of game players, including players for real-time arcade and console games (e.g., Quake, Pac-Man). There are many goals of this research, and one emerging theme is using EC to generate opponents that are more interesting and fun to play against, rather than necessarily superior. Before discussing possible future research directions, it is interesting to note some of the achievements during the past 50 years or so, during which time games have held a fascination for researchers.

Games of Perfect Information

Games of perfect information are those in which all the available information is known by all the players at all times. Chess is the best-known example and has received particular interest, culminating with Deep Blue beating Kasparov in 1997, albeit with specialized hardware [11] and brute-force search rather than with AI/EC techniques. However, chess still receives research interest as scientists turn to learning techniques that allow a computer to 'learn' to play chess, rather than being 'told' how it should play (e.g., [12]-[14]). Learning techniques were being used for checkers as far back as the 1950s with Samuel's seminal work ([15], which was reproduced in [16]). This would ultimately lead to Jonathan Schaeffer developing Chinook, which won the world checkers title in 1994 [17], [18]. As was the case with Deep Blue, the question of whether Chinook used AI techniques is open to debate. Chinook had an opening and an endgame database. In certain games, it was able to play the entire game from these two databases. If this could not be achieved, then a form of mini-max search with alpha-beta pruning and a parallel architecture was used. Chinook is still the recognized world champion, a situation that is likely to remain for the foreseeable future. If Chinook is finally defeated, then it is almost certain that it will be by another computer. Even this is unlikely. On the Chinook Web site [19], there is a report of a tentative proof that the White Doctor opening is a draw. This means that any program using this opening, whether playing black or white, will never lose.
Of course, if this proof is shown to be incorrect, then it is possible that Chinook can be beaten; but the team at the University of Alberta has just produced (May 14, 2005) a 10-piece endgame database that, combined with its opening game database, makes it a formidable opponent. Despite the undoubted success of Chinook, the search has continued for a checkers player that is built using "true" AI techniques (e.g., [20]-[25]), where the playing strategy is learned through experience rather than being pre-programmed. Chellapilla and Fogel [20]-[22] developed Anaconda, named for the stranglehold it places on its opponent. It is also known as Blondie24 [22], which is the name it uses when playing on the Internet. This name was chosen in a successful attempt to attract players on the assumption they were playing against a blonde 24-year-old female. Blondie24 utilizes an artificial neural network with 5,046 weights, which are evolved by an evolutionary strategy; the inputs to the network represent the current board position.
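For readers unfamiliar with the search machinery mentioned above, a bare-bones sketch of mini-max search with alpha-beta pruning (written in negamax form) is given below. The `evaluate`, `legal_moves`, and `apply_move` callbacks are placeholders for a concrete game such as checkers; they are not taken from any of the programs discussed.

```python
def alpha_beta(state, depth, alpha, beta, evaluate, legal_moves, apply_move):
    """Negamax search with alpha-beta pruning.

    Returns the value of `state` from the point of view of the player to move.
    `evaluate`, `legal_moves`, and `apply_move` are game-specific callbacks.
    """
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    best = float("-inf")
    for move in moves:
        child = apply_move(state, move)
        # The child is scored from the opponent's perspective, hence the negation.
        value = -alpha_beta(child, depth - 1, -beta, -alpha,
                            evaluate, legal_moves, apply_move)
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:   # the opponent will never allow this line: prune
            break
    return best
```

Chinook and Deep Blue combine this kind of search with opening and endgame databases; Blondie24 instead plugs its evolved neural network in as the `evaluate` function at the leaves of the search.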
Presented are issues in designing smart, believable software agents capable of playing strategy games, with particular emphasis on the design of an agent capable of playing Cyberwar XXI, a complex war game. The architecture of a personality-rich, advice-taking game-playing agent that learns to play is described. The suite of computational-intelligence tools used by the advisers includes evolutionary computation and neural nets.
In evolutionary learning of game-playing strategies, fitness evaluation is based on playing games with certain opponents. In this paper we investigate how the performance of these opponents and the way they are chosen influence the efficiency of learning. For this purpose we introduce a simple method for shaping the fitness function by sampling the opponents from a biased performance distribution. We compare the shaped function with existing fitness evaluation approaches that sample the opponents from an unbiased performance distribution or from a coevolving population. In an extensive computational experiment we employ these methods to learn Othello strategies and assess both the absolute and relative performance of the elaborated players. The results demonstrate the superiority of the shaping approach and can be explained by means of performance profiles, an analytical tool that evaluates the evolved strategies using a range of variably skilled opponents.
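The paper's central mechanism, shaping the fitness function by sampling opponents from a biased performance distribution, can be pictured with the small sketch below. The bias function, game counts, and outcome scoring are illustrative assumptions, not the paper's actual parameters.

```python
import random

def shaped_fitness(candidate, opponent_pool, play_game, bias, n_games=50, rng=None):
    """Estimate fitness against opponents drawn from a biased distribution.

    `opponent_pool` is a list of (opponent, skill) pairs; `bias` maps a skill
    value to a sampling weight, so e.g. stronger opponents are drawn more often
    than under uniform (unbiased) sampling.  `play_game(candidate, opponent)`
    is assumed to return 1 for a win, 0.5 for a draw and 0 for a loss.
    """
    rng = rng or random.Random()
    opponents, skills = zip(*opponent_pool)
    weights = [bias(s) for s in skills]
    score = 0.0
    for _ in range(n_games):
        opponent = rng.choices(opponents, weights=weights, k=1)[0]
        score += play_game(candidate, opponent)
    return score / n_games

# Example bias favouring higher-skilled opponents (an assumption, not the
# distribution used in the paper).
prefer_strong = lambda skill: skill ** 2
```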
Awale games have become widely recognized across the world for the innovative strategies and techniques used to evolve the playing agents, which have produced interesting results under various conditions. This paper compares the results of the two major machine learning techniques by reviewing their performance when using minimax, an endgame database, a combination of both, or other techniques, and determines which techniques are best.
IEEE Transactions on Systems, Man, and Cybernetics, 2007
We have recently shown that genetically programming game players, after having imbued the evolutionary process with human intelligence, produces human-competitive strategies for three games: backgammon, chess endgames, and robocode (a tank-fight simulation). Evolved game players are able to hold their own, and often win, against human or human-based competitors. This paper has a twofold objective: first, to review our recent results of applying genetic programming in the domain of games; second, to formulate the merits of genetic programming as a tool for developing strategies in general, and to discuss the possible design of a strategizing machine.
Computers have difficulty learning how to play Texas Hold'em Poker. The game contains a high degree of stochasticity, hidden information, and opponents that are deliberately trying to misrepresent their current state. Poker has a much larger game space than classic parlour games such as Chess and Backgammon. Evolutionary methods have been shown to find relatively good results in large state spaces, and neural networks have been shown to be able to find solutions to non-linear search problems. In this paper, we present several algorithms for teaching agents how to play No-Limit Texas Hold'em Poker using a hybrid method known as evolving neural networks. Furthermore, we adapt heuristics such as halls of fame and co-evolution to be able to handle populations of Poker agents, which can sometimes contain several hundred opponents, instead of a single opponent. Our agents were evaluated against several benchmark agents. Experimental results show the overall best performance was obtained by an agent evolved from a single population (i.e., with no co-evolution) using a large hall of fame. These results demonstrate the effectiveness of our algorithms in creating competitive No-Limit Texas Hold'em Poker agents.
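A rough sketch of the hall-of-fame idea mentioned above: the current generation is evaluated not only against its peers but also against an archive of past champions, which keeps the population from forgetting how to beat earlier strategies. The archive size and the `play_heads_up` outcome function are illustrative assumptions, not the paper's setup.

```python
import random

class HallOfFame:
    """Archive of past champion agents used as extra evaluation opponents."""

    def __init__(self, max_size=200):
        self.max_size = max_size
        self.members = []

    def add(self, champion):
        """Store the best agent of a generation, dropping the oldest if full."""
        self.members.append(champion)
        if len(self.members) > self.max_size:
            self.members.pop(0)

    def evaluate(self, agent, play_heads_up, n_games=20, rng=None):
        """Average winnings of `agent` against randomly drawn hall members.

        `play_heads_up(agent, opponent)` is assumed to return the chips won
        (positive) or lost (negative) over one heads-up match.
        """
        if not self.members:
            return 0.0
        rng = rng or random.Random()
        opponents = [rng.choice(self.members) for _ in range(n_games)]
        return sum(play_heads_up(agent, o) for o in opponents) / n_games
```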
2011
In recent years, much research attention has been paid to evolving self-learning game players. Fogel's Blondie24 is just one demonstration of a real success in this field, and it has inspired many other scientists. In this paper, evolutionary neural networks, evolved via an evolution strategy, are employed to evolve game-playing strategies for the game of Checkers. In addition, we introduce an individual and social learning mechanism into the learning phase of this evolutionary Checkers system. The best player obtained is tested against an implementation of an evolutionary Checkers program, and also against a player that has been evolved within a round-robin tournament. The results are promising and demonstrate that using individual and social learning enhances the learning process of the evolutionary Checkers system and produces a superior player compared to what was previously possible.
2007 IEEE Symposium on Computational Intelligence and Games, 2007
This paper describes the EvoTanks research project, a continuing attempt to develop strong AI players for a primitive 'Combat'-style video game using evolutionary computational methods with artificial neural networks. This is a small but challenging task, because an agent's actions must rely heavily on opponent behaviour. Previous investigation has shown that agents are capable of developing high-performance behaviours by evolving against scripted opponents; however, these behaviours are specific to the opponents they were trained against. This paper presents results from the use of coevolution on the same population. Results show that agents no longer fall into local maxima within the search space and are capable of converging on high-fitness behaviours specific to their population, without the use of scripted opponents.
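The coevolutionary set-up sketched in the abstract, in which each tank's fitness comes from combat against members of its own population rather than a fixed script, might look roughly like this. The `fight` outcome function and the number of opponents per agent are assumptions for illustration only.

```python
import random

def coevolutionary_fitness(population, fight, opponents_per_agent=5, rng=None):
    """Assign each agent a fitness based on matches against its own population.

    `fight(a, b)` is assumed to return 1 if `a` beats `b`, 0.5 for a draw and
    0 for a loss.  Because every agent is scored against current peers, the
    reference point moves as the population improves (no scripted opponents).
    """
    rng = rng or random.Random()
    fitness = {}
    for i, agent in enumerate(population):
        rival_indices = rng.sample(
            [k for k in range(len(population)) if k != i], opponents_per_agent)
        results = [fight(agent, population[j]) for j in rival_indices]
        fitness[i] = sum(results) / opponents_per_agent
    return fitness
```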
2003
Online learning in commercial computer games allows computer-controlled opponents to adapt to human player tactics. For online learning to work in practice, it must be fast, effective, robust, and efficient. This paper proposes a technique called "dynamic scripting" that meets these requirements. In dynamic scripting, an adaptive rule-base is used to generate intelligent opponents on the fly. The adaptive performance of dynamic scripting is evaluated in an experiment in which the adaptive players are pitted against a collective of manually designed tactics in a simulated computer role-playing game. The results indicate that dynamic scripting succeeds in endowing characters with adaptive performance. We therefore conclude that dynamic scripting can be successfully applied to the online adaptation of computer game opponents.
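The abstract summarises dynamic scripting only at a high level; the essential mechanism is a rulebase in which each rule carries a weight, rules for a new opponent script are drawn with probability proportional to their weights, and the weights are adjusted after each encounter according to the outcome. The sketch below is a simplified illustration; the initial weights, reward and penalty values, and weight bounds are assumptions, not the paper's exact parameters.

```python
import random

class DynamicScripter:
    """Weighted rulebase from which opponent scripts are generated on the fly."""

    def __init__(self, rules, script_size=5, w_min=1.0, w_max=100.0):
        self.rules = rules                       # list of rule identifiers
        self.weights = {r: 10.0 for r in rules}  # initial weights (assumed value)
        self.script_size = script_size
        self.w_min, self.w_max = w_min, w_max

    def generate_script(self, rng=None):
        """Select rules for the next opponent, biased towards high weights."""
        rng = rng or random.Random()
        script = []
        candidates = list(self.rules)
        for _ in range(min(self.script_size, len(candidates))):
            weights = [self.weights[r] for r in candidates]
            chosen = rng.choices(candidates, weights=weights, k=1)[0]
            script.append(chosen)
            candidates.remove(chosen)
        return script

    def update(self, script, won, reward=5.0, penalty=3.0):
        """Reinforce rules used in a winning script, punish them after a loss."""
        delta = reward if won else -penalty
        for rule in script:
            self.weights[rule] = min(self.w_max,
                                     max(self.w_min, self.weights[rule] + delta))
```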
Is it possible to develop a computer program that can learn to play different video games by itself, depending on its interactions with other players? Can video game characters learn new skills by interacting with human players? Can we make video games more interesting by allowing in-game characters to behave according to the human player's strategy? These are just some of the questions that video game developers and artificial intelligence researchers are working on. In this paper we present an evolutionary approach that uses a modified particle swarm optimization algorithm and artificial neural networks to answer these questions by allowing agents to respond to changes in their surroundings. Video games usually require intelligent agents to adapt to new challenges and optimize their own utility with limited resources, and our approach utilizes adaptive intelligence to improve an agent's game-playing strategies. This research is directly applicable to video game research and evolutionary gaming. The approach presented here can be further extended to develop intelligent systems for exploiting weaknesses in an evolutionary system.
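The combination of particle swarm optimization with neural networks described above can be pictured as treating each agent's flattened network weights as a particle position and moving the swarm towards the best-performing agents. The sketch below shows a standard (unmodified) PSO velocity update applied to weight vectors; the inertia and acceleration constants and the `fitness` game-score function are assumptions, not the paper's modified algorithm.

```python
import numpy as np

def pso_train_weights(fitness, dim, swarm_size=20, iterations=100,
                      w=0.7, c1=1.5, c2=1.5, seed=0):
    """Optimise a flattened neural-network weight vector with basic PSO.

    `fitness(weights)` is assumed to return the game score achieved by an
    agent whose network uses `weights`; higher is better.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (swarm_size, dim))   # particle positions = weight vectors
    vel = np.zeros((swarm_size, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(iterations):
        r1, r2 = rng.random((swarm_size, dim)), rng.random((swarm_size, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        values = np.array([fitness(p) for p in pos])
        improved = values > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], values[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest   # best weight vector found; reshape into the network's layers
```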
An artificial neural network is a system that tries, to varying degrees, to emulate the human brain in order to perform tasks that other computer systems are usually not fit to handle. Artificial neural networks are used in many different areas due to their ability to learn, adapt to many different tasks, and make complex predictions. In gaming, computer-controlled opponent behavior is usually rule-based and dependent on specific conditions, and can thus be predictable to a certain degree. As the field of AI and learning systems using artificial neural networks develops and expands, it is inevitable that its use in gaming will be explored thoroughly. This short survey looks at attempts to use artificial neural networks for opponents in board games and modern computer games, as well as at other uses in gaming over the last 20 years.
Neural Computing and Applications, 2020
Real-time strategy (RTS) games differ as they persist in varying scenarios and states. These games enable an integrated correspondence of non-player characters (NPCs) that appear to learn on their own in a dynamic environment, resulting in a combined attack of NPCs on the human-controlled character (HCC) with maximal damage. This research aims to empower NPCs with intelligent traits. Therefore, we introduce an approach combining ant colony optimization (ACO) with a genetic algorithm (GA) in a first-person shooter (FPS) game, Zombies Redemption (ZR). NPCs with the best-fit genes are selected to spawn NPCs over generations and game levels, as determined by the GA. Moreover, NPCs use ACO to select an optimal path that balances diverse incentives against the likelihood of being shot. The proposed technique is novel in that it integrates ACO and GA in FPS games, where an NPC uses ACO to exploit and optimize its current strategy, and the GA is used to share and explore strategies among NPCs. Moreover, it elaborates the mechanism of evolution through parameter use and updating over the generations. ZR was played by 450 players across levels with varying NPC traits and environmental constraints in order to accumulate experimental results. Results revealed improvement in NPC performance as the game proceeds.
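The ACO component described above amounts to NPCs choosing among candidate paths with a probability that rises with deposited pheromone and with a heuristic desirability (for example, cover, or distance from the player's line of fire). Below is a generic sketch of that transition rule; the alpha/beta exponents, evaporation rate, and `desirability` function are illustrative assumptions, not the ZR implementation.

```python
import random

def choose_path(paths, pheromone, desirability, alpha=1.0, beta=2.0, rng=None):
    """Pick a path with probability proportional to pheromone^alpha * desirability^beta."""
    rng = rng or random.Random()
    scores = [(pheromone[p] ** alpha) * (desirability(p) ** beta) for p in paths]
    total = sum(scores)
    return rng.choices(paths, weights=[s / total for s in scores], k=1)[0]

def deposit_pheromone(pheromone, path, reward, evaporation=0.1):
    """Evaporate all trails, then reinforce the path that was just used."""
    for p in pheromone:
        pheromone[p] *= (1.0 - evaporation)
    pheromone[path] += reward
```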
2017
The production of video games is a complex process which involves several disciplines, spanning from art to computer science. The final goal is to keep players entertained by continuously providing them with novel and challenging content. However, a large variety of pre-produced material is often not available. A similar problem can be found in many single-player game genres, where the simulated behaviour generated by the Artificial Intelligence algorithms must be coherent and believable, but also sufficiently varied to maintain a satisfactory user experience. To this aim, there is a growing interest in the introduction of automatic or semi-automatic techniques to produce and manage video game content. In this paper, we present an example of a strategic card battle video game based on the application of Artificial Intelligence and Genetic Algorithms, where the game content is dynamically adapted and produced during the game sessions.
2011
This paper describes an Evolutionary Algorithm for evolving the decision engine of a bot designed to play the Planet Wars game. This game, which was chosen for the Google Artificial Intelligence Challenge in 2010, requires the artificial player to deal with multiple objectives while achieving a certain degree of adaptability in order to defeat different opponents in different scenarios. The decision engine of the bot is based on a set of rules that have been defined after an empirical study. An Evolutionary Algorithm is then used for tuning the set of constants, weights and probabilities that define the rules and, therefore, the global behavior of the bot. The paper describes the Evolutionary Algorithm and the results attained by the decision engine when competing with other bots. The proposed bot defeated a baseline bot in most of the playing environments and obtained a ranking position in the top 20% of the Google Artificial Intelligence competition.
Journal of Computer Science and Technology, 2012
This paper investigates the performance and the results of an evolutionary algorithm (EA) specifically designed for evolving the decision engine of a program (which, in this context, is called a bot) that plays Planet Wars. This game, which was chosen for the Google Artificial Intelligence Challenge in 2010, requires the bot to deal with multiple target planets, while achieving a certain degree of adaptability in order to defeat different opponents in different scenarios. The decision engine of the bot is initially based on a set of rules that have been defined after an empirical study, and a genetic algorithm (GA) is used for tuning the set of constants, weights and probabilities that those rules include, and therefore, the general behaviour of the bot. Then, the bot is supplied with the evolved decision engine and the results obtained when competing with other bots (a bot offered by Google as a sparring partner, and a scripted bot with a pre-established behaviour) are thoroughly analysed. The evaluation of the candidate solutions is based on the result of non-deterministic battles (and environmental interactions) against other bots, whose outcome depends on random draws as well as on the opponents' actions. Therefore, the proposed GA is dealing with a noisy fitness function. After analysing the effects of the noisy fitness, we conclude that tackling randomness via repeated combats and reevaluations reduces this effect and makes the GA a highly valuable approach for solving this problem.
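The paper's remedy for the noisy fitness function, repeating battles and re-evaluating surviving individuals, can be sketched as averaging the outcome of several non-deterministic games and refreshing the stored fitness of retained individuals each generation. The battle counts below are illustrative, not the values used in the paper.

```python
def noisy_fitness(bot, run_battle, n_battles=10):
    """Average outcome over several non-deterministic battles.

    `run_battle(bot)` is assumed to return 1 for a win and 0 for a loss against
    the fixed sparring opponents; averaging reduces the variance of the estimate.
    """
    return sum(run_battle(bot) for _ in range(n_battles)) / n_battles

def reevaluate_survivors(survivors, run_battle, n_battles=10):
    """Refresh the fitness of retained individuals each generation.

    Re-evaluation prevents an individual that was merely lucky in earlier
    battles from surviving indefinitely on a stale, inflated fitness value.
    """
    return {bot: noisy_fitness(bot, run_battle, n_battles) for bot in survivors}
```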
2015 IEEE Conference on Computational Intelligence and Games (CIG), 2015
General Video Game Playing (GVGP) allows for the fair evaluation of algorithms and agents, as it minimizes the ability of an agent to exploit a priori knowledge in the form of game-specific heuristics. In this paper we compare four possible combinations of evolutionary learning using Separable Natural Evolution Strategies as our evolutionary algorithm of choice: linear function approximation with Softmax and ε-greedy policies, and neural networks with the same policies. The algorithms explored in this research play each of the games during a sequence of 1000 matches, where the score obtained is used as a measurement of performance. We show that learning is achieved in 8 out of the 10 games employed in this research, without introducing any domain-specific knowledge, leading the algorithms to maximize the average score as the number of games played increases.
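The two action-selection policies named above can be sketched as follows: action values come from a linear function approximator over the game-state features, and either an ε-greedy or a softmax rule turns those values into a choice. The feature and action counts and the temperature/exploration constants below are illustrative assumptions.

```python
import numpy as np

def action_values(features, weights):
    """Linear function approximation: one weight vector per action."""
    return weights @ features          # shape: (n_actions,)

def epsilon_greedy(q_values, epsilon=0.1, rng=None):
    """With probability epsilon pick a random action, otherwise the best one."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def softmax_policy(q_values, temperature=1.0, rng=None):
    """Sample an action with probability proportional to exp(Q / temperature)."""
    rng = rng or np.random.default_rng()
    z = q_values / temperature
    z = z - z.max()                     # subtract the max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(q_values), p=probs))

# Example: 4 actions, 16 state features (sizes are assumptions).
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 16))
state = rng.standard_normal(16)
action = epsilon_greedy(action_values(state, weights))
```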