2012
It is intuitive that allowing a deeper search into a game tree will produce a superior player compared to one whose search depth is restricted. Of course, searching deeper into the tree comes at increased computational cost, and this is one of the trade-offs that has to be considered when developing a tree-based search algorithm. There has been some discussion as to whether the evaluation function or the depth of the search is the main contributory factor in the performance of an evolved checkers player. Previous research has investigated this question (on Chess and Othello) with differing conclusions, suggesting that different games place different emphases on these two factors. This paper provides the evidence for evolutionary checkers and shows that look-ahead depth is important, as it is for Chess (perhaps unsurprisingly). This is the first time such an intensive study has been carried out for evolutionary checkers, and given the evidence already available for Chess and Othello, it is an important study that supplies the evidence for another game. We arrived at our conclusion by evolving checkers players at different ply depths and playing them against one another, again at different ply depths. This was combined with the two-move ballot (enabling more games against the evolved players to take place), which provides strong evidence that look-ahead depth is important for evolved checkers players.
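As an illustration of the kind of harness such experiments require, here is a minimal sketch in Python: two fixed-depth negamax players meet over a set of forced openings, in the spirit of the two-move ballot. A toy take-away game and a deliberately weak random evaluation stand in for checkers and an evolved network, so everything here other than the depth-vs-depth idea is an assumption.

```python
import random

class Nim:
    """Toy stand-in for checkers: remove 1-3 stones; taking the last stone wins."""
    def __init__(self, stones=15):
        self.stones = stones
    def moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]
    def play(self, n):
        return Nim(self.stones - n)
    def terminal_value(self):
        # From the player to move's perspective: no stones left means the
        # previous player took the last stone and won.
        return -1 if self.stones == 0 else None

def negamax(state, depth, evaluate):
    term = state.terminal_value()
    if term is not None:
        return term, None
    if depth == 0:
        return evaluate(state), None
    best_val, best_move = float("-inf"), None
    for m in state.moves():
        val = -negamax(state.play(m), depth - 1, evaluate)[0]
        if val > best_val:
            best_val, best_move = val, m
    return best_val, best_move

def play_game(depth_first, depth_second, evaluate, opening):
    """Return +1 if the first player wins, -1 otherwise."""
    state, to_move = Nim(15), 0
    for m in opening:                    # ballot-style forced opening moves
        state = state.play(m)
        to_move ^= 1
    while state.terminal_value() is None:
        depth = depth_first if to_move == 0 else depth_second
        _, move = negamax(state, depth, evaluate)
        state = state.play(move)
        to_move ^= 1
    return +1 if (to_move ^ 1) == 0 else -1   # the side that just moved won

noisy_eval = lambda s: random.uniform(-0.1, 0.1)   # weak placeholder heuristic
openings = [(a, b) for a in (1, 2, 3) for b in (1, 2, 3)]
score = sum(play_game(6, 2, noisy_eval, o) for o in openings)
print("net score of the deeper player over the ballot:", score)
```

With real checkers rules and an evolved evaluation function behind the same moves/play/terminal_value interface, the harness itself is unchanged.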
Evolutionary Computation (CEC), …, 2011
Intuitively, it would seem that any learning algorithm would perform better if it were allowed to search deeper in the game tree. However, there has been some discussion as to whether the evaluation function or the depth of the search is the main contributory factor in a player's performance. Some evidence suggests that look-ahead (i.e. depth of search) is particularly important. In this work we provide a rigorous set of experiments that support this view. We believe this is the first time such an intensive study has been carried out for evolutionary checkers. Our experiments show that increasing the look-ahead depth significantly improves the performance of the checkers program and has a significant effect on its learning abilities.
2007
A new method of genetic evolution of linear and nonlinear evaluation functions in the game of checkers is presented. Several practical issues concerning the application of genetic algorithms to this task are pointed out and discussed. Experimental results confirm that the proposed approach leads to efficient evaluation functions comparable to those used in some commercial applications.
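A minimal sketch of the idea, not the paper's actual method: a population of weight vectors for a linear evaluation function is evolved with truncation selection and Gaussian mutation. The five features and the synthetic "target" evaluation used as fitness are invented stand-ins; in the paper, fitness comes from game play.

```python
import random

N_FEATURES = 5
TARGET = [3.0, 5.0, -3.0, -5.0, 0.5]   # hidden weights the GA should recover

def linear_eval(weights, features):
    """Linear heuristic: a weighted sum of simple board statistics."""
    return sum(w * f for w, f in zip(weights, features))

def fitness(weights, samples):
    # Negative squared error against the synthetic target (higher is better).
    err = 0.0
    for feats in samples:
        err += (linear_eval(weights, feats) - linear_eval(TARGET, feats)) ** 2
    return -err

samples = [[random.uniform(-1, 1) for _ in range(N_FEATURES)] for _ in range(50)]
pop = [[random.uniform(-1, 1) for _ in range(N_FEATURES)] for _ in range(30)]
for gen in range(200):
    pop.sort(key=lambda w: fitness(w, samples), reverse=True)
    parents = pop[:10]                          # truncation selection
    pop = parents + [
        [w + random.gauss(0, 0.05) for w in random.choice(parents)]
        for _ in range(20)                      # Gaussian mutation offspring
    ]
best = max(pop, key=lambda w: fitness(w, samples))
print("evolved weights:", [round(w, 2) for w in best])
```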
2011
In recent years, much research attention has been paid to evolving self-learning game players. Fogel's Blondie24 is just one demonstration of real success in this field, and it has inspired many other scientists. In this thesis, artificial neural networks are employed to evolve game-playing strategies for the game of checkers by introducing a league structure into the learning phase of a system based on Blondie24. We believe this helps eliminate some of the randomness in the evolution. The best player obtained is tested against an evolutionary checkers program based on Blondie24, with promising results. In addition, we introduce an individual and social learning mechanism into the learning phase of the evolutionary checkers system. The best player obtained is tested against an implementation of an evolutionary checkers program, and also against a player that utilises a round-robin tournament; again, the results are promising.
Expert Systems, 2007
Two methods of genetic evolution of linear and non-linear heuristic evaluation functions for the game of checkers and give-away checkers are presented in the paper. The first method is based on the simplistic assumption that a relation 'close' to partial order can be defined over the set of evaluation functions. Hence an explicit fitness function is not necessary in this case, and direct comparison between heuristics (a tournament) can be used instead. In the second approach a heuristic is developed step by step based on a set of training games. First the end-game positions are considered, and the method then gradually moves 'backwards' in the game tree up to the starting position; at each step the best-fitted specimen from the previous step (the previous game-tree depth) is used as the heuristic evaluation function in the alpha-beta search for the current step. Experimental results confirm that both approaches lead to quite strong heuristics and give hope that a more sophisticated and more problem-oriented evolutionary process might ultimately provide heuristics of quality comparable to those of commercial programs.
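The first method's tournament-style fitness can be sketched as follows: candidates are ranked purely by pairwise comparisons, with no explicit fitness function. The noisy hidden-strength "match" below is an assumed stand-in for actual games between heuristics.

```python
import random

def match(a, b):
    """Return 1 if candidate a beats candidate b, else -1 (noisy comparison)."""
    return 1 if a + random.gauss(0, 0.5) > b + random.gauss(0, 0.5) else -1

def tournament_scores(population):
    """Round-robin: each candidate's score is its net wins over all others."""
    scores = {i: 0 for i in range(len(population))}
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            r = match(population[i], population[j])
            scores[i] += r
            scores[j] -= r
    return scores

pop = [random.uniform(0, 1) for _ in range(12)]   # "strength" is the genome here
for gen in range(30):
    scores = tournament_scores(pop)
    ranked = sorted(range(len(pop)), key=scores.get, reverse=True)
    survivors = [pop[i] for i in ranked[:6]]
    pop = survivors + [s + random.gauss(0, 0.05) for s in survivors]
print("best evolved strength:", max(pop))
```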
Developments in E-systems …, 2011
In recent years, much research attention has been paid to evolving self-learning game players. Fogel's Blondie24 is just one demonstration of a real success in this field and it has inspired many other scientists. In this paper, evolutionary neural networks, evolved via an evolution strategy, are employed to evolve game-playing strategies for the game of Checkers. In addition, we introduce an individual and social learning mechanism into the learning phase of this evolutionary Checkers system. The best player obtained is tested against an implementation of an evolutionary Checkers program, and also against a player, which has been evolved within a round robin tournament. The results are promising and demonstrate that using individual and social learning enhances the learning process of the evolutionary Checkers system and produces a superior player compared to what was previously possible.
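A loose sketch of how individual and social learning can be combined in an evolutionary loop, under assumptions of ours rather than the paper's exact mechanism: each individual improves by private trial-and-error mutation (individual learning) but occasionally imitates a genome from a shared pool of good solutions (social learning). The toy sphere-function fitness is a placeholder for fitness obtained from games of Checkers.

```python
import random

def fitness(genome):
    return -sum(x * x for x in genome)        # toy objective: maximise -(x^2)

pop = [[random.uniform(-2, 2) for _ in range(5)] for _ in range(20)]
social_pool = []                              # shared repository of good genomes
for gen in range(100):
    for i, g in enumerate(pop):
        if social_pool and random.random() < 0.2:
            # Social learning: imitate a randomly chosen stored solution.
            pop[i] = list(random.choice(social_pool))
        else:
            # Individual learning: private trial-and-error mutation.
            trial = [x + random.gauss(0, 0.1) for x in g]
            if fitness(trial) > fitness(g):
                pop[i] = trial
    best = max(pop, key=fitness)
    social_pool = sorted(social_pool + [best], key=fitness, reverse=True)[:5]
print("best fitness:", fitness(max(pop, key=fitness)))
```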
IEEE Computational Intelligence Magazine, 2006
… beat the best human players. Board games usually succumb to brute-force methods of search (mini-max search, alpha-beta pruning, parallel architectures, etc.) to produce the very best players. Go is an exception, and has so far resisted machine attack. The best Go computer players now play at the level of a good novice (see [3], [4] for review papers and [5]-[8] for some recent research). Go strategy seems to rely as much on pattern recognition as it does on logical analysis, and the large branching factor severely restricts the look-ahead that can be used within a game-tree search.

Games also provide interesting abstractions of real-world situations, a classic example being Axelrod's Prisoner's Dilemma [9]. Of particular interest to the computational intelligence community is the iterated version of this game (IPD), where players can devise strategies that depend upon previous behavior. An updated competition [10], celebrating the 20th anniversary of Axelrod's competition, was held at the 2004 IEEE Congress on Evolutionary Computation (Portland, Oregon, June 2004) and at the IEEE Symposium on Computational Intelligence and Games (Essex, UK, April 2005), and this remains an extremely active area of research in fields as diverse as biology, economics and bargaining, as well as EC. In recent years, researchers have been applying EC methods to evolve all kinds of game players, including real-time arcade and console games (e.g., Quake, Pac-Man). This research has many goals, and one emerging theme is using EC to generate opponents that are more interesting and fun to play against, rather than necessarily superior. Before discussing possible future research directions, it is interesting to note some of the achievements of the past 50 years or so, during which time games have held a fascination for researchers.

Games of Perfect Information

Games of perfect information are those in which all the available information is known by all the players at all times. Chess is the best-known example and has received particular interest, culminating in Deep Blue beating Kasparov in 1997, albeit with specialized hardware [11] and brute-force search rather than AI/EC techniques. However, chess still receives research interest as scientists turn to learning techniques that allow a computer to 'learn' to play chess, rather than being 'told' how it should play (e.g., [12]-[14]). Learning techniques were being used for checkers as far back as the 1950s with Samuel's seminal work ([15], which was reproduced in [16]). This work would ultimately lead to Jonathan Schaeffer developing Chinook, which won the world checkers title in 1994 [17], [18]. As was the case with Deep Blue, the question of whether Chinook used AI techniques is open to debate. Chinook had an opening and an end-game database; in certain games, it was able to play the entire game from these two databases. When this could not be achieved, a form of mini-max search with alpha-beta pruning and a parallel architecture was used. Chinook is still the recognized world champion, a situation that is likely to remain for the foreseeable future. If Chinook is finally defeated, it is almost certain to be by another computer, and even this is unlikely. On the Chinook Web site [19], there is a report of a tentative proof that the White Doctor opening is a draw, which means that any program using this opening, whether playing black or white, will never lose.

Of course, if this proof is shown to be incorrect, then it is possible that Chinook can be beaten; but the team at the University of Alberta has just produced (May 14, 2005) a 10-piece endgame database that, combined with its opening database, makes it a formidable opponent. Despite the undoubted success of Chinook, the search has continued for a checkers player built using "true" AI techniques (e.g., [20]-[25]), where the playing strategy is learned through experience rather than being pre-programmed. Chellapilla and Fogel [20]-[22] developed Anaconda, so named because of the stranglehold it places on its opponent. It is also known as Blondie24 [22], the name it uses when playing on the Internet; this name was chosen in a successful attempt to attract players on the assumption that they were playing against a blonde 24-year-old female. Blondie24 utilizes an artificial neural network with 5,046 weights, which are evolved by an evolutionary strategy. The inputs to the network are the current …
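A compact sketch of the Blondie24-style ingredients named above: a small feedforward network scores a board vector, and an evolution strategy mutates its weights with a self-adaptive step size. The network size, the step-size constant, and the random "board" are illustrative assumptions; the real system has 5,046 weights and obtains its fitness from games.

```python
import math, random

N_IN, N_HID = 32, 8     # 32 squares, as in checkers; hidden size is arbitrary

def make_net():
    w1 = [[random.gauss(0, 0.2) for _ in range(N_IN)] for _ in range(N_HID)]
    w2 = [random.gauss(0, 0.2) for _ in range(N_HID)]
    return {"w1": w1, "w2": w2, "sigma": 0.05}

def evaluate(net, board):
    """Feedforward pass: board vector -> scalar position score in (-1, 1)."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, board)))
              for row in net["w1"]]
    return math.tanh(sum(w * h for w, h in zip(net["w2"], hidden)))

def mutate(net):
    # Self-adaptive step size (lognormal rule, one common ES choice),
    # followed by Gaussian perturbation of every weight.
    n_weights = N_IN * N_HID + N_HID
    tau = 1.0 / math.sqrt(2.0 * n_weights)
    sigma = net["sigma"] * math.exp(tau * random.gauss(0, 1))
    w1 = [[w + sigma * random.gauss(0, 1) for w in row] for row in net["w1"]]
    w2 = [w + sigma * random.gauss(0, 1) for w in net["w2"]]
    return {"w1": w1, "w2": w2, "sigma": sigma}

board = [random.choice([-1, 0, 1]) for _ in range(N_IN)]  # -1 opp, 0 empty, +1 own
parent = make_net()
child = mutate(parent)
print(evaluate(parent, board), evaluate(child, board))
```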
IEEE Potentials, 1995
Physica D Nonlinear Phenomena, 1994
Evolution of game strategies is studied in the Erroneous Iterated Prisoner's Dilemma game, in which a player sometimes takes an erroneous action contrary to his own strategy. Erroneous games of this kind are known to lead to the evolution of a variety of strategies. This paper describes genetic fusion modeling as a particular source of new strategies. Successive actions are chosen according to strategies with finite memory capacity, and strategy algorithms are elaborated by genetic fusion. Such a fusion process introduces a rhizome structure into the genealogy tree. The emergence of module strategies functions as an innovative source of new strategies. How the extinction of strategies and module evolution lead to ESS-free open-ended evolution is also discussed.
Applications of Evolutionary Computation, 2016
We deal with the problem of automatic generation of complete rules for an arbitrary game. This requires a generic and accurate evaluation function that can be used to score games. Recently, it has been proposed that game quality can be measured using differences in the performance of various game-playing algorithms of different strengths; this is called Relative Algorithm Performance Profiles. We formalize this method into a generally applicable algorithm that estimates game quality against a set of model games with properties we want to reproduce. We applied our method to evolve chess-like board games. The results show that we can obtain playable and balanced games of high quality.
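The profile idea can be sketched as follows: a candidate game is scored by how closely the win rates between stronger and weaker players match the profile measured on model games. The play_match stub and all its numbers are assumptions; a real system would run actual game-playing agents on the candidate game.

```python
import random

def play_match(game, strong_depth, weak_depth, n=100):
    """Stub: fraction of games the stronger player wins in the candidate game."""
    edge = game["skill_sensitivity"] * (strong_depth - weak_depth)
    return sum(random.random() < 0.5 + edge for _ in range(n)) / n

PAIRS = [(4, 1), (4, 2), (3, 1)]          # (strong, weak) depth pairings
MODEL_PROFILE = [0.85, 0.75, 0.70]        # assumed profile of the model games

def quality(game):
    """Negative squared distance between the game's profile and the model's."""
    profile = [play_match(game, s, w) for s, w in PAIRS]
    return -sum((p - m) ** 2 for p, m in zip(profile, MODEL_PROFILE))

candidate = {"skill_sensitivity": 0.1}
print("quality:", quality(candidate))
```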
2023
Simple yet efficient, the minimax algorithm can be adapted to play turn-based games intelligently. Using the Python language, this paper outlines a minimax-based approach to playing a simplified two-player version of Chinese Checkers. In addition to the core minimax algorithm, the paper outlines two additions: an endgame routine and alpha-beta pruning. Both additions proved extremely effective at reducing runtime and increasing the computer's win rate. Further, this research demonstrated the efficacy of incorporating aspects of board evaluation such as standard deviation and encouraging pieces to move toward the center. Interestingly, odd search depths proved to be anomalies in the collected data, as the algorithm attempted defensive play and was thus unable to determine a winning move.
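Since the paper works in Python, here is a minimal alpha-beta negamax of the kind described, demonstrated on a tiny take-away game rather than Chinese Checkers (the game class, window, and depth are illustrative assumptions).

```python
class TakeAway:
    """Tiny stand-in game: remove 1-3 counters; taking the last counter wins."""
    def __init__(self, counters):
        self.counters = counters
    def moves(self):
        return [n for n in (1, 2, 3) if n <= self.counters]
    def play(self, n):
        return TakeAway(self.counters - n)
    def terminal_value(self):
        return -1 if self.counters == 0 else None   # player to move has lost

def alphabeta(state, depth, alpha=-2.0, beta=2.0, evaluate=lambda s: 0.0):
    term = state.terminal_value()
    if term is not None:
        return term
    if depth == 0:
        return evaluate(state)
    for move in state.moves():
        score = -alphabeta(state.play(move), depth - 1, -beta, -alpha, evaluate)
        if score >= beta:
            return beta                  # beta cutoff: prune the remaining moves
        alpha = max(alpha, score)
    return alpha

print(alphabeta(TakeAway(10), depth=12))  # 10 is a first-player win: prints 1
```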
Parallel Problem Solving from Nature, PPSN XI, 2010
In this paper we describe and analyze a Computational Intelligence (CI)-based approach to creating evaluation functions for two-player mind games (i.e. classical turn-based board games that require mental skill, such as chess, checkers, Go, or Othello). The method allows gradual, step-by-step training, starting with end-game positions and gradually moving towards the root of the game tree. In each phase a new training set is generated based on the results of previous training stages, and any supervised learning method can be used for the actual development of the evaluation function. We validate the usefulness of the approach by employing it to develop heuristics for the game of checkers. Since in previous experiments we applied it to training evaluation functions encoded as linear combinations of game-state statistics, this time we concentrate on the development of artificial neural network (ANN)-based heuristics.
Theoretical Computer Science, 2001
In this contribution we propose a class of strategies which focus on the game as well as on the opponent. Preference is given to the thoughts of the opponent, so that the strategy under investigation might be speculative. We describe a generalization of OM search, called (D, d)-OM search, where D stands for the depth of search by the player and d for the opponent's depth of search. A known difference in search depth can be exploited by purposely choosing a suboptimal variation, with the aim of gaining a larger advantage than when playing the objectively best move. The difference in search depth may mean that the opponent does not see the variation in sufficiently deep detail. We then give a pruning alternative for (D, d)-OM search, denoted by α-β² pruning. A best-case analysis shows that α-β² prunes very efficiently, comparable to the efficiency of α-β with regard to minimax. The effectiveness of the proposed strategy is confirmed by simulations using a game-tree model including an opponent model, and by experiments in the domain of Othello.
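A rough sketch of the opponent-model idea on an explicit toy tree: at our nodes we maximise our own evaluation, but at the opponent's nodes we do not assume the worst case; instead we follow the single reply that an opponent maximising their own (different) evaluation would choose. For brevity the opponent model here is one ply deep, i.e. d = 1; the tree and all scores are invented.

```python
CHILDREN = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
MY_EVAL = {"a1": 3, "a2": 8, "b1": 5, "b2": 2}       # our view of each leaf
OPP_EVAL = {"a1": -2, "a2": -1, "b1": -6, "b2": -1}  # opponent's own view

def opp_choice(node):
    """Model the opponent: they pick the child maximising their own evaluation."""
    return max(CHILDREN[node], key=lambda c: OPP_EVAL[c])

def om_value(node, my_turn=True):
    if node not in CHILDREN:                         # leaf position
        return MY_EVAL[node]
    if my_turn:
        return max(om_value(c, False) for c in CHILDREN[node])
    return om_value(opp_choice(node), True)          # follow the predicted reply

# Plain minimax on MY_EVAL would value the root at 3 (worst-case replies);
# the opponent model predicts weaker replies and values it at 8 instead.
print("OM value of root:", om_value("root"))
```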
Ieee Transactions on Computational Intelligence and Ai in Games, 2014
The term General Game Playing (GGP) refers to a subfield of Artificial Intelligence which aims at developing agents able to effectively play many games from a particular class (finite and deterministic). It is also the name of the annual competition, proposed by the Stanford Logic Group at Stanford University, which provides a framework for testing and evaluating GGP agents.
2007
We propose an approach for developing efficient search algorithms through genetic programming. Focusing on the game of chess, we evolve entire game-tree search algorithms to solve the Mate-in-N problem: find a key move such that, even with the best possible counterplay, the opponent cannot avoid being mated in (or before) move N. We show that our evolved search algorithms successfully solve several instances of the Mate-in-N problem, for the hardest ones developing 47% fewer game-tree nodes than CRAFTY, a state-of-the-art chess engine with a rating of 2614 points. The improvement is thus not over the basic alpha-beta algorithm, but over a world-class program using all standard enhancements.
In evolutionary learning of game-playing strategies, fitness evaluation is based on playing games against certain opponents. In this paper we investigate how the performance of these opponents, and the way they are chosen, influences the efficiency of learning. For this purpose we introduce a simple method for shaping the fitness function by sampling the opponents from a biased performance distribution. We compare the shaped function with existing fitness evaluation approaches that sample the opponents from an unbiased performance distribution or from a coevolving population. In an extensive computational experiment we employ these methods to learn Othello strategies and assess both the absolute and relative performance of the elaborated players. The results demonstrate the superiority of the shaping approach and can be explained by means of performance profiles, an analytical tool that evaluates the evolved strategies using a range of variably skilled opponents.
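A sketch of the shaping idea: fitness is an average result over opponents drawn from a chosen distribution, and biasing that distribution changes what the fitness measures. The opponent pool, the logistic win model, and the Gaussian bias centred at 0.6 are all stand-ins; in the paper the opponents are real Othello players.

```python
import math, random

OPPONENTS = [0.1 * k for k in range(11)]        # pool spanning weak to strong

def win_prob(candidate, opponent):
    """Assumed logistic model of the expected score against one opponent."""
    return 1.0 / (1.0 + math.exp(-8.0 * (candidate - opponent)))

def fitness(candidate, bias, n=200):
    # bias maps each opponent to a sampling weight; uniform weights recover
    # the unbiased baseline the paper compares against.
    weights = [bias(o) for o in OPPONENTS]
    total = 0.0
    for _ in range(n):
        opp = random.choices(OPPONENTS, weights=weights)[0]
        total += win_prob(candidate, opp)       # expected score vs. that opponent
    return total / n

uniform = lambda o: 1.0
shaped = lambda o: math.exp(-((o - 0.6) ** 2) / 0.02)  # concentrate near 0.6
print(fitness(0.55, uniform), fitness(0.55, shaped))
```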
Advances in Complex Systems, 2007
Lecture Notes in Computer Science, 2013
In this article, we describe a new chess variant that is more challenging for humans than the standard version of the game. A new rule states that either player has the right to switch sides if a 'chain', or link, of pieces is created on the board. This appears to significantly increase the complexity of chess as perceived by players, but not the actual size of its game tree; 'search' therefore becomes less of an issue. The advantage of this variant is that it allows research into board games to focus on the 'higher-level' aspects of intelligence by building upon the approaches used in existing chess engines. We argue that this new variant can therefore contribute to gaming AI more easily than other games of high complexity.
In Advances in …, 1997
Advances in technology allow for increasingly deep searches in competitive chess programs. Several experiments with chess indicate a constant improvement in a program's performance for deeper searches; a program searching to depth d+1 scores roughly 80% of the possible points in a match against a program searching to depth d. In other board games, such as Othello and checkers, additional plies of search translated into decreasing benefits, giving rise to diminishing returns for deeper searching. This paper demonstrates that there are diminishing returns in chess as well. However, the high percentage of errors made by chess programs at search depths up to 9 ply hides the effect.
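As a worked illustration (not from the paper itself), the 80%-per-ply figure can be converted into a rating difference with the standard Elo expected-score model: a score fraction s corresponds to a gap of 400·log10(s/(1−s)), about 240 Elo points at s = 0.8.

```python
import math

def elo_gain(score):
    """Rating difference implied by an expected score under the Elo model."""
    return 400.0 * math.log10(score / (1.0 - score))

for s in (0.8, 0.7, 0.6):   # diminishing returns: smaller score, smaller gain
    print(f"score {s:.0%} -> about {elo_gain(s):.0f} Elo")
```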
1995
In this paper, we employ the genetic programming paradigm to enable a computer to learn to play strategies for the ancient Egyptian board game Senet by evolving board evaluation functions. Formulating the problem in terms of board evaluation functions made it feasible to evaluate the fitness of game-playing strategies by using tournament-style fitness evaluation. The game has elements of both strategy and chance. Our approach learns strategies which enable the computer to play consistently at a reasonably skillful level.