1996, AI Magazine
From 1955 to 1965, the best-known checker-playing program was Samuel's (1967, 1959). This work remains a milestone in AI research. Samuel's program reportedly beat a master and solved the game of checkers. Both journalistic claims were false, but they ...
1990
Chinook is the strongest 8 × 8 checkers program around today. Its strength is largely a result of brute-force methods. The program is capable of searching to depths that make it a feared tactician. As with chess, knowledge is the Achilles' heel of the program. However, unlike the chess example, endgame databases go a long way to overcoming this limitation. The program has precomputed databases that classify all positions with six or fewer pieces on the board as won, lost, or drawn (with the seven-piece databases under construction). The program came second to the human World Champion in the U.S. National Open, winning the right to play a World Championship match against him. Chinook is the first computer program in history to earn the right to play for a human World Championship.
Artificial Intelligence, 1992
The checkers program Chinook has won the right to play a 40-game match
ACM SIGART Bulletin, 1991
The checkers program Chinook has won the right to play a 40-game match for the World Checkers Championship against Dr. Marion Tinsley. This was earned by placing second, after Dr. Tinsley, at the U.S. National Open, the biennial event used to determine a challenger for the Championship. At the event, and the preceding Mississippi State Open, Chinook drew a four-game match against the World Champion and defeated the world's number 2, 4 and 6 players. This is the first time a program has earned the right to contest a human World Championship. In a practice match played as a prelude to the World Championship match later this year, Dr. Tinsley narrowly defeated Chinook, 7.5–6.5.
… of no chance: combinatorial games at …, 1996
In 1962, a checkers-playing program written by Arthur Samuel defeated a self-proclaimed master player, creating a sensation at the time for the fledgling field of computer science called artificial intelligence. The historical record refers to this event as having solved the game of checkers. This paper discusses achieving three different levels of solving the game: publicly (as evidenced by Samuel's
Proceedings of the 19th …, 2005
AI has had notable success in building high-performance game-playing programs to compete against the best human players. However, the availability of fast and plentiful machines with large memories and disks creates the possibility of solving a game. This has been done before for simple or relatively small games. In this paper, we present new ideas and algorithms for solving the game of checkers. Checkers is a popular game of skill with a search space of 10^20 possible positions. This paper reports on our first result. One of the ...
2023
Simple yet efficient, the minimax algorithm can be adopted to intelligently play turn-based games. Using the language of Python, this paper outlines a minimax-based approach to intelligently play a simplified two-player version of the game of Chinese Checkers. In addition to the overall minimax algorithm, this paper outlines possible additions to the algorithm in the form of the endgame routine and Alpha-Beta Pruning. Both of these additions to the Chinese Checkers game proved to be extremely effective at reducing runtime and increasing the win rate of the computer. Further, this research demonstrated the efficacy of incorporating aspects of board evaluation such as standard deviation and encouraging pieces to move toward the center. Interestingly, odd numbers of layers of search depth proved to be anomalies in the data collected as the algorithm attempted defensive play and was thus unable to determine a winning move.
William Shelley Branch was born in Hastings , Sussex on 4th July 1854. He was the son of William Branch and Elizabeth Shelley (born c 1826, Lewes). Elizabeth Shelley had married William Branch in Lewes in 1853 [marriage registered in Lewes during the third quarter of 1853]. The newly-weds moved to Hastings, where Mrs Elizabeth Branch started a dressmaking business. William Shelley Branch was born in Hastings the following year. William's brother Henry Edward Branch arrived some six years later [the birth of Henry Edward Branch was registered in Hastings during the third quarter of 1860]. When William Branch senior died, Mrs Elizabeth Branch returned with her two young boys to her home town of Lewes, where she set up a haberdasher's shop in the High Street (Mrs Elizabeth Branch is listed as a haberdasher in Lewes High Street in an 1866 trade directory). Some time before 1878, when he was in his early twenties, William Shelley Branch established a photographic studio which spanned No. 47 and 48 High Street, Lewes. Around 1879, William S. Branch sold this studio to Daubigny Hatch (Henry D'aubigny Hatch) and set up a photographic studio at his mother's fancy goods store at 16 High Street, Lewes. At the time of the 1881 census, Mrs Elizabeth Branch and her sons were living at 16 High Street, Lewes (also known as 16 School Hill), the location of the fancy goods shop and studio. Elizabeth Branch is described in the census return as a 55 year old widow, working as a dealer in wool, toys and other "fancy goods". Henry E. Branch, aged 20, gives his occupation as "News Reporter", while his older brother William S. Branch is entered on the census return as a "Photographer", aged 26. Around 1888, Mrs Elizabeth Branch and her two sons moved to Cheltenham in Gloucestershire. William Shelley Branch was then aged about thirty-four and his brother Henry was in his late twenties. 
William Shelley Branch established a photographic studio in Suffolk Road, Cheltenham, where he continued in business as a professional photographer for the next five years. Henry Branch, William's younger brother, worked as a journalist for the local newspaper. From around 1895, William S. Branch appears to have abandoned photography for journalism. When the 1901 census was taken, forty-six year old William S. Branch gave his occupation as "Journalist". His brother, Henry E. Branch, is described on the 1901 census return as a "Journalist, SR Editor & Reporter". Henry Branch is remembered today as the author of a study of Gloucestershire entitled "Cotswold and Vale: or Glimpses of Past and Present in Gloucestershire", which was published in Cheltenham in 1904. A keen chess player, William Shelley Branch is known today mainly as a chess historian and the author of an historical survey of the game entitled "A Sketch History of Chess", published in the British Chess Magazine in 1911. Between 1901 and 1932, William Shelley Branch wrote regular articles on the game of chess for the Cheltenham Chronicle and the Cheltenham Examiner. Recognised as an authority on the game, William S. Branch wrote articles on chess and other board games for newspapers at home and abroad. Between 1911 and 1912, William S. Branch wrote a series of articles for the American newspaper The Pittsburg Leader under the general heading of "The history of checkers from the earliest known date. Its evolution and growth". William Shelley Branch died in Cheltenham, Gloucestershire on 22nd January 1933, at the age of 78.
2011
In recent years, much research attention has been paid to evolving self-learning game players. Fogel's Blondie24 is just one demonstration of a real success in this field, and it has inspired many other scientists. In this thesis, artificial neural networks are employed to evolve game-playing strategies for the game of checkers by introducing a league structure into the learning phase of a system based on Blondie24. We believe that this helps eliminate some of the randomness in the evolution. The best player obtained is tested against an evolutionary checkers program based on Blondie24. The results obtained are promising. In addition, we introduce an individual and social learning mechanism into the learning phase of the evolutionary checkers system. The best player obtained is tested against an implementation of an evolutionary checkers program, and also against a player evolved within a round-robin tournament. The results are promising.
Parallel Problem Solving from Nature, PPSN XI, 2010
In this paper we describe and analyze a Computational Intelligence (CI)-based approach to creating evaluation functions for two-player mind games (i.e. classical turn-based board games that require mental skills, such as chess, checkers, Go, Othello, etc.). The method allows gradual, step-by-step training, starting with end-game positions and gradually moving towards the root of the game tree. In each phase a new training set is generated based on the results of previous training stages, and any supervised learning method can be used for actual development of the evaluation function. We validate the usefulness of the approach by employing it to develop heuristics for the game of checkers. Since in previous experiments we applied it to training evaluation functions encoded as linear combinations of game state statistics, this time we concentrate on the development of artificial neural network (ANN)-based heuristics.
Evolutionary Computation (CEC), …, 2011
Intuitively, it would seem that any learning algorithm would perform better if it were allowed to search deeper in the game tree. However, there has been some discussion as to whether the evaluation function or the depth of the search is the main contributory factor in the performance of the player. There has been some evidence suggesting that look-ahead (i.e., depth of search) is particularly important. In this work we provide a rigorous set of experiments which support this view. We believe this is the first time such an intensive study has been carried out for evolutionary checkers. Our experiments show that increasing the depth of the look-ahead significantly improves the performance of the checkers program and has a significant effect on its learning abilities.
2007
A new method of genetic evolution of linear and nonlinear evaluation functions in the game of checkers is presented. Several practical issues concerning the application of genetic algorithms to this task are pointed out and discussed. Experimental results confirm that the proposed approach leads to efficient evaluation functions comparable to those used in some commercial applications.
Robotics, 9(4), 107, 2020
Human-robot interaction in board games is a rapidly developing field of robotics. This paper presents a robot capable of playing Russian checkers, designed for entertainment, training, and research purposes. Its control program is based on a novel unsupervised self-learning algorithm inspired by AlphaZero and represents the first successful attempt to use this approach in checkers. The main engineering challenge in mechanics is to develop a board-state acquisition system insensitive to lighting conditions, which is achieved by rejecting computer vision and utilizing magnetic sensors instead. An original robot face is designed to endow the robot with the ability to express its attributed emotional state. Testing the robot at open-air multi-day exhibitions shows the robustness of the design to difficult operating conditions and the high interest of visitors in the robot.
IEEE Transactions on Computational Intelligence and AI in Games, 2009
IEEE Computational Intelligence Magazine, 2006
and Games beat the best human players. Board games usually succumb to brute-force methods of search (minimax search, alpha-beta pruning, parallel architectures, etc.) to produce the very best players. Go is an exception, and has so far resisted machine attack. The best computer Go players now play at the level of a good novice (see [3], [4] for review papers and [5]-[8] for some recent research). Go strategy seems to rely as much on pattern recognition as on logical analysis, and the large branching factor severely restricts the look-ahead that can be used within a game-tree search. Games also provide interesting abstractions of real-world situations, a classic example being Axelrod's Prisoner's Dilemma [9]. Of particular interest to the computational intelligence community is the iterated version of this game (IPD), where players can devise strategies that depend upon previous behavior. An updated competition [10], celebrating the 20th anniversary of Axelrod's competition, was held at the 2004 IEEE Congress on Evolutionary Computation (Portland, Oregon, June 2004) and at the IEEE Symposium on Computational Intelligence and Games (Essex, UK, April 2005), and this remains an extremely active area of research in fields as diverse as biology, economics and bargaining, as well as EC. In recent years, researchers have been applying EC methods to evolve all kinds of game players, including real-time arcade and console games (e.g., Quake, Pac-Man). There are many goals of this research; one emerging theme is using EC to generate opponents that are more interesting and fun to play against, rather than necessarily superior. Before discussing possible future research directions, it is interesting to note some of the achievements during the past 50 years or so, during which time games have held a fascination for researchers.
Games of Perfect Information Games of perfect information are those in which all the available information is known by all the players at all times. Chess is the best-known example and has received particular interest culminating with Deep Blue beating Kasparov in 1997, albeit with specialized hardware [11] and brute force search, rather than with AI/EC techniques. However, chess still receives research interest as scientists turn to learning techniques that allow a computer to 'learn' to play chess, rather than being 'told' how it should play (e.g., [12]-[14]). Learning techniques were being used for checkers as far back as the 1950s with Samuel's seminal work ([15], which was reproduced in [16]). This would ultimately lead to Jonathan Schaeffer developing Chinook, which won the world checkers title in 1994 [17], [18]. As was the case with Deep Blue, the question of whether Chinook used AI techniques is open to debate. Chinook had an opening and end game database. In certain games, it was able to play the entire game from these two databases. If this could not be achieved, then a form of mini-max search with alpha-beta pruning and a parallel architecture was used. Chinook is still the recognized world champion, a situation that is likely to remain for the foreseeable future. If Chinook is finally defeated, then it is almost certain that it will be by another computer. Even this is unlikely. On the Chinook Web site [19], there is a report of a tentative proof that the White Doctor opening is a draw. This means that any program using this opening, whether playing black or white, will never lose. Of course, if this proof is shown to be incorrect, then it is possible that Chinook can be beaten; but the team at the University of Alberta has just produced (May 14, 2005) a 10-piece endgame database that, combined with its opening game database, makes it a formidable opponent. 
Despite the undoubted success of Chinook, the search has continued for a checkers player built using "true" AI techniques (e.g., [20]-[25]), where the playing strategy is learned through experience rather than being pre-programmed. Chellapilla and Fogel [20]-[22] developed Anaconda, named for the stranglehold it places on its opponent. It is also known as Blondie24 [22], the name it uses when playing on the Internet; this name was chosen in a successful attempt to attract players on the assumption that they were playing against a blonde 24-year-old female. Blondie24 utilizes an artificial neural network with 5,046 weights, which are evolved by an evolutionary strategy. The inputs to the network are the current board position.
Developments in E-systems …, 2011
In recent years, much research attention has been paid to evolving self-learning game players. Fogel's Blondie24 is just one demonstration of a real success in this field, and it has inspired many other scientists. In this paper, evolutionary neural networks, evolved via an evolution strategy, are employed to evolve game-playing strategies for the game of Checkers. In addition, we introduce an individual and social learning mechanism into the learning phase of this evolutionary Checkers system. The best player obtained is tested against an implementation of an evolutionary Checkers program, and also against a player which has been evolved within a round-robin tournament. The results are promising and demonstrate that using individual and social learning enhances the learning process of the evolutionary Checkers system and produces a superior player compared to what was previously possible.
INTENSIF: Jurnal Ilmiah Penelitian dan Penerapan Teknologi Sistem Informasi, 2021
Checkers is a two-player board game whose goal is to defeat the opponent by capturing all of the opponent's pieces or leaving the opponent unable to make a move. Modern technology makes the game playable on a computer or even a smartphone, and the application of artificial intelligence lets it be played anywhere and at any time. Alpha-beta pruning is an optimization of the minimax algorithm that reduces the number of branches/nodes expanded, yielding better and faster move searches. In this study, a checkers game based on artificial intelligence is developed using the alpha-beta pruning method; the research is intended to explain in detail how artificial intelligence works in a game. Alpha-beta pruning was chosen because it can find the best moves quickly and precisely. The study tested the game with 10 respondents. The results show that the player's win rate w...
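As a rough illustration of the technique this abstract describes, the following is a minimal sketch of minimax with alpha-beta pruning over an abstract game tree (leaves are static evaluation scores). It is not the paper's implementation; all names are hypothetical.

```python
# Minimal alpha-beta pruning sketch. A node is either an int (a leaf's
# static evaluation) or a list of child nodes; players alternate levels.

def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Return the minimax value of `node`, skipping (pruning) branches
    that cannot change the result."""
    if isinstance(node, int):              # leaf: return its evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:              # beta cutoff: opponent avoids this line
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:              # alpha cutoff
                break
        return value

# Toy tree: max at the root, min below, max above the leaves.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
```

The pruning never changes the value returned; it only avoids visiting subtrees whose outcome is already bounded out of the reachable window, which is why the abstract can report faster searches with the same move quality.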
Abstract—Using machine learning algorithms provided by the Weka platform, this paper presents two studies of the evaluation-function parameters of a checkers program. In the first, six features are selected (the numbers of black men, red men, black kings, red kings, black pieces in danger, and red pieces in danger), each feature is assigned a weight, and the weighted features are fed into the machine learning algorithm; after testing, the important features are retained as evaluation parameters, thereby optimizing the parameters of the evaluation function. In the second, by analyzing the effect of removing the weighted counts of black and red kings from the program, we conclude that these two parameters have a large influence and cannot be removed from the evaluation function.
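A linear evaluation over the six features named in the abstract can be sketched as a weighted sum. The weights below are illustrative placeholders, not the values studied in the paper.

```python
# Sketch of a linear checkers evaluation over six hand-picked features.
# Scores are from Black's point of view; weights are invented for the
# example (the paper learns/tests its own).

FEATURES = ["black_men", "red_men", "black_kings", "red_kings",
            "black_in_danger", "red_in_danger"]

WEIGHTS = {
    "black_men":        1.0,   # own material counts for Black
    "red_men":         -1.0,   # opponent material counts against
    "black_kings":      1.5,   # kings are worth more than men
    "red_kings":       -1.5,
    "black_in_danger": -0.5,   # pieces under threat are discounted
    "red_in_danger":    0.5,
}

def evaluate(counts):
    """Score a position as the weighted sum of its feature counts."""
    return sum(WEIGHTS[f] * counts.get(f, 0) for f in FEATURES)

position = {"black_men": 5, "red_men": 4, "black_kings": 1,
            "red_kings": 2, "black_in_danger": 1, "red_in_danger": 0}
# 5*1.0 - 4*1.0 + 1*1.5 - 2*1.5 - 1*0.5 + 0*0.5 = -1.0
```

The paper's second study corresponds to zeroing out `black_kings` and `red_kings` here and observing how much the scores degrade, which is what shows those two parameters cannot be dropped.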
2012
It is intuitive that allowing a deeper search into a game tree will result in a superior player compared to one restricted in its search depth. Of course, searching deeper into the tree comes at increased computational cost, and this is one of the trade-offs that has to be considered in developing a tree-based search algorithm. There has been some discussion as to whether the evaluation function or the depth of the search is the main contributory factor in the performance of an evolved checkers player. Some previous research has investigated this question (on Chess and Othello), with differing conclusions, suggesting that different games place different emphases on these two factors. This paper provides the evidence for evolutionary checkers, and shows that look-ahead depth (as for Chess, perhaps unsurprisingly) is important. This is the first time that such an intensive study has been carried out for evolutionary checkers, and given the evidence provided for Chess and Othello, this is an important study that provides the evidence for another game. We arrived at our conclusion by evolving various checkers players at different ply depths and playing them against one another, again at different ply depths. This was combined with the two-move ballot (enabling more games against the evolved players to take place), which provides strong evidence that depth of look-ahead is important for evolved checkers players.
Expert Systems, 2007
Abstract: Two methods of genetic evolution of linear and non-linear heuristic evaluation functions for the game of checkers and give-away checkers are presented in the paper. The first method is based on the simplistic assumption that a relation 'close' to a partial order can be defined over the set of evaluation functions. Hence an explicit fitness function is not necessary in this case, and direct comparison between heuristics (a tournament) can be used instead. In the other approach a heuristic is developed step by step based on a set of training games. First the end-game positions are considered, and then the method gradually moves 'backwards' in the game tree up to the starting position; at each step the best-fitted specimen from the previous step (previous game-tree depth) is used as the heuristic evaluation function in the alpha-beta search for the current step. Experimental results confirm that both approaches lead to quite strong heuristics and give hope that a more sophisticated and more problem-oriented evolutionary process might ultimately provide heuristics of quality comparable to those of commercial programs.
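The tournament idea in the first method can be sketched abstractly: candidates are ranked purely by direct pairwise comparison, with no explicit fitness function. Everything below is hypothetical, and `play` stands in for an actual game between two heuristics.

```python
# Round-robin tournament fitness sketch: rank candidates by pairwise
# results only (2 points per win, 1 per draw). `play(a, b)` is assumed
# to return +1 if a wins, -1 if b wins, 0 for a draw.
from itertools import combinations

def round_robin_scores(candidates, play):
    """Score every candidate against every other, once per pair."""
    scores = {name: 0 for name in candidates}
    for a, b in combinations(candidates, 2):
        result = play(candidates[a], candidates[b])
        if result > 0:
            scores[a] += 2
        elif result < 0:
            scores[b] += 2
        else:
            scores[a] += 1
            scores[b] += 1
    return scores

# Toy demo: "heuristics" are plain numbers and the bigger one wins.
demo = round_robin_scores({"h1": 3, "h2": 1, "h3": 2},
                          lambda a, b: (a > b) - (a < b))
```

In an evolutionary setting the resulting scores can drive selection directly, which is exactly why no standalone fitness function is needed: the ordering induced by the tournament plays that role.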