The game of chess has sometimes been referred to as the Drosophila of artificial intelligence and cognitive science research: a standard task that serves as a test bed for ideas about the nature of intelligence and computational schemes for intelligent systems. The research encompasses both machine intelligence (how to program a computer to play good chess, the concern of artificial intelligence) and human intelligence (how to understand the processes that human masters use to play good chess, the concern of cognitive science). We comment on both in this chapter, but with emphasis on computers.
Revista Brasileira de Computação Aplicada
Computational modeling has enabled researchers to simulate tasks that are often impossible to study in practice, such as deciphering the workings of the human mind, and chess is used by many cognitive scientists as an investigative tool in studies of intelligence, behavioral patterns, and cognitive development and rehabilitation. Computer analysis of databases with millions of chess games allows players' cognitive development to be predicted and their behavioral patterns to be investigated. However, computers are not yet able to solve chess problems in which human intelligence analyzes and evaluates abstractly, without the need for many concrete calculations. The aim of this article is to describe and simulate a chess problem situation proposed by the British mathematician Sir Roger Penrose and thus provide an opportunity for a comparative discussion of human and artificial intelligence. To this end, a specialist chess computer program, Fritz 12, was used to simulate possible m...
International Joint Conference on Artificial Intelligence, 1991
Eminent researchers, including John McCarthy, Allen Newell, Claude Shannon, Herb Simon, Ken Thompson, and Alan Turing, put significant effort into computer chess research. Now that computers have reached the grandmaster level, and are beginning to vie for the World Championship, the AI community should pause to evaluate the significance of chess in the evolving objectives of AI.
Computer chess started as a promising domain of research in Artificial Intelligence (AI) more than five decades ago. The basic idea was that, as a 'thinking sport', it would be a good challenge toward better understanding and potentially simulating (human) cognition in machines. Unfortunately, it was soon discovered that computers could be made to play chess (and certain other games like it) using computational methods quite different from how humans are thought to think. This spawned a competitive computer chess gaming industry, and in 1997 the world chess champion was defeated by an IBM supercomputer. Since then, computer chess has seen further improvement, with programs playing at the strong grandmaster level even on desktop machines. In the field of AI, attention has therefore shifted to even more complex games such as Go, in the hope that computational approaches toward them will succeed where chess had apparently failed. In this paper I challenge that contention. I have reaso...
Harmonia Philosophica Papers, 2024
Playing chess is one of the first domains of human thinking to be conquered by computers. From the historic win of Deep Blue against chess champion Garry Kasparov until today, computers have completely dominated the world of chess, leaving no room for question as to who is the king of this sport. However, the better computers become at chess, the more obvious their basic disadvantage becomes: even though they can defeat any human at chess and play phenomenally great and intuitive moves, they do not seem to know what chess is. Recently, the advent of AlphaZero brought the level of computers to even higher grounds, but made that disadvantage even more obvious. Nowadays the best chess algorithms can find the best moves in any position without having any knowledge of any chess principle whatsoever. A new world of capabilities and knowledge lies open before us, but the darkness behind it is deeper than we could ever imagine. Neural network algorithms seem capable of being the best at something without even knowing what that something is, making questions regarding the essence of thought more important than ever.
AAAI Chess Track, Providence, RI, 1997
A multiagent chess-playing paradigm is defined. By defining spheres and strength degrees for pieces, winning strategies on games can be defined. The new Intelligent Tree Computing theories we have developed since 1994 can be applied to present precise strategies and prove theorems on games. The multiagent chess game model is defined by an isomorphism on multiboards and agents. Intelligent game trees are presented, and goal-directed planning is defined by tree-rewrite computation on intelligent game trees. Applying intelligent game trees, we define capture agents and give an overview of multiagent chess thinking. A game-tree intelligence degree is defined and applied to prove model-theoretic soundness and completeness. The game is viewed as a multiplayer game with perfect information only between agent pairs. The man-machine thought dilemma is briefly dispelled by addressing thinking in the absolute versus thinking for a precisely defined area, with a multiagent image of the computing mind.
Over the years, Artificial Intelligence has made astonishing strides in computer science. Today, computer games are regarded as a research field of Artificial Intelligence in their own right. Here, we use computer games to explore one of the most enigmatic areas of Artificial Intelligence. Game theory is the mathematical study of rational behavior in strategic environments. In a plethora of settings, most notably two-player zero-sum games, game theory provides strong and appealing solution concepts. These concepts have been used to create many game engines for games such as Chess and Tic-Tac-Toe. Techniques such as minimax, alpha-beta pruning, hash tables, and evaluation functions are used to build game engines. The Artificial Intelligence of a game engine can find the best moves using deep search techniques, and probabilistic reasoning can be used to optimize the computational capacity of an agent in uncertain and imperfect-information environments.
Computational Intelligence, 1996
SSRN Electronic Journal, 2020
Keynes's father, J. N. Keynes, was a ranked and rated chess master in tournament chess who played first board in OTB (over-the-board) matches for Cambridge University in the late 1870s and early 1880s. J. M. Keynes undoubtedly learned how to play chess from his father. However, what he also learned was the important role that intuition and perception play in OTB chess competition, but not in correspondence (postal) chess. The nearly three-hundred-year-old claim, originally made by J. Bentham, which is still the foundation for all classical, neoclassical, new classical, and new neoclassical theories, is that decision makers are able to calculate an optimal numerical outcome and an optimal numerical probability on which to base their future decisions (moves) in the game of life (chess) under no time constraint. Thus, the decision problem specified by Bentham is, by analogy, the type of situation faced in correspondence or postal chess. This is also what F. P. Ramsey's subjective approach to probability entails: there is no time constraint on the decision maker. Ramsey would have been a horrible (under 800 USCF) OTB chess player, since he would lose all his games on time as his clock fell. However, Ramsey's approach would have made him a very formidable postal or correspondence chess player, where one has no effective time constraint and can search for the best, optimal move, whereas OTB players are looking for a good move (Simon's satisfactory outcome) based on their intuitive perception from their study of similar positions in chess theory and competitive experience, given the unclear and ambiguous positions that often appear in middle-game situations on the chessboard.
Both Keynes and Simon understood that, in most situations on the (OTB) chessboard, as well as in the game of life, the operational time constraint makes it impossible to calculate a best or optimal outcome or probability over the board, or in real-life economic, political, and social decision problems. However, it is possible to discover what is called a good or interesting move, an approach Simon characterized as satisficing, as opposed to maximizing. I know of no tournament chess player of any rating in my lifetime who believes that they can generally calculate precisely over the board, excluding those who have become academic economists and have turned their backs on the qualities they know were required to become a good chess player, so as to be in step with the Benthamite Utilitarian economists' claim that all decision makers can calculate optimal or best future decisions.
Lecture Notes in Computer Science, 2013
In this article, we explain a new chess variant that is more challenging for humans than the standard version of the game. A new rule states that either player has the right to switch sides if a 'chain' or link of pieces is created on the board. This appears to increase significantly the complexity of chess, as perceived by players, but not the actual size of its game tree. 'Search' therefore becomes less of an issue. The advantage of this variant is that it allows research into board games to focus on the 'higher level' aspects of intelligence by building upon the approaches used in existing chess engines. We argue that this new variant can therefore more easily contribute to gaming AI than other games of high complexity.
2000
Article prepared for the ENCYCLOPEDIA OF ARTIFICIAL INTELLIGENCE, S. Shapiro (editor), D. Eckroth (Managing Editor), to be published by John Wiley, 1987.
Információs Társadalom
In 2017 AlphaZero, a neural network-based chess engine shook the chess world by convincingly beating Stockfish, the highest-rated chess engine. In this paper, I describe the technical differences between the two chess engines and based on that, I discuss the impact of the modeling choices on the respective epistemic opacities. I argue that the success of AlphaZero's approach with neural networks and reinforcement learning is counterbalanced by an increase in the epistemic opacity of the resulting model.
Computational Intelligence, 1996
This paper introduces METAGAMER, the first program designed within the paradigm of Metagame-playing (Metagame). This program plays games in the class of symmetric chess-like games, which includes chess, Chinese chess, checkers, draughts, and Shogi. METAGAMER takes as input the rules of a specific game and analyzes those rules to construct an efficient representation and an evaluation function for that game; these are used by a generic search engine. The strategic analysis performed by METAGAMER relates a set of general knowledge sources to the details of the particular game. Among other properties, this analysis determines the relative value of the different pieces in a given game. Although METAGAMER does not learn from experience, the values resulting from its analysis are qualitatively similar to values used by experts on known games and are sufficient to produce competitive performance the first time METAGAMER plays a new game. Besides being the first Metagame-playing program, this is the first program to have derived useful piece values directly from analysis of the rules of different games. This paper describes the knowledge implemented in METAGAMER, illustrates the piece values METAGAMER derives for chess and checkers, and discusses experiments with METAGAMER on both existing and newly generated games.
IEEE Computational Intelligence Magazine, 2006
and Games beat the best human players. Board games usually succumb to brute force methods of search (mini-max search, alpha-beta pruning, parallel architectures, etc.) to produce the very best players. Go is an exception, and has so far resisted machine attack. The best Go computer players now play at the level of a good novice (see [3], [4] for review papers and [5]-[8] for some recent research). Go strategy seems to rely as much on pattern recognition as it does on logical analysis, and the large branching factor severely restricts the look-ahead that can be used within a game-tree search. Games also provide interesting abstractions of real-world situations, a classic example being Axelrod's Prisoner's Dilemma [9]. Of particular interest to the computational intelligence community is the iterated version of this game (IPD), where players can devise strategies that depend upon previous behavior. An updated competition [10], celebrating the 20th anniversary of Axelrod's competition, was held at the 2004 IEEE Congress on Evolutionary Computation (Portland, Oregon, June 2004) and at the IEEE Symposium on Computational Intelligence and Games (Essex, UK, April 2005), and this still remains an extremely active area of research in areas as diverse as biology, economics, and bargaining, as well as EC. In recent years, researchers have been applying EC methods to evolve all kinds of game-players, including real-time arcade and console games (e.g., Quake, Pac-Man). There are many goals of this research, and one emerging theme is using EC to generate opponents that are more interesting and fun to play against, rather than being necessarily superior. Before discussing possible future research directions, it is interesting to note some of the achievements during the past 50 years or so, during which time games have held a fascination for researchers.
Games of Perfect Information
Games of perfect information are those in which all the available information is known by all the players at all times. Chess is the best-known example and has received particular interest, culminating with Deep Blue beating Kasparov in 1997, albeit with specialized hardware [11] and brute force search, rather than with AI/EC techniques. However, chess still receives research interest as scientists turn to learning techniques that allow a computer to 'learn' to play chess, rather than being 'told' how it should play (e.g., [12]-[14]). Learning techniques were being used for checkers as far back as the 1950s with Samuel's seminal work ([15], which was reproduced in [16]). This would ultimately lead to Jonathan Schaeffer developing Chinook, which won the world checkers title in 1994 [17], [18]. As was the case with Deep Blue, the question of whether Chinook used AI techniques is open to debate. Chinook had an opening and end game database. In certain games, it was able to play the entire game from these two databases. If this could not be achieved, then a form of mini-max search with alpha-beta pruning and a parallel architecture was used. Chinook is still the recognized world champion, a situation that is likely to remain for the foreseeable future. If Chinook is finally defeated, then it is almost certain that it will be by another computer. Even this is unlikely. On the Chinook Web site [19], there is a report of a tentative proof that the White Doctor opening is a draw. This means that any program using this opening, whether playing black or white, will never lose. Of course, if this proof is shown to be incorrect, then it is possible that Chinook can be beaten; but the team at the University of Alberta has just produced (May 14, 2005) a 10-piece endgame database that, combined with its opening game database, makes it a formidable opponent.
Despite the undoubted success of Chinook, the search has continued for a checkers player that is built using "true" AI techniques (e.g., [20]-[25]), where the playing strategy is learned through experience rather than being pre-programmed. Chellapilla and Fogel [20]-[22] developed Anaconda, named for the stranglehold it places on its opponent. It is also known as Blondie24 [22], which is the name it uses when playing on the Internet. This name was chosen in a successful attempt to attract players on the assumption they were playing against a blonde 24-year-old female. Blondie24 utilizes an artificial neural network with 5,046 weights, which are evolved by an evolutionary strategy. The inputs to the network are the current
Pattern recognition lies at the heart of the cognitive science endeavor. In this paper, we provide some criticism of this notion, using studies of chess as an example. The game of chess is, as significant evidence shows, a game of abstractions: pressures; force; open files and ranks; time; tightness of defense; old strategies rapidly adapted to new situations. These ideas do not arise in current computational models, which apply brute force and rote memorization. In this paper we assess the computational models of CHREST and CHUMP, and argue that chess chunks must contain semantic information. This argument leads to a new and contrasting claim, as we propose that key conclusions of Chase and Simon's (1973) influential study stemmed from a non sequitur. In the concluding section, we propose a shift in philosophy, from ''pattern recognition'' to a framework of ''experience recognition''.
2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), 2008
In the last two decades, the advancement of AI/CI methods in classical board and card games (such as Chess, Checkers, Othello, Go, Poker, Bridge, ...) has been enormous. In nearly all "world famous" games, humans have been decisively conquered by machines (Go remains almost the last redoubt of human supremacy). From this perspective, the natural question that comes to mind is whether there is still any need for further development of CI methods in this area. What kinds of goals can be achieved on this path? What are the challenging problems (if any) in this field? This paper discusses these issues with respect to classical board mind games and provides partial (though highly subjective) answers to some of the open questions.
1993
The point of game tree search is to insulate oneself from errors in the evaluation function. The standard approach is to grow a full-width tree as deep as time allows, and then value the tree as if the leaf evaluations were exact. This has been effective in many games because of the computational efficiency of the alpha-beta algorithm. A Bayesian would suggest instead to train a model of one's uncertainty. This model adds extra information in addition to the standard evaluation function. Within such a formal model, there is an optimal tree growth procedure and an optimal method of valuing the tree. We describe how to optimally value the tree, and how to approximate online the optimal tree to search. Our tree growth procedure provably approximates the contribution of each leaf to the utility in the limit where we grow a large tree, taking explicit account of the interactions between expanding different leaves. That is to say, our procedure estimates the relevance of each leaf and i...
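The contrast this abstract draws, valuing leaves as exact versus modeling one's uncertainty about them, can be illustrated with a toy sketch that is not the paper's algorithm: treat each leaf score at a depth-1 max node as a Gaussian, and estimate the node's value as the expected maximum by Monte Carlo. All names and the Gaussian assumption are ours, for illustration only:

```python
import random

def expected_max(leaves, samples=20000, seed=0):
    """Estimate E[max over children] at a max node, where `leaves`
    is a list of (mean, sigma) pairs: each child's evaluation is
    modeled as a Gaussian N(mean, sigma^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += max(rng.gauss(mu, sigma) for mu, sigma in leaves)
    return total / samples

# With exact evaluations (sigma = 0) this reduces to a plain max.
exact = expected_max([(1.0, 0.0), (0.9, 0.0)])   # ~1.0
# With noisy evaluations, the expected max exceeds the max of the
# means: either child might turn out to be the better one.
noisy = expected_max([(1.0, 0.5), (0.9, 0.5)])   # > 1.0
```

The gap between `noisy` and `exact` is exactly the information that standard alpha-beta throws away: under uncertainty, a nearly-as-good sibling still contributes to the node's value, which is why an uncertainty-aware procedure can rank leaf expansions differently than a full-width search.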
1991
Article prepared for the 2nd edition of the ENCYCLOPEDIA OF ARTIFICIAL INTELLIGENCE, S. Shapiro (editor), to be published by John Wiley, 1992. This report is for information and review only.
2003
Presented are issues in designing smart, believable software agents capable of playing strategy games, with particular emphasis on the design of an agent capable of playing Cyberwar XXI, a complex war game. The architecture of a personality-rich, advice-taking, game-playing agent that learns to play is described. The suite of computational-intelligence tools used by the advisers includes evolutionary computation and neural nets.
I. CONFLICT SIMULATIONS
Strategy games in general, and conflict simulations in particular, offer a fertile ground to study the power of computational intelligence (CI). Board games like Chess or Checkers are widely studied strategy games because the environment in which the user interacts with the game is not a simulation of the problem domain; it is the problem domain. As a result, many vexing problems like imperfect effectors and sensors, incomplete or uncertain data, and ill-defined goal states can be bypassed. However, games like Chess are only highly stylize...
International Conference on Artificial Intelligence and Statistics, 2001
Making decisions under uncertainty remains a central problem in AI research. Unfortunately, most uncertain real-world problems are so complex that progress in them is extremely difficult. Games model some elements of the real world, and offer a more controlled environment for exploring methods for dealing with uncertainty. Chess and chesslike games have long been used as a strategically complex test-bed for general AI research, and we extend that tradition by introducing an imperfect information variant of chess with some useful properties such as the ability to scale the amount of uncertainty in the game. We discuss the complexity of this game which we call invisible chess, and present results outlining the basic game. We motivate and describe the implementation and application of two information-theoretic advisors, and describe our decision-theoretic approach to combining these information-theoretic advisors with a basic strategic advisor. Finally we discuss promising preliminary results that we have obtained with these advisors.