Unit 3 AI

Game theory is the study of strategies among rational decision-makers, focusing on interdependent outcomes in various games. The document discusses issues in expert systems, including knowledge representation, acquisition, and handling uncertainty, along with solutions for each. Additionally, it covers limitations of game search algorithms and the application of AI techniques in solving Tic-Tac-Toe, highlighting methods like Minimax, Alpha-Beta Pruning, and Reinforcement Learning.


1. What is Game Theory?

Game theory is the study of strategic decision-making. It determines the logical and mathematical actions that players should take in order to obtain the best possible outcomes for themselves. The games studied in game theory range from chess to tennis and from child-rearing to corporate takeovers, but they share one common feature: they are interdependent, i.e. the outcome for each player depends upon the strategies of all players.
In other words, game theory deals with mathematical models of cooperation and conflict between rational decision-makers. It can be defined as the study of decision-making in which each player must choose strategies that affect the interests of the other players.

Q. Differentiate between Stochastic Games and Partial (Imperfect Information) Games.

| Feature | Stochastic Games | Partial (Imperfect Information) Games |
|---|---|---|
| Definition | Games where the outcome or next state is partly determined by randomness or probability (in addition to players' actions). | Games where one or more players do not have full knowledge of the game state or opponents' actions. |
| Origin of Uncertainty | From random events or chance, like dice rolls or probabilistic transitions. | From hidden or unknown information such as private cards, hidden moves, or hidden player roles. |
| Information Available | Players usually have complete knowledge of the current state but are uncertain about future outcomes due to randomness. | Players have incomplete knowledge of the current state or other players' choices. |
| State Transitions | Transitions from one state to another are probabilistic and depend on both player actions and random events. | State transitions are deterministic or non-random, but players don't have full visibility of the game state. |
| Players' Knowledge | Players know the structure of the game and current state, but not future states due to randomness. | Players may not know opponents' strategies, previous moves, or even their own position fully. |
| Memory of Past Moves | Often perfect recall – players remember past states. | May or may not include perfect recall; focus is on lack of visibility. |
| Examples | Backgammon (dice rolls), Markov Decision Processes (MDPs), board games with chance cards | Poker (hidden cards), Battleship (hidden ship positions), Stratego |
| Use in AI/ML | Often modeled using Reinforcement Learning with stochastic policies or environments. | Used in Game Theory, Bayesian Games, and decision-making under uncertainty. |
| Game Type | Typically multi-stage with random transitions between stages. | Typically involve strategic thinking based on limited knowledge. |

Q. What are the issues that need to be addressed for building expert systems (ES) efficiently?
Explain the solutions to them.

✅ 1. Knowledge Representation

Issue: How to represent domain knowledge in a format the system can use.

Solution:

 Use production rules (IF...THEN), semantic networks, frames, or ontologies.

 Choose formats that are expressive and easy to update.

 Tools like Prolog, CLIPS, or rule-based systems help with formalizing knowledge.

✅ 2. Knowledge Acquisition

Issue: Gathering accurate and complete knowledge from human experts is difficult.

Solution:

 Use structured interviews, observations, or machine learning to extract knowledge.

 Develop knowledge acquisition tools and interfaces to ease input by domain experts.

✅ 3. Inference Mechanism

Issue: Choosing the right reasoning strategy (forward chaining, backward chaining) for efficient
problem-solving.

Solution:

 Use Forward chaining for data-driven tasks (starting from facts to conclusions).

 Use Backward chaining for goal-driven tasks (start from the goal and find supporting facts).

 Use efficient search techniques (like depth-first, breadth-first, or best-first) depending on the
case.
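To make the data-driven case concrete, a minimal forward-chaining loop can be sketched in Python. The rules and facts below are hypothetical illustrations, not from any particular expert-system shell:

```python
def forward_chain(facts, rules):
    """Data-driven inference: repeatedly fire rules whose premises are
    all known facts, until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)  # rule fires, new fact derived
                changed = True
    return facts

# Hypothetical rule base: IF fever AND cough THEN flu_suspected, etc.
rules = [
    (["fever", "cough"], "flu_suspected"),
    (["flu_suspected"], "recommend_test"),
]
derived = forward_chain(["fever", "cough"], rules)
# derived now also contains "flu_suspected" and "recommend_test"
```

Backward chaining would run the same rule base in reverse: start from a goal such as "recommend_test" and recursively look for rules whose conclusions match it.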
✅ 4. Handling Uncertainty

Issue: Expert systems may encounter incomplete, vague, or uncertain information.

Solution:

 Use probabilistic reasoning (e.g., Bayesian networks).

 Apply fuzzy logic to handle imprecise inputs.

 Use certainty factors to assign confidence levels to rules.
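As one example, certainty factors from two rules that support the same conclusion can be merged with the MYCIN-style combination formula; a small sketch:

```python
def combine_cf(cf1, cf2):
    """Combine two certainty factors (each in [-1, 1]) for the same
    hypothesis, using the MYCIN combination rule."""
    if cf1 >= 0 and cf2 >= 0:          # both rules support the hypothesis
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:          # both rules oppose it
        return cf1 + cf2 * (1 + cf1)
    # conflicting evidence
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two rules conclude the same diagnosis with CF 0.6 and 0.4:
print(combine_cf(0.6, 0.4))  # 0.76
```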

✅ 5. Scalability and Performance

Issue: Large rule sets can lead to slow response times and complex management.

Solution:

 Organize rules in a hierarchical or modular way.

 Use efficient rule-matching algorithms like the Rete algorithm.

 Apply indexing, caching, and rule prioritization.

✅ 6. Explanation Capability

Issue: Users need to understand the system’s reasoning process.

Solution:

 Integrate an explanation subsystem that provides traceable reasoning:

o “Why was this conclusion made?”

o “Why wasn’t this option chosen?”

✅ 7. Maintenance and Updatability

Issue: Keeping the system updated as domain knowledge evolves.

Solution:

 Use knowledge base management tools.

 Store rules and knowledge in an easily editable and modular form.

 Support dynamic rule updating without restarting the system.

✅ 8. User Interface

Issue: Poor interface may hinder expert system usability.


Solution:

 Develop a user-friendly interface that supports inputs, queries, and feedback.

 Support natural language processing (NLP) for easier interaction if needed.

✅ 9. Integration with External Systems

Issue: The expert system may need to work with databases or external tools.

Solution:

 Use APIs, middleware, and data converters for seamless integration.

 Ensure data consistency and synchronization.

✅ 10. Domain Specificity

Issue: Expert systems are often limited to specific domains and can't generalize.

Solution:

 Clearly define the scope of the system.

 Use domain-specific models and allow customization for similar tasks.

Q. Explain limitations of game search algorithm.

1. Combinatorial Explosion

The number of possible game states increases exponentially with each move. For complex games like
Chess or Go, the game tree becomes extremely large, making it computationally expensive and often
infeasible to explore completely.

2. Limited Lookahead

Due to time and memory constraints, search algorithms can only look a few moves ahead. This
shallow depth may lead to missing long-term strategies or threats, affecting the quality of decisions.

3. Inaccurate Evaluation Function

When the algorithm cannot reach terminal states, it uses a heuristic evaluation function to estimate
the value of intermediate game states. If the function is poorly designed or inaccurate, it can lead to
wrong decisions.

4. Real-Time Constraints
In practical scenarios like real-time games, the algorithm must return a decision quickly. Deep
searches are not always possible within the time limits, which may compromise the quality of the
move chosen.

5. Assumption of Perfect Play

Algorithms like Minimax assume that both players play optimally. In real-world situations, opponents
may make sub-optimal or random moves, which are not always exploited by such algorithms.

6. High Memory Usage

Storing game trees and evaluation values requires a large amount of memory, especially in games
with high branching factors. This becomes a limiting factor in large or deep game trees.

7. Handling Stochastic or Non-Deterministic Games

Traditional game search algorithms are designed for deterministic games. They are not suitable for
games involving chance elements like dice or card draws, which require probabilistic approaches.

8. Lack of Learning

Standard game search algorithms do not learn from previous experiences. Their performance does
not improve over time unless combined with learning techniques such as reinforcement learning or
neural networks.

Limitations of MCTS (Monte Carlo Tree Search):

 Computationally Intensive: Requires a large number of simulations for accurate results, making it slow without significant computing power.

 Scalability Issues: Performance can degrade in games or environments with very large branching factors.

 Quality of Simulations: Effectiveness depends on the quality of random simulations; poor rollout policies can lead to suboptimal decisions.

 Lack of Domain Knowledge Use: Pure MCTS does not leverage domain-specific heuristics unless explicitly incorporated.

 Difficulty with Continuous Spaces: MCTS is primarily suited for discrete decision spaces; handling continuous or high-dimensional spaces is challenging.

Problem Solving Strategies


Problem-solving in computer science involves using different strategies to efficiently find solutions.
The major strategies are:

1. Brute Force:
Try all possible combinations and select the best one.
Example: Trying every possible password combination.

2. Divide and Conquer:
Break a problem into subproblems, solve them independently, and combine the results.
Example: Merge Sort, Quick Sort.

3. Greedy Method:
Make the best possible decision at each step without revisiting previous decisions.
Example: Kruskal’s Algorithm, Huffman Coding.

4. Dynamic Programming:
Solve overlapping subproblems once and store their results for reuse.
Example: Fibonacci series, Knapsack Problem.

5. Backtracking:
Build a solution step-by-step and backtrack when constraints are violated.
Example: N-Queens Problem, Sudoku Solver.

6. Branch and Bound:
Similar to backtracking but uses bounds to prune unpromising paths.
Example: N-Queens (optimized), Travelling Salesman Problem.

7. Recursion:
Solve a problem by solving smaller instances of the same problem.
Example: Tower of Hanoi.

8. Memoization:
Store the results of expensive function calls to avoid redundant computation.
Used in: Top-down Dynamic Programming.

9. Graph Traversal:
Explore graphs using techniques like BFS (Breadth-First Search) or DFS (Depth-First Search).
Example: Maze solving, shortest path.
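As a small illustration of memoization (strategy 8, and the top-down form of strategy 4), the standard library's cache decorator turns the exponential naive Fibonacci recursion into a linear-time one:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursion made linear-time by caching (memoizing)
    the result of each subproblem."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the decorator, fib(30) would recompute the same subproblems millions of times; with it, each value of n is computed exactly once.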

🔁 Backtracking

Definition:
Backtracking is a general algorithmic technique that tries to build a solution incrementally. When it
finds that the current solution path cannot possibly lead to a valid solution, it "backtracks" and tries a
different path.

Steps:

1. Choose – Make a choice.

2. Explore – Recursively explore that choice.

3. Un-choose – Undo the choice (backtrack) if it doesn't lead to a solution.


Applications:

 N-Queens Problem

 Sudoku Solver

 Maze Solving

 Permutations and Combinations
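The choose/explore/un-choose steps above can be sketched for the N-Queens problem. This is a minimal Python illustration of the pattern, not an optimized solver:

```python
def solve_n_queens(n):
    """Return all placements of n non-attacking queens, one per row.
    cols[r] holds the column of the queen placed in row r."""
    solutions = []

    def place(row, cols):
        if row == n:                      # all rows filled: valid solution
            solutions.append(list(cols))
            return
        for col in range(n):
            # A new queen must not share a column or a diagonal
            # with any queen already placed.
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                cols.append(col)          # choose
                place(row + 1, cols)      # explore
                cols.pop()                # un-choose (backtrack)

    place(0, [])
    return solutions

print(len(solve_n_queens(4)))  # 2 distinct solutions
```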

Q. Explain Monte Carlo Tree Search with all steps and demonstrate with one example.

Imagine it’s X’s turn in the following Tic-Tac-Toe state:

X|O|X

---------

O|X|

---------

| |O

Available moves for X: (1,2), (2,0), (2,1)

Step-by-step MCTS Example

1. Selection: Start at root (current board state). Use UCT to select the best child if any exist.

2. Expansion: Add a new node for one of the unvisited moves, e.g., X moves to (1,2):

X|O|X

---------

O|X|X

---------

| |O

3. Simulation: Play out the game from this point using random moves. Suppose the simulation
ends with X winning.

4. Backpropagation: Update the win count and visit count for all nodes along the path from the
leaf to the root.

Repeat the above process many times (e.g., 10,000 iterations). After that, pick the move
with the highest visit count or best win rate.
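The Selection step above relies on the UCT (Upper Confidence bound applied to Trees) score, which balances exploiting high win rates against exploring rarely visited moves. A minimal sketch of the formula (the constant c = 1.41 ≈ √2 is a common default, not a fixed requirement):

```python
import math

def uct(wins, visits, parent_visits, c=1.41):
    """UCT score of a child node: exploitation (win rate) plus an
    exploration bonus that shrinks as the node is visited more."""
    if visits == 0:
        return float('inf')  # unvisited children are always tried first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)
```

During Selection, the algorithm repeatedly descends to the child with the highest UCT score until it reaches a node with unexpanded moves.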
Q. How are AI techniques used to solve the Tic-Tac-Toe problem?

Tic-Tac-Toe (also known as Noughts and Crosses) is a simple two-player game played
on a 3×3 grid. Players take turns marking a cell with their symbol (X or O), and the
player who succeeds in placing three of their marks in a horizontal, vertical, or diagonal
row wins the game.

Although it is a simple game, it serves as an excellent example for understanding how AI techniques can be applied to game solving, strategy formulation, and decision making.

🤖 AI Techniques Used in Tic-Tac-Toe

The following Artificial Intelligence techniques are commonly used to solve the Tic-Tac-
Toe game:

1. Game Tree Representation

 The game is represented as a tree, where:
o Each node represents a game state.
o Each edge represents a player's move.
o The root node is the initial empty board.
o Leaf nodes represent end states (win, lose, or draw).
 The tree is traversed to determine the best move.

2. Minimax Algorithm

The most common AI technique used for solving Tic-Tac-Toe is the Minimax
algorithm, a backtracking-based decision rule used for minimizing the possible loss in
a worst-case scenario.

Working:

 Two players: One is the Maximizer (e.g., AI playing 'X') and the other is the
Minimizer (e.g., human playing 'O').
 The algorithm:
o Simulates all possible future moves.
o Assigns scores to terminal states:
 +10 for a win,
 0 for a draw,
 –10 for a loss.
o Backtracks to choose the move that maximizes the AI's chances and
minimizes the opponent’s.
Minimax Pseudocode:

function minimax(board, depth, isMaximizing):
    if terminal state:
        return utility value
    if isMaximizing:
        best = -∞
        for each move:
            score = minimax(next board, depth + 1, false)
            best = max(best, score)
        return best
    else:
        best = +∞
        for each move:
            score = minimax(next board, depth + 1, true)
            best = min(best, score)
        return best

3. Alpha-Beta Pruning (Optimization of Minimax)

 This technique improves the efficiency of the Minimax algorithm by eliminating branches in the game tree that do not affect the final decision.
 It keeps track of:
o Alpha: the best value for the maximizer.
o Beta: the best value for the minimizer.
 If the current branch cannot yield a better score, it is pruned (not explored further).
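The two techniques can be combined in a short, self-contained Python sketch for Tic-Tac-Toe. The board encoding (a flat list of 9 cells) is an illustrative choice; scoring follows the +10/0/−10 convention described above:

```python
# The eight winning lines of a 3x3 board (rows, columns, diagonals).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(b):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player, alpha=-100, beta=100):
    """Minimax with alpha-beta pruning. X maximizes, O minimizes.
    Scores: +10 if X wins, -10 if O wins, 0 for a draw."""
    w = winner(b)
    if w == 'X':
        return 10
    if w == 'O':
        return -10
    if ' ' not in b:
        return 0
    for i in range(9):
        if b[i] == ' ':
            b[i] = player
            score = minimax(b, 'O' if player == 'X' else 'X', alpha, beta)
            b[i] = ' '                     # undo the move (backtrack)
            if player == 'X':
                alpha = max(alpha, score)
            else:
                beta = min(beta, score)
            if alpha >= beta:
                break                      # prune: branch cannot matter
    return alpha if player == 'X' else beta

# Perfect play from the empty board is a draw:
print(minimax([' '] * 9, 'X'))  # 0
```

Note how the board is mutated and then restored around each recursive call, which avoids copying the board at every node; the pruning test alpha >= beta is what distinguishes this from plain Minimax.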

4. Evaluation Function
 In complex versions or larger boards, searching the full game tree is not feasible.
 An evaluation function estimates the value of non-terminal states using heuristics
like:
o Number of possible winning lines.
o Count of two-in-a-row with an empty third cell.
 This allows the AI to make decisions quickly in real-time.

5. Reinforcement Learning (Advanced)

 In advanced AI, Reinforcement Learning (RL) techniques like Q-Learning or Deep Q-Networks (DQN) can be used to learn optimal moves by playing many games.
 The AI rewards itself for winning and penalizes for losing, gradually learning the
best strategy.
 This is useful for generalizing strategies to other board sizes or similar games.
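The core Q-Learning update can be sketched in a few lines. The learning rate (alpha) and discount factor (gamma) values below are illustrative defaults, and the state/action names are hypothetical:

```python
def q_update(q, state, action, reward, next_q_values,
             alpha=0.1, gamma=0.9):
    """One Q-Learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(next_q_values) if next_q_values else 0.0
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q

q = {}
q_update(q, 'board_state', 'move_5', 1.0, [])  # terminal win, reward +1
print(q[('board_state', 'move_5')])  # 0.1
```

Repeating this update over many self-play games gradually propagates win/loss rewards back to the earlier moves that led to them.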

✅ Advantages of Using AI in Tic-Tac-Toe

1. Perfect Play: AI can always play optimally and never lose.
2. Decision Making: AI selects the best move in real-time using heuristics and tree
traversal.
3. Training Tools: Useful in teaching beginners how to play or practice against an
unbeatable opponent.
4. Foundation for Complex Games: Concepts used in Tic-Tac-Toe AI can be scaled to
games like Chess or Go.

📝 Conclusion

Tic-Tac-Toe is a fundamental problem that demonstrates the application of AI techniques such as game trees, minimax, alpha-beta pruning, and even reinforcement
learning. These strategies ensure optimal decision-making and provide a foundation for
solving more complex real-world game-playing problems. Even though Tic-Tac-Toe is a
solved game, it plays an essential role in teaching AI and game theory concepts
effectively.
