
Unit 3 Problem Solving

To solve a problem by building a system, we should take the following steps:

1. Define the problem accurately, including detailed specifications of what constitutes a suitable solution.

2. Scrutinize the problem carefully, for some features may have a central effect on the chosen method of solution.

3. Segregate and represent the background knowledge needed in the solution of the problem.

4. Choose the most suitable problem-solving techniques and apply them to solve the problem.

Problem solving is a process of generating solutions from observed data.

• A ‘problem’ is characterized by a set of goals, a set of objects, and a set of operations. These could be ill-defined and may evolve during problem solving.

• A ‘problem space’ is an abstract space. A problem space encompasses all valid states that can be generated by the application of any combination of operators on any combination of objects. The problem space may contain one or more solutions. A solution is a combination of operations and objects that achieves the goals.

• A ‘search’ refers to the search for a solution in a problem space. Search proceeds with different types of ‘search control strategies’. Depth-first search and breadth-first search are the two most common search strategies.

Problem-solving is commonly understood as reaching a desired goal or finding a solution to a given situation. In computer science, problem-solving refers to artificial intelligence techniques such as forming efficient algorithms, applying heuristics, and performing root-cause analysis to find desirable solutions. In Artificial Intelligence, problems can be solved by applying logical algorithms, utilizing polynomial and differential equations, and executing these through modelling paradigms. A single problem may have several solutions, reached by different heuristics, while some problems have unique solutions. It all rests on the nature of the given problem.

Developers worldwide use artificial intelligence to automate systems for efficient utilization of time and resources. Some of the most common problems encountered in day-to-day life are games and puzzles, and these can be solved efficiently using artificial intelligence algorithms: mathematical puzzles such as crypto-arithmetic and magic squares, logical puzzles such as Boolean formulas and N-Queens, and popular games like Sudoku and Chess. Some of the most prevalent problems that artificial intelligence has resolved are the following:

 Chess
 N-Queen problem
 Tower of Hanoi Problem
 Travelling Salesman Problem
 Water-Jug Problem

A reflex agent in AI directly maps states into actions. Whenever such an agent fails to operate, because the state-to-action mapping is too large to store or to compute directly, the problem is handed over to a problem-solving agent, which breaks the large problem into smaller subproblems and resolves them one by one. The final integrated actions produce the desired outcome.

On the basis of the problem and its working domain, different types of problem-solving agents are defined and used at an atomic level, without any visible internal state, together with a problem-solving algorithm. The problem-solving agent works precisely by defining problems and their several solutions. So we can say that problem solving is a part of artificial intelligence that encompasses a number of techniques, such as trees, B-trees, and heuristic algorithms, to solve a problem.

We can also say that a problem-solving agent is a result-driven agent that always focuses on satisfying its goals.

Steps of problem solving in AI: The problems of AI are directly associated with the nature of humans and their activities. So we need a finite number of steps to solve a problem, which makes human work easy.

The following steps are required to solve a problem:
These are the following steps which require solving a problem:
 Goal Formulation: This is the first and simplest step in problem-solving. It organizes the finite steps needed to formulate a target/goal that requires some action to be achieved. Today, goal formulation is based on AI agents.
 Problem Formulation: This is one of the core steps of problem-solving, which decides what action should be taken to achieve the formulated goal. In AI this core part depends on a software agent, which consists of the following components to formulate the associated problem (a minimal sketch of these components follows the list).
A. Initial State: The state from which the agent starts working towards the specified goal. In this state, new methods also initialize problem-domain solving by a specific class.
B. Action: This stage works with a specific function applied to the initial state; all possible actions are enumerated at this stage.
C. Transition: This stage integrates the actual action produced by the previous stage and determines the resulting state to forward to the next stage.
D. Goal Test: This stage determines whether the specified goal has been achieved by the integrated transition model. Whenever the goal is achieved, the action stops and control moves on to determining the cost of achieving the goal.
E. Path Costing: This component assigns a numerical cost to achieving the goal. It accounts for all hardware, software, and human working costs.
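To make these five components concrete, here is a minimal sketch in Python. The class and method names are illustrative assumptions, not prescribed by the text:

```python
# A minimal, illustrative sketch of the five problem-formulation components.
# All names here are hypothetical; they simply mirror the components above.

class Problem:
    def __init__(self, initial_state):
        self.initial_state = initial_state   # A. Initial State

    def actions(self, state):
        """Return the actions applicable in `state` (B. Action)."""
        raise NotImplementedError

    def result(self, state, action):
        """Return the state reached by doing `action` in `state` (C. Transition)."""
        raise NotImplementedError

    def goal_test(self, state):
        """Return True if `state` is a goal state (D. Goal Test)."""
        raise NotImplementedError

    def step_cost(self, state, action):
        """Return the numeric cost of taking `action` in `state` (E. Path Costing)."""
        return 1
```

Concrete problems (the 8-puzzle, 8-queens, the water jug problem below) can then be expressed by filling in these methods.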

STATE SPACE SEARCH

To solve the problem of playing a game, we require the rules of the game and the targets for winning, as well as a means of representing positions in the game. The opening position can be defined as the initial state and a winning position as a goal state. Moves lead legally from the initial state through other states to the goal state. However, the number of possible positions is far too large in most games; in chess it exceeds the number of particles in the universe. Thus the positions cannot be supplied explicitly, and computer programs cannot handle them easily. Storage presents another problem, though searching can be helped by hashing.

The number of rules used must be minimized, and the rule set can be created by expressing each rule in as general a form as possible. The representation of games leads to a state space representation, which is common for well-organized games with some structure. This representation allows for the formal definition of a problem as the movement from a set of initial positions to one of a set of target positions. It means that the solution involves using known techniques and a systematic search. This is quite a common method in Artificial Intelligence.

State Space Search

A state space represents a problem in terms of states and operators that change states. A state space consists of:

 A representation of the states the system can be in. For example, in a board
game, the board represents the current state of the game.

 A set of operators that can change one state into another state. In a board
game, the operators are the legal moves from any given state. Often the
operators are represented as programs that change a state representation to
represent the new state.

 An initial state.

 A set of final states; some of these may be desirable, others undesirable. This
set is often represented implicitly by a program that detects terminal states.

Example Problems
There are two types of problem approaches:

 Toy Problem: A concise and exact description of a problem, used by researchers to compare the performance of algorithms.
 Real-world Problem: A problem based on the real world that requires a solution. Unlike a toy problem, it does not depend on a concise description, but we can still give a general formulation of the problem.

Some Toy Problems

8 Puzzle Problem:

Here, we have a 3x3 matrix with movable tiles numbered from 1 to 8 and one blank space. A tile adjacent to the blank space can slide into that space. The objective is to convert the current (start) state into a specified goal state by sliding digits into the blank space.

The problem formulation is as follows:

 States: The location of each numbered tile and the blank tile.
 Initial State: We can start from any state as the initial state.
 Actions: The actions of the blank space are defined, i.e., left, right, up or down.
 Transition Model: Returns the resulting state for a given state and action.
 Goal Test: Identifies whether we have reached the goal state.
 Path Cost: The number of steps in the path, where the cost of each step is 1.

Note: The 8-puzzle problem is a type of sliding-block problem, which is used for testing new search algorithms in artificial intelligence.
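As an illustration, here is one possible encoding of this formulation in Python. The flat-tuple representation (0 standing for the blank) and the particular goal layout are assumptions for the sketch, not part of the text:

```python
# An illustrative 8-puzzle formulation: state = tuple of 9 tiles, 0 = blank.
# The representation and goal layout below are assumptions for this sketch.

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)   # one common goal layout

def actions(state):
    """Moves of the blank: up, down, left, right (where legal)."""
    i = state.index(0)                # position of the blank
    row, col = divmod(i, 3)
    moves = []
    if row > 0: moves.append('UP')
    if row < 2: moves.append('DOWN')
    if col > 0: moves.append('LEFT')
    if col < 2: moves.append('RIGHT')
    return moves

def result(state, action):
    """Transition model: slide the adjacent tile into the blank."""
    i = state.index(0)
    j = i + {'UP': -3, 'DOWN': 3, 'LEFT': -1, 'RIGHT': 1}[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def goal_test(state):
    return state == GOAL              # path cost: 1 per move
```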

8-queens problem:

The aim of this problem is to place eight queens on a chessboard so that no queen can attack another. A queen can attack other queens diagonally or along the same row or column.

A correct solution places each queen on the chessboard in a position where no other queen lies on the same row, column, or diagonal. Any such placement is a right answer to the 8-queens problem.
For this problem, there are two main kinds of formulation:

 Incremental formulation: It starts from an empty state, and the operator adds a queen at each step.

The following steps are involved in this formulation:

 States: Any arrangement of 0 to 8 queens on the chessboard.
 Initial State: An empty chessboard.
 Actions: Add a queen to any empty square.
 Transition Model: Returns the chessboard with the queen added in a square.
 Goal Test: Checks whether 8 queens are placed on the chessboard with no attacks.
 Path Cost: There is no need for path cost because only final states are counted.

In this formulation, there are approximately 1.8 × 10^14 possible sequences to investigate.

 Complete-state formulation: It starts with all 8 queens on the chessboard and moves them around to escape attacks.

The following steps are involved in this formulation:

 States: Arrangements of all 8 queens, one per column, with no queen attacking another.
 Actions: Move a queen to a location where it is safe from attacks.

This formulation is better than the incremental formulation, as it reduces the state space from 1.8 × 10^14 to 2,057 states, making it easier to find solutions. (A small sketch of the incremental formulation follows.)
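For illustration, the incremental formulation can be searched depth-first very compactly. In this sketch the state is a tuple of row indices, one per already-placed queen; all names are my assumptions:

```python
# Illustrative incremental formulation: a state is a tuple of row indices,
# one per already-placed queen (the queen in column c sits in row state[c]).

def is_safe(state, row):
    """Can a queen be added in the next column at `row` without an attack?"""
    col = len(state)
    return all(r != row and abs(r - row) != abs(c - col)
               for c, r in enumerate(state))

def solve(state=()):
    """Depth-first search over the incremental state space."""
    if len(state) == 8:
        return state                      # goal test: 8 queens, no attacks
    for row in range(8):
        if is_safe(state, row):           # action: add a queen to a safe square
            solution = solve(state + (row,))
            if solution:
                return solution
    return None

print(solve())   # (0, 4, 7, 5, 2, 6, 1, 3)
```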
Some Real-world problems

 Traveling salesperson problem (TSP): A touring problem in which the salesman must visit each city exactly once. The objective is to find the shortest tour.
 VLSI Layout problem: Millions of components and connections are positioned on a chip so as to minimize the area, circuit delays, and stray capacitances, and to maximize the manufacturing yield.

The layout problem is split into two parts:

 Cell layout: The primitive components of the circuit are grouped into cells, each performing a specific function. Each cell has a fixed shape and size. The task is to place the cells on the chip so that they do not overlap.
 Channel routing: Finds a specific route for each wire through the gaps between the cells.
 Protein Design: The objective is to find a sequence of amino acids that will fold into a three-dimensional protein with a property to cure some disease.

Measuring problem-solving performance

There are four ways to measure the performance of an algorithm:
Completeness: Whether the algorithm is guaranteed to find a solution (if any solution exists).
Optimality: Whether the strategy finds an optimal solution.
Time Complexity: The time taken by the algorithm to find a solution.
Space Complexity: The amount of memory required to perform the search.
The complexity of an algorithm depends on the branching factor (the maximum number of successors), the depth of the shallowest goal node (i.e., the number of steps from the root to that node), and the maximum length of any path in the state space.

The Water Jug Problem

In this problem, we use two jugs called four and three; four holds a maximum of
four gallons of water and three a maximum of three gallons of water. How can
we get two gallons of water in the four jug?

The state space is a set of prearranged pairs giving the number of gallons of
water in the pair of jugs at any time, i.e., (four, three) where four = 0, 1, 2, 3 or
4 and three = 0, 1, 2 or 3.

The start state is (0, 0) and the goal state is (2, n) where n may be any but it is
limited to three holding from 0 to 3 gallons of water or empty. Three and four
shows the name and numerical number shows the amount of water in jugs for
solving the water jug problem. The major production rules for solving this
problem are shown below:
Rule  Initial condition                   Goal              Comment
1.    (four, three) if four < 4           (4, three)        fill Four from the tap
2.    (four, three) if three < 3          (four, 3)         fill Three from the tap
3.    (four, three) if four > 0           (0, three)        empty Four into the drain
4.    (four, three) if three > 0          (four, 0)         empty Three into the drain
5.    (four, three) if four + three <= 4  (four+three, 0)   empty Three into Four
6.    (four, three) if four + three <= 3  (0, four+three)   empty Four into Three
7.    (0, three) if three > 0             (three, 0)        empty Three into Four
8.    (four, 0) if four > 0               (0, four)         empty Four into Three
9.    (0, 2)                              (2, 0)            empty Three into Four
10.   (2, 0)                              (0, 2)            empty Four into Three
11.   (four, three) if four < 4           (4, three - diff) pour diff = 4 - four into Four from Three
12.   (four, three) if three < 3          (four - diff, 3)  pour diff = 3 - three into Three from Four

(Fig. 2.2 Production Rules for the Water Jug Problem)

A solution is given below, showing the jug contents and the rule applied at each step:


Gallons in Four Jug   Gallons in Three Jug   Rule Applied
0                     0                      -
0                     3                      2
3                     0                      7
3                     3                      2
4                     2                      11
0                     2                      3
2                     0                      10

(Fig. 2.3 One Solution to the Water Jug Problem)

The problem is solved by using the production rules in combination with an appropriate control strategy, moving through the problem space until a path from an initial state to a goal state is found. In this problem-solving process, search is the fundamental concept. For simple problems it is easy to achieve this by hand, but there will be cases where this is far too difficult. A small search-based sketch follows.
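For illustration, the same search can be done mechanically. The sketch below runs a breadth-first search over the (four, three) state space; the names and representation are my assumptions, and the pouring arithmetic subsumes rules 5 through 12 above:

```python
# A sketch that searches the (four, three) state space mechanically with BFS,
# instead of applying the production rules by hand. Names are illustrative.
from collections import deque

def successors(four, three):
    return {
        (4, three),                       # fill Four from the tap
        (four, 3),                        # fill Three from the tap
        (0, three),                       # empty Four into the drain
        (four, 0),                        # empty Three into the drain
        (min(4, four + three), max(0, four + three - 4)),  # pour Three into Four
        (max(0, four + three - 3), min(3, four + three)),  # pour Four into Three
    }

def solve(start=(0, 0), goal_amount=2):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == goal_amount:    # goal test: 2 gallons in the Four jug
            return path
        for nxt in successors(*path[-1]) - seen:
            seen.add(nxt)
            frontier.append(path + [nxt])

print(solve())  # e.g. [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2), (0, 2), (2, 0)]
```

Because BFS expands states level by level, the first path found uses the fewest rule applications; the six-step trace in Fig. 2.3 is one such shortest solution.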

PRODUCTION SYSTEMS

Production systems provide appropriate structures for performing and describing search processes. A production system has four basic components:

 A set of rules, each consisting of a left side that determines the applicability of the rule and a right side that describes the operation to be performed if the rule is applied.

 A database of current facts established during the process of inference.

 A control strategy that specifies the order in which the rules will be compared with the facts in the database, and that also specifies how to resolve conflicts when several rules or several facts are selected.

 A rule firing module.

The production rules operate on the knowledge database. Each rule has a precondition that is either satisfied or not by the knowledge database. If the precondition is satisfied, the rule can be applied. Applying the rule changes the knowledge database. The control system chooses which applicable rule should be applied and ceases computation when a termination condition on the knowledge database is satisfied. A minimal recognize-act loop is sketched below.
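The following sketch makes this cycle concrete. The rule format (a pair of condition and action functions), the default conflict-resolution strategy, and all names are illustrative assumptions:

```python
# A minimal recognize-act cycle for a production system. The (condition,
# action) rule format and the names here are assumptions for illustration.

def run_production_system(state, rules, is_terminal, choose=lambda rs: rs[0]):
    """Repeatedly fire rules until the termination condition holds."""
    while not is_terminal(state):
        applicable = [rule for rule in rules if rule[0](state)]  # match step
        if not applicable:
            return None                          # no rule applies: failure
        condition, action = choose(applicable)   # conflict resolution
        state = action(state)                    # fire the chosen rule
    return state

# Demo: a single rule that increments a counter until it reaches 3.
rules = [(lambda s: s < 3, lambda s: s + 1)]
print(run_production_system(0, rules, is_terminal=lambda s: s == 3))   # 3
```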

Example: Eight puzzle (8-Puzzle)

The 8-puzzle is a 3 × 3 array containing eight square pieces, numbered 1 through 8, and one empty space. A piece can be moved horizontally or vertically into the empty space, in effect exchanging the positions of the piece and the empty space. There are four possible moves: UP (move the blank space up), DOWN, LEFT and RIGHT. The aim of the game is to make a sequence of moves that will convert the board from the start state into the goal state:

Initial state    Goal state
2 8 3            1 2 3
1 6 4            8 _ 4
7 _ 5            7 6 5

This example can be solved by the operator sequence UP, UP, LEFT, DOWN, RIGHT.

Searching Algorithm

Searching is one of the primary methods of solving any problem in AI. Rational agents or problem-solving agents use these searching algorithms to find optimal solutions. These problem-solving agents are often goal-based and utilize an atomic representation. These searching algorithms also possess the properties of completeness, optimality, time complexity, and space complexity, judged by the quality of the solution they provide. Artificial Intelligence is the study of building agents that act rationally. Most of the time, these agents perform some kind of search algorithm in the background in order to achieve their tasks.

 A search problem consists of:
 A State Space: the set of all possible states you can be in.
 A Start State: the state from which the search begins.
 A Goal Test: a function that looks at the current state and returns whether or not it is the goal state.
 The Solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.
 This plan is achieved through search algorithms.

Types of Searching Algorithms

There are two main types of searching algorithms:

1. Informed Search
2. Uninformed Search

Uninformed Search

These algorithms do not have the privilege of using domain knowledge, such as how close the desired goal is. They contain only the information needed to traverse a tree and to identify leaf and goal nodes. Uninformed search also goes by the name of blind search because, while traversing, there is no specific information about the states beyond the initial state and the test for the goal. The search goes through every node until it reaches the desired destination.

Types of Uninformed Searching Algorithms

There are five main types of uninformed search algorithms:

1. Breadth-First Search
2. Depth First Search
3. Uniform Cost Search
4. Iterative Deepening Depth First Search
5. Bidirectional Search

* State: It provides all the information about the environment.

* Goal State: The desired resulting condition in a given problem, which determines the kind of search algorithm we are looking for.

* Goal Test: The test to determine whether a particular state is a goal state.

* Path/Step Cost: Numbers that represent the cost of moving from one node to another.

* Space Complexity: A function describing the amount of space (memory) an algorithm takes, in terms of the input to the algorithm.

* Time Complexity: A function describing the amount of time the algorithm takes, in terms of the input to the algorithm.

* Optimal: Whether the algorithm is guaranteed to find the best (least-cost) solution.

* ‘b’ – the maximum branching factor in a tree.

* ‘d’ – the depth of the least-cost solution.

* ‘m’ – the maximum depth of the state space (may be infinite).

1. Depth First Search (DFS):

It is a search algorithm in which the search tree is traversed from the root node, searching for a key down to the leaf of a particular branch. If the key is not found, the search retraces its steps back to the point from which the other branch was left unexplored, and the same procedure is repeated for that branch.

The figure above illustrates the DFS algorithm. First, the search starts at the root node A and then goes to the branch where node B is present (lexicographical order). Then it goes to node D and, from D, to the only node left to traverse, node H. Since node H has no child nodes, we retrace the path we traversed earlier, reach node B again, and this time take the untraced path through node E. There are two branches at node E; we traverse node I first (lexicographical order) and then retrace the path, as there are no further nodes after I. We then traverse node J, the remaining untraced branch, find we are again at an end, retrace the path back up, and finally traverse the untraced branch through node C, repeating the same process. This is the DFS algorithm.

Advantages:

 DFS requires very little memory as it only needs to store a stack of the nodes on
the path from the root node to the current node.

 It takes less time to reach the goal node than the BFS algorithm (if it traverses in
the right path).

Disadvantages:

 There is the possibility that many states keep reoccurring, and there is no
guarantee of finding the solution.
The DFS algorithm goes deep down the search tree and may sometimes fall into an infinite loop.

Conclusions:
DFS can take a lot of time to execute when the solution is at the bottom or far end of the tree. It is implemented using a LIFO stack data structure.

Complete: No

Time Complexity: O(b^m)

Space Complexity: O(bm)

Optimal: No
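The sketch below implements this traversal with an explicit LIFO stack. The tree, node names, and function signature are assumptions chosen to match the walkthrough above:

```python
# An illustrative DFS over a tree given as an adjacency dict, using an
# explicit LIFO stack; children are pushed in reverse so that
# lexicographically smaller children are expanded first.

def dfs(tree, start, goal):
    stack = [start]                      # LIFO stack of nodes to expand
    while stack:
        node = stack.pop()
        if node == goal:
            return node
        stack.extend(reversed(tree.get(node, [])))
    return None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'D': ['H'], 'E': ['I', 'J']}
print(dfs(tree, 'A', 'J'))   # visits A, B, D, H, E, I, J
```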

2. Breadth-First Search (BFS)
It is another search algorithm in AI, one which traverses breadthwise to search for the goal in a tree. It begins searching from the root node and expands all the successors of a node before expanding any of their successors, traversing the tree level by level rather than depth-wise.

The figure above is an example of the BFS algorithm. It starts from the root node A and then traverses node B; up to this step it is the same as DFS. But here, instead of expanding the children of B as in DFS, we expand the other child of A, node C, then move to the next level and traverse from D to G, and then from H to K in this example. Within each level, we traverse in lexicographical order. This is how the BFS algorithm proceeds.

Advantages:

 BFS will provide a solution if any solution exists.

 If there is more than one solution for a given problem, then BFS will provide
the minimal solution which requires the least number of steps.

Disadvantages:

 It requires lots of memory since each level of the tree must be saved into
memory to expand the next level.

 BFS needs lots of time if the solution is far away from the root node.

Conclusions:
BFS requires a lot of memory space and is time-consuming if the goal state is at the bottom or far end of the tree. It is implemented using a FIFO queue data structure.

Complete: Yes (assuming b is finite)

Time Complexity: O(b^d)

Space Complexity: O(b^d)

Optimal: Yes, if all step costs are equal (e.g., every step costs 1)
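A sketch mirroring the walkthrough above, with the tree and names as assumptions:

```python
# An illustrative BFS using a FIFO queue; all nodes at each depth are
# expanded before any node at the next depth.
from collections import deque

def bfs(tree, start, goal):
    queue = deque([start])               # FIFO queue of nodes to expand
    while queue:
        node = queue.popleft()
        if node == goal:
            return node
        queue.extend(tree.get(node, []))
    return None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
print(bfs(tree, 'A', 'G'))   # visits A, B, C, D, E, F, G
```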

3. Uniform Cost Search (UCS):

This algorithm is used mainly when the step costs are not the same but we still need an optimal solution to the goal state. In such cases, we use uniform cost search to find the goal and the path with the minimal cumulative cost from the root node to the goal node. It goes neither depth-first nor breadth-first: it always expands the frontier node with the lowest path cost and, in the case of equal path costs, we break ties in lexicographical order here.

In the figure above, consider S to be the start node and G the goal state. From node S we look for a node to expand; we have nodes A and G, but since this is uniform cost search, we expand the node with the lowest path cost, so node A becomes the successor rather than our required goal node G. From A we look at its children, B and C. Since C has the lower cost, we traverse through node C and then look at the successors of C, namely D and G. Since the cost to D is lower, we expand node D. D has only one child, G, which is our required goal state, so we finally reach the goal G by the UCS algorithm. Traversing this way, our total path cost from S to G is just 6, even after passing through many nodes, rather than going to G directly at a cost of 12 (6 < 12 in terms of path cost). But it may not work out so favourably in all cases.
Advantages:

 Uniform cost search is optimal because at every state the path with the least cost
is chosen.

Disadvantages:

 It does not care about the number of steps involved in searching; it is only concerned with path cost. Because of this, the algorithm may get stuck in an infinite loop (if zero-cost actions exist).

Complete: Yes (if b is finite and every step cost is at least some ε > 0)

Time Complexity: O(b^(c/ε)), where ε is the lowest step cost and c is the optimal cost

Space Complexity: O(b^(c/ε))

Optimal: Yes (even when step costs differ)
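A sketch matching the walkthrough above. The individual step costs below are my assumptions, chosen to be consistent with the totals given in the walkthrough (path cost 6 via S, A, C, D, G versus 12 for the direct edge S to G):

```python
# An illustrative uniform cost search with a priority queue ordered by
# cumulative path cost (ties broken lexicographically by node name).
import heapq

def ucs(graph, start, goal):
    frontier = [(0, start, [start])]     # (cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nxt, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

graph = {'S': [('A', 1), ('G', 12)], 'A': [('B', 3), ('C', 1)],
         'C': [('D', 1), ('G', 5)], 'D': [('G', 3)]}
print(ucs(graph, 'S', 'G'))   # (6, ['S', 'A', 'C', 'D', 'G'])
```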

4. Depth Limited Search (DLS):

DLS is an uninformed search algorithm, similar to DFS but differing in a few ways. The failure of DFS on infinite paths is alleviated by supplying the depth-first search with a predetermined depth limit. That is, nodes at the depth limit are treated as if they have no successors. This approach is called depth-limited search. The depth limit solves the infinite-path problem. Depth-limited search can be halted with two kinds of failure:

— Standard Failure Value (SFV): The SFV indicates that the problem has no solution at all.

— Cutoff Failure Value (CFV): The CFV indicates that there is no solution within the given depth limit.
The figure above illustrates the DLS algorithm. Node A is at Limit = 0, followed by nodes B, C, D, and E at Limit = 1 and nodes F, G, and H at Limit = 2. Our start state is node A and our goal state is node H. To reach node H we apply DLS. In the first case, let us set our limit to 0 and search for the goal.

Since the limit is 0, the algorithm assumes that there are no children beyond limit 0, even if nodes exist further down. If we run it, we traverse only node A, as it is the only node within limit 0, and it is not our goal state. CFV reports that there is no solution within the set depth limit of 0. Since we were not able to find the goal, we increase our limit to 1 and apply DFS up to limit 1, even though there are further nodes beyond it. Those nodes are not expanded, as we have set our limit to 1.

Hence nodes A, followed by B, C, D, and E, are expanded in the order mentioned. As in the first case, CFV reports that there is no solution within the set depth limit of 1. Hence we again increase our limit, from 1 to 2, hoping to find the goal.

Up to limit 2, DFS is carried out from our start node A and its children B, C, D, and E. From E it moves to F, then backtracks and explores the unexplored branch containing node G, then retraces the path and explores the child of C, node H, finally reaching our goal by the DLS algorithm. Even if node F had further successors, only the nodes within limit 2 would be explored, as we have limited the depth and have reached the goal state.


Advantages:

Depth-limited search is memory efficient.

Disadvantages:
DLS is incomplete if the solution lies beyond the depth limit, and it is not optimal, in particular when there is more than one goal state.

Complete: Yes, if the solution is within the depth limit (d ≤ l)

Time Complexity: O(b^l), where l is the depth limit

Space Complexity: O(bl)

Optimal: No (not optimal in general, even if l > d)
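A recursive sketch distinguishing the two failure values described above. The tree mirrors the walkthrough (H is a child of C at depth 2), and all names are illustrative:

```python
# An illustrative recursive depth-limited search. Returns the goal node,
# the string 'cutoff' (cutoff failure), or None (standard failure).

def dls(tree, node, goal, limit):
    if node == goal:
        return node
    if limit == 0:
        return 'cutoff'                  # cutoff failure: limit reached
    cutoff_occurred = False
    for child in tree.get(node, []):
        result = dls(tree, child, goal, limit - 1)
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return result
    return 'cutoff' if cutoff_occurred else None   # None = standard failure

tree = {'A': ['B', 'C', 'D', 'E'], 'C': ['H'], 'E': ['F', 'G']}
print(dls(tree, 'A', 'H', 0))   # 'cutoff' (goal lies beyond the limit)
print(dls(tree, 'A', 'H', 2))   # 'H'
```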

5. Iterative Deepening Depth First Search (IDDFS):

It is a search algorithm that uses the combined power of the BFS and DFS algorithms. It is iterative in nature: in each iteration it searches to a fixed depth, and it repeats the procedure until it reaches the goal node. The algorithm searches up to a certain depth, and that depth keeps increasing with every iteration until the goal state is reached.

In the figure above, consider the goal node to be G and the start state to be A. We perform IDDFS from node A. In the first iteration it traverses only node A, at level 0. Since the goal is not reached, we expand to the next level, 1, and move to the next iteration, in which we traverse nodes A, B, and C. Our goal state is still not reached in this iteration, so we expand to the next level, 2, the nodes are traversed again from the start node, and we expand nodes A, B, C, D, E, F, and G. Even though the goal node is traversed here, the example also runs the next iteration, in which the nodes A, B, D, H, I, E, C, F, K, G are explored in depth-first order within the deepened limit, and the goal state is found in that iteration. This is the working of the IDDFS algorithm.

Advantages:

 It combines the benefits of BFS and DFS search algorithms in terms of fast
search and memory efficiency.

Disadvantages:

 The main drawback of IDDFS is that it repeats all the work from the previous
phase.

Complete: Yes (by progressively increasing the depth limit)

Time Complexity: O(b^d)

Space Complexity: O(bd)

Optimal: Yes (if the step cost is uniform)

Systematic: It is not systematic.

Uninformed search is also known as blind search, whereas informed search is also called heuristic search. Uninformed search does not require much information, while informed search requires domain-specific details. Informed search strategies are more efficient, and the time complexity of uninformed search strategies is higher. Informed search handles problems better than blind search.
Informed Search

These algorithms use domain knowledge and comprehend the available information regarding a specified problem as a guideline towards optimal solutions. The solutions provided by informed search algorithms are more efficient than those of uninformed search algorithms.

Types of informed Search Algorithms

There are two main types of informed search algorithms:

1. Greedy Search
2. A * Search

The informed search algorithm is also called heuristic search or directed search. In contrast to uninformed search algorithms, informed search algorithms use details such as the distance to the goal, the steps to reach the goal, and the cost of the paths, which makes these algorithms more efficient.

Here, the goal state can be reached by using a heuristic function.

The heuristic function is used to achieve the goal state at the lowest cost possible. This function estimates how close a state is to the goal.

1. Greedy best-first search algorithm

Greedy best-first search combines properties of depth-first search and breadth-first search. It traverses the tree by selecting the path that appears best at the moment; the closest path is selected using the heuristic function. Consider the graph below with the given heuristic values.

Here, A is the start node and H is the goal node.

Greedy best-first search starts with A and then examines the neighbours B and C. Here, the heuristic of B is 12 and that of C is 4. The best option at the moment is C, so it goes to C. From C, it explores the neighbours F and G; the heuristic of F is 8 and that of G is 2, so it goes to G. From G, it goes to H, whose heuristic is 0, which is also our goal state.

The path of traversal is

A —-> C —-> G —-> H

The time complexity of greedy best-first search is O(b^m) in the worst case.

Advantages of Greedy best-first search

 Greedy best-first search is more efficient than breadth-first search and depth-first search.

Disadvantages of Greedy best-first search

 In the worst-case scenario, the greedy best-first search algorithm may behave like an unguided DFS.
 There is a possibility for greedy best-first search to get trapped in an infinite loop.
 The algorithm is not optimal.
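A sketch reproducing the traversal above. The heuristic values for B, C, F, G, and H come from the example; the graph edges and the heuristic for A are my assumptions, since the example never uses them:

```python
# An illustrative greedy best-first search: always expand the frontier node
# with the smallest heuristic value h(n), ignoring path cost entirely.
import heapq

def greedy_best_first(graph, h, start, goal):
    frontier = [(h[start], start, [start])]   # ordered by heuristic only
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in graph.get(node, []):
            heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
    return None

graph = {'A': ['B', 'C'], 'C': ['F', 'G'], 'G': ['H']}
h = {'A': 13, 'B': 12, 'C': 4, 'F': 8, 'G': 2, 'H': 0}   # h(A) is arbitrary
print(greedy_best_first(graph, h, 'A', 'H'))   # ['A', 'C', 'G', 'H']
```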

2. A* search algorithm
The A* search algorithm combines uniform cost search and greedy best-first search, using the advantages of both. It uses a heuristic function to find the shortest path: A* uses the sum of the path cost and the heuristic value of a node to find the best path.

Consider the following graph with the heuristics values as follows.


Let A be the start node and H be the goal node.

First, the algorithm will start with A. From A, it can go to B, C, H.

Note the point that A* search uses the sum of path cost and heuristics value to
determine the path.

Here, from A to B, the sum of cost and heuristics is 1 + 3 = 4.

From A to C, it is 2 + 4 = 6.

From A to H, it is 7 + 0 = 7.

Here, the lowest cost is 4 and the path A to B is chosen. The other paths will be
on hold.

Now, from B, it can go to D or E.

From A to B to D, the cost is 1 + 4 + 2 = 7.

From A to B to E, it is 1 + 6 + 6 = 13.

The lowest of these is 7, for the path A to B to D, which is then compared with the other paths on hold.

Here, the path A to C has a lower cost, namely 6. Hence, A to C is chosen and the other paths are kept on hold.

From C, it can now go to F or G.

From A to C to F, the cost is 2 + 3 + 3 = 8.

From A to C to G, the cost is 2 + 2 + 1 = 5.


The lowest cost is 5 which is also lesser than other paths which are on hold.
Hence, path A to G is chosen.

From G, it can go to H, for which the total is 2 + 2 + 2 + 0 = 6.

Here, 6 is less than the costs of the other paths on hold.

Also, H is our goal state. The algorithm terminates here.

The path of traversal is

A —-> C —-> G —-> H

The time complexity of the A* search is O(b^d), where b is the branching factor and d is the depth of the solution.

Advantages of A* search algorithm

 This algorithm performs best when compared with the other algorithms discussed.
 It can be used to solve very complex problems, and it is optimal (when the heuristic is admissible).

Disadvantages of A* search algorithm

 A* search is based on heuristics and cost; with a poor (inadmissible) heuristic it may not produce the shortest path.
 Memory usage is high, as it keeps all generated nodes in memory.
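A sketch reproducing the worked example. The edge costs and heuristic values are taken from the sums above; the heuristic for A is arbitrary, since the example never uses it:

```python
# An illustrative A* search using f(n) = g(n) + h(n): path cost so far
# plus the heuristic estimate to the goal.
import heapq

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                             # already reached more cheaply
        best_g[node] = g
        for nxt, step in graph.get(node, []):
            heapq.heappush(frontier,
                           (g + step + h[nxt], g + step, nxt, path + [nxt]))
    return None

graph = {'A': [('B', 1), ('C', 2), ('H', 7)],
         'B': [('D', 4), ('E', 6)],
         'C': [('F', 3), ('G', 2)],
         'G': [('H', 2)]}
h = {'A': 5, 'B': 3, 'C': 4, 'D': 2, 'E': 6, 'F': 3, 'G': 1, 'H': 0}
print(a_star(graph, h, 'A', 'H'))   # (6, ['A', 'C', 'G', 'H'])
```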

Simulated Annealing
Simulated Annealing (SA) is an effective and general form of optimization. It
is useful in finding global optima in the presence of large numbers of local
optima. “Annealing” refers to an analogy with thermodynamics, specifically
with the way that metals cool and anneal. Simulated annealing uses the
objective function of an optimization problem instead of the energy of a
material.

Implementation of SA is surprisingly simple. The algorithm is basically hill-climbing, except that instead of picking the best move, it picks a random move. If the selected move improves the solution, it is always accepted. Otherwise, the algorithm makes the move anyway with some probability less than 1. The probability decreases exponentially with the “badness” of the move, which is the amount deltaE by which the solution is worsened (i.e., the energy is increased):

Prob(accepting uphill move) ≈ exp(−deltaE / (kT))

A parameter T is also used to determine this probability; it is analogous to temperature in an annealing system. At higher values of T, uphill moves are more likely to occur. As T tends to zero, they become more and more unlikely, until the algorithm behaves more or less like hill-climbing. In a typical SA optimization, T starts high and is gradually decreased according to an “annealing schedule”. The parameter k is some constant that relates temperature to energy (in nature it is Boltzmann’s constant).

Simulated annealing is typically used in discrete, but very large, configuration spaces, such as the set of possible orders of cities in the Traveling Salesman problem and in VLSI routing. It has a broad range of applications that is still being explored.

A simulated annealing algorithm is a method to solve bound-constrained and unconstrained optimization models. The method is based on physical annealing and is used to minimize the system energy.

In every simulated annealing example, a random new point is generated. The distance between the current point and the new point is drawn from a probability distribution whose scale is proportional to the temperature. The algorithm accepts all points that lower the objective, subject to certain constraints and probabilities; points that raise the objective are also accepted, so as to explore all possible solutions instead of concentrating only on local minima.

Optimization by simulated annealing is performed by systematically decreasing the temperature and narrowing the extent of the search.

There is a set of steps that simulated annealing in AI performs. These steps can be summarized as follows (a minimal sketch follows the list):

 Simulated annealing creates a trial point randomly. The algorithm selects the distance between the current point and the trial point from a probability distribution whose scale is the temperature. The annealing function sets the distribution of trial-point distances. To keep within the boundaries, the trial point is shifted gradually.
 The simulated annealing acceptance rule then determines whether the new point is better than the old one. If the new point is better, it becomes the next point; if it is worse, it can still be accepted, depending on the simulated annealing acceptance function.
 The algorithm systematically and gradually reduces the temperature, keeping the best point generated during the process.
 The annealing parameters are set for lowering the values, raising and reducing the temperature. The simulated annealing parameters are based on the values of the probable gradients of every dimension of the objective.
 Simulated annealing concludes when it reaches the lowest minimum or meets any of the specified stopping criteria.
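A minimal sketch of this loop for minimizing a one-dimensional objective. The objective function, the neighbour move, and the geometric cooling schedule are all assumptions for illustration:

```python
# A minimal simulated annealing sketch for minimizing a 1-D objective.
import math, random

def simulated_annealing(f, x=0.0, T=1.0, cooling=0.99, steps=10_000):
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)     # random trial point
        delta_e = f(candidate) - f(x)             # "badness" of the move
        # Always accept improvements; accept uphill moves with
        # probability exp(-delta_e / T), which shrinks as T cools.
        if delta_e < 0 or random.random() < math.exp(-delta_e / T):
            x = candidate
        if f(x) < f(best):
            best = x                              # remember the best point seen
        T *= cooling                              # annealing schedule
    return best

# A bumpy function with many local minima; global minimum near x = -0.3.
f = lambda x: x**2 + 3 * math.sin(5 * x)
print(round(simulated_annealing(f), 2))
```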

Hill Climbing Algorithm

o The hill climbing algorithm is a local search algorithm that continuously moves in the direction of increasing elevation/value to find the peak of the mountain, i.e., the best solution to the problem. It terminates when it reaches a peak value where no neighbour has a higher value.
o The hill climbing algorithm is a technique used for optimizing mathematical problems. One of the widely discussed examples of the hill climbing algorithm is the Traveling Salesman Problem, in which we need to minimize the distance travelled by the salesman.
o It is also called greedy local search, as it only looks to its good immediate neighbour state and not beyond that.
o A node of the hill climbing algorithm has two components: state and value.
o Hill climbing is mostly used when a good heuristic is available.
o In this algorithm, we don't need to maintain and handle a search tree or graph, as it only keeps a single current state.

Features of Hill Climbing:

The following are some main features of the hill climbing algorithm (a minimal sketch follows the list):

o Generate and Test variant: Hill climbing is a variant of the Generate and Test method. The Generate and Test method produces feedback which helps to decide which direction to move in the search space.
o Greedy approach: The hill climbing search moves in the direction that optimizes the cost.
o No backtracking: It does not backtrack in the search space, as it does not remember previous states.
o Feedback mechanism: The algorithm has a feedback mechanism that helps it decide on the direction of movement (whether up or down the hill). The feedback mechanism is enhanced through the generate-and-test technique.
o Incremental change: The algorithm improves the current solution by incremental changes.
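A minimal sketch of steepest-ascent hill climbing. The objective and the unit-step neighbourhood are assumptions chosen purely for illustration:

```python
# A minimal hill-climbing sketch: keep only the current state and move to
# the best neighbour until no neighbour is better (a peak is reached).

def hill_climb(value, neighbours, state):
    while True:
        best = max(neighbours(state), key=value, default=state)
        if value(best) <= value(state):
            return state                  # peak (possibly only a local maximum)
        state = best

# Maximize a 1-D objective over integer states with unit-step neighbours.
value = lambda x: -(x - 7) ** 2 + 50      # single peak at x = 7
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(value, neighbours, 0))   # 7
```

Note that with a bumpier objective the same loop would stop at whichever local maximum it reaches first, which is exactly the limitation the state-space discussion below describes.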

State-space Diagram for Hill Climbing:

The state-space landscape is a graphical representation of the hill-climbing algorithm, showing a graph between the various states of the algorithm and the objective function/cost.

On the Y-axis we have the function, which can be an objective function or a cost function, and the state space on the X-axis. If the function on the Y-axis is cost, then the goal of the search is to find the global minimum (and local minima). If the function on the Y-axis is an objective function, then the goal of the search is to find the global maximum (and local maxima).

Different regions in the state space landscape:

Local Maximum: Local maximum is a state which is better than its neighbour
states, but there is also another state which is higher than it.

Global Maximum: Global maximum is the best possible state of state space
landscape. It has the highest value of objective function.

Current state: It is the state in the landscape diagram where the agent is currently present.

Flat local maximum: It is a flat space in the landscape where all the neighbour
states of current states have the same value.

Shoulder: It is a plateau region which has an uphill edge.

Game Playing

Game playing is an important domain of artificial intelligence. Games don’t require much knowledge; the only knowledge we need to provide is the rules, the legal moves, and the conditions for winning or losing the game.
Both players try to win the game, so both of them try to make the best possible move at each turn. Searching techniques like BFS (Breadth-First Search) are not suitable for this, as the branching factor is very high, so searching would take a lot of time. We therefore need other search procedures that improve:
 the generate procedure, so that only good moves are generated;
 the test procedure, so that the best move can be explored first.
The most common search technique in game playing is the Minimax search procedure. It is a depth-first, depth-limited search procedure. It is used for games like chess and tic-tac-toe.

Mini-Max Algorithm
o The mini-max algorithm is a recursive or backtracking algorithm used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
o The mini-max algorithm uses recursion to search through the game tree.
o The min-max algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various other two-player games. This algorithm computes the minimax decision for the current state.
o In this algorithm, two players play the game; one is called MAX and the other is called MIN.
o The players are adversaries: each tries to gain the maximum benefit while leaving the opponent the minimum benefit.
o Both players are opponents of each other; MAX selects the maximized value and MIN selects the minimized value.
o The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
o The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.

Working of Min-Max Algorithm:

o The working of the minimax algorithm can be easily described using an example. Below we have taken an example of a game tree representing a two-player game.
o In this example there are two players, one called Maximizer and the other called Minimizer.
o The Maximizer will try to get the maximum possible score, and the Minimizer will try to get the minimum possible score.
o This algorithm applies DFS, so in this game tree we have to go all the way down to the leaves to reach the terminal nodes.
o At the terminal nodes, the terminal values are given, so we compare those values and backtrack up the tree until the initial state is reached. The following are the main steps involved in solving the two-player game tree:

Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram below, let A be the initial state of the tree. Suppose the maximizer takes the first turn, with a worst-case initial value of −∞, and the minimizer takes the next turn, with a worst-case initial value of +∞.

Step 2: Now, we first find the utility values for the maximizer. Its initial value is −∞, so we compare each terminal value with this initial value and determine the values of the higher-level nodes. It takes the maximum among them all.

o For node D: max(−1, 4) = 4
o For node E: max(2, 6) = 6
o For node F: max(−3, −5) = −3
o For node G: max(0, 7) = 7

Step 3: In the next step, it is the minimizer's turn, so it compares the node values with +∞ and finds the third-layer node values.

o For node B: min(4, 6) = 4
o For node C: min(−3, 7) = −3

Step 4: Now it is the maximizer's turn again; it chooses the maximum of all the node values and finds the value of the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be more than 4 layers.

o For node A: max(4, −3) = 4

That was the complete workflow of the minimax two-player game. A compact sketch follows.
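A compact sketch of the algorithm on the example tree above, with the terminal values −1, 4, 2, 6, −3, −5, 0, 7 under D, E, F, and G; the nested-list tree representation is my assumption:

```python
# An illustrative minimax: leaves are numbers, internal nodes are lists.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):           # terminal node: utility value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A = [B, C], B = [D, E], C = [F, G], leaves as in the worked example.
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))   # 4 = max(min(4, 6), min(-3, 7)) = max(4, -3)
```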

Properties of Mini-Max algorithm:

o Complete: The min-max algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
o Optimal: The min-max algorithm is optimal if both opponents play optimally.
o Time complexity: As it performs DFS on the game tree, the time complexity of the min-max algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
o Space Complexity: The space complexity of the mini-max algorithm is similar to that of DFS, namely O(bm).

Limitation of the minimax Algorithm:

The main drawback of the minimax algorithm is that it becomes really slow for complex games such as chess, Go, etc. These games have a huge branching factor, and the player has many choices to consider.

Alpha-Beta Pruning
o Alpha-beta pruning is a modified version of the minimax algorithm; it is an optimization technique for minimax.
o As we saw with the minimax search algorithm, the number of game states it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent, but we can effectively cut it in half. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning. It is also known as the Alpha-Beta Algorithm.
o Alpha-beta pruning can be applied at any depth of a tree; sometimes it prunes not only leaves but entire sub-trees.
o The two parameters can be defined as:

a. Alpha: The best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is −∞.
b. Beta: The best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.

o Alpha-beta pruning returns the same move as the standard minimax algorithm does, but it removes the nodes that do not really affect the final decision and only make the algorithm slow. Pruning these nodes makes the algorithm fast.

Condition for Alpha-beta pruning:

The main condition required for alpha-beta pruning is:

1. α>=β
Key points about alpha-beta pruning:
o The Max player only updates the value of alpha.
o The Min player only updates the value of beta.
o While backtracking the tree, node values are passed to the upper nodes, not the values of alpha and beta.
o Alpha and beta values are passed only down to the child nodes.

Working of Alpha-Beta Pruning:

Let us take an example of a two-player search tree to understand the working of alpha-beta pruning.

Step 1: In the first step, the Max player makes the first move from node A, where α = −∞ and β = +∞. These values of alpha and beta are passed down to node B, where again α = −∞ and β = +∞, and node B passes the same values to its child D.

Step 2: At node D, the value of α is calculated, as it is Max's turn. The value of α is compared first with 2 and then with 3; max(2, 3) = 3 becomes the value of α at node D, and the node value is also 3.

Step 3: The algorithm now backtracks to node B, where the value of β changes, as this is Min's turn. β = +∞ is compared with the available subsequent node value, i.e., min(∞, 3) = 3; hence at node B now α = −∞ and β = 3.

In the next step, the algorithm traverses the next successor of node B, which is node E, and the values α = −∞ and β = 3 are passed along.

Step 4: At node E, Max takes its turn, and the value of alpha changes. The current value of alpha is compared with 5, so max(−∞, 5) = 5; hence at node E, α = 5 and β = 3, where α >= β, so the right successor of E is pruned and the algorithm does not traverse it. The value at node E is 5.

Step 5: In the next step, the algorithm again backtracks the tree, from node B to node A. At node A the value of alpha is changed; the maximum available value is 3, as max(−∞, 3) = 3, and β = +∞. These two values are now passed to the right successor of A, which is node C.

At node C, α = 3 and β = +∞, and the same values are passed on to node F.

Step 6: At node F, the value of α is again compared, first with the left child, 0, giving max(3, 0) = 3, and then with the right child, 1, giving max(3, 1) = 3. α remains 3, but the node value of F becomes 1.

Step 7: Node F returns the node value 1 to node C. At C, α = 3 and β = +∞; here the value of beta changes: comparing with 1, min(∞, 1) = 1. Now at C, α = 3 and β = 1, which again satisfies the condition α >= β, so the next child of C, node G, is pruned, and the algorithm does not compute the entire sub-tree under G.

Step 8: C now returns the value 1 to A. The best value for A is max(3, 1) = 3. The final game tree shows which nodes were computed and which nodes were never computed. Hence the optimal value for the maximizer is 3 in this example. A compact implementation sketch follows.
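A sketch of the algorithm on the example above, using the same nested-list tree representation as the minimax sketch. The values of E's pruned right child (9 here) and of G's children (7, 5) are arbitrary assumptions, since the example never examines them:

```python
# An illustrative alpha-beta minimax: leaves are numbers, internal nodes
# are lists; remaining children are pruned as soon as alpha >= beta.

def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    if isinstance(node, (int, float)):       # terminal node: utility value
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)        # Max only updates alpha
            if alpha >= beta:
                break                        # prune remaining children
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)          # Min only updates beta
            if alpha >= beta:
                break                        # prune remaining children
        return value

# A -> B, C; B -> D(2, 3), E(5, 9); C -> F(0, 1), G(7, 5).
tree = [[[2, 3], [5, 9]], [[0, 1], [7, 5]]]
print(alphabeta(tree))   # 3, with E's right child and all of G pruned
```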

Move Ordering in Alpha-Beta pruning:

The effectiveness of alpha-beta pruning is highly dependent on the order in which the nodes are examined. Move ordering is an important aspect of alpha-beta pruning.

It can be of two types:

o Worst ordering: In some cases, the alpha-beta pruning algorithm does not prune any of the leaves of the tree and works exactly like the minimax algorithm. In this case it also consumes more time, because of the alpha-beta bookkeeping; such an ordering is called worst ordering. Here, the best move occurs on the right side of the tree. The time complexity for such an ordering is O(b^m).
o Ideal ordering: Ideal ordering for alpha-beta pruning occurs when a lot of pruning happens in the tree and the best moves occur on the left side of the tree. Since we apply DFS, it searches the left of the tree first and can go twice as deep as the minimax algorithm in the same amount of time. The complexity with ideal ordering is O(b^(m/2)).

Rules to find good ordering:

The following are some rules for finding good ordering in alpha-beta pruning:

o Try the best move from the shallowest node first.
o Order the nodes in the tree so that the best nodes are checked first.
o Use domain knowledge while finding the best move; e.g., in chess, try this order: captures first, then threats, then forward moves, then backward moves.
o We can keep track of the states already seen, as there is a possibility that states may repeat.
