Module 3
*********************************************************************************
SYLLABUS- MODULE III Adversarial search - Games, Optimal decisions in games, The Minimax
algorithm, Alpha-Beta pruning. Constraint Satisfaction Problems – Defining CSP, Constraint
Propagation- inference in CSPs, Backtracking search for CSPs, Structure
of CSP problems.
***************************************************************************
ADVERSARIAL SEARCH
In which we examine the problems that arise when we try to plan ahead in a world where other
agents are planning against us.
GAMES
Games are competitive environments in which the agents’ goals are in conflict. Games are
easy to formalize, and they can be good models of real-world competitive or cooperative
activities, e.g., military confrontations, negotiations, auctions, etc.
AIT 307 Introduction to Artificial Intelligence
We need an algorithm to find the optimal move, and a way to choose a good move when time
is limited. Pruning allows us to ignore portions of the search tree that make no difference to
the final choice, and heuristic evaluation functions allow us to approximate the true utility of
a state without doing a complete search.
MINIMAX
In a two-player game we call the players MAX and MIN. MAX moves first, and then they
take turns moving until the game is over. At the end of the game, points are awarded to the
winning player and penalties are given to the loser. A game can be formally defined as a kind
of search problem with the following elements: the initial state, PLAYER(s) (whose turn it is),
ACTIONS(s) (the legal moves), RESULT(s, a) (the transition model), a terminal test, and a
utility function that applies to terminal states.
Game tree
The initial state, ACTIONS function, and RESULT function define the game tree for the game:
a tree where the nodes are game states and the edges are moves. In tic-tac-toe, from the initial
state MAX has nine possible moves.
Play alternates between MAX’s placing an X and MIN’s placing an O until we reach leaf nodes
corresponding to terminal states such that one player has three in a row or all the squares are
filled. The number on each leaf node indicates the utility value of the terminal state from the
point of view of MAX; high values are assumed to be good for MAX and bad for MIN.
A search tree is a tree superimposed on the full game tree that examines enough nodes to
allow a player to determine what move to make.
Each node in the tree is labeled with its minimax value. The terminal nodes on the bottom
level get their utility values from the game’s UTILITY function; each MAX node takes the
maximum of its children’s values, and each MIN node takes the minimum. The minimax
decision is the optimal choice for MAX at the root: the move that leads to the state with the
highest minimax value.
If the maximum depth of the tree is m and there are b legal moves at each point, the time
complexity of minimax is O(b^m). The space complexity is O(bm) for an algorithm that
generates all actions at once, or O(m) for an algorithm that generates actions one at a time.
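The recursion described above can be sketched in a few lines of Python. The tree layout, node names, and leaf utilities below are purely illustrative (the familiar three-branch teaching example); the function itself is a straightforward depth-first minimax.

```python
# A tiny two-ply game tree: MAX chooses at 'A', MIN chooses at 'B', 'C', 'D'.
# Integers are terminal states whose value is their utility for MAX.
TREE = {
    'A': ['B', 'C', 'D'],
    'B': [3, 12, 8],
    'C': [2, 4, 6],
    'D': [14, 5, 2],
}

def minimax(node, maximizing):
    """Return the minimax value of `node` by depth-first enumeration."""
    if isinstance(node, int):                  # terminal state: UTILITY value
        return node
    values = [minimax(child, not maximizing) for child in TREE[node]]
    return max(values) if maximizing else min(values)

print(minimax('A', True))  # → 3: MAX's best move leads to B, worth min(3, 12, 8) = 3
```

With depth m and branching factor b this explores O(b^m) nodes, matching the complexity bound above.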
The backed-up value of a node n is always the utility vector of the successor state with the
highest value for the player choosing at n.
Alpha–beta search updates the values of α and β as it goes along and prunes the remaining
branches at a node (i.e., terminates the recursive call) as soon as the value of the current node
is known to be worse than the current α or β value for MAX or MIN, respectively.
The outcome is that we can identify the minimax decision without ever evaluating two of the
leaf nodes.
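A minimal alpha–beta sketch over the same kind of explicit tree (node names and leaf values again illustrative, not from any particular game). The `visited` list records which leaves are actually evaluated, so the effect of pruning is observable:

```python
import math

TREE = {'A': ['B', 'C', 'D'], 'B': [3, 12, 8], 'C': [2, 4, 6], 'D': [14, 5, 2]}
visited = []  # leaves actually evaluated, to observe the effect of pruning

def alphabeta(node, alpha, beta, maximizing):
    """Minimax value of `node`, pruning branches that cannot affect the result."""
    if isinstance(node, int):
        visited.append(node)
        return node
    if maximizing:
        value = -math.inf
        for child in TREE[node]:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:    # MIN above would never allow this branch
                break
        return value
    value = math.inf
    for child in TREE[node]:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:        # MAX above already has something at least this good
            break
    return value

best = alphabeta('A', -math.inf, math.inf, True)
print(best, visited)  # → 3 [3, 12, 8, 2, 14, 5, 2]
```

The answer matches plain minimax, but two leaves (the 4 and 6 under C) are never evaluated, exactly the two pruned leaf nodes mentioned above.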
Let the two unevaluated successors of node C have values x and y. Then the value of the root
node is given by
MINIMAX(root) = max(min(3, 12, 8), min(2, x, y), min(14, 5, 2))
= max(3, min(2, x, y), 2)
= max(3, z, 2) where z = min(2, x, y) ≤ 2
= 3.
In other words, the value of the root, and hence the minimax decision, are independent of the
values of the pruned leaves x and y.
Move ordering
The effectiveness of alpha–beta pruning is highly dependent on the order in which the states
are examined. For example, we could not prune any successors of D at all because the worst
successors (from the point of view of MIN) were generated first. If the third successor of D
had been generated first, we would have been able to prune the other two. It might be
worthwhile to try to examine first the successors that are likely to be best.
Solving a CSP
To solve a CSP we need to define a state space and the notion of a solution. Each state in a
CSP is defined by an assignment of values to some or all of the variables,
{Xi = vi, Xj = vj, ...}. An assignment that does not violate any constraints is called a
consistent or legal assignment.
A complete assignment is one in which every variable is assigned, and
A solution to a CSP is a consistent, complete assignment
A partial assignment is one that assigns values to only some of the variables.
Example problem: Map coloring
We are given the task of coloring each region either red, green, or blue in such a way that no
neighboring regions have the same color.
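A hedged sketch of this formulation in Python (the region names follow the standard Australia example; the `consistent` helper is our own, not a library function):

```python
# Map-coloring CSP: variables are regions, domains are colors, and each
# pair of neighboring regions is constrained to take different colors.
NEIGHBORS = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
DOMAIN = ['red', 'green', 'blue']  # every variable's domain

def consistent(assignment):
    """True iff no constraint is violated by this (possibly partial) assignment."""
    return all(assignment[r] != assignment[n]
               for r in assignment for n in NEIGHBORS[r] if n in assignment)

solution = {'WA': 'red', 'NT': 'green', 'SA': 'blue', 'Q': 'red',
            'NSW': 'green', 'V': 'red', 'T': 'red'}
print(consistent(solution))                    # → True: complete and consistent
print(consistent({'WA': 'red', 'NT': 'red'}))  # → False: WA and NT are neighbors
```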
DISJUNCTIVE CONSTRAINT
Suppose we have four workers to install wheels, but they have to share one tool that helps put
the axle in place. We need a disjunctive constraint to say that AxleF and AxleB must not
overlap in time; either one comes first or the other does:
(AxleF + 10 ≤ AxleB) or
(AxleB + 10 ≤ AxleF )
For every variable except Inspect we add a constraint of the form X + dX ≤ Inspect. Finally,
suppose there is a requirement to get the whole assembly done in 30 minutes. We can achieve
that by limiting the domain of all variables: Di = {1, 2, 3,..., 27}. This particular problem is
trivial to solve, but CSPs have been applied to job-shop scheduling problems like this with
thousands of variables.
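The disjunctive constraint can be checked directly in code; the function name below is ours, and the 10-minute duration follows the example above.

```python
def no_overlap(axle_f, axle_b, duration=10):
    """Disjunctive constraint: the two axle tasks must not overlap in time.
    True iff one task finishes before the other starts."""
    return axle_f + duration <= axle_b or axle_b + duration <= axle_f

print(no_overlap(1, 11))  # → True: AxleF runs 1..11, finishing as AxleB starts
print(no_overlap(1, 5))   # → False: intervals 1..11 and 5..15 overlap
```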
Complexity of AC-3
Assume a CSP with n variables,
◦ each with domain size at most d, and
◦ with c binary constraints (arcs).
◦ Each arc (Xk, Xi) can be inserted in the queue only d times because Xi has at
most d values to delete.
Checking consistency of an arc can be done in O(d^2) time, so we get O(cd^3) total worst-case
time.
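A compact AC-3 sketch (function and parameter names are ours). It matches the analysis above: REVISE checks one arc in O(d^2), and each arc can re-enter the queue at most d times.

```python
from collections import deque

def revise(domains, xi, xj, constraint):
    """Delete values of Xi that have no supporting value in Xj's domain."""
    revised = False
    for a in list(domains[xi]):
        if not any(constraint(xi, a, xj, b) for b in domains[xj]):
            domains[xi].remove(a)
            revised = True
    return revised

def ac3(domains, neighbors, constraint):
    """Make a binary CSP arc-consistent in place; False if a domain empties.
    `constraint(x, a, y, b)` is True iff X=a, Y=b is allowed."""
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, xi, xj, constraint):
            if not domains[xi]:
                return False                    # wipeout: no solution
            for xk in neighbors[xi]:
                if xk != xj:
                    queue.append((xk, xi))      # re-check arcs into Xi
    return True

# X < Y with domains {0..3}: AC-3 trims the unsupportable endpoints.
doms = {'X': {0, 1, 2, 3}, 'Y': {0, 1, 2, 3}}
nbrs = {'X': ['Y'], 'Y': ['X']}
less = lambda xi, a, xj, b: a < b if (xi, xj) == ('X', 'Y') else b < a
print(ac3(doms, nbrs, less), doms)  # → True {'X': {0, 1, 2}, 'Y': {1, 2, 3}}
```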
Generalized arc (hyperarc) consistency
Generalized arc consistency extends arc consistency to handle n-ary rather than just binary
constraints. A variable Xi is generalized arc consistent with respect to an n-ary constraint
◦ if for every value v in the domain of Xi there exists a tuple of values that is a
member of the constraint, has all its values taken from the domains of the
corresponding variables, and has its Xi component equal to v
For example, if all variables have the domain {0, 1, 2, 3}, then to make the variable X consistent
with the constraint X<Y <Z, we would have to eliminate 2 and 3 from the domain of X because
the constraint cannot be satisfied when X is 2 or 3.
Arc consistency can go a long way toward reducing the domains of variables, sometimes
finding a solution (by reducing every domain to size 1) and sometimes finding that the CSP
cannot be solved (by reducing some domain to size 0). But for other networks, arc consistency
fails to make enough inferences.
Consider the map-coloring problem on Australia, but with only two colors allowed, red and
blue. Arc consistency can do nothing because every variable is already arc consistent: each
can be red with blue at the other end of the arc (or vice versa). But clearly there is no solution
to the problem: because Western Australia, Northern Territory and South Australia all touch
each other, we need at least three colors for them alone. Arc consistency tightens down the
domains using the arcs; to make progress on problems like this, we need a stronger notion of
consistency.
Path consistency
Path consistency tightens the binary constraints by using implicit constraints that are inferred
by looking at triples of variables. A two-variable set {Xi, Xj} is path-consistent with respect to
a third variable Xm if,
for every assignment {Xi = a, Xj = b} consistent with the constraints on {Xi, Xj},
there is an assignment to Xm that satisfies the constraints on {Xi, Xm} and {Xm, Xj}.
This is called path consistency because one can think of it as looking at a path from Xi to Xj
with Xm in the middle.
How path consistency fares in coloring the Australia map with two colors
We will make the set {WA, SA} path consistent with respect to NT. We start by enumerating
the consistent assignments to the set. In this case, there are only two:
{WA = red, SA = blue} and {WA = blue, SA = red}.
We can see that with both of these assignments NT can be neither red nor blue because it would
conflict with either WA or SA. Because there is no valid choice for NT, we eliminate both
assignments, and we end up with no valid assignments for {WA, SA}. Therefore, we know
that there can be no solution to this problem.
K-consistency
Stronger forms of propagation can be defined with the notion of k-consistency. A CSP is k-
consistent if, for any set of k − 1 variables and for any consistent assignment to those variables,
a consistent value can always be assigned to any kth variable.
1-consistency says that, given the empty set, we can make any set of one
variable consistent: this is what we called node consistency.
2-consistency is the same as arc consistency.
For binary constraint networks, 3-consistency is the same as path consistency.
A CSP is strongly k-consistent if it is k-consistent and is also (k − 1)-consistent, (k − 2)-
consistent, ... all the way down to 1-consistent.
Suppose we have a CSP with n nodes and make it strongly n-consistent (i.e., strongly k-
consistent for k = n).
We can then solve the problem as follows:
◦ First, we choose a consistent value for X1.
◦ We are then guaranteed to be able to choose a value for X2 because the graph
is 2-consistent, a value for X3 because it is 3-consistent, and so on.
BOUNDS PROPAGATION
For large resource-limited problems it is usually not possible to represent the domain of each
variable as a large set of integers and gradually reduce that set by consistency-checking
methods, e.g., logistical problems involving moving thousands of people in hundreds of
vehicles.
Instead, domains are represented by upper and lower bounds and are managed by bounds
propagation. For example, in an airline-scheduling problem, suppose there are two flights,
F1 and F2, for which the planes have capacities 165 and 385, respectively. The initial domains
for the numbers of passengers on each flight are then D1 = [0, 165] and D2 = [0, 385].
Now suppose we have the additional constraint that the two flights together must carry 420
people: F1 + F2 = 420. Propagating bounds constraints, we reduce the domains to
D1 = [35, 165] and D2 = [255, 385].
We say that a CSP is bounds consistent if for every variable X, and for both the lower-bound
and upper-bound values of X, there exists some value of Y that satisfies the constraint between
X and Y for every variable Y. This kind of bounds propagation is widely used in practical
constraint problems.
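One propagation step for the flight example can be written out directly. The function below is a sketch for the single constraint X + Y = total (its name is ours):

```python
def propagate_sum(d1, d2, total):
    """Tighten the [lo, hi] bounds of two variables under X + Y == total."""
    lo1, hi1 = d1
    lo2, hi2 = d2
    lo1 = max(lo1, total - hi2)   # X can't be smaller than total - max(Y)
    hi1 = min(hi1, total - lo2)   # X can't be larger than total - min(Y)
    lo2 = max(lo2, total - hi1)
    hi2 = min(hi2, total - lo1)
    return (lo1, hi1), (lo2, hi2)

# Flight capacities 165 and 385, total demand 420:
print(propagate_sum((0, 165), (0, 385), 420))  # → ((35, 165), (255, 385))
```

This reproduces the domains D1 = [35, 165] and D2 = [255, 385] derived above.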
Sudoku
A Sudoku board consists of 81 squares, some of which are initially filled with digits from 1 to
9. The puzzle is to fill in all the remaining squares such that no digit appears twice in any row,
column, or 3 × 3 box. A row, column, or box is called a unit. Even the hardest Sudoku problems
yield to a CSP solver in less than 0.1 second.
A Sudoku puzzle can be considered a CSP with 81 variables, one for each square. We use the
variable names A1 through A9 for the top row (left to right), down to I1 through I9 for the
bottom row. The empty squares have the domain {1, 2, 3, 4, 5, 6, 7, 8, 9} and the pre-filled
squares have a domain consisting of a single value.
In addition, there are 27 different Alldiff constraints: one for each row, column, and box of 9
squares:
Alldiff(A1, A2, A3, A4, A5, A6, A7, A8, A9)
Alldiff(B1, B2, B3, B4, B5, B6, B7, B8, B9)
…
Alldiff(A1, B1, C1, D1, E1, F1, G1, H1, I1)
Alldiff(A2, B2, C2, D2, E2, F2, G2, H2, I2)
…
Alldiff(A1, A2, A3, B1, B2, B3, C1, C2, C3)
Alldiff(A4, A5, A6, B4, B5, B6, C4, C5, C6)
…
Assume that the Alldiff constraints have been expanded into binary constraints (such as
A1 ≠ A2) so that we can apply the AC-3 algorithm. Consider variable E6, the empty square
between the 2 and the 8 in the middle box.
From the constraints in the box, we can remove not only 2 and 8 but also 1 and 7 from E6’s
domain. From the constraints in its column, we can eliminate 5, 6, 2, 8, 9, and 3. That leaves
E6 with a domain of {4}; in other words, we know the answer for E6.
Now consider variable I6, the square in the bottom middle box surrounded by 1, 3, and 3.
Applying arc consistency in its column, we eliminate 5, 6, 2, 4 (since we now know E6 must
be 4), 8, 9, and 3. We eliminate 1 by arc consistency with I5 , and we are left with only the
value 7 in the domain of I6. Now there are 8 known values in column 6, so arc consistency can
infer that A6 must be 1. Inference continues along these lines, and eventually AC-3 can solve
the entire puzzle: all the variables have their domains reduced to a single value.
Sudoku problems are designed to be solved by inference over constraints. But many other CSPs
cannot be solved by inference alone; there comes a time when we must search for a solution.
Backtracking Search for CSP
Backtracking search algorithms work on partial assignments. A standard depth-limited search
could be applied: a state would be a partial assignment, and an action would be adding
var = value to the assignment. But for a CSP with n variables of domain size d, the branching
factor at the top level is nd, because any of d values can be assigned to any of n variables.
At the next level, the branching factor is (n − 1)d, and so on for n levels. We generate a tree
with n! · d^n leaves, even though there are only d^n possible complete assignments! Inference
can be interwoven with search.
The backtracking search algorithm addresses the following questions:
1) function SELECT-UNASSIGNED-VARIABLE: which variable should be assigned
next?
2) function ORDER-DOMAIN-VALUES: in what order should its values be tried?
For example, suppose that we have generated the partial assignment WA = red, NT = green
and that our next choice is for Q. Blue would be a bad choice because it eliminates the last
legal value left for Q’s neighbor, SA. The least-constraining-value heuristic therefore prefers
red to blue. The heuristic is trying to leave the maximum flexibility for subsequent variable
assignments.
Of course, if we are trying to find all the solutions to a problem, not just the first one, then the
ordering does not matter because we have to consider every value anyway. The same holds if
there are no solutions to the problem.
Interleaving search and inference
But inference can be even more powerful in the course of a search: every time we make a
choice of a value for a variable, we have a brand-new opportunity to infer new domain
reductions on the neighboring variables.
Forward checking
This is one of the simplest forms of inference. Whenever a variable X is assigned, the forward-
checking process establishes arc consistency for it: for each unassigned variable Y that is
connected to X by a constraint, delete from Y’s domain any value that is inconsistent with the
value chosen for X. There is no reason to do forward checking if we have already done arc
consistency as a preprocessing step.
There are two important points to notice about this example. First, notice that after WA = red
and Q = green are assigned, the domains of NT and SA are reduced to a single value; we have
eliminated branching on these variables altogether by propagating information from WA and
Q.
A second point to notice is that after V = blue, the domain of SA is empty. Hence, forward
checking has detected that the partial assignment {WA = red, Q = green, V = blue} is
inconsistent with the constraints of the problem, and the algorithm will therefore backtrack
immediately
For many problems the search will be more effective if we combine the MRV heuristic with
forward checking.
Forward checking only makes the current variable arc-consistent, but doesn’t look ahead and
make all the other variables arc-consistent.
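The WA = red, Q = green trace above can be reproduced with a small forward-checking sketch (function and variable names are ours):

```python
def forward_check(domains, neighbors, var, value):
    """Assign var=value and prune that value from each neighbor's domain.
    Returns the new domains, or None if some domain is wiped out."""
    domains = {v: set(d) for v, d in domains.items()}  # copy, don't mutate
    domains[var] = {value}
    for n in neighbors[var]:
        domains[n].discard(value)
        if not domains[n]:
            return None     # inconsistency detected: backtrack
    return domains

NEIGHBORS = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
             'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
             'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': []}
doms = {v: {'red', 'green', 'blue'} for v in NEIGHBORS}
doms = forward_check(doms, NEIGHBORS, 'WA', 'red')
doms = forward_check(doms, NEIGHBORS, 'Q', 'green')
print(doms['NT'], doms['SA'])  # both reduced to the single value blue
```

Note that NT and SA are both forced to blue even though they are neighbors: forward checking does not detect this inconsistency, which is exactly the weakness MAC addresses.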
MAC (Maintaining Arc Consistency) algorithm
The MAC algorithm is more powerful than forward checking and can detect such
inconsistencies. After a variable Xi is assigned a value, the INFERENCE procedure calls AC-3,
but instead of a queue of all arcs in the CSP, we start with only the arcs (Xj, Xi) for all Xj that
are unassigned variables that are neighbors of Xi. From there, AC-3 does constraint
propagation in the usual way, and if any variable has its domain reduced to the empty set, the
call to AC-3 fails and we know to backtrack immediately.
Chronological backtracking
The BACKTRACKING-SEARCH algorithm has a very simple policy for what to do when a
branch of the search fails: back up to the preceding variable and try a different value for it. This
is called chronological backtracking because the most recent decision point is revisited.
Consider an example with the fixed variable ordering Q, NSW, V, T, SA, WA, NT. Suppose we
have generated the partial assignment {Q = red, NSW = green, V = blue, T = red}. When we
try the next variable, SA, we see that every value violates a constraint. We back up to T and
try a new color for Tasmania! Obviously this is silly: recoloring Tasmania cannot possibly
resolve the problem with South Australia.
A better approach is to backtrack to a variable that was responsible for making one of the
possible values of the next variable (e.g., SA) impossible. The set (in this case
{Q = red, NSW = green, V = blue}) is called the conflict set for SA. The backjumping method
backtracks to the most recent assignment in the conflict set; in this case, backjumping would
jump over Tasmania and try a new value for V. This method is easily implemented by a
modification to BACKTRACK such that it accumulates the conflict set while checking for a
legal value to assign. If no legal value is found, the algorithm should return the most recent
element of the conflict set along with the failure indicator.
Intelligent backtracking: Looking backward
Forward checking can supply the conflict set with no extra work: whenever forward checking
based on an assignment X = x deletes a value from Y’s domain, it should add X = x to Y’s
conflict set.
If the last value is deleted from Y ’s domain, then the assignments in the conflict set of Y are
added to the conflict set of X. Then, when we get to Y , we know immediately where to
backtrack if needed. In fact, every branch pruned by backjumping is also pruned by forward
checking. Hence simple backjumping is redundant in a forward-checking search or in a search
that uses stronger consistency checking (such as MAC).
Backjumping notices failure when a variable’s domain becomes empty, but in many cases a
branch is doomed long before this occurs
Conflict-directed back jumping
Consider the partial assignment which is proved to be inconsistent: {WA = red, NSW = red}.
We try T = red next and then assign NT, Q, V, SA; no assignment can work for these last four
variables. Eventually we run out of values to try at NT, but simple backjumping cannot work
because NT doesn’t have a complete conflict set of preceding variables that caused it to fail.
The set {WA, NSW} is a deeper notion of the conflict set for NT: it is the set of preceding
variables that caused NT, together with any subsequent variables, to have no consistent
solution. So the algorithm should backtrack to NSW and skip over T.
A backjumping algorithm that uses conflict sets defined in this way is called conflict-directed
backjumping.
When a variable’s domain becomes empty, a “terminal” failure occurs, and that variable has a
standard conflict set. Let Xj be the current variable and let conf(Xj) be its conflict set. If every
possible value for Xj fails, backjump to the most recent variable Xi in conf(Xj), and set
conf(Xi) ← conf(Xi) ∪ conf(Xj) − {Xi}.
The conflict set for a variable means that there is no solution from that variable onward, given
the preceding assignment to the conflict set.
E.g., assign WA, NSW, T, NT, Q, V, SA.
SA fails, and its conflict set is {WA, NT, Q} (the standard conflict set).
Backjump to Q; its conflict set becomes
{NT, NSW} ∪ {WA, NT, Q} − {Q} = {WA, NT, NSW}.
That is, there is no solution from Q onward, given the preceding assignment to
{WA, NT, NSW}. Therefore, we backtrack to NT; its conflict set becomes
{WA} ∪ {WA, NT, NSW} − {NT} = {WA, NSW}.
Hence the algorithm backjumps to NSW (over T).
Constraint learning
After backjumping from a contradiction, how do we avoid running into the same problem
again? Constraint learning is the idea of finding a minimum set of variables from the conflict
set that causes the problem. This set of variables, along with their corresponding values, is
called a no-good. We then record the no-good, either by adding a new constraint to the CSP
or by keeping a separate cache of no-goods.
Backtracking occurs when no legal assignment can be found for a variable. Conflict-directed
backjumping backtracks directly to the source of the problem.
Consider the state {WA = red, NT = green, Q = blue}
Forward checking can tell us this state is a no-good because there is no valid assignment to SA.
In this particular case, recording the no-good would not help, because once we prune this
branch from the search tree, we will never encounter this combination again.
But suppose that the search tree above were actually part of a larger search tree that started by
first assigning values for V and T. Then it would be worthwhile to record
{WA = red, NT = green, Q = blue} as a no-good because we are going to run into the same
problem again for each possible set of assignments to V and T.
No-goods can be effectively used by forward checking or by backjumping. Constraint learning
is one of the most important techniques used by modern CSP solvers to achieve efficiency on
complex problems.
The structure of problems
The structure of the problem, as represented by the constraint graph, can be used to find
solutions quickly. E.g., the Australia problem can be decomposed into two independent
subproblems: coloring T and coloring the mainland.
Tree: A constraint graph is a tree when any two variables are connected by only one path.
Directed arc consistency (DAC): A CSP is defined to be directed arc-consistent under an
ordering of variables X1, X2, … , Xn if and only if every Xi is arc-consistent with each Xj for j>i.
By using DAC, any tree-structured CSP can be solved in time linear in the number of variables.
How to solve a tree-structure CSP:
Pick any variable to be the root of the tree, and choose an ordering of the variables such that
each variable appears after its parent in the tree (a topological sort). Any tree with n nodes has
n − 1 arcs, so we can make this graph directed arc-consistent in O(n) steps, each of which
must compare up to d possible domain values for two variables, for a total time of O(nd^2).
Once we have a directed arc-consistent graph, we can just march down the list of variables and
choose any remaining value. Since each link from a parent to its child is arc consistent, we
won’t have to backtrack, and can move linearly through the variables.
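The two-pass procedure can be sketched as follows (names are ours; `ok(a, b)` encodes the binary constraint between a parent value and a child value):

```python
def solve_tree_csp(parent, order, domains, ok):
    """Solve a tree-structured binary CSP.
    `order` is a topological ordering (each variable after its parent)."""
    # Pass 1 (backward): make each parent directed-arc-consistent with its child.
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = [a for a in domains[p]
                      if any(ok(a, b) for b in domains[child])]
        if not domains[p]:
            return None                       # a domain emptied: no solution
    # Pass 2 (forward): assign greedily; no backtracking is ever needed.
    assignment = {order[0]: domains[order[0]][0]}
    for child in order[1:]:
        assignment[child] = next(b for b in domains[child]
                                 if ok(assignment[parent[child]], b))
    return assignment

# A 3-node chain A - B - C with an inequality constraint and two colors.
parent = {'B': 'A', 'C': 'B'}
doms = {'A': ['red', 'blue'], 'B': ['red', 'blue'], 'C': ['red', 'blue']}
print(solve_tree_csp(parent, ['A', 'B', 'C'], doms, lambda a, b: a != b))
```

The backward pass performs n − 1 REVISE-like steps of at most d^2 comparisons each, giving the O(nd^2) total from above.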
There are two primary ways to reduce more general constraint graphs to trees:
1. Based on removing nodes (cutset conditioning);
2. Based on collapsing nodes together (tree decomposition).
Based on removing nodes
Example, We can delete SA from the graph by fixing a value for SA and deleting from the
domains of other variables any values that are inconsistent with the value chosen for SA.
The general algorithm: choose a subset S of the CSP’s variables such that the constraint graph
becomes a tree after removal of S; S is called a cycle cutset. For each assignment to the
variables of S that satisfies all constraints on S, remove from the domains of the remaining
variables any values that are inconsistent with the assignment for S, and solve the remaining
tree-structured CSP. This approach is called cutset conditioning.
Based on collapsing nodes together: a tree decomposition divides the constraint graph into a
set of connected subproblems.
First, view each subproblem as a “mega-variable” whose domain is the set of all solutions for
the subproblem. Then, solve the constraints connecting the subproblems using the efficient
algorithm for trees. A given constraint graph admits many tree decompositions; in choosing a
decomposition, the aim is to make the subproblems as small as possible.
Tree width
The tree width of a tree decomposition of a graph is one less than the size of the largest
subproblem. The tree width of the graph itself is the minimum tree width among all its tree
decompositions.
Time complexity: O(nd^(w+1)), where w is the tree width of the graph.
The complexity of solving a CSP is strongly related to the structure of its constraint graph.
Tree-structured problems can be solved in linear time. Cutset conditioning can reduce a
general CSP to a tree-structured one and is quite efficient if a small cutset can be found. Tree
decomposition techniques transform the CSP into a tree of subproblems and are efficient if
the tree width of constraint graph is small.
The structure in the values of variables
Consider the map-coloring problem with n colors: for every consistent solution, there is
actually a set of n! solutions formed by permuting the color names. This is called value
symmetry. By introducing a symmetry-breaking constraint, we can break the symmetry and
reduce the search space by a factor of n!. On the Australia map, WA, NT and SA must all have
different colors, so there are 3! = 6 ways to assign them.
We can impose an arbitrary ordering constraint NT < SA < WA that requires the three values
to be in alphabetical order. This constraint ensures that only one of the n! solutions is possible:
{NT = blue, SA = green, WA = red}. This is the symmetry-breaking constraint.
Cryptarithmetic example: TWO + TWO = FOUR
Variables: {T, W, O, F, U, R, C1, C2, C3}
Domain of {T, W, O, F, U, R} = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
Domain of {C1, C2, C3} = {0, 1}
SUMMARY
• A game can be defined by the initial state (how the board is set up), the legal actions in each
state, the result of each action, a terminal test (which says when the game is over), and a utility
function that applies to terminal states.
• In two-player zero-sum games with perfect information, the minimax algorithm can select
optimal moves by a depth-first enumeration of the game tree.
• The alpha–beta search algorithm computes the same optimal move as minimax, but achieves
much greater efficiency by eliminating subtrees that are provably irrelevant.
• Constraint satisfaction problems (CSPs) represent a state with a set of variable/value pairs
and represent the conditions for a solution by a set of constraints on the variables. Many
important real-world problems can be described as CSPs.
• A number of inference techniques use the constraints to infer which variable/value pairs are
consistent and which are not. These include node, arc, path, and k-consistency.
• Backtracking search, a form of depth-first search, is commonly used for solving CSPs.
Inference can be interwoven with search.
• The minimum-remaining-values and degree heuristics are domain-independent methods for
deciding which variable to choose next in a backtracking search. The least-constraining-value
heuristic helps in deciding which value to try first for a given variable.
• Backtracking occurs when no legal assignment can be found for a variable. Conflict-directed
backjumping backtracks directly to the source of the problem.
• Local search using the min-conflicts heuristic has also been applied to constraint satisfaction
problems with great success.
• The complexity of solving a CSP is strongly related to the structure of its constraint graph.
Tree-structured problems can be solved in linear time. Cutset conditioning can reduce a
general CSP to a tree-structured one and is quite efficient if a small cutset can be found. Tree
decomposition techniques transform the CSP into a tree of subproblems and are efficient if
the tree width of the constraint graph is small.
UNIVERSITY QUESTIONS
1. Explain the six elements of the search problem for the game tic-tac-toe.
2. What are the components of a map colouring CSP problem?
3. Explain the MINIMAX algorithm with an example.
4. What is a constraint satisfaction problem? Formulate the job-shop scheduling as a CSP
problem.
5. Solve the graph colouring problem for the graph given below using backtracking
search for CSP.
6.
7. Explain the following terms. a. Constraint Graph b. Topological Sort
8. Define node consistency with an example.
9. Define the term least constraining value in CSP
10. Explain backtracking search in CSPs using the example of 4-queens problem.
11. Illustrate the working of Minimax search procedure with an example.
12. Solve the following cryptarithmetic problem by hand, using the strategy of backtracking
with forward checking and the MRV and least-constraining-value heuristics:
TWO + TWO = FOUR.
13. Explain alpha beta pruning with a simple example.
14. What is local consistency in CSP constraint propagation? Explain different types of local
consistencies.
15. Write an Arc-Consistency algorithm (AC-3).
16. How and when is a heuristic used in the Minimax search technique? Illustrate with an
example. Also describe an algorithm for the Minimax procedure.
17. Solve the following cryptarithmetic problems using the constraint satisfaction search
procedure:
EAT + THAT = APPLE
SEND + MORE = MONEY
18. What is the importance of the two bounds in alpha–beta cut-offs?
19. How and when is a heuristic used in the Minimax search technique? Illustrate the usage
of a heuristic in the Minimax procedure.
20.
15-780: Graduate AI
Homework Assignment #2 Solutions
1 Cryptarithmetic
Solve the cryptarithmetic problem shown in Fig. 1 by hand, using the strategy of backtracking
with forward checking and the MRV and least-constraining-value heuristics.
    T W O
  + T W O
  -------
  F O U R
(Fig. 1: (a) the puzzle; (b) its constraint hypergraph over the variables F, T, U, W, R, O and
the carry variables C3, C2, C1.)
In a cryptarithmetic problem each letter stands for a distinct digit; the aim is to find a
substitution of digits for letters such that the resulting sum is arithmetically correct, with the
added restriction that no leading zeros are allowed.
To help you out, in Fig. 1 you can see the constraint hypergraph for the cryptarithmetic
problem, showing the Alldiff constraint (square box on top) as well as the column addition
constraints (four square boxes in the middle). The variables C1, C2 and C3 represent the carry
digits for the three columns.
An Alldiff constraint is a global constraint which says that all of the variables involved in the
constraint must have different values.
The following solution was provided by Revanth Bhattaram (with slight modifications):
For this problem, we use the minimum-remaining-values heuristic to choose a variable and
the least-constraining-value heuristic to assign values to the chosen variables.
The constraints are:
O + O = R + 10 ∗ C1
C1 + W + W = U + 10 ∗ C2
C2 + T + T = O + 10 ∗ C3
F = C3
The main variables here are F, T, U, W, R, O, and they all must have different assigned values.
In addition, C1, C2, C3 represent the carries that depend on the assigned values. The carries
can take the values {0, 1}.
Another constraint for this problem is that the numbers don’t have any leading zeros. This
implies that the variables F, T can’t take the value 0. The backtracking search algorithm
runs as follows:
• A quick look at the constraints shows that the variable F can only take the value 1,
since C3 can be 0 or 1 and F = C3 and F can’t be equal to 0. Thus, we set F = 1 and
consequently C3 = 1.
F=1
• After this point, the domain of values for the variables U, W, R, O is {0, 2, 3, ..., 9} and
the domain of values for the variable T is {2, 3, ..., 9}. Thus, using the MRV heuristic,
we’ll now be assigning a value to T.
Consider the constraints at this point: C2 + 2T = O + 10 ⇒ O = C2 + 2T − 10.
Assigning the values 2, 3, 4 to T results in O having no possible values. Assigning the
value 5 leaves out just a single value, O = 0. Setting T = 6, 7, or 8 results in O having
two possible values. Thus, using the least-constraining-value heuristic, we set T = 6.
F=1
T=6
• At this point, one of the constraints becomes C2 + 12 = O + 10 ⇒ O = C2 + 2.
O’s domain is now {2, 3}, which is smaller than any of the other variables’, and thus the
MRV heuristic directs us to choose to assign a value for O.
The least-constraining-value heuristic doesn’t help us too much over here, so we
proceed in assigning values in order. We now set O = 2.
F=1
T=6
O=2
• With O = 2, the constraint O + O = R + 10 ∗ C1 gives 4 = R + 10 ∗ C1, so C1 = 0 and
R = 4.
F=1
T=6
O=2
R=4
• The constraint now is 2W = U. The domain of U is the set of remaining even values,
{0, 8}, which is smaller than W’s domain. Thus, we now choose to assign a value to U.
The least-constraining-value heuristic doesn’t help narrow down between the two
values (they’re both bad).
F=1
T=6
O=2
R=4
U=0
F=1
T=6
O=2
R=4
U=0
X
• Similarly, setting U = 8 doesn’t work either, since 2W = U would require W = 4, which
has already been assigned to R, and so we backtrack.
F=1
T=6
O=2
R=4
U=8  U=0
X    X
• Both values for U have now failed, so we backtrack to O and try O = 3. This forces
C2 = 1, and the constraint O + O = R + 10 ∗ C1 then gives R = 6, which conflicts with
T = 6, so this branch fails as well.
F=1
T=6
O=2   O=3
R=4   R=6
U=8  U=0   X
X    X
• We backtrack to T and try T = 7 next.
F=1
T=6        T=7
O=2   O=3
R=4   R=6
U=8  U=0   X
X    X
• Under T = 7, the search proceeds similarly (trying O = 4, which gives R = 8 and
U = 6), and we are then left with only one possible value for W: W = 3. Notice that this
satisfies all constraints (734 + 734 = 1468), and since we’ve assigned values for all
variables, this is a satisfying assignment.
F=1
T=6 T=7
X X W=3
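The assignment found by the trace can be checked mechanically. The checking code below is ours, not part of the original solution; it also brute-forces all digit assignments to confirm the result is one of the valid solutions.

```python
from itertools import permutations

def valid(F, T, U, W, R, O):
    """All letters distinct, no leading zeros, and TWO + TWO == FOUR."""
    if len({F, T, U, W, R, O}) < 6 or F == 0 or T == 0:
        return False
    two = 100 * T + 10 * W + O
    four = 1000 * F + 100 * O + 10 * U + R
    return two + two == four

# The assignment reached above: T=7, W=3, O=4 gives 734 + 734 = 1468.
print(valid(F=1, T=7, U=6, W=3, R=8, O=4))   # → True

# Brute-force over all tuples (F, T, U, W, R, O) of distinct digits:
solutions = [p for p in permutations(range(10), 6) if valid(*p)]
print((1, 7, 6, 3, 8, 4) in solutions)        # → True
```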