AI Search & Game Theory Guide
***************************************************************************
SYLLABUS- MODULE 3 (Search in Complex environments)
Adversarial search - Games, Optimal decisions in games, The Minimax algorithm, Alpha-Beta pruning.
Constraint Satisfaction Problems – Defining CSP, Constraint Propagation- inference in CSPs,
Backtracking search for CSPs, Structure of CSP problems.
***************************************************************************
ADVERSARIAL SEARCH
In which we examine the problems that arise when we try to plan ahead in a world where other
agents are planning against us.
GAMES
Games are competitive environments in which the agents' goals are in conflict. Games are
easy to formalize, and they can be a good model of real-world competitive or cooperative
activities, e.g., military confrontations, negotiation, auctions, etc.
Alternating two-player zero-sum games
Here Players take turns. Each game outcome or terminal state has a utility for each player (e.g.,
1 for win, 0 for loss). The sum of both players’ utilities is a constant.
Games vs. single-agent search
In games we don't know how the opponent will act. The solution is not a fixed sequence of
actions from start state to goal state, but a strategy or policy (a mapping from state to best move
in that state).
Efficiency is critical to playing well.
• The time to make a move is limited.
• The branching factor, search depth, and number of terminal configurations are huge
• In chess, branching factor ≈ 35 and depth ≈ 100, giving a search tree of about 35^100 ≈ 10^154 nodes
• Number of atoms in the observable universe ≈ 10^80
• This rules out searching all the way to the end of the game
PRUNING
We need algorithms for finding the optimal move, and for choosing a good move when time is limited.
Pruning allows us to ignore portions of the search tree that make no difference to the final
choice, and heuristic evaluation functions allow us to approximate the true utility of a state
without doing a complete search.
MINIMAX
MAX moves first, and then they take turns moving until the game is over. At the end of the
game, points are awarded to the winning player and penalties are given to the loser. A game
can be formally defined as a kind of search problem with the following elements:
• S0: the initial state, which specifies how the game is set up at the start.
• PLAYER(s): defines which player has the move in state s.
• ACTIONS(s): returns the set of legal moves in state s.
• RESULT(s, a): the transition model, which defines the result of a move.
• TERMINAL-TEST(s): a terminal test, which is true when the game is over; states where it is true are terminal states.
• UTILITY(s, p): a utility function, which gives the final numeric value for a game that ends in terminal state s for player p.
Game tree
The initial state, ACTIONS function, and RESULT function define the game tree for the game:
a tree where the nodes are game states and the edges are moves. From the initial state, MAX
has nine possible moves.
Play alternates between MAX’s placing an X and MIN’s placing an O until we reach leaf nodes
corresponding to terminal states such that one player has three in a row or all the squares are
filled. The number on each leaf node indicates the utility value of the terminal state from the
point of view of MAX; high values are assumed to be good for MAX and bad for MIN.
A search tree is a tree that is superimposed on the full game tree and examines enough
nodes to allow a player to determine what move to make.
The optimal strategy can be determined from the minimax value of each node, computed by the
MINIMAX function. The minimax decision is the optimal choice for MAX at the root: the move
that leads to the state with the highest minimax value.
MINIMAX(s) =
• UTILITY(s), if TERMINAL-TEST(s)
• max over a in ACTIONS(s) of MINIMAX(RESULT(s, a)), if PLAYER(s) = MAX
• min over a in ACTIONS(s) of MINIMAX(RESULT(s, a)), if PLAYER(s) = MIN
If the maximum depth of the tree is m and there are b legal moves at each point:
• time complexity → O(b^m)
• space complexity → O(bm) for an algorithm that generates all actions at once,
or O(m) for an algorithm that generates actions one at a time
In games with more than two players, the backed-up value of a node n is always the utility
vector of the successor state with the highest value for the player choosing at n.
ALPHA–BETA PRUNING
The problem with minimax search is that the number of game states it has to examine is
exponential in the depth of the tree. However, it is possible to compute the correct minimax
decision without looking at every node in the game tree, by pruning. The alpha–beta pruning
algorithm prunes away branches that cannot possibly influence the final decision. Alpha–beta
pruning can be applied to trees of any depth, and it is often possible to prune entire subtrees
rather than just leaves.
If Player has a better choice m either at the parent node of n or at any choice point further up,
then n will never be reached in actual play. So once we have found out enough about n (by
examining some of its descendants) to reach this conclusion, we can prune it.
Alpha–beta search updates the values of α and β as it goes along and prunes the remaining
branches at a node (i.e., terminates the recursive call) as soon as the value of the current node
is known to be worse than the current α or β value for MAX or MIN, respectively
The outcome is that we can identify the minimax decision without ever evaluating two of the
leaf nodes.
Let the two unevaluated successors of node C have values x and y. Then the value of the root
node is given by
MINIMAX(root) = max(min(3, 12, 8), min(2, x, y), min(14, 5, 2))
= max(3, min(2, x, y), 2)
= max(3, z, 2), where z = min(2, x, y) ≤ 2
= 3
In other words, the value of the root and hence the minimax decision are independent of the
values of the pruned leaves x and y.
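The same hypothetical game interface gives a short sketch of alpha–beta search (again an illustration under the assumptions above): α is the best value found so far for MAX along the current path, β the best for MIN, and a branch is abandoned as soon as its value is provably worse than one the opponent can already force.

def alphabeta_value(game, state, alpha=float('-inf'), beta=float('inf')):
    if game.is_terminal(state):
        return game.utility(state)
    if game.to_move(state) == 'MAX':
        v = float('-inf')
        for a in game.actions(state):
            v = max(v, alphabeta_value(game, game.result(state, a),
                                       alpha, beta))
            if v >= beta:
                return v            # MIN above will avoid this branch: prune
            alpha = max(alpha, v)
        return v
    else:
        v = float('inf')
        for a in game.actions(state):
            v = min(v, alphabeta_value(game, game.result(state, a),
                                       alpha, beta))
            if v <= alpha:
                return v            # MAX above will avoid this branch: prune
            beta = min(beta, v)
        return v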
Move ordering
The effectiveness of alpha–beta pruning is highly dependent on the order in which the states
are examined. For example, in the tree above we could not prune any successors of D at all,
because the worst successors (from the point of view of MIN) were generated first. If the third
successor of D had been generated first, we would have been able to prune the other two. This
suggests that it might be worthwhile to try to examine first the successors that are likely to be best.
CONSTRAINT SATISFACTION PROBLEMS
A constraint satisfaction problem (CSP) consists of three components: a set of variables X, a
set of domains D (one domain for each variable), and a set of constraints C that specify
allowable combinations of values. In the Australia map-coloring problem, for example, there
are nine constraints: one inequality constraint for each pair of neighboring regions.
Example: job-shop scheduling of car assembly
The value of each variable is the time that the task starts, expressed as an integer number of
minutes. Constraints can assert that:
• one task must occur before another, e.g., a wheel must be installed before the hubcap is put on,
• only so many tasks can go on at once, and
• a task takes a certain amount of time to complete.
Precedence constraints
Whenever a task T1 must occur before task T2, and task T1 takes duration d1 to complete, we
add an arithmetic constraint of the form
T1 + d1 ≤ T2
A small part of the car assembly consists of 15 tasks: install axles (front and back), affix all
four wheels (right and left, front and back), tighten nuts for each wheel, affix hubcaps, and
inspect the final assembly. We can represent the tasks with 15 variables:
X = {AxleF, AxleB, WheelRF, WheelLF, WheelRB, WheelLB, NutsRF, NutsLF,
NutsRB, NutsLB, CapRF, CapLF, CapRB, CapLB, Inspect}.
The value of each variable is the time that the task starts.
DISJUNCTIVE CONSTRAINT
Suppose we have four workers to install wheels, but they have to share one tool that helps put
the axle in place. We need a disjunctive constraint to say that AxleF and AxleB must not
overlap in time; either one comes first or the other does:
(AxleF + 10 ≤ AxleB) or
(AxleB + 10 ≤ AxleF )
For every variable except Inspect we add a constraint of the form X + dX ≤ Inspect. Finally,
suppose there is a requirement to get the whole assembly done in 30 minutes. We can achieve
that by limiting the domain of all variables: Di = {1, 2, 3, ..., 27}. This particular problem is
trivial to solve, but CSPs have been applied to job-shop scheduling problems like this with
thousands of variables.
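As a small illustration (the durations and test values below are assumptions patterned on the example, not part of the original notes), the precedence and disjunctive constraints are just arithmetic tests on candidate start times:

def precedence_ok(t1, d1, t2):
    # T1 + d1 <= T2: task 1 must finish before task 2 starts.
    return t1 + d1 <= t2

def axles_ok(axle_f, axle_b):
    # Disjunctive constraint: the two 10-minute axle tasks must not overlap.
    return axle_f + 10 <= axle_b or axle_b + 10 <= axle_f

print(precedence_ok(1, 10, 11))           # True: wheel may start once axle is done
print(axles_ok(1, 11), axles_ok(1, 5))    # True False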
CONSTRAINT PROPAGATION: INFERENCE IN CSPs
Node consistency
A single variable (corresponding to a node in the CSP network) is node-consistent if all the
values in the variable's domain satisfy the variable's unary constraints. It is always
possible to eliminate all the unary constraints in a CSP by running node consistency.
Arc consistency
A variable in a CSP is arc-consistent if every value in its domain satisfies the variable's binary
constraints. More precisely, Xi is arc-consistent with respect to another variable Xj if, for every
value in the current domain Di, there is some value in the domain Dj that satisfies the binary
constraint on the arc (Xi, Xj). A network is arc-consistent if every variable is arc-consistent
with every other variable.
For example, consider the constraint Y = X2 where the domain of both X and Y is the set of
digits. The constraint can be written as:
⟨(X, Y), {(0, 0), (1, 1), (2, 4), (3, 9)}⟩.
To make X arc-consistent with respect to Y , we reduce X’s domain to {0, 1, 2, 3}. If we also
make Y arc-consistent with respect to X, then Y ’s domain becomes {0, 1, 4, 9} and the whole
CSP is arc-consistent.
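A small sketch of this domain reduction (the helper name `revise` mirrors the usual presentation of AC-3; the constraint here is the Y = X^2 example):

def revise(dom_x, dom_y, constraint):
    # Keep only values of X that have some supporting value in Y's domain.
    return {x for x in dom_x if any(constraint(x, y) for y in dom_y)}

digits = set(range(10))
dom_x = revise(digits, digits, lambda x, y: y == x * x)   # {0, 1, 2, 3}
dom_y = revise(digits, dom_x, lambda y, x: y == x * x)    # {0, 1, 4, 9}
print(dom_x, dom_y)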
Arc consistency can do nothing for the Australia map-coloring problem. Consider the following
inequality constraint on (SA,WA): {(red, green),(red, blue),(green, red),(green, blue),(blue,
red),(blue, green)}
No matter what value you choose for SA (or for WA), there is a valid value for the other
variable. So applying arc consistency has no effect on the domains of either variable.
AC-3 algorithm
To make every variable arc-consistent, the AC-3 algorithm works as follows:
1. AC-3 maintains a queue of arcs to consider (initially, all the arcs in the CSP).
2. AC-3 pops off an arbitrary arc (Xi, Xj) from the queue and makes Xi arc-consistent with
respect to Xj.
3. If this leaves Di unchanged, the algorithm just moves on to the next arc.
4. Else, if this revises Di (makes the domain smaller),
a. then we add to the queue all arcs (Xk, Xi) where Xk is a neighbor of Xi
◦ because the change in Di might enable further reductions in the domains
of Dk
◦ even if we have previously considered Xk
5. Else, if Di is revised down to nothing,
a. then we know the whole CSP has no consistent solution, and AC-3 can
immediately return failure
6. Otherwise, we keep checking, trying to remove values from the domains of variables
until no more arcs are in the queue
7. At that point, we are left with a CSP that is equivalent to the original CSP
a. They both have the same solutions, but
b. the arc-consistent CSP will in most cases be faster to search because its
variables have smaller domains.
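A compact Python sketch of AC-3 under these assumptions: `domains` maps each variable to a set of values, `neighbors` maps each variable to the variables it shares a binary constraint with, and `consistent(xi, vi, xj, vj)` tests one constraint pair (all three names are hypothetical):

from collections import deque

def ac3(domains, neighbors, consistent):
    # Initially the queue holds every arc (Xi, Xj) in the CSP.
    queue = deque((xi, xj) for xi in domains for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        # REVISE: keep only values of Xi supported by some value of Xj.
        revised = {vi for vi in domains[xi]
                   if any(consistent(xi, vi, xj, vj) for vj in domains[xj])}
        if revised != domains[xi]:
            domains[xi] = revised
            if not revised:
                return False              # empty domain: no solution
            # The change may enable further reductions at the neighbors.
            queue.extend((xk, xi) for xk in neighbors[xi] if xk != xj)
    return True                           # an equivalent, arc-consistent CSP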
Complexity of AC-3
Assume a CSP with n variables,
◦ each with domain size at most d, and
◦ with c binary constraints (arcs).
Each arc (Xk, Xi) can be inserted in the queue only d times because Xi has at
most d values to delete.
Checking consistency of an arc can be done in O(d²) time, so we get O(cd³) total worst-case
time.
Generalized arc consistency (hyperarc consistency)
Arc consistency can be extended to handle n-ary rather than just binary constraints. A variable
Xi is generalized arc-consistent with respect to an n-ary constraint if, for every value v in the
domain of Xi, there exists a tuple of values that is a member of the constraint, has all its values
taken from the domains of the corresponding variables, and has its Xi component equal to v.
For example, if all variables have the domain {0, 1, 2, 3}, then to make the variable X consistent
with the constraint X<Y <Z, we would have to eliminate 2 and 3 from the domain of X because
the constraint cannot be satisfied when X is 2 or 3.
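The X < Y < Z example can be reproduced with a brute-force sketch of generalized arc consistency (purely illustrative; it enumerates all tuples, which is fine for domains this small):

from itertools import product

def gac_filter(domains, relation):
    # Keep value v for variable i only if some tuple through v,
    # drawn from the current domains, satisfies the n-ary relation.
    return [{v for v in dom
             if any(t[i] == v and relation(*t) for t in product(*domains))}
            for i, dom in enumerate(domains)]

doms = [set(range(4)), set(range(4)), set(range(4))]     # X, Y, Z
print(gac_filter(doms, lambda x, y, z: x < y < z))
# [{0, 1}, {1, 2}, {2, 3}]: 2 and 3 are eliminated from X's domain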
Arc consistency can go a long way toward reducing the domains of variables, sometimes
finding a solution (by reducing every domain to size 1) and sometimes finding that the CSP
cannot be solved (by reducing some domain to size 0). But for other networks, arc consistency
fails to make enough inferences.
Consider the map-coloring problem on Australia, but with only two colors allowed, red and
blue. Arc consistency can do nothing because every variable is already arc consistent: each can
be red with blue at the other end of the arc (or vice versa). But clearly there is no solution to
the problem: because Western Australia, Northern Territory and South Australia all touch each
other, we need at least three colors for them alone. Arc consistency tightens down the domains
(unary constraints) using the arcs (binary constraints).
Path consistency
Path consistency tightens the binary constraints by using implicit constraints that are inferred
by looking at triples of variables. A two-variable set {Xi, Xj} is path-consistent with respect to
a third variable Xm if,
• for every assignment {Xi = a, Xj = b} consistent with the constraints on {Xi, Xj},
• there is an assignment to Xm that satisfies the constraints on {Xi, Xm} and {Xm, Xj}.
This is called path consistency because one can think of it as looking at a path from Xi to Xj
with Xm in the middle.
How path consistency fares in coloring the Australia map with two colors
We will make the set {WA, SA} path consistent with respect to NT. We start by enumerating
the consistent assignments to the set. In this case, there are only two:
{WA = red, SA = blue} and {WA = blue, SA = red}.
We can see that with both of these assignments NT can be neither red nor blue because it would
conflict with either WA or SA. Because there is no valid choice for NT, we eliminate both
assignments, and we end up with no valid assignments for {WA, SA}. Therefore, we know
that there can be no solution to this problem.
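The same argument can be checked mechanically; a small sketch that enumerates the assignments to {WA, SA} and keeps only those extendable to NT:

colors = ['red', 'blue']

# Assignments to {WA, SA} consistent with WA != SA:
pairs = [(wa, sa) for wa in colors for sa in colors if wa != sa]

# Path consistency w.r.t. NT: keep a pair only if some NT value
# differs from both WA and SA.
surviving = [(wa, sa) for wa, sa in pairs
             if any(nt != wa and nt != sa for nt in colors)]
print(surviving)   # []: no assignment survives, so the CSP has no solution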
K-consistency
Stronger forms of propagation can be defined with the notion of k-consistency. A CSP is k-
consistent if, for any set of k − 1 variables and for any consistent assignment to those variables,
a consistent value can always be assigned to any kth variable.
❑ 1-consistency says that, given the empty set, we can make any set of one
variable consistent: this is what we called node consistency.
❑ 2-consistency is the same as arc consistency.
❑ For binary constraint networks, 3-consistency is the same as path consistency.
A CSP is strongly k-consistent if it is k-consistent and is also (k − 1)-consistent, (k − 2)-
consistent, ... all the way down to 1-consistent.
Suppose we have a CSP with n nodes and make it strongly n-consistent (i.e., strongly k-
consistent for k = n).
We can then solve the problem as follows:
◦ First, we choose a consistent value for X1.
◦ We are then guaranteed to be able to choose a value for X2 because the graph
is 2-consistent,
◦ for X3 because it is 3-consistent, and so on.
◦ For each variable Xi, we need only search through the d values in the domain
to find a value consistent with X1,...,Xi−1.
◦ We are guaranteed to find a solution in time O(n²d).
Any algorithm for establishing n-consistency must take time exponential in n in the worst case.
Worse, n-consistency also requires space that is exponential in n. The memory issue is even
more severe than the time. In practice, determining the appropriate level of consistency
checking is mostly an empirical science; practitioners commonly compute 2-consistency and
less commonly 3-consistency.
Global constraints
A global constraint is one involving an arbitrary number of variables, but not necessarily all
variables. For example, in Sudoku the Alldiff constraint says that all the variables involved must
have distinct values. One simple form of inconsistency detection for Alldiff constraints works as
follows: if m variables are involved in the constraint, and if they have n possible distinct values
altogether, and m > n, then the constraint cannot be satisfied.
Simple algorithm for global constraint
If m variables are involved in the constraint, and if they have n possible distinct values
altogether, and m>n, then the constraint cannot be satisfied.
1. First, remove any variable in the constraint that has a singleton domain, and delete that
variable’s value from the domains of the remaining variables.
2. Repeat as long as there are singleton variables. If at any point an empty domain is
produced or there are more variables than domain values left, then an inconsistency has
been detected.
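A sketch of this procedure in Python (domains are given as a dict from variable to set of values; the function reports whether an inconsistency was detected):

def alldiff_inconsistent(domains):
    domains = {v: set(d) for v, d in domains.items()}     # work on copies
    singletons = [v for v, d in domains.items() if len(d) == 1]
    while singletons:
        v = singletons.pop()
        val = next(iter(domains.pop(v)))      # remove the singleton variable
        for u, d in domains.items():
            d.discard(val)                    # delete its value elsewhere
            if len(d) == 0:
                return True                   # empty domain: inconsistent
            if len(d) == 1 and u not in singletons:
                singletons.append(u)
    values_left = set().union(*domains.values()) if domains else set()
    # More variables than distinct values left: Alldiff cannot hold.
    return len(domains) > len(values_left)

print(alldiff_inconsistent({'SA': {'green', 'blue'},
                            'NT': {'green', 'blue'},
                            'Q':  {'green', 'blue'}}))    # True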
This method can detect the inconsistency in the assignment {WA = red, NSW = red}. Notice
that the variables SA, NT, and Q are effectively connected by an Alldiff constraint because
each pair must have two different colors. After applying AC-3 with the partial assignment, the
domain of each variable is reduced to {green, blue}. That is, we have three variables and only
two colors, so the Alldiff constraint is violated. Thus, a simple consistency procedure for a
higher-order constraint is sometimes more effective than applying arc consistency to an
equivalent set of binary constraints.
Resource constraint/ Atmost constraint
In a scheduling problem, let P1,...,P4 denote the numbers of personnel assigned to each of four
tasks. The constraint that no more than 10 personnel are assigned in total is written as
Atmost(10, P1, P2, P3, P4). We can detect an inconsistency simply by checking the sum of the
minimum values of the current domains; for example, if each variable has the domain {3, 4, 5,
6}, the Atmost constraint cannot be satisfied. We can also enforce consistency by deleting the
maximum value of any domain if it is not consistent with the minimum values of the other
domains. Thus, if each variable in our example has the domain {2, 3, 4, 5, 6}, the values 5 and
6 can be deleted from each domain.
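Both checks reduce to arithmetic on the domain bounds; a small sketch using the numbers from the example:

def atmost_inconsistent(limit, domains):
    # The sum of the domain minima already exceeds the limit.
    return sum(min(d) for d in domains) > limit

def atmost_prune(limit, domains):
    # Delete any value inconsistent with the minima of the other domains.
    mins = [min(d) for d in domains]
    total_min = sum(mins)
    return [{v for v in d if v <= limit - (total_min - m)}
            for d, m in zip(domains, mins)]

print(atmost_inconsistent(10, [{3, 4, 5, 6}] * 4))   # True: 4 * 3 = 12 > 10
print(atmost_prune(10, [{2, 3, 4, 5, 6}] * 4))       # each domain -> {2, 3, 4}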
BOUNDS PROPAGATION
For large resource-limited problems it is usually not possible to represent the domain of each
variable as a large set of integers and gradually reduce that set by consistency-checking
methods. E.g., logistical problems involving moving thousands of people in hundreds of
vehicles
Instead, domains are represented by upper and lower bounds and are managed by bounds
propagation. Example in an airline-scheduling problem, let’s suppose there are two flights,
F1 and F2, for which the planes have capacities 165 and 385, respectively. The initial domains
for the numbers of passengers on each flight are then D1 = [0, 165] and D2 = [0, 385] .
Now suppose we have the additional constraint that the two flights together must carry 420
people: F1 + F2 = 420. Propagating bounds constraints, we reduce the domains to D1 = [35,
165] and D2 = [255, 385] .
We say that a CSP is bounds consistent if, for every variable X and for both the lower-bound
and upper-bound values of X, there exists some value of Y that satisfies the constraint between
X and Y for every variable Y. This kind of bounds propagation is widely used in practical
constraint problems.
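For the flight example, bounds propagation through F1 + F2 = 420 is a single interval computation; a minimal sketch:

def propagate_sum(b1, b2, total):
    # Tighten the [lo, hi] bounds of X and Y subject to X + Y = total.
    lo1, hi1 = b1
    lo2, hi2 = b2
    lo1, hi1 = max(lo1, total - hi2), min(hi1, total - lo2)
    lo2, hi2 = max(lo2, total - hi1), min(hi2, total - lo1)
    return (lo1, hi1), (lo2, hi2)

print(propagate_sum((0, 165), (0, 385), 420))
# ((35, 165), (255, 385)), matching the domains derived above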
Sudoku
A Sudoku board consists of 81 squares, some of which are initially filled with digits from 1 to
9. The puzzle is to fill in all the remaining squares such that no digit appears twice in any row,
column, or 3 × 3 box. A row, column, or box is called a unit. Even the hardest Sudoku problems
yield to a CSP solver in less than 0.1 second
A Sudoku puzzle can be considered a CSP with 81 variables, one for each square. We use the
variable names A1 through A9 for the top row (left to right), down to I1 through I9 for the
bottom row. The empty squares have the domain {1, 2, 3, 4, 5, 6, 7, 8, 9} and the pre-filled
squares have a domain consisting of a single value.
In addition, there are 27 different Alldiff constraints: one for each row, column, and box of 9
squares:
Alldiff(A1, A2, A3, A4, A5, A6, A7, A8, A9)
Alldiff(B1, B2, B3, B4, B5, B6, B7, B8, B9)
…
Alldiff(A1, B1, C1, D1, E1, F1, G1, H1, I1)
Alldiff(A2, B2, C2, D2, E2, F2, G2, H2, I2)
…
Alldiff(A1, A2, A3, B1, B2, B3, C1, C2, C3)
Alldiff(A4, A5, A6, B4, B5, B6, C4, C5, C6)
…
Assume that the Alldiff constraints have been expanded into binary constraints (such as
A1 ≠ A2) so that we can apply the AC-3 algorithm. Consider variable E6, the empty square
between the 2 and the 8 in the middle box.
From the constraints in the box, we can remove not only 2 and 8 but also 1 and 7 from E6’s
domain. From the constraints in its column, we can eliminate 5, 6, 2, 8, 9, and 3. That leaves
E6 with a domain of {4}; in other words, we know the answer for E6.
Now consider variable I6, the square in the bottom middle box surrounded by 1, 3, and 3.
Applying arc consistency in its column, we eliminate 5, 6, 2, 4 (since we now know E6 must
be 4), 8, 9, and 3. We eliminate 1 by arc consistency with I5 , and we are left with only the
value 7 in the domain of I6. Now there are 8 known values in column 6, so arc consistency can
infer that A6 must be 1. Inference continues along these lines, and eventually, AC-3 can solve
the entire puzzle—all the variables have their domains reduced to a single value
Sudoku problems are designed to be solved by inference over constraints. But many other CSPs
cannot be solved by inference alone; there comes a time when we must search for a solution.
Backtracking Search for CSP
Backtracking search algorithms work on partial assignments. A standard depth-limited
search can be applied: a state would be a partial assignment, and an action would be adding
var = value to the assignment. But for a CSP with n variables of domain size d, the branching
factor at the top level is nd, because any of d values can be assigned to any of n variables.
At the next level, the branching factor is (n − 1)d, and so on for n levels. We generate a tree
with n! · d^n leaves, even though there are only d^n possible complete assignments! The fix is
to exploit the commutativity of CSPs: by considering a single variable at each node of the
search tree, the number of leaves drops to d^n. Inference can be interwoven with search.
BACKTRACKING-SEARCH keeps only a single representation of a state and alters that
representation rather than creating new ones
Of course, if we are trying to find all the solutions to a problem, not just the first one, then the
ordering does not matter because we have to consider every value anyway. The same holds if
there are no solutions to the problem
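A minimal Python sketch of backtracking search on partial assignments (illustrative; `consistent(var, value, assignment)` is an assumed predicate that checks the new assignment against the constraints). Note how a single assignment dict is altered and restored rather than copied:

def backtracking_search(variables, domains, consistent, assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(variables):
        return assignment                      # complete, consistent
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            assignment[var] = value            # extend the partial assignment
            result = backtracking_search(variables, domains,
                                         consistent, assignment)
            if result is not None:
                return result
            del assignment[var]                # undo and try the next value
    return None                                # no legal value: backtrack

# Example: color WA, NT, SA with all pairs different (a hypothetical check).
diff = lambda var, val, a: all(val != a[u] for u in a)
print(backtracking_search(['WA', 'NT', 'SA'],
                          {v: ['red', 'green', 'blue']
                           for v in ['WA', 'NT', 'SA']},
                          diff))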
Interleaving search and inference
But inference can be even more powerful in the course of a search: every time we make a
choice of a value for a variable, we have a brand-new opportunity to infer new domain
reductions on the neighboring variables.
Forward checking
This is one of the simplest forms of inference. Whenever a variable X is assigned, the forward-
checking process establishes arc consistency for it: for each unassigned variable Y that is
connected to X by a constraint, delete from Y’s domain any value that is inconsistent with the
value chosen for X. There is no reason to do forward checking if we have already done arc
consistency as a preprocessing step.
There are two important points to notice about this example. First, notice that after WA = red
and Q = green are assigned, the domains of NT and SA are reduced to a single value; we have
eliminated branching on these variables altogether by propagating information from WA and
Q.
A second point to notice is that after V = blue, the domain of SA is empty. Hence, forward
checking has detected that the partial assignment {WA = red, Q = green, V = blue} is
inconsistent with the constraints of the problem, and the algorithm will therefore backtrack
immediately
For many problems the search will be more effective if we combine the MRV heuristic with
forward checking.
Forward checking only makes the current variable arc-consistent, but doesn’t look ahead and
make all the other variables arc-consistent.
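A sketch of forward checking as just described (the `constraint_ok` predicate and `neighbors` map are assumptions): after X = value, the domains of X's unassigned neighbors are pruned, and an emptied domain signals failure immediately.

def forward_check(var, value, domains, neighbors, constraint_ok, assigned):
    # Return pruned copies of the domains, or None on an empty domain.
    new_domains = {v: set(d) for v, d in domains.items()}
    new_domains[var] = {value}
    for y in neighbors[var]:
        if y in assigned:
            continue
        new_domains[y] = {w for w in new_domains[y]
                          if constraint_ok(var, value, y, w)}
        if not new_domains[y]:
            return None     # e.g. {WA=red, Q=green, V=blue} empties SA
    return new_domains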
MAC (Maintaining Arc Consistency) algorithm
MAC is more powerful than forward checking and can detect inconsistencies that forward
checking misses. After a variable Xi is assigned a value, the INFERENCE procedure calls AC-3,
but instead of a queue of all arcs in the CSP, we start with only the arcs (Xj, Xi) for all Xj that
are unassigned variables that are neighbors of Xi. From there, AC-3 does constraint propagation
in the usual way, and if any variable has its domain reduced to the empty set, the call to AC-3
fails and we know to backtrack immediately.
Chronological backtracking
The BACKTRACKING-SEARCH algorithm has a very simple policy for what to do when a
branch of the search fails: back up to the preceding variable and try a different value for it. This
is called chronological backtracking because the most recent decision point is revisited.
Example with a fixed variable ordering Q, NSW , V , T, SA, WA, NT. Suppose we have
generated the partial assignment {Q = red, NSW = green, V = blue, T = red}. When we try the
next variable, SA, we see that every value violates a constraint. We back up to T and try a new
color for Tasmania! Obviously this is silly—recoloring Tasmania cannot possibly resolve the
problem with South Australia
A more intelligent approach is to backtrack to a variable that was responsible for making one
of the possible values of the next variable (e.g., SA) impossible. The set of such assignments
(in this case {Q = red, NSW = green, V = blue}) is called the conflict set for SA. The
backjumping method backtracks to the most recent
assignment in the conflict set; in this case, backjumping would jump over Tasmania and try a
new value for V. This method is easily implemented by a modification to BACKTRACK such
that it accumulates the conflict set while checking for a legal value to assign. If no legal value
is found, the algorithm should return the most recent element of the conflict set along with the
failure indicator.
Intelligent backtracking: Looking backward
Forward checking can supply the conflict set with no extra work: whenever forward checking
based on an assignment X = x deletes a value from Y's domain, it should add X = x to Y's
conflict set.
If the last value is deleted from Y ’s domain, then the assignments in the conflict set of Y are
added to the conflict set of X. Then, when we get to Y , we know immediately where to
backtrack if needed. In fact, every branch pruned by backjumping is also pruned by forward
checking. Hence simple backjumping is redundant in a forward-checking search or in a search
that uses stronger consistency checking (such as MAC).
Backjumping notices failure when a variable’s domain becomes empty, but in many cases a
branch is doomed long before this occurs
Conflict-directed backjumping
Consider the partial assignment which is proved to be inconsistent: {WA = red, NSW = red}.
We try T = red next and then assign NT, Q, V, SA; no assignment can work for these last four
variables. Eventually we run out of values to try at NT, but simple backjumping cannot work,
because NT doesn't have a complete conflict set of preceding variables that caused it to fail.
In a deeper sense, {WA, NSW} is the conflict set for NT: this assignment causes NT together
with any subsequent variables to have no consistent solution. So the algorithm should backtrack
to NSW and skip over T.
A backjumping algorithm that uses conflict sets defined in this way is called conflict-directed
backjumping.
When a variable’s domain becomes empty, the “terminal” failure occurs, that variable has a
standard conflict set. Let Xj be the current variable, let conf(Xj) be its conflict set. If every
possible value for Xj fails, backjump to the most recent variable Xi in conf(Xj), and set
conf(Xi) ← conf(Xi)∪conf(Xj) – {Xi}.
The conflict set for a variable means that there is no solution from that variable onward, given
the preceding assignment to the conflict set.
e.g. assign WA, NSW, T, NT, Q, V, SA.
SA fails, and its conflict set is {WA, NT, Q}. (standard conflict set)
Backjump to Q; its conflict set is
{NT, NSW} ∪ {WA, NT, Q} − {Q} = {WA, NT, NSW}.
That is, there is no solution from Q onward, given the preceding assignment to {WA, NT, NSW}.
Therefore, we backjump to NT; its conflict set is
{WA} ∪ {WA, NT, NSW} − {NT} = {WA, NSW}.
Hence the algorithm backjumps to NSW (skipping over T).
Constraint learning
After backjumping from a contradiction, how to avoid running into the same problem again:
Constraint learning: The idea of finding a minimum set of variables from the conflict set that
causes the problem. This set of variables, along with their corresponding values, is called a no-
good. We then record the no-good, either by adding a new constraint to the CSP or by keeping
a separate cache of no-goods.
Backtracking occurs when no legal assignment can be found for a variable. Conflict-directed
backjumping backtracks directly to the source of the problem.
Consider the state {WA = red, NT = green, Q = blue}
Forward checking can tell us this state is a no-good because there is no valid assignment to SA.
In this particular case, recording the no-good would not help, because once we prune this
branch from the search tree, we will never encounter this combination again.
But suppose that the search tree in were actually part of a larger search tree that started by first
assigning values for V and T. Then it would be worthwhile to record {WA = red, NT = green,
Q = blue} as a no-good because we are going to run into the same problem again for each
possible set of assignments to V and T
No-goods can be effectively used by forward checking or by backjumping. Constraint learning
is one of the most important techniques used by modern CSP solvers to achieve efficiency on
complex problems.
STRUCTURE OF CSP PROBLEMS
The complexity of solving a CSP is strongly related to the structure of its constraint graph. A
CSP whose constraint graph is a tree can be solved in time linear in the number of variables.
There are two primary ways to reduce more general constraint graphs to trees:
1. Based on removing nodes;
2. Based on collapsing nodes together
Based on removing nodes
For example, we can delete SA from the Australia graph by fixing a value for SA and deleting
from the domains of the other variables any values that are inconsistent with the value chosen
for SA. The remaining constraint graph is a tree.
The general algorithm:
Choose a subset S of the CSP’s variables such that the constraint graph becomes a tree after
removal of S. S is called a cycle cutset.
For each possible assignment to the variables in S that satisfies all constraints on S,
(a) remove from the domain of the remaining variables any values that are inconsistent with
the assignment for S, and
(b) If the remaining CSP has a solution, return it together with the assignment for S.
Time complexity: O(d^c · (n − c)d²), where c is the size of the cycle cutset.
This overall algorithmic approach is called cutset conditioning. Finding the smallest cycle
cutset is NP-hard, but efficient approximation algorithms are known.
Based on collapsing nodes together (tree decomposition)
The second approach constructs a tree decomposition of the constraint graph: the graph is
divided into a set of connected subproblems, each subproblem is solved independently, and
the resulting solutions are then combined using the algorithm for trees. A given constraint
graph admits many tree decompositions; in choosing a decomposition, the aim is to make the
subproblems as small as possible.
Tree width
The tree width of a tree decomposition of a graph is one less than the size of the largest
subproblem. The tree width of the graph itself is the minimum tree width among all its tree
decompositions.
Time complexity: O(n · d^(w+1)), where w is the tree width of the graph.
The complexity of solving a CSP is strongly related to the structure of its constraint graph.
Tree-structured problems can be solved in linear time. Cutset conditioning can reduce a
general CSP to a tree-structured one and is quite efficient if a small cutset can be found. Tree
decomposition techniques transform the CSP into a tree of subproblems and are efficient if
the tree width of constraint graph is small.
The structure in the values of variables
Consider the map-coloring problem with n colors: for every consistent solution, there is a set
of n! solutions formed by permuting the color names. This is called value symmetry. On the
Australia map, WA, NT, and SA must all have different colors, so there are 3! = 6 ways to
assign them. By introducing a symmetry-breaking constraint, we can break the value symmetry
and reduce the search space by a factor of n!. We can impose an arbitrary ordering constraint
NT < SA < WA that requires the three values to be in alphabetical order. This constraint
ensures that only one of the n! solutions is possible: {NT = blue, SA = green, WA = red}.
Cryptarithmetic example: TWO + TWO = FOUR
Each letter stands for a distinct digit, and the aim is to find digits such that the resulting sum
is arithmetically correct, with no leading zeroes. C1, C2, and C3 are auxiliary variables
representing the carry digits.
Variables:{T,W,O,F,U,R,C1,C2,C3}
Domain of {T,W,O,F,U,R}={0,1,2,3,4,5,6,7,8,9}
Domain of {C1,C2,C3}={0,1}
Constraints:
Alldiff(T, W, O, F, U, R)
O + O = R + 10 · C1
C1 + W + W = U + 10 · C2
C2 + T + T = O + 10 · C3
C3 = F, T ≠ 0, F ≠ 0
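Because the constraint list above is reconstructed, a quick brute-force check is a useful sanity test; this sketch enumerates digit assignments directly (the carries then follow from the column sums):

from itertools import permutations

def solve_two_plus_two():
    # Assign distinct digits to T, W, O, F, U, R and test TWO + TWO = FOUR.
    for t, w, o, f, u, r in permutations(range(10), 6):
        if t == 0 or f == 0:
            continue                          # no leading zeroes
        two = 100 * t + 10 * w + o
        four = 1000 * f + 100 * o + 10 * u + r
        if two + two == four:
            yield {'T': t, 'W': w, 'O': o, 'F': f, 'U': u, 'R': r}

print(next(solve_two_plus_two()))   # one of the puzzle's valid solutions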