
Chapter 2

Artificial Intelligence

Constraint Satisfaction Problems

Prepared By
Ms Lalita Randive
Asst Prof. MIT(CSED)
Constraint Satisfaction Problems

[Figure: an example constraint network for choosing a dinner. Nodes include Soup, Appetizer, Pork Dish, Vegetable, Seafood Rice, and Chow Mein; constraint labels include "Must be Hot & Sour" (Soup), "No Peanuts" (Appetizer, Vegetable), "Total Cost < $30", "Not Both", "Spicy", and "Not Chow Mein".]
Key issue: So far, we have treated nodes in search trees as "black boxes," looking inside only to check whether the node is a goal state.

In CSPs, we want to "look inside the nodes" and exploit problem structure during the search.

Sometimes, reasoning or inference ("propagation techniques") will lead us to find solutions without any search!
Constraint Satisfaction Problems
(CSPs)
General Search vs.
Constraint satisfaction problems (CSPs)
• Standard search problem:
– state is a "black box" – can be accessed only in a limited way: successor function, heuristic function, and goal test.
• What is needed for a CSP:
– Not just a successor function and goal test, but also a means of propagating the constraints (e.g., those imposed by one queen on the others) and an early failure test.
– Explicit representation of constraints and constraint-manipulation algorithms.
Constraint Satisfaction Problems (CSP)
Constraint satisfaction problems (CSPs)

• States and goal test have a standard representation.

– state is defined by variables Xi with values from domain Di


– goal test is a set of constraints specifying allowable
combinations of values for subsets of variables

Interesting tradeoff:

CSPs are an example of a (restricted) formal representation language. This allows useful general-purpose algorithms that are more powerful than standard search algorithms, which have to resort to problem-specific heuristics in order to solve large problems.
Constraint Satisfaction Problem
• Set of variables {X1, X2, …, Xn}
• Each variable Xi has a domain Di of possible values
• Usually Di is discrete and finite
• Set of constraints {C1, C2, …, Cp}
• Each constraint Ck involves a subset of variables and specifies
the allowable combinations of values of these variables

Goal:
Assign a value to every variable such that all constraints are satisfied
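To make the definition concrete, here is a minimal sketch in Python (illustrative names only, not from the slides) of how variables, domains, and constraints can be written down and how an assignment can be checked against the constraints:

```python
# A minimal, illustrative CSP representation (hypothetical structure, not from the slides).
# variables: list of names; domains: dict var -> list of values;
# constraints: list of (scope, predicate) pairs, where predicate receives the values of
# the scope variables and returns True iff that combination is allowed.
csp = {
    "variables": ["X1", "X2", "X3"],
    "domains": {"X1": [1, 2, 3], "X2": [1, 2, 3], "X3": [1, 2, 3]},
    "constraints": [
        (("X1", "X2"), lambda a, b: a != b),  # X1 and X2 must differ
        (("X2", "X3"), lambda a, b: a < b),   # X2 must be less than X3
    ],
}

def satisfied(assignment, csp):
    """True iff every constraint whose variables are all assigned is satisfied."""
    for scope, pred in csp["constraints"]:
        if all(v in assignment for v in scope):
            if not pred(*(assignment[v] for v in scope)):
                return False
    return True

# Goal: find a complete assignment (one value per variable) with satisfied(...) == True.
```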
Motivational Example:
1. 8-Queens

Usual goal: place 8 non-attacking queens.

[Figure: a chessboard with columns indexed k = 1, 2, 3, … and rows indexed l = 1, 2, 3, …; diagonals correspond to lines such as l + k = 10, l – k = 1, l – k = 0, and l – k = –5.]

How do we represent 8-Queens as a CSP:
Variables?
Constraints?
Note: More than one option.

Example: 8-Queens Problem

• 8 variables Xi, i = 1 to 8 (one per row); Xi = column of the queen in row i
• Domain for each variable: {1, 2, …, 8}
• Constraints are of the form:
– Xi ≠ Xj when i ≠ j (i.e., no two queens in the same column)
– No two queens on the same diagonal:
1) Xi – Xj ≠ i – j
2) Xi – Xj ≠ j – i
(check that this works!)
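As a quick way to check that this formulation works, here is a small illustrative Python test of the column and diagonal constraints (the function name and the sample solution are just for demonstration; the diagonal tests only use differences, so 0-based indexing does not change them):

```python
def queens_ok(X):
    """X[i] = column of the queen in row i.
    Checks Xi != Xj and the diagonal constraints Xi - Xj != i - j and Xi - Xj != j - i."""
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            if X[i] == X[j]:              # same column
                return False
            if X[i] - X[j] == (i - j):    # same "\" diagonal
                return False
            if X[i] - X[j] == (j - i):    # same "/" diagonal
                return False
    return True

print(queens_ok([2, 4, 6, 8, 3, 1, 7, 5]))  # a valid placement -> True
```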

• Alternative? Boolean variables


Boolean Encoding
• 64 variables Xij, i = 1 to 8, j = 1 to 8
• Domain for each variable: {0, 1} (or {False, True})
• Meaning: Xij = 1 iff "there is a queen on row-and-column location (i, j)"
• Rows and columns:
– If (Xij = 1) then (Xik = 0) for all k = 1 to 8, k ≠ j (logical constraint)
– Xij = 1 ⇒ Xkj = 0 for all k = 1 to 8, k ≠ i
• Diagonals:
– Xij = 1 ⇒ Xi+l,j+l = 0, l = 1 to 7, i+l ≤ 8; j+l ≤ 8 (right and up)
– Xij = 1 ⇒ Xi-l,j+l = 0, l = 1 to 7, i-l ≥ 1; j+l ≤ 8 (right and down)
– Xij = 1 ⇒ Xi-l,j-l = 0, l = 1 to 7, i-l ≥ 1; j-l ≥ 1 (left and down)
– Xij = 1 ⇒ Xi+l,j-l = 0, l = 1 to 7, i+l ≤ 8; j-l ≥ 1 (left and up)

What’s missing?
Need N (= 8) queens on board!
3 options:
1) Maximize sum X_ij (optimization formulation)
2) Sum X_ij = N (CSP; bit cumbersome in Boolean logic)
3) For each row i: (X_i1 OR X_i2 OR X_i3 … X_iN)
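A hedged sketch of how the Boolean encoding's constraints could be enumerated in Python (illustrative only; each variable Xij is represented here by its square (i, j), each yielded pair is a "not both 1" constraint, and the last line lists the per-row "at least one queen" clauses of option 3):

```python
from itertools import product

N = 8

def conflicts():
    """Yield pairs of squares ((i, j), (k, l)) that cannot both hold a queen:
    same row, same column, or same diagonal (mirrors the implications above)."""
    for (i, j), (k, l) in product(product(range(N), repeat=2), repeat=2):
        if (i, j) < (k, l):
            if i == k or j == l or abs(i - k) == abs(j - l):
                yield (i, j), (k, l)

# Option 3 from above: at least one queen in every row (one clause per row).
at_least_one_per_row = [[(i, j) for j in range(N)] for i in range(N)]
```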
Example: Map-Coloring

• Variables WA, NT, Q, NSW, V, SA, T


• Domains Di = {red, green, blue}
• Constraints: adjacent regions must have different
colors
• e.g., WA ≠ NT, or (WA,NT) in
{(red,green),(red,blue),(green,red),
(green,blue),(blue,red),(blue,green)}
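For illustration, the map-coloring instance can be written down as plain Python data (illustrative; the neighbors structure and the consistent() helper are assumptions for the sketch, not from the slides):

```python
# The map-coloring example as data.
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}

# Adjacent regions must get different colors; T has no neighbors.
neighbors = {
    "WA": ["NT", "SA"],
    "NT": ["WA", "SA", "Q"],
    "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"],
    "V": ["SA", "NSW"],
    "SA": ["WA", "NT", "Q", "NSW", "V"],
    "T": [],
}

def consistent(var, value, assignment):
    """A value is allowed iff no already-assigned neighbor has the same color."""
    return all(assignment.get(n) != value for n in neighbors[var])
```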
Example: Map-Coloring

• Solutions are complete and consistent assignments
• e.g., WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = green
Application of Graph Coloring
• Lots of applications involving scheduling and assignments.
• Scheduling of final exams – nodes represent finals, edges between finals denote that both finals have common students (and therefore they have to have different colors, or different periods).

[Figure: graph of finals for 7 courses (nodes 1–7), with edges between finals that share students.]

Time Period → courses
I (red) → 1, 6
II (blue) → 2
III (green) → 3, 5
IV (black) → 4, 7


Constraint graph
• Binary CSP: each constraint relates two variables
• Constraint graph: nodes are variables, arcs are constraints
• Two variables are adjacent or neighbors if they are connected by an edge or an arc
Varieties of CSPs
• Discrete variables
– finite domains:
• n variables, domain size d ⇒ O(d^n) complete assignments
• e.g., Boolean CSPs, incl. Boolean satisfiability (NP-
complete)
– infinite domains:
• integers, strings, etc.
• e.g., job scheduling, variables are start/end days for each
job
• need a constraint language, e.g., StartJob1 + 5 ≤ StartJob3
• Continuous variables
– e.g., start/end times for Hubble Space Telescope observations
– linear constraints solvable in polynomial time by LP
Varieties of constraints
• Unary constraints involve a single variable,
– e.g., SA ≠ green
• Binary constraints involve pairs of variables,
– e.g., SA ≠ WA
• Higher-order constraints involve 3 or more variables,
– e.g., cryptarithmetic column constraints
• Preferences (Soft Constraints)
e.g. red is better than green
2. Example: Cryptarithmetic

• Variables: F T U W R O X1 X2 X3
• Domains: {0,1,2,3,4,5,6,7,8,9}
• Constraints: Alldiff (F,T,U,W,R,O)
– O + O = R + 10 · X1
– X1 + W + W = U + 10 · X2
– X2 + T + T = O + 10 · X3
– X3 = F, T ≠ 0, F ≠ 0
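A brute-force sketch (Python, illustrative) that finds assignments satisfying these constraints by trying all-different digit choices; a real CSP solver would instead propagate the Alldiff constraint and the carry variables X1–X3:

```python
from itertools import permutations

# Brute-force check of the TWO + TWO = FOUR puzzle.
# permutations(...) already enforces Alldiff(F, T, U, W, R, O);
# checking two + two == four is equivalent to the column equations with carries.
for F, T, U, W, R, O in permutations(range(10), 6):
    if T == 0 or F == 0:          # leading digits must be nonzero
        continue
    two = 100 * T + 10 * W + O
    four = 1000 * F + 100 * O + 10 * U + R
    if two + two == four:
        print(f"TWO={two}  FOUR={four}")
```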
Real-world CSPs
• Assignment problems
– e.g., who teaches what class
• Timetabling problems
– e.g., which class is offered when and where?
• Transportation scheduling
• Hardware Configuration
• Spreadsheets
• Factory scheduling
• Notice that many real-world problems involve real-
valued variables
Backtracking search

Backtrack Search (trace of the search tree: the root is the empty assignment, and each level assigns the 1st, 2nd, and 3rd variable in turn):

Assignment = {}
Assignment = {(var1=v11)}
Assignment = {(var1=v11),(var2=v21)}
Assignment = {(var1=v11),(var2=v21),(var3=v31)}
Assignment = {(var1=v11),(var2=v21),(var3=v32)}   (try the next value for var3)
Assignment = {(var1=v11),(var2=v22)}   (backtrack: try the next value for var2)
Assignment = {(var1=v11),(var2=v22),(var3=v33)}
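A minimal recursive backtracking sketch in Python that produces this kind of trace (illustrative: variables are tried in a fixed order and values in domain order; the consistency test is supplied by the caller):

```python
def backtrack(assignment, variables, domains, consistent):
    """Depth-first search assigning one variable per level, as in the trace above.
    consistent(var, value, assignment) must return True iff the value violates no
    constraint with the already-assigned variables (caller-supplied)."""
    if len(assignment) == len(variables):
        return assignment                    # complete, consistent assignment found
    var = variables[len(assignment)]         # next unassigned variable (fixed order)
    for value in domains[var]:
        if consistent(var, value, assignment):
            assignment[var] = value
            result = backtrack(assignment, variables, domains, consistent)
            if result is not None:
                return result
            del assignment[var]              # undo and try the next value (backtrack)
    return None                              # no value works: fail up one level
```

With the map-coloring data and consistent() sketch given earlier, backtrack({}, variables, domains, consistent) would return a complete coloring (or None if none exists).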
Backtracking example
Improving backtracking efficiency

• General-purpose methods can give huge gains in speed:
– Which variable should be assigned next?
– In what order should its values be tried?
– Can we detect inevitable failure early?
– Can we take advantage of problem structure?
Improving Backtracking Efficiency

• Which variable should be assigned next? In what order should its values be tried? (variable & value ordering to increase the likelihood of success)
– Minimum Remaining Values heuristic
– Least Constraining Values heuristic
• Can we detect inevitable failure early? (early failure detection to decrease the likelihood of failing)
– Forward checking
– Constraint propagation (Arc Consistency)
• When a path fails, can the search avoid repeating this failure?
– Backjumping
• Can we take advantage of problem structure? (restructuring to reduce the problem's complexity)
– Tree-structured CSPs

General purpose techniques

Most constraining variable (Degree Heuristic)
• Choose the variable with the most constraints on remaining variables.
• A good idea is to use it as a tie-breaker among the most constrained variables.
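An illustrative sketch of variable selection that combines the two ideas: Minimum Remaining Values first, with the degree heuristic as a tie-breaker (assumes a neighbors dict as in the map-coloring sketch, and that domains holds the current legal values, e.g. after forward checking):

```python
def select_unassigned_variable(variables, domains, assignment, neighbors):
    """Minimum Remaining Values, ties broken by the degree heuristic
    (most constraints on remaining unassigned variables)."""
    unassigned = [v for v in variables if v not in assignment]
    return min(
        unassigned,
        key=lambda v: (
            len(domains[v]),                                       # MRV: fewest legal values
            -sum(1 for n in neighbors[v] if n not in assignment),  # degree: most unassigned neighbors
        ),
    )
```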
Forward checking
• Idea:
– Keep track of remaining legal values for unassigned variables
– Terminate search when any variable has no legal values
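A hedged sketch of this idea for binary "different values" constraints (as in map coloring); after each assignment it prunes the domains of unassigned neighbors and reports a dead end as soon as some domain becomes empty:

```python
import copy

def forward_check(var, value, domains, neighbors, assignment):
    """After assigning var = value, remove now-illegal values from unassigned neighbors.
    Returns the pruned domains, or None if some variable has no legal values left."""
    new_domains = copy.deepcopy(domains)
    new_domains[var] = [value]
    for n in neighbors[var]:
        if n not in assignment:
            # "Different values" constraint assumed here (e.g. map coloring).
            new_domains[n] = [v for v in new_domains[n] if v != value]
            if not new_domains[n]:
                return None      # dead end detected early: terminate this branch
    return new_domains
```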

Constraint propagation

[Figure: map of Australia (WA, NT, SA, Q, NSW, V) with a partial assignment.]

• Forward checking propagates information from assigned to unassigned variables, but doesn't provide early detection for all failures.
• What's the problem here? NT and SA cannot both be blue!
• Use constraint propagation repeatedly to enforce constraints locally.
2. Arc consistency
• Simplest form of propagation makes each arc consistent
• X → Y is consistent iff for every value x of X there is some allowed y
• If X loses a value, neighbors of X need to be rechecked
• Arc consistency detects failure earlier than forward checking
• Can be run as a preprocessor or after each assignment
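For reference, a compact AC-3-style sketch in Python (illustrative; for simplicity it assumes one shared binary constraint allowed(x_value, y_value), such as "different values", on every arc, whereas in general each arc carries its own constraint):

```python
from collections import deque

def ac3(domains, neighbors, allowed):
    """Enforce arc consistency in place. Returns False if some domain becomes empty."""
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()
        # Revise: values of x with no supporting value in y's domain must be dropped.
        unsupported = [vx for vx in domains[x]
                       if not any(allowed(vx, vy) for vy in domains[y])]
        if unsupported:
            domains[x] = [vx for vx in domains[x] if vx not in unsupported]
            if not domains[x]:
                return False                 # inconsistency detected
            for z in neighbors[x]:           # x lost values: recheck arcs pointing at x
                if z != y:
                    queue.append((z, x))
    return True
```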
Beyond Arc Consistency

[Figure: three variables X, Y, Z, each with domain {1, 2}, connected pairwise by the constraints XY, XZ, and YZ.]

• Is this network arc consistent?
• What is the solution? (If each pair must take different values, there is none, even though every arc is consistent.)
• Clearly arc consistency is not enough to guarantee global consistency.
• There are other forms of consistency, such as k-consistency.
• But when k = n (the number of variables), we are looking at the original problem!
3. k-Consistency

[Figure: the same X, Y, Z triangle network as above, each domain {1, 2}.]

• A graph is K-consistent iff the following is true: choose values of any K-1 variables that satisfy all the constraints among these variables, and choose any K-th variable. Then, there exists a value for this K-th variable that satisfies all the constraints among these K variables.
• A graph is strongly K-consistent if it is J-consistent for all J ≤ K.
• What type of consistency would we need here to solve any constraint problem without search? K = N
k-consistency
• A CSP is k-consistent if, for any set of k-1 variables, and for any
consistent assignment to those variables, a consistent value can always
be assigned to any kth variable
• 1-consistency is node consistency
• 2-consistency is arc consistency
• For binary constraint networks, 3-consistency is the same as path
consistency
• Getting k-consistency requires time and space exponential in k
• Strong k-consistency means k’-consistency for all k’ from 1 to k
– Once strong k-consistency for k=#variables has been obtained,
solution can be constructed trivially
• Tradeoff between propagation and branching
• Practitioners usually use 2-consistency and less commonly 3-
consistency
Structured CSPs
Tree-structured CSPs
Algorithm for tree-structured CSPs
Nearly tree-structured CSPs

(Finding the minimum cutset is NP-complete.)


Tree decomposition
• Every variable in original problem
must appear in at least one
subproblem
• If two variables are connected in the
original problem, they must appear
together (along with the constraint)
in at least one subproblem
• If a variable occurs in two
subproblems in the tree, it must
appear in every subproblem on the
path that connects the two

• Algorithm: solve for all solutions of each subproblem. Then, use the tree-structured algorithm,
treating the subproblem solutions as variables for those subproblems.
• O(n d^(w+1)) where w is the treewidth (= one less than the size of the largest subproblem)
• Finding a tree decomposition of smallest treewidth is NP-complete, but good heuristic methods exist
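For reference, a hedged sketch of the tree-structured algorithm referred to above, in its basic form (directional arc consistency from the leaves toward the root, then one forward pass of assignments). It assumes the constraint graph is already a tree, a topological order of the variables is given, and each edge carries the same binary constraint allowed(parent_value, child_value):

```python
def solve_tree_csp(order, parent, domains, allowed):
    """Solve a CSP whose constraint graph is a tree.
    order: variables in topological order (root first); parent[v]: parent of v (None for root);
    allowed(pv, cv): constraint between a parent value and a child value."""
    # Backward pass: make each arc (parent -> child) consistent, deepest children first.
    for child in reversed(order[1:]):
        p = parent[child]
        domains[p] = [pv for pv in domains[p]
                      if any(allowed(pv, cv) for cv in domains[child])]
        if not domains[p]:
            return None                      # no solution
    # Forward pass: assign each variable a value consistent with its parent's value.
    assignment = {order[0]: domains[order[0]][0]}
    for child in order[1:]:
        pv = assignment[parent[child]]
        assignment[child] = next(cv for cv in domains[child] if allowed(pv, cv))
    return assignment
```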
Local search for CSPs
• Hill-climbing, simulated annealing typically work with
"complete" states, i.e., all variables assigned
• To apply to CSPs:
– allow states with unsatisfied constraints
– operators reassign variable values

• Variable selection: randomly select any conflicted variable

• Value selection by min-conflicts heuristic:


– choose value that violates the fewest constraints
– i.e., hill-climb with h(n) = total number of violated constraints
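A minimal min-conflicts sketch in Python (illustrative; the conflict-counting function is supplied by the caller and plays the role of h(n) restricted to one variable):

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=10000):
    """Local search: start from a complete random assignment, then repeatedly pick a
    conflicted variable and give it the value that violates the fewest constraints.
    conflicts(var, value, assignment) counts violated constraints (caller-supplied)."""
    assignment = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment                          # all constraints satisfied
        var = random.choice(conflicted)                # random conflicted variable
        assignment[var] = min(domains[var],            # min-conflicts value choice
                              key=lambda val: conflicts(var, val, assignment))
    return None                                        # give up after max_steps
```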
Summary
• CSPs are a special kind of problem:
– states defined by values of a fixed set of variables
– goal test defined by constraints on variable values
• Backtracking = depth-first search with one variable assigned per node
• Variable ordering and value selection heuristics help significantly
• Forward checking prevents assignments that guarantee later failure
• Constraint propagation (e.g., arc consistency) does additional work to constrain values and detect inconsistencies
• Iterative min-conflicts is usually effective in practice
