ADSA Lecture Notes
The divide-and-conquer strategy suggests another way to compute the product of two
n × n matrices. Here, assume n is a power of 2. The two matrices A and B are
partitioned into four submatrices, each of dimension n/2 × n/2. Then the
product AB can be computed using the formula:
[ A11  A12 ] [ B11  B12 ]   [ C11  C12 ]
[ A21  A22 ] [ B21  B22 ] = [ C21  C22 ]
Here,
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22
T(n) = b                  ; n ≤ 2
T(n) = 8 T(n/2) + c n^2   ; n > 2
which solves to T(n) = O(n^3).
Volker Strassen discovered a way to compute the Cij's using only 7 multiplications
and 18 additions or subtractions. His method first computes the seven n/2 × n/2
matrices P, Q, R, S, T, U and V as:
P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 − B22)
S = A22 (B21 − B11)
T = (A11 + A12) B22
U = (A21 − A11)(B11 + B12)
V = (A12 − A22)(B21 + B22)
Then,
C11 = P + S − T + V
C12 = R + T
C21 = Q + S
C22 = P + R − Q + U
T(n) = b                  ; n ≤ 2
T(n) = 7 T(n/2) + a n^2   ; n > 2
which solves to T(n) = O(n^log 7) = O(n^2.81).
Example: Calculate the product of the given two matrices using Strassen's matrix
multiplication, where
    [ 1 2 4 1 ]        [ 1 2 3 4 ]
A = [ 2 3 2 4 ]    B = [ 4 3 2 1 ]
    [ 1 5 1 2 ]        [ 1 3 1 2 ]
    [ 3 1 4 2 ]        [ 4 1 2 3 ]
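The scheme above can be sketched in Python as follows — a minimal recursive version assuming n is a power of 2 and using plain nested lists (the helper names add and sub are my own):

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Partition A and B into n/2 x n/2 quadrants.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # Strassen's seven recursive products P..V.
    P = strassen(add(A11, A22), add(B11, B22))
    Q = strassen(add(A21, A22), B11)
    R = strassen(A11, sub(B12, B22))
    S = strassen(A22, sub(B21, B11))
    T = strassen(add(A11, A12), B22)
    U = strassen(sub(A21, A11), add(B11, B12))
    V = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(P, S), T), V)
    C12 = add(R, T)
    C21 = add(Q, S)
    C22 = add(sub(add(P, R), Q), U)
    # Reassemble the four quadrants into the full product matrix.
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

Running it on the example matrices above gives the same result as the classical O(n^3) triple loop.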
CONVEX HULL
The convex hull of a set S of points in the plane is defined to be the smallest convex
polygon containing all points of S.
Note: A polygon is defined to be convex if for any two points p1 and p2 inside the polygon,
the directed line segment from p1 to p2 is fully contained in the polygon.
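One common O(n log n) construction of the convex hull is Andrew's monotone chain, sketched here in Python (the function names are my own; the notes do not commit to a particular construction algorithm):

```python
def convex_hull(points):
    # Andrew's monotone chain: sort the points, then build lower and upper hulls.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        h = []
        for p in seq:
            # Pop points that would make a clockwise (or straight) turn.
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h

    lower = half(pts)
    upper = half(reversed(pts))
    return lower[:-1] + upper[:-1]   # hull vertices in counter-clockwise order
```

For the points (0,0), (2,0), (2,2), (0,2), (1,1), the hull is the square and the interior point (1,1) is dropped.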
UNIT – III
Greedy Method: General Method, Job Sequencing with deadlines, Knapsack Problem,
Minimum cost spanning trees, Single Source Shortest Paths.
Dynamic Programming: General Method, All pairs shortest paths, Single Source Shortest
Paths – General Weights (Bellman Ford Algorithm), Optimal Binary Search Trees, 0/1
Knapsack, String Editing, Travelling Salesperson problem.
***
GREEDY METHOD
The greedy method is a straightforward design technique applied to problems
whose solution is a subset of the inputs satisfying some constraints.
Any subset that satisfies the constraints of the problem statement is called a "feasible
solution".
A feasible solution that maximizes or minimizes the given objective function is
called an "optimal solution".
To obtain an optimal solution, the greedy method suggests devising an algorithm that
works in stages, considering one input at a time. At each stage, a decision is made regarding
whether a particular input is in an optimal solution.
If the inclusion of the next input into the partially constructed solution would result
in an infeasible solution, then this input is not added to the partial solution. Otherwise, it is
added.
Because the solution is built up as a subset of the inputs, this
version of the greedy technique is called the "subset paradigm".
Algorithm Greedy ( a , n )
{
// a[1 : n] contains n inputs
solution : = Φ;
for i : = 1 to n do
{
x : = Select ( a );
if Feasible ( solution , x ) then
solution : = Union ( solution , x );
}
return solution;
}
Here,
The function Select( ) selects an input from a[ ] and assigns it to x.
Feasible( ) is a Boolean-valued function that determines whether x can be included
in the solution vector.
The Union( ) function combines x with the solution and updates the objective function.
APPLICATIONS
Knapsack problem
Job sequencing with deadlines
Minimum-cost spanning trees
Single-source shortest paths, etc.
KNAPSACK PROBLEM
Consider there are n objects and a knapsack (or bag). Every object i has a weight wi
and the knapsack has a capacity m. If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the
knapsack, then a profit of pi xi is earned. The objective is to obtain a filling of the knapsack
that maximizes the total profit earned. Formally,
maximize    ∑ pi xi          (1)
           1≤i≤n
subject to  ∑ wi xi ≤ m      (2)
           1≤i≤n
and         0 ≤ xi ≤ 1 ,  1 ≤ i ≤ n    (3)
Algorithm GreedyKnapsack ( m , n )
{
// p[1 : n] and w[1 : n] contain the profits and weights respectively of n objects
such that p[i] / w[i] ≥ p[i+1] / w[i+1] ≥ - - - - - - - -.
for i : = 1 to n do
x[i] : = 0.0;
U : = m;
for i : = 1 to n do
{
if ( w[i] > U ) then
break;
else
{
x[i] : = 1.0;
U : = U – w[i];
}
}
if ( i ≤ n ) then
x[i] : = U / w[i];
}
Here,
If the sum of all weights is ≤ m, then xi = 1, 1 ≤ i ≤ n, is an optimal solution.
If p1/w1 ≥ p2/w2 ≥ - - - - ≥ pn/wn, then GreedyKnapsack generates an optimal
solution to the given instance of the knapsack problem.
If the objects are already sorted by profit/weight ratio, GreedyKnapsack runs in O(n) time;
the initial sorting takes O(n log n) time.
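GreedyKnapsack can be sketched in Python as follows; unlike the pseudocode, this version performs the profit/weight sorting itself instead of assuming pre-sorted input:

```python
def greedy_knapsack(profits, weights, m):
    # Consider objects in non-increasing profit/weight order (the precondition
    # assumed by GreedyKnapsack); x[i] is the fraction of object i taken.
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)
    u = m                                 # remaining capacity
    for i in order:
        if weights[i] > u:
            x[i] = u / weights[i]         # take a fraction of the first misfit
            break
        x[i] = 1.0
        u -= weights[i]
    return x, sum(p * xi for p, xi in zip(profits, x))
```

For the classic instance n = 3, m = 20, p = (25, 24, 15), w = (18, 15, 10), it returns x = (0, 1, 0.5) with total profit 31.5.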
JOB SEQUENCING WITH DEADLINES
Consider n jobs, where every job i has a deadline di ≥ 0 and a profit pi > 0. The profit pi
is earned iff the job is completed by its deadline; each job takes one unit of time on a single machine.
A feasible solution for this problem is a subset J of jobs such that each job in this
subset can be completed by its deadline. The value of the feasible solution J is the sum of the
profits of the jobs in J, i.e., ∑ pi over i ∈ J.
An optimal solution is a feasible solution with maximum profit value.
Example: Obtain optimal solution of Job sequencing with deadlines for the instance n=5,
( p1, p2, p3, p4, p5 ) = (20, 15, 10, 5, 1) and ( d1, d2, d3, d4, d5 ) = (2, 2, 1, 3, 3).
Algorithm GreedyJob ( d, J, n )
{
J : = { 1 };
for i : = 2 to n do
{
if ( all jobs in J U { i } can be completed by their deadlines ) then
J : = J U { i };
}
}
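A sketch of this greedy strategy in Python, using the usual slot-based refinement of GreedyJob (each job, taken in non-increasing profit order, is placed in the latest free time slot on or before its deadline). For the example above it selects jobs 1, 2 and 4 with total profit 40:

```python
def job_sequencing(profits, deadlines):
    # Greedy: consider jobs in non-increasing profit order; schedule each in the
    # latest free slot on or before its deadline, rejecting it if none is free.
    n = len(profits)
    jobs = sorted(range(n), key=lambda i: profits[i], reverse=True)
    slots = [None] * (max(deadlines) + 1)     # slot t covers time (t-1, t]
    for i in jobs:
        t = deadlines[i]
        while t > 0 and slots[t] is not None:
            t -= 1                            # try an earlier slot
        if t > 0:
            slots[t] = i
    scheduled = [i for i in slots if i is not None]
    return sorted(scheduled), sum(profits[i] for i in scheduled)
```

Jobs are 0-indexed here, so the result [0, 1, 3] corresponds to J = {1, 2, 4} in the notes' numbering.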
MINIMUM COST SPANNING TREES
A spanning tree of a connected graph is a subgraph that is a tree and includes all the vertices.
If the graph consists of n vertices, a spanning tree contains exactly n−1 edges to connect
the n vertices.
Consider a weighted graph in which every edge is labeled with a cost / weight value.
Cost of the spanning tree is obtained by adding the cost of the selected edges.
Most important methods used to find the minimum cost spanning trees are:
Prim’s Algorithm
Kruskal’s Algorithm
PRIM’s ALGORITHM
In this method, the minimum-cost spanning tree is built edge by edge. The next
edge to include is chosen according to some optimization criterion. The simplest criterion is to choose the
edge that results in the minimum increase in the sum of the costs of the edges so far included.
Procedure:
Algorithm Prim( )
{
T : = Φ ;
TV : = { 1 } ; // start the tree from an arbitrary vertex, say 1
while ( T contains less than n-1 edges ) do
{
Let (u, v) be a least-cost edge such that u ∈ TV and v ∉ TV;
if ( there is no such edge ) then
break;
else
{
Add v to TV;
Add (u, v) to T;
}
}
}
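A heap-based sketch of Prim's algorithm in Python (the vertex numbering 1..n and the function names are my own):

```python
import heapq

def prim(n, edges, start=1):
    # Build an adjacency list for the weighted undirected graph; vertices 1..n.
    adj = {v: [] for v in range(1, n + 1)}
    for u, v, c in edges:
        adj[u].append((c, u, v))
        adj[v].append((c, v, u))
    tv = {start}                 # TV: vertices already in the tree
    t = []                       # T: edges of the spanning tree
    heap = list(adj[start])      # candidate edges (cost, u, v) with u in TV
    heapq.heapify(heap)
    total = 0
    while heap and len(t) < n - 1:
        c, u, v = heapq.heappop(heap)   # least-cost edge leaving TV
        if v in tv:
            continue                    # stale entry: v was added meanwhile
        tv.add(v)
        t.append((u, v))
        total += c
        for e in adj[v]:
            if e[2] not in tv:
                heapq.heappush(heap, e)
    return t, total
```

On the graph with edges (1,2,1), (2,3,2), (3,4,1), (1,4,4), (1,3,3), the tree {(1,2), (2,3), (3,4)} of cost 4 is produced.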
KRUSKAL’s ALGORITHM
Kruskal’s algorithm builds a minimum-cost spanning tree T by adding the edges one
at a time. It selects the edges in non-decreasing order of their costs. An edge is added to T if
it does not create a cycle.
Procedure:
Algorithm Kruskal( )
{
T:=Φ;
while (( T has less than n-1 edges) and (E ≠ Φ )) do
{
Choose an edge (u,v) from E of Lowest cost;
Delete (u, v) from E;
if (u, v) does not create a cycle in T then
add (u,v) to T;
else
discard (u, v);
}
}
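A sketch of Kruskal's algorithm in Python, using a union-find structure for the cycle test (the pseudocode above leaves the cycle check unspecified):

```python
def kruskal(n, edges):
    # Consider edges in non-decreasing cost order; union-find detects cycles.
    parent = list(range(n + 1))        # vertices 1..n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    t, total = [], 0
    for c, u, v in sorted((c, u, v) for u, v, c in edges):
        ru, rv = find(u), find(v)
        if ru != rv:                   # (u, v) joins two components: no cycle
            parent[ru] = rv
            t.append((u, v))
            total += c
            if len(t) == n - 1:
                break
    return t, total
```

On the same graph as the Prim example it also finds a spanning tree of cost 4.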
Note: The main difference between Prim's and Kruskal's algorithms is:
Prim's algorithm maintains a single connected tree at every stage, whereas the edges
selected by Kruskal's algorithm may form a forest at intermediate stages.
Finally, both methods produce a spanning tree of minimum cost.
SINGLE SOURCE SHORTEST PATHS
Let G = (V, E) be a weighted directed graph with source vertex v0. The
problem is to determine the shortest paths from v0 to all the remaining vertices of
G.
The greedy method generates the shortest paths from vertex v0 to the remaining
vertices in non-decreasing order of path length.
First, a shortest path to the nearest vertex is generated. Then, a shortest path to the
second nearest vertex is generated, and so on.
Example: Identify the shortest paths of the given graph starting from vertex 5.
Algorithm ShortestPaths ( v, cost, Dist, n )
// Dist[j] is set to the length of a shortest path from source v to vertex j.
{
for i : = 1 to n do
{
S[i] : = False;
Dist[i] : = Cost[v, i];
}
S[v] : = True;
Dist[v] : = 0.0;
for num : = 1 to n-1 do
{
Choose u from among those vertices not in S such that Dist[u] is minimum;
S[u] : = True;
for ( each w adjacent to u with S[w] = False ) do
{
if ( Dist[w] > Dist[u] + Cost[u, w] ) then
Dist[w] : = Dist[u] + Cost[u, w];
}
}
}
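The pseudocode above (Dijkstra's algorithm) can be sketched in Python; here a priority queue replaces the linear scan for the minimum-Dist vertex, and the edge list and function name are my own:

```python
import heapq

def shortest_paths(n, edges, v):
    # edges: list of directed (u, w, cost); vertices numbered 1..n; source v.
    adj = {u: [] for u in range(1, n + 1)}
    for u, w, c in edges:
        adj[u].append((w, c))
    INF = float("inf")
    dist = [INF] * (n + 1)
    dist[v] = 0
    heap = [(0, v)]
    s = set()                    # S: vertices whose shortest path is finalized
    while heap:
        d, u = heapq.heappop(heap)   # vertex not in S with minimum Dist
        if u in s:
            continue
        s.add(u)
        for w, c in adj[u]:
            if w not in s and d + c < dist[w]:
                dist[w] = d + c      # relax edge (u, w)
                heapq.heappush(heap, (dist[w], w))
    return dist[1:]
```

For edges (1,2,4), (1,3,1), (3,2,2), (2,4,5), (3,4,8) with source 1, the distances to vertices 1..4 are 0, 3, 1, 8.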
***
DYNAMIC PROGRAMMING
Dynamic programming is an algorithm design method that can be used when the
solution to a problem can be viewed as the result of a sequence of decisions.
To obtain an optimal solution, first generate all possible decision sequences, and then apply the
principle of optimality.
"The principle of optimality states that an optimal sequence of decisions has the
property that, whatever the initial state and decisions are, the remaining decisions must
constitute an optimal decision sequence with respect to the state resulting from the first
decision."
Note: The essential difference between the greedy method and dynamic programming
is that in the greedy method only one decision sequence is ever generated, whereas in dynamic
programming many decision sequences may be generated.
Applications: Some of the problem statements that can be solved using dynamic
programming are:
All pairs shortest paths
Single-source shortest paths
Optimal binary search trees
0/1 Knapsack problem etc,
ALL PAIRS SHORTEST PATHS
Let G = (V, E) be a directed graph with n vertices. Let cost be a cost adjacency
matrix such that cost(i, i) = 0, 1 ≤ i ≤ n, cost(i, j) is the length of edge <i, j> if <i, j>
∈ E(G), and cost(i, j) = ∞ if <i, j> ∉ E(G).
The all-pairs shortest-path problem is to determine a matrix A such that A(i, j) is the
length of a shortest path from i to j. The matrix A could be obtained by solving n single-source
shortest-path problems, but dynamic programming gives a more direct solution.
Algorithm AllPaths ( cost, A, n )
{
for i : = 1 to n do
for j : = 1 to n do
A[i, j] : = cost[i, j];
for k : = 1 to n do
for i : = 1 to n do
for j : = 1 to n do
A[i, j] : = min ( A[i, j] , A[i, k] + A[k, j] ) ;
}
Analysis:
Here, the first nested loop takes O(n^2) time and the second nested loop takes O(n^3) time.
Hence, the overall time complexity is O(n^3).
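A direct Python transcription of the algorithm (0-indexed, with float('inf') standing for missing edges):

```python
def all_pairs(cost):
    # cost: n x n matrix with cost[i][i] = 0 and float('inf') for absent edges.
    n = len(cost)
    a = [row[:] for row in cost]           # A starts as a copy of the cost matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Allow k as an intermediate vertex on the i -> j path.
                if a[i][k] + a[k][j] < a[i][j]:
                    a[i][j] = a[i][k] + a[k][j]
    return a
```

For the 3-vertex cost matrix [[0, 4, 11], [6, 0, 2], [3, ∞, 0]], the result is [[0, 4, 6], [5, 0, 2], [3, 7, 0]].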
SINGLE SOURCE SHORTEST PATHS – GENERAL WEIGHTS (BELLMAN-FORD ALGORITHM)
Let dist^l[u] be the length of a shortest path from the source vertex v to vertex u
containing at most l edges. Then dist^1[u] = cost[v, u], 1 ≤ u ≤ n.
The main objective is to compute dist^(n-1)[u] for all u. This can be done by using
the dynamic programming methodology of the Bellman and Ford algorithm:
If a shortest path from v to u with at most k, k > 1, edges has no more than
k-1 edges, then
dist^k[u] = dist^(k-1)[u]
If a shortest path from v to u with at most k, k > 1, edges has exactly k edges, then
it is made up of a shortest path from v to some vertex i followed by the edge
<i, u>, so
dist^k[u] = min { dist^(k-1)[i] + cost[i, u] }
             i
From this, dist^k can be computed as:
dist^k[u] = min { dist^(k-1)[u] , min over i { dist^(k-1)[i] + cost[i, u] } }
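This recurrence translates into the usual edge-relaxation sketch in Python (the edge-list representation is my own):

```python
def bellman_ford(n, edges, v):
    # edges: list of directed (i, u, cost); vertices 1..n; source v.
    # Handles general (possibly negative) edge weights; assumes no negative cycle.
    INF = float("inf")
    dist = [INF] * (n + 1)
    dist[v] = 0
    for _ in range(n - 1):              # after pass k, dist[u] <= dist^(k+1)[u]
        changed = False
        for i, u, c in edges:
            if dist[i] + c < dist[u]:   # relax edge <i, u>
                dist[u] = dist[i] + c
                changed = True
        if not changed:                 # early exit once dist is stable
            break
    return dist[1:]
```

With negative edges such as (3,2,−2) and (4,3,−2) the algorithm still converges, which Dijkstra's greedy method cannot guarantee.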
OPTIMAL BINARY SEARCH TREES
Example: Assume the set of identifiers is: {for, do, while, int, if}.
(Figure: two of the possible binary search trees that can be built from these identifiers.)
The first tree takes 1, 2, 2, 3, 4 comparisons to find the identifiers. Thus, the average
number of comparisons is (1 + 2 + 2 + 3 + 4) / 5 = 12/5.
The second tree takes 1, 2, 2, 3, 3 comparisons to find the identifiers. Thus, the
average number of comparisons is (1 + 2 + 2 + 3 + 3) / 5 = 11/5.
The number of comparisons made at the different levels to search for a particular node is
considered as the cost.
For a given set of identifiers, designing a binary search tree with minimum cost is known
as the "optimal binary search tree" problem.
Let p(i) be the probability of a successful search for identifier ai, and q(i) the probability of
an unsuccessful search terminating between ai and ai+1, so that
∑ p(i) + ∑ q(i) = 1
1≤i≤n     0≤i≤n
To obtain the cost function of binary search trees, it is useful to add an external node at
every empty subtree, indicated with square nodes.
(Figure: the tree for {for, do, while, int, if} redrawn with square external nodes at every empty subtree.)
If a binary search tree represents ‘n’ identifiers, then there will be exactly n internal
nodes and n+1 external nodes. Every internal node represents a point where a successful
search may terminate. Every external node represents a point where an unsuccessful search
may terminate.
Now, if a successful search terminates at internal node ai at level l, then the expected cost
contribution of that internal node is p(i) * level(ai). If an unsuccessful search terminates at
external node Ei, the expected cost contribution is q(i) * ( level(Ei) – 1 ).
Finally, the cost of the binary search tree is
∑ p(i) * level(ai) + ∑ q(i) * ( level(Ei) – 1 )
1≤i≤n               0≤i≤n
The aim of the problem is to construct the binary search tree for which this
cost is minimum.
Consider an optimal binary search tree for ai+1, - - -, aj with root ak; its left subtree l
contains ai+1, - - -, ak-1 and its right subtree r contains ak+1, - - -, aj.
Here, the expected cost c(i, j) of such a tree satisfies the recurrence
c(i, j) = min { c(i, k-1) + c(k, j) } + w(i, j) ,  i < k ≤ j
           k
Finally, r(i, j) records the k that minimizes the recurrence, and c(0, n) is the cost of an optimal tree.
Initially, c(i, i) = 0 ;
r(i, i) = 0 ;
w(i, i) = q( i ) , 0 ≤ i ≤ n and
w(i, j) = p( j ) + q( j ) + w(i, j – 1).
Algorithm OBST ( p, q, n )
{
for i : = 0 to n-1 do
{
w[i, i] : = q[i];
c[i, i] : = 0.0;
r[i, i] : = 0;
w[i, i+1] : = q[i] + q[i+1] + p[i+1] ;
c[i, i+1] : = q[i] + q[i+1] + p[i+1] ;
r[i, i+1] : = i+1;
}
w[n, n] : = q[n];
c[n, n] : = 0.0;
r[n, n] : = 0;
for m : = 2 to n do
{
for i : = 0 to n - m do
{
j : = i + m;
w[i, j] : = w[i, j-1] + p[ j ] + q[ j ];
k : = Find(c, r, i, j);
c[i, j] : = w[i, j] + c[i, k – 1] + c[k, j];
r[i, j] : = k;
}
}
write ( c[0, n], w[0, n], r[0, n] );
}
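A compact O(n^3) sketch of OBST in Python; the Find step is written as an unoptimized minimum over k, and probabilities are given as scaled integers to avoid floating point:

```python
def obst(p, q, n):
    # p[1..n]: success weights (pass p as [0, p1, ..., pn]);
    # q[0..n]: failure weights. Returns (cost of optimal tree, root table r).
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]
    for m in range(1, n + 1):            # m = j - i, the subproblem size
        for i in range(n - m + 1):
            j = i + m
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            # Choose the root k in (i, j] minimizing c[i][k-1] + c[k][j].
            best_k = min(range(i + 1, j + 1),
                         key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = w[i][j] + c[i][best_k - 1] + c[best_k][j]
            r[i][j] = best_k
    return c[0][n], r
```

As a check, the standard textbook instance (do, if, int, while) with p = (3, 3, 1, 1) and q = (2, 3, 1, 1, 1), in units of 1/16 (an instance not worked in these notes, used here only to exercise the code), gives c(0, 4) = 32/16 and root a2.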
0 / 1 KNAPSACK PROBLEM
Consider there are n objects and a knapsack (or bag). Every object i has a weight wi
and the knapsack has a capacity m. If object i is placed into the knapsack, then a profit of
pi xi is earned, where xi = 0 or 1. The objective is to fill the knapsack so as to maximize the total
profit earned.
Formally, the problem statement can be stated as:
maximize    ∑ pi xi          (1)
           1≤i≤n
subject to  ∑ wi xi ≤ m      (2)
           1≤i≤n
and         xi = 0 or 1 ,  1 ≤ i ≤ n    (3)
Let us assume that the decision on xi is made in the order xn , xn-1 , - - - - , x1. For a
decision on xn there are two possible states: xn = 0 (object n is left out) or xn = 1 (object n is included).
Now, the remaining decisions xn-1 , - - - - , x1 must be optimal with respect to the
problem state resulting from the decision on xn.
To solve this problem using dynamic programming, follow the procedure:
Step 1: Start with S^0 = { (0, 0) }, where a pair (P, W) records a total profit P obtainable
with total weight W.
Step 2: Compute S1^i = { (P + pi , W + wi) / (P, W) ∈ S^(i-1) }, i = 1, 2, - - -.
Now, S^i is obtained by merging S^(i-1) and S1^i and applying the purging rule.
Purging Rule: If S^i contains (Pj , Wj) and (Pk , Wk) with the condition that Wj > Wk
and Pj ≤ Pk , then the pair (Pj , Wj) can be discarded.
Step 3: Select a pair (P1 , W1) of maximum profit with W1 ≤ m from the last set. If
(P1 , W1) ∈ S^(n-1), set xn = 0; otherwise set xn = 1.
Now, select the new pair (P1 – pn , W1 – wn) and repeat Step 3 until all
decisions are completed.
Example: Obtain 0/1 Knapsack solution for the instance n=3, m=6, (p 1 , p2 , p3) = (1, 2, 5)
and (w1 , w2 , w3) = (2, 3, 4).
Algorithm DKP ( p, w, n, m )
{
S^0 : = { (0, 0) };
for i : = 1 to n – 1 do
{
S1^i : = { (P, W) / (P – pi , W – wi) ∈ S^(i-1) and W ≤ m } ;
S^i : = MergePurge( S^(i-1) , S1^i );
}
(PX, WX) : = last pair in S^(n-1);
(PY, WY) : = (P1 + pn , W1 + wn), where (P1, W1) is the pair in S^(n-1) with
the largest W1 such that W1 + wn ≤ m;
if ( PX > PY ) then xn : = 0; else xn : = 1;
// Trace back similarly for xn-1, - - - -, x1.
}
The time complexity of the 0/1 knapsack problem using dynamic programming is O(2^n) in the worst case.
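A direct sketch of this set-based DP in Python. As a simplification of DKP, it loops over all n objects and reads the maximum profit off the final set, rather than handling xn separately:

```python
def dkp(p, w, m):
    # Build S^0, S^1, ..., S^n as lists of (profit, weight) pairs sorted by
    # weight, applying the purging (dominance) rule after each merge.
    s = [(0, 0)]
    for pi, wi in zip(p, w):
        s1 = [(P + pi, W + wi) for P, W in s if W + wi <= m]
        merged = sorted(s + s1, key=lambda t: (t[1], -t[0]))
        purged = []
        for P, W in merged:
            # Keep a pair only if it improves on every lighter pair's profit.
            if not purged or P > purged[-1][0]:
                purged.append((P, W))
        s = purged
    return s[-1][0]   # maximum profit among the surviving feasible pairs
```

For the example instance n = 3, m = 6, p = (1, 2, 5), w = (2, 3, 4), the answer is 6 (take objects 1 and 3).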
STRING EDITING
Consider two strings X = x1 x2 - - - xn and Y = y1 y2 - - - ym, where xi, 1 ≤ i ≤ n,
and yj, 1 ≤ j ≤ m, are members of a finite set of symbols known as the alphabet. We want to
transform X into Y using a sequence of edit operations on X. The permissible edit operations
are insert, delete, and change, and each operation has an associated cost. The cost of a sequence
of operations is the sum of the costs of the individual operations in the sequence.
For this,
Define cost(i, j) as the minimum cost of any edit sequence transforming x1 - - - xi into y1 - - - yj.
cost(i, j) can be computed using the recurrence:
cost(i, j) = 0                        if i = 0 and j = 0
cost(i, j) = cost(i-1, 0) + D(xi)     if j = 0 and i > 0
cost(i, j) = cost(0, j-1) + I(yj)     if i = 0 and j > 0
cost(i, j) = cost1(i, j)              if i > 0 and j > 0
Where,
cost1(i, j) = min { cost(i-1, j) + D(xi) , cost(i, j-1) + I(yj) , cost(i-1, j-1) + c(xi , yj) }
Here D(xi) is the cost of deleting xi, I(yj) is the cost of inserting yj, and c(xi , yj) is the cost of
changing xi to yj (0 if they are already equal).
Compute cost(i, j) for all possible values of i and j; there are (n+1)(m+1) such values.
These values can be computed in the form of a table M, where each row corresponds
to a value of i and each column to a value of j.
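The table can be filled row by row, as in this Python sketch (the per-operation costs D, I, C are parameters here, since the notes leave them unspecified):

```python
def edit_cost(x, y, D=1, I=1, C=1):
    # cost[i][j] = minimum cost of transforming x[:i] into y[:j];
    # D = delete cost, I = insert cost, C = change cost (0 when symbols match).
    n, m = len(x), len(y)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = cost[i - 1][0] + D          # delete all of x[:i]
    for j in range(1, m + 1):
        cost[0][j] = cost[0][j - 1] + I          # insert all of y[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            change = cost[i - 1][j - 1] + (0 if x[i - 1] == y[j - 1] else C)
            cost[i][j] = min(cost[i - 1][j] + D,
                             cost[i][j - 1] + I,
                             change)
    return cost[n][m]
```

With unit costs this is the familiar Levenshtein edit distance; the table takes O(nm) time and space.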
TRAVELLING SALESPERSON PROBLEM
Let G = (V, E) be a directed graph with edge costs cij, where cij > 0 for <i, j> ∈ E and
cij = ∞ if <i, j> ∉ E.
A tour of the graph is a directed cycle that visits all vertices exactly once, and the cost of the tour is the
sum of the costs of its edges. The travelling salesperson problem is to find such a tour
with minimum cost.
Step 1: Let the tour start and end at vertex 1. Then the function g( 1, V – {1} ) is
the total length of an optimal tour.
Step 2: Let g(i, S) be the length of a shortest path starting at vertex i, going
through all vertices in S, and terminating at vertex 1. Then
g(i, S) = min { cij + g(j, S – {j}) }
          j∈S
Initially, g( i, Φ ) = ci1 , 1 ≤ i ≤ n.
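This recurrence is the Held–Karp dynamic program, sketched here in Python with 0-based indexing (index 0 plays the role of vertex 1):

```python
from itertools import combinations

def tsp(cost):
    # Held-Karp: g(i, S) = length of the shortest path starting at i, visiting
    # every vertex in S, and ending at vertex 0 (i.e. "vertex 1" in the notes).
    n = len(cost)
    g = {}
    for i in range(1, n):
        g[(i, frozenset())] = cost[i][0]          # g(i, {}) = c_{i1}
    for size in range(1, n - 1):                  # grow S one vertex at a time
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for i in range(1, n):
                if i in fs:
                    continue
                g[(i, fs)] = min(cost[i][j] + g[(j, fs - {j})] for j in fs)
    full = frozenset(range(1, n))
    # g(1, V - {1}): close the tour from vertex 0.
    return min(cost[0][j] + g[(j, full - {j})] for j in full)
```

For the 4-city matrix [[0,10,15,20],[5,0,9,10],[6,13,0,12],[8,8,9,0]] the optimal tour 1→2→4→3→1 has length 35. The table has O(n 2^n) entries, a large improvement over trying all (n−1)! tours.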
END
UNIT – IV
BACKTRACKING
Backtracking is a general technique used to obtain solutions of problems
that must satisfy some constraints. It systematically searches the set of all possible solution
vectors ( x1, x2, - - - -, xn ) and reports those that satisfy the constraints.
Explicit constraints are rules that restrict each xi to take values only from a given set Si.
Example: Si = { 0 , 1 }
=> xi = 0 or 1.
Implicit constraints are rules that determine which of the tuples in the solution
space satisfy the criterion function.
Basic Terminology:
A live node is a node that has been generated but whose children have not all been
generated yet; the E-node is the live node whose children are currently being generated; a
dead node is a node that is not to be expanded further.
The backtracking strategy determines the problem solution by a systematic search of the
solution space organized as a tree.
Algorithm BackTracking ( n )
{
k : = 1;
while ( k ≠ 0 ) do
{
if ( there remains an untried x[k] ∈ T( x[1], x[2], - - - -, x[k-1] ) such that
Bk( x[1], - - -, x[k] ) is True ) then
{
if ( x[1], - - - , x[k] ) is a path to an answer node then
write x[1 : k];
k : = k + 1;
}
else
k : = k – 1; // backtrack to the previous level
}
}
APPLICATIONS
8-Queens problem
Sum of subsets problem
Graph coloring
0/1 Knapsack problem etc,
N-QUEENS PROBLEM
The n-queens problem is to place n queens on an n × n chessboard so that no two queens attack each other.
4 – QUEENS PROBLEM
Explicit constraints: Si = { 1, 2, 3, 4 }; xi must be selected from Si.
Implicit constraints: No two queens can be placed in the same row,
column or diagonal.
Suppose two queens are placed at positions ( i, j ) and ( k, l ). Then they are on the same
diagonal only if
i – j = k – l   or   i + j = k + l
i.e., j – l = i – k   or   j – l = k – i
i.e., | j – l | = | i – k |.
8 – QUEENS PROBLEM
Explicit constraints: Si = { 1, 2, 3, 4, 5, 6, 7, 8 }; xi must be selected from Si.
Implicit constraints: No two queens can be placed in the same row,
column or diagonal.
N – QUEENS PROBLEM
Algorithm NQueens ( k, n )
{
// Place queens k, k+1, - - -, n, given that queens 1 to k-1 are already placed.
for i : = 1 to n do
{
if Place ( k, i ) then
{
x[ k ] : = i;
if ( k = n ) then
write x [ 1 : n ];
else
NQueens( k+1 , n );
}
}
}
Algorithm Place ( k , i )
{
// Returns True if a queen can be placed in kth row and ith column.
for j : = 1 to k – 1 do
{
if ( ( x[ j ] = i ) or (Abs ( x[ j ] – i ) = Abs ( j – k ) ) ) then
return False;
}
return True;
}
Analysis:
Place ( k, i ) returns True if a queen can be placed in the kth row and ith column; it
takes O(k) time.
In the worst case, the backtracking search still examines an exponential number of board
configurations; the bounding function only prunes this search.
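NQueens and Place translate directly into Python (1-indexed, as in the pseudocode; the solution collection is my own addition):

```python
def n_queens(n):
    # x[k] = column of the queen in row k; Place checks columns and diagonals.
    solutions = []
    x = [0] * (n + 1)                   # 1-indexed, matching the notes

    def place(k, i):
        # True iff a queen can go in row k, column i.
        return all(x[j] != i and abs(x[j] - i) != abs(j - k)
                   for j in range(1, k))

    def nqueens(k):
        for i in range(1, n + 1):
            if place(k, i):
                x[k] = i
                if k == n:
                    solutions.append(x[1:])   # a full placement found
                else:
                    nqueens(k + 1)

    nqueens(1)
    return solutions
```

For n = 4 there are exactly two solutions, (2, 4, 1, 3) and (3, 1, 4, 2).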
SUM OF SUBSETS
Consider n distinct positive numbers given by the set w = ( w1 , w2 , - - - , wn ). The
aim of the problem is to find all combinations of these numbers whose sum is m.
In the solution space tree, the left subtree of the root defines all subsets containing w1,
i.e., x1 = 1, and the right subtree defines all subsets not containing w1, i.e., x1 = 0; the same
branching rule applies with wi at level i.
Example: Solve sum of subsets problem when n = 4, w = (7, 11, 13, 24) and m = 31.
Algorithm SumOfSub ( s, k, r )
// s = sum of the weights chosen so far, r = sum of the remaining weights;
// first invoked as SumOfSub ( 0, 1, ∑ w[i] ), with w in non-decreasing order.
{
x [ k ] : = 1;
if ( s + w[k] = m ) then
write ( x [ 1 : k ] );
else if ( s + w [ k ] + w [ k + 1 ] ≤ m ) then
SumOfSub ( s + w[k], k+1, r-w[k] );
if ( ( s + r – w[k] ≥ m ) and ( s + w[k+1] ≤ m ) ) then
{
x [ k ] : = 0;
SumOfSub ( s , k+1, r-w[k] );
}
}
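The same algorithm in Python (0-indexed; the guard k + 1 < n protects the look-ahead at the last position, which the pseudocode leaves implicit):

```python
def sum_of_subsets(w, m):
    # w must be in non-decreasing order; finds all 0/1 vectors x with
    # sum(w[i] * x[i]) == m, pruned exactly as in SumOfSub.
    n = len(w)
    x = [0] * n
    out = []

    def solve(s, k, r):
        x[k] = 1
        if s + w[k] == m:                               # answer node
            out.append(x[:k + 1] + [0] * (n - k - 1))
        elif k + 1 < n and s + w[k] + w[k + 1] <= m:    # left branch feasible
            solve(s + w[k], k + 1, r - w[k])
        # Right branch: only if enough weight remains and the next fits.
        if s + r - w[k] >= m and (k + 1 < n and s + w[k + 1] <= m):
            x[k] = 0
            solve(s, k + 1, r - w[k])

    solve(0, 0, sum(w))
    return out
```

For the example instance n = 4, w = (7, 11, 13, 24), m = 31, the two solutions are x = (1, 1, 1, 0) and x = (1, 0, 0, 1).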
GRAPH COLORING
Let G be a graph and m be a given positive integer. Graph coloring is a problem of
coloring each vertex in such a way that no two adjacent nodes have the same color and only
m colors are used.
In general, the problem is treated as the m-colorability decision problem, and the smallest
integer m for which the graph can be colored is called the chromatic number of the graph.
(Figure: a graph on vertices A–F and a valid 3-coloring of it using colors 1, 2, 3.)
Suppose we represent a graph by its adjacency matrix G[1:n , 1:n], where G[i, j] = 1 if
(i, j) is an edge of G and G[i, j] = 0 otherwise.
The colors are represented by the integers 1, 2, - - -- , m and the solutions are given
by the n-tuple ( x1 , x2 - - - xn ) where xi is the color of node i.
Algorithm mColoring ( k )
{
// k is the index of the next vertex to color.
repeat
{
NextValue ( k ); // Assign to x[k] a legal color
if ( x[ k ] = 0 ) then
return;
if ( k = n ) then
write ( x [ 1 : n ] ) ;
else
mColoring ( k+1 ) ;
}
until(false);
}
Algorithm NextValue ( k )
{
repeat
{
x [ k ] : = ( x [k] + 1 ) mod (m + 1);
if ( x[ k ] = 0 ) then
return;
for j : =1 to n do
{
if ( ( G[k, j] ≠ 0 ) and ( x[k] = x[j] )) then
break;
}
if ( j = n + 1 ) then
return;
}
until (false);
}
Analysis:
The computing time for NextValue to determine a legal color is O(mn).
NextValue is invoked once for each node of the state space tree, of which there are O(m^n).
Hence, the time complexity of the graph coloring problem is O(n m^n).
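mColoring and NextValue can be combined into one recursive Python sketch (0-indexed; collecting all colorings is my own addition):

```python
def m_coloring(adj, m):
    # adj: n x n adjacency matrix (0/1 entries); finds every assignment of the
    # colors 1..m such that no two adjacent vertices share a color.
    n = len(adj)
    x = [0] * n
    out = []

    def color(k):
        for c in range(1, m + 1):      # the role of NextValue: try each color
            if all(not (adj[k][j] and x[j] == c) for j in range(n)):
                x[k] = c
                if k == n - 1:
                    out.append(x[:])   # all vertices legally colored
                else:
                    color(k + 1)
                x[k] = 0               # undo the choice on backtracking

    color(0)
    return out
```

A triangle has 3! = 6 valid 3-colorings and no valid 2-coloring.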
0/1 KNAPSACK PROBLEM – BACKTRACKING
Consider there are n objects and a knapsack (or bag). Every object i has a weight wi
and the knapsack has a capacity m. If object i is placed into the knapsack, then a profit of
pi xi is earned, where xi = 0 or 1. The objective is to fill the knapsack so as to maximize the
total profit earned.
Formally, the problem statement can be stated as:
maximize ∑ pi xi
1≤i≤n
subject to ∑ wi xi ≤ m
1≤i≤n
and xi = 0 or 1 , 1≤i≤n
Example: Obtain 0/1 Knapsack solution for the instance n=3, m=6, (p 1 , p2 , p3) = (5, 3, 6)
and (w1 , w2 , w3) = (3, 2, 4).
Algorithm Bound ( cp, cw, k )
// cp = current profit, cw = current weight, k = index of the last considered object;
// returns an upper bound on the profit obtainable from this node (fractions allowed).
{
b : = cp;
c : = cw;
for i : = k + 1 to n do
{
c : = c + w[ i ];
if ( c < m ) then
b : = b + p[ i ];
else
return b + ( 1 – (c – m )/w[ i ] ) * p[ i ];
}
return b;
}
BRANCH AND BOUND
In branch and bound, each node explores its children using either BFS or D-search.
A BFS state space search is called FIFO (First
In First Out) search, as the list of live nodes is a first-in-first-out list (queue).
A D-search state space search is called LIFO (Last In First Out) search, as the
list of live nodes is a last-in-first-out list (stack).
As in backtracking, bounding functions are used to kill live nodes
that cannot lead to an answer state.
With FIFO and LIFO branch and bound, the selection of the next E-node is rather rigid
and can be time consuming. To improve performance, a cost can be associated with each
node, giving the least-cost (LC) search.
If x is an answer node, then c(x) is the cost of reaching x from the root of the state
space tree.
If the subtree rooted at x contains no answer node, then c(x) = ∞.
Algorithm LCSearch ( t )
{
E : = t;
repeat
{
for each child x of E do
{
if x is an answer node then
write the path from x to t and return;
Add( x ); // x becomes a new live node
x → parent : = E;
}
if there are no more live nodes then
{
write ' No Answer Node ';
return;
}
E : = Least( );
}
until ( False );
}
BOUNDING
Bounding functions are used to avoid generation of subtrees that do not contain the
answer node. In bounding function, lower and upper bound values are generated at each
node.
Assume each node x has a cost c(x) associated with it and a minimum cost answer
node is to be found.
A cost function Ĉ( . ) such that Ĉ(x) ≤ c(x) is used to provide lower bounds on
solutions obtainable from any node x.
If upper is an upper bound on the cost of a minimum-cost solution, then all live nodes
x with Ĉ(x) > upper can be killed.
Initially, upper is set to ∞. Each time a new answer node is found, the value of upper
is updated.
Note:
1) If the list of live nodes is implemented as a Queue with Least( ) and Add( ) functions,
then LC search is treated as “FIFO Branch and Bound”.
2) If the list of live nodes is implemented as a Stack with Least( ) and Add( ) functions,
then LC search is treated as “LIFO Branch and Bound”.
0/1 KNAPSACK PROBLEM – BRANCH AND BOUND
Consider there are n objects and a knapsack (or bag). Every object i has a weight wi
and the knapsack has a capacity m. If object i is placed into the knapsack, then a profit of
pi xi is earned, where xi = 0 or 1. The objective is to fill the knapsack so as to maximize the
total profit earned.
In the branch and bound strategy, the problem statement is converted into a
minimization problem as:
minimize - ∑ pi xi
1≤i≤n
subject to ∑ wi xi ≤ m
1≤i≤n
and xi = 0 or 1 , 1≤i≤n
Here u( x ) is an upper-bound cost function for node x, and the ranking function used is
Ĉ(x) = u(x) − ( m − current total weight ) × ( actual profit of the remaining object / actual weight of the remaining object )
Example: Obtain LCBB solution for the given knapsack instance n = 4, m = 15,
(p1 , p2, p3 , p4) = (10, 10, 12, 18) and ( w1 , w2, w3 , w4) = (2, 4, 6, 9).
TRAVELING SALESPERSON
Let G = (V, E) be a directed graph defining an instance of the traveling salesperson
problem. Let cij is the cost of edge <i, j> є E and cij = α if <i, j> є E. Assume every tour
starts and ends at vertex 1.
To use LCBB to search the traveling salesperson state space tree, define a cost
function c( . ) and other two functions Ĉ(.) and u( . ) such that Ĉ(r) ≤ c(r) ≤ u(r) for all
nodes r. The cost c( . ) is such that the solution node with least c( . ) corresponds to the
shortest tour in G.
To obtain traveling salesperson solution, apply branch and bound strategy as:
Step 1: Reduce the cost matrix of G.
For this, choose the minimum entry in row i (column j) and subtract it from all
entries of row i (column j), for every row and every column.
Step 2: The total amount subtracted from the columns and rows is a lower bound on the
length of a minimum-cost tour and can be used as Ĉ value for the root of the state
space tree.
Step 3: Obtain a reduced cost matrix for every node in the traveling salesperson state
space tree.
Let A be the reduced cost matrix for node R, and let S be a child of R such that the tree
edge (R, S) corresponds to including edge <i, j> in the tour. If S is not a leaf node, then the
reduced cost matrix for S may be obtained as follows:
(a) Change all entries in row i and column j of A to ∞.
(b) Set A( j, 1 ) to ∞.
(c) Reduce all rows and columns of the resulting matrix, except for rows and columns
containing only ∞.
Then Ĉ(S) = Ĉ(R) + A(i, j) + r, where r is the total amount subtracted in step (c).
Example: Consider the cost matrix
[  ∞  20  30  10  11 ]
[ 15   ∞  16   4   2 ]
[  3   5   ∞   2   4 ]
[ 19   6  18   ∞   3 ]
[ 16   4   7  16   ∞ ]
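The Step 1 reduction can be sketched in Python (function name is my own). For the matrix above, the rows give up 10 + 2 + 2 + 3 + 4 = 21 and the columns a further 1 + 3 = 4, so the lower bound Ĉ at the root is 25:

```python
def reduce_matrix(a):
    # Subtract each row's minimum, then each column's minimum; the total
    # subtracted is a lower bound on the length of any tour.
    INF = float("inf")
    a = [row[:] for row in a]
    total = 0
    for i, row in enumerate(a):
        mn = min(row)
        if 0 < mn < INF:                  # skip rows already reduced or all-INF
            total += mn
            a[i] = [x - mn if x < INF else x for x in row]
    n = len(a)
    for j in range(n):
        mn = min(a[i][j] for i in range(n))
        if 0 < mn < INF:
            total += mn
            for i in range(n):
                if a[i][j] < INF:
                    a[i][j] -= mn
    return a, total
```
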
END
UNIT – V
A set of decision problems that can be solved in deterministic polynomial time are
called P-class problems.
Example: Searching : O(log n)
Sorting techniques : O(n^2), O(n log n)
Matrix multiplication : O(n^3) etc.
A set of decision problems whose solutions can be found (or verified) in nondeterministic
polynomial time are called NP-class problems. The hardest problems related to NP are
further classified into two types: NP-Hard and NP-Complete problems.
NP-Hard Problem
A problem L is NP-Hard if every problem in NP can be reduced to L in polynomial time;
L itself need not belong to NP.
NP-Complete Problem
A problem L is NP-Complete if
It belongs to NP and NP-Hard
It has property that it can be solved in polynomial time, iff all other NP-Complete
problems can also be solved in polynomial time.
All NP-Complete problems are NP-Hard, but some NP-Hard problems are not NP-
Complete.
Properties:
P and NP: P is the set of all decision problems solvable by deterministic algorithms in
polynomial time. NP is the set of all decision problems solvable by nondeterministic
algorithms in polynomial time.
Cook's Theorem
Cook's theorem states that satisfiability is in P iff P = NP. Stephen
Cook showed in 1971 that the Boolean satisfiability problem is NP-Complete.
Let x1 = 1 , x2 = 0, x3 = 0 be an assignment to the variables of a Boolean expression.
A non-deterministic algorithm can guess such an assignment, evaluate the
expression for it, and accept if the entire expression is true.
This shows that satisfiability can be solved in nondeterministic polynomial time.
Hence, if satisfiability could be solved in deterministic polynomial time, all problems in NP
could be solved in polynomial time.
THE CLIQUE DECISION PROBLEM
A clique of size k in a graph G is a complete subgraph on k vertices; the clique decision
problem asks whether G contains a clique of size at least k.
The Clique Decision Problem belongs to NP – if a problem belongs to the NP class, then
it has polynomial-time verifiability; that is, given a certificate, we should be able to
verify in polynomial time whether it is a solution to the problem.
Proof:
1. Certificate – Let the certificate be a set S consisting of nodes in the clique and S is a
subgraph of G.
2. Verification – We have to check whether there exists a clique of size k in the graph.
Verifying that the number of nodes in S equals k takes O(1) time. Verifying that each
vertex has degree k−1 within S takes O(k^2) time, since in a complete graph each
vertex is connected to every other vertex through an edge, and hence the total number of
edges in a complete graph on k vertices is C(k, 2) = k(k−1)/2. Therefore, checking that the
graph formed by the k nodes in S is complete takes O(k^2) = O(n^2) time (since k ≤ n, where n
is the number of vertices in G).
Therefore, the Clique Decision Problem has polynomial time verifiability and hence
belongs to the NP Class.
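The verification step above is simple enough to state directly in Python (representation and function name are my own):

```python
from itertools import combinations

def verify_clique(graph, s, k):
    # graph: set of frozenset edges; s: candidate vertex set (the certificate).
    # Checks |s| == k and that every pair of vertices in s is joined by an edge.
    if len(s) != k:
        return False
    return all(frozenset(e) in graph for e in combinations(s, 2))
```

Checking all C(k, 2) pairs is O(k^2), matching the analysis above.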
Conclusion
The clique decision problem is in NP and is NP-Hard (satisfiability can be reduced to it).
Therefore, the clique decision problem is NP-Complete.
The traveling salesman problem consists of a salesman and a set of cities. The
salesman has to visit each one of the cities starting from a certain one and returning to the
same city. The challenge of the problem is that the traveling salesman wants to minimize the
total length of the trip.
THE END
II/IV B.TECH, I SEM, 2024-25, PBR VITS (Autonomous): Kavali — Page 80