16.2 Elements of the greedy strategy
A greedy algorithm obtains an optimal solution to a problem by making a sequence
of choices. At each decision point, the algorithm makes the choice that seems best at
the moment. This heuristic strategy does not always produce an optimal solution,
but as we saw in the activity-selection problem, sometimes it does. This section
discusses some of the general properties of greedy methods.
The process that we followed in Section 16.1 to develop a greedy algorithm was
a bit more involved than is typical. We went through the following steps:
1. Determine the optimal substructure of the problem.
2. Develop a recursive solution. (For the activity-selection problem, we formulated recurrence (16.2), but we bypassed developing a recursive algorithm based
on this recurrence.)
3. Show that if we make the greedy choice, then only one subproblem remains.
4. Prove that it is always safe to make the greedy choice. (Steps 3 and 4 can occur
in either order.)
5. Develop a recursive algorithm that implements the greedy strategy.
6. Convert the recursive algorithm to an iterative algorithm. (Steps 5 and 6 are sketched in code after this list.)
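As a concrete illustration of steps 5 and 6, here is a minimal Python sketch of the activity-selection greedy. The function names and the small instance are our own illustration, not from the text, and we assume the activities are (start, finish) pairs already sorted by monotonically increasing finish time, with nonnegative start times.

    def recursive_select(acts, i, last_finish):
        """Step 5: recursive greedy selection.
        acts: (start, finish) pairs sorted by finish time.
        Picks the first activity compatible with the choices so far,
        then recurses on the single remaining subproblem.
        """
        # Skip activities that overlap the most recent choice.
        while i < len(acts) and acts[i][0] < last_finish:
            i += 1
        if i == len(acts):
            return []
        return [i] + recursive_select(acts, i + 1, acts[i][1])

    def iterative_select(acts):
        """Step 6: the same greedy strategy, converted to a simple loop."""
        chosen, last_finish = [], 0    # assumes start times >= 0
        for i, (start, finish) in enumerate(acts):
            if start >= last_finish:   # greedy choice: earliest compatible finish
                chosen.append(i)
                last_finish = finish
        return chosen

    acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
    print(recursive_select(acts, 0, 0))                        # [0, 3, 7]
    print(iterative_select(acts) == recursive_select(acts, 0, 0))  # True

Both versions make the same sequence of greedy choices; the iterative form simply replaces the tail recursion with a loop.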
In going through these steps, we saw in great detail the dynamic-programming underpinnings of a greedy algorithm. For example, in the activity-selection problem,
we first defined the subproblems $S_{ij}$, where both $i$ and $j$ varied. We then found
that if we always made the greedy choice, we could restrict the subproblems to be
of the form $S_k$.
Alternatively, we could have fashioned our optimal substructure with a greedy
choice in mind, so that the choice leaves just one subproblem to solve. In the
activity-selection problem, we could have started by dropping the second subscript
and defining subproblems of the form $S_k$. Then, we could have proven that a greedy
choice (the first activity $a_m$ to finish in $S_k$), combined with an optimal solution to
the remaining set $S_m$ of compatible activities, yields an optimal solution to $S_k$.
More generally, we design greedy algorithms according to the following sequence
of steps:
1. Cast the optimization problem as one in which we make a choice and are left
with one subproblem to solve.
2. Prove that there is always an optimal solution to the original problem that makes
the greedy choice, so that the greedy choice is always safe.
3. Demonstrate optimal substructure by showing that, having made the greedy
choice, what remains is a subproblem with the property that if we combine an
optimal solution to the subproblem with the greedy choice we have made, we
arrive at an optimal solution to the original problem.
We shall use this more direct process in later sections of this chapter. Nevertheless, beneath every greedy algorithm, there is almost always a more cumbersome
dynamic-programming solution.
How can we tell whether a greedy algorithm will solve a particular optimization
problem? No way works all the time, but the greedy-choice property and optimal
substructure are the two key ingredients. If we can demonstrate that the problem
has these properties, then we are well on the way to developing a greedy algorithm
for it.
Greedy-choice property
The first key ingredient is the greedy-choice property: we can assemble a globally
optimal solution by making locally optimal (greedy) choices. In other words, when
we are considering which choice to make, we make the choice that looks best in
the current problem, without considering results from subproblems.
Here is where greedy algorithms differ from dynamic programming. In dynamic
programming, we make a choice at each step, but the choice usually depends on the
solutions to subproblems. Consequently, we typically solve dynamic-programming
problems in a bottom-up manner, progressing from smaller subproblems to larger
subproblems. (Alternatively, we can solve them top down with memoization. Of
course, even though the code works top down, we still must solve the subproblems before making a choice.) In a greedy algorithm, we make whatever choice
seems best at the moment and then solve the subproblem that remains. The choice
made by a greedy algorithm may depend on choices so far, but it cannot depend on
any future choices or on the solutions to subproblems. Thus, unlike dynamic programming, which solves the subproblems before making the first choice, a greedy
algorithm makes its first choice before solving any subproblems. A dynamic-programming algorithm proceeds bottom up, whereas a greedy strategy usually
progresses in a top-down fashion, making one greedy choice after another, reducing each given problem instance to a smaller one.
Of course, we must prove that a greedy choice at each step yields a globally
optimal solution. Typically, as in the case of Theorem 16.1, the proof examines
a globally optimal solution to some subproblem. It then shows how to modify
the solution to substitute the greedy choice for some other choice, resulting in one
similar, but smaller, subproblem.
We can usually make the greedy choice more efficiently than when we have to
consider a wider set of choices. For example, in the activity-selection problem, assuming that we had already sorted the activities in monotonically increasing order
of finish times, we needed to examine each activity just once. By preprocessing the
input or by using an appropriate data structure (often a priority queue), we often
can make greedy choices quickly, thus yielding an efficient algorithm.
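For instance, a min-heap keyed on finish time always surfaces the next greedy candidate in $O(\lg n)$ time. The following sketch, using Python's heapq, is our own illustration of the idea rather than anything from the text; it drives the same activity-selection greedy from a priority queue instead of a pre-sorted array.

    import heapq

    def select_with_heap(acts):
        """Greedy activity selection driven by a priority queue.
        acts: (start, finish) pairs in any order. The heap, keyed on
        finish time, pops the candidate that finishes earliest.
        """
        heap = [(finish, start) for start, finish in acts]
        heapq.heapify(heap)                # O(n) heap construction
        chosen, last_finish = [], float("-inf")
        while heap:
            finish, start = heapq.heappop(heap)   # next greedy candidate
            if start >= last_finish:              # compatible with choices so far
                chosen.append((start, finish))
                last_finish = finish
        return chosen

    print(select_with_heap([(5, 7), (1, 4), (3, 5), (8, 11)]))
    # [(1, 4), (5, 7), (8, 11)]

The total work is still $O(n \lg n)$, the same as sorting up front, but the heap version makes each greedy choice on demand.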
Optimal substructure
A problem exhibits optimal substructure if an optimal solution to the problem
contains within it optimal solutions to subproblems. This property is a key ingredient of assessing the applicability of dynamic programming as well as greedy
algorithms. As an example of optimal substructure, recall how we demonstrated in
Section 16.1 that if an optimal solution to subproblem $S_{ij}$ includes an activity $a_k$,
then it must also contain optimal solutions to the subproblems $S_{ik}$ and $S_{kj}$. Given
this optimal substructure, we argued that if we knew which activity to use as $a_k$, we
could construct an optimal solution to $S_{ij}$ by selecting $a_k$ along with all activities
in optimal solutions to the subproblems $S_{ik}$ and $S_{kj}$. Based on this observation of
optimal substructure, we were able to devise the recurrence (16.2) that described
the value of an optimal solution.
We usually use a more direct approach regarding optimal substructure when
applying it to greedy algorithms. As mentioned above, we have the luxury of
assuming that we arrived at a subproblem by having made the greedy choice in
the original problem. All we really need to do is argue that an optimal solution to
the subproblem, combined with the greedy choice already made, yields an optimal
solution to the original problem. This scheme implicitly uses induction on the
subproblems to prove that making the greedy choice at every step produces an
optimal solution.
Greedy versus dynamic programming
Because both the greedy and dynamic-programming strategies exploit optimal substructure, you might be tempted to generate a dynamic-programming solution to a
problem when a greedy solution suffices or, conversely, you might mistakenly think
that a greedy solution works when in fact a dynamic-programming solution is required. To illustrate the subtleties between the two techniques, let us investigate
two variants of a classical optimization problem.
The 0-1 knapsack problem is the following. A thief robbing a store finds $n$
items. The $i$th item is worth $v_i$ dollars and weighs $w_i$ pounds, where $v_i$ and $w_i$ are
integers. The thief wants to take as valuable a load as possible, but he can carry at
most $W$ pounds in his knapsack, for some integer $W$. Which items should he take?
(We call this the 0-1 knapsack problem because for each item, the thief must either
take it or leave it behind; he cannot take a fractional amount of an item or take an
item more than once.)
In the fractional knapsack problem, the setup is the same, but the thief can take
fractions of items, rather than having to make a binary (0-1) choice for each item.
You can think of an item in the 0-1 knapsack problem as being like a gold ingot
and an item in the fractional knapsack problem as more like gold dust.
Both knapsack problems exhibit the optimal-substructure property. For the 0-1
problem, consider the most valuable load that weighs at most $W$ pounds. If we
remove item $j$ from this load, the remaining load must be the most valuable load
weighing at most $W - w_j$ that the thief can take from the $n - 1$ original items
excluding $j$. For the comparable fractional problem, consider that if we remove
a weight $w$ of one item $j$ from the optimal load, the remaining load must be the
most valuable load weighing at most $W - w$ that the thief can take from the $n - 1$
original items plus $w_j - w$ pounds of item $j$.
Although the problems are similar, we can solve the fractional knapsack problem
by a greedy strategy, but we cannot solve the 0-1 problem by such a strategy. To
solve the fractional problem, we first compute the value per pound $v_i/w_i$ for each
item. Obeying a greedy strategy, the thief begins by taking as much as possible of
the item with the greatest value per pound. If the supply of that item is exhausted
and he can still carry more, he takes as much as possible of the item with the next
greatest value per pound, and so forth, until he reaches his weight limit $W$. Thus,
by sorting the items by value per pound, the greedy algorithm runs in $O(n \lg n)$
time. We leave the proof that the fractional knapsack problem has the greedy-choice property as Exercise 16.2-1.
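To make the strategy concrete, here is a minimal Python sketch of the fractional greedy just described. The function name and the (value, weight) item representation are our own illustration, not from the text.

    def fractional_knapsack(items, W):
        """Greedy fractional knapsack.
        items: list of (value, weight) pairs; W: capacity in pounds.
        Returns the maximum total value attainable when fractions of
        items may be taken, in order of decreasing value per pound.
        """
        # Sort by value per pound, greatest first: the greedy criterion.
        items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
        total = 0.0
        for value, weight in items:
            if W <= 0:
                break
            take = min(weight, W)          # take as much of this item as fits
            total += value * (take / weight)
            W -= take
        return total

    # The instance of Figure 16.2: items (value, weight), capacity 50.
    print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0

The sort dominates the running time, matching the $O(n \lg n)$ bound above.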
To see that this greedy strategy does not work for the 0-1 knapsack problem,
consider the problem instance illustrated in Figure 16.2(a). This example has 3
items and a knapsack that can hold 50 pounds. Item 1 weighs 10 pounds and
is worth 60 dollars. Item 2 weighs 20 pounds and is worth 100 dollars. Item 3
weighs 30 pounds and is worth 120 dollars. Thus, the value per pound of item 1 is
6 dollars per pound, which is greater than the value per pound of either item 2 (5
dollars per pound) or item 3 (4 dollars per pound). The greedy strategy, therefore,
would take item 1 first. As you can see from the case analysis in Figure 16.2(b),
however, the optimal solution takes items 2 and 3, leaving item 1 behind. The two
possible solutions that take item 1 are both suboptimal.
For the comparable fractional problem, however, the greedy strategy, which
takes item 1 first, does yield an optimal solution, as shown in Figure 16.2(c). Taking item 1 doesn't work in the 0-1 problem because the thief is unable to fill his
knapsack to capacity, and the empty space lowers the effective value per pound of
his load. In the 0-1 problem, when we consider whether to include an item in the
knapsack, we must compare the solution to the subproblem that includes the item
with the solution to the subproblem that excludes the item before we can make the
choice. The problem formulated in this way gives rise to many overlapping subproblems, a hallmark of dynamic programming, and indeed, as Exercise 16.2-2
asks you to show, we can use dynamic programming to solve the 0-1 problem.

[Figure 16.2 (diagram): panel (a) shows the three items and the 50-pound knapsack; panel (b) shows the candidate 0-1 loads, worth $160, $180, and $220; panel (c) shows the fractional load, worth $240.]

Figure 16.2 An example showing that the greedy strategy does not work for the 0-1 knapsack
problem. (a) The thief must select a subset of the three items shown whose weight must not exceed
50 pounds. (b) The optimal subset includes items 2 and 3. Any solution with item 1 is suboptimal,
even though item 1 has the greatest value per pound. (c) For the fractional knapsack problem, taking
the items in order of greatest value per pound yields an optimal solution.
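To see the contrast in code, here is a minimal sketch of one standard $O(nW)$ tabulation for the 0-1 problem (our own illustration; Exercise 16.2-2 asks you to develop such a solution yourself).

    def knapsack_01(values, weights, W):
        """Dynamic programming for the 0-1 knapsack problem.
        best[w] holds the most valuable load of weight at most w using
        the items considered so far; O(n W) time, O(W) space.
        """
        best = [0] * (W + 1)
        for v, wt in zip(values, weights):
            # Iterate weights downward so each item is used at most once.
            for w in range(W, wt - 1, -1):
                best[w] = max(best[w], best[w - wt] + v)
        return best[W]

    # The instance of Figure 16.2: greedy reaches only $160 or $180,
    # but the true optimum takes items 2 and 3 for $220.
    print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220

Unlike the single-pass fractional greedy, the table considers every include-or-exclude choice for every residual capacity, which is exactly where the overlapping subproblems arise.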
Exercises
16.2-1
Prove that the fractional knapsack problem has the greedy-choice property.
16.2-2
Give a dynamic-programming solution to the 0-1 knapsack problem that runs in
$O(nW)$ time, where $n$ is the number of items and $W$ is the maximum weight of
items that the thief can put in his knapsack.
16.2-3
Suppose that in a 0-1 knapsack problem, the order of the items when sorted by
increasing weight is the same as their order when sorted by decreasing value. Give
an efficient algorithm to find an optimal solution to this variant of the knapsack
problem, and argue that your algorithm is correct.
16.2-4
Professor Gekko has always dreamed of inline skating across North Dakota. He
plans to cross the state on highway U.S. 2, which runs from Grand Forks, on the
eastern border with Minnesota, to Williston, near the western border with Montana.
The professor can carry two liters of water, and he can skate m miles before running
out of water. (Because North Dakota is relatively flat, the professor does not have
to worry about drinking water at a greater rate on uphill sections than on flat or
downhill sections.) The professor will start in Grand Forks with two full liters of
water. His official North Dakota state map shows all the places along U.S. 2 at
which he can refill his water and the distances between these locations.
The professor's goal is to minimize the number of water stops along his route
across the state. Give an efficient method by which he can determine which water
stops he should make. Prove that your strategy yields an optimal solution, and give
its running time.
16.2-5
Describe an efficient algorithm that, given a set $\{x_1, x_2, \ldots, x_n\}$ of points on the
real line, determines the smallest set of unit-length closed intervals that contains
all of the given points. Argue that your algorithm is correct.
16.2-6 ⋆
Show how to solve the fractional knapsack problem in $O(n)$ time.
16.2-7
Suppose you are given two sets A and B, each containing n positive integers. You
can choose to reorder each set however you like. After reordering, let $a_i$ be the $i$th
element of set $A$, and let $b_i$ be the $i$th element of set $B$. You then receive a payoff
of $\prod_{i=1}^{n} a_i^{b_i}$. Give an algorithm that will maximize your payoff. Prove that your
algorithm maximizes the payoff, and state its running time.