
SIAM J. COMPUT.
Vol. 34, No. 6, pp. 1398–1431
© 2005 Society for Industrial and Applied Mathematics

A SHORTEST PATH ALGORITHM FOR REAL-WEIGHTED UNDIRECTED GRAPHS∗

SETH PETTIE† AND VIJAYA RAMACHANDRAN‡

Abstract. We present a new scheme for computing shortest paths on real-weighted undirected
graphs in the fundamental comparison-addition model. In an efficient preprocessing phase our al-
gorithm creates a linear-size structure that facilitates single-source shortest path computations in
O(m log α) time, where α = α(m, n) is the very slowly growing inverse-Ackermann function, m the
number of edges, and n the number of vertices. As special cases our algorithm implies new bounds
on both the all-pairs and single-source shortest paths problems. We solve the all-pairs problem
in O(mn log α(m, n)) time and, if the ratio between the maximum and minimum edge lengths is
bounded by n^((log n)^O(1)), we can solve the single-source problem in O(m + n log log n) time. Both
these results are theoretical improvements over Dijkstra’s algorithm, which was the previous best
for real weighted undirected graphs. Our algorithm takes the hierarchy-based approach invented by
Thorup.

Key words. single-source shortest paths, all-pairs shortest paths, undirected graphs, Dijkstra’s
algorithm

AMS subject classifications. 05C12, 05C85, 68R10

DOI. 10.1137/S0097539702419650

1. Introduction. The problem of computing shortest paths is indisputably one


of the most well-studied problems in computer science. It is thoroughly surprising
that in the setting of real-weighted graphs, many basic shortest path problems have
seen little or no progress since the early work by Dijkstra, Bellman and Ford, Floyd
and Warshall, and others [CLRS01]. For instance, no algorithm for computing single-source
shortest paths (SSSPs) in arbitrarily weighted graphs has yet improved on the
Bellman–Ford O(mn) time bound, where m and n are the number of edges and vertices,
respectively. The fastest uniform all-pairs shortest path (APSP) algorithm for
dense graphs [Z04, F76] requires time O(n³ √(log log n / log n)), which is just a slight
improvement over the O(n³) bound of the Floyd–Warshall algorithm. Similarly, Dijkstra's
O(m + n log n) time algorithm [Dij59, FT87] remains the best for computing
SSSPs on nonnegatively weighted graphs, and until the recent algorithms of Pettie
[Pet04, Pet02b, Pet03], Dijkstra's algorithm was also the best for computing APSPs
on sparse graphs [Dij59, J77, FT87].
In order to improve these bounds most shortest path algorithms depend on a re-
stricted type of input. There are algorithms for geometric inputs (see Mitchell’s survey
[Mit00]), planar graphs [F91, HKRS97, FR01], and graphs with randomly chosen edge
weights [Spi73, FG85, MT87, KKP93, KS98, M01, G01, Hag04]. In recent years there

∗ Received by the editors December 13, 2002; accepted for publication (in revised form) October 15, 2004; published electronically July 26, 2005. This work was supported by Texas Advanced Research Program grant 003658-0029-1999 and NSF grant CCR-9988160. A preliminary version of this paper, titled Computing shortest paths with comparisons and additions, was presented at the 13th Annual ACM-SIAM Symposium on Discrete Algorithms, 2002, San Francisco, CA.
http://www.siam.org/journals/sicomp/34-6/41965.html
† Max Planck Institut für Informatik, Stuhlsatzenhausweg 85, 66123 Saarbrücken, Germany ([email protected]). This author's work was also supported by an Alexander von Humboldt Postdoctoral Fellowship and by an MCD Graduate Fellowship.
‡ Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712 ([email protected]).

has also been a focus on computing approximate shortest paths—see Zwick’s recent
survey [Z01]. One common assumption is that the graph is integer-weighted, though
structurally unrestricted, and that the machine model is able to manipulate the in-
teger representation of weights. Shortest path algorithms based on scaling [G85b,
GT89, G95] and fast matrix multiplication [Sei95, GM97, AGM97, Tak98, SZ99, Z02]
have running times that depend on the magnitude of the integer edge weights, and
therefore yield improved algorithms only for sufficiently small edge weights. In the
case of the matrix multiplication–based algorithms the critical threshold is rather
low: even edge weights sublinear in n can be too large. Dijkstra’s algorithm can be
sped up in the integer-weight model by using an integer priority queue.1 The best
bounds on Dijkstra's algorithm to date are O(m √(log log n)) (expected) [HT02] and
O(m + n log log n) [Tho03]. Both of these algorithms use multiplication, a non-AC⁰
operation; see [Tho03] for bounds in the AC⁰ model. Thorup [Tho99] considered
the restricted case of integer-weighted undirected graphs and showed that on an AC⁰
random access machine (RAM), shortest paths could be computed in linear time.
Thorup invented what we call in this paper the hierarchy-based approach to shortest
paths.
The techniques developed for integer-weighted graphs (scaling, matrix multipli-
cation, integer sorting, and Thorup’s hierarchy-based approach) seem to depend cru-
cially on the graph being integer-weighted. This state of affairs is not unique to the
shortest path problem. In the weighted matching [G85b, GT89, GT91] and maxi-
mum flow problems [GR98], for instance, the best algorithms for real- and integer-
weighted graphs have running times differing by a polynomial factor. For the shortest
path problem on positively weighted graphs the integer/real gap is only logarith-
mic. It is of great interest whether an integer-based approach is inherently so, or
whether it can yield a faster algorithm for general, real-weighted inputs. In this
paper we generalize Thorup’s hierarchy-based approach to the comparison-addition
model (see section 2.1) and, as a corollary, to real-weighted input graphs. For the
undirected APSP problem we nearly eliminate the existing integer/real gap, reducing
it from log n to log α(m, n), where α is the incomprehensibly slowly growing inverse-
Ackermann function. Before stating our results in detail, we first give an overview
of the hierarchy-based approach and discuss the recent hierarchy-based shortest path
algorithms [Tho99, Hag00, Pet04, Pet02b].
Hierarchy-based algorithms should be thought of as preprocessing schemes for
answering SSSP queries in nonnegatively weighted graphs. The idea is to compute
a small non–source-specific structure that encodes useful information about all the
shortest paths in the graph. We measure the running time of a hierarchy-based algo-
rithm with two quantities: P, the worst case preprocessing cost on the given graph,
and M, the marginal cost of one SSSP computation after preprocessing. Therefore,
solving the s-sources shortest path problem requires s · M + P time. If s = n, that is,
if we are solving APSP, then for all known hierarchy algorithms the P term becomes
negligible. However, P may be dominant (in either the asymptotic or real-world sense)
for smaller values of s. In Thorup’s original algorithm [Tho99], P and M are both
O(m); recall that his algorithm works on integer-weighted undirected graphs. Hagerup
[Hag00] adapted Thorup’s algorithm to integer-weighted directed graphs, incurring a
slight loss of efficiency in the process. In [Hag00], P = O(min{m log log C, m log n}),2
where C is the maximum edge weight and M = O(m + n log log n). After the initial
publication of our results [PR02a], Pettie [Pet04, Pet02b] gave an adaptation of
the hierarchy-based approach to real-weighted directed graphs. The main result of
[Pet04] is an APSP algorithm running in time O(mn + n² log log n), which improved
upon the O(mn + n² log n) bound derived from multiple runs of Dijkstra's algorithm
[Dij59, J77, FT87]. The result of [Pet04] is stated in terms of the APSP problem
because its preprocessing cost P is O(mn), making it efficient only if s is very close to
n. In [Pet02b] (see also [Pet03]) the nonuniform complexity of APSP is considered;
the main result of [Pet02b] is an algorithm performing O(mn log α(m, n)) comparison
and addition operations. This bound is essentially optimal when m = O(n) due to
the trivial Ω(n²) lower bound on APSP.

1 It can also be sped up using an integer sorting algorithm in conjunction with Thorup's reduction [Tho00] from priority queues to sorting.
2 Hagerup actually proved P = O(min{m log log C, mn}); see [Pet04] for the O(m log n) bound.
In this paper we give new bounds on computing undirected shortest paths in
real-weighted graphs. For our algorithm, the preprocessing cost P is O(mst(m, n) +
min{n log n, n log log r}), where mst(m, n) is the complexity of the minimum span-
ning tree problem and r is the ratio of the maximum-to-minimum edge weight. This
bound on P is never worse than O(m + n log n), though if r is not excessively large,
say less than n^((log n)^O(1)), P is O(m + n log log n). We show that the marginal cost M
of our algorithm is asymptotically equivalent to split-findmin(m, n), which is the
decision-tree complexity of a certain data structuring problem of the same name. It
was known that split-findmin(m, n) = O(mα(m, n)) [G85a]; we improve this bound
to O(m log α(m, n)). Therefore, the marginal cost of our algorithm is essentially (but
perhaps not precisely) linear. Theorem 1.1 gives our general result, and Corollaries
1.2 and 1.3 relate it to the canonical APSP and SSSP problems, respectively.
Theorem 1.1. Let P = mst(m, n) + min{n log n, n log log r}, where m and n are
the number of edges and vertices in a given undirected graph, r bounds the ratio of
any two edge lengths, and mst(m, n) is the cost of computing the graph’s minimum
spanning tree. In O(P) time an O(n)-space structure can be built that allows the com-
putation of SSSPs in O(split-findmin(m, n)) time, where split-findmin(m, n) =
O(m log α(m, n)) represents the decision-tree complexity of the split-findmin problem
and α is the inverse-Ackermann function.
Corollary 1.2. The undirected APSP problem can be solved on a real-weighted
graph in O(n · split-findmin(m, n)) = O(mn log α(m, n)) time.
Corollary 1.3. The undirected SSSP problem can be solved on a real-weighted
graph in O(split-findmin(m, n)+mst(m, n)+min{n log n, n log log r}) = O(mα(m, n)+
min{n log n, n log log r}) time.
The running time of our SSSP algorithm (Corollary 1.3) is rather unusual. It con-
sists of three terms, where the first two are unknown (but bounded by O(mα(m, n)))
and the third depends on a nonstandard parameter: the maximum ratio of any two
edge lengths.3 A natural question is whether our SSSP algorithm can be substantially
improved. In section 6 we formally define the class of “hierarchy-based” SSSP algo-
rithms and show that any comparison-based undirected SSSP algorithm in this class
must take time Ω(m+min{n log n, n log log r}). This implies that our SSSP algorithm
is optimal for this class, up to an inverse-Ackermann factor, and that no hierarchy-
based SSSP algorithm can improve on Dijkstra’s algorithm, for r unbounded.
Pettie, Ramachandran, and Sridhar [PRS02] implemented a simplified version of
our algorithm. The observed marginal cost of the [PRS02] implementation is nearly
linear, which is in line with our asymptotic analysis. Although it is a little slower
than Dijkstra's algorithm in solving SSSP, it is faster in solving the s-sources shortest
path problem, in some cases for s as small as 3. In many practical situations it is
the s-sources problem, not SSSP, that needs to be solved. For instance, if the graph
represents a physical network, such as a network of roads or computers, it is unlikely
to change very often. Therefore, in these situations a nearly linear preprocessing cost
is a small price to pay for more efficient shortest path computations.

3 Dinic's implementation [Din78, Din03] of Dijkstra's algorithm also depends on r, in both time and space consumption.
1.1. An overview. In section 2 we define the SSSP and APSP problems and
review the comparison-addition model and Dijkstra’s algorithm [Dij59]. In section
3 we generalize the hierarchy approach to real-weighted graphs and give a simple
proof of its correctness. In section 4 we propose two implementations of the general
hierarchy-based algorithm, one for proving the asymptotic bounds of Theorem 1.1
and one that is simpler and uses more standard data structures. The running times
of our implementations depend heavily on having a well-balanced hierarchy. In section
5 we give an efficient method for constructing balanced hierarchies; it is based on a
hierarchical clustering of the graph’s minimum spanning tree. In section 6 we prove a
lower bound on the class of hierarchy-based undirected SSSP algorithms. In section
7 we discuss avenues for further research.
2. Preliminaries. The input is a weighted, undirected graph G = (V, E, ℓ),
where V = V (G) and E = E(G) are the sets of n vertices and m edges, respectively,
and ℓ : E → R assigns a real length to each edge. The distance from vertex u to
vertex v, denoted d(u, v), is the length of the minimum length path from u to v, or
∞ if there is no path from u to v, or −∞ if there is no path of minimum
length. The APSP problem is to compute d(u, v) for all (u, v) ∈ V × V , and the SSSP
problem is to compute d(u, v) for some specified source u and all v ∈ V .
If, in an undirected graph, some connected component contains an edge of negative
length, say e, then the distance between two vertices u and v in that component is −∞:
one can always construct a path of arbitrarily small length by concatenating a path
from u to e, followed by the repetition of e a sufficient number of times, followed by a
path from e to v. Without loss of generality we will assume that ℓ : E → R⁺ assigns
only positive edge lengths. A slightly restricted problem (which forbids the types of
paths described above) is the shortest simple path problem. This problem is NP-hard
as it generalizes the Hamiltonian path problem. However, Edmonds showed that when
there is no negative weight simple cycle, the problem is solvable in polynomial time
by a reduction to weighted matching—see [AMO93, p. 496] and [G85a].
2.1. The comparison-addition model. We use the term comparison-addition
model to mean any uniform model in which real numbers are subject to only compar-
ison and addition operations. The term comparison-addition complexity refers to the
number of comparison and addition operations, ignoring other computational costs.
In the comparison-addition model we leave unspecified the machine model used for
all data structuring tasks. Our results as stated hold when that machine model is a
RAM. If instead we assume a pointer machine [Tar79], our algorithms slow down by
at most an inverse-Ackermann factor.4
The comparison-addition model has some aesthetic appeal because it is the sim-
plest model appropriate to computing shortest paths and many other network optimization
problems. A common belief is that simplicity is necessarily gained at the
price of practicality; however, this is not true. In the setting of an algorithms library,
such as LEDA [MN00], it is important—and practical—that data types be fully
separated from algorithms and that the interface between the two be as generic as
possible. There is always room for fast algorithms specialized to integers or floats.
However, even under these assumptions, the gains in speed can be surprisingly minor;
see [PRS02] for one example.

4 The only structure we use whose complexity changes between the RAM and pointer machine models is the split-findmin structure. On a pointer machine there are matching upper and lower bounds of Θ(mα) [G85a, LaP96], whereas on the RAM the complexity is somewhere between Ω(m) and O(m log α)—see Appendix B.
2.1.1. Techniques. In our algorithm we sometimes use subtraction on real num-
bers, an operation that is not directly available in the comparison-addition model.
Lemma 2.1, given below, shows that simulating subtraction incurs at most a constant
factor loss in efficiency.
Lemma 2.1. C comparisons and A additions and subtractions can be simulated
in the comparison-addition model with C comparisons and 2(A + C) additions.
Proof. We represent each real x_i = a_i − b_i as two reals a_i, b_i. An addition
x_i + x_j = (a_i + a_j) − (b_i + b_j) or a subtraction x_i − x_j = (a_i + b_j) − (b_i + a_j)
can be simulated with two actual additions. A comparison x_i : x_j is equivalent to
the comparison a_i + b_j : a_j + b_i, which involves two actual additions and a comparison.
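
To make the simulation concrete, here is a minimal Python sketch of the pair representation from Lemma 2.1 (our own illustration; the class name PairReal is not from the paper):

    # A sketch of Lemma 2.1: each real x is stored as a pair (a, b) standing
    # for x = a - b, so subtraction reduces to additions, and a comparison
    # costs two additions plus one comparison, matching the lemma's counts.
    class PairReal:
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b              # represents the value a - b

        def __add__(self, other):              # (a1+a2) - (b1+b2): two additions
            return PairReal(self.a + other.a, self.b + other.b)

        def __sub__(self, other):              # (a1+b2) - (b1+a2): two additions
            return PairReal(self.a + other.b, self.b + other.a)

        def __lt__(self, other):               # x1 < x2 iff a1 + b2 < a2 + b1
            return self.a + other.b < other.a + self.b

    x = PairReal(5.0) - PairReal(2.0)          # the value 3, stored as (5.0, 2.0)
    assert PairReal(2.5) < x                   # compares 2.5 + 2.0 against 5.0 + 0.0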
At a key point in our algorithm we need to approximate the ratio of two numbers.
Division is clearly not available for real numbers in the comparison-addition model,
and with a little thought one can see that it cannot be simulated exactly. Lemma 2.2,
given below, bounds the time to find certain approximate ratios in the comparison-
addition model, which will be sufficient for our purposes.
Lemma 2.2. Let p_1, . . . , p_k be real numbers, where p_1 and p_k are the smallest and
largest, respectively. We can find the set of integers {q_i} such that 2^{q_i} ≤ p_i/p_1 < 2^{q_i+1}
in Θ(log(p_k/p_1) + k log log(p_k/p_1)) time.
Proof. We generate the set L = {p_1, 2·p_1, 4·p_1, . . . , 2^{⌈log(p_k/p_1)⌉}·p_1} with ⌈log(p_k/p_1)⌉
additions; then for each p_i we find q_i in log |L| = O(log log(p_k/p_1)) time with a binary
search over L.
In our algorithm the {p_i} correspond to certain edge lengths, and k = Θ(n). Our
need to approximate ratios, as in Lemma 2.2, is the source of the peculiar n log log r
term in the running time of Theorem 1.1. We note here that the bound stated in
Lemma 2.2 is pessimistic in the following sense. If we randomly select the {p_i} from a
uniform distribution (or other natural distribution), then the time to find approximate
ratios can be reduced to O(k) (with high probability) using a linear search rather than
a binary search.
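
The procedure behind Lemma 2.2 is short enough to state as code. The following Python sketch is our own illustration (the function name approximate_ratios is ours):

    import bisect

    # Lemma 2.2 as code: given reals with p[0] smallest and p[-1] largest, find
    # q_i with 2^{q_i} <= p_i/p_1 < 2^{q_i+1} using only additions/comparisons.
    def approximate_ratios(p):
        # Build L = [p1, 2*p1, 4*p1, ...] by repeated doubling (x + x is a
        # single addition), roughly log(p_k/p_1) additions in all.
        L = [p[0]]
        while L[-1] <= p[-1]:
            L.append(L[-1] + L[-1])
        # Each p_i is then bracketed by binary search over L, costing
        # O(log log (p_k/p_1)) comparisons per element.
        return [bisect.bisect_right(L, x) - 1 for x in p]

    print(approximate_ratios([1.0, 3.0, 5.0, 9.0]))   # [0, 1, 2, 3]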
2.1.2. Lower bounds. There are many lower bounds for shortest path problems
in the comparison-addition model, though none are truly startling. Spira and Pan
[SP75] showed that even if additions are free, Ω(n²) comparisons are necessary to
solve SSSP on the complete graph. Karger, Koller, and Phillips [KKP93] proved that
directed APSP requires Ω(mn) comparisons if each summation corresponds to a path
in the graph.5 Kerr [K70] showed that any oblivious APSP algorithm performs Ω(n³)
comparisons, and Kolliopoulos and Stein [KS98] proved that any fixed sequence of
edge relaxations solving SSSP must have length Ω(mn). By "fixed sequence" they
mean one that depends only on m and n but not on the graph structure. Ahuja
et al. [AMOT90] observed that any implementation of Dijkstra’s algorithm requires
Ω(m + n log n) comparison and addition operations. Pettie [Pet04] gave an
Ω(m + min{n log r, n log n}) lower bound on computing directed SSSP with a "hierarchy-type"
algorithm, where r bounds the ratio of any two edge lengths. In section 6 we
prove a lower bound of Ω(m + min{n log log r, n log n}) on hierarchy-type algorithms
for undirected SSSP. These last two lower bounds are essentially tight for hierarchy-type
algorithms, on directed and undirected graphs, respectively.

5 However, it is not true that all shortest path algorithms satisfy this condition. For example, our algorithm does not, and neither do [F76, Tak92, Han04, Z04, Pet04, Pet02b].
Graham, Yao, and Yao [GYY80] proved that the information-theoretic argument
cannot prove a nontrivial ω(n²) lower bound on the comparison-complexity of APSP,
where additions are granted for free. It is also simple to see that there can be no
nontrivial information-theoretic lower bound on SSSP.
2.2. Dijkstra’s algorithm. Our algorithm, like [Tho99, Hag00], is best un-
derstood as circumventing the limitations of Dijkstra’s algorithm. We give a brief
description of Dijkstra’s algorithm in order to illustrate its complexity and introduce
some vocabulary.
For a vertex set S ⊆ V (G), let d_S(u, v) denote the distance from u to v in the
subgraph induced by S ∪ {v}. Dijkstra's algorithm maintains a tentative distance
function D(v) and a set of visited vertices S satisfying Invariant 2.1. Henceforth, s
denotes the source vertex.
Invariant 2.1. Let s be the source vertex and v be an arbitrary vertex:

    D(v) = d(s, v)    if v ∈ S,
    D(v) = d_S(s, v)  if v ∉ S.
Choosing an initial assignment of S = ∅, D(s) = 0, and D(v) = ∞ for v ≠ s
clearly satisfies the invariant. Dijkstra's algorithm consists of repeating the following
step n times: choose a vertex v ∈ V (G)\S such that D(v) is minimized, set S :=
S ∪ {v}, and finally, update tentative distances to restore Invariant 2.1. This last
part involves relaxing each edge (v, w) by setting D(w) := min{D(w), D(v) + ℓ(v, w)}.
Invariant 2.1 and the positive-weight assumption imply D(v) = d(s, v) when v is
selected. It is also simple to prove that relaxing the outgoing edges of v restores Invariant
2.1.
The problem with Dijkstra’s algorithm is that vertices are selected in increasing
distance from the source, a task that is at least as hard as sorting n numbers. Main-
taining Invariant 2.1, however, does not demand such a particular ordering. In fact,
it can be seen that selecting any vertex v ∈ S for which D(v) = d(s, v) will maintain
Invariant 2.1. All hierarchy-type algorithms [Tho99, Hag00, Pet04, Pet02b] maintain
Invariant 2.1 by generating a weaker certificate for D(v) = d(s, v) than “D(v) is min-
imal.” Any such certificate must show that for all u ∈ S, D(u) + d(u, v) ≥ D(v). For
example, Dijkstra’s algorithm presumes there are no negative length edges, hence
d(u, v) ≥ 0, and by choice of v ensures D(u) ≥ D(v). This is clearly a suffi-
cient certificate. In Dinic’s version [Din78] of Dijkstra’s algorithm the lower bound
d(u, v) ≥ min is used, where min is the minimum edge length. Thus Dinic is free
to visit any v ∈ S for which D(v)/min is minimal. All hierarchy-type algorithms
[Tho99, Hag00, Pet04, Pet02b], ours included, precompute a much stronger lower
bound on d(u, v) than d(u, v) ≥ min .
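
For reference, here is a standard textbook rendering of Dijkstra's algorithm in Python (our own sketch, using a binary heap with lazy deletion rather than the Fibonacci heap [FT87] behind the O(m + n log n) bound):

    import heapq

    # graph[u] is a list of (v, length) pairs with positive lengths; returns
    # the distance function D with D[v] = d(s, v), maintaining Invariant 2.1.
    def dijkstra(graph, s):
        D = {v: float('inf') for v in graph}
        D[s] = 0.0
        S = set()                              # visited vertices
        heap = [(0.0, s)]
        while heap:
            dv, v = heapq.heappop(heap)
            if v in S:
                continue                       # stale heap entry
            S.add(v)                           # D(v) = d(s, v) is now final
            for w, length in graph[v]:         # relax all edges incident on v
                if dv + length < D[w]:
                    D[w] = dv + length
                    heapq.heappush(heap, (D[w], w))
        return D

    g = {'s': [('a', 2.0), ('b', 5.0)],
         'a': [('s', 2.0), ('b', 1.0)],
         'b': [('a', 1.0), ('s', 5.0)]}
    print(dijkstra(g, 's'))                    # {'s': 0.0, 'a': 2.0, 'b': 3.0}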
3. The hierarchy approach and its correctness. In this section we gener-
alize the hierarchy-based approach of [Tho99] to real-weighted graphs. Because the
algorithm follows directly from its proof of correctness, we will actually give a kind of
correctness proof first.
Below, X ⊆ V (G) denotes any set of vertices, and s always denotes the source
vertex. Let I be a real interval. The notation X^I refers to the subset of X whose
distance from the source lies in the interval I, i.e.,

    X^I = { v ∈ X : d(s, v) ∈ I }.
Definition 3.1. A vertex set X is (S, [a, b))-safe if (i) X^[0,a) ⊆ S,
(ii) for v ∈ X^[a,b), d_{S∪X}(s, v) = d(s, v).
In other words, if a subgraph is (S, I)-safe, we can determine the distances that
lie in interval I without looking at parts of the graph outside the subgraph and S.
Clearly, finding safe subgraphs has the potential to let us compute distances cheaply.
Definition 3.2. A set {X_i}_i is a t-partition of X if the {X_i}_i partition X and
for every edge (u, v) with u ∈ X_i, v ∈ X_j, and i ≠ j, we have ℓ(u, v) ≥ t.
Note that a t-partition need not be maximal; that is, if {X_1, X_2, . . . , X_k} is a
t-partition, then {X_1 ∪ X_2, X_3, . . . , X_k} is as well.
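
One simple way to obtain a t-partition, assuming the graph is given as an edge list, is to take the connected components of the subgraph formed by the edges of length less than t: every edge crossing two components then has length at least t by construction. The Python sketch below is our own illustration, not the paper's construction:

    # Components of the subgraph of edges shorter than t, via a small
    # union-find; the returned classes are a t-partition of the vertex set.
    def t_partition(vertices, edges, t):
        parent = {v: v for v in vertices}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path halving
                v = parent[v]
            return v
        for u, v, length in edges:
            if length < t:
                parent[find(u)] = find(v)
        classes = {}
        for v in vertices:
            classes.setdefault(find(v), []).append(v)
        return list(classes.values())

    edges = [('a', 'b', 1.0), ('b', 'c', 4.0), ('c', 'd', 0.5)]
    print(t_partition('abcd', edges, 2.0))      # [['a', 'b'], ['c', 'd']]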
Lemma 3.3. Suppose that X is (S, [a, b))-safe. Let {X_i}_i be a t-partition of X
and let S′ be such that S ∪ X^[a,min{a+t,b}) ⊆ S′. Then
(i) for each X_i in the t-partition, X_i is (S, [a, min{a + t, b}))-safe;
(ii) X is (S′, [min{a + t, b}, b))-safe.
Proof. We prove part (i) first. Let v ∈ X_i^[a,min{a+t,b}) and suppose that the
lemma is false, that d(s, v) ≠ d_{S∪X_i}(s, v). From the assumed safeness of X we know
that d(s, v) = d_{S∪X}(s, v). This means that the shortest path to v must pass through
X\(X_i ∪ S). Let w be the last vertex in X\(X_i ∪ S) on the shortest s–v path. By
Definition 3.2, the edge from w to X_i has length ≥ t. Since d(s, v) < min{a + t, b},
d(s, w) < min{a + t, b} − t ≤ a. Since, by Definition 3.1(i), X^[0,a) ⊆ S, it must be that
w ∈ S, contradicting our selection of w from X\(X_i ∪ S). Part (ii) claims that X
is (S′, [min{a + t, b}, b))-safe. Consider first Definition 3.1(i) regarding safeness. By
the assumption that X is (S, [a, b))-safe we have X^[0,a) ⊆ S, and by definition of S′
we have S ∪ X^[a,min{a+t,b}) ⊆ S′; therefore X^[0,min{a+t,b}) ⊆ S′, satisfying Definition
3.1(i). By the assumption that X is (S, [a, b))-safe we have that for v ∈ X^[a,b),
d_{S∪X}(s, v) = d(s, v); this implies the weaker statement that for v ∈ X^[min{a+t,b},b),
d_{S′∪X}(s, v) = d_{S∪X}(s, v) = d(s, v).
As Thorup noted [Tho99], Lemma 3.3 alone leads to a simple recursive procedure
for computing SSSP; however, it makes no guarantee as to efficiency. The input to
the procedure is an (S, I)-safe subgraph X; its only task is to compute the set X^I,
which it performs with recursive calls (corresponding to Lemma 3.3(i) and (ii)) or
directly if X consists of a single vertex. There are essentially three major obstacles
to making this general algorithm efficient: bounding the number of recursive calls,
bounding the time to decide what those recursive calls are, and computing good t-
partitions. Thorup gave a simple way to choose the t-partitions in integer-weighted
graphs so that the number of recursive calls is O(n). However, if adapted directly
to the comparison-addition model, the time to decide which calls to make becomes
Ω(n log n); it amounts to the problem of implementing a general priority queue. We
reduce the overhead for deciding which recursive calls to make to linear by using a
“well balanced” hierarchy and a specialized priority queue for exploiting this kind of
balance. Our techniques rely heavily on the graph being undirected and do not seem
to generalize to directed graphs in any way.
As in other hierarchy-type algorithms, we generalize the distance and tentative
distance notation from Dijkstra's algorithm to include not just single vertices but sets
of vertices. If X is a set of vertices (or associated with a set of vertices), then

(1)    D(X) := min_{v∈X} D(v)   and   d(u, X) := min_{v∈X} d(u, v).

The procedure Generalized-Visit, given below, takes a vertex set X that is
(S, I)-safe and computes the distances to all vertices in X^I, placing these vertices in
the set S as their distances become known. We maintain Invariant 2.1 at all times.
By Definition 3.1 we can compute the set X^I without looking at parts of the graph
outside of S ∪ X. If X = {v} happens to contain a single vertex, we can compute X^I
directly: if D(v) ∈ I, then X^I = {v}; otherwise it is ∅. For the general case, Lemma
3.3 says that we can compute X^I by first finding a t-partition χ of X, then computing
X^I in phases. Let I = I_1 ∪ I_2 ∪ · · · ∪ I_k, where each subinterval is disjoint from the
others and has width t, except perhaps I_k, which may be a leftover interval of width
less than t. Let S_i = S ∪ X^{I_1} ∪ · · · ∪ X^{I_i} and let S_0 = S. By the assumption that
X is (S, I)-safe and Lemma 3.3, each set in χ is (S_i, I_{i+1})-safe. Therefore, we can
compute S_1, S_2, . . . , S_k = S ∪ X^I with a series of recursive calls as follows. Assume
that the current set of visited vertices is S_i. We determine X^{I_{i+1}} = ∪_{Y∈χ} Y^{I_{i+1}} with
recursive calls of the form Generalized-Visit(Y, I_{i+1}), for every Y ∈ χ such that
Y^{I_{i+1}} ≠ ∅.
To start things off, we initialize the set S to be empty, set the D-values (tentative
distances) according to Invariant 2.1, and call Generalized-Visit(V (G), [0, ∞)).
By the definition of safeness, V (G) is clearly (∅, [0, ∞))-safe. If Generalized-Visit
works according to specification, then when it completes, S = V (G) and Invariant 2.1 is
satisfied, implying that D(v) = d(s, v) for all vertices v ∈ V (G).
Generalized-Visit(X, [a, b)): A generalized hierarchy-type algorithm for real-weighted graphs.
Input guarantee: X is (S, [a, b))-safe and Invariant 2.1 is satisfied.
Output guarantee: Invariant 2.1 is satisfied and S_post = S_pre ∪ X^[a,b),
where S_pre and S_post are the set S before and after the call.
1. If X contains one vertex, X = {v}, and D(v) ∈ [a, b), then D(v) =
   d_S(s, v) = d(s, v), where the first equality is by Invariant 2.1 and the
   second by the assumption that X is (S, [a, b))-safe. Let S := S ∪ {v}.
   Relax all edges incident on v, restoring Invariant 2.1, and return.
2. Let a′ := a
   While a′ < b and X ⊄ S
      Let t > 0 be any positive real
      Let χ = {X_1, X_2, . . . , X_k} be an arbitrary t-partition of X
      Let χ′ = {X_i ∈ χ : D(X_i) < min{a′ + t, b} and X_i ⊄ S}
      For each X_i ∈ χ′, Generalized-Visit(X_i, [a′, min{a′ + t, b}))
      a′ := min{a′ + t, b}
Lemma 3.4. If the input guarantees of Generalized-Visit are met, then after a
call to Generalized-Visit(X, I), Invariant 2.1 remains satisfied and X^I is a subset
of the visited vertices S.
Proof (sketch). The base case, when X is a single vertex, is simple to handle.
Turning to the general case, we prove that each time the while statement is examined
in step 2, X is (S, [a′, b))-safe for the current value of S and a′; in what follows we
will treat S as a variable, not a specific vertex set. The first time through the while-loop
in step 2, it follows from the input guarantee to Generalized-Visit that X is
(S, [a′, b))-safe. Similarly, the input guarantee for all recursive calls holds by Lemma
3.3. However, to show that X is (S, [a′, b))-safe at the assignment a′ := min{a′ + t, b},
by Definition 3.1 we must show X^[0,min{a′+t,b}) ⊆ S. We assume inductively that the
output guarantee of any recursive call to Generalized-Visit is fulfilled; that is,
upon the completion of Generalized-Visit(X_i, [a′, min{a′ + t, b})), S includes the
set X_i^[a′,min{a′+t,b}). Each time through the while-loop in step 2 Generalized-Visit
makes recursive calls to all Y ∈ χ′. To complete the proof we must show that for
Y ∈ χ\χ′, Y^[a′,min{a′+t,b})\S = ∅. If Y ∈ χ\χ′, it was because D(Y ) ≥ min{a′ + t, b}
or because Y ⊆ S, both of which clearly imply Y^[a′,min{a′+t,b})\S = ∅. The output
guarantee for Generalized-Visit is clearly satisfied if step 1 is executed; if step 2
is executed, then when the while-loop finishes, X is either (S, [b, b))-safe or X ⊆ S,
both implying X^[0,b) ⊆ S.
Generalized-Visit can be simplified in a few minor ways. It can be seen that
in step 1 we do not need to check whether D(v) ∈ [a, b); the recursive call would not
have taken place were this not the case. In step 2 the final line can be shortened to
a′ := a′ + t. However, we cannot change all occurrences of min{a′ + t, b} to a′ + t
because the minimum is crucial to the procedure's correctness. It is not assumed (nor can it
be guaranteed) that t divides (b − a), so the procedure must be prepared to deal
with fractional intervals of width less than t. In section 4 we show that for a proper
hierarchy this fractional interval problem does not arise.
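
As an illustration of how Lemma 3.3 drives the procedure, the following runnable Python sketch (ours, not the paper's implementation) instantiates Generalized-Visit with two concrete choices the pseudocode leaves open: t is taken to be the largest edge length in a minimum spanning tree of X, so that the t-partition into components of edges shorter than t always has at least two classes, and D-values live in a plain dictionary rather than a split-findmin structure. It exhibits the control flow only, not the efficiency guarantees of sections 4 and 5:

    import math

    def sssp(vertices, edges, s):
        adj = {v: [] for v in vertices}
        for u, v, l in edges:
            adj[u].append((v, l))
            adj[v].append((u, l))
        D = {v: math.inf for v in vertices}
        D[s] = 0.0
        S = set()                                # visited vertices

        def components(X, t):                    # a t-partition of X (Definition 3.2)
            parent = {v: v for v in X}
            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]
                    v = parent[v]
                return v
            for u, v, l in edges:
                if u in X and v in X and l < t:
                    parent[find(u)] = find(v)
            classes = {}
            for v in X:
                classes.setdefault(find(v), set()).add(v)
            return list(classes.values())

        def visit(X, a, b):                      # X is (S, [a, b))-safe on entry
            if len(X) == 1:                      # step 1 of Generalized-Visit
                (v,) = X
                if a <= D[v] < b and v not in S:
                    S.add(v)                     # D(v) = d(s, v) is now final
                    for w, l in adj[v]:          # relax edges incident on v
                        D[w] = min(D[w], D[v] + l)
                return
            internal = sorted(set(l for u, v, l in edges if u in X and v in X))
            if not internal:                     # no internal edges: any t works
                t, chi = math.inf, [{v} for v in X]
            else:                                # largest t still giving >= 2 classes,
                for t in reversed(internal):     # i.e., the bottleneck MST length of X
                    chi = components(X, t)
                    if len(chi) > 1:
                        break
            lo = a
            while lo < b:                        # step 2 of Generalized-Visit
                rest = X - S
                if not rest or min(D[v] for v in rest) == math.inf:
                    break                        # all reachable vertices are visited
                hi = min(lo + t, b)
                for Y in chi:                    # the calls justified by Lemma 3.3(i)
                    if Y - S and min(D[v] for v in Y) < hi:
                        visit(Y, lo, hi)
                lo = hi                          # Lemma 3.3(ii): X is (S, [hi, b))-safe

        visit(set(vertices), 0.0, math.inf)
        return D

    print(sssp('sab', [('s', 'a', 2.0), ('a', 'b', 1.0), ('s', 'b', 5.0)], 's'))
    # {'s': 0.0, 'a': 2.0, 'b': 3.0}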
4. Efficient implementations of Generalized-Visit. We propose two im-
plementations of the Generalized-Visit algorithm, called Visit and Visit-B. The
time bound claimed in Theorem 1.1 is proved by analyzing Visit, given later in
this section. Although Visit is asymptotically fast, it seems too impractical for a
real-world implementation. In section 4.5 we give the Visit-B implementation of
Generalized-Visit, which uses fewer specialized data structures. The asymptotic
running time of Visit-B is just a little slower than that of Visit.
Visit and Visit-B differ from Generalized-Visit in their input/output speci-
fication only slightly. Rather than accepting a set of vertices, as Generalized-Visit
does, our implementations (like [Tho99, Hag00, Pet04, Pet02b]) accept a hierarchy
node x, which represents a set of vertices. Both of our implementations work cor-
rectly for any proper hierarchy H, defined below. We prove bounds on their running
times as a function of m, n, and a certain function of H (which is different for Visit
and Visit-B). In order to compute SSSP in near-linear time the proper hierarchy H
must satisfy certain balance conditions, which are the same for Visit and Visit-B.
In section 5 we give the requisite properties of a balanced hierarchy and show how
to construct a balanced proper hierarchy in O(mst(m, n) + min{n log n, n log log r})
time. Definition 4.1, given next, describes exactly what is meant by hierarchy and
proper hierarchy.
Definition 4.1. A hierarchy is a rooted tree whose leaf nodes correspond to
graph vertices. If x is a hierarchy node, then p(x) is its parent, deg(x) is the number
of children of x, V (x) is the set of descendant leaves (or the equivalent graph vertices),
and diam(x) is an upper bound on the diameter of V (x) (where the diameter of V (x)
is defined to be max_{u,v∈V(x)} d(u, v)). Each node x is given a value norm(x). A
hierarchy is proper if the following hold:
(i) norm(x) ≤ norm(p(x)),
(ii) either norm(p(x))/norm(x) is an integer or diam(x) < norm(p(x)),
(iii) deg(x) ≠ 1,
(iv) if x_1, . . . , x_{deg(x)} are the children of x, then {V (x_i)}_i is a norm(x)-partition
of V (x). (Refer to Definition 3.2 for the meaning of "norm(x)-partition.")
Part (iv) of Definition 4.1 is the crucial one for computing shortest paths. Part
(iii) guarantees that a proper hierarchy has O(n) nodes. The second part of (ii) is
admittedly a little strange. It allows us to replace all occurrences of min{a′ + t, b}
in Generalized-Visit with just a′ + t, which greatly simplifies the analysis of our
algorithms. Part (i) will be useful when bounding the total number of recursive calls
to our algorithms.
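
To make Definition 4.1 concrete, here is a small executable sketch (ours; the names HNode and is_proper are not from the paper). Condition (iv) is tested directly against Definition 3.2, and the integer test on norm ratios uses a float tolerance:

    class HNode:
        def __init__(self, norm, diam, children=(), vertex=None):
            self.norm, self.diam = norm, diam
            self.children = list(children)
            self.vertex = vertex                   # set on leaves only

        def V(self):                               # descendant leaves = graph vertices
            if not self.children:
                return {self.vertex}
            return set().union(*(c.V() for c in self.children))

    def is_proper(x, edges):
        if not x.children:
            return True                            # leaves impose no conditions
        if len(x.children) == 1:                   # (iii): deg(x) != 1
            return False
        for c in x.children:
            if c.norm > x.norm:                    # (i): norm(c) <= norm(x)
                return False
            ratio = x.norm / c.norm                # (ii): integer ratio or small diameter
            if abs(ratio - round(ratio)) > 1e-9 and not c.diam < x.norm:
                return False
        owner = {}                                 # (iv): children form a norm(x)-partition
        for i, c in enumerate(x.children):
            for v in c.V():
                owner[v] = i
        for u, v, l in edges:                      # crossing edges need length >= norm(x)
            if u in owner and v in owner and owner[u] != owner[v] and l < x.norm:
                return False
        return all(is_proper(c, edges) for c in x.children)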

4.1. Visit. Consider the Visit procedure given below. We prove that Visit cor-
rectly computes SSSPs by demonstrating that it is an implementation of Generalized-
Visit, which was already proved correct.
Visit(x, [a, b)).
Input: x is a node in a proper hierarchy H; V (x) is (S, [a, b))-safe and
Invariant 2.1 is satisfied.
Output guarantee: Invariant 2.1 is satisfied and S_post = S_pre ∪ V (x)^[a,b),
where S_pre and S_post are the set S before and after the call.
1. If x is a leaf and D(x) ∈ [a, b), then let S := S ∪ {x}, relax all edges
incident on x, restoring Invariant 2.1, and return.
2. If Visit(x, ·) is being called for the first time, create a bucket array of
   ⌈diam(x)/norm(x)⌉ + 1 buckets. Bucket i represents the interval

       [a_x + i · norm(x), a_x + (i + 1) · norm(x)),

   where a_x = D(x) if D(x) + diam(x) < b, and
   a_x = b − ⌈(b − D(x))/norm(x)⌉ · norm(x) otherwise.

   We initialize a′ := a_x and insert all the children of x in H into the bucket
   array.

The bucket invariant: A node y ∈ H in x's bucket array appears (logically)
in the bucket whose interval spans D(y). If {x_i} is the set of
bucketed nodes, then {V (x_i)} is a norm(x)-partition of V (x).

3. While a′ < b and V (x) ⊄ S
      While ∃ y in bucket [a′, a′ + norm(x)) s.t. norm(y) = norm(x)
         Remove y from the bucket array
         Insert y's children in H in the bucket array
      For each y in bucket [a′, a′ + norm(x)),
      and each y such that D(y) < a′ and V (y) ⊄ S:
         Visit(y, [a′, a′ + norm(x)))
      a′ := a′ + norm(x)
In step 2 of Generalized-Visit we let χ be an arbitrary t-partition of the subset
of vertices given as input. In Visit the input is a hierarchy node x, and the associated
vertex set is V (x). We represent the t-partition of V (x) (where t = norm(x)) by the
set of bucketed H-nodes {x_i}_i (see step 2), where the sets {V (x_i)}_i partition V (x).
Clearly the {x_i}_i are descendants of x. The set {x_i}_i will begin as x's children, though
later on {x_i}_i may contain a mixture of children of x, grandchildren of x, and so on.
Consider the inner while-loop in step 3. Assuming inductively that the bucketed
H-nodes represent a norm(x)-partition of V (x), if y is a bucketed node and
norm(y) = norm(x), then replacing y by its children in the bucket array produces a
new norm(x)-partition. This follows from the definitions of t-partitions and proper

[Figure 1 depicts three alignment cases for the interval [a_x, b): Case 1 (fully aligned):
norm(x) divides (b − a_x) and norm(x) divides norm(p(x)); Case 2 (aligned with b):
norm(x) divides (b − a_x) and diam(x) < norm(p(x)); Case 3 (not aligned at all):
a_x = D(x) and D(x) + diam(x) < b.]

Fig. 1. First observe that when a_x is initialized we have D(x) ≥ a_x ≥ a, as in the figure. If
a_x is chosen such that norm(x) divides (b − a_x), then by Definition 4.1(ii) either norm(x) divides
norm(p(x)) (which puts us in Case 1) or diam(x) < norm(p(x)) (putting us in Case 2); that is,
norm(x) does not divide (b + norm(p(x)) − a_x), but it does not matter since we'll never reach
b + norm(p(x)) anyway. If a_x is chosen so that norm(x) does not divide (b − a_x), then a_x = D(x)
and D(x) + diam(x) < b (putting us in Case 3), meaning we will never reach b. Note that by
the definition of diam(x) (Definition 4.1) and Invariant 2.1, for any vertex u ∈ V (x) we have
d(s, u) ≤ d(s, x) + diam(x) ≤ D(x) + diam(x).

hierarchies (Definitions 3.2 and 4.1). Since the bucketed nodes form a norm(x)-
partition, one can easily see that the recursive calls in step 3 of Visit correspond
to the recursive calls in Generalized-Visit. However, their interval arguments are
different. We sketch below why this change does not affect correctness.
In Generalized-Visit the intervals passed to recursive calls are of the form
[a′, min{a′ + t, b}), whereas in Visit they are [a′, a′ + t) = [a′, a′ + norm(x)). We will
argue why a′ + t = a′ + norm(x) is never more than b. The main idea is to show that
we are always in one of the three cases portrayed in Figure 1.
If norm(x) divides norm(p(x)) and a_x is chosen in step 2 so that t = norm(x)
divides (b − a_x), then we can freely substitute the interval [a′, a′ + t) for [a′, min{a′ +
t, b}) since they will be identical. Note that in our algorithm (b − a) = norm(p(x)).6
The problems arise when norm(x) does not divide either norm(p(x)) or (b − a_x).
In order to prove the correctness of Visit we must show that the input guarantee
(regarding safeness) is satisfied for each recursive call. We consider two cases: when
we are in the first recursive call to Visit(x, ·) and any subsequent call. Suppose we
are in the first recursive call to Visit(x, ·). By our choice of a_x in step 2, either
b = a_x + q · norm(x) for some integer q, or b > D(x) + diam(x) = a_x + diam(x). If
it is the first case, each time the outer while-loop is entered we have a′ < b, which,
since q is integral, implies min{a′ + norm(x), b} = a′ + norm(x). Now consider the
second case, where b > D(x) + diam(x) = a_x + diam(x), and one of the recursive calls
Visit(y, [a′, a′ + norm(x))) made in step 3. By Lemma 3.3, V (y) is (S, [a′, min{a′ +
norm(x), b}))-safe, and it is actually (S, [a′, a′ + norm(x)))-safe as well because
b > D(x) + diam(x), implying V (y)^[b,∞) ⊆ V (x)^[b,∞) = ∅. (Recall from Definition
4.1 that for any u ∈ V (x), diam(x) satisfies d(s, x) ≤ d(s, u) ≤ d(s, x) + diam(x) ≤
D(x) + diam(x).) Now consider a recursive call Visit(x, [a, b)) that is not the first call
to Visit(x, ·). Then by Definition 4.1(ii), either (b − a) = norm(p(x)) is a multiple
of norm(x) or a + diam(x) < b; these are identical to the two cases treated above.

6 Strictly speaking, this does not hold for the initial call because in this case, x = root(H) is the root of the hierarchy H and there is no such node p(x). The argument goes through just fine if we let p(root(H)) denote a dummy node such that norm(p(root(H))) = ∞.
There are two data structural problems that need to be solved in order to effi-
ciently implement Visit. First, we need a way to compute the tentative distances of
hierarchy nodes, i.e., the D-values as defined in (1) in section 3. For this problem
we use an improved version of Gabow’s split-findmin structure [G85a]. The other
problem is efficiently implementing the various bucket arrays, which we solve with a
new structure called the bucket-heap. The specifications for these two structures are
discussed below, in sections 4.2 and 4.3, respectively. The interested reader can refer
to Appendices A and B for details about our implementations of split-findmin and
the bucket-heap, and for proofs of their respective complexities.

4.2. The split-findmin structure. The split-findmin structure operates on


a collection of disjoint sequences, consisting in total of n elements, each with an
associated key. The idea is to maintain the smallest key in each sequence under the
following operations.

split(x): Split the sequence containing x into two sequences: the


elements up to and including x, and the rest.
decrease-key(x, κ): Set key(x) = min{key(x), κ}.
findmin(x): Return the element with minimum key in x’s sequence.

Theorem 4.2, given below, establishes some new bounds on the problem that are
just slightly better than Gabow’s original data structure [G85a]. Refer to Appendix
B for a proof. Thorup [Tho99] gave a similar data structure for integer keys in the
RAM model that runs in linear time. It relies on the RAM’s ability to sort small sets
of integers in linear time [FW93].
Theorem 4.2. The split-findmin problem can be solved on a pointer machine in
O(n + mα) time while making only O(n + m log α) comparisons, where α = α(m, n) is
the inverse-Ackermann function. Alternatively, split-findmin can be solved on a RAM
in time Θ(split-findmin(m, n)), where split-findmin(m, n) = O(n+m log α) is the
decision-tree complexity of the problem.
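
For concreteness, here is a deliberately naive Python sketch of the split-findmin interface (ours); each sequence is a plain list, so split costs time linear in the sequence length and findmin scans its sequence, nowhere near the bounds of Theorem 4.2 but enough to fix the semantics:

    class SplitFindmin:
        def __init__(self, elements):               # one initial sequence, keys = +infinity
            self.key = {e: float('inf') for e in elements}
            self.seq_of = {}                         # element -> the list holding it
            seq = list(elements)
            for e in seq:
                self.seq_of[e] = seq

        def split(self, x):                          # cut x's sequence just after x
            seq = self.seq_of[x]
            i = seq.index(x)
            left, right = seq[:i + 1], seq[i + 1:]
            for e in left:
                self.seq_of[e] = left
            for e in right:
                self.seq_of[e] = right

        def decrease_key(self, x, k):
            self.key[x] = min(self.key[x], k)

        def findmin(self, x):                        # element of minimum key in x's sequence
            return min(self.seq_of[x], key=lambda e: self.key[e])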
We use the split-findmin structure to maintain D-values as follows. In the be-
ginning there is one sequence consisting of the n leaves of H in an order consistent
with some depth-first search traversal of H. For any leaf v in H we maintain, by
appropriate decrease-key operations, that key(v) = D(v). During execution of Visit
we will say an H-node is unresolved if it lies in another node’s bucket array but its
tentative distance (D-value) is not yet finalized. The D-value of an H-node becomes
finalized, in the sense that it never decreases again, during step 3 of Visit, either by
being removed from some bucket array or passed, for the first time, to a recursive call
of Visit. (It follows from Definition 3.1 and Invariant 2.1 that D(y) = d(s, y) at the
first recursive call to y.) One can verify a couple of properties of the unresolved nodes.
First, each unvisited leaf has exactly one unresolved ancestor. Second, to implement
Visit we need only query the D-values of unresolved nodes. Therefore, we maintain
that for each unresolved node y, there is some sequence in the split-findmin structure
corresponding to V (y), the descendants of y. Now suppose that a previously unre-
solved node y is resolved in step 3 of Visit. The deg(y) children of y will immediately
become unresolved, so to maintain our correspondence between sequences and unre-
solved nodes, we perform deg(y) − 1 split operations on y’s sequence, so that the
resulting subsequences correspond to y’s children.
We remark that the split-findmin structure we use can be simplified slightly be-
cause we know in advance where the splits will occur. However, this knowledge does
not seem to affect the asymptotic complexity of the problem.
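
Continuing the naive sketch above, the correspondence between sequences and unresolved nodes might be exercised as follows (a hypothetical four-leaf hierarchy):

    sf = SplitFindmin(['v1', 'v2', 'v3', 'v4'])   # leaves of H in DFS order
    sf.decrease_key('v3', 4.0)                    # an edge relaxation lowered D(v3)
    sf.split('v2')                                # a node with child leaf-blocks {v1,v2}, {v3,v4} was resolved
    print(sf.findmin('v4'))                       # 'v3': minimum D-value in v4's sequence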

4.3. The bucket-heap. We now turn to the problem of efficiently implementing


the bucket array used in Visit. Because of the information-theoretic bottleneck built
into the comparison-addition model, we cannot always bucket nodes in constant time:
each comparison extracts at most one bit of information, whereas properly bucketing
a node in x’s bucket array requires us to extract up to log(diam(x)/norm(x)) bits
of information. Thorup [Tho99] and Hagerup [Hag00] assume integer edge lengths
and the RAM model and therefore do not face this limitation. We now give the
specification for the bucket-heap, a structure that supports the bucketing operations
of Visit. This structure logically operates on a sequence of buckets; however, our
implementation is really a simulation of the logical structure. Lemma 4.3, proved in
Appendix A, bounds the complexity of our implementation of the bucket-heap.

create(μ, δ): Create a new bucket-heap whose buckets are associated


with intervals [δ, δ + μ), [δ + μ, δ + 2μ), [δ + 2μ, δ + 3μ), . . ..
An item x lies in the bucket whose interval spans key(x).
All buckets are initially open.
insert(x, κ): Insert a new item x with key(x) = κ.
decrease-key(x, κ): Set key(x) = min{key(x), κ}. It is guaranteed that x is
not moved to a closed bucket.
enumerate: Close the first open bucket and enumerate its contents.

Lemma 4.3. Let Δ_x ≥ 1 denote the number of buckets between the first open
bucket at the time of x's insertion and the bucket from which x was enumerated. The
bucket-heap can be implemented on a pointer machine to run in O(N + Σ_x log Δ_x)
time, where N is the number of operations.
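
The dictionary-based Python sketch below (ours) implements the bucket-heap interface directly. Unlike the pointer-machine structure of Lemma 4.3, it computes bucket indices by division, an operation unavailable on real keys in the comparison-addition model, so it is illustrative only:

    import math

    class BucketHeap:
        def __init__(self, mu, delta):               # create(mu, delta)
            self.mu, self.delta = mu, delta
            self.buckets = {}                         # bucket index -> set of items
            self.index = {}                           # item -> its current bucket
            self.key = {}
            self.first_open = 0

        def _bucket(self, k):                         # interval [delta+i*mu, delta+(i+1)*mu)
            return int(math.floor((k - self.delta) / self.mu))

        def insert(self, x, k):
            self.key[x] = k
            i = self._bucket(k)
            self.index[x] = i
            self.buckets.setdefault(i, set()).add(x)

        def decrease_key(self, x, k):                 # guaranteed not to land in a closed bucket
            if k < self.key[x]:
                self.buckets[self.index[x]].discard(x)
                self.insert(x, k)

        def enumerate(self):                          # close the first open bucket, return contents
            items = self.buckets.pop(self.first_open, set())
            self.first_open += 1
            return items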
When Visit(x, ·) is called for the first time, we initialize the bucket-heap at x
with a call to create(norm(x), a_x), followed by an insert operation for each
of x's children, where the key of a child is its D-value. Here a_x is the beginning of the
real interval represented by the bucket array, and norm(x) the width of each bucket.
Every time the D-value of a bucketed node decreases, which can easily be detected
with the split-findmin structure, we perform a decrease-key on the corresponding item
in the bucket-heap. We usually refer to buckets not by their cardinal number but by
their associated real interval, e.g., bucket [a_x, a_x + norm(x)).

4.4. Analysis of Visit. In this section we bound the time required to compute
SSSP with Visit as a function of m, n, and the given hierarchy H. We will see
later that the dominant term in this running time corresponds to the split-findmin
structure, whose complexity is no more than O(m log α) but could turn out to be
linear.
Lemma 4.4. Let H be a proper hierarchy. Computing SSSPs with Visit on
H takes time O(split-findmin(m, n) + φ(H)), where split-findmin(m, n) is the
complexity of the split-findmin problem and

    φ(H) = Σ_{x∈H : norm(x)≠norm(p(x))} ⌈diam(x)/norm(x)⌉ + Σ_{x∈H} log(diam(p(x))/norm(p(x)) + 1).

Proof. The split-findmin(m, n) term represents the time to relax edges (in step
1) and update the relevant D-values of H-nodes, as described in section 4.2. Except
for the costs associated with updating D-values, the overall time of Visit is linear
in the number of recursive calls and the bucketing costs. The two terms of φ(H)
represent these costs. Consider the number of calls to Visit(x, I) for a particular H-node
x. According to step 3 of Visit, there will be zero calls on x unless norm(x) ≠
norm(p(x)). If it is the case that norm(x) ≠ norm(p(x)), then for all recursive calls
on x, the given interval I will have the same width: norm(z) for some ancestor z of x.
By Definition 4.1(i), norm(z) ≥ norm(x), and therefore the number of such recursive
calls on x is ≤ ⌈diam(x)/norm(x)⌉ + 2; the extra 2 counts the first and last recursive
calls, which may cover negligible parts of the interval [d(s, x), d(s, x) + diam(x)]. By
Definition 4.1(iii), |H| < 2n, and therefore the total number of recursive calls is
bounded by 4n + Σ_x ⌈diam(x)/norm(x)⌉, where the summation is over H-nodes whose
norm-values differ from their parents' norm-values.
Now consider the bucketing costs of Visit if implemented with the bucket-heap.
According to steps 2 and 3, a node y is bucketed either because Visit(p(y), ·) was
called for the first time, or because its parent p(y) was removed from the first open bucket (of
some bucket array), say bucket [a, a + norm(p(y))). In either case, this means that
d(s, p(y)) ∈ [a, a + norm(p(y))) and that d(s, y) ∈ [a, a + norm(p(y)) + diam(p(y))).
To use the terminology of Lemma 4.3, Δ_y ≤ ⌈diam(p(y))/norm(p(y))⌉, and the
total bucketing cost is #(buckets scanned) + #(insertions) + #(dec-keys) +
Σ_x log(diam(p(x))/norm(p(x)) + 1), which is O(φ(H) + m + n).
In section 5 we give a method for constructing a proper hierarchy H such that
φ(H) = O(n). This bound together with Lemma 4.4 shows that we can compute
SSSP in O(split-findmin(m, n)) time, given a suitable hierarchy. Asymptotically
speaking, this bound is the best we are able to achieve. However, the promising
experimental results of a simplified version of our algorithm [PRS02] have led us to
design an alternate implementation of Generalized-Visit that is both theoretically
fast and easier to code.

4.5. A practical implementation of Generalized-Visit. In this section we


present another implementation of Generalized-Visit, called Visit-B. Although
Visit-B is a bit slower than Visit in the asymptotic sense, it has other advantages.
Unlike Visit, Visit-B treats all internal hierarchy nodes in the same way and is
generally more streamlined. Visit-B also works with any optimal off-the-shelf priority
queue, such as a Fibonacci heap [FT87]. We will prove later that the asymptotic
running time of Visit-B is O(m + n log∗ n). Therefore, if m/n = Ω(log∗ n), both
Visit and Visit-B run in optimal O(m) time.
The pseudocode for Visit-B is given as follows.

Visit-B(x, [a, b)).


Input: x is a node in a proper hierarchy H; V (x) is (S, [a, b))-safe and
Invariant 2.1 is satisfied.
Output guarantee: Invariant 2.1 is satisfied and S_post = S_pre ∪ V (x)^[a,b),
where S_pre and S_post are the set S before and after the call.
1. If x is a leaf and D(x) ∈ [a, b), then let S := S ∪ {x}, relax all edges
incident on x, restoring Invariant 2.1, and return.
2. If Visit-B(x, ·) is being called for the first time, put x's children in H
   in a heap associated with x, where the key of a node is its D-value.
   Choose a_x as in Visit and initialize a′ := a_x and χ := ∅.
3. While a′ < b and either χ or x's heap is nonempty,
      While there exists a y in x's heap with D(y) ∈ [a′, a′ + norm(x))
         Remove y from the heap
         χ := χ ∪ {y}
      For each y ∈ χ
         Visit-B(y, [a′, a′ + norm(x)))
         If V (y) ⊆ S, then set χ := χ\{y}
      a′ := a′ + norm(x)
The proof of correctness for Visit-B follows the same lines as that for Visit. It
is easy to establish that before the for-loop in step 3 is executed, χ = {y : p(y) =
x, D(y) < a′ + norm(x), and V (y) ⊄ S}, so Visit-B is actually a more straightforward
implementation of Generalized-Visit than Visit. In Visit-B the norm(x)-partition
for x corresponds to x's children, whereas in Visit the partition begins with
x's children but is decomposed progressively.
Lemma 4.5. Let H be a proper hierarchy. Computing SSSPs with Visit-B on
H takes time O(split-findmin(m, n) + ψ(H)), where split-findmin(m, n) is the
complexity of the split-findmin problem and

    ψ(H) = Σ_{x∈H} ( ⌈diam(x)/norm(x)⌉ + deg(x) log deg(x) ).

Proof. The split-findmin term plays the same role in Visit-B as in Visit.
Visit-B is different than Visit in that it makes recursive calls on all hierarchy nodes,
not just those with different norm-values than their parents. Using the same argu-
ment as in Lemma 4.5, we can bound the number of recursive calls of the form Visit-
B(x, ·) as diam(x)/norm(x) + 2; this gives the first summation in ψ(H). Assuming
an optimal heap is used (for example, aFibonacci heap [FT87]), all decrease-keys
take O(m) time, and all deletions take x deg(x) log deg(x) time. The bound on
deletions follows since each of the deg(x) children of x is inserted into and deleted
from x’s heap at most once.
In section 5 we construct a hierarchy H such that ψ(H) = Θ(n log∗ n), implying
an overall bound on Visit-B of O(m + n log∗ n), since split-findmin(m, n) =
O(mα(m, n)) = O(m + n log∗ n). Even though ψ(H) = Ω(n log∗ n) in the worst case,
we are only able to construct very contrived graphs for which this lower bound is
tight.
5. Efficient construction of balanced hierarchies. In this section we con-
struct a hierarchy that works well for both Visit and Visit-B. The construction
procedure has three distinct phases. In phase 1 we find the graph’s minimum span-
ning tree, denoted M , and classify its edges by length. This classification immediately
induces a coarse hierarchy, denoted H0 , which is analogous to the component hierarchy
defined by Thorup [Tho99] for integer-weighted graphs. Although H0 is proper, using
it to run Visit or Visit-B may result in a slow SSSP algorithm. In particular, φ(H0 )
and ψ(H0 ) can easily be Θ(n log n), giving no improvement over Dijkstra’s algorithm.
Phase 2 facilitates phase 3, in which we produce a refinement of H0 , called H; this is
the “well balanced” hierarchy we referred to earlier. The refined hierarchy H is con-
structed so as to minimize the φ(H) and ψ(H) terms in the running times of Visit and
Visit-B. In particular, φ(H) will be O(n), and ψ(H) will be O(n log∗ n). Although
H could be constructed directly from M (the graph’s minimum spanning tree), we
would not be able to prove the time bound of Theorem 1.1 using this method. The
purpose of phase 2 is to generate a collection of small auxiliary graphs that—loosely
speaking—capture the structure and edge lengths of certain subtrees of the minimum
spanning tree. Using the auxiliary graphs in lieu of M , we are able to construct H in
phase 3 in O(n) time.
In section 5.1 we define all the notation and properties used in phases 1, 2, and
3 (sections 5.2, 5.3, and 5.4, respectively). In section 5.5 we prove that φ(H) = O(n)
and ψ(H) = O(n log∗ n).
5.1. Some definitions and properties.
5.1.1. The coarse hierarchy. Our refined hierarchy H is derived from a coarse
hierarchy H0 , which is defined here and in section 5.2. Although H0 is typically very
simple to describe, the general definition of H0 is rather complicated since it must
take into account certain extreme circumstances. H0 is defined w.r.t. an increasing
sequence of norm-values norm_1, norm_2, . . ., where all edge lengths are at least
as large as norm_1. (Typically norm_{i+1} = 2 · norm_i; however, this is not true in
general.) We will say that an edge e is at level i if ℓ(e) ∈ [norm_i, norm_{i+1}), or
alternatively, we may write norm(e) = norm_i to express that e is at level i. A level
i subgraph is a maximal connected subgraph restricted to edges at level i or less,
that is, with length strictly less than norm_{i+1}. Therefore, the level zero subgraphs
consist of single vertices. A level i node in H0 corresponds to a nonredundant level
i subgraph, where a level i subgraph is redundant if it is also a level i − 1 subgraph.
This nonredundancy property guarantees that all nonleaf H0 -nodes have at least
two children. The ancestor relationship in H0 should be clear: x is an ancestor of
y if and only if the subgraph of y is contained in the subgraph of x, i.e., V (y) ⊆
V (x). The leaves of H0 naturally correspond to graph vertices, and the internal
nodes to subgraphs. The coarse hierarchy H0 clearly satisfies Definition 4.1(i), (iii),
(iv); however, we have to be careful in choosing the norm-values if we want it to be
a proper hierarchy, that is, for it to satisfy Definition 4.1(ii) as well. Our method for
choosing the norm-values is deferred to section 5.2.
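
A Kruskal-style construction of H0 from the MST follows directly from this definition: process the MST edges in nondecreasing level, merge components, and create an internal node exactly when a component first becomes connected at some level, so redundant single-child nodes never arise. The sketch below is our own illustration; the function level(e) (returning i with norm_i ≤ ℓ(e) < norm_{i+1}) and the node-naming scheme are assumptions:

    import itertools

    def coarse_hierarchy(vertices, mst_edges, level):
        parent = {v: v for v in vertices}
        node = {v: v for v in vertices}          # component root -> its H0 subtree
        children = {}                            # internal H0-node -> its child subtrees
        ids = itertools.count()
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for i, grp in itertools.groupby(sorted(mst_edges, key=level), key=level):
            pending = {}                         # current root -> H0 nodes merged at level i
            for u, v, l in grp:
                ru, rv = find(u), find(v)
                if ru != rv:
                    merged = pending.pop(ru, {node[ru]}) | pending.pop(rv, {node[rv]})
                    parent[ru] = rv
                    pending[rv] = merged
            for root, kids in pending.items():   # each new node has >= 2 children
                x = 'level-%d-node-%d' % (i, next(ids))
                children[x] = kids
                node[root] = x
        root = find(next(iter(vertices)))
        return node[root], children              # root of H0 (if M is connected) and child sets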
5.1.2. The minimum spanning tree. By the cut property of minimum span-
ning trees (see [CLRS01, PR02c]) the H0 w.r.t. G is identical to the H0 w.r.t. M ,
the minimum spanning tree (MST) of G. Therefore, the remainder of this section is
mainly concerned with M , not the graph itself. If X ⊆ V (G) is a set of vertices, we let
M (X) be the minimal connected subtree of M containing X. Notice that M (X) can
include vertices outside of X. Later on we will need M to be a rooted tree in order to
talk coherently about a vertex’s parent, ancestors, children, and so on. Assume that
M is rooted at an arbitrary vertex. The notation root(M (X)) refers to the root of
the subtree M (X).
5.1.3. Mass and diameter. The mass of a vertex set X ⊆ V (G) is defined as

\[
mass(X) \;\stackrel{\mathrm{def}}{=}\; \sum_{e \in E(M(X))} \ell(e).
\]
Extending this notation, we let M (x) = M (V (x)) and mass(x) = mass(V (x)),
where x is a node in any hierarchy. Since the MST path between two vertices in
M (x) is an upper bound on the shortest path between them, mass(x) is an upper
bound on the diameter of V (x). Recall from Definition 4.1 that diam(x) denoted any
upper bound on the diameter of V (x); henceforth, we will freely substitute mass(x)
for diam(x).
5.1.4. Refinement of the coarse hierarchy. We will say that H is a refine-
ment of H0 if all nodes in H0 are also represented in H. An equivalent definition,
which provides us with better imagery, is that H is derived from H0 by replacing each
node x ∈ H0 with a rooted subhierarchy H(x), where the root of H(x) corresponds
to (and is also referred to as) x and the leaves of H(x) correspond to the children of
x in H0 . Consider a refinement H of H0 where each internal node y in H(x) satisfies
deg(y) ≠ 1 and norm(y) = norm(x). One can easily verify from Definitions 3.2 and
4.1 that if H0 is a proper hierarchy, so too is H. Of course, in order for φ(H) and
ψ(H) to be linear or near-linear, H(x) must satisfy certain properties. In particular,
it must be sufficiently short and balanced. By balanced we mean that a node’s mass
should not be too much smaller than its parent’s mass.
5.1.5. Lambda values. We will use the following λ-values in order to quantify
precisely our notion of balance:

\[
\lambda_0 = 0, \qquad \lambda_1 = 12, \qquad \lambda_{q+1} = 2^{\lambda_q \cdot 2^{-q}}.
\]
Lemma 5.1 gives a lower bound on the growth of the λ-values; we give a short
proof before moving on.
Lemma 5.1. min{q : λq ≥ n} ≤ 2 log* n.
Proof. Let Sq be a stack of q twos; for example, S3 = 2^{2^2} = 16. We will prove
that λq ≥ S⌈q/2⌉, giving the lemma. One can verify that this statement holds for
q ≤ 9. Assume that it holds for all q′ ≤ q. Then

\[
\begin{aligned}
\lambda_{q+1} &= 2^{2^{\lambda_{q-1} \cdot 2^{-(q-1)} - q}} && \{\text{definition of } \lambda_{q+1}\}\\
&\ge 2^{2^{S_{\lceil (q-1)/2 \rceil} \cdot 2^{-(q-1)} - q}} && \{\text{inductive assumption}\}\\
&\ge 2^{2^{S_{\lceil (q-1)/2 \rceil - 1}}} = S_{\lceil (q+1)/2 \rceil} && \{\text{holds for } q \ge 9\}.
\end{aligned}
\]

The third line follows from the inequality S⌈(q−1)/2⌉ · 2^{−(q−1)} − q ≥ S⌈(q−1)/2⌉−1,
which holds for q ≥ 9.
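As a sanity check on Lemma 5.1, the λ-sequence can be evaluated with exact integer arithmetic for the first few ranks (for q ≥ 1 the exponents λq · 2^{−q} happen to be integers). The sketch below is our own illustration, not part of the algorithm; lambdas and log_star are hypothetical helper names:

import math

def lambdas(n):
    """Smallest q with lambda_q >= n, where lambda_1 = 12 and
    lambda_{q+1} = 2^(lambda_q * 2^-q).  Exact big-int arithmetic;
    fine for any n up to 2^8192, i.e., any realistic input size."""
    lam, q = 12, 1
    while lam < n:
        lam = 1 << (lam >> q)   # 2^(lambda_q * 2^-q); the shift is exact here
        q += 1
    return q

def log_star(n):
    """Iterated logarithm: how many times log2 is applied until <= 1."""
    s = 0
    while n > 1:
        n, s = math.log2(n), s + 1
    return s

for n in (10, 10**6, 10**18):
    assert lambdas(n) <= 2 * log_star(n)    # the bound of Lemma 5.1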
5.1.6. Ranks. Recall from section 5.1.4 that our refined hierarchy H is derived
from H0 by replacing each node x ∈ H0 with a subhierarchy H(x). We assign to all
nodes in H(x) a nonnegative integer rank. The analysis of our construction would
become very simple if for every rank j node y in H(x), mass(y) = λj · norm(x).
Although this is our ideal situation, the nature of our construction does not allow
us to place any nontrivial lower or upper bounds on the mass of y. We will assign
ranks in order to satisfy Property 5.1, given below, which ensures us a sufficiently
good approximation to the ideal. It is mainly the internal nodes of H(x) that can
have subideal ranks; we assign ranks to the leaves of H(x) (representing children of x
in H0 ) to be as close to the ideal as possible.
We should point out that the assignment of ranks is mostly for the purpose of
analysis. Rank information is never stored explicitly in the hierarchy nodes, nor is
rank information used, implicitly or explicitly, in the computation of shortest paths.
We only refer to ranks in the construction of H and when analyzing their effect on
the φ and ψ functions.
Property 5.1. Let x ∈ H0 and y, z ∈ H(x) ⊆ H.
(a) If y is an internal node of H(x), then norm(y) = norm(x) and deg(y) > 1.
(b) If y is a leaf of H(x) (i.e., a child of x in H0 ), then y has rank j, where j is
maximal s.t. mass(y)/norm(x) ≥ λj .
(c) Let y be a child of a rank j node. We call y stunted if mass(y)/norm(x) <
λj−1 /2. Each node has at most one stunted child.
(d) Let y be of rank j. The children of y can be divided into three sets: Y1 , Y2 , and
a singleton {z} such that (mass(Y1 ) + mass(Y2 ))/norm(x) < (2 + o(1)) · λj .
(e) Let X be the nodes of H(x) of some specific rank. Then Σ_{y∈X} mass(y) ≤
2 · mass(x).
Before moving on, let us examine some features of Property 5.1. Part (a) is
asserted to guarantee that H is proper. Part (b) shows how we set the rank of leaves
of H(x). Part (c) says that at most one child of any node is less than half its ideal
mass. Part (d) is a little technical but basically says that for a rank j node y, although
mass(y) may be huge, the children of y can be divided into sets Y1 , Y2 , {z} such that
Y1 and Y2 are of reasonable mass—around λj · norm(x). However, no bound is placed
on the mass contributed by z. Part (e) says that if we restrict our attention to the
nodes of a particular rank, their subgraphs do not overlap in too many places. To
see how two subgraphs might overlap, consider {xi }, the set of nodes of some rank
in H(x). By our construction it will always be the case that the vertex sets {V (xi )}
are disjoint; however, this does not imply that the subtrees {M (xi )} are edge-disjoint
because M (xi ) can, in general, be much larger than V (xi ).
We show in section 5.5 that if H is a refinement of H0 and H satisfies Property
5.1, then φ(H) = O(n) and ψ(H) = O(n log* n). Recall from Lemmas 4.4 and 4.5 that
φ(H) and ψ(H) are terms in the running times of Visit and Visit-B, respectively.
5.2. Phase 1: The MST and the coarse hierarchy. Pettie and Ramachan-
dran [PR02c] recently gave an MST algorithm that runs in time proportional to the
decision-tree complexity of the MST problem. As the complexity of MST is triv-
ially Ω(m) and only known to be O(mα(m, n)) [Chaz00], it is unknown whether this
cost will dominate or be dominated by the split-findmin(m, n) term. (This issue is
mainly of theoretical interest.) In the analysis we use mst(m, n) to denote the cost
of computing the MST. This may be interpreted as the decision-tree complexity of
MST [PR02c] or the randomized complexity of MST, which is known to be linear
[KKT95, PR02b].
Recall from section 5.1.1 that H0 was defined w.r.t. an arbitrary increasing se-
quence of norm-values. We describe below exactly how the norm-values are chosen,
then prove that H0 is a proper hierarchy. Our method depends on how large r is,
which is the ratio of the maximum-to-minimum edge length in the minimum spanning
tree. If r < 2^n, which can easily be determined in O(n) time, then the possible norm-
values are {ℓmin · 2^i : 0 ≤ i ≤ log r + 1}, where ℓmin is the minimum edge length
in the graph. If r ≥ 2^n, then let e1, . . . , en−1 be the edges in M in nondecreasing
order by length and let J = {1} ∪ {j : ℓ(ej) > n · ℓ(ej−1)} be the indices that mark
large separations in the sequence ℓ(e1), . . . , ℓ(en−1). The possible norm-values are then
{ℓ(ej) · 2^i : i ≥ 0, j ∈ J, and ℓ(ej) · 2^i < ℓ(ej+1)}.
Under either definition, normi is the ith smallest norm-value, and for an edge
e ∈ E(M), norm(e) = normi if ℓ(e) ∈ [normi, normi+1). Notice that if no edge
length falls within the interval [normi, normi+1), then normi is an unused norm-
value. We only need to keep track of the norm-values in use, of which there are no
more than n − 1.
Lemma 5.2. The coarse hierarchy H0 is a proper hierarchy.
Proof. As we observed before, parts (i), (iii), and (iv) of Definition 4.1 are sat-
isfied for any monotonically increasing sequence of norm-values. Definition 4.1(ii)
states that if x is a hierarchy node, either norm(p(x))/norm(x) is an integer or
diam(x)/norm(p(x)) < 1. Suppose that x is a hierarchy node and norm(p(x))/norm(x)
is not integral; then norm(x) = ℓ(ej1) · 2^{i1} and norm(p(x)) = ℓ(ej2) · 2^{i2}, where
j2 > j1. By our method for choosing the norm-values, the lengths of all MST edges
are either at least ℓ(ej2) or less than ℓ(ej2)/n. Since edges in M(x) have length less
than ℓ(ej2), and hence less than ℓ(ej2)/n, diam(x) < (|V(x)| − 1) · ℓ(ej2)/n < ℓ(ej2) ≤
norm(p(x)).
Lemma 5.3. We can compute the minimum spanning tree M , and norm(e) for
all e ∈ E(M ), in O(mst(m, n) + min{n log log r, n log n}) time.
Proof. mst(m, n) represents the time to find M. If r < 2^n, then by Lemma 2.2
we can find norm(e) for all e ∈ M in O(log r + n log log r) = O(n log log r) time. If
r ≥ 2^n, then n log log r = Ω(n log n), so we simply sort the edges of M and determine
the indices J in O(n log n) time. Suppose there are nj edges e s.t. norm(e) is of the
form ℓ(ej) · 2^i. Since ℓ(e)/ℓ(ej) ≤ n^{nj} for any such edge e, we need only generate
nj log n values of the form ℓ(ej) · 2^i. A list of the Σ_j nj log n = n log n possible
norm-values can easily be generated in sorted order. By merging this list with the
list of MST edge lengths, we can determine norm(e) for all e ∈ M in O(n log n) time.
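To make the r < 2^n case concrete, the sketch below generates the powers-of-two norm-values and assigns norm(e) by a merge-style lookup. It is our own illustration (assign_norms is a hypothetical name) and freely uses a binary search, so it ignores the comparison-addition restrictions under which Lemma 2.2 actually operates:

from bisect import bisect_right

def assign_norms(lengths):
    """Norm-values for the case r < 2^n: lmin, 2*lmin, 4*lmin, ...
    norm(e) is the largest norm-value <= l(e)."""
    lmin, lmax = min(lengths), max(lengths)
    norms = [lmin]
    while norms[-1] <= lmax:
        norms.append(2 * norms[-1])
    # l(e) lies in [norms[i], norms[i+1]) exactly when bisect lands at i+1
    return {l: norms[bisect_right(norms, l) - 1] for l in set(lengths)}

norm = assign_norms([1.0, 3.5, 4.0, 17.0])
assert [norm[l] for l in (1.0, 3.5, 4.0, 17.0)] == [1.0, 2.0, 4.0, 16.0]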
Lemma 5.4, given below, will come in handy in bounding the running time of
our preprocessing and SSSP algorithms. It says that the total normalized mass in
H0 is linear in n. Variations of Lemma 5.4 are at the core of the hierarchy approach
[Tho99, Hag00, Pet04, Pet02b].
Lemma 5.4.

\[
\sum_{x \in H_0} \frac{mass(x)}{norm(x)} < 4(n-1).
\]

Proof. Recall that the notation norm(e) = normi stands for ℓ(e) ∈
[normi, normi+1), where normi is the ith smallest norm-value. Observe that if
e ∈ M is an MST edge with norm(e) = normi, then e can be included in mass(x) for no
more than one x at each of the levels i, i+1, . . . in H0. Also, it follows from our definitions that for
every normi in use, normi+1/normi ≥ 2, and for any MST edge, ℓ(e)/norm(e) < 2.
Therefore, we can bound the normalized mass in H0 as

\[
\sum_{x \in H_0} \frac{mass(x)}{norm(x)}
\;\le\; \sum_{\substack{e \in M\\ norm(e) = norm_i}} \; \sum_{j=i}^{\infty} \frac{\ell(e)}{norm_j}
\;\le\; \sum_{\substack{e \in M\\ norm(e) = norm_i}} \; \sum_{j=i}^{\infty} \frac{\ell(e)}{2^{\,j-i} \cdot norm_i}
\;<\; 4(n-1).
\]
Fig. 2. On the left is a subtree of M , the MST, where X is the set of blackened vertices. In the
center is M (X), the minimal subtree of M connecting X, and on the right is T (X), derived from
M (X) by splicing out unblackened degree 2 nodes in M (X) and adjusting edge lengths appropriately.
Unless otherwise marked, all edges in this example are of length 1.

Implicit in Lemma 5.4 is a simple accounting scheme where we treat mass, or
more accurately normalized mass, as a currency equivalent to computational work. A
hierarchy node x “owns” mass(x)/norm(x) units of currency. If we can then show
that the share of some computation relating to x is bounded by k times its currency,
the total time for this computation is O(kn), that is, of course, if all computation is
attributable to some hierarchy node. Although simple, this accounting scheme is very
powerful and can become quite involved [Pet04, Pet02b, Pet03].
5.3. Phase 2: Constructing T (x) trees. Although it is possible to construct
an H(x) that satisfies Property 5.1 by working directly with the subtree M (x), we
are unable to efficiently compute H(x) in this way. The problem is that we have time
roughly proportional to the size of H(x) to construct H(x), whereas M (x) could be
significantly larger than H(x). Our solution is to construct a succinct tree T (x) that
preserves the essential structure of M (x) while having size roughly the same as H(x).
For X ⊆ V (G), let T (X) be the subtree derived from M (X) by splicing out all
single-child vertices in V (M (X)) − X. That is, we replace each chain of vertices in
M (X), where only the end vertices are potentially in X, with a single edge; the length
of this edge is the sum of its corresponding edge lengths in M (X). Since there is a
correspondence between vertices in T (X) and M , we will refer to T (X) vertices by
their names in M . Figure 2 gives examples of M (X) and T (X) trees, where X is the
set of blackened vertices.
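The splicing operation itself is easy to state imperatively. The following quadratic-time sketch is our own (splice_out is a hypothetical name); it treats the tree as unrooted, assumes the input is M(X) so every leaf is in X, and contracts each degree-2 vertex outside X, whereas Succinct-Tree below achieves the same effect in a single linear-time traversal:

def splice_out(adj, X):
    """adj: {u: {v: length}} for the tree M(X); X: vertices to keep.
    Replaces each path a - v - b with v not in X and deg(v) = 2 by a
    single edge (a, b) whose length is the sum of the two edge lengths."""
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in X and len(adj[v]) == 2:
                (a, la), (b, lb) = adj[v].items()
                del adj[v], adj[a][v], adj[b][v]
                adj[a][b] = adj[b][a] = la + lb   # a tree has no cycles, so (a, b) is new
                changed = True
    return adj

# A path 1 -(2)- 2 -(3)- 3 with X = {1, 3} collapses to one edge of length 5:
T = splice_out({1: {2: 2.0}, 2: {1: 2.0, 3: 3.0}, 3: {2: 3.0}}, {1, 3})
assert T == {1: {3: 5.0}, 3: {1: 5.0}}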
If x ∈ H0 and {xj }j is the set of children of x, then let T (x) be the tree
T ({root(M (xj ))}j ); note that root(M (x)) is included in {root(M (xj ))}j . Since
only some of the edges of M (x) are represented in T (x), it is possible that the to-
tal length of T (x) is significantly less than the total length of M (x) (the mass of
M (x)); however, we will require that any subgraph of T (x) have roughly the same
mass as an equivalent subgraph in M (x). In order to accomplish this we attribute
certain amounts of mass to the vertices of T (x) as follows. Suppose that y is a
child of x in H0 and v = root(y) is the corresponding root vertex in T (x). We let
mass(v) = mass(y). All other vertices in T (x) have zero mass. The mass of a subtree
of T (x) is then the sum of its edge lengths plus the collective mass of its vertices.
We will think of a subtree of T (x) as corresponding to a subtree of M (x). Each
edge in T (x) corresponds naturally to a path in M (x), and each vertex in T (x) with
nonzero mass corresponds to a subtree of M (x).
Lemma 5.5. For x ∈ H0 ,
(i) deg(x) ≤ |V (T (x))| < 2 · deg(x);
(ii) let T1 be a subtree of T (x) and T2 be the corresponding tree in M (x). Then
mass(T2) ≤ mass(T1) ≤ 2 · mass(T2).

Fig. 3. The blackened vertices represent those known to be in T (x). The active path of the
traversal is shown in bold edges. Before v is processed (left) the stack consists of ⟨u1, u2, u3, u4⟩,
where u4 is the last vertex in the traversal known to be in T (x) and w = LCA(v, u4), which implies
w ∈ T (x). After v is processed (right) the stack is set to ⟨u1, u2, u3, w, v⟩ and w is blackened.
Proof. Part (i) follows from two observations. First, T (x) has no degree two
vertices. Second, there are at most deg(x) leaves of T (x) since each such leaf corre-
sponds to a vertex root(M (y)) for some child y of x in H0 . Part (ii) follows since
all mass in T2 is represented in T1 , and each edge in T2 contributes to the mass of at
most one edge and one vertex in T1 .
We construct T (x) with a kind of depth first traversal of the minimum spanning
tree, using the procedure Succinct-Tree, given below. Succinct-Tree focuses
on some fixed H0 -node x. We will explain how Succinct-Tree works with the aid
of the diagram in Figure 3. At every point in the traversal we maintain a stack of
vertices ⟨u1, . . . , uq⟩ consisting of all vertices known to be in T (x), whose parents in
T (x) are not yet fixed. The stack has the following properties: ui is ancestral to ui+1,
⟨u1, . . . , uq−1⟩ are on the active path of the traversal, and uq is the last vertex known
to be in T (x) encountered in the traversal.
In Figure 3 the stack consists of ⟨u1, . . . , u4⟩, where ⟨u1, u2, u3⟩ are on the active
path of the traversal, marked in bold edges. The preprocessing of v (before making
recursive calls) is to do nothing if v ∈ {root(M (xj ))}j . Otherwise, we update the
stack to reflect our new knowledge about the edges and vertices of T (x). The vertex
w = LCA(uq , v) = LCA(u4 , v) must be in T (x). There are three cases: either w
is the ultimate or penultimate vertex in the stack (uq or uq−1 ), that is, we already
know w ∈ T (x), or w lies somewhere on the path between uq and uq−1 . Figure
3 diagrams the third situation. Because no T (x) vertices were encountered in the
traversal between uq = u4 and v, there can be no new T (x) vertices discovered on
the path between uq and w. Therefore, we can pop uq off the stack, designating its
parent in T (x) to be w, and push w and v onto the stack. The other two situations,
when w = uq or w = uq−1 , are simpler. If w = uq , then we simply push v onto the
stack, and if w = uq−1 , we pop uq off the stack and push v on. Now consider the
postprocessing of v (performed after all recursive calls), and let uq−1 , uq be the last
two vertices in the stack. Suppose that v = uq−1 . We cannot simply do nothing,
because when the active path retracts there will be two stack vertices (v = uq−1 and
uq ) outside of the active path, contrary to the stack properties. However, because no
T (x) vertices were discovered between uq and uq−1 , we can safely say that uq−1 is the
parent of uq in T (x). So, to maintain the stack properties, we pop off uq and add the
edge (uq , uq−1 ) to T (x).
Succinct-Tree(v): A procedure for constructing T (x), for a given x ∈ H0 .
The argument v is a vertex in the MST M .
The stack for T (x) consists of vertices ⟨u1, . . . , uq⟩, which
are known to be in T (x) but whose parents in T (x) are
not yet known. All but uq are on the active path of the
DFS traversal. Initially the stack for T (x) is empty.

1. If v = root(y), where y is a child of x in H0 , then
2. Let w = LCA(v, uq )
3. If w ≠ uq
4. POP uq off the stack
5. Designate (uq , w) an edge in T (x)
6. If w ≠ uq−1
7. PUSH w on the stack
8. PUSH v on the stack
9. Call Succinct-Tree on all the children of v in M
10. Let uq−1 , uq refer to the last two elements in the current stack
11. If v = uq−1
12. POP uq off the stack
13. Designate (uq , uq−1 ) = (uq , v) an edge in T (x)
Lemma 5.6. Given the MST and a list of its edges ordered by level, H0 and
{T (x)}x can be constructed in O(n) time.
Proof. We construct H0 with a union-find structure and mark all vertices in M
as roots of M (x) for (one or more) x ∈ H0 . It is easy to see that we can construct
all T (x) for x ∈ H0 , with one tree traversal given in procedure Succinct-Tree.
We simply maintain a different stack for each T (x) under construction. Thus if v is
the root of several M (y1 ), M (y2 ), . . . , where yi ∈ H0 , we simply reexecute lines 1–8
and 10–13 of Succinct-Tree for each of v’s roles. Using a well-known union-find–
based least common ancestors (LCA) algorithm [AHU76, Tar79b], we can compute
the LCAs in line 2 in O(nα(n)) time, since the number of finds is linear in the number
of nodes in H0 . If we use the scheme of Buchsbaum et al. [BKRW98] instead, the cost
of finding LCAs is linear; however, since this algorithm is offline (it does not handle
LCA queries in the middle of a tree traversal, unlike [AHU76, Tar79b]), we would need
to determine what the LCA queries are with an initial pass over the tree. Finally, we
compute the length function in T (x) as follows. If (u, v) ∈ E(T (x)) and v is ancestral
to u in M , then ℓ(u, v) = dM(u, root(M )) − dM(v, root(M )), where dM is the
distance function for M . Clearly the dM(·, root(M )) function can be computed in
O(n) time. See Lemma 2.1 for a simulation of subtraction in the comparison-addition
model.
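To make the union-find construction of H0 concrete, here is a small sketch in our own notation (build_H0 and its input format are ours, and it omits the T (x) stacks). Edges are processed level by level; a new node is created only when at least two components merge, which is exactly the nonredundancy rule of section 5.1.1:

class DSU:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]          # path halving
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def build_H0(n, edges_by_level):
    """edges_by_level: {level: [(u, v), ...]} over the MST edges.
    Returns {node: children}; nodes 0..n-1 are the leaves (vertices)."""
    dsu, node_of, children, nxt = DSU(n), list(range(n)), {}, n
    for lvl in sorted(edges_by_level):
        pre = {}                                   # pre-level root -> its H0 node
        for u, v in edges_by_level[lvl]:
            for w in (u, v):
                r = dsu.find(w)
                pre.setdefault(r, node_of[r])
        for u, v in edges_by_level[lvl]:
            dsu.union(u, v)
        merged = {}                                # new root -> merged old nodes
        for r, nd in pre.items():
            merged.setdefault(dsu.find(r), []).append(nd)
        for root, kids in merged.items():
            if len(kids) > 1:                      # nonredundant: >= 2 children
                children[nxt], node_of[root] = kids, nxt
                nxt += 1
            else:                                  # redundant level-lvl subgraph
                node_of[root] = kids[0]
    return children

# MST path 0-1 (level 1), 1-2 and 2-3 (level 2):
H0 = build_H0(4, {1: [(0, 1)], 2: [(1, 2), (2, 3)]})
assert H0 == {4: [0, 1], 5: [4, 2, 3]}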

5.4. Phase 3: Constructing the refined hierarchy. We show in this section
how to construct an H(x) from T (x) that is consistent with Property 5.1.
The Refine-Hierarchy procedure, given as pseudocode below, constructs H(x)
in a bottom-up fashion by traversing the tree T (x). A call to Refine-Hierarchy(v),
where v ∈ T (x), will produce an array of sets v[·] whose elements are nodes in H(x)
that represent (collectively) the subtree of T (x) rooted at v. The set v[j] holds rank
j nodes, which, taken together, are not yet massive enough to become a rank j + 1
node. We extend the mass notation to sets v[·] as follows. Bear in mind that this
mass is w.r.t. the tree T (x), not M (x). By Lemma 5.5(ii), mass w.r.t. T (x) is a good
approximation to the mass of the equivalent subtree in M (x):
\[
mass(v[j]) \;=\; mass\Bigl(\,\bigcup_{j' \le j} \; \bigcup_{y \in v[j']} V(y)\Bigr).
\]
Refine-Hierarchy(v): Constructing H(x), for a given x ∈ H0 ,
where v is a vertex in T (x).
1. Initialize v[j] := ∅ for all j.
2. If v = root(M (y)) for some child y of x in H0
3. Let j be maximal s.t. mass(y)/norm(x) ≥ λj
4. v[j] := {y} (i.e., y is implicitly designated a rank j node)
5. For each child w of v in T (x):
6. Refine-Hierarchy(w)
7. For all i, v[i] := v[i] ∪ w[i]
8. Let j be maximal such that mass(v[j])/norm(x) ≥ λj+1
9. Promote v[0], . . . , v[j] (see Definition 5.7)
10. If v is the root of T (x), promote v[0], v[1], . . . until one node remains.
(This final node is the root of H(x).)
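To illustrate the bucket-and-promote mechanics of lines 8–10 and Definition 5.7, here is a simplified sketch in our own notation. It feeds the children of x in as a flat stream and ignores both the T (x) edge lengths and the lazy mass updates discussed in Lemma 5.8, so it demonstrates only the cascading promotions, not the full construction:

LAMBDA = [0, 12, 2**6, 2**16, 2**8192]            # lambda_0 .. lambda_4

class HNode:
    def __init__(self, rank, mass, children=()):
        self.rank, self.mass, self.children = rank, mass, list(children)

def leaf_rank(mass, norm):
    """Maximal j with mass/norm >= lambda_j (Property 5.1(b))."""
    j = 0
    while j + 1 < len(LAMBDA) and mass / norm >= LAMBDA[j + 1]:
        j += 1
    return j

def promote(buckets, j):
    """Promote buckets[0..j] in order (Definition 5.7)."""
    for i in range(j + 1):
        if i + 1 == len(buckets):
            buckets.append([])
        group, buckets[i] = buckets[i], []
        if len(group) == 1:
            buckets[i + 1].append(group[0])       # never create one-child nodes
        elif group:
            buckets[i + 1].append(HNode(i + 1, sum(y.mass for y in group), group))

def refine(leaf_masses, norm):
    """Combine the leaves of H(x) into a single root; returns that root."""
    buckets = [[] for _ in range(len(LAMBDA))]
    for m in leaf_masses:
        r = leaf_rank(m, norm)
        buckets[r].append(HNode(r, m))
        cum, jmax = 0, -1                          # line 8: maximal overfull j
        for j in range(len(LAMBDA) - 1):
            cum += sum(y.mass for y in buckets[j])
            if cum / norm >= LAMBDA[j + 1]:
                jmax = j
        if jmax >= 0:
            promote(buckets, jmax)
    while sum(len(b) for b in buckets) > 1:        # line 10: collapse to a root
        promote(buckets, next(j for j, b in enumerate(buckets) if b))
    return next(y for b in buckets for y in b)

root = refine([5.0, 3.0, 40.0, 2.0], norm=1.0)
assert root.rank == 2 and root.mass == 50.0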
The structure of Refine-Hierarchy is fairly simple. To begin with, we initialize
v[·] to be an array of empty sets. Then, if v is a root vertex of a child y of x in H0 ,
we create a node representing y and put it in the proper set in v[·]; which set receives
y depends only on mass(y). Next we process the children of v. On each pass through
the loop, we pick an as yet unprocessed child w of v; recurse on w, producing sets
w[·] representing the subtree rooted at w; and then merge the sets w[·] into their
counterparts in v[·]. At this point, the mass of some sets may be beyond a critical
threshold: the threshold for v[j] is λj+1 · norm(x). In order to restore a quiescent
state in the sets v[·] we perform promotions until no set’s mass is above threshold.
Definition 5.7. Promoting the set v[j] involves removing the nodes from v[j],
making them the children of a new rank j + 1 node, and then placing this node in
v[j + 1]. There is one exception: if |v[j]| = 1, then to comply with Definition 4.1(iii),
we simply move the node from v[j] to v[j + 1]. Promoting the sets v[0], v[1], . . . , v[j]
means promoting v[0], then v[1], up to v[j], in that order.
Suppose that after merging w[·] into v[·], j is maximal such that mass(v[j]) is
beyond its threshold of λj+1 · norm(x) (there need not be such a j). We promote
the sets v[0], . . . , v[j], which has the effect of emptying v[0], . . . , v[j] and adding a new
node to v[j + 1] representing the nodes formerly in v[0], . . . , v[j]. Lemma 5.8, given
below, shows that we can compute the H(x) trees in linear time.
Lemma 5.8. Given {T (x)}x , {H(x)}x can be constructed to satisfy Property 5.1
in O(n) time.
Proof. We first argue that Refine-Hierarchy produces a refinement H of H0
that satisfies Property 5.1. We then look at how to implement it in linear time.
Property 5.1(a) states that internal nodes in H(x) must have norm-values equal
to that of x, which we satisfy by simply assigning them the proper norm-values, and
that no node of H(x) has one child. By our treatment of one-element sets in the
promotion procedure of Definition 5.7, it is simply impossible to create a one-child
node in H(x). Property 5.1(e) follows from Lemma 5.5(ii) and the observation that
the mass (in T (x)) represented by nodes of the same rank is disjoint. Now consider
Property 5.1(c), regarding stunted nodes. We show that whenever a set v[j] accepts a
new node z, either v[j] is immediately promoted, or z is not stunted, or the promotion
of z into v[j] represents the last promotion in the construction of H(x). Consider the
pattern of promotions in line 9. We promote the sets v[0], . . . , v[j] in a cascading
fashion: v[0] to v[1], v[1] to v[2], and so on. The only set accepting a new node that
is not immediately promoted is v[j + 1], so in order to prove Property 5.1(c) we must
show that the node derived from promoting v[0], . . . , v[j] is not stunted. By choice
of j, mass(v[j]) ≥ λj+1 · norm(x), where mass is w.r.t. the tree T (x). By Lemma
5.5(ii) the mass of the equivalent tree in M (x) is at least λj+1 · norm(x)/2, which is
exactly the threshold for this node being stunted. Finally, consider Property 5.1(d).
Before the merging step in line 7, none of the sets in v[·] or w[·] is massive enough
to be promoted. Let v[·] and w[·] denote the sets associated with v and w before the
merging in step 7, and let v′[·] denote the sets associated with v after step 7. By the
definition of mass we have

\[
mass(v'[j]) = mass(v[j]) + mass(w[j]) + \ell(v, w) < 2 \cdot \lambda_{j+1} \cdot norm(x) + \ell(v, w).
\]

Since ℓ(v, w) is an edge length in T (x), it can be arbitrarily large compared to norm(x),
meaning we cannot place any reasonable bound on mass(v′[j]) after the merging step.
Let us consider how Property 5.1(d) is maintained. Suppose that v′[j] is promoted in
lines 9 or 10, and let y be the resulting rank j + 1 node. Using the terminology from
Property 5.1(d), let Y1 = v[j], Y2 = w[j] and let z be the node derived by promoting
v′[0], . . . , v′[j − 1]. Since neither v[j] nor w[j] was sufficiently massive to be promoted
before they were merged, we have (mass(Y1) + mass(Y2))/norm(x) < 2λj+1. This
is slightly stronger than what Property 5.1(d) calls for, which is the inequality <
(2 + o(1))λj+1. We'll see why the (2 + o(1)) is needed below.
Suppose that we implemented Refine-Hierarchy in a straightforward manner.
Let L be the (known) maximum possible index of any nonempty set v[·] during the
course of Refine-Hierarchy. One can easily see that the initialization in lines 1–4
takes O(L + 1) time and that, exclusive of recursive calls, each time through the for-
loop in line 5 takes O(L + 1) amortized time. (The bound on line 5 is amortized since
promoting a set v[j] takes worst case O(|v[j]| + 1) time but only constant amortized
time.) The only hidden costs in this procedure are updating the mass of sets, which
is done as follows. After the merging step in line 7, we simply set mass(v[j]) :=
mass(v[j]) + ℓ(v, w) + mass(w[j]) for each j ≤ L. Therefore the total cost of computing
H(x) from T (x) is O((L + 1) · |T (x)|). We can bound L as L ≤ 2 log*(4n) as follows.
The first node placed in any previously empty set is unstunted; therefore, by Lemma
5.1, the maximum nonempty set has rank at most 2 log*(mass(T (x))/norm(x)). By
Lemma 5.5(ii) and the construction of H0, mass(T (x)) ≤ 2 · mass(M (x)) < 4(n − 1) ·
norm(x).
In order to reduce the cost to linear we make a couple of adjustments to the Refine-
Hierarchy procedure. First, v[·] is represented as a linked list of nonempty sets.
Second, we update the mass variables in a lazy fashion. The time for steps 1–4 is
dominated by the time to find the appropriate j in step 3, which takes time t1 —see
below. The time for merging the v[·] and w[·] sets in line 7 is only proportional to the
shorter list; this time bound is given by expression t2 below.
\[
t_1 = O\Bigl(1 + \log^* \frac{mass(v)}{norm(x)}\Bigr), \qquad
t_2 = O\Bigl(1 + \log^* \frac{\min\{mass(v[\cdot]),\, mass(w[\cdot])\}}{norm(x)}\Bigr),
\]
where mass(v[·]) is just the total mass represented by the v[·] sets. We update the
mass of only the first t1 + t2 sets in v[·], and, as a rule, we update v[j + 1] half as
often as v[j]. It is routine to show that Refine-Hierarchy will have a lower bound
on the mass of v[j] that is off by a 1+o(1) factor, where the o(1) is a function of
j.7 This leads to the conspicuous 2 + o(1) in Property 5.1(d). To bound the cost of
Refine-Hierarchy we model its computation as a binary tree: leaves represent the
creation of nodes in lines 1–4, and internal nodes represent the merging events in line
7. The cost of a leaf f is log∗ (mass(f )/norm(x)), and the cost of an internal node
f with children f1 and f2 is 1 + log∗ (min{mass(f1 )/norm(x), mass(f2 )/norm(x)}).
We can think of charging the cost of f collectively to the mass in the subtree of f1 or
f2 , whichever is smaller. Therefore, no unit of mass can be charged for two nodes f
and g if the total mass under f is within twice the total mass under g. The total cost
is then

\[
\sum_f cost(f) \;=\; O\Bigl(|T(x)| + \frac{mass(T(x))}{norm(x)} \cdot \sum_{i=0}^{\infty} \frac{\log^*(2^i)}{2^i}\Bigr) \;=\; O\Bigl(\frac{mass(x)}{norm(x)}\Bigr).
\]

The last equality follows because |T (x)| = O(mass(T (x))/norm(x)) =
O(mass(M (x))/norm(x)). Summing over all x ∈ H0, the total cost of construct-
ing {H(x)}x∈H0 is, by Lemma 5.4, O(n).
Lemma 5.9. In O(mst(m, n) + min{n log log r, n log n}) time we can construct
both the coarse hierarchy H0 and a refinement H of H0 satisfying Property 5.1.
Proof. The proof follows from Lemmas 5.3, 5.6, and 5.8.
5.5. Analysis. In this section we prove bounds on the running times of Visit
and Visit-B, given an appropriate refined hierarchy such as the one constructed in
section 5.4. Theorem 1.1 follows directly from Lemma 5.10, given below, and Lemma
5.9.
Lemma 5.10. Let H be any refinement of H0 satisfying Property 5.1. Using H,
Visit computes SSSP in O(split-findmin(m, n)) time, and Visit-B computes SSSP
in O(m + n log* n) time.
Proof. We prove that φ(H) = O(n) and ψ(H) = O(n log* n). Together with
Lemmas 4.4 and 4.5, this will complete the proof.
With the observation that mass(x) is an upper bound on the diameter of V (x),
we will substitute mass(x) for diam(x) in the functions φ and ψ. By Lemma 5.4,
the first sum in φ is O(n). The first sum of ψ(H) is much like that in φ, except we
sum over all nodes in H, not just those nodes that also appear in H0 . By Property
5.1(a), (c), and (d) and Lemma 5.1, the maximum rank of any node in H(x) is
2 log*(mass(x)/norm(x)) ≤ 2 log* n. By Property 5.1(e) the total mass of nodes of
one rank in H(x) is bounded by 2 · mass(x). Therefore, we can bound the first
sum in ψ(H) as Σ_{x∈H} mass(x)/norm(x) ≤ 4 log* n · Σ_{x∈H0} mass(x)/norm(x), which is
O(n log* n) by Lemma 5.4.

7 The proof of this is somewhat tedious. Basically one shows that for i < j the mass of v[i] can
be updated at most 2^{j−i} − 1 times before the mass of v[j] is updated. Since (2^{j−i} − 1) · λi ≪ λj, our
neglecting to update the mass of v[j] causes a negligible error.
We now turn to the second summations in φ(H) and ψ(H), which can be written
as Σ_x deg(x) log(mass(x)/norm(x)) and Σ_x deg(x) log deg(x), respectively. Since
deg(x) ≤ 1 + mass(x)/norm(x), any bound established on the first summation will
extend to the second.
Let y be a rank j node. Using the terms from Property 5.1(d), let α = (mass(Y1 )+
mass(Y2 ))/norm(y) and β = mass(y)/norm(y) − α. Property 5.1(c), (d) imply that
α < (2 + o(1)) · λj and that deg(y) ≤ 2α/λj−1 + 2, where the +2 represents the
stunted child and the child z exempted from Property 5.1(d):

mass(y) 2α
deg(y) log ≤ + 2 log(α + β) {see explanations below}
norm(y) λj−1
max{α log(2λj ), β}
=O
λj−1
α+β mass(y)
=O = O .
2 j−1 norm(y) · 2j−1

The first line follows from our bound on deg(y) and the definitions of α and β.
The second line follows since α < (2 + o(1))λj and α log(α + β) = O(max{α log α, β}).
The last line follows since log λj = λj−1/2^{j−1} > 1. By the above bound and Property
5.1(e), Σ_{y∈H(x)} deg(y) log(mass(y)/norm(y)) = O(mass(x)/norm(x)). Therefore,
by Lemma 5.4, the second summations in both φ(H) and ψ(H) are bounded by
O(n).
6. Limits of hierarchy-type algorithms. In this section we state a simple
property (Property 6.1) of all hierarchy-type algorithms and give a lower bound on
any undirected SSSP algorithm satisfying that property. The upshot is that our
SSSP algorithm is optimal (up to an inverse-Ackermann factor) for a fairly large
class of SSSP algorithms, which includes all hierarchy-type algorithms, variations on
Dijkstra’s algorithm, and even a heuristic SSSP algorithm [G01].
We will state Property 6.1 in terms of directed graphs. Let cycles(u, v) denote
the set of all cycles, including nonsimple cycles, that pass through both u and v,
and let sep(u, v) = min_{C∈cycles(u,v)} max_{e∈C} ℓ(e). Note that in undirected graphs
sep(u, v) corresponds exactly to the longest edge on the MST path between u and v.
Property 6.1. An SSSP algorithm with the hierarchy property computes, aside
from shortest paths, a permutation πs : V (G) → V (G) such that for any vertices u, v,
we find d(s, u) ≥ d(s, v) + sep(u, v) =⇒ πs (u) > πs (v), where s is the source and d
the distance function.
The permutation πs corresponds to the order in which vertices are visited when
the source is s. Property 6.1 says that πs is loosely sorted by distance, but may invert
pairs of vertices if their relative distance is less than their sep-value. To see that
our hierarchy-based algorithm satisfies Property 6.1, consider two vertices u and v.
Let x be the LCA of u and v in H, and let u′ and v′ be the ancestors of u and v,
respectively, which are children of x. By our construction of H, norm(x) ≤ sep(u, v).
If d(s, u) ≥ d(s, v) + sep(u, v), then d(s, u) ≥ d(s, v) + norm(x), and therefore the
recursive calls on u′ and v′ that cause u and v to be visited are not passed the same
interval argument, since both intervals have width norm(x). The recursive call on u′
must, therefore, precede the recursive call on v′, and u must be visited before v.
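Both sep and the Property 6.1 condition are easy to check directly on undirected instances, since sep(u, v) is just the longest edge on the MST path between u and v. The quadratic sketch below is our own illustration; parent, depth, and edge_len describe a rooted MST, and all names are hypothetical:

def sep(u, v, parent, depth, edge_len):
    """Longest edge on the tree path between u and v: repeatedly walk the
    deeper endpoint upward until the two meet."""
    best = 0.0
    while u != v:
        if depth[u] < depth[v]:
            u, v = v, u
        best = max(best, edge_len[u])   # edge_len[w]: length of (w, parent[w])
        u = parent[u]
    return best

def satisfies_property_6_1(visit_order, d, sep_uv):
    """Property 6.1 for one source: u must come after v whenever
    d(s, u) >= d(s, v) + sep(u, v)."""
    pos = {w: i for i, w in enumerate(visit_order)}
    return all(pos[u] > pos[v]
               for u in visit_order for v in visit_order
               if u != v and d[u] >= d[v] + sep_uv(u, v))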
Fig. 4. The minimum spanning tree of the graph, with the vertex pairs divided into groups 1, 2, . . . , q.
Theorem 6.1. Suppose that our computational model allows any set of func-
tions from R^{O(1)} → R and comparison between two reals. Any SSSP algorithm for
real-weighted graphs satisfying Property 6.1 makes Ω(m + min{n log log r, n log n})
operations in the worst case, where r is the ratio of the maximum to minimum edge
length.
Proof. Let q be an integer. Assume without loss of generality that 2q divides
n − 1. The MST of the input graph is as depicted in Figure 4. It consists of the source
vertex s, which is connected to p = (n − 1)/2 vertices in the top row, each of which
is paired with one vertex in the bottom row. All the vertices (except s) are divided
into disjoint groups, where group i consists of exactly p/q randomly chosen pairs of
vertices. There are exactly p!/((p/q)!)^q = q^{Ω(p)} possible group arrangements. We will
show that any algorithm satisfying Property 6.1 must be able to distinguish them.
We choose edge lengths as follows. All edges in group i have length 2^i. This
includes edges from s to the group's top row and between the two rows. Other
non-MST edges are chosen so that shortest paths from s correspond to paths in
the MST. Let vi denote any vertex in the bottom row of group i. Then d(s, vi) =
2 · 2^i and sep(vi, vj) = 2^{max{i,j}}. By Property 6.1, vi must be visited before vj if
d(s, vi) + sep(vi, vj) ≤ d(s, vj), which is true for i < j since 2 · 2^i + 2^j ≤ 2 · 2^j.
Therefore, any algorithm satisfying Property 6.1 must be prepared to visit vertices in
q^{Ω(p)} distinct permutations and make at least Ω(p log q) = Ω(n log log r) comparisons
in the worst case. It also must include every non-MST edge in at least one operation,
which gives the lower bound.
Theorem 6.1 shows that our SSSP algorithm is optimal among hierarchy-type
algorithms, to within a tiny inverse-Ackermann factor. A lower bound on directed
SSSP algorithms satisfying Property 6.1 is given in [Pet04]. Theorem 6.1 differs from
that lower bound in two respects. First, the [Pet04] bound is Ω(m+min{n log r, n log n}),
which is Ω(m + n log n) for even reasonably small values of r. Second, the [Pet04]
bound holds even if the algorithm is allowed to compute the sep function (and sort
the values) for free. Contrast this with our SSSP algorithm, where the main obstacle
to achieving linear time is the need to sort the sep-values.

7. Discussion. We have shown that with a near-linear time investment in preprocessing,
SSSP queries can be answered in very close to linear time. Furthermore,
among a natural class of SSSP algorithms captured by Property 6.1, our SSSP algo-
rithm is optimal, aside from a tiny inverse-Ackermann factor. We can imagine several
avenues for further research, the most interesting of which is developing a feasible
alternative to Property 6.1 that does not have an intrinsic sorting bottleneck. This
would be a backward approach to algorithm design: first we define a desirable prop-
erty, then we hunt about for algorithms with that property. Another avenue, which
might have some real-world impact, is to reduce the preprocessing cost of the directed
shortest path algorithm in [Pet04] from O(mn) to near-linear, as it is in our algorithm.
The marginal cost of computing SSSP with our algorithm may or may not be
linear; it all depends on the complexity of the split-findmin structure. This data
structure, invented first by Gabow [G85a] for use in a weighted matching algorithm,
actually has connections with other fundamental problems. For instance, it can be
used to solve both the minimum spanning tree and shortest path tree sensitivity
analysis problems [Pet03]. (The sensitivity analysis problem is to decide how much
each edge’s length can be perturbed without changing the solution tree.) Therefore,
by Theorem 4.2 both these problems have complexity O(m log α(m, n)), an α/ log α
improvement over Tarjan’s path-compression–based algorithm [Tar82]. If we consider
the offline version of the split-findmin problem, where all splits and decrease-keys
are given in advance, one can show that it is reducible to both the MST problem
and the MST sensitivity analysis problem. None of these reductions proves whether
mst(m, n) dominates split-findmin(m, n) or vice versa; however, they do suggest
that we have no hope of solving the MST problem [PR02b, PR02c] without first
solving the manifestly simpler split-findmin and MST sensitivity analysis problems.
The experimental study of Pettie, Ramachandran, and Sridhar [PRS02] shows
that our algorithm is very efficient in practice. However, the [PRS02] study did not
explore all possible implementation choices, such as the proper heap to use, the best
preprocessing algorithm, or different implementations of the split-findmin structure.
To our knowledge no one has investigated whether the other hierarchy-type algorithms
[Tho99, Hag00, Pet04] are competitive in real-world scenarios.
An outstanding research problem in parallel computing is to bound the time-work
complexity of SSSP. There are several published algorithms on the subject [BTZ98,
CMMS98, KS97, M02, TZ96], though none runs in worst-case polylogarithmic time
using work comparable to Dijkstra’s algorithm. There is clearly a lot of parallelism in
the hierarchy-based algorithms. Whether this approach can be effectively parallelized
is an intriguing question.

Appendix A. The bucket-heap. The bucket-heap structure consists of an
array of buckets, where the ith bucket spans the interval [δ + iμ, δ + (i + 1)μ), for fixed
reals δ and μ. Logically speaking, a heap item with key κ appears in the bucket whose
interval spans κ. We are never concerned about the relative order of items within the
same bucket.
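Before describing the multi-level structure, it may help to see the logical interface in code. The one-level sketch below is our own (all names hypothetical); it moves items eagerly on every decrease-key, so it is correct but does not achieve the O(log Δx) movement bound of Lemma 4.3, which is precisely what the multi-level scheme in the proof buys:

class OneLevelBucketHeap:
    """An item with key k lives in bucket floor((k - delta) / mu)."""
    def __init__(self, delta, mu, nbuckets):
        self.delta, self.mu = delta, mu
        self.buckets = [set() for _ in range(nbuckets)]
        self.key = {}
        self.next_open = 0              # buckets before this index are closed

    def _idx(self, k):
        return int((k - self.delta) // self.mu)

    def insert(self, x, k):
        self.key[x] = k
        self.buckets[self._idx(k)].add(x)

    def decrease_key(self, x, k):
        # assumes keys never decrease below the closed prefix of buckets
        if k < self.key[x]:
            self.buckets[self._idx(self.key[x])].discard(x)
            self.key[x] = k
            self.buckets[self._idx(k)].add(x)

    def close_next(self):
        """Close the first open bucket and enumerate its items."""
        items = list(self.buckets[self.next_open])
        self.next_open += 1
        return items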
Proof of Lemma 4.3. Our structure simulates the logical specification given earlier;
it actually consists of levels of bucket arrays. The level zero buckets are the ones
referred to in the bucket-heap’s specification, and the level i buckets preside over
disjoint intervals of 2i level zero buckets. The interval represented by a higher-level
bucket is the union of its component level zero buckets. Only one bucket at each level
is active: it is the first one that presides over no closed level zero buckets; see Figure
5. Suppose that an item x should logically be in the level zero bucket B. We maintain
the invariant that x is either descending and in the lowest active bucket presiding over
B, or ascending and in some active bucket presiding over level zero buckets before B.
To insert a node we put it in the first open level zero bucket and label it as
ascending. This clearly satisfies the invariant. The result of a decrease-key depends
on whether the node x is ascending or descending. Suppose x is ascending and in
a bucket (at some level) spanning the interval [a, b). If key(x) < b, we relabel it
as descending; otherwise we do nothing. If x is descending (or was just relabeled as
descending), we move it to the lowest level active bucket consistent with the invariant.
If x drops i ≥ 0 levels, we assume that this is accomplished in O(i + 1) time; i.e., we
search from its current level down, not from the bottom up.

Fig. 5. Closed buckets are marked with an X; active buckets are shaded. Also depicted is the
effect of closing the first open bucket.
Suppose that we close the first open level zero bucket B. According to the invari-
ant all items that are logically in B are descending and actually in B, so enumerating
them is no problem; there will, in general, be ascending items in B that do not log-
ically belong there. In order to maintain the invariant we must deactivate all active
buckets that preside over B (including B). Consider one such bucket at level i. If
i > 0, we move each descending node in it to the level i − 1 active bucket. For each
ascending node (at level i ≥ 0), depending on its key, we either move it to the level
i + 1 active bucket and keep it ascending, or relabel it descending and move it to the
proper active bucket at level ≤ i + 1.
From the invariant it follows that no node x appears in more than 2 log Δx + 1
distinct buckets: log Δx + 1 buckets as an ascending node and another log Δx as a
descending node. Aside from this cost of moving nodes around, the other costs are
clearly O(N ).
We remark that the bucket-heap need not actually label the items. Whether an
item is ascending or descending can be inferred from context.
Appendix B. The split-findmin problem. The split-findmin problem is to
maintain a collection of sequences of weighted elements under the following operations:

split(x): Split the sequence containing x into two sequences: the
elements up to and including x and the rest.
decrease-key(x, κ): Set key(x) = min{key(x), κ}.
findmin(x): Return the element in x’s sequence with minimum key.
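For orientation, this interface is trivial to realize if one ignores efficiency; the point of Gabow's structure and of Theorem 4.2 below is to beat the obvious costs. A toy version of ours, with O(sequence length) split and findmin:

class NaiveSplitFindmin:
    """Each sequence is a plain list of element ids.  Every operation is
    O(sequence length), far from the near-inverse-Ackermann bounds above."""
    def __init__(self, keys):
        self.key = list(keys)
        self.seq = {0: list(range(len(keys)))}
        self.seq_of = [0] * len(keys)
        self.fresh = 1

    def split(self, x):
        s = self.seq_of[x]
        members = self.seq[s]
        cut = members.index(x) + 1
        self.seq[s], tail = members[:cut], members[cut:]
        self.seq[self.fresh] = tail
        for y in tail:
            self.seq_of[y] = self.fresh
        self.fresh += 1

    def decrease_key(self, x, k):
        self.key[x] = min(self.key[x], k)

    def findmin(self, x):
        return min(self.seq[self.seq_of[x]], key=self.key.__getitem__)

sf = NaiveSplitFindmin([5.0, 2.0, 9.0, 4.0])
sf.split(1)                      # sequences are now [0, 1] and [2, 3]
sf.decrease_key(2, 1.0)
assert sf.findmin(0) == 1 and sf.findmin(3) == 2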

Gabow [G85a] gave an elegant algorithm for this problem that is nearly optimal.
On an initial sequence of n elements, it handles up to n − 1 splits and m decrease-keys
in O((m + n)α(m, n)) time. Gabow’s algorithm runs on a pointer machine [Tar79].
We now prove Theorem 4.2 from section 4.2.
Proof of Theorem 4.2. In Gabow’s decrease-key routine a sequence of roughly
α variables needs to be updated, although it is already known that their values are
monotonically decreasing. We observe that, on a pointer machine, the same task
can be accomplished in O(α) time using O(log α) comparisons for a binary search.
Using a simple two-level scheme, one can easily reduce the nα term in the running
time to n. This gives the split-findmin algorithm that performs O(m log α(m, n) + n)
comparisons.
To get a potentially faster algorithm on the RAM model we construct all possible
split-findmin solvers on inputs with at most q = log log n elements and choose one
that is close to optimal for all problem sizes. We then show how to compose this
optimal split-findmin solver on q elements with Gabow’s structure to get an optimal
solver on n elements.
We consider only instances with m′ < q^2 decrease-keys. If more decrease-keys are
actually encountered, we can revert to Gabow's algorithm [G85a] or a trivial one that
runs in O(m′) time.
We represent the state of the solver with three components: a bit-vector with
length q − 1, representing where the splits are; a directed graph H on no more than
q + m′ < q(q + 1) vertices, representing known inequalities between current keys and
older keys retired by decrease-key operations; and finally, a mapping from elements
to vertices in H. One may easily confirm that the state can be represented in no
more than 3q^4 = o(log n) bits. One may also confirm that a split or decrease-key can
update the state in O(1) time. We now turn to the findmin operation. Consider the
findmin-action function, which determines the next step in the findmin procedure. It
can be represented as

findmin-action : state × {1, . . . , q} → (V (H) × V (H)) ∪ {1, . . . , q},

where the first {1, . . . , q} represents the argument to the findmin query. The findmin-
action function can either perform a comparison (represented by V (H)×V (H)) which,
if performed, will alter the state, or return an answer to the findmin query, represented
by the second {1, . . . , q}. One simply applies the findmin-action function until it
produces an answer. We will represent the findmin-action function as a table. Since
the state is represented in o(log n) bits, we can keep it in one machine word; therefore,
computing the findmin-action function (and updating the state) takes constant time
on a RAM.
One can see that any split-findmin solver can be converted, without loss of effi-
ciency, into one that performs comparisons only during calls to findmin. Therefore,
finding the optimal findmin-action function is tantamount to finding the optimal split-
findmin solver.
We have now reduced the split-findmin problem to a brute force search over the
findmin-action function. There are less than F = 2^{3q^4} · q · (q^4 + q) < 2^{4q^4} distinct
findmin-action functions, most of which do not produce correct answers. There are
less than I = (2q + q^2(q + 1))^{q^2+3q} distinct instances of the problem, because the
number of decrease-keys is < q^2, findmins < 2q, and splits < q. Furthermore, each
operation can be a split or findmin, giving the 2q term, or a decrease-key, which
requires us to choose an element and where to fit its new key into the permutation,
giving the q^2(q + 1) term. Each findmin-action/problem instance pair can be tested
for correctness in V = O(q^2) time, and therefore all correct findmin-action functions
can be chosen in time F · I · V = 2^{O(q^4)}. For q = log log n this is o(n), meaning the
time for this brute force search does not affect the other constant factors involved.
How do we choose the optimal split-findmin solver? This is actually not a trivial
question because of the possibility of there not being one solver that dominates all
others on all input sizes. Consider charting the worst-case complexity of a solver S
as a function gS of the number of operations p in the input sequence. It is plausible
that certain solvers are optimal for only certain densities p/q. We need to show
that for some solver S ∗ , gS ∗ is within a constant factor of the lower envelope of
{gS}S, where S ranges over all correct solvers. Let Sk be the optimal solver for 2^k
operations. We let S∗ be the solver that mimics Sk from operations 2^{k−1} + 1 to
2^k. At operation 2^k it resets its state, reexecutes all 2^k operations under Sk+1, and
continues using Sk+1 until operation 2^{k+1}. Since gSk+1(2^{k+1}) ≤ 2 · gSk(2^k), it follows
that gS∗(p) ≤ 4 · minS{gS(p)}.
Our overall algorithm is very simple. We divide the n elements into n′ = n/q
superelements, each representing a contiguous block of q elements. Each unsplit se-
quence then consists of three parts: two subsequences in the leftmost and rightmost
superelements and a third subsequence consisting of unsplit superelements. We use
Gabow’s algorithm on the unsplit superelements, where the key of a superelement
is the minimum over constituent elements. For the superelements already split, we
use the S ∗ split-findmin solver constructed as above. The cost of Gabow’s algo-
rithm is O((m + n/q)α(m, n/q)) = O(m + n), and the cost of using S ∗ on each
superelement is Θ(split-findmin(m, n)) by construction; therefore the overall cost is
Θ(split-findmin(m, n)).
One can easily extend the proof to randomized split-findmin solvers by defining
the findmin-action as selecting a distribution over actions.
We note that the time bound of Theorem 4.2 on pointer machines is provably
optimal. La Poutré [LaP96] gave a lower bound on the pointer machine complexity of
the split-find problem, which is subsumed by the split-findmin problem. The results
in this section address the RAM complexity and decision-tree complexity of split-
findmin, which are unrelated to La Poutré’s result.

REFERENCES

[AGM97] N. Alon, Z. Galil, and O. Margalit, On the exponent of the all pairs shortest path
problem, J. Comput. System Sci., 54 (1997), pp. 255–262.
[AHU76] A. V. Aho, J. E. Hopcroft, and J. D. Ullman, On finding lowest common ancestors
in trees, SIAM J. Comput., 5 (1976), pp. 115–132.
[AMO93] R. K. Ahuja, T. L. Magnati, and J. B. Orlin, Network Flows: Theory, Algorithms,
and Applications, Prentice–Hall, Englewood Cliffs, NJ, 1993.
[AMOT90] R. K. Ahuja, K. Mehlhorn, J. B. Orlin, and R. E. Tarjan, Faster algorithms for
the shortest path problem, J. ACM, 37 (1990), pp. 213–223.
[BKRW98] A. L. Buchsbaum, H. Kaplan, A. Rogers, and J. R. Westbrook, Linear-time
pointer-machine algorithms for LCAs, MST verification, and dominators, in Pro-
ceedings of the 30th ACM Symposium on Theory of Computing (STOC), Dallas,
TX, 1998, ACM, New York, 1998, pp. 279–288.
[BTZ98] G. S. Brodal, J. L. Träff, and C. D. Zaroliagis, A parallel priority queue with
constant time operations, J. Parallel and Distrib. Comput., 49 (1998), pp. 4–21.
[Chaz00] B. Chazelle, A minimum spanning tree algorithm with inverse-Ackermann type com-
plexity, J. ACM, 47 (2000), pp. 1028–1047.
[CLRS01] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algo-
rithms, MIT Press, Cambridge, MA, 2001.
[CMMS98] A. Crauser, K. Mehlhorn, U. Meyer, and P. Sanders, A parallelization of Di-
jkstra’s shortest path algorithm, in Proceedings of the 23rd International Sympo-
sium on Mathematical Foundations of Computer Science (MFCS), Lecture Notes
in Comput. Sci. 1450, Springer, New York, 1998, pp. 722–731.
[Dij59] E. W. Dijkstra, A note on two problems in connexion with graphs, Numer. Math., 1
(1959), pp. 269–271.
[Din78] E. A. Dinic, Economical algorithms for finding shortest paths in a network, Trans-
portation Modeling Systems, (1978), pp. 36–44 (in Russian).
[Din03] Y. Dinitz, Personal communication, Ben-Gurion University, Be’er Sheva, Israel, 2003.
[FG85] A. M. Frieze and G. R. Grimmett, The shortest-path problem for graphs with random
arc-lengths, Discrete Appl. Math., 10 (1985), pp. 57–77.
[FR01] J. Fakcharoenphol and S. Rao, Planar graphs, negative weight edges, shortest paths,
and near linear time, in Proceedings of the 42nd IEEE Symposium on Foundations
of Computer Science (FOCS), Las Vegas, NV, 2001, IEEE Press, Piscataway, NJ,
pp. 232–241.
[F91] G. N. Frederickson, Planar graph decomposition and all pairs shortest paths, J. ACM,
38 (1991), pp. 162–204.
[F76] M. L. Fredman, New bounds on the complexity of the shortest path problem, SIAM
J. Comput., 5 (1976), pp. 83–89.
[FT87] M. L. Fredman and R. E. Tarjan, Fibonacci heaps and their uses in improved network
optimization algorithms, J. ACM, 34 (1987), pp. 596–615.
[FW93] M. L. Fredman and D. E. Willard, Surpassing the information-theoretic bound with
fusion trees, J. Comput. System Sci., 47 (1993), pp. 424–436.
[G01] A. V. Goldberg, A simple shortest path algorithm with linear average time, in Pro-
ceedings of the 9th European Symposium on Algorithms (ESA), Lecture Notes in
Comput. Sci. 2161, Springer, New York, 2001, pp. 230–241.
[G85a] H. N. Gabow, A scaling algorithm for weighted matching on general graphs, in Proceed-
ings of the 26th IEEE Symposium on Foundations of Computer Science (FOCS),
Portland, OR, 1985, IEEE Press, Piscataway, NJ, pp. 90–100.
[G85b] H. N. Gabow, Scaling algorithms for network problems, J. Comput. System Sci., 31
(1985), pp. 148–168.
[G95] A. V. Goldberg, Scaling algorithms for the shortest paths problem, SIAM J. Comput.,
24 (1995), pp. 494–504.
[GM97] Z. Galil and O. Margalit, All pairs shortest distances for graphs with small integer
length edges, Inform. and Comput., 134 (1997), pp. 103–139.
[GR98] A. V. Goldberg and S. Rao, Beyond the flow decomposition barrier, J. ACM, 45
(1998), pp. 783–797.
[GT89] H. N. Gabow and R. E. Tarjan, Faster scaling algorithms for network problems,
SIAM J. Comput., 18 (1989), pp. 1013–1036.
[GT91] H. N. Gabow and R. E. Tarjan, Faster scaling algorithms for general graph-matching
problems, J. ACM, 38 (1991), pp. 815–853.
[GYY80] R. L. Graham, A. C. Yao, and F. F. Yao, Information bounds are weak in the shortest
distance problem, J. ACM, 27 (1980), pp. 428–444.
[Hag00] T. Hagerup, Improved shortest paths on the word RAM, in Proceedings of the 27th
International Colloquium on Automata, Languages, and Programming (ICALP),
Lecture Notes in Comput. Sci. 1853, Springer, New York, 2000, pp. 61–72.
[Hag04] T. Hagerup, Simpler computation of single-source shortest paths in linear average
time, in Proceedings in the 21st Annual Symposium on Theoretical Aspects of Com-
puter Science (STACS), Montpellier, France, 2004, Springer, New York, pp. 362–
369.
[Han04] Y. Han, Improved algorithm for all pairs shortest paths, Inform. Process. Lett., 91
(2004), pp. 245–250.
[HKRS97] M. R. Henzinger, P. N. Klein, S. Rao, and S. Subramanian, Faster shortest path
algorithms for planar graphs, J. Comput. System Sci., 55 (1997), pp. 3–23.
[HT02] Y. Han and M. Thorup, Integer sorting in O(n√(log log n)) expected time and linear
space, in Proceedings of the 43rd Annual Symposium on Foundations of Computer
Science (FOCS), Vancouver, 2002, IEEE Press, Piscataway, NJ, pp. 135–144.
[J77] D. B. Johnson, Efficient algorithms for shortest paths in sparse networks, J. ACM, 24
(1977), pp. 1–13.
[K70] L. R. Kerr, The Effect of Algebraic Structure on the Computational Complexity of
Matrix Multiplication, Technical report TR70-75, Computer Science Department,
Cornell University, Ithaca, NY, 1970.
[KKP93] D. R. Karger, D. Koller, and S. J. Phillips, Finding the hidden path: Time bounds
for all-pairs shortest paths, SIAM J. Comput., 22 (1993), pp. 1199–1217.
[KKT95] D. R. Karger, P. N. Klein, and R. E. Tarjan, A randomized linear-time algorithm
for finding minimum spanning trees, J. ACM, 42 (1995), pp. 321–329.
[KS97] P. N. Klein and S. Subramanian, A randomized parallel algorithm for single-source
shortest paths, J. Algorithms, 25 (1997), pp. 205–220.
[KS98] S. G. Kolliopoulos and C. Stein, Finding real-valued single-source shortest paths in
o(n^3) expected time, J. Algorithms, 28 (1998), pp. 125–141.
[LaP96] H. LaPoutré, Lower bounds for the union-find and the split-find problem on pointer
machines, J. Comput. System Sci., 52 (1996), pp. 87–99.
[M01] U. Meyer, Single-source shortest-paths on arbitrary directed graphs in linear average-
case time, in Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete
Algorithms (SODA), Washington, DC, 2001, SIAM, Philadelphia, pp. 797–806.
[M02] U. Meyer, Buckets strike back: Improved parallel shortest-paths, in Proceedings of
the 16th International Parallel and Distributed Processing Symposium (IPDPS),
Ft. Lauderdale, FL, 2002, IEEE Computer Society Press, Los Alamitos, CA, pp. 75–
82.
[Mit00] J. S. B. Mitchell, Geometric shortest paths and network optimization, in Handbook
of Computational Geometry, North–Holland, Amsterdam, 2000, pp. 633–701.
[MN00] K. Mehlhorn and S. Näher, LEDA: A Platform for Combinatorial and Geometric
Computing, Cambridge University Press, Cambridge, UK, 2000.
[MT87] A. Moffat and T. Takaoka, An all pairs shortest path algorithm with expected time
O(n^2 log n), SIAM J. Comput., 16 (1987), pp. 1023–1031.
[Pet02b] S. Pettie, On the comparison-addition complexity of all-pairs shortest paths, in Pro-
ceedings of the 13th International Symposium on Algorithms and Computation
(ISAAC’02), Vancouver, 2002, Springer, New York, pp. 32–43.
[Pet03] S. Pettie, On the Shortest Path and Minimum Spanning Tree Problems, Ph.D.
thesis, Department of Computer Sciences, The University of Texas at Austin,
Austin, TX, 2003; also available online as Technical report TR-03-35 at
http://www.cs.utexas.edu/ftp/pub/techreports/tr03-35.ps.gz.
[Pet04] S. Pettie, A new approach to all-pairs shortest paths on real-weighted graphs, Spe-
cial Issue of Selected Papers from the 29th International Colloqium on Automata
Languages and Programming (ICALP 2002), Theoret. Comput. Sci., 312 (2004),
pp. 47–74.
[PR02a] S. Pettie and V. Ramachandran, Computing shortest paths with comparisons and
additions, in Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete
Algorithms (SODA), San Francisco, CA, 2002, SIAM, Philadelphia, pp. 267–276.
[PR02b] S. Pettie and V. Ramachandran, Minimizing randomness in minimum spanning
tree, parallel connectivity, and set maxima algorithms, in Proceedings of the 13th
Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), San Francisco,
CA, 2002, SIAM, Philadelphia, pp. 713–722.
[PR02c] S. Pettie and V. Ramachandran, An optimal minimum spanning tree algorithm,
J. ACM, 49 (2002), pp. 16–34.
[PRS02] S. Pettie, V. Ramachandran, and S. Sridhar, Experimental evaluation of a new
shortest path algorithm, in Proceedings of the 4th Workshop on Algorithm En-
gineering and Experiments (ALENEX), San Francisco, CA, 2002, Springer, New
York, pp. 126–142.
[Sei95] R. Seidel, On the all-pairs-shortest-path problem in unweighted undirected graphs,
J. Comput. System Sci., 51 (1995), pp. 400–403.
[SP75] P. M. Spira and A. Pan, On finding and updating spanning trees and shortest paths,
SIAM J. Comput., 4 (1975), pp. 375–380.
[Spi73] P. M. Spira, A new algorithm for finding all shortest paths in a graph of positive arcs
in average time O(n^2 log^2 n), SIAM J. Comput., 2 (1973), pp. 28–32.
[SZ99] A. Shoshan and U. Zwick, All pairs shortest paths in undirected graphs with integer
weights, in Proceedings of the 40th Annual IEEE Symposium on Foundations of
Computer Science (FOCS), New York, 1999, IEEE Press, Piscataway, NJ, pp. 605–
614.
[Tak92] T. Takaoka, A new upper bound on the complexity of the all pairs shortest path prob-
lem, Inform. Process. Lett., 43 (1992), pp. 195–199.
[Tak98] T. Takaoka, Subcubic cost algorithms for the all pairs shortest path problem, Algo-
rithmica, 20 (1998), pp. 309–318.
[Tar79] R. E. Tarjan, A class of algorithms which require nonlinear time to maintain disjoint
sets, J. Comput. System Sci., 18 (1979), pp. 110–127.
[Tar79b] R. E. Tarjan, Applications of path compression on balanced trees, J. ACM, 26 (1979),
pp. 690–715.
[Tar82] R. E. Tarjan, Sensitivity analysis of minimum spanning trees and shortest path trees,
Inform. Process. Lett., 14 (1982), pp. 30–33; Corrigendum, Inform. Process. Lett.,
23 (1986), p. 219.
[Tho00] M. Thorup, Floats, integers, and single source shortest paths, J. Algorithms, 35 (2000),
pp. 189–201.
[Tho03] M. Thorup, Integer priority queues with decrease key in constant time and the single
source shortest paths problem, in Proceedings of the 35th Annual ACM Symposium
on Theory of Computing (STOC), San Diego, CA, 2003, ACM, New York, pp. 149–
158.
[Tho99] M. Thorup, Undirected single-source shortest paths with positive integer weights in
linear time, J. ACM, 46 (1999), pp. 362–394.
[TZ96] J. L. Träff and C. D. Zaroliagis, A simple parallel algorithm for the single-source
shortest path problem on planar digraphs, in Parallel Algorithms for Irregularly
Structured Problems, Lecture Notes in Comput. Sci. 1117, Springer, New York,
1996, pp. 183–194.
[Z01] U. Zwick, Exact and approximate distances in graphs—A survey, in Proceedings of the
9th European Symposium on Algorithms (ESA), University of Aarhus, Denmark,
2001, pp. 33–48; available online at http://www.cs.tau.ac.il/∼zwick/.
[Z02] U. Zwick, All pairs shortest paths using bridging sets and rectangular matrix multipli-
cation, J. ACM, 49 (2002), pp. 289–317.
[Z04] U. Zwick, A slightly improved sub-cubic algorithm for the all pairs shortest paths prob-
lem with real edge lengths, in Proceedings of the 15th International Symposium
on Algorithms and Computation (ISAAC), Lecture Notes in Comput. Sci. 3341,
Springer, New York, 2004, pp. 921–932.