Fundamental Data Structures
Contents
Articles
Introduction
Abstract data type
Data structure
Analysis of algorithms
Amortized analysis
Accounting method
Potential method
Sequences
Array data type
Array data structure
Dynamic array
Linked list
Double-ended queue
Circular buffer
Dictionaries
Associative array
Association list
Hash table
Linear probing
Quadratic probing
Double hashing
Cuckoo hashing
Hopscotch hashing
Hash function
Universal hashing
K-independent hashing
Tabulation hashing
Sets
Bit array
Bloom filter
MinHash
Partition refinement
Priority queues
Priority queue
Binary heap
d-ary heap
Binomial heap
Fibonacci heap
Pairing heap
Soft heap
Tree rotation
Treap
AVL tree
Red–black tree
Scapegoat tree
Splay tree
Tango tree
Skip list
B-tree
B+ tree
Radix tree
Suffix tree
Suffix array
Fusion tree
References
Article Sources and Contributors
Article Licenses
License
Introduction
Abstract data type
In computer science, an abstract data type (ADT) is a mathematical model for a certain class of data structures that
have similar behavior; or for certain data types of one or more programming languages that have similar semantics.
An abstract data type is defined indirectly, only by the operations that may be performed on it and by mathematical
constraints on the effects (and possibly cost) of those operations.[1]
For example, an abstract stack could be defined by three operations: push, that inserts some data item onto the
structure, pop, that extracts an item from it (with the constraint that each pop always returns the most recently
pushed item that has not been popped yet), and peek, that allows data on top of the structure to be examined without
removal. When analyzing the efficiency of algorithms that use stacks, one may also specify that all operations take
the same time no matter how many items have been pushed into the stack, and that the stack uses a constant amount
of storage for each element.
Abstract data types are purely theoretical entities, used (among other things) to simplify the description of abstract
algorithms, to classify and evaluate data structures, and to formally describe the type systems of programming
languages. However, an ADT may be implemented by specific data types or data structures, in many ways and in
many programming languages; or described in a formal specification language. ADTs are often implemented as
modules: the module's interface declares procedures that correspond to the ADT operations, sometimes with
comments that describe the constraints. This information hiding strategy allows the implementation of the module to
be changed without disturbing the client programs.
The term abstract data type can also be regarded as a generalisation of a number of algebraic structures,
such as lattices, groups, and rings.[2] This can be treated as part of the subject area of artificial intelligence. The
notion of abstract data types is related to the concept of data abstraction, which is important in object-oriented
programming and design-by-contract methodologies for software development.
Imperative view
In the "imperative" view, which is closer to the philosophy of imperative programming languages, an abstract data
structure is conceived as an entity that is mutable meaning that it may be in different states at different times.
Some operations may change the state of the ADT; therefore, the order in which operations are evaluated is
important, and the same operation on the same entities may have different effects if executed at different times
just like the instructions of a computer, or the commands and procedures of an imperative language. To underscore
this view, it is customary to say that the operations are executed or applied, rather than evaluated. The imperative
style is often used when describing abstract algorithms; this is the approach taken, for example, by Donald E. Knuth
in The Art of Computer Programming.
Typical operations
Some operations that are often specified for ADTs (possibly under other names) are
compare(s,t), that tests whether two structures are equivalent in some sense;
hash(s), that computes some standard hash function from the instance's state;
print(s) or show(s), that produces a human-readable representation of the structure's state.
In imperative-style ADT definitions, one often finds also
create(), that yields a new instance of the ADT;
initialize(s), that prepares a newly created instance s for further operations, or resets it to some "initial
state";
copy(s,t), that puts instance s in a state equivalent to that of t;
clone(t), that performs s ← new(), copy(s,t), and returns s;
free(s) or destroy(s), that reclaims the memory and other resources used by s;
The free operation is not normally relevant or meaningful, since ADTs are theoretical entities that do not "use
memory". However, it may be necessary when one needs to analyze the storage used by an algorithm that uses the
ADT. In that case one needs additional axioms that specify how much memory each ADT instance uses, as a
function of its state, and how much of it is returned to the pool by free.
Examples
Some common ADTs, which have proved useful in a great variety of applications, are
Container
Deque
List
Map
Multimap
Multiset
Priority queue
Queue
Set
Stack
String
Tree
Graph
Each of these ADTs may be defined in many ways and variants, not necessarily equivalent. For example, a stack
ADT may or may not have a count operation that tells how many items have been pushed and not yet popped.
This choice makes a difference not only for its clients but also for the implementation.
Implementation
Implementing an ADT means providing one procedure or function for each abstract operation. The ADT instances
are represented by some concrete data structure that is manipulated by those procedures, according to the ADT's
specifications.
Usually there are many ways to implement the same ADT, using several different concrete data structures. Thus, for
example, an abstract stack can be implemented by a linked list or by an array.
An ADT implementation is often packaged as one or more modules, whose interface contains only the signature
(number and types of the parameters and results) of the operations. The implementation of the module namely,
the bodies of the procedures and the concrete data structure used can then be hidden from most clients of the
module. This makes it possible to change the implementation without affecting the clients.
When implementing an ADT, each instance (in imperative-style definitions) or each state (in functional-style
definitions) is usually represented by a handle of some sort.[3]
Modern object-oriented languages, such as C++ and Java, support a form of abstract data types. When a class is used
as a type, it is an abstract type that refers to a hidden representation. In this model an ADT is typically implemented
as a class, and each instance of the ADT is an object of that class. The module's interface typically declares the
constructors as ordinary procedures, and most of the other ADT operations as methods of that class. However, such
an approach does not easily encapsulate multiple representational variants found in an ADT. It also can undermine
the extensibility of object-oriented programs. In a pure object-oriented program that uses interfaces as types, types
refer to behaviors not representations.
As an example, the abstract stack above can be given an imperative-style interface in C:

typedef struct stack_Rep stack_Rep;          /* Type: instance representation (an opaque record). */
typedef stack_Rep *stack_T;                  /* Type: handle to a stack instance (an opaque pointer). */
typedef void *stack_Item;                    /* Type: value stored in a stack instance (arbitrary address). */

stack_T stack_create(void);                  /* Create a new stack instance, initially empty. */
void stack_push(stack_T s, stack_Item e);    /* Add an item at the top of the stack. */
stack_Item stack_pop(stack_T s);             /* Remove the top item from the stack and return it. */
int stack_empty(stack_T s);                  /* Check whether the stack is empty. */

This interface could be used as follows:

int foo = 17;                                /* An arbitrary datum. */
stack_T t = stack_create();                  /* Create a stack instance, initially empty. */
stack_push(t, &foo);                         /* Push the address of foo onto the stack. */
void *e = stack_pop(t);                      /* Remove the top item from the stack and return it. */
if (stack_empty(t)) { /* ... */ }            /* Do something if the stack is empty. */
This interface can be implemented in many ways. The implementation may be arbitrarily inefficient, since the formal
definition of the ADT, above, does not specify how much space the stack may use, nor how long each operation
should take. It also does not specify whether the stack state t continues to exist after a call s ← pop(t).
In practice the formal definition should specify that the space is proportional to the number of items pushed and not
yet popped; and that every one of the operations above must finish in a constant amount of time, independently of
that number. To comply with these additional specifications, the implementation could use a linked list, or an array
(with dynamic resizing) together with two integers (an item count and the array size).
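For illustration, the interface above could be implemented with a singly linked list, which meets both requirements: constant time per operation and constant storage per element. The following is only a sketch (error handling is omitted, and the node type and layout are assumptions; the ADT itself does not prescribe any particular representation):

#include <stdlib.h>

/* Completes the opaque types declared in the interface above. */
struct stack_Node { void *item; struct stack_Node *next; };
struct stack_Rep  { struct stack_Node *top; };

stack_T stack_create(void) {
    stack_T s = malloc(sizeof *s);       /* one small header record per stack */
    s->top = NULL;
    return s;
}

void stack_push(stack_T s, stack_Item e) {
    struct stack_Node *n = malloc(sizeof *n);   /* constant extra space per item */
    n->item = e;
    n->next = s->top;
    s->top  = n;                                /* constant time */
}

stack_Item stack_pop(stack_T s) {
    struct stack_Node *n = s->top;       /* assumes the stack is not empty */
    stack_Item e = n->item;
    s->top = n->next;
    free(n);
    return e;
}

int stack_empty(stack_T s) { return s->top == NULL; }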
Functional-style interface
Functional-style ADT definitions are more appropriate for functional programming languages, and vice-versa.
However, one can provide a functional style interface even in an imperative language like C. For example:
typedef struct stack_Rep stack_Rep;            /* Type: stack state representation (an opaque record). */
typedef stack_Rep *stack_T;                    /* Type: handle to a stack state (an opaque pointer). */
typedef void *stack_Item;                      /* Type: item (arbitrary address). */

stack_T stack_empty(void);                     /* Returns the empty stack state. */
stack_T stack_push(stack_T s, stack_Item x);   /* Adds x at the top of s, returns the resulting state. */
stack_Item stack_top(stack_T s);               /* Returns the item currently at the top of s. */
stack_T stack_pop(stack_T s);                  /* Removes the top item from s, returns the resulting state. */
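With this interface, each operation returns a new state value rather than modifying an existing one. A brief usage sketch (assuming some implementation of the declarations above, and ignoring the storage reclamation issue discussed below):

int x = 17, y = 42;
stack_T s0 = stack_empty();         /* the empty stack state                        */
stack_T s1 = stack_push(s0, &x);    /* a state with x on top                        */
stack_T s2 = stack_push(s1, &y);    /* a state with y on top of x                   */
stack_Item top = stack_top(s2);     /* yields &y; s2 itself is unchanged            */
stack_T s3 = stack_pop(s2);         /* a state equivalent to s1; s1 and s2 remain valid */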
The main problem is that C lacks garbage collection, and this makes this style of programming impractical;
moreover, memory allocation routines in C are slower than allocation in a typical garbage collector, thus the
performance impact of so many allocations is even greater.
ADT libraries
Many modern programming languages, such as C++ and Java, come with standard libraries that implement several
common ADTs, such as those listed above.
References
[1] Barbara Liskov, Programming with Abstract Data Types, in Proceedings of the ACM SIGPLAN Symposium on Very High Level Languages,
pp. 50--59, 1974, Santa Monica, California
[2] Chapter 7, section 40.
[3] Definition 4.4.
Further reading
Mitchell, John C.; Plotkin, Gordon (July 1988). "Abstract Types Have Existential Type" (http://theory.stanford.
edu/~jcm/papers/mitch-plotkin-88.pdf). ACM Transactions on Programming Languages and Systems 10 (3).
External links
Abstract data type (http://www.nist.gov/dads/HTML/abstractDataType.html) in NIST Dictionary of
Algorithms and Data Structures
Walls and Mirrors, the classic textbook
Data structure
In computer science, a data structure is
a particular way of storing and
organizing data in a computer so that it
can be used efficiently.[1][2]
Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to
specific tasks. For example, B-trees are particularly well-suited for the implementation of databases, while
compiler implementations usually use hash tables to look up identifiers.
(Figure: a hash table.)
Data structures provide a means to manage large amounts of data efficiently, such as large databases and internet
indexing services. Usually, efficient data
structures are a key to designing efficient algorithms. Some formal design methods and programming languages
emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and
retrieving can be carried out on data stored in both main memory and in secondary memory.
Overview
An array stores a number of elements in a specific order. They are accessed using an integer to specify which
element is required (although the elements may be of almost any type). Arrays may be fixed-length or
expandable.
Records (also called tuples or structs) are among the simplest data structures. A record is a value that contains
other values, typically in fixed number and sequence and typically indexed by names. The elements of records are
usually called fields or members.
A hash table (also called a dictionary or map) is a more flexible variation on a record, in which name-value
pairs can be added and deleted freely.
A union type specifies which of a number of permitted primitive types may be stored in its instances, e.g. "float
or long integer". Contrast with a record, which could be defined to contain a float and an integer; whereas, in a
union, there is only one value at a time.
A tagged union (also called a variant, variant record, discriminated union, or disjoint union) contains an
additional field indicating its current type, for enhanced type safety (see the C sketch after this list).
A set is an abstract data structure that can store specific values, without any particular order, and no repeated
values. Values themselves are not retrieved from sets, rather one tests a value for membership to obtain a boolean
"in" or "not in".
Graphs and trees are linked abstract data structures composed of nodes. Each node contains a value and also one
or more pointers to other nodes. Graphs can be used to represent networks, while trees are generally used for
sorting and searching, having their nodes arranged in some relative order based on their values.
An object contains data fields, like a record, and also contains program code fragments for accessing or
modifying those fields. Data structures not containing code, like those above, are called plain old data structures.
Many others are possible, but they tend to be further variations and compounds of the above.
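As a small illustration of the record, union, and tagged union structures described above, in C (the type and field names are arbitrary):

#include <stdio.h>

/* A record: a fixed set of named fields. */
struct point { double x, y; };

/* A tagged union: a tag field records which member of the union is currently valid. */
struct number {
    enum { AS_INT, AS_FLOAT } tag;
    union { long i; double f; } value;
};

int main(void) {
    struct point p = { 1.0, 2.0 };
    struct number n = { .tag = AS_FLOAT, .value = { .f = 3.5 } };

    printf("point (%g, %g)\n", p.x, p.y);
    if (n.tag == AS_FLOAT)              /* the tag tells us which member to read */
        printf("number %g\n", n.value.f);
    return 0;
}

Reading n.value.f is only meaningful while n.tag is AS_FLOAT; the tag is what gives the tagged union its added type safety.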
Basic principles
Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory,
specified by an address, a bit string that can be itself stored in memory and manipulated by the program. Thus the
record and array data structures are based on computing the addresses of data items with arithmetic operations; while
the linked data structures are based on storing addresses of data items within the structure itself. Many data structures
use both principles, sometimes combined in non-trivial ways (as in XOR linking).
The implementation of a data structure usually requires writing a set of procedures that create and manipulate
instances of that structure. The efficiency of a data structure cannot be analyzed separately from those operations.
This observation motivates the theoretical concept of an abstract data type, a data structure that is defined indirectly
by the operations that may be performed on it, and the mathematical properties of those operations (including their
space and time cost).
Language support
Most assembly languages and some low-level languages, such as BCPL (Basic Combined Programming Language),
lack support for data structures. Many high-level programming languages and some higher-level assembly
languages, such as MASM, on the other hand, have special syntax or other built-in support for certain data
structures, such as vectors (one-dimensional arrays) in the C language or multi-dimensional arrays in Pascal.
Most programming languages feature some sort of library mechanism that allows data structure implementations to
be reused by different programs. Modern languages usually come with standard libraries that implement the most
common data structures. Examples are the C++ Standard Template Library, the Java Collections Framework, and
Microsoft's .NET Framework.
Modern languages also generally support modular programming, the separation between the interface of a library
module and its implementation. Some provide opaque data types that allow clients to hide implementation details.
Object-oriented programming languages, such as C++, Java and Smalltalk may use classes for this purpose.
Many known data structures have concurrent versions that allow multiple computing threads to access the data
structure simultaneously.
References
[1] Paul E. Black (ed.), entry for data structure in Dictionary of Algorithms and Data Structures. U.S. National Institute of Standards and
Technology. 15 December 2004. Online version (http://www.itl.nist.gov/div897/sqg/dads/HTML/datastructur.html) Accessed May 21, 2009.
[2] Entry data structure in the Encyclopædia Britannica (2009) Online entry (http://www.britannica.com/EBchecked/topic/152190/data-structure) accessed on May 21, 2009.
Further reading
Peter Brass, Advanced Data Structures, Cambridge University Press, 2008.
Donald Knuth, The Art of Computer Programming, vol. 1. Addison-Wesley, 3rd edition, 1997.
Dinesh Mehta and Sartaj Sahni Handbook of Data Structures and Applications, Chapman and Hall/CRC Press,
2007.
Niklaus Wirth, Algorithms and Data Structures, Prentice Hall, 1985.
Diane Zak, An Introduction to Programming with C++, Cengage Learning Asia, 2011.
Analysis of algorithms
In computer science, the analysis of algorithms is the determination of the amount of resources (such as time and
storage) necessary to execute them. Most algorithms are designed to work with inputs of arbitrary length. Usually,
the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps
(time complexity) or storage locations (space complexity).
Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical
estimates for the resources needed by any algorithm which solves a given computational problem. These estimates
provide an insight into reasonable directions of search for efficient algorithms.
In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to
estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta
notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the
logarithm of the length of the list being searched, or in O(log(n)), colloquially "in logarithmic time". Usually
asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency.
However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant
multiplicative factor called a hidden constant.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain
assumptions concerning the particular implementation of the algorithm, called model of computation. A model of
computation may be defined in terms of an abstract computer, e.g., Turing machine, and/or by postulating that
certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n
elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2
n + 1 time units are needed to return an answer.
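For instance, a simple iterative binary search in C (an illustrative sketch; the array contents are arbitrary) performs at most log2(n) + 1 probes of the list:

#include <stdio.h>

/* Binary search over a sorted int array; returns the index of key or -1.
   *steps receives the number of probes, which is at most log2(n) + 1. */
int binary_search(const int *a, int n, int key, int *steps) {
    int lo = 0, hi = n - 1;
    *steps = 0;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        ++*steps;                       /* one unit-time lookup and comparison */
        if (a[mid] == key) return mid;
        if (a[mid] < key)  lo = mid + 1;
        else               hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[16];
    for (int i = 0; i < 16; i++) a[i] = 2 * i;   /* sorted: 0, 2, 4, ..., 30 */
    int steps, idx = binary_search(a, 16, 31, &steps);
    printf("result %d after %d probes (at most log2(16) + 1 = 5)\n", idx, steps);
    return 0;
}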
Cost models
Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the
actual execution time, the time required to perform a step must be guaranteed to be bounded above by a constant.
One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption
may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily
large, the time required by a single addition can no longer be assumed to be constant.
Two cost models are generally used:[1]
the uniform cost model, also called uniform-cost measurement (and similar variations), assigns a constant cost
to every machine operation, regardless of the size of the numbers involved
the logarithmic cost model, also called logarithmic-cost measurement (and variations thereof), assigns a cost to
every machine operation proportional to the number of bits involved
The latter is more cumbersome to use, so it's only employed when necessary, for example in the analysis of
arbitrary-precision arithmetic algorithms, like those used in cryptography.
A key point which is often overlooked is that published lower bounds for problems are often given for a model of
computation that is more restricted than the set of operations that you could use in practice and therefore there are
algorithms that are faster than what would naively be thought possible.[2]
Run-time analysis
Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or
run-time) of an algorithm as its input size (usually denoted as n) increases. Run-time efficiency is a topic of great
interest in computer science: A program can take seconds, hours or even years to finish executing, depending on
which algorithm it implements (see also performance analysis, which is the analysis of an algorithm's run-time in
practice).
Consider, as an example, a program that looks up a specific entry in a sorted list of size n. Suppose this program is
implemented on Computer A, a faster machine, using a linear search algorithm, and on Computer B, a slower
machine, using a binary search algorithm. Benchmark testing of the two programs might produce figures like these:

n (list size)    Computer A run-time (in nanoseconds)    Computer B run-time (in nanoseconds)
15                                                        100,000
65               32                                       150,000
250              125                                      200,000
1,000            500                                      250,000
Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is
far superior in efficiency to that of Computer B. However, if the size of the input-list is increased to a sufficient
number, that conclusion is dramatically demonstrated to be in error:
n (list size)    Computer A run-time (in nanoseconds)    Computer B run-time (in nanoseconds)
15                                                        100,000
65               32                                       150,000
250              125                                      200,000
1,000            500                                      250,000
...              ...                                      ...
1,000,000        500,000                                  500,000
4,000,000        2,000,000                                550,000
16,000,000       8,000,000                                600,000
...              ...                                      ...
                                                          1,375,000 ns, or 1.375 milliseconds
Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly
proportional to its input size. Doubling the input size doubles the run time, quadrupling the input size quadruples the
run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic
growth rate. Doubling the input size only increases the run time by a constant amount (in this example, 25,000 ns).
Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time
because it's running an algorithm with a much slower growth rate.
Orders of growth
Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a
certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of
that algorithm. In other words, for a given input size n greater than some n0 and a constant c, the running time of that
algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example,
since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of
order O(n²).
Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be
used to express the average-case; for example, the worst-case scenario for quicksort is O(n²), but the average-case
run-time is O(n log n).[3]
Empirical orders of growth
Assuming the run-time follows a power rule, t ≈ k·n^a, the coefficient a can be estimated empirically by measuring
the run times t1 and t2 at two problem sizes n1 and n2 and computing a = log(t2/t1) / log(n2/n1). If the order of
growth indeed follows the power rule, the empirical value of a will stay constant at different ranges, and if not, it
will change; but it can still serve to compare any two given algorithms as to their empirical local orders of growth.
Applied to the above table:
n (list size)   Computer A run-time    Local order of     Computer B run-time    Local order of
                (in nanoseconds)       growth (n^a)       (in nanoseconds)       growth (n^a)
15                                                        100,000
65              32                     1.04               150,000                0.28
250             125                    1.01               200,000                0.21
1,000           500                    1.00               250,000                0.16
...
1,000,000       500,000                1.00               500,000                0.10
4,000,000       2,000,000              1.00               550,000                0.07
16,000,000      8,000,000              1.00               600,000                0.06
...
It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule. The
empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any
case has much lower local orders of growth (and improving further still), empirically, than the first one.
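The analysis below works through the run-time of a simple nested-loop algorithm with seven steps. The following C sketch shows an algorithm of that shape (the input handling and printed output are illustrative assumptions; only the step structure, numbered 1 through 7 in the comments, matters for the analysis):

#include <stdio.h>

int main(void) {
    int n, i, j;
    if (scanf("%d", &n) != 1) return 1;             /* step 1: get a positive integer n from input */
    if (n > 10)                                     /* step 2 */
        printf("This might take a while...\n");     /* step 3 */
    for (i = 1; i <= n; i++)                        /* step 4: outer loop test and increment */
        for (j = 1; j <= i; j++)                    /* step 5: inner loop, j runs from 1 to i */
            printf("%d\n", i * j);                  /* step 6: inner loop body */
    printf("Done!\n");                              /* step 7 */
    return 0;
}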
A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out
this algorithm. The specific amount of time to carry out a given instruction will vary depending on which instruction
is being executed and which computer is executing it, but on a conventional computer, this amount will be
deterministic.[5] Say that the actions carried out in step 1 are considered to consume time T1, step 2 uses time T2, and
so forth.
In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that
step 3 will be run as well. Thus the total amount of time to run steps 1-3 and step 7 is: T1 + T2 + T3 + T7.
The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute ( n + 1 ) times (note
that an extra step is required to terminate the for loop, hence n + 1 and not n executions), which will consume T4( n +
1 ) time. The inner loop, on the other hand, is governed by the value of i: on each pass through the outer loop, j iterates from 1 to i. On the first pass
through the outer loop, j iterates from 1 to 1: The inner loop makes one pass, so running the inner loop body (step 6)
consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j
iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time,
and the inner loop test (step 5) consumes 3T5 time.
Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression:
T6 + 2T6 + 3T6 + ... + nT6
which can be factored[6] as
[1 + 2 + 3 + ... + n] T6 = [n(n + 1)/2] T6 = [(n² + n)/2] T6
The total time required to run the inner loop test can be evaluated similarly:
2T5 + 3T5 + 4T5 + ... + (n + 1)T5
which reduces to
[2 + 3 + 4 + ... + (n + 1)] T5 = [n(n + 1)/2 + n] T5 = [(n² + 3n)/2] T5
As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth
and thus defines its run-time order. In this example, n² is the highest-order term, so one can conclude that
f(n) = O(n²), where the total worst-case running time is
f(n) = [(n² + n)/2] T6 + [(n² + 3n)/2] T5 + (n + 1) T4 + T1 + T2 + T3 + T7
Formally, let k be a constant greater than or equal to each of [T1..T7]. Then
f(n) ≤ k(n² + n)/2 + k(n² + 3n)/2 + k(n + 1) + 4k = kn² + 3kn + 5k   (for n ≥ 0)
     ≤ kn² + 3kn² + 5kn² = 9kn²   (for n ≥ 1)
Therefore f(n) ≤ cn² for c = 9k and n0 = 1, that is, f(n) = O(n²).
A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of
time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This
would mean that the algorithm's running time breaks down as follows:[7]
4 + n + 2(1 + 2 + ... + n) = 4 + n + n(n + 1) = n² + 2n + 4 ≤ n² + 2n² + 4n² = 7n²   (for n ≥ 1)
so the running time is again O(n²).
Relevance
Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can
significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can
render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of
computing power or storage in order to run, again rendering it practically useless.
Notes
[1] Section 1.3.
[2] Examples of the price of abstraction? (http://cstheory.stackexchange.com/questions/608/examples-of-the-price-of-abstraction),
cstheory.stackexchange.com
[3] The term lg is often used as shorthand for log2
[4] How To Avoid O-Abuse and Bribes (http://rjlipton.wordpress.com/2009/07/24/how-to-avoid-o-abuse-and-bribes/), at the blog "Gödel's
Lost Letter and P=NP" by R. J. Lipton, professor of Computer Science at Georgia Tech, recounting an idea by Robert Sedgewick
[5] However, this is not the case with a quantum computer
[6] It can be proven by induction that 1 + 2 + 3 + ... + n = n(n + 1)/2
[7] This approach, unlike the above approach, neglects the constant time consumed by the loop tests which terminate their respective loops, but it
is trivial to prove that such omission does not affect the final result
Analysis of algorithms
References
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. & Stein, Clifford (2001). Introduction to
Algorithms. Chapter 1: Foundations (Second ed.). Cambridge, MA: MIT Press and McGraw-Hill. pp. 3–122.
ISBN 0-262-03293-7.
Sedgewick, Robert (1998). Algorithms in C, Parts 1-4: Fundamentals, Data Structures, Sorting, Searching (3rd
ed.). Reading, MA: Addison-Wesley Professional. ISBN 978-0-201-31452-6.
Knuth, Donald. The Art of Computer Programming. Addison-Wesley.
Greene, Daniel A.; Knuth, Donald E. (1982). Mathematics for the Analysis of Algorithms (Second ed.).
Birkhäuser. ISBN 3-7643-3102-X.
Goldreich, Oded (2010). Computational Complexity: A Conceptual Perspective. Cambridge University Press.
ISBN 978-0-521-88473-0.
Amortized analysis
In computer science, amortized analysis is a method of analyzing algorithms that considers the entire sequence of
operations of the program. It allows for the establishment of a worst-case bound for the performance of an algorithm
irrespective of the inputs by looking at all of the operations. This analysis is most commonly discussed using
Big O notation.
At the heart of the method is the idea that while certain operations may be extremely costly in resources, they cannot
occur at a high-enough frequency to weigh down the entire program because the number of less costly operations
will far outnumber the costly ones in the long run, "paying back" the program over a number of iterations. It is
particularly useful because it guarantees worst-case performance rather than making assumptions about the state of
the program.
History
Amortized analysis initially emerged from a method called aggregate analysis, which is now subsumed by amortized
analysis. However, the technique was first formally introduced by Robert Tarjan in his paper Amortized
Computational Complexity, which addressed the need for a more useful form of analysis than the common
probabilistic methods used. Amortization was initially used for very specific types of algorithms, particularly those
involving binary trees and union operations. However, it is now ubiquitous and comes into play when analyzing
many other algorithms as well.
Method
The method requires knowledge of which series of operations are possible. This is most commonly the case with
data structures, which have state that persists between operations. The basic idea is that a worst case operation can
alter the state in such a way that the worst case cannot occur again for a long time, thus "amortizing" its cost.
There are generally three methods for performing amortized analysis: the aggregate method, the accounting method,
and the potential method. All of these give the same answers, and their usage difference is primarily circumstantial
and due to individual preference.
Aggregate analysis determines the upper bound T(n) on the total cost of a sequence of n operations, then
calculates the amortized cost to be T(n) / n (a worked instance follows this list).
The accounting method determines the individual cost of each operation, combining its immediate execution time
and its influence on the running time of future operations. Usually, many short-running operations accumulate a
"debt" of unfavorable state in small increments, while rare long-running operations decrease it drastically.
The potential method is like the accounting method, but overcharges operations early to compensate for
undercharges later.
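As a small worked instance of aggregate analysis (an illustrative calculation, with a capacity-doubling growth policy assumed): consider n append operations on a dynamic array that doubles its capacity whenever it becomes full, starting from capacity 1. Writing the n appended elements costs n elementary writes, and the copies performed by all of the resizes cost 1 + 2 + 4 + ... + 2^k ≤ 2n writes, where 2^k < n is the largest capacity that was copied. Hence T(n) ≤ 3n, and the amortized cost per append is T(n)/n ≤ 3, which is O(1).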
Common use
In common usage, an "amortized algorithm" is one that an amortized analysis has shown to perform well.
Online algorithms commonly use amortized analysis.
References
Allan Borodin and Ran El-Yaniv (1998). Online Computation and Competitive Analysis [1]. Cambridge
University Press. pp. 20, 141.
[1] http://www.cs.technion.ac.il/~rani/book.html
Accounting method
In the field of analysis of algorithms in computer science, the accounting method is a method of amortized analysis
based on accounting. The accounting method often gives a more intuitive account of the amortized cost of an
operation than either aggregate analysis or the potential method. Note, however, that this does not guarantee such
analysis will be immediately obvious; often, choosing the correct parameters for the accounting method requires as
much knowledge of the problem and the complexity bounds one is attempting to prove as the other two methods.
The accounting method is most naturally suited for proving an O(1) bound on time. The method as explained here is
for proving such a bound.
The method
A set of elementary operations which will be used in the algorithm is chosen and their costs are arbitrarily set to 1.
The fact that the costs of these operations may differ in reality presents no difficulty in principle. What is important
is that each elementary operation has a constant cost.
Each aggregate operation is assigned a "payment". The payment is intended to cover the cost of elementary
operations needed to complete this particular operation, with some of the payment left over, placed in a pool to be
used later.
The difficulty with problems that require amortized analysis is that, in general, some of the operations will require
greater than constant cost. This means that no constant payment will be enough to cover the worst case cost of an
operation, in and of itself. With proper selection of payment, however, this is no longer a difficulty; the expensive
operations will only occur when there is sufficient payment in the pool to cover their costs.
Examples
A few examples will help to illustrate the use of the accounting method.
Table expansion
It is often necessary to create a table before it is known how much space is needed. One possible strategy is to double
the size of the table when it is full. Here we will use the accounting method to show that the amortized cost of an
insertion operation in such a table is O(1).
Before looking at the procedure in detail, we need some definitions. Let T be a table, E an element to insert, num(T)
the number of elements in T, and size(T) the allocated size of T. We assume the existence of operations
create_table(n), which creates an empty table of size n, for now assumed to be free, and elementary_insert(T,E),
which inserts element E into a table T that already has space allocated, with a cost of 1.
The following pseudocode illustrates the table insertion procedure:
function table_insert(T, E)
    if num(T) = size(T)
        U := create_table(2 × size(T))
        for each F in T
            elementary_insert(U, F)
        T := U
    elementary_insert(T, E)
Without amortized analysis, the best bound we can show for n insert operations is O(n²); this is due to the loop at
line 4 that performs num(T) elementary insertions.
For analysis using the accounting method, we assign a payment of 3 to each table insertion. Although the reason for
this is not clear now, it will become clear during the course of the analysis.
Assume that initially the table is empty with size(T) = m. The first m insertions therefore do not require reallocation
and only have cost 1 (for the elementary insert). Therefore, when num(T) = m, the pool has (3 - 1)m = 2m.
Inserting element m + 1 requires reallocation of the table. Creating the new table on line 3 is free (for now). The loop
on line 4 requires m elementary insertions, for a cost of m. Including the insertion on the last line, the total cost for
this operation is m + 1. After this operation, the pool therefore has 2m + 3 - (m + 1) = m + 2.
Next, we add another m - 1 elements to the table. At this point the pool has m + 2 + 2(m - 1) = 3m. Inserting an
additional element (that is, element 2m + 1) can be seen to have cost 2m + 1 and a payment of 3. After this operation,
the pool has 3m + 3 - (2m + 1) = m + 2. Note that this is the same amount as after inserting element m + 1. In fact,
we can show that this will be the case for any number of reallocations.
It can now be made clear why the payment for an insertion is 3. 1 goes to inserting the element the first time it is
added to the table, 1 goes to moving it the next time the table is expanded, and 1 goes to moving one of the elements
that was already in the table the next time the table is expanded.
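To make the bookkeeping concrete, take m = 4 (an arbitrary illustrative value). The first 4 insertions each pay 3 and cost 1, leaving 8 in the pool. Insertion 5 triggers a reallocation and costs 4 + 1 = 5, leaving 8 + 3 − 5 = 6 = m + 2. Insertions 6, 7 and 8 each add 3 − 1 = 2, bringing the pool to 12 = 3m, and insertion 9 costs 2m + 1 = 9, leaving 12 + 3 − 9 = 6 = m + 2 again, exactly as described above.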
We initially assumed that creating a table was free. In reality, creating a table of size n may be as expensive as O(n).
Let us say that the cost of creating a table of size n is n. Does this new cost present a difficulty? Not really; it turns
out we use the same method to show the amortized O(1) bounds. All we have to do is change the payment.
When a new table is created, there is an old table with m entries. The new table will be of size 2m. As long as the
entries currently in the table have added enough to the pool to pay for creating the new table, we will be all right.
We cannot expect the first m/2 entries to help pay for the new table: those entries already paid for the current table.
We must then rely on the last m/2 entries to pay the cost 2m, which means each of them must contribute
2m / (m/2) = 4 to the pool. Adding 4 to the payment for each insertion, for a total payment of 3 + 4 = 7, therefore
also covers the cost of creating the tables.
References
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms,
Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 17.2: The accounting method,
pp. 410–412.
Potential method
In computational complexity theory, the potential method is a method used to analyze the amortized time and space
complexity of a data structure, a measure of its performance over sequences of operations that smooths out the cost
of infrequent but expensive operations.
In the potential method, a function Φ is chosen that maps states of the data structure to non-negative numbers, with
Φ = 0 for the initial state. If o is an operation whose actual time is time(o) and which takes the data structure from
state S_before to state S_after, its amortized time is defined as
amortized-time(o) = time(o) + C·(Φ(S_after) − Φ(S_before)),
where C is a non-negative constant of proportionality (in units of time) that must remain fixed throughout the
analysis. That is, the amortized time is defined to be the actual time taken by the operation plus C times the
difference in potential caused by the operation.
Summed over a sequence of operations, the total amortized time is an upper bound on the total actual time. In more
detail,
Σ amortized-time(o) = Σ time(o) + C·(Φ(S_final) − Φ(S_initial)) ≥ Σ time(o),
where the sequence of potential function values forms a telescoping series in which all terms other than the initial
and final potential function values cancel in pairs, and where the final inequality arises from the assumptions that
Φ(S_final) ≥ 0 and Φ(S_initial) = 0. Therefore, amortized time can be used to provide accurate predictions about
the actual time of sequences of operations, even though the amortized time for an individual operation may vary
widely from its actual time.
Example
A dynamic array is a data structure for maintaining an array of items, allowing both random access to positions
within the array and the ability to increase the array size by one. It is available in Java as the "ArrayList" type and in
Python as the "list" type. A dynamic array may be implemented by a data structure consisting of an array A of items,
of some length N, together with a number n ≤ N representing the positions within the array that have been used so
far. With this structure, random accesses to the dynamic array may be implemented by accessing the same cell of the
internal array A, and when n < N an operation that increases the dynamic array size may be implemented simply by
incrementing n. However, when n = N, it is necessary to resize A, and a common strategy for doing so is to double its
size, replacing A by a new array of length 2n.[1]
This structure may be analyzed using the potential function Φ = 2n − N. Since the resizing strategy always causes A to
be at least half-full, this potential function is always non-negative, as desired. When an increase-size operation does
not lead to a resize operation, Φ increases by 2, a constant. Therefore, the constant actual time of the operation and
the constant increase in potential combine to give a constant amortized time for an operation of this type. However,
when an increase-size operation causes a resize, the potential, which equals n prior to the resize, decreases to zero after the
resize. Allocating a new internal array A and copying all of the values from the old internal array to the new one
takes O(n) actual time, but (with an appropriate choice of the constant of proportionality C) this is entirely cancelled
by the decrease of n in the potential function, leaving again a constant total amortized time for the operation. The
other operations of the data structure (reading and writing array cells without changing the array size) do not cause
the potential function to change and have the same constant amortized time as their actual time.
Therefore, with this choice of resizing strategy and potential function, the potential method shows that all dynamic
array operations take constant amortized time. Combining this with the inequality relating amortized time and actual
time over sequences of operations, this shows that any sequence of n dynamic array operations takes O(n) actual
time in the worst case, despite the fact that some of the individual operations may themselves take a linear amount of
time.
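A minimal C sketch of such a dynamic array (the names and details are illustrative assumptions, and error checking is omitted), annotated with the behaviour of the potential Φ = 2n − N:

#include <stdlib.h>
#include <string.h>

typedef struct {
    int   *a;  /* internal array A */
    size_t n;  /* number of positions used so far */
    size_t N;  /* allocated length of A */
} dyn_array;   /* potential: phi = 2n - N, non-negative because A stays at least half-full */

void da_init(dyn_array *d) { d->a = NULL; d->n = 0; d->N = 0; }   /* phi = 0 initially */

/* The increase-size operation: constant amortized time under phi = 2n - N. */
void da_append(dyn_array *d, int value) {
    if (d->n == d->N) {                               /* array full: resize                  */
        size_t newN = d->N ? 2 * d->N : 1;            /* double the size (start at 1)        */
        int *b = malloc(newN * sizeof *b);
        if (d->n) memcpy(b, d->a, d->n * sizeof *b);  /* O(n) copy, paid for by the drop of  */
        free(d->a);                                   /* phi from n to 0                     */
        d->a = b;
        d->N = newN;
    }
    d->a[d->n++] = value;                             /* without a resize, phi increases by 2 */
}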
Applications
The potential function method is commonly used to analyze Fibonacci heaps, a form of priority queue in which
removing an item takes logarithmic amortized time, and all other operations take constant amortized time.[2] It may
also be used to analyze splay trees, a self-adjusting form of binary search tree with logarithmic amortized time per
operation.[3]
References
[1] Goodrich and Tamassia, 1.5.2 Analyzing an Extendable Array Implementation, pp. 139–141; Cormen et al., 17.4 Dynamic tables, pp.
416–424.
[2] Cormen et al., Chapter 20, "Fibonacci Heaps", pp. 476–497.
[3] Goodrich and Tamassia, Section 3.4, "Splay Trees", pp. 185–194.
Sequences
Array data type
In computer science, an array type is a data type that is meant to describe a collection of elements (values or
variables), each selected by one or more indices (identifying keys) that can be computed at run time by the program.
Such a collection is usually called an array variable, array value, or simply array.[1] By analogy with the
mathematical concepts of vector and matrix, array types with one and two indices are often called vector type and
matrix type, respectively.
Language support for array types may include certain built-in array data types, some syntactic constructions (array
type constructors) that the programmer may use to define such types and declare array variables, and special notation
for indexing array elements. For example, in the Pascal programming language, the declaration type MyTable
= array [1..4,1..2] of integer, defines a new array data type called MyTable. The declaration var
A: MyTable then defines a variable A of that type, which is an aggregate of eight elements, each being an integer
variable identified by two indices. In the Pascal program, those elements are denoted A[1,1], A[1,2],
A[2,1], ..., A[4,2].[2] Special array types are often defined by the language's standard libraries.
Arrays are distinguished from lists in that arrays allow random access, while lists only allow sequential access.
Dynamic lists are also more common and easier to implement than dynamic arrays. Array types are distinguished
from record types mainly because they allow the element indices to be computed at run time, as in the Pascal
assignment A[I,J] := A[N-I,2*J]. Among other things, this feature allows a single iterative statement to
process arbitrarily many elements of an array variable.
In more theoretical contexts, especially in type theory and in the description of abstract algorithms, the terms "array"
and "array type" sometimes refer to an abstract data type (ADT) also called abstract array or may refer to an
associative array, a mathematical model with the basic operations and behavior of a typical array type in most
languages; basically, a collection of elements that are selected by indices computed at run-time.
Depending on the language, array types may overlap (or be identified with) other data types that describe aggregates
of values, such as lists and strings. Array types are often implemented by array data structures, but sometimes by
other means, such as hash tables, linked lists, or search trees.
History
Assembly languages and low-level languages like BCPL[3] generally have no syntactic support for arrays.
Because of the importance of array structures for efficient computation, the earliest high-level programming
languages, including FORTRAN (1957), COBOL (1960), and Algol 60 (1960), provided support for
multi-dimensional arrays.
Abstract arrays
An array data structure can be mathematically modeled as an abstract data structure (an abstract array) with two
operations
get(A, I): the data stored in the element of the array A whose indices are the integer tuple I.
set(A,I,V): the array that results by setting the value of that element to V.
These operations are required to satisfy the axioms[4]
get(set(A, I, V), I) = V
get(set(A, I, V), J) = get(A, J)   if I and J are distinct index tuples
for any array state A, any value V, and any index tuples I and J for which the operations are defined.
Implementations
In order to effectively implement variables of such types as array structures (with indexing done by pointer
arithmetic), many languages restrict the indices to integer data types (or other types that can be interpreted as
integers, such as bytes and enumerated types), and require that all elements have the same data type and storage size.
Most of those languages also restrict each index to a finite interval of integers, that remains fixed throughout the
lifetime of the array variable. In some compiled languages, in fact, the index ranges may have to be known at
compile time.
On the other hand, some programming languages provide more liberal array types, that allow indexing by arbitrary
values, such as floating-point numbers, strings, objects, references, etc. Such index values cannot be restricted to an
interval, much less a fixed interval. So, these languages usually allow arbitrary new elements to be created at any
time. This choice precludes the implementation of array types as array data structures. That is, those languages use
array-like syntax to implement a more general associative array semantics, and must therefore be implemented by a
hash table or some other search data structure.
Language support
Multi-dimensional arrays
The number of indices needed to specify an element is called the dimension, dimensionality, or rank of the array
type. (This nomenclature conflicts with the concept of dimension in linear algebra,[5] where it is the number of
elements. Thus, an array of numbers with 5 rows and 4 columns, hence 20 elements, is said to have dimension 2 in
computing contexts, but represents a matrix with dimension 5-by-4 or 20 in mathematics. Also, the computer science
meaning of "rank" is similar to its meaning in tensor algebra but not to the linear algebra concept of rank of a
matrix.)
Many languages support only one-dimensional arrays. In those languages, a
multi-dimensional array is typically represented by an Iliffe vector, a
one-dimensional array of references to arrays of one dimension less. A
two-dimensional array, in particular, would be implemented as a vector of pointers
to its rows. Thus an element in row i and column j of an array A would be accessed
by double indexing (A[i][j] in typical notation). This way of emulating
multi-dimensional arrays allows the creation of ragged or jagged arrays, where each
row may have a different size or, in general, where the valid range of each index depends on the values of all
preceding indices.
This representation for multi-dimensional arrays is quite prevalent in C and C++ software. However, C and C++ will
use a linear indexing formula for multi-dimensional arrays that are declared as such, e.g. by int A[10][20] or
int A[m][n], instead of the traditional int **A.[6]:p.81
Indexing notation
Most programming languages that support arrays support the store and select operations, and have special syntax for
indexing. Early languages used parentheses, e.g. A(i,j), as in FORTRAN; others choose square brackets, e.g.
A[i,j] or A[i][j], as in Algol 60 and Pascal.
Index types
Array data types are most often implemented as array structures: with the indices restricted to integer (or totally
ordered) values, index ranges fixed at array creation time, and multilinear element addressing. This was the case in
most "third generation" languages, and is still the case of most systems programming languages such as Ada, C, and
C++. In some languages, however, array data types have the semantics of associative arrays, with indices of arbitrary
type and dynamic element creation. This is the case in some scripting languages such as Awk and Lua, and of some
array types provided by standard C++ libraries.
Bounds checking
Some languages (like Pascal and Modula) perform bounds checking on every access, raising an exception or
aborting the program when any index is out of its valid range. Compilers may allow these checks to be turned off to
trade safety for speed. Other languages (like FORTRAN and C) trust the programmer and perform no checks. Good
compilers may also analyze the program to determine the range of possible values that the index may have, and this
analysis may lead to bounds-checking elimination.
Index origin
Some languages, such as C, provide only zero-based array types, for which the minimum valid value for any index is
0. This choice is convenient for array implementation and address computations. With a language such as C, a
pointer to the interior of any array can be defined that will symbolically act as a pseudo-array that accommodates
negative indices. This works only because C does not check an index against bounds when used.
Other languages provide only one-based array types, where each index starts at 1; this is the traditional convention in
mathematics for matrices and mathematical sequences. A few languages, such as Pascal, support n-based array
types, whose minimum legal indices are chosen by the programmer. The relative merits of each choice have been the
subject of heated debate. Zero-based indexing has a natural advantage over one-based indexing in avoiding off-by-one
or fencepost errors.[7]
See comparison of programming languages (array) for the base indices used by various languages.
Highest index
The relation between numbers appearing in an array declaration and the index of that array's last element also varies
by language. In many languages (such as C), one should specify the number of elements contained in the array;
whereas in others (such as Pascal and Visual Basic .NET) one should specify the numeric value of the index of the
last element. Needless to say, this distinction is immaterial in languages where the indices start at 1.
Array algebra
Some programming languages (including APL, Matlab, and newer versions of Fortran) directly support array
programming, where operations and functions defined for certain data types are implicitly extended to arrays of
elements of those types. Thus one can write A+B to add corresponding elements of two arrays A and B. The
multiplication operation may be merely distributed over corresponding elements of the operands (APL) or may be
interpreted as the matrix product of linear algebra (Matlab).
Slicing
An array slicing operation takes a subset of the elements of an array-typed entity (value or variable) and then
assembles them as another array-typed entity, possibly with other indices. If array types are implemented as array
structures, many useful slicing operations (such as selecting a sub-array, swapping indices, or reversing the direction
of the indices) can be performed very efficiently by manipulating the dope vector of the structure. The possible
slicings depend on the implementation details: for example, FORTRAN allows slicing off one column of a matrix
variable, but not a row, and treating it as a vector; whereas C allows slicing off a row from a matrix, but not a column.
On the other hand, other slicing operations are possible when array types are implemented in other ways.
Resizing
Some languages allow dynamic arrays (also called resizable, growable, or extensible): array variables whose index
ranges may be expanded at any time after creation, without changing the values of its current elements.
For one-dimensional arrays, this facility may be provided as an operation "append(A,x)" that increases the size of
the array A by one and then sets the value of the last element to x. Other array types (such as Pascal strings) provide a
concatenation operator, which can be used together with slicing to achieve that effect and more. In some languages,
assigning a value to an element of an array automatically extends the array, if necessary, to include that element. In
other array types, a slice can be replaced by an array of a different size, with subsequent elements being renumbered
accordingly, as in Python's list assignment "A[5:5] = [10,20,30]", which inserts three new elements (10, 20, and 30)
before element "A[5]". Resizable arrays are conceptually similar to lists, and the two concepts are synonymous in
some languages.
An extensible array can be implemented as a fixed-size array, with a counter that records how many elements are
actually in use. The append operation merely increments the counter, until the whole array is used, at which point the
append operation may be defined to fail. This is an implementation of a dynamic array with a fixed capacity, as in
the string type of Pascal. Alternatively, the append operation may re-allocate the underlying array with a
larger size, and copy the old elements to the new area.
References
[1] Robert W. Sebesta (2001) Concepts of Programming Languages. Addison-Wesley. 4th edition (1998), 5th edition (2001), ISBN
0-201-38596-1 ISBN13: 9780201385960
[2] K. Jensen and Niklaus Wirth, PASCAL User Manual and Report. Springer. Paperback edition (2007) 184 pages, ISBN 3-540-06950-X ISBN
978-3540069508
[3] John Mitchell, Concepts of Programming Languages. Cambridge University Press.
[4] Luckham, Suzuki (1979), "Verification of array, record, and pointer operations in Pascal". ACM Transactions on Programming Languages and
Systems 1(2), 226–244.
[5] see the definition of a matrix
[6] Brian W. Kernighan and Dennis M. Ritchie (1988), The C programming Language. Prentice-Hall, 205 pages.
[7] Edsger W. Dijkstra, Why numbering should start at zero (http://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html)
External links
NIST's Dictionary of Algorithms and Data Structures: Array (http://www.nist.gov/dads/HTML/array.html)
Array data structure
History
The first digital computers used machine-language programming to set up and access array structures for data tables,
vector and matrix computations, and for many other purposes. Von Neumann wrote the first array-sorting program
(merge sort) in 1945, during the building of the first stored-program computer.[3]p.159 Array indexing was originally
done by self-modifying code, and later using index registers and indirect addressing. Some mainframes designed in
the 1960s, such as the Burroughs B5000 and its successors, had special instructions for array indexing that included
index-bounds checking.
Assembly languages generally have no special support for arrays, other than what the machine itself provides. The
earliest high-level programming languages, including FORTRAN (1957), COBOL (1960), and ALGOL 60 (1960),
had support for multi-dimensional arrays, and so has C (1972). In C++ (1983), class templates exist for
multi-dimensional arrays whose dimension is fixed at runtime as well as for runtime-flexible arrays.
Applications
Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables. Many
databases, small and large, consist of (or include) one-dimensional arrays whose elements are records.
Arrays are used to implement other data structures, such as heaps, hash tables, deques, queues, stacks, strings, and
VLists.
One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly
memory pool allocation. Historically, this has sometimes been the only way to allocate "dynamic memory" portably.
Arrays can be used to determine partial or complete control flow in programs, as a compact alternative to (otherwise
repetitive) multiple IF statements. They are known in this context as control tables and are used in conjunction with
a purpose built interpreter whose control flow is altered according to values contained in the array. The array may
contain subroutine pointers (or relative subroutine numbers that can be acted upon by SWITCH statements) that
direct the path of the execution.
One-dimensional arrays
A one-dimensional array (or single dimension array) is a type of linear array. Accessing its elements involves a
single subscript which can either represent a row or column index.
As an example, consider the C declaration int anArrayName[10]; its general form is
datatype arrayName[sizeOfArray];
In the given example the array can contain 10 elements of any value available to the int type. In C, the array
element indices are 0-9 inclusive in this case. For example, the expressions anArrayName[0] and
anArrayName[9] are the first and last elements respectively.
For a vector with linear addressing, the element with index i is located at the address B + c × i, where B is a fixed base address and c a fixed constant, sometimes called the address increment or stride.
If the valid element indices begin at 0, the constant B is simply the address of the first element of the array. For this
reason, the C programming language specifies that array indices always begin at 0; and many programmers will call
that element "zeroth" rather than "first".
However, one can choose the index of the first element by an appropriate choice of the base address B. For example,
if the array has five elements, indexed 1 through 5, and the base address B is replaced by B + 30c, then the indices of
those same elements will be 31 to 35. If the numbering does not start at 0, the constant B may not be the address of
any element.
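As a minimal illustration (a sketch, not from the original text), the following C fragment checks that each element of an int array sits at the address B + c × i, where B is the array's base address and c is sizeof(int):
#include <assert.h>
#include <stdio.h>
int main(void)
{
    int a[5] = {10, 20, 30, 40, 50};
    char *base = (char *)a;                    /* B: address of the first element */
    for (int i = 0; i < 5; i++) {
        /* element i lives at B + c * i with c = sizeof(int) */
        assert((char *)&a[i] == base + i * sizeof(int));
    }
    printf("every &a[i] equals base + i * sizeof(int)\n");
    return 0;
}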
Multidimensional arrays
For a two-dimensional array, the element with indices i, j would have address B + c × i + d × j, where the coefficients c and d are the row and column address increments, respectively.
More generally, in a k-dimensional array, the address of an element with indices i1, i2, …, ik is
B + c1 × i1 + c2 × i2 + … + ck × ik.
For example, consider the declaration int a[3][2];. This declares an array a with 3 rows and 2 columns, holding elements of integer type, so it can store 6 elements. The elements are stored linearly, row by row: the first row is laid out first, then the second row, and so on. The above array would be stored as a11, a12, a21, a22, a31, a32.
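A small C sketch of this row-major layout (illustrative only): it views int a[3][2] as six consecutive ints and checks that a[i][j] occupies flat position i × 2 + j.
#include <assert.h>
int main(void)
{
    int a[3][2] = { {11, 12}, {21, 22}, {31, 32} };
    int *flat = &a[0][0];                     /* the rows are stored one after another */
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 2; j++)
            assert(flat[i * 2 + j] == a[i][j]);   /* row-major flat index */
    return 0;
}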
This formula requires only k multiplications and k additions, for any array that can fit in memory. Moreover, if any
coefficient is a fixed power of 2, the multiplication can be replaced by bit shifting.
The coefficients ck must be chosen so that every valid index tuple maps to the address of a distinct element.
If the minimum legal value for every index is 0, then B is the address of the element whose indices are all zero. As in
the one-dimensional case, the element indices may be changed by changing the base address B. Thus, if a
two-dimensional array has rows and columns indexed from 1 to 10 and 1 to 20, respectively, then replacing B by B + c1 - 3 c2 will cause them to be renumbered from 0 through 9 and 4 through 23, respectively. Taking advantage of
this feature, some languages (like FORTRAN 77) specify that array indices begin at 1, as in mathematical tradition;
while other languages (like Fortran 90, Pascal and Algol) let the user choose the minimum value for each index.
Dope vectors
The addressing formula is completely defined by the dimension d, the base address B, and the increments c1, c2, …, ck. It is often useful to pack these parameters into a record called the array's descriptor or stride vector or dope
vector. The size of each element, and the minimum and maximum values allowed for each index may also be
included in the dope vector. The dope vector is a complete handle for the array, and is a convenient way to pass
arrays as arguments to procedures. Many useful array slicing operations (such as selecting a sub-array, swapping
indices, or reversing the direction of the indices) can be performed very efficiently by manipulating the dope vector.
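A dope vector could be sketched in C roughly as follows; the field names and the fixed maximum rank are illustrative assumptions, not taken from any particular language runtime:
#include <stddef.h>
#define MAX_RANK 8                      /* assumed limit on the number of indices */
struct dope_vector {
    void   *base;                       /* B: base address of the data */
    size_t  elem_size;                  /* size of one element, in bytes */
    int     rank;                       /* k: number of dimensions */
    long    lower[MAX_RANK];            /* minimum legal value of each index */
    long    upper[MAX_RANK];            /* maximum legal value of each index */
    long    stride[MAX_RANK];           /* c1..ck: address increment per index, in bytes */
};
/* Address of the element with the given indices; base is assumed to already
   account for the lower bounds of the indices. */
void *element_addr(const struct dope_vector *dv, const long idx[])
{
    char *p = (char *)dv->base;
    for (int d = 0; d < dv->rank; d++)
        p += dv->stride[d] * idx[d];
    return p;
}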
Compact layouts
Often the coefficients are chosen so that the elements occupy a contiguous area of memory. However, that is not
necessary. Even if arrays are always created with contiguous elements, some array slicing operations may create
non-contiguous sub-arrays from them.
There are two systematic compact layouts for a two-dimensional array. For example, consider the 3×3 matrix with rows (1 2 3), (4 5 6), and (7 8 9).
In the row-major order layout (adopted by C for statically declared arrays), the elements in each row are stored in
consecutive positions and all of the elements of a row have a lower address than any of the elements of a consecutive
row:
1 2 3 4 5 6 7 8 9
In column-major order (traditionally used by Fortran), the elements in each column are consecutive in memory and
all of the elements of a column have a lower address than any of the elements of a consecutive column:
1 4 7 2 5 8 3 6 9
For arrays with three or more indices, "row major order" puts in consecutive positions any two elements whose index
tuples differ only by one in the last index. "Column major order" is analogous with respect to the first index.
In systems which use processor cache or virtual memory, scanning an array is much faster if successive elements are
stored in consecutive positions in memory, rather than sparsely scattered. Many algorithms that use
multidimensional arrays will scan them in a predictable order. A programmer (or a sophisticated compiler) may use
this information to choose between row- or column-major layout for each array. For example, when computing the
product AB of two matrices, it would be best to have A stored in row-major order, and B in column-major order.
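For example, a C summation over a matrix is usually written with the row index in the outer loop, so that successive iterations touch consecutive addresses in the row-major layout (a sketch; the 512×512 size is arbitrary):
#define N 512
double sum_row_major(double m[N][N])
{
    double sum = 0.0;
    /* j varies fastest: consecutive accesses are adjacent in memory, which is
       cache-friendly for C's row-major layout. Swapping the two loops would
       stride by N * sizeof(double) on every access. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += m[i][j];
    return sum;
}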
Array resizing
Static arrays have a size that is fixed when they are created and consequently do not allow elements to be inserted or
removed. However, by allocating a new array and copying the contents of the old array to it, it is possible to
effectively implement a dynamic version of an array; see dynamic array. If this operation is done infrequently,
insertions at the end of the array require only amortized constant time.
Some array data structures do not reallocate storage, but do store a count of the number of elements of the array in
use, called the count or size. This effectively makes the array a dynamic array with a fixed maximum size or
capacity; Pascal strings are examples of this.
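A sketch of such a structure in C, with a fixed capacity and an explicit element count (the names and the capacity of 16 are illustrative):
#include <stddef.h>
#define CAPACITY 16                      /* fixed maximum size */
struct bounded_array {
    int    items[CAPACITY];
    size_t count;                        /* number of slots currently in use */
};
/* Append one element; returns 0 on success, -1 if the array is already full. */
int bounded_append(struct bounded_array *a, int value)
{
    if (a->count == CAPACITY)
        return -1;                       /* capacity is fixed: no reallocation */
    a->items[a->count++] = value;
    return 0;
}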
Non-linear formulas
More complicated (non-linear) formulas are occasionally used. For a compact two-dimensional triangular array, for
instance, the addressing formula is a polynomial of degree 2.
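For instance, a lower-triangular array that keeps only the elements with j ≤ i can be packed row by row; row i is then preceded by 1 + 2 + … + i = i(i + 1)/2 elements, giving a quadratic addressing formula (a sketch):
#include <stddef.h>
/* Flat position of element (i, j) of a packed lower-triangular array,
   using 0-based indices and assuming j <= i. */
size_t tri_index(size_t i, size_t j)
{
    return i * (i + 1) / 2 + j;
}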
Efficiency
Both store and select take (deterministic worst case) constant time. Arrays take linear (O(n)) space in the number of
elements n that they hold.
In an array with element size k and on a machine with a cache line size of B bytes, iterating through an array of n
elements requires the minimum of ceiling(nk/B) cache misses, because its elements occupy contiguous memory
locations. This is roughly a factor of B/k better than the number of cache misses needed to access n elements at
random memory locations. As a consequence, sequential iteration over an array is noticeably faster in practice than
iteration over many other data structures, a property called locality of reference (this does not mean however, that
using a perfect hash or trivial hash within the same (local) array, will not be even faster - and achievable in constant
time). Libraries provide low-level optimized facilities for copying ranges of memory (such as memcpy) which can be
used to move contiguous blocks of array elements significantly faster than can be achieved through individual
element access. The speedup of such optimized routines varies by array element size, architecture, and
implementation.
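For example, inserting into the middle of a C array is typically done by shifting the tail with a single bulk memmove (the overlap-safe relative of memcpy) rather than element by element; a sketch, assuming the caller guarantees room for one more element:
#include <string.h>
#include <stddef.h>
/* Insert value at position pos of a[0..n-1]; a must have space for n + 1 ints. */
void array_insert(int *a, size_t n, size_t pos, int value)
{
    memmove(&a[pos + 1], &a[pos], (n - pos) * sizeof a[0]);  /* shift the tail right */
    a[pos] = value;
}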
Memory-wise, arrays are compact data structures with no per-element overhead. There may be a per-array overhead,
e.g. to store index bounds, but this is language-dependent. It can also happen that elements stored in an array require
less memory than the same elements stored in individual variables, because several array elements can be stored in a
single word; such arrays are often called packed arrays. An extreme (but commonly used) case is the bit array, where
every bit represents a single element. A single octet can thus hold up to 256 different combinations of up to 8
different conditions, in the most compact form.
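A minimal sketch of such a packed bit array in C, storing one boolean element per bit of an unsigned char buffer:
#include <limits.h>   /* CHAR_BIT */
#include <stddef.h>
/* Set and test bit i of a bit array backed by unsigned chars. */
void bit_set(unsigned char *bits, size_t i)
{
    bits[i / CHAR_BIT] |= (unsigned char)(1u << (i % CHAR_BIT));
}
int bit_test(const unsigned char *bits, size_t i)
{
    return (bits[i / CHAR_BIT] >> (i % CHAR_BIT)) & 1u;
}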
Array accesses with statically predictable access patterns are a major source of data parallelism.
                            Linked list                       Array   Dynamic array    Balanced tree   Random access list
Indexing                    Θ(n)                              Θ(1)    Θ(1)             Θ(log n)        Θ(log n)
Insert/delete at beginning  Θ(1)                              N/A     Θ(n)             Θ(log n)        Θ(1)
Insert/delete at end        Θ(1) if last element is known,    N/A     Θ(1) amortized   Θ(log n)        Θ(log n) updating
                            Θ(n) if last element is unknown
Insert/delete in middle     search time + Θ(1) [4][5][6]      N/A     Θ(n)             Θ(log n)        Θ(log n) updating
Wasted space (average)      Θ(n)                              0       Θ(n)             Θ(n)            Θ(n)
Growable arrays are similar to arrays but add the ability to insert and delete elements; adding and deleting at the end
is particularly efficient. However, they reserve linear ((n)) additional storage, whereas arrays do not reserve
additional storage.
Associative arrays provide a mechanism for array-like functionality without huge storage overheads when the index
values are sparse. For example, an array that contains values only at indexes 1 and 2 billion may benefit from using
such a structure. Specialized associative arrays with integer keys include Patricia tries, Judy arrays, and van Emde Boas trees.
Meaning of dimension
The dimension of an array is the number of indices needed to select an element. Thus, if the array is seen as a
function on a set of possible index combinations, it is the dimension of the space of which its domain is a discrete
subset. Thus a one-dimensional array is a list of data, a two-dimensional array a rectangle of data, a
three-dimensional array a block of data, etc.
This should not be confused with the dimension of the set of all matrices with a given domain, that is, the number of
elements in the array. For example, an array with 5 rows and 4 columns is two-dimensional, but such matrices form a
20-dimensional space. Similarly, a three-dimensional vector can be represented by a one-dimensional array of size
three.
References
[1] David R. Richardson (2002), The Book on Data Structures. iUniverse, 112 pages. ISBN 0-595-24039-9, ISBN 978-0-595-24039-5.
[2] T. Veldhuizen. Arrays in Blitz++. In Proc. of the 2nd Int. Conf. on Scientific Computing in Object-Oriented Parallel Environments
(ISCOPE), LNCS 1505, pages 223-220. Springer, 1998.
[3] Donald Knuth, The Art of Computer Programming, vol. 3. Addison-Wesley
[4] Gerald Kruse. CS 240 Lecture Notes (http://www.juniata.edu/faculty/kruse/cs240/syllabus.htm): Linked Lists Plus: Complexity Trade-offs (http://www.juniata.edu/faculty/kruse/cs240/linkedlist2.htm). Juniata College. Spring 2008.
[5] Day 1 Keynote - Bjarne Stroustrup: C++11 Style (http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Keynote-Bjarne-Stroustrup-Cpp11-Style) at GoingNative 2012 on channel9.msdn.com from minute 45 or foil 44
[6] Number crunching: Why you should never, ever, EVER use linked-list in your code again (http://kjellkod.wordpress.com/2012/02/25/why-you-should-never-ever-ever-use-linked-list-in-your-code-again/) at kjellkod.wordpress.com
[7] Counted B-Tree (http://www.chiark.greenend.org.uk/~sgtatham/algorithms/cbtree.html)
Dynamic array
In computer science, a dynamic array, growable array, resizable
array, dynamic table, mutable array, or array list is a random
access, variable-size list data structure that allows elements to be added
or removed. It is supplied with standard libraries in many modern
mainstream programming languages.
A dynamic array is not the same thing as a dynamically allocated array, which is an array whose size is fixed when the array is allocated, although a dynamic array may use such a fixed-size array as a back end.[1]
In applications where the logical size is bounded, the fixed-size data structure suffices. This may be short-sighted, as
more space may be needed later. A philosophical programmer may prefer to write the code to make every array
capable of resizing from the outset, then return to using fixed-size arrays during program optimization. Resizing the
underlying array is an expensive task, typically involving copying the entire contents of the array.
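The usual remedy is to reserve spare capacity and grow it geometrically, so that the copying cost is spread over many appends; a C sketch of this idea (illustrative only, not the implementation of any particular library):
#include <stdlib.h>
#include <stddef.h>
struct dyn_array {
    int    *data;
    size_t  size;        /* logical size: elements currently in use */
    size_t  capacity;    /* physical size of the underlying fixed-size array */
};
/* Append one element, doubling the backing storage when it is full.
   Returns 0 on success, -1 if allocation fails. */
int dyn_append(struct dyn_array *a, int value)
{
    if (a->size == a->capacity) {
        size_t new_cap = a->capacity ? a->capacity * 2 : 4;
        int *p = realloc(a->data, new_cap * sizeof *p);
        if (p == NULL)
            return -1;
        a->data = p;
        a->capacity = new_cap;
    }
    a->data[a->size++] = value;
    return 0;
}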
Performance
                            Linked list                       Array   Dynamic array    Balanced tree   Random access list
Indexing                    Θ(n)                              Θ(1)    Θ(1)             Θ(log n)        Θ(log n)
Insert/delete at beginning  Θ(1)                              N/A     Θ(n)             Θ(log n)        Θ(1)
Insert/delete at end        Θ(1) if last element is known,    N/A     Θ(1) amortized   Θ(log n)        Θ(log n) updating
                            Θ(n) if last element is unknown
Insert/delete in middle     search time + Θ(1) [3][4][5]      N/A     Θ(n)             Θ(log n)        Θ(log n) updating
Wasted space (average)      Θ(n)                              0       Θ(n)             Θ(n)            Θ(n)
The dynamic array has performance similar to an array, with the addition of new operations to add and remove elements from the end:
Getting or setting the value at a particular index (constant time)
Iterating over the elements in order (linear time, good cache performance)
Inserting or deleting an element in the middle of the array (linear time)
Inserting or deleting an element at the end of the array (constant amortized time)
Dynamic arrays benefit from many of the advantages of arrays, including good locality of reference and data cache
utilization, compactness (low memory use), and random access. They usually have only a small fixed additional
overhead for storing information about the size and capacity. This makes dynamic arrays an attractive tool for
building cache-friendly data structures. However, in languages like Python or Java that enforce reference semantics, the dynamic array generally will not store the actual data, but rather it will store references to the data that resides in other areas of memory. In this case, accessing items in the array sequentially will actually involve accessing multiple non-contiguous areas of memory, so many of the advantages of the cache-friendliness of this data structure are lost.
Compared to linked lists, dynamic arrays have faster indexing (constant time versus linear time) and typically faster
iteration due to improved locality of reference; however, dynamic arrays require linear time to insert or delete at an
arbitrary location, since all following elements must be moved, while linked lists can do this in constant time. This
disadvantage is mitigated by the gap buffer and tiered vector variants discussed under Variants below. Also, in a
highly fragmented memory region, it may be expensive or impossible to find contiguous space for a large dynamic
array, whereas linked lists do not require the whole data structure to be stored contiguously.
A balanced tree can store a list while providing all operations of both dynamic arrays and linked lists reasonably
efficiently, but both insertion at the end and iteration over the list are slower than for a dynamic array, in theory and
in practice, due to non-contiguous storage and tree traversal/manipulation overhead.
Dynamic array
Variants
Gap buffers are similar to dynamic arrays but allow efficient insertion and deletion operations clustered near the
same arbitrary location. Some deque implementations use array deques, which allow amortized constant time
insertion/removal at both ends, instead of just one end.
Goodrich presented a dynamic array algorithm called Tiered Vectors that provided O(√n) performance for order preserving insertions or deletions from the middle of the array.
Hashed Array Tree (HAT) is a dynamic array algorithm published by Sitarski in 1996. Hashed Array Tree wastes order √n amount of storage space, where n is the number of elements in the array. The algorithm has O(1) amortized performance when appending a series of objects to the end of a Hashed Array Tree.
In a 1999 paper, Brodnik et al. describe a tiered dynamic array data structure, which wastes only √n space for n
elements at any point in time, and they prove a lower bound showing that any dynamic array must waste this much
space if the operations are to remain amortized constant time. Additionally, they present a variant where growing and
shrinking the buffer has not only amortized but worst-case constant time.
Bagwell (2002) presented the VList algorithm, which can be adapted to implement a dynamic array.
Language support
C++'s std::vector is an implementation of dynamic arrays, as are the ArrayList[6] classes supplied with the
Java API and the .NET Framework. The generic List<> class supplied with version 2.0 of the .NET Framework is
also implemented with dynamic arrays. Smalltalk's OrderedCollection is a dynamic array with dynamic start
and end-index, making the removal of the first element also O(1). Python's list datatype implementation is a
dynamic array. Delphi and D implement dynamic arrays at the language's core. Ada's
Ada.Containers.Vectors generic package provides dynamic array implementation for a given subtype. Many
scripting languages such as Perl and Ruby offer dynamic arrays as a built-in primitive data type. Several
cross-platform frameworks provide dynamic array implementations for C: CFArray and CFMutableArray in
Core Foundation; GArray and GPtrArray in GLib.
References
[1] See, for example, the source code of java.util.ArrayList class from OpenJDK 6 (http://hg.openjdk.java.net/jdk6/jdk6/jdk/file/e0e25ac28560/src/share/classes/java/util/ArrayList.java).
[2] List object implementation (http://svn.python.org/projects/python/trunk/Objects/listobject.c) from python.org, retrieved 2011-09-27.
[3] Gerald Kruse. CS 240 Lecture Notes (http://www.juniata.edu/faculty/kruse/cs240/syllabus.htm): Linked Lists Plus: Complexity Trade-offs (http://www.juniata.edu/faculty/kruse/cs240/linkedlist2.htm). Juniata College. Spring 2008.
[4] Day 1 Keynote - Bjarne Stroustrup: C++11 Style (http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Keynote-Bjarne-Stroustrup-Cpp11-Style) at GoingNative 2012 on channel9.msdn.com from minute 45 or foil 44
[5] Number crunching: Why you should never, ever, EVER use linked-list in your code again (http://kjellkod.wordpress.com/2012/02/25/why-you-should-never-ever-ever-use-linked-list-in-your-code-again/) at kjellkod.wordpress.com
[6] Javadoc on
External links
NIST Dictionary of Algorithms and Data Structures: Dynamic array (http://www.nist.gov/dads/HTML/dynamicarray.html)
VPOOL (http://www.bsdua.org/libbsdua.html#vpool) - C language implementation of dynamic array.
CollectionSpy (http://www.collectionspy.com) A Java profiler with explicit support for debugging ArrayList- and Vector-related issues.
Open Data Structures - Chapter 2 - Array-Based Lists (http://opendatastructures.org/versions/edition-0.1e/ods-java/2_Array_Based_Lists.html)
Linked list
In computer science, a linked list is a data structure consisting of a group of nodes which together represent a
sequence. In its simplest form, each node is composed of data and a reference (in other words, a link) to the next node in the sequence; more complex variants add additional links. This structure allows for efficient insertion or
removal of elements from any position in the sequence.
A linked list whose nodes contain two fields: an integer value and a link to the next node. The last node is linked to a terminator used to signify the
end of the list.
Linked lists are among the simplest and most common data structures. They can be used to implement several other
common abstract data types, including lists (the abstract data type), stacks, queues, associative arrays, and
S-expressions, though it is not uncommon to implement the other data structures directly without using a list as the
basis of implementation.
The principal benefit of a linked list over a conventional array is that the list elements can easily be inserted or
removed without reallocation or reorganization of the entire structure because the data items need not be stored
contiguously in memory or on disk. Linked lists allow insertion and removal of nodes at any point in the list, and can
do so with a constant number of operations if the link previous to the link being added or removed is maintained
during list traversal.
On the other hand, simple linked lists by themselves do not allow random access to the data, or any form of efficient
indexing. Thus, many basic operations such as obtaining the last node of the list (assuming that the last node is
not maintained as separate node reference in the list structure), or finding a node that contains a given datum, or
locating the place where a new node should be inserted may require scanning most or all of the list elements.
History
Linked lists were developed in 1955-56 by Allen Newell, Cliff Shaw and Herbert A. Simon at RAND Corporation as
the primary data structure for their Information Processing Language. IPL was used by the authors to develop several
early artificial intelligence programs, including the Logic Theory Machine, the General Problem Solver, and a
computer chess program. Reports on their work appeared in IRE Transactions on Information Theory in 1956, and
several conference proceedings from 1957 to 1959, including Proceedings of the Western Joint Computer
Conference in 1957 and 1958, and Information Processing (Proceedings of the first UNESCO International
Conference on Information Processing) in 1959. The now-classic diagram consisting of blocks representing list
nodes with arrows pointing to successive list nodes appears in "Programming the Logic Theory Machine" by Newell
and Shaw in Proc. WJCC, February 1957. Newell and Simon were recognized with the ACM Turing Award in 1975
for having "made basic contributions to artificial intelligence, the psychology of human cognition, and list
processing". The problem of machine translation for natural language processing led Victor Yngve at Massachusetts
Institute of Technology (MIT) to use linked lists as data structures in his COMIT programming language for
computer research in the field of linguistics. A report on this language entitled "A programming language for
mechanical translation" appeared in Mechanical Translation in 1958.
LISP, standing for list processor, was created by John McCarthy in 1958 while he was at MIT and in 1960 he
published its design in a paper in the Communications of the ACM, entitled "Recursive Functions of Symbolic
Expressions and Their Computation by Machine, Part I". One of LISP's major data structures is the linked list. By
the early 1960s, the utility of both linked lists and languages which use these structures as their primary data
representation was well established. Bert Green of the MIT Lincoln Laboratory published a review article entitled
"Computer languages for symbol manipulation" in IRE Transactions on Human Factors in Electronics in March
1961 which summarized the advantages of the linked list approach. A later review article, "A Comparison of
list-processing computer languages" by Bobrow and Raphael, appeared in Communications of the ACM in April
1964.
Several operating systems developed by Technical Systems Consultants (originally of West Lafayette Indiana, and
later of Chapel Hill, North Carolina) used singly linked lists as file structures. A directory entry pointed to the first
sector of a file, and succeeding portions of the file were located by traversing pointers. Systems using this technique
included Flex (for the Motorola 6800 CPU), mini-Flex (same CPU), and Flex9 (for the Motorola 6809 CPU). A
variant developed by TSC for and marketed by Smoke Signal Broadcasting in California, used doubly linked lists in
the same manner.
The TSS/360 operating system, developed by IBM for the System 360/370 machines, used a double linked list for
their file system catalog. The directory structure was similar to Unix, where a directory could contain files and/or
other directories and extend to any depth. A utility flea was created to fix file system problems after a crash, since
modified portions of the file catalog were sometimes in memory when a crash occurred. Problems were detected by
comparing the forward and backward links for consistency. If a forward link was corrupt, then if a backward link to
the infected node was found, the forward link was set to the node with the backward link. A humorous comment in
the source code where this utility was invoked stated "Everyone knows a flea collar gets rid of bugs in cats".
A singly linked list whose nodes contain two fields: an integer value and a link to the next node
A doubly linked list whose nodes contain three fields: an integer value, the link forward to the next node, and the link backward to the previous
node
A technique known as XOR-linking allows a doubly linked list to be implemented using a single link field in each
node. However, this technique requires the ability to do bit operations on addresses, and therefore may not be
available in some high-level languages.
Circular list
In the last node of a list, the link field often contains a null reference, a special value used to indicate the lack of
further nodes. A less common convention is to make it point to the first node of the list; in that case the list is said to
be circular or circularly linked; otherwise it is said to be open or linear.
In the case of a circular doubly linked list, the only change that occurs is that the end, or "tail", of the list is linked back to the front, or "head", of the list and vice versa.
Sentinel nodes
In some implementations, an extra sentinel or dummy node may be added before the first data record and/or after
the last one. This convention simplifies and accelerates some list-handling algorithms, by ensuring that all links can
be safely dereferenced and that every list (even one that contains no data elements) always has a "first" and "last"
node.
Empty lists
An empty list is a list that contains no data records. This is usually the same as saying that it has zero nodes. If
sentinel nodes are being used, the list is usually said to be empty when it has only sentinel nodes.
Hash linking
The link fields need not be physically part of the nodes. If the data records are stored in an array and referenced by
their indices, the link field may be stored in a separate array with the same indices as the data records.
List handles
Since a reference to the first node gives access to the whole list, that reference is often called the address, pointer,
or handle of the list. Algorithms that manipulate linked lists usually get such handles to the input lists and return the
handles to the resulting lists. In fact, in the context of such algorithms, the word "list" often means "list handle". In
some situations, however, it may be convenient to refer to a list by a handle that consists of two links, pointing to its
first and last nodes.
Combining alternatives
The alternatives listed above may be arbitrarily combined in almost every way, so one may have circular doubly
linked lists without sentinels, circular singly linked lists with sentinels, etc.
Tradeoffs
As with most choices in computer programming and design, no method is well suited to all circumstances. A linked
list data structure might work well in one case, but cause problems in another. This is a list of some of the common
tradeoffs involving linked list structures.
                            Linked list                       Array   Dynamic array    Balanced tree   Random access list
Indexing                    Θ(n)                              Θ(1)    Θ(1)             Θ(log n)        Θ(log n)
Insert/delete at beginning  Θ(1)                              N/A     Θ(n)             Θ(log n)        Θ(1)
Insert/delete at end        Θ(1) if last element is known,    N/A     Θ(1) amortized   Θ(log n)        Θ(log n) updating
                            Θ(n) if last element is unknown
Insert/delete in middle     search time + Θ(1) [1][2][3]      N/A     Θ(n)             Θ(log n)        Θ(log n) updating
Wasted space (average)      Θ(n)                              0       Θ(n)             Θ(n)            Θ(n)
A dynamic array is a data structure that allocates all elements contiguously in memory, and keeps a count of the
current number of elements. If the space reserved for the dynamic array is exceeded, it is reallocated and (possibly)
copied, an expensive operation.
Linked lists have several advantages over dynamic arrays. Insertion or deletion of an element at a specific point of a
list, assuming that we have a pointer to the node (before the one to be removed, or before the insertion point)
already, is a constant-time operation (otherwise without this reference it is O(n)), whereas insertion in a dynamic
array at random locations will require moving half of the elements on average, and all the elements in the worst case.
While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this
causes fragmentation that impedes the performance of iteration.
Moreover, arbitrarily many elements may be inserted into a linked list, limited only by the total memory available; while a dynamic array will eventually fill up its underlying array data structure and will have to reallocate, an expensive operation, one that may not even be possible if memory is fragmented, although the cost of reallocation can be averaged over insertions, and the cost of an insertion due to reallocation would still be amortized O(1). This
helps with appending elements at the array's end, but inserting into (or removing from) middle positions still carries
prohibitive costs due to data moving to maintain contiguity. An array from which many elements are removed may
also have to be resized in order to avoid wasting too much space.
On the other hand, dynamic arrays (as well as fixed-size array data structures) allow constant-time random access,
while linked lists allow only sequential access to elements. Singly linked lists, in fact, can only be traversed in one
direction. This makes linked lists unsuitable for applications where it's useful to look up an element by its index
quickly, such as heapsort. Sequential access on arrays and dynamic arrays is also faster than on linked lists on many
machines, because they have optimal locality of reference and thus make good use of data caching.
Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical
for lists of small data items such as characters or boolean values, because the storage overhead for the links may
exceed by a factor of two or more the size of the data. In contrast, a dynamic array requires only the space for the
data itself (and a very small amount of control data).[4] It can also be slow, and with a naïve allocator, wasteful, to
allocate memory separately for each new element, a problem generally solved using memory pools.
Some hybrid solutions try to combine the advantages of the two representations. Unrolled linked lists store several
elements in each list node, increasing cache performance while decreasing memory overhead for references. CDR
coding does both these as well, by replacing references with the actual data referenced, which extends off the end of
the referencing record.
A good example that highlights the pros and cons of using dynamic arrays vs. linked lists is to implement a program that solves the Josephus problem. The Josephus problem is an election method that works by having a
group of people stand in a circle. Starting at a predetermined person, you count around the circle n times. Once you
reach the nth person, take them out of the circle and have the members close the circle. Then count around the circle
the same n times and repeat the process, until only one person is left. That person wins the election. This shows the
strengths and weaknesses of a linked list vs. a dynamic array, because if you view the people as connected nodes in a
circular linked list then it shows how easily the linked list is able to delete nodes (as it only has to rearrange the links
to the different nodes). However, the linked list will be poor at finding the next person to remove and will need to
search through the list until it finds that person. A dynamic array, on the other hand, will be poor at deleting nodes
(or elements) as it cannot remove one node without individually shifting all the elements up the list by one.
However, it is exceptionally easy to find the nth person in the circle by directly referencing them by their position in
the array.
The list ranking problem concerns the efficient conversion of a linked list representation into an array. Although
trivial for a conventional computer, solving this problem by a parallel algorithm is complicated and has been the
subject of much research.
A balanced tree has similar memory access patterns and space overhead to a linked list while permitting much more
efficient indexing, taking O(log n) time instead of O(n) for a random access. However, insertion and deletion
operations are more expensive due to the overhead of tree manipulations to maintain balance. Schemes exist for trees
to automatically maintain themselves in a balanced state: AVL trees or red-black trees.
The following code inserts a node after an existing node in a singly linked list. The diagram shows how it works.
Inserting a node before an existing one cannot be done directly; instead, one must keep track of the previous node
and insert a node after it.
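A minimal sketch of that insertAfter operation, in the pseudocode style used elsewhere in this article (assuming each node has a next field):
function insertAfter(Node node, Node newNode) // insert newNode after node
    newNode.next := node.next
    node.next := newNode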
Inserting a node at the beginning of the list requires a separate function, because it must update list.firstNode:
function insertBeginning(List list, Node newNode) // insert node before current first node
    newNode.next := list.firstNode
    list.firstNode := newNode
Similarly, we have functions for removing the node after a given node, and for removing a node from the beginning
of the list. The diagram demonstrates the former. To find and remove a particular node, one must again keep track of
the previous element.
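A sketch of the former operation, removeAfter, under the same assumptions:
function removeAfter(Node node) // remove the node past this one
    obsoleteNode := node.next
    node.next := node.next.next
    destroy obsoleteNode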
function removeBeginning(List list) // remove first node
    obsoleteNode := list.firstNode
    list.firstNode := list.firstNode.next // point past deleted node
    destroy obsoleteNode
Notice that removeBeginning() sets list.firstNode to null when removing the last node in the list.
Since we can't iterate backwards, efficient insertBefore or removeBefore operations are not possible.
Appending one linked list to another can be inefficient unless a reference to the tail is kept as part of the List
structure, because we must traverse the entire first list in order to find the tail, and then append the second list to this.
Thus, if two linearly linked lists are each of length n, list appending has asymptotic time complexity of O(n). In the Lisp family of languages, list appending is provided by the append procedure.
Many of the special cases of linked list operations can be eliminated by including a dummy element at the front of
the list. This ensures that there are no special cases for the beginning of the list and renders both
insertBeginning() and removeBeginning() unnecessary. In this case, the first useful data in the list will
be found at list.firstNode.next.
In a circularly linked list, a suitably modified insertAfter() inserts "newNode" after a given node "node"; if "node" is null, the list is assumed to be empty:
function insertAfter(Node node, Node newNode)
    if node = null // the list is empty
        newNode.next := newNode
    else
        newNode.next := node.next
        node.next := newNode
Suppose that "L" is a variable pointing to the last node of a circular linked list (or null if the list is empty). To append
"newNode" to the end of the list, one may do
insertAfter(L, newNode)
L := newNode
To insert "newNode" at the beginning of the list, one may do
insertAfter(L, newNode)
if L = null
L := newNode
A linked list can also be built inside an ordinary array of records, with integer next (and prev) fields that hold array indices instead of pointers and with -1 marking the end of the list. For example:
Index          Next   Prev   Name             Balance
0                 1      4   Jones, John       123.45
1                -1      0   Smith, Joseph     234.56
2 (listHead)      4     -1   Adams, Adam         0.00
3
4                 0      2   Another, Anita    876.54
5
6
7
In the above example, ListHead would be set to 2, the location of the first entry in the list. Notice that entries 3 and 5 through 7 are not part of the list. These cells are available for any additions to the list. By creating a ListFree integer variable, a free list could be created to keep track of which cells are available. If all entries are in use, the size of the array would have to be increased or some elements would have to be deleted before new entries could be stored in the list.
The following code would traverse the list and display names and account balance:
i := listHead
while i ≥ 0 // loop through the list
    print i, Records[i].name, Records[i].balance // print entry
    i := Records[i].next
When faced with a choice, the advantages of this approach include:
The linked list is relocatable, meaning it can be moved about in memory at will, and it can also be quickly and
directly serialized for storage on disk or transfer over a network.
Especially for a small list, array indexes can occupy significantly less space than a full pointer on many
architectures.
Locality of reference can be improved by keeping the nodes together in memory and by periodically rearranging
them, although this can also be done in a general store.
Naïve dynamic memory allocators can produce an excessive amount of overhead storage for each node allocated;
almost no allocation overhead is incurred per node in this approach.
Seizing an entry from a pre-allocated array is faster than using dynamic memory allocation for each node, since
dynamic memory allocation typically requires a search for a free memory block of the desired size.
This approach has one main disadvantage, however: it creates and manages a private memory space for its nodes.
This leads to the following issues:
It increases complexity of the implementation.
Growing a large array when it is full may be difficult or impossible, whereas finding space for a new linked list
node in a large, general memory pool may be easier.
Adding elements to a dynamic array will occasionally (when it is full) unexpectedly take linear (O(n)) instead of
constant time (although it's still an amortized constant).
Using a general memory pool leaves more memory for other data if the list is smaller than expected or if many
nodes are freed.
For these reasons, this approach is mainly used for languages that do not support dynamic memory allocation. These
disadvantages are also mitigated if the maximum size of the list is known at the time the array is created.
Language support
Many programming languages such as Lisp and Scheme have singly linked lists built in. In many functional
languages, these lists are constructed from nodes, each called a cons or cons cell. The cons has two fields: the car, a
reference to the data for that node, and the cdr, a reference to the next node. Although cons cells can be used to build
other data structures, this is their primary purpose.
In languages that support abstract data types or templates, linked list ADTs or templates are available for building
linked lists. In other languages, linked lists are typically built using references together with records.
For example, given a linked list of families in which each family node holds the head of a linked list of its members, the following pseudocode prints every family and its members:
aFamily := Families // start at head of families list
while aFamily ≠ null // loop through list of families
    print information about family
    aMember := aFamily.members // get head of list of this family's members
    while aMember ≠ null // loop through list of members
        print information about member
        aMember := aMember.next
    aFamily := aFamily.next
Speeding up search
Finding a specific element in a linked list, even if it is sorted, normally requires O(n) time (linear search). This is one
of the primary disadvantages of linked lists over other data structures. In addition to the variants discussed above,
below are two simple ways to improve search time.
In an unordered list, one simple heuristic for decreasing average search time is the move-to-front heuristic, which
simply moves an element to the beginning of the list once it is found. This scheme, handy for creating simple caches,
ensures that the most recently used items are also the quickest to find again.
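A pseudocode sketch of a search that applies the move-to-front heuristic (assuming each node stores its element in a data field, as in the examples above):
function findMoveToFront(List list, value)
    prev := null
    node := list.firstNode
    while node ≠ null and node.data ≠ value
        prev := node
        node := node.next
    if node ≠ null and prev ≠ null // found, and not already at the front
        prev.next := node.next // unlink the node
        node.next := list.firstNode // relink it at the beginning
        list.firstNode := node
    return node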
Another common approach is to "index" a linked list using a more efficient external data structure. For example, one
can build a red-black tree or hash table whose elements are references to the linked list nodes. Multiple such indexes
can be built on a single list. The disadvantage is that these indexes may need to be updated each time a node is added
or removed (or at least, before that index is used again).
Notes
[1] Gerald Kruse. CS 240 Lecture Notes (http://www.juniata.edu/faculty/kruse/cs240/syllabus.htm): Linked Lists Plus: Complexity Trade-offs (http://www.juniata.edu/faculty/kruse/cs240/linkedlist2.htm). Juniata College. Spring 2008.
[2] Day 1 Keynote - Bjarne Stroustrup: C++11 Style (http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Keynote-Bjarne-Stroustrup-Cpp11-Style) at GoingNative 2012 on channel9.msdn.com from minute 45 or foil 44
[3] Number crunching: Why you should never, ever, EVER use linked-list in your code again (http://kjellkod.wordpress.com/2012/02/25/why-you-should-never-ever-ever-use-linked-list-in-your-code-again/) at kjellkod.wordpress.com
[4] The amount of control data required for a dynamic array is usually of the form K + B·d, where K is a per-array constant, B is a per-dimension constant, and d is the number of dimensions. K and B are typically on the order of 10 bytes.
[5] Ford, William and Topp, William Data Structures with C++ using STL Second Edition (2002). Prentice-Hall. ISBN 0-13-085850-1, pp.
466-467
References
Juan, Angel (2006). "Ch20 Data Structures; ID06 - PROGRAMMING with JAVA (slide part of the book "Big Java", by Cay S. Horstmann)" (http://www.uoc.edu/in3/emath/docs/java/ch20.pdf) (PDF). p. 3
"Definition of a linked list" (http://nist.gov/dads/HTML/linkedList.html). National Institute of Standards and
Technology. 2004-08-16. Retrieved 2004-12-14.
Antonakos, James L.; Mansfield, Kenneth C., Jr. (1999). Practical Data Structures Using C/C++. Prentice-Hall. pp. 165–190. ISBN 0-13-280843-9.
Collins, William J. (2005) [2002]. Data Structures and the Java Collections Framework. New York: McGraw Hill. pp. 239–303. ISBN 0-07-282379-8.
Cormen, Thomas H.; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2003). Introduction to Algorithms. MIT Press. pp. 205–213 & 501–505. ISBN 0-262-03293-7.
Cormen, Thomas H.; Charles E. Leiserson; Ronald L. Rivest; Clifford Stein (2001). "10.2: Linked lists". Introduction to Algorithms (2nd ed.). MIT Press. pp. 204–209. ISBN 0-262-03293-7.
Green, Bert F. Jr. (1961). "Computer Languages for Symbol Manipulation". IRE Transactions on Human Factors in Electronics (2): 3–8. doi: 10.1109/THFE2.1961.4503292 (http://dx.doi.org/10.1109/THFE2.1961.4503292).
McCarthy, John (1960). "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part
I" (http://www-formal.stanford.edu/jmc/recursive.html). Communications of the ACM 3 (4): 184. doi:
10.1145/367177.367199 (http://dx.doi.org/10.1145/367177.367199).
Knuth, Donald (1997). "2.2.3-2.2.5". Fundamental Algorithms (3rd ed.). Addison-Wesley. pp. 254–298. ISBN 0-201-89683-4.
Newell, Allen; Shaw, F. C. (1957). "Programming the Logic Theory Machine". Proceedings of the Western Joint Computer Conference: 230–240.
Parlante, Nick (2001). "Linked list basics" (http://cslibrary.stanford.edu/103/LinkedListBasics.pdf). Stanford University. Retrieved 2009-09-21.
Sedgewick, Robert (1998). Algorithms in C. Addison Wesley. pp. 90–109. ISBN 0-201-31452-5.
Shaffer, Clifford A. (1998). A Practical Introduction to Data Structures and Algorithm Analysis. New Jersey: Prentice Hall. pp. 77–102. ISBN 0-13-660911-2.
Wilkes, Maurice Vincent (1964). "An Experiment with a Self-compiling Compiler for a Simple List-Processing
Language". Annual Review in Automatic Programming (Pergamon Press) 4 (1): 1. doi:
10.1016/0066-4138(64)90013-8 (http://dx.doi.org/10.1016/0066-4138(64)90013-8).
Wilkes, Maurice Vincent (1964). "Lists and Why They are Useful". Proceeds of the ACM National Conference,
Philadelphia 1964 (ACM) (P64): F11.
External links
Introduction to Circular Linked (http://scanftree.com/Data_Structure/Circular) from the Scanftree
Description (http://nist.gov/dads/HTML/linkedList.html) from the Dictionary of Algorithms and Data Structures
Introduction to Linked Lists (http://cslibrary.stanford.edu/103/), Stanford University Computer Science Library
Linked List Problems (http://cslibrary.stanford.edu/105/), Stanford University Computer Science Library
Open Data Structures - Chapter 3 - Linked Lists (http://opendatastructures.org/versions/edition-0.1e/ods-java/3_Linked_Lists.html)
Patent for the idea of having nodes which are in several linked lists simultaneously (http://www.google.com/patents?vid=USPAT7028023) (note that this technique was widely used for many decades before the patent was granted)
Doubly linked list
A doubly-linked list whose nodes contain three fields: an integer value, the link to the next node, and the link to the previous node.
The two node links allow traversal of the list in either direction. While adding or removing a node in a doubly-linked list requires changing more links than the same operations on a singly linked list, the operations are simpler and potentially more efficient (for nodes other than first nodes) because there is no need to keep track of the previous node during traversal, and no need to traverse the list to find the previous node, so that its link can be modified.
Basic algorithms
Open doubly-linked lists
record DoublyLinkedNode {
prev // A reference to the previous node
next // A reference to the next node
data // Data or a reference to data
}
record DoublyLinkedList {
DoublyLinkedNode firstNode
DoublyLinkedNode lastNode
}
Advanced concepts
Asymmetric doubly-linked list
An asymmetric doubly-linked list is somewhere between the singly-linked list and the regular doubly-linked list. It shares some features with the singly linked list (single-direction traversal) and others from the doubly-linked list (ease of modification).
It is a list where each node's previous link points not to the previous node, but to the link to itself. While this makes
little difference between nodes (it just points to an offset within the previous node), it changes the head of the list: It
allows the first node to modify the firstNode link easily.[1][2]
As long as a node is in a list, its previous link is never null.
Inserting a node
To insert a node before another, we change the link that pointed to the old node, using the prev link; then set the new
node's next link to point to the old node, and change that node's prev link accordingly.
function insertBefore(Node node, Node newNode)
    if node.prev == null
        error "The node is not in a list"
    newNode.prev := node.prev
    atAddress(newNode.prev) := newNode
    newNode.next := node
    node.prev := addressOf(newNode.next)
function insertAfter(Node node, Node newNode)
    newNode.next := node.next
    if newNode.next != null
        newNode.next.prev := addressOf(newNode.next)
    node.next := newNode
    newNode.prev := addressOf(node.next)
Deleting a node
To remove a node, we simply modify the link pointed by prev, regardless of whether the node was the first one of
the list.
function remove(Node node)
    atAddress(node.prev) := node.next
    if node.next != null
        node.next.prev := node.prev
    destroy node
References
[1] http://www.codeofhonor.com/blog/avoiding-game-crashes-related-to-linked-lists
[2] https://github.com/webcoyote/coho/blob/master/Base/List.h
Stack (abstract data type)
A stack may be implemented to have a bounded capacity. If the stack is full and does not contain enough space to
accept an entity to be pushed, the stack is then considered to be in an overflow state. The pop operation removes an
item from the top of the stack. A pop either reveals previously concealed items or results in an empty stack, but, if
the stack is empty, it goes into underflow state, which means no items are present in stack to be removed.
A stack is a restricted data structure, because only a small number of operations are performed on it. The nature of
the pop and push operations also means that stack elements have a natural order. Elements are removed from the
stack in the reverse order to the order of their addition. Therefore, the lower elements are those that have been on the
stack the longest.[1]
History
The stack was first proposed in 1946, in the computer design of Alan M. Turing (who used the terms "bury" and "unbury") as a means of calling and returning from subroutines. The Germans Klaus
Samelson and Friedrich L. Bauer proposed the idea in 1955 and filed a patent in 1957. The same concept was
developed, independently, by the Australian Charles Leonard Hamblin in the first half of 1957.[2]
Abstract definition
A stack is a basic computer science data structure and can be defined in an abstract, implementation-free manner, or it can be generally defined as a linear list of items in which all additions and deletions are restricted to one end, the top.
This is a VDM (Vienna Development Method) description of a stack:[3]
Function signatures:
init: -> Stack
push: N x Stack -> Stack
top: Stack -> (N U ERROR)
pop: Stack -> Stack
isempty: Stack -> Boolean
Inessential operations
In many implementations, a stack has more operations than "push" and "pop". An example is "top of stack", or
"peek", which observes the top-most element without removing it from the stack.[4] Since this can be done with a
"pop" and a "push" with the same data, it is not essential. An underflow condition can occur in the "stack top"
operation if the stack is empty, the same as "pop". Often implementations have a function which just reports whether the stack is empty.
Software stacks
Implementation
In most high level languages, a stack can be easily implemented either through an array or a linked list. What
identifies the data structure as a stack in either case is not the implementation but the interface: the user is only
allowed to pop or push items onto the array or linked list, with few other helper operations. The following will
demonstrate both implementations, using C.
Array
The array implementation aims to create an array where the first element (usually at the zero-offset) is the bottom.
That is, array[0] is the first element pushed onto the stack and the last element popped off. The program must
keep track of the size, or the length of the stack. The stack itself can therefore be effectively implemented as a
two-element structure in C:
#include <stdio.h>    /* fputs(), stderr - used by push() and pop() below */
#include <stdlib.h>   /* abort() */
#include <stddef.h>   /* size_t */
#define STACKSIZE 100 /* stack capacity; any fixed bound will do */
typedef struct {
    size_t size;
    int items[STACKSIZE];
} STACK;
The push() operation is used both to initialize the stack, and to store values to it. It is responsible for inserting
(copying) the value into the ps->items[] array and for incrementing the element counter (ps->size). In a
responsible C implementation, it is also necessary to check whether the array is already full to prevent an overrun.
void push(STACK *ps, int x)
{
    if (ps->size == STACKSIZE) {
        fputs("Error: stack overflow\n", stderr);
        abort();
    } else
        ps->items[ps->size++] = x;
}
The pop() operation is responsible for removing a value from the stack, and decrementing the value of
ps->size. A responsible C implementation will also need to check that the array is not already empty.
int pop(STACK *ps)
{
    if (ps->size == 0) {
        fputs("Error: stack underflow\n", stderr);
        abort();
    } else
        return ps->items[--ps->size];
}
If we use a dynamic array, then we can implement a stack that can grow or shrink as much as needed. The size of the
stack is simply the size of the dynamic array. A dynamic array is a very efficient implementation of a stack, since
adding items to or removing items from the end of a dynamic array is amortized O(1) time.
Linked list
The linked-list implementation is equally simple and straightforward. In fact, a simple singly linked list is sufficient to implement a stack: it only requires that the head node or element can be removed, or popped, and a node can only be inserted by becoming the new head node.
Unlike the array implementation, our structure typedef corresponds not to the entire stack structure, but to a single
node:
typedef struct stack {
    int data;
    struct stack *next;
} STACK;
Such a node is identical to a typical singly linked list node, at least to those that are implemented in C.
The push() operation both initializes an empty stack, and adds a new node to a non-empty one. It works by
receiving a data value to push onto the stack, along with a target stack, creating a new node by allocating memory for
it, and then inserting it into a linked list as the new head:
void push(STACK **head, int value)
{
    STACK *node = malloc(sizeof(STACK));
    if (node == NULL) {
        fputs("Error: no space available for node\n", stderr);
        abort();
    } else {
        /* initialize node */
        node->data = value;
        node->next = empty(*head) ? NULL : *head; /* insert new head if any */
        *head = node;
    }
}
A pop() operation removes the head from the linked list, and assigns the pointer to the head to the previous second
node. It checks whether the list is empty before popping from it:
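A sketch of such a pop() in C, consistent with the push() above; empty() is assumed to simply test whether the head pointer is NULL:
int pop(STACK **head)
{
    if (empty(*head)) {                        /* nothing to pop */
        fputs("Error: stack underflow\n", stderr);
        abort();
    } else {
        STACK *top = *head;                    /* unlink the current head */
        int value = top->data;
        *head = top->next;                     /* the second node becomes the new head */
        free(top);
        return value;
    }
}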
Hardware stacks
A common use of stacks at the architecture level is as a means of allocating and accessing memory.
A typical stack, storing local data and call information for nested procedure calls (not
necessarily nested procedures!). This stack grows downward from its origin. The
stack pointer points to the current topmost datum on the stack. A push operation
decrements the pointer and copies the data to the stack; a pop operation copies data
from the stack and then increments the pointer. Each procedure called in the program
stores procedure return information (in yellow) and local data (in other colors) by
pushing them onto the stack. This type of stack implementation is extremely
common, but it is vulnerable to buffer overflow attacks (see the text).
Swap or exchange: the two topmost items on the stack exchange places.
Rotate (or Roll): the n topmost items are moved on the stack in a rotating fashion. For example, if n=3, items 1, 2,
and 3 on the stack are moved to positions 2, 3, and 1 on the stack, respectively. Many variants of this operation
are possible, with the most common being called left rotate and right rotate.
Stacks are often visualized growing from the bottom up (like real-world stacks). They may also be visualized with the top of the stack in a fixed position (as in a coin holder or a Pez dispenser), or growing from left to right, so that "topmost" becomes "rightmost". This visualization may be independent of the actual structure of the stack in memory. This means that a right rotate will move the first element to the third position, the second to the first and the third to the second. Here are two equivalent visualizations of this process:
apple                            banana
banana     ===right rotate==>    cucumber
cucumber                         apple

cucumber                         apple
banana     ===left rotate==>     cucumber
apple                            banana
A stack is usually represented in computers by a block of memory cells, with the "bottom" at a fixed location, and
the stack pointer holding the address of the current "top" cell in the stack. The top and bottom terminology are used
irrespective of whether the stack actually grows towards lower memory addresses or towards higher memory
addresses.
Pushing an item on to the stack adjusts the stack pointer by the size of the item (either decrementing or incrementing,
depending on the direction in which the stack grows in memory), pointing it to the next cell, and copies the new top
item to the stack area. Depending again on the exact implementation, at the end of a push operation, the stack pointer
may point to the next unused location in the stack, or it may point to the topmost item in the stack. If the stack points
to the current topmost item, the stack pointer will be updated before a new item is pushed onto the stack; if it points
to the next available location in the stack, it will be updated after the new item is pushed onto the stack.
Popping the stack is simply the inverse of pushing. The topmost item in the stack is removed and the stack pointer is
updated, in the opposite order of that used in the push operation.
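To make these pointer adjustments concrete, here is a small illustrative sketch (not taken from the original text) of a downward-growing stack kept in a fixed block of memory, with the stack pointer pointing at the current topmost item; the block size and names are arbitrary:

#include <stdio.h>

#define STACK_CELLS 1024

static int  block[STACK_CELLS];          /* the reserved block of memory */
static int *sp = block + STACK_CELLS;    /* stack pointer; this value means "empty" */

/* Push: decrement the pointer first, then copy the item into the new top cell. */
static void push(int value)
{
    if (sp == block) {                   /* no cell left: stack overflow */
        fputs("stack overflow\n", stderr);
        return;
    }
    *--sp = value;
}

/* Pop: copy the topmost item out, then increment the pointer. */
static int pop(void)
{
    if (sp == block + STACK_CELLS) {     /* nothing stored: stack underflow */
        fputs("stack underflow\n", stderr);
        return 0;
    }
    return *sp++;
}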
Hardware support
Stack in main memory
Most CPUs have registers that can be used as stack pointers. Processor families like the x86, Z80, 6502, and many
others have special instructions that implicitly use a dedicated (hardware) stack pointer to conserve opcode space.
Some processors, like the PDP-11 and the 68000, also have special addressing modes for implementation of stacks,
typically with a semi-dedicated stack pointer as well (such as A7 in the 68000). However, in most processors, several
different registers may be used as additional stack pointers as needed (whether updated via addressing modes or via
add/sub instructions).
Stack in registers or dedicated memory
The x87 floating point architecture is an example of a set of registers organised as a stack where direct access to
individual registers (relative the current top) is also possible. As with stack-based machines in general, having the
top-of-stack as an implicit argument allows for a small machine code footprint with a good usage of bus bandwidth
and code caches, but it also prevents some types of optimizations possible on processors permitting random access to
the register file for all (two or three) operands. A stack structure also makes superscalar implementations with
register renaming (for speculative execution) somewhat more complex to implement, although it is still feasible, as
exemplified by modern x87 implementations.
Sun SPARC, AMD Am29000, and Intel i960 are all examples of architectures using register windows within a
register-stack as another strategy to avoid the use of slow main memory for function arguments and return values.
There are also a number of small microprocessors that implement a stack directly in hardware, and some
microcontrollers have a fixed-depth stack that is not directly accessible. Examples are the PIC microcontrollers, the
Computer Cowboys MuP21, the Harris RTX line, and the Novix NC4016. Many stack-based microprocessors were
used to implement the programming language Forth at the microcode level. Stacks were also used as a basis of a
number of mainframes and mini computers. Such machines were called stack machines, the most famous being the
Burroughs B5000.
Applications
Stacks have numerous applications. We see stacks in everyday life, from the books in our library to the sheaf of
papers that we keep in our printer tray. All of them follow the Last In First Out (LIFO) logic: when we add a book to
a pile of books, we add it to the top of the pile, and when we remove a book from the pile, we generally remove it
from the top of the pile.
Given below are a few applications of stacks in the world of computers:
return error
end if
n = floor(n / 2)
end while
while s is not empty do
output(s.pop())
end while
end function
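The fragment above appears to be the tail of the usual routine that converts a decimal number to binary: the remainders of repeated division by two are pushed onto a stack and then popped off, which reverses them into the correct order. A minimal C sketch of the whole idea (illustrative only, not the article's own code):

#include <stdio.h>

void print_binary(unsigned int n)
{
    int stack[sizeof n * 8];         /* enough room for every bit of n */
    int top = 0;

    if (n == 0) {                    /* special case: a single 0 digit */
        puts("0");
        return;
    }
    while (n > 0) {
        stack[top++] = n % 2;        /* push the least significant remaining bit */
        n = n / 2;                   /* n = floor(n / 2), as in the pseudocode above */
    }
    while (top > 0)
        printf("%d", stack[--top]);  /* pop and print, most significant bit first */
    putchar('\n');
}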
Towers of Hanoi
One of the most interesting applications of stacks can be found in solving the puzzle called the Tower of Hanoi.
According to an old Brahmin story, the existence of the universe is calculated in terms of the time taken by a number
of monks, who are working all the time, to move 64 disks from one pole to another. But there are some rules about
how this should be done, which are: only one disk may be moved at a time; each move takes the upper disk from one
pole and places it on top of another pole; and no disk may ever be placed on top of a smaller disk.
[Figure: Towers of Hanoi]
The C++ code for this solution can be implemented in two ways:
First implementation (using stacks implicitly by recursion)
#include <stdio.h>
void TowersofHanoi(int n, int a, int b, int c)
{
if(n > 0)
{
TowersofHanoi(n-1, a, c, b);
//recursion
printf("> Move top disk from tower %d to tower %d.\n", a, b);
TowersofHanoi(n-1, c, b, a);
//recursion
}
}
Second implementation (using stacks explicitly)
// Global variable, tower[1:3] are three towers
arrayStack<int> tower[4];

void TowerofHanoi(int n)
{
    // Preprocessor for moveAndShow.
    for (int d = n; d > 0; d--)    // initialize
        tower[1].push(d);          // add disk d to tower 1
    moveAndShow(n, 1, 2, 3);       /* move n disks from tower 1 to
                                      tower 2, using tower 3 as the
                                      intermediate tower */
}

void moveAndShow(int n, int a, int b, int c)
{
    // Move the top n disks from tower a to tower b, showing states.
    // Use tower c for intermediate storage.
    if (n > 0)
    {
        moveAndShow(n-1, a, c, b);  // recursion
        int d = tower[a].top();     // move a disk from the top of tower a
        tower[a].pop();             // to the top of tower b
        tower[b].push(d);
        showState();                // show the state of the three towers
        moveAndShow(n-1, c, b, a);  // recursion
    }
}
A stack can also be used to evaluate a fully parenthesized infix expression. The expression is read one character at a
time, and an action is performed that depends on the type of the character read:
(2.1) Opening bracket: push it onto the stack and read the next character.
(2.2) Number: push it onto the stack and read the next character.
(2.3) Operator: push it onto the stack and read the next character.
(2.4) Closing bracket: pop the topmost operand, the operator below it and the next operand, pop the matching opening bracket, evaluate the sub-expression and push the result back onto the stack; then read the next character.
(2.5) New line character: pop the result from the stack, print it and STOP.
Result: The evaluation of the fully parenthesized infix expression is printed as follows:
Input String: (((2 * 5) - (1 * 2)) / (11 - 9))

Input Symbol | Stack (from bottom to top)  | Operation
(            | (                           |
(            | ( (                         |
(            | ( ( (                       |
2            | ( ( ( 2                     |
*            | ( ( ( 2 *                   |
5            | ( ( ( 2 * 5                 |
)            | ( ( 10                      | 2 * 5 = 10 and push
-            | ( ( 10 -                    |
(            | ( ( 10 - (                  |
1            | ( ( 10 - ( 1                |
*            | ( ( 10 - ( 1 *              |
2            | ( ( 10 - ( 1 * 2            |
)            | ( ( 10 - 2                  | 1 * 2 = 2 & Push
)            | ( 8                         | 10 - 2 = 8 & Push
/            | ( 8 /                       |
(            | ( 8 / (                     |
11           | ( 8 / ( 11                  |
-            | ( 8 / ( 11 -                |
9            | ( 8 / ( 11 - 9              |
)            | ( 8 / 2                     | 11 - 9 = 2 & Push
)            | 4                           | 8 / 2 = 4 & Push
New line     | Empty                       | Pop the result, 4, and print it
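A compact C sketch of this single-stack evaluation (not part of the original article; it assumes single-character operators, non-negative integer operands, well-formed fully parenthesized input and integer division):

#include <ctype.h>

/* One slot of the stack: either an operator / parenthesis (op != 0)
   or an operand (op == 0, val holds the number). */
struct slot { char op; int val; };

static int apply(int x, char op, int y)
{
    switch (op) {
    case '+': return x + y;
    case '-': return x - y;
    case '*': return x * y;
    default:  return x / y;
    }
}

int evaluate(const char *s)
{
    struct slot stack[100];
    int top = 0;

    for (; *s != '\0' && *s != '\n'; s++) {
        if (isspace((unsigned char)*s)) {
            continue;                                    /* skip blanks */
        } else if (isdigit((unsigned char)*s)) {
            int n = 0;
            while (isdigit((unsigned char)*s))           /* read a multi-digit number */
                n = n * 10 + (*s++ - '0');
            s--;
            stack[top].op = 0; stack[top].val = n; top++;              /* (2.2) push the number */
        } else if (*s == ')') {
            int y  = stack[--top].val;                   /* (2.4) pop operand, operator, operand */
            char o = stack[--top].op;
            int x  = stack[--top].val;
            --top;                                       /* discard the matching '(' */
            stack[top].op = 0; stack[top].val = apply(x, o, y); top++; /* push the result */
        } else {
            stack[top].op = *s; stack[top].val = 0; top++;             /* (2.1)/(2.3) push '(' or operator */
        }
    }
    return stack[top - 1].val;                           /* (2.5) the result is on top */
}

For example, evaluate("(((2 * 5) - (1 * 2)) / (11 - 9))") returns 4, exactly as in the trace above.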
We do not know what to do if an operator is read as an input character when the expression is not fully
parenthesized. Implementing a priority rule for operators gives a solution to this problem.
The priority rule: when an operator is read, perform a comparative priority check before pushing it. While the top of
the stack contains an operator of priority higher than or equal to that of the input operator, pop it and process it (for
the evaluation below, processing an operator means applying it to the two topmost values of the integer stack and
pushing the result back). Keep performing this priority check until the top of the stack either contains an operator of
lower priority or does not contain an operator, and then push the input operator.
Data Structure Requirement for this problem: a character stack and an integer stack
Algorithm:
1. Read an input character.
2. Perform an action that depends on the type of the character read:
(2.1) Opening parenthesis: push it onto the character stack and read the next character.
(2.2) Number: push it onto the integer stack and read the next character.
(2.3) Operator: apply the priority rule above, then push the operator onto the character stack and read the next character.
(2.4) Closing parenthesis: pop operators from the character stack one by one, applying each to the two topmost values of the integer stack and pushing the result back, until the matching opening parenthesis is popped and discarded; then read the next character.
(2.5) New line character: process any operators remaining on the character stack in the same way, then print the value left on the integer stack and STOP.
Result: The evaluation of an infix expression that is not fully parenthesized is printed as follows:
Input String: (2 * 5 - 1 * 2) / (11 - 9)
Input Symbol | Character Stack (from bottom to top) | Integer Stack (from bottom to top) | Operation performed
(            | (        |          |
2            | (        | 2        |
*            | ( *      | 2        | Push as * has higher priority
5            | ( *      | 2 5      |
-            | ( -      | 10       | Since '-' has less priority, we do 2 * 5 = 10 and push it, then push '-'
1            | ( -      | 10 1     |
*            | ( - *    | 10 1     | Push as * has higher priority
2            | ( - *    | 10 1 2   |
)            | ( -      | 10 2     | Pop *: 1 * 2 = 2, push it
             | (        | 8        | Pop -: 10 - 2 = 8, push it
             | (empty)  | 8        | Pop ( and discard it
/            | /        | 8        |
(            | / (      | 8        |
11           | / (      | 8 11     |
-            | / ( -    | 8 11     |
9            | / ( -    | 8 11 9   |
)            | / (      | 8 2      | Pop -: 11 - 9 = 2, push it
             | /        | 8 2      | Pop ( and discard it
New line     | (empty)  | 4        | Pop /: 8 / 2 = 4, push it, then print the result, 4
The same expression can also be written in prefix notation as / - * 2 5 * 1 2 - 11 9 and evaluated with two stacks: the
symbols are pushed onto a character stack as they are read, and when the new line character arrives they are popped
one by one (that is, processed from right to left), with operands transferred to an integer stack and each operator
applied to the two topmost integers:

Symbol   | Character Stack (from bottom to top) | Integer Stack (from bottom to top) | Operation
/        | /                          |          | Push
-        | / -                        |          | Push
*        | / - *                      |          | Push
2        | / - * 2                    |          | Push
5        | / - * 2 5                  |          | Push
*        | / - * 2 5 *                |          | Push
1        | / - * 2 5 * 1              |          | Push
2        | / - * 2 5 * 1 2            |          | Push
-        | / - * 2 5 * 1 2 -          |          | Push
11       | / - * 2 5 * 1 2 - 11       |          | Push
9        | / - * 2 5 * 1 2 - 11 9     |          | Push
New line | / - * 2 5 * 1 2 - 11       | 9        | Pop 9 and push it onto the integer stack
         | / - * 2 5 * 1 2 -          | 9 11     | Pop 11 and push it
         | / - * 2 5 * 1 2            | 2        | Pop -: 11 - 9 = 2
         | / - * 2 5 * 1              | 2 2      | Pop 2 and push it
         | / - * 2 5 *                | 2 2 1    | Pop 1 and push it
         | / - * 2 5                  | 2 2      | Pop *: 1 * 2 = 2
         | / - * 2                    | 2 2 5    | Pop 5 and push it
         | / - *                      | 2 2 5 2  | Pop 2 and push it
         | / -                        | 2 2 10   | Pop *: 5 * 2 = 10
         | /                          | 2 8      | Pop -: 10 - 2 = 8
         | Stack is empty             | 4        | Pop /: 8 / 2 = 4; print 4
Input | Operation    | Stack (topmost element first)
1     | Push operand | 1
2     | Push operand | 2, 1
4     | Push operand | 4, 2, 1
*     | Multiply     | 8, 1
+     | Add          | 9
3     | Push operand | 3, 9
+     | Add          | 12
The final result, 12, lies on the top of the stack at the end of the calculation.
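A minimal C sketch of this postfix evaluation (illustrative, not taken from the article): operands are pushed, and each operator pops the two topmost operands and pushes the result. The expression is passed as a writable, space-separated string because strtok() modifies its argument.

#include <stdlib.h>
#include <string.h>

int eval_postfix(char *expr)
{
    int stack[100], top = 0;

    for (char *tok = strtok(expr, " "); tok != NULL; tok = strtok(NULL, " ")) {
        if (strchr("+-*/", tok[0]) != NULL && tok[1] == '\0') {
            int y = stack[--top];            /* pop the two topmost operands */
            int x = stack[--top];
            switch (tok[0]) {                /* apply the operator and push the result */
            case '+': stack[top++] = x + y; break;
            case '-': stack[top++] = x - y; break;
            case '*': stack[top++] = x * y; break;
            case '/': stack[top++] = x / y; break;
            }
        } else {
            stack[top++] = atoi(tok);        /* push the operand */
        }
    }
    return stack[top - 1];                   /* the result lies on top of the stack */
}

For example, with char expr[] = "1 2 4 * + 3 +"; eval_postfix(expr) returns 12, matching the table above.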
Example in C
#include <stdio.h>

int main()
{
    int a[100], i;                      /* a[] holds the stack contents, i is the stack pointer */
    printf("To pop enter -1\n");
    for (i = 0;;)
    {
        printf("Push ");
        scanf("%d", &a[i]);
        if (a[i] == -1)                 /* -1 is the sentinel that requests a pop */
        {
            if (i == 0)
            {
                printf("Underflow\n");  /* nothing left to pop */
            }
            else
            {
                printf("pop = %d\n", a[--i]);
            }
        }
        else if (i == 99)               /* the array is full: refuse the push */
        {
            printf("Overflow\n");
        }
        else
        {
            i++;                        /* accept the push */
        }
    }
}
{ADT of STACK}
{dictionary}
const
    mark = '.';
var
    data : stack;
    f : text;
    cc : char;
    ccInt, cc1, cc2 : integer;
{functions}
IsOperand (cc : char) : boolean;          {JUST Prototype}
    {return TRUE if cc is operand}
ChrToInt (cc : char) : integer;           {JUST Prototype}
    {change char to integer}
Operator (cc1, cc2 : integer) : integer;  {JUST Prototype}
    {operate two operands}
{algorithms}
begin
    assign (f, cc);
    reset (f);
    read (f, cc);                         {first elmt}
    if (cc = mark) then
    begin
        writeln ('empty archives !');
    end
    else
    begin
        repeat
            if (IsOperand (cc)) then
            begin
                ccInt := ChrToInt (cc);
A stack can similarly be used to convert a fully parenthesized infix expression into a postfix expression. The
expression is read one character at a time, and an action is performed that depends on the type of the character read:
(2.1) Opening parenthesis: push it onto the stack and read the next character.
(2.2) Number: print it and read the next character.
(2.3) Operator: push it onto the stack and read the next character.
(2.4) Closing parenthesis: pop from the stack; if the popped element is an operator, print it and pop again; if it is an opening parenthesis, ignore it and read the next character.
(2.5) New line character: STOP.
Therefore, the conversion of a fully parenthesized infix expression into a postfix expression proceeds as follows; the final output is the last entry of the rightmost column:
Input String: (((8 + 1) - (7 - 4)) / (11 - 9))

Input     | Operation                                                      | Stack (after op) | Output on monitor
(         | (2.1) Push                                                     | (                |
(         | (2.1) Push                                                     | ( (              |
(         | (2.1) Push                                                     | ( ( (            |
8         | (2.2) Print it                                                 | ( ( (            | 8
+         | (2.3) Push                                                     | ( ( ( +          | 8
1         | (2.2) Print it                                                 | ( ( ( +          | 8 1
)         | (2.4) Pop from the stack: since the popped element is '+', print it  | ( ( (     | 8 1 +
          | (2.4) Pop from the stack: since the popped element is '(', ignore it and read the next character | ( ( | 8 1 +
-         | (2.3) Push                                                     | ( ( -            | 8 1 +
(         | (2.1) Push                                                     | ( ( - (          | 8 1 +
7         | (2.2) Print it                                                 | ( ( - (          | 8 1 + 7
-         | (2.3) Push                                                     | ( ( - ( -        | 8 1 + 7
4         | (2.2) Print it                                                 | ( ( - ( -        | 8 1 + 7 4
)         | (2.4) Pop: popped '-', print it                                | ( ( - (          | 8 1 + 7 4 -
          | (2.4) Pop: popped '(', ignore it and read the next character   | ( ( -            | 8 1 + 7 4 -
)         | (2.4) Pop: popped '-', print it                                | ( (              | 8 1 + 7 4 - -
          | (2.4) Pop: popped '(', ignore it and read the next character   | (                | 8 1 + 7 4 - -
/         | (2.3) Push                                                     | ( /              | 8 1 + 7 4 - -
(         | (2.1) Push                                                     | ( / (            | 8 1 + 7 4 - -
11        | (2.2) Print it                                                 | ( / (            | 8 1 + 7 4 - - 11
-         | (2.3) Push                                                     | ( / ( -          | 8 1 + 7 4 - - 11
9         | (2.2) Print it                                                 | ( / ( -          | 8 1 + 7 4 - - 11 9
)         | (2.4) Pop: popped '-', print it                                | ( / (            | 8 1 + 7 4 - - 11 9 -
          | (2.4) Pop: popped '(', ignore it and read the next character   | ( /              | 8 1 + 7 4 - - 11 9 -
)         | (2.4) Pop: popped '/', print it                                | (                | 8 1 + 7 4 - - 11 9 - /
          | (2.4) Pop: popped '(', ignore it and read the next character   | Stack is empty   | 8 1 + 7 4 - - 11 9 - /
New line  | (2.5) STOP                                                     | Stack is empty   | 8 1 + 7 4 - - 11 9 - /
Car 4 is moved to the output track. No other cars can be moved to the output track at this time. The next car, 8, is
moved to holding track H1. Car 5 is output from the input track. Car 6 is then moved to the output track from H2, as
are the 7 from H3, the 8 from H1 and the 9 from H3.
Backtracking
Another important application of stacks is backtracking. Consider the simple example of finding the correct path in a
maze. There are a series of points between the starting point and the destination, and several paths may lead onward
from any point. Suppose we choose a path at random. After following it for a while, we may realise that the path we
have chosen is wrong, so we need a way to return to the point where that path began. Stacks make this possible: as we
move forward we push each point we reach onto the stack, and if we end up on a wrong path we pop the last point
from the stack, return to it, and continue our search for the right path. This is called backtracking.
Quicksort
Sorting means arranging the list of elements in a particular order. In case of numbers, it could be in ascending order,
or in the case of letters, alphabetic order.
Quicksort is an algorithm of the divide and conquer type. In this method, to sort a set of numbers, we reduce it to
two smaller sets, and then sort these smaller sets.
This can be explained with the help of the following example:
Suppose A is a list of the following numbers:
In the reduction step, we find the final position of one of the numbers. In this case, let us assume that we have to find
the final position of 48, which is the first number in the list.
To accomplish this, we adopt the following method. Begin with the last number, and move from right to left.
Compare each number with 48. If the number is smaller than 48, we stop at that number and swap it with 48.
In our case, the number is 24. Hence, we swap 24 and 48.
The numbers 96 and 72 to the right of 48, are greater than 48. Now beginning with 24, scan the numbers in the
opposite direction, that is from left to right. Compare every number with 48 until you find a number that is greater
than 48.
In this case, it is 60. Therefore we swap 48 and 60.
Note that the numbers 12, 24 and 36 to the left of 48 are all smaller than 48. Now, start scanning numbers from 60,
in the right-to-left direction. As soon as you find a lesser number, swap it with 48.
In this case, it is 44. Swap it with 48. The final result is:
Now, beginning with 44, scan the list from left to right, until you find a number greater than 48.
Such a number is 84. Swap it with 48. The final result is:
Now, beginning with 84, traverse the list from right to left, until you reach a number lesser than 48. We do not find
such a number before reaching 48. This means that all the numbers in the list have been scanned and compared with
48. Also, we notice that all numbers less than 48 are to the left of it, and all numbers greater than 48, are to its right.
The final partitions look as follows:
Therefore, 48 has been placed in its proper position and now our task is reduced to sorting the two partitions. The
above step of creating partitions can be repeated with every partition containing 2 or more elements. As we can
process only a single partition at a time, we should be able to keep track of the other partitions, for future processing.
This is done by using two stacks called LOWERBOUND and UPPERBOUND, to temporarily store these partitions.
The addresses of the first and last elements of the partitions are pushed into the LOWERBOUND and
UPPERBOUND stacks respectively. Now, the above reduction step is applied to the partitions only after its
boundary values are popped from the stack.
We can understand this from the following example:
Take the above list A with 12 elements. The algorithm starts by pushing the boundary values of A, that is 1 and 12
into the LOWERBOUND and UPPERBOUND stacks respectively. Therefore the stacks look as follows:
LOWERBOUND: 1
UPPERBOUND: 12
To perform the reduction step, the values of the stack top are popped from the stack. Therefore, both the stacks
become empty.
LOWERBOUND: {empty}
UPPERBOUND: {empty}
Now, the reduction step causes 48 to be fixed to the 5th position and creates two partitions, one from position 1 to 4
and the other from position 6 to 12. Hence, the values 1 and 6 are pushed into the LOWERBOUND stack and 4 and
12 into the UPPERBOUND stack. Therefore the stacks look as follows:
LOWERBOUND: 1, 6
UPPERBOUND: 4, 12
For applying the reduction step again, the values at the stack top are popped. Therefore, the values 6 and 12 are
popped. Therefore the stacks look like:
LOWERBOUND: 1
UPPERBOUND: 4
The reduction step is now applied to the second partition, that is from the 6th to 12th element.
After the reduction step, 98 is fixed in the 11th position. So, the second partition has only one element. Therefore, we
push the upper and lower boundary values of the first partition onto the stack. So, the stacks are as follows:
LOWERBOUND: 1, 6
UPPERBOUND: 4, 10
The processing proceeds in the following way and ends when the stacks do not contain any upper and lower bounds
of the partition to be processed, and the list gets sorted.
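A minimal C sketch of this scheme (illustrative only, not the article's code): the partition step places the first element of a range into its final position, and two explicit boundary stacks, mirroring LOWERBOUND and UPPERBOUND, hold the sub-ranges that still have to be processed. The fixed stack capacity is an assumption suitable only for modest inputs; a robust version would also check for overflow.

void quicksort_iterative(int a[], int n)
{
    int lowerbound[64], upperbound[64];           /* explicit boundary stacks */
    int top = 0;

    lowerbound[top] = 0; upperbound[top] = n - 1; top++;
    while (top > 0) {
        int lo = lowerbound[--top];               /* pop one pending partition */
        int hi = upperbound[top];
        if (lo >= hi)
            continue;                             /* 0 or 1 elements: already sorted */

        /* Reduction step: place a[lo] (the pivot) into its final position
           by scanning alternately from the right and from the left. */
        int pivot = a[lo], i = lo, j = hi;
        while (i < j) {
            while (i < j && a[j] >= pivot) j--;   /* scan right to left */
            a[i] = a[j];
            while (i < j && a[i] <= pivot) i++;   /* scan left to right */
            a[j] = a[i];
        }
        a[i] = pivot;                             /* the pivot is now fixed at index i */

        /* Push the two remaining partitions for later processing. */
        lowerbound[top] = lo;    upperbound[top] = i - 1; top++;
        lowerbound[top] = i + 1; upperbound[top] = hi;    top++;
    }
}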
An element, once popped from the stack, is never pushed back into it. Therefore, each element is pushed and popped
at most once, and the running time of all the statements in the while loop is O(n).
The running time of the algorithm is obtained by adding up the time taken by all of its steps. Each step takes O(n)
time, hence the running time complexity of this algorithm is O(n).
Security
Some computing environments use stacks in ways that may make them vulnerable to security breaches and attacks.
Programmers working in such environments must take special care to avoid the pitfalls of these implementations.
For example, some programming languages use a common stack to store both data local to a called procedure and
the linking information that allows the procedure to return to its caller. This means that the program moves data into
and out of the same stack that contains critical return addresses for the procedure calls. If data is moved to the wrong
location on the stack, or an oversized data item is moved to a stack location that is not large enough to contain it,
return information for procedure calls may be corrupted, causing the program to fail.
Malicious parties may attempt a stack smashing attack that takes advantage of this type of implementation by
providing oversized data input to a program that does not check the length of input. Such a program may copy the
data in its entirety to a location on the stack, and in so doing it may change the return addresses for procedures that
have called it. An attacker can experiment to find a specific type of data that can be provided to such a program such
that the return address of the current procedure is reset to point to an area within the stack itself (and within the data
provided by the attacker), which in turn contains instructions that carry out unauthorized operations.
This type of attack is a variation on the buffer overflow attack and is an extremely frequent source of security
breaches in software, mainly because some of the most popular compilers use a shared stack for both data and
procedure calls, and do not verify the length of data items. Frequently programmers do not write code to verify the
size of data items, either, and when an oversized or undersized data item is copied to the stack, a security breach may
occur.
Further reading
Donald Knuth. The Art of Computer Programming, Volume 1: Fundamental Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89683-4. Section 2.2.1: Stacks, Queues, and Deques, pp. 238–243.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 10.1: Stacks and queues, pp. 200–204.
Queue implementation
Theoretically, one characteristic of a queue is that it does not have a specific capacity. Regardless of how many
elements are already contained, a new element can always be added. It can also be empty, at which point removing
an element will be impossible until a new element has been added again.
Fixed-length arrays are limited in capacity, but it is not true that items need to be copied towards the head of the
queue. The simple trick of turning the array into a closed circle and letting the head and tail drift around endlessly in
that circle makes it unnecessary to ever move items stored in the array. If n is the size of the array, then computing
indices modulo n will turn the array into a circle. This is still the conceptually simplest way to construct a queue in a
high-level language, although it does slow things down a little, because the array indices must be compared to zero
and to the array size, which is comparable to the time taken to check whether an array index is out of bounds (a
check some languages perform anyway). Even so, this is the method of choice for a quick and dirty implementation,
or for any high-level language that does not have pointer syntax. The array size must be declared ahead of time, but
some implementations simply double the declared array size when overflow occurs. Most modern languages with
objects or pointers can implement, or come with libraries for, dynamic lists; such data structures may have no fixed
capacity limit beyond memory constraints. Queue overflow results from trying to add an element to a full queue,
and queue underflow happens when trying to remove an element from an empty queue.
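A minimal C sketch of the circular-array technique just described (illustrative, not from the original text), using index arithmetic modulo the array size and a count field to distinguish a full queue from an empty one:

#define QUEUE_SIZE 8

typedef struct {
    int items[QUEUE_SIZE];
    int head;     /* index of the oldest element           */
    int count;    /* number of elements currently enqueued */
} Queue;          /* initialise with Queue q = {{0}, 0, 0}; */

/* Enqueue at the tail; returns 0 on queue overflow. */
int enqueue(Queue *q, int value)
{
    if (q->count == QUEUE_SIZE)
        return 0;                                     /* overflow: queue is full */
    q->items[(q->head + q->count) % QUEUE_SIZE] = value;
    q->count++;
    return 1;
}

/* Dequeue from the head; returns 0 on queue underflow. */
int dequeue(Queue *q, int *value)
{
    if (q->count == 0)
        return 0;                                     /* underflow: queue is empty */
    *value = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;             /* the head drifts around the circle */
    q->count--;
    return 1;
}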
A bounded queue is a queue limited to a fixed number of items.
There are several efficient implementations of FIFO queues. An efficient implementation is one that can perform the
operations (enqueuing and dequeuing) in O(1) time.
References
General
Donald Knuth. The Art of Computer Programming, Volume 1: Fundamental Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89683-4. Section 2.2.1: Stacks, Queues, and Deques, pp. 238–243.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 10.1: Stacks and queues, pp. 200–204.
William Ford, William Topp. Data Structures with C++ and STL, Second Edition. Prentice Hall, 2002. ISBN 0-13-085850-1. Chapter 8: Queues and Priority Queues, pp. 386–390.
Adam Drozdek. Data Structures and Algorithms in C++, Third Edition. Thomson Course Technology, 2005. ISBN 0-534-49182-0. Chapter 4: Stacks and Queues, pp. 137–169.
Citations
[1] http://download.oracle.com/javase/7/docs/api/java/util/Queue.html
[2] http://download.oracle.com/javase/7/docs/api/java/util/LinkedList.html
[3] http://download.oracle.com/javase/7/docs/api/java/util/ArrayDeque.html
[4] http://www.php.net/manual/en/class.splqueue.php
External links
Queues with algo and 'c' programe (http://scanftree.com/Data_Structure/Queues)
STL Quick Reference (http://www.halpernwightsoftware.com/stdlib-scratch/quickref.html#containers14)
VBScript implementation of stack, queue, deque, and Red-Black Tree (http://www.ludvikjerabek.com/
downloads.html)
Paul E. Black, Bounded queue (http://www.nist.gov/dads/HTML/boundedqueue.html) at the NIST Dictionary of Algorithms and Data Structures.
Double-ended queue
In computer science, a double-ended queue (dequeue, often abbreviated to deque, pronounced deck) is an abstract
data type that generalizes a queue, for which elements can be added to or removed from either the front (head) or
back (tail).[1] It is also often called a head-tail linked list, though properly this refers to a specific data structure
implementation (see below).
Naming conventions
Deque is sometimes written dequeue, but this use is generally deprecated in technical literature or technical writing
because dequeue is also a verb meaning "to remove from a queue". Nevertheless, several libraries and some writers,
such as Aho, Hopcroft, and Ullman in their textbook Data Structures and Algorithms, spell it dequeue. John
Mitchell, author of Concepts in Programming Languages, also uses this terminology.
Operations
The basic operations on a deque are enqueue and dequeue on either end. Also generally implemented are peek
operations, which return the value at that end without dequeuing it.
Names vary between languages; major implementations include:
operation: insert element at back (common names: inject, snoc)
    Ada: Append; C++: push_back; Java: offerLast; Perl: push; PHP: array_push; Python: append; Ruby: push; JavaScript: push
operation: insert element at front (common names: push, cons)
    Ada: Prepend; C++: push_front; Java: offerFirst; Perl: unshift; PHP: array_unshift; Python: appendleft; Ruby: unshift; JavaScript: unshift
operation: remove last element (common name: eject)
    Ada: Delete_Last; C++: pop_back; Java: pollLast; Perl: pop; PHP: array_pop; Python: pop; Ruby: pop; JavaScript: pop
operation: remove first element (common name: pop)
    Ada: Delete_First; C++: pop_front; Java: pollFirst; Perl: shift; PHP: array_shift; Python: popleft; Ruby: shift; JavaScript: shift
operation: examine last element
    Ada: Last_Element; C++: back; Java: peekLast; Perl: $array[-1]; PHP: end; Python: <obj>[-1]; Ruby: last; JavaScript: <obj>[<obj>.length - 1]
operation: examine first element
    Ada: First_Element; C++: front; Java: peekFirst; Perl: $array[0]; PHP: reset; Python: <obj>[0]; Ruby: first; JavaScript: <obj>[0]
Implementations
There are at least two common ways to efficiently implement a deque: with a modified dynamic array or with a
doubly linked list.
The dynamic array approach uses a variant of a dynamic array that can grow from both ends, sometimes called array
deques. These array deques have all the properties of a dynamic array, such as constant-time random access, good
locality of reference, and inefficient insertion/removal in the middle, with the addition of amortized constant-time
insertion/removal at both ends, instead of just one end. Three common implementations include:
Storing deque contents in a circular buffer, and only resizing when the buffer becomes full. This decreases the
frequency of resizings.
Allocating deque contents from the center of the underlying array, and resizing the underlying array when either
end is reached. This approach may require more frequent resizings and waste more space, particularly when
elements are only inserted at one end.
Storing contents in multiple smaller arrays, allocating additional arrays at the beginning or end as needed.
Indexing is implemented by keeping a dynamic array containing pointers to each of the smaller arrays.
Language support
Ada's containers provides the generic packages Ada.Containers.Vectors and Ada.Containers.Doubly_Linked_Lists, for the dynamic array and linked list implementations,
respectively.
C++'s Standard Template Library provides the class templates std::deque and std::list, for the multiple
array and linked list implementations, respectively.
As of Java 6, Java's Collections Framework provides a new Deque [2] interface that provides the functionality of
insertion and removal at both ends. It is implemented by classes such as ArrayDeque [3] (also new in Java 6) and
LinkedList [2], providing the dynamic array and linked list implementations, respectively. However, the
ArrayDeque, contrary to its name, does not support random access.
Python 2.4 introduced the collections module with support for deque objects.
As of PHP 5.3, PHP's SPL extension contains the 'SplDoublyLinkedList' class that can be used to implement deque
data structures. Previously, to make a deque structure, the array functions array_shift/unshift/pop/push had to be used
instead.
GHC's Data.Sequence [3] module implements an efficient, functional deque structure in Haskell. The implementation
uses 2-3 finger trees annotated with sizes. There are other (fast) possibilities to implement purely functional (thus
also persistent) double queues (most using heavily lazy evaluation).[4][5] Kaplan and Tarjan were the first to
implement optimal confluently persistent catenable deques.[6] Their implementation was strictly purely functional in
the sense that it did not use lazy evaluation. Okasaki simplified the data structure by using lazy evaluation with a
bootstrapped data structure and degrading the performance bounds from worst-case to amortized. Kaplan, Okasaki,
and Tarjan produced a simpler, non-bootstrapped, amortized version that can be implemented either using lazy
evaluation or more efficiently using mutation in a broader but still restricted fashion. Mihaesau and Tarjan created a
simpler (but still highly complex) strictly purely functional implementation of catenable deques, and also a much
simpler implementation of strictly purely functional non-catenable deques, both of which have optimal worst-case
bounds.
Complexity
In a doubly linked list implementation and assuming no allocation/deallocation overhead, the time complexity of
all deque operations is O(1). Additionally, the time complexity of insertion or deletion in the middle, given an
iterator, is O(1); however, the time complexity of random access by index is O(n).
In a growing array, the amortized time complexity of all deque operations is O(1). Additionally, the time
complexity of random access by index is O(1); but the time complexity of insertion or deletion in the middle is
O(n).
Applications
One example where a deque can be used is the A-Steal job scheduling algorithm.[7] This algorithm implements task
scheduling for several processors. A separate deque with threads to be executed is maintained for each processor. To
execute the next thread, the processor gets the first element from the deque (using the "remove first element" deque
operation). If the current thread forks, it is put back to the front of the deque ("insert element at front") and a new
thread is executed. When one of the processors finishes execution of its own threads (i.e. its deque is empty), it can
"steal" a thread from another processor: it gets the last element from the deque of another processor ("remove last
element") and executes it.
References
[1] Donald Knuth. The Art of Computer Programming, Volume 1: Fundamental Algorithms, Third Edition. Addison-Wesley, 1997. ISBN 0-201-89683-4. Section 2.2.1: Stacks, Queues, and Deques, pp. 238–243.
[2] http://download.oracle.com/javase/7/docs/api/java/util/Deque.html
[3] http://www.haskell.org/ghc/docs/latest/html/libraries/containers/Data-Sequence.html
[4] C. Okasaki, "Purely Functional Data Structures", September 1996. www.cs.cmu.edu/~rwh/theses/okasaki.pdf
[5] Adam L. Buchsbaum and Robert E. Tarjan. Confluently persistent deques via data structural bootstrapping. Journal of Algorithms, 18(3):513–547, May 1995. (pp. 58, 101, 125)
[6] Haim Kaplan and Robert E. Tarjan. Purely functional representations of catenable sorted lists. In ACM Symposium on Theory of Computing, pages 202–211, May 1996. (pp. 4, 82, 84, 124)
[7] See p. 22.
External links
SGI STL Documentation: deque<T, Alloc> (http://www.sgi.com/tech/stl/Deque.html)
Code Project: An In-Depth Study of the STL Deque Container (http://www.codeproject.com/KB/stl/
vector_vs_deque.aspx)
Diagram of a typical STL deque implementation (http://pages.cpsc.ucalgary.ca/~kremer/STL/1024x768/
deque.html)
Deque implementation in C (http://www.martinbroadhurst.com/articles/deque.html)
VBScript implementation of stack, queue, deque, and Red-Black Tree (http://www.ludvikjerabek.com/
downloads.html)
Multiple implementations of non-catenable deques in Haskell (https://code.google.com/p/deques/source/
browse/)
Circular buffer
A circular buffer, cyclic buffer or ring buffer is a data structure
that uses a single, fixed-size buffer as if it were connected
end-to-end. This structure lends itself easily to buffering data
streams.
Uses
The useful property of a circular buffer is that it does not need to
have its elements shuffled around when one is consumed. (If a
non-circular buffer were used then it would be necessary to shift
all elements when one is consumed.) In other words, the circular
buffer is well suited as a FIFO buffer while a standard,
non-circular buffer is well suited as a LIFO buffer.
Circular buffering makes a good implementation strategy for a
queue that has fixed maximum size. Should a maximum size be
adopted for a queue, then a circular buffer is a completely ideal
implementation; all queue operations are constant time. However,
expanding a circular buffer requires shifting memory, which is
comparatively costly. For arbitrarily expanding queues, a linked list approach may be preferred instead.
How it works
A circular buffer first starts empty and of some predefined length.
For example, this is a 7-element buffer:
Assume that a 1 is written into the middle of the buffer (exact starting location does not matter in a circular buffer):
Then assume that two more elements are added 2 & 3 which get appended after the 1:
If two elements are then removed from the buffer, the oldest values inside the buffer are removed. The two elements
removed, in this case, are 1 & 2 leaving the buffer with just a 3:
Circular buffer
A consequence of the circular buffer is that when it is full and a subsequent write is performed, then it starts
overwriting the oldest data. In this case, two more elements A & B are added and they overwrite the 3 & 4:
Alternatively, the routines that manage the buffer could prevent overwriting the data and return an error or raise an
exception. Whether or not data is overwritten is up to the semantics of the buffer routines or the application using the
circular buffer.
Finally, if two elements are now removed then what would be returned is not 3 & 4 but 5 & 6 because A & B
overwrote the 3 & the 4 yielding the buffer with:
Alternatively, a fixed-length buffer with two integers to keep track of indices can be used in languages that do not
have pointers.
Taking a couple of examples from above. (While there are numerous ways to label the pointers and exact semantics
can vary, this is one way to do it.)
This image shows a partially full buffer:
This image shows a full buffer with two elements having been overwritten:
What to note about the second one is that after each element is overwritten then the start pointer is incremented as
well.
Difficulties
Full / Empty Buffer Distinction
A small disadvantage of relying on pointers or relative indices for the start and end of the data is that, when the
buffer is entirely full, both pointers point to the same element, which is exactly the situation that also occurs when the
buffer is empty. One common solution is to always keep one slot open: if both pointers refer to the same slot, the
buffer is empty, and if the end (write) pointer refers to the slot preceding the one referred to by the start (read)
pointer, the buffer is full.
The advantage is:
The solution is simple and robust.
The disadvantages are:
One slot is lost, so it is a bad compromise when the buffer size is small or the slot is big or is implemented in
hardware.
The full test requires a modulo operation
Example implementation, 'C' language
/* Circular buffer example, keeps one slot open */
#include <stdio.h>
#include <stdlib.h>
/* Opaque buffer element type. This would be defined by the
application. */
typedef struct { int value; } ElemType;
/* Circular buffer object */
typedef struct {
    int       size;   /* maximum number of elements          */
    int       start;  /* index of oldest element             */
    int       end;    /* index at which to write new element */
    ElemType *elems;  /* vector of elements                  */
} CircularBuffer;

void cbInit(CircularBuffer *cb, int size) {
    cb->size  = size + 1; /* include empty elem */
    cb->start = 0;
    cb->end   = 0;
    cb->elems = (ElemType *)calloc(cb->size, sizeof(ElemType));
}

void cbFree(CircularBuffer *cb) {
    free(cb->elems); /* OK if null */
}
int cbIsFull(CircularBuffer *cb) {
return (cb->end + 1) % cb->size == cb->start; }
int cbIsEmpty(CircularBuffer *cb) {
return cb->end == cb->start; }
/* Write an element, overwriting oldest element if buffer is full. App
can
choose to avoid the overwrite by checking cbIsFull(). */
void cbWrite(CircularBuffer *cb, ElemType *elem) {
cb->elems[cb->end] = *elem;
cb->end = (cb->end + 1) % cb->size;
if (cb->end == cb->start)
cb->start = (cb->start + 1) % cb->size; /* full, overwrite */
}
/* Read oldest element. App must ensure !cbIsEmpty() first. */
void cbRead(CircularBuffer *cb, ElemType *elem) {
*elem = cb->elems[cb->start];
cb->start = (cb->start + 1) % cb->size;
}
int main(int argc, char **argv) {
CircularBuffer cb;
ElemType elem = {0};
int testBufferSize = 10; /* arbitrary size */
cbInit(&cb, testBufferSize);
/* Fill buffer with test elements 3 times */
for (elem.value = 0; elem.value < 3 * testBufferSize; ++ elem.value)
cbWrite(&cb, &elem);
/* Remove and print all elements */
while (!cbIsEmpty(&cb)) {
cbRead(&cb, &elem);
printf("%d\n", elem.value);
}
cbFree(&cb);
return 0;
}
Use a Fill Count
This approach replaces the end pointer with a counter that tracks the number of readable items in the buffer. This
unambiguously indicates when the buffer is empty or full and allows use of all buffer slots.
The performance impact should be negligible, since this approach adds the costs of maintaining the counter and
computing the tail slot on writes but eliminates the need to maintain the end pointer and simplifies the fullness test.
The advantage is:
The test for full/empty is simple
The disadvantages are:
You need modulo for read and write
Read and write operation must share the counter field, so it requires synchronization in multi-threaded situation.
Note: When using semaphores in a Producer-consumer model, the semaphores act as a fill count.
Differences from previous example
/* This approach replaces the CircularBuffer 'end' field with the
'count' field and changes these functions: */
void cbInit(CircularBuffer *cb, int size) {
cb->size = size;
cb->start = 0;
cb->count = 0;
cb->elems = (ElemType *)calloc(cb->size, sizeof(ElemType));
}
int cbIsFull(CircularBuffer *cb) {
return cb->count == cb->size; }
int cbIsEmpty(CircularBuffer *cb) {
return cb->count == 0; }
void cbWrite(CircularBuffer *cb, ElemType *elem) {
int end = (cb->start + cb->count) % cb->size;
cb->elems[end] = *elem;
if (cb->count == cb->size)
cb->start = (cb->start + 1) % cb->size; /* full, overwrite */
else
++ cb->count;
}
Another solution is to add one extra most-significant bit to each of the start and end pointers, toggling it each time the
pointer wraps around the end of the buffer. It is then easy to see that when the pointers (including the extra msb bit)
are equal, the buffer is empty, while if the pointers differ only by the extra msb bit, the buffer is full.
The advantages are:
The test for full/empty is simple
No modulo operation is needed
The source and sink of data can implement independent policies for dealing with a full buffer and overrun while
adhering to the rule that only the source of data modifies the write count and only the sink of data modifies the
read count. This can result in elegant and robust circular buffer implementations even in multi-threaded
environments.
The disadvantage is:
You need one more bit for read and write pointer
Differences from Always Keep One Slot Open example
/* This approach adds one bit to end and start pointers */
/* Circular buffer object */
typedef struct {
    int       size;   /* maximum number of elements          */
    int       start;  /* index of oldest element             */
    int       end;    /* index at which to write new element */
    int       s_msb;
    int       e_msb;
    ElemType *elems;  /* vector of elements                  */
} CircularBuffer;

void cbInit(CircularBuffer *cb, int size) {
    cb->size  = size;
    cb->start = 0;
    cb->end   = 0;
cb->s_msb = 0;
cb->e_msb = 0;
cb->elems = (ElemType *)calloc(cb->size, sizeof(ElemType));
}
int cbIsFull(CircularBuffer *cb) {
return cb->end == cb->start && cb->e_msb != cb->s_msb; }
int cbIsEmpty(CircularBuffer *cb) {
return cb->end == cb->start && cb->e_msb == cb->s_msb; }
void cbIncr(CircularBuffer *cb, int *p, int *msb) {
*p = *p + 1;
if (*p == cb->size) {
*msb ^= 1;
*p = 0;
}
}
void cbWrite(CircularBuffer *cb, ElemType *elem) {
cb->elems[cb->end] = *elem;
if (cbIsFull(cb)) /* full, overwrite moves start pointer */
cbIncr(cb, &cb->start, &cb->s_msb);
cbIncr(cb, &cb->end, &cb->e_msb);
}
void cbRead(CircularBuffer *cb, ElemType *elem) {
*elem = cb->elems[cb->start];
cbIncr(cb, &cb->start, &cb->s_msb);
}
If the size is a power of two, the implementation is simpler and the separate msb variables are no longer necessary,
removing the disadvantage:
Differences from Always Keep One Slot Open example
/* This approach adds one bit to end and start pointers */
/* Circular buffer object */
typedef struct {
    int       size;   /* maximum number of elements          */
    int       start;  /* index of oldest element             */
    int       end;    /* index at which to write new element */
    ElemType *elems;  /* vector of elements                  */
} CircularBuffer;

void cbInit(CircularBuffer *cb, int size) {
    cb->size  = size;
    cb->start = 0;
    cb->end   = 0;
    cb->elems = (ElemType *)calloc(cb->size, sizeof(ElemType));
}
void cbPrint(CircularBuffer *cb) {
printf("size=0x%x, start=%d, end=%d\n", cb->size, cb->start, cb->end);
}
int cbIsFull(CircularBuffer *cb) {
return cb->end == (cb->start ^ cb->size); /* This inverts the most
significant bit of start before comparison */ }
int cbIsEmpty(CircularBuffer *cb) {
return cb->end == cb->start; }
int cbIncr(CircularBuffer *cb, int p) {
return (p + 1)&(2*cb->size-1); /* start and end pointers
incrementation is done modulo 2*size */
}
void cbWrite(CircularBuffer *cb, ElemType *elem) {
cb->elems[cb->end&(cb->size-1)] = *elem;
if (cbIsFull(cb)) /* full, overwrite moves start pointer */
cb->start = cbIncr(cb, cb->start);
cb->end = cbIncr(cb, cb->end);
}
void cbRead(CircularBuffer *cb, ElemType *elem) {
*elem = cb->elems[cb->start&(cb->size-1)];
cb->start = cbIncr(cb, cb->start);
}
Read / Write Counts
Another solution is to keep counts of the number of items written to and read from the circular buffer. Both counts
are stored in signed integer variables with numerical limits larger than the number of items that can be stored and are
allowed to wrap freely.
The unsigned difference (write_count - read_count) always yields the number of items placed in the buffer and not
yet retrieved. This can indicate that the buffer is empty, partially full, completely full (without waste of a storage
location) or in a state of overrun.
The advantage is:
The source and sink of data can implement independent policies for dealing with a full buffer and overrun while
adhering to the rule that only the source of data modifies the write count and only the sink of data modifies the
read count. This can result in elegant and robust circular buffer implementations even in multi-threaded
environments.
The disadvantage is:
You need two additional variables.
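A minimal sketch of this read/write-count scheme (illustrative, not one of the article's own examples; it reuses the ElemType of the earlier code). Unsigned counters are used so that wraparound is well defined in C, and the sketch assumes either that the counters never wrap or that the buffer size is a power of two, as discussed in the next section:

typedef struct {
    int       size;         /* maximum number of elements  */
    unsigned  write_count;  /* total elements ever written; touched only by the writer */
    unsigned  read_count;   /* total elements ever read; touched only by the reader    */
    ElemType *elems;        /* vector of elements          */
} CountedBuffer;

int countedCount(CountedBuffer *cb)   { return (int)(cb->write_count - cb->read_count); }
int countedIsEmpty(CountedBuffer *cb) { return countedCount(cb) == 0; }
int countedIsFull(CountedBuffer *cb)  { return countedCount(cb) == cb->size; }

/* Writer side: the caller should check countedIsFull() first to avoid overrun. */
void countedWrite(CountedBuffer *cb, ElemType *elem)
{
    cb->elems[cb->write_count % cb->size] = *elem;
    cb->write_count++;
}

/* Reader side: the caller should check countedIsEmpty() first. */
void countedRead(CountedBuffer *cb, ElemType *elem)
{
    *elem = cb->elems[cb->read_count % cb->size];
    cb->read_count++;
}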
Absolute indices
It is possible to optimize the previous solution by using indices instead of pointers: indices can store read/write
counts instead of the offset from start of the buffer, the separate variables in the above solution are removed and
relative indices are obtained on the fly by division modulo the buffer's length.
The advantage is:
No extra variables are needed.
The disadvantages are:
Every access needs an additional modulo operation.
If counter wrap is possible, complex logic can be needed if the buffer's length is not a divisor of the counter's
capacity.
On binary computers, both of these disadvantages disappear if the buffer's length is a power of two, at the cost of a
constraint on the possible buffer lengths.
Record last operation
Another solution is to keep a flag indicating whether the most recent operation was a read or a write. If the two
pointers are equal, then the flag will show whether the buffer is full or empty: if the most recent operation was a
write, the buffer must be full, and conversely if it was a read, it must be empty.
The advantages are:
Only a single bit needs to be stored (which may be particularly useful if the algorithm is implemented in
hardware)
The test for full/empty is simple
The disadvantage is:
You need an extra variable
Read and write operations must share the flag, so it probably requires synchronization in a multi-threaded situation.
Chunked Buffer
A more complex variant keeps different chunks of data in the same circular buffer. The writer not only writes
elements to the buffer, it also assigns these elements to chunks [citation needed].
The reader should not only be able to read from the buffer, it should also get informed about the chunk borders.
Example: The writer is reading data from small files, writing them into the same circular buffer. The reader is
reading the data, but needs to know when and which file is starting at a given position.
Optimization
A circular-buffer implementation may be optimized by mapping the underlying buffer to two contiguous regions of
virtual memory. (Naturally, the underlying buffer's length must then equal some multiple of the system's page size.)
Reading from and writing to the circular buffer may then be carried out with greater efficiency by means of direct
memory access; those accesses which fall beyond the end of the first virtual-memory region will automatically wrap
around to the beginning of the underlying buffer. When the read offset is advanced into the second virtual-memory
region, both offsets (read and write) are decremented by the length of the underlying buffer.[1]
address =
mmap (buffer->address, buffer->count_bytes, PROT_READ | PROT_WRITE,
MAP_FIXED | MAP_SHARED, file_descriptor, 0);
if (address != buffer->address)
report_exceptional_condition ();
address = mmap (buffer->address + buffer->count_bytes,
buffer->count_bytes, PROT_READ | PROT_WRITE,
MAP_FIXED | MAP_SHARED, file_descriptor, 0);
if (address != buffer->address + buffer->count_bytes)
report_exceptional_condition ();
status = close (file_descriptor);
if (status)
report_exceptional_condition ();
}
void
ring_buffer_free (struct ring_buffer *buffer)
{
int status;
status = munmap (buffer->address, buffer->count_bytes << 1);
if (status)
report_exceptional_condition ();
}
void *
ring_buffer_write_address (struct ring_buffer *buffer)
{
/*** void pointer arithmetic is a constraint violation. ***/
return buffer->address + buffer->write_offset_bytes;
}
void
ring_buffer_write_advance (struct ring_buffer *buffer,
unsigned long count_bytes)
{
buffer->write_offset_bytes += count_bytes;
}
void *
ring_buffer_read_address (struct ring_buffer *buffer)
{
return buffer->address + buffer->read_offset_bytes;
}
void
ring_buffer_read_advance (struct ring_buffer *buffer,
unsigned long count_bytes)
{
buffer->read_offset_bytes += count_bytes;
if (buffer->read_offset_bytes >= buffer->count_bytes)
{
buffer->read_offset_bytes -= buffer->count_bytes;
buffer->write_offset_bytes -= buffer->count_bytes;
}
}
unsigned long
ring_buffer_count_bytes (struct ring_buffer *buffer)
{
return buffer->write_offset_bytes - buffer->read_offset_bytes;
}
unsigned long
ring_buffer_count_free_bytes (struct ring_buffer *buffer)
{
return buffer->count_bytes - ring_buffer_count_bytes (buffer);
}
void
ring_buffer_clear (struct ring_buffer *buffer)
{
buffer->write_offset_bytes = 0;
buffer->read_offset_bytes = 0;
}
/*Note, that initial anonymous mmap() can be avoided - after initial
mmap() for descriptor fd,
you can try mmap() with hinted address as (buffer->address + buffer->count_bytes) and if it fails another one with
hinted address as (buffer->address - buffer->count_bytes).
Make sure MAP_FIXED is not used in such case, as under certain
situations it could end with segfault.
The advantage of such approach is, that it avoids requirement to map
twice the amount you need initially
(especially useful e.g. if you want to use hugetlbfs and the allowed
amount is limited)
and in context of gcc/glibc - you can avoid certain feature macros
(MAP_ANONYMOUS usually requires one of: _BSD_SOURCE, _SVID_SOURCE or
_GNU_SOURCE).*/
Variants
Perhaps the most common version of the circular buffer uses 8-bit bytes as elements.
Some implementations of the circular buffer use fixed-length elements that are bigger than 8-bit bytes -- 16-bit
integers for audio buffers, 53-byte ATM cells for telecom buffers, etc. Each item is contiguous and has the correct
data alignment, so software reading and writing these values can be faster than software that handles non-contiguous
and non-aligned values.
Ping-pong buffering can be considered a very specialized circular buffer with exactly two large fixed-length
elements.
The Bip Buffer is very similar to a circular buffer, except it always returns contiguous blocks (which can be variable
length).
External links
[1] Simon Cooke. "The Bip Buffer - The Circular Buffer with a Twist" (http://www.codeproject.com/Articles/3479/The-Bip-Buffer-The-Circular-Buffer-with-a-Twist). 2003.
Dictionaries
Associative array
In computer science, an associative array, map, symbol table, or dictionary is an abstract data type composed of a
collection of (key, value) pairs, such that each possible key appears at most once in the collection.
Operations associated with this data type allow: the addition of a pair to the collection; the removal of a pair from the
collection; the modification of an existing pair; and the lookup of a value associated with a particular key.
The dictionary problem is the task of designing a data structure that implements an associative array. A standard
solution to the dictionary problem is a hash table; in some cases it is also possible to solve the problem using directly
addressed arrays, binary search trees, or other more specialized structures.
Many programming languages include associative arrays as primitive data types, and they are available in software
libraries for many others. Content-addressable memory is a form of direct hardware-level support for associative
arrays.
Associative arrays have many applications including such fundamental programming patterns as memoization and
the decorator pattern.[1]
Operations
In an associative array, the association between a key and a value is often known as a "binding", and the same word
"binding" may also be used to refer to the process of creating a new association.
The operations that are usually defined for an associative array are:
Add or insert: add a new (key, value) pair to the collection, binding the new key to its new value. The arguments to
this operation are the key and the value.
Reassign: replace the value in one of the (key, value) pairs that are already in the collection, binding an old
key to a new value. As with an insertion, the arguments to this operation are the key and the value.
Remove or delete: remove a (key, value) pair from the collection, unbinding a given key from its value. The
argument to this operation is the key.
Lookup: find the value (if any) that is bound to a given key. The argument to this operation is the key, and the
value is returned from the operation. If no value is found, some associative array implementations raise an
exception.
In addition, associative arrays may also include other operations such as determining the number of bindings or
constructing an iterator to loop over all the bindings. Usually, for such an operation, the order in which the bindings
are returned may be arbitrary.
A multimap generalizes an associative array by allowing multiple values to be associated with a single key.[2] A
bidirectional map is a related abstract data type in which the bindings operate in both directions: each value must be
associated with a unique key, and a second lookup operation takes a value as argument and looks up the key
associated with that value.
Example
Suppose that the set of loans made by a library is to be represented in a data structure. Each book in a library may be
checked out only by a single library patron at a time. However, a single patron may be able to check out multiple
books. Therefore, the information about which books are checked out to which patrons may be represented by an
associative array, in which the books are the keys and the patrons are the values. For instance (using notation from
Python, or JSON (JavaScript Object Notation), in which a binding is represented by placing a colon between the key
and the value), the current checkouts may be represented by an associative array
{
"Great Expectations": "John",
"Pride and Prejudice": "Alice",
"Wuthering Heights": "Alice"
}
A lookup operation with the key "Great Expectations" in this array would return the name of the person who checked
out that book, John. If John returns his book, that would cause a deletion operation in the associative array, and if Pat
checks out another book, that would cause an insertion operation, leading to a different state:
{
"Pride and Prejudice": "Alice",
"The Brothers Karamazov": "Pat",
"Wuthering Heights": "Alice"
}
In this new state, the same lookup as before, with the key "Great Expectations", would raise an exception, because
this key is no longer present in the array.
Implementation
For dictionaries with very small numbers of bindings, it may make sense to implement the dictionary using an
association list, a linked list of bindings. With this implementation, the time to perform the basic dictionary
operations is linear in the total number of bindings; however, it is easy to implement and the constant factors in its
running time are small.
Another very simple implementation technique, usable when the keys are restricted to a narrow range of integers, is
direct addressing into an array: the value for a given key k is stored at the array cell A[k], or if there is no binding for
k then the cell stores a special sentinel value that indicates the absence of a binding. As well as being simple, this
technique is fast: each dictionary operation takes constant time. However, the space requirement for this structure is
the size of the entire keyspace, making it impractical unless the keyspace is small.
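A minimal C sketch of direct addressing (illustrative; the key range, sentinel and names are assumptions chosen for the example, not taken from the text):

#include <stddef.h>

#define KEYSPACE 256                    /* keys are restricted to 0 .. KEYSPACE-1 */

static const char *table[KEYSPACE];     /* one cell per possible key; NULL means "no binding" */

/* Each operation is a single array access, hence constant time. */
void da_insert(int key, const char *value) { table[key] = value; }
void da_remove(int key)                    { table[key] = NULL;  }
const char *da_lookup(int key)             { return table[key];  }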
The most frequently used general purpose implementation of an associative array is with a hash table: an array of
bindings, together with a hash function that maps each possible key into an array index. The basic idea of a hash
table is that the binding for a given key is stored at the position given by applying the hash function to that key, and
that lookup operations are performed by looking at that cell of the array and using the binding found there. However,
hash table based dictionaries must be prepared to handle collisions that occur when two keys are mapped by the hash
function to the same index, and many different collision resolution strategies have been developed for dealing with
this situation, often based either on open addressing (looking at a sequence of hash table indices instead of a single
index, until finding either the given key or an empty cell) or on hash chaining (storing a small association list instead
of a single binding in each hash table cell).
Dictionaries may also be stored in binary search trees or in data structures specialized to a particular type of keys
such as radix trees, tries, Judy arrays, or van Emde Boas trees, but these implementation methods are less efficient
than hash tables as well as placing greater restrictions on the types of data that they can handle. The advantages of
these alternative structures come from their ability to handle operations beyond the basic ones of an associative
array, such as finding the binding whose key is the closest to a queried key, when the query is not itself present in the
set of bindings.
Language support
Associative arrays can be implemented in any programming language as a package and many language systems
provide them as part of their standard library. In some languages, they are not only built into the standard system, but
have special syntax, often using array-like subscripting.
Built-in syntactic support for associative arrays was introduced by SNOBOL4, under the name "table". MUMPS
made multi-dimensional associative arrays, optionally persistent, its key data structure. SETL supported them as one
possible implementation of sets and maps. Most modern scripting languages, starting with AWK and including Perl,
Tcl, JavaScript, Python, Ruby, and Lua, support associative arrays as a primary container type. In many more
languages, they are available as library functions without special syntax.
In Smalltalk, Objective-C, .NET, Python, and REALbasic they are called dictionaries; in Perl and Ruby they are
called hashes; in C++, Java, Go, Clojure, Scala, OCaml, Haskell they are called maps (see map (C++),
unordered_map (C++), and Map [3]); in Common Lisp and Windows PowerShell, they are called hash tables (since
both typically use this implementation). In PHP, all arrays can be associative, except that the keys are limited to
integers and strings. In JavaScript (see also JSON), all objects behave as associative arrays. In Lua, they are called
tables, and are used as the primitive building block for all data structures. In Visual FoxPro, they are called
Collections.
References
[1] , pp. 597–599.
[2] , pp. 389–397.
[3] http://download.oracle.com/javase/7/docs/api/java/util/Map.html
External links
NIST's Dictionary of Algorithms and Data Structures: Associative Array (http://www.nist.gov/dads/HTML/
assocarray.html)
Association list
Association list
Type: associative array

Time complexity in big O notation:
            Average    Worst case
Space       O(n)       O(n)
Search      O(n)       O(n)
Insert      O(1)       O(1)
Delete      O(n)       O(n)
In computer programming and particularly in Lisp, an association list, often referred to as an alist, is a linked list in
which each list element (or node) comprises a key and a value. The association list is said to associate the value with
the key. In order to find the value associated with a given key, each element of the list is searched in turn, starting at
the head, until the key is found. Duplicate keys that appear later in the list are ignored. It is a simple way of
implementing an associative array.
The disadvantage of association lists is that the time to search is O(n), where n is the length of the list. Unless the
list is regularly pruned to remove elements with duplicate keys, the multiple values associated with the same key will
increase the size of the list, and thus the time to search, without providing any compensatory advantage. One
advantage is that a new element can be added to the list at its head, which can be done in constant time. For quite
small values of n it is more efficient in terms of time and space than more sophisticated strategies such as hash tables
and trees.
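A minimal C sketch of an association list (illustrative, not from the article): insertion pushes a new binding onto the head in constant time, and lookup walks the list from the head, so an earlier binding for a key shadows any later duplicates.

#include <stdlib.h>
#include <string.h>

typedef struct Node {
    const char  *key;
    int          value;
    struct Node *next;
} Node;

/* Insert by pushing a new binding onto the head: O(1). */
Node *alist_insert(Node *head, const char *key, int value)
{
    Node *node = malloc(sizeof *node);
    if (node == NULL)
        return head;          /* out of memory: leave the list unchanged */
    node->key   = key;
    node->value = value;
    node->next  = head;
    return node;              /* the new node is the new head */
}

/* Search from the head: O(n); later duplicates of a key are never reached. */
int alist_lookup(const Node *head, const char *key, int *value)
{
    for (const Node *p = head; p != NULL; p = p->next) {
        if (strcmp(p->key, key) == 0) {
            *value = p->value;
            return 1;         /* found */
        }
    }
    return 0;                 /* no binding for this key */
}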
In the early development of Lisp, association lists were used to resolve references to free variables in procedures.
Many programming languages, including Lisp, Scheme, OCaml, and Haskell have functions for handling association
lists in their standard library.
Hash table
Hash table
Type: unordered associative array
Invented: 1953

Time complexity in big O notation:
            Average    Worst case
Space       O(n)       O(n)
Search      O(1)       O(n)
Insert      O(1)       O(n)
Delete      O(1)       O(n)
Hashing
The idea of hashing is to distribute the entries (key/value pairs) across an array of buckets. Given a key, the
algorithm computes an index that suggests where the entry can be found:
index = f(key, array_size)
Often this is done in two steps:
hash = hashfunc(key)
index = hash % array_size
In this method, the hash is independent of the array size, and it is then reduced to an index (a number between 0 and array_size − 1) using the modulo operator (%).
In the case that the array size is a power of two, the remainder operation is reduced to masking, which improves
speed, but can increase problems with a poor hash function.
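Continuing the pseudocode above, the power-of-two reduction can be written as a bit mask (an illustrative line, not taken from any particular implementation):
index = hash & (array_size - 1)   # same result as hash % array_size when array_size is a power of two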
Key statistics
A critical statistic for a hash table is called the load factor. This is simply the number of entries divided by the
number of buckets, that is, n/k where n is the number of entries and k is the number of buckets.
If the load factor is kept reasonable, the hash table should perform well, provided the hashing is good. If the load
factor grows too large, the hash table will become slow, or it may fail to work (depending on the method used). The
expected constant time property of a hash table assumes that the load factor is kept below some bound. For a fixed
number of buckets, the time for a lookup grows with the number of entries and so does not achieve the desired
constant time.
In addition to the load factor, one can examine the variance of the number of entries per bucket. For example, suppose two tables both have 1000 entries and 1000 buckets; one has exactly one entry in each bucket, the other has all entries in the same bucket. Clearly the hashing is not working in the second one.
A low load factor is not especially beneficial. As load factor approaches 0, the proportion of unused areas in the hash
table increases, but there is not necessarily any reduction in search cost. This results in wasted memory.
Collision resolution
Hash collisions are practically unavoidable when hashing a random subset of a large set of possible keys. For
example, if 2,500 keys are hashed into a million buckets, even with a perfectly uniform random distribution,
according to the birthday problem there is a 95% chance of at least two of the keys being hashed to the same slot.
Therefore, most hash table implementations have some collision resolution strategy to handle such events. Some
common strategies are described below. All these methods require that the keys (or pointers to them) be stored in the
table, together with the associated values.
Separate chaining
In the method known as separate chaining, each bucket is independent, and has some sort of list of entries with the same index. The time for hash table operations is the time to find the bucket (which is constant) plus the time for the list operation. (The technique is also called open hashing or closed addressing.)
In a good hash table, each bucket has zero or one entries, and sometimes two or three, but rarely more than that. Therefore, structures that are efficient in time and space for these cases are preferred. Structures that are efficient for a fairly large number of entries are not needed or desirable. If these cases happen often, the hashing is not working well, and this needs to be fixed.
Hash collision resolved by separate chaining.
Hash collision by separate chaining with head records in the bucket array.
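A short Python sketch of separate chaining, with one list of (key, value) pairs per bucket, may make the scheme concrete. It is illustrative only; the class name, bucket count and methods are assumptions, not a reference implementation.

# Separate chaining sketch: each bucket holds a list of (key, value) pairs.

class ChainedHashTable:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # otherwise append to the chain

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

t = ChainedHashTable()
t.set("John Smith", "521-1234")
t.set("Sandra Dee", "521-9655")
print(t.get("John Smith"))               # 521-1234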
Open addressing
In another strategy, called open addressing, all entry records are stored in the bucket array itself. When a new entry has to be inserted, the buckets are examined, starting with the hashed-to slot and proceeding in some probe sequence, until an unoccupied slot is found. When searching for an entry, the buckets are scanned in the same sequence, until either the target record is found, or an unused array slot is found, which indicates that there is no such key in the table. The name "open addressing" refers to the fact that the location ("address") of the item is not determined by its hash value. (This method is also called closed hashing; it should not be confused with "open hashing" or "closed addressing", which usually mean separate chaining.)
Hash collision resolved by open addressing with linear probing (interval = 1). Note that "Ted Baker" has a unique hash, but nevertheless collided with "Sandra Dee", that had previously collided with "John Smith".
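A sketch of the probe loop described above, using linear probing with interval 1, is shown below. The fixed table size and helper names are illustrative assumptions; a real table would also track occupancy and resize.

# Open addressing sketch: keys and values live directly in the bucket array.
# None marks an unoccupied slot.

SIZE = 16
slots = [None] * SIZE                     # each slot is None or a (key, value) pair

def probe_sequence(key):
    start = hash(key) % SIZE
    for i in range(SIZE):                 # linear probing: interval = 1
        yield (start + i) % SIZE

def insert(key, value):
    for idx in probe_sequence(key):
        if slots[idx] is None or slots[idx][0] == key:
            slots[idx] = (key, value)
            return
    raise RuntimeError("table is full")

def lookup(key):
    for idx in probe_sequence(key):
        if slots[idx] is None:            # an empty slot ends the search: key absent
            return None
        if slots[idx][0] == key:
            return slots[idx][1]
    return None

insert("John Smith", "521-1234")
insert("Sandra Dee", "521-9655")
print(lookup("Sandra Dee"))               # 521-9655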
Double hashing, in which the interval between probes is computed by another hash function
A drawback of all these open addressing schemes is that the number of stored entries cannot exceed the number of
slots in the bucket array. In fact, even with good hash functions, their performance dramatically degrades when the
load factor grows beyond 0.7 or so. Thus a more aggressive resize scheme is needed. Separate chaining works correctly with any load factor, although performance is likely to be reasonable if it is kept below 2 or so. For many
applications, these restrictions mandate the use of dynamic resizing, with its attendant costs.
Open addressing schemes also put more stringent requirements on the hash function: besides distributing the keys
more uniformly over the buckets, the function must also minimize the clustering of hash values that are consecutive
in the probe order. Using separate chaining, the only concern is that too many objects map to the same hash value;
whether they are adjacent or nearby is completely irrelevant.
Open addressing only saves memory if the entries are small (less than four times the size of a pointer) and the load
factor is not too small. If the load factor is close to zero (that is, there are far more buckets than stored entries), open
addressing is wasteful even if each entry is just two words.
Open addressing avoids the time overhead of allocating each new entry record, and can be implemented even in the absence of a memory allocator. It also avoids the extra indirection required to access the first entry of each bucket (that is, usually the only one). It also has better locality of reference, particularly with linear probing. With small record sizes, these factors can yield better performance than chaining, particularly for lookups. Hash tables with open addressing are also easier to serialize, because they do not use pointers.
This graph compares the average number of cache misses required to look up elements in tables with chaining and linear probing. As the table passes the 80%-full mark, linear probing's performance drastically degrades.
Coalesced hashing
A hybrid of chaining and open addressing, coalesced hashing links together chains of nodes within the table itself.
Like open addressing, it achieves space usage and (somewhat diminished) cache advantages over chaining. Like
chaining, it does not exhibit clustering effects; in fact, the table can be efficiently filled to a high density. Unlike
chaining, it cannot have more elements than table slots.
Cuckoo hashing
Another alternative open-addressing solution is cuckoo hashing, which ensures constant lookup time in the worst
case, and constant amortized time for insertions and deletions. It uses two or more hash functions, which means any
key/value pair could be in two or more locations. For lookup, the first hash function is used; if the key/value is not
found, then the second hash function is used, and so on. If a collision happens during insertion, then the key is
re-hashed with the second hash function to map it to another bucket. If all hash functions are used and there is still a
collision, then the key it collided with is removed to make space for the new key, and the old key is re-hashed with
one of the other hash functions, which maps it to another bucket. If that location also results in a collision, then the
process repeats until there is no collision or the process traverses all the buckets, at which point the table is resized.
By combining multiple hash functions with multiple cells per bucket, very high space utilisation can be achieved.
2-choice hashing
2-choice hashing employs 2 different hash functions, h1(x) and h2(x), for the hash table. Both hash functions are used
to compute two table locations. When an object is inserted in the table, then it is placed in the table location that
contains fewer objects (with the default being the h1(x) table location if there is equality in bucket size). 2-choice
hashing employs the principle of the power of two choices.
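A minimal sketch of the 2-choice insertion rule, assuming chained buckets and two independently seeded hash functions (all names here are illustrative):

# 2-choice hashing sketch: insert into whichever candidate bucket holds fewer
# items, defaulting to the h1 bucket on a tie.

NUM_BUCKETS = 16
buckets = [[] for _ in range(NUM_BUCKETS)]

def h1(key):
    return hash(key) % NUM_BUCKETS

def h2(key):
    return hash((key, "second")) % NUM_BUCKETS    # a second, differently seeded hash

def insert(key, value):
    a, b = h1(key), h2(key)
    target = a if len(buckets[a]) <= len(buckets[b]) else b
    buckets[target].append((key, value))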
Hopscotch hashing
Another alternative open-addressing solution is hopscotch hashing, which combines the approaches of cuckoo
hashing and linear probing, yet seems in general to avoid their limitations. In particular it works well even when the
load factor grows beyond 0.9. The algorithm is well suited for implementing a resizable concurrent hash table.
The hopscotch hashing algorithm works by defining a neighborhood of buckets near the original hashed bucket,
where a given entry is always found. Thus, search is limited to the number of entries in this neighborhood, which is
logarithmic in the worst case, constant on average, and with proper alignment of the neighborhood typically requires
one cache miss. When inserting an entry, one first attempts to add it to a bucket in the neighborhood. However, if all
buckets in this neighborhood are occupied, the algorithm traverses buckets in sequence until an open slot (an
unoccupied bucket) is found (as in linear probing). At that point, since the empty bucket is outside the neighborhood,
items are repeatedly displaced in a sequence of hops. (This is similar to cuckoo hashing, but with the difference that
in this case the empty slot is being moved into the neighborhood, instead of items being moved out with the hope of
eventually finding an empty slot.) Each hop brings the open slot closer to the original neighborhood, without
invalidating the neighborhood property of any of the buckets along the way. In the end, the open slot has been
moved into the neighborhood, and the entry being inserted can be added to it.
Dynamic resizing
To keep the load factor under a certain limit, e.g. under 3/4, many table implementations expand the table when
items are inserted. For example, in Java's HashMap class the default load factor threshold for table expansion is
0.75.
Since buckets are usually implemented on top of a dynamic array and any constant proportion for resizing greater
than 1 will keep the load factor under the desired limit, the exact choice of the constant is determined by the same
space-time tradeoff as for dynamic arrays.
Resizing is accompanied by a full or incremental table rehash whereby existing items are mapped to new bucket
locations.
To limit the proportion of memory wasted due to empty buckets, some implementations also shrink the size of the table, followed by a rehash, when items are deleted. From the point of view of space-time tradeoffs, this operation is similar to the deallocation in dynamic arrays.
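A minimal sketch of the expand-and-rehash step for a separate-chaining table; the 0.75 threshold follows the example above, while the doubling factor and names are illustrative assumptions:

# All-at-once resizing sketch: when the load factor passes a threshold,
# allocate a larger bucket array and re-insert every entry.

MAX_LOAD = 0.75

def maybe_grow(buckets, num_entries):
    if num_entries / len(buckets) <= MAX_LOAD:
        return buckets
    new_buckets = [[] for _ in range(2 * len(buckets))]    # geometric growth
    for chain in buckets:                                  # full rehash
        for key, value in chain:
            new_buckets[hash(key) % len(new_buckets)].append((key, value))
    return new_buckets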
Incremental resizing
Some hash table implementations, notably in real-time systems, cannot pay the price of enlarging the hash table all at
once, because it may interrupt time-critical operations. If one cannot avoid dynamic resizing, a solution is to perform
the resizing gradually:
During the resize, allocate the new hash table, but keep the old table unchanged.
In each lookup or delete operation, check both tables.
Perform insertion operations only in the new table.
At each insertion also move r elements from the old table to the new table.
When all elements are removed from the old table, deallocate it.
To ensure that the old table is completely copied over before the new table itself needs to be enlarged, it is necessary
to increase the size of the table by a factor of at least (r + 1)/r during resizing.
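A compact sketch of this gradual migration, with r = 1 and fixed table sizes chosen purely for illustration (a complete implementation would also deallocate the old table once it is empty and start a new migration when the new table fills up):

# Incremental resizing sketch: keep old and new tables side by side and move
# up to r entries from the old table on every insertion into the new one.

class IncrementalTable:
    def __init__(self, r=1):
        self.old = [[] for _ in range(8)]
        self.new = [[] for _ in range(16)]
        self.r = r

    def _chain(self, buckets, key):
        return buckets[hash(key) % len(buckets)]

    def get(self, key):
        for chain in (self._chain(self.new, key), self._chain(self.old, key)):
            for k, v in chain:                   # during migration, check both tables
                if k == key:
                    return v
        return None

    def set(self, key, value):
        self._chain(self.new, key).append((key, value))   # insert only into the new table
        moved = 0
        for chain in self.old:                             # move up to r old entries
            while chain and moved < self.r:
                k, v = chain.pop()
                self._chain(self.new, k).append((k, v))
                moved += 1
            if moved >= self.r:
                break

t = IncrementalTable()
t.set("a", 1); t.set("b", 2)
print(t.get("a"), t.get("b"))    # 1 2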
Monotonic keys
If it is known that key values will always increase (or decrease) monotonically, then a variation of consistent hashing
can be achieved by keeping a list of the single most recent key value at each hash table resize operation. Upon
lookup, keys that fall in the ranges defined by these list entries are directed to the appropriate hash functionand
indeed hash tableboth of which can be different for each range. Since it is common to grow the overall number of
entries by doubling, there will only be O(lg(N)) ranges to check, and binary search time for the redirection would be
O(lg(lg(N))). As with consistent hashing, this approach guarantees that any key's hash, once issued, will never
change, even when the hash table is later grown.
Other solutions
Linear hashing is a hash table algorithm that permits incremental hash table expansion. It is implemented using a
single hash table, but with two possible look-up functions.
Another way to decrease the cost of table resizing is to choose a hash function in such a way that the hashes of most
values do not change when the table is resized. This approach, called consistent hashing, is prevalent in disk-based
and distributed hashes, where rehashing is prohibitively costly.
Performance analysis
In the simplest model, the hash function is completely unspecified and the table does not resize. For the best possible
choice of hash function, a table of size k with open addressing has no collisions and holds up to k elements, with a
single comparison for successful lookup, and a table of size k with chaining and n keys has the minimum max(0, n-k)
collisions and O(1 + n/k) comparisons for lookup. For the worst choice of hash function, every insertion causes a collision, and hash tables degenerate to linear search, with Θ(n) amortized comparisons per insertion and up to n comparisons for a successful lookup.
Adding rehashing to this model is straightforward. As in a dynamic array, geometric resizing by a factor of b implies that only n/b^i keys are inserted i or more times, so that the total number of insertions is bounded above by b·n/(b − 1),
which is O(n). By using rehashing to maintain n < k, tables using both chaining and open addressing can have
unlimited elements and perform successful lookup in a single comparison for the best choice of hash function.
In more realistic models, the hash function is a random variable over a probability distribution of hash functions, and performance is computed on average over the choice of hash function. When this distribution is uniform, the assumption is called "simple uniform hashing" and it can be shown that hashing with chaining requires Θ(1 + n/k) comparisons on average for an unsuccessful lookup, and hashing with open addressing requires Θ(1/(1 − n/k)).[4] Both these bounds are constant, if we maintain n/k < c using table resizing, where c is a fixed constant less than 1.
Features
Advantages
The main advantage of hash tables over other table data structures is speed. This advantage is more apparent when
the number of entries is large. Hash tables are particularly efficient when the maximum number of entries can be
predicted in advance, so that the bucket array can be allocated once with the optimum size and never resized.
If the set of key-value pairs is fixed and known ahead of time (so insertions and deletions are not allowed), one may
reduce the average lookup cost by a careful choice of the hash function, bucket table size, and internal data
structures. In particular, one may be able to devise a hash function that is collision-free, or even perfect (see below).
In this case the keys need not be stored in the table.
Drawbacks
Although operations on a hash table take constant time on average, the cost of a good hash function can be
significantly higher than the inner loop of the lookup algorithm for a sequential list or search tree. Thus hash tables
are not effective when the number of entries is very small. (However, in some cases the high cost of computing the
hash function can be mitigated by saving the hash value together with the key.)
For certain string processing applications, such as spell-checking, hash tables may be less efficient than tries, finite
automata, or Judy arrays. Also, if each key is represented by a small enough number of bits, then, instead of a hash
table, one may use the key directly as the index into an array of values. Note that there are no collisions in this case.
The entries stored in a hash table can be enumerated efficiently (at constant cost per entry), but only in some
pseudo-random order. Therefore, there is no efficient way to locate an entry whose key is nearest to a given key.
Listing all n entries in some specific order generally requires a separate sorting step, whose cost is proportional to
log(n) per entry. In comparison, ordered search trees have lookup and insertion cost proportional to log(n), but allow
finding the nearest key at about the same cost, and ordered enumeration of all entries at constant cost per entry.
If the keys are not stored (because the hash function is collision-free), there may be no easy way to enumerate the
keys that are present in the table at any given moment.
Although the average cost per operation is constant and fairly small, the cost of a single operation may be quite high.
In particular, if the hash table uses dynamic resizing, an insertion or deletion operation may occasionally take time
proportional to the number of entries. This may be a serious drawback in real-time or interactive applications.
Hash tables in general exhibit poor locality of reference; that is, the data to be accessed is distributed seemingly at
random in memory. Because hash tables cause access patterns that jump around, this can trigger microprocessor
cache misses that cause long delays. Compact data structures such as arrays searched with linear search may be
faster, if the table is relatively small and keys are integers or other short strings. According to Moore's Law, cache
sizes are growing exponentially and so what is considered "small" may be increasing. The optimal performance point
varies from system to system.
Hash tables become quite inefficient when there are many collisions. While extremely uneven hash distributions are
extremely unlikely to arise by chance, a malicious adversary with knowledge of the hash function may be able to
supply information to a hash that creates worst-case behavior by causing excessive collisions, resulting in very poor
performance, e.g. a denial of service attack.[5] In critical applications, universal hashing can be used; a data structure
with better worst-case guarantees may be preferable.[6]
Uses
Associative arrays
Hash tables are commonly used to implement many types of in-memory tables. They are used to implement
associative arrays (arrays whose indices are arbitrary strings or other complicated objects), especially in interpreted
programming languages like AWK, Perl, and PHP.
When storing a new item into a multimap and a hash collision occurs, the multimap unconditionally stores both
items.
When storing a new item into a typical associative array and a hash collision occurs, but the actual keys themselves
are different, the associative array likewise stores both items. However, if the key of the new item exactly matches
the key of an old item, the associative array typically erases the old item and overwrites it with the new item, so
every item in the table has a unique key.
Database indexing
Hash tables may also be used as disk-based data structures and database indices (such as in dbm) although B-trees
are more popular in these applications.
Caches
Hash tables can be used to implement caches, auxiliary data tables that are used to speed up the access to data that is
primarily stored in slower media. In this application, hash collisions can be handled by discarding one of the two
colliding entries, usually erasing the old item that is currently stored in the table and overwriting it with the new
item, so every item in the table has a unique hash value.
Sets
Besides recovering the entry that has a given key, many hash table implementations can also tell whether such an
entry exists or not.
Those structures can therefore be used to implement a set data structure, which merely records whether a given key
belongs to a specified set of keys. In this case, the structure can be simplified by eliminating all parts that have to do
with the entry values. Hashing can be used to implement both static and dynamic sets.
Object representation
Several dynamic languages, such as Perl, Python, JavaScript, and Ruby, use hash tables to implement objects. In this
representation, the keys are the names of the members and methods of the object, and the values are pointers to the
corresponding member or method.
Implementations
In programming languages
Many programming languages provide hash table functionality, either as built-in associative arrays or as standard
library modules. In C++11, for example, the unordered_map class provides hash tables for keys and values of
arbitrary type.
In PHP 5, the Zend 2 engine uses one of the hash functions from Daniel J. Bernstein to generate the hash values used
in managing the mappings of data pointers stored in a hash table. In the PHP source code, it is labelled as DJBX33A
(Daniel J. Bernstein, Times 33 with Addition).
Python's built-in hash table implementation, in the form of the dict type, as well as Perl's hash type (%) are highly
optimized as they are used internally to implement namespaces.
In the .NET Framework, support for hash tables is provided via the non-generic Hashtable and generic
Dictionary classes, which store key-value pairs, and the generic HashSet class, which stores only values.
Independent packages
SparseHash [7] (formerly Google SparseHash) An extremely memory-efficient hash_map implementation, with
only 2 bits/entry of overhead. The SparseHash library has several C++ hash map implementations with different
performance characteristics, including one that optimizes for memory use and another that optimizes for speed.
SunriseDD [8] An open source C library for hash table storage of arbitrary data objects with lock-free lookups,
built-in reference counting and guaranteed order iteration. The library can participate in external reference
counting systems or use its own built-in reference counting. It comes with a variety of hash functions and allows
the use of runtime supplied hash functions via callback mechanism. Source code is well documented.
uthash [9] This is an easy-to-use hash table for C structures.
History
The idea of hashing arose independently in different places. In January 1953, H. P. Luhn wrote an internal IBM
memorandum that used hashing with chaining. G. N. Amdahl, E. M. Boehme, N. Rochester, and Arthur Samuel
implemented a program using hashing at about the same time. Open addressing with linear probing (relatively prime
stepping) is credited to Amdahl, but Ershov (in Russia) had the same idea.
References
[1] Charles E. Leiserson, Amortized Algorithms, Table Doubling, Potential Method (http:/ / videolectures. net/ mit6046jf05_leiserson_lec13/ )
Lecture 13, course MIT 6.046J/18.410J Introduction to AlgorithmsFall 2005
[2] Thomas Wang (1997), Prime Double Hash Table (http:/ / www. concentric. net/ ~Ttwang/ tech/ primehash. htm). Retrieved April 27, 2012
[3] Erik Demaine, Jeff Lind. 6.897: Advanced Data Structures. MIT Computer Science and Artificial Intelligence Laboratory. Spring 2003. http:/
/ courses. csail. mit. edu/ 6. 897/ spring03/ scribe_notes/ L2/ lecture2. pdf
[4] Doug Dunham. CS 4521 Lecture Notes (http:/ / www. duluth. umn. edu/ ~ddunham/ cs4521s09/ notes/ ch11. txt). University of Minnesota
Duluth. Theorems 11.2, 11.6. Last modified April 21, 2009.
[5] Alexander Klink and Julian Wlde's Efficient Denial of Service Attacks on Web Application Platforms (http:/ / events. ccc. de/ congress/
2011/ Fahrplan/ attachments/ 2007_28C3_Effective_DoS_on_web_application_platforms. pdf), December 28, 2011, 28th Chaos
Communication Congress. Berlin, Germany.
[6] Crosby and Wallach's Denial of Service via Algorithmic Complexity Attacks (http:/ / www. cs. rice. edu/ ~scrosby/ hash/
CrosbyWallach_UsenixSec2003. pdf).
[7] http:/ / code. google. com/ p/ sparsehash/
[8] http:/ / www. sunrisetel. net/ software/ devtools/ sunrise-data-dictionary. shtml
[9] http:/ / uthash. sourceforge. net/
Further reading
Tamassia, Roberto; Michael T. Goodrich (2006). "Chapter Nine: Maps and Dictionaries". Data structures and algorithms in Java [updated for Java 5.0] (4th ed.). Hoboken, N.J.: Wiley. pp. 369–418. ISBN 0-471-73884-0.
McKenzie, B. J.; R. Harries; T. Bell (Feb 1990). "Selecting a hashing algorithm". Software: Practice & Experience 20 (2): 209–224.
External links
A Hash Function for Hash Table Lookup (http://www.burtleburtle.net/bob/hash/doobs.html) by Bob Jenkins.
Hash Tables (http://www.sparknotes.com/cs/searching/hashtables/summary.html) by SparkNotes, explanation using C
Hash functions (http://www.azillionmonkeys.com/qed/hash.html) by Paul Hsieh
Design of Compact and Efficient Hash Tables for Java (http://blog.griddynamics.com/2011/03/
ultimate-sets-and-maps-for-java-part-i.html) link not working
Libhashish (http://libhashish.sourceforge.net/) hash library
NIST entry on hash tables (http://www.nist.gov/dads/HTML/hashtab.html)
Open addressing hash table removal algorithm from ICI programming language, ici_set_unassign in set.c (http://
ici.cvs.sourceforge.net/ici/ici/set.c?view=markup) (and other occurrences, with permission).
A basic explanation of how the hash table works by Reliable Software (http://www.relisoft.com/book/lang/
pointer/8hash.html)
Lecture on Hash Tables (http://compgeom.cs.uiuc.edu/~jeffe/teaching/373/notes/06-hashing.pdf)
Hash-tables in C (http://task3.cc/308/hash-maps-with-linear-probing-and-separate-chaining/)two simple and
clear examples of hash tables implementation in C with linear probing and chaining
Open Data Structures - Chapter 5 - Hash Tables (http://opendatastructures.org/versions/edition-0.1e/ods-java/
5_Hash_Tables.html)
MIT's Introduction to Algorithms: Hashing 1 (http://ocw.mit.edu/courses/
electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/
video-lectures/lecture-7-hashing-hash-functions/) MIT OCW lecture Video
MIT's Introduction to Algorithms: Hashing 2 (http://ocw.mit.edu/courses/
electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/
video-lectures/lecture-8-universal-hashing-perfect-hashing/) MIT OCW lecture Video
How to sort a HashMap (Java) and keep the duplicate entries (http://www.lampos.net/sort-hashmap)
Linear probing
Linear probing is a scheme in computer programming for resolving hash collisions of values of hash functions by
sequentially searching the hash table for a free location. This is accomplished using two values - one as a starting
value and one as an interval between successive values in modular arithmetic. The second value, which is the same
for all keys and known as the stepsize, is repeatedly added to the starting value until a free space is found, or the
entire table is traversed. (In order to traverse the entire table the stepsize should be relatively prime to the arraysize,
which is why the array size is often chosen to be a prime number.)
newLocation = (startingValue + stepSize) % arraySize
This algorithm, which is used in open-addressed hash tables, provides good memory caching (if stepsize is equal to
one), through good locality of reference, but also results in clustering, an unfortunately high probability that where
there has been one collision there will be more. The performance of linear probing is also more sensitive to input
distribution when compared to double hashing, where the stepsize is determined by another hash function applied to
the value instead of a fixed stepsize as in linear probing.
Given an ordinary hash function H(x), a linear probing function H(x, i) would be:
H(x, i) = (H(x) + i) mod n
Here H(x) is the starting value, n the size of the hash table, and the stepsize is 1 in this case.
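A small illustrative Python sketch of this probe function (the function name and the example values are assumptions for illustration):

def linear_probe(h, i, n):
    # i-th probe position for a key whose ordinary hash is h, in a table of size n
    return (h + i) % n

# e.g. with n = 11 and h = 9 the probe sequence is 9, 10, 0, 1, 2, ...
print([linear_probe(9, i, 11) for i in range(5)])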
External links
How Caching Affects Hashing (http://www.siam.org/meetings/alenex05/papers/13gheileman.pdf) by
Gregory L. Heileman and Wenbin Luo 2005.
Open Data Structures - Section 5.2 - LinearHashTable: Linear Probing (http://opendatastructures.org/versions/
edition-0.1e/ods-java/5_2_LinearHashTable_Linear_.html)
Quadratic probing
Quadratic probing is an open addressing scheme in computer programming for resolving collisions in hash
tableswhen an incoming data's hash value indicates it should be stored in an already-occupied slot or bucket.
Quadratic probing operates by taking the original hash index and adding successive values of an arbitrary quadratic
polynomial until an open slot is found.
For a given hash value H, the indices generated by linear probing are as follows:
H + 1, H + 2, H + 3, H + 4, ...
This method results in primary clustering, and as the cluster grows larger, the search for those items hashing within the cluster becomes less efficient.
An example sequence using quadratic probing is:
H + 1², H + 2², H + 3², H + 4², ...
Quadratic probing can be a more efficient algorithm in a closed hash table, since it better avoids the clustering
problem that can occur with linear probing, although it is not immune. It also provides good memory caching
because it preserves some locality of reference; however, linear probing has greater locality and, thus, better cache
performance.
Quadratic probing is used in the Berkeley Fast File System to allocate free blocks. The allocation routine chooses a
new cylinder-group when the current is nearly full using quadratic probing, because of the speed it shows in finding
unused cylinder-groups.
Quadratic function
Let h(k) be a hash function that maps an element k to an integer in [0, m−1], where m is the size of the table. Let the ith probe position for a value k be given by the function
h(k, i) = (h(k) + c1·i + c2·i²) mod m
where c2 ≠ 0. If c2 = 0, then h(k, i) degrades to a linear probe. For a given hash table, the values of c1 and c2 remain constant.
Examples:
For m = 2^n, a good choice for the constants are c1 = c2 = 1/2, as the values of h(k, i) for i in [0, m−1] are all distinct. This leads to the probe sequence h(k), h(k) + 1, h(k) + 3, h(k) + 6, ..., where the values increase by 1, 2, 3, ...
For prime m > 2, most choices of c1 and c2 will make h(k, i) distinct for i in [0, (m−1)/2]. Such choices include c1 = c2 = 1/2, c1 = c2 = 1, and c1 = 0, c2 = 1. Because there are only about m/2 distinct probes for a given element, it is difficult to guarantee that insertions will succeed when the load factor is above 1/2.
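An illustrative probe function for the c1 = c2 = 1/2 case with a power-of-two table size, which produces the triangular-number sequence mentioned above (names and example values are assumptions):

def quadratic_probe(h, i, m):
    # i-th probe with c1 = c2 = 1/2: h + (i + i*i)/2 is always an integer
    return (h + (i + i * i) // 2) % m

# For m = 8 and h = 3 the probes visit 3, 4, 6, 1, 5, 2, 0, 7: every slot exactly once.
print([quadratic_probe(3, i, 8) for i in range(8)])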
Limitations
For linear probing it is a bad idea to let the hash table get nearly full, because performance is degraded as the hash
table gets filled. In the case of quadratic probing, the situation is even more drastic. With the exception of the
triangular number case for a power-of-two-sized hash table, there is no guarantee of finding an empty cell once the
table gets more than half full, or even before the table gets half full if the table size is not prime. This is because at
most half of the table can be used as alternative locations to resolve collisions. If the hash table size is b (a prime greater than 3), it can be proven that the first (b + 1)/2 alternative locations, including the initial location h(k), are all distinct. Suppose two of the alternative locations are given by h(k) + x² (mod b) and h(k) + y² (mod b), where 0 ≤ x, y ≤ b/2 and x ≠ y. If these two locations point to the same key space, then the following would have to be true:
x² ≡ y² (mod b)
(x − y)(x + y) ≡ 0 (mod b)
As b (the table size) is a prime greater than 3, either (x − y) or (x + y) has to be equal to zero modulo b. Since x and y are distinct, (x − y) cannot be zero modulo b. Also, since 0 ≤ x, y ≤ b/2, (x + y) cannot be zero modulo b.
Thus, by contradiction, the first b/2 alternative locations after h(k) are distinct. So an empty key space can always be found as long as at most b/2 locations are filled, i.e., the hash table is not more than half full.
External links
Tutorial/quadratic probing (http://research.cs.vt.edu/AVresearch/hashing/quadratic.php)
Double hashing
Double hashing is a computer programming technique used in hash tables to resolve hash collisions, cases when
two different values to be searched for produce the same hash key. It is a popular collision-resolution technique in
open-addressed hash tables. Double hashing is implemented in many popular libraries.
Like linear probing, it uses one hash value as a starting point and then repeatedly steps forward an interval until the
desired value is located, an empty location is reached, or the entire table has been searched; but this interval is
decided using a second, independent hash function (hence the name double hashing). Unlike linear probing and
quadratic probing, the interval depends on the data, so that even values mapping to the same location have different
bucket sequences; this minimizes repeated collisions and the effects of clustering.
Given two randomly, uniformly, and independently selected hash functions h1 and h2, the ith location in the probe sequence for a key k in a hash table T of size |T| is:
h(i, k) = (h1(k) + i · h2(k)) mod |T|
Generally, h1 and h2 are selected from a set of universal hash functions.
Double hashing approximates uniform open address hashing. That is, start by randomly, uniformly and independently selecting two universal hash functions h1 and h2 to build a double hashing table T. All elements are put in T by double hashing using h1 and h2; given a key k, the (i+1)-st probe location is computed with the formula above, and the behaviour is close to that of an ideal random probe sequence regardless of the distribution of the inputs. More precisely, these two uniformly, randomly and independently chosen hash functions are chosen from a set of universal hash functions where pair-wise independence suffices.
Previous results include: Guibas and Szemerédi[2] gave the first rigorous analysis of double hashing, showing that it matches ideal uniform hashing up to a fixed load factor. Lueker and Molodowitch[3] showed this held assuming ideal randomized functions. Schmidt and Siegel[4] showed this using hash functions with only limited independence.
If h2(k) evaluates to zero for some key, the resulting probe sequence will never leave the initial hash location. One possible solution is to change the secondary hash function to, for example, h2(k) = 1 + (k mod (|T| − 1)). This ensures that the secondary hash function will always be non-zero.
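A short illustrative Python sketch of the probe sequence; the two hash functions here are placeholders chosen so that the secondary hash is never zero, not the functions analyzed above:

# Double hashing sketch: the probe interval is itself derived from the key.

def h1(key, m):
    return hash(key) % m

def h2(key, m):
    return 1 + (hash((key, "step")) % (m - 1))   # never zero, so the sequence always moves

def double_hash_probe(key, i, m):
    return (h1(key, m) + i * h2(key, m)) % m

m = 11
print([double_hash_probe("John Smith", i, m) for i in range(5)])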
Notes
[1] P. G. Bradford and M. Katehakis: A Probabilistic Study on Combinatorial Expanders and Hashing, SIAM Journal on Computing 2007 (37:1), 83–111. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.91.2647
[2] L. Guibas and E. Szemerédi: The Analysis of Double Hashing, Journal of Computer and System Sciences, 1978, 16, 226–274.
[3] G. S. Lueker and M. Molodowitch: More Analysis of Double Hashing, Combinatorica, 1993, 13(1), 83–96.
[4] J. P. Schmidt and A. Siegel: Double Hashing is Computable and Randomizable with Universal Hash Functions, manuscript.
External links
How Caching Affects Hashing (http://www.siam.org/meetings/alenex05/papers/13gheileman.pdf) by
Gregory L. Heileman and Wenbin Luo 2005.
Hash Table Animation (http://www.cs.pitt.edu/~kirk/cs1501/animations/Hashing.html)
Cuckoo hashing
Cuckoo hashing is a scheme in computer programming for resolving hash
collisions of values of hash functions in a table, with worst-case constant
lookup time. The name derives from the behavior of some species of
cuckoo, where the cuckoo chick pushes the other eggs or young out of the
nest when it hatches; analogously, inserting a new key into a cuckoo hashing
table may push an older key to a different location in the table.
History
Cuckoo hashing was first described by Rasmus Pagh and Flemming Friche
Rodler in 2001.
Theory
The basic idea is to use two hash functions instead of only one. This
provides two possible locations in the hash table for each key. In one of the
commonly used variants of the algorithm, the hash table is split into two
smaller tables of equal size, and each hash function provides an index into
one of these two tables.
When a new key is inserted, a greedy algorithm is used: The new key is
inserted in one of its two possible locations, "kicking out", that is,
displacing, any key that might already reside in this location. This displaced
key is then inserted in its alternative location, again kicking out any key that
might reside there, until a vacant position is found, or the procedure enters
an infinite loop. In the latter case, the hash table is rebuilt in-place using
new hash functions:
There is no need to allocate new tables for the rehashing: We may
simply run through the tables to delete and perform the usual insertion
procedure on all keys found not to be at their intended position in the
table.
Pagh & Rodler,"Cuckoo Hashing"
Lookup requires inspection of just two locations in the hash table, which
takes constant time in the worst case (see Big O notation). This is in contrast to many other hash table algorithms,
which may not have a constant worst-case bound on the time to do a lookup.
It can also be shown that insertions succeed in expected constant time, even considering the possibility of having to
rebuild the table, as long as the number of keys is kept below half of the capacity of the hash table, i.e., the load
factor is below 50%. One method of proving this uses the theory of random graphs: one may form an undirected
graph called the "Cuckoo Graph" that has a vertex for each hash table location, and an edge for each hashed value,
with the endpoints of the edge being the two possible locations of the value. Then, the greedy insertion algorithm for
adding a set of values to a cuckoo hash table succeeds if and only if the Cuckoo Graph for this set of values is a
pseudoforest, a graph with at most one cycle in each of its connected components, as any vertex-induced subgraph
with more edges than vertices corresponds to a set of keys for which there are an insufficient number of slots in the
hash table. This property is true with high probability for a random graph in which the number of edges is less than half the number of vertices.
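A sketch of the greedy insertion procedure with two sub-tables follows. The hash functions match those of the example in the next section; the eviction limit and helper names are illustrative assumptions, and on failure a real implementation would rebuild the table with new hash functions.

# Cuckoo hashing insertion sketch with two sub-tables T1 and T2 of size 11.

M = 11
T1, T2 = [None] * M, [None] * M

def h1(key):
    return key % M

def h2(key):
    return (key // M) % M

def insert(key, max_evictions=32):
    for _ in range(max_evictions):
        slot = h1(key)
        key, T1[slot] = T1[slot], key       # place in T1, kicking out any occupant
        if key is None:
            return True
        slot = h2(key)
        key, T2[slot] = T2[slot], key       # the displaced key goes to T2, and so on
        if key is None:
            return True
    return False                            # give up: the caller should rebuild the table

def lookup(key):
    return T1[h1(key)] == key or T2[h2(key)] == key

for k in (20, 50, 53, 75, 100, 67, 105, 3, 36, 39):
    insert(k)
print(lookup(67), lookup(6))                # True False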
Example
The following hash functions are given (the table size is 11):
h(k) = k mod 11
h'(k) = ⌊k/11⌋ mod 11
k       20  50  53  75  100  67  105   3  36  39
h(k)     9   6   9   9    1   1    6   3   3   6
h'(k)    1   4   4   6    9   6    9   0   3   3
Columns in the following two tables show the state of the hash tables over time as the elements are inserted.
[Two step-by-step tables in the original show, after each insertion, the contents of the table addressed by h(k) and of the table addressed by h'(k).]
Cycle
If you now wish to insert the element 6, then you get into a cycle. In the last row of the table we find the same initial situation as at the beginning again.
[A further table in the original traces the sequence of displacements that repeats when inserting 6.]
References
A cool and practical alternative to traditional hash tables (http://www.ru.is/faculty/ulfar/CuckooHash.pdf),
U. Erlingsson, M. Manasse, F. Mcsherry, 2006.
Cuckoo Hashing for Undergraduates, 2006 (http://www.it-c.dk/people/pagh/papers/cuckoo-undergrad.pdf),
R. Pagh, 2006.
Cuckoo Hashing, Theory and Practice (http://mybiasedcoin.blogspot.com/2007/06/
cuckoo-hashing-theory-and-practice-part.html) (Part 1, Part 2 (http://mybiasedcoin.blogspot.com/2007/06/
cuckoo-hashing-theory-and-practice-part_15.html) and Part 3 (http://mybiasedcoin.blogspot.com/2007/06/
cuckoo-hashing-theory-and-practice-part_19.html)), Michael Mitzenmacher, 2007.
Naor, Moni; Segev, Gil; Wieder, Udi (2008). "History-Independent Cuckoo Hashing" (http://www.wisdom.
weizmann.ac.il/~naor/PAPERS/cuckoo_hi_abs.html). International Colloquium on Automata, Languages and
Programming (ICALP). Reykjavik, Iceland. Retrieved 2008-07-21.
Hopscotch hashing
Hopscotch hashing is a scheme in computer programming for resolving hash collisions of values of hash functions
in a table using open addressing. It is also well suited for implementing a concurrent hash table. Hopscotch hashing
was introduced by Maurice Herlihy, Nir Shavit and Moran Tzafrir in 2008. The name is derived from the sequence
of hops that characterize the table's insertion algorithm.
The algorithm uses a single array of n buckets. For each bucket, its neighborhood is a small collection of nearby consecutive buckets (i.e. ones with indexes close to the original hashed bucket). The desired property of the neighborhood is that the cost of finding an item in the buckets of the neighborhood is close to the cost of finding it in the bucket itself (for example, by having buckets in the neighborhood fall within the same cache line). The size of the neighborhood must be sufficient to accommodate a logarithmic number of items in the worst case (i.e. it must accommodate log(n) items), but only a constant number on average. If some bucket's neighborhood is filled, the table is resized.
Hopscotch hashing. Here, H is 4. Gray entries are occupied. In part (a), the item x is added with a hash value of 6. A linear probe finds that entry 13 is empty. Because 13 is more than 4 entries away from 6, the algorithm looks for an earlier entry to swap with 13. The first place to look in is H − 1 = 3 entries before, at entry 10. That entry's hop information bit-map indicates that d, the item at entry 11, can be displaced to 13. After displacing d, entry 11 is still too far from entry 6, so the algorithm examines entry 8. The hop information bit-map indicates that item c at entry 9 can be moved to entry 11. Finally, a is moved to entry 9. Part (b) shows the table state just after adding x.
The idea is that hopscotch hashing "moves the empty slot towards the desired bucket". This distinguishes it from
linear probing which leaves the empty slot where it was found, possibly far away from the original bucket, or from
cuckoo hashing that, in order to create a free bucket, moves an item out of one of the desired buckets in the target
arrays, and only then tries to find the displaced item a new place.
To remove an item from the table, one simply removes it from the table entry. If the neighborhood buckets are cache
aligned, then one could apply a reorganization operation in which items are moved into the now vacant location in
order to improve alignment.
One advantage of hopscotch hashing is that it provides good performance at very high table load factors, even ones
exceeding 0.9. Part of this efficiency is due to using a linear probe only to find an empty slot during insertion, not for
every lookup as in the original linear probing hash table algorithm. Another advantage is that one can use any hash
function, in particular simple ones that are close-to-universal.
External links
libhash - a C hopscotch hashing implementation (https://code.google.com/p/libhhash/wiki/Intro)
Hash function
A hash function is any algorithm that maps data of
variable length to data of a fixed length. The values
returned by a hash function are called hash values, hash
codes, hash sums, checksums or simply hashes.
Description
Hash functions are primarily used to generate
fixed-length output data that acts as a shortened reference
to the original data. This is useful when the output data is
too cumbersome to use in its entirety.
A hash function that maps names to integers from 0 to 15. There is a collision between keys "John Smith" and "Sandra Dee".
One practical use is a data structure called a hash table, where the data is stored associatively. Searching for a person's name in a list is slow, but the hashed value can be used to store a reference to the original data and retrieve it in constant time (barring collisions). Another use is in
cryptography, the science of encoding and safeguarding data. It is easy to generate hash values from input data and
easy to verify that the data matches the hash, but hard to 'fake' a hash value to hide malicious data. This is the
principle behind the Pretty Good Privacy algorithm for data validation.
Hash functions are also used to accelerate table lookup or data comparison tasks such as finding items in a database,
detecting duplicated or similar records in a large file, finding similar stretches in DNA sequences, and so on.
A hash function should be referentially transparent (stable), i.e., if called twice on input that is "equal" (for example,
strings that consist of the same sequence of characters), it should give the same result. There is a construct in many
programming languages that allows the user to override equality and hash functions for an object: if two objects are
equal, their hash codes must be the same. This is crucial to finding an element in a hash table quickly, because two of
the same element would both hash to the same slot.
Hash functions are destructive, akin to lossy compression, as the original data is lost when hashed. Unlike
compression algorithms, where something resembling the original data can be decompressed from compressed data,
the goal of a hash value is to uniquely identify a reference to the object so that it can be retrieved in its entirety.
Unfortunately, all hash functions that map a larger set of data to a smaller set of data cause collisions. Such hash
functions try to map the keys to the hash values as evenly as possible because collisions become more frequent as
hash tables fill up. Thus, single-digit hash values are frequently restricted to 80% of the size of the table. Depending
on the algorithm used, other properties may be required as well, such as double hashing and linear probing. Although
the idea was conceived in the 1950s, the design of good hash functions is still a topic of active research.
Hash functions are related to (and often confused with) checksums, check digits, fingerprints, randomization
functions, error correcting codes, and cryptographic hash functions. Although these concepts overlap to some extent,
each has its own uses and requirements and is designed and optimized differently. The HashKeeper database
maintained by the American National Drug Intelligence Center, for instance, is more aptly described as a catalog of
file fingerprints than of hash values.
Hash tables
Hash functions are primarily used in hash tables, to quickly locate a data record (e.g., a dictionary definition) given
its search key (the headword). Specifically, the hash function is used to map the search key to an index; the index
gives the place in the hash table where the corresponding record should be stored. Hash tables, in turn, are used to
implement associative arrays and dynamic sets.
Typically, the domain of a hash function (the set of possible keys) is larger than its range (the number of different
table indexes), and so it will map several different keys to the same index. Therefore, each slot of a hash table is
associated with (implicitly or explicitly) a set of records, rather than a single record. For this reason, each slot of a
hash table is often called a bucket, and hash values are also called bucket indices.
Thus, the hash function only hints at the record's location; it tells where one should start looking for it. Still, in a
half-full table, a good hash function will typically narrow the search down to only one or two entries.
Caches
Hash functions are also used to build caches for large data sets stored in slow media. A cache is generally simpler
than a hashed search table, since any collision can be resolved by discarding or writing back the older of the two
colliding items. This is also used in file comparison.
Bloom filters
Hash functions are an essential ingredient of the Bloom filter, a space-efficient probabilistic data structure that is
used to test whether an element is a member of a set.
Geometric hashing
This principle is widely used in computer graphics, computational geometry and many other disciplines, to solve
many proximity problems in the plane or in three-dimensional space, such as finding closest pairs in a set of points,
similar shapes in a list of shapes, similar images in an image database, and so on. In these applications, the set of all
inputs is some sort of metric space, and the hashing function can be interpreted as a partition of that space into a grid
of cells. The table is often an array with two or more indices (called a grid file, grid index, bucket grid, and similar
names), and the hash function returns an index tuple. This special case of hashing is known as geometric hashing or
the grid method. Geometric hashing is also used in telecommunications (usually under the name vector quantization)
to encode and compress multi-dimensional signals.
Properties
Good hash functions, in the original sense of the term, are usually required to satisfy certain properties listed below.
Note that different requirements apply to the other related concepts (cryptographic hash functions, checksums, etc.).
Determinism
A hash procedure must be deterministic, meaning that for a given input value it must always generate the same hash
value. In other words, it must be a function of the data to be hashed, in the mathematical sense of the term. This
requirement excludes hash functions that depend on external variable parameters, such as pseudo-random number
generators or the time of day. It also excludes functions that depend on the memory address of the object being
hashed, because that address may change during execution (as may happen on systems that use certain methods of
garbage collection), although sometimes rehashing of the item is possible.
Uniformity
A good hash function should map the expected inputs as evenly as possible over its output range. That is, every hash
value in the output range should be generated with roughly the same probability. The reason for this last requirement
is that the cost of hashing-based methods goes up sharply as the number of collisions (pairs of inputs that are mapped to the same hash value) increases. Basically, if some hash values are more likely to occur than others, a
larger fraction of the lookup operations will have to search through a larger set of colliding table entries.
Note that this criterion only requires the value to be uniformly distributed, not random in any sense. A good
randomizing function is (barring computational efficiency concerns) generally a good choice as a hash function, but
the converse need not be true.
Hash tables often contain only a small subset of the valid inputs. For instance, a club membership list may contain
only a hundred or so member names, out of the very large set of all possible names. In these cases, the uniformity
criterion should hold for almost all typical subsets of entries that may be found in the table, not just for the global set
of all possible entries.
In other words, if a typical set of m records is hashed to n table slots, the probability of a bucket receiving many
more than m/n records should be vanishingly small. In particular, if m is less than n, very few buckets should have
more than one or two records. (In an ideal "perfect hash function", no bucket should have more than one record; but
a small number of collisions is virtually inevitable, even if n is much larger than m see the birthday paradox).
When testing a hash function, the uniformity of the distribution of hash values can be evaluated by the chi-squared
test.
Variable range
In many applications, the range of hash values may be different for each run of the program, or may change along
the same run (for instance, when a hash table needs to be expanded). In those situations, one needs a hash function
which takes two parameters: the input data z, and the number n of allowed hash values.
A common solution is to compute a fixed hash function with a very large range (say, 0 to 2^32 − 1), divide the result by n, and use the division's remainder. If n is itself a power of 2, this can be done by bit masking and bit shifting. When this approach is used, the hash function must be chosen so that the result has fairly uniform distribution between 0 and n − 1, for any value of n that may occur in the application. Depending on the function, the remainder may be uniform only for certain values of n, e.g. odd or prime numbers.
We can allow the table size n to not be a power of 2 and still not have to perform any remainder or division operation, as these computations are sometimes costly. For example, let n be significantly less than 2^b. Consider a pseudorandom number generator (PRNG) function P(key) that is uniform on the interval [0, 2^b − 1]. A hash function uniform on the interval [0, n − 1] is ⌊n · P(key) / 2^b⌋. We can replace the division by a (possibly faster) right bit shift:
(n · P(key)) >> b
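A small illustrative snippet of this range reduction; P here is just Python's built-in hash masked to b bits, an assumption made only for the sketch:

b = 32

def P(key):
    return hash(key) & ((1 << b) - 1)     # pseudorandom value in [0, 2**b - 1]

def index_for(key, n):
    return (n * P(key)) >> b              # value in [0, n - 1] without a division

print(index_for("John Smith", 1000))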
Several algorithms that preserve the uniformity property but require time proportional to n to compute the value of
H(z,n) have been invented.
Data normalization
In some applications, the input data may contain features that are irrelevant for comparison purposes. For example,
when looking up a personal name, it may be desirable to ignore the distinction between upper and lower case letters.
For such data, one must use a hash function that is compatible with the data equivalence criterion being used: that is,
any two inputs that are considered equivalent must yield the same hash value. This can be accomplished by
normalizing the input before hashing it, as by upper-casing all letters.
Continuity
A hash function that is used to search for similar (as opposed to equivalent) data must be as continuous as possible;
two inputs that differ by a little should be mapped to equal or nearly equal hash values.[citation needed]
Note that continuity is usually considered a fatal flaw for checksums, cryptographic hash functions, and other related
concepts. Continuity is desirable for hash functions only in some applications, such as hash tables used in Nearest
neighbor search.
Perfect hashing
A hash function that is injective, that is, maps each valid input to a different hash value, is said to be perfect.
With such a function one can directly locate the desired
entry in a hash table, without any additional searching.
Rolling hash
In some applications, such as substring search, one must compute a hash function h for every k-character substring of
a given n-character string t, where k is a fixed integer and n is larger than k. The straightforward solution, which is to extract
every such substring s of t and compute h(s) separately, requires a number of operations proportional to k·n.
However, with the proper choice of h, one can use the technique of rolling hash to compute all those hashes with an
effort proportional to k + n.
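As an illustration (not code from the text), the following C sketch maintains a polynomial rolling hash over a sliding window of k bytes; the base and modulus constants are arbitrary choices made for this example:

  #include <stdint.h>
  #include <stddef.h>

  #define BASE 257u            /* arbitrary polynomial base for the example */
  #define MOD  1000000007u     /* arbitrary prime modulus for the example */

  /* BASE^e mod MOD, needed once to remove the outgoing character. */
  static uint64_t pow_base(size_t e) {
      uint64_t p = 1;
      while (e--) p = p * BASE % MOD;
      return p;
  }

  /* Hash of the k bytes starting at t[i], evaluated by Horner's rule. */
  static uint64_t hash_window(const unsigned char *t, size_t i, size_t k) {
      uint64_t h = 0;
      for (size_t j = 0; j < k; j++)
          h = (h * BASE + t[i + j]) % MOD;
      return h;
  }

  /* Slide the window by one: drop the outgoing byte (whose weight is BASE^(k-1),
   * passed in as pow_k1) and append the incoming byte.  O(1) per step, so all
   * n - k + 1 window hashes cost O(n + k) work overall. */
  static uint64_t roll(uint64_t h, unsigned char out, unsigned char in, uint64_t pow_k1) {
      uint64_t drop = out * pow_k1 % MOD;
      return ((h + MOD - drop) % MOD * BASE + in) % MOD;
  }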
Universal hashing
A universal hashing scheme is a randomized algorithm that selects a hashing function h among a family of such
functions, in such a way that the probability of a collision of any two distinct keys is 1/n, where n is the number of
distinct hash values desired, independently of the two keys. Universal hashing ensures (in a probabilistic sense) that
the hash function application will behave as well as if it were using a random function, for any distribution of the
input data. It will however have more collisions than perfect hashing, and may require more operations than a
special-purpose hash function.
Locality-sensitive hashing
Locality-sensitive hashing (LSH) is a method of performing probabilistic dimension reduction of high-dimensional
data. The basic idea is to hash the input items so that similar items are mapped to the same buckets with high
probability (the number of buckets being much smaller than the universe of possible input items). This is different
from the conventional hash functions, such as those used in cryptography, as in this case the goal is to maximize the
probability of "collision" of similar items rather than to avoid collisions.
One example of LSH is MinHash algorithm used for finding similar documents (such as web-pages):
Let A and B be the two sets being compared, and let h be a hash function that maps the members of A and B to distinct integers. For any set S define hmin(S) to be
the member x of S with the minimum value of h(x). Then hmin(A) = hmin(B) exactly when the minimum hash value of
the union A ∪ B lies in the intersection A ∩ B. Therefore,
Pr[hmin(A) = hmin(B)] = J(A,B), where J is the Jaccard index.
In other words, if r is a random variable that is one when hmin(A) = hmin(B) and zero otherwise, then r is an unbiased
estimator of J(A,B), although it has too high a variance to be useful on its own. The idea of the MinHash scheme is to
reduce the variance by averaging together several variables constructed in the same way.
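A small C sketch of this averaging idea (an illustration, not the article's code): it estimates J(A,B) by comparing the minimum hash value of each set under several independently chosen hash functions. The multiply-add family used as a stand-in for the random hash function h, and all names here, are assumptions made for the example, and the quality of the estimate depends on how close the family is to truly random:

  #include <stdint.h>
  #include <stddef.h>

  /* One hash function from a simple multiply-add family (a should be odd). */
  static uint64_t h(uint64_t x, uint64_t a, uint64_t b) {
      return a * x + b;                       /* wraps modulo 2^64 */
  }

  /* Minimum hash value of a set under the function with parameters (a, b). */
  static uint64_t min_hash(const uint64_t *set, size_t n, uint64_t a, uint64_t b) {
      uint64_t m = UINT64_MAX;
      for (size_t i = 0; i < n; i++) {
          uint64_t v = h(set[i], a, b);
          if (v < m) m = v;
      }
      return m;
  }

  /* Estimate the Jaccard index of A and B by averaging k indicator variables,
   * one per hash function (a[i], b[i]). */
  double jaccard_estimate(const uint64_t *A, size_t na, const uint64_t *B, size_t nb,
                          const uint64_t *a, const uint64_t *b, size_t k) {
      size_t agree = 0;
      for (size_t i = 0; i < k; i++)
          if (min_hash(A, na, a[i], b[i]) == min_hash(B, nb, a[i], b[i]))
              agree++;
      return (double)agree / (double)k;
  }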
References
[1] "Robust Audio Hashing for Content Identification by Jaap Haitsma, Ton Kalker and Job Oostveen" (http:/ / citeseer. ist. psu. edu/ rd/
11787382,504088,1,0. 25,Download/ http:/ / citeseer. ist. psu. edu/ cache/ papers/ cs/ 25861/ http:zSzzSzwww. extra. research. philips.
comzSznatlabzSzdownloadzSzaudiofpzSzcbmi01audiohashv1. 0. pdf/ haitsma01robust. pdf)
[2] Bret Mulvey, Evaluation of CRC32 for Hash Tables (http:/ / home. comcast. net/ ~bretm/ hash/ 8. html), in Hash Functions (http:/ / home.
comcast. net/ ~bretm/ hash/ ). Accessed April 10, 2009.
[3] Bret Mulvey, Evaluation of SHA-1 for Hash Tables (http:/ / home. comcast. net/ ~bretm/ hash/ 9. html), in Hash Functions (http:/ / home.
comcast. net/ ~bretm/ hash/ ). Accessed April 10, 2009.
[4] http:/ / citeseer. ist. psu. edu/ viewdoc/ summary?doi=10. 1. 1. 18. 7520 Performance in Practice of String Hashing Functions
External links
General purpose hash function algorithms (C/C++/Pascal/Java/Python/Ruby) (http://www.partow.net/
programming/hashfunctions/index.html)
Hash Functions and Block Ciphers by Bob Jenkins (http://burtleburtle.net/bob/hash/index.html)
The Goulburn Hashing Function (http://www.webcitation.org/query?url=http://www.geocities.com/
drone115b/Goulburn06.pdf&date=2009-10-25+21:06:51) (PDF) by Mayur Patel
Hash Function Construction for Textual and Geometrical Data Retrieval (http://herakles.zcu.cz/~skala/PUBL/
PUBL_2010/2010_WSEAS-Corfu_Hash-final.pdf) Latest Trends on Computers, Vol. 2, pp. 483–489, CSCC
conference, Corfu, 2010
Perfect hash function
References
[1] Fredman, M. L., Komlós, J., and Szemerédi, E. 1984. Storing a Sparse Table with O(1) Worst Case Access Time. J. ACM 31, 3 (Jun. 1984),
538–544. http://portal.acm.org/citation.cfm?id=1884
Further reading
Richard J. Cichelli. Minimal Perfect Hash Functions Made Simple, Communications of the ACM, Vol. 23,
Number 1, January 1980.
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms,
Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 11.5: Perfect hashing,
pp. 245–249.
Fabiano C. Botelho, Rasmus Pagh and Nivio Ziviani. "Perfect Hashing for Data Management Applications"
(http://arxiv.org/pdf/cs/0702159).
Fabiano C. Botelho and Nivio Ziviani. "External perfect hashing for very large key sets" (http://homepages.dcc.
ufmg.br/~nivio/papers/cikm07.pdf). 16th ACM Conference on Information and Knowledge Management
(CIKM07), Lisbon, Portugal, November 2007.
Djamal Belazzougui, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna. "Monotone minimal perfect hashing:
Searching a sorted table with O(1) accesses" (http://vigna.dsi.unimi.it/ftp/papers/
MonotoneMinimalPerfectHashing.pdf). In Proceedings of the 20th Annual ACM-SIAM Symposium On Discrete
Mathematics (SODA), New York, 2009. ACM Press.
Djamal Belazzougui, Paolo Boldi, Rasmus Pagh, and Sebastiano Vigna. "Theory and practise of monotone
minimal perfect hashing" (http://www.siam.org/proceedings/alenex/2009/alx09_013_belazzouguid.pdf). In
Proceedings of the Tenth Workshop on Algorithm Engineering and Experiments (ALENEX). SIAM, 2009.
Douglas C. Schmidt, GPERF: A Perfect Hash Function Generator (http://www.cs.wustl.edu/~schmidt/PDF/
gperf.pdf), C++ Report, SIGS, Vol. 10, No. 10, November/December, 1998.
External links
Universal hashing
Using universal hashing (in a randomized algorithm or data structure) refers to selecting a hash function at random
from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low
number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known
(for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous
uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography.
Introduction
Assume we want to map keys from some universe U into m bins (labelled [m] = {0, ..., m − 1}). The algorithm will
have to handle some data set S ⊆ U of |S| = n keys, which is not known in advance. Usually, the goal of hashing is
to obtain a low number of collisions (keys in S that land in the same bin). A deterministic hash function cannot offer
any guarantee in an adversarial setting if the size of U is greater than m · n, since the adversary may choose S
to be precisely the preimage of a bin. This means that all data keys land in the same bin, making
hashing useless. Furthermore, a deterministic hash function does not allow for rehashing: sometimes the input data
turns out to be bad for the hash function (e.g. there are too many collisions), so one would like to change the hash
function.
The solution to these problems is to pick a function randomly from a family of hash functions. A family of functions
H = { h : U → [m] } is called a universal family if, for all x, y in U with x ≠ y,

  Pr[h(x) = h(y)] ≤ 1/m  when h is drawn uniformly at random from H.

In other words, any two distinct keys of the universe collide with probability at most 1/m when the hash function h is
drawn randomly from H. This is exactly the probability of collision we would expect if the hash function assigned
truly random hash codes to every key. Sometimes, the definition is relaxed to allow collision probability O(1/m).
This concept was introduced by Carter and Wegman in 1977, and has found numerous applications in computer
science. If we have an upper bound of ε < 1 on the collision probability, we say that we have
ε-almost universality.
Many, but not all, universal families have the following stronger uniform difference property: for all x, y in U with
x ≠ y, when h is drawn randomly from the family H, the difference h(x) − h(y) mod m is uniformly distributed in
[m]. Note that the definition of universality is only concerned with whether this difference is zero; the uniform
difference property is strictly stronger. (Similarly, a universal family can be XOR-universal if the value h(x) ⊕ h(y)
is uniformly distributed in [m], where ⊕ denotes bitwise exclusive or; this is only possible if m is a power of two.)
An even stronger condition is pairwise independence: we have this property when, for any two distinct keys x, y and
any pair of hash values z1, z2, the probability that h(x) = z1 and h(y) = z2 is exactly 1/m². Pairwise independence is
sometimes called strong universality, and is strictly stronger than mere universality.
Strong universality is preserved when hash values are truncated to their lower-order bits: if H is strongly universal
with m = 2^L, then the family obtained by reducing each hash value modulo 2^L' for all h in H is strongly universal
for any L' ≤ L. Unfortunately, the same is not true of (merely) universal families. For example, the family made of the
identity function h(x) = x is clearly universal, but the family obtained by keeping only the lower-order bits of each
hash value fails to be universal.
Mathematical guarantees
For any fixed set S of n keys, using a universal family guarantees the following properties.
1. For any fixed key x in S, the expected number of keys in S that hash to the same bin as x is n/m. When implementing hash tables
by chaining, this number is proportional to the expected running time of an operation involving the key x (for
example a query, insertion or deletion).
2. The expected number of pairs of keys x, y in S with x ≠ y that collide (h(x) = h(y)) is bounded above
by n(n − 1)/(2m), which is of order O(n²/m). If the number of bins is m = n, the expected
number of collisions is bounded above by (n − 1)/2, i.e. less than
a half per key.
3. The expected number of keys that land in bins with at least t keys in them is bounded as well.
Thus, if the capacity of each bin is capped to three times the average size (t = 3n/m), the total number of keys in overflowing bins is at most O(m). This guarantee requires a
collision probability of at most 1/m; for a family whose collision probability is only bounded above by some larger multiple of
1/m, this result is no longer true.
As the above guarantees hold for any fixed set S, they hold even if the data set is chosen by an adversary. However, the
adversary has to make this choice before (or independent of) the algorithm's random choice of a hash function. If the
adversary can observe the random choice of the algorithm, randomness serves no purpose, and the situation is the
same as deterministic hashing.
The second and third guarantees are typically used in conjunction with rehashing. For instance, a randomized
algorithm may be prepared to handle some expected number of collisions. If it observes too many collisions, it chooses
another random h from the family and repeats. Universality guarantees that the number of repetitions is a geometric
random variable.
Constructions
Since any computer data can be represented as one or more machine words, one generally needs hash functions for
three types of domains: machine words ("integers"); fixed-length vectors of machine words; and variable-length
vectors ("strings").
Hashing integers
This section refers to the case of hashing integers that fit in machine words; thus, operations like multiplication,
addition, division, etc. are cheap machine-level instructions. Let the universe to be hashed be the set of integers
{0, 1, ..., u − 1}.
The original proposal of Carter and Wegman was to pick a prime p ≥ u and define

  h_{a,b}(x) = ((a·x + b) mod p) mod m,

where a and b are integers chosen uniformly at random modulo p, with a ≠ 0. Technically, adding b is not needed for
universality.
To see that this family is universal, note that h_{a,b}(x) = h_{a,b}(y) for two distinct keys x and y only when
(a·x + b) mod p and (a·y + b) mod p differ by a multiple of m. Since p is prime and x ≠ y, their difference x − y is
nonzero and has a multiplicative inverse modulo p, so each admissible multiple of m determines exactly one value of a
that causes the collision. There are roughly p/m possible values for the right hand side and p − 1 equally likely values
of a, so the collision probability is about (p/m)/(p − 1), which tends to 1/m for large p.
Modular arithmetic can be avoided entirely with the multiply-shift scheme, which hashes a w-bit machine word x to
M bits (so that the number of bins is m = 2^M) as

  h_a(x) = (a·x mod 2^w) div 2^(w−M),

where a is a random odd positive integer with a < 2^w. This can be implemented in C-like programming languages by

(unsigned) (a*x) >> (w-M)

To understand the behavior of this hash function, notice that, if a·x mod 2^w and a·y mod 2^w have the same
highest-order M bits, then a·(x − y) mod 2^w has either all 1's or all 0's as its highest order M bits (depending on
whether a·x mod 2^w or a·y mod 2^w is larger). Since a is a random odd integer and odd integers have inverses in the
ring of integers modulo 2^w, the product a·(x − y) mod 2^w is uniformly distributed among the w-bit values whose
least significant set bit is in the same position as that of x − y. The probability that the highest-order M bits of such a
value are all 0's or all 1's is at most 2/2^M = 2/m, so this family is only 2/m-almost universal; the analysis is tight.
To obtain a truly 'universal' hash function, one can use the multiply-add-shift scheme, which adds a second random
parameter b before shifting and can be implemented in C-like programming languages by

(unsigned) (a*x+b) >> (w-M)

where a is a random odd positive integer and b is a random non-negative integer, both chosen with enough bits that
the high-order part of the product is retained.
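To make the multiply-shift scheme concrete, here is a minimal C sketch for w = 64; the fixed word size and the helper for drawing an odd multiplier are choices made only for this example:

  #include <stdint.h>
  #include <stdlib.h>

  /* Multiply-shift: hash a 64-bit key x to M bits (0 < M <= 64) using a random
   * odd multiplier a.  The multiplication wraps modulo 2^64; the top M bits are kept. */
  static inline uint64_t hash_mult_shift(uint64_t a, uint64_t x, unsigned M) {
      return (a * x) >> (64 - M);
  }

  /* Draw a random odd 64-bit multiplier (rand() is used only to keep the sketch short). */
  static uint64_t random_odd(void) {
      uint64_t a = 0;
      for (int i = 0; i < 4; i++)
          a = (a << 16) ^ (uint64_t)(rand() & 0xFFFF);
      return a | 1u;                    /* force the low bit: a must be odd */
  }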
Hashing vectors
This section is concerned with hashing a fixed-length vector of machine words. Interpret the input as a vector
x = (x_0, ..., x_{k−1}) of k machine words (integers of w bits each). If H is a universal family with the uniform
difference property, the following family (dating back to Carter and Wegman) also has the uniform difference
property (and hence is universal):

  h(x) = (h_0(x_0) + h_1(x_1) + ... + h_{k−1}(x_{k−1})) mod m, where each h_i is chosen independently at random from H.

If m is a power of two, the sum may be replaced by bitwise exclusive or.
In practice, if double-precision arithmetic is available, this is instantiated with the multiply-shift hash family described above.
Initialize the hash function with a vector a = (a_0, ..., a_{k−1}) of random odd integers on 2w
bits each. Then, if the number of bins is m = 2^M with M ≤ w:

  h(x) = ((a_0·x_0 + a_1·x_1 + ... + a_{k−1}·x_{k−1}) mod 2^(2w)) div 2^(2w − M).

It is possible to halve the number of multiplications, which roughly translates to a two-fold speed-up in practice.
Initialize the hash function with a vector a of random odd integers on 2w bits each. The
following hash family is universal:[2]

  h(x) = (((x_0 + a_0)·(x_1 + a_1) + (x_2 + a_2)·(x_3 + a_3) + ...) mod 2^(2w)) div 2^(2w − M).

If double-precision operations are not available, one can interpret the input as a vector of half-words (w/2-bit
integers). The algorithm will then use ⌈k/2⌉ multiplications, where k is the number of half-words in the vector.
Thus, the algorithm runs at a "rate" of one multiplication per word of input.
The same scheme can also be used for hashing integers, by interpreting their bits as vectors of bytes. In this variant,
the vector technique is known as tabulation hashing and it provides a practical alternative to multiplication-based
universal hashing schemes.
Strong universality at high speed is also possible. Initialize the hash function with a vector a of
random integers on 2w bits each and compute a similar sum of products, keeping the top M
bits. Experimentally, it was found to run at 0.2 CPU cycle per byte on recent processors.
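As a sketch of the per-word multiply-shift summation just described, assuming 32-bit input words, 64-bit random odd multipliers, and names chosen purely for this example:

  #include <stdint.h>
  #include <stddef.h>

  /* Hash a fixed-length vector of k 32-bit words to M bits (0 < M <= 32).
   * The multipliers a[i] are random odd 64-bit integers fixed when the hash
   * function is chosen. */
  uint32_t vector_hash(const uint32_t *x, size_t k, const uint64_t *a, unsigned M) {
      uint64_t sum = 0;
      for (size_t i = 0; i < k; i++)
          sum += a[i] * (uint64_t)x[i];      /* arithmetic wraps modulo 2^64 */
      return (uint32_t)(sum >> (64 - M));    /* keep the top M bits */
  }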
Hashing strings
This refers to hashing a variable-sized vector of machine words. If the length of the string can be bounded by a small
number, it is best to use the vector solution from above (conceptually padding the vector with zeros up to the upper
bound). The space required is the maximal length of the string, but the time to evaluate h(x)
is just the length of x. As long as zeroes are forbidden in the string, the zero-padding can be ignored when evaluating the hash function
without affecting universality. Note that if zeroes are allowed in the string, then it might be best to append a
fictitious non-zero (e.g., 1) character to all strings prior to padding: this will ensure that universality is not affected.
If the length of the string cannot be bounded in advance, a classic family treats the string x = (x_0, ..., x_ℓ) as the
coefficients of a polynomial: pick a sufficiently large prime p and a uniformly random a modulo p, and let

  h_a(x) = (x_0 + x_1·a + x_2·a² + ... + x_ℓ·a^ℓ) mod p.

Two distinct strings collide only if a happens to be a root of the difference of the two polynomials, and that
difference has at most ℓ roots modulo p; this implies that the collision probability is at most ℓ/p. Thus, if the prime p
is sufficiently large compared to the length of strings hashed, the family is very close to universal (in
statistical distance).
To mitigate the computational penalty of modular arithmetic, two tricks are used in practice:
1. One chooses the prime p to be close to a power of two, such as a Mersenne prime, which allows arithmetic
modulo p to be implemented without division (using faster operations like addition and shifts).
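A small C sketch of this polynomial family using the Mersenne prime 2^31 − 1, so that the reduction needs no division; the choice of this particular prime, byte-sized string elements, and Horner evaluation are assumptions made for the example:

  #include <stdint.h>
  #include <stddef.h>

  #define P ((1ULL << 31) - 1)   /* Mersenne prime 2^31 - 1 */

  /* Reduce a value < 2^62 modulo 2^31 - 1 using shifts and adds only. */
  static uint64_t mod_p(uint64_t v) {
      v = (v & P) + (v >> 31);
      v = (v & P) + (v >> 31);
      return v >= P ? v - P : v;
  }

  /* h_a(x) = x[0] + x[1]*a + ... + x[len-1]*a^(len-1) mod p, evaluated by
   * Horner's rule from the last character; a is drawn uniformly from [0, p-1]. */
  uint64_t poly_hash(const uint8_t *x, size_t len, uint64_t a) {
      uint64_t h = 0;
      for (size_t i = len; i-- > 0; )
          h = mod_p(h * a + x[i]);
      return h;
  }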
References
[1] , section 5.3
[2] , Equation 1
Further reading
Knuth, Donald Ervin (1998). The Art of Computer Programming, Vol. III: Sorting and Searching (2nd ed.).
Reading, Mass.; London: Addison-Wesley. ISBN 0-201-89685-0.
External links
Open Data Structures - Section 5.1.1 - Multiplicative Hashing (http://opendatastructures.org/versions/
edition-0.1e/ods-java/5_1_ChainedHashTable_Hashin.html#SECTION00811000000000000000)
K-independent hashing
A family of hash functions is said to be k-independent or k-universal if selecting a function at random
from the family guarantees that the hash codes of any k designated keys are independent random variables (see the
precise mathematical definitions below). Such families allow good average case performance in randomized
algorithms or data structures, even if the input data is chosen by an adversary. The trade-offs between the degree of
independence and the efficiency of evaluating the hash function are well studied, and many k-independent families
have been proposed.
Introduction
The goal of hashing is usually to map keys from some large domain (universe) U into a smaller range, such as m
bins (labelled [m] = {0, ..., m − 1}). In the analysis of randomized algorithms and data structures, it is often
desirable for the hash codes of various keys to "behave randomly". For instance, if the hash code of each key were an
independent random choice in [m], the number of keys per bin could be analyzed using the Chernoff bound. A
deterministic hash function cannot offer any such guarantee in an adversarial setting, as the adversary may choose
the keys to be precisely the preimage of a bin. Furthermore, a deterministic hash function does not allow for
rehashing: sometimes the input data turns out to be bad for the hash function (e.g. there are too many collisions), so
one would like to change the hash function.
The solution to these problems is to pick a function randomly from a large family of hash functions. The randomness
in choosing the hash function can be used to guarantee some desired random behavior of the hash codes of any keys
of interest. The first definition along these lines was universal hashing, which guarantees a low collision probability
for any two designated keys. The concept of k-independent hashing, introduced by Wegman and Carter in 1981,
strengthens the guarantees of random behavior to families of k designated keys.
Mathematical Definitions
The strictest definition, introduced by Wegman and Carter under the name "strongly universal hash family", is the
following. A family of hash functions H = { h : U → [m] } is k-independent if for any k distinct keys x_1, ..., x_k
and any k hash codes (not necessarily distinct) y_1, ..., y_k, we have:

  Pr[h(x_1) = y_1 and ... and h(x_k) = y_k] = m^(−k), where h is drawn uniformly at random from H.

Equivalently, for any fixed key x the hash code h(x) is uniformly distributed in [m] as h is drawn from the family,
and for any k fixed, distinct keys the hash codes h(x_1), ..., h(x_k) are independent random variables. Achieving this
joint probability exactly can be inconvenient; however, relaxing it even by a factor that is close to 1 means the hash
codes are no longer exactly independent, which matters
in the analysis of randomized algorithms. Therefore, a more common alternative to dealing with rounding issues is to
prove that the hash family is close in statistical distance to a k-independent family, which allows black-box use of
the independence properties.
References
Further reading
Motwani, Rajeev; Raghavan, Prabhakar (1995). Randomized Algorithms. Cambridge University Press. p.221.
ISBN0-521-47465-5.
Tabulation hashing
In computer science, tabulation hashing is a method for constructing universal families of hash functions by
combining table lookup with exclusive or operations. It is simple and fast enough to be usable in practice, and has
theoretical properties that (in contrast to some other universal hashing methods) make it usable with linear probing,
cuckoo hashing, and the MinHash technique for estimating the size of set intersections. The first instance of
tabulation hashing is Zobrist hashing (1969). It was later rediscovered by Carter & Wegman (1979) and studied in
more detail by Pătrașcu & Thorup (2011).
Method
Let p denote the number of bits in a key to be hashed, and q denote the number of bits desired in an output hash
function. Let r be a number smaller than p, and let t be the smallest integer that is at least as large as p/r. For
instance, if r=8, then an r-bit number is a byte, and t is the number of bytes per key.
The key idea of tabulation hashing is to view a key as a vector of t r-bit numbers, use a lookup table filled with
random values to compute a hash value for each of the r-bit numbers representing a given key, and combine these
values with the bitwise binary exclusive or operation. The choice of t and r should be made in such a way that this
table is not too large; e.g., so that it fits into the computer's cache memory.
The initialization phase of the algorithm creates a two-dimensional array T of dimensions 2r by t, and fills the array
with random numbers. Once the array T is initialized, it can be used to compute the hash value h(x) of any given key
x. To do so, partition x into r-bit values, where x0 consists of the low order r bits of x, x1 consists of the next r bits,
etc. (E.g., again, with r=8, xi is just the ith byte of x). Then, use these values as indices into T and combine them
with the exclusive or operation:
h(x) = T[x0,0] ⊕ T[x1,1] ⊕ T[x2,2] ⊕ ...
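For concreteness, a minimal C sketch of this method for 32-bit keys with r = 8 and t = 4; the table layout (indexed first by byte position) and the use of rand() to fill it are choices made only for this example:

  #include <stdint.h>
  #include <stdlib.h>

  static uint32_t T[4][256];   /* one 256-entry table of random values per byte position */

  void tab_init(void) {
      for (int i = 0; i < 4; i++)
          for (int j = 0; j < 256; j++)
              T[i][j] = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
  }

  /* h(x) = T[0][x0] xor T[1][x1] xor T[2][x2] xor T[3][x3],
   * where xi is the i-th byte of the key x. */
  uint32_t tab_hash(uint32_t x) {
      return T[0][x & 0xFF]
           ^ T[1][(x >> 8) & 0xFF]
           ^ T[2][(x >> 16) & 0xFF]
           ^ T[3][(x >> 24) & 0xFF];
  }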
Universality
Carter & Wegman (1979) define a randomized scheme for generating hash functions to be universal if, for any two
keys, the probability that they collide (that is, they are mapped to the same value as each other) is 1/m, where m is
the number of values that the keys can take on. They defined a stronger property in the subsequent paper Wegman &
Carter (1981): a randomized scheme for generating hash functions is k-independent if, for every k-tuple of keys, and
each possible k-tuple of values, the probability that those keys are mapped to those values is 1/mk. 2-independent
hashing schemes are automatically universal, and any universal hashing scheme can be converted into a
2-independent scheme by storing a random number x in the initialization phase of the algorithm and adding x to each
hash value, so universality is essentially the same as 2-independence, but k-independence for larger values of k is a
stronger property, held by fewer hashing algorithms.
As Pătrașcu & Thorup (2011) observe, tabulation hashing is 3-independent but not 4-independent. For any single key
x, T[x0,0] is equally likely to take on any hash value, and the exclusive or of T[x0,0] with the remaining table values
does not change this property. For any two keys x and y, x is equally likely to be mapped to any hash value as before,
and there is at least one position i where xi ≠ yi; the table value T[yi,i] is used in the calculation of h(y) but not in the
calculation of h(x), so even after the value of h(x) has been determined, h(y) is equally likely to be any valid hash
value. Similarly, for any three keys x, y, and z, at least one of the three keys has a position i where its value zi differs
from the other two, so that even after the values of h(x) and h(y) are determined, h(z) is equally likely to be any valid
hash value.
However, this reasoning breaks down for four keys because there are sets of keys w, x, y, and z where none of the
four has a byte value that it does not share with at least one of the other keys. For instance, if the keys have two bytes
each, and w, x, y, and z are the four keys that have either zero or one as their byte values, then each byte value in
each position is shared by exactly two of the four keys. For these four keys, the hash values computed by tabulation
hashing will always satisfy the equation h(w) ⊕ h(x) ⊕ h(y) ⊕ h(z) = 0, whereas for a 4-independent hashing scheme
the same equation would only be satisfied with probability 1/m. Therefore, tabulation hashing is not 4-independent.
Siegel (2004) uses the same idea of using exclusive or operations to combine random values from a table, with a
more complicated algorithm based on expander graphs for transforming the key bits into table indices, to define
hashing schemes that are k-independent for any constant or even logarithmic value of k. However, the number of
table lookups needed to compute each hash value using Siegel's variation of tabulation hashing, while constant, is
still too large to be practical, and the use of expanders in Siegel's technique also makes it not fully constructive.
One limitation of tabulation hashing is that it assumes that the input keys have a fixed number of bits. Lemire (2012)
has studied variations of tabulation hashing that can be applied to variable-length strings, and shown that they can be
universal (2-independent) but not 3-independent.
Application
Because tabulation hashing is a universal hashing scheme, it can be used in any hashing-based algorithm in which
universality is sufficient. For instance, in hash chaining, the expected time per operation is proportional to the sum of
collision probabilities, which is the same for any universal scheme as it would be for truly random hash functions,
and is constant whenever the load factor of the hash table is constant. Therefore, tabulation hashing can be used to
compute hash functions for hash chaining with a theoretical guarantee of constant expected time per operation.
However, universal hashing is not strong enough to guarantee the performance of some other hashing algorithms.
For instance, for linear probing, 5-independent hash functions are strong enough to guarantee constant time
operation, but there are 4-independent hash functions that fail.[1] Nevertheless, despite only being 3-independent,
tabulation hashing provides the same constant-time guarantee for linear probing.
Cuckoo hashing, another technique for implementing hash tables, guarantees constant time per lookup (regardless of
the hash function). Insertions into a cuckoo hash table may fail, causing the entire table to be rebuilt, but such
failures are sufficiently unlikely that the expected time per insertion (using either a truly random hash function or a
hash function with logarithmic independence) is constant. With tabulation hashing, on the other hand, the best bound
known on the failure probability is higher, high enough that insertions cannot be guaranteed to take constant
expected time. Nevertheless, tabulation hashing is adequate to ensure the linear-expected-time construction of a
cuckoo hash table for a static set of keys that does not change as the table is used.
Algorithms such as Karp-Rabin string search require the efficient computation of hashes of all consecutive sequences of
characters; rolling hash functions are typically used for these problems. Tabulation hashing is used to construct
families of strongly universal functions (for example, hashing by cyclic polynomials).
Notes
[1] For the sufficiency of 5-independent hashing for linear probing, see . For examples of weaker hashing schemes that fail, see .
References
Carter, J. Lawrence; Wegman, Mark N. (1979), "Universal classes of hash functions", Journal of Computer and
System Sciences 18 (2): 143154, doi: 10.1016/0022-0000(79)90044-8 (http://dx.doi.org/10.1016/
0022-0000(79)90044-8), MR 532173 (http://www.ams.org/mathscinet-getitem?mr=532173).
Lemire, Daniel (2012), "The universality of iterated hashing over variable-length strings", Discrete Applied
Mathematics 160: 604617, arXiv: 1008.1715 (http://arxiv.org/abs/1008.1715), doi:
10.1016/j.dam.2011.11.009 (http://dx.doi.org/10.1016/j.dam.2011.11.009).
Pagh, Anna; Pagh, Rasmus; Ružić, Milan (2009), "Linear probing with constant independence", SIAM Journal on
Computing 39 (3): 11071120, doi: 10.1137/070702278 (http://dx.doi.org/10.1137/070702278), MR
2538852 (http://www.ams.org/mathscinet-getitem?mr=2538852).
Pătrașcu, Mihai; Thorup, Mikkel (2010), "On the k-independence required by linear probing and minwise
independence" (http://people.csail.mit.edu/mip/papers/kwise-lb/kwise-lb.pdf), Automata, Languages and
Programming, 37th International Colloquium, ICALP 2010, Bordeaux, France, July 6-10, 2010, Proceedings,
Part I, Lecture Notes in Computer Science 6198, Springer, pp.715726, doi: 10.1007/978-3-642-14165-2_60
(http://dx.doi.org/10.1007/978-3-642-14165-2_60).
Pătrașcu, Mihai; Thorup, Mikkel (2011), "The power of simple tabulation hashing", Proceedings of the 43rd
annual ACM Symposium on Theory of Computing (STOC '11), pp.110, arXiv: 1011.5200 (http://arxiv.org/
abs/1011.5200), doi: 10.1145/1993636.1993638 (http://dx.doi.org/10.1145/1993636.1993638).
Siegel, Alan (2004), "On universal classes of extremely random constant-time hash functions", SIAM Journal on
Computing 33 (3): 505543 (electronic), doi: 10.1137/S0097539701386216 (http://dx.doi.org/10.1137/
S0097539701386216), MR 2066640 (http://www.ams.org/mathscinet-getitem?mr=2066640).
Wegman, Mark N.; Carter, J. Lawrence (1981), "New hash functions and their use in authentication and set
equality", Journal of Computer and System Sciences 22 (3): 265279, doi: 10.1016/0022-0000(81)90033-7 (http:/
/dx.doi.org/10.1016/0022-0000(81)90033-7), MR 633535 (http://www.ams.org/
mathscinet-getitem?mr=633535).
Cryptographic hash function
A cryptographic hash function (specifically, SHA-1) at work. Note that even small
changes in the source input (here in the word "over") drastically change the resulting
output, by the so-called avalanche effect.
Properties
Most cryptographic hash functions are designed to take a string of any length as input and produce a fixed-length
hash value.
A cryptographic hash function must be able to withstand all known types of cryptanalytic attack. As a minimum, it
must have the following properties:
Pre-image resistance
Given a hash h it should be difficult to find any message m such that h = hash(m). This concept is related to
that of one-way function. Functions that lack this property are vulnerable to preimage attacks.
Second pre-image resistance
Given an input m1 it should be difficult to find another input m2 such that m1 ≠ m2 and hash(m1) = hash(m2).
Functions that lack this property are vulnerable to second-preimage attacks.
Collision resistance
It should be difficult to find two different messages m1 and m2 such that hash(m1) = hash(m2). Such a pair is
called a cryptographic hash collision. This property is sometimes referred to as strong collision resistance. It
requires a hash value at least twice as long as that required for preimage-resistance; otherwise collisions may
be found by a birthday attack.
Degree of difficulty
In cryptographic practice, "difficult" generally means almost certainly beyond the reach of any adversary who must
be prevented from breaking the system for as long as the security of the system is deemed important. The meaning
of the term is therefore somewhat dependent on the application, since the effort that a malicious agent may put into
the task is usually proportional to his expected gain. However, since the needed effort usually grows very quickly
with the digest length, even a thousand-fold advantage in processing power can be neutralized by adding a few dozen
bits to the latter.
In some theoretical analyses "difficult" has a specific mathematical meaning, such as "not solvable in asymptotic
polynomial time". Such interpretations of difficulty are important in the study of provably secure cryptographic hash
functions but do not usually have a strong connection to practical security. For example, an exponential time
algorithm can sometimes still be fast enough to make a feasible attack. Conversely, a polynomial time algorithm
(e.g., one that requires n^20 steps for n-digit keys) may be too slow for any practical use.
Illustration
An illustration of the potential use of a cryptographic hash is as follows: Alice poses a tough math problem to Bob
and claims she has solved it. Bob would like to try it himself, but would yet like to be sure that Alice is not bluffing.
Therefore, Alice writes down her solution, computes its hash and tells Bob the hash value (whilst keeping the
solution secret). Then, when Bob comes up with the solution himself a few days later, Alice can prove that she had
the solution earlier by revealing it and having Bob hash it and check that it matches the hash value given to him
before. (This is an example of a simple commitment scheme; in actual practice, Alice and Bob will often be
computer programs, and the secret would be something less easily spoofed than a claimed puzzle solution).
Applications
Verifying the integrity of files or messages
An important application of secure hashes is verification of message integrity. Determining whether any changes
have been made to a message (or a file), for example, can be accomplished by comparing message digests calculated
before, and after, transmission (or any other event).
For this reason, most digital signature algorithms only confirm the authenticity of a hashed digest of the message to
be "signed". Verifying the authenticity of a hashed digest of the message is considered proof that the message itself
is authentic.
MD5 or SHA1 hashes are sometimes posted along with files on websites or forums to allow verification of integrity.
This practice is not secure by itself because of the chain-of-trust problem: the posted hashes cannot be trusted any more than the files themselves,
unless these websites are authenticated by HTTPS, in which case the hashes are largely redundant.
Password verification
A related application is password verification. Storing all user passwords as cleartext can result in a massive security
breach if the password file is compromised. One way to reduce this danger is to only store the hash digest of each
password. To authenticate a user, the password presented by the user is hashed and compared with the stored hash.
(Note that this approach prevents the original passwords from being retrieved if forgotten or lost, and they have to be
replaced with new ones.) The password is often concatenated with a random, non-secret salt value before the hash
function is applied. The salt is stored with the password hash. Because users have different salts, it is not feasible to
store tables of precomputed hash values for common passwords. Key stretching functions, such as PBKDF2, Bcrypt
or Scrypt, typically use repeated invocations of a cryptographic hash to increase the time required to perform brute
force attacks on stored password digests.
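To make the salting scheme above concrete, here is a schematic C sketch; the record layout, the function names, and the toy stand-in hash are hypothetical choices for this example, and a real system would use a vetted key-stretching function such as those named in the text:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  /* Toy stand-in hash (NOT secure), used only so the sketch is self-contained;
   * a real system would call a library implementation of PBKDF2, bcrypt or scrypt. */
  static void toy_hash(const char *buf, size_t len, char out[17]) {
      uint64_t h = 0x9E3779B97F4A7C15ull;               /* simple multiply/xor mixing */
      for (size_t i = 0; i < len; i++) { h ^= (unsigned char)buf[i]; h *= 1099511628211ull; }
      snprintf(out, 17, "%016llx", (unsigned long long)h);
  }

  /* Verify a password attempt against a stored (salt, digest) pair:
   * recompute hash(salt || attempt) and compare with the stored digest. */
  int password_ok(const char *attempt, const char *salt, const char *stored_digest) {
      char buf[256], digest[17];
      snprintf(buf, sizeof buf, "%s%s", salt, attempt);
      toy_hash(buf, strlen(buf), digest);
      return strcmp(digest, stored_digest) == 0;        /* use a constant-time compare in real code */
  }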
In 2013 a long-term competition was announced to choose a new, standard algorithm for password hashing.
Merkle–Damgård construction
The Merkle–Damgård hash construction (figure).
A hash function must be able to process an arbitrary-length message into a fixed-length output. This can be
achieved by breaking the input up into a series of equal-sized blocks, and operating on them in sequence using a
one-way compression function. The compression function can either be specially designed for hashing or be
built from a block cipher. A hash function built with the Merkle–Damgård construction is as
resistant to collisions as is its compression function; any collision for the full hash function can be traced back to a
collision in the compression function.
The last block processed should also be unambiguously length padded; this is crucial to the security of this
construction. This construction is called the Merkle–Damgård construction. Most widely used hash functions,
including SHA-1 and MD5, take this form.
The construction has certain inherent flaws, including length-extension and generate-and-paste attacks, and cannot
be parallelized. As a result, many entrants in the current NIST hash function competition are built on different,
sometimes novel, constructions.
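The iteration itself is simple; the following C sketch shows the block loop and the length padding described above, with a toy stand-in compression function (NOT a secure design) and an illustrative initial value, both of which are assumptions made for this example:

  #include <stdint.h>
  #include <string.h>

  #define BLOCK 64   /* block size in bytes (512 bits, as in MD5 and SHA-1) */

  /* Toy stand-in for a one-way compression function; in a real hash this is the
   * specially designed or block-cipher-based function described in the text. */
  static void compress(uint32_t state[5], const uint8_t block[BLOCK]) {
      for (int i = 0; i < BLOCK; i++) {
          uint32_t v = block[i] + 0x9E3779B9u * (uint32_t)(i + 1);
          state[i % 5] = ((state[i % 5] << 7) | (state[i % 5] >> 25)) ^ (state[(i + 1) % 5] + v);
      }
  }

  /* Merkle-Damgard iteration: chain the compression function over fixed-size blocks,
   * then append a 1 bit, zero padding, and the 64-bit message length. */
  void md_hash(const uint8_t *msg, uint64_t len, uint32_t state[5]) {
      static const uint32_t iv[5] =
          {0x67452301u, 0xEFCDAB89u, 0x98BADCFEu, 0x10325476u, 0xC3D2E1F0u};  /* illustrative IV */
      memcpy(state, iv, sizeof iv);

      uint64_t i = 0;
      for (; i + BLOCK <= len; i += BLOCK)
          compress(state, msg + i);                  /* all full blocks */

      uint8_t last[2 * BLOCK] = {0};
      uint64_t rem = len - i;
      memcpy(last, msg + i, (size_t)rem);
      last[rem] = 0x80;                              /* the mandatory 1 bit */
      size_t total = (rem + 1 + 8 <= BLOCK) ? BLOCK : 2 * BLOCK;
      uint64_t bits = len * 8;                       /* unambiguous length padding */
      for (int b = 0; b < 8; b++)
          last[total - 1 - b] = (uint8_t)(bits >> (8 * b));
      compress(state, last);                         /* one or two final blocks */
      if (total == 2 * BLOCK)
          compress(state, last + BLOCK);
  }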
Theoretical weaknesses of SHA-1 exist as well,[6][7] suggesting that it may be practical to break within years. New
applications can avoid these problems by using more advanced members of the SHA family, such as SHA-2, or
using techniques such as randomized hashing[8][9] that do not require collision resistance.
However, to ensure the long-term robustness of applications that use hash functions, there was a competition to
design a replacement for SHA-2. On October 2, 2012, Keccak was selected as the winner of the NIST hash function
competition. A version of this algorithm is expected to become a FIPS standard in 2014 under the name SHA-3.[10]
Some of the following algorithms are used often in cryptography; consult the article for each specific algorithm for
more information on the status of each algorithm. Note that this list does not include candidates in the current NIST
hash function competition. For additional hash functions see the box at the bottom of the page.
Algorithm
GOST
256
Internal
state
[11]
size
256
Block size
Length Word
size
size
Rounds
Collision
256
256
32
256
HAVAL
256/224/192/160/128
256
1,024
64
32
160/128/96
MD2
128
384
128
32
864
MD4
128
128
512
64
32
[13]
Yes ( 2105
Yes
Yes ( 263.3
48
Yes ( 3
[15]
[17]
Second
preimage
Preimage
[14]
Yes
(
[13]
2192
)
Yes ( 2192
[13]
)
No
No
Yes ( 273
[16]
)
Yes ( 273
[16]
)
Yes ( 264
[18]
)
Yes ( 278.4
[18]
)
MD5
128
128
512
64
32
64
Yes ( 220.96
[19]
)
Yes ( 2123.4
[20]
)
Yes ( 2123.4
[20]
)
PANAMA
256
8,736
256
32
Yes
No
No
RadioGatn
Up to 608/1,216 (19
words)
58 words
3 words
164
With flaws (
2352 or 2704
[21]
)
No
No
RIPEMD
128
128
512
64
32
48
Yes ( 218
No
No
RIPEMD-128/256
128/256
128/256
512
64
32
64
No
No
No
RIPEMD-160
160
160
512
64
32
80
Yes ( 251:48
[22]
)
No
No
RIPEMD-320
320
320
512
64
32
80
No
No
No
SHA-0
160
160
512
64
32
80
Yes ( 233.6
No
No
SHA-1
160
160
512
64
32
80
Yes ( 251
No
No
SHA-256/224
256/224
256
512
64
32
64
[25]
Theoretical
[26]
( 228.5:24
)
Theoretical
( 2248.4:42
[18]
)
Theoretical
( 2248.4:42
[18]
)
SHA-512/384
512/384
512
1,024
128
64
80
Theoretical (
[26]
232.5:24
)
Theoretical
( 2494.6:42
[18]
)
Theoretical
( 2494.6:42
[18]
)
SHA-3
[27]
224/256/384/512
[15]
[23]
[24]
1600
1600-2*bits
64
24
No
No
No
SHA-3-224
224
1600
1152
64
24
No
No
No
SHA-3-256
256
1600
1088
64
24
No
No
No
SHA-3-384
384
1600
832
64
24
No
No
No
SHA-3-512
512
1600
576
64
24
No
No
No
Tiger(2)-192/160/128
192/160/128
192
512
64
64
24
Yes ( 262:19
[28]
)
Yes ( 2184.3
[18]
)
Yes ( 2184.3
[18]
)
WHIRLPOOL
512
512
512
256
10
Yes ( 2120:4.5
[29]
)
No
No
Notes
[1] Note that any two messages that collide the concatenated function also collide each component function, by the nature of concatenation. For
example, if concat(sha1(message1), md5(message1)) == concat(sha1(message2), md5(message2)) then sha1(message1) == sha1(message2)
and md5(message1)==md5(message2). The concatenated function could have other problems that the strongest hash lacks -- for example, it
might leak information about the message when the strongest component does not, or it might be detectably nonrandom when the strongest
component is not -- but it can't be less collision-resistant.
[2] More generally, if an attack can produce a collision in one hash function's internal state, attacking the combined construction is only as
difficult as a birthday attack against the other function(s). For the detailed argument, see the Joux and Finney references that follow.
[3] Antoine Joux. Multicollisions in Iterated Hash Functions. Application to Cascaded Constructions. LNCS 3152/2004, pages 306-316 Full text
(http:/ / www. springerlink. com/ index/ DWWVMQJU0N0A3UGJ. pdf).
[4] http:/ / article. gmane. org/ gmane. comp. encryption. general/ 5154
[5] Alexander Sotirov, Marc Stevens, Jacob Appelbaum, Arjen Lenstra, David Molnar, Dag Arne Osvik, Benne de Weger, MD5 considered
harmful today: Creating a rogue CA certificate (http:/ / www. win. tue. nl/ hashclash/ rogue-ca/ ), accessed March 29, 2009
[6] Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu, Finding Collisions in the Full SHA-1 (http:/ / people. csail. mit. edu/ yiqun/
SHA1AttackProceedingVersion. pdf)
[7] Bruce Schneier, Cryptanalysis of SHA-1 (http:/ / www. schneier. com/ blog/ archives/ 2005/ 02/ cryptanalysis_o. html) (summarizes Wang et
al. results and their implications)
[8] Shai Halevi, Hugo Krawczyk, Update on Randomized Hashing (http:/ / csrc. nist. gov/ groups/ ST/ hash/ documents/
HALEVI_UpdateonRandomizedHashing0824. pdf)
[9] Shai Halevi and Hugo Krawczyk, Randomized Hashing and Digital Signatures (http:/ / www. ee. technion. ac. il/ ~hugo/ rhash/ )
[10] NIST.gov - Computer Security Division - Computer Security Resource Center (http:/ / csrc. nist. gov/ groups/ ST/ hash/ sha-3/ timeline_fips.
html)
[11] The internal state here means the "internal hash sum" after each compression of a data block. Most hash algorithms also internally use some
additional variables such as length of the data compressed so far since that is needed for the length padding in the end. See the
Merkle–Damgård construction for details.
[12] When omitted, rounds are full number.
[13] http:/ / www. springerlink. com/ content/ 2514122231284103/
[14] There isn't a unique second preimage attack against this hash. However, the second preimage challenge reduces to the ordinary preimage
attack by simply constructing a hash of the given message.
[15] http:/ / www. springerlink. com/ content/ n5vrtdha97a2udkx/
[16] http:/ / eprint. iacr. org/ 2008/ 089. pdf
[17] http:/ / www. springerlink. com/ content/ v6526284mu858v37/
[18] http:/ / eprint. iacr. org/ 2010/ 016. pdf
[19] http:/ / eprint. iacr. org/ 2009/ 223. pdf
[20] http:/ / springerlink. com/ content/ d7pm142n58853467/
[21] http:/ / eprint. iacr. org/ 2008/ 515
[22] http:/ / www. springerlink. com/ content/ 3540l03h1w31n6w7
[23] http:/ / www. springerlink. com/ content/ 3810jp9730369045/
[24] http:/ / eprint. iacr. org/ 2008/ 469. pdf
[25] There is no known attack against the full version of this hash function, however there is an attack against this hashing scheme when the
number of rounds is reduced.
[26] http:/ / eprint. iacr. org/ 2008/ 270. pdf
[27] Although the underlying algorithm Keccak has arbitrary hash lengths, the NIST specified 224, 256, 384 and 512 bits output as valid modes
for SHA-3.
[28] http:/ / www. springerlink. com/ content/ u762587644802p38/
[29] https:/ / www. cosic. esat. kuleuven. be/ fse2009/ slides/ 2402_1150_Schlaeffer. pdf
References
External links
Christof Paar, Jan Pelzl, "Hash Functions" (http://wiki.crypto.rub.de/Buch/movies.php), Chapter 11 of
"Understanding Cryptography, A Textbook for Students and Practitioners". (companion web site contains online
cryptography course that covers hash functions), Springer, 2009.
"The ECRYPT Hash Function Website" (http://ehash.iaik.tugraz.at/wiki/The_eHash_Main_Page)
"Series of mini-lectures about cryptographic hash functions" (http://www.guardtime.com/
educational-series-on-hashes/) by A. Buldas, 2011.
"Cryptographic Hash-Function Basics: Definitions, Implications, and Separations for Preimage Resistance,
Second-Preimage Resistance, and Collision Resistance" (http://citeseerx.ist.psu.edu/viewdoc/
summary?doi=10.1.1.3.6200) by P. Rogaway, T. Shrimpton, 2004
Sets
Set (abstract data type)
In computer science, a set is an abstract data structure that can store certain values, without any particular order, and
no repeated values. It is a computer implementation of the mathematical concept of a finite set. Unlike most other
collection types, rather than retrieving a specific element from a set, one typically tests a value for membership in a
set.
Some set data structures are designed for static or frozen sets that do not change after they are constructed. Static
sets allow only query operations on their elements such as checking whether a given value is in the set, or
enumerating the values in some arbitrary order. Other variants, called dynamic or mutable sets, allow also the
insertion and deletion of elements from the set.
An abstract data structure is a collection, or aggregate, of data. The data may be booleans, numbers, characters, or
other data structures. If one considers the structure yielded by packaging[1] or indexing,[2] there are four basic data
structures:
1. unpackaged, unindexed: bunch
2. packaged, unindexed: set
3. unpackaged, indexed: string (sequence)
4. packaged, indexed: list (array)
In this view, the contents of a set are a bunch, and isolated data items are elementary bunches (elements). Whereas
sets contain elements, bunches consist of elements.
Further structuring may be achieved by considering the multiplicity of elements (sets become multisets, bunches
become hyperbunches) or their homogeneity (a record is a set of fields, not necessarily all of the same type).
Implementations
A set can be implemented in many ways. For example, one can use a list, ignoring the order of the elements and
taking care to avoid repeated values. Sets are often implemented using various flavors of trees, tries, or hash tables.
A set can be seen, and implemented, as a (partial) associative array, in which the value of each key-value pair has the
unit type.
Type theory
In type theory, sets are generally identified with their indicator function (characteristic function): accordingly, a set
of values of type A may be denoted by 2^A or P(A). (Subtypes and subsets may be modeled by refinement types, and
quotient sets may be replaced by setoids.)
Operations
Core set-theoretical operations
One may define the operations of the algebra of sets:
union(S,T): returns the union of sets S and T.
intersection(S,T): returns the intersection of sets S and T.
difference(S,T): returns the difference of sets S and T.
subset(S,T): a predicate that tests whether the set S is a subset of the set T.
Static sets
Typical operations that may be provided by a static set structure S are:
is_element_of(x,S): checks whether the value x is in the set S.
is_empty(S): checks whether the set S is empty.
size(S): returns the number of elements in S.
iterate(S): returns a function that returns one more value of S at each call, in some arbitrary order.
enumerate(S): returns a list containing the elements of S in some arbitrary order.
Dynamic sets
Dynamic set structures typically add:
create(): creates a new, initially empty set structure.
create_with_capacity(n): creates a new set structure, initially empty but capable of holding up to n
elements.
add(S,x): adds the element x to S, if it is not present already.
remove(S, x): removes the element x from S, if it is present.
capacity(S): returns the maximum number of values that S can hold.
Some set structures may allow only some of these operations. The cost of each operation will depend on the
implementation, and possibly also on the particular values stored in the set, and the order in which they are inserted.
Additional operations
There are many other operations that can (in principle) be defined in terms of the above, such as pop(S), which
returns and removes an arbitrary element of S, clear(S), which removes all elements of S, and equal(S1,S2), which
checks whether the two sets contain the same elements.
Other operations can be defined for sets with elements of a special type:
sum(S): returns the sum of all elements of S for some definition of "sum". For example, over integers or reals, it
may be defined as fold(0, add, S).
nearest(S,x): returns the element of S that is closest in value to x (by some metric).
Implementations
Sets can be implemented using various data structures, which provide different time and space trade-offs for various
operations. Some implementations are designed to improve the efficiency of very specialized operations, such as
nearest or union. Implementations described as "general use" typically strive to optimize the element_of,
add, and delete operations.
As sets can be interpreted as a kind of map (by the indicator function), sets are commonly implemented in the same
way as maps (associative arrays), namely, a self-balancing binary search tree for sorted sets (which has O(log n) for
most operations), or a hash table for unsorted sets (which has O(1) average-case, but O(n) worst-case, for most
operations). A sorted linear hash table may be used to provide deterministically ordered sets.
Other popular methods include arrays. In particular a subset of the integers 1..n can be implemented efficiently as an
n-bit bit array, which also support very efficient union and intersection operations. A Bloom map implements a set
probabilistically, using a very compact representation but risking a small chance of false positives on queries.
The Boolean set operations can be implemented in terms of more elementary operations (pop, clear, and add),
but specialized algorithms may yield lower asymptotic time bounds. If sets are implemented as sorted lists, for
example, the naive algorithm for union(S,T) will take time proportional to the length m of S times the length n
of T; whereas a variant of the list merging algorithm will do the job in time proportional to m+n. Moreover, there are
specialized set data structures (such as the union-find data structure) that are optimized for one or more of these
operations, at the expense of others.
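As a small illustration of the merging approach, assuming the two sets are stored as duplicate-free sorted arrays of ints (the names here are chosen only for the example):

  #include <stddef.h>

  /* Union of two sorted, duplicate-free integer arrays, written to 'out';
   * returns the number of elements produced.  Time proportional to m + n. */
  size_t sorted_union(const int *s, size_t m, const int *t, size_t n, int *out) {
      size_t i = 0, j = 0, k = 0;
      while (i < m && j < n) {
          if (s[i] < t[j])       out[k++] = s[i++];
          else if (t[j] < s[i])  out[k++] = t[j++];
          else { out[k++] = s[i]; i++; j++; }    /* element in both: emit once */
      }
      while (i < m) out[k++] = s[i++];
      while (j < n) out[k++] = t[j++];
      return k;
  }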
Language support
One of the earliest languages to support sets was Pascal; many languages now include it, whether in the core
language or in a standard library.
Java offers the Set [3] interface to support sets (with the HashSet [4] class implementing it using a hash table),
and the SortedSet [5] sub-interface to support sorted sets (with the TreeSet [6] class implementing it using
a binary search tree).
Apple's Foundation framework (part of Cocoa) provides the Objective-C classes NSSet [7], NSMutableSet
[8]
, NSCountedSet [9], NSOrderedSet [10], and NSMutableOrderedSet [11]. The CoreFoundation
APIs provide the CFSet [12] and CFMutableSet [13] types for use in C.
Python has built-in set and frozenset types [14] since 2.4, and since Python 3.0 and 2.7, supports
non-empty set literals using a curly-bracket syntax, e.g.: {x, y, z}.
The .NET Framework provides the generic HashSet [15] and SortedSet [16] classes that implement the
generic ISet [17] interface.
Smalltalk's class library includes Set and IdentitySet, using equality and identity for inclusion test
respectively. Many dialects provide variations for compressed storage (NumberSet, CharacterSet), for
ordering (OrderedSet, SortedSet, etc.) or for weak references (WeakIdentitySet).
Ruby's standard library includes a set [18] module which contains Set and SortedSet classes that
implement sets using hash tables, the latter allowing iteration in sorted order.
OCaml's standard library contains a Set module, which implements a functional set data structure using binary
search trees.
The GHC implementation of Haskell provides a Data.Set [19] module, which implements a functional set data
structure using binary search trees.
The Tcl Tcllib package provides a set module which implements a set data structure based upon TCL lists.
As noted in the previous section, in languages which do not directly support sets but do support associative arrays,
sets can be emulated using associative arrays, by using the elements as keys, and using a dummy value as the values,
which are ignored.
In C++
In C++, the Standard Template Library (STL) provides the set template class, which implements a sorted set using
a binary search tree; SGI's STL also provides the hash_set template class, which implements a set using a hash
table.
In sets, the elements themselves are the keys, in contrast to sequenced containers, where elements are accessed using
their (relative or absolute) position. Set elements must have a strict weak ordering.
Multiset
A generalization of the notion of a set is that of a multiset or bag, which is similar to a set but allows repeated
("equal") values (duplicates). This is used in two distinct senses: either equal values are considered identical, and are
simply counted, or equal values are considered equivalent, and are stored as distinct items. For example, given a list
of people (by name) and ages (in years), one could construct a multiset of ages, which simply counts the number of
people of a given age. Alternatively, one can construct a multiset of people, where two people are considered
equivalent if their ages are the same (but may be different people and have different names), in which case each pair
(name, age) must be stored, and selecting on a given age gives all the people of a given age.
Formally, it is possible for objects in computer science to be considered "equal" under some equivalence relation but
still distinct under another relation. Some types of multiset implementations will store distinct equal objects as
separate items in the data structure; while others will collapse it down to one version (the first one encountered) and
keep a positive integer count of the multiplicity of the element.
As with sets, multisets can naturally be implemented using hash table or trees, which yield different performance
characteristics.
The set of all bags over type T is given by the expression bag T. If by multiset one considers equal items identical
and simply counts them, then a multiset can be interpreted as a function from the input domain to the non-negative
integers (natural numbers), generalizing the identification of a set with its indicator function. In some cases a
multiset in this counting sense may be generalized to allow negative values, as in Python.
C++'s Standard Template Library implements both sorted and unsorted multisets. It provides the multiset
class for the sorted multiset, as a kind of associative container, which implements this multiset using a
self-balancing binary search tree. It provides the unordered_multiset class for the unsorted multiset, as a
kind of unordered associative containers, which implements this multiset using a hash table. The unsorted
multiset is standard as of C++11; previously SGI's STL provided the hash_multiset class, which was copied
and eventually standardized.
For Java, third-party libraries provide multiset functionality:
Apache Commons Collections provides the Bag [20] and SortedBag interfaces, with implementing classes
like HashBag and TreeBag.
Google Guava provides the Multiset [21] interface, with implementing classes like HashMultiset and
TreeMultiset.
Apple provides the NSCountedSet [9] class as part of Cocoa, and the CFBag [22] and CFMutableBag [23]
types as part of CoreFoundation.
Python's standard library includes collections.Counter [24], which is similar to a multiset.
Smalltalk includes the Bag class, which can be instantiated to use either identity or equality as predicate for
inclusion test.
Where a multiset data structure is not available, a workaround is to use a regular set, but override the equality
predicate of its items to always return "not equal" on distinct objects (however, such will still not be able to store
multiple occurrences of the same object) or use an associative array mapping the values to their integer multiplicities
(this will not be able to distinguish between equal elements at all).
Multisets in SQL
In relational databases, a table can be a (mathematical) set or a multiset, depending on the presence of uniqueness
constraints on some columns (which turns them into a candidate key).
SQL allows the selection of rows from a relational table: this operation will in general yield a multiset, unless the
keyword DISTINCT is used to force the rows to be all different, or the selection includes the primary (or a
candidate) key.
In ANSI SQL the MULTISET keyword can be used to transform a subquery in a collection expression:
SELECT expression1, expression2... FROM table_name...
is a general select that can be used as subquery expression of another more general query, while
MULTISET(SELECT expression1, expression2... FROM table_name...)
transforms the subquery into a collection expression that can be used in another query, or in assignment to a column
of appropriate collection type.
References
[1] "Packaging" consists in supplying a container for an aggregation of objects in order to turn them into a single object. Consider a function call:
without packaging, a function can be called to act upon a bunch only by passing each bunch element as a separate argument, which
complicates the function's signature considerably (and is just not possible in some programming languages). By packaging the bunch's
elements into a set, the function may now be called upon a single, elementary argument: the set object (the bunch's package).
[2] Indexing is possible when the elements being considered are totally ordered. Being without order, the elements of a multiset (for example) do
not have lesser/greater or preceding/succeeding relationships: they can only be compared in absolute terms (same/different).
[3] http://download.oracle.com/javase/7/docs/api/java/util/Set.html
[4] http://download.oracle.com/javase/7/docs/api/java/util/HashSet.html
[5] http://download.oracle.com/javase/7/docs/api/java/util/SortedSet.html
[6] http://download.oracle.com/javase/7/docs/api/java/util/TreeSet.html
[7] http://developer.apple.com/documentation/Cocoa/Reference/Foundation/Classes/NSSet_Class/
[8] http://developer.apple.com/documentation/Cocoa/Reference/Foundation/Classes/NSMutableSet_Class/
[9] http://developer.apple.com/documentation/Cocoa/Reference/Foundation/Classes/NSCountedSet_Class/
[10] http://developer.apple.com/library/mac/#documentation/Foundation/Reference/NSOrderedSet_Class/Reference/Reference.html
[11] https://developer.apple.com/library/mac/#documentation/Foundation/Reference/NSMutableOrderedSet_Class/Reference/Reference.html
[12] http://developer.apple.com/documentation/CoreFoundation/Reference/CFSetRef/
[13] http://developer.apple.com/documentation/CoreFoundation/Reference/CFMutableSetRef/
[14] http://docs.python.org/library/stdtypes.html#set-types-set-frozenset
[15] http://msdn.microsoft.com/en-us/library/bb359438.aspx
[16] http://msdn.microsoft.com/en-us/library/dd412070.aspx
[17] http://msdn.microsoft.com/en-us/library/dd412081.aspx
[18] http://ruby-doc.org/stdlib/libdoc/set/rdoc/index.html
[19] http://hackage.haskell.org/packages/archive/containers/0.2.0.1/doc/html/Data-Set.html
[20] http://commons.apache.org/collections/api-release/org/apache/commons/collections/Bag.html
[21] http://google-collections.googlecode.com/svn/trunk/javadoc/com/google/common/collect/Multiset.html
[22] http://developer.apple.com/documentation/CoreFoundation/Reference/CFBagRef/
[23] http://developer.apple.com/documentation/CoreFoundation/Reference/CFMutableBagRef/
[24] http://docs.python.org/library/collections.html#collections.Counter
Bit array
A bit array (also known as bitmap, bitset, bit string, or bit vector) is an array data structure that compactly stores
bits. It can be used to implement a simple set data structure. A bit array is effective at exploiting bit-level parallelism
in hardware to perform operations quickly. A typical bit array stores kw bits, where w is the number of bits in the
unit of storage, such as a by