Algorithms Wikipedia PDF
en.wikipedia.org
May 7, 2020
On the 28th of April 2012 the contents of the English as well as German Wikibooks and Wikipedia
projects were licensed under Creative Commons Attribution-ShareAlike 3.0 Unported license. A
URI to this license is given in the list of figures on page 2055. If this document is a derived work
from the contents of one of these projects and the content was still licensed by the project under
this license at the time of derivation this document has to be licensed under the same, a similar or a
compatible license, as stated in section 4b of the license. The list of contributors is included in chapter
Contributors on page 1669. The licenses GPL, LGPL and GFDL are included in chapter Licenses on
page 2085, since this book and/or parts of it may or may not be licensed under one or more of these
licenses, and thus require inclusion of these licenses. The licenses of the figures are given in the list of
figures on page 2055. This PDF was generated by the LaTeX typesetting software. The LaTeX source
code is included as an attachment (source.7z.txt) in this PDF file. To extract the source from
the PDF file, you can use the pdfdetach tool included in the poppler suite, or the
http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ utility. Some PDF viewers may also let you save
the attachment to a file. After extracting it from the PDF file you have to rename it to source.7z.
To uncompress the resulting archive we recommend the use of http://www.7-zip.org/. The LaTeX
source itself was generated by a program written by Dirk Hünniger, which is freely available under
an open source license from http://de.wikibooks.org/wiki/Benutzer:Dirk_Huenniger/wb2pdf.
Contents
1 Sorting algorithm 3
1.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Comparison of algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Popular sorting algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5 Memory usage patterns and index sorting . . . . . . . . . . . . . . . . . 22
1.6 Related algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.9 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2 Comparison sort 31
2.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2 Performance limits and advantages of different sorting techniques . . . . 33
2.3 Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4 Number of comparisons required to sort a list . . . . . . . . . . . . . . . 35
2.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3 Selection sort 41
3.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4 Comparison to other sorting algorithms . . . . . . . . . . . . . . . . . . 45
3.5 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4 Insertion sort 51
4.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 Best, worst, and average cases . . . . . . . . . . . . . . . . . . . . . . . 55
4.3 Relation to other sorting algorithms . . . . . . . . . . . . . . . . . . . . 55
4.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.6 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5 Merge sort 63
5.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.2 Natural merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.5 Use with tape drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.6 Optimizing merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.7 Parallel merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.8 Comparison with other sort algorithms . . . . . . . . . . . . . . . . . . 81
5.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
6 Merge sort 87
6.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.2 Natural merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
6.5 Use with tape drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.6 Optimizing merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6.7 Parallel merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.8 Comparison with other sort algorithms . . . . . . . . . . . . . . . . . . 105
6.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7 Quicksort 111
7.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.3 Formal analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
7.4 Relation to other algorithms . . . . . . . . . . . . . . . . . . . . . . . . 123
7.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
7.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
8 Heapsort 135
8.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.3 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.4 Comparison with other sorts . . . . . . . . . . . . . . . . . . . . . . . . 142
8.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
8.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
10 Shellsort 163
10.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10.3 Gap sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
10.4 Computational complexity . . . . . . . . . . . . . . . . . . . . . . . . . 168
10.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.8 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
10.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
20 Trie 287
20.1 History and etymology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
20.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
20.3 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
20.4 Implementation strategies . . . . . . . . . . . . . . . . . . . . . . . . . . 292
20.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
20.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
20.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
34 Xorshift 451
34.1 Example implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 451
34.2 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
34.3 xoshiro and xoroshiro . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
34.4 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
34.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
34.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
34.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
39 Combinatorics 495
39.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
39.2 Approaches and subfields of combinatorics . . . . . . . . . . . . . . . . . 499
39.3 Related fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
39.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
39.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
48 B* 659
48.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
48.2 Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
48.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
48.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
57 Centrality 745
57.1 Definition and characterization of centrality indices . . . . . . . . . . . . 748
57.2 Important limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
57.3 Degree centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
57.4 Closeness centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
57.5 Betweenness centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
57.6 Eigenvector centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
57.7 Katz centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
57.8 PageRank centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
57.9 Percolation centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
57.10 Cross-clique centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
57.11 Freeman centralization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
57.12 Dissimilarity based centrality measures . . . . . . . . . . . . . . . . . . 762
57.13 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
57.14 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
57.15 Notes and references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
57.16 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
62 Color-coding 797
62.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
62.2 The method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
62.3 Derandomization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
62.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
62.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
62.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
66 D* 829
66.1 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
66.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
66.3 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
66.4 Minimum cost versus current cost . . . . . . . . . . . . . . . . . . . . . 840
66.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
66.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841
1 Sorting algorithm
In computer science13 , a sorting algorithm is an algorithm14 that puts elements of a list15 in a
certain order16 . The most frequently used orders are numerical order17 and lexicographical order18 .
Efficient sorting19 is important for optimizing the efficiency20 of other algorithms (such as search21
and merge22 algorithms) that require input data to be in sorted lists. Sorting is also often useful for
canonicalizing23 data and for producing human-readable output. More formally, the output of any
sorting algorithm must satisfy two conditions:
13 https://en.wikipedia.org/wiki/Computer_science
14 https://en.wikipedia.org/wiki/Algorithm
15 https://en.wikipedia.org/wiki/List_(computing)
16 https://en.wikipedia.org/wiki/Total_order
17 https://en.wikipedia.org/wiki/Numerical_order
18 https://en.wikipedia.org/wiki/Lexicographical_order
19 https://en.wikipedia.org/wiki/Sorting
20 https://en.wikipedia.org/wiki/Algorithmic_efficiency
21 https://en.wikipedia.org/wiki/Search_algorithm
22 https://en.wikipedia.org/wiki/Merge_algorithm
23 https://en.wikipedia.org/wiki/Canonicalization
1. The output is in nondecreasing order (each element is no smaller than the previous
element according to the desired total order24 );
2. The output is a permutation25 (a reordering, yet retaining all of the original elements)
of the input.
Further, the input data is often stored in an array26 , which allows random access27 , rather
than a list, which only allows sequential access28 ; though many algorithms can be applied
to either type of data after suitable modification.
Sorting algorithms are often referred to as a word followed by the word ”sort”, and grammatically
they are used in English as noun phrases; for example, in the sentence ”it is inefficient to use
insertion sort on large lists”, the phrase insertion sort refers to the insertion sort29
sorting algorithm.
1.1 History
From the beginning of computing, the sorting problem has attracted a great deal of research,
perhaps due to the complexity of solving it efficiently despite its simple, familiar statement.
Among the authors of early sorting algorithms around 1951 was Betty Holberton30 (née
Snyder), who worked on ENIAC31 and UNIVAC32 .[1][2] Bubble sort33 was analyzed as early
as 1956.[3] Comparison sorting algorithms have a fundamental requirement of Ω(n log n)34
comparisons (some input sequences will require a multiple of n log n comparisons); algo-
rithms not based on comparisons, such as counting sort35 , can have better performance.
Asymptotically optimal algorithms have been known since the mid-20th century—useful
new algorithms are still being invented, with the now widely used Timsort36 dating to 2002,
and the library sort37 being first published in 2006.
Sorting algorithms are prevalent in introductory computer science38 classes, where the abun-
dance of algorithms for the problem provides a gentle introduction to a variety of core algo-
rithm concepts, such as big O notation39 , divide and conquer algorithms40 , data structures41
24 https://en.wikipedia.org/wiki/Total_order
25 https://en.wikipedia.org/wiki/Permutation
26 https://en.wikipedia.org/wiki/Array_data_type
27 https://en.wikipedia.org/wiki/Random_access
28 https://en.wikipedia.org/wiki/Sequential_access
29 https://en.wikipedia.org/wiki/Insertion_sort
30 https://en.wikipedia.org/wiki/Betty_Holberton
31 https://en.wikipedia.org/wiki/ENIAC
32 https://en.wikipedia.org/wiki/UNIVAC
33 https://en.wikipedia.org/wiki/Bubble_sort
34 https://en.wikipedia.org/wiki/Big_omega_notation
35 https://en.wikipedia.org/wiki/Counting_sort
36 https://en.wikipedia.org/wiki/Timsort
37 https://en.wikipedia.org/wiki/Library_sort
38 https://en.wikipedia.org/wiki/Computer_science
39 https://en.wikipedia.org/wiki/Big_O_notation
40 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
41 https://en.wikipedia.org/wiki/Data_structure
such as heaps42 and binary trees43 , randomized algorithms44 , best, worst and average case45
analysis, time–space tradeoffs46 , and upper and lower bounds47 .
1.2 Classification
42 https://en.wikipedia.org/wiki/Heap_(data_structure)
43 https://en.wikipedia.org/wiki/Binary_tree
44 https://en.wikipedia.org/wiki/Randomized_algorithm
45 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
46 https://en.wikipedia.org/wiki/Time%E2%80%93space_tradeoff
47 https://en.wikipedia.org/wiki/Upper_and_lower_bounds
48 https://en.wikipedia.org/wiki/Computational_complexity_theory
49 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
50 https://en.wikipedia.org/wiki/Big_O_notation
51 https://en.wikipedia.org/wiki/Comparison_sort
52 https://en.wikipedia.org/wiki/Computational_complexity_theory
53 https://en.wikipedia.org/wiki/Memory_(computing)
54 https://en.wikipedia.org/wiki/In-place_algorithm
55 #Stability
56 https://en.wikipedia.org/wiki/Comparison_sort
57 https://en.wikipedia.org/wiki/Adaptive_sort
1.2.1 Stability
Figure 2 An example of stable sort on playing cards. When the cards are sorted by
rank with a stable sort, the two 5s must remain in the same order in the sorted output
that they were originally in. When they are sorted with a non-stable sort, the 5s may end
up in the opposite order in the sorted output.
Stable sort algorithms sort repeated elements in the same order that they appear in the
input. When sorting some kinds of data, only part of the data is examined when determining
the sort order. For example, in the card sorting example to the right, the cards are being
sorted by their rank, and their suit is being ignored. This allows the possibility of multiple
different correctly sorted versions of the original list. Stable sorting algorithms choose one
of these, according to the following rule: if two items compare as equal, like the two 5 cards,
then their relative order will be preserved, so that if one came before the other in the input,
it will also come before the other in the output.
Stability is important for the following reason: say that student records consisting of name
and class section are sorted dynamically on a web page, first by name, then by class section
in a second operation. If a stable sorting algorithm is used in both cases, the sort-by-
class-section operation will not change the name order; with an unstable sort, it could be
that sorting by section shuffles the name order. Using a stable sort, users can choose to
sort by section and then by name, by first sorting using name and then sort again using
section, resulting in the name order being preserved. (Some spreadsheet programs obey
this behavior: sorting by name, then by section yields an alphabetical list of students by
section.)
More formally, the data being sorted can be represented as a record or tuple of values, and
the part of the data that is used for sorting is called the key. In the card example, cards are
represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable
if whenever there are two records R and S with the same key, and R appears before S in
the original list, then R will always appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, or more generally, any
data where the entire element is the key, stability is not an issue. Stability is also not an
issue if all keys are different.
Unstable sorting algorithms can be specially implemented to be stable. One way of doing
this is to artificially extend the key comparison, so that comparisons between two objects
with otherwise equal keys are decided using the order of the entries in the original input list
as a tie-breaker. Remembering this order, however, may require additional time and space.
One application for stable sorting algorithms is sorting a list using a primary and secondary
key. For example, suppose we wish to sort a hand of cards such that the suits are in the
order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are
sorted by rank. This can be done by first sorting the cards by rank (using any sort), and
then doing a stable sort by suit:
Figure 3
Within each suit, the stable sort preserves the ordering by rank that was already done. This
idea can be extended to any number of keys and is utilised by radix sort58 . The same effect
can be achieved with an unstable sort by using a lexicographic key comparison, which, e.g.,
compares first by suit, and then compares by rank if the suits are the same.
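As a concrete sketch of the two-pass technique, the following Python fragment sorts a hand first by
rank and then stably by suit; the suit ordering, card values, and variable names used here are
illustrative only, and Python's built-in sorted is used because it is guaranteed to be stable.

SUIT_ORDER = {"clubs": 0, "diamonds": 1, "hearts": 2, "spades": 3}
cards = [(5, "hearts"), (2, "spades"), (5, "clubs"), (9, "hearts"), (2, "clubs")]

# First pass: sort by the secondary key (rank), using any sort.
by_rank = sorted(cards, key=lambda card: card[0])
# Second pass: stable sort by the primary key (suit). Because the sort is
# stable, cards of the same suit keep the rank order from the first pass.
by_suit_then_rank = sorted(by_rank, key=lambda card: SUIT_ORDER[card[1]])

# The same result can be obtained in one pass with a lexicographic key,
# which compares by suit first and breaks ties by rank.
one_pass = sorted(cards, key=lambda card: (SUIT_ORDER[card[1]], card[0]))
assert by_suit_then_rank == one_pass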
1.3 Comparison of algorithms
In this table, n is the number of records to be sorted. The columns ”Average” and ”Worst”
give the time complexity59 in each case, under the assumption that the length of each
key is constant, and that therefore all comparisons, swaps, and other needed operations can
proceed in constant time. ”Memory” denotes the amount of auxiliary storage needed beyond
that used by the list itself, under the same assumption. The run times and the memory
requirements listed below should be understood to be inside big O notation60 , hence the
base of the logarithms does not matter; the notation log² n means (log n)².
58 https://en.wikipedia.org/wiki/Radix_sort
59 https://en.wikipedia.org/wiki/Time_complexity
60 https://en.wikipedia.org/wiki/Big_O_notation
Below is a table of comparison sorts61 . A comparison sort cannot perform better than
O(n log n) on average.[4]
Comparison sorts62
61 https://en.wikipedia.org/wiki/Comparison_sort
63 https://en.wikipedia.org/wiki/Quicksort
64 https://en.wikipedia.org/wiki/Merge_sort
65 https://en.wikipedia.org/wiki/Merge_sort#Parallel_merge_sort
66 https://en.wikipedia.org/wiki/In-place_merge_sort
67 https://en.wikipedia.org/wiki/Introsort
68 https://en.wikipedia.org/wiki/Standard_Template_Library
69 https://en.wikipedia.org/wiki/Heapsort
70 https://en.wikipedia.org/wiki/Insertion_sort
71 https://en.wikipedia.org/wiki/Inversion_(discrete_mathematics)
72 https://en.wikipedia.org/wiki/Block_sort
73 https://en.wikipedia.org/wiki/Merge_sort#Bottom-up_implementation
74 https://en.wikipedia.org/wiki/Sorting_network
75 https://en.wikipedia.org/wiki/Timsort
76 https://en.wikipedia.org/wiki/Selection_sort
Cubesort77 : Best n; Average n log n; Worst n log n; Memory n; Stable: Yes; Method: Insertion. Makes n comparisons when the data is already sorted or reverse sorted.
Shellsort78 : Best n log n; Average n^(4/3); Worst n^(3/2); Memory 1; Stable: No; Method: Insertion. Small code size.
Bubble sort79 : Best n; Average n²; Worst n²; Memory 1; Stable: Yes; Method: Exchanging. Tiny code size.
Tree sort80 : Best n log n; Average n log n; Worst n log n (balanced); Memory n; Stable: Yes; Method: Insertion. When using a self-balancing binary search tree81 .
Cycle sort82 : Best n²; Average n²; Worst n²; Memory 1; Stable: No; Method: Insertion. In-place with theoretically optimal number of writes.
Library sort83 : Best n; Average n log n; Worst n²; Memory n; Stable: Yes; Method: Insertion.
Patience sorting84 : Best n; Average —; Worst n log n; Memory n; Stable: No; Method: Insertion & Selection. Finds all the longest increasing subsequences85 in O(n log n).
Smoothsort86 : Best n; Average n log n; Worst n log n; Memory 1; Stable: No; Method: Selection. An adaptive87 variant of heapsort based upon the Leonardo sequence88 rather than a traditional binary heap89 .
Strand sort90 : Best n; Average n²; Worst n²; Memory n; Stable: Yes; Method: Selection.
Tournament sort91 : Best n log n; Average n log n; Worst n log n; Memory n[12]; Stable: No; Method: Selection. Variation of heapsort.
Cocktail shaker sort92 : Best n; Average n²; Worst n²; Memory 1; Stable: Yes; Method: Exchanging.
Comb sort93 : Best n log n; Average n²; Worst n²; Memory 1; Stable: No; Method: Exchanging. Faster than bubble sort on average.
Gnome sort94 : Best n; Average n²; Worst n²; Memory 1; Stable: Yes; Method: Exchanging. Tiny code size.
UnShuffle sort[13] : Best n; Average kn; Worst kn; Memory n; Stable: No; Method: Distribution and Merge. No exchanges are performed. The parameter k is proportional to the entropy in the input; k = 1 for ordered or reverse-ordered input.
Franceschini's method[14] : Best —; Average n log n; Worst n log n; Memory 1; Stable: Yes; Method: ?
Odd–even sort95 : Best n; Average n²; Worst n²; Memory 1; Stable: Yes; Method: Exchanging. Can be run on parallel processors easily.
77 https://en.wikipedia.org/wiki/Cubesort
78 https://en.wikipedia.org/wiki/Shellsort
79 https://en.wikipedia.org/wiki/Bubble_sort
80 https://en.wikipedia.org/wiki/Tree_sort
81 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
82 https://en.wikipedia.org/wiki/Cycle_sort
83 https://en.wikipedia.org/wiki/Library_sort
84 https://en.wikipedia.org/wiki/Patience_sorting
85 https://en.wikipedia.org/wiki/Longest_increasing_subsequence
86 https://en.wikipedia.org/wiki/Smoothsort
87 https://en.wikipedia.org/wiki/Adaptive_sort
88 https://en.wikipedia.org/wiki/Leonardo_number
89 https://en.wikipedia.org/wiki/Binary_heap
90 https://en.wikipedia.org/wiki/Strand_sort
91 https://en.wikipedia.org/wiki/Tournament_sort
92 https://en.wikipedia.org/wiki/Cocktail_shaker_sort
93 https://en.wikipedia.org/wiki/Comb_sort
94 https://en.wikipedia.org/wiki/Gnome_sort
95 https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
Zip sort : Best n log n; Average n log n; Worst n log n; Memory 1; Stable: Yes; Method: Merging. In-place merge algorithm, minimises data moves.[15]
The following table describes integer sorting96 algorithms and other sorting algorithms that
are not comparison sorts97 . As such, they are not limited to Ω(n log n).[16] Complexities
below assume n items to be sorted, with keys of size k, digit size d, and r the range of
numbers to be sorted. Many of them are based on the assumption that the key size is large
enough that all entries have unique key values, and hence that n ≪ 2^k, where ≪ means ”much
less than”. In the unit-cost random access machine98 model, algorithms with running time
of n·(k/d), such as radix sort, still take time proportional to Θ(n log n), because n is limited
to be not more than 2^(k/d), and a larger number of elements to sort would require a bigger k in
order to store them in the memory.[17]
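To make the roles of the key size k and digit size d concrete, the following is a minimal sketch of a
least-significant-digit radix sort for non-negative integers in Python; the function name and the
default of 8 bits per digit are illustrative choices, not values taken from the table below.

def lsd_radix_sort(items, key_bits=32, digit_bits=8):
    # One bucketing pass per digit, k/d passes in total.
    radix = 1 << digit_bits          # 2**d buckets
    mask = radix - 1
    for shift in range(0, key_bits, digit_bits):
        buckets = [[] for _ in range(radix)]
        for x in items:
            buckets[(x >> shift) & mask].append(x)   # stable within a bucket
        items = [x for bucket in buckets for x in bucket]
    return items

print(lsd_radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]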
Non-comparison sorts
96 https://en.wikipedia.org/wiki/Integer_sorting
97 https://en.wikipedia.org/wiki/Comparison_sort
98 https://en.wikipedia.org/wiki/Random_access_machine
99 https://en.wikipedia.org/wiki/Pigeonhole_sort
100 https://en.wikipedia.org/wiki/Bucket_sort
101 https://en.wikipedia.org/wiki/Bucket_sort
102 https://en.wikipedia.org/wiki/Counting_sort
103 https://en.wikipedia.org/wiki/Radix_sort#Least_significant_digit_radix_sorts
MSD Radix Sort104 : Best —; Average n·(k/d); Worst n·(k/d); Memory n + 2^d; Stable: Yes; n ≪ 2^k: No. Stable version uses an external array of size n to hold all of the bins.
MSD Radix Sort105 (in-place): Best —; Average n·(k/1); Worst n·(k/1); Memory 2^1; Stable: No; n ≪ 2^k: No. d = 1 for in-place, k/1 recursion levels, no count array.
Spreadsort106 : Best n; Average n·(k/d); Worst n·(k/s + d); Memory (k/d)·2^d; Stable: No; n ≪ 2^k: No. Asymptotics are based on the assumption that n ≪ 2^k, but the algorithm does not require this.
Burstsort107 : Best —; Average n·(k/d); Worst n·(k/d); Memory n·(k/d); Stable: No; n ≪ 2^k: No. Has better constant factor than radix sort for sorting strings, though it relies somewhat on specifics of commonly encountered strings.
Flashsort108 : Best n; Average n + r; Worst n²; Memory n; Stable: No; n ≪ 2^k: No. Requires uniform distribution of elements from the domain in the array to run in linear time. If the distribution is extremely skewed then it can go quadratic if the underlying sort is quadratic (it is usually an insertion sort). The in-place version is not stable.
104 https://en.wikipedia.org/wiki/Radix_sort#Most_significant_digit_radix_sorts
105 https://en.wikipedia.org/wiki/Radix_sort#Most_significant_digit_radix_sorts
106 https://en.wikipedia.org/wiki/Spreadsort
107 https://en.wikipedia.org/wiki/Burstsort
108 https://en.wikipedia.org/wiki/Flashsort
Postman sort109 : Best —; Average n·(k/d); Worst n·(k/d); Memory n + 2^d; Stable: —; n ≪ 2^k: No. A variation of bucket sort, which works very similarly to MSD Radix Sort. Specific to post service needs.
1.3.3 Others
Some algorithms are slow compared to those discussed above, such as the bogosort111 with
unbounded run time and the stooge sort112 , which has O(n^2.7) run time. These sorts are usually
described for educational purposes in order to demonstrate how the run time of algorithms
is estimated. The following table describes some sorting algorithms that are impractical for
real-life use in traditional software contexts due to extremely poor performance or specialized
hardware requirements.
Bead sort113 : Best n; Average S; Worst S; Memory n²; Stable: N/A; Comparison: No. Works only with positive integers. Requires specialized hardware for it to run in guaranteed O(n) time. There is a possibility for a software implementation, but the running time will be O(S), where S is the sum of all integers to be sorted; in the case of small integers it can be considered to be linear.
Simple pancake sort114 : Best —; Average n; Worst n; Memory log n; Stable: No; Comparison: Yes. Count is number of flips.
Spaghetti (Poll) sort115 : Best n; Average n; Worst n; Memory n²; Stable: Yes; Comparison: Polling. This is a linear-time, analog algorithm for sorting a sequence of items, requiring O(n) stack space, and the sort is stable. This requires n parallel processors. See spaghetti sort#Analysis116 .
109 https://en.wikipedia.org/wiki/Postman_sort
110 https://en.wikipedia.org/wiki/Samplesort
111 https://en.wikipedia.org/wiki/Bogosort
112 https://en.wikipedia.org/wiki/Stooge_sort
113 https://en.wikipedia.org/wiki/Bead_sort
114 https://en.wikipedia.org/wiki/Pancake_sorting
115 https://en.wikipedia.org/wiki/Spaghetti_sort
116 https://en.wikipedia.org/wiki/Spaghetti_sort#Analysis
Theoretical computer scientists have detailed other sorting algorithms that provide better
than O(n log n) time complexity assuming additional constraints, including:
• Thorup's algorithm, a randomized algorithm for sorting keys from a domain of finite
size, taking O(n log log n) time and O(n) space.[20]
• A randomized integer sorting123 algorithm taking O(n √(log log n)) expected time and O(n)
space.[21]
1.4 Popular sorting algorithms
While there are a large number of sorting algorithms, in practical implementations a few
algorithms predominate. Insertion sort is widely used for small data sets, while for large data
sets an asymptotically efficient sort is used, primarily heap sort, merge sort, or quicksort.
Efficient implementations generally use a hybrid algorithm124 , combining an asymptotically
efficient algorithm for the overall sort with insertion sort for small lists at the bottom
of a recursion. Highly tuned implementations use more sophisticated variants, such as
Timsort125 (merge sort, insertion sort, and additional logic), used in Android, Java, and
Python, and introsort126 (quicksort and heap sort), used (in variant forms) in some C++
sort127 implementations and in .NET.
123 https://en.wikipedia.org/wiki/Integer_sorting
124 https://en.wikipedia.org/wiki/Hybrid_algorithm
125 https://en.wikipedia.org/wiki/Timsort
126 https://en.wikipedia.org/wiki/Introsort
127 https://en.wikipedia.org/wiki/Sort_(C%2B%2B)
For more restricted data, such as numbers in a fixed interval, distribution sorts128 such as
counting sort or radix sort are widely used. Bubble sort and variants are rarely used in
practice, but are commonly found in teaching and theoretical discussions.
When physically sorting objects (such as alphabetizing papers, tests or books) people intu-
itively generally use insertion sorts for small sets. For larger sets, people often first bucket,
such as by initial letter, and multiple bucketing allows practical sorting of very large sets.
Often space is relatively cheap, such as by spreading objects out on the floor or over a large
area, but operations are expensive, particularly moving an object a large distance – locality
of reference is important. Merge sorts are also practical for physical objects, particularly as
two hands can be used, one for each list to merge, while other algorithms, such as heap sort
or quick sort, are poorly suited for human use. Other algorithms, such as library sort129 , a
variant of insertion sort that leaves spaces, are also practical for physical use.
Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on
small data, due to low overhead, but not efficient on large data. Insertion sort is generally
faster than selection sort in practice, due to fewer comparisons and good performance on
almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes,
and thus is used when write performance is a limiting factor.
Insertion sort
Main article: Insertion sort130 Insertion sort131 is a simple sorting algorithm that is relatively
efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated
algorithms. It works by taking elements from the list one by one and inserting them in
their correct position into a new sorted list similar to how we put money in our wallet.[22] In
arrays, the new list and the remaining elements can share the array's space, but insertion
is expensive, requiring shifting all following elements over by one. Shellsort132 (see below)
is a variant of insertion sort that is more efficient for larger lists.
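A minimal Python sketch of the procedure just described, sorting in place within a single array
rather than building a separate output list:

def insertion_sort(a):
    for i in range(1, len(a)):
        value = a[i]
        j = i - 1
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]      # shift larger elements one slot to the right
            j -= 1
        a[j + 1] = value         # insert into its correct position
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]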
Selection sort
Main article: Selection sort133 Selection sort is an in-place134 comparison sort135 . It has
O(n²)136 complexity, making it inefficient on large lists, and generally performs worse than
128 #Distribution_sort
129 https://en.wikipedia.org/wiki/Library_sort
130 https://en.wikipedia.org/wiki/Insertion_sort
131 https://en.wikipedia.org/wiki/Insertion_sort
132 #Shellsort
133 https://en.wikipedia.org/wiki/Selection_sort
134 https://en.wikipedia.org/wiki/In-place_algorithm
135 https://en.wikipedia.org/wiki/Comparison_sort
136 https://en.wikipedia.org/wiki/Big_O_notation
the similar insertion sort137 . Selection sort is noted for its simplicity, and also has perfor-
mance advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the first position, and
repeats these steps for the remainder of the list.[23] It does no more than n swaps, and thus
is useful where swapping is very expensive.
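A minimal Python sketch of selection sort, illustrating that each pass performs at most one swap:

def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):       # find the minimum of the unsorted suffix
            if a[j] < a[smallest]:
                smallest = j
        if smallest != i:
            a[i], a[smallest] = a[smallest], a[i]   # at most one swap per pass
    return a

print(selection_sort([64, 25, 12, 22, 11]))   # [11, 12, 22, 25, 64]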
Practical general sorting algorithms are almost always based on an algorithm with average
time complexity (and generally worst-case complexity) O(n log n), of which the most com-
mon are heap sort, merge sort, and quicksort. Each has advantages and drawbacks, with
the most significant being that simple implementation of merge sort uses O(n) additional
space, and simple implementation of quicksort has O(n²) worst-case complexity. These
problems can be solved or ameliorated at the cost of a more complex algorithm.
While these algorithms are asymptotically efficient on random data, for practical efficiency
on real-world data various modifications are used. First, the overhead of these algorithms
becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching
to insertion sort once the data is small enough. Second, the algorithms often perform poorly
on already sorted data or almost sorted data – these are common in real-world data, and can
be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable138 ,
and stability is often a desirable property in a sort. Thus more sophisticated algorithms
are often employed, such as Timsort139 (based on merge sort) or introsort140 (based on
quicksort, falling back to heap sort).
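The hybrid structure described above can be sketched as follows; this is only an illustration of the
general idea (an asymptotically efficient sort handing small sublists to insertion sort), not the
actual code of Timsort or introsort, and the cutoff of 16 is an arbitrary choice.

def insertion_sort(a):
    for i in range(1, len(a)):
        value, j = a[i], i - 1
        while j >= 0 and a[j] > value:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = value
    return a

def hybrid_sort(a, cutoff=16):
    if len(a) <= cutoff:
        return insertion_sort(a)        # low overhead on small inputs
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return hybrid_sort(less, cutoff) + equal + hybrid_sort(greater, cutoff)

print(hybrid_sort([7, 3, 9, 1, 4, 8, 2, 6, 5, 0]))   # [0, 1, 2, ..., 9]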
Merge sort
Main article: Merge sort141 Merge sort takes advantage of the ease of merging already sorted
lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then
3 with 4...) and swapping them if the first should come after the second. It then merges
each of the resulting lists of two into lists of four, then merges those lists of four, and so on;
until at last two lists are merged into the final sorted list.[24] Of the algorithms described
here, this is the first that scales well to very large lists, because its worst-case running time
is O(n log n). It is also easily applied to lists, not only arrays, as it only requires sequential
access, not random access. However, it has additional O(n) space complexity, and involves
a large number of copies in simple implementations.
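A minimal Python sketch of a top-down merge sort, a common recursive formulation of the same idea
as the bottom-up passes described above:

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]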
Merge sort has seen a relatively recent surge in popularity for practical implementations,
due to its use in the sophisticated algorithm Timsort142 , which is used for the standard sort
137 https://en.wikipedia.org/wiki/Insertion_sort
138 https://en.wikipedia.org/wiki/Unstable_sort
139 https://en.wikipedia.org/wiki/Timsort
140 https://en.wikipedia.org/wiki/Introsort
141 https://en.wikipedia.org/wiki/Merge_sort
142 https://en.wikipedia.org/wiki/Timsort
routine in the programming languages Python143[25] and Java144 (as of JDK7145[26] ). Merge
sort itself is the standard routine in Perl146 ,[27] among others, and has been used in Java at
least since 2000 in JDK1.3147 .[28]
Heapsort
Main article: Heapsort148 Heapsort is a much more efficient version of selection sort149 . It
also works by determining the largest (or smallest) element of the list, placing that at the
end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this
task efficiently by using a data structure called a heap150 , a special type of binary tree151 .[29]
Once the data list has been made into a heap, the root node is guaranteed to be the largest
(or smallest) element. When it is removed and placed at the end of the list, the heap is
rearranged so the largest element remaining moves to the root. Using the heap, finding
the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple
selection sort. This allows Heapsort to run in O(n log n) time, and this is also the worst
case complexity.
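A minimal Python sketch of heapsort: the array is first turned into a max-heap, then the root is
repeatedly swapped to the end and the heap property restored on the shrinking prefix.

def heapsort(a):
    n = len(a)

    def sift_down(root, end):
        # Push a[root] down until the heap property holds in a[root..end].
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1                       # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    for start in range(n // 2 - 1, -1, -1):      # build the heap
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):              # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a

print(heapsort([6, 5, 3, 1, 8, 7, 2, 4]))        # [1, 2, 3, 4, 5, 6, 7, 8]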
Quicksort
Main article: Quicksort152 Quicksort is a divide and conquer153 algorithm154 which relies on
a partition operation: to partition an array, an element called a pivot is selected.[30][31] All
elements smaller than the pivot are moved before it and all greater elements are moved after
it. This can be done efficiently in linear time and in-place155 . The lesser and greater sublists
are then recursively sorted. This yields average time complexity of O(n log n), with low
overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with
in-place partitioning) are typically unstable sorts and somewhat complex, but are among
the fastest sorting algorithms in practice. Together with its modest O(log n) space usage,
quicksort is one of the most popular sorting algorithms and is available in many standard
programming libraries.
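A minimal in-place Python sketch of quicksort using Lomuto partitioning; the middle element is used
as the pivot purely for illustration (pivot selection is discussed below).

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    # Partition: move the pivot to the end, then sweep smaller elements left.
    mid = (lo + hi) // 2
    a[mid], a[hi] = a[hi], a[mid]
    pivot = a[hi]
    store = lo
    for i in range(lo, hi):
        if a[i] < pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[hi] = a[hi], a[store]    # pivot ends up in its final place
    quicksort(a, lo, store - 1)          # sort the lesser sublist
    quicksort(a, store + 1, hi)          # sort the greater sublist
    return a

print(quicksort([3, 7, 8, 5, 2, 1, 9, 5, 4]))   # [1, 2, 3, 4, 5, 5, 7, 8, 9]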
The important caveat about quicksort is that its worst-case performance is O(n²); while this
is rare, in naive implementations (choosing the first or last element as pivot) this occurs
for sorted data, which is a common case. The most complex issue in quicksort is thus
choosing a good pivot element, as consistently poor choices of pivots can result in drastically
slower O(n²) performance, but good choice of pivots yields O(n log n) performance, which
143 https://en.wikipedia.org/wiki/Python_(programming_language)
144 https://en.wikipedia.org/wiki/Java_(programming_language)
145 https://en.wikipedia.org/wiki/JDK7
146 https://en.wikipedia.org/wiki/Perl
147 https://en.wikipedia.org/wiki/Java_version_history#J2SE_1.3
148 https://en.wikipedia.org/wiki/Heapsort
149 https://en.wikipedia.org/wiki/Selection_sort
150 https://en.wikipedia.org/wiki/Heap_(data_structure)
151 https://en.wikipedia.org/wiki/Binary_tree
152 https://en.wikipedia.org/wiki/Quicksort
153 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
154 https://en.wikipedia.org/wiki/Algorithm
155 https://en.wikipedia.org/wiki/In-place_algorithm
is asymptotically optimal. For example, if at each step the median156 is chosen as the
pivot then the algorithm works in O(n log n). Finding the median, such as by the median
of medians157 selection algorithm158 is however an O(n) operation on unsorted lists and
therefore exacts significant overhead with sorting. In practice choosing a random pivot
almost certainly yields O(n log n) performance.
Shellsort
Figure 4 A Shell sort, different from bubble sort in that it moves elements to numerous
swapping positions.
156 https://en.wikipedia.org/wiki/Median
157 https://en.wikipedia.org/wiki/Median_of_medians
158 https://en.wikipedia.org/wiki/Selection_algorithm
Main article: Shell sort159 Shellsort was invented by Donald Shell160 in 1959.[32] It improves
upon insertion sort by moving out of order elements more than one position at a time.
The concept behind Shellsort is that insertion sort performs in O(kn) time, where k is
the greatest distance between two out-of-place elements. This means that generally, it
performs in O(n²), but for data that is mostly sorted, with only a few elements out of place,
it performs faster. So, by first sorting elements far away, and progressively shrinking the
gap between the elements to sort, the final sort computes much faster. One implementation
can be described as arranging the data sequence in a two-dimensional array and then sorting
the columns of the array using insertion sort.
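A minimal Python sketch of Shellsort using the original gap sequence n/2, n/4, ..., 1 (other gap
sequences give better worst-case bounds, as noted below); each pass is an insertion sort over
elements that are gap positions apart.

def shellsort(a):
    gap = len(a) // 2
    while gap > 0:
        for i in range(gap, len(a)):
            value = a[i]
            j = i
            while j >= gap and a[j - gap] > value:
                a[j] = a[j - gap]        # shift elements that are gap apart
                j -= gap
            a[j] = value
        gap //= 2                        # shrink the gap until it reaches 1
    return a

print(shellsort([23, 12, 1, 8, 34, 54, 2, 3]))   # [1, 2, 3, 8, 12, 23, 34, 54]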
The worst-case time complexity of Shellsort is an open problem161 and depends on the
gap sequence used, with known complexities ranging from O(n²) to O(n^(4/3)) and Θ(n log²
n). This, combined with the fact that Shellsort is in-place162 , only needs a relatively small
amount of code, and does not require use of the call stack163 , makes it useful in situations
where memory is at a premium, such as in embedded systems164 and operating system
kernels165 .
159 https://en.wikipedia.org/wiki/Shellsort
160 https://en.wikipedia.org/wiki/Donald_Shell
161 https://en.wikipedia.org/wiki/Open_problem
162 https://en.wikipedia.org/wiki/In-place
163 https://en.wikipedia.org/wiki/Call_stack
164 https://en.wikipedia.org/wiki/Embedded_system
165 https://en.wikipedia.org/wiki/Operating_system_kernel
Bubble sort, and variants such as the shell sort178 and cocktail sort179 , are simple, highly
inefficient sorting algorithms. They are frequently seen in introductory texts due to ease of
analysis, but they are rarely used in practice.
Bubble sort
Figure 6 A bubble sort, a sorting algorithm that continuously steps through a list,
swapping items until they appear in the correct order.
Main article: Bubble sort180 Bubble sort is a simple sorting algorithm. The algorithm starts
at the beginning of the data set. It compares the first two elements, and if the first is greater
than the second, it swaps them. It continues doing this for each pair of adjacent elements
to the end of the data set. It then starts again with the first two elements, repeating until
no swaps have occurred on the last pass.[33] This algorithm's average-case and worst-case
performance are both O(n²), so it is rarely used to sort large, unordered data sets. Bubble sort
can be used to sort a small number of items (where its asymptotic inefficiency is not a
178 https://en.wikipedia.org/wiki/Shell_sort
179 https://en.wikipedia.org/wiki/Cocktail_sort
180 https://en.wikipedia.org/wiki/Bubble_sort
high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly
sorted (that is, where the elements are not significantly out of place). For example, if every
element is out of place by at most one position (e.g. 0123546789 and 1032547698), bubble
sort's exchanges will put them in order on the first pass, the second pass will find all elements
in order, and the sort will take only 2n time.[34]
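A short sketch of this procedure in C. The function name and the swapped flag used to detect the final, swap-free pass are conventional choices made for the example:

/* Bubble sort: repeatedly sweep the array, swapping adjacent out-of-order pairs. */
void bubble_sort(int a[], int n)
{
    int swapped = 1;
    while (swapped)
    {
        swapped = 0;
        /* one pass: compare each pair of adjacent elements */
        for (int i = 1; i < n; i++)
        {
            if (a[i - 1] > a[i])
            {
                int tmp = a[i - 1];
                a[i - 1] = a[i];
                a[i] = tmp;
                swapped = 1;
            }
        }
        /* stop once a full pass produces no swaps */
    }
}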
Comb sort
Main article: Comb sort181 Comb sort is a relatively simple sorting algorithm based on
bubble sort182 and originally designed by Włodzimierz Dobosiewicz in 1980.[35] It was later
rediscovered and popularized by Stephen Lacey and Richard Box with a Byte Magazine183
article published in April 1991. The basic idea is to eliminate turtles, or small values
near the end of the list, since in a bubble sort these slow the sorting down tremendously.
(Rabbits, large values around the beginning of the list, do not pose a problem in bubble
sort.) It accomplishes this by initially swapping elements that are a certain distance from
one another in the array, rather than only swapping elements if they are adjacent to one
another, and then shrinking the chosen distance until it is operating as a normal bubble
sort. Thus, if Shellsort can be thought of as a generalized version of insertion sort that
swaps elements spaced a certain distance away from one another, comb sort can be thought
of as the same generalization applied to bubble sort.
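A sketch of comb sort in C. The shrink factor of roughly 1.3 (implemented below as gap*10/13) is the value commonly suggested for comb sort; it and the function name are choices made for this example:

/* Comb sort: bubble-sort passes over a shrinking gap, ending as a normal bubble sort. */
void comb_sort(int a[], int n)
{
    int gap = n;
    int swapped = 1;
    while (gap > 1 || swapped)
    {
        /* shrink the gap by a factor of about 1.3 each pass, never below 1 */
        gap = (gap * 10) / 13;
        if (gap < 1)
            gap = 1;
        swapped = 0;
        /* bubble-sort pass, but comparing elements that are gap apart */
        for (int i = 0; i + gap < n; i++)
        {
            if (a[i] > a[i + gap])
            {
                int tmp = a[i];
                a[i] = a[i + gap];
                a[i + gap] = tmp;
                swapped = 1;
            }
        }
    }
}

The large-gap passes move turtles toward the front quickly, so by the time gap reaches 1 the remaining bubble sort has little work left.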
Distribution sort
See also: External sorting184 Distribution sort refers to any sorting algorithm where data
is distributed from its input to multiple intermediate structures, which are then gathered
and placed on the output. For example, both bucket sort185 and flashsort186 are distribution
based sorting algorithms. Distribution sorting algorithms can be used on a single processor,
or they can be a distributed algorithm187 , where individual subsets are separately sorted on
different processors, then combined. This allows external sorting188 of data too large to fit
into a single computer's memory.
Counting sort
Main article: Counting sort189 Counting sort is applicable when each input is known to
belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and
181 https://en.wikipedia.org/wiki/Comb_sort
182 https://en.wikipedia.org/wiki/Bubble_sort
183 https://en.wikipedia.org/wiki/Byte_Magazine
184 https://en.wikipedia.org/wiki/External_sorting
185 https://en.wikipedia.org/wiki/Bucket_sort
186 https://en.wikipedia.org/wiki/Flashsort
187 https://en.wikipedia.org/wiki/Distributed_algorithm
188 https://en.wikipedia.org/wiki/External_sorting
189 https://en.wikipedia.org/wiki/Counting_sort
O(|S|) memory where n is the length of the input. It works by creating an integer array of
size |S| and using the ith bin to count the occurrences of the ith member of S in the input.
Each input is then counted by incrementing the value of its corresponding bin. Afterward,
the counting array is looped through to arrange all of the inputs in order. This sorting
algorithm often cannot be used because S needs to be reasonably small for the algorithm
to be efficient, but it is extremely fast and demonstrates great asymptotic behavior as
n increases. It also can be modified to provide stable behavior.
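A sketch in C for the common case where S is the integer range 0 .. k−1. The function name and this simple, key-only form are choices made for the example; the stable prefix-sum variant mentioned above is the one typically used inside radix sort:

#include <stdlib.h>

/* Counting sort: sorts a[0..n-1], whose values are assumed to lie in 0 .. k-1. */
void counting_sort(int a[], int n, int k)
{
    int *count = calloc(k, sizeof *count);   /* one zero-initialised bin per possible value */
    if (!count)
        return;
    /* count occurrences of each value */
    for (int i = 0; i < n; i++)
        count[a[i]]++;
    /* walk the bins in order and write the values back out */
    int out = 0;
    for (int v = 0; v < k; v++)
        for (int c = 0; c < count[v]; c++)
            a[out++] = v;
    free(count);
}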
Bucket sort
Main article: Bucket sort190 Bucket sort is a divide and conquer191 sorting algorithm that
generalizes counting sort192 by partitioning an array into a finite number of buckets. Each
bucket is then sorted individually, either using a different sorting algorithm, or by recursively
applying the bucket sorting algorithm.
A bucket sort works best when the elements of the data set are evenly distributed across
all buckets.
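An illustrative bucket sort sketch in C, assuming values uniformly distributed in [0, 1), one bucket per element, and insertion sort within each bucket. All names and the generous fixed per-bucket capacity are simplifications made for the example:

#include <stdlib.h>

/* Bucket sort: sorts x[0..n-1], values assumed to lie in [0, 1). */
void bucket_sort(double x[], int n)
{
    int nb = n;                                    /* one bucket per element on average */
    double **bucket = malloc(nb * sizeof *bucket);
    int *len = calloc(nb, sizeof *len);
    for (int b = 0; b < nb; b++)
        bucket[b] = malloc(n * sizeof **bucket);   /* worst-case capacity, for simplicity */

    /* distribute each value to its bucket */
    for (int i = 0; i < n; i++)
    {
        int b = (int)(x[i] * nb);
        bucket[b][len[b]++] = x[i];
    }

    /* insertion-sort each bucket, then concatenate the buckets in order */
    int out = 0;
    for (int b = 0; b < nb; b++)
    {
        for (int i = 1; i < len[b]; i++)
        {
            double v = bucket[b][i];
            int j = i;
            while (j > 0 && bucket[b][j - 1] > v)
            {
                bucket[b][j] = bucket[b][j - 1];
                j--;
            }
            bucket[b][j] = v;
        }
        for (int i = 0; i < len[b]; i++)
            x[out++] = bucket[b][i];
        free(bucket[b]);
    }
    free(bucket);
    free(len);
}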
Radix sort
Main article: Radix sort193 Radix sort is an algorithm that sorts numbers by processing
individual digits. n numbers consisting of k digits each are sorted in O(n · k) time. Radix
sort can process digits of each number either starting from the least significant digit194 (LSD)
or starting from the most significant digit195 (MSD). The LSD algorithm first sorts the list
by the least significant digit while preserving their relative order using a stable sort. Then
it sorts them by the next digit, and so on from the least significant to the most significant,
ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the
MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix
sort is not stable. It is common for the counting sort196 algorithm to be used internally by
the radix sort. A hybrid197 sorting approach, such as using insertion sort198 for small bins,
improves the performance of radix sort significantly.
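A sketch of an LSD radix sort in C for non-negative integers, using a stable counting pass per decimal digit. The base and all names are choices made for the example:

#include <stdlib.h>

/* LSD radix sort: processes decimal digits from least to most significant. */
void radix_sort(int a[], int n)
{
    int max = 0;
    for (int i = 0; i < n; i++)
        if (a[i] > max)
            max = a[i];

    int *out = malloc(n * sizeof *out);
    /* one stable counting-sort pass per digit */
    for (int exp = 1; max / exp > 0; exp *= 10)
    {
        int count[10] = {0};
        for (int i = 0; i < n; i++)
            count[(a[i] / exp) % 10]++;
        /* prefix sums give each digit value its range of final positions */
        for (int d = 1; d < 10; d++)
            count[d] += count[d - 1];
        /* scan backwards so equal digits keep their relative order (stability) */
        for (int i = n - 1; i >= 0; i--)
            out[--count[(a[i] / exp) % 10]] = a[i];
        for (int i = 0; i < n; i++)
            a[i] = out[i];
    }
    free(out);
}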
1.5 Memory usage patterns and index sorting
When the size of the array to be sorted approaches or exceeds the available primary mem-
ory, so that (much slower) disk or swap space must be employed, the memory usage pattern
of a sorting algorithm becomes important, and an algorithm that might have been fairly
190 https://en.wikipedia.org/wiki/Bucket_sort
191 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
192 https://en.wikipedia.org/wiki/Counting_sort
193 https://en.wikipedia.org/wiki/Radix_sort
194 https://en.wikipedia.org/wiki/Least_significant_digit
195 https://en.wikipedia.org/wiki/Most_significant_digit
196 https://en.wikipedia.org/wiki/Counting_sort
197 https://en.wikipedia.org/wiki/Hybrid_algorithm
198 https://en.wikipedia.org/wiki/Insertion_sort
efficient when the array fit easily in RAM may become impractical. In this scenario, the
total number of comparisons becomes (relatively) less important, and the number of times
sections of memory must be copied or swapped to and from the disk can dominate the per-
formance characteristics of an algorithm. Thus, the number of passes and the localization
of comparisons can be more important than the raw number of comparisons, since compar-
isons of nearby elements to one another happen at system bus199 speed (or, with caching,
even at CPU200 speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort201 algorithm provides quite reasonable per-
formance with adequate RAM, but due to the recursive way that it copies portions of the
array it becomes much less practical when the array does not fit in RAM, because it may
cause a number of slow copy or move operations to and from disk. In that scenario, another
algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a
relational database202 ) are being sorted by a relatively small key field, is to create an index
into the array and then sort the index, rather than the entire array. (A sorted version of
the entire array can then be produced with one pass, reading from the index, but often even
that is unnecessary, as having the sorted index is adequate.) Because the index is much
smaller than the entire array, it may fit easily in memory where the entire array would not,
effectively eliminating the disk-swapping problem. This procedure is sometimes called ”tag
sort”.[36]
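A sketch of the idea in C, ordering an array of indices with the standard library qsort while the wide records stay in place. The record type, the file-scope pointer used to give the comparator access to the records, and all names are invented for this example:

#include <stdlib.h>

/* A wide record sorted by a small key field. */
struct record { int key; char payload[256]; };

static const struct record *records;   /* table currently being indexed */

static int cmp_index(const void *pa, const void *pb)
{
    int ia = *(const int *)pa, ib = *(const int *)pb;
    /* compare the records the indices refer to, not the indices themselves */
    return (records[ia].key > records[ib].key) - (records[ia].key < records[ib].key);
}

/* Fills idx[0..n-1] with 0..n-1 and sorts it so that
   recs[idx[0]], recs[idx[1]], ... are in key order; recs is never moved. */
void tag_sort(const struct record *recs, int *idx, int n)
{
    for (int i = 0; i < n; i++)
        idx[i] = i;
    records = recs;
    qsort(idx, n, sizeof *idx, cmp_index);
}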
Another technique for overcoming the memory-size problem is external sorting203 ,
for example by combining two algorithms in a way that takes advantage
of the strengths of each to improve overall performance. For instance, the array might be
subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted
using an efficient algorithm (such as quicksort204 ), and the results merged using a k-way
merge similar to that used in mergesort205 . This is faster than performing either mergesort
or quicksort over the entire list.[37][38]
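As a simplified, in-memory illustration of the combine step, the sketch below sorts two chunks independently with qsort and then merges them; a real external sort would write many sorted runs to disk and k-way merge them back. The two-chunk restriction and all names are simplifications for the example:

#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Sorts a[0..n-1] by sorting two halves independently, then merging them. */
void chunked_sort(int a[], int n)
{
    int half = n / 2;
    qsort(a, half, sizeof *a, cmp_int);            /* sort chunk 1 */
    qsort(a + half, n - half, sizeof *a, cmp_int); /* sort chunk 2 */

    int *out = malloc(n * sizeof *out);
    int i = 0, j = half, k = 0;
    /* 2-way merge: repeatedly take the smaller head element */
    while (i < half && j < n)
        out[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < half)
        out[k++] = a[i++];
    while (j < n)
        out[k++] = a[j++];
    memcpy(a, out, n * sizeof *a);
    free(out);
}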
Techniques can also be combined. For sorting very large sets of data that vastly exceed
system memory, even the index may need to be sorted using an algorithm or combination
of algorithms designed to perform reasonably with virtual memory206 , i.e., to reduce the
amount of swapping required.
199 https://en.wikipedia.org/wiki/Computer_bus
200 https://en.wikipedia.org/wiki/Central_Processing_Unit
201 https://en.wikipedia.org/wiki/Quicksort
202 https://en.wikipedia.org/wiki/Relational_database
203 https://en.wikipedia.org/wiki/External_sorting
204 https://en.wikipedia.org/wiki/Quicksort
205 https://en.wikipedia.org/wiki/Mergesort
206 https://en.wikipedia.org/wiki/Virtual_memory
1.6 Related algorithms
Related problems include partial sorting207 (sorting only the k smallest elements of a list, or
alternatively computing the k smallest elements, but unordered) and selection208 (computing
the kth smallest element). These can be solved inefficiently by a total sort, but more
efficient algorithms exist, often derived by generalizing a sorting algorithm. The most
notable example is quickselect209 , which is related to quicksort210 . Conversely, some sorting
algorithms can be derived by repeated application of a selection algorithm; quicksort and
quickselect can be seen as the same pivoting move, differing only in whether one recurses
on both sides (quicksort, divide and conquer211 ) or one side (quickselect, decrease and
conquer212 ).
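A sketch of quickselect in C that makes the shared structure visible: the same partitioning (pivoting) move as quicksort, but descending into only one side. The Lomuto partition with the last element as pivot and the 0-based k are choices made for this example:

/* Lomuto partition of a[lo..hi] around a[hi]; returns the pivot's final index. */
static int partition(int a[], int lo, int hi)
{
    int pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; j++)
    {
        if (a[j] < pivot)
        {
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++;
        }
    }
    int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
    return i;
}

/* Returns the k-th smallest element (k is 0-based) of a[lo..hi]. */
int quickselect(int a[], int lo, int hi, int k)
{
    while (lo < hi)
    {
        int p = partition(a, lo, hi);
        if (k == p)
            return a[p];
        else if (k < p)
            hi = p - 1;          /* continue on the left side only */
        else
            lo = p + 1;          /* ...or on the right side only */
    }
    return a[lo];
}

Quicksort would instead recurse on both sides of the pivot, which is the only structural difference.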
A kind of opposite of a sorting algorithm is a shuffling algorithm213 . These are fundamen-
tally different because they require a source of random numbers. Shuffling can also be
implemented by a sorting algorithm, namely by a random sort: assigning a random number
to each element of the list and then sorting based on the random numbers. This is generally
not done in practice, however, and there is a well-known simple and efficient algorithm for
shuffling: the Fisher–Yates shuffle214 .
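A sketch of the Fisher–Yates shuffle in C. rand() and the modulo reduction are used only to keep the example short; a production implementation would use a better random source and an unbiased range reduction:

#include <stdlib.h>

/* Fisher–Yates shuffle: permutes a[0..n-1] in place, one swap per position. */
void shuffle(int a[], int n)
{
    for (int i = n - 1; i > 0; i--)
    {
        int j = rand() % (i + 1);   /* pick an index in 0..i (modulo bias ignored here) */
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}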
1.7 See also
• Collation215
• Schwartzian transform216
• Search algorithm217
• Quantum sort218
1.8 References
207 https://en.wikipedia.org/wiki/Partial_sorting
208 https://en.wikipedia.org/wiki/Selection_algorithm
209 https://en.wikipedia.org/wiki/Quickselect
210 https://en.wikipedia.org/wiki/Quicksort
211 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
212 https://en.wikipedia.org/wiki/Decrease_and_conquer
213 https://en.wikipedia.org/wiki/Shuffling_algorithm
214 https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle
215 https://en.wikipedia.org/wiki/Collation
216 https://en.wikipedia.org/wiki/Schwartzian_transform
217 https://en.wikipedia.org/wiki/Search_algorithm
218 https://en.wikipedia.org/wiki/Quantum_sort
224 http://mentalfloss.com/article/53160/meet-refrigerator-ladies-who-programmed-eniac
225 https://www.nytimes.com/2001/12/17/business/frances-e-holberton-84-early-computer-programmer.html
226 https://en.wikipedia.org/wiki/ProQuest_(identifier)
227 https://search.proquest.com/docview/301940891
228 https://en.wikipedia.org/wiki/Thomas_H._Cormen
229 https://en.wikipedia.org/wiki/Charles_E._Leiserson
230 https://en.wikipedia.org/wiki/Ron_Rivest
231 https://en.wikipedia.org/wiki/Clifford_Stein
232 https://books.google.com/books?id=NLngYyWFl_YC
233 https://en.wikipedia.org/wiki/ISBN_(identifier)
234 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03293-3
235 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
236 https://books.google.com/books?id=ylAETlep0CwC
237 https://en.wikipedia.org/wiki/ISBN_(identifier)
238 https://en.wikipedia.org/wiki/Special:BookSources/978-81-317-1291-7
239 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
240 https://en.wikipedia.org/wiki/Communications_of_the_ACM
241 https://en.wikipedia.org/wiki/Doi_(identifier)
242 https://doi.org/10.1145%2F359619.359631
7. Ajtai, M.243; Komlós, J.244; Szemerédi, E.245 (1983). An O(n log n) sorting
network. STOC246 '83. Proceedings of the fifteenth annual ACM symposium on Theory
of computing. pp. 1–9. doi247:10.1145/800061.808726248. ISBN249 0-89791-099-0250.
8. Huang, B. C.; Langston, M. A. (December 1992). ”Fast Stable Merging and
Sorting in Constant Extra Space”251 (PDF). Comput. J.252 35 (6): 643–650.
CiteSeerX253 10.1.1.54.8381254. doi255:10.1093/comjnl/35.6.643256.
9. Kim, P. S.; Kutzner, A. (2008). Ratio Based Stable In-Place Merging.
TAMC257 2008. Theory and Applications of Models of Computation. LNCS258.
4978. pp. 246–257. CiteSeerX259 10.1.1.330.2641260. doi261:10.1007/978-3-540-79228-4_22262.
ISBN263 978-3-540-79227-7264.
10. 265
11. ”Selection Sort (Java, C++) - Algorithms and Data Structures”266.
www.algolist.net. Retrieved 14 April 2018.
12. 267
13. Kagel, Art (November 1985). ”Unshuffle, Not Quite a Sort”. Computer
Language. 2 (11).
14. Franceschini, G. (June 2007). ”Sorting Stably, in Place, with O(n log n)
Comparisons and O(n) Moves”. Theory of Computing Systems. 40 (4): 327–353.
doi268:10.1007/s00224-006-1311-1269.
15. C, R. (2020). ”stable-inplace-sorting-algorithms”270.
www.github.com.
243 https://en.wikipedia.org/wiki/Mikl%C3%B3s_Ajtai
244 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
245 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
246 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
247 https://en.wikipedia.org/wiki/Doi_(identifier)
248 https://doi.org/10.1145%2F800061.808726
249 https://en.wikipedia.org/wiki/ISBN_(identifier)
250 https://en.wikipedia.org/wiki/Special:BookSources/0-89791-099-0
251 http://comjnl.oxfordjournals.org/content/35/6/643.full.pdf
252 https://en.wikipedia.org/wiki/The_Computer_Journal
253 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
254 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.8381
255 https://en.wikipedia.org/wiki/Doi_(identifier)
256 https://doi.org/10.1093%2Fcomjnl%2F35.6.643
257 https://en.wikipedia.org/wiki/International_Conference_on_Theory_and_Applications_of_Models_of_Computation
258 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
259 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
260 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.330.2641
261 https://en.wikipedia.org/wiki/Doi_(identifier)
262 https://doi.org/10.1007%2F978-3-540-79228-4_22
263 https://en.wikipedia.org/wiki/ISBN_(identifier)
264 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-79227-7
265 https://qiita.com/hon_no_mushi/items/92ff1a220f179b8d40f9
266 http://www.algolist.net/Algorithms/Sorting/Selection_sort
267 http://dbs.uni-leipzig.de/skripte/ADS1/PDF4/kap4.pdf
268 https://en.wikipedia.org/wiki/Doi_(identifier)
269 https://doi.org/10.1007%2Fs00224-006-1311-1
270 https://github.com/ceorron/stable-inplace-sorting-algorithms
16. Cormen, Thomas H.271; Leiserson, Charles E.272; Rivest, Ronald L.273;
Stein, Clifford274 (2001), ”8”, Introduction To Algorithms275 (2nd ed.), Cambridge,
MA: The MIT Press, p. 165, ISBN276 0-262-03293-7277
17. Nilsson, Stefan (2000). ”The Fastest Sorting Algorithm?”278. Dr.
Dobb's279.
18. Cormen, Thomas H.280; Leiserson, Charles E.281; Rivest, Ronald L.282;
Stein, Clifford283 (2001) [1990]. Introduction to Algorithms284 (2nd ed.). MIT
Press and McGraw-Hill. ISBN285 0-262-03293-7286.
19. Goodrich, Michael T.287; Tamassia, Roberto288 (2002). ”4.5 Bucket-Sort and
Radix-Sort”. Algorithm Design: Foundations, Analysis, and Internet Examples.
John Wiley & Sons. pp. 241–243. ISBN289 978-0-471-38365-9290.
20. Thorup, M.291 (February 2002). ”Randomized Sorting in O(n log log n)
Time and Linear Space Using Addition, Shift, and Bit-wise Boolean Opera-
tions”. Journal of Algorithms. 42 (2): 205–230. doi292:10.1006/jagm.2002.1211293.
21. Han, Y.; Thorup, M.294 (2002). Integer sorting in O(n√(log log n)) expected time
and linear space. The 43rd Annual IEEE Symposium on Foundations of Computer Sci-
ence295. pp. 135–144. doi296:10.1109/SFCS.2002.1181890297. ISBN298 0-7695-1822-2299.
22. Wirth, Niklaus300 (1986), Algorithms & Data Structures, Upper Saddle River,
NJ: Prentice-Hall, pp. 76–77, ISBN301 978-0130220059302
271 https://en.wikipedia.org/wiki/Thomas_H._Cormen
272 https://en.wikipedia.org/wiki/Charles_E._Leiserson
273 https://en.wikipedia.org/wiki/Ron_Rivest
274 https://en.wikipedia.org/wiki/Clifford_Stein
275 https://books.google.com/books?id=NLngYyWFl_YC
276 https://en.wikipedia.org/wiki/ISBN_(identifier)
277 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
278 http://www.drdobbs.com/architecture-and-design/the-fastest-sorting-algorithm/184404062
279 https://en.wikipedia.org/wiki/Dr._Dobb%27s
280 https://en.wikipedia.org/wiki/Thomas_H._Cormen
281 https://en.wikipedia.org/wiki/Charles_E._Leiserson
282 https://en.wikipedia.org/wiki/Ron_Rivest
283 https://en.wikipedia.org/wiki/Clifford_Stein
284 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
285 https://en.wikipedia.org/wiki/ISBN_(identifier)
286 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
287 https://en.wikipedia.org/wiki/Michael_T._Goodrich
288 https://en.wikipedia.org/wiki/Roberto_Tamassia
289 https://en.wikipedia.org/wiki/ISBN_(identifier)
290 https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-38365-9
291 https://en.wikipedia.org/wiki/Mikkel_Thorup
292 https://en.wikipedia.org/wiki/Doi_(identifier)
293 https://doi.org/10.1006%2Fjagm.2002.1211
294 https://en.wikipedia.org/wiki/Mikkel_Thorup
295 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
296 https://en.wikipedia.org/wiki/Doi_(identifier)
297 https://doi.org/10.1109%2FSFCS.2002.1181890
298 https://en.wikipedia.org/wiki/ISBN_(identifier)
299 https://en.wikipedia.org/wiki/Special:BookSources/0-7695-1822-2
300 https://en.wikipedia.org/wiki/Niklaus_Wirth
301 https://en.wikipedia.org/wiki/ISBN_(identifier)
302 https://en.wikipedia.org/wiki/Special:BookSources/978-0130220059
305 http://svn.python.org/projects/python/trunk/Objects/listsort.txt
306 http://cr.openjdk.java.net/~martin/webrevs/openjdk7/timsort/raw_files/new/src/share/classes/java/util/TimSort.java
307 http://perldoc.perl.org/functions/sort.html
308 http://java.sun.com/j2se/1.3/docs/api/java/util/Arrays.html#sort(java.lang.Object%5B%5D)
309 https://web.archive.org/web/20090304021927/http://java.sun.com/j2se/1.3/docs/api/java/util/Arrays.html#sort(java.lang.Object%5B%5D)#sort(java.lang.Object%5B%5D)
310 https://en.wikipedia.org/wiki/Wayback_Machine
313 https://en.wikipedia.org/wiki/Thomas_H._Cormen
314 https://en.wikipedia.org/wiki/Charles_E._Leiserson
315 https://en.wikipedia.org/wiki/Ron_Rivest
316 https://en.wikipedia.org/wiki/Clifford_Stein
317 https://en.wikipedia.org/wiki/ISBN_(identifier)
318 https://en.wikipedia.org/wiki/Special:BookSources/978-0262033848
319 http://penguin.ewu.edu/cscd300/Topic/AdvSorting/p30-shell.pdf
320 https://en.wikipedia.org/wiki/Doi_(identifier)
321 https://doi.org/10.1145%2F368370.368387
323 https://github.com/torvalds/linux/blob/72932611b4b05bbd89fafa369d564ac8e449809b/kernel/groups.c#L105
324 https://en.wikipedia.org/wiki/Information_Processing_Letters
325 https://en.wikipedia.org/wiki/Doi_(identifier)
326 https://doi.org/10.1016%2FS0020-0190%2800%2900223-4
327 https://www.pcmag.com/encyclopedia_term/0,2542,t=tag+sort&i=52532,00.asp
37. Donald Knuth328 , The Art of Computer Programming329 , Volume 3: Sorting and
Searching, Second Edition. Addison-Wesley, 1998, ISBN330 0-201-89685-0331 , Section
5.4: External Sorting, pp. 248–379.
38. Ellis Horowitz332 and Sartaj Sahni333 , Fundamentals of Data Structures, H. Freeman
& Co., ISBN334 0-7167-8042-9335 .
The Wikibook Algorithm implementation344 has a page on the topic of: Sorting algorithms345
The Wikibook A-level Mathematics346 has a page on the topic of: Sorting algorithms347
328 https://en.wikipedia.org/wiki/Donald_Knuth
329 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
330 https://en.wikipedia.org/wiki/ISBN_(identifier)
331 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
332 https://en.wikipedia.org/wiki/Ellis_Horowitz
333 https://en.wikipedia.org/wiki/Sartaj_Sahni
334 https://en.wikipedia.org/wiki/ISBN_(identifier)
335 https://en.wikipedia.org/wiki/Special:BookSources/0-7167-8042-9
336 https://en.wikipedia.org/wiki/Donald_Knuth
337 https://en.wikipedia.org/wiki/ISBN_(identifier)
338 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
339 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
340 https://archive.org/details/computationalpro00actu/page/101
341 https://archive.org/details/computationalpro00actu/page/101
342 https://en.wikipedia.org/wiki/ISBN_(identifier)
343 https://en.wikipedia.org/wiki/Special:BookSources/0-12-394680-8
344 https://en.wikibooks.org/wiki/Algorithm_implementation
345 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting
346 https://en.wikibooks.org/wiki/A-level_Mathematics
347 https://en.wikibooks.org/wiki/A-level_Mathematics/OCR/D1/Algorithms#Sorting_Algorithms
348 https://commons.wikimedia.org/wiki/Category:Sort_algorithms
349 https://web.archive.org/web/20150303022622/http://www.sorting-algorithms.com/
350 https://en.wikipedia.org/wiki/Wayback_Machine
351 http://www.iti.fh-flensburg.de/lang/algorithmen/sortieren/algoen.htm
352 https://www.nist.gov/dads/
353 http://www.softpanorama.org/Algorithms/sorting.shtml
354 https://en.wikipedia.org/wiki/Quicksort
355 https://www.youtube.com/watch?v=kPRA0W1kECg
356 https://oeis.org/A036604
357 https://en.wikipedia.org/wiki/Ford%E2%80%93Johnson_algorithm
358 https://www.youtube.com/watch?v=d2d0r1bArUQ
2 Comparison sort
Figure 7 Sorting a set of unlabelled weights by weight using only a balance scale
requires a comparison sort algorithm.
A comparison sort is a type of sorting algorithm1 that only reads the list elements through
a single abstract comparison operation (often a ”less than or equal to” operator or a three-
way comparison2 ) that determines which of two elements should occur first in the final
1 https://en.wikipedia.org/wiki/Sorting_algorithm
2 https://en.wikipedia.org/wiki/Three-way_comparison
sorted list. The only requirement is that the operator forms a total preorder3 over the data,
with:
1. if a ≤ b and b ≤ c then a ≤ c (transitivity)
2. for all a and b, a ≤ b or b ≤ a (connexity4).
It is possible that both a ≤ b and b ≤ a; in this case either may come first in the sorted list.
In a stable sort5, the input order determines the sorted order in this case.
A metaphor for thinking about comparison sorts is that someone has a set of unlabelled
weights and a balance scale6 . Their goal is to line up the weights in order by their weight
without any information except that obtained by placing two weights on the scale and seeing
which one is heavier (or if they weigh the same).
2.1 Examples
Figure 8 Quicksort in action on a list of numbers. The horizontal lines are pivot values.
3 https://en.wikipedia.org/wiki/Total_preorder
4 https://en.wikipedia.org/wiki/Connex_relation
5 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
6 https://en.wikipedia.org/wiki/Balance_scale
2.2 Performance limits and advantages of different sorting techniques
There are fundamental limits on the performance of comparison sorts. A comparison sort
must have an average-case lower bound of Ω21 (n log n) comparison operations,[1] which is
known as linearithmic22 time. This is a consequence of the limited information available
through comparisons alone — or, to put it differently, of the vague algebraic structure of
totally ordered sets. In this sense, mergesort, heapsort, and introsort are asymptotically
optimal23 in terms of the number of comparisons they must perform, although this metric
neglects other operations. Non-comparison sorts (such as the examples discussed below)
can achieve O24 (n) performance by using operations other than comparisons, allowing them
to sidestep this lower bound (assuming elements are constant-sized).
7 https://en.wikipedia.org/wiki/Quicksort
8 https://en.wikipedia.org/wiki/Heapsort
9 https://en.wikipedia.org/wiki/Shellsort
10 https://en.wikipedia.org/wiki/Merge_sort
11 https://en.wikipedia.org/wiki/Introsort
12 https://en.wikipedia.org/wiki/Insertion_sort
13 https://en.wikipedia.org/wiki/Selection_sort
14 https://en.wikipedia.org/wiki/Bubble_sort
15 https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
16 https://en.wikipedia.org/wiki/Cocktail_shaker_sort
17 https://en.wikipedia.org/wiki/Cycle_sort
18 https://en.wikipedia.org/wiki/Merge-insertion_sort
19 https://en.wikipedia.org/wiki/Smoothsort
20 https://en.wikipedia.org/wiki/Timsort
21 https://en.wikipedia.org/wiki/Big-O_notation
22 https://en.wikipedia.org/wiki/Linearithmic
23 https://en.wikipedia.org/wiki/Asymptotically_optimal
24 https://en.wikipedia.org/wiki/Big-O_notation
Comparison sorts may run faster on some lists; many adaptive sorts25 such as insertion
sort26 run in O(n) time on an already-sorted or nearly-sorted list. The Ω27 (n log n) lower
bound applies only to the case in which the input list can be in any possible order.
Real-world measures of sorting speed may need to take into account the ability of some
algorithms to optimally use relatively fast cached computer memory28 , or the application
may benefit from sorting methods where sorted data begins to appear to the user quickly
(and then the user's reading speed will be the limiting factor) as opposed to sorting methods
where no output is available until the whole list is sorted.
Despite these limitations, comparison sorts offer the notable practical advantage that control
over the comparison function allows sorting of many different datatypes and fine control
over how the list is sorted. For example, reversing the result of the comparison function
allows the list to be sorted in reverse; and one can sort a list of tuples29 in lexicographic
order30 by just creating a comparison function that compares each part in sequence:
function tupleCompare((lefta, leftb, leftc), (righta, rightb, rightc))
if lefta ≠ righta
return compare(lefta, righta)
else if leftb ≠ rightb
return compare(leftb, rightb)
else
return compare(leftc, rightc)
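For illustration, the same comparator written in C for use with the standard library qsort. The struct triple type, its field names, and the helper are invented for this example; reversing the sort order only requires negating the returned value:

#include <stdlib.h>

struct triple { int a, b, c; };

static int cmp_int(int x, int y) { return (x > y) - (x < y); }

/* Lexicographic comparison: compare each field in sequence,
   falling through to the next field only on equality. */
static int tuple_compare(const void *pl, const void *pr)
{
    const struct triple *l = pl;
    const struct triple *r = pr;
    int d = cmp_int(l->a, r->a);
    if (d != 0) return d;
    d = cmp_int(l->b, r->b);
    if (d != 0) return d;
    return cmp_int(l->c, r->c);
}

/* usage: qsort(array, count, sizeof(struct triple), tuple_compare); */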
Balanced ternary31 notation allows comparisons to be made in one step, whose result will
be one of ”less than”, ”greater than” or ”equal to”.
Comparison sorts generally adapt more easily to complex orders such as the order of floating-
point numbers32 . Additionally, once a comparison function is written, any comparison
sort can be used without modification; non-comparison sorts typically require specialized
versions for each datatype.
This flexibility, together with the efficiency of the above comparison sorting algorithms on
modern computers, has led to widespread preference for comparison sorts in most practical
work.
2.3 Alternatives
Some sorting problems admit a strictly faster solution than the Ω(n log n) bound for com-
parison sorting; an example is integer sorting33 , where all keys are integers. When the keys
form a small (compared to n) range, counting sort34 is an example algorithm that runs in
25 https://en.wikipedia.org/wiki/Adaptive_sort
26 https://en.wikipedia.org/wiki/Insertion_sort
27 https://en.wikipedia.org/wiki/Big-O_notation
28 https://en.wikipedia.org/wiki/Random_Access_Memory
29 https://en.wikipedia.org/wiki/Tuple
30 https://en.wikipedia.org/wiki/Lexicographic_order
31 https://en.wikipedia.org/wiki/Balanced_ternary
32 https://en.wikipedia.org/wiki/Floating-point_number
33 https://en.wikipedia.org/wiki/Integer_sorting
34 https://en.wikipedia.org/wiki/Counting_sort
linear time. Other integer sorting algorithms, such as radix sort35 , are not asymptotically
faster than comparison sorting, but can be faster in practice.
The problem of sorting pairs of numbers by their sum36 is not subject to the Ω(n² log n)
bound either (the square resulting from the pairing up); the best known algorithm still takes
O(n² log n) time, but only O(n²) comparisons.
2.4 Number of comparisons required to sort a list

n          ⌈log₂(n!)⌉    n log₂ n − n/ln 2
10         22            19
100        525           521
1 000      8 530         8 524
10 000     118 459       118 451
100 000    1 516 705     1 516 695
1 000 000  18 488 885    18 488 874
35 https://en.wikipedia.org/wiki/Radix_sort
36 https://en.wikipedia.org/wiki/X_%2B_Y_sorting
Above: A comparison of the lower bound ⌈log₂(n!)⌉ to the actual minimum number of
comparisons (from OEIS37: A03660438) required to sort a list of n items (for the worst
case). Below: Using Stirling's approximation39, this lower bound is well-approximated by
n log₂ n − n/ln 2.
The number of comparisons that a comparison sort algorithm requires increases in propor-
tion to n log(n), where n is the number of elements to sort. This bound is asymptotically
tight40 .
Given a list of distinct numbers (we can assume this because this is a worst-case analysis),
there are n factorial41 permutations exactly one of which is the list in sorted order. The
sort algorithm must gain enough information from the comparisons to identify the correct
permutation. If the algorithm always completes after at most f(n) steps, it cannot distin-
guish more than 2^f(n) cases because the keys are distinct and each comparison has only two
possible outcomes. Therefore,
2^f(n) ≥ n!, or equivalently f(n) ≥ log₂(n!).
By looking at the first n/2 factors of n! = n(n − 1) · · · 1, we obtain
log₂(n!) ≥ log₂((n/2)^(n/2)) = (n/2) log₂(n/2) = (n/2) log₂ n − n/2 = Θ(n log n),
and therefore
log₂(n!) = Ω(n log n).
This provides the lower-bound part of the claim. A better bound can be given via Stirling's
approximation42 .
An identical upper bound follows from the existence of the algorithms that attain this bound
in the worst case, like heapsort43 and mergesort44 .
The above argument provides an absolute, rather than only asymptotic, lower bound on the
number of comparisons, namely ⌈log₂(n!)⌉ comparisons. This lower bound is fairly good (it
can be approached within a linear tolerance by a simple merge sort), but it is known to be
inexact. For example, ⌈log₂(13!)⌉ = 33, but the minimal number of comparisons to sort 13
elements has been proved to be 34.
Determining the exact number of comparisons needed to sort a given number of entries
is a computationally hard problem even for small n, and no simple formula for the so-
lution is known. For some of the few concrete values that have been computed, see
OEIS45 : A03660446 .
37 https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences
38 http://oeis.org/A036604
39 https://en.wikipedia.org/wiki/Stirling%27s_approximation
40 https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
41 https://en.wikipedia.org/wiki/Factorial
42 https://en.wikipedia.org/wiki/Stirling%27s_approximation
43 https://en.wikipedia.org/wiki/Heapsort
44 https://en.wikipedia.org/wiki/Merge_sort
45 https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences
46 http://oeis.org/A036604
47 https://en.wikipedia.org/wiki/Information_theory
48 https://en.wikipedia.org/wiki/Shannon_entropy
49 https://en.wikipedia.org/wiki/Decision_tree_model
50 https://en.wikipedia.org/wiki/Decision_tree_model
For example, for n = 3, the information-theoretic lower bound for the average case is ap-
proximately 2.58, while the average lower bound derived via Decision tree model51 is 8/3,
approximately 2.67.
In the case that multiple items may have the same key, there is no obvious statistical
interpretation for the term ”average case”, so an argument like the above cannot be applied
without making specific assumptions about the distribution of keys.
2.5 Notes
1. Cormen, Thomas H.52; Leiserson, Charles E.53; Rivest, Ronald L.54; Stein,
Clifford55 (2009) [1990]. Introduction to Algorithms56 (3rd ed.). MIT Press
and McGraw-Hill. pp. 191–193. ISBN57 0-262-03384-458.
2. Mark Wells, Applications of a language for computing in combinatorics, Information
Processing 65 (Proceedings of the 1965 IFIP Congress), 497–498, 1966.
3. Mark Wells, Elements of Combinatorial Computing, Pergamon Press, Oxford, 1971.
4. Takumi Kasai, Shusaku Sawato, Shigeki Iwata, Thirty four comparisons are required
to sort 13 items, LNCS 792, 260-269, 1994.
5. Marcin Peczarski, Sorting 13 elements requires 34 comparisons, LNCS 2461, 785–794,
2002.
6. Marcin Peczarski, New results in minimum-comparison sorting, Algorithmica 40 (2),
133–145, 2004.
7. Marcin Peczarski, Computer assisted research of posets, PhD thesis, University of
Warsaw, 2006.
8. Peczarski, Marcin (2007). ”The Ford–Johnson algorithm still unbeaten for
less than 47 elements”. Inf. Process. Lett. 101 (3): 126–128.
doi59:10.1016/j.ipl.2006.09.00160.
9. Cheng, Weiyi; Liu, Xiaoguang; Wang, Gang; Liu, Jing (October 2007).
”最少比较排序问题中S(15)和S(19)的解决”61 [The solutions of S(15) and S(19)
to the minimum-comparison sorting problem]. Journal of Frontiers of Computer
Science and Technology (in Chinese). 1 (3): 305–313.
10. Peczarski, Marcin (3 August 2011). ”Towards Optimal Sorting of 16 Ele-
ments”. Acta Universitatis Sapientiae. 4 (2): 215–224. arXiv62:1108.086663. Bib-
code64:2011arXiv1108.0866P65.
51 https://en.wikipedia.org/wiki/Decision_tree_model
52 https://en.wikipedia.org/wiki/Thomas_H._Cormen
53 https://en.wikipedia.org/wiki/Charles_E._Leiserson
54 https://en.wikipedia.org/wiki/Ron_Rivest
55 https://en.wikipedia.org/wiki/Clifford_Stein
56 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
57 https://en.wikipedia.org/wiki/ISBN_(identifier)
58 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
59 https://en.wikipedia.org/wiki/Doi_(identifier)
60 https://doi.org/10.1016%2Fj.ipl.2006.09.001
61 http://fcst.ceaj.org/EN/abstract/abstract47.shtml
62 https://en.wikipedia.org/wiki/ArXiv_(identifier)
63 http://arxiv.org/abs/1108.0866
64 https://en.wikipedia.org/wiki/Bibcode_(identifier)
65 https://ui.adsabs.harvard.edu/abs/2011arXiv1108.0866P
2.6 References
• Donald Knuth66 . The Art of Computer Programming67 , Volume 3: Sorting and Search-
ing, Second Edition. Addison-Wesley, 1997. ISBN68 0-201-89685-069 . Section 5.3.1:
Minimum-Comparison Sorting, pp. 180–197.
66 https://en.wikipedia.org/wiki/Donald_Knuth
67 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
68 https://en.wikipedia.org/wiki/ISBN_(identifier)
69 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
3 Selection sort
Selection sort
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) comparisons, O(n) swaps
Best-case performance: O(n²) comparisons, O(n) swaps
Average performance: O(n²) comparisons, O(n) swaps
Worst-case space complexity: O(1) auxiliary
7 https://en.wikipedia.org/wiki/Computer_science
8 https://en.wikipedia.org/wiki/In-place_algorithm
9 https://en.wikipedia.org/wiki/Comparison_sort
10 https://en.wikipedia.org/wiki/Sorting_algorithm
11 https://en.wikipedia.org/wiki/Big_O_notation
12 https://en.wikipedia.org/wiki/Time_complexity
13 https://en.wikipedia.org/wiki/Insertion_sort
14 https://en.wikipedia.org/wiki/Auxiliary_memory
The algorithm divides the input list into a sorted sublist, built up from left to right at the
front of the list, and a sublist of the remaining unsorted items. Initially, the sorted sublist
is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest,
depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with
the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries
one element to the right.
The time efficiency of selection sort is quadratic, so there are a number of sorting techniques
which have better time complexity than selection sort. One thing which distinguishes
selection sort from other sorting algorithms is that it makes the minimum possible number
of swaps, n − 1 in the worst case.
3.1 Example
Figure 9 Selection sort animation. Red is current min. Yellow is sorted list. Blue is
current item.
(Nothing appears changed on these last two lines because the last two numbers were already
in order.)
Selection sort can also be used on list structures that make add and remove efficient, such
as a linked list15 . In this case it is more common to remove the minimum element from the
remainder of the list, and then insert it at the end of the values sorted so far. For example:
arr[] = 64 25 12 22 11
15 https://en.wikipedia.org/wiki/Linked_list
3.2 Implementations
Below is an implementation in C28 . More implementations can be found on the talk page
of this Wikipedia article29 .
28 https://en.wikipedia.org/wiki/C_(programming_language)
29 https://en.wikipedia.org/wiki/Talk:Selection_sort#Implementations
/* a[0..aLength-1] is the array to sort */
int i, j;

/* advance the position through the entire array */
/* (could stop at i < aLength-1 because a single remaining element is already in place) */
for (i = 0; i < aLength - 1; i++)
{
    /* assume the min is the first element */
    int jMin = i;
    /* test against elements after i to find the smallest */
    for (j = i + 1; j < aLength; j++)
    {
        /* if this element is less, then it is the new minimum */
        if (a[j] < a[jMin])
        {
            /* found new minimum; remember its index */
            jMin = j;
        }
    }

    if (jMin != i)
    {
        /* swap the found minimum into position i */
        int temp = a[i];
        a[i] = a[jMin];
        a[jMin] = temp;
    }
}
3.3 Complexity
Selection sort is not difficult to analyze compared to other sorting algorithms since none
of the loops depend on the data in the array. Selecting the minimum requires scanning n
elements (taking n − 1 comparisons) and then swapping it into the first position. Finding the
next lowest element requires scanning the remaining n − 1 elements and so on. Therefore,
the total number of comparisons is
(n − 1) + (n − 2) + ... + 1 = ∑_{i=1}^{n−1} i

By arithmetic progression30,

∑_{i=1}^{n−1} i = ((n − 1) + 1)/2 · (n − 1) = (1/2) n(n − 1) = (1/2)(n² − n)
30 https://en.wikipedia.org/wiki/Arithmetic_progression
31 https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann%E2%80%93Landau_notations
32 https://en.wikipedia.org/wiki/Bubble_sort
33 https://en.wikipedia.org/wiki/Gnome_sort
34 https://en.wikipedia.org/wiki/Insertion_sort
3.4 Comparison to other sorting algorithms
Insertion sort's advantage is that it scans only as many elements as it needs in order to place the k + 1st element, while selection sort must scan all remaining
elements to find the k + 1st element.
Simple calculation shows that insertion sort will therefore usually perform about half as
many comparisons as selection sort, although it can perform just as many or far fewer
depending on the order the array was in prior to sorting. It can be seen as an advantage
for some real-time35 applications that selection sort will perform identically regardless of
the order of the array, while insertion sort's running time can vary considerably. However,
this is more often an advantage for insertion sort in that it runs much more efficiently if the
array is already sorted or ”close to sorted.”
While selection sort is preferable to insertion sort in terms of number of writes (Θ(n) swaps
versus O(n²) swaps), it almost always far exceeds (and never beats) the number of writes
that cycle sort36 makes, as cycle sort is theoretically optimal in the number of writes.
This can be important if writes are significantly more expensive than reads, such as with
EEPROM37 or Flash38 memory, where every write lessens the lifespan of the memory.
Finally, selection sort is greatly outperformed on larger arrays by Θ(n log n) divide-and-
conquer algorithms39 such as mergesort40 . However, insertion sort or selection sort are both
typically faster for small arrays (i.e. fewer than 10–20 elements). A useful optimization in
practice for the recursive algorithms is to switch to insertion sort or selection sort for ”small
enough” sublists.
3.5 Variants
Heapsort41 greatly improves the basic algorithm by using an implicit42 heap43 data struc-
ture44 to speed up finding and removing the lowest datum. If implemented correctly, the
heap will allow finding the next lowest element in Θ(log n) time instead of Θ(n) for the
inner loop in normal selection sort, reducing the total running time to Θ(n log n).
A bidirectional variant of selection sort (sometimes called cocktail sort due to its similarity
to the bubble-sort variant cocktail shaker sort45 ) is an algorithm which finds both the
minimum and maximum values in the list in every pass. This reduces the number of scans
of the input by a factor of two. Each scan performs three comparisons per two elements (a
pair of elements is compared, then the greater is compared to the maximum and the lesser
is compared to the minimum), a 25% savings over regular selection sort, which does one
comparison per element. This variant is sometimes called double selection sort.
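A C sketch of this bidirectional variant (illustrative names, not code from this article) might look like the following; note the extra care needed when the maximum starts out at the front of the range:

void double_selection_sort(int a[], int n)
{
    for (int lo = 0, hi = n - 1; lo < hi; lo++, hi--) {
        /* one scan finds both the minimum and the maximum of a[lo..hi] */
        int iMin = lo, iMax = lo;
        for (int i = lo + 1; i <= hi; i++) {
            if (a[i] < a[iMin]) iMin = i;
            else if (a[i] > a[iMax]) iMax = i;
        }
        /* move the minimum to the front of the range */
        int tmp = a[lo]; a[lo] = a[iMin]; a[iMin] = tmp;
        /* if the maximum was at position lo, it has just moved to iMin */
        if (iMax == lo) iMax = iMin;
        /* move the maximum to the back of the range */
        tmp = a[hi]; a[hi] = a[iMax]; a[iMax] = tmp;
    }
}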
35 https://en.wikipedia.org/wiki/Real-time_computing
36 https://en.wikipedia.org/wiki/Cycle_sort
37 https://en.wikipedia.org/wiki/EEPROM
38 https://en.wikipedia.org/wiki/Flash_memory
39 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
40 https://en.wikipedia.org/wiki/Mergesort
41 https://en.wikipedia.org/wiki/Heapsort
42 https://en.wikipedia.org/wiki/Implicit_data_structure
43 https://en.wikipedia.org/wiki/Heap_(data_structure)
44 https://en.wikipedia.org/wiki/Data_structure
45 https://en.wikipedia.org/wiki/Cocktail_shaker_sort
Selection sort can be implemented as a stable sort46 . If, rather than swapping in step 2,
the minimum value is inserted into the first position (that is, all intervening items moved
down), the algorithm is stable. However, this modification either requires a data structure
that supports efficient insertions or deletions, such as a linked list, or it leads to performing
Θ(n²) writes.
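A C sketch of this stable modification on an array (illustrative, not from the article) shows where the extra writes come from:

void stable_selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        /* find the first occurrence of the minimum in a[i..n-1] */
        int jMin = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[jMin]) jMin = j;
        /* instead of swapping, shift a[i..jMin-1] one place to the right */
        int min = a[jMin];
        for (int k = jMin; k > i; k--)
            a[k] = a[k - 1];
        a[i] = min;   /* equal keys keep their relative order */
    }
}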
In the bingo sort variant, items are ordered by repeatedly looking through the remaining
items to find the greatest value and moving all items with that value to their final location.[1]
Like counting sort47 , this is an efficient variant if there are many duplicate values. Indeed,
selection sort does one pass through the remaining items for each item moved. Bingo sort
does one pass for each value (not item): after an initial pass to find the biggest value, the
next passes can move every item with that value to its final location while finding the next
value as in the following pseudocode48 (arrays are zero-based and the for-loop includes both
the top and bottom limits, as in Pascal49 ):
bingo(array A)
Thus, if on average there are more than two items with the same value, bingo sort can be
expected to be faster because it executes the inner loop fewer times than selection sort.
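As a rough illustration of the idea (a C sketch with illustrative names, not the article's Pascal-style pseudocode): each pass moves every item equal to the current largest remaining value to its final location while also finding the next value to move.

void bingo_sort(int a[], int n)
{
    if (n < 2) return;
    int max = n - 1;

    /* initial pass: find the biggest value without moving anything */
    int nextValue = a[max];
    for (int i = max - 1; i >= 0; i--)
        if (a[i] > nextValue) nextValue = a[i];
    /* items at the end that already hold the biggest value are in place */
    while (max > 0 && a[max] == nextValue) max--;

    while (max > 0) {
        int value = nextValue;    /* the value being moved in this pass */
        nextValue = a[max];
        for (int i = max - 1; i >= 0; i--) {
            if (a[i] == value) {
                /* move this item to its final location */
                int tmp = a[i]; a[i] = a[max]; a[max] = tmp;
                max--;
            } else if (a[i] > nextValue) {
                nextValue = a[i];   /* track the next value to move */
            }
        }
        /* skip over items that already equal the next value */
        while (max > 0 && a[max] == nextValue) max--;
    }
}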
46 https://en.wikipedia.org/wiki/Sorting_algorithm#Classification
47 https://en.wikipedia.org/wiki/Counting_sort
48 https://en.wikipedia.org/wiki/Pseudocode
49 https://en.wikipedia.org/wiki/Pascal_(programming_language)
50 https://en.wikipedia.org/wiki/Selection_algorithm
3.7 References
1. This article incorporates public domain material51 from the NIST52 document:
Black, Paul E. "Bingo sort"53. Dictionary of Algorithms and Data Structures54.
• Donald Knuth55 . The Art of Computer Programming56 , Volume 3: Sorting and Searching,
Third Edition. Addison–Wesley, 1997. ISBN57 0-201-89685-058 . Pages 138–141 of Section
5.2.3: Sorting by Selection.
• Anany Levitin. Introduction to the Design & Analysis of Algorithms, 2nd Edition.
ISBN59 0-321-35828-760 . Section 3.1: Selection Sort, pp 98–100.
• Robert Sedgewick61 . Algorithms in C++, Parts 1–4: Fundamentals, Data Structure,
Sorting, Searching: Fundamentals, Data Structures, Sorting, Searching Pts. 1–4, Second
Edition. Addison–Wesley Longman, 1998. ISBN62 0-201-35088-263 . Pages 273–274
The Wikibook Algorithm implementation64 has a page on the topic of: Selection
sort65
51 https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_the_United_States
52 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
53 https://xlinux.nist.gov/dads/HTML/bingosort.html
54 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures
55 https://en.wikipedia.org/wiki/Donald_Knuth
56 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
57 https://en.wikipedia.org/wiki/ISBN_(identifier)
58 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
59 https://en.wikipedia.org/wiki/ISBN_(identifier)
60 https://en.wikipedia.org/wiki/Special:BookSources/0-321-35828-7
61 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
62 https://en.wikipedia.org/wiki/ISBN_(identifier)
63 https://en.wikipedia.org/wiki/Special:BookSources/0-201-35088-2
64 https://en.wikibooks.org/wiki/Algorithm_implementation
65 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Selection_sort
66 https://web.archive.org/web/20150307110315/http://www.sorting-algorithms.com/selection-sort
67 https://en.wikipedia.org/wiki/Wayback_Machine
4 Insertion sort
Insertion sort
Animation of insertion sort
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) comparisons and swaps
Best-case performance: O(n) comparisons, O(1) swaps
Average performance: O(n²) comparisons and swaps
Worst-case space complexity: O(n) total, O(1) auxiliary
Insertion sort is a simple sorting algorithm1 that builds the final sorted array2 (or list) one
item at a time. It is much less efficient on large lists than more advanced algorithms such as
quicksort3 , heapsort4 , or merge sort5 . However, insertion sort provides several advantages:
• Simple implementation: Jon Bentley6 shows a three-line C7 version, and a five-line opti-
mized8 version[1]
• Efficient for (quite) small data sets, much like other quadratic sorting algorithms
• More efficient in practice than most other simple quadratic (i.e., O9(n²)) algorithms such
as selection sort10 or bubble sort11
• Adaptive12 , i.e., efficient for data sets that are already substantially sorted: the time
complexity13 is O14 (kn) when each element in the input is no more than k places away
from its sorted position
• Stable15 ; i.e., does not change the relative order of elements with equal keys
1 https://en.wikipedia.org/wiki/Sorting_algorithm
2 https://en.wikipedia.org/wiki/Sorted_array
3 https://en.wikipedia.org/wiki/Quicksort
4 https://en.wikipedia.org/wiki/Heapsort
5 https://en.wikipedia.org/wiki/Merge_sort
6 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
7 https://en.wikipedia.org/wiki/C_(programming_language)
8 https://en.wikipedia.org/wiki/Program_optimization
9 https://en.wikipedia.org/wiki/Big_O_notation
10 https://en.wikipedia.org/wiki/Selection_sort
11 https://en.wikipedia.org/wiki/Bubble_sort
12 https://en.wikipedia.org/wiki/Adaptive_sort
13 https://en.wikipedia.org/wiki/Time_complexity
14 https://en.wikipedia.org/wiki/Big_O_notation
15 https://en.wikipedia.org/wiki/Stable_sort
• In-place16 ; i.e., only requires a constant amount O(1) of additional memory space
• Online17 ; i.e., can sort a list as it receives it
When people manually sort cards in a bridge hand, most use a method that is similar to
insertion sort.[2]
4.1 Algorithm
Figure 11 A graphical example of insertion sort. The partial sorted list (black) initially
contains only the first element in the list. With each iteration one element (red) is removed
from the ”not yet checked for order” input data and inserted in-place into the sorted list.
Insertion sort iterates18 , consuming one input element each repetition, and growing a sorted
output list. At each iteration, insertion sort removes one element from the input data, finds
the location it belongs within the sorted list, and inserts it there. It repeats until no input
elements remain.
Sorting is typically done in-place, by iterating up the array, growing the sorted list behind
it. At each array-position, it checks the value there against the largest value in the sorted
list (which happens to be next to it, in the previous array-position checked). If larger, it
leaves the element in place and moves to the next. If smaller, it finds the correct position
within the sorted list, shifts all the larger values up to make a space, and inserts into that
correct position.
16 https://en.wikipedia.org/wiki/In-place_algorithm
17 https://en.wikipedia.org/wiki/Online_algorithm
18 https://en.wikipedia.org/wiki/Iteration
The resulting array after k iterations has the property where the first k + 1 entries are
sorted (”+1” because the first entry is skipped). In each iteration the first remaining entry
of the input is removed and inserted into the result at the correct position, thus extending
the result; each element greater than the one being inserted is copied one position to the right as it is compared against it.
The most common variant of insertion sort, which operates on arrays, can be described as
follows:
1. Suppose there exists a function called Insert designed to insert a value into a sorted
sequence at the beginning of an array. It operates by beginning at the end of the
sequence and shifting each element one place to the right until a suitable position is
found for the new element. The function has the side effect of overwriting the value
stored immediately after the sorted sequence in the array.
2. To perform an insertion sort, begin at the left-most element of the array and invoke
Insert to insert each element encountered into its correct position. The ordered se-
quence into which the element is inserted is stored at the beginning of the array in the
set of indices already examined. Each insertion overwrites a single value: the value
being inserted.
Pseudocode19 of the complete algorithm follows, where the arrays are zero-based20 :[1]
i ← 1
while i < length(A)
j←i
while j > 0 and A[j-1] > A[j]
swap A[j] and A[j-1]
j←j-1
end while
i←i+1
end while
19 https://en.wikipedia.org/wiki/Pseudocode
20 https://en.wikipedia.org/wiki/Zero-based_numbering
The outer loop runs over all the elements except the first one, because the single-element
prefix A[0:1] is trivially sorted, so the invariant21 that the first i entries are sorted is true
from the start. The inner loop moves element A[i] to its correct place so that after the
loop, the first i+1 elements are sorted. Note that the and-operator in the test must use
short-circuit evaluation22 , otherwise the test might result in an array bounds error23 , when
j=0 and it tries to evaluate A[j-1] > A[j] (i.e. accessing A[-1] fails).
After expanding the swap operation in-place as x ← A[j]; A[j] ← A[j-1]; A[j-1] ←
x (where x is a temporary variable), a slightly faster version can be produced that moves
A[i] to its position in one go and only performs one assignment in the inner loop body:[1]
i ← 1
while i < length(A)
x ← A[i]
j←i-1
while j >= 0 and A[j] > x
A[j+1] ← A[j]
j←j-1
end while
A[j+1] ← x[3]
i←i+1
end while
The new inner loop shifts elements to the right to clear a spot for x = A[i].
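The same optimized version rendered in C (a sketch with illustrative names) makes the single write per inner-loop iteration explicit:

void insertion_sort(int A[], int length)
{
    for (int i = 1; i < length; i++) {
        int x = A[i];
        int j = i - 1;
        /* shift elements greater than x one position to the right */
        while (j >= 0 && A[j] > x) {
            A[j + 1] = A[j];
            j = j - 1;
        }
        A[j + 1] = x;   /* one final assignment places the element */
    }
}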
The algorithm can also be implemented in a recursive way. The recursion just replaces
the outer loop, calling itself and storing successively smaller values of n on the stack until
n equals 0, where the function then returns back up the call chain to execute the code
after each recursive call starting with n equal to 1, with n increasing by 1 as each instance
of the function returns to the prior instance. The initial call would be insertionSortR(A,
length(A)-1).
function insertionSortR(array A, int n)
if n > 0
insertionSortR(A, n-1)
x ← A[n]
j ← n-1
while j >= 0 and A[j] > x
A[j+1] ← A[j]
j ← j-1
end while
A[j+1] ← x
end if
end function
It does not make the code any shorter and does not reduce the execution time, but it
increases the additional memory consumption from O(1) to O(N) (at the deepest level of
recursion the stack contains N references to the A array, each with accompanying value of
variable n from N down to 1).
21 https://en.wikipedia.org/wiki/Invariant_(computer_science)
22 https://en.wikipedia.org/wiki/Short-circuit_evaluation
23 https://en.wikipedia.org/wiki/Bounds_checking
4.2 Best, worst, and average cases
The best case input is an array that is already sorted. In this case insertion sort has a linear
running time (i.e., O(n)). During each iteration, the first remaining element of the input is
only compared with the right-most element of the sorted subsection of the array.
The simplest worst case input is an array sorted in reverse order. The set of all worst case
inputs consists of all arrays where each element is the smallest or second-smallest of the
elements before it. In these cases every iteration of the inner loop will scan and shift the
entire sorted subsection of the array before inserting the next element. This gives insertion
sort a quadratic running time (i.e., O(n²)).
The average case is also quadratic[4] , which makes insertion sort impractical for sorting
large arrays. However, insertion sort is one of the fastest algorithms for sorting very small
arrays, even faster than quicksort25 ; indeed, good quicksort26 implementations use insertion
sort for arrays smaller than a certain threshold, including when such small arrays arise as subproblems; the exact threshold must be determined experimentally and depends on the machine, but is commonly around ten.
Example: The following table shows the steps for sorting the sequence {3, 7, 4, 9, 5, 2, 6,
1}. In each step, the key under consideration is underlined. The key that was moved (or
left in place because it was biggest yet considered) in the previous step is marked with an
asterisk.
3 7 4 9 5 2 6 1
3* 7 4 9 5 2 6 1
3 7* 4 9 5 2 6 1
3 4* 7 9 5 2 6 1
3 4 7 9* 5 2 6 1
3 4 5* 7 9 2 6 1
2* 3 4 5 7 9 6 1
2 3 4 5 6* 7 9 1
1* 2 3 4 5 6 7 9
Insertion sort is very similar to selection sort27 . As in selection sort, after k passes through
the array, the first k elements are in sorted order. However, the fundamental difference
between the two algorithms is that for selection sort these are the k smallest elements of the
unsorted input, while in insertion sort they are simply the first k elements of the input. The
primary advantage of insertion sort over selection sort is that selection sort must always
scan all remaining elements to find the absolute smallest element in the unsorted portion of
the list, while insertion sort requires only a single comparison when the (k + 1)-st element
is greater than the k-th element; when this is frequently true (such as if the input array
is already sorted or partially sorted), insertion sort is distinctly more efficient compared to
selection sort. On average (assuming the rank of the (k + 1)-st element is random),
25 https://en.wikipedia.org/wiki/Quicksort
26 https://en.wikipedia.org/wiki/Quicksort
27 https://en.wikipedia.org/wiki/Selection_sort
insertion sort will require comparing and shifting half of the previous k elements, meaning
that insertion sort will perform about half as many comparisons as selection sort on average.
In the worst case for insertion sort (when the input array is reverse-sorted), insertion sort
performs just as many comparisons as selection sort. However, a disadvantage of insertion
sort over selection sort is that it requires more writes due to the fact that, on each iteration,
inserting the (k + 1)-st element into the sorted portion of the array requires many element
swaps to shift all of the following elements, while only a single swap is required for each
iteration of selection sort. In general, insertion sort will write to the array O(n²) times,
whereas selection sort will write only O(n) times. For this reason selection sort may be
preferable in cases where writing to memory is significantly more expensive than reading,
such as with EEPROM28 or flash memory29 .
While some divide-and-conquer algorithms30 such as quicksort31 and mergesort32 outper-
form insertion sort for larger arrays, non-recursive sorting algorithms such as insertion sort
or selection sort are generally faster for very small arrays (the exact size varies by envi-
ronment and implementation, but is typically between 7 and 50 elements). Therefore, a
useful optimization in the implementation of those algorithms is a hybrid approach, using
the simpler algorithm when the array has been divided to a small size.[1]
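A sketch of such a hybrid in C (illustrative names and cutoff; real implementations tune the threshold empirically) is a quicksort that hands small partitions to insertion sort:

static void insertion_sort_range(int a[], int lo, int hi)
{
    for (int i = lo + 1; i <= hi; i++) {
        int x = a[i], j = i - 1;
        while (j >= lo && a[j] > x) { a[j + 1] = a[j]; j--; }
        a[j + 1] = x;
    }
}

void hybrid_quicksort(int a[], int lo, int hi)
{
    if (hi - lo + 1 <= 16) {             /* small partition: use the simpler algorithm */
        insertion_sort_range(a, lo, hi);
        return;
    }
    int pivot = a[lo + (hi - lo) / 2];   /* middle element as pivot */
    int i = lo, j = hi;
    while (i <= j) {                     /* Hoare-style partition */
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i <= j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++; j--;
        }
    }
    hybrid_quicksort(a, lo, j);
    hybrid_quicksort(a, i, hi);
}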
4.4 Variants
D. L. Shell33 made substantial improvements to the algorithm; the modified version is called
Shell sort34 . The sorting algorithm compares elements separated by a distance that decreases
on each pass. Shell sort has distinctly improved running times in practical work, with two
simple variants requiring O(n^(3/2)) and O(n^(4/3)) running time.[5][6]
If the cost of comparisons exceeds the cost of swaps, as is the case for example with string
keys stored by reference or with human interaction (such as choosing one of a pair displayed
side-by-side), then using binary insertion sort[citation needed] may yield better performance.
Binary insertion sort employs a binary search36 to determine the correct location to insert
new elements, and therefore performs ⌈log₂ n⌉ comparisons per insertion in the worst case,
for O(n log n) comparisons overall. The algorithm as a whole still has a running time of O(n²) on average because
of the series of swaps required for each insertion.
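A C sketch of binary insertion sort (illustrative, not from the article): the binary search finds the insertion point with few comparisons, but the shifting still dominates.

void binary_insertion_sort(int A[], int n)
{
    for (int i = 1; i < n; i++) {
        int x = A[i];
        /* binary search for the leftmost position in A[0..i-1] where x belongs */
        int lo = 0, hi = i;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (A[mid] <= x) lo = mid + 1;   /* insert after equal keys to stay stable */
            else hi = mid;
        }
        /* shift A[lo..i-1] right by one and insert x */
        for (int j = i; j > lo; j--)
            A[j] = A[j - 1];
        A[lo] = x;
    }
}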
The number of swaps can be reduced by calculating the position of multiple elements before
moving them. For example, if the target position of two elements is calculated before they
are moved into the proper position, the number of swaps can be reduced by about 25% for
random data. In the extreme case, this variant works similar to merge sort37 .
28 https://en.wikipedia.org/wiki/EEPROM
29 https://en.wikipedia.org/wiki/Flash_memory
30 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
31 https://en.wikipedia.org/wiki/Quicksort
32 https://en.wikipedia.org/wiki/Mergesort
33 https://en.wikipedia.org/wiki/Donald_Shell
34 https://en.wikipedia.org/wiki/Shellsort
36 https://en.wikipedia.org/wiki/Binary_search_algorithm
37 https://en.wikipedia.org/wiki/Merge_sort
A variant named binary merge sort uses a binary insertion sort to sort groups of 32 elements,
followed by a final sort using merge sort38 . It combines the speed of insertion sort on small
data sets with the speed of merge sort on large data sets.[7]
To avoid having to make a series of swaps for each insertion, the input could be stored in
a linked list39 , which allows elements to be spliced into or out of the list in constant time
when the position in the list is known. However, searching a linked list requires sequentially
following the links to the desired position: a linked list does not have random access, so it
cannot use a faster method such as binary search. Therefore, the running time required for
searching is O(n), and the time for sorting is O(n2 ). If a more sophisticated data structure40
(e.g., heap41 or binary tree42 ) is used, the time required for searching and insertion can be
reduced significantly; this is the essence of heap sort43 and binary tree sort44 .
In 2006 Bender, Martin Farach-Colton45 , and Mosteiro published a new variant of insertion
sort called library sort46 or gapped insertion sort that leaves a small number of unused
spaces (i.e., ”gaps”) spread throughout the array. The benefit is that insertions need only
shift elements over until a gap is reached. The authors show that this sorting algorithm
runs with high probability in O(n log n) time.[8]
If a skip list47 is used, the insertion time is brought down to O(log n), and swaps are not
needed because the skip list is implemented on a linked list structure. The final running
time for insertion would be O(n log n).
List insertion sort is a variant of insertion sort. It reduces the number of movements.[citation needed]
If the items are stored in a linked list, then the list can be sorted with O(1) additional space.
The algorithm starts with an initially empty (and therefore trivially sorted) list. The input
items are taken off the list one at a time, and then inserted in the proper place in the sorted
list. When the input list is empty, the sorted list has the desired result.
38 https://en.wikipedia.org/wiki/Merge_sort
39 https://en.wikipedia.org/wiki/Linked_list
40 https://en.wikipedia.org/wiki/Data_structure
41 https://en.wikipedia.org/wiki/Heap_(data_structure)
42 https://en.wikipedia.org/wiki/Binary_tree
43 https://en.wikipedia.org/wiki/Heap_sort
44 https://en.wikipedia.org/wiki/Binary_tree_sort
45 https://en.wikipedia.org/wiki/Martin_Farach-Colton
46 https://en.wikipedia.org/wiki/Library_sort
47 https://en.wikipedia.org/wiki/Skip_list
/* sort a list of LIST nodes (node type defined in the next listing) into
   ascending order by iValue */
struct LIST * SortList1(struct LIST * pList)
{
    /* build up the sorted list (initially empty) from the input items */
    struct LIST * head = NULL;

    /* take items off the input list one by one until empty */
    while (pList != NULL) {
        /* remember the head of the remaining input */
        struct LIST * current = pList;
        pList = pList->pNext;

        if (head == NULL || current->iValue < head->iValue) {
            // insert into the head of the sorted list
            // or as the first element into an empty sorted list
            current->pNext = head;
            head = current;
        } else {
            // insert current element into proper position in non-empty sorted list
            struct LIST * p = head;
            while (p != NULL) {
                if (p->pNext == NULL || // last element of the sorted list
                    current->iValue < p->pNext->iValue) // middle of the list
                {
                    // insert into middle of the sorted list or as the last element
                    current->pNext = p->pNext;
                    p->pNext = current;
                    break; // done
                }
                p = p->pNext;
            }
        }
    }
    return head;
}
The algorithm below uses a trailing pointer[9] for the insertion into the sorted list. A simpler
recursive method rebuilds the list each time (rather than splicing) and can use O(n) stack
space.
struct LIST
{
    struct LIST * pNext;
    int iValue;
};

struct LIST * SortList(struct LIST * pList)
{
    /* build up the sorted result here */
    struct LIST * pSorted = NULL;

    /* take items off the input list one by one until empty */
    while (pList != NULL) {
        /* remember the head */
        struct LIST * pHead = pList;
        /* trailing pointer for efficient splice */
        struct LIST ** ppTrail = &pSorted;

        /* pop head off the input list */
        pList = pList->pNext;

        /* advance the trailing pointer to the splice point */
        while (*ppTrail != NULL && (*ppTrail)->iValue <= pHead->iValue) {
            ppTrail = &(*ppTrail)->pNext;
        }

        /* splice head into the sorted list */
        pHead->pNext = *ppTrail;
        *ppTrail = pHead;
    }

    return pSorted;
}
4.5 References
1. Bentley, Jon (2000), Programming Pearls, ACM Press/Addison–Wesley, pp. 107–109
2. Sedgewick, Robert49 (1983), Algorithms50, Addison-Wesley, p. 9551, ISBN52 978-0-201-06672-253.
3. Cormen, Thomas H.54; Leiserson, Charles E.55; Rivest, Ronald L.56; Stein, Clifford57 (2009) [1990]. "Section 2.1: Insertion sort". Introduction to Algorithms58 (3rd ed.). MIT Press and McGraw-Hill. pp. 16–18. ISBN59 0-262-03384-460. See in particular p. 18.
4. Schwarz, Keith. "Why is insertion sort Θ(n²) in the average case?"61. Stack Overflow.
5. Frank, R. M.; Lazarus, R. B. (1960). "A High-Speed Sorting Procedure". Communications of the ACM. 3 (1): 20–22. doi62:10.1145/366947.36695763.
6. Sedgewick, Robert64 (1986). "A New Upper Bound for Shellsort". Journal of Algorithms. 7 (2): 159–173. doi65:10.1016/0196-6774(86)90001-566.
7. "Binary Merge Sort"67
8. Bender, Michael A.; Farach-Colton, Martín68; Mosteiro, Miguel A. (2006), "Insertion sort is O(n log n)", Theory of Computing Systems, 39 (3): 391–397, arXiv69:cs/040700370, doi71:10.1007/s00224-005-1237-z72, MR73 221840974
49 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
50 https://archive.org/details/algorithms00sedg/page/95
51 https://archive.org/details/algorithms00sedg/page/95
52 https://en.wikipedia.org/wiki/ISBN_(identifier)
53 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-06672-2
54 https://en.wikipedia.org/wiki/Thomas_H._Cormen
55 https://en.wikipedia.org/wiki/Charles_E._Leiserson
56 https://en.wikipedia.org/wiki/Ron_Rivest
57 https://en.wikipedia.org/wiki/Clifford_Stein
58 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
59 https://en.wikipedia.org/wiki/ISBN_(identifier)
60 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
61 https://stackoverflow.com/a/17055342
62 https://en.wikipedia.org/wiki/Doi_(identifier)
63 https://doi.org/10.1145%2F366947.366957
64 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
65 https://en.wikipedia.org/wiki/Doi_(identifier)
66 https://doi.org/10.1016%2F0196-6774%2886%2990001-5
67 https://docs.google.com/file/d/0B8KIVX-AaaGiYzcta0pFUXJnNG8
68 https://en.wikipedia.org/wiki/Martin_Farach-Colton
69 https://en.wikipedia.org/wiki/ArXiv_(identifier)
70 http://arxiv.org/abs/cs/0407003
71 https://en.wikipedia.org/wiki/Doi_(identifier)
72 https://doi.org/10.1007%2Fs00224-005-1237-z
73 https://en.wikipedia.org/wiki/MR_(identifier)
74 http://www.ams.org/mathscinet-getitem?mr=2218409
9. Hill, Curt (ed.), "Trailing Pointer Technique", Euler75, Valley City State University, 22 September 2012.
The Wikibook Algorithm implementation80 has a page on the topic of: Insertion
sort81
75 http://euler.vcsu.edu:7000/11421/
76 https://en.wikipedia.org/wiki/Donald_Knuth
77 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
78 https://en.wikipedia.org/wiki/ISBN_(identifier)
79 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
80 https://en.wikibooks.org/wiki/Algorithm_implementation
81 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Insertion_sort
82 https://commons.wikimedia.org/wiki/Category:Insertion_sort
83 https://web.archive.org/web/20150308232109/http://www.sorting-algorithms.com/insertion-sort
84 https://en.wikipedia.org/wiki/Wayback_Machine
85 http://www.pathcom.com/~vadco/binary.html
86 http://corewar.co.uk/assembly/insertion.htm
87 https://en.wikipedia.org/wiki/United_Kingdom
88 http://literateprograms.org/Category:Insertion_sort
5 Merge sort
Merge sort
An example of merge sort. First divide the list into the smallest unit (1 element), then
compare each element with the adjacent list to sort and merge the two adjacent lists.
Finally all the elements are sorted and merged.
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n log n)
Best-case performance: O(n log n) typical, O(n) natural variant
Average performance: O(n log n)
Worst-case space complexity: O(n) total with O(n) auxiliary, O(1) auxiliary with linked lists[1]
6 https://en.wikipedia.org/wiki/Computer_science
7 https://en.wikipedia.org/wiki/Comparison_sort
8 https://en.wikipedia.org/wiki/Sorting_algorithm
9 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
output. Merge sort is a divide and conquer algorithm10 that was invented by John von Neu-
mann11 in 1945.[2] A detailed description and analysis of bottom-up mergesort appeared in
a report by Goldstine12 and von Neumann13 as early as 1948.[3]
5.1 Algorithm
Example C-like15 code using indices for top-down merge sort algorithm that recursively
splits the list (called runs in this example) into sublists until sublist size is 1, then merges
those sublists to produce a sorted list. The copy-back step is avoided by alternating the
direction of the merge with each level of recursion (except for an initial one-time copy). To
help understand this, consider an array with 2 elements: the elements are copied to B[],
then merged back to A[]. If there are 4 elements, when the bottom recursion level is
reached, single element runs from A[] are merged to B[], and then at the next higher level
of recursion, those 2 element runs are merged to A[]. This pattern continues with each level
of recursion.
// Array A[] has the items to sort; array B[] is a work array.
void TopDownMergeSort(A[], B[], n)
{
CopyArray(A, 0, n, B); // one time copy of A[] to B[]
TopDownSplitMerge(B, 0, n, A); // sort data from B[] into A[]
}
// Sort the given run of array A[] using array B[] as a source.
// iBegin is inclusive; iEnd is exclusive (A[iEnd] is not in the set).
void TopDownSplitMerge(B[], iBegin, iEnd, A[])
{
if(iEnd - iBegin < 2) // if run size == 1
return; // consider it sorted
// split the run longer than 1 item into halves
iMiddle = (iEnd + iBegin) / 2; // iMiddle = mid point
// recursively sort both runs from array A[] into B[]
TopDownSplitMerge(A, iBegin, iMiddle, B); // sort the left run
TopDownSplitMerge(A, iMiddle, iEnd, B); // sort the right run
// merge the resulting runs from array B[] into A[]
TopDownMerge(B, iBegin, iMiddle, iEnd, A);
}
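A sketch of the merge and copy helpers, consistent with the conventions used above (iBegin inclusive, iEnd exclusive; the signatures are assumptions), might look like the following. Using <= when comparing A[i] with A[j] keeps the sort stable; the BottomUpMerge and CopyArray routines used by the bottom-up version below are analogous.

// Left source half is A[iBegin:iMiddle-1], right source half is A[iMiddle:iEnd-1].
// Result goes to B[iBegin:iEnd-1].
void TopDownMerge(int A[], int iBegin, int iMiddle, int iEnd, int B[])
{
    int i = iBegin, j = iMiddle;
    // While there are elements in the left or right runs...
    for (int k = iBegin; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iMiddle && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(int A[], int iBegin, int iEnd, int B[])
{
    for (int k = iBegin; k < iEnd; k++)
        B[k] = A[k];
}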
10 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
11 https://en.wikipedia.org/wiki/John_von_Neumann
12 https://en.wikipedia.org/wiki/Herman_Goldstine
13 https://en.wikipedia.org/wiki/John_von_Neumann
14 https://en.wikipedia.org/wiki/Merge_algorithm
15 https://en.wikipedia.org/wiki/C-like
Example C-like code using indices for bottom-up merge sort algorithm which treats the
list as an array of n sublists (called runs in this example) of size 1, and iteratively merges
sub-lists back and forth between two buffers:
// array A[] has the items to sort; array B[] is a work array
void BottomUpMergeSort(A[], B[], n)
{
// Each 1-element run in A is already "sorted".
// Make successively longer sorted runs of length 2, 4, 8, 16... until the whole array is sorted.
for (width = 1; width < n; width = 2 * width)
{
// Array A is full of runs of length width.
for (i = 0; i < n; i = i + 2 * width)
{
// Merge two runs: A[i:i+width-1] and A[i+width:i+2*width-1] to B[]
// or copy A[i:n-1] to B[] ( if(i+width >= n) )
BottomUpMerge(A, i, min(i+width, n), min(i+2*width, n), B);
}
// Now work array B is full of runs of length 2*width.
// Copy array B to array A for next iteration.
// A more efficient implementation would swap the roles of A and B.
CopyArray(B, A, n);
// Now array A is full of runs of length 2*width.
}
}
Pseudocode16 for top-down merge sort algorithm which recursively divides the input list
into smaller sublists until the sublists are trivially sorted, and then merges the sublists
while returning up the call chain.
function merge_sort(list m) is
    // Base case. A list of zero or one elements is sorted, by definition.
    if length of m ≤ 1 then
        return m
    // Recursive case. Split m into a left half and a right half,
    // sort each half recursively, then merge the two sorted halves.
    (left, right) := split m into two halves
    return merge(merge_sort(left), merge_sort(right))
In this example, the merge function merges the left and right sublists.
function merge(left, right) is
var result := empty list
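A C sketch of such a merge for singly linked lists (hypothetical node type; the pseudocode in this chapter uses an abstract list instead) handles empty lists and preserves stability by taking from the left list on ties:

struct node { int value; struct node *next; };   /* hypothetical node type */

struct node * merge_lists(struct node *left, struct node *right)
{
    struct node dummy;               /* temporary head simplifies splicing */
    struct node *tail = &dummy;
    dummy.next = NULL;
    while (left != NULL && right != NULL) {
        if (left->value <= right->value) { tail->next = left;  left = left->next; }
        else                             { tail->next = right; right = right->next; }
        tail = tail->next;
    }
    /* append whichever list still has elements */
    tail->next = (left != NULL) ? left : right;
    return dummy.next;
}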
16 https://en.wikipedia.org/wiki/Pseudocode
Pseudocode17 for bottom-up merge sort algorithm which uses a small fixed size array of
references to nodes, where array[i] is either a reference to a list of size 2^i or nil18 . node is
a reference or pointer to a node. The merge() function would be similar to the one shown
in the top-down merge lists example, it merges two already sorted lists, and handles empty
lists. In this case, merge() would use node for its input parameters and return value.
function merge_sort(node head) is
// return if empty list
if head = nil then
return nil
var node array[32]; initially all nil
var node result
var node next
var int i
result := head
// merge nodes into array
while result ≠ nil do
next := result.next;
result.next := nil
for(i = 0; (i < 32) && (array[i] ≠ nil); i += 1) do
result := merge(array[i], result)
array[i] := nil
// do not go past end of array
if i = 32 then
i -= 1
array[i] := result
result := next
// merge array into single list
result := nil
for (i = 0; i < 32; i += 1) do
result := merge(array[i], result)
return result
A natural merge sort is similar to a bottom-up merge sort except that any naturally occur-
ring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (al-
ternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being
convenient data structures (used as FIFO queues19 or LIFO stacks20 ).[4] In the bottom-up
merge sort, the starting point assumes each run is one item long. In practice, random input
17 https://en.wikipedia.org/wiki/Pseudocode
18 https://en.wikipedia.org/wiki/Null_pointer
19 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
20 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
data will have many short runs that just happen to be sorted. In the typical case, the
natural merge sort may not need as many passes because there are fewer runs to merge.
In the best case, the input is already sorted (i.e., is one run), so the natural merge sort
need only make one pass through the data. In many practical cases, long natural runs
are present, and for that reason natural merge sort is exploited as the key component of
Timsort21 . Example:
Start : 3 4 2 1 7 5 8 9 0 6
Select runs : (3 4)(2)(1 7)(5 8 9)(0 6)
Merge : (2 3 4)(1 5 7 8 9)(0 6)
Merge : (1 2 3 4 5 7 8 9)(0 6)
Merge : (0 1 2 3 4 5 6 7 8 9)
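A small C sketch (illustrative only, monotonic non-decreasing runs for simplicity) that prints the naturally occurring runs a natural merge sort would start from reproduces the "Select runs" line above for that input:

#include <stdio.h>

void print_runs(const int a[], int n)
{
    int start = 0;
    for (int i = 1; i <= n; i++) {
        if (i == n || a[i] < a[i - 1]) {      /* run ends here */
            printf("(");
            for (int k = start; k < i; k++)
                printf(k == start ? "%d" : " %d", a[k]);
            printf(")");
            start = i;
        }
    }
    printf("\n");
}

/* print_runs((int[]){3, 4, 2, 1, 7, 5, 8, 9, 0, 6}, 10)
   prints (3 4)(2)(1 7)(5 8 9)(0 6). */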
Tournament replacement selection sorts22 are used to gather the initial runs for external
sorting algorithms.
21 https://en.wikipedia.org/wiki/Timsort
22 https://en.wikipedia.org/wiki/Tournament_sort
5.3 Analysis
Figure 14 A recursive merge sort algorithm used to sort an array of 7 integer values.
These are the steps a human would take to emulate merge sort (top-down).
23 https://en.wikipedia.org/wiki/Average_performance
24 https://en.wikipedia.org/wiki/Worst-case_performance
25 https://en.wikipedia.org/wiki/Big_O_notation
26 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
In the worst case, the number of comparisons merge sort makes is given by the sorting
numbers27 . These numbers are equal to or slightly smaller than (n⌈lg28 n⌉ − 2^⌈lg n⌉ + 1),
which is between (n lg n − n + 1) and (n lg n + n + O(lg n)).[5]
For large n and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches α·n fewer than the worst case, where α = −1 + ∑_{k=0}^{∞} 1/(2^k + 1) ≈ 0.2645.
In the worst case, merge sort does about 39% fewer comparisons than quicksort29 does in
the average case. In terms of moves, merge sort's worst case complexity is O30 (n log n)—
the same complexity as quicksort's best case, and merge sort's best case takes about half as many iterations as the worst case.[citation needed]
Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can
only be efficiently accessed sequentially, and is thus popular in languages such as Lisp32 ,
where sequentially accessed data structures are very common. Unlike some (efficient) im-
plementations of quicksort, merge sort is a stable sort.
Merge sort's most common implementation does not sort in place;[6] therefore, the memory
size of the input must be allocated for the sorted output to be stored in (see below for
versions that need only n/2 extra spaces).
5.4 Variants
Variants of merge sort are primarily concerned with reducing the space complexity and the
cost of copying.
A simple alternative for reducing the space overhead to n/2 is to maintain left and right as
a combined structure, copy only the left part of m into temporary space, and to direct the
merge routine to place the merged output into m. With this version it is better to allocate
the temporary space outside the merge routine, so that only one allocation is needed. The
excessive copying mentioned previously is also mitigated, since the last pair of lines before
the return result statement (function merge in the pseudocode above) become superfluous.
One drawback of merge sort, when implemented on arrays, is its O(n) working memory
requirement. Several in-place33 variants have been suggested:
• Katajainen et al. present an algorithm that requires a constant amount of working mem-
ory: enough storage space to hold one element of the input array, and additional space
to hold O(1) pointers into the input array. They achieve an O(n log n) time bound with
small constants, but their algorithm is not stable.[7]
• Several attempts have been made at producing an in-place merge algorithm that can
be combined with a standard (top-down or bottom-up) merge sort to produce an in-
27 https://en.wikipedia.org/wiki/Sorting_number
28 https://en.wikipedia.org/wiki/Binary_logarithm
29 https://en.wikipedia.org/wiki/Quicksort
30 https://en.wikipedia.org/wiki/Big_O_notation
32 https://en.wikipedia.org/wiki/Lisp_programming_language
33 https://en.wikipedia.org/wiki/In-place_algorithm
70
Variants
place merge sort. In this case, the notion of ”in-place” can be relaxed to mean ”taking
logarithmic stack space”, because standard merge sort requires that amount of space
for its own stack usage. It was shown by Geffert et al. that in-place, stable merging is
possible in O(n log n) time using a constant amount of scratch space, but their algorithm
is complicated and has high constant factors: merging arrays of length n and m can take
5n + 12m + o(m) moves.[8] This high constant factor and complicated in-place algorithm
was later made simpler and easier to understand: Bing-Chao Huang and Michael A. Langston[9]
presented a straightforward, practical in-place merge algorithm that merges sorted lists in
linear time using a fixed amount of additional space, building on the work of Kronrod and
others. The algorithm takes somewhat more average time than standard merge sort algorithms,
which are free to exploit O(n) temporary extra memory cells, but by less than a factor of two.
Although the algorithm is much faster in practice, it is also unstable for some lists; using
similar concepts, however, they were able to solve this problem. Other in-place algorithms include
SymMerge, which takes O((n + m) log (n + m)) time in total and is stable.[10] Plugging
such an algorithm into merge sort increases its complexity to the non-linearithmic34, but
still quasilinear35, O(n (log n)²).
• A modern stable linear and in-place merging is block merge sort36 .
An alternative to reduce the copying into multiple lists is to associate a new field of infor-
mation with each key (the elements in m are called keys). This field will be used to link
the keys and any associated information together in a sorted list (a key and its related
information is called a record). Then the merging of the sorted lists proceeds by changing
the link values; no records need to be moved at all. A field which contains only a link will
generally be smaller than an entire record so less space will also be used. This is a standard
sorting technique, not restricted to merge sort.
34 https://en.wikipedia.org/wiki/Linearithmic
35 https://en.wikipedia.org/wiki/Quasilinear_time
36 https://en.wikipedia.org/wiki/Block_merge_sort
Figure 15 Merge sort type algorithms allowed large data sets to be sorted on early
computers that had small random access memories by modern standards. Records were
stored on magnetic tape and processed on banks of magnetic tape drives, such as these
IBM 729s.
An external37 merge sort is practical to run using disk38 or tape39 drives when the data to
be sorted is too large to fit into memory40 . External sorting41 explains how merge sort is
implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is
sequential (except for rewinds at the end of each pass). A minimal implementation can get
by with just two record buffers and a few program variables.
Naming the four tape drives as A, B, C, D, with the original data on A, and using only 2
record buffers, the algorithm is similar to Bottom-up implementation42 , using pairs of tape
drives instead of arrays in memory. The basic algorithm can be described as follows:
37 https://en.wikipedia.org/wiki/External_sorting
38 https://en.wikipedia.org/wiki/Disk_storage
39 https://en.wikipedia.org/wiki/Tape_drive
40 https://en.wikipedia.org/wiki/Primary_storage
41 https://en.wikipedia.org/wiki/External_sorting
42 #Bottom-up_implementation
43 https://en.wikipedia.org/wiki/Hybrid_algorithm
44 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
45 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
46 https://en.wikipedia.org/wiki/K-way_merge_algorithm
47 https://en.wikipedia.org/wiki/Polyphase_merge_sort
Figure 16 Tiled merge sort applied to an array of random integers. The horizontal axis
is the array index and the vertical axis is the integer.
48 https://en.wikipedia.org/wiki/Locality_of_reference
49 https://en.wikipedia.org/wiki/Software_optimization
50 https://en.wikipedia.org/wiki/Memory_hierarchy
51 https://en.wikipedia.org/wiki/Cache_(computing)
52 https://en.wikipedia.org/wiki/Insertion_sort
recursive fashion. This algorithm has demonstrated better performance[example needed] on
machines that benefit from cache optimization. (LaMarca & Ladner 199754)
Kronrod (1969)55 suggested an alternative version of merge sort that uses constant addi-
tional space. This algorithm was later refined. (Katajainen, Pasanen & Teuhola 199656 )
Also, many applications of external sorting58 use a form of merge sorting where the input
is split up into a larger number of sublists, ideally to a number for which merging them still
makes the currently processed set of pages59 fit into main memory.
Merge sort parallelizes well due to the use of the divide-and-conquer60 method. Several
different parallel variants of the algorithm have been developed over the years. Some parallel
merge sort algorithms are strongly related to the sequential top-down merge algorithm while
others have a different general structure and use the K-way merge61 method.
The sequential merge sort procedure can be described in two phases, the divide phase and
the merge phase. The first consists of many recursive calls that repeatedly perform the same
division process until the subsequences are trivially sorted (containing one or no element).
An intuitive approach is the parallelization of those recursive calls.[12] The following pseudocode
describes the merge sort with parallel recursion using the fork and join62 keywords:
// Sort elements lo through hi (exclusive) of array A.
algorithm mergesort(A, lo, hi) is
    if lo+1 < hi then                // Two or more elements.
        mid := ⌊(lo + hi) / 2⌋
        fork mergesort(A, lo, mid)
        mergesort(A, mid, hi)
        join
        merge(A, lo, mid, hi)
This algorithm is the trivial modification of the sequential version and does not parallelize
well. Therefore, its speedup is not very impressive. It has a span63 of Θ(n), which is
only an improvement of Θ(log n) compared to the sequential version (see Introduction to
54 #CITEREFLaMarcaLadner1997
55 #CITEREFKronrod1969
56 #CITEREFKatajainenPasanenTeuhola1996
57 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
58 https://en.wikipedia.org/wiki/External_sorting
59 https://en.wikipedia.org/wiki/Page_(computer_memory)
60 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
61 https://en.wikipedia.org/wiki/K-way_merge_algorithm
62 https://en.wikipedia.org/wiki/Fork%E2%80%93join_model
63 https://en.wikipedia.org/wiki/Analysis_of_parallel_algorithms#Overview
Algorithms64 ). This is mainly due to the sequential merge method, as it is the bottleneck
of the parallel executions.
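On a shared-memory machine the fork and join above can be expressed, for example, with POSIX threads. The sketch below is illustrative only (the cutoff of 4096 elements and all names are made up; compile with -pthread): the left half is handed to a new thread while the calling thread sorts the right half, and the merge runs only after the join.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct span { int *a; int *tmp; int lo; int hi; };    /* half-open range a[lo..hi) */

/* Sequential merge of a[lo..mid) and a[mid..hi), using tmp[lo..hi) as scratch. */
static void merge(int *a, int *tmp, int lo, int mid, int hi)
{
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

static void *mergesort_task(void *arg);

/* "fork" the left half into a new thread, sort the right half here, "join", then merge. */
static void parallel_mergesort(int *a, int *tmp, int lo, int hi)
{
    if (hi - lo < 2) return;                     /* one element: already sorted */
    int mid = lo + (hi - lo) / 2;
    if (hi - lo < 4096) {                        /* small ranges: plain recursion */
        parallel_mergesort(a, tmp, lo, mid);
        parallel_mergesort(a, tmp, mid, hi);
    } else {
        struct span left = { a, tmp, lo, mid };
        pthread_t t;
        pthread_create(&t, NULL, mergesort_task, &left);
        parallel_mergesort(a, tmp, mid, hi);     /* right half in the calling thread */
        pthread_join(t, NULL);                   /* wait before merging the halves */
    }
    merge(a, tmp, lo, mid, hi);
}

static void *mergesort_task(void *arg)
{
    struct span *s = arg;
    parallel_mergesort(s->a, s->tmp, s->lo, s->hi);
    return NULL;
}

int main(void)
{
    enum { N = 1000000 };
    int *a = malloc(N * sizeof *a), *tmp = malloc(N * sizeof *tmp);
    for (int i = 0; i < N; i++) a[i] = rand();
    parallel_mergesort(a, tmp, 0, N);            /* sorts a[0..N) */
    free(a);
    free(tmp);
    return 0;
}

Because the merge at each level is still sequential, this sketch has the Θ(n) span discussed above regardless of how many threads are created.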
Main article: Merge algorithm § Parallel merge65
Better parallelism can be achieved by using a parallel merge algorithm66. Cormen et al.67
present a binary variant that merges two sorted sub-sequences into one sorted output sequence.[12]
In one of the sequences (the longer one if unequal length), the element of the middle index
is selected. Its position in the other sequence is determined in such a way that this sequence
would remain sorted if this element were inserted at this position. Thus, one knows how
many other elements from both sequences are smaller and the position of the selected
element in the output sequence can be calculated. For the partial sequences of the smaller
and larger elements created in this way, the merge algorithm is again executed in parallel
until the base case of the recursion is reached.
The following pseudocode shows the modified parallel merge sort method using the parallel
merge algorithm (adapted from Cormen et al.).
/**
* A: Input array
* B: Output array
* lo: lower bound
* hi: upper bound
* off: offset
*/
algorithm parallelMergesort(A, lo, hi, B, off) is
    len := hi - lo + 1
    if len == 1 then
        B[off] := A[lo]
    else let T[1..len] be a new array
        mid := ⌊(lo + hi) / 2⌋
        mid' := mid - lo + 1
        fork parallelMergesort(A, lo, mid, T, 1)
        parallelMergesort(A, mid + 1, hi, T, mid' + 1)
        join
        parallelMerge(T, 1, mid', mid' + 1, len, B, off)
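The heart of this parallel merge is the split step: take the middle element of the longer input, binary-search its insertion position in the other input, place it, and recurse on the two pairs of partial sequences. A sequential C sketch of that split (illustrative names; the two recursive calls are exactly the ones a parallel version would fork):

#include <stdio.h>

/* Index of the first element of s[0..n) that is >= key (binary search). */
static int lower_bound(const int *s, int n, int key)
{
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (s[mid] < key) lo = mid + 1; else hi = mid;
    }
    return lo;
}

/* Merge sorted a[0..na) and b[0..nb) into out[0..na+nb) by the divide step described
   above; here the two independent recursive calls simply run one after the other. */
static void split_merge(const int *a, int na, const int *b, int nb, int *out)
{
    if (na < nb) {                           /* make a the longer sequence */
        const int *ts = a; a = b; b = ts;
        int tn = na; na = nb; nb = tn;
    }
    if (na == 0) return;                     /* both sequences empty */
    int i = na / 2;                          /* middle element of the longer sequence */
    int j = lower_bound(b, nb, a[i]);        /* its insertion position in the other one */
    out[i + j] = a[i];                       /* its final position in the output is known */
    split_merge(a, i, b, j, out);                                       /* smaller elements */
    split_merge(a + i + 1, na - i - 1, b + j, nb - j, out + i + j + 1); /* larger elements */
}

int main(void)
{
    int a[] = {1, 4, 6, 8}, b[] = {2, 3, 7}, out[7];
    split_merge(a, 4, b, 3, out);
    for (int k = 0; k < 7; k++) printf("%d ", out[k]);   /* prints: 1 2 3 4 6 7 8 */
    printf("\n");
    return 0;
}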
In order to analyze a Recurrence relation68 for the worst case span, the recursive calls
of parallelMergesort have to be incorporated only once due to their parallel execution,
obtaining
T∞^sort(n) = T∞^sort(n/2) + T∞^merge(n) = T∞^sort(n/2) + Θ(log(n)²).
For detailed information about the complexity of the parallel merge procedure, see Merge
algorithm69 .
The solution of this recurrence is given by
64 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
65 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge
66 https://en.wikipedia.org/wiki/Merge_algorithm
67 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
68 https://en.wikipedia.org/wiki/Recurrence_relation
69 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge
T∞^sort(n) = Θ(log(n)³).
This parallel merge algorithm reaches a parallelism of Θ(n / (log n)²), which is much higher
than the parallelism of the previous algorithm. Such a sort can perform well in practice when
combined with a fast stable sequential sort, such as insertion sort70 , and a fast sequential
merge as a base case for merging small arrays.[13]
It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there
are usually p > 2 processors available. A better approach may be to use a K-way merge71
method, a generalization of binary merge, in which k sorted sequences are merged together.
This merge variant is well suited to describe a sorting algorithm on a PRAM72[14][15] .
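A k-way merge repeatedly emits the smallest element among the current heads of the k sequences. The C sketch below is illustrative (it scans all k heads on every step; a practical implementation would find the minimum with a binary heap or loser tree in O(log k) time):

#include <stdio.h>

/* Merge k sorted runs into out[]. runs[i] points to a run of len[i] elements (k <= 16 here). */
static void k_way_merge(const int *runs[], const int len[], int k, int *out)
{
    int pos[16] = {0};                       /* current index into each run */
    int total = 0;
    for (int i = 0; i < k; i++) total += len[i];
    for (int n = 0; n < total; n++) {
        int best = -1;
        for (int i = 0; i < k; i++)          /* find the run with the smallest head */
            if (pos[i] < len[i] && (best < 0 || runs[i][pos[i]] < runs[best][pos[best]]))
                best = i;
        out[n] = runs[best][pos[best]++];    /* emit it and advance that run */
    }
}

int main(void)
{
    const int r0[] = {1, 5, 9}, r1[] = {2, 6}, r2[] = {0, 3, 7, 8};
    const int *runs[] = {r0, r1, r2};
    const int len[] = {3, 2, 4};
    int out[9];
    k_way_merge(runs, len, 3, out);
    for (int i = 0; i < 9; i++) printf("%d ", out[i]);   /* prints: 0 1 2 3 5 6 7 8 9 */
    printf("\n");
    return 0;
}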
Basic Idea
70 https://en.wikipedia.org/wiki/Insertion_sort
71 https://en.wikipedia.org/wiki/K-way_merge_algorithm
72 https://en.wikipedia.org/wiki/Parallel_random-access_machine
Given an unsorted sequence of n elements, the goal is to sort the sequence with p available
processors73 . These elements are distributed equally among all processors and sorted locally
using a sequential Sorting algorithm74 . Hence, the sequence consists of sorted sequences
S1, ..., Sp of length ⌈n/p⌉. For simplification let n be a multiple of p, so that |Si| = n/p for
i = 1, ..., p.
These sequences will be used to perform a multisequence selection/splitter selection. For
j = 1, ..., p, the algorithm determines splitter elements vj with global rank k = j·n/p. Then
the corresponding positions of v1 , ..., vp in each sequence Si are determined with binary
search75 and thus the Si are further partitioned into p subsequences Si,1 , ..., Si,p with
Si,j := {x ∈ Si |rank(vj−1 ) < rank(x) ≤ rank(vj )}.
Furthermore, the elements of S1,i, ..., Sp,i are assigned to processor i, that is, all elements
between rank (i − 1)·n/p and rank i·n/p, which are distributed over all Si. Thus, each processor
receives a sequence of sorted sequences. The fact that the rank k of the splitter elements
vi was chosen globally, provides two important properties: On the one hand, k was chosen
so that each processor can still operate on n/p elements after assignment. The algorithm is
perfectly load-balanced76 . On the other hand, all elements on processor i are less than or
equal to all elements on processor i + 1. Hence, each processor performs the p-way merge77
locally and thus obtains a sorted sequence from its sub-sequences. Because of the second
property, no further p-way-merge has to be performed, the results only have to be put
together in the order of the processor number.
Multisequence selection
In its simplest form, given p sorted sequences S1 , ..., Sp distributed evenly on p processors
and a rank k, the task is to find an element x with a global rank k in the union of the
sequences. Hence, this can be used to divide each Si in two parts at a splitter index li ,
where the lower part contains only elements which are smaller than x, while the elements
bigger than x are located in the upper part.
The presented sequential algorithm returns the indices of the splits in each sequence,
e.g. the indices li in sequences Si such that Si [li ] has a global rank less than k and
rank (Si [li + 1]) ≥ k.[16]
algorithm msSelect(S : Array of sorted Sequences [S_1,..,S_p], k : int) is
for i = 1 to p do
(l_i, r_i) = (0, |S_i|-1)
73 https://en.wikipedia.org/wiki/Processor_(computing)
74 https://en.wikipedia.org/wiki/Sorting_algorithm
75 https://en.wikipedia.org/wiki/Binary_search_algorithm
76 https://en.wikipedia.org/wiki/Load_balancing_(computing)
77 https://en.wikipedia.org/wiki/K-way_merge_algorithm
l := m
return l
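The primitive that each round of this selection executes p-fold is the binary search that counts, for a candidate splitter v, how many elements of each S_i are smaller than v; summing these counts gives v's global rank, which is then compared with the target rank k. An illustrative C sketch of that primitive (not the full msSelect loop; the names are made up for the example):

#include <stdio.h>

/* Number of elements in the sorted array s[0..n) that are strictly smaller than v. */
static int rank_in(const int *s, int n, int v)
{
    int lo = 0, hi = n;
    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (s[mid] < v) lo = mid + 1; else hi = mid;
    }
    return lo;
}

/* Global rank of v across p sorted sequences: the sum of the per-sequence ranks.
   The selection compares this sum with the target rank k to decide how to shrink
   the search windows (l_i, r_i) in every sequence. */
static int global_rank(const int *S[], const int n[], int p, int v)
{
    int r = 0;
    for (int i = 0; i < p; i++)
        r += rank_in(S[i], n[i], v);
    return r;
}

int main(void)
{
    const int s0[] = {1, 4, 9}, s1[] = {2, 3, 8}, s2[] = {0, 5, 7};
    const int *S[] = {s0, s1, s2};
    const int n[] = {3, 3, 3};
    printf("%d\n", global_rank(S, n, 3, 5));   /* prints 5: five elements are smaller than 5 */
    return 0;
}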
For the complexity analysis the PRAM78 model is chosen. If the data is evenly dis-
tributed over all p, the p-fold execution of the binarySearch method has a running time
of O(p log(n/p)). The expected recursion depth is O(log(∑i |Si|)) = O(log(n)) as in the
ordinary Quickselect79. Thus the overall expected running time is O(p log(n/p) log(n)).
Applied on the parallel multiway merge sort, this algorithm has to be invoked in parallel
such that all splitter elements of rank i·n/p for i = 1, ..., p are found simultaneously. These
splitter elements can then be used to partition each sequence in p parts, with the same total
running time of O (p log(n/p) log(n)).
Pseudocode
Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. We
assume that there is a barrier synchronization before and after the multisequence selection
such that every processor can determine the splitting elements and the sequence partition
properly.
/**
* d: Unsorted Array of Elements
* n: Number of Elements
* p: Number of Processors
* return Sorted Array
*/
algorithm parallelMultiwayMergesort(d : Array, n : int, p : int) is
    o := new Array[0, n]                         // the output array
    for i = 1 to p do in parallel                // each processor in parallel
        S_i := d[(i-1) * n/p, i * n/p]           // sequence of length n/p
        sort(S_i)                                // sort locally
        synch
        v_i := msSelect([S_1,...,S_p], i * n/p)  // element with global rank i * n/p
        synch
        (S_i,1 ,..., S_i,p) := sequence_partitioning(S_i, v_1, ..., v_p) // split S_i into subsequences
        o[(i-1) * n/p, i * n/p] := kWayMerge(S_1,i, ..., S_p,i)          // merge and write to the output array
    return o
Analysis
Firstly, each processor sorts the assigned n/p elements locally using a sorting algorithm with
complexity O (n/p log(n/p)). After that, the splitter elements have to be calculated in time
O (p log(n/p) log(n)). Finally, each group of p splits has to be merged in parallel by each
78 https://en.wikipedia.org/wiki/Parallel_random-access_machine
79 https://en.wikipedia.org/wiki/Quickselect
processor with a running time of O(log(p)n/p) using a sequential p-way merge algorithm80 .
Thus, the overall running time is given by
O((n/p) log(n/p) + p log(n/p) log(n) + (n/p) log(p)).
The multiway merge sort algorithm is very scalable through its high parallelization capabil-
ity, which allows the use of many processors. This makes the algorithm a viable candidate
for sorting large amounts of data, such as those processed in computer clusters81 . Also,
since in such systems memory is usually not a limiting resource, the disadvantage of space
complexity of merge sort is negligible. However, other factors become important in such
systems, which are not taken into account when modelling on a PRAM82 . Here, the follow-
ing aspects need to be considered: Memory hierarchy83, when the data does not fit into the
processors' cache, or the communication overhead of exchanging data between processors,
which could become a bottleneck when the data can no longer be accessed via the shared
memory.
Sanders84 et al. have presented in their paper a bulk synchronous parallel85 algorithm for
multilevel multiway mergesort, which divides p processors into r groups of size p′ . All
processors sort locally first. Unlike single level multiway mergesort, these sequences are
then partitioned into r parts and assigned to the appropriate processor groups. These
steps are repeated recursively in those groups. This reduces communication and especially
avoids problems with many small messages. The hierarchical structure of the underlying real
network can be used to define the processor groups (e.g. racks86 , clusters87 ,...).[15]
Merge sort was one of the first sorting algorithms where optimal speed up was achieved, with
Richard Cole using a clever subsampling algorithm to ensure O(1) merge.[17] Other sophis-
ticated parallel sorting algorithms can achieve the same or better time bounds with a lower
constant. For example, in 1991 David Powers described a parallelized quicksort88 (and a
related radix sort89 ) that can operate in O(log n) time on a CRCW90 parallel random-access
machine91 (PRAM) with n processors by performing partitioning implicitly.[18] Powers fur-
ther shows that a pipelined version of Batcher's Bitonic Mergesort92 at O((log n)2 ) time
80 https://en.wikipedia.org/wiki/Merge_algorithm
81 https://en.wikipedia.org/wiki/Computer_cluster
82 https://en.wikipedia.org/wiki/Parallel_random-access_machine
83 https://en.wikipedia.org/wiki/Memory_hierarchy
84 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
85 https://en.wikipedia.org/wiki/Bulk_synchronous_parallel
86 https://en.wikipedia.org/wiki/19-inch_rack
87 https://en.wikipedia.org/wiki/Computer_cluster
88 https://en.wikipedia.org/wiki/Quicksort
89 https://en.wikipedia.org/wiki/Radix_sort
90 https://en.wikipedia.org/wiki/CRCW
91 https://en.wikipedia.org/wiki/Parallel_random-access_machine
92 https://en.wikipedia.org/wiki/Bitonic_sorter
Comparison with other sort algorithms
on a butterfly sorting network93 is in practice actually faster than his O(log n) sorts on a
PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix
and parallel sorting.[19]
Although heapsort94 has the same time bounds as merge sort, it requires only Θ(1) auxiliary
space instead of merge sort's Θ(n). On typical modern architectures, efficient quicksort95
implementations generally outperform mergesort for sorting RAM-based arrays.[citation needed]
On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-
access sequential media. Merge sort is often the best choice for sorting a linked list97 : in this
situation it is relatively easy to implement a merge sort in such a way that it requires only
Θ(1) extra space, and the slow random-access performance of a linked list makes some other
algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely
impossible.
As of Perl98 5.8, merge sort is its default sorting algorithm (it was quicksort in previous
versions of Perl). In Java99 , the Arrays.sort()100 methods use merge sort or a tuned quicksort
depending on the datatypes and for implementation efficiency switch to insertion sort101
when fewer than seven array elements are being sorted.[20] The Linux102 kernel uses merge
sort for its linked lists.[21] Python103 uses Timsort104 , another tuned hybrid of merge sort
and insertion sort, that has become the standard sort algorithm in Java SE 7105 (for arrays
of non-primitive types),[22] on the Android platform106 ,[23] and in GNU Octave107 .[24]
5.9 Notes
1. Skiena (2008108 , p. 122)
2. Knuth (1998109 , p. 158)
3. Katajainen, Jyrki; Träff, Jesper Larsson (March 1997). ”A meticulous analysis of
mergesort programs”110 (PDF). Proceedings of the 3rd Italian Con-
93 https://en.wikipedia.org/wiki/Sorting_network
94 https://en.wikipedia.org/wiki/Heapsort
95 https://en.wikipedia.org/wiki/Quicksort
97 https://en.wikipedia.org/wiki/Linked_list
98 https://en.wikipedia.org/wiki/Perl
99 https://en.wikipedia.org/wiki/Java_platform
https://docs.oracle.com/javase/9/docs/api/java/util/Arrays.html#sort-java.lang.
100
Object:A-
101 https://en.wikipedia.org/wiki/Insertion_sort
102 https://en.wikipedia.org/wiki/Linux
103 https://en.wikipedia.org/wiki/Python_(programming_language)
104 https://en.wikipedia.org/wiki/Timsort
105 https://en.wikipedia.org/wiki/Java_7
106 https://en.wikipedia.org/wiki/Android_(operating_system)
107 https://en.wikipedia.org/wiki/GNU_Octave
108 #CITEREFSkiena2008
109 #CITEREFKnuth1998
110 http://hjemmesider.diku.dk/~jyrki/Paper/CIAC97.pdf
111 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
112 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.3154
113 https://en.wikipedia.org/wiki/Doi_(identifier)
114 https://doi.org/10.1007%2F3-540-62592-5_74
115 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
116 https://en.wikipedia.org/wiki/Donald_Knuth
117 https://en.wikipedia.org/wiki/Art_of_Computer_Programming
118 https://en.wikipedia.org/wiki/ISBN_(identifier)
119 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03384-8
120 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
121 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.1016%2FS0304-3975%2898%2900162-5
124 https://en.wikipedia.org/wiki/Doi_(identifier)
125 https://doi.org/10.1145%2F42392.42403
126 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
127 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.4612
128 https://en.wikipedia.org/wiki/Doi_(identifier)
129 https://doi.org/10.1007%2F978-3-540-30140-0_63
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-23025-0
132 #CITEREFCormenLeisersonRivestStein2009
133 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
13. Victor J. Duvanenko ”Parallel Merge Sort” Dr. Dobb's Journal & blog[1]134 and
GitHub repo C++ implementation [2]135
14. Peter Sanders, Johannes Singler. 2008. Lecture Parallel algorithms Last visited
05.02.2020. 136
15. ”Practical Massively Parallel Sorting | Proceedings of the 27th ACM Symposium
on Parallelism in Algorithms and Architectures”.
doi137:10.1145/2755573.2755595138.
16. Peter Sanders. 2019. Lecture Parallel algorithms Last visited 05.02.2020. 140
17. Cole, Richard (August 1988). ”Parallel merge sort”. SIAM J. Comput.
17 (4): 770–785. CiteSeerX141 10.1.1.464.7118142. doi143:10.1137/0217049144.
18. Powers, David M. W. Parallelized Quicksort and Radixsort with Optimal Speedup146 ,
Proceedings of International Conference on Parallel Computing Technologies. Novosi-
birsk147 . 1991.
19. David M. W. Powers, Parallel Unification: Practical Complexity148 , Australasian
Computer Architecture Workshop, Flinders University, January 1995
20. OpenJDK src/java.base/share/classes/java/util/Arrays.java @ 53904:9c3fe09f69bc149
21. linux kernel /lib/list_sort.c150
22. ”Commit 6804124: Replace ”modified mergesort” in java.util.Arrays.sort with
timsort”151. Java Development Kit 7 Hg repo.
Archived152 from the original on 2018-01-26. Retrieved 24 Feb 2011.
23. ”Class: java.util.TimSort<T>”153. Android JDK Documentation. Archived
from the original154 on January 20, 2015. Retrieved 19 Jan 2015.
24. ”liboctave/util/oct-sort.cc”155. Mercurial repository of Octave source code.
Lines 23-25 of the initial comment block. Retrieved 18 Feb 2013. Code stolen in large
134 https://duvanenko.tech.blog/2018/01/13/parallel-merge-sort/
135 https://github.com/DragonSpit/ParallelAlgorithms
136 http://algo2.iti.kit.edu/sanders/courses/paralg08/singler.pdf
137 https://en.wikipedia.org/wiki/Doi_(identifier)
138 https://doi.org/10.1145%2F2755573.2755595
139 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
140 http://algo2.iti.kit.edu/sanders/courses/paralg19/vorlesung.pdf
141 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
142 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.464.7118
143 https://en.wikipedia.org/wiki/Doi_(identifier)
144 https://doi.org/10.1137%2F0217049
145 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
146 http://citeseer.ist.psu.edu/327487.html
147 https://en.wikipedia.org/wiki/Novosibirsk
148 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
https://hg.openjdk.java.net/jdk/jdk/file/9c3fe09f69bc/src/java.base/share/classes/
149
java/util/Arrays.java#l1331
150 https://github.com/torvalds/linux/blob/master/lib/list_sort.c
151 http://hg.openjdk.java.net/jdk7/jdk7/jdk/rev/bfd7abda8f79
https://web.archive.org/web/20180126184957/http://hg.openjdk.java.net/jdk7/jdk7/jdk/
152
rev/bfd7abda8f79
https://web.archive.org/web/20150120063131/https://android.googlesource.com/platform/
153
libcore/%2B/jb-mr2-release/luni/src/main/java/java/util/TimSort.java
https://android.googlesource.com/platform/libcore/+/jb-mr2-release/luni/src/main/
154
java/java/util/TimSort.java
155 http://hg.savannah.gnu.org/hgweb/octave/file/0486a29d780f/liboctave/util/oct-sort.cc
part from Python's, listobject.c, which itself had no license header. However, thanks
to Tim Peters156 for the parts of the code I ripped-off.
5.10 References
• Cormen, Thomas H.157; Leiserson, Charles E.158; Rivest, Ronald L.159; Stein,
Clifford160 (2009) [1990]. Introduction to Algorithms161 (3rd ed.). MIT Press and
McGraw-Hill. ISBN162 0-262-03384-4163.
• Katajainen, Jyrki; Pasanen, Tomi; Teuhola, Jukka (1996). ”Practical in-place
mergesort”165. Nordic Journal of Computing. 3. pp. 27–40. ISSN166 1236-
6064167. Archived from the original168 on 2011-08-07. Retrieved 2009-04-04.
Also Practical In-Place Mergesort170. Also [3]171
• Knuth, Donald172 (1998). ”Section 5.2.4: Sorting by Merging”. Sorting and
Searching. The Art of Computer Programming173. 3 (2nd ed.). Addison-Wesley.
pp. 158–168. ISBN174 0-201-89685-0175.
• Kronrod, M. A. (1969). ”Optimal ordering algorithm without operational
field”. Soviet Mathematics - Doklady. 10. p. 744.
• LaMarca, A.; Ladner, R. E. (1997). ”The influence of caches on the performance
of sorting”. Proc. 8th Ann. ACM-SIAM Symp. On Discrete Algorithms
(SODA97): 370–379. CiteSeerX178 10.1.1.31.1153179.
156 https://en.wikipedia.org/wiki/Tim_Peters_(software_engineer)
157 https://en.wikipedia.org/wiki/Thomas_H._Cormen
158 https://en.wikipedia.org/wiki/Charles_E._Leiserson
159 https://en.wikipedia.org/wiki/Ron_Rivest
160 https://en.wikipedia.org/wiki/Clifford_Stein
161 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
164 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
https://web.archive.org/web/20110807033704/http://www.diku.dk/hjemmesider/ansatte/
165
jyrki/Paper/mergesort_NJC.ps
166 https://en.wikipedia.org/wiki/ISSN_(identifier)
167 http://www.worldcat.org/issn/1236-6064
168 http://www.diku.dk/hjemmesider/ansatte/jyrki/Paper/mergesort_NJC.ps
169 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
170 http://citeseer.ist.psu.edu/katajainen96practical.html
171 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
172 https://en.wikipedia.org/wiki/Donald_Knuth
173 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
174 https://en.wikipedia.org/wiki/ISBN_(identifier)
175 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
176 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
177 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
178 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
179 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1153
180 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
External links
The Wikibook Algorithm implementation187 has a page on the topic of: Merge
sort188
181 https://en.wikipedia.org/wiki/Steven_Skiena
182 https://en.wikipedia.org/wiki/ISBN_(identifier)
183 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
184 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
185 http://java.sun.com/javase/6/docs/api/java/util/Arrays.html
186 https://docs.oracle.com/javase/10/docs/api/java/util/Arrays.html
187 https://en.wikibooks.org/wiki/Algorithm_implementation
188 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Merge_sort
https://web.archive.org/web/20150306071601/http://www.sorting-algorithms.com/merge-
189
sort
190 https://en.wikipedia.org/wiki/Wayback_Machine
http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_
191
Sorti.html#SECTION001411000000000000000
192 https://en.wikipedia.org/wiki/Pat_Morin
6 Merge sort
Merge sort
An example of merge sort. First divide the list into the smallest unit (1 element), then
compare each element with the adjacent list to sort and merge the two adjacent lists.
Finally all the elements are sorted and merged.
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n log n)
Best-case performance: O(n log n) typical, O(n) natural variant
Average performance: O(n log n)
Worst-case space complexity: O(n) total with O(n) auxiliary, O(1) auxiliary with linked lists[1]
6 https://en.wikipedia.org/wiki/Computer_science
7 https://en.wikipedia.org/wiki/Comparison_sort
8 https://en.wikipedia.org/wiki/Sorting_algorithm
9 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
output. Merge sort is a divide and conquer algorithm10 that was invented by John von Neu-
mann11 in 1945.[2] A detailed description and analysis of bottom-up mergesort appeared in
a report by Goldstine12 and von Neumann13 as early as 1948.[3]
6.1 Algorithm
Example C-like15 code using indices for top-down merge sort algorithm that recursively
splits the list (called runs in this example) into sublists until sublist size is 1, then merges
those sublists to produce a sorted list. The copy back step is avoided by alternating the
direction of the merge with each level of recursion (except for an initial one-time copy). To
help understand this, consider an array with 2 elements. The elements are copied to B[],
then merged back to A[]. If there are 4 elements, when the bottom of recursion level is
reached, single element runs from A[] are merged to B[], and then at the next higher level
of recursion, those 2 element runs are merged to A[]. This pattern continues with each level
of recursion.
// Array A[] has the items to sort; array B[] is a work array.
void TopDownMergeSort(A[], B[], n)
{
    CopyArray(A, 0, n, B);            // one time copy of A[] to B[]
    TopDownSplitMerge(B, 0, n, A);    // sort data from B[] into A[]
}

// Sort the given run of array A[] using array B[] as a source.
// iBegin is inclusive; iEnd is exclusive (A[iEnd] is not in the set).
void TopDownSplitMerge(B[], iBegin, iEnd, A[])
{
    if (iEnd - iBegin < 2)                        // if run size == 1
        return;                                   //   consider it sorted
    // split the run longer than 1 item into halves
    iMiddle = (iEnd + iBegin) / 2;                // iMiddle = mid point
    // recursively sort both runs from array A[] into B[]
    TopDownSplitMerge(A, iBegin, iMiddle, B);     // sort the left run
    TopDownSplitMerge(A, iMiddle, iEnd, B);       // sort the right run
    // merge the resulting runs from array B[] into A[]
    TopDownMerge(B, iBegin, iMiddle, iEnd, A);
}
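In the same C-like style, the merge and copy helpers this code relies on can be sketched as follows; this is an illustrative completion consistent with the calls above, not necessarily the original formulation.

//  Left source half is A[iBegin:iMiddle-1].
// Right source half is A[iMiddle:iEnd-1].
// Result is            B[iBegin:iEnd-1].
void TopDownMerge(A[], iBegin, iMiddle, iEnd, B[])
{
    i = iBegin, j = iMiddle;
    // While there are elements in the left or right runs...
    for (k = iBegin; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iMiddle && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

// Copy A[iBegin:iEnd-1] to B[iBegin:iEnd-1], matching the call CopyArray(A, 0, n, B).
void CopyArray(A[], iBegin, iEnd, B[])
{
    for (k = iBegin; k < iEnd; k++)
        B[k] = A[k];
}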
10 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
11 https://en.wikipedia.org/wiki/John_von_Neumann
12 https://en.wikipedia.org/wiki/Herman_Goldstine
13 https://en.wikipedia.org/wiki/John_von_Neumann
14 https://en.wikipedia.org/wiki/Merge_algorithm
15 https://en.wikipedia.org/wiki/C-like
Example C-like code using indices for bottom-up merge sort algorithm which treats the
list as an array of n sublists (called runs in this example) of size 1, and iteratively merges
sub-lists back and forth between two buffers:
// array A[] has the items to sort; array B[] is a work array
void BottomUpMergeSort(A[], B[], n)
{
    // Each 1-element run in A is already "sorted".
    // Make successively longer sorted runs of length 2, 4, 8, 16... until the whole array is sorted.
    for (width = 1; width < n; width = 2 * width)
    {
        // Array A is full of runs of length width.
        for (i = 0; i < n; i = i + 2 * width)
        {
            // Merge two runs: A[i:i+width-1] and A[i+width:i+2*width-1] to B[]
            // or copy A[i:n-1] to B[] ( if(i+width >= n) )
            BottomUpMerge(A, i, min(i+width, n), min(i+2*width, n), B);
        }
        // Now work array B is full of runs of length 2*width.
        // Copy array B to array A for the next iteration.
        // A more efficient implementation would swap the roles of A and B.
        CopyArray(B, A, n);
        // Now array A is full of runs of length 2*width.
    }
}
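Again in the same C-like style, an illustrative sketch of the helpers used above, consistent with the calls BottomUpMerge(A, i, min(i+width, n), min(i+2*width, n), B) and CopyArray(B, A, n):

//  Left run is A[iLeft :iRight-1].
// Right run is A[iRight:iEnd-1].
void BottomUpMerge(A[], iLeft, iRight, iEnd, B[])
{
    i = iLeft, j = iRight;
    // While there are elements in the left or right runs...
    for (k = iLeft; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iRight && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

// Copy all n elements of B[] back to A[].
void CopyArray(B[], A[], n)
{
    for (i = 0; i < n; i++)
        A[i] = B[i];
}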
Pseudocode16 for top-down merge sort algorithm which recursively divides the input list
into smaller sublists until the sublists are trivially sorted, and then merges the sublists
while returning up the call chain.
function merge_sort(list m) is
// Base case. A list of zero or one elements is sorted, by definition.
if length of m ≤ 1 then
return m
In this example, the merge function merges the left and right sublists.
function merge(left, right) is
var result := empty list
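A complete top-down merge sort for a singly linked list, following the same split/recurse/merge outline, might look like this in C (an illustrative sketch; the node type and helper names are made up for the example):

#include <stdio.h>

struct node { int key; struct node *next; };

/* Merge two sorted lists into one sorted list (stable: ties keep the left element first). */
static struct node *merge(struct node *left, struct node *right)
{
    struct node head, *tail = &head;
    while (left != NULL && right != NULL) {
        if (left->key <= right->key) { tail->next = left;  left  = left->next;  }
        else                         { tail->next = right; right = right->next; }
        tail = tail->next;
    }
    tail->next = (left != NULL) ? left : right;
    return head.next;
}

/* Top-down merge sort: split the list in the middle, sort both halves, merge them. */
static struct node *merge_sort(struct node *m)
{
    if (m == NULL || m->next == NULL)             /* base case: zero or one element */
        return m;
    struct node *slow = m, *fast = m->next;
    while (fast != NULL && fast->next != NULL) {  /* find the middle node */
        slow = slow->next;
        fast = fast->next->next;
    }
    struct node *right = slow->next;              /* split into two halves */
    slow->next = NULL;
    return merge(merge_sort(m), merge_sort(right));
}

int main(void)
{
    struct node n[5] = {{3, &n[1]}, {1, &n[2]}, {4, &n[3]}, {1, &n[4]}, {5, NULL}};
    for (struct node *p = merge_sort(&n[0]); p != NULL; p = p->next)
        printf("%d ", p->key);                    /* prints: 1 1 3 4 5 */
    printf("\n");
    return 0;
}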
16 https://en.wikipedia.org/wiki/Pseudocode
Pseudocode17 for bottom-up merge sort algorithm which uses a small fixed size array of
references to nodes, where array[i] is either a reference to a list of size 2^i or nil18. node is
a reference or pointer to a node. The merge() function would be similar to the one shown
in the top-down merge lists example, it merges two already sorted lists, and handles empty
lists. In this case, merge() would use node for its input parameters and return value.
function merge_sort(node head) is
    // return if empty list
    if head = nil then
        return nil
    var node array[32]; initially all nil
    var node result
    var node next
    var int i
    result := head
    // merge nodes into array
    while result ≠ nil do
        next := result.next;
        result.next := nil
        for (i = 0; (i < 32) && (array[i] ≠ nil); i += 1) do
            result := merge(array[i], result)
            array[i] := nil
        // do not go past end of array
        if i = 32 then
            i -= 1
        array[i] := result
        result := next
    // merge array into single list
    result := nil
    for (i = 0; i < 32; i += 1) do
        result := merge(array[i], result)
    return result
A natural merge sort is similar to a bottom-up merge sort except that any naturally occur-
ring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (al-
ternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being
convenient data structures (used as FIFO queues19 or LIFO stacks20 ).[4] In the bottom-up
merge sort, the starting point assumes each run is one item long. In practice, random input
17 https://en.wikipedia.org/wiki/Pseudocode
18 https://en.wikipedia.org/wiki/Null_pointer
19 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
20 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
91
Merge sort
data will have many short runs that just happen to be sorted. In the typical case, the
natural merge sort may not need as many passes because there are fewer runs to merge.
In the best case, the input is already sorted (i.e., is one run), so the natural merge sort
need only make one pass through the data. In many practical cases, long natural runs
are present, and for that reason natural merge sort is exploited as the key component of
Timsort21 . Example:
Start : 3 4 2 1 7 5 8 9 0 6
Select runs : (3 4)(2)(1 7)(5 8 9)(0 6)
Merge : (2 3 4)(1 5 7 8 9)(0 6)
Merge : (1 2 3 4 5 7 8 9)(0 6)
Merge : (0 1 2 3 4 5 6 7 8 9)
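The run-selection step can be sketched in C as follows (illustrative only; it merely identifies the maximal non-decreasing runs that the merge passes then combine, reproducing the ”Select runs” line of the example above):

#include <stdio.h>

/* Print the maximal non-decreasing runs of a[0..n), e.g. the "Select runs" step above. */
static void print_runs(const int *a, int n)
{
    int start = 0;
    for (int i = 1; i <= n; i++) {
        if (i == n || a[i] < a[i - 1]) {     /* a run ends before a descent (or at the end) */
            printf("(");
            for (int j = start; j < i; j++) printf(j > start ? " %d" : "%d", a[j]);
            printf(")");
            start = i;
        }
    }
    printf("\n");
}

int main(void)
{
    int a[] = {3, 4, 2, 1, 7, 5, 8, 9, 0, 6};
    print_runs(a, 10);                       /* prints: (3 4)(2)(1 7)(5 8 9)(0 6) */
    return 0;
}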
Tournament replacement selection sorts22 are used to gather the initial runs for external
sorting algorithms.
21 https://en.wikipedia.org/wiki/Timsort
22 https://en.wikipedia.org/wiki/Tournament_sort
6.3 Analysis
Figure 18 A recursive merge sort algorithm used to sort an array of 7 integer values.
These are the steps a human would take to emulate merge sort (top-down).
23 https://en.wikipedia.org/wiki/Average_performance
24 https://en.wikipedia.org/wiki/Worst-case_performance
25 https://en.wikipedia.org/wiki/Big_O_notation
26 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
In the worst case, the number of comparisons merge sort makes is given by the sorting
numbers27. These numbers are equal to or slightly smaller than (n ⌈lg28 n⌉ − 2^⌈lg n⌉ + 1),
which is between (n lg n − n + 1) and (n lg n + n + O(lg n)).[5]
For large n and a randomly ordered input list, merge sort's expected (average) number of
comparisons approaches α·n fewer than the worst case, where α = −1 + ∑k=0…∞ 1/(2^k + 1) ≈ 0.2645.
In the worst case, merge sort does about 39% fewer comparisons than quicksort29 does in
the average case. In terms of moves, merge sort's worst case complexity is O30 (n log n)—
the same complexity as quicksort's best case, and merge sort's best case takes about half
as many iterations as the worst case.[citation needed]
Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can
only be efficiently accessed sequentially, and is thus popular in languages such as Lisp32 ,
where sequentially accessed data structures are very common. Unlike some (efficient) im-
plementations of quicksort, merge sort is a stable sort.
Merge sort's most common implementation does not sort in place;[6] therefore, the memory
size of the input must be allocated for the sorted output to be stored in (see below for
versions that need only n/2 extra spaces).
6.4 Variants
Variants of merge sort are primarily concerned with reducing the space complexity and the
cost of copying.
A simple alternative for reducing the space overhead to n/2 is to maintain left and right as
a combined structure, copy only the left part of m into temporary space, and to direct the
merge routine to place the merged output into m. With this version it is better to allocate
the temporary space outside the merge routine, so that only one allocation is needed. The
excessive copying mentioned previously is also mitigated, since the last pair of lines before
the return result statement (function merge in the pseudocode above) become superfluous.
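A sketch of this variant in C (illustrative names; the caller allocates a temporary buffer of only ⌊n/2⌋ elements once, as suggested above, and only the left run is ever copied out):

#include <stdio.h>
#include <string.h>

/* Merge a[0..mid) and a[mid..n) in place in a[], using scratch space for only the left run. */
static void merge_halfspace(int *a, int mid, int n, int *tmp)
{
    memcpy(tmp, a, (size_t)mid * sizeof(int));   /* copy only the left run */
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n)
        a[k++] = (tmp[i] <= a[j]) ? tmp[i++] : a[j++];
    while (i < mid)                              /* leftovers of the copied left run */
        a[k++] = tmp[i++];
    /* leftovers of the right run are already in their final place */
}

static void mergesort_halfspace(int *a, int n, int *tmp)
{
    if (n < 2) return;
    int mid = n / 2;
    mergesort_halfspace(a, mid, tmp);
    mergesort_halfspace(a + mid, n - mid, tmp);
    merge_halfspace(a, mid, n, tmp);
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 6, 3}, tmp[3];        /* scratch of only n/2 elements */
    mergesort_halfspace(a, 6, tmp);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);  /* prints: 1 2 3 5 6 9 */
    printf("\n");
    return 0;
}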
One drawback of merge sort, when implemented on arrays, is its O(n) working memory
requirement. Several in-place33 variants have been suggested:
• Katajainen et al. present an algorithm that requires a constant amount of working mem-
ory: enough storage space to hold one element of the input array, and additional space
to hold O(1) pointers into the input array. They achieve an O(n log n) time bound with
small constants, but their algorithm is not stable.[7]
• Several attempts have been made at producing an in-place merge algorithm that can
be combined with a standard (top-down or bottom-up) merge sort to produce an in-
27 https://en.wikipedia.org/wiki/Sorting_number
28 https://en.wikipedia.org/wiki/Binary_logarithm
29 https://en.wikipedia.org/wiki/Quicksort
30 https://en.wikipedia.org/wiki/Big_O_notation
32 https://en.wikipedia.org/wiki/Lisp_programming_language
33 https://en.wikipedia.org/wiki/In-place_algorithm
place merge sort. In this case, the notion of ”in-place” can be relaxed to mean ”taking
logarithmic stack space”, because standard merge sort requires that amount of space
for its own stack usage. It was shown by Geffert et al. that in-place, stable merging is
possible in O(n log n) time using a constant amount of scratch space, but their algorithm
is complicated and has high constant factors: merging arrays of length n and m can take
5n + 12m + o(m) moves.[8] This high constant factor and complicated in-place algorithm
was made simpler and easier to understand. Bing-Chao Huang and Michael A. Langston[9]
presented a straightforward linear time algorithm practical in-place merge to merge a
sorted list using fixed amount of additional space. They both have used the work of
Kronrod and others. It merges in linear time and constant extra space. The algorithm
takes little more average time than standard merge sort algorithms, free to exploit O(n)
temporary extra memory cells, by less than a factor of two. Although the algorithm is
much faster in practice, it is also unstable for some lists. Using similar concepts, however,
they were able to solve this problem. Other in-place algorithms include SymMerge, which
takes O((n + m) log (n + m)) time in total and is stable.[10] Plugging such an algorithm
into merge sort increases its complexity to the non-linearithmic34, but still quasilinear35,
O(n (log n)²).
• A modern, stable, linear, and in-place merge variant is block merge sort36.
An alternative to reduce the copying into multiple lists is to associate a new field of infor-
mation with each key (the elements in m are called keys). This field will be used to link
the keys and any associated information together in a sorted list (a key and its related
information is called a record). Then the merging of the sorted lists proceeds by changing
the link values; no records need to be moved at all. A field which contains only a link will
generally be smaller than an entire record so less space will also be used. This is a standard
sorting technique, not restricted to merge sort.
34 https://en.wikipedia.org/wiki/Linearithmic
35 https://en.wikipedia.org/wiki/Quasilinear_time
36 https://en.wikipedia.org/wiki/Block_merge_sort
Figure 19 Merge sort type algorithms allowed large data sets to be sorted on early
computers that had small random access memories by modern standards. Records were
stored on magnetic tape and processed on banks of magnetic tape drives, such as these
IBM 729s.
An external37 merge sort is practical to run using disk38 or tape39 drives when the data to
be sorted is too large to fit into memory40 . External sorting41 explains how merge sort is
implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is
sequential (except for rewinds at the end of each pass). A minimal implementation can get
by with just two record buffers and a few program variables.
Naming the four tape drives as A, B, C, D, with the original data on A, and using only 2
record buffers, the algorithm is similar to Bottom-up implementation42 , using pairs of tape
drives instead of arrays in memory. The basic algorithm can be described as follows:
1. Merge pairs of records from A, writing two-record sublists alternately to C and D.
2. Merge two-record sublists from C and D into four-record sublists, writing these alternately to A and B.
3. Merge four-record sublists from A and B into eight-record sublists, writing these alternately to C and D.
4. Repeat until you have one list containing all the data, sorted, after about log2(n) passes.
37 https://en.wikipedia.org/wiki/External_sorting
38 https://en.wikipedia.org/wiki/Disk_storage
39 https://en.wikipedia.org/wiki/Tape_drive
40 https://en.wikipedia.org/wiki/Primary_storage
41 https://en.wikipedia.org/wiki/External_sorting
42 #Bottom-up_implementation
43 https://en.wikipedia.org/wiki/Hybrid_algorithm
44 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
45 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
46 https://en.wikipedia.org/wiki/K-way_merge_algorithm
47 https://en.wikipedia.org/wiki/Polyphase_merge_sort
Figure 20 Tiled merge sort applied to an array of random integers. The horizontal axis
is the array index and the vertical axis is the integer.
48 https://en.wikipedia.org/wiki/Locality_of_reference
49 https://en.wikipedia.org/wiki/Software_optimization
50 https://en.wikipedia.org/wiki/Memory_hierarchy
51 https://en.wikipedia.org/wiki/Cache_(computing)
52 https://en.wikipedia.org/wiki/Insertion_sort
recursive fashion. This algorithm has demonstrated better performance[example needed] on
machines that benefit from cache optimization. (LaMarca & Ladner 199754)
Kronrod (1969)55 suggested an alternative version of merge sort that uses constant addi-
tional space. This algorithm was later refined. (Katajainen, Pasanen & Teuhola 199656 )
Also, many applications of external sorting58 use a form of merge sorting where the input
is split up into a larger number of sublists, ideally to a number for which merging them still
makes the currently processed set of pages59 fit into main memory.
Merge sort parallelizes well due to the use of the divide-and-conquer60 method. Several
different parallel variants of the algorithm have been developed over the years. Some parallel
merge sort algorithms are strongly related to the sequential top-down merge algorithm while
others have a different general structure and use the K-way merge61 method.
The sequential merge sort procedure can be described in two phases, the divide phase and
the merge phase. The first consists of many recursive calls that repeatedly perform the same
division process until the subsequences are trivially sorted (containing one or no element).
An intuitive approach is the parallelization of those recursive calls.[12] The following pseudocode
describes the merge sort with parallel recursion using the fork and join62 keywords:
// Sort elements lo through hi (exclusive) of array A.
algorithm mergesort(A, lo, hi) is
    if lo+1 < hi then                // Two or more elements.
        mid := ⌊(lo + hi) / 2⌋
        fork mergesort(A, lo, mid)
        mergesort(A, mid, hi)
        join
        merge(A, lo, mid, hi)
This algorithm is the trivial modification of the sequential version and does not parallelize
well. Therefore, its speedup is not very impressive. It has a span63 of Θ(n), which is
only an improvement of Θ(log n) compared to the sequential version (see Introduction to
54 #CITEREFLaMarcaLadner1997
55 #CITEREFKronrod1969
56 #CITEREFKatajainenPasanenTeuhola1996
57 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors
58 https://en.wikipedia.org/wiki/External_sorting
59 https://en.wikipedia.org/wiki/Page_(computer_memory)
60 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
61 https://en.wikipedia.org/wiki/K-way_merge_algorithm
62 https://en.wikipedia.org/wiki/Fork%E2%80%93join_model
63 https://en.wikipedia.org/wiki/Analysis_of_parallel_algorithms#Overview
Algorithms64 ). This is mainly due to the sequential merge method, as it is the bottleneck
of the parallel executions.
Main article: Merge algorithm § Parallel merge65
Better parallelism can be achieved by using a parallel merge algorithm66. Cormen et al.67
present a binary variant that merges two sorted sub-sequences into one sorted output sequence.[12]
In one of the sequences (the longer one if unequal length), the element of the middle index
is selected. Its position in the other sequence is determined in such a way that this sequence
would remain sorted if this element were inserted at this position. Thus, one knows how
many other elements from both sequences are smaller and the position of the selected
element in the output sequence can be calculated. For the partial sequences of the smaller
and larger elements created in this way, the merge algorithm is again executed in parallel
until the base case of the recursion is reached.
The following pseudocode shows the modified parallel merge sort method using the parallel
merge algorithm (adopted from Cormen et al.).
/**
* A: Input array
* B: Output array
* lo: lower bound
* hi: upper bound
* off: offset
*/
algorithm parallelMergesort(A, lo, hi, B, off) is
    len := hi - lo + 1
    if len == 1 then
        B[off] := A[lo]
    else let T[1..len] be a new array
        mid := ⌊(lo + hi) / 2⌋
        mid' := mid - lo + 1
        fork parallelMergesort(A, lo, mid, T, 1)
        parallelMergesort(A, mid + 1, hi, T, mid' + 1)
        join
        parallelMerge(T, 1, mid', mid' + 1, len, B, off)
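The divide step of this parallel merge can be sketched in Python as follows. The function and variable names are ours, not Cormen et al.'s, and the two calls marked as forkable run sequentially in this sketch:

import bisect

def parallel_merge(src, lo1, hi1, lo2, hi2, dst, off):
    # Merge the sorted slices src[lo1:hi1] and src[lo2:hi2] into dst[off:...].
    n1, n2 = hi1 - lo1, hi2 - lo2
    if n1 < n2:                                   # make the first slice the longer one
        lo1, hi1, lo2, hi2 = lo2, hi2, lo1, hi1
        n1, n2 = n2, n1
    if n1 == 0:                                   # both slices are empty
        return
    mid1 = (lo1 + hi1) // 2                       # middle element of the longer slice
    pivot = src[mid1]
    mid2 = bisect.bisect_left(src, pivot, lo2, hi2)   # where the pivot would fall in the shorter slice
    out = off + (mid1 - lo1) + (mid2 - lo2)       # final position of the pivot in dst
    dst[out] = pivot
    parallel_merge(src, lo1, mid1, lo2, mid2, dst, off)          # could be forked
    parallel_merge(src, mid1 + 1, hi1, mid2, hi2, dst, out + 1)  # could be forked
    # a join would happen here in the parallel version

src = [1, 4, 7, 9, 2, 3, 8]          # two sorted runs: src[0:4] and src[4:7]
dst = [None] * len(src)
parallel_merge(src, 0, 4, 4, 7, dst, 0)
# dst is now [1, 2, 3, 4, 7, 8, 9]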
In order to analyze a Recurrence relation68 for the worst case span, the recursive calls
of parallelMergesort have to be incorporated only once due to their parallel execution,
obtaining
T∞^sort(n) = T∞^sort(n/2) + T∞^merge(n) = T∞^sort(n/2) + Θ(log(n)²).
For detailed information about the complexity of the parallel merge procedure, see Merge
algorithm69 .
The solution of this recurrence is given by
64 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
65 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge
66 https://en.wikipedia.org/wiki/Merge_algorithm
67 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
68 https://en.wikipedia.org/wiki/Recurrence_relation
69 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge
T∞^sort(n) = Θ(log(n)³).
This parallel merge algorithm reaches a parallelism of Θ(n/(log n)²), which is much higher
than the parallelism of the previous algorithm. Such a sort can perform well in practice when
combined with a fast stable sequential sort, such as insertion sort70 , and a fast sequential
merge as a base case for merging small arrays.[13]
It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there
are usually p > 2 processors available. A better approach may be to use a K-way merge71
method, a generalization of binary merge, in which k sorted sequences are merged together.
This merge variant is well suited to describe a sorting algorithm on a PRAM72[14][15] .
Basic Idea
70 https://en.wikipedia.org/wiki/Insertion_sort
71 https://en.wikipedia.org/wiki/K-way_merge_algorithm
72 https://en.wikipedia.org/wiki/Parallel_random-access_machine
Given an unsorted sequence of n elements, the goal is to sort the sequence with p available
processors73 . These elements are distributed equally among all processors and sorted locally
using a sequential Sorting algorithm74 . Hence, the sequence consists of sorted sequences
S1 , ..., Sp of length ⌈n/p⌉. For simplification let n be a multiple of p, so that |Si | = n/p for
i = 1, ..., p.
These sequences will be used to perform a multisequence selection/splitter selection. For
j = 1, ..., p, the algorithm determines splitter elements vj with global rank k = j·n/p. Then
the corresponding positions of v1 , ..., vp in each sequence Si are determined with binary
search75 and thus the Si are further partitioned into p subsequences Si,1 , ..., Si,p with
Si,j := {x ∈ Si |rank(vj−1 ) < rank(x) ≤ rank(vj )}.
Furthermore, the elements of S1,i , ..., Sp,i are assigned to processor i, i.e. all elements
between rank (i − 1)·n/p and rank i·n/p, which are distributed over all Si . Thus, each processor
receives a sequence of sorted sequences. The fact that the rank k of the splitter elements
vi was chosen globally, provides two important properties: On the one hand, k was chosen
so that each processor can still operate on n/p elements after assignment. The algorithm is
perfectly load-balanced76 . On the other hand, all elements on processor i are less than or
equal to all elements on processor i + 1. Hence, each processor performs the p-way merge77
locally and thus obtains a sorted sequence from its sub-sequences. Because of the second
property, no further p-way-merge has to be performed; the results only have to be put
together in the order of the processor number.
Multisequence selection
In its simplest form, given p sorted sequences S1 , ..., Sp distributed evenly on p processors
and a rank k, the task is to find an element x with a global rank k in the union of the
sequences. Hence, this can be used to divide each Si in two parts at a splitter index li ,
where the lower part contains only elements which are smaller than x, while the elements
bigger than x are located in the upper part.
The presented sequential algorithm returns the indices of the splits in each sequence,
i.e. the indices li in sequences Si such that Si [li ] has a global rank less than k and
rank (Si [li + 1]) ≥ k.[16]
algorithm msSelect(S : Array of sorted Sequences [S_1,..,S_p], k : int) is
    for i = 1 to p do
        (l_i, r_i) := (0, |S_i|-1)
73 https://en.wikipedia.org/wiki/Processor_(computing)
74 https://en.wikipedia.org/wiki/Sorting_algorithm
75 https://en.wikipedia.org/wiki/Binary_search_algorithm
76 https://en.wikipedia.org/wiki/Load_balancing_(computing)
77 https://en.wikipedia.org/wiki/K-way_merge_algorithm
    while there exists i: l_i < r_i do
        // pick a pivot element v at random from the remaining elements S_j[l_j], ..., S_j[r_j]
        v := pickPivot(S, l, r)
        for i = 1 to p do
            m_i := binarySearch(v, S_i[l_i, r_i])   // position of v in S_i, found sequentially
        if m_1 + ... + m_p >= k then                // m_1 + ... + m_p is the global rank of v
            r := m                                  // vector assignment
        else
            l := m
    return l
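A runnable, sequential Python sketch of such a multisequence selection is shown below. The naming and the simplified pivot choice are ours, and ties are resolved explicitly in the last branch; the idea is the same as in the pseudocode above: keep a window of candidate split positions per sequence, locate a chosen pivot in every sequence by binary search, and narrow all windows at once.

import bisect
import random

def ms_select(seqs, k):
    # Split positions l with sum(l) == k such that every element left of a
    # split is <= every element right of any split.
    lo = [0] * len(seqs)
    hi = [len(s) for s in seqs]
    while True:
        remaining = [i for i in range(len(seqs)) if lo[i] < hi[i]]
        if not remaining:                       # windows empty: lo is the answer
            return lo
        j = random.choice(remaining)            # pivot: some remaining element
        v = seqs[j][random.randrange(lo[j], hi[j])]
        below = [bisect.bisect_left(s, v) for s in seqs]    # elements < v per sequence
        upto = [bisect.bisect_right(s, v) for s in seqs]    # elements <= v per sequence
        if sum(below) >= k:
            hi = below                          # the k-th split lies among elements < v
        elif sum(upto) <= k:
            lo = upto                           # every copy of v belongs to the left side
        else:
            # the split falls inside the run of elements equal to v: start just
            # below v and move k - sum(below) splits past one copy of v each
            need, l = k - sum(below), list(below)
            for i in range(len(seqs)):
                take = min(need, upto[i] - below[i])
                l[i] += take
                need -= take
            return l

seqs = [[1, 3, 5, 7], [2, 2, 6], [4, 8]]
print(ms_select(seqs, 5))   # [2, 2, 1]: the five smallest elements are 1, 2, 2, 3, 4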
For the complexity analysis the PRAM78 model is chosen. If the data is evenly dis-
tributed over all p, the p-fold execution of the binarySearch method has a running time
of O(p log(n/p)). The expected recursion depth is O(log(Σi |Si |)) = O(log(n)) as in the
ordinary Quickselect79 . Thus the overall expected running time is O (p log(n/p) log(n)).
Applied on the parallel multiway merge sort, this algorithm has to be invoked in parallel
such that all splitter elements of rank i·n/p for i = 1, ..., p are found simultaneously. These
splitter elements can then be used to partition each sequence in p parts, with the same total
running time of O (p log(n/p) log(n)).
Pseudocode
Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. We
assume that there is a barrier synchronization before and after the multisequence selection
such that every processor can determine the splitting elements and the sequence partition
properly.
/**
* d: Unsorted Array of Elements
* n: Number of Elements
* p: Number of Processors
* return Sorted Array
*/
algorithm parallelMultiwayMergesort(d : Array, n : int, p : int) is
    o := new Array[0, n]                          // the output array
    for i = 1 to p do in parallel                 // each processor in parallel
        S_i := d[(i-1) * n/p, i * n/p]            // sequence of length n/p
        sort(S_i)                                 // sort locally
        synch
        v_i := msSelect([S_1,...,S_p], i * n/p)   // element with global rank i * n/p
        synch
        (S_i,1, ..., S_i,p) := sequence_partitioning(S_i, v_1, ..., v_p)  // split S_i into subsequences
        o[(i-1) * n/p, i * n/p] := kWayMerge(S_1,i, ..., S_p,i)           // merge and assign to the output array
    return o
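The following Python sketch simulates the three phases of this algorithm sequentially, with each loop iteration standing in for one processor. The names are ours; for brevity it assumes distinct keys, assumes n is a multiple of p, and reads the splitter values off a fully sorted copy where the real algorithm would use the multisequence selection shown earlier.

import heapq
from bisect import bisect_left

def multiway_mergesort(data, p):
    n = len(data)                      # assumes n is a multiple of p and distinct keys
    # Phase 1: every "processor" sorts its local block of n/p elements.
    blocks = [sorted(data[i * n // p:(i + 1) * n // p]) for i in range(p)]
    # Phase 2: splitter values of global rank i*n/p.  Shortcut for this sketch:
    # taken from a fully sorted copy instead of running multisequence selection.
    ordered = sorted(data)
    splitters = [ordered[i * n // p] for i in range(1, p)]
    cuts = [[0] * p] + [[bisect_left(b, v) for b in blocks] for v in splitters] + [[n // p] * p]
    # Phase 3: "processor" i gathers the i-th piece of every block and p-way-merges it.
    result = []
    for i in range(p):
        pieces = [blocks[b][cuts[i][b]:cuts[i + 1][b]] for b in range(p)]
        result.extend(heapq.merge(*pieces))
    return result

print(multiway_mergesort([9, 1, 8, 2, 7, 3, 6, 4, 5, 0, 11, 10], 3))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]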
Analysis
Firstly, each processor sorts the assigned n/p elements locally using a sorting algorithm with
complexity O (n/p log(n/p)). After that, the splitter elements have to be calculated in time
O (p log(n/p) log(n)). Finally, each group of p splits has to be merged in parallel by each
78 https://en.wikipedia.org/wiki/Parallel_random-access_machine
79 https://en.wikipedia.org/wiki/Quickselect
processor with a running time of O(log(p)n/p) using a sequential p-way merge algorithm80 .
Thus, the overall running time is given by
O((n/p) log(n/p) + p log(n/p) log(n) + (n/p) log(p)).
The multiway merge sort algorithm is very scalable through its high parallelization capabil-
ity, which allows the use of many processors. This makes the algorithm a viable candidate
for sorting large amounts of data, such as those processed in computer clusters81 . Also,
since in such systems memory is usually not a limiting resource, the disadvantage of space
complexity of merge sort is negligible. However, other factors become important in such
systems, which are not taken into account when modelling on a PRAM82 . Here, the follow-
ing aspects need to be considered: Memory hierarchy83 , when the data does not fit into the
processors' cache, or the communication overhead of exchanging data between processors,
which could become a bottleneck when the data can no longer be accessed via the shared
memory.
Sanders84 et al. have presented in their paper a bulk synchronous parallel85 algorithm for
multilevel multiway mergesort, which divides p processors into r groups of size p′ . All
processors sort locally first. Unlike single level multiway mergesort, these sequences are
then partitioned into r parts and assigned to the appropriate processor groups. These
steps are repeated recursively in those groups. This reduces communication and especially
avoids problems with many small messages. The hierarchical structure of the underlying real
network can be used to define the processor groups (e.g. racks86 , clusters87 ,...).[15]
Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with
Richard Cole using a clever subsampling algorithm to ensure O(1) merge.[17] Other sophis-
ticated parallel sorting algorithms can achieve the same or better time bounds with a lower
constant. For example, in 1991 David Powers described a parallelized quicksort88 (and a
related radix sort89 ) that can operate in O(log n) time on a CRCW90 parallel random-access
machine91 (PRAM) with n processors by performing partitioning implicitly.[18] Powers fur-
ther shows that a pipelined version of Batcher's Bitonic Mergesort92 at O((log n)2 ) time
80 https://en.wikipedia.org/wiki/Merge_algorithm
81 https://en.wikipedia.org/wiki/Computer_cluster
82 https://en.wikipedia.org/wiki/Parallel_random-access_machine
83 https://en.wikipedia.org/wiki/Memory_hierarchy
84 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
85 https://en.wikipedia.org/wiki/Bulk_synchronous_parallel
86 https://en.wikipedia.org/wiki/19-inch_rack
87 https://en.wikipedia.org/wiki/Computer_cluster
88 https://en.wikipedia.org/wiki/Quicksort
89 https://en.wikipedia.org/wiki/Radix_sort
90 https://en.wikipedia.org/wiki/CRCW
91 https://en.wikipedia.org/wiki/Parallel_random-access_machine
92 https://en.wikipedia.org/wiki/Bitonic_sorter
on a butterfly sorting network93 is in practice actually faster than his O(log n) sorts on a
PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix
and parallel sorting.[19]
Although heapsort94 has the same time bounds as merge sort, it requires only Θ(1) auxiliary
space instead of merge sort's Θ(n). On typical modern architectures, efficient quicksort95
implementations generally outperform mergesort for sorting RAM-based arrays.[citation needed96 ]
On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-
access sequential media. Merge sort is often the best choice for sorting a linked list97 : in this
situation it is relatively easy to implement a merge sort in such a way that it requires only
Θ(1) extra space, and the slow random-access performance of a linked list makes some other
algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely
impossible.
As of Perl98 5.8, merge sort is its default sorting algorithm (it was quicksort in previous
versions of Perl). In Java99 , the Arrays.sort()100 methods use merge sort or a tuned quicksort
depending on the datatypes and for implementation efficiency switch to insertion sort101
when fewer than seven array elements are being sorted.[20] The Linux102 kernel uses merge
sort for its linked lists.[21] Python103 uses Timsort104 , another tuned hybrid of merge sort
and insertion sort, that has become the standard sort algorithm in Java SE 7105 (for arrays
of non-primitive types),[22] on the Android platform106 ,[23] and in GNU Octave107 .[24]
6.9 Notes
1. Skiena (2008108 , p. 122)
2. Knuth (1998109 , p. 158)
3. Katajainen, Jyrki; Träff, Jesper Larsson (March 1997). ”A meticulous analysis of
mergesort programs”110 (PDF). Proceedings of the 3rd Italian Conference on Algorithms and Complexity (CIAC 97).
93 https://en.wikipedia.org/wiki/Sorting_network
94 https://en.wikipedia.org/wiki/Heapsort
95 https://en.wikipedia.org/wiki/Quicksort
97 https://en.wikipedia.org/wiki/Linked_list
98 https://en.wikipedia.org/wiki/Perl
99 https://en.wikipedia.org/wiki/Java_platform
https://docs.oracle.com/javase/9/docs/api/java/util/Arrays.html#sort-java.lang.
100
Object:A-
101 https://en.wikipedia.org/wiki/Insertion_sort
102 https://en.wikipedia.org/wiki/Linux
103 https://en.wikipedia.org/wiki/Python_(programming_language)
104 https://en.wikipedia.org/wiki/Timsort
105 https://en.wikipedia.org/wiki/Java_7
106 https://en.wikipedia.org/wiki/Android_(operating_system)
107 https://en.wikipedia.org/wiki/GNU_Octave
108 #CITEREFSkiena2008
109 #CITEREFKnuth1998
110 http://hjemmesider.diku.dk/~jyrki/Paper/CIAC97.pdf
13. Victor J. Duvanenko ”Parallel Merge Sort” Dr. Dobb's Journal & blog[1]134 and
GitHub repo C++ implementation [2]135
14. Peter Sanders, Johannes Singler. 2008. Lecture Parallel algorithms Last visited
05.02.2020. 136
15. ”Practical Massively Parallel Sorting | Proceedings of the 27th ACM Symposium on
Parallelism in Algorithms and Architectures”. doi137 :10.1145/2755573.2755595138 .
16. Peter Sanders. 2019. Lecture Parallel algorithms Last visited 05.02.2020. 140
17. Cole, Richard (August 1988). ”Parallel merge sort”. SIAM J. Comput.
17 (4): 770–785. CiteSeerX141 10.1.1.464.7118142 . doi143 :10.1137/0217049144 .
18. Powers, David M. W. Parallelized Quicksort and Radixsort with Optimal Speedup146 ,
Proceedings of International Conference on Parallel Computing Technologies. Novosi-
birsk147 . 1991.
19. David M. W. Powers, Parallel Unification: Practical Complexity148 , Australasian
Computer Architecture Workshop, Flinders University, January 1995
20. OpenJDK src/java.base/share/classes/java/util/Arrays.java @ 53904:9c3fe09f69bc149
21. linux kernel /lib/list_sort.c150
22. ”Commit 6804124: Replace ”modified mergesort” in java.util.Arrays.sort with
timsort”151 . Java Development Kit 7 Hg repo. Archived152 from the original on 2018-01-26.
Retrieved 24 Feb 2011.
23. ”Class: java.util.TimSort<T>”153 . Android JDK Documentation. Archived
from the original154 on January 20, 2015. Retrieved 19 Jan 2015.
24. ”liboctave/util/oct-sort.cc”155 . Mercurial repository of Octave source code.
Lines 23-25 of the initial comment block. Retrieved 18 Feb 2013. Code stolen in large
134 https://duvanenko.tech.blog/2018/01/13/parallel-merge-sort/
135 https://github.com/DragonSpit/ParallelAlgorithms
136 http://algo2.iti.kit.edu/sanders/courses/paralg08/singler.pdf
137 https://en.wikipedia.org/wiki/Doi_(identifier)
138 https://doi.org/10.1145%2F2755573.2755595
140 http://algo2.iti.kit.edu/sanders/courses/paralg19/vorlesung.pdf
141 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
142 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.464.7118
143 https://en.wikipedia.org/wiki/Doi_(identifier)
144 https://doi.org/10.1137%2F0217049
146 http://citeseer.ist.psu.edu/327487.html
147 https://en.wikipedia.org/wiki/Novosibirsk
148 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
https://hg.openjdk.java.net/jdk/jdk/file/9c3fe09f69bc/src/java.base/share/classes/
149
java/util/Arrays.java#l1331
150 https://github.com/torvalds/linux/blob/master/lib/list_sort.c
151 http://hg.openjdk.java.net/jdk7/jdk7/jdk/rev/bfd7abda8f79
https://web.archive.org/web/20180126184957/http://hg.openjdk.java.net/jdk7/jdk7/jdk/
152
rev/bfd7abda8f79
https://web.archive.org/web/20150120063131/https://android.googlesource.com/platform/
153
libcore/%2B/jb-mr2-release/luni/src/main/java/java/util/TimSort.java
https://android.googlesource.com/platform/libcore/+/jb-mr2-release/luni/src/main/
154
java/java/util/TimSort.java
155 http://hg.savannah.gnu.org/hgweb/octave/file/0486a29d780f/liboctave/util/oct-sort.cc
part from Python's, listobject.c, which itself had no license header. However, thanks
to Tim Peters156 for the parts of the code I ripped-off.
6.10 References
• Cormen, Thomas H.157 ; Leiserson, Charles E.158 ; Rivest, Ronald L.159 ; Stein,
Clifford160 (2009) [1990]. Introduction to Algorithms161 (3rd ed.). MIT Press and
McGraw-Hill. ISBN162 0-262-03384-4163 .
• Katajainen, Jyrki; Pasanen, Tomi; Teuhola, Jukka (1996). ”Practical in-place
mergesort”165 . Nordic Journal of Computing. 3. pp. 27–40. ISSN166 1236-6064167 .
Archived from the original168 on 2011-08-07. Retrieved 2009-04-04.
Also Practical In-Place Mergesort170 . Also [3]171
• Knuth, Donald172 (1998). ”Section 5.2.4: Sorting by Merging”. Sorting and
Searching. The Art of Computer Programming173 . 3 (2nd ed.). Addison-Wesley.
pp. 158–168. ISBN174 0-201-89685-0175 .
• Kronrod, M. A. (1969). ”Optimal ordering algorithm without operational
field”. Soviet Mathematics - Doklady. 10. p. 744.
• LaMarca, A.; Ladner, R. E. (1997). ”The influence of caches on the performance
of sorting”. Proc. 8th Ann. ACM-SIAM Symp. on Discrete Algorithms
(SODA97): 370–379. CiteSeerX178 10.1.1.31.1153179 .
• Skiena, Steven S.181 (2008). ”4.5 Mergesort: Sorting by Divide-and-Conquer”. The
Algorithm Design Manual (2nd ed.). Springer. ISBN182 978-1-84800-069-8183 .
156 https://en.wikipedia.org/wiki/Tim_Peters_(software_engineer)
157 https://en.wikipedia.org/wiki/Thomas_H._Cormen
158 https://en.wikipedia.org/wiki/Charles_E._Leiserson
159 https://en.wikipedia.org/wiki/Ron_Rivest
160 https://en.wikipedia.org/wiki/Clifford_Stein
161 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
https://web.archive.org/web/20110807033704/http://www.diku.dk/hjemmesider/ansatte/
165
jyrki/Paper/mergesort_NJC.ps
166 https://en.wikipedia.org/wiki/ISSN_(identifier)
167 http://www.worldcat.org/issn/1236-6064
168 http://www.diku.dk/hjemmesider/ansatte/jyrki/Paper/mergesort_NJC.ps
170 http://citeseer.ist.psu.edu/katajainen96practical.html
171 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
172 https://en.wikipedia.org/wiki/Donald_Knuth
173 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
174 https://en.wikipedia.org/wiki/ISBN_(identifier)
175 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
178 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
179 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1153
The Wikibook Algorithm implementation187 has a page on the topic of: Merge
sort188
181 https://en.wikipedia.org/wiki/Steven_Skiena
182 https://en.wikipedia.org/wiki/ISBN_(identifier)
183 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
187 https://en.wikibooks.org/wiki/Algorithm_implementation
188 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Merge_sort
7 Quicksort
Quicksort
Animated visualization of the quicksort algorithm. The horizontal lines are pivot val-
ues.
Class: Sorting algorithm
Worst-case performance: O(n²)
Best-case performance: O(n log n) (simple partition) or O(n) (three-way partition and equal keys)
Average performance: O(n log n)
Worst-case space complexity: O(n) auxiliary (naive), O(log n) auxiliary (Sedgewick 1978)
1 https://en.wikipedia.org/wiki/Algorithm_efficiency
2 https://en.wikipedia.org/wiki/Sorting_algorithm
3 https://en.wikipedia.org/wiki/Tony_Hoare
4 https://en.wikipedia.org/wiki/Merge_sort
5 https://en.wikipedia.org/wiki/Heapsort
7 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
8 https://en.wikipedia.org/wiki/Recursion_(computer_science)
9 https://en.wikipedia.org/wiki/In-place_algorithm
10 https://en.wikipedia.org/wiki/Main_memory
11 https://en.wikipedia.org/wiki/Comparison_sort
12 https://en.wikipedia.org/wiki/Total_order
Efficient implementations of Quicksort are not a stable sort13 , meaning that the relative order of equal sort items is not
preserved.
Mathematical analysis14 of quicksort shows that, on average15 , the algorithm takes
O16 (n log n) comparisons to sort n items. In the worst case17 , it makes O(n2 ) compar-
isons, though this behavior is rare.
7.1 History
The quicksort algorithm was developed in 1959 by Tony Hoare18 while in the Soviet Union19 ,
as a visiting student at Moscow State University20 . At that time, Hoare worked on a project
on machine translation21 for the National Physical Laboratory22 . As a part of the translation
process, he needed to sort the words in Russian sentences prior to looking them up in a
Russian-English dictionary that was already sorted in alphabetic order on magnetic tape23 .[4]
After recognizing that his first idea, insertion sort24 , would be slow, he quickly came up with
a new idea that was Quicksort. He wrote a program in Mercury Autocode25 for the partition
but could not write the program to account for the list of unsorted segments. On return to
England, he was asked to write code for Shellsort26 as part of his new job. Hoare mentioned
to his boss that he knew of a faster algorithm and his boss bet sixpence that he did not. His
boss ultimately accepted that he had lost the bet. Later, Hoare learned about ALGOL27
and its ability to do recursion that enabled him to publish the code in Communications of
the Association for Computing Machinery28 , the premier computer science journal of the
time.[2][5]
Quicksort gained widespread adoption, appearing, for example, in Unix29 as the default
library sort subroutine. Hence, it lent its name to the C standard library30 subroutine
qsort31[6] and it was used in the reference implementation of Java32 .
13 https://en.wikipedia.org/wiki/Stable_sort
14 https://en.wikipedia.org/wiki/Analysis_of_algorithms
15 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
16 https://en.wikipedia.org/wiki/Big_O_notation
17 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
18 https://en.wikipedia.org/wiki/Tony_Hoare
19 https://en.wikipedia.org/wiki/Soviet_Union
20 https://en.wikipedia.org/wiki/Moscow_State_University
21 https://en.wikipedia.org/wiki/Machine_translation
22 https://en.wikipedia.org/wiki/National_Physical_Laboratory,_UK
23 https://en.wikipedia.org/wiki/Magnetic_tape_data_storage
24 https://en.wikipedia.org/wiki/Insertion_sort
25 https://en.wikipedia.org/wiki/Autocode
26 https://en.wikipedia.org/wiki/Shellsort
27 https://en.wikipedia.org/wiki/ALGOL
28 https://en.wikipedia.org/wiki/Communications_of_the_ACM
29 https://en.wikipedia.org/wiki/Unix
30 https://en.wikipedia.org/wiki/C_standard_library
31 https://en.wikipedia.org/wiki/Qsort
32 https://en.wikipedia.org/wiki/Java_(programming_language)
Robert Sedgewick33 's Ph.D. thesis in 1975 is considered a milestone in the study of Quick-
sort where he resolved many open problems related to the analysis of various pivot selection
schemes including Samplesort34 , adaptive partitioning by Van Emden[7] as well as deriva-
tion of expected number of comparisons and swaps.[6] Jon Bentley35 and Doug McIlroy36
incorporated various improvements for use in programming libraries, including a technique
to deal with equal elements and a pivot scheme known as pseudomedian of nine, where a
sample of nine elements is divided into groups of three and then the median of the three
medians from three groups is chosen.[6] Bentley described another simpler and compact
partitioning scheme in his book Programming Pearls that he attributed to Nico Lomuto.
Later, Bentley wrote that he had used Hoare's version for years but never really understood it,
whereas Lomuto's version was simple enough to prove correct.[8] In the same essay, Bentley described Quicksort as
the ”most beautiful code I had ever written”. Lomuto's partition scheme
was also popularized by the textbook Introduction to Algorithms37 although it is inferior to
Hoare's scheme because it does three times more swaps on average and degrades to O(n²)
runtime when all elements are equal.[9][self-published source?38 ]
In 2009, Vladimir Yaroslavskiy proposed a new dual-pivot Quicksort implementation.[10]
In the Java core library mailing lists, he initiated a discussion claiming his new algorithm
to be superior to the runtime library's sorting method, which was at that time based on
the widely used and carefully tuned variant of classic Quicksort by Bentley and McIlroy.[11]
Yaroslavskiy's Quicksort has been chosen as the new default sorting algorithm in Oracle's
Java 7 runtime library[12] after extensive empirical performance tests.[13]
33 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
34 https://en.wikipedia.org/wiki/Samplesort
35 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
36 https://en.wikipedia.org/wiki/Douglas_McIlroy
37 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
7.2 Algorithm
Figure 22 Full example of quicksort on a random set of numbers. The shaded element
is the pivot. It is always chosen as the last element of the partition. However, always
choosing the last element in the partition as the pivot in this way results in poor
performance (O(n²)) on already sorted arrays, or arrays of identical elements. Since
sub-arrays of sorted / identical elements crop up a lot towards the end of a sorting
procedure on a large set, versions of the quicksort algorithm that choose the pivot as the
middle element run much more quickly than the algorithm described in this diagram on
large sets of numbers.
Quicksort is a divide and conquer algorithm39 . It first divides the input array into two
smaller sub-arrays: the low elements and the high elements. It then recursively sorts the
sub-arrays. The steps for in-place40 Quicksort are:
1. Pick an element, called a pivot, from the array.
2. Partitioning: reorder the array so that all elements with values less than the pivot
come before the pivot, while all elements with values greater than the pivot come after
it (equal values can go either way). After this partitioning, the pivot is in its final
position. This is called the partition operation.
3. Recursively41 apply the above steps to the sub-array of elements with smaller values
and separately to the sub-array of elements with greater values.
The base case of the recursion is arrays of size zero or one, which are in order by definition,
so they never need to be sorted.
The pivot selection and partitioning steps can be done in several different ways; the choice
of specific implementation schemes greatly affects the algorithm's performance.
Lomuto partition scheme
This scheme is attributed to Nico Lomuto and popularized by Bentley in his book Pro-
gramming Pearls[14] and Cormen et al. in their book Introduction to Algorithms42 .[15] This
scheme chooses a pivot that is typically the last element in the array. The algorithm main-
tains index i as it scans the array using another index j such that the elements at lo through
i-1 (inclusive) are less than the pivot, and the elements at i through j (inclusive) are equal
to or greater than the pivot. As this scheme is more compact and easy to understand, it
is frequently used in introductory material, although it is less efficient than Hoare's origi-
nal scheme.[16] This scheme degrades to O(n2 ) when the array is already in order.[9] There
have been various variants proposed to boost performance including various ways to select
pivot, deal with equal elements, use other sorting algorithms such as Insertion sort43 for
small arrays and so on. In pseudocode44 , a quicksort that sorts elements at lo through hi
(inclusive) of an array A can be expressed as:[15]
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := partition(A, lo, hi)
        quicksort(A, lo, p - 1)
        quicksort(A, p + 1, hi)
39 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
40 https://en.wikipedia.org/wiki/In-place_algorithm
41 https://en.wikipedia.org/wiki/Recursion_(computer_science)
42 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
43 https://en.wikipedia.org/wiki/Insertion_sort
44 https://en.wikipedia.org/wiki/Pseudocode
algorithm partition(A, lo, hi) is
    pivot := A[hi]
    i := lo
    for j := lo to hi - 1 do
        if A[j] < pivot then
            swap A[i] with A[j]
            i := i + 1
    swap A[i] with A[hi]
    return i
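A direct Python transcription of this scheme, as a sketch (the helper names are ours):

def lomuto_partition(a, lo, hi):
    # Partition a[lo..hi] around the pivot a[hi]; return the pivot's final index.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo, hi):
    # Sort a[lo..hi] (inclusive) in place.
    if lo < hi:
        p = lomuto_partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)

data = [3, 7, 1, 4, 1, 5]
quicksort(data, 0, len(data) - 1)
# data is now [1, 1, 3, 4, 5, 7]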
Hoare partition scheme
The original partition scheme described by C.A.R. Hoare uses two indices that start at
the ends of the array being partitioned, then move toward each other, until they detect an
inversion: a pair of elements, one greater than or equal to the pivot, one lesser or equal, that
are in the wrong order relative to each other. The inverted elements are then swapped.[17]
When the indices meet, the algorithm stops and returns the final index. Hoare's scheme is
more efficient than Lomuto's partition scheme because it does three times fewer swaps on
average, and it creates efficient partitions even when all values are equal.[9][self-published source?45 ]
Like Lomuto's partition scheme, Hoare's partitioning also would cause Quicksort to degrade
to O(n2 ) for already sorted input, if the pivot was chosen as the first or the last element.
With the middle element as the pivot, however, sorted data results with (almost) no swaps
in equally sized partitions leading to best case behavior of Quicksort, i.e. O(n log(n)). Like
others, Hoare's partitioning doesn't produce a stable sort. In this scheme, the pivot's final
location is not necessarily at the index that was returned, and the next two segments that
the main algorithm recurs on are (lo..p) and (p+1..hi) as opposed to (lo..p-1) and (p+1..hi)
as in Lomuto's scheme. However, the partitioning algorithm guarantees lo ≤ p < hi which
implies both resulting partitions are non-empty, hence there's no risk of infinite recursion.
In pseudocode46 ,[15]
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := partition(A, lo, hi)
        quicksort(A, lo, p)
        quicksort(A, p + 1, hi)
An important point in choosing the pivot item is to round the division result towards zero.
This is the implicit behavior of integer division in some programming languages (e.g., C,
C++, Java), hence rounding is omitted in implementing code. Here it is emphasized with
explicit use of a floor function47 , denoted with a ⌊ ⌋ symbol pair. Rounding down is
important to avoid using A[hi] as the pivot, which can result in infinite recursion.
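As an illustration of the scheme described above (not the article's exact pseudocode, which is not reproduced here), a Python sketch of Hoare-style partitioning with the floored midpoint pivot follows; Python's // operator already produces the floor for the non-negative indices involved, so no explicit floor call is needed:

def hoare_partition(a, lo, hi):
    # Partition a[lo..hi] around the value at the floored midpoint and return
    # an index p such that a[lo..p] <= a[p+1..hi].
    pivot = a[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:     # scan right for an element >= pivot
            i += 1
        j -= 1
        while a[j] > pivot:     # scan left for an element <= pivot
            j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]

def quicksort_hoare(a, lo, hi):
    # Note the recursion on (lo, p) and (p+1, hi), unlike the Lomuto version.
    if lo < hi:
        p = hoare_partition(a, lo, hi)
        quicksort_hoare(a, lo, p)
        quicksort_hoare(a, p + 1, hi)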
46 https://en.wikipedia.org/wiki/Pseudocode
47 https://en.wikipedia.org/wiki/Floor_and_ceiling_functions
Choice of pivot
In the very early versions of quicksort, the leftmost element of the partition would often
be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already
sorted arrays, which is a rather common use-case. The problem was easily solved by choosing
either a random index for the pivot, choosing the middle index of the partition or (especially
for longer partitions) choosing the median48 of the first, middle and last element of the
partition for the pivot (as recommended by Sedgewick49 ).[18] This ”median-of-three” rule
counters the case of sorted (or reverse-sorted) input, and gives a better estimate of the
optimal pivot (the true median) than selecting any single element, when no information
about the ordering of the input is known.
Median-of-three code snippet for Lomuto partition:
mid := (lo + hi) / 2
if A[mid] < A[lo]
    swap A[lo] with A[mid]
if A[hi] < A[lo]
    swap A[lo] with A[hi]
if A[mid] < A[hi]
    swap A[mid] with A[hi]
pivot := A[hi]
It puts a median into A[hi] first, then that new value of A[hi] is used for a pivot, as in a
basic algorithm presented above.
Specifically, the expected number of comparisons needed to sort n elements (see § Analysis
of randomized quicksort50 ) with random pivot selection is 1.386 n log n. Median-of-three
pivoting brings this down to Cn,2 ≈ 1.188 n log n51 , at the expense of a three-percent increase
in the expected number of swaps.[6] An even stronger pivoting rule, for larger arrays, is to
pick the ninther52 , a recursive median-of-three (Mo3), defined as[6]
ninther(a) = median(Mo3(first ⅓ of a), Mo3(middle ⅓ of a), Mo3(final ⅓ of a))
Selecting a pivot element is also complicated by the existence of integer overflow53 . If the
boundary indices of the subarray being sorted are sufficiently large, the naïve expression for
the middle index, (lo + hi)/2, will cause overflow and provide an invalid pivot index. This
can be overcome by using, for example, lo + (hi−lo)/2 to index the middle element, at the
cost of more complex arithmetic. Similar issues arise in some other methods of selecting
the pivot element.
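A small numeric illustration of the two midpoint formulas (Python integers do not overflow, so the 32-bit wraparound that C or Java would produce is shown explicitly):

lo, hi = 2_000_000_000, 2_100_000_000
naive = (lo + hi) // 2        # 2_050_000_000; correct in Python, overflows a 32-bit int
as_int32 = (lo + hi) - 2**32  # what signed 32-bit addition would yield: -194_967_296
safe = lo + (hi - lo) // 2    # 2_050_000_000, computed without ever exceeding hi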
48 https://en.wikipedia.org/wiki/Median
49 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
50 #Analysis_of_randomized_quicksort
51 https://en.wikipedia.org/wiki/Binomial_coefficient
52 https://en.wikipedia.org/wiki/Ninther
53 https://en.wikipedia.org/wiki/Integer_overflow
Repeated elements
With a partitioning algorithm such as the Lomuto partition scheme described above (even
one that chooses good pivot values), quicksort exhibits poor performance for inputs that
contain many repeated elements. The problem is clearly apparent when all the input el-
ements are equal: at each recursion, the left partition is empty (no input values are less
than the pivot), and the right partition has only decreased by one element (the pivot is
removed). Consequently, the Lomuto partition scheme takes quadratic time54 to sort an
array of equal values. However, with a partitioning algorithm such as the Hoare partition
scheme, repeated elements generally results in better partitioning, and although needless
swaps of elements equal to the pivot may occur, the running time generally decreases as the
number of repeated elements increases (with memory cache reducing the swap overhead).
In the case where all elements are equal, the Hoare partition scheme needlessly swaps elements,
but the partitioning itself is best case, as noted in the Hoare partition section above.
To solve the Lomuto partition scheme problem (sometimes called the Dutch national flag
problem55[6] ), an alternative linear-time partition routine can be used that separates the
values into three groups: values less than the pivot, values equal to the pivot, and values
greater than the pivot. (Bentley and McIlroy call this a ”fat partition” and it was already
implemented in the qsort56 of Version 7 Unix57 .[6] ) The values equal to the pivot are already
sorted, so only the less-than and greater-than partitions need to be recursively sorted. In
pseudocode, the quicksort algorithm becomes
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := pivot(A, lo, hi)
        left, right := partition(A, p, lo, hi)  // note: multiple return values
        quicksort(A, lo, left - 1)
        quicksort(A, right + 1, hi)
The partition algorithm returns indices to the first ('leftmost') and to the last ('rightmost')
item of the middle partition. Every item of the middle partition is equal to p and is therefore
sorted. Consequently, the items of the middle partition need not be included in the recursive calls
to quicksort.
The best case for the algorithm now occurs when all elements are equal (or are chosen from
a small set of k ≪n elements). In the case of all equal elements, the modified quicksort will
perform only two recursive calls on empty subarrays and thus finish in linear time (assuming
the partition subroutine takes no longer than linear time).
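A compact Python sketch of quicksort with such a fat, three-way partition, using the last element as the pivot (a simple variant of the idea, not Bentley and McIlroy's exact routine):

def quicksort3(a, lo, hi):
    # Invariant: a[lo:lt] < pivot, a[lt:i] == pivot, a[gt+1:hi+1] > pivot, a[i:gt+1] unseen.
    if lo >= hi:
        return
    pivot = a[hi]
    lt, i, gt = lo, lo, hi
    while i <= gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:
            i += 1
    quicksort3(a, lo, lt - 1)      # elements strictly less than the pivot
    quicksort3(a, gt + 1, hi)      # elements strictly greater; the middle run is already in place

data = [4, 1, 4, 4, 2, 4, 9]
quicksort3(data, 0, len(data) - 1)
# data is now [1, 2, 4, 4, 4, 4, 9]; the run of equal keys was never recursed on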
Optimizations
Two other important optimizations, also suggested by Sedgewick and widely used in prac-
tice, are:[19][20]
54 https://en.wikipedia.org/wiki/Quadratic_time
55 https://en.wikipedia.org/wiki/Dutch_national_flag_problem
56 https://en.wikipedia.org/wiki/Qsort
57 https://en.wikipedia.org/wiki/Version_7_Unix
• To make sure at most O(log n) space is used, recur58 first into the smaller side of the
partition, then use a tail call59 to recur into the other, or update the parameters to no
longer include the now sorted smaller side, and iterate to sort the larger side (see the sketch after this list).
• When the number of elements is below some threshold (perhaps ten elements), switch
to a non-recursive sorting algorithm such as insertion sort60 that performs fewer swaps,
comparisons or other operations on such small arrays. The ideal 'threshold' will vary
based on the details of the specific implementation.
• An older variant of the previous optimization: when the number of elements is less than
the threshold k, simply stop; then after the whole array has been processed, perform inser-
tion sort on it. Stopping the recursion early leaves the array k-sorted, meaning that each
element is at most k positions away from its final sorted position. In this case, insertion
sort takes O(kn) time to finish the sort, which is linear if k is a constant.[21][14]:117 Com-
pared to the ”many small sorts” optimization, this version may execute fewer instructions,
but it makes suboptimal use of the cache memories61 in modern computers.[22]
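The first two optimizations can be sketched in Python as follows (hypothetical helper names; the Lomuto partition from the earlier sketch is repeated so that the snippet is self-contained):

def partition(a, lo, hi):
    # Lomuto partition, as in the earlier sketch.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def insertion_sort(a, lo, hi):
    # Simple in-place insertion sort of a[lo..hi] (inclusive).
    for i in range(lo + 1, hi + 1):
        x, j = a[i], i
        while j > lo and a[j - 1] > x:
            a[j] = a[j - 1]
            j -= 1
        a[j] = x

def quicksort_opt(a, lo, hi, cutoff=10):
    # Recurse only into the smaller side and loop on the larger one, so the
    # stack depth stays O(log n); hand small ranges to insertion sort.
    while hi - lo + 1 > cutoff:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort_opt(a, lo, p - 1, cutoff)
            lo = p + 1
        else:
            quicksort_opt(a, p + 1, hi, cutoff)
            hi = p - 1
    insertion_sort(a, lo, hi)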
Parallelization
58 https://en.wiktionary.org/wiki/recurse
59 https://en.wikipedia.org/wiki/Tail_call
60 https://en.wikipedia.org/wiki/Insertion_sort
61 https://en.wikipedia.org/wiki/Cache_memory
62 https://en.wikipedia.org/wiki/Parallel_algorithm
63 https://en.wikipedia.org/wiki/Task_parallelism
64 https://en.wikipedia.org/wiki/Prefix_sum
65 https://en.wikipedia.org/wiki/Merge_sort
66 https://en.wikipedia.org/wiki/Radix_sort
67 https://en.wikipedia.org/wiki/Parallel_random-access_machine#Read/write_conflicts
68 https://en.wikipedia.org/wiki/Parallel_Random_Access_Machine
The most unbalanced partition occurs when one of the sublists returned by the partitioning
routine is of size n − 1.[27] This may occur if the pivot happens to be the smallest or
largest element in the list, or in some implementations (e.g., the Lomuto partition scheme
as described above) when all the elements are equal.
If this happens repeatedly in every partition, then each recursive call processes a list of size
one less than the previous list. Consequently, we can make n − 1 nested calls before we
reach a list of size 1. This means that the call tree69 is a linear chain of n − 1 nested calls.
The ith call does O(n − i) work to do the partition, and Σi=0..n (n − i) = O(n²), so in that
case Quicksort takes O(n²) time.
In the most balanced case, each time we perform a partition we divide the list into two nearly
equal pieces. This means each recursive call processes a list of half the size. Consequently,
we can make only log2 n nested calls before we reach a list of size 1. This means that the
depth of the call tree70 is log2 n. But no two calls at the same level of the call tree process
the same part of the original list; thus, each level of calls needs only O(n) time all together
(each call has some constant overhead, but since there are only O(n) calls at each level, this
is subsumed in the O(n) factor). The result is that the algorithm uses only O(n log n) time.
To sort an array of n distinct elements, quicksort takes O(n log n) time in expectation,
averaged over all n! permutations of n elements with equal probability71 . We list here three
common proofs to this claim providing different insights into quicksort's workings.
Using percentiles
If each pivot has rank somewhere in the middle 50 percent, that is, between the 25th
percentile72 and the 75th percentile, then it splits the elements with at least 25% and at
most 75% on each side. If we could consistently choose such pivots, we would only have
to split the list at most log4/3 n times before reaching lists of size 1, yielding an O(n log n)
algorithm.
When the input is a random permutation, the pivot has a random rank, and so it is not
guaranteed to be in the middle 50 percent. However, when we start from a random per-
mutation, in each recursive call the pivot has a random rank in its list, and so it is in the
69 https://en.wikipedia.org/wiki/Call_stack
70 https://en.wikipedia.org/wiki/Call_stack
71 https://en.wikipedia.org/wiki/Uniform_distribution_(discrete)
72 https://en.wikipedia.org/wiki/Percentile
middle 50 percent about half the time. That is good enough. Imagine that you flip a coin:
heads means that the rank of the pivot is in the middle 50 percent, tails means that it isn't.
Imagine that you are flipping a coin over and over until you get k heads. Although this
could take a long time, on average only 2k flips are required, and it is highly improbable
that you will not have gotten k heads after 100k flips (this can be made rigorous using Chernoff
bounds73 ). By the same argument, Quicksort's recursion will terminate on average at a call
depth of only 2 log4/3 n. But if its average call depth is O(log n), and each level of the call
tree processes at most n elements, the total amount of work done on average is the product,
O(n log n). The algorithm does not have to verify that the pivot is in the middle half—if
we hit it any constant fraction of the times, that is enough for the desired complexity.
Using recurrences
An alternative approach is to set up a recurrence relation74 for the T(n) factor, the time
needed to sort a list of size n. In the most unbalanced case, a single quicksort call involves
O(n) work plus two recursive calls on lists of size 0 and n−1, so the recurrence relation is
T (n) = O(n) + T (0) + T (n − 1) = O(n) + T (n − 1).
This is the same relation as for insertion sort75 and selection sort76 , and it solves to worst
case T(n) = O(n²).
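Unrolling this recurrence makes the quadratic bound explicit:
T(n) = O(n) + T(n-1) = O(n) + O(n-1) + \cdots + O(1) = O\!\left(\sum_{k=1}^{n} k\right) = O(n^2).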
In the most balanced case, a single quicksort call involves O(n) work plus two recursive calls
on lists of size n/2, so the recurrence relation is
T(n) = O(n) + 2T(n/2).
The master theorem for divide-and-conquer recurrences77 tells us that T(n) = O(n log n).
The outline of a formal proof of the O(n log n) expected time complexity follows. Assume
that there are no duplicates, as duplicates could be handled with linear-time pre- and post-
processing, or treated as cases easier than the one analyzed. When the input is a random
permutation, the rank of the pivot is uniform random from 0 to n − 1. Then the resulting
parts of the partition have sizes i and n − i − 1, and i is uniform random from 0 to n −
1. So, averaging over all possible splits and noting that the number of comparisons for the
partition is n − 1, the average number of comparisons over all permutations of the input
sequence can be estimated accurately by solving the recurrence relation:
C(n) = n - 1 + \frac{1}{n}\sum_{i=0}^{n-1}\bigl(C(i) + C(n-i-1)\bigr) = n - 1 + \frac{2}{n}\sum_{i=0}^{n-1} C(i)

nC(n) = n(n-1) + 2\sum_{i=0}^{n-1} C(i)
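The remaining steps are standard; as a sketch (not the chapter's own derivation), writing the same identity for n − 1 and subtracting gives
nC(n) - (n-1)C(n-1) = 2(n-1) + 2C(n-1), \qquad \frac{C(n)}{n+1} = \frac{C(n-1)}{n} + \frac{2(n-1)}{n(n+1)},
and summing the telescoping right-hand side yields C(n) \approx 2n \ln n \approx 1.39\, n \log_2 n, in agreement with the expected O(n log n) bound.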
73 https://en.wikipedia.org/wiki/Chernoff_bound
74 https://en.wikipedia.org/wiki/Recurrence_relation
75 https://en.wikipedia.org/wiki/Insertion_sort
76 https://en.wikipedia.org/wiki/Selection_sort
77 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)
To each execution of quicksort corresponds the following binary search tree81 (BST): the
initial pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot
of the right half is the root of the right subtree, and so on. The number of comparisons of the
execution of quicksort equals the number of comparisons during the construction of the BST
by a sequence of insertions. So, the average number of comparisons for randomized quicksort
equals the average cost of constructing a BST when the values inserted (x1 , x2 , . . . , xn ) form
a random permutation.
Consider a BST created by insertion of a sequence (x1 , x2 , . . . , xn ) of values forming a random
permutation. Let C denote the cost of creation of the BST. We have C = \sum_{i}\sum_{j<i} c_{i,j}, where
c_{i,j} is a binary random variable expressing whether during the insertion of xi there was a
comparison to xj .
By linearity of expectation82 , the expected value E[C] of C is E[C] = \sum_{i}\sum_{j<i} \Pr(c_{i,j} = 1).
Fix i and j<i. The values x1 , x2 , . . . , xj , once sorted, define j+1 intervals. The core structural
observation is that xi is compared to xj in the algorithm if and only if xi falls inside one of
the two intervals adjacent to xj .
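Since (x1 , x2 , . . . , xj , xi ) is itself a random permutation, xi is equally likely to fall into any of the j+1 intervals, and exactly two of them are adjacent to xj. As a sketch of the concluding step,
\Pr(c_{i,j} = 1) = \frac{2}{j+1}, \qquad E[C] = \sum_{i}\sum_{j<i} \frac{2}{j+1} \le 2n \sum_{m=1}^{n} \frac{1}{m} = O(n \log n).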
78 https://en.wikipedia.org/wiki/Comparison_sort
https://en.wikipedia.org/wiki/Comparison_sort#Lower_bound_for_the_average_number_of_
79
comparisons
80 https://en.wikipedia.org/wiki/Stirling%27s_approximation
81 https://en.wikipedia.org/wiki/Binary_search_tree
82 https://en.wikipedia.org/wiki/Expected_value#Linearity
Relation to other algorithms
Quicksort is a space-optimized version of the binary tree sort85 . Instead of inserting items
sequentially into an explicit tree, quicksort organizes them concurrently into a tree that is
implied by the recursive calls. The algorithms make exactly the same comparisons, but in a
different order. An often desirable property of a sorting algorithm86 is stability – that is the
83 https://en.wikipedia.org/wiki/Tail_recursion
84 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
85 https://en.wikipedia.org/wiki/Binary_tree_sort
86 https://en.wikipedia.org/wiki/Sorting_algorithm
order of elements that compare equal is not changed, which makes it possible to control the
order of multikey tables (e.g. directory or folder listings) in a natural way. This property is hard to maintain
for in situ (or in place) quicksort (that uses only constant additional space for pointers and
buffers, and O(log n) additional space for the management of explicit or implicit recursion).
For variant quicksorts involving extra memory due to representations using pointers (e.g.
lists or trees) or files (effectively lists), it is trivial to maintain stability. These more complex,
or disk-bound, data structures tend to increase the time cost, in general making increasing use
of virtual memory or disk.
The most direct competitor of quicksort is heapsort87 . Heapsort's running time is O(n log n),
but heapsort is usually considered slower on average than in-place quicksort.[28]
This result is debatable; some publications indicate the opposite.[29][30] Introsort88 is a vari-
ant of quicksort that switches to heapsort when a bad case is detected to avoid quicksort's
worst-case running time.
Quicksort also competes with merge sort89 , another O(n log n) sorting algorithm. Mergesort
is a stable sort90 , unlike standard in-place quicksort and heapsort, and has excellent worst-
case performance. The main disadvantage of mergesort is that, when operating on arrays,
efficient implementations require O(n) auxiliary space, whereas the variant of quicksort with
in-place partitioning and tail recursion uses only O(log n) space.
Mergesort works very well on linked lists91 , requiring only a small, constant amount of
auxiliary storage. Although quicksort can be implemented as a stable sort using linked
lists, it will often suffer from poor pivot choices without random access. Mergesort is also
the algorithm of choice for external sorting92 of very large data sets stored on slow-to-access
media such as disk storage93 or network-attached storage94 .
Bucket sort95 with two buckets is very similar to quicksort; the pivot in this case is effec-
tively the value in the middle of the value range, which does well on average for uniformly
distributed inputs.
A selection algorithm96 chooses the kth smallest of a list of numbers; this is an easier problem
in general than sorting. One simple but effective selection algorithm works nearly in the
same manner as quicksort, and is accordingly known as quickselect97 . The difference is that
instead of making recursive calls on both sublists, it only makes a single tail-recursive call on
the sublist that contains the desired element. This change lowers the average complexity to
linear or O(n) time, which is optimal for selection, but the sorting algorithm is still O(n²).
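For illustration, a minimal quickselect sketch in Python; the three-way partition and the function name are choices of this sketch rather than reference code from the chapter:

import random

def quickselect(items, k):
    """Return the k-th smallest element (0-based) of the sequence 'items'."""
    a = list(items)                  # work on a copy
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        pivot = a[random.randint(lo, hi)]
        # three-way partition: [< pivot | == pivot | unexamined | > pivot]
        lt, i, gt = lo, lo, hi
        while i <= gt:
            if a[i] < pivot:
                a[lt], a[i] = a[i], a[lt]
                lt += 1
                i += 1
            elif a[i] > pivot:
                a[i], a[gt] = a[gt], a[i]
                gt -= 1
            else:
                i += 1
        if k < lt:                   # answer lies in the "< pivot" part
            hi = lt - 1
        elif k > gt:                 # answer lies in the "> pivot" part
            lo = gt + 1
        else:                        # pivot block contains the k-th element
            return pivot

# Example: quickselect([3, 1, 4, 1, 5, 9, 2], 3) returns 3, the 4th smallest value.
Only one side of each partition is recursed into (here, iterated on), which is what brings the average cost down to linear.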
87 https://en.wikipedia.org/wiki/Heapsort
88 https://en.wikipedia.org/wiki/Introsort
89 https://en.wikipedia.org/wiki/Merge_sort
90 https://en.wikipedia.org/wiki/Stable_sort
91 https://en.wikipedia.org/wiki/Linked_list
92 https://en.wikipedia.org/wiki/External_sorting
93 https://en.wikipedia.org/wiki/Disk_storage
94 https://en.wikipedia.org/wiki/Network-attached_storage
95 https://en.wikipedia.org/wiki/Bucket_sort
96 https://en.wikipedia.org/wiki/Selection_algorithm
97 https://en.wikipedia.org/wiki/Quickselect
A variant of quickselect, the median of medians98 algorithm, chooses pivots more carefully,
ensuring that the pivots are near the middle of the data (between the 30th and 70th per-
centiles), and thus has guaranteed linear time – O(n). This same pivot strategy can be used
to construct a variant of quicksort (median of medians quicksort) with O(n log n) time.
However, the overhead of choosing the pivot is significant, so this is generally not used in
practice.
More abstractly, given an O(n) selection algorithm, one can use it to find the ideal pivot
(the median) at every step of quicksort and thus produce a sorting algorithm with O(n log
n) running time. Practical implementations of this variant are considerably slower on average,
but they are of theoretical interest because they show that an optimal selection algorithm can
yield an optimal sorting algorithm.
7.4.2 Variants
Multi-pivot quicksort
Instead of partitioning into two subarrays using a single pivot, multi-pivot quicksort (also
multiquicksort[22] ) partitions its input into some number s of subarrays using s − 1 piv-
ots. While the dual-pivot case (s = 3) was considered by Sedgewick and others already
in the mid-1970s, the resulting algorithms were not faster in practice than the ”classical”
quicksort.[31] A 1999 assessment of a multiquicksort with a variable number of pivots, tuned
to make efficient use of processor caches, found it to increase the instruction count by
some 20%, but simulation results suggested that it would be more efficient on very large
inputs.[22] A version of dual-pivot quicksort developed by Yaroslavskiy in 2009[10] turned
out to be fast enough to warrant implementation in Java 799 , as the standard algorithm to
sort arrays of primitives100 (sorting arrays of objects101 is done using Timsort102 ).[32] The
performance benefit of this algorithm was subsequently found to be mostly related to cache
performance,[33] and experimental results indicate that the three-pivot variant may perform
even better on modern machines.[34][35]
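For illustration, a compact dual-pivot quicksort in Python in the spirit described above; taking the first and last elements as the two pivots is an assumption of this sketch, and it is not Yaroslavskiy's tuned implementation:

def dual_pivot_quicksort(a, lo=0, hi=None):
    """Sort list a[lo..hi] in place using two pivots p <= q."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    if a[lo] > a[hi]:
        a[lo], a[hi] = a[hi], a[lo]
    p, q = a[lo], a[hi]              # the two pivots, p <= q
    lt, i, gt = lo + 1, lo + 1, hi - 1
    while i <= gt:
        if a[i] < p:                 # element belongs in the left part
            a[i], a[lt] = a[lt], a[i]
            lt += 1
            i += 1
        elif a[i] > q:               # element belongs in the right part
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:                        # p <= element <= q: middle part
            i += 1
    lt -= 1
    gt += 1
    a[lo], a[lt] = a[lt], a[lo]      # move the pivots into their final places
    a[hi], a[gt] = a[gt], a[hi]
    dual_pivot_quicksort(a, lo, lt - 1)      # elements < p
    dual_pivot_quicksort(a, lt + 1, gt - 1)  # elements between p and q
    dual_pivot_quicksort(a, gt + 1, hi)      # elements > q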
External quicksort
For files on magnetic tape, external quicksort is the same as regular quicksort except that the pivot is replaced by a
buffer. First, the M/2 first and last elements are read into the buffer and sorted, then the
next element from the beginning or end is read to balance writing. If the next element is
less than the least of the buffer, write it to available space at the beginning. If greater than
the greatest, write it to the end. Otherwise write the greatest or least of the buffer, and
put the next element in the buffer. Keep the maximum lower and minimum upper keys
written to avoid resorting middle elements that are in order. When done, write the buffer.
Recursively sort the smaller partition, and loop to sort the remaining partition. This is
98 https://en.wikipedia.org/wiki/Median_of_medians
99 https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7_(July_28,_2011)
100 https://en.wikipedia.org/wiki/Primitive_data_type
101 https://en.wikipedia.org/wiki/Object_(computer_science)
102 https://en.wikipedia.org/wiki/Timsort
a kind of three-way quicksort in which the middle partition (buffer) represents a sorted
subarray of elements that are approximately equal to the pivot.
Main article: Multi-key quicksort103 This algorithm is a combination of radix sort104 and
quicksort. Pick an element from the array (the pivot) and consider the first character (key)
of the string (multikey). Partition the remaining elements into three sets: those whose corre-
sponding character is less than, equal to, and greater than the pivot's character. Recursively
sort the ”less than” and ”greater than” partitions on the same character. Recursively sort
the ”equal to” partition by the next character (key). Given we sort using bytes or words of
length W bits, the best case is O(KN) and the worst case O(2^K N) or at least O(N²) as for
standard quicksort, given that for unique keys N < 2^K , and K is a hidden constant in all standard
comparison sort105 algorithms including quicksort. This is a kind of three-way quicksort
in which the middle partition represents a (trivially) sorted subarray of elements that are
exactly equal to the pivot.
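A short Python sketch of this multikey (three-way string) quicksort; the helper names and the sentinel used for "past the end of the string" are choices of this sketch:

def char_at(s, d):
    # treat the end of the string as a sentinel smaller than any real character
    return ord(s[d]) if d < len(s) else -1

def multikey_quicksort(a, lo=0, hi=None, d=0):
    """Sort the list of strings a[lo..hi] in place, keyed on character position d."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = char_at(a[lo], d)
    lt, i, gt = lo, lo + 1, hi
    while i <= gt:
        c = char_at(a[i], d)
        if c < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif c > pivot:
            a[i], a[gt] = a[gt], a[i]
            gt -= 1
        else:
            i += 1
    multikey_quicksort(a, lo, lt - 1, d)       # "less than" partition, same character
    if pivot >= 0:
        multikey_quicksort(a, lt, gt, d + 1)   # "equal to" partition, next character
    multikey_quicksort(a, gt + 1, hi, d)       # "greater than" partition, same character

# Example: words = ["she", "sells", "sea", "shells"]; multikey_quicksort(words)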
Also developed by Powers as an o(K) parallel PRAM106 algorithm. This is again a combi-
nation of radix sort107 and quicksort but the quicksort left/right partition decision is made
on successive bits of the key, and is thus O(KN) for N K-bit keys. All comparison sort108
algorithms implicitly assume the transdichotomous model109 with K in Θ(log N), as if K is
smaller we can sort in O(N) time using a hash table or integer sorting110 . If K ≫ log N but
elements are unique within O(log N) bits, the remaining bits will not be looked at by either
quicksort or quick radix sort. Failing that, all comparison sorting algorithms will also have
the same overhead of looking through O(K) relatively useless bits but quick radix sort will
avoid the worst case O(N²) behaviours of standard quicksort and radix quicksort, and will
be faster even in the best case of those comparison algorithms under these conditions of
uniqueprefix(K) ≫ log N. See Powers[36] for further discussion of the hidden overheads in
comparison, radix and parallel sorting.
BlockQuicksort
103 https://en.wikipedia.org/wiki/Multi-key_quicksort
104 https://en.wikipedia.org/wiki/Radix_sort
105 https://en.wikipedia.org/wiki/Comparison_sort
106 https://en.wikipedia.org/wiki/Parallel_random-access_machine
107 https://en.wikipedia.org/wiki/Radix_sort
108 https://en.wikipedia.org/wiki/Comparison_sort
109 https://en.wikipedia.org/wiki/Transdichotomous_model
110 https://en.wikipedia.org/wiki/Integer_sorting
111 https://en.wikipedia.org/wiki/Branch_misprediction
Main article: Partial sorting115 Several variants of quicksort exist that separate the k small-
est or largest elements from the rest of the input.
7.4.3 Generalization
112 https://en.wikipedia.org/wiki/Data_dependencies
113 https://en.wikipedia.org/wiki/Loop_blocking
114 https://en.wikipedia.org/wiki/Data_cache
115 https://en.wikipedia.org/wiki/Partial_sorting
116 https://en.wikipedia.org/wiki/Richard_J._Cole
117 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
118 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
119 https://en.wikipedia.org/wiki/Douglas_McIlroy
120 https://en.wikipedia.org/wiki/Portal:Computer_programming
121 https://en.wikipedia.org/wiki/Introsort
7.6 Notes
1. ”Sir Antony Hoare”122 . Computer History Museum. Archived123 from the original on 3 April 2015. Retrieved 22 April 2015.
2. Hoare, C. A. R.124 (1961). ”Algorithm 64: Quicksort”. Comm. ACM125 . 4 (7): 321. doi126 :10.1145/366622.366644127 .
3. Skiena, Steven S.128 (2008). The Algorithm Design Manual129 . Springer. p. 129. ISBN130 978-1-84800-069-8131 .
4. Shustek, L. (2009). ”Interview: An interview with C.A.R. Hoare”. Comm. ACM132 . 52 (3): 38–41. doi133 :10.1145/1467247.1467261134 .
5. ”My Quickshort interview with Sir Tony Hoare, the inventor of Quicksort”135 . Marcelo M De Barros. 15 March 2015.
6. Bentley, Jon L.; McIlroy, M. Douglas (1993). ”Engineering a sort function”136 . Software—Practice and Experience. 23 (11): 1249–1265. CiteSeerX137 10.1.1.14.8162138 . doi139 :10.1002/spe.4380231105140 .
7. Van Emden, M. H. (1 November 1970). ”Algorithm 402: Increasing the Efficiency of Quicksort”. Commun. ACM. 13 (11): 693–694. doi141 :10.1145/362790.362803142 . ISSN143 0001-0782144 .
8. Bentley, Jon145 (2007). ”The most beautiful code I never wrote”. In Oram, Andy; Wilson, Greg (eds.). Beautiful Code: Leading Programmers Explain How They Think. O'Reilly Media. p. 30. ISBN146 978-0-596-51004-6147 .
9. ”Quicksort Partitioning: Hoare vs. Lomuto”148 . cs.stackexchange.com. Retrieved 3 August 2015.
https://web.archive.org/web/20150403184558/http://www.computerhistory.org/
122
fellowawards/hall/bios/Antony%2CHoare/
123 http://www.computerhistory.org/fellowawards/hall/bios/Antony,Hoare/
124 https://en.wikipedia.org/wiki/Tony_Hoare
125 https://en.wikipedia.org/wiki/Communications_of_the_ACM
126 https://en.wikipedia.org/wiki/Doi_(identifier)
127 https://doi.org/10.1145%2F366622.366644
128 https://en.wikipedia.org/wiki/Steven_Skiena
129 https://books.google.com/books?id=7XUSn0IKQEgC
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
132 https://en.wikipedia.org/wiki/Communications_of_the_ACM
133 https://en.wikipedia.org/wiki/Doi_(identifier)
134 https://doi.org/10.1145%2F1467247.1467261
http://anothercasualcoder.blogspot.com/2015/03/my-quickshort-interview-with-sir-
135
tony.html
136 http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162
137 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
138 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162
139 https://en.wikipedia.org/wiki/Doi_(identifier)
140 https://doi.org/10.1002%2Fspe.4380231105
141 https://en.wikipedia.org/wiki/Doi_(identifier)
142 https://doi.org/10.1145%2F362790.362803
143 https://en.wikipedia.org/wiki/ISSN_(identifier)
144 http://www.worldcat.org/issn/0001-0782
145 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
146 https://en.wikipedia.org/wiki/ISBN_(identifier)
147 https://en.wikipedia.org/wiki/Special:BookSources/978-0-596-51004-6
148 https://cs.stackexchange.com/q/11550
https://web.archive.org/web/20151002230717/http://iaroslavski.narod.ru/quicksort/
149
DualPivotQuicksort.pdf
150 http://iaroslavski.narod.ru/quicksort/DualPivotQuicksort.pdf
151 http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628
152 https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(int%5b%5d)
153 https://en.wikipedia.org/wiki/Doi_(identifier)
154 https://doi.org/10.1137%2F1.9781611972931.5
155 https://en.wikipedia.org/wiki/ISBN_(identifier)
156 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61197-253-5
157 https://en.wikipedia.org/wiki/Thomas_H._Cormen
158 https://en.wikipedia.org/wiki/Charles_E._Leiserson
159 https://en.wikipedia.org/wiki/Ron_Rivest
160 https://en.wikipedia.org/wiki/Clifford_Stein
161 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
164 https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3463
165 https://en.wikipedia.org/wiki/Tony_Hoare
166 http://comjnl.oxfordjournals.org/content/5/1/10
167 https://en.wikipedia.org/wiki/Doi_(identifier)
168 https://doi.org/10.1093%2Fcomjnl%2F5.1.10
169 https://en.wikipedia.org/wiki/ISSN_(identifier)
170 http://www.worldcat.org/issn/0010-4620
171 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
172 https://books.google.com/books?id=ylAETlep0CwC
173 https://en.wikipedia.org/wiki/ISBN_(identifier)
174 https://en.wikipedia.org/wiki/Special:BookSources/978-81-317-1291-7
175 https://en.wikipedia.org/wiki/GNU_libc
176 https://www.cs.columbia.edu/~hgs/teaching/isp/hw/qsort.c
177 http://repo.or.cz/w/glibc.git/blob/HEAD:/stdlib/qsort.c
instances Heapsort is already considerably slower than Quicksort (in our experiments
more than 30% for n = 2^10 ) and on larger instances it suffers from its poor cache
178 http://www.ugrad.cs.ubc.ca/~cs260/chnotes/ch6/Ch6CovCompiled.html
180 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
181 https://en.wikipedia.org/wiki/Communications_of_the_ACM
182 https://en.wikipedia.org/wiki/Doi_(identifier)
183 https://doi.org/10.1145%2F359619.359631
184 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
185 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.1788
186 https://en.wikipedia.org/wiki/Doi_(identifier)
187 https://doi.org/10.1006%2Fjagm.1998.0985
188 https://www.cs.cmu.edu/afs/cs/academic/class/15210-s13/www/lectures/lecture19.pdf
189 http://www.drdobbs.com/parallel/quicksort-partition-via-prefix-scan/240003109
190 https://books.google.com/books?id=dZoZAQAAIAAJ
191 https://en.wikipedia.org/wiki/ISBN_(identifier)
192 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-086373-7
193 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
194 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.9071
195 https://en.wikipedia.org/wiki/ArXiv_(identifier)
196 http://arxiv.org/abs/1811.99833
197 https://en.wikipedia.org/wiki/Doi_(identifier)
198 https://doi.org/10.1137%2F1.9781611975499.1
199 https://en.wikipedia.org/wiki/ISBN_(identifier)
200 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61197-549-9
behavior (in our experiments more than eight times slower than Quicksort for sorting
2^28 elements).
29. Hsieh, Paul (2004). ”Sorting revisited”201 . azillionmonkeys.com. Retrieved 26 April 2010.
30. MacKay, David (December 2005). ”Heapsort, Quicksort, and Entropy”202 . Archived203 from the original on 1 April 2009. Retrieved 20 December 2019.
31. Wild, Sebastian; Nebel, Markus E. (2012). Average case analysis of Java 7's dual pivot quicksort. European Symposium on Algorithms. arXiv204 :1310.7409205 . Bibcode206 :2013arXiv1310.7409W207 .
32. ”Arrays”208 . Java Platform SE 7. Oracle. Retrieved 4 September 2014.
33. Wild, Sebastian (3 November 2015). ”Why Is Dual-Pivot Quicksort Fast?”. arXiv209 :1511.01138210 [cs.DS211 ].
34. Kushagra, Shrinu; López-Ortiz, Alejandro; Qiao, Aurick; Munro, J. Ian (2014). Multi-Pivot Quicksort: Theory and Experiments. Proc. Workshop on Algorithm Engineering and Experiments (ALENEX). doi212 :10.1137/1.9781611973198.6213 .
35. Kushagra, Shrinu; López-Ortiz, Alejandro; Munro, J. Ian; Qiao, Aurick (7 February 2014). Multi-Pivot Quicksort: Theory and Experiments214 (PDF) (Seminar presentation). Waterloo, Ontario215 .
36. David M. W. Powers, Parallel Unification: Practical Complexity216 , Australasian Computer Architecture Workshop, Flinders University, January 1995
37. Kaligosi, Kanela; Sanders, Peter (11–13 September 2006). How Branch Mispredictions Affect Quicksort217 (PDF). ESA 2006: 14th Annual European Symposium on Algorithms. Zurich218 . doi219 :10.1007/11841036_69220 .
201 http://www.azillionmonkeys.com/qed/sort.html
202 http://www.inference.org.uk/mackay/sorting/sorting.html
https://web.archive.org/web/20090401163041/http://users.aims.ac.za/~mackay/sorting/
203
sorting.html
204 https://en.wikipedia.org/wiki/ArXiv_(identifier)
205 http://arxiv.org/abs/1310.7409
206 https://en.wikipedia.org/wiki/Bibcode_(identifier)
207 https://ui.adsabs.harvard.edu/abs/2013arXiv1310.7409W
208 http://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort%28byte%5B%5D%29
209 https://en.wikipedia.org/wiki/ArXiv_(identifier)
210 http://arxiv.org/abs/1511.01138
211 http://arxiv.org/archive/cs.DS
212 https://en.wikipedia.org/wiki/Doi_(identifier)
213 https://doi.org/10.1137%2F1.9781611973198.6
https://lusy.fri.uni-lj.si/sites/lusy.fri.uni-lj.si/files/publications/alopez2014-
214
seminar-qsort.pdf
215 https://en.wikipedia.org/wiki/Waterloo,_Ontario
216 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
https://www.cs.auckland.ac.nz/~mcw/Teaching/refs/sorting/quicksort-branch-prediction.
217
pdf
218 https://en.wikipedia.org/wiki/Zurich
219 https://en.wikipedia.org/wiki/Doi_(identifier)
220 https://doi.org/10.1007%2F11841036_69
38. Edelkamp, Stefan; Weiß, Armin (22 April 2016). ”BlockQuicksort: How Branch Mispredictions don't affect Quicksort”. arXiv221 :1604.06697222 [cs.DS223 ].
39. Richard Cole, David C. Kandathil: ”The average case analysis of Partition sorts”224 ,
European Symposium on Algorithms, 14–17 September 2004, Bergen, Norway. Pub-
lished: Lecture Notes in Computer Science 3221, Springer Verlag, pp. 240–251.
7.7 References
• Sedgewick, R.225 (1978). ”Implementing Quicksort programs”. Comm. ACM226 . 21 (10): 847–857. doi227 :10.1145/359619.359631228 .
• Dean, B. C. (2006). ”A simple expected running time analysis for randomized 'divide and conquer' algorithms”. Discrete Applied Mathematics. 154: 1–5. doi229 :10.1016/j.dam.2005.07.005230 .
• Hoare, C. A. R.231 (1961). ”Algorithm 63: Partition”. Comm. ACM232 . 4 (7): 321. doi233 :10.1145/366622.366642234 .
• Hoare, C. A. R.235 (1961). ”Algorithm 65: Find”. Comm. ACM236 . 4 (7): 321–322. doi237 :10.1145/366622.366647238 .
• Hoare, C. A. R.239 (1962). ”Quicksort”. Comput. J.240 5 (1): 10–16. doi241 :10.1093/comjnl/5.1.10242 . (Reprinted in Hoare and Jones: Essays in computing science243 , 1989.)
221 https://en.wikipedia.org/wiki/ArXiv_(identifier)
222 http://arxiv.org/abs/1604.06697
223 http://arxiv.org/archive/cs.DS
224 http://www.cs.nyu.edu/cole/papers/part-sort.pdf
225 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
226 https://en.wikipedia.org/wiki/Communications_of_the_ACM
227 https://en.wikipedia.org/wiki/Doi_(identifier)
228 https://doi.org/10.1145%2F359619.359631
229 https://en.wikipedia.org/wiki/Doi_(identifier)
230 https://doi.org/10.1016%2Fj.dam.2005.07.005
231 https://en.wikipedia.org/wiki/Tony_Hoare
232 https://en.wikipedia.org/wiki/Communications_of_the_ACM
233 https://en.wikipedia.org/wiki/Doi_(identifier)
234 https://doi.org/10.1145%2F366622.366642
235 https://en.wikipedia.org/wiki/Tony_Hoare
236 https://en.wikipedia.org/wiki/Communications_of_the_ACM
237 https://en.wikipedia.org/wiki/Doi_(identifier)
238 https://doi.org/10.1145%2F366622.366647
239 https://en.wikipedia.org/wiki/Tony_Hoare
240 https://en.wikipedia.org/wiki/The_Computer_Journal
241 https://en.wikipedia.org/wiki/Doi_(identifier)
242 https://doi.org/10.1093%2Fcomjnl%2F5.1.10
243 http://portal.acm.org/citation.cfm?id=SERIES11430.63445
244 https://en.wikipedia.org/wiki/David_Musser
245 http://www.cs.rpi.edu/~musser/gp/introsort.ps
246 https://en.wikipedia.org/wiki/Doi_(identifier)
https://doi.org/10.1002%2F%28SICI%291097-024X%28199708%2927%3A8%3C983%3A%3AAID-
247
SPE117%3E3.0.CO%3B2-%23
248 https://en.wikipedia.org/wiki/Donald_Knuth
249 https://en.wikipedia.org/wiki/ISBN_(identifier)
250 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
251 https://en.wikipedia.org/wiki/Thomas_H._Cormen
252 https://en.wikipedia.org/wiki/Charles_E._Leiserson
253 https://en.wikipedia.org/wiki/Ronald_L._Rivest
254 https://en.wikipedia.org/wiki/Clifford_Stein
255 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
256 https://en.wikipedia.org/wiki/MIT_Press
257 https://en.wikipedia.org/wiki/McGraw-Hill
258 https://en.wikipedia.org/wiki/ISBN_(identifier)
259 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
260 https://en.wikipedia.org/wiki/Faron_Moller
261 http://www.cs.swan.ac.uk/~csfm/Courses/CS_332/quicksort.pdf
262 https://en.wikipedia.org/wiki/Swansea_University
263 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
264 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
265 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.4954
266 https://en.wikipedia.org/wiki/Doi_(identifier)
267 https://doi.org/10.1137%2FS0097539700382108
268 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
269 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162
270 https://en.wikipedia.org/wiki/Doi_(identifier)
271 https://doi.org/10.1002%2Fspe.4380231105
The Wikibook Algorithm implementation272 has a page on the topic of: Quick-
sort273
272 https://en.wikibooks.org/wiki/Algorithm_implementation
273 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Quicksort
https://web.archive.org/web/20150302145415/http://www.sorting-algorithms.com/quick-
274
sort
275 https://en.wikipedia.org/wiki/Category:CS1_maint:_BOT:_original-url_status_unknown
https://web.archive.org/web/20150306071949/http://www.sorting-algorithms.com/quick-
276
sort-3-way
277 https://en.wikipedia.org/wiki/Category:CS1_maint:_BOT:_original-url_status_unknown
http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_
278
Sorti.html#SECTION001412000000000000000
279 https://en.wikipedia.org/wiki/Pat_Morin
https://web.archive.org/web/20180629183103/http://www.tomgsmith.com/quicksort/
280
content/illustration/
8 Heapsort
Heapsort
A run of heapsort sorting an array of randomly permuted values. In the first stage of
the algorithm the array elements are reordered to satisfy the heap property. Before the
actual sorting takes place, the heap tree structure is shown briefly for illustration.
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n log n)
Best-case performance: O(n log n) (distinct keys) or O(n) (equal keys)
Average performance: O(n log n)
Worst-case space complexity: O(n) total, O(1) auxiliary
1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Comparison_sort
3 https://en.wikipedia.org/wiki/Sorting_algorithm
4 https://en.wikipedia.org/wiki/Selection_sort
5 https://en.wikipedia.org/wiki/Heap_(data_structure)
6 https://en.wikipedia.org/wiki/Quicksort
7 https://en.wikipedia.org/wiki/Big_O_notation
8 https://en.wikipedia.org/wiki/In-place_algorithm
9 https://en.wikipedia.org/wiki/Stable_sort
Heapsort was invented by J. W. J. Williams10 in 1964.[2] This was also the birth of the
heap, presented already by Williams as a useful data structure in its own right.[3] In the
same year, R. W. Floyd11 published an improved version that could sort an array in-place,
continuing his earlier research into the treesort12 algorithm.[3]
8.1 Overview
In the second step, a sorted array is created by repeatedly removing the largest element
from the heap (the root of the heap), and inserting it into the array. The heap is updated
after each removal to maintain the heap property. Once all objects have been removed from
the heap, the result is a sorted array.
Heapsort can be performed in place. The array can be split into two parts, the sorted array
and the heap. The storage of heaps as arrays is diagrammed here16 . The heap's invariant
is preserved after each extraction, so the only cost is that of extraction.
8.2 Algorithm
The Heapsort algorithm involves preparing the list by first turning it into a max heap17 .
The algorithm then repeatedly swaps the first value of the list with the last value, decreasing
the range of values considered in the heap operation by one, and sifting the new first value
into its position in the heap. This repeats until the range of considered values is one value
in length.
The steps are:
10 https://en.wikipedia.org/wiki/J._W._J._Williams
11 https://en.wikipedia.org/wiki/Robert_Floyd
12 https://en.wikipedia.org/wiki/Treesort
13 https://en.wikipedia.org/wiki/Heap_(data_structure)
14 https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap
15 https://en.wikipedia.org/wiki/Binary_tree#Types_of_binary_trees
16 https://en.wikipedia.org/wiki/Binary_heap#Heap_implementation
17 https://en.wikipedia.org/wiki/Binary_heap
1. Call the buildMaxHeap() function on the list. Also referred to as heapify(), this builds
a heap from a list in O(n) operations.
2. Swap the first element of the list with the final element. Decrease the considered range
of the list by one.
3. Call the siftDown() function on the list to sift the new first element to its appropriate
index in the heap.
4. Go to step (2) unless the considered range of the list is one element.
The buildMaxHeap() operation is run once, and is O(n) in performance. The siftDown()
function is O(log n), and is called n times. Therefore, the performance of this algorithm is
O(n + n log n) = O(n log n).
8.2.1 Pseudocode
The following is a simple way to implement the algorithm in pseudocode18 . Arrays are
zero-based19 and swap is used to exchange two elements of the array. Movement 'down'
means from the root towards the leaves, or from lower indices to higher. Note that during
the sort, the largest element is at the root of the heap at a[0], while at the end of the sort,
the largest element is in a[end].
procedure heapsort(a, count) is
    input: an unordered array a of length count
    (Build the heap in array a so that the largest value is at the root)
    heapify(a, count)
    (The following loop maintains the invariants20 that a[0:end] is a heap and every element
     beyond end is greater than everything before it (so a[end:count] is in sorted order))
    end ← count - 1
    while end > 0 do
        (a[0] is the root and largest value. The swap moves it in front of the sorted elements.)
        swap(a[end], a[0])
        (the heap size is reduced by one)
        end ← end - 1
        (the swap ruined the heap property, so restore it)
        siftDown(a, 0, end)
The sorting routine uses two subroutines, heapify and siftDown. The former is the com-
mon in-place heap construction routine, while the latter is a common subroutine for imple-
menting heapify.
(Put elements of 'a' in heap order, in-place)
procedure heapify(a, count) is
    (start is assigned the index in 'a' of the last parent node)
    (the last element in a 0-based array is at index count-1; find the parent of that element)
    start ← iParent(count-1)
    while start ≥ 0 do
        (sift down the node at index 'start' to the proper place such that all nodes below
         the start index are in heap order)
        siftDown(a, start, count - 1)
        (go to the next parent node)
        start ← start - 1
18 https://en.wikipedia.org/wiki/Pseudocode
19 https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(array)
20 https://en.wikipedia.org/wiki/Loop_invariant
(after sifting down the root all nodes/elements are in heap order)
(Repair the heap whose root element is at index 'start', assuming the heaps rooted at its children are valid)
procedure siftDown(a, start, end) is
    root ← start
    while iLeftChild(root) ≤ end do    (While the root has at least one child)
        child ← iLeftChild(root)       (Left child of root)
        swap ← root                    (Keeps track of child to swap with)
        if a[swap] < a[child] then swap ← child
        if child+1 ≤ end and a[swap] < a[child+1] then swap ← child+1
        if swap = root then return     (The root holds the largest element; the heap is repaired)
        swap(a[root], a[swap])
        root ← swap                    (repeat to continue sifting down the child now)
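For concreteness, a direct Python transcription of the pseudocode above (a sketch using zero-based indexing; the index helpers correspond to iParent and iLeftChild):

def i_parent(i):
    return (i - 1) // 2

def i_left_child(i):
    return 2 * i + 1

def sift_down(a, start, end):
    """Repair the heap rooted at 'start', assuming the heaps below it are valid."""
    root = start
    while i_left_child(root) <= end:
        child = i_left_child(root)
        swap = root
        if a[swap] < a[child]:
            swap = child
        if child + 1 <= end and a[swap] < a[child + 1]:
            swap = child + 1
        if swap == root:
            return
        a[root], a[swap] = a[swap], a[root]
        root = swap

def heapify(a, count):
    """Build a max-heap in place, sifting down from the last parent node."""
    start = i_parent(count - 1)
    while start >= 0:
        sift_down(a, start, count - 1)
        start -= 1

def heapsort(a):
    count = len(a)
    heapify(a, count)
    end = count - 1
    while end > 0:
        a[end], a[0] = a[0], a[end]   # move the current maximum behind the heap
        end -= 1
        sift_down(a, 0, end)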
The heapify procedure can be thought of as building a heap from the bottom up by suc-
cessively sifting downward to establish the heap property21 . An alternative version (shown
below) that builds the heap top-down and sifts upward may be simpler to understand. This
siftUp version can be visualized as starting with an empty heap and successively inserting
elements, whereas the siftDown version given above treats the entire input array as a full
but ”broken” heap and ”repairs” it starting from the last non-trivial sub-heap (that is, the
last parent node).
Figure 24 Difference in time complexity between the ”siftDown” version and the
”siftUp” version.
Also, the siftDown version of heapify has O(n) time complexity22 , while the siftUp version
given below has O(n log n) time complexity due to its equivalence with inserting each
element, one at a time, into an empty heap.[4] This may seem counter-intuitive since, at a
glance, it is apparent that the former only makes half as many calls to its logarithmic-time
21 https://en.wikipedia.org/wiki/Heap_(data_structure)
22 https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap
sifting function as the latter; i.e., they seem to differ only by a constant factor, which never
affects asymptotic analysis.
To grasp the intuition behind this difference in complexity, note that the number of swaps
that may occur during any one siftUp call increases with the depth of the node on which the
call is made. The crux is that there are many (exponentially many) more ”deep” nodes than
there are ”shallow” nodes in a heap, so that siftUp may have its full logarithmic running-time
on the approximately linear number of calls made on the nodes at or near the ”bottom” of
the heap. On the other hand, the number of swaps that may occur during any one siftDown
call decreases as the depth of the node on which the call is made increases. Thus, when
the siftDown heapify begins and is calling siftDown on the bottom and most numerous
node-layers, each sifting call will incur, at most, a number of swaps equal to the ”height”
(from the bottom of the heap) of the node on which the sifting call is made. In other words,
about half the calls to siftDown will have at most only one swap, then about a quarter of
the calls will have at most two swaps, etc.
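The same intuition can be summed explicitly: a node at height h needs at most h swaps to sift down, and a heap of n elements has at most ⌈n/2^{h+1}⌉ nodes at height h, so the total work of the siftDown heapify is
\sum_{h=0}^{\lfloor \log_2 n \rfloor} \left\lceil \frac{n}{2^{h+1}} \right\rceil O(h) = O\!\left(n \sum_{h=0}^{\infty} \frac{h}{2^{h}}\right) = O(n),
whereas the siftUp version pays up to O(log n) per inserted element and only sums to O(n log n).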
The heapsort algorithm itself has O(n log n) time complexity using either version of heapify.
procedure heapify(a, count) is
    (end is assigned the index of the first (left) child of the root)
    end := 1
    while end < count do
        siftUp(a, 0, end)    (sift up the node at index 'end' so that nodes 0..end are in heap order)
        end := end + 1
8.3 Variations
The most important variation to the basic algorithm, which is included in all practical
implementations, is a heap-construction algorithm by Floyd which runs in O(n) time and
uses siftdown23 rather than siftup24 , avoiding the need to implement siftup at all.
Rather than starting with a trivial heap and repeatedly adding leaves, Floyd's algorithm
starts with the leaves, observing that they are trivial but valid heaps by themselves, and
23 https://en.wikipedia.org/wiki/Binary_heap#Extract
24 https://en.wikipedia.org/wiki/Binary_heap#Insert
then adds parents. Starting with element n/2 and working backwards, each internal node
is made the root of a valid heap by sifting down. The last step is sifting down the first
element, after which the entire array obeys the heap property.
The worst-case number of comparisons during Floyd's heap-construction phase of Heap-
sort is known to be equal to 2n − 2s2 (n) − e2 (n), where s2 (n) is the number of 1 bits in the
binary representation of n and e2 (n) is the number of trailing 0 bits.[5]
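For example, for n = 10 (binary 1010) there are s2 (n) = 2 one bits and e2 (n) = 1 trailing zero bit, so the bound is 2·10 − 2·2 − 1 = 15 comparisons.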
The standard implementation of Floyd's heap-construction algorithm causes a large num-
ber of cache misses25 once the size of the data exceeds that of the CPU cache26 . Much
better performance on large data sets can be obtained by merging in depth-first27 order,
combining subheaps as soon as possible, rather than combining all subheaps on one level
before proceeding to the one above.[6][7]
25 https://en.wikipedia.org/wiki/Cache_miss
26 https://en.wikipedia.org/wiki/CPU_cache
27 https://en.wikipedia.org/wiki/Depth-first
28 https://en.wikipedia.org/wiki/Function_call
Because it goes all the way to the bottom and then comes back up, it is called heapsort
with bounce by some authors.[12]
function leafSearch(a, i, end) is
    j ← i
    while iRightChild(j) ≤ end do
        (Determine which of j's two children is the greater)
        if a[iRightChild(j)] > a[iLeftChild(j)] then
            j ← iRightChild(j)
        else
            j ← iLeftChild(j)
    (At the last level, there might be only one child)
    if iLeftChild(j) ≤ end then
        j ← iLeftChild(j)
    return j
The return value of the leafSearch is used in the modified siftDown routine:[9]
procedure siftDown(a, i, end) is
    j ← leafSearch(a, i, end)
    while a[i] > a[j] do
        j ← iParent(j)
    x ← a[j]
    a[j] ← a[i]
    while j > i do
        swap x, a[iParent(j)]
        j ← iParent(j)
Bottom-up heapsort was announced as beating quicksort (with median-of-three pivot selec-
tion) on arrays of size ≥16000.[8]
A 2008 re-evaluation of this algorithm showed it to be no faster than ordinary heapsort
for integer keys, presumably because modern branch prediction29 nullifies the cost of the
predictable comparisons which bottom-up heapsort manages to avoid.[10]
A further refinement does a binary search in the path to the selected leaf, and sorts in a worst
case of (n+1)(log2 (n+1) + log2 log2 (n+1) + 1.82) + O(log2 n) comparisons, approaching
the information-theoretic lower bound30 of n log2 n − 1.4427n comparisons.[13]
A variant which uses two extra bits per internal node (n−1 bits total for an n-element heap)
to cache information about which child is greater (two bits are required to store three cases:
left, right, and unknown)[11] uses less than n log2 n + 1.1n compares.[14]
29 https://en.wikipedia.org/wiki/Branch_prediction
https://en.wikipedia.org/wiki/Comparison_sort#Number_of_comparisons_required_to_sort_
30
a_list
31 https://en.wikipedia.org/wiki/Ternary_heap
in the binary heap, which only cover 2^3 = 8.[citation needed] This is primarily of academic
interest, as the additional complexity is not worth the minor savings, and bottom-up
heapsort beats both.
• The smoothsort33 algorithm[16] is a variation of heapsort developed by Edsger Dijkstra34
in 1981. Like heapsort, smoothsort's upper bound is O(n log n)35 . The advantage of
smoothsort is that it comes closer to O(n) time if the input is already sorted to some
degree36 , whereas heapsort averages O(n log n) regardless of the initial sorted state. Due
to its complexity, smoothsort is rarely used.[citation needed]
• Levcopoulos and Petersson[17] describe a variation of heapsort based on a heap of Cartesian trees38. First, a Cartesian tree is built from the input in O(n) time, and its root is placed in a 1-element binary heap. Then we repeatedly extract the minimum from the binary heap, output the tree's root element, and add its left and right children (if any), which are themselves Cartesian trees, to the binary heap.[18] As they show, if the input is already nearly sorted, the Cartesian trees will be very unbalanced, with few nodes having left and right children, resulting in the binary heap remaining small and allowing the algorithm to sort more quickly than O(n log n) for inputs that are already nearly sorted (a minimal sketch of this idea follows the list below).
• Several variants such as weak heapsort39 require n log2 n+O(1) comparisons in the worst
case, close to the theoretical minimum, using one extra bit of state per node. While this
extra bit makes the algorithms not truly in-place, if space for it can be found inside the
element, these algorithms are simple and efficient,[6]:40 but still slower than binary heaps
if key comparisons are cheap enough (e.g. integer keys) that a constant factor does not
matter.[19]
• Katajainen's ”ultimate heapsort” requires no extra storage, performs n log2 n+O(1) com-
parisons, and a similar number of element moves.[20] It is, however, even more complex
and not justified unless comparisons are very expensive.
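The following Python sketch illustrates the Cartesian-tree idea described by Levcopoulos and Petersson; it is only illustrative (the names are made up and it is not the authors' implementation), using a min-Cartesian tree and Python's heapq module as the auxiliary binary heap:

import heapq
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build_cartesian_tree(seq):
    # O(n) construction using the rightmost-path stack method (min-variant).
    stack = []
    for x in seq:
        node = Node(x)
        last = None
        while stack and stack[-1].value > x:
            last = stack.pop()          # larger values become the left subtree
        node.left = last
        if stack:
            stack[-1].right = node
        stack.append(node)
    return stack[0] if stack else None

def cartesian_tree_sort(seq):
    # Keep the roots of Cartesian (sub)trees in a binary heap; repeatedly
    # output the minimum and add its children to the heap.
    root = build_cartesian_tree(seq)
    out = []
    heap = []
    counter = 0                          # tie-breaker so Nodes are never compared
    if root is not None:
        heap.append((root.value, counter, root))
        counter += 1
    while heap:
        value, _, node = heapq.heappop(heap)
        out.append(value)
        for child in (node.left, node.right):
            if child is not None:
                heapq.heappush(heap, (child.value, counter, child))
                counter += 1
    return out

print(cartesian_tree_sort([6, 5, 3, 1, 8, 7, 2, 4]))   # [1, 2, 3, 4, 5, 6, 7, 8]

On nearly sorted input the Cartesian tree degenerates toward a single rightmost path, so the auxiliary heap stays very small and the work per output element approaches a constant.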
Heapsort primarily competes with quicksort40 , another very efficient general purpose nearly-
in-place comparison-based sort algorithm.
Quicksort is typically somewhat faster in practice, largely because of its better locality of reference and simpler inner loop, but the worst-case running time for quicksort is O(n2), which is unacceptable for large data sets and can be deliberately triggered given enough knowledge of the implementation, creating a security risk. See quicksort41 for a detailed discussion of this problem and possible solutions.
Thus, because of the O(n log n) upper bound on heapsort's running time and constant upper
bound on its auxiliary storage, embedded systems with real-time constraints or systems
concerned with security often use heapsort, such as the Linux kernel.[21]
33 https://en.wikipedia.org/wiki/Smoothsort
34 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
35 https://en.wikipedia.org/wiki/Big_O_notation
36 https://en.wikipedia.org/wiki/Adaptive_sort
38 https://en.wikipedia.org/wiki/Cartesian_tree
39 https://en.wikipedia.org/wiki/Weak_heap
40 https://en.wikipedia.org/wiki/Quicksort
41 https://en.wikipedia.org/wiki/Quicksort
Heapsort also competes with merge sort42 , which has the same time bounds. Merge sort
requires Ω(n) auxiliary space, but heapsort requires only a constant amount. Heapsort
typically runs faster in practice on machines with small or slow data caches43 , and does not
require as much external memory. On the other hand, merge sort has several advantages
over heapsort:
• Merge sort on arrays has considerably better data cache performance, often outperforming
heapsort on modern desktop computers because merge sort frequently accesses contiguous
memory locations (good locality of reference44 ); heapsort references are spread throughout
the heap.
• Heapsort is not a stable sort45 ; merge sort is stable.
• Merge sort parallelizes46 well and can achieve close to linear speedup47 with a trivial
implementation; heapsort is not an obvious candidate for a parallel algorithm.
• Merge sort can be adapted to operate on singly linked lists48 with O(1) extra space. Heapsort can be adapted to operate on doubly linked lists with only O(1) extra space overhead.[citation needed]
• Merge sort is used in external sorting50 ; heapsort is not. Locality of reference is the issue.
Introsort51 is an alternative to heapsort that combines quicksort and heapsort to retain
advantages of both: worst case speed of heapsort and average speed of quicksort.
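As an illustration of the idea only (a toy sketch, not the in-place introsort used by real libraries): quicksort runs until a recursion-depth limit proportional to log2 n is exceeded, at which point the remaining subproblem is handed to heapsort.

import heapq

def heapsort(a):
    # O(n log n) worst case; used here only as the depth-limit fallback.
    h = list(a)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]

def introsort(a, depth=None):
    # Toy introsort: quicksort until the depth budget is exhausted, then heapsort.
    if depth is None:
        depth = 2 * max(1, len(a).bit_length())
    if len(a) <= 1:
        return list(a)
    if depth == 0:
        return heapsort(a)               # guarantees O(n log n) in the worst case
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return introsort(less, depth - 1) + equal + introsort(greater, depth - 1)

print(introsort([6, 5, 3, 1, 8, 7, 2, 4]))   # [1, 2, 3, 4, 5, 6, 7, 8]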
8.5 Example
Let { 6, 5, 3, 1, 8, 7, 2, 4 } be the list that we want to sort from the smallest to the largest.
(Note, for the 'Build the heap' step: larger nodes do not stay below smaller parent nodes. They are swapped with their parents, and then recursively checked in case another swap is needed, to keep larger numbers above smaller numbers in the heap's binary tree.)
42 https://en.wikipedia.org/wiki/Merge_sort
43 https://en.wikipedia.org/wiki/Data_cache
44 https://en.wikipedia.org/wiki/Locality_of_reference
45 https://en.wikipedia.org/wiki/Stable_sort
46 https://en.wikipedia.org/wiki/Parallel_algorithm
47 https://en.wikipedia.org/wiki/Linear_speedup
48 https://en.wikipedia.org/wiki/Linked_list
50 https://en.wikipedia.org/wiki/External_sorting
51 https://en.wikipedia.org/wiki/Introsort
2. Sorting

Heap            | Swapped elements | Sorted array           | Details
5, 1, 3, 4, 2   | 1, 4             | 6, 7, 8                | swap 1 and 4 as they are not in order in the heap
5, 4, 3, 1, 2   | 5, 2             | 6, 7, 8                | swap 5 and 2 in order to delete 5 from heap
2, 4, 3, 1, 5   | 5                | 6, 7, 8                | delete 5 from heap and add to sorted array
2, 4, 3, 1      | 2, 4             | 5, 6, 7, 8             | swap 2 and 4 as they are not in order in the heap
4, 2, 3, 1      | 4, 1             | 5, 6, 7, 8             | swap 4 and 1 in order to delete 4 from heap
1, 2, 3, 4      | 4                | 5, 6, 7, 8             | delete 4 from heap and add to sorted array
1, 2, 3         | 1, 3             | 4, 5, 6, 7, 8          | swap 1 and 3 as they are not in order in the heap
3, 2, 1         | 3, 1             | 4, 5, 6, 7, 8          | swap 3 and 1 in order to delete 3 from heap
1, 2, 3         | 3                | 4, 5, 6, 7, 8          | delete 3 from heap and add to sorted array
1, 2            | 1, 2             | 3, 4, 5, 6, 7, 8       | swap 1 and 2 as they are not in order in the heap
2, 1            | 2, 1             | 3, 4, 5, 6, 7, 8       | swap 2 and 1 in order to delete 2 from heap
1, 2            | 2                | 3, 4, 5, 6, 7, 8       | delete 2 from heap and add to sorted array
1               | 1                | 2, 3, 4, 5, 6, 7, 8    | delete 1 from heap and add to sorted array
                |                  | 1, 2, 3, 4, 5, 6, 7, 8 | completed
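The result can be checked quickly with Python's heapq module; note that heapq is a min-heap, so this only verifies the sorted output rather than reproducing the in-place max-heap procedure traced above:

import heapq

data = [6, 5, 3, 1, 8, 7, 2, 4]
heap = list(data)
heapq.heapify(heap)                                       # build the heap in O(n)
print([heapq.heappop(heap) for _ in range(len(data))])    # [1, 2, 3, 4, 5, 6, 7, 8]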
8.6 Notes
1. Skiena, Steven52 (2008). "Sorting and Searching". The Algorithm Design Manual. Springer. p. 109. doi53:10.1007/978-1-84800-070-4_454. ISBN55 978-1-84800-069-856. "[H]eapsort is nothing but an implementation of selection sort using the right data structure."
2. Williams 196457
3. Brass, Peter (2008). Advanced Data Structures. Cambridge University Press. p. 209. ISBN58 978-0-521-88037-459.
4. "Priority Queues"60. Retrieved 24 May 2011.
5. Suchenek, Marek A. (2012), "Elementary Yet Precise Worst-Case Analysis of Floyd's Heap-Construction Program", Fundamenta Informaticae61, 120 (1): 75–92, doi62:10.3233/FI-2012-75163
6. Bojesen, Jesper; Katajainen, Jyrki; Spork, Maz (2000). "Performance Engineering Case Study: Heap Construction"64 (PS). ACM Journal of Experimental Algorithmics. 5 (15): 15–es. CiteSeerX65 10.1.1.35.324866. doi67:10.1145/351827.38425768. Alternate PDF source69.
7. Chen, Jingsen; Edelkamp, Stefan; Elmasry, Amr; Katajainen, Jyrki (27–31 August 2012). In-place Heap Construction with Optimized Comparisons, Moves, and Cache Misses70 (PDF). 37th International Symposium on Mathematical Foundations of Computer Science. Bratislava, Slovakia. pp. 259–270.
52 https://en.wikipedia.org/wiki/Steven_Skiena
53 https://en.wikipedia.org/wiki/Doi_(identifier)
54 https://doi.org/10.1007%2F978-1-84800-070-4_4
55 https://en.wikipedia.org/wiki/ISBN_(identifier)
56 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
57 #CITEREFWilliams1964
58 https://en.wikipedia.org/wiki/ISBN_(identifier)
59 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-88037-4
60 http://faculty.simpson.edu/lydia.sinapova/www/cmsc250/LN250_Weiss/L10-PQueues.htm
61 https://en.wikipedia.org/wiki/Fundamenta_Informaticae
62 https://en.wikipedia.org/wiki/Doi_(identifier)
63 https://doi.org/10.3233%2FFI-2012-751
64 http://hjemmesider.diku.dk/~jyrki/Paper/katajain.ps
65 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
66 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3248
67 https://en.wikipedia.org/wiki/Doi_(identifier)
68 https://doi.org/10.1145%2F351827.384257
https://www.semanticscholar.org/paper/Performance-Engineering-Case-Study-Heap-
69
Bojesen-Katajainen/6f4ada5912c1da64e16453d67ec99c970173fb5b
70 https://pdfs.semanticscholar.org/9cc6/36d7998d58b3937ba0098e971710ff039612.pdf#page=11
71 https://en.wikipedia.org/wiki/Doi_(identifier)
72 https://doi.org/10.1007%2F978-3-642-32589-2_25
73 https://en.wikipedia.org/wiki/ISBN_(identifier)
74 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-32588-5
75 https://en.wikipedia.org/wiki/Ingo_Wegener
76 https://core.ac.uk/download/pdf/82350265.pdf
77 https://en.wikipedia.org/wiki/Doi_(identifier)
78 https://doi.org/10.1016%2F0304-3975%2893%2990364-y
79 http://staff.gutech.edu.om/~rudolf/Paper/buh_algorithmica94.pdf
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1007%2Fbf01182770
82 https://en.wikipedia.org/wiki/Hdl_(identifier)
83 http://hdl.handle.net/11858%2F00-001M-0000-0014-7B02-C
84 http://pubman.mpdl.mpg.de/pubman/item/escidoc:1834997:3/component/escidoc:2463941/MPI-I-94-104.pdf
85 https://en.wikipedia.org/wiki/Max_Planck_Institute_for_Informatics
86 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
87 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
88 http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/PriorityQueues.pdf#page=16
89 http://people.mpi-inf.mpg.de/~mehlhorn/Toolbox.html
90 https://en.wikipedia.org/wiki/ISBN_(identifier)
91 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-77977-3
92 http://cgm.cs.mcgill.ca/~breed/2016COMP610/BUILDINGHEAPSFAST.pdf
93 https://en.wikipedia.org/wiki/Doi_(identifier)
94 https://doi.org/10.1016%2F0196-6774%2889%2990033-3
95 https://en.wikipedia.org/wiki/Bernard_Moret
96 https://en.wikipedia.org/wiki/ISBN_(identifier)
97 https://en.wikipedia.org/wiki/Special:BookSources/0-8053-8008-6
13. Carlsson, Svante (March 1987). "A variant of heapsort with almost optimal number of comparisons"98 (PDF). Information Processing Letters. 24 (4): 247–250. doi99:10.1016/0020-0190(87)90142-6100.
14. Wegener, Ingo101 (March 1992). "The worst case complexity of McDiarmid and Reed's variant of BOTTOM-UP HEAPSORT is less than n log n + 1.1n". Information and Computation. 97 (1): 86–96. doi102:10.1016/0890-5401(92)90005-Z103.
15. "Data Structures Using Pascal", 1991, page 405,[full citation needed][author missing][ISBN missing] gives a ternary heapsort as a student exercise. "Write a sorting routine similar to the heapsort except that it uses a ternary heap."
16. Dijkstra, Edsger W.107 Smoothsort – an alternative to sorting in situ (EWD-796a)108 (PDF). E.W. Dijkstra Archive. Center for American History, University of Texas at Austin109. (transcription110)
17. Levcopoulos, Christos; Petersson, Ola (1989), "Heapsort—Adapted for Presorted Files", WADS '89: Proceedings of the Workshop on Algorithms and Data Structures, Lecture Notes in Computer Science, 382, London, UK: Springer-Verlag, pp. 499–509, doi111:10.1007/3-540-51542-9_41112, ISBN113 978-3-540-51542-5114. Heapsort—Adapted for presorted files (Q56049336)115.
18. Schwarz, Keith (27 December 2010). "Cartesian tree sort"116. Archive of Interesting Code. Retrieved 5 March 2019.
19. Katajainen, Jyrki (23 September 2013). Seeking for the best priority queue: Lessons learnt117. Algorithm Engineering (Seminar 13391). Dagstuhl. pp. 19–20, 24.
20. Katajainen, Jyrki (2–3 February 1998). The Ultimate Heapsort118. Computing: the 4th Australasian Theory Symposium. Australian Computer Science Communications. 20 (3). Perth. pp. 87–96.
21. Linux kernel source119
98 https://pdfs.semanticscholar.org/caec/6682ffd13c6367a8c51b566e2420246faca2.pdf
99 https://en.wikipedia.org/wiki/Doi_(identifier)
100 https://doi.org/10.1016%2F0020-0190%2887%2990142-6
101 https://en.wikipedia.org/wiki/Ingo_Wegener
102 https://en.wikipedia.org/wiki/Doi_(identifier)
103 https://doi.org/10.1016%2F0890-5401%2892%2990005-Z
107 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
108 http://www.cs.utexas.edu/users/EWD/ewd07xx/EWD796a.PDF
109 https://en.wikipedia.org/wiki/University_of_Texas_at_Austin
110 http://www.cs.utexas.edu/users/EWD/transcriptions/EWD07xx/EWD796a.html
111 https://en.wikipedia.org/wiki/Doi_(identifier)
112 https://doi.org/10.1007%2F3-540-51542-9_41
113 https://en.wikipedia.org/wiki/ISBN_(identifier)
114 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-51542-5
115 https://www.wikidata.org/wiki/Special:EntityPage/Q56049336
116 http://www.keithschwarz.com/interesting/code/?dir=cartesian-tree-sort
117 http://hjemmesider.diku.dk/~jyrki/Myris/Kat2013-09-23P.html
118 http://hjemmesider.diku.dk/~jyrki/Myris/Kat1998C.html
119 https://github.com/torvalds/linux/blob/master/lib/sort.c
8.7 References
• Williams, J. W. J.120 (1964), "Algorithm 232 – Heapsort", Communications of the ACM121, 7 (6): 347–348, doi122:10.1145/512274.512284123
• Floyd, Robert W.124 (1964), "Algorithm 245 – Treesort 3", Communications of the ACM125, 7 (12): 701, doi126:10.1145/355588.365103127
• Carlsson, Svante128 (1987), "Average-case results on heapsort", BIT, 27 (1): 2–17, doi129:10.1007/bf01937350130
• Knuth, Donald131 (1997), "§5.2.3, Sorting by Selection", Sorting and Searching, The Art of Computer Programming132, 3 (third ed.), Addison-Wesley, pp. 144–155, ISBN133 978-0-201-89685-5134
• Thomas H. Cormen135 , Charles E. Leiserson136 , Ronald L. Rivest137 , and Clifford Stein138 .
Introduction to Algorithms139 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN140 0-262-03293-7141. Chapters 6 and 7, respectively: Heapsort and Priority Queues
• A PDF of Dijkstra's original paper on Smoothsort142
• Heaps and Heapsort Tutorial143 by David Carlson, St. Vincent College
The Wikibook Algorithm implementation144 has a page on the topic of: Heap-
sort145
120 https://en.wikipedia.org/wiki/J._W._J._Williams
121 https://en.wikipedia.org/wiki/Communications_of_the_ACM
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.1145%2F512274.512284
124 https://en.wikipedia.org/wiki/Robert_W._Floyd
125 https://en.wikipedia.org/wiki/Communications_of_the_ACM
126 https://en.wikipedia.org/wiki/Doi_(identifier)
127 https://doi.org/10.1145%2F355588.365103
128 https://sv.wikipedia.org/wiki/Svante_Carlsson
129 https://en.wikipedia.org/wiki/Doi_(identifier)
130 https://doi.org/10.1007%2Fbf01937350
131 https://en.wikipedia.org/wiki/Donald_Knuth
132 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
133 https://en.wikipedia.org/wiki/ISBN_(identifier)
134 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89685-5
135 https://en.wikipedia.org/wiki/Thomas_H._Cormen
136 https://en.wikipedia.org/wiki/Charles_E._Leiserson
137 https://en.wikipedia.org/wiki/Ronald_L._Rivest
138 https://en.wikipedia.org/wiki/Clifford_Stein
139 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
140 https://en.wikipedia.org/wiki/ISBN_(identifier)
141 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
142 http://www.cs.utexas.edu/users/EWD/ewd07xx/EWD796a.PDF
143 http://cis.stvincent.edu/html/tutorials/swd/heaps/heaps.html
144 https://en.wikibooks.org/wiki/Algorithm_implementation
145 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Heapsort
8.8 External links
Sorting algorithms
https://web.archive.org/web/20150306071556/http://www.sorting-algorithms.com/heap-
146
sort
147 https://en.wikipedia.org/wiki/Wayback_Machine
https://web.archive.org/web/20130326084250/http://olli.informatik.uni-oldenburg.de/
148
heapsort_SALA/english/start.html
149 https://xlinux.nist.gov/dads/HTML/heapSort.html
150 http://www.codecodex.com/wiki/Heapsort
151 http://www.azillionmonkeys.com/qed/sort.html
152 http://employees.oneonta.edu/zhangs/powerPointPlatform/index.php
http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_
153
Sorti.html#SECTION001413000000000000000
154 https://en.wikipedia.org/wiki/Pat_Morin
9 Bubble sort
Bubble sort
Static visualization of bubble sort[1]
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n2) comparisons, O(n2) swaps
Best-case performance: O(n) comparisons, O(1) swaps
Average performance: O(n2) comparisons, O(n2) swaps
Worst-case space complexity: O(n) total, O(1) auxiliary
Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm11 that
repeatedly steps through the list, compares adjacent elements and swaps12 them if they
are in the wrong order. The pass through the list is repeated until the list is sorted. The
1 https://en.wikipedia.org/wiki/Wikipedia:Verifiability
2 https://en.wikipedia.org/w/index.php?title=Bubble_sort&action=edit
3 https://en.wikipedia.org/wiki/Help:Introduction_to_referencing_with_Wiki_Markup/1
4 http://www.google.com/search?as_eq=wikipedia&q=%22Bubble+sort%22
5 http://www.google.com/search?tbm=nws&q=%22Bubble+sort%22+-wikipedia
http://www.google.com/search?&q=%22Bubble+sort%22+site:news.google.com/newspapers&
6
source=newspapers
7 http://www.google.com/search?tbs=bks:1&q=%22Bubble+sort%22+-wikipedia
8 http://scholar.google.com/scholar?q=%22Bubble+sort%22
9 https://www.jstor.org/action/doBasicSearch?Query=%22Bubble+sort%22&acc=on&wc=on
10 https://en.wikipedia.org/wiki/Help:Maintenance_template_removal
11 https://en.wikipedia.org/wiki/Sorting_algorithm
12 https://en.wikipedia.org/wiki/Swap_(computer_science)
algorithm, which is a comparison sort13 , is named for the way smaller or larger elements
”bubble” to the top of the list.
This simple algorithm performs poorly in real-world use and is used primarily as an educational tool. More efficient algorithms such as timsort14 or merge sort15 are used by the sorting libraries built into popular programming languages such as Python and Java.[2][3]
9.1 Analysis
Figure 27 An example of bubble sort. Starting from the beginning of the list, compare every adjacent pair and swap their positions if they are not in the right order (the latter one is smaller than the former one). After each pass, one fewer element (the last one) needs to be compared, until there are no more elements left to compare.
9.1.1 Performance
Bubble sort has a worst-case and average complexity of O16(n2), where n is the number of items being sorted. Most practical sorting algorithms have substantially better worst-case or average complexity, often O(n log n). Even other O(n2) sorting algorithms, such as
13 https://en.wikipedia.org/wiki/Comparison_sort
14 https://en.wikipedia.org/wiki/Timsort
15 https://en.wikipedia.org/wiki/Merge_sort
16 https://en.wikipedia.org/wiki/Big_o_notation
insertion sort17 , generally run faster than bubble sort, and are no more complex. Therefore,
bubble sort is not a practical sorting algorithm.
The only significant advantage that bubble sort has over most other algorithms, even quicksort18 (but not insertion sort19), is that the ability to detect that the list is already sorted is efficiently built into the algorithm. When the list is already sorted (best case), the complexity of bubble sort is only O(n). By contrast, most other algorithms, even those with better average-case complexity20, perform their entire sorting process on the set and thus are more complex. However, not only does insertion sort21 share this advantage, but it also performs better on a list that is substantially sorted (having a small number of inversions22).
Bubble sort should be avoided for large collections, and it is especially inefficient when the collection is in reverse order.
The distance and direction that elements must move during the sort determine bubble sort's
performance because elements move in different directions at different speeds. An element
that must move toward the end of the list can move quickly because it can take part in
successive swaps. For example, the largest element in the list will win every swap, so it
moves to its sorted position on the first pass even if it starts near the beginning. On the
other hand, an element that must move toward the beginning of the list cannot move faster
than one step per pass, so elements move toward the beginning very slowly. If the smallest
element is at the end of the list, it will take n−1 passes to move it to the beginning. This
has led to these types of elements being named rabbits and turtles, respectively, after the
characters in Aesop's fable of The Tortoise and the Hare23 .
Various efforts have been made to eliminate turtles to improve upon the speed of bubble
sort. Cocktail sort24 is a bi-directional bubble sort that goes from beginning to end, and
then reverses itself, going end to beginning. It can move turtles fairly well, but it retains
O(n2 )25 worst-case complexity. Comb sort26 compares elements separated by large gaps,
and can move turtles extremely quickly before proceeding to smaller and smaller gaps to
smooth out the list. Its average speed is comparable to faster algorithms like quicksort27 .
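A small Python sketch of cocktail (shaker) sort may make the turtle argument concrete; the function name is made up and the code is only illustrative:

def cocktail_shaker_sort(a):
    # A forward pass bubbles the largest remaining item (a "rabbit") to the
    # end; a backward pass bubbles the smallest remaining item (a "turtle")
    # to the front.
    a = list(a)
    lo, hi = 0, len(a) - 1
    swapped = True
    while swapped and lo < hi:
        swapped = False
        for i in range(lo, hi):            # forward pass
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        hi -= 1
        for i in range(hi, lo, -1):        # backward pass
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        lo += 1
    return a

print(cocktail_shaker_sort([2, 3, 4, 5, 1]))   # the turtle 1 reaches the front in one round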
17 https://en.wikipedia.org/wiki/Insertion_sort
18 https://en.wikipedia.org/wiki/Quicksort
19 https://en.wikipedia.org/wiki/Insertion_sort
20 https://en.wikipedia.org/wiki/Average-case_complexity
21 https://en.wikipedia.org/wiki/Insertion_sort
22 https://en.wikipedia.org/wiki/Inversion_(discrete_mathematics)
23 https://en.wikipedia.org/wiki/The_Tortoise_and_the_Hare
24 https://en.wikipedia.org/wiki/Cocktail_sort
25 https://en.wikipedia.org/wiki/Big_O_notation
26 https://en.wikipedia.org/wiki/Comb_sort
27 https://en.wikipedia.org/wiki/Quicksort
Take an array of numbers "5 1 4 2 8", and sort the array from lowest number to greatest number using bubble sort. In each step, elements written in bold are being compared. Three passes will be required:
First Pass
( 5 1 4 2 8 ) → ( 1 5 4 2 8 ), Here, algorithm compares the first two elements, and swaps
since 5 > 1.
( 1 5 4 2 8 ) → ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) → ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) → ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5),
algorithm does not swap them.
Second Pass
( 1 4 2 5 8 ) → ( 1 4 2 5 8 )
( 1 4 2 5 8 ) → ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
Now, the array is already sorted, but the algorithm does not know if it is completed. The
algorithm needs one whole pass without any swap to know it is sorted.
Third Pass
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
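The walk-through can be reproduced with a short Python snippet (illustrative only; it performs full-length passes and stops after a pass with no swaps, exactly as described above):

def bubble_sort_demo(a):
    # Full-length passes, as in the walk-through; stop only after a pass
    # in which no swap occurred.
    a = list(a)
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(n - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        print(a)                      # state of the list after each pass
    return a

bubble_sort_demo([5, 1, 4, 2, 8])
# [1, 4, 2, 5, 8]   first pass
# [1, 2, 4, 5, 8]   second pass
# [1, 2, 4, 5, 8]   third pass: no swaps, so the algorithm knows it is done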
9.2 Implementation
28 https://en.wikipedia.org/wiki/Pseudocode
156
Implementation
The bubble sort algorithm can be optimized by observing that the n-th pass finds the n-th largest element and puts it into its final place. So, the inner loop can avoid looking at the last n − 1 items when running for the n-th time.
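A minimal Python sketch of this optimization (the function name is made up and the code is only illustrative):

def bubble_sort_optimized(a):
    # After the k-th pass the last k items are already in their final places,
    # so the inner loop can stop earlier each time.
    a = list(a)
    n = len(a)
    for k in range(n - 1):
        swapped = False
        for i in range(1, n - k):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        if not swapped:               # no swaps in a whole pass: already sorted
            break
    return a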
More generally, it can happen that more than one element is placed in its final position on a single pass. In particular, after every pass, all elements after the last swap are sorted and do not need to be checked again. This allows skipping over many elements, resulting in roughly a 50% improvement in the worst-case comparison count (though no improvement in swap counts), and adds very little complexity because the new code subsumes the "swapped" variable:
To accomplish this in pseudocode, the following can be written:
procedure bubbleSort(A : list of sortable items)
    n := length(A)
    repeat
        newn := 0
        for i := 1 to n - 1 inclusive do
            if A[i - 1] > A[i] then
                swap(A[i - 1], A[i])
                newn := i
            end if
        end for
        n := newn
    until n ≤ 1
end procedure
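A direct Python translation of this pseudocode, for readers who want to run it (the function name is made up):

def bubble_sort_last_swap(a):
    # Everything after the position of the last swap is already sorted
    # and is never revisited.
    a = list(a)
    n = len(a)
    while n > 1:
        newn = 0
        for i in range(1, n):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                newn = i
        n = newn
    return a

print(bubble_sort_last_swap([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]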
Alternate modifications, such as the cocktail shaker sort29, attempt to improve on bubble sort's performance while keeping the same idea of repeatedly comparing and swapping adjacent items.
29 https://en.wikipedia.org/wiki/Cocktail_shaker_sort
9.3 Use
Figure 28 A bubble sort, a sorting algorithm that continuously steps through a list,
swapping items until they appear in the correct order. The list was plotted in a Cartesian
coordinate system, with each point (x, y) indicating that the value y is stored at index x.
Then the list would be sorted by bubble sort according to every pixel's value. Note that
the largest end gets sorted first, with smaller elements taking longer to move to their
correct positions.
Although bubble sort is one of the simplest sorting algorithms to understand and implement,
its O(n2 )30 complexity means that its efficiency decreases dramatically on lists of more than
a small number of elements. Even among simple O(n2 ) sorting algorithms, algorithms like
insertion sort31 are usually considerably more efficient.
Due to its simplicity, bubble sort is often used to introduce the concept of an algorithm, or a
sorting algorithm, to introductory computer science32 students. However, some researchers
30 https://en.wikipedia.org/wiki/Big_O_notation
31 https://en.wikipedia.org/wiki/Insertion_sort
32 https://en.wikipedia.org/wiki/Computer_science
such as Owen Astrachan33 have gone to great lengths to disparage bubble sort and its
continued popularity in computer science education, recommending that it no longer even
be taught.[4]
The Jargon File34 , which famously calls bogosort35 ”the archetypical [sic] perversely awful
algorithm”, also calls bubble sort ”the generic bad algorithm”.[5] Donald Knuth36 , in The
Art of Computer Programming37 , concluded that ”the bubble sort seems to have nothing to
recommend it, except a catchy name and the fact that it leads to some interesting theoretical
problems”, some of which he then discusses.[6]
Bubble sort is asymptotically38 equivalent in running time to insertion sort in the worst
case, but the two algorithms differ greatly in the number of swaps necessary. Experimental
results such as those of Astrachan have also shown that insertion sort performs considerably
better even on random lists. For these reasons many modern algorithm textbooks avoid
using the bubble sort algorithm in favor of insertion sort.
Bubble sort also interacts poorly with modern CPU hardware. It produces at least twice as many writes as insertion sort, twice as many cache misses, and asymptotically more branch mispredictions39.[citation needed] Experiments by Astrachan sorting strings in Java41 show bubble sort to be roughly one-fifth as fast as an insertion sort and 70% as fast as a selection sort42.[4]
In computer graphics, bubble sort is popular for its capability to detect a very small error (like a swap of just two elements) in almost-sorted arrays and fix it with just linear complexity (2n).
For example, it is used in a polygon filling algorithm, where bounding lines are sorted by
their x coordinate at a specific scan line (a line parallel to the x axis) and with incrementing
y their order changes (two elements are swapped) only at intersections of two lines. Bubble
sort is a stable sort algorithm, like insertion sort.
9.4 Variations
• Odd–even sort43 is a parallel version of bubble sort, for message passing systems.
• Passes can be from right to left, rather than left to right. This is more efficient for lists
with unsorted items added to the end.
• Cocktail shaker sort44 alternates leftwards and rightwards passes.
33 https://en.wikipedia.org/wiki/Owen_Astrachan
34 https://en.wikipedia.org/wiki/Jargon_File
35 https://en.wikipedia.org/wiki/Bogosort
36 https://en.wikipedia.org/wiki/Donald_Knuth
37 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
38 https://en.wikipedia.org/wiki/Big_O_notation
39 h