Algorithms Wikipedia PDF
Uploaded by Aayush Borkar · 2,118 pages
© All Rights Reserved

Book:Algorithms

en.wikipedia.org
May 7, 2020

On 28 April 2012, the contents of the English and German Wikibooks and Wikipedia projects were licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license. A URI for this license is given in the list of figures on page 2055. If this document is a derived work from the contents of one of these projects, and the content was still licensed by the project under this license at the time of derivation, then this document has to be licensed under the same, a similar, or a compatible license, as stated in section 4b of the license. The list of contributors is included in the chapter Contributors on page 1669. The GPL, LGPL, and GFDL licenses are included in the chapter Licenses on page 2085, since this book and/or parts of it may be licensed under one or more of these licenses, which then require inclusion. The licenses of the figures are given in the list of figures on page 2055. This PDF was generated by the LaTeX typesetting software. The LaTeX source code is included as an attachment (source.7z.txt) in this PDF file. To extract the source from the PDF file, you can use the pdfdetach tool included in the poppler suite, or the http://www.pdflabs.com/tools/pdftk-the-pdf-toolkit/ utility. Some PDF viewers may also let you save the attachment to a file. After extracting it from the PDF file, you have to rename it to source.7z. To uncompress the resulting archive, we recommend the use of http://www.7-zip.org/. The LaTeX source itself was generated by a program written by Dirk Hünniger, which is freely available under an open source license from http://de.wikibooks.org/wiki/Benutzer:Dirk_Huenniger/wb2pdf.
Contents

1 Sorting algorithm 3
1.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Comparison of algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Popular sorting algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5 Memory usage patterns and index sorting . . . . . . . . . . . . . . . . . 22
1.6 Related algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.9 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2 Comparison sort 31
2.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2 Performance limits and advantages of different sorting techniques . . . . 33
2.3 Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4 Number of comparisons required to sort a list . . . . . . . . . . . . . . . 35
2.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

3 Selection sort 41
3.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.2 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.4 Comparison to other sorting algorithms . . . . . . . . . . . . . . . . . . 45
3.5 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

4 Insertion sort 51
4.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 Best, worst, and average cases . . . . . . . . . . . . . . . . . . . . . . . 55
4.3 Relation to other sorting algorithms . . . . . . . . . . . . . . . . . . . . 55
4.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.6 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60


5 Merge sort 63
5.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.2 Natural merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.5 Use with tape drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.6 Optimizing merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.7 Parallel merge sort . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.8 Comparison with other sort algorithms . . . . . . . . . . . . . . . . . . 81
5.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85


7 Quicksort 111
7.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.3 Formal analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
7.4 Relation to other algorithms . . . . . . . . . . . . . . . . . . . . . . . . 123
7.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
7.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

8 Heapsort 135
8.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
8.3 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.4 Comparison with other sorts . . . . . . . . . . . . . . . . . . . . . . . . 142
8.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
8.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

9 Bubble sort 153


9.1 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154


9.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156


9.3 Use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
9.4 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
9.5 Debate over name . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
9.6 In popular culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
9.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
9.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
9.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162

10 Shellsort 163
10.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10.3 Gap sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
10.4 Computational complexity . . . . . . . . . . . . . . . . . . . . . . . . . 168
10.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
10.8 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
10.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

11 Integer sorting 175


11.1 General considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
11.2 Practical algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
11.3 Theoretical algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
11.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

12 Counting sort 189


12.1 Input and output assumptions . . . . . . . . . . . . . . . . . . . . . . . 190
12.2 The algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
12.3 Complexity analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
12.4 Variant algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
12.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
12.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
12.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

13 Bucket sort 195


13.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
13.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
13.3 Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
13.4 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
13.5 Comparison with other sorting algorithms . . . . . . . . . . . . . . . . . 200
13.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
13.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202

14 Radix sort 203


14.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
14.2 Digit order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
14.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205


14.4 Complexity and performance . . . . . . . . . . . . . . . . . . . . . . . . 206


14.5 Specialized variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
14.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
14.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

15 Data structure 213


15.1 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
15.2 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
15.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
15.4 Language support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
15.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
15.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
15.7 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
15.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
15.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

16 Search algorithm 223


16.1 Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
16.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
16.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
16.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230

17 Linear search 231


17.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
17.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
17.3 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
17.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
17.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234

18 Binary search algorithm 237


18.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
18.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
18.3 Binary search versus other schemes . . . . . . . . . . . . . . . . . . . . . 248
18.4 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
18.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
18.6 Implementation issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
18.7 Library support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
18.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
18.9 Notes and references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
18.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270

19 Binary search tree 271


19.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
19.2 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
19.3 Examples of applications . . . . . . . . . . . . . . . . . . . . . . . . . . 280
19.4 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
19.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283


19.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283


19.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
19.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
19.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285

20 Trie 287
20.1 History and etymology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
20.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
20.3 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
20.4 Implementation strategies . . . . . . . . . . . . . . . . . . . . . . . . . . 292
20.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
20.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
20.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299

21 Hash table 301


21.1 Hashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
21.2 Key statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
21.3 Collision resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
21.4 Dynamic resizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
21.5 Performance analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
21.6 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
21.7 Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
21.8 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
21.9 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
21.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
21.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
21.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
21.13 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329

22 Hash function 331


22.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
22.2 Hash tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
22.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
22.4 Hashing integer data types . . . . . . . . . . . . . . . . . . . . . . . . . 340
22.5 Hashing variable-length data . . . . . . . . . . . . . . . . . . . . . . . . 345
22.6 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
22.7 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
22.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
22.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
22.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
22.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350

23 Collision (computer science) 351


23.1 Computer security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
23.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
23.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353


24 Perfect hash function 355


24.1 Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
24.2 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
24.3 Space lower bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
24.4 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
24.5 Related constructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
24.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
24.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
24.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361

25 Open addressing 363


25.1 Example pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
25.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
25.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366

26 Linear probing 367


26.1 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
26.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
26.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
26.4 Choice of hash function . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
26.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
26.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374

27 Quadratic probing 379


27.1 Quadratic function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
27.2 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
27.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
27.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381

28 Double hashing 383


28.1 Selection of h₂(k) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
28.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
28.3 Enhanced double hashing . . . . . . . . . . . . . . . . . . . . . . . . . . 384
28.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
28.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
28.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386

29 Cuckoo hashing 389


29.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
29.2 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
29.3 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
29.4 Practice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
29.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
29.6 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
29.7 Comparison with related structures . . . . . . . . . . . . . . . . . . . . 395
29.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
29.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
29.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397


30 Random number generation 399


30.1 Practical applications and uses . . . . . . . . . . . . . . . . . . . . . . . 401
30.2 "True" vs. pseudo-random numbers . . . . . . . . . . . . . . . . . . . 402
30.3 Generation methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
30.4 Post-processing and statistical checks . . . . . . . . . . . . . . . . . . . 407
30.5 Other considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
30.6 Low-discrepancy sequences as an alternative . . . . . . . . . . . . . . . . 408
30.7 Activities and demonstrations . . . . . . . . . . . . . . . . . . . . . . . . 408
30.8 Backdoors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
30.9 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
30.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
30.11 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
30.12 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414

31 Pseudorandom number generator 415


31.1 Potential problems with deterministic generators . . . . . . . . . . . . . 416
31.2 Generators based on linear recurrences . . . . . . . . . . . . . . . . . . . 417
31.3 Cryptographically secure pseudorandom number generators . . . . . . . 417
31.4 BSI evaluation criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
31.5 Mathematical definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
31.6 Early approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
31.7 Non-uniform generators . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
31.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
31.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
31.10 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
31.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425

32 Linear congruential generator 427


32.1 Period length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
32.2 Parameters in common use . . . . . . . . . . . . . . . . . . . . . . . . . 430
32.3 Advantages and disadvantages . . . . . . . . . . . . . . . . . . . . . . . 432
32.4 Sample Python code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
32.5 Sample Free Pascal code . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
32.6 LCG derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
32.7 Comparison with other PRNGs . . . . . . . . . . . . . . . . . . . . . . . 437
32.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
32.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
32.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
32.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443

33 Middle-square method 445


33.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
33.2 The method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
33.3 Middle Square Weyl Sequence PRNG . . . . . . . . . . . . . . . . . . . 448
33.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
33.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450


34 Xorshift 451
34.1 Example implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 451
34.2 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
34.3 xoshiro and xoroshiro . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
34.4 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
34.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
34.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
34.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460

35 Mersenne Twister 461


35.1 Adoption in software systems . . . . . . . . . . . . . . . . . . . . . . . . 461
35.2 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
35.3 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
35.4 Alternatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
35.5 k-distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
35.6 Algorithmic detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
35.7 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
35.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470
35.9 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
35.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474

36 Cryptographically secure pseudorandom number generator 475


36.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
36.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
36.3 Entropy extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
36.4 Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
36.5 Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
36.6 NSA kleptographic backdoor in the Dual_EC_DRBG PRNG . . . . . . 482
36.7 Security flaws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
36.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
36.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486

37 Blum Blum Shub 489


37.1 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
37.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
37.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
37.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492

38 Blum–Micali algorithm 493


38.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
38.2 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494

39 Combinatorics 495
39.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
39.2 Approaches and subfields of combinatorics . . . . . . . . . . . . . . . . . 499
39.3 Related fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
39.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
39.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514


39.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516


39.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517

40 Cycle detection 519


40.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
40.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
40.3 Computer representation . . . . . . . . . . . . . . . . . . . . . . . . . . 521
40.4 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 522
40.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
40.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
40.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531

41 Stable marriage problem 533


41.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
41.2 Different stable matchings . . . . . . . . . . . . . . . . . . . . . . . . . . 534
41.3 Algorithmic solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
41.4 Rural hospitals theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
41.5 Related problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
41.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
41.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
41.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541

42 Graph theory 543


42.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
42.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
42.3 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 554
42.4 Graph drawing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
42.5 Graph-theoretic data structures . . . . . . . . . . . . . . . . . . . . . . . 557
42.6 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 558
42.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
42.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 567
42.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
42.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 572

43 Graph coloring 575


43.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
43.2 Definition and terminology . . . . . . . . . . . . . . . . . . . . . . . . . 579
43.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
43.4 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
43.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
43.6 Other colorings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
43.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 596
43.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
43.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
43.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605

44 A* search algorithm 607


44.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610

XI
Contents

44.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611


44.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
44.4 Bounded relaxation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
44.5 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
44.6 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
44.7 Relations to other algorithms . . . . . . . . . . . . . . . . . . . . . . . . 621
44.8 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 621
44.9 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
44.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
44.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
44.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
44.13 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627

45 Szemerédi regularity lemma 629


45.1 Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 630
45.2 Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
45.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
45.4 History and Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . 636
45.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
45.6 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 642

46 Alpha–beta pruning 643


46.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
46.2 Core idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645
46.3 Improvements over naive minimax . . . . . . . . . . . . . . . . . . . . . 646
46.4 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
46.5 Heuristic improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . 649
46.6 Other algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
46.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 650
46.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651
46.9 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652

47 Aperiodic graph 655


47.1 Graphs that cannot be aperiodic . . . . . . . . . . . . . . . . . . . . . . 657
47.2 Testing for aperiodicity . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
47.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658
47.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 658

48 B* 659
48.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
48.2 Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 660
48.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
48.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663

49 Barabási–Albert model 665


49.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 669
49.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
49.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 672


49.4 Limiting cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677


49.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677
49.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
49.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 678
49.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 682

50 Belief propagation 683


50.1 Description of the sum-product algorithm . . . . . . . . . . . . . . . . . 684
50.2 Exact algorithm for trees . . . . . . . . . . . . . . . . . . . . . . . . . . 685
50.3 Approximate algorithm for general graphs . . . . . . . . . . . . . . . . . 686
50.4 Related algorithm and complexity issues . . . . . . . . . . . . . . . . . . 687
50.5 Relation to free energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 687
50.6 Generalized belief propagation (GBP) . . . . . . . . . . . . . . . . . . . 688
50.7 Gaussian belief propagation (GaBP) . . . . . . . . . . . . . . . . . . . . 688
50.8 Syndrome-based BP decoding . . . . . . . . . . . . . . . . . . . . . . . . 689
50.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
50.10 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694

51 Bellman–Ford algorithm 697


51.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
51.2 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 702
51.3 Finding negative cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
51.4 Applications in routing . . . . . . . . . . . . . . . . . . . . . . . . . . . 703
51.5 Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 704
51.6 Trivia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
51.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
51.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705

52 Bidirectional search 709


52.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711
52.2 Approaches for Bidirectional Heuristic Search . . . . . . . . . . . . . . . 712
52.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 713

53 Borůvka's algorithm 715


53.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 717
53.2 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
53.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
53.4 Other algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
53.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 720

54 Bottleneck traveling salesman problem 723


54.1 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
54.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
54.3 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724
54.4 Metric approximation algorithm . . . . . . . . . . . . . . . . . . . . . . 724
54.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
54.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725


55 Breadth-first search 727


55.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
55.2 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
55.3 BFS ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
55.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733
55.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
55.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 734
55.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735

56 Bron–Kerbosch algorithm 737


56.1 Without pivoting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 737
56.2 With pivoting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
56.3 With vertex ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 738
56.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 740
56.5 Worst-case analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
56.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
56.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
56.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743

57 Centrality 745
57.1 Definition and characterization of centrality indices . . . . . . . . . . . . 748
57.2 Important limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 752
57.3 Degree centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 753
57.4 Closeness centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755
57.5 Betweenness centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 756
57.6 Eigenvector centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 758
57.7 Katz centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
57.8 PageRank centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
57.9 Percolation centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 759
57.10 Cross-clique centrality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
57.11 Freeman centralization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
57.12 Dissimilarity based centrality measures . . . . . . . . . . . . . . . . . . 762
57.13 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
57.14 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 763
57.15 Notes and references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 764
57.16 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769

58 Chaitin's algorithm 771


58.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 771

59 Christofides algorithm 773


59.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 773
59.2 Approximation ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
59.3 Lower bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 774
59.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
59.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 778
59.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779


60 Clique percolation method 781


60.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
60.2 Percolation transition in the CPM . . . . . . . . . . . . . . . . . . . . . 784
60.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
60.4 Algorithms and Software . . . . . . . . . . . . . . . . . . . . . . . . . . 784
60.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785
60.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 785

61 Closure problem 791


61.1 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 791
61.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 793
61.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 795

62 Color-coding 797
62.1 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 797
62.2 The method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 798
62.3 Derandomization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 799
62.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
62.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 800
62.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801

63 Contraction hierarchies 803


63.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 804
63.2 Customized contraction hierarchies . . . . . . . . . . . . . . . . . . . . . 807
63.3 Extensions and applications . . . . . . . . . . . . . . . . . . . . . . . . . 808
63.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 809
63.5 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 811

64 Courcelle's theorem 813


64.1 Formulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 813
64.2 Proof strategy and complexity . . . . . . . . . . . . . . . . . . . . . . . 815
64.3 Bojańczyk-Pilipczuk's theorem . . . . . . . . . . . . . . . . . . . . . . . 816
64.4 Satisfiability and Seese's theorem . . . . . . . . . . . . . . . . . . . . . . 817
64.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 817
64.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 818

65 Cuthill–McKee algorithm 825


65.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
65.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 827
65.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 828

66 D* 829
66.1 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 831
66.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 838
66.3 Variants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839
66.4 Minimum cost versus current cost . . . . . . . . . . . . . . . . . . . . . 840
66.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 840
66.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 841


67 Depth-first search 843


67.1 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 845
67.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 846
67.3 Output of a depth-first search . . . . . . . . . . . . . . . . . . . . . . . . 847
67.4 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 849
67.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 850
67.6 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
67.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851
67.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 852
67.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 853
67.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 854

68 Iterative deepening depth-first search 855


68.1 Algorithm for directed graphs . . . . . . . . . . . . . . . . . . . . . . . . 857
68.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
68.3 Asymptotic analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 858
68.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 860
68.5 Related algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
68.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 864

69 Dijkstra's algorithm 865


69.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 868
69.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 870
69.3 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 871
69.4 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 872
69.5 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 875
69.6 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 876
69.7 Related problems and algorithms . . . . . . . . . . . . . . . . . . . . . . 879
69.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 880
69.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 881
69.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884
69.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 885

70 Dijkstra–Scholten algorithm 887


70.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 887
70.2 Dijkstra–Scholten algorithm for a tree . . . . . . . . . . . . . . . . . . . 888
70.3 Dijkstra–Scholten algorithm for directed acyclic graphs . . . . . . . . . 888
70.4 Dijkstra–Scholten algorithm for cyclic directed graphs . . . . . . . . . . 888
70.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889
70.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 889

71 Dinic's algorithm 891


71.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891
71.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
71.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
71.4 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 892
71.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 893
71.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894


71.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 894


71.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895

72 Double pushout graph rewriting 897


72.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
72.2 Uses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 897
72.3 Generalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898
72.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 898

73 Dulmage–Mendelsohn decomposition 901


73.1 The coarse decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . 901
73.2 The fine decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . 902
73.3 Core . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
73.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
73.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
73.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 904

74 Edmonds' algorithm 905


74.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 906
74.2 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
74.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
74.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 908

75 Blossom algorithm 909


75.1 Augmenting paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 909
75.2 Blossoms and contractions . . . . . . . . . . . . . . . . . . . . . . . . . . 911
75.3 Finding an augmenting path . . . . . . . . . . . . . . . . . . . . . . . . 914
75.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 918

76 Edmonds–Karp algorithm 921


76.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
76.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 921
76.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 922
76.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 924
76.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925

77 Euler tour technique 927


77.1 Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 928
77.2 Roots, advance and retreat edges . . . . . . . . . . . . . . . . . . . . . . 928
77.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
77.4 Euler tour trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 929
77.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 930

78 FKT algorithm 931


78.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
78.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
78.3 Generalizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
78.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
78.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936


78.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938

79 Flooding algorithm 939


79.1 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
79.2 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939

80 Flow network 941


80.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
80.2 Flows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 941
80.3 Intuition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 942
80.4 Concepts useful to flow problems . . . . . . . . . . . . . . . . . . . . . . 943
80.5 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
80.6 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 945
80.7 Classifying flow problems . . . . . . . . . . . . . . . . . . . . . . . . . . 945
80.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
80.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
80.10 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
80.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 949

81 Floyd–Warshall algorithm 951


81.1 History and naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
81.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 953
81.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
81.4 Behavior with negative cycles . . . . . . . . . . . . . . . . . . . . . . . . 956
81.5 Path reconstruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 956
81.6 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 957
81.7 Applications and generalizations . . . . . . . . . . . . . . . . . . . . . . 958
81.8 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 958
81.9 Comparison with other shortest path algorithms . . . . . . . . . . . . . 959
81.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 960
81.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 961

82 Force-directed graph drawing 963


82.1 Forces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
82.2 Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
82.3 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 966
82.4 Disadvantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967
82.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 968
82.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
82.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 969
82.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971
82.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 971

83 Ford–Fulkerson algorithm 973


83.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 973
83.2 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
83.3 Integral example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 975
83.4 Non-terminating example . . . . . . . . . . . . . . . . . . . . . . . . . . 977


83.5 Python implementation of Edmonds–Karp algorithm . . . . . . . . . . . 978


83.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
83.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
83.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980
83.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980

84 Fringe search 983


84.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
84.2 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 986
84.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987
84.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 987

85 Girvan–Newman algorithm 989


85.1 Edge betweenness and community structure . . . . . . . . . . . . . . . . 989
85.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990
85.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 990

86 Goal node (computer science) 991


86.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
86.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991

87 Gomory–Hu tree 993


87.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
87.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993
87.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 994
87.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
87.5 Implementations: Sequential and Parallel . . . . . . . . . . . . . . . . . 999
87.6 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
87.7 Related concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
87.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
87.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000

88 Graph bandwidth 1003


88.1 Bandwidth formulas for some graphs . . . . . . . . . . . . . . . . . . . . 1003
88.2 Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004
88.3 Computing the bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
88.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
88.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1005
88.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
88.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1007

89 Graph embedding 1009


89.1 Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1010
89.2 Combinatorial embedding . . . . . . . . . . . . . . . . . . . . . . . . . . 1010
89.3 Computational complexity . . . . . . . . . . . . . . . . . . . . . . . . . 1011
89.4 Embeddings of graphs into higher-dimensional spaces . . . . . . . . . . 1011
89.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
89.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012


90 Graph isomorphism 1015


90.1 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
90.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
90.3 Whitney theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
90.4 Recognition of graph isomorphism . . . . . . . . . . . . . . . . . . . . . 1018
90.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
90.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1019
90.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1020

91 Graph isomorphism problem 1021


91.1 State of the art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022
91.2 Solved special cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
91.3 Complexity class GI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1023
91.4 Program checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
91.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1027
91.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
91.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
91.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1031

92 Graph kernel 1041


92.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
92.2 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
92.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1043

93 Graph reduction 1045


93.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1045
93.2 Combinator graph reduction . . . . . . . . . . . . . . . . . . . . . . . . 1048
93.3 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
93.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1048
93.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
93.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049
93.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1049

94 Graph traversal 1051


94.1 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
94.2 Graph traversal algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 1053
94.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
94.4 Graph exploration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1056
94.5 Universal traversal sequences . . . . . . . . . . . . . . . . . . . . . . . . 1057
94.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1057
94.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1058

95 Hierarchical clustering of networks 1059


95.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
95.2 Weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1059
95.3 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1060
95.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1060

XX
Contents

96 Hopcroft–Karp algorithm 1061


96.1 Augmenting paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1062
96.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1063
96.3 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1064
96.4 Comparison with other bipartite matching algorithms . . . . . . . . . . 1065
96.5 Non-bipartite graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
96.6 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
96.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1068
96.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069
96.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1069

97 Iterative deepening A* 1073


97.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
97.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1075
97.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1076
97.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077
97.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1077

98 Iterative deepening depth-first search 1079


98.1 Algorithm for directed graphs . . . . . . . . . . . . . . . . . . . . . . . . 1081
98.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
98.3 Asymptotic analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1082
98.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1084
98.5 Related algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1085
98.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1088

99 Johnson's algorithm 1089


99.1 Algorithm description . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1091
99.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1091
99.3 Correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
99.4 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
99.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1093
99.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094

100 Journal of Graph Algorithms and Applications 1095


100.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096
100.2 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096

101 Jump point search 1097


101.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
101.2 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
101.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1099

102 k shortest path routing 1101


102.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101
102.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1101
102.3 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1102
102.4 Some examples and description . . . . . . . . . . . . . . . . . . . . . . . 1103
102.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105

102.6 Related problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
102.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
102.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1105
102.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107

103 Karger's algorithm 1109


103.1 The global minimum cut problem . . . . . . . . . . . . . . . . . . . . . 1110
103.2 Contraction algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1111
103.3 Karger–Stein algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115
103.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1117

104 Knight's tour 1119


104.1 Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1121
104.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1122
104.3 Existence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1125
104.4 Number of tours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126
104.5 Finding tours with computers . . . . . . . . . . . . . . . . . . . . . . . . 1126
104.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1131
104.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1131
104.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1134

105 Kosaraju's algorithm 1135


105.1 The algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1135
105.2 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136
105.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1137
105.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1137

106 Kruskal's algorithm 1139


106.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1141
106.2 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1142
106.3 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1143
106.4 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1143
106.5 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
106.6 Parallel algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
106.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
106.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
106.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1150

107 Lexicographic breadth-first search 1151


107.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1152
107.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153
107.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1154
107.4 LexBFS ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155
107.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156
107.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156

108 Longest path problem 1157


108.1 NP-hardness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1157
108.2 Acyclic graphs and critical paths . . . . . . . . . . . . . . . . . . . . . . 1158

108.3 Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1159
108.4 Parameterized complexity . . . . . . . . . . . . . . . . . . . . . . . . . . 1159
108.5 Special classes of graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
108.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1161
108.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1161
108.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1165

109 Minimax 1167


109.1 Game theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1167
109.2 Combinatorial game theory . . . . . . . . . . . . . . . . . . . . . . . . . 1170
109.3 Minimax for individual decisions . . . . . . . . . . . . . . . . . . . . . . 1175
109.4 Maximin in philosophy . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
109.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1176
109.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177
109.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1178

110 Minimum cut 1181


110.1 Without terminal nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
110.2 With terminal nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
110.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1182
110.4 Number of minimum cuts . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
110.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183
110.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183

111 Nearest neighbour algorithm 1185


111.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1185
111.2 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186
111.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1186

112 Nonblocking minimal spanning switch 1187


112.1 Background: switching topologies . . . . . . . . . . . . . . . . . . . . . . 1189
112.2 Practical implementations of switches . . . . . . . . . . . . . . . . . . . 1192
112.3 Digital switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1193
112.4 Example of rerouting a switch . . . . . . . . . . . . . . . . . . . . . . . 1195
112.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1196
112.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197

113 Path-based strong component algorithm 1199


113.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1199
113.2 Related algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
113.3 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200
113.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1200

114 Prim's algorithm 1203


114.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
114.2 Time complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
114.3 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
114.4 Parallel algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
114.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212

114.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
114.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214

115 Proof-number search 1217


115.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218
115.2 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1218

116 Push–relabel maximum flow algorithm 1219


116.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219
116.2 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1220
116.3 The generic push–relabel algorithm . . . . . . . . . . . . . . . . . . . . . 1222
116.4 Practical implementations . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
116.5 Sample implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1229
116.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232

117 Reverse-delete algorithm 1235


117.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1235
117.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1236
117.3 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
117.4 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
117.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
117.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240

118 Sethi–Ullman algorithm 1241


118.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
118.2 Simple Sethi–Ullman algorithm . . . . . . . . . . . . . . . . . . . . . . . 1242
118.3 Advanced Sethi–Ullman algorithm . . . . . . . . . . . . . . . . . . . . . 1243
118.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1243
118.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244
118.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1244

119 Shortest Path Faster Algorithm 1245


119.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1246
119.2 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1248
119.3 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
119.4 Optimization techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249
119.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1249

120 Shortest path problem 1251


120.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1252
120.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1253
120.3 Single-source shortest paths . . . . . . . . . . . . . . . . . . . . . . . . . 1253
120.4 All-pairs shortest paths . . . . . . . . . . . . . . . . . . . . . . . . . . . 1256
120.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1257
120.6 Related problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1258
120.7 Linear programming formulation . . . . . . . . . . . . . . . . . . . . . . 1260
120.8 General algebraic framework on semirings: the algebraic path problem . 1260
120.9 Shortest path in stochastic time-dependent networks . . . . . . . . . . . 1261
120.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262

120.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1262
120.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1268

121 SMA* 1271


121.1 Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273
121.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1273
121.3 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274
121.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1274

122 Spectral layout 1275


122.1 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1275

123 Strongly connected component 1277


123.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1278
123.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279
123.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
123.4 Related results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1281
123.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
123.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1282
123.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1284

124 Subgraph isomorphism problem 1285


124.1 Decision problem and computational complexity . . . . . . . . . . . . . 1285
124.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1286
124.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
124.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1287
124.5 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1288
124.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1289

125 Suurballe's algorithm 1293


125.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293
125.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1294
125.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1294
125.4 Correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1295
125.5 Analysis and running time . . . . . . . . . . . . . . . . . . . . . . . . . 1296
125.6 Variations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1296
125.7 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1296
125.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1297

126 Tarjan's off-line lowest common ancestors algorithm 1299


126.1 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1299
126.2 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1300

127 Tarjan's strongly connected components algorithm 1301


127.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1301
127.2 The algorithm in pseudocode . . . . . . . . . . . . . . . . . . . . . . . . 1302
127.3 Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
127.4 Additional remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304
127.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1304

128 Topological sorting 1307


128.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1307
128.2 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1308
128.3 Application to shortest path finding . . . . . . . . . . . . . . . . . . . . 1313
128.4 Uniqueness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1313
128.5 Relation to partial orders . . . . . . . . . . . . . . . . . . . . . . . . . . 1314
128.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1314
128.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1315
128.8 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316
128.9 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316

129 Transitive closure 1317


129.1 Transitive relations and examples . . . . . . . . . . . . . . . . . . . . . . 1317
129.2 Existence and description . . . . . . . . . . . . . . . . . . . . . . . . . . 1318
129.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
129.4 In graph theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1319
129.5 In logic and computational complexity . . . . . . . . . . . . . . . . . . . 1320
129.6 In database query languages . . . . . . . . . . . . . . . . . . . . . . . . 1321
129.7 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
129.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1321
129.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1322
129.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1323

130 Transitive reduction 1325


130.1 In acyclic directed graphs . . . . . . . . . . . . . . . . . . . . . . . . . . 1325
130.2 In graphs with cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1327
130.3 Computational complexity . . . . . . . . . . . . . . . . . . . . . . . . . 1327
130.4 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1329
130.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1329
130.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1330

131 Travelling salesman problem 1331


131.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1333
131.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1336
131.3 Integer linear programming formulations . . . . . . . . . . . . . . . . . . 1338
131.4 Computing a solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340
131.5 Special cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1350
131.6 Computational complexity . . . . . . . . . . . . . . . . . . . . . . . . . 1355
131.7 Human and animal performance . . . . . . . . . . . . . . . . . . . . . . 1356
131.8 Natural computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1356
131.9 Benchmarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
131.10 Popular culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
131.11 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
131.12 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1357
131.13 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1365
131.14 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1368
131.15 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1371

132 Tree traversal 1373


132.1 Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1375
132.2 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1380
132.3 Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1381
132.4 Infinite trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1384
132.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1385
132.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1386

133 Dijkstra's algorithm 1387


133.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1390
133.2 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1392
133.3 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1393
133.4 Pseudocode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1394
133.5 Proof of correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1397
133.6 Running time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1398
133.7 Related problems and algorithms . . . . . . . . . . . . . . . . . . . . . . 1401
133.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1402
133.9 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1403
133.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1406
133.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1407

134 Widest path problem 1409


134.1 Undirected graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1410
134.2 Directed graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1412
134.3 Euclidean point sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1415
134.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1416

135 Yen's algorithm 1421


135.1 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1423
135.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
135.3 Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426
135.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
135.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1427
135.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1428

136 Hungarian algorithm 1429


136.1 The problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1429
136.2 The algorithm in terms of bipartite graphs . . . . . . . . . . . . . . . . 1431
136.3 Matrix interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1432
136.4 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
136.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1435
136.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1436

137 Prüfer sequence 1439


137.1 Algorithm to convert a tree into a Prüfer sequence . . . . . . . . . . . . 1439
137.2 Algorithm to convert a Prüfer sequence into a tree . . . . . . . . . . . . 1441
137.3 Cayley's formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1441
137.4 Other applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442

137.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1442
137.6 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1443

138 Graph drawing 1445


138.1 Graphical conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1447
138.2 Quality measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448
138.3 Layout methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1451
138.4 Application-specific graph drawings . . . . . . . . . . . . . . . . . . . . 1454
138.5 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1455
138.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1456
138.7 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1463

139 Analysis of algorithms 1465


139.1 Cost models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1468
139.2 Run-time analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1469
139.3 Relevance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1474
139.4 Constant factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
139.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1475
139.6 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1476
139.7 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1477
139.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1478

140 Time complexity 1479


140.1 Table of common time complexities . . . . . . . . . . . . . . . . . . . . 1480
140.2 Constant time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
140.3 Logarithmic time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1482
140.4 Polylogarithmic time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
140.5 Sub-linear time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483
140.6 Linear time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
140.7 Quasilinear time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1484
140.8 Sub-quadratic time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1485
140.9 Polynomial time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1486
140.10 Superpolynomial time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1488
140.11 Quasi-polynomial time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1488
140.12 Sub-exponential time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1490
140.13 Exponential time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1491
140.14 Factorial time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
140.15 Double exponential time . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
140.16 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1492
140.17 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1493

141 Space complexity 1497


141.1 Space complexity classes . . . . . . . . . . . . . . . . . . . . . . . . . . . 1497
141.2 Relationships between classes . . . . . . . . . . . . . . . . . . . . . . . . 1498
141.3 LOGSPACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1498
141.4 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1499

142 Big O notation 1501


142.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1503
142.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1504
142.3 Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1505
142.4 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1507
142.5 Multiple variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1509
142.6 Matters of notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1509
142.7 Orders of common functions . . . . . . . . . . . . . . . . . . . . . . . . 1511
142.8 Related asymptotic notations . . . . . . . . . . . . . . . . . . . . . . . . 1513
142.9 Generalizations and related usages . . . . . . . . . . . . . . . . . . . . . 1518
142.10 History (Bachmann–Landau, Hardy, and Vinogradov notations) . . . . 1519
142.11 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1520
142.12 References and notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1520
142.13 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1523
142.14 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1524

143 Master theorem 1527

144 Best, worst and average case 1529


144.1 Best case performance for algorithm . . . . . . . . . . . . . . . . . . . . 1530
144.2 Worst-case versus average-case performance . . . . . . . . . . . . . . . . 1530
144.3 Practical consequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1532
144.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1532
144.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1534
144.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1535

145 Amortized analysis 1537


145.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1537
145.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1537
145.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1539
145.4 Common use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1540
145.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1541

146 Computational complexity theory 1543


146.1 Computational problems . . . . . . . . . . . . . . . . . . . . . . . . . . . 1544
146.2 Machine models and complexity measures . . . . . . . . . . . . . . . . . 1548
146.3 Complexity classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552
146.4 Important open problems . . . . . . . . . . . . . . . . . . . . . . . . . . 1557
146.5 Intractability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1559
146.6 Continuous complexity theory . . . . . . . . . . . . . . . . . . . . . . . . 1561
146.7 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1561
146.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1562
146.9 Works on Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1563
146.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1563
146.11 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1568

147 Complexity class 1571


147.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1573


147.2 Common complexity classes . . . . . . . . . . . . . . . . . . . . . . . . . 1575


147.3 Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1584
147.4 Closure properties of classes . . . . . . . . . . . . . . . . . . . . . . . . . 1585
147.5 Relationships between complexity classes . . . . . . . . . . . . . . . . . 1585
147.6 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1588
147.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1588
147.8 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1589
147.9 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1589

148 P (complexity) 1591


148.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1591
148.2 Notable problems in P . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1592
148.3 Relationships to other classes . . . . . . . . . . . . . . . . . . . . . . . . 1592
148.4 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1593
148.5 Pure existence proofs of polynomial-time algorithms . . . . . . . . . . . 1594
148.6 Alternative characterizations . . . . . . . . . . . . . . . . . . . . . . . . 1594
148.7 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1594
148.8 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1595
148.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1596
148.10 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1597

149 NP (complexity) 1599


149.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1600
149.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1601
149.3 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1602
149.4 Why some NP problems are hard to solve . . . . . . . . . . . . . . . . . 1603
149.5 Equivalence of definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 1603
149.6 Relationship to other classes . . . . . . . . . . . . . . . . . . . . . . . . 1604
149.7 Other characterizations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1604
149.8 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1605
149.9 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1606
149.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1606
149.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1606
149.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1607
149.13 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1607

150 NP-hardness 1609


150.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1610
150.2 Consequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1610
150.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1611
150.4 NP-naming convention . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1611
150.5 Application areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1612
150.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1613

151 NP-completeness 1615


151.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1616
151.2 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1617
151.3 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1617


151.4 NP-complete problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1619


151.5 Solving NP-complete problems . . . . . . . . . . . . . . . . . . . . . . . 1621
151.6 Completeness under different types of reduction . . . . . . . . . . . . . . 1622
151.7 Naming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1623
151.8 Common misconceptions . . . . . . . . . . . . . . . . . . . . . . . . . . 1623
151.9 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1624
151.10 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1624
151.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1625
151.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1629

152 PSPACE 1631


152.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1631
152.2 Relation among other classes . . . . . . . . . . . . . . . . . . . . . . . . 1632
152.3 Closure properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
152.4 Other characterizations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1633
152.5 PSPACE-completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1634
152.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1634

153 EXPSPACE 1637


153.1 Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1637
153.2 Examples of problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
153.3 Relationship to other classes . . . . . . . . . . . . . . . . . . . . . . . . 1638
153.4 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1638
153.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1639

154 P versus NP problem 1641


154.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
154.2 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1643
154.3 Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1644
154.4 NP-completeness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1645
154.5 Harder problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1646
154.6 Problems in NP not known to be in P or NP-complete . . . . . . . . . . 1647
154.7 Does P mean ”easy”? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1649
154.8 Reasons to believe P ≠NP or P = NP . . . . . . . . . . . . . . . . . . . 1650
154.9 Consequences of solution . . . . . . . . . . . . . . . . . . . . . . . . . . 1651
154.10 Results about difficulty of proof . . . . . . . . . . . . . . . . . . . . . . 1653
154.11 Claimed solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1655
154.12 Logical characterizations . . . . . . . . . . . . . . . . . . . . . . . . . . . 1655
154.13 Polynomial-time algorithms . . . . . . . . . . . . . . . . . . . . . . . . . 1656
154.14 Formal definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1657
154.15 Popular culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1658
154.16 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1659
154.17 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1659
154.18 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1659
154.19 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1665
154.20 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1666

155 Contributors 1669


List of Figures 2055

156 Licenses 2085


156.1 GNU GENERAL PUBLIC LICENSE . . . . . . . . . . . . . . . . . . . 2085
156.2 GNU Free Documentation License . . . . . . . . . . . . . . . . . . . . . 2086
156.3 GNU Lesser General Public License . . . . . . . . . . . . . . . . . . . . 2087

1 Sorting algorithm

An algorithm that arranges lists in order


In computer science13 , a sorting algorithm is an algorithm14 that puts elements of a


list15 in a certain order16 . The most frequently used orders are numerical order17 and
lexicographical order18 . Efficient sorting19 is important for optimizing the efficiency20 of
other algorithms (such as search21 and merge22 algorithms) that require input data to be
in sorted lists. Sorting is also often useful for canonicalizing23 data and for producing
human-readable output. More formally, the output of any sorting algorithm must satisfy
two conditions:

13 https://en.wikipedia.org/wiki/Computer_science
14 https://en.wikipedia.org/wiki/Algorithm
15 https://en.wikipedia.org/wiki/List_(computing)
16 https://en.wikipedia.org/wiki/Total_order
17 https://en.wikipedia.org/wiki/Numerical_order
18 https://en.wikipedia.org/wiki/Lexicographical_order
19 https://en.wikipedia.org/wiki/Sorting
20 https://en.wikipedia.org/wiki/Algorithmic_efficiency
21 https://en.wikipedia.org/wiki/Search_algorithm
22 https://en.wikipedia.org/wiki/Merge_algorithm
23 https://en.wikipedia.org/wiki/Canonicalization


1. The output is in nondecreasing order (each element is no smaller than the previous
element according to the desired total order24 );
2. The output is a permutation25 (a reordering, yet retaining all of the original elements)
of the input.
Further, the input data is often stored in an array26 , which allows random access27 , rather
than a list, which only allows sequential access28 ; though many algorithms can be applied
to either type of data after suitable modification.
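The two conditions above can be checked mechanically. The following minimal Python sketch (the function name is illustrative, not from any library) tests a candidate output against both the nondecreasing-order condition and the permutation condition:

```python
from collections import Counter

def is_valid_sort(input_list, output_list):
    """Check the two formal conditions on a sorting algorithm's output."""
    # Condition 1: nondecreasing order under the default total order.
    nondecreasing = all(a <= b for a, b in zip(output_list, output_list[1:]))
    # Condition 2: the output is a permutation of the input
    # (same elements with the same multiplicities).
    permutation = Counter(output_list) == Counter(input_list)
    return nondecreasing and permutation

assert is_valid_sort([3, 1, 2], [1, 2, 3])
assert not is_valid_sort([3, 1, 2], [1, 2])     # drops an element
assert not is_valid_sort([3, 1, 2], [3, 2, 1])  # wrong order
```

Note that checking multiplicities with a multiset, rather than comparing sorted copies, avoids assuming a correct sort already exists.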
Sorting algorithms are often referred to as a word followed by the word ”sort”, and grammatically are used in English as noun phrases; for example, in the sentence ”it is inefficient to use insertion sort on large lists”, the phrase insertion sort refers to the insertion sort29 sorting algorithm.

1.1 History

From the beginning of computing, the sorting problem has attracted a great deal of research,
perhaps due to the complexity of solving it efficiently despite its simple, familiar statement.
Among the authors of early sorting algorithms around 1951 was Betty Holberton30 (née
Snyder), who worked on ENIAC31 and UNIVAC32.[1][2] Bubble sort33 was analyzed as early
as 1956.[3] Comparison sorting algorithms have a fundamental requirement of Ω(n log n)34
comparisons (some input sequences will require a multiple of n log n comparisons); algorithms
not based on comparisons, such as counting sort35, can have better performance.
Asymptotically optimal algorithms have been known since the mid-20th century—useful
new algorithms are still being invented, with the now widely used Timsort36 dating to 2002,
and the library sort37 being first published in 2006.
Sorting algorithms are prevalent in introductory computer science38 classes, where the
abundance of algorithms for the problem provides a gentle introduction to a variety of core
algorithm concepts, such as big O notation39, divide and conquer algorithms40, data structures41

24 https://en.wikipedia.org/wiki/Total_order
25 https://en.wikipedia.org/wiki/Permutation
26 https://en.wikipedia.org/wiki/Array_data_type
27 https://en.wikipedia.org/wiki/Random_access
28 https://en.wikipedia.org/wiki/Sequential_access
29 https://en.wikipedia.org/wiki/Insertion_sort
30 https://en.wikipedia.org/wiki/Betty_Holberton
31 https://en.wikipedia.org/wiki/ENIAC
32 https://en.wikipedia.org/wiki/UNIVAC
33 https://en.wikipedia.org/wiki/Bubble_sort
34 https://en.wikipedia.org/wiki/Big_omega_notation
35 https://en.wikipedia.org/wiki/Counting_sort
36 https://en.wikipedia.org/wiki/Timsort
37 https://en.wikipedia.org/wiki/Library_sort
38 https://en.wikipedia.org/wiki/Computer_science
39 https://en.wikipedia.org/wiki/Big_O_notation
40 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
41 https://en.wikipedia.org/wiki/Data_structure


such as heaps42 and binary trees43 , randomized algorithms44 , best, worst and average case45
analysis, time–space tradeoffs46 , and upper and lower bounds47 .

1.2 Classification

Sorting algorithms are often classified by:


• Computational complexity48 (worst, average and best49 behavior) in terms of the size of
the list (n). For typical serial sorting algorithms good behavior is O(n log n), with parallel
sort in O(log² n), and bad behavior is O(n²). (See Big O notation50.) Ideal behavior for
a serial sort is O(n), but this is not possible in the average case. Optimal parallel sorting
is O(log n). Comparison-based sorting algorithms51 need at least Ω(n log n) comparisons
for most inputs.
• Computational complexity52 of swaps (for ”in-place” algorithms).
• Memory53 usage (and use of other computer resources). In particular, some sorting
algorithms are ”in-place54 ”. Strictly, an in-place sort needs only O(1) memory beyond the
items being sorted; sometimes O(log(n)) additional memory is considered ”in-place”.
• Recursion. Some algorithms are either recursive or non-recursive, while others may be
both (e.g., merge sort).
• Stability: stable sorting algorithms55 maintain the relative order of records with equal
keys (i.e., values).
• Whether or not they are a comparison sort56 . A comparison sort examines the data only
by comparing two elements with a comparison operator.
• General method: insertion, exchange, selection, merging, etc. Exchange sorts include
bubble sort and quicksort. Selection sorts include shaker sort and heapsort.
• Whether the algorithm is serial or parallel. The remainder of this discussion almost
exclusively concentrates upon serial algorithms and assumes serial operation.
• Adaptability: Whether or not the presortedness of the input affects the running time.
Algorithms that take this into account are known to be adaptive57 .
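Several of these classifications can be seen in even the smallest algorithms. As an illustrative sketch (not a recommended production sort), here is a bubble sort written so that it examines the data only through a two-argument comparison operator, which is exactly what makes it a comparison sort of the exchange type:

```python
def bubble_sort(items, less_than=lambda a, b: a < b):
    # A comparison sort: the only way this code inspects elements is
    # through the two-argument comparison operator `less_than`.
    items = list(items)  # work on a copy of the caller's list
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            # Exchange sort: out-of-order neighbors are swapped.
            if less_than(items[i + 1], items[i]):
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

assert bubble_sort([3, 1, 2]) == [1, 2, 3]
```

Swapping only strictly out-of-order neighbors also makes this sketch stable, since equal elements are never exchanged.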

42 https://en.wikipedia.org/wiki/Heap_(data_structure)
43 https://en.wikipedia.org/wiki/Binary_tree
44 https://en.wikipedia.org/wiki/Randomized_algorithm
45 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
46 https://en.wikipedia.org/wiki/Time%E2%80%93space_tradeoff
47 https://en.wikipedia.org/wiki/Upper_and_lower_bounds
48 https://en.wikipedia.org/wiki/Computational_complexity_theory
49 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
50 https://en.wikipedia.org/wiki/Big_O_notation
51 https://en.wikipedia.org/wiki/Comparison_sort
52 https://en.wikipedia.org/wiki/Computational_complexity_theory
53 https://en.wikipedia.org/wiki/Memory_(computing)
54 https://en.wikipedia.org/wiki/In-place_algorithm
55 #Stability
56 https://en.wikipedia.org/wiki/Comparison_sort
57 https://en.wikipedia.org/wiki/Adaptive_sort


1.2.1 Stability

Figure 2 An example of stable sort on playing cards. When the cards are sorted by
rank with a stable sort, the two 5s must remain in the same order in the sorted output
that they were originally in. When they are sorted with a non-stable sort, the 5s may end
up in the opposite order in the sorted output.

Stable sort algorithms sort repeated elements in the same order that they appear in the
input. When sorting some kinds of data, only part of the data is examined when determining
the sort order. For example, in the card sorting example to the right, the cards are being
sorted by their rank, and their suit is being ignored. This allows the possibility of multiple
different correctly sorted versions of the original list. Stable sorting algorithms choose one


of these, according to the following rule: if two items compare as equal, like the two 5 cards,
then their relative order will be preserved, so that if one came before the other in the input,
it will also come before the other in the output.
Stability is important for the following reason: say that student records consisting of name
and class section are sorted dynamically on a web page, first by name, then by class section
in a second operation. If a stable sorting algorithm is used in both cases, the sort-by-
class-section operation will not change the name order; with an unstable sort, it could be
that sorting by section shuffles the name order. Using a stable sort, users can choose to
sort by section and then by name, by first sorting using name and then sort again using
section, resulting in the name order being preserved. (Some spreadsheet programs obey
this behavior: sorting by name, then by section yields an alphabetical list of students by
section.)
More formally, the data being sorted can be represented as a record or tuple of values, and
the part of the data that is used for sorting is called the key. In the card example, cards are
represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable
if whenever there are two records R and S with the same key, and R appears before S in
the original list, then R will always appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, or more generally, any
data where the entire element is the key, stability is not an issue. Stability is also not an
issue if all keys are different.
Unstable sorting algorithms can be specially implemented to be stable. One way of doing
this is to artificially extend the key comparison, so that comparisons between two objects
with otherwise equal keys are decided using the order of the entries in the original input list
as a tie-breaker. Remembering this order, however, may require additional time and space.
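A minimal Python sketch of this decoration technique (the helper name is illustrative, not from any particular library): pair each record with its position in the input, sort on the (key, index) pairs, then discard the indices.

```python
def make_stable(records, key):
    # Extend the comparison key with the record's original position.
    # The index breaks all ties, so even an unstable underlying sort
    # cannot reorder records with equal keys; the records themselves
    # are never compared, since each (key, index) pair is unique.
    decorated = [(key(r), i, r) for i, r in enumerate(records)]
    decorated.sort()  # any comparison sort could be used here
    return [r for (_, _, r) in decorated]

cards = [("5", "spades"), ("4", "hearts"), ("5", "clubs")]
by_rank = make_stable(cards, key=lambda c: c[0])
# The two 5s keep their input order: spades before clubs.
assert by_rank == [("4", "hearts"), ("5", "spades"), ("5", "clubs")]
```

The extra index is exactly the additional time and space cost mentioned above: O(n) memory for the decoration plus the work of building and stripping it.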
One application for stable sorting algorithms is sorting a list using a primary and secondary
key. For example, suppose we wish to sort a hand of cards such that the suits are in the
order clubs (♣), diamonds (♦), hearts (♥), spades (♠), and within each suit, the cards are
sorted by rank. This can be done by first sorting the cards by rank (using any sort), and
then doing a stable sort by suit:


Figure 3

Within each suit, the stable sort preserves the ordering by rank that was already done. This
idea can be extended to any number of keys and is utilised by radix sort58 . The same effect
can be achieved with an unstable sort by using a lexicographic key comparison, which, e.g.,
compares first by suit, and then compares by rank if the suits are the same.
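Both approaches (two stable passes, or one pass with a lexicographic key) can be sketched in Python, whose built-in `sorted` is guaranteed stable and so can play the role of the stable pass:

```python
cards = [(10, "hearts"), (2, "spades"), (2, "hearts"), (10, "spades")]
suit_order = {"clubs": 0, "diamonds": 1, "hearts": 2, "spades": 3}

# Two stable passes: first by the secondary key (rank), then by the
# primary key (suit). The second, stable pass preserves the rank
# order already established within each suit.
by_rank = sorted(cards, key=lambda c: c[0])
two_pass = sorted(by_rank, key=lambda c: suit_order[c[1]])

# One pass with a lexicographic key gives the same result even with
# an unstable sort, because ties are fully broken by the key itself.
one_pass = sorted(cards, key=lambda c: (suit_order[c[1]], c[0]))

assert two_pass == one_pass == [
    (2, "hearts"), (10, "hearts"), (2, "spades"), (10, "spades")]
```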

1.3 Comparison of algorithms

In this table, n is the number of records to be sorted. The columns ”Average” and ”Worst”
give the time complexity59 in each case, under the assumption that the length of each
key is constant, and that therefore all comparisons, swaps, and other needed operations can
proceed in constant time. ”Memory” denotes the amount of auxiliary storage needed beyond
that used by the list itself, under the same assumption. The run times and the memory
requirements listed below should be understood to be inside big O notation60, hence the
base of the logarithms does not matter; the notation log² n means (log n)².

58 https://en.wikipedia.org/wiki/Radix_sort
59 https://en.wikipedia.org/wiki/Time_complexity
60 https://en.wikipedia.org/wiki/Big_O_notation


1.3.1 Comparison sorts

Below is a table of comparison sorts61 . A comparison sort cannot perform better than
O(n log n).[4]
Comparison sorts62

Name | Best | Average | Worst | Memory | Stable | Method | Other notes
Quicksort63 | n log n | n log n | n² | log n | No | Partitioning | Quicksort is usually done in-place with O(log n) stack space.[5][6]
Merge sort64 | n log n | n log n | n log n | n | Yes | Merging | Highly parallelizable65 (up to O(log n) using the Three Hungarians' Algorithm).[7]
In-place merge sort66 | — | — | n log² n | 1 | Yes | Merging | Can be implemented as a stable sort based on stable in-place merging.[8]
Introsort67 | n log n | n log n | n log n | log n | No | Partitioning & Selection | Used in several STL68 implementations.
Heapsort69 | n log n | n log n | n log n | 1 | No | Selection |
Insertion sort70 | n | n² | n² | 1 | Yes | Insertion | O(n + d), in the worst case over sequences that have d inversions71.
Block sort72 | n | n log n | n log n | 1 | Yes | Insertion & Merging | Combine a block-based O(n) in-place merge algorithm[9] with a bottom-up merge sort73.
Quadsort | n | n log n | n log n | n | Yes | Merging | Uses a 4-input sorting network74.[10]
Timsort75 | n | n log n | n log n | n | Yes | Insertion & Merging | Makes n comparisons when the data is already sorted or reverse sorted.
Selection sort76 | n² | n² | n² | 1 | No | Selection | Stable with O(n) extra space or when using linked lists.[11]

61 https://en.wikipedia.org/wiki/Comparison_sort
63 https://en.wikipedia.org/wiki/Quicksort
64 https://en.wikipedia.org/wiki/Merge_sort
65 https://en.wikipedia.org/wiki/Merge_sort#Parallel_merge_sort
66 https://en.wikipedia.org/wiki/In-place_merge_sort
67 https://en.wikipedia.org/wiki/Introsort
68 https://en.wikipedia.org/wiki/Standard_Template_Library
69 https://en.wikipedia.org/wiki/Heapsort
70 https://en.wikipedia.org/wiki/Insertion_sort
71 https://en.wikipedia.org/wiki/Inversion_(discrete_mathematics)
72 https://en.wikipedia.org/wiki/Block_sort
73 https://en.wikipedia.org/wiki/Merge_sort#Bottom-up_implementation
74 https://en.wikipedia.org/wiki/Sorting_network
75 https://en.wikipedia.org/wiki/Timsort
76 https://en.wikipedia.org/wiki/Selection_sort


Name | Best | Average | Worst | Memory | Stable | Method | Other notes
Cubesort77 | n | n log n | n log n | n | Yes | Insertion | Makes n comparisons when the data is already sorted or reverse sorted.
Shellsort78 | n log n | n^(4/3) | n^(3/2) | 1 | No | Insertion | Small code size.
Bubble sort79 | n | n² | n² | 1 | Yes | Exchanging | Tiny code size.
Tree sort80 | n log n | n log n | n log n (balanced) | n | Yes | Insertion | When using a self-balancing binary search tree81.
Cycle sort82 | n² | n² | n² | 1 | No | Insertion | In-place with theoretically optimal number of writes.
Library sort83 | n | n log n | n² | n | Yes | Insertion |
Patience sorting84 | n | — | n log n | n | No | Insertion & Selection | Finds all the longest increasing subsequences85 in O(n log n).
Smoothsort86 | n | n log n | n log n | 1 | No | Selection | An adaptive87 variant of heapsort based upon the Leonardo sequence88 rather than a traditional binary heap89.
Strand sort90 | n | n² | n² | n | Yes | Selection |
Tournament sort91 | n log n | n log n | n log n | n[12] | No | Selection | Variation of Heap Sort.
Cocktail shaker sort92 | n | n² | n² | 1 | Yes | Exchanging |
Comb sort93 | n log n | n² | n² | 1 | No | Exchanging | Faster than bubble sort on average.
Gnome sort94 | n | n² | n² | 1 | Yes | Exchanging | Tiny code size.
UnShuffle Sort[13] | n | kn | kn | n | No | Distribution and Merge | No exchanges are performed. The parameter k is proportional to the entropy in the input. k = 1 for ordered or reverse ordered input.
Franceschini's method[14] | — | n log n | n log n | 1 | Yes | ? |
Odd–even sort95 | n | n² | n² | 1 | Yes | Exchanging | Can be run on parallel processors easily.

77 https://en.wikipedia.org/wiki/Cubesort
78 https://en.wikipedia.org/wiki/Shellsort
79 https://en.wikipedia.org/wiki/Bubble_sort
80 https://en.wikipedia.org/wiki/Tree_sort
81 https://en.wikipedia.org/wiki/Self-balancing_binary_search_tree
82 https://en.wikipedia.org/wiki/Cycle_sort
83 https://en.wikipedia.org/wiki/Library_sort
84 https://en.wikipedia.org/wiki/Patience_sorting
85 https://en.wikipedia.org/wiki/Longest_increasing_subsequence
86 https://en.wikipedia.org/wiki/Smoothsort
87 https://en.wikipedia.org/wiki/Adaptive_sort
88 https://en.wikipedia.org/wiki/Leonardo_number
89 https://en.wikipedia.org/wiki/Binary_heap
90 https://en.wikipedia.org/wiki/Strand_sort
91 https://en.wikipedia.org/wiki/Tournament_sort
92 https://en.wikipedia.org/wiki/Cocktail_shaker_sort
93 https://en.wikipedia.org/wiki/Comb_sort
94 https://en.wikipedia.org/wiki/Gnome_sort
95 https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort


Zip sort | n log n | n log n | n log n | 1 | Yes | Merging | In-place merge algorithm, minimises data moves.[15]

1.3.2 Non-comparison sorts

The following table describes integer sorting96 algorithms and other sorting algorithms that
are not comparison sorts97. As such, they are not limited to Ω(n log n).[16] Complexities
below assume n items to be sorted, with keys of size k, digit size d, and r the range of
numbers to be sorted. Many of them are based on the assumption that the key size is large
enough that all entries have unique key values, and hence that n ≪ 2^k, where ≪ means ”much
less than”. In the unit-cost random access machine98 model, algorithms with running time
of n·(k/d), such as radix sort, still take time proportional to Θ(n log n), because n is limited to
be not more than 2^(k/d), and a larger number of elements to sort would require a bigger k in
order to store them in the memory.[17]
Non-comparison sorts

Name | Best | Average | Worst | Memory | Stable | n ≪ 2^k | Notes
Pigeonhole sort99 | — | n + 2^k | n + 2^k | 2^k | Yes | Yes |
Bucket sort100 (uniform keys) | — | n + k | n²·k | n·k | Yes | No | Assumes uniform distribution of elements from the domain in the array.[18]
Bucket sort101 (integer keys) | — | n + r | n + r | n + r | Yes | Yes | If r is O(n), then average time complexity is O(n).[19]
Counting sort102 | — | n + r | n + r | n + r | Yes | Yes | If r is O(n), then average time complexity is O(n).[18]
LSD Radix Sort103 | — | n·(k/d) | n·(k/d) | n + 2^d | Yes | No | k/d recursion levels, 2^d for count array.[18][19]

96 https://en.wikipedia.org/wiki/Integer_sorting
97 https://en.wikipedia.org/wiki/Comparison_sort
98 https://en.wikipedia.org/wiki/Random_access_machine
99 https://en.wikipedia.org/wiki/Pigeonhole_sort
100 https://en.wikipedia.org/wiki/Bucket_sort
101 https://en.wikipedia.org/wiki/Bucket_sort
102 https://en.wikipedia.org/wiki/Counting_sort
103 https://en.wikipedia.org/wiki/Radix_sort#Least_significant_digit_radix_sorts


Non-comparison sorts (continued)

Name | Best | Average | Worst | Memory | Stable | n ≪ 2^k | Notes
MSD Radix Sort104 | — | n·(k/d) | n·(k/d) | n + 2^d | Yes | No | Stable version uses an external array of size n to hold all of the bins.
MSD Radix Sort105 (in-place) | — | n·(k/1) | n·(k/1) | 2^1 | No | No | d=1 for in-place, k/1 recursion levels, no count array.
Spreadsort106 | n | n·(k/d) | n·((k/s) + d) | (k/d)·2^d | No | No | Asymptotics are based on the assumption that n ≪ 2^k, but the algorithm does not require this.
Burstsort107 | — | n·(k/d) | n·(k/d) | n·(k/d) | No | No | Has better constant factor than radix sort for sorting strings. Though relies somewhat on specifics of commonly encountered strings.
Flashsort108 | n | n + r | n² | n | No | No | Requires uniform distribution of elements from the domain in the array to run in linear time. If distribution is extremely skewed then it can go quadratic if underlying sort is quadratic (it is usually an insertion sort). In-place version is not stable.

104 https://en.wikipedia.org/wiki/Radix_sort#Most_significant_digit_radix_sorts
105 https://en.wikipedia.org/wiki/Radix_sort#Most_significant_digit_radix_sorts
106 https://en.wikipedia.org/wiki/Spreadsort
107 https://en.wikipedia.org/wiki/Burstsort
108 https://en.wikipedia.org/wiki/Flashsort


Non-comparison sorts (continued)

Name | Best | Average | Worst | Memory | Stable | n ≪ 2^k | Notes
Postman sort109 | — | n·(k/d) | n·(k/d) | n + 2^d | — | No | A variation of bucket sort, which works very similarly to MSD Radix Sort. Specific to post service needs.

Samplesort110 can be used to parallelize any of the non-comparison sorts by efficiently
distributing data into several buckets and then passing down sorting to several processors,
with no need to merge, as the buckets are already sorted between each other.

1.3.3 Others

Some algorithms are slow compared to those discussed above, such as the bogosort111 with
unbounded run time and the stooge sort112, which has O(n^2.7) run time. These sorts are
usually described for educational purposes in order to demonstrate how the run time of
algorithms is estimated. The following table describes some sorting algorithms that are
impractical for real-life use in traditional software contexts due to extremely poor
performance or specialized hardware requirements.
Name | Best | Average | Worst | Memory | Stable | Comparison | Other notes
Bead sort113 | n | S | S | n² | N/A | No | Works only with positive integers. Requires specialized hardware for it to run in guaranteed O(n) time. There is a possibility for software implementation, but running time will be O(S), where S is the sum of all integers to be sorted; in the case of small integers it can be considered to be linear.
Simple pancake sort114 | — | n | n | log n | No | Yes | Count is number of flips.
Spaghetti (Poll) sort115 | n | n | n | n² | Yes | Polling | This is a linear-time, analog algorithm for sorting a sequence of items, requiring O(n) stack space, and the sort is stable. This requires n parallel processors. See spaghetti sort#Analysis116.

109 https://en.wikipedia.org/wiki/Postman_sort
110 https://en.wikipedia.org/wiki/Samplesort
111 https://en.wikipedia.org/wiki/Bogosort
112 https://en.wikipedia.org/wiki/Stooge_sort
113 https://en.wikipedia.org/wiki/Bead_sort
114 https://en.wikipedia.org/wiki/Pancake_sorting
115 https://en.wikipedia.org/wiki/Spaghetti_sort
116 https://en.wikipedia.org/wiki/Spaghetti_sort#Analysis


Name | Best | Average | Worst | Memory | Stable | Comparison | Other notes
Sorting network117 | log² n | log² n | log² n | n log² n | Varies (stable sorting networks require more comparisons) | Yes | Order of comparisons are set in advance based on a fixed network size. Impractical for more than 32 items.[disputed118 – discuss119]
Bitonic sorter120 | log² n | log² n | log² n | n log² n | No | Yes | An effective variation of Sorting networks.
Bogosort121 | n | (n × n!) | ∞ | 1 | No | Yes | Random shuffling. Used for example purposes only, as sorting with unbounded worst case running time.
Stooge sort122 | n^(log 3 / log 1.5) | n^(log 3 / log 1.5) | n^(log 3 / log 1.5) | n | No | Yes | Slower than most of the sorting algorithms (even naive ones) with a time complexity of O(n^(log 3 / log 1.5)) = O(n^2.7095...).

Theoretical computer scientists have detailed other sorting algorithms that provide better
than O(n log n) time complexity assuming additional constraints, including:
• Thorup's algorithm, a randomized algorithm for sorting keys from a domain of finite
size, taking O(n log log n) time and O(n) space.[20]
• A randomized integer sorting123 algorithm taking O(n √(log log n)) expected time and
O(n) space.[21]

1.4 Popular sorting algorithms

While there are a large number of sorting algorithms, in practical implementations a few
algorithms predominate. Insertion sort is widely used for small data sets, while for large data
sets an asymptotically efficient sort is used, primarily heap sort, merge sort, or quicksort.
Efficient implementations generally use a hybrid algorithm124 , combining an asymptotically
efficient algorithm for the overall sort with insertion sort for small lists at the bottom
of a recursion. Highly tuned implementations use more sophisticated variants, such as
Timsort125 (merge sort, insertion sort, and additional logic), used in Android, Java, and
Python, and introsort126 (quicksort and heap sort), used (in variant forms) in some C++
sort127 implementations and in .NET.
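As an illustration of the hybrid approach, the following Python sketch combines quicksort with an insertion-sort fallback for small sublists. The cutoff of 16 and the median-of-three pivot choice are illustrative assumptions, not taken from any particular library's implementation:

```python
CUTOFF = 16  # illustrative threshold; real libraries tune this empirically

def _insertion(a, lo, hi):
    """Insertion-sort a[lo..hi] in place."""
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_sort(a, lo=0, hi=None):
    """Quicksort overall, switching to insertion sort on small sublists."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:
        _insertion(a, lo, hi)
        return a
    # Median-of-three: move the median of first/middle/last to the end as pivot,
    # which avoids the sorted-input worst case of naive pivot choices.
    mid = (lo + hi) // 2
    p = sorted([(a[lo], lo), (a[mid], mid), (a[hi], hi)])[1][1]
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo
    for j in range(lo, hi):          # Lomuto partition around the pivot value
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]        # pivot into its final position
    hybrid_sort(a, lo, i - 1)
    hybrid_sort(a, i + 1, hi)
    return a
```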

117 https://en.wikipedia.org/wiki/Sorting_network
118 https://en.wikipedia.org/wiki/Wikipedia:Disputed_statement
119 https://en.wikipedia.org/wiki/Talk:Sorting_algorithm
120 https://en.wikipedia.org/wiki/Bitonic_sorter
121 https://en.wikipedia.org/wiki/Bogosort
122 https://en.wikipedia.org/wiki/Stooge_sort
123 https://en.wikipedia.org/wiki/Integer_sorting
124 https://en.wikipedia.org/wiki/Hybrid_algorithm
125 https://en.wikipedia.org/wiki/Timsort
126 https://en.wikipedia.org/wiki/Introsort
127 https://en.wikipedia.org/wiki/Sort_(C%2B%2B)


For more restricted data, such as numbers in a fixed interval, distribution sorts128 such as
counting sort or radix sort are widely used. Bubble sort and variants are rarely used in
practice, but are commonly found in teaching and theoretical discussions.
When physically sorting objects (such as alphabetizing papers, tests or books) people intu-
itively generally use insertion sorts for small sets. For larger sets, people often first bucket,
such as by initial letter, and multiple bucketing allows practical sorting of very large sets.
Often space is relatively cheap, such as by spreading objects out on the floor or over a large
area, but operations are expensive, particularly moving an object a large distance – locality
of reference is important. Merge sorts are also practical for physical objects, particularly as
two hands can be used, one for each list to merge, while other algorithms, such as heap sort
or quick sort, are poorly suited for human use. Other algorithms, such as library sort129 , a
variant of insertion sort that leaves spaces, are also practical for physical use.

1.4.1 Simple sorts

Two of the simplest sorts are insertion sort and selection sort, both of which are efficient on
small data, due to low overhead, but not efficient on large data. Insertion sort is generally
faster than selection sort in practice, due to fewer comparisons and good performance on
almost-sorted data, and thus is preferred in practice, but selection sort uses fewer writes,
and thus is used when write performance is a limiting factor.

Insertion sort

Main article: Insertion sort130 Insertion sort131 is a simple sorting algorithm that is relatively
efficient for small lists and mostly sorted lists, and is often used as part of more sophisticated
algorithms. It works by taking elements from the list one by one and inserting them in
their correct position into a new sorted list similar to how we put money in our wallet.[22] In
arrays, the new list and the remaining elements can share the array's space, but insertion
is expensive, requiring shifting all following elements over by one. Shellsort132 (see below)
is a variant of insertion sort that is more efficient for larger lists.
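The insert-into-sorted-prefix procedure described above can be sketched in Python (an illustrative implementation):

```python
def insertion_sort(a):
    """Sort a in place; stable, O(n^2) worst case, near O(n) on almost-sorted data."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right to open a gap for key.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```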

Selection sort

Main article: Selection sort133 Selection sort is an in-place134 comparison sort135 . It has
O136 (n²) complexity, making it inefficient on large lists, and generally performs worse than

128 #Distribution_sort
129 https://en.wikipedia.org/wiki/Library_sort
130 https://en.wikipedia.org/wiki/Insertion_sort
131 https://en.wikipedia.org/wiki/Insertion_sort
132 #Shellsort
133 https://en.wikipedia.org/wiki/Selection_sort
134 https://en.wikipedia.org/wiki/In-place_algorithm
135 https://en.wikipedia.org/wiki/Comparison_sort
136 https://en.wikipedia.org/wiki/Big_O_notation


the similar insertion sort137 . Selection sort is noted for its simplicity, and also has perfor-
mance advantages over more complicated algorithms in certain situations.
The algorithm finds the minimum value, swaps it with the value in the first position, and
repeats these steps for the remainder of the list.[23] It does no more than n swaps, and thus
is useful where swapping is very expensive.
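A minimal Python sketch of the find-minimum-and-swap procedure described above, using at most n − 1 swaps:

```python
def selection_sort(a):
    """Sort a in place with at most n-1 swaps; O(n^2) comparisons."""
    n = len(a)
    for i in range(n - 1):
        m = i
        for j in range(i + 1, n):    # scan the unsorted tail for its minimum
            if a[j] < a[m]:
                m = j
        if m != i:
            a[i], a[m] = a[m], a[i]  # one swap per outer pass
    return a
```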

1.4.2 Efficient sorts

Practical general sorting algorithms are almost always based on an algorithm with average
time complexity (and generally worst-case complexity) O(n log n), of which the most com-
mon are heap sort, merge sort, and quicksort. Each has advantages and drawbacks, with
the most significant being that simple implementation of merge sort uses O(n) additional
space, and simple implementation of quicksort has O(n²) worst-case complexity. These
problems can be solved or ameliorated at the cost of a more complex algorithm.
While these algorithms are asymptotically efficient on random data, for practical efficiency
on real-world data various modifications are used. First, the overhead of these algorithms
becomes significant on smaller data, so often a hybrid algorithm is used, commonly switching
to insertion sort once the data is small enough. Second, the algorithms often perform poorly
on already sorted data or almost sorted data – these are common in real-world data, and can
be sorted in O(n) time by appropriate algorithms. Finally, they may also be unstable138 ,
and stability is often a desirable property in a sort. Thus more sophisticated algorithms
are often employed, such as Timsort139 (based on merge sort) or introsort140 (based on
quicksort, falling back to heap sort).

Merge sort

Main article: Merge sort141 Merge sort takes advantage of the ease of merging already sorted
lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then
3 with 4...) and swapping them if the first should come after the second. It then merges
each of the resulting lists of two into lists of four, then merges those lists of four, and so on;
until at last two lists are merged into the final sorted list.[24] Of the algorithms described
here, this is the first that scales well to very large lists, because its worst-case running time
is O(n log n). It is also easily applied to lists, not only arrays, as it only requires sequential
access, not random access. However, it has additional O(n) space complexity, and involves
a large number of copies in simple implementations.
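The merging scheme described above can be sketched in Python as a top-down recursive merge sort (an illustrative implementation; production variants such as Timsort are considerably more elaborate):

```python
def merge_sort(a):
    """Return a new sorted list; stable, O(n log n) worst case, O(n) extra space."""
    if len(a) <= 1:
        return a[:]
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        # <= keeps equal elements in their original order (stability).
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])   # at most one of these tails is non-empty
    out.extend(right[j:])
    return out
```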
Merge sort has seen a relatively recent surge in popularity for practical implementations,
due to its use in the sophisticated algorithm Timsort142 , which is used for the standard sort

137 https://en.wikipedia.org/wiki/Insertion_sort
138 https://en.wikipedia.org/wiki/Unstable_sort
139 https://en.wikipedia.org/wiki/Timsort
140 https://en.wikipedia.org/wiki/Introsort
141 https://en.wikipedia.org/wiki/Merge_sort
142 https://en.wikipedia.org/wiki/Timsort


routine in the programming languages Python143[25] and Java144 (as of JDK7145[26] ). Merge
sort itself is the standard routine in Perl146 ,[27] among others, and has been used in Java at
least since 2000 in JDK1.3147 .[28]

Heapsort

Main article: Heapsort148 Heapsort is a much more efficient version of selection sort149 . It
also works by determining the largest (or smallest) element of the list, placing that at the
end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this
task efficiently by using a data structure called a heap150 , a special type of binary tree151 .[29]
Once the data list has been made into a heap, the root node is guaranteed to be the largest
(or smallest) element. When it is removed and placed at the end of the list, the heap is
rearranged so the largest element remaining moves to the root. Using the heap, finding
the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple
selection sort. This allows Heapsort to run in O(n log n) time, and this is also the worst
case complexity.
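A Python sketch of the procedure described above: build a max-heap over the array, then repeatedly move the root to the end and restore the heap with a sift-down (illustrative; tuned implementations differ in constant factors):

```python
def heapsort(a):
    """In-place heapsort: O(n log n) worst case, O(1) extra space, not stable."""
    n = len(a)

    def sift_down(root, end):
        # Restore the max-heap property for the subtree at `root`,
        # considering only indices < end.
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                      # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    for i in range(n // 2 - 1, -1, -1):         # heapify, O(n) overall
        sift_down(i, n)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]             # largest element to its final slot
        sift_down(0, end)
    return a
```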

Quicksort

Main article: Quicksort152 Quicksort is a divide and conquer153 algorithm154 which relies on
a partition operation: to partition an array, an element called a pivot is selected.[30][31] All
elements smaller than the pivot are moved before it and all greater elements are moved after
it. This can be done efficiently in linear time and in-place155 . The lesser and greater sublists
are then recursively sorted. This yields average time complexity of O(n log n), with low
overhead, and thus this is a popular algorithm. Efficient implementations of quicksort (with
in-place partitioning) are typically unstable sorts and somewhat complex, but are among
the fastest sorting algorithms in practice. Together with its modest O(log n) space usage,
quicksort is one of the most popular sorting algorithms and is available in many standard
programming libraries.
The important caveat about quicksort is that its worst-case performance is O(n²); while this
is rare, in naive implementations (choosing the first or last element as pivot) this occurs
for sorted data, which is a common case. The most complex issue in quicksort is thus
choosing a good pivot element, as consistently poor choices of pivots can result in drastically
slower O(n²) performance, but good choice of pivots yields O(n log n) performance, which

143 https://en.wikipedia.org/wiki/Python_(programming_language)
144 https://en.wikipedia.org/wiki/Java_(programming_language)
145 https://en.wikipedia.org/wiki/JDK7
146 https://en.wikipedia.org/wiki/Perl
147 https://en.wikipedia.org/wiki/Java_version_history#J2SE_1.3
148 https://en.wikipedia.org/wiki/Heapsort
149 https://en.wikipedia.org/wiki/Selection_sort
150 https://en.wikipedia.org/wiki/Heap_(data_structure)
151 https://en.wikipedia.org/wiki/Binary_tree
152 https://en.wikipedia.org/wiki/Quicksort
153 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
154 https://en.wikipedia.org/wiki/Algorithm
155 https://en.wikipedia.org/wiki/In-place_algorithm


is asymptotically optimal. For example, if at each step the median156 is chosen as the
pivot then the algorithm works in O(n log n). Finding the median, such as by the median
of medians157 selection algorithm158 is however an O(n) operation on unsorted lists and
therefore exacts significant overhead with sorting. In practice choosing a random pivot
almost certainly yields O(n log n) performance.
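The partition-and-recurse scheme with a random pivot, as discussed above, can be sketched in Python (the Lomuto partition is used here for brevity; practical implementations typically use more refined partitioning):

```python
import random

def quicksort(a, lo=0, hi=None):
    """In-place quicksort with a random pivot; expected O(n log n)."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    # A random pivot makes the O(n^2) worst case vanishingly unlikely.
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]
    pivot, i = a[hi], lo
    for j in range(lo, hi):              # move smaller elements before the pivot
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]            # pivot into its final position
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
    return a
```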

Shellsort

Figure 4 A Shell sort, different from bubble sort in that it moves elements to numerous
swapping positions.

156 https://en.wikipedia.org/wiki/Median
157 https://en.wikipedia.org/wiki/Median_of_medians
158 https://en.wikipedia.org/wiki/Selection_algorithm


Main article: Shell sort159 Shellsort was invented by Donald Shell160 in 1959.[32] It improves
upon insertion sort by moving out of order elements more than one position at a time.
The concept behind Shellsort is that insertion sort performs in O(kn) time, where k is
the greatest distance between two out-of-place elements. This means that generally, they
perform in O(n²), but for data that is mostly sorted, with only a few elements out of place,
they perform faster. So, by first sorting elements far away, and progressively shrinking the
gap between the elements to sort, the final sort computes much faster. One implementation
can be described as arranging the data sequence in a two-dimensional array and then sorting
the columns of the array using insertion sort.
The worst-case time complexity of Shellsort is an open problem161 and depends on the
gap sequence used, with known complexities ranging from O(n²) to O(n^(4/3)) and Θ(n log² n).
This, combined with the fact that Shellsort is in-place162, only needs a relatively small
amount of code, and does not require use of the call stack163, makes it useful in situations
where memory is at a premium, such as in embedded systems164 and operating system
kernels165.
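The gapped insertion sort described above can be sketched in Python; the gap sequence shown is Ciura's empirically derived sequence, one common practical choice:

```python
def shellsort(a):
    """Shellsort: insertion sort over progressively shrinking gaps."""
    gaps = [701, 301, 132, 57, 23, 10, 4, 1]   # Ciura's sequence; must end in 1
    for gap in gaps:
        # Elements gap apart form interleaved sublists, each insertion-sorted.
        for i in range(gap, len(a)):
            key = a[i]
            j = i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
    return a
```

Because the final pass uses gap 1 (plain insertion sort), correctness does not depend on the earlier passes; they only speed up the final pass.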

1.4.3 Bubble sort and variants


159 https://en.wikipedia.org/wiki/Shellsort
160 https://en.wikipedia.org/wiki/Donald_Shell
161 https://en.wikipedia.org/wiki/Open_problem
162 https://en.wikipedia.org/wiki/In-place
163 https://en.wikipedia.org/wiki/Call_stack
164 https://en.wikipedia.org/wiki/Embedded_system
165 https://en.wikipedia.org/wiki/Operating_system_kernel


Bubble sort, and variants such as the shell sort178 and cocktail sort179, are simple, highly
inefficient sorting algorithms. They are frequently seen in introductory texts due to ease of
analysis, but they are rarely used in practice.

Bubble sort

Figure 6 A bubble sort, a sorting algorithm that continuously steps through a list,
swapping items until they appear in the correct order.

Main article: Bubble sort180 Bubble sort is a simple sorting algorithm. The algorithm starts
at the beginning of the data set. It compares the first two elements, and if the first is greater
than the second, it swaps them. It continues doing this for each pair of adjacent elements
to the end of the data set. It then starts again with the first two elements, repeating until
no swaps have occurred on the last pass.[33] This algorithm's average time and worst-case
performance is O(n²), so it is rarely used to sort large, unordered data sets. Bubble sort
can be used to sort a small number of items (where its asymptotic inefficiency is not a

178 https://en.wikipedia.org/wiki/Shell_sort
179 https://en.wikipedia.org/wiki/Cocktail_sort
180 https://en.wikipedia.org/wiki/Bubble_sort


high penalty). Bubble sort can also be used efficiently on a list of any length that is nearly
sorted (that is, the elements are not significantly out of place). For example, if any number
of elements are out of place by only one position (e.g. 0123546789 and 1032547698), bubble
sort's exchange will get them in order on the first pass, the second pass will find all elements
in order, so the sort will take only 2n time.[34]
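The pass structure described above can be sketched in Python, including the early exit when a full pass performs no swaps (which gives the linear behavior on nearly sorted input):

```python
def bubble_sort(a):
    """Repeatedly sweep the list, swapping adjacent out-of-order pairs."""
    n = len(a)
    while True:
        swapped = False
        for i in range(1, n):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        n -= 1              # the largest unsorted element has bubbled to the end
        if not swapped:     # a pass with no swaps means the list is sorted
            return a
```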

Comb sort

Main article: Comb sort181 Comb sort is a relatively simple sorting algorithm based on
bubble sort182 and originally designed by Włodzimierz Dobosiewicz in 1980.[35] It was later
rediscovered and popularized by Stephen Lacey and Richard Box with a Byte Magazine183
article published in April 1991. The basic idea is to eliminate turtles, or small values
near the end of the list, since in a bubble sort these slow the sorting down tremendously.
(Rabbits, large values around the beginning of the list, do not pose a problem in bubble
sort.) It accomplishes this by initially swapping elements that are a certain distance from
one another in the array, rather than only swapping elements if they are adjacent to one
another, and then shrinking the chosen distance until it is operating as a normal bubble
sort. Thus, if Shellsort can be thought of as a generalized version of insertion sort that
swaps elements spaced a certain distance away from one another, comb sort can be thought
of as the same generalization applied to bubble sort.
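The shrinking-gap idea described above can be sketched in Python; the shrink factor of 1.3 follows the commonly cited Lacey and Box recommendation:

```python
def comb_sort(a):
    """Bubble sort over a shrinking gap; eliminates 'turtles' early."""
    gap = len(a)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))   # shrink until it degrades to bubble sort
        swapped = False
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a
```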

1.4.4 Distribution sort

See also: External sorting184 Distribution sort refers to any sorting algorithm where data
is distributed from their input to multiple intermediate structures which are then gathered
and placed on the output. For example, both bucket sort185 and flashsort186 are distribution
based sorting algorithms. Distribution sorting algorithms can be used on a single processor,
or they can be a distributed algorithm187 , where individual subsets are separately sorted on
different processors, then combined. This allows external sorting188 of data too large to fit
into a single computer's memory.

Counting sort

Main article: Counting sort189 Counting sort is applicable when each input is known to
belong to a particular set, S, of possibilities. The algorithm runs in O(|S| + n) time and

181 https://en.wikipedia.org/wiki/Comb_sort
182 https://en.wikipedia.org/wiki/Bubble_sort
183 https://en.wikipedia.org/wiki/Byte_Magazine
184 https://en.wikipedia.org/wiki/External_sorting
185 https://en.wikipedia.org/wiki/Bucket_sort
186 https://en.wikipedia.org/wiki/Flashsort
187 https://en.wikipedia.org/wiki/Distributed_algorithm
188 https://en.wikipedia.org/wiki/External_sorting
189 https://en.wikipedia.org/wiki/Counting_sort


O(|S|) memory where n is the length of the input. It works by creating an integer array of
size |S| and using the ith bin to count the occurrences of the ith member of S in the input.
Each input is then counted by incrementing the value of its corresponding bin. Afterward,
the counting array is looped through to arrange all of the inputs in order. This sorting
algorithm often cannot be used because S needs to be reasonably small for the algorithm
to be efficient, but it is extremely fast and demonstrates great asymptotic behavior as
n increases. It also can be modified to provide stable behavior.

Bucket sort

Main article: Bucket sort190 Bucket sort is a divide and conquer191 sorting algorithm that
generalizes counting sort192 by partitioning an array into a finite number of buckets. Each
bucket is then sorted individually, either using a different sorting algorithm, or by recursively
applying the bucket sorting algorithm.
A bucket sort works best when the elements of the data set are evenly distributed across
all buckets.
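A Python sketch of the scatter-sort-gather scheme described above, under the assumption that keys are floats uniformly distributed in [0, 1):

```python
def bucket_sort(a, n_buckets=10):
    """Bucket sort for floats in [0, 1); linear expected time on uniform input."""
    buckets = [[] for _ in range(n_buckets)]
    for x in a:
        buckets[int(x * n_buckets)].append(x)   # scatter into buckets
    out = []
    for b in buckets:
        out.extend(sorted(b))   # sort each bucket (any algorithm works here)
    return out
```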

Radix sort

Main article: Radix sort193 Radix sort is an algorithm that sorts numbers by processing
individual digits. n numbers consisting of k digits each are sorted in O(n · k) time. Radix
sort can process digits of each number either starting from the least significant digit194 (LSD)
or starting from the most significant digit195 (MSD). The LSD algorithm first sorts the list
by the least significant digit while preserving their relative order using a stable sort. Then
it sorts them by the next digit, and so on from the least significant to the most significant,
ending up with a sorted list. While the LSD radix sort requires the use of a stable sort, the
MSD radix sort algorithm does not (unless stable sorting is desired). In-place MSD radix
sort is not stable. It is common for the counting sort196 algorithm to be used internally by
the radix sort. A hybrid197 sorting approach, such as using insertion sort198 for small bins,
improves the performance of radix sort significantly.
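An LSD radix sort over non-negative integers might be sketched as follows; each pass here is a stable scatter into per-digit buckets (standing in for the counting sort the text mentions):

```python
def radix_sort(values, base=10):
    """LSD radix sort for non-negative integers: O(n * k) for k digits.

    Each pass stably reorders by one digit, so the order established by
    earlier (less significant) passes is preserved.
    """
    if not values:
        return []
    exp = 1
    while exp <= max(values):
        buckets = [[] for _ in range(base)]
        for v in values:                          # stable scatter by current digit
            buckets[(v // exp) % base].append(v)
        values = [v for bucket in buckets for v in bucket]
        exp *= base
    return values
```

The choice of base 10 is only for readability; production implementations typically use a larger power-of-two radix.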

1.5 Memory usage patterns and index sorting

When the size of the array to be sorted approaches or exceeds the available primary mem-
ory, so that (much slower) disk or swap space must be employed, the memory usage pattern
of a sorting algorithm becomes important, and an algorithm that might have been fairly

190 https://en.wikipedia.org/wiki/Bucket_sort
191 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
192 https://en.wikipedia.org/wiki/Counting_sort
193 https://en.wikipedia.org/wiki/Radix_sort
194 https://en.wikipedia.org/wiki/Least_significant_digit
195 https://en.wikipedia.org/wiki/Most_significant_digit
196 https://en.wikipedia.org/wiki/Counting_sort
197 https://en.wikipedia.org/wiki/Hybrid_algorithm
198 https://en.wikipedia.org/wiki/Insertion_sort


efficient when the array fit easily in RAM may become impractical. In this scenario, the
total number of comparisons becomes (relatively) less important, and the number of times
sections of memory must be copied or swapped to and from the disk can dominate the per-
formance characteristics of an algorithm. Thus, the number of passes and the localization
of comparisons can be more important than the raw number of comparisons, since compar-
isons of nearby elements to one another happen at system bus199 speed (or, with caching,
even at CPU200 speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort201 algorithm provides quite reasonable per-
formance with adequate RAM, but due to the recursive way that it copies portions of the
array it becomes much less practical when the array does not fit in RAM, because it may
cause a number of slow copy or move operations to and from disk. In that scenario, another
algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a
relational database202 ) are being sorted by a relatively small key field, is to create an index
into the array and then sort the index, rather than the entire array. (A sorted version of
the entire array can then be produced with one pass, reading from the index, but often even
that is unnecessary, as having the sorted index is adequate.) Because the index is much
smaller than the entire array, it may fit easily in memory where the entire array would not,
effectively eliminating the disk-swapping problem. This procedure is sometimes called "tag
sort".[36]
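The index ("tag sort") idea can be sketched as follows; the records and key field here are purely illustrative:

```python
# Hypothetical records: (key, payload) pairs standing in for large database rows.
records = [(42, "large payload A"), (7, "large payload B"), (19, "large payload C")]

# Sort a small index of positions by the key field, not the records themselves.
index = sorted(range(len(records)), key=lambda i: records[i][0])

# The records were never moved; one optional pass reads them out in order.
sorted_view = [records[i] for i in index]
```

Only the integer index, which is far smaller than the record array, needs to fit in memory and be rearranged.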
Another technique for overcoming the memory-size problem is external sorting203 , for
example by combining two algorithms in a way that takes advantage of the strengths of
each to improve overall performance. For instance, the array might be
subdivided into chunks of a size that will fit in RAM, the contents of each chunk sorted
using an efficient algorithm (such as quicksort204 ), and the results merged using a k-way
merge similar to that used in mergesort205 . This is faster than performing either mergesort
or quicksort over the entire list.[37][38]
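A sketch of this hybrid scheme, with in-memory lists standing in for the on-disk chunks and sorted runs:

```python
import heapq

def external_sort(chunks):
    """Sort each RAM-sized chunk independently, then k-way merge the runs.

    Each chunk would be a temporary file in a real external sort; here
    plain lists stand in for them.
    """
    runs = [sorted(chunk) for chunk in chunks]   # e.g. quicksort each chunk
    return list(heapq.merge(*runs))              # k-way merge as in mergesort
```

`heapq.merge` performs the k-way merge lazily, touching each run sequentially, which is exactly the access pattern that suits slow external storage.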
Techniques can also be combined. For sorting very large sets of data that vastly exceed
system memory, even the index may need to be sorted using an algorithm or combination
of algorithms designed to perform reasonably with virtual memory206 , i.e., to reduce the
amount of swapping required.

199 https://en.wikipedia.org/wiki/Computer_bus
200 https://en.wikipedia.org/wiki/Central_Processing_Unit
201 https://en.wikipedia.org/wiki/Quicksort
202 https://en.wikipedia.org/wiki/Relational_database
203 https://en.wikipedia.org/wiki/External_sorting
204 https://en.wikipedia.org/wiki/Quicksort
205 https://en.wikipedia.org/wiki/Mergesort
206 https://en.wikipedia.org/wiki/Virtual_memory


1.6 Related algorithms

Related problems include partial sorting207 (sorting only the k smallest elements of a list, or
alternatively computing the k smallest elements, but unordered) and selection208 (computing
the kth smallest element). These can be solved inefficiently by a total sort, but more
efficient algorithms exist, often derived by generalizing a sorting algorithm. The most
notable example is quickselect209 , which is related to quicksort210 . Conversely, some sorting
algorithms can be derived by repeated application of a selection algorithm; quicksort and
quickselect can be seen as the same pivoting move, differing only in whether one recurses
on both sides (quicksort, divide and conquer211 ) or one side (quickselect, decrease and
conquer212 ).
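The shared pivoting move can be sketched as follows; this simple out-of-place variant of quickselect recurses into only the one side that contains the k-th element:

```python
def quickselect(items, k):
    """Return the k-th smallest element (k = 0 for the minimum).

    Same pivoting move as quicksort, but recursing into one side only
    (decrease and conquer), giving O(n) expected time.
    """
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    if k < len(smaller):
        return quickselect(smaller, k)
    if k < len(smaller) + len(equal):
        return pivot
    return quickselect(larger, k - len(smaller) - len(equal))
```

Recursing on both `smaller` and `larger` instead of one side would turn this into a quicksort, which is the relationship the text describes.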
A kind of opposite of a sorting algorithm is a shuffling algorithm213 . These are fundamen-
tally different because they require a source of random numbers. Shuffling can also be
implemented by a sorting algorithm, namely by a random sort: assigning a random number
to each element of the list and then sorting based on the random numbers. This is generally
not done in practice, however, and there is a well-known simple and efficient algorithm for
shuffling: the Fisher–Yates shuffle214 .
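A sketch of the Fisher–Yates shuffle:

```python
import random

def fisher_yates_shuffle(items):
    """Shuffle in place in O(n) time; every permutation is equally likely."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)            # pick from the not-yet-fixed prefix
        items[i], items[j] = items[j], items[i]
    return items
```

Unlike sorting by random keys, this needs only one pass and no comparison sort at all.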

1.7 See also


• Collation215
• Schwartzian transform216
• Search algorithm217 – any algorithm that solves the search problem
• Quantum sort218 – sorting algorithms for quantum computers

1.8 References

207 https://en.wikipedia.org/wiki/Partial_sorting
208 https://en.wikipedia.org/wiki/Selection_algorithm
209 https://en.wikipedia.org/wiki/Quickselect
210 https://en.wikipedia.org/wiki/Quicksort
211 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
212 https://en.wikipedia.org/wiki/Decrease_and_conquer
213 https://en.wikipedia.org/wiki/Shuffling_algorithm
214 https://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle
215 https://en.wikipedia.org/wiki/Collation
216 https://en.wikipedia.org/wiki/Schwartzian_transform
217 https://en.wikipedia.org/wiki/Search_algorithm
218 https://en.wikipedia.org/wiki/Quantum_sort


This article includes a list of references219 , but its sources remain unclear be-
cause it has insufficient inline citations220 . Please help to improve221 this ar-
ticle by introducing222 more precise citations. (September 2009)(Learn how and when
to remove this template message223 )

1. "Meet the 'Refrigerator Ladies' Who Programmed the ENIAC"224 . Mental
Floss. 2013-10-13. Retrieved 2016-06-16.
2. Lohr, Steve (December 17, 2001). "Frances E. Holberton, 84, Early Computer
Programmer"225 . NYT. Retrieved 16 December 2014.
3. Demuth, Howard B. (1956). Electronic Data Sorting (PhD thesis). Stanford
University. ProQuest226 301940891227 .
4. Cormen, Thomas H.228 ; Leiserson, Charles E.229 ; Rivest, Ronald L.230 ;
Stein, Clifford231 (2009), "8", Introduction To Algorithms232 (3rd ed.), Cambridge,
MA: The MIT Press, p. 167, ISBN233 978-0-262-03293-3234
5. Sedgewick, Robert235 (1 September 1998). Algorithms In C: Fundamentals,
Data Structures, Sorting, Searching, Parts 1-4236 (3rd ed.). Pearson Education.
ISBN237 978-81-317-1291-7238 . Retrieved 27 November 2012.
6. Sedgewick, R.239 (1978). "Implementing Quicksort Programs". Comm.
ACM240 . 21 (10): 847–857. doi241 :10.1145/359619.359631242 .

219 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources
220 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources#Inline_citations
221 https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Fact_and_Reference_Check
222 https://en.wikipedia.org/wiki/Wikipedia:When_to_cite
223 https://en.wikipedia.org/wiki/Help:Maintenance_template_removal
224 http://mentalfloss.com/article/53160/meet-refrigerator-ladies-who-programmed-eniac
225 https://www.nytimes.com/2001/12/17/business/frances-e-holberton-84-early-computer-programmer.html
226 https://en.wikipedia.org/wiki/ProQuest_(identifier)
227 https://search.proquest.com/docview/301940891
228 https://en.wikipedia.org/wiki/Thomas_H._Cormen
229 https://en.wikipedia.org/wiki/Charles_E._Leiserson
230 https://en.wikipedia.org/wiki/Ron_Rivest
231 https://en.wikipedia.org/wiki/Clifford_Stein
232 https://books.google.com/books?id=NLngYyWFl_YC
233 https://en.wikipedia.org/wiki/ISBN_(identifier)
234 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03293-3
235 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
236 https://books.google.com/books?id=ylAETlep0CwC
237 https://en.wikipedia.org/wiki/ISBN_(identifier)
238 https://en.wikipedia.org/wiki/Special:BookSources/978-81-317-1291-7
239 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
240 https://en.wikipedia.org/wiki/Communications_of_the_ACM
241 https://en.wikipedia.org/wiki/Doi_(identifier)
242 https://doi.org/10.1145%2F359619.359631


7. A, M.243 ; K, J.244 ; S, E.245 (1983). An O(n log n) sorting
network. STOC246 '83. Proceedings of the fifteenth annual ACM symposium on Theory
of computing. pp. 1–9. doi247 :10.1145/800061.808726248 . ISBN249 0-89791-099-0250 .
8. H, B. C.; L, M. A. (D 1992). ”F S M
 S  C E S”251 (PDF). Comput. J.252 35 (6): 643–
650. CiteSeerX253 10.1.1.54.8381254 . doi255 :10.1093/comjnl/35.6.643256 .
9. K, P. S.; K, A. (2008). Ratio Based Stable In-Place Merging.
TAMC257 2008. Theory and Applications of Models of Computation. LNCS258 .
4978. pp. 246–257. CiteSeerX259 10.1.1.330.2641260 . doi261 :10.1007/978-3-540-
79228-4_22262 . ISBN263 978-3-540-79227-7264 .
10. 265
11. ”SELECTION SORT (J, C++) - A  D S”266 .
www.algolist.net. Retrieved 14 April 2018.
12. 267
13. K, A (N 1985). ”U, N Q  S”. Computer
Language. 2 (11).
14. F, G. (J 2007). ”S S,  P,  O(  )
C  O() M”. Theory of Computing Systems. 40 (4): 327–353.
doi268 :10.1007/s00224-006-1311-1269 .
15. C, R. (M 2020). ”- --”270 .
www.github.com.

243 https://en.wikipedia.org/wiki/Mikl%C3%B3s_Ajtai
244 https://en.wikipedia.org/wiki/J%C3%A1nos_Koml%C3%B3s_(mathematician)
245 https://en.wikipedia.org/wiki/Endre_Szemer%C3%A9di
246 https://en.wikipedia.org/wiki/Symposium_on_Theory_of_Computing
247 https://en.wikipedia.org/wiki/Doi_(identifier)
248 https://doi.org/10.1145%2F800061.808726
249 https://en.wikipedia.org/wiki/ISBN_(identifier)
250 https://en.wikipedia.org/wiki/Special:BookSources/0-89791-099-0
251 http://comjnl.oxfordjournals.org/content/35/6/643.full.pdf
252 https://en.wikipedia.org/wiki/The_Computer_Journal
253 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
254 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.8381
255 https://en.wikipedia.org/wiki/Doi_(identifier)
256 https://doi.org/10.1093%2Fcomjnl%2F35.6.643
257 https://en.wikipedia.org/wiki/International_Conference_on_Theory_and_Applications_of_Models_of_Computation
258 https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science
259 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
260 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.330.2641
261 https://en.wikipedia.org/wiki/Doi_(identifier)
262 https://doi.org/10.1007%2F978-3-540-79228-4_22
263 https://en.wikipedia.org/wiki/ISBN_(identifier)
264 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-79227-7
265 https://qiita.com/hon_no_mushi/items/92ff1a220f179b8d40f9
266 http://www.algolist.net/Algorithms/Sorting/Selection_sort
267 http://dbs.uni-leipzig.de/skripte/ADS1/PDF4/kap4.pdf
268 https://en.wikipedia.org/wiki/Doi_(identifier)
269 https://doi.org/10.1007%2Fs00224-006-1311-1
270 https://github.com/ceorron/stable-inplace-sorting-algorithms


16. C, T H.271 ; L, C E.272 ; R, R L.273 ;
S, C274 (2001), ”8”, Introduction To Algorithms275 (2 .), C-
, MA: T MIT P, . 165, ISBN276 0-262-03293-7277
17. N, S (2000). ”T F S A?”278 . Dr.
279
Dobb's .
18. C, T H.280 ; L, C E.281 ; R, R L.282 ;
S, C283 (2001) [1990]. Introduction to Algorithms284 (2 .). MIT
P  MG-H. ISBN285 0-262-03293-7286 .
19. G, M T.287 ; T, R288 (2002). ”4.5 B-S
 R-S”. Algorithm Design: Foundations, Analysis, and Internet Examples.
John Wiley & Sons. pp. 241–243. ISBN289 978-0-471-38365-9290 .
20. T, M.291 (F 2002). ”R S  O(   )
T  L S U A, S,  B- B O-
”. Journal of Algorithms. 42 (2): 205–230. doi292 :10.1006/jagm.2002.1211293 .
21. H, Y; T, M.294 (2002). Integer sorting in O(n√(log log n)) expected time
and linear space. The 43rd Annual IEEE Symposium on Foundations of Computer Sci-
ence295 . pp. 135–144. doi296 :10.1109/SFCS.2002.1181890297 . ISBN298 0-7695-1822-
2299 .
22. W, N300 (1986), Algorithms & Data Structures, Upper Saddle River,
NJ: Prentice-Hall, pp. 76–77, ISBN301 978-0130220059302

271 https://en.wikipedia.org/wiki/Thomas_H._Cormen
272 https://en.wikipedia.org/wiki/Charles_E._Leiserson
273 https://en.wikipedia.org/wiki/Ron_Rivest
274 https://en.wikipedia.org/wiki/Clifford_Stein
275 https://books.google.com/books?id=NLngYyWFl_YC
276 https://en.wikipedia.org/wiki/ISBN_(identifier)
277 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
278 http://www.drdobbs.com/architecture-and-design/the-fastest-sorting-algorithm/184404062
279 https://en.wikipedia.org/wiki/Dr._Dobb%27s
280 https://en.wikipedia.org/wiki/Thomas_H._Cormen
281 https://en.wikipedia.org/wiki/Charles_E._Leiserson
282 https://en.wikipedia.org/wiki/Ron_Rivest
283 https://en.wikipedia.org/wiki/Clifford_Stein
284 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
285 https://en.wikipedia.org/wiki/ISBN_(identifier)
286 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
287 https://en.wikipedia.org/wiki/Michael_T._Goodrich
288 https://en.wikipedia.org/wiki/Roberto_Tamassia
289 https://en.wikipedia.org/wiki/ISBN_(identifier)
290 https://en.wikipedia.org/wiki/Special:BookSources/978-0-471-38365-9
291 https://en.wikipedia.org/wiki/Mikkel_Thorup
292 https://en.wikipedia.org/wiki/Doi_(identifier)
293 https://doi.org/10.1006%2Fjagm.2002.1211
294 https://en.wikipedia.org/wiki/Mikkel_Thorup
295 https://en.wikipedia.org/wiki/Symposium_on_Foundations_of_Computer_Science
296 https://en.wikipedia.org/wiki/Doi_(identifier)
297 https://doi.org/10.1109%2FSFCS.2002.1181890
298 https://en.wikipedia.org/wiki/ISBN_(identifier)
299 https://en.wikipedia.org/wiki/Special:BookSources/0-7695-1822-2
300 https://en.wikipedia.org/wiki/Niklaus_Wirth
301 https://en.wikipedia.org/wiki/ISBN_(identifier)
302 https://en.wikipedia.org/wiki/Special:BookSources/978-0130220059


23. Wirth 1986303 , pp. 79–80
24. Wirth 1986304 , pp. 101–102
25. "Tim Peters's original description of timsort"305 . python.org. Retrieved 14
April 2018.
26. "OpenJDK's TimSort.java"306 . java.net. Retrieved 14 April 2018.
27. "sort - perldoc.perl.org"307 . perldoc.perl.org. Retrieved 14 April 2018.
28. Merge sort in Java 1.3308 , Sun. Archived309 2009-03-04 at the Wayback Machine310
29. Wirth 1986311 , pp. 87–89
30. Wirth 1986312 , p. 93
31. Cormen, Thomas H.313 ; Leiserson, Charles E.314 ; Rivest, Ronald L.315 ;
Stein, Clifford316 (2009), Introduction to Algorithms (3rd ed.), Cambridge, MA:
The MIT Press, pp. 171–172, ISBN317 978-0262033848318
32. Shell, D. L. (1959). "A High-Speed Sorting Procedure"319 (PDF). Communi-
cations of the ACM. 2 (7): 30–32. doi320 :10.1145/368370.368387321 .
33. Wirth 1986322 , pp. 81–82
34. "kernel/groups.c"323 . Retrieved 2012-05-05.
35. Brejová, B. (15 September 2001). "Analyzing variants of Shellsort". Inf.
Process. Lett.324 79 (5): 223–227. doi325 :10.1016/S0020-0190(00)00223-4326 .
36. "tag sort Definition from PC Magazine Encyclopedia"327 . www.pcmag.com.
Retrieved 14 April 2018.

303 #CITEREFWirth1986
304 #CITEREFWirth1986
305 http://svn.python.org/projects/python/trunk/Objects/listsort.txt
306 http://cr.openjdk.java.net/~martin/webrevs/openjdk7/timsort/raw_files/new/src/share/classes/java/util/TimSort.java
307 http://perldoc.perl.org/functions/sort.html
308 http://java.sun.com/j2se/1.3/docs/api/java/util/Arrays.html#sort(java.lang.Object%5B%5D)
309 https://web.archive.org/web/20090304021927/http://java.sun.com/j2se/1.3/docs/api/java/util/Arrays.html#sort(java.lang.Object%5B%5D)
310 https://en.wikipedia.org/wiki/Wayback_Machine
311 #CITEREFWirth1986
312 #CITEREFWirth1986
313 https://en.wikipedia.org/wiki/Thomas_H._Cormen
314 https://en.wikipedia.org/wiki/Charles_E._Leiserson
315 https://en.wikipedia.org/wiki/Ron_Rivest
316 https://en.wikipedia.org/wiki/Clifford_Stein
317 https://en.wikipedia.org/wiki/ISBN_(identifier)
318 https://en.wikipedia.org/wiki/Special:BookSources/978-0262033848
319 http://penguin.ewu.edu/cscd300/Topic/AdvSorting/p30-shell.pdf
320 https://en.wikipedia.org/wiki/Doi_(identifier)
321 https://doi.org/10.1145%2F368370.368387
322 #CITEREFWirth1986
323 https://github.com/torvalds/linux/blob/72932611b4b05bbd89fafa369d564ac8e449809b/kernel/groups.c#L105
324 https://en.wikipedia.org/wiki/Information_Processing_Letters
325 https://en.wikipedia.org/wiki/Doi_(identifier)
326 https://doi.org/10.1016%2FS0020-0190%2800%2900223-4
327 https://www.pcmag.com/encyclopedia_term/0,2542,t=tag+sort&i=52532,00.asp


37. Donald Knuth328 , The Art of Computer Programming329 , Volume 3: Sorting and
Searching, Second Edition. Addison-Wesley, 1998, ISBN330 0-201-89685-0331 , Section
5.4: External Sorting, pp. 248–379.
38. Ellis Horowitz332 and Sartaj Sahni333 , Fundamentals of Data Structures, H. Freeman
& Co., ISBN334 0-7167-8042-9335 .

1.9 Further reading


• K, D E.336 (1998), Sorting and Searching, The Art of Computer Program-
ming, 3 (2nd ed.), Boston: Addison-Wesley, ISBN337 0-201-89685-0338
• S, R339 (1980), ”E S  C: A I-
”, Computational Probability340 , N Y: A P, . 101–130341 ,
ISBN342 0-12-394680-8343

1.10 External links

The Wikibook Algorithm implementation344 has a page on the topic of: Sorting
algorithms345

The Wikibook A-level Mathematics346 has a page on the topic of: Sorting algo-
rithms347

Wikimedia Commons has media related to Sorting algorithms348 .

328 https://en.wikipedia.org/wiki/Donald_Knuth
329 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
330 https://en.wikipedia.org/wiki/ISBN_(identifier)
331 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
332 https://en.wikipedia.org/wiki/Ellis_Horowitz
333 https://en.wikipedia.org/wiki/Sartaj_Sahni
334 https://en.wikipedia.org/wiki/ISBN_(identifier)
335 https://en.wikipedia.org/wiki/Special:BookSources/0-7167-8042-9
336 https://en.wikipedia.org/wiki/Donald_Knuth
337 https://en.wikipedia.org/wiki/ISBN_(identifier)
338 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
339 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
340 https://archive.org/details/computationalpro00actu/page/101
341 https://archive.org/details/computationalpro00actu/page/101
342 https://en.wikipedia.org/wiki/ISBN_(identifier)
343 https://en.wikipedia.org/wiki/Special:BookSources/0-12-394680-8
344 https://en.wikibooks.org/wiki/Algorithm_implementation
345 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting
346 https://en.wikibooks.org/wiki/A-level_Mathematics
347 https://en.wikibooks.org/wiki/A-level_Mathematics/OCR/D1/Algorithms#Sorting_Algorithms
348 https://commons.wikimedia.org/wiki/Category:Sort_algorithms


• Sorting Algorithm Animations349 at the Wayback Machine350 (archived 3 March 2015)


• Sequential and parallel sorting algorithms351 – explanations and analyses of many sorting
algorithms
• Dictionary of Algorithms, Data Structures, and Problems352 – dictionary of algorithms,
techniques, common functions, and problems
• Slightly Skeptical View on Sorting Algorithms353 – Discusses several classic algorithms
and promotes alternatives to the quicksort354 algorithm
• 15 Sorting Algorithms in 6 Minutes (Youtube)355 – visualization and "audibilization" of
15 sorting algorithms in 6 minutes
• A036604 sequence in OEIS database titled "Sorting numbers: minimal number of com-
parisons needed to sort n elements"356 – as performed by the Ford–Johnson algorithm357
• Sorting Algorithms Used on Famous Paintings (Youtube)358 – visualization of sorting
algorithms on many famous paintings


349 https://web.archive.org/web/20150303022622/http://www.sorting-algorithms.com/
350 https://en.wikipedia.org/wiki/Wayback_Machine
351 http://www.iti.fh-flensburg.de/lang/algorithmen/sortieren/algoen.htm
352 https://www.nist.gov/dads/
353 http://www.softpanorama.org/Algorithms/sorting.shtml
354 https://en.wikipedia.org/wiki/Quicksort
355 https://www.youtube.com/watch?v=kPRA0W1kECg
356 https://oeis.org/A036604
357 https://en.wikipedia.org/wiki/Ford%E2%80%93Johnson_algorithm
358 https://www.youtube.com/watch?v=d2d0r1bArUQ

2 Comparison sort

Figure 7 Sorting a set of unlabelled weights by weight using only a balance scale
requires a comparison sort algorithm.

A comparison sort is a type of sorting algorithm1 that only reads the list elements through
a single abstract comparison operation (often a ”less than or equal to” operator or a three-
way comparison2 ) that determines which of two elements should occur first in the final

1 https://en.wikipedia.org/wiki/Sorting_algorithm
2 https://en.wikipedia.org/wiki/Three-way_comparison


sorted list. The only requirement is that the operator forms a total preorder3 over the data,
with:
1. if a ≤ b and b ≤ c then a ≤ c (transitivity)
2. for all a and b, a ≤ b or b ≤ a (connexity4 ).
It is possible that both a ≤ b and b ≤ a; in this case either may come first in the sorted list.
In a stable sort5 , the input order determines the sorted order in this case.
A metaphor for thinking about comparison sorts is that someone has a set of unlabelled
weights and a balance scale6 . Their goal is to line up the weights in order by their weight
without any information except that obtained by placing two weights on the scale and seeing
which one is heavier (or if they weigh the same).

2.1 Examples

Figure 8 Quicksort in action on a list of numbers. The horizontal lines are pivot values.

3 https://en.wikipedia.org/wiki/Total_preorder
4 https://en.wikipedia.org/wiki/Connex_relation
5 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability
6 https://en.wikipedia.org/wiki/Balance_scale


Some of the most well-known comparison sorts include:


• Quicksort7
• Heapsort8
• Shellsort9
• Merge sort10
• Introsort11
• Insertion sort12
• Selection sort13
• Bubble sort14
• Odd–even sort15
• Cocktail shaker sort16
• Cycle sort17
• Merge-insertion sort18
• Smoothsort19
• Timsort20

2.2 Performance limits and advantages of different sorting techniques

There are fundamental limits on the performance of comparison sorts. Any comparison sort
must perform Ω21 (n log n) comparison operations in the average case,[1] which is known as
linearithmic22 time. This is a consequence of the limited information available through
comparisons alone; put differently, of the vague algebraic structure of totally ordered sets.
In this sense, mergesort, heapsort, and introsort are asymptotically optimal23 in terms of
the number of comparisons they must perform, although this metric neglects other operations.
Non-comparison sorts (such as the examples discussed below) can achieve O24 (n) performance
by using operations other than comparisons, allowing them to sidestep this lower bound
(assuming elements are constant-sized).

7 https://en.wikipedia.org/wiki/Quicksort
8 https://en.wikipedia.org/wiki/Heapsort
9 https://en.wikipedia.org/wiki/Shellsort
10 https://en.wikipedia.org/wiki/Merge_sort
11 https://en.wikipedia.org/wiki/Introsort
12 https://en.wikipedia.org/wiki/Insertion_sort
13 https://en.wikipedia.org/wiki/Selection_sort
14 https://en.wikipedia.org/wiki/Bubble_sort
15 https://en.wikipedia.org/wiki/Odd%E2%80%93even_sort
16 https://en.wikipedia.org/wiki/Cocktail_shaker_sort
17 https://en.wikipedia.org/wiki/Cycle_sort
18 https://en.wikipedia.org/wiki/Merge-insertion_sort
19 https://en.wikipedia.org/wiki/Smoothsort
20 https://en.wikipedia.org/wiki/Timsort
21 https://en.wikipedia.org/wiki/Big-O_notation
22 https://en.wikipedia.org/wiki/Linearithmic
23 https://en.wikipedia.org/wiki/Asymptotically_optimal
24 https://en.wikipedia.org/wiki/Big-O_notation


Comparison sorts may run faster on some lists; many adaptive sorts25 such as insertion
sort26 run in O(n) time on an already-sorted or nearly-sorted list. The Ω27 (n log n) lower
bound applies only to the case in which the input list can be in any possible order.
Real-world measures of sorting speed may need to take into account the ability of some
algorithms to optimally use relatively fast cached computer memory28 , or the application
may benefit from sorting methods where sorted data begins to appear to the user quickly
(in which case the user's reading speed becomes the limiting factor) as opposed to sorting
methods where no output is available until the whole list is sorted.
Despite these limitations, comparison sorts offer the notable practical advantage that control
over the comparison function allows sorting of many different datatypes and fine control
over how the list is sorted. For example, reversing the result of the comparison function
allows the list to be sorted in reverse; and one can sort a list of tuples29 in lexicographic
order30 by just creating a comparison function that compares each part in sequence:
function tupleCompare((lefta, leftb, leftc), (righta, rightb, rightc))
    if lefta ≠ righta
        return compare(lefta, righta)
    else if leftb ≠ rightb
        return compare(leftb, rightb)
    else
        return compare(leftc, rightc)

Balanced ternary31 notation allows comparisons to be made in one step, whose result will
be one of ”less than”, ”greater than” or ”equal to”.
Comparison sorts generally adapt more easily to complex orders such as the order of floating-
point numbers32 . Additionally, once a comparison function is written, any comparison
sort can be used without modification; non-comparison sorts typically require specialized
versions for each datatype.
This flexibility, together with the efficiency of the above comparison sorting algorithms on
modern computers, has led to widespread preference for comparison sorts in most practical
work.

2.3 Alternatives

Some sorting problems admit a strictly faster solution than the Ω(n log n) bound for com-
parison sorting; an example is integer sorting33 , where all keys are integers. When the keys
form a small (compared to n) range, counting sort34 is an example algorithm that runs in

25 https://en.wikipedia.org/wiki/Adaptive_sort
26 https://en.wikipedia.org/wiki/Insertion_sort
27 https://en.wikipedia.org/wiki/Big-O_notation
28 https://en.wikipedia.org/wiki/Random_Access_Memory
29 https://en.wikipedia.org/wiki/Tuple
30 https://en.wikipedia.org/wiki/Lexicographic_order
31 https://en.wikipedia.org/wiki/Balanced_ternary
32 https://en.wikipedia.org/wiki/Floating-point_number
33 https://en.wikipedia.org/wiki/Integer_sorting
34 https://en.wikipedia.org/wiki/Counting_sort


linear time. Other integer sorting algorithms, such as radix sort35 , are not asymptotically
faster than comparison sorting, but can be faster in practice.
The problem of sorting pairs of numbers by their sum36 is not subject to the Ω(n² log n)
bound either (the square resulting from the pairing up); the best known algorithm still takes
O(n² log n) time, but only O(n²) comparisons.

2.4 Number of comparisons required to sort a list

n    ⌈log2 (n!)⌉   Minimum
1    0     0
2    1     1
3    3     3
4    5     5
5    7     7
6    10    10
7    13    13
8    16    16
9    19    19
10   22    22
11   26    26
12   29    30[2][3]
13   33    34[4][5][6]
14   37    38[6]
15   41    42[7][8][9]
16   45    45 or 46[10]
17   49    49 or 50
18   53    53 or 54
19   57    58[9]
20   62    62
21   66    66
22   70    71[6]

n           ⌈log2 (n!)⌉   n log2 n − n/ln 2
10          22            19
100         525           521
1 000       8 530         8 524
10 000      118 459       118 451
100 000     1 516 705     1 516 695
1 000 000   18 488 885    18 488 874

35 https://en.wikipedia.org/wiki/Radix_sort
36 https://en.wikipedia.org/wiki/X_%2B_Y_sorting


Above: A comparison of the lower bound ⌈log2 (n!)⌉ to the actual minimum number of
comparisons (from OEIS37 : A03660438 ) required to sort a list of n items (for the worst
case). Below: Using Stirling's approximation39 , this lower bound is well-approximated by
n log2 n − n/ln 2.
The number of comparisons that a comparison sort algorithm requires increases in propor-
tion to n log(n), where n is the number of elements to sort. This bound is asymptotically
tight40 .
Given a list of distinct numbers (we can assume this because this is a worst-case analysis),
there are n factorial41 permutations exactly one of which is the list in sorted order. The
sort algorithm must gain enough information from the comparisons to identify the correct
permutation. If the algorithm always completes after at most f(n) steps, it cannot
distinguish more than 2^f(n) cases because the keys are distinct and each comparison has
only two possible outcomes. Therefore,
2^f(n) ≥ n!, or equivalently f(n) ≥ log2 (n!).
By looking at the first n/2 factors of n! = n(n − 1) · · · 1, we obtain
log2 (n!) ≥ log2 ((n/2)^(n/2)) = (n/2) · (log n / log 2) − n/2 = Θ(n log n),
and hence
log2 (n!) = Ω(n log n).
This provides the lower-bound part of the claim. A better bound can be given via Stirling's
approximation42 .
An identical upper bound follows from the existence of the algorithms that attain this bound
in the worst case, like heapsort43 and mergesort44 .
The above argument provides an absolute, rather than only asymptotic lower bound on the
number of comparisons, namely ⌈log2 (n!)⌉ comparisons. This lower bound is fairly good (it
can be approached within a linear tolerance by a simple merge sort), but it is known to be
inexact. For example, ⌈log2 (13!)⌉ = 33, but the minimal number of comparisons to sort 13
elements has been proved to be 34.
Determining the exact number of comparisons needed to sort a given number of entries
is a computationally hard problem even for small n, and no simple formula for the so-
lution is known. For some of the few concrete values that have been computed, see
OEIS45 : A03660446 .

37 https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences
38 http://oeis.org/A036604
39 https://en.wikipedia.org/wiki/Stirling%27s_approximation
40 https://en.wikipedia.org/wiki/Asymptotic_computational_complexity
41 https://en.wikipedia.org/wiki/Factorial
42 https://en.wikipedia.org/wiki/Stirling%27s_approximation
43 https://en.wikipedia.org/wiki/Heapsort
44 https://en.wikipedia.org/wiki/Merge_sort
45 https://en.wikipedia.org/wiki/On-Line_Encyclopedia_of_Integer_Sequences
46 http://oeis.org/A036604


2.4.1 Lower bound for the average number of comparisons

A similar bound applies to the average number of comparisons. Assuming that


• all keys are distinct, i.e. every comparison will give either a>b or a<b, and
• the input is a random permutation, chosen uniformly from the set of all possible permu-
tations of n elements,
it is impossible to determine which order the input is in with fewer than log2 (n!) comparisons
on average.
This can be most easily seen using concepts from information theory47 . The Shannon en-
tropy48 of such a random permutation is log2 (n!) bits. Since a comparison can give only
two results, the maximum amount of information it provides is 1 bit. Therefore, after
k comparisons the remaining entropy of the permutation, given the results of those compar-
isons, is at least log2 (n!) − k bits on average. To perform the sort, complete information is
needed, so the remaining entropy must be 0. It follows that k must be at least log2 (n!) on
average.
A bound derived in this way is called an information-theoretic lower bound. It is valid,
but it is not necessarily the strongest lower bound; in some cases it may even be far from
the true lower bound. For example, the information-theoretic lower bound for selection is
⌈log2 (n)⌉, whereas an adversarial argument shows that n − 1 comparisons are needed. The
relationship between the information-theoretic lower bound and the true lower bound
resembles that between a real-valued function and the integer function it bounds from
below. However, this picture is not exactly right when the average case is considered.
The key question in the average-case analysis is what 'average' refers to, that is, what is
being averaged over. The information-theoretic lower bound averages over the set of all
permutations as a whole, but any actual algorithm must treat each permutation as an
individual instance of the problem. Hence the average lower bound sought here is an
average over all individual cases.
To capture what algorithms can actually achieve, we adopt the decision tree model49 .
Restated in that model50 , the bound to be shown is a lower bound on the average length of
root-to-leaf paths of an n!-leaf binary tree (in which each leaf corresponds to a
permutation). One can show that a balanced full binary tree achieves the minimum average
length. With some careful calculation, for a balanced full binary tree with n! leaves, the
average length of root-to-leaf paths is given by
((2n! − 2^(⌊log2 n!⌋+1)) · ⌈log2 n!⌉ + (2^(⌊log2 n!⌋+1) − n!) · ⌊log2 n!⌋) / n!

47 https://en.wikipedia.org/wiki/Information_theory
48 https://en.wikipedia.org/wiki/Shannon_entropy
49 https://en.wikipedia.org/wiki/Decision_tree_model
50 https://en.wikipedia.org/wiki/Decision_tree_model


For example, for n = 3, the information-theoretic lower bound for the average case is
approximately 2.58, while the average lower bound derived via the decision tree model51 is
8/3, approximately 2.67.
In the case that multiple items may have the same key, there is no obvious statistical
interpretation for the term ”average case”, so an argument like the above cannot be applied
without making specific assumptions about the distribution of keys.

2.5 Notes
1. C, T H.52 ; L, C E.53 ; R, R L.54 ; S,
C55 (2009) [1990]. Introduction to Algorithms56 (3 .). MIT P 
MG-H. . 191–193. ISBN57 0-262-03384-458 .
2. Mark Wells, Applications of a language for computing in combinatorics, Information
Processing 65 (Proceedings of the 1965 IFIP Congress), 497–498, 1966.
3. Mark Wells, Elements of Combinatorial Computing, Pergamon Press, Oxford, 1971.
4. Takumi Kasai, Shusaku Sawato, Shigeki Iwata, Thirty four comparisons are required
to sort 13 items, LNCS 792, 260-269, 1994.
5. Marcin Peczarski, Sorting 13 elements requires 34 comparisons, LNCS 2461, 785–794,
2002.
6. Marcin Peczarski, New results in minimum-comparison sorting, Algorithmica 40 (2),
133–145, 2004.
7. Marcin Peczarski, Computer assisted research of posets, PhD thesis, University of
Warsaw, 2006.
8. P, M (2007). ”T F-J    -
    47 ”. Inf. Process. Lett. 101 (3): 126–128.
doi59 :10.1016/j.ipl.2006.09.00160 .
9. C, W; L, X; W, G; L, J (O 2007).
”最少比较排序问题中S(15)和S(19)的解决”61 [T   S(15)  S(19)
 -  ]. Journal of Frontiers of Computer
Science and Technology (in Chinese). 1 (3): 305–313.
10. P, M (3 A 2011). ”T O S  16 E-
”. Acta Universitatis Sapientiae. 4 (2): 215–224. arXiv62 :1108.086663 . Bib-
code64 :2011arXiv1108.0866P65 .

51 https://en.wikipedia.org/wiki/Decision_tree_model
52 https://en.wikipedia.org/wiki/Thomas_H._Cormen
53 https://en.wikipedia.org/wiki/Charles_E._Leiserson
54 https://en.wikipedia.org/wiki/Ron_Rivest
55 https://en.wikipedia.org/wiki/Clifford_Stein
56 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
57 https://en.wikipedia.org/wiki/ISBN_(identifier)
58 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
59 https://en.wikipedia.org/wiki/Doi_(identifier)
60 https://doi.org/10.1016%2Fj.ipl.2006.09.001
61 http://fcst.ceaj.org/EN/abstract/abstract47.shtml
62 https://en.wikipedia.org/wiki/ArXiv_(identifier)
63 http://arxiv.org/abs/1108.0866
64 https://en.wikipedia.org/wiki/Bibcode_(identifier)
65 https://ui.adsabs.harvard.edu/abs/2011arXiv1108.0866P


2.6 References
• Donald Knuth66 . The Art of Computer Programming67 , Volume 3: Sorting and Search-
ing, Second Edition. Addison-Wesley, 1997. ISBN68 0-201-89685-069 . Section 5.3.1:
Minimum-Comparison Sorting, pp. 180–197.


66 https://en.wikipedia.org/wiki/Donald_Knuth
67 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
68 https://en.wikipedia.org/wiki/ISBN_(identifier)
69 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0

3 Selection sort


Selection sort
Class                         Sorting algorithm
Data structure                Array
Worst-case performance        O(n²) comparisons, O(n) swaps
Best-case performance         O(n²) comparisons, O(n) swaps
Average performance           O(n²) comparisons, O(n) swaps
Worst-case space complexity   O(1) auxiliary

In computer science7 , selection sort is an in-place8 comparison9 sorting algorithm10 . It


has an O11 (n2 ) time complexity12 , which makes it inefficient on large lists, and generally
performs worse than the similar insertion sort13 . Selection sort is noted for its simplicity
and has performance advantages over more complicated algorithms in certain situations,
particularly where auxiliary memory14 is limited.
The algorithm divides the input list into two parts: a sorted sublist of items which is built
up from left to right at the front (left) of the list and a sublist of the remaining unsorted
items that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted

7 https://en.wikipedia.org/wiki/Computer_science
8 https://en.wikipedia.org/wiki/In-place_algorithm
9 https://en.wikipedia.org/wiki/Comparison_sort
10 https://en.wikipedia.org/wiki/Sorting_algorithm
11 https://en.wikipedia.org/wiki/Big_O_notation
12 https://en.wikipedia.org/wiki/Time_complexity
13 https://en.wikipedia.org/wiki/Insertion_sort
14 https://en.wikipedia.org/wiki/Auxiliary_memory


sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest,
depending on sorting order) element in the unsorted sublist, exchanging (swapping) it with
the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries
one element to the right.
The time efficiency of selection sort is quadratic, so there are a number of sorting techniques
which have better time complexity than selection sort. One thing which distinguishes
selection sort from other sorting algorithms is that it makes the minimum possible number
of swaps, n − 1 in the worst case.

3.1 Example

Here is an example of this sort algorithm sorting five elements:

Sorted sublist          Unsorted sublist        Least element in unsorted list
()                      (11, 25, 12, 22, 64)    11
(11)                    (25, 12, 22, 64)        12
(11, 12)                (25, 22, 64)            22
(11, 12, 22)            (25, 64)                25
(11, 12, 22, 25)        (64)                    64
(11, 12, 22, 25, 64)    ()


Figure 9 Selection sort animation. Red is current min. Yellow is sorted list. Blue is
current item.

(Nothing appears changed on these last two lines because the last two numbers were already
in order.)
Selection sort can also be used on list structures that make add and remove efficient, such
as a linked list15 . In this case it is more common to remove the minimum element from the
remainder of the list, and then insert it at the end of the values sorted so far. For example:
arr[] = 64 25 12 22 11

15 https://en.wikipedia.org/wiki/Linked_list


// Find the minimum element in arr[0...4]
// and place it at beginning
11 25 12 22 64

// Find the minimum element in arr[1...4]
// and place it at beginning of arr[1...4]
11 12 25 22 64

// Find the minimum element in arr[2...4]
// and place it at beginning of arr[2...4]
11 12 22 25 64

// Find the minimum element in arr[3...4]
// and place it at beginning of arr[3...4]
11 12 22 25 64

3.2 Implementations


Below is an implementation in C28 . More implementations can be found on the talk page
of this Wikipedia article29 .

/* a[0] to a[aLength-1] is the array to sort */
int i, j;
int aLength; // initialise to a's length

/* advance the position through the entire array */
/* (could do i < aLength-1 because a single element is also the min element) */
for (i = 0; i < aLength - 1; i++)
{
    /* find the min element in the unsorted a[i .. aLength-1] */

    /* assume the min is the first element */
    int jMin = i;
    /* test against elements after i to find the smallest */
    for (j = i + 1; j < aLength; j++)
    {
        /* if this element is less, then it is the new minimum */
        if (a[j] < a[jMin])
        {
            /* found new minimum; remember its index */
            jMin = j;
        }
    }

    if (jMin != i)
    {
        /* swap a[i] and a[jMin]; C has no built-in swap, so use a temporary */
        int temp = a[i];
        a[i] = a[jMin];
        a[jMin] = temp;
    }
}

28 https://en.wikipedia.org/wiki/C_(programming_language)
29 https://en.wikipedia.org/wiki/Talk:Selection_sort#Implementations

3.3 Complexity

Selection sort is not difficult to analyze compared to other sorting algorithms since none
of the loops depend on the data in the array. Selecting the minimum requires scanning n
elements (taking n − 1 comparisons) and then swapping it into the first position. Finding the
next lowest element requires scanning the remaining n − 1 elements and so on. Therefore,
the total number of comparisons is

(n − 1) + (n − 2) + ... + 1 = Σ_{i=1}^{n−1} i

By arithmetic progression30 ,

Σ_{i=1}^{n−1} i = ((n − 1) + 1)/2 · (n − 1) = n(n − 1)/2 = (n² − n)/2

which is of complexity O(n²) in terms of number of comparisons. Each of these scans
requires one swap for n − 1 elements (the final element is already in place).

3.4 Comparison to other sorting algorithms

Among quadratic sorting algorithms (sorting algorithms with a simple average-case of
Θ(n2 )31 ), selection sort almost always outperforms bubble sort32 and gnome sort33 .
Insertion sort34 is very similar in that after the kth iteration, the first k elements in
the array are in sorted order. Insertion sort's advantage is that it only scans as many
elements as it

30 https://en.wikipedia.org/wiki/Arithmetic_progression
31 https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann%E2%80%93Landau_notations
32 https://en.wikipedia.org/wiki/Bubble_sort
33 https://en.wikipedia.org/wiki/Gnome_sort
34 https://en.wikipedia.org/wiki/Insertion_sort


needs in order to place the k + 1st element, while selection sort must scan all remaining
elements to find the k + 1st element.
Simple calculation shows that insertion sort will therefore usually perform about half as
many comparisons as selection sort, although it can perform just as many or far fewer
depending on the order the array was in prior to sorting. It can be seen as an advantage
for some real-time35 applications that selection sort will perform identically regardless of
the order of the array, while insertion sort's running time can vary considerably. However,
this is more often an advantage for insertion sort in that it runs much more efficiently if the
array is already sorted or ”close to sorted.”
While selection sort is preferable to insertion sort in terms of number of writes (Θ(n) swaps
versus Ο(n2 ) swaps), it almost always far exceeds (and never beats) the number of writes
that cycle sort36 makes, as cycle sort is theoretically optimal in the number of writes.
This can be important if writes are significantly more expensive than reads, such as with
EEPROM37 or Flash38 memory, where every write lessens the lifespan of the memory.
Finally, selection sort is greatly outperformed on larger arrays by Θ(n log n) divide-and-
conquer algorithms39 such as mergesort40 . However, insertion sort or selection sort are both
typically faster for small arrays (i.e. fewer than 10–20 elements). A useful optimization in
practice for the recursive algorithms is to switch to insertion sort or selection sort for ”small
enough” sublists.

3.5 Variants

Heapsort41 greatly improves the basic algorithm by using an implicit42 heap43 data struc-
ture44 to speed up finding and removing the lowest datum. If implemented correctly, the
heap will allow finding the next lowest element in Θ(log n) time instead of Θ(n) for the
inner loop in normal selection sort, reducing the total running time to Θ(n log n).
A bidirectional variant of selection sort (sometimes called cocktail sort due to its similarity
to the bubble-sort variant cocktail shaker sort45 ) is an algorithm which finds both the
minimum and maximum values in the list in every pass. This reduces the number of scans
of the input by a factor of two. Each scan performs three comparisons per two elements (a
pair of elements is compared, then the greater is compared to the maximum and the lesser
is compared to the minimum), a 25% savings over regular selection sort, which does one
comparison per element. This variant is sometimes called double selection sort.
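A C sketch of this bidirectional variant is shown below (function name is illustrative; for simplicity it spends two comparisons per element rather than using the three-comparisons-per-pair optimization described above):

```c
/* Bidirectional ("double") selection sort sketch: each pass finds both
   the minimum and the maximum of the unsorted range and moves them to
   its two ends, halving the number of scans over the input. */
void double_selection_sort(int a[], int n)
{
    int left = 0, right = n - 1;
    while (left < right) {
        int iMin = left, iMax = left;
        for (int i = left + 1; i <= right; i++) {
            if (a[i] < a[iMin])      iMin = i;
            else if (a[i] > a[iMax]) iMax = i;
        }
        /* move the minimum to the left end */
        int tmp = a[left]; a[left] = a[iMin]; a[iMin] = tmp;
        /* if the maximum was sitting at the left end, it has just moved */
        if (iMax == left) iMax = iMin;
        /* move the maximum to the right end */
        tmp = a[right]; a[right] = a[iMax]; a[iMax] = tmp;
        left++; right--;
    }
}
```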

35 https://en.wikipedia.org/wiki/Real-time_computing
36 https://en.wikipedia.org/wiki/Cycle_sort
37 https://en.wikipedia.org/wiki/EEPROM
38 https://en.wikipedia.org/wiki/Flash_memory
39 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
40 https://en.wikipedia.org/wiki/Mergesort
41 https://en.wikipedia.org/wiki/Heapsort
42 https://en.wikipedia.org/wiki/Implicit_data_structure
43 https://en.wikipedia.org/wiki/Heap_(data_structure)
44 https://en.wikipedia.org/wiki/Data_structure
45 https://en.wikipedia.org/wiki/Cocktail_shaker_sort


Selection sort can be implemented as a stable sort46 . If, rather than swapping in step 2,
the minimum value is inserted into the first position (that is, all intervening items moved
down), the algorithm is stable. However, this modification either requires a data structure
that supports efficient insertions or deletions, such as a linked list, or it leads to performing
Θ(n2 ) writes.
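The stable array variant described above can be sketched in C as follows (names are illustrative); note the inner shifting loop that replaces the swap and accounts for the Θ(n²) writes:

```c
/* Stable selection sort sketch: instead of swapping, the minimum of the
   unsorted suffix is removed and re-inserted at position i, shifting the
   intervening elements one place to the right.  Equal elements therefore
   keep their original relative order, at the cost of Theta(n^2) writes. */
void stable_selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int jMin = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[jMin])
                jMin = j;
        int min = a[jMin];
        for (int j = jMin; j > i; j--)   /* shift intervening items right */
            a[j] = a[j - 1];
        a[i] = min;                      /* insert, rather than swap */
    }
}
```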
In the bingo sort variant, items are ordered by repeatedly looking through the remaining
items to find the greatest value and moving all items with that value to their final location.[1]
Like counting sort47 , this is an efficient variant if there are many duplicate values. Indeed,
selection sort does one pass through the remaining items for each item moved. Bingo sort
does one pass for each value (not item): after an initial pass to find the biggest value, the
next passes can move every item with that value to its final location while finding the next
value as in the following pseudocode48 (arrays are zero-based and the for-loop includes both
the top and bottom limits, as in Pascal49 ):

bingo(array A)

{ This procedure sorts in ascending order. }

begin
    max := length(A)-1;

    { The first iteration is written to look very similar to the subsequent ones,
      but without swaps. }
    nextValue := A[max];
    for i := max - 1 downto 0 do
        if A[i] > nextValue then
            nextValue := A[i];
    while (max > 0) and (A[max] = nextValue) do
        max := max - 1;

    while max > 0 do begin
        value := nextValue;
        nextValue := A[max];
        for i := max - 1 downto 0 do
            if A[i] = value then begin
                swap(A[i], A[max]);
                max := max - 1;
            end else if A[i] > nextValue then
                nextValue := A[i];
        while (max > 0) and (A[max] = nextValue) do
            max := max - 1;
    end;
end;

Thus, if on average there are more than two items with the same value, bingo sort can be
expected to be faster because it executes the inner loop fewer times than selection sort.

3.6 See also


• Selection algorithm50

46 https://en.wikipedia.org/wiki/Sorting_algorithm#Classification
47 https://en.wikipedia.org/wiki/Counting_sort
48 https://en.wikipedia.org/wiki/Pseudocode
49 https://en.wikipedia.org/wiki/Pascal_(programming_language)
50 https://en.wikipedia.org/wiki/Selection_algorithm


3.7 References
1. This article incorporates public domain material51 from the NIST52 document:
Black, Paul E. ”Bingo sort”53 . Dictionary of Algorithms and Data Structures54 .
• Donald Knuth55 . The Art of Computer Programming56 , Volume 3: Sorting and Searching,
Third Edition. Addison–Wesley, 1997. ISBN57 0-201-89685-058 . Pages 138–141 of Section
5.2.3: Sorting by Selection.
• Anany Levitin. Introduction to the Design & Analysis of Algorithms, 2nd Edition.
ISBN59 0-321-35828-760 . Section 3.1: Selection Sort, pp 98–100.
• Robert Sedgewick61 . Algorithms in C++, Parts 1–4: Fundamentals, Data Structure,
Sorting, Searching: Fundamentals, Data Structures, Sorting, Searching Pts. 1–4, Second
Edition. Addison–Wesley Longman, 1998. ISBN62 0-201-35088-263 . Pages 273–274

3.8 External links

The Wikibook Algorithm implementation64 has a page on the topic of: Selection
sort65

• Animated Sorting Algorithms: Selection Sort66 at the Wayback Machine67 (archived 7
March 2015) – graphical demonstration


https://en.wikipedia.org/wiki/Copyright_status_of_works_by_the_federal_government_of_
51
the_United_States
52 https://en.wikipedia.org/wiki/National_Institute_of_Standards_and_Technology
53 https://xlinux.nist.gov/dads/HTML/bingosort.html
54 https://en.wikipedia.org/wiki/Dictionary_of_Algorithms_and_Data_Structures
55 https://en.wikipedia.org/wiki/Donald_Knuth
56 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
57 https://en.wikipedia.org/wiki/ISBN_(identifier)
58 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
59 https://en.wikipedia.org/wiki/ISBN_(identifier)
60 https://en.wikipedia.org/wiki/Special:BookSources/0-321-35828-7
61 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
62 https://en.wikipedia.org/wiki/ISBN_(identifier)
63 https://en.wikipedia.org/wiki/Special:BookSources/0-201-35088-2
64 https://en.wikibooks.org/wiki/Algorithm_implementation
65 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Selection_sort
https://web.archive.org/web/20150307110315/http://www.sorting-algorithms.com/
66
selection-sort
67 https://en.wikipedia.org/wiki/Wayback_Machine

4 Insertion sort

Insertion sort
[Animation of insertion sort]
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) comparisons and swaps
Best-case performance: O(n) comparisons, O(1) swaps
Average performance: O(n²) comparisons and swaps
Worst-case space complexity: O(n) total, O(1) auxiliary

Insertion sort is a simple sorting algorithm1 that builds the final sorted array2 (or list) one
item at a time. It is much less efficient on large lists than more advanced algorithms such as
quicksort3 , heapsort4 , or merge sort5 . However, insertion sort provides several advantages:
• Simple implementation: Jon Bentley6 shows a three-line C7 version, and a five-line opti-
mized8 version[1]
• Efficient for (quite) small data sets, much like other quadratic sorting algorithms
• More efficient in practice than most other simple quadratic (i.e., O9 (n2 )) algorithms such
as selection sort10 or bubble sort11
• Adaptive12 , i.e., efficient for data sets that are already substantially sorted: the time
complexity13 is O14 (kn) when each element in the input is no more than k places away
from its sorted position
• Stable15 ; i.e., does not change the relative order of elements with equal keys

1 https://en.wikipedia.org/wiki/Sorting_algorithm
2 https://en.wikipedia.org/wiki/Sorted_array
3 https://en.wikipedia.org/wiki/Quicksort
4 https://en.wikipedia.org/wiki/Heapsort
5 https://en.wikipedia.org/wiki/Merge_sort
6 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
7 https://en.wikipedia.org/wiki/C_(programming_language)
8 https://en.wikipedia.org/wiki/Program_optimization
9 https://en.wikipedia.org/wiki/Big_O_notation
10 https://en.wikipedia.org/wiki/Selection_sort
11 https://en.wikipedia.org/wiki/Bubble_sort
12 https://en.wikipedia.org/wiki/Adaptive_sort
13 https://en.wikipedia.org/wiki/Time_complexity
14 https://en.wikipedia.org/wiki/Big_O_notation
15 https://en.wikipedia.org/wiki/Stable_sort


• In-place16 ; i.e., only requires a constant amount O(1) of additional memory space
• Online17 ; i.e., can sort a list as it receives it
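The stability property listed above can be demonstrated with a small C sketch (the struct and names are illustrative): because the inner comparison is strict, records with equal keys never move past one another.

```c
/* Insertion sort over records, illustrating stability: the strict ">"
   comparison means an element is never shifted past an equal key, so
   records with equal keys keep their original relative order. */
struct rec { int key; int id; };

void insertion_sort_recs(struct rec a[], int n)
{
    for (int i = 1; i < n; i++) {
        struct rec x = a[i];
        int j = i - 1;
        while (j >= 0 && a[j].key > x.key) {  /* strict: equal keys stop the scan */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = x;
    }
}
```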
When people manually sort cards in a bridge hand, most use a method that is similar to
insertion sort.[2]

4.1 Algorithm

Figure 11 A graphical example of insertion sort. The partially sorted list (black) initially
contains only the first element in the list. With each iteration one element (red) is removed
from the ”not yet checked for order” input data and inserted in-place into the sorted list.

Insertion sort iterates18 , consuming one input element each repetition, and growing a sorted
output list. At each iteration, insertion sort removes one element from the input data, finds
the location it belongs within the sorted list, and inserts it there. It repeats until no input
elements remain.
Sorting is typically done in-place, by iterating up the array, growing the sorted list behind
it. At each array-position, it checks the value there against the largest value in the sorted
list (which happens to be next to it, in the previous array-position checked). If larger, it
leaves the element in place and moves to the next. If smaller, it finds the correct position
within the sorted list, shifts all the larger values up to make a space, and inserts into that
correct position.

16 https://en.wikipedia.org/wiki/In-place_algorithm
17 https://en.wikipedia.org/wiki/Online_algorithm
18 https://en.wikipedia.org/wiki/Iteration


The resulting array after k iterations has the property where the first k + 1 entries are
sorted (”+1” because the first entry is skipped). In each iteration the first remaining entry
of the input is removed, and inserted into the result at the correct position, thus extending
the result:

Figure 12 Array prior to the insertion of x

becomes

Figure 13 Array after the insertion of x

with each element greater than x copied to the right as it is compared against x.
The most common variant of insertion sort, which operates on arrays, can be described as
follows:
1. Suppose there exists a function called Insert designed to insert a value into a sorted
sequence at the beginning of an array. It operates by beginning at the end of the
sequence and shifting each element one place to the right until a suitable position is
found for the new element. The function has the side effect of overwriting the value
stored immediately after the sorted sequence in the array.
2. To perform an insertion sort, begin at the left-most element of the array and invoke
Insert to insert each element encountered into its correct position. The ordered se-
quence into which the element is inserted is stored at the beginning of the array in the
set of indices already examined. Each insertion overwrites a single value: the value
being inserted.
Pseudocode19 of the complete algorithm follows, where the arrays are zero-based20 :[1]
i ← 1
while i < length(A)
    j ← i
    while j > 0 and A[j-1] > A[j]
        swap A[j] and A[j-1]
        j ← j - 1
    end while
    i ← i + 1
end while

19 https://en.wikipedia.org/wiki/Pseudocode
20 https://en.wikipedia.org/wiki/Zero-based_numbering


The outer loop runs over all the elements except the first one, because the single-element
prefix A[0:1] is trivially sorted, so the invariant21 that the first i entries are sorted is true
from the start. The inner loop moves element A[i] to its correct place so that after the
loop, the first i+1 elements are sorted. Note that the and-operator in the test must use
short-circuit evaluation22 , otherwise the test might result in an array bounds error23 , when
j=0 and it tries to evaluate A[j-1] > A[j] (i.e. accessing A[-1] fails).
After expanding the swap operation in-place as x ← A[j]; A[j] ← A[j-1]; A[j-1] ←
x (where x is a temporary variable), a slightly faster version can be produced that moves
A[i] to its position in one go and only performs one assignment in the inner loop body:[1]
i ← 1
while i < length(A)
    x ← A[i]
    j ← i - 1
    while j >= 0 and A[j] > x
        A[j+1] ← A[j]
        j ← j - 1
    end while
    A[j+1] ← x[3]
    i ← i + 1
end while

The new inner loop shifts elements to the right to clear a spot for x = A[i].
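As a concrete rendering, the optimized pseudocode above translates almost line for line into C (the function name is illustrative):

```c
/* Array insertion sort, following the optimized pseudocode: A[i] is held
   in x while the larger elements are shifted one place right, then x is
   written once into its final slot. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int x = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > x) {
            a[j + 1] = a[j];   /* shift a larger element right */
            j--;
        }
        a[j + 1] = x;          /* single assignment per insertion */
    }
}
```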
The algorithm can also be implemented in a recursive way. The recursion just replaces
the outer loop, calling itself and storing successively smaller values of n on the stack until
n equals 0, where the function then returns back up the call chain to execute the code
after each recursive call starting with n equal to 1, with n increasing by 1 as each instance
of the function returns to the prior instance. The initial call would be insertionSortR(A,
length(A)-1).
function insertionSortR(array A, int n)
if n > 0
insertionSortR(A, n-1)
x ← A[n]
j ← n-1
while j >= 0 and A[j] > x
A[j+1] ← A[j]
j ← j-1
end while
A[j+1] ← x
end if
end function

It does not make the code any shorter, nor does it reduce the execution time, but it
increases the additional memory consumption from O(1) to O(N) (at the deepest level of
recursion the stack contains N references to the A array, each with an accompanying value
of the variable n from N down to 1).

21 https://en.wikipedia.org/wiki/Invariant_(computer_science)
22 https://en.wikipedia.org/wiki/Short-circuit_evaluation
23 https://en.wikipedia.org/wiki/Bounds_checking


4.2 Best, worst, and average cases

The best case input is an array that is already sorted. In this case insertion sort has a linear
running time (i.e., O(n)). During each iteration, the first remaining element of the input is
only compared with the right-most element of the sorted subsection of the array.
The simplest worst case input is an array sorted in reverse order. The set of all worst case
inputs consists of all arrays where each element is the smallest or second-smallest of the
elements before it. In these cases every iteration of the inner loop will scan and shift the
entire sorted subsection of the array before inserting the next element. This gives insertion
sort a quadratic running time (i.e., O(n2 )).
The average case is also quadratic[4] , which makes insertion sort impractical for sorting
large arrays. However, insertion sort is one of the fastest algorithms for sorting very small
arrays, even faster than quicksort25 ; indeed, good quicksort26 implementations use insertion
sort for arrays smaller than a certain threshold, including when such small arrays arise as
subproblems of the recursion; the exact threshold must be determined experimentally and
depends on the machine, but is commonly around ten.
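These best- and worst-case counts are easy to confirm with an instrumented sketch (C, illustrative names): a sorted input of length n costs n − 1 comparisons, a reverse-sorted input n(n − 1)/2.

```c
#include <stddef.h>

/* Insertion sort with a comparison counter, to check the best and worst
   cases: an already-sorted input of length n costs n-1 comparisons, a
   reverse-sorted input costs n*(n-1)/2. */
size_t insertion_sort_counted(int a[], size_t n)
{
    size_t comparisons = 0;
    for (size_t i = 1; i < n; i++) {
        int x = a[i];
        size_t j = i;
        while (j > 0) {
            comparisons++;               /* one comparison per probe */
            if (a[j - 1] > x) {
                a[j] = a[j - 1];         /* shift and keep scanning left */
                j--;
            } else {
                break;                   /* found the insertion point */
            }
        }
        a[j] = x;
    }
    return comparisons;
}
```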
Example: The following table shows the steps for sorting the sequence {3, 7, 4, 9, 5, 2, 6,
1}. In each step, the key under consideration is underlined. The key that was moved (or
left in place because it was biggest yet considered) in the previous step is marked with an
asterisk.
3 7 4 9 5 2 6 1
3* 7 4 9 5 2 6 1
3 7* 4 9 5 2 6 1
3 4* 7 9 5 2 6 1
3 4 7 9* 5 2 6 1
3 4 5* 7 9 2 6 1
2* 3 4 5 7 9 6 1
2 3 4 5 6* 7 9 1
1* 2 3 4 5 6 7 9

4.3 Relation to other sorting algorithms

Insertion sort is very similar to selection sort27 . As in selection sort, after k passes through
the array, the first k elements are in sorted order. However, the fundamental difference
between the two algorithms is that for selection sort these are the k smallest elements of the
unsorted input, while in insertion sort they are simply the first k elements of the input. The
primary advantage of insertion sort over selection sort is that selection sort must always
scan all remaining elements to find the absolute smallest element in the unsorted portion of
the list, while insertion sort requires only a single comparison when the (k + 1)-st element
is greater than the k-th element; when this is frequently true (such as if the input array
is already sorted or partially sorted), insertion sort is distinctly more efficient compared to
selection sort. On average (assuming the rank of the (k + 1)-st element rank is random),

25 https://en.wikipedia.org/wiki/Quicksort
26 https://en.wikipedia.org/wiki/Quicksort
27 https://en.wikipedia.org/wiki/Selection_sort


insertion sort will require comparing and shifting half of the previous k elements, meaning
that insertion sort will perform about half as many comparisons as selection sort on average.
In the worst case for insertion sort (when the input array is reverse-sorted), insertion sort
performs just as many comparisons as selection sort. However, a disadvantage of insertion
sort over selection sort is that it requires more writes due to the fact that, on each iteration,
inserting the (k + 1)-st element into the sorted portion of the array requires many element
swaps to shift all of the following elements, while only a single swap is required for each
iteration of selection sort. In general, insertion sort will write to the array O(n2 ) times,
whereas selection sort will write only O(n) times. For this reason selection sort may be
preferable in cases where writing to memory is significantly more expensive than reading,
such as with EEPROM28 or flash memory29 .
While some divide-and-conquer algorithms30 such as quicksort31 and mergesort32 outper-
form insertion sort for larger arrays, non-recursive sorting algorithms such as insertion sort
or selection sort are generally faster for very small arrays (the exact size varies by envi-
ronment and implementation, but is typically between 7 and 50 elements). Therefore, a
useful optimization in the implementation of those algorithms is a hybrid approach, using
the simpler algorithm when the array has been divided to a small size.[1]
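One way such a hybrid can look in C is a quicksort that falls back to insertion sort on small subarrays. This is a sketch, not a tuned implementation; the cutoff of 16 is an arbitrary illustrative choice.

```c
#define CUTOFF 16   /* illustrative small-array threshold, not a tuned constant */

/* Plain insertion sort on the inclusive range a[lo..hi]. */
static void insertion_sort_range(int a[], int lo, int hi)
{
    for (int i = lo + 1; i <= hi; i++) {
        int x = a[i];
        int j = i - 1;
        while (j >= lo && a[j] > x) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = x;
    }
}

/* Hybrid quicksort: subarrays at or below CUTOFF are handed to
   insertion sort instead of being partitioned further. */
void hybrid_quicksort(int a[], int lo, int hi)
{
    if (hi - lo + 1 <= CUTOFF) {
        insertion_sort_range(a, lo, hi);
        return;
    }
    int pivot = a[lo + (hi - lo) / 2];
    int i = lo, j = hi;
    while (i <= j) {                  /* Hoare-style partition */
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i <= j) {
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++; j--;
        }
    }
    hybrid_quicksort(a, lo, j);
    hybrid_quicksort(a, i, hi);
}
```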

4.4 Variants

D. L. Shell33 made substantial improvements to the algorithm; the modified version is called
Shell sort34 . The sorting algorithm compares elements separated by a distance that decreases
on each pass. Shell sort has distinctly improved running times in practical work, with two
simple variants requiring O(n3/2 ) and O(n4/3 ) running time.[5][6]
If the cost of comparisons exceeds the cost of swaps, as is the case for example with string
keys stored by reference or with human interaction (such as choosing one of a pair displayed
side-by-side), then using binary insertion sort[citation needed] may yield better performance.
Binary insertion sort employs a binary search36 to determine the correct location to insert
new elements, and therefore performs ⌈log2 n⌉ comparisons in the worst case, which is
O(n log n). The algorithm as a whole still has a running time of O(n2 ) on average because
of the series of swaps required for each insertion.
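A C sketch of binary insertion sort (illustrative names) might look as follows; the binary search reduces the number of comparisons, but the shifting loop keeps the overall running time quadratic:

```c
/* Binary insertion sort sketch: a binary search finds the insertion
   point in O(log n) comparisons, but making room still takes a linear
   shift, so the whole algorithm remains O(n^2). */
void binary_insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int x = a[i];
        /* find the leftmost slot after all elements <= x in a[0..i-1];
           the "<=" keeps the sort stable */
        int lo = 0, hi = i;
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] <= x)
                lo = mid + 1;
            else
                hi = mid;
        }
        /* shift a[lo..i-1] one place right and drop x into place */
        for (int j = i; j > lo; j--)
            a[j] = a[j - 1];
        a[lo] = x;
    }
}
```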
The number of swaps can be reduced by calculating the position of multiple elements before
moving them. For example, if the target position of two elements is calculated before they
are moved into the proper position, the number of swaps can be reduced by about 25% for
random data. In the extreme case, this variant works similar to merge sort37 .

28 https://en.wikipedia.org/wiki/EEPROM
29 https://en.wikipedia.org/wiki/Flash_memory
30 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
31 https://en.wikipedia.org/wiki/Quicksort
32 https://en.wikipedia.org/wiki/Mergesort
33 https://en.wikipedia.org/wiki/Donald_Shell
34 https://en.wikipedia.org/wiki/Shellsort
36 https://en.wikipedia.org/wiki/Binary_search_algorithm
37 https://en.wikipedia.org/wiki/Merge_sort


A variant named binary merge sort uses a binary insertion sort to sort groups of 32 elements,
followed by a final sort using merge sort38 . It combines the speed of insertion sort on small
data sets with the speed of merge sort on large data sets.[7]
To avoid having to make a series of swaps for each insertion, the input could be stored in
a linked list39 , which allows elements to be spliced into or out of the list in constant time
when the position in the list is known. However, searching a linked list requires sequentially
following the links to the desired position: a linked list does not have random access, so it
cannot use a faster method such as binary search. Therefore, the running time required for
searching is O(n), and the time for sorting is O(n2 ). If a more sophisticated data structure40
(e.g., heap41 or binary tree42 ) is used, the time required for searching and insertion can be
reduced significantly; this is the essence of heap sort43 and binary tree sort44 .
In 2006 Bender, Martin Farach-Colton45 , and Mosteiro published a new variant of insertion
sort called library sort46 or gapped insertion sort that leaves a small number of unused
spaces (i.e., ”gaps”) spread throughout the array. The benefit is that insertions need only
shift elements over until a gap is reached. The authors show that this sorting algorithm
runs with high probability in O(n log n) time.[8]
If a skip list47 is used, the insertion time is brought down to O(log n), and swaps are not
needed because the skip list is implemented on a linked list structure. The final running
time for insertion would be O(n log n).
List insertion sort is a variant of insertion sort. It reduces the number of
movements.[citation needed]

4.4.1 List insertion sort code in C

If the items are stored in a linked list, then the list can be sorted with O(1) additional space.
The algorithm starts with an initially empty (and therefore trivially sorted) list. The input
items are taken off the list one at a time, and then inserted in the proper place in the sorted
list. When the input list is empty, the sorted list has the desired result.

struct LIST * SortList1(struct LIST * pList)
{
    // zero or one element in list
    if (pList == NULL || pList->pNext == NULL)
        return pList;
    // head is the first element of resulting sorted list
    struct LIST * head = NULL;
    while (pList != NULL) {
        struct LIST * current = pList;

38 https://en.wikipedia.org/wiki/Merge_sort
39 https://en.wikipedia.org/wiki/Linked_list
40 https://en.wikipedia.org/wiki/Data_structure
41 https://en.wikipedia.org/wiki/Heap_(data_structure)
42 https://en.wikipedia.org/wiki/Binary_tree
43 https://en.wikipedia.org/wiki/Heap_sort
44 https://en.wikipedia.org/wiki/Binary_tree_sort
45 https://en.wikipedia.org/wiki/Martin_Farach-Colton
46 https://en.wikipedia.org/wiki/Library_sort
47 https://en.wikipedia.org/wiki/Skip_list


        pList = pList->pNext;
        if (head == NULL || current->iValue < head->iValue) {
            // insert into the head of the sorted list
            // or as the first element into an empty sorted list
            current->pNext = head;
            head = current;
        } else {
            // insert current element into proper position in non-empty sorted list
            struct LIST * p = head;
            while (p != NULL) {
                if (p->pNext == NULL ||                 // last element of the sorted list
                    current->iValue < p->pNext->iValue) // middle of the list
                {
                    // insert into middle of the sorted list or as the last element
                    current->pNext = p->pNext;
                    p->pNext = current;
                    break; // done
                }
                p = p->pNext;
            }
        }
    }
    return head;
}

The algorithm below uses a trailing pointer[9] for the insertion into the sorted list. A simpler
recursive method rebuilds the list each time (rather than splicing) and can use O(n) stack
space.

struct LIST
{
    struct LIST * pNext;
    int iValue;
};

struct LIST * SortList(struct LIST * pList)
{
    // zero or one element in list
    if (!pList || !pList->pNext)
        return pList;

    /* build up the sorted list from the empty list */
    struct LIST * pSorted = NULL;

    /* take items off the input list one by one until empty */
    while (pList != NULL) {
        /* remember the head */
        struct LIST * pHead = pList;
        /* trailing pointer for efficient splice */
        struct LIST ** ppTrail = &pSorted;

        /* pop head off list */
        pList = pList->pNext;

        /* splice head into sorted list at proper place */
        while (!(*ppTrail == NULL || pHead->iValue < (*ppTrail)->iValue)) { /* does head belong here? */
            /* no - continue down the list */
            ppTrail = &(*ppTrail)->pNext;
        }

        pHead->pNext = *ppTrail;
        *ppTrail = pHead;
    }


    return pSorted;
}

4.5 References
1. B, J (2000), Programming Pearls, ACM Press/Addison–Wesley, pp. 107–
109
2. S, R49 (1983), Algorithms50 , A-W, . 9551 ,
ISBN52 978-0-201-06672-253 .
3. C, T H.54 ; L, C E.55 ; R, R L.56 ; S,
C57 (2009) [1990]. ”S 2.1: I ”. Introduction to Al-
gorithms58 (3 .). MIT P  MG-H. . 16–18. ISBN59 0-262-
03384-460 .. See in particular p. 18.
4. S, K. ”W    Θ(^2)    ? (-
  ””)”61 . S O.
5. F, R. M.; L, R. B. (1960). ”A H-S S P”.
Communications of the ACM. 3 (1): 20–22. doi62 :10.1145/366947.36695763 .
6. S, R64 (1986). ”A N U B  S”. Journal
of Algorithms. 7 (2): 159–173. doi65 :10.1016/0196-6774(86)90001-566 .
7. ”B M S”67
8. B, M A.; F-C, M68 ; M, M A.
(2006), ”I   O(n log n)”, Theory of Computing Systems, 39 (3): 391–
397, arXiv69 :cs/040700370 , doi71 :10.1007/s00224-005-1237-z72 , MR73 221840974

49 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
50 https://archive.org/details/algorithms00sedg/page/95
51 https://archive.org/details/algorithms00sedg/page/95
52 https://en.wikipedia.org/wiki/ISBN_(identifier)
53 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-06672-2
54 https://en.wikipedia.org/wiki/Thomas_H._Cormen
55 https://en.wikipedia.org/wiki/Charles_E._Leiserson
56 https://en.wikipedia.org/wiki/Ron_Rivest
57 https://en.wikipedia.org/wiki/Clifford_Stein
58 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
59 https://en.wikipedia.org/wiki/ISBN_(identifier)
60 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
61 https://stackoverflow.com/a/17055342
62 https://en.wikipedia.org/wiki/Doi_(identifier)
63 https://doi.org/10.1145%2F366947.366957
64 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
65 https://en.wikipedia.org/wiki/Doi_(identifier)
66 https://doi.org/10.1016%2F0196-6774%2886%2990001-5
67 https://docs.google.com/file/d/0B8KIVX-AaaGiYzcta0pFUXJnNG8
68 https://en.wikipedia.org/wiki/Martin_Farach-Colton
69 https://en.wikipedia.org/wiki/ArXiv_(identifier)
70 http://arxiv.org/abs/cs/0407003
71 https://en.wikipedia.org/wiki/Doi_(identifier)
72 https://doi.org/10.1007%2Fs00224-005-1237-z
73 https://en.wikipedia.org/wiki/MR_(identifier)
74 http://www.ams.org/mathscinet-getitem?mr=2218409


9. H, C (.), ”T P T”, Euler75 , V C S
U,  22 S 2012.

4.6 Further reading


• K, D76 (1998), ”5.2.1: S  I”, The Art of Computer
Programming77 , 3. S  S ( .), A-W, . 80–
105, ISBN78 0-201-89685-079 .

4.7 External links

The Wikibook Algorithm implementation80 has a page on the topic of: Insertion
sort81

Wikimedia Commons has media related to Insertion sort82 .

• Animated Sorting Algorithms: Insertion Sort83 at the Wayback Machine84 (archived 8 March 2015) – graphical demonstration
• Adamovsky, John Paul, Binary Insertion Sort – Scoreboard – Complete Investigation and C Implementation85, Pathcom.
• Insertion Sort – a comparison with other O(n2) sorting algorithms86, UK87: Core War.
• Category:Insertion Sort88 (wiki), LiteratePrograms – implementations of insertion sort in various programming languages


75 http://euler.vcsu.edu:7000/11421/
76 https://en.wikipedia.org/wiki/Donald_Knuth
77 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
78 https://en.wikipedia.org/wiki/ISBN_(identifier)
79 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
80 https://en.wikibooks.org/wiki/Algorithm_implementation
81 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Insertion_sort
82 https://commons.wikimedia.org/wiki/Category:Insertion_sort
https://web.archive.org/web/20150308232109/http://www.sorting-algorithms.com/
83
insertion-sort
84 https://en.wikipedia.org/wiki/Wayback_Machine
85 http://www.pathcom.com/~vadco/binary.html
86 http://corewar.co.uk/assembly/insertion.htm
87 https://en.wikipedia.org/wiki/United_Kingdom
88 http://literateprograms.org/Category:Insertion_sort

5 Merge sort

A divide and combine sorting algorithm


Merge sort
An example of merge sort. First divide the list into the smallest unit (1 element), then compare each element with the adjacent list to sort and merge the two adjacent lists. Finally all the elements are sorted and merged.
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n log n)
Best-case performance: O(n log n) typical, O(n) natural variant
Average performance: O(n log n)
Worst-case space complexity: O(n) total with O(n) auxiliary, O(1) auxiliary with linked lists[1]

In computer science6, merge sort (also commonly spelled mergesort) is an efficient, general-purpose, comparison-based7 sorting algorithm8. Most implementations produce a stable sort9, which means that the order of equal elements is the same in the input and

1 https://en.wikipedia.org/wiki/Wikipedia:No_original_research
2 https://en.wikipedia.org/w/index.php?title=Merge_sort&action=edit
3 https://en.wikipedia.org/wiki/Wikipedia:Verifiability
4 https://en.wikipedia.org/wiki/Wikipedia:Citing_sources#Inline_citations
5 https://en.wikipedia.org/wiki/Help:Maintenance_template_removal
6 https://en.wikipedia.org/wiki/Computer_science
7 https://en.wikipedia.org/wiki/Comparison_sort
8 https://en.wikipedia.org/wiki/Sorting_algorithm
9 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability


output. Merge sort is a divide and conquer algorithm10 that was invented by John von Neu-
mann11 in 1945.[2] A detailed description and analysis of bottom-up mergesort appeared in
a report by Goldstine12 and von Neumann13 as early as 1948.[3]

5.1 Algorithm

Conceptually, a merge sort works as follows:

1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
2. Repeatedly merge14 sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.

5.1.1 Top-down implementation

Example C-like15 code using indices for top-down merge sort algorithm that recursively
splits the list (called runs in this example) into sublists until sublist size is 1, then merges
those sublists to produce a sorted list. The copy-back step is avoided by alternating the direction of the merge with each level of recursion (except for an initial one-time copy). To help understand this, consider an array with two elements: the elements are copied to B[], then merged back to A[]. If there are four elements, when the bottom recursion level is reached, single-element runs from A[] are merged to B[], and then at the next higher level of recursion, those two-element runs are merged to A[]. This pattern continues with each level of recursion.

// Array A[] has the items to sort; array B[] is a work array.
void TopDownMergeSort(A[], B[], n)
{
    CopyArray(A, 0, n, B);           // one time copy of A[] to B[]
    TopDownSplitMerge(B, 0, n, A);   // sort data from B[] into A[]
}

// Sort the given run of array A[] using array B[] as a source.
// iBegin is inclusive; iEnd is exclusive (A[iEnd] is not in the set).
void TopDownSplitMerge(B[], iBegin, iEnd, A[])
{
    if (iEnd - iBegin < 2)                      // if run size == 1
        return;                                 //   consider it sorted
    // split the run longer than 1 item into halves
    iMiddle = (iEnd + iBegin) / 2;              // iMiddle = mid point
    // recursively sort both runs from array A[] into B[]
    TopDownSplitMerge(A, iBegin, iMiddle, B);   // sort the left run
    TopDownSplitMerge(A, iMiddle, iEnd, B);     // sort the right run
    // merge the resulting runs from array B[] into A[]
    TopDownMerge(B, iBegin, iMiddle, iEnd, A);
}

10 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
11 https://en.wikipedia.org/wiki/John_von_Neumann
12 https://en.wikipedia.org/wiki/Herman_Goldstine
13 https://en.wikipedia.org/wiki/John_von_Neumann
14 https://en.wikipedia.org/wiki/Merge_algorithm
15 https://en.wikipedia.org/wiki/C-like


// Left source half is  A[iBegin:iMiddle-1].
// Right source half is A[iMiddle:iEnd-1].
// Result is            B[iBegin:iEnd-1].
void TopDownMerge(A[], iBegin, iMiddle, iEnd, B[])
{
    i = iBegin, j = iMiddle;

    // While there are elements in the left or right runs...
    for (k = iBegin; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iMiddle && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(A[], iBegin, iEnd, B[])
{
    for (k = iBegin; k < iEnd; k++)
        B[k] = A[k];
}

5.1.2 Bottom-up implementation

Example C-like code using indices for bottom-up merge sort algorithm which treats the
list as an array of n sublists (called runs in this example) of size 1, and iteratively merges
sub-lists back and forth between two buffers:

// array A[] has the items to sort; array B[] is a work array
void BottomUpMergeSort(A[], B[], n)
{
    // Each 1-element run in A is already "sorted".
    // Make successively longer sorted runs of length 2, 4, 8, 16... until the whole array is sorted.
    for (width = 1; width < n; width = 2 * width)
    {
        // Array A is full of runs of length width.
        for (i = 0; i < n; i = i + 2 * width)
        {
            // Merge two runs: A[i:i+width-1] and A[i+width:i+2*width-1] to B[]
            // or copy A[i:n-1] to B[] ( if (i+width >= n) )
            BottomUpMerge(A, i, min(i+width, n), min(i+2*width, n), B);
        }
        // Now work array B is full of runs of length 2*width.
        // Copy array B to array A for the next iteration.
        // A more efficient implementation would swap the roles of A and B.
        CopyArray(B, A, n);
        // Now array A is full of runs of length 2*width.
    }
}

// Left run is  A[iLeft :iRight-1].
// Right run is A[iRight:iEnd-1].
void BottomUpMerge(A[], iLeft, iRight, iEnd, B[])
{
    i = iLeft, j = iRight;
    // While there are elements in the left or right runs...
    for (k = iLeft; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iRight && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(B[], A[], n)
{
    for (i = 0; i < n; i++)
        A[i] = B[i];
}

5.1.3 Top-down implementation using lists

Pseudocode16 for top-down merge sort algorithm which recursively divides the input list
into smaller sublists until the sublists are trivially sorted, and then merges the sublists
while returning up the call chain.
function merge_sort(list m) is
    // Base case. A list of zero or one elements is sorted, by definition.
    if length of m ≤ 1 then
        return m

    // Recursive case. First, divide the list into equal-sized sublists
    // consisting of the first half and second half of the list.
    // This assumes lists start at index 0.
    var left := empty list
    var right := empty list
    for each x with index i in m do
        if i < (length of m)/2 then
            add x to left
        else
            add x to right

    // Recursively sort both sublists.
    left := merge_sort(left)
    right := merge_sort(right)

    // Then merge the now-sorted sublists.
    return merge(left, right)

In this example, the merge function merges the left and right sublists.
function merge(left, right) is
    var result := empty list

    while left is not empty and right is not empty do
        if first(left) ≤ first(right) then
            append first(left) to result
            left := rest(left)
        else
            append first(right) to result
            right := rest(right)

    // Either left or right may have elements left; consume them.
    // (Only one of the following loops will actually be entered.)
    while left is not empty do
        append first(left) to result
        left := rest(left)
    while right is not empty do
        append first(right) to result
        right := rest(right)
    return result

16 https://en.wikipedia.org/wiki/Pseudocode
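As a concrete sketch, the merge() pseudocode above can be rendered in C using the struct LIST node type from the insertion sort chapter (the field names pNext/iValue are carried over as an assumption; merge_lists is a made-up name). Instead of appending to a result list, it relinks the existing nodes through a trailing pointer:

```c
#include <assert.h>
#include <stddef.h>

/* Linked-list node in the style of the insertion sort example earlier. */
struct LIST { struct LIST *pNext; int iValue; };

/* Merge two already-sorted lists, as in the merge() pseudocode above.
   Iterative, relinking nodes via a trailing pointer instead of copying. */
static struct LIST *merge_lists(struct LIST *left, struct LIST *right)
{
    struct LIST *result = NULL;
    struct LIST **ppTail = &result;
    while (left != NULL && right != NULL) {
        if (left->iValue <= right->iValue) {   /* stable: left wins ties */
            *ppTail = left;
            left = left->pNext;
        } else {
            *ppTail = right;
            right = right->pNext;
        }
        ppTail = &(*ppTail)->pNext;
    }
    *ppTail = (left != NULL) ? left : right;   /* consume the leftovers */
    return result;
}
```

Because the remaining list is attached in one step, the two trailing "consume" loops of the pseudocode collapse into a single pointer assignment.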

5.1.4 Bottom-up implementation using lists

Pseudocode17 for bottom-up merge sort algorithm which uses a small fixed-size array of references to nodes, where array[i] is either a reference to a list of size 2^i or nil18. node is a reference or pointer to a node. The merge() function would be similar to the one shown in the top-down merge lists example; it merges two already sorted lists, and handles empty lists. In this case, merge() would use node for its input parameters and return value.
function merge_sort(node head) is
    // return if empty list
    if head = nil then
        return nil
    var node array[32]; initially all nil
    var node result
    var node next
    var int i
    result := head
    // merge nodes into array
    while result ≠ nil do
        next := result.next;
        result.next := nil
        for (i = 0; (i < 32) && (array[i] ≠ nil); i += 1) do
            result := merge(array[i], result)
            array[i] := nil
        // do not go past end of array
        if i = 32 then
            i -= 1
        array[i] := result
        result := next
    // merge array into single list
    result := nil
    for (i = 0; i < 32; i += 1) do
        result := merge(array[i], result)
    return result

5.2 Natural merge sort

A natural merge sort is similar to a bottom-up merge sort except that any naturally occur-
ring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (al-
ternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being
convenient data structures (used as FIFO queues19 or LIFO stacks20 ).[4] In the bottom-up
merge sort, the starting point assumes each run is one item long. In practice, random input

17 https://en.wikipedia.org/wiki/Pseudocode
18 https://en.wikipedia.org/wiki/Null_pointer
19 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
20 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)


data will have many short runs that just happen to be sorted. In the typical case, the
natural merge sort may not need as many passes because there are fewer runs to merge.
In the best case, the input is already sorted (i.e., is one run), so the natural merge sort
need only make one pass through the data. In many practical cases, long natural runs
are present, and for that reason natural merge sort is exploited as the key component of
Timsort21 . Example:
Start : 3 4 2 1 7 5 8 9 0 6
Select runs : (3 4)(2)(1 7)(5 8 9)(0 6)
Merge : (2 3 4)(1 5 7 8 9)(0 6)
Merge : (1 2 3 4 5 7 8 9)(0 6)
Merge : (0 1 2 3 4 5 6 7 8 9)

Tournament replacement selection sorts22 are used to gather the initial runs for external
sorting algorithms.
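The run-selection step in the example above can be sketched in C. run_end is a hypothetical helper that finds the end of a maximal non-descending run; a bitonic variant would additionally accept descending runs and reverse them before merging:

```c
#include <assert.h>
#include <stddef.h>

/* Return the end (exclusive) of the ascending run starting at `start`.
   A hypothetical helper for the run-selection phase of a natural merge
   sort; the merge phase would then combine adjacent runs pairwise. */
static size_t run_end(const int a[], size_t start, size_t n)
{
    size_t i = start + 1;
    while (i < n && a[i - 1] <= a[i])   /* extend while non-descending */
        i++;
    return i;
}
```

Applied to the input 3 4 2 1 7 5 8 9 0 6 from the example, successive calls yield exactly the runs (3 4)(2)(1 7)(5 8 9)(0 6).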

21 https://en.wikipedia.org/wiki/Timsort
22 https://en.wikipedia.org/wiki/Tournament_sort


5.3 Analysis

Figure 14 A recursive merge sort algorithm used to sort an array of 7 integer values.
These are the steps a human would take to emulate merge sort (top-down).

In sorting n objects, merge sort has an average23 and worst-case performance24 of O25(n log n). If the running time of merge sort for a list of length n is T(n), then the recurrence T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists). The closed form follows from the master theorem for divide-and-conquer recurrences26.

23 https://en.wikipedia.org/wiki/Average_performance
24 https://en.wikipedia.org/wiki/Worst-case_performance
25 https://en.wikipedia.org/wiki/Big_O_notation
26 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)


In the worst case, the number of comparisons merge sort makes is given by the sorting
numbers27. These numbers are equal to or slightly smaller than (n ⌈lg28 n⌉ − 2^⌈lg n⌉ + 1),
which is between (n lg n − n + 1) and (n lg n + n + O(lg n)).[5]
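The bound can be checked empirically with a hedged C sketch that instruments a top-down merge sort with a comparison counter and evaluates n⌈lg n⌉ − 2^⌈lg n⌉ + 1 (the helper names merge_count and sorting_number are made up for this example):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Count the key comparisons made by a top-down merge sort of
   a[lo..hi-1], using work array b[]. An instrumented sketch. */
static long merge_count(int a[], int b[], size_t lo, size_t hi)
{
    if (hi - lo < 2)
        return 0;
    size_t mid = lo + (hi - lo) / 2;
    long c = merge_count(a, b, lo, mid) + merge_count(a, b, mid, hi);
    size_t i = lo, j = mid;
    for (size_t k = lo; k < hi; k++) {
        if (i < mid && j < hi)
            c++;                               /* one key comparison */
        if (i < mid && (j >= hi || a[i] <= a[j]))
            b[k] = a[i++];
        else
            b[k] = a[j++];
    }
    memcpy(a + lo, b + lo, (hi - lo) * sizeof a[0]);
    return c;
}

/* n*ceil(lg n) - 2^ceil(lg n) + 1, the worst-case bound from the text. */
static long sorting_number(long n)
{
    long ceil_lg = 0, p = 1;
    while (p < n) { p *= 2; ceil_lg++; }
    return n * ceil_lg - p + 1;
}
```

For any input of length n, the measured comparison count stays at or below sorting_number(n).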
For large n and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches α·n fewer than the worst case, where α = −1 + Σ_{k=0}^{∞} 1/(2^k + 1) ≈ 0.2645.

In the worst case, merge sort does about 39% fewer comparisons than quicksort29 does in the average case. In terms of moves, merge sort's worst-case complexity is O30(n log n), the same complexity as quicksort's best case, and merge sort's best case takes about half as many iterations as the worst case.[citation needed]
Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can
only be efficiently accessed sequentially, and is thus popular in languages such as Lisp32 ,
where sequentially accessed data structures are very common. Unlike some (efficient) im-
plementations of quicksort, merge sort is a stable sort.
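The stability claim can be made concrete with a small sketch: tagging each record with its original position shows that a merge using a <= comparison (the left element wins ties) preserves the relative order of equal keys. The names below are illustrative, not from any standard library:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct rec { int key; int order; };   /* order = original position */

/* Stable merge sort over records: ties are broken by taking the left
   element first (key <= comparison), so equal keys keep their order. */
static void rec_sort(struct rec a[], struct rec b[], size_t lo, size_t hi)
{
    if (hi - lo < 2)
        return;
    size_t mid = lo + (hi - lo) / 2;
    rec_sort(a, b, lo, mid);
    rec_sort(a, b, mid, hi);
    size_t i = lo, j = mid;
    for (size_t k = lo; k < hi; k++)
        b[k] = (i < mid && (j >= hi || a[i].key <= a[j].key)) ? a[i++] : a[j++];
    memcpy(a + lo, b + lo, (hi - lo) * sizeof a[0]);
}
```

Changing the comparison to a strict < would make the right element win ties and destroy stability, which is exactly the pitfall the <= in the article's merge code avoids.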
Merge sort's most common implementation does not sort in place;[6] therefore, additional memory the size of the input must be allocated for the sorted output to be stored in (see below for versions that need only n/2 extra spaces).

5.4 Variants

Variants of merge sort are primarily concerned with reducing the space complexity and the
cost of copying.
A simple alternative for reducing the space overhead to n/2 is to maintain left and right as
a combined structure, copy only the left part of m into temporary space, and to direct the
merge routine to place the merged output into m. With this version it is better to allocate
the temporary space outside the merge routine, so that only one allocation is needed. The
excessive copying mentioned previously is also mitigated, since the last pair of lines before
the return result statement (function merge in the pseudocode above) become superfluous.
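A hedged C sketch of this n/2-space variant: only the left half is copied into temporary storage, and the merge writes directly back into the input array. The trailing right-half copy loop is unnecessary because unconsumed right elements are already in their final places (half_merge_sort is a made-up name):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Merge sort using only n/2 extra space: copy the left half into
   tmp[], then merge tmp[] and the right half back into a[] in place.
   tmp[] must hold at least (hi - lo) / 2 elements. */
static void half_merge_sort(int a[], int tmp[], size_t lo, size_t hi)
{
    if (hi - lo < 2)
        return;
    size_t mid = lo + (hi - lo) / 2;
    half_merge_sort(a, tmp, lo, mid);
    half_merge_sort(a, tmp, mid, hi);

    size_t nleft = mid - lo;
    memcpy(tmp, a + lo, nleft * sizeof a[0]);   /* only the left half */

    size_t i = 0, j = mid, k = lo;
    while (i < nleft && j < hi)
        a[k++] = (tmp[i] <= a[j]) ? tmp[i++] : a[j++];
    while (i < nleft)                /* right leftovers, if any, are */
        a[k++] = tmp[i++];           /* already in place             */
}
```

The write index k can never overtake the unread right index j (k = lo + i + (j − mid) ≤ j), so no element is overwritten before it is consumed.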
One drawback of merge sort, when implemented on arrays, is its O(n) working memory
requirement. Several in-place33 variants have been suggested:
• Katajainen et al. present an algorithm that requires a constant amount of working mem-
ory: enough storage space to hold one element of the input array, and additional space
to hold O(1) pointers into the input array. They achieve an O(n log n) time bound with
small constants, but their algorithm is not stable.[7]
• Several attempts have been made at producing an in-place merge algorithm that can
be combined with a standard (top-down or bottom-up) merge sort to produce an in-

27 https://en.wikipedia.org/wiki/Sorting_number
28 https://en.wikipedia.org/wiki/Binary_logarithm
29 https://en.wikipedia.org/wiki/Quicksort
30 https://en.wikipedia.org/wiki/Big_O_notation
32 https://en.wikipedia.org/wiki/Lisp_programming_language
33 https://en.wikipedia.org/wiki/In-place_algorithm


place merge sort. In this case, the notion of ”in-place” can be relaxed to mean ”taking
logarithmic stack space”, because standard merge sort requires that amount of space
for its own stack usage. It was shown by Geffert et al. that in-place, stable merging is
possible in O(n log n) time using a constant amount of scratch space, but their algorithm
is complicated and has high constant factors: merging arrays of length n and m can take
5n + 12m + o(m) moves.[8] This high-constant-factor, complicated in-place algorithm was later made simpler and easier to understand: building on the work of Kronrod and others, Bing-Chao Huang and Michael A. Langston[9] presented a straightforward, practical in-place merge that combines sorted lists in linear time using a fixed amount of additional space. Their algorithm takes somewhat more average time than standard merge sort algorithms that are free to exploit O(n) temporary extra memory cells, but by less than a factor of two. Although the algorithm is fast in practice, it is also unstable for some lists, though using similar concepts this problem has been solved. Other in-place algorithms include SymMerge, which takes O((n + m) log (n + m)) time in total and is stable.[10] Plugging such an algorithm into merge sort increases its complexity to the non-linearithmic34, but still quasilinear35, O(n (log n)^2).
• A modern stable linear and in-place merging is block merge sort36 .
An alternative to reduce the copying into multiple lists is to associate a new field of infor-
mation with each key (the elements in m are called keys). This field will be used to link
the keys and any associated information together in a sorted list (a key and its related
information is called a record). Then the merging of the sorted lists proceeds by changing
the link values; no records need to be moved at all. A field which contains only a link will
generally be smaller than an entire record so less space will also be used. This is a standard
sorting technique, not restricted to merge sort.

34 https://en.wikipedia.org/wiki/Linearithmic
35 https://en.wikipedia.org/wiki/Quasilinear_time
36 https://en.wikipedia.org/wiki/Block_merge_sort


5.5 Use with tape drives

Figure 15 Merge sort type algorithms allowed large data sets to be sorted on early
computers that had small random access memories by modern standards. Records were
stored on magnetic tape and processed on banks of magnetic tape drives, such as these
IBM 729s.

An external37 merge sort is practical to run using disk38 or tape39 drives when the data to
be sorted is too large to fit into memory40 . External sorting41 explains how merge sort is
implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is
sequential (except for rewinds at the end of each pass). A minimal implementation can get
by with just two record buffers and a few program variables.
Naming the four tape drives as A, B, C, D, with the original data on A, and using only 2
record buffers, the algorithm is similar to Bottom-up implementation42 , using pairs of tape
drives instead of arrays in memory. The basic algorithm can be described as follows:

37 https://en.wikipedia.org/wiki/External_sorting
38 https://en.wikipedia.org/wiki/Disk_storage
39 https://en.wikipedia.org/wiki/Tape_drive
40 https://en.wikipedia.org/wiki/Primary_storage
41 https://en.wikipedia.org/wiki/External_sorting
42 #Bottom-up_implementation


1. Merge pairs of records from A; writing two-record sublists alternately to C and D.
2. Merge two-record sublists from C and D into four-record sublists; writing these alternately to A and B.
3. Merge four-record sublists from A and B into eight-record sublists; writing these alternately to C and D.
4. Repeat until you have one list containing all the data, sorted, in log2(n) passes.
Instead of starting with very short runs, usually a hybrid algorithm43 is used, where the initial pass will read many records into memory, do an internal sort to create a long run, and then distribute those long runs onto the output set. This step avoids many early passes. For example, an internal sort of 1024 records will save nine passes. Because it yields such a benefit, the internal sort is often made as large as memory allows. In fact, there are techniques that can make the initial runs longer than the available internal memory.[11]
With some overhead, the above algorithm can be modified to use three tapes. O(n log n) running time can also be achieved using two queues44, or a stack45 and a queue, or three stacks. In the other direction, using k > 2 tapes (and O(k) items in memory), we can reduce the number of tape operations by a factor of O(log k) by using a k/2-way merge46.
A more sophisticated merge sort that optimizes tape (and disk) drive usage is the polyphase
merge sort47 .

43 https://en.wikipedia.org/wiki/Hybrid_algorithm
44 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
45 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
46 https://en.wikipedia.org/wiki/K-way_merge_algorithm
47 https://en.wikipedia.org/wiki/Polyphase_merge_sort


5.6 Optimizing merge sort

Figure 16 Tiled merge sort applied to an array of random integers. The horizontal axis
is the array index and the vertical axis is the integer.

On modern computers, locality of reference48 can be of paramount importance in software optimization49, because multilevel memory hierarchies50 are used. Cache51-aware versions
of the merge sort algorithm, whose operations have been specifically chosen to minimize
the movement of pages in and out of a machine's memory cache, have been proposed. For
example, the tiled merge sort algorithm stops partitioning subarrays when subarrays of
size S are reached, where S is the number of data items fitting into a CPU's cache. Each
of these subarrays is sorted with an in-place sorting algorithm such as insertion sort52 ,
to discourage memory swaps, and normal merge sort is then completed in the standard

48 https://en.wikipedia.org/wiki/Locality_of_reference
49 https://en.wikipedia.org/wiki/Software_optimization
50 https://en.wikipedia.org/wiki/Memory_hierarchy
51 https://en.wikipedia.org/wiki/Cache_(computing)
52 https://en.wikipedia.org/wiki/Insertion_sort


recursive fashion. This algorithm has demonstrated better performance on machines that benefit from cache optimization. (LaMarca & Ladner 199754)
Kronrod (1969)55 suggested an alternative version of merge sort that uses constant additional space. This algorithm was later refined. (Katajainen, Pasanen & Teuhola 199656)
Also, many applications of external sorting58 use a form of merge sort where the input gets split into a larger number of sublists, ideally to a number for which merging them still makes the currently processed set of pages59 fit into main memory.
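The tiled approach described in this section can be sketched in C. TILE is a made-up compile-time stand-in for the cache-derived cutoff S; subarrays at or below the cutoff are insertion sorted in place, and larger ones are merged normally:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum { TILE = 32 };   /* stand-in for S, the cache-sized cutoff */

/* Tiled merge sort sketch: small subarrays are handled by in-place
   insertion sort to avoid work-array traffic; larger subarrays are
   split and merged through work array b[] as usual. */
static void tiled_sort(int a[], int b[], size_t lo, size_t hi)
{
    if (hi - lo <= TILE) {
        for (size_t i = lo + 1; i < hi; i++) {   /* insertion sort */
            int v = a[i];
            size_t j = i;
            while (j > lo && a[j - 1] > v) {
                a[j] = a[j - 1];
                j--;
            }
            a[j] = v;
        }
        return;
    }
    size_t mid = lo + (hi - lo) / 2;
    tiled_sort(a, b, lo, mid);
    tiled_sort(a, b, mid, hi);
    size_t i = lo, j = mid;                      /* standard merge */
    for (size_t k = lo; k < hi; k++)
        b[k] = (i < mid && (j >= hi || a[i] <= a[j])) ? a[i++] : a[j++];
    memcpy(a + lo, b + lo, (hi - lo) * sizeof a[0]);
}
```

In a real implementation the cutoff would be derived from the cache size and element size rather than fixed at 32.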

5.7 Parallel merge sort

Merge sort parallelizes well due to the use of the divide-and-conquer60 method. Several
different parallel variants of the algorithm have been developed over the years. Some parallel
merge sort algorithms are strongly related to the sequential top-down merge algorithm while
others have a different general structure and use the K-way merge61 method.

5.7.1 Merge sort with parallel recursion

The sequential merge sort procedure can be described in two phases, the divide phase and
the merge phase. The first consists of many recursive calls that repeatedly perform the same
division process until the subsequences are trivially sorted (containing one or no element).
An intuitive approach is the parallelization of those recursive calls.[12] Following pseudocode
describes the merge sort with parallel recursion using the fork and join62 keywords:
// Sort elements lo through hi (exclusive) of array A.
algorithm mergesort(A, lo, hi) is
    if lo+1 < hi then  // Two or more elements.
        mid := ⌊(lo + hi) / 2⌋
        fork mergesort(A, lo, mid)
        mergesort(A, mid, hi)
        join
        merge(A, lo, mid, hi)
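A hedged POSIX-threads rendering of the fork/join pseudocode above. As the text notes, spawning a thread at every recursion level is the trivial variant and does not parallelize well; a production version would stop spawning below a size cutoff. The names msort and msort_thread are made up for this sketch:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>
#include <string.h>

struct span { int *a; int *b; size_t lo, hi; };

static void msort(int a[], int b[], size_t lo, size_t hi);

static void *msort_thread(void *arg)
{
    struct span *s = arg;
    msort(s->a, s->b, s->lo, s->hi);
    return NULL;
}

/* "fork" the left half onto a new thread, sort the right half on the
   current one, then "join". The two halves touch disjoint ranges of
   both arrays, so no locking is needed before the join. */
static void msort(int a[], int b[], size_t lo, size_t hi)
{
    if (hi - lo < 2)
        return;
    size_t mid = lo + (hi - lo) / 2;
    struct span left = { a, b, lo, mid };
    pthread_t t;
    if (pthread_create(&t, NULL, msort_thread, &left) == 0) {
        msort(a, b, mid, hi);
        pthread_join(t, NULL);
    } else {                         /* fall back to sequential */
        msort(a, b, lo, mid);
        msort(a, b, mid, hi);
    }
    size_t i = lo, j = mid;          /* sequential merge: the bottleneck */
    for (size_t k = lo; k < hi; k++)
        b[k] = (i < mid && (j >= hi || a[i] <= a[j])) ? a[i++] : a[j++];
    memcpy(a + lo, b + lo, (hi - lo) * sizeof a[0]);
}
```

The merge itself remains sequential here, which is exactly why the span of this variant is Θ(n), as discussed next.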

This algorithm is the trivial modification of the sequential version and does not parallelize
well. Therefore, its speedup is not very impressive. It has a span63 of Θ(n), which is
only an improvement of Θ(log n) compared to the sequential version (see Introduction to

54 #CITEREFLaMarcaLadner1997
55 #CITEREFKronrod1969
56 #CITEREFKatajainenPasanenTeuhola1996
58 https://en.wikipedia.org/wiki/External_sorting
59 https://en.wikipedia.org/wiki/Page_(computer_memory)
60 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
61 https://en.wikipedia.org/wiki/K-way_merge_algorithm
62 https://en.wikipedia.org/wiki/Fork%E2%80%93join_model
63 https://en.wikipedia.org/wiki/Analysis_of_parallel_algorithms#Overview


Algorithms64 ). This is mainly due to the sequential merge method, as it is the bottleneck
of the parallel executions.
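As a concrete illustration, the fork/join structure above can be sketched in Python with threads. This is a sketch only: `mergesort` and `merge` are written here from the pseudocode, and CPython's GIL means the threads illustrate the structure rather than real speedup.

```python
import threading

def merge(a, lo, mid, hi):
    # Merge the sorted runs a[lo:mid] and a[mid:hi] back into a[lo:hi].
    merged, i, j = [], lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged += a[i:mid] + a[j:hi]
    a[lo:hi] = merged

def mergesort(a, lo, hi):
    # Sort elements lo through hi (exclusive) of array a.
    if lo + 1 < hi:                    # two or more elements
        mid = (lo + hi) // 2
        left = threading.Thread(target=mergesort, args=(a, lo, mid))
        left.start()                   # "fork" the first recursive call
        mergesort(a, mid, hi)          # second call runs in this thread
        left.join()                    # "join" before the sequential merge
        merge(a, lo, mid, hi)
```

Note that the merge itself runs sequentially after the join, which is exactly the Θ(n) span bottleneck described in the analysis.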

5.7.2 Merge sort with parallel merging

Main article: Merge algorithm § Parallel merge65
Better parallelism can be achieved by using a parallel merge algorithm66 . Cormen et al.67
present a binary variant that merges two sorted sub-sequences into one sorted output
sequence.[12]
In one of the sequences (the longer one, if their lengths are unequal), the element at the
middle index is selected. Its position in the other sequence is determined such that this
sequence would remain sorted if the element were inserted at that position. Thus, one knows
how many elements of both sequences are smaller, and the position of the selected element
in the output sequence can be calculated. For the partial sequences of the smaller and
larger elements created in this way, the merge algorithm is again executed in parallel until
the base case of the recursion is reached.
The following pseudocode shows the modified parallel merge sort method using the parallel
merge algorithm (adopted from Cormen et al.).
/**
 * A: Input array
 * B: Output array
 * lo: lower bound
 * hi: upper bound
 * off: offset
 */
algorithm parallelMergesort(A, lo, hi, B, off) is
    len := hi - lo + 1
    if len == 1 then
        B[off] := A[lo]
    else let T[1..len] be a new array
        mid := ⌊(lo + hi) / 2⌋
        mid' := mid - lo + 1
        fork parallelMergesort(A, lo, mid, T, 1)
        parallelMergesort(A, mid + 1, hi, T, mid' + 1)
        join
        parallelMerge(T, 1, mid', mid' + 1, len, B, off)
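The parallelMerge procedure itself is not spelled out in the text. A sequential Python sketch of the binary divide-and-conquer merge described above (0-based, inclusive indices; the function name `parallel_merge` is ours, and the two recursive calls at the end are the ones a parallel implementation would fork) might look like:

```python
import bisect

def parallel_merge(T, p1, r1, p2, r2, B, p3):
    # Merge sorted runs T[p1..r1] and T[p2..r2] into B starting at index p3.
    # The two recursive calls operate on disjoint pieces of B, so they
    # could run in parallel.
    n1, n2 = r1 - p1 + 1, r2 - p2 + 1
    if n1 < n2:                        # ensure the first run is the longer one
        p1, r1, p2, r2 = p2, r2, p1, r1
        n1, n2 = n2, n1
    if n1 == 0:                        # both runs empty
        return
    q1 = (p1 + r1) // 2                # middle of the longer run
    x = T[q1]
    q2 = bisect.bisect_left(T, x, p2, r2 + 1)  # binary search in the other run
    q3 = p3 + (q1 - p1) + (q2 - p2)    # output position of x
    B[q3] = x
    parallel_merge(T, p1, q1 - 1, p2, q2 - 1, B, p3)   # smaller elements
    parallel_merge(T, q1 + 1, r1, q2, r2, B, q3 + 1)   # larger elements
```

The binary search is what bounds each recursion level by O(log n) and yields the Θ(log(n)²) span for merging.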

In order to analyze a recurrence relation68 for the worst case span, the recursive calls
of parallelMergesort have to be incorporated only once due to their parallel execution,
obtaining

T∞sort(n) = T∞sort(n/2) + T∞merge(n) = T∞sort(n/2) + Θ((log n)²).
For detailed information about the complexity of the parallel merge procedure, see Merge
algorithm69 .
The solution of this recurrence is given by

64 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
65 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge
66 https://en.wikipedia.org/wiki/Merge_algorithm
67 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
68 https://en.wikipedia.org/wiki/Recurrence_relation
69 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge


T∞sort = Θ((log n)³).

This parallel merge algorithm reaches a parallelism of Θ(n / (log n)²), which is much higher
than the parallelism of the previous algorithm. Such a sort can perform well in practice when
combined with a fast stable sequential sort, such as insertion sort70 , and a fast sequential
merge as a base case for merging small arrays.[13]

5.7.3 Parallel multiway merge sort

It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there
are usually p > 2 processors available. A better approach may be to use a K-way merge71
method, a generalization of binary merge, in which k sorted sequences are merged together.
This merge variant is well suited to describe a sorting algorithm on a PRAM72[14][15] .

Basic Idea

Figure 17 The parallel multiway mergesort process on four processors t0 to t3 .

70 https://en.wikipedia.org/wiki/Insertion_sort
71 https://en.wikipedia.org/wiki/K-way_merge_algorithm
72 https://en.wikipedia.org/wiki/Parallel_random-access_machine


Given an unsorted sequence of n elements, the goal is to sort the sequence with p available
processors73 . These elements are distributed equally among all processors and sorted locally
using a sequential Sorting algorithm74 . Hence, the sequence consists of sorted sequences
S1 , ..., Sp of length ⌈n/p⌉. For simplification let n be a multiple of p, so that |Si | = n/p for
i = 1, ..., p.
These sequences will be used to perform a multisequence selection/splitter selection. For
j = 1, ..., p, the algorithm determines splitter elements vj with global rank k = j · n/p. Then
the corresponding positions of v1 , ..., vp in each sequence Si are determined with binary
search75 and thus the Si are further partitioned into p subsequences Si,1 , ..., Si,p with
Si,j := {x ∈ Si |rank(vj−1 ) < rank(x) ≤ rank(vj )}.
Furthermore, the elements of S1,i , ..., Sp,i are assigned to processor i, i.e. all elements
between rank (i − 1) · n/p and rank i · n/p, which are distributed over all Si . Thus, each processor
receives a sequence of sorted sequences. The fact that the rank k of the splitter elements
vi was chosen globally, provides two important properties: On the one hand, k was chosen
so that each processor can still operate on n/p elements after assignment. The algorithm is
perfectly load-balanced76 . On the other hand, all elements on processor i are less than or
equal to all elements on processor i + 1. Hence, each processor performs the p-way merge77
locally and thus obtains a sorted sequence from its sub-sequences. Because of the second
property, no further p-way merge has to be performed; the results only have to be put
together in the order of the processor number.

Multisequence selection

In its simplest form, given p sorted sequences S1 , ..., Sp distributed evenly on p processors
and a rank k, the task is to find an element x with a global rank k in the union of the
sequences. Hence, this can be used to divide each Si into two parts at a splitter index li ,
where the lower part contains only elements which are smaller than x, while the elements
bigger than x are located in the upper part.
The presented sequential algorithm returns the indices of the splits in each sequence,
e.g. the indices li in sequences Si such that Si [li ] has a global rank less than k and
rank (Si [li + 1]) ≥ k.[16]
algorithm msSelect(S : Array of sorted Sequences [S_1,..,S_p], k : int) is
    for i = 1 to p do
        (l_i, r_i) = (0, |S_i|-1)

    while there exists i: l_i < r_i do
        // pick pivot element in S_j[l_j],..,S_j[r_j], choose random j uniformly
        v := pickPivot(S, l, r)
        for i = 1 to p do
            m_i = binarySearch(v, S_i[l_i, r_i]) // sequentially
        if m_1 + ... + m_p >= k then // m_1 + ... + m_p is the global rank of v
            r := m // vector assignment
        else

73 https://en.wikipedia.org/wiki/Processor_(computing)
74 https://en.wikipedia.org/wiki/Sorting_algorithm
75 https://en.wikipedia.org/wiki/Binary_search_algorithm
76 https://en.wikipedia.org/wiki/Load_balancing_(computing)
77 https://en.wikipedia.org/wiki/K-way_merge_algorithm


            l := m // vector assignment

    return l
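As a reference point, the splits that msSelect produces can also be computed directly by sorting the union of the sequences. The Python sketch below (the name `ms_select` is ours, and it runs in O(n log n) rather than the expected running time of the randomized search) additionally shows how duplicate keys are resolved so that exactly k elements end up left of the splits:

```python
import bisect

def ms_select(seqs, k):
    # Return split indices l (one per sorted sequence) with sum(l) == k,
    # such that every element left of a split is <= every element right of one.
    union = sorted(v for s in seqs for v in s)
    if k == 0:
        return [0] * len(seqs)
    x = union[k - 1]                                  # element of global rank k
    l = [bisect.bisect_left(s, x) for s in seqs]      # elements strictly < x
    need = k - sum(l)                                 # copies of x still to take
    for i, s in enumerate(seqs):
        dup = bisect.bisect_right(s, x) - l[i]        # copies of x in this run
        take = min(dup, need)
        l[i] += take
        need -= take
    return l
```

The duplicate handling matters: with repeated keys, splitting every sequence at bisect_left (or bisect_right) alone cannot in general hit the exact rank k.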

For the complexity analysis the PRAM78 model is chosen. If the data is evenly dis-
tributed over all p processors, the p-fold execution of the binarySearch method has a running
time of O(p log(n/p)). The expected recursion depth is O(log(Σi |Si |)) = O(log(n)) as in the
ordinary Quickselect79 . Thus the overall expected running time is O(p log(n/p) log(n)).
Applied to the parallel multiway merge sort, this algorithm has to be invoked in parallel
such that all splitter elements of rank i · n/p for i = 1, ..., p are found simultaneously. These
splitter elements can then be used to partition each sequence into p parts, with the same total
running time of O(p log(n/p) log(n)).

Pseudocode

Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. We
assume that there is a barrier synchronization before and after the multisequence selection
such that every processor can determine the splitting elements and the sequence partition
properly.
/**
 * d: Unsorted Array of Elements
 * n: Number of Elements
 * p: Number of Processors
 * return Sorted Array
 */
algorithm parallelMultiwayMergesort(d : Array, n : int, p : int) is
    o := new Array[0, n]                         // the output array
    for i = 1 to p do in parallel                // each processor in parallel
        S_i := d[(i-1) * n/p, i * n/p]           // sequence of length n/p
        sort(S_i)                                // sort locally
        synch
        v_i := msSelect([S_1,...,S_p], i * n/p)  // element with global rank i * n/p
        synch
        (S_i,1 ,..., S_i,p) := sequence_partitioning(S_i, v_1, ..., v_p) // split S_i into subsequences
        o[(i-1) * n/p, i * n/p] := kWayMerge(S_1,i, ..., S_p,i)          // merge and assign to output array
    return o
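Simulated sequentially, the whole procedure can be sketched in Python. In this sketch heapq.merge stands in for kWayMerge, the splitter ranks are computed directly from the sorted union instead of via the parallel multisequence selection, and (as in the text) p is assumed to divide n:

```python
import bisect
import heapq

def multiway_mergesort(d, p):
    n = len(d)
    chunk = n // p                               # assume p divides n
    # local sort on each "processor"
    runs = [sorted(d[i*chunk:(i+1)*chunk]) for i in range(p)]
    union = sorted(d)                            # used only to pick splitters
    out, prev = [], [0] * p
    for j in range(1, p + 1):
        if j < p:
            x = union[j*chunk - 1]               # splitter of global rank j*chunk
            cur = [bisect.bisect_left(r, x) for r in runs]
            need = j*chunk - sum(cur)            # duplicates of x still to take
            for i in range(p):
                take = min(bisect.bisect_right(runs[i], x) - cur[i], need)
                cur[i] += take
                need -= take
        else:
            cur = [chunk] * p
        # "processor" j performs the p-way merge of its piece of every run
        parts = [runs[i][prev[i]:cur[i]] for i in range(p)]
        out.extend(heapq.merge(*parts))
        prev = cur
    return out
```

Each loop iteration plays the role of one processor: by construction it receives exactly n/p elements, all of which are less than or equal to those of the next iteration, so concatenating the merged pieces yields the sorted output.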

Analysis

Firstly, each processor sorts the assigned n/p elements locally using a sorting algorithm with
complexity O((n/p) log(n/p)). After that, the splitter elements have to be calculated in time
O(p log(n/p) log(n)). Finally, each group of p splits has to be merged in parallel by each

78 https://en.wikipedia.org/wiki/Parallel_random-access_machine
79 https://en.wikipedia.org/wiki/Quickselect


processor with a running time of O(log(p) · n/p) using a sequential p-way merge algorithm80 .
Thus, the overall running time is given by

O((n/p) log(n/p) + p log(n/p) log(n) + (n/p) log(p)).

Practical adaption and application

The multiway merge sort algorithm is very scalable through its high parallelization capabil-
ity, which allows the use of many processors. This makes the algorithm a viable candidate
for sorting large amounts of data, such as those processed in computer clusters81 . Also,
since in such systems memory is usually not a limiting resource, the disadvantage of space
complexity of merge sort is negligible. However, other factors become important in such
systems, which are not taken into account when modelling on a PRAM82 . Here, the follow-
ing aspects need to be considered: memory hierarchy83 , when the data does not fit into the
processors' caches, or the communication overhead of exchanging data between processors,
which could become a bottleneck when the data can no longer be accessed via the shared
memory.
Sanders84 et al. have presented in their paper a bulk synchronous parallel85 algorithm for
multilevel multiway mergesort, which divides p processors into r groups of size p′ . All
processors sort locally first. Unlike single level multiway mergesort, these sequences are
then partitioned into r parts and assigned to the appropriate processor groups. These
steps are repeated recursively in those groups. This reduces communication and especially
avoids problems with many small messages. The hierarchical structure of the underlying real
network can be used to define the processor groups (e.g. racks86 , clusters87 ,...).[15]

5.7.4 Further Variants

Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with
Richard Cole using a clever subsampling algorithm to ensure O(1) merge.[17] Other sophisticated
parallel sorting algorithms can achieve the same or better time bounds with a lower
constant. For example, in 1991 David Powers described a parallelized quicksort88 (and a
related radix sort89 ) that can operate in O(log n) time on a CRCW90 parallel random-access
machine91 (PRAM) with n processors by performing partitioning implicitly.[18] Powers fur-
ther shows that a pipelined version of Batcher's Bitonic Mergesort92 at O((log n)2 ) time

80 https://en.wikipedia.org/wiki/Merge_algorithm
81 https://en.wikipedia.org/wiki/Computer_cluster
82 https://en.wikipedia.org/wiki/Parallel_random-access_machine
83 https://en.wikipedia.org/wiki/Memory_hierarchy
84 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
85 https://en.wikipedia.org/wiki/Bulk_synchronous_parallel
86 https://en.wikipedia.org/wiki/19-inch_rack
87 https://en.wikipedia.org/wiki/Computer_cluster
88 https://en.wikipedia.org/wiki/Quicksort
89 https://en.wikipedia.org/wiki/Radix_sort
90 https://en.wikipedia.org/wiki/CRCW
91 https://en.wikipedia.org/wiki/Parallel_random-access_machine
92 https://en.wikipedia.org/wiki/Bitonic_sorter


on a butterfly sorting network93 is in practice actually faster than his O(log n) sorts on a
PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix
and parallel sorting.[19]

5.8 Comparison with other sort algorithms

Although heapsort94 has the same time bounds as merge sort, it requires only Θ(1) auxiliary
space instead of merge sort's Θ(n). On typical modern architectures, efficient quicksort95
implementations generally outperform mergesort for sorting RAM-based arrays.[citation needed]
On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-
access sequential media. Merge sort is often the best choice for sorting a linked list97 : in this
situation it is relatively easy to implement a merge sort in such a way that it requires only
Θ(1) extra space, and the slow random-access performance of a linked list makes some other
algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely
impossible.
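To illustrate the linked-list case, here is a small Python sketch (the Node class is hypothetical; merging works by relinking existing nodes, so apart from the O(log n) recursion stack no extra space proportional to n is needed):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge_sort_list(head):
    # Top-down merge sort on a singly linked list.
    if head is None or head.next is None:
        return head
    # split the list in half with slow/fast pointers
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None
    left, right = merge_sort_list(head), merge_sort_list(mid)
    # merge by relinking nodes instead of copying elements
    dummy = tail = Node(None)
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next
```

Because every step only follows or rewrites next pointers, the algorithm never needs the random access that makes quicksort or heapsort impractical on lists.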
As of Perl98 5.8, merge sort is its default sorting algorithm (it was quicksort in previous
versions of Perl). In Java99 , the Arrays.sort()100 methods use merge sort or a tuned quicksort
depending on the datatypes and for implementation efficiency switch to insertion sort101
when fewer than seven array elements are being sorted.[20] The Linux102 kernel uses merge
sort for its linked lists.[21] Python103 uses Timsort104 , another tuned hybrid of merge sort
and insertion sort, that has become the standard sort algorithm in Java SE 7105 (for arrays
of non-primitive types),[22] on the Android platform106 ,[23] and in GNU Octave107 .[24]

5.9 Notes
1. Skiena (2008108 , p. 122)
2. Knuth (1998109 , p. 158)
3. K, J; T, J L (M 1997). ”A 
   ”110 (PDF). Proceedings of the 3rd Italian Con-

93 https://en.wikipedia.org/wiki/Sorting_network
94 https://en.wikipedia.org/wiki/Heapsort
95 https://en.wikipedia.org/wiki/Quicksort
97 https://en.wikipedia.org/wiki/Linked_list
98 https://en.wikipedia.org/wiki/Perl
99 https://en.wikipedia.org/wiki/Java_platform
https://docs.oracle.com/javase/9/docs/api/java/util/Arrays.html#sort-java.lang.
100
Object:A-
101 https://en.wikipedia.org/wiki/Insertion_sort
102 https://en.wikipedia.org/wiki/Linux
103 https://en.wikipedia.org/wiki/Python_(programming_language)
104 https://en.wikipedia.org/wiki/Timsort
105 https://en.wikipedia.org/wiki/Java_7
106 https://en.wikipedia.org/wiki/Android_(operating_system)
107 https://en.wikipedia.org/wiki/GNU_Octave
108 #CITEREFSkiena2008
109 #CITEREFKnuth1998
110 http://hjemmesider.diku.dk/~jyrki/Paper/CIAC97.pdf

81
Merge sort

ference on Algorithms and Complexity. Italian Conference on Algorithms and Complexity.
Rome. pp. 217–228. CiteSeerX111 10.1.1.86.3154112 . doi113 :10.1007/3-540-62592-5_74114 .
4. Powers, David M. W. and McMahon Graham B. (1983), ”A compendium of interesting
prolog programs”, DCS Technical Report 8313, Department of Computer Science,
University of New South Wales.
5. The worst case number given here does not agree with that given in Knuth116 's Art
of Computer Programming117 , Vol 3. The discrepancy is due to Knuth analyzing a
variant implementation of merge sort that is slightly sub-optimal
6. C; L; R; S. Introduction to Algorithms. p. 151.
ISBN118 978-0-262-03384-8119 .
7. K, J; P, T; T, J (1996). ”P-
 - ”. Nordic J. Computing. 3 (1): 27–40. Cite-
SeerX120 10.1.1.22.8523121 .
8. G, V; K, J; P, T (2000). ”A-
  - ”. Theoretical Computer Science. 237 (1–2):
159–181. doi122 :10.1016/S0304-3975(98)00162-5123 .
9. H, B-C; L, M A. (M 1988). ”P-
 I-P M”. Communications of the ACM. 31 (3): 348–352.
doi124 :10.1145/42392.42403125 .
10. K, P-S; K, A (2004). Stable Minimum Storage Merging by
Symmetric Comparisons. European Symp. Algorithms. Lecture Notes in Computer
Science. 3221. pp. 714–723. CiteSeerX126 10.1.1.102.4612127 . doi128 :10.1007/978-3-
540-30140-0_63129 . ISBN130 978-3-540-23025-0131 .
11. Selection sort. Knuth's snowplow. Natural merge.
12. Cormen et al. 2009132 , pp. 797–805 harvnb error: no target: CITEREFCormenLeis-
ersonRivestStein2009 (help133 )

111 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
112 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.3154
113 https://en.wikipedia.org/wiki/Doi_(identifier)
114 https://doi.org/10.1007%2F3-540-62592-5_74
115 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
116 https://en.wikipedia.org/wiki/Donald_Knuth
117 https://en.wikipedia.org/wiki/Art_of_Computer_Programming
118 https://en.wikipedia.org/wiki/ISBN_(identifier)
119 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03384-8
120 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
121 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.1016%2FS0304-3975%2898%2900162-5
124 https://en.wikipedia.org/wiki/Doi_(identifier)
125 https://doi.org/10.1145%2F42392.42403
126 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
127 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.4612
128 https://en.wikipedia.org/wiki/Doi_(identifier)
129 https://doi.org/10.1007%2F978-3-540-30140-0_63
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-23025-0
132 #CITEREFCormenLeisersonRivestStein2009
133 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors


13. Victor J. Duvanenko ”Parallel Merge Sort” Dr. Dobb's Journal & blog[1]134 and
GitHub repo C++ implementation [2]135
14. Peter Sanders, Johannes Singler. 2008. Lecture Parallel algorithms. Last visited
05.02.2020.136
15. ”Practical Massively Parallel Sorting | Proceedings of the 27th
ACM symposium on Parallelism in Algorithms and Architectures”.
doi137 :10.1145/2755573.2755595138 .
16. Peter Sanders. 2019. Lecture Parallel algorithms. Last visited 05.02.2020.140
17. Cole, Richard (August 1988). ”Parallel merge sort”. SIAM J. Comput.
17 (4): 770–785. CiteSeerX141 10.1.1.464.7118142 . doi143 :10.1137/0217049144 .
18. Powers, David M. W. Parallelized Quicksort and Radixsort with Optimal Speedup146 ,
Proceedings of International Conference on Parallel Computing Technologies. Novosi-
birsk147 . 1991.
19. David M. W. Powers, Parallel Unification: Practical Complexity148 , Australasian
Computer Architecture Workshop, Flinders University, January 1995
20. OpenJDK src/java.base/share/classes/java/util/Arrays.java @ 53904:9c3fe09f69bc149
21. linux kernel /lib/list_sort.c150
22. ”Commit 6804124: Replace ”modified mergesort” in
java.util.Arrays.sort with timsort”151 . Java Development Kit 7 Hg repo.
Archived152 from the original on 2018-01-26. Retrieved 24 Feb 2011.
23. ”Class: java.util.TimSort<T>”153 . Android JDK Documentation. Archived
from the original154 on January 20, 2015. Retrieved 19 Jan 2015.
24. ”liboctave/util/oct-sort.cc”155 . Mercurial repository of Octave source code.
Lines 23-25 of the initial comment block. Retrieved 18 Feb 2013. Code stolen in large

134 https://duvanenko.tech.blog/2018/01/13/parallel-merge-sort/
135 https://github.com/DragonSpit/ParallelAlgorithms
136 http://algo2.iti.kit.edu/sanders/courses/paralg08/singler.pdf
137 https://en.wikipedia.org/wiki/Doi_(identifier)
138 https://doi.org/10.1145%2F2755573.2755595
139 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
140 http://algo2.iti.kit.edu/sanders/courses/paralg19/vorlesung.pdf
141 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
142 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.464.7118
143 https://en.wikipedia.org/wiki/Doi_(identifier)
144 https://doi.org/10.1137%2F0217049
145 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
146 http://citeseer.ist.psu.edu/327487.html
147 https://en.wikipedia.org/wiki/Novosibirsk
148 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
https://hg.openjdk.java.net/jdk/jdk/file/9c3fe09f69bc/src/java.base/share/classes/
149
java/util/Arrays.java#l1331
150 https://github.com/torvalds/linux/blob/master/lib/list_sort.c
151 http://hg.openjdk.java.net/jdk7/jdk7/jdk/rev/bfd7abda8f79
https://web.archive.org/web/20180126184957/http://hg.openjdk.java.net/jdk7/jdk7/jdk/
152
rev/bfd7abda8f79
https://web.archive.org/web/20150120063131/https://android.googlesource.com/platform/
153
libcore/%2B/jb-mr2-release/luni/src/main/java/java/util/TimSort.java
https://android.googlesource.com/platform/libcore/+/jb-mr2-release/luni/src/main/
154
java/java/util/TimSort.java
155 http://hg.savannah.gnu.org/hgweb/octave/file/0486a29d780f/liboctave/util/oct-sort.cc


part from Python's, listobject.c, which itself had no license header. However, thanks
to Tim Peters156 for the parts of the code I ripped-off.

5.10 References
• C, T H.157 ; L, C E.158 ; R, R L.159 ; S,
C160 (2009) [1990]. Introduction to Algorithms161 (3 .). MIT P 
MG-H. ISBN162 0-262-03384-4163 .CS1 maint: ref=harv (link164 )
• K, J; P, T; T, J (1996). ”P -
 ”165 . Nordic Journal of Computing. 3. pp. 27–40. ISSN166 1236-
6064167 . Archived from the original168 on 2011-08-07. Retrieved 2009-04-04.CS1 maint:
ref=harv (link169 ). Also Practical In-Place Mergesort170 . Also [3]171
• K, D172 (1998). ”S 5.2.4: S  M”. Sorting and
Searching. The Art of Computer Programming173 . 3 (2nd ed.). Addison-Wesley.
pp. 158–168. ISBN174 0-201-89685-0175 .CS1 maint: ref=harv (link176 )
• K, M. A. (1969). ”O    
”. Soviet Mathematics - Doklady. 10. p. 744.CS1 maint: ref=harv (link177 )
• LM, A.; L, R. E. (1997). ”T      -
  ”. Proc. 8th Ann. ACM-SIAM Symp. On Discrete Algorithms
(SODA97): 370–379. CiteSeerX178 10.1.1.31.1153179 .CS1 maint: ref=harv (link180 )

156 https://en.wikipedia.org/wiki/Tim_Peters_(software_engineer)
157 https://en.wikipedia.org/wiki/Thomas_H._Cormen
158 https://en.wikipedia.org/wiki/Charles_E._Leiserson
159 https://en.wikipedia.org/wiki/Ron_Rivest
160 https://en.wikipedia.org/wiki/Clifford_Stein
161 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
164 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
https://web.archive.org/web/20110807033704/http://www.diku.dk/hjemmesider/ansatte/
165
jyrki/Paper/mergesort_NJC.ps
166 https://en.wikipedia.org/wiki/ISSN_(identifier)
167 http://www.worldcat.org/issn/1236-6064
168 http://www.diku.dk/hjemmesider/ansatte/jyrki/Paper/mergesort_NJC.ps
169 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
170 http://citeseer.ist.psu.edu/katajainen96practical.html
171 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
172 https://en.wikipedia.org/wiki/Donald_Knuth
173 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
174 https://en.wikipedia.org/wiki/ISBN_(identifier)
175 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
176 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
177 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
178 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
179 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1153
180 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv

• Skiena, Steven S.181 (2008). "4.5: Mergesort: Sorting by Divide-and-Conquer". The Algorithm Design Manual (2nd ed.). Springer. pp. 120–125. ISBN182 978-1-84800-069-8183. CS1 maint: ref=harv (link184)
• Sun Microsystems. "Arrays API (Java SE 6)"185. Retrieved 2007-11-19.
• Oracle Corporation. "Arrays (Java SE 10 & JDK 10)"186. Retrieved 2018-07-23.

5.11 External links

The Wikibook Algorithm implementation187 has a page on the topic of: Merge
sort188

• Animated Sorting Algorithms: Merge Sort189 at the Wayback Machine190 (archived 6 March 2015) – graphical demonstration
• Open Data Structures - Section 11.1.1 - Merge Sort191, Pat Morin192


181 https://en.wikipedia.org/wiki/Steven_Skiena
182 https://en.wikipedia.org/wiki/ISBN_(identifier)
183 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
184 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
185 http://java.sun.com/javase/6/docs/api/java/util/Arrays.html
186 https://docs.oracle.com/javase/10/docs/api/java/util/Arrays.html
187 https://en.wikibooks.org/wiki/Algorithm_implementation
188 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Merge_sort
189 https://web.archive.org/web/20150306071601/http://www.sorting-algorithms.com/merge-sort
190 https://en.wikipedia.org/wiki/Wayback_Machine
191 http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_Sorti.html#SECTION001411000000000000000
192 https://en.wikipedia.org/wiki/Pat_Morin

6 Merge sort

A divide-and-conquer sorting algorithm


Merge sort
An example of merge sort. First divide the list into the smallest unit (1 element), then compare each element with the adjacent list to sort and merge the two adjacent lists. Finally all the elements are sorted and merged.
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n log n)
Best-case performance: O(n log n) typical, O(n) natural variant
Average performance: O(n log n)
Worst-case space complexity: O(n) total with O(n) auxiliary, O(1) auxiliary with linked lists[1]

In computer science6, merge sort (also commonly spelled mergesort) is an efficient, general-purpose, comparison-based7 sorting algorithm8. Most implementations produce a stable sort9, which means that the order of equal elements is the same in the input and

6 https://en.wikipedia.org/wiki/Computer_science
7 https://en.wikipedia.org/wiki/Comparison_sort
8 https://en.wikipedia.org/wiki/Sorting_algorithm
9 https://en.wikipedia.org/wiki/Sorting_algorithm#Stability


output. Merge sort is a divide and conquer algorithm10 that was invented by John von Neu-
mann11 in 1945.[2] A detailed description and analysis of bottom-up mergesort appeared in
a report by Goldstine12 and von Neumann13 as early as 1948.[3]

6.1 Algorithm

Conceptually, a merge sort works as follows:

1. Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
2. Repeatedly merge14 sublists to produce new sorted sublists until there is only one sublist remaining. This will be the sorted list.
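These two steps can be rendered directly as runnable code. The following Python sketch is illustrative (the function and helper names are ours; the article's own examples below use C-like pseudocode): it builds one-element runs and then repeatedly merges adjacent pairs until one run remains.

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal elements in order (stable)
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])              # at most one of these two extends
    result.extend(right[j:])             # actually appends anything
    return result

def merge_sort(items):
    # Step 1: n sublists of one element each.
    runs = [[x] for x in items]
    # Step 2: repeatedly merge pairs of sublists until one remains.
    while len(runs) > 1:
        runs = [merge(runs[i], runs[i + 1]) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0] if runs else []
```

Because the helper compares with <=, equal elements keep their relative order, so this sketch is stable, like most merge sort implementations.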

6.1.1 Top-down implementation

Example C-like15 code using indices for the top-down merge sort algorithm that recursively splits the list (called runs in this example) into sublists until sublist size is 1, then merges those sublists to produce a sorted list. The copy-back step is avoided by alternating the direction of the merge with each level of recursion (except for an initial one-time copy). To help understand this, consider an array with two elements. The elements are copied to B[], then merged back to A[]. If there are four elements, when the bottom of the recursion is reached, single-element runs from A[] are merged to B[], and then at the next higher level of recursion, those two-element runs are merged to A[]. This pattern continues with each level of recursion.

// Array A[] has the items to sort; array B[] is a work array.
void TopDownMergeSort(A[], B[], n)
{
    CopyArray(A, 0, n, B);            // one-time copy of A[] to B[]
    TopDownSplitMerge(B, 0, n, A);    // sort data from B[] into A[]
}

// Sort the given run of array A[] using array B[] as a source.
// iBegin is inclusive; iEnd is exclusive (A[iEnd] is not in the set).
void TopDownSplitMerge(B[], iBegin, iEnd, A[])
{
    if (iEnd - iBegin < 2)                        // if run size == 1
        return;                                   //   consider it sorted
    // split the run longer than 1 item into halves
    iMiddle = (iEnd + iBegin) / 2;                // iMiddle = mid point
    // recursively sort both runs from array A[] into B[]
    TopDownSplitMerge(A, iBegin, iMiddle, B);     // sort the left run
    TopDownSplitMerge(A, iMiddle, iEnd, B);       // sort the right run
    // merge the resulting runs from array B[] into A[]
    TopDownMerge(B, iBegin, iMiddle, iEnd, A);
}

10 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
11 https://en.wikipedia.org/wiki/John_von_Neumann
12 https://en.wikipedia.org/wiki/Herman_Goldstine
13 https://en.wikipedia.org/wiki/John_von_Neumann
14 https://en.wikipedia.org/wiki/Merge_algorithm
15 https://en.wikipedia.org/wiki/C-like


// Left source half is  A[iBegin:iMiddle-1].
// Right source half is A[iMiddle:iEnd-1].
// Result is            B[iBegin:iEnd-1].
void TopDownMerge(A[], iBegin, iMiddle, iEnd, B[])
{
    i = iBegin, j = iMiddle;

    // While there are elements in the left or right runs...
    for (k = iBegin; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iMiddle && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(A[], iBegin, iEnd, B[])
{
    for (k = iBegin; k < iEnd; k++)
        B[k] = A[k];
}
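For readers who want to execute the scheme, the alternating-buffer algorithm above can be transcribed into runnable Python (identifier names are adapted for Python conventions; this is an illustrative translation, not a reference implementation):

```python
def top_down_merge_sort(a):
    """Sort list a in place, following the alternating-buffer scheme."""
    b = list(a)                       # one-time copy of A[] into the work array B[]
    _split_merge(b, 0, len(a), a)     # sort data from b into a

def _split_merge(b, begin, end, a):
    """Sort a[begin:end] using b as the source, swapping roles per level."""
    if end - begin < 2:               # runs of size 1 are already sorted
        return
    mid = (begin + end) // 2
    _split_merge(a, begin, mid, b)    # sort left half from a into b
    _split_merge(a, mid, end, b)      # sort right half from a into b
    _merge(b, begin, mid, end, a)     # merge both halves of b back into a

def _merge(a, begin, mid, end, b):
    i, j = begin, mid
    for k in range(begin, end):
        # Take from the left run while its head exists and is <= the right head.
        if i < mid and (j >= end or a[i] <= a[j]):
            b[k] = a[i]; i += 1
        else:
            b[k] = a[j]; j += 1
```

As in the pseudocode, the one-time copy makes both buffers identical at the leaves, so the base case needs no work.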

6.1.2 Bottom-up implementation

Example C-like code using indices for bottom-up merge sort algorithm which treats the
list as an array of n sublists (called runs in this example) of size 1, and iteratively merges
sub-lists back and forth between two buffers:

// array A[] has the items to sort; array B[] is a work array
void BottomUpMergeSort(A[], B[], n)
{
    // Each 1-element run in A is already "sorted".
    // Make successively longer sorted runs of length 2, 4, 8, 16... until the whole array is sorted.
    for (width = 1; width < n; width = 2 * width)
    {
        // Array A is full of runs of length width.
        for (i = 0; i < n; i = i + 2 * width)
        {
            // Merge two runs: A[i:i+width-1] and A[i+width:i+2*width-1] to B[]
            // or copy A[i:n-1] to B[] ( if (i+width >= n) )
            BottomUpMerge(A, i, min(i+width, n), min(i+2*width, n), B);
        }
        // Now work array B is full of runs of length 2*width.
        // Copy array B to array A for the next iteration.
        // A more efficient implementation would swap the roles of A and B.
        CopyArray(B, A, n);
        // Now array A is full of runs of length 2*width.
    }
}

// Left run is  A[iLeft :iRight-1].
// Right run is A[iRight:iEnd-1].
void BottomUpMerge(A[], iLeft, iRight, iEnd, B[])
{
    i = iLeft, j = iRight;
    // While there are elements in the left or right runs...
    for (k = iLeft; k < iEnd; k++) {
        // If left run head exists and is <= existing right run head.
        if (i < iRight && (j >= iEnd || A[i] <= A[j])) {
            B[k] = A[i];
            i = i + 1;
        } else {
            B[k] = A[j];
            j = j + 1;
        }
    }
}

void CopyArray(B[], A[], n)
{
    for (i = 0; i < n; i++)
        A[i] = B[i];
}
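A runnable Python rendering of this bottom-up pass structure (illustrative; it uses the simple copy-back rather than the more efficient role swap mentioned in the comments):

```python
def bottom_up_merge_sort(a):
    """Sort list a in place by iteratively doubling the run length."""
    n = len(a)
    b = [None] * n                    # work array
    width = 1
    while width < n:
        # a is full of sorted runs of length width; merge pairs into b.
        for i in range(0, n, 2 * width):
            _bottom_up_merge(a, i, min(i + width, n), min(i + 2 * width, n), b)
        a[:] = b                      # copy work array back for the next pass
        width *= 2                    # runs are now twice as long

def _bottom_up_merge(a, left, right, end, b):
    """Merge runs a[left:right] and a[right:end] into b[left:end]."""
    i, j = left, right
    for k in range(left, end):
        if i < right and (j >= end or a[i] <= a[j]):
            b[k] = a[i]; i += 1
        else:
            b[k] = a[j]; j += 1
```

When the second run is empty (i + width >= n), the merge degenerates into a plain copy, matching the comment in the pseudocode above.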

6.1.3 Top-down implementation using lists

Pseudocode16 for top-down merge sort algorithm which recursively divides the input list
into smaller sublists until the sublists are trivially sorted, and then merges the sublists
while returning up the call chain.
function merge_sort(list m) is
    // Base case. A list of zero or one elements is sorted, by definition.
    if length of m ≤ 1 then
        return m

    // Recursive case. First, divide the list into equal-sized sublists
    // consisting of the first half and second half of the list.
    // This assumes lists start at index 0.
    var left := empty list
    var right := empty list
    for each x with index i in m do
        if i < (length of m)/2 then
            add x to left
        else
            add x to right

    // Recursively sort both sublists.
    left := merge_sort(left)
    right := merge_sort(right)

    // Then merge the now-sorted sublists.
    return merge(left, right)

In this example, the merge function merges the left and right sublists.
function merge(left, right) is
    var result := empty list

    while left is not empty and right is not empty do
        if first(left) ≤ first(right) then
            append first(left) to result
            left := rest(left)
        else
            append first(right) to result
            right := rest(right)

    // Either left or right may have elements left; consume them.
    // (Only one of the following loops will actually be entered.)
    while left is not empty do
16 https://en.wikipedia.org/wiki/Pseudocode

        append first(left) to result
        left := rest(left)
    while right is not empty do
        append first(right) to result
        right := rest(right)
    return result

6.1.4 Bottom-up implementation using lists

Pseudocode17 for the bottom-up merge sort algorithm, which uses a small fixed-size array of references to nodes, where array[i] is either a reference to a list of size 2^i or nil18. node is a reference or pointer to a node. The merge() function would be similar to the one shown in the top-down merge lists example; it merges two already sorted lists and handles empty lists. In this case, merge() would use node for its input parameters and return value.
function merge_sort(node head) is
    // return if empty list
    if head = nil then
        return nil
    var node array[32]; initially all nil
    var node result
    var node next
    var int i
    result := head
    // merge nodes into array
    while result ≠ nil do
        next := result.next;
        result.next := nil
        for (i = 0; (i < 32) && (array[i] ≠ nil); i += 1) do
            result := merge(array[i], result)
            array[i] := nil
        // do not go past end of array
        if i = 32 then
            i -= 1
        array[i] := result
        result := next
    // merge array into single list
    result := nil
    for (i = 0; i < 32; i += 1) do
        result := merge(array[i], result)
    return result
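The binary-counter scheme above can be sketched in Python using ordinary lists as stand-ins for the sorted node chains (the 32-bin limit follows the pseudocode; the function and helper names are illustrative):

```python
def _merge_lists(x, y):
    """Merge two sorted lists (stand-ins for sorted node chains)."""
    out, i, j = [], 0, 0
    while i < len(x) and j < len(y):
        if x[i] <= y[j]:
            out.append(x[i]); i += 1
        else:
            out.append(y[j]); j += 1
    return out + x[i:] + y[j:]

def bottom_up_list_merge_sort(items):
    BINS = 32                         # bin i holds a sorted list of size 2**i, or None
    bins = [None] * BINS
    for x in items:
        carry = [x]                   # a one-element "node"
        i = 0
        while i < BINS and bins[i] is not None:
            carry = _merge_lists(bins[i], carry)   # like binary addition with carry
            bins[i] = None
            i += 1
        if i == BINS:                 # do not go past the end of the array
            i -= 1
        bins[i] = carry
    # merge the bins into a single list
    result = []
    for run in bins:
        if run is not None:
            result = _merge_lists(run, result)
    return result
```

Inserting each element works like incrementing a binary counter: a carry propagates through occupied bins, merging as it goes, so each bin holds at most one sorted run of its size.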

6.2 Natural merge sort

A natural merge sort is similar to a bottom-up merge sort except that any naturally occur-
ring runs (sorted sequences) in the input are exploited. Both monotonic and bitonic (al-
ternating up/down) runs may be exploited, with lists (or equivalently tapes or files) being
convenient data structures (used as FIFO queues19 or LIFO stacks20 ).[4] In the bottom-up
merge sort, the starting point assumes each run is one item long. In practice, random input

17 https://en.wikipedia.org/wiki/Pseudocode
18 https://en.wikipedia.org/wiki/Null_pointer
19 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
20 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)


data will have many short runs that just happen to be sorted. In the typical case, the
natural merge sort may not need as many passes because there are fewer runs to merge.
In the best case, the input is already sorted (i.e., is one run), so the natural merge sort
need only make one pass through the data. In many practical cases, long natural runs
are present, and for that reason natural merge sort is exploited as the key component of
Timsort21 . Example:
Start : 3 4 2 1 7 5 8 9 0 6
Select runs : (3 4)(2)(1 7)(5 8 9)(0 6)
Merge : (2 3 4)(1 5 7 8 9)(0 6)
Merge : (1 2 3 4 5 7 8 9)(0 6)
Merge : (0 1 2 3 4 5 6 7 8 9)

Tournament replacement selection sorts22 are used to gather the initial runs for external
sorting algorithms.
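A small illustrative Python sketch of natural merge sort, restricted to non-decreasing (monotonic) runs for simplicity, reproduces the run selection and merge passes shown in the example:

```python
def _merge_runs(x, y):
    """Standard stable merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(x) and j < len(y):
        if x[i] <= y[j]:
            out.append(x[i]); i += 1
        else:
            out.append(y[j]); j += 1
    return out + x[i:] + y[j:]

def natural_merge_sort(items):
    if not items:
        return []
    # Select runs: split the input into maximal non-decreasing runs.
    runs, run = [], [items[0]]
    for x in items[1:]:
        if x >= run[-1]:
            run.append(x)
        else:
            runs.append(run); run = [x]
    runs.append(run)
    # Merge pairs of runs until a single run remains.
    while len(runs) > 1:
        merged = [_merge_runs(runs[i], runs[i + 1])
                  for i in range(0, len(runs) - 1, 2)]
        if len(runs) % 2:
            merged.append(runs[-1])   # odd run carries over to the next pass
        runs = merged
    return runs[0]
```

On the input 3 4 2 1 7 5 8 9 0 6 the run-selection loop produces exactly the runs (3 4)(2)(1 7)(5 8 9)(0 6) from the example, and already-sorted input completes with no merge passes at all.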

21 https://en.wikipedia.org/wiki/Timsort
22 https://en.wikipedia.org/wiki/Tournament_sort


6.3 Analysis

Figure 18 A recursive merge sort algorithm used to sort an array of 7 integer values.
These are the steps a human would take to emulate merge sort (top-down).

In sorting n objects, merge sort has an average23 and worst-case performance24 of O25 (n log n). If the running time of merge sort for a list of length n is T(n), then the recurrence T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists). The closed form follows from the master theorem for divide-and-conquer recurrences26.

23 https://en.wikipedia.org/wiki/Average_performance
24 https://en.wikipedia.org/wiki/Worst-case_performance
25 https://en.wikipedia.org/wiki/Big_O_notation
26 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)


In the worst case, the number of comparisons merge sort makes is given by the sorting numbers27. These numbers are equal to or slightly smaller than (n ⌈lg28 n⌉ − 2^⌈lg n⌉ + 1), which is between (n lg n − n + 1) and (n lg n + n + O(lg n)).[5]
For large n and a randomly ordered input list, merge sort's expected (average) number of comparisons approaches α·n fewer than the worst case, where α = −1 + Σ_{k=0}^{∞} 1/(2^k + 1) ≈ 0.2645.
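The constant α can be checked numerically; since the series converges geometrically, a few dozen terms suffice for double precision (the cutoff of 60 terms is an arbitrary safe choice):

```python
# Numerical check of α = −1 + Σ_{k≥0} 1/(2**k + 1) from the analysis above.
# The tail beyond k terms is bounded by 2**(1-k), so 60 terms are ample.
alpha = -1.0 + sum(1.0 / (2 ** k + 1) for k in range(60))
print(round(alpha, 4))   # ≈ 0.2645
```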

In the worst case, merge sort does about 39% fewer comparisons than quicksort29 does in the average case. In terms of moves, merge sort's worst case complexity is O30 (n log n) – the same complexity as quicksort's best case, and merge sort's best case takes about half as many iterations as the worst case.[citation needed]
Merge sort is more efficient than quicksort for some types of lists if the data to be sorted can
only be efficiently accessed sequentially, and is thus popular in languages such as Lisp32 ,
where sequentially accessed data structures are very common. Unlike some (efficient) im-
plementations of quicksort, merge sort is a stable sort.
Merge sort's most common implementation does not sort in place;[6] therefore, the memory
size of the input must be allocated for the sorted output to be stored in (see below for
versions that need only n/2 extra spaces).

6.4 Variants

Variants of merge sort are primarily concerned with reducing the space complexity and the
cost of copying.
A simple alternative for reducing the space overhead to n/2 is to maintain left and right as
a combined structure, copy only the left part of m into temporary space, and to direct the
merge routine to place the merged output into m. With this version it is better to allocate
the temporary space outside the merge routine, so that only one allocation is needed. The
excessive copying mentioned previously is also mitigated, since the last pair of lines before
the return result statement (function merge in the pseudocode above) become superfluous.
One drawback of merge sort, when implemented on arrays, is its O(n) working memory
requirement. Several in-place33 variants have been suggested:
• Katajainen et al. present an algorithm that requires a constant amount of working mem-
ory: enough storage space to hold one element of the input array, and additional space
to hold O(1) pointers into the input array. They achieve an O(n log n) time bound with
small constants, but their algorithm is not stable.[7]
• Several attempts have been made at producing an in-place merge algorithm that can
be combined with a standard (top-down or bottom-up) merge sort to produce an in-

27 https://en.wikipedia.org/wiki/Sorting_number
28 https://en.wikipedia.org/wiki/Binary_logarithm
29 https://en.wikipedia.org/wiki/Quicksort
30 https://en.wikipedia.org/wiki/Big_O_notation
32 https://en.wikipedia.org/wiki/Lisp_programming_language
33 https://en.wikipedia.org/wiki/In-place_algorithm


place merge sort. In this case, the notion of ”in-place” can be relaxed to mean ”taking
logarithmic stack space”, because standard merge sort requires that amount of space
for its own stack usage. It was shown by Geffert et al. that in-place, stable merging is
possible in O(n log n) time using a constant amount of scratch space, but their algorithm
is complicated and has high constant factors: merging arrays of length n and m can take
5n + 12m + o(m) moves.[8] Later work simplified this scheme. Bing-Chao Huang and Michael A. Langston[9] presented a straightforward, practical in-place merge that combines two sorted lists in linear time using only a fixed amount of additional space, building on the work of Kronrod and others. Their algorithm takes somewhat more time on average than standard merge sorts that are free to exploit O(n) temporary memory cells, but by less than a factor of two. Although fast in practice, the basic algorithm is unstable for some lists; using similar ideas, they were later able to address this as well. Other in-place algorithms include
SymMerge, which takes O((n + m) log (n + m)) time in total and is stable.[10] Plugging
such an algorithm into merge sort increases its complexity to the non-linearithmic34 , but
still quasilinear35, O(n (log n)^2).
• A modern, stable, linear, and in-place merge variant is block merge sort36.
An alternative to reduce the copying into multiple lists is to associate a new field of infor-
mation with each key (the elements in m are called keys). This field will be used to link
the keys and any associated information together in a sorted list (a key and its related
information is called a record). Then the merging of the sorted lists proceeds by changing
the link values; no records need to be moved at all. A field which contains only a link will
generally be smaller than an entire record so less space will also be used. This is a standard
sorting technique, not restricted to merge sort.

34 https://en.wikipedia.org/wiki/Linearithmic
35 https://en.wikipedia.org/wiki/Quasilinear_time
36 https://en.wikipedia.org/wiki/Block_merge_sort


6.5 Use with tape drives

Figure 19 Merge sort type algorithms allowed large data sets to be sorted on early
computers that had small random access memories by modern standards. Records were
stored on magnetic tape and processed on banks of magnetic tape drives, such as these
IBM 729s.

An external37 merge sort is practical to run using disk38 or tape39 drives when the data to
be sorted is too large to fit into memory40 . External sorting41 explains how merge sort is
implemented with disk drives. A typical tape drive sort uses four tape drives. All I/O is
sequential (except for rewinds at the end of each pass). A minimal implementation can get
by with just two record buffers and a few program variables.
Naming the four tape drives as A, B, C, D, with the original data on A, and using only 2
record buffers, the algorithm is similar to Bottom-up implementation42 , using pairs of tape
drives instead of arrays in memory. The basic algorithm can be described as follows:

37 https://en.wikipedia.org/wiki/External_sorting
38 https://en.wikipedia.org/wiki/Disk_storage
39 https://en.wikipedia.org/wiki/Tape_drive
40 https://en.wikipedia.org/wiki/Primary_storage
41 https://en.wikipedia.org/wiki/External_sorting
42 #Bottom-up_implementation


1. Merge pairs of records from A; writing two-record sublists alternately to C and D.
2. Merge two-record sublists from C and D into four-record sublists; writing these alternately to A and B.
3. Merge four-record sublists from A and B into eight-record sublists; writing these alternately to C and D.
4. Repeat until you have one list containing all the data, sorted, in log2(n) passes.
Instead of starting with very short runs, usually a hybrid algorithm43 is used, where the initial pass reads many records into memory, does an internal sort to create a long run, and then distributes those long runs onto the output set. This step avoids many early passes. For example, an internal sort of 1024 records will save nine passes. Because the benefit is so large, the internal sort is usually made as large as available memory allows. In fact, there are techniques that can make the initial runs longer than the available internal memory.[11]
With some overhead, the above algorithm can be modified to use three tapes. O(n log n) running time can also be achieved using two queues44, or a stack45 and a queue, or three stacks. In the other direction, using k > 2 tapes (and O(k) items in memory), we can reduce the number of tape operations by a factor of O(log k) by using a k/2-way merge46.
A more sophisticated merge sort that optimizes tape (and disk) drive usage is the polyphase
merge sort47 .

43 https://en.wikipedia.org/wiki/Hybrid_algorithm
44 https://en.wikipedia.org/wiki/Queue_(abstract_data_type)
45 https://en.wikipedia.org/wiki/Stack_(abstract_data_type)
46 https://en.wikipedia.org/wiki/K-way_merge_algorithm
47 https://en.wikipedia.org/wiki/Polyphase_merge_sort


6.6 Optimizing merge sort

Figure 20 Tiled merge sort applied to an array of random integers. The horizontal axis
is the array index and the vertical axis is the integer.

On modern computers, locality of reference48 can be of paramount importance in software optimization49, because multilevel memory hierarchies50 are used. Cache51-aware versions
of the merge sort algorithm, whose operations have been specifically chosen to minimize
the movement of pages in and out of a machine's memory cache, have been proposed. For
example, the tiled merge sort algorithm stops partitioning subarrays when subarrays of
size S are reached, where S is the number of data items fitting into a CPU's cache. Each
of these subarrays is sorted with an in-place sorting algorithm such as insertion sort52 ,
to discourage memory swaps, and normal merge sort is then completed in the standard

48 https://en.wikipedia.org/wiki/Locality_of_reference
49 https://en.wikipedia.org/wiki/Software_optimization
50 https://en.wikipedia.org/wiki/Memory_hierarchy
51 https://en.wikipedia.org/wiki/Cache_(computing)
52 https://en.wikipedia.org/wiki/Insertion_sort

recursive fashion. This algorithm has demonstrated better performance[example needed] on machines that benefit from cache optimization. (LaMarca & Ladner 199754)
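An illustrative Python sketch of the tiled idea: tiles of at most S elements are sorted in place with insertion sort, and larger ranges are merged as usual. Here S = 32 is a placeholder, not a measured cache size, and the sketch merges through a temporary list rather than modeling cache traffic.

```python
S = 32   # placeholder tile size; a real implementation would derive it from cache size

def insertion_sort(a, lo, hi):
    """In-place insertion sort of a[lo:hi], used on one cache-sized tile."""
    for i in range(lo + 1, hi):
        x, j = a[i], i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]; j -= 1
        a[j + 1] = x

def tiled_merge_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    if hi - lo <= S:
        insertion_sort(a, lo, hi)     # stop partitioning at the tile size
        return
    mid = (lo + hi) // 2
    tiled_merge_sort(a, lo, mid)
    tiled_merge_sort(a, mid, hi)
    # Standard stable merge of the two sorted halves.
    merged, i, j = [], lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged += a[i:mid] + a[j:hi]
    a[lo:hi] = merged
```

Stopping the recursion at S keeps each base-case sort inside one cache-sized working set, which is the point of the tiled variant.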
Kronrod (1969)55 suggested an alternative version of merge sort that uses constant addi-
tional space. This algorithm was later refined. (Katajainen, Pasanen & Teuhola 199656 )
Also, many applications of external sorting58 use a form of merge sorting where the input is split into a larger number of sublists, ideally to a number for which merging them still makes the currently processed set of pages59 fit into main memory.

6.7 Parallel merge sort

Merge sort parallelizes well due to the use of the divide-and-conquer60 method. Several
different parallel variants of the algorithm have been developed over the years. Some parallel
merge sort algorithms are strongly related to the sequential top-down merge algorithm while
others have a different general structure and use the K-way merge61 method.

6.7.1 Merge sort with parallel recursion

The sequential merge sort procedure can be described in two phases, the divide phase and
the merge phase. The first consists of many recursive calls that repeatedly perform the same
division process until the subsequences are trivially sorted (containing one or no element).
An intuitive approach is the parallelization of those recursive calls.[12] The following pseudocode describes the merge sort with parallel recursion using the fork and join62 keywords:
// Sort elements lo through hi (exclusive) of array A.
algorithm mergesort(A, lo, hi) is
    if lo+1 < hi then              // Two or more elements.
        mid := ⌊(lo + hi) / 2⌋
        fork mergesort(A, lo, mid)
        mergesort(A, mid, hi)
        join
        merge(A, lo, mid, hi)
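The fork/join structure can be rendered literally with threads. The Python sketch below illustrates the structure only: because of CPython's global interpreter lock it will not show a real speedup, and a production version would use processes or a runtime with truly parallel threads (the helper name is illustrative).

```python
import threading

def _seq_merge(a, lo, mid, hi):
    """Sequential in-place merge of a[lo:mid] and a[mid:hi] via a left-half copy."""
    left = a[lo:mid]
    i, j, k = 0, mid, lo
    while i < len(left) and j < hi:
        if left[i] <= a[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = a[j]; j += 1
        k += 1
    a[k:k + len(left) - i] = left[i:]   # right leftovers are already in place

def parallel_mergesort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    if lo + 1 < hi:                      # two or more elements
        mid = (lo + hi) // 2
        t = threading.Thread(target=parallel_mergesort, args=(a, lo, mid))
        t.start()                        # fork: left half on a new thread
        parallel_mergesort(a, mid, hi)   # this thread takes the right half
        t.join()                         # join: wait for the forked half
        _seq_merge(a, lo, mid, hi)
```

Spawning a thread per recursive call is wasteful for large inputs; as the text notes, the sequential merge is the bottleneck regardless of how the recursion is parallelized.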

This algorithm is the trivial modification of the sequential version and does not parallelize
well. Therefore, its speedup is not very impressive. It has a span63 of Θ(n), which is
only an improvement of Θ(log n) compared to the sequential version (see Introduction to

54 #CITEREFLaMarcaLadner1997
55 #CITEREFKronrod1969
56 #CITEREFKatajainenPasanenTeuhola1996
58 https://en.wikipedia.org/wiki/External_sorting
59 https://en.wikipedia.org/wiki/Page_(computer_memory)
60 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
61 https://en.wikipedia.org/wiki/K-way_merge_algorithm
62 https://en.wikipedia.org/wiki/Fork%E2%80%93join_model
63 https://en.wikipedia.org/wiki/Analysis_of_parallel_algorithms#Overview


Algorithms64 ). This is mainly due to the sequential merge method, as it is the bottleneck
of the parallel executions.

6.7.2 Merge sort with parallel merging

Main article: Merge algorithm § Parallel merge65
Better parallelism can be achieved by using a parallel merge algorithm66. Cormen et al.67 present a binary variant that merges two sorted sub-sequences into one sorted output sequence.[12]
In one of the sequences (the longer one if unequal length), the element of the middle index
is selected. Its position in the other sequence is determined in such a way that this sequence
would remain sorted if this element were inserted at this position. Thus, one knows how
many other elements from both sequences are smaller and the position of the selected
element in the output sequence can be calculated. For the partial sequences of the smaller
and larger elements created in this way, the merge algorithm is again executed in parallel
until the base case of the recursion is reached.
The following pseudocode shows the modified parallel merge sort method using the parallel
merge algorithm (adopted from Cormen et al.).
/**
 * A: Input array
 * B: Output array
 * lo: lower bound
 * hi: upper bound
 * off: offset
 */
algorithm parallelMergesort(A, lo, hi, B, off) is
    len := hi - lo + 1
    if len == 1 then
        B[off] := A[lo]
    else let T[1..len] be a new array
        mid := ⌊(lo + hi) / 2⌋
        mid' := mid - lo + 1
        fork parallelMergesort(A, lo, mid, T, 1)
        parallelMergesort(A, mid + 1, hi, T, mid' + 1)
        join
        parallelMerge(T, 1, mid', mid' + 1, len, B, off)
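The splitting step of the parallel merge can be sketched on its own: pick the median of the longer run, binary-search its position in the other run, and obtain two independent merge subproblems whose results simply concatenate (an illustrative Python fragment, not Cormen et al.'s pseudocode):

```python
import bisect

def split_for_parallel_merge(x, y):
    """Return ((x1, y1), (x2, y2)): two independent merge subproblems.

    Every element of the first pair is <= every element of the second
    (up to ties at the pivot), so the two merges can run concurrently
    and their outputs can be concatenated.
    """
    if len(x) < len(y):
        x, y = y, x                      # make x the longer run
    m = len(x) // 2
    pivot = x[m]                         # median element of the longer run
    k = bisect.bisect_left(y, pivot)     # where pivot would insert, keeping y sorted
    return (x[:m], y[:k]), (x[m:], y[k:])
```

Applied recursively in parallel, this split is what gives the merge its Θ((log n)^2) span in the analysis below.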

In order to analyze a recurrence relation68 for the worst case span, the recursive calls of parallelMergesort have to be incorporated only once due to their parallel execution, obtaining

T∞^sort(n) = T∞^sort(n/2) + T∞^merge(n) = T∞^sort(n/2) + Θ((log n)^2).
For detailed information about the complexity of the parallel merge procedure, see Merge
algorithm69 .
The solution of this recurrence is given by

64 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
65 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge
66 https://en.wikipedia.org/wiki/Merge_algorithm
67 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
68 https://en.wikipedia.org/wiki/Recurrence_relation
69 https://en.wikipedia.org/wiki/Merge_algorithm#Parallel_merge


T_∞^sort(n) = Θ((log n)³).
This parallel merge algorithm reaches a parallelism of Θ(n/(log n)²), which is much higher
than the parallelism of the previous algorithm. Such a sort can perform well in practice when
combined with a fast stable sequential sort, such as insertion sort70 , and a fast sequential
merge as a base case for merging small arrays.[13]

6.7.3 Parallel multiway merge sort

It seems arbitrary to restrict the merge sort algorithms to a binary merge method, since there
are usually p > 2 processors available. A better approach may be to use a K-way merge71
method, a generalization of binary merge, in which k sorted sequences are merged together.
This merge variant is well suited to describe a sorting algorithm on a PRAM72[14][15] .

Basic Idea

Figure 21 The parallel multiway mergesort process on four processors t0 to t3 .

70 https://en.wikipedia.org/wiki/Insertion_sort
71 https://en.wikipedia.org/wiki/K-way_merge_algorithm
72 https://en.wikipedia.org/wiki/Parallel_random-access_machine


Given an unsorted sequence of n elements, the goal is to sort the sequence with p available
processors73 . These elements are distributed equally among all processors and sorted locally
using a sequential Sorting algorithm74 . Hence, the sequence consists of sorted sequences
S1, ..., Sp of length ⌈n/p⌉. For simplification, let n be a multiple of p, so that |Si| = n/p for
i = 1, ..., p.
These sequences will be used to perform a multisequence selection/splitter selection. For
j = 1, ..., p, the algorithm determines splitter elements vj with global rank k = j·n/p. Then
the corresponding positions of v1, ..., vp in each sequence Si are determined with binary
search75 and thus the Si are further partitioned into p subsequences Si,1, ..., Si,p with
Si,j := {x ∈ Si | rank(vj−1) < rank(x) ≤ rank(vj)}.
Furthermore, the elements of S1,i, ..., Sp,i are assigned to processor i; that is, all elements
between rank (i − 1)·n/p and rank i·n/p, which are distributed over all Si. Thus, each
processor receives a sequence of sorted sequences. The fact that the rank k of the splitter
elements vi was chosen globally provides two important properties: on the one hand, k was
chosen so that each processor can still operate on n/p elements after assignment, so the
algorithm is perfectly load-balanced76. On the other hand, all elements on processor i are
less than or equal to all elements on processor i + 1. Hence, each processor performs the
p-way merge77 locally and thus obtains a sorted sequence from its sub-sequences. Because
of the second property, no further p-way merge has to be performed; the results only have
to be put together in the order of the processor number.
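To make the splitter step concrete, here is a toy Python example with p = 2 and n = 8 (the variable names are illustrative; ties at a splitter are broken with bisect_right, matching the rank(x) ≤ rank(vj) condition above):

```python
import bisect

S = [[1, 4, 6, 7], [2, 3, 5, 8]]   # p = 2 locally sorted sequences
splitters = [4, 8]                 # elements of global rank n/p = 4 and n = 8

# parts[i][j] holds the elements of S_i destined for processor j + 1,
# i.e. those with global rank in (j * n/p, (j + 1) * n/p]
parts = []
for seq in S:
    cuts = [0] + [bisect.bisect_right(seq, v) for v in splitters]
    parts.append([seq[cuts[j]:cuts[j + 1]] for j in range(len(splitters))])

# processor 1 merges parts[0][0] and parts[1][0] -> [1, 2, 3, 4]
# processor 2 merges parts[0][1] and parts[1][1] -> [5, 6, 7, 8]
```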

Multisequence selection

In its simplest form, given p sorted sequences S1 , ..., Sp distributed evenly on p processors
and a rank k, the task is to find an element x with a global rank k in the union of the
sequences. Hence, this can be used to divide each Si into two parts at a splitter index li,
where the lower part contains only elements which are smaller than x, while the elements
bigger than x are located in the upper part.
The presented sequential algorithm returns the indices of the splits in each sequence,
i.e. the indices li in sequences Si such that Si[li] has a global rank less than k and
rank(Si[li + 1]) ≥ k.[16]
algorithm msSelect(S : Array of sorted Sequences [S_1,..,S_p], k : int) is
    for i = 1 to p do
        (l_i, r_i) := (0, |S_i|-1)

    while there exists i: l_i < r_i do
        // pick pivot element in S_j[l_j],..,S_j[r_j], choose random j uniformly
        v := pickPivot(S, l, r)
        for i = 1 to p do
            m_i := binarySearch(v, S_i[l_i, r_i])   // sequentially
        if m_1 + ... + m_p >= k then   // m_1 + ... + m_p is the global rank of v
            r := m   // vector assignment
        else
            l := m   // vector assignment

    return l

73 https://en.wikipedia.org/wiki/Processor_(computing)
74 https://en.wikipedia.org/wiki/Sorting_algorithm
75 https://en.wikipedia.org/wiki/Binary_search_algorithm
76 https://en.wikipedia.org/wiki/Load_balancing_(computing)
77 https://en.wikipedia.org/wiki/K-way_merge_algorithm
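The procedure can be transcribed into sequential Python for illustration. One deliberate deviation from the pseudocode: the l-update uses the position after the pivot (bisect_right) rather than the plain vector m, so the pivot itself is always discarded and the loop is guaranteed to make progress; the sketch also assumes all elements are pairwise distinct.

```python
import bisect
import random

def ms_select(S, k):
    """Return split indices l such that the k globally smallest elements
    of the sorted sequences in S are exactly the prefixes S[i][:l[i]].
    Assumes all elements are pairwise distinct."""
    l = [0] * len(S)
    r = [len(s) for s in S]
    while any(li < ri for li, ri in zip(l, r)):
        # pick a pivot uniformly from a randomly chosen active range
        j = random.choice([i for i in range(len(S)) if l[i] < r[i]])
        v = S[j][random.randrange(l[j], r[j])]
        # rank of the pivot inside each active range, via binary search
        m = [bisect.bisect_left(s, v, l[i], r[i]) for i, s in enumerate(S)]
        if sum(m) >= k:
            r = m                                        # discard upper parts
        else:                                            # discard lower parts,
            l = [bisect.bisect_right(s, v, l[i], r[i])   # pivot included
                 for i, s in enumerate(S)]
    return l

splits = ms_select([[1, 3, 5, 7], [2, 4, 6, 8], [0, 9, 10, 11]], 6)
# splits == [3, 2, 1]: the six smallest elements are 0, 1, 2, 3, 4, 5
```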

For the complexity analysis the PRAM78 model is chosen. If the data is evenly distributed
over all p processors, the p-fold execution of the binarySearch method has a running time
of O(p log(n/p)). The expected recursion depth is O(log(Σi |Si|)) = O(log(n)), as in the
ordinary Quickselect79. Thus the overall expected running time is O(p log(n/p) log(n)).
Applied on the parallel multiway merge sort, this algorithm has to be invoked in parallel
such that all splitter elements of rank i·n/p for i = 1, ..., p are found simultaneously. These
splitter elements can then be used to partition each sequence in p parts, with the same total
running time of O(p log(n/p) log(n)).

Pseudocode

Below, the complete pseudocode of the parallel multiway merge sort algorithm is given. We
assume that there is a barrier synchronization before and after the multisequence selection
such that every processor can determine the splitting elements and the sequence partition
properly.
/**
 * d: Unsorted Array of Elements
 * n: Number of Elements
 * p: Number of Processors
 * return Sorted Array
 */
algorithm parallelMultiwayMergesort(d : Array, n : int, p : int) is
    o := new Array[0, n]                         // the output array
    for i = 1 to p do in parallel                // each processor in parallel
        S_i := d[(i-1) * n/p, i * n/p]           // sequence of length n/p
        sort(S_i)                                // sort locally
        synch
        v_i := msSelect([S_1,..,S_p], i * n/p)   // element with global rank i * n/p
        synch
        (S_i,1 ,..., S_i,p) := sequence_partitioning(S_i, v_1, ..., v_p)
                                                 // split S_i into subsequences
        o[(i-1) * n/p, i * n/p] := kWayMerge(S_1,i, ..., S_p,i)
                                                 // merge and assign to output array

    return o
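The kWayMerge step is typically realized with a binary heap or loser tree over the k run heads. Python's standard library happens to ship such a heap-based k-way merge, which can stand in for the routine named in the pseudocode when experimenting (the data below is a toy example):

```python
import heapq

# the p partitions S_1,i ... S_p,i assigned to one processor (toy data)
runs = [[1, 4, 6], [2, 5], [3, 7, 8]]

# repeatedly pops the smallest remaining run head: O(n log k) comparisons
merged = list(heapq.merge(*runs))
# merged == [1, 2, 3, 4, 5, 6, 7, 8]
```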

Analysis

Firstly, each processor sorts the assigned n/p elements locally using a sorting algorithm with
complexity O((n/p) log(n/p)). After that, the splitter elements have to be calculated in time
O(p log(n/p) log(n)). Finally, each group of p splits has to be merged in parallel by each

78 https://en.wikipedia.org/wiki/Parallel_random-access_machine
79 https://en.wikipedia.org/wiki/Quickselect


processor with a running time of O(log(p)n/p) using a sequential p-way merge algorithm80 .
Thus, the overall running time is given by
O((n/p) log(n/p) + p log(n/p) log(n) + (n/p) log(p)).

Practical adaption and application

The multiway merge sort algorithm is very scalable through its high parallelization capabil-
ity, which allows the use of many processors. This makes the algorithm a viable candidate
for sorting large amounts of data, such as those processed in computer clusters81 . Also,
since in such systems memory is usually not a limiting resource, the disadvantage of space
complexity of merge sort is negligible. However, other factors become important in such
systems, which are not taken into account when modelling on a PRAM82. Here, the following
aspects need to be considered: memory hierarchy83, when the data does not fit into the
processor's cache, or the communication overhead of exchanging data between processors,
which could become a bottleneck when the data can no longer be accessed via the shared
memory.
Sanders84 et al. have presented in their paper a bulk synchronous parallel85 algorithm for
multilevel multiway mergesort, which divides p processors into r groups of size p′ . All
processors sort locally first. Unlike single level multiway mergesort, these sequences are
then partitioned into r parts and assigned to the appropriate processor groups. These
steps are repeated recursively in those groups. This reduces communication and especially
avoids problems with many small messages. The hierarchical structure of the underlying real
network can be used to define the processor groups (e.g. racks86 , clusters87 ,...).[15]

6.7.4 Further Variants

Merge sort was one of the first sorting algorithms where optimal speedup was achieved, with
Richard Cole using a clever subsampling algorithm to ensure O(1) merge.[17] Other sophis-
ticated parallel sorting algorithms can achieve the same or better time bounds with a lower
constant. For example, in 1991 David Powers described a parallelized quicksort88 (and a
related radix sort89 ) that can operate in O(log n) time on a CRCW90 parallel random-access
machine91 (PRAM) with n processors by performing partitioning implicitly.[18] Powers fur-
ther shows that a pipelined version of Batcher's Bitonic Mergesort92 at O((log n)2 ) time

80 https://en.wikipedia.org/wiki/Merge_algorithm
81 https://en.wikipedia.org/wiki/Computer_cluster
82 https://en.wikipedia.org/wiki/Parallel_random-access_machine
83 https://en.wikipedia.org/wiki/Memory_hierarchy
84 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
85 https://en.wikipedia.org/wiki/Bulk_synchronous_parallel
86 https://en.wikipedia.org/wiki/19-inch_rack
87 https://en.wikipedia.org/wiki/Computer_cluster
88 https://en.wikipedia.org/wiki/Quicksort
89 https://en.wikipedia.org/wiki/Radix_sort
90 https://en.wikipedia.org/wiki/CRCW
91 https://en.wikipedia.org/wiki/Parallel_random-access_machine
92 https://en.wikipedia.org/wiki/Bitonic_sorter


on a butterfly sorting network93 is in practice actually faster than his O(log n) sorts on a
PRAM, and he provides detailed discussion of the hidden overheads in comparison, radix
and parallel sorting.[19]

6.8 Comparison with other sort algorithms

Although heapsort94 has the same time bounds as merge sort, it requires only Θ(1) auxiliary
space instead of merge sort's Θ(n). On typical modern architectures, efficient quicksort95
implementations generally outperform mergesort for sorting RAM-based arrays.
On the other hand, merge sort is a stable sort and is more efficient at handling slow-to-
access sequential media. Merge sort is often the best choice for sorting a linked list97 : in this
situation it is relatively easy to implement a merge sort in such a way that it requires only
Θ(1) extra space, and the slow random-access performance of a linked list makes some other
algorithms (such as quicksort) perform poorly, and others (such as heapsort) completely
impossible.
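A merge sort over a singly linked list only relinks nodes, so the merge step itself needs no auxiliary array. A minimal recursive Python sketch follows (the class and function names are illustrative; this top-down version still uses O(log n) recursion stack, while a bottom-up pass would bring the extra space down to the Θ(1) mentioned above):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge(a, b):
    """Merge two sorted lists by relinking nodes; no per-element allocation."""
    dummy = tail = Node(None)
    while a and b:
        if a.val <= b.val:            # <= keeps the sort stable
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b
    return dummy.next

def merge_sort(head):
    if head is None or head.next is None:
        return head
    slow, fast = head, head.next      # find the middle with two pointers
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None  # cut the list into two halves
    return merge(merge_sort(head), merge_sort(mid))

head = None
for v in [5, 3, 8, 1]:                # build the list 1 -> 8 -> 3 -> 5
    head = Node(v, head)
head = merge_sort(head)
vals = []
while head:
    vals.append(head.val)
    head = head.next
# vals == [1, 3, 5, 8]
```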
As of Perl98 5.8, merge sort is its default sorting algorithm (it was quicksort in previous
versions of Perl). In Java99 , the Arrays.sort()100 methods use merge sort or a tuned quicksort
depending on the datatypes and for implementation efficiency switch to insertion sort101
when fewer than seven array elements are being sorted.[20] The Linux102 kernel uses merge
sort for its linked lists.[21] Python103 uses Timsort104 , another tuned hybrid of merge sort
and insertion sort, that has become the standard sort algorithm in Java SE 7105 (for arrays
of non-primitive types),[22] on the Android platform106 ,[23] and in GNU Octave107 .[24]

6.9 Notes
1. Skiena (2008108 , p. 122)
2. Knuth (1998109 , p. 158)
3. K, J; T, J L (M 1997). ”A 
   ”110 (PDF). Proceedings of the 3rd Italian Con-

93 https://en.wikipedia.org/wiki/Sorting_network
94 https://en.wikipedia.org/wiki/Heapsort
95 https://en.wikipedia.org/wiki/Quicksort
97 https://en.wikipedia.org/wiki/Linked_list
98 https://en.wikipedia.org/wiki/Perl
99 https://en.wikipedia.org/wiki/Java_platform
https://docs.oracle.com/javase/9/docs/api/java/util/Arrays.html#sort-java.lang.
100
Object:A-
101 https://en.wikipedia.org/wiki/Insertion_sort
102 https://en.wikipedia.org/wiki/Linux
103 https://en.wikipedia.org/wiki/Python_(programming_language)
104 https://en.wikipedia.org/wiki/Timsort
105 https://en.wikipedia.org/wiki/Java_7
106 https://en.wikipedia.org/wiki/Android_(operating_system)
107 https://en.wikipedia.org/wiki/GNU_Octave
108 #CITEREFSkiena2008
109 #CITEREFKnuth1998
110 http://hjemmesider.diku.dk/~jyrki/Paper/CIAC97.pdf

105
Merge sort

ference on Algorithms and Complexity. Italian Conference on Algorithms and Com-


plexity. Rome. pp. 217–228. CiteSeerX111 10.1.1.86.3154112 . doi113 :10.1007/3-540-
62592-5_74114 .CS1 maint: ref=harv (link115 )
4. Powers, David M. W. and McMahon Graham B. (1983), ”A compendium of interesting
prolog programs”, DCS Technical Report 8313, Department of Computer Science,
University of New South Wales.
5. The worst case number given here does not agree with that given in Knuth116 's Art
of Computer Programming117 , Vol 3. The discrepancy is due to Knuth analyzing a
variant implementation of merge sort that is slightly sub-optimal.
6. C; L; R; S. Introduction to Algorithms. p. 151.
ISBN118 978-0-262-03384-8119 .
7. K, J; P, T; T, J (1996). ”P-
 - ”. Nordic J. Computing. 3 (1): 27–40. Cite-
SeerX120 10.1.1.22.8523121 .
8. G, V; K, J; P, T (2000). ”A-
  - ”. Theoretical Computer Science. 237 (1–2):
159–181. doi122 :10.1016/S0304-3975(98)00162-5123 .
9. H, B-C; L, M A. (M 1988). ”P-
 I-P M”. Communications of the ACM. 31 (3): 348–352.
doi124 :10.1145/42392.42403125 .
10. K, P-S; K, A (2004). Stable Minimum Storage Merging by
Symmetric Comparisons. European Symp. Algorithms. Lecture Notes in Computer
Science. 3221. pp. 714–723. CiteSeerX126 10.1.1.102.4612127 . doi128 :10.1007/978-3-
540-30140-0_63129 . ISBN130 978-3-540-23025-0131 .
11. Selection sort. Knuth's snowplow. Natural merge.
12. Cormen et al. 2009132, pp. 797–805.

111 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
112 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.86.3154
113 https://en.wikipedia.org/wiki/Doi_(identifier)
114 https://doi.org/10.1007%2F3-540-62592-5_74
115 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
116 https://en.wikipedia.org/wiki/Donald_Knuth
117 https://en.wikipedia.org/wiki/Art_of_Computer_Programming
118 https://en.wikipedia.org/wiki/ISBN_(identifier)
119 https://en.wikipedia.org/wiki/Special:BookSources/978-0-262-03384-8
120 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
121 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.1016%2FS0304-3975%2898%2900162-5
124 https://en.wikipedia.org/wiki/Doi_(identifier)
125 https://doi.org/10.1145%2F42392.42403
126 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
127 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.4612
128 https://en.wikipedia.org/wiki/Doi_(identifier)
129 https://doi.org/10.1007%2F978-3-540-30140-0_63
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-23025-0
132 #CITEREFCormenLeisersonRivestStein2009
133 https://en.wikipedia.org/wiki/Category:Harv_and_Sfn_template_errors


13. Victor J. Duvanenko ”Parallel Merge Sort” Dr. Dobb's Journal & blog[1]134 and
GitHub repo C++ implementation [2]135
14. Peter Sanders, Johannes Singler. 2008. Lecture Parallel algorithms Last visited
05.02.2020. 136
15. ”P M P S | P   27
ACM   P  A  A”.
137 :10.1145/2755573.2755595138 . Cite journal requires |journal= (help139 )
16. Peter Sanders. 2019. Lecture Parallel algorithms Last visited 05.02.2020. 140
17. C, R (A 1988). ”P  ”. SIAM J. Comput.
17 (4): 770–785. CiteSeerX141 10.1.1.464.7118142 . doi143 :10.1137/0217049144 .CS1
maint: ref=harv (link145 )
18. Powers, David M. W. Parallelized Quicksort and Radixsort with Optimal Speedup146 ,
Proceedings of International Conference on Parallel Computing Technologies. Novosi-
birsk147 . 1991.
19. David M. W. Powers, Parallel Unification: Practical Complexity148 , Australasian
Computer Architecture Workshop, Flinders University, January 1995
20. OpenJDK src/java.base/share/classes/java/util/Arrays.java @ 53904:9c3fe09f69bc149
21. linux kernel /lib/list_sort.c150
22. . ”C 6804124: R ” ” 
..A.  ”151 . Java Development Kit 7 Hg repo.
Archived152 from the original on 2018-01-26. Retrieved 24 Feb 2011.
23. ”C: ..TS<T>”153 . Android JDK Documentation. Archived
from the original154 on January 20, 2015. Retrieved 19 Jan 2015.
24. ”//-.”155 . Mercurial repository of Octave source code.
Lines 23-25 of the initial comment block. Retrieved 18 Feb 2013. Code stolen in large

134 https://duvanenko.tech.blog/2018/01/13/parallel-merge-sort/
135 https://github.com/DragonSpit/ParallelAlgorithms
136 http://algo2.iti.kit.edu/sanders/courses/paralg08/singler.pdf
137 https://en.wikipedia.org/wiki/Doi_(identifier)
138 https://doi.org/10.1145%2F2755573.2755595
139 https://en.wikipedia.org/wiki/Help:CS1_errors#missing_periodical
140 http://algo2.iti.kit.edu/sanders/courses/paralg19/vorlesung.pdf
141 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
142 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.464.7118
143 https://en.wikipedia.org/wiki/Doi_(identifier)
144 https://doi.org/10.1137%2F0217049
145 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
146 http://citeseer.ist.psu.edu/327487.html
147 https://en.wikipedia.org/wiki/Novosibirsk
148 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
https://hg.openjdk.java.net/jdk/jdk/file/9c3fe09f69bc/src/java.base/share/classes/
149
java/util/Arrays.java#l1331
150 https://github.com/torvalds/linux/blob/master/lib/list_sort.c
151 http://hg.openjdk.java.net/jdk7/jdk7/jdk/rev/bfd7abda8f79
https://web.archive.org/web/20180126184957/http://hg.openjdk.java.net/jdk7/jdk7/jdk/
152
rev/bfd7abda8f79
https://web.archive.org/web/20150120063131/https://android.googlesource.com/platform/
153
libcore/%2B/jb-mr2-release/luni/src/main/java/java/util/TimSort.java
https://android.googlesource.com/platform/libcore/+/jb-mr2-release/luni/src/main/
154
java/java/util/TimSort.java
155 http://hg.savannah.gnu.org/hgweb/octave/file/0486a29d780f/liboctave/util/oct-sort.cc


part from Python's, listobject.c, which itself had no license header. However, thanks
to Tim Peters156 for the parts of the code I ripped-off.

6.10 References
• C, T H.157 ; L, C E.158 ; R, R L.159 ; S,
C160 (2009) [1990]. Introduction to Algorithms161 (3 .). MIT P 
MG-H. ISBN162 0-262-03384-4163 .CS1 maint: ref=harv (link164 )
• K, J; P, T; T, J (1996). ”P -
 ”165 . Nordic Journal of Computing. 3. pp. 27–40. ISSN166 1236-
6064167 . Archived from the original168 on 2011-08-07. Retrieved 2009-04-04.CS1 maint:
ref=harv (link169 ). Also Practical In-Place Mergesort170 . Also [3]171
• K, D172 (1998). ”S 5.2.4: S  M”. Sorting and
Searching. The Art of Computer Programming173 . 3 (2nd ed.). Addison-Wesley.
pp. 158–168. ISBN174 0-201-89685-0175 .CS1 maint: ref=harv (link176 )
• K, M. A. (1969). ”O    
”. Soviet Mathematics - Doklady. 10. p. 744.CS1 maint: ref=harv (link177 )
• LM, A.; L, R. E. (1997). ”T      -
  ”. Proc. 8th Ann. ACM-SIAM Symp. On Discrete Algorithms
(SODA97): 370–379. CiteSeerX178 10.1.1.31.1153179 .CS1 maint: ref=harv (link180 )

156 https://en.wikipedia.org/wiki/Tim_Peters_(software_engineer)
157 https://en.wikipedia.org/wiki/Thomas_H._Cormen
158 https://en.wikipedia.org/wiki/Charles_E._Leiserson
159 https://en.wikipedia.org/wiki/Ron_Rivest
160 https://en.wikipedia.org/wiki/Clifford_Stein
161 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
164 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
https://web.archive.org/web/20110807033704/http://www.diku.dk/hjemmesider/ansatte/
165
jyrki/Paper/mergesort_NJC.ps
166 https://en.wikipedia.org/wiki/ISSN_(identifier)
167 http://www.worldcat.org/issn/1236-6064
168 http://www.diku.dk/hjemmesider/ansatte/jyrki/Paper/mergesort_NJC.ps
169 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
170 http://citeseer.ist.psu.edu/katajainen96practical.html
171 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.8523
172 https://en.wikipedia.org/wiki/Donald_Knuth
173 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
174 https://en.wikipedia.org/wiki/ISBN_(identifier)
175 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
176 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
177 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
178 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
179 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.1153
180 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv


• S, S S.181 (2008). ”4.5: M: S  D--


C”. The Algorithm Design Manual (2nd ed.). Springer. pp. 120–125.
ISBN182 978-1-84800-069-8183 .CS1 maint: ref=harv (link184 )
• S M. ”A API (J SE 6)”185 . R 2007-11-19.
• O C. ”A (J SE 10 & JDK 10)”186 . R 2018-07-23.

6.11 External links

The Wikibook Algorithm implementation187 has a page on the topic of: Merge
sort188

• Animated Sorting Algorithms: Merge Sort189 at the Wayback Machine190 (archived 6
March 2015) – graphical demonstration
• Open Data Structures - Section 11.1.1 - Merge Sort191 , Pat Morin192


181 https://en.wikipedia.org/wiki/Steven_Skiena
182 https://en.wikipedia.org/wiki/ISBN_(identifier)
183 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
184 https://en.wikipedia.org/wiki/Category:CS1_maint:_ref%3Dharv
185 http://java.sun.com/javase/6/docs/api/java/util/Arrays.html
186 https://docs.oracle.com/javase/10/docs/api/java/util/Arrays.html
187 https://en.wikibooks.org/wiki/Algorithm_implementation
188 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Merge_sort
189 https://web.archive.org/web/20150306071601/http://www.sorting-algorithms.com/merge-sort
190 https://en.wikipedia.org/wiki/Wayback_Machine
191 http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_Sorti.html#SECTION001411000000000000000
192 https://en.wikipedia.org/wiki/Pat_Morin

7 Quicksort

A divide and conquer sorting algorithm

Quicksort
Animated visualization of the quicksort algorithm. The horizontal lines are pivot values.
Class: Sorting algorithm
Worst-case performance: O(n²)
Best-case performance: O(n log n) (simple partition) or O(n) (three-way partition and equal keys)
Average performance: O(n log n)
Worst-case space complexity: O(n) auxiliary (naive); O(log n) auxiliary (Sedgewick 1978)

Quicksort (sometimes called partition-exchange sort) is an efficient1 sorting algorithm2 .


Developed by British computer scientist Tony Hoare3 in 1959[1] and published in 1961,[2] it
is still a commonly used algorithm for sorting. When implemented well, it can be about two
or three times faster than its main competitors, merge sort4 and heapsort5 .[3]
Quicksort is a divide-and-conquer algorithm7 . It works by selecting a 'pivot' element from
the array and partitioning the other elements into two sub-arrays, according to whether
they are less than or greater than the pivot. The sub-arrays are then sorted recursively8 .
This can be done in-place9 , requiring small additional amounts of memory10 to perform the
sorting.
Quicksort is a comparison sort11 , meaning that it can sort items of any type for which
a ”less-than” relation (formally, a total order12 ) is defined. Efficient implementations of

1 https://en.wikipedia.org/wiki/Algorithm_efficiency
2 https://en.wikipedia.org/wiki/Sorting_algorithm
3 https://en.wikipedia.org/wiki/Tony_Hoare
4 https://en.wikipedia.org/wiki/Merge_sort
5 https://en.wikipedia.org/wiki/Heapsort
7 https://en.wikipedia.org/wiki/Divide-and-conquer_algorithm
8 https://en.wikipedia.org/wiki/Recursion_(computer_science)
9 https://en.wikipedia.org/wiki/In-place_algorithm
10 https://en.wikipedia.org/wiki/Main_memory
11 https://en.wikipedia.org/wiki/Comparison_sort
12 https://en.wikipedia.org/wiki/Total_order


Quicksort are not a stable sort13 , meaning that the relative order of equal sort items is not
preserved.
Mathematical analysis14 of quicksort shows that, on average15 , the algorithm takes
O16 (n log n) comparisons to sort n items. In the worst case17 , it makes O(n²) comparisons, though this behavior is rare.

7.1 History

The quicksort algorithm was developed in 1959 by Tony Hoare18 while in the Soviet Union19 ,
as a visiting student at Moscow State University20 . At that time, Hoare worked on a project
on machine translation21 for the National Physical Laboratory22 . As a part of the translation
process, he needed to sort the words in Russian sentences prior to looking them up in a
Russian-English dictionary that was already sorted in alphabetic order on magnetic tape23 .[4]
After recognizing that his first idea, insertion sort24 , would be slow, he quickly came up with
a new idea that was Quicksort. He wrote a program in Mercury Autocode25 for the partition
but could not write the program to account for the list of unsorted segments. On return to
England, he was asked to write code for Shellsort26 as part of his new job. Hoare mentioned
to his boss that he knew of a faster algorithm and his boss bet sixpence that he did not. His
boss ultimately accepted that he had lost the bet. Later, Hoare learned about ALGOL27
and its ability to do recursion that enabled him to publish the code in Communications of
the Association for Computing Machinery28 , the premier computer science journal of the
time.[2][5]
Quicksort gained widespread adoption, appearing, for example, in Unix29 as the default
library sort subroutine. Hence, it lent its name to the C standard library30 subroutine
qsort31[6] and in the reference implementation of Java32 .

13 https://en.wikipedia.org/wiki/Stable_sort
14 https://en.wikipedia.org/wiki/Analysis_of_algorithms
15 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
16 https://en.wikipedia.org/wiki/Big_O_notation
17 https://en.wikipedia.org/wiki/Best,_worst_and_average_case
18 https://en.wikipedia.org/wiki/Tony_Hoare
19 https://en.wikipedia.org/wiki/Soviet_Union
20 https://en.wikipedia.org/wiki/Moscow_State_University
21 https://en.wikipedia.org/wiki/Machine_translation
22 https://en.wikipedia.org/wiki/National_Physical_Laboratory,_UK
23 https://en.wikipedia.org/wiki/Magnetic_tape_data_storage
24 https://en.wikipedia.org/wiki/Insertion_sort
25 https://en.wikipedia.org/wiki/Autocode
26 https://en.wikipedia.org/wiki/Shellsort
27 https://en.wikipedia.org/wiki/ALGOL
28 https://en.wikipedia.org/wiki/Communications_of_the_ACM
29 https://en.wikipedia.org/wiki/Unix
30 https://en.wikipedia.org/wiki/C_standard_library
31 https://en.wikipedia.org/wiki/Qsort
32 https://en.wikipedia.org/wiki/Java_(programming_language)


Robert Sedgewick33 's Ph.D. thesis in 1975 is considered a milestone in the study of Quicksort: he resolved many open problems related to the analysis of various pivot selection schemes, including Samplesort34 and adaptive partitioning by Van Emden,[7] and derived the expected number of comparisons and swaps.[6] Jon Bentley35 and Doug McIlroy36
incorporated various improvements for use in programming libraries, including a technique
to deal with equal elements and a pivot scheme known as pseudomedian of nine, where a
sample of nine elements is divided into groups of three and then the median of the three
medians from three groups is chosen.[6] Bentley described another simpler and compact
partitioning scheme in his book Programming Pearls that he attributed to Nico Lomuto.
Later Bentley wrote that he had used Hoare's version for years without ever really understanding it, whereas Lomuto's version was simple enough to prove correct.[8] In the same essay, Bentley described Quicksort as the ”most beautiful code I had ever written”. Lomuto's partition scheme
was also popularized by the textbook Introduction to Algorithms37 although it is inferior to
Hoare's scheme because it does three times more swaps on average and degrades to O(n²) runtime when all elements are equal.[9]
In 2009, Vladimir Yaroslavskiy proposed the new dual pivot Quicksort implementation.[10]
In the Java core library mailing lists, he initiated a discussion claiming his new algorithm
to be superior to the runtime library's sorting method, which was at that time based on
the widely used and carefully tuned variant of classic Quicksort by Bentley and McIlroy.[11]
Yaroslavskiy's Quicksort has been chosen as the new default sorting algorithm in Oracle's
Java 7 runtime library[12] after extensive empirical performance tests.[13]

33 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
34 https://en.wikipedia.org/wiki/Samplesort
35 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
36 https://en.wikipedia.org/wiki/Douglas_McIlroy
37 https://en.wikipedia.org/wiki/Introduction_to_Algorithms


7.2 Algorithm

Figure 22 Full example of quicksort on a random set of numbers. The shaded element
is the pivot. It is always chosen as the last element of the partition. However, always
choosing the last element in the partition as the pivot in this way results in poor
performance (O(n²)) on already sorted arrays, or arrays of identical elements. Since
sub-arrays of sorted / identical elements crop up a lot towards the end of a sorting
procedure on a large set, versions of the quicksort algorithm that choose the pivot as the
middle element run much more quickly than the algorithm described in this diagram on
large sets of numbers.


Quicksort is a divide and conquer algorithm39 . It first divides the input array into two
smaller sub-arrays: the low elements and the high elements. It then recursively sorts the
sub-arrays. The steps for in-place40 Quicksort are:
1. Pick an element, called a pivot, from the array.
2. Partitioning: reorder the array so that all elements with values less than the pivot
come before the pivot, while all elements with values greater than the pivot come after
it (equal values can go either way). After this partitioning, the pivot is in its final
position. This is called the partition operation.
3. Recursively41 apply the above steps to the sub-array of elements with smaller values
and separately to the sub-array of elements with greater values.
The base case of the recursion is arrays of size zero or one, which are in order by definition,
so they never need to be sorted.
The pivot selection and partitioning steps can be done in several different ways; the choice
of specific implementation schemes greatly affects the algorithm's performance.

7.2.1 Lomuto partition scheme

This scheme is attributed to Nico Lomuto and popularized by Bentley in his book Pro-
gramming Pearls[14] and Cormen et al. in their book Introduction to Algorithms42 .[15] This
scheme chooses a pivot that is typically the last element in the array. The algorithm main-
tains index i as it scans the array using another index j such that the elements at lo through
i-1 (inclusive) are less than the pivot, and the elements at i through j (inclusive) are equal
to or greater than the pivot. As this scheme is more compact and easy to understand, it
is frequently used in introductory material, although it is less efficient than Hoare's original scheme.[16] This scheme degrades to O(n²) when the array is already in order.[9] Many variants have been proposed to boost performance, including different ways to select the pivot, handle equal elements, or fall back to other sorting algorithms such as Insertion sort43 for small arrays. In pseudocode44 , a quicksort that sorts elements at lo through hi
(inclusive) of an array A can be expressed as:[15]
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := partition(A, lo, hi)
        quicksort(A, lo, p - 1)
        quicksort(A, p + 1, hi)

algorithm partition(A, lo, hi) is
    pivot := A[hi]
    i := lo
    for j := lo to hi do
        if A[j] < pivot then
            swap A[i] with A[j]
            i := i + 1
    swap A[i] with A[hi]
    return i

39 https://en.wikipedia.org/wiki/Divide_and_conquer_algorithm
40 https://en.wikipedia.org/wiki/In-place_algorithm
41 https://en.wikipedia.org/wiki/Recursion_(computer_science)
42 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
43 https://en.wikipedia.org/wiki/Insertion_sort
44 https://en.wikipedia.org/wiki/Pseudocode

Sorting the entire array is accomplished by quicksort(A, 0, length(A) - 1).
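A direct Python transcription of this pseudocode (a runnable sketch; the function names are chosen here for illustration):

```python
def lomuto_partition(A, lo, hi):
    """Partition A[lo..hi] around the last element; return the pivot's final index."""
    pivot = A[hi]
    i = lo
    for j in range(lo, hi):            # scan A[lo..hi-1]
        if A[j] < pivot:
            A[i], A[j] = A[j], A[i]
            i += 1
    A[i], A[hi] = A[hi], A[i]          # move the pivot into its final position
    return i

def lomuto_quicksort(A, lo=0, hi=None):
    """Sort A in place between indices lo and hi (inclusive)."""
    if hi is None:
        hi = len(A) - 1
    if lo < hi:
        p = lomuto_partition(A, lo, hi)
        lomuto_quicksort(A, lo, p - 1)
        lomuto_quicksort(A, p + 1, hi)
```

Calling lomuto_quicksort(A) sorts the whole list in place, mirroring quicksort(A, 0, length(A) - 1) above.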

7.2.2 Hoare partition scheme

The original partition scheme described by C.A.R. Hoare uses two indices that start at
the ends of the array being partitioned, then move toward each other, until they detect an
inversion: a pair of elements, one greater than or equal to the pivot and one less than or equal, that
are in the wrong order relative to each other. The inverted elements are then swapped.[17]
When the indices meet, the algorithm stops and returns the final index. Hoare's scheme is
more efficient than Lomuto's partition scheme because it does three times fewer swaps on
average, and it creates efficient partitions even when all values are equal.[9]
Like Lomuto's partition scheme, Hoare's partitioning also would cause Quicksort to degrade
to O(n2 ) for already sorted input, if the pivot was chosen as the first or the last element.
With the middle element as the pivot, however, sorted data results in (almost) no swaps and equally sized partitions, leading to the best-case behavior of Quicksort, i.e. O(n log n). Like
others, Hoare's partitioning doesn't produce a stable sort. In this scheme, the pivot's final
location is not necessarily at the index that was returned, and the next two segments that
the main algorithm recurs on are (lo..p) and (p+1..hi) as opposed to (lo..p-1) and (p+1..hi)
as in Lomuto's scheme. However, the partitioning algorithm guarantees lo ≤ p < hi which
implies both resulting partitions are non-empty, hence there's no risk of infinite recursion.
In pseudocode46 ,[15]
algorithm quicksort(A, lo, hi) is
    if lo < hi then
        p := partition(A, lo, hi)
        quicksort(A, lo, p)
        quicksort(A, p + 1, hi)

algorithm partition(A, lo, hi) is
    pivot := A[⌊(hi + lo) / 2⌋]
    i := lo - 1
    j := hi + 1
    loop forever
        do
            i := i + 1
        while A[i] < pivot
        do
            j := j - 1
        while A[j] > pivot
        if i ≥ j then
            return j
        swap A[i] with A[j]

An important point in choosing the pivot item is to round the division result towards zero.
This is the implicit behavior of integer division in some programming languages (e.g., C,
C++, Java), hence rounding is omitted in implementing code. Here it is emphasized with
explicit use of a floor function47 , denoted with the ⌊ ⌋ symbol pair. Rounding down is important to avoid using A[hi] as the pivot, which can result in infinite recursion.

46 https://en.wikipedia.org/wiki/Pseudocode
47 https://en.wikipedia.org/wiki/Floor_and_ceiling_functions


The entire array is sorted by quicksort(A, 0, length(A) - 1).
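The Hoare scheme can likewise be transcribed into Python (a runnable sketch; names are illustrative, and note that the recursion is on lo..p and p+1..hi):

```python
def hoare_partition(A, lo, hi):
    """Hoare partition with the middle element as pivot. Returns an index j such
    that A[lo..j] <= pivot <= A[j+1..hi]; the pivot need not end up at index j."""
    pivot = A[(lo + hi) // 2]   # // rounds down, matching the floor in the text
    i, j = lo - 1, hi + 1
    while True:
        i += 1                  # do-while: advance i until A[i] >= pivot
        while A[i] < pivot:
            i += 1
        j -= 1                  # do-while: retreat j until A[j] <= pivot
        while A[j] > pivot:
            j -= 1
        if i >= j:
            return j
        A[i], A[j] = A[j], A[i]

def hoare_quicksort(A, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    if lo < hi:
        p = hoare_partition(A, lo, hi)
        hoare_quicksort(A, lo, p)      # note: lo..p, not lo..p-1
        hoare_quicksort(A, p + 1, hi)
```

The guarantee lo ≤ p < hi mentioned above is what makes the lo..p recursion safe: both recursive calls shrink the range.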

7.2.3 Implementation issues

Choice of pivot

In the very early versions of quicksort, the leftmost element of the partition would often
be chosen as the pivot element. Unfortunately, this causes worst-case behavior on already
sorted arrays, which is a rather common use-case. The problem was easily solved by choosing
either a random index for the pivot, choosing the middle index of the partition or (especially
for longer partitions) choosing the median48 of the first, middle and last element of the
partition for the pivot (as recommended by Sedgewick49 ).[18] This ”median-of-three” rule
counters the case of sorted (or reverse-sorted) input, and gives a better estimate of the
optimal pivot (the true median) than selecting any single element, when no information
about the ordering of the input is known.
Median-of-three code snippet for Lomuto partition:
mid := ⌊(lo + hi) / 2⌋
if A[mid] < A[lo] then
    swap A[lo] with A[mid]
if A[hi] < A[lo] then
    swap A[lo] with A[hi]
if A[mid] < A[hi] then
    swap A[mid] with A[hi]
pivot := A[hi]

It puts a median into A[hi] first, then that new value of A[hi] is used for a pivot, as in a
basic algorithm presented above.
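In Python, the same median-of-three selection might look like this (an illustrative helper; the function name is ours):

```python
def median_of_three(A, lo, hi):
    """Place the median of A[lo], A[mid], A[hi] into A[hi] and return it,
    so that A[hi] can be used directly as the Lomuto pivot."""
    mid = (lo + hi) // 2
    if A[mid] < A[lo]:
        A[lo], A[mid] = A[mid], A[lo]   # now A[lo] <= A[mid]
    if A[hi] < A[lo]:
        A[lo], A[hi] = A[hi], A[lo]     # now A[lo] is the minimum of the three
    if A[mid] < A[hi]:
        A[mid], A[hi] = A[hi], A[mid]   # now A[hi] is the median
    return A[hi]
```

After the three conditional swaps, A[lo] holds the smallest of the three samples and A[hi] holds the median, exactly as the pseudocode snippet requires.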
Specifically, the expected number of comparisons needed to sort n elements (see § Analysis
of randomized quicksort50 ) with random pivot selection is 1.386 n log n. Median-of-three
pivoting brings this down to Cn,2 51 ≈ 1.188 n log n, at the expense of a three-percent increase
in the expected number of swaps.[6] An even stronger pivoting rule, for larger arrays, is to
pick the ninther52 , a recursive median-of-three (Mo3), defined as[6]
ninther(a) = median(Mo3(first ⅓ of a), Mo3(middle ⅓ of a), Mo3(final ⅓ of a))
Selecting a pivot element is also complicated by the existence of integer overflow53 . If the
boundary indices of the subarray being sorted are sufficiently large, the naïve expression for
the middle index, (lo + hi)/2, will cause overflow and provide an invalid pivot index. This
can be overcome by using, for example, lo + (hi−lo)/2 to index the middle element, at the
cost of more complex arithmetic. Similar issues arise in some other methods of selecting
the pivot element.
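Python's arbitrary-precision integers never overflow, but the failure mode can be simulated with 32-bit wraparound arithmetic, as happens in Java (an illustrative sketch; wrap32 is a helper defined here):

```python
INT_MAX = 2**31 - 1

def wrap32(x):
    """Reduce x as a signed 32-bit integer (wraparound on overflow)."""
    return (x + 2**31) % 2**32 - 2**31

lo, hi = INT_MAX - 10, INT_MAX - 2      # a subarray near the top of the range

naive = wrap32(lo + hi) // 2            # lo + hi wraps around: negative "index"
safe = lo + (hi - lo) // 2              # every intermediate value stays in range

assert naive < 0                        # invalid as an array index
assert lo <= safe <= hi
```

The safe form costs one extra subtraction but never computes a value outside [lo, hi].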

48 https://en.wikipedia.org/wiki/Median
49 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
50 #Analysis_of_randomized_quicksort
51 https://en.wikipedia.org/wiki/Binomial_coefficient
52 https://en.wikipedia.org/wiki/Ninther
53 https://en.wikipedia.org/wiki/Integer_overflow


Repeated elements

With a partitioning algorithm such as the Lomuto partition scheme described above (even
one that chooses good pivot values), quicksort exhibits poor performance for inputs that
contain many repeated elements. The problem is clearly apparent when all the input el-
ements are equal: at each recursion, the left partition is empty (no input values are less
than the pivot), and the right partition has only decreased by one element (the pivot is
removed). Consequently, the Lomuto partition scheme takes quadratic time54 to sort an
array of equal values. However, with a partitioning algorithm such as the Hoare partition
scheme, repeated elements generally result in better partitioning, and although needless
swaps of elements equal to the pivot may occur, the running time generally decreases as the
number of repeated elements increases (with memory cache reducing the swap overhead).
In the case where all elements are equal, Hoare partition scheme needlessly swaps elements,
but the partitioning itself is best case, as noted in the Hoare partition section above.
To solve the Lomuto partition scheme problem (sometimes called the Dutch national flag
problem55[6] ), an alternative linear-time partition routine can be used that separates the
values into three groups: values less than the pivot, values equal to the pivot, and values
greater than the pivot. (Bentley and McIlroy call this a ”fat partition” and it was already
implemented in the qsort56 of Version 7 Unix57 .[6] ) The values equal to the pivot are already
sorted, so only the less-than and greater-than partitions need to be recursively sorted. In
pseudocode, the quicksort algorithm becomes
algorithm quicksort(A, lo, hi) is
if lo < hi then
p := pivot(A, lo, hi)
left, right := partition(A, p, lo, hi) // note: multiple return values
quicksort(A, lo, left - 1)
quicksort(A, right + 1, hi)

The partition algorithm returns indices to the first ('leftmost') and to the last ('rightmost')
item of the middle partition. Every item of the middle partition is equal to p and is therefore sorted. Consequently, the items of the middle partition need not be included in the recursive calls to quicksort.
The best case for the algorithm now occurs when all elements are equal (or are chosen from
a small set of k ≪ n elements). In the case of all equal elements, the modified quicksort will
perform only two recursive calls on empty subarrays and thus finish in linear time (assuming
the partition subroutine takes no longer than linear time).
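A three-way ("fat partition") quicksort might be sketched in Python as follows (illustrative names; partition3 is a Dutch-national-flag routine returning the bounds of the middle block, and the middle element is used as the pivot for illustration):

```python
def partition3(A, p, lo, hi):
    """Rearrange A[lo..hi] into < p, == p, > p; return (left, right), the first
    and last index of the == p block."""
    i, j, k = lo, lo, hi
    while j <= k:
        if A[j] < p:
            A[i], A[j] = A[j], A[i]
            i += 1
            j += 1
        elif A[j] > p:
            A[j], A[k] = A[k], A[j]
            k -= 1
        else:
            j += 1
    return i, k

def quicksort3(A, lo=0, hi=None):
    """Three-way quicksort: runs in linear time when many elements are equal."""
    if hi is None:
        hi = len(A) - 1
    if lo < hi:
        left, right = partition3(A, A[(lo + hi) // 2], lo, hi)
        quicksort3(A, lo, left - 1)     # the middle block is already in place
        quicksort3(A, right + 1, hi)
```

On an array of all-equal elements, partition3 returns (lo, hi) and both recursive calls receive empty ranges, giving the linear-time best case described above.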

Optimizations

Two other important optimizations, also suggested by Sedgewick and widely used in prac-
tice, are:[19][20]

54 https://en.wikipedia.org/wiki/Quadratic_time
55 https://en.wikipedia.org/wiki/Dutch_national_flag_problem
56 https://en.wikipedia.org/wiki/Qsort
57 https://en.wikipedia.org/wiki/Version_7_Unix


• To make sure at most O(log n) space is used, recur58 first into the smaller side of the
partition, then use a tail call59 to recur into the other, or update the parameters to no
longer include the now sorted smaller side, and iterate to sort the larger side.
• When the number of elements is below some threshold (perhaps ten elements), switch
to a non-recursive sorting algorithm such as insertion sort60 that performs fewer swaps,
comparisons or other operations on such small arrays. The ideal 'threshold' will vary
based on the details of the specific implementation.
• An older variant of the previous optimization: when the number of elements is less than
the threshold k, simply stop; then after the whole array has been processed, perform inser-
tion sort on it. Stopping the recursion early leaves the array k-sorted, meaning that each
element is at most k positions away from its final sorted position. In this case, insertion
sort takes O(kn) time to finish the sort, which is linear if k is a constant.[21][14]:117 Com-
pared to the ”many small sorts” optimization, this version may execute fewer instructions,
but it makes suboptimal use of the cache memories61 in modern computers.[22]
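The first two optimizations can be sketched together in Python (an illustrative version; the cutoff value 10 and the Lomuto partition are arbitrary choices here):

```python
CUTOFF = 10   # illustrative threshold for switching to insertion sort

def insertion_sort(A, lo, hi):
    for i in range(lo + 1, hi + 1):
        x, j = A[i], i
        while j > lo and A[j - 1] > x:
            A[j] = A[j - 1]
            j -= 1
        A[j] = x

def lomuto_partition(A, lo, hi):        # last element as pivot
    pivot, i = A[hi], lo
    for j in range(lo, hi):
        if A[j] < pivot:
            A[i], A[j] = A[j], A[i]
            i += 1
    A[i], A[hi] = A[hi], A[i]
    return i

def quicksort_opt(A, lo=0, hi=None):
    if hi is None:
        hi = len(A) - 1
    while lo < hi:
        if hi - lo + 1 <= CUTOFF:       # small range: insertion sort instead
            insertion_sort(A, lo, hi)
            return
        p = lomuto_partition(A, lo, hi)
        # Recurse into the smaller side; loop on the larger side, so the
        # recursion depth is bounded by O(log n) even on bad inputs.
        if p - lo < hi - p:
            quicksort_opt(A, lo, p - 1)
            lo = p + 1
        else:
            quicksort_opt(A, p + 1, hi)
            hi = p - 1
```

Because the loop always handles the larger side iteratively, each recursive call covers at most half the current range, which is what bounds the stack to O(log n).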

Parallelization

Quicksort's divide-and-conquer formulation makes it amenable to parallelization62 using
task parallelism63 . The partitioning step is accomplished through the use of a parallel
prefix sum64 algorithm to compute an index for each array element in its section of the
partitioned array.[23][24] Given an array of size n, the partitioning step performs O(n) work
in O(log n) time and requires O(n) additional scratch space. After the array has been
partitioned, the two partitions can be sorted recursively in parallel. Assuming an ideal
choice of pivots, parallel quicksort sorts an array of size n in O(n log n) work in O(log² n)
time using O(n) additional space.
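The prefix-sum idea can be illustrated sequentially in Python (a sketch; each comprehension or loop below corresponds to a data-parallel map, scan, or scatter step, and the function name is ours):

```python
def partition_with_prefix_sums(A, pivot):
    """Stable out-of-place partition of A around pivot, driven by prefix sums.
    Returns (out, k): the partitioned copy and the size of the "small" block."""
    flags = [1 if x < pivot else 0 for x in A]           # parallel map
    # Exclusive prefix sum: small_dest[i] = number of "small" elements before i.
    small_dest, total_small = [], 0
    for f in flags:                                      # parallel scan
        small_dest.append(total_small)
        total_small += f
    # "Large" elements go after all small ones, in their original order.
    big_dest, seen_big = [], 0
    for f in flags:                                      # parallel scan
        big_dest.append(total_small + seen_big)
        seen_big += 1 - f
    out = [None] * len(A)                                # scratch space
    for i, x in enumerate(A):                            # parallel scatter
        out[small_dest[i] if flags[i] else big_dest[i]] = x
    return out, total_small
```

Every element's destination index is computed independently from the prefix sums, which is why the scatter step parallelizes: this is the O(n) work, O(log n) time pattern described above, at the cost of O(n) scratch space.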
Quicksort has some disadvantages when compared to alternative sorting algorithms, like
merge sort65 , which complicate its efficient parallelization. The depth of quicksort's divide-
and-conquer tree directly impacts the algorithm's scalability, and this depth is highly de-
pendent on the algorithm's choice of pivot. Additionally, it is difficult to parallelize the
partitioning step efficiently in-place. The use of scratch space simplifies the partitioning
step, but increases the algorithm's memory footprint and constant overheads.
Other more sophisticated parallel sorting algorithms can achieve even better time bounds.[25]
For example, in 1991 David Powers described a parallelized quicksort (and a related radix
sort66 ) that can operate in O(log n) time on a CRCW67 (concurrent read and concurrent
write) PRAM68 (parallel random-access machine) with n processors by performing parti-
tioning implicitly.[26]

58 https://en.wiktionary.org/wiki/recurse
59 https://en.wikipedia.org/wiki/Tail_call
60 https://en.wikipedia.org/wiki/Insertion_sort
61 https://en.wikipedia.org/wiki/Cache_memory
62 https://en.wikipedia.org/wiki/Parallel_algorithm
63 https://en.wikipedia.org/wiki/Task_parallelism
64 https://en.wikipedia.org/wiki/Prefix_sum
65 https://en.wikipedia.org/wiki/Merge_sort
66 https://en.wikipedia.org/wiki/Radix_sort
67 https://en.wikipedia.org/wiki/Parallel_random-access_machine#Read/write_conflicts
68 https://en.wikipedia.org/wiki/Parallel_Random_Access_Machine


7.3 Formal analysis

7.3.1 Worst-case analysis

The most unbalanced partition occurs when one of the sublists returned by the partitioning
routine is of size n − 1.[27] This may occur if the pivot happens to be the smallest or
largest element in the list, or in some implementations (e.g., the Lomuto partition scheme
as described above) when all the elements are equal.
If this happens repeatedly in every partition, then each recursive call processes a list of size
one less than the previous list. Consequently, we can make n − 1 nested calls before we
reach a list of size 1. This means that the call tree69 is a linear chain of n − 1 nested calls.

The ith call does O(n − i) work to do the partition, and ∑_{i=0}^{n} (n − i) = O(n²), so in that case Quicksort takes O(n²) time.
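This quadratic behavior can be observed directly with an instrumented Lomuto quicksort (an illustrative sketch; the comparison counter is not part of the algorithm):

```python
def quicksort_counting(A, lo=0, hi=None, counter=None):
    """Lomuto quicksort (last element as pivot) that tallies element
    comparisons in counter[0] and returns the total."""
    if hi is None:
        hi = len(A) - 1
    if counter is None:
        counter = [0]
    if lo < hi:
        pivot, i = A[hi], lo
        for j in range(lo, hi):
            counter[0] += 1            # one comparison against the pivot
            if A[j] < pivot:
                A[i], A[j] = A[j], A[i]
                i += 1
        A[i], A[hi] = A[hi], A[i]
        quicksort_counting(A, lo, i - 1, counter)
        quicksort_counting(A, i + 1, hi, counter)
    return counter[0]
```

On an already-sorted array of n elements, the last-element pivot makes every partition maximally unbalanced, and the count is exactly (n − 1) + (n − 2) + ⋯ + 1 = n(n − 1)/2.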

7.3.2 Best-case analysis

In the most balanced case, each time we perform a partition we divide the list into two nearly
equal pieces. This means each recursive call processes a list of half the size. Consequently,
we can make only log2 n nested calls before we reach a list of size 1. This means that the
depth of the call tree70 is log2 n. But no two calls at the same level of the call tree process
the same part of the original list; thus, each level of calls needs only O(n) time all together
(each call has some constant overhead, but since there are only O(n) calls at each level, this
is subsumed in the O(n) factor). The result is that the algorithm uses only O(n log n) time.

7.3.3 Average-case analysis

To sort an array of n distinct elements, quicksort takes O(n log n) time in expectation,
averaged over all n! permutations of n elements with equal probability71 . We list here three
common proofs to this claim providing different insights into quicksort's workings.

Using percentiles

If each pivot has rank somewhere in the middle 50 percent, that is, between the 25th
percentile72 and the 75th percentile, then it splits the elements with at least 25% and at
most 75% on each side. If we could consistently choose such pivots, we would only have
to split the list at most log4/3 n times before reaching lists of size 1, yielding an O(n log n)
algorithm.
When the input is a random permutation, the pivot has a random rank, and so it is not
guaranteed to be in the middle 50 percent. However, when we start from a random per-
mutation, in each recursive call the pivot has a random rank in its list, and so it is in the

69 https://en.wikipedia.org/wiki/Call_stack
70 https://en.wikipedia.org/wiki/Call_stack
71 https://en.wikipedia.org/wiki/Uniform_distribution_(discrete)
72 https://en.wikipedia.org/wiki/Percentile


middle 50 percent about half the time. That is good enough. Imagine that you flip a coin:
heads means that the rank of the pivot is in the middle 50 percent, tails means that it isn't. Imagine that you are flipping a coin over and over until you get k heads. Although this could take a long time, on average only 2k flips are required, and the chance that you won't get k heads after 100k flips is vanishingly small (this can be made rigorous using Chernoff
bounds73 ). By the same argument, Quicksort's recursion will terminate on average at a call
depth of only 2 log4/3 n. But if its average call depth is O(log n), and each level of the call
tree processes at most n elements, the total amount of work done on average is the product,
O(n log n). The algorithm does not have to verify that the pivot is in the middle half—if
we hit it any constant fraction of the times, that is enough for the desired complexity.

Using recurrences

An alternative approach is to set up a recurrence relation74 for the T(n) factor, the time
needed to sort a list of size n. In the most unbalanced case, a single quicksort call involves
O(n) work plus two recursive calls on lists of size 0 and n−1, so the recurrence relation is
T (n) = O(n) + T (0) + T (n − 1) = O(n) + T (n − 1).
This is the same relation as for insertion sort75 and selection sort76 , and it solves to worst
case T(n) = O(n²).
In the most balanced case, a single quicksort call involves O(n) work plus two recursive calls
on lists of size n/2, so the recurrence relation is
T (n) = O(n) + 2T (n/2).
The master theorem for divide-and-conquer recurrences77 tells us that T(n) = O(n log n).
The outline of a formal proof of the O(n log n) expected time complexity follows. Assume
that there are no duplicates, as duplicates could be handled with linear-time pre- and post-processing, or regarded as cases easier than the one analyzed. When the input is a random
permutation, the rank of the pivot is uniform random from 0 to n − 1. Then the resulting
parts of the partition have sizes i and n − i − 1, and i is uniform random from 0 to n −
1. So, averaging over all possible splits and noting that the number of comparisons for the
partition is n − 1, the average number of comparisons over all permutations of the input
sequence can be estimated accurately by solving the recurrence relation:

C(n) = n − 1 + (1/n) ∑_{i=0}^{n−1} (C(i) + C(n − i − 1)) = n − 1 + (2/n) ∑_{i=0}^{n−1} C(i)

nC(n) = n(n − 1) + 2 ∑_{i=0}^{n−1} C(i)

73 https://en.wikipedia.org/wiki/Chernoff_bound
74 https://en.wikipedia.org/wiki/Recurrence_relation
75 https://en.wikipedia.org/wiki/Insertion_sort
76 https://en.wikipedia.org/wiki/Selection_sort
77 https://en.wikipedia.org/wiki/Master_theorem_(analysis_of_algorithms)


nC(n) − (n − 1)C(n − 1) = n(n − 1) − (n − 1)(n − 2) + 2C(n − 1)


nC(n) = (n + 1)C(n − 1) + 2n − 2
C(n)/(n + 1) = C(n − 1)/n + 2/(n + 1) − 2/(n(n + 1)) ≤ C(n − 1)/n + 2/(n + 1)
 = C(n − 2)/(n − 1) + 2/n − 2/((n − 1)n) + 2/(n + 1) ≤ C(n − 2)/(n − 1) + 2/n + 2/(n + 1)
 ⋮
 = C(1)/2 + ∑_{i=2}^{n} 2/(i + 1) ≤ 2 ∑_{i=1}^{n−1} 1/i ≈ 2 ∫_1^n (1/x) dx = 2 ln n
Solving the recurrence gives C(n) = 2n ln n ≈ 1.39n log₂ n.
This means that, on average, quicksort performs only about 39% worse than in its best
case. In this sense, it is closer to the best case than the worst case. A comparison sort78
cannot use fewer than log₂(n!) comparisons on average to sort n items (as explained in the
article Comparison sort79 ) and in case of large n, Stirling's approximation80 yields log₂(n!)
≈ n(log₂ n − log₂e), so quicksort is not much worse than an ideal comparison sort. This fast
average runtime is another reason for quicksort's practical dominance over other sorting
algorithms.
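The average-comparison recurrence can also be checked numerically. The sketch below evaluates C(n) directly from the recurrence above and compares it with 2(n + 1)Hₙ − 4n, a standard closed form for this recurrence (the closed form is our addition, not taken from the text above):

```python
def avg_comparisons(n_max):
    # C(0) = 0;  C(n) = n - 1 + (2/n) * sum_{i=0}^{n-1} C(i)
    c = [0.0]
    running_sum = 0.0
    for n in range(1, n_max + 1):
        c.append(n - 1 + 2.0 * running_sum / n)
        running_sum += c[n]
    return c

n = 1000
c = avg_comparisons(n)
harmonic = sum(1.0 / k for k in range(1, n + 1))   # harmonic number H_n
closed_form = 2 * (n + 1) * harmonic - 4 * n       # 2(n+1)H_n - 4n
print(c[n], closed_form)  # the two values agree up to rounding
```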

Using a binary search tree

To each execution of quicksort corresponds the following binary search tree81 (BST): the
initial pivot is the root node; the pivot of the left half is the root of the left subtree, the pivot
of the right half is the root of the right subtree, and so on. The number of comparisons of the
execution of quicksort equals the number of comparisons during the construction of the BST
by a sequence of insertions. So, the average number of comparisons for randomized quicksort
equals the average cost of constructing a BST when the values inserted (x1 , x2 , . . . , xn ) form
a random permutation.
Consider a BST created by insertion of a sequence (x1 , x2 , . . . , xn ) of values forming a random
permutation. Let C denote the cost of creation of the BST. We have C = ∑_i ∑_{j<i} c_{i,j}, where
c_{i,j} is a binary random variable expressing whether during the insertion of xi there was a
comparison to xj .
By linearity of expectation82 , the expected value E[C] of C is E[C] = ∑_i ∑_{j<i} Pr(c_{i,j} = 1).

Fix i and j<i. The values x1 , x2 , . . . , xj , once sorted, define j+1 intervals. The core structural
observation is that xi is compared to xj in the algorithm if and only if xi falls inside one of
the two intervals adjacent to xj .

78 https://en.wikipedia.org/wiki/Comparison_sort
79 https://en.wikipedia.org/wiki/Comparison_sort#Lower_bound_for_the_average_number_of_comparisons
80 https://en.wikipedia.org/wiki/Stirling%27s_approximation
81 https://en.wikipedia.org/wiki/Binary_search_tree
82 https://en.wikipedia.org/wiki/Expected_value#Linearity


Observe that since (x1 , x2 , . . . , xn ) is a random permutation, (x1 , x2 , . . . , xj , xi ) is also a
random permutation, so the probability that xi is adjacent to xj is exactly 2/(j + 1).
We end with a short calculation:
E[C] = ∑_i ∑_{j<i} 2/(j + 1) = O(∑_i log i) = O(n log n).
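The inner sum over j can be made explicit via harmonic numbers (Hᵢ denotes the i-th harmonic number; the standard bound Hᵢ ≤ 1 + ln i is used — a spelled-out intermediate step):

```latex
\sum_{j<i} \frac{2}{j+1} = 2\left(H_i - 1\right) \le 2 \ln i.
```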

7.3.4 Space complexity

The space used by quicksort depends on the version used.


The in-place version of quicksort has a space complexity of O(log n), even in the worst case,
when it is carefully implemented using the following strategies:
• In-place partitioning is used. This unstable partition requires O(1) space.
• After partitioning, the partition with the fewest elements is (recursively) sorted first,
requiring at most O(log n) space. Then the other partition is sorted using tail recursion83
or iteration, which doesn't add to the call stack. This idea, as discussed above, was
described by R. Sedgewick84 , and keeps the stack depth bounded by O(log n).[18][21]
Quicksort with in-place and unstable partitioning uses only constant additional space before
making any recursive call. Quicksort must store a constant amount of information for each
nested recursive call. Since the best case makes at most O(log n) nested recursive calls, it
uses O(log n) space. However, without Sedgewick's trick to limit the recursive calls, in the
worst case quicksort could make O(n) nested recursive calls and need O(n) auxiliary space.
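A minimal Python sketch of this strategy (recurse on the smaller partition, iterate on the larger); Lomuto partitioning is used here for brevity, an illustrative choice rather than the only option:

```python
def partition(a, lo, hi):
    # Lomuto partition: a[hi] is the pivot; returns its final index.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo=0, hi=None):
    # Recurse only into the smaller partition and loop on the larger
    # one; this keeps the stack depth bounded by O(log n).
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort(a, lo, p - 1)
            lo = p + 1
        else:
            quicksort(a, p + 1, hi)
            hi = p - 1
```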
From a bit complexity viewpoint, variables such as lo and hi do not use constant space; it
takes O(log n) bits to index into a list of n items. Because there are such variables in every
stack frame, quicksort using Sedgewick's trick requires O((log n)²) bits of space. This space
requirement isn't too terrible, though, since if the list contained distinct elements, it would
need at least O(n log n) bits of space.
Another, less common, not-in-place, version of quicksort uses O(n) space for working storage
and can implement a stable sort. The working storage allows the input array to be easily
partitioned in a stable manner and then copied back to the input array for successive
recursive calls. Sedgewick's optimization is still appropriate.

7.4 Relation to other algorithms

Quicksort is a space-optimized version of the binary tree sort85 . Instead of inserting items
sequentially into an explicit tree, quicksort organizes them concurrently into a tree that is
implied by the recursive calls. The algorithms make exactly the same comparisons, but in a
different order. An often desirable property of a sorting algorithm86 is stability – that is the

83 https://en.wikipedia.org/wiki/Tail_recursion
84 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
85 https://en.wikipedia.org/wiki/Binary_tree_sort
86 https://en.wikipedia.org/wiki/Sorting_algorithm


order of elements that compare equal is not changed, allowing the order of multikey tables
(e.g. directory or folder listings) to be controlled in a natural way. This property is hard to maintain
for in situ (or in place) quicksort (that uses only constant additional space for pointers and
buffers, and O(log n) additional space for the management of explicit or implicit recursion).
For variant quicksorts involving extra memory due to representations using pointers (e.g.
lists or trees) or files (effectively lists), it is trivial to maintain stability. The more complex,
or disk-bound, data structures tend to increase time cost, in general making increasing use
of virtual memory or disk.
The most direct competitor of quicksort is heapsort87 . Heapsort's running time is O(n log n),
but heapsort's average running time is usually considered slower than in-place quicksort.[28]
This result is debatable; some publications indicate the opposite.[29][30] Introsort88 is a vari-
ant of quicksort that switches to heapsort when a bad case is detected to avoid quicksort's
worst-case running time.
Quicksort also competes with merge sort89 , another O(n log n) sorting algorithm. Mergesort
is a stable sort90 , unlike standard in-place quicksort and heapsort, and has excellent worst-
case performance. The main disadvantage of mergesort is that, when operating on arrays,
efficient implementations require O(n) auxiliary space, whereas the variant of quicksort with
in-place partitioning and tail recursion uses only O(log n) space.
Mergesort works very well on linked lists91 , requiring only a small, constant amount of
auxiliary storage. Although quicksort can be implemented as a stable sort using linked
lists, it will often suffer from poor pivot choices without random access. Mergesort is also
the algorithm of choice for external sorting92 of very large data sets stored on slow-to-access
media such as disk storage93 or network-attached storage94 .
Bucket sort95 with two buckets is very similar to quicksort; the pivot in this case is effec-
tively the value in the middle of the value range, which does well on average for uniformly
distributed inputs.

7.4.1 Selection-based pivoting

A selection algorithm96 chooses the kth smallest of a list of numbers; this is an easier problem
in general than sorting. One simple but effective selection algorithm works nearly in the
same manner as quicksort, and is accordingly known as quickselect97 . The difference is that
instead of making recursive calls on both sublists, it only makes a single tail-recursive call on
the sublist that contains the desired element. This change lowers the average complexity to
linear or O(n) time, which is optimal for selection, but the sorting algorithm is still O(n2 ).

87 https://en.wikipedia.org/wiki/Heapsort
88 https://en.wikipedia.org/wiki/Introsort
89 https://en.wikipedia.org/wiki/Merge_sort
90 https://en.wikipedia.org/wiki/Stable_sort
91 https://en.wikipedia.org/wiki/Linked_list
92 https://en.wikipedia.org/wiki/External_sorting
93 https://en.wikipedia.org/wiki/Disk_storage
94 https://en.wikipedia.org/wiki/Network-attached_storage
95 https://en.wikipedia.org/wiki/Bucket_sort
96 https://en.wikipedia.org/wiki/Selection_algorithm
97 https://en.wikipedia.org/wiki/Quickselect


A variant of quickselect, the median of medians98 algorithm, chooses pivots more carefully,
ensuring that the pivots are near the middle of the data (between the 30th and 70th per-
centiles), and thus has guaranteed linear time – O(n). This same pivot strategy can be used
to construct a variant of quicksort (median of medians quicksort) with O(n log n) time.
However, the overhead of choosing the pivot is significant, so this is generally not used in
practice.
More abstractly, given an O(n) selection algorithm, one can use it to find the ideal pivot
(the median) at every step of quicksort and thus produce a sorting algorithm with O(n log
n) running time. Practical implementations of this variant are considerably slower on average,
but they are of theoretical interest because they show that an optimal selection algorithm can
yield an optimal sorting algorithm.

7.4.2 Variants

Multi-pivot quicksort

Instead of partitioning into two subarrays using a single pivot, multi-pivot quicksort (also
multiquicksort[22] ) partitions its input into some s number of subarrays using s − 1 piv-
ots. While the dual-pivot case (s = 3) was considered by Sedgewick and others already
in the mid-1970s, the resulting algorithms were not faster in practice than the ”classical”
quicksort.[31] A 1999 assessment of a multiquicksort with a variable number of pivots, tuned
to make efficient use of processor caches, found it to increase the instruction count by
some 20%, but simulation results suggested that it would be more efficient on very large
inputs.[22] A version of dual-pivot quicksort developed by Yaroslavskiy in 2009[10] turned
out to be fast enough to warrant implementation in Java 799 , as the standard algorithm to
sort arrays of primitives100 (sorting arrays of objects101 is done using Timsort102 ).[32] The
performance benefit of this algorithm was subsequently found to be mostly related to cache
performance,[33] and experimental results indicate that the three-pivot variant may perform
even better on modern machines.[34][35]

External quicksort

For files on magnetic tape, this is the same as regular quicksort except that the pivot is replaced by a
buffer. First, the M/2 first and last elements are read into the buffer and sorted, then the
next element from the beginning or end is read to balance writing. If the next element is
less than the least of the buffer, write it to available space at the beginning. If greater than
the greatest, write it to the end. Otherwise write the greatest or least of the buffer, and
put the next element in the buffer. Keep the maximum lower and minimum upper keys
written to avoid resorting middle elements that are in order. When done, write the buffer.
Recursively sort the smaller partition, and loop to sort the remaining partition. This is

98 https://en.wikipedia.org/wiki/Median_of_medians
99 https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7_(July_28,_2011)
100 https://en.wikipedia.org/wiki/Primitive_data_type
101 https://en.wikipedia.org/wiki/Object_(computer_science)
102 https://en.wikipedia.org/wiki/Timsort


a kind of three-way quicksort in which the middle partition (buffer) represents a sorted
subarray of elements that are approximately equal to the pivot.

Three-way radix quicksort

Main article: Multi-key quicksort103 This algorithm is a combination of radix sort104 and
quicksort. Pick an element from the array (the pivot) and consider the first character (key)
of the string (multikey). Partition the remaining elements into three sets: those whose corre-
sponding character is less than, equal to, and greater than the pivot's character. Recursively
sort the ”less than” and ”greater than” partitions on the same character. Recursively sort
the ”equal to” partition by the next character (key). Given that we sort using bytes or words of
length W bits, the best case is O(KN) and the worst case O(2^K N), or at least O(N²) as for
standard quicksort, given that for unique keys N < 2^K, and K is a hidden constant in all standard
comparison sort105 algorithms including quicksort. This is a kind of three-way quicksort
in which the middle partition represents a (trivially) sorted subarray of elements that are
exactly equal to the pivot.
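The idea can be sketched in Python as follows (an out-of-place version for clarity; real implementations partition in place, and the function name is our own):

```python
def multikey_sort(strings, d=0):
    """Three-way radix quicksort on a list of strings; d is the
    current character position (the key)."""
    if len(strings) <= 1:
        return strings
    # Key for position d; -1 sorts exhausted (shorter) strings first.
    def key(s):
        return ord(s[d]) if d < len(s) else -1
    pivot = key(strings[len(strings) // 2])
    less = [s for s in strings if key(s) < pivot]
    equal = [s for s in strings if key(s) == pivot]
    greater = [s for s in strings if key(s) > pivot]
    # Recurse on "less"/"greater" at the same position, and on
    # "equal" at the next position (unless those strings are
    # exhausted, in which case they are all identical).
    if pivot >= 0:
        equal = multikey_sort(equal, d + 1)
    return multikey_sort(less, d) + equal + multikey_sort(greater, d)
```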

Quick radix sort

Also developed by Powers as an o(K) parallel PRAM106 algorithm. This is again a combi-
nation of radix sort107 and quicksort but the quicksort left/right partition decision is made
on successive bits of the key, and is thus O(KN) for N K-bit keys. All comparison sort108
algorithms implicitly assume the transdichotomous model109 with K in Θ(log N), as if K is
smaller we can sort in O(N) time using a hash table or integer sorting110 . If K ≫ log N but
elements are unique within O(log N) bits, the remaining bits will not be looked at by either
quicksort or quick radix sort. Failing that, all comparison sorting algorithms will also have
the same overhead of looking through O(K) relatively useless bits but quick radix sort will
avoid the worst case O(N²) behaviours of standard quicksort and radix quicksort, and will
be faster even in the best case of those comparison algorithms under these conditions of
uniqueprefix(K) ≫ log N. See Powers[36] for further discussion of the hidden overheads in
comparison, radix and parallel sorting.

BlockQuicksort

In any comparison-based sorting algorithm, minimizing the number of comparisons requires


maximizing the amount of information gained from each comparison, meaning that the com-
parison results are unpredictable. This causes frequent branch mispredictions111 , limiting

103 https://en.wikipedia.org/wiki/Multi-key_quicksort
104 https://en.wikipedia.org/wiki/Radix_sort
105 https://en.wikipedia.org/wiki/Comparison_sort
106 https://en.wikipedia.org/wiki/Parallel_random-access_machine
107 https://en.wikipedia.org/wiki/Radix_sort
108 https://en.wikipedia.org/wiki/Comparison_sort
109 https://en.wikipedia.org/wiki/Transdichotomous_model
110 https://en.wikipedia.org/wiki/Integer_sorting
111 https://en.wikipedia.org/wiki/Branch_misprediction


performance.[37] BlockQuicksort[38] rearranges the computations of quicksort to convert un-


predictable branches to data dependencies112 . When partitioning, the input is divided into
moderate-sized blocks113 (which fit easily into the data cache114 ), and two arrays are filled
with the positions of elements to swap. (To avoid conditional branches, the position is
unconditionally stored at the end of the array, and the index of the end is incremented
if a swap is needed.) A second pass exchanges the elements at the positions indicated in
the arrays. Both loops have only one conditional branch, a test for termination, which is
usually taken.
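The unconditional-store trick can be sketched in Python as follows (a toy illustration of the control flow only — the real benefit appears in branch-predicted machine code; the helper name is our own):

```python
def positions_to_swap(block, pivot):
    # For each element, unconditionally store its index at the current
    # end of the positions array; advance the end only when the element
    # needs to move (here: elements >= pivot found on the left side).
    # The comparison result is used as data, not as a branch.
    positions = [0] * len(block)
    num = 0
    for j, x in enumerate(block):
        positions[num] = j          # unconditional store
        num += int(x >= pivot)      # data-dependent increment
    return positions[:num]

print(positions_to_swap([5, 1, 7, 3, 9], 4))  # → [0, 2, 4]
```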

Partial and incremental quicksort

Main article: Partial sorting115 Several variants of quicksort exist that separate the k small-
est or largest elements from the rest of the input.

7.4.3 Generalization

Richard Cole116 and David C. Kandathil, in 2004, discovered a one-parameter family of


sorting algorithms, called partition sorts, which on average (with all input orderings equally
likely) perform at most n log n + O(n) comparisons (close to the information theoretic lower
bound) and Θ(n log n) operations; at worst they perform Θ(n log² n) comparisons (and also
operations); these are in-place, requiring only additional O(log n) space. Practical efficiency
and smaller variance in performance were demonstrated against optimised quicksorts (of
Sedgewick117 and Bentley118 -McIlroy119 ).[39]

7.5 See also

• Computer programming portal120


• Introsort121 − Hybrid sorting algorithm

112 https://en.wikipedia.org/wiki/Data_dependencies
113 https://en.wikipedia.org/wiki/Loop_blocking
114 https://en.wikipedia.org/wiki/Data_cache
115 https://en.wikipedia.org/wiki/Partial_sorting
116 https://en.wikipedia.org/wiki/Richard_J._Cole
117 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
118 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
119 https://en.wikipedia.org/wiki/Douglas_McIlroy
120 https://en.wikipedia.org/wiki/Portal:Computer_programming
121 https://en.wikipedia.org/wiki/Introsort


7.6 Notes
1. ”S A H”122 . C H M. A  
123  3 A 2015. R 22 A 2015.
2. H, C. A. R.124 (1961). ”A 64: Q”. Comm. ACM125 . 4 (7):
321. doi126 :10.1145/366622.366644127 .
3. S, S S.128 (2008). The Algorithm Design Manual129 . S. . 129.
ISBN130 978-1-84800-069-8131 .
4. S, L. (2009). ”I: A   C.A.R. H”. Comm.
ACM132 . 52 (3): 38–41. doi133 :10.1145/1467247.1467261134 .
5. ”M Q   S T H,    Q-
”135 . M M D B. 15 M 2015.
6. B, J L.; MI, M. D (1993). ”E  
”136 . Software—Practice and Experience. 23 (11): 1249–1265. Cite-
SeerX137 10.1.1.14.8162138 . doi139 :10.1002/spe.4380231105140 .
7. V E, M. H. (1 N 1970). ”A 402: I-
  E  Q”. Commun. ACM. 13 (11): 693–694.
doi141 :10.1145/362790.362803142 . ISSN143 0001-0782144 .
8. B, J145 (2007). ”T    I  ”. I
O, A; W, G (.). Beautiful Code: Leading Programmers Explain
How They Think. O'Reilly Media. p. 30. ISBN146 978-0-596-51004-6147 .
9. ”Q P: H . L”148 . cs.stackexchange.com. Re-
trieved 3 August 2015.

122 https://web.archive.org/web/20150403184558/http://www.computerhistory.org/fellowawards/hall/bios/Antony%2CHoare/
123 http://www.computerhistory.org/fellowawards/hall/bios/Antony,Hoare/
124 https://en.wikipedia.org/wiki/Tony_Hoare
125 https://en.wikipedia.org/wiki/Communications_of_the_ACM
126 https://en.wikipedia.org/wiki/Doi_(identifier)
127 https://doi.org/10.1145%2F366622.366644
128 https://en.wikipedia.org/wiki/Steven_Skiena
129 https://books.google.com/books?id=7XUSn0IKQEgC
130 https://en.wikipedia.org/wiki/ISBN_(identifier)
131 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
132 https://en.wikipedia.org/wiki/Communications_of_the_ACM
133 https://en.wikipedia.org/wiki/Doi_(identifier)
134 https://doi.org/10.1145%2F1467247.1467261
135 http://anothercasualcoder.blogspot.com/2015/03/my-quickshort-interview-with-sir-tony.html
136 http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162
137 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
138 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162
139 https://en.wikipedia.org/wiki/Doi_(identifier)
140 https://doi.org/10.1002%2Fspe.4380231105
141 https://en.wikipedia.org/wiki/Doi_(identifier)
142 https://doi.org/10.1145%2F362790.362803
143 https://en.wikipedia.org/wiki/ISSN_(identifier)
144 http://www.worldcat.org/issn/0001-0782
145 https://en.wikipedia.org/wiki/Jon_Bentley_(computer_scientist)
146 https://en.wikipedia.org/wiki/ISBN_(identifier)
147 https://en.wikipedia.org/wiki/Special:BookSources/978-0-596-51004-6
148 https://cs.stackexchange.com/q/11550


10. Y, V (2009). ”D-P Q”149 (PDF).


A   150 (PDF)  2 O 2015.
11. ”R  Q  ..A   D-P
Q”151 . permalink.gmane.org. Retrieved 3 August 2015.
12. ”J 7 A API ”152 . O. R 23 J 2018.
13. W, S.; N, M.; R, R.; L, U. (7 J 2013). Engineering Java
7's Dual Pivot Quicksort Using MaLiJAn. Proceedings. Society for Industrial and
Applied Mathematics. pp. 55–69. doi153 :10.1137/1.9781611972931.5154 . ISBN155 978-
1-61197-253-5156 .
14. J B (1999). Programming Pearls. Addison-Wesley Professional.
15. C, T H.157 ; L, C E.158 ; R, R L.159 ;
S, C160 (2009) [1990]. ”Q”. Introduction to Algorithms161 (3
.). MIT P  MG-H. . 170–190. ISBN162 0-262-03384-4163 .
16. W, S (2012). ”J 7' D P Q”164 . T
U K.
17. H, C. A. R.165 (1 J 1962). ”Q”166 . The Computer Journal.
5 (1): 10–16. doi167 :10.1093/comjnl/5.1.10168 . ISSN169 0010-4620170 .
18. S, R171 (1 S 1998). Algorithms in C: Fundamentals,
Data Structures, Sorting, Searching, Parts 1–4172 (3 .). P E.
ISBN173 978-81-317-1291-7174 . R 27 N 2012.
19. qsort.c in GNU libc175 : [1]176 , [2]177

149 https://web.archive.org/web/20151002230717/http://iaroslavski.narod.ru/quicksort/DualPivotQuicksort.pdf
150 http://iaroslavski.narod.ru/quicksort/DualPivotQuicksort.pdf
151 http://permalink.gmane.org/gmane.comp.java.openjdk.core-libs.devel/2628
152 https://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort(int%5b%5d)
153 https://en.wikipedia.org/wiki/Doi_(identifier)
154 https://doi.org/10.1137%2F1.9781611972931.5
155 https://en.wikipedia.org/wiki/ISBN_(identifier)
156 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61197-253-5
157 https://en.wikipedia.org/wiki/Thomas_H._Cormen
158 https://en.wikipedia.org/wiki/Charles_E._Leiserson
159 https://en.wikipedia.org/wiki/Ron_Rivest
160 https://en.wikipedia.org/wiki/Clifford_Stein
161 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
162 https://en.wikipedia.org/wiki/ISBN_(identifier)
163 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03384-4
164 https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3463
165 https://en.wikipedia.org/wiki/Tony_Hoare
166 http://comjnl.oxfordjournals.org/content/5/1/10
167 https://en.wikipedia.org/wiki/Doi_(identifier)
168 https://doi.org/10.1093%2Fcomjnl%2F5.1.10
169 https://en.wikipedia.org/wiki/ISSN_(identifier)
170 http://www.worldcat.org/issn/0010-4620
171 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
172 https://books.google.com/books?id=ylAETlep0CwC
173 https://en.wikipedia.org/wiki/ISBN_(identifier)
174 https://en.wikipedia.org/wiki/Special:BookSources/978-81-317-1291-7
175 https://en.wikipedia.org/wiki/GNU_libc
176 https://www.cs.columbia.edu/~hgs/teaching/isp/hw/qsort.c
177 http://repo.or.cz/w/glibc.git/blob/HEAD:/stdlib/qsort.c


20. 178[permanent dead link179 ]

21. S, R.180 (1978). ”I Q ”. Comm.


ACM181 . 21 (10): 847–857. doi182 :10.1145/359619.359631183 .
22. LM, A; L, R E. (1999). ”T I  C
  P  S”. Journal of Algorithms. 31 (1): 66–104. Cite-
SeerX184 10.1.1.27.1788185 . doi186 :10.1006/jagm.1998.0985187 . Although saving small
subarrays until the end makes sense from an instruction count perspective, it is exactly
the wrong thing to do from a cache performance perspective.
23. Umut A. Acar, Guy E Blelloch, Margaret Reid-Miller, and Kanat Tangwongsan,
Quicksort and Sorting Lower Bounds188 , Parallel and Sequential Data Structures and
Algorithms. 2013.
24. B, C (2012). ”Q P  P S”189 . Dr.
Dobb's.
25. M, R; B, L (2000). Algorithms sequential & parallel: a
unified approach190 . P H. ISBN191 978-0-13-086373-7192 . R 27
N 2012.
26. P, D M. W. (1991). Parallelized Quicksort and Radixsort with Op-
timal Speedup. Proc. Int'l Conf. on Parallel Computing Technologies. Cite-
SeerX193 10.1.1.57.9071194 .
27. The other one may either have 1 element or be empty (have 0 elements), depending on whether the pivot is included in one of the subpartitions, as in Hoare's partitioning routine, or is excluded from both of them, as in Lomuto's routine.
28. E, S; WSS, A (7–8 J 2019). Worst-Case Ef-
ficient Sorting with QuickMergesort. ALENEX 2019: 21st Workshop on Al-
gorithm Engineering and Experiments. San Diego. arXiv195 :1811.99833196 .
doi :10.1137/1.9781611975499.1 . ISBN 978-1-61197-549-9200 . on small in-
197 198 199

stances Heapsort is already considerably slower than Quicksort (in our experiments
more than 30% for n = 210 ) and on larger instances it suffers from its poor cache

178 http://www.ugrad.cs.ubc.ca/~cs260/chnotes/ch6/Ch6CovCompiled.html
180 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
181 https://en.wikipedia.org/wiki/Communications_of_the_ACM
182 https://en.wikipedia.org/wiki/Doi_(identifier)
183 https://doi.org/10.1145%2F359619.359631
184 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
185 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.1788
186 https://en.wikipedia.org/wiki/Doi_(identifier)
187 https://doi.org/10.1006%2Fjagm.1998.0985
188 https://www.cs.cmu.edu/afs/cs/academic/class/15210-s13/www/lectures/lecture19.pdf
189 http://www.drdobbs.com/parallel/quicksort-partition-via-prefix-scan/240003109
190 https://books.google.com/books?id=dZoZAQAAIAAJ
191 https://en.wikipedia.org/wiki/ISBN_(identifier)
192 https://en.wikipedia.org/wiki/Special:BookSources/978-0-13-086373-7
193 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
194 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.9071
195 https://en.wikipedia.org/wiki/ArXiv_(identifier)
196 http://arxiv.org/abs/1811.00833
197 https://en.wikipedia.org/wiki/Doi_(identifier)
198 https://doi.org/10.1137%2F1.9781611975499.1
199 https://en.wikipedia.org/wiki/ISBN_(identifier)
200 https://en.wikipedia.org/wiki/Special:BookSources/978-1-61197-549-9


behavior (in our experiments more than eight times slower than Quicksort for sorting
2^28 elements).
29. H, P (2004). ”S ”201 . ... R-
 26 A 2010.
30. MK, D (D 2005). ”H, Q,  E”202 .
A203     1 A 2009. R 20 D
2019.
31. W, S; N, M E. (2012). Average case analysis of Java 7's
dual pivot quicksort. European Symposium on Algorithms. arXiv204 :1310.7409205 .
Bibcode206 :2013arXiv1310.7409W207 .
32. ”A”208 . Java Platform SE 7. Oracle. Retrieved 4 September 2014.
33. W, S (3 N 2015). ”W I D-P Q F?”.
X209 :1511.01138210 [.DS211 ].
34. K, S; L-O, A; Q, A;
M, J. I (2014). Multi-Pivot Quicksort: Theory and Experiments.
Proc. Workshop on Algorithm Engineering and Experiments (ALENEX).
doi212 :10.1137/1.9781611973198.6213 .
35. K, S; L-O, A; M, J. I; Q, A
(7 F 2014). Multi-Pivot Quicksort: Theory and Experiments214 (PDF) (S-
 ). W, O215 .
36. David M. W. Powers, Parallel Unification: Practical Complexity216 , Australasian
Computer Architecture Workshop, Flinders University, January 1995
37. K, K; S, P (11–13 S 2006). How Branch Mis-
predictions Affect Quicksort217 (PDF). ESA 2006: 14 A E S-
  A. Z218 . 219 :10.1007/11841036_69220 .

201 http://www.azillionmonkeys.com/qed/sort.html
202 http://www.inference.org.uk/mackay/sorting/sorting.html
203 https://web.archive.org/web/20090401163041/http://users.aims.ac.za/~mackay/sorting/sorting.html
204 https://en.wikipedia.org/wiki/ArXiv_(identifier)
205 http://arxiv.org/abs/1310.7409
206 https://en.wikipedia.org/wiki/Bibcode_(identifier)
207 https://ui.adsabs.harvard.edu/abs/2013arXiv1310.7409W
208 http://docs.oracle.com/javase/7/docs/api/java/util/Arrays.html#sort%28byte%5B%5D%29
209 https://en.wikipedia.org/wiki/ArXiv_(identifier)
210 http://arxiv.org/abs/1511.01138
211 http://arxiv.org/archive/cs.DS
212 https://en.wikipedia.org/wiki/Doi_(identifier)
213 https://doi.org/10.1137%2F1.9781611973198.6
214 https://lusy.fri.uni-lj.si/sites/lusy.fri.uni-lj.si/files/publications/alopez2014-seminar-qsort.pdf
215 https://en.wikipedia.org/wiki/Waterloo,_Ontario
216 http://david.wardpowers.info/Research/AI/papers/199501-ACAW-PUPC.pdf
217 https://www.cs.auckland.ac.nz/~mcw/Teaching/refs/sorting/quicksort-branch-prediction.pdf
218 https://en.wikipedia.org/wiki/Zurich
219 https://en.wikipedia.org/wiki/Doi_(identifier)
220 https://doi.org/10.1007%2F11841036_69


38. E, S; WSS, A (22 A 2016). ”BQ: H
B M '  Q”. X221 :1604.06697222
[.DS223 ].
39. Richard Cole, David C. Kandathil: ”The average case analysis of Partition sorts”224 ,
European Symposium on Algorithms, 14–17 September 2004, Bergen, Norway. Pub-
lished: Lecture Notes in Computer Science 3221, Springer Verlag, pp. 240–251.

7.7 References
• S, R.225 (1978). ”I Q ”. Comm. ACM226 .
21 (10): 847–857. doi227 :10.1145/359619.359631228 .
• D, B. C. (2006). ”A       -
 '  ' ”. Discrete Applied Mathematics. 154: 1–5.
doi229 :10.1016/j.dam.2005.07.005230 .
• H, C. A. R.231 (1961). ”A 63: P”. Comm. ACM232 . 4 (7):
321. doi233 :10.1145/366622.366642234 .
• H, C. A. R.235 (1961). ”A 65: F”. Comm. ACM236 . 4 (7): 321–322.
doi237 :10.1145/366622.366647238 .
• H, C. A. R.239 (1962). ”Q”. Comput. J.240 5 (1): 10–16.
doi241 :10.1093/comjnl/5.1.10242 . (Reprinted in Hoare and Jones: Essays in computing
science243 , 1989.)

221 https://en.wikipedia.org/wiki/ArXiv_(identifier)
222 http://arxiv.org/abs/1604.06697
223 http://arxiv.org/archive/cs.DS
224 http://www.cs.nyu.edu/cole/papers/part-sort.pdf
225 https://en.wikipedia.org/wiki/Robert_Sedgewick_(computer_scientist)
226 https://en.wikipedia.org/wiki/Communications_of_the_ACM
227 https://en.wikipedia.org/wiki/Doi_(identifier)
228 https://doi.org/10.1145%2F359619.359631
229 https://en.wikipedia.org/wiki/Doi_(identifier)
230 https://doi.org/10.1016%2Fj.dam.2005.07.005
231 https://en.wikipedia.org/wiki/Tony_Hoare
232 https://en.wikipedia.org/wiki/Communications_of_the_ACM
233 https://en.wikipedia.org/wiki/Doi_(identifier)
234 https://doi.org/10.1145%2F366622.366642
235 https://en.wikipedia.org/wiki/Tony_Hoare
236 https://en.wikipedia.org/wiki/Communications_of_the_ACM
237 https://en.wikipedia.org/wiki/Doi_(identifier)
238 https://doi.org/10.1145%2F366622.366647
239 https://en.wikipedia.org/wiki/Tony_Hoare
240 https://en.wikipedia.org/wiki/The_Computer_Journal
241 https://en.wikipedia.org/wiki/Doi_(identifier)
242 https://doi.org/10.1093%2Fcomjnl%2F5.1.10
243 http://portal.acm.org/citation.cfm?id=SERIES11430.63445


• M, D R.244 (1997). ”I S  S


A” . 245 Software: Practice and Experience. 27 (8): 983–993.
doi246 :10.1002/(SICI)1097-024X(199708)27:8<983::AID-SPE117>3.0.CO;2-#247 .
• Donald Knuth248 . The Art of Computer Programming, Volume 3: Sorting and Search-
ing, Third Edition. Addison-Wesley, 1997. ISBN249 0-201-89685-0250 . Pages 113–122 of
section 5.2.2: Sorting by Exchanging.
• Thomas H. Cormen251 , Charles E. Leiserson252 , Ronald L. Rivest253 , and Clifford Stein254 .
Introduction to Algorithms255 , Second Edition. MIT Press256 and McGraw-Hill257 , 2001.
ISBN258 0-262-03293-7259 . Chapter 7: Quicksort, pp. 145–164.
• Faron Moller260 . Analysis of Quicksort261 . CS 332: Designing Algorithms. Department
of Computer Science, Swansea University262 .
• M, C.; R, S. (2001). ”O S S  Q-
  Q”. SIAM J. Comput.263 31 (3): 683–705. Cite-
SeerX 10.1.1.17.4954 . doi :10.1137/S0097539700382108267 .
264 265 266

• B, J. L.; MI, M. D. (1993). ”E   ”. Soft-


ware: Practice and Experience. 23 (11): 1249–1265. CiteSeerX268 10.1.1.14.8162269 .
doi270 :10.1002/spe.4380231105271 .

7.8 External links

244 https://en.wikipedia.org/wiki/David_Musser
245 http://www.cs.rpi.edu/~musser/gp/introsort.ps
246 https://en.wikipedia.org/wiki/Doi_(identifier)
https://doi.org/10.1002%2F%28SICI%291097-024X%28199708%2927%3A8%3C983%3A%3AAID-
247
SPE117%3E3.0.CO%3B2-%23
248 https://en.wikipedia.org/wiki/Donald_Knuth
249 https://en.wikipedia.org/wiki/ISBN_(identifier)
250 https://en.wikipedia.org/wiki/Special:BookSources/0-201-89685-0
251 https://en.wikipedia.org/wiki/Thomas_H._Cormen
252 https://en.wikipedia.org/wiki/Charles_E._Leiserson
253 https://en.wikipedia.org/wiki/Ronald_L._Rivest
254 https://en.wikipedia.org/wiki/Clifford_Stein
255 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
256 https://en.wikipedia.org/wiki/MIT_Press
257 https://en.wikipedia.org/wiki/McGraw-Hill
258 https://en.wikipedia.org/wiki/ISBN_(identifier)
259 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
260 https://en.wikipedia.org/wiki/Faron_Moller
261 http://www.cs.swan.ac.uk/~csfm/Courses/CS_332/quicksort.pdf
262 https://en.wikipedia.org/wiki/Swansea_University
263 https://en.wikipedia.org/wiki/SIAM_Journal_on_Computing
264 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
265 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.17.4954
266 https://en.wikipedia.org/wiki/Doi_(identifier)
267 https://doi.org/10.1137%2FS0097539700382108
268 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
269 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.14.8162
270 https://en.wikipedia.org/wiki/Doi_(identifier)
271 https://doi.org/10.1002%2Fspe.4380231105


The Wikibook Algorithm implementation272 has a page on the topic of: Quicksort273

• ”A S A: Q S”274 . A   


 2 M 2015. R 25 N 2008.CS1 maint: BOT: original-url status
unknown (link275 ) – graphical demonstration
• ”A S A: Q S (3- )”276 . A
    6 M 2015. R 25 N 2008.CS1 maint:
BOT: original-url status unknown (link277 )
• Open Data Structures – Section 11.1.2 – Quicksort278 , Pat Morin279
• Interactive illustration of Quicksort280 , with code walkthrough


272 https://en.wikibooks.org/wiki/Algorithm_implementation
273 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Quicksort
274 https://web.archive.org/web/20150302145415/http://www.sorting-algorithms.com/quick-sort
275 https://en.wikipedia.org/wiki/Category:CS1_maint:_BOT:_original-url_status_unknown
276 https://web.archive.org/web/20150306071949/http://www.sorting-algorithms.com/quick-sort-3-way
277 https://en.wikipedia.org/wiki/Category:CS1_maint:_BOT:_original-url_status_unknown
278 http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_Sorti.html#SECTION001412000000000000000
279 https://en.wikipedia.org/wiki/Pat_Morin
280 https://web.archive.org/web/20180629183103/http://www.tomgsmith.com/quicksort/content/illustration/

8 Heapsort

A sorting algorithm which uses the heap data structure

Heapsort
A run of heapsort sorting an array of randomly permuted values. In the first stage of
the algorithm the array elements are reordered to satisfy the heap property. Before the
actual sorting takes place, the heap tree structure is shown briefly for illustration.
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n log n)
Best-case performance: O(n log n) (distinct keys) or O(n) (equal keys)
Average performance: O(n log n)
Worst-case space complexity: O(n) total, O(1) auxiliary

In computer science1 , heapsort is a comparison-based2 sorting algorithm3 . Heapsort can be
thought of as an improved selection sort4 : like selection sort, heapsort divides its input into
a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting
the largest element from it and inserting it into the sorted region. Unlike selection sort,
heapsort does not waste time with a linear-time scan of the unsorted region; rather, heapsort
maintains the unsorted region in a heap5 data structure to more quickly find the largest
element in each step.[1]
Although somewhat slower in practice on most machines than a well-implemented quick-
sort6 , it has the advantage of a more favorable worst-case O(n log n)7 runtime. Heapsort
is an in-place algorithm8 , but it is not a stable sort9 .

1 https://en.wikipedia.org/wiki/Computer_science
2 https://en.wikipedia.org/wiki/Comparison_sort
3 https://en.wikipedia.org/wiki/Sorting_algorithm
4 https://en.wikipedia.org/wiki/Selection_sort
5 https://en.wikipedia.org/wiki/Heap_(data_structure)
6 https://en.wikipedia.org/wiki/Quicksort
7 https://en.wikipedia.org/wiki/Big_O_notation
8 https://en.wikipedia.org/wiki/In-place_algorithm
9 https://en.wikipedia.org/wiki/Stable_sort


Heapsort was invented by J. W. J. Williams10 in 1964.[2] This was also the birth of the
heap, presented already by Williams as a useful data structure in its own right.[3] In the
same year, R. W. Floyd11 published an improved version that could sort an array in-place,
continuing his earlier research into the treesort12 algorithm.[3]

8.1 Overview

The heapsort algorithm can be divided into two parts.


In the first step, a heap13 is built out of the data (see Binary heap § Building a heap14 ).
The heap is often placed in an array with the layout of a complete binary tree15 . The
complete binary tree maps the binary tree structure into the array indices; each array index
represents a node; the index of the node's parent, left child branch, or right child branch
are simple expressions. For a zero-based array, the root node is stored at index 0; if i is the
index of the current node, then
iParent(i) = floor((i-1) / 2) where the floor function maps a real number to
the largest integer less than or equal to it.
iLeftChild(i) = 2*i + 1
iRightChild(i) = 2*i + 2

In the second step, a sorted array is created by repeatedly removing the largest element
from the heap (the root of the heap), and inserting it into the array. The heap is updated
after each removal to maintain the heap property. Once all objects have been removed from
the heap, the result is a sorted array.
Heapsort can be performed in place. The array can be split into two parts, the sorted array
and the heap. The storage of heaps as arrays is diagrammed here16 . The heap's invariant
is preserved after each extraction, so the only cost is that of extraction.
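As a minimal sketch of this array layout (the helper names below are illustrative, mirroring iParent, iLeftChild, and iRightChild above), the index arithmetic and its round-trip property can be written as:

```python
# Index arithmetic for a binary heap stored in a zero-based array.
# These names are illustrative; they follow the formulas in the text.

def i_parent(i):
    return (i - 1) // 2      # floor((i-1)/2)

def i_left_child(i):
    return 2 * i + 1

def i_right_child(i):
    return 2 * i + 2

# Every node's children point back to it as their parent:
for i in range(200):
    assert i_parent(i_left_child(i)) == i
    assert i_parent(i_right_child(i)) == i
```

Because the mapping is pure arithmetic, no pointers need to be stored: the tree structure is implicit in the indices.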

8.2 Algorithm

The Heapsort algorithm involves preparing the list by first turning it into a max heap17 .
The algorithm then repeatedly swaps the first value of the list with the last value, decreasing
the range of values considered in the heap operation by one, and sifting the new first value
into its position in the heap. This repeats until the range of considered values is one value
in length.
The steps are:

10 https://en.wikipedia.org/wiki/J._W._J._Williams
11 https://en.wikipedia.org/wiki/Robert_Floyd
12 https://en.wikipedia.org/wiki/Treesort
13 https://en.wikipedia.org/wiki/Heap_(data_structure)
14 https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap
15 https://en.wikipedia.org/wiki/Binary_tree#Types_of_binary_trees
16 https://en.wikipedia.org/wiki/Binary_heap#Heap_implementation
17 https://en.wikipedia.org/wiki/Binary_heap


1. Call the buildMaxHeap() function on the list. Also referred to as heapify(), this builds
a heap from a list in O(n) operations.
2. Swap the first element of the list with the final element. Decrease the considered range
of the list by one.
3. Call the siftDown() function on the list to sift the new first element to its appropriate
index in the heap.
4. Go to step (2) unless the considered range of the list is one element.
The buildMaxHeap() operation is run once, and is O(n) in performance. The siftDown()
function is O(log n), and is called n times. Therefore, the performance of this algorithm is
O(n + n log n) = O(n log n).
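The four numbered steps can be sketched as a compact, runnable routine (Python is used here for illustration; the function names are ours, and the index arithmetic follows iLeftChild(i) = 2*i + 1 from the Overview):

```python
def heapsort(a):
    """Sort list a in place: build a max-heap, then repeatedly extract the root."""
    n = len(a)
    # Step 1: buildMaxHeap -- sift down every parent node, last parent first. O(n).
    for start in range((n - 2) // 2, -1, -1):
        sift_down(a, start, n - 1)
    # Steps 2-4: swap the root (largest value) past the shrinking heap boundary.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]   # largest remaining value moves to its final slot
        sift_down(a, 0, end - 1)      # restore the heap property on the reduced range
    return a

def sift_down(a, start, end):
    """Repair the heap rooted at start, assuming its children are valid heaps."""
    root = start
    while 2 * root + 1 <= end:            # while the root has at least one child
        child = 2 * root + 1              # left child
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                    # right child exists and is larger
        if a[root] < a[child]:
            a[root], a[child] = a[child], a[root]
            root = child                  # continue sifting down
        else:
            return
```

For example, heapsort([5, 3, 8, 1, 9, 2]) rearranges the list into [1, 2, 3, 5, 8, 9], using O(1) auxiliary space.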

8.2.1 Pseudocode

The following is a simple way to implement the algorithm in pseudocode18 . Arrays are
zero-based19 and swap is used to exchange two elements of the array. Movement 'down'
means from the root towards the leaves, or from lower indices to higher. Note that during
the sort, the largest element is at the root of the heap at a[0], while at the end of the sort,
the largest element is in a[end].
procedure heapsort(a, count) is
    input: an unordered array a of length count

    (Build the heap in array a so that largest value is at the root)
    heapify(a, count)

    (The following loop maintains the invariants20 that a[0:end] is a heap and every element
     beyond end is greater than everything before it (so a[end:count] is in sorted order))
    end ← count - 1
    while end > 0 do
        (a[0] is the root and largest value. The swap moves it in front of the sorted elements.)
        swap(a[end], a[0])
        (the heap size is reduced by one)
        end ← end - 1
        (the swap ruined the heap property, so restore it)
        siftDown(a, 0, end)

The sorting routine uses two subroutines, heapify and siftDown. The former is the com-
mon in-place heap construction routine, while the latter is a common subroutine for imple-
menting heapify.
(Put elements of 'a' in heap order, in-place)
procedure heapify(a, count) is
    (start is assigned the index in 'a' of the last parent node)
    (the last element in a 0-based array is at index count-1; find the parent of that element)
    start ← iParent(count-1)

    while start ≥ 0 do
        (sift down the node at index 'start' to the proper place such that all nodes below
         the start index are in heap order)
        siftDown(a, start, count - 1)
        (go to the next parent node)
        start ← start - 1

18 https://en.wikipedia.org/wiki/Pseudocode
19 https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(array)
20 https://en.wikipedia.org/wiki/Loop_invariant


(after sifting down the root all nodes/elements are in heap order)

(Repair the heap whose root element is at index 'start', assuming the heaps rooted at its children are valid)
procedure siftDown(a, start, end) is
    root ← start

    while iLeftChild(root) ≤ end do    (While the root has at least one child)
        child ← iLeftChild(root)       (Left child of root)
        swap ← root                    (Keeps track of child to swap with)

        if a[swap] < a[child] then
            swap ← child
        (If there is a right child and that child is greater)
        if child+1 ≤ end and a[swap] < a[child+1] then
            swap ← child + 1
        if swap = root then
            (The root holds the largest element. Since we assume the heaps rooted at the
             children are valid, this means that we are done.)
            return
        else
            swap(a[root], a[swap])
            root ← swap    (repeat to continue sifting down the child now)

The heapify procedure can be thought of as building a heap from the bottom up by suc-
cessively sifting downward to establish the heap property21 . An alternative version (shown
below) that builds the heap top-down and sifts upward may be simpler to understand. This
siftUp version can be visualized as starting with an empty heap and successively inserting
elements, whereas the siftDown version given above treats the entire input array as a full
but ”broken” heap and ”repairs” it starting from the last non-trivial sub-heap (that is, the
last parent node).

Figure 24 Difference in time complexity between the ”siftDown” version and the
”siftUp” version.

Also, the siftDown version of heapify has O(n) time complexity22 , while the siftUp version
given below has O(n log n) time complexity due to its equivalence with inserting each
element, one at a time, into an empty heap.[4] This may seem counter-intuitive since, at a
glance, it is apparent that the former only makes half as many calls to its logarithmic-time

21 https://en.wikipedia.org/wiki/Heap_(data_structure)
22 https://en.wikipedia.org/wiki/Binary_heap#Building_a_heap


sifting function as the latter; i.e., they seem to differ only by a constant factor, which never
affects asymptotic analysis.
To grasp the intuition behind this difference in complexity, note that the number of swaps
that may occur during any one siftUp call increases with the depth of the node on which the
call is made. The crux is that there are many (exponentially many) more ”deep” nodes than
there are ”shallow” nodes in a heap, so that siftUp may have its full logarithmic running-time
on the approximately linear number of calls made on the nodes at or near the ”bottom” of
the heap. On the other hand, the number of swaps that may occur during any one siftDown
call decreases as the depth of the node on which the call is made increases. Thus, when
the siftDown heapify begins and is calling siftDown on the bottom and most numerous
node-layers, each sifting call will incur, at most, a number of swaps equal to the ”height”
(from the bottom of the heap) of the node on which the sifting call is made. In other words,
about half the calls to siftDown will have at most only one swap, then about a quarter of
the calls will have at most two swaps, etc.
The heapsort algorithm itself has O(n log n) time complexity using either version of heapify.
procedure heapify(a, count) is
    (end is assigned the index of the first (left) child of the root)
    end := 1

    while end < count
        (sift up the node at index end to the proper place such that all nodes above
         the end index are in heap order)
        siftUp(a, 0, end)
        end := end + 1
    (after sifting up the last node all nodes are in heap order)

procedure siftUp(a, start, end) is
    input: start represents the limit of how far up the heap to sift.
           end is the node to sift up.

    child := end
    while child > start
        parent := iParent(child)
        if a[parent] < a[child] then (out of max-heap order)
            swap(a[parent], a[child])
            child := parent    (repeat to continue sifting up the parent now)
        else
            return
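The asymptotic gap between the two construction orders is easy to observe empirically by counting swaps. The sketch below (illustrative code, not taken from the article) builds a max-heap over ascending keys, which forces the siftUp version into its worst case because every newly inserted element is the largest so far:

```python
def build_down(a):
    """Floyd-style construction: siftDown from the last parent to the root.
    Returns the number of swaps performed."""
    swaps, n = 0, len(a)
    for start in range((n - 2) // 2, -1, -1):
        root = start
        while 2 * root + 1 <= n - 1:
            child = 2 * root + 1
            if child + 1 <= n - 1 and a[child] < a[child + 1]:
                child += 1                      # larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
                swaps += 1
            else:
                break
    return swaps

def build_up(a):
    """Top-down construction: siftUp each newly 'inserted' element.
    Returns the number of swaps performed."""
    swaps = 0
    for end in range(1, len(a)):
        child = end
        while child > 0:
            parent = (child - 1) // 2
            if a[parent] < a[child]:
                a[parent], a[child] = a[child], a[parent]
                child = parent
                swaps += 1
            else:
                break
    return swaps

n = 1 << 10
down = build_down(list(range(n)))   # ascending keys: each siftUp goes all the way to the root
up = build_up(list(range(n)))
# Floyd's order does fewer than n swaps (bounded by the sum of subtree heights),
# while the siftUp order does on the order of n log n swaps here.
assert down < n <= up
```

This matches the argument above: siftDown swap counts are bounded by node heights, which sum to less than n, while siftUp swap counts are bounded by node depths, which sum to Θ(n log n).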

8.3 Variations

8.3.1 Floyd's heap construction

The most important variation to the basic algorithm, which is included in all practical
implementations, is a heap-construction algorithm by Floyd which runs in O(n) time and
uses siftdown23 rather than siftup24 , avoiding the need to implement siftup at all.
Rather than starting with a trivial heap and repeatedly adding leaves, Floyd's algorithm
starts with the leaves, observing that they are trivial but valid heaps by themselves, and

23 https://en.wikipedia.org/wiki/Binary_heap#Extract
24 https://en.wikipedia.org/wiki/Binary_heap#Insert

139
Heapsort

then adds parents. Starting with element n/2 and working backwards, each internal node
is made the root of a valid heap by sifting down. The last step is sifting down the first
element, after which the entire array obeys the heap property.
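Floyd's construction can be sketched in Python as follows (illustrative names; sift_down assumes a 0-based max-heap and takes `end` as one past the last index):

```python
def sift_down(a, root, end):
    # Restore max-heap order in a[root:end], assuming both subtrees
    # of `root` are already heaps.
    while 2 * root + 1 < end:
        child = 2 * root + 1
        if child + 1 < end and a[child] < a[child + 1]:
            child += 1                   # pick the larger child
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]
        root = child

def floyd_heapify(a):
    # Leaves are already one-element heaps; sift each internal node
    # down, starting from the parent of the last element.
    n = len(a)
    for root in range((n - 2) // 2, -1, -1):
        sift_down(a, root, n)

data = [6, 5, 3, 1, 8, 7, 2, 4]
floyd_heapify(data)                      # [8, 6, 7, 4, 5, 3, 2, 1]
```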
The worst-case number of comparisons during Floyd's heap-construction phase of heapsort
is known to be equal to 2n − 2s2 (n) − e2 (n), where s2 (n) is the number of 1 bits in the
binary representation of n and e2 (n) is the number of trailing 0 bits.[5]
The standard implementation of Floyd's heap-construction algorithm causes a large num-
ber of cache misses25 once the size of the data exceeds that of the CPU cache26 . Much
better performance on large data sets can be obtained by merging in depth-first27 order,
combining subheaps as soon as possible, rather than combining all subheaps on one level
before proceeding to the one above.[6][7]

8.3.2 Bottom-up heapsort

Bottom-up heapsort is a variant which reduces the number of comparisons required by a
significant factor. While ordinary heapsort requires 2n log2 n + O(n) comparisons worst-case
and on average,[8] the bottom-up variant requires n log2 n + O(1) comparisons on average,[8]
and 1.5n log2 n + O(n) in the worst case.[9]
If comparisons are cheap (e.g. integer keys) then the difference is unimportant,[10] as top-
down heapsort compares values that have already been loaded from memory. If, however,
comparisons require a function call28 or other complex logic, then bottom-up heapsort is
advantageous.
This is accomplished by improving the siftDown procedure. The change improves the
linear-time heap-building phase somewhat,[11] but is more significant in the second phase.
Like ordinary heapsort, each iteration of the second phase extracts the top of the heap, a[0],
and fills the gap it leaves with a[end], then sifts this latter element down the heap. But this
element comes from the lowest level of the heap, meaning it is one of the smallest elements
in the heap, so the sift-down will likely take many steps to move it back down. In ordinary
heapsort, each step of the sift-down requires two comparisons, to find the minimum of three
elements: the new node and its two children.
Bottom-up heapsort instead finds the path of largest children to the leaf level of the tree
(as if it were inserting −∞) using only one comparison per level. Put another way, it finds
a leaf which has the property that it and all of its ancestors are greater than or equal to
their siblings. (In the absence of equal keys, this leaf is unique.) Then, from this leaf, it
searches upward (using one comparison per level) for the correct position in that path to
insert a[end]. This is the same location as ordinary heapsort finds, and requires the same
number of exchanges to perform the insert, but fewer comparisons are required to find that
location.[9]

25 https://en.wikipedia.org/wiki/Cache_miss
26 https://en.wikipedia.org/wiki/CPU_cache
27 https://en.wikipedia.org/wiki/Depth-first
28 https://en.wikipedia.org/wiki/Function_call


Because it goes all the way to the bottom and then comes back up, it is called heapsort
with bounce by some authors.[12]
function leafSearch(a, i, end) is
    j ← i
    while iRightChild(j) ≤ end do
        (Determine which of j's two children is the greater)
        if a[iRightChild(j)] > a[iLeftChild(j)] then
            j ← iRightChild(j)
        else
            j ← iLeftChild(j)
    (At the last level, there might be only one child)
    if iLeftChild(j) ≤ end then
        j ← iLeftChild(j)
    return j

The return value of the leafSearch is used in the modified siftDown routine:[9]
procedure siftDown(a, i, end) is
    j ← leafSearch(a, i, end)
    while a[i] > a[j] do
        j ← iParent(j)
    x ← a[j]
    a[j] ← a[i]
    while j > i do
        swap x, a[iParent(j)]
        j ← iParent(j)
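In Python, the pair of routines above might look like this (a sketch; 0-based array, and `end` is the last valid heap index, as in the pseudocode):

```python
def i_parent(i): return (i - 1) // 2
def i_left(i):   return 2 * i + 1
def i_right(i):  return 2 * i + 2

def leaf_search(a, i, end):
    # Descend along the larger child at each level, one comparison
    # per level; `end` is the last valid heap index.
    j = i
    while i_right(j) <= end:
        j = i_right(j) if a[i_right(j)] > a[i_left(j)] else i_left(j)
    if i_left(j) <= end:                # last level may have a lone child
        j = i_left(j)
    return j

def sift_down(a, i, end):
    j = leaf_search(a, i, end)
    while a[i] > a[j]:                  # climb back to the insertion point
        j = i_parent(j)
    # Rotate a[i] into position j, shifting its ancestors up one level.
    x, a[j] = a[j], a[i]
    while j > i:
        p = i_parent(j)
        x, a[p] = a[p], x
        j = p

def heapsort(a):
    n = len(a)
    for i in range(i_parent(n - 1), -1, -1):   # Floyd-style build
        sift_down(a, i, n - 1)
    for end in range(n - 1, 0, -1):            # extract the maximum, repeat
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)
```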

Bottom-up heapsort was announced as beating quicksort (with median-of-three pivot selec-
tion) on arrays of size ≥16000.[8]
A 2008 re-evaluation of this algorithm showed it to be no faster than ordinary heapsort
for integer keys, presumably because modern branch prediction29 nullifies the cost of the
predictable comparisons which bottom-up heapsort manages to avoid.[10]
A further refinement does a binary search in the path to the selected leaf, and sorts in a worst
case of (n+1)(log2 (n+1) + log2 log2 (n+1) + 1.82) + O(log2 n) comparisons, approaching
the information-theoretic lower bound30 of n log2 n − 1.4427n comparisons.[13]
A variant which uses two extra bits per internal node (n−1 bits total for an n-element heap)
to cache information about which child is greater (two bits are required to store three cases:
left, right, and unknown)[11] uses less than n log2 n + 1.1n compares.[14]

8.3.3 Other variations


• Ternary heapsort[15] uses a ternary heap31 instead of a binary heap; that is, each element
in the heap has three children. It is more complicated to program, but does a constant
number of times fewer swap and comparison operations. This is because each sift-down
step in a ternary heap requires three comparisons and one swap, whereas in a binary
heap two comparisons and one swap are required. Two levels in a ternary heap cover
3² = 9 elements, doing more work with the same number of comparisons as three levels

29 https://en.wikipedia.org/wiki/Branch_prediction
https://en.wikipedia.org/wiki/Comparison_sort#Number_of_comparisons_required_to_sort_
30
a_list
31 https://en.wikipedia.org/wiki/Ternary_heap


in the binary heap, which only cover 2³ = 8.[citation needed] This is primarily of academic
interest, as the additional complexity is not worth the minor savings, and bottom-up
heapsort beats both.
• The smoothsort33 algorithm[16] is a variation of heapsort developed by Edsger Dijkstra34
in 1981. Like heapsort, smoothsort's upper bound is O(n log n)35 . The advantage of
smoothsort is that it comes closer to O(n) time if the input is already sorted to some
degree36 , whereas heapsort averages O(n log n) regardless of the initial sorted state. Due
to its complexity, smoothsort is rarely used.[citation needed]
• Levcopoulos and Petersson[17] describe a variation of heapsort based on a heap of Carte-
sian trees38 . First, a Cartesian tree is built from the input in O(n) time, and its root is
placed in a 1-element binary heap. Then we repeatedly extract the minimum from the
binary heap, output the tree's root element, and add its left and right children (if any)
which are themselves Cartesian trees, to the binary heap.[18] As they show, if the input is
already nearly sorted, the Cartesian trees will be very unbalanced, with few nodes having
left and right children, resulting in the binary heap remaining small, and allowing the
algorithm to sort more quickly than O(n log n) for inputs that are already nearly sorted.
• Several variants such as weak heapsort39 require n log2 n+O(1) comparisons in the worst
case, close to the theoretical minimum, using one extra bit of state per node. While this
extra bit makes the algorithms not truly in-place, if space for it can be found inside the
element, these algorithms are simple and efficient,[6]:40 but still slower than binary heaps
if key comparisons are cheap enough (e.g. integer keys) that a constant factor does not
matter.[19]
• Katajainen's "ultimate heapsort" requires no extra storage, performs n log2 n + O(1)
comparisons, and a similar number of element moves.[20] It is, however, even more complex
and not justified unless comparisons are very expensive.
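Of these, the Levcopoulos–Petersson variant is easy to prototype. The sketch below (Python; `heapq` stands in for the binary heap of Cartesian-tree roots, and all names are invented for illustration) builds a min-rooted Cartesian tree with a stack in O(n), then repeatedly extracts the minimum root and pushes its children:

```python
import heapq

def cartesian_tree_sort(values):
    # Phase 1: build a min-rooted Cartesian tree in O(n) using a stack.
    # Each node is a list [value, left_child, right_child].
    root, spine = None, []           # spine = right spine of the tree so far
    for v in values:
        node = [v, None, None]
        last = None
        while spine and spine[-1][0] > v:
            last = spine.pop()
        node[1] = last               # last popped subtree becomes left child
        if spine:
            spine[-1][2] = node      # attach as right child of the new top
        else:
            root = node              # stack emptied: node is the new root
        spine.append(node)
    if root is None:
        return []
    # Phase 2: binary heap of subtree roots; extract-min, push children.
    tick = 0                         # tie-breaker so heapq never compares nodes
    heap = [(root[0], tick, root)]
    out = []
    while heap:
        v, _, node = heapq.heappop(heap)
        out.append(v)
        for child in (node[1], node[2]):
            if child is not None:
                tick += 1
                heapq.heappush(heap, (child[0], tick, child))
    return out
```

On nearly sorted input, the Cartesian tree degenerates toward a right spine, so the binary heap stays small and each extraction is cheap, matching the adaptive behavior described above.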

8.4 Comparison with other sorts

Heapsort primarily competes with quicksort40 , another very efficient general purpose nearly-
in-place comparison-based sort algorithm.
Quicksort is typically somewhat faster in practice, but the worst-case running time
for quicksort is O(n²), which is unacceptable for large data sets and can be deliberately
triggered given enough knowledge of the implementation, creating a security risk. See
quicksort41 for a detailed discussion of this problem and possible solutions.
Thus, because of the O(n log n) upper bound on heapsort's running time and constant upper
bound on its auxiliary storage, embedded systems with real-time constraints or systems
concerned with security often use heapsort, such as the Linux kernel.[21]

33 https://en.wikipedia.org/wiki/Smoothsort
34 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
35 https://en.wikipedia.org/wiki/Big_O_notation
36 https://en.wikipedia.org/wiki/Adaptive_sort
38 https://en.wikipedia.org/wiki/Cartesian_tree
39 https://en.wikipedia.org/wiki/Weak_heap
40 https://en.wikipedia.org/wiki/Quicksort
41 https://en.wikipedia.org/wiki/Quicksort


Heapsort also competes with merge sort42 , which has the same time bounds. Merge sort
requires Ω(n) auxiliary space, but heapsort requires only a constant amount. Heapsort
typically runs faster in practice on machines with small or slow data caches43 , and does not
require as much external memory. On the other hand, merge sort has several advantages
over heapsort:
• Merge sort on arrays has considerably better data cache performance, often outperforming
heapsort on modern desktop computers because merge sort frequently accesses contiguous
memory locations (good locality of reference44 ); heapsort references are spread throughout
the heap.
• Heapsort is not a stable sort45 ; merge sort is stable.
• Merge sort parallelizes46 well and can achieve close to linear speedup47 with a trivial
implementation; heapsort is not an obvious candidate for a parallel algorithm.
• Merge sort can be adapted to operate on singly linked lists48 with O(1) extra space.
Heapsort can be adapted to operate on doubly linked lists with only O(1) extra space
overhead.[citation needed]
• Merge sort is used in external sorting50 ; heapsort is not. Locality of reference is the issue.
Introsort51 is an alternative to heapsort that combines quicksort and heapsort to retain
advantages of both: worst case speed of heapsort and average speed of quicksort.
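The combination can be sketched in Python as follows (a simplified illustration, not the library implementation — real introsort also switches to insertion sort on small ranges, as noted in the comments):

```python
def introsort(a):
    # Hybrid sketch: quicksort until the recursion depth exceeds about
    # 2*log2(n), then fall back to heapsort for that subrange, which
    # caps the worst case at O(n log n).
    def heapsort(lo, hi):                    # sort a[lo:hi] in place
        def sift_down(root, end):
            while 2 * root + 1 < end:
                child = 2 * root + 1
                if child + 1 < end and a[lo + child] < a[lo + child + 1]:
                    child += 1
                if a[lo + root] >= a[lo + child]:
                    return
                a[lo + root], a[lo + child] = a[lo + child], a[lo + root]
                root = child
        n = hi - lo
        for i in range(n // 2 - 1, -1, -1):
            sift_down(i, n)
        for end in range(n - 1, 0, -1):
            a[lo], a[lo + end] = a[lo + end], a[lo]
            sift_down(0, end)

    def sort(lo, hi, depth):
        if hi - lo <= 1:
            return
        if depth == 0:                       # recursion too deep: bail out
            heapsort(lo, hi)
            return
        pivot = a[(lo + hi - 1) // 2]        # middle element as pivot
        i, j = lo, hi - 1
        while i <= j:                        # Hoare-style partition
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1; j -= 1
        sort(lo, j + 1, depth - 1)
        sort(i, hi, depth - 1)

    if a:
        sort(0, len(a), 2 * max(1, len(a)).bit_length())
    return a
```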

8.5 Example

Let { 6, 5, 3, 1, 8, 7, 2, 4 } be the list that we want to sort from the smallest to the largest.
(Note, for the 'Build the heap' step: larger nodes do not stay below smaller parent nodes.
They are swapped with their parents, and then recursively checked until larger numbers are
above smaller numbers in the heap's binary tree.)

42 https://en.wikipedia.org/wiki/Merge_sort
43 https://en.wikipedia.org/wiki/Data_cache
44 https://en.wikipedia.org/wiki/Locality_of_reference
45 https://en.wikipedia.org/wiki/Stable_sort
46 https://en.wikipedia.org/wiki/Parallel_algorithm
47 https://en.wikipedia.org/wiki/Linear_speedup
48 https://en.wikipedia.org/wiki/Linked_list
50 https://en.wikipedia.org/wiki/External_sorting
51 https://en.wikipedia.org/wiki/Introsort


Figure 25 An example of heapsort.

1. Build the heap

Heap                    | newly added element | swap elements
null                    | 6                   |
6                       | 5                   |
6, 5                    | 3                   |
6, 5, 3                 | 1                   |
6, 5, 3, 1              | 8                   |
6, 5, 3, 1, 8           |                     | 5, 8
6, 8, 3, 1, 5           |                     | 6, 8
8, 6, 3, 1, 5           | 7                   |
8, 6, 3, 1, 5, 7        |                     | 3, 7
8, 6, 7, 1, 5, 3        | 2                   |
8, 6, 7, 1, 5, 3, 2     | 4                   |
8, 6, 7, 1, 5, 3, 2, 4  |                     | 1, 4
8, 6, 7, 4, 5, 3, 2, 1  |                     |


2. Sorting

Heap                   | swap elements | delete element | sorted array           | details
8, 6, 7, 4, 5, 3, 2, 1 | 8, 1          |                |                        | swap 8 and 1 in order to delete 8 from heap
1, 6, 7, 4, 5, 3, 2, 8 |               | 8              |                        | delete 8 from heap and add to sorted array
1, 6, 7, 4, 5, 3, 2    | 1, 7          |                | 8                      | swap 1 and 7 as they are not in order in the heap
7, 6, 1, 4, 5, 3, 2    | 1, 3          |                | 8                      | swap 1 and 3 as they are not in order in the heap
7, 6, 3, 4, 5, 1, 2    | 7, 2          |                | 8                      | swap 7 and 2 in order to delete 7 from heap
2, 6, 3, 4, 5, 1, 7    |               | 7              | 8                      | delete 7 from heap and add to sorted array
2, 6, 3, 4, 5, 1       | 2, 6          |                | 7, 8                   | swap 2 and 6 as they are not in order in the heap
6, 2, 3, 4, 5, 1       | 2, 5          |                | 7, 8                   | swap 2 and 5 as they are not in order in the heap
6, 5, 3, 4, 2, 1       | 6, 1          |                | 7, 8                   | swap 6 and 1 in order to delete 6 from heap
1, 5, 3, 4, 2, 6       |               | 6              | 7, 8                   | delete 6 from heap and add to sorted array
1, 5, 3, 4, 2          | 1, 5          |                | 6, 7, 8                | swap 1 and 5 as they are not in order in the heap
5, 1, 3, 4, 2          | 1, 4          |                | 6, 7, 8                | swap 1 and 4 as they are not in order in the heap
5, 4, 3, 1, 2          | 5, 2          |                | 6, 7, 8                | swap 5 and 2 in order to delete 5 from heap
2, 4, 3, 1, 5          |               | 5              | 6, 7, 8                | delete 5 from heap and add to sorted array
2, 4, 3, 1             | 2, 4          |                | 5, 6, 7, 8             | swap 2 and 4 as they are not in order in the heap
4, 2, 3, 1             | 4, 1          |                | 5, 6, 7, 8             | swap 4 and 1 in order to delete 4 from heap
1, 2, 3, 4             |               | 4              | 5, 6, 7, 8             | delete 4 from heap and add to sorted array
1, 2, 3                | 1, 3          |                | 4, 5, 6, 7, 8          | swap 1 and 3 as they are not in order in the heap
3, 2, 1                | 3, 1          |                | 4, 5, 6, 7, 8          | swap 3 and 1 in order to delete 3 from heap
1, 2, 3                |               | 3              | 4, 5, 6, 7, 8          | delete 3 from heap and add to sorted array
1, 2                   | 1, 2          |                | 3, 4, 5, 6, 7, 8       | swap 1 and 2 as they are not in order in the heap
2, 1                   | 2, 1          |                | 3, 4, 5, 6, 7, 8       | swap 2 and 1 in order to delete 2 from heap
1, 2                   |               | 2              | 3, 4, 5, 6, 7, 8       | delete 2 from heap and add to sorted array
1                      |               | 1              | 2, 3, 4, 5, 6, 7, 8    | delete 1 from heap and add to sorted array
                       |               |                | 1, 2, 3, 4, 5, 6, 7, 8 | completed
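The trace above can be checked with a compact textbook heapsort in Python (a sketch; note that the sift-down build happens to produce the same heap, 8, 6, 7, 4, 5, 3, 2, 1, as the insertion-based build in the first table):

```python
def heapsort(a):
    def sift_down(root, end):
        # `end` is one past the last index of the heap region.
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1               # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # build the max-heap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):      # move the max to the end, repeat
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heapsort([6, 5, 3, 1, 8, 7, 2, 4]))   # [1, 2, 3, 4, 5, 6, 7, 8]
```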

8.6 Notes
1. S, S52 (2008). ”S  S”. The Algorithm Design
Manual. Springer. p. 109. doi53 :10.1007/978-1-84800-070-4_454 . ISBN55 978-1-
84800-069-856 . [H]eapsort is nothing but an implementation of selection sort using
the right data structure.
2. Williams 196457
3. B, P (2008). Advanced Data Structures. Cambridge University Press.
p. 209. ISBN58 978-0-521-88037-459 .
4. ”P Q”60 . R 24 M 2011.
5. S, M A. (2012), ”E Y P W-C A-
  F' H-C P”, Fundamenta Informaticae61 ,
120 (1): 75–92, doi62 :10.3233/FI-2012-75163
6. B, J; K, J; S, M (2000). ”P
E C S: H C”64 (PS). ACM Jour-
nal of Experimental Algorithmics. 5 (15): 15–es. CiteSeerX65 10.1.1.35.324866 .
doi67 :10.1145/351827.38425768 . Alternate PDF source69 .
7. C, J; E, S; E, A; K, J (27–
31 A 2012). In-place Heap Construction with Optimized Comparisons, Moves,
and Cache Misses70 (PDF). 37    M-
 F  C S. B, S. . 259–270.

52 https://en.wikipedia.org/wiki/Steven_Skiena
53 https://en.wikipedia.org/wiki/Doi_(identifier)
54 https://doi.org/10.1007%2F978-1-84800-070-4_4
55 https://en.wikipedia.org/wiki/ISBN_(identifier)
56 https://en.wikipedia.org/wiki/Special:BookSources/978-1-84800-069-8
57 #CITEREFWilliams1964
58 https://en.wikipedia.org/wiki/ISBN_(identifier)
59 https://en.wikipedia.org/wiki/Special:BookSources/978-0-521-88037-4
60 http://faculty.simpson.edu/lydia.sinapova/www/cmsc250/LN250_Weiss/L10-PQueues.htm
61 https://en.wikipedia.org/wiki/Fundamenta_Informaticae
62 https://en.wikipedia.org/wiki/Doi_(identifier)
63 https://doi.org/10.3233%2FFI-2012-751
64 http://hjemmesider.diku.dk/~jyrki/Paper/katajain.ps
65 https://en.wikipedia.org/wiki/CiteSeerX_(identifier)
66 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.3248
67 https://en.wikipedia.org/wiki/Doi_(identifier)
68 https://doi.org/10.1145%2F351827.384257
https://www.semanticscholar.org/paper/Performance-Engineering-Case-Study-Heap-
69
Bojesen-Katajainen/6f4ada5912c1da64e16453d67ec99c970173fb5b
https://pdfs.semanticscholar.org/9cc6/36d7998d58b3937ba0098e971710ff039612.pdf#page=
70
11


71 :10.1007/978-3-642-32589-2_2572 . ISBN73 978-3-642-32588-574 . See particularly


Fig. 3.
8. W, I75 (13 S 1993). ”BOTTOM-UP HEAPSORT, 
   HEAPSORT ,   , QUICKSORT ( 
   )”76 (PDF). Theoretical Computer Science. 118 (1): 81–98.
doi77 :10.1016/0304-3975(93)90364-y78 . Although this is a reprint of work first pub-
lished in 1990 (at the Mathematical Foundations of Computer Science conference),
the technique was published by Carlsson in 1987.[13]
9. F, R (F 1994). ”A     
   B-U-H”79 (PDF). Algorithmica. 11 (2): 104–115.
doi80 :10.1007/bf0118277081 . hdl82 :11858/00-001M-0000-0014-7B02-C83 . Also avail-
able as
F, R (A 1991). A tight lower bound for the worst case of Bottom-
Up-Heapsort84 (PDF) (T ). MPI-INF85 . MPI-I-91-104.
10. M, K86 ; S, P87 (2008). ”P Q”88 (PDF). Al-
gorithms and Data Structures: The Basic Toolbox89 . S. . 142. ISBN90 978-
3-540-77977-391 .
11. MD, C.J.H.; R, B.A. (S 1989). ”B  ”92
(PDF). Journal of Algorithms. 10 (3): 352–365. doi93 :10.1016/0196-6774(89)90033-
394 .
12. M, B95 ; S, H D. (1991). ”8.6 H”. Algorithms
from P to NP Volume 1: Design and Efficiency. Benjamin/Cummings. p. 528.
ISBN96 0-8053-8008-697 . For lack of a better name we call this enhanced program
'heapsort with bounce.'

71 https://en.wikipedia.org/wiki/Doi_(identifier)
72 https://doi.org/10.1007%2F978-3-642-32589-2_25
73 https://en.wikipedia.org/wiki/ISBN_(identifier)
74 https://en.wikipedia.org/wiki/Special:BookSources/978-3-642-32588-5
75 https://en.wikipedia.org/wiki/Ingo_Wegener
76 https://core.ac.uk/download/pdf/82350265.pdf
77 https://en.wikipedia.org/wiki/Doi_(identifier)
78 https://doi.org/10.1016%2F0304-3975%2893%2990364-y
79 http://staff.gutech.edu.om/~rudolf/Paper/buh_algorithmica94.pdf
80 https://en.wikipedia.org/wiki/Doi_(identifier)
81 https://doi.org/10.1007%2Fbf01182770
82 https://en.wikipedia.org/wiki/Hdl_(identifier)
83 http://hdl.handle.net/11858%2F00-001M-0000-0014-7B02-C
http://pubman.mpdl.mpg.de/pubman/item/escidoc:1834997:3/component/escidoc:2463941/
84
MPI-I-94-104.pdf
85 https://en.wikipedia.org/wiki/Max_Planck_Institute_for_Informatics
86 https://en.wikipedia.org/wiki/Kurt_Mehlhorn
87 https://en.wikipedia.org/wiki/Peter_Sanders_(computer_scientist)
88 http://people.mpi-inf.mpg.de/~mehlhorn/ftp/Toolbox/PriorityQueues.pdf#page=16
89 http://people.mpi-inf.mpg.de/~mehlhorn/Toolbox.html
90 https://en.wikipedia.org/wiki/ISBN_(identifier)
91 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-77977-3
92 http://cgm.cs.mcgill.ca/~breed/2016COMP610/BUILDINGHEAPSFAST.pdf
93 https://en.wikipedia.org/wiki/Doi_(identifier)
94 https://doi.org/10.1016%2F0196-6774%2889%2990033-3
95 https://en.wikipedia.org/wiki/Bernard_Moret
96 https://en.wikipedia.org/wiki/ISBN_(identifier)
97 https://en.wikipedia.org/wiki/Special:BookSources/0-8053-8008-6


13. C, S (M 1987). ”A      -
   ”98 (PDF). Information Processing Letters. 24 (4):
247–250. doi99 :10.1016/0020-0190(87)90142-6100 .
14. W, I101 (M 1992). ”T     M-
D  R'   BOTTOM-UP HEAPSORT   
n log n + 1.1n”. Information and Computation. 97 (1): 86–96. doi102 :10.1016/0890-
5401(92)90005-Z103 .
104 105 106
15. ”Data Structures Using Pascal”, 1991, page 405,[full citation needed ][author missing ][ISBN missing ]
gives a ternary heapsort as a student exercise. ”Write a sorting routine similar to the
heapsort except that it uses a ternary heap.”
16. D, E W.107 Smoothsort – an alternative to sorting in situ (EWD-
796a)108 (PDF). E.W. D A. C  A H, U-
  T  A109 . (transcription110 )
17. L, C; P, O (1989), ”H—A 
P F”, WADS '89: Proceedings of the Workshop on Algorithms and
Data Structures, Lecture Notes in Computer Science, 382, London, UK: Springer-
Verlag, pp. 499–509, doi111 :10.1007/3-540-51542-9_41112 , ISBN113 978-3-540-51542-
5114 Heapsort—Adapted for presorted files (Q56049336)115 .
18. S, K (27 D 2010). ”CTS.”116 . Archive
of Interesting Code. Retrieved 5 March 2019.
19. K, J (23 S 2013). Seeking for the best priority queue:
Lessons learnt117 . A E (S 13391). D. . 19–
20, 24.
20. K, J (2–3 F 1998). The Ultimate Heapsort118 . C-
:  4 A T S. Australian Computer Science
Communications. 20 (3). Perth. pp. 87–96.
21. 119 Linux kernel source

98 https://pdfs.semanticscholar.org/caec/6682ffd13c6367a8c51b566e2420246faca2.pdf
99 https://en.wikipedia.org/wiki/Doi_(identifier)
100 https://doi.org/10.1016%2F0020-0190%2887%2990142-6
101 https://en.wikipedia.org/wiki/Ingo_Wegener
102 https://en.wikipedia.org/wiki/Doi_(identifier)
103 https://doi.org/10.1016%2F0890-5401%2892%2990005-Z
107 https://en.wikipedia.org/wiki/Edsger_W._Dijkstra
108 http://www.cs.utexas.edu/users/EWD/ewd07xx/EWD796a.PDF
109 https://en.wikipedia.org/wiki/University_of_Texas_at_Austin
110 http://www.cs.utexas.edu/users/EWD/transcriptions/EWD07xx/EWD796a.html
111 https://en.wikipedia.org/wiki/Doi_(identifier)
112 https://doi.org/10.1007%2F3-540-51542-9_41
113 https://en.wikipedia.org/wiki/ISBN_(identifier)
114 https://en.wikipedia.org/wiki/Special:BookSources/978-3-540-51542-5
115 https://www.wikidata.org/wiki/Special:EntityPage/Q56049336
116 http://www.keithschwarz.com/interesting/code/?dir=cartesian-tree-sort
117 http://hjemmesider.diku.dk/~jyrki/Myris/Kat2013-09-23P.html
118 http://hjemmesider.diku.dk/~jyrki/Myris/Kat1998C.html
119 https://github.com/torvalds/linux/blob/master/lib/sort.c


8.7 References
• W, J. W. J.120 (1964), ”A 232 - H”, Communications of the
ACM121 , 7 (6): 347–348, doi122 :10.1145/512274.512284123
• F, R W.124 (1964), ”A 245 - T 3”, Communications of
the ACM125 , 7 (12): 701, doi126 :10.1145/355588.365103127
• C, S128 (1987), ”A-   ”, BIT, 27 (1):
2–17, doi129 :10.1007/bf01937350130
• K, D131 (1997), ”§5.2.3, S  S”, Sorting and Search-
ing, The Art of Computer Programming132 , 3 (third ed.), Addison-Wesley, pp. 144–155,
ISBN133 978-0-201-89685-5134
• Thomas H. Cormen135 , Charles E. Leiserson136 , Ronald L. Rivest137 , and Clifford Stein138 .
Introduction to Algorithms139 , Second Edition. MIT Press and McGraw-Hill, 2001.
ISBN140 0-262-03293-7141 . Chapters 6 and 7 Respectively: Heapsort and Priority Queues
• A PDF of Dijkstra's original paper on Smoothsort142
• Heaps and Heapsort Tutorial143 by David Carlson, St. Vincent College

8.8 External links

The Wikibook Algorithm implementation144 has a page on the topic of: Heap-
sort145

120 https://en.wikipedia.org/wiki/J._W._J._Williams
121 https://en.wikipedia.org/wiki/Communications_of_the_ACM
122 https://en.wikipedia.org/wiki/Doi_(identifier)
123 https://doi.org/10.1145%2F512274.512284
124 https://en.wikipedia.org/wiki/Robert_W._Floyd
125 https://en.wikipedia.org/wiki/Communications_of_the_ACM
126 https://en.wikipedia.org/wiki/Doi_(identifier)
127 https://doi.org/10.1145%2F355588.365103
128 https://sv.wikipedia.org/wiki/Svante_Carlsson
129 https://en.wikipedia.org/wiki/Doi_(identifier)
130 https://doi.org/10.1007%2Fbf01937350
131 https://en.wikipedia.org/wiki/Donald_Knuth
132 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
133 https://en.wikipedia.org/wiki/ISBN_(identifier)
134 https://en.wikipedia.org/wiki/Special:BookSources/978-0-201-89685-5
135 https://en.wikipedia.org/wiki/Thomas_H._Cormen
136 https://en.wikipedia.org/wiki/Charles_E._Leiserson
137 https://en.wikipedia.org/wiki/Ronald_L._Rivest
138 https://en.wikipedia.org/wiki/Clifford_Stein
139 https://en.wikipedia.org/wiki/Introduction_to_Algorithms
140 https://en.wikipedia.org/wiki/ISBN_(identifier)
141 https://en.wikipedia.org/wiki/Special:BookSources/0-262-03293-7
142 http://www.cs.utexas.edu/users/EWD/ewd07xx/EWD796a.PDF
143 http://cis.stvincent.edu/html/tutorials/swd/heaps/heaps.html
144 https://en.wikibooks.org/wiki/Algorithm_implementation
145 https://en.wikibooks.org/wiki/Algorithm_implementation/Sorting/Heapsort

External links

• Animated Sorting Algorithms: Heap Sort146 at the Wayback Machine147 (archived 6 March 2015) – graphical demonstration
• Courseware on Heapsort from Univ. Oldenburg148 - With text, animations and interactive
exercises
• NIST's Dictionary of Algorithms and Data Structures: Heapsort149
• Heapsort implemented in 12 languages150
• Sorting revisited151 by Paul Hsieh
• A PowerPoint presentation demonstrating how Heap sort works152 , intended for educators.
• Open Data Structures - Section 11.1.3 - Heap-Sort153 , Pat Morin154


146 https://web.archive.org/web/20150306071556/http://www.sorting-algorithms.com/heap-sort
147 https://en.wikipedia.org/wiki/Wayback_Machine
148 https://web.archive.org/web/20130326084250/http://olli.informatik.uni-oldenburg.de/heapsort_SALA/english/start.html
149 https://xlinux.nist.gov/dads/HTML/heapSort.html
150 http://www.codecodex.com/wiki/Heapsort
151 http://www.azillionmonkeys.com/qed/sort.html
152 http://employees.oneonta.edu/zhangs/powerPointPlatform/index.php
153 http://opendatastructures.org/versions/edition-0.1e/ods-java/11_1_Comparison_Based_Sorti.html#SECTION001413000000000000000
154 https://en.wikipedia.org/wiki/Pat_Morin

9 Bubble sort

Simple comparison sorting algorithm


Bubble sort
Static visualization of bubble sort[1]
Class: Sorting algorithm
Data structure: Array
Worst-case performance: O(n²) comparisons, O(n²) swaps
Best-case performance: O(n) comparisons, O(1) swaps
Average performance: O(n²) comparisons, O(n²) swaps
Worst-case space complexity: O(n) total, O(1) auxiliary

Bubble sort, sometimes referred to as sinking sort, is a simple sorting algorithm11 that
repeatedly steps through the list, compares adjacent elements and swaps12 them if they
are in the wrong order. The pass through the list is repeated until the list is sorted. The

11 https://en.wikipedia.org/wiki/Sorting_algorithm
12 https://en.wikipedia.org/wiki/Swap_(computer_science)


algorithm, which is a comparison sort13 , is named for the way smaller or larger elements
"bubble" to the top of the list.
This simple algorithm performs poorly in real-world use and is used primarily as an educational tool. More efficient algorithms such as timsort14 or merge sort15 are used by the sorting libraries built into popular programming languages such as Python and Java.[2][3]

9.1 Analysis

Figure 27 An example of bubble sort. Starting from the beginning of the list, compare every adjacent pair and swap their positions if they are out of order (the latter one is smaller than the former one). After each iteration, one fewer element (the last one) needs to be compared, until there are no more elements left to compare.

9.1.1 Performance

Bubble sort has a worst-case and average complexity of O16 (n²), where n is the number
of items being sorted. Most practical sorting algorithms have substantially better worst-case or average complexity, often O(n log n). Even other O(n²) sorting algorithms, such as

13 https://en.wikipedia.org/wiki/Comparison_sort
14 https://en.wikipedia.org/wiki/Timsort
15 https://en.wikipedia.org/wiki/Merge_sort
16 https://en.wikipedia.org/wiki/Big_o_notation


insertion sort17 , generally run faster than bubble sort, and are no more complex. Therefore,
bubble sort is not a practical sorting algorithm.
The only significant advantage that bubble sort has over most other algorithms, even quick-
sort18 (though not insertion sort19 ), is that the ability to detect that the list is already sorted
is efficiently built into the algorithm. When the list is already sorted (the best case), the complexity
of bubble sort is only O(n). By contrast, most other algorithms, even those with better
average-case complexity20 , perform their entire sorting process on the set and thus are more
complex. However, not only does insertion sort21 share this advantage, but it also performs
better on a list that is substantially sorted (having a small number of inversions22 ).
Bubble sort should be avoided for large collections, and it is especially inefficient on
reverse-ordered collections.
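For concreteness, these comparison counts can be checked with a short instrumented implementation (an illustrative Python sketch; the function name and structure are chosen for illustration and are not part of the article):

```python
def bubble_sort_counted(a):
    """Bubble sort with early exit; returns (sorted list, comparison count)."""
    a = list(a)
    comparisons = 0
    n = len(a)
    swapped = True
    while swapped:
        swapped = False
        for i in range(1, n):
            comparisons += 1
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
    return a, comparisons

n = 8
# Worst case (reverse-ordered input): the smallest element moves only one
# step per pass, so n passes of n-1 comparisons each are needed: n*(n-1).
_, worst = bubble_sort_counted(range(n, 0, -1))
# Best case (already sorted input): a single pass of n-1 comparisons.
_, best = bubble_sort_counted(range(n))
print(worst, best)  # 56 7
```

This version does not shrink the inner loop, matching the plain algorithm described above; the optimizations discussed later reduce the comparison count but not the asymptotic worst case.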

9.1.2 Rabbits and turtles

The distance and direction that elements must move during the sort determine bubble sort's
performance because elements move in different directions at different speeds. An element
that must move toward the end of the list can move quickly because it can take part in
successive swaps. For example, the largest element in the list will win every swap, so it
moves to its sorted position on the first pass even if it starts near the beginning. On the
other hand, an element that must move toward the beginning of the list cannot move faster
than one step per pass, so elements move toward the beginning very slowly. If the smallest
element is at the end of the list, it will take n−1 passes to move it to the beginning. This
has led to these types of elements being named rabbits and turtles, respectively, after the
characters in Aesop's fable of The Tortoise and the Hare23 .
Various efforts have been made to eliminate turtles to improve upon the speed of bubble
sort. Cocktail sort24 is a bi-directional bubble sort that goes from beginning to end, and
then reverses itself, going end to beginning. It can move turtles fairly well, but it retains
O(n²)25 worst-case complexity. Comb sort26 compares elements separated by large gaps,
and can move turtles extremely quickly before proceeding to smaller and smaller gaps to
smooth out the list. Its average speed is comparable to faster algorithms like quicksort27 .
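The bi-directional idea can be sketched as follows (an illustrative Python version, not taken from the article; see the cocktail sort article for the canonical algorithm):

```python
def cocktail_shaker_sort(a):
    """Bi-directional bubble sort: alternates forward and backward passes,
    so 'turtles' (small elements near the end) move one step toward the
    front on every backward pass instead of one step per full pass."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    swapped = True
    while swapped and lo < hi:
        swapped = False
        for i in range(lo, hi):          # forward pass: bubble the max right
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        hi -= 1
        for i in range(hi, lo, -1):      # backward pass: bubble the min left
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        lo += 1
    return a

# The 'turtle' 1 reaches the front after a single backward pass:
print(cocktail_shaker_sort([2, 3, 4, 5, 1]))  # [1, 2, 3, 4, 5]
```

A plain bubble sort would need four full passes to move the 1 to the front of this input; the backward pass does it in one.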

17 https://en.wikipedia.org/wiki/Insertion_sort
18 https://en.wikipedia.org/wiki/Quicksort
19 https://en.wikipedia.org/wiki/Insertion_sort
20 https://en.wikipedia.org/wiki/Average-case_complexity
21 https://en.wikipedia.org/wiki/Insertion_sort
22 https://en.wikipedia.org/wiki/Inversion_(discrete_mathematics)
23 https://en.wikipedia.org/wiki/The_Tortoise_and_the_Hare
24 https://en.wikipedia.org/wiki/Cocktail_sort
25 https://en.wikipedia.org/wiki/Big_O_notation
26 https://en.wikipedia.org/wiki/Comb_sort
27 https://en.wikipedia.org/wiki/Quicksort


9.1.3 Step-by-step example

Take an array of numbers "5 1 4 2 8", and sort the array from lowest number to greatest
number using bubble sort. In each step, elements written in bold are being compared.
Three passes will be required:
First Pass
( 5 1 4 2 8 ) → ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements and
swaps them since 5 > 1.
( 1 5 4 2 8 ) → ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) → ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) → ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5),
the algorithm does not swap them.
Second Pass
( 1 4 2 5 8 ) → ( 1 4 2 5 8 )
( 1 4 2 5 8 ) → ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
Now, the array is already sorted, but the algorithm does not know if it is completed. The
algorithm needs one whole pass without any swap to know it is sorted.
Third Pass
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
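The three passes above can be reproduced with a short trace (an illustrative Python sketch; the helper name is invented for illustration):

```python
def bubble_sort_trace(a):
    """Run bubble sort and record the list after each full pass."""
    a = list(a)
    snapshots = []
    swapped = True
    while swapped:
        swapped = False
        for i in range(1, len(a)):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
        snapshots.append(list(a))
    return snapshots

for pass_result in bubble_sort_trace([5, 1, 4, 2, 8]):
    print(pass_result)
# [1, 4, 2, 5, 8]
# [1, 2, 4, 5, 8]
# [1, 2, 4, 5, 8]
```

The last snapshot is the pass that makes no swaps, which is what tells the algorithm it is done.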

9.2 Implementation

9.2.1 Pseudocode implementation

In pseudocode28 the algorithm can be expressed as (0-based array):

procedure bubbleSort(A : list of sortable items)
    n := length(A)
    repeat
        swapped := false
        for i := 1 to n-1 inclusive do
            /* if this pair is out of order */
            if A[i-1] > A[i] then
                /* swap them and remember something changed */
                swap(A[i-1], A[i])
                swapped := true
            end if
        end for
    until not swapped
end procedure

28 https://en.wikipedia.org/wiki/Pseudocode

9.2.2 Optimizing bubble sort

The bubble sort algorithm can be optimized by observing that the n-th pass finds the n-th
largest element and puts it into its final place. So, the inner loop can avoid looking at the
last n − 1 items when running for the n-th time:

procedure bubbleSort(A : list of sortable items)
    n := length(A)
    repeat
        swapped := false
        for i := 1 to n - 1 inclusive do
            if A[i - 1] > A[i] then
                swap(A[i - 1], A[i])
                swapped := true
            end if
        end for
        n := n - 1
    until not swapped
end procedure

More generally, it can happen that more than one element is placed in its final position
on a single pass. In particular, after every pass, all elements after the last swap are sorted,
and do not need to be checked again. This allows skipping over many elements, resulting
in about a 50% improvement in worst-case comparison count (though no improvement in
swap counts), and adds very little complexity because the new code subsumes the "swapped"
variable. To accomplish this in pseudocode, the following can be written:
procedure bubbleSort(A : list of sortable items)
    n := length(A)
    repeat
        newn := 0
        for i := 1 to n - 1 inclusive do
            if A[i - 1] > A[i] then
                swap(A[i - 1], A[i])
                newn := i
            end if
        end for
        n := newn
    until n ≤ 1
end procedure
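An equivalent of this last pseudocode in a concrete language might look as follows (an illustrative Python sketch, using the same 0-based indexing as the pseudocode):

```python
def bubble_sort_optimized(a):
    """Bubble sort that shrinks the scanned region to the position of the
    last swap: everything after the last swap of a pass is already sorted."""
    a = list(a)
    n = len(a)
    while n > 1:
        newn = 0
        for i in range(1, n):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                newn = i          # index of the last swap in this pass
        n = newn                  # skip the sorted tail on the next pass
    return a

print(bubble_sort_optimized([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

Note that no separate "swapped" flag is needed: a pass with no swaps leaves newn at 0, which ends the loop.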

Alternate modifications, such as the cocktail shaker sort29 attempt to improve on the bub-
ble sort performance while keeping the same idea of repeatedly comparing and swapping
adjacent items.

29 https://en.wikipedia.org/wiki/Cocktail_shaker_sort


9.3 Use

Figure 28 A bubble sort, a sorting algorithm that continuously steps through a list,
swapping items until they appear in the correct order. The list was plotted in a Cartesian
coordinate system, with each point (x, y) indicating that the value y is stored at index x.
The list was then sorted by bubble sort according to each element's value. Note that
the largest end gets sorted first, with smaller elements taking longer to move to their
correct positions.

Although bubble sort is one of the simplest sorting algorithms to understand and implement,
its O(n²)30 complexity means that its efficiency decreases dramatically on lists of more than
a small number of elements. Even among simple O(n²) sorting algorithms, algorithms like
insertion sort31 are usually considerably more efficient.
Due to its simplicity, bubble sort is often used to introduce the concept of an algorithm, or a
sorting algorithm, to introductory computer science32 students. However, some researchers

30 https://en.wikipedia.org/wiki/Big_O_notation
31 https://en.wikipedia.org/wiki/Insertion_sort
32 https://en.wikipedia.org/wiki/Computer_science


such as Owen Astrachan33 have gone to great lengths to disparage bubble sort and its
continued popularity in computer science education, recommending that it no longer even
be taught.[4]
The Jargon File34 , which famously calls bogosort35 "the archetypical [sic] perversely awful
algorithm", also calls bubble sort "the generic bad algorithm".[5] Donald Knuth36 , in The
Art of Computer Programming37 , concluded that "the bubble sort seems to have nothing to
recommend it, except a catchy name and the fact that it leads to some interesting theoretical
problems", some of which he then discusses.[6]
Bubble sort is asymptotically38 equivalent in running time to insertion sort in the worst
case, but the two algorithms differ greatly in the number of swaps necessary. Experimental
results such as those of Astrachan have also shown that insertion sort performs considerably
better even on random lists. For these reasons many modern algorithm textbooks avoid
using the bubble sort algorithm in favor of insertion sort.
Bubble sort also interacts poorly with modern CPU hardware. It produces at least twice
as many writes as insertion sort, twice as many cache misses, and asymptotically more
branch mispredictions39 .[citation needed]40 Experiments by Astrachan sorting strings in Java41
show bubble sort to be roughly one-fifth as fast as an insertion sort and 70% as fast as a
selection sort42 .[4]
In computer graphics bubble sort is popular for its capability to detect a very small error (like
a swap of just two elements) in almost-sorted arrays and fix it with just linear complexity (2n).
For example, it is used in a polygon filling algorithm, where bounding lines are sorted by
their x coordinate at a specific scan line (a line parallel to the x axis) and with incrementing
y their order changes (two elements are swapped) only at intersections of two lines. Bubble
sort is a stable sort algorithm, like insertion sort.
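The almost-sorted case can be demonstrated directly: with a single pair of adjacent elements swapped, an early-exit bubble sort needs only two passes, i.e. roughly 2n comparisons (an illustrative Python sketch, assuming a standard early-exit implementation):

```python
def bubble_sort_passes(a):
    """Early-exit bubble sort; returns (sorted list, number of passes)."""
    a = list(a)
    passes = 0
    swapped = True
    while swapped:
        swapped = False
        passes += 1
        for i in range(1, len(a)):
            if a[i - 1] > a[i]:
                a[i - 1], a[i] = a[i], a[i - 1]
                swapped = True
    return a, passes

# A 1000-element list with one pair of adjacent elements swapped:
nearly_sorted = list(range(1000))
nearly_sorted[500], nearly_sorted[501] = nearly_sorted[501], nearly_sorted[500]
_, passes = bubble_sort_passes(nearly_sorted)
print(passes)  # 2
```

The first pass fixes the single inversion; the second pass makes no swaps and terminates the algorithm, so the total work is linear in n.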

9.4 Variations
• Odd–even sort43 is a parallel version of bubble sort, for message passing systems.
• Passes can be from right to left, rather than left to right. This is more efficient for lists
with unsorted items added to the end.
• Cocktail shaker sort44 alternates leftwards and rightwards passes.
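The first of these variations can be sketched as a sequential simulation of the parallel phases (an illustrative Python version, not from the article; in a real message-passing system the compare-and-swap operations within each phase would run concurrently):

```python
def odd_even_sort(a):
    """Odd-even transposition sort: alternately compare-and-swap all pairs
    starting at odd indices, then all pairs starting at even indices. The
    comparisons within one phase touch disjoint pairs, so they are
    independent and could be executed in parallel."""
    a = list(a)
    is_sorted = False
    while not is_sorted:
        is_sorted = True
        for start in (1, 0):             # odd phase, then even phase
            for i in range(start, len(a) - 1, 2):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    is_sorted = False
    return a

print(odd_even_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```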

33 https://en.wikipedia.org/wiki/Owen_Astrachan
34 https://en.wikipedia.org/wiki/Jargon_File
35 https://en.wikipedia.org/wiki/Bogosort
36 https://en.wikipedia.org/wiki/Donald_Knuth
37 https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming
38 https://en.wikipedia.org/wiki/Big_O_notation
39 h