
CHAPTER 1

FUNDAMENTALS OF ALGORITHMS

CONTENTS
1.1. Introduction
1.2. Different Types of Problems Solved by Algorithms
1.3. Data Structures
1.4. Algorithms as a Technology
1.5. Algorithms and Other Technologies
1.6. Getting Started with Algorithms
1.7. Analyzing Algorithms
References

1.1. INTRODUCTION
We define an algorithm as any well-defined computational method that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transforms the input into the output. Another way of viewing an algorithm is as a tool for solving a well-specified computational problem (Aggarwal & Vitter, 1988; Agrawal et al., 2004). Generally, the statement of the problem specifies the desired relationship between input and output, whereas the algorithm describes a particular computational procedure for achieving that relationship. For instance, suppose that we want to sort a sequence of numbers into increasing order. This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools (Adel’son-Vel’skii & Landis, 1962; Abramowitz & Stegun, 1965). Here is how we formally define the sorting problem (Figure 1.1):
Input: A sequence of n numbers ⟨a1, a2, …, an⟩.
Output: A permutation (reordering) ⟨a1′, a2′, …, an′⟩ of the input sequence such that a1′ ≤ a2′ ≤ … ≤ an′.
As an example, given the input sequence (30, 35, 59, 26, 35, 57), a sorting algorithm returns the sequence (26, 30, 35, 35, 57, 59). Such an input sequence is called an instance of the sorting problem. In general, an instance of a problem consists of an input (satisfying whatever constraints are imposed in the problem statement) from which a solution to the problem is to be computed (Aho & Hopcroft, 1974; Aho et al., 1983).
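To make the input/output relation concrete, here is a minimal Python sketch (the checker and its name are ours, purely for illustration) that tests whether a proposed output is a sorted permutation of a given instance:

def is_valid_output(instance, output):
    """True when `output` is a nondecreasing permutation of `instance`."""
    same_elements = sorted(instance) == sorted(output)
    nondecreasing = all(output[k] <= output[k + 1] for k in range(len(output) - 1))
    return same_elements and nondecreasing

instance = [30, 35, 59, 26, 35, 57]                           # the instance above
print(is_valid_output(instance, [26, 30, 35, 35, 57, 59]))    # True
print(is_valid_output(instance, [26, 30, 35, 57, 59]))        # False: a 35 is missing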
Sorting is a fundamental operation in computer science, and numerous programs use it as an intermediate step. As a result, we have a large number of excellent sorting algorithms at our disposal (Ahuja & Orlin, 1989; Ahuja et al., 1990; 1993). Which algorithm is best for a given application depends on several factors: the number of items to be sorted, the degree to which the items are already somewhat sorted, the architecture of the computer, possible restrictions on the item values, and the kind of storage devices to be used (main memory, disks, or tapes).


An algorithm is correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem. An incorrect algorithm, by contrast, may not halt at all on some input instances, or it may halt with a wrong answer (Ahuja et al., 1989; Courcoubetis et al., 1992). Contrary to what one might expect, incorrect algorithms can sometimes be useful, provided we can control their rate of error. In general, however, we will be concerned only with correct algorithms (Szymanski & Van Wyk, 1983; King, 1995; Didier, 2009).

Figure 1.1: Basic layout and outline of algorithms. (Source: http://openclassroom.stanford.edu/MainFolder/CoursePage.php?course=IntroToAlgorithms).
We can specify an algorithm in English, as a computer program, or even as a hardware design. The only requirement is that the specification provide a precise description of the computational procedure to be followed (Snyder, 1984; Salisbury et al., 1996). The characteristics of an algorithm are stated below:
i. Each instruction must be precise and clear; that is, each instruction should have only one meaning.
ii. Each instruction must be executable in a finite amount of time.
iii. There must be no infinite repetition of any instruction or group of instructions; that is, the algorithm must eventually terminate.


iv. After the instructions have been executed, the user should obtain the desired results.

1.2. DIFFERENT TYPES OF PROBLEMS SOLVED BY ALGORITHMS
Sorting is by no means the only computational problem for which algorithms have been developed. Practical applications of algorithms are abundant worldwide; a few examples are given below (Figure 1.2):
i. The Human Genome Project has made great progress toward the goals of identifying all the roughly 100,000 genes in human DNA, determining the sequences of the three billion chemical base pairs that make up human DNA, storing this huge amount of information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms. Although the solutions to the various problems involved are beyond the scope of this book, many methods for solving these biological problems use ideas from various sections of this book, enabling scientists to accomplish tasks while using resources efficiently (Aki, 1989; Akra & Bazzi, 1998). The savings are in time, both machine and human, and in money, as more information can be extracted from laboratory techniques (Regli, 1992; Ajtai et al., 2001).
ii. Worldwide, the Internet enables people to rapidly access and retrieve large quantities of information. With the help of clever algorithms, sites on the Internet manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data can travel, and using a search engine to quickly find pages on which particular information resides (Alon, 1990; Andersson, 1995; 1996).
iii. Electronic commerce enables goods and services to be negotiated and exchanged electronically. It depends on the privacy of personal information such as passwords, credit card numbers, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures, which are based on number theory and numerical algorithms (Andersson & Thorup, 2000; Bakker et al., 2012).
iv. Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way. For instance, an oil company may wish to decide where to place its wells in order to maximize its expected profit. An airline may wish to assign crews to flights in the least expensive manner possible, making sure that each flight is covered and that government regulations concerning crew scheduling are met (Ramadge & Wonham, 1989; Amir et al., 2006). A political candidate may need to decide where to spend money buying campaign advertising in order to maximize the chances of winning an election. An Internet service provider (ISP) may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming (Dengiz et al., 1997; Berger & Barkaoui, 2004).

Figure 1.2: A characteristic example of algorithm application. (Source: https://invisiblecomputer.wonderhowto.com/how-to/coding-fundamentals-introduction-data-structures-and-algorithms-0135613/).


Although some details of these examples are beyond the scope of this book, we do discuss underlying techniques that apply to these problems and problem areas. We will also see how to solve many specific problems, including the following:
i. Suppose we are given a road map on which the distance between each pair of adjacent intersections is marked, and we wish to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves (Festa & Resende, 2002; Zhu & Wilhelm, 2006). How do we choose the shortest of all possible routes? Here, we model the road map as a graph and then find the shortest path from one vertex to another in the graph.
ii. Suppose we are given two ordered sequences of symbols, X = (x1, x2, …, xm) and Y = (y1, y2, …, yn), and we wish to find the longest common subsequence of X and Y. A subsequence of X is X with some (or possibly all or none) of its elements removed. For example, one subsequence of (A, B, C, D, E, F, G, H, I) is (B, C, E, G). The length of the longest common subsequence of X and Y gives a measure of how similar the two sequences are; for instance, if the two sequences are base pairs in strands of DNA, we may consider them similar if they have a long common subsequence. If X has m symbols and Y has n symbols, then X and Y have 2ᵐ and 2ⁿ possible subsequences, respectively, so generating every subsequence of X and Y and matching them up could take a prohibitively long time unless m and n are very small (Wallace et al., 2004; Wang, 2008). (A dynamic-programming sketch of this problem appears just after this list.)
iii. Suppose we are given a mechanical design in terms of a library of parts. Each part may include instances of other parts, and we need to list the parts in order so that each part appears before any part that uses it (Maurer, 1985; Smith, 1986). If the design has n parts, then there are n! possible orders, where n! denotes the factorial function. Because the factorial function grows faster than even an exponential function, it is not feasible to generate each possible order and then verify, within that order, that each part appears before the parts using it. Such a problem is an instance of topological sorting (Herr, 1980; Stock & Watson, 2001).
iv. Suppose we are given n points in the plane, and we wish to find the convex hull of these points: the smallest convex polygon containing the points. Intuitively, we can think of each point as being represented by a nail sticking out from a board; the convex hull would then be represented by a tight rubber band that surrounds all the nails (Arora et al., 1998; 2012). Each nail around which the rubber band makes a turn is a vertex of the convex hull, so any of the 2ⁿ subsets of the points could be the vertices of the convex hull. Knowing which points are vertices of the convex hull is not quite enough, either, since we also need to know the order in which they appear. There are many choices, therefore, for the vertices of the convex hull (Price, 1973; Berry & Howls, 2012).
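As promised in item ii, here is a short dynamic-programming sketch of the longest-common-subsequence length. This standard approach runs in time proportional to mn, avoiding the 2ᵐ-subsequence enumeration; the function name and test strings are ours:

def lcs_length(X, Y):
    """Length of a longest common subsequence of sequences X and Y."""
    m, n = len(X), len(Y)
    # c[i][j] = LCS length of the prefixes X[:i] and Y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]

print(lcs_length("ABCDEFGHI", "BCEG"))   # 4, matching (B, C, E, G) from the example above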
These problems share two features that are common to many interesting algorithmic problems:
i. They have many candidate solutions, the overwhelming majority of which do not solve the problem at hand. Finding the one that does, or one that is “best,” can be quite challenging (Arora, 1994; 1998; Arora & Howls, 2012).
ii. They have practical applications. The simplest example among the problems listed above is finding the shortest path. Any transportation firm, such as a railroad or trucking company, has a financial interest in finding shortest paths through a road or rail network, because taking shorter paths results in lower labor and fuel costs (Bellare & Sudan, 1994; Friedl & Sudan, 1995). Similarly, a routing node on the Internet may need to find the shortest path through the network in order to route a message quickly. And a person wishing to drive from Washington to Chicago may want driving directions from an appropriate Website or may use a GPS while driving (Sudan, 1992; Arora & Lund, 1996).
Not every problem solved by algorithms, however, has an easily identified set of candidate solutions. For example, suppose we are given a set of numerical values representing samples of a signal, and we wish to compute the discrete Fourier transform (DFT) of these samples (Fredman & Willard, 1993; Raman, 1996). The DFT converts the time domain into the frequency domain by producing a set of numerical coefficients, so that we can determine the strength of various frequencies in the sampled signal. Besides lying at the heart of signal processing, discrete Fourier transforms have numerous applications in data compression and in multiplying large polynomials and integers (Polishchuk & Spielman, 1994; Thorup, 1997).
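To make the DFT concrete, the following Python sketch evaluates the defining formula X_k = Σ_{t=0}^{n−1} x_t · e^{−2πi·kt/n} directly, in time proportional to n². The sample values are ours; in practice one would use the fast Fourier transform, which needs only about n lg n operations:

import cmath

def dft(samples):
    """Direct evaluation of X_k = sum_t x_t * e^(-2*pi*i*k*t/n)."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

X = dft([0.0, 1.0, 0.0, -1.0])          # one cycle of a sampled wave
print([round(abs(x), 6) for x in X])    # [0.0, 2.0, 0.0, 2.0]: all the energy
                                        # sits at frequency 1 and its mirror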

1.3. DATA STRUCTURES
A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, so it is important to know the strengths and limitations of several of them (Brodnik et al., 1997).

1.3.1. Hard Problems
When speaking of efficient algorithms, our usual measure of efficiency is speed, that is, how long an algorithm takes to produce its result. There are some problems, however, for which no efficient solution is known. NP-complete (nondeterministic polynomial time) problems are considered interesting for several reasons (Phillips & Westbrook, 1993; Ciurea & Ciupala, 2001). First, although no efficient algorithm for an NP-complete problem has ever been found, nobody has ever proved that an efficient algorithm for one cannot exist; in other words, no one knows whether or not efficient algorithms exist for NP-complete problems. Second, the set of NP-complete problems has the remarkable property that if an efficient algorithm exists for any one of them, then efficient algorithms exist for all of them (Cheriyan & Hagerup, 1989; 1995). This relationship among the NP-complete problems makes the lack of efficient solutions all the more tantalizing. Third, several NP-complete problems are similar, but not identical, to problems for which we do have efficient algorithms. Computer scientists are intrigued by how a small change to the problem statement can cause a big change in the efficiency of the best-known algorithm (Cheriyan et al., 1990; 1996).
One should be familiar with NP-complete problems because some of them arise surprisingly often in real applications. If you are called upon to produce an efficient algorithm for an NP-complete problem, you are likely to spend a lot of time in a fruitless search. If instead you can show that the problem is NP-complete, you can spend your time developing an efficient algorithm that gives a good, though not the best possible, solution (Leighton, 1996; Roura, 2001; Drmota & Szpankowski, 2013).
As a concrete example, consider a delivery company with a central depot. Each day, its delivery trucks are loaded at the depot and then sent around to deliver goods to various addresses. At the end of the day, each truck must return to the depot so that it can be made ready for loading the next day (Bentley et al., 1980; Chan, 2000; Yap, 2011). To reduce costs, the company wants to select an order of delivery stops that yields the lowest overall distance traveled by each truck. This problem is the well-known “traveling-salesman problem,” and it is NP-complete: it has no known efficient algorithm. Under certain assumptions, however, there are efficient algorithms that give an overall distance not too far above the smallest possible (Shen & Marston, 1995; Verma, 1997).
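By way of illustration only, the sketch below applies the simple nearest-neighbor rule to the delivery problem: always drive to the closest unvisited stop, then return to the depot. This greedy heuristic is not one of the bounded-error algorithms alluded to above, and the coordinates are invented:

import math

def tour_length(order, pts):
    """Total length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbor_tour(pts, depot=0):
    """Greedy tour: repeatedly drive to the closest unvisited stop."""
    unvisited = set(range(len(pts))) - {depot}
    tour, current = [depot], depot
    while unvisited:
        current = min(unvisited, key=lambda s: math.dist(pts[current], pts[s]))
        tour.append(current)
        unvisited.remove(current)
    return tour                      # tour_length closes the loop back to the depot

stops = [(0, 0), (2, 1), (5, 0), (1, 4), (4, 3)]     # depot plus four stops
tour = nearest_neighbor_tour(stops)
print(tour, round(tour_length(tour, stops), 2))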

1.3.2. Parallelism
For many years, we could count on processor clock speeds increasing at a steady rate. Physical limitations, however, present a fundamental roadblock to ever-increasing clock speeds: because power density rises superlinearly with clock speed, chips run the risk of melting once their clock speeds become high enough (Meijer & Akl, 1987; 1988). In order to perform more computations per second, therefore, chips are now designed with multiple processing cores. We can liken these multicore computers to several sequential computers on a single chip; in other words, they are a type of “parallel computer.” To elicit the best possible performance from multicore computers, we must design algorithms with parallelism in mind: “multithreaded” algorithms take advantage of multiple cores. This model has several advantages from a theoretical standpoint, and it forms the basis of numerous successful computer programs (Wilf, 1984; Selim & Meijer, 1987).

1.4. ALGORITHMS AS A TECHNOLOGY
Suppose that computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms? The answer is still yes: we would still want to demonstrate that our solution terminates, and does so with the correct answer. If computers were infinitely fast, any correct method for solving a problem would do (Igarashi et al., 1987; Park & Oldfield, 1993). You would probably want your software implementation to fall within the bounds of good software engineering practice (for instance, your implementation should be well designed and well documented), but you would most often use whichever method is easiest to implement (Glasser & Austin Barron, 1983; Immorlica et al., 2006).
Of course, computers may be fast, but they are not infinitely fast. Likewise, memory may be inexpensive, but it is not entirely free. Computing time is therefore a bounded resource, and so is space in memory. These resources should be used wisely, and algorithms that are efficient in terms of time or space help us do so (Wilson, 1991; Kleinberg, 2005; Buchbinder et al., 2010).
Different algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software (Vanderbei, 1980; Babaioff et al., 2007; Bateni et al., 2010).
To illustrate, consider two algorithms for sorting. The first, insertion sort, takes time roughly equal to c1n² to sort n items, where c1 is a constant that does not depend on n; that is, it takes time roughly proportional to n². The second, merge sort, takes time roughly equal to c2n lg n, where lg n stands for log2 n and c2 is another constant that also does not depend on n. Insertion sort typically has a smaller constant factor than merge sort, so that c1 < c2. We shall see that the constant factors can have far less of an impact on the running time than the dependence on the input size n (Aslam et al., 2010; Du & Atallah, 2001; Eppstein et al., 2005). If we write insertion sort's running time as c1n · n and merge sort's running time as c2n lg n, then we see that where insertion sort has a factor of n in its running time, merge sort has a factor of lg n, which is much smaller. Although insertion sort usually runs faster than merge sort for small input sizes, once the input size n becomes large enough, merge sort's advantage of lg n versus n more than compensates for the difference in constant factors. No matter how much smaller c1 is than c2, there will always be a crossover point beyond which merge sort is faster (Sardelis & Valahas, 1999; Dickerson et al., 2003).
For a concrete example, let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. Each must sort an array of 10 million numbers. (Although 10 million numbers may seem like a lot, if the numbers are eight-byte integers, then the input occupies about 80 megabytes, which fits in the memory of even an inexpensive laptop computer many times over.) Suppose that computer A executes 10 billion instructions per second, whereas computer B executes only 10 million instructions per second, so that computer A is 1,000 times faster than computer B in raw computing power (Sion et al., 2002; Goodrich et al., 2005).
To make the difference even more dramatic, suppose that the world's craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2n² instructions to sort n numbers. Suppose further that just an average programmer implements merge sort, using a high-level language with an inefficient compiler, and the resulting code takes 50n lg n instructions. To sort 10 million numbers, computer A takes:

(2 · (10⁷)² instructions) / (10¹⁰ instructions/second) = 20,000 seconds (more than 5.5 hours),

whereas computer B takes:

(50 · 10⁷ · lg 10⁷ instructions) / (10⁷ instructions/second) ≈ 1163 seconds (less than 20 minutes).
By using an algorithm whose running time grows more slowly, even with a poor compiler, computer B runs more than 17 times faster than computer A! The advantage of merge sort is even more pronounced when we sort 100 million numbers: insertion sort then takes more than 23 days, whereas merge sort takes under four hours. In general, as the problem size increases, so does the relative advantage of merge sort (Eppstein et al., 2008; Acar, 2009).
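The arithmetic above is easy to reproduce. The following Python sketch evaluates both running times under the stated assumptions (2n² instructions at 10¹⁰ instructions per second versus 50n lg n instructions at 10⁷ instructions per second):

import math

def seconds_insertion(n):
    return 2 * n ** 2 / 1e10             # 2n^2 instructions at 10^10 instr/sec

def seconds_merge(n):
    return 50 * n * math.log2(n) / 1e7   # 50 n lg n instructions at 10^7 instr/sec

for n in (10 ** 7, 10 ** 8):
    print(f"n = {n:.0e}: insertion sort {seconds_insertion(n):,.0f} s, "
          f"merge sort {seconds_merge(n):,.0f} s")
# n = 1e+07: insertion sort 20,000 s (5.5+ hours), merge sort 1,163 s (under 20 minutes)
# n = 1e+08: insertion sort 2,000,000 s (23+ days), merge sort 13,288 s (under 4 hours)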

1.5. ALGORITHMS AND OTHER TECHNOLOGIES
The example above shows that we should consider algorithms, like computer hardware, as a technology. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware. Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well (Amtoft et al., 2002; Ausiello et al., 2012). You might wonder whether algorithms are truly that important on contemporary computers, given other advanced technologies such as:
i. Intuitive, easy-to-use, graphical user interfaces (GUIs).
ii. Advanced computer architectures and production technologies.
iii. Fast networking, both wireless and wired.
iv. Object-oriented systems.
v. Integrated Web technologies.
The answer is yes. Although some applications do not explicitly require algorithmic content at the application level (such as some simple, web-based applications), many others do. For example, consider a web-based service that determines how to travel from one location to another (Szymanski, 1975; Aslam et al., 2010). Its implementation would rely on fast hardware, a graphical user interface, wide-area networking, and, quite possibly, on object orientation. It would also require algorithms for certain operations, such as finding routes (probably using a shortest-path algorithm), rendering maps, and interpolating addresses (Bach, 1990; Avidan & Shamir, 2007).
Furthermore, even an application that does not require algorithmic content at the application level relies heavily on algorithms. The application relies on fast hardware, and hardware design uses algorithms. The application relies on graphical user interfaces, whose design again relies on algorithms (Winograd, 1970; Bach & Shallit, 1996). The application may also rely on networking, and routing in networks relies heavily on algorithms. If the application is written in any language other than machine language, it must be processed by a compiler, an interpreter, or an assembler, all of which make extensive use of algorithms. We can thus conclude that algorithms lie at the core of most technologies used in contemporary computers (Bailey et al., 1991; Baswana et al., 2002).
Moreover, with the ever-increasing capacities of computers, we use them to solve larger problems than ever before. As the comparison between merge sort and insertion sort above shows, it is at larger problem sizes that the differences in efficiency between algorithms become especially prominent (Bayer, 1972; Arge, 2001; Graefe, 2011). Having a solid base of algorithmic knowledge and technique is one characteristic that separates the truly skilled programmers from the novices. With modern computing technology, you can accomplish some tasks without knowing much about algorithms, but with a good background in algorithms, you can do much, much more (Blasgen et al., 1977; Harth et al., 2007).

1.6. GETTING STARTED WITH ALGORITHMS
We begin by examining the insertion sort algorithm, which solves the sorting problem introduced in the earlier sections. We will describe our algorithms in a “pseudocode” that should be familiar to you if you have done any computer programming, and we will use it to show how we specify our algorithms. After specifying the insertion sort algorithm, we argue that it sorts correctly, and then we analyze its running time (Kwong & Wood, 1982; Icking et al., 1987). The analysis introduces a notation that focuses on how the running time increases with the number of items to be sorted. Following the discussion of insertion sort, we will introduce the divide-and-conquer approach to designing algorithms and use it to develop an algorithm called merge sort. We end with an analysis of merge sort's running time (Nievergelt, 1974; Beauchemin et al., 1988).

1.6.1. Insertion Sort
Our first algorithm, insertion sort, solves the sorting problem introduced earlier in this chapter:
Input: A sequence of n numbers ⟨a1, a2, …, an⟩.
Output: A permutation (reordering) ⟨a1′, a2′, …, an′⟩ of the input sequence such that a1′ ≤ a2′ ≤ … ≤ an′.
The numbers to be sorted are also known as the keys. Although conceptually we are sorting a sequence, the input comes to us in the form of an array with n elements.
We will typically describe algorithms as programs written in a pseudocode that is similar in many respects to C, C++, Java, Python, or Pascal. If you have been introduced to any of these languages, you should have little trouble reading our algorithms (Monier, 1980; Kim & Pomerance, 1989; Damgård et al., 1993) (Figure 1.3).


Figure 1.3: Sorting a hand of cards using insertion sort. (Source: https://mitpress.mit.edu/books/introduction-algorithms-third-edition).
What separates pseudocode from real code is that in pseudocode, we employ whatever expressive method is most clear and concise to specify a given algorithm. Sometimes the clearest method is English, so do not be surprised if you come across an English phrase or sentence embedded within a section of “real” code (Ben-Or, 1983; Bender et al., 2000). Another difference between pseudocode and real code is that pseudocode is not typically concerned with issues of software engineering. Issues of data abstraction, modularity, and error handling are often ignored in order to convey the essence of the algorithm more concisely (Wunderlich, 1983; Morain, 1988).
We start with insertion sort, an efficient algorithm for sorting a small number of elements. Insertion sort works the way many people sort a hand of playing cards. We start with an empty left hand and the cards face down on the table. We then remove one card at a time from the table and insert it into the correct position in the left hand (Fussenegger & Gabow, 1979; John, 1988). To find the correct position for a card, we compare it with each of the cards already in the hand, from right to left, as illustrated in Figure 1.3. At all times, the cards held in the left hand are sorted, and these cards were originally the top cards of the pile on the table (Kirkpatrick, 1981; Munro & Poblete, 1982; Bent & John, 1985).
We present our pseudocode for insertion sort as a procedure called INSERTION-SORT, which takes as a parameter an array A[1 … n] containing a sequence of length n that is to be sorted. The algorithm sorts the input array in place: it rearranges the numbers within the array A, with at most a constant number of them stored outside the array at any time. The input array A contains the sorted output sequence when the INSERTION-SORT procedure is finished (Schönhage et al., 1976; Bentley et al., 1980; Bienstock, 2008) (Figure 1.4).

Figure 1.4: The operation of INSERTION-SORT on the array A = (2, 5, 4, 6, 1, 3). Array indices appear above the rectangles, and values stored in the array positions appear within the rectangles. (a)–(d) The iterations of the for loop of lines 1–8. In each iteration, the black rectangle holds the key taken from A[j], which is compared with the values in the shaded rectangles to its left in the test of line 5. Shaded arrows show array values moved one position to the right in line 6, and black arrows indicate where the key moves to in line 8. (e) The final sorted array. (Source: https://mitpress.mit.edu/books/introduction-algorithms-third-edition).
INSERTION-SORT(A) is illustrated below; the line numbers are those referenced in Figure 1.4 and in the analysis that follows:
1. for j = 2 to A.length
2.     key = A[j]
3.     // Insert A[j] into the sorted sequence A[1 … j – 1].
4.     i = j – 1
5.     while i > 0 and A[i] > key
6.         A[i + 1] = A[i]
7.         i = i – 1
8.     A[i + 1] = key
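As a check, here is a minimal Python transcription of the same procedure (ours, not part of the text); 0-based indexing replaces the pseudocode's 1-based arrays, and the comments refer to the pseudocode line numbers:

def insertion_sort(A):
    for j in range(1, len(A)):           # line 1: for j = 2 to A.length
        key = A[j]                       # line 2
        i = j - 1                        # line 4
        while i >= 0 and A[i] > key:     # line 5
            A[i + 1] = A[i]              # line 6: shift the larger value right
            i = i - 1                    # line 7
        A[i + 1] = key                   # line 8: drop the key into place
    return A

print(insertion_sort([2, 5, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6], the array of Figure 1.4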

1.6.2. Loop Invariants and the Correctness of Insertion Sort
Figure 1.4 shows how this algorithm works for A = (2, 5, 4, 6, 1, 3). The index j indicates the “current card” being inserted into the hand. At the beginning of each iteration of the for loop, which is indexed by j, the subarray consisting of elements A[1 … j – 1] constitutes the currently sorted hand, and the remaining subarray A[j + 1 … n] corresponds to the pile of cards still on the table. In fact, elements A[1 … j – 1] are the elements originally in positions 1 through j – 1, but now in sorted order. We state these properties of A[1 … j – 1] formally as a loop invariant.
We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:
Initialization: It is true prior to the first iteration of the loop.
Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.
Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.
When the first two properties hold, the loop invariant is true prior to every iteration of the loop. Note the similarity to mathematical induction: to prove that a property holds, we prove a base case and an inductive step (Duda et al., 1973; Bloom & Van Reenen, 2002). Here, showing that the invariant holds before the first iteration corresponds to the base case, and showing that the invariant holds from iteration to iteration corresponds to the inductive step (Dehling & Philipp, 2002; Bienstock & McClosky, 2012).
The third property is perhaps the most important one, since we are using the loop invariant to show correctness. Typically, we use the loop invariant along with the condition that caused the loop to terminate. The termination property differs from our usual use of mathematical induction, in which we apply the inductive step infinitely; here, we stop the “induction” when the loop terminates. Let us see how these properties hold for insertion sort (Bollerslev et al., 1994; Tan & Acharya, 1999).
Initialization: We begin by showing that the loop invariant holds before the first loop iteration, when j = 2. The subarray A[1 … j – 1] then consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is (trivially) sorted, which shows that the loop invariant holds prior to the first iteration of the loop (Williamson, 2002; D’Aristotile et al., 2003).
Maintenance: Next, we tackle the second property: showing that each iteration maintains the loop invariant. The body of the for loop works by moving A[j – 1], A[j – 2], A[j – 3], and so on, each one position to the right, until it finds the proper position for A[j] (lines 4–7), at which point it inserts the value of A[j] (line 8). The subarray A[1 … j] then consists of the elements originally in A[1 … j], but in sorted order. Incrementing j for the next iteration of the for loop then preserves the loop invariant (Nelson & Foster, 1995; Conforti et al., 2010).
Termination: Finally, we examine what happens when the loop terminates. The condition causing the for loop to terminate is that j > A.length = n. Because each loop iteration increases j by 1, we must have j = n + 1 at that time. Substituting n + 1 for j in the wording of the loop invariant, we have that the subarray A[1 … n] consists of the elements originally in A[1 … n], but in sorted order. Observing that the subarray A[1 … n] is the entire array, we conclude that the entire array is sorted. Hence, the algorithm is correct (Fourier, 1890; 1973; Bazzi et al., 2017).
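The invariant can even be checked mechanically. In the Python sketch below (ours, for illustration), an assertion before each iteration verifies that the prefix holds the original prefix elements in sorted order, and a final assertion captures the termination property:

def insertion_sort_checked(A):
    original = list(A)
    for j in range(1, len(A)):
        # Loop invariant: A[:j] holds the elements originally in A[:j],
        # in sorted order (the 0-based analogue of A[1 .. j-1]).
        assert A[:j] == sorted(original[:j]), "loop invariant violated"
        key, i = A[j], j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key
    assert A == sorted(original)   # termination: the whole array is sorted
    return A

insertion_sort_checked([2, 5, 4, 6, 1, 3])   # completes with no assertion errors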

1.6.3. Pseudocode Conventions
We use the following conventions in our pseudocode:
i. Indentation indicates block structure. For example, the body of the for loop that begins on line 1 consists of lines 2–8, and the body of the while loop that begins on line 5 contains lines 6–7 but not line 8. Our indentation style also applies to if-else statements. Using indentation instead of conventional indicators of block structure, such as begin and end statements, greatly reduces clutter while preserving, or even enhancing, clarity (Verma, 1994; 1997; Faenza & Sanità, 2015).
ii. The looping constructs while, for, and repeat-until and the if-else conditional construct are similar to those in C, C++, Java, Python, and Pascal. We assume that the loop counter retains its value after the loop is exited, unlike some situations that arise in Java, C++, and Pascal. Thus, immediately after a for loop, the loop counter's value is the value that first exceeded the for loop bound. We use this property in our correctness argument for insertion sort (Blelloch, 1989; 1996; Xiaodong & Qingxiang, 1996). (A short demonstration of this convention appears after this list.)
iii. In an if-else statement, we indent else at the same level as its matching if. Although we omit the keyword then, we occasionally refer to the portion executed when the test following if is true as the then clause. For multiway tests, we use else-if for tests after the first one. Most block-structured languages have equivalent constructs, though the exact syntax may differ. Python has no repeat-until loops, and its for loops operate differently from the for loops described here (Blelloch et al., 1994; 1995; 1997; 1999).
iv. The symbol “//” indicates that the remainder of the line is a comment.
v. A multiple assignment of the form i = j = e assigns to both variables i and j the value of expression e; it should be treated as equivalent to the assignment j = e followed by the assignment i = j.
vi. Variables (such as i, j, and key) are local to the given procedure. We do not use global variables without explicit indication (Blelloch & Greiner, 1996; Blelloch & Mags, 1996; 2010).
vii. We access array elements by specifying the array name followed by the index in square brackets. For example, A[i] indicates the i-th element of the array A. The “…” notation indicates a range of values within an array; thus, A[1 … j] indicates the subarray of A consisting of the j elements A[1], A[2], …, A[j].
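As a small demonstration of convention (ii), referred to in that item, the following Python sketch simulates the pseudocode for loop with a while loop, so that the counter retains the value that first exceeded the bound (Python's own for loop leaves its variable at the last value taken, which is why the simulation is needed):

n = 5
j = 2
while j <= n:        # simulates the pseudocode loop: for j = 2 to n
    pass             # (loop body would go here)
    j = j + 1
print(j)             # 6, the first value that exceeded the bound n;
                     # the termination argument for insertion sort uses this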

1.7. ANALYZING ALGORITHMS
Analyzing an algorithm means predicting the resources that the algorithm requires. Occasionally, resources such as communication bandwidth, memory, or computer hardware are of primary concern, but most often it is computational time that we want to measure (Frigo et al., 1998; Blumofe & Leiserson, 1999). Generally, by analyzing several candidate algorithms for a problem, we can identify the most efficient one. Such analysis may indicate more than one viable candidate, but we can often discard several inferior algorithms in the process (Blum et al., 1973; Blelloch & Gibbons, 2004).
Before we can analyze an algorithm, we must have a model of the implementation technology that we will use, including a model of the resources of that technology and their costs. Here, we take as our implementation technology a generic one-processor random-access machine (RAM) model of computation, and we implement our algorithms as computer programs. In the RAM model, instructions are executed one after another, with no concurrent operations (Brent, 1974; Buhler et al., 1993; Brassard & Bratley, 1996).
Strictly speaking, we should precisely define the instructions of the RAM model and their costs. Doing so, however, would be tedious and would yield little insight into algorithm design and analysis. Still, we must be careful not to abuse the RAM model. For example, if a RAM had an instruction that sorts, we could sort in just one instruction; such a RAM would be unrealistic, since real computers do not have such instructions (Chen & Davis, 1990; Prechelt, 1993; Sheffler, 1993).
We therefore pattern the RAM model on the design of real computers. The RAM model contains instructions commonly found in real computers: data movement (load, store, copy), arithmetic (such as addition, subtraction, multiplication, division, remainder, floor, ceiling), and control (conditional and unconditional branch, subroutine call and return). Each such instruction takes a constant amount of time (Hsu et al., 1992; 1993).
The data types in the RAM model are integer and floating point (for storing real numbers). Although we are typically not concerned with precision, in a few applications precision is crucial. We also assume a limit on the size of each word of data. For example, when working with inputs of size n, we typically assume that integers are represented by c lg n bits for some constant c ≥ 1. We require c ≥ 1 so that each word can hold the value of n, enabling us to index the individual input elements; we restrict c to be a constant so that the word size does not grow arbitrarily. (If the word size could grow arbitrarily, we could store huge amounts of data in one word and operate on it all in constant time, which is clearly an unrealistic scenario.)
Real computers contain instructions that represent a gray area in the RAM model. For example, exponentiation is not a constant-time instruction in general: it takes several instructions to compute xʸ when x and y are real numbers (Little et al., 1989; Narayanan & Davis, 1992). In restricted situations, however, exponentiation is a constant-time operation. Many computers have a “shift left” instruction, which shifts the bits of an integer by k positions to the left in constant time (Martínez et al., 2004; Bengtsson, 2006). In most computers, shifting the bits of an integer one position to the left is equivalent to multiplying by 2, so that shifting the bits k positions to the left is equivalent to multiplying by 2ᵏ. Therefore, such computers can compute 2ᵏ in one constant-time instruction by shifting the integer 1 to the left by k positions, as long as k is no more than the number of bits in a computer word (Hung, 1988; Chen & Davis, 1991).
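Python's integers make this claim easy to verify; here k is kept small, standing in for “no more than the number of bits in a word”:

for k in range(8):
    assert 1 << k == 2 ** k          # shifting 1 left by k computes 2^k
    assert 13 << k == 13 * 2 ** k    # shifting left by k multiplies by 2^k
print([1 << k for k in range(8)])    # [1, 2, 4, 8, 16, 32, 64, 128]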
In the RAM model, we do not attempt to model the memory hierarchy that is common in contemporary computers; that is, we do not model caches or virtual memory. Several computational models attempt to account for memory-hierarchy effects, which are sometimes significant in real programs on real machines. Models that include the memory hierarchy are quite a bit more complicated than the RAM model, so they can be difficult to work with. Moreover, RAM-model analyses are usually excellent predictors of performance on actual machines (Bae & Takaoka, 2005; Lin & Lee, 2007).
Analyzing even a simple algorithm in the RAM model can be a challenge. The mathematical tools required may include combinatorics, probability theory, algebraic dexterity, and the ability to identify the most significant terms in a formula. Because the behavior of an algorithm may differ for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas (Agrawal et al., 2006; 2007; Brodal & Jørgensen, 2007).
Even though we typically select only one machine model to analyze a given algorithm, we still face many choices in deciding how to express our analysis. We would like a way that is simple to write and manipulate, shows the important characteristics of an algorithm's resource requirements, and suppresses tedious details.

1.7.1. Order of Growth
We can use a few simplifying abstractions to ease our analysis of INSERTION-SORT. First, we can ignore the actual cost of each statement, using the constants ci to represent these costs. Then we observe that even these constants give us more detail than we really need: in the next section, the worst-case running time is expressed as an² + bn + c for some constants a, b, and c that depend on the statement costs ci. We thus ignore not only the actual statement costs, but also the abstract costs ci.
One further simplifying abstraction is to consider only the rate of growth, or order of growth, of the running time, which is what really interests us. We therefore consider only the leading term of a formula (that is, an²), since the lower-order terms are relatively insignificant for large values of n. We also ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. For insertion sort, when we ignore the lower-order terms and the leading term's constant coefficient, we are left with the factor of n² from the leading term.
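A quick numerical sketch, with invented coefficients, shows why only the leading term matters: the ratio of the full cost an² + bn + c to the leading term an² approaches 1 as n grows:

a, b, c = 2, 100, 1000               # illustrative constants only

for n in (10, 100, 1000, 10 ** 6):
    full = a * n ** 2 + b * n + c    # the complete worst-case cost
    print(n, full / (a * n ** 2))    # the ratio falls toward 1 as n grows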

1.7.2. Analysis of Insertion Sort
The time taken by the INSERTION-SORT procedure depends on the input: sorting a thousand numbers naturally takes longer than sorting three numbers. Moreover, INSERTION-SORT can take different amounts of time to sort two input sequences of the same size, depending on how nearly sorted they already are. In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. To do so, we need to define the terms “running time” and “size of input” more carefully.
The best notion of input size depends on the problem being studied. For many problems, such as sorting or computing discrete Fourier transforms, the most natural measure is the number of items in the input (for example, the array size n for sorting). For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation. Sometimes it is more appropriate to describe the size of the input with two numbers rather than one; for instance, if the input to an algorithm is a graph, the input size can be described by the numbers of vertices and edges in the graph. We shall indicate which input size measure is being used with each problem we study.
The running time of an algorithm on a particular input is the number of primitive operations, or steps, executed. It is convenient to define the notion of a step so that it is as machine-independent as possible. For now, let us adopt the following view: a constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another line, but we shall assume that each execution of the i-th line takes a constant amount of time equal to ci. This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode would be implemented on most actual computers. The discussion that follows evolves the expression for the running time of INSERTION-SORT from a messy formula that uses all the statement costs ci into a much simpler notation that is more concise and more easily manipulated. This simpler notation makes it easy to determine whether one algorithm is more efficient than another. We begin by presenting the INSERTION-SORT procedure with the time “cost” of each statement and the number of times each statement is executed.
For each j = 2, 3, …, n, where n = A.length, we let tj denote the number of times the while loop test in line 5 is executed for that value of j. When a for or while loop exits in the usual way (i.e., due to the test in the loop header), the test is executed one time more than the loop body. We assume that comments are not executable statements, so they take no time.

Line  INSERTION-SORT(A)                         Cost   Times
1     for j = 2 to A.length                     c1     n
2         key = A[j]                            c2     n – 1
3         // Insert A[j] into the sorted        0      n – 1
          // sequence A[1 … j – 1].
4         i = j – 1                             c4     n – 1
5         while i > 0 and A[i] > key            c5     Σ_{j=2}^{n} t_j
6             A[i + 1] = A[i]                   c6     Σ_{j=2}^{n} (t_j – 1)
7             i = i – 1                         c7     Σ_{j=2}^{n} (t_j – 1)
8         A[i + 1] = key                        c8     n – 1


There are some subtleties here. Computational steps that we specify in English are often variants of a procedure that requires more than just a constant amount of time. For example, “sort the points by x-coordinate” typically takes more than a constant amount of time. Note also that a statement that calls a subroutine takes constant time, though the subroutine, once invoked, may take more; that is, we separate the process of calling the subroutine from the process of executing it. The running time of the algorithm is the sum of the running times for each statement executed.

A statement that takes ci steps to execute and executes n times in total contributes ci·n to the total running time. To compute T(n), the total running time of INSERTION-SORT on an input of n values, we sum the products of the cost and times columns, obtaining:

T(n) = c1·n + c2(n – 1) + c4(n – 1) + c5 Σ_{j=2}^{n} t_j + c6 Σ_{j=2}^{n} (t_j – 1) + c7 Σ_{j=2}^{n} (t_j – 1) + c8(n – 1).

Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given. For example, in INSERTION-SORT the best case occurs when the array is already sorted. For each j = 2, 3, …, n, we then find that A[i] ≤ key in line 5 when i has its initial value of j – 1. Thus tj = 1 for j = 2, 3, …, n, and the best-case running time is:

T(n) = c1·n + c2(n – 1) + c4(n – 1) + c5(n – 1) + c8(n – 1)
     = (c1 + c2 + c4 + c5 + c8)·n – (c2 + c4 + c5 + c8).

We can express this running time as an + b for constants a and b that depend on the statement costs ci; it is thus a linear function of n.
If the array is sorted in reverse order—that is to say, in decreasing order—the worst case results. Each element A[j] must then be compared with every element of the entire sorted subarray A[1 … j − 1], and thus tj = j for j = 2, 3, …, n. Keeping in mind that
\sum_{j=2}^{n} j = \frac{n(n+1)}{2} - 1

and

\sum_{j=2}^{n} (j - 1) = \frac{n(n-1)}{2}
the running time of INSERTION-SORT in the worst case is

T(n) = c_1 n + c_2(n-1) + c_4(n-1) + c_5\left(\frac{n(n+1)}{2} - 1\right) + c_6\left(\frac{n(n-1)}{2}\right) + c_7\left(\frac{n(n-1)}{2}\right) + c_8(n-1)
     = \left(\frac{c_5}{2} + \frac{c_6}{2} + \frac{c_7}{2}\right) n^2 + \left(c_1 + c_2 + c_4 + \frac{c_5}{2} - \frac{c_6}{2} - \frac{c_7}{2} + c_8\right) n - (c_2 + c_4 + c_5 + c_8).

This worst-case running time can be expressed as an^2 + bn + c for constants a, b, and c, which again depend on the statement costs ci; it is therefore a quadratic function of n.
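The quadratic growth is just as easy to observe. On reverse-sorted input tj = j, so the total number of while-loop tests is n(n + 1)/2 − 1. A brief check, assuming the total_while_tests helper defined in the best-case sketch above:

```python
for n in (10, 100, 1000):
    observed = total_while_tests(range(n, 0, -1))   # reverse-sorted input
    predicted = n * (n + 1) // 2 - 1                # sum of t_j = j for j = 2..n
    print(n, observed, predicted)                   # the two counts agree
```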


Ordinarily, as with insertion sort, the running time of an algorithm is fixed for any given input, although there are also some "randomized" algorithms whose behavior can vary even for a fixed input.

1.7.3. Analysis of Worst-Case and Average-Case


For the analysis of insertion sort we have discussed both the best case, in which the input array is already sorted, and the worst case, in which the input array is sorted in reverse. Our main concern, however, is the worst-case running time: the longest running time for any input of size n. The three main reasons for this orientation are stated below:
i. The worst-case running time of an algorithm gives an upper bound on the running time for any input. Knowing this bound provides a guarantee that the algorithm will never take longer; it spares us having to make an educated guess and hope that the running time never gets much worse.
ii. The worst case occurs quite frequently for some algorithms. For instance, when searching a database for a particular piece of information, the searching algorithm's worst case often occurs when the information is not present in the database.
iii. The "average case" is often roughly as bad as the worst case. Suppose we choose n numbers at random and apply insertion sort. How long does it take to determine where to insert element A[j] in subarray A[1 … j − 1]? On average, half the elements in A[1 … j − 1] are greater than A[j] and half are less, so we check about half of the subarray, and tj is about j/2. The resulting average-case running time turns out to be a quadratic function of the input size, just like the worst-case running time; the small experiment after this list illustrates the point.
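This back-of-the-envelope argument can be checked empirically. The sketch below is our illustration (the size and trial counts are arbitrary): it averages the total number of while-loop tests over random permutations and compares the result with n²/4, the total implied by tj ≈ j/2.

```python
import random

def total_while_tests(A):
    """Total while-loop tests; restated here so this snippet runs alone."""
    A, total = list(A), 0
    for j in range(2, len(A) + 1):
        key, i = A[j - 1], j - 1
        total += 1
        while i > 0 and A[i - 1] > key:
            A[i] = A[i - 1]
            i -= 1
            total += 1
        A[i] = key
    return total

n, trials = 200, 100
avg = sum(total_while_tests(random.sample(range(10 * n), n))
          for _ in range(trials)) / trials
print(round(avg), n * n // 4)   # the average lands close to n^2/4
```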
The scope of average-case analysis is limited, because it may not be apparent what constitutes an "average" input for a particular problem. Often we assume that all inputs of a given size are equally likely. In practice this assumption may be violated, but we can sometimes employ a randomized algorithm, which makes random choices, to permit a probabilistic analysis that yields an expected running time.
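A standard way to obtain such an algorithm, sketched below as our own illustration (the pattern is the usual randomize-the-input trick; the names are ours), is to impose a random order on the input, so that the expectation is taken over the algorithm's own coin flips rather than over an assumed distribution of inputs.

```python
import random

def randomize_then_run(A, algorithm):
    """Permute the input uniformly at random, then run the given routine.
    The running time now depends on the permutation the algorithm chose,
    not on which ordering of the data the caller happened to supply."""
    A = list(A)
    random.shuffle(A)          # every permutation is equally likely
    return algorithm(A)

# Usage: randomize_then_run(data, sorted) lets the analysis speak of an
# expected running time without assumptions about the input's ordering.
```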


REFERENCES
1. Abramowitz, M., & Stegun, I. A., (1965). Handbook of mathematical
functions with formulas, graphs, and mathematical table (Vol. 2172,
pp. 101-155). New York: Dover.
2. Acar, U. A., (2009, January). Self-adjusting computation:(an
overview). In Proceedings of the 2009 ACM SIGPLAN workshop on
Partial evaluation and program manipulation (pp. 1–6). ACM.
3. Aggarwal, A., & Vitter, J., (1988). The input/output complexity of
sorting and related problems. Communications of the ACM, 31(9),
1116–1127.
4. Agrawal, K., He, Y., & Leiserson, C. E., (2007, March). Adaptive
work stealing with parallelism feedback. In Proceedings of the 12th
ACM SIGPLAN symposium on Principles and practice of parallel
programming (pp. 112–120). ACM.
5. Agrawal, K., He, Y., Hsu, W. J., & Leiserson, C. E., (2006, March).
Adaptive scheduling with parallelism feedback. In Proceedings of the
eleventh ACM SIGPLAN symposium on Principles and practice of
parallel programming (pp. 100–109). ACM.
6. Agrawal, M., Kayal, N., & Saxena, N., (2004). PRIMES is in P. Annals
of mathematics, 781–793.
7. Aho, A. V., & Hopcroft, J. E. (1974). The design and analysis of
computer algorithms (Vol. 1, pp. 44-53). Pearson Education India.
8. Aho, A. V., Hopcroft, J. E., & Ullman, J. D., (1983). Data structures
and algorithms. Addison-Wesley.
9. Ahuja, R. K., & Orlin, J. B., (1989). A fast and simple algorithm for the
maximum flow problem. Operations Research, 37(5), 748–759.
10. Ahuja, R. K., Magnanti, T. L., & Orlin, J. B., (1993). Network flows:
theory, algorithms, and applications (Vol. 1, p. 846). Englewood Cliffs,
NJ: Prentice Hall.
11. Ahuja, R. K., Mehlhorn, K., Orlin, J., & Tarjan, R. E., (1990). Faster
algorithms for the shortest path problem. Journal of the ACM (JACM),
37(2), 213–223.
12. Ahuja, R. K., Orlin, J. B., & Tarjan, R. E., (1989). Improved time
bounds for the maximum flow problem. SIAM Journal on Computing,
18(5), 939–954.
13. Ajtai, M., Megiddo, N., & Waarts, O., (2001). Improved algorithms
and analysis for secretary problems and generalizations. SIAM Journal
on Discrete Mathematics, 14(1), 1–27.
14. Akl, S. G., (1989). The design and analysis of parallel algorithms
(Vol. 1, pp. 35-55). Prentice Hall.
15. Akra, M., & Bazzi, L., (1998). On the solution of linear recurrence
equations. Computational Optimization and Applications, 10(2), 195–
210.
16. Alon, N., (1990). Generating pseudo-random permutations and
maximum flow algorithms. Information Processing Letters, 35(4),
201–204.
17. Amir, A., Aumann, Y., Benson, G., Levy, A., Lipsky, O., Porat, E.,
& Vishne, U., (2006, January). Pattern matching with address errors:
rearrangement distances. In Proceedings of the Seventeenth Annual
ACM-SIAM Symposium on a Discrete Algorithm (pp. 1221–1229).
Society for Industrial and Applied Mathematics.
18. Amtoft, T., Consel, C., Danvy, O., & Malmkjær, K., (2002). The
abstraction and instantiation of string-matching programs. The Essence
of Computation (pp. 332–357). Springer, Berlin, Heidelberg.
19. Andersson, A., (1995, October). Sublogarithmic searching without
multiplications. In Foundations of Computer Science, 1995
Proceedings, 36th Annual Symposium on (pp. 655–663). IEEE.
20. Andersson, A., (1996, October). Faster deterministic sorting and
searching in linear space. In Foundations of Computer Science, 1996
Proceedings, 37th Annual Symposium on (pp. 135–141). IEEE.
21. Andersson, A. A., & Thorup, M., (2000, May). Tight (er) worst-case
bounds on dynamic searching and priority queues. In Proceedings of
the thirty-second annual ACM symposium on Theory of computing
(pp. 335–342). ACM.
22. Arge, L. (2001, August). External memory data structures. In European
Symposium on Algorithms (pp. 1–29). Springer, Berlin, Heidelberg.
23. Arora, N. S., Blumofe, R. D., & Plaxton, C. G., (2001). Thread scheduling
for multiprogrammed multiprocessors. Theory of Computing Systems,
34(2), 115–144.
24. Arora, S., (1994). Probabilistic checking of proofs and hardness of
approximation problems (Doctoral dissertation, Princeton University,
Department of Computer Science).
25. Arora, S., (1998, May). The approximability of NP-hard problems.
In Proceedings of the thirtieth annual ACM symposium on Theory of
computing (pp. 337–348). ACM.
26. Arora, S., & Lund, C., (1996, August). The hardness of approximations.
In Approximation algorithms for NP-hard problems (pp. 399–446).
PWS Publishing Co.
27. Arora, S., Lund, C., Motwani, R., Sudan, M., & Szegedy, M., (1998).
Proof verification and the hardness of approximation problems. Journal
of the ACM (JACM), 45(3), 501–555.
28. Aslam, F., Fennell, L., Schindelhauer, C., Thiemann, P., Ernst, G.,
Haussmann, E., & Uzmi, Z. A. (2010, June). Optimized java binary
and virtual machine for tiny motes. In International Conference on
Distributed Computing in Sensor Systems (Vol. 6131, pp. 15-30).
Springer, Berlin, Heidelberg.
29. Ausiello, G., Crescenzi, P., Gambosi, G., Kann, V., Marchetti-
Spaccamela, A., & Protasi, M., (2012). Complexity and approximation:
Combinatorial optimization problems and their approximability
properties (Vol. 1, pp. 57-78). Springer Science & Business Media.
30. Avidan, S., & Shamir, A. (2007, August). Seam carving for content-
aware image resizing. In ACM Transactions on Graphics (TOG) (Vol.
26, No. 3, p. 10). ACM.
31. Babaioff, M., Immorlica, N., Kempe, D., & Kleinberg, R., (2007).
A knapsack secretary problem with applications. In Approximation,
randomization, and combinatorial optimization. Algorithms and
techniques (pp. 16–28). Springer, Berlin, Heidelberg.
32. Bach, E., (1990). Number-theoretic algorithms. Annual Review of
Computer Science, 4(1), 119–172.
33. Bach, E., & Shallit, J. O., (1996). Algorithmic Number Theory:
Efficient Algorithms (Vol. 1, pp. 35-42). MIT press.
34. Bae, S. E., & Takaoka, T., (2005, August). Improved algorithms
for the K-maximum subarray problem for small K. In International
Computing and Combinatorics Conference (pp. 621–631). Springer,
Berlin, Heidelberg.
35. Bailey, D. H., Lee, K., & Simon, H. D., (1991). Using Strassen’s
algorithm to accelerate the solution of linear systems. The Journal of
Supercomputing, 4(4), 357–371.
36. Bakker, M., Riezebos, J., & Teunter, R. H., (2012). Review of inventory
systems with deterioration since 2001. European Journal of Operational
Research, 221(2), 275–284.
37. Baswana, S., Hariharan, R., & Sen, S., (2002, May). Improved
decremental algorithms for maintaining transitive closure and all-
pairs shortest paths. In Proceedings of the thirty-fourth annual ACM
symposium on Theory of computing (pp. 117–123). ACM.
38. Bateni, M., Hajiaghayi, M., & Zadimoghaddam, M., (2010).
Submodular secretary problem and extensions. In Approximation,
Randomization, and Combinatorial Optimization. Algorithms and
Techniques (pp. 39–52). Springer, Berlin, Heidelberg.
39. Bayer, R., (1972). Symmetric binary B-trees: Data structure and
maintenance algorithms. Acta Informatica, 1(4), 290–306.
40. Bazzi, A., Fiorini, S., Huang, S., & Svensson, O., (2017, January).
Small extended formulation for knapsack cover inequalities from
monotone circuits. In Proceedings of the Twenty-Eighth Annual ACM-
SIAM Symposium on Discrete Algorithms (Vol. 1, pp. 2326–2341).
Society for Industrial and Applied Mathematics.
41. Beauchemin, P., Brassard, G., Crépeau, C., Goutier, C., & Pomerance,
C., (1988). The generation of random numbers that are probably prime.
Journal of Cryptology, 1(1), 53–64.
42. Bellare, M., & Sudan, M., (1994, May). Improved non-approximability
results. In Proceedings of the twenty-sixth annual ACM symposium on
Theory of computing (pp. 184–193). ACM.
43. Bender, M. A., Demaine, E. D., & Farach-Colton, M., (2000). Cache-
oblivious B-trees. In Foundations of Computer Science, 2000.
Proceedings. 41st Annual Symposium on (pp. 399–409). IEEE.
44. Bengtsson, F., (2006, August). Computing maximum-scoring segments
in almost linear time. In International Computing and Combinatorics
Conference (pp. 255–264). Springer, Berlin, Heidelberg.
45. Ben-Or, M., (1983, December). Lower bounds for algebraic computation
trees. In Proceedings of the fifteenth annual ACM symposium on Theory
of computing (pp. 80–86). ACM.
46. Bent, S. W., & John, J. W., (1985, December). Finding the median
requires 2n comparisons. In Proceedings of the Seventeenth Annual
ACM Symposium on Theory of Computing. (pp. 213–216). ACM.
47. Bentley, J. L., Haken, D., & Saxe, J. B., (1980). A general method for
solving divide-and-conquer recurrences. ACM SIGACT News, 12(3),
36–44.

48. Berger, J., & Barkaoui, M., (2004). A parallel hybrid genetic algorithm
for the vehicle routing problem with time windows. Computers &
operations research, 31(12), 2037–2053.
49. Berry, M. V., & Howls, C. J., (2012). Integrals with coalescing saddles.
Chapter 36, 775–793.
50. Bienstock, D. (2008). Approximate formulations for 0–1 knapsack
sets. Operations Research Letters, 36(3), 317–320.
51. Bienstock, D., & McClosky, B., (2012). Tightening simple mixed-
integer sets with guaranteed bounds. Mathematical Programming,
133(1), 337–363.
52. Bishop, C. M., (2006). Pattern Recognition and Machine Learning
(Information Science and Statistics). New York, New York: Springer
Science and Business Media.
53. Blasgen, M. W., Casey, R. G., & Eswaran, K. P., (1977). An encoding
method for multifield sorting and indexing. Communications of the
ACM, 20(11), 874–878.
54. Blelloch, G. E. (1989). Scan primitives and parallel vector
models (Doctoral dissertation, Massachusetts Institute of Technology.
Laboratory for Computer Science), Vol. 1, pp. 25-37.
55. Blelloch, G. E. (1990). Vector models for data-parallel computing
(Vol. 75, pp. 15-26). Cambridge: MIT Press.
56. Blelloch, G. E., (1996). Programming parallel algorithms.
Communications of the ACM, 39(3), 85–97.
57. Blelloch, G. E., & Gibbons, P. B., (2004, June). Effectively sharing
a cache among threads. In Proceedings of the Sixteenth Annual ACM
Symposium on Parallelism in Algorithms and Architectures (pp. 235–
244). ACM.
58. Blelloch, G. E., & Greiner, J., (1996, June). A provable time and space
efficient implementation of NESL. In ACM SIGPLAN Notices (Vol.
31, No. 6, pp. 213–225). ACM.
59. Blelloch, G. E., & Maggs, B. M., (1996). Parallel Algorithms. ACM
Computing Surveys (CSUR), 28(1), 51–54.
60. Blelloch, G. E., & Maggs, B. M., (2010, January). Parallel Algorithms.
In Algorithms and theory of computation handbook (pp. 25–25).
Chapman & Hall/CRC.
61. Blelloch, G. E., Gibbons, P. B., & Matias, Y., (1995, July). Provably
efficient scheduling for languages with fine-grained parallelism. In
Proceedings of the Seventh Annual ACM Symposium on Parallel
Algorithms and Architectures (pp. 1–12). ACM.
62. Blelloch, G. E., Gibbons, P. B., & Matias, Y., (1999). Provably efficient
scheduling for languages with fine-grained parallelism. Journal of the
ACM (JACM), 46(2), 281–321.
63. Blelloch, G. E., Gibbons, P. B., Matias, Y., & Narlikar, G. J., (1997,
June). Space-efficient scheduling of parallelism with synchronization
variables. In Proceedings of the Ninth Annual ACM Symposium on
Parallel Algorithms and Architectures (pp. 12–23). ACM.
64. Blelloch, G. E., Hardwick, J. C., Sipelstein, J., Zagha, M., & Chatterjee,
S., (1994). Implementation of a portable nested data-parallel language.
Journal of Parallel and Distributed Computing, 21(1), 4–14.
65. Block, H. D., (1962). The perceptron: A model for brain functioning.
I. Reviews of Modern Physics, 34(1), 123–135.
66. Block, H. D., Knight Jr, B. W., & Rosenblatt, F., (1962). Analysis of a
four-layer series-coupled perceptron. II. Reviews of Modern Physics,
34(1), 135.
67. Bloom, N., & Van Reenen, J., (2002). Patents, real options, and firm
performance. The Economic Journal, 112(478).
68. Blum, M., Floyd, R. W., Pratt, V. R., Rivest, R. L., & Tarjan, R. E.,
(1973). Two papers on the selection problem: Time Bounds for Selection
[by Manual Blum, Robert W. Floyd, Vaughan Pratt, Ronald L. Rivest,
and Robert E. Tarjan] and Expected Time Bounds for Selection [by
Robert W. Floyd and Ronald L. Rivest].
69. Blumofe, R. D., & Leiserson, C. E., (1999). Scheduling multithreaded
computations by work stealing. Journal of the ACM (JACM), 46(5),
720–748.
70. Bollerslev, T., Engle, R. F., & Nelson, D. B., (1994). ARCH models.
Handbook of Econometrics, 4, 2959–3038.
71. Brassard, G., & Bratley, P., (1996). Fundamentals of Algorithmics
(Vol. 33, pp. 342-356). Englewood Cliffs: Prentice Hall.
72. Brent, R. P., (1974). The parallel evaluation of general arithmetic
expressions. Journal of the ACM (JACM), 21(2), 201–206.
73. Brent, R. P. (1980). An improved Monte Carlo factorization algorithm.
BIT Numerical Mathematics, 20(2), 176–184.
74. Brereton, R. G., (2015). Pattern recognition in chemometrics.
Chemometrics and Intelligent Laboratory Systems, 149, 90–96.
75. Brodal, G. S., & Jørgensen, A. G., (2007, August). A linear time
algorithm for the k maximal sums problem. In International Symposium
on Mathematical Foundations of Computer Science (pp. 442–453).
Springer, Berlin, Heidelberg.
76. Brodnik, A., Miltersen, P. B., & Munro, J. I., (1997, August). Trans-
dichotomous algorithms without multiplication—some upper and
lower bounds. In Workshop on Algorithms and Data Structures (pp.
426–439). Springer, Berlin, Heidelberg.
77. Buchbinder, N., Jain, K., & Singh, M., (2010, June). Secretary
problems via linear programming. In International Conference on
Integer Programming and Combinatorial Optimization (pp. 163–176).
Springer, Berlin, Heidelberg.
78. Buhler, J. P., Lenstra, H. W., & Pomerance, C., (1993). Factoring
integers with the number field sieve. The Development of the Number
Field Sieve (pp. 50–94). Springer, Berlin, Heidelberg.
79. Burke, E. K., Hyde, M. R., & Kendall, G. (2006). Evolving bin packing
heuristics with genetic programming. In Parallel Problem Solving from
Nature-PPSN IX (pp. 860–869). Springer, Berlin, Heidelberg.
80. Campbell, D. T., (1976). On the conflicts between biological and social
evolution and between psychology and moral tradition. Zygon®, 11(3),
167–208.
81. Carling, A., (1992). Introducing Neural Networks. Wilmslow, UK:
Sigma Press.
82. Caudill, M., & Butler, C., (1993). Understanding Neural Networks:
Computer Explorations (No. 006.3 C3).
83. Chali, Y. S. R., (2009). Complex Question Answering: Unsupervised
Learning Approaches and Experiments. Journal of Artificial Intelligent
Research, 1–47.
84. Chan, T. H., (2000). HOMFLY polynomials of some generalized Hopf
links. Journal of Knot Theory and Its Ramifications, 9(07), 865–883.
85. Chen, L. T., & Davis, L. S., (1990, April). A parallel algorithm for
list ranking image curves in O (log N) time. In DARPA Image
Understanding Workshop (pp. 805–815).
86. Chen, L. T., & Davis, L. S., (1991). Parallel algorithms for testing if a
point is inside a closed curve. Pattern recognition letters, 12(2), 73–77.
87. Chen, Y. S., Qin, Y. S., Xiang, Y. G., Zhong, J. X., & Jiao, X. L.,
(2011). Intrusion detection system based on immune algorithm and
support vector machine in a wireless sensor network. In Information
and Automation (pp. 372–376). Springer, Berlin, Heidelberg.
88. Cheriyan, J., & Hagerup, T., (1989, October). A randomized maximum-
flow algorithm. In Foundations of Computer Science, 1989., 30th
Annual Symposium on (pp. 118–123). IEEE.
89. Cheriyan, J., & Hagerup, T., (1995). A randomized maximum-flow
algorithm. SIAM Journal on Computing, 24(2), 203–226.
90. Cheriyan, J., Hagerup, T., & Mehlhorn, K., (1990, July). Can a maximum
flow be computed in o (nm) time?. In International Colloquium on
Automata, Languages, and Programming (pp. 235–248). Springer,
Berlin, Heidelberg.
91. Cheriyan, J., Hagerup, T., & Mehlhorn, K., (1996). An o(n^3)-Time
Maximum-Flow Algorithm. SIAM Journal on Computing, 25(6),
1144–1170.
92. Ciurea, E., & Ciupala, L., (2001). Algorithms for minimum flows.
Computer Science Journal of Moldova, 9(3), 27.
93. Conforti, M., Wolsey, L. A., & Zambelli, G., (2010). Projecting an
extended formulation for mixed-integer covers on bipartite graphs.
Mathematics of operations research, 35(3), 603–623.
94. Courcoubetis, C., Vardi, M., Wolper, P., & Yannakakis, M., (1992).
Memory-efficient algorithms for the verification of temporal properties.
Formal methods in system design, 1(2–3), 275–288.
95. Michie, D. (1968). “Memo” functions and machine
learning. Nature, 218(5136), 19.
96. Damgård, I., Landrock, P., & Pomerance, C., (1993). Average case
error estimates for the strong probable prime test. Mathematics of
Computation, 61(203), 177–194.
97. D’Aristotile, A., Diaconis, P., & Newman, C. M., (2003). Brownian
motion and the classical groups. Lecture Notes-Monograph Series,
97–116.
98. Dehling, H., & Philipp, W., (2002). Empirical process techniques for
dependent data. In Empirical process techniques for dependent data
(pp. 3–113). Birkhäuser, Boston, MA.
99. Dempster, A. P., Laird, N. M., & Rubin, D. B., (1977). Maximum
likelihood from incomplete data via the EM algorithm. Journal of the
royal statistical society. Series B (methodological), 1–38.

100. Dengiz, B., Altiparmak, F., & Smith, A. E., (1997). Efficient
optimization of all-terminal reliable networks, using an evolutionary
approach. IEEE Transactions on Reliability, 46(1), 18–26.
101. Dickerson, M., Eppstein, D., Goodrich, M. T., & Meng, J. Y., (2003,
September). Confluent drawings: visualizing non-planar diagrams in a
planar way. In International Symposium on Graph Drawing (pp. 1–12).
Springer, Berlin, Heidelberg.
102. Didier, F., (2009). Efficient erasure decoding of Reed-Solomon codes.
arXiv preprint arXiv:0901.1886.
103. Dockens, W. S., (1979). Induction/catastrophe theory: A behavioral
ecological approach to cognition in human individuals. Systems
Research and Behavioral Science, 24(2), 94–111.
104. Doya, K. (Ed.)., (2007). Bayesian brain: Probabilistic approaches to
neural coding. MIT press.
105. Drmota, M., & Szpankowski, W., (2013). A master theorem for discrete
divide and conquer recurrences. Journal of the ACM (JACM), 60(3),
16.
106. Du, W., & Atallah, M. J., (2001). Protocols for secure remote database
access with approximate matching. In E-Commerce Security and
Privacy (pp. 87–111). Springer, Boston, MA.
107. Duda, R. O., Hart, P. E., & Stork, D. G. (1973). Fisher's Linear
Discriminant. In Pattern Classification and Scene Analysis (Vol. 2,
pp. 114-119). New York: Wiley.
108. Durbin, R., & Rumelhart, D. E., (1989). Product units: A computationally
powerful and biologically plausible extension to backpropagation
networks. Neural Computation, 1(1), 133–142.
109. Dutra da Silva, R., Robson, W., & Pedrini Schwartz, H., (2011). Image
segmentation based on wavelet feature descriptor and dimensionality
reduction applied to remote sensing. Chilean J. Stat, 2(2), 51-60.
110. Dutton, J. M., & Starbuck, W. H., (1971). Computer simulation models
of human behavior: A history of an intellectual technology. IEEE
Transactions on Systems, Man, and Cybernetics, 2, 128–171.
111. Elliott S. W. & Anderson, J. R., (1995). Learning and Memory. Wiley,
New York, USA.
112. Eppstein, D., Goodrich, M. T., & Sun, J. Z., (2005, June). The skip
quadtree: a simple dynamic data structure for multidimensional data.
In Proceedings of the twenty-first annual symposium on Computational
Geometry (pp. 296–305). ACM.
113. Eppstein, D., Goodrich, M. T., & Sun, J. Z., (2008). Skip quadtrees:
Dynamic data structures for multidimensional point sets. International
Journal of Computational Geometry & Applications, 18(01n02), 131–
160.
114. Faenza, Y., & Sanità, L., (2015). On the existence of compact
ε-approximated formulations for knapsack in the original space.
Operations Research Letters, 43(3), 339–342.
115. Farhangfar, A., Kurgan, L., & Dy, J., (2008). Impact of the imputation
of missing values on classification error for discrete data. Pattern
Recognition, 41(12), 3692–3705.
116. Fausett, L., (1994). Fundamentals of Neural Networks. New York:
Prentice Hall.
117. Festa, P., & Resende, M. G., (2002). GRASP: An annotated
bibliography. In Essays and surveys in metaheuristics (pp. 325–367).
Springer, Boston, MA.
118. Fisher, D. H., (1987). Knowledge acquisition via incremental
conceptual clustering. Machine learning, 2(2), 139–172.
119. Forsyth, M. E., Hochberg, M., Cook, G., Renals, S., Robinson, T.,
Schechtman, R., & Doherty-Sneddon, G., (1995). Semi-continuous
hidden Markov models for speaker verification. In Proc. ARPA Spoken
Language Technology Workshop (Vol. 1, pp. 2171–2174). Universiteit
Twente, Enschede.
120. Forsyth, R. S., (1990). The strange story of the Perceptron. Artificial
Intelligence Review, 4 (2), 147–155.
121. Fourier, J. B. J., (1890). [From 1824], republished as Second extrait
in Oeuvres de Fourier, Tome II (G. Darboux, ed., pp. 38–42). Gauthier-
Villars, Paris.
122. Fourier, J. B. J., (1973). [From 1824], republished as the Second
extract in Oeuvres de Fourier, Tome II (G. Darboux, ed.). Gauthier-
Villars, Paris, 1890; see D. A. Kohler, Translation of a report by Fourier
on his work on linear inequalities.
123. Franc, V., & Hlaváč, V., (2005, August). Simple solvers for large
quadratic programming tasks. In Joint Pattern Recognition Symposium
(pp. 75–84). Springer, Berlin, Heidelberg.
124. Fredman, M. L., & Willard, D. E., (1993). Surpassing the information
theoretic bound with fusion trees. Journal of computer and system
sciences, 47(3), 424–436.
125. Freund, Y., & Schapire, R. E., (1999). Large margin classification using
the perceptron algorithm. Machine learning, 37(3), 277–296.
126. Friedberg, R. M., (1958). A learning machine: Part I. IBM Journal of
Research and Development, 2(1), 2–13.
127. Friedl, K., & Sudan, M., (1995, January). Some improvements to total
degree tests. In Theory of Computing and Systems, 1995. Proceedings.,
Third Israel Symposium on the (pp. 190–198). IEEE.
128. Frigo, M., Leiserson, C. E., & Randall, K. H., (1998). The
implementation of the Cilk-5 multithreaded language. ACM Sigplan
Notices, 33(5), 212–223.
129. Fu, S. S., & Lee, M. K., (2005). IT-Based Knowledge Sharing
and Organizational Trust: The Development and Initial Test of a
Comprehensive Model. ECIS 2005 Proceedings, 56.
130. Fukunaga, A. S., (2008). Automated discovery of local search heuristics
for satisfiability testing. Evolutionary Computation, 16(1), 31–61.
131. Fussenegger, F., & Gabow, H. N., (1979). A counting approach to
lower bounds for selection problems. Journal of the ACM (JACM),
26(2), 227–238.
132. Gennari, J. H., Langley, P., & Fisher, D., (1988). Models of incremental
concept formation (No. UCI-ICS-TR-88–16). California University
Irvine Department of Information and Computer Science.
133. Getoor, L., & Taskar, B. (Eds.)., (2007). Introduction to statistical
relational learning. MIT Press.
134. Ghahramani, Z., (2008). Unsupervised learning algorithms are designed
to extract structure from data. 178, pp. 1–8. IOS Press.
135. Ghahramani, Z., & Jordan, M. I., (1997). Factorial hidden Markov
models. Machine Learning, 29, 245-273.
136. Gillies, D., (1994). A rapprochement between deductive and inductive
logic. Logic Journal of the IGPL, 2(2), 149-166.
137. Glasser, K. S., & Austin Barron, R. H., (1983). The d choice secretary
problem. Sequential Analysis, 2(3), 177–199.
138. Gong, Y., & Haton, J. P., (1992). Non-linear vectorial interpolation
for speaker recognition, in Proceedings of IEEE Int. Conf. Acoust.,
Speech, and Signal Processing, vol. 2,(San Francisco, California,
USA), pp. II173–II176, March 1992.

139. González, L., Angulo, C., Velasco, F., & Catala, A., (2006). Dual
unification of bi-class support vector machine formulations. Pattern
recognition, 39(7), 1325–1332.
140. Goodacre, R., Broadhurst, D., Smilde, A. K., Kristal, B. S., Baker,
J. D., Beger, R., & Ebbels, T., (2007). Proposed minimum reporting
standards for data analysis in metabolomics. Metabolomics, 3(3), 231–
241.
141. Goodrich, M. T., Atallah, M. J., & Tamassia, R., (2005, June). Indexing
information for data forensics. In International Conference on Applied
Cryptography and Network Security (pp. 206–221). Springer, Berlin,
Heidelberg.
142. Graefe, G., (2011). Modern B-tree techniques. Foundations and
Trends® in Databases, 3(4), 203–402.
143. Gregan‐Paxton, J., Hoeffler, S., & Zhao, M., (2005). When categorization
is ambiguous: Factors that facilitate the use of a multiple category
inference strategy. Journal of Consumer Psychology, 15(2), 127–140.
144. Gross, G. N., Lømo, T., & Sveen, O., (1969). Participation of inhibitory
and excitatory interneurons in the control of hippocampal-cortical
output, Per Anderson, The Interneuron.
145. Grzymala-Busse, J. W., & Hu, M., (2000, October). A comparison
of several approaches to missing attribute values in data mining.
In International Conference on Rough Sets and Current Trends in
Computing (pp. 378–385). Springer, Berlin, Heidelberg.
146. Grzymala-Busse, J. W., Goodwin, L. K., Grzymala-Busse, W. J., &
Zheng, X., (2005, August). Handling missing attribute values in preterm
birth data sets. In International Workshop on Rough Sets, Fuzzy Sets,
Data Mining, and Granular-Soft Computing (pp. 342–351). Springer,
Berlin, Heidelberg.
147. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., &
Witten, I. H., (2009). The WEKA data mining software: An update. ACM
SIGKDD Explorations Newsletter, 11(1), 10–18.
148. Harth, A., Umbrich, J., Hogan, A., & Decker, S., (2007). YARS2: A
federated repository for querying graph structured data from the web.
In The Semantic Web (pp. 211–224). Springer, Berlin, Heidelberg.
149. Herr, D. G., (1980). On the history of the use of geometry in the general
linear model. The American Statistician, 34(1), 43–47.
150. Hsu, T. S., Ramachandran, V., & Dean, N., (1992). Implementation of
Parallel Graph Algorithms on the MasPar. In Computational Support
for Discrete Mathematics (pp. 165–198).
151. Hsu, T. S., Ramachandran, V., & Dean, N., (1993). Implementation
of parallel graph algorithms on a massively parallel SIMD computer
with virtual processing. University of Texas at Austin, Department of
Computer Sciences.
152. Hung, Y., (1988). Processing geometric representations on SIMD
computers. Maryland Univ., College Park, MD (USA).
153. Icking, C., Klein, R., & Ottmann, T., (1987, June). Priority search
trees in secondary memory. In International Workshop on Graph-
Theoretic Concepts in Computer Science (pp. 84–93). Springer, Berlin,
Heidelberg.
154. Igarashi, Y., Sado, K., & Saga, K., (1987). Fast parallel sorts on a
practical sized mesh-connected processor array. IEICE Transactions
(1976–1990), 70(1), 56–64.
155. Immorlica, N., Kleinberg, R., & Mahdian, M., (2006, December).
Secretary problems with competing employers. In International
Workshop on Internet and Network Economics (pp. 389–400).
Springer, Berlin, Heidelberg.
156. John, J. W., (1988). A new lower bound for the set partitioning problem.
SIAM Journal on Computing, 17(4), 640–647.
157. Kim, S. H., & Pomerance, C., (1989). The probability that a random
probable prime is composite. Mathematics of Computation, 53(188),
721–741.
158. King, D. J., (1995). Functional binomial queues. In Functional
Programming, Glasgow 1994 (pp. 141–150). Springer, London.
159. Kirkpatrick, D. G., (1981). A unified lower bound for selection and set
partitioning problems. Journal of the ACM (JACM), 28(1), 150–165.
160. Kleinberg, R., (2005, January). A multiple-choice secretary algorithm
with applications to online auctions. In Proceedings of the sixteenth
annual ACM-SIAM symposium on Discrete Algorithms (pp. 630–
631). Society for Industrial and Applied Mathematics.
161. Kwong, Y. S., & Wood, D., (1982). A new method for concurrency in
B-trees. IEEE Transactions on Software Engineering, 3, 211–222.
162. Leighton, T., (1996). Notes on better master theorems for divide-
and-conquer recurrences. Manuscript. Massachusetts Institute of
Technology.

163. Lin, T. C., & Lee, D. T., (2007). Randomized algorithm for the sum
selection problem. Theoretical Computer Science, 377(1–3), 151–156.
164. Little, J. J., Blelloch, G. E., & Cass, T. A., (1989). Algorithmic
techniques for computer vision on a fine-grained parallel machine.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
11(3), 244–257.
165. Martínez, C., Panario, D., & Viola, A., (2004, January). Adaptive
sampling for quickselect. In Proceedings of the fifteenth annual ACM-
SIAM symposium on Discrete Algorithms (pp. 447–455). Society for
Industrial and Applied Mathematics.
166. Maurer, S. B., (1985). The Lessons of Williamstown. In New Directions
in Two-Year College Mathematics (pp. 255–270). Springer, New York,
NY.
167. Meijer, H., & Akl, S. G., (1987). Optimal computation of prefix sums
on a binary tree of processors. International Journal of Parallel
Programming, 16(2), 127–136.
168. Meijer, H., & Akl, S. G., (1988). Bit-serial addition trees and their
applications. Computing, 40(1), 9–17.
169. Monier, L., (1980). Evaluation and comparison of two efficient
probabilistic primality testing algorithms. Theoretical Computer
Science, 12(1), 97–108.
170. Morain, F., (1988). Implementation of the Atkin-Goldwasser-Kilian
primality testing algorithm (Doctoral dissertation, INRIA).
171. Munro, J. I., & Poblete, P. V., (1982). A Lower Bound for Determining
the Median. Faculty of Mathematics, University of Waterloo.
172. Narayanan, P. J., & Davis, L. S., (1992). Replicated image algorithms
and their analyses on SIMD machines. International Journal of Pattern
Recognition and Artificial Intelligence, 6(02n03), 335–352.
173. Nelson, D. B., & Foster, D. P. (1995). Filtering and forecasting with
misspecified ARCH models II: making the right forecast with the
wrong model. Journal of Econometrics, 67(2), 303-335.
174. Nievergelt, J., (1974). Binary search trees and file organization. ACM
Computing Surveys (CSUR), 6(3), 195–207.
175. Park, T. G., & Oldfield, J. V., (1993). Minimum spanning tree generation
with content-addressable memory. Electronics Letters, 29(11), 1037–
1039.
176. Phillips, S., & Westbrook, J., (1993, June). Online load balancing
and network flow. In Proceedings of the twenty-fifth annual ACM
symposium on Theory of computing (pp. 402–411). ACM.
177. Polishchuk, A., & Spielman, D. A., (1994, May). Nearly-linear size
holographic proofs. In Proceedings of the twenty-sixth annual ACM
symposium on Theory of computing (pp. 194–203). ACM.
178. Prechelt, L., (1993). Measurements of MasPar MP-1216A
communication operations. Univ., Fak. für Informatik.
179. Price, G. B., (1973). Telescoping Sums and the Summation of
Sequences. The Two-Year College Mathematics Journal, 4(2), 16–29.
180. Ramadge, P. J., & Wonham, W. M., (1989). The control of discrete
event systems. Proceedings of the IEEE, 77(1), 81–98.
181. Raman, R., (1996, September). Priority queues: Small, monotone, and
Trans-dichotomous. In European Symposium on Algorithms (pp. 121–
137). Springer, Berlin, Heidelberg.
182. Regli, W. C., (1992). A survey of automated feature recognition
techniques, National Science Foundation Engineering Research
Program, TR 92-18, 14-14.
183. Roura, S., (2001). Improved master theorems for divide-and-conquer
recurrences. Journal of the ACM (JACM), 48(2), 170–205.
184. Salisbury, M., Anderson, C., Lischinski, D., & Salesin, D. H., (1996,
August). Scale-dependent reproduction of pen-and-ink illustrations. In
Proceedings of the 23rd annual conference on Computer graphics and
interactive techniques (pp. 461–468). ACM.
185. Sardelis, D. A., & Valahas, T. M., (1999). Decision making: A golden
rule. The American Mathematical Monthly, 106(3), 215–226.
186. Schönhage, A., Paterson, M., & Pippenger, N., (1976). Finding the
median. Journal of Computer and System Sciences, 13(2), 184–199.
187. Selim, G., & Meijer, H., (1987). On the Bit Complexity of Parallel
Computations.’ In Parallel Processing and Applications: Proceedings of
the International Conference on Parallel Processing and Applications,
L’Aquila, Italy, 23–25 September 1987 (p. 101). North Holland.
188. Sheffler, T. J., (1993, August). Implementing the multiprefix operation
on parallel and vector computers. In Proceedings of the fifth annual
ACM symposium on Parallel algorithms and architectures (pp. 377–
386). ACM.
189. Shen, Z., & Marston, C. M., (1995). A study of a dice problem. Applied
Mathematics and Computation, 73(2–3), 231–247.

190. Sion, R., Atallah, M., & Prabhakar, S., (2002, April). Power: A metric
for evaluating watermarking algorithms. In Information Technology:
Coding and Computing, 2002. Proceedings. International Conference
on (pp. 95–99). IEEE.
191. Smith, R. S., (1986). Rolle over Lagrange—Another shot at the mean
value theorem. The College Mathematics Journal, 17(5), 403–406.
192. Snyder, L., (1984). Parallel Programming and the Poker Programming
Environment (No. TR-84–04–02). Washington Univ Seattle Dept of
Computer Science.
193. Stock, J. H., & Watson, M. W., (2001). Vector autoregressions. Journal
of Economic Perspectives, 15(4), 101–115.
194. Sudan, M., (1992). Efficient checking of polynomials and proofs and
the hardness of approximation problems. Lecture Notes in Computer
Science, 1001.
195. Szymanski, T. G., (1975). A special case of the maximal common
subsequence problem. Technical Report TR-170, Computer Science
Laboratory, Princeton University.
196. Szymanski, T. G., & Van Wyk, C. J., (1983, June). Space efficient
algorithms for VLSI artwork analysis. In Proceedings of the 20th
Design Automation Conference (pp. 734–739). IEEE Press.
197. Tan, Y. P., & Acharya, T., (1999, March). A robust sequential approach
for the detection of defective pixels in an image sensor. In Acoustics,
Speech, and Signal Processing, 1999. Proceedings., 1999 IEEE
International Conference on (Vol. 4, pp. 2239–2242). IEEE.
198. Thorup, M., (1997). Faster deterministic sorting and priority queues in
linear space (pp. 550–555). Max-Planck-Institut für Informatik.
199. Vanderbei, R. J., (1980). The optimal choice of a subset of a population.
Mathematics of Operations Research, 5(4), 481–486.
200. Verma, R. M., (1994). A general method and a master theorem
for divide-and-conquer recurrences with applications. Journal of
Algorithms, 16(1), 67–79.
201. Verma, R. M., (1997). General techniques for analyzing recursive
algorithms with applications. SIAM Journal on Computing, 26(2),
568–581.
202. Wallace, L., Keil, M., & Rai, A., (2004). Understanding software
project risk: a cluster analysis. Information & Management, 42(1),
115–125.

203. Wang, Y., (2008). Topology control for wireless sensor networks. In
Wireless sensor networks and applications (pp. 113–147). Springer,
Boston, MA.
204. Wilf, H. S., (1984). A Bijection in the Theory of Derangements.
Mathematics Magazine, 57(1), 37–40.
205. Williamson, J., (2002). Probability logic. In Studies in Logic and
Practical Reasoning (Vol. 1, pp. 397–424). Elsevier.
206. Wilson, J. G., (1991). Optimal choice and assignment of the best m of
n randomly arriving items. Stochastic processes and their applications,
39(2), 325–343.
207. Winograd, S., (1970). On the algebraic complexity of functions. In
Actes du Congres International des Mathématiciens (Vol. 3, pp. 283–
288).
208. Wunderlich, M. C., (1983). A performance analysis of a simple prime-
testing algorithm. Mathematics of computation, 40(162), 709–714.
209. Xiaodong, W., & Qingxiang, F., (1996). A frame for general divide-
and-conquer recurrences. Information Processing Letters, 59(1), 45–
51.
210. Yap, C., (2011, May). A real elementary approach to the master
recurrence and generalizations. In International Conference on Theory
and Applications of Models of Computation (pp. 14–26). Springer,
Berlin, Heidelberg.
211. Zhu, X., & Wilhelm, W. E., (2006). Scheduling and lot sizing with
the sequence-dependent setup: A literature review. IIE Transactions,
38(11), 987–1007.
