Chapter 1

FUNDAMENTALS OF ALGORITHMS
All rights reserved. May not be reproduced in any form without permission from the publisher, except fair uses permitted under U.S. or applicable copyright law.
CONTENTS
1.1. Introduction
1.2. Different Types of Problems Solved by Algorithms
1.3. Data Structures
1.4. Algorithms as a Technology
1.5. Algorithms and Other Technologies
1.6. Getting Started With Algorithms
1.7. Analyzing Algorithms
References
2019. Arcler Press.
EBSCO Publishing: eBook Collection (EBSCOhost) printed on 10/22/2025 10:26:13 PM UTC via BIVA 2013910; Flejoles, Rex Porbasas; Introduction To
Algorithms Account:eds.
1.1. INTRODUCTION
We define an algorithm as any well-defined computational procedure that
takes some value, or set of values, as input and produces some value, or set
of values, as output. An algorithm is thus a sequence of computational steps
that transforms the input into the output. Another way of seeing an
algorithm is as a tool that solves
a well-specified computational problem (Aggarwal & Vitter, 1988; Agrawal
et al., 2004). Generally, the statement of the problem postulates the desired
relationship between input and output; whereas, the algorithm defines a
particular computational process for achieving that relationship between
input and output. For instance, suppose that we want to sort a sequence of
numbers into increasing order. This problem arises frequently in practice
and provides fertile ground for introducing many standard design
techniques and analysis tools (Adel'son-Vel'skii & Landis, 1962;
Abramowitz & Stegun, 1965). We formally define the sorting problem as
follows (Figure 1.1):
Input: A sequence of n numbers (a1, a2, …, an).
Output: A permutation (reordering) (a1′, a2′, …, an′) of the input
sequence such that a1′ ≤ a2′ ≤ … ≤ an′.
As an example, given the input sequence (30, 35, 59, 26, 35, 57), a
sorting algorithm returns the output sequence (26, 30, 35, 35, 57, 59).
Such an input sequence is known as an instance of
the sorting problem. In general, an instance of a problem consists of the
input (satisfying whatever constraints are imposed in the problem
statement) needed to compute a solution to the problem (Aho & Hopcroft,
1974; Aho et al., 1983).
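The instance above can be checked directly. Here is a minimal sketch using Python's built-in sort, a library routine that serves only to show the input/output relationship, not one of the algorithms developed in this chapter:

```python
# The sample instance of the sorting problem from the text.
instance = [30, 35, 59, 26, 35, 57]

# Any correct sorting algorithm must produce this permutation.
result = sorted(instance)
print(result)  # [26, 30, 35, 35, 57, 59]
```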
Sorting is regarded as a fundamental operation in computer science, as
numerous programs use it as an intermediate step. As a result, we have a
large number of excellent sorting algorithms at our disposal (Ahuja
& Orlin, 1989; Ahuja et al., 1990; 1993). The choice of the best algorithm for
any given application depends on several factors, such as the number of
items to be sorted, the degree to which the items are already somewhat
sorted, the architecture of the computer, potential restrictions on the item
values, and the kind of storage devices to be used: main memory, disks, or
tapes.
An algorithm is considered correct if, for every input instance, it halts with
the correct output. We say that a correct algorithm solves the specified
computational problem. An incorrect algorithm, on the other hand, might
not halt at all on some input instances, or it might halt with a wrong answer
(Ahuja et al., 1989; Courcoubetis et al., 1992). Contrary to what one might
expect, incorrect algorithms can sometimes
prove beneficial, provided we can control their rate of error. In general,
however, we will be concerned only with correct algorithms (Szymanski
& Van Wyk, 1983; King, 1995; Didier, 2009).
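Since we restrict attention to correct algorithms, it is useful in practice to check a candidate implementation against a trusted reference on many instances (this gains confidence, though it is no substitute for a proof). The harness below is an illustrative sketch of ours; `candidate` is a stand-in for an algorithm under test, not something from the text:

```python
import random

def is_correct_on(sort_fn, instance):
    """True if sort_fn halts with the correct output for this instance."""
    return sort_fn(list(instance)) == sorted(instance)

# Exercise a candidate sort on many random instances.
random.seed(1)
candidate = lambda a: sorted(a)  # stand-in for an algorithm under test
trials = [[random.randint(0, 99) for _ in range(random.randint(0, 20))]
          for _ in range(100)]
print(all(is_correct_on(candidate, t) for t in trials))  # True
```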
iv. After executing the instructions, the user should obtain the
desired results.
Even though a few of the details of the above-mentioned examples are
outside the scope of this book, we will present the underlying techniques
that apply to these problems and problem areas. We will also see how to
solve numerous specific problems, including the ones stated below:
i. Suppose we are provided with a road map containing the distance
marked between each pair of adjacent intersections, and we
desire to find the shortest route possible from one intersection to
another. There can be a huge number of possible routes, even if
we prohibit routes crossing over themselves (Festa & Resende,
2002; Zhu & Wilhelm, 2006). Now, in this situation, how to
choose the shortest of all the possible routes? In such case, we
model the roadmap as a graph and then wish to find the shortest
possible route from one vertex to another in the graph.
ii. Assume that we are provided with two ordered sequences of
symbols, X = (x1, x2, …, xm) and Y = (y1, y2, …, yn), and we would
like to find the longest common subsequence of X and Y.
A subsequence of X consists of X with some (or possibly all
or none) of its elements removed. For instance, one subsequence of (A, B,
C, D, E, F, G, H, I) is (B, C, E, G). The length of the longest
common subsequence of X and Y gives a measure of how alike
these two sequences are. For instance, if the two sequences under
consideration are base pairs in strands of DNA, then we may
want to consider them alike if they have a long common
subsequence. If X contains m symbols and Y contains n symbols,
then X and Y have 2^m and 2^n possible subsequences,
respectively. Selecting all possible subsequences of X and Y
and then matching them up could take an excessively long time
unless both m and n are very small (Wallace et al.,
2004; Wang, 2008).
iii. We are provided with a mechanical design in terms of a library
of parts. Each part may include instances of other parts,
and we are required to list the parts in an order such that each part
appears before any part that uses it (Maurer, 1985; Smith,
1986). If the design comprises n parts, then there are n!
potential orders, where n! denotes the factorial function. Because
the factorial function grows even faster than an
exponential function, it is not feasible for us to generate
each possible order and then verify, within that
order, that each part appears before the parts using it. Such a
likely that you will spend a lot of time in a futile search. If, on the other
hand, you can prove that the problem is NP-complete, you can instead
spend your time developing an efficient algorithm that provides a good,
though not necessarily the best possible, solution (Leighton, 1996; Roura,
2001; Drmota & Szpankowski, 2013).
As a concrete example, consider a delivery firm with a central depot.
Each day, its delivery trucks are loaded at the depot and then sent out
to deliver goods to various addresses. At the end of the day, each truck
must return to the depot so that it can be made ready for loading
the next day (Bentley et al., 1980; Chan, 2000; Yap, 2011). In order to
reduce costs, the company wishes to choose a sequence of delivery
stops that yields the lowest overall distance traveled by each truck.
This problem is the well-known "traveling-salesman problem," and it is
NP-complete: no efficient algorithm for it is known. However,
under certain assumptions, we do know of efficient algorithms
that provide an overall distance not too far above the shortest possible
(Shen & Marston, 1995; Verma, 1997).
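Because the problem is NP-complete, practical systems settle for good-but-not-optimal tours. One classic heuristic of this kind is nearest neighbor, sketched below with made-up depot coordinates; this is a standard illustration of the idea of an approximate tour, not an algorithm developed in the text:

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited stop.
    Returns the tour as a list of indices, beginning and ending at `start`.
    (Ties are broken by lowest index, so the result is deterministic.)"""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(sorted(unvisited), key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)  # each truck must return to the depot
    return tour

# Hypothetical depot (index 0) and four delivery stops.
depot_and_stops = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 3)]
print(nearest_neighbor_tour(depot_and_stops))  # [0, 1, 2, 4, 3, 0]
```

The tour it produces is usually good but can be noticeably longer than the optimum; better guarantees require more sophisticated approximation algorithms.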
1.3.2. Parallelism
For several years, we could rely on processor clock speeds growing at a
steady rate. Physical limitations, however, pose an ultimate roadblock to
ever-increasing clock speeds: because power density rises superlinearly
with clock speed, chips run the hazard of melting once their clock speeds
get high enough (Meijer & Akl, 1987; 1988). Therefore, in order to perform
more computations per second, we design chips to contain several
processing cores. We can compare these multicore computers to several
sequential computers on a single chip; in other words, they are a type of
"parallel computer." To elicit the best possible performance from multicore
computers, we must design algorithms with parallelism in mind.
Algorithms known as "multithreaded" algorithms take advantage of
multiple cores. This type of model has several advantages from a
theoretical viewpoint, and it forms the basis of numerous successful
computer programs (Wilf, 1984; Selim & Meijer, 1987).
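The divide-across-cores structure of such algorithms can be sketched as below. This is an illustration of the program shape only: CPython threads do not run CPU-bound chunks in parallel (because of the global interpreter lock), so a real multicore speedup would use processes or a different runtime; the function and its parameters are ours, for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Sum `data` by splitting it into chunks, summing the chunks
    concurrently, and combining the partial results."""
    if not data:
        return 0
    chunk = (len(data) + workers - 1) // workers  # ceiling division
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, pieces))  # one partial sum per chunk

print(parallel_sum(list(range(1_000_000))))  # 499999500000
```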
yes, since we would still want to demonstrate that our solution terminates
and does so with the correct answer. If computers were infinitely fast, any
correct problem-solving method would do (Igarashi et al., 1987;
Park & Oldfield, 1993). You would most likely want your software
implementation to stay within the bounds of good software engineering
practice (for instance, your implementation should be well designed and
well documented), but you would most often use whichever method is
the easiest to implement (Glasser & Austin Barron, 1983; Immorlica et al.,
2006).
Obviously, computers can be fast, but they cannot be infinitely fast.
Likewise, memory may be inexpensive, but it is not entirely free.
Computing time is therefore a bounded resource, and so is space in
memory. We should use these resources wisely, and algorithms that are
efficient in terms of time or space help us do so (Wilson, 1991; Kleinberg,
2005; Buchbinder et al., 2010).
Different algorithms written to solve the same problem frequently differ
widely in their efficiency. These differences can be much more significant
than differences due to hardware and software (Vanderbei, 1980; Babaioff
et al., 2007; Bateni et al., 2010).
Take as an example two different sorting algorithms. The first is known
as insertion sort, and it takes time roughly equal to c1n^2 to sort n items;
here c1 is a constant that does not depend on n, which means that
insertion sort takes time roughly proportional to n^2. The second
algorithm is known as merge sort, and it takes time roughly equal to
c2n lg n, where lg n stands for log2 n and c2 is another constant that also
does not depend on n. Insertion sort normally has a smaller constant
factor than merge sort, so that c1 < c2. We shall see that the constant
factors can have far less of an impact on the running time than the
dependence on the input size n (Aslam et al., 2010; Du & Atallah, 2001;
Eppstein et al., 2005). If we write insertion sort's running time as c1n · n and
merge sort's running time as c2n · lg n, then we see that where insertion
sort has a factor of n in its running time, merge sort has a factor of lg n,
which is much smaller. Although insertion sort generally runs faster than
merge sort for small input sizes, once the input size n becomes sufficiently
large, merge sort's advantage of lg n versus n more than compensates for
the difference in constant factors. No matter how much smaller c1 is
than c2, there will always be a crossover point beyond which merge sort
is faster (Sardelis &
Valahas, 1999; Dickerson et al., 2003). For a concrete example, let us
consider a faster computer (computer A) running insertion sort against
a slower computer (computer B) running merge sort. Each of
them must sort an array of 10 million numbers. (Although 10 million
numbers may seem like a lot, if the numbers are eight-byte integers, then
the input occupies about 80 megabytes, which fits, many times over, in the
memory of even an inexpensive laptop.) Suppose that computer A executes
10 billion instructions per second, whereas computer B executes only 10
million instructions per second, making computer A 1000 times faster
than computer B in raw computing power (Sion et al., 2002; Goodrich et
al., 2005).
To amplify the dramatic difference, assume that the world's most devious
programmer codes insertion sort for computer A in machine language, and
the resulting code requires 2n^2 instructions to sort n numbers. Further
assume that an average programmer implements merge sort in a
high-level language with an inefficient compiler, and the resulting code
takes 50n lg n instructions. Then, to sort 10 million numbers, computer
A takes:
    2 · (10^7)^2 instructions / (10^10 instructions/second)
        = 20,000 seconds (more than 5.5 hours),

whereas computer B takes:

    50 · 10^7 · lg 10^7 instructions / (10^7 instructions/second)
        ≈ 1163 seconds (less than 20 minutes).
Even with a poor compiler, by using an algorithm whose running time
grows more slowly, computer B runs more than 17 times faster than
computer A. The advantage of merge sort is even more pronounced when
we sort 100 million numbers: insertion sort then takes more than 23 days,
whereas merge sort takes under four hours. As a general rule, as the
problem size increases, so does the relative advantage of merge sort
(Eppstein et al., 2008; Acar, 2009).
technologies, advances are likewise being made in algorithms (Amtoft et
al., 2002; Ausiello et al., 2012). You might wonder whether algorithms are
truly that significant on contemporary computers, in light of other
advanced technologies, for example:
i. Intuitive, easy-to-use, graphical user interfaces (GUIs).
ii. Advanced computer architectures and production technologies.
iii. Fast networking, both wireless and wired.
iv. Object-oriented systems.
v. Integrated Web technologies.
The answer is yes. Although some applications do not explicitly require
algorithmic content at the application level (such as some simple, web-
based applications), many others do. For instance, consider a web-based
service that determines how to travel from one location to another
(Szymanski, 1975; Aslam et al., 2010). Its implementation would rely on
a graphical user interface, fast hardware, wide-area networking, and quite
probably on object orientation. However, it would also require algorithms
for certain operations, such as finding routes (probably using a shortest-
path algorithm), interpolating addresses, and rendering maps (Bach, 1990;
Avidan & Shamir, 2007).
Furthermore, even an application that does not require algorithmic content
at the application level relies heavily on algorithms. The application
depends on fast hardware, and hardware design uses algorithms. The
application also relies on graphical user interfaces, whose design again
relies on algorithms (Winograd, 1970; Bach & Shallit, 1996). Additionally,
the application may rely on networking, and routing in networks relies
heavily on algorithms. If the application is written in a language other than
machine language, it must be processed by an interpreter, a compiler, or an
assembler, all of which make extensive use of algorithms. Hence, we can
conclude that algorithms lie at the core of most technologies employed in
contemporary computers (Bailey et al., 1991; Baswana et al., 2002).
Moreover, with the ever-growing capabilities of computers, we use them
to solve larger problems than ever before. As we saw in the comparison
between merge sort and insertion sort above, differences in efficiency
among algorithms become most prominent on large problems (Bayer,
1972; Arge, 2001; Graefe, 2011). A solid base of algorithmic knowledge
and technique is one distinctive feature
that sets skilled programmers apart from beginners. With modern
computing technology, we can accomplish some tasks without knowing
much about algorithms; with a sound background in algorithms, however,
we can do much, much more (Blasgen et al., 1977; Harth et al., 2007).
Output: A permutation (reordering) (a1′, a2′, …, an′) of the input
sequence such that a1′ ≤ a2′ ≤ … ≤ an′.
The numbers to be sorted are also known as the keys. Although
conceptually we are sorting a sequence, the input comes to us in the form
of an array with n elements.
Here we will typically describe algorithms as programs written in a
pseudocode that is in many respects analogous to C, C++, Pascal, Java,
or Python. If you are already familiar with any of these languages, you
should have little trouble reading our algorithms (Monier, 1980; Kim &
Pomerance, 1989; Damgård et al., 1993) (Figure 1.3).
Figure 1.3: Sorting a hand of cards by making use of insertion sort. (Source:
https://mitpress.mit.edu/books/introduction-algorithms-third-edition).
The difference between real code and pseudocode is that in pseudocode
we employ whatever expressive method is most concise and clear to
specify a given algorithm. Sometimes the clearest method is English, so
it should not come as a surprise to find an English phrase or sentence
embedded inside a section of "real" code (Ben-Or, 1983; Bender et al.,
2000). Another difference between real code and pseudocode is that
pseudocode is not normally concerned with issues of software
engineering. Issues of modularity, data abstraction, and error handling are
frequently ignored in order to convey the essence of the algorithm more
concisely (Wunderlich, 1983; Morain, 1988).
We start with insertion sort, an efficient algorithm for sorting a small
number of elements. Insertion sort works the way many people sort a
hand of playing cards. We start with an empty left hand and the cards face
down on the table. We then remove one card at a time from the table and
insert it into the correct position in the left hand (Fussenegger & Gabow,
1979; John, 1988). To find the correct position for a card, we compare it
with each of the cards already held in the hand, from right to left, as
shown in the figure below. At all times, the cards held in the left hand are
sorted, and these cards were originally the cards on the top of the pile on
the table (Kirkpatrick, 1981; Munro & Poblete, 1982; Bent & John, 1985).
Our pseudocode for insertion sort is presented as a procedure called
INSERTION-SORT, which takes as a parameter an array A[1 … n]
containing a sequence of length n to be sorted. The algorithm
then sorts the input array in place: it rearranges the numbers within the
array A, with at most a constant number of them stored outside the array
at any time. The input array A contains the sorted output sequence when
the INSERTION-SORT procedure is finished (Schönhage et al., 1976;
Bentley et al., 1980; Bienstock, 2008) (Figure 1.4).
Figure 1.4: The operation of INSERTION-SORT on the array A = (2, 5,
4, 6, 1, 3). Array indices appear above the rectangles; values stored in the
array positions appear within the rectangles. (a)–(d) The iterations of the
for loop of lines 1–8. In each iteration, the black rectangle holds the key
taken from A[j], which is compared with the values in the shaded
rectangles to its left in the test of line 5. Shaded arrows show array values
moved one position to the right in line 6, and black arrows indicate where
the key moves to in line 8. (e) The final sorted array. (Source: https://
mitpress.mit.edu/books/introduction-algorithms-third-edition).
The INSERTION-SORT procedure is shown below:
1  for j = 2 to A.length
2      key = A[j]
3      // Insert A[j] into the sorted sequence A[1 … j – 1].
4      i = j – 1
5      while i > 0 and A[i] > key
6          A[i + 1] = A[i]
7          i = i – 1
8      A[i + 1] = key
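A direct transcription of the insertion-sort pseudocode into runnable Python, offered as an illustrative sketch (the only change is the shift to Python's 0-based indexing):

```python
def insertion_sort(A):
    """Sort list A in place, mirroring the INSERTION-SORT pseudocode
    (indices shifted to Python's 0-based convention)."""
    for j in range(1, len(A)):          # pseudocode: for j = 2 to A.length
        key = A[j]
        # Insert A[j] into the sorted prefix A[0 .. j-1].
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]             # shift larger elements right
            i = i - 1
        A[i + 1] = key
    return A

print(insertion_sort([2, 5, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```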
sorted hand, and the remaining subarray A[j + 1 … n] corresponds to the
pile of cards still on the table. In fact, elements A[1 … j – 1] are the
elements originally in positions 1 through j – 1, but now in sorted order.
We state these properties of A[1 … j – 1] formally as a loop invariant.
Loop invariants help us understand why an algorithm is correct. We must
show three things about a loop invariant:
Initialization: It is true prior to the first iteration of the loop.
Maintenance: If it is true before an iteration of the loop, it remains true
before the next iteration.
Termination: When the loop terminates, the invariant gives us a useful
property that helps show that the algorithm is correct.
When the first two properties hold, the loop invariant is true prior to
every iteration of the loop. Note the similarity to mathematical induction,
where to prove that a property holds, we prove a base case and an
inductive step (Duda et al., 1973; Bloom & Van Reenen, 2002). Here,
showing that the invariant holds before the first iteration corresponds to
the base case, and showing that the invariant holds from iteration to
iteration corresponds to the inductive step (Dehling & Philipp, 2002;
Bienstock & McClosky, 2012).
The third property is perhaps the most important one, since we are using
the loop invariant to show correctness. Typically, we use the loop
invariant along with the condition that caused the loop to terminate. The
termination property differs from our usual use of mathematical
induction, in which we apply the inductive step infinitely; here, we stop
the "induction" when the loop terminates. Let us see how these properties
hold for insertion sort (Bollerslev et al., 1994; Tan & Acharya, 1999).
Initialization: We begin by showing that the loop invariant holds before
the first loop iteration, when j = 2. The subarray A[1 … j – 1] then
consists of just the single element A[1], which is in fact the original
element in A[1]. Moreover, this subarray is sorted, which shows that the
loop invariant holds before the first iteration of the loop (Williamson,
2002; D'Aristotile et al., 2003).
Maintenance: Next, we turn to the second property: showing that each
iteration maintains the loop invariant. The body of the for loop works
by moving A[j – 1], A[j – 2], A[j – 3], and so on, one position to the
right until it finds the proper position for A[j] (lines 4–7), at which point
it inserts the value of A[j] (line 8). The subarray A[1 … j] then consists of
the elements originally in A[1 … j], but in sorted order. Incrementing j
for the next iteration of the for loop preserves the loop invariant (Nelson
& Foster, 1995; Conforti et al., 2010).
Termination: Finally, we examine what happens when the loop
terminates. The condition causing the for loop to terminate is that
j > A.length = n. Because each loop iteration increases j by 1, we must
have j = n + 1 at that time. Substituting n + 1 for j in the wording of the
loop invariant, we have that the subarray A[1 … n] consists of the
elements originally in A[1 … n], but in sorted order. Observing that the
subarray A[1 … n] is the entire array, we conclude that the whole array is
sorted. Hence, the algorithm is correct (Fourier, 1890; 1973; Bazzi et al.,
2017).
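The invariant can also be checked mechanically at run time. The instrumented insertion sort below asserts, before every iteration, that the prefix is sorted and holds the elements originally in that prefix, and checks the sorted result at termination. This is an illustrative sketch of ours, not part of the formal proof:

```python
def insertion_sort_checked(A):
    """Insertion sort that asserts the loop invariant: before each
    iteration with index j, A[0..j-1] is sorted and consists of the
    elements originally in A[0..j-1]."""
    original = list(A)
    for j in range(1, len(A)):
        prefix = A[:j]
        assert prefix == sorted(prefix)                # invariant: sorted
        assert sorted(prefix) == sorted(original[:j])  # same elements
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i = i - 1
        A[i + 1] = key
    assert A == sorted(original)                       # termination
    return A

print(insertion_sort_checked([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```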
make use of else-if for tests after the first one. Most block-structured
languages have comparable constructs, though the precise syntax may
differ. (Python has no repeat-until loop, and its for loops operate
differently from the for loops discussed here (Blelloch et al., 1994; 1995;
1997; 1999).)
iv. The symbol "//" indicates that the remainder of the line is a
comment.
v. A multiple assignment of the form i = j = e assigns the value of
expression e to both variables i and j; it should be treated as
equivalent to the assignment j = e followed by the assignment i = j.
vi. Variables (such as key, i, and j) are local to the given
procedure. We shall not use global variables without explicit
indication (Blelloch & Greiner, 1996; Blelloch & Mags, 1996;
2010).
vii. We access elements of an array by giving the array name followed
by the index in square brackets. For instance, A[i] denotes
the i-th element of array A. The "…" notation indicates a range
of values within an array; thus, A[1 … j] denotes the subarray
of A consisting of the j elements A[1], A[2], …, A[j].
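For readers following along in Python, the conventions above correspond roughly as follows (an informal mapping of ours, not part of the pseudocode itself):

```python
# Pseudocode              ->  Python
# // a comment            ->  # a comment
# i = j = e               ->  i = j = e   (e assigned to j, then i = j)
# A[i]   (1-based index)  ->  A[i - 1]    (Python lists are 0-based)
# A[1 .. j]               ->  A[0:j]      (slice of the first j elements)

A = [10, 20, 30, 40]
i = j = 3                  # multiple assignment: j = 3, then i = j
print(A[i - 1])            # 30 -- pseudocode A[3], the third element
print(A[0:j])              # [10, 20, 30] -- pseudocode A[1 .. 3]
```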
In the RAM model, instructions execute one after another, with no
concurrent operations (Brent, 1974; Buhler et al., 1993; Brassard
& Bratley, 1996).
Strictly speaking, we should precisely define the instructions of the RAM
model as well as their costs. Doing so, however, would be tedious and
would yield little insight into algorithm design and analysis. Still, we
must be careful not to abuse the RAM model. For instance, if a RAM
contained an instruction that sorts, we could sort in just one instruction.
Such a RAM would be unrealistic, since real computers contain no such
instructions (Chen & Davis, 1990; Prechelt, 1993; Sheffler, 1993).
Hence, we model the design of real computers. The RAM model includes
instructions commonly found in real computers: data movement (load,
store, copy), arithmetic (such as addition, subtraction, multiplication,
division, remainder, floor, and ceiling), and control (conditional and
unconditional branch, subroutine call and return). Each such instruction
takes a constant amount of time (Hsu et al., 1992; 1993).
The data types in the RAM model are integer and floating point (for
storing real numbers). Although we are normally not concerned with
precision, in a few applications precision is crucial. We also assume a
limit on the size of each word of data. For instance, when working with
inputs of size n, we normally assume that integers are represented by
c lg n bits for some constant c ≥ 1. We require c ≥ 1 so that each word can
hold the value of n, enabling us to index the individual input elements,
and we restrict c to be a constant so that the word size does not grow
arbitrarily. (If the word size could grow arbitrarily, we could store a huge
amount of data in one word and operate on it all in constant time, which
is clearly an unrealistic scenario.)
Real computers contain instructions that represent a gray area in the
RAM model. For instance, exponentiation is not, in general, a constant-time
instruction, since it takes more than a few instructions to compute x^y
when x and y are real numbers (Little et al., 1989; Narayanan & Davis,
1992). In limited situations, however, exponentiation can be a constant-time
operation. Various computers contain a “shift left” instruction (Martínez et
al., 2004; Bengtsson, 2006). This instruction shifts the integer bits in constant
time by k positions to the left. In the majority of computers, shifting the
integer bits by one position to the left is the same as multiplication by a
factor of 2; in other words, shifting the bits to the left by k positions is the
same as multiplication by 2^k.
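The shift-multiplication correspondence is easy to check directly; a short Python sketch (any language with integer shifts behaves the same way for non-negative integers):

```python
# Shifting a non-negative integer left by k positions multiplies it by 2^k.
# This is why a constant-time "shift left" lets a machine compute 2^k in a
# single instruction: shift the integer 1 left by k positions.
x, k = 13, 5
assert x << k == x * 2 ** k   # 13 * 32 = 416
assert 1 << k == 2 ** k       # 2^5 = 32, computed with one shift
```

So in the restricted case of powers of 2 (with k no larger than the word size), exponentiation really is a constant-time operation on such machines.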
The following demonstration shows how the expression for the
INSERTION-SORT running time evolves from a complex formula that uses
all the statement costs (i.e., ci) to a simpler notation that is more concise
and more easily manipulated. This simpler notation makes it convenient
to compare the efficiency of one algorithm with that of others. We begin by
presenting the INSERTION-SORT procedure with the time “cost” of each
statement and the number of times each statement is executed.
For each j = 2, 3, 4, …, n, where n = A.length, we let tj denote the
number of times the while loop test in line 5 is executed for that
particular value of j. When a while or for loop exits in the usual way (i.e.,
because of the test in the loop header), the test is executed one time more
than the loop body. We assume that comments are non-executable
statements, and so they take no time.
INSERTION-SORT(A)                                    cost    times
1  for j = 2 to A.length                             c1      n
2      key = A[j]                                    c2      n - 1
3      // Insert A[j] into the sorted
       //   sequence A[1 .. j - 1]                   0       n - 1
4      i = j - 1                                     c4      n - 1
5      while i > 0 and A[i] > key                    c5      \sum_{j=2}^{n} t_j
6          A[i + 1] = A[i]                           c6      \sum_{j=2}^{n} (t_j - 1)
7          i = i - 1                                 c7      \sum_{j=2}^{n} (t_j - 1)
8      A[i + 1] = key                                c8      n - 1
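The INSERTION-SORT procedure under analysis can be sketched as running code, instrumented to record each tj (a minimal Python sketch; we use 0-based indices internally but report j 1-based, as in the text, and the line-number comments refer to the pseudocode):

```python
def insertion_sort(A):
    """Insertion sort, instrumented to record t_j: how many times the
    while-loop test (line 5 of the pseudocode) executes for each j."""
    A = list(A)                            # work on a copy
    t = {}
    for j in range(1, len(A)):             # line 1: j = 2 .. n in 1-based terms
        key = A[j]                         # line 2
        i = j - 1                          # line 4
        tests = 1                          # the final, failing test also counts
        while i >= 0 and A[i] > key:       # line 5
            A[i + 1] = A[i]                # line 6
            i = i - 1                      # line 7
            tests += 1
        A[i + 1] = key                     # line 8
        t[j + 1] = tests                   # report with the text's 1-based j
    return A, t

sorted_A, t = insertion_sort([5, 2, 4, 6, 1, 3])
assert sorted_A == [1, 2, 3, 4, 5, 6]
```

On already-sorted input every tj is 1 (the test fails immediately); on reverse-sorted input tj = j, which is the worst case used below.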
A statement that takes ci steps to execute and executes n times altogether
contributes ci n to the total running time. To compute T(n), the total
running time of INSERTION-SORT on an input of n values, we sum the
products of the cost and times columns, obtaining

T(n) = c_1 n + c_2 (n - 1) + c_4 (n - 1) + c_5 \sum_{j=2}^{n} t_j + c_6 \sum_{j=2}^{n} (t_j - 1) + c_7 \sum_{j=2}^{n} (t_j - 1) + c_8 (n - 1).
Noting that

\sum_{j=2}^{n} j = \frac{n(n+1)}{2} - 1

and

\sum_{j=2}^{n} (j - 1) = \frac{n(n-1)}{2},
in the worst case the running time of INSERTION-SORT is:

T(n) = c_1 n + c_2 (n - 1) + c_4 (n - 1) + c_5 \left( \frac{n(n+1)}{2} - 1 \right) + c_6 \frac{n(n-1)}{2} + c_7 \frac{n(n-1)}{2} + c_8 (n - 1)

     = \left( \frac{c_5}{2} + \frac{c_6}{2} + \frac{c_7}{2} \right) n^2 + \left( c_1 + c_2 + c_4 + \frac{c_5}{2} - \frac{c_6}{2} - \frac{c_7}{2} + c_8 \right) n - (c_2 + c_4 + c_5 + c_8).
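The quadratic worst-case behavior can be checked empirically; a Python sketch (counting only the while-loop tests, whose worst-case total is exactly the quantity multiplying c5 above):

```python
def count_while_tests(A):
    """Total number of times insertion sort's while-loop test executes on A."""
    A = list(A)
    tests = 0
    for j in range(1, len(A)):
        key, i = A[j], j - 1
        while i >= 0 and A[i] > key:   # each evaluation of this test ...
            A[i + 1] = A[i]
            i -= 1
            tests += 1                 # ... that enters the body
        tests += 1                     # ... plus the final, failing evaluation
        A[i + 1] = key
    return tests

# Reverse-sorted input is the worst case: t_j = j for every j, so the total
# is sum_{j=2}^{n} j = n(n+1)/2 - 1, a quadratic function of n.
n = 100
assert count_while_tests(list(range(n, 0, -1))) == n * (n + 1) // 2 - 1
```

The count grows as roughly n²/2, matching the n² coefficient (c5 + c6 + c7)/2 in the closed form, since each extra test also triggers one shift (c6) and one decrement (c7).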
REFERENCES
1. Abramowitz, M., & Stegun, I. A., (1965). Handbook of mathematical
functions with formulas, graphs, and mathematical table (Vol. 2172,
pp. 101-155). New York: Dover.
2. Acar, U. A., (2009, January). Self-adjusting computation:(an
overview). In Proceedings of the 2009 ACM SIGPLAN workshop on
Partial evaluation and program manipulation (pp. 1–6). ACM.
3. Aggarwal, A., & Vitter, J., (1988). The input/output complexity of
sorting and related problems. Communications of the ACM, 31(9),
1116–1127.
4. Agrawal, K., He, Y., & Leiserson, C. E., (2007, March). Adaptive
work stealing with parallelism feedback. In Proceedings of the 12th
ACM SIGPLAN symposium on Principles and practice of parallel
programming (pp. 112–120). ACM.
5. Agrawal, K., He, Y., Hsu, W. J., & Leiserson, C. E., (2006, March).
Adaptive scheduling with parallelism feedback. In Proceedings of the
eleventh ACM SIGPLAN symposium on Principles and practice of
parallel programming (pp. 100–109). ACM.
6. Agrawal, M., Kayal, N., & Saxena, N., (2004). PRIMES is in P. Annals
of mathematics, 781–793.
7. Aho, A. V., & Hopcroft, J. E. (1974). The design and analysis of
computer algorithms (Vol. 1, pp. 44-53). Pearson Education India.
8. Aho, A. V., Hopcroft, J. E., & Ullman, J. D., (1983). Data structures
and algorithms.
9. Ahuja, R. K., & Orlin, J. B., (1989). A fast and simple algorithm for the
maximum flow problem. Operations Research, 37(5), 748–759.
10. Ahuja, R. K., Magnanti, T. L., & Orlin, J. B., (1993). Network flows:
theory, algorithms, and applications (Vol. 1, p. 846). Englewood Cliffs,
NJ: Prentice Hall.
11. Ahuja, R. K., Mehlhorn, K., Orlin, J., & Tarjan, R. E., (1990). Faster
algorithms for the shortest path problem. Journal of the ACM (JACM),
37(2), 213–223.
12. Ahuja, R. K., Orlin, J. B., & Tarjan, R. E., (1989). Improved time
bounds for the maximum flow problem. SIAM Journal on Computing,
18(5), 939–954.
13. Ajtai, M., Megiddo, N., & Waarts, O., (2001). Improved algorithms
48. Berger, J., & Barkaoui, M., (2004). A parallel hybrid genetic algorithm
for the vehicle routing problem with time windows. Computers &
operations research, 31(12), 2037–2053.
49. Berry, M. V., & Howls, C. J., (2012). Integrals with coalescing saddles.
Chapter 36, 775–793.
50. Bienstock, D. (2008). Approximate formulations for 0–1 knapsack
sets. Operations Research Letters, 36(3), 317–320.
51. Bienstock, D., & McClosky, B., (2012). Tightening simple mixed-
integer sets with guaranteed bounds. Mathematical Programming,
133(1), 337–363.
52. Bishop, C. M., (2006). Pattern Recognition and Machine Learning
(Information Science and Statistics). New York, New York: Springer
Science and Business Media.
53. Blasgen, M. W., Casey, R. G., & Eswaran, K. P., (1977). An encoding
method for multifield sorting and indexing. Communications of the
ACM, 20(11), 874–878.
54. Blelloch, G. E. (1989). Scan primitives and parallel vector
models (Doctoral dissertation, Massachusetts Institute of Technology.
Laboratory for Computer Science), Vol. 1, pp. 25-37.
55. Blelloch, G. E. (1990). Vector models for data-parallel computing
(Vol. 75, pp. 15-26). Cambridge: MIT Press. Institute of Technology.
Laboratory for Computer Science).
56. Blelloch, G. E., (1996). Programming parallel algorithms.
Communications of the ACM, 39(3), 85–97.
57. Blelloch, G. E., & Gibbons, P. B., (2004, June). Effectively sharing
a cache among threads. In Proceedings of the Sixteenth Annual ACM
Symposium on Parallelism in Algorithms and Architectures (pp. 235–
244). ACM.
58. Blelloch, G. E., & Greiner, J., (1996, June). A provable time and space
efficient implementation of NESL. In ACM SIGPLAN Notices (Vol.
31, No. 6, pp. 213–225). ACM.
59. Blelloch, G. E., & Maggs, B. M., (1996). Parallel Algorithms. ACM
Computing Surveys (CSUR), 28(1), 51–54.
60. Blelloch, G. E., & Maggs, B. M., (2010, January). Parallel Algorithms.
In Algorithms and theory of computation handbook (pp. 25–25).
Chapman & Hall/CRC.
61. Blelloch, G. E., Gibbons, P. B., & Matias, Y., (1995, July). Provably
100. Dengiz, B., Altiparmak, F., & Smith, A. E., (1997). Efficient
optimization of all-terminal reliable networks, using an evolutionary
approach. IEEE Transactions on Reliability, 46(1), 18–26.
101. Dickerson, M., Eppstein, D., Goodrich, M. T., & Meng, J. Y., (2003,
September). Confluent drawings: visualizing non-planar diagrams in a
planar way. In International Symposium on Graph Drawing (pp. 1–12).
Springer, Berlin, Heidelberg.
102. Didier, F., (2009). Efficient erasure decoding of Reed-Solomon codes.
arXiv preprint arXiv:0901.1886.
103. Dockens, W. S., (1979). Induction/catastrophe theory: A behavioral
ecological approach to cognition in human individuals. Systems
Research and Behavioral Science, 24(2), 94–111.
104. Doya, K. (Ed.)., (2007). Bayesian brain: Probabilistic approaches to
neural coding. MIT press.
105. Drmota, M., & Szpankowski, W., (2013). A master theorem for discrete
divide and conquer recurrences. Journal of the ACM (JACM), 60(3),
16.
106. Du, W., & Atallah, M. J., (2001). Protocols for secure remote database
access with approximate matching. In E-Commerce Security and
Privacy (pp. 87–111). Springer, Boston, MA.
107. Duda, R. O., Hart, P. E., & Stork, D. G., (1973). Fisher’s Linear
Discriminant. Pattern Classification and Scene Analysis, Vol. 2, pp.
114-119. New York: Wiley.
108. Durbin, R., & Rumelhart, D. E., (1989). Product units: A computationally
powerful and biologically plausible extension to backpropagation
networks. Neural Computation, 1(1), 133–142.
109. Dutra da Silva, R., Robson, W., & Pedrini Schwartz, H., (2011). Image
segmentation based on wavelet feature descriptor and dimensionality
reduction applied to remote sensing. Chilean J. Stat, 2(2), 51-60.
110. Dutton, J. M., & Starbuck, W. H., (1971). Computer simulation models
of human behavior: A history of an intellectual technology. IEEE
Transactions on Systems, Man, and Cybernetics, 2, 128–171.
111. Elliott S. W. & Anderson, J. R., (1995). Learning and Memory. Wiley,
New York, USA.
112. Eppstein, D., Goodrich, M. T., & Sun, J. Z., (2005, June). The skip
quadtree: a simple dynamic data structure for multidimensional data.
In Proceedings of the twenty-first annual symposium on Computational
139. González, L., Angulo, C., Velasco, F., & Catala, A., (2006). Dual
unification of bi-class support vector machine formulations. Pattern
recognition, 39(7), 1325–1332.
140. Goodacre, R., Broadhurst, D., Smilde, A. K., Kristal, B. S., Baker,
J. D., Beger, R., & Ebbels, T., (2007). Proposed minimum reporting
standards for data analysis in metabolomics. Metabolomics, 3(3), 231–
241.
141. Goodrich, M. T., Atallah, M. J., & Tamassia, R., (2005, June). Indexing
information for data forensics. In International Conference on Applied
Cryptography and Network Security (pp. 206–221). Springer, Berlin,
Heidelberg.
142. Graefe, G., (2011). Modern B-tree techniques. Foundations and
Trends® in Databases, 3(4), 203–402.
143. Gregan‐Paxton, J., Hoeffler, S., & Zhao, M., (2005). When categorization
is ambiguous: Factors that facilitate the use of a multiple category
inference strategy. Journal of Consumer Psychology, 15(2), 127–140.
144. Gross, G. N., Lømo, T., & Sveen, O., (1969). Participation of inhibitory
and excitatory interneurons in the control of hippocampal-cortical
output, Per Anderson, The Interneuron.
145. Grzymala-Busse, J. W., & Hu, M., (2000, October). A comparison
of several approaches to missing attribute values in data mining.
In International Conference on Rough Sets and Current Trends in
Computing (pp. 378–385). Springer, Berlin, Heidelberg.
146. Grzymala-Busse, J. W., Goodwin, L. K., Grzymala-Busse, W. J., &
Zheng, X., (2005, August). Handling missing attribute values in preterm
birth data sets. In International Workshop on Rough Sets, Fuzzy Sets,
Data Mining, and Granular-Soft Computing (pp. 342–351). Springer,
Berlin, Heidelberg.
147. Hall, M., Frank, E., Holmes, G., Pfahringer, B., Adel’son-Vel’skii, G.
M., & Landis, E. M., (1962). Miranda, C., Equazioni alle derivate
parziali di tipo ellittico, Springer, Berlin, 1955.
148. Harth, A., Umbrich, J., Hogan, A., & Decker, S., (2007). YARS2: A
federated repository for querying graph structured data from the web.
In The Semantic Web (pp. 211–224). Springer, Berlin, Heidelberg.
149. Herr, D. G., (1980). On the history of the use of geometry in the general
linear model. The American Statistician, 34(1), 43–47.
150. Hsu, T. S., Ramachandran, V., & Dean, N., (1992). Implementation of
163. Lin, T. C., & Lee, D. T., (2007). Randomized algorithm for the sum
selection problem. Theoretical Computer Science, 377(1–3), 151–156.
164. Little, J. J., Blelloch, G. E., & Cass, T. A., (1989). Algorithmic
techniques for computer vision on a fine-grained parallel machine.
IEEE Transactions on Pattern Analysis and Machine Intelligence,
11(3), 244–257.
165. Martínez, C., Panario, D., & Viola, A., (2004, January). Adaptive
sampling for quickselect. In Proceedings of the fifteenth annual ACM-
SIAM symposium on Discrete Algorithms (pp. 447–455). Society for
Industrial and Applied Mathematics.
166. Maurer, S. B., (1985). The Lessons of Williamstown. In New Directions
in Two-Year College Mathematics (pp. 255–270). Springer, New York,
NY.
167. Meijer, H., & Akl, S. G., (1987). Optimal computation of prefix sums
on a binary tree of processors. International Journal of Parallel
Programming, 16(2), 127–136.
168. Meijer, H., & Akl, S. G., (1988). Bit-serial addition trees and their
applications. Computing, 40(1), 9–17.
169. Monier, L., (1980). Evaluation and comparison of two efficient
probabilistic primality testing algorithms. Theoretical Computer
Science, 12(1), 97–108.
170. Morain, F., (1988). Implementation of the Atkin-Goldwasser-Kilian
primality testing algorithm (Doctoral dissertation, INRIA).
171. Munro, J. I., & Poblete, P. V., (1982). A Lower Bound for Determining
the Median. Faculty of Mathematics, University of Waterloo.
172. Narayanan, P. J., & Davis, L. S., (1992). Replicated image algorithms
and their analyses on SIMD machines. International Journal of Pattern
Recognition and Artificial Intelligence, 6(02n03), 335–352.
173. Nelson, D. B., & Foster, D. P. (1995). Filtering and forecasting with
misspecified ARCH models II: making the right forecast with the
wrong model. Journal of Econometrics, 67(2), 303-335.
174. Nievergelt, J., (1974). Binary search trees and file organization. ACM
Computing Surveys (CSUR), 6(3), 195–207.
175. Park, T. G., & Oldfield, J. V., (1993). Minimum spanning tree generation
with content-addressable memory. Electronics Letters, 29(11), 1037–
1039.
176. Phillips, S., & Westbrook, J., (1993, June). Online load balancing
190. Sion, R., Atallah, M., & Prabhakar, S., (2002, April). Power: A metric
for evaluating watermarking algorithms. In Information Technology:
Coding and Computing, 2002. Proceedings. International Conference
on (pp. 95–99). IEEE.
191. Smith, R. S., (1986). Rolle over Lagrange—Another shot at the mean
value theorem. The College Mathematics Journal, 17(5), 403–406.
192. Snyder, L., (1984). Parallel Programming and the Poker Programming
Environment (No. TR-84–04–02). Washington Univ Seattle Dept of
Computer Science.
193. Stock, J. H., & Watson, M. W., (2001). Vector autoregressions. Journal
of Economic Perspectives, 15(4), 101–115.
194. Sudan, M., (1992). Efficient checking of polynomials and proofs and
the hardness of approximation problems. Lecture Notes in Computer
Science, 1001.
195. Szymanski, T. G., (1975). A special case of the maximal common
subsequence problem. Technical Report TR-170, Computer Science
Laboratory, Princeton University.
196. Szymanski, T. G., & Van Wyk, C. J., (1983, June). Space efficient
algorithms for VLSI artwork analysis. In Proceedings of the 20th
Design Automation Conference (pp. 734–739). IEEE Press.
197. Tan, Y. P., & Acharya, T., (1999, March). A robust sequential approach
for the detection of defective pixels in an image sensor. In Acoustics,
Speech, and Signal Processing, 1999. Proceedings., 1999 IEEE
International Conference on (Vol. 4, pp. 2239–2242). IEEE.
198. Thorup, M., (1997). Faster deterministic sorting and priority queues in
linear space (pp. 550–555). Max-Planck-Institut für Informatik.
199. Vanderbei, R. J., (1980). The optimal choice of a subset of a population.
Mathematics of Operations Research, 5(4), 481–486.
200. Verma, R. M., (1994). A general method and a master theorem
for divide-and-conquer recurrences with applications. Journal of
Algorithms, 16(1), 67–79.
201. Verma, R. M., (1997). General techniques for analyzing recursive
algorithms with applications. SIAM Journal on Computing, 26(2),
568–581.
202. Wallace, L., Keil, M., & Rai, A., (2004). Understanding software
project risk: a cluster analysis. Information & Management, 42(1),
115–125.
203. Wang, Y., (2008). Topology control for wireless sensor networks. In
Wireless sensor networks and applications (pp. 113–147). Springer,
Boston, MA.
204. Wilf, H. S., (1984). A Bijection in the Theory of Derangements.
Mathematics Magazine, 57(1), 37–40.
205. Williamson, J., (2002). Probability logic. In Studies in Logic and
Practical Reasoning (Vol. 1, pp. 397–424). Elsevier.
206. Wilson, J. G., (1991). Optimal choice and assignment of the best m of
n randomly arriving items. Stochastic processes and their applications,
39(2), 325–343.
207. Winograd, S., (1970). On the algebraic complexity of functions. In
Actes du Congres International des Mathématiciens (Vol. 3, pp. 283–
288).
208. Wunderlich, M. C., (1983). A performance analysis of a simple prime-
testing algorithm. Mathematics of computation, 40(162), 709–714.
209. Xiaodong, W., & Qingxiang, F., (1996). A frame for general divide-
and-conquer recurrences. Information Processing Letters, 59(1), 45–
51.
210. Yap, C., (2011, May). A real elementary approach to the master
recurrence and generalizations. In International Conference on Theory
and Applications of Models of Computation (pp. 14–26). Springer,
Berlin, Heidelberg.
211. Zhu, X., & Wilhelm, W. E., (2006). Scheduling and lot sizing with
the sequence-dependent setup: A literature review. IIE Transactions,
38(11), 987–1007.