Concepts and Traversals of Binary Trees
Binary Tree
A binary tree consists of a finite set of nodes that is either empty, or consists of one
specially designated node called the root of the binary tree, and the elements of two
disjoint binary trees called the left subtree and right subtree of the root.
Note that the definition above is recursive: we have defined a binary tree in terms of
binary trees. This is appropriate since recursion is an innate characteristic of tree
structures.
Diagram 1: A binary tree
Tree terminology is generally derived from the terminology of family trees (specifically,
the type of family tree called a lineal chart).
Each root is said to be the parent of the roots of its subtrees.
Two nodes with the same parent are said to be siblings; they are the children of
their parent.
The root node has no parent.
A great deal of tree processing takes advantage of the relationship between a
parent and its children, and we commonly say a directed edge (or simply
an edge) extends from a parent to its children. Thus edges connect a root with
the roots of each subtree. An undirected edge extends in both directions between
a parent and a child.
Grandparent and grandchild relations can be defined in a similar manner; we
could also extend this terminology further if we wished (designating nodes as
cousins, as an uncle or aunt, etc.).
The number of subtrees of a node is called the degree of the node. In a binary
tree, all nodes have degree 0, 1, or 2.
A node of degree zero is called a terminal node or leaf node.
A non-leaf node is often called a branch node.
The degree of a tree is the maximum degree of a node in the tree. A binary tree
has degree at most 2.
A directed path from node n1 to nk is defined as a sequence of nodes n1, n2,
..., nk such that ni is the parent of ni+1 for 1 <= i < k. An undirected path is a
similar sequence of undirected edges. The length of this path is the number of
edges on the path, namely k – 1 (i.e., the number of nodes – 1). There is a path
of length zero from every node to itself. Notice that in a binary tree there is
exactly one path from the root to each node.
The level or depth of a node with respect to a tree is defined recursively: the level
of the root is zero; and the level of any other node is one higher than that of its
parent. Or to put it another way, the level or depth of a node ni is the length of the
unique path from the root to ni.
The height of ni is the length of the longest path from ni to a leaf. Thus all leaves
in the tree are at height 0.
The height of a tree is equal to the height of the root. The depth of a tree is equal
to the level or depth of the deepest leaf; this is always equal to the height of the
tree.
If there is a directed path from n1 to n2, then n1 is an ancestor of n2 and n2 is a
descendant of n1.
Lecture-14
Array Representation
For a complete or almost complete binary tree, storing the binary tree as an array may
be a good choice.
One way to do this is to store the root of the tree in the first element of the array. Then,
for each node in the tree that is stored at subscript k, the node's left child can be stored
at subscript 2k+1 and the right child at subscript 2k+2. For example, the almost
complete binary tree shown in Diagram 2 can be stored in an array in this way (array
diagram omitted).
However, if this scheme is used to store a binary tree that is not complete or almost
complete, we can end up with a great deal of wasted space in the array.
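A quick C sketch of the subscript arithmetic for this scheme (0-based indexing, as described above; the function names are illustrative):

int leftChild(int k)  { return 2 * k + 1; }   /* left child of the node at subscript k */
int rightChild(int k) { return 2 * k + 2; }   /* right child of the node at subscript k */
int parent(int k)     { return (k - 1) / 2; } /* parent of the node at subscript k > 0 */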
For example, a degenerate binary tree whose N nodes form a single chain of right
children would need an array of about 2^N slots to hold only N values.
Linked Representation
If a binary tree is not complete or almost complete, a better choice for storing it is to use
a linked representation similar to the linked list structures covered earlier in the
semester:
Each tree node has two pointers (usually named left and right). The tree class has a
pointer to the root node of the tree (labeled root in the diagram above).
Any pointer in the tree structure that does not point to a node will normally contain the
value NULL. A linked tree with N nodes will always contain N + 1 null links.
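A minimal C sketch of such a node (the names are illustrative, matching the left/right/root labels in the diagram):

struct treenode {
    int data;
    struct treenode *left;    /* NULL when there is no left child */
    struct treenode *right;   /* NULL when there is no right child */
};

struct treenode *root = NULL; /* an empty tree: its single null link is root itself */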
Lecture-15
Tree Traversal:
Traversal is the process of visiting all the nodes of a tree (and possibly printing their
values). Because all nodes are connected via edges (links), we always start from the
root (head) node; that is, we cannot randomly access a node in a tree. There are three
ways in which we traverse a tree −
In-order Traversal
Pre-order Traversal
Post-order Traversal
Generally, we traverse a tree to search or locate a given item or key in the tree or to
print all the values it contains.
In-order Traversal
In this traversal method, the left subtree is visited first, then the root and later the right
sub-tree. We should always remember that every node may represent a subtree itself.
If a binary search tree is traversed in-order, the output will produce the key values in
ascending order.
We start from A, and following in-order traversal, we move to its left subtree B. B is
also traversed in-order. The process goes on until all the nodes are visited. The output
of inorder traversal of this tree will be −
D→B→E→A→F→C→G
Algorithm
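Until all nodes are traversed −
Step 1 − Recursively traverse the left subtree.
Step 2 − Visit the root node.
Step 3 − Recursively traverse the right subtree.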
Pre-order Traversal
In this traversal method, the root node is visited first, then the left subtree and finally
the right subtree.
We start from A, and following pre-order traversal, we first visit A itself and then move
to its left subtree B. B is also traversed pre-order. The process goes on until all the
nodes are visited. The output of pre-order traversal of this tree will be −
A→B→D→E→C→F→G
Algorithm
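Until all nodes are traversed −
Step 1 − Visit the root node.
Step 2 − Recursively traverse the left subtree.
Step 3 − Recursively traverse the right subtree.
Post-order Traversal
In this traversal method, the root node is visited last, hence the name: first we traverse
the left subtree, then the right subtree, and finally the root node.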
We start from A, and following Post-order traversal, we first visit the left subtree B. B is
also traversed post-order. The process goes on until all the nodes are visited. The
output of post-order traversal of this tree will be −
D→E→B→F→G→C→A
Algorithm
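Until all nodes are traversed −
Step 1 − Recursively traverse the left subtree.
Step 2 − Recursively traverse the right subtree.
Step 3 − Visit the root node.
The three traversals differ only in where the root is visited relative to the two recursive calls. A minimal C sketch (the node layout and single-character labels are assumptions for illustration):

#include <stdio.h>

struct node {
    char data;
    struct node *left, *right;
};

void inorder(struct node *n) {
    if (n == NULL) return;
    inorder(n->left);          /* left subtree first */
    printf("%c ", n->data);    /* then the root */
    inorder(n->right);         /* then the right subtree */
}

void preorder(struct node *n) {
    if (n == NULL) return;
    printf("%c ", n->data);    /* root first */
    preorder(n->left);
    preorder(n->right);
}

void postorder(struct node *n) {
    if (n == NULL) return;
    postorder(n->left);
    postorder(n->right);
    printf("%c ", n->data);    /* root last */
}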
Lecture-16
AVL Trees
An AVL tree is a binary search tree that is height-balanced: for every node, the heights
of its left and right sub-trees differ by at most 1. Two examples (diagrams omitted):
Is it an AVL tree? Yes − examination shows that each left sub-tree has a height 1
greater than each right sub-tree.
Is it an AVL tree? No − the sub-tree with root 8 has height 4 and the sub-tree with
root 18 has height 2.
Insertion
As with the red-black tree, insertion is somewhat complex and involves a number of
cases. Implementations of AVL tree insertion may be found in many textbooks: they rely
on adding an extra attribute, the balance factor, to each node. This factor indicates
whether the tree is left-heavy (the height of the left sub-tree is 1 greater than that of the
right sub-tree), balanced (both sub-trees have the same height) or right-heavy (the
height of the right sub-tree is 1 greater than that of the left sub-tree). If an insertion
would destroy the balance, a rotation is performed to correct the balance.
Example (diagram omitted): a new item has been added to the left subtree of node 1,
causing its height to become 2 greater than 2's right sub-tree. A right-rotation is
performed to correct the imbalance.
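A sketch of that right rotation in C (illustrative only, not a complete AVL implementation; balance-factor bookkeeping is omitted):

/*        y                x
         / \              / \
        x   C    ==>     A   y
       / \                  / \
      A   B                B   C      */
struct avlnode {
    int data;
    struct avlnode *left, *right;
};

struct avlnode *rotateRight(struct avlnode *y) {
    struct avlnode *x = y->left;
    y->left = x->right;   /* subtree B moves across */
    x->right = y;
    return x;             /* x becomes the new subtree root */
}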
Lecture-17
B+-tree
A B+-tree requires that each leaf be the same distance from the root, as in this picture,
where searching for any of the 11 values (all listed on the bottom level) will involve
loading three nodes from the disk (the root block, a second-level block, and a leaf).
In practice, d (the maximum number of references, and hence children, per node) will be
larger — as large, in fact, as it takes to fill a disk block. Suppose a block is 4KB, our
keys are 4-byte integers, and each reference is a 6-byte file offset. Then we'd choose d
to be the largest value such that 4(d − 1) + 6d ≤ 4096; solving this inequality for d, we
end up with d ≤ 410, so we'd use 410 for d. As you can see, d can be large.
A B+-tree maintains the following invariants:
Every node has one more reference than it has keys.
All leaves are at the same distance from the root.
For every non-leaf node N with k being the number of keys in N: all keys in the
first child's subtree are less than N's first key; and all keys in the ith child's
subtree (2 ≤ i ≤ k) are between the (i − 1)th key of N and the ith key of N.
The root has at least two children.
Every non-leaf, non-root node has at least floor(d / 2) children.
Each leaf contains at least floor(d / 2) keys.
Every key from the table appears in a leaf, in left-to-right sorted order.
In our examples, we'll continue to use 4 for d. Looking at our invariants, this requires
that each leaf have at least two keys, and each internal node have at least two
children (and thus at least one key).
2. Insertion algorithm
Descend to the leaf where the key fits.
1. If the node has an empty space, insert the key/reference pair into the node.
2. If the node is already full, split it into two nodes, distributing the keys evenly
between the two nodes. If the node is a leaf, take a copy of the minimum value in
the second of these two nodes and repeat this insertion algorithm to insert it into
the parent node. If the node is a non-leaf, exclude the middle value during the
split and repeat this insertion algorithm to insert this excluded value into the
parent node.
Worked insertion example (diagrams omitted): starting from an initial tree, insert 20,
then 13, 15, 10, 11 and 12 in turn.
3. Deletion algorithm
Worked deletion example (diagrams omitted): delete 13, then 15, then 1.
Expression Trees:
Trees are used in many other ways in computer science. Compilers and databases
are two major examples in this regard. In the case of compilers, tree-like structures are
used when languages are translated into machine language. We have already seen an
example of an expression tree comprising a mathematical expression. Let's discuss
expression trees in more detail: we will see what their benefits are and how we can
build one. Following is the figure of an expression tree.
In the above tree, the expression on the left side is a + b * c, while on the right side we
have d * e + f * g. If you look at the figure, it becomes evident that the inner nodes
contain operators while the leaf nodes hold operands. We know that there are two types
of nodes in a tree, i.e. inner nodes and leaf nodes. The leaf nodes are those whose left
and right subtrees are null; you will find them at the bottom level of the tree. The leaf
nodes are connected to the inner nodes. So in trees, we have some inner nodes and
some leaf nodes.
In the above diagram, all the inner nodes (the nodes which have a left or right child, or
both) hold operators, in this case + or *, whereas the leaf nodes hold only operands,
i.e. a, b, c, d, e, f, g. This tree is binary because the operators are binary. We have
discussed the evaluation of postfix and infix expressions and have seen that binary
operators need two operands. In infix expressions, one operand is on the left side of the
operator and the other is on the right side: the + operator is written as 2 + 4, and
multiplication as 5 * 6. We may also have unary operators, like negation (-) or, in
Boolean expressions, NOT. In this example all the operators are binary, so this tree is a
binary tree. Note that this is not a binary search tree: in a BST, the values on the left
side of a node are smaller and the values on the right side are greater than the node.
Here we have an expression tree, with no sorting involved.
It is not necessary that an expression tree always be a binary tree. Suppose we have a
unary operator like negation: then we have a node which holds (-) and has only one
child node under it, meaning "negate that operand".
Let's talk about the traversal of the expression tree. An in-order traversal may be
executed here; for an expression tree, it yields the infix form of the expression.
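A small C sketch of the expression tree for a + b * c (the helper mk() and the single-character symbols are assumptions for illustration); its in-order traversal prints the infix form:

#include <stdio.h>
#include <stdlib.h>

struct enode {
    char symbol;                 /* operator or operand */
    struct enode *left, *right;
};

struct enode *mk(char s, struct enode *l, struct enode *r) {
    struct enode *n = malloc(sizeof *n);
    n->symbol = s; n->left = l; n->right = r;
    return n;
}

void infix(struct enode *n) {    /* in-order traversal prints the infix form */
    if (n == NULL) return;
    infix(n->left);
    printf("%c", n->symbol);
    infix(n->right);
}

int main(void) {
    struct enode *root = mk('+', mk('a', NULL, NULL),
                                 mk('*', mk('b', NULL, NULL),
                                         mk('c', NULL, NULL)));
    infix(root);                 /* prints a+b*c */
    printf("\n");
    return 0;
}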
Lecture-18
Binary Search Tree (BST)
A Binary Search Tree (BST) is a tree in which all the nodes follow the below-mentioned
properties −
The left sub-tree of a node has a key less than or equal to its parent node's key.
The right sub-tree of a node has a key greater than its parent node's key.
Thus, a BST divides all its sub-trees into two segments, the left sub-tree and the right
sub-tree, and can be defined as −
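left_subtree (keys) ≤ node (key) ≤ right_subtree (keys)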
Representation
BST is a collection of nodes arranged in a way where they maintain BST properties.
Each node has a key and an associated value. While searching, the desired key is
compared to the keys in BST and if found, the associated value is retrieved.
Following is a pictorial representation of BST −
We observe that the root node key (27) has all less-valued keys on the left sub-tree
and the higher valued keys on the right sub-tree.
Basic Operations
Following are the basic operations of a tree −
Search − Searches an element in a tree.
Insert − Inserts an element in a tree.
Pre-order Traversal − Traverses a tree in a pre-order manner.
In-order Traversal − Traverses a tree in an in-order manner.
Post-order Traversal − Traverses a tree in a post-order manner.
Node
Define a node having some data, references to its left and right child nodes.
struct node {
int data;
struct node *leftChild;
struct node *rightChild;
};
Search Operation
Whenever an element is to be searched, start searching from the root node. Then if the
data is less than the key value, search for the element in the left subtree. Otherwise,
search for the element in the right subtree. Follow the same algorithm for each node.
Algorithm
struct node* search(struct node *root, int data) {
   struct node *current = root;
   printf("Visiting elements: ");
   while(current != NULL && current->data != data) {
      printf("%d ", current->data);
      if(data < current->data)
         current = current->leftChild;    //search the left subtree
      else
         current = current->rightChild;   //search the right subtree
   }
   return current;    //NULL when not found
}
Insert Operation
Whenever an element is to be inserted, first locate its proper location. Start searching
from the root node, then if the data is less than the key value, search for the empty
location in the left subtree and insert the data. Otherwise, search for the empty location
in the right subtree and insert the data.
Algorithm
void insert(struct node **root, int data) {
   struct node *tempNode = (struct node*) malloc(sizeof(struct node));
   struct node *current = *root, *parent = NULL;
   tempNode->data = data;
   tempNode->leftChild = NULL;
   tempNode->rightChild = NULL;
   while(current != NULL) {            //locate the empty spot
      parent = current;
      if(data < current->data)
         current = current->leftChild;    //go to left of the tree
      else
         current = current->rightChild;   //go to right of the tree
   }
   if(parent == NULL)
      *root = tempNode;                //tree was empty
   else if(data < parent->data)
      parent->leftChild = tempNode;
   else
      parent->rightChild = tempNode;
}
Depth First Search (example continued)
Step 4 − Visit D, mark it as visited and put it onto the stack. Here we have B and C
nodes, which are adjacent to D, and both are unvisited. However, we shall again
choose in alphabetical order.
Step 5 − We choose B, mark it as visited and put it onto the stack. Here B does not
have any unvisited adjacent node, so we pop B from the stack.
Step 6 − As C does not have any unvisited adjacent node, we keep popping the stack
until we find a node that has an unvisited adjacent node. In this case, there is none and
we keep popping until the stack is empty.
Lecture-21
Breadth First Search
The Breadth First Search (BFS) algorithm traverses a graph in a breadthward motion
and uses a queue to remember the next vertex from which to start a search when a
dead end occurs in any iteration.
As in the example given above, the BFS algorithm traverses from A to B to E to F first,
then to C and G, and lastly to D. It employs the following rules.
Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Insert it
in a queue.
Rule 2 − If no adjacent vertex is found, remove the first vertex from the queue.
Rule 3 − Repeat Rule 1 and Rule 2 until the queue is empty.
We start from visiting S (the starting node) and mark it as visited.
We then see an unvisited adjacent node from S. In this example, we have three
nodes but, alphabetically, we choose A, mark it as visited and enqueue it.
From A we have D as an unvisited adjacent node. We mark it as visited and
enqueue it.
At this stage, we are left with no unmarked (unvisited) nodes. But as per the algorithm
we keep on dequeuing in order to get all unvisited nodes. When the queue gets
emptied, the program is over.
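A compact C sketch of BFS over an adjacency matrix, following the three rules above (V and the array-based queue are illustrative assumptions; each vertex is enqueued at most once, so a queue of size V suffices):

#include <stdio.h>

#define V 5   /* number of vertices (illustrative) */

void bfs(int adj[V][V], int start) {
    int visited[V] = {0};
    int queue[V], front = 0, rear = 0;

    visited[start] = 1;              /* Rule 1: mark, display, enqueue */
    queue[rear++] = start;

    while (front < rear) {           /* Rule 3: repeat until the queue is empty */
        int v = queue[front++];      /* Rule 2: remove the first vertex */
        printf("%d ", v);
        for (int w = 0; w < V; w++) {
            if (adj[v][w] && !visited[w]) {
                visited[w] = 1;      /* visit each unvisited adjacent vertex */
                queue[rear++] = w;
            }
        }
    }
}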
Lecture-22
Graph representation
You can represent a graph in many ways. The two most common ways of representing
a graph are as follows:
Adjacency matrix
An adjacency matrix is a V x V binary matrix A. Element A[i][j] is 1 if there is an edge
from vertex i to vertex j; otherwise A[i][j] is 0.
Note: A binary matrix is a matrix in which the cells can have only one of two possible
values, either 0 or 1.
The adjacency matrix can also be modified for a weighted graph: instead of storing 0
or 1 in A[i][j], the weight or cost of the edge is stored.
In an undirected graph, if A[i][j] = 1, then A[j][i] = 1. In a directed graph, if A[i][j] = 1,
then A[j][i] may or may not be 1.
The adjacency matrix provides constant-time access (O(1)) to determine whether there
is an edge between two nodes. The space complexity of the adjacency matrix is O(V^2).
The adjacency matrix of the following graph is:
i/j: 1 2 3 4
 1 : 0 1 0 1
 2 : 1 0 1 0
 3 : 0 1 0 1
 4 : 1 0 1 0
Adjacency list
The other common representation is the adjacency list: an array A of V separate lists,
where list A[i] contains all the vertices adjacent to vertex i.
Consider the same undirected graph as in the adjacency matrix example. The
adjacency list of the graph is as follows:
A1 → 2 → 4
A2 → 1 → 3
A3 → 2 → 4
A4 → 1 → 3
Consider the same directed graph from an adjacency matrix. The adjacency list of the
graph is as follows:
A1 → 2
A2 → 4
A3 → 1 → 4
A4 → 2
Lecture-23
Topological Sorting:
Topological sorting for Directed Acyclic Graph (DAG) is a linear ordering of vertices
such that for every directed edge uv, vertex u comes before v in the
ordering. Topological Sorting for a graph is not possible if the graph is not a DAG.
For example, a topological sorting of the following graph is “5 4 2 3 1 0”. There can be
more than one topological sorting for a graph. For example, another topological sorting
of the following graph is “4 5 2 3 1 0”. The first vertex in topological sorting is always a
vertex with in-degree as 0 (a vertex with no in-coming edges).
Algorithm to find Topological Sorting:
In DFS, we start from a vertex, we first print it and then recursively call DFS for its
adjacent vertices. In topological sorting, we use a temporary stack. We don’t print the
vertex immediately, we first recursively call topological sorting for all its adjacent
vertices, then push it to a stack. Finally, print contents of stack. Note that a vertex is
pushed to stack only when all of its adjacent vertices (and their adjacent vertices and so
on) are already in stack.
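A C sketch of this stack-based approach over an adjacency matrix (V = 6 matches the 0..5 example; the names are illustrative):

#include <stdio.h>

#define V 6   /* number of vertices */

/* DFS that pushes a vertex only after all of its adjacent vertices are done. */
void topoDFS(int adj[V][V], int v, int visited[], int stack[], int *top) {
    visited[v] = 1;
    for (int w = 0; w < V; w++)
        if (adj[v][w] && !visited[w])
            topoDFS(adj, w, visited, stack, top);
    stack[(*top)++] = v;             /* all descendants handled: push v */
}

void topologicalSort(int adj[V][V]) {
    int visited[V] = {0}, stack[V], top = 0;
    for (int v = 0; v < V; v++)      /* cover disconnected vertices too */
        if (!visited[v])
            topoDFS(adj, v, visited, stack, &top);
    while (top > 0)                  /* pop everything: reverse finish order */
        printf("%d ", stack[--top]);
}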
Topological Sorting vs Depth First Traversal (DFS):
In DFS, we print a vertex and then recursively call DFS for its adjacent vertices. In
topological sorting, we need to print a vertex before its adjacent vertices. For example,
in the given graph, the vertex ‘5’ should be printed before vertex ‘0’, but unlike DFS, the
vertex ‘4’ should also be printed before vertex ‘0’. So Topological sorting is different
from DFS. For example, a DFS of the shown graph is “5 2 3 1 0 4”, but it is not a
topological sorting.
Dynamic Programming
The Floyd Warshall Algorithm solves the All Pairs Shortest Path problem: find the
shortest distances between every pair of vertices in a given edge-weighted directed
graph.
Example:
Input:
graph[][] = { {0, 5, INF, 10},
{INF, 0, 3, INF},
{INF, INF, 0, 1},
{INF, INF, INF, 0} }
which represents the following graph
       10
 (0)------->(3)
  |         /|\
 5|          |
  |          | 1
 \|/         |
 (1)------->(2)
       3
Note that the value of graph[i][j] is 0 if i is equal to j
And graph[i][j] is INF (infinite) if there is no edge from vertex i to j.
Output:
Shortest distance matrix
0 5 8 9
INF 0 3 4
INF INF 0 1
INF INF INF 0
Floyd Warshall Algorithm
We initialize the solution matrix same as the input graph matrix as a first step. Then we
update the solution matrix by considering all vertices as an intermediate vertex. The
idea is to one by one pick all vertices and update all shortest paths which include the
picked vertex as an intermediate vertex in the shortest path. When we pick vertex
number k as an intermediate vertex, we already have considered vertices {0, 1, 2, .. k-1}
as intermediate vertices. For every pair (i, j) of source and destination vertices
respectively, there are two possible cases.
1) k is not an intermediate vertex in shortest path from i to j. We keep the value of
dist[i][j] as it is.
2) k is an intermediate vertex in shortest path from i to j. We update the value of dist[i][j]
as dist[i][k] + dist[k][j].
The following figure shows the above optimal substructure property in the all-pairs
shortest path problem.
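A straightforward C implementation of this scheme on the example graph above (INF is a large sentinel value standing in for infinity):

#include <stdio.h>

#define V 4
#define INF 99999   /* large sentinel: no edge */

void floydWarshall(int graph[V][V]) {
    int dist[V][V];

    /* The solution matrix starts as the input graph matrix. */
    for (int i = 0; i < V; i++)
        for (int j = 0; j < V; j++)
            dist[i][j] = graph[i][j];

    /* Pick every vertex k in turn as an intermediate vertex. */
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];

    for (int i = 0; i < V; i++) {    /* print the shortest-distance matrix */
        for (int j = 0; j < V; j++)
            if (dist[i][j] == INF) printf("%7s", "INF");
            else                   printf("%7d", dist[i][j]);
        printf("\n");
    }
}

int main(void) {
    int graph[V][V] = { {0,   5,   INF, 10},
                        {INF, 0,   3,   INF},
                        {INF, INF, 0,   1},
                        {INF, INF, INF, 0} };
    floydWarshall(graph);   /* prints the matrix shown in the Output above */
    return 0;
}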
Lecture-24
Bubble Sort
We take an unsorted array for our example. Bubble sort takes O(n^2) time, so we're
keeping the example short and precise.
Bubble sort starts with very first two elements, comparing them to check which one is
greater.
In this case, value 33 is greater than 14, so these two are already in sorted positions.
Next, we compare 33 with 27.
We find that 27 is smaller than 33 and these two values must be swapped.
Next we compare 33 and 35. We find that both are in already sorted positions.
We know then that 10 is smaller than 35. Hence they are not sorted.
We swap these values. We find that we have reached the end of the array. After one
iteration, the array should look like this −
To be precise, we are now showing how the array should look after each iteration.
After the second iteration, it should look like this −
Notice that after each iteration, at least one value moves to the end.
And when no swap is required, bubble sort learns that the array is completely
sorted.
Pseudocode
We observe in the algorithm that bubble sort compares each pair of adjacent array
elements until the whole array is completely sorted in ascending order. This may cause
a few complexity issues, for example: what if the array needs no more swapping
because all the elements are already ascending?
To ease out the issue, we use one flag variable, swapped, which helps us see whether
any swap has happened or not. If no swap has occurred, i.e. the array requires no more
processing to be sorted, it will come out of the loop.
Pseudocode of BubbleSort algorithm can be written as follows −
procedure bubbleSort( list : array of items )
   loop = list.count;
   for i = 0 to loop-1 do:
      swapped = false
      for j = 0 to loop-2 do:
         /* compare the adjacent elements */
         if list[j] > list[j+1] then
            swap( list[j], list[j+1] )
            swapped = true
         end if
      end for
      /* if no number was swapped, the array is sorted */
      if(not swapped) then
         break
      end if
   end for
end procedure
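Lecture-25
Insertion Sort
Insertion sort maintains a sorted sub-list and inserts each new element at its proper
place within it. The walkthrough below uses the same unsorted array as before
(diagrams omitted) and begins by comparing the first two elements, 14 and 33.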
It finds that both 14 and 33 are already in ascending order. For now, 14 is in sorted
sub-list.
It swaps 33 with 27. It also checks with all the elements of sorted sub-list. Here we see
that the sorted sub-list has only one element 14, and 27 is greater than 14. Hence, the
sorted sub-list remains sorted after swapping.
By now we have 14 and 27 in the sorted sub-list. Next, it compares 33 with 10.
So we swap them.
We swap them again. By the end of third iteration, we have a sorted sub-list of 4 items.
This process goes on until all the unsorted values are covered in a sorted sub-list. Now
we shall see some programming aspects of insertion sort.
Algorithm
Now we have a bigger picture of how this sorting technique works, so we can derive
simple steps by which we can achieve insertion sort.
Step 1 − If it is the first element, it is already sorted. return 1;
Step 2 − Pick next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all the elements in the sorted sub-list that are greater than the
value to be sorted
Step 5 − Insert the value
Step 6 − Repeat until list is sorted
Pseudocode
procedure insertionSort( A : array of items )
   int holePosition
   int valueToInsert
   for i = 1 to length(A) inclusive do:
      /* select the value to be inserted */
      valueToInsert = A[i]
      holePosition = i
      /* locate the hole position for the element to be inserted */
      while holePosition > 0 and A[holePosition-1] > valueToInsert do:
         A[holePosition] = A[holePosition-1]
         holePosition = holePosition - 1
      end while
      /* insert the value at the hole position */
      A[holePosition] = valueToInsert
   end for
end procedure
Lecture-26
Selection Sort
Consider the following depicted array as an example.
For the first position in the sorted list, the whole list is scanned sequentially. 14 is
currently stored at the first position; we search the whole list and find that 10 is the
lowest value.
So we replace 14 with 10. After one iteration 10, which happens to be the minimum
value in the list, appears in the first position of the sorted list.
For the second position, where 33 is residing, we start scanning the rest of the list in a
linear manner.
We find that 14 is the second lowest value in the list and it should appear at the second
place. We swap these values.
After two iterations, two least values are positioned at the beginning in a sorted
manner.
The same process is applied to the rest of the items in the array.
Following is a pictorial depiction of the entire sorting process −
Now, let us learn some programming aspects of selection sort.
Algorithm
procedure selectionSort( list : array of items, n : size of list )
   for i = 1 to n - 1
      /* set current element as minimum */
      min = i
      /* check the element to be minimum */
      for j = i+1 to n
         if list[j] < list[min] then
            min = j;
         end if
      end for
      /* swap the minimum element with the current element */
      if min != i then
         swap list[min] and list[i]
      end if
   end for
end procedure
Lecture-27
Merge Sort
To understand merge sort, we take an unsorted array as the following −
We know that merge sort first divides the whole array iteratively into equal halves until
atomic values are reached. We see here that an array of 8 items is divided into two
arrays of size 4.
This does not change the sequence of appearance of the items in the original. Now we
divide these two arrays into halves.
We further divide these arrays until we reach atomic values which can no longer be
divided.
Now, we combine them in exactly the same manner as they were broken down. Please
note the color codes given to these lists.
We first compare the element for each list and then combine them into another list in a
sorted manner. We see that 14 and 33 are in sorted positions. We compare 27 and 10
and in the target list of 2 values we put 10 first, followed by 27. We change the order of
19 and 35 whereas 42 and 44 are placed sequentially.
In the next iteration of the combining phase, we compare lists of two data values and
merge them into a list of four data values, placing all in sorted order.
After the final merging, the list should look like this −
Algorithm
Merge sort keeps on dividing the list into equal halves until it can no longer be divided.
By definition, if there is only one element in the list, it is sorted. Then, merge sort
combines the smaller sorted lists, keeping the new list sorted too.
Step 1 − If there is only one element in the list, it is already sorted; return.
Step 2 − Divide the list recursively into two halves until it can no longer be divided.
Step 3 − Merge the smaller lists into a new list in sorted order.
Merge sort works with recursion and we shall see our implementation in the same way.
procedure mergesort( var a as array )
   n = length(a)
   if ( n == 1 ) return a
   var l1 as array = a[0] ... a[n/2]
   var l2 as array = a[n/2+1] ... a[n]
   l1 = mergesort( l1 )
   l2 = mergesort( l2 )
   return merge( l1, l2 )
end procedure
procedure merge( var a as array, var b as array )
   var c as array
   while ( a and b have elements )
      if ( a[0] > b[0] )
         add b[0] to the end of c
         remove b[0] from b
      else
         add a[0] to the end of c
         remove a[0] from a
      end if
   end while
   while ( a has elements )
      add a[0] to the end of c
      remove a[0] from a
   end while
   while ( b has elements )
      add b[0] to the end of c
      remove b[0] from b
   end while
   return c
end procedure
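Lecture-28
Quick Sort
Quick sort is a highly efficient sorting algorithm based on partitioning the array of data
into smaller arrays around a pivot value: one partition holds values smaller than the
pivot and the other holds values greater than it.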
The pivot value divides the list into two parts, and recursively we find a pivot for each
sub-list until all sub-lists contain only one element.
Quick Sort Pivot Algorithm
Based on our understanding of partitioning in quick sort, we will now try to write an
algorithm for it, which is as follows.
Step 1 − Choose the highest index value as pivot
Step 2 − Take two variables to point left and right of the list excluding pivot
Step 3 − left points to the low index
Step 4 − right points to the high index
Step 5 − while value at left is less than pivot move right
Step 6 − while value at right is greater than pivot move left
Step 7 − if both step 5 and step 6 do not match, swap left and right
Step 8 − if left ≥ right, the point where they meet is the new pivot
Quick Sort Pivot Pseudocode
The pseudocode for the above algorithm can be derived as −
function partitionFunc(left, right, pivot)
   leftPointer = left
   rightPointer = right - 1
   while True do
      while A[++leftPointer] < pivot do
         //do-nothing
      end while
      while rightPointer > 0 && A[--rightPointer] > pivot do
         //do-nothing
      end while
      if leftPointer >= rightPointer
         break
      else
         swap leftPointer, rightPointer
      end if
   end while
   swap leftPointer, right
   return leftPointer
end function
Quick Sort Algorithm
Using the pivot algorithm recursively, we end up with smaller and smaller partitions.
Each partition is then processed for quick sort. We define the recursive algorithm for
quicksort as follows −
Step 1 − Make the right-most index value pivot
Step 2 − partition the array using pivot value
Step 3 − quicksort left partition recursively
Step 4 − quicksort right partition recursively
Quick Sort Pseudocode
To get more into it, let us see the pseudocode for the quick sort algorithm −
procedure quickSort(left, right)
if right-left <= 0
return
else
pivot = A[right]
partition = partitionFunc(left, right, pivot)
quickSort(left,partition-1)
quickSort(partition+1,right)
end if
end procedure
Lecture-29
Heap Sort
Heap sort is a comparison-based sorting technique based on the binary heap data
structure. It is similar to selection sort in that we first find the maximum element and
place it at the end; we then repeat the same process for the remaining elements.
What is Binary Heap?
Let us first define a complete binary tree: a binary tree in which every level, except
possibly the last, is completely filled, and all nodes are as far left as possible.
A binary heap is a complete binary tree where items are stored in a special order such
that the value in a parent node is greater (or smaller) than the values in its two children
nodes. The former is called a max heap and the latter a min heap. The heap can be
represented by a binary tree or an array.
Why array based representation for Binary Heap?
Since a binary heap is a complete binary tree, it can easily be represented as an array,
and the array-based representation is space-efficient. If the parent node is stored at
index i, the left child can be calculated as 2*i + 1 and the right child as 2*i + 2
(assuming the indexing starts at 0).
Heap Sort Algorithm for sorting in increasing order:
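1. Build a max heap from the input data.
2. At this point, the largest item is stored at the root of the heap. Replace it with the
last item of the heap, reduce the size of the heap by 1, and finally heapify the root
of the tree.
3. Repeat step 2 while the size of the heap is greater than 1.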
The heapify procedure can be applied to a node only if its children are already
heapified, so heapification must be performed in bottom-up order.
Let's understand with the help of an example:
Input data: 4, 10, 3, 5, 1

         4(0)
        /    \
    10(1)     3(2)
    /    \
 5(3)    1(4)
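A C sketch of the two routines (heapify uses the 2*i+1 / 2*i+2 index arithmetic described above):

void heapify(int arr[], int n, int i) {
    int largest = i;                    /* assume the subtree root is largest */
    int l = 2 * i + 1, r = 2 * i + 2;   /* children of node i */

    if (l < n && arr[l] > arr[largest]) largest = l;
    if (r < n && arr[r] > arr[largest]) largest = r;

    if (largest != i) {                 /* a child was larger: swap, fix below */
        int t = arr[i]; arr[i] = arr[largest]; arr[largest] = t;
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)   /* build the max heap bottom-up */
        heapify(arr, n, i);
    for (int i = n - 1; i > 0; i--) {      /* move the max to the end and shrink */
        int t = arr[0]; arr[0] = arr[i]; arr[i] = t;
        heapify(arr, i, 0);
    }
}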
What if the elements are in a range from 1 to n^2? We can't use counting sort, because
counting sort will take O(n^2), which is worse than comparison-based sorting
algorithms. Can we sort such an array in linear time?
Radix Sort is the answer. The idea of Radix Sort is to do a digit-by-digit sort, starting
from the least significant digit and moving to the most significant digit. Radix sort uses
counting sort as a subroutine to sort.
Lecture-30
Radix Sort
1) Do the following for each digit i, where i varies from the least significant digit to the
most significant digit:
   a) Sort the input array using counting sort (or any stable sort) according to the i'th
   digit.
Example:
Original, unsorted list:
170, 45, 75, 90, 802, 24, 2, 66
Sorting by least significant digit (1s place) gives: [*Notice that we keep 802 before 2,
because 802 occurred before 2 in the original list, and similarly for pairs 170 & 90 and
45 & 75.]
170, 90, 802, 2, 24, 45, 75, 66
Sorting by next digit (10s place) gives: [*Notice that 802 again comes before 2 as 802
comes before 2 in the previous list.]
802, 2, 24, 45, 66, 170, 75, 90
Sorting by most significant digit (100s place) gives:
2, 24, 45, 66, 75, 90, 170, 802
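A C sketch of this process (countingSortByDigit and the base-10 choice are illustrative; any stable per-digit sort works):

#include <stdio.h>
#include <string.h>

/* Stable counting sort on the digit selected by exp (1, 10, 100, ...). */
void countingSortByDigit(int arr[], int n, int exp) {
    int output[n];                          /* C99 variable-length array */
    int count[10] = {0};
    for (int i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)            /* prefix sums give end positions */
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--)        /* walk backwards to stay stable */
        output[--count[(arr[i] / exp) % 10]] = arr[i];
    memcpy(arr, output, n * sizeof(int));
}

void radixSort(int arr[], int n) {
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max) max = arr[i];
    for (int exp = 1; max / exp > 0; exp *= 10)
        countingSortByDigit(arr, n, exp);
}

int main(void) {
    int arr[] = {170, 45, 75, 90, 802, 24, 2, 66};
    radixSort(arr, 8);
    for (int i = 0; i < 8; i++)
        printf("%d ", arr[i]);              /* 2 24 45 66 75 90 170 802 */
    printf("\n");
    return 0;
}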
What is the running time of Radix Sort?
Let there be d digits in the input integers. Radix sort takes O(d*(n+b)) time, where b is
the base for representing numbers; for example, for the decimal system, b is 10. What
is the value of d? If k is the maximum possible value, then d would be O(log_b(k)). So
the overall time complexity is O((n+b) * log_b(k)), which looks like more than the time
complexity of comparison-based sorting algorithms for a large k. Let us first limit k: let
k <= n^c where c is a constant. In that case, the complexity becomes O(n log_b(n)).
But it still doesn't beat comparison-based sorting algorithms.
Linear Search
Linear search is to check each element one by one in sequence. The following
method linearSearch() searches a target in an array and returns the index of the target; if
not found, it returns -1, which indicates an invalid index.
int linearSearch(int arr[], int target)
{
    for (int i = 0; i < arr.length; i++)
    {
        if (arr[i] == target)
            return i;
    }
    return -1;
}
Linear search loops through each element in the array; each loop body takes constant
time. Therefore, it runs in linear time O(n).
Lecture-31
Binary Search
For sorted arrays, binary search is more efficient than linear search. The process starts
from the middle of the input array:
If the target equals the element in the middle, return its index.
If the target is larger than the element in the middle, search the right half.
If the target is smaller, search the left half.
In the following binarySearch() method, the two index variables first and last indicate
the search boundary at each round.
int binarySearch(int arr[], int target)
{
    int first = 0, last = arr.length - 1;

    while (first <= last)
    {
        int mid = (first + last) / 2;
        if (target == arr[mid])
            return mid;
        if (target > arr[mid])
            first = mid + 1;
        else
            last = mid - 1;
    }
    return -1;
}
arr: {3, 9, 10, 27, 38, 43, 82}

target: 10
first: 0, last: 6, mid: 3, arr[mid]: 27 -- go left
first: 0, last: 2, mid: 1, arr[mid]: 9 -- go right
first: 2, last: 2, mid: 2, arr[mid]: 10 -- found

target: 40
first: 0, last: 6, mid: 3, arr[mid]: 27 -- go right
first: 4, last: 6, mid: 5, arr[mid]: 43 -- go left
first: 4, last: 4, mid: 4, arr[mid]: 38 -- go right
first: 5, last: 4 -- not found
Binary search divides the array in the middle at each round of the loop. Suppose the
array has length n and the loop runs in t rounds, then we have n * (1/2)^t = 1 since at
each round the array length is divided by 2. Thus t = log(n). At each round, the loop
body takes constant time. Therefore, binary search runs in logarithmic time O(log n).
The following code implements binary search using recursion. To call the method, we
need to provide the boundary indexes; for example:
binarySearch(arr, 0, arr.length - 1, target);
int binarySearch(int arr[], int first, int last, int target)
{
    if (first > last)
        return -1;

    int mid = (first + last) / 2;

    if (target == arr[mid])
        return mid;
    if (target > arr[mid])
        return binarySearch(arr, mid + 1, last, target);
    // target < arr[mid]
    return binarySearch(arr, first, mid - 1, target);
}
Lecture-32
Hashing
Introduction
When we put objects into a hashtable, it is possible that different objects (by
the equals() method) have the same hashcode. This is called a collision. Here is an
example of a collision: the two different strings "Aa" and "BB" have the same hashcode.
"Aa" = 'A' * 31 + 'a' = 65 * 31 + 97 = 2112
"BB" = 'B' * 31 + 'B' = 66 * 31 + 66 = 2112
The big attraction of using a hash table is the constant-time performance of the basic
operations add, remove, contains, size. However, because of collisions, we cannot
guarantee constant runtime in the worst case. Why? Imagine that all our objects collide
into the same index: searching for one of them is then equivalent to searching in a list,
which takes linear runtime. However, we can guarantee an expected constant runtime
if we make sure our lists don't become too long. This is usually implemented by
maintaining a load factor that keeps track of the average length of the lists. If the load
factor approaches a threshold set in advance, we create a bigger array and rehash all
elements from the old table into the new one.
Another technique of collision resolution is linear probing: if we cannot insert at index
k, we try the next slot, k+1. If that one is occupied, we go to k+2, and so on.
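A minimal linear-probing insert in C (TABLE_SIZE, EMPTY and non-negative integer keys are assumptions for illustration; a real table would also track the load factor and resize):

#define TABLE_SIZE 11
#define EMPTY -1

int table[TABLE_SIZE];

void initTable(void) {
    for (int i = 0; i < TABLE_SIZE; i++)
        table[i] = EMPTY;              /* all slots start free */
}

void insertKey(int key) {              /* assumes the table is not full */
    int k = key % TABLE_SIZE;          /* home slot */
    while (table[k] != EMPTY)          /* occupied: probe the next slot */
        k = (k + 1) % TABLE_SIZE;      /* wrapping around at the end */
    table[k] = key;
}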
Lecture-33
Hashing Functions
Choosing a good hashing function, h(k), is essential for hash-table based
searching. h should distribute the elements of our collection as uniformly as possible to
the "slots" of the hash table. The key criterion is that there should be a minimum
number of collisions.
If the probability that a key, k, occurs in our collection is P(k), then if there are m slots in
our hash table, a uniform hashing function, h(k), would ensure:
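sum of P(k) over all keys k with h(k) = j = 1/m, for each slot j, 0 <= j < m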
Sometimes, this is easy to ensure. For example, if the keys are randomly distributed in
(0,r], then,
h(k) = floor((mk)/r)
will provide uniform hashing.
Mapping keys to natural numbers
Most hashing functions will first map the keys to some set of natural numbers, say (0,r].
There are many ways to do this: for example, if the key is a string of ASCII characters,
we can simply add the ASCII representations of the characters mod 255 to produce a
number in (0,255); or we could xor them, or add them in pairs mod 2^16 − 1, or ...
Having mapped the keys to a set of natural numbers, we then have a number of
possibilities.
1. Use a mod function:
h(k) = k mod m.
When using this method, we usually avoid certain values of m. Powers of 2 are
usually avoided, for k mod 2^b simply selects the b low-order bits of k. Unless we
know that all the 2^b possible values of the lower-order bits are equally likely, this
will not be a good choice, because some bits of the key are not used in the hash
function.
Prime numbers which are close to powers of 2 seem to be generally good
choices for m.
For example, if we have 4000 elements, and we have chosen an overflow table
organization, but wish to keep the probability of collisions quite low, then we
might choose m = 4093. (4093 is the largest prime less than 4096 = 2^12.)
2. Use the multiplication method:
o Multiply the key by a constant A, 0 < A < 1,
o Extract the fractional part of the product,
o Multiply this value by m.
Thus the hash function is:
h(k) = floor(m * (kA - floor(kA)))
In this case, the value of m is not critical, and we typically choose a power of 2 so
that we can get the following efficient procedure on most digital computers:
o Choose m = 2^p.
o Multiply the w bits of k by floor(A * 2^w) to obtain a 2w-bit product.
o Extract the p most significant bits of the lower half of this product.
It seems that:
A = (sqrt(5)-1)/2 = 0.6180339887
is a good choice (see Knuth, "Sorting and Searching", v. 3 of "The Art of
Computer Programming").
3. Use universal hashing:
A malicious adversary can always choose the keys so that they all hash to the
same slot, leading to an average O(n) retrieval time. Universal hashing seeks to
avoid this by choosing the hashing function randomly from a collection of hash
functions (cf. Cormen et al., p. 229-). This makes the probability that the hash
function will generate poor behaviour small and produces good average
performance.
performance.