
ADVANCED DATA STRUCTURES

Unit-1

1. Heap Data Structure


A Heap is a complete binary tree data structure that satisfies the heap property: in a min-heap,
every node's value is less than or equal to the values of its children, while in a max-heap it is
greater than or equal. Heaps are usually used to implement priority queues, where the smallest
(or largest) element is always at the root of the tree.
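As a quick illustration (not part of the original text), Python's standard heapq module maintains the min-heap invariant over a plain list, so the smallest element is always at index 0:

import heapq

# A min-heap: the smallest element is always at index 0 (the root).
pq = []
for value in [42, 7, 19, 3, 25]:
    heapq.heappush(pq, value)    # O(log n) per insertion

print(pq[0])                     # 3 -- peek at the minimum in O(1)
print(heapq.heappop(pq))         # 3 -- extract-min in O(log n)
print(heapq.heappop(pq))         # 7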
Applications of Heap Data Structure
The heap data structure is generally taught alongside Heapsort. The Heapsort algorithm itself
sees limited use because Quicksort is usually faster in practice. Nevertheless, the heap data
structure itself is used very widely.
1. Priority Queues: Heaps are commonly used to implement priority queues, where elements
with higher priority are extracted first. This is useful in many applications such as scheduling
tasks, handling interruptions, and processing events.
2. Sorting Algorithms: Heapsort, a comparison-based sorting algorithm, is implemented using
the Heap data structure. It has a time complexity of O(n log n), making it efficient for large
datasets.
3. Graph algorithms: Heaps are used in graph algorithms such as Prim’s algorithm, Dijkstra’s
algorithm, and the A* search algorithm.
4. Lossless Compression: Heaps are used in data compression algorithms such as Huffman
coding, which uses a priority queue implemented as a min-heap to build a Huffman tree.
5. Medical Applications: In medical applications, heaps are used to store and manage patient
information based on priority, such as vital signs, treatments, and test results.
6. Load balancing: Heaps are used in load balancing algorithms to distribute tasks or requests
to servers, by processing elements with the lowest load first.
7. Order statistics: The Heap data structure can be used to efficiently find the kth smallest (or
largest) element in an array.
8. Resource allocation: Heaps can be used to efficiently allocate resources in a system, such as
memory blocks or CPU time, by assigning a priority to each resource and processing requests
in order of priority.
9. Job scheduling: The heap data structure is used in job scheduling algorithms, where tasks
are scheduled based on their priority or deadline. The heap data structure allows efficient
access to the highest-priority task, making it a useful data structure for job scheduling

applications.
2. Min-Max Heaps

A min-max heap is a heap with the additional property that the levels are alternately labeled
MIN and MAX. In particular, the root is at a min level, its children are at a max level, their
children are in turn at a min level, and so on. An element v at a max level has the property that
v is the largest of all the elements in the subtree rooted at v. Similarly, if v is at a min level,
then v is the smallest of all the elements in the subtree rooted at v. We refer to these two
properties as the max property and the min property, respectively. An example is illustrated in
Figure 1.

Creation of a min-max heap: a top-down approach
This is similar to the top-down creation of a min-heap. In the top-down approach, each new
element is inserted at the last level, respecting the nearly complete binary tree property, and the
min-max heapify procedure bubbles the element up to its correct position by recursively
checking the min or max property starting from its parent or grandparent; this check is repeated
until the parent or grandparent is the root node. Creating an n-element min-max heap this way
takes O(n log n) time, since each of the n insertions costs O(log n).

Figures 1-18 illustrate the creation of a min-max heap. The min property violation of Figure 2 is
set right in Figure 3. Similarly, max property violations occur in Figures 4, 6, 8, 11, 16, and 17,
and the min property is violated in Figure 14.

Insertion in a min-max heap
Insertion of an element in a min-max heap is similar to insertion in an ordinary heap: we insert
the element as the left child of the first leaf node (indexing nodes from top to bottom and left to
right) or as the right child of the last non-leaf. After insertion, we restore the min-max heap
property by recursively checking it from the inserted element up to the root. Depending on the
level at which the new element lands, it can violate either the min or the max property. For
example, if we insert the element 80 in Figure 2, it is first placed as the left child of the element
63. Note that 80 is now at a min level. To restore the heap, we first observe that the max
property at the parent element 63 is violated, so we exchange 63 and 80 (see Figure 2 (a)). Note
that there is no need to check for min property violations at any min-level elements above 63
(starting from 42). We then check the max property at the grandparent of 80, which is 79 (see
Figure 2 (b)). The max property at 79 is violated, so we exchange 79 and 80. This completes the
insertion, and we finally obtain the min-max heap with 80 inserted shown in Figure 2 (c). Note
that during insertion either the min property or the max property can be violated, but not both,
so we only need to check one of them recursively up to the root. Consequently, inserting an
element into a min-max heap takes O(log n) time.

Extract Min and Max in a min-max heap
For extract min in a min-max heap, we replace the root element with the last leaf (the last
element in the last level), henceforth referred to as the key. This replacement keeps the tree
complete, but the min-max property may now be violated, so we may need to restore it.
Depending on whether the key is at a min level or a max level, and whether it lies in the left or
right subtree of the root, we recursively perform the following checks:
1. Compare the key with the second minimum, which is one of the four grandchildren. If the
key is larger than the second minimum, swap the key and the second minimum.
2. While swapping the key and the second minimum, a violation of the max property may
occur, i.e., the key may be greater than its parent, which is at a max level. If so, swap the key
and its parent after performing the above swap.
3. Repeat these checks recursively until no further swap is possible.

In Figure 3, for the extract min operation, the element 10 in the root is replaced with 33. Then
the root is exchanged with 20, the second minimum, as shown in Figure 3 (c). Since there is no
max violation at the element 40, no swap is needed there. As part of the recursive check, 33 is
swapped with its second-minimum grandchild 25. We then notice a max violation, so we swap
33 and 30, which completes the extract min procedure. A similar approach works for extract
max. Extract min (or max) needs O(1) time to replace the min (or max) element with the last
leaf element, and at most l/2 + 1 exchanges, where l is the number of levels in the n-element
min-max heap. Since l = Θ(log n), extract min (or max) can be performed in Θ(log n) time. We
shall later present another data structure, the deap, which also performs extract min and extract
max in Θ(log n) time.
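As a small illustration of the alternating levels, the helper below is a sketch (not from the original text) that uses 0-based array indexing to decide whether a position lies on a min level or a max level; this test drives the choice of comparison during heapify:

def is_min_level(i: int) -> bool:
    """True if 0-based array index i lies on a MIN level of a min-max heap."""
    depth = (i + 1).bit_length() - 1   # depth of index i in a complete binary tree
    return depth % 2 == 0              # min levels sit at even depths (root = depth 0)

def find_min(h):
    return h[0]                        # the root holds the global minimum

def find_max(h):
    if len(h) == 1:
        return h[0]
    return max(h[1:3])                 # the global maximum is one of the root's children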
Binomial Heap

In this article, we will discuss the binomial heap. Before discussing the topic, we should
first understand some basic terms such as heap, min-heap, max-heap, and binomial tree.

What is a heap?

A heap is a complete binary tree, where a binary tree is a tree in which each node can have at
most two children. There are two types of heap, defined as follows -

o Min Heap: The value of the parent node should be less than or equal to each of its
children.
Mathematically, the min-heap can be defined as -
A[Parent(i)] <= A[i]
o Max Heap: The value of the parent node is greater than or equal to its children.
Mathematically, the max-heap can be defined as -
A[Parent(i)] >= A[i]
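Both inequalities can be checked directly on the array representation; the following sketch (an illustration, using 0-based indices so that Parent(i) = (i - 1) // 2) verifies the min-heap condition:

def is_min_heap(a):
    """Check A[Parent(i)] <= A[i] for every non-root index i (0-based array)."""
    return all(a[(i - 1) // 2] <= a[i] for i in range(1, len(a)))

print(is_min_heap([1, 3, 2, 7, 4]))   # True
print(is_min_heap([3, 1, 2]))         # False: parent 3 > child 1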
What is a Binomial tree?

A Binomial tree Bk is an ordered tree defined recursively, where k is defined as the order of the
binomial tree.

o If the binomial tree is represented as B0, then the tree consists of a single node.
o In general terms, Bk consists of two binomial trees Bk-1 that are linked
together, in which one tree becomes the left subtree of the other.
We can understand it with the example given below.

For B0, where k is 0, there exists only one node in the tree.

For B1, where k is 1, there are two binomial trees B0 in which one B0 becomes
the left subtree of the other B0.

For B2, where k is 2, there are two binomial trees B1 in which one B1 becomes
the left subtree of the other B1.

For B3, where k is 3, there are two binomial trees B2 in which one B2 becomes
the left subtree of the other B2.

Now, let's start the main topic 'Binomial Heap'.


What is a Binomial Heap?

A binomial heap can be defined as a collection of binomial trees that satisfies the heap
property, i.e., the min-heap property. A min-heap is a heap in which each node has a value less
than or equal to the values of its child nodes. Binomial heaps are mainly used to implement
priority queues. The binomial heap is an extension of the binary heap that provides faster merge
or union operations along with the other operations provided by the binary heap.

Properties of Binomial heap

A binomial heap with n nodes has the following properties -

o Every binomial tree in the heap must follow the min-heap property, i.e., the key of a
node is greater than or equal to the key of its parent.
o For any non-negative integer k, there is at most one binomial tree in the heap whose
root has degree k.
The first property ensures that the min-heap property holds throughout the heap.
The second property ensures that a binomial heap with n nodes has at
most 1 + log2 n binomial trees, where log2 is the binary logarithm.

We can understand the properties listed above with the help of an example -

The above figure has three binomial trees, i.e., B0, B2, and B3. All three binomial trees satisfy
the min-heap property, as every node has a value smaller than the values of its child nodes.

The above figure also satisfies the second property of the binomial heap. For example, if we
consider k = 3, we can observe in the above figure that exactly one binomial tree of degree 3
exists in the heap.

Binomial Heap and the binary representation of a number


A binomial heap with n nodes consists of a number of binomial trees equal to the number of set
bits in the binary representation of n.

Suppose we want to create a binomial heap of 'n' nodes; its shape is determined directly by the
binary representation of 'n'. For example, if we want to create the binomial heap of 13 nodes:
the binary form of 13 is 1101, so if we number the bits from the rightmost digit starting at 0, we
can observe that 1 appears at positions 0, 2, and 3; therefore, the binomial heap with 13 nodes
will have the binomial trees B0, B2, and B3.

We can use another example to understand it more clearly, suppose we have to create the
binomial heap with 9 nodes. The binary representation of 9 is 1001. So, in the binary
representation of 9, digit 1 is occurring at 0 and 3 positions, therefore, the binomial heap will
contain the binomial trees of 0 and 3 degrees.
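This correspondence can be computed mechanically; the sketch below (illustrative, not from the original text) reads off the set bits of n to list the degrees of the binomial trees in an n-node binomial heap:

def binomial_tree_degrees(n: int):
    """Degrees k such that tree B_k appears in a binomial heap of n nodes."""
    return [k for k in range(n.bit_length()) if (n >> k) & 1]

print(binomial_tree_degrees(13))  # [0, 2, 3]  (13 = 0b1101 -> B0, B2, B3)
print(binomial_tree_degrees(9))   # [0, 3]     (9  = 0b1001 -> B0, B3)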

Now, let's move towards the operations performed on Binomial heap.

Operations on Binomial Heap


The operations that can be performed on binomial heap are listed as follows -

o Creating a binomial heap


o Finding the minimum key
o Union or merging of two binomial heaps
o Inserting a node
o Extracting minimum key
o Decreasing a key
o Deleting a node
Now, let's discuss the above-listed operations in detail.

Creating a new binomial heap

When we create a new binomial heap, it takes only O(1) time, because creating the heap just
creates its head, with no elements attached.

Finding the minimum key


As stated above, binomial heap is the collection of binomial trees, and every binomial tree
satisfies the min-heap property. It means that the root node contains a minimum value.
Therefore, we only have to compare the root node of all the binomial trees to find the minimum
key. The time complexity of finding the minimum key in a binomial heap is O(log n).

Union or Merging of two binomial heap


It is the most important operation performed on the binomial heap. Merging two trees is done
by comparing the keys at their roots: the root with the larger key becomes a child of the root
with the smaller key. The time complexity of the union operation is O(log n). Pseudocode to
merge two trees is given as follows -

function merge(a, b)
    if a.root.key <= b.root.key
        return a.add(b)
    else
        return b.add(a)
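A minimal Python rendering of this root comparison, assuming a simple node type with key, children, and degree fields (illustrative names, not from the original), might look like:

class BinomialNode:
    def __init__(self, key):
        self.key = key
        self.children = []   # child roots, in increasing order of degree
        self.degree = 0

def link(a: BinomialNode, b: BinomialNode) -> BinomialNode:
    """Merge two binomial trees of equal degree: the larger root becomes
    a child of the smaller root, producing a tree of degree + 1."""
    if a.key > b.key:
        a, b = b, a          # ensure a holds the smaller key
    a.children.append(b)
    a.degree += 1
    return a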
To perform the union of two binomial heaps, we walk along the merged root list with a pointer
x and consider the following cases -

Case 1: If degree[x] ≠ degree[next x], then move the pointer ahead.

Case 2: If degree[x] = degree[next x] = degree[sibling(next x)], then move the pointer ahead.

Case 3: If degree[x] = degree[next x] ≠ degree[sibling(next x)]
and key[x] < key[next x], then remove next x from the root list and attach it to x.

Case 4: If degree[x] = degree[next x] ≠ degree[sibling(next x)]
and key[x] > key[next x], then remove x from the root list and attach it to next x.

Now, let's understand the merging or union of two binomial heaps with the help of an example.
Consider two binomial heaps -

We can see that there are two binomial heaps, so, first, we have to combine both heaps. To
combine the heaps, first, we need to arrange their binomial trees in increasing order.

In the above heap, the pointer x first points to the node 12 with degree B0, and the pointer
next[x] points to the node 18 with degree B0. Node 7 with degree B1 is the sibling of 18;
therefore, it is represented as sibling[next[x]].
Now, first apply Case 1, which says 'if degree[x] ≠ degree[next x] then move the pointer
ahead'; but in the above example degree[x] = degree[next[x]], so this case does not apply.

Next, apply Case 2, which says 'if degree[x] = degree[next x] = degree[sibling(next x)] then
move the pointer ahead'. This case does not apply to the above heap either.

Now, apply Case 3, which says 'if degree[x] = degree[next x] ≠ degree[sibling[next x]] and
key[x] < key[next x] then remove [next x] from the root list and attach it to x'. We apply this
case because the above heap satisfies the conditions of Case 3 -

degree[x] = degree[next x] ≠ degree[sibling[next x]] (as B0 = B0 ≠ B1) and key[x] < key[next
x] (as 12 < 18).

So, remove the node 18 and attach it to 12 as shown below -

x = 12, next[x] = 7, sibling[next[x]] = 3, and degree[x] = B1, degree[next[x]] = B1,
degree[sibling[next[x]]] = B1

Now we reapply the cases to the above binomial heap. First, we try Case 1. Since x points to
node 12 and next[x] points to node 7, the degree of x is equal to the degree of next[x];
therefore, Case 1 does not apply.

Here, Case 2 does apply, as the degrees of x, next[x], and sibling[next[x]] are all equal. So,
according to this case, we move the pointer ahead.

Therefore, x = 7, next[x] = 3, sibling[next[x]] = 15, and degree[x] = B1, degree[next[x]] = B1,
degree[sibling[next[x]]] = B2

Now, let's try to apply Case 3. The first condition of Case 3 is satisfied, as degree[x] =
degree[next[x]] ≠ degree[sibling[next[x]]], but the second condition (key[x] < key[next x]) is
not satisfied.

Now, let's try to apply Case 4. The first condition of Case 4 is satisfied, and the second
condition (key[x] > key[next x]) is also satisfied. Therefore, remove x from the root list and
attach it to next[x].
Now, the pointer x points to node 3, next[x] points to node 15, and sibling[next[x]] points to
node 6. Since the degree of x is equal to the degree of next[x] but not equal to
degree[sibling[next[x]]], and the key value of x is less than the key value of next[x], we remove
next[x] and attach it to x as shown below -

Now, x points to the node 3, and next[x] points to node 6. Since the degrees of x and next[x]
are not equal, Case 1 applies. Therefore, move the pointer ahead. Now, the pointer x points to
node 6.

B4 is the last binomial tree in the heap, so the loop terminates. The above tree is the final tree
after the union of the two binomial heaps.

Insert an element in the heap


Inserting an element in the heap can be done by simply creating a new heap only with the
element to be inserted, and then merging it with the original heap. Due to the merging, the single
insertion in a heap takes O(logn) time. Now, let's understand the process of inserting a new node
in a heap using an example.

In the above heap, three binomial trees of degrees 0, 1, and 2 are given, where B0 is attached to
the head of the heap.
Suppose we have to insert node 15 in the above heap.

First, we have to combine both of the heaps. As both node 12 and node 15 are of degree 0,
node 15 is attached to node 12 as shown below -

Now, assign x to B0 with value 12, next(x) to B0 with value 15, and sibling(next(x)) to
B1 with value 7. The degree of x equals the degree of next(x), and the key value of x is smaller
than the key value of next(x), so next(x) is removed and attached to x. It is shown in the image
below -

Now, x points to node 12 with degree B1, next(x) to node 7 with degree B1, and sibling(next(x))
points to node 15 with degree B2. The degree of x is equal to the degree of next(x) but not equal
to the degree of sibling(next(x)). The key value of x is greater than the key value of next(x);
therefore, x is removed and attached to next(x) as shown in the image below -

Now, x points to node 7, and next(x) points to node 15. The degree of both x and next(x) is B2,
and the key value of x is less than the key value of next(x), so next(x) is removed and attached
to x as shown in the image below -
The above heap now consists of a single binomial tree of degree B3; this is the final binomial heap after inserting node 15.

Extracting the minimum key


It means that we have to remove an element with the minimum key value. As we know, in min-
heap, the root element contains the minimum key value. So, we have to compare the key value of
the root node of all the binomial trees. Let's see an example of extracting the minimum key from
the heap.

Suppose the heap is -

Now, compare the key values of the root nodes of the binomial trees in the above heap. The key
values of the root nodes are 12, 7, and 15, of which 7 is the minimum; therefore, remove node 7
from the tree as shown in the image below -


Now, the degree of node 12 and node 25 is B0, and the degree of node 15 is B2. Pointer x points
to the node 12, next(x) points to the node 25, and sibling(next(x)) points to the node 15. Since
the degree of x is equal to the degree of next(x) but not equal to the degree of sibling(next(x)),
and the value at pointer x is less than the value at pointer next(x), node 25 is removed and
attached to node 12 as shown in the image below -
Now, the degree of node 12 has changed to B1. The above heap is the final heap after extracting
the minimum key.

Decreasing a key
Now, let's move forward to another operation performed on the binomial heap. Once the value
of a key is decreased, it might become smaller than its parent's key, resulting in a violation of
the min-heap property. If such a case occurs after decreasing the key, exchange the element with
its parent, grandparent, and so on until the min-heap property is satisfied.

Let's understand the process of decreasing a key in a binomial heap using an example. Consider a
heap given below -

Decrease the key 45 to 7 in the above heap. After decreasing 45 to 7, the heap will be -

After decreasing the key, the min-heap property of the above heap is violated. Now, compare 7
with its parent 30; as it is less than the parent, swap 7 with 30. After swapping, the heap will
be -
Again compare the element 7 with its parent 8; it is again less than the parent, so swap the
element 7 with its parent 8. After swapping, the heap will be -

Now, the min-heap property of the above heap is satisfied. So, the above heap is the final heap
after decreasing a key.

Deleting a node from the heap


To delete a node from the heap, first, we have to decrease its key to negative infinity (or -∞) and
then delete the minimum in the heap. Now we will see how to delete a node with the help of an
example. Consider the below heap, and suppose we have to delete the node 41 from the heap -

First, replace the node with negative infinity (or -∞) as shown below -

Now, swap the negative infinity with its root node in order to maintain the min-heap property.

Now, again swap the negative infinity with its root node.
The next step is to extract the minimum key from the heap. Since the minimum key in the
above heap is -∞, we extract this key, and the heap becomes:

The above is the final heap after deleting the node 41.

Complexity of binomial heap


Now, let's see the time complexity of the operations performed on a binomial heap.

Time Complexity

Operation                    Time complexity
Finding the minimum key      O(log n)
Inserting a node             O(log n)
Extracting minimum key       O(log n)
Decreasing a key             O(log n)
Union or merging             O(log n)
Deleting a node              O(log n)

The time complexity of finding the minimum element in the heap can be reduced to O(1) by
maintaining an additional pointer to the minimum element.

Space Complexity

The space complexity of a binomial heap with 'n' elements is O(n).

Fibonacci Heap | Set 1 (Introduction)


INTRODUCTION:
1. A Fibonacci heap is a data structure used for implementing priority queues. It is a type of
heap data structure, but with several improvements over the traditional binary heap and
binomial heap data structures.
2. The key advantage of a Fibonacci heap over other heap data structures is its fast amortized
running time for operations such as insert, merge and extract-min, making it one of the most
efficient data structures for these operations. The running time of these operations in a
Fibonacci heap is O(1) for insert, O(log n) for extract-min and O(1) amortized for merge.
3. A Fibonacci heap is a collection of trees, where each tree is a heap-ordered multi-tree,
meaning that each tree has a single root node with its children arranged in a heap-ordered
manner. The trees in a Fibonacci heap are organized in such a way that the root node with the
smallest key is always at the front of the list of trees.
4. In a Fibonacci heap, when a new element is inserted, it is added as a singleton tree. When
two heaps are merged, the root list of one heap is simply appended to the root list of the other
heap. When the extract-min operation is performed, the tree with the minimum root node is
removed from the root list and its children are added to the root list.
5. One unique feature of a Fibonacci heap is the use of lazy consolidation, which is a technique
for improving the efficiency of the merge operation. In lazy consolidation, the merging of
trees is postponed until it is necessary, rather than performed immediately. This allows for
the merging of trees to be performed more efficiently in batches, rather than one at a time.
In summary, a Fibonacci heap is a highly efficient data structure for implementing priority
queues, with fast amortized running times for operations such as insert, merge and extract-min.
Its use of lazy consolidation and its multi-tree structure make it a superior alternative to
traditional binary and binomial heaps in many applications.
Heaps are mainly used for implementing priority queues. The following heaps have been
discussed previously:
 Binary Heap
 Binomial Heap
In terms of Time Complexity, Fibonacci Heap beats both Binary and Binomial Heap.
Below are amortized time complexities of the Fibonacci Heap.
1) Find Min:     Θ(1)      [same as Binary; Binomial takes O(log n)]
2) Delete Min:   O(log n)  [Θ(log n) in both Binary and Binomial]
3) Insert:       Θ(1)      [Θ(log n) in Binary, Θ(1) in Binomial]
4) Decrease-Key: Θ(1)      [Θ(log n) in both Binary and Binomial]
5) Merge:        Θ(1)      [Θ(m log n) or Θ(m+n) in Binary, Θ(log n) in Binomial]
Like a Binomial Heap, a Fibonacci Heap is a collection of trees with the min-heap or max-heap
property. In a Fibonacci Heap, trees can have any shape; even a heap in which every tree is a
single node is allowed (unlike a Binomial Heap, where every tree has to be a Binomial Tree).
Below is an example Fibonacci Heap.

Fibonacci Heap maintains a pointer to the minimum value (which is the root of a tree). All tree
roots are connected using a circular doubly linked list, so all of them can be accessed using a
single ‘min’ pointer.
The main idea is to execute operations in a “lazy” way. For example, the merge operation
simply links two heaps, and the insert operation simply adds a new tree with a single node. The
extract minimum operation is the most complicated one: it is where the delayed work of
consolidating trees is finally done. This makes delete complicated as well, since delete first
decreases the key to minus infinity and then calls extract minimum.
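The lazy flavor of insert and merge can be sketched with a bare root list; the code below is a simplified illustration (class and field names are chosen here, and consolidation and decrease-key are deliberately omitted) of how these two operations do almost no work:

class FibNode:
    def __init__(self, key):
        self.key = key
        self.children = []

class FibHeapSketch:
    """Lazy insert and merge only; the consolidation normally done during
    extract-min is omitted from this sketch."""
    def __init__(self):
        self.roots = []      # stand-in for the circular doubly linked root list
        self.min = None      # pointer to the minimum root

    def insert(self, key):                  # O(1): add a singleton tree
        node = FibNode(key)
        self.roots.append(node)
        if self.min is None or key < self.min.key:
            self.min = node

    def merge(self, other):                 # O(1): concatenate root lists
        self.roots += other.roots
        if other.min and (self.min is None or other.min.key < self.min.key):
            self.min = other.min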
Below are some interesting facts about Fibonacci Heap
1. The reduced time complexity of Decrease-Key is important in Dijkstra's and Prim's
algorithms. With a Binary Heap, the time complexity of these algorithms is O(V log V +
E log V). If a Fibonacci Heap is used, the time complexity improves to O(V log V + E).
2. Although the Fibonacci Heap looks promising complexity-wise, it has been found slow in
practice because the hidden constant factors are high.
3. Fibonacci heaps are so named because Fibonacci numbers are used in the running-time
analysis. Also, every node in a Fibonacci Heap has degree at most O(log n), and the size of a
subtree rooted at a node of degree k is at least Fk+2, where Fk is the kth Fibonacci number.
Advantages of Fibonacci Heap:
1. Fast amortized running time: The running time of operations such as insert, extract-min and
merge in a Fibonacci heap is O(1) for insert, O(log n) for extract-min and O(1) amortized for
merge, making it one of the most efficient data structures for these operations.
2. Lazy consolidation: The use of lazy consolidation allows for the merging of trees to be
performed more efficiently in batches, rather than one at a time, improving the efficiency of
the merge operation.
3. Efficient memory usage: Fibonacci heaps have a relatively small constant factor compared to
other data structures, making them a more memory-efficient choice in some applications.
Disadvantages of Fibonacci Heap:
1. Increased complexity: The structure and operations of a Fibonacci heap are more complex
than those of a binary or binomial heap, making it a less intuitive data structure for some
users.
2. Less well-known: Compared to other data structures, Fibonacci heaps are less well-known
and widely used, making it more difficult to find resources and support for implementation
and optimization.
UNIT-2
What is Hash Table?
A Hash table is defined as a data structure used to insert, look up, and remove key-value pairs
quickly. It operates on the hashing concept, where each key is translated by a hash function
into an index in an array. The index functions as a storage location for the matching
value. In simple words, it maps keys to values.

What is Load factor?


A hash table’s load factor is the ratio of the number of stored elements to the size of the table
(α = n / m). If the load factor is high, the table becomes crowded, leading to more collisions
and longer search times. An ideal load factor can be maintained with the use of a good hash
function and proper table resizing.
What is a Hash function?
A function that translates keys into array indices is known as a hash function. A good hash
function should distribute the keys evenly across the array to reduce collisions and ensure
quick lookups.
 Integer universe assumption: The keys are assumed to be integers within a certain range
according to the integer universe assumption. This enables the use of basic hashing
operations like division or multiplication hashing.
 Hashing by division: This straightforward hashing technique uses the remainder of the
key divided by the array’s size as the index. It performs well when the array size is a prime
number and the keys are evenly distributed.
 Hashing by multiplication: This straightforward hashing operation multiplies the key by a
constant between 0 and 1 and takes the fractional portion of the result. The index is then
determined by multiplying that fractional component by the array’s size. It also works well
when the keys are evenly distributed.
Choosing a hash function :
Selecting a decent hash function is based on the properties of the keys and the intended
functionality of the hash table. Using a function that evenly distributes the keys and reduces
collisions is crucial.
Criteria based on which a hash function is chosen:
 To ensure that the number of collisions is kept to a minimum, a good hash function should
distribute the keys throughout the hash table in a uniform manner. This implies that for all
pairings of keys, the likelihood of two keys hashing to the same position in the table should
be rather constant.
 To enable speedy hashing and key retrieval, the hash function should be computationally
efficient.
 It ought to be challenging to deduce the key from its hash value. As a result, attempts to
guess the key using the hash value are less likely to succeed.
 A hash function should be flexible enough to adjust as the data being hashed changes. For
instance, the hash function needs to continue to perform properly if the keys being hashed
change in size or format.
Collision resolution techniques :
Collisions happen when two or more keys point to the same array index. Chaining, open
addressing, and double hashing are a few techniques for resolving collisions.
 Open addressing: Collisions are handled by looking for the next empty slot in the table. If
the first slot is already taken, the probe sequence continues through subsequent slots until
an empty one is found. There are various ways to implement this approach, including
double hashing, linear probing, and quadratic probing.
 Separate Chaining: In separate chaining, a linked list of objects that hash to each slot in
the hash table is present. Two keys are included in the linked list if they hash to the same
slot. This method is rather simple to use and can manage several collisions.
 Robin Hood hashing: To reduce the length of the chain, collisions in Robin Hood hashing
are addressed by switching off keys. The algorithm compares the distance between the slot
and the occupied slot of the two keys if a new key hashes to an already-occupied slot. The
existing key gets swapped out with the new one if it is closer to its ideal slot. This brings
the existing key closer to its ideal slot. This method has a tendency to cut down on
collisions and average chain length.
Dynamic resizing:
This feature enables the hash table to expand or contract in response to changes in the number
of elements contained in the table. This promotes a load factor that is ideal and quick lookup
times.
Example Implementation of Hash Table
Python, Java, C++, and Ruby are just a few of the programming languages that support hash
tables. They can be used as a customized data structure in addition to frequently being included
in the standard library.
Example: hashIndex = key % noOfBuckets
Insert: Move to the bucket corresponding to the above-calculated hash index and insert the
new node at the end of the list.
Delete: To delete a node from hash table, calculate the hash index for the key, move to the
bucket corresponding to the calculated hash index, and search the list in the current bucket to
find and remove the node with the given key (if found).

Hash Functions and Types of Hash functions

Hash functions play a crucial role in various applications such as data storage, retrieval, and
cryptography. In data structures and algorithms (DSA), hash functions are primarily used in
hash tables, which are essential for efficient data management. This article delves into the
intricacies of hash functions, their properties, and the different types of hash functions used in
DSA.
What is a Hash Function?
A hash function is a function that takes an input (or ‘message’) and returns a fixed-size string of
bytes. The output, typically a number, is called the hash code or hash value. The main purpose
of a hash function is to efficiently map data of arbitrary size to fixed-size values, which are often
used as indexes in hash tables.

Key Properties of Hash Functions


 Deterministic: A hash function must consistently produce the same output for the same
input.
 Fixed Output Size: The output of a hash function should have a fixed size, regardless of the
size of the input.
 Efficiency: The hash function should be able to process input quickly.
 Uniformity: The hash function should distribute the hash values uniformly across the output
space to avoid clustering.
 Pre-image Resistance: It should be computationally infeasible to reverse the hash function,
i.e., to find the original input given a hash value.
 Collision Resistance: It should be difficult to find two different inputs that produce the same
hash value.
 Avalanche Effect: A small change in the input should produce a significantly different hash
value.
Applications of Hash Functions
 Hash Tables: The most common use of hash functions in DSA is in hash tables, which
provide an efficient way to store and retrieve data.
 Data Integrity: Hash functions are used to ensure the integrity of data by generating
checksums.
 Cryptography: In cryptographic applications, hash functions are used to create secure hash
algorithms like SHA-256.
 Data Structures: Hash functions are utilized in various data structures such as Bloom filters
and hash sets.
Types of Hash Functions
There are many hash functions that use numeric or alphanumeric keys. This article focuses on
discussing different hash functions:
1. Division Method.
2. Multiplication Method
3. Mid-Square Method
4. Folding Method
5. Cryptographic Hash Functions
6. Universal Hashing
7. Perfect Hashing
Let’s begin discussing these methods in detail.
1. Division Method
The division method involves dividing the key by a prime number and using the remainder as
the hash value:

h(k) = k mod m

where k is the key and m is the table size (preferably a prime number).

Advantages:
 Simple to implement.
 Works well when m is a prime number.
Disadvantages:
 Poor distribution if m is not chosen wisely.

2. Multiplication Method
In the multiplication method, a constant A (0 < A < 1) is used to multiply the key. The
fractional part of the product is then multiplied by m to get the hash value:

h(k) = ⌊m (kA mod 1)⌋

where ⌊ ⌋ denotes the floor function.

Advantages:
 Less sensitive to the choice of m.
Disadvantages:
 More complex than the division method.
3. Mid-Square Method
In the mid-square method, the key is squared, and the middle digits of the result are taken as the
hash value.
Steps:
1. Square the key.
2. Extract the middle digits of the squared value.
Advantages:
 Produces a good distribution of hash values.
Disadvantages:
 May require more computational effort.
4. Folding Method
The folding method involves dividing the key into equal parts, summing the parts, and then
taking the modulo with respect to m.

Steps:
1. Divide the key into parts.
2. Sum the parts.
3. Take the modulo m of the sum.

Advantages:
 Simple and easy to implement.
Disadvantages:
 Depends on the choice of partitioning scheme.
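The four arithmetic methods above can be written compactly. The snippet below is an illustrative sketch: the table size M and the constant A are chosen arbitrarily for demonstration (A here is Knuth's commonly suggested value), and the mid-square and folding variants assume decimal digits:

import math

M = 11                      # table size (a prime, for the division method)
A = (math.sqrt(5) - 1) / 2  # a commonly used constant for multiplication

def h_division(k):          # h(k) = k mod m
    return k % M

def h_multiplication(k):    # h(k) = floor(m * (kA mod 1))
    return int(M * ((k * A) % 1))

def h_mid_square(k, digits=2):
    """Square the key and take the middle digits (reduced mod M to fit the table)."""
    s = str(k * k)
    mid = len(s) // 2
    return int(s[max(0, mid - digits // 2): mid + (digits + 1) // 2]) % M

def h_folding(k, part=2):
    """Split the key into part-digit chunks, sum them, then take mod m."""
    s = str(k)
    total = sum(int(s[i:i + part]) for i in range(0, len(s), part))
    return total % M

for h in (h_division, h_multiplication, h_mid_square, h_folding):
    print(h.__name__, h(123456))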

Collision Resolution Techniques


In hashing, hash functions are used to generate hash values. The hash value is used to create an
index for the keys in the hash table. A hash function may return the same hash value for two or
more keys. When two or more keys have the same hash value, a collision happens. To handle
such collisions, we use collision resolution techniques.
Collision Resolution Techniques
There are mainly two methods to handle collision:
1. Separate Chaining
2. Open Addressing

1) Separate Chaining
The idea behind Separate Chaining is to make each cell of the hash table point to a linked list
of records that have the same hash function value. Chaining is simple but requires additional
memory outside the table.
Example: We have given a hash function and we have to insert some elements in the hash
table using a separate chaining method for collision resolution technique.
Hash function = key % 5,
Elements = 12, 15, 22, 25 and 37.
Let’s see step by step approach to how to solve the above problem:
In this way, the separate chaining method is used as the collision resolution technique.
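A minimal chaining table for this exact example (hash = key % 5, elements 12, 15, 22, 25, 37) could look like the sketch below, where a Python list stands in for each bucket's linked list:

def build_chained_table(keys, num_buckets=5):
    """Each bucket holds a list standing in for a linked list of colliding keys."""
    table = [[] for _ in range(num_buckets)]
    for key in keys:
        table[key % num_buckets].append(key)
    return table

table = build_chained_table([12, 15, 22, 25, 37])
for i, bucket in enumerate(table):
    print(i, bucket)
# 0 [15, 25]      <- 15 and 25 collide at index 0 and are chained
# 2 [12, 22, 37]  <- 12, 22, and 37 collide at index 2 and are chained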

2) Open Addressing
In open addressing, all elements are stored in the hash table itself. Each table entry contains
either a record or NIL. When searching for an element, we examine the table slots one by one
until the desired element is found or it is clear that the element is not in the table.

2.a) Linear Probing


In linear probing, the hash table is searched sequentially, starting from the original hash
location. If the location we get is already occupied, we check the next location.
Algorithm:
1. Calculate the hash key: key = data % size.
2. If hashTable[key] is empty, store the value directly: hashTable[key] = data.
3. If the hash index already has a value, check the next index: key = (key + 1) % size.
4. If the next index hashTable[key] is available, store the value; otherwise try the next index.
5. Repeat the above process until an empty slot is found.
Example: Let us consider a simple hash function as “key mod 5” and a sequence of keys that are
to be inserted are 50, 70, 76, 85, 93.
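A direct sketch of this procedure (assuming the table has at least one free slot, otherwise the probe loop would never terminate) for the keys above:

def linear_probing_insert(keys, size):
    table = [None] * size
    for data in keys:
        key = data % size                 # initial hash index
        while table[key] is not None:     # probe the next slot on collision
            key = (key + 1) % size
        table[key] = data
    return table

print(linear_probing_insert([50, 70, 76, 85, 93], 5))
# [50, 70, 76, 85, 93]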
2.b) Quadratic Probing
Quadratic probing is an open addressing scheme in computer programming for resolving hash
collisions in hash tables. Quadratic probing operates by taking the original hash index and
adding successive values of an arbitrary quadratic polynomial until an open slot is found.
An example sequence using quadratic probing is:
H + 1², H + 2², H + 3², H + 4², ..., H + k²
In this method we look for the i²-th probe (slot) in the i-th iteration, with i = 0, 1, . . ., n − 1.
We always start from the original hash location. If the location is occupied, we check the other
slots.
Let hash(x) be the slot index computed using the hash function and n be the size of the hash
table.
If the slot hash(x) % n is full, then we try (hash(x) + 1²) % n.
If (hash(x) + 1²) % n is also full, then we try (hash(x) + 2²) % n.
If (hash(x) + 2²) % n is also full, then we try (hash(x) + 3²) % n.
This process will be repeated for all the values of i until an empty slot is found
Example: Let us consider table size = 7, hash function Hash(x) = x % 7, and collision
resolution strategy f(i) = i². Insert 22, 30, and 50.
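A sketch of quadratic probing for this example (giving up after n probes, a simplification; 50 collides with 22 at slot 1 and lands at slot (1 + 2²) % 7 = 5):

def quadratic_probing_insert(keys, size):
    table = [None] * size
    for x in keys:
        for i in range(size):             # try (hash(x) + i^2) % size
            slot = (x % size + i * i) % size
            if table[slot] is None:
                table[slot] = x
                break
        else:
            raise RuntimeError(f"no slot found for {x}")
    return table

print(quadratic_probing_insert([22, 30, 50], 7))
# [None, 22, 30, None, None, 50, None]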
2.c) Double Hashing
Double hashing is a collision resolution technique for open-addressed hash tables. Double
hashing makes use of two hash functions.

The first hash function is h1(k), which takes the key and gives a location in the hash table. If
that location is unoccupied, we can simply place our key there.
But in case the location is occupied (a collision), we use a secondary hash function h2(k) in
combination with the first hash function h1(k) to find a new location in the hash table.
This combination of hash functions is of the form

h(k, i) = (h1(k) + i * h2(k)) % n

where

i is a non-negative integer that indicates a collision number,


k = element/key which is being hashed
n = hash table size.
Complexity of the double hashing algorithm:
Time complexity: O(n) in the worst case.
Example: Insert the keys 27, 43, 692, 72 into the Hash Table of size 7. where first hash-function
is h1(k) = k mod 7 and second hash-function is h2(k) = 1 + (k mod 5)
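A sketch of this example (again bounding the probe loop at n attempts as a simplification): 692 and 72 both collide on their first probe and are placed using the step h2(k) = 1 + (k mod 5).

def double_hashing_insert(keys, n=7):
    """h1(k) = k % 7, h2(k) = 1 + (k % 5); probe slots (h1 + i*h2) % n."""
    table = [None] * n
    for k in keys:
        for i in range(n):
            slot = (k % 7 + i * (1 + k % 5)) % n
            if table[slot] is None:
                table[slot] = k
                break
        else:
            raise RuntimeError(f"no slot found for {k}")
    return table

print(double_hashing_insert([27, 43, 692, 72]))
# [None, 43, 692, None, None, 72, 27]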
UNIT-3

Optimal Binary Search Tree

As we know, in a binary search tree the nodes in the left subtree have smaller values than the
root node, and the nodes in the right subtree have greater values than the root node.

We know the key values of each node in the tree, and we also know the frequency of each node
in terms of searching, i.e., how often it is searched. The frequency and key value determine the
overall cost of searching a node. The cost of searching is a very important factor in various
applications, and the overall cost of searching should be as small as possible. The time required
to search a node in an arbitrary BST is greater than in a balanced binary search tree, since a
balanced binary search tree contains fewer levels. One way to reduce the cost of searching
further is to construct an optimal binary search tree.

Let's understand through an example.

If the keys are 10, 20, 30, 40, 50, 60, 70

In the above tree, all the nodes in the left subtree are smaller than the value of the root node,
and all the nodes in the right subtree are larger than the value of the root node. The maximum
time required to search a node is proportional to the height of the tree, which for a balanced
tree is about log n.

Now we will see how many binary search trees can be made from the given number of keys.
For example: 10, 20, 30 are the keys, and the following are the binary search trees that can be
made out from these keys.

The formula for calculating the number of trees is the Catalan number:

C(n) = (2n)! / ((n + 1)! · n!)

Using this formula with n = 3, we find that a total of 5 trees can be created.
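This is easy to check numerically; the one-liner below is an illustrative computation:

from math import comb

def num_bsts(n: int) -> int:
    """Number of structurally distinct BSTs on n keys (the nth Catalan number)."""
    return comb(2 * n, n) // (n + 1)

print(num_bsts(3))   # 5, matching the five trees drawn above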

The cost required for searching an element depends on the comparisons to be made to search an
element. Now, we will calculate the average cost of time of the above binary search trees.

In the first tree, a maximum of 3 comparisons is needed. The average number of comparisons
is:

In the second tree, the average number of comparisons is:

In the third tree, the average number of comparisons is:
In the fourth tree, a maximum of 3 comparisons is needed. Therefore, the average number of
comparisons is:

In the fifth tree, a maximum of 3 comparisons is needed. Therefore, the average number of
comparisons is:

In the third case, the number of comparisons is less because the height of the tree is less, so it's a
balanced binary search tree.

So far, we have considered height-balanced binary search trees. To find the optimal binary
search tree, we will take into account the frequency of searching each key.

Let's assume that the frequencies associated with the keys 10, 20, 30 are 3, 2, 5, respectively.

The above trees have different costs. The tree with the lowest cost is considered the optimal
binary search tree. The tree with cost 17 has the lowest cost, so it is considered the optimal
binary search tree.

Dynamic Approach

Consider the below table, which contains the keys and frequencies.
First, we will calculate the values where j-i is equal to zero.

When i=0, j=0, then j-i = 0

When i = 1, j=1, then j-i = 0

When i = 2, j=2, then j-i = 0

When i = 3, j=3, then j-i = 0

When i = 4, j=4, then j-i = 0

Therefore, c[0, 0] = 0, c[1 , 1] = 0, c[2,2] = 0, c[3,3] = 0, c[4,4] = 0

Now we will calculate the values where j-i equal to 1.

When j=1, i=0 then j-i = 1

When j=2, i=1 then j-i = 1

When j=3, i=2 then j-i = 1

When j=4, i=3 then j-i = 1

Now to calculate the cost, we will consider only the jth value.

The cost of c[0,1] is 4 (The key is 10, and the cost corresponding to key 10 is 4).

The cost of c[1,2] is 2 (The key is 20, and the cost corresponding to key 20 is 2).

The cost of c[2,3] is 6 (The key is 30, and the cost corresponding to key 30 is 6)

The cost of c[3,4] is 3 (The key is 40, and the cost corresponding to key 40 is 3)
Now we will calculate the values where j-i = 2

When j=2, i=0 then j-i = 2

When j=3, i=1 then j-i = 2

When j=4, i=2 then j-i = 2

In this case, we will consider two keys.

o When i=0 and j=2, then keys 10 and 20. There are two possible trees that can be made
out from these two keys shown below:

In the first binary tree, cost would be: 4*1 + 2*2 = 8

In the second binary tree, cost would be: 4*2 + 2*1 = 10

The minimum cost is 8; therefore, c[0,2] = 8

o When i=1 and j=3, then keys 20 and 30. There are two possible trees that can be made
out from these two keys shown below:
In the first binary tree, cost would be: 1*2 + 2*6 = 14
In the second binary tree, cost would be: 1*6 + 2*2 = 10

The minimum cost is 10; therefore, c[1,3] = 10

o When i=2 and j=4, we will consider the keys at 3 and 4, i.e., 30 and 40. There are two
possible trees that can be made out from these two keys shown as below:
In the first binary tree, cost would be: 1*6 + 2*3 = 12

In the second binary tree, cost would be: 1*3 + 2*6 = 15

The minimum cost is 12, therefore, c[2,4] = 12

Now we will calculate the values when j-i = 3

When j=3, i=0 then j-i = 3

When j=4, i=1 then j-i = 3

o When i=0, j=3 then we will consider three keys, i.e., 10, 20, and 30.
The following are the trees that can be made if 10 is considered as a root node.


In the above tree, 10 is the root node, 20 is the right child of node 10, and 30 is the right child of
node 20.

Cost would be: 1*4 + 2*2 + 3*6 = 26


In the above tree, 10 is the root node, 30 is the right child of node 10, and 20 is the left child of
node 30.

Cost would be: 1*4 + 2*6 + 3*2 = 22

The following tree can be created if 20 is considered as the root node.

In the above tree, 20 is the root node, 30 is the right child of node 20, and 10 is the left child of
node 20.

Cost would be: 1*2 + 4*2 + 6*2 = 22

The following are the trees that can be created if 30 is considered as the root node.

In the above tree, 30 is the root node, 20 is the left child of node 30, and 10 is the left child of
node 20.

Cost would be: 1*6 + 2*2 + 3*4 = 22

In the above tree, 30 is the root node, 10 is the left child of node 30 and 20 is the right child of
node 10.

Cost would be: 1*6 + 2*4 + 3*2 = 20

Therefore, the minimum cost is 20, obtained when the 3rd key (30) is made the root. So, c[0,3] is equal to 20.

o When i=1 and j=4 then we will consider the keys 20, 30, 40
c[1,4] = min{ c[1,1] + c[2,4], c[1,2] + c[3,4], c[1,3] + c[4,4] } + 11
= min{0+12, 2+3, 10+0}+ 11

= min{12, 5, 10} + 11

The minimum value is 5; therefore, c[1,4] = 5+11 = 16

o Now we will calculate the values when j-i = 4


When j=4 and i=0 then j-i = 4

In this case, we will consider four keys, i.e., 10, 20, 30 and 40. The frequencies of 10, 20, 30 and
40 are 4, 2, 6 and 3 respectively.

w[0, 4] = 4 + 2 + 6 + 3 = 15

If we consider 10 as the root node then

C[0,4] = min{c[0,0] + c[1,4]} + w[0,4]

= min{0 + 16} + 15 = 31

If we consider 20 as the root node then

C[0,4] = min{c[0,1] + c[2,4]} + w[0,4]

= min{4 + 12} + 15

= 16 + 15 = 31

If we consider 30 as the root node then,

C[0,4] = min{c[0,2] + c[3,4]} +w[0,4]

= min {8 + 3} + 15

= 26
If we consider 40 as the root node then,

C[0,4] = min{c[0,3] + c[4,4]} + w[0,4]

= min{20 + 0} + 15

= 35

In the above cases, we have observed that 26 is the minimum cost; therefore, c[0,4] is equal to
26.

The optimal binary tree can be created as:

The general formula for calculating the minimum cost is:

C[i,j] = min{ C[i,k-1] + C[k,j] : i < k ≤ j } + w(i,j)

where w(i,j) is the sum of the frequencies of the keys between indices i and j.
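This recurrence can be implemented directly; the sketch below (illustrative naming) uses the frequencies 4, 2, 6, 3 from the worked example and reproduces c[0,4] = 26:

def optimal_bst_cost(freq):
    """Cost table c[i][j] for keys i+1..j, following the recurrence
    c[i][j] = min over k in (i, j] of (c[i][k-1] + c[k][j]) + w(i, j),
    where w(i, j) is the sum of frequencies of keys i+1..j."""
    n = len(freq)
    prefix = [0] * (n + 1)                  # prefix sums: w(i, j) = prefix[j] - prefix[i]
    for idx, f in enumerate(freq):
        prefix[idx + 1] = prefix[idx] + f

    c = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n + 1):          # subproblem size j - i
        for i in range(0, n - length + 1):
            j = i + length
            w = prefix[j] - prefix[i]
            c[i][j] = min(c[i][k - 1] + c[k][j] for k in range(i + 1, j + 1)) + w
    return c

c = optimal_bst_cost([4, 2, 6, 3])   # frequencies of keys 10, 20, 30, 40
print(c[0][4])                        # 26, matching the worked example above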

2-3 Trees | (Search, Insert and Deletion)

In binary search trees we have seen that the average-case time for operations like
search/insert/delete is O(log N) and the worst-case time is O(N), where N is the number of
nodes in the tree.
Like other height-balanced trees such as AVL trees, Red-Black trees, and B-trees, the 2-3 tree
is also a height-balanced tree.
The time complexity of search/insert/delete is O(log N).
A 2-3 tree is a B-tree of order 3.
Properties of 2-3 tree:
 Nodes with two children are called 2-nodes. The 2-nodes have one data value and two
children
 Nodes with three children are called 3-nodes. The 3-nodes have two data values and three
children.
 Data is stored in sorted order.
 It is a balanced tree.
 All the leaf nodes are at same level.
 Each node is either a leaf, a 2-node, or a 3-node.
 Insertion is always done at a leaf.
Search: To search a key K in given 2-3 tree T, we follow the following procedure:
Base cases:
1. If T is empty, return False (key cannot be found in the tree).
2. If current node contains data value which is equal to K, return True.
3. If we reach the leaf-node and it doesn’t contain the required key value K, return False.
Recursive Calls:
1. If K < currentNode.leftVal, we explore the left subtree of the current node.
2. Else if currentNode.leftVal < K < currentNode.rightVal, we explore the middle subtree of
the current node.
3. Else if K > currentNode.rightVal, we explore the right subtree of the current node.
Consider the following example:
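Since the example figure is not reproduced here, the recursive search procedure can also be rendered directly in code; the node layout below (left_val/right_val plus left, middle, and right child pointers) is an assumed representation, not code from the original:

class Node23:
    def __init__(self, left_val, right_val=None):
        self.left_val = left_val
        self.right_val = right_val          # None for a 2-node
        self.left = self.middle = self.right = None

def search(node, k):
    if node is None:                        # empty tree: key cannot be found
        return False
    if k == node.left_val or k == node.right_val:
        return True
    if k < node.left_val:
        return search(node.left, k)
    if node.right_val is None or k < node.right_val:
        # for a 2-node, the middle pointer acts as its right child in this sketch
        return search(node.middle, k)
    return search(node.right, k)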
Insertion: There are 3 possible cases in insertion, which are discussed below:
 Case 1: Insert in a node with only one data element.
 Case 2: Insert in a node with two data elements whose parent contains only one data
element.
 Case 3: Insert in a node with two data elements whose parent also contains two data
elements.

In Deletion Process for a specific value:


 To delete a value, it is replaced by its in-order successor and then removed.
 If a node is left with less than one data value then two nodes must be merged together.
 If a node becomes empty after deleting a value, it is then merged with another node.
To Understand the deletion process-
Consider the 2-3 tree given below

Given 2-3 Tree

delete the following values from it: 69, 72, 99, 81.


To delete 69, swap it with its in-order successor, that is, 72. 69 now comes in the leaf node.
Remove the value 69 from the leaf node.

After deletion 69

To delete 72, 72 is an internal node. To delete this value swap 72 with its in-order successor
81 so that 72 now becomes a leaf node. Remove the value 72 from the leaf node.

After deletion 72

Now there is a leaf node that has less than 1 data value thereby violating the property of a 2-3
tree. So the node must be merged.
To merge the node, pull down the lowest data value in the parent’s node and merge it with its
left sibling.

Rebalancing to Satisfy 2-3 Tree Property

To delete 99, 99 is present in a leaf node, so the data value can be easily removed.
After deletion 99

Now there is a leaf node that has less than 1 data value, thereby violating the property of a 2-3
tree.
So the node must be merged. To merge the node, pull down the lowest data value in the
parent’s node and merge it with its left sibling.

Rebalancing to Satisfy 2-3 Tree Property

To delete 81, 81 is an internal node. To delete this value swap 81 with its in-order successor
90 so that 81 now becomes a leaf node. Remove the value 81 from the leaf node.

After deletion 81

Now there is a leaf node that has less than 1 data value, thereby violating the property of a 2-3
tree. So the node must be merged. To merge the node, pull down the lowest data value in the
parent’s node and merge it with its left sibling.

Rebalancing to Satisfy 2-3 Tree property

An internal node cannot be empty, so pull down the lowest data value from the parent’s node
and merge the empty node with its left sibling.
Rebalancing to Satisfy 2-3 Tree Property

NOTE: In a 2-3 tree, each interior node has either two or three children. This means that a 2-3
tree is not a binary tree.
Unit-4
Multiway Trees in Data Structures

Introduction:
In computer science and information technology, multiway trees (also referred to as multi-ary
trees or generic trees) are a fundamental type of data structure. They offer a flexible approach
to describing hierarchical structures and are used in a variety of contexts, including file
systems, databases, and parse trees. In this article we shall examine the idea of multiway trees,
their attributes, types, operations, and importance in data structures.

Characteristics of Multiway Trees:


A multiway tree is a tree data structure in which each node may have more than two children.
Unlike binary trees, where each node can have at most two children, multiway trees allow a
node to have any number of children. This flexibility makes them ideal for modeling intricate
hierarchical relationships.

Key characteristics of multiway trees include (a small structural sketch follows this list):

o Node Structure: Each node in a multiway tree has numerous pointers pointing to the
nodes that are its children. From node to node, the number of pointers may differ.
o Root Node: The root node, from which all other nodes can be accessed, is the node that
acts as the tree's origin.
o Leaf Nodes: Nodes with no children are known as leaf nodes; internal nodes, by
contrast, can have any number of children.
o Height and Depth: Multiway trees have height (the maximum depth of the tree) and
depth (the distance from the root to a node). The depth and height of the tree might vary
substantially due to the different number of children.
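A minimal sketch of the node structure described above (class and method names are illustrative, not from the original):

class MultiwayNode:
    """A generic multiway tree node: any number of children per node."""
    def __init__(self, value):
        self.value = value
        self.children = []          # variable-length list of child pointers

    def add_child(self, value):
        child = MultiwayNode(value)
        self.children.append(child)
        return child

    def is_leaf(self):
        return not self.children

# Example: a small directory-like hierarchy
root = MultiwayNode("/")
home = root.add_child("home")
home.add_child("alice")
home.add_child("bob")
root.add_child("etc")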
Types of Multiway Trees:
Multiway trees come in a variety of forms, each with particular properties and uses. The most
typical varieties include:

o General Multiway Trees: Each node in these trees is allowed to have any number of
children. They are utilized in a broad variety of applications, such as company
organizational hierarchies and directory structures in file systems.
o B-trees: In file systems and databases, B-trees are a sort of self-balancing multiway tree.
They are effective for insertions, deletions, and searches because they are made to
preserve a balanced structure.
o Ternary Trees: A ternary tree is a particular variety of multiway tree in which each node
has at most three children. They are frequently incorporated into optimization and
language-processing methods.
o Quad Trees and Oct Trees: These are two examples of specialized multiway trees used
in geographic information systems and computer graphics for spatial segmentation and
indexing. Oct trees have eight children per node, compared to quad trees' four.

Operations on Multiway Trees:


Multiple operations are supported by multiway trees, allowing for effective data modification
and retrieval. The following are a few examples of basic operations:

o Insertion: Adding a new node to the tree while making sure it preserves the structure and
characteristics of the tree.
o Deletion: A node is deleted from the tree while still preserving its integrity.
o Search: Finding a particular node or value within the tree using search.
o Traversal: Traversal is the process of going through every node in the tree in a particular
order, such as pre-order, in-order, or post-order.
o Balancing (for B-trees): Ensuring the tree stays balanced after insertions and deletions,
so that search and retrieval remain fast.

Significance of Multiway Trees:


For the following reasons, multiway trees are very important in data structures and computer
science.

o Versatility: Multiway trees are useful for a wide range of applications because they offer
a versatile way to describe hierarchical relationships.
o Efficiency: For managing and organizing massive datasets in databases and file systems,
certain varieties of multiway trees, such as B-trees, are built for efficiency.
o Parsing and Syntax Trees: Multiway trees are crucial for compilers and interpreters
because they are utilized in the parsing and syntax analysis of programming languages.
o Spatial Data Structures: Multiway trees, such as quadtrees and oct trees, allow for
effective spatial indexing and partitioning in geographic information systems and
computer graphics.

digital search tree (DST)


A digital search tree (DST) is a fundamental data structure on words that finds various
applications from the popular Lempel–Zivʼ78 data compression scheme to distributed hash
tables. The profile of a DST measures the number of nodes at the same distance from the
root; it depends on the number of stored strings and the distance from the root. Most
parameters of DST (e.g., depth, height, fillup) can be expressed in terms of the profile. We
study here asymptotics of the average profile in a DST built from sequences generated
independently by a memoryless source. After representing the average profile by a
recurrence, we solve it using a wide range of analytic tools. This analysis is surprisingly
demanding but once it is carried out it reveals an unusually intriguing and interesting
behavior. The average profile undergoes phase transitions when moving from the root to the
longest path: at first it resembles a full tree until it abruptly starts growing polynomially and
oscillating in this range. These results are derived by methods of analytic combinatorics such
as generating functions, Mellin transform, poissonization and depoissonization, the saddle
point method, singularity analysis and uniform asymptotic analysis.
