Data Structures
1. Array.
2. Linked list.
3. Stack.
4. Queue.
In this article, we will learn about Abstract Data Types (ADTs).
But before understanding what an ADT is, let us consider
different built-in data types provided by programming
languages. Data types such as int, float, double, and long are
built-in types that allow us to perform basic operations like
addition, subtraction, division, and multiplication. However,
there are scenarios where we need custom operations for
different data types. These operations are defined based on
specific requirements and are tailored as needed. To address
such needs, we can create data structures along with their
operations, which are known as Abstract Data Types (ADTs).
For example, we use primitive types like int, float, and char with
the understanding that we can operate on them without any
knowledge of their implementation details. ADTs work similarly:
they define what operations are possible without detailing how
those operations are implemented.
Defining ADTs: Examples
Now, let’s understand three common ADTs: List ADT, Stack ADT,
and Queue ADT.
1. List ADT
View of a list
The List ADT needs to store the required data in a sequence and
should have the following operations:
get(): Return an element from the list at any given position.
insert(): Insert an element at any position in the list.
remove(): Remove the first occurrence of any element from a
non-empty list.
removeAt(): Remove the element at a specified location from
a non-empty list.
replace(): Replace an element at any position with another
element.
size(): Return the number of elements in the list.
isEmpty(): Return true if the list is empty; otherwise, return
false.
isFull(): Return true if the list is full; otherwise, return false.
2. Stack ADT
View of stack
The Stack ADT stores elements in LIFO (Last-In-First-Out) order:
elements are inserted and removed only at one end, called the top.
It should support the following operations:
push(): Insert an element at the top of the stack.
pop(): Remove and return the top element, if the stack is not empty.
peek(): Return the top element without removing it, if the stack is
not empty.
size(): Return the number of elements in the stack.
isEmpty(): Return true if the stack is empty; otherwise, return false.
3. Queue ADT
View of queue
The Queue ADT follows a design similar to the Stack ADT, but the
order of insertion and deletion is FIFO. Elements are inserted at one
end (called the rear) and removed from the other end (called the
front). It should support the following operations:
enqueue(): Insert an element at the end of the queue.
dequeue(): Remove and return the first element of the queue,
if the queue is not empty.
peek(): Return the front element of the queue without removing it,
if the queue is not empty.
size(): Return the number of elements in the queue.
isEmpty(): Return true if the queue is empty; otherwise,
return false.
1.3) Overview of Time and Space Complexity Analysis for Linear Data Structures
Time and space complexity analysis measures how much time and
memory an algorithm takes to run. It is important for designing
software, building websites, and analyzing large datasets.
Time complexity
The amount of time it takes an algorithm to run.
The number of operations, such as comparisons, required to complete
the algorithm.
The worst-case time complexity is the maximum time it takes for any input.
The average-case time complexity is the average time it takes over all inputs.
The best-case time complexity is the minimum time it takes for an input.
Space complexity
The amount of memory an algorithm uses.
The fixed amount of space required by the algorithm.
The variable amount of space required by the algorithm, which
depends on the input size.
Calculating time and space complexity
1. Identify the basic operation in the algorithm.
2. Count how many times the basic operation is performed.
3. Express the count as a function of the input size.
4. Simplify the expression and identify the dominant term.
5. Express the time complexity using Big O notation.
Linear Search
In Linear Search, we iterate over all the elements of the array and
check whether the current element is equal to the target element.
If we find an element equal to the target, we return its index.
Otherwise, if no element is equal to the target element, we return
-1 to indicate the element was not found. Linear search is also
known as sequential search.
For example: Consider the array arr[] = {10, 50, 30, 70, 80,
20, 90, 40} and key = 30
Binary Search
1. Initial Setup: Binary search begins by looking at the middle element of the
sorted array.
2. Comparison:
o If the middle element matches the target value, the search is complete.
o If the middle element is greater than the target, the target must be in
the left half of the array (since the array is sorted). So, the search
continues in the left half.
o If the middle element is less than the target, the target must be in the
right half of the array, and the search continues there.
3. Repeat: This process repeats, halving the search range each time, until the
element is found or the search range is empty (indicating the element is not
in the array).
Example:
Given a sorted array: [2, 5, 8, 12, 15, 19, 25, 32, 37, 40] and
the target element 15.
1. Time Complexity:
o Best case: O(1) — The target is found in the first comparison
(if it's the middle element).
o Average case: O(log n) — With each iteration, the search space is
halved, so the time complexity grows logarithmically with the size
of the array.
o Worst case: O(log n) — The algorithm may need to make log n
comparisons to exhaust the search space.
2. Space Complexity:
o Space Complexity: O(1) — Binary search operates in constant
space, as it only requires a few variables to track the low, high,
and mid indices.
Sorted Array/List: The array must be sorted before applying binary search.
If the array is not sorted, you would need to sort it first, which takes
O(n log n) time.
Efficient Search Space Reduction: Binary search reduces the search space
by half each time, which is why it is much more efficient than linear search
for large datasets.
Advantages:
Efficient for Large Datasets: With a time complexity of O(log n),
binary search is very efficient compared to linear search,
especially for large datasets.
Constant Space: It uses O(1) space, making it very memory-efficient.
Disadvantages:
Requires Sorted Data: Binary search only works on sorted arrays, so
unsorted data must be sorted first.
Requires Random Access: It is not well suited to structures without
constant-time indexing, such as linked lists.
Bubble Sort
Bubble Sort is a simple comparison-based sorting algorithm that
repeatedly compares adjacent elements and swaps them if they are in
the wrong order.
1. Iterate through the list: Starting at the first element, compare the current
element with the next element.
2. Swap if necessary: If the current element is greater than the next one (for
ascending order), swap the two elements.
3. Repeat: Continue this process for the entire list. After each pass, the largest
element "bubbles" to the correct position at the end of the list.
4. Optimization: If during a pass no swaps are made, the list is already sorted,
and the algorithm can terminate early.
Example:
Initial array: [5, 3, 8, 4, 2]
1. First Pass:
o Compare 5 and 3, swap → [3, 5, 8, 4, 2]
o Compare 5 and 8, no swap → [3, 5, 8, 4, 2]
o Compare 8 and 4, swap → [3, 5, 4, 8, 2]
o Compare 8 and 2, swap → [3, 5, 4, 2, 8]
o After the first pass, the largest element (8) is in its correct position at
the end of the list.
2. Second Pass:
o Compare 3 and 5, no swap → [3, 5, 4, 2, 8]
o Compare 5 and 4, swap → [3, 4, 5, 2, 8]
o Compare 5 and 2, swap → [3, 4, 2, 5, 8]
o After the second pass, the second-largest element (5) is in its correct
position.
3. Third Pass:
o Compare 3 and 4, no swap → [3, 4, 2, 5, 8]
o Compare 4 and 2, swap → [3, 2, 4, 5, 8]
o After the third pass, the third-largest element (4) is in its correct
position.
4. Fourth Pass:
o Compare 3 and 2, swap → [2, 3, 4, 5, 8]
o The list is now sorted.
Time Complexity:
Best case: O(n) — the array is already sorted, and the optimized
version terminates after a single pass with no swaps.
Average and worst case: O(n^2) — each of up to n - 1 passes may
compare and swap up to n - 1 adjacent pairs.
Space Complexity:
O(1) — Bubble Sort is an in-place sorting algorithm, meaning it
doesn’t require additional storage proportional to the input size.
Advantages:
Simple to understand and implement; sorts in place using O(1) extra
space; the optimized version detects an already-sorted list early.
Disadvantages:
Its O(n^2) comparisons and swaps make it inefficient for large datasets.
Selection Sort
Selection Sort is a simple and intuitive sorting algorithm. It repeatedly selects the
smallest (or largest) element from the unsorted portion of the array and swaps it
with the element at the beginning of the unsorted portion. This process is repeated
until the entire array is sorted.
Example:
Step-by-step Process:
Consider the array [64, 25, 12, 22, 11].
First Pass:
Find the smallest element in the whole array (11) and swap it with
the first element, giving [11, 25, 12, 22, 64].
Second Pass:
Now, we focus on the unsorted portion of the array: [25, 12, 22, 64].
Find the smallest element in this subarray:
o Compare 25 with 12, 12 is smaller.
o Compare 12 with 22, 12 is smaller.
o Compare 12 with 64, 12 is still smaller.
o The smallest element is 12.
1. Time Complexity:
o Best, Average, and Worst Case: O(n^2) — Selection Sort
always compares every element with every other element in the
unsorted portion, resulting in a quadratic number of comparisons.
Best Case: Even if the array is already sorted, Selection Sort
still performs O(n^2) comparisons.
Worst Case: The worst case happens when the array is sorted
in reverse order, which still requires O(n^2) comparisons.
2. Space Complexity:
o Space Complexity: O(1) — Selection Sort is an in-place
sorting algorithm, meaning it requires only a constant amount of
additional space for the temporary variable used in swapping.
Insertion Sort
Insertion Sort is a simple comparison-based sorting algorithm that builds the final
sorted array one element at a time. It is much like sorting playing cards in your
hands, where you take one card at a time and place it in the correct position relative
to the cards already sorted.
1. Start with the second element (since a single element is trivially sorted).
2. Compare the current element with the elements in the sorted portion of
the array (to its left).
3. Shift all elements that are greater than the current element one position to
the right to make space for the current element.
4. Insert the current element into its correct position.
5. Repeat this process for all the elements in the array.
Example:
Step-by-step Process:
Initial Array:
[5, 2, 9, 1, 5, 6]
We will start with the second element (index 1), because the first element is
trivially considered sorted.
First Pass (i = 1):
Current element: 2
Compare 2 with 5 (the element to its left).
o 2 < 5, so shift 5 to the right.
o The array now looks like: [5, 5, 9, 1, 5, 6].
Insert 2 into its correct position: Place 2 at the start.
The array becomes: [2, 5, 9, 1, 5, 6].
Second Pass (i = 2):
Current element: 9
Compare 9 with 5 (the element to its left).
o 9 > 5, no shift needed.
Insert 9: It's already in the correct position.
The array remains: [2, 5, 9, 1, 5, 6].
Third Pass (i = 3):
Current element: 1
Compare 1 with 9 (shift 9 right).
o The array becomes: [2, 5, 9, 9, 5, 6].
Compare 1 with 5 (shift 5 right).
o The array becomes: [2, 5, 5, 9, 5, 6].
Compare 1 with 2 (shift 2 right).
o The array becomes: [2, 2, 5, 9, 5, 6].
Insert 1 at the start.
The array becomes: [1, 2, 5, 9, 5, 6].
Fourth Pass (i = 4):
Current element: 5
Compare 5 with 9 (shift 9 right).
o The array becomes: [1, 2, 5, 9, 9, 6].
Compare 5 with 5 (no shift needed).
Insert 5 after the first 5.
The array becomes: [1, 2, 5, 5, 9, 6].
Fifth Pass (i = 5):
Current element: 6
Compare 6 with 9 (shift 9 right).
o The array becomes: [1, 2, 5, 5, 9, 9].
Compare 6 with 5 (no shift needed).
Insert 6 after the second 5.
The array becomes: [1, 2, 5, 5, 6, 9].
Final Sorted Array: [1, 2, 5, 5, 6, 9]
1. Time Complexity:
o Best case: O(n) — This happens when the array is already
sorted. In this case, only one comparison is made for each element,
and no shifting occurs.
o Worst case: O(n^2) — This occurs when the array is
sorted in reverse order. Each element needs to be compared and
shifted to the beginning.
o Average case: O(n^2) — On average, the algorithm will
need to perform about n^2 / 2 comparisons and shifts.
2. Space Complexity:
o Space Complexity: O(1) — Insertion sort is an in-place
sorting algorithm, meaning it doesn't require any additional space
besides the input array.
When to use Insertion Sort:
Small datasets: It is efficient for small arrays where the overhead of more
complex algorithms isn't necessary.
Partially sorted arrays: If the array is already partially sorted, Insertion Sort
can be much faster than other algorithms.
Memory-constrained environments: It uses only O(1) extra space,
so it is useful when memory is limited.
Chapter 2
2.1) What is a Linked List?
A linked list is a linear data structure that stores a collection of
"nodes" connected together via links, i.e., pointers. Linked list nodes
are not stored at contiguous locations; rather, they are linked using
pointers to different memory locations. A node consists of the data
value and a pointer to the address of the next node within the linked
list.
A linked list is a dynamic linear data structure whose memory can be
allocated or de-allocated at run time based on insertion or deletion
operations; this helps in using system memory efficiently. Linked
lists can be used to implement various data structures like stacks,
queues, graphs, hash maps, etc.
A linked list starts with a head node which points to the first node. Every
node consists of data which holds the actual data (value) associated with
the node and a next pointer which holds the memory address of the next
node in the linked list. The last node is called the tail node in the list
which points to null indicating the end of the list.
Since the last node and the first node of the circular linked list are
connected, the traversal in this linked list will go on forever until it is
broken.
The basic operations on linked lists are insertion, deletion,
searching, traversal/display, and deleting an element at a given key.
These operations on a Singly Linked List are given below −
Adding a new node to a linked list is a multi-step activity. First,
create a node using the same node structure and find the location
where it has to be inserted.
Then make the new node point to the right (next) node, and make the
left (previous) node point to the new node.
This will put the new node in the middle of the two. The new list
should look like this −
Insertion in linked list can be done in three different ways. They are
explained as follows −
Insertion at Beginning
In this operation, we are adding an element at the beginning of the list.
Algorithm
1. START
2. Create a node to store the data
3. Check if the list is empty
4. If the list is empty, add the data to the node and
assign the head pointer to it.
5. If the list is not empty, add the data to a node and link to the
current head. Assign the head to the newly added node.
6. END
Insertion at End
In this operation, we are adding an element at the end of the list.
Algorithm
1. START
2. Create a new node and assign the data
3. Find the last node
4. Point the last node to new node
5. END
Insertion at a Given Position
In this operation, we are adding an element at a given position in the list.
Algorithm
1. START
2. Create a new node and assign data to it
3. Iterate until the node before the desired position is found
4. Point the new node to the next node, and point the previous
node to the new node
5. END
Deletion is also a multi-step process. First, locate the target node
to be removed by using a searching algorithm.
The left (previous) node of the target node should now point to the
next node of the target node −
This removes the link that was pointing to the target node. Now we
clear the pointer the target node holds:
targetNode.next = NULL;
If we still need the deleted node, we can keep it in memory;
otherwise, we can simply deallocate its memory and wipe off the
target node completely.
Deletion in linked lists is also performed in three different ways. They are
as follows −
Deletion at Beginning
In this deletion operation, we are deleting an element from the
beginning of the list. For this, we point the head to the second node.
Algorithm
1. START
2. Assign the head pointer to the next node in the list
3. END
Deletion at Ending
In this deletion operation, we are deleting an element from the end
of the list.
Algorithm
1. START
2. Iterate until you find the second last element in the list.
3. Assign NULL to the second last element in the list.
4. END
Deletion at a Given Position
In this deletion operation, we are deleting the element at a given
position in the list.
Algorithm
1. START
2. Iterate until the node at the given position in the list is found.
3. Assign the node after the current node to the previous node's
next pointer.
4. END
To reverse a linked list, every node except the first node (pointed
to by the head) should point to its predecessor, making the
predecessor its new successor. The first node will point to NULL.
Finally, we make the head point to the new first node (the former
last node), using a temp pointer.
Algorithm
Step by step process to reverse a linked list is as follows −
1. START
2. We use three pointers to perform the reversal: prev, next, and
the current node (starting at head).
3. For the current node, save its next node, point the current node
to the prev node, then advance prev and the current node.
4. Iteratively repeat step 3 for all the nodes in the list.
5. Assign head to the prev node.
6. END
Searching for an element in the list is done using a key element.
This operation works the same way as array search: compare every
element in the list with the given key element.
Algorithm
1. START
2. If the list is not empty, iteratively check whether the list
contains the key.
3. If the key element is not present in the list, the search is
unsuccessful.
4. END
The traversal operation walks through all the elements of the list in
order and displays them in that order.
Algorithm
1. START
2. While the list is not empty and did not reach the end of the list,
print the data in each node
3. END
2.2) Comparing Arrays and Linked Lists
Linked List vs Array
Arrays store elements in contiguous memory and support O(1) random
access by index, but their size is fixed and inserting or deleting in
the middle requires shifting elements. Linked lists store nodes at
scattered memory locations connected by pointers; they can grow and
shrink at run time and support O(1) insertion or deletion at a known
node, but access by position takes O(n) and each node carries extra
pointer overhead.
Applications of Linked Lists
1. Dynamic Memory Allocation
Linked lists are often used in dynamic memory allocation and management,
where the size of data structures needs to change at runtime.
2. Implementing Other Data Structures
Linked lists are used as a base to implement other data structures such as:
Stacks
Queues
Sparse Matrices
3. Undo Mechanism in Applications
Many applications (e.g., text editors or Photoshop) use linked lists to store
states for the "Undo" and "Redo" operations.
Each node represents a state of the document, and traversal allows going
back and forth between changes.
4. Dynamic Data Storage
Linked lists are suitable for storing data where the size is unknown or
changes frequently.
Example:
File systems like FAT (File Allocation Table) use linked lists to represent file
blocks. Each block contains a pointer to the next block, enabling sequential
storage of file data.
5. Browser History
Doubly and circular linked lists are useful in memory-critical systems where
efficient memory utilization is a priority. For example, a doubly
linked list is used to manage the history of visited web pages,
allowing users to navigate forward and backward efficiently.
6. Music and Media Players
Circular linked lists are used to implement playlists in media players, where
the last song is connected to the first, allowing continuous playback.
7. Graph Representation
Linked lists (particularly adjacency lists) are used in graph algorithms for
efficient representation and traversal of nodes.
UNIT-3
3.1) Stack in Data Structures: What is a Stack and Its Applications
A stack is a linear data structure that follows the LIFO
(Last-In-First-Out) principle and allows insertion and deletion
operations from only one end, called the top. A stack can be
implemented using contiguous memory (an array) or non-contiguous
memory (a linked list). Stacks play a vital role in many
applications.
The stack data structure follows the principle known as LIFO (Last
In First Out), also described as FILO (First In Last Out). Real-life
examples of a stack are a deck of cards, piles of books, piles of
money, and many more.
Stack Representation in Data Structures
You can only see the top-most book, namely 40, which is kept on top
of the stack.
If you want to insert a new book, namely 50, you must update the top
and then insert the new book.
And if you want to access any other book other than the topmost book that is
40, you first remove the topmost book from the stack, and then the top will
point to the next topmost book.
After working on the representation of stacks in data structures, you will see
some basic operations performed on the stacks in data structures.
The following are some operations that are implemented on the stack.
Push Operation
The push operation involves inserting a new element into the stack.
Since you have only one end to work with, the new element is always
inserted at the top of the stack.
Pop Operation
Pop operation refers to removing the element from the stack again since you
have only one end to do all top of the stack. So removing an element from the
top of the stack is termed pop operation.
Peek Operation
Peek operation refers to retrieving the topmost element in the stack without
removing it from the collections of data elements.
Besides these, two helper operations check the state of the stack:
isFull() and isEmpty().
isFull()
The following is the algorithm of the isFull() function:
begin
   if top equals MAXSIZE - 1
      return true
   else
      return false
end
The corresponding code:
bool isFull() {
   if (top == MAXSIZE - 1)   /* top starts at -1, so the last valid index is MAXSIZE - 1 */
      return true;
   else
      return false;
}
isEmpty()
The following is the algorithm of the isEmpty() function:
begin
   if top equals -1
      return true
   else
      return false
end
The corresponding code:
bool isEmpty() {
   if (top == -1)
      return true;
   else
      return false;
}
Push Operation
Algorithm:
1. Check if the stack is full.
2. If the stack is full, produce an overflow error and exit.
3. If the stack is not full, increment top to point to the next
empty space.
4. Add the data element to the stack location where top is pointing.
5. Success.
The corresponding code:
void push(int item) {
   if (!isFull()) {
      top = top + 1;
      stack[top] = item;
   } else {
      printf("stack is full");
   }
}
Pop Operation
Algorithm:
1. Check if the stack is empty.
2. If the stack is empty, produce an underflow error and exit.
3. If the stack is not empty, access the data element at which top
is pointing.
4. Decrease the value of top by 1.
5. Success: return the item.
The corresponding code:
int pop() {
   if (!isEmpty()) {
      int item = stack[top];
      top = top - 1;
      return item;
   } else {
      printf("stack is empty");
      return -1;
   }
}
Peek Operation
The peek operation returns the top element without changing top:
int peek() {
   return stack[top];
}
You can implement stacks in data structures using two underlying
structures: an array and a linked list.
Array: In array implementation, the stack is formed using an array. All the
operations are performed using arrays. You will see how all operations can
be implemented on the stack in data structures using an array data structure.
Properties of a stack:-
1.) LIFO behaviour: The fundamental principle of a stack is that the
last element added will be the first one removed, like a pile of
plates where you only access the top plate.
2.) Single access point: All insertions and deletions occur only at
the top of the stack.
Implementing Stacks using Linked Lists:-
Linked List: Every new element is inserted as the top element in the
linked list implementation of stacks. That means top points to every
newly inserted element. Whenever you want to remove an element from
the stack, remove the node indicated by top, and move top to the
next node in the list.
3.3) Applications of Stacks in Expression Evaluation, Backtracking, and Reversing a List
1. Expression Evaluation:
Infix to Postfix Conversion: The stack helps convert infix expressions (like
A + B * C) to postfix notation (A B C * +) by handling operator
precedence and associativity.
Postfix Expression Evaluation: After conversion to postfix, the expression
can be evaluated using a stack. Operands are pushed onto the stack, and
when an operator is encountered, operands are popped off, the operation is
performed, and the result is pushed back onto the stack.
2. Backtracking:
Example: In a maze, you might explore paths by moving forward (pushing each
decision onto the stack). If you hit a dead end, you backtrack by popping the stack
to find the previous decision point.
3. Reversing a List:
A stack can be used to reverse a list because stacks operate on a Last In, First Out
(LIFO) principle, meaning the last element added will be the first one to be
removed.
Algorithm: Push each element of the list onto a stack, and then pop all
elements from the stack, which will result in the reversed order.