
Data structures

1.1) Definition and Importance of Linear Data Structures:

 A linear data structure is a way to store data in sequential order, with each element connected to the next and previous elements.
 Linear data structures are easy to understand and implement.
Importance:
 Linear data structures are the building blocks for more complex algorithms and structures.
 They are used in many applications, from simple data storage to complex problem solving.

Types of Linear Data Structure

There are four types of linear data structures:

1. Array.
2. Linked list.
3. Stack.
4. Queue.

1.2) Abstract Data Types



 In this article, we will learn about Abstract Data Types (ADTs).
But before understanding what an ADT is, let us consider
different built-in data types provided by programming
languages. Data types such as int, float, double, and long are
built-in types that allow us to perform basic operations like
addition, subtraction, division, and multiplication. However,
there are scenarios where we need custom operations for
different data types. These operations are defined based on
specific requirements and are tailored as needed. To address
such needs, we can create data structures along with their
operations, which are known as Abstract Data Types (ADTs).

Abstract Data Type (ADT)



An Abstract Data Type (ADT) is a conceptual model that defines a


set of operations and behaviours for a data type, without
specifying how these operations are implemented or how data is
organized in memory.

For example, we use primitive types like int, float, and char with
the understanding that we can operate on them without any
knowledge of their implementation details. ADTs work similarly,
defining what operations are possible without detailing how they
are implemented.
Defining ADTs: Examples
Now, let's understand three common ADTs: List ADT, Stack ADT,
and Queue ADT.
1. List ADT

View of List

The List ADT needs to store the required data in sequence and
should support the following operations:
 get(): Return an element from the list at any given position.
 insert(): Insert an element at any position in the list.
 remove(): Remove the first occurrence of any element from a
non-empty list.
 removeAt(): Remove the element at a specified location from
a non-empty list.
 replace(): Replace an element at any position with another
element.
 size(): Return the number of elements in the list.
 isEmpty(): Return true if the list is empty; otherwise, return
false.
 isFull(): Return true if the list is full; otherwise, return false.
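A minimal Python sketch of this interface, assuming an array-backed list with a fixed capacity so that isFull() is meaningful (the class name, capacity parameter, and snake_case method names are illustrative, not from the original):

```python
class ListADT:
    """Array-backed sketch of the List ADT described above."""

    def __init__(self, capacity=10):
        self.capacity = capacity        # fixed capacity makes is_full() meaningful
        self.items = []

    def get(self, pos):                 # get(): element at a given position
        return self.items[pos]

    def insert(self, pos, element):     # insert(): element at any position
        if self.is_full():
            raise OverflowError("list is full")
        self.items.insert(pos, element)

    def remove(self, element):          # remove(): first occurrence of element
        self.items.remove(element)

    def remove_at(self, pos):           # removeAt(): element at a given position
        return self.items.pop(pos)

    def replace(self, pos, element):    # replace(): overwrite at a position
        self.items[pos] = element

    def size(self):
        return len(self.items)

    def is_empty(self):
        return len(self.items) == 0

    def is_full(self):
        return len(self.items) >= self.capacity
```

Note that the class only exposes the operations; a linked-list-backed version could replace the internal Python list without changing the interface, which is the point of an ADT.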
2. Stack ADT
View of stack

In the Stack ADT, the order of insertion and deletion follows the
LIFO (Last In, First Out) principle, also called FILO (First In,
Last Out). Elements are inserted and removed from the same end,
called the top of the stack. It should also support the following
operations:
 push(): Insert an element at one end of the stack called the
top.
 pop(): Remove and return the element at the top of the stack,
if it is not empty.
 peek(): Return the element at the top of the stack without
removing it, if the stack is not empty.
 size(): Return the number of elements in the stack.
 isEmpty(): Return true if the stack is empty; otherwise, return
false.
 isFull(): Return true if the stack is full; otherwise, return false.
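A minimal Python sketch of the Stack ADT, assuming a list-backed implementation with an optional capacity (class and parameter names are illustrative):

```python
class Stack:
    """List-backed sketch of the Stack ADT (LIFO)."""

    def __init__(self, capacity=None):
        self.capacity = capacity        # None means unbounded
        self.items = []

    def push(self, element):
        if self.is_full():
            raise OverflowError("stack is full")
        self.items.append(element)      # insert at the top

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self.items.pop()         # remove and return the top element

    def peek(self):
        if self.is_empty():
            raise IndexError("peek on empty stack")
        return self.items[-1]           # top element, without removing it

    def size(self):
        return len(self.items)

    def is_empty(self):
        return not self.items

    def is_full(self):
        return self.capacity is not None and len(self.items) >= self.capacity
```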
3. Queue ADT
View of Queue

The Queue ADT follows a design similar to the Stack ADT, but the
order of insertion and deletion changes to FIFO (First In, First
Out). Elements are inserted at one end (called the rear) and
removed from the other end (called the front). It should support
the following operations:
 enqueue(): Insert an element at the end of the queue.
 dequeue(): Remove and return the first element of the queue,
if the queue is not empty.
 peek(): Return the front element of the queue without removing
it, if the queue is not empty.
 size(): Return the number of elements in the queue.
 isEmpty(): Return true if the queue is empty; otherwise,
return false.
1.3) Overview of Time and Space Complexity Analysis for Linear Data Structures

 Time and space complexity analysis for linear data structures measures
how much time and memory an algorithm takes to run. It is important for
designing software, building websites, and analyzing large datasets
Time complexity
 The amount of time it takes an algorithm to run
 The number of operations, such as comparisons, required to complete the
algorithm
 The worst-case time complexity is the maximum time it takes for any input
 The average-case time complexity is the average time it takes for an input
 The best-case time complexity is the minimum time it takes for an input
Space complexity
 The amount of memory an algorithm uses
 The fixed amount of space required by the algorithm
 The variable amount of space required by the algorithm which
depends on the input size
Calculating time and space complexity
 Identify the basic operation in the algorithm
 Count how many times the basic operation is performed
 Express the count as a function of the input size
 Simplify the expression and identify the dominant term
 Express the time complexity using Big O notation

Example: Linear search algorithm

 The time complexity is O(n), where n is the number of elements in the array
 The space complexity is O(1), which means it uses a constant amount of
extra space

1.4) Searching Techniques: Linear and Binary Search

Linear Search Algorithm



Given an array, arr of n integers, and an integer element x, find


whether element x is present in the array. Return the index of the
first occurrence of x in the array, or -1 if it doesn’t exist.
Input: arr[] = [1, 2, 3, 4], x = 3
Output: 2

Input: arr[] = [10, 8, 30], x = 6


Output: -1
Explanation: The element to be searched is 6, and it is not
present, so we return -1.

In Linear Search, we iterate over all the elements of the array and
check if the current element is equal to the target element. If
we find an element equal to the target, we return its index.
Otherwise, if no element is equal to the target, we return -1, as
the element is not found. Linear search is also known as
sequential search.
For example: Consider the array arr[] = {10, 50, 30, 70, 80,
20, 90, 40} and key = 30
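The procedure above can be written directly in Python:

```python
def linear_search(arr, x):
    """Scan arr left to right; return the index of the first occurrence of x, or -1."""
    for i, value in enumerate(arr):
        if value == x:      # found the target: report its index
            return i
    return -1               # exhausted the array without a match
```

For the example above, `linear_search([10, 50, 30, 70, 80, 20, 90, 40], 30)` returns 2.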

Binary Search Algorithm – Iterative and Recursive Implementation

Binary Search is a highly efficient algorithm used to find an element in a sorted


array or list. Unlike linear search, which checks each element sequentially, binary
search works by repeatedly dividing the search interval in half, which makes it
much faster than linear search, especially for large datasets.

How Binary Search Works:

1. Initial Setup: Binary search begins by looking at the middle element of the
sorted array.
2. Comparison:
o If the middle element matches the target value, the search is complete.
o If the middle element is greater than the target, the target must be in
the left half of the array (since the array is sorted). So, the search
continues in the left half.
o If the middle element is less than the target, the target must be in the
right half of the array, and the search continues there.
3. Repeat: This process repeats, halving the search range each time, until the
element is found or the search range is empty (indicating the element is not
in the array).

Binary Search Algorithm

1. Start with the whole array.


2. Calculate the middle index: mid = (low + high) // 2
3. Compare the middle element with the target:
o If arr[mid] == target, return the index.
o If arr[mid] > target, repeat the search in the left half (high
= mid - 1).
o If arr[mid] < target, repeat the search in the right half (low
= mid + 1).
4. Repeat the process until the element is found or the search range becomes
invalid (low > high).

Example:

Given a sorted array: [2, 5, 8, 12, 15, 19, 25, 32, 37, 40] and
the target element 15.

1. Initial Range: low = 0, high = 9 (the indices of the array).


o mid = (0 + 9) // 2 = 4, so arr[mid] = 15.
o arr[mid] == 15, so we return mid = 4.
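Both the iterative and the recursive forms of the algorithm above can be sketched as:

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list; returns an index or -1."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] > target:
            high = mid - 1      # target must be in the left half
        else:
            low = mid + 1       # target must be in the right half
    return -1                   # search range became invalid: not present

def binary_search_recursive(arr, target, low=0, high=None):
    """Recursive variant of the same algorithm."""
    if high is None:
        high = len(arr) - 1
    if low > high:              # empty range: element not in the array
        return -1
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid
    if arr[mid] > target:
        return binary_search_recursive(arr, target, low, mid - 1)
    return binary_search_recursive(arr, target, mid + 1, high)
```

On the example array, `binary_search([2, 5, 8, 12, 15, 19, 25, 32, 37, 40], 15)` returns 4, matching the walkthrough above.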

Time and Space Complexity:

1. Time Complexity:
o Best case: O(1) — The target is found in the first
comparison (if it's the middle element).
o Average case: O(log n) — With each iteration, the
search space is halved, so the time complexity grows logarithmically
with the size of the array.
o Worst case: O(log n) — The algorithm may need to
make log n comparisons to exhaust the search space.
2. Space Complexity:
o Space Complexity: O(1) — Binary search operates in
constant space, as it only requires a few variables to track the low,
high, and mid indices.

Requirements for Binary Search:

 Sorted Array/List: The array must be sorted before applying binary search.
If the array is not sorted, you would need to sort it first, which takes
O(n log n) time.
 Efficient Search Space Reduction: Binary search reduces the search space
by half each time, which is why it is much more efficient than linear search
for large datasets.

Advantages:
 Efficient for Large Datasets: With a time complexity of O(log n),
binary search is very efficient compared to linear search,
especially for large datasets.
 Constant Space: It uses O(1) space, making it very memory-
efficient.

Disadvantages:

 Sorted Data Requirement: The array or list must be sorted beforehand.


 Not Efficient for Small Datasets: For small datasets, binary search may not
be worth the overhead compared to simpler algorithms like linear search.



1.5) Sorting Techniques: Bubble Sort, Selection Sort, Insertion Sort

Bubble Sort Algorithm



Bubble Sort is a simple comparison-based sorting algorithm in computer science.
It is named "bubble sort" because, with each pass through the list, the largest
remaining element "bubbles" to its final position at the end of the array.

How Bubble Sort Works:

1. Iterate through the list: Starting at the first element, compare the current
element with the next element.
2. Swap if necessary: If the current element is greater than the next one (for
ascending order), swap the two elements.
3. Repeat: Continue this process for the entire list. After each pass, the largest
element "bubbles" to the correct position at the end of the list.
4. Optimization: If during a pass, no swaps are made, the list is already sorted,
and the algorithm can terminate early.

Example:

Let's sort an array: [5, 3, 8, 4, 2] in ascending order.

1. First Pass:
o Compare 5 and 3, swap → [3, 5, 8, 4, 2]
o Compare 5 and 8, no swap → [3, 5, 8, 4, 2]
o Compare 8 and 4, swap → [3, 5, 4, 8, 2]
o Compare 8 and 2, swap → [3, 5, 4, 2, 8]
o After the first pass, the largest element (8) is in its correct position at
the end of the list.
2. Second Pass:
o Compare 3 and 5, no swap → [3, 5, 4, 2, 8]
o Compare 5 and 4, swap → [3, 4, 5, 2, 8]
o Compare 5 and 2, swap → [3, 4, 2, 5, 8]
o After the second pass, the second-largest element (5) is in its correct
position.
3. Third Pass:
o Compare 3 and 4, no swap → [3, 4, 2, 5, 8]
o Compare 4 and 2, swap → [3, 2, 4, 5, 8]
o After the third pass, the third-largest element (4) is in its correct
position.
4. Fourth Pass:
o Compare 3 and 2, swap → [2, 3, 4, 5, 8]
o The list is now sorted.
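The passes above, including the early-exit optimization from step 4, can be sketched as:

```python
def bubble_sort(arr):
    """In-place bubble sort with the early-exit optimization."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):          # last i elements are already in place
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                     # no swaps in a full pass: already sorted
            break
    return arr
```

Running `bubble_sort([5, 3, 8, 4, 2])` produces `[2, 3, 4, 5, 8]`, as in the walkthrough.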

Time Complexity:

 Best case (already sorted array): O(n) — when the algorithm
terminates early due to no swaps.
 Average case: O(n²) — when the array is unsorted.
 Worst case: O(n²) — when the array is sorted in reverse order.

Space Complexity:
 Space Complexity: O(1) — Bubble Sort is an in-place sorting
algorithm, meaning it doesn't require additional storage proportional to the
input size.

Advantages:

 Simple to understand and implement.


 In-place sorting with O(1) extra space.

Disadvantages:

 Inefficient for large datasets due to its O(n²) time complexity.


 Not suitable for large-scale sorting when compared to algorithms like
Merge Sort or Quick Sort.

Selection Sort Algorithm

Selection Sort is a simple and intuitive sorting algorithm. It repeatedly selects the
smallest (or largest) element from the unsorted portion of the array and swaps it
with the element at the beginning of the unsorted portion. This process is repeated
until the entire array is sorted.

How Selection Sort Works:

1. Start with the first element of the list.


2. Find the smallest element in the unsorted part of the list (from the current
element to the last element).
3. Swap the smallest element with the first unsorted element.
4. Move the boundary of the unsorted portion by one position forward (i.e.,
move the left pointer to the next element).
5. Repeat this process until the entire array is sorted.

Example:

Let's walk through Selection Sort on the following array:

[64, 25, 12, 22, 11]

Step-by-step Process:

First Pass:

 The initial array is [64, 25, 12, 22, 11].


 We need to find the smallest element from the entire array.
o Compare 64 with 25, 25 is smaller.
o Compare 25 with 12, 12 is smaller.
o Compare 12 with 22, 12 is still smaller.
o Compare 12 with 11, 11 is smaller.
o The smallest element in this pass is 11.

 Swap 11 with 64 (the first element of the array).


 The array now becomes:

[11, 25, 12, 22, 64]


Second Pass:

 Now, we focus on the unsorted portion of the array: [25, 12, 22, 64].
 Find the smallest element in this subarray:
o Compare 25 with 12, 12 is smaller.
o Compare 12 with 22, 12 is smaller.
o Compare 12 with 64, 12 is still smaller.
o The smallest element is 12.

 Swap 12 with 25 (the first element in this unsorted subarray).


 The array now becomes:

[11, 12, 25, 22, 64]


Third Pass:

 Now, focus on the subarray [25, 22, 64].


 Find the smallest element in this subarray:
o Compare 25 with 22, 22 is smaller.
o Compare 22 with 64, 22 is smaller.
o The smallest element is 22.

 Swap 22 with 25.


 The array now becomes:

[11, 12, 22, 25, 64]


Fourth Pass:

 Focus on the subarray [25, 64].


 Find the smallest element in this subarray:
o 25 is smaller than 64.

 No need to swap since 25 is already in the correct position.


 The array remains:

[11, 12, 22, 25, 64]


Fifth Pass:

 The remaining subarray is just [64], which is already sorted.

Final Sorted Array:

After completing all passes, the array is sorted:

[11, 12, 22, 25, 64]
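The pass structure above translates directly into Python:

```python
def selection_sort(arr):
    """In-place selection sort: repeatedly select the minimum of the unsorted part."""
    n = len(arr)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):       # scan the unsorted portion for the minimum
            if arr[j] < arr[min_idx]:
                min_idx = j
        if min_idx != i:                # skip the swap if it is already in place
            arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```

Running `selection_sort([64, 25, 12, 22, 11])` reproduces the passes traced above and yields `[11, 12, 22, 25, 64]`.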

Time and Space Complexity:

1. Time Complexity:
o Best, Average, and Worst Case: O(n²) — Selection Sort
always compares every element with every other element in the
unsorted portion, resulting in a quadratic number of comparisons.
 Best Case: Even if the array is already sorted, Selection Sort
still performs O(n²) comparisons.
 Worst Case: The worst case happens when the array is sorted
in reverse order, which still requires O(n²)
comparisons.
2. Space Complexity:
o Space Complexity: O(1) — Selection Sort is an in-place
sorting algorithm, meaning it requires only a constant amount of
additional space for the temporary variable used in swapping.

Advantages of Selection Sort:

 Simplicity: Very easy to understand and implement.


 In-place sorting: It sorts the array without using additional memory (i.e., no
auxiliary array).
 Not affected by the initial order of elements: Unlike algorithms like Bubble
Sort or Insertion Sort, which can perform better if the array is partially
sorted, Selection Sort always performs the same number of comparisons.

Disadvantages of Selection Sort:

 Inefficient for large datasets: Because of its O(n²) time
complexity, it is very slow for large arrays, especially when compared to
more advanced algorithms like Merge Sort or Quick Sort, which have
O(n log n) time complexity.
 Not adaptive: It doesn't improve even when the array is partially sorted.
This makes it inefficient in many practical scenarios.

Insertion Sort in Data Structures with Example

Insertion Sort is a simple comparison-based sorting algorithm that builds the final
sorted array one element at a time. It is much like sorting playing cards in your
hands, where you take one card at a time and place it in the correct position relative
to the cards already sorted.

How Insertion Sort Works:

1. Start with the second element (since a single element is trivially sorted).
2. Compare the current element with the elements in the sorted portion of
the array (to its left).
3. Shift all elements that are greater than the current element one position to
the right to make space for the current element.
4. Insert the current element into its correct position.
5. Repeat this process for all the elements in the array.

Example:

Consider the array: [5, 2, 9, 1, 5, 6]

Step-by-step Process:

Initial Array:
[5, 2, 9, 1, 5, 6]

We will start with the second element (index 1), because the first element is
trivially considered sorted.

First Pass (i = 1):

 Current element: 2
 Compare 2 with 5 (the element to its left).
o 2 < 5, so shift 5 to the right.
o The array now looks like: [5, 5, 9, 1, 5, 6].
 Insert 2 into its correct position: Place 2 at the start.
 The array becomes: [2, 5, 9, 1, 5, 6].

Second Pass (i = 2):

 Current element: 9
 Compare 9 with 5 (the element to its left).
o 9 > 5, no shift needed.
 Insert 9: It's already in the correct position.
 The array remains: [2, 5, 9, 1, 5, 6].

Third Pass (i = 3):

 Current element: 1
 Compare 1 with 9 (shift 9 right).
o The array becomes: [2, 5, 9, 9, 5, 6].
 Compare 1 with 5 (shift 5 right).
o The array becomes: [2, 5, 5, 9, 5, 6].
 Compare 1 with 2 (shift 2 right).
o The array becomes: [2, 2, 5, 9, 5, 6].
 Insert 1 at the start.
 The array becomes: [1, 2, 5, 9, 5, 6].
Fourth Pass (i = 4):

 Current element: 5
 Compare 5 with 9 (shift 9 right).
o The array becomes: [1, 2, 5, 9, 9, 6].
 Compare 5 with 5 (no shift needed).
 Insert 5 after the first 5.
 The array becomes: [1, 2, 5, 5, 9, 6].

Fifth Pass (i = 5):

 Current element: 6
 Compare 6 with 9 (shift 9 right).
o The array becomes: [1, 2, 5, 5, 9, 9].
 Compare 6 with 5 (no shift needed).
 Insert 6 after the second 5.
 The array becomes: [1, 2, 5, 5, 6, 9].

Final Sorted Array:

After completing all passes, the array is sorted:

[1, 2, 5, 5, 6, 9]

Insertion Sort Algorithm Code (in Python):
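A minimal implementation of the steps above (the code itself was missing from the source at this point; this is a standard sketch):

```python
def insertion_sort(arr):
    """In-place insertion sort: grow a sorted prefix one element at a time."""
    for i in range(1, len(arr)):
        key = arr[i]                # element to insert into the sorted prefix
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]     # shift larger elements one slot to the right
            j -= 1
        arr[j + 1] = key            # drop the key into its correct position
    return arr
```

Running `insertion_sort([5, 2, 9, 1, 5, 6])` reproduces the passes traced above and yields `[1, 2, 5, 5, 6, 9]`.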

Time and Space Complexity:

1. Time Complexity:
o Best case: O(n) — This happens when the array is already
sorted. In this case, only one comparison is made for each element,
and no shifting occurs.
o Worst case: O(n²) — This occurs when the array is
sorted in reverse order. Each element needs to be compared and
shifted to the beginning.
o Average case: O(n²) — On average, the algorithm will
need to perform about n²/2 comparisons and shifts.

2. Space Complexity:
o Space Complexity: O(1) — Insertion sort is an in-place
sorting algorithm, meaning it doesn't require any additional space
besides the input array.

Advantages of Insertion Sort:

 Simple to understand and implement: It is one of the easiest sorting


algorithms to understand and code.
 Efficient for small datasets: For small arrays, the overhead of more
advanced algorithms may not be justified, and Insertion Sort can perform
well.
 Adaptive: It is adaptive, meaning if the array is already partially sorted, the
algorithm will perform better (i.e., fewer shifts and comparisons).
 Stable: Insertion sort is stable, meaning that it maintains the relative order
of elements with equal values.

Disadvantages of Insertion Sort:

 Inefficient for large datasets: Due to its O(n²) time complexity,
it is not suitable for large arrays.
 Not ideal for large-scale data: For larger datasets, more efficient algorithms
like Quick Sort, Merge Sort, or Heap Sort are typically preferred.

When to Use Insertion Sort:

 Small datasets: It is efficient for small arrays where the overhead of more
complex algorithms isn't necessary.
 Partially sorted arrays: If the array is already partially sorted, Insertion Sort
can be much faster than other algorithms.
 Memory-constrained environments: It uses only O(1) extra space,
so it is useful when memory is limited.
Chapter 2
2.1) What is a Linked List?
A linked list is a linear data structure which stores a collection of
"nodes" connected together via links, i.e. pointers. Linked list nodes are
not stored at contiguous locations; rather, they are linked using pointers
to different memory locations. A node consists of the data value and a
pointer to the address of the next node within the linked list.

A linked list is a dynamic linear data structure whose memory can be
allocated or de-allocated at run time based on insertion or deletion
operations; this helps use system memory efficiently. Linked lists can be
used to implement various data structures like stacks, queues, graphs,
hash maps, etc.

A linked list starts with a head pointer which points to the first node. Every
node consists of data, which holds the actual value associated with
the node, and a next pointer, which holds the memory address of the next
node in the linked list. The last node, called the tail node, points to
null, indicating the end of the list.
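The node structure just described can be sketched in Python; here the variable head plays the role of the head pointer, and a next of None marks the tail:

```python
class Node:
    """A singly linked list node: a data value plus a pointer to the next node."""
    def __init__(self, data):
        self.data = data
        self.next = None    # the tail node's next stays None (null)

# Build the list 10 -> 20 -> 30 by hand; head points to the first node.
head = Node(10)
head.next = Node(20)
head.next.next = Node(30)
```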

Types of Linked List


Following are the various types of linked list.

Singly Linked Lists


Singly linked lists contain two "buckets" in one node; one bucket holds
the data and the other bucket holds the address of the next node of the
list. Traversals can be done in one direction only as there is only a single
link between two nodes of the same list.
Doubly Linked Lists
Doubly linked lists contain three "buckets" in one node; one bucket holds
the data and the other buckets hold the addresses of the previous and
next nodes in the list. The list can be traversed in both directions, as
the nodes are connected to each other from both sides.

Circular Linked Lists


Circular linked lists can exist in both singly linked list and doubly linked
list.

Since the last node and the first node of the circular linked list are
connected, the traversal in this linked list will go on forever until it is
broken.

Basic Operations in Linked List:-

The basic operations on linked lists are insertion, deletion, display,
searching for an element with a given key, and deleting an element with a
given key. These operations are performed on singly linked lists as given below −

 Insertion − Adds an element at the beginning of the list.


 Deletion − Deletes an element at the beginning of the list.
 Display − Displays the complete list.
 Search − Searches an element using the given key.
 Delete − Deletes an element using the given key.

Linked List - Insertion Operation

Adding a new node to a linked list is a multi-step activity. We shall
learn this with diagrams here. First, create a node using the same node
structure and find the location where it has to be inserted.

Imagine that we are inserting a node B (NewNode), between A (LeftNode)


and C (RightNode). Then point B.next to C −

NewNode.next -> RightNode;

It should look like this −

Now, the next node at the left should point to the new node.

LeftNode.next -> NewNode;

This will put the new node in the middle of the two. The new list should
look like this −
Insertion in linked list can be done in three different ways. They are
explained as follows −

Insertion at Beginning
In this operation, we are adding an element at the beginning of the list.

Algorithm

1. START
2. Create a node to store the data
3. Check if the list is empty
4. If the list is empty, add the data to the node and
assign the head pointer to it.
5. If the list is not empty, add the data to a node and link to the
current head. Assign the head to the newly added node.
6. END

Insertion at Ending
In this operation, we are adding an element at the ending of the list.

Algorithm

1. START
2. Create a new node and assign the data
3. Find the last node
4. Point the last node to new node
5. END

Insertion at a Given Position


In this operation, we are adding an element at any position within the list.

Algorithm

1. START
2. Create a new node and assign data to it
3. Iterate until the node before the desired position is found
4. Point the new node to the next node, and the previous
node to the new node
5. END
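The three insertion operations above can be sketched in Python (function names are illustrative; each returns the possibly-new head, since inserting at the beginning changes it):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def insert_at_beginning(head, data):
    node = Node(data)
    node.next = head            # new node links to the current head
    return node                 # new node becomes the head

def insert_at_end(head, data):
    node = Node(data)
    if head is None:            # empty list: new node is the head
        return node
    last = head
    while last.next:            # walk to the last node
        last = last.next
    last.next = node
    return head

def insert_at_position(head, data, pos):
    """Insert so the new node ends up at index pos (0-based)."""
    if pos == 0:
        return insert_at_beginning(head, data)
    prev = head
    for _ in range(pos - 1):    # stop at the node before the position
        prev = prev.next
    node = Node(data)
    node.next = prev.next       # NewNode.next -> RightNode
    prev.next = node            # LeftNode.next -> NewNode
    return head
```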

Linked List - Deletion Operation

Deletion is also a more than one step process. We shall learn with
pictorial representation. First, locate the target node to be removed, by
using searching algorithms.

The left (previous) node of the target node now should point to the next
node of the target node −

LeftNode.next -> TargetNode.next;

This will remove the link that was pointing to the target node. Now, using
the following code, we will remove what the target node is pointing at.
TargetNode.next -> NULL;

If we need the deleted node, we can keep it in memory; otherwise,
we can simply deallocate its memory and wipe off the target node
completely.

Similar steps should be taken if the node is being deleted at the
beginning of the list. While deleting it at the end, the second last node of
the list should point to NULL.

Deletion in linked lists is also performed in three different ways. They are
as follows −

Deletion at Beginning
In this deletion operation of the linked list, we are deleting an element from
the beginning of the list. For this, we point the head to the second node.

Algorithm

1. START
2. Assign the head pointer to the next node in the list
3. END
Deletion at Ending
In this deletion operation of the linked list, we are deleting an element from
the end of the list.

Algorithm

1. START
2. Iterate until you find the second last node in the list.
3. Assign NULL to the next pointer of the second last node.
4. END

Deletion at a Given Position


In this deletion operation of the linked list, we are deleting an element at any
position of the list.

Algorithm

1. START
2. Iterate until the node at the given position in the list is found.
3. Link the previous node to the next node of the current node.
4. END
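The three deletion operations above can be sketched in Python (function names are illustrative; each returns the possibly-new head):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def delete_at_beginning(head):
    return head.next if head else None      # head moves to the second node

def delete_at_end(head):
    if head is None or head.next is None:   # zero or one node: list becomes empty
        return None
    second_last = head
    while second_last.next.next:            # walk to the second last node
        second_last = second_last.next
    second_last.next = None                 # drop the last node
    return head

def delete_at_position(head, pos):
    if head is None or pos == 0:
        return delete_at_beginning(head)
    prev = head
    for _ in range(pos - 1):                # stop at the node before the target
        prev = prev.next
    prev.next = prev.next.next              # LeftNode.next -> TargetNode.next
    return head
```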

Linked List - Reversal Operation:-

This operation is a thorough one. We need to make the last node the one
pointed to by the head node, reversing the whole linked list.

First, we traverse to the end of the list. It should be pointing to NULL.


Now, we shall make it point to its previous node −
We have to make sure that the last node is not lost. So we'll use a temp
node, which looks like the head node, pointing to the last node. Now, we
shall make all the nodes on the left point to their previous nodes one by
one.

Except the node (first node) pointed by the head node, all nodes should
point to their predecessor, making them their new successor. The first
node will point to NULL.

We'll make the head node point to the new first node by using the temp
node.

Algorithm
Step by step process to reverse a linked list is as follows −

1. START
2. We use three pointers to perform the reversal:
prev, current, and next.
3. For the current node, save its next node, point the current
node to prev, then advance prev and current.
4. Iteratively repeat step 3 for all the nodes in the list.
5. Assign the prev node to head.
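The three-pointer reversal described above looks like this in Python (a standard sketch; the function name is illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def reverse(head):
    """Iteratively reverse the list using prev / current / next pointers."""
    prev = None
    current = head
    while current:
        nxt = current.next      # save the rest of the list
        current.next = prev     # point this node at its predecessor
        prev = current          # advance prev
        current = nxt           # advance current
    return prev                 # prev is the new head
```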

Linked List - Search Operation:-

Searching for an element in the list uses a key element. This operation is
done in the same way as array search: comparing every element in the
list with the given key element.

Algorithm

1 START
2 If the list is not empty, iteratively check if the list
contains the key
3 If the key element is not present in the list, unsuccessful
search
4 END
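The sequential search above can be sketched as (the function name is illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def search(head, key):
    """Return True if key is present in the list, else False (sequential scan)."""
    current = head
    while current:
        if current.data == key:     # successful search
            return True
        current = current.next
    return False                    # reached the end: unsuccessful search
```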

Linked List - Traversal Operation:-

The traversal operation walks through all the elements of the list in an
order and displays the elements in that order.

Algorithm

1. START
2. While the list is not empty and did not reach the end of the list,
print the data in each node
3. END
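A traversal sketch in Python; here the data is collected into a list, which could just as easily be printed node by node (the function name is illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def traverse(head):
    """Walk the list from head to tail, collecting each node's data in order."""
    values = []
    current = head
    while current:                  # stop at the end of the list (next is None)
        values.append(current.data)
        current = current.next
    return values
```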
2.2) Comparing Arrays and Linked Lists
Linked List vs Array


Array: Arrays store elements in contiguous memory


locations, resulting in easily calculable addresses for the
elements stored and this allows faster access to an element
at a specific index.

Data storage scheme of an array

Linked List: Linked lists are less rigid in their storage


structure and elements are usually not stored in contiguous
locations, hence they need to be stored with additional
tags giving a reference to the next element.
Linked-List representation

Advantages of Linked List over arrays :


 Efficient insertion and deletion: We only need to
change a few pointers (or references) to insert (or delete) an
item in the middle. Given a reference to the node, insertion and
deletion in a linked list take O(1) time, whereas in an array,
insertion / deletion in the middle takes O(n) time due to
shifting.
 Implementation of Queue and Deque: A simple array
implementation is not efficient at all. We must use a circular
array to implement these efficiently, which is complex. But with
a linked list, it is easy and straightforward. That is why most
language libraries use linked lists internally to implement
these data structures.
 Space Efficient in Some Cases: A linked list might turn
out to be more space efficient compared to arrays in cases
where we cannot guess the number of elements in
advance.
 Circular List with Deletion/Addition : Circular Linked
Lists are useful to implement CPU round robin scheduling
or similar requirements in the real world because of the
quick deletion/insertion in a circular manner.
Advantages of Arrays over Linked List :
 Random Access: We can access the ith item in O(1) time
(only some basic arithmetic is required using the base address).
In the case of linked lists, it is an O(n) operation due to
sequential access.
 Cache Friendliness: Array items (or item references)
are stored at contiguous locations, which makes arrays
cache friendly (see spatial locality of reference for
more details).
 Easy to use: Arrays are relatively very easy to use and
are available in the core of most programming languages.
 Less Overhead : Unlike linked list, we do not have any
extra references / pointers to be stored with every item.
2.3) Applications of linked lists:-
Linked lists have several practical applications in computer science and real-
world scenarios. Due to their dynamic nature and efficient
insertion/deletion operations, they are used in various areas where flexible
and efficient memory usage is required. Here are the major applications of
linked lists in data structures:

1. Dynamic Memory Management

Linked lists are often used in dynamic memory allocation and management,
where the size of data structures needs to change at runtime.

Example: Implementing stacks, queues, and heaps.
2. Implementation of Other Data Structures

Linked lists are used as a base to implement other data structures such as:

Stacks

Queues

Hash Tables (for handling collisions via chaining)
Graphs (for adjacency lists)

Sparse Matrices
3. Undo Mechanism in Applications

Many applications (e.g., text editors or Photoshop) use linked lists to store
states for the "Undo" and "Redo" operations.

Each node represents a state of the document, and traversal allows going
back and forth between changes.
4. Dynamic Data Storage

Linked lists are suitable for storing data where the size is unknown or
changes frequently.

Example:

Maintaining a dynamic playlist in a media player.

Managing a list of active users in a chat application.


5. Efficient Insertion/Deletion

Linked lists allow efficient insertion and deletion operations compared to arrays, especially in scenarios where elements are frequently added or removed.

Example: Maintaining jobs in a printer queue or task scheduling systems.


6. File Systems

File systems like FAT (File Allocation Table) use linked lists to represent file
blocks. Each block contains a pointer to the next block, enabling sequential
storage of file data.
7. Networking

Linked lists are used in routers and switches to maintain dynamically changing lists of packet buffers.
8. Polynomial Representation

Linked lists can be used to represent and manipulate polynomials. Each node represents a term in the polynomial (coefficient and exponent), and operations like addition and multiplication are easier to implement.
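A sketch of this representation in C; the names Term, add_term, and poly_eval are illustrative assumptions for this example:

```c
#include <stdlib.h>

/* One polynomial term: coefficient, exponent, and link to the next term. */
struct Term {
    int coeff;
    int exp;
    struct Term *next;
};

/* Prepend a term to the list and return the new head. */
struct Term *add_term(struct Term *head, int coeff, int exp) {
    struct Term *t = malloc(sizeof *t);
    t->coeff = coeff;
    t->exp = exp;
    t->next = head;
    return t;
}

/* Evaluate the polynomial at x by walking the term list. */
double poly_eval(const struct Term *head, double x) {
    double sum = 0.0;
    for (; head != NULL; head = head->next) {
        double p = 1.0;
        for (int i = 0; i < head->exp; i++)
            p *= x;                    /* x raised to the term's exponent */
        sum += head->coeff * p;
    }
    return sum;
}
```

Addition of two polynomials then becomes a merge of two sorted term lists, which is where the linked representation pays off.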
9. Memory-Efficient Data Storage

Doubly and circular linked lists are useful in memory-critical systems where efficient memory utilization is a priority.

Example: Circular linked lists are used in implementing round-robin scheduling algorithms.

10. Browser History Management

A doubly linked list is used to manage the history of visited web pages,
allowing users to navigate forward and backward efficiently.
11. Music and Media Players

Circular linked lists are used to implement playlists in media players, where
the last song is connected to the first, allowing continuous playback.

12. Graph and Tree Traversal

Linked lists (particularly adjacency lists) are used in graph algorithms for
efficient representation and traversal of nodes.

Example: Breadth-First Search (BFS) and Depth-First Search (DFS).


13. Operating System Applications
Linked lists are used in:

Process scheduling (ready queue).

Paging and memory management systems.

UNIT-3
3.1)Stack in Data Structure: What is
Stack and Its Applications
A stack in data structures is a linear data structure that follows the LIFO (Last-In-First-Out) principle and allows insertion and deletion operations from only one end, called the top. A stack can be implemented in contiguous memory (an array) or in non-contiguous memory (a linked list). Stacks play a vital role in many applications.

Introduction to Stack in Data Structures

The stack data structure is a linear data structure that follows a principle known as LIFO (Last In First Out), equivalently described as FILO (First In Last Out). Real-life examples of a stack are a deck of cards, a pile of books, a pile of money, and many more.
Stack Representation in Data Structures

Working of Stack in Data Structures

Now, assume that you have a stack of books.

You can only see the topmost book, namely 40, which is kept at the top of the stack.

If you want to insert a new book, say 50, you must first update the top and then insert the new book.

And if you want to access any book other than the topmost book (40), you must first remove the topmost book from the stack; the top will then point to the next topmost book.
After working on the representation of stacks in data structures, you will see
some basic operations performed on the stacks in data structures.

Basic Operations on Stack in Data Structures

The following are some operations that are implemented on the stack.

Push Operation

Push operation involves inserting a new element into the stack. Since there is only one end available for insertion, the new element is placed at the top of the stack.
Pop Operation

Pop operation refers to removing an element from the stack. Again, since all operations happen at one end, only the element at the top of the stack can be removed; removing it is termed the pop operation.
Peek Operation

Peek operation refers to retrieving the topmost element in the stack without
removing it from the collections of data elements.

isFull()

The isFull function is used to check whether or not a stack is full.

isEmpty()

isEmpty function is used to check whether or not a stack is empty.

First, you will learn about the functions:

isFull()
The following is the algorithm of the isFull() function (assuming top is initialised to -1, so the last valid index is maxsize - 1):

Begin

    if top equals maxsize - 1
        return true
    else
        return false
    end if

End

The implementation of the isFull() function is as follows:

bool isFull() {
    if (top == maxsize - 1)
        return true;
    else
        return false;
}

isEmpty()

The following is the algorithm of the isEmpty() function:

Begin

    if top less than 0
        return true
    else
        return false
    end if

End

The implementation of the isEmpty() function is:

bool isEmpty() {
    if (top == -1)
        return true;
    else
        return false;
}

Push Operation

Push operation includes various steps, which are as follows :

Step 1: First, check whether or not the stack is full

Step 2: If the stack is full, then exit

Step 3: If not, increment the top by one

Step 4: Insert the new element where the top is pointing

Step 5: Success

The algorithm of the push operation is:

Begin push: stack, item

    if the stack is full
        return null
    end if

    top <- top + 1
    stack[top] <- item

End

This is how you implement a push operation:

if (!isFull()) {
    top = top + 1;
    stack[top] = item;
} else {
    printf("stack is full");
}

Pop Operation

Step 1: First, check whether or not the stack is empty

Step 2: If the stack is empty, then exit

Step 3: If not, access the topmost data element

Step 4: Decrement the top by one

Step 5: Success

The following is the algorithm of the pop operation:

Begin pop: stack

    if the stack is empty
        return null
    end if

    item <- stack[top]
    top <- top - 1
    return item

End

Implementing a pop operation is as follows:

int pop() {
    int item;
    if (!isEmpty()) {
        item = stack[top];
        top = top - 1;
        return item;
    } else {
        printf("stack is empty");
        return -1;
    }
}

Peek Operation

The algorithm of a peek operation is:

begin peek

    return stack[top]

end

The implementation of the peek operation is:

int peek() {
    return stack[top];
}

3.2) Implementation of Stack in Data Structures

You can implement a stack using either of two underlying data structures: an array or a linked list.

 Array: In array implementation, the stack is formed using an array. All the
operations are performed using arrays. You will see how all operations can
be implemented on the stack in data structures using an array data structure.
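Putting the snippets above together, a complete array-based stack might look like the following C sketch. MAXSIZE is an arbitrary capacity, and reporting overflow/underflow through boolean return values (rather than printf) is a design choice of this example:

```c
#include <stdbool.h>

#define MAXSIZE 100

static int stack[MAXSIZE];
static int top = -1;               /* -1 means the stack is empty */

bool isFull(void)  { return top == MAXSIZE - 1; }
bool isEmpty(void) { return top == -1; }

/* Push: fails (returns false) on overflow. */
bool push(int item) {
    if (isFull())
        return false;
    stack[++top] = item;           /* increment top, then store */
    return true;
}

/* Pop: writes the removed element to *item, fails on underflow. */
bool pop(int *item) {
    if (isEmpty())
        return false;
    *item = stack[top--];          /* read top, then decrement  */
    return true;
}

/* Peek: returns the top element without removing it. */
int peek(void) { return stack[top]; }
```

Note how every operation touches only the single index `top`, which is what makes all stack operations O(1).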
Properties of a stack:-
1.) LIFO behaviour: The fundamental principle of a stack is that the last element added will be the first one removed, like a pile of plates where you can only access the top plate.
2.) Single access point: All insertions and deletions occur only at the top of the stack.
Implementing Stacks using linked list:-
- Linked-List: In the linked list implementation of a stack, every new element is inserted at the head of the list, and the top points to this newly inserted element. Whenever you want to remove an element from the stack, remove the node indicated by the top and move the top to the next node in the list.
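A C sketch of the linked-list implementation just described; the node layout and the double-pointer interface are assumptions of this example:

```c
#include <stdlib.h>
#include <stdbool.h>

/* Each stack element is a list node; the head of the list is the top. */
struct StackNode {
    int data;
    struct StackNode *next;
};

/* Push: the new node becomes the head, i.e. the new top. */
void push(struct StackNode **top, int item) {
    struct StackNode *node = malloc(sizeof *node);
    node->data = item;
    node->next = *top;             /* new node points at the old top */
    *top = node;
}

/* Pop: remove the head node and move top to the next node. */
bool pop(struct StackNode **top, int *item) {
    if (*top == NULL)
        return false;              /* stack is empty */
    struct StackNode *old = *top;
    *item = old->data;
    *top = old->next;
    free(old);
    return true;
}
```

Unlike the array version, no isFull() check is needed: the stack grows one node at a time until memory runs out.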
3.3)Applications of stacks in expression
evaluation, backtracking, & reversing List
1. Expression Evaluation:

Stacks are widely used in evaluating mathematical expressions, especially those in postfix (Reverse Polish Notation, RPN) and infix notation.

 Infix to Postfix Conversion: The stack helps convert infix expressions (like
A + B * C) to postfix notation (A B C * +) by handling operator
precedence and associativity.
 Postfix Expression Evaluation: After conversion to postfix, the expression
can be evaluated using a stack. Operands are pushed onto the stack, and
when an operator is encountered, operands are popped off, the operation is
performed, and the result is pushed back onto the stack.

Example: For the postfix expression 2 3 4 * +, evaluate:

- Push 2, 3, 4 onto the stack.
- Encounter *: pop 4 and 3, multiply to get 12, push 12.
- Encounter +: pop 12 and 2, add to get 14, push 14.
- Final result: 14.
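The evaluation steps above can be coded directly. This C sketch assumes single-digit operands separated by spaces; eval_postfix and the fixed stack size are illustrative choices:

```c
#include <ctype.h>

/* Evaluate a postfix expression of single-digit operands (+ - * /). */
int eval_postfix(const char *expr) {
    int stack[64];
    int top = -1;
    for (; *expr != '\0'; expr++) {
        char c = *expr;
        if (c == ' ')
            continue;                    /* skip separators */
        if (isdigit((unsigned char)c)) {
            stack[++top] = c - '0';      /* operand: push its value */
        } else {
            int b = stack[top--];        /* right operand (pushed last) */
            int a = stack[top--];        /* left operand */
            switch (c) {
                case '+': stack[++top] = a + b; break;
                case '-': stack[++top] = a - b; break;
                case '*': stack[++top] = a * b; break;
                case '/': stack[++top] = a / b; break;
            }
        }
    }
    return stack[top];                   /* single remaining value */
}
```

For "2 3 4 * +" this follows exactly the trace above and yields 14. Note that the second pop gives the left operand, which matters for - and /.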

2. Backtracking:

Backtracking is a problem-solving technique used in decision-making processes. Stacks are helpful for backtracking, as they allow us to "remember" previous states.

- Example: Solving puzzles like the N-Queens problem or Sudoku, where you need to explore different possible configurations and "undo" certain decisions when you reach a dead end.
- When exploring potential solutions, you push choices onto the stack. If a choice leads to a dead end, you pop from the stack to backtrack to a previous state and try a different choice.

Example: In a maze, you might explore paths by moving forward (pushing each
decision onto the stack). If you hit a dead end, you backtrack by popping the stack
to find the previous decision point.
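The maze idea can be sketched with an explicit stack of positions. Everything here (the 4x4 grid size, Pos, solve_maze) is an illustrative assumption; 0 marks an open cell and 1 a wall:

```c
#include <stdbool.h>

#define N 4

struct Pos { int r, c; };

/* Depth-first maze search from (0,0) to (N-1,N-1) using an explicit
   stack: push a move to explore it, pop to backtrack from dead ends. */
bool solve_maze(const int maze[N][N]) {
    bool visited[N][N] = {{false}};
    struct Pos stack[N * N];
    int top = -1;

    stack[++top] = (struct Pos){0, 0};      /* start position */
    visited[0][0] = true;

    while (top >= 0) {
        struct Pos p = stack[top];
        if (p.r == N - 1 && p.c == N - 1)
            return true;                    /* reached the exit */

        /* Push the first open, unvisited neighbour (down/right/up/left). */
        int dr[] = {1, 0, -1, 0}, dc[] = {0, 1, 0, -1};
        bool moved = false;
        for (int k = 0; k < 4; k++) {
            int r = p.r + dr[k], c = p.c + dc[k];
            if (r >= 0 && r < N && c >= 0 && c < N &&
                maze[r][c] == 0 && !visited[r][c]) {
                visited[r][c] = true;
                stack[++top] = (struct Pos){r, c};
                moved = true;
                break;
            }
        }
        if (!moved)
            top--;                          /* dead end: backtrack */
    }
    return false;                           /* no path exists */
}
```

Popping on a dead end is exactly the "backtrack to the previous decision point" step described above.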

3. Reversing a List:

A stack can be used to reverse a list because stacks operate on a Last In, First Out
(LIFO) principle, meaning the last element added will be the first one to be
removed.

 Algorithm: Push each element of the list onto a stack, and then pop all
elements from the stack, which will result in the reversed order.

Example: For a list [1, 2, 3, 4]:

- Push 1, 2, 3, 4 onto the stack.
- Pop each element: first 4, then 3, then 2, and finally 1, resulting in [4, 3, 2, 1].
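The algorithm is a few lines of C; reverse_with_stack and the fixed capacity of 64 are assumptions of this sketch:

```c
/* Reverse an array in place: push every element, then pop them back.
   LIFO order means the pops come out reversed. */
void reverse_with_stack(int *arr, int n) {
    int stack[64];                 /* capacity assumption: n <= 64 */
    int top = -1;
    for (int i = 0; i < n; i++)
        stack[++top] = arr[i];     /* push in original order */
    for (int i = 0; i < n; i++)
        arr[i] = stack[top--];     /* pop in reversed order  */
}
```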
