Algorithm Internal

The document covers various fundamental concepts in computer science, including arrays, pointers, recursion, data abstraction, sets, stacks, queues, time and space complexity, sorting algorithms, and string manipulation. Each section details properties, advantages, disadvantages, and operations associated with these concepts, along with pseudocode and time complexity analysis for sorting algorithms. It serves as a comprehensive guide for understanding data structures and algorithms.

01. Properties, Advantage, Disadvantage of array.
02. Memory Address Calculation in Arrays.
03. Pointers, Types, Advantage, Disadvantage.
04. Recursion, Advantage, Disadvantage.
05. Data Abstraction, Achieving Abstraction, Advantage.
06. Sets, Operations on Sets, Multisets, Operations on Multisets.
07. Stack (push, pop, peek), Applications, Time & Space Complexity.
08. Queue (enqueue, dequeue), Applications, Time & Space Complexity.
09. Time & Space Complexity Analysis (Definition, Types).
10. Asymptotic Notations (Big O, Omega, Theta).
11. String (Length, Copy, Concatenation, Compare, Reverse, Substring).
12. Sorting (Definition, Categories).
13. All Sorting Algorithms with pseudocode and Time & Space Complexity.
14. Binary Search Tree, Balanced Search Tree, AVL Tree, 2-3 Tree.

ANSWERS:
01. Properties, Advantage, Disadvantage of array.
Properties:
- Same Data Type: All elements in an array must be of the same type (e.g., int, float).
- Sequential Storage: The elements are stored one after another in memory.
- Index-Based Access: Each element can be accessed using an index (starting from 0).
- Same Name for Elements: All elements share the same array name.
- Slow Insertion & Deletion: Shifting elements takes time.
- 2D Arrays Stored Row-Wise: Multi-dimensional arrays are stored row by row in memory.
- Fixed Size: The size must be declared when the array is created.
Advantages:
- Less Code: Easier access and manipulation of multiple values.
- Fast Traversal: Loops can quickly go through all elements.
- Sorting: Sorting is simpler and requires less code.
- Random Access: Any element can be accessed directly.
Disadvantages:
- Fixed Size: Cannot grow or shrink dynamically.
- Memory Waste: Unused elements still occupy space.
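
A small Python illustration of these properties, using the standard array module (which enforces one element type); the values are just examples:

from array import array

a = array('i', [10, 20, 30, 40])   # 'i' = signed int: all elements share one type

print(a[2])        # index-based (random) access -> 30
a.insert(1, 15)    # insertion shifts the later elements, so it takes O(n) time
print(a)           # array('i', [10, 15, 20, 30, 40])
# a.append(2.5)    # would raise TypeError: elements must all be integers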

02. Memory Address Calculation in Arrays.
Let B = base address, W = size (in bytes) of each element, LB = lower bound of the index, LR/LC = lower bounds of the row/column indices, M = number of rows, and N = number of columns.
- 1D Array: Address of A[I] = B + W * (I - LB)
- 2D Array:
  Row Major Order: Address of A[I][J] = B + W * ((I - LR) * N + (J - LC))
  Column Major Order: Address of A[I][J] = B + W * ((J - LC) * M + (I - LR))
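
A short Python sketch of these formulas (the symbol names follow the definitions above; the sample values are made up for illustration):

def address_1d(B, W, I, LB):
    # Address of A[I] in a 1-D array
    return B + W * (I - LB)

def address_2d_row_major(B, W, I, J, LR, LC, N):
    # Address of A[I][J] in row major order; N = number of columns
    return B + W * ((I - LR) * N + (J - LC))

def address_2d_col_major(B, W, I, J, LR, LC, M):
    # Address of A[I][J] in column major order; M = number of rows
    return B + W * ((J - LC) * M + (I - LR))

# Example: base address 1000, 4-byte elements, indices starting at 0
print(address_1d(1000, 4, I=5, LB=0))                              # 1020
print(address_2d_row_major(1000, 4, I=2, J=3, LR=0, LC=0, N=10))   # 1092
print(address_2d_col_major(1000, 4, I=2, J=3, LR=0, LC=0, M=10))   # 1128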

03. Pointers, Types, Advantage, Disadvantage.
A pointer is a variable that stores the memory address of another variable.
Types of Pointers:
1. Null Pointer: A pointer that does not point to any valid memory location (initialized to NULL).
2. Void Pointer (Generic Pointer): A pointer that can store the address of any data type.
3. Wild Pointer: An uninitialized pointer that points to a random memory location.
4. Dangling Pointer: A pointer that refers to a memory location that has already been freed.
5. Pointer to Pointer (Double Pointer): A pointer that stores the address of another pointer.
Advantages of Pointers:
- Efficient Memory Access: Directly accesses memory locations.
- Dynamic Memory Allocation: Used with malloc(), calloc().
- Pass by Reference: Functions can modify actual values.
- Faster Execution: Used in arrays, structures, and linked lists.
Disadvantages of Pointers:
- Difficult to Understand: Requires careful memory management.
- Memory Leaks: Occur if allocated memory is not freed properly.
- Dangling & Wild Pointers: Can cause crashes or undefined behavior.
- Security Issues: Can be exploited (e.g., buffer overflow attacks).

04. Recursion, Advantage, Disadvantage.
Recursion is a technique in which a function calls itself to solve a smaller instance of the same problem, stopping at a base case.
Advantages:
- Reduces code length.
- Useful for problems like tree traversal, Tower of Hanoi, etc.
Disadvantages:
- More memory usage (stack space for each call).
- Difficult to debug.
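
A short Python sketch of recursion, using factorial as the classic example:

def factorial(n):
    # Base case: stops the chain of self-calls
    if n <= 1:
        return 1
    # Recursive case: the function calls itself on a smaller input
    return n * factorial(n - 1)

print(factorial(5))  # 120 -- each pending call occupies one stack frame (extra memory)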

05. Data Abstraction, Achieving Abstraction, Advantage.
Abstraction means hiding unnecessary implementation details and exposing only the needed features.
-> Achieving Abstraction
- Using Classes: Hide details in private members.
- Using Header Files: Example: the pow() function in <math.h>.
-> Advantages
- Encapsulation of data.
- Less code repetition.
- Easier modifications.
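
A minimal Python sketch of abstraction through a class (the BankAccount name and its methods are just illustrative):

class BankAccount:
    def __init__(self, balance):
        self.__balance = balance      # hidden detail ("private" member)

    def deposit(self, amount):        # exposed operation
        if amount > 0:
            self.__balance += amount

    def get_balance(self):            # controlled read access
        return self.__balance

acc = BankAccount(100)
acc.deposit(50)
print(acc.get_balance())  # 150 -- the caller never touches __balance directly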

06. Sets, Operations on Sets, Multisets, Operations on Multisets.
Set: A set is a collection of distinct objects (no duplicates). It is commonly used in mathematics and computer science to manage unique data.
Operations on Sets:
- Union (A ∪ B): Elements in A or B or both.
- Intersection (A ∩ B): Elements common to A and B.
- Difference (A - B): Elements only in A, not in B.
Multiset: A multiset is a collection of elements where duplicates are allowed (unlike sets).
Operations on Multisets:
- Union (A ∪ B): The highest occurrence of each element is taken.
- Intersection (A ∩ B): The lowest occurrence of each element is taken.
- Difference (A - B): The occurrence of each element in A is reduced by its occurrence in B.
- Sum (A + B): The occurrences of each element in A and B are added together.
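
These operations map directly onto Python's built-in set and collections.Counter (a multiset); a brief sketch:

from collections import Counter

# Sets: duplicates are not allowed
A, B = {1, 2, 3}, {2, 3, 4}
print(A | B)   # union        -> {1, 2, 3, 4}
print(A & B)   # intersection -> {2, 3}
print(A - B)   # difference   -> {1}

# Multisets: Counter records how many times each element occurs
M, N = Counter("aab"), Counter("abb")   # M has a:2, b:1; N has a:1, b:2
print(M | N)   # union: highest occurrence of each element  -> a:2, b:2
print(M & N)   # intersection: lowest occurrence            -> a:1, b:1
print(M - N)   # difference: counts in M reduced by N       -> a:1
print(M + N)   # sum: counts added together                 -> a:3, b:3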

07. Stack (push, pop, peek), Applications, Time & Space Complexity.
-> A stack is a data structure that follows the Last In, First Out (LIFO) rule: the last element added is the first one removed. Push, pop, and peek all take O(1) time and O(1) extra space.
PUSH(S, x):
 Step 1: Start
 Step 2: if Top >= n-1 then write "stack overflow" and go to Step 5
 Step 3: Top ← Top + 1
 Step 4: S[Top] ← x
 Step 5: Finished
POP(S):
 Step 1: Start
 Step 2: if Top = -1 then write "stack underflow" and go to Step 6
 Step 3: value ← S[Top]
 Step 4: Top ← Top - 1
 Step 5: return value
 Step 6: Finished
PEEK(S):
 Step 1: Start
 Step 2: if Top = -1 then write "stack underflow" and go to Step 4
 Step 3: return S[Top]
 Step 4: Finished
Applications of Stack:
- Undo/Redo (text editors)
- Backtracking (maze solving, browser history)
- Expression Evaluation (postfix, prefix)
- Function Calls (recursion in programming)
CODE:
class Stack:
    def __init__(self): self.arr = []
    def push(self, x): self.arr.append(x)
    def pop(self): return self.arr.pop() if self.arr else "Stack Underflow"
    def peek(self): return self.arr[-1] if self.arr else "Stack is Empty"

s = Stack()
s.push(10); s.push(20)
print(s.peek())  # Output: 20
s.pop()
print(s.peek())  # Output: 10

08. Queue (enqueue, dequeue), Applications, Time & Space Complexity.
-> A queue follows the First In, First Out (FIFO) rule: the first element added is the first one removed. Enqueue and dequeue each take O(1) time and O(1) extra space.
Applications of Queue:
- CPU Scheduling (Round Robin)
- Printer Queue (printing tasks)
- Call Center (customer service queue)
- Traffic Management (car queues at toll gates)
ENQUEUE(S, x):
 Step 1: Start
 Step 2: if Rear >= size - 1 then print "queue is overflow" and go to Step 6
 Step 3: Rear ← Rear + 1
 Step 4: S[Rear] ← x
 Step 5: if Front = -1 then Front ← 0
 Step 6: Exit
DEQUEUE(S):
 Step 1: Start
 Step 2: if Front = -1 then print "queue is underflow" and go to Step 6
 Step 3: x ← S[Front]
 Step 4: write x
 Step 5: if Front = Rear then Front ← -1, Rear ← -1 else Front ← Front + 1
 Step 6: Exit
CODE:
class Queue:
    def __init__(self): self.arr = []
    def enqueue(self, x): self.arr.append(x)
    def dequeue(self): return self.arr.pop(0) if self.arr else "Queue Underflow"
    def front(self): return self.arr[0] if self.arr else "Queue is Empty"

q = Queue()
q.enqueue(10); q.enqueue(20)
print(q.front())  # Output: 10
q.dequeue()
print(q.front())  # Output: 20
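
Note that the list-based dequeue above calls pop(0), which is O(n) because every remaining element shifts left. Python's collections.deque gives true O(1) enqueue and dequeue; a brief alternative sketch:

from collections import deque

q = deque()
q.append(10)      # enqueue at the rear, O(1)
q.append(20)
print(q[0])       # front -> 10
q.popleft()       # dequeue from the front, O(1)
print(q[0])       # front -> 20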

09. Time & Space Complexity Analysis (Definition, Types).
Time complexity measures how the running time of an algorithm grows as the input size grows.
-> Types of Time Complexity:
- Best Case → fastest execution time
- Average Case → expected execution time over typical inputs
- Worst Case → slowest execution time
-> Space Complexity: Measures how much memory an algorithm uses during execution.
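
A short illustration of best and worst cases with linear search (a standard example, not taken from the notes above):

def linear_search(arr, target):
    for i, value in enumerate(arr):   # up to n comparisons
        if value == target:
            return i
    return -1

data = [4, 8, 15, 16, 23, 42]
print(linear_search(data, 4))    # best case: target is the first element -> O(1)
print(linear_search(data, 99))   # worst case: target is absent           -> O(n)
# Space complexity is O(1): only a few extra variables are used.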

10. Asymptotic Notations (Big O, Omega, Theta).
-> Asymptotic notation helps to analyze the performance of an algorithm as the input size (n) grows.
-> It tells how the time or space complexity changes relative to input size.
Three main notations:
** Big-O (O) → Upper Bound (Worst Case):
- Worst case analysis.
- Definition: Describes the maximum time an algorithm can take.
- Used to measure the upper bound of an algorithm's runtime.
- Example: O(n²)
** Omega (Ω) → Lower Bound (Best Case):
- Best case analysis.
- Definition: Describes the minimum time an algorithm will take.
- Used to measure the lower bound of an algorithm's runtime.
- Example: Ω(1)
** Theta (Θ) → Tight Bound (Average Case):
- Average case analysis.
- Definition: Describes the exact growth rate of an algorithm's running time.
- Used when an algorithm has the same complexity in the worst and best cases.
- Example: Θ(n)
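
For reference, the standard formal definitions behind these notations:
- f(n) = O(g(n)) if there exist constants c > 0 and n₀ > 0 such that f(n) ≤ c·g(n) for all n ≥ n₀.
- f(n) = Ω(g(n)) if there exist constants c > 0 and n₀ > 0 such that f(n) ≥ c·g(n) for all n ≥ n₀.
- f(n) = Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)).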

11. String (Length, Copy, Concatenation, Compare, Reverse, Substring).
*** LENGTH:
def strlen(s):
    count = 0
    for _ in s: count += 1
    return count

print(strlen("Hello"))  # Output: 5

*** COPY:
def strcpy(s):
    s2 = ""
    for ch in s: s2 += ch
    return s2

print(strcpy("Hello"))  # Output: Hello

*** CONCATENATION:
def strcat(s1, s2):
    result = s1
    for ch in s2: result += ch
    return result

print(strcat("Hello ", "World"))  # Output: Hello World

*** COMPARISON:
def strcmp(s1, s2):
    i = 0
    while i < len(s1) and i < len(s2):
        if s1[i] != s2[i]: return False
        i += 1
    return len(s1) == len(s2)

print(strcmp("apple", "apple"))   # Output: True
print(strcmp("apple", "orange"))  # Output: False

*** REVERSE:
def strrev(s):
    rev = ""
    for i in range(len(s) - 1, -1, -1): rev += s[i]
    return rev

print(strrev("Hello"))  # Output: olleH

*** SUBSTRING:
def substr(s, start, length):
    sub = ""
    for i in range(start, start + length):
        if i < len(s): sub += s[i]
    return sub

print(substr("computer", 3, 3))  # Output: put

12. Sorting(Definition, Categories).


-> Sorting means arranging data in a specific order, such as ascending (smallest to largest) or descending (largest to smallest). It makes searching easier.
** Categories of Sorting:
- Internal Sorting: When the data fits in main memory.
- External Sorting: When the data is too large for main memory and is kept in external storage.

13. All Sorting Algorithms with pseudocode and Time & Space Complexity.
*** Bubble Sort: Time: Best O(n) (with an early-exit "swapped" check), Average & Worst O(n²); Space: O(1)
//Pseudocode:
procedure BubbleSort(A : array of n elements)
    for i from 0 to n-2 do
        for j from 0 to n-i-2 do
            if A[j] > A[j+1] then
                swap(A[j], A[j+1])
            end if
        end for
    end for
end procedure
//Algorithm: BubbleSort(A, n)
1. for i ← 0 to n-2 do
2.     for j ← 0 to n-i-2 do
3.         if A[j] > A[j+1] then
4.             swap A[j] and A[j+1]
5.         end if
6.     end for
7. end for
8. End
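
A runnable Python version of the bubble sort pseudocode above:

def bubble_sort(A):
    n = len(A)
    for i in range(n - 1):                 # after pass i, the largest i+1 items are in place
        for j in range(n - i - 1):
            if A[j] > A[j + 1]:
                A[j], A[j + 1] = A[j + 1], A[j]   # swap the out-of-order neighbours
    return A

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
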
*** Selection Sort: Time: Best, Average, and Worst Case O(n²); Space: O(1)
//Pseudocode:
procedure SelectionSort(A : array of n elements)
    for i from 0 to n-2 do
        minIndex = i
        for j from i+1 to n-1 do
            if A[j] < A[minIndex] then
                minIndex = j
            end if
        end for
        if minIndex ≠ i then
            swap(A[i], A[minIndex])
        end if
    end for
end procedure
//Algorithm: SelectionSort(A, n)
1. for i ← 0 to n-2 do
2.     minIndex ← i
3.     for j ← i+1 to n-1 do
4.         if A[j] < A[minIndex] then
5.             minIndex ← j
6.         end if
7.     end for
8.     swap A[i] and A[minIndex]
9. end for
10. End
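
A runnable Python version of the selection sort pseudocode above:

def selection_sort(A):
    n = len(A)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):          # find the smallest element in A[i..n-1]
            if A[j] < A[min_index]:
                min_index = j
        if min_index != i:
            A[i], A[min_index] = A[min_index], A[i]
    return A

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]
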
*** Insertion Sort: Time: Best O(n), Average & Worst O(n²); Space: O(1)
//Pseudocode:
procedure InsertionSort(A : array of n elements)
    for i from 1 to n-1 do
        key = A[i]
        j = i - 1
        while j >= 0 AND A[j] > key do
            A[j+1] = A[j]
            j = j - 1
        end while
        A[j+1] = key
    end for
end procedure
//Algorithm: InsertionSort(A, n)
1. for i ← 1 to n-1 do
2.     key ← A[i]
3.     j ← i - 1
4.     while j ≥ 0 and A[j] > key do
5.         A[j+1] ← A[j]
6.         j ← j - 1
7.     end while
8.     A[j+1] ← key
9. end for
10. End
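
A runnable Python version of the insertion sort pseudocode above:

def insertion_sort(A):
    for i in range(1, len(A)):
        key = A[i]
        j = i - 1
        while j >= 0 and A[j] > key:   # shift larger elements one slot to the right
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = key                 # place the key in its correct position
    return A

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
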
*** Merge Sort: Time: Best, Average, and Worst Case O(n log n); Space: O(n)
//Pseudocode:
procedure MergeSort(A, left, right)
    if left < right then
        mid = (left + right) / 2
        MergeSort(A, left, mid)
        MergeSort(A, mid+1, right)
        Merge(A, left, mid, right)
    end if
end procedure

procedure Merge(A, left, mid, right)
    create temp arrays L and R
    copy left half A[left..mid] into L and right half A[mid+1..right] into R
    merge L and R back into A in sorted order
end procedure

//Algorithm: MergeSort(A, left, right)
1. if left < right then
2.     mid ← (left + right) / 2
3.     MergeSort(A, left, mid)
4.     MergeSort(A, mid+1, right)
5.     Merge(A, left, mid, right)
6. end if
7. End

Algorithm: Merge(A, left, mid, right)
1. Create two temporary arrays L and R
2. Copy elements from A[left..mid] into L
3. Copy elements from A[mid+1..right] into R
4. Merge L and R into A in sorted order
5. End
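
A runnable Python sketch of merge sort; unlike the in-place pseudocode above, this version returns a new sorted list, which keeps the merge step short:

def merge_sort(A):
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    left = merge_sort(A[:mid])            # sort each half recursively
    right = merge_sort(A[mid:])
    merged, i, j = [], 0, 0               # merge the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])               # append whatever remains in either half
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]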
*** Quick Sort: Time: Best & Average Case O(n log n), Worst Case O(n²); Space: O(log n) on average (recursion stack)
//Pseudocode:
procedure QuickSort(A, low, high)
    if low < high then
        pivotIndex = Partition(A, low, high)
        QuickSort(A, low, pivotIndex - 1)
        QuickSort(A, pivotIndex + 1, high)
    end if
end procedure

procedure Partition(A, low, high)
    pivot = A[high]
    i = low - 1
    for j from low to high - 1 do
        if A[j] < pivot then
            i = i + 1
            swap(A[i], A[j])
        end if
    end for
    swap(A[i + 1], A[high])
    return i + 1
end procedure

Algorithm: QuickSort(A, low, high)
1. if low < high then
2.     pivotIndex ← Partition(A, low, high)
3.     QuickSort(A, low, pivotIndex - 1)
4.     QuickSort(A, pivotIndex + 1, high)
5. end if
6. End

Algorithm: Partition(A, low, high)
1. pivot ← A[high]
2. i ← low - 1
3. for j ← low to high - 1 do
4.     if A[j] < pivot then
5.         i ← i + 1
6.         swap A[i] and A[j]
7.     end if
8. end for
9. swap A[i+1] and A[high]
10. return i+1
*** Radix Sort: Time: All cases O(nk), where k is the number of digits; Space: O(n)
//Pseudocode:
procedure RadixSort(A, n)
    maxNumber = findMaximum(A)
    exp = 1
    while maxNumber / exp > 0 do
        CountingSortByDigit(A, n, exp)
        exp = exp * 10
    end while
end procedure

procedure CountingSortByDigit(A, n, exp)
    create output array of size n
    create count array of size 10 (for digits 0-9), initialized to 0
    count occurrences of each digit at (A[i] / exp) % 10
    update count array to store cumulative counts
    place elements in output array based on their digit values
    copy output array back to A
end procedure

Algorithm: RadixSort(A, n)
1. maxNumber ← FindMaximum(A)
2. exp ← 1
3. while maxNumber / exp > 0 do
4.     CountingSortByDigit(A, n, exp)
5.     exp ← exp * 10
6. end while
7. End

Algorithm: CountingSortByDigit(A, n, exp)
1. Create output array of size n
2. Create count array of size 10 (digits 0-9) and initialize to 0
3. Count occurrences of each digit at (A[i] / exp) % 10
4. Update count array to store cumulative counts
5. Place elements in output array based on their digit values
6. Copy output array back to A
7. End
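
A runnable Python version of the radix sort pseudocode above (assumes non-negative integers):

def counting_sort_by_digit(A, exp):
    n = len(A)
    output = [0] * n
    count = [0] * 10                        # one bucket per digit 0-9
    for value in A:                         # count occurrences of each digit
        count[(value // exp) % 10] += 1
    for d in range(1, 10):                  # cumulative counts give final positions
        count[d] += count[d - 1]
    for value in reversed(A):               # stable placement into the output array
        digit = (value // exp) % 10
        count[digit] -= 1
        output[count[digit]] = value
    A[:] = output                           # copy the output array back into A

def radix_sort(A):
    max_number = max(A)
    exp = 1
    while max_number // exp > 0:            # process digits from least significant
        counting_sort_by_digit(A, exp)
        exp *= 10

data = [170, 45, 75, 90, 802, 24, 2, 66]
radix_sort(data)
print(data)  # [2, 24, 45, 66, 75, 90, 170, 802]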

14. Binary Search Tree, Balanced Search Tree, AVL Tree, 2-3 Tree.
*** Binary Search Tree (BST)
A Binary Search Tree (BST) is a node-based binary tree where:
--> The left subtree contains elements smaller than the root.
--> The right subtree contains elements greater than the root.
--> Each subtree is also a BST.
Operations and time complexity:
- Insertion: best/average O(log n), worst O(n)
- Deletion: best/average O(log n), worst O(n)
- Searching: best/average O(log n), worst O(n)
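
A minimal Python sketch of BST insertion and search (recursive; in this sketch, equal keys go to the right subtree):

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)    # smaller keys go to the left subtree
    else:
        root.right = insert(root.right, key)  # larger keys go to the right subtree
    return root

def search(root, key):
    if root is None or root.key == key:
        return root is not None
    if key < root.key:
        return search(root.left, key)
    return search(root.right, key)

root = None
for k in [50, 30, 70, 20, 40]:
    root = insert(root, k)
print(search(root, 40))  # True
print(search(root, 65))  # False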

*** Balanced Search Tree: A Balanced Search Tree ensures that the height remains logarithmic (O(log n)), which makes operations faster than in an unbalanced BST.
Types: AVL Tree, Red-Black Tree, B-Tree, 2-3 Tree.

*** AVL Tree: An AVL Tree is a self-balancing BST where:
--> Balance Factor = Height(Left Subtree) - Height(Right Subtree) must be -1, 0, or 1 for every node.
- If an imbalance occurs, rotations are used to fix it.
- Rotations in AVL Tree:
--- LL Rotation (Right Rotation) → when the left subtree of the left child is too tall.
--- RR Rotation (Left Rotation) → when the right subtree of the right child is too tall.
--- LR Rotation (Left-Right Rotation) → left rotation on the left child, then right rotation on the node.
--- RL Rotation (Right-Left Rotation) → right rotation on the right child, then left rotation on the node.
--> Insert, Delete, Search: all O(log n)

*** 2-3 Tree: A 2-3 Tree is a balanced search tree where:
- Each node contains either 1 or 2 keys.
- Each internal node has either 2 or 3 children.
- All leaf nodes are at the same level.
-> Used in databases due to its efficiency.
--> Insert, Delete, Search: all O(log n)
