Final Revision: Theoretical Questions

Data Structure

Mamdouh Moussa
MOUSSA ACADEMY
Week 2: Introduction to Data Structures

1. What are the 3 main performance cases in algorithm analysis?

Answer:

1. Best Case – The minimum number of steps the algorithm could take.

2. Average Case – The expected number of steps over all possible inputs.

3. Worst Case – The maximum number of steps the algorithm will take.

2. How does using a proper data structure differ from using a simple array in terms
of performance and scalability?

Answer:
Using a simple array for tasks like searching or selecting elements can result in high
time complexity (e.g., sorting and scanning 30 million elements may not finish in
reasonable time).
A proper data structure, however, improves design and performance, especially in
large-scale problems.

3. Describe the steps on how you would implement a generic component using Java 5.

Answer:

1. Define a class with a type parameter (e.g., <T>).

2. Use the type parameter in field declarations, methods, and return types.

3. Instantiate the class with a specific type (e.g., GenericMemoryCell<Integer>).

4. Use diamond operator (Java 7+) for simplified syntax.

5. Leverage auto-boxing/unboxing for primitives using their wrapper classes.
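
Putting these steps together, here is a minimal sketch of such a component (the class name GenericMemoryCell follows the answer above; the read/write method names are illustrative):

    // A generic container with a single type parameter T (step 1).
    public class GenericMemoryCell<T> {
        private T storedValue;                       // step 2: field uses the type parameter

        public T read() { return storedValue; }      // step 2: return type uses T
        public void write(T x) { storedValue = x; }

        public static void main(String[] args) {
            // Steps 3-4: instantiate with a specific type; the diamond operator
            // (Java 7+) lets the compiler infer <Integer> on the right-hand side.
            GenericMemoryCell<Integer> cell = new GenericMemoryCell<>();
            cell.write(37);                          // step 5: auto-boxing int -> Integer
            int value = cell.read();                 // step 5: auto-unboxing Integer -> int
            System.out.println(value);
        }
    }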

4. What are the 4 rules for designing a recursive function?

Answer:

1. Base Case – Must exist and be solvable without recursion.

2. Making Progress – Each recursive call must move toward a base case.

3. Design Rule – Assume recursive calls work correctly.

4. Compound Interest Rule – Avoid solving the same subproblem more than once.
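
As an illustration (not from the slides), a memoized Fibonacci method that follows all four rules:

    import java.util.HashMap;
    import java.util.Map;

    public class Fib {
        private static final Map<Integer, Long> memo = new HashMap<>();

        static long fib(int n) {
            if (n <= 1) return n;                        // base case: no recursion needed
            if (memo.containsKey(n)) return memo.get(n); // compound interest rule: reuse results
            long result = fib(n - 1) + fib(n - 2);       // making progress: smaller arguments
            memo.put(n, result);                         // design rule: trust the recursive calls
            return result;
        }

        public static void main(String[] args) {
            System.out.println(fib(50)); // prints 12586269025
        }
    }
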
5. How does using wrapper classes differ from using primitive types in Java in terms
of functionality and compatibility?

Answer:
Primitive types (e.g., int, double) cannot be stored directly in Java collections or used as generic type arguments.
Wrapper classes (e.g., Integer, Double) enable storage in objects like ArrayList and support auto-boxing and unboxing, making them compatible with Java generics.

Week 3: Algorithm Analysis

1. What are the 3 main cases to analyze an algorithm?

Answer:

1. Worst Case – The scenario with the maximum number of steps.

2. Average Case – The expected performance over all inputs.

3. Best Case – The minimum number of steps required.

2. How does worst-case analysis differ from average-case analysis in terms of purpose and accuracy?

Answer:
Worst-case analysis provides an upper bound and guarantees performance under the
most demanding input.
Average-case analysis gives a more realistic expectation but depends on input
distribution and is harder to compute.

3. Describe the steps on how you would calculate the running time of a simple for-loop program.

Answer:

1. Identify the lines of code executed in and outside the loop.

2. Count each instruction’s execution based on the number of iterations.

3. Sum up all the contributions.

4. Ignore constant factors and lower-order terms to determine Big-O complexity.
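
For example, applying these steps to a simple summation loop (an illustrative sketch, not taken from the slides):

    // Computes 1^2 + 2^2 + ... + n^2.
    public static long sumOfSquares(int n) {
        long partialSum = 0;                // executed once
        for (int i = 1; i <= n; i++) {      // 1 initialization, n+1 tests, n increments
            partialSum += (long) i * i;     // executed n times, a constant amount of work each
        }
        return partialSum;                  // executed once
    }
    // The total is roughly c*n + d operations for constants c and d;
    // dropping constants and lower-order terms gives O(n).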


4. What are the 4 general rules for calculating the time complexity of code blocks?

Answer:

1. Rule 1 – For Loops: Time = number of iterations × time inside loop.

2. Rule 2 – Nested Loops: Multiply the sizes of all loops.

3. Rule 3 – Consecutive Statements: The running times add, so the largest term dominates in Big-O.

4. Rule 4 – if/else: Time = time of the condition test + the larger of the running times of S1 and S2.

5. How does Big-O notation differ from Omega and Theta in terms of bounds and
usage?

Answer:
Big-O gives the upper bound of an algorithm’s growth rate.
Omega gives the lower bound.
Theta defines the exact bound—when the upper and lower bounds are equal.

Week 4: Linked Lists

1. How does an abstract data type differ from a concrete data type in terms of
definition and implementation?

Answer:
An abstract data type (ADT) defines what data is stored and how it can be accessed,
without specifying implementation details.
A concrete data type defines how the data is actually stored and manipulated in
memory, including classes and methods.

2. What are the 3 main operations that can be performed on a List ADT?

Answer:

1. Insert – Add an item at a specific position.

2. Remove – Delete an item from a specific position.

3. Retrieve – Read an item from a specific position.


3. Describe the steps on how you would implement a singly linked list node.

Answer:

1. Create a Node class with a data field and a reference to the next node.

2. Define a head pointer to store the beginning of the list.

3. For insertion, create a new node, link it to the next node, and adjust previous
pointers.

4. For deletion, change the previous node’s pointer to skip the node to be deleted.
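
A minimal sketch of these steps in Java (class, field, and method names are illustrative, not from the slides):

    public class SinglyLinkedList<T> {
        // Step 1: a Node with a data field and a reference to the next node.
        private static class Node<T> {
            T data;
            Node<T> next;
            Node(T data, Node<T> next) { this.data = data; this.next = next; }
        }

        // Step 2: a head pointer marks the beginning of the list.
        private Node<T> head;

        // Step 3: insertion at the front - the new node links to the old head.
        public void insertFirst(T value) {
            head = new Node<>(value, head);
        }

        // Step 4: deletion at the front - head skips over the removed node.
        public void deleteFirst() {
            if (head != null) head = head.next;
        }
    }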

4. How does an ArrayList differ from a LinkedList in terms of insertion and access
time?

Answer:
ArrayList provides fast access (O(1)) but slow insertions at the front (O(n)).
LinkedList supports fast insertions/removals at known positions (O(1)) but slower
access (O(n)).

5. What are the 4 types of linked lists?

Answer:

1. Singly Linked List – One pointer to the next node.

2. Doubly Linked List – Pointers to both previous and next nodes.

3. Circular Linked List – Last node points to the first node.

4. Doubly Circular Linked List – Combines doubly and circular structures.

Week 5: Stack

1. How does a stack differ from a queue in terms of data access and application?

Answer:
A stack follows the LIFO (Last In, First Out) principle—only the last inserted item can be
removed first.
A queue follows the FIFO (First In, First Out) principle—items are removed in the order
they were added.
2. Describe the steps on how you would implement a stack using an array.

Answer:

1. Define an array and a variable topOfStack initialized to -1.

2. To push an item: increment topOfStack, then assign the value at array[topOfStack].

3. To pop an item: retrieve the value at array[topOfStack], then decrement topOfStack.
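
A minimal fixed-capacity sketch of this stack in Java (names are illustrative; a growable version would resize the array when full):

    public class ArrayStack {
        private final int[] array;
        private int topOfStack = -1;          // -1 means the stack is empty

        public ArrayStack(int capacity) {
            array = new int[capacity];
        }

        public void push(int x) {
            if (topOfStack == array.length - 1)
                throw new IllegalStateException("stack is full");
            array[++topOfStack] = x;          // increment topOfStack, then store
        }

        public int pop() {
            if (topOfStack == -1)
                throw new IllegalStateException("stack is empty");
            return array[topOfStack--];       // read the top, then decrement
        }
    }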

3. What are the 2 main operations in a stack?

Answer:

1. Push – Insert an element at the top.

2. Pop – Remove the most recently added element.

4. How does using a linked list differ from an array for implementing a stack in
terms of flexibility and memory usage?

Answer:
Linked list stacks can grow dynamically and don’t require pre-allocated memory.
Array stacks are fixed in size and may waste memory or need resizing when full.

5. What are the 4 main applications of stacks discussed in this chapter?

Answer:

1. Expression Evaluation

2. Expression Conversion (e.g., infix to postfix)

3. Backtracking (e.g., undo operations)

4. Memory Management
Week 6: Queue

1. How does a queue differ from a stack in terms of data access and application?

Answer:
A queue follows the FIFO principle, where the first inserted element is the first to be
removed.
A stack, by contrast, uses LIFO, where the last inserted element is removed first.

2. Describe the steps on how you would implement a queue using a circular array.

Answer:

1. Create an array and track front, back, and currentSize.

2. For enqueue: increment back (wrap around if needed), place the element, and
update currentSize.

3. For dequeue: read the value at front, increment front (wrap around), and update
currentSize.

4. Use modular arithmetic to wrap around indices.
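
A minimal fixed-capacity sketch of these steps in Java (names are illustrative):

    public class CircularArrayQueue {
        private final int[] items;
        private int front = 0;          // index of the oldest element
        private int back = -1;          // index of the most recently added element
        private int currentSize = 0;

        public CircularArrayQueue(int capacity) {
            items = new int[capacity];
        }

        public void enqueue(int x) {
            if (currentSize == items.length)
                throw new IllegalStateException("queue is full");
            back = (back + 1) % items.length;   // wrap around with modular arithmetic
            items[back] = x;
            currentSize++;
        }

        public int dequeue() {
            if (currentSize == 0)
                throw new IllegalStateException("queue is empty");
            int value = items[front];
            front = (front + 1) % items.length; // wrap around with modular arithmetic
            currentSize--;
            return value;
        }
    }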

3. What are the 2 basic operations in a queue?

Answer:

1. Enqueue – Insert at the rear of the queue.

2. Dequeue – Remove from the front of the queue.

4. How does array-based queue implementation differ from linked list implementation in terms of memory and flexibility?

Answer:
Array-based queues use fixed-size memory and may require resizing.
Linked list queues are dynamic and grow/shrink as needed, offering greater flexibility.
5. What are 3 common real-life applications of queues?

Answer:

1. Printer Job Scheduling – Tasks are printed in order of arrival.

2. Customer Service Lines – Serve first-come-first-served.

3. Call Center Queues – Calls are handled in the order received.

Week 7: Tree

1. What are the 3 main parts of a tree node?

Answer:

1. Data – The value stored in the node.

2. Left Child – Reference to the left subtree.

3. Right Child – Reference to the right subtree.

2. How does a binary tree differ from a linked list in terms of structure and traversal
options?

Answer:
A binary tree allows each node to link to two children (left and right), supporting
hierarchical data and various tree traversals (inorder, preorder, postorder).
A linked list is linear, with each node pointing to only one next node, and supports
sequential access only.

3. What are the 3 basic types of tree traversal?

Answer:

1. Preorder – Visit root, then left subtree, then right subtree.

2. Inorder – Visit left subtree, root, then right subtree.

3. Postorder – Visit left subtree, right subtree, then root.
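
A minimal recursive sketch of the three traversals (node and method names are illustrative):

    class TreeNode {
        int data;
        TreeNode left, right;
        TreeNode(int data) { this.data = data; }
    }

    class Traversals {
        static void preorder(TreeNode t) {
            if (t == null) return;
            System.out.print(t.data + " ");   // visit the root first
            preorder(t.left);
            preorder(t.right);
        }

        static void inorder(TreeNode t) {
            if (t == null) return;
            inorder(t.left);
            System.out.print(t.data + " ");   // visit the root between the subtrees
            inorder(t.right);
        }

        static void postorder(TreeNode t) {
            if (t == null) return;
            postorder(t.left);
            postorder(t.right);
            System.out.print(t.data + " ");   // visit the root last
        }
    }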


4. Describe the steps on how you would build an expression tree from a postfix
expression.

Answer:

1. Start with an empty stack.

2. Read each token from the postfix expression.

3. If the token is an operand, push it as a node.

4. If the token is an operator, pop two nodes, make them children, and push the
new subtree.

5. Repeat until the expression is fully processed.
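
A minimal sketch of this construction in Java, assuming space-separated tokens and the four basic operators (class and method names are illustrative):

    import java.util.ArrayDeque;
    import java.util.Deque;

    class ExprNode {
        String token;
        ExprNode left, right;
        ExprNode(String token) { this.token = token; }
    }

    class ExpressionTree {
        // Builds an expression tree from a postfix string such as "a b + c *".
        static ExprNode build(String postfix) {
            Deque<ExprNode> stack = new ArrayDeque<>();   // step 1: empty stack
            for (String token : postfix.split("\\s+")) {  // step 2: read each token
                ExprNode node = new ExprNode(token);
                if ("+-*/".contains(token)) {             // step 4: operator
                    node.right = stack.pop();             // right operand was pushed last
                    node.left = stack.pop();
                    stack.push(node);                     // push the new subtree
                } else {
                    stack.push(node);                     // step 3: operand becomes a leaf
                }
            }
            return stack.pop();                           // step 5: remaining node is the root
        }
    }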

5. How does a full binary tree differ from a complete binary tree in terms of node
arrangement and level filling?

Answer:
A full binary tree has every node with either 0 or 2 children.
A complete binary tree has all levels completely filled except possibly the last, which
is filled from left to right.

Week 9: Hashing & Heaps (Week 8 does not exist in the slides)

1. What is a hash function and what is its role in a hash table?

Answer:
A hash function computes a hash code from a key, which is then used as an index in a
hash table.
Its role is to distribute keys uniformly and minimize collisions.
Example: hash(x) = x mod 10

2. How does separate chaining differ from open addressing in resolving collisions?

Answer:
Separate chaining uses linked lists at each index to store colliding items.
Open addressing searches for another available position within the table using probing
techniques.
3. Describe the steps on how you would insert elements using quadratic probing.

Answer:

1. Compute initial hash index using key mod table size.

2. If collision occurs, use formula: index = (hashed key + i²) mod table size.

3. Repeat with increasing i until an empty slot is found.

4. Insert the element at that position.
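
A minimal sketch of such an insert in Java (assumes an Integer[] table that never becomes full; deletion and rehashing are omitted):

    class QuadraticProbingInsert {
        static void insert(Integer[] table, int key) {
            int hash = key % table.length;                 // step 1: initial index
            for (int i = 0; ; i++) {
                int index = (hash + i * i) % table.length; // step 2: probe at hash + i^2
                if (table[index] == null) {                // step 3: empty slot found
                    table[index] = key;                    // step 4: insert the element
                    return;
                }
            }
        }
    }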

4. What are the 3 main techniques for open addressing?

Answer:

1. Linear Probing

2. Quadratic Probing

3. Double Hashing

5. How does a binary heap implement a priority queue in terms of structure and
operations?

Answer:
A binary heap is a complete binary tree that maintains the heap-order property: in a min-heap, each parent is no larger than its children.
It supports insert and deleteMin operations efficiently, making it suitable for
implementing priority queues.

Week 10: Sorting I

1. What is insertion sort and how does it work?

Answer:
Insertion sort builds the sorted list one item at a time:

1. Start with a single-item sorted sublist.

2. Take the next item and insert it in the correct position in the sublist.

3. Repeat until the entire list is sorted.
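
A minimal sketch of insertion sort in Java:

    class InsertionSort {
        static void sort(int[] a) {
            // a[0..p-1] is the sorted sublist; insert a[p] into its correct position.
            for (int p = 1; p < a.length; p++) {
                int tmp = a[p];
                int j = p;
                while (j > 0 && a[j - 1] > tmp) {
                    a[j] = a[j - 1];   // shift larger elements one slot to the right
                    j--;
                }
                a[j] = tmp;            // drop the item into its place
            }
        }
    }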


2. Describe the steps on how you would implement shellsort.

Answer:

1. Choose an increment sequence (e.g., Shell’s method: h = N/2, then h = h/2, etc.).

2. Divide the list into sublists using the increment.

3. Sort each sublist using insertion sort.

4. Reduce the increment and repeat the process.

5. Stop when the increment is 1.
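
A minimal sketch using Shell's original increment sequence (N/2, N/4, ..., 1):

    class ShellSort {
        static void sort(int[] a) {
            for (int gap = a.length / 2; gap > 0; gap /= 2) {  // shrink the increment
                // Insertion sort applied to each gap-separated sublist.
                for (int p = gap; p < a.length; p++) {
                    int tmp = a[p];
                    int j = p;
                    while (j >= gap && a[j - gap] > tmp) {
                        a[j] = a[j - gap];   // far-apart elements move in one step
                        j -= gap;
                    }
                    a[j] = tmp;
                }
            }
        }
    }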

3. How does shellsort improve over insertion sort in terms of efficiency and
distance of swaps?

Answer:
Shellsort allows comparisons and swaps of elements that are far apart using increment
sequences, reducing the number of moves.
Insertion sort only compares adjacent elements, which is less efficient for large,
unsorted lists.

4. What are the differences between max-heap and min-heap in terms of sorting
order?

Answer:
A max-heap places the largest value at the root, yielding increasing sorted order when
used in heapsort.
A min-heap places the smallest value at the root, yielding decreasing sorted order.

5. What are the 4 main steps in heapsort?

Answer:

1. Swap root with last element.

2. Disconnect last element (now sorted).

3. Fix heap by re-heaping remaining elements.

4. Repeat steps 1–3 until all elements are sorted.


Week 11: Sorting II

1. Describe the steps on how you would implement the mergesort algorithm.

Answer:

1. Divide the list into two equal halves.

2. Recursively divide each half until each sublist has one item.

3. Merge pairs of sublists into sorted lists.

4. Repeat merging until all are combined into a single sorted list.
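
A compact sketch of mergesort in Java (for clarity it copies subarrays rather than merging into a shared temporary array, as a lecture version might):

    import java.util.Arrays;

    class MergeSort {
        static int[] sort(int[] a) {
            if (a.length <= 1) return a;                        // a one-item list is sorted
            int mid = a.length / 2;
            int[] left = sort(Arrays.copyOfRange(a, 0, mid));   // divide and recurse
            int[] right = sort(Arrays.copyOfRange(a, mid, a.length));
            return merge(left, right);                          // combine the sorted halves
        }

        static int[] merge(int[] left, int[] right) {
            int[] result = new int[left.length + right.length];
            int i = 0, j = 0, k = 0;
            while (i < left.length && j < right.length)
                result[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
            while (i < left.length) result[k++] = left[i++];    // copy any leftovers
            while (j < right.length) result[k++] = right[j++];
            return result;
        }
    }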

2. How does quicksort differ from mergesort in terms of memory usage and
partitioning?

Answer:
Quicksort does not require additional memory (in-place) and partitions the list around
a pivot.
Mergesort requires extra memory for merging and divides the list evenly regardless of
values.

3. What are the 4 steps in the mergesort process?

Answer:

1. Divide the list.

2. Recursively divide further.

3. Merge each pair of sublists into a sorted list.

4. Continue merging until the list is fully sorted.

4. Describe the steps on how quicksort rearranges elements around a pivot.

Answer:

1. Choose a pivot element.

2. Move elements less than the pivot to its left.

3. Move elements greater than the pivot to its right.

4. Recursively apply the process to the left and right partitions.

5. Concatenate results to form the sorted list.
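
A minimal in-place sketch using a Lomuto-style partition (the slides may choose the pivot and partition differently, e.g. median-of-three); call it as sort(a, 0, a.length - 1):

    class QuickSort {
        static void sort(int[] a, int low, int high) {
            if (low >= high) return;
            int p = partition(a, low, high);   // pivot ends up in its final position
            sort(a, low, p - 1);               // recurse on the "less than" partition
            sort(a, p + 1, high);              // recurse on the "greater than" partition
        }

        // Uses a[high] as the pivot and moves smaller elements to its left.
        static int partition(int[] a, int low, int high) {
            int pivot = a[high];
            int i = low;                       // boundary of the "less than pivot" region
            for (int j = low; j < high; j++) {
                if (a[j] < pivot) {
                    int t = a[i]; a[i] = a[j]; a[j] = t;
                    i++;
                }
            }
            int t = a[i]; a[i] = a[high]; a[high] = t;  // place the pivot between the groups
            return i;
        }
    }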


5. What are the 3 element groups used in quicksort partitioning?

Answer:

1. Group 1 – Elements less than the pivot.

2. Group 2 – Elements equal to the pivot.

3. Group 3 – Elements greater than the pivot.

Week 12: Graph Algorithms

1. What are the main components of a graph?

Answer:

1. Vertices (V): The nodes of the graph, representing entities.

2. Edges (E): Connections between vertices.

3. Weights (optional): Costs or values associated with edges in a weighted graph.

2. How does a directed graph differ from an undirected graph in terms of edge
behavior and representation?

Answer:
In a directed graph (digraph), edges have direction, meaning (v, w) is not the same as
(w, v).
In an undirected graph, edges are bidirectional—(v, w) and (w, v) represent the same
connection.

3. Describe the steps on how you would implement a graph using an adjacency list.

Answer:

1. Create a list of vertices.

2. For each vertex, create a sublist to store its adjacent vertices.

3. When adding an edge (v, w), append w to v’s adjacency list.

4. For undirected graphs, also append v to w’s list.

5. Use the list structure to traverse neighbors efficiently.
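
A minimal sketch in Java, assuming vertices are labelled 0..n-1 (class and method names are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    class Graph {
        private final List<List<Integer>> adj = new ArrayList<>();

        Graph(int numVertices) {
            for (int v = 0; v < numVertices; v++)
                adj.add(new ArrayList<>());      // one adjacency sublist per vertex
        }

        void addEdge(int v, int w) {
            adj.get(v).add(w);                   // directed edge v -> w
            adj.get(w).add(v);                   // include this line only for undirected graphs
        }

        List<Integer> neighbors(int v) {
            return adj.get(v);                   // iterate this list to traverse v's neighbors
        }
    }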


4. What are the key differences between a graph and a tree?

Answer:
A tree is a special kind of graph that is acyclic and connected with n−1 edges for n
nodes.
A graph can be cyclic, disconnected, and have any number of edges.

Source: General content across “Definitions” and implied understanding of tree vs. graph structure.

5. What are the 3 main ways to represent a graph in memory?

Answer:

1. Adjacency List – List of neighbors per vertex.

2. Adjacency Matrix – 2D array indicating edge presence.

3. Edge List – A list of all edges (pairs of vertices).

Week 13: Algorithm Design Techniques I

1. How does greedy algorithm design differ from dynamic programming in terms of
decision-making and subproblem reuse?

Answer:
Greedy algorithms make the best local decision at each step, aiming for a global
optimum but without reusing subproblem results.
Dynamic programming breaks problems into overlapping subproblems and stores
results to avoid recomputation, ensuring optimal solutions for problems with optimal
substructure.

2. Describe the steps on how you would implement the divide and conquer
strategy.

Answer:

1. Divide the original problem into smaller subproblems.

2. Conquer each subproblem recursively.

3. Combine the solutions of the subproblems into the final solution.


3. What are the 3 main steps in the divide and conquer approach?

Answer:

1. Divide

2. Conquer

3. Combine

4. How does dynamic programming differ from divide and conquer in terms of
subproblem overlap and optimization?

Answer:
Divide and conquer assumes subproblems are independent and solves them
separately.
Dynamic programming handles overlapping subproblems by solving each only once
and storing the results to optimize performance.

5. Describe the steps on how you would solve the scheduling problem using a
greedy algorithm.

Answer:

1. Sort activities by their finish times.

2. Select the first activity.

3. For each next activity, select it only if it starts after the previous one finishes.

4. Continue until all activities are considered.
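
A minimal sketch of this greedy activity selection in Java (each activity is represented as a hypothetical {start, finish} pair; the slides may use a different representation):

    import java.util.Arrays;
    import java.util.Comparator;

    class ActivitySelection {
        // Returns the number of non-overlapping activities that can be scheduled.
        static int select(int[][] activities) {
            Arrays.sort(activities, Comparator.comparingInt(a -> a[1])); // sort by finish time
            int count = 0;
            int lastFinish = Integer.MIN_VALUE;
            for (int[] act : activities) {
                if (act[0] >= lastFinish) {   // starts after the previous one finishes
                    count++;
                    lastFinish = act[1];
                }
            }
            return count;
        }
    }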


Algorithms

ID | Category                 | Topic / Algorithm                       | Week
1  | Data Structure Operation | Linked List (Insert, Delete, Traverse)  | 4
2  | Data Structure Operation | Stack (Push, Pop, Array/Linked)         | 5
3  | Data Structure Operation | Queue (Enqueue, Dequeue, Circular)      | 6
4  | Tree Algorithm           | Tree Traversals (Pre, In, Post)         | 7
5  | Tree Algorithm           | Expression Tree Construction            | 7
6  | Hashing Technique        | Linear Probing                          | 9
7  | Hashing Technique        | Quadratic Probing                       | 9
8  | Hashing Technique        | Double Hashing                          | 9
9  | Hashing Technique        | Rehashing                               | 9
10 | Sorting Algorithm        | Insertion Sort                          | 10
11 | Sorting Algorithm        | Shell Sort                              | 10
12 | Sorting Algorithm        | Heap Sort                               | 10
13 | Sorting Algorithm        | Merge Sort                              | 11
14 | Sorting Algorithm        | Quick Sort                              | 11
15 | Graph Algorithm          | Dijkstra’s Algorithm                    | 12
16 | Graph Traversal          | BFS / DFS                               | 12
17 | Design Technique         | Greedy Algorithm                        | 13
18 | Design Technique         | Dynamic Programming                     | 13
19 | Design Technique         | Divide and Conquer                      | 13
