Data-Structure Cheat Notes
(Quick Guide)
What are Data Structures?
A data structure is a way of storing and organizing data on a computer so that it can be accessed and updated efficiently. Each data structure is specialized for particular tasks. Think of it like a filing system where you put information in different folders or boxes so you can easily find and use it later.
• Insertion: This is adding new data, like writing a new item (mango) on your shopping list.
• Deletion: This is removing data, like taking an item (mango) off your shopping list.
• Traversal: This means going through each item one by one, like flipping through pages of a book to
look at each word.
• Searching: This is trying to find something in the data, like looking for a specific contact in your
phone.
• Sorting: This means arranging data in order, like organizing books on a shelf alphabetically.
• Access: This is how we retrieve the data we’ve stored, like opening a document on your computer to
read it.
Common data structures include:
• Array: A fixed-size collection of elements stored side by side in memory and accessed by index.
• Linked List: A sequence of elements where each element points to the next one, like a chain where each link is connected to the next.
• Stack: A collection that follows the Last In, First Out (LIFO) principle, like a stack of plates where
the last plate placed on top is the first one removed.
• Queue: A collection that follows the First In, First Out (FIFO) principle, like a line of people waiting
to buy tickets—first in line, first to leave.
• Tree: A hierarchical structure with a root and branches, like a family tree where each person can
have children, and those children can have their own children.
• Graph: A set of nodes (or vertices) connected by edges, like cities connected by roads on a map,
where you can move from one node to another based on their connections.
• Hash Table (or Hash Map): A collection that stores data in key-value pairs, like a dictionary where
each word (key) has a corresponding definition (value).
Arrays
An array is a data structure that holds a fixed number of elements, all of the same type,
in contiguous memory locations. Each element in the array can be accessed using its
index, starting from 0 for the first element.
Insertion:
Adding an element to the array at a specific index.
• Example: If we have {10, 20, 30, 40}, inserting 25 at index 2 will make it {10, 20, 25, 30,
40}.
• Time Complexity: O(n) if you need to shift elements.
Deletion:
Removing an element from a specific index and shifting other elements.
• Example: Removing 30 from {10, 20, 30, 40} will result in {10, 20, 40}.
• Time Complexity: O(n) due to shifting.
Traversal:
Visiting each element of the array one by one.
Searching:
Finding an element by its value or index.
• Linear Search (O(n)): Goes through each element until it finds the one you're
looking for.
• Binary Search (O(log n)): If the array is sorted, it can divide and conquer to find
an element faster.
Sorting:
Arranging the elements in increasing or decreasing order.
• Algorithms like QuickSort (O(n log n)) and BubbleSort (O(n²)) can be applied.
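The operations above can be sketched with a plain Python list, which behaves like a dynamic array:

```python
arr = [10, 20, 30, 40]

# Insertion: add 25 at index 2; elements after it shift right, O(n)
arr.insert(2, 25)
print(arr)  # [10, 20, 25, 30, 40]

# Deletion: remove the value 30; elements after it shift left, O(n)
arr.remove(30)
print(arr)  # [10, 20, 25, 40]

# Traversal: visit each element one by one
for value in arr:
    print(value)

# Searching: index() performs a linear search, O(n)
print(arr.index(25))  # 2

# Sorting: Python's built-in sort runs in O(n log n)
arr.sort(reverse=True)
print(arr)  # [40, 25, 20, 10]
```

Note that Python lists resize automatically, so they avoid the fixed-size limitation of classic arrays at the cost of occasional reallocation.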
Advantages of Arrays:
Direct Access:
Arrays provide fast and direct access to any element using its index, which makes it efficient for read
operations.
Memory Efficiency:
Arrays use contiguous memory, so they are efficient in terms of memory usage compared to some
other data structures that require extra memory for pointers or links.
Ease of Use:
They are simple to understand and use. Most programming languages have built-in support for
arrays.
Disadvantages of Arrays:
Fixed Size:
Once an array is created, you cannot resize it. If you need to store more elements than you originally
planned for, you'll need to create a new, larger array and copy the data over.
• Example: You can't add a 6th element to a {10, 20, 30, 40, 50} array if its size is fixed to 5.
Wasted Memory:
If you allocate an array larger than necessary (to avoid resizing), you might end up with
unused space, which can waste memory.
Hash Table
A hash table (or hash map) is a data structure that stores data in key-value pairs. It
uses a hash function to compute an index (or hash code), where the value
corresponding to a particular key is stored. This allows for fast access to the values
based on their keys.
What is a hash function?
A hash function is like a special formula that takes a key (like a word, number, or name) and converts
it into a unique number, which is called a hash value or hash code. This number tells us where to
store or find the data in a hash table.
In simple terms, a hash function helps convert a piece of data (key) into a number that tells us where
to store or find it quickly in a hash table, without searching through all the lockers (or data).
Insertion:
Adding a new key-value pair to the hash table.
Deletion:
Removing a key-value pair from the hash table.
Search:
Finding the value associated with a given key.
Traversal:
Visiting every key-value pair in the hash table.
• Time Complexity: O(n) where n is the number of elements in the hash table.
Collisions:
If two keys hash to the same index, it’s called a collision. Hash tables need a way to handle collisions,
typically through chaining (storing multiple values at the same index) or open addressing (finding
another index). Collisions can degrade performance.
Memory Overhead:
Hash tables often use more memory than other data structures because they require extra space for
the hash function and to handle collisions.
Unordered:
Hash tables do not maintain the order of elements. If you need to store elements in a specific order, a
hash table is not a good choice.
Chaining:
In chaining, each index of the hash table points to a linked list of entries. If two keys hash to the same
index, both are stored in the list.
Open Addressing:
In open addressing, when a collision occurs, the hash table looks for the next available slot based on a
probing sequence (e.g., linear probing or quadratic probing).
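Chaining can be sketched with a minimal hash table class. This is a simplified illustration with a fixed bucket count, not a production implementation:

```python
class ChainedHashTable:
    """A minimal hash table using chaining: each bucket holds a list of (key, value) pairs."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key):
        # The hash function maps a key to a bucket index.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: update in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (or collision): append to the chain

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

    def delete(self, key):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                del bucket[i]
                return
        raise KeyError(key)

table = ChainedHashTable()
table.put("apple", 3)
table.put("banana", 5)
print(table.get("apple"))  # 3
table.delete("apple")
```

In everyday Python you would simply use the built-in `dict`, which handles hashing and collisions for you.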
Linked List
A linked list is a linear data structure where elements, called nodes, are stored in a sequence. Each
node contains two parts: the data itself, and a pointer (reference) to the next node.
Unlike arrays, linked lists don't store elements in contiguous memory locations. Instead, each
element points to the next, forming a chain. Think of it like a train where each car is connected to
the next one.
Imagine you have a box of toys. Instead of putting the toy directly on a shelf, you have a piece of
paper that says, "The toy is in Box A." This paper doesn't have the toy; it just tells you where to find it.
In computer programming, a pointer works similarly—it holds the address (or location) of a piece of
data in memory instead of the data itself.
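The node-and-pointer idea can be sketched in a few lines of Python. Each node stores its data plus a reference to the next node:

```python
class Node:
    def __init__(self, data):
        self.data = data   # the value stored in this node
        self.next = None   # pointer to the next node (None marks the end)

class LinkedList:
    def __init__(self):
        self.head = None

    def append(self, data):
        """Add a node at the end by walking the chain, O(n)."""
        new_node = Node(data)
        if self.head is None:
            self.head = new_node
            return
        current = self.head
        while current.next is not None:
            current = current.next
        current.next = new_node

    def to_list(self):
        """Traversal: follow the pointers from head to tail."""
        values = []
        current = self.head
        while current is not None:
            values.append(current.data)
            current = current.next
        return values

train = LinkedList()
for value in [10, 20, 30]:
    train.append(value)
print(train.to_list())  # [10, 20, 30]
```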
Stack
A stack is a linear data structure that follows the LIFO (Last In, First Out) principle, meaning the last
element added is the first one to be removed.
Think of a stack as a pile of plates: you always add a new plate on top, and when you need one, you
take it from the top.
Key Operations
• Push: Add an element to the top of the stack.
• Pop: Remove the top element from the stack.
• Peek: Look at the top element without removing it.
Advantages of Stack
Simple to Implement: Easy to understand and use for basic applications like reversing strings or
checking for balanced parentheses.
Efficient Operations: Push and pop operations are fast, with a time complexity of O(1).
Useful in Recursion: Stacks are often used in algorithms that rely on recursion (such as depth-first
search).
Disadvantages of Stack
Limited Access: You can only access or remove the top element; if you need something in the middle,
you’ll have to pop everything above it.
Fixed Size (for Static Stacks): In some implementations, like array-based stacks, the size might be
fixed, which limits how much data you can store.
Not Ideal for Random Access: Stacks are not suitable for situations where you need to access
elements randomly or search for specific items in the middle.
Browser Back Button: Browsers use a stack to keep track of the pages you visit. When you press the
back button, it pops the last visited page off the stack.
Expression Evaluation: Stacks are often used to evaluate expressions, especially in compilers and
interpreters.
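The balanced-parentheses check mentioned above is a classic stack application. A Python list works as a stack because append and pop at the end are both O(1):

```python
def is_balanced(expression):
    """Check for balanced brackets using a stack (a Python list)."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expression:
        if ch in '([{':
            stack.append(ch)                           # push the opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:  # pop must match
                return False
    return not stack  # balanced only if nothing is left unmatched

print(is_balanced("(a + [b * c])"))  # True
print(is_balanced("(a + b]"))        # False
```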
Queue
A queue is a linear data structure that follows the FIFO (First In, First Out) principle, meaning the first
element added to the queue is the first one to be removed.
Think of a queue like waiting in line at a grocery store checkout. The first person to get in line is the
first one to be served, and the last person to join the line has to wait their turn.
Enqueue (Add to the queue): When someone joins the back of the line, it's like adding an item to the
end of the queue.
Dequeue (Remove from the queue): When the person at the front of the line is served and leaves,
it's like removing an item from the front of the queue.
Peek: If you want to see who’s first in line without serving them, that’s like looking at the front item
in the queue without removing it.
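The enqueue, dequeue, and peek operations can be sketched with Python's `collections.deque`, which supports O(1) operations at both ends:

```python
from collections import deque

line = deque()

# Enqueue: people join at the back of the line
line.append("Alice")
line.append("Bob")
line.append("Carol")

# Peek: see who's first without serving them
print(line[0])         # Alice

# Dequeue: the person at the front is served and leaves, O(1)
print(line.popleft())  # Alice
print(line.popleft())  # Bob
print(list(line))      # ['Carol']
```

A plain list could be used instead, but `list.pop(0)` shifts every remaining element and costs O(n), which is why `deque` is the idiomatic choice for queues.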
Use Cases:
->Ticket counter: The first person in line buys the ticket first, and new people join at the back.
->Customer service: Calls are answered in the order they are received, so the first caller gets served
first.
Advantages:
Order Preservation: Items are processed in the exact order they were added
Multiple Implementations: Can be implemented using arrays, linked lists, or circular arrays.
Disadvantages:
Limited Access: Only the front and rear elements can be accessed directly.
Fixed Size: If implemented with arrays, resizing is required for dynamic operations.
Use Cases:
-> Task Scheduling: Queues are used in scheduling tasks like CPU processes.
-> Printing Jobs: Jobs in a printer queue are processed in the order they are received.
-> Breadth-First Search: Used in algorithms to explore nodes layer by layer.
Types of Queues:
Simple Queue: The basic FIFO structure. Just like standing in a straight line. The first person to enter the
line is the first one served (FIFO - First In, First Out). You can only add elements at the rear (end) and remove
them from the front.
Limitations: If implemented with a fixed array, once the rear reaches the end, even if space is available at the
front (after dequeuing), it cannot be reused without shifting elements.
Circular Queue: The last element points back to the first, making the queue circular. Imagine a queue
that wraps around in a circle. After the last position, it loops back to the first, making efficient use of
space.
Useful in systems where memory is limited, like buffering in network routers or managing tasks in a
round-robin fashion.
Makes better use of space compared to a simple queue. When the rear reaches the end, it can still
reuse the free space at the front.
Priority Queue: Elements are dequeued based on priority rather than order. Here, each item has a
priority level, and the highest priority item is served first, regardless of when it joined the queue. It's
like VIP customers always getting to cut in line.
Task management where some tasks are more urgent than others, such as emergency services or
operating systems scheduling processes.
Requires additional computation to sort items by priority when adding them, which can slow down
the enqueue operation.
When elements are added (enqueued), they are placed in the queue according to their priority. This
can be done by sorting the elements, or by using a specialized data structure (like a binary heap) that
efficiently maintains the priority order.
Unlike a regular queue where the front and rear are based on the order in which elements were
added (FIFO), a priority queue doesn't follow this rule.
Instead, it places elements based on their priority. The highest-priority element gets to the front of
the queue, regardless of when it was added.
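Python's `heapq` module implements exactly the binary-heap approach described above. Here the convention (an assumption of this sketch) is that a smaller number means higher priority:

```python
import heapq

tasks = []
# Each entry is (priority, task); the heap keeps the smallest priority at the front.
heapq.heappush(tasks, (3, "write report"))
heapq.heappush(tasks, (1, "fix server outage"))  # most urgent
heapq.heappush(tasks, (2, "reply to emails"))

# Dequeue order follows priority, not insertion order.
while tasks:
    priority, task = heapq.heappop(tasks)
    print(priority, task)
# 1 fix server outage
# 2 reply to emails
# 3 write report
```

Both `heappush` and `heappop` run in O(log n), which is what keeps priority-queue operations fast even for large workloads.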
Double-Ended Queue (Deque): Like a line where you can enter or leave from both ends. You can add
or remove elements from either the front or the rear. Useful in scenarios where you need to quickly
add or remove items from both ends, such as undo/redo operations in software or implementing
sliding windows in algorithms. Very flexible, allowing operations at both ends.
Operations:
• Add to the front or add to the rear of the deque.
• Remove from the front or remove from the rear of the deque.
Imagine you're at a small park bench where you can sit on either side. You can choose to sit on one
end, and if someone else joins, they can sit on the opposite end. Similarly, when leaving, you can get
up from either side, and the same goes for others.
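The park-bench analogy maps directly onto `collections.deque`, which supports all four end operations in O(1):

```python
from collections import deque

bench = deque()
bench.append("right-sitter")     # add at the rear
bench.appendleft("left-sitter")  # add at the front
print(list(bench))               # ['left-sitter', 'right-sitter']

print(bench.pop())      # remove from the rear:  right-sitter
print(bench.popleft())  # remove from the front: left-sitter
```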
Tree
A Tree is a type of data structure used to organize data in a hierarchical way, like a family tree or an
organization chart. Each element in a tree is called a node, and nodes are connected by edges (links)
that show relationships between them.
Imagine you're organizing your computer's folders. You start with a main folder, and inside it, there
are subfolders, which might contain more subfolders or files. This structure forms a tree, with each
folder or file being a node in the tree.
Key Terms:
->Root: This is the starting node of the tree, like the main folder on your computer. It's the topmost element
that doesn't have a parent.
->Node: Each item in the tree, like a folder or file, is called a node. Nodes can store data (e.g., the name of a
file).
->Edge: The connection between two nodes. For example, when a folder contains a subfolder, there's an edge
between them
->Parent and Child: If a node is connected to another node below it, the top one is called the parent, and the
bottom one is the child. For example, if a folder contains a subfolder, the main folder is the parent, and the
subfolder is the child.
->Leaf: A node that doesn't have any children (like a file in a folder with no subfolders).
->Subtree: A part of the tree itself, like a smaller section of the hierarchy
->Depth/Height: Depth measures how far a node is from the root. Height is how many levels a tree has.
Types of Trees:
Binary Tree: A tree where each node has up to two children (like two arms). It’s one of the simplest
types of trees. Imagine a family tree where each parent can have at most two children. Simple to
understand. Useful for representing simple hierarchical relationships
Binary Search Tree (BST): A special type of binary tree where the left child is always smaller than
the parent, and the right child is always bigger. This makes it faster for searching.
Imagine a Bookshelf:
• Left side of the bookshelf is for books with titles smaller than the current book.
• Right side is for books with titles larger than the current book.
Balanced Tree (like AVL Tree or Red-Black Tree): A binary search tree that keeps itself
balanced, meaning both sides of the tree are nearly equal in size.
• You want to add books but make sure that no part of the shelf gets too heavy. If you pile too many
books on one side, the shelf might get difficult to manage.
• So, after adding a book, you check if both the left side and the right side have a similar number of
books. If one side has more books, you adjust things to keep it balanced.
A balanced tree ensures that both sides are almost equal in terms of height (number of levels of
books).
An unbalanced tree could have one side very deep (lots of books on one side), making it hard to
search through the books efficiently.
Trie (Prefix Tree): A tree used to store strings (sequences of characters). Each path down the tree
represents a word or part of a word. Think of your phone’s predictive text feature. As you type “ca,” it
suggests “cat,” “car,” and “camera” because those words all share the same starting letters. That’s
how a Trie works.
Fast for finding and storing lots of words, especially in auto-complete or dictionary systems.
It can take up a lot of memory because each letter in each word needs to be stored separately.
Efficient Searching:
• If you want to search for a word that starts with C, you don’t need to check every word. You
just follow the path from C and narrow it down.
• This is great for autocomplete. For example, if you type "ca", it can quickly suggest "cat" and
"car".
Shared Prefixes:
• Words with the same beginning letters (like "cat" and "car") share the same path until they
diverge. This saves space and makes the structure more efficient.
Fast Insertion:
• When you want to add a new word, you only need to add the parts that are different. So,
adding "cab" would simply mean adding a new path after A in the C branch.
Pros of a Trie:
• Quick Prefix Searches: Great for finding all words starting with a certain prefix.
• Efficient for Auto-Complete: Can suggest words based on what you've typed so far.
• Memory Efficient for Similar Words: Words with shared prefixes save space.
Cons of a Trie:
• More Memory for Short Words: For very short words or unrelated words, it can end up using
more memory than a basic list of words.
• Overhead: It takes more time to build and manage a Trie compared to a basic array or list.
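The shared-prefix idea can be sketched with a small Trie supporting insert and prefix lookup, illustrating the "cat"/"car"/"cab" example above:

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # letter -> TrieNode
        self.is_word = False  # True if a stored word ends at this node

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            # Shared prefixes reuse existing nodes; only new letters add nodes.
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def starts_with(self, prefix):
        """Return all stored words that share the given prefix (autocomplete)."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        words = []

        def collect(n, path):
            if n.is_word:
                words.append(prefix + path)
            for ch, child in n.children.items():
                collect(child, path + ch)

        collect(node, "")
        return words

trie = Trie()
for word in ["cat", "car", "cab"]:
    trie.insert(word)
print(sorted(trie.starts_with("ca")))  # ['cab', 'car', 'cat']
```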
N-ary Tree: An N-ary tree is like a family tree where each person can have more than two children.
In simpler terms, it's a tree structure where each node (point in the tree) can have any number of
branches or children, not just two like in a binary tree.
Think of a family tree where each parent can have many children. Each parent is a node, and their
children branch out from them. There’s no limit on how many children a parent can have, so this
family tree can branch out as much as needed.
Imagine a classroom where the teacher (the root) asks 4 students (the branches) to form groups of
their own. Each of those 4 students can then form more subgroups under them. There is no limit to
how many students can be part of each subgroup, making it an N-ary tree, where each student can
have multiple group members branching out under them.
Benefits:
• Flexible structure: Unlike binary trees, where each node can only have two children, N-ary trees allow
you to expand as needed.
• Useful for real-world scenarios: They work well for organizing data in systems that need more
branches, like file systems or company hierarchies.
Drawbacks:
• More complex: Managing an N-ary tree can be harder than a binary tree because there are more
possible paths to navigate or balance.
B-Tree:
A tree used in databases and file systems. It’s a type of self-balancing tree where each node can have
multiple children. It's very efficient for handling large amounts of data.
When you search for a file on your computer, the system uses a B-tree to quickly find where that file
is stored.
1. Multiple Branches: Unlike a binary tree (which only has two branches per node), a B-Tree can
have many branches per node. Think of each node like a library shelf that contains a
collection of books (or data points).
2. Balanced Structure: In a B-Tree, all the leaf nodes (the lowest level) are at the same height.
This keeps everything balanced so that you don’t have one long branch that’s hard to search
through. It’s like making sure all your library shelves are equally filled, so you don’t have to
look too far for anything.
3. Nodes Store Multiple Items: Each node (or shelf) can store multiple pieces of data. So instead
of just holding one book (like a regular tree), a B-Tree node can hold multiple books. This
helps organize things better and reduces the number of steps needed to find something.
4. Efficient Searching: The B-Tree divides the data into sections and uses a system to guide you
directly to the right section, much like how a library index points you to the exact shelf and
section where a book is located.
Graph
A graph is a way to represent a collection of items and the connections between them. You can think
of it like a map of cities where the cities are points (called nodes or vertices) and the roads
connecting them are the lines (called edges).
1. Vertices (Nodes): These are the points or locations in the graph. For example, in a city map,
each city can be a vertex.
2. Edges: These are the lines that connect the vertices. In our city map, the roads connecting the
cities are edges.
Types of Graphs
1. Undirected Graph: In this type of graph, the edges have no direction. If there’s a connection
between two vertices, you can travel both ways. For example, if City A is connected to City B,
you can travel from A to B and from B to A.
2. Directed Graph (Digraph): In a directed graph, the edges have a direction. This means that
you can only travel one way. For example, if there’s an edge from City A to City B, you can
travel from A to B, but not from B to A unless there’s a separate edge going back.
3. Weighted Graph: In a weighted graph, each edge has a value or weight associated with it. This
is often used to represent things like distances or costs. For example, if you have a map where
the roads between cities have distances marked, that’s a weighted graph.
4. Unweighted Graph: In an unweighted graph, all edges are treated equally, meaning there is
no cost or distance associated with them.
Key Operations on Graphs:
1. Traversal: This means visiting all the vertices in a graph. There are two popular ways to
traverse:
o Depth-First Search (DFS): Go as deep as possible down one path before backing up.
o Breadth-First Search (BFS): Visit all the neighbors at the present depth level before moving on
to nodes at the next depth level.
2. Searching: Finding a specific vertex in the graph.
3. Shortest Path: Finding the shortest route from one vertex to another, which is crucial in maps
or networking.
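DFS and BFS can be sketched over a small adjacency-list graph. The city names here are placeholders, and the traversal orders depend on how neighbors are listed:

```python
from collections import deque

# Undirected graph as an adjacency list: city -> neighboring cities
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def bfs(start):
    """Breadth-First Search: visit neighbors level by level using a queue."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

def dfs(start):
    """Depth-First Search: go as deep as possible using a stack."""
    visited, order = set(), []
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            order.append(node)
            stack.extend(reversed(graph[node]))  # keep neighbor order intuitive
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D']
print(dfs("A"))  # ['A', 'B', 'D', 'C']
```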
What is an Algorithm?
An algorithm is like a recipe or a set of instructions that you follow to solve a problem or complete a
task. It’s a clear, step-by-step method for getting something done.
Imagine you want to bake a cake. You follow a recipe that tells you:
1. Gather ingredients.
2. Mix the flour, sugar, and eggs.
3. Bake it at 350°F for 30 minutes.
This recipe is an algorithm for making a cake. It tells you what to do, in what order, to get the desired
result (a cake).
Algorithms in Computers
In computers, an algorithm is a sequence of steps the computer follows to solve problems or perform
tasks. For example:
• Sorting Numbers: If you give the computer a list of numbers, it can follow an algorithm to sort
them from smallest to largest.
• Searching for Information: When you search for something on Google, the search engine uses
an algorithm to find the most relevant results for you.
What is Recursion?
Recursion is like a situation where something repeats itself in a smaller or simpler form. Imagine
you’re looking at two mirrors facing each other. If you stand between them, you'll see a reflection of
yourself inside another reflection, which goes on forever. This repeating process is similar to how
recursion works in programming.
In programming, recursion works in a similar way. A function calls itself repeatedly, solving a smaller
part of the problem each time, until it reaches a base case, which is a condition that tells the function
to stop.
For example, the factorial function calls itself on smaller numbers until it reaches the base case:
• 5! (factorial of 5) = 5 × 4 × 3 × 2 × 1 = 120
• factorial(1) = 1 (base case)
• factorial(2) = 2 × 1 = 2
• factorial(3) = 3 × 2 = 6
• factorial(4) = 4 × 6 = 24
• factorial(5) = 5 × 24 = 120
1. Base Case: The condition where recursion stops. Without a base case, recursion would go on
forever, causing a problem called infinite recursion.
2. Recursive Case: The part where the function calls itself, reducing the problem size.
3. Breaking the Problem Down: Each recursive call should bring the problem closer to the base
case.
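The factorial example above fits in a few lines, with the base case and recursive case labeled:

```python
def factorial(n):
    if n <= 1:                       # base case: recursion stops here
        return 1
    return n * factorial(n - 1)      # recursive case: a smaller problem each call

print(factorial(5))  # 120
```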
Advantages of Recursion
• Simplicity: Recursion can make complex problems easier to understand and solve.
• Breaks Down Problems: It’s useful when the problem can be divided into similar
subproblems, like tree structures or searching algorithms.
Disadvantages of Recursion
• Memory Usage: Each recursive call uses memory, so deep recursions can cause stack
overflow errors.
• Slower: Recursion can be slower than other methods like iteration (loops) because it involves
multiple function calls.
What is Sorting?
Sorting means arranging things in a specific order, like alphabetically, by size, or by age. Imagine you
have a list of numbers, and you want to arrange them from the smallest to the largest. Sorting
algorithms are methods used to organize data like this.
There are many types of sorting algorithms, each with different ways of arranging data. Some are
faster, some use less memory, and some are simple to understand.
1. Bubble Sort
Bubble Sort works by repeatedly "bubbling up" the largest unsorted number to the end. Think of it
like comparing pairs of numbers and swapping them if they are in the wrong order. You keep doing
this until the whole list is sorted.
Imagine you're arranging a row of kids from shortest to tallest. You look at two kids at a time, and if
the shorter kid is standing behind the taller one, you swap them. You keep going over the line again
and again until everything is in order.
Pros:
• Very simple to understand and implement.
• Sorts in place, so it needs no extra memory.
Cons:
• Very slow on large lists: O(n²) time in the average and worst case.
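The pairwise compare-and-swap idea looks like this in Python; the early-exit check is a common optimization for lists that are already sorted:

```python
def bubble_sort(items):
    """Repeatedly 'bubble' the largest unsorted value to the end. O(n^2)."""
    items = list(items)  # work on a copy
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):  # the last i items are already in place
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break  # no swaps on a full pass means the list is sorted
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```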
Selection Sort
Selection Sort works by repeatedly selecting the smallest number from the unsorted part of the list
and swapping it with the first unsorted number.
Imagine you’re sorting a pile of books from shortest to tallest. You don’t want to compare all the
books at once, so you sort them step by step: find the shortest book in the pile and place it first,
then find the shortest of the remaining books and place it next, and repeat until the pile is sorted.
• Time Complexity: O(n²), since each pass scans the remaining unsorted items.
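The select-the-smallest-and-swap process translates directly into code:

```python
def selection_sort(items):
    """Repeatedly select the smallest remaining value and swap it into place. O(n^2)."""
    items = list(items)  # work on a copy
    for i in range(len(items) - 1):
        # Find the index of the smallest value in the unsorted part.
        smallest = i
        for j in range(i + 1, len(items)):
            if items[j] < items[smallest]:
                smallest = j
        # Swap it with the first unsorted element.
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([29, 10, 14, 37, 13]))  # [10, 13, 14, 29, 37]
```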
Insertion Sort
Insertion Sort is a simple sorting algorithm that builds the final sorted array one item at a time. It
works similarly to how you sort playing cards in your hands: you take one card (element) at a time
and insert it into its correct position in the sorted portion of the deck (array).
Key Steps:
1. Start from the second element: Compare it to the elements before it (one by one) and insert
it in its correct position.
2. Shift elements if needed: If the current element is smaller than the ones before it, shift the
larger elements to the right to make space for it.
3. Repeat: Continue this process for each subsequent element until the entire array is sorted.
Another example
Imagine you are organizing a group of books by their size.
• You take one book at a time and place it in the correct position among the books you’ve
already organized.
• You don’t start over from scratch; you just insert each new book into its correct spot among
the already sorted ones.
1. Start with the first book: The first book is already sorted because it’s alone.
2. Pick the second book: Compare it to the first one. If it’s smaller, place it before. If it’s larger,
leave it as it is.
3. Pick the third book: Compare it to the first two books, find the correct place for it, and insert
it.
4. Repeat: Continue picking books one by one and inserting them into their correct positions.
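The compare-and-shift steps above can be sketched as:

```python
def insertion_sort(items):
    """Build the sorted portion one item at a time, like sorting cards in hand."""
    items = list(items)  # work on a copy
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger elements one slot right to make room for `current`.
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current  # insert into its correct position
    return items

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```

Insertion sort is O(n²) in general but only O(n) on nearly-sorted input, which is why it is often used for small or almost-sorted lists.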
Merge Sort
Merge Sort is an efficient, stable, and comparison-based sorting algorithm that follows the divide and
conquer approach. The main idea behind Merge Sort is to divide the array into smaller parts, sort
those parts, and then merge them back together in a sorted manner
Imagine you have a big pile of socks of different colors, and you want to sort them into pairs of the
same color. Instead of trying to sort the entire pile at once, you use a divide and conquer strategy,
like this:
Keep dividing:
• You split each of those smaller piles into even smaller ones, and you keep going until each pile
has just one sock.
Start merging:
• Now, you look at two socks at a time from different piles and compare them.
• If they are of the same color, you make a pair.
• If they are not, you place them in the correct order based on color and keep sorting.
• By the end, you’ll have all your socks sorted into pairs, and your pile will be perfectly ordered.
1. Divide: Split the array into two halves recursively until each sub-array contains only one
element.
2. Conquer: Sort and merge the sub-arrays back together by comparing elements.
3. Merge: During the merging process, combine the smaller sub-arrays in a way that ensures the
resulting array is sorted.
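The divide, conquer, and merge steps can be sketched recursively:

```python
def merge_sort(items):
    """Divide the list in half, sort each half, then merge. O(n log n)."""
    if len(items) <= 1:
        return items  # a single element is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge: repeatedly take the smaller front element of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these is empty;
    merged.extend(right[j:])  # the other holds the leftovers
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

The `<=` comparison during merging is what makes this implementation stable: equal elements keep their original order.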
Advantages:
• Guaranteed O(n log n) time, even in the worst case.
• Stable: equal elements keep their original relative order.
Disadvantages:
• Not in-place: Requires additional memory to store temporary arrays during the merging
process.
• Slower for small datasets compared to simpler algorithms like Insertion Sort.
Quick Sort
Quick Sort is another popular sorting algorithm that uses the divide and conquer approach. It is
faster than many other algorithms, such as Bubble Sort or Selection Sort, especially when handling
large datasets.
Imagine you have a group of students of varying heights, and you want to line them up from shortest
to tallest. Instead of comparing everyone to everyone else, you can use this clever approach:
1. Pick a "pivot": Choose one student at random to be the "pivot" — the person you compare
everyone else to.
2. Separate the group:
o Make two groups:
▪ Group 1: Students shorter than the pivot.
▪ Group 2: Students taller than the pivot.
o The pivot now stands in the middle.
3. Repeat the process: Now, treat each smaller group (Group 1 and Group 2) just like the original
group. Pick a pivot from each group, and divide those groups into even smaller groups.
4. Keep sorting: You keep dividing and comparing, always putting students shorter than the
pivot on one side and students taller on the other.
5. Line everyone up: Eventually, each group will be sorted, and when you combine all the small
groups, everyone will be standing in the correct order.
1. Pick a Pivot: Choose an element from the array as the pivot. This can be the first element, the
last element, or any element in between.
2. Partition: Rearrange the array so that:
o All elements smaller than the pivot come before it.
o All elements greater than the pivot come after it.
3. Recursion: Recursively apply the same process to the sub-arrays (those smaller and larger
than the pivot).
4. Combine: As each part is sorted individually, combining them results in a fully sorted array.
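The pivot-and-partition steps can be sketched as follows. For readability this version builds new lists rather than partitioning in place, so it gives up the in-place advantage noted below:

```python
def quick_sort(items):
    """Pick a pivot, partition into smaller/larger groups, and recurse."""
    if len(items) <= 1:
        return items  # base case: nothing left to partition
    pivot = items[len(items) // 2]          # choose a middle element as pivot
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    # Sorting each side and joining them around the pivot sorts the whole list.
    return quick_sort(smaller) + equal + quick_sort(larger)

print(quick_sort([9, 3, 7, 1, 8, 2]))  # [1, 2, 3, 7, 8, 9]
```

Quick Sort averages O(n log n) but degrades to O(n²) when the pivot choice is consistently bad, which is why practical implementations pick pivots carefully (e.g., median-of-three or random).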
• Fast for large datasets: On average, it performs faster than Merge Sort or Insertion Sort.
• In-place: It doesn’t require much extra space, as it sorts the array in-place.
Searching Algorithms
Searching algorithms are techniques used to find a particular item or group of items from a collection
of data. The goal is to locate the desired item quickly and efficiently.
In Linear Search, you simply go through each element one by one and check if it is the item you're
looking for.
Imagine you're searching for a specific book on a messy shelf where books are arranged randomly.
You would look at each book from left to right, one by one, until you find the book you're searching
for.
How It Works:
• Start at the first element and compare each element to the target, one by one.
• Stop when you find the target, or when you reach the end of the list.
Time Complexity:
• O(n), since in the worst case every element must be checked.
Pros:
• Simple to implement.
• Works on both sorted and unsorted data.
Cons:
• Inefficient for large datasets because it requires checking each item one by one.
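The one-by-one check is a few lines of code:

```python
def linear_search(items, target):
    """Check each element one by one; O(n)."""
    for index, value in enumerate(items):
        if value == target:
            return index  # found: return its position
    return -1             # reached the end without finding it

books = ["history", "poetry", "cooking", "travel"]
print(linear_search(books, "cooking"))  # 2
print(linear_search(books, "math"))     # -1
```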
Binary Search
Binary Search is a faster searching algorithm, but it only works on sorted data. It repeatedly divides
the search interval in half to quickly zero in on the target item.
Imagine looking for a word in a dictionary. You don't start from the first page. Instead, you open the
dictionary somewhere in the middle. If the word you're looking for comes alphabetically before the
middle word, you focus on the earlier part of the dictionary. Otherwise, you check the later part. This
process continues until you find the word.
How It Works:
• Start with two pointers: one at the beginning and one at the end of the array.
• Find the middle element.
• If the middle element is the target, you're done.
• If the target is smaller than the middle element, repeat the search on the left half.
• If the target is larger than the middle element, repeat the search on the right half.
• Continue this process until the target is found or the search interval becomes empty.
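The two-pointer halving process above can be sketched as:

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search interval; O(log n). Requires sorted input."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid            # found the target
        elif target < sorted_items[mid]:
            high = mid - 1        # search the left half
        else:
            low = mid + 1         # search the right half
    return -1                     # interval became empty: not present

numbers = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(numbers, 23))  # 5
print(binary_search(numbers, 7))   # -1
```

Python's standard library offers the same idea via the `bisect` module, which is worth preferring in real code.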
Time Complexity:
• O(log n), since each step halves the remaining search interval.
Cons:
• Only works on sorted data; unsorted data must be sorted first, which adds extra cost.
Dijkstra's Algorithm
Dijkstra's Algorithm is a popular method used to find the shortest path between two points in a
graph. It ensures that you get the least expensive or quickest path from one point to another,
whether you're traveling between cities, navigating through a computer network, or analyzing road
maps.
Imagine you are planning a road trip, and you want to drive from City A to City B. There are multiple
routes and roads that connect several cities in between. You want to find the shortest or quickest
route to reach your destination.
You could either guess or follow your instincts, but Dijkstra's Algorithm helps you do it methodically
and accurately. It calculates the best route step by step, ensuring you travel the least distance or time
between two points.
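A common way to implement Dijkstra's step-by-step search is with a min-heap of (distance, city) pairs. The road map below is a made-up example for illustration:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distance from `start` to every city, using a min-heap."""
    distances = {city: float("inf") for city in graph}
    distances[start] = 0
    heap = [(0, start)]  # (distance travelled so far, city)
    while heap:
        dist, city = heapq.heappop(heap)  # always expand the closest city next
        if dist > distances[city]:
            continue  # stale entry: a shorter path was already found
        for neighbor, cost in graph[city]:
            new_dist = dist + cost
            if new_dist < distances[neighbor]:
                distances[neighbor] = new_dist
                heapq.heappush(heap, (new_dist, neighbor))
    return distances

# Hypothetical road map: city -> [(neighbor, distance)]
roads = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

Note how the direct A-to-B road (distance 4) loses to the detour through C (1 + 2 = 3): the algorithm finds this automatically.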
Bellman-Ford Algorithm
Bellman-Ford Algorithm is a method used to find the shortest path from one starting point (node) to
all other points in a network (graph). It's like Dijkstra's algorithm but with one major difference—it
can handle negative weights. In simpler terms, if there's a road or a connection that rewards you or
costs less as you travel, Bellman-Ford can still calculate the best path.
Imagine you’re planning a road trip again, but now you can encounter discounts or penalties on
certain roads. Some roads may even give you back money (negative cost) or take longer but are
cheaper. Your task is still to find the least expensive route from City A to other cities.
• Some roads might have a positive distance (like normal travel distance or time).
• Some roads might have a negative distance (imagine getting a cash bonus for taking a certain
road).
• Bellman-Ford will ensure you avoid routes that may reduce your distance but lead you into
infinite loops (to prevent constantly cycling through roads for extra bonuses).
Backtracking
Definition: Backtracking is a problem-solving algorithm that incrementally builds candidates for
solutions and abandons a candidate as soon as it determines that this candidate cannot be extended
to a valid solution. Essentially, it explores all potential solutions to find the correct one by making
choices and then retracing (or "backtracking") when a choice leads to a dead end.
Backtracking is like solving a puzzle step by step, where you make a choice, and if that choice doesn’t
lead to a solution, you take a step back (or "backtrack") and try a different option. It’s a way to find
answers to problems by exploring all possible paths and eliminating those that don't work.
• Imagine you’re trying to find your way through a maze. You start at the entrance and choose a
direction to go.
-> Exploring:
• As you go along, you check if you’re still on the right path. If you hit a dead end, you stop and
look for another way to go back.
-> Backtracking:
• If a path doesn’t lead to the exit (solution), you return to where you made a decision
(backtrack) and try another direction.
• You keep making choices and backtracking until you either find the exit or realize there’s no
way out.
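The maze walk described above can be sketched as a recursive backtracking search on a small grid, where 0 is an open cell and 1 is a wall (the maze below is a made-up example):

```python
def solve_maze(maze, row=0, col=0, path=None):
    """Find a path from the top-left to the bottom-right of a grid maze.
    Returns the path as a list of (row, col) steps, or None if no exit exists."""
    if path is None:
        path = []
    rows, cols = len(maze), len(maze[0])
    # Out of bounds, a wall, or already on this path: dead end.
    if not (0 <= row < rows and 0 <= col < cols):
        return None
    if maze[row][col] == 1 or (row, col) in path:
        return None
    path = path + [(row, col)]          # make a choice
    if (row, col) == (rows - 1, cols - 1):
        return path                      # reached the exit
    for dr, dc in [(1, 0), (0, 1), (-1, 0), (0, -1)]:
        result = solve_maze(maze, row + dr, col + dc, path)
        if result is not None:
            return result
    return None                          # every direction failed: backtrack

maze = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(solve_maze(maze))  # [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```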
Caching
Definition: Caching is like a temporary storage space where you keep frequently accessed data so
that you can retrieve it quickly without having to go back to the original source each time.
Imagine you often visit a coffee shop that has a long line. To make things faster, the shop keeps your
favorite order on hand. So, when you arrive, they quickly grab it for you instead of making it from
scratch each time. This way, you save time, and the shop can serve more customers
Key Points:
• A cache speeds up repeated access to the same data.
• Cached data can become stale, so it may need to be refreshed or invalidated.
• Caches have limited size, so older entries are evicted to make room for new ones.
Memoization
Memoization is a specific type of caching used in programming, particularly in recursive functions. It
involves storing the results of expensive function calls and returning the cached result when the same
inputs occur again.
Think of a student studying for an exam. When they solve a math problem, they write down the
answer in their notebook. If they encounter the same problem again, they check their notebook
instead of solving it from scratch. This saves them time and effort.
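The notebook analogy maps directly onto a dictionary cache. The classic demonstration is Fibonacci, where memoization turns an exponential recursion into a linear one:

```python
cache = {}

def fib(n):
    """Fibonacci with memoization: each result is computed once, then reused."""
    if n in cache:
        return cache[n]                   # reuse the stored answer
    if n <= 1:
        result = n                        # base cases: fib(0)=0, fib(1)=1
    else:
        result = fib(n - 1) + fib(n - 2)  # expensive work happens only once per n
    cache[n] = result                     # write the answer down for next time
    return result

print(fib(30))  # 832040
```

In practice Python's `functools.lru_cache` decorator provides the same behavior without hand-rolling the dictionary.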
Key Points:
• Memoization only helps when the same inputs recur, and it requires the function to always
return the same output for the same input.
• It trades extra memory (the stored results) for faster repeated computation.
=================================================