
TO DEVELOP A RANDOMIZED QUICKSORT

A PROJECT REPORT

Submitted by
SHREYA GS (113223072098)

In partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
in
ARTIFICIAL INTELLIGENCE AND DATA SCIENCE

VELAMMAL ENGINEERING COLLEGE,


CHENNAI-66.

APRIL 2025
BONAFIDE CERTIFICATE

This is to certify that this project report titled “TO DEVELOP A RANDOMIZED
QUICKSORT” is the bonafide work of Ms. SHREYA GS (113223072098), who
carried out the Project Based Assignment work under my supervision.

INTERNAL GUIDE HEAD OF THE DEPARTMENT


Mrs. Joncy J Dr. Visu P
Assistant Professor Professor & Head
Department of Artificial Department of Artificial
Intelligence and Data Science Intelligence and Data Science
Velammal Engineering College Velammal Engineering College
Chennai– 600066. Chennai – 600066.
ABSTRACT

Randomized Quicksort is a highly efficient sorting algorithm that primarily follows the
Divide and Conquer paradigm. In this approach, the algorithm selects a pivot element
randomly from the array and then partitions the array into two sub-arrays — one containing
elements less than the pivot and the other containing elements greater than the pivot. These
sub-arrays are then sorted recursively using the same strategy. The divide step splits the
problem into smaller, non-overlapping subproblems, the conquer step recursively solves
them, and the results are implicitly combined through the recursive return. Unlike Brute
Force algorithms, which attempt to solve problems by exhaustively exploring all possible
solutions (such as generating and checking all permutations of an array to find the sorted
version), Randomized Quicksort avoids this inefficiency. It leverages a more intelligent
approach that does not require trying every possibility, leading to significantly better
performance with an expected time complexity of O(n log n). Greedy algorithms work by
making locally optimal choices at each step with the hope that these choices lead to a
globally optimal solution. Randomized Quicksort does not follow this paradigm because it
does not attempt to make the best or most promising pivot selection based on the current
state. Instead, it chooses the pivot randomly, which, while not locally optimal, statistically
helps in balancing partitions over multiple executions. Lastly, Dynamic Programming (DP)
is a strategy used when a problem can be broken into overlapping subproblems whose
solutions can be reused. Since the subproblems in Randomized Quicksort are non-
overlapping — meaning each recursive call processes a completely separate segment of the
array — there is no need for storing and reusing solutions. Therefore, dynamic
programming is not applicable to this algorithm.
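The pivot-selection and partition steps described above can be sketched in Python; this is a minimal illustration of the idea (the full interactive program appears in the Appendix):

```python
import random

def randomized_partition(arr, low, high):
    # Divide step: pick a pivot uniformly at random, move it to the end,
    # then rearrange so elements <= pivot precede it.
    pivot_index = random.randint(low, high)
    arr[pivot_index], arr[high] = arr[high], arr[pivot_index]
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1  # final position of the pivot

def randomized_quicksort(arr, low, high):
    # Conquer step: recursively sort the two non-overlapping sub-arrays.
    if low < high:
        p = randomized_partition(arr, low, high)
        randomized_quicksort(arr, low, p - 1)
        randomized_quicksort(arr, p + 1, high)

data = [7, 2, 9, 4, 1]
randomized_quicksort(data, 0, len(data) - 1)
print(data)  # [1, 2, 4, 7, 9]
```

Because the pivot is chosen uniformly at random, no single input can consistently force the worst-case O(n²) behaviour, giving the expected O(n log n) running time.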

iii
ACKNOWLEDGEMENT

I am deeply thankful to the Lord Almighty for His abundant blessings, which enabled me
to successfully complete this project.
I would like to express my sincere gratitude to our esteemed Chairman, Dr.M.V.
Muthuramalingam, for his unwavering support and encouragement.
I also extend my heartfelt thanks to our Chief Executive Officer, Thiru.M.V.M.
Velmurugan, for his constant support and motivation. I am equally grateful to Thiru. V.
Karthik Muthuramalingam, the Deputy CEO, for his invaluable assistance and
encouragement.
My sincere thanks go to Dr. S. Satish Kumar, Principal, for his continuous support and
motivation throughout this project.
I extend my deepest gratitude to Dr. P. Visu, Professor & Head of the Department of
Artificial Intelligence and Data Science, for being a guiding force and constant source of
inspiration.
I am especially grateful to my Project Guide, Mrs. J. Joncy, Assistant Professor,
Department of Artificial Intelligence and Data Science, for her invaluable guidance,
support, and encouragement.
Finally, I would like to thank all the faculty and non-teaching staff members of the
Department of Artificial Intelligence and Data Science for their continuous support, and
to everyone who contributed in any way to the successful completion of this project.

SHREYA GS

iv
TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO

ABSTRACT iii

LIST OF FIGURES vi

LIST OF ABBREVIATIONS vii


1 INTRODUCTION 1
2 LITERATURE SURVEY 3
3 SYSTEM REQUIREMENTS 6
3.1 Hardware requirements
3.2 Software requirements
4 SYSTEM DESIGN 9
4.1 System Overview
4.2 Detailed Component Design
4.3 System Data Flow
5 MODULE DESCRIPTION 11
5.1 Input Handling
5.2 Sorting
5.3 Displayed Sorted Output
6 CONCLUSION AND FUTURE 17
ENHANCEMENTS
6.1 Conclusion
6.2 Future Enhancements
7 REFERENCES 18
APPENDIX

v
LIST OF FIGURES

FIGURE NO TITLE PAGE NO

5.1 Input Handling 12

5.2 Sorting 14

5.3 Displayed Sorted Output 16

vi
LIST OF ABBREVIATIONS

arr Array
i Index (Left pointer)
j Index (Right pointer)
low Lower Bound
high Higher Bound
pivot Pivot Element
pivot_index Pivot Index
RQS Randomized Quick Sort
swap Swap Operation
rec Recursive Call
def Define (Function Definition in Python)
int Integer
map Mapping Function
input Input Function
print Print Output Function

vii
CHAPTER 1

INTRODUCTION

 In the realm of computer science, algorithms serve as the backbone for solving a wide array of
computational problems. The efficiency and effectiveness of an algorithm can significantly
impact the performance of software systems, especially in an era where data sizes and real-time
processing demands continue to grow. Among the various algorithmic design techniques, Brute
Force, Greedy, Divide and Conquer, and Dynamic Programming are four fundamental paradigms
that have been widely studied, implemented, and applied across numerous domains. Each of
these approaches provides a unique method of problem-solving, offering different balances
between simplicity, speed, and accuracy.

 Brute Force is the most basic algorithmic technique, often considered the starting point in
problem-solving. It involves exploring all possible solutions to identify the correct or optimal
one. While brute force guarantees accuracy, its major limitation is inefficiency, particularly in
problems with a large solution space. Its time complexity often grows exponentially with input
size, making it suitable primarily for small-scale problems or as a baseline for testing the
performance of optimized algorithms. Despite its limitations, brute force remains a reliable tool
when no shortcuts or patterns are present in the problem.

 In contrast, the Greedy algorithmic approach focuses on making the most optimal local choice at
each step, with the expectation that these choices will lead to a globally optimal solution. This
paradigm is particularly effective when the problem exhibits the greedy-choice property and
optimal substructure. Greedy algorithms are appreciated for their simplicity and speed, often
running in linear or polynomial time, making them ideal for applications such as resource
allocation, graph algorithms, and data compression. However, their drawback lies in the fact that
they do not always produce the best solution in complex scenarios.

1
 The Divide and Conquer approach works by recursively breaking a problem into smaller,
manageable subproblems, solving each independently, and combining their results to form a
complete solution. It is especially effective for problems where the same operation is performed
repeatedly on smaller inputs, such as sorting and searching. Algorithms like Merge Sort, Quick
Sort, and Binary Search exemplify the power of this strategy. Divide and Conquer algorithms
tend to offer improved efficiency, often operating in O(n log n) time, and
are particularly beneficial when problems can be solved in parallel.
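As a concrete instance of this strategy, Binary Search repeatedly halves the search range, discarding the half that cannot contain the target; a minimal sketch:

```python
def binary_search(arr, target):
    # Divide: compare with the middle element.
    # Conquer: continue in the half that can still contain the target.
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not present

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Each comparison halves the range, so the search finishes in O(log n) steps.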

 Dynamic Programming (DP), meanwhile, is used to tackle problems with overlapping


subproblems and optimal substructure. It works by storing the solutions to subproblems to avoid
redundant computations. DP is highly efficient for optimization problems like the Knapsack
problem, matrix chain multiplication, and sequence alignment in bioinformatics. It combines the
exhaustive nature of brute force with the efficiency of storing and reusing solutions, making it a
powerful approach in both theoretical and practical applications.

 In summary, each algorithmic paradigm has its strengths and weaknesses, and understanding
when and how to use them is key to efficient problem-solving. The choice of approach depends
on the specific nature of the problem, including factors such as input size, time constraints, and
the need for optimality. This study explores the theoretical foundation, practical applications,
and performance trade-offs among Brute Force, Greedy, Divide and Conquer, and Dynamic
Programming algorithms.

2
CHAPTER 2
LITERATURE SURVEY

Foundations of Algorithm Design

 Algorithmic strategies are essential in problem-solving, and four prominent paradigms—Brute


Force, Greedy, Divide and Conquer, and Dynamic Programming—form the foundation of many
solutions. Each method follows a unique approach to finding results efficiently, depending on
the structure and complexity of the problem. These techniques are widely studied and discussed
in academic literature, particularly in standard textbooks and algorithm research.

Brute Force: Exhaustive Yet Accurate

 Brute Force is the most basic and intuitive approach that checks all possible solutions to a
problem. It ensures optimality and correctness but is highly inefficient for large inputs. In early
studies, brute force is often used as a benchmark for comparing other optimized algorithms.
While simple to implement, its exponential time complexity makes it impractical for complex or
real-time systems.

Greedy Algorithm: Fast and Efficient

 Greedy algorithms make the locally optimal choice at each step with the hope of achieving a
globally optimal solution. Literature shows they perform well on problems with the greedy-
choice property and optimal substructure. Applications such as Kruskal’s and Prim’s algorithms
in graphs, and Huffman encoding in compression, highlight their effectiveness. However, their
limitation is the lack of guaranteed optimality in all cases.

Divide and Conquer: Recursive Efficiency

 Divide and Conquer is a recursive problem-solving method where a problem is divided into
smaller subproblems, each solved independently before combining results. Classic examples like
Merge Sort and Quick Sort demonstrate its power. According to research, this paradigm is highly
efficient for sorting and searching tasks due to its logarithmic depth and ability to parallelize
subproblems.

3
Dynamic Programming: Overlapping Subproblems Solved

 Dynamic Programming (DP) is widely studied for its ability to solve problems by storing
intermediate results to avoid redundant computation. Literature emphasizes its use in
optimization problems like the Knapsack problem, Longest Common Subsequence, and
Fibonacci numbers. DP is especially powerful for problems with overlapping subproblems and
optimal substructure, offering a balance between brute-force completeness and greedy
efficiency.

Performance and Time Complexity

 Comparative studies often evaluate the time complexities of these approaches. Brute Force can
be exponential, Greedy often linear or polynomial, Divide and Conquer commonly O(log n)
or O(n log n), and Dynamic Programming typically polynomial. This analysis
helps in selecting the most suitable method based on the size and structure of the input.

Accuracy vs. Speed Trade-Off

 Greedy algorithms are preferred for their speed, but DP and Brute Force are chosen when
optimality is critical. Literature reflects that Divide and Conquer strikes a balance, especially in
problems where division naturally leads to simpler instances. Researchers note that while Brute
Force is always accurate, it’s rarely efficient, which is why DP is often preferred in complex real-
world cases.

Practical Applications in Industry

 Each algorithmic paradigm finds use in real-world systems:


 Brute Force in cryptography and testing,
 Greedy in routing, scheduling, and data compression,
 Divide and Conquer in parallel processing and sorting algorithms,
 Dynamic Programming in optimization and sequence alignment (bioinformatics).

Hybrid and Evolving Strategies

 Recent literature explores combining these paradigms to create hybrid algorithms. For example,
Greedy is used to generate a base solution, which is then improved using Dynamic Programming
or Brute Force. Divide and Conquer can be mixed with DP in approaches like the Strassen Matrix
Multiplication. This blending increases algorithm adaptability in real-world conditions.
4
Research Trends and Future Scope

 Ongoing research in algorithm design focuses on improving performance in large-scale, data-


intensive applications using these paradigms. Machine learning is now being used to learn which
strategy works best for a given problem. There is also a growing interest in adaptive algorithms
that switch between Greedy, DP, or Divide and Conquer based on real-time input properties.

Educational Significance in Curriculum Design

 The importance of understanding different algorithmic paradigms is emphasized in academic


literature for developing strong computational thinking. Brute Force introduces beginners to
exhaustive search logic, Greedy helps understand optimization under constraints, Divide and
Conquer builds recursive problem-solving skills, and Dynamic Programming teaches efficient
resource management. Studies on computer science education underscore how balanced
exposure to these paradigms equips students with a versatile toolkit for real-world problem-
solving.

Comparative Case Studies in Algorithm Selection

 Several comparative studies highlight how different paradigms perform on the same problem.
For example, the Traveling Salesman Problem (TSP) can be solved using brute force (exact
solution), greedy heuristics (nearest neighbour), or dynamic programming (Held-Karp
algorithm). Case studies show that while brute force gives the most accurate results, greedy and
DP offer more practical solutions for large inputs. Such comparative literature provides insights
into performance benchmarking and algorithm tuning.

5
CHAPTER 3
SYSTEM REQUIREMENTS

3.1 Hardware Requirements


Client Side
Processor
 A basic 1 GHz processor is sufficient to run Python scripts and send/receive data. For
smoother performance during GUI interactions or larger data input, a 2 GHz dual-core
processor is recommended.

RAM
 2 GB RAM will handle simple tasks, but 4 GB or more ensures better responsiveness,
especially when working with GUIs or visualizing data.

Storage
 Python scripts and libraries don’t take much space. A minimum of 100 MB is needed, but 500
MB allows room for additional tools, virtual environments, and saved outputs.

Display
 A screen resolution of 1024×768 is the minimum for comfortably using development tools
like PyCharm or Jupyter. Higher resolution is better for viewing code and GUIs clearly.

Input Devices
 Standard keyboard and mouse are necessary for coding, running the application, and entering
algorithm input data.

Server side
Processor
 A 2 GHz dual-core processor is the minimum needed to handle algorithm execution. For
better performance under multiple requests or larger datasets, a quad-core processor is
preferred.
6
RAM
 4 GB is enough to run the server and execute most algorithms. 8 GB or more is ideal for better
speed, multitasking, and handling parallel requests.

Storage
 The server needs at least 1 GB for storing the Python API, algorithm scripts, logs, and possibly
some uploaded data. 2 GB or more ensures future scalability.

Network
 A reliable Wi-Fi or LAN connection is essential for real-time communication between client
and server. The server should have a static IP or localhost (for testing).

3.2 Software Requirements


Client Side
Operating System
 Any modern OS like Windows 10/11, macOS, or Ubuntu is suitable. They all support Python
and its tools well. Linux (like Ubuntu) is lightweight and widely used by developers.

Programming Language
 Python is used for writing the logic to interact with the server. It supports quick development
and has vast libraries for communication and GUI.

IDE/Code Editor
 Tools like:
 VS Code – Lightweight, customizable, and plugin-rich.
 PyCharm – Feature-rich IDE for Python with debugging and code analysis.
 Jupyter Notebook – Best for interactive, step-by-step algorithm testing.
 Thonny – Beginner-friendly and simple.

 These help you write, test, and debug client-side code efficiently.

7
Package Manager

 pip manages Python libraries. You run pip install <library> to bring in required
modules like requests or numpy (tkinter ships with Python itself).

Common Libraries

 requests: For sending HTTP requests to the server.

 tkinter/PyQt5: For GUI applications (data input/output).

 numpy: For numeric and matrix operations.

 matplotlib: For visualizing results, especially for Dynamic Programming.

Server Side
Operating System

 Ubuntu (20.04 or above) is often preferred for server deployment because it’s secure, lightweight,
and widely supported.

 Windows Server 2019+ is good if you’re comfortable with Windows.

 macOS can be used for local testing.

Programming Language
 Python is used to implement all algorithm logic (sorting, optimization, etc.) and to build
the server that processes client requests.
Web Framework
 Flask: Lightweight and simple for APIs. Easy to set up.
 FastAPI: Faster and supports automatic documentation. Ideal for newer projects.

 These frameworks allow you to expose Python functions (like QuickSort or Knapsack) as web
services.

8
CHAPTER 4
SYSTEM DESIGN

4.1 System Architecture

Two-Tier Client Server Model

 The system uses a two-tier architecture where the client sends algorithm-related requests, and
the server processes them and sends results back.

Platform Independence

 The client and server can be run on different platforms or machines, connected over HTTP—
making the architecture flexible and scalable.

Centralized Processing

 The server handles all the computational logic (e.g., QuickSort, Knapsack), ensuring
uniformity and central control over algorithm execution.

Loose Coupling

 The client and server are loosely coupled through API calls. Changes on one side (e.g., new
algorithms) don't break the other.

Restful Communication

 Client and server communicate using REST principles over HTTP, typically with JSON as
the data exchange format.

4.2 Detailed Component Design

User Interface (Client Side)

 This is the part where the user interacts with the system. It can be a simple Python GUI (like
Tkinter) or a command-line interface. The user selects an algorithm and enters the required
input (like a list of numbers or weights).
9
API Request Sender (Client Side)

 After getting the input, the client sends it to the server using an HTTP POST request. The data
is sent in JSON format, using Python's requests library.

Server API (Server Side)

 The server receives the client request using a web framework like Flask or FastAPI. It reads
the data and figures out which algorithm to run based on the API endpoint or request content.

Algorithm Processor (Server Side)

 This is where the actual logic happens. The server processes the input using the chosen
algorithm — whether it’s Quick Sort, Knapsack (DP), or another method — and calculates
the result.

Response and Output (Both Sides)

 The server sends the result back to the client in JSON format. The client then displays the
final output (like a sorted array or max profit) clearly to the user.

4.3 System Workflow

 Client Sends API Request


The input is sent to the server as a JSON payload through a POST request to a defined API
endpoint.

 Server Parses and Processes Input

The server interprets the request, runs the corresponding algorithm, and (optionally) times the
execution.

 Server Responds with Output

After computation, the result (e.g., sorted list or max profit) is sent back in JSON format.
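The workflow above can be sketched with Flask; the endpoint name /sort and the JSON field names are assumptions for illustration (the report does not fix them), and Python's built-in sorted stands in for the algorithm step:

```python
# Minimal sketch of the server side, assuming Flask is installed and
# "/sort" as the endpoint name (both are illustrative assumptions).
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/sort", methods=["POST"])
def sort_endpoint():
    # Parse the JSON payload sent by the client's POST request.
    data = request.get_json()
    numbers = data.get("numbers", [])
    # Run the chosen algorithm; sorted() stands in for QuickSort here.
    result = sorted(numbers)
    # Respond with the result in JSON format.
    return jsonify({"sorted": result})

# Calling app.run() would start the development server on localhost.
```

A client could then POST {"numbers": [3, 1, 2]} with the requests library and receive {"sorted": [1, 2, 3]} back.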

10
CHAPTER 5
MODULE DESCRIPTION

MODULE 1: INPUT HANDLING

Comparison Between Greedy and Brute Force Algorithms

 Greedy and Brute Force are two fundamental approaches used in algorithm design, each having
its own strengths and limitations. Understanding how these techniques work and where they
apply is essential in selecting the right algorithm for a particular problem.

 A Greedy algorithm works by making the best possible choice at each step, with the hope that
this local optimum will lead to a global optimum. It does not backtrack or reconsider previous
decisions. Greedy methods are known for their efficiency and simplicity, making them suitable
for real-time and large-scale problems where performance is critical. They typically have a
lower time complexity, often ranging from O(n) to O(n log n), depending on the problem and
the sorting or selection mechanism used.

 For example, the Activity Selection problem uses a greedy approach to select the maximum
number of activities that don’t overlap by always choosing the next activity that finishes
earliest. Another classic example is the Fractional Knapsack problem, where the greedy strategy
sorts items by their value-to-weight ratio and keeps picking the most valuable fractions of items
until the capacity is full.
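The Fractional Knapsack strategy described above can be sketched as:

```python
def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs.
    # Greedy choice: sort by value-to-weight ratio, best ratio first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or a fraction of it
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Because fractions are allowed, this local choice is provably optimal here; the same strategy fails for the 0/1 variant, where items are indivisible.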

 However, greedy algorithms have a major limitation — they do not always produce the optimal
solution. They work best when the problem satisfies the greedy-choice property (a global
optimum can be arrived at by choosing a local optimum) and optimal substructure (optimal
solutions to subproblems lead to an overall optimal solution). If these conditions aren’t met, the
solution may be suboptimal, even if it appears efficient.

11
 In contrast, a Brute Force algorithm takes a more exhaustive approach. It tries every possible
solution or combination and selects the best one. This guarantees the optimal result, as it
evaluates all possibilities, but at the cost of extremely high time complexity. In most cases,
brute force has exponential or factorial time complexity like O(2ⁿ), O(n!), etc., making it
impractical for large datasets.

 An example of a brute force method is the 0/1 Knapsack problem, where all subsets of items
are checked to find the one with the maximum value without exceeding capacity. Another
common brute force problem is the Travelling Salesman Problem (TSP), where the algorithm
tries all routes to find the shortest path. Though accurate, these methods become infeasible as
input size grows, due to the sheer number of combinations.
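The exhaustive subset check for the 0/1 Knapsack described above can be sketched as follows; note the O(2ⁿ) number of subsets examined:

```python
from itertools import combinations

def knapsack_brute_force(items, capacity):
    # items: list of (value, weight) pairs.
    # Try every subset of items and keep the best feasible one.
    n = len(items)
    best = 0
    for r in range(n + 1):
        for subset in combinations(items, r):
            weight = sum(w for _, w in subset)
            value = sum(v for v, _ in subset)
            if weight <= capacity and value > best:
                best = value
    return best

print(knapsack_brute_force([(60, 10), (100, 20), (120, 30)], 50))  # 220
```

For n items this loop inspects 2ⁿ subsets, which is why the approach stops being practical beyond a few dozen items.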

 In terms of input handling, greedy algorithms are generally easier to implement and validate.
They often work with simple inputs like arrays or sorted data. Brute force algorithms, on the
other hand, require handling large combinations, recursion, or nested loops, making them more
complex and resource-intensive.

 To summarize, Greedy algorithms are preferred for problems where performance and simplicity
are key, even if they may not always be perfectly optimal. Brute Force algorithms are reliable
when optimality is critical and the problem size is small enough to allow exhaustive checking.
Choosing between them depends on the problem constraints, required accuracy, and available
computational resources.

OUTPUT

Figure 5.1: Input Handling

12
MODULE 2: SORTING

Comparison Between Divide and Conquer and Dynamic Programming

 Divide and Conquer and Dynamic Programming (DP) are both powerful algorithm design
techniques, but they differ significantly in how they solve problems. Both methods break down
a problem into smaller subproblems, but their approach to solving and combining those
subproblems makes a major difference.

 Divide and Conquer works by dividing the original problem into two or more subproblems,
solving each subproblem independently, and then combining their solutions. This method is
efficient when the subproblems are completely separate and don’t overlap. A classic example is
Merge Sort, where the array is divided in half, sorted independently, and then merged. Similarly,
QuickSort uses Divide and Conquer by choosing a pivot, splitting the array, and recursively
sorting each part. Divide and Conquer is best suited for problems where subproblems are disjoint
and don’t rely on each other's results.

 On the other hand, Dynamic Programming is used when subproblems overlap, meaning the same
subproblem gets solved multiple times. To avoid redundant computations, DP stores the results
of solved subproblems in a table (memoization or tabulation) and reuses them. This makes DP
extremely efficient for problems with overlapping subproblems and optimal substructure. A
well-known example is the Fibonacci number sequence, where using Divide and Conquer would
result in exponential time due to repeated calculations, but DP reduces it to linear time. Another
example is the 0/1 Knapsack problem, where DP stores the result of previous capacities and item
combinations.
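The Fibonacci example above can be sketched with memoization; functools.lru_cache stores each solved subproblem so it is computed only once:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache, fib(n - 1) and fib(n - 2) recompute the same
    # overlapping subproblems, giving exponential time; with it, each
    # value of n is solved once, giving linear time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

The same two lines of recursion run in seconds for n = 500 with the cache, but would not finish in any reasonable time without it.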

 In terms of efficiency, Dynamic Programming is generally better when the problem has
overlapping subproblems. It avoids recomputation and greatly improves performance, often
reducing time complexity from exponential to polynomial. However, DP uses more memory to
store the subproblem results, while Divide and Conquer may use less space but more computation
time due to recursive calls and lack of memoization.

13
 When it comes to ease of implementation, Divide and Conquer is often simpler and more intuitive
for problems like sorting and searching. Dynamic Programming, though more powerful, can be
harder to implement due to the need for careful table design and indexing.

 Both approaches are essential in algorithm design. Divide and Conquer is ideal for problems like
sorting, searching, and multiplication of matrices. Dynamic Programming shines in optimization
problems like Knapsack, Matrix Chain Multiplication, and Longest Common Subsequence.
Choosing the better one depends on the problem structure—if subproblems overlap and you want
to improve performance, DP is the way to go.

OUTPUT

Figure 5.2 : Sorting

14
MODULE 3: DISPLAYED SORTED OUTPUT

Comparison Between Greedy and Dynamic Programming

 Greedy and Dynamic Programming (DP) are two different algorithmic strategies used to solve
optimization problems, but they differ significantly in how they make decisions. Both aim to find
the best solution, but their approach to selecting choices and handling subproblems makes them
suitable for different kinds of problems.

 A Greedy algorithm works by making the best local choice at each step with the hope that it will
lead to the global optimum. It does not reconsider its decisions once made. This makes greedy
algorithms fast, simple, and memory-efficient. They usually run in O(n) or O(n log n) time and
are easy to implement. Common examples include the Fractional Knapsack problem, Activity
Selection, and Huffman Encoding. However, greedy algorithms work only when the problem
has the greedy-choice property, meaning a global optimum can be reached by choosing the best
local option at each step.

 In contrast, Dynamic Programming solves problems by breaking them down into overlapping
subproblems, solving each subproblem once, and storing its result for reuse. It guarantees an
optimal solution, even in cases where greedy algorithms fail. Examples include the 0/1 Knapsack
problem, Longest Common Subsequence, and Matrix Chain Multiplication. While DP is more
powerful, it can also be more complex to implement and often requires more memory due to
table storage.

 When it comes to performance, greedy is faster because it skips all the recomputation and
storage. But this speed comes at the cost of accuracy. In problems like 0/1 Knapsack, greedy fails
to find the optimal answer, while DP always gives the correct one. DP takes more time and
memory (usually O(n²) or O(n × capacity)), but it’s reliable.
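A standard tabulation for the 0/1 Knapsack illustrates the DP side of this comparison; a one-dimensional table of size O(capacity) suffices:

```python
def knapsack_dp(items, capacity):
    # items: list of (value, weight) pairs.
    # dp[c] = best achievable value with remaining capacity c.
    dp = [0] * (capacity + 1)
    for value, weight in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]

print(knapsack_dp([(60, 10), (100, 20), (120, 30)], 50))  # 220
```

A greedy ratio-based pick on the same input takes items (60, 10) and (100, 20) for a value of 160, while the DP table correctly finds 220 by choosing (100, 20) and (120, 30).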

 Greedy algorithms are faster and simpler, perfect for real-time or large-scale problems where
"good enough" is acceptable. Dynamic Programming is more accurate and reliable, best for
complex optimization problems where every decision matters.

15
OUTPUT

Figure 5.3 : Displayed Sorted Output

16
CHAPTER 6
CONCLUSION AND FUTURE ENHANCEMENTS

6.1 Conclusion

In conclusion, both Greedy and Dynamic Programming (DP) algorithms are essential strategies in
problem-solving, each with its unique advantages. Greedy algorithms offer simplicity, speed, and low
memory usage, making them ideal for problems where local decisions lead to a global optimum.
However, they can fail when the problem does not satisfy the greedy-choice property. On the other hand,
Dynamic Programming guarantees optimal results by solving and storing solutions to overlapping
subproblems, though it may require more time and space. Understanding the structure of the problem —
particularly whether it involves overlapping subproblems and optimal substructure — is crucial in
choosing the right approach. While greedy is efficient, DP is more robust and accurate, especially for
complex optimization problems.

6.2 Future Enhancements

 Future work can explore combining Greedy and DP methods into hybrid algorithms that offer
both speed and accuracy for specific problem types.

 A smart system can be built to analyze input patterns and automatically choose whether to apply
a greedy or dynamic programming strategy.

 Developing user-friendly visualization tools to show step-by-step execution of Greedy vs DP


could enhance learning and debugging.

 Optimize DP algorithms to handle larger datasets more efficiently using space-reduction


techniques like rolling arrays or memoization.

 Machine Learning techniques could be integrated to learn from past problem patterns and
recommend the best approach dynamically.

17
CHAPTER 7
REFERENCES

 Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to
Algorithms (3rd ed.). MIT Press.
A comprehensive guide to algorithm design techniques, including Greedy and Dynamic
Programming.

 GeeksforGeeks. (n.d.). Greedy Algorithms.


Practical examples and explanations of greedy algorithms.

 GeeksforGeeks. (n.d.). Dynamic Programming.


Concepts and coding patterns for various DP problems.

 Khan Academy. (n.d.). Dynamic Programming.


Beginner-friendly explanation of DP and when to use it.

 HackerEarth Blog. (n.d.). Greedy vs. Dynamic Programming.


Good comparison between the two approaches with examples.

 Coursera - Algorithms Specialization by Stanford University. (n.d.).


Offers deep dives into Divide and Conquer, Greedy, and DP algorithms.

18
APPENDIX
SERVER CODE:

import random

def input_values():
    arr = list(map(int, input("Enter numbers separated by space: ").split()))
    print("\nInitial array:", arr)
    return arr

def partition(arr, low, high):
    pivot = arr[high]
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    print(f"Partition around {pivot}: {arr}")
    return i + 1

def random_partition(arr, low, high):
    pivot_index = random.randint(low, high)
    arr[pivot_index], arr[high] = arr[high], arr[pivot_index]
    return partition(arr, low, high)

def randomized_quick_sort(arr, low, high):
    if low < high:
        pivot_index = random_partition(arr, low, high)
        randomized_quick_sort(arr, low, pivot_index - 1)
        randomized_quick_sort(arr, pivot_index + 1, high)

def display_sorted_array(arr):
    print("\nFinal Sorted array:", arr)

if __name__ == "__main__":
    arr = input_values()
    randomized_quick_sort(arr, 0, len(arr) - 1)
    display_sorted_array(arr)
