
Certainly!

Let's break down these elementary sorting methods:

**1. Selection Sort:**

* **Definition:**
* Selection sort is a simple, in-place comparison sorting algorithm. It divides
the input list into two parts: a sorted sublist and an unsorted sublist.
* It repeatedly finds the minimum element from the unsorted sublist and places
it at the end of the sorted sublist.
* In essence, it "selects" the smallest remaining element and puts it in its
correct position.
* **Explanation:**
* The algorithm iterates through the array. For each iteration, it finds the
smallest element in the unsorted portion of the array.
* Then, it swaps that smallest element with the element at the beginning of the
unsorted portion.
* This process continues until the entire array is sorted.
* **Time Complexity:**
* Best case: O(n²)
* Average case: O(n²)
* Worst case: O(n²)
* Selection sort is quadratic in every case, because the nested loops always run
regardless of the initial order of the input.
* **Key Characteristics:**
* It performs well on small lists.
* It minimizes the number of swaps: at most n - 1 swaps are performed in total.

**2. Insertion Sort:**

* **Definition:**
* Insertion sort is another simple, in-place comparison sorting algorithm.
* It builds the final sorted array one item at a time.
* It iterates through the input elements and inserts each element into its
correct position in the already sorted portion of the array.
* **Explanation:**
* The algorithm starts with the second element and compares it with the
elements before it.
* If the element is smaller than its predecessor, it is inserted into the
correct position by shifting the larger elements to the right.
* This process is repeated for each element in the unsorted portion of the
array.
* **Time Complexity:**
* Average case: O(n²)
* Worst case: O(n²)
* Best case: O(n) (when the array is already sorted)
* **Key Characteristics:**
* It is efficient for small lists and partially sorted lists.
* It is a stable sorting algorithm (maintains the relative order of equal
elements).

**3. Bubble Sort:**

* **Definition:**
* Bubble sort is a simple comparison sorting algorithm that repeatedly steps
through the list, compares adjacent elements, and swaps them if they are in the
wrong order.
* The pass through the list is repeated until the list is sorted.
* **Explanation:**
* The algorithm repeatedly compares adjacent elements and swaps them if they
are in the wrong order.
* Larger elements "bubble" to the end of the list with each pass.
* The process is repeated until no more swaps are needed, indicating that the
list is sorted.
* **Time Complexity:**
* Average case: O(n²)
* Worst case: O(n²)
* Best case: O(n) (when the array is already sorted)
* **Key Characteristics:**
* It is easy to understand and implement.
* It is generally inefficient for large lists.
* It is also a stable sorting algorithm.

**In summary:**

* All three are considered elementary sorting algorithms, meaning they are
relatively simple to implement.
* They all have a worst-case and average-case time complexity of O(n²), making them
inefficient for large datasets.
* Insertion sort has a best case of O(n), and is generally more efficient than
bubble sort, and sometimes more efficient than selection sort.
* Selection sort minimizes the number of swaps.

I hope this explanation is helpful!

Absolutely! Let's illustrate each of these sorting algorithms with C++ code
examples and explanations.

**1. Selection Sort (C++ Example):**

```cpp
#include <iostream>
#include <vector>

// Sort the vector in ascending order using selection sort.
void selectionSort(std::vector<int>& arr) {
    int n = arr.size();
    for (int i = 0; i < n - 1; ++i) {
        // Find the index of the smallest element in the unsorted part [i, n).
        int minIndex = i;
        for (int j = i + 1; j < n; ++j) {
            if (arr[j] < arr[minIndex]) {
                minIndex = j;
            }
        }
        // Move the smallest element to the front of the unsorted part.
        if (minIndex != i) {
            std::swap(arr[i], arr[minIndex]);
        }
    }
}

int main() {
    std::vector<int> arr = {64, 25, 12, 22, 11};
    selectionSort(arr);
    std::cout << "Sorted array: ";
    for (int num : arr) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    return 0;
}
```

* **Explanation:**
* The outer loop iterates from the beginning to the second-to-last element.
* `minIndex` stores the index of the minimum element found in the unsorted
part.
* The inner loop finds the actual minimum element in the unsorted portion.
* `std::swap` exchanges the current element with the minimum element found.
* **Example:**
* Given `[64, 25, 12, 22, 11]`, the algorithm finds 11 as the smallest and
swaps it with 64. Then finds 12, and so on.

**2. Insertion Sort (C++ Example):**

```cpp
#include <iostream>
#include <vector>

// Sort the vector in ascending order using insertion sort.
void insertionSort(std::vector<int>& arr) {
    int n = arr.size();
    for (int i = 1; i < n; ++i) {
        int key = arr[i];   // element to insert into the sorted prefix [0, i)
        int j = i - 1;
        // Shift elements greater than key one position to the right.
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;   // insert key into its correct position
    }
}

int main() {
    std::vector<int> arr = {64, 25, 12, 22, 11};
    insertionSort(arr);
    std::cout << "Sorted array: ";
    for (int num : arr) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    return 0;
}
```

* **Explanation:**
* The outer loop starts from the second element.
* `key` stores the current element that needs to be inserted into the sorted
portion.
* The inner `while` loop shifts elements greater than `key` to the right.
* Finally, `key` is inserted into its correct position.
* **Example:**
* Given `[64, 25, 12, 22, 11]`, it compares 25 with 64, inserts 25 before 64.
Then compares 12 with 25 and 64, and inserts 12 at the beginning, etc.

**3. Bubble Sort (C++ Example):**

```cpp
#include <iostream>
#include <vector>

// Sort the vector in ascending order using bubble sort.
void bubbleSort(std::vector<int>& arr) {
    int n = arr.size();
    for (int i = 0; i < n - 1; ++i) {
        bool swapped = false;
        // After each pass, the largest remaining element sits at index n - i - 1.
        for (int j = 0; j < n - i - 1; ++j) {
            if (arr[j] > arr[j + 1]) {
                std::swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        // If no swaps occurred, the array is already sorted; this early exit
        // gives the O(n) best case described above.
        if (!swapped) {
            break;
        }
    }
}

int main() {
    std::vector<int> arr = {64, 25, 12, 22, 11};
    bubbleSort(arr);
    std::cout << "Sorted array: ";
    for (int num : arr) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    return 0;
}
```

* **Explanation:**
* The outer loop controls the number of passes.
* The inner loop compares adjacent elements and swaps them if they are in the
wrong order.
* The largest element "bubbles" to the end with each pass.
* **Example:**
* Given `[64, 25, 12, 22, 11]`, it compares 64 and 25, swaps them. Then 64 and
12, swaps them. This process repeats, and the largest number progresses to the end
of the array.

These C++ examples should help you understand how each sorting algorithm works in
practice.

**Quicksort (C++ Example):**

```cpp
#include <iostream>
#include <vector>

// Partition the subarray arr[low..high] around a pivot and return the
// final index of the pivot.
int partition(std::vector<int>& arr, int low, int high) {
    // Choose the rightmost element as the pivot.
    int pivot = arr[high];

    // i tracks the boundary of the "smaller than or equal to pivot" region.
    int i = low - 1;

    // Iterate through the subarray from low to high - 1.
    for (int j = low; j <= high - 1; j++) {
        // If the current element is smaller than or equal to the pivot,
        // grow the smaller region and move the element into it.
        if (arr[j] <= pivot) {
            i++;
            std::swap(arr[i], arr[j]);
        }
    }
    // Place the pivot just after the smaller region.
    std::swap(arr[i + 1], arr[high]);
    // Return the pivot index.
    return i + 1;
}

// Recursively sort arr[low..high] with quicksort.
void quickSort(std::vector<int>& arr, int low, int high) {
    // Only subarrays with more than one element need sorting.
    if (low < high) {
        // Partition the subarray and get the pivot index.
        int pi = partition(arr, low, high);

        // Recursively sort the elements before and after the pivot.
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

// Example usage
int main() {
    std::vector<int> arr = {10, 7, 8, 9, 1, 5};
    int n = arr.size();

    // Call the quicksort function on the whole array.
    quickSort(arr, 0, n - 1);

    // Print the sorted array.
    std::cout << "Sorted array: ";
    for (int num : arr) {
        std::cout << num << " ";
    }
    std::cout << std::endl;

    return 0;
}
```

**Explanation:**

1. **`partition(std::vector<int>& arr, int low, int high)` Function:**


* This function selects the last element of the subarray as the pivot.
* It iterates through the subarray and rearranges elements so that all elements
smaller than or equal to the pivot are placed before it, and all elements greater
than the pivot are placed after it.
* It returns the index of the pivot element after partitioning.
* `i` is the index of the smaller element, and it is incremented each time a
smaller element is found.
* `j` is the index used to traverse the array.
* After the loop completes, the pivot element is placed in its correct position
by swapping it with the element at `arr[i + 1]`.

2. **`quickSort(std::vector<int>& arr, int low, int high)` Function:**


* This is the recursive function that implements the quicksort algorithm.
* It checks if the subarray has more than one element (`low < high`).
* If it does, it calls the `partition()` function to partition the subarray.
* Then, it recursively calls `quickSort()` on the subarrays before and after
the pivot.
* The base case of the recursion is when `low >= high`, meaning the subarray
has one or zero elements, which are already sorted.

3. **`main()` Function:**
* Creates a sample vector of integers.
* Calls the `quickSort()` function to sort the vector.
* Prints the sorted vector.

**How Quicksort Works:**

* **Divide:** The `partition()` function divides the array into two subarrays
around a pivot element.
* **Conquer:** The `quickSort()` function recursively sorts the two subarrays.
* **Combine:** Quicksort is an in-place sorting algorithm, so no explicit combining
step is needed. The subarrays are already sorted in their correct positions.
* The selection of the pivot greatly impacts the performance of quicksort. In this
implementation, the last element is chosen. Other pivot selection strategies exist,
such as picking a random element or the median of three elements.
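
As an illustration of the random-pivot strategy mentioned above, here is a minimal
sketch. It assumes the `partition()` function from the example is in scope;
`randomizedPartition` is a hypothetical helper name, not part of any standard
library, and a real implementation would typically seed the generator or use
`<random>`.

```cpp
#include <cstdlib>   // std::rand
#include <utility>   // std::swap
#include <vector>

// Hypothetical helper (assumes partition() from the example above is in scope):
// pick a random index in [low, high], move that element to arr[high], and then
// reuse the existing Lomuto-style partition, which always pivots on arr[high].
int randomizedPartition(std::vector<int>& arr, int low, int high) {
    int randomIndex = low + std::rand() % (high - low + 1);
    std::swap(arr[randomIndex], arr[high]);
    return partition(arr, low, high);
}
```

Calling `randomizedPartition` instead of `partition` inside `quickSort` makes the
O(n²) worst case extremely unlikely for any fixed input.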

When comparing quicksort and heapsort, it's essential to consider both their
theoretical time complexities and their practical performance characteristics.
Here's a breakdown:

**Time Complexity:**

* **Quicksort:**
* Average case: O(n log n)
* Worst case: O(n²) (occurs with poor pivot selection, e.g., always choosing the
last element of an already-sorted array)
* **Heapsort:**
* Best, average, and worst case: O(n log n)

**Practical Performance:**

* **Quicksort:**
* Generally faster than heapsort in practice due to better cache locality. This
means that the data being accessed is more likely to be in the computer's fast
cache memory, leading to faster access times.
* However, its performance can be highly dependent on pivot selection. A poor
pivot can lead to the worst-case O(n²) time complexity.
* **Heapsort:**
* Provides consistent performance, guaranteeing O(n log n) time complexity in
all cases.
* Tends to be slower than quicksort in practice because its access pattern jumps
around the array, which is less cache-friendly.

**When to Prefer One Over the Other:**


* **Quicksort:**
* Use when average-case performance matters most and you can implement a good
pivot selection strategy (e.g., a random pivot).
* Suitable for most general-purpose sorting tasks.
* **Heapsort:**
* Use when a guaranteed O(n log n) time complexity is essential, regardless of
the input data.
* Useful in situations where worst-case performance must be avoided.
* Useful in embedded systems or other memory-constrained environments, because
it sorts in place and has a very small memory footprint.
* Natural when you already need a priority queue, since heaps are the data
structure that priority queues are built on. (A minimal heapsort sketch follows
this list.)
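
Since the comparison above only shows quicksort code, here is a minimal heapsort
sketch for reference. It follows the standard build-max-heap / sift-down
formulation; the names `heapify` and `heapSort` are just illustrative choices.

```cpp
#include <iostream>
#include <vector>

// Sift the element at index i down until the subtree rooted at i satisfies the
// max-heap property. The heap occupies arr[0..n-1].
void heapify(std::vector<int>& arr, int n, int i) {
    int largest = i;
    int left = 2 * i + 1;
    int right = 2 * i + 2;

    if (left < n && arr[left] > arr[largest]) largest = left;
    if (right < n && arr[right] > arr[largest]) largest = right;

    if (largest != i) {
        std::swap(arr[i], arr[largest]);
        heapify(arr, n, largest);   // continue sifting down
    }
}

void heapSort(std::vector<int>& arr) {
    int n = arr.size();

    // Build a max-heap bottom-up.
    for (int i = n / 2 - 1; i >= 0; --i) {
        heapify(arr, n, i);
    }

    // Repeatedly move the current maximum to the end of the array and restore
    // the heap property on the shrunken heap.
    for (int i = n - 1; i > 0; --i) {
        std::swap(arr[0], arr[i]);
        heapify(arr, i, 0);
    }
}

int main() {
    std::vector<int> arr = {10, 7, 8, 9, 1, 5};
    heapSort(arr);
    std::cout << "Sorted array: ";
    for (int num : arr) {
        std::cout << num << " ";
    }
    std::cout << std::endl;
    return 0;
}
```

Note how each `heapify` call follows parent-to-child links (indices 2i + 1 and
2i + 2) that can be far apart in memory; this is the scattered access pattern
behind heapsort's weaker cache locality.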

**Key Considerations:**

* **Cache Locality:** Quicksort's mostly sequential access patterns generally result
in better cache performance than heapsort's more scattered access patterns.
* **Stability:** Neither algorithm is stable in its usual form: heapsort does not
preserve the relative order of equal elements, and standard in-place quicksort does
not either. Stable quicksort variants exist, but they typically need extra memory or
a more involved partitioning scheme.
* **In-Place Sorting:** Both quicksort and heapsort sort in place. Heapsort needs
only O(1) extra space, while quicksort uses O(log n) stack space on average for its
recursion.
* **Introsort:** In practice, many standard library sorting functions use
introsort, which starts with quicksort and switches to heapsort if the recursion
depth becomes too large, preventing worst-case quicksort behavior.
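
To make the introsort idea concrete, here is a minimal conceptual sketch (not the
actual `std::sort` implementation). It assumes the `partition()` function from the
quicksort example above is in scope, and it uses the standard library heap
algorithms as the heapsort fallback; `introSort` and `introSortHelper` are
illustrative names only.

```cpp
#include <algorithm>   // std::make_heap, std::sort_heap
#include <cmath>       // std::log2
#include <vector>

// Conceptual sketch of introsort: quicksort-style recursion with a depth budget.
// When the budget runs out, the current range is finished with heapsort instead,
// which caps the overall running time at O(n log n).
void introSortHelper(std::vector<int>& arr, int low, int high, int depthLimit) {
    if (low >= high) return;
    if (depthLimit == 0) {
        // Depth budget exhausted: heapsort this range via the heap algorithms.
        std::make_heap(arr.begin() + low, arr.begin() + high + 1);
        std::sort_heap(arr.begin() + low, arr.begin() + high + 1);
        return;
    }
    int pi = partition(arr, low, high);   // partition() from the quicksort example
    introSortHelper(arr, low, pi - 1, depthLimit - 1);
    introSortHelper(arr, pi + 1, high, depthLimit - 1);
}

void introSort(std::vector<int>& arr) {
    if (arr.empty()) return;
    // Typical depth budget: about 2 * log2(n) levels of quicksort recursion.
    int depthLimit = 2 * static_cast<int>(std::log2(arr.size()));
    introSortHelper(arr, 0, static_cast<int>(arr.size()) - 1, depthLimit);
}
```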

In summary, quicksort is often the faster choice for general-purpose sorting, but
heapsort provides a reliable guarantee of O(n log n) time complexity.
