Data Structure - Chapter 8 Queue Part 2

This is a preview of Chapter 8, Queue (Part 2), from the upcoming book "DSA in Modern C++: From the Ground Up To Real-World Applications" by Eyadere Addis. It covers the fundamentals of the queue, the FIFO principle, its core operations, and implementations using arrays and linked lists, with C++17 code, flowcharts, and exercises. The full book will be available soon on Amazon KDP and Google Play Books.


Contents

Part 2: Linked Lists, STL, Applications & Advanced Topics
Introduction to Part 2
8.6 All Aboard! Building a Queue the Modern C++ Way (LinkedQueue)
The Node and LinkedQueue Blueprint
Dissecting the Modern LinkedQueue
8.6.1 The Nitty-Gritty: Implementing the Core Operations
Operation: enqueue(item) – Joining the Line
Operation: dequeue() – Next Customer, Please!
Operation: front() (or peek()) – Who's Next?
8.6.2 Exercises on "Linked Queue"
Solutions for 8.6.2 Exercises
8.7 Comparing Queue Implementations: The Final Showdown
Factor 1: Performance Consistency (The "Hiccup" Factor)
Factor 2: Memory Usage
Factor 3: Cache Performance (The Hidden Speed Boost)
Summary Table: Array vs. Linked List
8.8 The STL Solution: std::queue - Your Go-To for FIFO
8.8.1 The Adaptor Philosophy: Not a Container, but a Controller
8.8.2 The Workhorse Beneath: The Underlying Container
C++ code
8.9 Full C++ Example Application
8.9.1 Array Implementations in Action
1. C++ Code of Shifty Queue
2. C++ Code of CircularArrayQueue
8.9.2 A Real-World Example of a Queue - Hospital Patient Management System – Using Linked Lists
Expected output
8.10 Beyond the Basics: Advanced Queue Variants
8.10.1 The Double-Ended Queue (Deque): The Line-Cutter
Diagram of Deque Operations
8.10.2 The Priority Queue: The VIP Line
8.11 Part 2 Summary: The Linked List, The STL, and The VIP Line
Summary Table: Queue Implementations & Variants
8.12 Practice Problems: Part 2
Solutions to Part 2 Problems
8.13 Chapter Summary: Mastering the Line
Summary Table: The Full Queue Family
8.14 Chapter 8 Practice Problems - Answers & Explanations
Conceptual Understanding & Tracing
Implementation & Modification Exercises
Application & Problem-Solving Exercises
Answers & Explanations for 8.14 Practice Problems
Conceptual Understanding & Tracing
Implementation & Modification Exercises
Application & Problem-Solving Exercises
8.15 Coding Interview Questions: Queues in Action
Question 1: Implement Stack using Two Queues
The Brainstorm: How Do We Flip the Line?
The C++ Solution (Expensive Pop)
Question 2: First Non-Repeating Character in a Stream
Solution for Question 2: First Non-Repeating Character in a Stream
The C++ Code
Question 3: Sliding Window Maximum
Solution for Question 3: Sliding Window Maximum
The C++ Code
8.16 Key Terminology
8.17 List of Resources, Books, and References (for Queues & General DSA)

Part 2: Linked Lists, STL, Applications & Advanced Topics

Introduction to Part 2
Alright, you've made it through the workshop. You've built a robust, high-performance
CircularArrayQueue and understand the engineering that makes it tick. You have a solid, array-based tool
in your belt.

Now for the fun part: let's expand our horizons. In this second half of the chapter, we'll explore a
completely different, and in many ways more elegant, way to build a queue using a linked list. We'll put
the two implementations head-to-head to understand their trade-offs.

Then, we'll bridge the gap to professional C++ development by mastering the Standard Library's
std::queue and std::deque. Finally, we'll take our new knowledge and put it to work, solving real-world
problems and tackling the classic, insightful interview questions that every software engineer should
know. This is where the abstract concept of FIFO becomes a concrete and powerful problem-solving
technique.

8.6 All Aboard! Building a Queue the Modern C++ Way (LinkedQueue)
A singly linked list is a natural fit for a queue, especially when we track both the head_ (our front) and
tail_ (our rear). But in modern C++, we can do better than juggling raw new and delete. Just like we did
with our trees, we're going to leverage std::unique_ptr to handle memory management for us
automatically.

This makes our code safer, cleaner, and frankly, easier to write. The Node owns its next node, and when
it's destroyed, the chain of destruction is handled for us.

The Node and LinkedQueue Blueprint:


#include <memory> // For std::unique_ptr
#include <stdexcept> // For exceptions
#include <utility> // For std::move

template <typename T>


class LinkedQueue {
private:
// The Node now uses a unique_ptr to own the next node in the chain.
struct Node {
T data;
std::unique_ptr<Node> next;

Node(T val) : data(std::move(val)), next(nullptr) {}


};

// The queue owns the head of the list.


std::unique_ptr<Node> head_;
// We need a raw, non-owning pointer to the tail for O(1) enqueues.
Node* tail_;
size_t count_;

public:
// --- Constructor & The Rule of ZERO ---
// Because unique_ptr handles cleanup, our destructor can be defaulted!
// And because unique_ptr is non-copyable, our queue is non-copyable
// by default, which is a safe starting point. Moving is fine.
LinkedQueue() : head_(nullptr), tail_(nullptr), count_(0) {}
~LinkedQueue() = default;
LinkedQueue(const LinkedQueue&) = delete;
LinkedQueue& operator=(const LinkedQueue&) = delete;
LinkedQueue(LinkedQueue&&) = default;
LinkedQueue& operator=(LinkedQueue&&) = default;

// --- Core Operations ---


size_t size() const { return count_; }
bool isEmpty() const { return count_ == 0; }

// Let's look at the front item.


const T& front() const {
if (isEmpty()) {
throw std::runtime_error("Queue underflow on front()");
}
return head_->data;
}

// Enqueue a new item at the tail.


void enqueue(T value) {
// Create the new node. It will be owned by a unique_ptr.
auto newNode = std::make_unique<Node>(std::move(value));
// Get a raw pointer to it before we give up ownership.
Node* newNodePtr = newNode.get();

if (isEmpty()) {
// If the line is empty, this new node is both the head and the tail.
head_ = std::move(newNode);
tail_ = head_.get();
} else {
// The current tail's 'next' pointer takes ownership of the new node.
tail_->next = std::move(newNode);
// And our tail pointer moves to this new last node.
tail_ = newNodePtr;
}
count_++;
}

// Dequeue an item from the head.


void dequeue() {
if (isEmpty()) {
throw std::runtime_error("Queue underflow on dequeue()");
}
// Take ownership of the head's 'next' node.
std::unique_ptr<Node> oldHeadNext = std::move(head_->next);

// The old head is now gone. The new head is what was next.
head_ = std::move(oldHeadNext);

count_--;

// CRITICAL: If the queue is now empty, the tail was also the head,
// so we must null out the dangling tail pointer.
if (isEmpty()) {
tail_ = nullptr;
}
}
};

Dissecting the Modern LinkedQueue:


No More Manual delete: Look at the destructor and dequeue. There isn't a single delete call.
std::unique_ptr's destructor is called automatically, creating a clean, safe, cascading cleanup. This
is the power of RAII.

The tail_ Pointer: Notice that tail_ is a raw Node*. Why? Because the head_ unique_ptr and the
chain of next unique_ptrs already define the ownership of the list. The tail_ pointer is just a
non-owning "shortcut" so we can get to the end in O(1) time without traversing the whole list.

std::move is Key: In enqueue and dequeue, we use std::move to explicitly transfer ownership of
the unique_ptrs. This is how smart pointers are passed around and re-linked.

This version is more robust, safer, and more idiomatic in a modern C++ codebase.
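
Trying It Out: A Quick Driver

Before we dissect each operation in detail, here is a minimal driver you can use to exercise the class. This is only a sketch: it assumes the LinkedQueue template above is visible in the same translation unit, and the name line and the sample strings are purely illustrative.

#include <iostream>
#include <string>

int main() {
    LinkedQueue<std::string> line;

    line.enqueue("Alice");   // Alice joins first, so she will be served first.
    line.enqueue("Bob");
    line.enqueue("Carol");

    while (!line.isEmpty()) {
        std::cout << "Serving: " << line.front() << '\n'; // front() only peeks...
        line.dequeue();                                    // ...dequeue() actually removes.
    }
    // Prints Alice, Bob, Carol -- strictly FIFO order.
    return 0;
}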

8.6.1 The Nitty-Gritty: Implementing the Core Operations


Alright, we've got our LinkedQueue class skeleton, complete with a constructor to create an empty queue
and a destructor to clean up the mess afterward. We also have our quick isEmpty() and size() checks
ready to go.

Now for the main event. It's time to implement the three operations that give the queue its entire
personality:

1. enqueue(item): How do we let a new person join the back of the line?

2. dequeue(): How do we serve the person at the front and send them on their way?

3. front(): How do we peek at who's next without actually serving them?

This is where our head_ and tail_ pointers will do all the work. As we'll see, the logic for each of these is
a beautiful, efficient dance of pointer manipulation. Let's break them down one by one.

Operation: enqueue(item) – Joining the Line


When a new item wants to join our LinkedQueue, it always goes to the very end – the tail_ of our list.
This is exactly what push_back does for a linked list that knows where its tail is.

The Goal (Algorithm):

When a new item arrives, we want to add it to the very rear (or back) of the queue. The queue then
grows by one, and this new item becomes the newest one in line.

The Plan (Pseudocode):


PROCEDURE enqueue(myQueue, newItem)
/* First, check if there's even room (for queues with a fixed
size limit)*/
IF myQueue is full THEN
Oops! No more space. SIGNAL error (Queue Overflow)
ELSE
/* All clear! Place the newItem at the designated rear
position.*/

PLACE newItem at myQueue.rear_position

// Update where the new rear is (if we track it explicitly)


// or simply increment how many items we have.
UPDATE myQueue.rear_indicator
// e.g., move rear pointer/index
INCREMENT myQueue.current_item_count
END IF
END PROCEDURE

Flowchart of enqueue(item)

Figure 8.20. Flowchart of the enqueue Operation for a Linked List Queue. This diagram outlines the logic for
adding a new item. After creating a new node, a critical check determines if the queue is empty. If it is, the new
node becomes both the head and the tail. If not, the new node is linked after the current tail, and the tail pointer is
updated. In both cases, the queue's count is incremented before the operation concludes.

C++ Implementation - enqueue()


// --- Implementation Details for LinkedQueue ---

template <typename T>


void LinkedQueue<T>::enqueue(const T& value) {
// Step 1: Create our new passenger (node) for the queue.
// Its 'next' will be nullptr because it's going to be the new tail.
ListNode<T>* newNode = nullptr;
try {
newNode = new ListNode<T>(value, nullptr); // Data is 'value', next is initially null
} catch (const std::bad_alloc& e) {
// If 'new' fails, we can't proceed. Let the caller know.
std::cerr << "ERROR: Memory allocation failed for new queue node!" << std::endl;
throw; // Re-throw the bad_alloc exception
}

// Step 2: Is this the very first item in an empty queue?


if (isEmpty()) {
// If so, this new node is both the head AND the tail of our line!
head_ = newNode;
tail_ = newNode;
}
// Step 3: Or, are we adding to an existing line?
else {
// The current tail node needs to point its 'next' to our newNode.
tail_->next = newNode;
// And now, our newNode officially becomes the new tail of the queue.
tail_ = newNode;
}

// Step 4: Don't forget to count our new arrival!


count_++;
}

Analysis:

So, what's the performance cost of our enqueue operation? Let's break it down into the work we're
actually doing.
1. Creating a New Node (new Node(value)): First, we have to ask the system for a small chunk of
memory to hold our new node. On a modern computer, memory allocation is incredibly fast.
While technically not guaranteed to be constant time (in very rare, complex scenarios it could be
slower), for all practical purposes in analysis, we consider a single new call to be an O(1) operation.
2. Checking if the Queue is Empty (isEmpty()): Our isEmpty() method is just a simple check: return
count_ == 0;. This is a single comparison, a clear O(1) operation.
3. The Pointer Shuffle: This is the heart of the operation.
o If the queue was empty, we do two pointer assignments (head_ = newNode; tail_ =
newNode;).
o If the queue was not empty, we do two different pointer assignments (tail_->next =
newNode; tail_ = newNode;).
In either case, it's a fixed, small number of pointer manipulations. It doesn't matter if
the queue has one element or one million elements; the number of pointer assignments
is the same. This is the definition of a constant-time, O(1) operation.

4. Incrementing the Count (count_++): A single increment. Also O(1).

The Verdict:

When we add up the costs—a few O(1) pointer assignments, an O(1) check, and an O(1) memory
allocation—the total work is constant. It does not depend on N, the number of items already in the
queue.
Time Complexity: O(1)

Space Complexity (Auxiliary): O(1) (We only allocate space for the new node itself, which is part
of the data structure's growth, not extra temporary space for the operation).

This is exactly what we want from a core queue operation. Thanks to that tail_ pointer, adding a new
item to the end of the line is always a super-fast, predictable operation, no matter how long the line gets.

Diagram of enqueue on LinkedQueue

Figure 8.21: The enqueue Operation on a Linked List Queue. Adding an element is a three-step dance with
pointers. 1. A new node is created. 2. The next pointer of the current tail node (B) is updated to point to this new
node, linking it to the end of the list. 3. Finally, the queue's tail pointer itself is moved to this new node, establishing
it as the new end of the line. Because we always know where the tail is, this is a fast O(1) operation.

Operation: dequeue() – Next Customer, Please!


Serving the next item in the queue means taking it from the front, which in our linked list is the head_.
This is the pop_front equivalent for a list.

The Goal (Algorithm):

We need to remove the item that's been waiting the longest – the one at the very front of the queue. If
the operation is also meant to return this item, we grab it first. The queue then shrinks by one.

The Plan (Pseudocode):


FUNCTION dequeue(myQueue) RETURNS item_type_or_nothing
// Design choice!
// Safety first! Can't serve from an empty line.
IF myQueue is empty THEN
Uh oh! SIGNAL error (Queue Underflow)
ELSE
// Okay, let's get the item from the front.
item_to_return = GET_ITEM_FROM myQueue.front_position

// Now, "remove" it by advancing the front.


UPDATE myQueue.front_indicator
// e.g., move head pointer or front index

// One less item in the queue.


DECREMENT myQueue.current_item_count

// (Optional, if dequeue is designed to return the item)


RETURN item_to_return
END IF
END FUNCTION

(Note: Many modern queue designs, like std::queue::pop(), have dequeue (or pop) as a void function.
You'd call front() first to get the item, then pop() to remove it. This separation can be cleaner for error
handling.)
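
Here is a tiny sketch of that two-step idiom using the standard std::queue (which we'll meet properly in Section 8.8); the variable names are just for illustration:

#include <iostream>
#include <queue>

int main() {
    std::queue<int> q;
    q.push(42);
    q.push(7);

    if (!q.empty()) {            // Always check before touching the front.
        int served = q.front();  // Step 1: read the front item.
        q.pop();                 // Step 2: remove it (pop() returns void).
        std::cout << "Served: " << served << '\n'; // Served: 42
    }
    return 0;
}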

Flowchart of the dequeue() Operation

Figure 8.22 Flowchart of the dequeue Operation for a Linked List Queue. The process begins with a mandatory
underflow check to ensure the queue is not empty. If it's safe to proceed, the head pointer is advanced to the next
node in the list, and the count is decremented. A crucial final check determines if the queue has become empty; if
so, the tail pointer must also be set to nullptr to maintain a consistent state. Finally, the memory for the old head
node is deallocated.

C++ Implementation - dequeue()


// --- Implementation Details for LinkedQueue ---

template <typename T>


void LinkedQueue<T>::dequeue() {
// This version just removes, doesn't return value
// Step 1: Can't serve anyone if the line is empty!
if (isEmpty()) {
throw std::underflow_error("LinkedQueue::dequeue(): Queue is empty, nothing
to remove!");
}

// Step 2: Remember who's at the front; they're about to leave.


ListNode<T>* nodeToServeAndRemove = head_;

/* Step 3: The person behind them (if any) is now the new head of
the line. */
head_ = head_->next;
// Or equivalently: head_ = nodeToServeAndRemove->next;

// Step 4: One less in the queue.


count_--;

/*
Step 5: CRITICAL Detail! What if that was the *last* person in
line?
- If head_ is now nullptr, it means the queue just became empty.
- In that case, our tail_ pointer is now pointing to the node
we're about to delete (it was both head and tail). We MUST set
tail_ to nullptr too!
*/
if (head_ == nullptr) { // Or equivalently: if (count_ == 0)
tail_ = nullptr;
}

/* Step 6: Okay, the person has been served (conceptually). We can
now free up the memory for the node that just left. */
delete nodeToServeAndRemove;
}

Analysis:

Checking isEmpty is O(1). The pointer reassignments are O(1). Decrementing count_ is O(1). The delete
operation is O(1) on average. Thus, dequeue is also a speedy O(1) time complexity operation. The check
for head_ == nullptr to update tail_ is vital for correctness when the queue becomes empty.

Analysis of dequeue(): The Cost of Serving the Front

Alright, let's put our dequeue operation under the microscope. We've seen the logic: we check for an
empty line, move the head pointer forward, and clean up the old node. But how fast is it, really?

Let's break down the work involved.

Checking if the Queue is Empty (isEmpty()): This is just a check of our count_ member or whether head_
is null. It's a single, instantaneous check. That's a textbook O(1) operation.

The Pointer Shuffle: This is the core of the operation.

ListNode<T>* nodeToServeAndRemove = head_; — A single pointer copy. O(1).

head_ = head_->next; — A single pointer assignment. O(1).

if (head_ == nullptr) { tail_ = nullptr; } — A single check and, in the worst case, one extra pointer
assignment. Still O(1).

The key takeaway here is that the number of pointer operations is constant. It doesn't matter if the
queue has two elements or two million; we are only ever manipulating the pointers at the very front of
the list. We never have to walk the list.

Decrementing the Count (count_--): A single subtraction. Another O(1) operation.

Cleaning Up (delete nodeToServeAndRemove;): Deallocating a single chunk of memory is, on average,
considered an O(1) operation.

The Verdict:

When you add it all up, every step in the dequeue process is a constant-time operation. The amount of
work does not grow as the queue gets longer. This is the efficiency we were hoping for.

Time Complexity: Guaranteed O(1)

Space Complexity (Auxiliary): O(1) (We use one or two temporary pointers, but that's a constant amount
of extra memory).

This guaranteed O(1) performance is the big win for a linked-list-based queue. Unlike the "shifty" array
that gets slower as the line gets longer, our LinkedQueue is a smooth, predictable operator. Serving the
next person in line is always just as fast, whether they're the second person or the two-thousandth.

Diagram of dequeue on LinkedQueue

Figure 8.23 The dequeue Operation on a Linked List Queue. Removing an element is a quick, two-step pointer
operation. 1. The queue's head pointer is simply moved forward to point to the next node in the chain (B), making it
the new front. 2. The old head node (A), now disconnected from the list, is deallocated from memory. This is a
guaranteed O(1) operation, as no traversal is needed.

Anti-Pattern: Forgetting to Update tail When Queue Becomes Empty

A common bug in linked-list queue implementations is failing to reset the tail pointer to nullptr when the
last element is dequeued.

Problem:
// BUGGY Dequeue - Doesn't handle tail reset
template <typename T>
void LinkedQueue<T>::dequeue_buggy() {
if (isEmpty()) { throw std::underflow_error("empty"); }
ListNode<T>* nodeToDelete = head_;
head_ = head_->next;
count_--;
// PROBLEM: Missing check! If head_ became nullptr here,
// tail_ still points to the just-deleted node (nodeToDelete)!
// if (head_ == nullptr) { tail_ = nullptr; } // <<< FORGOTTEN LINE
delete nodeToDelete;
}

Consequences:

If dequeue makes the queue empty, head becomes nullptr but tail becomes a dangling pointer
(pointing to freed memory).

Subsequent enqueue operations check isEmpty() (which might check head == nullptr and return
true) or might directly use the dangling tail pointer (e.g., tail->next = newNode), leading to
undefined behavior/crashes.

The internal state (head=nullptr, tail=dangling) is inconsistent.

Prevention:
After updating head and decrementing count in dequeue, always check if the queue has become
empty (head == nullptr or count == 0).

If it has become empty, explicitly set tail = nullptr;.
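
For reference, here is a sketch of the corrected routine with the missing line restored. The name dequeue_fixed is only for contrast with the buggy version above, and it assumes a matching declaration in the class:

// FIXED Dequeue - identical to the buggy version, plus the tail reset
template <typename T>
void LinkedQueue<T>::dequeue_fixed() {
    if (isEmpty()) { throw std::underflow_error("empty"); }
    ListNode<T>* nodeToDelete = head_;
    head_ = head_->next;
    count_--;
    if (head_ == nullptr) {   // The queue just became empty...
        tail_ = nullptr;      // <<< ...so reset the tail (the forgotten line).
    }
    delete nodeToDelete;
}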

Operation: front() (or peek()) – Who's Next?


Sometimes we just want to see who's at the front of the queue without actually serving them yet. That's
the job of front().

The Goal (Algorithm):

Simply take a look at the item at the very front of the queue without removing it. The queue itself should
remain completely unchanged.

The Plan (Pseudocode):


FUNCTION front(myQueue) RETURNS item_type
// Can't peek into an empty line!
IF myQueue is empty THEN
Oops! SIGNAL error (Queue Empty)
ELSE
// Just return what's at the front. Don't change anything else.
RETURN ITEM_AT myQueue.front_position
END IF
END FUNCTION

Flowchart Visualizing the flow

Figure 8.24: Flowchart of the front() (or peek()) Operation. The logic for peeking at the front element is simple but
must be safe. The first and most important step is to check if the queue is empty to prevent an underflow error. If
the queue is not empty, the operation simply returns a reference to the data held by the head node. Crucially, this is
a non-destructive, read-only operation; the state of the queue (its pointers and count) remains completely
unchanged.

C++ Implementation - front() (or peek())


// --- Implementation Details for LinkedQueue ---

// The non-const version lets you potentially modify the front item
template <typename T>
T& LinkedQueue<T>::front() {
if (isEmpty()) {
throw std::underflow_error("LinkedQueue::front(): Queue is empty, nothing to
see here!");
}
// Simply return a reference to the data held by the head node.
return head_->data;
}

// The const version is for when you have a const Queue object
// and just want to read the front item.
template <typename T>

const T& LinkedQueue<T>::front() const {


if (isEmpty()) {
throw std::underflow_error("LinkedQueue::front() const: Queue is empty,
nothing to see here!");
}
// Return a const reference.
return head_->data;
}

Analysis of front()

Sometimes the simplest analysis is the most beautiful. Let's break down the work our computer does
when we call front().

Checking if the Queue is Empty (isEmpty()): We look at our count_ member variable. Is it zero? This is a
single comparison. A lightning-fast O(1) operation.

Accessing the Head's Data: If the queue isn't empty, we follow the head_ pointer to the first node and
access its data member. Following a single pointer and accessing a member are also fundamental O(1)
operations.

And... that's it. There are no loops, no recursion, no traversing the list. The amount of work is tiny and,
most importantly, it's constant. It doesn't matter if there's one person in line or a million; checking who
is at the very front takes exactly the same amount of time.

The Verdict:

Time Complexity: Guaranteed O(1)

Space Complexity (Auxiliary): O(1) (The operation uses no extra memory, aside from the few bytes
needed for the return value itself).

The front() operation is as efficient as it gets, providing immediate access to the next item in the queue
without any performance penalty.

8.6.2 Exercises on "Linked Queue"


1. (Tracing): Imagine an empty LinkedQueue. Trace the exact state of the head pointer, tail pointer,
and the next pointer of each node after the following sequence of operations:
o enqueue(10)
o enqueue(20)
o dequeue()
o enqueue(30)
o dequeue()

2. (Edge Cases): Why is it so important to handle the case where the queue becomes empty after a
dequeue operation (i.e., when head == nullptr after removing the last node)? What specific
pointer needs to be updated in this case that isn't updated in a normal dequeue?

3. (Implementation Detail): Our LinkedQueue implementation used two pointers, head and tail.
What would be the Big O time complexity of the enqueue operation if we only used a head
pointer? Explain your reasoning.

Solutions for 8.6.2 Exercises


1. (Tracing):

Initial: head = nullptr, tail = nullptr

enqueue(10):
o head points to Node(10).
o tail points to Node(10).
o Node(10)->next is nullptr.
enqueue(20):
o head points to Node(10).
o tail points to Node(20).
o Node(10)->next points to Node(20).
o Node(20)->next is nullptr.

dequeue(): (returns 10)


o head now points to Node(20).
o tail still points to Node(20).
o Node(20)->next is nullptr. (The old Node(10) is deleted).

enqueue(30):
o head points to Node(20).
o tail now points to Node(30).
o Node(20)->next points to Node(30).
o Node(30)->next is nullptr.

dequeue(): (returns 20)


o head now points to Node(30).
o tail still points to Node(30).
o Node(30)->next is nullptr.

2. (Edge Cases):

When you dequeue the last element, the head pointer will naturally become nullptr. If you don't also
update the tail pointer to nullptr, you create a dangling pointer. The tail would still be pointing to the
memory location of the node that was just deleted. If you were to then enqueue a new item, the code
would try to do tail->next = ..., which would access freed memory and lead to a crash or undefined
behavior. So, after dequeuing the last item, you must set tail = nullptr to correctly represent an empty
queue.

3. (Single Pointer Complexity):

If we only had a head pointer, the enqueue operation would become O(N).

o Why? To add a new element to the back of the line, you would have to start at the head
and traverse the entire linked list, node by node (current = current->next), until you
reached the last node. Only then could you attach the new node. This traversal takes time
proportional to the number of elements already in the queue, hence O(N). This is precisely
why we keep a tail pointer—it gives us direct O(1) access to the end of the line.
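
To make that cost concrete, here is a hypothetical sketch of a head-only enqueue, written as a free function over the same ListNode type used earlier. Notice the traversal loop; this is exactly the work the tail_ pointer lets us skip:

// Hypothetical enqueue with ONLY a head pointer (no tail) -- O(N).
template <typename T>
void enqueue_headOnly(ListNode<T>*& head, const T& value) {
    ListNode<T>* newNode = new ListNode<T>(value, nullptr);
    if (head == nullptr) {             // Empty queue: the new node becomes the head.
        head = newNode;
        return;
    }
    ListNode<T>* current = head;
    while (current->next != nullptr) { // Walk the entire list, node by node...
        current = current->next;
    }
    current->next = newNode;           // ...before we can finally attach at the back.
}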

8.7 Comparing Queue Implementations: The Final Showdown
Alright, we’ve been through the trenches. We've built queues in two fundamentally different ways: one
with a chain of nodes (the LinkedQueue) and one with a clever, wrapping block of memory (the
CircularArrayQueue).
Both of them work. Both of them are efficient, with O(1) performance for their core operations. So...
which one is better?
The honest answer, like with so many things in programming, is: "It depends."
There's no single "best" implementation for every situation. A professional software engineer knows the
trade-offs and chooses the right tool for the job. Let's put our two champions in the ring and compare
them head-to-head on the factors that really matter.

Factor 1: Performance Consistency (The "Hiccup" Factor)


• LinkedQueue: The winner here. Every single enqueue and dequeue operation is a guaranteed
O(1). It involves a predictable number of steps: create one node, swap a couple of pointers. That's
it. It's smooth and consistent.

• CircularArrayQueue: This one is O(1) amortized. This is a fancy way of saying that most of the
time, it's incredibly fast. But every once in a while, when the underlying array gets full, it has to
hit the brakes to perform a resize. This resize operation is slow—it's O(N) because it has to allocate
a new, bigger array and copy all N elements over.

The Verdict: If you're building a system where a single, unpredictable delay could be catastrophic (like a
real-time audio processor or a high-frequency trading system), the guaranteed consistency of the
LinkedQueue is a major advantage. For most other applications, the "hiccup" of the circular array is so
infrequent that its average performance is fantastic.

Factor 2: Memory Usage


• LinkedQueue: This is a tale of "death by a thousand cuts." For every single element you store, you
pay a memory tax: the next pointer. On a 64-bit system, that's an extra 8 bytes of overhead for
every single item. If you're storing millions of small items (like chars or ints), this overhead can
actually be larger than the data itself!

• CircularArrayQueue: This implementation is much more memory-dense. The elements are
packed together tightly in a single block of memory. The only "wasted" space is any unused
capacity at the end of the array.

The Verdict: If you are storing large objects, the 8-byte pointer overhead of the linked list is insignificant
relative to the data itself. But if you are storing a huge number of very small, primitive data types, the
CircularArrayQueue is significantly more memory-efficient.
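
A quick back-of-the-envelope sketch makes the "pointer tax" visible. The numbers assume a typical 64-bit build; exact sizes vary by platform and padding rules:

#include <iostream>

// A node storing a single int, as a linked queue of ints would.
struct IntNode {
    int data;      // 4 bytes of payload (typically)
    IntNode* next; // 8 bytes of pointer overhead, plus alignment padding
};

int main() {
    std::cout << "sizeof(int):     " << sizeof(int)     << " bytes\n"; // typically 4
    std::cout << "sizeof(IntNode): " << sizeof(IntNode) << " bytes\n"; // typically 16
    // Roughly: a linked queue of 1,000,000 ints needs ~16 MB of node storage
    // (ignoring allocator bookkeeping), versus ~4 MB for a tightly packed array.
    return 0;
}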

Factor 3: Cache Performance (The Hidden Speed Boost)


This is the secret weapon of array-based structures. Modern CPUs are incredibly fast, but they are
dramatically slowed down when they have to fetch data from main memory (RAM). To combat this, they
have a small, super-fast memory cache right on the chip. They love to pre-fetch chunks of contiguous
memory.

• CircularArrayQueue: Winner, by a landslide. All of its data is stored in one contiguous block. When
the CPU needs to access element i, it will likely also pull i+1, i+2, etc., into its fast cache. This means
that as you process the queue, the next few elements are often already waiting in the fastest
possible memory. This is called cache locality, and it can make array-based code run much faster
in the real world than its Big O notation would suggest.

• LinkedQueue: The enemy of the cache. Each node is allocated separately in memory. They could
be anywhere. To get from one node to the next, the CPU has to follow a pointer, which often
results in a cache miss—a slow trip out to main memory to find the next node's location.

The Verdict: If you are processing large amounts of data in tight loops, the superior cache performance
of the CircularArrayQueue can make it significantly faster in practice than a LinkedQueue, even though
they are both technically O(1).
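
If you want to feel this effect on your own machine, here is a small, self-contained benchmark sketch that sums a contiguous std::vector and then a pointer-chasing std::list. The element count is arbitrary, and the exact timings will vary with your CPU and compiler flags; the relative gap is the point:

#include <chrono>
#include <cstddef>
#include <iostream>
#include <list>
#include <numeric>
#include <vector>

int main() {
    const std::size_t n = 5'000'000;
    std::vector<int> vec(n, 1); // contiguous storage, cache-friendly
    std::list<int>   lst(n, 1); // one scattered allocation per node

    auto t0 = std::chrono::steady_clock::now();
    long long vecSum = std::accumulate(vec.begin(), vec.end(), 0LL);
    auto t1 = std::chrono::steady_clock::now();
    long long lstSum = std::accumulate(lst.begin(), lst.end(), 0LL);
    auto t2 = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> vecMs = t1 - t0;
    std::chrono::duration<double, std::milli> lstMs = t2 - t1;

    std::cout << "vector sum = " << vecSum << " in " << vecMs.count() << " ms\n";
    std::cout << "list   sum = " << lstSum << " in " << lstMs.count() << " ms\n";
    return 0;
}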

Summary Table: Array vs. Linked List

Feature         | Circular Array Queue                | Linked List Queue                      | The Takeaway
----------------|-------------------------------------|----------------------------------------|---------------------------------------------
enqueue Time    | Amortized O(1)                      | Guaranteed O(1)                        | The linked list has more predictable insertion time; no resize pauses.
dequeue Time    | Guaranteed O(1)                     | Guaranteed O(1)                        | Both are excellent here.
Memory Usage    | O(Capacity) - can have slack space  | O(N) - exact memory + pointer overhead | The array might waste space; the list has a "pointer tax" on every element.
Cache Locality  | Excellent (contiguous memory)       | Poor (scattered memory)                | The array is often faster in the real world for primitive types due to the CPU cache.
Implementation  | More complex (index math, resize)   | Simpler (pointer manipulation)         | The linked list logic is often considered more straightforward.

Table 8.4 Table of Array vs. Linked List

My Personal Rule of Thumb?

When in doubt, start with the STL's default std::queue. It's a highly optimized std::deque under the hood
and gives you the best of both worlds for most common scenarios. But now, you're not just a user of
std::queue; you're an engineer who understands exactly what's going on inside and why it's designed the
way it is. You know the trade-offs, and you know how to build either version from scratch if you ever
need to.

8.8 The STL Solution: std::queue - Your Go-To for FIFO


After all our hard work building queues from the ground up with arrays and linked lists, it's time to meet
the C++ Standard Template Library's offering: std::queue. You'll find it in the <queue> header, and for
most practical purposes, it's the queue you'll want to use.

8.8.1 The Adaptor Philosophy: Not a Container, but a Controller
Here's a neat design insight: std::queue isn't a brand-new, standalone container type like std::vector or
std::list. Instead, it's what's known as a container adaptor.

Think of it like this: std::queue provides the face of a queue – the familiar push (for enqueue), pop (for
dequeue), front, back, empty, and size operations. But behind that face, it uses an existing C++ sequence
container to actually store the elements. It "adapts" that underlying container, restricting its interface to
only expose the FIFO behavior we expect from a queue. You can't, for example, directly access an
element in the middle of a std::queue using an index, even if the underlying container (like std::vector or
std::deque) would allow it. The adaptor enforces the queue discipline.

8.8.2 The Workhorse Beneath: The Underlying Container


1. std::deque<T> - The Default Champion:

By default, if you just write std::queue<int> myQueue;, std::queue will use a
std::deque<T> (a double-ended queue, which we'll meet properly very soon!) to do its
heavy lifting. Why deque? Because std::deque is a master at adding elements to its back
(push_back()) and removing them from its front (pop_front()) very efficiently – typically
O(1) amortized or guaranteed. These map perfectly to a queue's enqueue and dequeue.

o myQueue.push(item);  --> Effectively calls internal_deque.push_back(item);
o myQueue.pop();       --> Effectively calls internal_deque.pop_front();
o myQueue.front();     --> Effectively calls internal_deque.front();
o myQueue.back();      --> Effectively calls internal_deque.back(); (to see the last item enqueued)
2. std::list<T> - Another Option:

You can also tell std::queue to use a std::list<T> (a doubly-linked list) as its foundation if
you prefer. std::list also offers O(1) push_back, pop_front, front, and back.

C++ code
#include <queue>
#include <list>
#include <deque> // Default, included for completeness
#include <iostream>
#include <string>

int main_std_queue_example() { // Renamed for clarity


// Queue using the default std::deque<std::string>
std::queue<std::string> q1;

// Queue explicitly using std::list<std::string>


std::queue<std::string, std::list<std::string>> q2;

std::cout << "q1 initially empty? " << std::boolalpha << [Link]() << std::endl;

[Link]("First Job"); // Enqueue


[Link]("Second Job");
[Link]("Alpha Task");
[Link]("Third Job");

std::cout << "q1 size: " << [Link]() << std::endl; // Should be 3

// Let's see who's at the front and back


if (![Link]()) {
std::cout << "q1 front: \"" << [Link]() << "\"" << std::endl; // "First
Job"
std::cout << "q1 back: \"" << [Link]() << "\"" << std::endl; // "Third
Job"
// [Link]() = "Urgent Job"; // We can modify the front element if needed
}

std::cout << "Processing q1..." << std::endl;


while (![Link]()) { // Always check before pop or front!
std::cout << " Serving: " << [Link]() << std::endl;
[Link](); // Dequeue
23 DSA in Modern C++: From the Ground Up To Real-World Applications

std::cout << "q1 empty after processing? " << std::boolalpha << [Link]() <<
std::endl;
return 0;
}

Expected output:
q1 initially empty? true
q1 size: 3
q1 front: "First Job"
q1 back: "Third Job"
Processing q1...
Serving: First Job
Serving: Second Job
Serving: Third Job
q1 empty after processing? true

8.9 Full C++ Example Application


Alright, we've spent this entire chapter talking theory, drawing diagrams, and looking at code snippets.
Theory is great, but at the end of the day, we're programmers. We need to see the code.

This final section is where we put it all together. I'm going to give you three complete, runnable C++
programs that demonstrate everything we've learned. The first two will let you see the dramatic
performance difference between our "naive" array queue and the much smarter circular array queue.
The final example will show how we can use a linked list to build a more complex, real-world system.

My advice? Don't just read this code. Type it out. Compile it. Run it. Play with it. Break it. Fix it. That's
how you'll truly make these concepts your own.

8.9.1 Array Implementations in Action


First up, let's look at our two array-based queues side-by-side. One of them is a cautionary tale, a perfect
example of an anti-pattern that looks simple but hides a disastrous performance flaw. The other is its
clever, efficient successor.

Here it is: the ShiftyQueue. This is our naive implementation, the one that uses a std::vector in the most
straightforward way possible. As you'll recall, its enqueue operation is fast, but its dequeue is a
performance nightmare because it has to shift every single element to the left.

The main function includes a performance test. I want you to run it and feel the pause when it starts
dequeuing. That pause is the feeling of an O(N) algorithm struggling under a heavy load. It's a lesson you
won't forget.

1. C++ Code of Shifty Queue


// =============================================================
// The Naive "Shifty" Array Queue - An Anti-Pattern
//
// This is our first attempt at an array-based queue. I'm calling
// it the 'ShiftyQueue' because, as you'll see, it spends a
// ridiculous amount of time just shuffling data around.
//
// This is a great example of something that *works* but has
// terrible performance. We're building this to understand *why*
// we need a smarter solution like a circular array.
//
// Written by: Eyadere Addis
//
// Date: 2021-8-4
// ==============================================================

#include <iostream>
#include <vector>
#include <chrono> // We need this to time our operations and prove a point.
#include <stdexcept> // For throwing errors when someone misuses our queue.
#include <limits>    // For std::numeric_limits, used when clearing bad input.

class ShiftyQueue {
private:
// We're backing our queue with a std::vector. It's simple, and it
// handles all the memory allocation for us.
std::vector<int> data_;

public:
// The default constructor is fine. It just creates an empty vector.
ShiftyQueue() = default;

// To add an item, we just push it to the back of the vector.


// This is fast! std::vector is optimized for this. Amortized O(1).
void enqueue(int value) {
data_.push_back(value);
}

//
// And here it is. The villain of our story.
//
// To remove from the front, we have to erase the element at index 0.
// The std::vector then has to shift *every other element* one spot
// to the left to fill the gap. This is a classic O(N) operation.
// Painfully slow for a large queue.
//

int dequeue() {
if (isEmpty()) {
// Can't serve a customer if the line is empty.
throw std::runtime_error("Queue underflow! Tried to dequeue from an empty
queue.");
}
int front_value = data_.front(); // Grab the value before we erase it.
data_.erase(data_.begin()); // The expensive shift happens here.
return front_value;
}

// Just take a peek at the front element without removing it.


int front() const {
if (isEmpty()) {
throw std::runtime_error("Queue underflow! Tried to peek at an empty
queue.");
}
return data_.front();
}

// Simple utility functions.


bool isEmpty() const {
return data_.empty();
}

size_t size() const {


return data_.size();
}

// A little helper to see what's actually in our queue.


void display() const {
if (isEmpty()) {
std::cout << "Queue is empty" << std::endl;
return;
}

std::cout << "\nShifty Queue Contents:" << std::endl;


std::cout << "Index\tValue" << std::endl;
std::cout << "-----\t-----" << std::endl;

for (size_t i = 0; i < data_.size(); i++) {


std::cout << "[" << i << "]\t" << data_[i];
if (i == 0) std::cout << "\t<--- front";
if (i == data_.size() - 1) std::cout << "\t<--- rear";
std::cout << std::endl;
}
}
};

// This is where we prove our point. Let's time this thing and see
// just how bad that O(N) dequeue really is.
void demonstrateShiftyQueuePerformance() {
std::cout << "\n==================================================";
std::cout << "\n= PERFORMANCE TEST: The Shifty Queue Dilemma =";
std::cout << "\n==================================================\n";

ShiftyQueue shiftyQueue;
// Let's use a big number so the difference is obvious.
const int num_elements = 50000;

std::cout << "Step 1: Enqueuing " << num_elements << " items. This should be
fast...\n";
auto start_enqueue = std::chrono::high_resolution_clock::now();
for (int i = 0; i < num_elements; ++i) {
shiftyQueue.enqueue(i);
}
auto end_enqueue = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> enqueue_duration = end_enqueue -
start_enqueue;

std::cout << " -> Time taken for " << num_elements << " enqueues: "
<< enqueue_duration.count() << " ms\n";

std::cout << "\nStep 2: Dequeuing all " << num_elements << " items. Go grab a
coffee...\n";
auto start_dequeue = std::chrono::high_resolution_clock::now();
while (!shiftyQueue.isEmpty()) {
shiftyQueue.dequeue();
}
auto end_dequeue = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> dequeue_duration = end_dequeue -
start_dequeue;

std::cout << " -> Time taken for " << num_elements << " dequeues: "
<< dequeue_duration.count() << " ms\n";

std::cout << "\nTHE VERDICT: Notice how dequeuing is dramatically slower.\n";


std::cout << "That's the O(N) shift in action! For every single dequeue, we had
to\n";
std::cout << "shuffle the entire rest of the array. This is why the naive
approach fails.\n";
std::cout << "==================================================\n";
}

// The main driver for the interactive demo remains the same,
// as it's just a simple UI to let the user play around.

int main() {
ShiftyQueue queue;
int choice, value;

do {
std::cout << "\n--- Interactive Shifty Queue Menu ---\n";
std::cout << "1. Enqueue\n";
std::cout << "2. Dequeue\n";
std::cout << "3. View front\n";
std::cout << "4. Display queue\n";
std::cout << "5. Run performance test\n";
std::cout << "0. Exit\n";
std::cout << "Your choice: ";
std::cin >> choice;

// Basic input validation


if (std::cin.fail()) {
std::cin.clear();
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
choice = -1; // Invalid choice
}

try {
switch (choice) {
case 1:
std::cout << "Enter value to enqueue: ";
std::cin >> value;
queue.enqueue(value);
std::cout << "Enqueued " << value << ".\n";
break;
case 2:
std::cout << "Dequeued " << [Link]() << ".\n";
break;
case 3:
std::cout << "Front element is " << [Link]() << ".\n";
break;
case 4:
queue.display();
break;
case 5:
demonstrateShiftyQueuePerformance();
break;
case 0:
std::cout << "Exiting. Thanks for playing!\n";
break;
default:
std::cout << "Invalid choice. Please try again.\n";
}

} catch (const std::exception& e) {


std::cerr << "!!! ERROR: " << [Link]() << " !!!\n";
}
} while (choice != 0);

return 0;
}

Expected Output (will vary, but shows the delay):

--- Interactive Shifty Queue Menu ---
1. Enqueue
2. Dequeue
3. View front
4. Display queue
5. Run performance test
0. Exit
Your choice: 1
Enter value to enqueue: 4
Enqueued 4.

(The same menu is printed before every prompt; it is omitted below.)

Your choice: 1
Enter value to enqueue: 8
Enqueued 8.

Your choice: 1
Enter value to enqueue: 2
Enqueued 2.

Your choice: 4

Shifty Queue Contents:
Index   Value
-----   -----
[0]     4       <--- front
[1]     8
[2]     2       <--- rear

Your choice: 2
Dequeued 4.

Your choice: 4

Shifty Queue Contents:
Index   Value
-----   -----
[0]     8       <--- front
[1]     2       <--- rear

Your choice: 3
Front element is 8.

Your choice: 5

==================================================
= PERFORMANCE TEST: The Shifty Queue Dilemma =
==================================================
Step 1: Enqueuing 50000 items. This should be fast...
 -> Time taken for 50000 enqueues: 1.502 ms

Step 2: Dequeuing all 50000 items. Go grab a coffee...
 -> Time taken for 50000 dequeues: 205.66 ms

THE VERDICT: Notice how dequeuing is dramatically slower.
That's the O(N) shift in action! For every single dequeue, we had to
shuffle the entire rest of the array. This is why the naive approach fails.
==================================================

Your choice: 0
Exiting. Thanks for playing!

--------------------------------
Process exited after 46.43 seconds with return value 0
Press any key to continue . . .

Now, let's use the CircularArrayQueue we designed. This example will show that both enqueue and
dequeue remain fast, regardless of the number of items.

2. C++ Code of CircularArrayQueue


Now for the hero of our story: the CircularArrayQueue. This is the right way to build an array-based
queue. By treating the array like a circle and letting our front and rear indices chase each other around
it, we completely eliminate the need for that costly shifting.

Notice how the dequeue operation is now just a simple index modification—an O(1) operation. When
you run the performance test in this program (which you'll find is identical to the one for the
ShiftyQueue), the difference will be staggering. This is what good algorithm design looks like in practice.

C++ Code
// =============================================================
// A Modern Circular Array Queue in C++


//
// Notice:
// - No raw pointers, no `new[]`, no `delete[]`,
// - No manual destructor, and no need to delete the copy operations.
// - The `std::vector` underneath to let the standard
// library handle the messy memory stuff for us (RAII for the win!).
// - A `rear()` method to peek at the last element.
// - A clever use of `std::optional` and `std::reference_wrapper`
// for safe, modern C++ getter methods.
// - A really slick `display()` method to visualize the wrap-around.
//
// Written by: Eyadere Addis
//
// Date: 2021-8-14
// ==============================================================

#include <iostream>
#include <vector>
#include <stdexcept>
#include <optional> // For safe return values that might be empty
#include <functional> // For std::reference_wrapper, our tool for optional references
#include <iomanip> // For pretty printing in display()
#include <limits> // For robust input clearing

template <typename T>


class CircularQueue {
private:
// The vector is our workhorse. It owns the memory.
std::vector<T> data_;

// We just keep track of where the line starts and how long it is.
size_t front_idx_ = 0;
size_t count_ = 0;

// This little helper is the heart of the "circular" logic. It translates


// our queue's logical index (0th element, 1st element, etc.) into a
// physical index in the underlying vector, handling the wrap-around.
size_t physical_index(size_t logical_index) const {
return (front_idx_ + logical_index) % data_.size();
}

public:
// A sensible constructor. Lets the user pick a starting size.
explicit CircularQueue(size_t capacity = 8) {
// Can't have a queue with zero space. That's just silly.
if (capacity == 0) {
capacity = 1;
}
data_.resize(capacity);
}

// No destructor needed. No copy/move needed. The Rule of Zero!


// The compiler-generated versions that work on our `std::vector` member
// are perfect. This is modern C++ at its best - less code, more safety.

// Add an item to the back of the queue.


void enqueue(const T& value) {
// If we've run out of room, it's time to grow.
if (count_ == data_.size()) {
std::vector<T> new_data(data_.size() * 2); // Let's double the space.

// We have to "unroll" our queue from its circular state


// into a simple, linear sequence in the new vector.
for (size_t i = 0; i < count_; ++i) {
new_data[i] = std::move(data_[physical_index(i)]);
}

// The `swap` trick is a fast, exception-safe way to replace our


// old data with the new, bigger vector.
data_.swap(new_data);
front_idx_ = 0; // The queue is now nicely lined up at the start.
}

// Figure out where the back of the line is and drop in the new value.
size_t rear_idx = physical_index(count_);
data_[rear_idx] = value;
count_++;
}

// Remove an item from the front.


std::optional<T> dequeue() {
if (isEmpty()) {
return std::nullopt; // Can't dequeue from an empty line.
}

// Grab the value before we "forget" about it.


// Using std::move can be a nice optimization for complex objects.
T value_to_return = std::move(data_[front_idx_]);

// The magic of O(1) dequeue: just move the front pointer. No shifting!
front_idx_ = (front_idx_ + 1) % data_.size();
count_--;

return value_to_return;
}

// Let's peek at the front element without removing it.


// Returning an optional reference_wrapper is the modern, safe way to do this.
std::optional<std::reference_wrapper<const T>> front() const {
if (isEmpty()) return std::nullopt;
return data_[front_idx_];
}

// And here's the non-const version in case the user wants to modify the front.
std::optional<std::reference_wrapper<T>> front() {
if (isEmpty()) return std::nullopt;
return data_[front_idx_];
}

// A new addition: let's peek at the REAR element too.


std::optional<std::reference_wrapper<const T>> rear() const {
if (isEmpty()) return std::nullopt;
// The rear is the *last actual element*, which is at logical index `count_ - 1`.
return data_[physical_index(count_ - 1)];
}

// --- Simple utility methods ---


bool isEmpty() const { return count_ == 0; }
size_t size() const { return count_; }
size_t capacity() const { return data_.size(); }

// A nice display function to help us visualize what's going on.


// This is super useful for debugging and learning!
void display() const {
std::cout << "\n--- Queue State ---\n";
if (isEmpty()) {
std::cout << "Queue is empty.\n";
std::cout << "-------------------\n";
return;
}

std::cout << "Physical Index | Value\n";


std::cout << "---------------+-------\n";

for (size_t i = 0; i < count_; ++i) {


size_t phys_idx = physical_index(i);
std::cout << std::setw(8) << phys_idx << " | " << data_[phys_idx];

// Add little markers to show where our pointers are.


if (i == 0) {
std::cout << " <-- front";
}
if (i == count_ - 1) {
std::cout << " <-- rear";
}
std::cout << "\n";
}
std::cout << "-------------------\n";
}
};

// A little helper to keep our main loop clean from input errors.
void clearInputBuffer() {
std::cin.clear();
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
}

// Our interactive test driver.


int main() {
int capacity;
std::cout << "Let's build a Circular Queue. What initial capacity would you like?
";
std::cin >> capacity;
clearInputBuffer();

CircularQueue<int> queue(capacity);
std::cout << "Queue created with capacity " << [Link]() << ".\n";

int choice;
do {
std::cout << "\n=== What's next? ===\n";
std::cout << "1. Enqueue (Add to back)\n";
std::cout << "2. Dequeue (Remove from front)\n";
std::cout << "3. Peek at Front\n";
std::cout << "4. Peek at Rear\n";
std::cout << "5. Display Contents\n";
std::cout << "6. Get Stats (Size/Capacity)\n";
std::cout << "7. Exit\n";
std::cout << "Your choice: ";

std::cin >> choice;


// If the user types something that's not a number...
if (std::cin.fail()) {
clearInputBuffer();
choice = -1; // Force the default case
} else {
clearInputBuffer(); // Good practice even for valid numbers
}

switch (choice) {
case 1: {
int value;
std::cout << "Enter value to enqueue: ";
std::cin >> value;
clearInputBuffer();

queue.enqueue(value);
std::cout << "Enqueued " << value << ".\n";
queue.display();
break;
}
case 2: {
auto result = queue.dequeue();
if (result) {
std::cout << "Dequeued value: " << *result << "\n";
} else {
std::cout << "Queue is empty. Nothing to dequeue.\n";
}
break;
}
case 3: {
auto front_ref = queue.front();
if (front_ref) {
// .get() unwraps the reference_wrapper
std::cout << "Front element is: " << front_ref->get() << "\n";
} else {
std::cout << "Queue is empty.\n";
}
break;
}
case 4: {
auto rear_ref = queue.rear();
if (rear_ref) {
std::cout << "Rear element is: " << rear_ref->get() << "\n";
} else {
std::cout << "Queue is empty.\n";
}
break;
}
case 5: {
queue.display();
break;
}
case 6: {
std::cout << "\n--- Queue Stats ---\n";
std::cout << " Size: " << [Link]() << "\n";
std::cout << " Capacity: " << [Link]() << "\n";
35 DSA in Modern C++: From the Ground Up To Real-World Applications

std::cout << " Is Empty: " << ([Link]() ? "Yes" : "No") <<
"\n";
std::cout << "-------------------\n";
break;
}
case 7: {
std::cout << "Exiting. Happy queueing!\n";
break;
}
default: {
std::cout << "Oops! Invalid choice. Please pick a number from the
menu.\n";
break;
}
}
} while (choice != 7);

return 0;
}

Expected Output (abridged - the menu reprinted before each prompt is omitted, and exact values will vary):

Let's build a Circular Queue. What initial capacity would you like? 3
Queue created with capacity 3.

=== What's next? ===
1. Enqueue (Add to back)
2. Dequeue (Remove from front)
3. Peek at Front
4. Peek at Rear
5. Display Contents
6. Get Stats (Size/Capacity)
7. Exit
Your choice: 1
Enter value to enqueue: 4
Enqueued 4.

--- Queue State ---
Physical Index | Value
---------------+-------
0 | 4 <-- front <-- rear
-------------------

Your choice: 1
Enter value to enqueue: 8
Enqueued 8.

--- Queue State ---
Physical Index | Value
---------------+-------
0 | 4 <-- front
1 | 8 <-- rear
-------------------

Your choice: 1
Enter value to enqueue: 2
Enqueued 2.

--- Queue State ---
Physical Index | Value
---------------+-------
0 | 4 <-- front
1 | 8
2 | 2 <-- rear
-------------------

Your choice: 2
Dequeued value: 4

Your choice: 5

--- Queue State ---
Physical Index | Value
---------------+-------
1 | 8 <-- front
2 | 2 <-- rear
-------------------

Your choice: 1
Enter value to enqueue: 5
Enqueued 5.

--- Queue State ---
Physical Index | Value
---------------+-------
1 | 8 <-- front
2 | 2
0 | 5 <-- rear
-------------------

Your choice: 1
Enter value to enqueue: 9
Enqueued 9.

Your choice: 6

--- Queue Stats ---
Size: 4
Capacity: 6
Is Empty: No
-------------------

Your choice: 7
Exiting. Happy queueing!

8.9.2 A Real-World Example of a Queue: Hospital Patient Management System - Using Linked Lists

Okay, we've seen the raw mechanics. Now let's build something more interesting. This final example uses
a linked list to model a hospital emergency room's patient waiting list.

But there's a twist. An ER isn't a simple "first-come, first-served" line. It's a Priority Queue. A patient with
a critical injury (high priority) needs to be seen before someone with a minor sprain (low priority), even
if the sprain-patient arrived an hour earlier.

This code demonstrates how we can use a sorted linked list to implement this priority system. Instead
of just adding new patients to the tail, our admit_patient function will intelligently find the correct spot
in the list based on the patient's severity. The patient at the head of our list will always be the one who
needs help most urgently.

This example shows how the fundamental queue concept can be adapted to solve more complex, real-
world problems. Pay close attention to how the insertion logic differs from a standard FIFO queue.
C++ Code:

// =============================================================
// Real-World Example: An Emergency Room Priority Queue
//
// This isn't just a simple "first-come, first-served" queue. An ER
// is a perfect example of a PRIORITY QUEUE. A patient with a
// critical injury (high priority) has to be seen before someone
// with a minor issue, even if they arrived later.
//
// We'll build this using a sorted linked list, where the "head" of
// the list is always the highest-priority patient. And to do it
// the modern, safe C++ way, we'll use std::unique_ptr to manage
// our patient nodes automatically. No manual `delete`!
//
// Written by: Eyadere Addis
//
// Date: 2021-8-24
// ==============================================================

#include <iostream>
#include <string>
#include <ctime>
#include <iomanip>
#include <memory> // The star of the show: for std::unique_ptr
#include <utility> // For std::move
#include <limits> // For std::numeric_limits, used to clear bad input in main()

class Patient {
public:
// I'm making the data public here for simplicity in this example.
// In a bigger system, you'd have private members and getters.
std::string name;
std::string condition;
int severity; // 1=critical, 5=minor
time_t arrival_time;

// The link to the next patient in our list.


// The unique_ptr means each node OWNS the rest of the list.
std::unique_ptr<Patient> next;

Patient(std::string n, std::string cond, int sev)
: name(std::move(n)),
condition(std::move(cond)),
severity(sev),
next(nullptr) {
arrival_time = time(nullptr);
}

void display() const {
// strftime is a bit old-school, but it works fine for a simple time format.
char time_buf[100];
strftime(time_buf, sizeof(time_buf), "%H:%M:%S",
std::localtime(&arrival_time));

std::cout << "Patient: " << std::setw(15) << std::left << name
<< "| Severity: " << severity << "/5"
<< " | Condition: " << std::setw(20) << std::left << condition
<< " | Arrived: " << time_buf << "\n";
}
};

class EmergencyRoom {
private:
// 'head' is a unique_ptr that owns the first patient in the list.
// When 'head' is reset, it triggers a cascading deletion of the whole list.
std::unique_ptr<Patient> head;
int doctors_available;

public:
EmergencyRoom(int num_doctors)
: head(nullptr), doctors_available(num_doctors) {}

// --- Look ma, no destructor! ---


// Because 'head' is a unique_ptr, when the EmergencyRoom object is
// destroyed, 'head' is automatically destroyed, which deletes the first
// patient, which destroys its 'next' unique_ptr, and so on. RAII at its finest.

// A patient comes in. We need to insert them into our sorted list
// based on their priority.
void admit_patient(std::string name, std::string condition, int severity) {
auto new_patient = std::make_unique<Patient>(name, condition, severity);
std::cout << new_patient->name << " has been admitted to the ER.\n";

// --- Priority Insertion Logic ---


// Case 1: The waiting list is empty, or this new patient is the
// highest priority we've ever seen. They go straight to the front.
if (!head || new_patient->severity < head->severity) {
new_patient->next = std::move(head);
head = std::move(new_patient);
return;
}

// Case 2: We need to find the right spot in the line.


// We'll walk down the list until we find a patient with a
// lower priority than our new patient.
Patient* current = head.get();

while (current->next && current->next->severity <= new_patient->severity) {
current = current->next.get();
}

// Now, we insert our new patient right after 'current'.


new_patient->next = std::move(current->next);
current->next = std::move(new_patient);
}

// A doctor is free. Let's treat the highest-priority patient.


void treat_next_patient() {
if (!head) {
std::cout << "No patients waiting. The doctors can take a break!\n";
return;
}
if (doctors_available <= 0) {
std::cout << "All doctors are currently busy. Please wait.\n";
return;
}

doctors_available--;

// The patient at the 'head' is always the highest priority.


// We take ownership of the patient from the list.
std::unique_ptr<Patient> patient_to_treat = std::move(head);
// The list's head now automatically points to the next patient.
head = std::move(patient_to_treat->next);

std::cout << "\n=== Doctor is treating a patient ===\n";


patient_to_treat->display();
std::cout << "Treatment in progress... (Simulating work)\n";
// In a real system, this would happen asynchronously.

std::cout << "Treatment for " << patient_to_treat->name << " is complete.\n";
// The 'patient_to_treat' unique_ptr goes out of scope here, and the
// patient's memory is automatically freed. No manual 'delete'!

doctors_available++;
std::cout << "A doctor is now available.\n";
}

// Just show me who's waiting, in order of priority.


void display_waiting_list() const {
std::cout << "\n--- Current ER Waiting List (Highest Priority First) ---\n";
if (!head) {
std::cout << "The waiting room is empty.\n";
return;
}
Patient* current = head.get();


int position = 1;
while (current) {
std::cout << std::setw(3) << std::left << std::to_string(position) + ".";
current->display();
current = current->next.get();
position++;
}
std::cout << "--------------------------------------------------------\n";
}
};

// --- The interactive main loop remains conceptually similar ---


int main() {
EmergencyRoom er(2); // We have 2 doctors on shift.
int choice;

while (true) {
std::cout << "\n=== Hospital ER Management System ===\n";
std::cout << "1. Admit New Patient\n";
std::cout << "2. Treat Next Patient\n";
std::cout << "3. View Waiting List\n";
std::cout << "0. Exit System\n";
std::cout << "Your choice: ";
std::cin >> choice;

if (std::cin.fail()) {
std::cin.clear();
choice = -1; // Force invalid choice
}
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // Consume the rest of the line

switch (choice) {
case 1: {
std::string name, condition;
int severity;
std::cout << "Enter patient name: ";
std::getline(std::cin, name);
std::cout << "Enter condition: ";
std::getline(std::cin, condition);
std::cout << "Enter severity (1-5, 1 is most critical): ";
std::cin >> severity;
er.admit_patient(name, condition, severity);
break;
}

case 2:
er.treat_next_patient();
break;
case 3:
er.display_waiting_list();
break;
case 0:
std::cout << "Shutting down the system. Goodbye!\n";
return 0;
default:
std::cout << "Invalid choice. Please try again.\n";
}
}
}

Expected output
=== Hospital ER Management System ===
1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 1
Enter patient name: John Smith
Enter condition: Chest pain
Enter severity (1-5, 1 is most critical): 2
John Smith has been admitted to the ER.

=== Hospital ER Management System ===


1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 1
Enter patient name: Maria Garcia
Enter condition: Broken arm
Enter severity (1-5, 1 is most critical): 3
Maria Garcia has been admitted to the ER.

=== Hospital ER Management System ===


1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 1
Enter patient name: Robert Johnson

Enter condition: Heart attack


Enter severity (1-5, 1 is most critical): 1
Robert Johnson has been admitted to the ER.

=== Hospital ER Management System ===


1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 3

--- Current ER Waiting List (Highest Priority First) ---


1. Robert Johnson | Severity: 1/5 | Condition: Heart attack | Arrived: HH:MM:SS
2. John Smith | Severity: 2/5 | Condition: Chest pain | Arrived: HH:MM:SS
3. Maria Garcia | Severity: 3/5 | Condition: Broken arm | Arrived: HH:MM:SS
--------------------------------------------------------

=== Hospital ER Management System ===


1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 2

=== Doctor is treating a patient ===


Patient: Robert Johnson | Severity: 1/5 | Condition: Heart attack | Arrived: HH:MM:SS
Treatment in progress... (Simulating work)
Treatment for Robert Johnson is complete.
A doctor is now available.

=== Hospital ER Management System ===


1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 3

--- Current ER Waiting List (Highest Priority First) ---


1. John Smith | Severity: 2/5 | Condition: Chest pain | Arrived: HH:MM:SS
2. Maria Garcia | Severity: 3/5 | Condition: Broken arm | Arrived: HH:MM:SS
--------------------------------------------------------

=== Hospital ER Management System ===


1. Admit New Patient
2. Treat Next Patient
3. View Waiting List

0. Exit System
Your choice: 1
Enter patient name: Sarah Wilson
Enter condition: Severe bleeding
Enter severity (1-5, 1 is most critical): 1
Sarah Wilson has been admitted to the ER.

=== Hospital ER Management System ===


1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 3

--- Current ER Waiting List (Highest Priority First) ---


1. Sarah Wilson | Severity: 1/5 | Condition: Severe bleeding | Arrived: HH:MM:SS
2. John Smith | Severity: 2/5 | Condition: Chest pain | Arrived: HH:MM:SS
3. Maria Garcia | Severity: 3/5 | Condition: Broken arm | Arrived: HH:MM:SS
--------------------------------------------------------

=== Hospital ER Management System ===


1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 2

=== Doctor is treating a patient ===


Patient: Sarah Wilson | Severity: 1/5 | Condition: Severe bleeding | Arrived: HH:MM:SS
Treatment in progress... (Simulating work)
Treatment for Sarah Wilson is complete.
A doctor is now available.

=== Hospital ER Management System ===


1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 0
Shutting down the system. Goodbye!

8.10 Beyond the Basics: Advanced Queue Variants


Alright, we've absolutely mastered the standard FIFO queue. We know how to build it with a linked list,
and we know the right way to build it with a circular array. For 90% of the problems you'll face that
require a simple, fair, first-in-first-out line, these implementations are all you'll ever need.

But the world of computer science is a fascinating place, and clever people have taken the basic queue
concept and adapted it to solve more specialized problems. Think of these as the "special ops" versions
of our standard queue. You won't need them every day, but when you do, they're incredibly powerful.

Let's take a quick tour of two of the most important variants: the Double-Ended Queue (Deque) and the
Priority Queue.

8.10.1 The Double-Ended Queue (Deque): The Line-Cutter


Imagine our standard, polite queue. You can only join at the back and leave from the front. Now, imagine
a more chaotic line where people can join at either end and leave from either end. That's a Deque
(pronounced "deck," like a deck of cards).

A Deque is a hybrid. It's like a queue and a stack mashed together. It's a sequence of elements that
supports efficient addition and removal of elements from both the front and the back.

The Deque ADT - Four Key Operations:

push_front(value): Adds an element to the very beginning.

push_back(value): Adds an element to the very end (just like enqueue).

pop_front(): Removes the element from the very beginning (just like dequeue).

pop_back(): Removes the element from the very end (like a stack's pop).

Diagram of Deque Operations

Figure 8.25: Deque Operations: Diagram showing a sequence of elements like [10, 20, 30]. Shows four different
operations: push_front adds a 5 to the beginning. push_back adds a 40 to the end. pop_front removes the 5.
pop_back removes the 40.

How would you build this?


This is where our CircularArrayQueue starts to look really smart. A circular array is a perfect foundation
for a deque!

push_back (enqueue) works exactly the same: you add an element at the rear_idx and move the
index forward.

push_front is the new trick: you move the front_idx backwards (wrapping around if necessary)
and place the new element there.

pop_front (dequeue) and pop_back are just the reverse of these operations.

This is exactly how C++'s std::deque works. It's typically implemented as a series of linked blocks of arrays
(a "deque of blocks"), which allows it to grow efficiently in either direction without the massive single-
block reallocations of a std::vector. When you use std::queue, by default, it's just a simple wrapper
around a std::deque!
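
To make those four operations concrete, here is a tiny sketch using the STL's std::deque directly. The values mirror Figure 8.25; everything else (the variable name, the printout) is just illustration, not code from this chapter.

C++ code (sketch):
#include <deque>
#include <iostream>

int main() {
    std::deque<int> dq = {10, 20, 30};

    dq.push_front(5);   // joins at the front  -> [5, 10, 20, 30]
    dq.push_back(40);   // joins at the back   -> [5, 10, 20, 30, 40] (just like enqueue)

    dq.pop_front();     // removes 5           -> [10, 20, 30, 40]
    dq.pop_back();      // removes 40          -> [10, 20, 30]

    for (int x : dq) std::cout << x << ' ';    // prints: 10 20 30
    std::cout << '\n';
}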

When would you use a Deque?

Sliding Window Problems: A very common algorithm pattern where you need to maintain a
"window" of the last k elements seen in a stream. As a new element arrives, you push_back the
new one and pop_front the oldest one to keep the window size constant (see the sketch after
this list).

Implementing an "Undo" with a "Redo" limit: A text editor's undo system is a stack. But what if
you only want to remember the last 100 actions? You could use a deque. As you perform a new
action, you push_front it. If the deque's size exceeds 100, you pop_back the oldest undo action.
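
Here is a minimal sketch of that sliding-window idea: keep only the last k readings from a stream by pushing the newest value onto the back and popping the oldest off the front. The function name, the window size, and the sample readings are all made up for illustration. Exactly the same pattern drives the 100-action undo history in the second bullet, just with push_front/pop_back instead.

C++ code (sketch):
#include <deque>
#include <iostream>

// Keep only the most recent 'k' readings; older ones fall off the front.
void add_reading(std::deque<int>& window, std::size_t k, int value) {
    window.push_back(value);      // the newest reading joins at the back
    if (window.size() > k) {
        window.pop_front();       // the oldest reading leaves from the front
    }
}

int main() {
    std::deque<int> window;
    const std::size_t k = 3;

    for (int reading : {4, 8, 15, 16, 23, 42}) {
        add_reading(window, k, reading);
        std::cout << "window now holds:";
        for (int v : window) std::cout << ' ' << v;
        std::cout << '\n';
    }
}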

8.10.2 The Priority Queue: The VIP Line


This is the one we already built a preview of in our hospital example. A Priority Queue is a queue that
breaks the sacred FIFO rule in the name of importance.

In a priority queue, every element has a priority associated with it. The dequeue operation doesn't return
the oldest element; it returns the element with the highest priority. If two elements have the same
priority, then (and only then) does it typically fall back to FIFO order.

The Priority Queue ADT:

The interface looks almost the same as a regular queue, but the behavior is totally different.

insert(value, priority) (or push): Adds an element with its associated priority.

extract_highest_priority() (or pop): Removes and returns the element with the highest priority.

peek_highest_priority() (or top): Looks at the highest priority element without removing it.

How would you build this?

There are a few ways, each with different performance trade-offs:

1. An Unsorted Array/List: insert is a fast O(1) (just add it to the end). But
extract_highest_priority is a disaster. You have to scan the entire list (O(N)) every single time
just to find the highest priority element. This is almost always a bad idea.
2. A Sorted Array/List: This is what we did in our hospital example. insert becomes slow (O(N))
because you have to find the correct sorted position to place the new element. But
extract_highest_priority is a beautiful O(1), because the most important item is always right
at the front!
3. The Professional's Choice: A Heap. This is the gold standard. A Heap is a special kind of
binary tree (which we'll cover in detail in Chapter 14) that uses some clever tricks to keep
the highest-priority element at the root. It gives us the best of both worlds: both insert and
extract_highest_priority are incredibly efficient O(log N) operations. This is how C++'s
std::priority_queue is implemented.

Table of Priority Queue Implementations

Implementation      | insert   | extract_highest_priority | When to Use?
--------------------|----------|--------------------------|--------------------------------------------------------------------------
Unsorted Array/List | O(1)     | O(N)                     | Almost never. Only if you do vastly more insertions than extractions.
Sorted Array/List   | O(N)     | O(1)                     | Good for a small number of items or when you peek far more than you insert.
Heap (The Best)     | O(log N) | O(log N)                 | This is the default, go-to implementation for any serious use.

Table 8.5 Priority Queue Implementations: A table comparing the three implementations.
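
As a quick taste of the heap-backed option, here is a small sketch using the STL's std::priority_queue. By default it serves the largest value first; passing std::greater flips it into a min-heap, which matches the "lower number = higher priority" convention our ER example uses. The values are arbitrary.

C++ code (sketch):
#include <queue>
#include <vector>
#include <functional>
#include <iostream>

int main() {
    // Max-heap by default: the largest value is always on top.
    std::priority_queue<int> max_pq;
    for (int v : {3, 5, 1, 4, 0}) max_pq.push(v);               // O(log N) per push
    std::cout << "Largest value first: " << max_pq.top() << '\n';  // prints 5

    // Min-heap: std::greater makes the smallest value the "highest priority".
    std::priority_queue<int, std::vector<int>, std::greater<int>> min_pq;
    for (int v : {3, 5, 1, 4, 0}) min_pq.push(v);
    while (!min_pq.empty()) {
        std::cout << min_pq.top() << ' ';                       // prints: 0 1 3 4 5
        min_pq.pop();                                           // O(log N) per pop
    }
    std::cout << '\n';
}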

When do you use a Priority Queue?

ER Systems: Like our example, triaging patients.

Operating System Schedulers: The OS needs to decide which of the hundreds of waiting
processes gets to use the CPU next. A high-priority system process should run before your low-
priority file download.

Graph Algorithms: Famous algorithms like Dijkstra's (for finding the shortest path) and Prim's (for
finding minimum spanning trees) use a priority queue to efficiently decide which vertex to
explore next.

So, while our standard FIFO queue is a powerful and fundamental tool, remember that it's just the
beginning. By tweaking the rules of entry and exit, we can create specialized variants like the Deque and
Priority Queue that are tailor-made for a whole new class of problems.

8.11 Part 2 Summary: The Linked List, The STL, and The VIP Line
Alright, we've reached the end of our deep dive into the Queue. In this second half of the chapter, we
moved from the abstract to the concrete, building a robust, modern queue and exploring the wider world
of queue-like data structures.

Let's recap the key takeaways from this journey:

The Modern LinkedQueue: We started by rolling up our sleeves and building a queue from
scratch using a linked list and modern C++. The big win here was using std::unique_ptr to
manage our nodes. This gave us automatic, safe memory management (RAII) and saved us from
the old-school headaches of manual new and delete. By keeping track of both a head and a tail
pointer, we built a queue with a guaranteed, rock-solid O(1) performance for both enqueue and
dequeue.

The STL Way (std::queue): We then learned that, most of the time, we don't need to build our
own queue. The C++ Standard Template Library gives us std::queue, a container adaptor that
does the job perfectly. We discovered that it's not a container itself, but a wrapper that enforces
the strict FIFO rules on top of another underlying container, which by default is the powerful
std::deque. For 99% of your day-to-day work, std::queue is the right tool for the job.

Real-World Application: We put our linked list knowledge to the test by building a Hospital
Emergency Room system. This wasn't just any queue; it was a Priority Queue, where patients
were ordered by the severity of their condition, not just their arrival time. This powerful example
showed how the fundamental linked list structure can be adapted to solve more complex, real-
world problems.

Expanding Our Horizons (Advanced Queues): Finally, we took a peek beyond the standard FIFO
line and met two important relatives:

o The Deque (Double-Ended Queue): The ultimate flexible line, allowing O(1) additions
and removals from both the front and the back.
o The Priority Queue: The "VIP line" that always serves the most important item first. We
learned that while a sorted list works, the truly efficient implementation (which we'll
see later) is a Heap, giving us O(log N) performance.

You now have a complete picture. You know the FIFO principle, you know the trade-offs between array
and linked list implementations, you know how to build a modern LinkedQueue, and you know how and
when to use the STL's std::queue. Most importantly, you understand that the simple concept of a "line"
is the foundation for a whole family of powerful tools.

Summary Table: Queue Implementations & Variants

Implementation        | enqueue / push_back | dequeue / pop_front | push_front / pop_back | Key Pro/Con
----------------------|---------------------|---------------------|-----------------------|---------------------------------------------------------------
Linked List Queue     | O(1) (guaranteed)   | O(1) (guaranteed)   | N/A                   | Pro: Consistent O(1) time, no resizing hiccups. Con: Memory overhead per node for pointers.
std::queue (default)  | O(1) (amortized)    | O(1) (amortized)    | N/A                   | The Standard Choice. Uses std::deque underneath, providing a great balance of performance and cache-friendliness.
Deque                 | O(1) (amortized)    | O(1) (amortized)    | O(1) (amortized)      | Pro: Extremely flexible. Con: Slightly more complex than a standard queue.
Priority Queue (Heap) | O(log N)            | O(log N)            | N/A                   | Specialized: Not FIFO. Always serves the highest priority element. The go-to for scheduling and many graph algorithms.
Table 8.6: Summary Table of Part 2

8.12 Practice Problems: Part 2


Time to test your knowledge of circular arrays and the advanced queue variants. Sketching these out is
highly recommended!
Exercises on Circular Arrays and Advanced Queues
1. (Circular Array Trace): You have an empty CircularArrayQueue with an initial capacity of 4. The
front_idx starts at 0 and the rear_idx starts at 0. Trace the exact values of front_idx, rear_idx,
and count_ after this sequence of operations:
o enqueue(10)
o enqueue(20)
o enqueue(30)
o dequeue()
o enqueue(40)
o enqueue(50) (This should trigger a resize!)
2. (Deque Operations): Start with an empty deque. What is the final state of the deque after the
following sequence of operations?
o push_back(10)
o push_front(5)
o push_back(20)
o push_front(1)
o pop_back()
o pop_front()
3. (Priority Queue Logic): You are managing tasks for a CPU. A lower number indicates a higher
priority. You have an empty priority queue. In what order will the tasks be executed (dequeued)
if they are enqueued in the following order?
o insert("Update UI", priority=3)
o insert("Run Virus Scan", priority=5)
o insert("Respond to Mouse Click", priority=1)
o insert("Play Music", priority=4)
o insert("Kernel Panic Alert", priority=0)

Solutions to Part 2 Problems


1. (Circular Array Trace):
o Initial: capacity=4, front_idx=0, rear_idx=0, count=0
o enqueue(10): data[0]=10, rear_idx=1, count=1
o enqueue(20): data[1]=20, rear_idx=2, count=2
o enqueue(30): data[2]=30, rear_idx=3, count=3
o dequeue(): (returns 10) front_idx=1, count=2
o enqueue(40): data[3]=40, rear_idx=0 (wraps around!), count=3
o enqueue(50): count == capacity (3+1=4). Resize to capacity 8!
 New data array is created.
 Elements 20, 30, 40 are unrolled into the new array at indices 0, 1, 2.
 front_idx is reset to 0. rear_idx is set to count (which is 3).
 Now, enqueue(50) is performed on the new array.
 Final State: capacity=8, front_idx=0, rear_idx=4, count=4. The array contains
[20, 30, 40, 50, ?, ?, ?, ?].
2. (Deque Operations):
o Initial: []
o push_back(10): [10]
o push_front(5): [5, 10]
o push_back(20): [5, 10, 20]
o push_front(1): [1, 5, 10, 20]
o pop_back(): (removes 20) [1, 5, 10]
o pop_front(): (removes 1) [5, 10]
o Final State: [5, 10]
3. (Priority Queue Logic):
The priority queue always dequeues the item with the lowest priority number.
o After all insertions, the conceptual "line" looks like this (priority order):
1. Kernel Panic Alert (0)
2. Respond to Mouse Click (1)
3. Update UI (3)
4. Play Music (4)
5. Run Virus Scan (5)
o Execution Order: Kernel Panic Alert, Respond to Mouse Click, Update UI, Play Music,
Run Virus Scan.

8.13 Chapter Summary: Mastering the Line


And with that, our deep dive into the Queue is complete. We've seen it all, from the abstract "rule of
fairness" to the nitty-gritty details of high-performance implementations.

We started by defining the FIFO (First-In, First-Out) principle and the core ADT operations that enforce
it: enqueue, dequeue, and front. We contrasted this with the LIFO logic of a stack to solidify our
understanding.

Then we rolled up our sleeves and started building. We saw two primary paths:

1. The Linked List Queue: A natural and elegant solution that gives us guaranteed O(1)
performance for our core operations, at the cost of some memory overhead for pointers.
2. The Array-Based Queue: We first fell into the naive "shifty" queue trap, learning a valuable
lesson about its disastrous O(N) dequeue performance. This motivated us to find a better way,
which led us to the ingenious Circular Array. By letting our indices wrap around, we achieved an
efficient, cache-friendly, amortized O(1) implementation.

Finally, we expanded our view to include the more specialized Deque (the stack/queue hybrid) and the
Priority Queue (the VIP line), understanding that the basic queue is a foundation for a whole family of
powerful tools.

You now have a complete and robust understanding of how queues work, why they're important, and
how to build them the right way.

Summary Table: The Full Queue Family


Here is the final, comprehensive summary table for the entire chapter.

Data Structure | Core Principle             | Key Operations                | Best Implementation        | Big O (Core Ops) | Primary Use Case
---------------|----------------------------|-------------------------------|----------------------------|------------------|-------------------------------------------
Queue          | FIFO (First-In, First-Out) | enqueue, dequeue, front       | Linked List/Circular Array | O(1)             | Orderly processing, scheduling, BFS
Stack          | LIFO (Last-In, First-Out)  | push, pop, top                | Linked List/Dynamic Array  | O(1)             | Reversing order, call stacks, backtracking
Deque          | FIFO + LIFO                | push/pop_front, push/pop_back | Circular Array (Blocks)    | O(1)             | Sliding windows, flexible buffers
Priority Queue | Highest Priority Out       | insert, extract_highest       | Heap                       | O(log N)         | Task scheduling, pathfinding (Dijkstra's)

Table 8.7: Summary Table of Chapter 8

8.14 Chapter 8 Practice Problems - Answers & Explanations

This section provides insights, key ideas, and, where appropriate, pseudocode or C++ snippets for the
practice problems presented at the end of Chapter 8. Remember, the goal is to understand the process,
not just to get the answer!

Conceptual Understanding & Tracing:


1. Visual Trace (CircularArrayQueue):

o For each step, the answer should clearly list the expected values of data_ (array
contents), front_idx_, rear_idx_, and count_.
o Highlight when rear_idx_ wraps around (e.g., after enqueue(60) if initial capacity was 5
and it became full then resized).
o The answer for dequeue() on an empty queue should state that an std::underflow_error
(or similar) would be thrown.
o The answer for isEmpty() should be true after all elements are dequeued.
2. tail_ Pointer Importance (Linked List Queue):
o Why Crucial for O(1) enqueue: Without a tail_ pointer, to add an element to the end,
you would have to traverse the entire list from head_ to find the last node. This
traversal takes O(N) time. With a tail_ pointer, you directly access the last node, set its
next to the new node, and update tail_ to the new node, all in O(1) time.
o enqueue with only head_: You'd start at head_. If head_ is null, the new node becomes
head_. Otherwise, you loop while (current->next != nullptr) current = current->next;.
Then current->next = newNode;. This loop is O(N).
3. Edge Cases (LinkedQueue):
o enqueue on Empty: head_ and tail_ are both nullptr, count_ is 0. New node is created.
Both head_ and tail_ are set to point to this newNode. count_ becomes 1.
o dequeue on Single Element Queue: head_ and tail_ point to the same (only) node,
count_ is 1. nodeToServeAndRemove points to this node. head_ becomes head_->next
(which is nullptr). count_ becomes 0. The if (head_ == nullptr) condition becomes true,
so tail_ is also set to nullptr. The original node is deleted.
4. Memory Management (LinkedQueue):
o No delete in Destructor: A memory leak would occur. All ListNode objects allocated
with new would remain in memory but would be unreachable after the LinkedQueue
object itself is destroyed. The program would consume more and more memory over
time if many queues were created and destroyed.
o current = current->next; before delete: This is vital. If you delete current; first, then
current->next would be accessing freed memory (a dangling pointer), leading to
undefined behavior (likely a crash) when trying to advance to the next node. You must
get the address of the next node before deleting the current one.
5. const Correctness (LinkedQueue::front()):
o const T& front() const: The first const (on const T&) means the returned reference is to
a constant T, so the caller cannot modify the front element through this reference. The
second const (after the parentheses) means this member function itself does not modify
any member variables of the LinkedQueue object (it's a "read-only" operation on the
queue).
o T& front(): This non-const version returns a (non-const) reference, allowing the caller to
modify the data of the front element directly. This version cannot be called on a const
LinkedQueue object.
o Benefit: const correctness allows the compiler to enforce that read-only operations
don't accidentally change data. It also enables front() to be called on const LinkedQueue
objects or references, which is important in many C++ contexts (e.g., passing queues by
const reference to functions).

Implementation & Modification Exercises:


1. getRear() Method (CircularArrayQueue):

o Logic: If empty, throw. Otherwise, the actual last element is at index (rear_idx_ - 1 +
capacity_) % capacity_.
o Time Complexity: O(1).
// Conceptual implementation for CircularArrayQueue
const T& rear() const {
    if (isEmpty()) {
        throw std::runtime_error("Queue is empty, no rear element.");
    }
    // rear_idx_ points to the NEXT empty slot.
    // So, the actual last element is one position "before" rear_idx_ (circularly).
    size_t last_element_idx = (rear_idx_ == 0) ? (capacity_ - 1) : (rear_idx_ - 1);
    // More robustly:
    // size_t last_element_idx = (rear_idx_ + capacity_ - 1) % capacity_;
    return data_[last_element_idx];
}

2. clear() Method (LinkedQueue):


o The logic would be identical to the destructor: iterate through the list, deleting each
node, then set head_, tail_ to nullptr and count_ to 0.
3. Return Value for LinkedQueue::dequeue():
o Change signature to T dequeue(). Store head_->data in a temp T variable before
deleting the node. Return the temp variable.
4. Move Semantics (LinkedQueue with raw pointers):
o Move Constructor: LinkedQueue(LinkedQueue&& other) noexcept :
head_(other.head_), tail_(other.tail_), count_(other.count_) { other.head_ = nullptr;
other.tail_ = nullptr; other.count_ = 0; }
o Move Assignment: Check for self-assignment. clear() current object's resources. Steal
pointers and count from other. Null out other's members.
o Benefit: Avoids expensive deep copies when transferring ownership of a queue (e.g.,
returning from a function).
5. QueueIterator (Advanced - Conceptual):
o begin() would return an iterator pointing to the head_ node.
o end() would return an iterator conceptually pointing "past" the tail_ (often represented
by a nullptr node pointer in the iterator).
o Iterator needs: operator*() to dereference (get data), operator++() to move to node->next,
operator!=() (and operator==()) to compare with end().

Application & Problem-Solving Exercises:


1. Josephus Problem Simulation:
o The solution involves enqueuing numbers 1 to n. In a loop (while queue size > 1), k-1
elements are dequeued and re-enqueued. The k-th element is dequeued (eliminated).
The last one remaining is the winner.
2. Print Binary Numbers:

o The hint is the core algorithm. The queue stores strings. Start with "1". Dequeue "1",
print it, enqueue "10", "11". Dequeue "10", print it, enqueue "100", "101", etc.
3. Reverse First K Elements of a Queue:
o Dequeue the first k elements from the queue and push them onto a temporary stack.
o Dequeue the remaining N-k elements from the queue and store them in another
temporary queue (or just re-enqueue them at the back of the original queue if it's
empty now).
o Enqueue elements from the stack back into the original queue (they will now be in
reversed order at the front).
o Enqueue elements from the temporary queue (the N-k elements) back into the original
queue.

Answers & Explanations for 8.14 Practice Problems


Alright, you've tackled the problems. Let's see how you did. Remember, the goal here isn't just about
getting the right answer; it's about understanding why it's the right answer. Let's break down the logic
for each problem.

Conceptual Understanding & Tracing:


1. Visual Trace (CircularArrayQueue):
This was a test of your ability to juggle the four key variables: the array itself, front_idx, rear_idx, and
count. Here’s how it should have played out, step by step.
 Initial State:
o capacity=5, data_=[?, ?, ?, ?, ?]
o front_idx=0, rear_idx=0, count=0
 enqueue(10): data_=[10, ?, ?, ?, ?], rear_idx=1, count=1
 enqueue(20): data_=[10, 20, ?, ?, ?], rear_idx=2, count=2
 dequeue(): (returns 10) front_idx moves. data_ is unchanged.
o front_idx=1, rear_idx=2, count=1
 enqueue(30): data_=[10, 20, 30, ?, ?], rear_idx=3, count=2
 enqueue(40): data_=[10, 20, 30, 40, ?], rear_idx=4, count=3
 enqueue(50): data_=[10, 20, 30, 40, 50], rear_idx=0 (wraps around!), count=4
 enqueue(60): RESIZE! count would become 5, which equals capacity.
o A new array of capacity=10 is created.
o The queue is "unrolled": 20 (from index 1), 30, 40, 50 are copied into the new array at
indices 0, 1, 2, 3.
o front_idx is reset to 0, rear_idx is reset to count (which is 4).
o Now the enqueue(60) happens.
o Final State:
 capacity=10, data_=[20, 30, 40, 50, 60, ?, ?, ?, ?, ?]
 front_idx=0, rear_idx=5, count=5
2. The Importance of the tail Pointer:
 Why is it crucial for O(1) enqueue?
Simple: without a tail pointer, you don't know where the end of the line is. To add a new person
(node), you'd have to start at the head and walk down the entire line, asking every single
person, "Are you the last one?" until you find the end. That walk is an O(N) operation. It's
terribly inefficient.
The tail pointer is like a bouncer who stands at the very end of the line. When a new person
arrives, you just tell the bouncer, and they can add the person in O(1) time. It's the difference
between a quick tap on the shoulder and a full-on expedition.
3. Edge Cases (LinkedQueue):
 enqueue on an Empty Queue: This is the seed that starts the whole list. If you don't handle it,
nothing works. When the queue is empty, head and tail are both nullptr. The new node you
create needs to become both the head and the tail. If you only set head, tail remains nullptr, and
the next enqueue will crash.
 dequeue on a Single-Element Queue: This is the opposite problem. When you remove the last
element, head correctly becomes nullptr (because head->next was null). But what about tail? It's
still pointing to the memory of the node you just deleted! This is a classic dangling pointer. If
you don't also set tail to nullptr, the next enqueue will try to access that freed memory and your
program will explode. Emptying the queue means both head and tail must be nullified.
4. Memory Management (delete):
 What if you forget delete in the destructor?
You get a memory leak, plain and simple. Every ListNode you created with new lives on the
heap. If the LinkedQueue object is destroyed but you don't walk through the list and delete
every node, that memory is lost forever. It's like the queue disbands, but every person in line
just stands there, occupying space, for the rest of the program's life. Do this enough times, and
your program runs out of memory.
 Why current = current->next; before delete?
Think of it as grabbing onto the next person's hand before you let go of the current person. If
you delete current; first, you've just blown up your only link to the rest of the list. current is now
a dangling pointer. Trying to access current->next after that is like asking a ghost for directions—
it's undefined behavior and will almost certainly crash your program.
5. const Correctness (Why have two front() methods?):
This is a key concept in writing good, safe C++.
 const T& front() const;: The second const is a promise: "I swear this function will not change the
queue in any way." This allows you to call .front() on a const queue object. The first const is the
result of that promise: "Since I'm not changing anything, I'm only going to give you a const
reference back, so you can't change the data either."
 T& front();: This is the non-const version. It doesn't make that promise. It says, "You can call me,
and I'll give you a regular reference back, which you can use to modify the data at the front of
the queue." But the compiler will only let you call this on a non-const queue object.
 Benefit: It's all about safety and flexibility. It lets the compiler be your bodyguard, preventing
you from accidentally modifying a queue that was supposed to be read-only.

Implementation & Modification Exercises:


1. getRear() for CircularArrayQueue:
The trick here is remembering that rear_idx points to the next empty slot. The actual last element is the
one before it.
C++ code:
// In your CircularArrayQueue class...
const T& rear() const {
if (isEmpty()) {
throw std::runtime_error("Can't get rear of an empty queue.");
}
// The most robust way to go back one step in a circular array
// is to add (capacity - 1) and then take the modulus.
// This avoids issues with negative numbers if rear_idx_ is 0.
size_t last_element_idx = (rear_idx_ + capacity_ - 1) % capacity_;
return data_[last_element_idx];
}

This is a clean, O(1) operation.


2. clear() for LinkedQueue:
The logic is exactly the same as the destructor. You need a loop that safely deletes each node.
C++ code:
// In your LinkedQueue class...
void clear() {
while (head_ != nullptr) {
dequeue(); // The easiest way is to just leverage your existing dequeue logic!
}
// Or manually:
// while (head_ != nullptr) {
// Node* temp = head_;
// head_ = head_->next;
// delete temp;
// }
// tail_ = nullptr; // Don't forget this!
// count_ = 0;
}

3. dequeue() returning a value:


You just need to add a temporary variable to hold the data before you delete the node.
C++ code:
// In your LinkedQueue class...
T dequeue() {
if (isEmpty()) {
throw std::runtime_error("Queue underflow!");
}
Node* node_to_delete = head_;
T value_to_return = node_to_delete->data; // <-- Grab the data first!

head_ = head_->next;
delete node_to_delete;
count_--;

if (head_ == nullptr) {
tail_ = nullptr;
}

return value_to_return; // <-- Return the saved data.
}

4. Move Semantics (for raw pointer LinkedQueue):


This is about efficiently transferring ownership without making a slow, deep copy.
 Move Constructor: You're essentially saying, "The new queue I'm creating will just steal all the
pointers and the count from the old one."
C++ code:
LinkedQueue(LinkedQueue&& other) noexcept
: head_(other.head_), tail_(other.tail_), count_(other.count_) {
// Now, gut the old queue so its destructor doesn't delete our nodes!
other.head_ = nullptr;
other.tail_ = nullptr;
other.count_ = 0;
}

 Move Assignment: This is similar, but you have to clean up your own resources first.
C++ code:
LinkedQueue& operator=(LinkedQueue&& other) noexcept {
if (this != &other) { // Protect against self-assignment (q = std::move(q))
this->clear(); // Delete all of our own nodes.

// Steal the pointers and count from the other queue.
head_ = other.head_;
tail_ = other.tail_;
count_ = other.count_;

// Gut the other queue.
other.head_ = nullptr;
other.tail_ = nullptr;
other.count_ = 0;
}
return *this;
}

5. Queue Iterator (Conceptual):


This is an advanced topic, but the core idea is to create a small helper class.
 The iterator class would hold a Node* current_node;.
 operator*() would just return current_node->data;.
 operator++() (prefix) would do current_node = current_node->next; and return *this;.
 operator!=() would compare the current_node pointers of two iterators.
 begin() would return an iterator initialized with head_.
 end() would return an iterator initialized with nullptr.
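
Putting those bullets together, here is a rough sketch of what such an iterator could look like. It assumes a simple raw-pointer Node with public data and next members; the names are mine for illustration, not the book's LinkedQueue internals.

C++ code (sketch):
template <typename T>
struct Node {
    T data;
    Node* next;
};

template <typename T>
class QueueIterator {
public:
    explicit QueueIterator(Node<T>* node) : current_(node) {}

    T& operator*() const { return current_->data; }   // look at the current node's data
    QueueIterator& operator++() {                      // advance to the next node
        current_ = current_->next;
        return *this;
    }
    bool operator==(const QueueIterator& other) const { return current_ == other.current_; }
    bool operator!=(const QueueIterator& other) const { return !(*this == other); }

private:
    Node<T>* current_;
};

// Inside the queue class, begin()/end() would then be one-liners:
//   QueueIterator<T> begin() { return QueueIterator<T>(head_); }
//   QueueIterator<T> end()   { return QueueIterator<T>(nullptr); }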

Application & Problem-Solving Exercises:


1. Josephus Problem:
A queue is perfect for simulating this. You enqueue all the people (numbers 1 to N). Then you start a big
loop: while (queue.size() > 1). Inside, you have a smaller loop that runs k-1 times, doing
queue.enqueue(queue.dequeue()). This moves the first k-1 people to the back of the line. After that inner
loop, you just call queue.dequeue() one more time to eliminate the k-th person. The last number left in
the queue is the survivor.
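
Here is a compact sketch of that simulation using std::queue; the n and k values in main are arbitrary, just to show the call.

C++ code (sketch):
#include <queue>
#include <iostream>

// Returns the survivor's number for n people, eliminating every k-th person.
int josephus(int n, int k) {
    std::queue<int> q;
    for (int i = 1; i <= n; ++i) q.push(i);

    while (q.size() > 1) {
        for (int i = 0; i < k - 1; ++i) {   // move k-1 people to the back of the line
            q.push(q.front());
            q.pop();
        }
        q.pop();                            // the k-th person is eliminated
    }
    return q.front();                       // the last one standing
}

int main() {
    std::cout << "Survivor: " << josephus(7, 3) << '\n';   // prints 4 for n=7, k=3
}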
2. Print Binary Numbers:
The queue here acts as a "to-do" list of numbers to process.
1. Start with a queue of strings: queue.enqueue("1").
2. Loop n times:
a. current = queue.dequeue().
b. Print current.
c. queue.enqueue(current + "0").
d. queue.enqueue(current + "1").
The FIFO nature of the queue guarantees that you will generate and print the numbers in the
correct sequence, level by level.
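
And here is a short sketch of that exact algorithm with a std::queue<std::string>; the value of n is arbitrary.

C++ code (sketch):
#include <queue>
#include <string>
#include <iostream>

// Prints the binary representations of 1..n in order: 1, 10, 11, 100, ...
void print_binary_numbers(int n) {
    std::queue<std::string> q;
    q.push("1");
    for (int i = 0; i < n; ++i) {
        std::string current = q.front();   // the next number, in arrival order
        q.pop();
        std::cout << current << '\n';
        q.push(current + "0");             // its two "children" join the back of the line
        q.push(current + "1");
    }
}

int main() {
    print_binary_numbers(6);               // prints: 1, 10, 11, 100, 101, 110
}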
3. Reverse First K Elements:
This is a great problem for showing how stacks and queues are opposites.
1. Create an empty std::stack.
2. Loop k times: stack.push(queue.dequeue()). This pulls the first k elements off the queue and
puts them on the stack, reversing their order.
3. Now, loop k times: queue.enqueue(stack.top()); stack.pop(). This takes the reversed elements
from the stack and puts them back onto the end of the queue.
4. Finally, to get the remaining N-k elements back to the end, loop queue.size() - k times:
queue.enqueue(queue.dequeue()). This just cycles the back part of the queue around to the
end.
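
A compact sketch of those four steps, written with std::queue and std::stack (so it uses the STL's front()/pop()/push() names rather than our enqueue/dequeue):

C++ code (sketch):
#include <queue>
#include <stack>
#include <iostream>

// Reverses the order of the first k elements of q, leaving the rest untouched.
void reverse_first_k(std::queue<int>& q, std::size_t k) {
    if (k > q.size()) return;                         // nothing sensible to do

    std::stack<int> s;
    for (std::size_t i = 0; i < k; ++i) {             // steps 1-2: first k elements onto a stack
        s.push(q.front());
        q.pop();
    }
    while (!s.empty()) {                              // step 3: stack back into the queue (reversed)
        q.push(s.top());
        s.pop();
    }
    for (std::size_t i = 0; i < q.size() - k; ++i) {  // step 4: rotate the untouched tail to the back
        q.push(q.front());
        q.pop();
    }
}

int main() {
    std::queue<int> q;
    for (int v : {1, 2, 3, 4, 5}) q.push(v);
    reverse_first_k(q, 3);
    while (!q.empty()) { std::cout << q.front() << ' '; q.pop(); }  // prints: 3 2 1 4 5
    std::cout << '\n';
}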

8.15 Coding Interview Questions: Queues in Action


Queues, with their First-In, First-Out (FIFO) nature, are a cornerstone of many real-world systems and a
frequent topic in coding interviews. Questions involving queues often test your ability to manage ordered
processing, simulate level-by-level exploration, or even use queues in non-obvious ways to implement
other data structures.

In this section, we'll dissect common interview problems that can be solved elegantly with queues. For
each problem, we'll follow our standard approach: understanding the problem, brainstorming solutions,
implementing the code, analyzing its complexity, and considering follow-up questions.

Question 1: Implement Stack using Two Queues


Problem Statement:
Implement a LIFO (Last-In, First-Out) stack using only two standard FIFO (First-In, First-Out) queues. The
stack you implement should support the following operations:
 push(x): Pushes element x onto the top of the stack.
 pop(): Removes the element from the top of the stack.
 top(): Returns the element at the top of the stack without removing it.
 empty(): Returns true if the stack is empty, false otherwise.

You must use only standard queue operations on your two queues: push (to back), pop (from front),
front (peek), empty, and size.
Clarification Questions (What a Candidate Should Ask):
 Underlying Container: "Can I use the C++ STL's std::queue for this implementation?"
o Interviewer's Likely Response: "Yes, std::queue is perfectly fine."
 Empty Stack Behavior: "What should pop() or top() do if the stack is empty? Should they throw
an exception, or can I assume valid calls will always be made?"
o Interviewer's Likely Response: "Good question. Let's say it should throw a standard
exception like std::runtime_error."
 Performance Expectations: "Are there any specific time complexity goals for the operations?
For example, is it okay if one operation is slower than the others?"
o Interviewer's Likely Response: "That's the key trade-off to consider. Discuss the
complexities of your chosen approach."
These questions show you're thinking about the API contract, error handling, and performance trade-
offs, which are all hallmarks of a good engineer.

Alright, let's talk about one of my favorite interview brain-teasers: making a stack out of queues. This
problem is great because it seems impossible at first. How can you get LIFO (Last-In, First-Out) behavior
out of a structure that's built from the ground up to be FIFO (First-In, First-Out)?

The key is that they give you two queues. One queue is our main line, and the other is our temporary
holding area, a tool we can use to shuffle people around and reverse their order.

The Brainstorm: How Do We Flip the Line?


So, our goal is to make push add to the "top" and pop remove from the "top". With queues, our only
tools are adding to the back and removing from the front. The "top" of our stack is the element that was
added most recently. In a queue, that element is always sitting at the very back of the line, where we
can't touch it.

This leads us to two possible strategies, and it all comes down to a classic trade-off: when do you want
to pay the price? Do you do the hard work when you push, or do you wait and do it when you pop?
Strategy 1: Pay Upfront (Expensive Push)
The idea here is to make sure our "top" element is always at the front of q1. This makes pop and top
super fast O(1) operations.
 To push(x):
1. Add the new element x to the empty helper queue, q2.
2. Now, painstakingly move every single element from q1 over to q2, one by one.
3. Finally, swap q1 and q2. Now q1 holds [x, old_element1, old_element2, ...].
 The Cost: This is an O(N) push. Every single time you add an element, you have to shuffle the
entire contents of the stack. Ouch.
Strategy 2: Pay on Demand (Expensive Pop) - My Preferred Method
This is usually the better approach. Let's make push as fast as possible and only do the hard work when
someone actually asks to pop or see the top.
To push(x): Just q1.push(x). Done. It's a beautiful O(1).
 To pop():

1. The element we want is at the back of q1. To get to it, we have to move everyone else
out of the way.
2. We dequeue N-1 elements from q1 and enqueue them into q2.
3. The single, lonely element left in q1 is our "top". We just dequeue it and throw it away.
4. Now, q2 has all the remaining elements in the correct order. So, we swap q1 and q2,
and q2 becomes our new primary queue.
 The Cost: This is an O(N) pop. We're paying the price here instead of during the push.

In most real-world scenarios, you tend to push to a stack more often than you pop from it, so making the
push fast is usually the right call. Let's code up that second strategy.

The C++ Solution (Expensive Pop)


#include <queue>
#include <stdexcept>
#include <utility> // Gotta have this for std::swap

template <typename T>
class StackFromQueues {
private:
std::queue<T> q1; // This is our main queue where we add everything.
std::queue<T> q2; // This is our temporary helper for shuffling.

public:
StackFromQueues() {}

// push is the easy part. Just add to the back of our main line.
// Time Complexity: O(1)
void push(const T& value) {
q1.push(value);
}

// pop is where the work happens.


// Time Complexity: O(N)
T pop() {
if (empty()) {
throw std::runtime_error("Stack's empty, pal. Can't pop.");
}

// We need to move everyone except the last person from q1 to q2.
while (q1.size() > 1) {
q2.push(q1.front());
q1.pop();
}

// The last one left in q1 is our "top" element.
// Let's grab it so we can return it.
T top_value = q1.front();
q1.pop(); // And now, officially remove it.

// q2 is now the main queue with all the remaining elements.


// A quick swap makes it official.
std::swap(q1, q2);

return top_value;
}

// top() is almost identical to pop(), but with one crucial difference:
// we have to put the top element back!
// Time Complexity: O(N)
T& top() {
if (empty()) {
throw std::runtime_error("Stack's empty, pal. Nothing to see here.");
}

// Same as pop: move N-1 elements over.
while (q1.size() > 1) {
q2.push(q1.front());
q1.pop();
}

// This is our top element.
T& top_element = q1.front();

// CRITICAL STEP for top(): Don't discard the top element.
// Move it to q2 as well so it's preserved.
q2.push(top_element);
q1.pop(); // q1 is now empty.

// Swap 'em back.


std::swap(q1, q2);

// A little trick: the element we want a reference to is now
// conveniently at the *back* of our newly swapped q1.
return q1.back();
}

// The utility functions are easy, they just operate on our primary queue.
bool empty() const {
return q1.empty();
}
};

Complexity Analysis (for Costly Pop/Top Approach):

 Time Complexity:

o push(x): O(1). This operation is a single enqueue onto q1, which is a constant-time
operation for std::queue.
o pop(): O(N). To remove the last element, we must perform N-1 dequeue/enqueue
operations to transfer elements to the auxiliary queue. The total work is proportional to
the number of elements in the stack.
o top(): O(N). Similar to pop(), this operation also requires moving N-1 elements to
identify the top element. It then has to move the last element as well to preserve state,
making it O(N).
o empty() and size(): O(1). These just query the state of the primary queue (q1).
 Space Complexity:
o O(N). All N elements currently in the stack are stored across the two queues, so the space required is proportional to the number of elements held at any given time.
Variations and Follow-up Questions:
1. Implement the other strategy.
o Problem: Now, implement the stack where the push operation is O(N) and pop/top are
O(1).
o Hint: Follow the logic from our brainstorming session. This is a great way to test a
candidate's ability to analyze and implement different trade-offs.
2. Using a Single Queue.
o Problem: Can you implement a stack using only one queue?
o Hint: Yes. For push(x), first enqueue x. Then, repeat this "rotation" size() - 1 times: enqueue the front element and dequeue it (q.push(q.front()); q.pop();, where q is the single queue). This brings the newly added x to the front of the queue. Now, pop and top can operate on the front in O(1). This makes push O(N). A minimal sketch follows below.
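As a companion to that hint, here is a minimal sketch of the single-queue variant. The name OneQueueStack and its members are assumptions made for this illustration, not part of the exercise's official solution.

#include <cstddef>
#include <queue>
#include <stdexcept>

// Illustrative sketch only: a stack built on a single std::queue via rotation.
template <typename T>
class OneQueueStack {
private:
    std::queue<T> q;

public:
    // O(N): enqueue x, then rotate the older elements behind it.
    void push(const T& x) {
        q.push(x);
        for (std::size_t i = 0; i + 1 < q.size(); ++i) {
            q.push(q.front());
            q.pop();
        }
    }

    // O(1): after the rotation, the stack's top sits at the front of the queue.
    T pop() {
        if (q.empty()) {
            throw std::runtime_error("Stack is empty.");
        }
        T value = q.front();
        q.pop();
        return value;
    }

    // O(1): peek at the top without removing it.
    const T& top() const {
        if (q.empty()) {
            throw std::runtime_error("Stack is empty.");
        }
        return q.front();
    }

    bool empty() const { return q.empty(); }
};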

Question 2: First Non-Repeating Character in a Stream


 Problem: Design a data structure that, given a stream of characters, can return the first non-
repeating character seen so far at any point.
 Approach: Use a std::queue<char> to maintain the order of characters that are currently non-
repeating. Use a frequency map (like std::unordered_map<char, int>) to count character
occurrences.
o add(char c): Increment count in map. If count is 1, enqueue(c).
o getFirstNonRepeating(): Look at the front() of the queue. While the queue is not empty and the count of the character at front() is > 1, dequeue(). After the loop, if the queue is not empty, its front() is the answer.
 Complexity: add is O(1). getFirstNonRepeating is O(1) amortized, because each character is
enqueued and dequeued at most once.

Solution for Question 2: First Non-Repeating Character in a Stream

This is a great problem. It's not just about storing data; it's about tracking a changing state over time.
The stream is the key word here—we get characters one by one and have to have an answer ready at
any moment.

The "Aha!" Moment: We have two competing needs. We need to know how many times we've seen a
character (a job for a hash map), and we need to know the order in which the unique characters first
appeared (a job for a queue). We need to use both!

The Strategy:

1. We'll use a hash map (like std::unordered_map<char, int>) to keep a running tally of every
character we've ever seen. This is our frequency counter.
2. We'll use a std::queue<char> as our "line" of potential candidates. We only add a character to
this queue the first time we see it. This queue remembers the original arrival order of all the
characters that might be the first non-repeater.
3. When someone asks for the getFirstNonRepeating() character, we look at the front of our
candidate queue. Is the character at the front still a non-repeater? We check our hash map. If its
count is 1, great! That's our answer. If its count is greater than 1, it's been disqualified. We
dequeue it and look at the next person in line, repeating this until we find a valid candidate or
the queue is empty.

The C++ Code:


#include <iostream>
#include <optional>
#include <string>
#include <queue>
#include <unordered_map>

class FirstNonRepeating {
private:
    // This map is our memory. It tracks how many times we've seen every character.
    std::unordered_map<char, int> frequency_counts;

    // This queue is our "candidate line". It preserves the arrival order
    // of characters that have, at some point, appeared only once.
    std::queue<char> candidates;

public:
    // This is called for every character that flows in from the stream.
    void add(char c) {
        // First, update our frequency count for this character.
        frequency_counts[c]++;

        // If this is the VERY FIRST time we're seeing this character...
        if (frequency_counts[c] == 1) {
            // ...then it's a potential candidate. Add it to the back of the line.
            candidates.push(c);
        }
        std::cout << "Added '" << c << "'. Stream is now processed.\n";
    }

    // This can be called at any time to get the current answer.
    std::optional<char> getFirstNonRepeating() {
        // The core of the logic is here. We need to purge any disqualified
        // candidates from the front of our queue.
        while (!candidates.empty()) {
            // Let's peek at the person at the front of the line.
            char front_char = candidates.front();

            // Now, check our memory. Is their count still 1?
            if (frequency_counts[front_char] > 1) {
                // Nope! They've appeared again. They're disqualified.
                // Kick them out of the line and check the next person.
                candidates.pop();
            } else {
                // Yes! Their count is still 1. They are the first non-repeating
                // character we've seen so far. This is our answer.
                return front_char;
            }
        }

        // If we get here, it means we've emptied the entire candidate queue.
        // There are no non-repeating characters in the stream so far.
        return std::nullopt;
    }
};

int main() {
    FirstNonRepeating stream_processor;

    stream_processor.add('a'); // Stream: a
    if (auto c = stream_processor.getFirstNonRepeating())
        std::cout << " -> First non-repeating: " << *c << "\n";

    stream_processor.add('b'); // Stream: ab
    if (auto c = stream_processor.getFirstNonRepeating())
        std::cout << " -> First non-repeating: " << *c << "\n";

    stream_processor.add('a'); // Stream: aba
    if (auto c = stream_processor.getFirstNonRepeating())
        std::cout << " -> First non-repeating: " << *c << "\n";

    stream_processor.add('c'); // Stream: abac
    if (auto c = stream_processor.getFirstNonRepeating())
        std::cout << " -> First non-repeating: " << *c << "\n";

    stream_processor.add('b'); // Stream: abacb
    if (auto c = stream_processor.getFirstNonRepeating())
        std::cout << " -> First non-repeating: " << *c << "\n";

    stream_processor.add('d'); // Stream: abacbd
    if (auto c = stream_processor.getFirstNonRepeating())
        std::cout << " -> First non-repeating: " << *c << "\n";
}

Complexity: add is a beautiful O(1) (hash map access is average O(1)). getFirstNonRepeating is amortized
O(1). Why amortized? Because while the while loop could run multiple times, each character is only ever
pushed onto the queue once and popped from the queue once over the entire lifetime of the stream. Its
cost is spread out.

Question 3: Sliding Window Maximum


 Problem: Given an array of numbers and a window size k, find the maximum value in each
sliding window of size k.
 Approach: This is a classic problem for a Deque (Double-Ended Queue). Maintain a deque of
indices of elements. The deque should be kept in decreasing order of the elements' values they
point to.
o For each element arr[i]:
1. Remove indices from the back of the deque that point to values smaller than
arr[i].
2. Push i to the back.
3. Remove indices from the front that are no longer in the current window.
4. The maximum for the current window is always the element whose index sits at the front of the deque.
 Complexity: O(N) time (each index is pushed and popped at most once). O(k) space for the
deque.

Solution for Question 3: Sliding Window Maximum


This is a tough one. It's a classic for a reason. A naive solution (looping through all k elements for every
window) would be O(N*k), which is too slow. The key is to find an O(N) solution, and that requires a
Deque.

The "Aha!" Moment: We need a way to keep track of the "best candidates" for being the maximum in
any given window. We don't care about smaller numbers that come after a bigger number within the
same window. For example, if we see [... 10, 3, 5 ...], we know 3 and 5 can never be the maximum as long
as 10 is still in the window.

The Strategy:

We'll use a std::deque to store the indices of the elements in our array. This deque will be our list of
"useful" candidates, and we will maintain one crucial invariant: the values at the indices in the deque
will always be in decreasing order.
Here's how we process each element arr[i]:
1. Clean up the back: Look at the last index in the deque. Is the value at that index smaller than
our current value arr[i]? If so, it's now useless. It can never be the maximum again because our
new, bigger arr[i] has arrived. Pop it from the back of the deque. Keep doing this until the deque
is empty or the last element is greater than or equal to arr[i].
2. Add the new guy: Push the index i to the back of the deque. Now our deque is still sorted by
value.

3. Clean up the front: Look at the index at the front of the deque. Has it fallen out of our current window? (i.e., is the front index <= i - k?). If so, it's too old. Pop it from the front.
4. The answer: After all that cleanup, the index at the very front of the deque always points to the
maximum element for the current window.

The C++ Code:


#include <iostream>
#include <vector>
#include <deque>

void print_sliding_window_max(const std::vector<int>& arr, int k) {
    if (arr.empty() || k <= 0 || k > static_cast<int>(arr.size())) {
        std::cout << "Invalid input.\n";
        return;
    }

    // Our deque will store indices of the elements in 'arr'.
    // It's our list of "max-candidates" for the current window.
    std::deque<int> candidates;
    std::vector<int> results;

    std::cout << "Finding max in windows of size " << k << " for array: ";
    for (int num : arr) std::cout << num << " ";
    std::cout << "\n";

    // Process each element of the array.
    for (int i = 0; i < static_cast<int>(arr.size()); ++i) {

        // --- Step 1: Clean up the back of the deque ---
        // Remove any indices from the back whose values are smaller than our
        // current element. They are no longer useful candidates.
        while (!candidates.empty() && arr[candidates.back()] <= arr[i]) {
            candidates.pop_back();
        }

        // --- Step 2: Add the current element's index to the back ---
        candidates.push_back(i);

        // --- Step 3: Clean up the front of the deque ---
        // Remove the index at the front if it's no longer in our window of size k.
        if (candidates.front() <= i - k) {
            candidates.pop_front();
        }

        // --- Step 4: The answer for this window is at the front! ---
        // We only start recording results once we have a full window.
        if (i >= k - 1) {
            results.push_back(arr[candidates.front()]);
        }
    }

    std::cout << "Maximums: ";
    for (int max_val : results) {
        std::cout << max_val << " ";
    }
    std::cout << "\n";
}

int main() {
    print_sliding_window_max({1, 3, -1, -3, 5, 3, 6, 7}, 3);
    std::cout << "\n";
    print_sliding_window_max({9, 10, 9, -7, -4, -8, 2, -6}, 5);
}

Complexity: This looks complicated, but it's a brilliant O(N). Why? Because each index from the input
array is pushed onto the deque at most once and popped from the deque at most once. All the work
inside the loop is amortized over the N elements, giving us a linear time solution. The space complexity
is O(k) because, in the worst case, the deque holds k indices (for a strictly decreasing sequence of
numbers).

8.16 Key Terminology


Key Terminology: Queues
 Queue: An abstract data type (ADT) that stores a collection of elements and follows the First-In,
First-Out (FIFO) principle.
 FIFO (First-In, First-Out): The policy dictating that the element that has been in the queue the
longest is the first one to be removed.
 Enqueue: The operation of adding a new element to the rear (or back/tail) of the queue.
 Dequeue: The operation of removing an element from the front (or head) of the queue. Some
dequeue operations also return the removed element.
 Front (or Peek): The operation of accessing the element at the front of the queue without
removing it.
 Rear (or Back): The conceptual end of the queue where new elements are added. Can also refer
to an operation to view the last element.
 Head: Often used to refer to the front of a linked-list based queue; the first node.
 Tail: Often used to refer to the rear of a linked-list based queue; the last node.
 Underflow: An error condition that occurs when attempting to dequeue or access the front of
an empty queue.
 Overflow: An error condition that occurs when attempting to enqueue onto a queue that has
reached its maximum capacity (primarily relevant for fixed-capacity array-based queues).

 Circular Array (Circular Buffer): An array-based implementation technique where the array's
indices wrap around from the end back to the beginning, typically using the modulo operator, to
efficiently implement a queue.
 front_idx_ (or head_idx): In a circular array queue, the index pointing to the actual front
element.
 rear_idx_ (or tail_idx): In a circular array queue, the index pointing to the next available empty
slot where a new element will be enqueued.
 count_ (or size_): An explicit counter for the number of elements in a queue, particularly useful
for circular array queues to distinguish empty from full states.
 Container Adaptor (std::queue): An STL component that provides a queue interface by
wrapping an underlying sequence container (like std::deque).
 Producer-Consumer Pattern: A common concurrent programming pattern where one or more
"producers" generate data/tasks and place them into a shared queue, and one or more
"consumers" retrieve and process data/tasks from that queue, decoupling the two.

8.17 List Resources, Books, and References (for Queues & General DSA)
Further Reading and Resources for Queues & Data Structures

Mastering data structures and algorithms is an ongoing journey. Here are some highly recommended
resources if you wish to delve deeper into queues, explore other implementations, or broaden your
understanding of DSA in general.
Classic and Foundational DSA Textbooks:
 "Introduction to Algorithms" (CLRS) by Cormen, Leiserson, Rivest, and Stein:
o The definitive reference for many algorithms. Chapter 10 covers elementary data
structures including queues, often with pseudocode and rigorous analysis.
 "Algorithms" (4th Edition) by Robert Sedgewick and Kevin Wayne:
o Excellent explanations with a practical focus, often with Java examples. Section 1.3
covers Bags, Queues, and Stacks with clear implementations and applications.
 "Data Structures and Algorithm Analysis in C++" by Mark Allen Weiss:
o A popular textbook that balances theory with C++ implementations. Queues are typically
covered in chapters on linear structures.
Modern C++ and STL Focused Books:
 "The C++ Standard Library: A Tutorial and Reference" by Nicolai M. Josuttis:
o An indispensable guide to the C++ STL. Its chapters on sequence containers (std::deque)
and container adaptors (std::queue) provide detailed information on usage and nuances.
 "Effective Modern C++" by Scott Meyers:
o While not a DSA book, it's essential for writing high-quality modern C++. Understanding
items on smart pointers, move semantics, and lambdas will improve your DSA
implementations.
Online Resources & Communities:
 cppreference.com: The go-to online reference for C++ language and library features, including detailed documentation for std::queue and std::deque.

 GeeksforGeeks (geeksforgeeks.org): Contains a vast number of articles, tutorials, and practice problems on various data structures, including different queue implementations and applications.
 LeetCode (leetcode.com) / HackerRank (hackerrank.com): Excellent platforms for practicing coding problems that often involve queues (e.g., BFS, level-order traversal, simulation problems).
 TopCoder / Codeforces: For more advanced competitive programming challenges where efficient use of data structures like queues is critical.
 Stack Overflow (stackoverflow.com): A vast Q&A community. Searching for specific C++ queue implementation details or conceptual questions will often yield helpful discussions.
Specific Topics to Explore Further:
 Priority Queues: A variation where elements have priorities, and dequeue removes the highest
(or lowest) priority element. Often implemented with heaps (Chapter 14).
 Deques (Double-Ended Queues): As briefly mentioned, these allow insertion and deletion at
both ends (std::deque in STL).
 Concurrent Queues: Thread-safe queue implementations for use in multi-threaded applications
(an advanced topic).
 Circular Buffers in Embedded Systems: The principles of circular array queues are heavily used
in low-level programming and embedded systems for efficient data buffering.
This list is a starting point. The field of data structures and algorithms is vast and constantly evolving.
Continuous learning and practice are key!

Figure 8.26 End of chapter 8
