Data Structure - Chapter 8 Queue Part 2
Contents
Part 2: Linked Lists, STL, Applications & Advanced Topics ........................................................................... 2
Introduction to Part 2 ................................................................................................................................... 2
8.6 All Aboard! Building a Queue the Modern C++ Way (LinkedQueue)...................................................... 3
The Node and LinkedQueue Blueprint: .................................................................................................... 3
Dissecting the Modern LinkedQueue: .................................................................................................. 5
8.6.1 The Nitty-Gritty: Implementing the Core Operations ...................................................................... 5
Operation: enqueue(item) – Joining the Line ........................................................................... 6
Operation: dequeue() – Next Customer, Please!...................................................................... 9
Operation: front() (or peek()) – Who's Next? ......................................................................... 15
8.6.2 Exercises on "Linked Queue" ......................................................................................................... 17
Solutions for 8.6.2 Exercises ............................................................................................................... 18
8.7 Comparing Queue Implementations: The Final Showdown ................................................................. 19
Factor 1: Performance Consistency (The "Hiccup" Factor)..................................................................... 19
Factor 2: Memory Usage......................................................................................................................... 19
Factor 3: Cache Performance (The Hidden Speed Boost) ....................................................................... 20
Summary Table: Array vs. Linked List ..................................................................................................... 20
8.8 The STL Solution: std::queue - Your Go-To for FIFO ............................................................................. 21
8.8.1 The Adaptor Philosophy: Not a Container, but a Controller.......................................................... 21
8.8.2 The Workhorse Beneath: The Underlying Container ..................................................................... 21
C++ code.............................................................................................................................................. 22
8.9 Full C++ Example Application ...................................................................................................... 23
8.9.1 Array Implementations in Action ................................................................................................... 23
1. C++ Code of Shifty Queue ............................................................................................................... 24
2. C++ Code of CircularArrayQueue .................................................................................................... 29
8.9.2 A real-world example of queue – Hospital Patient Management System – Using Linked Lists ...... 37
Expected output .................................................................................................................................. 42
8.10 Beyond the Basics: Advanced Queue Variants ................................................................................... 45
8.10.1 The Double-Ended Queue (Deque): The Line-Cutter ................................................................... 45
Diagram of Deque Operations ............................................................................................................ 46
8.10.2 The Priority Queue: The VIP Line ................................................................................................. 47
8.11 Part 2 Summary: The Linked List, The STL, and The VIP Line .............................................................. 48
Summary Table: Queue Implementations & Variants ............................................................................ 49
8.12 Practice Problems: Part 2 .................................................................................................................... 50
Part 2: Linked Lists, STL, Applications & Advanced Topics
Introduction to Part 2
Alright, you've made it through the workshop. You've built a robust, high-performance
CircularArrayQueue and understand the engineering that makes it tick. You have a solid, array-based tool
in your belt.
DSA in Modern C++: From the Ground Up To Real-World Applications
Now for the fun part: let's expand our horizons. In this second half of the chapter, we'll explore a
completely different, and in many ways more elegant, way to build a queue using a linked list. We'll put
the two implementations head-to-head to understand their trade-offs.
Then, we'll bridge the gap to professional C++ development by mastering the Standard Library's
std::queue and std::deque. Finally, we'll take our new knowledge and put it to work, solving real-world
problems and tackling the classic, insightful interview questions that every software engineer should
know. This is where the abstract concept of FIFO becomes a concrete and powerful problem-solving
technique.
8.6 All Aboard! Building a Queue the Modern C++ Way (LinkedQueue)
The Node and LinkedQueue Blueprint:
This makes our code safer, cleaner, and frankly, easier to write. The Node owns its next node, and when
it's destroyed, the chain of destruction is handled for us.
public:
// --- Constructor & The Rule of ZERO ---
// Because unique_ptr handles cleanup, our destructor can be defaulted!
~LinkedQueue() = default;

void enqueue(const T& value) {
// Create the new node, and grab a non-owning pointer to it
// before std::move takes the unique_ptr away from us.
auto newNode = std::make_unique<Node>(value);
Node* newNodePtr = newNode.get();
if (isEmpty()) {
// If the line is empty, this new node is both the head and the tail.
head_ = std::move(newNode);
tail_ = head_.get();
} else {
// The current tail's 'next' pointer takes ownership of the new node.
tail_->next = std::move(newNode);
// And our tail pointer moves to this new last node.
tail_ = newNodePtr;
}
count_++;
}
void dequeue() {
if (isEmpty()) {
throw std::underflow_error("LinkedQueue::dequeue(): Queue is empty!");
}
// Detach the rest of the chain before the old head is destroyed.
std::unique_ptr<Node> oldHeadNext = std::move(head_->next);
// The old head is now gone. The new head is what was next.
head_ = std::move(oldHeadNext);
count_--;
// CRITICAL: If the queue is now empty, the tail was also the head,
// so we must null out the dangling tail pointer.
if (isEmpty()) {
tail_ = nullptr;
}
}
};
Dissecting the Modern LinkedQueue:
The tail_ Pointer: Notice that tail_ is a raw Node*. Why? Because the head_ unique_ptr and the
chain of next unique_ptrs already define the ownership of the list. The tail_ pointer is just a non-owning "shortcut" so we can get to the end in O(1) time without traversing the whole list.
std::move is Key: In enqueue and dequeue, we use std::move to explicitly transfer ownership of
the unique_ptrs. This is how smart pointers are passed around and re-linked.
This version is more robust, safer, and more idiomatic in a modern C++ codebase.
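Because the full class listing is split across several pages here, it helps to see the pieces assembled in one place. The following is a compilable sketch, not the book's verbatim listing: the Node layout and member names follow the fragments above, while details such as the exact constructor set are filled in.

```cpp
#include <cstddef>
#include <memory>
#include <stdexcept>
#include <utility>

template <typename T>
class LinkedQueue {
private:
    struct Node {
        T data;
        std::unique_ptr<Node> next; // Each node owns its successor.
        explicit Node(const T& value) : data(value) {}
    };

    std::unique_ptr<Node> head_; // Owns the entire chain.
    Node* tail_ = nullptr;       // Non-owning shortcut to the last node.
    std::size_t count_ = 0;

public:
    // Rule of Zero: unique_ptr cleans up the chain for us.
    LinkedQueue() = default;

    bool isEmpty() const { return count_ == 0; }
    std::size_t size() const { return count_; }

    void enqueue(const T& value) {
        auto newNode = std::make_unique<Node>(value);
        Node* newNodePtr = newNode.get();
        if (isEmpty()) {
            head_ = std::move(newNode);
            tail_ = head_.get();
        } else {
            tail_->next = std::move(newNode);
            tail_ = newNodePtr;
        }
        count_++;
    }

    void dequeue() {
        if (isEmpty()) {
            throw std::underflow_error("LinkedQueue::dequeue(): Queue is empty!");
        }
        head_ = std::move(head_->next); // The old head is destroyed here.
        count_--;
        if (isEmpty()) {
            tail_ = nullptr; // Never leave tail_ dangling.
        }
    }

    T& front() {
        if (isEmpty()) {
            throw std::underflow_error("LinkedQueue::front(): Queue is empty!");
        }
        return head_->data;
    }
};
```

Notice that the single line `head_ = std::move(head_->next);` both advances the head and destroys the old front node, because assigning to a unique_ptr releases whatever it previously owned.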
8.6.1 The Nitty-Gritty: Implementing the Core Operations
Now for the main event. It's time to implement the three operations that give the queue its entire personality:
1. enqueue(item): How do we let a new person join the back of the line?
2. dequeue(): How do we serve the person at the front and send them on their way?
3. front() (or peek()): How do we check who's next without disturbing the line?
This is where our head_ and tail_ pointers will do all the work. As we'll see, the logic for each of these is
a beautiful, efficient dance of pointer manipulation. Let's break them down one by one.
Operation: enqueue(item) – Joining the Line
When a new item arrives, we want to add it to the very rear (or back) of the queue. The queue then
grows by one, and this new item becomes the newest one in line.
Flowchart of enqueue(item)
Figure 8.20. Flowchart of the enqueue Operation for a Linked List Queue. This diagram outlines the logic for
adding a new item. After creating a new node, a critical check determines if the queue is empty. If it is, the new
node becomes both the head and the tail. If not, the new node is linked after the current tail, and the tail pointer is
updated. In both cases, the queue's count is incremented before the operation concludes.
Analysis:
So, what's the performance cost of our enqueue operation? Let's break it down into the work we're
actually doing.
1. Creating a New Node (new Node(value)): First, we have to ask the system for a small chunk of
memory to hold our new node. On a modern computer, memory allocation is incredibly fast.
While technically not guaranteed to be constant time (in very rare, complex scenarios it could be
slower), for all practical purposes in analysis, we consider a single new call to be an O(1) operation.
2. Checking if the Queue is Empty (isEmpty()): Our isEmpty() method is just a simple check: return
count_ == 0;. This is a single comparison, a clear O(1) operation.
3. The Pointer Shuffle: This is the heart of the operation.
o If the queue was empty, we do two pointer assignments (head_ = newNode; tail_ =
newNode;).
o If the queue was not empty, we do two different pointer assignments (tail_->next =
newNode; tail_ = newNode;).
In either case, it's a fixed, small number of pointer manipulations. It doesn't matter if
the queue has one element or one million elements; the number of pointer assignments
is the same. This is the definition of a constant-time, O(1) operation.
The Verdict:
When we add up the costs—a few O(1) pointer assignments, an O(1) check, and an O(1) memory
allocation—the total work is constant. It does not depend on N, the number of items already in the
queue.
Time Complexity: O(1)
Space Complexity (Auxiliary): O(1) (We only allocate space for the new node itself, which is part
of the data structure's growth, not extra temporary space for the operation).
This is exactly what we want from a core queue operation. Thanks to that tail_ pointer, adding a new
item to the end of the line is always a super-fast, predictable operation, no matter how long the line gets.
Figure 8.21: The enqueue Operation on a Linked List Queue. Adding an element is a three-step dance with
pointers. 1. A new node is created. 2. The next pointer of the current tail node (B) is updated to point to this new
node, linking it to the end of the list. 3. Finally, the queue's tail pointer itself is moved to this new node, establishing
it as the new end of the line. Because we always know where the tail is, this is a fast O(1) operation.
Operation: dequeue() – Next Customer, Please!
We need to remove the item that's been waiting the longest – the one at the very front of the queue. If
the operation is also meant to return this item, we grab it first. The queue then shrinks by one.
(Note: Many modern queue designs, like std::queue::pop(), have dequeue (or pop) as a void function.
You'd call front() first to get the item, then pop() to remove it. This separation can be cleaner for error
handling.)
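That two-step idiom from the note looks like this in practice. The helper below is a small sketch (the function name is illustrative), using only standard std::queue members:

```cpp
#include <queue>
#include <string>
#include <vector>

// Drain a queue in FIFO order using the two-step front()/pop() idiom.
std::vector<std::string> serveAll(std::queue<std::string> line) {
    std::vector<std::string> served;
    while (!line.empty()) {
        served.push_back(line.front()); // 1. Read the front item.
        line.pop();                     // 2. Remove it; pop() returns void.
    }
    return served;
}
```

Separating "read" from "remove" means pop() never has to copy-and-return an element, which keeps it simple and exception-safe.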
Figure 8.22 Flowchart of the dequeue Operation for a Linked List Queue. The process begins with a mandatory
underflow check to ensure the queue is not empty. If it's safe to proceed, the head pointer is advanced to the next
node in the list, and the count is decremented. A crucial final check determines if the queue has become empty; if
so, the tail pointer must also be set to nullptr to maintain a consistent state. Finally, the memory for the old head
node is deallocated.
template <typename T>
void LinkedQueue<T>::dequeue() {
/* Step 1: Safety first - we can't serve anyone from an empty line. */
if (isEmpty()) {
throw std::underflow_error("LinkedQueue::dequeue(): Queue is empty!");
}
/* Step 2: Hold on to the front node so we can delete it later. */
ListNode<T>* nodeToServeAndRemove = head_;
/* Step 3: The person behind them (if any) is now the new head of
the line. */
head_ = head_->next;
// Or equivalently: head_ = nodeToServeAndRemove->next;
/* Step 4: The line just got one person shorter. */
count_--;
/*
Step 5: CRITICAL Detail! What if that was the *last* person in
line?
- If head_ is now nullptr, it means the queue just became empty.
- In that case, our tail_ pointer is now pointing to the node
we're about to delete (it was both head and tail). We MUST set
tail_ to nullptr too!
*/
if (head_ == nullptr) { // Or equivalently: if (count_ == 0)
tail_ = nullptr;
}
/* Step 6: Finally, send the served node on its way. */
delete nodeToServeAndRemove;
}
Analysis:
Checking isEmpty is O(1). The pointer reassignments are O(1). Decrementing count_ is O(1). The delete
operation is O(1) on average. Thus, dequeue is also a speedy O(1) time complexity operation. The check
for head_ == nullptr to update tail_ is vital for correctness when the queue becomes empty.
Alright, let's put our dequeue operation under the microscope. We've seen the logic: we check for an
empty line, move the head pointer forward, and clean up the old node. But how fast is it, really?
Checking if the Queue is Empty (isEmpty()): This is just a check of our count_ member or whether head_
is null. It's a single, instantaneous check. That's a textbook O(1) operation.
if (head_ == nullptr) { tail_ = nullptr; } — A single check and, in the worst case, one extra pointer
assignment. Still O(1).
The key takeaway here is that the number of pointer operations is constant. It doesn't matter if the
queue has two elements or two million; we are only ever manipulating the pointers at the very front of
the list. We never have to walk the list.
The Verdict:
When you add it all up, every step in the dequeue process is a constant-time operation. The amount of
work does not grow as the queue gets longer. This is the efficiency we were hoping for.
Time Complexity: O(1)
Space Complexity (Auxiliary): O(1) (We use one or two temporary pointers, but that's a constant amount
of extra memory).
This guaranteed O(1) performance is the big win for a linked-list-based queue. Unlike the "shifty" array
that gets slower as the line gets longer, our LinkedQueue is a smooth, predictable operator. Serving the
next person in line is always just as fast, whether they're the second person or the two-thousandth.
Figure 8.23 The dequeue Operation on a Linked List Queue. Removing an element is a quick, two-step pointer
operation. 1. The queue's head pointer is simply moved forward to point to the next node in the chain (B), making it
the new front. 2. The old head node (A), now disconnected from the list, is deallocated from memory. This is a
guaranteed O(1) operation, as no traversal is needed.
A common bug in linked-list queue implementations is failing to reset the tail pointer to nullptr when the
last element is dequeued.
Problem:
// BUGGY Dequeue - Doesn't handle tail reset
template <typename T>
void LinkedQueue<T>::dequeue_buggy() {
if (isEmpty()) { throw std::underflow_error("empty"); }
ListNode<T>* nodeToDelete = head_;
head_ = head_->next;
count_--;
// PROBLEM: Missing check! If head_ became nullptr here,
// tail_ still points to the just-deleted node (nodeToDelete)!
// if (head_ == nullptr) { tail_ = nullptr; } // <<< FORGOTTEN LINE
delete nodeToDelete;
}
Consequences:
If dequeue makes the queue empty, head becomes nullptr but tail becomes a dangling pointer
(pointing to freed memory).
Subsequent enqueue operations may see isEmpty() return true (if it checks head == nullptr) yet still dereference the dangling tail pointer (e.g., tail->next = newNode), leading to undefined behavior or crashes.
Prevention:
After updating head and decrementing count in dequeue, always check if the queue has become
empty (head == nullptr or count == 0).
Operation: front() (or peek()) – Who's Next?
Simply take a look at the item at the very front of the queue without removing it. The queue itself should
remain completely unchanged.
Figure 8.24: Flowchart of the front() (or peek()) Operation. The logic for peeking at the front element is simple but
must be safe. The first and most important step is to check if the queue is empty to prevent an underflow error. If
the queue is not empty, the operation simply returns a reference to the data held by the head node. Crucially, this is
a non-destructive, read-only operation; the state of the queue (its pointers and count) remains completely
unchanged.
// The non-const version lets you potentially modify the front item
template <typename T>
T& LinkedQueue<T>::front() {
if (isEmpty()) {
throw std::underflow_error("LinkedQueue::front(): Queue is empty, nothing to see here!");
}
// Simply return a reference to the data held by the head node.
return head_->data;
}
// The const version is for when you have a const Queue object
// and just want to read the front item.
template <typename T>
const T& LinkedQueue<T>::front() const {
if (isEmpty()) {
throw std::underflow_error("LinkedQueue::front(): Queue is empty, nothing to see here!");
}
return head_->data;
}
Analysis of front()
Sometimes the simplest analysis is the most beautiful. Let's break down the work our computer does
when we call front().
Checking if the Queue is Empty (isEmpty()): We look at our count_ member variable. Is it zero? This is a
single comparison. A lightning-fast O(1) operation.
Accessing the Head's Data: If the queue isn't empty, we follow the head_ pointer to the first node and
access its data member. Following a single pointer and accessing a member are also fundamental O(1)
operations.
And... that's it. There are no loops, no recursion, no traversing the list. The amount of work is tiny and,
most importantly, it's constant. It doesn't matter if there's one person in line or a million; checking who
is at the very front takes exactly the same amount of time.
The Verdict:
Time Complexity: O(1)
Space Complexity (Auxiliary): O(1) (The operation uses no extra memory, aside from the few bytes
needed for the return value itself).
The front() operation is as efficient as it gets, providing immediate access to the next item in the queue
without any performance penalty.
8.6.2 Exercises on "Linked Queue"
1. (Tracing): Starting with an empty LinkedQueue, trace the state of the head and tail pointers, and each node's next pointer, after each of these calls: enqueue(10), enqueue(20), enqueue(30).
2. (Edge Cases): Why is it so important to handle the case where the queue becomes empty after a
dequeue operation (i.e., when head == nullptr after removing the last node)? What specific
pointer needs to be updated in this case that isn't updated in a normal dequeue?
3. (Implementation Detail): Our LinkedQueue implementation used two pointers, head and tail.
What would be the Big O time complexity of the enqueue operation if we only used a head
pointer? Explain your reasoning.
Solutions for 8.6.2 Exercises
1. (Tracing):
enqueue(10):
o head points to Node(10).
o tail points to Node(10).
o Node(10)->next is nullptr.
enqueue(20):
o head points to Node(10).
o tail points to Node(20).
o Node(10)->next points to Node(20).
o Node(20)->next is nullptr.
enqueue(30):
o head still points to Node(10).
o tail now points to Node(30).
o Node(20)->next points to Node(30).
o Node(30)->next is nullptr.
2. (Edge Cases):
When you dequeue the last element, the head pointer will naturally become nullptr. If you don't also
update the tail pointer to nullptr, you create a dangling pointer. The tail would still be pointing to the
memory location of the node that was just deleted. If you were to then enqueue a new item, the code
would try to do tail->next = ..., which would access freed memory and lead to a crash or undefined
behavior. So, after dequeuing the last item, you must set tail = nullptr to correctly represent an empty
queue.
3. (Implementation Detail):
If we only had a head pointer, the enqueue operation would become O(N).
o Why? To add a new element to the back of the line, you would have to start at the head
and traverse the entire linked list, node by node (current = current->next), until you
reached the last node. Only then could you attach the new node. This traversal takes time
proportional to the number of elements already in the queue, hence O(N). This is precisely
why we keep a tail pointer—it gives us direct O(1) access to the end of the line.
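The point of Exercise 3 is easy to see in code. Here is a sketch of what enqueue degenerates to without a tail pointer (raw pointers for brevity; the names are illustrative):

```cpp
#include <cstddef>

struct IntNode {
    int data;
    IntNode* next = nullptr;
};

// Enqueue with only a head pointer: we must walk to the end of the
// list before we can attach the new node, so this is O(N).
void enqueue_head_only(IntNode*& head, int value) {
    IntNode* newNode = new IntNode{value};
    if (head == nullptr) {
        head = newNode;
        return;
    }
    IntNode* current = head;
    while (current->next != nullptr) { // The O(N) traversal.
        current = current->next;
    }
    current->next = newNode;
}
```

A single extra pointer to the last node removes that whole while loop, which is exactly the trade our LinkedQueue makes.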
8.7 Comparing Queue Implementations: The Final Showdown
Factor 1: Performance Consistency (The "Hiccup" Factor)
LinkedQueue: Guaranteed O(1) for every operation. Each enqueue and dequeue does the same small, fixed amount of pointer work, so there are never any pauses.
CircularArrayQueue: This one is O(1) amortized. This is a fancy way of saying that most of the
time, it's incredibly fast. But every once in a while, when the underlying array gets full, it has to
hit the brakes to perform a resize. This resize operation is slow—it's O(N) because it has to allocate
a new, bigger array and copy all N elements over.
The Verdict: If you're building a system where a single, unpredictable delay could be catastrophic (like a
real-time audio processor or a high-frequency trading system), the guaranteed consistency of the
LinkedQueue is a major advantage. For most other applications, the "hiccup" of the circular array is so
infrequent that its average performance is fantastic.
Factor 2: Memory Usage
CircularArrayQueue: Allocates a whole block up front, so it can carry unused "slack" capacity, but each stored element has zero per-element overhead.
LinkedQueue: Allocates exactly one node per element, but every node pays a "pointer tax": an extra 8 bytes (on a typical 64-bit system) for the next pointer, plus per-allocation bookkeeping.
The Verdict: If you are storing large objects, the 8-byte pointer overhead of the linked list is insignificant. But if you are storing a huge number of very small, primitive values, the CircularArrayQueue is significantly more memory-efficient.
Factor 3: Cache Performance (The Hidden Speed Boost)
CircularArrayQueue: Winner, by a landslide. All of its data is stored in one contiguous block. When
the CPU needs to access element i, it will likely also pull i+1, i+2, etc., into its fast cache. This means
that as you process the queue, the next few elements are often already waiting in the fastest
possible memory. This is called cache locality, and it can make array-based code run much faster
in the real world than its Big O notation would suggest.
LinkedQueue: The enemy of the cache. Each node is allocated separately in memory. They could
be anywhere. To get from one node to the next, the CPU has to follow a pointer, which often
results in a cache miss—a slow trip out to main memory to find the next node's location.
The Verdict: If you are processing large amounts of data in tight loops, the superior cache performance
of the CircularArrayQueue can make it significantly faster in practice than a LinkedQueue, even though
they are both technically O(1).
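You can feel this effect with a quick (and admittedly unscientific) micro-benchmark: sum the same values stored contiguously in a std::vector versus scattered across std::list nodes. The helper below is a sketch with an illustrative name; on most machines the vector traversal wins comfortably, though exact timings vary.

```cpp
#include <chrono>
#include <list>    // Used in the comparison below.
#include <numeric>
#include <vector>  // Used in the comparison below.

// Sum a container's elements and report how long the traversal took,
// in milliseconds. The same code path works for any standard container.
template <typename Container>
double timed_sum_ms(const Container& c, long long& sum_out) {
    auto start = std::chrono::steady_clock::now();
    sum_out = std::accumulate(c.begin(), c.end(), 0LL);
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

Filling a std::vector<int> and a std::list<int> with the same million values and calling timed_sum_ms on each typically shows the vector traversal running several times faster, purely because its elements arrive in the cache together.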
Summary Table: Array vs. Linked List

Factor          | CircularArrayQueue         | LinkedQueue               | Notes
----------------+----------------------------+---------------------------+--------------------------------------------
enqueue Time    | Amortized O(1)             | Guaranteed O(1)           | The linked list has more predictable
                |                            |                           | insertion time; no resize pauses.
dequeue Time    | Guaranteed O(1)            | Guaranteed O(1)           | Both are excellent here.
Memory Usage    | O(Capacity) - can have     | O(N) - exact memory       | The array might waste space; the list
                | slack space                | + pointer overhead        | has a "pointer tax" on every element.
Cache Locality  | Excellent (contiguous      | Poor (scattered memory)   | The array is often faster in the real
                | memory)                    |                           | world for primitive types due to the
                |                            |                           | CPU cache.
Implementation  | More complex (index math,  | Simpler (pointer          | The linked list logic is often
                | resize)                    | manipulation)             | considered more straightforward.
When in doubt, start with the STL's default std::queue. It's a highly optimized std::deque under the hood
and gives you the best of both worlds for most common scenarios. But now, you're not just a user of
std::queue; you're an engineer who understands exactly what's going on inside and why it's designed the
way it is. You know the trade-offs, and you know how to build either version from scratch if you ever
need to.
8.8 The STL Solution: std::queue - Your Go-To for FIFO
8.8.1 The Adaptor Philosophy: Not a Container, but a Controller
Think of it like this: std::queue provides the face of a queue – the familiar push (for enqueue), pop (for
dequeue), front, back, empty, and size operations. But behind that face, it uses an existing C++ sequence
container to actually store the elements. It "adapts" that underlying container, restricting its interface to
only expose the FIFO behavior we expect from a queue. You can't, for example, directly access an
element in the middle of a std::queue using an index, even if the underlying container (like std::vector or
std::deque) would allow it. The adaptor enforces the queue discipline.
8.8.2 The Workhorse Beneath: The Underlying Container
By default, std::queue<T> is built on std::deque<T>, which supplies the O(1) push_back and pop_front operations the adaptor needs.
You can also tell std::queue to use a std::list<T> (a doubly-linked list) as its foundation if
you prefer. std::list also offers O(1) push_back, pop_front, front, and back.
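Choosing the backing container is just std::queue's second template parameter. A minimal sketch (the function name is illustrative):

```cpp
#include <list>
#include <queue>

// A queue backed by a doubly-linked list instead of the default
// std::deque. The interface is identical; only the storage changes.
int frontAfterPushes() {
    std::queue<int, std::list<int>> q;
    q.push(10); // forwards to list.push_back(10)
    q.push(20);
    q.pop();    // forwards to list.pop_front()
    return q.front();
}
```

Whichever container you pick, the adaptor still exposes only push, pop, front, back, empty, and size, so client code never needs to change.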
C++ code
#include <queue>
#include <list>   // An alternative backing container
#include <deque>  // Default, included for completeness
#include <iostream>
#include <string>

int main() {
    // Backed by std::deque by default.
    std::queue<std::string> q1;

    std::cout << "q1 initially empty? " << std::boolalpha << q1.empty() << std::endl;

    q1.push("First Job");
    q1.push("Second Job");
    q1.push("Third Job");

    std::cout << "q1 size: " << q1.size() << std::endl; // Should be 3
    std::cout << "q1 front: \"" << q1.front() << "\"" << std::endl;
    std::cout << "q1 back: \"" << q1.back() << "\"" << std::endl;

    std::cout << "Processing q1..." << std::endl;
    while (!q1.empty()) {
        std::cout << "Serving: " << q1.front() << std::endl;
        q1.pop();
    }

    std::cout << "q1 empty after processing? " << std::boolalpha << q1.empty() << std::endl;
    return 0;
}
Expected output:
q1 initially empty? true
q1 size: 3
q1 front: "First Job"
q1 back: "Third Job"
Processing q1...
Serving: First Job
Serving: Second Job
Serving: Third Job
q1 empty after processing? true
8.9 Full C++ Example Application
This final section is where we put it all together. I'm going to give you three complete, runnable C++
programs that demonstrate everything we've learned. The first two will let you see the dramatic
performance difference between our "naive" array queue and the much smarter circular array queue.
The final example will show how we can use a linked list to build a more complex, real-world system.
My advice? Don't just read this code. Type it out. Compile it. Run it. Play with it. Break it. Fix it. That's
how you'll truly make these concepts your own.
8.9.1 Array Implementations in Action
1. C++ Code of Shifty Queue
Here it is: the ShiftyQueue. This is our naive implementation, the one that uses a std::vector in the most
straightforward way possible. As you'll recall, its enqueue operation is fast, but its dequeue is a
performance nightmare because it has to shift every single element to the left.
The main function includes a performance test. I want you to run it and feel the pause when it starts
dequeuing. That pause is the feeling of an O(N) algorithm struggling under a heavy load. It's a lesson you
won't forget.
#include <iostream>
#include <vector>
#include <chrono> // We need this to time our operations and prove a point.
#include <stdexcept> // For throwing errors when someone misuses our queue.
class ShiftyQueue {
private:
// We're backing our queue with a std::vector. It's simple, and it
// handles all the memory allocation for us.
std::vector<int> data_;
public:
// The default constructor is fine. It just creates an empty vector.
ShiftyQueue() = default;
// Joining the line is the easy part. push_back on a vector is
// amortized O(1), so enqueue is fast.
void enqueue(int value) {
data_.push_back(value);
}
bool isEmpty() const {
return data_.empty();
}
int front() const {
if (isEmpty()) {
throw std::runtime_error("Queue is empty! There is no front element.");
}
return data_.front();
}
void display() const {
std::cout << "Queue (front to back): ";
for (int v : data_) {
std::cout << v << " ";
}
std::cout << "\n";
}
//
// And here it is. The villain of our story.
//
// To remove from the front, we have to erase the element at index 0.
// The std::vector then has to shift *every other element* one spot
// to the left to fill the gap. This is a classic O(N) operation.
// Painfully slow for a large queue.
//
int dequeue() {
if (isEmpty()) {
// Can't serve a customer if the line is empty.
throw std::runtime_error("Queue underflow! Tried to dequeue from an empty queue.");
}
int front_value = data_.front(); // Grab the value before we erase it.
data_.erase(data_.begin()); // The expensive shift happens here.
return front_value;
}
// This is where we prove our point. Let's time this thing and see
// just how bad that O(N) dequeue really is.
void demonstrateShiftyQueuePerformance() {
std::cout << "\n==================================================";
std::cout << "\n= PERFORMANCE TEST: The Shifty Queue Dilemma =";
std::cout << "\n==================================================\n";
ShiftyQueue shiftyQueue;
// Let's use a big number so the difference is obvious.
const int num_elements = 50000;
std::cout << "Step 1: Enqueuing " << num_elements << " items. This should be fast...\n";
auto start_enqueue = std::chrono::high_resolution_clock::now();
for (int i = 0; i < num_elements; ++i) {
shiftyQueue.enqueue(i);
}
auto end_enqueue = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> enqueue_duration = end_enqueue - start_enqueue;
std::cout << " -> Time taken for " << num_elements << " enqueues: "
<< enqueue_duration.count() << " ms\n";
std::cout << "\nStep 2: Dequeuing all " << num_elements << " items. Go grab a coffee...\n";
auto start_dequeue = std::chrono::high_resolution_clock::now();
while (!shiftyQueue.isEmpty()) {
shiftyQueue.dequeue();
}
auto end_dequeue = std::chrono::high_resolution_clock::now();
std::chrono::duration<double, std::milli> dequeue_duration = end_dequeue - start_dequeue;
std::cout << " -> Time taken for " << num_elements << " dequeues: "
<< dequeue_duration.count() << " ms\n";
}
// The main driver for the interactive demo remains the same,
// as it's just a simple UI to let the user play around.
int main() {
ShiftyQueue queue;
int choice, value;
do {
std::cout << "\n--- Interactive Shifty Queue Menu ---\n";
std::cout << "1. Enqueue\n";
std::cout << "2. Dequeue\n";
std::cout << "3. View front\n";
std::cout << "4. Display queue\n";
std::cout << "5. Run performance test\n";
std::cout << "0. Exit\n";
std::cout << "Your choice: ";
std::cin >> choice;
try {
switch (choice) {
case 1:
std::cout << "Enter value to enqueue: ";
std::cin >> value;
queue.enqueue(value);
std::cout << "Enqueued " << value << ".\n";
break;
case 2:
std::cout << "Dequeued " << queue.dequeue() << ".\n";
break;
case 3:
std::cout << "Front element is " << queue.front() << ".\n";
break;
case 4:
queue.display();
break;
case 5:
demonstrateShiftyQueuePerformance();
break;
case 0:
std::cout << "Exiting. Thanks for playing!\n";
break;
default:
std::cout << "Invalid choice. Please try again.\n";
}
} catch (const std::exception& e) {
std::cout << "Error: " << e.what() << "\n";
}
} while (choice != 0);
return 0;
}
==================================================
= PERFORMANCE TEST: The Shifty Queue Dilemma =
==================================================
Step 1: Enqueuing 50000 items. This should be fast...

--------------------------------
Process exited after 46.43 seconds with return value 0
Press any key to continue . . .
2. C++ Code of CircularArrayQueue
Now, let's use the CircularArrayQueue we designed. This example will show that both enqueue and
dequeue remain fast, regardless of the number of items.
Notice how the dequeue operation is now just a simple index modification—an O(1) operation. When
you run the performance test in this program (which you'll find is identical to the one for the
ShiftyQueue), the difference will be staggering. This is what good algorithm design looks like in practice.
C++ Code
// =============================================================
#include <iostream>
#include <vector>
#include <stdexcept>
#include <optional> // For safe return values that might be empty
#include <functional> // For std::reference_wrapper, our tool for optional references
#include <iomanip> // For pretty printing in display()
#include <limits> // For robust input clearing
template <typename T>
class CircularQueue {
private:
// The buffer itself. Its size() doubles as our capacity.
std::vector<T> data_;
// We just keep track of where the line starts and how long it is.
size_t front_idx_ = 0;
size_t count_ = 0;
public:
// A sensible constructor. Lets the user pick a starting size.
explicit CircularQueue(size_t capacity = 8) {
// Can't have a queue with zero space. That's just silly.
if (capacity == 0) {
capacity = 1;
}
data_.resize(capacity);
}
// Map a logical position (0 = front) onto a physical slot in the buffer.
size_t physical_index(size_t logical_pos) const {
return (front_idx_ + logical_pos) % data_.size();
}
bool isEmpty() const { return count_ == 0; }
size_t size() const { return count_; }
size_t capacity() const { return data_.size(); }
void enqueue(const T& value) {
// (When the buffer is full, a real implementation would grow it
// first, exactly as we did in Part 1.)
// Figure out where the back of the line is and drop in the new value.
size_t rear_idx = physical_index(count_);
data_[rear_idx] = value;
count_++;
}
std::optional<T> dequeue() {
if (isEmpty()) return std::nullopt;
T value_to_return = std::move(data_[front_idx_]);
// The magic of O(1) dequeue: just move the front pointer. No shifting!
front_idx_ = (front_idx_ + 1) % data_.size();
count_--;
return value_to_return;
}
// And here's the non-const version in case the user wants to modify the front.
std::optional<std::reference_wrapper<T>> front() {
if (isEmpty()) return std::nullopt;
return data_[front_idx_];
}
// And a peek at the rear, for symmetry.
std::optional<std::reference_wrapper<T>> rear() {
if (isEmpty()) return std::nullopt;
return data_[physical_index(count_ - 1)];
}
// Show the queue's physical layout, marking front and rear.
void display() const {
std::cout << "--- Queue State ---\n";
std::cout << "Physical Index | Value\n";
std::cout << "---------------+-------\n";
for (size_t i = 0; i < count_; ++i) {
size_t idx = physical_index(i);
std::cout << std::setw(14) << idx << " | " << data_[idx];
if (i == 0) {
std::cout << " <-- front";
}
if (i == count_ - 1) {
std::cout << " <-- rear";
}
std::cout << "\n";
}
std::cout << "-------------------\n";
}
};
// A little helper to keep our main loop clean from input errors.
void clearInputBuffer() {
std::[Link]();
std::[Link](std::numeric_limits<std::streamsize>::max(), '\n');
}
int main() {
std::cout << "Enter a starting capacity for the queue: ";
size_t capacity = 8;
std::cin >> capacity;
clearInputBuffer();
CircularQueue<int> queue(capacity);
std::cout << "Queue created with capacity " << queue.capacity() << ".\n";
int choice;
do {
std::cout << "\n=== What's next? ===\n";
std::cout << "1. Enqueue (Add to back)\n";
std::cout << "2. Dequeue (Remove from front)\n";
std::cout << "3. Peek at Front\n";
std::cout << "4. Peek at Rear\n";
std::cout << "5. Display Contents\n";
std::cout << "6. Get Stats (Size/Capacity)\n";
std::cout << "7. Exit\n";
std::cout << "Your choice: ";
std::cin >> choice;
clearInputBuffer();
switch (choice) {
case 1: {
int value;
std::cout << "Enter value to enqueue: ";
std::cin >> value;
clearInputBuffer();
queue.enqueue(value);
std::cout << "Enqueued " << value << ".\n";
queue.display();
break;
}
case 2: {
auto result = queue.dequeue();
if (result) {
std::cout << "Dequeued value: " << *result << "\n";
} else {
std::cout << "Queue is empty. Nothing to dequeue.\n";
}
break;
}
case 3: {
auto front_ref = queue.front();
if (front_ref) {
// .get() unwraps the reference_wrapper
std::cout << "Front element is: " << front_ref->get() << "\n";
} else {
std::cout << "Queue is empty.\n";
}
break;
}
case 4: {
auto rear_ref = queue.rear();
if (rear_ref) {
std::cout << "Rear element is: " << rear_ref->get() << "\n";
} else {
std::cout << "Queue is empty.\n";
}
break;
}
case 5: {
queue.display();
break;
}
case 6: {
std::cout << "\n--- Queue Stats ---\n";
std::cout << " Size: " << queue.size() << "\n";
std::cout << " Capacity: " << queue.capacity() << "\n";
std::cout << " Is Empty: " << (queue.isEmpty() ? "Yes" : "No") << "\n";
std::cout << "-------------------\n";
break;
}
case 7: {
std::cout << "Exiting. Happy queueing!\n";
break;
}
default: {
std::cout << "Oops! Invalid choice. Please pick a number from the menu.\n";
break;
}
}
} while (choice != 7);
return 0;
}
Your choice: 5
--- Queue State ---
Physical Index | Value
---------------+-------
             0 | 4 <-- front
             1 | 8
             2 | 2 <-- rear
-------------------
After one dequeue, displaying again shows the front simply advancing to the next physical slot:
--- Queue State ---
Physical Index | Value
---------------+-------
             1 | 8 <-- front
             2 | 2 <-- rear
-------------------
But there's a twist. An ER isn't a simple "first-come, first-served" line. It's a Priority Queue. A patient with
a critical injury (high priority) needs to be seen before someone with a minor sprain (low priority), even
if the patient with the sprain arrived an hour earlier.
This code demonstrates how we can use a sorted linked list to implement this priority system. Instead
of just adding new patients to the tail, our admit_patient function will intelligently find the correct spot
in the list based on the patient's severity. The patient at the head of our list will always be the one who
needs help most urgently.
This example shows how the fundamental queue concept can be adapted to solve more complex, real-
world problems. Pay close attention to how the insertion logic differs from a standard FIFO queue.
C++ Code:
// =============================================================
// Real-World Example: An Emergency Room Priority Queue
//
// This isn't just a simple "first-come, first-served" queue. An ER
// is a perfect example of a PRIORITY QUEUE. A patient with a
// critical injury (high priority) has to be seen before someone
// with a minor issue, even if they arrived later.
//
// We'll build this using a sorted linked list, where the "head" of
// the list is always the highest-priority patient. And to do it
// the modern, safe C++ way, we'll use std::unique_ptr to manage
// our patient nodes automatically. No manual `delete`!
//
// Written by: Eyadere Addis
//
// Date: 2021-8-24
// ==============================================================
#include <iostream>
#include <string>
#include <ctime>
#include <iomanip>
#include <memory> // The star of the show: for std::unique_ptr
#include <utility> // For std::move
#include <limits> // For std::numeric_limits (robust input clearing)
class Patient {
public:
// I'm making the data public here for simplicity in this example.
// In a bigger system, you'd have private members and getters.
std::string name;
std::string condition;
int severity; // 1=critical, 5=minor
time_t arrival_time;
// Patients form a singly linked list, owned via unique_ptr.
std::unique_ptr<Patient> next;
Patient(std::string n, std::string c, int s)
: name(std::move(n)), condition(std::move(c)), severity(s),
arrival_time(std::time(nullptr)), next(nullptr) {}
void print() const {
char time_buf[32];
std::strftime(time_buf, sizeof(time_buf), "%H:%M:%S",
std::localtime(&arrival_time));
std::cout << "Patient: " << std::setw(15) << std::left << name
<< "| Severity: " << severity << "/5"
<< " | Condition: " << std::setw(20) << std::left << condition
<< " | Arrived: " << time_buf << "\n";
}
};
class EmergencyRoom {
private:
// 'head' is a unique_ptr that owns the first patient in the list.
// When 'head' is reset, it triggers a cascading deletion of the whole list.
std::unique_ptr<Patient> head;
int doctors_available;
public:
EmergencyRoom(int num_doctors)
: head(nullptr), doctors_available(num_doctors) {}
// A patient comes in. We need to insert them into our sorted list
// based on their priority.
void admit_patient(std::string name, std::string condition, int severity) {
auto new_patient = std::make_unique<Patient>(name, condition, severity);
// Sorted insert: walk past patients who are equally or more urgent
// (a lower severity number means more critical).
if (!head || severity < head->severity) {
new_patient->next = std::move(head);
head = std::move(new_patient);
} else {
Patient* current = head.get();
while (current->next && current->next->severity <= severity) {
current = current->next.get();
}
new_patient->next = std::move(current->next);
current->next = std::move(new_patient);
}
std::cout << name << " has been admitted to the ER.\n";
}
// A doctor takes the most urgent patient -- the head of the list.
void treat_next_patient() {
if (!head) {
std::cout << "No patients are currently waiting.\n";
return;
}
// Detach the head node; 'patient_to_treat' now owns it.
std::unique_ptr<Patient> patient_to_treat = std::move(head);
head = std::move(patient_to_treat->next);
doctors_available--;
std::cout << "Treatment for " << patient_to_treat->name << " is complete.\n";
// The 'patient_to_treat' unique_ptr goes out of scope here, and the
// patient's memory is automatically freed. No manual 'delete'!
doctors_available++;
std::cout << "A doctor is now available.\n";
}
// Print every waiting patient, most urgent first.
void display_waiting_list() const {
if (!head) {
std::cout << "The waiting list is empty.\n";
return;
}
std::cout << "\n--- Waiting List (most urgent first) ---\n";
for (const Patient* p = head.get(); p != nullptr; p = p->next.get()) {
p->print();
}
}
};
int main() {
EmergencyRoom er(3);
int choice = 0;
while (true) {
std::cout << "\n=== Hospital ER Management System ===\n";
std::cout << "1. Admit New Patient\n";
std::cout << "2. Treat Next Patient\n";
std::cout << "3. View Waiting List\n";
std::cout << "0. Exit System\n";
std::cout << "Your choice: ";
std::cin >> choice;
if (std::cin.fail()) {
std::cin.clear();
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
choice = -1; // Force the invalid-choice branch below.
} else {
std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // Consume newline
}
switch (choice) {
case 1: {
std::string name, condition;
int severity;
std::cout << "Enter patient name: ";
std::getline(std::cin, name);
std::cout << "Enter condition: ";
std::getline(std::cin, condition);
std::cout << "Enter severity (1-5, 1 is most critical): ";
std::cin >> severity;
er.admit_patient(name, condition, severity);
break;
}
case 2:
er.treat_next_patient();
break;
case 3:
er.display_waiting_list();
break;
case 0:
std::cout << "Shutting down the system. Goodbye!\n";
return 0;
default:
std::cout << "Invalid choice. Please try again.\n";
}
}
}
Expected output
=== Hospital ER Management System ===
1. Admit New Patient
2. Treat Next Patient
3. View Waiting List
0. Exit System
Your choice: 1
Enter patient name: John Smith
Enter condition: Chest pain
Enter severity (1-5, 1 is most critical): 2
John Smith has been admitted to the ER.
0. Exit System
Your choice: 1
Enter patient name: Sarah Wilson
Enter condition: Severe bleeding
Enter severity (1-5, 1 is most critical): 1
Sarah Wilson has been admitted to the ER.
But the world of computer science is a fascinating place, and clever people have taken the basic queue
concept and adapted it to solve more specialized problems. Think of these as the "special ops" versions
of our standard queue. You won't need them every day, but when you do, they're incredibly powerful.
Let's take a quick tour of two of the most important variants: the Double-Ended Queue (Deque) and the
Priority Queue.
A Deque is a hybrid. It's like a queue and a stack mashed together. It's a sequence of elements that
supports efficient addition and removal of elements from both the front and the back.
push_front(): Adds an element at the very beginning.
push_back(): Adds an element at the very end (just like enqueue).
pop_front(): Removes the element from the very beginning (just like dequeue).
pop_back(): Removes the element from the very end (like a stack's pop).
Figure 8.25: A Deque Operations: Diagram showing a sequence of elements like [10, 20, 30]. Shows four different
operations: push_front adds a 5 to the beginning. push_back adds a 40 to the end. pop_front removes the 5.
pop_back removes the 40.
push_back (enqueue) works exactly the same: you add an element at the rear_idx and move the
index forward.
push_front is the new trick: you move the front_idx backwards (wrapping around if necessary)
and place the new element there.
pop_front (dequeue) and pop_back are just the reverse of these operations.
This is exactly how C++'s std::deque works. It's typically implemented as a series of linked blocks of arrays
(a "deque of blocks"), which allows it to grow efficiently in either direction without the massive single-
block reallocations of a std::vector. When you use std::queue, by default, it's just a simple wrapper
around a std::deque!
Sliding Window Problems: A very common algorithm pattern where you need to maintain a
"window" of the last k elements seen in a stream. As a new element arrives, you push_back the
new one and pop_front the oldest one to keep the window size constant.
Implementing an "Undo" with a "Redo" limit: A text editor's undo system is a stack. But what if
you only want to remember the last 100 actions? You could use a deque. As you perform a new
action, you push_front it. If the deque's size exceeds 100, you pop_back the oldest undo action.
In a priority queue, every element has a priority associated with it. The dequeue operation doesn't return
the oldest element; it returns the element with the highest priority. If two elements have the same
priority, then (and only then) does it typically fall back to FIFO order.
The interface looks almost the same as a regular queue, but the behavior is totally different.
insert(value, priority) (or push): Adds an element with its associated priority.
extract_highest_priority() (or pop): Removes and returns the element with the highest priority.
peek_highest_priority() (or top): Looks at the highest priority element without removing it.
1. An Unsorted Array/List: insert is a fast O(1) (just add it to the end). But
extract_highest_priority is a disaster. You have to scan the entire list (O(N)) every single time
just to find the highest priority element. This is almost always a bad idea.
2. A Sorted Array/List: This is what we did in our hospital example. insert becomes slow (O(N))
because you have to find the correct sorted position to place the new element. But
extract_highest_priority is a beautiful O(1), because the most important item is always right
at the front!
3. The Professional's Choice: A Heap. This is the gold standard. A Heap is a special kind of
binary tree (which we'll cover in detail in Chapter 14) that uses some clever tricks to keep
the highest-priority element at the root. It gives us the best of both worlds: both insert and
extract_highest_priority are incredibly efficient O(log N) operations. This is how C++'s
std::priority_queue is implemented.
Implementation      | insert   | extract_highest | Notes
--------------------+----------+-----------------+------------------------------------------
Unsorted Array/List | O(1)     | O(N)            | Fast inserts, but every extraction is a full scan.
Sorted Array/List   | O(N)     | O(1)            | Good for a small number of items or when you peek far more than you insert.
Heap (The Best)     | O(log N) | O(log N)        | This is the default, go-to implementation for any serious use.
Table 8.5 Priority Queue Implementations: A table comparing the three implementations.
Operating System Schedulers: The OS needs to decide which of the hundreds of waiting
processes gets to use the CPU next. A high-priority system process should run before your low-
priority file download.
Graph Algorithms: Famous algorithms like Dijkstra's (for finding the shortest path) and Prim's (for
finding minimum spanning trees) use a priority queue to efficiently decide which vertex to
explore next.
So, while our standard FIFO queue is a powerful and fundamental tool, remember that it's just the
beginning. By tweaking the rules of entry and exit, we can create specialized variants like the Deque and
Priority Queue that are tailor-made for a whole new class of problems.
8.11 Part 2 Summary: The Linked List, The STL, and The
VIP Line
Alright, we've reached the end of our deep dive into the Queue. In this second half of the chapter, we
moved from the abstract to the concrete, building a robust, modern queue and exploring the wider world
of queue-like data structures.
The Modern LinkedQueue: We started by rolling up our sleeves and building a queue from
scratch using a linked list and modern C++. The big win here was using std::unique_ptr to
manage our nodes. This gave us automatic, safe memory management (RAII) and saved us from
the old-school headaches of manual new and delete. By keeping track of both a head and a tail
pointer, we built a queue with a guaranteed, rock-solid O(1) performance for both enqueue and
dequeue.
The STL Way (std::queue): We then learned that, most of the time, we don't need to build our
own queue. The C++ Standard Template Library gives us std::queue, a container adaptor that
does the job perfectly. We discovered that it's not a container itself, but a wrapper that enforces
the strict FIFO rules on top of another underlying container, which by default is the powerful
std::deque. For 99% of your day-to-day work, std::queue is the right tool for the job.
Real-World Application: We put our linked list knowledge to the test by building a Hospital
Emergency Room system. This wasn't just any queue; it was a Priority Queue, where patients
were ordered by the severity of their condition, not just their arrival time. This powerful example
showed how the fundamental linked list structure can be adapted to solve more complex, real-
world problems.
Expanding Our Horizons (Advanced Queues): Finally, we took a peek beyond the standard FIFO
line and met two important relatives:
o The Deque (Double-Ended Queue): The ultimate flexible line, allowing O(1) additions
and removals from both the front and the back.
o The Priority Queue: The "VIP line" that always serves the most important item first. We
learned that while a sorted list works, the truly efficient implementation (which we'll
see later) is a Heap, giving us O(log N) performance.
You now have a complete picture. You know the FIFO principle, you know the trade-offs between array
and linked list implementations, you know how to build a modern LinkedQueue, and you know how and
when to use the STL's std::queue. Most importantly, you understand that the simple concept of a "line"
is the foundation for a whole family of powerful tools.
We started by defining the FIFO (First-In, First-Out) principle and the core ADT operations that enforce
it: enqueue, dequeue, and front. We contrasted this with the LIFO logic of a stack to solidify our
understanding.
Then we rolled up our sleeves and started building. We saw two primary paths:
1. The Linked List Queue: A natural and elegant solution that gives us guaranteed O(1)
performance for our core operations, at the cost of some memory overhead for pointers.
2. The Array-Based Queue: We first fell into the naive "shifty" queue trap, learning a valuable
lesson about its disastrous O(N) dequeue performance. This motivated us to find a better way,
which led us to the ingenious Circular Array. By letting our indices wrap around, we achieved an
efficient, cache-friendly, amortized O(1) implementation.
Finally, we expanded our view to include the more specialized Deque (the stack/queue hybrid) and the
Priority Queue (the VIP line), understanding that the basic queue is a foundation for a whole family of
powerful tools.
You now have a complete and robust understanding of how queues work, why they're important, and
how to build them the right way.
o For each step, the answer should clearly list the expected values of data_ (array
contents), front_idx_, rear_idx_, and count_.
o Highlight when rear_idx_ wraps around (e.g., after enqueue(60) if initial capacity was 5
and it became full then resized).
o The answer for dequeue() on an empty queue should state that an std::underflow_error
(or similar) would be thrown.
o The answer for isEmpty() should be true after all elements are dequeued.
2. tail_ Pointer Importance (Linked List Queue):
o Why Crucial for O(1) enqueue: Without a tail_ pointer, to add an element to the end,
you would have to traverse the entire list from head_ to find the last node. This
traversal takes O(N) time. With a tail_ pointer, you directly access the last node, set its
next to the new node, and update tail_ to the new node, all in O(1) time.
o enqueue with only head_: You'd start at head_. If head_ is null, the new node becomes
head_. Otherwise, you loop while (current->next != nullptr) current = current->next;.
Then current->next = newNode;. This loop is O(N).
3. Edge Cases (LinkedQueue):
o enqueue on Empty: head_ and tail_ are both nullptr, count_ is 0. New node is created.
Both head_ and tail_ are set to point to this newNode. count_ becomes 1.
o dequeue on Single Element Queue: head_ and tail_ point to the same (only) node,
count_ is 1. nodeToServeAndRemove points to this node. head_ becomes head_->next
(which is nullptr). count_ becomes 0. The if (head_ == nullptr) condition becomes true,
so tail_ is also set to nullptr. The original node is deleted.
4. Memory Management (LinkedQueue):
o No delete in Destructor: A memory leak would occur. All ListNode objects allocated
with new would remain in memory but would be unreachable after the LinkedQueue
object itself is destroyed. The program would consume more and more memory over
time if many queues were created and destroyed.
o current = current->next; before delete: This is vital. If you delete current; first, then
current->next would be accessing freed memory (a dangling pointer), leading to
undefined behavior (likely a crash) when trying to advance to the next node. You must
get the address of the next node before deleting the current one.
5. const Correctness (LinkedQueue::front()):
o const T& front() const: The first const (on const T&) means the returned reference is to
a constant T, so the caller cannot modify the front element through this reference. The
second const (after the parentheses) means this member function itself does not modify
any member variables of the LinkedQueue object (it's a "read-only" operation on the
queue).
o T& front(): This non-const version returns a (non-const) reference, allowing the caller to
modify the data of the front element directly. This version cannot be called on a const
LinkedQueue object.
o Benefit: const correctness allows the compiler to enforce that read-only operations
don't accidentally change data. It also enables front() to be called on const LinkedQueue
objects or references, which is important in many C++ contexts (e.g., passing queues by
const reference to functions).
o Logic: If empty, throw. Otherwise, the actual last element is at index (rear_idx_ - 1 +
capacity_) % capacity_.
o Time Complexity: O(1).
// Conceptual implementation for CircularArrayQueue
const T& rear() const {
    if (isEmpty()) {
        throw std::runtime_error("Queue is empty, no rear element.");
    }
    // rear_idx_ points to the NEXT empty slot.
    // So, the actual last element is one position "before" rear_idx_ (circularly).
    size_t last_element_idx = (rear_idx_ == 0) ? (capacity_ - 1) : (rear_idx_ - 1);
    // More robustly:
    // size_t last_element_idx = (rear_idx_ + capacity_ - 1) % capacity_;
    return data_[last_element_idx];
}
o The hint is the core algorithm. The queue stores strings. Start with "1". Dequeue "1",
print it, enqueue "10", "11". Dequeue "10", print it, enqueue "100", "101", etc.
3. Reverse First K Elements of a Queue:
o Dequeue the first k elements from the queue and push them onto a temporary stack.
o Dequeue the remaining N-k elements from the queue and store them in another
temporary queue (or just re-enqueue them at the back of the original queue if it's
empty now).
o Enqueue elements from the stack back into the original queue (they will now be in
reversed order at the front).
o Enqueue elements from the temporary queue (the N-k elements) back into the original
queue.
2. The tail_ Pointer (Linked List Queue): Without a tail pointer, every enqueue means walking the
entire list from head just to find the end — an O(N) traversal that is terribly inefficient.
The tail pointer is like a bouncer who stands at the very end of the line. When a new person
arrives, you just tell the bouncer, and they can add the person in O(1) time. It's the difference
between a quick tap on the shoulder and a full-on expedition.
3. Edge Cases (LinkedQueue):
enqueue on an Empty Queue: This is the seed that starts the whole list. If you don't handle it,
nothing works. When the queue is empty, head and tail are both nullptr. The new node you
create needs to become both the head and the tail. If you only set head, tail remains nullptr, and
the next enqueue will crash.
dequeue on a Single-Element Queue: This is the opposite problem. When you remove the last
element, head correctly becomes nullptr (because head->next was null). But what about tail? It's
still pointing to the memory of the node you just deleted! This is a classic dangling pointer. If
you don't also set tail to nullptr, the next enqueue will try to access that freed memory and your
program will explode. Emptying the queue means both head and tail must be nullified.
4. Memory Management (delete):
What if you forget delete in the destructor?
You get a memory leak, plain and simple. Every ListNode you created with new lives on the
heap. If the LinkedQueue object is destroyed but you don't walk through the list and delete
every node, that memory is lost forever. It's like the queue disbands, but every person in line
just stands there, occupying space, for the rest of the program's life. Do this enough times, and
your program runs out of memory.
Why current = current->next; before delete?
Think of it as grabbing onto the next person's hand before you let go of the current person. If
you delete current; first, you've just blown up your only link to the rest of the list. current is now
a dangling pointer. Trying to access current->next after that is like asking a ghost for directions—
it's undefined behavior and will almost certainly crash your program.
5. const Correctness (Why have two front() methods?):
This is a key concept in writing good, safe C++.
const T& front() const;: The second const is a promise: "I swear this function will not change the
queue in any way." This allows you to call .front() on a const queue object. The first const is the
result of that promise: "Since I'm not changing anything, I'm only going to give you a const
reference back, so you can't change the data either."
T& front();: This is the non-const version. It doesn't make that promise. It says, "You can call me,
and I'll give you a regular reference back, which you can use to modify the data at the front of
the queue." But the compiler will only let you call this on a non-const queue object.
Benefit: It's all about safety and flexibility. It lets the compiler be your bodyguard, preventing
you from accidentally modifying a queue that was supposed to be read-only.
The same rear() function using the robust wrap-around trick:
const T& rear() const {
    if (isEmpty()) {
        throw std::runtime_error("Queue is empty, no rear element.");
    }
// The most robust way to go back one step in a circular array
// is to add (capacity - 1) and then take the modulus.
// This avoids issues with negative numbers if rear_idx_ is 0.
size_t last_element_idx = (rear_idx_ + capacity_ - 1) % capacity_;
return data_[last_element_idx];
}
And the dequeue() bookkeeping from the edge-case answer, with the departing node captured before head_ moves on:
ListNode* node_to_delete = head_;
head_ = head_->next;
delete node_to_delete;
count_--;
if (head_ == nullptr) {
tail_ = nullptr;
}
Move Assignment: This is similar, but you have to clean up your own resources first.
C++ Code:
LinkedQueue& operator=(LinkedQueue&& other) noexcept {
    if (this != &other) { // Protect against self-assignment (q = std::move(q))
        this->clear(); // Delete all of our own nodes.
        // Steal the other queue's resources...
        head_ = other.head_;
        tail_ = other.tail_;
        count_ = other.count_;
        // ...and leave the donor in a valid, empty state.
        other.head_ = nullptr;
        other.tail_ = nullptr;
        other.count_ = 0;
    }
    return *this;
}
A queue is perfect for simulating this. You enqueue all the people (numbers 1 to N). Then you start a big
loop: while (q.size() > 1). Inside, you have a smaller loop that runs k-1 times, doing
q.enqueue(q.dequeue()). This moves the first k-1 people to the back of the line. After that inner
loop, you just call q.dequeue() one more time to eliminate the k-th person. The last number left in
the queue is the survivor.
2. Print Binary Numbers:
The queue here acts as a "to-do" list of numbers to process.
1. Start with a queue of strings: q.enqueue("1").
2. Loop n times:
a. current = q.dequeue().
b. Print current.
c. q.enqueue(current + "0").
d. q.enqueue(current + "1").
The FIFO nature of the queue guarantees that you will generate and print the numbers in the
correct sequence, level by level.
3. Reverse First K Elements:
This is a great problem for showing how stacks and queues are opposites.
1. Create an empty std::stack.
2. Loop k times: s.push(q.dequeue()). This pulls the first k elements off the queue and
puts them on the stack, reversing their order.
3. Now, loop k times: q.enqueue(s.top()); s.pop(). This takes the reversed elements
from the stack and puts them back onto the end of the queue.
4. Finally, to get the remaining N-k elements back to the end, loop q.size() - k times:
q.enqueue(q.dequeue()). This just cycles the back part of the queue around to the
end.
In this section, we'll dissect common interview problems that can be solved elegantly with queues. For
each problem, we'll follow our standard approach: understanding the problem, brainstorming solutions,
implementing the code, analyzing its complexity, and considering follow-up questions.
You must use only standard queue operations on your two queues: push (to back), pop (from front),
front (peek), empty, and size.
Clarification Questions (What a Candidate Should Ask):
Underlying Container: "Can I use the C++ STL's std::queue for this implementation?"
o Interviewer's Likely Response: "Yes, std::queue is perfectly fine."
Empty Stack Behavior: "What should pop() or top() do if the stack is empty? Should they throw
an exception, or can I assume valid calls will always be made?"
o Interviewer's Likely Response: "Good question. Let's say it should throw a standard
exception like std::runtime_error."
Performance Expectations: "Are there any specific time complexity goals for the operations?
For example, is it okay if one operation is slower than the others?"
o Interviewer's Likely Response: "That's the key trade-off to consider. Discuss the
complexities of your chosen approach."
These questions show you're thinking about the API contract, error handling, and performance trade-
offs, which are all hallmarks of a good engineer.
Alright, let's talk about one of my favorite interview brain-teasers: making a stack out of queues. This
problem is great because it seems impossible at first. How can you get LIFO (Last-In, First-Out) behavior
out of a structure that's built from the ground up to be FIFO (First-In, First-Out)?
The key is that they give you two queues. One queue is our main line, and the other is our temporary
holding area, a tool we can use to shuffle people around and reverse their order.
This leads us to two possible strategies, and it all comes down to a classic trade-off: when do you want
to pay the price? Do you do the hard work when you push, or do you wait and do it when you pop?
Strategy 1: Pay Upfront (Expensive Push)
The idea here is to make sure our "top" element is always at the front of q1. This makes pop and top
super fast O(1) operations.
To push(x):
1. Add the new element x to the empty helper queue, q2.
2. Now, painstakingly move every single element from q1 over to q2, one by one.
3. Finally, swap q1 and q2. Now q1 holds [x, old_element1, old_element2, ...].
The Cost: This is an O(N) push. Every single time you add an element, you have to shuffle the
entire contents of the stack. Ouch.
Strategy 2: Pay on Demand (Expensive Pop) - My Preferred Method
This is usually the better approach. Let's make push as fast as possible and only do the hard work when
someone actually asks to pop or see the top.
To push(x): Just q1.push(x). Done. It's a beautiful O(1).
To pop():
1. The element we want is at the back of q1. To get to it, we have to move everyone else
out of the way.
2. We dequeue N-1 elements from q1 and enqueue them into q2.
3. The single, lonely element left in q1 is our "top". We just dequeue it and throw it away.
4. Now, q2 has all the remaining elements in the correct order. So, we swap q1 and q2,
and q2 becomes our new primary queue.
The Cost: This is an O(N) pop. We're paying the price here instead of during the push.
In most real-world scenarios, you tend to push to a stack more often than you pop from it, so making the
push fast is usually the right call. Let's code up that second strategy.
// A stack built from two std::queues (Strategy 2: expensive pop).
template <typename T>
class StackFromQueues {
private:
    std::queue<T> q1; // The primary line; its back is the stack's "top".
    std::queue<T> q2; // Temporary holding area for the shuffle.
public:
StackFromQueues() {}
// push is the easy part. Just add to the back of our main line.
// Time Complexity: O(1)
void push(const T& value) {
q1.push(value);
}
// pop is where we pay the price: shuffle everyone else aside to reach
// the back of q1, which is our stack's "top".
// Time Complexity: O(N)
T pop() {
    if (q1.empty()) {
        throw std::runtime_error("Stack is empty");
    }
    while (q1.size() > 1) { // Move N-1 elements to the helper queue.
        q2.push(q1.front());
        q1.pop();
    }
    T top_value = q1.front(); // The lonely survivor is our top.
    q1.pop();
    std::swap(q1, q2); // q2 now holds the stack; make it primary again.
    return top_value;
}
// top() works the same way, then re-enqueues the element to preserve state.
T top() {
    T value = pop();
    push(value);
    return value;
}
// The utility functions are easy, they just operate on our primary queue.
bool empty() const {
return q1.empty();
}
};
Time Complexity:
o push(x): O(1). This operation is a single enqueue onto q1, which is a constant-time
operation for std::queue.
o pop(): O(N). To remove the last element, we must perform N-1 dequeue/enqueue
operations to transfer elements to the auxiliary queue. The total work is proportional to
the number of elements in the stack.
o top(): O(N). Similar to pop(), this operation also requires moving N-1 elements to
identify the top element. It then has to move the last element as well to preserve state,
making it O(N).
o empty() and size(): O(1). These just query the state of the primary queue (q1).
Space Complexity:
o O(N). In the worst case, all N elements are stored within the two queues. The space
required is proportional to the total number of elements pushed onto the stack.
Variations and Follow-up Questions:
1. Implement the other strategy.
o Problem: Now, implement the stack where the push operation is O(N) and pop/top are
O(1).
o Hint: Follow the logic from our brainstorming session. This is a great way to test a
candidate's ability to analyze and implement different trade-offs.
2. Using a Single Queue.
o Problem: Can you implement a stack using only one queue?
o Hint: Yes. For push(x), first enqueue x. Then, for q.size() - 1 times, perform a "rotation":
q.push(q.front()); q.pop();. This brings the newly added x to the front of the queue. Now,
pop and top can operate on the front in O(1). This makes push O(N).
This is a great problem. It's not just about storing data; it's about tracking a changing state over time.
The stream is the key word here—we get characters one by one and have to have an answer ready at
any moment.
The "Aha!" Moment: We have two competing needs. We need to know how many times we've seen a
character (a job for a hash map), and we need to know the order in which the unique characters first
appeared (a job for a queue). We need to use both!
The Strategy:
1. We'll use a hash map (like std::unordered_map<char, int>) to keep a running tally of every
character we've ever seen. This is our frequency counter.
2. We'll use a std::queue<char> as our "line" of potential candidates. We only add a character to
this queue the first time we see it. This queue remembers the original arrival order of all the
characters that might be the first non-repeater.
3. When someone asks for the getFirstNonRepeating() character, we look at the front of our
candidate queue. Is the character at the front still a non-repeater? We check our hash map. If its
count is 1, great! That's our answer. If its count is greater than 1, it's been disqualified. We
dequeue it and look at the next person in line, repeating this until we find a valid candidate or
the queue is empty.
class FirstNonRepeating {
private:
// This map is our memory. It tracks how many times we've seen every character.
std::unordered_map<char, int> frequency_counts;
// Our line of candidate characters, in first-arrival order.
std::queue<char> candidates;
public:
// This is called for every character that flows in from the stream.
void add(char c) {
// First, update our frequency count for this character.
frequency_counts[c]++;
// The first time we see a character, it joins the candidate line.
if (frequency_counts[c] == 1) {
candidates.push(c);
}
}
// Returns the first non-repeating character seen so far, if any.
std::optional<char> getFirstNonRepeating() {
while (!candidates.empty()) {
// Let's peek at the person at the front of the line.
char front_char = candidates.front();
// Still unique? Then they're our answer.
if (frequency_counts[front_char] == 1) {
return front_char;
}
// Disqualified (repeated): remove and try the next candidate.
candidates.pop();
}
return std::nullopt; // Everyone in the stream repeats.
}
};
int main() {
FirstNonRepeating stream_processor;
stream_processor.add('a'); // Stream: a
if (auto c = stream_processor.getFirstNonRepeating()) std::cout << " -> First non-repeating: " << *c << "\n";
stream_processor.add('b'); // Stream: ab
if (auto c = stream_processor.getFirstNonRepeating()) std::cout << " -> First non-repeating: " << *c << "\n";
Complexity: add is a beautiful O(1) (hash map access is average O(1)). getFirstNonRepeating is amortized
O(1). Why amortized? Because while the while loop could run multiple times, each character is only ever
pushed onto the queue once and popped from the queue once over the entire lifetime of the stream. Its
cost is spread out.
The "Aha!" Moment: We need a way to keep track of the "best candidates" for being the maximum in
any given window. We don't care about smaller numbers that come after a bigger number within the
same window. For example, if we see [... 10, 3, 5 ...], we know 3 and 5 can never be the maximum as long
as 10 is still in the window.
The Strategy:
We'll use a std::deque to store the indices of the elements in our array. This deque will be our list of
"useful" candidates, and we will maintain one crucial invariant: the values at the indices in the deque
will always be in decreasing order.
Here's how we process each element arr[i]:
1. Clean up the back: Look at the last index in the deque. Is the value at that index smaller than
our current value arr[i]? If so, it's now useless. It can never be the maximum again because our
new, bigger arr[i] has arrived. Pop it from the back of the deque. Keep doing this until the deque
is empty or the last element is greater than or equal to arr[i].
2. Add the new guy: Push the index i to the back of the deque. Now our deque is still sorted by
value.
3. Clean up the front: Look at the index at the front of the deque. Has it fallen out of our current
window? (i.e., is candidates.front() <= i - k?). If so, it's too old. Pop it from the front.
4. The answer: After all that cleanup, the index at the very front of the deque always points to the
maximum element for the current window.
void print_sliding_window_max(const std::vector<int>& arr, int k) {
    std::cout << "Finding max in windows of size " << k << " for array: ";
    for (int num : arr) std::cout << num << " ";
    std::cout << "\n";
    std::deque<int> candidates; // holds indices; their values stay in decreasing order
    std::vector<int> results;
    for (int i = 0; i < static_cast<int>(arr.size()); ++i) {
        // --- Step 1: Clean up the back: smaller values can never be a maximum again ---
        while (!candidates.empty() && arr[candidates.back()] < arr[i]) {
            candidates.pop_back();
        }
        // --- Step 2: Add the current element's index to the back ---
        candidates.push_back(i);
        // --- Step 3: Clean up the front: evict an index that fell out of the window ---
        if (candidates.front() <= i - k) {
            candidates.pop_front();
        }
        // --- Step 4: The answer for this window is at the front! ---
        // We only start recording results once we have a full window.
        if (i >= k - 1) {
            results.push_back(arr[candidates.front()]);
        }
    }
    std::cout << " -> Window maximums: ";
    for (int m : results) std::cout << m << " ";
    std::cout << "\n";
}
int main() {
    print_sliding_window_max({1, 3, -1, -3, 5, 3, 6, 7}, 3);
    std::cout << "\n";
    print_sliding_window_max({9, 10, 9, -7, -4, -8, 2, -6}, 5);
}
Complexity: This looks complicated, but it's a brilliant O(N). Why? Because each index from the input
array is pushed onto the deque at most once and popped from the deque at most once. All the work
inside the loop is amortized over the N elements, giving us a linear time solution. The space complexity
is O(k) because, in the worst case, the deque holds k indices (for a strictly decreasing sequence of
numbers).
8.16 Key Terminology
Circular Array (Circular Buffer): An array-based implementation technique where the array's
indices wrap around from the end back to the beginning, typically using the modulo operator, to
efficiently implement a queue.
front_idx_ (or head_idx): In a circular array queue, the index pointing to the actual front
element.
rear_idx_ (or tail_idx): In a circular array queue, the index pointing to the next available empty
slot where a new element will be enqueued.
count_ (or size_): An explicit counter for the number of elements in a queue, particularly useful
for circular array queues to distinguish empty from full states.
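The three fields above work together; here is a minimal sketch of that interaction (the class name TinyCircularQueue and the fixed capacity are our own illustrative choices, not a production container):

```cpp
#include <array>
#include <cstddef>

// A tiny fixed-capacity circular queue illustrating how front_idx_, rear_idx_,
// and count_ cooperate. Both indices wrap around with the modulo operator.
template <typename T, std::size_t Capacity>
class TinyCircularQueue {
    std::array<T, Capacity> data_{};
    std::size_t front_idx_ = 0; // index of the current front element
    std::size_t rear_idx_ = 0;  // index of the next free slot
    std::size_t count_ = 0;     // distinguishes empty (0) from full (Capacity)
public:
    bool enqueue(const T& item) {
        if (count_ == Capacity) return false;     // full: front_idx_ == rear_idx_ too!
        data_[rear_idx_] = item;
        rear_idx_ = (rear_idx_ + 1) % Capacity;   // wrap around past the end
        ++count_;
        return true;
    }
    bool dequeue(T& out) {
        if (count_ == 0) return false;            // empty: also front_idx_ == rear_idx_
        out = data_[front_idx_];
        front_idx_ = (front_idx_ + 1) % Capacity; // wrap around past the end
        --count_;
        return true;
    }
    std::size_t size() const { return count_; }
};
```

Note that when the queue is completely empty and when it is completely full, front_idx_ and rear_idx_ are equal in both cases; it is count_ that tells the two states apart.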
Container Adaptor (std::queue): An STL component that provides a queue interface by
wrapping an underlying sequence container (like std::deque).
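A quick illustration: the second template parameter of std::queue selects the backing container, and the queue interface stays identical either way (the helper function here is our own, for demonstration only):

```cpp
#include <deque>
#include <list>
#include <queue>

// std::queue is a container adaptor: it wraps a sequence container and exposes
// only the queue operations (push, pop, front, back, empty, size).
using DefaultQueue = std::queue<int>;                 // backed by std::deque<int> by default
using ListQueue    = std::queue<int, std::list<int>>; // same interface, different backing store

// Works with any backing container, because the adaptor's interface is the same.
template <typename Q>
int push_then_front(Q& q, int value) {
    q.push(value);
    return q.front();
}
```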
Producer-Consumer Pattern: A common concurrent programming pattern where one or more
"producers" generate data/tasks and place them into a shared queue, and one or more
"consumers" retrieve and process data/tasks from that queue, decoupling the two.
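The pattern can be sketched with a std::queue guarded by a mutex and a condition variable; this is a minimal single-producer/single-consumer illustration (the names SharedQueue and produce_and_consume are our own, and real systems also need a shutdown mechanism):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// A minimal thread-safe queue: the producer pushes, the consumer blocks until data arrives.
class SharedQueue {
    std::queue<int> items_;
    std::mutex mtx_;
    std::condition_variable cv_;
public:
    void push(int value) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            items_.push(value);
        }
        cv_.notify_one(); // wake one waiting consumer
    }
    int pop() {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !items_.empty(); }); // sleep until work exists
        int value = items_.front();
        items_.pop();
        return value;
    }
};

// One producer enqueues the values 1..n; one consumer drains the queue and sums them.
int produce_and_consume(int n) {
    SharedQueue q;
    int sum = 0;
    std::thread consumer([&] {
        for (int i = 0; i < n; ++i) sum += q.pop();
    });
    std::thread producer([&] {
        for (int i = 1; i <= n; ++i) q.push(i);
    });
    producer.join();
    consumer.join();
    return sum;
}
```

The key decoupling is visible here: the producer never waits for the consumer to be ready, and the consumer never busy-polls; the queue and condition variable mediate between them.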
8.17 List Resources, Books, and References (for Queues & General DSA)
Mastering data structures and algorithms is an ongoing journey. Here are some highly recommended
resources if you wish to delve deeper into queues, explore other implementations, or broaden your
understanding of DSA in general.
Classic and Foundational DSA Textbooks:
"Introduction to Algorithms" (CLRS) by Cormen, Leiserson, Rivest, and Stein:
o The definitive reference for many algorithms. Chapter 10 covers elementary data
structures including queues, often with pseudocode and rigorous analysis.
"Algorithms" (4th Edition) by Robert Sedgewick and Kevin Wayne:
o Excellent explanations with a practical focus, often with Java examples. Section 1.3
covers Bags, Queues, and Stacks with clear implementations and applications.
"Data Structures and Algorithm Analysis in C++" by Mark Allen Weiss:
o A popular textbook that balances theory with C++ implementations. Queues are typically
covered in chapters on linear structures.
Modern C++ and STL Focused Books:
"The C++ Standard Library: A Tutorial and Reference" by Nicolai M. Josuttis:
o An indispensable guide to the C++ STL. Its chapters on sequence containers (std::deque)
and container adaptors (std::queue) provide detailed information on usage and nuances.
"Effective Modern C++" by Scott Meyers:
o While not a DSA book, it's essential for writing high-quality modern C++. Understanding
items on smart pointers, move semantics, and lambdas will improve your DSA
implementations.
Online Resources & Communities:
cppreference.com: The go-to online reference for C++ language and library features, including
detailed documentation for std::queue and std::deque.