1040-queue-via-deque — Queue Using VecDeque
Tutorial
The Problem
A queue (FIFO — first in, first out) is fundamental to breadth-first search, task scheduling, message passing, and producer-consumer patterns. Implementing a queue efficiently requires O(1) enqueue at one end and O(1) dequeue at the other.
Vec<T> with push at the back and remove(0) at the front gives O(n) dequeues — each removal shifts all remaining elements down by one. VecDeque<T>, backed by a growable ring buffer, provides amortized O(1) operations at both ends and is the idiomatic queue in Rust.
🎯 Learning Outcomes
- Vec::remove(0) is O(n) and unsuitable for queues
- VecDeque::push_back and pop_front for O(1) FIFO operations
- Wrapping VecDeque in a struct with enqueue/dequeue methods
- When to use VecDeque directly vs. a wrapper
Code Example
#![allow(clippy::all)]
// 1040: Queue Using VecDeque
// FIFO queue: push_back, pop_front — both O(1)
use std::collections::VecDeque;
/// Queue wrapper (optional — VecDeque is already a queue)
struct Queue<T> {
inner: VecDeque<T>,
}
impl<T> Queue<T> {
fn new() -> Self {
Queue {
inner: VecDeque::new(),
}
}
fn enqueue(&mut self, value: T) {
self.inner.push_back(value);
}
fn dequeue(&mut self) -> Option<T> {
self.inner.pop_front()
}
fn peek(&self) -> Option<&T> {
self.inner.front()
}
fn is_empty(&self) -> bool {
self.inner.is_empty()
}
fn len(&self) -> usize {
self.inner.len()
}
}
fn basic_queue() {
let mut q = Queue::new();
q.enqueue(1);
q.enqueue(2);
q.enqueue(3);
assert_eq!(q.len(), 3);
assert_eq!(q.peek(), Some(&1));
assert_eq!(q.dequeue(), Some(1));
assert_eq!(q.dequeue(), Some(2));
assert_eq!(q.dequeue(), Some(3));
assert!(q.is_empty());
}
/// VecDeque directly as a queue
fn vecdeque_as_queue() {
let mut q: VecDeque<&str> = VecDeque::new();
q.push_back("first");
q.push_back("second");
q.push_back("third");
assert_eq!(q.front(), Some(&"first"));
assert_eq!(q.pop_front(), Some("first"));
assert_eq!(q.pop_front(), Some("second"));
assert_eq!(q.len(), 1);
}
/// BFS with level tracking using queue
fn bfs_levels(adjacency: &[Vec<usize>], start: usize) -> Vec<Vec<usize>> {
let n = adjacency.len();
let mut visited = vec![false; n];
let mut queue: VecDeque<(usize, usize)> = VecDeque::new();
let mut levels: Vec<Vec<usize>> = Vec::new();
visited[start] = true;
queue.push_back((start, 0));
while let Some((node, level)) = queue.pop_front() {
// Extend levels vec if needed
while levels.len() <= level {
levels.push(Vec::new());
}
levels[level].push(node);
for &neighbor in &adjacency[node] {
if !visited[neighbor] {
visited[neighbor] = true;
queue.push_back((neighbor, level + 1));
}
}
}
levels
}
fn bfs_test() {
let adj = vec![
vec![1, 2], // 0 -> 1, 2
vec![3], // 1 -> 3
vec![3], // 2 -> 3
vec![], // 3 -> (none)
];
let levels = bfs_levels(&adj, 0);
assert_eq!(levels[0], vec![0]);
assert_eq!(levels[1], vec![1, 2]);
assert_eq!(levels[2], vec![3]);
}
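A common variant of the level-tracking BFS above (a sketch, not part of the original listing) avoids storing a level with each node: since everything currently in the queue belongs to one level, it drains the queue in batches of its current length. `bfs_levels_batched` is a hypothetical name.

```rust
use std::collections::VecDeque;

/// Level-order BFS without (node, level) pairs: drain the queue one
/// full level at a time using its length at the start of each pass.
fn bfs_levels_batched(adjacency: &[Vec<usize>], start: usize) -> Vec<Vec<usize>> {
    let mut visited = vec![false; adjacency.len()];
    let mut queue = VecDeque::new();
    let mut levels = Vec::new();
    visited[start] = true;
    queue.push_back(start);
    while !queue.is_empty() {
        // `0..queue.len()` is evaluated once, so neighbors pushed
        // during this pass are left for the next level.
        let mut level = Vec::with_capacity(queue.len());
        for _ in 0..queue.len() {
            let node = queue.pop_front().unwrap();
            level.push(node);
            for &neighbor in &adjacency[node] {
                if !visited[neighbor] {
                    visited[neighbor] = true;
                    queue.push_back(neighbor);
                }
            }
        }
        levels.push(level);
    }
    levels
}
```

It produces the same result as bfs_levels on the diamond graph used in bfs_test.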
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_basic() {
basic_queue();
}
#[test]
fn test_vecdeque() {
vecdeque_as_queue();
}
#[test]
fn test_bfs() {
bfs_test();
}
#[test]
fn test_empty_dequeue() {
let mut q: Queue<i32> = Queue::new();
assert_eq!(q.dequeue(), None);
assert_eq!(q.peek(), None);
}
}
Key Differences
- VecDeque is a ring buffer (cache-friendly, contiguous memory); OCaml's Queue is a singly-linked list.
- Rust uses push_back/pop_front; OCaml uses add/pop or the aliases push/take.
- VecDeque::with_capacity(n) pre-allocates; OCaml's Queue.create () starts empty with no pre-allocation hint.
- VecDeque supports O(1) index access; OCaml's Queue is sequential-access only.
OCaml Approach
OCaml's standard Queue module provides a mutable FIFO queue backed by a singly-linked list:
let q = Queue.create ()
Queue.add 1 q (* enqueue *)
Queue.push 2 q (* also enqueue — Queue.push is an alias *)
let x = Queue.pop q (* dequeue, raises Empty if empty *)
let y = Queue.take_opt q (* dequeue returning option *)
OCaml's Queue.add is O(1); Queue.pop is O(1). The linked-list backing means more memory overhead per element than Rust's ring buffer.
Full Source
#![allow(clippy::all)]
// 1040: Queue Using VecDeque
// FIFO queue: push_back, pop_front — both O(1)
use std::collections::VecDeque;
/// Queue wrapper (optional — VecDeque is already a queue)
struct Queue<T> {
inner: VecDeque<T>,
}
impl<T> Queue<T> {
fn new() -> Self {
Queue {
inner: VecDeque::new(),
}
}
fn enqueue(&mut self, value: T) {
self.inner.push_back(value);
}
fn dequeue(&mut self) -> Option<T> {
self.inner.pop_front()
}
fn peek(&self) -> Option<&T> {
self.inner.front()
}
fn is_empty(&self) -> bool {
self.inner.is_empty()
}
fn len(&self) -> usize {
self.inner.len()
}
}
fn basic_queue() {
let mut q = Queue::new();
q.enqueue(1);
q.enqueue(2);
q.enqueue(3);
assert_eq!(q.len(), 3);
assert_eq!(q.peek(), Some(&1));
assert_eq!(q.dequeue(), Some(1));
assert_eq!(q.dequeue(), Some(2));
assert_eq!(q.dequeue(), Some(3));
assert!(q.is_empty());
}
/// VecDeque directly as a queue
fn vecdeque_as_queue() {
let mut q: VecDeque<&str> = VecDeque::new();
q.push_back("first");
q.push_back("second");
q.push_back("third");
assert_eq!(q.front(), Some(&"first"));
assert_eq!(q.pop_front(), Some("first"));
assert_eq!(q.pop_front(), Some("second"));
assert_eq!(q.len(), 1);
}
/// BFS with level tracking using queue
fn bfs_levels(adjacency: &[Vec<usize>], start: usize) -> Vec<Vec<usize>> {
let n = adjacency.len();
let mut visited = vec![false; n];
let mut queue: VecDeque<(usize, usize)> = VecDeque::new();
let mut levels: Vec<Vec<usize>> = Vec::new();
visited[start] = true;
queue.push_back((start, 0));
while let Some((node, level)) = queue.pop_front() {
// Extend levels vec if needed
while levels.len() <= level {
levels.push(Vec::new());
}
levels[level].push(node);
for &neighbor in &adjacency[node] {
if !visited[neighbor] {
visited[neighbor] = true;
queue.push_back((neighbor, level + 1));
}
}
}
levels
}
fn bfs_test() {
let adj = vec![
vec![1, 2], // 0 -> 1, 2
vec![3], // 1 -> 3
vec![3], // 2 -> 3
vec![], // 3 -> (none)
];
let levels = bfs_levels(&adj, 0);
assert_eq!(levels[0], vec![0]);
assert_eq!(levels[1], vec![1, 2]);
assert_eq!(levels[2], vec![3]);
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_basic() {
basic_queue();
}
#[test]
fn test_vecdeque() {
vecdeque_as_queue();
}
#[test]
fn test_bfs() {
bfs_test();
}
#[test]
fn test_empty_dequeue() {
let mut q: Queue<i32> = Queue::new();
assert_eq!(q.dequeue(), None);
assert_eq!(q.peek(), None);
}
}
Deep Comparison
Queue Using VecDeque — Comparison
Core Insight
FIFO queues need efficient operations at both ends. Rust's VecDeque (ring buffer) provides O(1) for both. OCaml offers a mutable Queue module or the functional two-list queue with amortized O(1) dequeue.
OCaml Approach
- Queue module: mutable, with push/pop/peek
- { inbox; outbox } two-list queue as a functional alternative
- Queue.push/Queue.pop for enqueue/dequeue
Rust Approach
- VecDeque<T>: push_back (enqueue), pop_front (dequeue)
- front() for peek without removal
- im crate for persistent queues
Comparison Table
| Feature | OCaml (Queue) | Rust (VecDeque) |
|---|---|---|
| Enqueue | Queue.push | push_back |
| Dequeue | Queue.pop | pop_front |
| Peek | Queue.peek | front() |
| Complexity | O(1) | O(1) |
| Mutability | Mutable | Mutable |
| Implementation | Singly-linked list | Ring buffer |
| Functional alt | Two-list queue | None in std |
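The "two-list queue" in the last table row is the classic functional queue that OCaml's { inbox; outbox } pattern describes. A sketch of the same idea in Rust (TwoListQueue is a hypothetical name, not a std type):

```rust
/// Two-list queue: enqueue pushes onto `inbox`; dequeue pops from
/// `outbox`, refilling it by reversing `inbox` when it runs empty.
/// Each element is moved at most once, so operations are amortized O(1).
struct TwoListQueue<T> {
    inbox: Vec<T>,
    outbox: Vec<T>,
}

impl<T> TwoListQueue<T> {
    fn new() -> Self {
        TwoListQueue { inbox: Vec::new(), outbox: Vec::new() }
    }

    fn enqueue(&mut self, value: T) {
        self.inbox.push(value);
    }

    fn dequeue(&mut self) -> Option<T> {
        if self.outbox.is_empty() {
            // Move everything across, reversing the order once.
            while let Some(v) = self.inbox.pop() {
                self.outbox.push(v);
            }
        }
        self.outbox.pop()
    }
}
```

Unlike the OCaml version, this uses two Vecs rather than immutable lists, so it is mutable rather than persistent; a truly persistent Rust queue would come from a crate such as im.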
Exercises
1. Implement a BoundedQueue<T> that rejects enqueues (returns Err) when full, suitable for backpressure in a pipeline.
2. Implement Queue::drain_all<F: Fn(T)>(&mut self, f: F) that processes and removes all elements in FIFO order.