1040 Intermediate

1040-queue-via-deque — Queue Using VecDeque

Functional Programming

Tutorial

The Problem

A queue (FIFO — first in, first out) is fundamental to breadth-first search, task scheduling, message passing, and producer-consumer patterns. Implementing a queue efficiently requires O(1) enqueue at one end and O(1) dequeue at the other.

Vec<T> with push at the back and remove(0) at the front is O(n) dequeue — each removal shifts all remaining elements. VecDeque<T>, backed by a ring buffer, provides O(1) at both ends and is the correct queue implementation in Rust.

🎯 Learning Outcomes

  • Understand why Vec::remove(0) is O(n) and unsuitable for queues
  • Use VecDeque::push_back and pop_front for O(1) FIFO operations
  • Implement a queue wrapper with ergonomic enqueue/dequeue methods
  • Apply a queue to BFS traversal
  • Know when to use VecDeque directly vs a wrapper

    Code Example

    #![allow(clippy::all)]
    // 1040: Queue Using VecDeque
    // FIFO queue: push_back, pop_front — both O(1)
    
    use std::collections::VecDeque;
    
    /// Queue wrapper (optional — VecDeque is already a queue)
    struct Queue<T> {
        inner: VecDeque<T>,
    }
    
    impl<T> Queue<T> {
        fn new() -> Self {
            Queue {
                inner: VecDeque::new(),
            }
        }
    
        fn enqueue(&mut self, value: T) {
            self.inner.push_back(value);
        }
    
        fn dequeue(&mut self) -> Option<T> {
            self.inner.pop_front()
        }
    
        fn peek(&self) -> Option<&T> {
            self.inner.front()
        }
    
        fn is_empty(&self) -> bool {
            self.inner.is_empty()
        }
    
        fn len(&self) -> usize {
            self.inner.len()
        }
    }
    
    fn basic_queue() {
        let mut q = Queue::new();
        q.enqueue(1);
        q.enqueue(2);
        q.enqueue(3);
    
        assert_eq!(q.len(), 3);
        assert_eq!(q.peek(), Some(&1));
        assert_eq!(q.dequeue(), Some(1));
        assert_eq!(q.dequeue(), Some(2));
        assert_eq!(q.dequeue(), Some(3));
        assert!(q.is_empty());
    }
    
    /// VecDeque directly as a queue
    fn vecdeque_as_queue() {
        let mut q: VecDeque<&str> = VecDeque::new();
        q.push_back("first");
        q.push_back("second");
        q.push_back("third");
    
        assert_eq!(q.front(), Some(&"first"));
        assert_eq!(q.pop_front(), Some("first"));
        assert_eq!(q.pop_front(), Some("second"));
        assert_eq!(q.len(), 1);
    }
    
    /// BFS with level tracking using queue
    fn bfs_levels(adjacency: &[Vec<usize>], start: usize) -> Vec<Vec<usize>> {
        let n = adjacency.len();
        let mut visited = vec![false; n];
        let mut queue: VecDeque<(usize, usize)> = VecDeque::new();
        let mut levels: Vec<Vec<usize>> = Vec::new();
    
        visited[start] = true;
        queue.push_back((start, 0));
    
        while let Some((node, level)) = queue.pop_front() {
            // Extend levels vec if needed
            while levels.len() <= level {
                levels.push(Vec::new());
            }
            levels[level].push(node);
    
            for &neighbor in &adjacency[node] {
                if !visited[neighbor] {
                    visited[neighbor] = true;
                    queue.push_back((neighbor, level + 1));
                }
            }
        }
        levels
    }
    
    fn bfs_test() {
        let adj = vec![
            vec![1, 2], // 0 -> 1, 2
            vec![3],    // 1 -> 3
            vec![3],    // 2 -> 3
            vec![],     // 3 -> (none)
        ];
    
        let levels = bfs_levels(&adj, 0);
        assert_eq!(levels[0], vec![0]);
        assert_eq!(levels[1], vec![1, 2]);
        assert_eq!(levels[2], vec![3]);
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
    
        #[test]
        fn test_basic() {
            basic_queue();
        }
    
        #[test]
        fn test_vecdeque() {
            vecdeque_as_queue();
        }
    
        #[test]
        fn test_bfs() {
            bfs_test();
        }
    
        #[test]
        fn test_empty_dequeue() {
            let mut q: Queue<i32> = Queue::new();
            assert_eq!(q.dequeue(), None);
            assert_eq!(q.peek(), None);
        }
    }

    Key Differences

  • Backing structure: Rust's VecDeque is a ring buffer (cache-friendly, contiguous memory); OCaml's Queue is a mutable linked list.
  • Method names: Rust uses push_back/pop_front; OCaml uses add/take (with push/pop as synonyms).
  • Capacity hints: Rust's VecDeque::with_capacity(n) pre-allocates; OCaml's Queue.create() starts empty with no pre-allocation hint.
  • Index access: Rust's VecDeque supports O(1) index access; OCaml's Queue is sequential-access only.

    OCaml Approach

    OCaml's standard Queue module provides a mutable FIFO queue backed by a singly-linked list with a tail pointer (O(1) add at the back):

    let q = Queue.create ()
    Queue.add 1 q  (* enqueue *)
    Queue.push 2 q  (* also enqueue — Queue.push is an alias *)
    let x = Queue.pop q  (* dequeue, raises Empty if empty *)
    let y = Queue.take_opt q  (* dequeue returning option *)
    

    OCaml's Queue.add is O(1); Queue.pop is O(1). The linked-list backing means more memory overhead per element than Rust's ring buffer.


    Deep Comparison

    Queue Using VecDeque — Comparison

    Core Insight

    FIFO queues need efficient operations at both ends. Rust's VecDeque (ring buffer) provides O(1) for both. OCaml offers a mutable Queue module or the functional two-list queue with amortized O(1) dequeue.

    OCaml Approach

  • Queue module: mutable, push/pop/peek
  • Functional alternative: { inbox; outbox } two-list queue
  • Amortized O(1) via lazy reversal when outbox empties
  • BFS naturally uses Queue.push/Queue.pop

    Rust Approach

  • VecDeque<T>: push_back (enqueue), pop_front (dequeue)
  • front() for peek without removal
  • Ring buffer — true O(1), not amortized
  • Optional wrapper struct for semantic clarity
  • No functional queue in std — use the im crate for persistent queues

    Comparison Table

    Feature         | OCaml (Queue)  | Rust (VecDeque)
    Enqueue         | Queue.push     | push_back
    Dequeue         | Queue.pop      | pop_front
    Peek            | Queue.peek     | front()
    Complexity      | O(1)           | O(1)
    Mutability      | Mutable        | Mutable
    Implementation  | Linked list    | Ring buffer
    Functional alt  | Two-list queue | None in std

    Exercises

  • Implement a bounded queue BoundedQueue<T> that blocks (returns Err) when full, suitable for backpressure in a pipeline.
  • Write a level-order tree traversal using the queue.
  • Implement Queue::drain_all<F: Fn(T)>(&mut self, f: F) that processes and removes all elements in FIFO order.