961: LRU Cache

Advanced · Functional Programming · Tutorial

The Problem

Implement an LRU (Least Recently Used) cache with a fixed capacity. When the cache is full, a put evicts the least recently accessed entry. Use a HashMap for O(1) lookup and a VecDeque as a recency queue; both get and put update the recency order.

🎯 Learning Outcomes

  • Combine HashMap<K, V> for O(1) key lookup with VecDeque<K> for recency tracking
  • Implement get(&mut self, key: &K) -> Option<&V> that promotes the key to the back (most-recently-used position) via retain
  • Implement put(key, value) that evicts the front of the deque (the LRU end) when at capacity
  • Understand the K: Eq + Hash + Clone bounds required for hash-map keys and deque membership
  • Recognize the O(n) cost of deque retain and contrast it with O(1) doubly-linked-list alternatives

Code Example

    #![allow(clippy::all)]
    // 961: LRU Cache
    // OCaml: Hashtbl + Queue; Rust: HashMap + VecDeque
    // Production versions are O(1) amortized; this simplified version pays
    // O(n) per operation for queue cleanup (retain).
    
    use std::collections::{HashMap, VecDeque};
    
    pub struct LruCache<K, V> {
        capacity: usize,
        table: HashMap<K, V>,
        order: VecDeque<K>,
    }
    
    impl<K: Eq + std::hash::Hash + Clone, V> LruCache<K, V> {
        pub fn new(capacity: usize) -> Self {
            assert!(capacity > 0, "capacity must be > 0");
            LruCache {
                capacity,
                table: HashMap::with_capacity(capacity),
                order: VecDeque::with_capacity(capacity),
            }
        }
    
        pub fn get(&mut self, key: &K) -> Option<&V> {
            if self.table.contains_key(key) {
                // Move to back (most recently used)
                self.order.retain(|k| k != key);
                self.order.push_back(key.clone());
                self.table.get(key)
            } else {
                None
            }
        }
    
        pub fn put(&mut self, key: K, value: V) {
            if self.table.contains_key(&key) {
                // Update: remove from order, re-add at back
                self.order.retain(|k| k != &key);
            } else if self.table.len() >= self.capacity {
                // Evict LRU (front of deque)
                if let Some(lru_key) = self.order.pop_front() {
                    self.table.remove(&lru_key);
                }
            }
            self.table.insert(key.clone(), value);
            self.order.push_back(key);
        }
    
        pub fn size(&self) -> usize {
            self.table.len()
        }
    
        pub fn contains(&self, key: &K) -> bool {
            self.table.contains_key(key)
        }
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
    
        #[test]
        fn test_basic_get_put() {
            let mut c: LruCache<&str, i32> = LruCache::new(3);
            c.put("a", 1);
            c.put("b", 2);
            c.put("c", 3);
    
            assert_eq!(c.get(&"a"), Some(&1));
            assert_eq!(c.get(&"b"), Some(&2));
            assert_eq!(c.size(), 3);
        }
    
        #[test]
        fn test_eviction() {
            let mut c: LruCache<&str, i32> = LruCache::new(3);
            c.put("a", 1);
            c.put("b", 2);
            c.put("c", 3);
    
            // Access "a" to make it recently used → order: b, c, a
            c.get(&"a");
            // Insert "d" → evicts "b" (front)
            c.put("d", 4);
    
            assert_eq!(c.size(), 3);
            assert_eq!(c.get(&"b"), None); // evicted
            assert_eq!(c.get(&"a"), Some(&1));
            assert_eq!(c.get(&"c"), Some(&3));
            assert_eq!(c.get(&"d"), Some(&4));
        }
    
        #[test]
        fn test_update_existing() {
            let mut c: LruCache<&str, i32> = LruCache::new(3);
            c.put("a", 1);
            c.put("b", 2);
            c.put("a", 99);
            assert_eq!(c.get(&"a"), Some(&99));
            assert_eq!(c.size(), 2);
        }
    
        #[test]
        fn test_capacity_one() {
            let mut c: LruCache<i32, &str> = LruCache::new(1);
            c.put(1, "one");
            c.put(2, "two"); // evicts 1
            assert_eq!(c.get(&1), None);
            assert_eq!(c.get(&2), Some(&"two"));
        }
    
        #[test]
        fn test_miss() {
            let mut c: LruCache<&str, i32> = LruCache::new(2);
            c.put("x", 10);
            assert_eq!(c.get(&"y"), None);
        }
    }

    Key Differences

    | Aspect              | Rust                          | OCaml                   |
    |---------------------|-------------------------------|-------------------------|
    | Lookup structure    | HashMap — O(1)                | Hashtbl or assoc list   |
    | Order structure     | VecDeque — O(n) retain        | List — O(n) filter      |
    | K: Clone bound      | Required for dual ownership   | GC shares automatically |
    | O(1) production     | lru crate (linked list + map) | External library needed |
    | assert! on capacity | Panics on capacity = 0        | Equivalent assert false |

    OCaml Approach

    module type LRU = sig
      type ('k, 'v) t
      val create : int -> ('k, 'v) t
      val get : ('k, 'v) t -> 'k -> 'v option
      val put : ('k, 'v) t -> 'k -> 'v -> unit
    end
    
    (* Simple association-list LRU (O(n) all operations, pedagogical) *)
    type ('k, 'v) t = { capacity : int; mutable entries : ('k * 'v) list }

    let create cap = { capacity = cap; entries = [] }
    
    let get cache key =
      match List.assoc_opt key cache.entries with
      | None -> None
      | Some v ->
        cache.entries <-
          (key, v) :: List.filter (fun (k, _) -> k <> key) cache.entries;
        Some v
    
    let put cache key value =
      let filtered = List.filter (fun (k, _) -> k <> key) cache.entries in
      let trimmed =
        if List.length filtered >= cache.capacity then
          List.filteri (fun i _ -> i < cache.capacity - 1) filtered
        else filtered
      in
      cache.entries <- (key, value) :: trimmed
    

    OCaml's association-list approach mirrors the deque approach but uses the list head as the MRU position. Both have O(n) update cost. For production, OCaml's Hashtbl combined with a Queue gives amortized O(1), provided stale queue entries are deleted lazily rather than filtered out on every access.


    Deep Comparison

    Core Insight

    LRU cache = hash map (fast lookup) + ordered queue (eviction order). Both OCaml and Rust use the same two-structure approach. The get operation must update recency — removing the key from the middle of the queue and reinserting at the back. Both use O(n) queue cleanup (sufficient for most uses; production LRU uses a doubly-linked list).

    OCaml Approach

  • Hashtbl.t + Queue.t in a record { capacity; table; order }
  • Queue.pop — dequeue front (LRU victim)
  • Queue.add k q — enqueue back (most recent)
  • remove_from_queue — filter queue to remove a key from the middle
  • Queue.iter + temporary queue for key removal
  • Mutable structure passed by reference (OCaml records are mutable when fields are marked mutable)

    Rust Approach

  • HashMap<K, V> + VecDeque<K> in a struct
  • VecDeque::pop_front() — remove LRU
  • VecDeque::push_back(key) — insert as MRU
  • order.retain(|k| k != key) — elegant middle removal (O(n))
  • K: Eq + Hash + Clone — trait bounds for HashMap + cloning into deque
  • &mut self on get — Rust makes the mutation explicit

    Comparison Table

    | Aspect                | OCaml                         | Rust                        |
    |-----------------------|-------------------------------|-----------------------------|
    | Hash store            | Hashtbl.t                     | HashMap<K, V>               |
    | Order queue           | Queue.t                       | VecDeque<K>                 |
    | Evict front           | Queue.pop q                   | order.pop_front()           |
    | Add to back           | Queue.add k q                 | order.push_back(k)          |
    | Remove from middle    | Manual filter with temp queue | order.retain(\|k\| k != key) |
    | Get mutates (recency) | Yes (mutable record fields)   | &mut self required          |
    | Generic types         | ('k, 'v) lru                  | LruCache<K, V>              |
    | Trait bounds          | None (structural)             | K: Eq + Hash + Clone        |

    Exercises

  • Implement len(&self) -> usize and is_empty(&self) -> bool.
  • Implement peek(&self, key: &K) -> Option<&V> that looks up without updating recency order.
  • Refactor to use a doubly linked list + HashMap for O(1) get/put (or use the lru crate and benchmark the difference).
  • Add evict_callback: Option<Box<dyn Fn(K, V)>> that fires when an entry is evicted.
  • Implement resize(new_capacity) that evicts entries from the LRU end until the cache fits the new capacity.