961: LRU Cache
Tutorial
The Problem
Implement an LRU (Least Recently Used) cache with a fixed capacity. When the cache is full, put evicts the least recently accessed entry. Use a HashMap for O(1) key lookup and a VecDeque as a recency queue; both get and put update the recency order.
🎯 Learning Outcomes
- HashMap<K, V> for O(1) key lookup, paired with VecDeque<K> for recency tracking
- get(&mut self, key: &K) -> Option<&V> that promotes the key to the back (most-recently-used position) via retain
- put(key, value) that evicts the front of the deque (the LRU end) when at capacity
- K: Eq + Hash + Clone bounds required for hash-map keys and deque membership
- retain cost, contrasted with the O(1) doubly-linked-list alternative
Code Example
#![allow(clippy::all)]
// 961: LRU Cache
// OCaml: Hashtbl + Queue; Rust: HashMap + VecDeque
// Hash lookups are O(1); the recency-queue cleanup is O(n) in this simplified version
use std::collections::{HashMap, VecDeque};
pub struct LruCache<K, V> {
capacity: usize,
table: HashMap<K, V>,
order: VecDeque<K>,
}
impl<K: Eq + std::hash::Hash + Clone, V> LruCache<K, V> {
pub fn new(capacity: usize) -> Self {
assert!(capacity > 0, "capacity must be > 0");
LruCache {
capacity,
table: HashMap::with_capacity(capacity),
order: VecDeque::with_capacity(capacity),
}
}
pub fn get(&mut self, key: &K) -> Option<&V> {
if self.table.contains_key(key) {
// Move to back (most recently used)
self.order.retain(|k| k != key);
self.order.push_back(key.clone());
self.table.get(key)
} else {
None
}
}
pub fn put(&mut self, key: K, value: V) {
if self.table.contains_key(&key) {
// Update: remove from order, re-add at back
self.order.retain(|k| k != &key);
} else if self.table.len() >= self.capacity {
// Evict LRU (front of deque)
if let Some(lru_key) = self.order.pop_front() {
self.table.remove(&lru_key);
}
}
self.table.insert(key.clone(), value);
self.order.push_back(key);
}
pub fn size(&self) -> usize {
self.table.len()
}
pub fn contains(&self, key: &K) -> bool {
self.table.contains_key(key)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_basic_get_put() {
let mut c: LruCache<&str, i32> = LruCache::new(3);
c.put("a", 1);
c.put("b", 2);
c.put("c", 3);
assert_eq!(c.get(&"a"), Some(&1));
assert_eq!(c.get(&"b"), Some(&2));
assert_eq!(c.size(), 3);
}
#[test]
fn test_eviction() {
let mut c: LruCache<&str, i32> = LruCache::new(3);
c.put("a", 1);
c.put("b", 2);
c.put("c", 3);
// Access "a" to make it recently used → order: b, c, a
c.get(&"a");
// Insert "d" → evicts "b" (front)
c.put("d", 4);
assert_eq!(c.size(), 3);
assert_eq!(c.get(&"b"), None); // evicted
assert_eq!(c.get(&"a"), Some(&1));
assert_eq!(c.get(&"c"), Some(&3));
assert_eq!(c.get(&"d"), Some(&4));
}
#[test]
fn test_update_existing() {
let mut c: LruCache<&str, i32> = LruCache::new(3);
c.put("a", 1);
c.put("b", 2);
c.put("a", 99);
assert_eq!(c.get(&"a"), Some(&99));
assert_eq!(c.size(), 2);
}
#[test]
fn test_capacity_one() {
let mut c: LruCache<i32, &str> = LruCache::new(1);
c.put(1, "one");
c.put(2, "two"); // evicts 1
assert_eq!(c.get(&1), None);
assert_eq!(c.get(&2), Some(&"two"));
}
#[test]
fn test_miss() {
let mut c: LruCache<&str, i32> = LruCache::new(2);
c.put("x", 10);
assert_eq!(c.get(&"y"), None);
}
}
Key Differences
| Aspect | Rust | OCaml |
|---|---|---|
| Lookup structure | HashMap — O(1) | Hashtbl or assoc list |
| Order structure | VecDeque — O(n) retain | List — O(n) filter |
| K: Clone bound | Required for dual ownership | GC shares automatically |
| O(1) production | lru crate (linked list + map) | External library needed |
| assert! on capacity | Panics on capacity = 0 | Equivalent assert false |
OCaml Approach
module type LRU = sig
type ('k, 'v) t
val create : int -> ('k, 'v) t
val get : ('k, 'v) t -> 'k -> 'v option
val put : ('k, 'v) t -> 'k -> 'v -> unit
end
(* Simple association-list LRU (O(n) all operations, pedagogical) *)
type ('k, 'v) t = { capacity : int; mutable entries : ('k * 'v) list }

let create cap = { capacity = cap; entries = [] }
let get cache key =
match List.assoc_opt key cache.entries with
| None -> None
| Some v ->
cache.entries <-
(key, v) :: List.filter (fun (k, _) -> k <> key) cache.entries;
Some v
let put cache key value =
let filtered = List.filter (fun (k, _) -> k <> key) cache.entries in
let trimmed =
if List.length filtered >= cache.capacity then
List.filteri (fun i _ -> i < cache.capacity - 1) filtered
else filtered
in
cache.entries <- (key, value) :: trimmed
OCaml's association list approach mirrors the deque approach but uses the list head as MRU. Both have O(n) update cost. For production, OCaml's Hashtbl combined with a Queue gives O(1) amortized.
Full Source
#![allow(clippy::all)]
// 961: LRU Cache
// OCaml: Hashtbl + Queue; Rust: HashMap + VecDeque
// Hash lookups are O(1); the recency-queue cleanup is O(n) in this simplified version
use std::collections::{HashMap, VecDeque};
pub struct LruCache<K, V> {
capacity: usize,
table: HashMap<K, V>,
order: VecDeque<K>,
}
impl<K: Eq + std::hash::Hash + Clone, V> LruCache<K, V> {
pub fn new(capacity: usize) -> Self {
assert!(capacity > 0, "capacity must be > 0");
LruCache {
capacity,
table: HashMap::with_capacity(capacity),
order: VecDeque::with_capacity(capacity),
}
}
pub fn get(&mut self, key: &K) -> Option<&V> {
if self.table.contains_key(key) {
// Move to back (most recently used)
self.order.retain(|k| k != key);
self.order.push_back(key.clone());
self.table.get(key)
} else {
None
}
}
pub fn put(&mut self, key: K, value: V) {
if self.table.contains_key(&key) {
// Update: remove from order, re-add at back
self.order.retain(|k| k != &key);
} else if self.table.len() >= self.capacity {
// Evict LRU (front of deque)
if let Some(lru_key) = self.order.pop_front() {
self.table.remove(&lru_key);
}
}
self.table.insert(key.clone(), value);
self.order.push_back(key);
}
pub fn size(&self) -> usize {
self.table.len()
}
pub fn contains(&self, key: &K) -> bool {
self.table.contains_key(key)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_basic_get_put() {
let mut c: LruCache<&str, i32> = LruCache::new(3);
c.put("a", 1);
c.put("b", 2);
c.put("c", 3);
assert_eq!(c.get(&"a"), Some(&1));
assert_eq!(c.get(&"b"), Some(&2));
assert_eq!(c.size(), 3);
}
#[test]
fn test_eviction() {
let mut c: LruCache<&str, i32> = LruCache::new(3);
c.put("a", 1);
c.put("b", 2);
c.put("c", 3);
// Access "a" to make it recently used → order: b, c, a
c.get(&"a");
// Insert "d" → evicts "b" (front)
c.put("d", 4);
assert_eq!(c.size(), 3);
assert_eq!(c.get(&"b"), None); // evicted
assert_eq!(c.get(&"a"), Some(&1));
assert_eq!(c.get(&"c"), Some(&3));
assert_eq!(c.get(&"d"), Some(&4));
}
#[test]
fn test_update_existing() {
let mut c: LruCache<&str, i32> = LruCache::new(3);
c.put("a", 1);
c.put("b", 2);
c.put("a", 99);
assert_eq!(c.get(&"a"), Some(&99));
assert_eq!(c.size(), 2);
}
#[test]
fn test_capacity_one() {
let mut c: LruCache<i32, &str> = LruCache::new(1);
c.put(1, "one");
c.put(2, "two"); // evicts 1
assert_eq!(c.get(&1), None);
assert_eq!(c.get(&2), Some(&"two"));
}
#[test]
fn test_miss() {
let mut c: LruCache<&str, i32> = LruCache::new(2);
c.put("x", 10);
assert_eq!(c.get(&"y"), None);
}
}
Deep Comparison
Core Insight
LRU cache = hash map (fast lookup) + ordered queue (eviction order). Both OCaml and Rust use the same two-structure approach. The get operation must update recency — removing the key from the middle of the queue and reinserting it at the back. Both versions here use O(n) queue cleanup (sufficient for most uses; a production LRU pairs the map with a doubly linked list for O(1) updates).
OCaml Approach
- Hashtbl.t + Queue.t in a record { capacity; table; order }
- Queue.pop — dequeue front (LRU victim)
- Queue.add k q — enqueue back (most recent)
- remove_from_queue — filter the queue to remove a key from the middle
- Queue.iter + a temporary queue for key removal
- Record fields marked mutable
Rust Approach
- HashMap<K, V> + VecDeque<K> in a struct
- VecDeque::pop_front() — remove the LRU key
- VecDeque::push_back(key) — insert as MRU
- order.retain(|k| k != key) — elegant middle removal (O(n))
- K: Eq + Hash + Clone — trait bounds for HashMap keys plus cloning into the deque
- &mut self on get — Rust makes the recency mutation explicit
Comparison Table
| Aspect | OCaml | Rust |
|---|---|---|
| Hash store | Hashtbl.t | HashMap<K, V> |
| Order queue | Queue.t | VecDeque<K> |
| Evict front | Queue.pop q | order.pop_front() |
| Add to back | Queue.add k q | order.push_back(k) |
| Remove from middle | Manual filter with temp queue | order.retain(\|k\| k != key) |
| Get mutates (recency) | Yes (mutable record fields) | &mut self required |
| Generic types | ('k, 'v) lru | LruCache<K, V> |
| Trait bounds | None (structural) | K: Eq + Hash + Clone |
Exercises
- Add len(&self) -> usize and is_empty(&self) -> bool.
- Add peek(&self, key: &K) -> Option<&V> that looks up without updating recency order.
- Replace the O(n) retain with an O(1) structure (try the lru crate and benchmark the difference).
- Add an evict_callback: Option<Box<dyn Fn(K, V)>> that fires when an entry is evicted.
- Add resize(new_capacity) that evicts entries from the LRU end until the cache fits the new capacity.