375: LRU Cache
Tutorial Video
Text description (accessibility)
This video demonstrates the "375: LRU Cache" functional Rust example. Difficulty level: Advanced. Key concepts covered: Functional Programming. Key difference from OCaml: 1. **Complexity**: Rust's `HashMap + VecDeque` gives O(1) average map lookups (the recency update shown is O(n)); OCaml's association list is O(n) per operation due to linear scan.
Tutorial
The Problem
Cache memory is finite. When CPU caches, web caches, and database buffer pools fill up, they must decide which entries to evict. The Least Recently Used (LRU) policy discards the entry that has not been accessed for the longest time, exploiting temporal locality — the observation that recently accessed data is more likely to be accessed again soon. The challenge is achieving O(1) time for both lookup and eviction simultaneously.
LRU caches appear in operating system page replacement, CPU cache controllers, database buffer managers (PostgreSQL's shared_buffers), DNS resolvers, web CDNs, and Redis's maxmemory-policy lru mode.
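Before the full implementation, the recency bookkeeping alone can be sketched with a queue (a hypothetical toy using made-up page names, not the tutorial's cache type): each access moves an item to the front, so the back is always the least recently used.

```rust
use std::collections::VecDeque;

fn main() {
    // Recency list: most recently used at the front, eviction candidate at the back.
    let mut recency: VecDeque<&str> = VecDeque::new();
    for page in ["a", "b", "c", "a"] {
        recency.retain(|p| *p != page); // forget any older access to this page
        recency.push_front(page);       // record it as most recently used
    }
    // "b" has gone longest without an access, so LRU evicts it first.
    assert_eq!(recency.pop_back(), Some("b"));
    let rest: Vec<&str> = recency.into_iter().collect();
    assert_eq!(rest, ["a", "c"]);
}
```

Note the `retain` scan is O(n); making both lookup and this recency update O(1) at once is the hard part.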
🎯 Learning Outcomes
Combine `HashMap` and `VecDeque` to achieve O(1) average operations
Code Example
#![allow(clippy::all)]
//! LRU Cache
//!
//! Least Recently Used cache: O(1) average `HashMap` lookups, with an O(n)
//! recency update via `VecDeque::retain`.
use std::collections::{HashMap, VecDeque};
use std::hash::Hash;
/// LRU Cache with fixed capacity
pub struct LruCache<K: Clone + Eq + Hash, V> {
map: HashMap<K, V>,
order: VecDeque<K>,
capacity: usize,
}
impl<K: Clone + Eq + Hash, V> LruCache<K, V> {
/// Create a new LRU cache with given capacity
pub fn new(capacity: usize) -> Self {
assert!(capacity > 0, "capacity must be > 0");
Self {
map: HashMap::new(),
order: VecDeque::new(),
capacity,
}
}
/// Get a value, marking it as recently used
pub fn get(&mut self, key: &K) -> Option<&V> {
if self.map.contains_key(key) {
self.order.retain(|k| k != key); // O(n) scan of the recency queue
self.order.push_front(key.clone());
self.map.get(key)
} else {
None
}
}
/// Put a value, evicting LRU if at capacity
pub fn put(&mut self, key: K, val: V) {
if self.map.contains_key(&key) {
self.order.retain(|k| k != &key);
} else if self.map.len() >= self.capacity {
if let Some(lru_key) = self.order.pop_back() {
self.map.remove(&lru_key);
}
}
self.map.insert(key.clone(), val);
self.order.push_front(key);
}
/// Check if key exists (without affecting LRU order)
pub fn contains(&self, key: &K) -> bool {
self.map.contains_key(key)
}
/// Get current size
pub fn len(&self) -> usize {
self.map.len()
}
/// Check if empty
pub fn is_empty(&self) -> bool {
self.map.is_empty()
}
/// Get capacity
pub fn capacity(&self) -> usize {
self.capacity
}
/// Remove a key
pub fn remove(&mut self, key: &K) -> Option<V> {
if self.map.contains_key(key) {
self.order.retain(|k| k != key);
self.map.remove(key)
} else {
None
}
}
/// Clear all entries
pub fn clear(&mut self) {
self.map.clear();
self.order.clear();
}
/// Get keys in LRU order (most recent first)
pub fn keys(&self) -> impl Iterator<Item = &K> {
self.order.iter()
}
}
impl<K: Clone + Eq + Hash, V> LruCache<K, V> {
/// Peek at a value without affecting LRU order
pub fn peek(&self, key: &K) -> Option<&V> {
self.map.get(key)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_basic_lru() {
let mut cache: LruCache<i32, i32> = LruCache::new(2);
cache.put(1, 1);
cache.put(2, 2);
assert_eq!(cache.get(&1), Some(&1));
cache.put(3, 3); // 2 is LRU, gets evicted
assert_eq!(cache.get(&2), None);
assert_eq!(cache.get(&1), Some(&1));
assert_eq!(cache.get(&3), Some(&3));
}
#[test]
fn test_update_existing() {
let mut cache: LruCache<&str, i32> = LruCache::new(2);
cache.put("a", 1);
cache.put("b", 2);
cache.put("a", 10); // update, a becomes MRU
cache.put("c", 3); // b evicted (was LRU)
assert_eq!(cache.get(&"a"), Some(&10));
assert_eq!(cache.get(&"b"), None);
assert_eq!(cache.get(&"c"), Some(&3));
}
#[test]
fn test_contains() {
let mut cache: LruCache<i32, i32> = LruCache::new(2);
cache.put(1, 1);
assert!(cache.contains(&1));
assert!(!cache.contains(&2));
}
#[test]
fn test_remove() {
let mut cache: LruCache<i32, i32> = LruCache::new(2);
cache.put(1, 1);
cache.put(2, 2);
assert_eq!(cache.remove(&1), Some(1));
assert_eq!(cache.len(), 1);
assert!(!cache.contains(&1));
}
#[test]
fn test_clear() {
let mut cache: LruCache<i32, i32> = LruCache::new(2);
cache.put(1, 1);
cache.put(2, 2);
cache.clear();
assert!(cache.is_empty());
}
#[test]
fn test_peek_no_affect() {
let mut cache: LruCache<i32, i32> = LruCache::new(2);
cache.put(1, 1);
cache.put(2, 2);
// Peek at 1 without making it MRU
assert_eq!(cache.peek(&1), Some(&1));
cache.put(3, 3); // 1 should be evicted (still LRU)
assert_eq!(cache.peek(&1), None);
}
}
Key Differences
1. **Complexity**: Rust's `HashMap` gives O(1) average lookups, though this implementation's `retain`-based recency update is O(n); OCaml's association list is O(n) per operation due to linear scan.
2. **Mutability**: Rust uses `&mut self` methods that mutate in place; OCaml uses a mutable record field (`lru.data <- ...`) with reconstructed lists, blending functional and imperative styles.
3. **Type constraints**: Rust's trait bounds `K: Clone + Eq + Hash` are checked at compile time; OCaml uses polymorphic types with structural equality by default.
OCaml Approach
The OCaml version in example.ml uses a mutable association list (('k * 'v) list) stored most-recent-first. The get function uses List.assoc_opt for lookup, then reconstructs the list with the found pair at the front. The put function filters out any existing key, prepends the new pair, then truncates to capacity with List.filteri. This is idiomatic functional style — immutable-feeling operations on mutable record fields — but O(n) per operation due to list traversal.
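For comparison, the list-based strategy described above can be transliterated into Rust (a hypothetical sketch using a `Vec` of pairs to mirror example.ml's logic; the function names are illustrative, not from the tutorial's code):

```rust
// Association-list LRU kept most-recent-first, mirroring the OCaml approach.
// Every operation scans the list, so get/put are O(n).
fn assoc_get<K: PartialEq, V: Clone>(data: &mut Vec<(K, V)>, key: &K) -> Option<V> {
    let pos = data.iter().position(|(k, _)| k == key)?; // linear lookup, like List.assoc_opt
    let pair = data.remove(pos);                        // pull the pair out of the list
    let val = pair.1.clone();
    data.insert(0, pair);                               // re-prepend as most recently used
    Some(val)
}

fn assoc_put<K: PartialEq, V>(data: &mut Vec<(K, V)>, capacity: usize, key: K, val: V) {
    data.retain(|(k, _)| *k != key); // drop any existing binding, like filtering the list
    data.insert(0, (key, val));      // prepend the new pair
    data.truncate(capacity);         // trim to capacity, like List.filteri
}

fn main() {
    let mut data: Vec<(&str, i32)> = Vec::new();
    assoc_put(&mut data, 2, "a", 1);
    assoc_put(&mut data, 2, "b", 2);
    assert_eq!(assoc_get(&mut data, &"a"), Some(1)); // "a" becomes MRU
    assoc_put(&mut data, 2, "c", 3);                 // "b" was LRU, truncated away
    assert_eq!(assoc_get(&mut data, &"b"), None);
    assert_eq!(assoc_get(&mut data, &"c"), Some(3));
}
```

The eviction semantics match the `HashMap + VecDeque` version exactly; only the cost per operation differs.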
Deep Comparison
OCaml vs Rust: LRU Cache
The Rust version pairs a `HashMap` with a `VecDeque` recency queue; the OCaml version uses a single association list. Classic O(1) LRU designs pair a hash map with a doubly linked list so that the recency update needs no linear scan.
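The classic hash-map-plus-linked-list design can be sketched in safe Rust by storing the list nodes in a `Vec` arena and using indices as links (all names below are illustrative, not from the tutorial's code); both the recency update and eviction then touch only a constant number of links:

```rust
use std::collections::HashMap;
use std::hash::Hash;

// One list node per cache entry; `prev`/`next` are indices into `nodes`.
struct Node<K, V> {
    key: K,
    val: V,
    prev: Option<usize>,
    next: Option<usize>,
}

struct LruO1<K: Clone + Eq + Hash, V> {
    nodes: Vec<Node<K, V>>,
    index: HashMap<K, usize>, // key -> slot in `nodes`
    head: Option<usize>,      // most recently used
    tail: Option<usize>,      // least recently used
    capacity: usize,
}

impl<K: Clone + Eq + Hash, V> LruO1<K, V> {
    fn new(capacity: usize) -> Self {
        assert!(capacity > 0, "capacity must be > 0");
        Self { nodes: Vec::new(), index: HashMap::new(), head: None, tail: None, capacity }
    }

    // Detach slot `i` by rewiring its neighbours: O(1), no scanning.
    fn unlink(&mut self, i: usize) {
        let (p, n) = (self.nodes[i].prev, self.nodes[i].next);
        match p { Some(p) => self.nodes[p].next = n, None => self.head = n }
        match n { Some(n) => self.nodes[n].prev = p, None => self.tail = p }
    }

    // Make slot `i` the new head (MRU): also O(1).
    fn push_front(&mut self, i: usize) {
        self.nodes[i].prev = None;
        self.nodes[i].next = self.head;
        if let Some(h) = self.head { self.nodes[h].prev = Some(i); }
        self.head = Some(i);
        if self.tail.is_none() { self.tail = Some(i); }
    }

    fn get(&mut self, key: &K) -> Option<&V> {
        let i = *self.index.get(key)?;
        self.unlink(i);
        self.push_front(i);
        Some(&self.nodes[i].val)
    }

    fn put(&mut self, key: K, val: V) {
        if let Some(&i) = self.index.get(&key) {
            self.nodes[i].val = val; // update in place, then mark MRU
            self.unlink(i);
            self.push_front(i);
        } else if self.index.len() >= self.capacity {
            // Evict the tail (LRU) and recycle its slot for the new entry.
            let t = self.tail.expect("at capacity implies non-empty");
            self.unlink(t);
            let old_key = self.nodes[t].key.clone();
            self.index.remove(&old_key);
            self.index.insert(key.clone(), t);
            self.nodes[t] = Node { key, val, prev: None, next: None };
            self.push_front(t);
        } else {
            let i = self.nodes.len();
            self.nodes.push(Node { key: key.clone(), val, prev: None, next: None });
            self.index.insert(key, i);
            self.push_front(i);
        }
    }
}

fn main() {
    let mut cache = LruO1::new(2);
    cache.put(1, "one");
    cache.put(2, "two");
    assert_eq!(cache.get(&1), Some(&"one")); // 1 becomes MRU
    cache.put(3, "three");                   // 2 was LRU, evicted
    assert_eq!(cache.get(&2), None);
    assert_eq!(cache.get(&3), Some(&"three"));
}
```

The arena avoids `unsafe` pointer juggling: slots are never moved, so an index stays valid for the life of its entry, and an evicted slot is simply reused.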
Exercises
1. Replace the `VecDeque + HashMap<K,V>` with a proper doubly-linked list and `HashMap<K, NodePtr>` to achieve true O(1) without the O(n) `retain` call in the current implementation.
2. Add time-based expiry: store a `std::time::Instant` alongside each value and check it on access.
3. Make the cache thread-safe with `Arc<Mutex<...>>` and write a test that spawns 8 threads concurrently calling `get` and `put`, verifying no data races occur and capacity is never exceeded.