507 Advanced

Closure Memoization

Functional Programming


Tutorial

The Problem

Recursive algorithms like Fibonacci without memoization recompute exponentially many subproblems. fib(40) calls fib(39) and fib(38), which each call their children — the same subtree computed millions of times. Memoization eliminates this redundancy by storing results in a HashMap. More generally, any referentially transparent (pure) function can be memoized: database query caches, HTTP response caches, and compiler analysis passes all apply this pattern.
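
The blowup is easy to see by counting calls. A minimal sketch (the `naive_fib`/`memo_fib` names here are ours, not part of the example code below):

```rust
use std::collections::HashMap;

// Count how many calls naive recursion makes versus a memoized version.
fn naive_fib(n: u64, calls: &mut u64) -> u64 {
    *calls += 1;
    if n <= 1 { n } else { naive_fib(n - 1, calls) + naive_fib(n - 2, calls) }
}

fn memo_fib(n: u64, cache: &mut HashMap<u64, u64>, calls: &mut u64) -> u64 {
    *calls += 1;
    if let Some(&v) = cache.get(&n) {
        return v; // cache hit: no further recursion
    }
    let v = if n <= 1 {
        n
    } else {
        memo_fib(n - 1, cache, calls) + memo_fib(n - 2, cache, calls)
    };
    cache.insert(n, v);
    v
}

fn main() {
    let mut naive_calls = 0;
    naive_fib(25, &mut naive_calls);

    let mut memo_calls = 0;
    memo_fib(25, &mut HashMap::new(), &mut memo_calls);

    // The naive version makes hundreds of thousands of calls for n = 25;
    // the memoized version stays linear in n.
    println!("naive: {naive_calls}, memoized: {memo_calls}");
}
```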

🎯 Learning Outcomes

  • Build a generic Memoize<F, A, R> wrapper that caches results by input
  • Use HashMap<A, R> where A: Eq + Hash + Clone as the cache store
  • Implement recursive memoization by passing the cache as an explicit parameter to an inner function
  • Understand why a closure cannot call itself directly (closures have no self-reference)
  • Use cache_size() to verify cache-hit behaviour

Code Example

    #![allow(clippy::all)]
    //! # Closure Memoization — Caching Results
    
    use std::collections::HashMap;
    use std::hash::Hash;
    
    /// Simple memoization wrapper
    pub struct Memoize<F, A, R>
    where
        A: Eq + Hash + Clone,
        R: Clone,
    {
        func: F,
        cache: HashMap<A, R>,
    }
    
    impl<F, A, R> Memoize<F, A, R>
    where
        F: Fn(A) -> R,
        A: Eq + Hash + Clone,
        R: Clone,
    {
        pub fn new(func: F) -> Self {
            Self {
                func,
                cache: HashMap::new(),
            }
        }
    
        pub fn call(&mut self, arg: A) -> R {
            if let Some(result) = self.cache.get(&arg) {
                return result.clone();
            }
            let result = (self.func)(arg.clone());
            self.cache.insert(arg, result.clone());
            result
        }
    
        pub fn cache_size(&self) -> usize {
            self.cache.len()
        }
    }
    
    /// Recursive memoization (e.g., fibonacci)
    pub fn memoized_fib() -> impl FnMut(u64) -> u64 {
        let mut cache: HashMap<u64, u64> = HashMap::new();
    
        fn fib_inner(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
            if let Some(&result) = cache.get(&n) {
                return result;
            }
            let result = if n <= 1 {
                n
            } else {
                fib_inner(n - 1, cache) + fib_inner(n - 2, cache)
            };
            cache.insert(n, result);
            result
        }
    
        move |n| fib_inner(n, &mut cache)
    }
    
    /// Once-computed lazy value
    pub struct Lazy<T, F: FnOnce() -> T> {
        value: Option<T>,
        init: Option<F>,
    }
    
    impl<T, F: FnOnce() -> T> Lazy<T, F> {
        pub fn new(init: F) -> Self {
            Self {
                value: None,
                init: Some(init),
            }
        }
    
        pub fn get(&mut self) -> &T {
            if self.value.is_none() {
                let init = self.init.take().expect("already initialized");
                self.value = Some(init());
            }
            self.value.as_ref().unwrap()
        }
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
    
        #[test]
        fn test_memoize() {
            let mut memo = Memoize::new(|x: i32| x * x);
    
            assert_eq!(memo.call(5), 25);
            assert_eq!(memo.call(5), 25); // Cached
            assert_eq!(memo.call(6), 36);
            assert_eq!(memo.cache_size(), 2);
        }
    
        #[test]
        fn test_memoized_fib() {
            let mut fib = memoized_fib();
            assert_eq!(fib(0), 0);
            assert_eq!(fib(1), 1);
            assert_eq!(fib(10), 55);
            assert_eq!(fib(20), 6765);
            assert_eq!(fib(40), 102334155);
        }
    
        #[test]
        fn test_lazy() {
            use std::cell::Cell;
            let computed = Cell::new(false);
            let mut lazy = Lazy::new(|| {
                computed.set(true);
                42
            });
    
            assert!(!computed.get());
            assert_eq!(*lazy.get(), 42);
            assert!(computed.get());
            assert_eq!(*lazy.get(), 42); // Doesn't recompute
        }
    
        #[test]
        fn test_expensive_computation() {
            let mut memo = Memoize::new(|s: String| s.chars().map(|c| c as u32).sum::<u32>());
    
            let result1 = memo.call("hello".to_string());
            let result2 = memo.call("hello".to_string());
            assert_eq!(result1, result2);
        }
    }
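
As an aside, the hand-rolled Lazy type above is close to what the standard library now provides: std::cell::OnceCell (stable since Rust 1.70) offers once-only initialization through a shared reference, so no &mut self is needed. A brief sketch:

```rust
use std::cell::OnceCell;

fn main() {
    // OnceCell initializes at most once, through a shared reference.
    let value: OnceCell<u64> = OnceCell::new();

    // First call runs the closure and caches the result.
    assert_eq!(*value.get_or_init(|| 42), 42);

    // Later calls return the cached value; the closure is never run again.
    assert_eq!(*value.get_or_init(|| unreachable!()), 42);
}
```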

    Key Differences

  • Self-referential closures: Rust closures cannot reference themselves; recursive memoization needs a named function or an explicit fixed-point (Y) combinator. OCaml's let rec lets a closure refer to itself directly.
  • Clone requirement: Rust's Memoize requires R: Clone to return cached values (since HashMap::get returns &R); OCaml's polymorphic Hashtbl has no such constraint.
  • Mutability requirement: Memoize::call takes &mut self because it modifies the cache, so callers need mut memo; in OCaml the mutable tbl is simply captured by the closure.
  • Thread safety: Memoize is single-threaded; thread-safe memoization requires Arc<Mutex<HashMap>> or a concurrent map such as dashmap. OCaml's Mutex.protect serves the same purpose.
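
To make the first point concrete, the usual workaround when a named helper is undesirable is to thread "the recursive call" through as an explicit parameter, a poor man's fixed-point combinator. A hedged sketch (shown without memoization, which the named fib_inner approach above handles more simply):

```rust
// A closure cannot name itself, but a fixed-point helper that passes
// "the recursive call" in as an explicit argument sidesteps the limitation.
fn fix<A, R>(f: &dyn Fn(&dyn Fn(A) -> R, A) -> R, a: A) -> R {
    f(&|x| fix(f, x), a)
}

fn main() {
    // `rec` stands in for the closure's own (unnameable) self-reference.
    let fib = |rec: &dyn Fn(u64) -> u64, n: u64| {
        if n <= 1 { n } else { rec(n - 1) + rec(n - 2) }
    };
    assert_eq!(fix(&fib, 10), 55);
}
```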
OCaml Approach

    OCaml's let rec makes self-referential closures trivial:

    let memoize f =
      let tbl = Hashtbl.create 16 in
      fun x -> match Hashtbl.find_opt tbl x with
        | Some v -> v
        | None -> let v = f x in Hashtbl.add tbl x v; v
    
    let fib =
      let tbl = Hashtbl.create 64 in
      let rec go n = match Hashtbl.find_opt tbl n with
        | Some v -> v
        | None -> let v = if n <= 1 then n else go (n-1) + go (n-2) in
                  Hashtbl.add tbl n v; v
      in go
    

    OCaml's let rec allows the closure to reference itself directly, simplifying recursive memoization.



    Exercises

  • TTL cache: Extend Memoize with a Duration TTL per entry — store (Instant, R) and re-compute when the entry expires.
  • LRU memoize: Combine memoization with an LRU eviction policy (from example 375) to bound memory usage.
  • Thread-safe memoize: Implement ThreadSafeMemoize<F, A, R> using Arc<Mutex<HashMap<A, R>>> and verify correctness under concurrent calls with thread::scope.
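
For the thread-safety exercise, one possible starting skeleton (the ThreadSafeMemoize name comes from the exercise statement; the two-lock call path is a deliberate simplification that may compute the same key twice under contention):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Arc, Mutex};
use std::thread;

// The cache lives behind Arc<Mutex<..>> so it can be shared across threads.
pub struct ThreadSafeMemoize<F, A, R> {
    func: F,
    cache: Arc<Mutex<HashMap<A, R>>>,
}

impl<F, A, R> ThreadSafeMemoize<F, A, R>
where
    F: Fn(A) -> R,
    A: Eq + Hash + Clone,
    R: Clone,
{
    pub fn new(func: F) -> Self {
        Self { func, cache: Arc::new(Mutex::new(HashMap::new())) }
    }

    // Note: `call` takes &self, not &mut self — the Mutex provides
    // the interior mutability that the single-threaded Memoize got
    // from &mut self.
    pub fn call(&self, arg: A) -> R {
        if let Some(r) = self.cache.lock().unwrap().get(&arg) {
            return r.clone();
        }
        // The lock is released while `func` runs, so two threads may
        // compute the same key concurrently; the later insert overwrites.
        let r = (self.func)(arg.clone());
        self.cache.lock().unwrap().insert(arg, r.clone());
        r
    }
}

fn main() {
    let memo = ThreadSafeMemoize::new(|x: u64| x * x);
    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| assert_eq!(memo.call(7), 49));
        }
    });
    assert_eq!(memo.call(7), 49);
}
```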