Example 256 (Intermediate)

Memoization — Fibonacci with Hashtable Cache

Memoization · Higher-Order Functions


Tutorial

The Problem

Implement transparent memoization using a hash-table cache wrapper, demonstrated with Fibonacci — the recursive calls hit the cache instead of recomputing.

🎯 Learning Outcomes

  • How Rust makes mutable state explicit (&mut self) vs OCaml's hidden Hashtbl
  • The RefCell<HashMap> pattern for interior mutability inside closures
  • Why Rust cannot directly express OCaml's let rec … and mutual recursion for closures, and three idiomatic workarounds
  • thread_local! as Rust's equivalent of module-level mutable state

🦀 The Rust Way

    Rust offers three clean patterns. The struct-based approach owns the HashMap on the heap and exposes a &mut self method — mutable state is visible in the type signature. The HOF approach mirrors OCaml's memoize with RefCell<HashMap> for interior mutability, then threads an explicit cache reference through a named inner function to enable recursion. The thread-local approach uses thread_local! to hide the cache entirely, giving call-site transparency identical to the OCaml version.

    Code Example

    use std::collections::HashMap;
    
    pub struct FibMemo {
        cache: HashMap<u64, u64>,
    }
    
    impl FibMemo {
        pub fn new() -> Self { Self { cache: HashMap::new() } }
    
        pub fn fib(&mut self, n: u64) -> u64 {
            if let Some(&v) = self.cache.get(&n) { return v; }
            let v = if n <= 1 { n } else { self.fib(n - 1) + self.fib(n - 2) };
            self.cache.insert(n, v);
            v
        }
    }
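A quick usage sketch (the struct is repeated so the snippet compiles on its own; the `main` body is illustrative, not part of the example). Keeping one `FibMemo` alive turns repeat queries into pure lookups:

```rust
use std::collections::HashMap;

pub struct FibMemo {
    cache: HashMap<u64, u64>,
}

impl FibMemo {
    pub fn new() -> Self { Self { cache: HashMap::new() } }

    pub fn fib(&mut self, n: u64) -> u64 {
        if let Some(&v) = self.cache.get(&n) { return v; }
        let v = if n <= 1 { n } else { self.fib(n - 1) + self.fib(n - 2) };
        self.cache.insert(n, v);
        v
    }
}

fn main() {
    let mut m = FibMemo::new();
    // First call fills the cache with every value from fib(0) to fib(40).
    assert_eq!(m.fib(40), 102_334_155);
    // 41 entries were memoized: one per n in 0..=40.
    assert_eq!(m.cache.len(), 41);
    // Later calls within the warm range are pure lookups; no recursion runs.
    assert_eq!(m.fib(35), 9_227_465);
}
```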

    Key Differences

  • Mutable state visibility: OCaml hides the Hashtbl inside a closure; idiomatic Rust surfaces it via &mut self or RefCell
  • Mutual recursion: OCaml's let rec … and allows closures to reference each other at definition time; Rust requires explicit cache parameter passing or global state
  • Interior mutability: OCaml's GC handles aliasing freely; Rust uses RefCell to borrow-check at runtime inside otherwise-immutable closures
  • Thread safety: OCaml's Hashtbl is not thread-safe; Rust's thread_local! makes the scope explicit, and Mutex<HashMap> would be needed for sharing
    OCaml Approach

    OCaml's memoize is a generic HOF that captures a Hashtbl in a closure and returns a new function that checks the table before calling the original. Recursive memoization uses let rec … and mutual recursion to wire fib' and memo_fib together at binding time — a language feature with no direct Rust equivalent.
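The gap can be seen directly in a small sketch: the commented-out closure is what a literal transliteration would look like (it does not compile, because a Rust closure cannot name itself), and the named fn is the explicit-cache workaround described above:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// A closure cannot refer to itself by name, so a literal transliteration of
// OCaml's `let rec fib' … and memo_fib = memoize fib'` is rejected:
//
//     let memo_fib = memoize(|n| if n <= 1 { n } else { memo_fib(n - 1) + memo_fib(n - 2) });
//     //                                                ^^^^^^^^ error[E0425]: cannot find value `memo_fib`
//
// A named `fn` can recurse, at the cost of threading the cache explicitly:
fn fib(n: u64, cache: &RefCell<HashMap<u64, u64>>) -> u64 {
    if let Some(&v) = cache.borrow().get(&n) {
        return v;
    }
    let v = if n <= 1 { n } else { fib(n - 1, cache) + fib(n - 2, cache) };
    cache.borrow_mut().insert(n, v);
    v
}

fn main() {
    let cache = RefCell::new(HashMap::new());
    assert_eq!(fib(30, &cache), 832_040);
}
```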

    Full Source

    #![allow(clippy::all)]
    // Example 256: Memoization — Fibonacci with Hashtable Cache
    //
    // OCaml uses a generic `memoize` HOF that wraps any function with a Hashtbl
    // cache, then wires it to a recursive definition via `let rec … and`.
    //
    // Rust has no `let rec … and` for closures, so we show three equivalent
    // patterns ranked from most idiomatic to closest to the OCaml source.
    
    use std::cell::RefCell;
    use std::collections::HashMap;
    use std::hash::Hash;
    
    // ─────────────────────────────────────────────────────────────────────────────
    // Solution 1: Idiomatic Rust — struct-based memoization
    //
    // A struct owns the HashMap; the recursive method takes `&mut self`.
    // Mutable state is explicit in the type signature — no hidden globals.
    // ─────────────────────────────────────────────────────────────────────────────
    
    /// Fibonacci calculator that accumulates a memoization cache across calls.
    pub struct FibMemo {
        cache: HashMap<u64, u64>,
    }
    
    impl Default for FibMemo {
        fn default() -> Self {
            Self::new()
        }
    }
    
    impl FibMemo {
        pub fn new() -> Self {
            Self {
                cache: HashMap::new(),
            }
        }
    
        /// Compute fib(n), caching every intermediate result.
        ///
        /// `&mut self` signals to callers that the cache is mutated — Rust makes
        /// the side-effect visible in the type, unlike OCaml's hidden Hashtbl.
        pub fn fib(&mut self, n: u64) -> u64 {
            if let Some(&v) = self.cache.get(&n) {
                return v;
            }
            let v = if n <= 1 {
                n
            } else {
                self.fib(n - 1) + self.fib(n - 2)
            };
            self.cache.insert(n, v);
            v
        }
    }
    
    /// Convenience wrapper — fresh cache per call (for one-off queries).
    pub fn fib_struct(n: u64) -> u64 {
        FibMemo::new().fib(n)
    }
    
    // ─────────────────────────────────────────────────────────────────────────────
    // Solution 2: Generic memoize HOF + explicit-cache recursion
    //
    // OCaml: let memoize f = let cache = Hashtbl.create 16 in fun x -> …
    //
    // The generic `memoize` works for any pure, non-recursive function.
    // For *recursive* memoization the inner function must receive the cache
    // explicitly — Rust's equivalent of OCaml's `let rec … and memo_fib = …`.
    // ─────────────────────────────────────────────────────────────────────────────
    
    /// Generic memoize wrapper for non-recursive functions.
    ///
    /// Mirrors the OCaml `memoize` HOF exactly.  The returned `FnMut` owns
    /// a `RefCell<HashMap>` so it can mutate the cache on each call.
    pub fn memoize<A, R, F>(f: F) -> impl FnMut(A) -> R
    where
        A: Eq + Hash + Clone,
        R: Clone,
        F: Fn(A) -> R,
    {
        // RefCell gives us interior mutability inside the immutable closure capture.
        let cache = RefCell::new(HashMap::new());
        move |x: A| {
            if let Some(v) = cache.borrow().get(&x).cloned() {
                return v;
            }
            // Clone `x` so we can both call `f` and use `x` as the map key.
            let v = f(x.clone());
            cache.borrow_mut().insert(x, v.clone());
            v
        }
    }
    
    /// Fibonacci using an explicit-cache helper — the Rust analogue of OCaml's
    /// `let rec fib' … and memo_fib = memoize fib'`.
    ///
    /// A named inner function (not a closure) can call itself recursively while
    /// also accepting a `&RefCell<HashMap>` parameter to share the cache.
    pub fn fib_hof(n: u64) -> u64 {
        let cache = RefCell::new(HashMap::<u64, u64>::new());
    
        fn inner(n: u64, cache: &RefCell<HashMap<u64, u64>>) -> u64 {
            if let Some(&v) = cache.borrow().get(&n) {
                return v;
            }
            let v = if n <= 1 {
                n
            } else {
                inner(n - 1, cache) + inner(n - 2, cache)
            };
            cache.borrow_mut().insert(n, v);
            v
        }
    
        inner(n, &cache)
    }
    
    // ─────────────────────────────────────────────────────────────────────────────
    // Solution 3: Thread-local transparent memoization
    //
    // The closest Rust equivalent to OCaml's "transparent" memoization: callers
    // invoke `fib_tl(n)` with no cache argument.  The cache lives in a
    // `thread_local!` — analogous to OCaml's module-level `let cache = Hashtbl…`.
    //
    // Trade-off: cache persists across all calls in the thread (same as OCaml),
    // but is not shared across threads (each thread gets its own copy).
    // ─────────────────────────────────────────────────────────────────────────────
    
    thread_local! {
        static FIB_CACHE: RefCell<HashMap<u64, u64>> = RefCell::new(HashMap::new());
    }
    
    /// Fibonacci with a thread-local memoization cache.
    ///
    /// Call signature is identical to a plain recursive function — the cache is
    /// completely hidden, matching OCaml's transparent memoization style.
    pub fn fib_tl(n: u64) -> u64 {
        // Borrow, check, and drop the guard *before* recursing to avoid a
        // `BorrowMutError` when the recursive call tries to insert into the cache.
        if let Some(v) = FIB_CACHE.with(|c| c.borrow().get(&n).copied()) {
            return v;
        }
        let v = if n <= 1 {
            n
        } else {
            fib_tl(n - 1) + fib_tl(n - 2)
        };
        FIB_CACHE.with(|c| c.borrow_mut().insert(n, v));
        v
    }
    
    // ─────────────────────────────────────────────────────────────────────────────
    // Tests
    // ─────────────────────────────────────────────────────────────────────────────
    
    #[cfg(test)]
    mod tests {
        use super::*;
    
        // ── struct-based ─────────────────────────────────────────────────────────
    
        #[test]
        fn test_struct_base_cases() {
            let mut m = FibMemo::new();
            assert_eq!(m.fib(0), 0);
            assert_eq!(m.fib(1), 1);
        }
    
        #[test]
        fn test_struct_small() {
            assert_eq!(fib_struct(10), 55);
        }
    
        #[test]
        fn test_struct_large() {
            assert_eq!(fib_struct(35), 9_227_465);
        }
    
        #[test]
        fn test_struct_cache_reuse() {
            let mut m = FibMemo::new();
            // Warm the cache up to fib(20); fib(10) is then a cache hit.
            let _ = m.fib(20);
            assert_eq!(m.fib(10), 55);
        }
    
        // ── generic memoize HOF ──────────────────────────────────────────────────
    
        #[test]
        fn test_memoize_non_recursive() {
            let mut sq = memoize(|x: u64| x * x);
            assert_eq!(sq(7), 49);
            assert_eq!(sq(7), 49); // second call is a cache hit
        }
    
        #[test]
        fn test_hof_base_cases() {
            assert_eq!(fib_hof(0), 0);
            assert_eq!(fib_hof(1), 1);
        }
    
        #[test]
        fn test_hof_small() {
            assert_eq!(fib_hof(10), 55);
        }
    
        #[test]
        fn test_hof_large() {
            assert_eq!(fib_hof(35), 9_227_465);
        }
    
        // ── thread-local ─────────────────────────────────────────────────────────
    
        #[test]
        fn test_tl_base_cases() {
            assert_eq!(fib_tl(0), 0);
            assert_eq!(fib_tl(1), 1);
        }
    
        #[test]
        fn test_tl_small() {
            assert_eq!(fib_tl(10), 55);
        }
    
        #[test]
        fn test_tl_large() {
            assert_eq!(fib_tl(35), 9_227_465);
        }
    
        // ── cross-implementation agreement ───────────────────────────────────────
    
        #[test]
        fn test_all_implementations_agree() {
            let mut m = FibMemo::new();
            for n in 0u64..=20 {
                let expected = m.fib(n);
                assert_eq!(fib_hof(n), expected, "fib_hof({n}) mismatch");
                assert_eq!(fib_tl(n), expected, "fib_tl({n}) mismatch");
            }
        }
    }

    Deep Comparison

    OCaml vs Rust: Memoization — Fibonacci with Hashtable Cache

    Side-by-Side Code

    OCaml

    let memoize f =
      let cache = Hashtbl.create 16 in
      fun x ->
        match Hashtbl.find_opt cache x with
        | Some v -> v
        | None ->
          let v = f x in
          Hashtbl.add cache x v;
          v
    
    let fib =
      let rec fib' n =
        if n <= 1 then n
        else memo_fib (n - 1) + memo_fib (n - 2)
      and memo_fib = memoize fib'
      in memo_fib
    

    Rust (idiomatic — struct-based)

    use std::collections::HashMap;
    
    pub struct FibMemo {
        cache: HashMap<u64, u64>,
    }
    
    impl FibMemo {
        pub fn new() -> Self { Self { cache: HashMap::new() } }
    
        pub fn fib(&mut self, n: u64) -> u64 {
            if let Some(&v) = self.cache.get(&n) { return v; }
            let v = if n <= 1 { n } else { self.fib(n - 1) + self.fib(n - 2) };
            self.cache.insert(n, v);
            v
        }
    }
    

    Rust (generic HOF memoize — mirrors OCaml)

    use std::cell::RefCell;
    use std::collections::HashMap;
    use std::hash::Hash;
    
    pub fn memoize<A, R, F>(f: F) -> impl FnMut(A) -> R
    where
        A: Eq + Hash + Clone,
        R: Clone,
        F: Fn(A) -> R,
    {
        let cache = RefCell::new(HashMap::new());
        move |x: A| {
            if let Some(v) = cache.borrow().get(&x).cloned() { return v; }
            let v = f(x.clone());
            cache.borrow_mut().insert(x, v.clone());
            v
        }
    }
    
    // Recursive memoization requires threading the cache explicitly
    pub fn fib_hof(n: u64) -> u64 {
        let cache = RefCell::new(HashMap::<u64, u64>::new());
        fn inner(n: u64, cache: &RefCell<HashMap<u64, u64>>) -> u64 {
            if let Some(&v) = cache.borrow().get(&n) { return v; }
            let v = if n <= 1 { n } else { inner(n-1, cache) + inner(n-2, cache) };
            cache.borrow_mut().insert(n, v);
            v
        }
        inner(n, &cache)
    }
    

    Rust (thread-local transparent memoization)

    use std::cell::RefCell;
    use std::collections::HashMap;
    
    thread_local! {
        static FIB_CACHE: RefCell<HashMap<u64, u64>> = RefCell::new(HashMap::new());
    }
    
    pub fn fib_tl(n: u64) -> u64 {
        if let Some(v) = FIB_CACHE.with(|c| c.borrow().get(&n).copied()) {
            return v;
        }
        let v = if n <= 1 { n } else { fib_tl(n - 1) + fib_tl(n - 2) };
        FIB_CACHE.with(|c| c.borrow_mut().insert(n, v));
        v
    }
    

    Type Signatures

    Concept                    OCaml                                   Rust
    Generic memoize            val memoize : ('a -> 'b) -> 'a -> 'b    fn memoize<A,R,F>(f: F) -> impl FnMut(A) -> R
    Cache type                 ('a, 'b) Hashtbl.t                      HashMap<A, R>
    Interior mutability        implicit (GC + mutation)                RefCell<HashMap<…>>
    Transparent global cache   module-level let cache = Hashtbl…       thread_local! { static … }
    Mutable self reference     let rec … and                           &mut self method or explicit parameter
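To see the A: Eq + Hash + Clone bound in action, the same generic memoize (repeated here so the snippet stands alone) works unchanged with String keys; the `word_len` usage is an illustrative addition:

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::hash::Hash;

// Same generic wrapper as in the example above, repeated so this compiles alone.
pub fn memoize<A, R, F>(f: F) -> impl FnMut(A) -> R
where
    A: Eq + Hash + Clone,
    R: Clone,
    F: Fn(A) -> R,
{
    let cache = RefCell::new(HashMap::new());
    move |x: A| {
        if let Some(v) = cache.borrow().get(&x).cloned() {
            return v;
        }
        let v = f(x.clone());
        cache.borrow_mut().insert(x, v.clone());
        v
    }
}

fn main() {
    // String satisfies Eq + Hash + Clone, so it works as a cache key unchanged.
    let mut word_len = memoize(|s: String| s.chars().count());
    assert_eq!(word_len("memoization".to_string()), 11);
    assert_eq!(word_len("memoization".to_string()), 11); // second call: cache hit
}
```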

    Key Insights

  • No let rec … and for closures: OCaml's mutual recursion at the binding level lets fib' and memo_fib reference each other as if simultaneously defined. Rust closures cannot do this — the workarounds are explicit cache parameters (HOF style) or module-level state (thread_local!).
  • RefCell is runtime borrow checking: OCaml's GC allows the closure to mutate the Hashtbl freely. Rust requires RefCell<HashMap> to defer the borrow check to runtime, enabling mutation inside an Fn closure that is otherwise captured by immutable reference.
  • Mutable state is visible in Rust types: FibMemo::fib(&mut self) advertises in its signature that the cache is modified. OCaml hides this behind the closure — convenient but less traceable in large codebases.
  • thread_local! scope vs OCaml's module scope: OCaml's module-level Hashtbl is shared within a process (single-threaded assumption). Rust's thread_local! gives each thread its own copy — safer by default, but requires Mutex<HashMap> for cross-thread sharing.
  • The generic memoize HOF is valid but limited: both OCaml and Rust can express a generic memoize wrapper. The limitation in Rust is that the wrapped f receives no reference back to the memoized version of itself, so naively applying memoize to a recursive function memoizes only the top-level call.

    When to Use Each Style

    Use struct-based (FibMemo) when: you want to carry the cache across multiple calls, share it explicitly, or need to inspect/clear it. This is the most idiomatic Rust pattern.

    Use the HOF memoize wrapper when: you are wrapping non-recursive functions (query deduplication, expensive pure computations) and want a composable, reusable abstraction.

    Use thread_local! when: you want call-site transparency — callers invoke fib_tl(n) with no extra arguments, matching the OCaml experience. Suitable when the cache should persist across calls within a thread.
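Earlier bullets note that sharing one cache across threads needs Mutex<HashMap>. A minimal sketch of that variant follows; fib_shared and the cache() helper are illustrative names not in the example, and OnceLock assumes Rust 1.70 or newer:

```rust
use std::collections::HashMap;
use std::sync::{Mutex, OnceLock};

// A process-wide cache shared by every thread. OnceLock (Rust 1.70+) provides
// lazy, thread-safe initialization; the Mutex serializes access to the map.
fn cache() -> &'static Mutex<HashMap<u64, u64>> {
    static CACHE: OnceLock<Mutex<HashMap<u64, u64>>> = OnceLock::new();
    CACHE.get_or_init(|| Mutex::new(HashMap::new()))
}

pub fn fib_shared(n: u64) -> u64 {
    // Lock, check, and release before recursing; holding the lock across the
    // recursive calls would deadlock, since std's Mutex is not reentrant.
    if let Some(&v) = cache().lock().unwrap().get(&n) {
        return v;
    }
    let v = if n <= 1 { n } else { fib_shared(n - 1) + fib_shared(n - 2) };
    cache().lock().unwrap().insert(n, v);
    v
}

fn main() {
    // All threads share one cache: the first to finish warms it for the rest.
    let handles: Vec<_> = (0..4)
        .map(|_| std::thread::spawn(|| fib_shared(30)))
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), 832_040);
    }
}
```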

    Exercises

  • Generalize the memoization approach into a reusable memoize higher-order function that wraps any Fn(u64) -> u64 with a HashMap cache.
  • Implement memoized mutual recursion: two functions is_even and is_odd that call each other, each with its own cache, and verify they produce correct results for large inputs without redundant calls.
  • Implement bottom-up dynamic programming for Fibonacci using a fixed-size array instead of a HashMap, compare memory and performance with the top-down memoized version, and explain the trade-offs.