Level: Intermediate

1051-fibonacci-memo — Fibonacci with Memoization

Functional Programming

Tutorial

The Problem

The naive recursive Fibonacci takes O(2^n) time: fib(n) calls fib(n-1) and fib(n-2), which in turn recompute the same sub-problems over and over. Memoization caches each result after its first computation, reducing this to O(n) time and O(n) space.

Memoization is the top-down variant of dynamic programming. It is a mechanical transformation: any recursive function with overlapping sub-problems can be made efficient by consulting a cache before recursing and recording each result afterward.

🎯 Learning Outcomes

  • Transform naive recursive Fibonacci into a memoized version using HashMap
  • Implement memoization with an explicit cache parameter
  • Build a memoizing closure using move semantics
  • Compare top-down memoization to bottom-up DP (example 1052)
  • Understand the trade-offs between the two approaches

Code Example

    #![allow(clippy::all)]
    // 1051: Fibonacci with HashMap Memoization
    
    use std::collections::HashMap;
    
    // Approach 1: Iterative with HashMap cache
    fn fib_memo(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
        if n <= 1 {
            return n;
        }
        if let Some(&v) = cache.get(&n) {
            return v;
        }
        let v = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
        cache.insert(n, v);
        v
    }
    
    // Approach 2: Closure-based memoizer
    fn make_fib_memo() -> impl FnMut(u64) -> u64 {
        let mut cache = HashMap::new();
        move |n| {
            fn inner(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
                if n <= 1 {
                    return n;
                }
                if let Some(&v) = cache.get(&n) {
                    return v;
                }
                let v = inner(n - 1, cache) + inner(n - 2, cache);
                cache.insert(n, v);
                v
            }
            inner(n, &mut cache)
        }
    }
    
    // Approach 3: Generic memoization wrapper.
    // RefCell provides interior mutability, so `call` can take `&self` and
    // the recursion callback can simply re-enter `call`; every sub-result
    // then goes through the cache without fighting the borrow checker.
    use std::cell::RefCell;
    
    struct Memoize<F> {
        cache: RefCell<HashMap<u64, u64>>,
        func: F,
    }
    
    impl<F: Fn(u64, &dyn Fn(u64) -> u64) -> u64> Memoize<F> {
        fn new(func: F) -> Self {
            Memoize {
                cache: RefCell::new(HashMap::new()),
                func,
            }
        }
    
        fn call(&self, n: u64) -> u64 {
            if let Some(&v) = self.cache.borrow().get(&n) {
                return v;
            }
            // Open recursion: `func` computes one step and delegates all
            // sub-calls to the callback, which routes them through the cache.
            let v = (self.func)(n, &|x| self.call(x));
            self.cache.borrow_mut().insert(n, v);
            v
        }
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
    
        #[test]
        fn test_fib_memo() {
            let mut cache = HashMap::new();
            assert_eq!(fib_memo(0, &mut cache), 0);
            assert_eq!(fib_memo(1, &mut cache), 1);
            assert_eq!(fib_memo(10, &mut cache), 55);
            assert_eq!(fib_memo(20, &mut cache), 6765);
            assert_eq!(fib_memo(30, &mut cache), 832040);
        }
    
        #[test]
        fn test_fib_closure() {
            let mut fib = make_fib_memo();
            assert_eq!(fib(0), 0);
            assert_eq!(fib(1), 1);
            assert_eq!(fib(10), 55);
            assert_eq!(fib(20), 6765);
            assert_eq!(fib(30), 832040);
        }
    
        #[test]
        fn test_fib_generic() {
            let memo = Memoize::new(|n, recurse: &dyn Fn(u64) -> u64| {
                if n <= 1 {
                    n
                } else {
                    recurse(n - 1) + recurse(n - 2)
                }
            });
            assert_eq!(memo.call(0), 0);
            assert_eq!(memo.call(10), 55);
            assert_eq!(memo.call(30), 832040);
        }
    }

    Key Differences

  • Base.Memo.recursive: OCaml's Base library provides a generic memoizer for recursive functions; Rust has no equivalent in std, requiring manual implementation.
  • Cache parameter vs captured: Rust's explicit &mut HashMap parameter is testable and clear; OCaml's captured Hashtbl is more concise.
  • Overflow: Rust's u64 overflows past fib(93); OCaml's native int is 63 bits on 64-bit platforms (one bit is reserved as a tag), so it overflows even earlier, at fib(91).
  • Stack depth: Both languages risk stack overflow for large n with recursive memoization; the iterative bottom-up approach (1052) eliminates this.
    OCaml Approach

    OCaml's memoization uses a Hashtbl:

    let make_fib () =
      let cache = Hashtbl.create 64 in
      let rec fib n =
        match Hashtbl.find_opt cache n with
        | Some v -> v
        | None ->
          let v = if n <= 1 then n else fib (n-1) + fib (n-2) in
          Hashtbl.add cache n v;
          v
      in
      fib
    

    OCaml's recursive let rec inside a closure naturally captures the cache. The Base.Memo.recursive function automates memoization for any recursive function.


    Deep Comparison

    Fibonacci with HashMap Memoization — Comparison

    Core Insight

    Memoization transforms exponential O(2^n) Fibonacci into O(n). Both languages support hash-based caching, but OCaml's mutable Hashtbl is simpler to thread through recursion than Rust's HashMap due to borrowing constraints.

    OCaml Approach

  • Hashtbl.create for imperative memoization — simple and direct
  • Map module for a more functional flavor using immutable maps with a ref cell
  • CPS (continuation-passing style) variant shows functional composition with memoization
  • The find_opt / pattern match idiom is clean and readable

    Rust Approach

  • HashMap passed as &mut parameter — explicit but requires threading through calls
  • Closure-based memoizer captures the HashMap in a move closure
  • Generic Memoize struct demonstrates the complexity of self-referential memoization in Rust
  • Rust's borrow checker makes recursive memoization harder: you cannot borrow the cache mutably while also recursing through it

    Comparison Table

    | Aspect | OCaml | Rust |
    |---|---|---|
    | Hash map type | Hashtbl.t (mutable) | HashMap<K, V> |
    | Memoization pattern | find_opt + add | get + insert |
    | Closure capture | Trivial — closures capture freely | Requires move + ownership management |
    | Recursive memo | Natural — just call fib recursively | Borrow-checker friction with &mut |
    | Immutable variant | Map module + ref cell | Not idiomatic — would need RefCell |
    | Performance | GC-managed hash table | No GC; HashMap cost depends on the hasher (SipHash by default) |

    Exercises

  • Generalize fib_memo into memoize<A: Hash + Eq + Clone, R: Clone>(f: impl Fn(A, &mut HashMap<A, R>) -> R) for any memoizable function.
  • Implement memoized tribonacci (each term is the sum of the previous three) using the same cache pattern.
  • Write a memoized solution for the number of ways to tile a 1×n board with 1×1 and 1×2 tiles.