1051-fibonacci-memo — Fibonacci with Memoization
Tutorial
The Problem
The naive recursive Fibonacci is O(2^n): fib(n) calls fib(n-1) and fib(n-2), and those calls recompute the same sub-problems over and over. Memoization caches each result after its first computation, reducing the complexity to O(n) time and O(n) space.
Memoization is the top-down variant of dynamic programming. It is the mechanical transformation of any recursive function with overlapping sub-problems into an efficient one.
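To make the O(2^n) vs O(n) claim concrete, here is a small sketch (not part of the lesson code — `naive_calls` and `memo_calls` are illustrative helpers invented for this comparison) that counts the number of function calls each strategy makes:

```rust
use std::collections::HashMap;

// Naive recursion: returns (fib(n), number of calls made).
fn naive_calls(n: u64) -> (u64, u64) {
    if n <= 1 {
        return (n, 1);
    }
    let (a, ca) = naive_calls(n - 1);
    let (b, cb) = naive_calls(n - 2);
    (a + b, ca + cb + 1)
}

// Memoized recursion: each distinct n is computed at most once,
// so the call count grows linearly instead of exponentially.
fn memo_calls(n: u64, cache: &mut HashMap<u64, u64>, calls: &mut u64) -> u64 {
    *calls += 1;
    if n <= 1 {
        return n;
    }
    if let Some(&v) = cache.get(&n) {
        return v;
    }
    let v = memo_calls(n - 1, cache, calls) + memo_calls(n - 2, cache, calls);
    cache.insert(n, v);
    v
}
```

With these counters, fib(20) takes 21,891 calls naively but only 39 calls memoized — the exponential tree collapses to one computation per distinct sub-problem.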
🎯 Learning Outcomes
HashMap
move semantics
Code Example
#![allow(clippy::all)]
// 1051: Fibonacci with HashMap Memoization
use std::collections::HashMap;

// Approach 1: Recursive function with an explicit HashMap cache
fn fib_memo(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
    if n <= 1 {
        return n;
    }
    if let Some(&v) = cache.get(&n) {
        return v;
    }
    let v = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
    cache.insert(n, v);
    v
}

// Approach 2: Closure-based memoizer
fn make_fib_memo() -> impl FnMut(u64) -> u64 {
    let mut cache = HashMap::new();
    move |n| {
        fn inner(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
            if n <= 1 {
                return n;
            }
            if let Some(&v) = cache.get(&n) {
                return v;
            }
            let v = inner(n - 1, cache) + inner(n - 2, cache);
            cache.insert(n, v);
            v
        }
        inner(n, &mut cache)
    }
}

// Approach 3: Generic memoization wrapper
struct Memoize<F> {
    cache: HashMap<u64, u64>,
    func: F,
}

impl<F: Fn(u64, &mut dyn FnMut(u64) -> u64) -> u64> Memoize<F> {
    fn new(func: F) -> Self {
        Memoize {
            cache: HashMap::new(),
            func,
        }
    }

    fn call(&mut self, n: u64) -> u64 {
        if let Some(&v) = self.cache.get(&n) {
            return v;
        }
        // Calling self.func while the callback mutably borrows self.cache would
        // violate borrowing rules; instead, temporarily move the cache out of self.
        let mut cache = std::mem::take(&mut self.cache);
        let v = (self.func)(n, &mut |x| {
            if x <= 1 {
                return x;
            }
            if let Some(&v) = cache.get(&x) {
                return v;
            }
            // The callback cannot call itself recursively, so compute
            // cache misses iteratively.
            let mut a = 0u64;
            let mut b = 1u64;
            for _ in 2..=x {
                let t = a + b;
                a = b;
                b = t;
            }
            cache.insert(x, b);
            b
        });
        cache.insert(n, v);
        self.cache = cache;
        v
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_fib_memo() {
        let mut cache = HashMap::new();
        assert_eq!(fib_memo(0, &mut cache), 0);
        assert_eq!(fib_memo(1, &mut cache), 1);
        assert_eq!(fib_memo(10, &mut cache), 55);
        assert_eq!(fib_memo(20, &mut cache), 6765);
        assert_eq!(fib_memo(30, &mut cache), 832040);
    }

    #[test]
    fn test_fib_closure() {
        let mut fib = make_fib_memo();
        assert_eq!(fib(0), 0);
        assert_eq!(fib(1), 1);
        assert_eq!(fib(10), 55);
        assert_eq!(fib(20), 6765);
        assert_eq!(fib(30), 832040);
    }

    #[test]
    fn test_fib_generic() {
        let mut memo = Memoize::new(|n, recurse: &mut dyn FnMut(u64) -> u64| {
            if n <= 1 {
                n
            } else {
                recurse(n - 1) + recurse(n - 2)
            }
        });
        assert_eq!(memo.call(0), 0);
        assert_eq!(memo.call(10), 55);
        assert_eq!(memo.call(30), 832040);
    }
}
Key Differences
Base.Memo.recursive: OCaml's Base library provides a generic memoizer for recursive functions; Rust has no equivalent in std, requiring manual implementation.
Explicit cache parameter: Rust's &mut HashMap parameter is testable and clear; OCaml's captured Hashtbl is more concise.
Integer width: Rust's u64 overflows for Fibonacci numbers with n > 93; OCaml's native int is 63-bit on 64-bit hardware (one bit is used as a tag), so it overflows even sooner.
OCaml Approach
OCaml's memoization uses a Hashtbl:
let make_fib () =
  let cache = Hashtbl.create 64 in
  let rec fib n =
    match Hashtbl.find_opt cache n with
    | Some v -> v
    | None ->
      let v = if n <= 1 then n else fib (n-1) + fib (n-2) in
      Hashtbl.add cache n v;
      v
  in
  fib
OCaml's recursive let rec inside a closure naturally captures the cache. The Base.Memo.recursive function automates memoization for any recursive function.
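OCaml's closure captures the mutable Hashtbl with no ceremony. The closest Rust analogue that avoids threading a &mut parameter is interior mutability — the RefCell route. The sketch below (`make_fib` is a hypothetical helper written for this comparison, not part of the lesson code) defers the borrow check to runtime so the cache can be read and written from inside a shared closure:

```rust
use std::cell::RefCell;
use std::collections::HashMap;

// Approximating OCaml's captured-Hashtbl pattern: RefCell lets a Fn closure
// mutate the cache it owns, at the cost of runtime borrow tracking.
fn make_fib() -> impl Fn(u64) -> u64 {
    let cache = RefCell::new(HashMap::new());
    fn go(n: u64, cache: &RefCell<HashMap<u64, u64>>) -> u64 {
        if n <= 1 {
            return n;
        }
        // The borrow taken here is released before the recursive calls below.
        if let Some(&v) = cache.borrow().get(&n) {
            return v;
        }
        let v = go(n - 1, cache) + go(n - 2, cache);
        cache.borrow_mut().insert(n, v);
        v
    }
    move |n| go(n, &cache)
}
```

This mirrors the OCaml shape most closely, though as the comparison below notes, it is less idiomatic in Rust than passing the cache explicitly.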
Full Source
#![allow(clippy::all)]
// 1051: Fibonacci with HashMap Memoization
use std::collections::HashMap;

// Approach 1: Recursive function with an explicit HashMap cache
fn fib_memo(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
    if n <= 1 {
        return n;
    }
    if let Some(&v) = cache.get(&n) {
        return v;
    }
    let v = fib_memo(n - 1, cache) + fib_memo(n - 2, cache);
    cache.insert(n, v);
    v
}

// Approach 2: Closure-based memoizer
fn make_fib_memo() -> impl FnMut(u64) -> u64 {
    let mut cache = HashMap::new();
    move |n| {
        fn inner(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
            if n <= 1 {
                return n;
            }
            if let Some(&v) = cache.get(&n) {
                return v;
            }
            let v = inner(n - 1, cache) + inner(n - 2, cache);
            cache.insert(n, v);
            v
        }
        inner(n, &mut cache)
    }
}

// Approach 3: Generic memoization wrapper
struct Memoize<F> {
    cache: HashMap<u64, u64>,
    func: F,
}

impl<F: Fn(u64, &mut dyn FnMut(u64) -> u64) -> u64> Memoize<F> {
    fn new(func: F) -> Self {
        Memoize {
            cache: HashMap::new(),
            func,
        }
    }

    fn call(&mut self, n: u64) -> u64 {
        if let Some(&v) = self.cache.get(&n) {
            return v;
        }
        // Calling self.func while the callback mutably borrows self.cache would
        // violate borrowing rules; instead, temporarily move the cache out of self.
        let mut cache = std::mem::take(&mut self.cache);
        let v = (self.func)(n, &mut |x| {
            if x <= 1 {
                return x;
            }
            if let Some(&v) = cache.get(&x) {
                return v;
            }
            // The callback cannot call itself recursively, so compute
            // cache misses iteratively.
            let mut a = 0u64;
            let mut b = 1u64;
            for _ in 2..=x {
                let t = a + b;
                a = b;
                b = t;
            }
            cache.insert(x, b);
            b
        });
        cache.insert(n, v);
        self.cache = cache;
        v
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_fib_memo() {
        let mut cache = HashMap::new();
        assert_eq!(fib_memo(0, &mut cache), 0);
        assert_eq!(fib_memo(1, &mut cache), 1);
        assert_eq!(fib_memo(10, &mut cache), 55);
        assert_eq!(fib_memo(20, &mut cache), 6765);
        assert_eq!(fib_memo(30, &mut cache), 832040);
    }

    #[test]
    fn test_fib_closure() {
        let mut fib = make_fib_memo();
        assert_eq!(fib(0), 0);
        assert_eq!(fib(1), 1);
        assert_eq!(fib(10), 55);
        assert_eq!(fib(20), 6765);
        assert_eq!(fib(30), 832040);
    }

    #[test]
    fn test_fib_generic() {
        let mut memo = Memoize::new(|n, recurse: &mut dyn FnMut(u64) -> u64| {
            if n <= 1 {
                n
            } else {
                recurse(n - 1) + recurse(n - 2)
            }
        });
        assert_eq!(memo.call(0), 0);
        assert_eq!(memo.call(10), 55);
        assert_eq!(memo.call(30), 832040);
    }
}
Deep Comparison
Fibonacci with HashMap Memoization — Comparison
Core Insight
Memoization transforms exponential O(2^n) Fibonacci into O(n). Both languages support hash-based caching, but OCaml's mutable Hashtbl is simpler to thread through recursion than Rust's HashMap due to borrowing constraints.
OCaml Approach
Hashtbl.create for imperative memoization — simple and direct
Map module for a more functional flavor using immutable maps with a ref cell
find_opt / pattern match idiom is clean and readable
Rust Approach
HashMap passed as a &mut parameter — explicit but requires threading through calls
move closure captures the cache by value in the closure-based memoizer
Memoize struct demonstrates the complexity of self-referential memoization in Rust
The borrow checker resists borrowing the cache mutably while also recursing
Comparison Table
| Aspect | OCaml | Rust |
|---|---|---|
| Hash map type | Hashtbl.t (mutable) | HashMap<K, V> |
| Memoization pattern | find_opt + add | get + insert |
| Closure capture | Trivial — closures capture freely | Requires move + ownership management |
| Recursive memo | Natural — just call fib recursively | Borrow checker friction with &mut |
| Immutable variant | Map module + ref cell | Not idiomatic — would need RefCell |
| Performance | GC-managed hash table | No GC; predictable performance |
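The Key Differences above note that u64 overflows for Fibonacci numbers with n > 93. A hedged sketch of how to make that boundary explicit in Rust (`fib_checked` is a name invented here; it relies on std's u64::checked_add), surfacing overflow as None instead of a panic in debug builds or silent wraparound in release builds:

```rust
use std::collections::HashMap;

// Overflow-aware memoized Fibonacci: the cache stores Option<u64> so that
// "this value overflows u64" is itself memoized.
fn fib_checked(n: u64, cache: &mut HashMap<u64, Option<u64>>) -> Option<u64> {
    if n <= 1 {
        return Some(n);
    }
    if let Some(&v) = cache.get(&n) {
        return v;
    }
    let a = fib_checked(n - 1, cache)?;
    let b = fib_checked(n - 2, cache)?;
    let v = a.checked_add(b); // None once the sum exceeds u64::MAX
    cache.insert(n, v);
    v
}
```

fib_checked(93, …) still returns Some, while fib_checked(94, …) returns None rather than a wrong number.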
Exercises
Generalize fib_memo into memoize<A: Hash + Eq + Clone, R: Clone>(f: impl Fn(A, &mut HashMap<A, R>) -> R) for any memoizable function.