989: Once Init (Fundamental, Functional Programming)

Tutorial

The Problem

Demonstrate one-time initialization using OnceLock<T> — a global that is set exactly once and then read-only for the lifetime of the program. Multiple concurrent threads may call get_or_init simultaneously; only one initialization runs. Use for global configuration, pre-computed prime sieves, and expensive singleton setup.

🎯 Learning Outcomes

  • Declare static CONFIG: OnceLock<String> and initialize it with CONFIG.get_or_init(|| ...)
  • Understand that get_or_init runs the initializer exactly once, even under concurrent calls
  • Use OnceLock<Vec<u32>> for expensive pre-computed data (a prime sieve) initialized on first use
  • Recognize OnceLock<T> vs lazy cells (once_cell's Lazy<T>, or std::sync::LazyLock<T> in Rust 1.80+)
  • Connect to OCaml's lazy keyword and Lazy.force

Code Example

    #![allow(clippy::all)]
    // 989: One-Time Initialization
    // Rust: OnceLock<T> — set once, read many times (thread-safe)
    
    use std::sync::{Arc, Mutex, OnceLock};
    use std::thread;
    
    // --- Approach 1: OnceLock<T> for global one-time init ---
    static CONFIG: OnceLock<String> = OnceLock::new();
    
    fn get_config() -> &'static String {
        CONFIG.get_or_init(|| {
            // Only runs once, even with concurrent calls
            "production-config-v42".to_string()
        })
    }
    
    // --- Approach 2: OnceLock with expensive computation ---
    static PRIMES: OnceLock<Vec<u32>> = OnceLock::new();
    
    fn sieve(limit: usize) -> Vec<u32> {
        let mut is_prime = vec![true; limit + 1];
        is_prime[0] = false;
        if limit > 0 {
            is_prime[1] = false;
        }
        for i in 2..=limit {
            if is_prime[i] {
                let mut j = i * i;
                while j <= limit {
                    is_prime[j] = false;
                    j += i;
                }
            }
        }
        (2..=limit as u32)
            .filter(|&n| is_prime[n as usize])
            .collect()
    }
    
    fn get_primes() -> &'static [u32] {
        PRIMES.get_or_init(|| sieve(100))
    }
    
    // --- Approach 3: Instance-level OnceLock (not just global) ---
    struct LazyConfig {
        inner: OnceLock<String>,
        prefix: String,
    }
    
    impl LazyConfig {
        fn new(prefix: &str) -> Self {
            LazyConfig {
                inner: OnceLock::new(),
                prefix: prefix.to_string(),
            }
        }
    
        fn get(&self) -> &str {
            self.inner
                .get_or_init(|| format!("{}-initialized", self.prefix))
        }
    }
    
    // --- Approach 4: Thread-safe once across multiple threads ---
    fn concurrent_once_init() -> usize {
        static INIT_COUNT: OnceLock<usize> = OnceLock::new();
        let call_count = Arc::new(Mutex::new(0usize));
    
        let handles: Vec<_> = (0..10)
            .map(|_| {
                let count = Arc::clone(&call_count);
                thread::spawn(move || {
                    INIT_COUNT.get_or_init(|| {
                        *count.lock().unwrap() += 1;
                        42
                    });
                })
            })
            .collect();
    
        for h in handles {
            h.join().unwrap();
        }
        let x = *call_count.lock().unwrap();
        x // should be 1 — init ran only once
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
    
        #[test]
        fn test_get_config_same_value() {
            let c1 = get_config();
            let c2 = get_config();
            assert_eq!(c1, c2);
            assert!(std::ptr::eq(c1 as *const _, c2 as *const _)); // same allocation
        }
    
        #[test]
        fn test_primes_correctness() {
            let primes = get_primes();
            assert_eq!(&primes[..5], &[2, 3, 5, 7, 11]);
            assert!(!primes.contains(&4));
            assert!(!primes.contains(&100));
        }
    
        #[test]
        fn test_lazy_config_cached() {
            let lc = LazyConfig::new("test");
            let v1 = lc.get();
            let v2 = lc.get();
            assert_eq!(v1, "test-initialized");
            assert_eq!(v1, v2);
        }
    
        #[test]
        fn test_concurrent_once_init_runs_exactly_once() {
            // OnceLock guarantees init closure runs at most once
            // even with 10 concurrent threads
            let count = concurrent_once_init();
            assert_eq!(count, 1, "init should run exactly once");
        }
    
        #[test]
        fn test_oncelock_get_before_init() {
            let lock: OnceLock<i32> = OnceLock::new();
            assert!(lock.get().is_none());
            lock.get_or_init(|| 42);
            assert_eq!(lock.get(), Some(&42));
        }
    }

    Key Differences

    Aspect          | Rust                                     | OCaml
    One-time init   | OnceLock::get_or_init                    | lazy + Lazy.force
    Return type     | &'static T — guaranteed static lifetime  | 'a Lazy.t — a suspended value
    Thread safety   | Guaranteed by OnceLock                   | Not guaranteed — concurrent forcing needs external synchronization
    Lazy evaluation | OnceLock + closure (explicit)            | lazy keyword (syntactic)
    Rust 1.80+      | std::sync::LazyLock<T>                   | Already has lazy

    OnceLock is preferred over std::sync::Once when the initialized value needs to be returned. LazyLock<T> (stable in Rust 1.80+) provides the lazy-like pattern: static X: LazyLock<T> = LazyLock::new(|| expensive_init()).
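That LazyLock pattern can be sketched as follows. The PRIMES_BELOW_30 static and its naive trial-division test are invented for this example; any expensive computation could sit in the closure:

```rust
use std::sync::LazyLock;

// The initializer lives with the declaration; it runs on first dereference.
static PRIMES_BELOW_30: LazyLock<Vec<u32>> = LazyLock::new(|| {
    (2..30u32)
        .filter(|&n| (2..n).all(|d| n % d != 0)) // naive primality check
        .collect()
});

fn main() {
    // First dereference runs the closure; later accesses reuse the cached Vec.
    assert_eq!(PRIMES_BELOW_30.first(), Some(&2));
    assert_eq!(PRIMES_BELOW_30.len(), 10);
    println!("{:?}", *PRIMES_BELOW_30);
}
```

Compared with the OnceLock version above, there is no separate accessor function, so call sites cannot disagree about how the value is produced.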

    OCaml Approach

    (* OCaml lazy values — initialized on first force *)
    let config = lazy "production-config-v42"
    let get_config () = Lazy.force config
    
    (* Lazy computation *)
    let primes = lazy (sieve 10_000)
    let get_primes () = Lazy.force primes
    
    (* Concurrency caveat (OCaml 5.0+) *)
    (* Lazy.force is not safe to call concurrently from multiple domains; *)
    (* protect shared suspensions with a Mutex, or keep one per domain *)
    let expensive = lazy begin
      Printf.printf "initializing...\n%!";
      heavy_computation ()
    end
    

    OCaml's lazy expr creates a suspended computation that runs once on Lazy.force. Unlike OnceLock::get_or_init, however, Lazy.force is not safe to call concurrently: the OCaml 5 stdlib documents that racing forces from multiple domains may raise Lazy.Undefined, so shared suspensions need external synchronization.


    Deep Comparison

    One-Time Initialization — Comparison

    Core Insight

    Lazy.t and OnceLock both implement a deferred singleton: compute a value at most once, then cache it forever. The key difference is thread safety — Rust's OnceLock coordinates concurrent initializers with atomics, while OCaml's Lazy.t does no cross-domain coordination, so concurrent forcing must be synchronized by the caller.

    OCaml Approach

  • lazy expr wraps an expression; Lazy.force evaluates it (once)
  • Result is cached — subsequent Lazy.force calls return immediately
  • Not safe for concurrent forcing across OCaml 5 domains; wrap shared suspensions in a Mutex
  • Typical use: module-level initialization, expensive config loading
  • Lazy.is_val checks whether a value is already evaluated, without forcing

Rust Approach

  • OnceLock<T> has been in std::sync since Rust 1.70
  • get_or_init(f) runs f at most once and returns &T thereafter
  • get() returns Option<&T> — None if not yet initialized
  • Works for static globals and instance fields alike
  • set(v) performs an explicit single write (returns Err(v) if already set)
  • LazyLock<T> (Rust 1.80+) provides a lock-free lazy static

Comparison Table

    Concept            | OCaml                                     | Rust
    Declare lazy       | let x = lazy (expr)                       | static X: OnceLock<T> = OnceLock::new()
    Force / initialize | Lazy.force x                              | X.get_or_init(|| expr)
    Check if ready     | Lazy.is_val x                             | X.get().is_some()
    Thread-safe        | No — force needs external synchronization | Yes — std guarantees
    Instance level     | lazy (...) stored in a record field       | OnceLock<T> field
    Type annotation    | 'a Lazy.t                                 | OnceLock<T>
    Return type        | 'a (the value)                            | &'static T (reference)
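The set / get pair can be exercised directly. This sketch (the first_write_wins helper is invented for illustration) shows that only the first write sticks:

```rust
use std::sync::OnceLock;

// `set` succeeds only for the first writer; later writes hand the rejected
// value back as Err, and `get_or_init` skips its closure once a value exists.
fn first_write_wins() -> (Option<i32>, Result<(), i32>, Result<(), i32>, i32) {
    let cell: OnceLock<i32> = OnceLock::new();
    let before = cell.get().copied();      // None: nothing stored yet
    let first = cell.set(1);               // Ok(()): first write wins
    let second = cell.set(2);              // Err(2): rejected value returned
    let stored = *cell.get_or_init(|| 99); // 1: initializer does not run
    (before, first, second, stored)
}

fn main() {
    let (before, first, second, stored) = first_write_wins();
    assert_eq!(before, None);
    assert_eq!(first, Ok(()));
    assert_eq!(second, Err(2));
    assert_eq!(stored, 1);
    println!("first write wins; stored = {stored}");
}
```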

    std vs tokio

    Aspect          | std version                   | tokio version
    Runtime         | OS threads via std::thread    | Async tasks on the tokio runtime
    Synchronization | std::sync::Mutex, Condvar     | tokio::sync::Mutex, channels
    Channels        | std::sync::mpsc (unbounded)   | tokio::sync::mpsc (bounded, async)
    Blocking        | Thread blocks on lock/recv    | Task yields; runtime switches tasks
    Overhead        | One OS thread per task        | Many tasks per thread (M:N)
    Best for        | CPU-bound, simple concurrency | I/O-bound, high-concurrency servers

    Exercises

  • Use LazyLock (Rust 1.80+) instead of OnceLock to remove the explicit get_or_init call.
  • Implement a once_cell-style OnceCell<T> manually using UnsafeCell<Option<T>> + Once.
  • Initialize the prime sieve on first use and benchmark first vs second call to verify one-time execution.
  • Implement a global RequestCounter that is initialized to 0 and incremented atomically — combine OnceLock with AtomicUsize.
  • Implement a per_thread_once: give each thread its own one-time-initialized value via thread_local! with a std::cell::OnceCell<T> (the single-threaded counterpart of OnceLock).