989 Once Init
Tutorial
The Problem
Demonstrate one-time initialization using OnceLock<T> — a global that is set exactly once and then read-only for the lifetime of the program. Multiple concurrent threads may call get_or_init simultaneously; only one initialization runs. Use for global configuration, pre-computed prime sieves, and expensive singleton setup.
🎯 Learning Outcomes
- Declare static CONFIG: OnceLock<String> and initialize it with CONFIG.get_or_init(|| ...)
- get_or_init guarantees the initializer runs exactly once, even under concurrent calls
- OnceLock<Vec<u32>> for expensive pre-computed data (a prime sieve) initialized on first use
- OnceLock<T> vs Lazy<T> (from once_cell; std::sync::LazyLock in Rust 1.80+)
- OCaml's lazy keyword and Lazy.force
Code Example
#![allow(clippy::all)]
// 989: One-Time Initialization
// Rust: OnceLock<T> — set once, read many times (thread-safe)
use std::sync::{Arc, Mutex, OnceLock};
use std::thread;
// --- Approach 1: OnceLock<T> for global one-time init ---
static CONFIG: OnceLock<String> = OnceLock::new();
fn get_config() -> &'static String {
CONFIG.get_or_init(|| {
// Only runs once, even with concurrent calls
"production-config-v42".to_string()
})
}
// --- Approach 2: OnceLock with expensive computation ---
static PRIMES: OnceLock<Vec<u32>> = OnceLock::new();
fn sieve(limit: usize) -> Vec<u32> {
let mut is_prime = vec![true; limit + 1];
is_prime[0] = false;
if limit > 0 {
is_prime[1] = false;
}
for i in 2..=limit {
if is_prime[i] {
let mut j = i * i;
while j <= limit {
is_prime[j] = false;
j += i;
}
}
}
(2..=limit as u32)
.filter(|&n| is_prime[n as usize])
.collect()
}
fn get_primes() -> &'static [u32] {
PRIMES.get_or_init(|| sieve(100))
}
// --- Approach 3: Instance-level OnceLock (not just global) ---
struct LazyConfig {
inner: OnceLock<String>,
prefix: String,
}
impl LazyConfig {
fn new(prefix: &str) -> Self {
LazyConfig {
inner: OnceLock::new(),
prefix: prefix.to_string(),
}
}
fn get(&self) -> &str {
self.inner
.get_or_init(|| format!("{}-initialized", self.prefix))
}
}
// --- Approach 4: Thread-safe once across multiple threads ---
fn concurrent_once_init() -> usize {
static INIT_COUNT: OnceLock<usize> = OnceLock::new();
let call_count = Arc::new(Mutex::new(0usize));
let handles: Vec<_> = (0..10)
.map(|_| {
let count = Arc::clone(&call_count);
thread::spawn(move || {
INIT_COUNT.get_or_init(|| {
*count.lock().unwrap() += 1;
42
});
})
})
.collect();
for h in handles {
h.join().unwrap();
}
let x = *call_count.lock().unwrap();
x // should be 1 — init ran only once
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_get_config_same_value() {
let c1 = get_config();
let c2 = get_config();
assert_eq!(c1, c2);
assert!(std::ptr::eq(c1 as *const _, c2 as *const _)); // same allocation
}
#[test]
fn test_primes_correctness() {
let primes = get_primes();
assert_eq!(&primes[..5], &[2, 3, 5, 7, 11]);
assert!(!primes.contains(&4));
assert!(!primes.contains(&100));
}
#[test]
fn test_lazy_config_cached() {
let lc = LazyConfig::new("test");
let v1 = lc.get();
let v2 = lc.get();
assert_eq!(v1, "test-initialized");
assert_eq!(v1, v2);
}
#[test]
fn test_concurrent_once_init_runs_exactly_once() {
// OnceLock guarantees init closure runs at most once
// even with 10 concurrent threads
let count = concurrent_once_init();
assert_eq!(count, 1, "init should run exactly once");
}
#[test]
fn test_oncelock_get_before_init() {
let lock: OnceLock<i32> = OnceLock::new();
assert!(lock.get().is_none());
lock.get_or_init(|| 42);
assert_eq!(lock.get(), Some(&42));
}
}
Key Differences
| Aspect | Rust | OCaml |
|---|---|---|
| One-time init | OnceLock::get_or_init | lazy + Lazy.force |
| Return type | &'static T — guaranteed static lifetime | 'a Lazy.t — suspended value |
| Thread safety | Guaranteed by OnceLock | Plain Lazy.force is not domain-safe in OCaml 5; guard with a mutex |
| Lazy evaluation | OnceLock + closure (explicit) | lazy keyword (syntactic) |
| Rust 1.80+ | std::sync::LazyLock<T> | Already has lazy |
OnceLock is preferred over std::sync::Once when the initialized value needs to be returned. LazyLock<T> (stable in Rust 1.80+) provides the lazy-like pattern: static X: LazyLock<T> = LazyLock::new(|| expensive_init()).
OCaml Approach
(* OCaml lazy values — initialized on first force *)
let config = lazy "production-config-v42"
let get_config () = Lazy.force config
(* Lazy computation *)
let primes = lazy (sieve 10_000)
let get_primes () = Lazy.force primes
(* Caution (OCaml 5.0+): plain Lazy.force is not domain-safe *)
(* Forcing the same lazy value from several domains concurrently is unsafe *)
let expensive = lazy begin
Printf.printf "initializing...\n%!";
heavy_computation ()
end
OCaml's lazy expr creates a suspended computation that runs once on Lazy.force; later forces return the cached result. Within a single domain this matches the cache-once behaviour of OnceLock::get_or_init. Across domains in OCaml 5, however, forcing the same lazy value concurrently is not safe; unlike OnceLock, it needs external synchronization (e.g. a Mutex.t) or a domain-safe alternative.
Full Source
#![allow(clippy::all)]
// 989: One-Time Initialization
// Rust: OnceLock<T> — set once, read many times (thread-safe)
use std::sync::{Arc, Mutex, OnceLock};
use std::thread;
// --- Approach 1: OnceLock<T> for global one-time init ---
static CONFIG: OnceLock<String> = OnceLock::new();
fn get_config() -> &'static String {
CONFIG.get_or_init(|| {
// Only runs once, even with concurrent calls
"production-config-v42".to_string()
})
}
// --- Approach 2: OnceLock with expensive computation ---
static PRIMES: OnceLock<Vec<u32>> = OnceLock::new();
fn sieve(limit: usize) -> Vec<u32> {
let mut is_prime = vec![true; limit + 1];
is_prime[0] = false;
if limit > 0 {
is_prime[1] = false;
}
for i in 2..=limit {
if is_prime[i] {
let mut j = i * i;
while j <= limit {
is_prime[j] = false;
j += i;
}
}
}
(2..=limit as u32)
.filter(|&n| is_prime[n as usize])
.collect()
}
fn get_primes() -> &'static [u32] {
PRIMES.get_or_init(|| sieve(100))
}
// --- Approach 3: Instance-level OnceLock (not just global) ---
struct LazyConfig {
inner: OnceLock<String>,
prefix: String,
}
impl LazyConfig {
fn new(prefix: &str) -> Self {
LazyConfig {
inner: OnceLock::new(),
prefix: prefix.to_string(),
}
}
fn get(&self) -> &str {
self.inner
.get_or_init(|| format!("{}-initialized", self.prefix))
}
}
// --- Approach 4: Thread-safe once across multiple threads ---
fn concurrent_once_init() -> usize {
static INIT_COUNT: OnceLock<usize> = OnceLock::new();
let call_count = Arc::new(Mutex::new(0usize));
let handles: Vec<_> = (0..10)
.map(|_| {
let count = Arc::clone(&call_count);
thread::spawn(move || {
INIT_COUNT.get_or_init(|| {
*count.lock().unwrap() += 1;
42
});
})
})
.collect();
for h in handles {
h.join().unwrap();
}
let x = *call_count.lock().unwrap();
x // should be 1 — init ran only once
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_get_config_same_value() {
let c1 = get_config();
let c2 = get_config();
assert_eq!(c1, c2);
assert!(std::ptr::eq(c1 as *const _, c2 as *const _)); // same allocation
}
#[test]
fn test_primes_correctness() {
let primes = get_primes();
assert_eq!(&primes[..5], &[2, 3, 5, 7, 11]);
assert!(!primes.contains(&4));
assert!(!primes.contains(&100));
}
#[test]
fn test_lazy_config_cached() {
let lc = LazyConfig::new("test");
let v1 = lc.get();
let v2 = lc.get();
assert_eq!(v1, "test-initialized");
assert_eq!(v1, v2);
}
#[test]
fn test_concurrent_once_init_runs_exactly_once() {
// OnceLock guarantees init closure runs at most once
// even with 10 concurrent threads
let count = concurrent_once_init();
assert_eq!(count, 1, "init should run exactly once");
}
#[test]
fn test_oncelock_get_before_init() {
let lock: OnceLock<i32> = OnceLock::new();
assert!(lock.get().is_none());
lock.get_or_init(|| 42);
assert_eq!(lock.get(), Some(&42));
}
}
Deep Comparison
One-Time Initialization — Comparison
Core Insight
Lazy.t and OnceLock both implement a deferred singleton: compute a value at most once, then cache it forever. The main difference is thread safety: Rust's OnceLock uses atomics, so concurrent initialization is safe out of the box, while OCaml's Lazy.t is only safe within a single domain and needs external synchronization when shared across OCaml 5 domains.
OCaml Approach
- lazy expr wraps an expression; Lazy.force evaluates it (once)
- Repeated Lazy.force calls return the cached result immediately
- Lazy.is_val checks if already evaluated, without forcing
Rust Approach
- OnceLock<T> is in std::sync since Rust 1.70
- get_or_init(f) runs f at most once, returns &T thereafter
- get() returns Option<&T>: None if not yet initialized
- Works as static globals and as instance fields
- set(v) for an explicit single write (returns an error if already set)
- LazyLock<T> (Rust 1.80+) for a lazy static without an explicit init call
Comparison Table
| Concept | OCaml | Rust |
|---|---|---|
| Declare lazy | let x = lazy (expr) | static X: OnceLock<T> = OnceLock::new() |
| Force / initialize | Lazy.force x | X.get_or_init(\|\| expr) |
| Check if ready | Lazy.is_val x | X.get().is_some() |
| Thread-safe | Not across domains; guard with a mutex | Yes, guaranteed by std |
| Instance level | 'a Lazy.t stored in a record field | OnceLock<T> field |
| Type annotation | 'a Lazy.t | OnceLock<T> |
| Return type | 'a (the value) | &'static T (reference) |
std vs tokio
| Aspect | std version | tokio version |
|---|---|---|
| Runtime | OS threads via std::thread | Async tasks on tokio runtime |
| Synchronization | std::sync::Mutex, Condvar | tokio::sync::Mutex, channels |
| Channels | std::sync::mpsc (unbounded) | tokio::sync::mpsc (bounded, async) |
| Blocking | Thread blocks on lock/recv | Task yields, runtime switches tasks |
| Overhead | One OS thread per task | Many tasks per thread (M:N) |
| Best for | CPU-bound, simple concurrency | I/O-bound, high-concurrency servers |
Exercises
- Use LazyLock (Rust 1.80+) instead of OnceLock to remove the explicit get_or_init call.
- Implement a once_cell-style OnceCell<T> manually using UnsafeCell<Option<T>> + Once.
- Build a RequestCounter that is initialized to 0 and incremented atomically: combine OnceLock with AtomicUsize.
- Write per_thread_once: each thread gets its own one-time-initialized value using thread_local! { static: OnceLock<T> }.