Arc<Mutex<T>> Pattern
Tutorial
The Problem
Multiple threads need to read and modify the same data safely. Arc<Mutex<T>> is Rust's canonical solution: Arc provides reference-counted shared ownership across threads, while Mutex ensures only one thread accesses the inner value at a time. This pattern implements the classic mutual-exclusion concept formalized by Dijkstra (1965): a lock that serializes access to a critical section. Without it, concurrent unsynchronized writes are a data race, which is undefined behavior; with it, Rust's type system statically prevents data races, something C++ and Go can only detect at runtime with race detectors.
🎯 Learning Outcomes
- Arc::new(Mutex::new(value)) for thread-safe shared state
- Arc::clone(&arc) to share ownership across threads
- mutex.lock().unwrap() and dereference to access data
- Wrapping Arc<Mutex<T>> in a struct for ergonomic APIs
Code Example
```rust
#![allow(clippy::all)]
//! # Arc<Mutex<T>> Pattern
//! Thread-safe shared mutable state: Arc gives shared ownership, Mutex ensures exclusive access.
use std::sync::{Arc, Mutex};
use std::thread;

pub fn shared_counter(num_threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..num_threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                *c.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    *counter.lock().unwrap()
}

pub struct ThreadSafeCache<T> {
    data: Arc<Mutex<Vec<T>>>,
}

impl<T: Clone> ThreadSafeCache<T> {
    pub fn new() -> Self {
        Self {
            data: Arc::new(Mutex::new(Vec::new())),
        }
    }

    pub fn push(&self, item: T) {
        self.data.lock().unwrap().push(item);
    }

    pub fn get_all(&self) -> Vec<T> {
        self.data.lock().unwrap().clone()
    }

    pub fn len(&self) -> usize {
        self.data.lock().unwrap().len()
    }
}

impl<T: Clone> Default for ThreadSafeCache<T> {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn counter_works() {
        assert_eq!(shared_counter(10), 10);
    }

    #[test]
    fn cache_thread_safe() {
        let cache = Arc::new(ThreadSafeCache::<i32>::new());
        let handles: Vec<_> = (0..5)
            .map(|i| {
                let c = Arc::clone(&cache);
                thread::spawn(move || c.push(i))
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(cache.len(), 5);
    }
}
```
Key Differences
| Aspect | Rust Arc<Mutex<T>> | OCaml ref + Mutex |
|---|---|---|
| Ownership tracking | Runtime atomic reference count (ownership rules checked at compile time) | GC tracks all references |
| Lock acquisition | .lock() returns Result<Guard> | Mutex.lock may raise |
| Poisoning on panic | Yes: subsequent locks get Err | No: state may be corrupt |
| Deadlock detection | None: avoid by design | None |
| RwLock variant | Arc<RwLock<T>> for read-many | No stdlib equivalent; build from Mutex + Condition |
OCaml Approach
OCaml 5 uses domains for parallelism with Mutex from the standard library:
```ocaml
let m = Mutex.create ()
let counter = ref 0

let inc () =
  Mutex.lock m;
  incr counter;
  Mutex.unlock m

(* Safer: release the lock even if f raises. *)
let with_lock m f =
  Mutex.lock m;
  Fun.protect ~finally:(fun () -> Mutex.unlock m) f
```
OCaml's garbage collector handles the shared-ownership bookkeeping automatically; no Arc is needed, since the GC tracks liveness. In OCaml 4, all threads share a single global runtime lock (comparable to Python's GIL), so only one thread runs OCaml code at a time, making Mutex less critical for pure OCaml data.
Deep Comparison
OCaml vs Rust: Arc Mutex Pattern
Overview
See the example.rs and example.ml files for detailed implementations.
Key Differences
| Aspect | OCaml | Rust |
|---|---|---|
| Type system | Hindley-Milner inference | Ownership + traits |
| Memory | Garbage collected | Ownership, no GC |
| Mutability | Explicit ref cells | mut keyword |
| Error handling | Exceptions plus Option/Result | Result<T, E> plus panics |
See README.md for detailed comparison.
Exercises
1. **RwLock comparison**: Replace Mutex with RwLock in ThreadSafeCache so multiple readers can access simultaneously; benchmark read throughput with 8 concurrent readers.
2. **Atomic counter**: Use AtomicI32 instead of Mutex<i32>; compare performance with the mutex version under contention.