342: Arc<Mutex<T>> Pattern

The Problem

Multiple threads need to read and modify the same data safely. Arc<Mutex<T>> is Rust's canonical solution: Arc provides reference-counted shared ownership across threads, while Mutex ensures only one thread accesses the inner value at a time. The pattern implements the classic mutual-exclusion concept formalized by Dijkstra (1965): a lock that serializes access to a critical section. Without synchronization, concurrent writes are a data race, which is undefined behavior; with Arc<Mutex<T>>, Rust's type system prevents data races at compile time, something C++ and Go can only detect at runtime with race detectors.

🎯 Learning Outcomes

  • Combine Arc::new(Mutex::new(value)) for thread-safe shared state
  • Clone the Arc with Arc::clone(&arc) to share ownership across threads
  • Acquire a lock guard with mutex.lock().unwrap() and dereference it to access the data
  • Understand that the lock guard releases automatically when it goes out of scope (RAII)
  • Build thread-safe structs that wrap Arc<Mutex<T>> for ergonomic APIs
  • Recognize deadlock risks when holding multiple locks simultaneously

Code Example

    #![allow(clippy::all)]
    //! # Arc<Mutex<T>> Pattern
    //! Thread-safe shared mutable state: Arc gives shared ownership, Mutex ensures exclusive access.
    
    use std::sync::{Arc, Mutex};
    use std::thread;
    
    pub fn shared_counter(num_threads: usize) -> i32 {
        let counter = Arc::new(Mutex::new(0));
        let handles: Vec<_> = (0..num_threads)
            .map(|_| {
                let c = Arc::clone(&counter);
                thread::spawn(move || {
                    *c.lock().unwrap() += 1;
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        *counter.lock().unwrap()
    }
    
    pub struct ThreadSafeCache<T> {
        data: Arc<Mutex<Vec<T>>>,
    }
    
    impl<T: Clone> ThreadSafeCache<T> {
        pub fn new() -> Self {
            Self {
                data: Arc::new(Mutex::new(Vec::new())),
            }
        }
        pub fn push(&self, item: T) {
            self.data.lock().unwrap().push(item);
        }
        pub fn get_all(&self) -> Vec<T> {
            self.data.lock().unwrap().clone()
        }
        pub fn len(&self) -> usize {
            self.data.lock().unwrap().len()
        }
    }
    
    impl<T: Clone> Default for ThreadSafeCache<T> {
        fn default() -> Self {
            Self::new()
        }
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
        #[test]
        fn counter_works() {
            assert_eq!(shared_counter(10), 10);
        }
        #[test]
        fn cache_thread_safe() {
            let cache = Arc::new(ThreadSafeCache::<i32>::new());
            let handles: Vec<_> = (0..5)
                .map(|i| {
                    let c = Arc::clone(&cache);
                    thread::spawn(move || c.push(i))
                })
                .collect();
            for h in handles {
                h.join().unwrap();
            }
            assert_eq!(cache.len(), 5);
        }
    }
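
Because the guard's Drop implementation releases the mutex, scoping the guard controls exactly how long the lock is held. A minimal sketch of that RAII behavior (bump_and_read is a hypothetical helper, not part of the example above):

```rust
use std::sync::{Arc, Mutex};

// Increment, then read, without holding the lock across both steps.
pub fn bump_and_read(counter: &Arc<Mutex<i32>>) -> i32 {
    {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // guard dropped here: the lock is released (RAII)

    // Re-acquire to read; drop(guard) would also release a lock early.
    let guard = counter.lock().unwrap();
    *guard
}

fn main() {
    let counter = Arc::new(Mutex::new(0));
    println!("{}", bump_and_read(&counter)); // 1
}
```

Forgetting this scoping and calling lock() a second time while the first guard is still alive is the simplest way to deadlock a single thread.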

    Key Differences

    Aspect             | Rust Arc<Mutex<T>>                   | OCaml ref + Mutex
    Ownership tracking | Compile-time via Arc reference count | GC tracks all references
    Lock acquisition   | .lock() returns Result<Guard>        | Mutex.lock may raise
    Poisoning on panic | Yes (subsequent locks get Err)       | No (state may be corrupt)
    Deadlock detection | None (avoid by design)               | None
    RwLock variant     | Arc<RwLock<T>> for read-many         | RwLock in Thread module
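
The RwLock row can be sketched briefly: any number of readers may hold the lock at once, while a writer gets exclusive access. sum_with_readers below is a hypothetical illustration, not part of the example code:

```rust
use std::sync::{Arc, RwLock};
use std::thread;

// Several reader threads take shared read locks concurrently and
// each computes the sum of the protected Vec.
pub fn sum_with_readers(values: Vec<i32>, readers: usize) -> i32 {
    let shared = Arc::new(RwLock::new(values));
    let handles: Vec<_> = (0..readers)
        .map(|_| {
            let s = Arc::clone(&shared);
            // .read() allows concurrent readers; .write() would be exclusive.
            thread::spawn(move || s.read().unwrap().iter().sum::<i32>())
        })
        .collect();
    let sums: Vec<i32> = handles.into_iter().map(|h| h.join().unwrap()).collect();
    sums.first().copied().unwrap_or(0)
}

fn main() {
    println!("{}", sum_with_readers(vec![1, 2, 3], 4)); // 6
}
```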

    OCaml Approach

    OCaml 5 uses domains for parallelism with Mutex from the standard library:

    let m = Mutex.create ()
    let counter = ref 0

    let inc () =
      Mutex.lock m;
      incr counter;
      Mutex.unlock m

    (* safer: release the lock even if f raises *)
    let with_lock m f =
      Mutex.lock m;
      Fun.protect ~finally:(fun () -> Mutex.unlock m) f

    OCaml's garbage collector tracks the shared reference automatically, so no Arc (and no reference counting) is needed; the GC determines liveness. In OCaml 4, all threads share a single global runtime lock (comparable to Python's GIL), so only one thread runs OCaml code at a time, making Mutex less critical for pure OCaml data; true parallelism requires the domains introduced in OCaml 5.


    Deep Comparison

    OCaml vs Rust: Arc Mutex Pattern

    Overview

    See the example.rs and example.ml files for detailed implementations.

    Key Differences

    Aspect         | OCaml          | Rust
    Type system    | Hindley-Milner | Ownership + traits
    Memory         | GC             | Zero-cost abstractions
    Mutability     | Explicit ref   | mut keyword
    Error handling | Option/Result  | Result<T, E>

    See README.md for detailed comparison.

    Exercises

  • RwLock comparison: Replace Mutex with RwLock in ThreadSafeCache so multiple readers can access the data simultaneously; benchmark read throughput with 8 concurrent readers.
  • Deadlock scenario: Write two threads each holding one of two mutexes and waiting for the other; observe the deadlock, then fix it by enforcing a consistent lock acquisition order.
  • Mutex-free counter: Implement the shared counter using AtomicI32 instead of Mutex<i32>; compare performance with the mutex version under contention.
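
As a starting point for the mutex-free exercise, a sketch of the counter built on atomics (names here are placeholders, not a reference solution):

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::Arc;
use std::thread;

// AtomicI32 replaces Mutex<i32>: fetch_add is a single lock-free
// read-modify-write, so no guard is ever held.
pub fn atomic_counter(num_threads: usize) -> i32 {
    let counter = Arc::new(AtomicI32::new(0));
    let handles: Vec<_> = (0..num_threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                c.fetch_add(1, Ordering::Relaxed);
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    println!("{}", atomic_counter(10)); // 10
}
```

Relaxed ordering suffices for a pure counter; synchronizing other data through the counter would require Acquire/Release.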

Open Source Repos