987: Read-Write Lock Pattern

Fundamental · Tutorial

The Problem

Demonstrate Rust's RwLock<T> — a readers-writer lock that allows multiple concurrent readers OR a single exclusive writer. Show that many threads can hold read locks simultaneously, and that a write lock excludes all readers. Implement a read-heavy shared configuration pattern.

🎯 Learning Outcomes

  • Use RwLock::read() to acquire a shared read guard — multiple threads can hold these simultaneously
  • Use RwLock::write() to acquire an exclusive write guard — blocks until all readers release
  • Combine with Arc<RwLock<T>> for shared ownership across threads
  • Understand when RwLock is preferred over Mutex: read-heavy workloads where concurrent reads improve throughput
  • Understand the write-starvation risk: a steady stream of readers can delay a writer indefinitely

Code Example

    #![allow(clippy::all)]
    // 987: Read-Write Lock Pattern
    // Rust: RwLock<T> — many readers OR one writer, never both
    
    use std::sync::{Arc, RwLock};
    use std::thread;
    
    // --- Approach 1: Multiple readers in parallel ---
    fn concurrent_readers() -> Vec<i32> {
        let data = Arc::new(RwLock::new(42i32));
    
        let handles: Vec<_> = (0..5)
            .map(|_| {
                let data = Arc::clone(&data);
                thread::spawn(move || {
                    let guard = data.read().unwrap(); // shared read lock
                    *guard // all 5 can hold read lock simultaneously
                })
            })
            .collect();
    
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    }
    
    // --- Approach 2: Writer excludes readers ---
    fn write_then_read() -> i32 {
        let data = Arc::new(RwLock::new(0i32));
    
        {
            let mut guard = data.write().unwrap(); // exclusive write lock
            *guard = 100;
            // guard drops here — write lock released
        }
    
        let guard = data.read().unwrap();
        *guard
    }
    
    // --- Approach 3: Shared config pattern (read-heavy) ---
    #[derive(Clone, Debug)]
    struct Config {
        threshold: i32,
        name: String,
    }
    
    fn config_pattern() -> (String, i32) {
        let config = Arc::new(RwLock::new(Config {
            threshold: 10,
            name: "default".to_string(),
        }));
    
        // Many readers
        let readers: Vec<_> = (0..4)
            .map(|_| {
                let config = Arc::clone(&config);
                thread::spawn(move || {
                    let c = config.read().unwrap();
                    (c.name.clone(), c.threshold)
                })
            })
            .collect();
    
        // One writer updates the config
        {
            let cfg = Arc::clone(&config);
            let writer = thread::spawn(move || {
                let mut c = cfg.write().unwrap();
                c.threshold = 99;
                c.name = "updated".to_string();
            });
            writer.join().unwrap();
        }
    
        for h in readers {
            h.join().unwrap();
        } // let readers finish
    
        let c = config.read().unwrap();
        (c.name.clone(), c.threshold)
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
    
        #[test]
        fn test_concurrent_readers_all_see_same() {
            let reads = concurrent_readers();
            assert_eq!(reads.len(), 5);
            assert!(reads.iter().all(|&v| v == 42));
        }
    
        #[test]
        fn test_write_then_read() {
            assert_eq!(write_then_read(), 100);
        }
    
        #[test]
        fn test_config_pattern() {
            let (name, threshold) = config_pattern();
            assert_eq!(name, "updated");
            assert_eq!(threshold, 99);
        }
    
        #[test]
        fn test_try_read_write() {
            let rw = RwLock::new(0i32);
            let _r1 = rw.read().unwrap();
        let _r2 = rw.read().unwrap(); // multiple read guards can coexist
        // try_write() fails while read guards are active
        assert!(rw.try_write().is_err());
        }
    
        #[test]
        fn test_rwlock_write_exclusive() {
            let rw = Arc::new(RwLock::new(vec![1, 2, 3]));
            {
                let mut w = rw.write().unwrap();
                w.push(4);
            }
            assert_eq!(*rw.read().unwrap(), vec![1, 2, 3, 4]);
        }
    }

    Key Differences

    | Aspect      | Rust                                        | OCaml                                |
    |-------------|---------------------------------------------|--------------------------------------|
    | Read guard  | data.read().unwrap() — RAII                 | manual read_lock / read_unlock       |
    | Write guard | data.write().unwrap() — RAII                | manual write_lock / write_unlock     |
    | Starvation  | Reader- or writer-preferring (OS-dependent) | depends on the manual implementation |
    | Poisoning   | Guards report panics via PoisonError        | no equivalent                        |
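    The poisoning entry above refers to lock poisoning: if a thread panics while holding the write guard, the lock is marked poisoned and later read()/write() calls return Err. The data is still reachable through the error, as this minimal sketch shows:

    ```rust
    use std::sync::{Arc, RwLock};
    use std::thread;

    fn main() {
        let lock = Arc::new(RwLock::new(0i32));
        let writer_lock = Arc::clone(&lock);

        // A thread panics while holding the write guard, poisoning the lock.
        let _ = thread::spawn(move || {
            let _guard = writer_lock.write().unwrap();
            panic!("poisoning the lock");
        })
        .join();

        // Later accesses get Err(PoisonError); into_inner() recovers the guard.
        match lock.read() {
            Ok(_) => println!("not poisoned"),
            Err(poisoned) => println!("recovered {}", *poisoned.into_inner()),
        }
    }
    ```

    Note that only a panicking *writer* poisons the lock; a reader panicking does not, since readers cannot have left the data half-modified.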

    RwLock is a trade-off: better throughput for read-heavy workloads, but higher complexity than Mutex. If writes are frequent, Mutex is simpler and may perform equally well.
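    One way to see the trade-off is a quick micro-benchmark. The sketch below (bench is a helper defined here, and the numbers are deliberately unscientific — timings vary by platform and contention level) times 8 threads doing pure reads through each lock type:

    ```rust
    use std::sync::{Arc, Mutex, RwLock};
    use std::thread;
    use std::time::{Duration, Instant};

    // Time `n` threads each performing `iters` reads via `read_fn`.
    fn bench<T: Send + Sync + 'static>(
        shared: Arc<T>,
        n: usize,
        iters: usize,
        read_fn: fn(&T) -> u64,
    ) -> Duration {
        let start = Instant::now();
        let handles: Vec<_> = (0..n)
            .map(|_| {
                let shared = Arc::clone(&shared);
                thread::spawn(move || {
                    let mut sum = 0u64;
                    for _ in 0..iters {
                        sum += read_fn(&shared);
                    }
                    sum
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        start.elapsed()
    }

    fn main() {
        let (n, iters) = (8, 100_000);
        let rw = bench(Arc::new(RwLock::new(1u64)), n, iters, |l| *l.read().unwrap());
        let mx = bench(Arc::new(Mutex::new(1u64)), n, iters, |l| *l.lock().unwrap());
        println!("RwLock: {rw:?}");
        println!("Mutex:  {mx:?}");
    }
    ```

    With readers only, the RwLock threads can all proceed in parallel while the Mutex threads serialize; mixing in frequent writes shrinks or erases that gap.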

    OCaml Approach

    (* OCaml's Stdlib (including OCaml 5) has no reader-writer lock.
       A standard simulation uses Mutex + Condition + a reader count. *)
    type rwlock = {
      m : Mutex.t;
      cond : Condition.t;
      mutable readers : int;
      mutable writing : bool;
    }
    
    let create () =
      { m = Mutex.create (); cond = Condition.create ();
        readers = 0; writing = false }
    
    let read_lock l =
      Mutex.lock l.m;
      while l.writing do Condition.wait l.cond l.m done;
      l.readers <- l.readers + 1;
      Mutex.unlock l.m
    
    let read_unlock l =
      Mutex.lock l.m;
      l.readers <- l.readers - 1;
      if l.readers = 0 then Condition.broadcast l.cond;
      Mutex.unlock l.m
    
    let write_lock l =
      Mutex.lock l.m;
      while l.writing || l.readers > 0 do Condition.wait l.cond l.m done;
      l.writing <- true;
      Mutex.unlock l.m
    
    let write_unlock l =
      Mutex.lock l.m;
      l.writing <- false;
      Condition.broadcast l.cond;
      Mutex.unlock l.m
    

    OCaml's standard library provides Mutex, Condition, and Semaphore, but no reader-writer lock, so one has to be assembled by hand as above (note that this simple version lets a stream of readers starve a waiting writer). Rust's std::sync::RwLock, by contrast, typically wraps the platform's native primitive and releases locks automatically through RAII guards.


    Deep Comparison

    Read-Write Lock Pattern — Comparison

    Core Insight

    RwLock encodes the read-write exclusion invariant in the type: &T access (shared) maps to read lock; &mut T access (exclusive) maps to write lock. This mirrors Rust's own ownership model.
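    Concretely, the guards implement Deref and DerefMut, so a read guard behaves like a &T and a write guard like a &mut T:

    ```rust
    use std::sync::RwLock;

    fn main() {
        let lock = RwLock::new(String::from("hello"));

        // read() hands out what is effectively a &String: shared, immutable.
        {
            let r = lock.read().unwrap();
            assert_eq!(r.len(), 5); // Deref lets the guard act like &String
        }

        // write() hands out what is effectively a &mut String: exclusive, mutable.
        {
            let mut w = lock.write().unwrap();
            w.push_str(", world"); // DerefMut lets the guard act like &mut String
        }

        assert_eq!(*lock.read().unwrap(), "hello, world");
        println!("ok: {}", lock.read().unwrap());
    }
    ```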

    OCaml Approach

  • No standard RwLock — must be simulated with Mutex + Condition + a reader count
  • A readers counter tracks active readers; a writer waits until it reaches 0
  • A writer_waiting flag can be added so new readers don't starve a waiting writer
  • Locks are often unnecessary in idiomatic OCaml, where shared data is immutable

Rust Approach

  • RwLock::new(data) is standard in std::sync
  • rw.read() → RwLockReadGuard — shared, many at once
  • rw.write() → RwLockWriteGuard — exclusive, blocks all others
  • try_read() / try_write() — non-blocking variants
  • RAII: guards unlock on drop — no manual unlock

Comparison Table

    | Concept                   | OCaml (simulated)                 | Rust                        |
    |---------------------------|-----------------------------------|-----------------------------|
    | Create                    | manual record + Mutex + Condition | RwLock::new(data)           |
    | Read lock                 | read_lock / read_unlock           | rw.read().unwrap()          |
    | Write lock                | write_lock / write_unlock         | rw.write().unwrap()         |
    | Multiple readers          | yes (via reader count)            | yes — many RwLockReadGuards |
    | Prevent writer starvation | manual writer_waiting flag        | implementation-dependent    |
    | Unlock                    | manual call                       | drop the guard (RAII)       |
    | Try-lock                  | not shown (custom needed)         | try_read() / try_write()    |

    std vs tokio

    | Aspect          | std version                        | tokio version                        |
    |-----------------|------------------------------------|--------------------------------------|
    | Runtime         | OS threads via std::thread         | async tasks on the tokio runtime     |
    | Synchronization | std::sync::RwLock, Mutex, Condvar  | tokio::sync::RwLock, Mutex           |
    | Channels        | std::sync::mpsc (unbounded)        | tokio::sync::mpsc (bounded, async)   |
    | Blocking        | thread blocks on lock/recv         | task yields; runtime switches tasks  |
    | Overhead        | one OS thread per task             | many tasks per thread (M:N)          |
    | Best for        | CPU-bound, simple concurrency      | I/O-bound, high-concurrency servers  |

    Exercises

  • Measure the throughput difference between Mutex and RwLock with 8 readers and 1 writer at a 99:1 read-write ratio.
  • Demonstrate write starvation: start 10 long-running reader threads, then try to acquire a write lock — observe the delay.
  • Attempt an upgrade_from_read_to_write that releases the read lock and acquires the write lock atomically (spoiler: not possible in std — discuss why).
  • Build a CachingConfig where reads check an in-memory HashMap under the read lock, and a miss falls through to a "database" that takes the write lock to update the cache.
  • Keep a version counter alongside the data: increment an AtomicUsize on every write and read it under the read lock to detect stale views.