987 RwLock Pattern
Tutorial
The Problem
Demonstrate Rust's RwLock<T> — a readers-writer lock that allows multiple concurrent readers OR a single exclusive writer. Show that many threads can hold read locks simultaneously, and that a write lock excludes all readers. Implement a read-heavy shared configuration pattern.
🎯 Learning Outcomes
- RwLock::read() to acquire a shared read guard — multiple threads can hold these simultaneously
- RwLock::write() to acquire an exclusive write guard — blocks until all readers release
- Arc<RwLock<T>> for shared ownership across threads
- When RwLock is preferred over Mutex: read-heavy workloads where concurrent reads improve throughput

Code Example
#![allow(clippy::all)]
// 987: Read-Write Lock Pattern
// Rust: RwLock<T> — many readers OR one writer, never both
use std::sync::{Arc, RwLock};
use std::thread;
// --- Approach 1: Multiple readers in parallel ---
fn concurrent_readers() -> Vec<i32> {
let data = Arc::new(RwLock::new(42i32));
let handles: Vec<_> = (0..5)
.map(|_| {
let data = Arc::clone(&data);
thread::spawn(move || {
let guard = data.read().unwrap(); // shared read lock
*guard // all 5 can hold read lock simultaneously
})
})
.collect();
handles.into_iter().map(|h| h.join().unwrap()).collect()
}
// --- Approach 2: Writer excludes readers ---
fn write_then_read() -> i32 {
let data = Arc::new(RwLock::new(0i32));
{
let mut guard = data.write().unwrap(); // exclusive write lock
*guard = 100;
// guard drops here — write lock released
}
let guard = data.read().unwrap();
*guard
}
// --- Approach 3: Shared config pattern (read-heavy) ---
#[derive(Clone, Debug)]
struct Config {
threshold: i32,
name: String,
}
fn config_pattern() -> (String, i32) {
let config = Arc::new(RwLock::new(Config {
threshold: 10,
name: "default".to_string(),
}));
// Many readers
let readers: Vec<_> = (0..4)
.map(|_| {
let config = Arc::clone(&config);
thread::spawn(move || {
let c = config.read().unwrap();
(c.name.clone(), c.threshold)
})
})
.collect();
// One writer updates the config
{
let cfg = Arc::clone(&config);
let writer = thread::spawn(move || {
let mut c = cfg.write().unwrap();
c.threshold = 99;
c.name = "updated".to_string();
});
writer.join().unwrap();
}
for h in readers {
h.join().unwrap();
} // let readers finish
let c = config.read().unwrap();
(c.name.clone(), c.threshold)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_concurrent_readers_all_see_same() {
let reads = concurrent_readers();
assert_eq!(reads.len(), 5);
assert!(reads.iter().all(|&v| v == 42));
}
#[test]
fn test_write_then_read() {
assert_eq!(write_then_read(), 100);
}
#[test]
fn test_config_pattern() {
let (name, threshold) = config_pattern();
assert_eq!(name, "updated");
assert_eq!(threshold, 99);
}
#[test]
fn test_try_read_write() {
let rw = RwLock::new(0i32);
let _r1 = rw.read().unwrap();
let _r2 = rw.read().unwrap(); // multiple reads OK
// rw.try_write() would fail here (readers active)
assert!(rw.try_write().is_err());
}
#[test]
fn test_rwlock_write_exclusive() {
let rw = Arc::new(RwLock::new(vec![1, 2, 3]));
{
let mut w = rw.write().unwrap();
w.push(4);
}
assert_eq!(*rw.read().unwrap(), vec![1, 2, 3, 4]);
}
}

Key Differences
| Aspect | Rust | OCaml |
|---|---|---|
| Read guard | data.read().unwrap() — RAII | manual read_lock / read_unlock calls |
| Write guard | data.write().unwrap() — RAII | manual write_lock / write_unlock calls |
| Starvation | Writer-prefer or reader-prefer — unspecified, platform-dependent | Whatever policy the hand-rolled lock implements |
| Poison | read()/write() return Err if a writer panicked while holding the lock | No equivalent |
RwLock is a trade-off: better throughput for read-heavy workloads, but higher complexity than Mutex. If writes are frequent, Mutex is simpler and may perform equally well.
OCaml Approach
(* OCaml's Stdlib has Mutex and Condition but no readers-writer lock,
   so we hand-roll one: a mutex, a condition variable, a reader count,
   and a writer flag. *)
type rwlock = {
  m : Mutex.t;
  c : Condition.t;
  mutable readers : int;
  mutable writer : bool;
}

let create () =
  { m = Mutex.create (); c = Condition.create (); readers = 0; writer = false }

let read_lock l =
  Mutex.lock l.m;
  while l.writer do Condition.wait l.c l.m done;
  l.readers <- l.readers + 1;
  Mutex.unlock l.m

let read_unlock l =
  Mutex.lock l.m;
  l.readers <- l.readers - 1;
  if l.readers = 0 then Condition.broadcast l.c;
  Mutex.unlock l.m

let write_lock l =
  Mutex.lock l.m;
  while l.writer || l.readers > 0 do Condition.wait l.c l.m done;
  l.writer <- true;
  Mutex.unlock l.m

let write_unlock l =
  Mutex.lock l.m;
  l.writer <- false;
  Condition.broadcast l.c;
  Mutex.unlock l.m
OCaml's standard library does not ship a readers-writer lock, so one is built from Mutex + Condition + a reader count — conceptually the same exclusion protocol Rust's RwLock enforces, though Rust's implementation sits directly on OS/futex primitives. Note this version is reader-preferring: readers can keep entering while a writer waits. Adding a writer_waiting flag, checked in read_lock, prevents writers from starving.
Full Source
#![allow(clippy::all)]
// 987: Read-Write Lock Pattern
// Rust: RwLock<T> — many readers OR one writer, never both
use std::sync::{Arc, RwLock};
use std::thread;
// --- Approach 1: Multiple readers in parallel ---
fn concurrent_readers() -> Vec<i32> {
let data = Arc::new(RwLock::new(42i32));
let handles: Vec<_> = (0..5)
.map(|_| {
let data = Arc::clone(&data);
thread::spawn(move || {
let guard = data.read().unwrap(); // shared read lock
*guard // all 5 can hold read lock simultaneously
})
})
.collect();
handles.into_iter().map(|h| h.join().unwrap()).collect()
}
// --- Approach 2: Writer excludes readers ---
fn write_then_read() -> i32 {
let data = Arc::new(RwLock::new(0i32));
{
let mut guard = data.write().unwrap(); // exclusive write lock
*guard = 100;
// guard drops here — write lock released
}
let guard = data.read().unwrap();
*guard
}
// --- Approach 3: Shared config pattern (read-heavy) ---
#[derive(Clone, Debug)]
struct Config {
threshold: i32,
name: String,
}
fn config_pattern() -> (String, i32) {
let config = Arc::new(RwLock::new(Config {
threshold: 10,
name: "default".to_string(),
}));
// Many readers
let readers: Vec<_> = (0..4)
.map(|_| {
let config = Arc::clone(&config);
thread::spawn(move || {
let c = config.read().unwrap();
(c.name.clone(), c.threshold)
})
})
.collect();
// One writer updates the config
{
let cfg = Arc::clone(&config);
let writer = thread::spawn(move || {
let mut c = cfg.write().unwrap();
c.threshold = 99;
c.name = "updated".to_string();
});
writer.join().unwrap();
}
for h in readers {
h.join().unwrap();
} // let readers finish
let c = config.read().unwrap();
(c.name.clone(), c.threshold)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_concurrent_readers_all_see_same() {
let reads = concurrent_readers();
assert_eq!(reads.len(), 5);
assert!(reads.iter().all(|&v| v == 42));
}
#[test]
fn test_write_then_read() {
assert_eq!(write_then_read(), 100);
}
#[test]
fn test_config_pattern() {
let (name, threshold) = config_pattern();
assert_eq!(name, "updated");
assert_eq!(threshold, 99);
}
#[test]
fn test_try_read_write() {
let rw = RwLock::new(0i32);
let _r1 = rw.read().unwrap();
let _r2 = rw.read().unwrap(); // multiple reads OK
// rw.try_write() would fail here (readers active)
assert!(rw.try_write().is_err());
}
#[test]
fn test_rwlock_write_exclusive() {
let rw = Arc::new(RwLock::new(vec![1, 2, 3]));
{
let mut w = rw.write().unwrap();
w.push(4);
}
assert_eq!(*rw.read().unwrap(), vec![1, 2, 3, 4]);
}
}
Deep Comparison
Read-Write Lock Pattern — Comparison
Core Insight
RwLock encodes the read-write exclusion invariant in the type: &T access (shared) maps to read lock; &mut T access (exclusive) maps to write lock. This mirrors Rust's own ownership model.
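That mapping can be seen directly in a minimal sketch: the read guard derefs to &T, the write guard to &mut T.

```rust
use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(String::from("hi"));
    {
        let r = lock.read().unwrap();      // RwLockReadGuard<'_, String>
        let s: &String = &r;               // Deref: shared &T access only
        assert_eq!(s.len(), 2);
    }                                      // read guard dropped here
    {
        let mut w = lock.write().unwrap(); // RwLockWriteGuard<'_, String>
        w.push('!');                       // DerefMut: exclusive &mut T access
    }
    assert_eq!(*lock.read().unwrap(), "hi!");
}
```

The type system makes misuse impossible: there is no way to mutate through a read guard, because `RwLockReadGuard` only implements `Deref`, not `DerefMut`.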
OCaml Approach
- Mutex + Condition + reader count
- readers: int ref tracks active readers; writer waits until readers = 0
- writer_waiting: bool prevents reader starvation of writers

Rust Approach
- RwLock::new(data) is standard in std::sync
- rw.read() → RwLockReadGuard — shared, many at once
- rw.write() → RwLockWriteGuard — exclusive, blocks all others
- try_read() / try_write() non-blocking variants

Comparison Table
| Concept | OCaml (simulated) | Rust |
|---|---|---|
| Create | Manual struct + Mutex + Condition | RwLock::new(data) |
| Read lock | read_lock / read_unlock | rw.read().unwrap() |
| Write lock | write_lock / write_unlock | rw.write().unwrap() |
| Multiple readers | Yes (via reader count) | Yes — RwLockReadGuard is shared |
| Prevent writer starvation | Manual writer_waiting flag | Implementation-dependent |
| Unlock | Manual call | Drop the guard (RAII) |
| Try-lock | Not shown (custom needed) | try_read() / try_write() |
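The "drop the guard" and "try-lock" rows combine into a short, std-only sketch of RAII unlocking:

```rust
use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(1i32);
    let r = lock.read().unwrap();
    // While any read guard is alive, a write lock cannot be acquired.
    assert!(lock.try_write().is_err());
    drop(r); // "unlock" in Rust is just dropping the guard (RAII)
    assert!(lock.try_write().is_ok());
}
```

There is no unlock function to forget: releasing the lock is tied to the guard's scope, so early-return and panic paths release it automatically.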
std vs tokio
| Aspect | std version | tokio version |
|---|---|---|
| Runtime | OS threads via std::thread | Async tasks on tokio runtime |
| Synchronization | std::sync::RwLock, Mutex, Condvar | tokio::sync::RwLock, Mutex, channels |
| Channels | std::sync::mpsc (unbounded) | tokio::sync::mpsc (bounded, async) |
| Blocking | Thread blocks on lock/recv | Task yields, runtime switches tasks |
| Overhead | One OS thread per task | Many tasks per thread (M:N) |
| Best for | CPU-bound, simple concurrency | I/O-bound, high-concurrency servers |
Exercises
- Compare Mutex and RwLock for 8 readers and 1 writer with a 99:1 read-write ratio.
- Implement upgrade_from_read_to_write — release the read lock and acquire the write lock atomically (spoiler: not possible in std; discuss why).
- Build a CachingConfig where reads check an in-memory HashMap (under the read lock) and a miss falls through to a "database" (uses the write lock to update the cache).
- Increment an AtomicUsize on every write, and read it under the read lock to detect stale views.