338: Async RwLock — Multiple Readers, One Writer
Tutorial
The Problem
Many shared data structures are read frequently and written rarely — caches, configuration, routing tables. A Mutex serializes all access (both reads and writes), creating unnecessary contention. RwLock<T> allows multiple concurrent readers OR one exclusive writer, matching the actual access pattern. This is the correct concurrency primitive for read-heavy workloads in database caches, HTTP routers, and configuration stores.
🎯 Learning Outcomes
- RwLock<T> allows many concurrent readers OR one exclusive writer
- read() for shared access and write() for exclusive access
- SharedDb with concurrent read access and exclusive writes
- When RwLock beats Mutex: read-heavy workloads with infrequent writes

Code Example
struct SharedDb {
    data: RwLock<HashMap<String, i32>>,
}

impl SharedDb {
    fn read(&self, k: &str) -> Option<i32> {
        self.data.read().unwrap().get(k).copied()
    }

    fn write(&self, k: &str, v: i32) {
        self.data.write().unwrap().insert(k.to_string(), v);
    }
}

Key Differences
1. **RAII guards**: read().unwrap() returns a RwLockReadGuard that drops automatically; OCaml requires explicit unlock calls.
2. **Writer starvation**: std::sync::RwLock may starve writers if there are always readers; parking_lot::RwLock provides fairer scheduling.
3. **Poisoning**: like Mutex, Rust's std RwLock is poisoned when a writer panics while holding the lock; subsequent read() or write() calls return Err.
4. **Async variant**: tokio::sync::RwLock is the async-aware version with .read().await and .write().await.

OCaml Approach
OCaml's standard library has no read-write lock (only Mutex, Condition, and Semaphore), so the same pattern needs a library or a hand-rolled primitive. For Lwt, Lwt_mutex.with_lock serializes writes, and reads from immutable snapshots avoid locking entirely. With a hypothetical RWMutex module providing the same semantics, a guarded read would look like:
let db = Hashtbl.create 16
let rwlock = RWMutex.create ()

let read key =
  RWMutex.read_lock rwlock;
  let v = Hashtbl.find_opt db key in
  RWMutex.read_unlock rwlock;
  v
Full Source
#![allow(clippy::all)]
//! # Async RwLock
//!
//! Multiple concurrent readers, one exclusive writer — the right lock for read-heavy shared state.

use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::thread;

/// A shared database with read-write lock semantics.
pub struct SharedDb {
    data: RwLock<HashMap<String, i32>>,
}

impl SharedDb {
    pub fn new() -> Arc<Self> {
        Arc::new(Self {
            data: RwLock::new(HashMap::new()),
        })
    }

    /// Read a value (multiple readers can run simultaneously).
    pub fn read(&self, key: &str) -> Option<i32> {
        self.data.read().unwrap().get(key).copied()
    }

    /// Write a value (exclusive access).
    pub fn write(&self, key: &str, value: i32) {
        self.data.write().unwrap().insert(key.to_string(), value);
    }

    /// Update a value with a function.
    pub fn update(&self, key: &str, f: impl Fn(i32) -> i32) {
        if let Some(v) = self.data.write().unwrap().get_mut(key) {
            *v = f(*v);
        }
    }

    /// Get all keys.
    pub fn keys(&self) -> Vec<String> {
        self.data.read().unwrap().keys().cloned().collect()
    }

    /// Get the number of entries.
    pub fn len(&self) -> usize {
        self.data.read().unwrap().len()
    }

    pub fn is_empty(&self) -> bool {
        self.len() == 0
    }
}

impl Default for SharedDb {
    fn default() -> Self {
        Self {
            data: RwLock::new(HashMap::new()),
        }
    }
}

/// Demonstrates that concurrent reads don't block each other.
pub fn concurrent_reads(db: &Arc<SharedDb>) -> Vec<Option<i32>> {
    let handles: Vec<_> = (0..5)
        .map(|_| {
            let db = Arc::clone(db);
            thread::spawn(move || db.read("x"))
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_read_write() {
        let db = SharedDb::new();
        db.write("key", 99);
        assert_eq!(db.read("key"), Some(99));
    }

    #[test]
    fn test_missing_key_returns_none() {
        let db = SharedDb::new();
        assert_eq!(db.read("nonexistent"), None);
    }

    #[test]
    fn test_update() {
        let db = SharedDb::new();
        db.write("x", 10);
        db.update("x", |v| v * 2);
        assert_eq!(db.read("x"), Some(20));
    }

    #[test]
    fn test_concurrent_reads_all_succeed() {
        let db = SharedDb::new();
        db.write("k", 7);
        let handles: Vec<_> = (0..10)
            .map(|_| {
                let db = Arc::clone(&db);
                thread::spawn(move || db.read("k"))
            })
            .collect();
        assert!(handles.into_iter().all(|h| h.join().unwrap() == Some(7)));
    }

    #[test]
    fn test_keys_and_len() {
        let db = SharedDb::new();
        db.write("a", 1);
        db.write("b", 2);
        assert_eq!(db.len(), 2);
        let mut keys = db.keys();
        keys.sort();
        assert_eq!(keys, vec!["a", "b"]);
    }
}
Deep Comparison
OCaml vs Rust: Async RwLock
Read-Write Lock
OCaml: no stdlib RwLock. Use Mutex or a third-party read-write lock library.
Rust:
struct SharedDb {
    data: RwLock<HashMap<String, i32>>,
}

impl SharedDb {
    fn read(&self, k: &str) -> Option<i32> {
        self.data.read().unwrap().get(k).copied()
    }

    fn write(&self, k: &str, v: i32) {
        self.data.write().unwrap().insert(k.to_string(), v);
    }
}
Key Differences
| Aspect | OCaml | Rust |
|---|---|---|
| RwLock in stdlib | No | Yes |
| Concurrent reads | Requires a library | Built-in with read() |
| Exclusive write | Manual | write() blocks readers |
| Data wrapping | Separate | RwLock wraps HashMap |
Exercises
Benchmark Arc<Mutex<T>> vs Arc<RwLock<T>> with 8 reader threads and 1 writer thread — measure throughput.