465: Message Passing vs. Shared Memory
Tutorial
The Problem
Concurrent programming has two fundamental coordination models. Shared memory: threads mutate common data structures, protected by locks. Message passing: threads communicate by sending owned values over channels, with no shared mutable state. Shared memory can be faster for fine-grained updates (no copying or serialization of data); message passing is easier to reason about, often scales better, and sidesteps whole classes of lock-ordering deadlocks. The word-count example below demonstrates both: message passing (each thread computes a local count and sends the whole map) vs. shared memory (all threads merge into one HashMap under a mutex).
This comparison is central to Go's proverb "Don't communicate by sharing memory; share memory by communicating" and to Erlang's everything-is-a-message process model.
Code Example
#![allow(clippy::all)]
// 465. Message passing vs shared memory
use std::collections::HashMap;
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Case-insensitive word counts for a single text.
fn count_words(s: &str) -> HashMap<String, usize> {
    let mut m = HashMap::new();
    for w in s.split_whitespace() {
        *m.entry(w.to_lowercase()).or_insert(0) += 1;
    }
    m
}

/// Merge the counts from `b` into `a`.
fn merge(mut a: HashMap<String, usize>, b: HashMap<String, usize>) -> HashMap<String, usize> {
    for (k, v) in b {
        *a.entry(k).or_insert(0) += v;
    }
    a
}

/// Message passing: each thread owns its local map and sends it over a channel;
/// the receiver folds the partial maps together. No shared mutable state.
fn msg_passing(texts: Vec<String>) -> HashMap<String, usize> {
    let (tx, rx) = mpsc::channel::<HashMap<String, usize>>();
    for t in texts {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(count_words(&t)).unwrap();
        });
    }
    // Drop the original sender so `rx.iter()` terminates once every thread is done.
    drop(tx);
    rx.iter().fold(HashMap::new(), merge)
}

/// Shared memory: all threads merge into one map behind an `Arc<Mutex<_>>`.
fn shared_mem(texts: Vec<String>) -> HashMap<String, usize> {
    let shared = Arc::new(Mutex::new(HashMap::<String, usize>::new()));
    let hs: Vec<_> = texts
        .into_iter()
        .map(|t| {
            let s = Arc::clone(&shared);
            thread::spawn(move || {
                // Count locally first to keep the critical section short.
                let local = count_words(&t);
                let mut g = s.lock().unwrap();
                for (k, v) in local {
                    *g.entry(k).or_insert(0) += v;
                }
            })
        })
        .collect();
    for h in hs {
        h.join().unwrap();
    }
    Arc::try_unwrap(shared).unwrap().into_inner().unwrap()
}

#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn test_same() {
        let t = vec!["a b c".to_string(), "a d".to_string()];
        assert_eq!(msg_passing(t.clone()), shared_mem(t));
    }
}
Key Differences
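The contrast also shows up in miniature. Here is a minimal sketch (function names are illustrative, not from the tutorial above) that computes the same total both ways: the channel version combines owned partial results, while the mutex version has every thread write into one guarded accumulator.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// Message passing: each thread sends its partial result; the receiver combines.
fn sum_via_channel() -> u64 {
    let (tx, rx) = mpsc::channel::<u64>();
    for part in 0..4u64 {
        let tx = tx.clone();
        thread::spawn(move || tx.send(part * 10).unwrap());
    }
    drop(tx); // close the channel so the iterator below terminates
    rx.iter().sum()
}

// Shared memory: every thread adds into one mutex-guarded accumulator.
fn sum_via_mutex() -> u64 {
    let total = Arc::new(Mutex::new(0u64));
    let hs: Vec<_> = (0..4u64)
        .map(|part| {
            let t = Arc::clone(&total);
            thread::spawn(move || *t.lock().unwrap() += part * 10)
        })
        .collect();
    for h in hs {
        h.join().unwrap();
    }
    let v = *total.lock().unwrap();
    v
}

fn main() {
    // Both strategies produce the same answer: 0 + 10 + 20 + 30 = 60.
    assert_eq!(sum_via_channel(), sum_via_mutex());
    println!("{}", sum_via_channel());
}
```

Note the asymmetry in cleanup: the channel version needs `drop(tx)` to signal completion, while the mutex version needs explicit `join` calls before reading the result.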
OCaml Approach
OCaml's message passing uses the Event module's synchronous channels: let ch = Event.new_channel (), with a sender such as Thread.create (fun () -> Event.sync (Event.send ch result)) (). Shared memory uses Hashtbl.t plus Mutex.t. Functional OCaml naturally favors message passing (immutable local maps merged at the end), since persistent data structures share structure cheaply. On OCaml 5, libraries such as Domainslib provide higher-level abstractions (task pools, parallel iteration) over both models.
Full Source
#![allow(clippy::all)]
// 465. Message passing vs shared memory
use std::collections::HashMap;
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Case-insensitive word counts for a single text.
fn count_words(s: &str) -> HashMap<String, usize> {
    let mut m = HashMap::new();
    for w in s.split_whitespace() {
        *m.entry(w.to_lowercase()).or_insert(0) += 1;
    }
    m
}

/// Merge the counts from `b` into `a`.
fn merge(mut a: HashMap<String, usize>, b: HashMap<String, usize>) -> HashMap<String, usize> {
    for (k, v) in b {
        *a.entry(k).or_insert(0) += v;
    }
    a
}

/// Message passing: each thread owns its local map and sends it over a channel;
/// the receiver folds the partial maps together. No shared mutable state.
fn msg_passing(texts: Vec<String>) -> HashMap<String, usize> {
    let (tx, rx) = mpsc::channel::<HashMap<String, usize>>();
    for t in texts {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(count_words(&t)).unwrap();
        });
    }
    // Drop the original sender so `rx.iter()` terminates once every thread is done.
    drop(tx);
    rx.iter().fold(HashMap::new(), merge)
}

/// Shared memory: all threads merge into one map behind an `Arc<Mutex<_>>`.
fn shared_mem(texts: Vec<String>) -> HashMap<String, usize> {
    let shared = Arc::new(Mutex::new(HashMap::<String, usize>::new()));
    let hs: Vec<_> = texts
        .into_iter()
        .map(|t| {
            let s = Arc::clone(&shared);
            thread::spawn(move || {
                // Count locally first to keep the critical section short.
                let local = count_words(&t);
                let mut g = s.lock().unwrap();
                for (k, v) in local {
                    *g.entry(k).or_insert(0) += v;
                }
            })
        })
        .collect();
    for h in hs {
        h.join().unwrap();
    }
    Arc::try_unwrap(shared).unwrap().into_inner().unwrap()
}

#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn test_same() {
        let t = vec!["a b c".to_string(), "a d".to_string()];
        assert_eq!(msg_passing(t.clone()), shared_mem(t));
    }
}
Exercises
Implement a third variant that uses Arc<Vec<AtomicUsize>> for per-bucket atomic increments, and compare its performance against the channel and mutex versions.
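As a starting point, here is one possible sketch of the per-bucket idea (all names here are hypothetical, not part of the tutorial's code). Since AtomicUsize buckets hold only counts, not keys, this version hashes each word to a fixed bucket and does a lock-free fetch_add; recovering exact per-word counts and benchmarking against the two versions above is the exercise.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

const BUCKETS: usize = 16;

/// Hash a word to one of BUCKETS bucket indices.
fn bucket_of(word: &str) -> usize {
    let mut h = DefaultHasher::new();
    word.hash(&mut h);
    (h.finish() as usize) % BUCKETS
}

/// Per-bucket counts: no mutex, no channel — each increment is a
/// lock-free atomic add on the word's bucket.
fn bucket_counts(texts: Vec<String>) -> Vec<usize> {
    let counts: Arc<Vec<AtomicUsize>> =
        Arc::new((0..BUCKETS).map(|_| AtomicUsize::new(0)).collect());
    let hs: Vec<_> = texts
        .into_iter()
        .map(|t| {
            let c = Arc::clone(&counts);
            thread::spawn(move || {
                for w in t.split_whitespace() {
                    c[bucket_of(&w.to_lowercase())].fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in hs {
        h.join().unwrap();
    }
    counts.iter().map(|a| a.load(Ordering::Relaxed)).collect()
}

fn main() {
    let v = bucket_counts(vec!["a b c".to_string(), "a d".to_string()]);
    assert_eq!(v.iter().sum::<usize>(), 5); // five words in total
    println!("{:?}", v);
}
```

Relaxed ordering suffices here because the joins establish the necessary happens-before edge before the final loads; no thread reads a count another thread is still writing.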