
465: Message Passing vs. Shared Memory

Functional Programming

Tutorial

The Problem

Concurrent programming has two fundamental coordination models. Shared memory: threads mutate shared data structures, protected by locks. Message passing: threads communicate by sending values, with no shared mutable state. Shared memory is faster for fine-grained updates (no copying of intermediate results); message passing is easier to reason about, scales better, and sidesteps lock-ordering deadlocks. The word-count example demonstrates both: message passing (each thread computes a local count and sends the whole map) vs. shared memory (all threads write into a shared HashMap under a mutex).

This comparison is central to Go's mantra "don't communicate by sharing memory, share memory by communicating" and Erlang's everything-is-messages model.

🎯 Learning Outcomes

  • Understand the message passing model: no shared state, pass computed results
  • Learn the shared memory model: protect shared data structures with locks
  • See how the word-count problem demonstrates both approaches
  • Understand the performance trade-off: message passing avoids lock contention but serializes merges
  • Learn when each model is appropriate (read-heavy vs. write-heavy workloads)

Code Example

    #![allow(clippy::all)]
    // 465. Message passing vs shared memory
    use std::collections::HashMap;
    use std::sync::{mpsc, Arc, Mutex};
    use std::thread;
    
    fn count_words(s: &str) -> HashMap<String, usize> {
        let mut m = HashMap::new();
        for w in s.split_whitespace() {
            *m.entry(w.to_lowercase()).or_insert(0) += 1;
        }
        m
    }
    
    fn merge(mut a: HashMap<String, usize>, b: HashMap<String, usize>) -> HashMap<String, usize> {
        for (k, v) in b {
            *a.entry(k).or_insert(0) += v;
        }
        a
    }
    
    fn msg_passing(texts: Vec<String>) -> HashMap<String, usize> {
        let (tx, rx) = mpsc::channel::<HashMap<String, usize>>();
        for t in texts {
            let tx = tx.clone();
            thread::spawn(move || {
                tx.send(count_words(&t)).unwrap();
            });
        }
        drop(tx);
        rx.iter().fold(HashMap::new(), merge)
    }
    
    fn shared_mem(texts: Vec<String>) -> HashMap<String, usize> {
        let shared = Arc::new(Mutex::new(HashMap::<String, usize>::new()));
        let hs: Vec<_> = texts
            .into_iter()
            .map(|t| {
                let s = Arc::clone(&shared);
                thread::spawn(move || {
                    let local = count_words(&t);
                    let mut g = s.lock().unwrap();
                    for (k, v) in local {
                        *g.entry(k).or_insert(0) += v;
                    }
                })
            })
            .collect();
        for h in hs {
            h.join().unwrap();
        }
        Arc::try_unwrap(shared).unwrap().into_inner().unwrap()
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
        #[test]
        fn test_same() {
            let t = vec!["a b c".to_string(), "a d".to_string()];
            assert_eq!(msg_passing(t.clone()), shared_mem(t));
        }
    }
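
A common middle ground between the two functions above is to shard the shared map across several mutexes, so threads contend only when their words hash to the same shard. A minimal sketch (NUM_SHARDS, shard_of, and sharded_count are illustrative names, not part of the example above):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::{Arc, Mutex};
use std::thread;

const NUM_SHARDS: usize = 8;

// Pick a shard by hashing the word, so each key always maps to the
// same mutex-protected sub-map.
fn shard_of(word: &str) -> usize {
    let mut h = DefaultHasher::new();
    word.hash(&mut h);
    (h.finish() as usize) % NUM_SHARDS
}

fn sharded_count(texts: Vec<String>) -> HashMap<String, usize> {
    let shards: Arc<Vec<Mutex<HashMap<String, usize>>>> =
        Arc::new((0..NUM_SHARDS).map(|_| Mutex::new(HashMap::new())).collect());
    let hs: Vec<_> = texts
        .into_iter()
        .map(|t| {
            let shards = Arc::clone(&shards);
            thread::spawn(move || {
                for w in t.split_whitespace() {
                    let w = w.to_lowercase();
                    // Lock only the shard this word hashes to.
                    let mut g = shards[shard_of(&w)].lock().unwrap();
                    *g.entry(w).or_insert(0) += 1;
                }
            })
        })
        .collect();
    for h in hs {
        h.join().unwrap();
    }
    // Fold the shards back into one map once all writers are done.
    let mut out = HashMap::new();
    for s in Arc::try_unwrap(shards).unwrap() {
        for (k, v) in s.into_inner().unwrap() {
            *out.entry(k).or_insert(0) += v;
        }
    }
    out
}
```

Sharding keeps the single-structure memory profile of shared memory while spreading contention across NUM_SHARDS locks instead of one.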

    Key Differences

  • Contention: Message passing has no lock contention during computation; shared memory locks on every write.
  • Memory: Message passing creates N copies of partial results; shared memory uses one structure (but serializes writes).
  • Correctness: Message passing is easier to reason about (no shared mutable state); shared memory requires careful lock discipline.
  • Scale: Message passing scales better to more threads; shared memory becomes a bottleneck as thread count increases.

    OCaml Approach

    OCaml's message passing uses synchronous channels from the Event module: let ch = Event.new_channel (), with Thread.create (fun () -> Event.sync (Event.send ch result)) () on the sending side and Event.sync (Event.receive ch) on the receiving side. Shared memory uses a Hashtbl.t protected by a Mutex.t. Functional OCaml naturally favors message passing (immutable local maps merged at the end), since persistent data structures share structure efficiently. On OCaml 5, libraries such as Domainslib provide higher-level abstractions over both models.
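
OCaml's Event channels are rendezvous channels: Event.sync blocks until a sender and receiver meet. The closest analogue in Rust's standard library (a sketch, not part of the example above; rendezvous is an illustrative name) is a zero-capacity sync_channel, whose send blocks until the receiver is ready:

```rust
use std::sync::mpsc;
use std::thread;

// Capacity 0 turns the channel into a rendezvous point: `send`
// blocks until another thread is blocked in `recv`, mirroring
// OCaml's `Event.sync (Event.send ch v)`.
fn rendezvous() -> u32 {
    let (tx, rx) = mpsc::sync_channel::<u32>(0);
    let h = thread::spawn(move || {
        tx.send(42).unwrap(); // blocks until the receiver arrives
    });
    let v = rx.recv().unwrap();
    h.join().unwrap();
    v
}
```

By contrast, the mpsc::channel used in msg_passing above is unbounded and asynchronous: senders never block, which is why worker threads can finish before the merge begins.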


    Exercises

  • Benchmark comparison: Implement both approaches for counting words in 100 documents of 10,000 words each. Benchmark with 1, 2, 4, 8, and 16 threads. Plot throughput and identify where each approach breaks down.
  • Hybrid approach: Combine both: use message passing to compute partial results in parallel (no lock contention), then merge partial results using a parallel fan-in tree (multiple levels of merging) to avoid sequential merge bottleneck.
  • Histogram computation: Apply both patterns to computing a histogram of a billion integer samples. Message passing: each thread counts local histogram, sends to main for merge. Shared memory: Arc<Vec<AtomicUsize>> for per-bucket atomic increment. Compare performance.
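
For the hybrid exercise, one possible starting point is sketched below: message passing for the counting phase, then repeated rounds of pairwise merges on worker threads instead of one sequential fold. hybrid is an illustrative name; count_words and merge are repeated from the example above so the sketch is self-contained.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

fn count_words(s: &str) -> HashMap<String, usize> {
    let mut m = HashMap::new();
    for w in s.split_whitespace() {
        *m.entry(w.to_lowercase()).or_insert(0) += 1;
    }
    m
}

fn merge(mut a: HashMap<String, usize>, b: HashMap<String, usize>) -> HashMap<String, usize> {
    for (k, v) in b {
        *a.entry(k).or_insert(0) += v;
    }
    a
}

// Hybrid: partial counts arrive over a channel (no lock contention),
// then a fan-in tree merges them pairwise until one map remains.
fn hybrid(texts: Vec<String>) -> HashMap<String, usize> {
    let (tx, rx) = mpsc::channel();
    for t in texts {
        let tx = tx.clone();
        thread::spawn(move || tx.send(count_words(&t)).unwrap());
    }
    drop(tx);
    let mut parts: Vec<_> = rx.iter().collect();
    // Each round halves the number of partial maps.
    while parts.len() > 1 {
        let mut hs = Vec::new();
        let mut leftover = None;
        let mut it = parts.into_iter();
        loop {
            match (it.next(), it.next()) {
                (Some(a), Some(b)) => hs.push(thread::spawn(move || merge(a, b))),
                (Some(a), None) => {
                    leftover = Some(a); // odd one out waits for the next round
                    break;
                }
                _ => break,
            }
        }
        parts = hs.into_iter().map(|h| h.join().unwrap()).collect();
        parts.extend(leftover);
    }
    parts.pop().unwrap_or_default()
}
```

The sequential fold in msg_passing does O(n) merges on one thread; the tree does O(log n) rounds, each round's merges running in parallel.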