125 Advanced

Send and Sync Marker Traits

Functional Programming

Tutorial

The Problem

Data races are a class of concurrency bug in which two threads access the same memory at the same time, at least one access is a write, and no synchronization orders the accesses. They cause undefined behavior in C/C++ and subtle bugs in garbage-collected languages. Rust eliminates them at compile time using two marker traits: Send (a type's ownership can cross thread boundaries) and Sync (shared references to the type can be accessed from multiple threads). Violating these rules is a compile error, not a runtime crash.
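To make the compile-time guarantee concrete, here is a minimal sketch (the variable names are illustrative, not from this example's source) showing that thread::spawn accepts an Arc but rejects the equivalent Rc:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // The Rc version of this code is rejected at compile time:
    //
    //   use std::rc::Rc;
    //   let shared = Rc::new(5);
    //   thread::spawn(move || *shared + 1);
    //   // error[E0277]: `Rc<i32>` cannot be sent between threads safely
    //
    // Arc<i32> is Send, so the same code compiles and runs.
    let shared = Arc::new(5);
    let handle = thread::spawn(move || *shared + 1);
    println!("{}", handle.join().unwrap()); // prints 6
}
```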

🎯 Learning Outcomes

  • Understand what Send and Sync mean and how they prevent data races at compile time
  • Learn why Rc<T> is neither Send nor Sync, but Arc<T> is both (when T: Send + Sync)
  • See Arc<Mutex<T>> as the canonical pattern for shared mutable state across threads
  • Understand channel-based concurrency (mpsc) as an alternative to shared state
    Code Example

    use std::sync::{Arc, Mutex};
    use std::thread;
    
    pub fn parallel_sum(numbers: Vec<i32>) -> i32 {
        let total = Arc::new(Mutex::new(0i32));
        let mid = numbers.len() / 2;
        let (left, right) = numbers.split_at(mid);
        let left = left.to_vec();
        let right = right.to_vec();
    
        let total_clone = Arc::clone(&total);
        let handle = thread::spawn(move || {
            let partial: i32 = left.iter().sum();
            *total_clone.lock().unwrap() += partial;
        });
    
        let partial: i32 = right.iter().sum();
        *total.lock().unwrap() += partial;
    
        handle.join().unwrap();
        *total.lock().unwrap()
    }

    Key Differences

  • Compile-time vs. runtime: Rust catches data races at compile time via Send/Sync; OCaml relies on runtime locking and, before OCaml 5, on the runtime's master lock, which ran only one thread at a time.
  • Rc vs. Arc: Rust provides both non-atomic (Rc, not Send) and atomic (Arc, Send + Sync) reference counting; OCaml has a single GC-managed reference type.
  • Automatic derivation: Send and Sync are auto-implemented for all types whose fields are Send/Sync; adding a non-Send field (such as a raw pointer) automatically removes thread safety for the whole type.
  • Channels: Both Rust's mpsc and OCaml's Event module provide channel primitives; Rust's are typed and checked at compile time.
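Auto-derivation can be seen directly. In the sketch below (the Job struct is hypothetical, not part of this example's source), the compiler derives Send for a struct whose fields are all Send, which is exactly what thread::spawn requires:

```rust
use std::thread;

// Hypothetical struct: u32 and Vec<i32> are both Send, so the compiler
// auto-implements Send for Job. Adding a non-Send field (e.g. *const u8
// or Rc<T>) would remove that implementation and make the spawn below
// a compile error.
struct Job {
    id: u32,
    payload: Vec<i32>,
}

fn main() {
    let job = Job { id: 1, payload: vec![1, 2, 3] };
    // Moving `job` into the closure is only allowed because Job: Send.
    let handle = thread::spawn(move || job.id as i32 + job.payload.iter().sum::<i32>());
    println!("{}", handle.join().unwrap()); // prints 7
}
```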
    OCaml Approach

    OCaml (before Domain-based parallelism in OCaml 5) used a Global Interpreter Lock — only one thread ran OCaml code at a time, so data races on GC-managed values were impossible. OCaml 5 introduces Domains and requires careful use of atomic operations and mutexes for shared mutable state. OCaml has no compile-time equivalents of Send/Sync; safety is the programmer's responsibility.

    Full Source

    #![allow(clippy::all)]
    // Example 125: Send and Sync Marker Traits
    //
    // Send: a type whose ownership can be transferred to another thread.
    // Sync: a type whose references (&T) can be shared between threads.
    // Both are auto-implemented by the compiler for types whose fields satisfy them.
    // Violating them (e.g. sharing Rc across threads) is a *compile error*.
    
    use std::sync::{mpsc, Arc, Mutex};
    use std::thread;
    
    // ---------------------------------------------------------------------------
    // Solution 1: Idiomatic — Arc<Mutex<T>> for shared mutable state
    //
    // Arc<T>   is Send + Sync when T: Send (atomic ref-count is thread-safe)
    // Mutex<T> is Send + Sync when T: Send (lock enforces exclusive access)
    // ---------------------------------------------------------------------------
    pub fn parallel_sum(numbers: Vec<i32>) -> i32 {
        let total = Arc::new(Mutex::new(0i32));
        let mid = numbers.len() / 2;
        let (left, right) = numbers.split_at(mid);
        let left = left.to_vec();
        let right = right.to_vec();
    
        // Clone the Arc — each thread gets its own handle to the same Mutex.
        let total_clone = Arc::clone(&total);
        let handle = thread::spawn(move || {
            // left: Vec<i32> is Send, so this closure is Send.
            let partial: i32 = left.iter().sum();
            *total_clone.lock().unwrap() += partial;
        });
    
        let partial: i32 = right.iter().sum();
        *total.lock().unwrap() += partial;
    
        handle.join().unwrap();
        let result = *total.lock().unwrap();
        result
    }
    
    // ---------------------------------------------------------------------------
    // Solution 2: Functional — channel-based (mpsc) scatter/gather
    //
    // Sender<T> is Send when T: Send.  Values flow through the channel without
    // shared mutable state, matching OCaml's typical concurrent style.
    // ---------------------------------------------------------------------------
    pub fn channel_sum(numbers: Vec<i32>) -> i32 {
        let (tx, rx) = mpsc::channel::<i32>();
        let mid = numbers.len() / 2;
        let (left, right) = numbers.split_at(mid);
        let left = left.to_vec();
        let right = right.to_vec();
    
        let tx2 = tx.clone();
        thread::spawn(move || {
            let partial: i32 = left.iter().sum();
            tx2.send(partial).unwrap();
        });
    
        let partial: i32 = right.iter().sum();
        tx.send(partial).unwrap();
    
        // Collect exactly 2 partial sums.
        rx.iter().take(2).sum()
    }
    
    // ---------------------------------------------------------------------------
    // Solution 3: Demonstrate Send explicitly via thread::spawn type constraints.
    //
    // thread::spawn requires F: Send + 'static.  Immutable data moved into the
    // closure satisfies this automatically when T: Send.
    // ---------------------------------------------------------------------------
    pub fn spawn_and_collect<T, F, R>(items: Vec<T>, f: F) -> R
    where
        T: Send + 'static,
        F: FnOnce(Vec<T>) -> R + Send + 'static,
        R: Send + 'static,
    {
        thread::spawn(move || f(items)).join().unwrap()
    }
    
    // ---------------------------------------------------------------------------
    // Illustrative wrapper: show that Sync allows shared reads.
    // Arc<Vec<i32>> — Vec<i32>: Sync, so &Vec<i32> can cross thread boundaries.
    // ---------------------------------------------------------------------------
    pub fn shared_read_sum(data: Arc<Vec<i32>>) -> i32 {
        let data2 = Arc::clone(&data);
        let handle = thread::spawn(move || data2.iter().sum::<i32>());
        handle.join().unwrap()
    }
    
    // ---------------------------------------------------------------------------
    // Tests
    // ---------------------------------------------------------------------------
    #[cfg(test)]
    mod tests {
        use super::*;
        use std::sync::Arc;
    
        #[test]
        fn test_parallel_sum_basic() {
            assert_eq!(parallel_sum(vec![1, 2, 3, 4, 5]), 15);
        }
    
        #[test]
        fn test_parallel_sum_empty() {
            assert_eq!(parallel_sum(vec![]), 0);
        }
    
        #[test]
        fn test_parallel_sum_single() {
            assert_eq!(parallel_sum(vec![42]), 42);
        }
    
        #[test]
        fn test_channel_sum_basic() {
            assert_eq!(channel_sum(vec![10, 20, 30, 40]), 100);
        }
    
        #[test]
        fn test_channel_sum_empty() {
            assert_eq!(channel_sum(vec![]), 0);
        }
    
        #[test]
        fn test_spawn_and_collect() {
            let result = spawn_and_collect(vec![1, 2, 3, 4, 5], |v| v.iter().sum::<i32>());
            assert_eq!(result, 15);
        }
    
        #[test]
        fn test_shared_read_sum() {
            let data = Arc::new(vec![1, 2, 3, 4, 5]);
            assert_eq!(shared_read_sum(data), 15);
        }
    
        #[test]
        fn test_arc_mutex_counter() {
            // Verify Arc<Mutex<T>> correctly serialises increments across threads.
            let counter = Arc::new(Mutex::new(0u32));
            let handles: Vec<_> = (0..10)
                .map(|_| {
                    let c = Arc::clone(&counter);
                    thread::spawn(move || {
                        *c.lock().unwrap() += 1;
                    })
                })
                .collect();
            for h in handles {
                h.join().unwrap();
            }
            assert_eq!(*counter.lock().unwrap(), 10);
        }
    }

    Deep Comparison

    OCaml vs Rust: Send and Sync — Compile-Time Thread Safety

    Side-by-Side Code

    OCaml

    (* OCaml has no Send/Sync concepts; the programmer manually ensures safety.
       OCaml 5 uses Mutex for shared mutable state — same idea, no type enforcement. *)
    let parallel_sum data =
      let total = ref 0 in
      let m = Mutex.create () in
      let n = List.length data / 2 in
      (* Plain recursive split, standing in for "split at n". *)
      let rec split k = function
        | xs when k <= 0 -> ([], xs)
        | [] -> ([], [])
        | x :: tl -> let l, r = split (k - 1) tl in (x :: l, r)
      in
      let left, right = split n data in
      let t = Thread.create (fun () ->
        let s = List.fold_left ( + ) 0 left in
        Mutex.lock m; total := !total + s; Mutex.unlock m) () in
      let s = List.fold_left ( + ) 0 right in
      Mutex.lock m; total := !total + s; Mutex.unlock m;
      Thread.join t;
      !total
    

    Rust (idiomatic — Arc<Mutex<T>>)

    use std::sync::{Arc, Mutex};
    use std::thread;
    
    pub fn parallel_sum(numbers: Vec<i32>) -> i32 {
        let total = Arc::new(Mutex::new(0i32));
        let mid = numbers.len() / 2;
        let (left, right) = numbers.split_at(mid);
        let left = left.to_vec();
        let right = right.to_vec();
    
        let total_clone = Arc::clone(&total);
        let handle = thread::spawn(move || {
            let partial: i32 = left.iter().sum();
            *total_clone.lock().unwrap() += partial;
        });
    
        let partial: i32 = right.iter().sum();
        *total.lock().unwrap() += partial;
    
        handle.join().unwrap();
        *total.lock().unwrap()
    }
    

    Rust (functional — channel scatter/gather)

    use std::sync::mpsc;
    use std::thread;
    
    pub fn channel_sum(numbers: Vec<i32>) -> i32 {
        let (tx, rx) = mpsc::channel::<i32>();
        let mid = numbers.len() / 2;
        let (left, right) = numbers.split_at(mid);
        let (left, right) = (left.to_vec(), right.to_vec());
    
        let tx2 = tx.clone();
        thread::spawn(move || tx2.send(left.iter().sum()).unwrap());
        tx.send(right.iter().sum()).unwrap();
    
        rx.iter().take(2).sum()
    }
    

    Type Signatures

    | Concept | OCaml | Rust |
    |---|---|---|
    | Thread-safe shared ownership | `ref` + `Mutex` (by convention) | `Arc<Mutex<T>>` (enforced by type) |
    | Thread safety marker | none — manual discipline | `Send`, `Sync` auto-traits |
    | Spawn constraint | none (runtime crash on violation) | `F: FnOnce() -> R + Send + 'static` |
    | Channel sender | `Event.channel` / `Queue` | `mpsc::Sender<T>` where `T: Send` |
    | Shared immutable ref | `ref` (mutable) or `let` binding | `Arc<T>` where `T: Sync` |

    Key Insights

  • Compile-time vs runtime enforcement: OCaml's type system has no notion of thread safety — nothing stops you from sharing a non-thread-safe value across threads. Rust's Send/Sync auto-traits make unsafe sharing a compile error, eliminating an entire class of data-race bugs.
  • Auto-derivation: You rarely write unsafe impl Send yourself. The compiler automatically derives Send for any struct whose fields are all Send, and Sync for any struct whose fields are all Sync. The work happens at the type-composition level, not at the call site.
  • Arc vs Rc: Rc<T> uses a non-atomic reference count and is intentionally !Send + !Sync — the compiler will refuse to let it cross a thread boundary. Arc<T> uses atomics and is Send + Sync when T: Send + Sync. The naming difference (A = atomic) is a deliberate design signal.
  • Mutex<T> owns its data: Unlike OCaml's Mutex.create (), which is separate from the data it protects, Rust's Mutex<T> wraps T. You cannot access T without going through the lock. This makes the invariant structurally enforced rather than conventional.
  • Channel ownership transfers: mpsc::Sender<T> requires T: Send, encoding at the type level that values flowing through channels cross thread boundaries. The functional scatter/gather pattern maps cleanly to this: produce partial results in parallel, collect in the main thread — no shared mutable state needed.
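The "Mutex owns its data" point can be sketched in isolation (a minimal single-threaded illustration, not taken from the example source): the only route to the protected value is lock(), and the guard releases the lock when it is dropped:

```rust
use std::sync::Mutex;

fn main() {
    let data = Mutex::new(vec![1, 2, 3]);

    // `data` gives no direct access to the Vec; lock() returns a guard
    // that derefs to &mut Vec<i32>.
    {
        let mut guard = data.lock().unwrap();
        guard.push(4);
    } // guard dropped here: the lock is released

    // Re-locking from the same thread is fine once the guard is gone.
    assert_eq!(data.lock().unwrap().len(), 4);
}
```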
    When to Use Each Style

    **Use Arc<Mutex<T>> when:** multiple threads need to read and write a shared value, and the mutation pattern is irregular (not just produce-then-consume).

    **Use channels (mpsc) when:** work is partitioned upfront and results flow in one direction — spawn workers, collect results. This is the functional style: closer to OCaml's Domain + Event pattern and avoids shared mutable state entirely.

    Exercises

  • Try sharing an Rc<i32> across threads — observe the compile error explaining why Rc is not Send.
  • Implement a parallel map using thread::spawn and channels: split a Vec<i32> into chunks, process each in a thread, gather results.
  • Write a custom type with a raw pointer and manually implement Send (with unsafe), explaining what invariant you are promising to uphold.