
321: async fn and .await Fundamentals

Functional Programming

Tutorial Video

Text description (accessibility)

This video demonstrates the "321: async fn and .await Fundamentals" functional Rust example. Difficulty level: Intermediate. Key concepts covered: Functional Programming.

Tutorial

The Problem

Network servers, database clients, and file-processing applications spend most of their time waiting for I/O. Blocking threads during waits is wasteful — a single server handling 10,000 connections would need 10,000 threads. Async/await enables concurrent I/O on a small thread pool by pausing and resuming tasks when they would otherwise block. This example demonstrates the fundamental concepts using synchronous thread-based analogies before introducing true async syntax.
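The "pausing and resuming" behavior rests on one fact: a Rust future does nothing until something polls it. That can be seen without any external runtime. The sketch below is illustrative only — `block_on` and `NoopWaker` are hand-rolled stand-ins (in real code a runtime such as `tokio` provides the executor), but the laziness of `async fn` is exactly what production runtimes rely on:

```rust
use std::future::Future;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// A waker that does nothing: good enough to poll a future in a busy loop.
struct NoopWaker;

impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

// A toy executor: polls the future until it is Ready.
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

async fn fetch_user(id: u32) -> String {
    format!("User({id})")
}

fn main() {
    let fut = fetch_user(42); // nothing has executed yet: futures are lazy
    let user = block_on(fut); // polling drives the future to completion
    println!("{user}");
}
```

Calling `fetch_user(42)` only constructs the state machine; the body runs when the executor polls it, which is what `.await` arranges inside a real runtime.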

🎯 Learning Outcomes

  • Understand the difference between sequential blocking and concurrent execution
  • Recognize that async fn creates a future that is lazy until .awaited
  • See how join! or thread spawning enables concurrent rather than sequential execution
  • Understand why concurrency reduces total wall-clock time for I/O-bound work

Code Example

    fn sequential_fetch(id: u32) -> (String, Vec<String>) {
        (fetch_user(id), fetch_posts(id))
        // Sequential: ~18ms total
    }

    Key Differences

  • Runtime required: Rust async requires a runtime (tokio, async-std); OCaml's Lwt/Async are also libraries, not language builtins.
  • Syntax: Rust uses async fn + .await; OCaml uses let* / >>= with promise types.
  • Compilation: Rust transforms async fn into state machines at compile time; OCaml's Lwt uses continuation closures at runtime.
  • Thread model: Rust's async is cooperative (explicit .await points); OCaml 5's Domain uses OS threads with shared memory.

    OCaml Approach

    OCaml's Lwt and Async libraries provide similar async/await functionality. Lwt.both is the equivalent of join!:

    (* Lwt: concurrent fetch; `let*` comes from Lwt.Syntax *)
    let open Lwt.Syntax in
    let* (user, posts) = Lwt.both
      (fetch_user 1)
      (fetch_posts 1)
    in
    Lwt.return (user, posts)
    

    OCaml 5.0's Effect system and Domain provide even lower-level concurrency primitives.

    Full Source

    #![allow(clippy::all)]
    //! # Async Basics: Sequential vs Concurrent Execution
    //!
    //! Demonstrates the fundamental difference between sequential and concurrent
    //! execution patterns that form the basis of async programming.
    
    use std::thread;
    use std::time::Duration;
    
    /// Simulates fetching a user by ID with some latency.
    pub fn fetch_user(id: u32) -> String {
        thread::sleep(Duration::from_millis(10));
        format!("User({})", id)
    }
    
    /// Simulates fetching posts for a user with some latency.
    pub fn fetch_posts(user_id: u32) -> Vec<String> {
        thread::sleep(Duration::from_millis(8));
        vec![
            format!("Post1 by {}", user_id),
            format!("Post2 by {}", user_id),
        ]
    }
    
    /// Approach 1: Sequential fetch - each operation blocks until complete.
    /// Like: `let user = fetch_user(id).await; let posts = fetch_posts(id).await;`
    pub fn sequential_fetch(id: u32) -> (String, Vec<String>) {
        let user = fetch_user(id);
        let posts = fetch_posts(id);
        (user, posts)
    }
    
    /// Approach 2: Concurrent fetch using threads.
    /// Like: `join!(fetch_user(id), fetch_posts(id))`
    pub fn concurrent_fetch(id: u32) -> (String, Vec<String>) {
        let handle_user = thread::spawn(move || fetch_user(id));
        let handle_posts = thread::spawn(move || fetch_posts(id));
    
        let user = handle_user.join().expect("user thread panicked");
        let posts = handle_posts.join().expect("posts thread panicked");
        (user, posts)
    }
    
    /// Approach 3: Generic concurrent executor for multiple tasks.
    /// Returns results in the same order as input tasks.
    pub fn run_concurrent<T, F>(tasks: Vec<F>) -> Vec<T>
    where
        T: Send + 'static,
        F: FnOnce() -> T + Send + 'static,
    {
        let handles: Vec<_> = tasks.into_iter().map(|task| thread::spawn(task)).collect();
    
        handles
            .into_iter()
            .map(|h| h.join().expect("task panicked"))
            .collect()
    }
    
    #[cfg(test)]
    mod tests {
        use super::*;
        use std::time::Instant;
    
        #[test]
        fn test_sequential_fetch_returns_correct_user() {
            let (user, _) = sequential_fetch(42);
            assert_eq!(user, "User(42)");
        }
    
        #[test]
        fn test_sequential_fetch_returns_correct_posts() {
            let (_, posts) = sequential_fetch(7);
            assert_eq!(posts.len(), 2);
            assert!(posts[0].contains("7"));
        }
    
        #[test]
        fn test_concurrent_fetch_same_results_as_sequential() {
            let (user1, posts1) = sequential_fetch(99);
            let (user2, posts2) = concurrent_fetch(99);
            assert_eq!(user1, user2);
            assert_eq!(posts1, posts2);
        }
    
        #[test]
        fn test_concurrent_is_faster_than_sequential() {
            let start_seq = Instant::now();
            let _ = sequential_fetch(1);
            let seq_time = start_seq.elapsed();
    
            let start_conc = Instant::now();
            let _ = concurrent_fetch(1);
            let conc_time = start_conc.elapsed();
    
            // Concurrent should be faster (both operations overlap)
            assert!(
                conc_time < seq_time,
                "Concurrent ({:?}) should be faster than sequential ({:?})",
                conc_time,
                seq_time
            );
        }
    
        #[test]
        fn test_run_concurrent_preserves_order() {
            let tasks: Vec<Box<dyn FnOnce() -> i32 + Send>> = vec![
                Box::new(|| {
                    thread::sleep(Duration::from_millis(20));
                    1
                }),
                Box::new(|| {
                    thread::sleep(Duration::from_millis(5));
                    2
                }),
                Box::new(|| {
                    thread::sleep(Duration::from_millis(10));
                    3
                }),
            ];
    
            let results = run_concurrent(tasks);
            assert_eq!(results, vec![1, 2, 3]);
        }
    
        #[test]
        fn test_run_concurrent_empty_list() {
            let tasks: Vec<Box<dyn FnOnce() -> i32 + Send>> = vec![];
            let results = run_concurrent(tasks);
            assert!(results.is_empty());
        }
    }

    Deep Comparison

    OCaml vs Rust: Async Basics

    Sequential Fetch

    OCaml:

    let fetch_user id =
      Thread.delay 0.05;
      Printf.sprintf "User(%d)" id
    
    let fetch_posts id =
      Thread.delay 0.03;
      [Printf.sprintf "Post1 by %d" id; Printf.sprintf "Post2 by %d" id]
    
    let () =
      let user = fetch_user 42 in
      let posts = fetch_posts 42 in
      (* Sequential: ~80ms total (50ms + 30ms) *)
      ignore (user, posts)
    

    Rust:

    fn sequential_fetch(id: u32) -> (String, Vec<String>) {
        (fetch_user(id), fetch_posts(id))
        // Sequential: ~18ms total
    }
    

    Concurrent Fetch

    OCaml (with threads):

    (* Run each unit task in its own thread; join discards results *)
    let parallel tasks =
      let threads = List.map (fun f -> Thread.create f ()) tasks in
      List.iter Thread.join threads
    

    Rust:

    fn concurrent_fetch(id: u32) -> (String, Vec<String>) {
        let h1 = thread::spawn(move || fetch_user(id));
        let h2 = thread::spawn(move || fetch_posts(id));
        (h1.join().unwrap(), h2.join().unwrap())
    }
    

    Key Differences

    Aspect               OCaml                                     Rust
    Native async         No (use Lwt/Async)                        Yes (async fn / .await)
    Thread API           Thread.create                             thread::spawn
    Move semantics       Implicit capture                          Explicit move
    Error handling       Exceptions                                Result from join()
    Concurrency model    Runtime lock pre-5.0; Domains in 5.x      True parallelism
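The error-handling row is worth seeing directly: `thread::spawn` returns a `JoinHandle`, and `join()` yields a `Result`, so a panic in the child thread surfaces as an `Err` value in the parent rather than aborting the whole program (the panic message still goes to stderr via the default hook):

```rust
use std::thread;

fn main() {
    // A child thread that panics instead of returning a value.
    let handle = thread::spawn(|| -> i32 { panic!("boom") });

    // join() converts the child's panic into Err rather than propagating it.
    match handle.join() {
        Ok(v) => println!("ok: {v}"),
        Err(_) => println!("child thread panicked"),
    }
}
```

This is why the full source above uses `.expect("user thread panicked")`: it deliberately re-raises the error in the parent when a child task fails.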

    Exercises

  • Measure the wall-clock time difference between sequential and concurrent thread-based fetches for 5 operations of varying latency.
  • Implement a concurrent_map(items: Vec<T>, f: Fn(T) -> R) -> Vec<R> that processes all items in parallel using threads.
  • Identify which operations in a sequential workflow are independent (can be parallelized) vs dependent (must be sequential).
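As a starting point for the second exercise, here is one possible skeleton (the `Clone` bound on the closure is a simplification — an `Arc`-based version would avoid it, and real solutions may differ):

```rust
use std::thread;

// Apply f to every item in parallel, one thread per item,
// returning results in the same order as the inputs.
fn concurrent_map<T, R, F>(items: Vec<T>, f: F) -> Vec<R>
where
    T: Send + 'static,
    R: Send + 'static,
    F: Fn(T) -> R + Send + Clone + 'static,
{
    // Spawn all threads first so the work overlaps...
    let handles: Vec<_> = items
        .into_iter()
        .map(|item| {
            let f = f.clone();
            thread::spawn(move || f(item))
        })
        .collect();

    // ...then join in input order so results line up with items.
    handles
        .into_iter()
        .map(|h| h.join().expect("task panicked"))
        .collect()
}

fn main() {
    let doubled = concurrent_map(vec![1, 2, 3], |x: i32| x * 2);
    println!("{doubled:?}"); // [2, 4, 6]
}
```

Joining in input order (rather than completion order) is the same trick `run_concurrent` uses above: the slowest task gates the total time, but ordering stays deterministic.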