342: Async I/O Concepts
Tutorial
The Problem
Servers that handle thousands of simultaneous network connections cannot dedicate one OS thread per connection — thread stacks alone would consume gigabytes of memory. Async I/O solves this by decoupling waiting from threads: while one operation waits for a disk read or network packet, the same thread processes other work. This model traces back to the C10K problem (Dan Kegel, 1999) and the design of event loops in Node.js, nginx, and later Tokio. Rust's async/await brings this efficiency without sacrificing type safety or requiring a garbage collector, achieving C-level throughput with safe, readable code.
🎯 Learning Outcomes
- `async fn` compiles to a state machine that is driven by repeated `poll()` calls until it returns `Ready`

Code Example
```rust
#![allow(clippy::all)]
// 342: Async I/O Concepts
// Polling vs blocking, simulated with threads and channels
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Approach 1: Blocking I/O -- the calling thread can do nothing else
// while the (simulated) read is in flight.
fn blocking_read() -> String {
    thread::sleep(Duration::from_millis(10));
    "data from blocking read".to_string()
}

// Approach 2: Threaded I/O with channels -- one OS thread per operation.
fn parallel_reads() -> Vec<String> {
    let (tx1, rx) = mpsc::channel();
    let tx2 = tx1.clone();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        tx1.send("result1".to_string()).unwrap();
    });
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        tx2.send("result2".to_string()).unwrap();
    });
    let mut results = Vec::new();
    for _ in 0..2 {
        results.push(rx.recv().unwrap());
    }
    results
}

// Approach 3: Polling simulation -- the same shape as std::task::Poll.
enum PollResult<T> {
    Ready(T),
    Pending,
}

// Reports Pending three times, then Ready, mimicking an operation
// that needs several polls before its result is available.
fn simulate_poll(counter: &mut u32) -> PollResult<&'static str> {
    if *counter >= 3 {
        *counter = 0;
        PollResult::Ready("done")
    } else {
        *counter += 1;
        PollResult::Pending
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_blocking() {
        assert_eq!(blocking_read(), "data from blocking read");
    }

    #[test]
    fn test_parallel() {
        let results = parallel_reads();
        assert_eq!(results.len(), 2);
        assert!(results.contains(&"result1".to_string()));
        assert!(results.contains(&"result2".to_string()));
    }

    #[test]
    fn test_poll() {
        let mut counter = 0;
        assert!(matches!(simulate_poll(&mut counter), PollResult::Pending));
        assert!(matches!(simulate_poll(&mut counter), PollResult::Pending));
        assert!(matches!(simulate_poll(&mut counter), PollResult::Pending));
        assert!(matches!(
            simulate_poll(&mut counter),
            PollResult::Ready("done")
        ));
    }
}
```

Key Differences
| Aspect | Rust async/await | OCaml Lwt |
|---|---|---|
| Concurrency model | Multi-threaded (Tokio) or single-thread | Single-threaded by default |
| Syntax | async fn / await built into language | let%lwt PPX rewriter macro |
| Error propagation | ? operator in async context | Lwt_result.bind or let* |
| Cancellation | tokio::select! / CancellationToken | Lwt.cancel |
| Zero-cost | Yes (inline state machines; no required heap allocation) | No (heap-allocated promises) |
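The "state machine" and "zero-cost" rows can be made concrete with a hand-written future. This is a minimal sketch, not Tokio's executor: `CountDown`, `noop_waker`, and `block_on` are illustrative names invented here, but `Future`, `Poll`, `Waker`, and `RawWakerVTable` are the real standard-library types that `async fn` desugars onto.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// A future that reports Pending a few times before completing --
/// the kind of state machine an `async fn` compiles down to.
struct CountDown(u32);

impl Future for CountDown {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<&'static str> {
        if self.0 == 0 {
            Poll::Ready("done")
        } else {
            self.0 -= 1;
            Poll::Pending
        }
    }
}

/// A waker that does nothing -- sufficient for a busy-polling executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// The smallest possible executor: poll in a loop until Ready.
fn block_on<F: Future + Unpin>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = Pin::new(&mut fut).poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(CountDown(2)), "done");
    println!("future completed after repeated polls");
}
```

The state lives inline in `CountDown` on the stack; no heap allocation is needed to drive it to completion, which is what the "zero-cost" row is claiming. A real executor would park instead of spinning and use the waker to learn when to poll again.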
OCaml Approach
OCaml's Lwt library implements cooperative async I/O via promises:
```ocaml
(* Sequential: the second read does not start until the first finishes. *)
let sequential () =
  let%lwt content1 = Lwt_io.read_file "a.txt" in
  let%lwt content2 = Lwt_io.read_file "b.txt" in
  Lwt_io.printf "%s %s\n" content1 content2

(* Concurrent: Lwt.both starts both reads before waiting on either. *)
let concurrent () =
  let%lwt (c1, c2) = Lwt.both
    (Lwt_io.read_file "a.txt")
    (Lwt_io.read_file "b.txt") in
  Lwt_io.printf "%s %s\n" c1 c2
```
Lwt.both runs two I/O operations concurrently on one thread, analogous to tokio::join!. Both Lwt and Tokio use an event loop driven by OS-level readiness notifications.
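The join idea can be sketched without tokio at all: the hypothetical `join2` helper below (illustrative names, reusing the tutorial's `PollResult` shape) drives two simulated operations to completion on a single thread by polling each in turn, which is the essence of `tokio::join!` and `Lwt.both`.

```rust
enum PollResult<T> {
    Ready(T),
    Pending,
}

/// A simulated operation that needs `remaining` more polls before it is ready.
struct Task {
    remaining: u32,
    value: &'static str,
}

impl Task {
    fn poll(&mut self) -> PollResult<&'static str> {
        if self.remaining == 0 {
            PollResult::Ready(self.value)
        } else {
            self.remaining -= 1;
            PollResult::Pending
        }
    }
}

/// Hypothetical join: alternate polls between two tasks on one thread
/// until both are Ready -- neither blocks the other.
fn join2(mut a: Task, mut b: Task) -> (&'static str, &'static str) {
    let (mut ra, mut rb) = (None, None);
    loop {
        if ra.is_none() {
            if let PollResult::Ready(v) = a.poll() {
                ra = Some(v);
            }
        }
        if rb.is_none() {
            if let PollResult::Ready(v) = b.poll() {
                rb = Some(v);
            }
        }
        if let (Some(x), Some(y)) = (ra, rb) {
            return (x, y);
        }
    }
}

fn main() {
    let a = Task { remaining: 3, value: "a.txt" };
    let b = Task { remaining: 1, value: "b.txt" };
    assert_eq!(join2(a, b), ("a.txt", "b.txt"));
    println!("both ready");
}
```

A real runtime would not busy-poll like this; it would wait on OS readiness notifications (epoll, kqueue, IOCP) and only poll tasks whose I/O has become ready.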
Deep Comparison
Core Insight
Blocking I/O wastes a thread for every in-flight operation; async I/O uses polling and readiness callbacks to multiplex many connections over a few threads.
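The insight above can be sketched as a tiny single-threaded round-robin scheduler (hypothetical `Conn` and `run_all` names): each task that is not yet ready goes to the back of the queue instead of blocking a thread, so one thread serves every connection.

```rust
use std::collections::VecDeque;

/// A simulated connection that becomes ready after `polls_left` polls.
struct Conn {
    id: usize,
    polls_left: u32,
}

/// Round-robin scheduler: pop a task, poll it, requeue it if Pending.
/// One thread drives every connection to completion; nothing ever blocks.
fn run_all(mut queue: VecDeque<Conn>) -> Vec<usize> {
    let mut completed = Vec::new();
    while let Some(mut conn) = queue.pop_front() {
        if conn.polls_left == 0 {
            completed.push(conn.id); // Ready: retire the task
        } else {
            conn.polls_left -= 1; // Pending: back of the queue
            queue.push_back(conn);
        }
    }
    completed
}

fn main() {
    // Ten simulated connections with different readiness times, one thread.
    let queue: VecDeque<Conn> = (0..10)
        .map(|id| Conn { id, polls_left: (id as u32 % 4) + 1 })
        .collect();
    let mut done = run_all(queue);
    done.sort();
    assert_eq!(done, (0..10).collect::<Vec<usize>>());
    println!("10 connections served on 1 thread");
}
```

Scaling the queue to thousands of connections costs only a few dozen bytes per task, versus a full stack (often megabytes of reserved address space) per connection in the thread-per-connection model.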
For the complete sources, see example.ml (OCaml) and example.rs (Rust).
Exercises
1. Write a loop that calls `simulate_poll` every millisecond until it returns `Ready` — then refactor to use a `Condvar` to avoid busy-waiting.
2. Using `tokio`, rewrite `parallel_reads` as an async function using `tokio::join!` with two `tokio::time::sleep` futures; verify it completes in ~10 ms, not ~20 ms.