Rust Concurrency: Fearless Concurrency

A concurrent program runs multiple tasks (or appears to), meaning two or more tasks make progress within overlapping time spans. These tasks are executed by threads, the smallest unit of scheduled execution. On a single core this is not true parallelism but rapid context switching between threads, fast enough to be imperceptible to humans. Many modern applications rely on this illusion; a server, for example, can process one request while waiting on others. When threads share data, however, problems can arise, the most common being race conditions and deadlocks.

Rust’s ownership model and type system are powerful tools for addressing memory safety and concurrency problems. Through ownership and type checking, many concurrency errors become compile-time errors rather than runtime failures, so developers fix them during development instead of after deployment to production. Once code compiles, you can trust it to run safely in a multithreaded environment, without the hard-to-track bugs common in other languages. This is what Rust calls fearless concurrency.

Multithreading Model

The Risks of Multithreaded Programming

In most modern operating systems, running program code executes within a process, which the operating system manages. Within a program, there can also be multiple independently executing components, known as threads.

Splitting program computation into multiple threads can improve performance, as the program can handle multiple tasks simultaneously. However, it also increases complexity. Because threads run concurrently, there’s no guarantee of the order in which code in different threads will execute. This can lead to issues like:

  • Race conditions, where multiple threads access data or resources in inconsistent orders
  • Deadlocks, where two threads wait on each other to release resources they each hold, preventing further progress
  • Bugs that occur only under specific circumstances and are hard to reproduce or fix consistently

Programming languages have different ways of implementing threads. Many operating systems provide APIs to create new threads. When a language uses the OS API to create threads, this is often referred to as a 1:1 model, where one OS thread corresponds to one language-level thread.

Rust’s standard library only provides the 1:1 threading model.

Creating New Threads with spawn

use std::thread;
use std::time::Duration;

fn main() {
    // The JoinHandle returned by `spawn` is deliberately not kept here
    thread::spawn(|| {
        for i in 1..10 {
            println!("this is thread {}", i);
            thread::sleep(Duration::from_millis(1));
        }
    });

    for k in 1..5 {
        println!("this is main {}", k);
        thread::sleep(Duration::from_millis(1));
    }
}

Sample Output (interleaving varies between runs):

this is main 1
this is thread 1
this is main 2
this is thread 2
this is main 3
this is thread 3
this is main 4
this is thread 4
this is thread 5

Notice that the main loop runs only four times (1..5). Once it finishes, main returns and the process exits, so the spawned thread, though written for nine iterations (1..10), only manages about five. When the main thread ends, all spawned threads are terminated, whether or not they have completed.

If we want the spawned thread to run to completion before the program exits, we can keep the JoinHandle returned by spawn and call join on it:

use std::thread;
use std::time::Duration;

fn main() {
    let handle = thread::spawn(|| {
        for i in 1..10 {
            println!("this is thread {}", i);
            thread::sleep(Duration::from_millis(1));
        }
    });

    for k in 1..5 {
        println!("this is main {}", k);
        thread::sleep(Duration::from_millis(1));
    }

    handle.join().unwrap(); // Block the main thread until the spawned thread finishes
}

Sample Output (interleaving varies between runs):

this is main 1
this is thread 1
this is main 2
this is thread 2
this is main 3
this is thread 3
this is main 4
this is thread 4
this is thread 5
this is thread 6
this is thread 7
this is thread 8
this is thread 9

The return type of thread::spawn is JoinHandle<T>, where T is the closure’s return type. JoinHandle is an owned value, and calling its join method waits for its thread to finish.

Calling join on a handle blocks the current thread until the thread represented by the handle ends. Blocking a thread means preventing it from doing further work or exiting.
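
Because JoinHandle is parameterized over the closure's return type, join also hands back the value the thread produced. Here is a minimal sketch (the summation is just an illustrative workload):

use std::thread;

fn main() {
    // The closure returns a value, so the handle's type is JoinHandle<i32>
    let handle = thread::spawn(|| (1..=10).sum::<i32>());

    // `join` blocks until the thread finishes, then yields Ok(result)
    let sum = handle.join().unwrap();
    println!("sum is {}", sum); // sum is 55
}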

Threads and move Closures

We can use a move closure to transfer ownership of variables from the main thread into the closure:

use std::thread;

fn main() {
    let v = vec![2, 4, 5];
    // `move` transfers ownership of `v` into the closure
    let handle = thread::spawn(move || {
        println!("v is {:?}", v);
    });
    handle.join().unwrap(); // Wait for the thread so the output is guaranteed to appear
}

Output:

v is [2, 4, 5]

Rust moves the ownership of the variable v into the new thread. This ensures that the variable is safe to use within the new thread, and it also means the main thread can no longer use v (e.g., to drop it).

If the move keyword is omitted, the compiler will raise an error:

$ cargo run
error[E0373]: closure may outlive the current function, but it borrows `v`, which is owned by the current function
 --> src/main.rs:6:32
  |
6 |     let handle = thread::spawn(|| {
  |                                ^^ may outlive borrowed value `v`
7 |         println!("v is {:?}", v);
  |                               - `v` is borrowed here

Rust’s ownership rules have once again helped ensure memory safety!

Message Passing

In Rust, one primary tool for message-passing concurrency is the channel, a concept provided by the standard library. You can think of it like a water channel—a river or stream. If you place something like a rubber duck or a boat in it, it flows downstream to the receiver.

A channel has two parts: a transmitter and a receiver. When either the transmitter or receiver is dropped, the channel is considered closed.

Channels are implemented through the standard library's std::sync::mpsc, which stands for multiple producer, single consumer.
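
As a small illustration of the closing rule (a minimal sketch; the values are arbitrary), dropping the sender closes the channel, but anything already sent is still delivered:

use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(1).unwrap();
    drop(tx); // Dropping the sender closes the channel

    assert_eq!(rx.recv(), Ok(1)); // A value sent before the close is still delivered
    assert!(rx.recv().is_err());  // Afterwards, recv reports the channel as closed
}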

Note: Based on the number of readers and writers, channels can be categorized as:

  • SPSC – Single Producer, Single Consumer (can be implemented with atomics alone)
  • SPMC – Single Producer, Multiple Consumers (typically requires locking on the consumer side)
  • MPSC – Multiple Producers, Single Consumer (typically requires locking on the producer side)
  • MPMC – Multiple Producers, Multiple Consumers

Passing Messages Between Threads

use std::thread;
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel();
    // Move `tx` into the closure so the new thread owns it
    thread::spawn(move || {
        tx.send("hello").unwrap();
    });

    // `recv()` blocks the main thread until a value is received
    let msg = rx.recv().unwrap();
    println!("message is {}", msg);
}

Output:

message is hello

The receiving end of the channel has two useful methods: recv and try_recv.

Here we used recv, short for receive, which blocks the main thread until a value is received. Once a value is sent, recv returns it in a Result. If the transmitter is closed, it returns an error indicating no more values will arrive.

try_recv does not block; instead, it returns immediately with a Result: Ok if data is available, or Err if not.

If the spawned thread hasn't had a chance to send yet, try_recv returns Err, and unwrapping it panics at runtime (a safer polling pattern follows the error output below):

use std::thread;
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        tx.send("hello").unwrap();
    });

    // `try_recv` returns immediately, so it might not receive the message in time
    let msg = rx.try_recv().unwrap();
    println!("message is {}", msg);
}

Error:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Empty', ...
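
In practice, try_recv is meant to be called in a loop so the thread can do other work between polls. A minimal polling sketch, matching on TryRecvError to tell an empty channel apart from a closed one:

use std::sync::mpsc::{self, TryRecvError};
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        tx.send("hello").unwrap();
    });

    loop {
        match rx.try_recv() {
            Ok(msg) => {
                println!("message is {}", msg);
                break;
            }
            // Nothing has arrived yet; do other work, then poll again
            Err(TryRecvError::Empty) => thread::sleep(Duration::from_millis(1)),
            // The sender was dropped before sending anything more
            Err(TryRecvError::Disconnected) => break,
        }
    }
}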

Sending Multiple Values and Observing Receiver Waiting

use std::thread;
use std::sync::mpsc;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let vals = vec![
            String::from("hi"),
            String::from("from"),
            String::from("the"),
            String::from("thread"),
        ];

        for val in vals {
            tx.send(val).unwrap();
            thread::sleep(Duration::from_secs(1));
        }
    });

    // Treating `rx` as an iterator blocks for each value and ends once the channel closes
    for received in rx {
        println!("Got: {}", received);
    }
}

Sample Output (with 1-second pauses between lines):

Got: hi
Got: from
Got: the
Got: thread

Creating Multiple Producers by Cloning the Sender

use std::thread;
use std::sync::mpsc;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();
    // Clone the sender `tx` to create a second producer
    let tx1 = tx.clone();

    thread::spawn(move || {
        let vals = vec![
            String::from("hi"),
            String::from("from"),
            String::from("the"),
            String::from("thread"),
        ];

        for val in vals {
            tx1.send(val).unwrap(); // Use the cloned sender
            thread::sleep(Duration::from_secs(1));
        }
    });

    thread::spawn(move || {
        let vals = vec![
            String::from("more"),
            String::from("messages"),
            String::from("for"),
            String::from("you"),
        ];

        for val in vals {
            tx.send(val).unwrap(); // Use the original sender
            thread::sleep(Duration::from_secs(1));
        }
    });

    // Both tx and tx1 send values to the same receiver rx
    for received in rx {
        println!("Got: {}", received);
    }
}

Sample Output (varies by system due to scheduling):

Got: hi
Got: more
Got: from
Got: messages
Got: for
Got: the
Got: thread
Got: you

Shared State

Shared state means that multiple threads access the same memory. The most common shared-memory concurrency primitive in Rust’s standard library is the mutex (mutual exclusion lock).

A Mutex Allows Only One Thread to Access Data at a Time

Mutex is short for mutual exclusion, meaning only one thread can access certain data at any given time. To access data in a mutex, a thread must first acquire the lock. The lock is a data structure that tracks who currently has exclusive access.

Using the standard library’s std::sync::Mutex:

use std::sync::Mutex;

fn main() {
    let m = Mutex::new(5);
    {
        let mut num = m.lock().unwrap();
        *num = 10; // Mutate the value inside the Mutex
        println!("num is {}", num);
    }
    println!("m is {:?}", m);
}

Output:

num is 10
m is Mutex { data: 10 }

We use the lock method to get access to the data in the mutex. This call blocks the current thread until the lock is acquired. It returns a Result, which is an Err if another thread panicked while holding the lock; unwrapping propagates that panic.

More precisely, lock returns a smart pointer called MutexGuard (inside the Result we just unwrapped). MutexGuard implements Deref to point at the inner data, and it implements Drop, so the lock is released automatically when the guard goes out of scope.
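
Because the lock's lifetime is tied to the guard, you can also release it early by dropping the guard explicitly. A small sketch:

use std::sync::Mutex;

fn main() {
    let m = Mutex::new(5);

    let mut num = m.lock().unwrap();
    *num = 10;
    drop(num); // Dropping the MutexGuard releases the lock immediately

    // The lock is free again, so this second lock() does not block
    println!("m is {:?}", m.lock().unwrap());
}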

Sharing a Mutex Between Threads

When data must be shared between multiple threads, each of which needs ownership, we wrap the Mutex in an Arc (atomically reference-counted) smart pointer. Arc is thread-safe; Rc is not and cannot be used in multithreaded contexts.

Here’s an example of using Arc to wrap a Mutex, allowing shared ownership across threads:

use std::sync::{Mutex, Arc};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter); // Clone the Arc before moving into the thread
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}

Output:

Result: 10

To summarize:

  • Rc + RefCell is typically used for single-threaded interior mutability (a sketch follows this list)
  • Arc + Mutex is used for multithreaded interior mutability
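
For contrast with the multithreaded examples above, here is a minimal single-threaded sketch of the Rc + RefCell pairing:

use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Rc provides shared ownership; RefCell provides runtime-checked mutability
    let counter = Rc::new(RefCell::new(0));
    let also_counter = Rc::clone(&counter);

    *counter.borrow_mut() += 1;
    *also_counter.borrow_mut() += 1;

    println!("Result: {}", counter.borrow()); // Result: 2
}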

Note: Mutex can still cause deadlocks. This can happen if two operations each need to lock two resources and two threads hold one lock each, waiting on the other—resulting in a circular wait.
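
A minimal sketch of that circular wait (the sleeps merely make the unlucky interleaving likely; this program will usually hang rather than finish):

use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let (a1, b1) = (Arc::clone(&a), Arc::clone(&b));
    let t1 = thread::spawn(move || {
        let _ga = a1.lock().unwrap(); // Thread 1 holds `a`...
        thread::sleep(Duration::from_millis(10));
        let _gb = b1.lock().unwrap(); // ...then waits for `b`
    });

    let t2 = thread::spawn(move || {
        let _gb = b.lock().unwrap(); // Thread 2 holds `b`...
        thread::sleep(Duration::from_millis(10));
        let _ga = a.lock().unwrap(); // ...then waits for `a`, completing the cycle
    });

    t1.join().unwrap();
    t2.join().unwrap(); // Likely never reached
}

The standard remedy is to agree on one global order in which locks are acquired; if both threads locked a before b, the cycle could not form.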

Thread Safety Based on Send and Sync

In Rust, concurrency-related tools are part of the standard library, not the language itself. However, there are two concurrency concepts embedded in the language: the Send and Sync traits in std::marker.

Purpose of Send and Sync

Send and Sync are at the core of safe concurrency in Rust. Technically, they are marker traits (traits that don’t define any methods) and are used to mark types for concurrency behavior:

  • A type that implements Send can have its ownership transferred safely between threads.
  • A type that implements Sync can be shared between threads via references.

The two are connected: T is Sync exactly when &T is Send, because sharing T across threads amounts to sending references to it.
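
One way to check these properties is to ask the compiler directly. The assertion functions below are illustrative helpers, not standard APIs; a call compiles only if the type argument satisfies the bound:

fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    assert_send::<Vec<i32>>(); // Compiles: Vec<i32> is Send
    assert_sync::<i32>();      // Compiles: i32 is Sync

    // These would fail to compile if uncommented,
    // because Rc is neither Send nor Sync:
    // assert_send::<std::rc::Rc<i32>>();
    // assert_sync::<std::rc::Rc<i32>>();
}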

Types That Implement Send and Sync

In Rust, almost all types implement Send and Sync. Both are auto traits, so a compound type (such as a struct) is automatically Send or Sync when all of its members are.

But if even one member isn’t Send or Sync, then the whole type isn’t.

To summarize:

  • Types that implement Send can safely transfer ownership between threads.
  • Types that implement Sync can be safely shared between threads (by reference).
  • The vast majority of types in Rust are both Send and Sync.

Common types not implementing these traits include (a concrete example follows the list):

  • Raw pointers
  • Cell, RefCell
  • Rc
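
To make the Rc case concrete, here is a sketch of what the compiler rejects and the Arc alternative it accepts:

use std::sync::Arc;
use std::thread;

fn main() {
    // With Rc this would not compile:
    // error[E0277]: `Rc<i32>` cannot be sent between threads safely
    // let shared = std::rc::Rc::new(5);

    // Arc<i32> is Send and Sync, so moving it into a thread is allowed
    let shared = Arc::new(5);
    let handle = thread::spawn(move || println!("shared is {}", shared));
    handle.join().unwrap();
}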

It is possible to manually implement Send and Sync for your own types, as the sketch after this list shows, but:

  • You must use unsafe code
  • You must manually ensure thread safety
  • This is rarely necessary, and should be done with great caution
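
For completeness, the syntax looks like the sketch below. The wrapper type is purely hypothetical, and the impls are an unchecked promise to the compiler: as written, the type has no synchronization of its own, so real code would have to add locking before making such a claim.

use std::cell::UnsafeCell;

// Hypothetical wrapper, shown only to illustrate the syntax
struct RawSlot<T> {
    value: UnsafeCell<T>,
}

// `unsafe impl` asserts thread safety by hand; the compiler cannot verify
// the claim, so soundness rests entirely on the author
unsafe impl<T: Send> Send for RawSlot<T> {}
unsafe impl<T: Send> Sync for RawSlot<T> {}

fn main() {
    let slot = RawSlot { value: UnsafeCell::new(0) };
    let _ptr = slot.value.get(); // Raw access point; a real type would guard this
}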

Note:

  • Cell and RefCell are not Sync because their core implementation (UnsafeCell) is not Sync.
  • Rc is neither Send nor Sync because its reference count is updated non-atomically.
  • Raw pointers implement neither trait since they provide no safety guarantees.

Summary

Rust provides both async/await and multithreaded concurrency models. To use the multithreaded model effectively, one must understand Rust’s threading fundamentals, including:

  • Thread creation
  • Thread synchronization
  • Thread safety

Rust supports:

  • Message-passing concurrency, where channels are used to transmit data between threads
  • Shared-state concurrency, where Mutex and Arc are used to share and safely mutate data across threads

The type system and borrow checker ensure these patterns are free of data races and dangling references.

Once the code compiles, you can be confident it will run correctly in multithreaded environments, without the elusive, hard-to-debug bugs seen in other languages.

The Send and Sync traits provide the guarantees for safely transferring or sharing data between threads.

In Summary:

  • Threading model: Multithreaded programs must handle race conditions, deadlocks, and bugs that are hard to reproduce.
  • Message passing: Uses channels to transmit data between threads.
  • Shared state: Mutex + Arc allow multiple threads to access and mutate the same data.
  • Thread safety: Send and Sync traits guarantee the safety of data transfer and sharing in multithreaded contexts.

We are Leapcell, your top choice for hosting Rust projects.

Leapcell

Leapcell is the Next-Gen Serverless Platform for Web Hosting, Async Tasks, and Redis:

Multi-Language Support

  • Develop with Node.js, Python, Go, or Rust.

Deploy unlimited projects for free

  • Pay only for usage: no requests, no charges.

Unbeatable Cost Efficiency

  • Pay-as-you-go with no idle charges.
  • Example: $25 supports 6.94M requests at a 60ms average response time.

Streamlined Developer Experience

  • Intuitive UI for effortless setup.
  • Fully automated CI/CD pipelines and GitOps integration.
  • Real-time metrics and logging for actionable insights.

Effortless Scalability and High Performance

  • Auto-scaling to handle high concurrency with ease.
  • Zero operational overhead — just focus on building.

Explore more in the Documentation!

Try Leapcell

Follow us on X: @LeapcellHQ

Read on our blog