Constructor acquires, destructor releases


Gustavo Noronha
June 09, 2025

In this final article based on Matt Godbolt's talk on making APIs easy to use and hard to misuse, I will discuss locking. This is actually an area where C++ has produced some interesting ideas, most notably something called RAII — Resource Acquisition Is Initialization.

Matt doesn't like that acronym very much, so he proposes a new one that I also think is much better: CADR. It tells you everything you need to know: Constructor Acquires, Destructor Releases. That is basically what the RAII pattern does.

Let's not beat around the bush. Here's the C++ code that Matt comes up with to protect his API from misuse:

#include <iostream>
#include <mutex>

class MyWidget {
  std::mutex mutex_;
  void tinker_with(int amount) { /* ... */ }

public:
  class Tinkerable;

  Tinkerable get_tinkerable();
};

class MyWidget::Tinkerable {
  MyWidget &widget_;
  std::scoped_lock<std::mutex> lock_;
  friend MyWidget;

  explicit Tinkerable(MyWidget &widget)
      : widget_(widget), lock_(widget_.mutex_) {}

public:
  void tinker_with(int amount) { widget_.tinker_with(amount); }
};

MyWidget::Tinkerable MyWidget::get_tinkerable() { return Tinkerable(*this); }

int main(void) {
  auto widget = MyWidget();
  auto tinkerable = widget.get_tinkerable();
  tinkerable.tinker_with(10);
  return 0;
}

The important bit here is to make sure a call to the tinker_with() method can only be made while the mutex held by MyWidget is locked. To get there, MyWidget is declared a friend of Tinkerable, so get_tinkerable() becomes the only way to reach Tinkerable's private constructor, which locks the mutex using a std::scoped_lock, a CADR locking mechanism.

Let's talk about why all that machinery is required in C++ to get the protection we are looking for. Here is the same code with manual locking and some threads being created:

#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

class MyWidget {
  std::mutex mutex_;

public:
  void lock() { mutex_.lock(); }
  void unlock() { mutex_.unlock(); }

  void tinker_with(int amount) { /* ... */ }
};

int main(void) {
  auto widget = MyWidget();
  std::vector<std::thread> threads;

  for (auto i = 0; i < 10; i++) {
    threads.emplace_back([&widget, i]() {
      while (true) {
        widget.tinker_with(i);
      }
    });
  }

  for (auto &t : threads)
    t.join();

  return 0;
}

The compiler won't complain about this code at all, but the most eagle-eyed among you will notice that I forgot to actually do any locking before calling the tinker_with() method. Well… we are now running into a data race, meaning the state of MyWidget can be tinkered with by 10 threads at the same time, and the outcome is very likely to be inconsistent at best.

Even if I did remember to call lock(), there is a chance I would forget to call unlock() after I am done with the value. That is solved with great ideas like std::scoped_lock, which Matt used as part of his solution, a CADR tool that gives you exactly the behavior you want. Its constructor acquires the lock, and once the scope ends the destructor runs and the lock is released; there is no chance of forgetting it and causing a deadlock. If you need to keep the lock alive, you just pass the scoped lock on to where it is still needed.

Honestly, well done there C++. The only problem? As you can see from the example above, there is no enforcement that these tools are used or used correctly, so it is still up to the programmer to not forget to use them, and use them well.

Rust: fearless concurrency

What does it mean to be fearless when dealing with concurrency?

Well, if you have worked with C and C++ for over 20 years like myself, you have probably seen a version of the code above, without the locks, many times in your career. Except it was usually much worse: it was in a much more complex codebase, with much more complex code paths. Can I do this here, or is there a thread that comes down this same path and will mess things up? The answer may be no right now, but it may become yes when I inadvertently call this function from a thread while making another improvement later today.

As you've come to expect, Rust disallows this kind of mistake from being written. The main building blocks that allow it to do that are "marker traits". There are two main traits associated with concurrency: Send and Sync. A type is Send if it can be safely moved between threads. A type is Sync if it can be safely shared between threads. The compiler will auto-mark types based on their contents, so the only developers who need to mind whether their types are marked correctly are those creating concurrency primitives. Let's look at an example:

struct MyWidget {
    amount: i64,
}

impl MyWidget {
    fn tinker_with(&mut self, amount: i64) {
        self.amount += amount;
    }
}

fn main() {
    let mut widget = MyWidget { amount: 0 };
    let mut threads = vec![];

    for _ in 0..10 {
        let handle = std::thread::spawn(|| loop {
            widget.tinker_with(10);
        });
        threads.push(handle);
    }

    for t in threads {
        t.join().expect("Thread panicked");
    }
}

Here we try to share MyWidget with several threads with no locking, like in the C++ example. But, as you've come to expect, Rust won't make it easy for us to make this mistake:

error[E0373]: closure may outlive the current function, but it borrows `widget`, which is owned by the current function
  --> locks/locks-2.rs:16:41
   |
16 |         let handle = std::thread::spawn(|| loop {
   |                                         ^^ may outlive borrowed value `widget`
17 |             widget.tinker_with(10);
   |             ------ `widget` is borrowed here
   |
note: function requires argument type to outlive `'static`
  --> locks/locks-2.rs:16:22
   |
16 |           let handle = std::thread::spawn(|| loop {
   |  ______________________^
17 | |             widget.tinker_with(10);
18 | |         });
   | |__________^
help: to force the closure to take ownership of `widget` (and any other referenced variables), use the `move` keyword
   |
16 |         let handle = std::thread::spawn(move || loop {
   |                                         ++++

error[E0499]: cannot borrow `widget` as mutable more than once at a time
  --> locks/locks-2.rs:16:41
   |
16 |           let handle = std::thread::spawn(|| loop {
   |                        -                  ^^ `widget` was mutably borrowed here in the previous iteration of the loop
   |  ______________________|
   | |
17 | |             widget.tinker_with(10);
   | |             ------ borrows occur due to use of `widget` in closure
18 | |         });
   | |__________- argument requires that `widget` is borrowed for `'static`

error: aborting due to 2 previous errors

Some errors have detailed explanations: E0373, E0499.
For more information about an error, try `rustc --explain E0373`.

The error is not about concurrency, at least not directly. It is complaining because we are trying to mutably borrow a variable more than once: we use it in the thread closure, so it is captured 10 times. That does not sit well with the borrow checker. You may remember I mentioned shared state and interior mutability last time; this is one case where that can help. We can wrap our value in a reference-counting smart pointer and move copies of it to each thread:

...
    let widget = Rc::new(MyWidget { amount: 0 });
    let mut threads = vec![];

    for _ in 0..10 {
        let mut widget = widget.clone();
        let handle = std::thread::spawn(move || loop {
            widget.tinker_with(10);
...

That won't work, as the Rc smart pointer is not Send: it cannot be moved between threads.

error[E0277]: `Rc<MyWidget>` cannot be sent between threads safely
   --> locks/locks-3.rs:19:41
    |
19  |           let handle = std::thread::spawn(move || loop {
    |                        ------------------ ^------
    |                        |                  |
    |  ______________________|__________________within this `{closure@locks/locks-3.rs:19:41: 19:48}`
    | |                      |
    | |                      required by a bound introduced by this call
20  | |             widget.tinker_with(10);
21  | |         });
    | |_________^ `Rc<MyWidget>` cannot be sent between threads safely
    |
    = help: within `{closure@locks/locks-3.rs:19:41: 19:48}`, the trait `Send` is not implemented for `Rc<MyWidget>`

See, the Rc type has internal state for tracking how many references have been handed out (by cloning the smart pointer). It disallows sending any of its copies to another thread, as that could lead to that counter being updated from multiple threads at once. C++ allowed us to do something obviously wrong, while Rust is out here protecting us from misusing implementation details. We can replace that Rc with Arc, the reference-counting smart pointer whose counter updates are atomic, making it not only Send but also Sync:

    let widget = Arc::new(MyWidget { amount: 0 });

And now:

error[E0596]: cannot borrow data in an `Arc` as mutable
  --> locks/locks-4.rs:20:13
   |
20 |             widget.tinker_with(10);
   |             ^^^^^^ cannot borrow as mutable
   |
   = help: trait `DerefMut` is required to modify through a dereference, but it is not implemented for `Arc<MyWidget>`

That's right. Smart pointers that allow for sharing references still uphold Rust's guarantees. We managed to hand out references to our data to multiple threads, but that only allows for safe reading, not mutation. The only way we can mutate something is if we have exclusive access to it, and Rust insists on making sure we express that.
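Reading without a lock, on the other hand, is perfectly fine. Here is a small sketch of my own (reusing the MyWidget type from above) showing that sharing an Arc between threads for immutable access compiles and works, because no &mut access means no data race is possible:

```rust
use std::sync::Arc;
use std::thread;

struct MyWidget {
    amount: i64,
}

// Spawn a number of threads that only *read* through the Arc and
// return what they saw; sum the results to prove they all ran.
fn read_from_threads() -> i64 {
    let widget = Arc::new(MyWidget { amount: 42 });
    let mut threads = vec![];

    for _ in 0..10 {
        let widget = Arc::clone(&widget);
        // Immutable access through the Arc compiles just fine.
        threads.push(thread::spawn(move || widget.amount));
    }

    threads
        .into_iter()
        .map(|t| t.join().expect("thread panicked"))
        .sum()
}

fn main() {
    assert_eq!(read_from_threads(), 10 * 42);
}
```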

We now need to wrap our value in a type that moves that exclusive-access guarantee to runtime: a Mutex, which guarantees exclusive access, or an RwLock, which allows either multiple concurrent readers or one exclusive writer:

    let widget = Arc::new(Mutex::new(MyWidget { amount: 0 }));
    let mut threads = vec![];

    for _ in 0..10 {
        let widget = widget.clone();
        let handle = std::thread::spawn(move || loop {
            let mut w = widget.lock().expect("Poisoned lock");
            w.tinker_with(10);
        });
        threads.push(handle);

This will build, and work as expected. And it uses the CADR design pattern: the Mutex's lock() method returns a guard that keeps the lock held while it exists. If I need to use the locked value in other methods, or to extend the lock beyond the current function, I just move the guard to wherever it needs to go. But I can't move it to another thread; Rust will not allow it, as the guard is not Send. Neat, but what happens if I forget to actually lock?

    let widget = Arc::new(Mutex::new(MyWidget { amount: 0 }));
    let mut threads = vec![];

    for _ in 0..10 {
        let widget = widget.clone();
        let handle = std::thread::spawn(move || loop {
            (*widget).tinker_with(10);
        });
        threads.push(handle);
    }

I even tried to de-reference through the Arc there, but this is a CADR-wrapped value, my friend: I cannot just go accessing the innards of things without constructing my acquirer ;D Access to the inner type only happens through the guard type, which I can only get by locking the Mutex. That is similar to the behavior Matt got with his hidden Tinkerable constructor and friend declaration, but baked into the basics of how the language works.

error[E0599]: no method named `tinker_with` found for struct `Mutex` in the current scope
  --> locks/locks-6.rs:20:23
   |
20 |             (*widget).tinker_with(10);
   |                       ^^^^^^^^^^^ method not found in `Mutex<MyWidget>`
   |
note: the method `tinker_with` exists on the type `MyWidget`
  --> locks/locks-6.rs:8:5
   |
8  |     fn tinker_with(&mut self, amount: i64) {
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
help: use `.lock().unwrap()` to borrow the `MyWidget`, blocking the current thread until it can be acquired
   |
20 |             (*widget).lock().unwrap().tinker_with(10);
   |                      ++++++++++++++++
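As mentioned above, the guard returned by lock() can be moved to wherever the lock still needs to be held. Here is a minimal sketch of that, with a hypothetical tinker_twice() helper of my own devising that takes ownership of the guard:

```rust
use std::sync::{Arc, Mutex, MutexGuard};
use std::thread;

struct MyWidget {
    amount: i64,
}

impl MyWidget {
    fn tinker_with(&mut self, amount: i64) {
        self.amount += amount;
    }
}

// This helper takes ownership of the guard, so the lock stays held for
// its whole body and is released when `w` is dropped at the closing brace.
fn tinker_twice(mut w: MutexGuard<'_, MyWidget>) {
    w.tinker_with(10);
    w.tinker_with(10);
} // destructor runs here: lock released

fn run() -> i64 {
    let widget = Arc::new(Mutex::new(MyWidget { amount: 0 }));
    let mut threads = vec![];

    for _ in 0..4 {
        let widget = Arc::clone(&widget);
        threads.push(thread::spawn(move || {
            let guard = widget.lock().expect("poisoned lock");
            tinker_twice(guard); // the guard moves; this thread no longer holds the lock
        }));
    }
    for t in threads {
        t.join().expect("thread panicked");
    }

    let amount = widget.lock().expect("poisoned lock").amount;
    amount
}

fn main() {
    assert_eq!(run(), 4 * 2 * 10);
}
```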

Closing thoughts

This is what people who start working with Rust mean when they talk about fearless concurrency: if you have managed to convince the compiler that your code is correct, it would be really hard, essentially impossible, for it to contain a data race. Through a combination of marker traits derived by the compiler, which disallow raw values from being shared unprotected between threads, and CADR-style smart pointers, which mediate access to the actual values, Rust is able to absolutely require that you do the right thing when concurrently accessing state. No mistakes allowed.

After working with such a strict (and helpful) system for a while, looking at C code that tries to protect critical regions with procedural locks generates some anxiety. "I am sure there are data races here, the compiler is just not telling me where."

In fact, these guardrails have been credited by the folks at Mozilla as key to getting CSS styling to run in parallel after various attempts in C++ went nowhere, as keeping track of everything manually, without the help of the compiler, became unwieldy.

At first sight, having to wrap things in Arc and Mutex and handle the cloning and locking will seem like an unnecessary burden to someone who is starting with Rust. But anyone who has tried to hold the whole locking design for a complex codebase in their head, and has had to deal with memory corruption or other effects of data races or deadlocks, will quickly start seeing how the guardrails lower the mental load.

Having to reason through the code paths unassisted is already tough, but more than that: in languages like C++ you must reason about these questions globally. All it takes is another pointer being passed to a function, or held in a class; the whole codebase is a potential caller, locker, or tinkerer, and potential misuse is everywhere.

Rust's restrictions, and the way they are implemented, make the reasoning local. The value you are working with simply can't be shared or touched by other parts of the code without that being encoded in the types, and thus checked by the compiler. To wrap something in a Mutex you need to own it, so by the time you have an Arc<Mutex<MyWidget>> you know no other part of the code can reach that widget without going through the Mutex.

You don't need to worry about potentially having another reference you don't remember that might need locking, or having to remember the shared references at all when changing how things work — the compiler will find them and point them out. And that is how you get to fearless concurrency — by allowing yourself to make things concurrent while not having to worry whether you are forgetting some detail, the compiler has your back!
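As a parting sketch, the marker traits that make all of this possible can even be probed directly. The assert_send and assert_sync helpers below are my own, not standard library items, but the bounds they check are exactly what std::thread::spawn requires:

```rust
use std::rc::Rc;
use std::sync::Arc;

// Hypothetical helpers (not part of std): each compiles only if the
// compiler has auto-derived the marker trait for the given type.
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

struct MyWidget {
    amount: i64,
}

fn main() {
    // MyWidget only contains an i64, so it is auto-marked Send + Sync.
    assert_send::<MyWidget>();
    assert_sync::<MyWidget>();
    assert_send::<Arc<MyWidget>>();

    // Uncommenting the next line fails to compile: Rc's non-atomic
    // reference count makes it neither Send nor Sync.
    // assert_send::<Rc<MyWidget>>();

    // Rc remains perfectly usable within a single thread.
    let w = Rc::new(MyWidget { amount: 0 });
    assert_eq!(w.amount, 0);
}
```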
