A shared recursive mutex in standard C++


The recursive property of a mutex is defined in terms of an owner, which in the case of shared_mutex is not well-defined: several threads may have called .lock_shared() at the same time.

Taking the owner to be the thread which calls .lock() (not .lock_shared()!), an implementation of a recursive shared mutex can simply be derived from shared_mutex:

#include <atomic>
#include <shared_mutex>
#include <thread>

class shared_recursive_mutex: public std::shared_mutex
{
public:
    void lock(void) {
        std::thread::id this_id = std::this_thread::get_id();
        if(owner == this_id) {
            // recursive locking
            count++;
        }
        else {
            // normal locking
            std::shared_mutex::lock();
            owner = this_id;
            count = 1;
        }
    }
    void unlock(void) {
        if(count > 1) {
            // recursive unlocking
            count--;
        }
        else {
            // normal unlocking
            owner = std::thread::id();
            count = 0;
            std::shared_mutex::unlock();
        }
    }

private:
    std::atomic<std::thread::id> owner;
    int count;
};
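For illustration, a minimal usage sketch assuming the class above (the outer/inner function names are made up): the same thread may call .lock() again while it already holds the exclusive lock.

shared_recursive_mutex m;

void inner() {
    m.lock();      // owner already matches: count goes from 1 to 2
    // ... touch the protected data again ...
    m.unlock();    // count drops back to 1, the mutex stays held
}

void outer() {
    m.lock();      // normal locking path: acquires the underlying shared_mutex
    inner();       // re-entering through lock() does not self-deadlock
    m.unlock();    // count reaches 0, the underlying shared_mutex is released
}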

The field owner needs to be declared atomic, because in the .lock() method it is checked without protection from concurrent access.

If you want to be able to call the .lock_shared() method recursively, you need to maintain a map of owners, and accesses to that map should be protected with some additional mutex.

Allowing a thread with an active .lock() to also call .lock_shared() makes the implementation more complex still.

Finally, allowing a thread to upgrade from .lock_shared() to .lock() is a no-no, as it leads to a possible deadlock: if two threads each hold a shared lock and both attempt to upgrade, each waits for the other to release its shared lock, and neither ever does.


Again, the semantics of a recursive shared mutex would be very fragile, so it is better not to use one at all.


If you are on a Linux / POSIX platform, you are in luck, because C++ mutexes are modelled after the POSIX ones. The POSIX ones provide more features, including being recursive, process-shared, and more. And wrapping POSIX primitives into C++ classes is straightforward.

The POSIX threads documentation is a good entry point.
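As a sketch of what such a wrapper might look like (an illustration, not a complete library; the name posix_recursive_mutex is made up), here is a recursive pthread mutex exposed through the C++ Lockable interface, so std::lock_guard and std::unique_lock work with it:

#include <pthread.h>
#include <system_error>

// Sketch: a recursive POSIX mutex with lock/unlock/try_lock members.
// (For the process-shared variant you would additionally call
// pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED) and place
// the object in shared memory.)
class posix_recursive_mutex {
public:
    posix_recursive_mutex() {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        if (int rc = pthread_mutex_init(&m, &attr))
            throw std::system_error(rc, std::generic_category());
        pthread_mutexattr_destroy(&attr);
    }
    ~posix_recursive_mutex() { pthread_mutex_destroy(&m); }

    posix_recursive_mutex(posix_recursive_mutex const&) = delete;
    posix_recursive_mutex& operator=(posix_recursive_mutex const&) = delete;

    void lock()     { pthread_mutex_lock(&m); }
    void unlock()   { pthread_mutex_unlock(&m); }
    bool try_lock() { return pthread_mutex_trylock(&m) == 0; }

private:
    pthread_mutex_t m;
};

Note that this is a plain recursive (exclusive) mutex; it is not itself a shared recursive mutex.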


Here is a quick thread-safety wrapper around a type T:

#include <mutex>
#include <type_traits>
#include <utility>

// Accessor returned by mutex_guarded: holds the lock for the duration of the
// (temporary) expression and exposes the protected object.
template<class T, class Lock>
struct lock_guarded {
  Lock l;
  T* t;

  T* operator->()&&{ return t; }
  template<class Arg>
  auto operator[](Arg&&arg)&&
  -> decltype(std::declval<T&>()[std::declval<Arg>()])
  {
    return (*t)[std::forward<Arg>(arg)];
  }
  T& operator*()&&{ return *t; }
};

constexpr struct emplace_t {} emplace {};

template<class T>
struct mutex_guarded {
  lock_guarded<T, std::unique_lock<std::mutex>>
  get_locked() {
    return {{m},&t};
  }
  lock_guarded<T const, std::unique_lock<std::mutex>>
  get_locked() const {
    return {{m},&t};
  }
  lock_guarded<T, std::unique_lock<std::mutex>>
  operator->() {
    return get_locked();
  }
  lock_guarded<T const, std::unique_lock<std::mutex>>
  operator->() const {
    return get_locked();
  }
  // Run a callable on the protected object while the lock is held.
  template<class F>
  std::result_of_t<F(T&)>
  operator->*(F&& f) {
    return std::forward<F>(f)(*get_locked());
  }
  template<class F>
  std::result_of_t<F(T const&)>
  operator->*(F&& f) const {
    return std::forward<F>(f)(*get_locked());
  }
  template<class...Args>
  mutex_guarded(emplace_t, Args&&...args):
    t(std::forward<Args>(args)...)
  {}
  mutex_guarded(mutex_guarded&& o):
    t( std::move(*o.get_locked()) )
  {}
  mutex_guarded(mutex_guarded const& o):
    t( *o.get_locked() )
  {}
  mutex_guarded() = default;
  ~mutex_guarded() = default;
  // Assignments go through a temporary so the two mutexes are never held
  // at the same time.
  mutex_guarded& operator=(mutex_guarded&& o)
  {
    T tmp = std::move(*o.get_locked());
    *get_locked() = std::move(tmp);
    return *this;
  }
  mutex_guarded& operator=(mutex_guarded const& o)
  {
    T tmp = *o.get_locked();
    *get_locked() = std::move(tmp);
    return *this;
  }

private:
  mutable std::mutex m;
  T t;
};

You can use either:

mutex_guarded<std::vector<int>> guarded;
auto s0 = guarded->size();
auto s1 = guarded->*[](auto&&e){return e.size();};

Both do roughly the same thing, and the guarded object is only accessed while the mutex is locked.
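When several operations need to happen under the same lock, ->* is the one to use, since the mutex stays held across the whole callable. A small illustrative snippet, reusing the guarded object declared above:

guarded->*[](std::vector<int>& v) {
    // the mutex is held for the whole lambda body
    v.push_back(1);
    v.push_back(2);
};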

Stealing from @tsyvarev's answer (with some minor changes) we get:

#include <atomic>
#include <cstddef>
#include <map>
#include <shared_mutex>
#include <thread>

class shared_recursive_mutex
{
  std::shared_mutex m;
public:
  void lock(void) {
    std::thread::id this_id = std::this_thread::get_id();
    if(owner == this_id) {
      // recursive locking
      ++count;
    } else {
      // normal locking
      m.lock();
      owner = this_id;
      count = 1;
    }
  }
  void unlock(void) {
    if(count > 1) {
      // recursive unlocking
      count--;
    } else {
      // normal unlocking
      owner = std::thread::id();
      count = 0;
      m.unlock();
    }
  }
  void lock_shared() {
    std::thread::id this_id = std::this_thread::get_id();
    if (shared_counts->count(this_id)) {
      // recursive shared locking
      ++(shared_counts.get_locked()[this_id]);
    } else {
      // normal shared locking
      m.lock_shared();
      shared_counts.get_locked()[this_id] = 1;
    }
  }
  void unlock_shared() {
    std::thread::id this_id = std::this_thread::get_id();
    auto it = shared_counts->find(this_id);
    if (it->second > 1) {
      // recursive shared unlocking
      --(it->second);
    } else {
      // normal shared unlocking
      shared_counts->erase(it);
      m.unlock_shared();
    }
  }
private:
  std::atomic<std::thread::id> owner;
  std::atomic<std::size_t> count;
  mutex_guarded<std::map<std::thread::id, std::size_t>> shared_counts;
};
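A short usage sketch of the shared path, assuming the class above (the read_outer/read_inner names are made up): the same thread can call .lock_shared() again while it already holds a shared lock.

shared_recursive_mutex rsm;

void read_inner() {
    rsm.lock_shared();     // this thread is already in the map: its count goes to 2
    // ... read the protected data ...
    rsm.unlock_shared();   // count drops back to 1, the shared lock is still held
}

void read_outer() {
    rsm.lock_shared();     // first shared lock for this thread: m.lock_shared() is taken
    read_inner();
    rsm.unlock_shared();   // map entry erased, m.unlock_shared() is called
}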

try_lock and try_lock_shared are left as an exercise.
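For what it's worth, one possible sketch of try_lock along the same lines as lock above (my assumption, not part of the original answer):

bool try_lock() {
    std::thread::id this_id = std::this_thread::get_id();
    if (owner == this_id) {
        // already the exclusive owner: recursion always succeeds
        ++count;
        return true;
    }
    if (m.try_lock()) {
        owner = this_id;
        count = 1;
        return true;
    }
    return false;
}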

Both lock_shared and unlock_shared lock the map's mutex twice (this is safe, as the branches are really about "is this thread in control of the mutex", and another thread cannot change that answer from "no" to "yes" or vice versa). You could do it with one lock by using ->* instead of ->, which would make it faster (at the cost of some complexity in the logic), as sketched below.
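As an illustration (my sketch, not the original answer's code), the recursive branch of lock_shared can be handled under a single map lock via ->*, falling back to the two-step path only when this thread is not yet in the map:

void lock_shared() {
    std::thread::id this_id = std::this_thread::get_id();
    // One map lock for the recursive case: bump the count if we are already in the map.
    bool recursed = shared_counts->*[&](std::map<std::thread::id, std::size_t>& counts) {
        auto it = counts.find(this_id);
        if (it == counts.end()) return false;
        ++it->second;
        return true;
    };
    if (recursed) return;
    // First shared lock for this thread: take it without holding the map's mutex,
    // then record ourselves in the map.
    m.lock_shared();
    shared_counts->*[&](std::map<std::thread::id, std::size_t>& counts) {
        counts[this_id] = 1;
    };
}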


The above does not support taking an exclusive lock and then a shared lock. That is tricky. It also cannot support taking a shared lock and then upgrading to a unique lock, because it is basically impossible to stop that from deadlocking when two threads try it at the same time.

That last issue may be why recursive shared mutexes are a bad idea.