Mutex example / tutorial? [closed]


Here goes my humble attempt to explain the concept to newbies around the world: (a color coded version on my blog too)

A lot of people run to a lone phone booth (they don't have mobile phones) to talk to their loved ones. The first person to catch the door-handle of the booth, is the one who is allowed to use the phone. He has to keep holding on to the handle of the door as long as he uses the phone, otherwise someone else will catch hold of the handle, throw him out and talk to his wife :) There's no queue system as such. When the person finishes his call, comes out of the booth and leaves the door handle, the next person to get hold of the door handle will be allowed to use the phone.

A thread is : Each person
The mutex is : The door handle
The lock is : The person's hand
The resource is : The phone

Any thread which has to execute some lines of code which should not be modified by other threads at the same time (using the phone to talk to his wife), has to first acquire a lock on a mutex (clutching the door handle of the booth). Only then will a thread be able to run those lines of code (making the phone call).

Once the thread has executed that code, it should release the lock on the mutex so that another thread can acquire a lock on the mutex (other people being able to access the phone booth).

[The concept of having a mutex is a bit absurd when considering real-world exclusive access, but in the programming world I guess there was no other way to let the other threads 'see' that a thread was already executing some lines of code. There are concepts of recursive mutexes etc, but this example was only meant to show you the basic concept. Hope the example gives you a clear picture of the concept.]

With C++11 threading:

#include <iostream>
#include <thread>
#include <mutex>

std::mutex m; // you can use std::lock_guard if you want to be exception safe
int i = 0;

void makeACallFromPhoneBooth()
{
    m.lock();   // man gets a hold of the phone booth door and locks it. The other men wait outside
    // man happily talks to his wife from now....
    std::cout << i << " Hello Wife" << std::endl;
    i++;        // no other thread can access variable i until m.unlock() is called
    // ...until now, with no interruption from other men
    m.unlock(); // man lets go of the door handle and unlocks the door
}

int main()
{
    // This is the main crowd of people uninterested in making a phone call

    // man1 leaves the crowd to go to the phone booth
    std::thread man1(makeACallFromPhoneBooth);
    // Although man2 appears to start second, there's a good chance he might
    // reach the phone booth before man1
    std::thread man2(makeACallFromPhoneBooth);
    // And hey, man3 also joined the race to the booth
    std::thread man3(makeACallFromPhoneBooth);

    man1.join(); // man1 finished his phone call and joins the crowd
    man2.join(); // man2 finished his phone call and joins the crowd
    man3.join(); // man3 finished his phone call and joins the crowd
    return 0;
}

Compile and run using g++ -std=c++0x -pthread -o thread thread.cpp; ./thread

Instead of explicitly calling lock and unlock, you can use a scoped lock such as std::lock_guard inside a pair of braces: the mutex is released automatically when the scope ends, which also keeps the code exception safe. Scoped locks may add a slight overhead, though with std::lock_guard it is typically negligible.
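For instance, a minimal sketch of the same phone-booth function using std::lock_guard (reusing m and i from the example above) might look like this:

#include <iostream>
#include <mutex>

std::mutex m;
int i = 0;

void makeACallFromPhoneBooth()
{
    std::lock_guard<std::mutex> guard(m); // locks m here
    std::cout << i << " Hello Wife" << std::endl;
    i++;
}   // guard goes out of scope here and its destructor unlocks m,
    // even if an exception was thrown inside the braces

Because the unlock happens in the destructor, there is no way to forget it on an early return or an exception.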


While a mutex may be used to solve other problems, the primary reason they exist is to provide mutual exclusion and thereby solve what is known as a race condition. When two (or more) threads or processes are attempting to access the same variable concurrently, we have potential for a race condition. Consider the following code

// somewhere long ago, we have i declared as int
void my_concurrently_called_function()
{
  i++;
}

The internals of this function look so simple. It's only one statement. However, a typical pseudo-assembly language equivalent might be:

load i from memory into a register
add 1 to i
store i back into memory

Because the equivalent assembly-language instructions are all required to perform the increment operation on i, we say that incrementing i is a non-atomic operation. An atomic operation is one that can be completed on the hardware with a guarantee of not being interrupted once the instruction execution has begun. Incrementing i consists of a chain of 3 atomic instructions. In a concurrent system where several threads are calling the function, problems arise when a thread reads or writes at the wrong time. Imagine we have two threads running simultaneously and one calls the function immediately after the other. Let's also say that we have i initialized to 0. Also assume that we have plenty of registers and that the two threads are using completely different registers, so there will be no collisions. The actual timing of these events may be:

thread 1: load 0 into register from memory corresponding to i  // register is currently 0
thread 1: add 1 to the register                                // register is now 1, but memory is still 0
thread 2: load 0 into register from memory corresponding to i
thread 2: add 1 to the register                                // register is now 1, but memory is still 0
thread 1: write register to memory                             // memory is now 1
thread 2: write register to memory                             // memory is now 1

What's happened is that we have two threads incrementing i concurrently, our function gets called twice, but the outcome is inconsistent with that fact. It looks like the function was only called once. This is because the atomicity is "broken" at the machine level, meaning threads can interrupt each other or work together at the wrong times.
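As a rough illustration of this lost update, a small C++11 sketch (the loop count and thread setup here are just for demonstration) that increments the unprotected i from two threads will usually print a total smaller than the expected 200000:

#include <iostream>
#include <thread>

int i = 0;  // shared and unprotected

void my_concurrently_called_function()
{
    for (int n = 0; n < 100000; ++n)
        i++;  // non-atomic load / add / store
}

int main()
{
    std::thread t1(my_concurrently_called_function);
    std::thread t2(my_concurrently_called_function);
    t1.join();
    t2.join();
    std::cout << i << std::endl;  // expected 200000, typically less
    return 0;
}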

We need a mechanism to solve this. We need to impose some ordering to the instructions above. One common mechanism is to block all threads except one. Pthread mutex uses this mechanism.

Any thread which has to execute some lines of code that may unsafely modify values shared with other threads at the same time (using the phone to talk to his wife) must first acquire a lock on a mutex. In this way, any thread that requires access to the shared data must pass through the mutex lock. Only then will a thread be able to execute the code. This section of code is called a critical section.

Once the thread has executed the critical section, it should release the lock on the mutex so that another thread can acquire a lock on the mutex.
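Putting the two steps together, a minimal pthreads sketch of acquire / critical section / release (the surrounding thread-creation code is only illustrative) might be:

#include <pthread.h>
#include <stdio.h>

int i = 0;                                      // the shared resource
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;  // the mutex guarding it

void *my_concurrently_called_function(void *arg)
{
    (void)arg;                 // unused
    pthread_mutex_lock(&m);    // acquire the lock; other threads block here
    i++;                       // the critical section
    pthread_mutex_unlock(&m);  // release the lock so the next thread can enter
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, my_concurrently_called_function, NULL);
    pthread_create(&t2, NULL, my_concurrently_called_function, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%d\n", i);         // always 2 with the mutex in place
    return 0;
}

Compiled with gcc -pthread (or g++ -pthread), this always prints 2; remove the lock/unlock pair and that result is no longer guaranteed.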

The concept of having a mutex seems a bit odd when considering humans seeking exclusive access to real, physical objects but when programming, we must be intentional. Concurrent threads and processes don't have the social and cultural upbringing that we do, so we must force them to share data nicely.

So technically speaking, how does a mutex work? Doesn't it suffer from the same race conditions that we mentioned earlier? Isn't pthread_mutex_lock() a bit more complex than a simple increment of a variable?

Technically speaking, we need some hardware support to help us out. The hardware designers give us machine instructions that do more than one thing but are guaranteed to be atomic. A classic example of such an instruction is test-and-set (TAS). When trying to acquire a lock on a resource, we might use TAS to check whether a value in memory is 0. If it is not 0, that is our signal that the resource is in use and we do nothing (or more accurately, we wait by some mechanism: a pthreads mutex will put us into a special queue in the operating system and will notify us when the resource becomes available; dumber systems may require us to do a tight spin loop, testing the condition over and over). If the value in memory is 0, TAS sets the location to something other than 0 without using any other instructions. It's like combining two assembly instructions into one to give us atomicity: testing and changing the value (if changing is appropriate) cannot be interrupted once it has begun. We can build mutexes on top of such an instruction.
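As an illustration only, here is a toy spinlock built on an atomic test-and-set, sketched with C++11's std::atomic_flag (the names spin_lock / spin_unlock are just for this example; a real pthreads mutex would put waiting threads to sleep instead of spinning):

#include <atomic>

std::atomic_flag lock_flag = ATOMIC_FLAG_INIT;  // clear means "unlocked"

void spin_lock()
{
    // test_and_set atomically sets the flag and returns the value it held
    // before -- the test and the set cannot be separated by another thread.
    // While it returns true, someone else holds the lock, so keep spinning.
    while (lock_flag.test_and_set(std::memory_order_acquire))
        ;  // tight spin loop, as described above
}

void spin_unlock()
{
    lock_flag.clear(std::memory_order_release);  // mark the lock free again
}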

Note: some sections may appear similar to an earlier answer. I accepted his invite to edit, he preferred the original way it was, so I'm keeping what I had which is infused with a little bit of his verbiage.


The best threads tutorial I know of is here:

https://computing.llnl.gov/tutorials/pthreads/

I like that it's written about the API, rather than about a particular implementation, and it gives some nice simple examples to help you understand synchronization.