
Following pointers in a multithreaded environment


In the general case, even if multi-threading wasn't involved and your loop looked like:

void check_flag(foo_t* f) {
    while (f->flag)
        foo(&f->c, &f->m);
}

the compiler would be unable to cache the f->flag test. That's because the compiler can't know whether or not a function (like foo() above) might change whatever object f is pointing to.

Under special circumstances (foo() is visible to the compiler, and all pointers passed to the check_flag() are known not to be aliased or otherwise modifiable by foo()) the compiler might be able to optimize the check.

However, pthread_cond_wait() must be implemented in a way that would prevent that optimization.

See "Does guarding a variable with a pthread mutex guarantee it's also not cached?":

You might also be interested in Steve Jessop's answer to "Can a C/C++ compiler legally cache a variable in a register across a pthread library call?"

But how far you want to take the issues raised by Boehm's paper in your own work is up to you. As far as I can tell, if you want to take the stand that pthreads doesn't/can't make the guarantee, then you're in essence taking the stand that pthreads is useless (or at least provides no safety guarantees, which I think by reduction has the same outcome). While this might be true in the strictest sense (as addressed in the paper), it's also probably not a useful answer. I'm not sure what option you'd have other than pthreads on Unix-based platforms.


Normally, you should lock the pthread mutex before waiting on the condition variable, because pthread_cond_wait releases the mutex (and reacquires it before returning). So your check_flag function should be rewritten as follows to conform to the semantics of pthread condition variables.

void check_flag(foo_t* f) {
    pthread_mutex_lock(&f->m);
    while (f->flag)
        pthread_cond_wait(&f->c, &f->m);
    pthread_mutex_unlock(&f->m);
}

Concerning the question of whether or not the compiler is allowed to optimize the reading of the flag field, this answer explains it in more detail than I can.

Basically, the compiler knows about the semantics of pthread_cond_wait, pthread_mutex_lock and pthread_mutex_unlock. It knows that it can't optimize memory reads in those situations (the call to pthread_cond_wait in this example). There is no notion of a memory barrier here, just special knowledge of certain functions, and some rules to follow in their presence.

There is another thing protecting you from optimizations performed by the processor. Your average processor is capable of reordering memory accesses (reads/writes) provided the semantics are preserved, and it does so all the time (as this increases performance). However, this breaks down when more than one processor can access the same memory address. A memory barrier is an instruction telling the processor that it may not move reads/writes issued before the barrier past it: they must be finished before execution continues beyond the barrier.


As written, the compiler is free to cache the result as you describe or even in a more subtle way - by putting it into a register. You can prevent this optimization from taking place by making the variable volatile. But that is not necessarily enough - you should not code it this way! You should use condition variables as prescribed (lock, wait, unlock).

Trying to work around the library is bad, but it gets worse. Perhaps reading Hans Boehm's paper on the general topic from PLDI 2005 ("Threads Cannot Be Implemented as a Library"), or many of his follow-on articles (which led up to the work on a revised C++ memory model), will put the fear of God in you and steer you back to the straight and narrow :).