
Wait on multiple condition variables on Linux without unnecessary sleeps?


Your #3 option (writing dummy bytes to files or pipes instead, and polling on those) has a better alternative on Linux: eventfd.

Instead of a limited-size buffer (as in a pipe) or an infinitely-growing buffer (as in a file), with eventfd you have an in-kernel unsigned 64-bit counter. An 8-byte write adds a number to the counter; an 8-byte read either zeroes the counter and returns its previous value (without EFD_SEMAPHORE), or decrements the counter by 1 and returns 1 (with EFD_SEMAPHORE). The file descriptor is considered readable to the polling functions (select, poll, epoll) when the counter is nonzero.
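
To illustrate the counter semantics, here is a minimal sketch (error handling omitted, assumes Linux with <sys/eventfd.h>); without EFD_SEMAPHORE a read drains the whole counter in one go:

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
    int efd = eventfd(0, 0);          /* counter starts at 0 */

    uint64_t val = 3;
    write(efd, &val, sizeof val);     /* counter becomes 3 */
    val = 2;
    write(efd, &val, sizeof val);     /* counter becomes 5 */

    uint64_t got;
    read(efd, &got, sizeof got);      /* no EFD_SEMAPHORE: returns 5, counter reset to 0 */
    printf("read %llu\n", (unsigned long long)got);

    close(efd);
    return 0;
}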

Even if the counter is near its limit (the maximum value is 2^64 - 2), a write that would overflow it will just fail with EAGAIN if you made the file descriptor non-blocking (otherwise it blocks). The same happens with read when the counter is zero.
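
And here is a sketch of the multiplexing case the question is about: one non-blocking eventfd per event source, and a consumer that sleeps in poll() until at least one of them has been signaled (names are illustrative, error handling omitted):

#include <poll.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
    /* One eventfd per event source; EFD_NONBLOCK makes reads and writes
       fail with EAGAIN instead of blocking. */
    int ev_a = eventfd(0, EFD_NONBLOCK);
    int ev_b = eventfd(0, EFD_NONBLOCK);

    /* A producer signals an event source like this: */
    uint64_t one = 1;
    write(ev_b, &one, sizeof one);

    struct pollfd fds[2] = {
        { .fd = ev_a, .events = POLLIN },
        { .fd = ev_b, .events = POLLIN },
    };

    /* Sleeps until at least one counter is nonzero; no polling loop, no dummy sleeps. */
    if (poll(fds, 2, -1) > 0)
    {
        for (int i = 0; i < 2; i++)
        {
            if (fds[i].revents & POLLIN)
            {
                uint64_t count;
                read(fds[i].fd, &count, sizeof count);  /* consume and zero the counter */
                printf("fd %d signaled %llu time(s)\n",
                       fds[i].fd, (unsigned long long)count);
            }
        }
    }

    close(ev_a);
    close(ev_b);
    return 0;
}

With many event sources you can use epoll instead of poll in exactly the same way.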


If you are talking about POSIX threads, I'd recommend using a single condition variable plus a set of event flags, or something along those lines. The idea is to use the condition variable's companion mutex to guard the event notifications; you have to re-check for events after pthread_cond_wait() returns anyway. Here is some old code of mine from a training session to illustrate this (yes, I checked that it runs, but please note it was prepared some time ago, and in a hurry, for newcomers).

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_cond_t var;
static pthread_mutex_t mtx;

unsigned event_flags = 0;

#define FLAG_EVENT_1    1
#define FLAG_EVENT_2    2

void signal_1()
{
    pthread_mutex_lock(&mtx);
    event_flags |= FLAG_EVENT_1;
    pthread_cond_signal(&var);
    pthread_mutex_unlock(&mtx);
}

void signal_2()
{
    pthread_mutex_lock(&mtx);
    event_flags |= FLAG_EVENT_2;
    pthread_cond_signal(&var);
    pthread_mutex_unlock(&mtx);
}

void *handler(void *arg)
{
    (void)arg;

    // The mutex is unlocked only while we wait or process received events.
    pthread_mutex_lock(&mtx);

    // Here should be race-condition prevention in real code.
    while (1)
    {
        if (event_flags)
        {
            unsigned copy = event_flags;
            event_flags = 0;    // consume the flags we are about to handle

            // We unlock the mutex while we are processing received events.
            pthread_mutex_unlock(&mtx);

            if (copy & FLAG_EVENT_1)
            {
                printf("EVENT 1\n");
                copy ^= FLAG_EVENT_1;
            }
            if (copy & FLAG_EVENT_2)
            {
                printf("EVENT 2\n");
                copy ^= FLAG_EVENT_2;

                // And let EVENT 2 be the 'quit' signal.
                // In this case, for consistency, we break with the mutex locked.
                pthread_mutex_lock(&mtx);
                break;
            }

            // Note we should have the mutex locked at the iteration end.
            pthread_mutex_lock(&mtx);
        }
        else
        {
            // The mutex is locked. It is unlocked while we are waiting.
            pthread_cond_wait(&var, &mtx);
            // The mutex is locked again on return.
        }
    }

    // ... as we are dying.
    pthread_mutex_unlock(&mtx);
    return NULL;
}

int main()
{
    pthread_mutex_init(&mtx, NULL);
    pthread_cond_init(&var, NULL);

    pthread_t id;
    pthread_create(&id, NULL, handler, NULL);

    sleep(1);
    signal_1();
    sleep(1);
    signal_1();
    sleep(1);
    signal_2();
    sleep(1);

    pthread_join(id, NULL);
    return 0;
}


If you want maximum flexibility under the POSIX condition variable model of synchronization, you must avoid writing modules which communicate events to their users only by means of exposing a condition variable. (You have then essentially reinvented a semaphore.)

Active modules should be designed such that their interfaces provide callback notifications of events, via registered functions, and, if necessary, such that multiple callbacks can be registered.

A client of multiple modules registers a callback with each of them. These can all be routed into a common place where they lock the same mutex, change some state, unlock, and hit the same condition variable.
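
For instance, a minimal sketch of that routing (the on_module_*_event names are mine, and whatever registration call each module exposes is assumed, not shown; error handling omitted):

#include <pthread.h>

/* Shared state that every module's callback funnels into. */
static pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static unsigned pending;            /* one bit per module */

/* Callback the client registers with module A. */
void on_module_a_event(void *ctx)
{
    (void)ctx;
    pthread_mutex_lock(&mtx);
    pending |= 1u << 0;             /* record which module fired */
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mtx);
}

/* Callback the client registers with module B. */
void on_module_b_event(void *ctx)
{
    (void)ctx;
    pthread_mutex_lock(&mtx);
    pending |= 1u << 1;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mtx);
}

/* The client thread waits on one condition variable for events from both modules. */
unsigned wait_for_any_event(void)
{
    pthread_mutex_lock(&mtx);
    while (pending == 0)
        pthread_cond_wait(&cond, &mtx);
    unsigned fired = pending;
    pending = 0;
    pthread_mutex_unlock(&mtx);
    return fired;                   /* caller inspects the bits to see who fired */
}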

This design also offers the possibility that, if the amount of work done in response to an event is reasonably small, perhaps it can just be done in the context of the callback.

Callbacks also have some advantages in debugging. You can put a breakpoint on an event which arrives in the form of a callback, and see the call stack of how it was generated. If you put a breakpoint on an event that arrives as a semaphore wakeup, or via some message passing mechanism, the call trace doesn't reveal the origin of the event.


That being said, you can make your own synchronization primitives with mutexes and condition variables which support waiting on multiple objects. These synchronization primitives can be internally based on callbacks, in a way that is invisible to the rest of the application.

The gist of it is that for each object that a thread wants to wait on, the wait operation queues a callback interface with that object. When an object is signaled, it invokes all of its registered callbacks. The woken threads dequeue all the callback interfaces, and peek at some status flags in each one to see which objects signaled.
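
A condensed sketch of that idea follows (the names and structures are mine, not a standard API; there is no overflow checking or error handling). Each waitable object keeps a list of registered waiter records; signaling "invokes the callbacks" by marking each waiter with the signaling object's bit and waking its private condition variable, and wait_any registers one waiter with several objects, sleeps until some bit is set, then deregisters:

#include <pthread.h>

#define MAX_WAITERS 8

/* Per-wait callback record: the "callback" is just setting a flag and
   signalling the waiter's private condition variable. */
struct waiter {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    unsigned        fired;              /* bitmask of which objects signaled */
};

/* A waitable object that threads can wait on alongside others. */
struct waitable {
    pthread_mutex_t lock;
    struct waiter  *waiters[MAX_WAITERS];
    unsigned        bits[MAX_WAITERS];  /* bit to set in each waiter when we fire */
    int             nwaiters;
};

void waitable_init(struct waitable *w)
{
    pthread_mutex_init(&w->lock, NULL);
    w->nwaiters = 0;
}

void waitable_signal(struct waitable *w)
{
    pthread_mutex_lock(&w->lock);
    for (int i = 0; i < w->nwaiters; i++) {
        struct waiter *wt = w->waiters[i];
        pthread_mutex_lock(&wt->lock);
        wt->fired |= w->bits[i];        /* record who fired */
        pthread_cond_signal(&wt->cond);
        pthread_mutex_unlock(&wt->lock);
    }
    pthread_mutex_unlock(&w->lock);
}

/* Block until at least one of the objects signals; returns a bitmask of which ones. */
unsigned waitable_wait_any(struct waitable **objs, int n)
{
    struct waiter wt;
    pthread_mutex_init(&wt.lock, NULL);
    pthread_cond_init(&wt.cond, NULL);
    wt.fired = 0;

    /* Register ourselves with every object, tagged with a distinct bit. */
    for (int i = 0; i < n; i++) {
        pthread_mutex_lock(&objs[i]->lock);
        objs[i]->waiters[objs[i]->nwaiters] = &wt;
        objs[i]->bits[objs[i]->nwaiters]    = 1u << i;
        objs[i]->nwaiters++;
        pthread_mutex_unlock(&objs[i]->lock);
    }

    /* Sleep on our private condition variable until some object marks us. */
    pthread_mutex_lock(&wt.lock);
    while (wt.fired == 0)
        pthread_cond_wait(&wt.cond, &wt.lock);
    unsigned fired = wt.fired;
    pthread_mutex_unlock(&wt.lock);

    /* Deregister from every object before the waiter goes out of scope. */
    for (int i = 0; i < n; i++) {
        pthread_mutex_lock(&objs[i]->lock);
        for (int j = 0; j < objs[i]->nwaiters; j++) {
            if (objs[i]->waiters[j] == &wt) {
                objs[i]->waiters[j] = objs[i]->waiters[objs[i]->nwaiters - 1];
                objs[i]->bits[j]    = objs[i]->bits[objs[i]->nwaiters - 1];
                objs[i]->nwaiters--;
                break;
            }
        }
        pthread_mutex_unlock(&objs[i]->lock);
    }

    pthread_cond_destroy(&wt.cond);
    pthread_mutex_destroy(&wt.lock);
    return fired;
}

Note that this sketch is edge-triggered: a signal that arrives before a waiter registers is not remembered, so a real implementation would also keep per-object pending state (or a count), which is also where a wait-for-all variant or timeouts would be layered in.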