ManualResetEvent vs. Thread.Sleep

Tags: multithreading


Events are kernel primitives provided by the OS that are designed for exactly this sort of thing. The kernel provides a boundary across which operations can be guaranteed atomic, which is important for synchronization (some atomicity can also be achieved in user space with hardware support).

In short, when a thread waits on an event it is placed on a waiting list for that event and marked as non-runnable. When the event is signalled, the kernel wakes the threads on that waiting list, marks them as runnable, and they can continue to run. It's naturally a huge benefit that a thread can wake up immediately when the event is signalled, versus sleeping for a long time and rechecking the condition every now and then.

Even one millisecond is a really, really long time; you could have processed thousands of events in that time. Also, the sleep timer resolution is traditionally about 10ms, so sleeping for less than 10ms usually just results in a 10ms sleep anyway. With an event, a thread can be woken up and scheduled immediately.
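To make the difference concrete, here is a minimal sketch (the names dataReady and sharedValue are made up for illustration): the worker blocks on a ManualResetEventSlim and is woken as soon as another thread signals it, with no polling interval involved.

    using System;
    using System.Threading;

    class Program {
      static readonly ManualResetEventSlim dataReady = new ManualResetEventSlim(false);
      static int sharedValue;

      static void Main() {
        var worker = new Thread(() => {
          // Instead of: while (sharedValue == 0) Thread.Sleep(1);
          // which burns a full timer tick per check, just block:
          dataReady.Wait();               // sleeps until signalled, wakes immediately on Set()
          Console.WriteLine("got " + sharedValue);
        });
        worker.Start();

        sharedValue = 42;
        dataReady.Set();                  // wake the waiting thread
        worker.Join();
      }
    }

The write to sharedValue happens before Set(), and Wait() returning happens after it, so the worker is guaranteed to observe the value without any extra locking.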


First, locking on _workerWait is pointless: an Event is a system (kernel) object designed for signaling between threads (and heavily used in the Win32 API for asynchronous operations), so it is quite safe for multiple threads to set or reset it without additional synchronization.

As to your main question, we would need to see the logic that places items on the queue, as well as some information on how much work is done for each job (is the worker thread spending more time processing work, or more time waiting for work?).

Likely the best solution would be to lock on an object instance and use Monitor.Pulse and Monitor.Wait as a condition variable.

Edit: Having seen the enqueue code, it appears that answer #1116297 has it right: a 1ms delay is too long to wait, given that many of the work items will be extremely quick to process.

The approach of having a mechanism to wake up the worker thread is correct (as there is no .NET concurrent queue with a blocking dequeue operation). However, rather than using an event, a condition variable is going to be a little more efficient (since in the non-contended case it does not require a kernel transition):

    object sync = new Object();
    var queue = new Queue<TriggerData>();

    public void EnqueueTriggers(IEnumerable<TriggerData> triggers) {
      lock (sync) {
        foreach (var t in triggers) {
          queue.Enqueue(t);
        }
        Monitor.Pulse(sync);  // Use PulseAll if there are multiple worker threads
      }
    }

    void WorkerThread() {
      while (!exit) {
        TriggerData job = DequeueTrigger();
        // Do work
      }
    }

    private TriggerData DequeueTrigger() {
      lock (sync) {
        while (queue.Count == 0) {
          Monitor.Wait(sync);
        }
        return queue.Dequeue();
      }
    }

Monitor.Wait releases the lock on its parameter, waits until Pulse() or PulseAll() is called on that object, then re-acquires the lock and returns. The wait condition must be rechecked in a loop because another thread could have taken the item off the queue in the meantime.
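As a minimal sketch of why the loop matters (the Take method and the queue contents here are made up for illustration): with two consumers woken by the same PulseAll, only one of them will find the first item, so each must retest the condition after Wait returns rather than assume an item is present.

    using System;
    using System.Collections.Generic;
    using System.Threading;

    class Demo {
      static readonly object sync = new object();
      static readonly Queue<string> queue = new Queue<string>();

      static string Take() {
        lock (sync) {
          // Essential: another consumer woken by the same PulseAll may have
          // emptied the queue before this thread re-acquires the lock, so
          // Wait returning does not guarantee an item is available.
          while (queue.Count == 0) {
            Monitor.Wait(sync);
          }
          return queue.Dequeue();
        }
      }

      static void Main() {
        var t1 = new Thread(() => Console.WriteLine(Take()));
        var t2 = new Thread(() => Console.WriteLine(Take()));
        t1.Start(); t2.Start();

        lock (sync) {
          queue.Enqueue("a");
          Monitor.PulseAll(sync);  // wakes both consumers; only one finds "a"
        }
        lock (sync) {
          queue.Enqueue("b");
          Monitor.PulseAll(sync);  // the other consumer picks up "b"
        }
        t1.Join(); t2.Join();
      }
    }

If Take used an if instead of a while, the losing consumer would call Dequeue on an empty queue and throw.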