Is this a correct use of Thread.MemoryBarrier()?

multithreading


No. Suppose one thread sets the flag before the loop even begins to execute. The loop could still execute once, using a cached value of the flag. Is that correct? It certainly seems incorrect to me. I would expect that if I set the flag before the first execution of the loop, that the loop executes zero times, not once.

As far as I understand Thread.MemoryBarrier(), having this call inside the while loop will prevent my work thread from getting a cached version of the shouldRun, and effectively preventing an infinite loop from happening. Is my understanding about Thread.MemoryBarrier correct?
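The original snippet isn't shown in the question, so here is a rough reconstruction of the pattern being asked about. Since C# can't be run here, it's sketched in Java, with `VarHandle.fullFence()` standing in for `Thread.MemoryBarrier()`; the names (`shouldRun`, `workLoop`) are assumptions:

```java
import java.lang.invoke.VarHandle;
import java.util.concurrent.atomic.AtomicLong;

public class WorkerLoop {
    // Hypothetical reconstruction -- deliberately NOT volatile, since the
    // question is whether the explicit fence alone is enough.
    static boolean shouldRun = true;
    static final AtomicLong iterations = new AtomicLong();

    static void workLoop() {
        while (shouldRun) {
            VarHandle.fullFence();         // analogue of Thread.MemoryBarrier()
            iterations.incrementAndGet();  // stand-in for the loop's real work
        }
    }

    public static void main(String[] args) {
        shouldRun = false;  // flag cleared before the loop ever starts...
        workLoop();
        // ...so, per the point above, we expect zero iterations, not one.
        System.out.println(iterations.get());
    }
}
```

Note the structural issue the answer raises: the fence sits *inside* the loop body, so nothing orders the very first read of `shouldRun` against a write made before the loop began.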

The memory barrier will ensure that the processor does not do any reorderings of reads and writes such that a memory access that is logically before the barrier is actually observed to be after it, and vice versa.

If you are hell-bent on doing low-lock code, I would be inclined to make the field volatile rather than introducing an explicit memory barrier. "volatile" is a feature of the C# language. A dangerous and poorly understood feature, but a feature of the language. It clearly communicates to the reader of the code that the field in question is going to be used without locks on multiple threads.
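To illustrate the volatile-field approach, here is a minimal Java sketch (Java's `volatile` gives a comparable visibility guarantee to C#'s for this flag-checking pattern; the names are assumptions):

```java
public class VolatileFlag {
    // Marking the field volatile is the declarative alternative to calling
    // a memory barrier by hand: every read of shouldRun goes to the shared
    // value and cannot be hoisted out of the loop.
    static volatile boolean shouldRun = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (shouldRun) {
                Thread.onSpinWait();  // stand-in for the loop body's work
            }
        });
        worker.start();
        Thread.sleep(50);
        shouldRun = false;  // guaranteed to become visible to the worker
        worker.join();      // terminates: the volatile read sees the update
        System.out.println("worker stopped");
    }
}
```

The point about communication stands: a `volatile` modifier on the declaration tells every reader of the code, up front, that the field is shared across threads without locks.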

is this a reasonable way to ensure that my loop will stop once shouldRun is set to false by any thread?

Some people would consider it reasonable. I would not do this in my own code without a very, very good reason.

Typically low-lock techniques are justified by performance considerations. There are two such considerations:

First, a contended lock is potentially extremely slow; it blocks as long as there is code executing in the lock. If you have a performance problem because there is too much contention then I would first try to solve the problem by eliminating the contention. Only if I could not eliminate the contention would I go to a low-lock technique.

Second, it might be that an uncontended lock is too slow. If the "work" you are doing in the loop takes, say, less than 200 nanoseconds then the time required to check the uncontended lock -- about 20 ns -- is a significant fraction of the time spent doing work. In that case I would suggest that you do more work per loop. Is it really necessary that the loop stops within 200 ns of the control flag being set?
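The "do more work per loop" suggestion can be sketched like this (Java, with an `AtomicBoolean` standing in for the flag read under the lock; the names and batch size are assumptions):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class BatchedCheck {
    // Amortize the ~20 ns flag check over a batch of work items instead of
    // paying it once per ~200 ns item.
    static final int BATCH = 1000;

    static long runUntilStopped(AtomicBoolean shouldRun, long maxItems) {
        long done = 0;
        while (shouldRun.get()) {                      // one check per batch
            for (int i = 0; i < BATCH && done < maxItems; i++) {
                done++;                                // stand-in for real work
            }
            if (done >= maxItems) break;
        }
        return done;
    }

    public static void main(String[] args) {
        AtomicBoolean flag = new AtomicBoolean(true);
        System.out.println(runUntilStopped(flag, 5000)); // 5000: all items done
        flag.set(false);
        System.out.println(runUntilStopped(flag, 5000)); // 0: flag already clear
    }
}
```

The trade-off is exactly the one posed above: the loop now stops within one batch of the flag being set, rather than within one item, in exchange for paying the check cost a thousand times less often.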

Only in the most extreme of performance scenarios would I imagine that the cost of checking an uncontended lock is a significant fraction of the time spent in the program.

And also, of course, if you are inducing a memory barrier every 200 ns or so, you are also possibly wrecking performance in other ways. The processor wants to make those moving-memory-accesses-around-in-time optimizations for you; if you are forcing it to constantly abandon those optimizations, you're missing out on a potential win.


I believe your understanding is a bit off on this line

As far as I understand Thread.MemoryBarrier(), having this call inside the while loop will prevent my work thread from getting a cached version of the shouldRun, and effectively preventing an infinite loop from happening.

A memory barrier is a way of enforcing an ordering constraint on read / write instructions. While the results of read / write reordering can have the appearance of caching, a memory barrier doesn't actually affect caching in any way. It simply acts as a fence that read and write instructions can't cross.
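The fence-not-lock distinction can be illustrated with a classic publication pattern. This is a Java sketch using `VarHandle.fullFence()` (the analogue of `Thread.MemoryBarrier()`); the names are assumptions, and it demonstrates behavior you can rely on in practice with full fences rather than a formal guarantee of the Java memory model:

```java
import java.lang.invoke.VarHandle;

public class FencePublication {
    static int payload = 0;
    static boolean ready = false;  // deliberately plain, not volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {
                VarHandle.fullFence();  // spin: no blocking, just re-reading
            }
            VarHandle.fullFence();      // reads below can't move above this
            System.out.println(payload);
        });
        reader.start();
        payload = 42;           // write the data first...
        VarHandle.fullFence();  // ...fence, so the write can't sink below...
        ready = true;           // ...the publishing store
        reader.join();
    }
}
```

Neither thread ever blocks on the other; the fences only constrain the order in which each thread's reads and writes become observable, which is exactly the "fence over which instructions can't cross" described above.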

In all probability this won't prevent an infinite loop. What the memory fence is doing in this scenario is forcing all of the reads and writes which happen in the body of the loop to occur before the value of shouldRun is read by the loop conditional.

Wikipedia has a nice walkthrough on memory barriers that you may find useful.


Depending on what you are trying to do, this will probably not do what you expect. From MSDN, MemoryBarrier:

Synchronizes memory access as follows: The processor executing the current thread cannot reorder instructions in such a way that memory accesses prior to the call to MemoryBarrier execute after memory accesses that follow the call to MemoryBarrier.

This will prevent reordering of instructions from after the memory barrier call to before it, but it will not prevent thread scheduling from deciding that the loop should have another go-round before the writer thread actually performs the write; i.e., it's not a locking or synchronization mechanism. It just prevents other instructions in the loop (like variable initialization) from being reordered before the check of shouldRun's value.

Likewise, leaving this out will not cause an infinite loop; in either case, shouldRun will be checked with every iteration. There is no "cached value" here.