
Why do we need Thread.MemoryBarrier()?


You are going to have a very hard time reproducing this bug. In fact, I would go so far as to say you will never be able to reproduce it using the .NET Framework. The reason is that Microsoft's implementation uses a strong memory model for writes. That means writes are treated as if they were volatile. A volatile write has lock-release semantics, which means that all prior writes must be committed before the current write.
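To make those release semantics concrete, here is a minimal sketch (the Publisher/Consumer methods and the _data/_ready fields are made up for illustration) of how a volatile write publishes the ordinary writes that precede it:

    using System;
    using System.Threading;

    class ReleasePublishDemo
    {
        static int _data;       // ordinary field
        static bool _ready;     // written with volatile (release) semantics below

        static void Publisher()
        {
            _data = 42;                        // ordinary write
            Volatile.Write(ref _ready, true);  // release: the write to _data cannot be
                                               // reordered to after this write
        }

        static void Consumer()
        {
            if (Volatile.Read(ref _ready))     // acquire: pairs with the release write
                Console.WriteLine(_data);      // prints 42 if _ready was observed as true
        }
    }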

However, the ECMA specification has a weaker memory model. So it is theoretically possible that Mono or even a future version of the .NET Framework might start exhibiting the buggy behavior.

So what I am saying is that it is very unlikely that removing barriers #1 and #2 will have any impact on the behavior of the program. That, of course, is not a guarantee, but an observation based only on the current implementation of the CLR.

Removing barriers #3 and #4 will definitely have an impact. This is actually pretty easy to reproduce. Well, not with this example per se, but the following code is one of the better-known demonstrations. It has to be compiled as a Release build and run outside the debugger. The bug is that the program never ends. You can fix the bug by placing a call to Thread.MemoryBarrier inside the while loop or by marking stop as volatile.

    using System;
    using System.Threading;

    class Program
    {
        static bool stop = false;

        public static void Main(string[] args)
        {
            var t = new Thread(() =>
            {
                Console.WriteLine("thread begin");
                bool toggle = false;
                while (!stop)
                {
                    toggle = !toggle;
                }
                Console.WriteLine("thread end");
            });
            t.Start();
            Thread.Sleep(1000);
            stop = true;
            Console.WriteLine("stop = true");
            Console.WriteLine("waiting...");
            t.Join();
        }
    }
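For reference, here is roughly what the two fixes mentioned above look like when applied to that program; both are sketches against the example, and either one is sufficient on its own:

    // Fix 1: mark the field volatile so every read of it is a volatile (acquire) read.
    static volatile bool stop = false;

    // Fix 2: keep the field as-is and place an explicit barrier inside the loop,
    // which prevents the JIT from caching the value of stop in a register.
    while (!stop)
    {
        Thread.MemoryBarrier();
        toggle = !toggle;
    }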

The reason some threading bugs are hard to reproduce is that the same tactics you use to simulate thread interleaving can actually fix the bug. Thread.Sleep is the most notable example because it generates memory barriers. You can verify that by placing a call to it inside the while loop and observing that the bug goes away.
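As a quick check against the program above, that verification is a one-line change to the loop; per the claim above, the Sleep call generates a barrier, so the loop sees the updated value of stop and the program terminates:

    while (!stop)
    {
        Thread.Sleep(0);   // per the answer above, this generates a memory barrier,
                           // so the cached value of stop is refreshed and the bug goes away
        toggle = !toggle;
    }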

You can see my answer here for another analysis of the example from the book you cited.


Odds are very good that the first task is completed by the time the second task even starts running. You can only observe this behavior if both threads run that code simultaneously and there are no intervening cache-synchronizing operations. There is one in your code: the StartNew() method will take a lock somewhere inside the thread pool manager.

Getting two threads to run this code simultaneously is very hard. This code completes in a couple of nanoseconds, so you would have to try billions of times and introduce variable delays to have any odds of catching it. There is not much point to that, of course; the real problem is that this happens randomly, when you don't expect it.

Stay away from this; use the lock statement to write sane multi-threaded code.
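For completeness, a minimal sketch of the lock-based approach (the Counter class and its fields are made up for illustration): the lock statement wraps Monitor.Enter and Monitor.Exit, which have acquire and release semantics respectively, so no explicit Thread.MemoryBarrier calls are needed:

    using System;

    class Counter
    {
        private readonly object _sync = new object();
        private int _value;

        public void Increment()
        {
            lock (_sync)      // Monitor.Enter: acquire semantics
            {
                _value++;
            }                 // Monitor.Exit: release semantics
        }

        public int Read()
        {
            lock (_sync)
            {
                return _value;
            }
        }
    }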


If you use volatile and lock, the memory barrier is built in. But, yes, you do need it otherwise. Having said that, I suspect that you need only half as many as your example shows.
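As an illustration of the "built in" part, a small sketch (the Flag class and _done field are hypothetical): marking the field volatile gives its writes release semantics and its reads acquire semantics, so no explicit barriers are needed around them:

    class Flag
    {
        private volatile bool _done;   // volatile: the barrier behavior is built in

        public void Set()
        {
            _done = true;              // volatile write: release semantics,
                                       // no explicit Thread.MemoryBarrier required
        }

        public bool IsSet()
        {
            return _done;              // volatile read: acquire semantics
        }
    }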