Can I force cache coherency on a multicore x86 CPU? multithreading

volatile only forces your code to re-read the value; it cannot control where the value is read from. If the value was recently read by your code then it will probably still be in cache, in which case volatile will force it to be re-read from cache, NOT from memory.

There are very few explicit cache-control instructions in x86. There are prefetch instructions like prefetchnta, but they don't affect the memory-ordering semantics. prefetchnta used to be implemented by bringing the value into L1 cache without polluting L2, but things are more complicated on modern Intel designs with a large shared inclusive L3 cache.

x86 CPUs use a variation on the MESI protocol (MESIF for Intel, MOESI for AMD) to keep their caches coherent with each other (including the private L1 caches of different cores). A core that wants to write a cache line has to force other cores to invalidate their copy of it before it can change its own copy from Shared to Modified state.


You don't need any fence instructions (like MFENCE) to produce data in one thread and consume it in another on x86, because x86 loads/stores have acquire/release semantics built-in. You do need MFENCE (full barrier) to get sequential consistency. (A previous version of this answer suggested that clflush was needed, which is incorrect).

You do need to prevent compile-time reordering, because C++'s memory model is weakly-ordered. volatile is an old, bad way to do this; C++11 std::atomic is a much better way to write lock-free code.


Cache coherence is guaranteed between cores by the MESI protocol employed by x86 processors. You only need to worry about memory coherence when dealing with external hardware that may access memory while data is still sitting in the cores' caches. Doesn't look like that's your case here, though, since the text suggests you're programming in userland.


You don't need to worry about cache coherency. The hardware will take care of that. What you may need to worry about is performance issues due to that cache coherency.

If core#1 writes to a variable, that invalidates all other copies of the cache line in other cores (because it has to get exclusive ownership of the cache line before committing the store). When core#2 reads that same variable, it will miss in cache (unless core#1 has already written it back as far as a shared level of cache).

Since an entire cache line (64 bytes) has to be read from memory (or written back to shared cache and then read by core#2), it will have some performance cost. In this case, it's unavoidable. This is the desired behavior.


The problem is that when you have multiple variables in the same cache line, the processor might spend extra time keeping the caches in sync even if the cores are reading/writing different variables within the same cache line.

That cost can be avoided by making sure those variables are not in the same cache line. This effect is known as false sharing, since you are forcing the processors to synchronize the values of objects that are not actually shared between threads.