
Does Java volatile prevent caching or enforce write-through caching?


The guarantees are only what you see in the language specification. In theory, writing a volatile variable might force a cache flush to main memory, or it might not, perhaps with a subsequent read forcing the cache flush or somehow causing transfer of the data between caches without a cache flush. This vagueness is deliberate, as it permits potential future optimizations that might not be possible if the mechanics of volatile variables were spelled out in more detail.

In practice, with current hardware, it probably means that, absent a coherent cache, writing a volatile variable forces a cache flush to main memory. With a coherent cache, of course, such a flush isn't needed.


Within Java, it's most accurate to say that all threads will see the most recent write to a volatile field, along with any writes that preceded that volatile write.

Within the Java abstraction, this is functionally equivalent to the volatile fields being read/written from shared memory (but this isn't strictly accurate at a lower level).
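A short runnable sketch of that visibility guarantee (the class and field names here are hypothetical, chosen for illustration): without `volatile` on the flag, the worker thread might spin forever on a stale cached value; with it, the JLS guarantees the worker eventually sees the main thread's write, as if both threads were reading and writing shared memory directly.

```java
public class StopFlag {
    // Must be volatile: the worker is then guaranteed to see main's write.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            while (running) {   // volatile read: always sees the latest write
                iterations++;
            }
            System.out.println("stopped after " + iterations + " iterations");
        });
        worker.start();
        Thread.sleep(10);
        running = false;        // volatile write: visible to the worker
        worker.join();
    }
}
```

If `running` were a plain field, the JIT could legally hoist the read out of the loop and the worker might never terminate.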


At a much lower level than is relevant to Java: on modern hardware, all reads and writes to any memory address go through registers and the L1 cache first. That said, Java is designed to hide this kind of low-level behavior from the programmer, so it is only conceptually relevant to the discussion.

When we use the volatile keyword on a field in Java, this simply tells the compiler to insert something known as a memory barrier on the reads/writes to this field. A memory barrier effectively ensures two things:

  1. Any threads reading this address will use the most up-to-date value (the barrier makes them wait until the most recent write makes it back to shared memory, and no reading threads can continue until this updated value makes it to their L1 cache).

  2. No reads/writes to ANY fields can cross over the barrier (i.e., they are always written back before the other thread can continue, and neither the compiler nor out-of-order execution can move them to a point after the barrier).

To give a simple Java example:

    // on one thread
    counter += 1;   // normal int field
    flag = true;    // flag is volatile

    // on another thread
    if (flag) foo(counter); // will see the incremented value

Essentially, when setting flag to true, we create a memory barrier. When Thread #2 tries to read this field, it runs into our barrier and waits for the new value to arrive. At the same time, the CPU ensures that counter += 1 is written back before that new value arrives. As a result, if flag == true then counter will have been incremented.
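For completeness, here is a self-contained, runnable version of that snippet (the names `counter`, `flag`, and `foo` are taken from the example above; the class name and thread setup are illustrative additions):

```java
public class BarrierDemo {
    static int counter = 0;                 // normal int field
    static volatile boolean flag = false;   // volatile field

    static void foo(int value) {
        System.out.println("counter = " + value);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            counter += 1;   // plain write, ordered before the volatile write
            flag = true;    // volatile write: acts as the memory barrier
        });
        Thread reader = new Thread(() -> {
            while (!flag) { /* spin until the volatile read sees true */ }
            foo(counter);   // guaranteed to see the incremented value
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();      // prints "counter = 1"
    }
}
```

Because the plain write to `counter` cannot cross the barrier created by the volatile write to `flag`, the reader can never print 0.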


So to sum up:

  1. All threads see the most up-to-date values of volatile fields (which can be loosely described as "reads/writes go through shared memory").

  2. Reads/writes to volatile fields establish happens-before relationships with previous reads/writes to any fields on one thread.