
Cache using ConcurrentHashMap


As of Java 8, you can also prevent duplicate values being inserted for the same key with computeIfAbsent:

public class Cache {
    private final Map<Object, Object> map = new ConcurrentHashMap<>();

    public Object get(Object key) {
        // The lambda parameter must not shadow the method parameter "key"
        return map.computeIfAbsent(key, k -> new SomeObject());
    }
}
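A quick way to see the "at most once per key" guarantee in action is to count constructor calls while many threads race on the same key. This is a self-contained sketch (SomeObject is a placeholder class standing in for whatever you cache):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ComputeIfAbsentDemo {
    static final AtomicInteger CREATED = new AtomicInteger();

    static class SomeObject {
        SomeObject() {
            CREATED.incrementAndGet(); // count every construction
        }
    }

    private final Map<Object, SomeObject> map = new ConcurrentHashMap<>();

    public SomeObject get(Object key) {
        return map.computeIfAbsent(key, k -> new SomeObject());
    }

    public static void main(String[] args) throws Exception {
        ComputeIfAbsentDemo cache = new ComputeIfAbsentDemo();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 100; i++) {
            pool.submit(() -> cache.get("shared"));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // The mapping function runs atomically, so exactly one instance is created
        System.out.println(CREATED.get()); // prints 1
    }
}
```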

The API docs state:

If the specified key is not already associated with a value, attempts to compute its value using the given mapping function and enters it into this map unless null. The entire method invocation is performed atomically, so the function is applied at most once per key. Some attempted update operations on this map by other threads may be blocked while computation is in progress, so the computation should be short and simple, and must not attempt to update any other mappings of this map.


put and get are thread safe in the sense that calling them from different threads cannot corrupt the data structure (as, e.g., is possible with a normal java.util.HashMap).

However, since the check-then-put sequence is not synchronized, you may still have multiple threads adding a value for the same key: both threads may pass the null check, one adds the key and returns its value, and then the second overrides that value with a new one and returns it.


Could multiple threads add the same key twice?

Yes, they could. To fix this problem you can:

1) Use the putIfAbsent method instead of put. It is very fast, but unnecessary SomeObject instances may be created.
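Option 1 can be sketched as follows (SomeObject is a placeholder class; the key point is that putIfAbsent returns the value that was already mapped, so a losing thread discards its freshly created instance and uses the winner's):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentCache {
    static class SomeObject {}

    private final Map<Object, SomeObject> map = new ConcurrentHashMap<>();

    public SomeObject get(Object key) {
        SomeObject value = map.get(key);
        if (value == null) {
            // Created unconditionally: if another thread wins the race,
            // this instance is wasted, but the map stays consistent.
            SomeObject created = new SomeObject();
            SomeObject existing = map.putIfAbsent(key, created);
            value = (existing != null) ? existing : created;
        }
        return value;
    }

    public static void main(String[] args) {
        PutIfAbsentCache cache = new PutIfAbsentCache();
        // Repeated lookups of the same key return the same instance
        System.out.println(cache.get("k") == cache.get("k")); // prints true
    }
}
```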

2) Use double checked locking:

Object value = map.get(key);
if (value == null) {
    synchronized (map) {
        value = map.get(key);
        if (value == null) {
            value = new SomeObject();
            map.put(key, value);
        }
    }
}
return value;

The lock is much slower, but only the necessary objects will be created.
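Wrapped in a complete class, option 2 might look like this. It assumes the map is a ConcurrentHashMap so the unsynchronized fast-path read is safe (with a plain HashMap, the first get outside the lock would be a data race):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DclCache {
    static class SomeObject {}

    private final Map<Object, SomeObject> map = new ConcurrentHashMap<>();

    public SomeObject get(Object key) {
        SomeObject value = map.get(key); // fast path: no lock once the key exists
        if (value == null) {
            synchronized (map) {
                value = map.get(key); // re-check under the lock
                if (value == null) {
                    // Only the thread holding the lock creates the instance,
                    // so no wasted SomeObject is ever constructed
                    value = new SomeObject();
                    map.put(key, value);
                }
            }
        }
        return value;
    }

    public static void main(String[] args) {
        DclCache cache = new DclCache();
        System.out.println(cache.get("k") == cache.get("k")); // prints true
    }
}
```

Note that the lock is only taken on a cache miss; once a key is populated, all subsequent reads go through the lock-free fast path.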