Method call to Future.get() blocks. Is that really desirable?



Future offers you the method isDone(), which is non-blocking and returns true if the computation has completed, false otherwise.

Future.get() is used to retrieve the result of the computation.

You have a couple of options:

  • call isDone() and, if the result is ready, ask for it by invoking get(); notice how there is no blocking
  • block indefinitely with get()
  • block for specified timeout with get(long timeout, TimeUnit unit)
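A minimal sketch of the three options above (the task, the timings, and the returned values are invented for illustration):

```java
import java.util.concurrent.*;

public class GetOptionsDemo {
    // Option 1: poll isDone() and only call get() once the result is ready
    static int pollThenGet(Future<Integer> f) throws Exception {
        while (!f.isDone()) {
            Thread.sleep(10); // the caller is free to do other work here instead
        }
        return f.get(); // no blocking: the result is already available
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        Future<Integer> slow = pool.submit(() -> { Thread.sleep(100); return 42; });
        System.out.println("polled: " + pollThenGet(slow));

        // Option 2: block indefinitely
        Future<Integer> quick = pool.submit(() -> 7);
        System.out.println("blocking: " + quick.get());

        // Option 3: block for at most one second, then give up
        Future<Integer> timed = pool.submit(() -> { Thread.sleep(50); return 1; });
        try {
            System.out.println("timed: " + timed.get(1, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            System.out.println("not ready in time");
        }
        pool.shutdown();
    }
}
```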

The whole point of the Future API is to provide an easy way to obtain values from threads executing parallel tasks. This can be done synchronously or asynchronously if you prefer, as described in the bullets above.

UPDATE WITH CACHE EXAMPLE

Here is a cache implementation from Java Concurrency In Practice, an excellent use case for Future.

  • If the computation is already running, a caller interested in its result will wait for the computation to finish
  • If the result is ready in the cache, the caller will collect it
  • If the result is not ready and the computation has not started yet, the caller will start the computation and wrap the result in a Future for other callers

This is all easily achieved with the Future API.

package net.jcip.examples;

import java.util.concurrent.*;

/**
 * Memoizer
 * <p/>
 * Final implementation of Memoizer
 *
 * @author Brian Goetz and Tim Peierls
 */
public class Memoizer<A, V> implements Computable<A, V> {
    private final ConcurrentMap<A, Future<V>> cache
            = new ConcurrentHashMap<A, Future<V>>();
    private final Computable<A, V> c;

    public Memoizer(Computable<A, V> c) {
        this.c = c;
    }

    public V compute(final A arg) throws InterruptedException {
        while (true) {
            Future<V> f = cache.get(arg);
            // computation not started
            if (f == null) {
                Callable<V> eval = new Callable<V>() {
                    public V call() throws InterruptedException {
                        return c.compute(arg);
                    }
                };
                FutureTask<V> ft = new FutureTask<V>(eval);
                f = cache.putIfAbsent(arg, ft);
                // start computation if it's not started in the meantime
                if (f == null) {
                    f = ft;
                    ft.run();
                }
            }
            // get result if ready, otherwise block and wait
            try {
                return f.get();
            } catch (CancellationException e) {
                cache.remove(arg, f);
            } catch (ExecutionException e) {
                throw LaunderThrowable.launderThrowable(e.getCause());
            }
        }
    }
}
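For illustration, here is the same putIfAbsent/FutureTask idiom condensed into a self-contained sketch; the squaring "computation" and the 100 ms delay are made-up stand-ins for real expensive work:

```java
import java.util.concurrent.*;

public class MemoizerDemo {
    // Same idiom as Memoizer above, inlined: at most one expensive computation
    // per key; concurrent callers for the same key share a single Future.
    static final ConcurrentMap<Integer, Future<Long>> cache = new ConcurrentHashMap<>();

    static long square(int arg) throws InterruptedException, ExecutionException {
        Future<Long> f = cache.get(arg);
        if (f == null) {
            FutureTask<Long> ft = new FutureTask<>(() -> {
                Thread.sleep(100); // pretend this is expensive
                return (long) arg * arg;
            });
            f = cache.putIfAbsent(arg, ft);
            if (f == null) { // we won the race: run the computation ourselves
                f = ft;
                ft.run();
            }
        }
        return f.get(); // a second caller for the same arg simply waits here
    }

    public static void main(String[] args) throws Exception {
        System.out.println(square(6)); // computed: 36
        System.out.println(square(6)); // served from the cache, no 100 ms delay
    }
}
```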


Below is the snippet of the pseudo code. My question is: does the below code not defeat the very notion of parallel asynchronous processing?

It all depends on your use case:

  1. If you really want to block until you get the result, use the blocking get()

  2. If you can wait for a specific period to learn the status instead of blocking indefinitely, use get() with a time-out

  3. If you can continue without analysing the result immediately and inspect it at a future time, use CompletableFuture (Java 8)

    A Future that may be explicitly completed (setting its value and status), and may be used as a CompletionStage, supporting dependent functions and actions that trigger upon its completion.

  4. You can implement a callback mechanism from your Runnable/Callable. Have a look at the below SE question:

    Java executors: how to be notified, without blocking, when a task completes?
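A minimal sketch of option 3, assuming Java 8+; a callback is registered instead of calling get(), so the submitting thread stays unblocked while the value is still being computed (the task and value here are invented):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {
    public static void main(String[] args) {
        // supplyAsync runs the task on the common pool; this thread is not blocked
        CompletableFuture<String> cf = CompletableFuture.supplyAsync(() -> "result");

        // Register a callback that runs when the value is ready, instead of get()
        CompletableFuture<Void> done = cf.thenAccept(r -> System.out.println("got " + r));

        // ... the calling thread is free to do other work here ...

        done.join(); // demo only: keep the JVM alive until the callback has run
    }
}
```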


I would like to add my share on this one, from a more theoretical point of view, as there are some technical answers already. I would like to base my answer on this comment:

Let me give you my example. The tasks I submit to the service end up raising HTTP requests. The result of an HTTP request can take a lot of time, but I do need the result of each HTTP request. The tasks are submitted in a loop. If I wait for each task to return (get), then I am losing parallelism here, ain't I?

which agrees with what is said in the question.

Say you have three kids, and you want to make a cake for your birthday. Since you want to make the greatest of cakes, you need a lot of different stuff to prepare it. So what you do is split the ingredients into three different lists, because where you live there are just three supermarkets that sell different products, and assign each of your kids a single task, simultaneously.

Now, before you can start preparing the cake (let's assume again that you need all the ingredients beforehand), you will have to wait for the kid that has the longest route. The fact that you need to wait for all the ingredients before starting to make the cake is your necessity, not a dependency among tasks. Your kids have been working on the tasks simultaneously as long as they could (e.g. until the first kid completed the task). So, to conclude, here you have the parallelism.

The sequential case is what you would have with one kid, assigning all three tasks to him/her.
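To tie this back to the HTTP example from the comment: the parallelism is lost only if get() is called inside the same loop that submits the tasks. A hedged sketch of the submit-all-first pattern (the "URLs" and the 200 ms sleep are stand-ins for real HTTP calls):

```java
import java.util.*;
import java.util.concurrent.*;

public class SubmitThenCollect {
    // Submit every task first, then collect: the total wait is roughly the
    // slowest task, not the sum of all of them.
    static List<String> fetchAll(List<String> urls) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(urls.size());
        List<Future<String>> futures = new ArrayList<>();
        for (String url : urls) {              // submit loop: no get() here!
            futures.add(pool.submit(() -> {
                Thread.sleep(200);             // simulate a slow HTTP round trip
                return "response for " + url;
            }));
        }
        List<String> results = new ArrayList<>();
        for (Future<String> f : futures) {     // collect loop: now we block
            results.add(f.get());
        }
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        System.out.println(fetchAll(List.of("a", "b", "c")));
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("all three collected in ~" + ms + " ms, not ~600 ms");
    }
}
```

While the collecting thread blocks in get() on the first future, the other requests keep running in parallel, which is exactly the kids-and-supermarkets situation above.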