
ThreadPoolExecutor Block When Queue Is Full?


In some very narrow circumstances, you can implement a java.util.concurrent.RejectedExecutionHandler that does what you need.

RejectedExecutionHandler block = new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            // Block the submitting thread until space opens up in the queue.
            executor.getQueue().put(r);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("Interrupted while queuing task", e);
        }
    }
};

ThreadPoolExecutor pool = new ...
pool.setRejectedExecutionHandler(block);

Now, this is a very bad idea, for the following reasons:

  • It's prone to deadlock, because all the threads in the pool may die before the thing you put in the queue is visible. Mitigate this by setting a reasonable keep-alive time (see the sketch after this list).
  • The task is not wrapped the way your Executor may expect. Lots of executor implementations wrap their tasks in some sort of tracking object before execution. Look at the source of yours.
  • Adding via getQueue() is strongly discouraged by the API, and may be prohibited at some point.
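
For that first point, here is a minimal sketch of the mitigation; the pool sizes and timeout are illustrative assumptions, and the only relevant part is a keep-alive long enough that the workers are unlikely to all time out while a submitter is still blocked in put():

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sizing; the generous 10-minute keep-alive means idle
// threads linger long enough that a task parked in the queue by the
// handler above is very likely to be picked up.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 8, 10, TimeUnit.MINUTES,
        new LinkedBlockingQueue<>(100));
pool.setRejectedExecutionHandler(block); // the blocking handler from above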

An almost-always-better strategy is to install ThreadPoolExecutor.CallerRunsPolicy, which throttles your app by running the rejected task on the thread that is calling execute().
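
A minimal sketch, with illustrative pool and queue sizes:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// When the queue of 16 is full, the rejected task runs on the thread
// that called execute(), which naturally slows the producer down.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4, 4, 60, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(16),
        new ThreadPoolExecutor.CallerRunsPolicy());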

However, sometimes a blocking strategy, with all its inherent risks, is really what you want. I'd say it's justified under these conditions:

  • You only have one thread calling execute()
  • You have to (or want to) have a very small queue length
  • You absolutely need to limit the number of threads running this work (usually for external reasons), and a caller-runs strategy would break that.
  • Your tasks are of unpredictable size, so caller-runs could introduce starvation if the pool was momentarily busy with 4 short tasks and your one thread calling execute got stuck with a big one.

So, as I say: it's rarely needed and can be dangerous, but there you go.

Good Luck.


What you need to do is to wrap your ThreadPoolExecutor in an Executor that explicitly limits the number of concurrently executing operations inside it:

import java.util.concurrent.Executor;
import java.util.concurrent.Semaphore;

private static class BlockingExecutor implements Executor {

    final Semaphore semaphore;
    final Executor delegate;

    private BlockingExecutor(final int concurrentTasksLimit, final Executor delegate) {
        semaphore = new Semaphore(concurrentTasksLimit);
        this.delegate = delegate;
    }

    @Override
    public void execute(final Runnable command) {
        try {
            // Block the submitter until a permit is available.
            semaphore.acquire();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }

        // Release the permit only once the task has actually finished.
        final Runnable wrapped = () -> {
            try {
                command.run();
            } finally {
                semaphore.release();
            }
        };

        delegate.execute(wrapped);
    }
}

You can set concurrentTasksLimit to the threadPoolSize + queueSize of your delegate executor, and it will pretty much solve your problem.
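
For example (the sizes here are hypothetical), a delegate with 4 threads and a bounded queue of 10 would take a limit of 4 + 10 = 14:

import java.util.concurrent.Executor;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// 4 threads + queue capacity 10 = at most 14 tasks in flight;
// the 15th call to execute() blocks until a permit is released.
ThreadPoolExecutor delegate = new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<>(10));
Executor blocking = new BlockingExecutor(4 + 10, delegate);
blocking.execute(() -> System.out.println("work"));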


You could use a semaphore to block threads from going into the pool.

ExecutorService service = new ThreadPoolExecutor(
        3,
        3,
        1,
        TimeUnit.HOURS,
        new ArrayBlockingQueue<>(6, false));

Semaphore lock = new Semaphore(6); // equal to queue capacity

for (int i = 0; i < 100000; i++) {
    try {
        // Block the submitter until the queue has room.
        lock.acquire();
        service.submit(() -> {
            try {
                task.run(); // task is the caller's Runnable
            } finally {
                lock.release();
            }
        });
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
}

Some gotchas:

  • Only use this pattern with a fixed thread pool (corePoolSize == maximumPoolSize). The semaphore keeps the total of running plus queued tasks at or below the queue capacity, so the queue never actually fills and the pool never grows past its core size. Check out the java docs on ThreadPoolExecutor for more details: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html There is a way around this, but it is out of scope of this answer.
  • Queue size should be higher than the number of core threads. If we were to make the queue size 3 (equal to the thread count), what would end up happening is:

    • T0: all three threads are doing work, the queue is empty, no permits are available.
    • T1: Thread 1 finishes, releases a permit.
    • T2: Thread 1 polls the queue for new work, finds none, and waits.
    • T3: Main thread submits work into the pool, thread 1 starts work.

    The example above translates to the main thread blocking thread 1. Each wait may seem like a small period, but multiply that frequency over days and months, and those short periods add up to a large amount of wasted time.