ThreadPoolExecutor policy
It probably isn't necessary to micro-manage the thread pool as tightly as you are requesting.
A cached thread pool re-uses idle threads while also allowing a potentially unlimited number of concurrent threads. During bursty periods this can lead to runaway performance degradation from context-switching overhead.
Executors.newCachedThreadPool();
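For context, the cached pool's eagerness comes from its construction parameters; per the JDK documentation, `newCachedThreadPool()` is equivalent to building the executor directly like this:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CachedPoolEquivalent {
    public static void main(String[] args) {
        // Equivalent to Executors.newCachedThreadPool(): zero core threads,
        // an effectively unlimited maximum, a 60-second idle timeout, and a
        // SynchronousQueue that hands each task straight to a thread,
        // spawning a new one whenever no thread is free.
        ThreadPoolExecutor cached = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE,
                60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());
        cached.shutdown();
    }
}
```

The `SynchronousQueue` has no capacity, so every `offer` to it fails unless a thread is already waiting, which is what drives the unbounded thread creation.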
A better option is to place a limit on the total number of threads while discarding the notion of ensuring idle threads are used first. The configuration changes would be:
corePoolSize = maximumPoolSize = N;
allowCoreThreadTimeOut(true);
setKeepAliveTime(aReasonableTimeDuration, TimeUnit.SECONDS);
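Put together, that configuration might look like the following sketch (the bound `n` and the 60-second keep-alive are placeholder values, not prescriptions):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedEagerPool {
    public static void main(String[] args) {
        // Placeholder bound: one thread per CPU.
        int n = Runtime.getRuntime().availableProcessors();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                n, n,                  // corePoolSize = maximumPoolSize = N
                60L, TimeUnit.SECONDS, // aReasonableTimeDuration
                new LinkedBlockingQueue<Runnable>());
        // Let the "core" threads time out too, so an idle pool shrinks to zero.
        pool.allowCoreThreadTimeOut(true);
        pool.shutdown();
    }
}
```

Note that the keep-alive time can be supplied in the constructor rather than via `setKeepAliveTime`, as shown here.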
Reasoning over this scenario: if the executor has fewer than corePoolSize threads, it must not be very busy. If the system is not very busy, there is little harm in spinning up a new thread. Doing this causes your ThreadPoolExecutor to always create a new worker while it is under the maximum number of workers allowed. Only when the maximum number of workers are "running" will workers waiting idly for tasks be given tasks. If a worker waits aReasonableTimeDuration without a task, it is allowed to terminate. With reasonable limits on the pool size (after all, there are only so many CPUs) and a reasonably large timeout (to keep threads from needlessly terminating), the desired benefits will likely be seen.
The final option is hackish. Internally, the ThreadPoolExecutor uses BlockingQueue.offer to determine whether the queue has capacity. A custom BlockingQueue implementation could always reject the offer attempt. When the ThreadPoolExecutor fails to offer a task to the queue, it tries to create a new worker. If a new worker cannot be created, a RejectedExecutionHandler is invoked. At that point, a custom RejectedExecutionHandler could force a put into the custom BlockingQueue.
/**
 * Hackish BlockingQueue implementation, tightly coupled to
 * ThreadPoolExecutor implementation details.
 */
class ThreadPoolHackyBlockingQueue<T> implements BlockingQueue<T>, RejectedExecutionHandler {
    BlockingQueue<T> delegate;

    public boolean offer(T item) {
        // Always report "no capacity" so the executor tries to add a worker.
        return false;
    }

    @SuppressWarnings("unchecked")
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // The pool is saturated: block until the delegate accepts the task.
        try {
            delegate.put((T) r);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // ... delegate methods
}
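One way to realize the same trick with less boilerplate (this variant is my sketch, not code from the answer) is to subclass LinkedBlockingQueue so that only `offer` needs overriding, and supply the blocking `put` as a RejectedExecutionHandler lambda:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SaturateThenQueueDemo {

    // Subclassing avoids hand-writing every BlockingQueue delegate method.
    static class AlwaysRejectOffer extends LinkedBlockingQueue<Runnable> {
        @Override
        public boolean offer(Runnable r) {
            // Claim "no capacity" so the executor prefers creating a worker.
            return false;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AlwaysRejectOffer queue = new AlwaysRejectOffer();
        AtomicInteger completed = new AtomicInteger();

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 4, 60L, TimeUnit.SECONDS, queue,
                // Once 4 workers exist, offer() has failed and no new worker
                // can be made, so force the task into the queue with put().
                (r, executor) -> {
                    try {
                        queue.put(r);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });

        for (int i = 0; i < 10; i++) {
            pool.execute(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed = " + completed.get());
    }
}
```

The first four submissions each spawn a worker; the remaining six are rejected, forced into the queue by the handler, and drained by the now-idle workers.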
Just set corePoolSize = maximumPoolSize and use an unbounded queue?
In your list of points, 1 excludes 2, since corePoolSize will always be less than or equal to maximumPoolSize.
Edit
There is still something incompatible between what you want and what TPE will offer you.
If you have an unbounded queue, maximumPoolSize is ignored, so, as you observed, no more than corePoolSize threads will ever be created and used.
So, again, if you take corePoolSize = maximumPoolSize with an unbounded queue, you have what you want, no?
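As a concrete sketch of that suggestion (the bound of 4 is an arbitrary example), this is exactly the configuration that Executors.newFixedThreadPool(4) builds internally:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedPoolUnboundedQueue {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4,                      // corePoolSize = maximumPoolSize
                0L, TimeUnit.MILLISECONDS, // keep-alive is moot: core threads stay
                new LinkedBlockingQueue<Runnable>()); // unbounded queue
        // With an unbounded queue the offer() never fails, so maximumPoolSize
        // is never consulted: once 4 core threads exist, every further task
        // simply waits in the queue.
        pool.shutdown();
    }
}
```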
Would you be looking for something more like a cached thread pool?