How to get the ThreadPoolExecutor to increase threads to max before queueing?

How can I work around the limitation in ThreadPoolExecutor where the queue needs to be bounded and full before more threads will be started?

I believe I have finally found a somewhat elegant (maybe a little hacky) solution to this limitation with ThreadPoolExecutor. It involves extending LinkedBlockingQueue to have it return false for queue.offer(...) when there are already some tasks queued. If the current threads are not keeping up with the queued tasks, the TPE will add additional threads. If the pool is already at max threads, then the RejectedExecutionHandler will be called. It is the handler which then does the put(...) into the queue.

It certainly is strange to write a queue where offer(...) can return false and put() never blocks, so that's the hack part. But this works well with the TPE's usage of the queue, so I don't see any problem with doing this.

Here's the code:

// extend LinkedBlockingQueue to force offer() to return false conditionally
BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
    private static final long serialVersionUID = -6903933921423432194L;
    @Override
    public boolean offer(Runnable e) {
        /*
         * Offer it to the queue if there are no items already queued, else
         * return false so the TPE will add another thread. If we return false
         * and max threads have been reached then the RejectedExecutionHandler
         * will be called which will do the put into the queue.
         */
        if (size() == 0) {
            return super.offer(e);
        } else {
            return false;
        }
    }
};
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1 /*core*/, 50 /*max*/,
        60 /*secs*/, TimeUnit.SECONDS, queue);
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            /*
             * This does the actual put into the queue. Once the max threads
             * have been reached, the tasks will then queue up.
             */
            executor.getQueue().put(r);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
});

With this mechanism, when I submit tasks to the executor, the ThreadPoolExecutor will:

  1. Scale the number of threads up to the core size initially (here 1).
  2. Offer the task to the queue. If the queue is empty, the offer succeeds and the task is handled by an existing thread.
  3. If the queue already has 1 or more elements, the offer(...) will return false.
  4. If false is returned, the TPE scales the number of threads in the pool up until it reaches the max number (here 50).
  5. If the pool is at the max, it calls the RejectedExecutionHandler.
  6. The RejectedExecutionHandler then puts the task into the queue to be processed by the first available thread in FIFO order.
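
To see this in action, you can submit a burst of slow tasks and watch the pool grow toward the max before the queue builds up (a quick demo sketch; the task body and counts are made up):

for (int i = 0; i < 100; i++) {
    threadPool.execute(new Runnable() {
        @Override
        public void run() {
            try {
                // simulate a slow task
                Thread.sleep(1000);
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
    });
    System.out.println("pool size = " + threadPool.getPoolSize()
            + ", queue size = " + threadPool.getQueue().size());
}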

Although the queue in my example code above is unbounded, you could also define it as a bounded queue. For example, if you give the LinkedBlockingQueue a capacity of 1000, it will:

  1. scale the threads up to max
  2. then queue up until it is full with 1000 tasks
  3. then block the caller until space becomes available in the queue.
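
In code, the bounded variant changes only the queue construction (a sketch; the serialVersionUID is arbitrary):

// same override as before, but with a capacity of 1000
BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(1000) {
    private static final long serialVersionUID = 1L;
    @Override
    public boolean offer(Runnable e) {
        if (size() == 0) {
            return super.offer(e);
        } else {
            return false;
        }
    }
};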

In addition, if you really need to use offer(...) in the RejectedExecutionHandler, you can use the offer(E, long, TimeUnit) method instead, with Long.MAX_VALUE as the timeout.
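
That variant of the handler might look like the following (a sketch; with Long.MAX_VALUE as the timeout it behaves like put(...) for all practical purposes):

threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            // effectively blocks until space is available, like put(...)
            executor.getQueue().offer(r, Long.MAX_VALUE, TimeUnit.MILLISECONDS);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
});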

Edit:

I've tweaked my offer(...) method override per @Ralf's feedback. This will only scale up the number of threads in the pool if they are not keeping up with the load.

Edit:

Another tweak to this answer could be to ask the TPE whether there are idle threads and only enqueue the item if there are. You would have to write a real (non-anonymous) class for this and add an ourQueue.setThreadPoolExecutor(tpe); method on it.

Then your offer(...) method might look something like:

  1. Check whether tpe.getPoolSize() == tpe.getMaximumPoolSize(), in which case just call super.offer(...).
  2. Else, if tpe.getPoolSize() > tpe.getActiveCount(), call super.offer(...) since there seem to be idle threads.
  3. Otherwise return false to fork another thread.

Maybe this:

int poolSize = tpe.getPoolSize();
int maximumPoolSize = tpe.getMaximumPoolSize();
if (poolSize >= maximumPoolSize || poolSize > tpe.getActiveCount()) {
    return super.offer(e);
} else {
    return false;
}
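
Putting the pieces together, the "true class" described above might look something like this (a sketch; the class name and setter are assumptions):

public class ScalingQueue extends LinkedBlockingQueue<Runnable> {
    private static final long serialVersionUID = 1L;
    private transient ThreadPoolExecutor tpe;

    // must be called right after the TPE is constructed, since the queue
    // has to exist before the TPE that uses it
    public void setThreadPoolExecutor(ThreadPoolExecutor tpe) {
        this.tpe = tpe;
    }

    @Override
    public boolean offer(Runnable e) {
        int poolSize = tpe.getPoolSize();
        int maximumPoolSize = tpe.getMaximumPoolSize();
        // enqueue if at max threads or if there seem to be idle threads;
        // otherwise return false so the TPE forks another thread
        if (poolSize >= maximumPoolSize || poolSize > tpe.getActiveCount()) {
            return super.offer(e);
        } else {
            return false;
        }
    }
}

Wired up like this:

ScalingQueue queue = new ScalingQueue();
ThreadPoolExecutor tpe = new ThreadPoolExecutor(1, 50, 60, TimeUnit.SECONDS, queue);
queue.setThreadPoolExecutor(tpe);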

Note that the get methods on TPE are expensive since they access volatile fields or (in the case of getActiveCount()) lock the TPE and walk the thread-list. Also, there are race conditions here that may cause a task to be enqueued improperly or another thread forked when there was an idle thread.

Set core size and max size to the same value, and allow core threads to be removed from the pool with allowCoreThreadTimeOut(true).
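
In code, that looks like this (the pool size and timeout are illustrative):

// core == max, so the pool starts a new thread for each task up to 50
// before anything queues; idle threads die after the 60-second keep-alive
ThreadPoolExecutor threadPool = new ThreadPoolExecutor(50, 50,
        60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
threadPool.allowCoreThreadTimeOut(true);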

I've already got two other answers on this question, but I suspect this one is the best.

It's based on the technique of the currently accepted answer, namely:

  1. Override the queue's offer() method to (sometimes) return false,
  2. which causes the ThreadPoolExecutor to either spawn a new thread or reject the task, and
  3. set the RejectedExecutionHandler to actually queue the task on rejection.

The problem is when offer() should return false. The currently accepted answer returns false when the queue already has tasks on it, but as I've pointed out in my comment there, this causes undesirable effects. Alternatively, if you always return false, you'll keep spawning new threads even when you have threads waiting on the queue.

The solution is to use Java 7's LinkedTransferQueue and have offer() call tryTransfer(). When there is a waiting consumer thread, the task will just get passed to that thread. Otherwise, offer() will return false and the ThreadPoolExecutor will spawn a new thread.

    BlockingQueue<Runnable> queue = new LinkedTransferQueue<Runnable>() {
        @Override
        public boolean offer(Runnable e) {
            return tryTransfer(e);
        }
    };
    ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1, 50, 60, TimeUnit.SECONDS, queue);
    threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
        @Override
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
            try {
                executor.getQueue().put(r);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    });

Note: I now prefer and recommend my other answer.

Here's a version which feels to me much more straightforward: Increase the corePoolSize (up to the limit of maximumPoolSize) whenever a new task is executed, then decrease the corePoolSize (down to the limit of the user specified "core pool size") whenever a task completes.

To put it another way, keep track of the number of running or enqueued tasks, and ensure that the corePoolSize is equal to the number of tasks as long as it is between the user specified "core pool size" and the maximumPoolSize.

public class GrowBeforeQueueThreadPoolExecutor extends ThreadPoolExecutor {
    private int userSpecifiedCorePoolSize;
    private int taskCount;

    public GrowBeforeQueueThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
            long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
        userSpecifiedCorePoolSize = corePoolSize;
    }

    @Override
    public void execute(Runnable runnable) {
        synchronized (this) {
            taskCount++;
            setCorePoolSizeToTaskCountWithinBounds();
        }
        super.execute(runnable);
    }

    @Override
    protected void afterExecute(Runnable runnable, Throwable throwable) {
        super.afterExecute(runnable, throwable);
        synchronized (this) {
            taskCount--;
            setCorePoolSizeToTaskCountWithinBounds();
        }
    }

    private void setCorePoolSizeToTaskCountWithinBounds() {
        int threads = taskCount;
        if (threads < userSpecifiedCorePoolSize) threads = userSpecifiedCorePoolSize;
        if (threads > getMaximumPoolSize()) threads = getMaximumPoolSize();
        setCorePoolSize(threads);
    }
}

As written, the class doesn't support changing the user-specified corePoolSize or maximumPoolSize after construction, and doesn't support manipulating the work queue directly or via remove() or purge().
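
A usage sketch (the constructor arguments are illustrative):

ThreadPoolExecutor threadPool = new GrowBeforeQueueThreadPoolExecutor(
        1 /*core*/, 50 /*max*/, 60, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>());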

We have a subclass of ThreadPoolExecutor that takes an additional creationThreshold parameter and overrides execute().

public void execute(Runnable command) {
    super.execute(command);
    final int poolSize = getPoolSize();
    if (poolSize < getMaximumPoolSize()) {
        if (getQueue().size() > creationThreshold) {
            synchronized (this) {
                setCorePoolSize(poolSize + 1);
                setCorePoolSize(poolSize);
            }
        }
    }
}
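
Since the snippet references a creationThreshold field without showing the rest of the class, here is a minimal sketch of how the surrounding subclass might look (the class name and constructor are assumptions):

public class ThresholdThreadPoolExecutor extends ThreadPoolExecutor {
    private final int creationThreshold;

    public ThresholdThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
            long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue,
            int creationThreshold) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
        this.creationThreshold = creationThreshold;
    }

    // ... plus the execute(Runnable) override shown above; briefly raising
    // the core pool size forces the pool to start an extra thread for the
    // queued work, and lowering it again lets that thread retire when idle
}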

Maybe that helps too, but yours looks more artsy, of course…

The recommended answer resolves only one of the two issues with the JDK thread pool:

  1. JDK thread pools are biased towards queuing. So instead of spawning a new thread, they will queue the task. Only if the queue reaches its limit will the thread pool spawn a new thread.

  2. Thread retirement does not happen when the load lightens. For example, if a burst of jobs hitting the pool causes it to go to max, followed by a light load of at most 2 tasks at a time, the pool will use all of its threads to service the light load, preventing thread retirement (even though only 2 threads would be needed).

Unhappy with the behavior above, I went ahead and implemented a pool to overcome the deficiencies above.

Using LIFO scheduling resolves issue 2. The idea was presented by Ben Maurer at the ACM Applicative 2015 conference: Systems @ Facebook scale

So a new implementation was born:

LifoThreadPoolExecutorSQP

So far this implementation improves async execution performance for ZEL.

The implementation can spin to reduce context-switch overhead, yielding superior performance for certain use cases.

Hope it helps...

PS: The JDK's ForkJoinPool implements ExecutorService and works as a "normal" thread pool. The implementation is performant and uses LIFO thread scheduling; however, there is no control over the internal queue size or the retirement timeout, and, most importantly, tasks cannot be interrupted when cancelling them.

Note: I now prefer and recommend my other answer.

I have another proposal, following the original idea of changing the queue to return false. In this one all tasks can enter the queue, but whenever a task is enqueued after execute(), we follow it with a sentinel no-op task which the queue rejects, causing a new thread to spawn; that thread will execute the no-op immediately, followed by something from the queue.

Because worker threads may be polling the LinkedBlockingQueue for a new task, it's possible for a task to get enqueued even when there's an available thread. To avoid spawning new threads even when there are threads available, we need to keep track of how many threads are waiting for new tasks on the queue, and only spawn a new thread when there are more tasks on the queue than waiting threads.

final Runnable SENTINEL_NO_OP = new Runnable() { public void run() { } };

final AtomicInteger waitingThreads = new AtomicInteger(0);

BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
    @Override
    public boolean offer(Runnable e) {
        // offer returning false will cause the executor to spawn a new thread
        if (e == SENTINEL_NO_OP) return size() <= waitingThreads.get();
        else return super.offer(e);
    }

    @Override
    public Runnable poll(long timeout, TimeUnit unit) throws InterruptedException {
        try {
            waitingThreads.incrementAndGet();
            return super.poll(timeout, unit);
        } finally {
            waitingThreads.decrementAndGet();
        }
    }

    @Override
    public Runnable take() throws InterruptedException {
        try {
            waitingThreads.incrementAndGet();
            return super.take();
        } finally {
            waitingThreads.decrementAndGet();
        }
    }
};

ThreadPoolExecutor threadPool = new ThreadPoolExecutor(1, 50, 60, TimeUnit.SECONDS, queue) {
    @Override
    public void execute(Runnable command) {
        super.execute(command);
        if (getQueue().size() > waitingThreads.get()) super.execute(SENTINEL_NO_OP);
    }
};
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (r == SENTINEL_NO_OP) return;
        else throw new RejectedExecutionException();            
    }
});

The best solution that I can think of is to extend ThreadPoolExecutor.

ThreadPoolExecutor offers a few hook methods: beforeExecute and afterExecute. In your extension you could use a bounded queue to feed in tasks and a second unbounded queue to handle overflow. When someone calls submit, you could attempt to place the request into the bounded queue. If you're met with an exception, you just stick the task in your overflow queue. You could then utilize the afterExecute hook to see if there is anything in the overflow queue after finishing a task. This way, the executor will take care of the stuff in its bounded queue first, and automatically pull from the unbounded queue as time permits.

It seems like more work than your solution, but at least it doesn't involve giving queues unexpected behaviors. I also imagine that there's a better way to check the status of the queue and threads rather than relying on exceptions, which are fairly slow to throw.
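
For illustration, a rough sketch of the two-queue approach described above (all names here are assumptions, and the handoff between the two queues glosses over some races, so treat it as a starting point rather than a finished implementation):

public class OverflowThreadPoolExecutor extends ThreadPoolExecutor {
    // unbounded overflow queue for tasks the bounded work queue rejects
    private final Queue<Runnable> overflow = new ConcurrentLinkedQueue<Runnable>();

    public OverflowThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
            long keepAliveTime, TimeUnit unit, int boundedCapacity) {
        // a bounded work queue, so the pool grows to max before rejecting
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit,
                new ArrayBlockingQueue<Runnable>(boundedCapacity));
    }

    @Override
    public void execute(Runnable command) {
        try {
            super.execute(command);
        } catch (RejectedExecutionException e) {
            // bounded queue full and pool at max: park the task in overflow
            overflow.add(command);
        }
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        // a worker just finished a task; try to resubmit one overflowed task
        Runnable parked = overflow.poll();
        if (parked != null) {
            execute(parked);
        }
    }
}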
