ThreadPoolExecutor parameter configuration

Submitted by 自古美人都是妖i on 2021-02-20 02:29:27

Question


I'm working with a client application which needs to request data from a Rest API. Many of these requests are independent, so they could be called asynchronously. I'm using ThreadPoolExecutor to do so, and I've seen it can be configured with several parameters:

  • corePoolSize
  • maxPoolSize
  • queueCapacity

I read this article and I understand the following:

  • corePoolSize is the value below which the executor adds a new thread rather than queuing the request
  • maxPoolSize is the value above which the executor queues the request
  • If the number of active threads is between corePoolSize and maxPoolSize, the request is queued.

But I have some questions:

  • I've been testing, and the higher the corePoolSize, the better the results I get. In a production environment with lots of clients making requests to this Rest API (maybe millions per day), how high should corePoolSize be?
  • How should I go about finding the "optimal" parameters? Only by testing?
  • Which problems could very high / low values (of each parameter) cause?

Thank you in advance

UPDATE

My current values are (see the sketch after this list):

  • corePoolSize = 5
  • maxPoolSize = 20
  • queueCapacity = 100
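
For reference, here is a minimal sketch of what those values could look like with a plain java.util.concurrent.ThreadPoolExecutor; the 60-second keep-alive and the choice of a bounded LinkedBlockingQueue are assumptions, not given above. The comments restate the documented submission order:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class PoolConfig {
        public static void main(String[] args) {
            ThreadPoolExecutor executor = new ThreadPoolExecutor(
                    5,                                // corePoolSize
                    20,                               // maxPoolSize
                    60L, TimeUnit.SECONDS,            // keepAliveTime (assumed; not given in the question)
                    new LinkedBlockingQueue<>(100));  // queueCapacity

            // Documented submission order:
            //  - fewer than corePoolSize threads running -> start a new thread
            //  - corePoolSize reached, queue not full    -> queue the task
            //  - queue full, fewer than maxPoolSize      -> start an extra (non-core) thread
            //  - queue full and maxPoolSize reached      -> reject (RejectedExecutionException by default)

            executor.shutdown();
        }
    }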

Answer 1:


  • the corePoolSize is the number of threads to keep in the pool, even if they are idle, unless {@code allowCoreThreadTimeOut} is set
  • maximumPoolSize is the maximum number of threads to allow in the pool

The corePoolSize is the number of threads you want to keep waiting forever, even if no one is requesting them. The maximumPoolSize is the maximum number of threads, and therefore the maximum number of concurrent requests to your Rest API, that you will start.

  • How many requests per second do you have? (average / maximum per second)
  • How long does one request take?
  • How long is the maximum acceptable wait time for a user?

corePoolSize >= requests per second * seconds per request

maximumPoolSize >= maximum requests per second * seconds per request

queueCapacity <= maximumPoolSize * maxWaitTime / timePerRequest (You should monitor this so that you know when you will have to act.)
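
As a purely illustrative example, plugging assumed numbers into those formulas (say 10 requests per second on average, 50 at peak, 0.5 s per request, and a 2 s maximum acceptable wait; none of these figures come from the question):

    public class PoolSizing {
        public static void main(String[] args) {
            // Assumed workload figures -- replace with your own measurements.
            double avgRequestsPerSecond = 10.0;
            double maxRequestsPerSecond = 50.0;
            double secondsPerRequest    = 0.5;
            double maxWaitSeconds       = 2.0;

            // corePoolSize >= requests per second * seconds per request
            long corePoolSize = (long) Math.ceil(avgRequestsPerSecond * secondsPerRequest);  // 5

            // maximumPoolSize >= maximum requests per second * seconds per request
            long maxPoolSize = (long) Math.ceil(maxRequestsPerSecond * secondsPerRequest);   // 25

            // queueCapacity <= maximumPoolSize * maxWaitTime / timePerRequest
            long queueCapacity = (long) (maxPoolSize * maxWaitSeconds / secondsPerRequest);  // 100

            System.out.printf("corePoolSize=%d, maxPoolSize=%d, queueCapacity=%d%n",
                    corePoolSize, maxPoolSize, queueCapacity);
        }
    }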

You have to keep in mind that the Rest API or your own application/server/bandwidth might impose some limits on the number of concurrent connections and that many concurrent requests might increase the time per request.

I would rather keep corePoolSize low and keepAliveTime quite high.
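
One way to express that preference is a small core pool, a generous keep-alive, and optionally allowCoreThreadTimeOut so even the core threads are reclaimed when idle. This is a sketch with illustrative numbers only:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class LazyPool {
        static ThreadPoolExecutor newExecutor() {
            ThreadPoolExecutor executor = new ThreadPoolExecutor(
                    2,                   // low corePoolSize
                    20,                  // maxPoolSize
                    5, TimeUnit.MINUTES, // quite high keepAliveTime for non-core threads
                    new LinkedBlockingQueue<>(100));
            // Let the core threads time out too, so the pool can shrink to zero between bursts.
            executor.allowCoreThreadTimeOut(true);
            return executor;
        }
    }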

You have to keep in mind that each thread adds quite some overhead; just for parallel HTTP requests there should be an NIO variant that can do this without lots of threads. Maybe you could try Apache MINA.
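
Apache MINA is fairly low-level for HTTP. As an alternative illustration of the same idea (many concurrent requests without a caller-owned thread per request), the JDK's own java.net.http.HttpClient (Java 11+) can send requests asynchronously; the URL below is a placeholder:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.CompletableFuture;

    public class AsyncRestCalls {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();

            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://example.com/api/resource"))  // placeholder URL
                    .GET()
                    .build();

            // sendAsync returns immediately; the response is handled on the client's
            // internal executor instead of a thread the caller has to provision.
            CompletableFuture<Void> done = client
                    .sendAsync(request, HttpResponse.BodyHandlers.ofString())
                    .thenAccept(response -> System.out.println(response.statusCode()));

            done.join();  // wait only so the demo does not exit early
        }
    }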



Source: https://stackoverflow.com/questions/24390882/threadpoolexecutor-parameter-configuration
