Question
I need to perform some tasks concurrently in a Java servlet (mostly calling multiple external URLs with request parameters and reading the data back) and send a response to the user within a few seconds. I am trying to use ExecutorService to achieve this. Four FutureTasks are created per user request in the doGet method; each task runs for around 5-10 seconds, and the total response time to the user is around 15 seconds.
Can you please suggest which of the following designs is better when using an ExecutorService in a Java servlet?
1) (Creating a newFixedThreadPool per request and shutting it down as soon as possible)
public class MyTestServlet extends HttpServlet
{
    ExecutorService myThreadPool = null;

    public void init()
    {
        super.init();
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
    {
        myThreadPool = Executors.newFixedThreadPool(4);
        // urlTaskOne .. urlTaskFour are placeholder names for the URL-fetching Callables
        Future<?> taskOne = myThreadPool.submit(urlTaskOne);
        Future<?> taskTwo = myThreadPool.submit(urlTaskTwo);
        Future<?> taskThree = myThreadPool.submit(urlTaskThree);
        Future<?> taskFour = myThreadPool.submit(urlTaskFour);
        ...
        ...
        taskOne.get();
        taskTwo.get();
        taskThree.get();
        taskFour.get();
        ...
        myThreadPool.shutdown();
    }

    public void destroy()
    {
        super.destroy();
    }
}
2) (Creating newFixedThreadPool during Servlet Init and shutting it down on servlet destroy)
public class MyTestServlet extends HttpServlet
{
    ExecutorService myThreadPool = null;

    public void init()
    {
        super.init();
        // What should be the size of the fixed thread pool so that it can handle
        // multiple user requests without waiting???
        myThreadPool = Executors.newFixedThreadPool(20);
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
    {
        // urlTaskOne .. urlTaskFour are placeholder names for the URL-fetching Callables
        Future<?> taskOne = myThreadPool.submit(urlTaskOne);
        Future<?> taskTwo = myThreadPool.submit(urlTaskTwo);
        Future<?> taskThree = myThreadPool.submit(urlTaskThree);
        Future<?> taskFour = myThreadPool.submit(urlTaskFour);
        ...
        ...
        taskOne.get();
        taskTwo.get();
        taskThree.get();
        taskFour.get();
        ...
    }

    public void destroy()
    {
        super.destroy();
        myThreadPool.shutdown();
    }
}
3) (Creating newCachedThreadPool during Servlet Init and shutting it down on servlet destroy)
public class MyTestServlet extends HttpServlet
{
    ExecutorService myThreadPool = null;

    public void init()
    {
        super.init();
        myThreadPool = Executors.newCachedThreadPool();
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
    {
        // urlTaskOne .. urlTaskFour are placeholder names for the URL-fetching Callables
        Future<?> taskOne = myThreadPool.submit(urlTaskOne);
        Future<?> taskTwo = myThreadPool.submit(urlTaskTwo);
        Future<?> taskThree = myThreadPool.submit(urlTaskThree);
        Future<?> taskFour = myThreadPool.submit(urlTaskFour);
        ...
        ...
        taskOne.get();
        taskTwo.get();
        taskThree.get();
        taskFour.get();
        ...
    }

    public void destroy()
    {
        super.destroy();
        myThreadPool.shutdown();
    }
}
Answer 1:
The first should not be an option. The idea of a thread pool (and probably of any pool) is to minimize the overhead and memory cost of constructing the pool members (in this case, the worker threads). So, in general, pools should be initialized when your application starts and destroyed when it shuts down.
As for the choice between 2 and 3, please check the accepted answer in the following post; it explains the difference so you can decide which one suits your needs better: newcachedthreadpool-v-s-newfixedthreadpool
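For reference, the two factory methods differ mainly in how the pool grows and shrinks: a fixed pool keeps exactly N workers and queues excess tasks, while a cached pool starts empty, creates threads on demand, and reclaims idle ones after 60 seconds. A minimal sketch querying these properties (class name is illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolComparison {
    public static void main(String[] args) {
        // Fixed pool: exactly 20 workers; extra tasks wait in an unbounded queue.
        ThreadPoolExecutor fixed = (ThreadPoolExecutor) Executors.newFixedThreadPool(20);
        System.out.println("fixed core=" + fixed.getCorePoolSize()
                + " max=" + fixed.getMaximumPoolSize());            // 20 / 20

        // Cached pool: 0 core threads, grows as needed up to Integer.MAX_VALUE.
        ThreadPoolExecutor cached = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        System.out.println("cached core=" + cached.getCorePoolSize()
                + " max=" + cached.getMaximumPoolSize());           // 0 / 2147483647
        System.out.println("cached keepAlive="
                + cached.getKeepAliveTime(TimeUnit.SECONDS) + "s"); // 60s

        fixed.shutdown();
        cached.shutdown();
    }
}
```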
Answer 2:
Creating and destroying a thread pool for each request is a bad idea: too expensive.
If you have some way to remember which HTTP request each URL-fetching task belongs to, I'd go for a CachedThreadPool. Its ability to grow and shrink on demand will do wonders, because the URL-fetching tasks are totally independent and network-bound (as opposed to CPU- or memory-bound).
Also, I would wrap the thread pool in a CompletionService, which can notify you whenever a job is done, regardless of its submission order: first completed, first notified. This ensures you don't block on a sloooow job when faster ones are already done.
CompletionService is easy to use: wrap it around an existing thread pool (newCachedThreadPool, for example), submit() jobs to it, and then take() the results back. Note that take() is a blocking method.
http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/CompletionService.html
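A minimal sketch of the pattern described above. The sleeps stand in for network latency; in the real servlet the Callables would fetch the external URLs (class name and delays are illustrative):

```java
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionServiceDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        CompletionService<String> ecs = new ExecutorCompletionService<>(pool);

        // Placeholder "fetch" tasks; each sleep simulates one URL call.
        List<Integer> delaysMs = List.of(300, 100, 200);
        for (int delay : delaysMs) {
            ecs.submit(() -> {
                Thread.sleep(delay);
                return "fetched after " + delay + " ms";
            });
        }

        // take() blocks until the NEXT finished task, regardless of submission
        // order: the 100 ms task comes back first, then 200 ms, then 300 ms.
        for (int i = 0; i < delaysMs.size(); i++) {
            System.out.println(ecs.take().get());
        }
        pool.shutdown();
    }
}
```

The key design point is that take() hands back results in completion order, so a slow URL never delays the processing of the fast ones.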
Source: https://stackoverflow.com/questions/11785801/executorservice-in-java-servlet