Pipelining in Tomcat - parallel?

Submitted by £可爱£侵袭症+ on 2019-12-13 13:27:10

Question


I am writing a service using Tomcat and am trying to understand the pipelining feature of HTTP/1.1 and its implementation in Tomcat.

Here are my questions:

1] Is pipelining in Tomcat parallel? That is, after it receives a pipelined request, does it break it down into individual requests and invoke them all in parallel? Here is a small test I did. From my results it looks like it does, but I am trying to find an authoritative document on this.

public static void main(String[] args) throws IOException, InterruptedException
    {
        Socket socket = new Socket();
        socket.connect(new InetSocketAddress("ServerHost", 2080));
        int bufferSize = 166;
        byte[] reply = new byte[bufferSize];
        DataInputStream dis = null;

        //first, without pipelining - TEST1
//        socket.getOutputStream().write(
//            ("GET URI HTTP/1.1\r\n" +
//            "Host: ServerHost:2080\r\n" +
//            "\r\n").getBytes());
//       
//        dis = new DataInputStream(socket.getInputStream());
//        final long before = System.currentTimeMillis();
//        dis.readFully(reply);
//        final long after = System.currentTimeMillis();
//        System.out.println(new String(reply));

        //now pipeline 3 requests in a single write - TEST2
        byte[] request = ("GET URI HTTP/1.1\r\n" +
            "Host: ServerHost:2080\r\n" +
            "\r\n" +
            "GET URI HTTP/1.1\r\n" +
            "Host: ServerHost:2080\r\n" +
            "\r\n" +
            "GET URI HTTP/1.1\r\n" +
            "Host: ServerHost:2080\r\n" +
            "\r\n").getBytes();
        socket.getOutputStream().write(request);
        bufferSize = 1000*1;
        reply = new byte[bufferSize];

        dis = new DataInputStream(socket.getInputStream());
        final long before = System.currentTimeMillis();
        dis.readFully(reply);   // blocks until bufferSize bytes (all three responses) have arrived
        final long after = System.currentTimeMillis();
        System.out.println(new String(reply));

        long time = after-before;
        System.out.println("Request took :"+ time +"milli secs");
    }

In the above test (TEST2), the response time is nowhere near [20*3 = 60+ ms]; the actual GET requests come back very fast. This hints that they are being parallelized, unless I am missing something?

2] What is the default pipeline depth in Tomcat? How can I control it?
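For what it's worth, Tomcat does not expose a dedicated "pipeline depth" setting; the closest knob is the HTTP Connector's maxKeepAliveRequests attribute in server.xml, which caps how many requests may be served on one persistent connection and therefore bounds how many pipelined requests get answered before the connection is closed. A sketch (the port and timeout values here are just placeholders; check the Connector documentation for your Tomcat version):

```xml
<!-- server.xml: cap requests per persistent connection.
     maxKeepAliveRequests defaults to 100; setting it to 1 disables
     keep-alive (and thus pipelining), -1 means unlimited. -->
<Connector port="2080" protocol="HTTP/1.1"
           maxKeepAliveRequests="100"
           connectionTimeout="20000" />
```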

3] When allowing pipelining on the server side for my service, do I need to consider anything else, assuming that the client follows http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.4 while handling pipelining? Any experiences are welcome.


Answer 1:


I had a similar question about how Apache works, and after running several tests I can confirm that Apache does in fact wait for each request to be processed before starting on the next one, so processing is SEQUENTIAL.




Answer 2:


The concept of pipelining says that the server must be able to accept requests at any point in time, but the processing of those requests takes place in the order in which they were received. That is, parallel processing does not take place.
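A toy illustration of that ordering guarantee (a sketch, not Tomcat's actual implementation): a single-threaded server that drains pipelined requests off one connection and answers them strictly in arrival order, which is what RFC 2616 requires of a pipelining server. The class and method names here are made up for the example.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class PipelineOrderDemo {

    // Serves exactly `count` pipelined requests on one connection,
    // strictly in arrival order, then closes. Only sketches the ordering
    // rule -- no real HTTP parsing beyond "headers end at a blank line".
    static Thread startServer(ServerSocket server, int count) {
        Thread t = new Thread(() -> {
            try (Socket s = server.accept()) {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream(), StandardCharsets.US_ASCII));
                OutputStream out = s.getOutputStream();
                for (int i = 0; i < count; i++) {
                    // Consume one request: request line + headers up to the blank line.
                    String line;
                    while ((line = in.readLine()) != null && !line.isEmpty()) { }
                    String body = "response " + i;
                    out.write(("HTTP/1.1 200 OK\r\nContent-Length: " + body.length()
                            + "\r\n\r\n" + body).getBytes(StandardCharsets.US_ASCII));
                }
                out.flush();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        t.start();
        return t;
    }

    // Pipelines `count` GETs in a single write and returns everything
    // the server sent back, concatenated.
    public static String pipeline(int count) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread t = startServer(server, count);
            try (Socket client = new Socket("localhost", server.getLocalPort())) {
                StringBuilder req = new StringBuilder();
                for (int i = 0; i < count; i++) {
                    req.append("GET / HTTP/1.1\r\nHost: localhost\r\n\r\n");
                }
                client.getOutputStream()
                      .write(req.toString().getBytes(StandardCharsets.US_ASCII));
                StringBuilder replies = new StringBuilder();
                InputStream in = client.getInputStream();
                int c;
                while ((c = in.read()) != -1) {   // read until the server closes
                    replies.append((char) c);
                }
                t.join();
                return replies.toString();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pipeline(3));
    }
}
```

Running this, the replies always come back in the order the requests were sent ("response 0", "response 1", "response 2"), regardless of how the three GETs were batched on the wire.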



Source: https://stackoverflow.com/questions/5748897/pipelining-in-tomcat-parallel
