What is “backlog” in TCP connections?

走了就别回头了 · 2020-12-13 00:30

Below, you see a python program that acts as a server listening for connection requests to port 9999:

# server.py 
import socket                             
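
(The listing above appears to have been cut off. A minimal sketch of what such a server might look like is given below, assuming the standard socket module and an illustrative backlog of 5 passed to listen; this is a reconstruction, not the original code.)

# server_sketch.py  (illustrative reconstruction, not the original listing)
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(('', 9999))               # listen on port 9999, all interfaces
s.listen(5)                      # the backlog argument discussed below

while True:
    conn, addr = s.accept()      # take one completed connection off the queue
    print('Got connection from', addr)
    conn.sendall(b'Thank you for connecting\n')
    conn.close()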


        
3 Answers
  •  小蘑菇 · 2020-12-13 01:00

    The parameter appears to limit the number of incoming connection requests the server will keep queued, on the assumption that, even under high load, it can serve the current request plus a small backlog of pending ones in a reasonable amount of time. Here's a good paragraph I came across that lends some context around this argument...

    Finally, the argument to listen tells the socket library that we want it to queue up as many as 5 connect requests (the normal max) before refusing outside connections. If the rest of the code is written properly, that should be plenty.

    https://docs.python.org/3/howto/sockets.html#creating-a-socket

    There's text earlier in the document suggesting that clients should dip in and out of a server, so you don't build up a long queue of requests in the first place...

    When the connect completes, the socket s can be used to send in a request for the text of the page. The same socket will read the reply, and then be destroyed. That’s right, destroyed. Client sockets are normally only used for one exchange (or a small set of sequential exchanges).
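
    To make that one-exchange pattern concrete, a minimal client sketch might look like the following; it assumes the illustrative server above is running locally on port 9999, which goes beyond what the quoted HOWTO itself shows.

    import socket

    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(('localhost', 9999))   # one connect ...
    reply = c.recv(1024)             # ... one exchange ...
    c.close()                        # ... and the client socket is destroyed
    print('server said:', reply)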

    The linked HOWTO guide is a must-read when getting up to speed on network programming with sockets; it really brings the big-picture themes into focus. How the server socket manages this queue at the implementation level is another story, and probably an interesting one. The motivation for the design is more telling, though: without some bound on the queue, the barrier to mounting a denial-of-service attack would be very low.

    As for why the minimum value is 0 rather than 1, keep in mind that 0 is still a valid value: it means queue up nothing. That is essentially to say there is no request queue at all; connections are simply rejected outright while the server socket is busy serving a connection. The notion of a currently active connection being served should be kept in mind here, since it's the only reason a queue is of interest in the first place.

    This brings us to the next question, that of a preferred value. It is ultimately a design decision: do you want to queue up requests or not? If so, you might pick a value you feel is warranted based on expected traffic and known hardware resources; I doubt there's anything formulaic about it. It also makes me wonder how lightweight a pending request really is, and whether queuing things up on the server carries much of a penalty at all.
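
    When picking a value, one practical reference point is socket.SOMAXCONN, the maximum the platform advertises; on Linux at least, anything larger is silently truncated by the kernel anyway. A rough sketch (the 64 is an arbitrary example, not a recommendation):

    import socket

    desired = 64                              # hypothetical guess based on expected traffic
    backlog = min(desired, socket.SOMAXCONN)  # don't ask for more than the platform advertises
    print('SOMAXCONN =', socket.SOMAXCONN, '-> using backlog', backlog)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('', 9999))
    s.listen(backlog)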


    UPDATE

    I wanted to substantiate the comments from user207421 and went to look up the Python source. Unfortunately, this level of detail is not found in socket.py itself, but rather in socketmodule.c#L3351-L3382 as of hash 530f506.

    The comments in the C source are quite illuminating. I'll copy it verbatim below and single out the clarifying remarks here first...

    We try to choose a default backlog high enough to avoid connection drops for common workloads, yet not too high to limit resource usage.

    and

    If backlog is specified, it must be at least 0 (if it is lower, it is set to 0); it specifies the number of unaccepted connections that the system will allow before refusing new connections. If not specified, a default reasonable value is chosen.

    /* s.listen(n) method */
    
    static PyObject *
    sock_listen(PySocketSockObject *s, PyObject *args)
    {
        /* We try to choose a default backlog high enough to avoid connection drops
         * for common workloads, yet not too high to limit resource usage. */
        int backlog = Py_MIN(SOMAXCONN, 128);
        int res;
    
        if (!PyArg_ParseTuple(args, "|i:listen", &backlog))
            return NULL;
    
        Py_BEGIN_ALLOW_THREADS
        /* To avoid problems on systems that don't allow a negative backlog
         * (which doesn't make sense anyway) we force a minimum value of 0. */
        if (backlog < 0)
            backlog = 0;
        res = listen(s->sock_fd, backlog);
        Py_END_ALLOW_THREADS
        if (res < 0)
            return s->errorhandler();
        Py_RETURN_NONE;
    }
    
    PyDoc_STRVAR(listen_doc,
    "listen([backlog])\n\
    \n\
    Enable a server to accept connections.  If backlog is specified, it must be\n\
    at least 0 (if it is lower, it is set to 0); it specifies the number of\n\
    unaccepted connections that the system will allow before refusing new\n\
    connections. If not specified, a default reasonable value is chosen.");
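
    Note the "|i:listen" format string above: the backlog is optional at the Python level, so in recent Python 3 versions you can call listen() with no argument at all and let the min(SOMAXCONN, 128) default from the C code apply. For example:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('', 9999))
    s.listen()    # no backlog given: CPython falls back to min(SOMAXCONN, 128)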
    

    Going further down the rabbit hole into the externals, I traced the following call from socketmodule...

     res = listen(s->sock_fd, backlog);
    

    This source lives over in socket.h and socket.c; Linux is used here as a concrete platform backdrop for discussion purposes.

    /* Maximum queue length specifiable by listen.  */
    #define SOMAXCONN   128
    extern int __sys_listen(int fd, int backlog);
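
    That 128 is only the compile-time constant exposed through the header; on Linux the effective cap is the net.core.somaxconn sysctl, which the kernel uses to silently truncate larger requests. A Linux-specific way to compare the two:

    import socket

    print('compile-time SOMAXCONN:', socket.SOMAXCONN)

    # Linux-specific: the kernel's runtime cap, which may differ from the header value
    with open('/proc/sys/net/core/somaxconn') as f:
        print('runtime net.core.somaxconn:', f.read().strip())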
    

    There's more info to be found in the man page

    http://man7.org/linux/man-pages/man2/listen.2.html

    int listen(int sockfd, int backlog);
    

    And the corresponding description:

    listen() marks the socket referred to by sockfd as a passive socket, that is, as a socket that will be used to accept incoming connection requests using accept(2).

    The sockfd argument is a file descriptor that refers to a socket of type SOCK_STREAM or SOCK_SEQPACKET.

    The backlog argument defines the maximum length to which the queue of pending connections for sockfd may grow. If a connection request arrives when the queue is full, the client may receive an error with an indication of ECONNREFUSED or, if the underlying protocol supports retransmission, the request may be ignored so that a later reattempt at connection succeeds.
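
    A small experiment (my own sketch, not from the man page) makes the queue-full behaviour visible: listen with a tiny backlog, never call accept, and keep connecting. Depending on the platform, the surplus connects may fail with ECONNREFUSED, time out while the client quietly retransmits, or even succeed, since some kernels admit slightly more than the requested backlog.

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('127.0.0.1', 9999))
    srv.listen(1)                # tiny backlog, and accept() is never called

    clients = []
    for i in range(5):
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.settimeout(2)          # don't wait forever if the SYN is simply dropped
        try:
            c.connect(('127.0.0.1', 9999))
            print(i, 'connected (queued by the kernel)')
        except (ConnectionRefusedError, socket.timeout) as exc:
            print(i, 'failed:', type(exc).__name__)
        clients.append(c)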

    One additional source identifies the kernel as being responsible for the backlog queue.

    The second argument backlog to this function specifies the maximum number of connections the kernel should queue for this socket.

    They briefly go on to describe how the unaccepted / queued connections are partitioned in the backlog (a useful figure is included in the linked source).

    To understand the backlog argument, we must realize that for a given listening socket, the kernel maintains two queues:

    An incomplete connection queue, which contains an entry for each SYN that has arrived from a client for which the server is awaiting completion of the TCP three-way handshake. These sockets are in the SYN_RCVD state (Figure 2.4).

    A completed connection queue, which contains an entry for each client with whom the TCP three-way handshake has completed. These sockets are in the ESTABLISHED state (Figure 2.4).

    When an entry is created on the incomplete queue, the parameters from the listen socket are copied over to the newly created connection. The connection creation mechanism is completely automatic; the server process is not involved.
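
    The "server process is not involved" point is easy to see in code: as long as there is room in the backlog, a client's connect() completes even though the application has not yet called accept(), because the kernel finishes the three-way handshake on its own. Again, this is my own sketch, with the usual platform caveats:

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('127.0.0.1', 9999))
    srv.listen(5)                     # room for a few unaccepted connections

    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(('127.0.0.1', 9999))    # completes: the kernel did the handshake
    c.sendall(b'queued but not yet accepted')
    print('client is connected to', c.getpeername())

    # Only now does the application pull the connection off the completed queue.
    conn, addr = srv.accept()
    print('server accepted from', addr, 'and reads:', conn.recv(1024))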
