Is there a way to block on a socket send() until we get the ack for that packet?

Backend · open · 7 answers · 1340 views
盖世英雄少女心 2020-12-31 12:47

Or do I have to implement it at the application level?

7 answers
  • 2020-12-31 13:34

    TCP will in general require you to synchronize the receiver and sender at the application level. Tweaking SO_SNDBUF and/or TCP_NODELAY alone is unlikely to solve the problem completely, because the amount of data that can be "in flight" before send() blocks is more or less equal to the sum of:

    1. The data in the transmit side's send buffer, including small data fragments being delayed by Nagle's algorithm,
    2. The amount of data carried in unacknowledged in-flight packets, which varies with the congestion window (CWIN) and receive window (RWIN) sizes. The TCP sender continuously tunes the congestion window size to network conditions as TCP transitions between slow-start, congestion avoidance, fast-recovery, and fast-retransmit modes. And,
    3. Data in the receive side's receive buffer, for which the receiver's TCP stack will have already sent an ACK, but that the application has not yet seen.

    Put another way, after the receiver stops reading data from the socket, send() will only block once:

    1. The receiver's TCP receive buffer fills and TCP stops ACKing,
    2. The sender transmits unACKed data up to the congestion or receive window limit, and
    3. The sender's TCP send buffer fills or the sender application requests a send buffer flush.

    The goal of the algorithms used in TCP is to create the effect of a flowing stream of bytes rather than a sequence of packets. In general TCP tries to hide, as much as possible, the fact that the transmission is quantized into packets at all, and most socket APIs reflect that. One reason for this is that sockets may not be implemented on top of TCP (or indeed even IP) at all: consider a Unix domain socket, which uses the same API.

    Attempting to rely on TCP's underlying implementation details for application behavior is generally not advisable. Stick to synchronizing at the application layer.

    If latency is a concern in the situation where you're doing the synchronization, you may also want to read about interactions between Nagle's algorithm and delayed ACK that can introduce unnecessary delays in certain circumstances.

  • 2020-12-31 13:35

    Why not just use a blocking socket?

    This may be a bit dated, but here is some explanation of blocking/non-blocking and overlapped I/O.

    http://support.microsoft.com/kb/181611

    It would help to know which language and OS you are using, BTW, so we could show better code snippets.

  • 2020-12-31 13:37

    The ack for the packet happens at the transport layer (well below the application layer). You are not even guaranteed that your entire buffer will travel in a single packet on the network. What is it you are trying to do?

  • 2020-12-31 13:43

    I faced the same problem a few weeks ago while implementing a VoIP server. After spending several days on it, I came up with a solution. As many others have mentioned, there is no direct system call to do the job. Instead,

    1. After sending a packet, check whether the ACK has been received by querying the socket with the TCP_INFO option.
    2. If the ACK has not arrived yet, wait a few milliseconds and check again.

    This can continue until a timeout is reached. You have to implement it as a wrapper around the send() call. You will need the tcp_info struct from <netinet/tcp.h>; it is the data structure holding information about your TCP connection.

    Here is the code (sock_fd and write_mutx are assumed to be defined elsewhere, e.g. as members of the surrounding class):

    #include <netinet/tcp.h>   // tcp_info, TCP_INFO
    #include <sys/socket.h>    // getsockopt, send
    #include <chrono>
    #include <iostream>
    #include <mutex>
    #include <thread>

    int blockingSend(const char *msg, int msglen, int timeout) {

        std::lock_guard<std::mutex> lock(write_mutx);

        int sent = send(sock_fd, msg, msglen, 0);

        tcp_info info;
        socklen_t len = sizeof(info);  // getsockopt takes a pointer to the length
        auto expireAt = std::chrono::system_clock::now()
                      + std::chrono::milliseconds(timeout);
        do {
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
            getsockopt(sock_fd, SOL_TCP, TCP_INFO, (void *)&info, &len);

            // wait till all segments are acknowledged or the time expires
        } while (info.tcpi_unacked > 0
                 && expireAt > std::chrono::system_clock::now());

        if (info.tcpi_unacked > 0) {
            std::cerr << "no. of unacked segments: " << info.tcpi_unacked << std::endl;
            return -1;
        }
        return sent;
    }


    Here the tcpi_unacked member holds the number of unacknowledged segments on the connection. If you read it soon after the send() call, it will equal the number of segments just sent, and it decreases toward zero as ACKs arrive. You therefore need to poll tcpi_unacked periodically until it reaches zero. If the connection is half-open you will never receive ACKs, causing an endless loop, which is why a timeout mechanism like the one above is needed.

    Even though this question was asked long ago, this answer may help someone facing the same problem. I must mention that there could be more accurate solutions than this one; since I am a newbie to systems programming and C/C++, this is what I could come up with.

  • 2020-12-31 13:43

    If you use setsockopt() to lower SO_SNDBUF to a value only large enough to send one packet, then the next send() on that socket should block until the previous packet is acknowledged. However, according to tcp(7), the socket buffer size must be set prior to listen()/connect().

  • 2020-12-31 13:51

    If you are talking about TCP, then no - no socket API I've seen allows you to do this.

    You need to implement the ack in your application protocol if you need to be sure that the other end has received (and possibly processed) your data.
