Is there a way to flush a POSIX socket?

误落风尘 2020-11-30 04:20

Is there a standard call for flushing the transmit side of a POSIX socket all the way through to the remote end, or does this need to be implemented as part of the user-level protocol?

8 answers
  • 2020-11-30 05:04

    I think it would be extremely difficult, if not impossible, to implement correctly. What is the meaning of "flush" in this context? Bytes transmitted to the network? Bytes acknowledged by the receiver's TCP stack? Bytes passed on to the receiver's user-mode app? Bytes completely processed by the user-mode app?

    Looks like you need to do it at the app level...

  • 2020-11-30 05:10

    You could set the socket option SO_LINGER with a certain timeout and then close the socket, in order to make sure all data has been sent (or to detect a failure to do so) when the connection is closed. Other than that, TCP is a "best effort" protocol, and it doesn't provide any real guarantee that data will ever actually reach the destination (in contrast to what some seem to believe); it just tries its best to get it delivered in the correct order and as soon as possible.
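
    For illustration, here is a minimal sketch of that idea, assuming sock is an already-connected TCP socket; the helper name close_with_linger is made up for this example:

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical helper: enable SO_LINGER so that close() blocks until
     * queued data has been transmitted or the timeout expires, then close
     * the socket. */
    static int close_with_linger(int sock, int seconds)
    {
        struct linger lin = { .l_onoff = 1, .l_linger = seconds };

        if (setsockopt(sock, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin)) == -1)
            perror("setsockopt(SO_LINGER)");

        /* With lingering enabled, close() reports a failure if the data
         * could not be delivered within the timeout. */
        return close(sock);
    }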

  • 2020-11-30 05:18

    For Unix-domain sockets wrapped in a stdio stream with fdopen(), you can use fflush(), but I'm thinking you probably mean network sockets. There isn't really a concept of flushing those. The closest things are:

    1. At the end of your session, calling shutdown(sock, SHUT_WR) to close out writes on the socket (see the sketch after this list).

    2. On TCP sockets, disabling the Nagle algorithm with sockopt TCP_NODELAY, which is generally a terrible idea that will not reliably do what you want, even if it seems to take care of it on initial investigation.

    It's very likely that handling whatever issue calls for a 'flush' at the user-protocol level is going to be the right thing.
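
    A minimal sketch of option 1, assuming sock is a connected TCP socket on which all application data has already been written (the helper name finish_writing is made up for illustration):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Hypothetical end-of-session step: close the write side only.
     * Any data still queued is delivered, and the peer sees EOF once
     * it has read everything; the socket can still be read from. */
    void finish_writing(int sock)
    {
        if (shutdown(sock, SHUT_WR) == -1)
            perror("shutdown");
    }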

  • 2020-11-30 05:21

    TCP gives only best-effort delivery, so having all the bytes leave Machine A is asynchronous with respect to their having all been received at Machine B. The TCP/IP protocol stack knows, of course, but I don't know of any way to interrogate the TCP stack to find out whether everything sent has been acknowledged.

    By far the easiest way to handle the question is at the application level. Open a second TCP socket to act as a back channel and have the remote partner send you an acknowledgement that it has received the info you want. It will cost double but will be completely portable and will save you hours of programming time.
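
    As a rough sketch of that application-level approach (the descriptor names data_sock and ack_sock and the one-byte acknowledgement are assumptions for this example, not part of the original answer):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Hypothetical application-level acknowledgement: after sending a
     * message on data_sock, block on the back-channel socket ack_sock
     * until the peer confirms it has received and processed the data. */
    int wait_for_ack(int ack_sock)
    {
        char ack;

        /* The peer is assumed to send a single byte when it is done. */
        ssize_t n = recv(ack_sock, &ack, 1, 0);
        if (n <= 0) {
            perror("recv(ack)");
            return -1;
        }
        return 0;
    }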

  • 2020-11-30 05:23

    In RFC 1122, the name of the thing you are looking for is "PUSH". However, there does not seem to be a TCP API implementation that exposes "PUSH". Alas, no luck.

    Some answers and comments deal with the Nagle algorithm. Most of them seem to assume that the Nagle algorithm delays each and every send. This assumption is not correct. Nagle delays sending only when at least one of the previous packets has not yet been acknowledged (http://www.unixguide.net/network/socketfaq/2.11.shtml).

    To put it differently: TCP will send the first packet (of a series of packets) immediately. Only if the connection is slow and your computer does not receive a timely acknowledgement will Nagle delay sending subsequent data until one of the following occurs (whichever comes first):

    • a time-out is reached or
    • the last unacknowledged packet is acknowledged or
    • your send buffer is full or
    • you disable Nagle or
    • you shut down the sending direction of your connection

    A good mitigation is to avoid sending data in small subsequent pieces as far as possible. This means: if your application calls send() more than once to transmit a single compound request, try to rewrite your application. Assemble the compound request in user space, then call send(). Once. This also saves on context switches (which are much more expensive than most user-space operations).
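
    A small sketch of that pattern (the buffer size and the helper name send_compound are assumptions, not part of the original answer):

    #include <string.h>
    #include <sys/socket.h>

    /* Hypothetical helper: assemble a compound request (header + body)
     * in one user-space buffer and hand it to the kernel with a single
     * send() call, instead of one send() per piece. */
    int send_compound(int sock, const char *header, size_t hlen,
                      const char *body, size_t blen)
    {
        char buf[4096];                 /* assumed big enough for one request */

        if (hlen + blen > sizeof(buf))
            return -1;

        memcpy(buf, header, hlen);      /* assemble in user space... */
        memcpy(buf + hlen, body, blen);

        /* ...then cross into the kernel once. */
        if (send(sock, buf, hlen + blen, 0) != (ssize_t)(hlen + blen))
            return -1;
        return 0;
    }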

    Besides, when the send buffer contains enough data to fill a maximum-sized network packet, Nagle does not delay either. This means: if the data you have queued is big enough to fill a full-sized packet, TCP will send it as soon as possible, no matter what.

    To sum it up: Nagle is not the brute-force approach to reducing packet fragmentation that some might consider it to be. On the contrary, it seems to me to be a useful, dynamic and effective approach to keeping both a good response time and a good ratio between user data and header data. That being said, you should know how to handle it efficiently.

  • 2020-11-30 05:24

    What about setting TCP_NODELAY and then resetting it back? It could probably be done just before sending important data, or when we are done sending a message.

    /* While Nagle is enabled, these small writes may be coalesced and delayed. */
    send(sock, "notimportant", ...);
    send(sock, "notimportant", ...);
    send(sock, "notimportant", ...);
    int flag = 1;
    /* Disable Nagle; per the man page note below, this also flushes pending output. */
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));
    send(sock, "important data or end of the current message", ...);
    flag = 0;
    /* Re-enable Nagle for subsequent traffic. */
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));
    

    As the Linux tcp(7) man page says:

    TCP_NODELAY ... setting this option forces an explicit flush of pending output ...

    So it would probably be better to set it after the message, but I am not sure how it works on other systems.
