berkeley-sockets

Socket Shutdown: when should I use SocketShutdown.Both

做~自己de王妃 submitted on 2021-02-18 21:01:42
Question: I believe the shutdown sequence is as follows (as described here). The MSDN documentation (remarks section) reads: "When using a connection-oriented Socket, always call the Shutdown method before closing the Socket. This ensures that all data is sent and received on the connected socket before it is closed." This seems to imply that if I use Shutdown(SocketShutdown.Both), any data that has not yet been received may still be consumed. To test this: I continuously send data to the client (via
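In Berkeley-sockets terms (which Socket.Shutdown wraps), the usual graceful-close sequence is: stop sending, drain whatever the peer still has in flight, then close. A minimal C sketch, assuming fd is a connected TCP socket and that leftover data is simply discarded:

#include <sys/socket.h>
#include <unistd.h>

static void graceful_close(int fd)
{
    char buf[4096];

    /* SHUT_WR sends our FIN but leaves the receive side open;
       SHUT_RDWR (the analogue of SocketShutdown.Both) closes both. */
    shutdown(fd, SHUT_WR);

    /* Drain anything the peer already sent until it closes its side. */
    while (recv(fd, buf, sizeof(buf), 0) > 0)
        ;   /* discard (or process) the remaining bytes */

    close(fd);
}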

Why does a PF_PACKET RAW socket stop missing packets after “Wireshark” was launched?

会有一股神秘感。 submitted on 2021-01-28 03:53:14
Question: I need to receive incoming UDP packets using a RAW socket, which is opened with this code snippet:

static int fd;
char *iface = "eth0";

if ((fd = socket(PF_PACKET, SOCK_DGRAM, htons(ETH_P_IP))) < 0) {
    perror("socket");
}
if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, iface, strlen(iface)) < 0) {
    perror("setsockopt");
    exit(EXIT_FAILURE);
}

I send, say, 100 identical packets and try to receive and count them. I use recv(...) to do this. Only 93 packets are delivered, and then recv(...)
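For reference, a self-contained sketch of the same setup that also counts what arrives; "eth0" and the 2048-byte buffer are assumptions, and the program needs root/CAP_NET_RAW (Linux only):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>       /* htons */
#include <linux/if_ether.h>  /* ETH_P_IP */

int main(void)
{
    const char *iface = "eth0";   /* assumption: adjust to the NIC under test */
    char buf[2048];
    long count = 0;

    int fd = socket(PF_PACKET, SOCK_DGRAM, htons(ETH_P_IP));
    if (fd < 0) { perror("socket"); return EXIT_FAILURE; }

    if (setsockopt(fd, SOL_SOCKET, SO_BINDTODEVICE, iface, strlen(iface)) < 0) {
        perror("setsockopt(SO_BINDTODEVICE)");
        return EXIT_FAILURE;
    }

    /* Count every IP datagram the kernel hands us on this interface. */
    while (recv(fd, buf, sizeof(buf), 0) > 0)
        printf("received: %ld\n", ++count);

    return 0;
}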

What happens if one doesn't call POSIX's `recv` “fast enough”?

…衆ロ難τιáo~ submitted on 2020-05-06 11:04:06
Question: I want to account for a possible scenario where clients of my TCP/IP stream-socket service send data faster than my service manages to move it into its own buffers (application buffers, naturally) with recv and work with it. So what happens in such scenarios? Obviously, something beneath my service, which is a user application, has to receive the incoming stream and store it somewhere until I issue recv, right? Most certainly the operating system. I
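In broad terms, the operating system queues incoming TCP data in the per-socket kernel receive buffer until the application calls recv; once that buffer fills, TCP flow control (a shrinking, eventually zero, advertised window) makes the sender wait rather than lose data. A sketch, assuming fd is an already-connected TCP socket, that inspects and optionally enlarges that buffer:

#include <stdio.h>
#include <sys/socket.h>

static void show_rcvbuf(int fd)
{
    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);

    /* How much the kernel will hold between the wire and our recv() calls. */
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len) == 0)
        printf("kernel receive buffer: %d bytes\n", rcvbuf);

    /* Ask for a larger buffer; the kernel may clamp the value
       (on Linux, see net.core.rmem_max). 1 MiB is an arbitrary example. */
    int wanted = 1 << 20;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &wanted, sizeof(wanted)) < 0)
        perror("setsockopt(SO_RCVBUF)");
}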

Get remote address/IP - C Berkeley Sockets

ε祈祈猫儿з submitted on 2020-01-14 10:49:07
Question: If I have a connected socket file descriptor (obtained either by connect or by bind), type SOCK_STREAM, is it possible to get the remote address / IP address? I need to do this within a function where I have no data other than the socket file descriptor.

Answer 1: getpeername

Answer 2: See the getsockname() system call.

Source: https://stackoverflow.com/questions/4770127/get-remote-address-ip-c-berkeley-sockets
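A sketch of the getpeername approach from Answer 1, given nothing but the connected file descriptor (note that getsockname, from Answer 2, returns the local end of the connection rather than the remote one):

#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int print_peer(int fd)
{
    struct sockaddr_storage ss;
    socklen_t len = sizeof(ss);
    char ip[INET6_ADDRSTRLEN];

    if (getpeername(fd, (struct sockaddr *)&ss, &len) < 0) {
        perror("getpeername");
        return -1;
    }
    if (ss.ss_family == AF_INET) {
        struct sockaddr_in *a = (struct sockaddr_in *)&ss;
        inet_ntop(AF_INET, &a->sin_addr, ip, sizeof(ip));
        printf("peer %s:%d\n", ip, ntohs(a->sin_port));
    } else if (ss.ss_family == AF_INET6) {
        struct sockaddr_in6 *a6 = (struct sockaddr_in6 *)&ss;
        inet_ntop(AF_INET6, &a6->sin6_addr, ip, sizeof(ip));
        printf("peer [%s]:%d\n", ip, ntohs(a6->sin6_port));
    }
    return 0;
}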

Reading from a socket until certain character is in buffer

こ雲淡風輕ζ submitted on 2019-12-11 09:38:00
Question: I am trying to read from a socket into a buffer until a certain character is reached, using read(fd, buf, BUFFLEN). For example, the socket may receive two lots of information separated by a blank line in one read call. Is it possible to put the read call in a loop so it stops when it reaches this blank line, and then read the rest of the information later if it is required?

Answer 1: A simple approach would be to read a single byte at a time until the previous byte and the current byte are
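A sketch of that byte-at-a-time idea: keep issuing one-byte reads and stop once two consecutive newlines (an empty line) have been seen. The '\n\n' delimiter and the buffer handling are assumptions for illustration:

#include <unistd.h>
#include <sys/types.h>

/* Returns the number of bytes stored in buf (0 if the peer closed
   immediately), or -1 on a read error or an unusable buffer. */
static ssize_t read_until_blank_line(int fd, char *buf, size_t buflen)
{
    size_t used = 0;
    char prev = 0, cur;

    if (buflen == 0)
        return -1;

    while (used + 1 < buflen) {
        ssize_t n = read(fd, &cur, 1);
        if (n < 0)
            return -1;            /* read error */
        if (n == 0)
            break;                /* peer closed the connection */
        buf[used++] = cur;
        if (prev == '\n' && cur == '\n')
            break;                /* blank line: end of this block */
        prev = cur;
    }
    buf[used] = '\0';
    return (ssize_t)used;
}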

Binding Sockets to IPv6 Addresses

风格不统一 submitted on 2019-12-04 11:52:21
Question: I am trying to write a web server that listens on both IPv4 and IPv6 addresses. However, the code I originally wrote did not work. Then I found out that the IPv6 structures work for both IPv4 and IPv6. So now I use the IPv6 structures; however, only the IPv4 addresses work. This post, why can't i bind ipv6 socket to a linklocal address, said to add server.sin6_scope_id = 5; I did that, but it still does not accept IPv6 telnet connections. Any help would be greatly appreciated
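For comparison, a sketch of the usual single-socket dual-stack pattern: bind an AF_INET6 socket to the wildcard address and clear IPV6_V6ONLY so IPv4 clients arrive as IPv4-mapped addresses (support and defaults vary by OS). Port 8080 is an arbitrary example; sin6_scope_id only matters when binding to a link-local address, not to the wildcard:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>

int main(void)
{
    struct sockaddr_in6 server;
    int off = 0;

    int fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return EXIT_FAILURE; }

    /* 0 = accept both IPv6 and IPv4-mapped connections (where supported). */
    if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off)) < 0)
        perror("setsockopt(IPV6_V6ONLY)");

    memset(&server, 0, sizeof(server));
    server.sin6_family = AF_INET6;
    server.sin6_addr   = in6addr_any;     /* :: listens on every interface */
    server.sin6_port   = htons(8080);

    if (bind(fd, (struct sockaddr *)&server, sizeof(server)) < 0 ||
        listen(fd, 16) < 0) {
        perror("bind/listen");
        return EXIT_FAILURE;
    }
    /* accept() loop elided */
    return 0;
}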

difference between “address in use” with bind() in Windows and on Linux - errno=98

大兔子大兔子 submitted on 2019-11-29 12:46:42
I have a small TCP server that listens on a port. While debugging, it's common for me to CTRL-C the server in order to kill the process. On Windows I'm able to restart the service quickly and the socket can be rebound. On Linux I have to wait a few minutes before bind() returns with success; when bind() fails it returns errno=98, address in use. I'd like to better understand the differences in the implementations. Windows sure is more friendly to the developer, but I kind of doubt Linux is doing the 'wrong thing'. My best guess is Linux is waiting until all possible clients have detected the
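The minutes-long wait is typically the old endpoint sitting in TCP's TIME_WAIT state; on Linux the usual way to let a restarted server rebind immediately is to set SO_REUSEADDR before bind. A sketch, with port 5000 as an arbitrary example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>

int main(void)
{
    struct sockaddr_in addr;
    int on = 1;

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return EXIT_FAILURE; }

    /* Must be set before bind() for the rebind to be allowed. */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0)
        perror("setsockopt(SO_REUSEADDR)");

    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5000);

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");               /* errno 98 = EADDRINUSE on Linux */
        return EXIT_FAILURE;
    }
    listen(fd, 16);
    return 0;
}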

How to ignore your own broadcast UDP packets

て烟熏妆下的殇ゞ submitted on 2019-11-28 12:11:06
For the following I'm assuming one network card. I have a component of my program that is designed to let others in the subnet know of its existence. For this, I've implemented a solution where, whenever the program starts up (and periodically afterwards), it sends a broadcast to INADDR_BROADCAST; whoever listens on the required port will remember where it came from for later use. The problem with this is that I don't want to remember my own broadcasts. I thought that in theory this would be easy to do: simply find out the local IP and compare it to what you get in recvfrom. However, I've found
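One way to realise the "compare against my own address" idea is to enumerate every local IPv4 address with getifaddrs and check the recvfrom source against that list; note this filters datagrams from any process on the same host, not just this one. A sketch (Linux/BSD; is_own_address is a hypothetical helper name):

#include <stdbool.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Returns true if 'src' (as filled in by recvfrom) is one of our own
   interface addresses. */
static bool is_own_address(const struct sockaddr_in *src)
{
    struct ifaddrs *ifa_list = NULL, *ifa;
    bool own = false;

    if (getifaddrs(&ifa_list) < 0)
        return false;

    for (ifa = ifa_list; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL || ifa->ifa_addr->sa_family != AF_INET)
            continue;
        const struct sockaddr_in *local =
            (const struct sockaddr_in *)ifa->ifa_addr;
        if (local->sin_addr.s_addr == src->sin_addr.s_addr) {
            own = true;
            break;
        }
    }
    freeifaddrs(ifa_list);
    return own;
}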
