I have a single client that needs to connect to my main server using socket s1. The client must keep retrying the connection to the main server on s1 while, at the same time, connecting to my secondary server and repeatedly sending it "trying" messages. Is it a good idea to create two sockets, reuse the port, and bind() both sockets, or is there a better way to achieve this? This is client-side code using C sockets. Thanks.
Answer 1:
If your program is a client to multiple servers, use one socket per server. You don't need bind() at all for a client socket; just connect().
Answer 2:
I assume you are using TCP sockets (aren't you?). You need one socket per connection. Reusing a port is not important here, because your application is the client, i.e. the side that initiates the connection; any outbound port will do.
Answer 3:
Because you can only call connect(2) once per stream-oriented socket, you really must use at least two sockets to make two simultaneous connections (or connection attempts).
You don't need to bind(2) anything on client ports, except in strange cases. (I'm thinking of the Sun RPC portmapper daemon, but thankfully it's been nearly a decade since I've cared about it. Also, rlogin needed to bind(2) as a client when using the host-authentication method, which was horrible.)