Is it better to use POSIX message queues or Unix domain sockets for local IPC?
I have worked with Unix sockets between machines (not domain sockets) before.
Is one better than the other, or is it a matter of programming familiarity? Or perhaps it depends on the application being created?
Comparing SysV message queues with UNIX domain datagram sockets, these are the major differences I'm aware of:
You can poll() a socket, but you can't poll() a message queue (see the first sketch after this list).
A message queue is global and usually requires some administrative involvement: cleaning up old, hanging SysV resources is one of the many daily sysadmin routines. The semantics of UNIX domain sockets are much simpler, and applications can generally manage them entirely internally, without sysadmin involvement.
(?) A message queue is persistent and might retain messages from old sessions. (I can't recall that bit precisely, but IIRC it happened to me more than once.)
Looking at man msgrcv, I do not see an analogue of the socket's MSG_PEEK. It is rarely needed, but at times comes in quite handy.
Most of the time, users prefer symbolic names in configuration, not numeric key IDs. The lack of symbolic keys is, IMO, quite a grave oversight on the part of the SysV interface designers.
As with all SysV resources, managing them is the major PITA. If you let the system decide the message queue ID, then you have to take care of sharing it properly with other applications (and you also have to tell the admins somehow that the ID eventually has to be removed). If you allow the key for the message queue to be configured, then you might run into the trivial problem that the ID is already used by some other application, or is a remnant of a previous run (see the second sketch after this list). Seeing servers rebooted only because they ran out of SysV resources is pretty common.
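A minimal sketch of the poll() point, not from the original post (the path /tmp/demo.sock is made up): the socket descriptor drops straight into a pollfd set, and MSG_PEEK works on it too, while a SysV queue ID returned by msgget() is not a file descriptor at all, so there is nothing you could put into the pollfd array for it.

    /* Watch a UNIX domain datagram socket with poll(). */
    #include <poll.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
        struct sockaddr_un addr;
        memset(&addr, 0, sizeof addr);
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/demo.sock", sizeof addr.sun_path - 1);
        unlink(addr.sun_path);           /* the app cleans up after itself */
        bind(fd, (struct sockaddr *)&addr, sizeof addr);

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        if (poll(&pfd, 1, 5000) > 0 && (pfd.revents & POLLIN)) {
            char buf[256];
            /* MSG_PEEK looks at the datagram without consuming it;
               msgrcv() offers no equivalent. */
            ssize_t n = recv(fd, buf, sizeof buf, MSG_PEEK);
            printf("peeked at %zd bytes\n", n);
        }
        close(fd);
        unlink(addr.sun_path);
        return 0;
    }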
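And a sketch of the key-management problem, with an arbitrary ftok() path and project ID: the key is a bare number, and passing IPC_EXCL is the only way to notice that it collides with another application's queue or with a stale remnant of a previous run.

    #include <errno.h>
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    int main(void)
    {
        key_t key = ftok("/etc/hostname", 'Q');  /* a number, not a name */
        if (key == (key_t)-1) { perror("ftok"); return 1; }

        /* IPC_EXCL makes a collision visible instead of silently
           attaching to someone else's (or a stale) queue. */
        int id = msgget(key, IPC_CREAT | IPC_EXCL | 0600);
        if (id == -1 && errno == EEXIST)
            fprintf(stderr, "key 0x%lx already in use -- stale queue?\n",
                    (unsigned long)key);
        else if (id != -1)
            msgctl(id, IPC_RMID, NULL);          /* clean up */
        return 0;
    }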
All in all, I avoid SysV resources when possible: under the most common circumstances, the lack of poll() support alone is a deal breaker.
However, the client doesn't wait for an "it's done" response -- although they do want to know whether their request has been received or not.
That is a common dilemma of transactional processing. The general answer is (as in RDBMSs) that you can't: after a communication interruption (a crash or whatever), the application has to check for itself whether the request was already processed or not.
From that I can tell that TCP would probably be a better choice. The client sends the request and declares it finished only when it gets a positive response from the server. The server, unless it is able to send the response to the client, has to roll back the transaction.
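A sketch of the client side of that scheme (the host, port, and the one-byte '+' ack are made up for illustration): the client treats the request as delivered only once the positive response arrives; if the connection dies first, it cannot know what happened and must verify for itself before retrying, as said above.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Returns 0 when the server acknowledged the request, -1 otherwise. */
    int send_request(const char *req)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv = { .sin_family = AF_INET,
                                   .sin_port = htons(5000) };
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);
        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) == -1) {
            close(fd);
            return -1;                    /* never reached the server */
        }
        if (send(fd, req, strlen(req), 0) == -1) { close(fd); return -1; }

        char ack;
        ssize_t n = recv(fd, &ack, 1, 0); /* block for the 1-byte ack */
        close(fd);
        /* No ack (crash, cut connection): the client cannot tell whether
           the request was processed and has to check on its own. */
        return (n == 1 && ack == '+') ? 0 : -1;
    }

    int main(void)
    {
        if (send_request("do-something\n") == 0)
            puts("request acknowledged");
        else
            puts("not acknowledged -- must verify before retrying");
        return 0;
    }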