zeromq

How to protect ZeroMQ Request Reply pattern against potential drops of messages?

放肆的年华 submitted on 2019-11-29 10:58:20
I'm trying to implement a ZeroMQ pattern on the TCP layer between a C# application and distributed Python servers. I've got a version working with the request-reply REQ/REP pattern, and it seems relatively stable when testing on localhost. However, in testing I've debugged a few situations where I accidentally sent multiple requests before receiving a reply, which apparently is not acceptable. In practice the network will likely have lots of dropped packets, and I suspect that I'll be dropping lots of replies and/or be unable to send requests. 1) Is there a way to reset the connection between
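A common remedy (the ZeroMQ Guide's "Lazy Pirate" pattern) is to time out a pending reply and rebuild the REQ socket, since a REQ socket that has sent without receiving is stuck in its state machine. A minimal Python (pyzmq) sketch; the endpoint, timeout, and retry count are illustrative assumptions:

```python
import zmq

REQUEST_TIMEOUT_MS = 2500                 # assumption: tune for your network
REQUEST_RETRIES = 3
SERVER_ENDPOINT = "tcp://localhost:5555"  # hypothetical server address

def reliable_request(ctx, payload, endpoint=SERVER_ENDPOINT,
                     retries=REQUEST_RETRIES, timeout=REQUEST_TIMEOUT_MS):
    """Send one request; on timeout, discard and recreate the REQ socket."""
    for _ in range(retries):
        sock = ctx.socket(zmq.REQ)
        sock.setsockopt(zmq.LINGER, 0)    # don't block on close
        sock.connect(endpoint)
        sock.send(payload)
        if sock.poll(timeout, zmq.POLLIN):
            reply = sock.recv()
            sock.close()
            return reply
        sock.close()                      # abandon the stuck REQ socket
    return None                           # server presumed down after retries
```

The C# side (e.g. with NetMQ) follows the same shape: poll with a deadline, then dispose and recreate the request socket instead of sending twice on it.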

ActiveMQ、RabbitMQ、ZeroMQ、Kafka、RocketMQ选型

╄→гoц情女王★ submitted on 2019-11-29 10:34:36
There are many MQ products on the market, e.g. ActiveMQ, RabbitMQ, ZeroMQ, Kafka and RocketMQ; which one fits best? RabbitMQ: support for message backlog is poor; when a large number of messages pile up, RabbitMQ's performance drops sharply. It can process tens of thousands to over a hundred thousand messages per second. RabbitMQ is written in Erlang, which makes secondary development difficult. It is one of the most popular message brokers. RocketMQ: response latency is in the millisecond range in most cases, suitable for online business scenarios. Integration with and compatibility of the surrounding ecosystem is somewhat weaker. It supports transactional messages. RocketMQ can process roughly hundreds of thousands of messages per second. Kafka: very well supported in the big-data and stream-computing fields. Kafka is developed in Scala and Java. It can process hundreds of thousands of messages per second, and its peak throughput can exceed 20 million messages per second. The latency of synchronous send/receive is relatively high, so it is less suitable for online business scenarios. ActiveMQ: development has gone off track. ZeroMQ: not a complete message-queue product. Source: https://my.oschina.net/u/4178242/blog/3105691

Compile C lib for iPhone

爷,独闯天下 submitted on 2019-11-29 07:51:36
I'm trying to compile the ZeroMQ C binding in order to be able to use it on iPhone; here are my configure options: ./configure --host=arm-apple-darwin --enable-static=yes --enable-shared=no CC=/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/arm-apple-darwin10-gcc-4.2.1 CFLAGS="-pipe -std=c99 -Wno-trigraphs -fpascal-strings -O0 -Wreturn-type -Wunused-variable -fmessage-length=0 -fvisibility=hidden -miphoneos-version-min=3.1.2 -gdwarf-2 -mthumb -I/Library/iPhone/include -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS4.0.sdk -mdynamic-no-pic" CPP=/Developer/Platforms

“Server” to “Server” ZeroMQ communication

可紊 submitted on 2019-11-29 05:11:06
I want to build a system that has the following architecture:

+------------------+          +------------------+
| App1. 0mq client | <------> | App2. 0mq server |
+------------------+          +------------------+

where App2 is a ZeroMQ server and a black box, and App1 is a ZeroMQ client, but is in fact a frontend server. The frontend server will process some requests from the clients and then communicate with the App2 server. Given that: At any point in time any of the "servers" can go down or be restarted. I want to start any of the apps, even if the other app is not running. If App1 is started
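One ZeroMQ property helps directly with the start-in-any-order requirement: zmq_connect() succeeds even when the peer is absent, and reconnection after a restart is automatic. A hedged pyzmq sketch of the App1 side (the endpoint and HWM value are illustrative); DEALER is used rather than REQ so App1 is not locked into strict send/receive alternation across App2 restarts:

```python
import zmq

def start_app1(endpoint="tcp://127.0.0.1:5555"):  # hypothetical App2 address
    """Create App1's socket; works whether or not App2 is up yet."""
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.DEALER)
    sock.setsockopt(zmq.SNDHWM, 1000)  # bound the queue while App2 is down
    sock.setsockopt(zmq.LINGER, 0)     # don't hang the process on shutdown
    sock.connect(endpoint)             # no error even if nothing is bound yet
    return sock

app1 = start_app1()
app1.send_multipart([b"", b"hello App2"])  # queued until App2 binds
```

ZeroMQ will not tell App1 that App2 died, so for liveness detection you still need application-level heartbeats (e.g. the Guide's Paranoid Pirate pattern).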

How does zeromq work together with SSL?

风格不统一 submitted on 2019-11-29 03:11:56
I am considering using ZeroMQ as the messaging layer between my applications. At least in some cases I want the communication to be secure, and I am thinking about SSL. Is there some standard way to SSL-enable ZeroMQ? As far as I understand, it doesn't support it out of the box. It would be nice if I just had a parameter when connecting to a socket (bool: useSsl) :) Any ideas? Understanding that this is not really an answer to your question, I'm going to be encrypting the messages directly with RSA, before sending them with 0mq. In the absence of a more integrated encryption method that is
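This Q&A predates it, but since libzmq 4.x ZeroMQ ships built-in CURVE encryption (not SSL/TLS), which covers the "secure socket" need without hand-rolled RSA. A pyzmq sketch, assuming your libzmq build includes CURVE support (the bundled builds in pyzmq wheels do):

```python
import zmq

ctx = zmq.Context()
server_public, server_secret = zmq.curve_keypair()
client_public, client_secret = zmq.curve_keypair()

server = ctx.socket(zmq.REP)
server.curve_secretkey = server_secret
server.curve_publickey = server_public
server.curve_server = True                 # marks this end as the CURVE server
port = server.bind_to_random_port("tcp://127.0.0.1")

client = ctx.socket(zmq.REQ)
client.curve_secretkey = client_secret
client.curve_publickey = client_public
client.curve_serverkey = server_public     # client must know the server's key
client.connect("tcp://127.0.0.1:%d" % port)

client.send(b"secret payload")             # encrypted on the wire
received = server.recv()
```

Distributing the server's public key out of band replaces the certificate step you would have with SSL; for per-client authentication you would additionally install a ZAP handler (e.g. via zmq.auth).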

Recovering from zmq.error.ZMQError: Address already in use

风流意气都作罢 submitted on 2019-11-29 02:57:39
Question: I hit Ctrl-C while running a PAIR pattern (non-blocking client/server) connection with ZMQ. Later, when I tried running the REQ-REP (blocking client, single server) pattern, I kept getting the Address already in use error. I have tried running netstat with netstat -ltnp | grep :<my port> but that does not list any process. So who exactly is using this address? Also, how does one gracefully shut down socket connections like these? Answer 1: Question 1: If you do sudo netstat -ltnp , on a
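On the graceful-shutdown half of the question: the usual cause is a socket (or its context) that was never closed, so the port stays held until the process exits. Closing with LINGER=0 inside a finally block releases the address immediately, even when Ctrl-C interrupts the serve loop. A pyzmq sketch:

```python
import zmq

def run_server_once():
    """Bind, (would) serve, and release the port even on interrupt."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.setsockopt(zmq.LINGER, 0)  # drop unsent messages on close
    port = sock.bind_to_random_port("tcp://127.0.0.1")
    try:
        pass  # recv/send loop goes here; KeyboardInterrupt lands in finally
    finally:
        sock.close()                # release the port
        ctx.term()                  # returns promptly because LINGER is 0
    return port
```

After this returns, rebinding the same port succeeds at once; without the close, the address is freed only when the OS reaps the process.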

Why do operating systems limit file descriptors?

人盡茶涼 submitted on 2019-11-29 02:17:53
Question: I ask this question after trying my best to research the best way to implement a message queue server. Why do operating systems put limits on the number of open file descriptors a process and the global system can have? My current server implementation uses ZeroMQ and opens a subscriber socket for each connected WebSocket client. Obviously that single process will only be able to handle clients up to the fd limit. When I research the topic I find lots of info on how to raise
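For the practical half (what your process actually gets, and how to raise it): POSIX exposes the limits via getrlimit/setrlimit, and an unprivileged process may raise its soft limit up to the hard limit. A Python sketch (Unix only; the 4096 target is an arbitrary example value):

```python
import resource

def fd_limits():
    """Return the (soft, hard) limits on open file descriptors."""
    return resource.getrlimit(resource.RLIMIT_NOFILE)

def raise_fd_soft_limit(target):
    """Raise the soft fd limit toward `target`, capped at the hard limit."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    if hard != resource.RLIM_INFINITY:
        target = min(target, hard)          # can't exceed the hard limit
    resource.setrlimit(resource.RLIMIT_NOFILE, (max(soft, target), hard))
    return resource.getrlimit(resource.RLIMIT_NOFILE)[0]

new_soft = raise_fd_soft_limit(4096)
```

Raising the hard limit itself requires privileges (ulimit -H / limits.conf / systemd LimitNOFILE), which is exactly the knob a zeromq-based server with one socket per client ends up tuning.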

Is this the right way to use a messaging queue?

只愿长相守 submitted on 2019-11-29 01:27:49
I am new to messaging queues, and right now I am using ZeroMQ on my Linux server. I am using PHP to write both the client and the server. This is mainly used for processing push notifications. I am using the basic REQ-REP Formal-Communication Pattern on single-I/O-threaded ZMQContext instances, as they have demonstrated. Here is the minimised zeromqServer.php code:

include("someFile.php");

$context = new ZMQContext(1);

// Socket to talk to clients
$responder = new ZMQSocket($context, ZMQ::SOCKET_REP);
$responder->bind("tcp://*:5555");

while (true) {
    $request = $responder->recv();
    printf (
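For comparison, the same strict recv-then-send contract as a self-contained Python (pyzmq) round trip; the "processed:" prefix and the threaded one-shot server are illustrative, not part of the PHP code above:

```python
import threading
import zmq

def round_trip(payload):
    """REQ client against a one-shot REP server, like the PHP pair above."""
    ctx = zmq.Context.instance()
    rep = ctx.socket(zmq.REP)
    port = rep.bind_to_random_port("tcp://127.0.0.1")

    def serve():
        request = rep.recv()               # REP must recv first...
        rep.send(b"processed:" + request)  # ...then send exactly one reply

    worker = threading.Thread(target=serve, daemon=True)
    worker.start()

    req = ctx.socket(zmq.REQ)
    req.connect("tcp://127.0.0.1:%d" % port)
    req.send(payload)                      # REQ must send first...
    reply = req.recv()                     # ...then recv
    worker.join()
    req.close()
    rep.close()
    return reply
```

One caveat for push-notification workloads: if the REP side dies between recv and send, the REQ client blocks forever, which is why timeout-and-retry wrappers around REQ are common in practice.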

Using ZeroMQ together with Boost::ASIO

天涯浪子 submitted on 2019-11-28 17:16:19
I've got a C++ application that is using ZeroMQ for some messaging. But it also has to provide an SCGI connection for an AJAX / Comet based web service. For this I need a normal TCP socket. I could do that with plain POSIX sockets, but to stay cross-platform portable and make my life easier (I hope...) I was thinking of using Boost::ASIO. But now I have the clash of ZMQ wanting to use its own zmq_poll() and ASIO its io_service.run() ... Is there a way to get ASIO to work together with the 0MQ zmq_poll() ? Or is there another recommended way to achieve such a setup? Note: I could solve that by
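One integration route: every ZeroMQ socket exposes an OS-level file descriptor (ZMQ_FD) that an external reactor such as ASIO can wait on; on wakeup you must drain ZMQ_EVENTS, because that descriptor is edge-triggered. The idea in a Python sketch, using the standard selectors module to stand in for ASIO:

```python
import selectors
import zmq

ctx = zmq.Context()
pull = ctx.socket(zmq.PULL)
port = pull.bind_to_random_port("tcp://127.0.0.1")
push = ctx.socket(zmq.PUSH)
push.connect("tcp://127.0.0.1:%d" % port)
push.send(b"event")

sel = selectors.DefaultSelector()
sel.register(pull.get(zmq.FD), selectors.EVENT_READ)  # zmq's wakeup fd

messages = []
while not messages:
    sel.select(timeout=1.0)               # the external event loop's wait
    # Edge-triggered: drain everything pending, not just one message.
    while pull.get(zmq.EVENTS) & zmq.POLLIN:
        messages.append(pull.recv(zmq.NOBLOCK))
```

In C++ the rough equivalent is wrapping the int from zmq_getsockopt(s, ZMQ_FD, ...) in a boost::asio::posix::stream_descriptor, calling async_wait, and doing the same ZMQ_EVENTS drain loop in the handler.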

ZeroMQ/ZMQ Push/Pull pattern usefulness

断了今生、忘了曾经 submitted on 2019-11-28 16:39:57
In experimenting with the ZeroMQ Push/Pull (what they call Pipeline) socket type, I'm having difficulty understanding the utility of this pattern. It's billed as a "load-balancer". Given a single server sending tasks to a number of workers, Push/Pull will evenly hand out the tasks among all the clients. 3 clients and 30 tasks: each client gets 10 tasks, client1 gets tasks 1, 4, 7, ..., client2 gets 2, 5, ..., and so on. Fair enough. Literally. However, in practice there is often a non-homogeneous mix of task complexity or client compute resources (or availability), and then this pattern breaks badly.
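The breakage comes from PUSH round-robining into per-peer queues at send time, so a slow worker still "owns" its pre-assigned backlog. A common mitigation is lowering both high-water marks to 1, which keeps at most about one task buffered per worker and approximates least-loaded dispatch. A pyzmq sketch (the worker count and the settle delay are illustrative; the sleep works around the "slow joiner" symptom):

```python
import time
import zmq

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.setsockopt(zmq.SNDHWM, 1)       # queue at most ~1 message per peer
port = push.bind_to_random_port("tcp://127.0.0.1")

def make_worker():
    pull = ctx.socket(zmq.PULL)
    pull.setsockopt(zmq.RCVHWM, 1)   # don't prefetch a backlog of tasks
    pull.connect("tcp://127.0.0.1:%d" % port)
    return pull

workers = [make_worker() for _ in range(3)]
time.sleep(0.2)  # let all connections attach before distributing

for i in range(3):
    push.send(b"task%d" % i)
tasks = [w.recv() for w in workers]  # round-robin: one task per worker
```

Even with HWM 1 a fast worker can only run about one task ahead; for true work-stealing you would switch to ROUTER with explicit worker-ready signalling (the Guide's load-balancing broker pattern).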