amqp

Is there an AMQP implementation that has a stable C++ client library? [duplicate]

Submitted by 自闭症网瘾萝莉.ら on 2019-12-06 00:20:42
Question: This question already has answers here: AMQP C++ implementation (5 answers). Closed 4 years ago. Is there an AMQP implementation that has a stable C++ client library? Answer 1: Apache Qpid has a stable C++ client library -- see qpid.apache.org. Answer 2: RabbitMQ: https://github.com/alanxz/SimpleAmqpClient https://github.com/akalend/amqpcpp Apache Qpid: https://qpid.apache.org/apis/0.16/cpp/html/ Reviews of message brokers (RabbitMQ and Apache Qpid came out on top for their uses): http://wiki.secondlife

RabbitMQ multiple consumers on a queue - only one gets the message

Submitted by 醉酒当歌 on 2019-12-06 00:02:07
Question: I implemented multiple consumers fetching messages from a single queue, using something similar to this example, except that I'm calling basic.get in an infinite polling loop. Any idea how I can prevent racing among the consumers, so that only one consumer gets each message and the others keep polling until another message arrives? I'm trying to follow a logic in which, as soon as I get the message, I ack it so the message is removed, but it seems
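A minimal sketch of the push-based alternative in Python with pika (the local broker and the queue name "work" are assumptions): with basic_consume, manual acks, and a prefetch of 1, the broker hands each message to exactly one consumer, so there is no polling race to prevent in the first place.

import pika

# Assumed: RabbitMQ on localhost, a durable queue named "work".
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="work", durable=True)
channel.basic_qos(prefetch_count=1)  # at most one unacked message per consumer

def on_message(ch, method, properties, body):
    print("got:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # ack removes it from the queue

channel.basic_consume(queue="work", on_message_callback=on_message)
channel.start_consuming()  # blocks and dispatches; no busy-wait loop needed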

AMQP/RabbitMQ - Process messages sequentially

Submitted by 怎甘沉沦 on 2019-12-05 19:24:41
I have one direct exchange. There is also one queue, bound to this exchange. I have two consumers for that queue. The consumers manually ack the messages once they've done the corresponding processing. The messages are logically ordered/sorted and should be processed in that order. Is it possible to enforce that all messages are received and processed sequentially across consumer A and consumer B? In other words, can A and B be prevented from processing messages at the same time? Note: the consumers are not sharing the same connection and/or channel. This means I cannot use <channel>.basicQoS
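One hedged way to get this behavior (it assumes RabbitMQ 3.8+; the queue name is illustrative): declare the queue with the single-active-consumer argument, so the broker delivers to only one of the attached consumers at a time and the second acts as a hot standby, preserving queue order even across separate connections.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
# "x-single-active-consumer" makes the broker pick one active consumer;
# the other consumer receives nothing until the active one disconnects.
channel.queue_declare(
    queue="ordered",
    durable=True,
    arguments={"x-single-active-consumer": True},
)

def on_message(ch, method, properties, body):
    # Messages arrive strictly in queue order on whichever consumer is active.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="ordered", on_message_callback=on_message)
channel.start_consuming()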

Is the Spring-AMQP re-queue message count JVM-based?

Submitted by 泪湿孤枕 on 2019-12-05 18:43:52
I was poking around the RabbitMQ documentation, and it seems that RabbitMQ does not track a message redelivery count. If I were to manually ACK/NACK messages, I would need to either keep the retry count in memory (say, by using the correlationId as the unique key in a map), or set my own header on the message and redeliver it (thus putting it at the end of the queue). However, this is a case that Spring handles. Specifically, I am referring to RetryInterceptorBuilder.stateful().maxAttempts(x). Is this count specific to a JVM, though, or is it manipulating the message somehow? For example,
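For reference, a hedged sketch of the manual header approach the question describes, in Python with pika (the header name "x-retry-count", the queue "jobs", and the retry limit are illustrative): because the count travels inside the message itself, it is not tied to any one JVM or process.

import pika

MAX_RETRIES = 3

def process(body):
    pass  # assumed application logic; raises on failure

def on_message(ch, method, properties, body):
    headers = properties.headers or {}
    retries = headers.get("x-retry-count", 0)
    try:
        process(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        ch.basic_ack(delivery_tag=method.delivery_tag)  # drop the old copy
        if retries < MAX_RETRIES:
            # Republish to the back of the queue with an incremented count.
            ch.basic_publish(
                exchange="",
                routing_key="jobs",
                body=body,
                properties=pika.BasicProperties(headers={"x-retry-count": retries + 1}),
            )
        # else: give up, or publish to a dead-letter queue instead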

How do I load test a RabbitMQ server (either using JMeter, Python, or any other tool)?

Submitted by 安稳与你 on 2019-12-05 14:04:14
I have been given access to a RabbitMQ server to do a load test on it. I'm completely new to servers and the AMQP protocol. I've been researching online to see what some of the different methods are. So far I'm investigating two. JMeter: I have found this project: https://github.com/jlavallee/JMeter-Rabbit-AMQP#build-dependencies . It gives me a jar file with which I can create a JMeter AMQP consumer and publisher, but I have no idea what to put in the fields (virtual host vs. host - I don't know my ports - ...). Python: using Pika. I have a simple sender script which connects from my client to my server and sends
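On the fields: the host is the machine RabbitMQ runs on, the AMQP port defaults to 5672, and the default virtual host is "/". For the Pika route, a minimal publish-rate sketch (the hostname, queue name, and message count are placeholders to adapt):

import time
import pika

N = 10000
params = pika.ConnectionParameters(host="your-rabbitmq-host", port=5672,
                                   virtual_host="/")
connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="loadtest")

start = time.time()
for i in range(N):
    channel.basic_publish(exchange="", routing_key="loadtest",
                          body=("msg-%d" % i).encode())
elapsed = time.time() - start
print("published %d messages in %.2fs (%.0f msg/s)" % (N, elapsed, N / elapsed))
connection.close()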

DDS vs AMQP vs ZeroMQ [closed]

Submitted by 风格不统一 on 2019-12-05 13:51:56
Question: As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance. Closed 7 years ago. I wanted feedback on whether my evaluations and concerns are correct. I have been researching the three: Data Distribution Service,

Message queue properties and an introduction to common message queues

Submitted by 寵の児 on 2019-12-05 13:42:54
What is a message queue? A message queue is a container that holds messages while they are in transit: it receives messages and stores them as files, and the messages in one queue can be consumed by multiple consumers concurrently. Distributed Message Service (DMS) is a distributed queue system: messages in a queue are stored across nodes, and each message is stored in multiple replicas for high availability. In general, a message queue has the following properties:
Message ordering. Standard queues support two modes, partition-ordered and globally ordered; ActiveMQ and Kafka queues are both partition-ordered. Partition-ordered queues use distributed processing to support higher concurrency, but because of the queue's distributed nature, DMS cannot guarantee that messages are consumed in exactly the order they were received. If ordering must be preserved, it is recommended to put sequencing information in each message so that messages can be re-sorted on receipt. Globally ordered queues consume messages first-in, first-out (FIFO), which suits scenarios with strict ordering requirements.
At-least-once delivery. In rare cases, one of the servers storing a replica of a message may be unavailable when a user receives or deletes the message. If that happens, the replica on the unavailable server is not deleted, and that copy may be delivered again on a later receive. This is called "at-least-once delivery", so applications should be designed to be idempotent (i.e., processing the same message more than once must not have adverse effects).
A single consume may return fewer messages than requested when the queue holds few messages. When consuming from a queue, DMS reads from a subset of the message storage partitions on each call and returns those messages to the consumer; if the queue holds few messages, a single consume may return fewer than the requested number
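To make the idempotency advice concrete, a small hedged Python sketch (the in-memory set and the function names are illustrative; a real service would keep processed IDs in a durable store):

processed_ids = set()

def handle(message_id, body):
    # At-least-once delivery means duplicates are possible: skip anything
    # already processed so a redelivery has no adverse effect.
    if message_id in processed_ids:
        return
    do_work(body)
    processed_ids.add(message_id)

def do_work(body):
    print("processing", body)  # assumed application logic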

Using RabbitMQ (Java client), is there a way to determine if the network connection is closed during a consume?

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-05 12:59:03
Question: I'm using RabbitMQ on RHEL 5.3 with the Java client. I have 2 nodes (machines). Node1 is consuming messages from a queue on Node2 using the Java helper class QueueingConsumer: QueueingConsumer consumer = new QueueingConsumer(channel); channel.basicConsume("MyQueueOnNode2", noAck, consumer); while (true) { QueueingConsumer.Delivery delivery = consumer.nextDelivery(); ... Process message - delivery.getBody() } If the interface is brought down on Node1 or Node2 (e.g. ifconfig eth1 down), the
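In the Java client, the usual tools are a requested heartbeat on the ConnectionFactory plus a ShutdownListener on the connection. As a rough analogue, a hedged Python/pika sketch (the host and queue names mirror the question; the heartbeat value is an assumption): with heartbeats enabled, a dead link surfaces as an exception from the blocking consume loop instead of hanging forever.

import pika
from pika.exceptions import AMQPConnectionError

# Missed heartbeats (roughly two intervals) make the client notice a dead TCP link.
params = pika.ConnectionParameters(host="node2", heartbeat=30)
connection = pika.BlockingConnection(params)
channel = connection.channel()

def on_message(ch, method, properties, body):
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="MyQueueOnNode2", on_message_callback=on_message)
try:
    channel.start_consuming()
except AMQPConnectionError:
    print("connection lost; reconnect or fail over here")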

Celery task state depends on CELERY_TASK_RESULT_EXPIRES

Submitted by 。_饼干妹妹 on 2019-12-05 12:25:55
From what I have seen, the task state depends entirely on the value set for CELERY_TASK_RESULT_EXPIRES: if I check the task state within this interval after the task has finished executing, the state returned by AsyncResult(task_id).state is correct. If not, the state is never updated and remains PENDING forever. Can anyone explain why this happens? Is it a feature or a bug? Why does the task state depend on the result expiry time, even if I am ignoring results? (Celery version: 3.0.23, result backend: AMQP) State and result are the same thing. The result backend was initially used
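A short sketch of what the excerpt describes (hedged; the task id is a placeholder, and it assumes an app configured with the AMQP result backend): PENDING simply means "no result message was found", so once the result expires the state falls back to PENDING.

from celery.result import AsyncResult

res = AsyncResult("11111111-2222-3333-4444-555555555555")  # placeholder id
# Within CELERY_TASK_RESULT_EXPIRES of completion this reports the real
# state (SUCCESS/FAILURE); after the result message is purged there is
# nothing left to look up, so it reverts to PENDING.
print(res.state)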

Celery design help: how to prevent concurrently executing tasks

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-05 10:53:46
I'm fairly new to Celery/AMQP and am trying to come up with a task/queue/worker design that meets the following requirements. I have multiple types of "per-user" tasks: e.g., TaskA, TaskB, TaskC. Each of these "per-user" tasks reads/writes data for one particular user in the system. So at any given time, I might need to create tasks User1_TaskA, User1_TaskB, User1_TaskC, User2_TaskA, User2_TaskB, etc. I need to ensure that, for each user, no two tasks of any task type execute concurrently. I want a system in which no worker can execute User1_TaskA at the same time as any other worker is executing
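One common pattern for this requirement (a hedged sketch, not the only design; Redis, the key name, and the timeout are assumptions): take a per-user lock in a shared store before doing the work, and have the task retry itself when another per-user task holds the lock.

import redis
from celery import Celery

app = Celery("tasks", broker="amqp://localhost")
r = redis.Redis()

@app.task(bind=True, max_retries=None)
def task_a(self, user_id):
    lock_key = "user-lock:%s" % user_id
    # NX: only set if absent; EX: auto-expire so a crashed worker cannot
    # hold the lock (and block this user's tasks) forever.
    if not r.set(lock_key, self.request.id, nx=True, ex=300):
        raise self.retry(countdown=5)  # another task for this user is running
    try:
        pass  # ... read/write this user's data here ...
    finally:
        r.delete(lock_key)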