amazon-sqs

Amazon SQS Long Polling not returning all messages

雨燕双飞 submitted on 2019-11-27 14:05:54
Question: I have a requirement to read all the messages in my Amazon SQS queue in a single read, sort them by created timestamp, and then run my business logic on them. To make sure all of the SQS hosts are checked for messages, I enabled long polling by setting the queue's default wait time to 10 seconds (any value greater than 0 enables long polling). However, when I read the queue it still did not give me all the messages, and I had to do multiple reads to get them all.
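
A minimal sketch of draining the queue with long polling, assuming boto3 and a queue named "my-queue" (the region and the "CreatedAt" attribute used for sorting are illustrative): a single ReceiveMessage call returns at most 10 messages even with long polling enabled, so collecting everything still requires looping until an empty response comes back.

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = sqs.get_queue_url(QueueName="my-queue")["QueueUrl"]

    messages = []
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,         # hard upper bound per request
            WaitTimeSeconds=10,             # long polling: wait up to 10 s
            MessageAttributeNames=["All"],  # needed to read custom attributes
        )
        batch = resp.get("Messages", [])
        if not batch:
            break                           # queue drained (for now)
        messages.extend(batch)

    # Sort by a created-timestamp message attribute before applying business
    # logic; the attribute name "CreatedAt" is hypothetical.
    messages.sort(
        key=lambda m: m.get("MessageAttributes", {})
                       .get("CreatedAt", {})
                       .get("StringValue", "")
    )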

Ideas for scaling chat in AWS?

自闭症网瘾萝莉.ら submitted on 2019-11-27 10:55:55
Question: I'm trying to come up with the best solution for scaling a chat service in AWS. I've come up with a couple of potential solutions: Redis Pub/Sub - when a user establishes a connection to a server, that server subscribes to that user's ID. When someone sends a message to that user, a server performs a publish to the channel with that user's ID. The server the user is connected to receives the message and pushes it down to the appropriate client. SQS - I've thought of creating a queue for each …
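
A minimal sketch of the Redis Pub/Sub approach described above, assuming the redis-py client; the "user:<id>" channel naming scheme and the push_to_client callback are hypothetical.

    import redis

    r = redis.Redis(host="localhost", port=6379)

    def on_connect(user_id, push_to_client):
        # The server holding the user's connection subscribes to that user's channel.
        pubsub = r.pubsub()
        pubsub.subscribe(f"user:{user_id}")
        for item in pubsub.listen():
            if item["type"] == "message":
                push_to_client(item["data"])  # forward to the connected client

    def send_message(recipient_id, payload):
        # Any server can publish; only the server subscribed to this channel
        # (the one holding the recipient's connection) receives it.
        r.publish(f"user:{recipient_id}", payload)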

Receive XML response from Cross-Domain Ajax request with jQuery

时光总嘲笑我的痴心妄想 submitted on 2019-11-27 06:46:08
Question: I am trying to make an Ajax request to another domain. It already works, but now I have another problem... This is my code:

    function getChannelMessages(channel) {
        jQuery.support.cors = true;
        $.ajax(channel, {
            cache: true,
            type: "get",
            data: _channels[channel].request,
            global: false,
            dataType: "jsonp text xml",
            jsonp: false,
            success: function jsonpCallback(response) {
                console.log(response);
                updateChannelRequest(channel);
                //getChannelMessages(channel);
            }
        });
    }

As I said, it already works, …

Celery with Amazon SQS

▼魔方 西西 submitted on 2019-11-27 06:20:38
I want to use Amazon SQS as the broker backend for Celery. There is an SQS transport implementation for Kombu, which Celery depends on, but there is not enough documentation for using it, so I cannot figure out how to configure SQS for Celery. Has anyone succeeded in configuring SQS with Celery? I ran into this question several times but still wasn't entirely sure how to set up Celery to work with SQS. It turns out that it is quite easy with the latest versions of Kombu and Celery. As an alternative to the BROKER_URL syntax mentioned in another answer, you can simply set the transport, …
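
A minimal sketch of the BROKER_URL style of configuration referred to above, using Celery 3.x-era setting names; the credentials, region, and queue prefix are placeholders.

    from urllib.parse import quote

    # AWS credentials must be URL-encoded when embedded in the broker URL.
    aws_access_key = quote("AKIA_PLACEHOLDER", safe="")
    aws_secret_key = quote("secret_placeholder", safe="")

    BROKER_URL = "sqs://{0}:{1}@".format(aws_access_key, aws_secret_key)
    BROKER_TRANSPORT_OPTIONS = {
        "region": "us-east-1",
        "polling_interval": 1,           # seconds between SQS polls
        "queue_name_prefix": "celery-",  # hypothetical prefix for created queues
    }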

Why should I use Amazon Kinesis and not SNS-SQS?

柔情痞子 submitted on 2019-11-26 23:50:57
Question: I have a use case where there will be a stream of data coming in that I cannot consume at the same pace, so I need a buffer. This can be solved using an SNS-SQS queue. I came to know that Kinesis solves the same purpose, so what is the difference? Why should I prefer (or not prefer) Kinesis? Answer 1: On the surface they are vaguely similar, but your use case will determine which tool is appropriate. IMO, if you can get by with SQS then you should - if it will do what you want, it will be simpler …
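
A minimal sketch of the SNS-to-SQS buffering pattern the question refers to, assuming boto3; the topic and queue names are hypothetical, and the SQS access policy that allows SNS to deliver to the queue is omitted for brevity.

    import json
    import boto3

    sns = boto3.client("sns", region_name="us-east-1")
    sqs = boto3.client("sqs", region_name="us-east-1")

    topic_arn = sns.create_topic(Name="events")["TopicArn"]
    queue_url = sqs.create_queue(QueueName="events-buffer")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Fan the topic out to the queue; the queue absorbs bursts that the
    # consumer then drains at its own pace.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
    sns.publish(TopicArn=topic_arn, Message=json.dumps({"event": "example"}))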

What's causing these ParseError exceptions when reading off an AWS SQS queue in my Storm cluster

风格不统一 submitted on 2019-11-26 22:50:30
Question: I'm using Storm 0.8.1 to read incoming messages off an Amazon SQS queue and am getting consistent exceptions when doing so:

    2013-12-02 02:21:38 executor [ERROR] java.lang.RuntimeException: com.amazonaws.AmazonClientException: Unable to unmarshall response (ParseError at [row,col]:[1,1] Message: JAXP00010001: The parser has encountered more than "64000" entity expansions in this document; this is the limit imposed by the JDK.)
        at REDACTED.spouts.SqsQueueSpout.handleNextTuple(SqsQueueSpout.java …

What is a good practice to achieve the “Exactly-once delivery” behavior with Amazon SQS?

北战南征 submitted on 2019-11-26 18:25:31
Question: According to the documentation:

    Q: How many times will I receive each message?
    Amazon SQS is engineered to provide "at least once" delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.

Is there any good practice for achieving exactly-once delivery? I was thinking about using the DynamoDB …
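
A minimal sketch of the DynamoDB idea the question hints at, assuming boto3: record each SQS message ID with a conditional write and skip any message whose ID is already present. The table name "processed_messages" and key "message_id" are hypothetical; this gives exactly-once processing on top of at-least-once delivery, not exactly-once delivery itself.

    import boto3
    from botocore.exceptions import ClientError

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    def process_once(message, handler):
        try:
            # Succeeds only if this message ID has never been recorded before.
            dynamodb.put_item(
                TableName="processed_messages",
                Item={"message_id": {"S": message["MessageId"]}},
                ConditionExpression="attribute_not_exists(message_id)",
            )
        except ClientError as e:
            if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return  # duplicate delivery: already processed, drop it
            raise
        handler(message)  # run the business logic for the first delivery only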

Using many consumers in SQS Queue

眉间皱痕 submitted on 2019-11-26 16:33:35
I know that it is possible to consume an SQS queue using multiple threads. I would like to guarantee that each message will be consumed once. I know that it is possible to change the visibility timeout of a message, e.g. to make it equal to my processing time. If my process spends more time than the visibility timeout (e.g. on a slow connection), another thread can consume the same message. What is the best approach to guarantee that a message will be processed once? Krease: "What is the best approach to guarantee that a message will be processed once?" You're asking for a guarantee - you won't get one. You can …
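
A minimal sketch of the visibility-timeout handling discussed above, assuming boto3; the queue URL, timeout, and handler are placeholders. A background thread keeps extending the message's visibility while processing runs, so a slow job does not let another consumer receive the same message, and the message is deleted only after the handler succeeds.

    import threading
    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"

    def process_with_heartbeat(message, handler, timeout=30):
        stop = threading.Event()

        def heartbeat():
            # Renew visibility at half the timeout until processing finishes.
            while not stop.wait(timeout / 2):
                sqs.change_message_visibility(
                    QueueUrl=QUEUE_URL,
                    ReceiptHandle=message["ReceiptHandle"],
                    VisibilityTimeout=timeout,
                )

        threading.Thread(target=heartbeat, daemon=True).start()
        try:
            handler(message)  # business logic should still be idempotent
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
            )
        finally:
            stop.set()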
