amazon-sqs

Elastic Beanstalk Worker's SQS daemon getting 504 gateway timeout after 1 minute

左心房为你撑大大i submitted on 2019-11-29 05:17:03
Question: I have an Elastic Beanstalk worker that can only run one task at a time, and each task takes a while (from a few minutes to, hopefully, less than 30 minutes), so I'm queuing my tasks on SQS. In my worker configuration I have:

- HTTP connections: 1
- Visibility timeout: 3600
- Error visibility timeout: 300
- (Under "Advanced") Inactivity timeout: 1800

The problem is that there seems to be a one-minute timeout (in nginx?) that overrides the "Inactivity timeout", returning a 504 (Gateway Timeout). This…
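The one-minute cutoff is usually nginx's default proxy timeout on the worker instance, which sits in front of the local web app the SQS daemon posts to. One hedged way to raise it is an `.ebextensions` config file; the file name, path, and exact directives below are assumptions to adapt to your environment, not something stated in the question:

```yaml
# .ebextensions/nginx-timeout.config (hypothetical file name)
files:
  "/etc/nginx/conf.d/proxy_timeout.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # Allow long-running worker tasks (up to 30 min) before nginx gives up
      proxy_read_timeout 1800s;
      proxy_send_timeout 1800s;
```

The timeout values should match or exceed the worker's "Inactivity timeout" so nginx is never the first component to give up.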

Using Amazon SQS with multiple consumers

好久不见. submitted on 2019-11-29 00:37:51
Question: I have a service-based application that uses Amazon SQS with multiple queues and multiple consumers. I am doing this so that I can implement an event-based architecture and decouple all the services, where the different services react to changes in the state of other systems. For example:

- Registration Service: emits event 'registration-new' when a new user registers.
- User Service: emits event 'user-updated' when a user is updated.
- Search Service: reads from queue 'registration-new' and indexes…
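The decoupling described above usually comes down to each consumer routing a message body to a handler by event type. A minimal sketch of that routing, using the event names from the question (the `make_event`/`dispatch` helpers are my assumptions, not an AWS API):

```python
import json


def make_event(event_type, payload):
    # Envelope every service publishes, so consumers can route on "type"
    return json.dumps({"type": event_type, "payload": payload})


def dispatch(message_body, handlers):
    # Route one queue message to the handler registered for its event type;
    # returns False for unknown types so callers can dead-letter them.
    event = json.loads(message_body)
    handler = handlers.get(event["type"])
    if handler is None:
        return False
    handler(event["payload"])
    return True
```

The Search Service, for example, would register `{"registration-new": index_user}` and ignore everything else.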

Amazon SQS Long Polling not returning all messages

我只是一个虾纸丫 submitted on 2019-11-28 22:00:11
I have a requirement to read all the messages in my Amazon SQS queue in one read, then sort them by created timestamp and run business logic on them. To make sure all the SQS hosts are checked for messages, I enabled long polling by setting the queue's default wait time to 10 seconds (any value greater than 0 enables long polling). However, when I read the queue it still did not give me all the messages, and I had to do multiple reads to get them all. I even enabled long polling through code, per receive request; it still did not work. Below is the code I…
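Even with long polling, a single ReceiveMessage call returns at most 10 messages and only samples a subset of SQS hosts, so "read everything" has to be a loop. A sketch of that loop, assuming `sqs` behaves like a boto3 SQS client (the helper name is mine):

```python
def drain_queue(sqs, queue_url):
    # Repeatedly long-poll until an empty response, then sort by SentTimestamp.
    # A single receive_message call returns at most 10 messages, so one read
    # can never see the whole queue.
    messages = []
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=10,  # long polling
            AttributeNames=["SentTimestamp"],
        )
        batch = resp.get("Messages", [])
        if not batch:
            break
        messages.extend(batch)
    messages.sort(key=lambda m: int(m["Attributes"]["SentTimestamp"]))
    return messages
```

Note that an empty response still does not guarantee the queue is empty; messages received by other consumers (or mid-flight) simply won't appear.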

Ideas for scaling chat in AWS?

喜夏-厌秋 submitted on 2019-11-28 18:22:11
I'm trying to come up with the best solution for scaling a chat service in AWS. I've come up with a couple of potential solutions:

- Redis Pub/Sub: When a user establishes a connection to a server, that server subscribes to that user's ID. When someone sends a message to that user, a server performs a publish to the channel with the user's ID. The server the user is connected to receives the message and pushes it down to the appropriate client.
- SQS: I've thought of creating a queue for each user. The server the user is connected to will poll (or use SQS long polling on) that queue. When a new…
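The Redis variant boils down to "subscribe on connect, publish on send". An in-memory stand-in for the broker (not redis-py; the class below is my assumption, mirroring Redis SUBSCRIBE/PUBLISH semantics) makes the routing concrete:

```python
class LocalPubSub:
    # In-memory stand-in for Redis pub/sub: each chat server subscribes to
    # the channel "user:<id>" for every connected user and pushes messages
    # down to those clients.
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, channel, callback):
        self.subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # Like Redis PUBLISH, returns the number of subscribers reached.
        callbacks = self.subscribers.get(channel, [])
        for cb in callbacks:
            cb(message)
        return len(callbacks)


def on_connect(broker, user_id, push_to_client):
    broker.subscribe(f"user:{user_id}", push_to_client)


def send_message(broker, to_user_id, message):
    return broker.publish(f"user:{to_user_id}", message)
```

With real Redis, a publish return value of 0 tells the sender the recipient is offline, which is a natural hook for falling back to offline storage.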

Receive XML response from Cross-Domain Ajax request with jQuery

℡╲_俬逩灬. submitted on 2019-11-28 12:06:31
I'm trying to make an Ajax request to another domain. It already works, but now I have another problem. This is my code:

function getChannelMessages(channel) {
    jQuery.support.cors = true;
    $.ajax(channel, {
        cache    : true,
        type     : "get",
        data     : _channels[channel].request,
        global   : false,
        dataType : "jsonp text xml",
        jsonp    : false,
        success  : function jsonpCallback(response) {
            console.log(response);
            updateChannelRequest(channel);
            //getChannelMessages(channel);
        }
    });
}

As I said, it already works, but the problem is that the server returns XML (it's not my server; it's another server from another company…

AWS: multiple instances reading SQS

醉酒当歌 submitted on 2019-11-28 03:40:11
Question: A simple question: I want to run an Auto Scaling group on Amazon that fires up multiple instances to process the messages from an SQS queue. But how do I know that the instances aren't processing the same messages? I can delete a message from the queue when it's processed. But if it's not deleted yet and is still being processed by one instance, another instance CAN receive that same message and process it too, in my opinion.

Answer 1: Aside from the fairly remote possibility of SQS incorrectly…
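The usual answer here is that a received message becomes invisible to other consumers for the duration of the visibility timeout, and you delete it only after processing succeeds. A sketch of that receive/process/delete cycle, assuming `sqs` behaves like a boto3 SQS client (the function name is mine):

```python
def process_one(sqs, queue_url, handler):
    # Receive one message; SQS hides it from other instances for the
    # visibility timeout while we work on it.
    resp = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
    )
    messages = resp.get("Messages", [])
    if not messages:
        return False
    msg = messages[0]
    handler(msg["Body"])  # do the work while the message is invisible
    # Delete only after success; on a crash the message reappears for
    # another instance, which is why handlers should be idempotent.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
    return True
```

Because SQS is "at least once", the visibility timeout reduces, but does not eliminate, duplicate processing; idempotent handlers remain the real safety net.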

Why should I use Amazon Kinesis and not SNS-SQS?

折月煮酒 submitted on 2019-11-28 02:51:59
I have a use case where there will be a stream of data coming in, and I cannot consume it at the same pace, so I need a buffer. This can be solved with an SNS-SQS queue. I then learned that Kinesis serves the same purpose, so what is the difference? Why should I prefer (or not prefer) Kinesis?

E.J. Brennan: On the surface they are vaguely similar, but your use case will determine which tool is appropriate. IMO, if you can get by with SQS then you should; if it will do what you want, it will be simpler and cheaper. But here is a better explanation from the AWS FAQ, which gives examples of…

Finding certain messages in SQS

不羁的心 submitted on 2019-11-27 23:12:25
Question: I know SQS isn't built for this, but I'm curious: is it possible to find messages in a queue that meet some criteria? I can pull messages in a loop, search the message bodies for some pattern (without even deserializing them), and filter out the messages I need. But then it's possible to end up with an infinite loop: the first messages I read will be back in the queue by the time I reach the end of it. Extending the visibility of the messages is possible, but how do I know exactly how…
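One way to bound the infinite-loop risk is to remember the MessageIds already inspected and stop as soon as a batch repeats one, since that means the queue has wrapped around. A sketch, assuming `sqs` behaves like a boto3 SQS client; `scan_queue` and its stopping rule are my assumptions, not an SQS feature:

```python
def scan_queue(sqs, queue_url, predicate, max_batches=1000):
    # Pull batches, filter bodies by predicate, and stop once we see a
    # MessageId again: messages we inspected earlier are cycling back.
    seen_ids, matches = set(), []
    for _ in range(max_batches):
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=10,
            VisibilityTimeout=300,  # keep inspected messages hidden a while
        )
        batch = resp.get("Messages", [])
        if not batch or any(m["MessageId"] in seen_ids for m in batch):
            break
        for msg in batch:
            seen_ids.add(msg["MessageId"])
            if predicate(msg["Body"]):
                matches.append(msg)
    return matches
```

The per-receive `VisibilityTimeout` keeps already-inspected messages hidden during the scan; it just needs to be longer than the scan itself, or the wrap-around check fires early.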

What's causing these ParseError exceptions when reading off an AWS SQS queue in my Storm cluster

核能气质少年 submitted on 2019-11-27 19:45:12
I'm using Storm 0.8.1 to read incoming messages off an Amazon SQS queue, and I'm getting consistent exceptions when doing so:

2013-12-02 02:21:38 executor [ERROR]
java.lang.RuntimeException: com.amazonaws.AmazonClientException: Unable to unmarshall response
(ParseError at [row,col]:[1,1] Message: JAXP00010001: The parser has encountered more than "64000" entity expansions in this document; this is the limit imposed by the JDK.)
    at REDACTED.spouts.SqsQueueSpout.handleNextTuple(SqsQueueSpout.java:219)
    at REDACTED.spouts.SqsQueueSpout.nextTuple(SqsQueueSpout.java:88)
    at backtype.storm.daemon…
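The `JAXP00010001` message is the JDK's XML parser hitting its built-in 64000 entity-expansion cap while the AWS SDK unmarshals a response. A common workaround (a sketch to verify against your Storm version, not something from the question) is to lift the cap inside the worker JVMs via a system property in `storm.yaml`:

```yaml
# storm.yaml -- hedged sketch; check how your deployment sets worker JVM opts.
# jdk.xml.entityExpansionLimit=0 means "no limit" on JDK 7u45 and later.
worker.childopts: "-Djdk.xml.entityExpansionLimit=0"
```

Raising the limit to a large finite value is safer than 0 if the XML comes from an untrusted source, since an unlimited parser is open to entity-expansion (billion laughs) attacks.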

What is a good practice to achieve the “Exactly-once delivery” behavior with Amazon SQS?

允我心安 submitted on 2019-11-27 14:37:13
According to the documentation:

Q: How many times will I receive each message?
Amazon SQS is engineered to provide "at least once" delivery of all messages in its queues. Although most of the time each message will be delivered to your application exactly once, you should design your system so that processing a message more than once does not create any errors or inconsistencies.

Is there any good practice for achieving exactly-once delivery? I was thinking about using DynamoDB "Conditional Writes" as a distributed locking mechanism, but... any better ideas?

Some references on this topic: At…
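The conditional-write idea in the question reduces to "first writer wins": attempt a `PutItem` that succeeds only if the message id has never been seen. A hedged sketch assuming `table` behaves like a boto3 DynamoDB `Table` resource (the helper name and table schema are mine):

```python
def claim_message(table, message_id):
    # First consumer to claim message_id wins; everyone else sees the
    # conditional-check failure and skips the duplicate delivery.
    try:
        table.put_item(
            Item={"MessageId": message_id},
            ConditionExpression="attribute_not_exists(MessageId)",
        )
        return True
    except Exception as exc:
        # With boto3 this is botocore.exceptions.ClientError carrying a
        # ConditionalCheckFailedException error code.
        if "ConditionalCheckFailed" in str(exc):
            return False
        raise
```

This gives effectively-once processing rather than true exactly-once delivery: if a consumer claims a message and then crashes before finishing, the claim has to expire (e.g. via a TTL attribute) for the redelivered message to be processed.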