amazon-sqs

Customizing Amazon SNS email notifications

Submitted by 柔情痞子 on 2019-12-07 05:09:31
Question: We are developing a custom commenting system in which email notifications are sent to all subscribers of a post, just like "Disqus". We found that Amazon AWS provides the Simple Notification Service (SNS), which does a fairly good job of sending mass email notifications and managing subscribers and topics, but I could not find any article on how the format of the email notifications can be changed, as all the emails carry Amazon branding. Is it possible to send custom email without Amazon branding through Amazon SNS?
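For reference, a minimal boto3 sketch of the plain SNS email-subscription flow the question describes; the region, topic name, and email address are placeholders, and boto3 is just one way to call the API:

```python
# Minimal sketch of the standard SNS email-notification flow described above,
# using boto3. Topic name and email address are placeholders.
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Create (or look up) a topic for the post's subscribers.
topic = sns.create_topic(Name="post-123-comments")
topic_arn = topic["TopicArn"]

# Subscribe a commenter's email address; SNS sends its own confirmation mail.
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="subscriber@example.com")

# Publish a notification; with the "email" protocol, SNS controls the template
# and branding of the delivered message.
sns.publish(
    TopicArn=topic_arn,
    Subject="New comment on your post",
    Message="Someone replied to the thread you follow.",
)
```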

SpringBoot @SqsListener - not working - with Exception - TaskRejectedException

Submitted by 不问归期 on 2019-12-07 01:00:30
Question: I have an AWS SQS queue with 5,000 messages already on it (a sample message looks like 'Hello @ 1'). I created a Spring Boot application and, inside one of the component classes, created a method to read messages from the SQS queue.
package com.example.aws.sqs.service;
import org.springframework.cloud.aws.messaging.listener.SqsMessageDeletionPolicy;
import org.springframework.cloud.aws.messaging.listener.annotation.SqsListener;
import org.springframework.stereotype.Component;
import lombok.extern

Do Delay Queue messages count as “In Flight” in SQS?

Submitted by 断了今生、忘了曾经 on 2019-12-06 23:47:26
Question: I'm working on a project in which I intend to use an Amazon SQS delay queue. I'm having a bit of trouble understanding exactly what is meant by "inflight" messages. There is a note in the documentation that says:
Note: There is a 120,000 limit for the number of inflight messages per queue. Messages are inflight after they have been received by the queue, but have not yet been deleted from the queue. If you reach the 120,000 limit, you will receive an OverLimit error message from Amazon SQS. To
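A boto3 sketch of the two stages being asked about; the queue URL is a placeholder:

```python
# boto3 sketch of a delayed send followed by a normal receive/delete cycle.
# The queue URL is a placeholder.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-delay-queue"

# Per-message delay (the queue-wide DelaySeconds attribute works the same way,
# capped at 900 seconds). During the delay the message is not visible to consumers.
sqs.send_message(QueueUrl=queue_url, MessageBody="hello", DelaySeconds=300)

# After the delay expires a consumer can receive the message; the window between
# receive and delete (or visibility-timeout expiry) is what the docs call "in flight".
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```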

Purpose of Amazon SQS message's body as against message's attributes

Submitted by 荒凉一梦 on 2019-12-06 17:55:57
Question: What is the purpose of using the message body in SQS when you're already able to add message attributes? Let's take an example: we want to push a message to a new-user queue when a new user registers. I imagine the message will have an attribute userId; I don't see the use of the body here.
Answer 1: Message attributes are supposed to be used as message metadata (like a timestamp or possibly some category) and not the message itself. Ideally, the message payload should be given in the message body. So, for
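A boto3 sketch of the split the answer describes, with the payload in the body and the metadata in attributes; the queue URL and field names are illustrative:

```python
# Payload goes in the body; metadata (event type, timestamp) goes in attributes.
import boto3
import json
import time

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/new-user"

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"userId": "12345", "email": "new.user@example.com"}),
    MessageAttributes={
        "eventType": {"DataType": "String", "StringValue": "UserRegistered"},
        "createdAt": {"DataType": "Number", "StringValue": str(int(time.time()))},
    },
)
```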

Celery Consumer SQS Messages

Submitted by 那年仲夏 on 2019-12-06 12:19:10
Question: I am new to Celery and SQS, and would like to use them to periodically check messages stored in SQS and then fire a consumer. The consumer and Celery both live on EC2, while the messages are sent from GAE using the boto library. Currently, I am confused about: In the message body of creating_msg_gae.py, what task information should I put here? I assume this would be the name of my Celery task? In the message body of creating_msg_gae.py, is url considered as the argument to be
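A sketch of what the Celery (EC2) side could look like with the kombu SQS transport; the task name, queue region, and consumer logic are placeholders. Note that a Celery worker expects messages in Celery's own task-message format, so enqueuing through the task API (or matching that format when publishing from GAE) is what ties the two sides together:

```python
# Celery app on EC2 using the SQS broker; names and options are placeholders.
from celery import Celery

app = Celery(
    "worker",
    broker="sqs://",  # credentials come from the environment / instance role
)
app.conf.broker_transport_options = {
    "region": "us-east-1",
    "polling_interval": 10,  # seconds between SQS polls
}

@app.task(name="tasks.process_message")
def process_message(url):
    # placeholder consumer logic for the URL carried in the task arguments
    print("processing %s" % url)
```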

Airflow + Cluster + Celery + SQS - Airflow Worker: 'Hub' object has no attribute '_current_http_client'

Submitted by 旧时模样 on 2019-12-06 07:11:31
I'm trying to cluster my Airflow setup and I'm using this article to do so. I just configured my airflow.cfg file to use the CeleryExecutor, I pointed my sql_alchemy_conn to my PostgreSQL database that's running on the same master node, I set the broker_url to use AWS SQS (I didn't set the access_key_id or secret_key, since it's running on an EC2 instance it doesn't need those), and I set the celery_result_backend to my PostgreSQL server too. I saved my new airflow.cfg changes, ran airflow initdb, and then ran airflow scheduler, which worked. I went to the UI and turned on one of my
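A sketch of the airflow.cfg settings the post describes; hostnames and database names are placeholders, and the option names follow the post (they differ slightly between Airflow versions):

```ini
# Placeholder excerpt, not the poster's actual file.
[core]
executor = CeleryExecutor
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@master-node:5432/airflow

[celery]
broker_url = sqs://
celery_result_backend = db+postgresql://airflow:airflow@master-node:5432/airflow
```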

Using python BOTO with AWS SQS, getting back nonsense characters

Submitted by 痞子三分冷 on 2019-12-06 05:44:23
Question: So, I am using Python and boto to access my AWS SQS queue. I have some messages in SQS which I can see from the AWS dashboard. However, when I try to get these messages through Python, the characters that come through are just gibberish. Any idea what is going on here?
conn = boto.sqs.connect_to_region("us-east-1")
q = conn.get_queue('my-worker-queue')
print q
# read from message queue
message = q.read(60)
print message
print message.get_body()
Given the code above, I get the following: Queue
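One common cause of this with boto (an assumption here, not confirmed by the post) is that the default Message class base64-decodes the body, so messages that were sent as plain text by another producer come back as gibberish. A sketch of the same read with RawMessage, keeping the queue name from the question:

```python
# Assumes the gibberish comes from boto's default base64 decoding of bodies
# that were never base64-encoded; RawMessage returns the body as-is.
# (Python 2 style, matching the original boto snippet.)
import boto.sqs
from boto.sqs.message import RawMessage

conn = boto.sqs.connect_to_region("us-east-1")
q = conn.get_queue('my-worker-queue')
q.set_message_class(RawMessage)  # skip the base64 decode on get_body()

message = q.read(60)
if message is not None:
    print message.get_body()
```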

AWS SQS Asynchronous Queuing Pattern (Request/Response)

Submitted by 我们两清 on 2019-12-06 05:35:24
Question: I'm looking for help with an architectural design decision I'm making for a product. We've got multiple producers (initiated by API Gateway calls into Lambda) that put messages on an SQS queue (the request queue). There can be multiple simultaneous calls, so there would be multiple Lambda instances running in parallel. Then we have consumers (let's say twenty EC2 instances) that long-poll the SQS queue for messages to process. They take about 30-45 seconds to process a message each. I
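A boto3 sketch of one of the EC2 consumers long-polling the request queue; the queue URL, visibility timeout, and the handler standing in for the 30-45 seconds of work are placeholders:

```python
# One consumer loop: long-poll the request queue, process, then delete.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
request_queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/request-queue"

def handle_request(body):
    # placeholder for the ~30-45 seconds of real processing
    print("processing:", body)

while True:
    resp = sqs.receive_message(
        QueueUrl=request_queue_url,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,     # long poll
        VisibilityTimeout=120,  # longer than the expected processing time
    )
    for msg in resp.get("Messages", []):
        handle_request(msg["Body"])
        sqs.delete_message(QueueUrl=request_queue_url, ReceiptHandle=msg["ReceiptHandle"])
```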

How to poll AWS SQS with AWS Lambda

Submitted by ぐ巨炮叔叔 on 2019-12-06 05:22:19
At the moment I'm polling AWS SQS from our back-end and running business logic once a payload is received. I would like to move this to AWS Lambda and start automating the business logic via SQS/SNS. As I cannot subscribe to AWS SQS events, what is the best practice for implementing SQS polling with Lambda (node.js)?
SQS doesn't really work well with Lambda, since you cannot automatically trigger Lambda functions from SQS queue messages. I would rather remove the SQS/SNS logic and go for a DynamoDB Streams based solution that would cover the queueing, archiving & Lambda triggering tasks natively:
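A minimal sketch of a Lambda handler fed by a DynamoDB stream, along the lines the answer suggests (shown in Python although the asker mentions Node.js); the "payload" attribute name and the business logic are placeholders:

```python
# Handler invoked by a DynamoDB Streams trigger; processes newly inserted items.
def lambda_handler(event, context):
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue
        new_image = record["dynamodb"].get("NewImage", {})
        payload = new_image.get("payload", {}).get("S")  # placeholder attribute
        # placeholder for the business logic that used to run after polling SQS
        print("processing payload: %s" % payload)
    return {"processed": len(event.get("Records", []))}
```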

My worker tier returns 400 Error, CloudWatch Error?

Submitted by 橙三吉。 on 2019-12-06 05:00:00
I have a worker tier Elastic Beanstalk application connected to an SQS queue. Over and over again I keep getting an HTTP 400 error returned in my log, but I don't get any error details at all:
[14/Apr/2014:18:03:26 +0000] "POST /customer-registered HTTP/1.1" 400 192 "-" "aws-sqsd"
I don't get any errors in error_log; the only error I can find is the following, located at /var/log/aws-sqsd/default.log:
2014-04-14T18:02:58Z error: AWS::CloudWatch::Errors::AccessDenied: User: arn:aws:sts::809571490243:assumed-role/aws-elasticbeanstalk-ec2-role/i-a00fffe2 is not