amazon-sqs

How to implement a priority queue using SQS (Amazon Simple Queue Service)

给你一囗甜甜゛ submitted on 2020-05-14 17:18:38
Question: I have a situation where a message fails and I would like to replay it with the highest priority, using the Python boto package, so that it is consumed first. If I'm not mistaken, SQS does not support priority queues, so I would like to implement something simple. Important note: when a message fails I no longer have the message object; I only persist the receipt_handle so I can delete the message (if there were more than x retries) or change its visibility timeout in order to push it back to the queue. Thanks!
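Since SQS has no native priorities, one common workaround is a separate high-priority queue that the consumer always drains first, combined with resetting the visibility timeout to re-enqueue a failed message. A minimal sketch of both ideas, where `fetch_high`/`fetch_low` and `change_visibility` stand in for `receive_message` and `change_message_visibility` calls against real queue URLs (the names here are assumptions, not part of the question):

```python
def poll_with_priority(fetch_high, fetch_low):
    """Return (message, source), always taking from the high-priority queue first."""
    msg = fetch_high()
    if msg is not None:
        return msg, "high"
    msg = fetch_low()
    if msg is not None:
        return msg, "low"
    return None, None


def requeue_failed(change_visibility, receipt_handle):
    """Make a failed message receivable again immediately by resetting its
    visibility timeout to 0; only the receipt handle is needed."""
    change_visibility(receipt_handle, 0)
```

In real code, `change_visibility` would wrap `sqs.change_message_visibility(QueueUrl=..., ReceiptHandle=receipt_handle, VisibilityTimeout=0)`.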

Listening to an SQS message queue with Spring Boot does not work with the standard config

喜你入骨 submitted on 2020-04-17 07:29:09
Question: I'm unable to get the queue listener working with Spring Boot and SQS (the message is sent and appears in the SQS UI). @MessageMapping or @SqsListener does not work. Java: 11, Spring Boot: 2.1.7, Dependency: spring-cloud-aws-messaging. This is my config: @Configuration @EnableSqs public class SqsConfig { @Value("#{'${env.name:DEV}'}") private String envName; @Value("${cloud.aws.region.static}") private String region; @Value("${cloud.aws.credentials.access-key}") private String awsAccessKey; @Value("${cloud

Spring Cloud AWS: issue with setting manual acknowledgement of an SQS message

断了今生、忘了曾经 submitted on 2020-04-10 11:54:08
Question: I'm trying to implement manual deletion of AWS SQS messages using spring-cloud-aws-messaging. This feature was implemented in the scope of this ticket; from the example in the tests: @SqsListener(value = "queueName", deletionPolicy = SqsMessageDeletionPolicy.NEVER) public void listen(SqsEventDTO message, Acknowledgment acknowledgment) { LOGGER.info("Received message {}", message.getFoo()); try { acknowledgment.acknowledge().get(); } catch (InterruptedException e) { LOGGER.error("Opps", e); }

Django Celery 4 - ValueError: invalid literal for int() with base 10 when starting the Celery worker

青春壹個敷衍的年華 submitted on 2020-04-10 08:09:08
Question: I have configured my celery.py as documented and pointed my Celery broker URL at AWS SQS, but I cannot get it to work. When I run the Celery worker, I get the ValueError: File "/Users/abd/Desktop/proj-aws/lib/python3.6/site-packages/celery/bin/base.py", line 244, in __call__ ret = self.run(*args, **kwargs) File "/Users/abd/Desktop/proj-aws/lib/python3.6/site-packages/celery/bin/worker.py", line 255, in run **kwargs) File "/Users/abd/Desktop/proj-aws/lib/python3.6/site-packages/celery
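One frequent cause of broker-URL parse errors with the SQS transport is unescaped AWS credentials embedded in the URL (secrets often contain `/` or `+`, which confuse URL parsing and can surface as odd `ValueError`s). A hedged sketch of URL-encoding both parts before building the broker URL; the helper name is ours, not Celery's:

```python
from urllib.parse import quote


def sqs_broker_url(access_key: str, secret_key: str) -> str:
    """Build an SQS broker URL with the credentials percent-encoded,
    so characters like '/' and '+' in the secret cannot break parsing."""
    return "sqs://{}:{}@".format(quote(access_key, safe=""),
                                 quote(secret_key, safe=""))


# Then in celery.py / Django settings, something like:
# broker_url = sqs_broker_url(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
```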

Create Amazon SQS backup automatically in another queue

只愿长相守 submitted on 2020-03-05 04:12:08
Question: I'm wondering whether there is a way to create two Amazon SQS queues: one for the main activity and the other as a backup. Is there a way to make the first queue send every received message to another SQS queue automatically? Answer 1: You cannot "back up" an Amazon SQS queue. However, if you configure the source system to send the message to an Amazon SNS topic, then multiple Amazon SQS queues can subscribe to the Amazon SNS topic. When doing this, I recommend you use raw message delivery to ensure that
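The fan-out behavior the answer describes can be illustrated with a toy model: one publish to the topic is delivered to every subscribed queue. Plain Python lists stand in for SQS queues here; this shows the shape of the pattern, not the AWS API:

```python
class Topic:
    """Toy SNS-style topic: publishing delivers a copy to each subscriber."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, queue):
        self._subscribers.append(queue)

    def publish(self, message):
        # every subscribed queue receives its own copy of the message
        for queue in self._subscribers:
            queue.append(message)


main_queue, backup_queue = [], []
topic = Topic()
topic.subscribe(main_queue)
topic.subscribe(backup_queue)
topic.publish("order-123")
```

With real AWS resources, the source system publishes to the SNS topic and both SQS queues hold subscriptions to it.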

How to specify a dead-letter dependency using modules?

随声附和 submitted on 2020-03-04 05:40:18
Question: I have the following core module, based off this official module: module "sqs" { source = "github.com/terraform-aws-modules/terraform-aws-sqs?ref=0d48cbdb6bf924a278d3f7fa326a2a1c864447e2" name = "${var.site_env}-sqs-${var.service_name}" } I'd like to create two queues: xyz and xyz_dead. xyz sends its dead-letter messages to xyz_dead. module "xyz_queue" { source = "../helpers/sqs" service_name = "xyz" redrive_policy = <<POLICY { "deadLetterTargetArn" : "${data.TODO.TODO.arn}", "maxReceiveCount"
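A hedged sketch of one way to wire this up: instantiate the dead-letter module first and reference its ARN output from the main queue's redrive policy, which also gives Terraform the implicit ordering. This assumes the helper module forwards a `redrive_policy` variable to the upstream module and that the upstream module exposes the queue ARN as an output; the output name (`this_sqs_queue_arn` below) varies by module version, so treat it as an assumption:

```hcl
module "xyz_dead_queue" {
  source       = "../helpers/sqs"
  service_name = "xyz_dead"
}

module "xyz_queue" {
  source       = "../helpers/sqs"
  service_name = "xyz"

  # Referencing the dead-letter module's output creates the dependency.
  redrive_policy = jsonencode({
    deadLetterTargetArn = module.xyz_dead_queue.this_sqs_queue_arn
    maxReceiveCount     = 5
  })
}
```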

SQS message acknowledgement

本小妞迷上赌 submitted on 2020-02-03 10:41:28
Question: My Spring Boot application listens to an Amazon SQS queue. Right now I need to implement correct message acknowledgement: receive a message, run some business logic, and only after that, in case of success, acknowledge the message (delete it from the queue). In case of an error in my business logic, for example, the message must be re-enqueued. This is my SQS config: /** * AWS Credentials Bean */ @Bean public AWSCredentials awsCredentials() { return new BasicAWSCredentials(accessKey,

How do I fail a specific SQS message in a batch from a Lambda?

馋奶兔 submitted on 2020-02-02 03:43:38
Question: I have a Lambda with an SQS trigger. When it fires, a batch of records from SQS comes in (usually about 10 at a time, I think). If I return a failure status code from the handler, all 10 messages will be retried. If I return a success code, they'll all be removed from the queue. What if 1 out of those 10 messages failed and I want to retry just that one? exports.handler = async (event) => { for(const e of event.Records){ try { let body = JSON.parse(e.body); // do things } catch(e){ // one
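Lambda's SQS integration later gained a partial-batch response for exactly this: with `ReportBatchItemFailures` enabled on the event source mapping, the handler returns the `messageId`s that failed and only those are retried. A sketch in Python rather than the question's Node.js, for consistency with the other examples here; `process` is a hypothetical stand-in for the business logic:

```python
import json


def process(body):
    # hypothetical business logic; raises to simulate a per-message failure
    if body.get("fail"):
        raise ValueError("simulated failure")


def handler(event, context=None):
    failures = []
    for record in event["Records"]:
        try:
            body = json.loads(record["body"])
            process(body)
        except Exception:
            # report only this message as failed; the rest of the batch
            # is considered successful and gets deleted from the queue
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

Without that setting enabled, the classic workaround is to delete successful messages yourself and then throw, so only the unhandled ones reappear.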

Can I trigger the same AWS Lambda from multiple SQS queues?

南笙酒味 submitted on 2020-01-30 12:12:12
Question: I want to trigger a Lambda function from multiple SQS queues. Most of the processing the Lambda does is the same; just one small step is based on the table name. I don't want to maintain two separate Lambdas for that. What are the pros and cons of having one Lambda versus multiple? Answer 1: Yes, there's no reason you can't configure it that way. It should work fine. Source: https://stackoverflow.com/questions/59397753/can-i-trigger-same-aws-lambda-from-multiple-sqs
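For the table-specific step, the handler can branch on which queue each record came from: every SQS record carries an `eventSourceARN`, and the queue name is its last `:`-separated segment. A sketch with a made-up queue-to-table mapping (the names are assumptions for illustration):

```python
# Hypothetical mapping from source queue name to target table.
TABLE_BY_QUEUE = {
    "orders-queue": "orders",
    "invoices-queue": "invoices",
}


def handler(event, context=None):
    handled = []
    for record in event["Records"]:
        # e.g. "arn:aws:sqs:us-east-1:123456789012:orders-queue"
        queue_name = record["eventSourceARN"].split(":")[-1]
        table = TABLE_BY_QUEUE.get(queue_name, "default")
        # ... shared processing here, then the one table-specific step ...
        handled.append((table, record["body"]))
    return handled
```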