amazon-s3

Kafka Connect S3 Connector OutOfMemory errors with TimeBasedPartitioner

邮差的信 submitted on 2021-01-21 03:51:19
Question: I'm currently working with the Kafka Connect S3 Sink Connector 3.3.1 to copy Kafka messages over to S3, and I get OutOfMemory errors when processing late data. I know it looks like a long question, but I tried my best to make it clear and simple to understand. I highly appreciate your help. High-level info: the connector does a simple byte-to-byte copy of the Kafka messages and adds the length of each message at the beginning of the byte array (for decompression purposes). This is the role of …
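
A likely contributor to this symptom (an assumption; the excerpt cuts off before the details): the S3 sink buffers up to s3.part.size bytes on-heap for every output partition it has open, and a TimeBasedPartitioner fed late-arriving data keeps many partitions open at once. A hedged sketch of connector properties that bound that memory; the connector, topic, and bucket names and all values are illustrative:

    # Assumed names; only the settings relevant to memory use are shown
    name=s3-sink
    connector.class=io.confluent.connect.s3.S3SinkConnector
    topics=my-topic
    s3.bucket.name=my-bucket
    s3.region=us-east-1
    storage.class=io.confluent.connect.s3.storage.S3Storage
    format.class=io.confluent.connect.s3.format.bytearray.ByteArrayFormat
    # Each open partition buffers s3.part.size bytes on-heap; 5242880 (5 MB) is the minimum
    s3.part.size=5242880
    flush.size=10000
    # Close files on a wall-clock schedule so partitions opened by late data don't pile up
    rotate.schedule.interval.ms=60000
    partitioner.class=io.confluent.connect.storage.partitioner.TimeBasedPartitioner
    partition.duration.ms=3600000
    path.format='year'=YYYY/'month'=MM/'day'=dd/'hour'=HH
    locale=en
    timezone=UTC
    timestamp.extractor=Record

With these numbers, heap use for the sink scales roughly as (open partitions) × 5 MB, so the rotate schedule is what keeps the partition count, and therefore memory, bounded.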

Spring Batch and S3 Integration - how to remove null characters first from S3 before start reading?

冷暖自知 submitted on 2021-01-19 09:52:40
Question: In my case we get a flat file from the source system and keep it on a server, and an automated process then pushes the file to an Amazon S3 bucket. The source system is a mainframe that puts null characters into the flat file, and that is unavoidable on their side. So before we start reading the flat file we need to remove the null characters from the file in the S3 bucket (like we do with the Linux command tr '\000' ' ' < "%s" > "%s"). So far I don't see a way …
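
Since the excerpt cuts off before the question does, here is a minimal sketch of that tr '\000' ' ' step as a standalone pre-processing pass over the S3 object, written in Node.js to match the other code on this page; the same byte replacement could equally be wrapped around the InputStream that the Spring Batch reader consumes. Bucket and key names are assumptions:

    // Copy an S3 object while replacing NUL bytes (0x00) with spaces,
    // i.e. the streaming equivalent of: tr '\000' ' ' < in > out
    const AWS = require('aws-sdk');
    const { Transform } = require('stream');

    const s3 = new AWS.S3();

    function stripNulls({ bucket, sourceKey, cleanKey }) {
      const replaceNulls = new Transform({
        transform(chunk, _encoding, callback) {
          // Buffers are mutable, so rewrite 0x00 -> 0x20 (space) in place
          for (let i = 0; i < chunk.length; i++) {
            if (chunk[i] === 0x00) chunk[i] = 0x20;
          }
          callback(null, chunk);
        },
      });

      const cleaned = s3
        .getObject({ Bucket: bucket, Key: sourceKey })
        .createReadStream()
        .pipe(replaceNulls);

      // Upload the cleaned copy; the batch job then reads cleanKey instead
      return s3.upload({ Bucket: bucket, Key: cleanKey, Body: cleaned }).promise();
    }

Nothing is loaded fully into memory, so this works for large flat files as well.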

Lambda S3 Put function not triggering for larger files

走远了吗. submitted on 2021-01-18 10:11:27
Question: I am currently exploring storing the attachments of an email separately from the .eml file itself. I have an SES rule set that delivers inbound email to a bucket. When the email lands in the bucket, an S3 Put Lambda function parses the raw email (MIME format), base64-decodes the attachment buffers, and does a putObject for each attachment and the original .eml file to a new bucket. My problem is that this Lambda function does not trigger for emails with attachments exceeding ~3-4 MB. …
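
One commonly reported cause of exactly this symptom (an assumption here, since the excerpt is truncated): the bucket's event notification is registered only for s3:ObjectCreated:Put, but S3 writes larger objects via multipart upload, which emits s3:ObjectCreated:CompleteMultipartUpload instead; subscribing to s3:ObjectCreated:* covers both. For reference, a sketch of the handler flow the question describes, assuming the mailparser npm package; the destination bucket name and key layout are made up:

    const AWS = require('aws-sdk');
    const { simpleParser } = require('mailparser');

    const s3 = new AWS.S3();
    const DEST_BUCKET = 'parsed-mail-bucket'; // assumed name

    exports.handler = async (event) => {
      const record = event.Records[0].s3;
      // S3 URL-encodes object keys in event payloads
      const key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

      // Fetch the raw MIME email that SES delivered to the inbound bucket
      const raw = await s3.getObject({ Bucket: record.bucket.name, Key: key }).promise();

      // mailparser decodes base64 parts and yields attachments as Buffers
      const mail = await simpleParser(raw.Body);

      for (const att of mail.attachments) {
        await s3.putObject({
          Bucket: DEST_BUCKET,
          Key: `${key}/attachments/${att.filename}`,
          Body: att.content,
          ContentType: att.contentType,
        }).promise();
      }

      // Keep the original .eml next to its attachments
      await s3.putObject({
        Bucket: DEST_BUCKET,
        Key: `${key}/original.eml`,
        Body: raw.Body,
      }).promise();
    };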

Uploading multiple images with multer to AWS S3

那年仲夏 submitted on 2021-01-09 09:45:12
Question: I currently have a backend where I can upload multiple images using multer. These images are passed through sharp to create a full-size image and a thumbnail. A row for the post is stored in the posts table in my database, and the fileName of each image is stored in the Post_Images table. My problem is that I am currently using multer's diskStorage to keep the images on disk, and now I want to store them on AWS S3. I was trying to implement this but it did …
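
A hedged sketch of one way to do this (not the poster's actual code): switch multer to memoryStorage so each upload arrives as a Buffer, produce both variants with sharp, and send them to S3 with upload(); the bucket name, resize dimensions, field name, and route are all assumptions:

    const express = require('express');
    const multer = require('multer');
    const sharp = require('sharp');
    const AWS = require('aws-sdk');
    const { v4: uuidv4 } = require('uuid');

    const app = express();
    const s3 = new AWS.S3();
    // memoryStorage hands each file to the handler as file.buffer
    const upload = multer({ storage: multer.memoryStorage() });

    app.post('/posts', upload.array('images', 5), async (req, res) => {
      const fileNames = [];

      for (const file of req.files) {
        const fileName = `${uuidv4()}.jpeg`; // one name shared by both sizes

        const full = await sharp(file.buffer).resize(1080).jpeg().toBuffer();
        const thumb = await sharp(file.buffer).resize(200).jpeg().toBuffer();

        await Promise.all([
          s3.upload({ Bucket: 'my-bucket', Key: `full/${fileName}`, Body: full }).promise(),
          s3.upload({ Bucket: 'my-bucket', Key: `thumb/${fileName}`, Body: thumb }).promise(),
        ]);

        fileNames.push(fileName);
      }

      // ...insert the post row and fileNames into posts / Post_Images here...
      res.json({ fileNames });
    });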

nodejs, multer, aws S3

爱⌒轻易说出口 submitted on 2021-01-09 07:41:00
Question: How do I apply a uuid and a date so that the filename stored in my database and the filename stored in my S3 bucket are the same? With the current implementation the uuid and the date are always the same, even if a post is made hours later. Can someone help? I would really appreciate it.

    const s3 = new AWS.S3({
      accessKeyId: process.env.AWS_ID,
      secretAccessKey: process.env.AWS_SECRET,
      region: process.env.AWS_REGION
    })
    const uid = uuidv4();
    const date = new Date().toISOString()
    const multerS3Config = …
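
The excerpt already shows the cause: uid and date are computed once, at module load, so every subsequent upload reuses the same pair. A sketch of the usual fix, assuming the multer-s3 package (req.uploadedFileName is a made-up property name): generate both values inside the key() callback, which runs once per uploaded file, and stash the result on req so the database row can store the identical string:

    const multer = require('multer');
    const multerS3 = require('multer-s3');
    const { v4: uuidv4 } = require('uuid');

    const multerS3Config = multerS3({
      s3: s3, // the AWS.S3 client created above
      bucket: process.env.AWS_BUCKET, // assumed env var
      key: (req, file, cb) => {
        // Evaluated per file at upload time, not once per process
        const fileName = `${uuidv4()}-${new Date().toISOString()}-${file.originalname}`;
        req.uploadedFileName = fileName; // write this exact string to the DB
        cb(null, fileName);
      },
    });

    const upload = multer({ storage: multerS3Config });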