amazon-s3

How to relationalize JSON containing arrays

假如想象 submitted on 2021-01-28 14:09:14
Question: I am using AWS Glue to read a data file containing JSON (on S3). The JSON has its data contained in an array. I have tried the relationalize() function, but it does not work on arrays; it works on nested JSON, but that is not the format of this input. Is there a way to relationalize JSON with arrays in it? Input data: { "ID":"1234", "territory":"US", "imgList":[ { "type":"box", "locale":"en-US", "url":"boxart/url.jpg" }, { "type":"square", "locale":"en-US", "url":"square/url.jpg" } ] } Code:
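The excerpt cuts off before the code. As a rough illustration only (not the asker's code), the sketch below shows how Glue's Relationalize transform can be pointed at such a file; the bucket paths and root table name are placeholders, and the array typically comes back as a separate table keyed to the root table.

    from awsglue.context import GlueContext
    from awsglue.transforms import Relationalize
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext.getOrCreate())

    # Read the JSON file from S3 into a DynamicFrame.
    dyf = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": ["s3://example-bucket/input/"]},
        format="json",
    )

    # Relationalize flattens nested structs and pivots arrays (such as imgList)
    # into separate tables linked by generated join keys.
    frames = Relationalize.apply(
        frame=dyf,
        staging_path="s3://example-bucket/tmp/",
        name="root",
        transformation_ctx="relationalize",
    )

    # frames is a DynamicFrameCollection: "root" holds the top-level columns,
    # and a companion table (e.g. "root_imgList") holds one row per array element.
    for key in frames.keys():
        frames.select(key).toDF().show()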

NoMethodError: undefined method `match' for nil:NilClass

爱⌒轻易说出口 submitted on 2021-01-28 13:50:33
Question: I have a big problem with my application. My site is on AWS, and this morning I ran cap production deploy to put the new version online. Now my URL no longer works, which is a very big problem for me, so I am asking my question here. I use Ruby on Rails, EC2, S3, and Shrine, and when I run RAILS_ENV=production rake db:migrate I get the error: rake aborted! NoMethodError: undefined method `match' for nil:NilClass /home/coeurcoeur/.rvm/gems/ruby-2.5.0/gems/aws

Ghostscript PDF file compression using PHP's exec (Laravel on Docker)

风流意气都作罢 submitted on 2021-01-28 12:02:08
Question: What needs to be done: a user has to be able to upload a PDF; the file is uploaded to an Amazon S3 bucket and should then be compressed. Current environment: a Laravel application mounted on Docker (php:7.4-fpm-alpine3.11, GPL Ghostscript 9.50, Laravel Framework 5.8.37); an Amazon S3 bucket to save documents in. The compression script is in a shell file that is made executable and added to /usr/local/bin as shrink. The shell script is not explicitly added in the Docker container; should it be? Current flow: User
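The excerpt is cut off, but the compression step itself is independent of Laravel. Below is a minimal sketch of an equivalent Ghostscript invocation, shown from Python's subprocess purely for illustration since the original calls a shell script named shrink via PHP's exec; the file paths and the /ebook quality setting are assumptions.

    import subprocess

    def shrink_pdf(src: str, dst: str) -> None:
        # /ebook downsamples images to roughly 150 dpi; /screen compresses harder.
        subprocess.run(
            [
                "gs",
                "-sDEVICE=pdfwrite",
                "-dCompatibilityLevel=1.4",
                "-dPDFSETTINGS=/ebook",
                "-dNOPAUSE",
                "-dQUIET",
                "-dBATCH",
                f"-sOutputFile={dst}",
                src,
            ],
            check=True,
        )

    shrink_pdf("/tmp/input.pdf", "/tmp/output.pdf")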

Understanding AWS Lambda Limits

守給你的承諾、 submitted on 2021-01-28 11:47:40
Question: I am trying to understand the "Invoke request body payload size" limit. Is this limit for the response provided by the Lambda function? My use case: an S3 event triggers a Lambda function; the Lambda function calls the S3 bucket to fetch the object; the object (a JSON file) is expected to be at most 1 GB. The Lambda function processes the data from the JSON file and makes individual/batch calls to DynamoDB, inserting the necessary information derived from each JSON object in the file. Each record DynamoDB
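The excerpt ends here. For context, the S3 event notification that invokes the function contains only the bucket and key, not the 1 GB object itself, so the object is fetched separately with the SDK. The sketch below (with a made-up table name and item fields) shows that shape of handler; reading a 1 GB file fully into memory also assumes the function's memory setting is large enough.

    import json
    import boto3

    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table("example-table")

    def handler(event, context):
        # The S3 notification carries only bucket/key metadata, not the object.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            items = json.loads(body)  # assumes the file is a JSON array of objects

            # batch_writer buffers puts and issues 25-item BatchWriteItem calls.
            with table.batch_writer() as batch:
                for item in items:
                    batch.put_item(Item={"id": item["id"]})  # field names assumed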

AWS lambda tar file extraction doesn't seem to work

本秂侑毒 submitted on 2021-01-28 11:27:38
Question: I'm trying to run serverless LibreOffice based on this tutorial. Here is the full Python Lambda function: import boto3 import os s3_bucket = boto3.resource("s3").Bucket("lambda-libreoffice-demo") os.system("curl https://s3.amazonaws.com/lambda-libreoffice-demo/lo.tar.gz -o /tmp/lo.tar.gz && cd /tmp && tar -xf /tmp/lo.tar.gz") convertCommand = "instdir/program/soffice --headless --invisible --nodefault --nofirststartwizard --nolockcheck --nologo --norestore --convert-to pdf --outdir /tmp" def
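The excerpt stops at the handler definition. One way to see why the extraction "doesn't seem to work" is to replace the os.system call, whose failures are easy to miss, with boto3 and the tarfile module so errors raise exceptions; a minimal sketch (bucket and key taken from the tutorial's example) follows.

    import tarfile
    import boto3

    def download_and_extract():
        # /tmp is the only writable location in the Lambda execution environment.
        boto3.client("s3").download_file(
            "lambda-libreoffice-demo", "lo.tar.gz", "/tmp/lo.tar.gz"
        )
        # tarfile handles the gzip compression transparently and raises on failure.
        with tarfile.open("/tmp/lo.tar.gz") as archive:
            archive.extractall(path="/tmp")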

R reactiveFileReader reading from aws s3 bucket

岁酱吖の submitted on 2021-01-28 11:08:53
Question: I can read a csv from my S3 bucket using the code below: aws.s3::s3read_using(read.csv, stringsAsFactors=FALSE, check.names=FALSE, object=paste0(Sys.getenv("BUCKET_PREFIX"), "/a.csv"), bucket = Sys.getenv("AWS_BUCKET_NAME"), opts=bucket_opts ) I want to change this to use the reactiveFileReader function. I tried the code below with no success; any idea what I am doing wrong? reactiveFileReader( intervalMillis = 10000, session= session, filePath = paste0(Sys.getenv("BUCKET_PREFIX"), "/a.csv"),

Alternative for TransferManager in AWS sdk Java 2.x

北城以北 submitted on 2021-01-28 10:58:25
Question: The TransferManager class has been removed from the AWS SDK for Java 2.x. What is the alternative to TransferManager, and how can it be used? Answer 1: TransferManager wasn't removed; it just has not been implemented in Java 2.x yet. You can see the project to implement TransferManager on their GitHub. It is currently in development, and there does not appear to be a timeline for when this will be completed. You can use the S3Client.putObject method to transfer an object over to your S3 bucket, or if you really must

Multipart upload to Amazon S3 using Javascript in Browser

无人久伴 submitted on 2021-01-28 10:01:46
Question: I am working on a project that requires me to upload large files directly from the browser to Amazon S3 using JavaScript. Does anyone know how to do it? Is there an Amazon JavaScript SDK that supports this? Answer 1: Try EvaporateJS. It has a large community and broad browser support. https://github.com/TTLabs/EvaporateJS Answer 2: Use aws-sdk-js to upload directly to S3 from the browser. In my case the file sizes could go up to 100 GB. I used multipart upload, which is very easy to use. I had to upload in a private

Unable to trigger AWS Lambda by upload to AWS S3

ぐ巨炮叔叔 submitted on 2021-01-28 09:54:26
Question: I am trying to build a Kibana dashboard fed with Twitter data collected via AWS Kinesis Firehose, where data passes into an S3 bucket, which triggers a Lambda function that passes the data to AWS Elasticsearch and then to Kibana. I am following this blog: https://aws.amazon.com/blogs/big-data/building-a-near-real-time-discovery-platform-with-aws/ The data is loading into the S3 bucket correctly, but it never arrives in Kibana. I believe this is because the Lambda function is not being triggered
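The excerpt ends here. A common reason a function is never invoked is that the bucket's event notification or the function's resource-based permission is missing; the sketch below (with a placeholder bucket name and function ARN, not taken from the blog post) shows how both can be set up with boto3.

    import boto3

    BUCKET = "example-tweet-bucket"
    FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:s3-twitter-to-es"

    # Allow the S3 service to invoke the function.
    boto3.client("lambda").add_permission(
        FunctionName=FUNCTION_ARN,
        StatementId="allow-s3-invoke",
        Action="lambda:InvokeFunction",
        Principal="s3.amazonaws.com",
        SourceArn=f"arn:aws:s3:::{BUCKET}",
    )

    # Send ObjectCreated events from the bucket to the function.
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=BUCKET,
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "LambdaFunctionArn": FUNCTION_ARN,
                    "Events": ["s3:ObjectCreated:*"],
                }
            ]
        },
    )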