amazon-s3

Uploading multiple files to AWS S3 using NodeJS

Submitted by 倾然丶 夕夏残阳落幕 on 2020-04-29 12:23:09
Question: I'm trying to upload all files within my directory to my S3 bucket using NodeJS. I'm able to upload one file at a time if I explicitly give the file path plus a literal string for the Key: field. Below is the script I'm using:

```javascript
var AWS = require('aws-sdk'),
    fs = require('fs');

// For dev purposes only
AWS.config.update({ accessKeyId: '...', secretAccessKey: '...' });

// reg ex to match
var re = /\.txt$/;

// ensure that this file is in the directory of the files you want to run the cronjob on
// …
```
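For comparison, here is a minimal sketch of the same idea (walk a directory and upload every file that matches a pattern) written in Python with boto3 rather than the Node.js SDK; the bucket name, source directory, and filename filter are placeholders.

```python
import os
import re
import boto3

# Placeholders (assumptions): bucket name, source directory, and filename filter.
BUCKET = "my-bucket"
SOURCE_DIR = "/path/to/files"
PATTERN = re.compile(r"\.txt$")

s3 = boto3.client("s3")

for name in os.listdir(SOURCE_DIR):
    if PATTERN.search(name):
        # The file name doubles as the object key, so each file gets its own key.
        s3.upload_file(os.path.join(SOURCE_DIR, name), BUCKET, name)
        print(f"Uploaded {name} to s3://{BUCKET}/{name}")
```

Each file keeps its own name as the object key; a prefix could be prepended to the key to group the uploads under one folder.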

Automate bulk loading of data from s3 to Aurora MySQL RDS instance

Submitted by 青春壹個敷衍的年華 on 2020-04-18 12:45:05
Question: I am relatively new to AWS, so I am not sure how to go about this. I have CSV files on S3 and I have already set up the Aurora instance on RDS. What I am unable to figure out is how to automate the bulk loading of the data, essentially doing a LOAD DATA FROM S3 kind of thing using something like AWS Glue. I also tried the native Glue S3-to-RDS flow, but that is essentially a bunch of inserts into RDS over a JDBC connection, which is also super slow for large datasets. I…
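Aurora MySQL can ingest the CSV directly with its LOAD DATA FROM S3 statement, which avoids row-by-row JDBC inserts; the cluster first needs an IAM role with read access to the bucket associated with it. A minimal sketch of issuing that statement from Python, where the host, credentials, table, and S3 path are placeholders and the PyMySQL driver is assumed:

```python
import pymysql  # assumption: the PyMySQL driver is installed

# Placeholders: host, credentials, database, table, and S3 path are assumptions.
conn = pymysql.connect(
    host="my-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    user="admin",
    password="secret",
    database="mydb",
)

# Aurora MySQL reads the CSV straight from S3; the cluster must have an IAM
# role attached that allows reading the bucket.
load_sql = """
    LOAD DATA FROM S3 's3://my-bucket/data/file.csv'
    INTO TABLE my_table
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    IGNORE 1 LINES
"""

with conn.cursor() as cur:
    cur.execute(load_sql)
conn.commit()
conn.close()
```

To automate it, the same snippet could run inside a Lambda function triggered by S3 object-created events, so each newly landed CSV kicks off its own load.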

SSIS: sending source OLE DB data to S3 buckets as Parquet files

Submitted by 南楼画角 on 2020-04-18 06:32:30
Question: My source is SQL Server and I am using SSIS to export data to S3 buckets, but now my requirement is to send the files in Parquet file format. Can you give me some clues on how to achieve this? Thanks, Ven

Answer 1: For folks stumbling on this answer, Apache Parquet is a project that specifies a columnar file format employed by Hadoop and other Apache projects. Unless you find a custom component or write some .NET code to do it, you're not going to be able to export data from SQL Server to a Parquet…
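If stepping outside SSIS is acceptable, one common workaround is a small script that reads the table with pandas, writes Parquet via pyarrow, and uploads the file to S3; it could also be launched from an SSIS Execute Process task. A sketch, where the connection string, query, bucket, and key are all placeholders:

```python
import boto3
import pandas as pd
import sqlalchemy

# Placeholders: connection string, query, bucket, and key are assumptions.
engine = sqlalchemy.create_engine(
    "mssql+pyodbc://user:password@myserver/mydb?driver=ODBC+Driver+17+for+SQL+Server"
)

# Pull the source rows, write them as Parquet, then push the file to S3.
df = pd.read_sql("SELECT * FROM dbo.MyTable", engine)
df.to_parquet("mytable.parquet", engine="pyarrow")

boto3.client("s3").upload_file("mytable.parquet", "my-bucket", "exports/mytable.parquet")
```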

How to create a signed s3 url for requester pays bucket in python

Submitted by 六眼飞鱼酱① on 2020-04-18 06:10:15
Question: I have a requester-pays bucket that I do not control, in the form s3://bucket-name/path-to-my-file. I am attempting to generate a presigned URL to send to a web app so it can render the file in the browser. I've gone through the boto S3 documentation but can't find anything that covers this :( My script below returns a URL that does not have access and returns this error from S3:

```xml
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>11DCA24D8DF2E9E8</RequestId>
  <HostId>SeTDlt66hPsj5…
```
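With boto3, the usual approach is to pass RequestPayer='requester' in the Params of generate_presigned_url so the signed GET request declares who pays. A minimal sketch, where the bucket and key are placeholders:

```python
import boto3

# Placeholders: bucket and key are assumptions.
s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={
        "Bucket": "bucket-name",
        "Key": "path-to-my-file",
        # Requester-pays buckets reject requests that do not declare the payer.
        "RequestPayer": "requester",
    },
    ExpiresIn=3600,  # URL lifetime in seconds
)
print(url)
```

The credentials used to sign the URL still need permission to read the object, and the signing account is billed for the request and data transfer.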

S3 SignedURL fails

Submitted by 我们两清 on 2020-04-18 05:48:37
Question: I'm trying to create a signed URL for a GET object request in S3. I have this code working flawlessly for putting objects in S3, but I can't seem to get it to work for GET. I sign the URL with this code:

```go
// Create the signed url using the company id
func (user *User) signURLForUser(sess *session.Session) (*URLSign, error) {
    svc := s3.New(sess)
    svc.Config.Region = aws.String(os.Getenv("REGION"))
    req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
        Bucket: aws.String("bucket"),
        Key:    aws.String(user…
```
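For reference, the equivalent presigned GET expressed in Python with boto3 is only a few lines; in this sketch the region, bucket, and key are placeholders, and the signing credentials still need s3:GetObject on the key:

```python
import boto3

# Placeholders: region, bucket, and key are assumptions.
s3 = boto3.client("s3", region_name="us-east-1")

url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "bucket", "Key": "path/to/object"},
    ExpiresIn=900,  # URL lifetime in seconds
)
print(url)
```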

File can be uploaded to S3 locally but not from within a container (Unable to locate credentials)

Submitted by 早过忘川 on 2020-04-18 01:09:30
Question: I have a Python script to upload a file to S3; the code is the same as in this question. I have a bash script that passes the AWS credentials. The file I want to upload is generated from a model running on Fargate (in a container), so I tried to run this Python script within the container to upload to S3. I've built the image, but when I run docker run containername it gives me this error:

```
INFO:root:Uploading to S3 from test.csv to bucket_name test.csv
File "/usr/local/lib/python3.6/dist…
```
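The "Unable to locate credentials" error usually means boto3 cannot find credentials inside the container: variables exported by the bash script on the host are not automatically visible to docker run. A minimal sketch, assuming the credentials are passed into the container as environment variables (on Fargate, a task IAM role makes even this unnecessary, since boto3 picks the role up automatically):

```python
import os
import boto3

# Placeholders: bucket and file names are assumptions. The two variables below
# must exist inside the container, e.g. passed with
#   docker run -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... containername
# On Fargate, a task IAM role supplies credentials without any of this.
session = boto3.session.Session(
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    region_name=os.environ.get("AWS_DEFAULT_REGION", "us-east-1"),
)

session.client("s3").upload_file("test.csv", "bucket_name", "test.csv")
```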

Use boto3 to upload a file to S3

Submitted by 懵懂的女人 on 2020-04-18 01:07:52
Question: I have a script to upload a CSV file, which is in a container, to an S3 bucket. I copied the file to my local machine and I'm testing the script locally, but I'm getting errors. I'm still learning all of this, trying to work out what part of the script I'm missing and how I can get it running and upload the file to S3. Here are the errors:

```
error_1:
Traceback (most recent call last):
  File "C:/Users/U12345/IdeaProjects/xxx/s3_upload.py", line 19, in <module>
    r'C:\Users\U12345\IdeaProjects\xxx\test_' + str…
```
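A working upload with boto3 only needs the local path, the bucket name, and the object key; the sketch below uses placeholder values for all three and wraps the call so S3 errors are reported instead of crashing the script:

```python
import boto3
from boto3.exceptions import S3UploadFailedError
from botocore.exceptions import ClientError

local_path = "test.csv"  # placeholder: path to the local CSV file
bucket = "my-bucket"     # placeholder bucket name
key = "test.csv"         # placeholder object key

s3 = boto3.client("s3")
try:
    s3.upload_file(local_path, bucket, key)
    print(f"Uploaded {local_path} to s3://{bucket}/{key}")
except (ClientError, S3UploadFailedError) as err:
    print(f"Upload failed: {err}")
```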

Access Denied while querying S3 files from AWS Athena within Lambda in different account

Submitted by 戏子无情 on 2020-04-16 21:16:22
Question: I am trying to query an Athena view from my Lambda code. I created the Athena table for S3 files that are in a different account. The Athena query editor gives me the error below:

```
Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; …
```

I tried accessing the Athena view from my Lambda code. I created a Lambda execution role and allowed this role in the bucket policy of the other account's S3 bucket as well, like below:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS"…
```
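For cross-account data, the Lambda execution role typically needs s3:GetObject (and usually s3:ListBucket) on the other account's bucket in addition to the bucket policy shown above, plus write access to the Athena query-results location. Assuming those permissions are in place, a minimal sketch of running the view from Lambda with boto3, where the database, view, and results bucket are placeholders:

```python
import time
import boto3

# Placeholders: database, view, and results location are assumptions.
athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT * FROM my_view LIMIT 10",
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-query-results-bucket/athena/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows)
else:
    print(f"Query ended in state {state}")
```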