amazon-s3

How to access the file from an expired Amazon S3 URL

Submitted by 雨燕双飞 on 2020-01-17 15:23:52
Question: I am trying to access a file on Amazon S3, but I am getting an error that the URL has expired. Does an expired URL delete the file? Or can the file still be accessed by regenerating a new URL? How can I regenerate a new URL?

Answer 1: You can either generate a new pre-signed URL or access the file via the S3 API. If you can't do either of those, you can't access the file.

Source: https://stackoverflow.com/questions/24805094/how-to-access-the-file-from-amazon-s3-expired-url
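For illustration, a minimal boto3 sketch of the regeneration mentioned in the answer above; the bucket and key names are placeholders, not taken from the question. Expiry only invalidates the URL, not the stored object.

import boto3

s3 = boto3.client('s3')

# A pre-signed URL only embeds temporary access; when it expires the object
# itself is untouched. Generating a fresh URL restores access.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'path/to/file.txt'},  # placeholders
    ExpiresIn=3600  # seconds the new URL stays valid
)
print(url)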

aws-sdk putObject Access Denied Request.extractError

Submitted by て烟熏妆下的殇ゞ on 2020-01-17 05:59:46
Question: I have the following policy attached to the IAM user I am using:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1468642330000",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::elasticbeanstalk-ap-southeast-1-648213736065/documents/*"]
    }
  ]
}

The problem is that when I do something like:

return readFile(file.path)
  .then(function(buffer) {
    var s3obj = s3.putObject({
      Bucket: bucket,
      Key: `documents/${destFileName}`,
      Body: buffer
    });
    return s3obj.promise();
  });

I get: …
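As a hedged aside (the actual error is truncated above, so this is an assumption rather than the confirmed fix): a policy scoped only to an object ARN grants no bucket-level permissions at all, which is a frequent source of Access Denied errors from the SDK. A sketch of a policy that covers both levels for the same bucket and prefix might look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ObjectLevel",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::elasticbeanstalk-ap-southeast-1-648213736065/documents/*"
    },
    {
      "Sid": "BucketLevel",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::elasticbeanstalk-ap-southeast-1-648213736065"
    }
  ]
}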

When creating a Hive table against a CSV saved in S3, do I absolutely have to order the fields in the order of the comma-separated values in the CSV rows?

Submitted by 橙三吉。 on 2020-01-17 01:47:05
Question: When creating a Hive table against a CSV saved in S3, do I absolutely have to declare the fields in the same order as the comma-separated values in the CSV rows? The CSV has its first row as a header. I understand that CSV is row-based, not columnar, but I was wondering if there is a way to match the header values with the field names of the Hive table and order the columns differently.

Answer 1: Yes, columns in the table definition (DDL) should be in the same order as in the underlying CSV files. You can skip the header …
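For illustration only (table name, column names, and S3 path are made up, not from the question), a Hive DDL sketch that maps columns positionally to the CSV and skips the header row:

CREATE EXTERNAL TABLE my_table (
  -- columns must be listed in the same left-to-right order as the CSV values
  id     STRING,
  name   STRING,
  amount DOUBLE
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 's3://my-bucket/path/to/csv/'
TBLPROPERTIES ('skip.header.line.count'='1');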

AWS S3 Transfer Between Accounts Not Working

Submitted by 南楼画角 on 2020-01-16 18:26:11
Question: I am trying to copy data from a bucket in one account, where I have an IAM user but no admin rights, to a bucket in another account, where I am an admin, and it is failing. I can't even ls the source bucket. I've followed the directions from AWS and various sources online to give myself list/read/get permissions on the source bucket, with no success. I can provide the details (e.g., the bucket policy JSON), but it is what is in the AWS docs and other places. What I've done works between two …
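For reference, a minimal boto3 sketch of the cross-account copy step itself, assuming the source bucket's policy already grants the destination-account user s3:ListBucket and s3:GetObject (bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')  # credentials of the destination-account user

# Server-side copy; needs read access on the source bucket
# and write access on the destination bucket.
s3.copy_object(
    Bucket='destination-bucket',
    Key='copied/key.txt',
    CopySource={'Bucket': 'source-bucket', 'Key': 'key.txt'}
)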

Unable to connect to S3 from Lambda/Python/Boto3 when VPC is enabled

Submitted by 孤人 on 2020-01-16 18:23:10
Question: I have a very simple Python function in a Lambda which runs fine if I leave VPC access disabled:

import json
import boto3
import botocore

def lambda_handler(event, context):
    s3 = boto3.client('s3', 'us-east-1',
                      config=botocore.config.Config(s3={'addressing_style': 'path'}))
    keys = []
    resp = s3.list_objects_v2(Bucket='[BUCKET_NAME]')
    for obj in resp['Contents']:
        print(obj['Key'])
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

When VPC is enabled, the S3 connection continually …
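The question is cut off above, so this is an assumption rather than the confirmed cause: a Lambda attached to a VPC loses its default internet path, so S3 calls typically hang unless the VPC has a NAT gateway or an S3 gateway endpoint. A hedged boto3 sketch of creating such an endpoint (all IDs are placeholders):

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# A gateway endpoint routes S3 traffic inside the VPC,
# so the Lambda can reach S3 without a NAT gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType='Gateway',
    VpcId='vpc-0123456789abcdef0',           # placeholder
    ServiceName='com.amazonaws.us-east-1.s3',
    RouteTableIds=['rtb-0123456789abcdef0']  # placeholder
)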

Can't get Amazon S3 Cross Region Replication between two accounts to work

Submitted by 爷,独闯天下 on 2020-01-16 14:42:10
Question: I'm hoping someone can help me with an Amazon S3 Cross-Region Replication query. I have two Amazon AWS accounts, each with a bucket in a different region. I want to replicate the data from one bucket to the other, and from what I understand this should be a simple process to set up. However, I'm really struggling and I don't know what I'm doing wrong. I've followed a lot of instructions online, including going through various AWS tutorials and many examples, but I can't get the data to …
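For orientation, a hedged boto3 sketch of the replication configuration applied to the source bucket; the role ARN, bucket names, and account ID are placeholders. It assumes versioning is already enabled on both buckets and that the destination account's bucket policy lets the replication role write to it.

import boto3

s3 = boto3.client('s3')

s3.put_bucket_replication(
    Bucket='source-bucket',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::111111111111:role/replication-role',  # placeholder
        'Rules': [{
            'ID': 'replicate-all',
            'Status': 'Enabled',
            'Prefix': '',  # empty prefix = replicate the whole bucket
            'Destination': {
                'Bucket': 'arn:aws:s3:::destination-bucket',
                # Common for cross-account setups: hand object ownership
                # to the destination account.
                'Account': '222222222222',
                'AccessControlTranslation': {'Owner': 'Destination'}
            }
        }]
    }
)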

How to configure AWS4 S3 Authorization in Django?

Submitted by 和自甴很熟 on 2020-01-16 13:12:06
Question: I configured an S3 bucket for the Frankfurt region. Although my Django-based service is able to write files to the bucket, whenever it tries to read them there is an InvalidRequest error telling me to upgrade the authorization mechanism:

<Error>
  <Code>InvalidRequest</Code>
  <Message>
    The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
  </Message>
  <RequestId>17E9629D33BF1E24</RequestId>
  <HostId> ... </HostId>
</Error>

Is the cause for this error buried in my incorrect …
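The question does not say which storage backend is in use, so the following is only a sketch assuming the django-storages S3 backend with boto3. Frankfurt (eu-central-1) accepts only Signature Version 4, so the region and signature version must be set explicitly:

# settings.py (sketch; assumes django-storages' S3 backend)
AWS_STORAGE_BUCKET_NAME = 'my-frankfurt-bucket'  # placeholder
AWS_S3_REGION_NAME = 'eu-central-1'              # Frankfurt supports SigV4 only
AWS_S3_SIGNATURE_VERSION = 's3v4'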

Calculate S3 ETag locally using spark-md5

Submitted by 半世苍凉 on 2020-01-16 09:36:09
Question: I have uploaded a 14 MB file to S3 in 5 MB chunks and, using spark-md5, calculated the hash of each chunk. The individual hash of each chunk (generated by spark-md5) matches the ETag of each chunk uploaded to S3. But the ETag of the full upload on S3 does not match the locally calculated hash generated by spark-md5. These are my steps for the local hash:

1. Generate the hash (with spark-md5) of each chunk
2. Join the hash of each chunk
3. Convert to hex
4. Calculated hash …
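As an aside (not from the thread), the multipart ETag is not the MD5 of the whole file: S3 concatenates the raw binary MD5 digests of each part, takes the MD5 of that concatenation, and appends "-<part count>". Joining the hex strings of the part hashes, as in the steps above, gives a different result. A Python sketch of the calculation, assuming 5 MB parts:

import hashlib

def multipart_etag(path, chunk_size=5 * 1024 * 1024):
    # MD5 each part, concatenate the *binary* digests (not the hex strings),
    # then MD5 the concatenation and append the part count.
    digests = []
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digests.append(hashlib.md5(chunk).digest())
    combined = hashlib.md5(b''.join(digests))
    return '{}-{}'.format(combined.hexdigest(), len(digests))

# print(multipart_etag('file.bin'))  # e.g. "9b2cf535f27731c974343645a3985328-3"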

Questions about S3 Server access log format

Submitted by 荒凉一梦 on 2020-01-16 08:35:14
Question: I'm learning about S3 buckets and I read the documentation on the S3 server access log format. From the given examples, I'm still not sure what Bytes Sent, Object Size, Total Time, Turn-Around Time, Referer, User-Agent, and Version ID are. The documentation gives an example for each, but I didn't find the relevant information in the five example log records. Can someone explain a bit, with an example? Many thanks.

Source: https://stackoverflow.com/questions/59137987/questions-about-s3-server-access-log-format
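For orientation only (this is a paraphrase of the documented field meanings, not an answer from the thread), a rough Python sketch that splits one classic 18-field access-log record into named fields; newer records append extra fields after Version ID, which this sketch simply ignores:

import shlex

FIELDS = [
    'bucket_owner', 'bucket', 'time', 'remote_ip', 'requester',
    'request_id', 'operation', 'key', 'request_uri', 'http_status',
    'error_code',
    'bytes_sent',        # response bytes sent, excluding HTTP headers; '-' if zero
    'object_size',       # total size of the object in bytes
    'total_time',        # ms from request received to last byte of response sent
    'turn_around_time',  # ms S3 spent processing before starting the response
    'referer',           # HTTP Referer header value, or '-'
    'user_agent',        # HTTP User-Agent header value, or '-'
    'version_id',        # object version ID, or '-' when versioning isn't involved
]

def parse(line):
    # shlex keeps the quoted Request-URI / Referer / User-Agent fields intact;
    # the bracketed timestamp arrives as two tokens, so re-join them.
    tokens = shlex.split(line)
    tokens[2:4] = [' '.join(tokens[2:4])]
    return dict(zip(FIELDS, tokens))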