Amazon S3 File Permissions, Access Denied when copied from another account

Submitted by 久未见 on 2019-12-29 05:07:10

Question


I have a set of video files that were copied from an AWS bucket in another account into my own bucket in my account.

I'm now running into a problem with all of these files: I receive Access Denied errors whenever I try to make them public.

Specifically, I log in to my AWS account, go into S3, and drill down through the folder structure to locate one of the video files.

When I look at this specific file, the Permissions tab does not show any permissions assigned to anyone. No users, groups, or system permissions have been assigned.

At the bottom of the Permissions tab, I see a small box that says "Error: Access Denied". I can't change anything about the file: I can't add metadata, I can't add a user to the file, and I cannot make the file public.

Is there a way I can gain control of these files so that I can make them public? There are over 15,000 files, around 60 GB in total. I'd like to avoid downloading and re-uploading all of them.

With some assistance and suggestions from the folks here, I have tried the following. I made a new folder in my bucket called "media".

I tried this command:

aws s3 cp s3://mybucket/2014/09/17/thumb.jpg s3://mybucket/media --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=my_aws_account_email_address

This fails with a fatal error: a 403 (Forbidden) when calling the HeadObject operation.
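
A quick way to confirm that this is an object-ownership problem is to ask S3 for the object's ACL: if the object still belongs to the other account, even the bucket owner is denied. A minimal boto3 sketch (not from the original post), reusing the bucket and key from the cp command above:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
try:
    # Succeeds only if the caller owns the object or holds READ_ACP on it
    acl = s3.get_object_acl(Bucket='mybucket', Key='2014/09/17/thumb.jpg')
    print(acl['Owner'], acl['Grants'])
except ClientError as err:
    # An AccessDenied here is consistent with the object still being
    # owned by the source account
    print('Cannot read ACL:', err)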


Answer 1:


A very interesting conundrum! Fortunately, there is a solution.

First, a recap:

  • Bucket A in Account A
  • Bucket B in Account B
  • User in Account A copies objects to Bucket B (having been granted appropriate permissions to do so)
  • Objects in Bucket B still belong to Account A and cannot be accessed by Account B

I managed to reproduce this and can confirm that users in Account B cannot access the file -- not even the root user in Account B!

Fortunately, things can be fixed. The aws s3 cp command in the AWS Command-Line Interface (CLI) can update permissions on a file when it is copied to the same name. However, to trigger this you also have to update something else, otherwise you get this error:

This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes.

Therefore, the permissions can be updated with this command:

aws s3 cp s3://my-bucket/ s3://my-bucket/ --recursive --acl bucket-owner-full-control --metadata "One=Two"
  • Must be run by an Account A user that has access permissions to the objects (e.g. the user who originally copied the objects to Bucket B)
  • The metadata content is unimportant, but needed to force the update
  • --acl bucket-owner-full-control will grant permission to Account B so you'll be able to use the objects as normal

End result: A bucket you can use!
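
The same fix can be scripted object-by-object with boto3 rather than the CLI. A minimal sketch, assuming it is run with Account A credentials and that this user can also list the bucket (the bucket name is a placeholder for Bucket B):

import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'  # placeholder for Bucket B

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get('Contents', []):
        # Copy each object onto itself. MetadataDirective='REPLACE' satisfies
        # the "something must change" rule, and the canned ACL hands full
        # control to the bucket owner (Account B).
        s3.copy_object(
            Bucket=bucket,
            Key=obj['Key'],
            CopySource={'Bucket': bucket, 'Key': obj['Key']},
            MetadataDirective='REPLACE',
            Metadata={'One': 'Two'},
            ACL='bucket-owner-full-control',
        )

Two caveats: MetadataDirective='REPLACE' replaces all user metadata (and resets the Content-Type unless you pass it again), and copy_object only handles objects up to 5 GB; larger objects need a multipart copy.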




Answer 2:


aws s3 cp s3://source-bucket/ s3://destination-bucket/ --recursive --acl bucket-owner-full-control



Answer 3:


In case anyone is trying to do the same thing, but using a Hadoop/Spark job instead of the AWS CLI:

  • Step 1: Grant the user in Account A appropriate permissions to copy objects to Bucket B (as mentioned in the answer above).
  • Step 2: Set the fs.s3a.acl.default configuration option in the Hadoop Configuration. This can be set in a conf file or in the program:

    Conf File:

    <property>
      <name>fs.s3a.acl.default</name>
      <description>Set a canned ACL for newly created and copied objects. Value may be Private, PublicRead, PublicReadWrite, AuthenticatedRead, LogDeliveryWrite, BucketOwnerRead, or BucketOwnerFullControl.</description>
      <value>$chooseOneFromDescription</value>
    </property>

    Programmatically:

    spark.sparkContext.hadoopConfiguration.set("fs.s3a.acl.default", "BucketOwnerFullControl")
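
In PySpark the same option can be supplied when the session is built; Spark forwards any spark.hadoop.* setting into the Hadoop Configuration used by the s3a connector. A minimal sketch, assuming hadoop-aws is on the classpath and the bucket paths are placeholders:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config('spark.hadoop.fs.s3a.acl.default', 'BucketOwnerFullControl')
    .getOrCreate()
)

# Every object written through s3a:// now carries the canned ACL,
# so the destination bucket's owner gets full control of it.
df = spark.read.parquet('s3a://source-bucket/input/')
df.write.parquet('s3a://destination-bucket/output/')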




Answer 4:


To correctly set the appropriate permissions for newly added files, add this bucket policy:

[...]
{
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::123456789012::user/their-user"
    },
    "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
    ],
    "Resource": "arn:aws:s3:::my-bucket/*"
}

And set the ACL for newly created files in code. Python example:

import boto3

client = boto3.client('s3')
local_file_path = '/home/me/data.csv'
bucket_name = 'my-bucket'
bucket_file_path = 'exports/data.csv'

# Upload with a canned ACL so the bucket owner gets full control of the object
client.upload_file(
    local_file_path,
    bucket_name,
    bucket_file_path,
    ExtraArgs={'ACL': 'bucket-owner-full-control'}
)

source: https://medium.com/artificial-industry/how-to-download-files-that-others-put-in-your-aws-s3-bucket-2269e20ed041 (disclaimer: written by me)




Answer 5:


I'm afraid you won't be able to transfer ownership as you wish. Here's what you did:

Old account copies objects into new account.

The "right" way of doing it (assuming you wanted to assume ownership on the new account) would be:

New account copies objects from old account.

See the small but important difference? The S3 docs kind of explain it.

I think you might get away without downloading everything by copying all of the files within the same bucket and then deleting the old ones; make sure you can change the permissions after doing the copy. This should save you some money too, as you won't pay the data-transfer costs of downloading everything.
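
A minimal boto3 sketch of that copy-then-delete pass, run under the account that should end up owning the objects; the prefixes are placeholders, and it only works if the copying identity can actually read the source objects, which is the sticking point in the question:

import boto3

s3 = boto3.client('s3')
bucket = 'my-bucket'
old_prefix, new_prefix = '2014/', 'media/2014/'

paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket, Prefix=old_prefix):
    for obj in page.get('Contents', []):
        old_key = obj['Key']
        new_key = new_prefix + old_key[len(old_prefix):]
        # The copy is issued by this account, so this account owns the result
        s3.copy_object(
            Bucket=bucket,
            Key=new_key,
            CopySource={'Bucket': bucket, 'Key': old_key},
        )
        s3.delete_object(Bucket=bucket, Key=old_key)  # drop the old copy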




Answer 6:


Adding --acl bucket-owner-full-control to the copy command made it work.



Source: https://stackoverflow.com/questions/43722678/amazon-s3-file-permissions-access-denied-when-copied-from-another-account
