Could we use AWS Glue to just copy a file from one S3 folder to another S3 folder?


Question


I need to copy a zipped file from one AWS S3 folder to another and would like to make that a scheduled AWS Glue job. I cannot find an example for such a simple task. Please help if you know the answer. Maybe the answer is in AWS Lambda or another AWS tool.

Thank you very much!


Answer 1:


You can do this, and there may be a reason to use AWS Glue: if you have chained Glue jobs and glue_job_#2 is triggered on the successful completion of glue_job_#1.

The simple Python script below moves a file from one S3 folder (source) to another folder (target) using the boto3 library, and optionally deletes the original copy in the source directory.

import boto3

bucketname = "my-unique-bucket-name"
s3 = boto3.resource('s3')
my_bucket = s3.Bucket(bucketname)
source = "path/to/folder1"
target = "path/to/folder2"

# Copy every object under the source prefix to the target prefix
for obj in my_bucket.objects.filter(Prefix=source):
    source_filename = obj.key.split('/')[-1]
    copy_source = {
        'Bucket': bucketname,
        'Key': obj.key
    }
    target_filename = "{}/{}".format(target, source_filename)
    s3.meta.client.copy(copy_source, bucketname, target_filename)
    # Uncomment the line below if you wish to delete the original source file
    # s3.Object(bucketname, obj.key).delete()

Reference: Boto3 Docs on S3 Client Copy

Note: I would use f-strings for generating the target_filename, but f-strings are only supported in Python >= 3.6, and I believe the default AWS Glue Python interpreter is still 2.7.

Reference: PEP on f-strings
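
For reference, under Python 3.6 or later that formatting line would simply read:

target_filename = f"{target}/{source_filename}"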




Answer 2:


I think you can do it with Glue, but wouldn't it be easier to use the CLI?

You can do the following:

aws s3 sync s3://bucket_1 s3://bucket_2




Answer 3:


You could do this with Glue but it's not the right tool for the job.

Far simpler would be to have a Lambda function triggered by an S3 created-object event. There's even a tutorial in the AWS Docs on doing (almost) this exact thing.

http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
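
A minimal sketch of such a handler, assuming a hypothetical target bucket and prefix and an execution role with s3:GetObject and s3:PutObject permissions, might look like this:

import urllib.parse
import boto3

s3 = boto3.client('s3')
TARGET_BUCKET = "my-target-bucket"    # assumption: replace with your bucket
TARGET_PREFIX = "path/to/folder2"     # assumption: replace with your prefix

def lambda_handler(event, context):
    # Each record in the S3 put event names the bucket and key of the new object
    for record in event['Records']:
        source_bucket = record['s3']['bucket']['name']
        source_key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        filename = source_key.split('/')[-1]
        s3.copy_object(
            CopySource={'Bucket': source_bucket, 'Key': source_key},
            Bucket=TARGET_BUCKET,
            Key="{}/{}".format(TARGET_PREFIX, filename)
        )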




Answer 4:


We ended up using Databricks to do everything.

Glue is not ready: it returns error messages that make no sense. We created tickets and waited for five days, and still got no reply.




Answer 5:


The S3 API lets you issue a COPY request (really a PUT with a header indicating the source URL) to copy objects within or between buckets. It's regularly used to fake rename()s, but you could initiate the call yourself, from anything.

There is no need to download any data; within the same S3 region the copy runs at a bandwidth of about 6-10 MB/s.

The AWS CLI cp command can do this as well.
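
With boto3 that server-side copy is a single call; a minimal sketch with placeholder bucket and key names:

import boto3

s3 = boto3.client('s3')
# Server-side copy: S3 itself performs the PUT-with-copy-source,
# so the object's bytes never pass through the caller.
s3.copy_object(
    CopySource={'Bucket': 'my-bucket', 'Key': 'path/to/folder1/data.zip'},
    Bucket='my-bucket',
    Key='path/to/folder2/data.zip'
)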




Answer 6:


You can do this by downloading your zip file from S3 to the tmp/ directory and then re-uploading it to S3.

import boto3

s3 = boto3.resource('s3')

Download the file to the local Spark tmp directory:

s3.Bucket(bucket_name).download_file(DATA_DIR+file,'tmp/'+file)

Upload the file from the local Spark tmp directory:

s3.meta.client.upload_file('tmp/'+file,bucket_name,TARGET_DIR+file)



Answer 7:


Now you can write a Python Shell job in Glue to do it. Just set the Type to "Python Shell" in the Glue job creation wizard; you can run a normal Python script in it.
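
A minimal sketch of such a Python Shell job, using placeholder bucket and key names, is just a plain boto3 copy like the one in Answer 1:

import boto3

# Placeholders: replace with your own bucket and keys
BUCKET = "my-unique-bucket-name"
SOURCE_KEY = "path/to/folder1/archive.zip"
TARGET_KEY = "path/to/folder2/archive.zip"

s3 = boto3.client('s3')
s3.copy_object(
    CopySource={'Bucket': BUCKET, 'Key': SOURCE_KEY},
    Bucket=BUCKET,
    Key=TARGET_KEY
)

The job can then be run on a schedule with a Glue trigger, which covers the scheduling requirement from the question.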




Answer 8:


Nothing else is required. I believe AWS Data Pipeline is the best option: just use its command-line option. Scheduled runs are also possible. I already tried it, and it worked successfully.



Source: https://stackoverflow.com/questions/47664004/could-we-use-aws-glue-just-copy-a-file-from-one-s3-folder-to-another-s3-folder
