s3cmd


Large file from EC2 to S3

Submitted by 此生再无相见时 on 2020-01-24 03:30:12
Question: I have a 27GB file that I am trying to move from an AWS Linux EC2 instance to S3. I've tried both the 's3put' command and the 's3cmd put' command. Both work with a test file; neither works with the large file. No errors are given, the command returns immediately, but nothing happens.

s3cmd put bigfile.tsv s3://bucket/bigfile.tsv

Answer 1: Though you can upload objects to S3 with sizes up to 5TB, S3 has a size limit of 5GB for an individual PUT operation. In order to load files larger than 5GB (or even files…
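Splitting the upload into parts is the usual way around the single-PUT limit. A minimal sketch, assuming a reasonably recent s3cmd with multipart support; the 100 MB chunk size is an illustrative choice, not from the original answer:

    # upload in 100 MB parts so no single PUT exceeds the 5 GB limit
    s3cmd put --multipart-chunk-size-mb=100 bigfile.tsv s3://bucket/bigfile.tsv

    # alternatively, the AWS CLI performs multipart uploads automatically for large files
    aws s3 cp bigfile.tsv s3://bucket/bigfile.tsv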

Exclude folders for s3cmd sync

Submitted by 房东的猫 on 2019-12-30 08:39:52
Question: I am using s3cmd and I would like to know how to exclude all folders within a bucket and just sync the bucket root. For example, the bucket contains:

folder/two/
folder/two/file.jpg
get.jpg

With the sync I just want it to sync get.jpg and ignore the folder and its contents:

s3cmd --config sync s3://s3bucket (only sync root) local/

If someone could help that would be amazing. I have already tried --exclude but I'm not sure how to use it in this situation.

Answer 1: You should indeed use the --exclude option.
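A minimal sketch of how that exclude might look, assuming s3cmd's glob-style --exclude patterns; the 'folder/*' pattern is based on the example layout above, not taken from the original answer:

    # sync only the bucket root, skipping everything under folder/
    s3cmd sync --exclude 'folder/*' s3://s3bucket/ local/

    # or, assuming the glob also matches slashes, exclude every key inside any folder
    s3cmd sync --exclude '*/*' s3://s3bucket/ local/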

Amazon S3 – 403 Forbidden with Correct Bucket Policy

Submitted by 雨燕双飞 on 2019-12-23 10:59:56
Question: I'm trying to make all of the images I've stored in my S3 bucket publicly readable, using the following bucket policy:

{
  "Id": "Policy1380877762691",
  "Statement": [
    {
      "Sid": "Stmt1380877761162",
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::<bucket-name>/*",
      "Principal": { "AWS": ["*"] }
    }
  ]
}

I have 4 other similar S3 buckets with the same bucket policy, but I keep getting 403 errors. The images in this bucket were transferred using s3cmd sync as I'm trying to…
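One possible cause (an assumption here, not the thread's accepted answer) is that objects uploaded via s3cmd sync are owned by a different AWS account than the bucket owner, in which case the bucket policy never applies to them and the uploading account has to make the objects readable itself. A hedged sketch of doing that with s3cmd:

    # run as the account that uploaded the objects: mark every object in the bucket publicly readable
    s3cmd setacl s3://<bucket-name>/ --acl-public --recursive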

Can I move an object into a 'folder' inside an S3 bucket using the s3cmd mv command?

Submitted by 余生长醉 on 2019-12-22 04:38:16
Question: I have the s3cmd command line tool for Linux installed. It works fine to put files in a bucket. However, I want to move a file into a 'folder'. I know that folders aren't natively supported by S3, but my Cyberduck GUI tool converts them nicely for me to view my backups. For instance, I have a file in the root of the bucket, called 'test.mov', that I want to move to the 'idea' folder. I am trying this:

s3cmd mv s3://mybucket/test.mov s3://mybucket/idea/test.mov

but I get strange errors like:…
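If mv misbehaves on an older s3cmd, one hedged workaround (an assumption, not the original answer) is to copy the object to the new key and then delete the original, which is all a 'move' amounts to in S3:

    # copy the object under the new prefix, then remove the original key
    s3cmd cp s3://mybucket/test.mov s3://mybucket/idea/test.mov
    s3cmd del s3://mybucket/test.mov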

Error uploading small files to s3 using s3cmd?

Submitted by 爷,独闯天下 on 2019-12-20 04:50:55
Question: I am having an unusual error: my files appear to be too small to be uploaded to S3! I have a small log file which is not uploading:

s3cmd put log.txt s3://MY-BUCKET/MY-SUB-BUCKET/
ERROR: S3 error: Access Denied

But when I do this:

yes | head -n 10000000 >> log.txt
s3cmd put log.txt s3://MY-BUCKET/MY-SUB-BUCKET/  # this works for some reason

The magic number appears to be 15MB, the point at which s3cmd starts doing multipart uploads.

Answer 1: I was running into this same issue and apparently the…
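A quick way to confirm that the cutoff really is the multipart threshold is to try files just under and just over it; the 14 MB and 16 MB sizes below are illustrative assumptions:

    # file below the ~15 MB multipart threshold
    dd if=/dev/zero of=small.log bs=1M count=14
    s3cmd put small.log s3://MY-BUCKET/MY-SUB-BUCKET/

    # file above the threshold, which should trigger a multipart upload
    dd if=/dev/zero of=big.log bs=1M count=16
    s3cmd put big.log s3://MY-BUCKET/MY-SUB-BUCKET/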

Granting read access to the Authenticated Users group for a file

Submitted by ぃ、小莉子 on 2019-12-12 07:46:17
Question: How do I grant read access to the Authenticated Users group for a file? I'm using s3cmd and want to do it while uploading, but for now I'm focusing directly on changing the ACL. What should I put in for http://acs.amazonaws.com/groups/global/AuthenticatedUsers? I have tried every combination of AuthenticatedUsers possible:

./s3cmd setacl --acl-grant=read:http://acs.amazonaws.com/groups/global/AuthenticatedUsers s3://BUCKET/FILE
./s3cmd setacl --acl-grant=read:AuthenticatedUsers s3://BUCKET/FILE
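If s3cmd's grant syntax keeps getting rejected, one hedged alternative (not from the original thread) is the AWS CLI's canned ACL for exactly this group:

    # apply the canned "authenticated-read" ACL, which grants READ to the AuthenticatedUsers group
    aws s3api put-object-acl --bucket BUCKET --key FILE --acl authenticated-read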

Issues with s4cmd

Submitted by 时光怂恿深爱的人放手 on 2019-12-12 05:05:11
Question: I have about 50GB of data to upload to an S3 bucket, but s3cmd is unreliable and very slow; its sync doesn't seem to work because of a timeout error. I switched to s4cmd, which works great: multi-threaded and fast.

s4cmd dsync -r -t 1000 --ignore-empty-source forms/ s3://bucket/J/M/

The above uploads a set of files and then throws an error:

[Thread Failure] Unable to read data from source: /home/ubuntu/path to file

The source file is an image file, so there is nothing wrong there. s4cmd has options…
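When a third-party uploader keeps failing on individual source files, one hedged fallback (an alternative approach, not the thread's answer) is the official AWS CLI, which retries and parallelizes transfers on its own:

    # sync the local directory to the same prefix with the AWS CLI
    aws s3 sync forms/ s3://bucket/J/M/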

How do you pipe the result of s3cmd get to a var?

Submitted by 左心房为你撑大大i on 2019-12-08 06:45:18
Question: I need to get an image from S3, process it on an EC2 instance, and save it to a different folder on S3. I would prefer not to save the file to the EC2 instance during the process. Is it possible to pipe the result of "s3cmd get s3://bucket/image" to a variable? It does not seem to print to standard output.

Answer 1: It would be better if you could show what output s3cmd produces in your case. But if you really want to save the output of the command to a variable, you do it this way:

a=$(s3cmd get s3://bucket/image)

When you…
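For binary image data a shell variable is a poor fit; a hedged sketch of streaming instead, assuming a recent s3cmd that accepts "-" as the download destination (stdout) and upload source (stdin), and using ImageMagick's convert purely as a placeholder processing step:

    # stream the object through a processing command and straight back to S3, with no temp file on the EC2 instance
    s3cmd get s3://bucket/image - | convert - -resize 50% - | s3cmd put - s3://bucket/processed/image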

Automatically sync two Amazon S3 buckets, besides s3cmd?

Submitted by こ雲淡風輕ζ on 2019-12-04 12:14:06
Question: Is there another automated way of syncing two Amazon S3 buckets besides using s3cmd? Maybe Amazon has this as an option? The environment is Linux, and every day I would like to sync new and deleted files to another bucket. I hate the thought of keeping all my eggs in one basket.

Answer 1: You could use the standard Amazon CLI to do the sync. You just have to do something like:

aws s3 sync s3://bucket1/folder1 s3://bucket2/folder2

http://aws.amazon.com/cli/

Answer 2: I'm looking for something similar and…
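Since the asker wants a daily sync that also propagates deletions, a hedged sketch of a cron entry built on that answer (the 2 a.m. schedule and the --delete flag are assumptions about the desired behaviour):

    # crontab entry: mirror bucket1 into bucket2 every night at 02:00, removing files deleted at the source
    0 2 * * * aws s3 sync s3://bucket1 s3://bucket2 --delete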

Exclude multiple folders using AWS S3 sync

Submitted by 谁说我不能喝 on 2019-12-03 14:25:51
Question: How do I exclude multiple folders while using aws s3 sync? I tried:

aws s3 sync s3://inksedge-app-file-storage-bucket-prod-env s3://inksedge-app-file-storage-bucket-test-env --exclude 'reportTemplate/* orders/* customers/*'

But it still syncs the "customers" folder. Output:

copy: s3://inksedge-app-file-storage-bucket-prod-env/customers/116/miniimages/IMG_4800.jpg to s3://inksedge-app-file-storage-bucket-test-env/customers/116/miniimages/IMG_4800.jpg
copy: s3://inksedge-app-file-storage…
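The AWS CLI expects one pattern per --exclude flag rather than a space-separated list, so a sketch of the corrected command would be:

    # repeat --exclude once per folder to be skipped
    aws s3 sync s3://inksedge-app-file-storage-bucket-prod-env \
                s3://inksedge-app-file-storage-bucket-test-env \
                --exclude 'reportTemplate/*' \
                --exclude 'orders/*' \
                --exclude 'customers/*'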
