aws-cli

aws cli in cygwin - how to clean up differences in windows and cygwin style paths

给你一囗甜甜゛ submitted on 2019-12-03 13:08:22
Question: I suspect this is my ineptitude in getting path variables set right, but I'm at a loss. I've installed the AWS CLI using pip in Cygwin:

    pip install awscli

I have two Python environments... a Windows Anaconda distribution, and the version Cygwin can install for you.

    which python
    > /usr/bin/python

    where python
    > C:\cygwin64\bin\python
    > C:\windows-style-path-to-anaconda\python.exe

When I try to run the AWS CLI:

    aws --version
    > C:\windows-style-path-to-anaconda\python.exe: can't open file 'cygdrive…
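The error points at what is going on: the `aws` entry script installed by Cygwin's pip is being handed to the Windows Anaconda Python found on the PATH, and that interpreter cannot open a Cygwin-style /cygdrive path. A minimal sketch of two workarounds, assuming the pip-installed script lives at /usr/bin/aws (verify with `which aws`; that path is an assumption, not confirmed by the question):

    # Option 1: run the script with Cygwin's own interpreter explicitly,
    # e.g. via an alias in ~/.bashrc (paths are assumptions, check your install)
    alias aws='/usr/bin/python /usr/bin/aws'

    # Option 2: make sure Cygwin's bin directory precedes the Windows
    # Python on the PATH for this shell
    export PATH="/usr/bin:$PATH"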

Bash script to install AWS CLI tools

醉酒当歌 submitted on 2019-12-03 10:36:37
I am writing a bash script that will automatically install and configure the AWS CLI tools. I am able to install the AWS CLI tools but unable to configure them. My script is something like this:

    #!/bin/bash
    wget https://s3.amazonaws.com/aws-cli/awscli-bundle.zip
    unzip awscli-bundle.zip
    sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
    ./awscli-bundle/install -b ~/bin/aws
    ./awscli-bundle/install -h
    aws configure
    AWS Access Key ID [None]: ABCDEFGHIJKLMNOP                      ## unable to provide this data
    AWS Secret Access Key [None]: xbdwsdADDS/ssfsfa/afzfASADQASAd   ## unable to provide this data
    Default…
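The script stalls because `aws configure` prompts interactively. The CLI's `aws configure set` subcommand writes the same settings without prompting, so a script can supply them directly (the region and output format below are example values, not taken from the question):

    # Non-interactive equivalents of the `aws configure` prompts
    aws configure set aws_access_key_id "ABCDEFGHIJKLMNOP"
    aws configure set aws_secret_access_key "xbdwsdADDS/ssfsfa/afzfASADQASAd"
    aws configure set default.region us-east-1
    aws configure set default.output json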

Fastest way to sync two Amazon S3 buckets

淺唱寂寞╮ submitted on 2019-12-03 10:32:26
I have an S3 bucket with around 4 million files taking up some 500 GB in total. I need to sync the files to a new bucket (actually, changing the name of the bucket would suffice, but as that is not possible I need to create a new bucket, move the files there, and remove the old one). I'm using the AWS CLI's s3 sync command and it does the job, but takes a lot of time. I would like to reduce the time so that the dependent system's downtime is minimal. I was trying to run the sync both from my local machine and from an EC2 c4.xlarge instance, and there isn't much difference in the time taken. I have noticed that…
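Two levers commonly speed this up: raising the CLI's per-command parallelism, and running several syncs at once on disjoint key prefixes. A sketch, assuming the keys can be partitioned by leading character (the prefixes and bucket names below are placeholders):

    # Let each sync issue more concurrent requests (the default is 10)
    aws configure set default.s3.max_concurrent_requests 100

    # Sync disjoint prefixes in parallel, one background job per prefix
    for prefix in a b c d; do
        aws s3 sync "s3://old-bucket/${prefix}" "s3://new-bucket/${prefix}" &
    done
    wait

A bucket-to-bucket sync copies objects server-side within S3, so the bottleneck is request throughput rather than bandwidth, which may explain why the EC2 instance was no faster than the local machine.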

Get an S3 download URL for a private bucket from the AWS CLI

谁说我不能喝 submitted on 2019-12-03 10:31:46
I could upload a file to a private S3 bucket successfully using the following command:

    aws s3 cp "myfile.txt" "s3://myfolder/myfile.txt" --region=us-east-1 --output=json

I would like to issue an AWS CLI command that returns a temporary download URL for myfile.txt. Does anyone know how? From what I found, it looks like I have to do some signing to get a temporary URL, as described in http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html

Answer: The AWS CLI now supports the presign command. You can run:

    $ aws s3 presign s3://test-bucket/test-file.txt
    https://test-bucket/test-file.txt?Expires=1499152189&Signature…
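By default the URL from `aws s3 presign` expires after 3600 seconds; the `--expires-in` flag takes a different lifetime in seconds (the bucket and key here are the answer's placeholders):

    # Presigned URL valid for 7 days instead of the 1-hour default
    aws s3 presign s3://test-bucket/test-file.txt --expires-in 604800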

What is the correct syntax for filtering by tag in describe-vpcs?

故事扮演 submitted on 2019-12-03 07:45:25
I am trying to understand an aws ec2 CLI call. I want to describe all VPCs and then filter on a custom tag (vpcname=myvpc); however, after trying multiple combinations I keep getting conflicting errors about the format and use of --filters. Using http://docs.aws.amazon.com/cli/latest/reference/ec2/describe-vpcs.html as a reference:

    aws --profile myProfile --region eu-west-1 ec2 describe-vpcs --filters vpcname,myvpc

However, this returns:

    Error parsing parameter '--filters': should be: Key value pairs, where values are separated by commas, and multiple pairs are separated by spaces. --filters…
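The filter has to be spelled as Name/Values pairs, and custom tags are addressed through the tag: prefix:

    # Filter on the custom tag vpcname=myvpc
    aws --profile myProfile --region eu-west-1 ec2 describe-vpcs \
        --filters "Name=tag:vpcname,Values=myvpc"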

how to view aws logs in real time (like tail -f)

∥☆過路亽.° submitted on 2019-12-03 07:21:17
Question: I can view the log using the following command:

    aws logs get-log-events --log-group-name groupName --log-stream-name streamName --limit 100

What is the command to get tail -f-like behavior, so that I can see the log in real time?

Answer 1: Have a look at awslogs. If you happen to be working with Lambda/API Gateway specifically, have a look at apilogs.

Answer 2: I was really disappointed with awslogs and cwtail, so I made my own tool called Saw that efficiently streams CloudWatch logs to the console (and…
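With awslogs, following a group is a one-liner (groupName is the question's placeholder):

    # Keep polling and print new events as they arrive, like tail -f
    awslogs get groupName --watch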

How to Generate a Presigned S3 URL via AWS CLI

二次信任 submitted on 2019-12-03 06:30:52
Question: Is there a way to create a presigned URL for objects in an S3 bucket using the AWS CLI? I know it can be done using an SDK, but is it possible with the CLI? I found this in one of the AWS docs, but can't complete the command:

    s3cmd signurl s3://BUCKET/OBJECT <expiry_epoch|+expiry_offset>

Any help?

Answer 1: Did you try aws s3 presign? It generates a pre-signed URL for an Amazon S3 object, which allows anyone who receives the pre-signed URL to retrieve the S3 object with an HTTP GET request. For sigv4 requests the…
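If Signature Version 4 URLs are needed (some newer regions accept nothing else), the CLI can be told to sign S3 requests with sigv4 before presigning. A sketch; the bucket, key, and region are placeholders:

    # Switch S3 signing to Signature Version 4, then presign
    aws configure set default.s3.signature_version s3v4
    aws s3 presign s3://BUCKET/OBJECT --region eu-central-1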

How to use AWS CLI to only copy files in S3 bucket that match a given string pattern

◇◆丶佛笑我妖孽 submitted on 2019-12-03 06:29:22
I'm using the AWS CLI to copy files from an S3 bucket to my R machine using a command like the one below:

    system(
      "aws s3 cp s3://my_bucket_location/ ~/my_r_location/ --recursive --exclude '*' --include '*trans*' --region us-east-1"
    )

This works as expected, i.e. it copies all files in my_bucket_location that have "trans" in the filename at that location. The problem I am facing is that I have other files with similar naming conventions that I don't want to import in this step. As an example, in the list below I only want to copy the first two files, not the last two:

File list: trans_120215.csv…
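Assuming the unwanted files merely contain "trans" rather than start with it (the file list is cut off, so this is an inference), anchoring the include pattern to the start of the filename separates the two groups:

    # 'trans*' matches only filenames that begin with "trans";
    # '*trans*' also matches names where "trans" appears later on
    aws s3 cp s3://my_bucket_location/ ~/my_r_location/ --recursive \
        --exclude '*' --include 'trans*' --region us-east-1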

How to export a dynamodb table as a csv through aws-cli (without using pipeline)

北战南征 submitted on 2019-12-03 05:28:24
I am new to aws-cli and I am trying to export my DynamoDB table as a CSV so that I can import it directly into PostgreSQL. Is there a way to do that using aws-cli? So far I have come across the command aws dynamodb scan --table-name, but this does not provide an option for a CSV export. Also, through this command I can get the output on my command prompt, but I am not sure how to write it to a file.

Answer: If all items have the same attributes, e.g. id and name, both of which are strings, then run:

    aws dynamodb scan \
        --table-name mytable \
        --query "Items[*].[id.S,name.S]" \
        --output text

That would…
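With --output text the scan prints one item per line with tab-separated values, so translating tabs to commas and redirecting yields a CSV file. A sketch under the same assumptions as the answer (uniform string attributes id and name, with values containing neither tabs nor commas):

    # Scan, convert tab-separated output to commas, write to a file
    aws dynamodb scan \
        --table-name mytable \
        --query "Items[*].[id.S,name.S]" \
        --output text | tr '\t' ',' > mytable.csv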

How to list Amazon S3 bucket contents by modified date?

我怕爱的太早我们不能终老 submitted on 2019-12-03 04:31:26
Most of the time we load files into a common S3 bucket, which makes it hard to keep track of the data in it. How can I view objects uploaded on a particular date?

Answer: One solution would probably be to use the s3api. It works easily if you have fewer than 1000 objects; otherwise you need to work with pagination. s3api can list all objects and exposes the LastModified attribute of the keys stored in S3. The listing can then be sorted, and you can find files after or before a date, or matching a date...

An example of running such a query, for all files on a given date:

    DATE=$(date +%Y-%m-%d)
    aws s3api list…
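A completed form of that pattern might look like the following; the bucket name is a placeholder and the JMESPath filter is a sketch of the approach the answer begins, not its verbatim command:

    # Keys whose LastModified timestamp falls on today's date
    DATE=$(date +%Y-%m-%d)
    aws s3api list-objects --bucket my-bucket \
        --query "Contents[?contains(LastModified, '$DATE')].Key"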