Question
I'm working on a Python 3 script designed to get S3 space utilization statistics from AWS CloudWatch using the Boto3 library.
I started with the AWS CLI and found I could get what I'm after with a command like this:
aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes --namespace AWS/S3 --start-time 2017-03-06T00:00:00Z --end-time 2017-03-07T00:00:00Z --statistics Average --unit Bytes --region us-west-2 --dimensions Name=BucketName,Value=foo-bar Name=StorageType,Value=StandardStorage --period 86400 --output json
This returns the data I would expect. Now I'd like to do the same thing in Python 3 / Boto3. My code thus far is:
from datetime import datetime, timedelta
import boto3
seconds_in_one_day = 86400 # used for granularity
cloudwatch = boto3.client('cloudwatch')
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/S3',
    Dimensions=[
        {
            'Name': 'BucketName',
            'Value': 'foo-bar'
        },
        {
            'Name': 'StorageType',
            'Value': 'StandardStorage'
        }
    ],
    MetricName='BucketSizeBytes',
    StartTime=datetime.now() - timedelta(days=7),
    EndTime=datetime.now(),
    Period=seconds_in_one_day,
    Statistics=['Average'],
    Unit='Bytes'
)

print(response)
When I run this, I get a valid response but no datapoints (the array is empty). The two calls seem to be identical, except that the Python method doesn't appear to have anywhere to specify the region, which the command line requires.
One more thing I tried: my code computes the dates for the last seven days, whereas on the command line they are hard-coded. I did try hard-coding the dates just to see if I would get data back, and the result was the same.
So my questions are these:
Is the method I'm using in Boto3 / Python equivalent to the command line? Assuming it is, what could I be missing?
Answer 1:
I don't see anything obviously wrong with your code, so the region looks like a prime suspect here.
You can set it when creating the client with:
cloudwatch = boto3.client('cloudwatch', region_name='us-west-2')
If this is not set, boto3 will try to get the region from the AWS_DEFAULT_REGION environment variable first, and then from the ~/.aws/config configuration file. Check those to see what default region is set.
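For example, either of the following makes us-west-2 the default region without touching the code:

export AWS_DEFAULT_REGION=us-west-2   # environment variable, set in the shell

or, in ~/.aws/config:

[default]
region = us-west-2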
Answer 2:
I think the error is your call cloudwatch = boto3.client('cloudwatch'). The default region is us-east-1. So you could use something like this:
from datetime import datetime, timedelta

import boto3


def credentials_AWS(account):
    if account == 'account1':
        aws_access_key_id = 'key id east'
        aws_secret_access_key = 'east secret_access_key'
        region_name = 'us-east-1'
    elif account == 'account2':
        aws_access_key_id = 'key id west'
        aws_secret_access_key = 'west secret_access_key'
        region_name = 'us-west-2'
    return aws_access_key_id, aws_secret_access_key, region_name


def connect_service_aws(service, aws_access_key_id, aws_secret_access_key, region_name):
    aws_connected = boto3.client(service,
                                 aws_access_key_id=aws_access_key_id,
                                 aws_secret_access_key=aws_secret_access_key,
                                 region_name=region_name)
    return aws_connected


def get_metrics(account):
    seconds_in_one_day = 86400  # used for granularity
    aws_access_key_id, aws_secret_access_key, region_name = credentials_AWS(account)
    cloudwatch = connect_service_aws('cloudwatch', aws_access_key_id,
                                     aws_secret_access_key, region_name)
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/S3',
        Dimensions=[
            {
                'Name': 'BucketName',
                'Value': 'foo-bar'
            },
            {
                'Name': 'StorageType',
                'Value': 'StandardStorage'
            }
        ],
        MetricName='BucketSizeBytes',
        StartTime=datetime.now() - timedelta(days=7),
        EndTime=datetime.now(),
        Period=seconds_in_one_day,
        Statistics=['Average'],
        Unit='Bytes'
    )
    print(response)
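With the placeholder keys above replaced by real credentials, the lookup for a given account is then a single call, for example:

get_metrics('account2')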
Answer 3:
I have a viable work-around in case someone else needs it, but I still want to find a non-kludgy answer if one exists. It may not. I decided I'd just generate the command line, run it from Python, and parse the JSON result, which gives the same net result.
import json
import subprocess
from datetime import datetime, timedelta

import boto3

s3 = boto3.resource('s3')
s3_client = boto3.client('s3')
command = ("aws cloudwatch get-metric-statistics --metric-name BucketSizeBytes "
           "--namespace AWS/S3 --start-time {} --end-time {} --statistics Average "
           "--unit Bytes --region {} --dimensions Name=BucketName,Value={} "
           "Name=StorageType,Value=StandardStorage --period 86400 --output json")

for bucket in s3.buckets.all():
    region = s3_client.get_bucket_location(Bucket=bucket.name)
    # get_bucket_location returns a LocationConstraint of None for us-east-1
    region_name = region['LocationConstraint'] or 'us-east-1'
    start_date = datetime.now() - timedelta(days=7)
    start_date_str = str(start_date.date()) + 'T00:00:00Z'
    end_date = datetime.now()
    end_date_str = str(end_date.date()) + 'T00:00:00Z'
    cmd = command.format(start_date_str, end_date_str, region_name, bucket.name)
    # split the command string so subprocess can run it without a shell
    res = subprocess.check_output(cmd.split(), stderr=subprocess.STDOUT)
    bucket_stats = json.loads(res.decode('ascii'))
    if len(bucket_stats['Datapoints']) > 0:
        print(bucket_stats['Datapoints'])
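For what it's worth, the same loop can stay entirely in boto3 by combining this work-around with the region_name parameter from Answer 1, creating a region-specific CloudWatch client per bucket. A minimal sketch (the us-east-1 fallback mirrors the LocationConstraint quirk noted above):

from datetime import datetime, timedelta

import boto3

s3 = boto3.resource('s3')
s3_client = boto3.client('s3')

for bucket in s3.buckets.all():
    location = s3_client.get_bucket_location(Bucket=bucket.name)
    region_name = location['LocationConstraint'] or 'us-east-1'
    # create a CloudWatch client pinned to the bucket's own region
    cloudwatch = boto3.client('cloudwatch', region_name=region_name)
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/S3',
        MetricName='BucketSizeBytes',
        Dimensions=[
            {'Name': 'BucketName', 'Value': bucket.name},
            {'Name': 'StorageType', 'Value': 'StandardStorage'}
        ],
        StartTime=datetime.now() - timedelta(days=7),
        EndTime=datetime.now(),
        Period=86400,
        Statistics=['Average'],
        Unit='Bytes'
    )
    if response['Datapoints']:
        print(bucket.name, response['Datapoints'])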
Answer 4:
I was able to resolve this: you need to specify the Dimensions parameter in the boto3 call.
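For illustration, a minimal sketch of a call with Dimensions filled in; the bucket name and region are placeholders:

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-west-2')
response = cloudwatch.get_metric_statistics(
    Namespace='AWS/S3',
    MetricName='BucketSizeBytes',
    # without both of these dimensions, CloudWatch returns no datapoints
    Dimensions=[
        {'Name': 'BucketName', 'Value': 'my-bucket'},  # placeholder name
        {'Name': 'StorageType', 'Value': 'StandardStorage'}
    ],
    StartTime=datetime.now() - timedelta(days=2),
    EndTime=datetime.now(),
    Period=86400,
    Statistics=['Average'],
    Unit='Bytes'
)
print(response['Datapoints'])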
Answer 5:
Here is one very good example of getting data from CloudWatch in Python using boto3. I had to spend a few hours to get it working, but it should be easy to refer to now.
from datetime import datetime, timedelta

import boto3


def get_req_count(region, lb_name):
    client = boto3.client('cloudwatch', region_name=region)
    count = 0
    # build the one-day window as ISO 8601 strings, which
    # get_metric_statistics accepts for StartTime/EndTime
    today = datetime.utcnow()
    str_today = today.strftime('%Y-%m-%dT%H:%M:%SZ')
    str_yesterday = (today - timedelta(days=1)).strftime('%Y-%m-%dT%H:%M:%SZ')
    response = client.get_metric_statistics(
        Namespace="AWS/ApplicationELB",
        MetricName="RequestCount",
        Dimensions=[
            {
                "Name": "LoadBalancer",
                "Value": lb_name
            },
        ],
        StartTime=str_yesterday,
        EndTime=str_today,
        Period=86400,  # one day, in seconds
        Statistics=[
            "Sum",
        ]
    )
    for r in response['Datapoints']:
        count = r['Sum']
    return count
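Assuming the helper above, a call would look like this; the dimension value is the portion of the ALB's ARN after "loadbalancer/" (the name and ID here are made up):

count = get_req_count('us-west-2', 'app/my-alb/50dc6c495c0c9188')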
Answer 6:
This is what I've done:
from datetime import datetime, timedelta

import boto3

# metricVar and asgName were defined elsewhere in the original; placeholder
# values are shown here so the snippet runs standalone
metricVar = 'CPUUtilization'
asgName = 'i-0123456789abcdef0'  # an EC2 instance ID

client = boto3.client(service_name='cloudwatch', region_name='us-east-1')
response = client.get_metric_statistics(
    Namespace='AWS/EC2',
    Period=300,
    StartTime=datetime.utcnow() - timedelta(seconds=600),
    EndTime=datetime.utcnow(),
    MetricName=metricVar,
    Statistics=['Average'],
    Unit='Percent',
    Dimensions=[
        {'Name': 'InstanceId', 'Value': asgName}
    ]
)
Source: https://stackoverflow.com/questions/42701110/how-can-i-use-aws-boto3-to-get-cloudwatch-metric-statistics