How to auto-scale Amazon DynamoDB throughput?

渐次进展 2020-12-13 06:59

Amazon DynamoDB doesn’t provide built-in capabilities to auto-tune throughput based on dynamic load. It provides an API to increase or decrease throughput. Customers are charged on an hourly basis for provisioned read and write throughput.

11 Answers
  • 2020-12-13 07:20

    I think other answers have done a great job, but I have a different approach: autoscaling DynamoDB in an event-driven fashion by leveraging CloudWatch alarms and DynamoDB's UpdateTable operation to change provisioned capacity. This approach not only helps reduce costs, but also scales up capacity for unexpected loads.

    Summary:

    Configure CloudWatch alarms on DynamoDB metrics to alert you based on thresholds, and push the alerts to an SQS queue via an SNS topic. A daemon process that polls the SQS queue can process those alerts, change the table's provisioned capacity using DynamoDB's UpdateTable operation, and update the CloudWatch alarm thresholds.

    Detailed version:

    Please be advised that this approach requires:

    1. Understanding of AWS services like CloudWatch, SNS, and SQS
    2. A good amount of time to implement in your favorite programming language
    3. Maintaining a daemon to process SQS messages and change the provisioned capacity

    One time setup:

    1. Create CloudWatch alarms on the ConsumedWriteCapacityUnits and ConsumedReadCapacityUnits metrics of your DynamoDB table. You can use this documentation.
    2. Configure the CloudWatch alarms to alert an SNS topic. Create an AWS SQS queue and subscribe the queue to receive alerts from the SNS topic.
    3. Write a daemon in any programming language to poll the SQS queue and process all alerts. AWS has SDKs in multiple languages, so choosing one of those avoids writing a lot of code to communicate with AWS services.

    Daemon algorithm:

    1. For every SQS message received, calculate the new provisioned capacity and issue an UpdateTable operation with the new value.
    2. Update the CloudWatch alarm with new thresholds, if required.

    You can use the above approach to scale either up or down. For example, keep the CloudWatch alarm threshold at 80% of the provisioned write capacity, and every time usage crosses 80%, increase the capacity and reset the alarm threshold to 80% of the new value. Similarly, you can scale down when consumption falls below x%.
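As a rough sketch, the daemon's decision step can be kept as pure logic, separate from the AWS calls. The 80% scale-up threshold comes from the example above; the 30% scale-down threshold and the doubling/halving factors are illustrative assumptions, not part of the original answer:

```java
// Sketch of the daemon's capacity decision (thresholds and factors are assumptions).
public class CapacityPlanner {

    static final double SCALE_UP_THRESHOLD = 0.80;   // from the example above
    static final double SCALE_DOWN_THRESHOLD = 0.30; // illustrative assumption

    /** Returns the new provisioned capacity, or the current one if no change is needed. */
    public static long plan(long provisioned, double consumed) {
        double utilization = consumed / provisioned;
        if (utilization >= SCALE_UP_THRESHOLD) {
            return provisioned * 2;                  // double on breach
        }
        if (utilization < SCALE_DOWN_THRESHOLD) {
            return Math.max(1, provisioned / 2);     // halve, but never below 1
        }
        return provisioned;                          // within band: no change
    }

    /** Alarm threshold is kept at 80% of whatever capacity is now provisioned. */
    public static double alarmThreshold(long provisioned) {
        return provisioned * SCALE_UP_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(plan(100, 85)); // crossed 80% -> 200
        System.out.println(plan(100, 50)); // within band -> 100
        System.out.println(plan(100, 20)); // below 30%  -> 50
    }
}
```

The daemon would call `plan` with the values parsed from the SQS alert, then issue UpdateTable and update the alarm only when the returned capacity differs from the current one.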

    Though this is the crux, there are many points to consider in a production-quality solution.

    1. Understand DynamoDB partitions and hot-key problems.
    2. Be aware of all DynamoDB limits.
    3. Constraints on the number of scale-downs per UTC day.
    4. Batching multiple UpdateTable operations.

    Finally, Neptune.io provides a packaged SaaS solution to autoscale DynamoDB by using this architecture. See http://blog.neptune.io/one-click-autoscaling-of-dynamodb/ and http://blog.neptune.io/dos-and-donts-of-dynamodb-autoscaling/ for some reading on that.

    P.S.: I work for Neptune, and I can help if you need more implementation details.

  • 2020-12-13 07:22

    AWS added native auto scaling support for DynamoDB in June 2017. The following code (source) provides an example of how to configure auto scaling using the Java SDK:

    package com.amazonaws.codesamples.autoscaling;
    
    import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClient;
    import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsRequest;
    import com.amazonaws.services.applicationautoscaling.model.DescribeScalableTargetsResult;
    import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesRequest;
    import com.amazonaws.services.applicationautoscaling.model.DescribeScalingPoliciesResult;
    import com.amazonaws.services.applicationautoscaling.model.MetricType;
    import com.amazonaws.services.applicationautoscaling.model.PolicyType;
    import com.amazonaws.services.applicationautoscaling.model.PredefinedMetricSpecification;
    import com.amazonaws.services.applicationautoscaling.model.PutScalingPolicyRequest;
    import com.amazonaws.services.applicationautoscaling.model.RegisterScalableTargetRequest;
    import com.amazonaws.services.applicationautoscaling.model.ScalableDimension;
    import com.amazonaws.services.applicationautoscaling.model.ServiceNamespace;
    import com.amazonaws.services.applicationautoscaling.model.TargetTrackingScalingPolicyConfiguration;
    
    public class EnableDynamoDBAutoscaling {
    
        static AWSApplicationAutoScalingClient aaClient = new AWSApplicationAutoScalingClient();
    
        public static void main(String args[]) {
    
            ServiceNamespace ns = ServiceNamespace.Dynamodb;
            ScalableDimension tableWCUs = ScalableDimension.DynamodbTableWriteCapacityUnits;
            String resourceID = "table/TestTable";
    
            // Define the scalable target
            RegisterScalableTargetRequest rstRequest = new RegisterScalableTargetRequest()
                .withServiceNamespace(ns)
                .withResourceId(resourceID)
                .withScalableDimension(tableWCUs)
                .withMinCapacity(5)
                .withMaxCapacity(10)
                .withRoleARN("SERVICE_ROLE_ARN_GOES_HERE");
    
            try {
                aaClient.registerScalableTarget(rstRequest);
            } catch (Exception e) {
                System.err.println("Unable to register scalable target: ");
                System.err.println(e.getMessage());
            }
    
            // Verify that the target was created
            DescribeScalableTargetsRequest dscRequest = new DescribeScalableTargetsRequest()
                .withServiceNamespace(ns)
                .withScalableDimension(tableWCUs)
                .withResourceIds(resourceID);
    
            try {
                DescribeScalableTargetsResult dsaResult = aaClient.describeScalableTargets(dscRequest);
                System.out.println("DescribeScalableTargets result: ");
                System.out.println(dsaResult);
                System.out.println();
            } catch (Exception e) {
                System.err.println("Unable to describe scalable target: ");
                System.err.println(e.getMessage());
            }
    
            System.out.println();
    
            // Configure a scaling policy
            TargetTrackingScalingPolicyConfiguration targetTrackingScalingPolicyConfiguration = 
                new TargetTrackingScalingPolicyConfiguration()
                .withPredefinedMetricSpecification(
                    new PredefinedMetricSpecification()
                    .withPredefinedMetricType(MetricType.DynamoDBWriteCapacityUtilization))
                .withTargetValue(50.0)
                .withScaleInCooldown(60)
                .withScaleOutCooldown(60);
    
            // Create the scaling policy, based on your configuration
            PutScalingPolicyRequest pspRequest = new PutScalingPolicyRequest()
                .withServiceNamespace(ns)
                .withScalableDimension(tableWCUs)
                .withResourceId(resourceID)
                .withPolicyName("MyScalingPolicy")
                .withPolicyType(PolicyType.TargetTrackingScaling)
                .withTargetTrackingScalingPolicyConfiguration(targetTrackingScalingPolicyConfiguration);
    
            try {
                aaClient.putScalingPolicy(pspRequest);
            } catch (Exception e) {
                System.err.println("Unable to put scaling policy: ");
                System.err.println(e.getMessage());
            }
    
            // Verify that the scaling policy was created
            DescribeScalingPoliciesRequest dspRequest = new DescribeScalingPoliciesRequest()
                .withServiceNamespace(ns)
                .withScalableDimension(tableWCUs)
                .withResourceId(resourceID);
    
            try {
                DescribeScalingPoliciesResult dspResult = aaClient.describeScalingPolicies(dspRequest);
                System.out.println("DescribeScalingPolicies result: ");
                System.out.println(dspResult);
            } catch (Exception e) {
                e.printStackTrace();
                System.err.println("Unable to describe scaling policy: ");
                System.err.println(e.getMessage());
            }            
        }
    }
    

    This code requires that you supply an ARN for a valid Application Auto Scaling service role. Replace SERVICE_ROLE_ARN_GOES_HERE with the actual ARN.

  • 2020-12-13 07:25

    Amazon just added auto scaling for DynamoDB; see the details here.

  • 2020-12-13 07:26

    Guidelines for a DynamoDB auto scaling script:

    Customers are charged on an hourly basis for provisioned read and write throughput. Below is Amazon DynamoDB pricing for the EU (Ireland) region.

    • Write throughput: $0.00735 per hour for every 10 units of write capacity
    • Read throughput: $0.00735 per hour for every 50 units of read capacity

    Amazon DynamoDB doesn’t provide built-in capabilities to auto-tune throughput based on dynamic load. It provides an API to increase or decrease throughput, with some restrictions: throughput can be decreased only twice a day but increased at any time.

    What will be the monthly bill for a production table with a fixed capacity of 2,000 reads/second and 2,000 writes/second, 24 hours a day?

    Calculation: $0.00735 × 24 hrs × 200 {2,000/10 write-capacity blocks} × 30 days {write cost per month} + $0.00735 × 24 hrs × 40 {2,000/50 read-capacity blocks} × 30 days {read cost per month} = $1,058.40 + $211.68 ≈ $1,270/month, fixed.
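That arithmetic can be checked with a few lines of Java, using the EU (Ireland) prices and unit sizes quoted above (a 30-day month is assumed, as in the original calculation):

```java
// Verifies the fixed-capacity monthly bill from the text.
public class DynamoDbCost {
    // $0.00735/hr per block of 10 WCU or 50 RCU (EU Ireland prices quoted above).
    static final double PRICE_PER_BLOCK_HOUR = 0.00735;
    static final int WCU_BLOCK = 10;
    static final int RCU_BLOCK = 50;

    /** Monthly cost of a fixed provisioned capacity, 24 hrs/day, 30-day month. */
    public static double monthly(long rcu, long wcu) {
        double writeHourly = (wcu / (double) WCU_BLOCK) * PRICE_PER_BLOCK_HOUR;
        double readHourly  = (rcu / (double) RCU_BLOCK) * PRICE_PER_BLOCK_HOUR;
        return (writeHourly + readHourly) * 24 * 30;
    }

    public static void main(String[] args) {
        // 2,000 reads/s and 2,000 writes/s around the clock:
        System.out.printf("$%.2f/month%n", monthly(2000, 2000)); // $1270.08
    }
}
```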

    Guidelines for writing a utility (in any AWS-supported programming language) that adjusts table throughput and reduces the monthly bill:

    (A) Initial value: Watch and decide the table's read and write throughput as an initialization value, after analyzing average usage over 15 days or 1 month of load, and add X% extra for reads and Y% extra for writes on top to withstand unexpected load. Initial read/write throughput = throughput calculated from average usage + X% (read) or Y% (write). X and Y can be anything between 10% and 30%, based on observation.

    (B) Peak load shaping: Set an alert on the table so that when load reaches 50% to 60% of the provisioned throughput, the necessary action can be taken, such as calling the throughput-increment API to raise throughput by anything between 30% and 50% of the provisioned throughput.

    (C) Manual shaping: For known heavy loads like batch loads or festival season, throughput should be set manually to 200% to 300% above normal daily operations until the load is complete. Once business working hours or the load is over, throughput should be reduced back to the initial value.

    Note: The reader can calculate the monthly saving for 1,000 reads/writes per second for 16 hrs + 2,000 reads/writes per second for 8 hrs, assuming the utility is in place.
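Working that exercise out in Java (assuming the same prices, a 30-day month, and that the utility switches capacity exactly on schedule, with no transition overlap):

```java
// Savings estimate for the reader exercise: 1,000 r/w for 16 hrs + 2,000 r/w for 8 hrs.
public class DynamoDbSavings {
    static final double PRICE_PER_BLOCK_HOUR = 0.00735; // $ per 10 WCU or 50 RCU per hour

    /** Cost of holding one (rcu, wcu) setting for the given number of hours. */
    public static double cost(long rcu, long wcu, int hours) {
        return ((wcu / 10.0) + (rcu / 50.0)) * PRICE_PER_BLOCK_HOUR * hours;
    }

    public static void main(String[] args) {
        double fixed = cost(2000, 2000, 24) * 30;                       // always at peak
        double tuned = (cost(1000, 1000, 16) + cost(2000, 2000, 8)) * 30;
        System.out.printf("fixed=$%.2f tuned=$%.2f saved=$%.2f%n",
                fixed, tuned, fixed - tuned);
        // fixed=$1270.08 tuned=$846.72 saved=$423.36
    }
}
```

Roughly a third of the fixed bill is saved in this scenario, before accounting for the buffer percentages from guideline (A).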

  • 2020-12-13 07:29

    Jeff Barr recently wrote a post on the official AWS blog: "Auto Scale DynamoDB With Dynamic DynamoDB":

    https://aws.amazon.com/blogs/aws/auto-scale-dynamodb-with-dynamic-dynamodb/

    He introduced Dynamic DynamoDB, an open-source tool built by an independent developer that handles this automatically, along with a CloudFormation template.
