Autoscaling in Google Container Engine


As we work towards a Beta release, we're definitely looking at integrating the Google Compute Engine AutoScaler.

There are actually two different kinds of scaling:

  1. Scaling the number of worker nodes in the cluster up or down, depending on the number of containers running in the cluster
  2. Scaling the number of pods up and down.
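Both levels can be enabled independently today. A hedged sketch of each (the cluster name, zone, node pool, and deployment name are placeholders, not from the original answer):

```shell
# Node-level: enable the GKE cluster autoscaler on an existing cluster
# ("my-cluster", the zone, and the node pool name are hypothetical).
gcloud container clusters update my-cluster \
  --enable-autoscaling --min-nodes=1 --max-nodes=5 \
  --zone=us-central1-a --node-pool=default-pool

# Pod-level: have the Horizontal Pod Autoscaler keep a deployment
# between 2 and 5 replicas, targeting 80% CPU utilization.
kubectl autoscale deployment my-app --min=2 --max=5 --cpu-percent=80
```

Both commands require a live cluster, so treat them as a sketch of the two scaling levels rather than a copy-paste recipe.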

Since Kubernetes is an OSS project as well, we'd also like to add a Kubernetes-native autoscaler that can scale replication controllers. It's definitely on the roadmap. I expect we will actually have multiple autoscaler implementations, since autoscaling can be very application specific...

Kubernetes autoscaling: http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/

kubectl command: http://kubernetes.io/docs/user-guide/kubectl/kubectl_autoscale/

Example: kubectl autoscale deployment foo --min=2 --max=5 --cpu-percent=80

You can autoscale your deployment by using kubectl autoscale.

Autoscaling automatically adjusts the number of pods as demand rises and falls.

kubectl autoscale deployment task2deploy1 --cpu-percent=50 --min=1 --max=10

kubectl get deployment task2deploy1

NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
task2deploy1   1         1         1            1           49s
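Behind the scenes, kubectl autoscale creates a HorizontalPodAutoscaler object; you can inspect it directly to see the current target, min/max bounds, and observed CPU utilization:

```shell
# List the HPA created by the autoscale command above
kubectl get hpa task2deploy1

# Show its full status, including recent scaling events
kubectl describe hpa task2deploy1
```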

As resource consumption increases, the number of pods will grow beyond the replica count specified in your deployment.yaml file, but never above the maximum set in the kubectl autoscale command.

kubectl get deployment task2deploy1

NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
task2deploy1   7         7         7            3           4m

Similarly, as resource consumption decreases, the number of pods will go down, but never below the minimum specified in the kubectl autoscale command.
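The same policy can also be expressed declaratively instead of via kubectl autoscale. A minimal sketch of the equivalent autoscaling/v1 manifest, applied through a heredoc (this is an illustration of the same min=1/max=10/50% CPU policy, not from the original answer):

```shell
# Declarative equivalent of:
#   kubectl autoscale deployment task2deploy1 --cpu-percent=50 --min=1 --max=10
cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: task2deploy1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: task2deploy1
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
EOF
```

Keeping the HPA in a manifest lets you version it alongside the deployment instead of recreating it imperatively.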
