Node pool does not reduce its size to zero although autoscaling is enabled


Question


I have created two node pools: a small one for all the Google system jobs and a bigger one for my tasks. The bigger one should reduce its size to 0 after the job is done.

The problem is: even when there are no cron jobs running, the node pool does not reduce its size to 0.

Creating cluster:

gcloud beta container --project "projectXY" clusters create "cluster" --zone "europe-west3-a" --username "admin" --cluster-version "1.9.6-gke.0" --machine-type "n1-standard-1" --image-type "COS" --disk-size "100" --scopes "https://www.googleapis.com/auth/cloud-platform" --num-nodes "1" --network "default" --enable-cloud-logging --enable-cloud-monitoring --subnetwork "default" --enable-autoscaling --enable-autoupgrade --min-nodes "1" --max-nodes "1"

Creating node pool:

The node pool should reduce its size to 0 after all tasks are done.

gcloud container node-pools create workerpool --cluster=cluster --machine-type=n1-highmem-8 --zone=europe-west3-a --disk-size=100 --enable-autoupgrade --num-nodes=0 --enable-autoscaling --max-nodes=2 --min-nodes=0
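
To double-check that the pool was actually created with autoscaling from 0 to 2 nodes, you can describe it and look at the autoscaling block in the output (a hedged verification step, using the pool, cluster and zone names from above):

gcloud container node-pools describe workerpool --cluster=cluster --zone=europe-west3-a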

Create cron job:

kubectl create -f cronjob.yaml
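
To see whether the job actually triggers a scale-up and whether the extra node disappears again afterwards, something like the following can be watched (illustrative commands; the object names depend on what is in cronjob.yaml):

kubectl get cronjobs,jobs,pods -o wide
kubectl get nodes --watch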

Answer 1:


Quoting from Google Documentation:

"Note: Beginning with Kubernetes version 1.7, you can specify a minimum size of zero for your node pool. This allows your node pool to scale down completely if the instances within aren't required to run your workloads. However, while a node pool can scale to a zero size, the overall cluster size does not scale down to zero nodes (as at least one node is always required to run system Pods)."

Notice also that:

"Cluster autoscaler also measures the usage of each node against the node pool's total demand for capacity. If a node has had no new Pods scheduled on it for a set period of time, and [this option does not work for you since it is the last node] all Pods running on that node can be scheduled onto other nodes in the pool , the autoscaler moves the Pods and deletes the node.

Note that cluster autoscaler works based on Pod resource requests, that is, how many resources your Pods have requested. Cluster autoscaler does not take into account the resources your Pods are actively using. Essentially, cluster autoscaler trusts that the Pod resource requests you've provided are accurate and schedules Pods on nodes based on that assumption."
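
A quick way to see what has actually been requested on a given node (an illustrative check; <node-name> is a placeholder for one of the names printed by kubectl get nodes) is the "Allocated resources" section in the output of:

kubectl describe node <node-name>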

Therefore I would check:

  • that the version of your Kubernetes cluster is at least 1.7
  • that there are no pods running on the last node (check every namespace; the pods that have to run on every node, such as fluentd, kube-dns and kube-proxy, do not count); the fact that there are no cron jobs is not enough (see the example commands after this list)
  • that the autoscaler is NOT enabled on the corresponding managed instance groups, since they are different tools
  • that there are no pods stuck in any weird state still assigned to that node
  • that there are no pods waiting to be scheduled in the cluster
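
The commands below sketch how each of these checks could be done (hedged examples; <node-name> is a placeholder, and the managed instance groups are the ones backing your node pools):

# Cluster master and node versions, should be at least 1.7
gcloud container clusters describe cluster --zone europe-west3-a --format="value(currentMasterVersion,currentNodeVersion)"

# All pods (in every namespace) still scheduled on the node that refuses to go away
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>

# The AUTOSCALED column should say "no" for the node pool's managed instance group
gcloud compute instance-groups managed list

# Pods waiting to be scheduled anywhere in the cluster
kubectl get pods --all-namespaces --field-selector status.phase=Pending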

If everything above checks out, it is likely an issue with the autoscaler, and you can open a private issue with Google specifying your project ID, since there is not much the community can do.

If you are interested, post the link to the issue tracker in the comments and I will take a look at your project (I work for Google Cloud Platform Support).




Answer 2:


I ran into the same issue and tested a number of different scenarios. I finally got it to work by doing the following:

  1. Create your node pool with an initial size of 1 instead of 0:

    gcloud container node-pools create ${NODE_POOL_NAME} \
      --cluster ${CLUSTER_NAME} \
      --num-nodes 1 \
      --enable-autoscaling --min-nodes 0 --max-nodes 1 \
      --zone ${ZONE} \
      --machine-type ${MACHINE_TYPE}
    
  2. Configure your CronJob in a similar fashion to this:

    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: cronjob800m
    spec:
      schedule: "7 * * * *"
      concurrencyPolicy: Forbid
      failedJobsHistoryLimit: 0
      successfulJobsHistoryLimit: 0
      jobTemplate:
        spec:
          template:
            spec:
              containers:
              - name: cronjob800m
                image: busybox
                args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
                resources:
                  requests:
                    cpu: "800m"
              restartPolicy: Never
    

    Note that the resource request is set so that the job can only be scheduled on the large node pool, not on the small one. Also note that we set both failedJobsHistoryLimit and successfulJobsHistoryLimit to 0 so that the finished job is automatically cleaned up from the node pool after success/failure.
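
    A minimal way to exercise this (illustrative commands; cronjob800m.yaml is simply the file the manifest above was saved into):

    kubectl create -f cronjob800m.yaml
    # once the schedule fires, the pod should land on the large node pool
    kubectl get pods -o wide
    # after the pod finishes and the scale-down delay passes (typically about 10 minutes), the extra node should be removed
    kubectl get nodes --watch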

That should be it.



Source: https://stackoverflow.com/questions/49903951/node-pool-does-not-reduce-his-node-size-to-zero-although-autoscaling-is-enabled
