Kubernetes reports “pod didn't trigger scale-up (it wouldn't fit if a new node is added)” even though it would?

Submitted by 一世执手 on 2019-12-11 03:23:08

Question


I don't understand why I'm receiving this error. A new node should definitely be able to accommodate the pod: I'm only requesting 768Mi of memory and 450m of CPU in total (the sum of the three containers' requests below: 250m + 100m + 100m CPU and 512Mi + 128Mi + 128Mi memory), and the instance group that would be autoscaled is of type n1-highcpu-2 (2 vCPU, 1.8 GB memory).

How could I diagnose this further?
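For reference, a few generic kubectl commands (standard flags only, nothing project-specific) that compare the pod's aggregate requests against what each node can actually allocate, and that surface recent scheduler and autoscaler events:

# Full spec and events for the stuck pod:
kubectl describe pod initial-projectinitialabcrad-697b74b449-848bl -n production

# Allocatable capacity per node (what the scheduler compares requests against):
kubectl describe nodes | grep -A 7 "Allocatable"

# Recent events in the namespace, newest last:
kubectl get events -n production --sort-by=.lastTimestamp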

kubectl describe pod:

Name:           initial-projectinitialabcrad-697b74b449-848bl
Namespace:      production
Node:           <none>
Labels:         app=initial-projectinitialabcrad
                appType=abcrad-api
                pod-template-hash=2536306005
Annotations:    <none>
Status:         Pending
IP:             
Controlled By:  ReplicaSet/initial-projectinitialabcrad-697b74b449
Containers:
  app:
    Image:      gcr.io/example-project-abcsub/projectinitial-abcrad-app:production_6b0b3ddabc68d031e9f7874a6ea49ee9902207bc
    Port:       <none>
    Host Port:  <none>
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:     250m
      memory:  512Mi
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-srv8k (ro)
  nginx:
    Image:      gcr.io/example-project-abcsub/projectinitial-abcrad-nginx:production_6b0b3ddabc68d031e9f7874a6ea49ee9902207bc
    Port:       80/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Readiness:    http-get http://:80/api/v1/ping delay=5s timeout=10s period=10s #success=1 #failure=3
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-srv8k (ro)
  cloudsql-proxy:
    Image:      gcr.io/cloudsql-docker/gce-proxy:1.11
    Port:       3306/TCP
    Host Port:  0/TCP
    Command:
      /cloud_sql_proxy
      -instances=example-project-abcsub:us-central1:abcfn-staging=tcp:0.0.0.0:3306
      -credential_file=/secrets/cloudsql/credentials.json
    Limits:
      cpu:     1
      memory:  1Gi
    Requests:
      cpu:        100m
      memory:     128Mi
    Mounts:
      /secrets/cloudsql from cloudsql-instance-credentials (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-srv8k (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  cloudsql-instance-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cloudsql-instance-credentials
    Optional:    false
  default-token-srv8k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-srv8k
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason             Age                  From                Message
  ----     ------             ----                 ----                -------
  Normal   NotTriggerScaleUp  4m (x29706 over 3d)  cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added)
  Warning  FailedScheduling   4m (x18965 over 3d)  default-scheduler   0/4 nodes are available: 3 Insufficient memory, 4 Insufficient cpu.

Answer 1:


It turned out not to be the hardware requests; it was caused by the pod affinity rule I had defined:

podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: appType
        operator: NotIn
        values:
        - example-api
    topologyKey: kubernetes.io/hostname
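
A required podAffinity rule needs an existing pod matching the label selector on the candidate node, and a freshly added node is empty, so the autoscaler's simulation concludes the pod still wouldn't fit, hence the NotTriggerScaleUp event. If the intent was to keep this pod off nodes that already run example-api pods, the idiomatic expression is podAntiAffinity with an In operator. A minimal sketch, assuming that intent:

# Assumption: the goal is "never co-locate with example-api pods".
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchExpressions:
      - key: appType
        operator: In
        values:
        - example-api
    topologyKey: kubernetes.io/hostname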


Source: https://stackoverflow.com/questions/54314246/kubernetes-reports-pod-didnt-trigger-scale-up-it-wouldnt-fit-if-a-new-node-i
