Why does a Kubernetes pod report `Insufficient memory` even when there is free memory on the host?

Submitted by 亡梦爱人 on 2021-01-28 20:01:07

Question


I am running minikube v1.15.1 on macOS and have installed helm v3.4.1. I ran helm install elasticsearch elastic/elasticsearch --set resources.requests.memory=2Gi --set resources.limits.memory=4Gi --set replicas=1 to install Elasticsearch on the k8s cluster. The pod elasticsearch-master-0 is deployed, but it stays in Pending status.
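To confirm the overrides were actually applied to the release, helm can print the user-supplied values back (helm get values is a standard helm 3 command; elasticsearch here is the release name from the install above):

$ helm get values elasticsearch

This should echo the resources and replicas settings passed via --set.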

When I run kubectl describe pod elasticsearch-master-0, it gives me the warning below:


Warning  FailedScheduling  61s (x2 over 2m30s)  default-scheduler  0/1 nodes are available: 1 Insufficient memory.

It says Insufficient memory, but my host has at least 4 GB of free memory. Does the memory issue mean that minikube itself doesn't have enough memory? If so, how can I increase its memory?

I have increased the memory in minikube and restarted it, but I still have the same issue.

I ran minikube delete followed by minikube start. You can see in the output below that it is using 4 CPUs and 8096 MB of memory:

 minikube v1.15.1 on Darwin 11.0.1
✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
👍  Starting control plane node minikube in cluster minikube
🔥  Creating docker container (CPUs=4, Memory=8096MB) ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Below are the commands to read the CPU and memory settings from the minikube config:

$ minikube config get cpus
4
$ minikube config get memory
8096
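For reference, these settings can also be persisted with minikube config set; note that this only affects clusters created afterwards, so a delete/start cycle is still required (8192 below is just an illustrative value):

$ minikube config set memory 8192
$ minikube config set cpus 4
$ minikube delete
$ minikube start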

Below is the output from metrics-server:

$ kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   466m         5%     737Mi           37%

$ kubectl top pod
W1125 20:11:23.232025   46719 top_pod.go:265] Metrics not available for pod default/elasticsearch-master-0, age: 34m3.231199s
error: Metrics not available for pod default/elasticsearch-master-0, age: 34m3.231199s
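For completeness, the node's capacity can be inspected directly: kubectl describe node lists the Allocatable capacity and the Allocated resources already requested by scheduled pods, which is what the scheduler compares against (minikube is the single node here; the grep is just to trim the output):

$ kubectl describe node minikube | grep -A 7 Allocatable
$ kubectl describe node minikube | grep -A 7 "Allocated resources"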

The full output of kubectl describe pod is:

$ kubectl describe pod elasticsearch-master-0
Name:           elasticsearch-master-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=elasticsearch-master
                chart=elasticsearch
                controller-revision-hash=elasticsearch-master-677c65788d
                release=elasticsearch
                statefulset.kubernetes.io/pod-name=elasticsearch-master-0
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/elasticsearch-master
Init Containers:
  configure-sysctl:
    Image:      docker.elastic.co/elasticsearch/elasticsearch:7.10.0
    Port:       <none>
    Host Port:  <none>
    Command:
      sysctl
      -w
      vm.max_map_count=262144
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kthrd (ro)
Containers:
  elasticsearch:
    Image:       docker.elastic.co/elasticsearch/elasticsearch:7.10.0
    Ports:       9200/TCP, 9300/TCP
    Host Ports:  0/TCP, 0/TCP
    Limits:
      cpu:     1
      memory:  4Gi
    Requests:
      cpu:      1
      memory:   2Gi
    Readiness:  exec [sh -c #!/usr/bin/env bash -e
# If the node is starting up wait for the cluster to be ready (request params: "wait_for_status=green&timeout=1s" )
# Once it has started only check that the node itself is responding
START_FILE=/tmp/.es_start_file

# Disable nss cache to avoid filling dentry cache when calling curl
# This is required with Elasticsearch Docker using nss < 3.52
export NSS_SDB_USE_CACHE=no

http () {
  local path="${1}"
  local args="${2}"
  set -- -XGET -s

  if [ "$args" != "" ]; then
    set -- "$@" $args
  fi

  if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then
    set -- "$@" -u "${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}"
  fi

  curl --output /dev/null -k "$@" "http://127.0.0.1:9200${path}"
}

if [ -f "${START_FILE}" ]; then
  echo 'Elasticsearch is already running, lets check the node is healthy'
  HTTP_CODE=$(http "/" "-w %{http_code}")
  RC=$?
  if [[ ${RC} -ne 0 ]]; then
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with RC ${RC}"
    exit ${RC}
  fi
  # ready if HTTP code 200, 503 is tolerable if ES version is 6.x
  if [[ ${HTTP_CODE} == "200" ]]; then
    exit 0
  elif [[ ${HTTP_CODE} == "503" && "7" == "6" ]]; then
    exit 0
  else
    echo "curl --output /dev/null -k -XGET -s -w '%{http_code}' \${BASIC_AUTH} http://127.0.0.1:9200/ failed with HTTP code ${HTTP_CODE}"
    exit 1
  fi

else
  echo 'Waiting for elasticsearch cluster to become ready (request params: "wait_for_status=green&timeout=1s" )'
  if http "/_cluster/health?wait_for_status=green&timeout=1s" "--fail" ; then
    touch ${START_FILE}
    exit 0
  else
    echo 'Cluster is not yet ready (request params: "wait_for_status=green&timeout=1s" )'
    exit 1
  fi
fi
] delay=10s timeout=5s period=10s #success=3 #failure=3
    Environment:
      node.name:                     elasticsearch-master-0 (v1:metadata.name)
      cluster.initial_master_nodes:  elasticsearch-master-0,
      discovery.seed_hosts:          elasticsearch-master-headless
      cluster.name:                  elasticsearch
      network.host:                  0.0.0.0
      ES_JAVA_OPTS:                  -Xmx1g -Xms1g
      node.data:                     true
      node.ingest:                   true
      node.master:                   true
      node.remote_cluster_client:    true
    Mounts:
      /usr/share/elasticsearch/data from elasticsearch-master (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kthrd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  elasticsearch-master:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elasticsearch-master-elasticsearch-master-0
    ReadOnly:   false
  default-token-kthrd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kthrd
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  76s (x2 over 77s)  default-scheduler  0/1 nodes are available: 1 Insufficient memory.

Answer 1:


Minikube picks up your memory settings on its first start, but if you previously launched it without that option, you need to run minikube delete and start it again.
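A minimal sequence for that, reusing the 4 CPU / 8 GB sizing from the question (adjust the numbers to your machine):

$ minikube delete
$ minikube start --cpus=4 --memory=8192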

To check the resources your pods/nodes are utilizing, you can enable metrics-server with minikube addons:

➜  ~ minikube addons enable metrics-server 
🌟  The 'metrics-server' addon is enabled

You will have to wait a bit for the metrics to appear:

➜  ~ kubectl top node 
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
minikube   186m         4%     2344Mi          15%       
➜  ~ kubectl top pod 
NAME                     CPU(cores)   MEMORY(bytes)   
elasticsearch-master-0   6m           1272Mi   
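Once the node's allocatable memory covers the pod's 2Gi request, the pod should leave Pending; one way to confirm it was placed (output not shown here) is:

$ kubectl get pod elasticsearch-master-0 -o wide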
    



Answer 2:


Minikube on macOS uses a virtual machine to host Kubernetes. The VM is separate from the host, so the memory available to the single-node cluster is limited by the VM's size, not by the host's free memory.

You can give the VM more memory with:

minikube start --memory=4096
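As a sanity check after restarting, you can look at the memory the node actually sees from inside the VM (minikube ssh runs a command on the node; free -m is available in the standard minikube image):

$ minikube ssh -- free -m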


Source: https://stackoverflow.com/questions/64995076/why-kubernete-pod-reports-insufficient-memory-even-if-there-are-free-memory-on
