How to reduce CPU limits of kubernetes system resources?


Changing the default Namespace's LimitRange spec.limits.defaultRequest.cpu should be a legitimate solution for changing the default for new Pods. Note that LimitRange objects are namespaced, so if you use extra Namespaces you probably want to think about what a sane default is for them.
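
For illustration, here is a minimal LimitRange sketch for the default Namespace (the object name and the concrete values are assumptions - pick numbers that make sense for your workloads):

apiVersion: v1
kind: LimitRange
metadata:
  namespace: default
  name: cpu-defaults
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 50m     # CPU request applied to containers that do not set one
    default:
      cpu: 200m    # CPU limit applied to containers that do not set one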

As you point out, this will not affect existing objects or objects in the kube-system Namespace.

The objects in the kube-system Namespace were mostly sized empirically - based on observed values. Changing those might have detrimental effects, but maybe not if your cluster is very small.
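
Before changing anything, it is worth checking what the kube-system components currently request. A plain kubectl query along these lines shows it (nothing cluster-specific assumed, just custom-columns output):

kubectl get pods --namespace kube-system \
  --output custom-columns=NAME:.metadata.name,CPU_REQUEST:.spec.containers[*].resources.requests.cpu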

We have an open issue (https://github.com/kubernetes/kubernetes/issues/13048) to adjust the kube-system requests based on total cluster size, but that is not implemented yet. We have another open issue (https://github.com/kubernetes/kubernetes/issues/13695) to perhaps use a lower QoS for some kube-system resources, but again - not implemented yet.

Of these, I think that #13048 is the right way to implement what you're asking for. For now, the answer to "is there a better way" is sadly "no". We chose defaults for medium-sized clusters - for very small clusters you probably need to do what you are doing.

I have found that one of the best ways to reduce the system resource requests on a GKE cluster is to use the Vertical Pod Autoscaler (VPA).

Here are the VPA definitions I have used:

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: kube-dns-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: kube-dns
  updatePolicy:
    updateMode: "Auto"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: heapster-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: heapster-v1.6.0-beta.1
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: metadata-agent-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: metadata-agent
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: metrics-server-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: Deployment
    name: metrics-server-v0.3.1
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: fluentd-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: fluentd-gcp-v3.1.1
  updatePolicy:
    updateMode: "Initial"

---

apiVersion: autoscaling.k8s.io/v1beta2
kind: VerticalPodAutoscaler
metadata:
  namespace: kube-system
  name: kube-proxy-vpa
spec:
  targetRef:
    apiVersion: "extensions/v1beta1"
    kind: DaemonSet
    name: kube-proxy
  updatePolicy:
    updateMode: "Initial"
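
A note on the update modes: "Auto" lets the VPA evict and recreate pods with updated requests, while "Initial" only applies recommendations when a pod is (re)created. Assuming the VPA components are installed in your cluster, you can save the definitions above to a file (vpa.yaml is just a placeholder name), apply them, and inspect the resulting recommendations:

kubectl apply -f vpa.yaml
kubectl describe vpa kube-dns-vpa --namespace kube-system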

Here is a screenshot of the effect on the kube-dns deployment.

By the way, in case you want to try this on Google Cloud (GCE): if you try to change the CPU limit of a core service like kube-dns directly, you will get an error like this.

spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)

I tried this on Kubernetes 1.8.7 and 1.9.4.
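
For context, that error shows up when you try to edit a running Pod in place - for example (the pod name suffix here is hypothetical):

kubectl --namespace kube-system edit pod kube-dns-5c446b66bd-abcde
# changing spec.containers[*].resources.limits.cpu on a live Pod is rejected by the API server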

So at this time the smallest node you can usefully deploy is n1-standard-1. Even then, about 8% of your CPU is consumed almost constantly by Kubernetes itself as soon as you have several pods and Helm releases running, even when you are not running any major load. I suspect there is a lot of polling going on, and that various stats keep getting refreshed to make sure the cluster stays responsive.
