kubelet fails to get cgroup stats for docker and kubelet services

Backend · Unresolved · 5 answers · 1501 views
有刺的猬 asked on 2020-12-05 14:55

I'm running Kubernetes on bare-metal Debian (3 masters, 2 workers, PoC for now). I followed k8s-the-hard-way, and I'm running into the error from the title: kubelet fails to get cgroup stats for docker and kubelet services.

5 Answers
  • 甜味超标 — 2020-12-05 15:50

    Thanks angeloxx!

    I'm following the kubernetes guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/

    In the instructions, they have you make a file: /usr/lib/systemd/system/kubelet.service.d/20-etcd-service-manager.conf

    with the line:

    ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd
    

    I took your answer and added it to the end of the ExecStart line:

    ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
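    For reference, here is a minimal sketch of what the whole drop-in might look like with that change, plus the commands to apply it. The [Service] header, the empty ExecStart= reset line, and Restart=always are assumptions based on how systemd drop-in overrides normally work, not something quoted in this thread:

    # /usr/lib/systemd/system/kubelet.service.d/20-etcd-service-manager.conf (sketch)
    [Service]
    # clear the ExecStart inherited from kubelet.service before overriding it
    ExecStart=
    ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice
    Restart=always

    # reload systemd so it sees the drop-in, then restart the kubelet
    systemctl daemon-reload
    systemctl restart kubelet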
    

    I'm writing this down in case it helps someone else.

    @wolmi Thanks for the edit!

    One more note: the config above is for my etcd cluster, NOT the Kubernetes nodes. A drop-in like 20-etcd-service-manager.conf on a node would override all the settings in the 10-kubeadm.conf file, causing all kinds of misconfigurations. On nodes, use /var/lib/kubelet/config.yaml and/or /var/lib/kubelet/kubeadm-flags.env instead.
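    For the nodes themselves, a rough sketch of where those same settings usually live on a kubeadm-managed node (the exact contents below are assumptions for illustration; compare with the files kubeadm actually generated before editing anything):

    # /var/lib/kubelet/config.yaml -- KubeletConfiguration; the cgroup driver is set here
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd

    # /var/lib/kubelet/kubeadm-flags.env -- extra kubelet flags passed at service start
    KUBELET_KUBEADM_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"

    Keeping node-level overrides in those two files leaves 10-kubeadm.conf in effect, which is the point of the note above.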
