Installing the Latest Kubernetes 1.15 with kubeadm



Overview: kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and each release adjusts some of the cluster-configuration practices it applies, so experimenting with kubeadm is a good way to learn the current upstream best practices for cluster configuration.


Author: 青蛙小白

Original post: https://blog.frognew.com/2019/07/kubeadm-install-kubernetes-1.15.html


In the recently released Kubernetes 1.15, kubeadm's support for HA cluster configuration has reached beta, which shows kubeadm is getting ever closer to being usable in production.

1. Preparation

1.1 System Configuration

Before installing, make the following preparations. The two CentOS 7.6 hosts are:

cat /etc/hosts
192.168.99.11 node1
192.168.99.12 node2

If the hosts have a firewall enabled, the ports required by the various Kubernetes components must be opened; see the "Check required ports" section of Installing kubeadm. For simplicity, the firewall is disabled on every node here:

systemctl stop firewalld
systemctl disable firewalld
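If you would rather keep firewalld running, the ports from the "Check required ports" table can be opened instead. A rough sketch (control-plane ports on the master, the NodePort range on workers; adjust to your environment):

firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort Services (worker nodes)
firewall-cmd --reload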

Disable SELinux:

setenforce 0


vi /etc/selinux/config
SELINUX=disabled

Create the file /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the following commands to apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf

1.2 Prerequisites for Enabling IPVS in kube-proxy

Since IPVS is already part of the mainline kernel, enabling IPVS mode for kube-proxy only requires loading the following kernel modules:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on all Kubernetes nodes, node1 and node2:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the kernel modules were loaded correctly.

Also make sure the ipset package is installed on every node (yum install ipset). To make it easier to inspect the IPVS proxy rules, it is worth installing the ipvsadm management tool as well (yum install ipvsadm).

If these prerequisites are not met, kube-proxy will fall back to iptables mode even if its configuration enables IPVS mode.

1.3 Installing Docker

Kubernetes has used the CRI (Container Runtime Interface) since 1.6. The default container runtime is still Docker, using the dockershim CRI implementation built into the kubelet.

Install the Docker yum repository:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

List the available Docker versions:

yum list docker-ce.x86_64 --showduplicates | sort -r
docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
The Docker versions currently supported by Kubernetes 1.15 are 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09. Here Docker 18.09.7 is installed on each node:
yum makecache fast
yum install -y --setopt=obsoletes=0 \
  docker-ce-18.09.7-3.el7
systemctl start docker
systemctl enable docker

Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT:

iptables -nvL
Chain INPUT (policy ACCEPT 263 packets, 19209 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target                     prot opt in       out      source       destination
    0     0 DOCKER-USER               all  --  *        *        0.0.0.0/0    0.0.0.0/0
    0     0 DOCKER-ISOLATION-STAGE-1  all  --  *        *        0.0.0.0/0    0.0.0.0/0
    0     0 ACCEPT                    all  --  *        docker0  0.0.0.0/0    0.0.0.0/0    ctstate RELATED,ESTABLISHED
    0     0 DOCKER                    all  --  *        docker0  0.0.0.0/0    0.0.0.0/0
    0     0 ACCEPT                    all  --  docker0  !docker0 0.0.0.0/0    0.0.0.0/0
    0     0 ACCEPT                    all  --  docker0  docker0  0.0.0.0/0    0.0.0.0/0
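On some Docker versions the FORWARD chain policy defaults to DROP, which breaks traffic between Pods on different nodes. If DROP shows up here instead of ACCEPT, it can be switched back with the one-liner below (a quick sketch; make it persistent across reboots in whatever way fits your environment, for example a systemd drop-in for the docker service):

iptables -P FORWARD ACCEPT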
1.4 Setting the Docker cgroup Driver to systemd

According to the CRI installation document, on Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes the node more stable under resource pressure, so the Docker cgroup driver is switched to systemd on every node.

Create or modify /etc/docker/daemon.json:

{  "exec-opts": ["native.cgroupdriver=systemd"]}

Restart Docker:

systemctl restart docker
docker info | grep Cgroup
Cgroup Driver: systemd

2. Deploying Kubernetes with kubeadm

2.1 Installing kubeadm and kubelet

Install kubeadm and kubelet on each node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a way around the network restrictions (for example a proxy).

curl https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
yum makecache fast
yum install -y kubelet kubeadm kubectl
...
Installed:
  kubeadm.x86_64 0:1.15.0-0      kubectl.x86_64 0:1.15.0-0      kubelet.x86_64 0:1.15.0-0

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-4.el7          cri-tools.x86_64 0:1.12.0-0
  kubernetes-cni.x86_64 0:0.7.5-0               libnetfilter_cthelper.x86_64 0:1.0.0-9.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7   libnetfilter_queue.x86_64 0:1.0.2-2.el7_2

The install output shows that the dependencies cri-tools, kubernetes-cni, and socat were also pulled in:

  • Starting with Kubernetes 1.14, the CNI dependency was bumped upstream to version 0.7.5

  • socat is a dependency of kubelet

  • cri-tools is the command-line tool for the CRI (Container Runtime Interface)

Running kubelet --help shows that most of kubelet's command-line flags are now DEPRECATED, for example:

......
--address 0.0.0.0   The IP address for the Kubelet to serve on (set to 0.0.0.0 for all IPv4 interfaces and `::` for all IPv6 interfaces) (default 0.0.0.0) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
......

Instead, the official recommendation is to use --config to point kubelet at a configuration file and move the settings previously passed as flags into that file; see Set Kubelet parameters via a config file. Kubernetes took this approach in order to support Dynamic Kubelet Configuration; see Reconfigure a Node's Kubelet in a Live Cluster.

The kubelet configuration file must be in JSON or YAML format; see the documentation linked above for details.
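As a rough illustration only (not the exact file kubeadm generates), a minimal kubelet configuration file written against the KubeletConfiguration API (kubelet.config.k8s.io/v1beta1) might look like this:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0            # replaces the deprecated --address flag
failSwapOn: false           # replaces --fail-swap-on
clusterDomain: cluster.local
clusterDNS:
- 10.96.0.10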

Since Kubernetes 1.8, system swap must be turned off; with the default configuration kubelet will not start otherwise. Turn swap off as follows:

swapoff -a

Edit /etc/fstab to comment out the swap mount entry, and confirm with free -m that swap is off. Also tune the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/k8s.conf to apply the change.
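To comment out the swap entry in /etc/fstab you can either edit the file by hand or use a one-liner such as this sketch, which simply prefixes a # to every line whose fields include swap:

sed -i '/\sswap\s/ s/^/#/' /etc/fstab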

Because the two hosts used for this test also run other services, turning swap off could affect them, so instead kubelet's configuration is changed to drop this requirement. The kubelet startup argument --fail-swap-on=false removes the hard requirement that swap be off. Edit /etc/sysconfig/kubelet and add:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

2.2 Initializing the Cluster with kubeadm init

Enable the kubelet service at boot on each node:

systemctl enable kubelet.service

Running kubeadm config print init-defaults prints the default configuration used for cluster initialization:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
The defaults show, among other things, that imageRepository can be used to customize where the Kubernetes images are pulled from during cluster initialization. Based on the defaults, the configuration file kubeadm.yaml used for this kubeadm-based installation is:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.99.11
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
networking:
  podSubnet: 10.244.0.0/16
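If k8s.gcr.io is not reachable from your network, the imageRepository field mentioned above can be added to the ClusterConfiguration section to pull the control-plane images from a mirror instead. For example (the Aliyun mirror shown here is only one commonly used option, not part of this setup; verify it carries the 1.15 images before relying on it):

imageRepository: registry.aliyuncs.com/google_containers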

A cluster initialized with the default kubeadm configuration puts the node-role.kubernetes.io/master:NoSchedule taint on the master node, preventing the master from running workloads. Since this test environment has only two nodes, the taint is changed to node-role.kubernetes.io/master:PreferNoSchedule, which can be verified after initialization as shown below.
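Once the cluster has been initialized, the taint actually applied to the master can be double-checked with a quick query (assuming the master is named node1, as in this setup):

kubectl describe node node1 | grep Taints
Taints:             node-role.kubernetes.io/master:PreferNoSchedule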


Before starting the initialization, you can run kubeadm config images pull on each node to pre-pull the Docker images Kubernetes needs.

Now initialize the cluster with kubeadm. node1 is chosen as the master node; run the following on node1:

kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [192.168.99.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.99.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.99.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.004907 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: 4qcl2f.gtl3h8e5kjltuo0r
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e

The complete initialization output is recorded above; from it you can see the key steps involved in manually installing a Kubernetes cluster. The key points are:

  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"

  • [certs] generates the various certificates

  • [kubeconfig] generates the kubeconfig files

  • [control-plane] creates the static Pods for the apiserver, controller-manager, and scheduler from the YAML manifests in /etc/kubernetes/manifests

  • [bootstrap-token] generates the bootstrap token; record it, since it is needed later when adding nodes to the cluster with kubeadm join

  • The following commands set up kubectl access to the cluster for a regular user:


  • mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Finally, it prints the command for joining worker nodes to the cluster: kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e

Check the cluster status and confirm that every component is healthy:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

If cluster initialization runs into problems, it can be cleaned up with the following commands:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

2.3 Installing the Pod Network

Next, install the flannel network add-on:

mkdir -p ~/k8s/
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Note that the flannel image referenced in kube-flannel.yml is version 0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.


If a node has more than one network interface, then per flannel issue 39701 you currently need to add the --iface argument in kube-flannel.yml to specify the interface carrying the cluster's internal network; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:

containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
......

Use kubectl get pod --all-namespaces -o wide to make sure every Pod is in the Running state.

kubectl get pod -n kube-system
NAME                            READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-dr8lf        1/1     Running   0          52m
coredns-5c98db65d4-lp8dg        1/1     Running   0          52m
etcd-node1                      1/1     Running   0          51m
kube-apiserver-node1            1/1     Running   0          51m
kube-controller-manager-node1   1/1     Running   0          51m
kube-flannel-ds-amd64-mm296     1/1     Running   0          44s
kube-proxy-kchkf                1/1     Running   0          52m
kube-scheduler-node1            1/1     Running   0          51m

2.4 Testing Cluster DNS

kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-5cc7b478b6-r997p:/ ]$

Inside the pod, run nslookup kubernetes.default and confirm that it resolves correctly:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
2.5 Adding Nodes to the Kubernetes Cluster

Now add the host node2 to the Kubernetes cluster. On node2, run:

kubeadm join 192.168.99.11:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e \
    --ignore-preflight-errors=Swap
[preflight] Running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
node2 joined the cluster without any trouble. On the master node, list the cluster nodes:
kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   57m   v1.15.0
node2   Ready    <none>   11s   v1.15.0

2.5.1 Removing a Node from the Cluster

To remove the node node2 from the cluster, run the following commands.

On the master node:

kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2

On node2:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

On node1:

kubectl delete node node2

2.6 Enabling IPVS in kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

kubectl edit cm kube-proxy -n kube-system
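After the edit, the relevant part of config.conf should look roughly like the following sketch; only the mode field changes, everything else stays as kubeadm generated it:

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
...
mode: "ipvs"
...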

Then restart the kube-proxy pods on each node:

kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'


kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-7fsrg                1/1     Running   0          3s
kube-proxy-k8vhm                1/1     Running   0          9s
kubectl logs kube-proxy-7fsrg -n kube-system
I0703 04:42:33.308289       1 server_others.go:170] Using ipvs Proxier.
W0703 04:42:33.309074       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0703 04:42:33.309831       1 server.go:534] Version: v1.15.0
I0703 04:42:33.320088       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0703 04:42:33.320365       1 config.go:96] Starting endpoints config controller
I0703 04:42:33.320393       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0703 04:42:33.320455       1 config.go:187] Starting service config controller
I0703 04:42:33.320470       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0703 04:42:33.420899       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0703 04:42:33.420969       1 controller_utils.go:1036] Caches are synced for service config controller

The log line "Using ipvs Proxier" confirms that IPVS mode is enabled.
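Since ipvsadm was installed earlier, the virtual-server rules that kube-proxy programmed can also be listed directly on a node; the exact output depends on the Services in your cluster:

ipvsadm -Ln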

3. Deploying Common Kubernetes Components

More and more companies and teams are using Helm, the Kubernetes package manager, so Helm is used here to install the common Kubernetes components as well.

3.1 Installing Helm

Helm consists of the helm command-line client and the server-side tiller component, and installing it is straightforward. Download the helm CLI (version 2.14.1 here) to /usr/local/bin on the master node node1:

curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/

To install the server-side tiller, kubectl and a kubeconfig must already be set up on this machine so that kubectl can reach the apiserver. The node1 node already has kubectl configured.

Because the Kubernetes APIServer has RBAC enabled, a service account for tiller (tiller) must be created and granted suitable roles; see Role-based Access Control in the Helm documentation for details. For simplicity, it is bound directly to the built-in cluster-admin ClusterRole here. Create helm-rbac.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system




kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

Next, deploy tiller with helm:

helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
By default, tiller is deployed into the kube-system namespace of the cluster:
kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}

Note that this step needs network access to gcr.io and kubernetes-charts.storage.googleapis.com. If they are not reachable, you can point helm at a tiller image in a private registry instead, matching the installed helm version: helm init --service-account tiller --tiller-image <your-docker-registry>/tiller:v2.14.1 --skip-refresh


Finally, on node1, point the stable helm chart repository at the mirror provided by Azure China:

helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories
helm repo list
NAME    URL
stable  http://mirror.azure.cn/kubernetes/charts
local   http://127.0.0.1:8879/charts
3.2 Deploying Nginx Ingress with Helm

To expose services inside the cluster to the outside world, an Ingress is needed. Next, Helm is used to deploy Nginx Ingress onto Kubernetes. The Nginx Ingress Controller is deployed on the Kubernetes edge node; for high availability of edge nodes, see the earlier write-up on HA for Kubernetes Ingress edge nodes in a bare-metal environment. The Ingress Controller uses hostNetwork.

node1 (192.168.99.11) is used as the edge node, so label it accordingly:

kubectl label node node1 node-role.kubernetes.io/edge=
node/node1 labeled

kubectl get node
NAME    STATUS   ROLES         AGE    VERSION
node1   Ready    edge,master   138m   v1.15.0
node2   Ready    <none>        82m    v1.15.0
The values file ingress-nginx.yaml for the stable/nginx-ingress chart is as follows:
controller:
  replicaCount: 1
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - nginx-ingress
            - key: component
              operator: In
              values:
              - controller
          topologyKey: kubernetes.io/hostname
  tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: PreferNoSchedule
defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: PreferNoSchedule

The nginx ingress controller replicaCount is 1, and it will be scheduled onto the edge node node1. No externalIPs are specified on the nginx ingress controller Service; instead hostNetwork: true makes the controller use the host network.

helm repo update
helm install stable/nginx-ingress \
-n nginx-ingress \
--namespace ingress-nginx \
-f ingress-nginx.yaml

kubectl get pod -n ingress-nginx -o wide
NAME                                            READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-cc9b6d55b-pr8vr        1/1     Running   0          10m   192.168.99.11   node1   <none>           <none>
nginx-ingress-default-backend-cc888fd56-bf4h2   1/1     Running   0          10m   10.244.0.14     node1   <none>           <none>
If visiting http://192.168.99.11 returns the default backend response, the deployment is complete.
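A quick check from the command line; a request for an unknown host hitting the nginx-ingress default backend typically answers with a "default backend - 404" body:

curl http://192.168.99.11
default backend - 404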

3.3 Deploying the Dashboard with Helm

kubernetes-dashboard.yaml:

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
      - k8s.frognew.com
nodeSelector:
    node-role.kubernetes.io/edge: ''
tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: PreferNoSchedule
rbac:
  clusterAdminRole: true
helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system \
-f kubernetes-dashboard.yaml



kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-pkm2s                 kubernetes.io/service-account-token   3      3m7s
kubectl describe -n kube-system secret/kubernetes-dashboard-token-pkm2s
Name:         kubernetes-dashboard-token-pkm2s
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 2f0781dd-156a-11e9-b0f0-080027bb7c43

Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1wa20ycyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjJmMDc4MWRkLTE1NmEtMTFlOS1iMGYwLTA4MDAyN2JiN2M0MyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.24ad6ZgZMxdydpwlmYAiMxZ9VSIN7dDR7Q6-RLW0qC81ajXoQKHAyrEGpIonfld3gqbE0xO8nisskpmlkQra72-9X6sBPoByqIKyTsO83BQlME2sfOJemWD0HqzwSCjvSQa0x-bUlq9HgH2vEXzpFuSS6Svi7RbfzLXlEuggNoC4MfA4E2hF1OX_ml8iAKx-49y1BQQe5FGWyCyBSi1TD_-ZpVs44H5gIvsGK2kcvi0JT4oHXtWjjQBKLIWL7xxyRCSE4HmUZT2StIHnOwlX7IEIB0oBX4mPg2_xNGnqwcu-8OERU9IoqAAE2cZa0v3b5O2LMcJPrcxrVOukvRIumA

Use the token above to log in at the dashboard login screen.
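The dashboard is published through the Ingress host k8s.frognew.com, so the client you browse from needs that name to resolve to the edge node. One simple sketch is a hosts entry on the client machine (replace the host name with whatever you put in kubernetes-dashboard.yaml), after which the dashboard is reachable at https://k8s.frognew.com:

echo "192.168.99.11 k8s.frognew.com" >> /etc/hosts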


3.4 Deploying metrics-server with Helm

Heapster's GitHub page (https://github.com/kubernetes/heapster) shows that heapster is DEPRECATED; see its deprecation timeline there. Heapster has been removed from the various Kubernetes installation scripts since Kubernetes 1.12.

Kubernetes now recommends metrics-server instead, so metrics-server is also deployed here with helm.

metrics-server.yaml:

args:
- --logtostderr
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
nodeSelector:
    node-role.kubernetes.io/edge: ''
tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: PreferNoSchedule
helm install stable/metrics-server \
-n metrics-server \
--namespace kube-system \
-f metrics-server.yaml

The following commands then return basic metrics for the cluster nodes and pods:

kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   650m         32%    1276Mi          73%
node2   73m          3%     527Mi           30%

kubectl top pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
coredns-5c98db65d4-dr8lf                8m           7Mi
coredns-5c98db65d4-lp8dg                6m           8Mi
etcd-node1                              44m          46Mi
kube-apiserver-node1                    74m          295Mi
kube-controller-manager-node1           35m          50Mi
kube-flannel-ds-amd64-7lwm9             2m           8Mi
kube-flannel-ds-amd64-mm296             5m           9Mi
kube-proxy-7fsrg                        1m           11Mi
kube-proxy-k8vhm                        3m           11Mi
kube-scheduler-node1                    8m           15Mi
kubernetes-dashboard-848b8dd798-c4sc2   2m           14Mi
metrics-server-8456fb6676-fwh2t         10m          19Mi
tiller-deploy-7bf78cdbf7-9q94c          1m           16Mi

Unfortunately, the Kubernetes Dashboard does not yet support metrics-server, so after replacing heapster with metrics-server the dashboard can no longer graph Pod CPU and memory usage. (In practice this matters little here, since per-Pod CPU and memory are already monitored with customized Prometheus and Grafana dashboards.) There has been plenty of discussion about this on the Dashboard GitHub, for example https://github.com/kubernetes/dashboard/issues/2986, and Dashboard plans to support metrics-server at some point. Since metrics-server and the metrics pipeline are clearly the future direction of Kubernetes monitoring, metrics-server is still the recommended choice.

4. Summary

Docker images used in this installation:

# network and dns
quay.io/coreos/flannel:v0.11.0-amd64
k8s.gcr.io/coredns:1.3.1

# helm and tiller
gcr.io/kubernetes-helm/tiller:v2.14.1

# nginx ingress
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
k8s.gcr.io/defaultbackend:1.5

# dashboard and metrics-server
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
gcr.io/google_containers/metrics-server-amd64:v0.3.2
References:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://docs.docker.com/engine/installation/linux/docker-ce/centos/
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2


