A Detailed Introduction to Getting Started with k8s

Submitted 2020-08-13 16:15:27

Contents

1. Installing the k8s cluster

1.1 k8s cluster architecture:

   master node: etcd, api-server, scheduler, controller-manager

   node (worker) nodes: kubelet, kube-proxy

 

etcd: the cluster database, a key-value store that holds all cluster state

api-server: the core service; every other component reads and writes cluster state through it

controller-manager: runs the controllers, such as the ReplicationController (rc)

scheduler: picks a suitable node for each newly created pod

 

kubelet: calls docker to create and manage the containers on its node

kube-proxy: exposes services to users and load-balances traffic across pods inside the cluster

 

1.6 Configure the flannel network on all nodes

Purpose: communication between containers across nodes

a. Install etcd

b. Install and configure flannel

c. Restart docker so the change takes effect

1.7 Configure the master as a private docker image registry

a. Faster image pulls

b. Keeps internal images private
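For nodes to pull from an HTTP (non-TLS) registry on the master, docker on each node must have that address whitelisted. A minimal sketch, assuming the registry listens on 10.0.0.11:5000 (registry:2's default port; this guide does not specify one). The sketch writes to a local daemon.json; on a real node the file is /etc/docker/daemon.json.

```shell
# Sketch: trust the master's HTTP registry. 10.0.0.11:5000 is an assumed
# address (registry:2's default port); on a real node write this content
# to /etc/docker/daemon.json and restart docker.
cat > daemon.json <<'EOF'
{
    "insecure-registries": ["10.0.0.11:5000"]
}
EOF
cat daemon.json
```

After restarting docker, an image can then be pushed with `docker tag busybox 10.0.0.11:5000/busybox` followed by `docker push 10.0.0.11:5000/busybox`.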

2. What is k8s, and what does it do?

2.1 Core features of k8s

Self-healing: when a pod dies, it is automatically restarted

Elastic scaling: the number of pods can be scaled up or down on demand

Automatic service discovery and load balancing: services find their pods through labels and spread traffic across them

Rolling upgrades and one-click rollback: pods are upgraded gradually, and can be rolled back in a single command if the new version misbehaves

Secret and configuration management: passwords and config files are stored and distributed by the cluster

 

2.2 History of k8s

 

2.3 Ways to install k8s

yum

compile from source (strongly discouraged)

binary: used in production

kubeadm: used in production

 

 

3. Common k8s resources

3.1 Creating pod resources

The pod is the smallest unit k8s schedules.

A pod always contains at least two containers: the infrastructure (pause) container plus the business container.
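The idea above can be sketched as a minimal pod manifest; the pod name and nginx image are illustrative, not taken from this guide. Only the business container is declared, because the kubelet injects the infrastructure (pause) container on its own.

```shell
# Sketch: a minimal pod manifest (name and image are illustrative).
# The pause/infrastructure container is added by the kubelet, so the
# manifest lists only the business container.
cat > nginx-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
EOF
cat nginx-pod.yaml
```

It would be created with `kubectl create -f nginx-pod.yaml` and inspected with `kubectl get pod -o wide`.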

 

3.2 The ReplicationController resource

Guarantees that the specified number of pods is running.

Pods and an rc are associated through labels.

An rc supports rolling upgrades and one-click rollback.
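The label association can be sketched in an rc manifest: the rc's selector must match the labels in its pod template, and that link is how the rc finds and counts its pods (the name and image below are illustrative).

```shell
# Sketch: an rc that keeps 3 replicas running. The selector (app: web)
# must match the pod template's labels; that label link is how the rc
# finds its pods. Name and image are illustrative.
cat > nginx-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
EOF
cat nginx-rc.yaml
```

Created with `kubectl create -f nginx-rc.yaml`; on this generation of k8s a rolling upgrade would use `kubectl rolling-update nginx -f <new-rc>.yaml`.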

 

1. k8s installation

1.1: Set IP addresses, hostnames, and hosts resolution

All nodes need the following hosts entries:

10.0.0.11 master
10.0.0.12 node-1
10.0.0.13 node-2
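Since the same three entries must land in /etc/hosts on every node, the step can be scripted idempotently. A sketch that targets a local hosts.demo so it is safe to run anywhere; on a real node the target would be /etc/hosts.

```shell
# Sketch: append each cluster entry only if it is not already present.
# Writes to ./hosts.demo here so the sketch is safe to run; on a real
# node point HOSTS_FILE at /etc/hosts.
HOSTS_FILE=hosts.demo
touch "$HOSTS_FILE"
while read -r entry; do
    grep -qF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done <<'EOF'
10.0.0.11 master
10.0.0.12 node-1
10.0.0.13 node-2
EOF
cat "$HOSTS_FILE"
```

Running it twice leaves the file unchanged, so it can go into any node-provisioning script.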

1.2: Install etcd on the master node

[root@master ~]# yum install etcd -y

[root@master ~]# vim /etc/hosts

10.0.0.11 master

10.0.0.12 node-1

10.0.0.13 node-2

[root@master ~]# systemctl restart network

[root@master ~]# vim /etc/etcd/etcd.conf

line 6: ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

line 21: ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"

.......
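The two vim edits above can also be made non-interactively with sed, which helps when scripting the install. A sketch run against a local stand-in file; on the master the target is /etc/etcd/etcd.conf, and the stock default values below are assumptions about the shipped file.

```shell
# Sketch: the same two etcd.conf changes applied with sed instead of vim.
# Runs against a local stand-in file; on the master, point the sed
# commands at /etc/etcd/etcd.conf. The default values below are assumed.
cat > etcd.conf <<'EOF'
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
EOF
sed -i 's#^ETCD_LISTEN_CLIENT_URLS=.*#ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"#' etcd.conf
sed -i 's#^ETCD_ADVERTISE_CLIENT_URLS=.*#ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"#' etcd.conf
cat etcd.conf
```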

[root@master ~]# systemctl start etcd.service

[root@master ~]# systemctl enable etcd.service

Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

[root@master ~]# netstat -lntup

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address       Foreign Address     State       PID/Program name

tcp        0      0 127.0.0.1:2380      0.0.0.0:*           LISTEN      7390/etcd

tcp        0      0 0.0.0.0:22          0.0.0.0:*           LISTEN      6726/sshd

tcp6       0      0 :::2379             :::*                LISTEN      7390/etcd

tcp6       0      0 :::22               :::*                LISTEN      6726/sshd

udp        0      0 127.0.0.1:323       0.0.0.0:*                       5065/chronyd

udp6       0      0 ::1:323             :::*                            5065/chronyd

### Test that etcd is working

[root@master ~]# etcdctl set testdir/testkey0 0

0

[root@master ~]# etcdctl get testdir/testkey0

0

### Check cluster health

[root@master ~]# etcdctl -C http://10.0.0.11:2379 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://10.0.0.11:2379

cluster is healthy

 

1.4: Install kubernetes-master on the master node

[root@master ~]# yum install kubernetes-master.x86_64 -y

[root@master ~]# vim /etc/kubernetes/apiserver

......

line 8: KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

line 11: KUBE_API_PORT="--port=8080"

line 17: KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.0.11:2379"

line 23 (all on one line):

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

......

[root@master ~]# vim /etc/kubernetes/config

......

line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"

......

[root@master ~]# systemctl enable kube-apiserver.service

[root@master ~]# systemctl restart kube-apiserver.service

[root@master ~]# systemctl enable kube-controller-manager.service

[root@master ~]# systemctl restart kube-controller-manager.service

[root@master ~]# systemctl enable kube-scheduler.service

[root@master ~]# systemctl restart kube-scheduler.service

Check that the services came up correctly

[root@k8s-master ~]# kubectl get componentstatus

NAME                 STATUS    MESSAGE             ERROR

scheduler            Healthy   ok

controller-manager   Healthy   ok

etcd-0               Healthy   {"health":"true"}

 

1.5: Install kubernetes-node on the worker nodes

[root@node-1 ~]# yum install kubernetes-node.x86_64 -y

[root@node-1 ~]# vim /etc/kubernetes/config

......

line 22: KUBE_MASTER="--master=http://10.0.0.11:8080"

......

[root@node-1 ~]# vim /etc/kubernetes/kubelet

......

line 5: KUBELET_ADDRESS="--address=0.0.0.0"

line 8: KUBELET_PORT="--port=10250"

line 11: KUBELET_HOSTNAME="--hostname-override=10.0.0.12" (use 10.0.0.13 on node-2)

line 14: KUBELET_API_SERVER="--api-servers=http://10.0.0.11:8080"

......

[root@node-1 ~]# systemctl enable kubelet.service

[root@node-1 ~]# systemctl start kubelet.service

[root@node-1 ~]# systemctl enable kube-proxy.service

[root@node-1 ~]# systemctl start kube-proxy.service

Check on the master node

[root@k8s-master ~]# kubectl get nodes

NAME        STATUS    AGE

10.0.0.12   Ready     6m

10.0.0.13   Ready     3s

 

1.6 Configure the flannel network on all nodes

### Install on all nodes

[root@master ~]# yum install flannel -y

[root@master ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld

[root@node-1 ~]# yum install flannel -y

[root@node-1 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld

[root@node-2 ~]# yum install flannel -y

[root@node-2 ~]# sed -i 's#http://127.0.0.1:2379#http://10.0.0.11:2379#g' /etc/sysconfig/flanneld

### On the master node:

[root@master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'

[root@master ~]# yum install docker -y

[root@master ~]# systemctl enable flanneld.service

[root@master ~]# systemctl restart flanneld.service

[root@master ~]# systemctl restart docker

[root@master ~]# systemctl enable docker

[root@master ~]# systemctl restart kube-apiserver.service

[root@master ~]# systemctl restart kube-controller-manager.service

[root@master ~]# systemctl restart kube-scheduler.service

### Upload the busybox image archive to all nodes

[root@master ~]# rz docker_busybox.tar.gz

[root@master ~]# docker load -i docker_busybox.tar.gz

adab5d09ba79: Loading layer [==================================================>] 1.416 MB/1.416 MB

Loaded image: docker.io/busybox:latest

### Run a docker container on every machine; ping the container IPs on the other nodes to verify cross-node connectivity

[root@master ~]# docker run -it docker.io/busybox:latest

/ # ifconfig

eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:43:02
          inet addr:172.16.67.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:acff:fe10:4302/64  Scope:Link

/ # ping 172.16.67.2

64 bytes from 172.16.67.2: seq=0 ttl=64 time=0.127 ms

64 bytes from 172.16.67.2: seq=1 ttl=64 time=0.062 ms

### On the worker nodes (node-1 and node-2):

[root@node-1 ~]# systemctl enable flanneld.service

[root@node-1 ~]# systemctl restart flanneld.service

[root@node-1 ~]# service docker restart

[root@node-1 ~]# systemctl restart kubelet.service

[root@node-1 ~]# systemctl restart kube-proxy.service

### On all nodes running docker (node-1 and node-2):

[root@node-1 ~]# vim /usr/lib/systemd/system/docker.service

# Add one line under the [Service] section

......

[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT

......

 

systemctl daemon-reload

systemctl restart docker
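Editing /usr/lib/systemd/system/docker.service directly works, but the change is lost when the docker package is upgraded; a systemd drop-in achieves the same and survives upgrades. A sketch that writes the drop-in content locally; on a real node it would go to /etc/systemd/system/docker.service.d/forward-accept.conf (the drop-in file name is an arbitrary choice).

```shell
# Sketch: the same ExecStartPost as a systemd drop-in. Written locally
# here; on a real node place it at
# /etc/systemd/system/docker.service.d/forward-accept.conf (any *.conf
# name works), then run systemctl daemon-reload && systemctl restart docker.
cat > forward-accept.conf <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
cat forward-accept.conf
```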

 

 

 

 

Reference: https://www.cnblogs.com/wangyongqiang/articles/12564373.html
