Installing k8s on Vultr
The nodes required for a highly available cluster are laid out as follows:
Role | Count | Description |
---|---|---|
Deploy node | 1 | Runs the ansible/easzctl scripts; can be co-located with a master, but a dedicated node (2c4g) is recommended |
etcd nodes | 3 | An etcd cluster needs an odd number of members (1, 3, 5, 7, ...); usually co-located with the master nodes |
master node | 1 | A true HA cluster needs at least 2 masters; this walkthrough deploys a single master |
node (worker) nodes | 3 | Run the application workloads; raise the machine specs or add nodes as needed |
All instances are Singapore nodes with IPv6 and the private network enabled.
1.1 Install the required packages on the deploy node
# Ubuntu 16.04
apt-get install git python-pip -y
# CentOS 7
yum install git python-pip -y
# Install ansible with pip (inside China, the Aliyun PyPI mirror speeds things up)
#pip install pip --upgrade
#pip install ansible==2.6.12 netaddr==0.7.19
pip install pip --upgrade -i https://mirrors.aliyun.com/pypi/simple/
pip install ansible==2.6.12 netaddr==0.7.19 -i https://mirrors.aliyun.com/pypi/simple/
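As a quick sanity check, confirm that the pinned versions actually installed:

ansible --version                                          # should report 2.6.12
python -c "import netaddr; print(netaddr.__version__)"     # should report 0.7.19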
1.2 Configure passwordless SSH login on the ansible control node
# Ed25519, the more secure algorithm
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
# or traditional RSA
ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa
# $IPs is every node address, including this host; answer yes and enter the root password when prompted
ssh-copy-id $IPs
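ssh-copy-id takes one host at a time, so a small loop saves some typing. This is only a sketch that reuses the public IPs from the ansible inventory shown later in this guide; substitute your own list:

# push the key to every node in one pass
for ip in 207.148.122.1 45.76.188.2 45.76.149.0 207.148.126.1 \
          139.180.141.5 45.32.117.9 66.42.53.10; do
  ssh-copy-id -i ~/.ssh/id_ed25519.pub root@$ip   # prompts for each node's root password
done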
1.3 Configure the Vultr private network
- To prevent issues, use the RFC 1918 private range for IPv4 networks (10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16) and the RFC 4193 range for IPv6 networks (fd00::/8).
- However, the MTU on the private interface must be set to 1450 for it to work.
Set the MTU (temporary, lost on reboot):
ip link set eth1 mtu 1450
Set the IP address (temporary):
ip addr add 192.168.1.1/16 dev eth1
To make the IP permanent, edit the interface configuration file:
vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.1.1
NETMASK=255.255.0.0
IPV6INIT=no
MTU=1450
Bring up the interface:
ifup eth1
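To confirm that the address and the 1450-byte MTU really work across the Vultr private network, run a quick check from this node (192.168.1.2 here stands for any other node you have already configured):

ip addr show eth1                      # should list 192.168.1.1/16 with mtu 1450
ping -c 3 -M do -s 1422 192.168.1.2    # 1422 bytes payload + 28 bytes of headers = 1450, sent with Don't Fragment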
2. Set up the initial ansible environment
[root@master1]# tree
.
└── ansible
    ├── ansible.cfg
    └── hosts

[root@master1 ansible]# cat ansible.cfg
[defaults]
inventory=hosts
forks=10
host_key_checking=False

[root@master1 ansible]# cat hosts
[master]
207.148.122.1

[nodes]
45.76.188.2
45.76.149.0
207.148.126.1

[etcd]
139.180.141.5
45.32.117.9
66.42.53.10

[k8s:children]
master
nodes
etcd

[k8s:vars]
ansible_ssh_user=root
ansible_ssh_pass=Abcabc123

[root@master1 ansible]# cat /etc/hosts
192.168.1.2   master2
192.168.1.1   master1
192.168.1.53  etcd3
192.168.1.51  etcd1
192.168.1.52  etcd2
192.168.1.13  node3
192.168.1.11  node1
192.168.1.12  node2
2.1 Push /etc/hosts to every machine
[root@master1 ansible]# ansible k8s -m copy -a "src=/etc/hosts dest=/etc/hosts"
- Test that it worked:
[root@master1 ansible]# ansible k8s -m shell -a "hostname -i"
2.2 Push the public key to every machine (passwordless SSH from the master to the other nodes)
[root@master1 ansible]# ansible k8s -m authorized_key -a "user=root state=present key=\"{{ lookup('file','/root/.ssh/id_ed25519.pub')}}\""
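Once the key is distributed you can drop the ansible_ssh_pass line from the hosts file; a quick way to prove the key alone is enough is ansible's ping module:

# comment out ansible_ssh_pass in the hosts file first, then:
ansible k8s -m ping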
2.3 Configure the private IP on eth1
[root@master1 ansible]# cat ifcfg-eth1.j2
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR={{ipinfo['stdout']}}
NETMASK=255.255.0.0
IPV6INIT=no
MTU=1450

[root@master1 ansible]# cat setip.yml
---
- hosts: k8s
  remote_user: root
  tasks:
    - name: register ip
      shell: hostname -i
      register: ipinfo
    - name: cpIpconf
      template: src=ifcfg-eth1.j2 dest=/etc/sysconfig/network-scripts/ifcfg-eth1
#      notify:
#        - up eth1
#
#  handlers:
#    - name: up eth1
#      shell: /usr/sbin/ifup eth1
#
    - name: up eth1
      shell: ifup eth1
Run the playbook:
ansible-playbook setip.yml
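A simple ad-hoc check confirms that every node ended up with its private address on eth1 (the 192.168 prefix matches the addressing used above):

ansible k8s -m shell -a "ip -4 addr show eth1 | grep 192.168"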
3 Orchestrate the k8s installation from the ansible control node
- 3.0 Download the project source code
- 3.1 Download the binaries
- 3.2 Download the offline docker images
The easzup script is the recommended way to download the required files; once it finishes, everything (kubeasz code, binaries, offline images) is laid out under /etc/ansible.
# Download the easzup helper script; this example uses kubeasz release 2.0.3
export release=2.0.3
curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/easzup
chmod +x ./easzup
# Use the script to download all required files
./easzup -D
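When the script finishes, the layout under /etc/ansible should roughly look like the following (directory names follow the kubeasz 2.x convention; verify against your own download):

ls /etc/ansible          # numbered playbooks 01.prepare.yml ... 90.setup.yml, plus roles/, example/, manifests/
ls /etc/ansible/bin      # kubernetes, etcd and docker binaries
ls /etc/ansible/down     # offline docker images pulled by easzup -D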
3.3 Configure the cluster parameters
3.3.1 Required configuration:
cd /etc/ansible && cp example/hosts.multi-node hosts
Then edit this hosts file to reflect your actual environment (see the sketch below); once the parameters are set, run the installation, either step by step or all at once.
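A minimal sketch of the edited inventory, assuming the [etcd]/[kube-master]/[kube-node] group names used by the kubeasz 2.x example files and the private IPs from the /etc/hosts above; keep the remaining groups and variables from example/hosts.multi-node and adjust them as needed:

[etcd]
192.168.1.51 NODE_NAME=etcd1
192.168.1.52 NODE_NAME=etcd2
192.168.1.53 NODE_NAME=etcd3

[kube-master]
192.168.1.1

[kube-node]
192.168.1.11
192.168.1.12
192.168.1.13

With the inventory in place, run the playbooks: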
# Step-by-step install
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
ansible-playbook 06.network.yml
ansible-playbook 07.cluster-addon.yml

# All-in-one install
#ansible-playbook 90.setup.yml
Installing kubernetes-dashboard
Deployment
# Deploy the dashboard's main yaml manifest
$ kubectl apply -f /etc/ansible/manifests/dashboard/kubernetes-dashboard.yaml
# Create the read-write admin Service Account
$ kubectl apply -f /etc/ansible/manifests/dashboard/admin-user-sa-rbac.yaml
# Create the read-only Service Account
$ kubectl apply -f /etc/ansible/manifests/dashboard/read-user-sa-rbac.yaml
Verification
# Check the pod status
kubectl get pod -n kube-system | grep dashboard
kubernetes-dashboard-5c7687cf8-rsdv4   1/1   Running   0   89m

# Check the dashboard service
kubectl get svc -n kube-system | grep dashboard
kubernetes-dashboard   NodePort   10.68.56.253   207.148.126.1   443:22046/TCP   89m

# Check the cluster services
kubectl cluster-info | grep dashboard
kubernetes-dashboard is running at https://192.168.1.1:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

# Check the pod logs
kubectl logs kubernetes-dashboard-5c7687cf8-rsdv4 -n kube-system
- Token login (admin)
Choose the "Token" login method and paste the admin token from the output below into the input field.
# Create the Service Account and ClusterRoleBinding
$ kubectl apply -f /etc/ansible/manifests/dashboard/admin-user-sa-rbac.yaml
# Get the Bearer Token; look for the line starting with 'token:' in the output
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
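If you only want the raw token (for example to script the login), a jsonpath plus base64 one-liner extracts the same value; admin-user is the Service Account created by the manifest above:

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode; echo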
Expose the port
kubectl edit svc kubernetes-dashboard -n kube-system
# Add the following
externalIPs:
- 207.148.126.1
# Existing fields
externalTrafficPolicy: Cluster
ports:
- nodePort: 22046
  port: 443
  protocol: TCP
  targetPort: 8443
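Instead of editing the Service interactively, the same change can be applied with a one-shot merge patch (a sketch; the address is the node's public IP used above):

kubectl -n kube-system patch svc kubernetes-dashboard \
  -p '{"spec":{"externalIPs":["207.148.126.1"]}}'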
Access the dashboard
https://207.148.126.1:22046
Enter the token.
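A quick reachability check before opening a browser (-k skips verification of the dashboard's self-signed certificate):

curl -sk -o /dev/null -w '%{http_code}\n' https://207.148.126.1:22046/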