Introduction:
Since v1.7, Kubernetes has supported defining custom resources via CRDs (CustomResourceDefinitions) as a way to extend its functionality. There are currently two common ways to develop a CRD: generating client code with code-generator, or using a scaffolding tool such as kubebuilder or operator-sdk. This article walks through CRD development with kubebuilder.
1 Installing kubebuilder
1.1 Prerequisites
- go version v1.13+.
- docker version 17.03+.
- kubectl version v1.11.3+.
- kustomize v3.1.0+
- Access to a Kubernetes v1.11.3+ cluster.
1.2 Installing Go
Go is not installed in this environment, so first download the Go archive:
wget https://dl.google.com/go/go1.14.4.linux-amd64.tar.gz
Extract it to /usr/local:
tar -C /usr/local -xzf go1.14.4.linux-amd64.tar.gz
Add the environment variables:
vim /etc/profile
and append the following:
export GOPATH=/root/go
export GOROOT=/usr/local/go
export PATH=$PATH:/usr/local/go/bin
export PATH=$PATH:$GOPATH/bin
export GOPROXY=https://goproxy.io
Then apply the changes:
source /etc/profile
1.3 Installing kubebuilder
Following the official installation guide, download the release archive:
curl -L https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.1/kubebuilder_2.3.1_linux_amd64.tar.gz | tar -xz -C /tmp/
Then move it to /usr/local:
sudo mv /tmp/kubebuilder_2.3.1_linux_amd64 /usr/local/kubebuilder
Add it to PATH:
export PATH=$PATH:/usr/local/kubebuilder/bin
2 Creating a CRD
2.1 Scaffolding the project
mkdir $GOPATH/src/nodedemo
cd $GOPATH/src/nodedemo
Initialize the project with a domain:
kubebuilder init --domain demo.crd.com
Generate the API group/version and kind for the CRD:
kubebuilder create api --group test --version v1 --kind NodeTest
Answer yes twice (once to scaffold the resource, once for the controller).
This produces the following directory layout:
.
├── api
│   └── v1
│       ├── groupversion_info.go
│       ├── nodetest_types.go
│       └── zz_generated.deepcopy.go
├── bin
│   └── manager
├── config
│   ├── certmanager
│   │   ├── certificate.yaml
│   │   ├── kustomization.yaml
│   │   └── kustomizeconfig.yaml
│   ├── crd
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_nodetests.yaml
│   │       └── webhook_in_nodetests.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   ├── manager_webhook_patch.yaml
│   │   └── webhookcainjection_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── nodetest_editor_role.yaml
│   │   ├── nodetest_viewer_role.yaml
│   │   └── role_binding.yaml
│   ├── samples
│   │   └── test_v1_nodetest.yaml
│   └── webhook
│       ├── kustomization.yaml
│       ├── kustomizeconfig.yaml
│       └── service.yaml
├── controllers
│   ├── nodetest_controller.go
│   └── suite_test.go
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go
├── Makefile
└── PROJECT
2.2 Installing the CRD
make install  # installs the CRD into the cluster; you can run kustomize build config/crd to inspect the CRD YAML it applies
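For orientation, the CRD manifest that make install applies looks roughly like the sketch below. This is a trimmed illustration, not the exact output: kubebuilder v2.3 generates an apiextensions.k8s.io/v1beta1 CRD, and the real file also carries the full OpenAPI validation schema.

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: nodetests.test.demo.crd.com
spec:
  group: test.demo.crd.com
  names:
    kind: NodeTest
    listKind: NodeTestList
    plural: nodetests
    singular: nodetest
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
```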
2.3 A simple example
With the CRD installed, we can create resources of the new kind. Using a CRD has a front end and a back end: first we run a controller for the CRD's kind, then users submit manifests describing resources of that kind. The CRD controller is what processes those submitted manifests.
(1) Defining the fields
Look at the generated YAML under config/samples; this file is used to create a resource:
apiVersion: test.demo.crd.com/v1
kind: NodeTest
metadata:
  name: nodetest-sample
spec:
  # Add fields here
  foo: bar
The generated types file api/v1/nodetest_types.go defines fields that correspond to those in the YAML.
Now add a field named Created inside NodeTestStatus, indicating whether the resource has been created:
type NodeTestSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Foo is an example field of NodeTest. Edit NodeTest_types.go to remove/update
	Foo string `json:"foo,omitempty"`
}

// NodeTestStatus defines the observed state of NodeTest
type NodeTestStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
	Created bool `json:"created,omitempty"`
}
(2) Modifying the controller
Modify the controller to add the CRD's handling logic. The logic in this section is deliberately simple: when a user creates a NodeTest resource, the controller prints some information and updates the created field in the status.
Reconcile is the only method that must be implemented. Change the Reconcile method in nodetest_controller.go as follows:
func (r *NodeTestReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()
	_ = r.Log.WithValues("nodetest", req.NamespacedName)

	// your logic here
	nodetest := &testv1.NodeTest{}
	if err := r.Get(ctx, req.NamespacedName, nodetest); err != nil {
		r.Log.Error(err, "unable to fetch nodetest")
	} else {
		r.Log.V(1).Info("Successfully get foo", "foo", nodetest.Spec.Foo)
		r.Log.V(1).Info("Successfully get created", "created", nodetest.Status.Created)
		fmt.Println("status create: ", nodetest.Status.Created)
	}

	// Mark the resource as created exactly once; this write triggers one more reconcile.
	if !nodetest.Status.Created {
		nodetest.Status.Created = true
		if err := r.Update(ctx, nodetest); err != nil {
			r.Log.Error(err, "unable to update nodetest")
		}
	}
	return ctrl.Result{}, nil
}
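The loop above is intentionally idempotent: the update itself triggers a second reconcile, which finds Created already true and makes no further writes, which is why the controller log later in this section shows two reconcile passes. The stdlib-only sketch below mimics that pattern, with a map standing in for the API server; all names here are illustrative and not part of controller-runtime.

```go
package main

import "fmt"

// nodeTest is a stand-in for the real API object.
type nodeTest struct {
	Foo     string
	Created bool
}

// reconcile mimics the controller's Reconcile: read the object, write back
// only when something must change. It returns true when it performed an
// update — which, in a real cluster, would queue another reconcile for the
// same object.
func reconcile(store map[string]*nodeTest, key string) bool {
	obj, ok := store[key]
	if !ok {
		return false // object not found; nothing to do
	}
	if !obj.Created {
		obj.Created = true // mirror of nodetest.Status.Created = true
		return true
	}
	return false // already marked: the loop settles into a no-op
}

func main() {
	store := map[string]*nodeTest{
		"default/nodetest-sample": {Foo: "bar"},
	}
	fmt.Println("first pass updated:", reconcile(store, "default/nodetest-sample"))  // true
	fmt.Println("second pass updated:", reconcile(store, "default/nodetest-sample")) // false
}
```

Writing reconcilers this way — converge toward the desired state, then do nothing — is what keeps a controller safe to re-run on every event.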
(3) Checking the result
Run the controller:
make run  # effectively go run ./main.go; this runs some checks first and then starts the controller. Keep it running in this terminal and use another terminal for the next steps.
Create a resource of the new kind. In the other terminal run the following; the NodeTest resource is created successfully, and its status field has been filled in:
[root@k8s-1 nodedemo]# kubectl apply -f config/samples/
nodetest.test.demo.crd.com/nodetest-sample created
[root@k8s-1 nodedemo]# kubectl get NodeTest
NAME              AGE
nodetest-sample   7s
[root@k8s-1 nodedemo]# kubectl get NodeTest
NAME              AGE
nodetest-sample   119s
[root@k8s-1 nodedemo]# kubectl describe NodeTest nodetest-sample
Name:         nodetest-sample
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"test.demo.crd.com/v1","kind":"NodeTest","metadata":{"annotations":{},"name":"nodetest-sample","namespace":"default"},"spec"...
API Version:  test.demo.crd.com/v1
Kind:         NodeTest
Metadata:
  Creation Timestamp:  2020-07-13T02:49:55Z
  Generation:          2
  Resource Version:    97160540
  Self Link:           /apis/test.demo.crd.com/v1/namespaces/default/nodetests/nodetest-sample
  UID:                 b124b780-98ad-4b73-a6e2-1ab8e362214d
Spec:
  Foo:  bar
Status:
  Created:  true
Events:  <none>
After a successful run, switch to the terminal running the controller and check its output:
2020-07-13T10:49:55.305+0800 DEBUG controllers.NodeTest Successfully get foo {"foo": "bar"}
2020-07-13T10:49:55.305+0800 DEBUG controllers.NodeTest Successfully get created {"created": false}
status create: false
2020-07-13T10:49:55.321+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "nodetest", "request": "default/nodetest-sample"}
2020-07-13T10:49:55.321+0800 DEBUG controllers.NodeTest Successfully get foo {"foo": "bar"}
2020-07-13T10:49:55.321+0800 DEBUG controllers.NodeTest Successfully get created {"created": true}
status create: true
2020-07-13T10:49:55.321+0800 DEBUG controller-runtime.controller Successfully Reconciled {"controller": "nodetest", "request": "default/nodetest-sample"}
2.4 Deploying the CRD controller
(1) Modify the Dockerfile
RUN go env -w GO111MODULE=on
RUN go env -w GOPROXY=https://goproxy.cn,direct
# The two lines above configure a module proxy; without it, downloading Go modules is slow and many fail outright
RUN go mod download

# FROM gcr.io/distroless/static:nonroot  # this base image also cannot be pulled; replace it with a domestic mirror (here, one on Alibaba Cloud)
FROM registry.cn-hangzhou.aliyuncs.com/harmonycloud/static:nonroot
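Putting the changes together, the patched Dockerfile then looks roughly like this. This is a sketch based on the kubebuilder v2.3 scaffold; the Go version and copied paths in your generated file may differ.

```dockerfile
# Build the manager binary
FROM golang:1.13 as builder

WORKDIR /workspace
# Copy the Go module manifests and cache dependencies
COPY go.mod go.mod
COPY go.sum go.sum
RUN go env -w GO111MODULE=on
RUN go env -w GOPROXY=https://goproxy.cn,direct
RUN go mod download

# Copy the sources and build
COPY main.go main.go
COPY api/ api/
COPY controllers/ controllers/
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go

# Package the manager binary on a mirror of the distroless base image
FROM registry.cn-hangzhou.aliyuncs.com/harmonycloud/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
USER nonroot:nonroot

ENTRYPOINT ["/manager"]
```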
(2) Build the image
[root@k8s-1 nodedemo]# make docker-build IMG=registry.d.com/dmp/nodecrd-controller:1.0
Here IMG=registry.d.com/dmp/nodecrd-controller:1.0 sets the image name; change it to your own.
(3) Push the image
[root@k8s-1 nodedemo]# make docker-push IMG=registry.d.com/dmp/nodecrd-controller:1.0
This pushes the image to the remote registry.
(4) Deploy the CRD controller to Kubernetes
[root@k8s-1 nodedemo]# make deploy IMG=registry.d.com/dmp/nodecrd-controller:1.0
Before deploying, you can run kustomize build config/default to see exactly what will be applied to the cluster.
During deployment, the image gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0 cannot be pulled and must be replaced; edit it in config/default/manager_auth_proxy_patch.yaml.
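The part of config/default/manager_auth_proxy_patch.yaml to edit looks roughly like the fragment below; only the image line needs to change, and the mirror path here is a placeholder you must replace with a registry you can actually pull from.

```yaml
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        # original value: gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0
        image: <your-mirror>/kube-rbac-proxy:v0.5.0
```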
After deployment, check the result with the following command:
[root@k8s-1 nodedemo]# kubectl get all -n nodedemo-system
NAME                                               READY   STATUS    RESTARTS   AGE
pod/nodedemo-controller-manager-67c7b4f855-jzqpf   2/2     Running   0          7m41s

NAME                                                  TYPE        CLUSTER-IP     EXTERNAL-IP
service/nodedemo-controller-manager-metrics-service   ClusterIP   10.105.39.51   <none>

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nodedemo-controller-manager   1/1     1            1           152m

NAME                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nodedemo-controller-manager-67c7b4f855   1         1         1       7m42s
replicaset.apps/nodedemo-controller-manager-7dd6cb997b   0         0         0       152m
(5) Using the CRD resource
Run the following command to create a resource of kind NodeTest:
[root@k8s-1 nodedemo]# kubectl apply -f config/samples/
nodetest.test.demo.crd.com/nodetest-sample created
Once it completes, check the log output of nodedemo-controller-manager.
Source: oschina
Link: https://my.oschina.net/u/3825598/blog/4893976