Question
I want to know if there is any way to define a Kubernetes or OpenShift template that loads environment variables from a file in one of the volumes.
What I'm trying to achieve is to:
- generate a value in an initContainer
- write the value to a file
- load the value as an environment variable when starting the main container
If anyone knows an alternative that allows the main container to read an environment variable generated by the initContainer, that would solve my problem too.
Thank you
Answer 1:
I can see two ways to achieve what you need:
1 - Use a configMap: give your initContainer permission to run kubectl to create a configMap (or secret) with the desired value, then make your main container read the configMap and expose the value as an environment variable.
2 - Use a persistentVolume: write the file in the initContainer, then mount the same volume in the main container and read the file there.
The first method is more elegant IMO, because you can configure the permission level and isolate the configMap object using Role permissions.
The second method is easier and requires fewer steps than the first, but it depends on what kind of data you need to store; if it is sensitive data, I would recommend the first method (using a secret).
METHOD 1: configMap
This approach consists of creating a Kubernetes configMap with the variable you wish, and using the value from this configMap to set an environment variable in the main container.
It requires some extra steps:
- Create a serviceAccount
- Create a Role allowing the serviceAccount to perform actions on the configMap
- Create a RoleBinding to connect the serviceAccount with the Role
In this case the initContainer will be responsible for creating/updating the configMap, and your main container will read this configMap and set the values as env vars.
NOTE: In these examples I'm using all resources in the myns namespace. You should make the proper changes to best fit your environment, such as the Role/ClusterRole permissions; you could make them more restrictive using resourceNames (see the Kubernetes RBAC documentation).
envFrom: this is what reads the configMap from Kubernetes and sets your environment variables (more information in the Kubernetes docs on configuring a pod to use a configMap).
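If you only need one specific key rather than every key in the configMap, env with configMapKeyRef is an alternative to envFrom. A minimal sketch, assuming the my-var configMap with a MYVAR key created by the initContainer below:
env:
- name: MYVAR
  valueFrom:
    configMapKeyRef:
      name: my-var
      key: MYVAR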
The following spec will create the serviceAccount, Role and RoleBinding:
Create a file named rbac-sa-myuser.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-myuser
  namespace: myns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myns
  name: role-configmap
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "update", "get", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-configmap
  namespace: myns
roleRef:
  kind: Role
  name: role-configmap
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: sa-myuser
  namespace: myns
Apply with kubectl apply -f rbac-sa-myuser.yaml
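To verify the RBAC setup took effect, you can impersonate the serviceAccount with kubectl auth can-i (a quick check, assuming the names above):
kubectl auth can-i create configmaps -n myns --as system:serviceaccount:myns:sa-myuser
# should print: yes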
Now you need to make the proper changes in your deployment template, adding the extra parameters:
serviceAccountName:
spec:
  serviceAccountName: sa-myuser
envFrom:
envFrom:
- configMapRef:
    name: my-var
initContainer: Here is just an example of an initContainer running a command to create the configMap; you need to adjust it for your use case:
initContainers:
- name: my-init
  image: bitnami/kubectl
  command: ['sh', '-c', 'kubectl delete cm my-var ; kubectl create cm my-var --from-literal MYVAR=UPVOTEIT']
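The delete/create pair above leaves a brief window where the configMap does not exist. If that matters for your use case, one common alternative (not from the original answer, just a sketch) is to generate the manifest with a client-side dry run and pipe it into kubectl apply, which is idempotent:
command: ['sh', '-c', 'kubectl create cm my-var --from-literal MYVAR=UPVOTEIT --dry-run=client -o yaml | kubectl apply -f -']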
In the end, your deployment spec should look like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: var-example
  namespace: myns
spec:
  selector:
    matchLabels:
      app: var-example
  template:
    metadata:
      labels:
        app: var-example
    spec:
      serviceAccountName: sa-myuser
      containers:
      - name: var-example
        image: nginx
        envFrom:
        - configMapRef:
            name: my-var
        ports:
        - name: http
          containerPort: 80
      initContainers:
      - name: my-init
        image: bitnami/kubectl
        command: ['sh', '-c', 'kubectl delete cm my-var ; kubectl create cm my-var --from-literal MYVAR=UPVOTEIT']
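Once it is running, you can confirm the variable reached the main container (assuming the manifest above is saved as deployment.yaml):
kubectl apply -f deployment.yaml
kubectl exec -n myns deploy/var-example -- env | grep MYVAR
# should print: MYVAR=UPVOTEIT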
METHOD 2: persistentVolume
You will need to create a persistentVolume and mount it in both containers. As an example, I will use hostPath to demonstrate how it works, but you should find the best kind of persistent volume for your workload (see the Kubernetes documentation for a list of all volume types).
The following yaml will create a 2Gi persistentVolume and a 1Gi persistentVolumeClaim on your node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
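After applying it, check that the claim bound to the volume (assuming the manifest is saved as pv.yaml):
kubectl apply -f pv.yaml
kubectl get pvc pv-claim
# STATUS should show Bound once the claim matches the volume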
Then just create your deployment, mounting the volume in both the initContainer and the main container, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: var-example
spec:
  selector:
    matchLabels:
      app: var-example
  template:
    metadata:
      labels:
        app: var-example
    spec:
      volumes:
      - name: pv-storage
        persistentVolumeClaim:
          claimName: pv-claim
      containers:
      - name: var-example
        image: nginx
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - mountPath: "/mnt/data"
          name: pv-storage
        command: ["sh", "-c", "echo MYVAR=$(cat /mnt/data/myfile.txt) >> /etc/environment ; sleep 3600"]
      initContainers:
      - name: my-init
        image: busybox:1.28
        volumeMounts:
        - mountPath: "/mnt/data"
          name: pv-storage
        command: ['sh', '-c', 'echo "UPVOTE_IT" > /mnt/data/myfile.txt']
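Keep in mind that appending to /etc/environment, as the main container command above does, only affects new login shells; the already-running process will not see the variable. If the main process itself needs the value, a sketch of an alternative main-container command (assuming the nginx image used above) is to export the variable and then exec the real entrypoint:
command: ["sh", "-c", "export MYVAR=$(cat /mnt/data/myfile.txt) && exec nginx -g 'daemon off;'"]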
Source: https://stackoverflow.com/questions/61375369/openshift-or-kubernate-environment-variables-from-file