persistent-volumes

Are VOLUMEs declared in a Dockerfile persistent in Kubernetes?

Submitted by 那年仲夏 on 2021-02-17 06:01:18
Question: Some Dockerfiles have a VOLUME instruction. What happens when such containers are deployed in Kubernetes but no Kubernetes volume is provided: no persistent volume (PV), nor persistent volume claim (PVC)? Where are the files stored? Is the volume persistent? For example, the Dockerfile for Docker's library/postgresql container image has: VOLUME /var/lib/postgresql/data The stable/postgresql helm chart won't always create a PV: kind: StatefulSet ### SNIP SNIP ### containers: - name: {{
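
A path declared with VOLUME but not backed by any Kubernetes volume ends up as ordinary ephemeral storage on the node: the data survives container restarts within the pod, but is gone once the pod is deleted or rescheduled. To actually persist /var/lib/postgresql/data you mount a PVC at that path. A minimal sketch, assuming a default StorageClass exists and using hypothetical names (pgdata, postgres:13):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata                      # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - name: postgres
      image: postgres:13
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data   # same path as the Dockerfile VOLUME
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pgdata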

Unable to set up Couchbase Operator 1.2 with a persistent volume on a local storage class

Submitted by 做~自己de王妃 on 2021-02-10 14:50:38
Question: I am trying to set up Couchbase Operator 1.2 on my local system. I followed these steps: install the Couchbase Admission Controller, deploy the Couchbase Autonomous Operator, deploy the Couchbase Cluster, and access Couchbase from the UI. The problem is that as soon as the system, Docker, or the pod resets, the cluster's data is lost. So I tried adding a persistent volume with a local storage class as mentioned in the docs, but the result was still
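
One way this is commonly handled is to pre-create a no-provisioner StorageClass and one local PersistentVolume per Couchbase node, then point the cluster spec's volume claim template at that class. A minimal sketch, assuming a hypothetical node name (worker-1) and disk path (/mnt/disks/couchbase); the exact field the operator expects for claim templates depends on the operator version:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: couchbase-pv-0               # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/couchbase       # hypothetical path; must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1           # hypothetical node name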

How to create a MySQL Kubernetes service with a locally mounted data volume?

Submitted by  ̄綄美尐妖づ on 2021-02-07 20:47:24
Question: I should be able to mount a local directory as the persistent volume data folder for a MySQL Docker container running under minikube/Kubernetes. I have no problem achieving a shared volume when running it with Docker directly, but running it under Kubernetes I'm not able to. Environment: macOS 10.13.6; Docker Desktop Community 2.0.0.2 (30215), channel stable, 0b030e17ca; Engine 18.09.1; Compose 1.23.2; Machine 0.16.1; Kubernetes v1.10.11; minikube v0.33.1. Steps to reproduce the behavior: install
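
Under minikube the Kubernetes node is a VM, so a hostPath only works for directories that are actually visible inside that VM (for example via minikube mount, or the VM's built-in shares such as /Users on macOS). A minimal sketch, assuming /data/mysql has already been made visible inside the VM, e.g. with: minikube mount ~/mysql-data:/data/mysql (hypothetical paths and names):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv                     # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  hostPath:
    path: /data/mysql                # path inside the minikube VM
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc                    # referenced by the mysql pod's volumes section
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi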

Kubernetes persistent volume ReadWriteOnce (RWO) does not work for NFS

Submitted by 最后都变了- on 2021-01-29 03:52:33
Question: According to the docs: "ReadWriteOnce – the volume can be mounted as read-write by a single node". I created a PV based on NFS: apiVersion: v1 kind: PersistentVolume metadata: name: tspv01 spec: capacity: storage: 15Gi accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Recycle nfs: path: /gpfs/fs01/shared/prod/democluster01/dashdb/gamestop/spv01 server: 169.55.11.79 and a PVC for this PV: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: sclaim spec: accessModes: -
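
Worth noting: access modes are a capability declared on the volume and enforced at attach time by the volume plugin; NFS has no node-level attach step, so ReadWriteOnce on an NFS PV does not prevent several nodes from mounting the same export. For completeness, a sketch of a claim that would bind to the PV above (the 15Gi request simply matches the PV's capacity; it is an assumption, not the asker's original):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sclaim
spec:
  accessModes:
    - ReadWriteOnce              # matches the PV, but is not enforced by NFS itself
  resources:
    requests:
      storage: 15Gi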

Kubernetes, cannot mount NFS share via DNS

Submitted by 那年仲夏 on 2021-01-28 08:20:47
Question: I am trying to mount an NFS share (outside of the k8s cluster) in my container via DNS lookup; my config is as below: apiVersion: v1 kind: Pod metadata: name: service-a spec: containers: - name: service-a image: dockerregistry:5000/centOSservice-a command: ["/bin/bash"] args: ["/etc/init.d/jboss","start"] volumeMounts: - name: service-a-vol mountPath: /myservice/por/data volumes: - name: service-a-vol nfs: server: nfs.service.domain path: "/myservice/data" restartPolicy: OnFailure nslookup of nfs
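
A likely cause: the NFS mount is performed by the kubelet on the node, which resolves names through the node's own resolver rather than cluster DNS, so a cluster-internal name like nfs.service.domain may not resolve at mount time. A minimal sketch of the usual workaround, pointing the volume at an address the node itself can resolve or reach (the IP below is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  containers:
    - name: service-a
      image: dockerregistry:5000/centOSservice-a
      command: ["/bin/bash"]
      args: ["/etc/init.d/jboss", "start"]
      volumeMounts:
        - name: service-a-vol
          mountPath: /myservice/por/data
  volumes:
    - name: service-a-vol
      nfs:
        server: 10.0.0.50          # hypothetical: an IP or FQDN resolvable from the node, not from cluster DNS
        path: "/myservice/data"
  restartPolicy: OnFailure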

Kubernetes Failed to Create PersistentVolumeClaim on AWS-EBS

Submitted by 假装没事ソ on 2021-01-28 07:50:34
Question: I set up a Kubernetes cluster with four EC2 instances using kubeadm. The Kubernetes cluster works fine, but it fails when I try to create a PersistentVolumeClaim. First I created a StorageClass with the following YAML, which works fine: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: generic annotations: storageclass.kubernetes.io/is-default-class: "true" provisioner: kubernetes.io/aws-ebs parameters: type: gp2 encrypted: "false" Then I tried to create a PersistentVolumeClaim with
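
For reference, a claim that exercises the generic class above would look roughly like this; a minimal sketch with a hypothetical name and size. Dynamic EBS provisioning additionally assumes the cluster was brought up with the AWS cloud provider enabled and that the nodes' IAM roles allow volume creation:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ebs-claim                  # hypothetical name
spec:
  storageClassName: generic
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi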

How to enable persistence in helm prometheus-operator

Submitted by 筅森魡賤 on 2021-01-27 05:36:07
Question: I am using the prometheus-operator helm chart. I want the data in the Prometheus server to persist, but upon restart of the Prometheus StatefulSet the data disappears. When inspecting the YAML definitions of the associated StatefulSet and Pod objects, there is no PersistentVolumeClaim. I tried the following change to values.yaml, per the docs at https://github.com/helm/charts/tree/master/stable/prometheus: prometheus: server: persistentVolume: enabled: true but this has no effect on the end
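
The snippet above follows the stable/prometheus chart's values layout; the prometheus-operator chart instead wires storage through the Prometheus custom resource. A minimal sketch of the corresponding values.yaml fragment, assuming a hypothetical StorageClass called standard (the exact key layout can vary between chart versions):

prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: standard     # hypothetical class name
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi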

HostPath: assigning a persistentVolume to a specific worker node in the cluster

Submitted by 我怕爱的太早我们不能终老 on 2021-01-05 08:40:06
Question: Using kubeadm I created a cluster with a master and a worker node. Now I want to create a persistentVolume on the worker node, to be bound to a Postgres pod. I expected the code below to create the persistentVolume at the path /postgres on the worker node, but it seems hostPath does not work this way in a cluster. How should I assign this volume to the specific node? kind: PersistentVolume apiVersion: v1 metadata: name: pv-postgres labels: type: local spec: capacity: storage: 2Gi accessModes: -
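
A plain hostPath PV carries no node information, so nothing stops the scheduler from placing the Postgres pod on a different node where /postgres is empty. Two common options are pinning the pod with nodeSelector/nodeAffinity, or using a local volume, which embeds the node binding in the PV itself. A minimal sketch of the latter, reusing the local-storage class idea from the Couchbase entry above and a hypothetical node name:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-postgres
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage    # assumes a no-provisioner StorageClass of this name
  local:
    path: /postgres                  # directory must already exist on the worker node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1      # hypothetical node name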

Kubernetes: How to config a group of pods to be deployed on the same node?

Submitted by 半腔热情 on 2021-01-01 10:04:13
Question: The use case is like this: we have several pods using the same persistentVolumeClaim with the accessMode set to ReadWriteOnce (because the storage class of the PersistentVolume only supports ReadWriteOnce). From https://kubernetes.io/docs/concepts/storage/persistent-volumes/: "ReadWriteOnce -- the volume can be mounted as read-write by a single node". So these pods should be deployed on the same node in order to access the PVC (otherwise they will fail). I would like to ask if there are any
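
One way to keep such pods together is inter-pod affinity on the hostname topology key, so every replica is scheduled next to the others that share the claim. A minimal sketch, assuming a hypothetical label app: shared-data and an existing ReadWriteOnce claim named shared-pvc:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-data-consumer         # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shared-data
  template:
    metadata:
      labels:
        app: shared-data
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: shared-data   # co-locate with pods carrying the same label
              topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: busybox             # placeholder image
          command: ["sleep", "3600"]
          volumeMounts:
            - name: shared
              mountPath: /data
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-pvc    # hypothetical existing ReadWriteOnce claim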

Bind a different Persistent Volume to each replica in a Kubernetes Deployment

Submitted by 北战南征 on 2020-12-28 20:38:47
Question: I am using a PVC with ReadWriteOnce access mode, which is used by a logstash Deployment that runs a stateful application and uses this PVC. Each pod in the Deployment will try to bind to the same persistent volume claim. With replicas > 1, this will fail (since the volume supports ReadWriteOnce, only the first pod will be able to bind successfully). How do I specify that each pod is bound to a separate PV? I don't want to define 3 separate yamls for each logstash replica / instance
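
A Deployment always points every replica at the same claim, so the usual answer is a StatefulSet with volumeClaimTemplates, which stamps out one PVC (and hence one PV) per replica automatically. A minimal sketch with hypothetical names, image tag, and sizes:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash
spec:
  serviceName: logstash              # assumes a headless Service of this name exists
  replicas: 3
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.10.1   # hypothetical version tag
          volumeMounts:
            - name: data
              mountPath: /usr/share/logstash/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi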