persistent-volume-claims

Azure ACS AzureFile Dynamic Persistent Volume Claim

Submitted by ♀尐吖头ヾ on 2020-01-06 06:05:34
Question: I am trying to dynamically provision storage using a StorageClass I've defined with type azure-file. I've tried setting both the storageAccount and skuName parameters in the StorageClass. Here is my example with storageAccount set:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azuretestfilestorage
  namespace: kube-system
provisioner: kubernetes.io/azure-file
parameters:
  storageAccount: <storage_account_name>
```

The StorageClass is created successfully, however when I try to …
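For reference, a minimal sketch of the skuName variant the question mentions, together with a claim that would trigger dynamic provisioning against it. The names (azurefile-sku, azurefile-claim) and the 5Gi size are illustrative, not from the question:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-sku
provisioner: kubernetes.io/azure-file
parameters:
  # storage account SKU; the provisioner creates or reuses a matching account
  skuName: Standard_LRS
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azurefile-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azurefile-sku
  resources:
    requests:
      storage: 5Gi
```

Creating the claim is what drives provisioning: the azure-file provisioner allocates a file share and binds a PV to the claim automatically.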

Kubernetes: how to set VolumeMount user group and file permissions

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-29 20:42:13
Question: I'm running a Kubernetes cluster on AWS using kops. I've mounted an EBS volume onto a container and it is visible from my application, but it's read-only because my application does not run as root. How can I mount a PersistentVolumeClaim as a user other than root? The VolumeMount does not seem to have any options to control the user, group, or file permissions of the mounted path. Here is my Deployment YAML file:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: notebook-1
spec:
```
…
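The usual Kubernetes answer here is the pod-level securityContext: fsGroup makes the kubelet set group ownership of supported volumes (including EBS) so a non-root process can write. A sketch, assuming a UID/GID of 1000 and filling in a hypothetical image and claim name that are not in the question:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: notebook-1
spec:
  template:
    metadata:
      labels:
        app: notebook-1
    spec:
      securityContext:
        # kubelet recursively sets group ownership of the volume to this GID
        fsGroup: 1000
        runAsUser: 1000
      containers:
        - name: notebook
          image: example/notebook:latest   # illustrative image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: notebook-data       # hypothetical PVC name
```

fsGroup applies at mount time, so no chown/initContainer workaround is needed for volume types that support ownership management.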

MountVolume.SetUp failed for volume “nfs” : mount failed: exit status 32

Submitted by 岁酱吖の on 2019-12-22 05:13:08
Question: This is the second question, following my first question at "PersistentVolumeClaim is not bound: 'nfs-pv-provisioning-demo'". I am setting up a Kubernetes lab using one node only and learning to set up Kubernetes NFS. I am following the Kubernetes NFS example step by step from the following link: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs Based on feedback provided by 'helmbert', I modified the content of https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs …
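Exit status 32 is the generic mount(8) failure code; for NFS volumes one frequent cause is that the node doing the mount lacks the NFS client userland. A hedged first check, assuming a Debian/Ubuntu node (package names differ on other distros; `<nfs-server-ip>` is a placeholder for your NFS server Service's ClusterIP):

```shell
# mount.nfs must exist on every node that mounts the volume
if ! command -v mount.nfs >/dev/null; then
  sudo apt-get update
  sudo apt-get install -y nfs-common
fi

# verify the export is actually reachable and exported to this node
showmount -e <nfs-server-ip>
```

If mount.nfs exists and the export lists correctly, the next place to look is the kubelet log on the node for the full mount error text.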

PersistentVolumeClaim is not bound: “nfs-pv-provisioning-demo”

Submitted by 北城以北 on 2019-12-21 22:54:47
Question: I am setting up a Kubernetes lab using one node only and learning to set up Kubernetes NFS. I am following the Kubernetes NFS example step by step from the following link: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs Trying the first section, the NFS server part, I executed three commands:

```shell
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service
```
…
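When a claim stays unbound, the Events section of the claim usually states the reason (no matching PV, capacity or accessModes mismatch, wrong storage class). A generic triage sequence with kubectl, not specific to this example beyond the claim name it quotes:

```shell
# list claims and their phase (Pending vs Bound)
kubectl get pvc

# the Events at the bottom explain why binding failed
kubectl describe pvc nfs-pv-provisioning-demo

# check that a PV exists, is Available, and matches capacity/accessModes/storageClassName
kubectl get pv
```

A static PV only binds when its capacity, access modes, and storageClassName all satisfy the claim, so comparing the two objects side by side usually reveals the mismatch.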

Kubernetes: Can't delete PersistentVolumeClaim (pvc)

Submitted by 本秂侑毒 on 2019-12-20 11:49:50
Question: I created the following PersistentVolume by calling kubectl create -f nameOfTheFileContainingTheFollowingContent.yaml:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-monitoring-static-content
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/some/path"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-monitoring-static-content-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 100Mi
```

After this I …
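A PVC that hangs in Terminating is usually held by the kubernetes.io/pvc-protection finalizer while some pod still mounts it; deleting that pod normally lets the deletion complete. As a last resort the finalizers can be cleared by hand — a sketch only, since clearing finalizers bypasses a safety check:

```shell
# see which finalizers are holding the claim
kubectl get pvc pv-monitoring-static-content-claim -o jsonpath='{.metadata.finalizers}'

# Events and "Used By" hints here show pods still mounting the claim;
# delete those pods first if at all possible
kubectl describe pvc pv-monitoring-static-content-claim

# last resort: drop the finalizers so the stuck deletion can finish
kubectl patch pvc pv-monitoring-static-content-claim -p '{"metadata":{"finalizers":null}}'
```

The same pattern applies to a stuck PersistentVolume, which carries a kubernetes.io/pv-protection finalizer.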

Can I rely on volumeClaimTemplates naming convention?

Submitted by ≡放荡痞女 on 2019-12-09 06:33:30
Question: I want to set up a pre-defined PostgreSQL cluster on a bare-metal Kubernetes 1.7 with local PVs enabled. I have three worker nodes. I created a local PV on each node and deployed the StatefulSet successfully (with some complex scripting to set up Postgres replication). However, I noticed that there is a kind of naming convention between the volumeClaimTemplates and the PersistentVolumeClaims. For example:

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
volumeClaimTemplates:
  - metadata:
      name:
```
…

Kubernetes: Is it possible to mount volumes to a container running as a CronJob?

Submitted by 浪子不回头ぞ on 2019-12-06 23:17:34
Question: I'm attempting to create a Kubernetes CronJob to run an application every minute. A prerequisite is that I need to get my application code onto the container that runs within the CronJob. I figure the best way to do so is to use a persistent volume, a PVC, and then define the volume and mount it into the container. I've done this successfully with containers running within a Pod, but it appears to be impossible within a CronJob? Here's my attempted configuration:

```yaml
apiVersion: batch
```
…
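Volumes are in fact allowed in a CronJob: the pod template simply sits two levels deeper, under spec.jobTemplate.spec.template, and volumes/volumeMounts go inside that pod spec exactly as in a Deployment. A minimal sketch — the names, image, and claim name are illustrative, not from the question:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: app-every-minute
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: app
              image: example/app:latest        # illustrative image
              volumeMounts:
                - name: code
                  mountPath: /app
          volumes:
            - name: code
              persistentVolumeClaim:
                claimName: app-code-claim      # hypothetical PVC name
```

A common mistake is placing volumes at the level where a Pod spec would have them; in a CronJob they must be nested inside jobTemplate's pod template or the manifest fails validation.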

Can I rely on volumeClaimTemplates naming convention?

Submitted by 做~自己de王妃 on 2019-12-03 14:10:43
I want to set up a pre-defined PostgreSQL cluster on a bare-metal Kubernetes 1.7 with local PVs enabled. I have three worker nodes. I created a local PV on each node and deployed the StatefulSet successfully (with some complex scripting to set up Postgres replication). However, I noticed that there is a kind of naming convention between the volumeClaimTemplates and the PersistentVolumeClaims. For example:

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: postgres
volumeClaimTemplates:
  - metadata:
      name: pgvolume
```

The created PVCs are pgvolume-postgres-0, pgvolume-postgres-1, and pgvolume-postgres-2. With …
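The convention the question observes is exactly <template-name>-<statefulset-name>-<ordinal>, one claim per replica. A small sketch of that naming rule as plain string formatting (nothing Kubernetes-specific is imported):

```python
def statefulset_pvc_name(template_name: str, set_name: str, ordinal: int) -> str:
    """PVC name a StatefulSet controller derives for one replica:
    <volumeClaimTemplate name>-<StatefulSet name>-<pod ordinal>."""
    return f"{template_name}-{set_name}-{ordinal}"

# the three claims observed in the question, for replicas 0..2
names = [statefulset_pvc_name("pgvolume", "postgres", i) for i in range(3)]
print(names)  # ['pgvolume-postgres-0', 'pgvolume-postgres-1', 'pgvolume-postgres-2']
```

Because the name is deterministic, pre-creating PVs (or PVCs) that match these names is a workable way to pin each replica to a specific local volume.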

How to bind a PersistentVolumeClaim to a gcePersistentDisk?

Submitted by 老子叫甜甜 on 2019-12-03 13:46:36
Question: I would like to bind a PersistentVolumeClaim to a gcePersistentDisk PersistentVolume. Below are the steps I followed to get there:

1. Create the gcePersistentDisk:

```shell
gcloud compute disks create --size=2GB --zone=us-east1-b gce-nfs-disk
```

2. Define the PersistentVolume and the PersistentVolumeClaim:

```yaml
# pv-pvc.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: gce-nfs-disk
    fsType: ext4
---
```
…
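The snippet cuts off at the --- separator, so the claim half is missing. A claim that would bind to a static PV like this one typically pins storageClassName to "" (opting out of dynamic provisioning) and requests a matching size and access mode. A sketch only; the name nfs-pvc is an assumption continuing the example's naming:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""   # bind to a pre-created PV instead of provisioning a new one
  resources:
    requests:
      storage: 2Gi
```

If the claim requests more than the PV's capacity, or a different access mode, binding never happens and the claim stays Pending.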