Google Kubernetes Engine (GKE) cluster `error while creating mount source path` due to `read-only file system`

耗尽温柔 submitted on 2021-01-28 12:46:14

Question


I have a container with the following configuration:

spec:
  template:
    spec:
      restartPolicy: OnFailure
      volumes:
        - name: local-src
          hostPath:
            path: /src/analysis/src
            type: DirectoryOrCreate
      containers:
        - securityContext:
            privileged: true
            capabilities:
              add:
                - SYS_ADMIN
  • Note that I'm intentionally omitting some other configuration parameters to keep the question short

However, when I deploy it to my GKE cluster on Google Cloud, I see the following error:

Error: failed to start container "market-state": Error response from daemon: error while creating mount source path '/src/analysis/src': mkdir /src: read-only file system

I have tried deploying the exact same job locally with minikube and it works fine.

My guess is that this has to do with the pod's permissions relative to the host, but I expected it to work given the SYS_ADMIN capability that I'm setting. When creating my cluster, I gave it a devstorage.read_write scope for other reasons, but I'm wondering whether there are other scopes I need as well?

gcloud container clusters create my_cluster \
    --zone us-west1-a \
    --node-locations us-west1-a \
    --scopes=https://www.googleapis.com/auth/devstorage.read_write



Answer 1:


IIUC, if your cluster is using Container-Optimized VMs, you'll need to be aware of the structure of the file system for these instances.

See https://cloud.google.com/container-optimized-os/docs/concepts/disks-and-filesystem




Answer 2:


As pointed out by user @DazWilkin:

IIUC, if your cluster is using Container-Optimized VMs, you'll need to be aware of the structure of the file system for these instances.

See https://cloud.google.com/container-optimized-os/docs/concepts/disks-and-filesystem

This is the correct understanding. You can't write to a read-only location like / (even with the SYS_ADMIN capability and the privileged flag), because:

The root filesystem is mounted as read-only to protect system integrity. However, home directories and /mnt/stateful_partition are persistent and writable.

-- Cloud.google.com: Container optimized OS: Docs: Concepts: Disk and filesystem: Filesystem

As a workaround, you can either change the hostPath location to a writable path on the node (a sketch follows the link below), or use GKE nodes with Ubuntu images instead of Container-Optimized OS images, where you will be able to use hostPath volumes with the path from your question. You can read more about the available node images in the official documentation:

  • Cloud.google.com: Kubernetes Engine: Node images
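For example, a minimal sketch of the volume definition pointed at one of the writable locations mentioned above (the analysis/src subdirectory is only an illustration, not a path from the documentation):

volumes:
  - name: local-src
    hostPath:
      # /mnt/stateful_partition is writable on Container-Optimized OS nodes;
      # the subdirectory is only an example
      path: /mnt/stateful_partition/analysis/src
      type: DirectoryOrCreate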

If your workload/use case allows using Persistent Volumes, I encourage you to do so.

PersistentVolume resources are used to manage durable storage in a cluster. In GKE, PersistentVolumes are typically backed by Compute Engine persistent disks.

<--->

PersistentVolumes are cluster resources that exist independently of Pods. This means that the disk and data represented by a PersistentVolume continue to exist as the cluster changes and as Pods are deleted and recreated. PersistentVolume resources can be provisioned dynamically through PersistentVolumeClaims, or they can be explicitly created by a cluster administrator.

-- Cloud.google.com: Kubernetes Engine: Persistent Volumes
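As an illustration, a minimal sketch of dynamically provisioning such a volume with a PersistentVolumeClaim and mounting it in place of the hostPath volume (the claim name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: analysis-src-pvc   # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi        # example size; on GKE this is backed by a Compute Engine persistent disk

and in the Pod spec:

volumes:
  - name: local-src
    persistentVolumeClaim:
      claimName: analysis-src-pvc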

You can also consider the Local SSD solution, which can use a hostPath type of Volume (a sketch follows the link below):

  • Cloud.google.com: Kubernetes Engine: Persistent Volumes: Local SSD
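If you go this route, a rough sketch, assuming a node pool created with the --local-ssd-count flag (the linked documentation describes local SSDs as being mounted on the node under /mnt/disks/ssd0):

$ gcloud container clusters create CLUSTER_NAME \
    --zone us-west1-a \
    --local-ssd-count 1

volumes:
  - name: local-src
    hostPath:
      path: /mnt/disks/ssd0   # mount point of the first local SSD on the node
      type: Directory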

When creating my cluster, I gave it a devstorage.read_write scope for other reasons, but I'm wondering whether there are other scopes I need as well?

You can create a GKE cluster without adding any additional scopes, for example:

  • $ gcloud container clusters create CLUSTER_NAME --zone=ZONE

The --scopes=SCOPE flag will depend on the workload you intend to run. You can assign scopes that grant access to specific Cloud Platform services (like Cloud Storage, for example).

You can read more about it in the gcloud online manual:

  • Cloud.google.com: SDK: Gcloud: Container: Clusters: Create: Scopes
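For example, a sketch of passing a Cloud Storage scope at cluster creation time using a gcloud scope alias (storage-rw is the alias for the devstorage.read_write URL used in the question):

$ gcloud container clusters create CLUSTER_NAME \
    --zone=ZONE \
    --scopes=storage-rw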

To add to the topic of authentication to Cloud Platform services:

There are three ways to authenticate to Google Cloud services using service accounts from within GKE:

  1. Use Workload Identity

Workload Identity is the recommended way to authenticate to Google Cloud services from GKE. Workload Identity allows you to configure Google Cloud service accounts using Kubernetes resources. If this fits your use case, it should be your first option. This example is meant to cover use cases where Workload Identity is not a good fit.

  2. Use the default Compute Engine Service Account

Each node in a GKE cluster is a Compute Engine instance. Therefore, applications running on a GKE cluster by default will attempt to authenticate using the "Compute Engine default service account", and inherit the associated scopes.

This default service account may or may not have permissions to use the Google Cloud services you need. It is possible to expand the scopes for the default service account, but that can create security risks and is not recommended.

  3. Manage Service Account credentials using Secrets

Your final option is to create a service account for your application, and inject the authentication key as a Kubernetes secret. This will be the focus of this tutorial.

-- Cloud.google.com: Kubernetes Engine: Authenticating to Cloud Platform
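As a rough sketch of option 1, binding a Kubernetes service account to a Google service account through Workload Identity (the cluster, project, namespace, and account names are all placeholders):

# Enable Workload Identity on an existing cluster
$ gcloud container clusters update CLUSTER_NAME \
    --zone=ZONE \
    --workload-pool=PROJECT_ID.svc.id.goog

# Allow the Kubernetes service account to impersonate the Google service account
$ gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# Point the Kubernetes service account at the Google service account
$ kubectl annotate serviceaccount KSA_NAME \
    --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com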



Source: https://stackoverflow.com/questions/64431454/google-kubernetes-engine-gke-cluster-error-while-creating-mount-source-path
