google-container-os

Docker distroless image how to add customize certificate to trust store?

限于喜欢 submitted on 2020-12-29 16:44:26
Question: gcr.io/distroless/java — how do I add a custom PKI certificate?

Answer 1: The distroless images are based on Debian 9, so you can do a multi-stage build along the lines of:

FROM debian AS build-env
# Add CA files
ADD my-ca-file.crt /usr/local/share/ca-certificates/my-ca-file.crt
RUN update-ca-certificates

FROM gcr.io/distroless/base
COPY --from=build-env /etc/ssl/certs /etc/ssl/certs

Source: https://stackoverflow.com/questions/52636213/docker-distroless-image-how-to-add-customize
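The answer above copies the OS trust store, but the question asks about gcr.io/distroless/java, and the JVM reads its own cacerts keystore rather than /etc/ssl/certs. A sketch of a keytool-based variant (untested; "my-ca-file.crt" is the file name from the answer, and /app.jar plus the Java 11 tags are illustrative assumptions):

```dockerfile
# Sketch: import the custom CA into a copy of the JDK's default keystore
# in the build stage, then point the distroless JVM at that copy.
FROM openjdk:11-slim AS build-env
COPY my-ca-file.crt /tmp/my-ca-file.crt
# "changeit" is the JDK's default keystore password
RUN cp "$JAVA_HOME/lib/security/cacerts" /tmp/cacerts && \
    keytool -importcert -noprompt -trustcacerts \
      -alias my-ca -file /tmp/my-ca-file.crt \
      -keystore /tmp/cacerts -storepass changeit

FROM gcr.io/distroless/java:11
COPY --from=build-env /tmp/cacerts /etc/ssl/certs/java/cacerts
# Tell the JVM to use the augmented keystore (app.jar is a placeholder)
ENTRYPOINT ["java", \
  "-Djavax.net.ssl.trustStore=/etc/ssl/certs/java/cacerts", \
  "-jar", "/app.jar"]
```

Copying the keystore to a fixed path and passing `-Djavax.net.ssl.trustStore` avoids having to guess the distroless image's internal JAVA_HOME layout.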

compute engine startup script can't execute as a non-root user

限于喜欢 submitted on 2020-04-10 08:36:07
Question: Boiling my issue down to the simplest case, I'm using Compute Engine with the following startup script:

#! /bin/bash
sudo useradd -m drupal
su drupal
cd /home/drupal
touch test.txt

I can confirm that the drupal user exists after this runs, and so does the test file. However, I expect the owner of the test file to be drupal (hence the su). When I use this as a startup script, I can confirm that root is still the owner of the file, meaning my su drupal did not work. sudo su drupal also does not
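The reason the script above fails is that `su drupal` starts a *child* shell; the remaining script lines are not typed into it — they run only after that child exits, still as root. The usual fix is to run the commands as the user in one shot, e.g. `su drupal -c 'cd /home/drupal && touch test.txt'` or `sudo -u drupal touch /home/drupal/test.txt` (standard forms, not tested against this exact image). The same scoping effect can be demonstrated with a plain subshell:

```shell
#!/bin/sh
# 'su user' in a script behaves like a subshell: state changes inside it
# do not affect the lines that follow. Shown here with 'cd':
cd /
(cd /tmp)        # directory change happens only inside the child shell
echo "$PWD"      # the parent shell is unaffected: prints /
```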

RBAC - Limit access for one service account

与世无争的帅哥 submitted on 2020-01-07 06:35:11
Question: I want to limit the permissions of the following service account, which I created as follows:

kubectl create serviceaccount alice --namespace default
secret=$(kubectl get sa alice -o json | jq -r .secrets[].name)
kubectl get secret $secret -o json | jq -r '.data["ca.crt"]' | base64 -d > ca.crt
user_token=$(kubectl get secret $secret -o json | jq -r '.data["token"]' | base64 -d)
c=`kubectl config current-context`
name=`kubectl config get-contexts $c | awk '{print $3}' | tail -n 1`
endpoint=`kubectl
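The excerpt above builds a kubeconfig around the alice service account; limiting what that account can actually do is then a matter of RBAC objects bound to it. A minimal sketch (the `pod-reader` name and the read-only pods rule are illustrative assumptions, not from the question):

```yaml
# Sketch: grant the 'alice' service account read-only access to pods
# in the 'default' namespace, and nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: alice-pod-reader
subjects:
- kind: ServiceAccount
  name: alice
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applied with `kubectl apply -f`, this gives tokens for the alice account only the verbs listed in the Role.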

Docker container for google cloudML on compute engine - authenticating for mounting bucket

爷,独闯天下 submitted on 2019-12-12 13:51:22
Question: I have been working with Google's machine learning platform, Cloud ML. Big picture: I'm trying to figure out the cleanest way to get their Docker environment up and running on Google Compute Engine instances, with access to the Cloud ML API and my storage bucket. Starting locally, I have my service account configured:

C:\Program Files (x86)\Google\Cloud SDK>gcloud config list
Your active configuration is: [service]
[compute]
region = us-central1
zone = us-central1-a
[core]
account = 773889352370
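One common way (an assumption here — the truncated question doesn't confirm the asker's setup) to give a local container the same credentials as the host SDK is to bind-mount gcloud's configuration directory, which lives at a fixed default path on Linux/macOS:

```shell
#!/bin/sh
# Default location of gcloud's credentials/config on the host:
cfg="$HOME/.config/gcloud"
echo "$cfg"
# Reuse them inside a container (command shown, not executed here):
#   docker run -ti -v "$cfg:/root/.config/gcloud" \
#       google/cloud-sdk:alpine gcloud auth list
```

On Windows (the asker's `C:\Program Files (x86)\...` prompt suggests that), gcloud keeps its configuration under `%APPDATA%\gcloud` instead, so the mounted path would differ.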

Access Google Cloud service account credentials on Container OS inside Docker Container

无人久伴 submitted on 2019-12-05 15:23:24
Question: Using the Container-Optimized OS (COS) on Google Cloud Compute Engine, what's the best way to access the credentials of the VM project's default service account from within a Docker container?

$ gcloud compute instances create test-instance \
    --image=cos-stable --image-project=cos-cloud
$ ssh (ip of the above)
# gcloud ...
Command not found
# docker run -ti google/cloud-sdk:alpine /bin/sh
# gcloud auth activate-service-account ...
--key-file: Must be specified.

If the credentials were on the VM, then Docker could just mount those. Ordinarily credentials would be in .config/gcloud/, and do this
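On a GCE/COS instance itself, a key file is usually unnecessary: any process on the VM — including one inside a container with default networking — can fetch access tokens for the VM's default service account from the metadata server. A sketch (the endpoint is the standard GCE metadata URL; the curl call is not executed against a live VM here):

```shell
#!/bin/sh
# Standard GCE metadata endpoint for the default service account's token:
url="http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
echo "$url"
# On the VM (or in a container on it), the required header is mandatory:
#   curl -s -H 'Metadata-Flavor: Google' "$url"
```

Client libraries and gcloud use this same mechanism automatically when no explicit credentials are configured, which is why mounting a key file into the container is often avoidable.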

Container-VM Image with GPD Volumes fails with “Failed to get GCE Cloud Provider. plugin.host.GetCloudProvider returned <nil> instead”

这一生的挚爱 submitted on 2019-12-02 14:43:16
Question: I am currently trying to switch from the "Container-Optimized Google Compute Engine Images" (https://cloud.google.com/compute/docs/containers/container_vms) to the "Container-VM" image (https://cloud.google.com/compute/docs/containers/vm-image/#overview). In my containers.yaml, I define a volume and a container that uses the volume:

apiVersion: v1
kind: Pod
metadata:
  name: workhorse
spec:
  containers:
  - name: postgres
    image: postgres:9.5
    imagePullPolicy: Always
    volumeMounts:
    - name: postgres-storage
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: postgres-storage
    gcePersistentDisk:
      pdName: disk
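For reference, the complete shape of a gcePersistentDisk volume in a Pod spec looks like this — a sketch from the standard Kubernetes volume API, where `my-disk` is a placeholder for the disk name truncated in the question:

```yaml
volumes:
- name: postgres-storage
  gcePersistentDisk:
    pdName: my-disk   # name of an existing GCE persistent disk (placeholder)
    fsType: ext4      # filesystem the disk was formatted with
```

The disk must already exist in the same GCE project and zone as the node, and `fsType` must match how it was formatted.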
