Question
I have a CoreOS Docker host that I want to start running containers on, but when trying to use the docker command to fetch an image from the private Google Container Registry (https://cloud.google.com/tools/container-registry/), I get a 403. I did some searching, but I'm not sure how to attach authentication (or where to generate the user+pass bundle to use with the docker login command).
Has anybody had any luck pulling from the Google private registry? I can't install the gcloud command because CoreOS doesn't come with Python, which is a requirement.
docker run -p 80:80 gcr.io/prj_name/image_name
Unable to find image 'gcr.io/prj_name/image_name:latest' locally
Pulling repository gcr.io/prj_name/image_name
FATA[0000] HTTP code: 403
Update: after getting answers from @mattmoor and @Jesse:
The machine that I'm pulling from does have devstorage (read_only) access:
curl -H 'Metadata-Flavor: Google' http://metadata.google.internal./computeMetadata/v1/instance/service-accounts/default/scopes
https://www.googleapis.com/auth/bigquery
https://www.googleapis.com/auth/cloud-platform
https://www.googleapis.com/auth/compute
https://www.googleapis.com/auth/datastore
----> https://www.googleapis.com/auth/devstorage.read_only
https://www.googleapis.com/auth/logging.admin
https://www.googleapis.com/auth/sqlservice.admin
https://www.googleapis.com/auth/taskqueue
https://www.googleapis.com/auth/userinfo.email
Additionally, I tried using the _token login method:
jenkins@riskjenkins:/home/andre$ ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' 'http://metadata.google.internal./computeMetadata/v1/instance/service-accounts/default/token' | cut -d'"' -f 4)
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 142 100 142 0 0 14686 0 --:--:-- --:--:-- --:--:-- 15777
jenkins@riskjenkins:/home/andre$ echo $ACCESS_TOKEN
**************(redacted, but looks valid)
jenkins@riskjenkins:/home/andre$ docker login -e not@val.id -u _token -p $ACCESS_TOKEN http://gcr.io
Login Succeeded
jenkins@riskjenkins:/home/andre$ docker run gcr.io/prj_name/image_name
Unable to find image 'gcr.io/prj_name/image_name:latest' locally
Pulling repository gcr.io/prj_name/image_name
FATA[0000] HTTP code: 403
Answer 1:
The Google Container Registry authentication scheme is to simply use:
username: '_token'
password: {oauth access token}
On Google Compute Engine you can login without gcloud with:
$ METADATA=http://metadata.google.internal./computeMetadata/v1
$ SVC_ACCT=$METADATA/instance/service-accounts/default
$ ACCESS_TOKEN=$(curl -H 'Metadata-Flavor: Google' $SVC_ACCT/token \
| cut -d'"' -f 4)
$ docker login -e not@val.id -u '_token' -p $ACCESS_TOKEN https://gcr.io
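Putting those steps together, a minimal sketch of a reusable pull script for the CoreOS host (the script name, image path, and -e address are placeholders; no gcloud required):
#!/bin/bash
# gcr-pull.sh (hypothetical helper): fetch a short-lived OAuth access token
# from the GCE metadata server, log in to gcr.io with it, then pull the image.
# Usage: ./gcr-pull.sh gcr.io/prj_name/image_name
set -e
IMAGE="$1"
METADATA=http://metadata.google.internal./computeMetadata/v1
SVC_ACCT=$METADATA/instance/service-accounts/default
ACCESS_TOKEN=$(curl -s -H 'Metadata-Flavor: Google' "$SVC_ACCT/token" | cut -d'"' -f 4)
# '_token' is the literal username; the OAuth access token is the password.
docker login -e not@val.id -u '_token' -p "$ACCESS_TOKEN" https://gcr.io
docker pull "$IMAGE"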
Update on {asia,eu,us,b}.gcr.io
To access a repository hosted on one of these regional registries, you should log in to the appropriate hostname in the docker login command above.
Update on quotes around _token
As of Docker version 1.8, docker login requires the -u value to be quoted or to start with a letter.
Some diagnostic tips...
Check that you have the Cloud Storage scope via:
$ curl -H 'Metadata-Flavor: Google' $SVC_ACCT/scopes
...
https://www.googleapis.com/auth/devstorage.full_control
https://www.googleapis.com/auth/devstorage.read_write
https://www.googleapis.com/auth/devstorage.read_only
...
NOTE: "docker pull" requires "read_only", but "docker push" requires "read_write".
To give this robot access to a bucket in another project, there are a few steps.
First, find out the VM service account (aka robot)'s identity via:
$ curl -H 'Metadata-Flavor: Google' $SVC_ACCT/email
1234567890@developer.gserviceaccount.com
Next, there are three important ACLs to update:
1) Bucket ACL (needed to list objects, etc)
PROJECT_ID=correct-answer-42
ROBOT=1234567890@developer.gserviceaccount.com
gsutil acl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
2) Bucket Default ACL (template for future #3)
gsutil defacl ch -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
3) Object ACLs (only needed when the bucket is non-empty)
gsutil -m acl ch -R -u $ROBOT:R gs://artifacts.$PROJECT_ID.appspot.com
Part of why this isn't in our official documentation yet is that we want a better high-level story for it, but tl;dr we respect GCS ACLs.
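As a quick sanity check of the ACL changes above (not part of the original answer), you can read the ACLs back and confirm the robot shows up with READ access:
$ gsutil acl get gs://artifacts.$PROJECT_ID.appspot.com | grep -A 2 "$ROBOT"
$ gsutil defacl get gs://artifacts.$PROJECT_ID.appspot.com | grep -A 2 "$ROBOT"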
Answer 2:
The answers here deal with accessing docker from within a Google Compute Engine instance.
If you want to work with the Google Container Registry from a machine outside Google Compute Engine (i.e. locally) using vanilla Docker, you can follow Google's instructions.
The two main methods are using an access token or a JSON key file.
Note that _token and _json_key are the actual values you provide for the username (-u).
Access Token
$ docker login -e 1234@5678.com -u _token -p "$(gcloud auth print-access-token)" https://gcr.io
JSON Key File
$ docker login -e 1234@5678.com -u _json_key -p "$(cat keyfile.json)" https://gcr.io
To create a key file you can follow these instructions:
- Open the Credentials page.
- To set up a new service account, do the following:
  - Click Add credentials > Service account.
  - Choose whether to download the service account's public/private key as a standard P12 file, or as a JSON file that can be loaded by a Google API client library.
- Your new public/private key pair is generated and downloaded to your machine; it serves as the only copy of this key. You are responsible for storing it securely.
You can view Google's documentation on generating a key file here.
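Putting the key-file variant together, a minimal sketch of a non-interactive login and pull from a machine outside GCE (keyfile.json and the image path are placeholders; _json_key is the literal username and the key file contents are the password):
$ docker login -e 1234@5678.com -u _json_key -p "$(cat keyfile.json)" https://gcr.io
$ docker pull gcr.io/prj_name/image_name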
Answer 3:
There are two official ways:
$ docker login -e 1234@5678.com -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://gcr.io
$ docker login -e 1234@5678.com -u _json_key -p "$JSON_KEY" https://gcr.io
Note: The e-mail is not used, so you can put whatever you want in it.
Change gcr.io to whatever domain is shown in your Google Container Registry (e.g. eu.gcr.io).
Option (1) only gives a temporary token, so you probably want option (2). To get that $JSON_KEY:
- Go to API Manager > Credentials
- Click "Create credentials" > Service account key:
  - Service account: New service account
  - Name: Anything you want, like Docker Registry (read-only)
  - Role: Storage (scroll down) > Storage Object Viewer
  - Key type: JSON
- Download as keyfile.json
- JSON_KEY=$(cat keyfile.json | tr '\n' ' ')
- Now you can use it.
Once logged in you can just run docker pull. You can also copy the updated ~/.dockercfg to preserve the settings.
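For reference, the pre-1.7-style ~/.dockercfg that docker login writes is just a JSON map keyed by registry URL; a rough sketch with placeholder values (the auth field is base64 of username:password, here "_json_key:{ ... }" shortened):
$ cat ~/.dockercfg
{
  "https://gcr.io": {
    "auth": "X2pzb25fa2V5OnsgLi4uIH0=",
    "email": "1234@5678.com"
  }
}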
Answer 4:
When you created your VM, did you give it the necessary scopes to be able to read from the registry?
gcloud compute instances create INSTANCE \
  --scopes https://www.googleapis.com/auth/devstorage.read_write
If you did, no further authentication is required.
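If the VM already exists and you are not sure which scopes it was created with, the metadata query from the question works from inside the instance; from outside, one rough way to check (INSTANCE and ZONE are placeholders, and the grep window is arbitrary):
$ gcloud compute instances describe INSTANCE --zone ZONE | grep -A 12 serviceAccounts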
Answer 5:
There is an official Google Container Registry Auth Plugin published. You are welcome to try it and leave feedback/report issues.
Answer 6:
I have developed a Jenkins plugin that allows a slave running on GCE to log in to Google's registry using @mattmoor's solution. It might be useful to others. :)
It's available at https://github.com/Byclosure/gcr.io-login-plugin.
Source: https://stackoverflow.com/questions/29291576/access-google-container-registry-without-the-gcloud-client