gcloud

`bq` command line tool throws CERTIFICATE_VERIFY_FAILED

Submitted by 若如初见 on 2019-12-18 16:59:32

Question: Update (2019-02-07): the issue has now been fixed, so if you're still running into this, try `gcloud components update`. At some point during the past few months, my bq tool stopped working. Even a simple command shows this error: $ bq show BigQuery error in show operation: Cannot contact server. Please try again. Traceback (most recent call last): File "/opt/google-cloud-sdk/platform/bq/bigquery_client.py", line 685, in BuildApiClient response_metadata, discovery_document = http …
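
Per the update at the top of the question, the fix is to refresh the SDK components; a minimal sketch:

```shell
# Update all installed Cloud SDK components to pick up the certificate fix
gcloud components update

# Then re-try the command that was failing
bq show
```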

ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255]

Submitted by 时光怂恿深爱的人放手 on 2019-12-18 12:09:51

Question: I kept getting kicked out of my Compute Engine instance after a few seconds of idle time with the indicated error (255). I used 'gcloud compute ssh' to log in. I am using the default firewall settings, which I believe should be good enough for ssh. But if I am missing something, please point it out and suggest a fix for this error. Basically I can't get any efficient work done at this point, having to ssh in so many times. Thanks in advance. Anh- Answer 1: gcloud denies an ssh connection if there was a …
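
The accepted answer is truncated above; a common workaround for idle-time disconnects (an assumption, not necessarily the fix the answer gives) is to send SSH keepalives. `gcloud compute ssh` passes extra flags to the underlying ssh client after `--`; the instance name is a placeholder:

```shell
# Send a keepalive every 30 seconds, tolerating up to 5 missed replies,
# so the connection is not dropped as idle
gcloud compute ssh my-instance -- -o ServerAliveInterval=30 -o ServerAliveCountMax=5
```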

tensorflow serving prediction not working with object detection pets example

Submitted by 两盒软妹~` on 2019-12-18 07:36:21

Question: I was trying to run predictions on gcloud ml-engine with the TensorFlow object detection pets example, but it doesn't work. I created a checkpoint using this example: https://github.com/tensorflow/models/blob/master/object_detection/g3doc/running_pets.md With the help of the TensorFlow team, I was able to create a saved_model to upload to the gcloud ml-engine: https://github.com/tensorflow/models/issues/1811 Now I can upload the model to the gcloud ml-engine. But unfortunately, I'm not able …
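
For context, the prediction step that fails would look roughly like this (a sketch; the model name and input file are placeholders, and the exact failure in the question is cut off above):

```shell
# Request an online prediction from a model deployed to ml-engine;
# instances.json holds one JSON-encoded input per line
gcloud ml-engine predict --model my_pets_model --json-instances instances.json
```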

Google Container Registry access denied when pushing docker container

Submitted by 半世苍凉 on 2019-12-18 05:43:35

Question: I am trying to push my docker container to the Google Container Registry, using this tutorial, but when I run gcloud docker push b.gcr.io/my-bucket/image-name I get this error: The push refers to a repository [b.gcr.io/my-bucket/my-image] (len: 1) Sending image list Error: Status 403 trying to push repository my-bucket/my-image: "Access denied." I couldn't find any further explanation (no -D, --debug, --verbose arguments were recognized); gcloud auth list and docker info tell me I'm connected to both …
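
A 403 on push usually means Docker is not presenting gcloud's credentials to the registry. On SDK versions newer than the one in the question, the usual fix is to register gcloud as a Docker credential helper (a sketch; the project and image names are placeholders):

```shell
# Wire Docker up to use gcloud credentials for gcr.io registries
gcloud auth configure-docker

# Push with the credentials now attached
docker push gcr.io/my-project/image-name
```

The account shown by `gcloud auth list` also needs write access (e.g. Storage permissions on the backing bucket) in the target project.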

Unable to start App Engine application after updating it via Google Cloud SDK

Submitted by 做~自己de王妃 on 2019-12-18 04:00:40

Question: Recently I updated Google App Engine from 1.9.17 to 1.9.18 via the Google Cloud SDK by running 'gcloud components update' on Windows 7 64-bit. After that I wasn't able to start any project using the App Engine Launcher. I get this error: Traceback (most recent call last): File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\dev_appserver.py", line 83, in <module> _run_file(__file__, globals()) File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk …
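
When a component update leaves the SDK in a broken state like this, one recovery path (an assumption, since the answer is not shown above) is to reinstall the components at the current version:

```shell
# Reinstall the Cloud SDK components at their current version,
# repairing files that a partial update may have corrupted
gcloud components reinstall
```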

`gcloud app deploy` vs. `appcfg.py` [closed]

Submitted by 旧巷老猫 on 2019-12-17 18:55:21

Question: [Closed: this question needs to be more focused and is not currently accepting answers; closed 3 years ago.] I've been a long-time user of appcfg.py and I have even built some bash scripts on top of it. Should we switch to gcloud app deploy? Will appcfg.py be deprecated? If yes, what is the timeline? Why isn't there a grace period for backward compatibility of the yaml file? Switching to …
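
For migration purposes, the two tools' basic deploy invocations map roughly one-to-one (appcfg.py was in fact later deprecated in favor of gcloud):

```shell
# Legacy deployment with the standalone SDK tool
appcfg.py update app.yaml

# Equivalent deployment with the Cloud SDK
gcloud app deploy app.yaml
```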

Google Cloud - Wrong project id being used from different email address

Submitted by 风流意气都作罢 on 2019-12-13 16:16:04

Question: Despite running gcloud auth application-default login and gcloud config set core/project CORRECT_PROJECT_ID, the project keeps defaulting to an incorrect project id: gcloud config list [core] account = CORRECT_EMAIL disable_usage_reporting = True project = CORRECT_PROJECT_ID Your active configuration is: [default] I can successfully run the sample code from the tutorial (below) if I run export GOOGLE_APPLICATION_CREDENTIALS="[PATH]" in the terminal. However, I didn't want to have to do this …
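
The underlying issue is that Application Default Credentials carry their own project, separate from the gcloud config. One way to keep the two in sync without exporting GOOGLE_APPLICATION_CREDENTIALS (a sketch, using a flag available on newer SDKs):

```shell
# Point both the gcloud config and the ADC quota project
# at the same project id
gcloud config set project CORRECT_PROJECT_ID
gcloud auth application-default set-quota-project CORRECT_PROJECT_ID
```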

How do I resolve a Pickling Error on class apache_beam.internal.clients.dataflow.dataflow_v1b3_messages.TypeValueValuesEnum?

Submitted by 血红的双手。 on 2019-12-13 14:07:15

Question: A PicklingError is raised when I run my data pipeline remotely: the pipeline is written with the Beam SDK for Python and I am running it on Google Cloud Dataflow. The pipeline works fine when I run it locally. The following code generates the PicklingError and ought to reproduce the problem: import apache_beam as beam from apache_beam.transforms import pvalue from apache_beam.io.fileio import _CompressionType from apache_beam.utils.options import PipelineOptions from …
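
A frequent cause of local-only success with Beam is that module-level imports and objects are not shipped to the remote workers. A common remedy (an assumption here, since the answer is truncated) is the `--save_main_session` pipeline option; the script name, project, and bucket below are placeholders:

```shell
# Pickle the main session so module-level state is available
# on the Dataflow workers
python my_pipeline.py \
  --runner DataflowRunner \
  --project my-project \
  --temp_location gs://my-bucket/tmp \
  --save_main_session
```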

Google App Engine, Google Cloud Console, Jenkins trigger builds remotely, gcloud command not found

Submitted by 梦想的初衷 on 2019-12-13 05:59:41

Question: I'm trying to deploy my GAE app remotely via a URL, and this part works nicely. Jenkins checks out the latest revision correctly, but when I try to build with the command specified in the Google Cloud help: gcloud --project=<project-id> preview app deploy -q app.yaml I get the following error message: [workspace] $ /bin/sh -xe /opt/bitnami/apache-tomcat/temp/hudson7352698921882428590.sh + gcloud --project=XYZXYZXYZ preview app deploy -q app.yaml /opt/bitnami/apache-tomcat/temp …
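
A "gcloud: command not found" in a Jenkins shell step usually means the build runs with a minimal PATH that does not include the Cloud SDK. A typical fix at the top of the build script (the install path is an assumption for this Bitnami image):

```shell
# Make the Cloud SDK visible to the non-interactive Jenkins shell
export PATH="$PATH:/opt/google-cloud-sdk/bin"

gcloud --project=my-project preview app deploy -q app.yaml
```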

argument --max-dispatches-per-second: invalid float value: '6/m'

Submitted by 天涯浪子 on 2019-12-13 03:44:18

Question: I am using Cloud Tasks and want to set maxDispatchesPerSecond to 6/m. When I try to update my App Engine queue with the command below: gcloud beta tasks queues update-app-engine-queue cloud-tasks-rate-limit --max-dispatches-per-second='6/m' ERROR: (gcloud.beta.tasks.queues.update-app-engine-queue) argument --max-dispatches-per-second: invalid float value: '6/m' Usage: gcloud beta tasks queues update-app-engine-queue QUEUE [optional flags] optional flags may be --clear-max …
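
The flag takes a plain float in dispatches per second, not a queue.yaml-style rate string like '6/m'. Six per minute is 6/60 = 0.1 per second:

```shell
# 6 dispatches per minute expressed as a per-second float
gcloud beta tasks queues update-app-engine-queue cloud-tasks-rate-limit \
  --max-dispatches-per-second=0.1
```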