gcloud

Error when submitting the gcloud task to Google Cloud ML Engine

社会主义新天地 submitted on 2019-12-11 15:16:32
Question: I am new to Google Cloud ML Engine. I would like to submit a Keras model to the cloud for training, but I always get this error:
    I master-replica-0 Running module trainer.bot. master-replica-0
    I master-replica-0 Downloading the package: gs://zadravecm-bot/jobs/test_job4/packages/84f3c60920e885020405e1eb7afa5f509313d2a5406a1f1551a81b81993ac66c/trainer-1.0.tar.gz master-replica-0
    I master-replica-0 Running command: gsutil -q cp gs://zadravecm-bot/jobs/test_job4/packages
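For orientation, a job like this is normally submitted with gcloud ml-engine jobs submit training, pointing at the trainer package that the service then stages and downloads. The sketch below is only a hedged illustration: the staging bucket is taken from the log above, while the job name, region and runtime version are assumptions.
    # Illustrative submission only; job name, region and runtime version are placeholders.
    gcloud ml-engine jobs submit training keras_bot_job \
      --package-path trainer \
      --module-name trainer.bot \
      --staging-bucket gs://zadravecm-bot \
      --region us-central1 \
      --runtime-version 1.4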

Configuring two services on the same domain in dispatch.yaml

試著忘記壹切 submitted on 2019-12-11 15:11:50
Question: Battling to get this to work. I have an app composed of two services: a frontend in Angular and a backend in Node. My dispatch.yaml contains:
    dispatch:
    - url: "<frontend-app>-dot-apt-aleph-767.appspot.com/"
      service: <frontend-app>
    - url: "<frontend-app>-dot-apt-aleph-767.appspot.com/backend/"
      service: <backend-app>
This is the output from gcloud app describe:
    dispatchRules:
    - domain: <frontend-app>-dot-apt-aleph-767.appspot.com
      path: /
      service: <frontend-app>
    - domain: <frontend-app>-dot-apt-aleph-767.appspot.com
      path:
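Not an answer to the routing question itself, but a quick way to confirm what App Engine actually recorded after each change is to redeploy the dispatch file and dump only the dispatch rules. These are standard gcloud commands; the --format projection is just a convenience and can be dropped.
    # Redeploy the dispatch rules, then show only the dispatchRules section.
    gcloud app deploy dispatch.yaml --quiet
    gcloud app describe --format="yaml(dispatchRules)"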

How to identify gcloud errors in scripts

老子叫甜甜 submitted on 2019-12-11 14:42:26
Question: Suppose I want to delete some of my GCP project resources using gcloud. If I have a record of their names, I can delete them all in a single bash/node/python script. The problem is I need to be able to distinguish "OK" errors from those that aren't. For example, if I delete a resource that doesn't exist, gcloud reports an error and my code has no reliable way of determining this was a 404. In this case a 404 is good: I wanted the resource to be gone and it's gone. How do I reliably determine
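One common pattern (a sketch, not the only way) is to capture gcloud's stderr and exit status, then treat a "not found" failure on delete as success. The instance name, zone and the matched error text are assumptions; the exact wording of gcloud's message can differ between commands.
    # Sketch: tolerate "already deleted" while still failing on real errors.
    err=$(gcloud compute instances delete my-instance --zone us-central1-a --quiet 2>&1)
    status=$?
    if [ $status -ne 0 ]; then
      if echo "$err" | grep -qiE "not found|404"; then
        echo "resource already gone; treating as success"
      else
        echo "delete failed: $err" >&2
        exit $status
      fi
    fi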

Unable to run application in Google Compute Engine VM

孤街醉人 submitted on 2019-12-11 14:33:59
Question: I have a Node.js application which runs correctly on localhost, but not in the Compute Engine VM. Here is a snippet:
    try {
      gcloud = require('gcloud');
      var storage = gcloud.storage({ projectId: 'project-id' });
      var bucket = storage.bucket('my-bucket');
      bucket.file(src_file).createReadStream().pipe(fs.createWriteStream(src_file));
    } catch (e) {
      e = 'Error loading required classes for gcloud: ' + gcloud + ': ' + e;
      console.log(e);
      res.status(200).send(e);
    }
When I run this code I get: undefined: Error: /app
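The catch block only fires when require('gcloud') itself fails, so a first thing to check on the VM (a guess, not a confirmed diagnosis) is whether the legacy gcloud npm module and its native dependencies are actually installed and built there. The /app path is an assumption.
    # On the VM: confirm the module is installed for this app and rebuild
    # any native add-ons that were compiled for a different environment.
    cd /app
    npm ls gcloud || npm install gcloud --save
    npm rebuild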

Gcloud ML-Engine Prediction Error OOM 429

末鹿安然 submitted on 2019-12-11 13:23:51
Question: I'm getting the following error when trying to use gcloud ml-engine predict:
    ERROR: (gcloud.ml-engine.predict) HTTP request failed. Response: {
      "error": {
        "code": 429,
        "message": "Prediction server is out of memory, possibly because model size is too big.",
        "status": "RESOURCE_EXHAUSTED"
      }
    }
My model size is 151 MB, and I'm using TensorFlow 1.4, which does not require a variables folder. When performing prediction it uses over 2 GB. I'm using a modified version of Inception.
Answer 1:
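For context, the two calls involved are sketched below: creating the model version (where the SavedModel location and runtime version are fixed) and the online prediction call that returns the 429. Model, version and file names are placeholders, not taken from the question.
    # Placeholders throughout; shown only to make the moving parts explicit.
    gcloud ml-engine versions create v1 \
      --model my_inception_model \
      --origin gs://my-bucket/export/ \
      --runtime-version 1.4
    gcloud ml-engine predict \
      --model my_inception_model \
      --version v1 \
      --json-instances instances.json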

(gcloud.preview.app.deploy) Error Response: [400] “env” setting is not supported for this deployment

谁都会走 submitted on 2019-12-11 12:44:22
Question: I followed this tutorial https://cloud.google.com/tools/cloud-repositories/docs/push-to-deploy and ran mvn gcloud:deploy, but got the error messages below:
    [dev-jenkins-test-1] $ /bin/sh -xe /tmp/hudson4310631253025446569.sh
    + mvn gcloud:deploy
    [INFO] Scanning for projects...
    [INFO]
    [INFO] ------------------------------------------------------------------------
    [INFO] Building jenkins-test-java 1.0-SNAPSHOT
    [INFO] ------------------------------------------------------------------------
    [INFO]
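One avenue worth ruling out before digging into the Maven plugin (an assumption, not a confirmed cause) is a stale or mismatched Cloud SDK on the Jenkins agent, since the deployment API rejects settings it does not recognise. The checks below are standard gcloud commands.
    # Sanity checks on the build agent before re-running mvn gcloud:deploy.
    gcloud --version
    gcloud components list
    gcloud components update --quiet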

Gcloud preview app can't parse my yaml

二次信任 submitted on 2019-12-11 11:07:11
Question: I'm trying to get the gcloud command to work so I can run it in Jenkins, but I'm having trouble. I'm running:
    gcloud --project=hv-match preview app deploy -q app.yaml --promote --verbosity debug --bucket gs://hv-match.appspot.com --version=1
And that produces this:
    DEBUG: Running gcloud.preview.app.deploy with Namespace(__calliope_internal_deepest_parser=ArgumentParser(prog='gcloud.preview.app.deploy', usage=None, description="*(BETA)* This command is used to deploy both code and confi
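Before blaming the gcloud parser, it can help to confirm that app.yaml is well-formed YAML at all. The one-liner below assumes a Python interpreter with PyYAML available on the Jenkins machine; it is only a quick structural check.
    # Quick check that app.yaml parses as YAML (assumes PyYAML is installed).
    python -c "import yaml; yaml.safe_load(open('app.yaml')); print('app.yaml is valid YAML')"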

Error when submitting training job to gcloud

不问归期 submitted on 2019-12-11 10:48:00
Question: I am new to training on Google Cloud. When I run the training job, I get the following error:
    (gcloud.ml-engine.jobs.submit.training) Could not copy [research/dist/object_detection-0.1.tar.gz] to [training/packages/c5292b23e57f357dc2d63baab473c04337dbadd2deeb10965e743cd8422b964f/object_detection-0.1.tar.gz]. Please retry: HTTPError 404: Not Found
I am using this to run the training job:
    gcloud ml-engine jobs submit training job1 \
      --job-dir=gs://${ml-project-neu}/training \
      --packages
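One detail that stands out (a guess from the snippet, not a confirmed diagnosis): ${ml-project-neu} is not a valid bash variable reference, because variable names cannot contain hyphens. Bash reads it as "expand $ml, or fall back to the literal project-neu if ml is unset", so --job-dir may point at a bucket that does not exist, which would explain the 404. A sketch with an underscore-named variable; the bucket, module name and region are placeholders.
    # Variable names with hyphens are not expanded the way you might expect:
    # ${ml-project-neu} becomes the literal "project-neu" when $ml is unset.
    ML_PROJECT_NEU=my-training-bucket   # placeholder bucket name
    gcloud ml-engine jobs submit training job1 \
      --job-dir="gs://${ML_PROJECT_NEU}/training" \
      --packages research/dist/object_detection-0.1.tar.gz \
      --module-name object_detection.train \
      --region us-central1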

Gcloud components update -> can't find import

时光总嘲笑我的痴心妄想 submitted on 2019-12-11 09:57:08
Question: Our code base has been compiling just fine up until now. Today, gcloud started pestering me with its update message again, so I ran "gcloud components update" and it updated successfully. However, now when I try to deploy our project using "gcloud preview app deploy .", I get the following error:
    can't find import: "github.com/dgrijalva/jwt-go"
The line hasn't changed since it was deploying properly before the update. I've already tried "go get -u github.com/dgrijalva/jwt-go", which
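Since go get alone did not help, one workaround sometimes used with the classic App Engine Go SDK (an assumption about the project layout, not a verified fix) is to copy the dependency into the application directory so the deploy tool can resolve the import without consulting the outer GOPATH.
    # Assumes the app root is the current directory and the package has
    # already been fetched with `go get`.
    mkdir -p github.com/dgrijalva
    cp -r "$GOPATH/src/github.com/dgrijalva/jwt-go" github.com/dgrijalva/
    gcloud preview app deploy .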

How to set up timezones in a GKE Pod

断了今生、忘了曾经 submitted on 2019-12-11 08:29:23
Question: I deployed a Linux Docker image to a gcloud GKE pod. I added the code below to the Dockerfile, trying to set the time zone. It works correctly in local Docker, but not in the GKE pod: locally the time zone is PST, while in the GKE pod it is still UTC. Please help!
    ENV TZ=America/Los_Angeles
    RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
Answer 1: I'm not sure how this is working on your local environment. Looks like you are missing (Ubuntu
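If rebuilding the image is awkward, the TZ variable can also be set on the GKE workload itself. The sketch below is illustrative, with placeholder deployment and pod names; note that the container image still needs the zoneinfo files (the tzdata package) for the value to take effect.
    # Set TZ on the workload rather than in the image; names are placeholders.
    kubectl set env deployment/my-deployment TZ=America/Los_Angeles
    # Verify inside a running pod (replace my-pod with an actual pod name).
    kubectl exec my-pod -- date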