google-cloud-ml

SSL: no alternative certificate subject name matches target host name $name.storage.googleapis.com

会有一股神秘感。 Submitted on 2019-12-11 06:27:01
Question: I want to run a TensorFlow training script on Google Cloud ML. One of the buckets it reads is from an external project. I created a Cloud ML Engine service account and added it as a user to this external project. After that, I executed the following command in my terminal with a gcloud-initialised project: `gcloud auth activate-service-account --key-file=my-service-acc-key.json`. I then submitted my job as: `gcloud ml-engine jobs submit training ..arguments`. The job was submitted successfully and was running until
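For context on the error in this entry's title: a TLS wildcard certificate such as `*.storage.googleapis.com` matches only a single DNS label, so a bucket whose name itself contains a dot (e.g. `my.bucket`) fails hostname verification when addressed as `my.bucket.storage.googleapis.com`. A common workaround is the path-style URL. A minimal sketch (the helper name is my own, not from the question):

```python
def gcs_https_url(bucket: str, obj: str) -> str:
    """Build an HTTPS URL for a GCS object.

    Virtual-hosted style (<bucket>.storage.googleapis.com) only matches the
    wildcard certificate *.storage.googleapis.com when the bucket name has
    no dots of its own, so fall back to path style for dotted names.
    """
    if "." in bucket:
        return f"https://storage.googleapis.com/{bucket}/{obj}"
    return f"https://{bucket}.storage.googleapis.com/{obj}"
```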

Deploying and predicting the tensorflow for poets on google-cloud-ml

[亡魂溺海] Submitted on 2019-12-11 06:23:17
Question: I was able to deploy TensorFlow for Poets onto the Cloud ML Engine by creating a SavedModel using this script by rhaertel80:

```python
import tensorflow as tf
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import builder as saved_model_builder

input_graph = 'retrained_graph.pb'
saved_model_dir = 'my_model'

with tf.Graph().as_default() as graph:
    # Read in the export graph
    with tf.gfile.FastGFile
```

Hyperparameter tuning locally — Tensorflow Google Cloud ML Engine

吃可爱长大的小学妹 Submitted on 2019-12-11 06:08:50
Question: Is it possible to tune hyperparameters using ML Engine while training the model locally? The documentation only mentions hyperparameter tuning for training in the cloud (submitting a job), and makes no mention of doing so locally. Alternatively, is there another commonly used hyperparameter-tuning approach that passes command-line arguments to task.py, as in the census estimator tutorial? https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census

Answer 1: You cannot perform HPTuning (Bayesian
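Since the managed hyperparameter tuning service only runs for submitted cloud jobs, a common local stand-in is a plain grid search that re-invokes the trainer with different command-line flags, in the spirit of the census sample's task.py. A minimal sketch; the flag names and grid values below are illustrative, not taken from the sample:

```python
import itertools

# Hypothetical hyperparameter grid; the flag names mimic what a task.py
# like the census sample might accept -- adjust to your own trainer.
GRID = {
    "--learning-rate": [0.001, 0.01],
    "--first-layer-size": [50, 100],
}

def make_arg_lists(grid):
    """Expand a flag->values grid into one argv list per combination."""
    flags = sorted(grid)
    combos = itertools.product(*(grid[f] for f in flags))
    return [
        [item for flag, value in zip(flags, combo) for item in (flag, str(value))]
        for combo in combos
    ]

if __name__ == "__main__":
    for args in make_arg_lists(GRID):
        # Each list could be appended to something like:
        #   gcloud ml-engine local train --module-name trainer.task -- <args>
        print(args)
```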

How do I call a REST Google Cloud API from AppMaker?

回眸只為那壹抹淺笑 Submitted on 2019-12-11 05:42:56
Question: I want to call the Google Cloud AutoML API from AppMaker, but it's hard to figure out how. How do I make a REST call to Google Cloud from AppMaker?

Answer 1: First, follow the instructions here to generate a service account and download the private key. (I'm also assuming you have already enabled the APIs for your project.) Then follow the instructions under the section "Addendum: Service account authorization without OAuth", but you will need to implement your own JWT encoding algorithm as
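For context on the addendum the answer references: a self-signed JWT is two base64url-encoded JSON segments (header and claims) joined by dots, plus an RS256 signature over them. The sketch below assembles only the unsigned header.claims portion; the claim names follow Google's service-account JWT layout, but the function itself is illustrative, and the RSA signing step (which needs a crypto library) is omitted:

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def build_unsigned_jwt(sa_email: str, audience: str, lifetime_s: int = 3600) -> str:
    """Assemble the header.claims portion of a self-signed service-account JWT.

    The real token must then be signed with the service account's RSA
    private key (RS256) and the signature appended as a third segment.
    """
    now = int(time.time())
    header = {"alg": "RS256", "typ": "JWT"}
    claims = {
        "iss": sa_email,
        "sub": sa_email,
        "aud": audience,
        "iat": now,
        "exp": now + lifetime_s,
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
```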

ERROR: Couldn't match files for checkpoint gs://obj-detection/train/model.ckpt

让人想犯罪 __ Submitted on 2019-12-11 05:24:04
Question: I ran my detection model on Google Cloud ML and got this error while running the evaluation script. I found this link that mentions the issue, but it seems the issue still has not been solved. Does anyone know how to fix this? Any help would be appreciated. Thanks.

```
ERROR 2018-02-04 12:53:10 -0600 master-replica-0 Couldn't match files for checkpoint gs://obj-detection/train/model.ckpt-0
INFO  2018-02-04 12:53:10 -0600 master-replica-0 No model found in gs://obj-detection/train. Will try
```
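For background on the error itself: a TensorFlow checkpoint path such as `model.ckpt-0` is a prefix, not a file; the actual artifacts are `<prefix>.index` and `<prefix>.data-*`, and "Couldn't match files for checkpoint" usually means those companion files are missing or the prefix is wrong. A local sketch of that check (the helper name is my own; `glob` only works on local paths, so for `gs://` you would list with `gsutil ls` instead):

```python
import glob
import os
import tempfile

def checkpoint_files_exist(ckpt_prefix: str) -> bool:
    """Return True if the index and data shards for a checkpoint prefix exist."""
    return bool(glob.glob(ckpt_prefix + ".index")) and bool(glob.glob(ckpt_prefix + ".data-*"))

# Demonstration with a throwaway directory standing in for the train dir.
with tempfile.TemporaryDirectory() as d:
    prefix = os.path.join(d, "model.ckpt-0")
    assert not checkpoint_files_exist(prefix)  # nothing written yet
    for suffix in (".index", ".data-00000-of-00001"):
        open(prefix + suffix, "w").close()
    assert checkpoint_files_exist(prefix)
```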

TPU custom chip available with Google Cloud ML

天大地大妈咪最大 Submitted on 2019-12-11 04:46:08
Question: Which type of hardware is used as part of Google Cloud ML when using TensorFlow? Is only the CPU available, or are Tensor Processing Units (custom cards) also available? cf. this article

Answer 1: Cloud TPUs are available to the public as of 2018-06-27: https://cloud.google.com/tpu/docs/release-notes This was announced at Google Next '18: https://www.blog.google/products/google-cloud/empowering-businesses-and-developers-to-do-more-with-ai/

Answer 2: Cloud ML currently focuses on CPUs. GPUs and TPUs will be available in

EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs

狂风中的少年 Submitted on 2019-12-11 02:43:39
Question: TensorFlow version 1.10, using DNNClassifier and tf.estimator.FinalExporter. I'm using the Iris example from the TF blog. I defined the following code:

```python
# The CSV features in our training & test data.
COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
FEATURE_COLUMNS = COLUMN_NAMES[:4]
INPUT_COLUMNS = [
    tf.feature_column.numeric_column(column) for column in COLUMN_NAMES
]

def serving_input_receiver_fn():
    """Build the serving inputs."""
    inputs = {}
    for feat in INPUT
```

Re-training inception google cloud stuck at global step 0

巧了我就是萌 Submitted on 2019-12-11 01:47:28
Question: I am following the flowers tutorial for re-training Inception on Google Cloud ML. I can run the tutorial (train, predict) just fine. I then substituted my own test dataset for the flowers dataset: optical character recognition of image digits. My full code is here. Dict file for labels; eval set; training set. I am running from a recent Docker build provided by Google:

`docker run -it -p "127.0.0.1:8080:8080" --entrypoint=/bin/bash gcr.io/cloud-datalab/datalab:local-20161227`

I can preprocess files

Unknown Error Sending Data to Google Cloud ML Custom Prediction Routine

余生颓废 Submitted on 2019-12-10 18:48:45
Question: I am trying to write a custom ML prediction routine on AI Platform to get text data from a client, do some custom preprocessing, pass it into the model, and run the model. I was able to package and deploy this code to Google Cloud successfully. However, every time I try to send a request to it from Node.js, I get back `data: { error: 'Prediction failed: unknown error.' }`. Here is my relevant custom prediction routine code. Note that I set instances to my text in the client and then tokenize
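For reference, AI Platform custom prediction routines are classes exposing a `from_path()` classmethod and a `predict(instances, **kwargs)` method, and any exception raised inside `predict` reaches the client only as the generic "Prediction failed: unknown error." message. A stripped-down skeleton (the class name and whitespace tokenizer are placeholders, not the asker's code) that returns the real error message to aid debugging:

```python
class TextPredictor:
    """Skeleton of an AI Platform custom prediction routine.

    The service instantiates the class via from_path() and calls predict()
    with the 'instances' list from the request body. Wrapping the body in
    try/except and returning the message makes failures diagnosable instead
    of surfacing as 'unknown error' on the client.
    """

    def __init__(self, model, tokenizer):
        self._model = model
        self._tokenizer = tokenizer

    def predict(self, instances, **kwargs):
        try:
            tokens = [self._tokenizer(text) for text in instances]
            # A real routine would feed the tokens to self._model here.
            return [{"token_count": len(t)} for t in tokens]
        except Exception as e:  # surface the real error for debugging
            return [{"error": str(e)}]

    @classmethod
    def from_path(cls, model_dir):
        # A real routine would load the saved model and tokenizer
        # from model_dir; a whitespace split stands in here.
        return cls(model=None, tokenizer=lambda s: s.split())
```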

ml-engine vague error: “grpc epoll fd: 3”

好久不见. Submitted on 2019-12-10 10:15:12
Question: I'm trying to train with `gcloud ml-engine jobs submit training`, and the job gets stuck with the following output in the logs. My config.yaml:

```yaml
trainingInput:
  scaleTier: CUSTOM
  masterType: standard_gpu
  workerType: standard_gpu
  parameterServerType: large_model
  workerCount: 1
  parameterServerCount: 1
```

Any hints about what "grpc epoll fd: 3" means and how to fix it? My input function feeds a 16 GB TFRecord from gs://, but with batch = 4 and shuffle buffer_size = 4. Each input sample is a single