google-cloud-ml

Getting batch predictions for TFRecords via CloudML

Submitted by 人走茶凉 on 2019-11-30 09:08:19
Question: I followed this great tutorial and successfully trained a model (on CloudML). My code also makes predictions offline, but now I am trying to use Cloud ML to make predictions and am having some problems. To deploy my model I followed this tutorial. Now I have code that generates TFRecords via apache_beam.io.WriteToTFRecord and I want to make predictions for those TFRecords. To do so I am following this article; my command looks like this: gcloud ml-engine jobs submit prediction $JOB_ID --model …
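For reference, a complete batch-prediction command also needs input paths, an output path, a region, and a data format. A sketch with placeholder bucket paths (all values below are hypothetical; --data-format TF_RECORD tells the service to parse the input files as TFRecords rather than newline-delimited JSON):

    gcloud ml-engine jobs submit prediction $JOB_ID \
        --model $MODEL_NAME \
        --input-paths gs://my-bucket/predictions/input/* \
        --output-path gs://my-bucket/predictions/output \
        --region us-central1 \
        --data-format TF_RECORD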

Base64 images with Keras and Google Cloud ML

Submitted by ╄→гoц情女王★ on 2019-11-30 05:27:21
Question: I'm predicting image classes using Keras. It works in Google Cloud ML (GCML), but for efficiency I need to change it to pass base64 strings instead of a JSON array. Related Documentation. I can easily run Python code to decode a base64 string into a JSON array, but when using GCML I don't have the opportunity to run a preprocessing step (unless maybe I use a Lambda layer in Keras, but I don't think that is the correct approach). Another answer suggested adding a tf.placeholder of type tf.string, …
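A common pattern (a sketch, not the asker's exact code) is to give the exported graph a string placeholder for raw image bytes and do the decoding inside the graph; the feature key, tensor name, and image size below are assumptions:

    import tensorflow as tf

    def serving_input_fn():
        # The {"b64": ...} wrapper is decoded by the ML Engine service,
        # so this placeholder receives plain JPEG bytes, one per instance.
        image_bytes = tf.placeholder(tf.string, shape=[None], name='image_bytes')

        def decode_and_resize(jpeg_bytes):
            image = tf.image.decode_jpeg(jpeg_bytes, channels=3)
            image = tf.image.convert_image_dtype(image, tf.float32)
            return tf.image.resize_images(tf.expand_dims(image, 0), [224, 224])[0]

        images = tf.map_fn(decode_and_resize, image_bytes, dtype=tf.float32)
        return tf.estimator.export.ServingInputReceiver(
            features={'image': images},
            receiver_tensors={'image_bytes': image_bytes})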

What does google cloud ml-engine do when a Json request contains “_bytes” or “b64”?

Submitted by 血红的双手。 on 2019-11-29 06:53:41
The Google Cloud documentation (see Binary data in prediction input) states: Your encoded string must be formatted as a JSON object with a single key named b64. The following Python example encodes a buffer of raw JPEG data using the base64 library to make an instance: {"image_bytes":{"b64": base64.b64encode(jpeg_data)}} In your TensorFlow model code, you must name the aliases for your input and output tensors so that they end with '_bytes'. I would like to understand more about how this process works on the Google Cloud side. Is the ml-engine automatically decoding any content after the "b64" …
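Concretely, the client-side half of that contract looks like the following sketch (the file name is a placeholder); the service base64-decodes the value under "b64" before feeding the raw bytes to the tensor whose alias ends in _bytes:

    import base64
    import json

    with open('image.jpg', 'rb') as f:
        jpeg_data = f.read()

    # The alias 'image_bytes' ends in '_bytes', so ml-engine treats the
    # value as binary and strips the {"b64": ...} wrapper server-side.
    instance = {'image_bytes': {'b64': base64.b64encode(jpeg_data).decode('utf-8')}}
    print(json.dumps(instance))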

How to make correct predictions for a jpeg image in cloud-ml

Submitted by 浪子不回头ぞ on 2019-11-28 13:01:57
I want to predict a jpeg image in cloud-ml. My training model is the Inception model, and I would like to send the input to the first layer of the graph: 'DecodeJpeg/contents:0' (where I have to send a jpeg image). I have set this layer as a possible input by adding to retrain.py: inputs = {'image_bytes': 'DecodeJpeg/contents:0'} tf.add_to_collection('inputs', json.dumps(inputs)) Then I save the results of the training in two files (export and export.meta) with: saver.save(sess, os.path.join(output_directory,'export')) and I create a model in cloud-ml using these files. As suggested in some …
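Pieced together, the export step described above looks roughly like this sketch; it assumes an active tf.Session named sess and an output_directory as in the question, and the output alias 'final_result:0' is a hypothetical name based on the stock retrain.py:

    import json
    import os
    import tensorflow as tf

    inputs = {'image_bytes': 'DecodeJpeg/contents:0'}
    outputs = {'prediction': 'final_result:0'}  # hypothetical output alias
    tf.add_to_collection('inputs', json.dumps(inputs))
    tf.add_to_collection('outputs', json.dumps(outputs))

    # Writes 'export' (variables) and 'export.meta' (graph) for Cloud ML.
    saver = tf.train.Saver()
    saver.save(sess, os.path.join(output_directory, 'export'))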

In TensorFlow, when serving a model, what is the serving input function supposed to do exactly?

Submitted by 孤人 on 2019-11-28 09:29:37
So, I've been struggling to understand what the main task of a serving_input_fn() is when a trained model is exported in TensorFlow for serving purposes. There are some examples online that explain it, but I'm having problems defining it for myself. The problem I'm trying to solve is a regression problem where I have 29 inputs and one output. Is there a template for creating a corresponding serving input function for that? What if I use a one-class classification problem? Would my serving input function need to change, or can I use the same function? And finally, do I always need a serving input …
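As a sketch of what such a template can look like for 29 numeric inputs (the feature names are hypothetical; the same receiver works whether the head is regression or classification, since it only describes the inputs, not the output):

    import tensorflow as tf

    FEATURE_NAMES = ['feature_%d' % i for i in range(29)]  # hypothetical names

    def serving_input_fn():
        # One float placeholder per input; None leaves the batch size open.
        receiver_tensors = {
            name: tf.placeholder(tf.float32, shape=[None], name=name)
            for name in FEATURE_NAMES
        }
        return tf.estimator.export.ServingInputReceiver(
            features=receiver_tensors, receiver_tensors=receiver_tensors)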

Deploying Keras Models via Google Cloud ML

Submitted by 霸气de小男生 on 2019-11-28 07:01:20
I am looking to use Google Cloud ML to host my Keras models so that I can call the API and make some predictions. I am running into some issues from the Keras side of things. So far I have been able to build a model using TensorFlow and deploy it on CloudML. In order for this to work I had to make some changes to my basic TF code. The changes are documented here: https://cloud.google.com/ml/docs/how-tos/preparing-models#code_changes I have also been able to train a similar model using Keras. I can even save the model in the same export and export.meta format as I would get with TF. from keras …
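A common bridge (a sketch, not the poster's code) is to reuse Keras's underlying TensorFlow session and export through the SavedModel builder; model is assumed to be an already-trained keras.models.Model, and the signature keys and export directory are placeholders:

    import tensorflow as tf
    from keras import backend as K

    K.set_learning_phase(0)  # inference mode: fixes dropout/batch-norm behavior

    builder = tf.saved_model.builder.SavedModelBuilder('export_dir')
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'input': model.input}, outputs={'output': model.output})
    builder.add_meta_graph_and_variables(
        sess=K.get_session(),
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants
              .DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()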

Deploy retrained inception SavedModel to google cloud ml engine

Submitted by ⅰ亾dé卋堺 on 2019-11-27 15:41:49
I am trying to deploy a retrained version of the Inception model on Google Cloud ml-engine. Gathering information from the SavedModel documentation, this reference, and this post by rhaertel80, I successfully exported my retrained model to a SavedModel, uploaded it to a bucket, and tried to deploy it as an ml-engine version. This last task actually creates a version, but it outputs this error: Create Version failed. Bad model detected with error: "Error loading the model: Unexpected error when loading the model" And when I try to get predictions from the model via the command line I get this error …
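A quick way to narrow down such errors is to load the SavedModel locally the same way the service does; if this sketch fails (the directory path is a placeholder), the deployment will fail too:

    import tensorflow as tf

    # Loads the graph and variables under the 'serve' tag, mirroring
    # what ml-engine does when it creates a version.
    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING],
            'path/to/saved_model_dir')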

How to convert a jpeg image into a json file in Google machine learning

Submitted by 对着背影说爱祢 on 2019-11-27 14:14:17
I'm working on Google Cloud ML, and I want to get predictions on a jpeg image. To do this, I would like to use: gcloud beta ml predict --instances=INSTANCES --model=MODEL [--version=VERSION] (https://cloud.google.com/ml/reference/commandline/predict) INSTANCES is the path to a JSON file with all the info about the image. How can I create the json file from my jpeg image? Many thanks!! The first step is to make sure that the graph you export has a placeholder and ops that can accept JPEG data. Note that CloudML assumes you are sending a batch of images. We have to use a tf.map_fn to decode and resize a …
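Once the exported graph accepts JPEG bytes, producing the JSON file itself is short; a sketch assuming a single image and an input alias of image_bytes (both file names are placeholders):

    import base64
    import json

    with open('image.jpg', 'rb') as img:
        # '_bytes' in the alias tells the service the value is binary.
        instance = {'image_bytes': {'b64': base64.b64encode(img.read()).decode('utf-8')}}

    with open('instances.json', 'w') as f:
        f.write(json.dumps(instance))

The resulting file can then be passed as --instances=instances.json.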

Convert a graph proto (pb/pbtxt) to a SavedModel for use in TensorFlow Serving or Cloud ML Engine

Submitted by 随声附和 on 2019-11-27 13:23:17
I've been following the TensorFlow for Poets 2 codelab on a model I've trained, and have created a frozen, quantized graph with embedded weights. It's captured in a single file, say my_quant_graph.pb. Since I can use that graph for inference with the TensorFlow Android inference library just fine, I thought I could do the same with Cloud ML Engine, but it seems it only works with a SavedModel. How can I simply convert a frozen/quantized graph in a single pb file to use on ML Engine? It turns out that a SavedModel provides some extra info around a saved graph. Assuming a frozen graph …
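The conversion can be done by importing the frozen GraphDef into a session and wrapping it with a SavedModelBuilder; a sketch where the input and output tensor names are assumptions that must match your actual graph:

    import tensorflow as tf

    export_dir = 'saved_model_dir'
    graph_pb = 'my_quant_graph.pb'

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)

    with tf.gfile.GFile(graph_pb, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Session(graph=tf.Graph()) as sess:
        tf.import_graph_def(graph_def, name='')
        g = tf.get_default_graph()
        # Hypothetical tensor names; inspect your graph for the real ones.
        inp = g.get_tensor_by_name('input:0')
        out = g.get_tensor_by_name('final_result:0')
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={'in': inp}, outputs={'out': out})
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                tf.saved_model.signature_constants
                  .DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})

    builder.save()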

Google Storage (gs) wrapper file input/output for Cloud ML?

Submitted by 試著忘記壹切 on 2019-11-27 12:52:11
Google recently announced Cloud ML, https://cloud.google.com/ml/ and it's very useful. However, one limitation is that the input/output of a TensorFlow program should support gs://. If we use all TensorFlow APIs to read/write files, it should be OK, since these APIs support gs://. However, if we use native file IO APIs such as open, it does not work, because they don't understand gs://. For example: with open(vocab_file, 'wb') as f: cPickle.dump(self.words, f) This code won't work in Google Cloud ML. However, modifying all native file IO APIs to TensorFlow APIs or Google Storage Python APIs is …
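TensorFlow ships a file module that understands both local paths and gs:// URIs, so the snippet above can be fixed with a near drop-in replacement for open(); a sketch assuming Python 2 (to match the cPickle usage) and the vocab_file/self.words names from the question:

    import cPickle
    from tensorflow.python.lib.io import file_io

    # file_io.FileIO accepts local paths and gs:// URIs alike, so the
    # same code runs unchanged on Cloud ML.
    with file_io.FileIO(vocab_file, 'wb') as f:
        cPickle.dump(self.words, f)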