google-cloud-ml

Deploy Retrained inception model on Google cloud machine learning

Submitted by 拥有回忆 on 2019-12-02 01:58:11
Question: I managed to retrain my specific classification model from the generic Inception model by following this tutorial. I would now like to deploy it on Google Cloud Machine Learning by following these steps. I already managed to export it as a MetaGraph, but I can't work out the proper inputs and outputs. Used locally, my entry point to the graph is DecodeJpeg/contents:0, which is fed a JPEG image in binary format, and the output is my predictions. The code I use locally (which works) is: softmax_tensor = sess.graph.get_tensor_by_name('final_result:0') predictions = sess.run(softmax
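Locally the graph takes raw JPEG bytes at DecodeJpeg/contents:0, but the online prediction service takes a JSON request in which binary inputs are base64-wrapped. As a minimal sketch (assuming the exported input alias is named image_bytes, a common convention that the question itself does not confirm), the request body could be built like this:

```python
import base64
import json

def make_predict_request(jpeg_bytes):
    """Wrap raw JPEG bytes in the JSON structure the Cloud ML online
    prediction service expects: an alias ending in `_bytes` carries a
    {"b64": ...} object holding the base64-encoded payload."""
    encoded = base64.b64encode(jpeg_bytes).decode("utf-8")
    return json.dumps({"instances": [{"image_bytes": {"b64": encoded}}]})

# Stand-in bytes instead of a real JPEG file, to keep this self-contained:
request_body = make_predict_request(b"\xff\xd8\xff\xe0fake-jpeg")
```

The service decodes the base64 itself before feeding the tensor, which is why aliases ending in `_bytes` get the special `{"b64": ...}` treatment.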

Google Cloud ML FAILED_PRECONDITION

Submitted by 醉酒当歌 on 2019-12-01 19:25:03
I am trying to use Google Cloud ML to host a TensorFlow model and get predictions. I have a pretrained model that I have uploaded to the cloud, and I have created a model and version in my Cloud ML console. I followed the instructions from here to prepare my data for requesting online predictions. For both the Python method and the gcloud method I get the same error. For simplicity, I'll post the gcloud method: I run gcloud ml-engine predict --model spell_correction --json-instances test.json, where test.json is my input data file (a JSON array named instances). I get the following result:
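One thing worth double-checking with --json-instances (a hedged sketch, since the question doesn't show the contents of test.json): gcloud expects the file to contain one JSON object per line, not a single top-level instances array — the wrapped-array form is the shape of the HTTP request body, not of the gcloud input file. The field name input below is hypothetical:

```python
import json
import os
import tempfile

def write_json_instances(instances, path):
    """Write one JSON object per line, the newline-delimited format that
    `gcloud ml-engine predict --json-instances` reads (not one wrapped
    {"instances": [...]} object)."""
    with open(path, "w") as f:
        for instance in instances:
            f.write(json.dumps(instance) + "\n")

path = os.path.join(tempfile.mkdtemp(), "test.json")
write_json_instances([{"input": "helo"}, {"input": "wrld"}], path)
```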

Keras model to Tensorflow to input b64 encoded data instead of numpy ml-engine predict

Submitted by 丶灬走出姿态 on 2019-12-01 18:16:42
I am trying to convert a Keras model so I can use it for predictions on Google Cloud's ml-engine. I have a pre-trained classifier that takes a NumPy array as input. The normal working data I send to model.predict is named input_data. I convert it to base64 and dump it to a JSON file using the following few lines: data = {} data['image_bytes'] = [{'b64': base64.b64encode(input_data.tostring())}] with open('weights/keras/example.json', 'w') as outfile: json.dump(data, outfile) Now I try to create the TF model from my existing model: from keras.models import model_from_json import tensorflow as tf
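A small portability note on the snippet above: on Python 3, base64.b64encode returns bytes, which json.dump cannot serialize, so the 'b64' value needs a .decode() before dumping. The sketch below uses the standard-library array module as a stand-in for the NumPy array (an assumption, to keep the example self-contained; with NumPy it would be input_data.tostring() or .tobytes()):

```python
import base64
import json
from array import array  # stdlib stand-in for a np.float32 array

input_data = array("f", [0.1, 0.2, 0.3])  # ~ np.array([...], dtype=np.float32)
raw = input_data.tobytes()                # ~ input_data.tostring()

# b64encode returns *bytes*; JSON can only hold str, so decode to ASCII.
# The question's code as written would raise a TypeError on Python 3 here.
data = {"image_bytes": [{"b64": base64.b64encode(raw).decode("ascii")}]}
serialized = json.dumps(data)
```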

How do I convert a CloudML Alpha model to a SavedModel?

Submitted by 馋奶兔 on 2019-12-01 12:07:24
In the alpha release of CloudML's online prediction service, the format for exporting a model was: inputs = {"x": x, "y_bytes": y} g.add_to_collection("inputs", json.dumps(inputs)) outputs = {"a": a, "b_bytes": b} g.add_to_collection("outputs", json.dumps(outputs)) I would like to convert this to a SavedModel without retraining my model. How can I do that? Answer: We can convert this to a SavedModel by importing the old model, creating the signatures, and re-exporting it. This code is untested, but something like this should work: import json import tensorflow as tf from tensorflow.contrib.session
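Setting aside the TF-specific import and re-export, the first step of that conversion is recovering the alias-to-tensor dictionaries that the alpha exporter serialized into the inputs and outputs collections as JSON strings. A minimal standard-library sketch of that step (the tensor names are hypothetical; with TensorFlow, each recovered name would then go through graph.get_tensor_by_name before building the SignatureDef):

```python
import json

# What the alpha exporter stored via g.add_to_collection(...):
inputs_json = json.dumps({"x": "x:0", "y_bytes": "y:0"})
outputs_json = json.dumps({"a": "a:0", "b_bytes": "b:0"})

# After re-importing the old MetaGraph, read the collection strings back
# into alias -> tensor-name dicts; these become the SavedModel signature.
signature_inputs = json.loads(inputs_json)
signature_outputs = json.loads(outputs_json)
```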

Google Cloud ML Engine - Job failed due to an internal error. Can't execute a job

Submitted by 假如想象 on 2019-12-01 11:53:21
This is an ml-job that I previously trained successfully, but when I tried it today it isn't working. I then tried removing everything in the bucket and starting over; it still isn't working and gives the following error: Internal error occurred. Please retry in a few minutes. If you still experience errors, contact Cloud ML. Source: https://stackoverflow.com/questions/45609164/google-cloud-ml-engine-job-failed-due-to-an-internal-error-cant-execute-a-j

Best way to process terabytes of data on gcloud ml-engine with keras

Submitted by 送分小仙女□ on 2019-12-01 10:59:37
I want to train a model on about 2 TB of image data in gcloud storage. I saved the image data as separate TFRecords and tried to use the TensorFlow Data API following this example: https://medium.com/@moritzkrger/speeding-up-keras-with-tfrecord-datasets-5464f9836c36 But it seems that Keras' model.fit(...) doesn't support validation for TFRecord datasets, based on https://github.com/keras-team/keras/pull/8388 Is there a better approach for processing large amounts of data with Keras from ml-engine that I'm missing? Thanks a lot! Answer: If you are willing to use tf.keras instead of actual Keras, you can
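Independent of the Keras-vs-tf.keras question, one common workaround for the missing TFRecord validation support is to split the shard filenames up front and build two separate datasets, passing one to model.fit and the other as its validation_data argument. The deterministic file split itself needs nothing beyond the standard library (the bucket path and shard count below are made up for illustration):

```python
def split_shards(filenames, val_fraction=0.1):
    """Deterministically split TFRecord shard names into train/validation
    lists; each list would then feed its own tf.data.TFRecordDataset."""
    filenames = sorted(filenames)          # stable order across workers
    n_val = max(1, int(len(filenames) * val_fraction))
    return filenames[n_val:], filenames[:n_val]

shards = ["gs://bucket/images-%05d.tfrecord" % i for i in range(20)]
train_files, val_files = split_shards(shards)
```

Sorting before slicing matters: every worker that lists the same bucket gets the same split, so training and validation shards never overlap between runs.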

How to read a utf-8 encoded binary string in tensorflow?

Submitted by 帅比萌擦擦* on 2019-12-01 08:29:16
Question: I am trying to convert an encoded byte string back into the original array inside the TensorFlow graph (using TensorFlow operations) in order to make a prediction with a TensorFlow model. The array-to-byte conversion is based on this answer, and it is the suggested input for TensorFlow model prediction on Google Cloud's ml-engine. def array_request_example(input_array): input_array = input_array.astype(np.float32) byte_string = input_array.tostring() string_encoded_contents = base64.b64encode(byte
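The client-side encoding in the question has a straightforward inverse, and sketching both directions in plain Python makes the graph-side job concrete: inside a TF 1.x graph the same two decoding steps would be tf.decode_base64 followed by tf.decode_raw(..., tf.float32). The sketch uses the standard-library array module in place of NumPy (an assumption, to keep it dependency-free):

```python
import base64
from array import array  # stdlib stand-in for np.float32 arrays

def encode_request(values):
    """Client side: float32 values -> raw bytes -> base64 text."""
    return base64.b64encode(array("f", values).tobytes()).decode("ascii")

def decode_request(b64_text):
    """Server side: the inverse pair of operations (base64 -> raw bytes
    -> float32 values), i.e. what the in-graph ops must reproduce."""
    out = array("f")
    out.frombytes(base64.b64decode(b64_text))
    return list(out)

# Values chosen to be exactly representable in float32:
roundtrip = decode_request(encode_request([1.0, 2.5, -3.0]))
```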

Base64 images with Keras and Google Cloud ML

Submitted by 扶醉桌前 on 2019-11-30 21:16:28
I'm predicting image classes using Keras. It works in Google Cloud ML (GCML), but for efficiency I need to change it to pass base64 strings instead of a JSON array. Related documentation. I can easily run Python code to decode a base64 string into a JSON array, but when using GCML I don't have the opportunity to run a preprocessing step (unless maybe I use a Lambda layer in Keras, but I don't think that is the correct approach). Another answer suggested adding a tf.placeholder with type tf.string, which makes sense, but how do I incorporate that into the Keras model? Here is the complete code for training the
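A gotcha that often bites this exact setup (hedged: it applies when the graph decodes with the TF 1.x tf.decode_base64 op): that op expects web-safe base64, which uses - and _ where standard base64 uses + and /, so a client that encoded with base64.b64encode can produce strings the graph rejects. The difference is easy to see with the standard library alone:

```python
import base64

payload = b"\xfb\xff\xfe image bytes"  # leading bytes chosen to hit + and /

standard = base64.b64encode(payload).decode("ascii")
websafe = base64.urlsafe_b64encode(payload).decode("ascii")

# The two alphabets differ only in characters 62 and 63:
# '+' -> '-' and '/' -> '_'. Decode with the matching function.
restored = base64.urlsafe_b64decode(websafe)
```

In other words, the client-side fix is usually as simple as switching to base64.urlsafe_b64encode before building the JSON request.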