google-cloud-ml

Export a basic Tensorflow model to Google Cloud ML

I am trying to export my local TensorFlow model so that I can use it on Google Cloud ML and run predictions on it. I am following the TensorFlow Serving example with MNIST data. There is quite a bit of difference in the way they have processed and used their input/output vectors, and it is not what you find in typical examples online. I am unsure how to set the parameters of my signature:

```python
model_exporter.init(
    sess.graph.as_graph_def(),
    init_op=init_op,
    default_graph_signature=exporter.classification_signature(
        input_tensor="**UNSURE**",
        scores_tensor="**UNSURE**"),
    named_graph_signatures={
```
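In the MNIST serving example those two parameters are tensors from the graph, not strings: the input placeholder and the softmax output. A minimal sketch of how the signature is typically wired up with the session_bundle exporter (the graph here is a hypothetical MNIST-style softmax regression, not the asker's model):

```python
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter

# Hypothetical MNIST-style graph: x is the feed placeholder, y the softmax scores.
x = tf.placeholder(tf.float32, [None, 784], name='x')
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, w) + b, name='y')

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    model_exporter = exporter.Exporter(saver)
    model_exporter.init(
        sess.graph.as_graph_def(),
        init_op=tf.group(tf.tables_initializer(), name='init_op'),
        default_graph_signature=exporter.classification_signature(
            input_tensor=x,     # the tensor you feed at prediction time
            scores_tensor=y))   # the tensor holding per-class scores
    model_exporter.export('/tmp/mnist_model', tf.constant(1), sess)
```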

Google Cloud ML Tensorflow Version

The docs for setting up Google Cloud ML suggest installing TensorFlow version r0.11. I've observed that TensorFlow functions newly available in r0.12 raise exceptions when run on Cloud ML. Is there a timeline for Cloud ML supporting r0.12? Will switching between r0.11 and r0.12 be optional or mandatory?

Yes, you can specify --runtime-version=0.12 to get a 0.12 build. This is a new feature and is documented at https://cloud.google.com/ml/docs/concepts/runtime-version-list Note, however, that the 0.12 build is not yet considered stable, and the exact TensorFlow build provided may change. Once the
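The same pin can also be set through the API rather than the gcloud flag; a minimal sketch using google-api-python-client, with hypothetical project, bucket, and module names:

```python
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
job = {
    'jobId': 'my_job',  # hypothetical job id
    'trainingInput': {
        'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
        'pythonModule': 'trainer.task',
        'region': 'us-central1',
        'runtimeVersion': '0.12',  # pins the TensorFlow build, like --runtime-version
    },
}
ml.projects().jobs().create(
    parent='projects/my-project', body=job).execute()
```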

Export a custom Keras model to be used for prediction with the Cloud ML Engine

I am having difficulties exporting a custom VGG-Net (not exactly the one from Keras) that was trained with Keras, so that it can be used for the Google Cloud Predict API. I am loading my model with Keras:

```python
sess = tf.Session()
K.set_session(sess)
model = load_model('model.h5')
```

The image that I want to classify was encoded as a base64 string, so I will have to decode it for the prediction task with some code that I found in one of the Google examples:

```python
channels = 3
height = 96
width = 96

def decode_and_resize(image_str_tensor):
    """Decodes jpeg string, resizes it and returns a uint8 tensor."""
    image = tf
```
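For reference, a sketch of how that helper typically continues in the Google samples this code appears to come from (the resize/cast steps are the usual ones; verify against the exact example copied from):

```python
import tensorflow as tf

channels, height, width = 3, 96, 96

def decode_and_resize(image_str_tensor):
    """Decodes a JPEG string, resizes it, and returns a uint8 tensor."""
    image = tf.image.decode_jpeg(image_str_tensor, channels=channels)
    # resize_bilinear expects a batch dimension, so add one and strip it again
    image = tf.expand_dims(image, 0)
    image = tf.image.resize_bilinear(image, [height, width], align_corners=False)
    image = tf.squeeze(image, squeeze_dims=[0])
    return tf.cast(image, dtype=tf.uint8)

# Applied elementwise over a batch of decoded base64 image strings:
image_str_tensor = tf.placeholder(tf.string, shape=[None])
images = tf.map_fn(
    decode_and_resize, image_str_tensor, back_prop=False, dtype=tf.uint8)
```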

error after running a job in google cloud ML

I tried running a word-RNN model from GitHub on Google Cloud ML. After submitting the job, I am getting errors in the log file. This is what I submitted for training:

```
gcloud ml-engine jobs submit training word_pred_7 \
    --package-path trainer \
    --module-name trainer.train \
    --runtime-version 1.0 \
    --job-dir $JOB_DIR \
    --region $REGION \
    -- \
    --data_dir gs://model-development/arpit/word-rnn-tensorflow-master/data/tinyshakespeare/real1.txt \
    --save_dir gs://model-development/arpit/word-rnn-tensorflow-master/save
```

This is what I get in the log file. Finally, after submitting 77 jobs to Cloud ML I am
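Since the log output is truncated here, only a general note: a very common failure when moving a GitHub training script to Cloud ML is plain Python file I/O, which cannot open gs:// paths. TensorFlow's file_io module can; a sketch of the usual substitution:

```python
from tensorflow.python.lib.io import file_io

# open(path) fails on gs:// URLs; file_io.FileIO handles local and GCS paths alike.
data_path = ('gs://model-development/arpit/word-rnn-tensorflow-master/'
             'data/tinyshakespeare/real1.txt')
with file_io.FileIO(data_path, 'r') as f:
    text = f.read()
```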

Getting free text features into Tensorflow Canned Estimators with Dataset API via feature_columns

I'm trying to build a model that gives reddit_score = f('subreddit', 'comment'). Mainly this is an example I can then build on for a work project. My code is here. My problem is that I see that canned estimators, e.g. DNNLinearCombinedRegressor, must have feature_columns that are part of the FeatureColumn class. I have my vocab file, and I know that if I were to limit to just the first word of a comment I could do something like:

```python
tf.feature_column.categorical_column_with_vocabulary_file(
    key='comment',
    vocabulary_file='{}/vocab.csv'.format(INPUT_DIR))
```

But if I'm passing in, say, the first 10 words from
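A sketch of one way to extend this to multiple words: categorical columns accept multivalent (sparse) input, so a single vocabulary column can cover all N tokens if the input_fn supplies the comment as a list of words under one key (the key name, bucket path, and dimensions below are hypothetical):

```python
import tensorflow as tf

INPUT_DIR = 'gs://my-bucket/data'  # hypothetical; stands in for the question's INPUT_DIR

# 'comment_words' would hold the first N tokens of each comment, e.g. produced
# in the input_fn by splitting (and padding) the raw comment string.
words = tf.feature_column.categorical_column_with_vocabulary_file(
    key='comment_words',
    vocabulary_file='{}/vocab.csv'.format(INPUT_DIR),
    num_oov_buckets=1)

# Embed the multivalent column for the deep side of the canned estimator.
comment_embedding = tf.feature_column.embedding_column(words, dimension=16)

estimator = tf.estimator.DNNLinearCombinedRegressor(
    linear_feature_columns=[words],
    dnn_feature_columns=[comment_embedding],
    dnn_hidden_units=[64, 32])
```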

import librosa in google cloud ml

I am running Google Cloud ML, and when I try to import librosa I get the error:

```
ImportError: No module named _tkinter, please install the python-tk package
```

I do have a setup.py file and an empty __init__.py file. My full output from Google Cloud is the following:

```
INFO 2017-02-10 12:45:53 -0800 unknown_task Validating job requirements...
INFO 2017-02-10 12:45:53 -0800 unknown_task Job creation request has been successfully validated.
INFO 2017-02-10 12:45:53 -0800 unknown_task Job
```
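The _tkinter error usually means something in the import chain pulled in matplotlib, whose default backend tries to load Tk, which is not installed on Cloud ML workers. A commonly suggested workaround (assuming matplotlib is indeed the culprit here) is to pin a headless backend before anything imports it:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; avoids the Tk/_tkinter dependency

import librosa  # safe once the backend is pinned
```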

Using Training TFRecords that are stored on Google Cloud

My goal is to use training data (format: TFRecords) stored on Google Cloud Storage when I run my TensorFlow training app locally. (Why locally? I am testing before I turn it into a training package for Cloud ML.) Based on this thread I shouldn't have to do anything, since the underlying TensorFlow APIs should be able to read a gs:// URL. However, that's not the case, and the errors I see are of the format:

```
2017-06-06 15:38:55.589068: I tensorflow/core/platform/cloud/retrying_utils.cc:77] The operation failed and will be automatically retried in 1.38118 seconds (attempt 1 out of 10), caused by:
```
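Those retry messages from retrying_utils.cc are often an authentication problem: TensorFlow's GCS filesystem needs application-default credentials before gs:// paths resolve locally. A minimal sketch, with hypothetical credential path, project id, and bucket:

```python
import os

# Hypothetical values; point these at your own key file and project.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/service-account.json'
os.environ['GCLOUD_PROJECT'] = 'my-project'

import tensorflow as tf

# With credentials in place, gs:// paths behave like local ones.
filename_queue = tf.train.string_input_producer(
    ['gs://my-bucket/data/train.tfrecords'])
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
```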

Data Normalization with tensorflow tf-transform

I'm doing neural network prediction with my own datasets using TensorFlow. First I built a model that works with a small dataset on my computer. After this, I changed the code a little bit in order to use Google Cloud ML-Engine with bigger datasets, running both training and prediction in ML-Engine. I am normalizing the features in the pandas dataframe, but this introduces skew and I get poor prediction results. What I would really like is to use the library tf-transform to normalize
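With tf-transform the normalization statistics are computed once over the full dataset during preprocessing and replayed identically at serving time, which is what removes the skew. A minimal sketch of a preprocessing_fn, assuming a hypothetical list of numeric column names:

```python
import tensorflow_transform as tft

NUMERIC_FEATURES = ['feature_a', 'feature_b']  # hypothetical column names

def preprocessing_fn(inputs):
    """Scales each numeric feature to zero mean and unit variance."""
    outputs = dict(inputs)
    for name in NUMERIC_FEATURES:
        # tft.scale_to_z_score uses mean/variance computed over the whole dataset
        outputs[name] = tft.scale_to_z_score(inputs[name])
    return outputs
```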

Google Cloud ML-engine scikit-learn prediction probability 'predict_proba()'

Google Cloud ML-engine supports the ability to deploy scikit-learn Pipeline objects. For example, a text classification Pipeline could look like the following:

```python
classifier = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', naive_bayes.MultinomialNB())])
```

The classifier can be trained:

```python
classifier.fit(train_x, train_y)
```

Then the classifier can be uploaded to Google Cloud Storage:

```python
model = 'model.joblib'
joblib.dump(classifier, model)
model_remote_path = os.path.join(
    'gs://', bucket_name,
    datetime.datetime.now().strftime('model_%Y%m%d_%H%M%S'), model)
subprocess.check_call(['gsutil', 'cp', model, model
```
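The online prediction service calls the deployed object's predict() method, so predict_proba() is never invoked directly. One commonly suggested workaround (a sketch, not an officially documented API) is to deploy a small wrapper whose predict delegates to predict_proba; note the wrapper class must be importable wherever the joblib file is loaded:

```python
import joblib
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

class ProbabilityPipeline:
    """Wraps a fitted pipeline so predict() returns class probabilities."""

    def __init__(self, pipeline):
        self.pipeline = pipeline

    def predict(self, instances):
        # The prediction service calls predict(); delegate to predict_proba.
        return self.pipeline.predict_proba(instances).tolist()

# Toy data, for illustration only.
train_x = ['good movie', 'great plot', 'terrible acting', 'bad film']
train_y = [1, 1, 0, 0]

classifier = Pipeline([
    ('vect', CountVectorizer()),
    ('clf', MultinomialNB())])
classifier.fit(train_x, train_y)
joblib.dump(ProbabilityPipeline(classifier), 'model.joblib')
```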