google-cloud-ml

Export a basic TensorFlow model to Google Cloud ML

好久不见. Submitted on 2019-12-07 08:46:13
Question: I am trying to export my local TensorFlow model to use it on Google Cloud ML and run predictions on it. I am following the TensorFlow Serving example with MNIST data. There is quite a bit of difference in the way they process and use their input/output vectors, and it is not what you find in typical examples online. I am unsure how to set the parameters of my signatures: model_exporter.init( sess.graph.as_graph_def(), init_op = init_op, default_graph_signature = exporter
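For reference, a minimal sketch of an alternative export path that Cloud ML Engine also accepts (a TF 1.x SavedModel with an explicit prediction signature, rather than the session-bundle exporter from the MNIST example); the tiny linear model here is a stand-in for the real graph:

```python
import tensorflow as tf

# Stand-in model: replace with the real input placeholder and output tensor.
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
w = tf.Variable(tf.zeros([784, 10]))
y = tf.nn.softmax(tf.matmul(x, w), name='scores')

# The signature names the tensors the prediction service will feed and fetch.
signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'inputs': x}, outputs={'scores': y})

builder = tf.saved_model.builder.SavedModelBuilder('export_dir')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    builder.add_meta_graph_and_variables(
        sess, tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                signature})
builder.save()
```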

scikit-learn model giving 'LocalOutlierFactor' object has no attribute 'predict' error

爱⌒轻易说出口 Submitted on 2019-12-07 00:18:26
I'm new to the machine learning world and I have built and trained an ML model using the scikit-learn library. It works perfectly well in the Jupyter notebook, but when I deployed this model to Google Cloud ML and tried to serve it using a Python script, it threw an error. Here's a snippet from my model code: Updated: from sklearn.metrics import classification_report, accuracy_score from sklearn.ensemble import IsolationForest from sklearn.neighbors import LocalOutlierFactor # define a random state state = 1 classifiers = { "Isolation Forest": IsolationForest(max_samples=len(X), contamination=outlier
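The usual cause, hedged as a sketch: LocalOutlierFactor only gains a predict method when constructed for novelty detection (scikit-learn >= 0.20); by default it is fit_predict-only, which is exactly the missing attribute the error names.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(1)
X_train = rng.randn(100, 2)   # stand-in inlier training data
X_new = rng.randn(5, 2)       # stand-in data to score at serving time

# novelty=True enables predict()/decision_function() on unseen data;
# without it the fitted estimator has no predict attribute at all.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True, contamination=0.1)
lof.fit(X_train)
labels = lof.predict(X_new)   # +1 = inlier, -1 = outlier
```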

TensorFlow on ML Engine: The replica master 0 exited with a non-zero status of 1

こ雲淡風輕ζ Submitted on 2019-12-06 22:05:40
I launch a TensorFlow task on ML Engine, and after about 2 minutes I keep getting the error message "The replica master 0 exited with a non-zero status of 1." (The task incidentally runs fine with ml-engine local.) Question: Is there any place or log file where I can see further information on what happened? The logs viewer just gives the following: { insertId: "ibal72g1rxhr63" logName: "projects/**-***-ml/logs/ml.googleapis.com%2Fcnn180322_170649" receiveTimestamp: "2018-03-22T17:08:38.344282172Z" resource: { labels: { job_id: "cnn180322_170649" project_id: "**-***-ml" task_name: "service" }
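A hedged pointer: the entry above is from the "service" task, but the actual Python traceback usually lands in the master/worker task logs. Streaming the job's logs, or filtering Stackdriver by the job_id shown above, tends to surface the real error:

```sh
# Stream everything the job emits, including the master task's traceback:
gcloud ml-engine jobs stream-logs cnn180322_170649

# Or pull only the error-level entries from Stackdriver Logging:
gcloud logging read 'resource.labels.job_id="cnn180322_170649" severity>=ERROR'
```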

Export a custom Keras model to be used for prediction with the Cloud ML Engine

走远了吗. Submitted on 2019-12-06 15:07:58
Question: I am having difficulty exporting a custom VGG-Net (not exactly the one from Keras) that was trained with Keras, so that it can be used with the Google Cloud Predict API. I am loading my model with Keras: sess = tf.Session() K.set_session(sess) model = load_model('model.h5') The image that I want to classify is encoded as a base64 string, so I will have to decode it for the prediction task with some code that I found in one of the Google examples. channels = 3 height = 96 width = 96 def decode_and
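A hedged sketch of that decoding step, using the dimensions from the question (3 channels, 96 x 96). Two assumptions worth flagging: the [0, 1] scaling should match whatever preprocessing the network was trained with, and Cloud ML only base64-decodes JSON values sent as {"b64": "..."} for inputs whose name ends in "_bytes".

```python
import tensorflow as tf

channels, height, width = 3, 96, 96

def decode_and_resize(image_bytes):
    # Decode one JPEG byte string into a float image tensor.
    image = tf.image.decode_jpeg(image_bytes, channels=channels)
    image = tf.image.resize_images(image, [height, width])
    return image / 255.0  # scaling is an assumption; match training

# The "_bytes" suffix triggers Cloud ML's automatic base64 decoding.
image_str_tensor = tf.placeholder(tf.string, shape=[None], name='image_bytes')
images = tf.map_fn(decode_and_resize, image_str_tensor, dtype=tf.float32)
```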

How do I modify a Keras model export to accept b64 strings for a RESTful API / Google Cloud ML?

允我心安 Submitted on 2019-12-06 14:46:30
The complete code for exporting the model (I've already trained it and am now loading from a weights file): def cnn_layers(inputs): conv_base = keras.applications.mobilenetv2.MobileNetV2(input_shape=(224,224,3), input_tensor=inputs, include_top=False, weights='imagenet') for layer in conv_base.layers[:-200]: layer.trainable = False last_layer = conv_base.output x = GlobalAveragePooling2D()(last_layer) x = keras.layers.GaussianNoise(0.3)(x) x = Dense(1024, name='fc-1')(x) x = keras.layers.BatchNormalization()(x) x = keras.layers.advanced_activations.LeakyReLU(0.3)(x) x = Dropout(0.4)(x) x = Dense(512
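A hedged sketch of the serving-side piece: a serving_input_receiver_fn that accepts base64 JPEG bytes and produces the 224x224x3 float tensor MobileNetV2 expects. 'input_1' is Keras's default input name and is an assumption here; check model.input_names on the loaded model.

```python
import tensorflow as tf

def serving_input_receiver_fn():
    def decode(jpeg_bytes):
        image = tf.image.decode_jpeg(jpeg_bytes, channels=3)
        image = tf.image.resize_images(image, [224, 224])
        return image / 127.5 - 1.0  # MobileNetV2's 'tf'-mode preprocessing

    # The "_bytes" suffix lets Cloud ML decode {"b64": "..."} JSON values.
    input_ph = tf.placeholder(tf.string, shape=[None], name='image_bytes')
    images = tf.map_fn(decode, input_ph, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        features={'input_1': images},            # assumed Keras input name
        receiver_tensors={'image_bytes': input_ph})
```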

Pickled scipy sparse matrix as input data?

懵懂的女人 Submitted on 2019-12-06 14:33:09
I am working on a multiclass classification problem that consists of classifying resumes. I used sklearn and its TfidfVectorizer to get a big scipy sparse matrix that I feed into a TensorFlow model after pickling it. On my local machine, I load it, convert a small batch to dense numpy arrays, and fill a feed dictionary. Everything works great. Now I would like to do the same thing on Cloud ML. My pickle is stored at gs://my-bucket/path/to/pickle but when I run my trainer, the pickle file can't be found at this URI (IOError: [Errno 2] No such file or directory). I am using pickle.load(open('gs:/
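The likely fix, as a sketch: Python's built-in open() only understands local paths, while TensorFlow's file_io module speaks gs:// as well (the path below is the one from the question):

```python
import pickle
from tensorflow.python.lib.io import file_io

# file_io.FileIO reads GCS objects directly, so the trainer can load the
# pickled sparse matrix without first copying it to local disk.
with file_io.FileIO('gs://my-bucket/path/to/pickle', mode='rb') as f:
    tfidf_matrix = pickle.load(f)
```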

Error after running a job in Google Cloud ML

▼魔方 西西 Submitted on 2019-12-06 13:17:00
Question: I tried running a word-RNN model from GitHub on Google Cloud ML. After submitting the job, I am getting errors in the log file. This is what I submitted for training: gcloud ml-engine jobs submit training word_pred_7 \ --package-path trainer \ --module-name trainer.train \ --runtime-version 1.0 \ --job-dir $JOB_DIR \ --region $REGION \ -- \ --data_dir gs://model-development/arpit/word-rnn-tensorflow-master/data/tinyshakespeare/real1.txt \ --save_dir gs://model-development/arpit/word-rnn-tensorflow
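One hedged guess at a frequent failure mode for GitHub training scripts on ML Engine: their data loaders open files with plain open()/codecs.open(), which cannot read gs:// URIs. Swapping in tf.gfile (here reusing the --data_dir path from the command above) handles both local and GCS paths:

```python
import tensorflow as tf

input_file = ('gs://model-development/arpit/word-rnn-tensorflow-master/'
              'data/tinyshakespeare/real1.txt')

# tf.gfile.GFile transparently reads local paths and gs:// objects alike.
with tf.gfile.GFile(input_file, 'rb') as f:
    data = f.read().decode('utf-8')
```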

Google Cloud ML always gives me the same results

只谈情不闲聊 Submitted on 2019-12-06 07:30:33
I'm working on machine learning and I would like to use the Google Cloud ML service. At this moment, I have trained my model with the retrain.py code from TensorFlow ( https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py#L103 ) and I have exported the results for Cloud ML (export and export.meta files). However, when I try to make a prediction on new data with the command ( https://cloud.google.com/ml/reference/commandline/predict ): gcloud beta ml predict, it always returns the same result (I want to predict different data). How is this possible? My data are
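One hedged possibility, assuming the old export/export.meta format that gcloud beta ml predict consumed: that path read the graph's inputs and outputs from JSON-encoded collections, and each request row had to be keyed by the declared input name; if that mapping is off, identical predictions are a plausible symptom. A sketch with stand-in tensors (the names are assumptions, not necessarily retrain.py's actual ones):

```python
import json
import tensorflow as tf

# Stand-ins for the graph's JPEG input placeholder and softmax output.
jpeg_data = tf.placeholder(tf.string, name='DecodeJPGInput')
final_tensor = tf.placeholder(tf.float32, shape=[None, 5], name='final_result')

# The beta-era export convention: declare I/O in 'inputs'/'outputs'
# collections so the prediction service knows which tensors to feed/fetch.
tf.add_to_collection('inputs', json.dumps({'image': jpeg_data.name}))
tf.add_to_collection('outputs', json.dumps({'scores': final_tensor.name}))
```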

Getting free text features into TensorFlow canned estimators with the Dataset API via feature_columns

自古美人都是妖i Submitted on 2019-12-06 06:30:25
Question: I'm trying to build a model that gives reddit_score = f('subreddit', 'comment'). Mainly this is an example I can then build on for a work project. My code is here. My problem is that I see that canned estimators, e.g. DNNLinearCombinedRegressor, must have feature_columns that are part of the FeatureColumn class. I have my vocab file and know that if I were to limit to just the first word of a comment I could do something like tf.feature_column.categorical_column_with_vocabulary_file( key=
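A hedged sketch of how the whole comment (not just the first word) can reach a canned estimator: feed it as a variable-length list of tokens (a SparseTensor of words) and wrap the vocabulary column in an embedding_column that averages the word embeddings. The file name, key names, and sizes below are assumptions:

```python
import tensorflow as tf

# Multi-valued categorical column over word tokens; the input_fn should
# yield features['comment_words'] as a SparseTensor, e.g. produced with
# tf.string_split on the raw comment strings.
words = tf.feature_column.categorical_column_with_vocabulary_file(
    key='comment_words', vocabulary_file='vocab.txt')
comment_embedding = tf.feature_column.embedding_column(
    words, dimension=32, combiner='mean')  # mean of word embeddings

subreddit = tf.feature_column.categorical_column_with_hash_bucket(
    'subreddit', hash_bucket_size=1000)

estimator = tf.estimator.DNNLinearCombinedRegressor(
    linear_feature_columns=[subreddit],
    dnn_feature_columns=[comment_embedding],
    dnn_hidden_units=[64, 32])
```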

ml-engine vague error: “grpc epoll fd: 3”

瘦欲@ Submitted on 2019-12-06 02:09:24
I'm trying to train with gcloud ml-engine jobs submit training, and the job is getting stuck with the following output in the logs: My config.yaml: trainingInput: scaleTier: CUSTOM masterType: standard_gpu workerType: standard_gpu parameterServerType: large_model workerCount: 1 parameterServerCount: 1 Any hints about what "grpc epoll fd: 3" means and how to fix it? My input function is feeding a 16 GB TFRecord from gs://, but with batch = 4 and shuffle buffer_size = 4. Each input sample is a single-channel 99 x 161 px image: shape (15939,) - not huge. Thanks. Maybe this is a bug in the Estimator
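For what it's worth, "grpc epoll fd: 3" is just gRPC's startup trace line, not an error in itself; when a job hangs right after it, the input pipeline or cluster startup is the usual suspect. A hedged input_fn sketch for a large TFRecord on GCS (the path and feature spec are assumptions):

```python
import tensorflow as tf

def parse_fn(serialized):
    # Assumed feature spec: one flat float vector per example (99 * 161 = 15939).
    features = tf.parse_single_example(
        serialized, {'image': tf.FixedLenFeature([15939], tf.float32)})
    return features['image']

def input_fn():
    ds = tf.data.TFRecordDataset('gs://my-bucket/data.tfrecord',
                                 buffer_size=8 * 1024 * 1024)  # read-ahead bytes
    ds = ds.map(parse_fn, num_parallel_calls=4)
    return ds.shuffle(buffer_size=4).batch(4).prefetch(1)
```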