tensorflow-estimator

Tensorflow 1.10 TFRecordDataset - recovering TFRecords

Submitted by 戏子无情 on 2019-11-30 19:26:57
Notes: this question extends upon a previous question of mine. In that question I ask about the best way to store some dummy data as Example and SequenceExample, seeking to know which is better for data similar to the dummy data provided. I provide explicit formulations of both the Example and SequenceExample constructions as well as, in the answers, a programmatic way to do so. Because this is still a lot of code, I am providing a Colab (an interactive Jupyter notebook hosted by Google) file where you can try the code out yourself. All the necessary code is there and it is generously…
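The core of the question is which of the two proto formats suits sequence-like data. A minimal hedged sketch of both constructions (feature names and dummy values here are illustrative, not taken from the linked Colab):

```python
import tensorflow as tf

def _int64_feature(values):
    # Wrap a list of ints in the proto Feature type both formats share.
    return tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))

# Flat Example: the whole variable-length sequence lives in one feature.
example = tf.train.Example(features=tf.train.Features(feature={
    "seq": _int64_feature([1, 2, 3]),
    "label": _int64_feature([0]),
}))

# SequenceExample: one Feature per time step in a FeatureList,
# with static per-record data in the context.
seq_example = tf.train.SequenceExample(
    context=tf.train.Features(feature={"label": _int64_feature([0])}),
    feature_lists=tf.train.FeatureLists(feature_list={
        "seq": tf.train.FeatureList(
            feature=[_int64_feature([v]) for v in (1, 2, 3)]),
    }),
)

# Either serializes to bytes ready to be written to a TFRecord file.
record_bytes = example.SerializeToString()
seq_record_bytes = seq_example.SerializeToString()
```

The practical difference shows up at parse time: the Example is read back with `tf.io.parse_single_example` and a `VarLenFeature`, the SequenceExample with `tf.io.parse_single_sequence_example` and a `FixedLenSequenceFeature`.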

Converting Tensorflow Graph to use Estimator, get 'TypeError: data type not understood' at loss function using `sampled_softmax_loss` or `nce_loss`

Submitted by |▌冷眼眸甩不掉的悲伤 on 2019-11-30 14:57:23
I am trying to convert TensorFlow's official basic word2vec implementation to use tf.Estimator. The issue is that the loss function (sampled_softmax_loss or nce_loss) gives an error when used with TensorFlow Estimators; it works perfectly fine in the original implementation. Here is TensorFlow's official basic word2vec implementation: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/word2vec/word2vec_basic.py Here is the Google Colab notebook where I implemented this code, which is working: https://colab.research.google.com/drive/1nTX77dRBHmXx6PEF5pmYpkIVxj
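One common cause of dtype errors with these losses (not necessarily the fix from this particular thread) is labels of the wrong dtype or shape: sampled_softmax_loss expects int64 labels of shape [batch, num_true]. A hedged, standalone sketch with all sizes illustrative:

```python
import numpy as np
import tensorflow as tf

batch, dim, vocab, num_sampled = 4, 8, 50, 10

# Embedded inputs for the batch, e.g. the output of an embedding lookup.
embedded = tf.constant(np.random.randn(batch, dim), dtype=tf.float32)

# Labels must be int64 with shape [batch, num_true]; passing int32 or a
# flat [batch] vector is a frequent source of errors here.
labels = tf.constant(
    np.random.randint(0, vocab, size=(batch, 1)), dtype=tf.int64)

# Output-layer weights/biases, shaped [num_classes, dim] and [num_classes].
softmax_weights = tf.Variable(tf.random.truncated_normal([vocab, dim]))
softmax_biases = tf.Variable(tf.zeros([vocab]))

loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(
    weights=softmax_weights,
    biases=softmax_biases,
    labels=labels,
    inputs=embedded,
    num_sampled=num_sampled,
    num_classes=vocab))
```

Inside an Estimator's model_fn the same call applies, with `embedded` and `labels` coming from the features/labels arguments.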

What does google cloud ml-engine do when a Json request contains “_bytes” or “b64”?

Submitted by 血红的双手。 on 2019-11-29 06:53:41
The Google Cloud documentation (see Binary data in prediction input) states: Your encoded string must be formatted as a JSON object with a single key named b64. The following Python example encodes a buffer of raw JPEG data using the base64 library to make an instance: {"image_bytes": {"b64": base64.b64encode(jpeg_data)}} In your TensorFlow model code, you must name the aliases for your input and output tensors so that they end with '_bytes'. I would like to understand more about how this process works on the Google Cloud side. Is the ml-engine automatically decoding any content after the "b64"…
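The client-side half of this contract can be sketched with the standard library alone (the payload bytes and the image_bytes alias here are illustrative):

```python
import base64
import json

# Placeholder binary payload standing in for real JPEG bytes.
jpeg_data = b"\xff\xd8\xff\xe0fake-jpeg-bytes"

# Binary inputs are wrapped as {"b64": <base64 string>}, keyed by the
# tensor alias, which must end in "_bytes". b64encode returns bytes,
# so decode to str before JSON serialization.
instance = {"image_bytes": {"b64": base64.b64encode(jpeg_data).decode("ascii")}}
request_body = json.dumps({"instances": [instance]})

# On the service side, content under a "b64" key is base64-decoded before
# being fed to the tensor whose alias ends with "_bytes"; round-tripping
# locally recovers the original bytes.
decoded = base64.b64decode(
    json.loads(request_body)["instances"][0]["image_bytes"]["b64"])
```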

TensorFlow 1.10+ custom estimator early stopping with train_and_evaluate

Submitted by 懵懂的女人 on 2019-11-29 05:09:15
Suppose you are training a custom tf.estimator.Estimator with tf.estimator.train_and_evaluate using a validation dataset in a setup similar to that of @simlmx's: classifier = tf.estimator.Estimator(model_fn=model_fn, model_dir=model_dir, params=params) train_spec = tf.estimator.TrainSpec(input_fn=training_data_input_fn) eval_spec = tf.estimator.EvalSpec(input_fn=validation_data_input_fn) tf.estimator.train_and_evaluate(classifier, train_spec, eval_spec) Often, one uses a validation dataset to cut off training to prevent over-fitting when the loss continues to improve for the…
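The stopping criterion this question is after can be sketched framework-free; in an Estimator setup this check would live in a SessionRunHook or in the loop driving evaluations (the function name and thresholds here are illustrative):

```python
def should_stop(val_losses, patience=3, min_delta=0.0):
    """Return True once the last `patience` validation losses all failed
    to improve on the best loss seen before them by at least `min_delta`.

    val_losses is the full history of per-evaluation validation losses,
    oldest first.
    """
    if len(val_losses) <= patience:
        return False  # not enough evaluations yet to judge
    best_before = min(val_losses[:-patience])
    recent = val_losses[-patience:]
    return all(loss > best_before - min_delta for loss in recent)
```

Under this criterion, training continues as long as any of the most recent `patience` evaluations beats the prior best, which is the same patience semantics most early-stopping utilities implement.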

Early stopping with tf.estimator, how?

Submitted by 一个人想着一个人 on 2019-11-27 05:29:09
Question: I'm using tf.estimator in TensorFlow 1.4 and tf.estimator.train_and_evaluate is great, but I need early stopping. What's the preferred way of adding that? I assume there is some tf.train.SessionRunHook somewhere for this. I saw that there was an old contrib package with a ValidationMonitor that seemed to have early stopping, but it doesn't seem to be around anymore in 1.4. Or will the preferred way in the future be to rely on tf.keras (with which early stopping is really easy) instead of tf…
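For comparison, the tf.keras route the question mentions really is close to a one-liner. A minimal hedged sketch (the model, random data, and patience value are all illustrative):

```python
import numpy as np
import tensorflow as tf

# EarlyStopping watches a monitored metric and halts fit() after
# `patience` epochs without improvement.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

x = np.random.randn(64, 4).astype("float32")
y = np.random.randn(64, 1).astype("float32")

# fit() may stop well before epoch 50 once val_loss plateaus.
history = model.fit(x, y, validation_split=0.25, epochs=50,
                    callbacks=[early_stop], verbose=0)
```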