tensorflow-estimator

Using a created TensorFlow model for prediction

眉间皱痕 submitted on 2019-12-02 11:31:52
I'm looking at the source code from this TensorFlow article about how to create a wide-and-deep learning model: https://www.tensorflow.org/versions/r1.3/tutorials/wide_and_deep Here is the link to the Python source code: https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/examples/learn/wide_n_deep_tutorial.py The goal is to train a model that predicts whether someone makes more or less than $50k a year from the census data. As instructed, I'm running this command:

    python wide_n_deep_tutorial.py --model_type=wide_n_deep

The result …
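For context, the tutorial's wide-and-deep model is built with tf.estimator.DNNLinearCombinedClassifier. A minimal sketch of how such a model is assembled, with illustrative stand-in feature columns rather than the tutorial's full census set:

    import tensorflow as tf

    # Illustrative columns; the tutorial defines many more from the census data.
    age = tf.feature_column.numeric_column("age")
    education = tf.feature_column.categorical_column_with_hash_bucket(
        "education", hash_bucket_size=1000)

    estimator = tf.estimator.DNNLinearCombinedClassifier(
        model_dir="/tmp/census_model",       # illustrative path
        linear_feature_columns=[education],  # "wide" part: sparse columns
        dnn_feature_columns=[                # "deep" part: dense columns
            age,
            tf.feature_column.embedding_column(education, dimension=8),
        ],
        dnn_hidden_units=[100, 50])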

Feature Column Pre-trained Embedding

橙三吉。 submitted on 2019-12-02 07:03:36
How do I use a pre-trained embedding with tf.feature_column.embedding_column? I used a pre-trained embedding in tf.feature_column.embedding_column, but it doesn't work. The error is:

    ValueError: initializer must be callable if specified. Embedding of column_name: itemx

Here's my code:

    weight, vocab_size, emb_size = _create_pretrained_emb_from_txt(FLAGS.vocab, FLAGS.pre_emb)
    W = tf.Variable(tf.constant(0.0, shape=[vocab_size, emb_size]),
                    trainable=False, name="W")
    embedding_placeholder = tf.placeholder(tf.float32, [vocab_size, emb_size])
    embedding_init = W.assign(embedding_placeholder)
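The error message itself points at the fix: embedding_column expects its initializer argument to be a callable, not a tensor or variable. One workaround, sketched below, wraps the pre-trained matrix in a function. Here weight, vocab_size, and emb_size follow the question's names and are assumed to come from _create_pretrained_emb_from_txt as a NumPy array plus its dimensions:

    import tensorflow as tf

    # weight: the pre-trained NumPy matrix from _create_pretrained_emb_from_txt.
    def pretrained_initializer(shape=None, dtype=tf.float32, partition_info=None):
        # Ignore the requested shape and return the pre-trained matrix as-is.
        return weight

    itemx = tf.feature_column.categorical_column_with_identity(
        "itemx", num_buckets=vocab_size)
    itemx_emb = tf.feature_column.embedding_column(
        itemx,
        dimension=emb_size,
        initializer=pretrained_initializer,  # a callable, as the error demands
        trainable=False)                     # keep the pre-trained vectors frozen

tf.constant_initializer(weight) is also callable and may work here as well.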

Restoring a model trained with tf.estimator and feeding input through feed_dict

风格不统一 submitted on 2019-12-01 23:42:26
I trained a ResNet with tf.estimator, and the model was saved during the training process. The saved files consist of .data, .index, and .meta files. I'd like to load this model back and get predictions for new images. The data was fed to the model during training using tf.data.Dataset. I have closely followed the ResNet implementation given here. I would like to restore the model and feed inputs to the nodes using a feed_dict. First attempt:

    # rebuild input pipeline
    images, labels = input_fn(data_dir, batch_size=32, num_epochs=1)
    # rebuild graph
    prediction = imagenet_model_fn(images, labels, {'batch_size' …
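One common way to get a feed_dict-style interface back is to rebuild the graph on top of a placeholder instead of the tf.data pipeline and restore the variables from the Estimator's checkpoint. A rough sketch under those assumptions: imagenet_model_fn is the question's function, its call signature here is guessed from the truncated snippet, and the input shape and paths are illustrative:

    import tensorflow as tf

    # A placeholder stands in for the tf.data input pipeline at inference time.
    image_ph = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])
    prediction = imagenet_model_fn(image_ph, labels=None, params={'batch_size': 1})

    saver = tf.train.Saver()
    with tf.Session() as sess:
        # model_dir is the directory the Estimator wrote .data/.index/.meta into.
        ckpt = tf.train.latest_checkpoint("/path/to/model_dir")
        saver.restore(sess, ckpt)
        # new_images: a NumPy batch you supply.
        preds = sess.run(prediction, feed_dict={image_ph: new_images})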

TensorFlow v1.10+ load SavedModel with different device placement or manually set dynamic device placement?

南楼画角 submitted on 2019-12-01 11:53:55
So in TensorFlow's guide for using GPUs there is a part about using multiple GPUs in a "multi-tower fashion":

    ...
    for d in ['/device:GPU:2', '/device:GPU:3']:
        with tf.device(d):  # <---- manual device placement
            ...

Seeing this, one might be tempted to leverage this style for multi-GPU training in a custom Estimator, to indicate to the model that it can be distributed across multiple GPUs efficiently. To my knowledge, if manual device placement is absent, TensorFlow does not have some form of optimal device mapping (except perhaps if you have the GPU version installed and a GPU is available, …
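One relevant knob here is clear_devices: when a saved graph carries hard-coded /device:GPU:n strings, clearing them on load lets the runtime re-place ops on whatever devices the current machine actually has. A sketch of loading a SavedModel that way, with illustrative path and tag; clear_devices is forwarded to tf.train.import_meta_graph under the hood:

    import tensorflow as tf

    # allow_soft_placement falls back to an available device when a recorded
    # placement (e.g. /device:GPU:3) does not exist on this machine.
    config = tf.ConfigProto(allow_soft_placement=True)

    with tf.Session(graph=tf.Graph(), config=config) as sess:
        tf.saved_model.loader.load(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            "/path/to/saved_model",
            clear_devices=True)  # strip the baked-in device strings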

Why should I build separate graphs for training and validation in TensorFlow?

走远了吗. submitted on 2019-12-01 10:33:01
I've been using TensorFlow for a while now. At first I had stuff like this:

    def myModel(training):
        with tf.variable_scope('model', reuse=not training):
            # ... build the model ...
            return model

    training_model = myModel(True)
    validation_model = myModel(False)

mostly because I started with some MOOCs that taught me to do that. But they also didn't use TFRecords or queues, and I didn't know why I was using two separate models. I tried building only one and feeding the data with feed_dict: everything worked. Ever since, I've usually been using only one model. My inputs are always placeholders and I just input …
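For reference, the two-call pattern only makes sense together with variable sharing: the validation graph must reuse the training graph's weights rather than create fresh ones. A minimal sketch of that sharing; the layers and input batches are illustrative:

    import tensorflow as tf

    def my_model(x, training):
        # AUTO_REUSE lets the second call share the variables the first created.
        with tf.variable_scope('model', reuse=tf.AUTO_REUSE):
            h = tf.layers.dense(x, 128, activation=tf.nn.relu)
            h = tf.layers.dropout(h, rate=0.5, training=training)
            return tf.layers.dense(h, 10)

    train_logits = my_model(train_batch, training=True)   # creates the weights
    valid_logits = my_model(valid_batch, training=False)  # reuses them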

How to use tf.data's initializable iterators within a tf.estimator's input_fn?

大憨熊 submitted on 2019-12-01 03:18:56
I would like to manage my training with a tf.estimator.Estimator but am having some trouble using it alongside the tf.data API. I have something like this:

    def model_fn(features, labels, params, mode):
        # Defines the model's ops.
        # Initializes with tf.train.Scaffold.
        # Returns a tf.estimator.EstimatorSpec.

    def input_fn():
        dataset = tf.data.TextLineDataset("test.txt")
        # map, shuffle, padded_batch, etc.
        iterator = dataset.make_initializable_iterator()
        return iterator.get_next()

    estimator = tf.estimator.Estimator(model_fn)
    estimator.train(input_fn)

As I can't use a make_one_shot_iterator for my use case, …
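A commonly suggested workaround is to register the iterator's initializer in the TABLE_INITIALIZERS collection, since the Estimator's default tf.train.Scaffold runs that collection's ops when it creates the session. Sketched as a change inside the question's input_fn:

    def input_fn():
        dataset = tf.data.TextLineDataset("test.txt")
        # map, shuffle, padded_batch, etc.
        iterator = dataset.make_initializable_iterator()
        # The default Scaffold runs this collection at session creation,
        # so the iterator is initialized without a one-shot iterator.
        tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS,
                             iterator.initializer)
        return iterator.get_next()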