tensorflow-estimator

How to speed up batch preparation when using Estimators API combined with tf.data.Dataset

强颜欢笑 submitted on 2020-06-26 06:11:17
Question: I'd like to speed up my training routine, which uses the Estimator API with an input_fn written using tf.data.Dataset. My implementation takes 2 seconds to prepare a batch of data, then runs training on the GPU for 1 second, and then starts over preparing the next batch, which is really inefficient. I'm looking for a way to prepare the batches asynchronously and upload them to the GPU to speed up training, or alternatively for a way to cache datasets between invocations of input_fn (dataset.cache() doesn't …
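
A common fix for this alternating prepare-then-train pattern is to pipeline the input: parse records in parallel and prefetch so the CPU builds the next batch while the GPU trains on the current one. Below is a minimal sketch of a TF 1.x-style input_fn; the file pattern, feature spec, and batch size are hypothetical placeholders, not taken from the question.

    import tensorflow as tf

    def parse_fn(record):
        # Hypothetical schema; replace with your own feature spec.
        parsed = tf.io.parse_single_example(record, {
            "x": tf.io.FixedLenFeature([10], tf.float32),
            "y": tf.io.FixedLenFeature([], tf.int64),
        })
        return {"x": parsed["x"]}, parsed["y"]

    def input_fn():
        files = tf.data.Dataset.list_files("data/*.tfrecord")  # hypothetical path
        dataset = tf.data.TFRecordDataset(files)
        # Parse records on multiple CPU threads instead of one.
        dataset = dataset.map(parse_fn,
                              num_parallel_calls=tf.data.experimental.AUTOTUNE)
        dataset = dataset.batch(64)
        # prefetch overlaps batch preparation with the training step,
        # so preparation runs concurrently with the GPU work instead of before it.
        return dataset.prefetch(tf.data.experimental.AUTOTUNE)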

Performing inference with a BERT (TF 1.x) saved model

≡放荡痞女 submitted on 2020-05-30 07:58:45
Question: I'm stuck on one line of code and have been stalled on a project all weekend as a result. I am working on a project that uses BERT for sentence classification. I have successfully trained the model, and I can test the results using the example code from run_classifier.py. I can export the model using this example code (which has been reposted repeatedly, so I believe it's right for this model):

    def export(self):
        def serving_input_fn():
            label_ids = tf.placeholder(tf.int32, [None], name= …
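
For running inference against a TF 1.x SavedModel exported this way, one commonly suggested route is tf.contrib.predictor. The sketch below is illustrative only: the export path is hypothetical, and the feature keys and the "probabilities" output follow the usual run_classifier serving signature, which should be checked against your own export.

    import tensorflow as tf  # TF 1.x

    # Hypothetical timestamped export directory from export_saved_model().
    predict_fn = tf.contrib.predictor.from_saved_model("export/1590825525")

    # Keys assume the standard run_classifier serving_input_fn; adjust them
    # to match your signature (and pad sequences to your max_seq_length).
    result = predict_fn({
        "input_ids": [[101, 2023, 2003, 102]],
        "input_mask": [[1, 1, 1, 1]],
        "segment_ids": [[0, 0, 0, 0]],
        "label_ids": [0],
    })
    print(result["probabilities"])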

Convert tensor to numpy without a session

我的梦境 submitted on 2020-05-30 07:20:30
Question: I'm using the Estimator library of TensorFlow in Python. I want to train a student network by using a pre-trained teacher, and I'm facing the following issue:

    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": train_data},
        y=train_labels,
        batch_size=100,
        num_epochs=None,
        shuffle=True)
    student_classifier.train(
        input_fn=train_input_fn,
        steps=20,
        hooks=None)

This code returns a generator object that is passed to a student classifier. Inside the generator, we have the inputs and labels (in …
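
One way to sidestep the session question entirely in this distillation setup is to precompute the teacher's outputs with Estimator.predict, which yields plain numpy values, and pass them to the student as an extra input. A sketch reusing the question's names (train_data, train_labels, student_classifier) plus a hypothetical teacher_classifier whose predictions are assumed to expose a "logits" key:

    import numpy as np
    import tensorflow as tf

    # Estimator.predict returns an iterator of numpy dicts: no session needed.
    pred_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": train_data}, shuffle=False)
    teacher_logits = np.array(
        [p["logits"] for p in teacher_classifier.predict(input_fn=pred_input_fn)])

    # Feed the teacher's logits to the student alongside the original inputs.
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": train_data, "teacher": teacher_logits},
        y=train_labels, batch_size=100, num_epochs=None, shuffle=True)
    student_classifier.train(input_fn=train_input_fn, steps=20)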

What's the difference between a Tensorflow Keras Model and Estimator?

守給你的承諾、 submitted on 2020-05-09 17:49:46
Question: Both TensorFlow Keras models and TensorFlow Estimators are able to train neural network models and use them to predict new data. They are both high-level APIs that sit on top of the low-level core TensorFlow API. So when should I use one over the other?

Answer 1: Background. The Estimator API was added to TensorFlow in release 1.1 and provides a high-level abstraction over lower-level TensorFlow core operations. It works with an Estimator instance, which is TensorFlow's high-level representation …
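
The two APIs also interoperate: a compiled Keras model can be wrapped as an Estimator, which is handy when you want Keras's model-building ergonomics with Estimator-based training infrastructure. A minimal sketch (the architecture and shapes are illustrative):

    import tensorflow as tf

    # Build and compile a small Keras model as usual.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # Wrap it as an Estimator to reuse input_fn-style pipelines and
    # Estimator training/eval/export tooling.
    estimator = tf.keras.estimator.model_to_estimator(keras_model=model)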

Tensorflow DNNClassifier: error while training (numpy.ndarray has no attribute index)

。_饼干妹妹 submitted on 2020-04-13 03:44:49
Question: I am trying to train a DNNClassifier in TensorFlow. Here is my code:

    train_input_fn = tf.estimator.inputs.pandas_input_fn(
        x=X_train,
        y=y_train,
        batch_size=1000,
        shuffle=True)
    nn_classifier = tf.estimator.DNNClassifier(
        hidden_units=[1300, 1300, 1300],
        feature_columns=X_train,
        n_classes=200)
    nn_classifier.train(input_fn=train_input_fn, steps=2000)

Here is how y_train looks: [450 450 450 ... 327 327 327] (type: numpy.ndarray). And here is how X_train looks: [[ 9.79285 11.659035 1.279528 ... 1 …
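
The usual cause of this error is passing raw arrays where the API expects richer objects: pandas_input_fn wants a pandas DataFrame/Series (hence the missing .index attribute), and feature_columns wants feature column definitions rather than the training data itself. A sketch of the typical fix, with invented column names:

    import pandas as pd
    import tensorflow as tf

    # pandas_input_fn expects pandas objects, not numpy arrays.
    X_train_df = pd.DataFrame(
        X_train, columns=[f"f{i}" for i in range(X_train.shape[1])])
    train_input_fn = tf.estimator.inputs.pandas_input_fn(
        x=X_train_df, y=pd.Series(y_train), batch_size=1000, shuffle=True)

    # feature_columns must describe the features, not contain them.
    feature_columns = [tf.feature_column.numeric_column(c)
                       for c in X_train_df.columns]
    nn_classifier = tf.estimator.DNNClassifier(
        hidden_units=[1300, 1300, 1300],
        feature_columns=feature_columns,
        n_classes=200)
    nn_classifier.train(input_fn=train_input_fn, steps=2000)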

Run prediction from saved model in tensorflow 2.0

∥☆過路亽.° submitted on 2020-04-11 09:56:07
Question: I have a saved model (a directory with saved_model.pb and a variables folder) and want to run predictions on a pandas DataFrame. I've unsuccessfully tried a few ways to do this. Attempt 1: restore the Estimator from the saved model:

    estimator = tf.estimator.LinearClassifier(
        feature_columns=create_feature_cols(),
        model_dir=path,
        warm_start_from=path)

where path is the directory containing saved_model.pb and the variables folder. I got the error ValueError: Tensor linear/linear_model/dummy_feature1/weights is not …
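
In TF 2.x the exported directory can be loaded directly with tf.saved_model.load and called through a signature, rather than reconstructing the Estimator. A sketch under several assumptions: the path is hypothetical, the model was exported with a parsing serving_input_receiver_fn (so it expects serialized tf.train.Example protos), and the feature name is borrowed from the question's error message:

    import tensorflow as tf

    loaded = tf.saved_model.load("path/to/export/1586598967")  # hypothetical
    predict = loaded.signatures["predict"]  # "serving_default" also exists

    # Parsing exports take serialized tf.train.Example protos as input.
    example = tf.train.Example(features=tf.train.Features(feature={
        "dummy_feature1": tf.train.Feature(
            float_list=tf.train.FloatList(value=[0.5])),
    }))
    outputs = predict(examples=tf.constant([example.SerializeToString()]))
    print(outputs)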

How do I convert a TensorFlow 2.0 estimator model to TensorFlow Lite?

喜你入骨 submitted on 2020-02-02 03:06:42
Question: The code below produces a regular TensorFlow model, but when I try to convert it to TensorFlow Lite it doesn't work. I followed this documentation: https://www.tensorflow.org/tutorials/estimator/linear1 https://www.tensorflow.org/lite/guide/get_started

    export_dir = "tmp"
    serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
        tf.feature_column.make_parse_example_spec(feat_cols))
    estimator.export_saved_model(export_dir, serving_input_fn)
    # …
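
If export_saved_model succeeds, the TFLite step usually goes through tf.lite.TFLiteConverter.from_saved_model on the timestamped subdirectory the export creates. A sketch assuming the export above wrote under tmp/; the SELECT_TF_OPS fallback is a common workaround when Estimator graphs use ops outside the TFLite builtin set:

    import glob
    import tensorflow as tf

    # export_saved_model writes a timestamped subdirectory, e.g. tmp/1580613402.
    saved_model_dir = sorted(glob.glob("tmp/*"))[-1]

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    # Fall back to TF ops for anything the TFLite builtins don't cover.
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS,
    ]
    tflite_model = converter.convert()
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)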