keras

indices[201] = [0,8] is out of order. Many sparse ops require sorted indices. Use `tf.sparse.reorder` to create a correctly ordered copy

天大地大妈咪最大 submitted on 2021-01-27 05:06:43
Question: I'm building a neural network, encoding every variable, and when I go to fit the model an error is raised: indices[201] = [0,8] is out of order. Many sparse ops require sorted indices. Use `tf.sparse.reorder` to create a correctly ordered copy. [Op:SerializeManySparse] I don't know how to solve it. I can post some of the code here, and if you need more I can keep posting it.

def process_atributes(df, train, test):
    continuas = ['Trip_Duration']
    cs = MinMaxScaler()
    trainCont = cs.fit_transform(train[continuas])
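
A minimal sketch, using a toy SparseTensor rather than the asker's data, of what the error message is asking for: tf.sparse.reorder returns a copy of a sparse tensor with its indices sorted into canonical row-major order, which many sparse ops require.

import tensorflow as tf

# Indices within row 0 are out of order ([0, 8] appears before [0, 3]).
sp = tf.sparse.SparseTensor(indices=[[0, 8], [0, 3]],
                            values=[1.0, 2.0],
                            dense_shape=[1, 10])

sp_sorted = tf.sparse.reorder(sp)   # canonically ordered copy
print(sp_sorted.indices.numpy())    # [[0 3] [0 8]]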

Could Keras prefetch data like the TensorFlow Dataset API?

痞子三分冷 submitted on 2021-01-27 01:39:10
Question: In TensorFlow's Dataset API we can use dataset.prefetch(buffer_size=xxx) to preload the next batch's data while the GPU is processing the current batch, so the GPU is fully utilized. I'm going to use Keras and wonder whether Keras has a similar API that lets me make full use of the GPU instead of executing serially: read batch 0 -> process batch 0 -> read batch 1 -> process batch 1 -> ... I briefly looked through the Keras API and did not see any mention of prefetch. Answer 1: If you call fit
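
A minimal sketch, assuming tf.keras and a toy in-memory dataset, showing that a prefetching tf.data pipeline can be passed straight to model.fit, which overlaps input loading with GPU compute:

import numpy as np
import tensorflow as tf

x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 2, size=(1024,)).astype("float32")

# prefetch() lets the pipeline prepare upcoming batches while the GPU trains.
ds = (tf.data.Dataset.from_tensor_slices((x, y))
      .shuffle(1024)
      .batch(64)
      .prefetch(tf.data.AUTOTUNE))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(ds, epochs=2)   # Keras consumes the Dataset, including its prefetching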

Efficient allreduce is not supported for 2 IndexedSlices

佐手、 submitted on 2021-01-26 04:13:55
Question: I am trying to run a subclassed Keras model on multiple GPUs. The code runs as expected; however, the following "warning" crops up during execution: "Efficient allreduce is not supported for 2 IndexedSlices". What does this mean? I followed the multi-GPU tutorial in the TensorFlow 2.0 Beta guide. I am also using the Dataset API for my input pipeline. Source: https://stackoverflow.com/questions/56843876/efficient-allreduce-is-not-supported-for-2-indexedslices
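
A minimal sketch, not the asker's model, of the kind of setup that typically produces this message: under tf.distribute.MirroredStrategy, layers with sparse gradients (for example Embedding lookups, whose gradients are IndexedSlices) cannot use the efficient dense allreduce path, so the strategy falls back to a slower reduction and logs the warning.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # multi-GPU data parallelism

class ToyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Embedding gradients are IndexedSlices (sparse), which triggers the message.
        self.embed = tf.keras.layers.Embedding(10000, 64)
        self.pool = tf.keras.layers.GlobalAveragePooling1D()
        self.out = tf.keras.layers.Dense(1, activation="sigmoid")

    def call(self, x):
        return self.out(self.pool(self.embed(x)))

with strategy.scope():
    model = ToyModel()
    model.compile(optimizer="adam", loss="binary_crossentropy")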

Keras load saved model: SystemError: unknown opcode

时光毁灭记忆、已成空白 submitted on 2021-01-25 17:17:09
Question: If I load a model that was saved to a TF SavedModel file, I get the following issue (Keras 2.0): SystemError: unknown opcode. As far as I can tell, this has to do with a Lambda layer in my model. https://github.com/keras-team/keras/issues/9595 The solution stated there was to save the architecture and the weights separately: "Keep the code around and just save the weights. As you said: save the architecture as code, and the weights in an h5. This will be compatible across versions." But how do I do that? How do I save
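
A minimal sketch, assuming a hypothetical build_model() function that re-creates the asker's architecture (including its Lambda layer) in code, of the weights-only workflow suggested in the linked issue:

import tensorflow as tf

def build_model():
    # The architecture lives in code, so the Lambda body never has to be deserialized.
    inputs = tf.keras.Input(shape=(32,))
    x = tf.keras.layers.Lambda(lambda t: t * 2.0)(inputs)
    outputs = tf.keras.layers.Dense(1)(x)
    return tf.keras.Model(inputs, outputs)

# Training side: build from code, train, save only the weights.
model = build_model()
model.save_weights("model_weights.h5")

# Loading side (possibly a different Python/Keras version): rebuild the same
# architecture from code, then load the weights into it.
restored = build_model()
restored.load_weights("model_weights.h5")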

How to get Graph (or GraphDef) from a given Model?

旧街凉风 submitted on 2021-01-24 14:52:46
Question: I have a big model defined using TensorFlow 2 with Keras. The model works well in Python. Now I want to import it into a C++ project. Inside my C++ project I use the TF_GraphImportGraphDef function. It works well if I prepare the *.pb file using the following code:

with open('load_model.pb', 'wb') as f:
    f.write(tf.compat.v1.get_default_graph().as_graph_def().SerializeToString())

I've tried this code on a simple network written using TensorFlow 1 (using tf.compat.v1.* functions). It works well. Now I
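
A minimal sketch, assuming TF 2.x and a toy model standing in for the asker's, of one common way to obtain a serialized GraphDef from a Keras model: trace the model with tf.function, fold its variables into constants with convert_variables_to_constants_v2, and write the resulting graph to a .pb file that C++ code can import.

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

# Trace the model into a concrete function with a fixed input signature.
concrete = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([None, 8], tf.float32))

# Fold the variables into constants so the graph is self-contained.
frozen = convert_variables_to_constants_v2(concrete)
graph_def = frozen.graph.as_graph_def()

with open("frozen_model.pb", "wb") as f:
    f.write(graph_def.SerializeToString())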

Why it's necessary to freeze all the inner state of a Batch Normalization layer when fine-tuning

时间秒杀一切 submitted on 2021-01-24 09:38:51
Question: The following content comes from the Keras tutorial: "This behavior has been introduced in TensorFlow 2.0, in order to enable layer.trainable = False to produce the most commonly expected behavior in the convnet fine-tuning use case." Why should we freeze the layer when fine-tuning a convolutional neural network? Is it because of some mechanism in TensorFlow/Keras, or because of the batch normalization algorithm itself? I ran an experiment myself and found that if trainable is not set to False the model
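
A minimal sketch, following the transfer-learning pattern from the Keras guides (toy input size, and weights=None here so the snippet stays offline): freezing the pretrained base and calling it with training=False keeps its BatchNormalization layers running on their stored moving statistics instead of the small, differently distributed fine-tuning batches.

import tensorflow as tf

# Use weights="imagenet" for real fine-tuning; weights=None avoids a download here.
base = tf.keras.applications.MobileNetV2(include_top=False, weights=None,
                                         input_shape=(96, 96, 3))
base.trainable = False   # in TF2 this also puts BatchNorm layers in inference mode

inputs = tf.keras.Input(shape=(96, 96, 3))
x = base(inputs, training=False)            # keep BatchNorm statistics frozen
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")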

Getting different results from Keras model.evaluate and model.predict

安稳与你 submitted on 2021-01-23 05:07:23
Question: I have trained a model to predict topic categories using word2vec and an LSTM model in Keras and got about 98% accuracy during training. I saved the model, then loaded it in another file to try it on the test set. I used model.evaluate and model.predict, and the results were very different. I'm using Keras with TensorFlow as the backend. The model summary is:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================
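
A minimal sketch, using a toy model and random data in place of the asker's LSTM and test set, of checking model.evaluate against an accuracy computed by hand from model.predict: on the same preprocessed data and with matching label handling, the two numbers should agree.

import numpy as np
import tensorflow as tf

# Toy stand-ins for the real test set (3 classes, one-hot labels).
x_test = np.random.rand(200, 10).astype("float32")
y_test = tf.keras.utils.to_categorical(np.random.randint(0, 3, 200), 3)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_test, y_test, epochs=1, verbose=0)

# Accuracy as Keras reports it.
loss, acc = model.evaluate(x_test, y_test, verbose=0)

# Accuracy computed manually from predict() on the same data.
probs = model.predict(x_test, verbose=0)
manual_acc = np.mean(np.argmax(probs, axis=1) == np.argmax(y_test, axis=1))

print(acc, manual_acc)   # should match when data, preprocessing, and labels agree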
