tensorflow

Loading large data into TensorFlow 2.0 without loading it on the RAM

匆匆过客 submitted on 2021-02-08 08:30:31
Question: I have processed and saved a large dataset of video and audio files (about 8 to 9 GB of data). The data is saved as 2 NumPy arrays, one for each modality. The shapes of the files are (number_of_examples, maximum_time_length, feature_length). I want to use this data to train my neural network for a classification task. I am using the TensorFlow 2.0 beta version, and I am running all the code on Google Colab (after installing tf-2.0 beta). Each time I load the data into tf.data, the entire RAM of the…
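The excerpt is cut off, but the problem it describes (the whole array being pulled into RAM) is commonly solved by memory-mapping the saved .npy files and streaming slices. Below is a minimal NumPy-only sketch under that assumption (file name and array sizes are illustrative); the same `batches` generator could then be wrapped with `tf.data.Dataset.from_generator` to feed training.

```python
import os
import tempfile

import numpy as np

# Illustrative stand-in for the large saved array: (examples, time, features).
path = os.path.join(tempfile.mkdtemp(), "features.npy")
data = np.arange(24, dtype=np.float32).reshape(4, 3, 2)
np.save(path, data)

# mmap_mode="r" keeps the array on disk; slices are read lazily on access,
# so the full 8-9 GB file is never materialized in RAM at once.
mm = np.load(path, mmap_mode="r")

def batches(arr, batch_size):
    """Yield contiguous batches, copying only one slice at a time."""
    for i in range(0, arr.shape[0], batch_size):
        yield np.asarray(arr[i:i + batch_size])

shapes = [b.shape for b in batches(mm, 2)]
print(shapes)  # [(2, 3, 2), (2, 3, 2)]
```
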

How to implement a stacked RNNs in Tensorflow?

孤者浪人 submitted on 2021-02-08 08:21:28
Question: I want to implement an RNN using TensorFlow 1.13 on GPU. Following the official recommendation, I write the following code to get a stack of RNN cells: lstm = [tk.layers.CuDNNLSTM(128) for _ in range(2)]; cells = tk.layers.StackedRNNCells(lstm). However, I receive an error message: ValueError: ('All cells must have a state_size attribute. received cells:', [<tensorflow.python.keras.layers.cudnn_recurrent.CuDNNLSTM object at 0x13aa1c940>]). How can I correct it? Answer 1: This may be a TensorFlow bug…
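The answer is truncated, but the error itself points at the usual fix: StackedRNNCells expects cell objects (which expose state_size), not full recurrent layers such as CuDNNLSTM. A sketch of that fix, written against the current tf.keras API rather than the question's 1.13 tk alias:

```python
import tensorflow as tf

# Stack *cells* (LSTMCell has state_size), then wrap the stack in one RNN
# layer. A full layer like CuDNNLSTM cannot go into StackedRNNCells.
cells = [tf.keras.layers.LSTMCell(128) for _ in range(2)]
stacked = tf.keras.layers.StackedRNNCells(cells)
rnn = tf.keras.layers.RNN(stacked)

x = tf.random.normal((4, 10, 32))  # (batch, time, features)
y = rnn(x)
print(y.shape)  # (4, 128)
```
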

How to plot different summary metrics on the same plot with Tensorboard?

╄→гoц情女王★ submitted on 2021-02-08 08:14:24
Question: I would like to be able to plot the training loss per batch and the average validation loss for the validation set on the same plot in TensorBoard. I ran into this issue when my validation set was too large to fit into memory, so it required batching and the use of tf.metrics update ops. This question could apply to any TensorFlow metrics you want to appear on the same graph in TensorBoard. I am able to plot these two graphs separately (see here); plot the validation-loss-per-validation-batch on…
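The excerpt ends before any answer, but one standard way to overlay two curves on a single TensorBoard chart is to write scalars under the same tag into different log directories; TensorBoard then renders one line per run on the same plot. A sketch using the TF 2.x summary API (the question's TF 1.x code would use tf.summary.FileWriter instead; loss values here are made up):

```python
import os
import tempfile

import tensorflow as tf

logdir = tempfile.mkdtemp()
# Same tag ("loss"), two log directories -> two curves on one chart.
train_writer = tf.summary.create_file_writer(os.path.join(logdir, "train"))
val_writer = tf.summary.create_file_writer(os.path.join(logdir, "val"))

for step in range(3):
    with train_writer.as_default():
        tf.summary.scalar("loss", 1.0 / (step + 1), step=step)
    with val_writer.as_default():
        tf.summary.scalar("loss", 1.2 / (step + 1), step=step)

train_writer.flush()
val_writer.flush()
```
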

DLL load failed when importing TensorFlow with GPU support

你离开我真会死。 submitted on 2021-02-08 08:11:20
Question: I am trying to install TensorFlow with GPU support on Windows 10, but I get an error (shown below) when importing it. The CPU version works fine. I have: installed tensorflow-gpu through pip; updated the NVIDIA drivers for my GTX 1050 with GeForce Experience; installed CUDA 10.1 with NVIDIA's network installer; installed cuDNN 7.5.0.56, taking care to copy every file into the right CUDA folder; installed TensorRT 5.1.2.2 via the zip method and copied the relevant DLLs into CUDA again. This is the…
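The error is cut off, but "DLL load failed" on Windows very often means a CUDA version mismatch: the prebuilt tensorflow-gpu wheels of that era were linked against CUDA 10.0, so installing CUDA 10.1 leaves DLLs like cudart64_100.dll unresolvable. A small hedged sanity check (the library names below are illustrative of CUDA 10.0 / cuDNN 7 builds, not an authoritative list):

```python
import ctypes.util

# Names a CUDA 10.0 tensorflow-gpu build would try to load on Windows.
# A MISSING entry usually means the DLL is not on PATH, or the installed
# CUDA/cuDNN version does not match the one the wheel was built against.
required = ["cudart64_100", "cublas64_100", "cudnn64_7"]
status = {name: ctypes.util.find_library(name) is not None for name in required}
for name, found in status.items():
    print(f"{name}: {'found' if found else 'MISSING'}")
```
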

TensorFlow: Should Loss and Metric be identical?

こ雲淡風輕ζ submitted on 2021-02-08 07:55:39
Question: I am using binary cross-entropy as my loss function and also as my metric. However, I see different values for the loss and the metric. They are very similar, yet they are different. Why is this the case? I am using tf.keras.losses.binary_crossentropy(y_true, y_pred) for both. Loss: 0.1506, while the metric value is 0.1525. Answer 1: If you use the same function as both the loss and a metric, you will usually see different results in deep networks. This is generally just due to floating…
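The answer breaks off at "floating", presumably floating-point effects. As one illustrative (not exhaustive) contributing factor, the same cross-entropy formula evaluated at different precisions yields slightly different numbers. A NumPy sketch with made-up predictions:

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy, clipped for numerical safety."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0, 1.0])
y_pred32 = np.array([0.9, 0.1, 0.8, 0.7], dtype=np.float32)
y_pred64 = y_pred32.astype(np.float64)

b32 = bce(y_true, y_pred32)
b64 = bce(y_true, y_pred64)
# Same formula, different precision: the results agree only to a handful
# of significant digits, not exactly.
print(b32, b64)
```

In a real training loop, the gap is typically larger than this, because the logged loss and the metric are also accumulated and averaged at different points of the step.
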

How to ensure neural net performance comparability?

﹥>﹥吖頭↗ submitted on 2021-02-08 07:23:18
Question: For my thesis I am trying to evaluate the impact of different parameters on my active-learning object detector with TensorFlow (v1.14). I am therefore using the faster_rcnn_inception_v2_coco standard config from the model zoo and a fixed random.seed(1). To make sure I have a working baseline experiment, I tried to run the object detector twice with the same dataset, learning time, pooling size, and so forth. Even so, the two plotted graphs after 20 active-learning cycles are quite different…
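The excerpt stops before any answer, but a likely culprit is that random.seed(1) only fixes Python's own RNG. A hedged sketch of broader seeding (hypothetical helper; even with all seeds fixed, non-deterministic GPU kernels can still cause run-to-run variance):

```python
import os
import random

import numpy as np

def set_seeds(seed: int) -> None:
    """Seed every common RNG source, not just the random module."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    # With TensorFlow one would additionally call tf.set_random_seed(seed)
    # in 1.x, or tf.random.set_seed(seed) in 2.x.

set_seeds(1)
a = np.random.rand(3)
set_seeds(1)
b = np.random.rand(3)
print(np.array_equal(a, b))  # True
```
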

Is it possible to create multiple instances of the same CNN that take in multiple images and are concatenated into a dense layer? (keras)

岁酱吖の submitted on 2021-02-08 07:21:35
Question: Similar to this question, I'm looking to have several image input layers that go through one larger CNN (e.g. Xception minus its dense layers), and then have the outputs of the one CNN across all images be concatenated into a dense layer. Is this possible with Keras, and is it even possible to train a network from the ground up with this architecture? I'm essentially looking to train a model that takes in a larger but fixed number of images per sample (i.e. 3+ image inputs with similar visual…
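This architecture is directly expressible with the Keras functional API: calling the same sub-model on several inputs reuses one set of weights. A sketch where a tiny conv stack stands in for a large backbone such as Xception (layer sizes and the 3-input count are illustrative):

```python
import tensorflow as tf

# One shared CNN; calling it on multiple inputs reuses the same weights.
shared_cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

inputs = [tf.keras.Input(shape=(32, 32, 3)) for _ in range(3)]
features = [shared_cnn(x) for x in inputs]        # weight sharing happens here
merged = tf.keras.layers.Concatenate()(features)  # (None, 3 * 8)
out = tf.keras.layers.Dense(5, activation="softmax")(merged)
model = tf.keras.Model(inputs, out)

print(model.output_shape)  # (None, 5)
```

Training such a model from scratch works like any other Keras model; each sample is simply a list of 3 images plus one label.
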

Tensorflow dataset API - Apply windows to multiple sequences

不问归期 submitted on 2021-02-08 06:55:43
Question: I want to set up a data pipeline for sequential data. Each data point in a sequence has a fixed dimensionality, e.g. 64x64. I have multiple sequences of variable length, so my dataset can be simplified to: seq1 = np.arange(5)[:, None, None]; seq2 = np.arange(8)[:, None, None]; seq3 = np.arange(7)[:, None, None]; sequences = [seq1, seq2, seq3]. Now, I want to operate on a series of time frames within the sequences, resulting in 3-dimensional data cubes [N_frames, data_dim1, data_dim2]…
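The question is truncated, but the stated goal maps onto Dataset.window plus flat_map: window each sequence independently, then flatten the per-sequence window datasets into one stream of fixed-size cubes. A sketch under that reading (window size 3 and the tiny sequences are illustrative; output_signature requires TF ≥ 2.4):

```python
import numpy as np
import tensorflow as tf

# The question's simplified sequences: variable length, fixed 1x1 "frames".
seqs = [np.arange(n, dtype=np.float32)[:, None, None] for n in (5, 8, 7)]
window = 3

def windows(seq):
    """Turn one sequence tensor into a dataset of [window, 1, 1] cubes."""
    ds = tf.data.Dataset.from_tensor_slices(seq)
    ds = ds.window(window, shift=1, drop_remainder=True)
    return ds.flat_map(lambda w: w.batch(window))

dataset = tf.data.Dataset.from_generator(
    lambda: iter(seqs),
    output_signature=tf.TensorSpec([None, 1, 1], tf.float32),
).flat_map(windows)  # windows never cross sequence boundaries

count = sum(1 for _ in dataset)
print(count)  # (5-2) + (8-2) + (7-2) = 14
```
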